I have the following table in my database. ``` # select * FROM matches; name | prop | rank ------+------+------- carl | 1 | 4 carl | 2 | 3 carl | 3 | 9 alex | 1 | 8 alex | 2 | 5 alex | 3 | 6 alex | 3 | 8 alex | 2 | 11 anna | 3 | 8 anna | 3 | 13 anna | 2 | 14 (11 rows) ``` Each person is ranked at work by different properties/criteria called 'prop' and the performance is called 'rank'. The table contains multiple values of (name, prop) as the example shows. I want to get the best candidate given some requirements. E.g. I need a candidate that has `(prop=1 AND rank > 5)` and `(prop=3 AND rank >= 8)`. Then we must be able to sort the candidates by their rankings to get the best candidate. EDIT: Each person must fulfill ALL requirements. How can I do this in SQL?
``` select x.name, max(x.rank) from matches x join ( select name from matches where prop = 1 AND rank > 5 intersect select name from matches where prop = 3 AND rank >= 8 ) y on x.name = y.name group by x.name order by max(rank); ```
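As a quick sanity check, the `INTERSECT` approach above can be reproduced with Python's built-in `sqlite3` module (SQLite also supports `INTERSECT`; the table and column names below come straight from the question):

```python
import sqlite3

# Recreate the question's sample data in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE matches (name TEXT, prop INTEGER, rank INTEGER)")
conn.executemany(
    "INSERT INTO matches VALUES (?, ?, ?)",
    [("carl", 1, 4), ("carl", 2, 3), ("carl", 3, 9),
     ("alex", 1, 8), ("alex", 2, 5), ("alex", 3, 6),
     ("alex", 3, 8), ("alex", 2, 11), ("anna", 3, 8),
     ("anna", 3, 13), ("anna", 2, 14)],
)

# INTERSECT keeps only the names that satisfy every requirement.
rows = conn.execute("""
    SELECT x.name, MAX(x.rank)
    FROM matches x
    JOIN (SELECT name FROM matches WHERE prop = 1 AND rank > 5
          INTERSECT
          SELECT name FROM matches WHERE prop = 3 AND rank >= 8) y
      ON x.name = y.name
    GROUP BY x.name
    ORDER BY MAX(x.rank)
""").fetchall()
print(rows)  # [('alex', 11)] -- only alex meets both requirements
```

Only `alex` satisfies both requirements, with a best rank of 11.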
Filtering the data to match your criteria here is quite simple (as shown by both Amir and sternze): ``` SELECT * FROM matches WHERE (prop=1 AND rank>5) OR (prop=3 AND rank>=8) ``` The problem is how to aggregate this data so as to have just one row per candidate. I suggest you do something like this: ``` SELECT m.name, MAX(DeltaRank1) AS MaxDeltaRank1, MAX(DeltaRank3) AS MaxDeltaRank3 FROM ( SELECT name, (CASE WHEN prop=1 THEN rank-5 ELSE NULL END) AS DeltaRank1, (CASE WHEN prop=3 THEN rank-8 ELSE NULL END) AS DeltaRank3 FROM matches ) m GROUP BY m.name HAVING MaxDeltaRank1>0 AND MaxDeltaRank3>=0 ORDER BY MaxDeltaRank1+MaxDeltaRank3 DESC; ``` This will order the candidates by the sum of how much they exceeded the target rank in prop1 and prop3. You could use different logic to indicate which is best though. In the case above, this should be the result: ``` name | MaxDeltaRank1 | MaxDeltaRank3 ------+---------------+-------------- alex | 3 | 0 ``` ... because neither anna nor carl reaches both the required ranks.
Matching algorithm in SQL
[ "sql", "postgresql", "relational-division" ]
My table is defined like this: Name is a string and property too. ``` ID | Name | Property ``` An example for data in this table is this: ``` ID | Name | Property 1 Peter Newsletter 2 Paul Register 3 Peter Register 4 Shaun Newsletter 5 Steve Register ``` Now I'd like to query all people that have the property newsletter and register. As a result I should get Peter, because he has both properties. So the resulting table should be like: ``` ID | Name | Property 1 Peter Newsletter 3 Peter Register ``` So everything I try to query is which person has both properties newsletter and register.
Here is one method: ``` select t.* from table t where exists (select 1 from table t2 where t2.name = t.name and t2.property = 'NewsLetter' ) and exists (select 1 from table t2 where t2.name = t.name and t2.property = 'Register' ); ``` If you just want the list of names, perhaps with `id`s, I would do that as: ``` select t.name from table t where t.property in ('NewsLetter', 'Register') group by t.name having count(distinct property) = 2; ``` How you get the list of `id`s depends on your database, something like `listagg()` or `group_concat()` or `string_agg()`.
An alternative, pretty much on the same lines as Gordon's solution, but without using `EXISTS`: ``` select * from tablename where name in (select name from tablename where property = 'Newsletter') and name in (select name from tablename where property = 'Register') ```
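Both approaches can be checked quickly with Python's built-in `sqlite3`. The sketch below uses the `GROUP BY ... HAVING COUNT(DISTINCT ...)` variant; the table is renamed `people` here, since `table` is a reserved word:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, property TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(1, "Peter", "Newsletter"), (2, "Paul", "Register"),
                  (3, "Peter", "Register"), (4, "Shaun", "Newsletter"),
                  (5, "Steve", "Register")])

# Relational division via GROUP BY / HAVING: a name qualifies only if it
# carries both distinct property values.
names = conn.execute("""
    SELECT name FROM people
    WHERE property IN ('Newsletter', 'Register')
    GROUP BY name
    HAVING COUNT(DISTINCT property) = 2
""").fetchall()
print(names)  # [('Peter',)]
```

Note that string comparison is case-sensitive in SQLite by default, so the literals must match the stored casing (`'Newsletter'`, not `'NewsLetter'`).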
How to write a SQL-Query for the following table?
[ "sql" ]
I'm not good at SQL but I can create and understand common SQL queries. While scouring the net it seems it's hard to find a fitting approach for this query. I have a query which is ``` SELECT COUNT(`BetID`), FORMAT(SUM(`BetAmount`),0), FORMAT(SUM(`Payout`),0), ROUND((SUM(`BetAmount`) / COUNT(`BetID`)),2), ROUND((((SUM(`BetAmount`) + SUM(`Payout`)) / SUM(`Payout`)) * 100),2) FROM `betdb` ``` I would like to subtract the result of ``` FORMAT(SUM(`BetAmount`),0) ``` and ``` FORMAT(SUM(`Payout`),0) ``` Any other ideas to execute subtraction in this MySQL query?
If you want the numbers rounded *before* subtracting them (which seems to be the case when you want to subtract the formatted numbers), you'll need to round them first to the same precision as the formatting, subtract and lastly format the result; ``` SELECT COUNT(`BetID`), FORMAT(SUM(`BetAmount`),0), FORMAT(SUM(`Payout`),0), FORMAT(ROUND(SUM(`BetAmount`),0) - ROUND(SUM(`Payout`),0),0) diff, ROUND((SUM(`BetAmount`) / COUNT(`BetID`)),2), ROUND((((SUM(`BetAmount`) + SUM(`Payout`)) / SUM(`Payout`)) * 100),2) FROM `betdb` ``` [A simple SQLfiddle to test with](http://sqlfiddle.com/#!2/2e64f/1).
Use `FORMAT((SUM(BetAmount) - SUM(Payout)),0)` Try this: ``` SELECT COUNT(`BetID`), FORMAT(SUM(`BetAmount`),0), FORMAT(SUM(`Payout`),0), FORMAT((SUM(`BetAmount`) - SUM(`Payout`)),0), ROUND((SUM(`BetAmount`) / COUNT(`BetID`)),2), ROUND((((SUM(`BetAmount`) + SUM(`Payout`)) / SUM(`Payout`)) * 100),2) FROM `betdb` ```
mysql get difference from the result of 2 fields queried
[ "mysql", "sql", "select", "sum", "difference" ]
Probably a simple one but I'm having trouble getting this to work in SQL (easy with VBA). Given xxx\_123\_abcd I need to extract the 123 part. Whatever code searches for the \_ should always work from the left, because sometimes a string might be xxx\_123\_abcd\_xxx. Thanks in advance for any advice :)
Use CHARINDEX to search for the underscores, to be able to extract the part of the string you need with SUBSTRING. Something like this should work: ``` SELECT SUBSTRING(MyColumn, CHARINDEX('_', MyColumn)+1, CHARINDEX('_', MyColumn, CHARINDEX('_', MyColumn)+1)-CHARINDEX('_', MyColumn)-1) FROM MyTable ``` MyColumn = `xxx_123_abcd_xxx` produces `123` using the above code. Assumptions: The string you are looking for is enclosed by the first and second occurrence of an underscore.
you can also go with this ``` SELECT LEFT(Col1,PATINDEX('%[^0-9]%', Col1+'a')-1) from( SELECT SUBSTRING(Col1, PATINDEX('%[0-9]%', Col1), LEN(Col1)) As Result From table )x ```
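`CHARINDEX` and `PATINDEX` are SQL Server functions, but the same first-and-second-underscore idea can be sanity-checked in SQLite (via Python's `sqlite3`) using its `instr`/`substr` equivalents; table and column names here are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("xxx_123_abcd",), ("xxx_456_abcd_xxx",)])

# Take the text between the first and second underscore:
# instr() finds the first '_', and a second instr() on the remainder
# finds the next one, giving the length of the middle part.
rows = conn.execute("""
    SELECT substr(col,
                  instr(col, '_') + 1,
                  instr(substr(col, instr(col, '_') + 1), '_') - 1)
    FROM t
""").fetchall()
print(rows)  # [('123',), ('456',)]
```

The same assumption as the accepted answer applies: the target is enclosed by the first and second underscore.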
Find a string between two string values
[ "sql", "sql-server" ]
I am using this query to get some result ``` select t.accode,t.acname, sum(t.debit)as debit, sum(t.credit) as credit from transactions t inner join accounts a on a.code = t.accode where a.TypeCode = 5 and t.date between '2014-05-01' and '2014-05-15' group by t.accode,t.acname ``` I am getting this result with this query ``` accode acname debit credit 1 umer 200 300 ``` Now I want to add a previous balance column to this result. A sample query for that is ``` sum(debit)-sum(credit) as previousbalance from transaction where date < '2014-05-01' ``` and then the result will look like this ``` accode acname previousbalance debit credit 1 umer 50 200 300 ``` Can anyone please tell me how to do this? I shall be very thankful to you.
You can insert a subquery in the `SELECT` statement like this: ``` select t.accode,t.acname, sum(t.debit)as debit, sum(t.credit) as credit , (SELECT sum(t2.debit) - sum(t2.credit) FROM transactions AS t2 INNER JOIN accounts AS a2 ON t2.accode = a2.code WHERE a2.code = t.accode AND t2.date < '2014-05-01') as prevbalance from transactions t inner join accounts a on a.code = t.accode where a.TypeCode = 5 and t.date between '2014-05-01' and '2014-05-15' group by t.accode,t.acname ``` You might need another group by in that subquery, not sure about that.
You can just add the columns to your existing field as below: ``` select t.accode,t.acname, sum(t.debit)as debit, sum(t.credit) as credit, ( sum(t.debit) - sum(t.credit) ) as previousbalance from transactions t inner join accounts a on a.code = t.accode where a.TypeCode = 5 and t.date between '2014-05-01' and '2014-05-15' group by t.accode,t.acname; ``` Another approach would be inner/nested queries: ``` select tab.*, (credit-debit) as previousbalance from ( select t.accode,t.acname, sum(t.debit)as debit, sum(t.credit) as credit from transactions t inner join accounts a on a.code = t.accode where a.TypeCode = 5 and t.date between '2014-05-01' and '2014-05-15' group by t.accode,t.acname ) tab; ```
want to add one more column in my Sql server query
[ "sql", "sql-server", "sql-server-2012" ]
Sorry for the vague title (I just don't know how to describe this conundrum). Given the following schedule table for a classroom: | Classroom | CourseName | Lesson | StartTime | EndTime | | --- | --- | --- | --- | --- | | 1001 | Course 1 | Lesson 1 | 0800 | 0900 | | 1001 | Course 1 | Lesson 2 | 0900 | 1000 | | 1001 | Course 1 | Lesson 3 | 1000 | 1100 | | 1001 | Course 2 | Lesson 10 | 1100 | 1200 | | 1001 | Course 2 | Lesson 11 | 1200 | 1300 | | 1001 | Course 1 | Lesson 4 | 1300 | 1400 | | 1001 | Course 1 | Lesson 5 | 1400 | 1500 | I would like to group the table to display this: | Classroom | CourseName | StartTime | EndTime | | --- | --- | --- | --- | | 1001 | Course 1 | 0800 | 1100 | | 1001 | Course 2 | 1100 | 1300 | | 1001 | Course 1 | 1300 | 1500 | Basically we are looking at a schedule that shows which course is using what classroom during a certain timespan... My initial thought was: group by `Classroom` and `CourseName` and take `Max` and `Min` for `start\end` time, but that will not give me the time spans; it will show as if Course 1 is using the `Classroom` from 08:00 - 15:00 with no break in the middle.
The query determines each row's `EndTime` by using `NOT EXISTS` to make sure no other class or course of a different type is scheduled between a course range's `StartTime` and `EndTime` and then uses `MIN` and `GROUP BY` to find the `StartTime`. The `NOT EXISTS` part ensures that there aren't "breaks" between the `StartTime` and `EndTime` ranges by searching for any rows that have an `EndTime` between `StartTime` and `EndTime` but belong to a different `CourseName` or `CourseRoom`. ``` SELECT t0.ClassRoom, t0.CourseName, MIN(t0.StartTime), t0.EndTime FROM ( SELECT t1.ClassRoom, t1.CourseName, t1.StartTime, ( SELECT MAX(t2.EndTime) FROM tableA t2 WHERE t2.CourseName = t1.CourseName AND t2.ClassRoom = t1.ClassRoom AND NOT EXISTS (SELECT 1 FROM tableA t3 WHERE t3.EndTime < t2.EndTime AND t3.EndTime > t1.EndTime AND (t3.CourseName <> t2.CourseName OR t3.ClassRoom <> t2.ClassRoom) ) ) EndTime FROM tableA t1 ) t0 GROUP BY t0.ClassRoom, t0.CourseName, t0.EndTime ``` <http://www.sqlfiddle.com/#!6/39d4b/9>
If you're using SQLServer 2012 or better you can use `LAG` to get the previous value of a column, then `SUM() OVER (ORDER BY ...)` to create a rolling sum, in this case one that count the change of the CourseName, that can be used as the `GROUP BY` anchor ``` With A AS ( SELECT ClassRoom , CourseName , StartTime , EndTime , PrevCourse = LAG(CourseName, 1, CourseName) OVER (ORDER BY StartTime) FROM Table1 ), B AS ( SELECT ClassRoom , CourseName , StartTime , EndTime , Ranker = SUM(CASE WHEN CourseName = PrevCourse THEN 0 ELSE 1 END) OVER (ORDER BY StartTime, CourseName) FROM A ) SELECT ClassRoom , CourseName , MIN(StartTime) StartTime , MAX(EndTime) EndTime FROM B GROUP BY ClassRoom, CourseName, Ranker ORDER BY StartTime ``` `SQLFiddle demo`
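The `LAG` + running `SUM` gaps-and-islands technique is portable; here is a sketch against the sample schedule using Python's `sqlite3` (SQLite 3.25+ has window functions; the columns are renamed `start_t`/`end_t` here because `end` is a keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sched (room TEXT, course TEXT, start_t TEXT, end_t TEXT)")
conn.executemany("INSERT INTO sched VALUES (?, ?, ?, ?)",
                 [("1001", "Course 1", "0800", "0900"), ("1001", "Course 1", "0900", "1000"),
                  ("1001", "Course 1", "1000", "1100"), ("1001", "Course 2", "1100", "1200"),
                  ("1001", "Course 2", "1200", "1300"), ("1001", "Course 1", "1300", "1400"),
                  ("1001", "Course 1", "1400", "1500")])

# LAG marks where the course changes; the running SUM of those change marks
# numbers each consecutive island, which then serves as the GROUP BY anchor.
rows = conn.execute("""
    WITH a AS (
      SELECT *, LAG(course, 1, course) OVER (ORDER BY start_t) AS prev
      FROM sched),
    b AS (
      SELECT *, SUM(course <> prev) OVER (ORDER BY start_t) AS island
      FROM a)
    SELECT room, course, MIN(start_t), MAX(end_t)
    FROM b
    GROUP BY room, course, island
    ORDER BY MIN(start_t)
""").fetchall()
print(rows)
```

This reproduces the three time spans from the question, including the Course 1 split around the Course 2 block.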
Group consecutive rows of same value using time spans
[ "sql", "sql-server", "t-sql" ]
I have a sales order table that goes: ``` SALESORDERCODE ITEMCODE WAREHOUSECODE SO-605338 ITEM-003200 MAIN SO-605338 ITEM-016328 PFC ``` I also have an inventory table that goes: ``` WAREHOUSECODE ITEMCODE UNITSINSTOCK MAIN ITEM-003200 1 MAIN ITEM-016328 2 PFC ITEM-016328 3 PFC ITEM-003200 4 ``` Can anyone help me with a series of joins that will output: ``` SALESORDERCODE ITEMCODE UNITS IN MAIN UNITS IN PFC SO-605338 ITEM-003200 1 3 SO-605338 ITEM-016328 2 4 ``` I can write a `WHERE` clause that will output either MAIN or PFC, but I need the two columns side by side. Thanks for your help!
If it's limited to just those two, you can use a case statement: [SQL Fiddle](http://sqlfiddle.com/#!3/4a92c/1) ``` select SO.SalesOrderCode ,I.ITEMCODE ,sum(case when I.WAREHOUSECODE = 'MAIN' then I.UNITSINSTOCK else 0 end) as MAIN ,sum(case when I.WAREHOUSECODE = 'PFC' then I.UNITSINSTOCK else 0 end) from SalesOrder SO inner join Inventory I ON SO.ITEMCODE = I.ITEMCODE group by SO.SalesOrderCode, I.ITEMCODE ``` Otherwise, depending on your RDBMS platform, you may be able to look into pivoting.
You can get the result using 2 JOINs on your `INVENTORY` table - one join for each `WarehouseCode` that you want in a column: ``` select so.SALESORDERCODE, so.ITEMCODE, i_main.UNITSINSTOCK UnitsInMain, i_pfc.UNITSINSTOCK UnitsInPFC from SalesOrder so left join Inventory i_main on so.ItemCode = i_main.ItemCode and i_main.WAREHOUSECODE = 'Main' left join Inventory i_pfc on so.ItemCode = i_pfc.ItemCode and i_pfc.WAREHOUSECODE = 'PFC'; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/586c5/4). You'll notice that when you LEFT JOIN on the `INVENTORY` table you will include the filter for the `WarehouseCode` that you want to return. It's not exactly clear if you need a total of the units in each Warehouse, if so then this could be rewritten using an aggregate function with a CASE expression: ``` select so.SALESORDERCODE, so.ITEMCODE, sum(case when i.WAREHOUSECODE = 'Main' then i.UNITSINSTOCK end) UnitsInMain, sum(case when i.WAREHOUSECODE = 'PFC' then i.UNITSINSTOCK end) UnitsInPFC from SalesOrder so left join Inventory i on so.ItemCode = i.ItemCode group by so.SALESORDERCODE, so.ITEMCODE; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/586c5/12/0)
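Both the double-LEFT-JOIN and the conditional-aggregation pivots can be checked on the sample data with Python's built-in `sqlite3`; the shortened table and column names below are my own for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE so (code TEXT, item TEXT, wh TEXT);
  CREATE TABLE inv (wh TEXT, item TEXT, units INTEGER);
  INSERT INTO so VALUES ('SO-605338','ITEM-003200','MAIN'),
                        ('SO-605338','ITEM-016328','PFC');
  INSERT INTO inv VALUES ('MAIN','ITEM-003200',1),('MAIN','ITEM-016328',2),
                         ('PFC','ITEM-016328',3),('PFC','ITEM-003200',4);
""")

# Conditional aggregation turns the warehouse rows into side-by-side columns.
rows = conn.execute("""
    SELECT s.code, s.item,
           SUM(CASE WHEN i.wh = 'MAIN' THEN i.units ELSE 0 END) AS in_main,
           SUM(CASE WHEN i.wh = 'PFC'  THEN i.units ELSE 0 END) AS in_pfc
    FROM so s JOIN inv i ON s.item = i.item
    GROUP BY s.code, s.item
    ORDER BY s.item
""").fetchall()
print(rows)
```

Note that against the sample inventory, ITEM-003200 actually has 4 units in PFC and ITEM-016328 has 3, which differs slightly from the output sketched in the question.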
Multiple Where Clauses as Separate Columns
[ "sql" ]
I'm still new to SQL so this question I am asking may be easy for you. So I am creating a report that for every week generates the prior 14 days (or 2 weeks) of the funded contracts. I know this has to be hardcoded to a specific company. The ID for that specific company is '55'. So can someone help me with this function? My query, I know, is not yet finished; I am just stuck on how to write the date logic for this. ``` Create PROC [dbo].[spAdminFundedDateee] Begin SELECT c.program_id, d.dealer_code,b.last_name As DealerName, a.account_no, a.vin, ((e.last_name)+','+(e.first_name)) As Name, a.funded_date, a.cancel_refund_date, a.purchase_date,a.miles, a.duration,a.sale_price,a.number_of_payments, a.sales_tax, a.downpayment from tDealer d Join tContact b ON d.contact_id = b.contact_id Join tContract a On d.dealer_id = a.dealer_id Join tCompany c ON d.company_id= c.company_id Join tContact E On e.contact_id = a.contact_id Where c.program_id = 55 And a.funded_date between End exec spAdminFundedDateee '05/1/2014','05/30/2014','55' ```
In order to check if a.funded\_date is between today's date and two weeks ago, you are going to need several sql server functions. The first one is [GetDate()](http://msdn.microsoft.com/en-us/library/ms188383.aspx). This returns the current date and time as a datetime value. Now, you want to check only for the date parameter (not the time). If someone runs your stored procedure at 1pm, you don't want to eliminate all of the data from before 1pm from the day 14 days ago. You want all the data, no matter the time, beginning from 14 days ago. To solve this issue, we want to change getDate() to a date only. so, cast(getDate() as date). Today, this would return 6-18-14. Lastly, you want to check for the date two weeks ago. [dateAdd](http://www.w3schools.com/sql/func_dateadd.asp) allows you to add any amount of time that you specify to a date or a time. in this case, you want the information from 14 days ago. this is going to look like dateadd(dd, -14, cast(getDate() as date)). Since between is inclusive, all you need to do now is put it together! ``` between dateadd(dd, -14, cast(getDate() as date)) and cast(getDate() as date) ```
if a.funded\_date is a DATETIME then ``` a.funded_date between dateadd(day,-14,getdate()) and getdate() ``` if a.funded\_date is a DATE then ``` a.funded_date between cast(dateadd(day,-14,getdate()) as date) and cast(getdate() as date) ```
Calculating 2 weeks for every week SQL
[ "sql", "sql-server", "hardcode" ]
I have the table below. Using salary as a condition I want to get multiple rows. Below is the current table; call it employee. ``` empid name salary ----------------------------------- 1 A1 alex 20000 2 B2 ben 4500 3 C1 carl 14000 ``` I want to compare the salary to certain fixed values, and every time the salary is larger than the fixed value, show a record in the output. My attempted condition case is close to this: ``` incometype= case When salary<6000 then 101 When salary Between 6000 And 18000 Then 102 Else 103 End ``` Desired output would be: ``` empid name salary incometype ------------------------------------------ 1 A1 alex 20000 101 2 A1 alex 20000 102 3 A1 alex 20000 103 4 B2 ben 4500 101 5 C1 carl 14000 101 6 C1 carl 14000 102 ``` I have tried using union, but union will give me 3 rows for each record even when the value only meets the 1st condition.
Your question is unclear, because your logic implies that you should only have 3 output rows for 3 input rows. Your output however implies that you want to compare the salary to certain fixed values, and every time the salary is larger than the fixed value, show a record in output. If the former is the case, Minh's query is all you need. In the latter case, you can do something like this: ``` select e.*, m.incometype from employee e left join ( select 0 as threshold, 101 as incometype union select 5999 as threshold, 102 as incometype union select 17999 as threshold, 103 as incometype ) m on e.salary > m.threshold order by e.empid ``` If you want to add a calculated column, i.e. one with values calculated using columns in this query, you can simply add it as a column in the `select` clause, like so: ``` select e.*, m.incometype, case when <first condition> then <business logic here> .... else <handle default case> end as yourcomputedcolumn from ... ```
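The threshold-table join can be verified with Python's built-in `sqlite3`, using the same employee data as the question and a derived table mirroring the answer's `UNION`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empid TEXT, name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [("A1", "alex", 20000), ("B2", "ben", 4500), ("C1", "carl", 14000)])

# Joining against a derived table of thresholds emits one output row
# per threshold that each salary exceeds.
rows = conn.execute("""
    SELECT e.empid, e.name, e.salary, m.incometype
    FROM employee e
    JOIN (SELECT 0 AS threshold, 101 AS incometype
          UNION SELECT 5999, 102
          UNION SELECT 17999, 103) m
      ON e.salary > m.threshold
    ORDER BY e.empid, m.incometype
""").fetchall()
print(rows)
```

Alex (20000) clears all three thresholds, carl (14000) clears two, and ben (4500) clears one, matching the desired six-row output.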
This returns 3 rows and enough for your need: ``` SELECT empid, name, salary, case When salary<6000 then 101 When salary Between 6000 And 18000 Then 102 Else 103 End as incometype FROM employee; ```
Select statement with multiple rows from condition on values in single column
[ "sql", "select", "conditional-statements" ]
I have a simple table like ``` user_id | meta_key | meta_value =============================== 1 | color | green 2 | color | red 1 | size | L 2 | size | L 3 | color | blue ... ``` Now I'd like a single query where I get all colors and sizes of a certain user ``` SELECT a.user_id AS ID, a.meta_value AS color, b.meta_value AS size FROM table AS a LEFT JOIN table AS b ON a.user_id = b.user_id WHERE a.meta_key = 'color' AND b.meta_key = 'size' GROUP BY a.meta_value, b.meta_value ``` This works as long as I have a `size` for each user. If `size` is missing (ID 3 in the example) the user isn't in the result
The predicate on `b.meta_key` in the **`WHERE`** is negating the "outerness" of the LEFT JOIN operation. One option is to relocate that predicate to the **`ON`** clause, for example: ``` SELECT a.user_id AS ID, a.meta_value AS color, b.meta_value AS size FROM table AS a LEFT JOIN table AS b ON a.user_id = b.user_id AND b.meta_key = 'size' WHERE a.meta_key = 'color' GROUP BY a.meta_value, b.meta_value ``` --- **FOLLOWUP** **Q:** What if `color` is missing? **A:** You'd need a table or inline view as a rowsource that returns every value of `user_id` that you want included in the result set. For example, if you had a table named "users" with one row per `user_id`, and you want to return every `id` in that table: ``` SELECT u.id AS user_id , a.meta_value AS color , b.meta_value AS size FROM user u LEFT JOIN `table` a ON a.user_id = u.id AND a.meta_key = 'color' LEFT JOIN `table` b ON b.user_id = u.id AND b.meta_key = 'size' GROUP BY u.id , a.meta_value , b.meta_value ``` --- We also note that the `GROUP BY` in your original query does not include the `user_id` column, so any rows that have the same `meta_value` for color and size will be collapsed, and you'll get only one of the `user_id` values that have matching color and size. Note that if the combination of `user_id` and `meta_key` is UNIQUE (in table `table`), then the GROUP BY isn't necessary. --- In the normative relational model, attributes of an entity are implemented as columns. As a good example, the resultset being returned by the query looks a lot like the table we'd expect. When an Entity Attribute Value (EAV) model is implemented in a relational database, the SQL gets an order of magnitude more complicated. --- If you want to return ONLY values of user\_id that have either `color` or `size`, you could get a list of user\_id values with an inline view (query against `table`). 
This query is the same as above, but replacing the reference to the `user` table with an inline view: ``` SELECT u.id AS user_id , a.meta_value AS color , b.meta_value AS size FROM ( SELECT t.user_id AS id FROM table t WHERE t.meta_key IN ('color','size') GROUP BY t.user_id ) u LEFT JOIN `table` a ON a.user_id = u.id AND a.meta_key = 'color' LEFT JOIN `table` b ON b.user_id = u.id AND b.meta_key = 'size' GROUP BY u.id , a.meta_value , b.meta_value ```
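The key point, moving the `b.meta_key = 'size'` test from `WHERE` into `ON`, is easy to demonstrate with Python's built-in `sqlite3` (the table is renamed `meta` here, and the `GROUP BY` is omitted since (user_id, meta_key) is unique in this sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meta (user_id INTEGER, meta_key TEXT, meta_value TEXT)")
conn.executemany("INSERT INTO meta VALUES (?, ?, ?)",
                 [(1, "color", "green"), (2, "color", "red"),
                  (1, "size", "L"), (2, "size", "L"), (3, "color", "blue")])

# With the b.meta_key test in the ON clause, user 3 survives the
# LEFT JOIN and comes back with a NULL size instead of being dropped.
rows = conn.execute("""
    SELECT a.user_id, a.meta_value AS color, b.meta_value AS size
    FROM meta a
    LEFT JOIN meta b ON a.user_id = b.user_id AND b.meta_key = 'size'
    WHERE a.meta_key = 'color'
    ORDER BY a.user_id
""").fetchall()
print(rows)  # user 3 appears with size = None
```

Putting `b.meta_key = 'size'` in `WHERE` instead would filter out the NULL row and drop user 3, which is exactly the bug in the original query.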
The where clause is killing you. Let's take `b.meta_key = 'size'`. If there are no rows in your B table, meta\_key will come back null. The filter on that column removes them from the result. Try adding the `b.meta_key = 'size'` to the on clause.
Joining same table with NULL values in MySQL
[ "mysql", "sql" ]
I have the following procedure for inserting into a user table: ``` -- ================================================ -- Template generated from Template Explorer using: -- Create Procedure (New Menu).SQL -- -- Use the Specify Values for Template Parameters -- command (Ctrl-Shift-M) to fill in the parameter -- values below. -- -- This block of comments will not be included in -- the definition of the procedure. -- ================================================ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO -- ============================================= -- Author: Andy Armstrong -- Create date: -- Description: -- ============================================= CREATE PROCEDURE db_SignupAddLogin -- Add the parameters for the stored procedure here @LoginName VARCHAR(15), @LoginPassword VARCHAR(15) AS BEGIN DECLARE @GUID UNIQUEIDENTIFIER SET @GUID = NEWID(); -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here INSERT INTO tblMemberLogin ( UserID, LoginName, LoginPassword ) VALUES ( @GUID, @LoginName, @LoginPassword ) RETURN @GUID END GO ``` However, when I execute it I get the following error: ``` Msg 206, Level 16, State 2, Procedure db_SignupAddLogin, Line 34 Operand type clash: uniqueidentifier is incompatible with int ``` I cannot quite work out why, as I am not referencing an int anywhere. My schema for tblMemberLogin looks like this: UserID(PK,uniqueidentifier,notnull) LoginName(nchar(15),not null) LoginPassword(nchar(15),not null) Please help!
`RETURN` can only be used with an `int`. You can simply use a `SELECT` query to retrieve the value of variable `@GUID`. Reference: [http://technet.microsoft.com/en-us/library/ms174998(v=sql.110).aspx](http://technet.microsoft.com/en-us/library/ms174998%28v=sql.110%29.aspx)
get rid of `RETURN @GUID` and you should be good to go.
uniqueidentifier is incompatible with int SQL Server procedure
[ "sql", "sql-server", "stored-procedures", "sql-server-2012-express" ]
``` COUNTRY NUM_GUNS MAX_BOR MAX_DISP Gt.Britain 40000 45000 32000 Germany 40000 45000 42000 USA 60000 48000 46000 Japan 45000 54000 65000 ``` This is my table, and I need the column name with the highest value in each row. Example: for Gt.Britain the max value is 45000; I want to display it as max\_bor. Thank you.
Just try doing it *explicitly*, like this: ``` select case when NUM_GUNS = Greatest(NUM_GUNS, MAX_BOR, MAX_DISP) then 'NUM_GUNS' when MAX_BOR = Greatest(NUM_GUNS, MAX_BOR, MAX_DISP) then 'MAX_BOR' else 'MAX_DISP' end as MaxColumnName, Greatest(NUM_GUNS, MAX_BOR, MAX_DISP) as MaxColumnValue from MyTable ```
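`GREATEST` is Oracle/MySQL; the same pattern can be sanity-checked with Python's built-in `sqlite3`, where the multi-argument scalar `max()` plays the role of `GREATEST`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ships (country TEXT, num_guns INT, max_bor INT, max_disp INT)")
conn.executemany("INSERT INTO ships VALUES (?, ?, ?, ?)",
                 [("Gt.Britain", 40000, 45000, 32000), ("Germany", 40000, 45000, 42000),
                  ("USA", 60000, 48000, 46000), ("Japan", 45000, 54000, 65000)])

# SQLite's scalar max() with several arguments behaves like GREATEST,
# so the CASE picks whichever column equals the row maximum.
rows = conn.execute("""
    SELECT country,
           CASE WHEN num_guns = MAX(num_guns, max_bor, max_disp) THEN 'NUM_GUNS'
                WHEN max_bor  = MAX(num_guns, max_bor, max_disp) THEN 'MAX_BOR'
                ELSE 'MAX_DISP' END AS max_col
    FROM ships
""").fetchall()
print(rows)
```

Gt.Britain and Germany report `MAX_BOR`, the USA `NUM_GUNS`, and Japan `MAX_DISP`, matching the question's examples.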
You can use `CASE` Statement ``` SELECT COUNTRY, CASE WHEN NUM_GUNS > MAX_BOR AND NUM_GUNS > MAX_DISP THEN 'NUM_GUNS' ELSE NUM_GUNS END NUM_GUNS, CASE WHEN MAX_BOR > NUM_GUNS AND MAX_BOR > MAX_DISP THEN 'MAX_BOR' ELSE MAX_BOR END MAX_BOR , CASE WHEN MAX_DISP > MAX_BOR AND MAX_DISP > NUM_GUNS THEN 'MAX_DISP' ELSE MAX_DISP END MAX_DISP FROM YourTable ``` *Output:* --- ``` COUNTRY NUM_GUNS MAX_BOR MAX_DISP -------------------------------------- Gt.Britain 40000 MAX_BOR 32000 Germany 40000 MAX_BOR 42000 USA NUM_GUNS 48000 46000 Japan 45000 54000 MAX_DISP ```
I have to display the column name with highest value in a row in oracle
[ "sql", "oracle" ]
Basically I just need to get a 5-digit number that is separated by a space. The 5-digit number can be anywhere in the varchar. Example: I have a varchar column with this various data in a SQL Server 2008 table ``` travel visa 34322 LLL001 Coffee 34332 Jakarta FDR001 34312 Taxi cost cash taxi cash 34321 34556 eating dinner with customer eating dinner 34256 with customer visa cost 34221 REF773716637366 ``` The 5-digit number can be anywhere, separated by a space. What is the best way to extract this? ``` 34322 34332 34312 34556 34256 34221 ``` Thanks. A row like this should return blank ``` Visa refNbr 778738878 ``` Tried the following with no luck yet ``` SELECT pjtran.tr_comment ,substring(pjtran.tr_comment,PATINDEX('%[0-9]%',pjtran.tr_comment),5) ,Left(SubString(pjtran.tr_comment, PatIndex('%[0-9.-]%', pjtran.tr_comment), 50),PatIndex('%[^0-9.-]%', SubString(pjtran.tr_comment, PatIndex('%[0-9.-]%', pjtran.tr_comment), 50) + 'X')-1) ,len(pjtran.tr_comment)-len(replace(pjtran.tr_comment,' ','')) ``` I think I need to use a combination of counting the number of spaces in the varchar and the above, but I am not sure how to do it
How about something like this? ``` select substring(tr_comment, patindex('%[0-9][0-9][0-9][0-9][0-9] %', tr_comment), 5) as zip5 ``` If you want to consider that it might be at the end of the string: ``` select substring(tr_comment, patindex('%[0-9][0-9][0-9][0-9][0-9] %', tr_comment + ' '), 5 ) as zip5 ```
use this, this may help you ``` select SUBSTRING(tr_comment, PATINDEX('%[0-9]%', tr_comment), 5) as zip ```
How to find 5 digit number in a varchar?
[ "sql", "sql-server" ]
For some reason I need to split a query into multiple steps (very large rows and many joins). **I know it's possible to perform the query like SELECT IN (subquery) but that's not what I need to achieve.** How can I construct my query in steps where query 2 can use the result of query 1 (without using a temporary table) with an IN clause? Expected query 2 should look like this: select otherstuff where foo IN(id,id,id,id,longlistid,) query 1 => select stuf as result1 query 2 => select otherstuf where foo IN (result1) **I thought of using the @ shortcut; I don't know if it's appropriate. Here is my try:** ``` CREATE PROCEDURE `exemple`() BEGIN SELECT @query1 := `id` FROM `table` WHERE `stuf`= 'cat' UNION ALL SELECT `id` FROM `table` WHERE `dog`= 'hard' ; SELECT please FROM help WHERE me IN(@query1) ; END ``` The IN clause does not seem correct like this.. **Another try, storing the result in a declared local variable:** ``` CREATE PROCEDURE `exemple`() BEGIN DECLARE resultquery1 VARCHAR(5000); SELECT `id` FROM `table` WHERE `stuf`= 'cat' UNION ALL SELECT `id` FROM `table` WHERE `dog`= 'hard' INTO resultquery1 ; SELECT please FROM help WHERE me IN(resultquery1) ; END ``` Not enough; I got this error: Error Code: 1172. Result consisted of more than one row. Regards, Jess
Use a `Temporary Table` instead and have the query1 value inserted in that table. Then in your query2 you can directly fetch the data from the temporary table, like below (a code sample): ``` CREATE PROCEDURE `exemple`() BEGIN create temporary table temptest(col varchar(20)); insert into temptest SELECT `foo` FROM `table` WHERE `stuf`= 'cat' UNION ALL SELECT `bar` FROM `table` WHERE `dog`= 'hard' ; SELECT please FROM help WHERE me IN (select distinct col from temptest) ; DROP TABLE temptest; END ``` **EDIT:** You may not need the temporary table at all. Per your post explanation, you can include the query directly in the `IN` clause like below. Hope this helps. ``` SELECT please FROM help WHERE me IN ( SELECT `foo` FROM `table` WHERE `stuf`= 'cat' UNION ALL SELECT `bar` FROM `table` WHERE `dog`= 'hard' ) ; ``` **FINAL EDIT:** An inlist is a comma-separated value; you can't simply store it the way you are trying, because you are trying to store it in a scalar variable, which is incorrect. You may do a little trick like below and store it, but that's not guaranteed to work. ``` set @inlist := ( select group_concat(unitid) from ( SELECT unitid, 1 as 'ID' FROM teachers WHERE username= 'abcdced' UNION ALL SELECT unitid, 1 as 'ID' FROM teachers WHERE username= 'harikas' ) x group by ID); ``` Moreover, I am not sure who told you that going through a variable would be efficient. I may agree that a `TEMPORARY` table may pose some issue based on your situation, but the subquery option I gave (or the suggestion provided by Gordon in the answer below) will be much more efficient than the way you are trying to do it. At least it will not incur the cost of fetching and storing into variables (if it works at all).
I add an answer here to follow @Rahul's correct answer in the final edit, for those who need to achieve this particular case (when a subquery cannot perform as well as you'd like). Here is the final form of the procedure using the MySQL GROUP\_CONCAT and CONCAT functions: **temp is the declared local variable for storing the first result; SET SESSION group\_concat\_max\_len avoids the group\_concat length limitation** ``` CREATE PROCEDURE `exemple`() BEGIN DECLARE temp VARCHAR(10000); SET SESSION group_concat_max_len = 100000; SELECT GROUP_CONCAT(foo) FROM bar WHERE cible = x GROUP BY cible INTO temp; SET @S = CONCAT ( "SELECT * FROM anothertable WHERE whatyouwant IN(", temp," ) " ); PREPARE STMT FROM @S; EXECUTE STMT; END ```
mysql stored procedure construct with GROUP_CONCAT and IN clause
[ "mysql", "sql", "stored-procedures" ]
I have this query : ``` SELECT SUM(IF(b.correct = 1,1,0)) correct_answers, SUM(IF(b.correct != 1,1,0)) incorrect_answers, SUM(IF(a.answer IS NULL) 1,0)) as no_answers, a.ID FROM `a` join b on a.answer = b.id where a.created_at between '2014-06-10' and '2014-06-17' group by a.ID order by correct_answers desc ``` Basically I am trying to fetch the number of correct , incorrect and no answers for a quiz per user . So I am getting the correct and incorrect counts. But when I try to count the number of no answers (answer column is NULL in table a) I am getting it all zeros. Anything wrong with this query ?
Try this: ``` SELECT SUM(b.correct = 1) correct_answers, SUM(b.correct != 1) incorrect_answers, SUM(a.answer IS NULL) AS no_answers, a.ID FROM a LEFT JOIN b ON a.answer = b.id WHERE a.created_at BETWEEN '2014-06-10' AND '2014-06-17' GROUP BY a.ID ORDER BY SUM(b.correct = 1) DESC; ```
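MySQL treats comparisons as 0/1, which is why `SUM(condition)` counts rows; SQLite behaves the same way, so the fix (including the `LEFT JOIN` that keeps unanswered rows) can be sketched with Python's `sqlite3` on a tiny made-up dataset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE a (ID INTEGER, answer INTEGER, created_at TEXT);
  CREATE TABLE b (id INTEGER, correct INTEGER);
  INSERT INTO b VALUES (1, 1), (2, 0);
  -- user 7: one correct, one incorrect, one unanswered
  INSERT INTO a VALUES (7, 1, '2014-06-11'), (7, 2, '2014-06-12'),
                       (7, NULL, '2014-06-13');
""")

# Comparisons evaluate to 0/1 (NULL for NULL operands, which SUM ignores),
# so SUM(condition) counts matches; LEFT JOIN keeps the unanswered rows.
rows = conn.execute("""
    SELECT SUM(b.correct = 1), SUM(b.correct != 1), SUM(a.answer IS NULL), a.ID
    FROM a LEFT JOIN b ON a.answer = b.id
    WHERE a.created_at BETWEEN '2014-06-10' AND '2014-06-17'
    GROUP BY a.ID
""").fetchall()
print(rows)  # [(1, 1, 1, 7)]
```

With an inner join, the `a.answer IS NULL` row would be dropped before it could be counted, which is why the original query always returned zero for `no_answers`.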
You missed a ',' int the 3rd Line Change : ``` SUM(IF(a.answer IS NULL) 1,0)) as no_answers, ``` To: ``` SUM(IF(a.answer IS NULL, 1,0)) as no_answers, ```
Count of NULLS in IF condition
[ "mysql", "sql", "count", "group-by", "sum" ]
Seems like a simple question... but I am stumped. ``` declare @Total_User int set @Total_User = 8 declare @Total int set @Total = 12 declare @Number int set @Number = (@Total_User / @Total) * 100 select @Number as 'Standard' ``` I am expecting 66, but my select comes out zero ?? What am I doing wrong?
The result of `(@Total_User / @Total)` is zero, as it is doing integer division. You can multiply by 100 first, which gives the result that you expected: ``` set @Number = 100 * @Total_User / @Total ``` To get a rounded value rather than truncated, you would use floating point values and the `round` function: ``` set @Number = round(100.0 * @Total_User / @Total, 0) ``` This will give the result `67` rather than `66`, as that is closer to the actual result `66.6666666666`
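The same integer-division trap is easy to reproduce outside SQL; Python's floor division behaves like T-SQL's `/` on two `int`s:

```python
# 8 // 12 truncates to 0 before the multiply, just like the T-SQL query,
# so multiply first (or switch to float division and round).
total_user, total = 8, 12

truncated = (total_user // total) * 100       # integer division first: 0
multiplied_first = 100 * total_user // total  # multiply first: 66
rounded = round(100 * total_user / total)     # float division + round: 67

print(truncated, multiplied_first, rounded)
```

This mirrors the answer: multiplying first truncates to 66, while rounding the float result gives 67.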
Try: set @Number = (@Total_User*100 / @Total)
How do I get an int
[ "", "sql", "math", "" ]
I try to make a sql query to get all my followers except me. I have the following tables: users ``` | id_user | username | | 1 | xaxa | | 2 | toto | | 3 | bubu | | 4 | yiyi | | 5 | pepe | | 6 | sisi | | 7 | fifi | | 8 | mama | | 9 | juju | | 10 | cece | => me ``` friends ``` | id_friend | id_user | id_user_to | | 1 | 10 | 1 | | 2 | 2 | 10 | | 3 | 2 | 1 | | 4 | 6 | 3 | | 5 | 2 | 9 | | 6 | 6 | 7 | | 7 | 5 | 3 | | 8 | 10 | 5 | | 9 | 9 | 8 | | 10 | 8 | 10 | ``` **I want to have this:** my friends ``` | id_user | name | | 1 | xaxa | | 2 | toto | | 5 | pepe | | 8 | mama | ``` actually I have `id_user 10` (me) in the result with this query => ``` SELECT id_user, name FROM `users` WHERE id_user NOT IN ( SELECT `id_user` FROM `friends` WHERE ( `id_user` = 10 OR `id_user_to` = 10 )) OR id_user NOT IN ( SELECT `id_user_to` FROM `friends` WHERE ( `id_user` = 10 OR `id_user_to` = 10 )) GROUP BY `id_user` ```
The solution is a union of two simple joins: ``` SELECT u.id_user, u.username name FROM friends f JOIN users u ON u.id_user = f.id_user_to WHERE f.id_user = 10 UNION SELECT u.id_user, u.username FROM friends f JOIN users u ON u.id_user = f.id_user WHERE f.id_user_to = 10 ``` Note that the keyword `UNION` removes duplicates from the result, so no need to code anything special to handle the case when there's a friend link in both directions between two users (FYI, `UNION ALL` retains duplicates). Because at most one index is used per table per query, by splitting the query into two parts, if indexes are created on the user id columns of the friends table, this query will scale well (to millions of users) There was no need to code anything to handle an "except me" condition, unless you have a row in the friends table for you being your own friend, which you don't. SqlFiddle: <http://sqlfiddle.com/#!9/7cbb3/4>
This should do: ``` SELECT u.id_user, u.username name FROM `friends` f JOIN `users` u ON u.id_user = f.id_user_to and f.id_user = 10 or u.id_user = f.id_user and f.id_user_to = 10 ORDER BY u.id_user ``` SqlFiddle: <http://sqlfiddle.com/#!9/7cbb3/1>
SQL query to get my followers except me
[ "", "mysql", "sql", "" ]
I have 1 id column and 3 int columns. Data is inserted in this way ``` insert into table (col1, col2) values (1,1) insert into table (col2, col3) values (1,1) insert into table (col1, col3) values (1,1) insert into table (col1) values (1) ``` then, row 1, col 3 is null row 2, col 1 is null, row 3, col 2 is null. row 4, col 2 and col 3 are null. I want to replace the null values with 0. How can I do that? Note: I won't know which columns are null, so i can't use `ISNULL(colname, 0)`
If you want to update the table so the `NULL`s become `0`, use `coalesce()`: ``` update table t set col1 = coalesce(col1, 0), col2 = coalesce(col2, 0), col3 = coalesce(col3, 0) where col1 is null or col2 is null or col3 is null; ``` `coalesce()` is the ANSI-standard function for replacing a `NULL` value with something else.
If you want to insert `0` to columns by default, you can alter the table and set `DEFAULT CONSTRAINT` to needed column ``` ALTER TABLE myTable ADD DEFAULT(0) FOR Col1 ``` *No **UPDATE** statement required* Anyhow you need to update existing records using UPDATE statement.
select multiple columns which are null for a row
[ "", "sql", "sql-server", "" ]
I have a basic order detail table that contains: `docDate`, `customerCode`, `itemCode`, `Price`, and `Quantity`. Looked through a number of similar questions, but they didn't seem to take into account everything I need, and I failed to re-purpose the other solutions. The end goal: Only the most recent record of each individual item ever bought by a customer. (So one record per item) Prices and quantities change record to record, and I just want the most recent. This provides me with the most recent entry of each Item (code and date) ``` SELECT itemCode, MAX(docDate) AS docDate FROM RDR1 WHERE customerCode= 'TEST001' GROUP BY ItemCode ``` Now I need to be able to pull the other pieces of information from those most recent rows, like price and quantity.
I think you want something like this: ``` select * -- or whatever columns you actually want from RDR1 inner join (SELECT itemCode, MAX(docDate) AS docDate FROM RDR1 WHERE customerCode= 'TEST001' GROUP BY ItemCode) MD on MD.docDate = RDR1.docDate and md.ItemCode = RDR1.ItemCode ```
``` SELECT distinct (ItemCode), docDate, Price, Quantity FROM RDR1 WHERE customerCode= 'TEST001' ORDER BY docDate DESC ```
Need to select most recent record, group by customer and item code
[ "", "sql", "sql-server", "" ]
In XLS I have two columns A and B. ``` A,B 1,1 2,3 2,5 1,6 5,11 2,13 ``` A column have value, and B column is calculated with formula, (A + (previous row B value)) How can i do this calculation on MYSQL? I'm trying to join same table twice and i can get previous rows A column next to B. I can sum them, but how can I sum them with this formula? XLS formula looks like this: ``` H20 = H19+G20 ``` This is my SQL created from suggestions. ``` SELECT date, time, sum, @b := sum+@b as 'AccSum', count FROM (SELECT t.date, t.time, t.sum, t.count FROM TMP_DATA_CALC t ORDER BY t.epoch) as tb CROSS JOIN (SELECT @b := 0) AS var ; ```
``` SELECT A, @b := A+@b AS B FROM (SELECT A FROM YourTable ORDER BY id) AS t CROSS JOIN (SELECT @b := 0) AS var ``` The user variable `@b` holds the value of `B` from the previous row, allowing you to add the current row's `A` to it. [DEMO](http://www.sqlfiddle.com/#!2/f66f8a/1)
<http://sqlfiddle.com/#!2/74488/2/1> shows how to select the data. ``` SET @runtot:=0; Select a,b, @runtot:=@runtot+a from b ``` However there's an underlying problem I can't figure out. Since you don't have a defined order, the SQL could do this ordering in any way, so you may not get the desired results. Without a defined order your results may be unpredictable. runtot = running total.
Mysql how to sum column with previous column sum
[ "", "mysql", "sql", "math", "xls", "" ]
I have the following tables. One has the `TimeRegister` for each Employee (each row equals an IN and an OUT in `asc` sequence): ``` Employee TimeRegister 15 2014-04-01 11:51:43.000 15 2014-04-01 14:03:52.000 15 2014-04-01 14:17:01.000 15 2014-04-01 16:01:12.000 15 2014-04-03 09:48:33.000 15 2014-04-03 12:13:43.000 ``` The other table has all dates: ``` Date 2014-04-01 00:00:00.000 2014-04-02 00:00:00.000 2014-04-03 00:00:00.000 2014-04-04 00:00:00.000 ``` As you can notice in the first table, there's no record for employee 15 for 2014-04-02. But I want that date with a `NULL` TimeRegister so it appears like this: ``` Employee TimeRegister 15 2014-04-01 11:51:43.000 15 2014-04-01 14:03:52.000 15 2014-04-01 14:17:01.000 15 2014-04-01 16:01:12.000 15 NULL 15 2014-04-03 09:48:33.000 15 2014-04-03 12:13:43.000 ``` I don't want to `INSERT` into the table itself but rather get this with a `VIEW`. All the help will be greatly appreciated. Thanks in advance!
I've used a recursive cte to generate all dates for a range, but if as you say you have another table with all dates (or all relevant / working dates), use that instead of the `cteDateGen`. Similarly, I've scraped the unique employees from the same table as `TimeRegister` - again you may have another table of Employee. Once you have a list of all dates and employees, you can Left Outer join to the table with `TimeRegister`s to ensure at least one row per Date / Employee combination, even if there is no such row in the table: ``` With cteDateGen AS ( SELECT 0 as Offset, CAST(DATEADD(dd, 0, '2014-04-01') AS DATE) AS WorkDate UNION ALL SELECT Offset + 1, CAST(DATEADD(dd, Offset, '2014-04-01') AS DATE) FROM cteDateGen WHERE Offset < 1000 ), cteAllEmployees AS ( SELECT DISTINCT(Employee) AS Employee FROM TimeRegister ) SELECT d.WorkDate, e.Employee, t.TimeRegister FROM cteDateGen d CROSS JOIN cteAllEmployees e LEFT JOIN TimeRegister t ON e.Employee = t.Employee AND CAST(t.TimeRegister AS DATE) = d.WorkDate WHERE d.WorkDate BETWEEN '2014-04-01' AND '2014-04-10' OPTION(MAXRECURSION 0); ``` I'm assuming the same table structure as in the fiddle [here](https://stackoverflow.com/a/24304613/314291)
This isn't perfect (it gives the null in the employee rather than the date, and the comparison of just the year/month/day is crude!) ``` select Employee, coalesce(TimeRegister, Date) from Date left outer join EmpDate on left(cast(Date as nvarchar), 10) = left(cast(TimeRegister as nvarchar), 10) ``` i.e. gives: ``` EMPLOYEE COLUMN_1 15 April, 01 2014 11:51:43+0000 15 April, 01 2014 14:03:52+0000 15 April, 01 2014 14:17:01+0000 15 April, 01 2014 16:01:12+0000 (null) April, 02 2014 00:00:00+0000 15 April, 03 2014 09:48:33+0000 15 April, 03 2014 12:13:43+0000 (null) April, 04 2014 00:00:00+0000 ``` SQL Fiddle Link: <http://sqlfiddle.com/#!3/482b1/7/0>
SQL - Create row for filling in NULL into missing dates but repeating ID Column Value
[ "", "sql", "sql-server", "date", "row", "fill", "" ]
I have an array of ids, I have to select from table for each value of array , i can get it by one by one in `for loop`, ``` SELECT point, privacy FROM `tableName` WHERE id='1403176452487620892'and status=1 ``` but the problem is that array size is 100, i need a single query not 100.
Why can't you use: ``` SELECT point, privacy FROM `tableName` WHERE status=1 and id in(?,?,?...) ``` Yes, it's ridiculously long, but if one query is what you need...
You can use like this : ``` $ids = join(',',$ids); $sql = "SELECT * FROM tableName WHERE id IN ($ids)"; ```
Select Query by array values
[ "", "mysql", "sql", "arrays", "" ]
I've been having trouble understanding this example, I have the answer but I'm having trouble wrapping my head around how this works. How are the joins working exactly? Examine the structures of the PLAYER and TEAM tables: ``` PLAYER ------------- PLAYER_ID NUMBER(9) PK LAST_NAME VARCHAR2(25) FIRST_NAME VARCHAR2(25) TEAM_ID NUMBER MANAGER_ID NUMBER(9) TEAM ---------- TEAM_ID NUMBER PK TEAM_NAME VARCHAR2(30) ``` For this example, team managers are also players, and the `MANAGER_ID` column references the `PLAYER_ID` column. For players who are managers, `MANAGER_ID` is `NULL`. Which `SELECT` statement will provide a list of all players, including the player's name, the team name, and the player's manager's name? ANSWER: ``` SELECT p.last_name, p.first_name, m.last_name, m.first_name, t.team_name FROM player p LEFT OUTER JOIN player m ON (p.manager_id = m.player_id) LEFT OUTER JOIN team t ON (p.team_id = t.team_id); ```
So the first `LEFT OUTER JOIN` takes the player table, then adds on the info for each player's manager. Each player has an `ID` for its manager, who is also a player with an `ID`. If Mr. A, with `player_id 9`, is a manager for Ted, with `player_id 5`, then Ted's `manager_id` will be `9`. The first join takes Ted's `manager_id`, `9`, and matches it to the `player_id` of his manager, Mr. A, so that the manager's info is now on the table as well, and `m.last_name` and `m.first_name` will show Mr. A's name. The second join takes the `team_id` and simply matches it to the table of teams, appending the `team_name` to the player's information in the table. It's tough to explain without sample data and diagrams. Sorry.
In your example the players that are also managers have a null value in the manager_id column. Take the first part of the SQL statement, which is: ``` SELECT p.last_name, p.first_name, m.last_name, m.first_name, t.team_name FROM player p LEFT OUTER JOIN player m ON (p.manager_id = m.player_id) ``` Since you are performing a LEFT JOIN (instead of an INNER JOIN), your manager records will not be filtered out (even though their NULL value in the manager_id column will not successfully join to any player_id in the player table), and m.last_name and m.first_name will be NULL for those manager players. If you were to change that query to an INNER JOIN instead, your players that are also managers would not be returned at all by the query, because they have a NULL value for manager_id and there are no player records with a player_id of NULL. ``` SELECT p.last_name, p.first_name, m.last_name, m.first_name, t.team_name FROM player p INNER JOIN player m ON (p.manager_id = m.player_id) ``` I hope this helps!
SQL JOIN and LEFT OUTER JOIN
[ "", "sql", "oracle", "join", "outer-join", "" ]
In SQL Server 2012 I have a table called `Deal_Country`. This table contains a field called `deal_id`, which is not a primary key. The same `deal_id` value can be in the table multiple times. What I would like to know is the number that the most common `deal_id` is in the table. Please see example below. ``` deal_id -------- ABC12 DFG34 DFG34 KNG10 ABC12 PPL11 ABC12 ``` The answer I would like returned is 3 as the most frequent `deal_id` (ABC12) is shown 3 times. I have tried the query below however I get ``` "cannot perform an aggregate function on an expression containing an aggregate or a subquery." select max(count(distinct deal_id)) from DEAL_COUNTRY ```
Use `order by` and `top`: ``` select top 1 deal_id, count(*) as numtimes from DEAL_COUNTRY group by deal_id order by count(*) desc ``` In MySQL, you would use `limit` instead of `top 1`.
**MySQL Solution**: `count` first, sort them from max to min and pick the top 1 record. ``` select deal_id, count(deal_id) deal_count from deal_country order by 2 desc limit 1 ```
counting times the most frequent record appears in a table
[ "", "sql", "sql-server", "" ]
I have following 3 tables(Organization, OrganizationAddress, Address).Each organizations has more than one address and one of them is primary.if an organization has primary address,I need to select the primary address else that column can be null or blank. How can i write a sql sub query in mysql?. Please help me, Thank you, ![enter image description here](https://i.stack.imgur.com/hIGXY.png)
You just need a `LEFT JOIN` ``` select a.orgid, b.orgaddressid, c.address from organization a left join organization_address b on a.orgid = b.orgid and b.isprimaryaddress = 'YES' left join address c on b.orgaddressid = c.addressid where a.orgid = 1; ```
Maybe something like this. I think you need to use `LEFT JOIN` for both tables: ``` select organization.orgid, organization_address.orgaddressid, address.address from organization LEFT JOIN organization_address ON organization_address.orgid = organization.orgid AND organization_address.isprimaryaddress = 'YES' LEFT JOIN address ON organization_address.orgaddressid = address.addressid; ```
Write Sql sub Query
[ "", "mysql", "sql", "sql-server", "subquery", "" ]
I have a select statement that gives me the results I need for my update but I have no idea how to incorporate it into the update. Below is the select statement and the results. ``` select top 20 percent fpartno ,(fytdiss * fmatlcost) as 'total' from inmast where fpartno not like 'CRV%' and fcstscode = 'A' group by fpartno, fytdiss,fmatlcost,fabccode order by total desc fpartno total --------------------------------------------------- 1062-20-0244 172821.4800000000 B50602 91600.7205800000 BM6031PQ 82978.3200000000 LY2F-DC12 74740.9500000000 BM6033SQ 51640.4200000000 DTM06-6S-E007 49810.4700000000 ``` My update looks like this ``` update inmast set fabccode = 'A' ``` I'm guessing my select would some how go into the where clause but I'm not sure how.
Updating `top 20 percent` is tricky... because you can't put an `order by` in an update. I would do something like this: ``` select * -- update t set fabccode='a' from inmast t where fpartno in ( select top 20 percent fpartno from inmast t where fpartno not like 'CRV%' and fcstscode = 'A' group by fpartno, fytdiss,fmatlcost,fabccode order by (fytdiss * fmatlcost) desc) ``` Run this as a `select` and make sure it works for you as expected. If yes, then you can just remove the `select` line and uncomment the `update` line. Alternate solution: ``` select * -- update t set fabccode='a' from inmast t join (select top 20 percent fpartno from inmast t where fpartno not like 'CRV%' and fcstscode = 'A' group by fpartno, fytdiss,fmatlcost,fabccode order by (fytdiss * fmatlcost) desc) x on t.fpartno = x.fpartno ```
``` update inmast set fabccode = 'A' where fpartno in ( select top 20 percent fpartno from inmast where fpartno not like 'CRV%' and fcstscode = 'A' group by fpartno, fytdiss,fmatlcost,fabccode order by (fytdiss * fmatlcost) desc) ```
Update based on a select statement SQL Server 2005
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I have the following tables: ``` Company( #id_company, ... ) addresses( address, *id_company*, *id_city* ) cities( #id_city, name_city, *id_county* ) countries( #id_country, name_country ) ``` What I want to know is: is this a good design? (A company can have many addresses.) The important thing, as you may notice, is that I didn't add a `PK` for the `addresses` table because **every address of a company will be different**, so am I right? And I will never have a `where` in a `select` that specifies an address.
First of all we should distinguish natural keys and technical keys. As to natural keys: 1. A country is uniquely identified by its name. 2. A city can be uniquely identified by its country and a unique name. For instance there are two Frankfurts in Germany. To make sure what we are talking about, we either use the distinct names Frankfurt/Main and Frankfurt/Oder or use the city name with its zip code range. 3. A company usually gets identified by its full name. Or use some tax id, code, whatever. 4. To uniquely identify a company address we would take the company plus country, city and address in the city (street name and number usually). You've decided to use technical keys. That's okay. But you should still make sure that names are unique. You don't want France and France in your table, it must be there just once. You don't want Frankfurt and Frankfurt without any distinction in your city table for Germany either. And you don't want to have the same address entered twice for one company. * company( #id_company, name_company, ... ) plus a unique constraint on name_company or whatever makes a company unique * countries( #id_country, name_country ) plus a unique constraint on name_country * cities( #id_city, name_city, *id_country* ) plus a unique constraint on name_city, id_country * addresses( address, *id_company*, *id_city* ) with a unique constraint on all three columns From what you say, it looks like you want the addresses only for lookup. You don't want to use them in any other table, not now and not in the future. Well, then you are done. As you need a unique constraint on all three columns, you could just as well declare this as your primary key, but you don't have to. Keep in mind that to reference a company address in any other future table, you would have to store address + id_company + id_city in that table. At that point you would certainly like to have an address id instead. But you can add that when needed. For now you can do without.
This design is fine. A (relational) table *always* has a (candidate) key. (One of which you can choose as the primary key, but candidate keys, aka keys, are what matter.) Because if no subset of columns smaller than set of all columns is unique then the key is the set of all columns. Since every table has one, in SQL you should declare it. Eg in SQL if you want to declare a FOREIGN KEY constraint to the key of this table then you have to declare that column set a key via PRIMARY KEY, KEY or UNIQUE. Also, telling the DBMS what you know helps optimize your use of it. What matters to determining keys are subsets of columns that are unique that don't have smaller subsets that are unique. Those are the keys. ``` A company, address or city is not unique since you are going to have multiple of each. A (city,address) is not unique normally. A (city,company) is not unique normally. A (company,address) is not unique normally. So (company,address,city) is the (only) (candidate) key. ``` Note that if there were only ever one city, then (company,address) would be the key. And if there were only ever one company, then (address,city) would be the key. So your given reason that the "because every address[+city?] of a company [?] will be different" isn't sound unless we're supposed to assume other things.
PK for table that have not unique data
[ "", "sql", "database", "database-design", "database-performance", "" ]
I read the most relevant Q/A under the <https://stackoverflow.com/questions/tagged/greatest-n-per-group> tag but didn't find a solution for my task, as the details differ. I have a table with *amount*/*currency*/*date* and the task of converting *amount* to the *amount in national equivalent* on that *date*. One problem is that the currency exchange rate table has holes, so a direct join on *amount*/*currency*/*date* gives **null**. As a rule of thumb in such cases, business rules dictate that you can take the last available rate for the given *amount*/*currency*. My dumb solution: ``` select p.AMOUNT * cr.RATE from PAYMENT p join CURRENCY_RATE cr on cr.CURRENCY = p.CURRENCY and cr.DATE = (select max(subcr.DATE) from CURRENCY_RATE subcr where subcr.CURRENCY = cr.CURRENCY and subcr.DATE <= p.DATE) ``` gives a very bad execution plan (this is a simplified query; the original has a lot of **full table scans** and **hash joins** due to additional business logic). The query works on a large set of `PAYMENT` rows; the table is accessed by **full scan**. Many `CURRENCY`/`DATE` pairs are queried from `CURRENCY_RATE`. I'm not really sure that using an index on the pair, as the **first in an index range scan**, will be a good strategy to retrieve the pairs... I use Oracle and don't understand whether a windowed function is applicable in a situation where `max(...) over (partition by ...)` must also carry an additional condition... **UPDATE** I plan to use the query for data migration and import tasks, so I really have no filter on `PAYMENT`. I'm starting to think that I can import with `p.AMOUNT * cr.RATE` even if it is **null** and then update the incomplete records with the above query. This looks promising, as holes occur very rarely in `CURRENCY_RATE`. Another solution I see is to use a materialized view or another table which has no holes.
latest currencies by "max" with "group by": ``` select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and subcr.CURRENCY = cr.CURRENCY group by subcr.CURRENCY, subcr.COUNTRY); -- too long select count (*) from ( select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and subcr.CURRENCY = cr.CURRENCY group by subcr.CURRENCY, subcr.COUNTRY)); ``` latest currencies by "not exist": ``` select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where not exists (select 1 from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and subcr.CURRENCY = cr.CURRENCY and subcr.dt > cr.dt); -- too long.... select count (*) from ( select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where not exists (select 1 from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and cr.CURRENCY = subcr.CURRENCY and subcr.dt > cr.dt)); ``` latest currencies by "join" and "is null": ``` -- Too long... select cr1.* from CURRENCY_RATE cr1 left join CURRENCY_RATE cr2 on (cr1.COUNTRY = cr2.COUNTRY and cr1.CURRENCY = cr2.CURRENCY and cr2.DT > cr1.DT) where cr2.DT is null; ``` latest currencies by "row_number() over (partition by ... order by ...)": ``` with maxcr as ( select cr.COUNTRY, cr.CURRENCY, cr.RATE, row_number() over (partition by cr.COUNTRY, cr.CURRENCY order by cr.DT desc) as rown from CURRENCY_RATE cr ) select * from maxcr where maxcr.rown = 1; select maxcr.COUNTRY, maxcr.CURRENCY, maxcr.RATE from ( select cr.COUNTRY, cr.CURRENCY, cr.RATE, row_number() over (partition by cr.COUNTRY, cr.CURRENCY order by cr.DT desc) as rown from CURRENCY_RATE cr) maxcr where maxcr.rown = 1; -- 2.5 sec select count(*) from ( select maxcr.COUNTRY, maxcr.CURRENCY, maxcr.RATE from ( select cr.COUNTRY, cr.CURRENCY, cr.RATE, row_number() over (partition by cr.COUNTRY, cr.CURRENCY order by cr.DT desc) as rown from CURRENCY_RATE cr) maxcr where maxcr.rown = 1); ``` latest currencies by "max" and "in": ``` select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where (cr.COUNTRY, cr.CURRENCY, cr.dt) in (select subcr.COUNTRY, subcr.CURRENCY, max(subcr.DT) from CURRENCY_RATE subcr group by subcr.COUNTRY, subcr.CURRENCY); -- .250 sec select count(*) from ( select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where (cr.COUNTRY, cr.CURRENCY, cr.dt) in (select subcr.COUNTRY, subcr.CURRENCY, max(subcr.DT) from CURRENCY_RATE subcr group by subcr.COUNTRY, subcr.CURRENCY)); -- 2.3 sec update DATA_2 inc set inc.MONEY_CNV = inc.MONEY_V * ( select cr1.RATE from ( select comp.COMPANY COMPANY, cr.CURRENCY, cr.RATE from COMPANY comp join CURRENCY_RATE cr on (cr.COUNTRY = comp.COUNTRY) where (cr.COUNTRY, cr.CURRENCY, cr.dt) in (select subcr.COUNTRY, subcr.CURRENCY, max(subcr.DT) from CURRENCY_RATE subcr group by subcr.COUNTRY, subcr.CURRENCY)) cr1 where cr1.COMPANY = inc.COMPANY and cr1.CURRENCY = inc.CODE_V) where inc.INC_DATE > DATE '2014-01-01'; ``` latest currencies by "max" and "=": ``` select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and subcr.CURRENCY = cr.CURRENCY); -- .250 sec select count (*) from ( select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and subcr.CURRENCY = cr.CURRENCY)); with cr1 as (select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and cr.CURRENCY = subcr.CURRENCY) ) select comp.COMPANY, cr1.CURRENCY, cr1.RATE from cr1 join COMPANY comp on cr1.COUNTRY = comp.COUNTRY; -- .250 sec with cr1 as (select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = cr.COUNTRY and cr.CURRENCY = subcr.CURRENCY) ) select count(*) from cr1 join COMPANY comp on cr1.COUNTRY = comp.COUNTRY; with cr1 as ( select comp.COMPANY, cr.CURRENCY, cr.RATE from COMPANY comp join CURRENCY_RATE cr on cr.COUNTRY = comp.COUNTRY where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = comp.COUNTRY and cr.CURRENCY = subcr.CURRENCY) ) select count(*) from cr1; -- 3 sec update DATA_2 inc set inc.MONEY_CNV = inc.MONEY_V * ( select cr1.RATE from ( select comp.COMPANY COMPANY, cr.CURRENCY, cr.RATE from COMPANY comp join CURRENCY_RATE cr on (cr.COUNTRY = comp.COUNTRY) where cr.dt = (select max(subcr.DT) from CURRENCY_RATE subcr where subcr.COUNTRY = comp.COUNTRY and cr.CURRENCY = subcr.CURRENCY)) cr1 where cr1.COMPANY = inc.COMPANY and cr1.CURRENCY = inc.CODE_V) where inc.INC_DATE > DATE '2014-01-01'; ``` latest currencies by "max" with "group by" and "join": ``` select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr join (select subcr.CURRENCY, subcr.COUNTRY, max(subcr.DT) dt from CURRENCY_RATE subcr group by subcr.CURRENCY, subcr.COUNTRY) maxcr on maxcr.COUNTRY = cr.COUNTRY and maxcr.CURRENCY = cr.CURRENCY and maxcr.dt = cr.DT; -- .250 sec select count (*) from ( select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr join (select subcr.CURRENCY, subcr.COUNTRY, max(subcr.DT) dt from CURRENCY_RATE subcr group by subcr.CURRENCY, subcr.COUNTRY) maxcr on maxcr.COUNTRY = cr.COUNTRY and maxcr.CURRENCY = cr.CURRENCY and maxcr.dt = cr.DT); with cr1 as ( select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr join (select subcr.CURRENCY, subcr.COUNTRY, max(subcr.DT) dt from CURRENCY_RATE subcr group by subcr.CURRENCY, subcr.COUNTRY) maxcr on maxcr.COUNTRY = cr.COUNTRY and maxcr.CURRENCY = cr.CURRENCY and maxcr.dt = cr.DT ) select comp.COMPANY, cr1.CURRENCY, cr1.RATE from cr1 join COMPANY comp on cr1.COUNTRY = comp.COUNTRY; -- .300 sec with cr1 as ( select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr join (select subcr.CURRENCY, subcr.COUNTRY, max(subcr.DT) dt from CURRENCY_RATE subcr group by subcr.CURRENCY, subcr.COUNTRY) maxcr on maxcr.COUNTRY = cr.COUNTRY and maxcr.CURRENCY = cr.CURRENCY and maxcr.dt = cr.DT ) select count(*) from cr1 join COMPANY comp on cr1.COUNTRY = comp.COUNTRY; ``` latest N currencies: ``` -- Vendor independent but too slow... select cr.COUNTRY, cr.CURRENCY, cr.RATE from CURRENCY_RATE cr left outer join CURRENCY_RATE cr2 on (cr2.COUNTRY = cr.COUNTRY and cr2.CURRENCY = cr.CURRENCY and cr2.DT >= cr.DT) group by cr.COUNTRY, cr.CURRENCY, cr.RATE having count(*) <= 3 order by cr.COUNTRY, cr.CURRENCY, cr.RATE; -- Very fast (full table scan). select cr.COUNTRY, cr.CURRENCY, cr.RATE, cr.DT from ( select subcr.*, row_number() over (partition by subcr.COUNTRY, subcr.CURRENCY order by subcr.DT) rown from CURRENCY_RATE subcr) cr where cr.rown <= 3; ```
You might try a query like this: ``` SELECT A.AMOUNT * A.RATE FROM ( SELECT P.AMOUNT, CR.RATE, ROW_NUMBER() OVER (PARTITION BY P.ROWID ORDER BY CR.DATE DESC) AS RN FROM PAYMENT P INNER JOIN CURRENCY_RATE CR ON P.CURRENCY = CR.CURRENCY AND P.DATE >= CR.DATE ) A WHERE A.RN = 1 ``` Here are some things to watch out for: 1. Using reserved words like `DATE` and `CURRENCY` can cause conflicts in name resolution. 2. The query will exclude rows from `PAYMENT` that have no matching rows in `CURRENCY_RATE`. If you want to include such rows, use `LEFT JOIN` instead of `INNER JOIN`. 3. If the combination of `CURRENCY` and `DATE` in `CURRENCY RATE` is not unique, the query will arbitrarily select one of the rows. If you want to select a specific row in this case, add expressions as necessary to the `ORDER BY` clause so that the row you want will come first. 4. If `PAYMENT` has a unique, non-null key, you can use this in place of `P.ROWID` in the `PARTITION BY` clause.
greatest-n-per-group with condition and joining to large table (or query for getting amount in national currency when holes in currency rates table)
[ "", "sql", "performance", "oracle", "query-optimization", "greatest-n-per-group", "" ]
I am trying to import a CSV into MSSQL 2008 by using the flat file import method but I am getting an Overflow error. Any ideas on how to go around it? I used the tool before for files containing up to 10K-15K records but this file has 75K records in it.... These are the error messages ``` Messages Error 0xc020209c: Data Flow Task 1: The column data for column "ClientBrandID" overflowed the disk I/O buffer. (SQL Server Import and Export Wizard) Error 0xc0202091: Data Flow Task 1: An error occurred while skipping data rows. (SQL Server Import and Export Wizard) Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - Shows_csv" (1) returned error code 0xC0202091. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. (SQL Server Import and Export Wizard) ```
This could be a format problem of the csv file e.g. the delimiter. Check if the delimiters are consistent within the file. It could also be a problem of blank lines. I had a similar problem a while ago. I've solved it by removing all blank lines in the csv file. Worth a try anyway.
You may have one or more bad data elements. Try loading a small subset of your data to determine if it's a small number of bad records or a large one. This will also tell you if your loading scheme is working and your datatypes match. Sometimes you can quickly spot data issues if you open the csv file in excel.
Import Flat File to MS SQL
[ "sql", "sql-server" ]
Here I have a sample table: ``` id | datetime | views -------------------------------- 1 | 2014-02-27 21:00:00 | 200 65 | 2014-02-08 05:00:00 | 1 65 | 2014-02-08 10:00:00 | 3 65 | 2014-02-08 17:00:00 | 1 65 | 2014-02-08 20:00:00 | 1 65 | 2014-02-08 21:00:00 | 1 65 | 2014-02-09 04:00:00 | 2 65 | 2014-02-09 05:00:00 | 1 65 | 2014-02-09 06:00:00 | 3 65 | 2014-02-09 07:00:00 | 1 65 | 2014-02-09 09:00:00 | 1 65 | 2014-02-09 10:00:00 | 2 65 | 2014-02-09 13:00:00 | 1 70 | 2014-02-09 14:00:00 | 3 70 | 2014-02-09 15:00:00 | 2 ``` I am trying to get the views of a particular id per day and not per hour (this the setup of this table). What is the best way at it?
You can use the [DATE function](http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_date) to get the date part of your datetime value: ``` SELECT id, DATE(datetime) AS mydate, SUM(views) AS total FROM yourtable GROUP BY id, DATE(datetime); ```
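For what it's worth, the same grouping can be sanity-checked from Python against SQLite, whose `date()` function extracts the date part much like MySQL's `DATE()`; the rows below are a subset of the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, dt TEXT, views INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    (65, '2014-02-08 05:00:00', 1),
    (65, '2014-02-08 10:00:00', 3),
    (65, '2014-02-09 04:00:00', 2),
    (70, '2014-02-09 14:00:00', 3),
])
rows = con.execute("""
    SELECT id, date(dt) AS mydate, SUM(views) AS total
    FROM t
    GROUP BY id, date(dt)
    ORDER BY id, mydate
""").fetchall()
print(rows)
# [(65, '2014-02-08', 4), (65, '2014-02-09', 2), (70, '2014-02-09', 3)]
```

Each hour-level row collapses into one row per id per day, with the hourly view counts summed.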
``` SELECT `id` , DATE(`datetime`) , SUM(`views`) FROM `table` GROUP BY `id` , DATE(`datetime`) ```
Is it possible to sum the result of a column based on the date being group by day?
[ "mysql", "sql", "datetime" ]
I have a block of vbscript which takes in a variable, builds a sql string with it, executes the sql string, then passes the results of the sql query along with the original variable into another sub. It works pretty much all the time, except in one instance. Here's the code (I added line numbers for reference... Also, I know record\_source isn't used, I left it during a change I made some months ago for backwards compatibility): ``` 1224 Sub UpdateContact1 (record_source, user_id) 1225 Response.Write("UpdateContact1 - user_id: " & user_id & "<br />") 1226 strSQL = "EXEC usp_GM_GetInfoForContact1 " & user_id 1227 Set rs = con.Execute(strSQL) 1228 Response.Write("UpdateContact1 after sql - user_id: " & user_id & "<br />") 1229 If Not rs.EOF Then 1230 Response.Write("UpdateContact1 inside of If<br />") 1231 Response.Write("user_id: " & user_id & "<br />") 1232 InsertFullContact1 user_id, rs("GoldMineAccountNumber"), rs("BusinessName"), rs("FullName"), rs("LastName"), rs("Salutation"), rs("Title"), rs("WorkPhone"), rs("CellPhone"), rs("HomePhone"), rs("PrimaryFax"), rs("WorkExt"), rs("Address1"), rs("Address2"), rs("City"), rs("State"), rs("PostalCode"), rs("Country"), rs("FirstName"), rs("ReferralName"), rs("PrivateTitle"), rs("Specialty"), "", rs("GLA"), rs("GroupsDesc"), rs("RolesDesc") 1233 End If 1234 End Sub ``` Here is the result of that code: ``` UpdateContact1 - user_id: 34838 error '80020009' /includes/gm_functions.asp, line 1228 ``` Initial research suggests that error 80020009 could indicate that the variable is null. 
I checked using IsNull: ``` 1224 Sub UpdateContact1 (record_source, user_id) 1225 Response.Write("UpdateContact1 - user_id: " & user_id & "<br />") 1226 strSQL = "EXEC usp_GM_GetInfoForContact1 " & user_id 1227 Set rs = con.Execute(strSQL) 1228 If IsNull(user_id) Then 1229 Response.Write("user_id is null") 1230 Else 1231 Response.Write("user_id is not null<br />") 1232 Response.Write("Length: " & Len(user_id) & "<br />") 1233 End If 1234 If Not rs.EOF Then 1235 Response.Write("UpdateContact1 inside of If<br />") 1236 Response.Write("user_id: " & user_id & "<br />") 1237 InsertFullContact1 user_id, rs("GoldMineAccountNumber"), rs("BusinessName"), rs("FullName"), rs("LastName"), rs("Salutation"), rs("Title"), rs("WorkPhone"), rs("CellPhone"), rs("HomePhone"), rs("PrimaryFax"), rs("WorkExt"), rs("Address1"), rs("Address2"), rs("City"), rs("State"), rs("PostalCode"), rs("Country"), rs("FirstName"), rs("ReferralName"), rs("PrivateTitle"), rs("Specialty"), "", rs("GLA"), rs("GroupsDesc"), rs("RolesDesc") 1238 End If 1239 End Sub ``` And this was the output: ``` UpdateContact1 - user_id: 34838 user_id is not null error '80020009' /includes/gm_functions.asp, line 1232 ``` Executing the stored procedure manually in SSMS using the same user\_id works fine. I'm at a loss to even understand what is happening to the variable, much less fix it. I'd appreciate any help. Thanks...
I was able to resolve this after looking at the hints provided by Pavel and doing more research on the error code. The issue was that user\_id is assigned (prior to this sub being called) to a value in another recordset which was also stored in the rs variable (which I redefine inside of this sub). Apparently in classic asp, variables are assigned by reference rather than value... So when I changed the definition of rs, the user\_id variable stopped having a value. I resolved this by defining a new variable inside of the sub and using that to store the recordset rather than redefining rs.
I'm pretty sure that your issue is connected with defining variables. Let me explain why I think so. I created a test page (test.asp) ``` <!--#include virtual="/includes/gm_functions.asp"--> <% UpdateContact1 "who cares", 34838 'UpdateContact1DefineNewVar "who cares", 34838 response.write("Finished") %> ``` and here are my UpdateContact1 and UpdateContact1DefineNewVar functions ``` Sub UpdateContact1 (record_source, user_id) Response.Write("UpdateContact1 - user_id: " & user_id & "<br />") strSQL = "EXEC usp_GM_GetInfoForContact1 " & user_id Response.Write("Hey I created a SQL query <br />") Set rs = con.Execute(strSQL) Response.Write("Hey I got some data<br/>") If Not rs.EOF Then Dim some ''code End If End Sub Sub UpdateContact1DefineNewVar (record_source, user_id) Dim strSQL Dim rs Response.Write("UpdateContact1DefineNewVar - user_id: " & user_id & "<br />") strSQL = "EXEC usp_GM_GetInfoForContact1 " & user_id Response.Write("Hey I created a SQL query <br />") Set rs = con.Execute(strSQL) Response.Write("Hey I got some data<br/>") If Not rs.EOF Then Dim some ''code End If End Sub ``` The outputs are: > UpdateContact1 - user\_id: 34838 > > Microsoft VBScript runtime error '800a01f4' > > Variable is undefined: 'strSQL' > > /includes/gm\_functions.asp, line 1217 and > UpdateContact1DefineNewVar - user\_id: 34838 **Hey I created a SQL query** > > Microsoft VBScript runtime error '800a01a8' > > Object required: '' > > /includes/gm\_functions.asp, line 1232 So this **Hey I created a SQL query** makes me think that the variables you are using are not defined (include file is missing?). I'm not an expert in vbscript, but I hope it will help you. 
## UPDATE I extended the code by adding Response.Write(VarType(strSQL)) So the first output was > Microsoft VBScript runtime error '800a01f4' **Variable is undefined**: > 'UpdateContact1' /test.asp, line 5 and the second was > **0** UpdateContact1DefineNewVar - user\_id: 34838 Hey I created a SQL query Microsoft VBScript runtime error '800a01a8' Object required: '' > /includes/gm\_functions.asp, line 1221 and **0** means vbEmpty - Indicates Empty (uninitialized) <http://www.w3schools.com/vbscript/func_vartype.asp>
Unable to access vbscript variable after using it in sql string - Variable is not null, but errors when used
[ "sql", "sql-server", "vbscript", "asp-classic" ]
I have a table (Table1) that has an ID that is shared from multiple-inserts: > ``` > ID | RefID | Field_Name | Field_Value | Type > 1 | 1 | NumbAmt | 1111 | INT > 2 | 1 | LocAdd | 123 Street | String > 3 | 1 | LocDesc | Something | String > 4 | 1 | LocHidden | Useless | Hidden > ``` I can't use the ID since it is made from the inserts, the RefID is the main thing used to narrow down this data to all those with the **RefID = 1 AND Type != 'Hidden'**. Whenever I do a case statement query: ``` SELECT CASE WHEN Field_Name = 'NumbAmt' THEN Field_Value END Amt, CASE WHEN Field_Name = 'LocAdd' THEN Field_Value END Address, CASE WHEN Field_Name = 'LocDesc' THEN Field_Value END Description FROM Table1 WHERE RefID = 1 AND Type IN ('INT','String') ``` It returns the results like: > ``` > Amt | Address | Description > 1111 | NULL | NULL > NULL | 123 Street | NULL > NULL | NULL | Something > ``` My question is, how would I gather all the data but have it split into separate columns without all the NULLs showing? (My assumption leads me to believe a temp table) Or show up like: > ``` > Amt | Address | Description > 1111 | 123 Street | Something > ```
You have several options: * Join the table to itself * Use PIVOT * Subquery all fields in the SELECT list * Use OUTER APPLY for each field * Use CTE * **Consider to rethink your scheme!** **JOINS** ``` SELECT T1_RefID.RefID, T1_NumbAmt.FieldValue AS NumbAmt, T1_LocAdd.FieldValue AS LocAdd FROM (SELECT DISTINCT RefID FROM Table1) T1_RefID LEFT JOIN Table1 T1_NumbAmt ON T1_RefID.RefID = T1_NumbAmt.RefID AND T1_NumbAmt.FieldName = 'NumbAmt' AND T1_NumbAmt.Type != 'Hidden' LEFT JOIN Table1 T1_LocAdd ON T1_RefID.RefID = T1_LocAdd.RefID AND T1_LocAdd.FieldName = 'LocAdd' AND T1_LocAdd.Type != 'Hidden' /* And so on*/ ``` **PIVOT** ``` SELECT * FROM ( SELECT RefID, FieldName, FieldValue FROM Table1 WHERE Type != 'Hidden' ) AS src PIVOT ( MAX(FieldValue) FOR FieldName IN (NumbAmt, LocAdd, LocDesc) ) AS PVT ``` [SQL Fiddle](http://sqlfiddle.com/#!6/61335/8)
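One more portable option worth mentioning: conditional aggregation (`MAX` over `CASE`), which does the same reshaping as `PIVOT` without vendor-specific syntax. A small sketch against SQLite using the question's rows, assuming at most one value per `RefID`/`Field_Name` pair:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE t
    (Id INTEGER, RefID INTEGER, Field_Name TEXT, Field_Value TEXT, Type TEXT)""")
con.executemany("INSERT INTO t VALUES (?,?,?,?,?)", [
    (1, 1, 'NumbAmt',   '1111',       'INT'),
    (2, 1, 'LocAdd',    '123 Street', 'String'),
    (3, 1, 'LocDesc',   'Something',  'String'),
    (4, 1, 'LocHidden', 'Useless',    'Hidden'),
])
row = con.execute("""
    SELECT RefID,
           MAX(CASE WHEN Field_Name = 'NumbAmt' THEN Field_Value END) AS NumbAmt,
           MAX(CASE WHEN Field_Name = 'LocAdd'  THEN Field_Value END) AS LocAdd,
           MAX(CASE WHEN Field_Name = 'LocDesc' THEN Field_Value END) AS LocDesc
    FROM t
    WHERE Type <> 'Hidden'
    GROUP BY RefID
""").fetchone()
print(row)  # (1, '1111', '123 Street', 'Something')
```

Each `CASE` yields the field's value on its own row and NULL elsewhere; `MAX` per group then collapses the group to a single row with no stray NULLs.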
You could create a view for this and use it ``` SELECT T.RefID, TAmt.Field_Value as Amt, TAddress.Field_Value as Address, TType.Field_Value as Type FROM ( SELECT DISTINCT RefID FROM Table1 WHERE Type IN ('INT', 'String') ) T LEFT JOIN Table1 AS TAmt ON T.RefID = TAmt.RefID AND TAmt.Field_Name = 'NumbAmt' LEFT JOIN Table1 AS TAddress ON T.RefID = TAddress.RefID AND TAddress.Field_Name = 'LocAdd' LEFT JOIN Table1 AS TType ON T.RefID = TType.RefID AND TType.Field_Name = 'LocDesc' ```
SQL: Selecting NOT NULL data with a column that is shared and changes?
[ "sql", "sql-server" ]
I have a sample table which looks like the following: ``` WITH T(ID, SLOW, MEDIUM, FAST, SUPER_FAST) AS (SELECT 1, 'Y', 'Y', 'N', 'Y' FROM DUAL UNION ALL SELECT 2, 'N','Y','N', 'Y' FROM DUAL ) SELECT 'Identified in ' ||CASE WHEN L_TEXT_A IS NOT NULL THEN L_TEXT_A END ||' , ' ||CASE WHEN L_TEXT_B IS NOT NULL THEN L_TEXT_B END ||' , ' ||CASE WHEN L_TEXT_C IS NOT NULL THEN L_TEXT_C END ||' & ' ||CASE WHEN L_TEXT_D IS NOT NULL THEN L_TEXT_D END FROM ( SELECT CASE WHEN slow = 'Y' THEN 'slow' END L_TEXT_A, CASE WHEN medium = 'Y' THEN 'medium' END L_TEXT_B, CASE WHEN FAST = 'Y' THEN 'fast' END L_TEXT_C, case when SUPER_FAST = 'Y' then 'super fast' END L_TEXT_D FROM T ); -- Identified in slow , medium , & super fast Identified in , medium , & super fast ``` I need to get a result like: ``` 'Identified in slow, medium and super fast' 'Identified in medium and super fast' ``` The condition: no delimiter when a single column has `Y`, an ampersand when two columns have `Y`, and commas followed by an ampersand when more than two columns have `Y`.
If you are using Oracle 11gR2, a combination of unpivot and listagg can help; try this query ``` WITH T(ID, SLOW, MEDIUM, FAST, SUPER_FAST) AS (SELECT 1, 'Y', 'Y', 'N', 'Y' FROM DUAL UNION ALL SELECT 2, 'N','Y','N', 'Y' FROM DUAL union all SELECT 3, 'N','N','N', 'Y' FROM DUAL union all SELECT 4, 'N','N','N', 'N' FROM DUAL ), tlist as (select listagg(speed, ', ') within group (order by rownum) as list, id from T unpivot (pace for speed in (SLOW as 'Slow',MEDIUM as 'Medium', FAST as 'Fast', SUPER_FAST as 'Super Fast')) where pace = 'Y' group by id) select case when instr(list, ', ', -1, 1) = 0 then list else substr(list, 1, instr(list, ', ', -1, 1) - 1) || ' and ' || substr(list, instr(list, ', ', -1, 1) + 2) end as list from tlist; ``` output: ``` | LIST | |-----------------------------| | Slow, Medium and Super Fast | | Medium and Super Fast | | Super Fast | ```
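The final `CASE`/`instr`/`substr` step only swaps the last `', '` for `' and '`; for reference, the same delimiter rule in plain Python (a sketch, not part of the Oracle solution):

```python
def humanize(items):
    """Join a list as 'a, b and c' — no delimiter for one item,
    'and' for two, commas plus a final 'and' for three or more."""
    if not items:
        return ""
    if len(items) == 1:
        return items[0]
    return ", ".join(items[:-1]) + " and " + items[-1]

print(humanize(["slow", "medium", "super fast"]))  # slow, medium and super fast
print(humanize(["medium", "super fast"]))          # medium and super fast
print(humanize(["super fast"]))                    # super fast
```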
Here is some code that uses simple instr, substr and replace functions built in functions which seems to work for all combinations. Note, if you have the option of doing this via a PL/SQL function you could probably simplify the code a bit: ``` with t(id, slow, medium, fast, super_fast) as ( select 1, 'n', 'n', 'n', 'n' from dual union all select 2, 'y', 'n', 'n', 'n' from dual union all select 3, 'n', 'y', 'n', 'n' from dual union all select 4, 'n', 'n', 'y', 'n' from dual union all select 5, 'n', 'n', 'n', 'y' from dual union all select 6, 'y', 'y', 'n', 'n' from dual union all select 7, 'y', 'n', 'y', 'n' from dual union all select 8, 'y', 'n', 'n', 'y' from dual union all select 9, 'n', 'y', 'y', 'n' from dual union all select 10, 'n', 'y', 'n', 'y' from dual union all select 11, 'n', 'n', 'y', 'y' from dual union all select 12, 'y', 'y', 'y', 'n' from dual union all select 13, 'y', 'y', 'n', 'y' from dual union all select 14, 'y', 'n', 'y', 'y' from dual union all select 15, 'n', 'y', 'y', 'y' from dual union all select 16, 'y', 'y', 'y', 'y' from dual ), step1 as ( select 'identified in ' || decode(slow,'y', 'slow' || ', ', '') || decode(medium,'y', 'medium' || ', ', '') || decode(fast,'y', 'fast' || ', ', '') || decode(super_fast,'y', 'super fast' || ', ', '') str from t ), step2 as ( select length(str) - length(replace(str, ',', null)) as vals, -- count values (using commas) substr(str, 1, length(str)-2) as str -- strip final comma from step1 ), step3 as ( select str, decode(vals, 0, 0, 1, 0, instr(str, ',', 1, vals -1)) as final_comma_pos, vals from step2 ), step4 as ( select decode(vals, 0, null, 1, str, substr(str, 1, final_comma_pos - 1) || ' &' || substr(str, final_comma_pos + 1) ) as str from step3 ) select * from step4; ``` Output: ``` identified in slow identified in medium identified in fast identified in super fast identified in slow & medium identified in slow & fast identified in slow & super fast identified in medium & fast identified in medium 
& super fast identified in fast & super fast identified in slow, medium & fast identified in slow, medium & super fast identified in slow, fast & super fast identified in medium, fast & super fast identified in slow, medium, fast & super fast ```
oracle user defined delimiter in sql
[ "sql", "oracle", "oracle11g" ]
I'm trying to create a procedure that has a parameter called m\_reaplicacao. This parameter receives the values 'S' for Yes, 'N' for No and 'T' for all records. When the parameter is Yes, I should return the records with value equal to 9. When the parameter is No, I should return the records different from 9. And finally, when the value is All, I should return all records from the table. With the code below, Oracle says: Compilation errors for PROCEDURE MYDB.CONTAS\_A\_PAGAR\_SPS Error: PL/SQL: ORA-00905: missing keyword Line: 84 Text: ta.id\_1a\_cbr = 9; ``` select * from proposta ta where ta.estado = 'RJ' and case when m_reaplicacao = 'S' then ta.id_1a_cbr = 9; when m_reaplicacao = 'N' then ta.id_1a_cbr <> 9 else null end case; ``` I saw a lot of posts, but I did not solve this one. Can someone help me, please?
Don't use a `CASE` statement in a `WHERE` clause when you really want a simple combination of boolean evaluations. ``` WHERE ta.estado = 'RJ' AND ( m_reaplicacao = 'T' OR (m_reaplicacao = 'S' AND ta.id_1a_cbr = 9) OR (m_reaplicacao = 'N' AND ta.id_1a_cbr <> 9) ) ``` If for some reason you really do want to use a `CASE` statement, you'd need the `CASE` to return a value that you check in the `WHERE` clause. For example ``` WHERE ta.estado = 'RJ' AND (CASE WHEN m_reaplicacao = 'S' AND ta.id_1a_cbr = 9 THEN 1 WHEN m_reaplicacao = 'N' AND ta.id_1a_cbr <> 9 THEN 1 WHEN m_reaplicacao = 'T' THEN 1 ELSE 2 END) = 1 ``` This is not generally the clearest way to express this sort of condition, however.
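A quick illustration of the boolean-combination approach with the three modes, using SQLite from Python; the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE proposta (estado TEXT, id_1a_cbr INTEGER)")
con.executemany("INSERT INTO proposta VALUES (?,?)",
                [('RJ', 9), ('RJ', 5), ('SP', 9)])

def fetch(mode):
    # mode plays the role of the m_reaplicacao parameter
    return con.execute("""
        SELECT id_1a_cbr FROM proposta
        WHERE estado = 'RJ'
          AND ( :m = 'T'
                OR (:m = 'S' AND id_1a_cbr = 9)
                OR (:m = 'N' AND id_1a_cbr <> 9) )
        ORDER BY id_1a_cbr
    """, {"m": mode}).fetchall()

print(fetch('S'))  # [(9,)]
print(fetch('N'))  # [(5,)]
print(fetch('T'))  # [(5,), (9,)]
```

Each parenthesized branch handles one mode, so a single static query covers Yes, No and All without any `CASE`.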
You cannot return expressions in `CASE` statements, easiest to add additional `WHERE` criteria sets: ``` select * from proposta ta where ta.estado = 'RJ' and ( (m_reaplicacao = 'S' AND ta.id_1a_cbr = 9) OR (m_reaplicacao = 'N' AND ta.id_1a_cbr <> 9) ) ``` Not sure what you want to happen in the `NULL` situation.
Using case inside where clause
[ "sql", "oracle", "plsql" ]
Basically I have an application that processes transactions. Each transaction has a transaction number. In the application view on the front end the total for each transaction is calculated, but it is not stored in the database (and this cannot be changed). Now in SSRS they want to see the value come through on the report. The way to do this is to match the transaction numbers and, if they match (no matter how many records), add them up. Is there a way to do this in SSRS? Or maybe a more elegant way to do it would be to do it in the stored procedure. An example of this is below, which I know won't work, but just to give you an idea of what I'm trying to accomplish: ``` SELECT transactionID , value , Sum(Case When transactionID = TransactionID Then Value Else 0) As Total ``` EDIT: In response to some of the comments I made this edit. First off, in the above I made a mistake and wrote TransactionID = transaction, but it should be what it is currently. Continuing on... In order to do this I need to match the transaction number against other transaction numbers in the data set and see if there is a match. To illustrate I did this example below: This is an example dataset: * TransId: 1 Value: 200 * TransId: 2 Value: 300 * TransId: 1 Value: 100 * TransId: 2 Value: 500 * TransId: 1 Value: 400 From this dataset I should get these values in the report: * TransId: 1 Value: 200 Total: 700 * TransId: 2 Value: 300 Total: 800 * TransId: 1 Value: 100 Total: 700 * TransId: 2 Value: 500 Total: 800 * TransId: 1 Value: 400 Total: 700 So for each row I want to see the complete total for each record (TransId) and not a running total.
[Window functions](http://msdn.microsoft.com/en-us/library/ms189461.aspx) are just so much more fun though: ``` SELECT TransactionId , Value , SUM(Value) OVER(PARTITION BY TransactionId) AS Total FROM SomeTable ``` [SQL Fiddle](http://sqlfiddle.com/#!3/667dca/1)
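Run against SQLite (3.25+ supports window functions) with the question's sample rows, this produces exactly the per-group totals, repeated on every row rather than accumulated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (TransactionId INTEGER, Value INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?)",
                [(1, 200), (2, 300), (1, 100), (2, 500), (1, 400)])
rows = con.execute("""
    SELECT TransactionId, Value,
           SUM(Value) OVER (PARTITION BY TransactionId) AS Total
    FROM t
    ORDER BY TransactionId, Value
""").fetchall()
print(rows)
# [(1, 100, 700), (1, 200, 700), (1, 400, 700), (2, 300, 800), (2, 500, 800)]
```

Every id-1 row carries 700 and every id-2 row carries 800, matching the report the question asks for.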
This should do the trick ``` select transactionID , value , TransactionSum from MyTable t join (select transactionID , sum (value) as TransactionSum from MyTable group by transactionID ) x on t.transactionID = x.transactionID ```
SSRS/T-SQL sum if another condition is met
[ "sql", "sql-server", "t-sql", "reporting-services" ]
I have a largish (tens of millions of rows) SQL table that lists attribute types and attributes. I want to investigate the relationship between subsets (three or four at a time) of these attributes, for a given object. The objects may have some, all, or none, of the attributes that I'm interested in. If it has none of the attributes that I'm interested in, I can consider it not to exist. ``` Id | AttributeType | AttributeValue ------------------------------------ 01 | 01 | 100 01 | 02 | 4500 01 | 04 | D 01 | 15 | E ``` The problem, essentially, is that I want to return results for all the attribute types that I'm interested in if any of them exist, but no result if none of them do. So performing this query: ``` select case when Att1.id is null then Att2.id else Att1.id end as Id, Att1.AttributeValue as Attribute5, Att2.AttributeValue as Attribute6 from Attributes Att1 full outer join Attributes Att2 on Att1.id = Att2.id and Att1.AttributeType = 5 and Att2.AttributeType = 6 ``` Doesn't work for Id 1 because it has no attribute of either type, but the query creates null records on either side of the join, so I see something like this: ``` Id | Attribute5 | Attribute6 ----------------------------- 01 | 100 | Null 01 | 4500 | Null ``` If I try to avoid creating the null records I miss out on records that I want to see. 
This query: ``` select case when Att1.id is null then Att2.id else Att1.id end as Id, Att1.AttributeValue as Attribute1, Att2.AttributeValue as Attribute2 from Attributes Att1 full outer join Attributes Att2 on Att1.id = Att2.id where Att1.AttributeType = 1 and Att2.AttributeType = 3 ``` produces nothing, but should produce: ``` Id | Attribute1 | Attribute3 ----------------------------- 01 | 100 | Null ``` I can fix both of these problems by using a left join, ``` select case when Att1.id is null then Att2.id else Att1.id end as Id, Att1.AttributeValue as Attribute1, Att2.AttributeValue as Attribute3 from Attributes Att1 left join Attributes Att2 on Att1.id = Att2.id and Att2.AttributeType = 3 where Att1.AttributeType = 1 ``` produces the correct output. The problem with this that it doesn't treat the attributes equally. So, if Id 01 has a value for attribute 01 but not 03 it's fine, but if it didn't have 01, and did have 03, I wouldn't see it. This becomes more of a problem when I extend to three and four joins. Ideally, given how often I'm going to have to run this query for different attribute types, and how much processing time goes into creating the attribute table in the first place, I'd love to be able to get all the results that I need, and no others, with a single query.
Try something like this... ``` select distinct base.id, Att1.AttributeValue as Attribute1, Att2.AttributeValue as Attribute2, Att3.AttributeValue as Attribute3, Att4.AttributeValue as Attribute4 from Attributes base left join Attributes Att1 on base.id = Att1.id and Att1.AttributeType = 1 left join Attributes Att2 on base.id = Att2.id and Att2.AttributeType = 2 left join Attributes Att3 on base.id = Att3.id and Att3.AttributeType = 3 left join Attributes Att4 on base.id = Att4.id and Att4.AttributeType = 4 where base.id = 1 ``` You need a "static" table, and left join the attributes on to that... Ideally, since you're not using anything but the ID from the base table, it would be better performing if you didn't use the entire table here, but, given the layout, this will work, if only as an example. If you know you're looking at ID 1,3,5,7, then it would probably be better to put them in a variable/temp table, and join off that to eliminate having to join your `Attributes` table an extra time.
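The same left-join shape can be sketched against SQLite with a couple of invented rows; id 1 has only attribute types 1 and 3, so types 2 and 4 come back as NULL rather than dropping the row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE attrs (id INTEGER, atype INTEGER, aval TEXT)")
con.executemany("INSERT INTO attrs VALUES (?,?,?)",
                [(1, 1, '100'), (1, 3, 'D')])
row = con.execute("""
    SELECT DISTINCT base.id, a1.aval, a2.aval, a3.aval, a4.aval
    FROM attrs base
    LEFT JOIN attrs a1 ON base.id = a1.id AND a1.atype = 1
    LEFT JOIN attrs a2 ON base.id = a2.id AND a2.atype = 2
    LEFT JOIN attrs a3 ON base.id = a3.id AND a3.atype = 3
    LEFT JOIN attrs a4 ON base.id = a4.id AND a4.atype = 4
    WHERE base.id = 1
""").fetchone()
print(row)  # (1, '100', None, 'D', None)
```

Because every attribute is left-joined off the same base id, no single attribute is privileged: the id survives as long as it exists in the table at all.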
You may want to use SQL-Server's "Pivot" functionality ([http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx)) I think the syntax for your example would be: ``` SELECT Id, [01], [02], [04], [15], [06] from (SELECT Id, AttributeType, AttributeValue From Attributes) att PIVOT ( MAX(AttributeValue) for AttributeType IN ([01], [02], [04], [15], [06]) ) AS myPivot ``` Which would give you a column for each of the four AttributeTypes with the value in each. Note that you have to use a grouping function, so I've used MAX(). If you have more than one record for the same Id/AttributeType combination, you will only get the row returned by MAX(). For your example I get: ``` Id 01 02 04 15 06 01 100 4500 D E NULL ``` With millions of rows I'm not sure how this will perform, but it should be the simplest solution I know of, and it works for a reasonable number of columns. NULLs should work automatically.
Select Null From A Table When Records May Not Exist
[ "sql", "sql-server", "join" ]
Looking to find all rows where a certain json column contains an empty object, `{}`. This is possible with JSON arrays, or if I am looking for a specific key in the object. But I just want to know if the object is empty. Can't seem to find an operator that will do this. ``` dev=# \d test Table "public.test" Column | Type | Modifiers --------+------+----------- foo | json | dev=# select * from test; foo --------- {"a":1} {"b":1} {} (3 rows) dev=# select * from test where foo != '{}'; ERROR: operator does not exist: json <> unknown LINE 1: select * from test where foo != '{}'; ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. dev=# select * from test where foo != to_json('{}'::text); ERROR: operator does not exist: json <> json LINE 1: select * from test where foo != to_json('{}'::text); ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. dwv=# select * from test where foo != '{}'::json; ERROR: operator does not exist: json <> json LINE 1: select * from test where foo != '{}'::json; ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. ```
There is **no equality (or inequality) operator** for the data type **`json`** as a whole, because equality is hard to establish. Consider [**`jsonb`**](https://www.postgresql.org/docs/current/datatype-json.html) in Postgres 9.4 or later, where this is possible. More details in this related answer on dba.SE (last chapter): * [How to remove known elements from a JSON[] array in PostgreSQL?](https://dba.stackexchange.com/a/64765/3684) `SELECT DISTINCT` and `GROUP BY` fail for the same reason (no equality operator). Casting both sides of the expression to `text` allows the operators `=` or `<>`, but that's not normally reliable as there are many possible text representations for the *same* JSON value. However, for **this particular case (empty object)** it works just fine: ``` SELECT * FROM test WHERE foo::text <> '{}'::text; ``` In Postgres 9.4 or later, cast to `jsonb` instead. (**Or use `jsonb` to begin with.**) ``` SELECT * FROM test WHERE foo::jsonb <> '{}'::jsonb; ``` Or, to cover absolutely every possibility: ``` SELECT * FROM test WHERE CASE json_typeof(foo) WHEN 'object' THEN foo::text <> '{}'::text WHEN 'array' THEN foo::text <> '[]'::text ELSE true -- other types are never "empty" END; ``` [The manual for `json_typeof()`:](https://www.postgresql.org/docs/current/functions-json.html#id-1.5.8.22.8.13.2.2.24.1.1.1) > Possible types are `object`, `array`, `string`, `number`, `boolean`, and `null`.
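For this particular case the text comparison is easy to sanity-check; here it is against SQLite with the JSON stored as plain text (and, as noted above, text comparison is only safe for the empty-object case, not general JSON equality):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (foo TEXT)")
con.executemany("INSERT INTO test VALUES (?)",
                [('{"a":1}',), ('{"b":1}',), ('{}',)])
rows = con.execute("SELECT foo FROM test WHERE foo <> '{}'").fetchall()
print(rows)  # [('{"a":1}',), ('{"b":1}',)]
```

The `{}` row is filtered out while both non-empty objects survive, mirroring the `foo::text <> '{}'::text` predicate.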
Empty JSON array `[]` could also be relevant. Then this could work for both `[]` and `{}`: ``` select * from test where length(foo::text) > 2 ; ```
How to query a json column for empty objects?
[ "sql", "json", "postgresql", "types", "jsonb" ]
I am using the below mentioned code when I try to connect with SQL Server. I am getting Login failed for user 'domain\username'. I have checked that the user has sysadmin permission in SQL Server, and I am using SQL Server Express edition. The connection string I used for this is "Initial Catalog=Employee;Server=serverName" ``` public static bool connectSqlClient(string connecton) { bool isConnect = false; try { string username = @"domain\username"; string initString = "abcpassowrd"; // Instantiate the secure string. SecureString testString = new SecureString(); // Use the AppendChar method to add each char value to the secure string. foreach (char ch in initString) testString.AppendChar(ch); testString.MakeReadOnly(); SqlCredential cred = new SqlCredential(username, testString); SqlConnection conn = new SqlConnection(connecton, cred); conn.Open(); isConnect = true; } catch (Exception) { throw; } return isConnect; } ``` Let me know if I missed something.
Typically, when you add a Login to Sql Server, there are 2 modes. Sql Server Authentication (which is the "old school" user-name and pwd scenario) and "Windows Authentication". which is where you find a user (domain\username) on your network and add the login. THIS SCENARIO DOES NOT REQUIRE A PASSWORD TO BE SET/SENT. ``` Server=myServerAddress;Database=myDataBase;Trusted_Connection=True; ``` That is a typical "Windows Authentication" connection string. You don't set the username/pwd, because the IIDentity is "picked up" from who/whatever is logged into or whose credentials the application is running. I ~think you want to use Windows-Authentication (since you mention 'domain/username')....but you're setting the pwd as if you were using Sql-Server-Authentication. Thus "mixing" the 2 models. as mkross mentions, just because you add the LOGIN, you still need to "link in" the database itself. If you go to the Security/Logins/ (properties) and go the "User Mapping" selection, you can "Map" to the database, and select a role\_membership like "db\_datareader" as a low rights role. APPEND: <http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcredential%28v=vs.110%29.aspx> ``` SqlCredential provides a more secure way to specify the password for a login attempt using SQL Server Authentication. SqlCredential is comprised of a user id and a password that will be used for SQL Server Authentication. ``` So yeah, that's for Sql-Server-Authentication. Not Windows-Authentication. you probably just need ``` string myConnectionString = "Server=myServerAddress;Database=myDataBase;Trusted_Connection=True;" ```
While you may have created the user did you add that particular user to your actual database? This tripped me up for a while after I inherited a demo that already had the user created. When I backed it up and restored it the user was not included and needed to be added back in. Here you can see the folder **Security** with **Logins** as a sub folder: <https://i.stack.imgur.com/oiHrb.png> Add the user here first. Once the login is created you then need to expand your actual database itself and add the login there as well.
Error : Login failed for user 'domain\username' using SqlCredential object
[ "sql", "sql-server", "sqlconnection" ]
I need help with how to write an `SQL` statement. This is what I am trying to say; this clearly is not `SQL` format but is for the purpose of understanding the request: ``` SELECT * FROM $A$ WHERE marketprovider_1 OR marketprovider_2 OR marketprovider_3 OR marketprovider_4 OR marketprovider_5 CONTAIN 'data' AND 'tv' AND WHERE marketprovider_1 OR marketprovider_2 OR marketprovider_3 OR marketprovider_4 OR marketprovider_5 DO NOT CONTAIN 'vios' OR 'phone' OR 'uverse'; ``` My failed attempt at the query: ``` SELECT * FROM $A$ WHERE CONTAINS(marketprovider_1, 'data') OR CONTAINS(marketprovider_2, 'data') OR CONTAINS(marketprovider_3, 'data') OR CONTAINS(marketprovider_4, 'data') OR CONTAINS(marketprovider_5, 'data') AND WHERE CONTAINS(marketprovider_1, 'tv') OR CONTAINS(marketprovider_2, 'tv') OR CONTAINS(marketprovider_3, 'tv') OR CONTAINS(marketprovider_4, 'tv') OR CONTAINS(marketprovider_5, 'tv') AND WHERE NOT IN(marketprovider_1, 'vios') OR NOT IN (marketprovider_2, 'vios') OR NOT IN (marketprovider_3, 'vios') OR NOT IN (marketprovider_4, 'vios') OR NOT IN (marketprovider_5, 'vios') AND WHERE NOT IN (marketprovider_1, 'phone') OR NOT IN (marketprovider_2, 'phone') OR NOT IN (marketprovider_3, 'phone') OR NOT IN (marketprovider_4, 'phone') OR NOT IN (marketprovider_5, 'phone') AND WHERE NOT IN (marketprovider_1, 'uverse') OR NOT IN (marketprovider_2, 'uverse') OR NOT IN (marketprovider_3, 'uverse') OR NOT IN (marketprovider_4, 'uverse') OR NOT IN (marketprovider_5, 'uverse') ```
Here's an example of valid SQL `WHERE` clause syntax that implements the predicates you described. ``` WHERE ( ( marketprovider_1 LIKE '%data%' AND marketprovider_1 LIKE '%tv%' ) OR ( marketprovider_2 LIKE '%data%' AND marketprovider_2 LIKE '%tv%' ) OR ( marketprovider_3 LIKE '%data%' AND marketprovider_3 LIKE '%tv%' ) OR ( marketprovider_4 LIKE '%data%' AND marketprovider_4 LIKE '%tv%' ) OR ( marketprovider_5 LIKE '%data%' AND marketprovider_5 LIKE '%tv%' ) ) AND NOT ( marketprovider_1 LIKE '%vios%' OR marketprovider_2 LIKE '%vios%' OR marketprovider_3 LIKE '%vios%' OR marketprovider_4 LIKE '%vios%' OR marketprovider_5 LIKE '%vios%' OR marketprovider_1 LIKE '%phone%' OR marketprovider_2 LIKE '%phone%' OR marketprovider_3 LIKE '%phone%' OR marketprovider_4 LIKE '%phone%' OR marketprovider_5 LIKE '%phone%' OR marketprovider_1 LIKE '%uverse%' OR marketprovider_2 LIKE '%uverse%' OR marketprovider_3 LIKE '%uverse%' OR marketprovider_4 LIKE '%uverse%' OR marketprovider_5 LIKE '%uverse%' ) ``` (I'm assuming that by "contains" you mean that a particular string occurs within the column value, and by "does not contain" means you want to check that a particular string does not occur anywhere within the column contents.) The `LIKE` operator is one way to check whether a particular string occurs within another string, the `'%'` is a wildcard character that matches zero, one or more characters. For example: ``` '' LIKE '%tv%' => 0 'feefifofum' LIKE '%tv%' => 0 'tv' LIKE '%tv%' => 1 'tvfifofum' LIKE '%tv%' => 1 'fee tv fo' LIKE '%tv%' => 1 NULL LIKE '%tv%' => NULL ``` --- The example WHERE clause (above) checks for a single marketprovider\_N column that contains both the string `'data'` and the string `'tv'`; it's possible this differs from what you meant to specify. 
If you meant (instead) that if any of the marketprovider\_N columns contained 'data' and any of the marketprovider\_N columns contained 'tv', that would be a slightly different WHERE clause: ``` WHERE ( ( marketprovider_1 LIKE '%data%' ) OR ( marketprovider_2 LIKE '%data%' ) OR ( marketprovider_3 LIKE '%data%' ) OR ( marketprovider_4 LIKE '%data%' ) OR ( marketprovider_5 LIKE '%data%' ) ) AND ( ( marketprovider_1 LIKE '%tv%' ) OR ( marketprovider_2 LIKE '%tv%' ) OR ( marketprovider_3 LIKE '%tv%' ) OR ( marketprovider_4 LIKE '%tv%' ) OR ( marketprovider_5 LIKE '%tv%' ) ) AND NOT ( marketprovider_1 LIKE '%vios%' OR marketprovider_2 LIKE '%vios%' OR marketprovider_3 LIKE '%vios%' OR marketprovider_4 LIKE '%vios%' OR marketprovider_5 LIKE '%vios%' OR marketprovider_1 LIKE '%phone%' OR marketprovider_2 LIKE '%phone%' OR marketprovider_3 LIKE '%phone%' OR marketprovider_4 LIKE '%phone%' OR marketprovider_5 LIKE '%phone%' OR marketprovider_1 LIKE '%uverse%' OR marketprovider_2 LIKE '%uverse%' OR marketprovider_3 LIKE '%uverse%' OR marketprovider_4 LIKE '%uverse%' OR marketprovider_5 LIKE '%uverse%' ) ``` Anything beyond this, and we're just guessing at permutations of how the specification could be interpreted. Example data, and identifying whether the row should be returned or not, would narrow down the specification. e.g. which of these rows should be returned... ``` marketprovider_1 marketprovider_2 marketprovider_3 marketprovider_4 marketprovider_5 ---------------- ---------------- ---------------- ---------------- ---------------- data tv coffee NULL NULL NULL NULL data phone tv NULL NULL data tv NULL NULL NULL tv tv tv NULL NULL data netflix net NULL NULL mountain dew phone uverse cable netflix amazon hulu no uverse wifi roku ```
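The `LIKE` truth table above can be verified against a live engine. Here is a small sketch using Python's built-in sqlite3 module; SQLite evaluates these cases the same way, returning 0/1 for false/true and NULL (Python `None`) when the operand is NULL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# '%' matches zero or more characters, so x LIKE '%tv%' is true
# exactly when the substring 'tv' occurs anywhere in x.
results = con.execute(
    "SELECT '' LIKE '%tv%',"
    " 'feefifofum' LIKE '%tv%',"
    " 'tvfifofum' LIKE '%tv%',"
    " 'fee tv fo' LIKE '%tv%',"
    " NULL LIKE '%tv%'"
).fetchone()
print(results)  # (0, 0, 1, 1, None)
```

The NULL result in the last position is worth noting: a row whose column is NULL will satisfy neither the positive nor the negated `LIKE` test.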
Try using : ``` marketprovider_1 like ('%data%') ``` Where the % is a wildcard. You can use NOT LIKE for the opposite. The IN clause you are using is for searching for lists of things. An example is: ``` marketprovider_1 in ('data','this','that') ```
SQL Query, Need help using "contains" and "does not contain"
[ "mysql", "sql" ]
I am trying to get all the values which exist in one table and not the other. The condition to join them is a varchar in one table and a bigint in the other (swmItems is bigint, OBJ\_TAB is varchar). I tried the following: ``` SELECT si.upc as upc from swmItems si left outer join dbo.OBJ_TAB obj on cast(obj.F01 as BIGINT) = si.upc WHERE obj.F01 IS NULL ``` however it gives me the error: "Error converting data type varchar to bigint". I also tried the following: ``` SELECT * FROM OBJ_TAB WHERE ISNUMERIC(f01) != 1 ``` And it returns nothing, so all my values should be numbers in OBJ\_TAB. I think it's because some values exist in the first table and not the second (trying to convert null to bigint). Is there any way to join the two tables this way? I am using Microsoft SQL 2012 Footnote: In OBJ\_TAB there are leading zeros in front of some of the numbers; that's why comparing everything as varchar doesn't work for me.
``` SELECT swmItems.upc FROM swmItems LEFT JOIN OBJ_TAB ON CASE WHEN ISNUMERIC(OBJ_TAB.F01 + '.e0') = 1 -- ensure no decimals THEN CASE WHEN CONVERT(float, OBJ_TAB.F01) BETWEEN -9223372036854775808 AND 9223372036854775807 THEN CONVERT(bigint, OBJ_TAB.F01) END END = swmItems.upc WHERE OBJ_TAB.F01 IS NULL ```
You cannot convert any non-numeric value into integer datatype. but you can convert numeric value into varchar datatype. So, you can try like this. ``` SELECT si.upc as upc from swmItems si left outer join dbo.OBJ_TAB obj on obj.F01 = Cast(si.upc As Varchar) WHERE obj.F01 IS NULL ```
Convert varchar to bigInt including null values
[ "sql", "sql-server" ]
My SQL is very rusty. I'm trying to transform this table: ``` +----+-----+--------------+-------+ | ID | SIN | CONTACT | TYPE | +----+-----+--------------+-------+ | 1 | 737 | b@bacon.com | email | | 2 | 760 | 250-555-0100 | phone | | 3 | 737 | 250-555-0101 | phone | | 4 | 800 | 250-555-0102 | phone | | 5 | 850 | l@lemon.com | email | +----+-----+--------------+-------+ ``` Into this table: ``` +----+-----+--------------+-------------+ | ID | SIN | PHONE | EMAIL | +----+-----+--------------+-------------+ | 1 | 737 | 250-555-0101 | b@bacon.com | | 2 | 760 | 250-555-0100 | | | 4 | 800 | 250-555-0102 | | | 5 | 850 | | l@lemon.com | +----+-----+--------------+-------------+ ``` I wrote this query: ``` SELECT * FROM (SELECT * FROM people WHERE TYPE = 'phone') phoneNumbers FULL JOIN (SELECT * FROM people WHERE TYPE = 'email') emailAddresses ON phoneNumbers.SIN = emailAddresses.SIN; ``` Which produces: ``` +----+-----+--------------+-------+------+-------+-------------+--------+ | ID | SIN | CONTACT | TYPE | ID_1 | SIN_1 | CONTACT_1 | TYPE_1 | +----+-----+--------------+-------+------+-------+-------------+--------+ | 2 | 760 | 250-555-0100 | phone | | | | | | 3 | 737 | 250-555-0101 | phone | 1 | 737 | b@bacon.com | email | | 4 | 800 | 250-555-0102 | phone | | | | | | | | | | 5 | 850 | l@lemon.com | email | +----+-----+--------------+-------+------+-------+-------------+--------+ ``` I know that I can select the columns I want, but the SIN column is incomplete. I seem to recall that I should join in the table a third time to get a complete SIN column, but I cannot remember how. How can I produce my target table (ID, SIN, PHONE, EMAIL)? Edit and clarification: I am grateful for the answers I have received so far, but as a SQL greenhorn I am unfamiliar with the techniques you are using (case statements, conditional aggregation, and pivoting). Can this not be done using JOIN and SELECT? Please excuse my ignorance in this matter. 
(It's not that I am not interested in superior techniques, but I do not want to move too fast too soon.)
One way to approach this is conditional aggregation: ``` select min(ID), SIN, max(case when type = 'phone' then contact end) as phone, max(case when type = 'email' then contact end) as email from people t group by sin; ```
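The conditional-aggregation trick can be checked end to end on the question's sample data. This sketch uses Python's sqlite3 module; the table and column names are lowercased for the demo and the schema is assumed from the sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, sin INTEGER, contact TEXT, type TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?, ?)", [
    (1, 737, "b@bacon.com", "email"),
    (2, 760, "250-555-0100", "phone"),
    (3, 737, "250-555-0101", "phone"),
    (4, 800, "250-555-0102", "phone"),
    (5, 850, "l@lemon.com", "email"),
])
# One output row per SIN; each CASE picks out one contact type,
# and MAX collapses the group to the single non-NULL value.
rows = con.execute("""
    SELECT MIN(id), sin,
           MAX(CASE WHEN type = 'phone' THEN contact END) AS phone,
           MAX(CASE WHEN type = 'email' THEN contact END) AS email
    FROM people
    GROUP BY sin
    ORDER BY sin
""").fetchall()
print(rows)
```

A CASE expression with no ELSE branch yields NULL for non-matching rows, and aggregate functions ignore NULLs, which is why MAX recovers exactly the phone (or email) value from each group.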
Seems a [pivot](http://www.oracle.com/technetwork/articles/sql/11g-pivot-097235.html) (oracle.com) would work easily here. ``` SELECT ID, SIN, PHONE, EMAIL FROM PEOPLE PIVOT ( MAX(CONTACT) FOR TYPE IN ('EMAIL', 'PHONE') ) ```
Split column into two columns based on type code in third column
[ "sql", "oracle11g" ]
I am using a function to get the total of a column field: ``` ALTER FUNCTION [dbo].[GetTwoWeeklyWorkTime](@EmployeeID as int,@PayPeriodID as varchar(10)) RETURNS int AS BEGIN DECLARE @StartDate DATETIME DECLARE @EndDate DATETIME Select @StartDate=[PeriodStartDate],@EndDate=[PeriodEndDate] from PayPeriod where PayPeriodId=@PayPeriodID -- Declare the return variable here DECLARE @ResultVar numeric(10,2) SELECT @ResultVar= Sum([WorkingTime])/60 FROM [DayLog] Where EmployeeId =@EmployeeID AND CreatedDate between @StartDate AND @endDate -- Return the result of the function RETURN Isnull(@ResultVar,0) END ``` At the line ``` Sum([WorkingTime])/60 ``` I get the result as an int. How can I convert or cast it into numeric or float? **EDIT** I tried the following: ``` Sum(Cast([WorkingTime] as float))/60 Sum([WorkingTime])/60.0 ``` But no success.
Well, you can *calculate* it as a `float` using ``` Sum(Cast([WorkingTime] as float))/60 ``` but you also need to change your function return value to `float`: ``` RETURNS float ```
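The underlying rule, that dividing two integers yields a truncated integer while casting either operand first yields a fractional result, can be demonstrated with Python's sqlite3 module; SQLite applies the same integer-division rule, with REAL standing in for SQL Server's float:

```python
import sqlite3

con = sqlite3.connect(":memory:")
int_div, cast_div, literal_div = con.execute(
    "SELECT 100 / 60, CAST(100 AS REAL) / 60, 100 / 60.0"
).fetchone()
print(int_div)      # both operands are integers, so the result is truncated
print(cast_div)     # one REAL operand promotes the whole division
print(literal_div)  # a decimal literal divisor does the same
```

This also illustrates why the asker's two attempts were computing a fractional value correctly: the remaining problem was the function's `RETURNS int` declaration truncating the result on the way out.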
`INT / INT` would result in an `INT`, even though result could be a fraction. Use this: ``` Sum([WorkingTime]) / 60.0 ```
Error converting Sum[Field] as numeric or float
[ "sql", "sql-server-2008", "sql-convert" ]
I have a table that has the names of the department, but if a department is not part of the count in the other table then I don't get the department name. How can I get the names of the departments who are not part of the CSEReduxResponses to display in the output with a 'branchtotalstarsgiven' of 0? Here are some test data and the query that gets me the count of the departments. ``` create table CSEReduxDepts (csedept_id int, csedept_name varchar(25)); insert into CSEReduxDepts (csedept_id, csedept_name) values (1,'one'), (2,'two'), (3,'three'), (4,'four'); create table CSEReduxResponses (execoffice_status int, submitterdept int); insert into CSEReduxResponses (execoffice_status,submitterdept) values (1,1), (1,1), (1,1), (1,1), (1,2), (1,2); ------------------------------------------------------------------ SELECT submitterdept,csedept_name, COUNT(*) as 'branchtotalstarsgiven' FROM CSEReduxResponses c join CSEReduxDepts d on c.submitterdept= d.csedept_id WHERE execoffice_status = 1 GROUP BY execoffice_status, submitterdept, csedept_name ```
To get the expected result there are a few changes to implement: * change the `JOIN` from `INNER` to `LEFT/RIGHT` * change the `WHERE` condition so that it will not remove the departments not in `CSEReduxResponses` * change the `COUNT` so that it'll return 0 when the department is not in `CSEReduxResponses` The resulting query is ``` SELECT submitterdept , csedept_name , COUNT(execoffice_status) as 'branchtotalstarsgiven' FROM CSEReduxDepts d LEFT JOIN CSEReduxResponses c on d.csedept_id = c.submitterdept WHERE COALESCE(execoffice_status, 1) = 1 GROUP BY execoffice_status, submitterdept, csedept_name ``` `SQLFiddle demo` The `COUNT` uses execoffice\_status as its argument because `COUNT(*)` counts every row, but `COUNT(field)` counts only the rows where the field is not null.
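The difference between `COUNT(*)` and `COUNT(field)` over an outer join can be seen on the question's own sample data. This sketch uses Python's sqlite3 module with shortened, assumed table and column names, and omits the status filter since every sample status is 1:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE depts (id INTEGER, name TEXT);
INSERT INTO depts VALUES (1,'one'),(2,'two'),(3,'three'),(4,'four');
CREATE TABLE responses (status INTEGER, dept INTEGER);
INSERT INTO responses VALUES (1,1),(1,1),(1,1),(1,1),(1,2),(1,2);
""")
# COUNT(r.status) skips the NULLs produced by the LEFT JOIN,
# so departments with no responses show 0 instead of disappearing.
rows = con.execute("""
    SELECT d.name, COUNT(r.status) AS stars
    FROM depts d
    LEFT JOIN responses r ON d.id = r.dept
    GROUP BY d.id, d.name
    ORDER BY d.id
""").fetchall()
print(rows)  # [('one', 4), ('two', 2), ('three', 0), ('four', 0)]
```

With `COUNT(*)` instead, departments 'three' and 'four' would each report 1, because the join still produces one all-NULL row for them.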
AS I understood you need names of departements that don't figure in the second table : ``` SELECT csedept_id, csedept_name FROM CSEReduxDepts WHERE csedept_id NOT IN (SELECT distinct submitterdept FROM CSEReduxResponses) ```
How to get all names of a table column to show if its not part of another table?
[ "sql" ]
``` create table tab3(a integer,d1 datetime default getdate()) insert into tab3(a) values(1) insert into tab3 (a) select a from tab3 GO 20 insert into tab3 (a) select a from tab3 select d1,count(*) from tab3(NOLOCK) group by d1 ``` The final insert definitely takes a non-trivial amount of time (3 seconds on my machine). However, the value in `d1` is distinct per batch, i.e. the final query returns only 22 rows ``` 2014-06-22 20:34:53.787 1 2014-06-22 20:34:56.127 1 2014-06-22 20:34:56.140 2 2014-06-22 20:34:56.153 4 2014-06-22 20:34:56.157 8 2014-06-22 20:34:56.160 16 2014-06-22 20:34:56.163 32 2014-06-22 20:34:56.167 64 2014-06-22 20:34:56.170 128 2014-06-22 20:34:56.177 256 2014-06-22 20:34:56.183 512 2014-06-22 20:34:56.193 1024 2014-06-22 20:34:56.210 2048 2014-06-22 20:34:56.240 4096 2014-06-22 20:34:56.293 8192 2014-06-22 20:34:56.397 16384 2014-06-22 20:34:56.493 32768 2014-06-22 20:34:56.607 65536 2014-06-22 20:34:56.817 131072 2014-06-22 20:34:57.240 262144 2014-06-22 20:34:57.710 524288 2014-06-22 20:35:01.630 1048576 ``` Why is GETDATE() initialized per statement instead of per insert? How can I ensure the default value is initialized per row instead of per statement without the use of a cursor? EDIT: Probably related; tab3\_log has the same schema as tab3 ``` CREATE TRIGGER tab3_logger on tab3 AFTER INSERT AS BEGIN INSERT INTO tab3_log(a) select a from inserted END ``` which shows all rows from a single insert statement as having the same datetime
> When is a default value initialized? This depends on whether or not the expression is a "[runtime constant](http://blogs.msdn.com/b/conor_cunningham_msft/archive/2010/04/23/conor-vs-runtime-constant-functions.aspx)". To get your desired behaviour you can wrap the call in a scalar UDF. ``` CREATE FUNCTION dbo.F() RETURNS DATETIME AS BEGIN RETURN GETDATE() END GO CREATE TABLE T ( A CHAR(8000) NULL, B FLOAT DEFAULT RAND(), C DATETIME DEFAULT GETDATE(), D DATETIME DEFAULT dbo.F(), E UNIQUEIDENTIFIER DEFAULT NEWID() ) INSERT INTO T (A) SELECT TOP 100000 'A' FROM master..spt_values v1, master..spt_values v2 SELECT COUNT(DISTINCT B) AS B, COUNT(DISTINCT C) AS C, COUNT(DISTINCT D) AS D, COUNT(DISTINCT E) AS E FROM T GO DROP TABLE T; DROP FUNCTION F; ``` Returns (example, exact value for `D` will vary) ``` +---+---+-----+--------+ | B | C | D | E | +---+---+-----+--------+ | 1 | 1 | 823 | 100000 | +---+---+-----+--------+ ``` The first two are evaluated per statement the second two per row.
SQL is set-based, not row-by-row. The insert statement logically happens all-at-once so it is correct to assign the same value for all rows inserted by the statement. You might even get the same getdate value for different statements on a faster machine since getdate only has a precision of a millisecond with an accuracy plus or minus 3-4 milliseconds.
When is a default value initialized?
[ "sql-server", "sql" ]
I'm new to SQL, but what is the best way to insert more than 1000 rows from an Excel document into my database (SQL Server 2008)? For example I'm using the below query: ``` INSERT INTO mytable(companyid, category, sub, catalogueref) VALUES ('10197', 'cat', 'sub', '123'), ('10197', 'cat2', 'sub2', '124') ``` This is working fine, but there is a limit of inserting 1000 records and I have 19000 records, and I don't really want to do 19 separate insert statements. Another question: since the company id is always the same, is there a better way than writing it 19000 times?
Microsoft provides an [import wizard](http://msdn.microsoft.com/en-us/library/ms141209.aspx) with SQL Server. I've used it to migrate data from other databases and from spreadsheets. It is pretty robust and easy to use.
Just edit the data in Excel or another program to create N amount of insert statements with a single insert for each statement, you'll have an unlimited number of inserts. For example... ``` INSERT INTO table1 VALUES (6696480,'McMurdo Station',-77.846,166.676,'Antarctica','McMurdo') INSERT INTO table1 VALUES (3833367,'Ushuaia',-54.8,-68.3,'America','Argentina') ...19,000 later INSERT INTO table1 VALUES (3838854,'Rio Grande',-53.78769,-67.70946,'America','Argentina') ```
Inserting more than 1000 rows from Excel into SQLServer
[ "sql", "sql-server", "sql-server-2008", "insert" ]
Here is some data: ``` record ------------------------------------------------- | id | name | ------------------------------------------------- | 1 | Round Cookie | | 2 | Square Cookie | | 3 | Oval Cookie | | 4 | Hexagon Cookie | ------------------------------------------------- record_field_data ---------------------------------------------- | id | record_id | data | type_id | ---------------------------------------------- | 1 | 1 | White | 1 | | 2 | 1 | Round | 2 | | 3 | 2 | Green | 1 | | 4 | 2 | Square | 2 | | 5 | 3 | Blue | 1 | | 6 | 3 | Oval | 2 | | 7 | 4 | Hexagon | 2 | ---------------------------------------------- record_type_field ------------------------------------------------- | id | data_type | ------------------------------------------------- | 1 | Color | | 2 | Shape | ------------------------------------------------- ``` I am trying to get a list of all records left joined with the record\_field\_data of type "Color". This needs to be a left join because there may not be record\_field\_data of a given type, and I still want the record if the case. This is the query I have come up with but it is returning a left join with ALL record\_field\_data and not just the specific ones I want. ``` SELECT record.id AS id, recordfield.data, recordtype.field_name FROM record LEFT JOIN record_field_data AS recordfield ON (record.id = recordfield.record_id) LEFT JOIN record_type_field AS recordtype ON (recordfield.type_id = recordtype.id AND recordtype.data_type = 'Color'); ``` I could do this with a subquery in the JOIN but I can't use a subquery. I have to translate this to HQL and subqueries are not supported in HQL for joins. The result I am looking for is records ordered by the record\_field\_data where record\_type\_field.data\_type is 'Color'. Note that "Hexagon cookie" doesn't have a color defined, I don't know if it should be at the top or bottom at this point. Either way will work. 
``` ------------------------------------------------- | id | name | ------------------------------------------------- | 3 | Oval Cookie | | 2 | Square Cookie | | 1 | Round Cookie | | 4 | Hexagon Cookie | ------------------------------------------------- ```
``` SELECT r.id, r.name FROM record r JOIN record_type_field rf ON rf.data_type = 'Color' LEFT JOIN record_field_data rd ON rd.record_id = r.id AND rd.type_id = rf.id ORDER BY rd.data ```
Have you tried using the SQL 'IN' clause. ``` select record.id as id, recordfield.data, recordtype.field_name from record left join record_field_data recordfield ON record.id = recordfield.record_id WHERE id IN (SELECT id FROM record_type_field where data_type='Color'); ``` The IN clause allows you to specify a list condition. So the subquery gets all of the ids where the type is "Color", and you are then doing the join and selecting all records from that join that have an id in the list of ids corresponding to the type "Color".
Filtering data in left join
[ "mysql", "sql", "join" ]
**Background** Coming from a `mysql` background, I find getting started with Oracle quite a different experience. **Question** Where do I find this command tool for Oracle? I have checked [wikipedia](http://en.wikipedia.org/wiki/SQL%2aPlus) and it only says: > An Oracle programmer in the appropriately configured software environment can launch SQL\*Plus And it's not in the PATH on my Windows: ``` C:\Users\jeff>sqlplus 'sqlplus' is not recognized as an internal or external command, operable program or batch file. ```
You can locate the `sqlplus` executable in Windows by running in a CMD shell ``` dir /s /b c:\sqlplus.exe ``` Suppose you find the file at ``` c:\oracle\product\11.2.0\client_1\bin\sqlplus.exe ``` Then you have determined that your ORACLE\_HOME is: ``` c:\oracle\product\11.2.0\client_1 ``` Assuming the above ORACLE\_HOME, set your environment variables (Control Panel > System > Environment Variables). Below is example, so modify these to match the ORACLE\_HOME you determined above. ``` ORACLE_HOME=c:\oracle\product\11.2.0\client_1 TNS_ADMIN=c:\oracle\product\11.2.0\client_1\network\admin PATH= *(add this below the end of your PATH)* ;c:\oracle\product\11.2.0\client_1\bin ```
Usually in $ORACLE\_HOME/bin and usually they suggest to run ``` . oraenv ``` to prepare your environment.
Where is Oracle sqlplus located?
[ "mysql", "sql", "oracle", "command-line", "sqlplus" ]
This may just be a Friday thing but I can't work out how to display these records as one line; my data looks like this ``` PTaskID Part Requisition Service Requisition 394512 Yes No 394512 No Yes ``` What I want is one row with the `PTaskID`, but with the following: if there are more rows as in the example and one of them has a Yes in the `Part Requisition`, then the overall row value for `Part Requisition` is Yes, and the same logic should apply to `Service Requisition`. Any help would be great PJD ``` CREATE TABLE [dbo].[Load]( [PTaskID] [int] NOT NULL, [Part Requisition] [varchar](3) NOT NULL, [Service Requisition] [varchar](3) NULL ) ON [PRIMARY] INSERT INTO Load (PTaskID, [Part Requisition], [Service Requisition]) VALUES (394512, 'Yes', 'No'), (394512, 'No', 'Yes') ```
Given that 'Yes' comes after 'No' (\*in most collations), you could `GROUP` and then do a `Max`, viz ``` SELECT PTaskID, MAX([Part Requisition]) as [MaxPartRequisition], MAX([Service Requisition]) as [MaxServiceRequisition] FROM [Load] GROUP BY PTaskID; ``` [SqlFiddle here](http://sqlfiddle.com/#!3/82189/1)
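The whole approach hinges on 'Yes' collating after 'No', which holds under a binary collation since 'Y' follows 'N'. A quick check with Python's sqlite3 module (the table is renamed load_t and the columns shortened for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE load_t (PTaskID INTEGER, PartReq TEXT, ServiceReq TEXT);
INSERT INTO load_t VALUES (394512, 'Yes', 'No'), (394512, 'No', 'Yes');
""")
# Under the default binary collation 'Yes' > 'No', so MAX returns
# 'Yes' whenever any row in the group contains it.
row = con.execute("""
    SELECT PTaskID, MAX(PartReq), MAX(ServiceReq)
    FROM load_t
    GROUP BY PTaskID
""").fetchone()
print(row)  # (394512, 'Yes', 'Yes')
```

If the flag values did not happen to sort conveniently, the CASE-plus-SUM form in the other answer is the collation-independent way to express the same "any row is Yes" logic.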
Try this ``` Select Ptaskid, case when sum(case when [Part Requisition] = 'Yes' then 1 else 0 end)> 0 then 'Yes' else 'No' end 'Part Requisition', case when sum(case when [Service Requisition] = 'Yes' then 1 else 0 end)> 0 then 'Yes' else 'No' end 'Service Requisition' from [Load] group by ptaskid ```
Grouping 2 lines in to 1
[ "sql", "sql-server", "t-sql" ]
I'm trying to insert values into a table named EMPLOYEE\_PROGRAM\_LEVEL. Here are the column names I need to specify: ``` EMPLOYEE_ID, PROGRAM_ID, LEVEL, MANAGE_ACCESS ``` Before doing the insertion, I know only the employee name and the program name. LEVEL and MANAGE\_ACCESS are known constants that I need to hardcode in the query. The problem is: I need to get the id corresponding to the employee name and the id corresponding to the program name, but I also need to insert the constant values in the same query! I've tried: ``` INSERT INTO EMPLOYEE_PROGRAM_LEVEL(EMPLOYEE_ID, PROGRAM_ID, LEVEL, MANAGE_ACCESS) SELECT ID, ID_PROGRAM FROM EMPLOYEE T1 INNER JOIN PROGRAM T2 ON T2.NOM='programName' WHERE T1.USERNAME='userName' VALUES(ID, ID_PROGRAM, '9000', 'O'); ``` '9000' and 'O' are the fixed values (the LEVEL and MANAGE\_ACCESS columns). Apparently, I can't use VALUES in an INSERT SELECT query, which is blocking me. Here's the error I get: ``` Incorrect syntax near the keyword 'VALUES'. ``` What should I do?
This should work for you: ``` INSERT INTO EMPLOYEE_PROGRAM_LEVEL(EMPLOYEE_ID, PROGRAM_ID, LEVEL, MANAGE_ACCESS) SELECT ID, ID_PROGRAM,'9000', 'O' FROM EMPLOYEE T1 INNER JOIN PROGRAM T2 ON T2.NOM='programName' WHERE T1.USERNAME='userName' ```
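A runnable sketch of the same INSERT ... SELECT pattern using Python's sqlite3 module; the ids 7 and 42 and the lowercase table names are made up for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (id INTEGER, username TEXT);
CREATE TABLE program (id_program INTEGER, nom TEXT);
CREATE TABLE employee_program_level (
    employee_id INTEGER, program_id INTEGER, level TEXT, manage_access TEXT);
INSERT INTO employee VALUES (7, 'userName');
INSERT INTO program VALUES (42, 'programName');
""")
# Constants simply become extra columns in the SELECT list;
# no VALUES clause is needed at all.
con.execute("""
    INSERT INTO employee_program_level (employee_id, program_id, level, manage_access)
    SELECT e.id, p.id_program, '9000', 'O'
    FROM employee e
    JOIN program p ON p.nom = 'programName'
    WHERE e.username = 'userName'
""")
row = con.execute("SELECT * FROM employee_program_level").fetchone()
print(row)  # (7, 42, '9000', 'O')
```

The key point is that an INSERT statement takes either a VALUES clause or a SELECT, never both; literals ride along inside the SELECT list.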
You're almost there, just need to add the constants to the SELECT: ``` INSERT INTO EMPLOYEE_PROGRAM_LEVEL(EMPLOYEE_ID, PROGRAM_ID, LEVEL, MANAGE_ACCESS) SELECT ID, ID_PROGRAM, '9000', 'O' FROM EMPLOYEE T1 INNER JOIN PROGRAM T2 ON T2.NOM='programName' WHERE T1.USERNAME='userName' ```
MS SQL: INSERT values from another table and values from constant data
[ "sql", "sql-server" ]
I'm trying to join two tables based on the first two digits of UK Postcodes. The problem I have is that not all postcodes in my table start with two letters. For example: ``` Table 1 Birmingham B Bath BA Table 2 B5 5NR BA1 2BX ``` If I was to join using LEFT(Postcode,2) I would be excluding Birmingham. If I was to join using LEFT(Postcode,1) I would be excluding Bath. Is there a way to just take the first text part of a postcode? Many Thanks
You can try the following query, assuming that in the second table a number follows the letter prefix; the string is cut at the first digit to extract the letter part of the postcode. ``` SELECT Name,A.Code,B.Code,B.Val FROM table1 A JOIN table2 B ON A.Code =LEFT(B.Code,PATINDEX('%[0-9]%',B.Code)-1) ```
This will eliminate the last character of a string in SQL Server: ``` LEFT(Postcode,LEN(Postcode)-1) ```
SQL Joining on Different cell values
[ "sql" ]
Of all the SQL transpose questions using PIVOT and UNPIVOT I am yet to see one where you transpose a single row into columns. This is my query: ``` SELECT '' as Status, '' as "Count", sum(Created) as "Created", sum(Forwarded) as "Forwarded", sum(Replied) as "Replied" FROM ( SELECT CASE WHEN a.TO_WG_ID is null and a.CREATED_DATE_TIME = (select min(b.created_date_time) from ymtn.message b where b.thread_id = a.thread_id) THEN 1 ELSE 0 END AS "Created", CASE WHEN a.TO_WG_ID is not null and a.FROM_WG_ID is not null THEN 1 ELSE 0 END AS "Forwarded", CASE WHEN a.TO_WG_ID is null and a.CREATED_DATE_TIME != (select min(b.created_date_time) from ymtn.message b where b.thread_id = a.thread_id) THEN 1 ELSE 0 END AS "Replied" FROM ymtn.MESSAGE a left join YMTN.WORKGROUP b ON a.FROM_WG_ID=b.WORKGROUP_ID where b.WORKGROUP_ID='1STOP_PROCESS' ) a ``` and this is my output from above query (with two static fields - Status and Count - which I may not need if I can just figure out to transpose it right): ``` Status Count Created Forwarded Replied 1693 209 1499 ``` This is my desired output: ``` Status Count Created 1693 Forwarded 209 Replied 1499 ``` Please help me find a way to do this. Thanks!
## Test Data ``` DECLARE @TABLE TABLE ( [Status] VARCHAR(10) ,[Count] VARCHAR(10) ,Created INT ,Forwarded INT ,Replied INT ) INSERT INTO @TABLE VALUES ('', '', 1693 , 209 ,1499) ``` ## Query ``` ;WITH CTE AS ( SELECT * FROM ( SELECT Created, Forwarded, Replied FROM @TABLE --<-- Your Existing query here ) t UNPIVOT (Vals FOR N IN (Created, Forwarded, Replied)) up ) SELECT N AS [Status] ,Vals AS [Count] FROM CTE ``` ## Result Set ``` ╔═══════════╦═══════╗ ║ Status ║ Count ║ ╠═══════════╬═══════╣ ║ Created ║ 1693 ║ ║ Forwarded ║ 209 ║ ║ Replied ║ 1499 ║ ╚═══════════╩═══════╝ ```
You could just union them. ``` Select 'Created' as Status, Sum... as Count from... union all select 'Forwarded', Som... union all select 'Replied, sum... ```
SQL Transpose sum of field into columns
[ "sql", "sql-server", "transpose" ]
I have a quiz table ``` id | user_id | quiz_id -------------------------- 1 | 34567 | 12334 2 | 34567 | 12334 3 | 34567 | 23455 ``` ids 1 and 2 depict a quiz that can be assigned to the same user twice, and a quiz transaction table ``` id | date | status ------------------------ 1 | 2014 | assigned 2 | 2014 | assigned 3 | 2014 | assigned ------------------------ 1 | 2014 | completed ``` `id` is a foreign key to the quiz table id; the last row depicts that whenever a user finishes the quiz, the row in the transaction table is updated with status 'completed' Expected Result: I want a table with a structure like ``` id | user_id| course_id | date | status ------------------------------------------ 1 | 34567 | 12334 | 2014 | completed 2 | 32567 | 12334 | 2014 | assigned 3 | 2014 | 23455 | 2014 | assigned ``` My query is ``` SELECT q.id, q.user_id, q.course_id, qt.date, qt.status FROM quiz q LEFT JOIN quiz_transaction qt ON q.id = qt.id ``` but it gives me an extra row (as the query will) ``` 1 | 34567 | 12334 | 2014 | assigned ``` I cannot use ``` ON qt.type = 'completed' ``` Because if it's completed it should return a completed row, and if not it should return an assigned row, but not both. So in the result I cannot have ``` 1 | 34567 | 12334 | 2014 | completed 1 | 34567 | 12334 | 2014 | assigned ``` How can I do it?
How about simply using the `MAX()` function with `GROUP BY` ([**SQL Fiddle**](http://sqlfiddle.com/#!6/39ac3/1/0)): ``` SELECT q.id, q.user_id, q.course_id, qt.date, MAX(qt.status) AS Status FROM quiz q LEFT JOIN quiz_transaction qt ON q.id = qt.id GROUP BY q.id, q.user_id, q.course_id, qt.date ``` **EDIT:** If you need to order a string a certain way, you could use a CASE statement to convert the string to a number. Get the MAX value and then convert it back ([**SQL Fiddle**](http://sqlfiddle.com/#!6/e13b9/4/0)): ``` SELECT m.id, m.user_id, m.quiz_id, MAX(m.date), CASE WHEN MAX(m.status) = 1 THEN 'assigned' WHEN MAX(m.status) = 2 THEN 'doing' WHEN MAX(m.status) = 3 THEN 'completed' END AS Status FROM ( SELECT q.id, q.user_id, q.quiz_id, qt.date, CASE WHEN qt.status = 'assigned' THEN 1 WHEN qt.status = 'doing' THEN 2 WHEN qt.status = 'completed' THEN 3 END AS Status FROM quiz q LEFT JOIN quiz_transaction qt ON q.id = qt.id ) AS m GROUP BY m.id, m.user_id, m.quiz_id; ```
Depending on your release, SQL Server supports Standard SQL's "Windowed Aggregate Functions". ROW\_NUMBER will give you a single row: ``` SELECT q.id ,q.user_id ,q.quiz_id ,qt.date ,qt.status FROM quiz q JOIN ( SELECT id ,date ,status ,ROW_NUMBER() OVER (PARTITION BY id ORDER BY Status DESC) as rn FROM quiz_transaction ) as qt ON q.id = qt.id WHERE rn = 1 ``` If you have more complex ordering rules you need to use a CASE: ``` ,ROW_NUMBER() OVER (PARTITION BY id ORDER BY CASE Status WHEN 'completed' THEN 1 WHEN 'doing' THEN 2 WHEN 'assigned' THEN 3 END) as rn ```
Select one row from non unique rows based on row value
[ "sql", "sql-server", "subquery", "case", "conditional-statements" ]
I have a few large tables in a oracle DB (lots of columns and data types) that I need to move to another oracle database (say from my DEV region to UAT region). Is there anyway I can get sql developer or sql plus to output a create table statement that is exactly the structure of the existing table? thanks
If you are making use of the SQL Developer you can right click the table that you want to generate a script for. From there select Quick DDL and then click on Save To File. This will then generate the create table script to an external sql file. You can do this for multiple tables as well by selecting more than one table at a time. Hope this helps!!
If you want to get it through SQL statement then you can try the below query ``` SELECT dbms_metadata.get_ddl('TABLE', 'Your_Table_Name') FROM dual; ``` See [DBMS\_METADATA](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_metada.htm) for more information. You can generate the script using Toad software as well (In case you have Toad installed)
generate create table from existing table
[ "sql", "oracle" ]
I have been working on this query for some time now, and reading `right join` question after `right join` question here on SO, but I cannot figure this one out. I have the following Query: ``` DECLARE @ExpectedDateSample VARCHAR(10) DECLARE @Date datetime DECLARE @DaysInMonth INT DECLARE @i INT --GIVE VALUES SET @ExpectedDateSample = SUBSTRING(CONVERT(VARCHAR, DATEADD(MONTH, +0, GETDATE()), 112),5,2)+'/'+CONVERT(VARCHAR(4), DATEPART(YEAR, GETDATE())) SET @Date = Getdate() SELECT @DaysInMonth = datepart(dd,dateadd(dd,-1,dateadd(mm,1,cast(cast(year(@Date) as varchar)+'-'+cast(month(@Date) as varchar)+'-01' as datetime)))) SET @i = 1 --MAKE TEMP TABLE CREATE TABLE #TempDays ( [days] VARCHAR(50) ) WHILE @i <= @DaysInMonth BEGIN INSERT INTO #TempDays VALUES(@i) SET @i = @i + 1 END SELECT DATEPART(DD, CONVERT(DATE, a.budg_tempDODate1, 103)) ExpectedDate, SUM(a.budg_do1_total) ExpectedAmount FROM CRM.dbo.budget a RIGHT JOIN #TempDays on DATEPART(DD, CONVERT(DATE, a.budg_tempDODate1, 103)) = #TempDays.days WHERE DATEPART(MONTH, a.budg_tempDODate1) = DATEPART(MONTH, GetDate()) AND DATEPART(YEAR, a.budg_tempDODate1) = DATEPART(YEAR, GetDate()) GROUP BY a.budg_tempDODate1 --DROP TABLE TO ALLOW CREATION AGAIN DROP TABLE #TempDays ``` In my `Budget table` I have a few days out of the month missing, but that is why I create a Temp table to count all the days of the month. And then RIGHT join to that Temp table. I am trying to calculate how much cash is expected on each day of the month. If the day does not exist in my budget table, **DO NOT** leave it out completely, but rather display 0 is expected. 
## What I am currently getting ``` +------+---------------+ | DAYS | AMOUNT | +------+---------------+ | 1 | 34627.000000 | | 2 | 72474.000000 | | 3 | 27084.000000 | | 4 | 9268.000000 | | 5 | 32304.000000 | | 6 | 23261.000000 | | 7 | 5614.000000 | | 9 | 3464.000000 | | 10 | 20046.000000 | | 12 | 7449.000000 | | 13 | 265163.000000 | | 14 | 24210.000000 | | 15 | 68848.000000 | | 16 | 31702.000000 | | 17 | 2500.000000 | | 19 | 2914.000000 | | 20 | 238406.000000 | | 21 | 15642.000000 | | 22 | 2514.000000 | | 23 | 46521.000000 | | 24 | 34093.000000 | | 25 | 899081.000000 | | 26 | 204085.000000 | | 27 | 316341.000000 | | 28 | 48826.000000 | | 29 | 2657.000000 | | 30 | 440401.000000 | +------+---------------+ ``` ## What I was Expecting: ``` +------+---------------+ | DAYS | AMOUNT | +------+---------------+ | 1 | 34627.000000 | | 2 | 72474.000000 | | 3 | 27084.000000 | | 4 | 9268.000000 | | 5 | 32304.000000 | | 6 | 23261.000000 | | 7 | 5614.000000 | | 8 | NULL | | 9 | 3464.000000 | | 10 | 20046.000000 | | 11 | NULL | | 12 | 7449.000000 | | 13 | 265163.000000 | | 14 | 24210.000000 | | 15 | 68848.000000 | | 16 | 31702.000000 | | 17 | 2500.000000 | | 18 | NULL | | 19 | 2914.000000 | | 20 | 238406.000000 | | 21 | 15642.000000 | | 22 | 2514.000000 | | 23 | 46521.000000 | | 24 | 34093.000000 | | 25 | 899081.000000 | | 26 | 204085.000000 | | 27 | 316341.000000 | | 28 | 48826.000000 | | 29 | 2657.000000 | | 30 | 440401.000000 | +------+---------------+ ``` As you can see, the expected result still shows the days Im not expecting any value. Can Anybody notice anything immediately wrong with my query... Any help and tips would be greatly appreciated. I'm using SQL server 2008 Thanks! Mike
Change your `where` clause to part of the join, and display the day value from `#tempdays`, not the data table ``` SELECT #TempDays.days ExpectedDate, SUM(a.budg_do1_total) ExpectedAmount FROM CRM.dbo.budget a RIGHT JOIN #TempDays on DATEPART(DD, CONVERT(DATE, a.budg_tempDODate1, 103)) = #TempDays.days AND DATEPART(MONTH, a.budg_tempDODate1) = DATEPART(MONTH, GetDate()) AND DATEPART(YEAR, a.budg_tempDODate1) = DATEPART(YEAR, GetDate()) GROUP BY #TempDays.days ```
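The general rule, that a filter on the outer-joined table belongs in the ON clause rather than the WHERE clause, can be demonstrated with Python's sqlite3 module. A LEFT JOIN is used here since older SQLite builds lack RIGHT JOIN, and the tables are simplified stand-ins for #TempDays and the budget table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE days (d INTEGER);
INSERT INTO days VALUES (1),(2),(3);
CREATE TABLE budget (d INTEGER, amount INTEGER, yr INTEGER);
INSERT INTO budget VALUES (1, 100, 2014), (3, 50, 2013);
""")
# Filtering in WHERE discards the unmatched (NULL) rows, so the
# outer join silently degrades to an inner join.
where_rows = con.execute("""
    SELECT days.d, SUM(budget.amount)
    FROM days LEFT JOIN budget ON budget.d = days.d
    WHERE budget.yr = 2014
    GROUP BY days.d ORDER BY days.d
""").fetchall()
# Filtering in the ON clause keeps every day; unmatched days get NULL.
on_rows = con.execute("""
    SELECT days.d, SUM(budget.amount)
    FROM days LEFT JOIN budget ON budget.d = days.d AND budget.yr = 2014
    GROUP BY days.d ORDER BY days.d
""").fetchall()
print(where_rows)  # [(1, 100)]
print(on_rows)     # [(1, 100), (2, None), (3, None)]
```

Day 2 has no budget rows and day 3's only row is from the wrong year, yet both survive in the second query, which is exactly the behavior the asker wanted for the missing days of the month.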
The problem is your Where Clause - > WHERE DATEPART(MONTH, a.budg\_tempDODate1) = DATEPART(MONTH, GetDate()) AND DATEPART(YEAR, a.budg\_tempDODate1) = DATEPART(YEAR, GetDate())
SQL not delivering expected result with RIGHT JOIN
[ "", "sql", "sql-server", "right-join", "" ]
These numeric values need to be converted into datetime. They are dates of birth, so the values cannot be in the future. Can I somehow implement a rule that does not accept a date of birth that would make the person over 100 years old? 201246-1324 040210-6387 111257-0647 040210-6387 Result: **2046-12-20 00:00:00.000 - Wrong** 2010-02-04 00:00:00.000 - Right 1957-12-11 00:00:00.000 - Right 2010-02-04 00:00:00.000 - Right
Temporarily change your cutoff, since you're working with DOBs, which are always in the past. ``` sp_configure 'two digit year cutoff', 2015 reconfigure ``` Then use a standard datediff on years < 100.
You can use the datediff function in an if statement for this circumstance i believe Essentially, if the time between the two dates being compared is greater than 100 years then do not insert the record <http://www.w3schools.com/sql/func_datediff.asp>
Wrong (future) convert to datetime
[ "", "sql", "date", "datetime", "type-conversion", "" ]
I have a stored procedure and I can't work out how the string is meant to be built up. The SQL statement works fine when I do not have a variable in there, so the problem is definitely the way I am writing it. I am just a beginner so I'm unsure about the syntax. Here is the code: ``` CREATE PROCEDURE [dbo].[SP_SLINVOICE] @varCURRENCY AS VARCHAR(3) AS BEGIN SET NOCOUNT ON; DECLARE @SQL NVARCHAR(MAX); SELECT @SQL = N'SELECT dbo.invmaster.InvNumber FROM dbo.invmaster INNER JOIN dbo.invdetail ON dbo.invmaster.INVMasterID = dbo.invdetail.INVMasterID INNER JOIN dbo.Company ON dbo.invmaster.InvCompanyID = dbo.Company.CompanyID WHERE dbo.InvMaster.InvCurrency = '' + @varCURRENCY + '' AND dbo.invmaster.DocType <> ''MC'' ORDER BY dbo.invmaster.InvNumber ASC;'; EXEC sp_executesql @sql; ``` The @varCURRENCY does not give me an error when I execute. But it does not work either when I pass it through as a parameter. Please let me know if you can see what the issue is. Thanks in advance
Your SQL is ending up like this: ``` WHERE dbo.InvMaster.InvCurrency = '@varCURRENCY' ``` So you are not looking for the value of the parameter, you are looking for `@Currency`, I am not sure why you are using Dynamic SQL, the following should work fine: ``` CREATE PROCEDURE [dbo].[SP_SLINVOICE] @varCURRENCY AS VARCHAR(3) AS BEGIN SET NOCOUNT ON; SELECT dbo.invmaster.InvNumber FROM dbo.invmaster INNER JOIN dbo.invdetail ON dbo.invmaster.INVMasterID = dbo.invdetail.INVMasterID INNER JOIN dbo.Company ON dbo.invmaster.InvCompanyID = dbo.Company.CompanyID WHERE dbo.InvMaster.InvCurrency = @varCURRENCY AND dbo.invmaster.DocType <> 'MC' ORDER BY dbo.invmaster.InvNumber ASC; END ``` If you need Dynamic SQL for some other reason you can use the following to pass `@varCURRENCY` as a parameter to `sp_executesql`: ``` DECLARE @SQL NVARCHAR(MAX); SELECT @SQL = N'SELECT dbo.invmaster.InvNumber FROM dbo.invmaster INNER JOIN dbo.invdetail ON dbo.invmaster.INVMasterID = dbo.invdetail.INVMasterID INNER JOIN dbo.Company ON dbo.invmaster.InvCompanyID = dbo.Company.CompanyID WHERE dbo.InvMaster.InvCurrency = @varCURRENCY AND dbo.invmaster.DocType <> ''MC'' ORDER BY dbo.invmaster.InvNumber ASC;'; EXEC sp_executesql @sql, N'@varCURRENCY VARCHAR(3)', @varCURRENCY; ```
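The same point in miniature: a Python `sqlite3` sketch with invented invoice rows, showing that a variable name embedded inside the string literal matches nothing, while a bound parameter (the role `sp_executesql`'s parameter list plays in the answer above) returns the row:

```python
import sqlite3

# Invented invoice rows; only the columns the procedure touches.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE invmaster(InvNumber TEXT, InvCurrency TEXT, DocType TEXT);
    INSERT INTO invmaster VALUES
        ('INV1', 'GBP', 'IN'), ('INV2', 'USD', 'IN'), ('INV3', 'GBP', 'MC');
""")

currency = "GBP"

# Variable name embedded inside the string literal (the bug): this
# compares InvCurrency against the text '@varCURRENCY' and matches nothing.
broken = con.execute(
    "SELECT InvNumber FROM invmaster WHERE InvCurrency = '@varCURRENCY'"
).fetchall()

# Bound parameter (sqlite3's ?, analogous to a parameter passed to
# sp_executesql): the value itself is compared, so the row comes back.
fixed = con.execute(
    "SELECT InvNumber FROM invmaster "
    "WHERE InvCurrency = ? AND DocType <> 'MC' ORDER BY InvNumber",
    (currency,),
).fetchall()
```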
If you want to pass a variable to an `sp_executesql` context, you need to pass it as a parameter. ``` EXECUTE sp_executesql @sql, N'@varCurrency varchar(3)', @varcurrency= @varCurrency; ``` <http://msdn.microsoft.com/en-gb/library/ms188001.aspx> Although why you don't just use ``` select ... where dbo.InvMaster.InvCurrency = @varCURRENCY ``` instead of the executesql is beyond me.
SQL Stored Procedure WHERE CLAUSE Variable
[ "", "sql", "sql-server", "sql-server-2008", "variables", "stored-procedures", "" ]
I currently have some SQL that is used to create an Excel report in the following format: ``` COL1 COL2 COL3 2 1 8 3 7 9 1 2 4 ``` Now what I am trying to do is sum up each column's values and insert the totals at the bottom using `UNION ALL` (unless of course there is a better way). The values for each column are already generated by sums. The concept I can't grasp is how to sum all the values for the final row, if this is even possible. So the output should look like so: ``` COL1 COL2 COL3 2 1 8 3 7 9 1 2 4 6 10 21 ``` Thanks!
It looks like you want to add ``` WITH ROLLUP ``` to the end of your query eg: ``` Select sum(a) as col1, sum(b) as col2 from yourtable group by something with rollup ``` Depending on the full nature of your query, you may prefer to use `with cube`, which is similar. See [http://technet.microsoft.com/en-us/library/ms189305(v=sql.90).aspx](http://technet.microsoft.com/en-us/library/ms189305%28v=sql.90%29.aspx)
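SQLite has no `WITH ROLLUP`, so a portable sketch of the same report falls back on the `UNION ALL` idea from the question (data and column names taken from the question); the totals row lands at the bottom:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE report(col1 INTEGER, col2 INTEGER, col3 INTEGER);
    INSERT INTO report VALUES (2, 1, 8), (3, 7, 9), (1, 2, 4);
""")

# Detail rows first, then one aggregate row appended by UNION ALL.
rows = con.execute("""
    SELECT col1, col2, col3 FROM report
    UNION ALL
    SELECT SUM(col1), SUM(col2), SUM(col3) FROM report
""").fetchall()
```

On engines that support it, `WITH ROLLUP` produces the same totals row without the second SELECT.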
``` select col1 ,col2 ,col3 from tableA union select sum(col1) ,sum(col2) ,sum(col3) from tableA order by col1,col2,col3 ```
Create SQL summary using union
[ "", "sql", "sql-server", "union", "" ]
I'm trying to make a listing for **Ageing Transaction**, whereby the list will have a 0-30 days column and a 31-60 days column. If the transaction date is within 30 days, then it's in the 0-30 days column; if it's within the range of 31-60 days then it's in the 31-60 days column. To make it more clear, here's the table ``` |trans_code |customer |trans_date | credit| debit | |ABC1000 |John ptd |2014-05-20 | 0.00 | 200.00 | |ABC1000 |John ptd |2014-07-06 |200.00 | 0.00 | |ABC1001 |Petron |2014-04-25 | 0.00 | 600.00 | |ABC1001 |Petron |2014-06-10 |600.00 | 0.00 | ``` John ptd has a debt of $200 on 2014-05-20 and he paid his debt on 2014-07-06. So the date range is within 31-60 days, and the $200 he paid goes into the 31-60 days column in the Ageing Transaction listing (like below). ``` |Customer | 0-30 days | 31-60 days| |John ptd | 0.00 | 200.00| |Petron | 0.00 | 600.00| ``` Now my problem is how to compare trans\_date values in SQL, since they are in the same column. The matching should be BASED on **trans\_code**. Updated: each transaction actually starts with a Debit before the Credit
Here is a better approach: ``` SELECT d.customer, CASE WHEN DATEDIFF(c.trans_date, d.trans_date) <= 30 THEN d.debit ELSE 0.0 END AS `0-30`, CASE WHEN DATEDIFF(c.trans_date, d.trans_date) > 30 AND DATEDIFF(c.trans_date, d.trans_date) <= 60 THEN d.debit ELSE 0.0 END AS `31-60` FROM table AS d INNER JOIN table AS c ON c.trans_code = d.trans_code and c.customer = d.customer WHERE d.credit = 0.0 and c.debit = 0.0 ```
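A runnable miniature of this bucketing idea, using Python's `sqlite3` with `julianday()` standing in for MySQL's `DATEDIFF` (rows copied from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE trans(trans_code TEXT, customer TEXT, trans_date TEXT,
                       credit REAL, debit REAL);
    INSERT INTO trans VALUES
        ('ABC1000', 'John ptd', '2014-05-20',   0.0, 200.0),
        ('ABC1000', 'John ptd', '2014-07-06', 200.0,   0.0),
        ('ABC1001', 'Petron',   '2014-04-25',   0.0, 600.0),
        ('ABC1001', 'Petron',   '2014-06-10', 600.0,   0.0);
""")

# Self-join each debit row (d) to its matching credit row (c) on
# trans_code, then bucket by the day difference between the two dates.
rows = con.execute("""
    SELECT d.customer,
           SUM(CASE WHEN julianday(c.trans_date) - julianday(d.trans_date)
                         <= 30 THEN d.debit ELSE 0 END) AS b0_30,
           SUM(CASE WHEN julianday(c.trans_date) - julianday(d.trans_date)
                         BETWEEN 31 AND 60 THEN d.debit ELSE 0 END) AS b31_60
    FROM trans d JOIN trans c
      ON c.trans_code = d.trans_code AND c.customer = d.customer
    WHERE d.debit > 0 AND c.credit > 0
    GROUP BY d.customer ORDER BY d.customer
""").fetchall()
```

John's payment lands 47 days after the debit and Petron's 46 days after, so both amounts fall into the 31-60 bucket, matching the expected listing.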
``` SELECT t.customer, SUM(c1.credit) AS `0-30`, SUM(c2.credit) AS `31-60`, FROM table AS t LEFT JOIN table AS c1 ON c1.debit = t.credit AND DATEDIFF(t.trans_date, c1.trans_date) <= 30 LEFT JOIN table AS c2 ON c2.debit = c2.credit AND DATEDIFF(t.trans_date, c2.trans_date) > 30 DATEDIFF(t.trans_date, c2.trans_date) <= 60 ``` Take a look at the [DATEDIFF](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_datediff) function. You JOIN to the transactions paid within 30 days and to the ones paid within 31-60 days.
SQL - Compare Dates in the same Column
[ "", "mysql", "sql", "" ]
I am trying to concatenate multiple columns in a query in SQL Server 11.00.3393. I tried the new function `CONCAT()` but it's not working when I use more than two columns. So I wonder if that's the best way to solve the problem: ``` SELECT CONCAT(CONCAT(CONCAT(COLUMN1,COLUMN2),COLUMN3),COLUMN4) FROM myTable ``` I can't use `COLUMN1 + COLUMN2` because of `NULL` values. **EDIT** If I try `SELECT CONCAT('1','2','3') AS RESULT` I get an error > The CONCAT function requires 2 argument(s)
Through discourse it's clear that the problem lies in using VS2010 to write the query, as it uses the canonical `CONCAT()` function which is limited to 2 parameters. There's probably a way to change that, but I'm not aware of it. An alternative: ``` SELECT '1'+'2'+'3' ``` This approach requires non-string values to be cast/converted to strings, as well as `NULL` handling via `ISNULL()` or `COALESCE()`: ``` SELECT ISNULL(CAST(Col1 AS VARCHAR(50)),'') + COALESCE(CONVERT(VARCHAR(50),Col2),'') ```
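The NULL behaviour that rules out plain `+` can be seen in miniature with SQLite's `||` operator, which propagates NULL the same way (made-up data; `COALESCE` plays the role that `CONCAT()` or the `ISNULL()` wrappers play in T-SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(col1 TEXT, col2 TEXT, col3 TEXT)")
con.execute("INSERT INTO t VALUES ('a', NULL, 'c')")

# Plain concatenation (|| here, + in T-SQL) propagates the NULL ...
naive = con.execute("SELECT col1 || col2 || col3 FROM t").fetchone()[0]

# ... while COALESCE-wrapping each operand substitutes '', which is the
# effect SQL Server's CONCAT() gives for any number of arguments.
safe = con.execute(
    "SELECT COALESCE(col1,'') || COALESCE(col2,'') || COALESCE(col3,'') "
    "FROM t"
).fetchone()[0]
```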
``` SELECT CONCAT(LOWER(LAST_NAME), UPPER(LAST_NAME) INITCAP(LAST_NAME), HIRE DATE AS ‘up_low_init_hdate’) FROM EMPLOYEES WHERE HIRE DATE = 1995 ```
SQL Server: Best way to concatenate multiple columns?
[ "", "sql", "sql-server-2012", "" ]
Having Table1 ``` id | productname | store | price ----------------------------------- 1 | name a | store 1 | 4 2 | name a | store 2 | 3 3 | name b | store 3 | 6 4 | name a | store 3 | 4 5 | name b | store 1 | 7 6 | name a | store 4 | 5 7 | name c | store 3 | 2 8 | name b | store 6 | 5 9 | name c | store 2 | 1 ``` I need to get all columns but only the rows with the lowest price. Result needed: ``` id | productname | store | price ----------------------------------- 2 | name a | store 2 | 3 8 | name b | store 6 | 5 9 | name c | store 2 | 1 ``` My best try is: ``` SELECT ProductName, MIN(Price) AS minPrice FROM Table1 GROUP BY ProductName ``` But then I need the `ID` and `STORE` for each row.
Try this ``` select p.* from Table1 as p inner join (SELECT ProductName, MIN(Price) AS minPrice FROM Table1 GROUP BY ProductName) t on p.productname = t.ProductName and p.price = t.minPrice ```
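The join-to-aggregate approach above runs unchanged in SQLite; a quick check against the question's data (via Python's bundled `sqlite3`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(id INTEGER, productname TEXT, store TEXT, price INTEGER);
    INSERT INTO t VALUES
        (1,'name a','store 1',4), (2,'name a','store 2',3),
        (3,'name b','store 3',6), (4,'name a','store 3',4),
        (5,'name b','store 1',7), (6,'name a','store 4',5),
        (7,'name c','store 3',2), (8,'name b','store 6',5),
        (9,'name c','store 2',1);
""")

# Join each row back to its product's minimum price: only the cheapest
# row per product survives, keeping its id and store.
rows = con.execute("""
    SELECT p.id, p.productname, p.store, p.price
    FROM t p
    JOIN (SELECT productname, MIN(price) AS minprice
          FROM t GROUP BY productname) m
      ON p.productname = m.productname AND p.price = m.minprice
    ORDER BY p.productname
""").fetchall()
```

Note that if two stores tie on the minimum price for a product, both rows come back; the window-function answer below has the same property with RANK.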
You didn't mention your SQL dialect, but most DBMSes support Standard SQL's "Windowed Aggregate Functions": ``` select * from ( select t.*, RANK() OVER (PARTITION BY ProductName ORDER BY Price) as rnk from table1 as t ) as dt where rnk = 1 ``` If multiple stores got the same lowest price all of them will be returned. If you want only a single shop you have to switch to ROW\_NUMBER instead of RANK or add column(s) to the ORDER BY.
Select columns with and without group by
[ "", "sql", "" ]
I'm using Hortonworks' Hue (a GUI that ties HDFS, Hive, and Pig together) and I want to load data that is already in HDFS into my newly created table. Suppose the table's name is "test", and the file which contains the data is at the path /user/hdfs/test/test.txt. But I'm unable to load the data into the table. I tried: ``` load data local inpath '/user/hdfs/test/test.txt' into table test ``` But there's an error saying it can't find the file, no matching path. I'm still confused. Any suggestions? Thanks
As you said, you want to "**load the data within the hdfs into my current created table**". But in your command you are using: load data **local** inpath '/user/hdfs/test/test.txt' into table test With the **local** keyword it looks for the file in your local filesystem, but your file is in **HDFS**. I think you need to remove the **local** keyword from your command. Hope it helps...!!!
Since you are using the hue and the output is showing not matching path. I think you have to give the complete path. for example: load data local inpath '/home/cloudera/hive/Documents/info.csv' into table tablename; same as you can give the complete path where the hdfs in which the document resides. You can use any other format file
How to load data into Hive table
[ "", "sql", "hive", "" ]
I have a `Dossiers` table with a `print_flag` column and I want to set `print_flag`=1 for multiple rows. ``` UPDATE dossiers SET print_flag = 1 WHERE id=1013997,id=1020799,id=1020800,id=1020800; ``` What should the SQL look like?
You should use the IN clause as it'll allow you to use multiple values for a single column. ``` UPDATE dossiers SET print_flag = 1 WHERE id IN(1013997, 1020799, 1020800); ```
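When the id list arrives from application code, the same `IN` pattern is usually built with bound placeholders rather than string interpolation; a small Python `sqlite3` sketch with invented table contents:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dossiers(id INTEGER PRIMARY KEY, print_flag INTEGER);
    INSERT INTO dossiers VALUES (1013997,0),(1020799,0),(1020800,0),(1,0);
""")

ids = [1013997, 1020799, 1020800]
placeholders = ",".join("?" for _ in ids)  # builds "?,?,?"
con.execute(
    f"UPDATE dossiers SET print_flag = 1 WHERE id IN ({placeholders})", ids
)

# Collect id -> print_flag to confirm only the listed rows changed.
flags = dict(con.execute("SELECT id, print_flag FROM dossiers"))
```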
You need `IN` clause as: ``` UPDATE dossiers SET print_flag = 1 WHERE id IN (1013997,1020799,1020800,1020800); ```
SQL update multiple rows based on multiple where conditions
[ "", "sql", "sql-server", "" ]
Been scratching my head for a while on this one and despite trying many variations I cannot see the mistake. After writing the app file, which contains what looks like the correct DataMapper.setup code for using PostgreSQL (?), and upon trying to play around in IRB/PRY, I just get a 'FATAL database not created' message even after I have called 'Song.auto\_migrate!'. Here is my code, can anyone help me get past this? Thanks in advance: ``` require 'data_mapper' require 'dm-core' #main DataMapper gem require 'dm-migrations' #extra DataMapper functionality extension DataMapper.setup(:default, "postgres://localhost/development") class Song include DataMapper::Resource property :id, Serial property :title, String property :lyrics, Text property :length, Integer property :released_on, Date end DataMapper.finalize ``` I require the file fine in irb, then call Song.auto\_migrate! and it raises the 'database does not exist' error. What am I doing wrong?
You need to do this at the command line: ``` psql ``` and then ``` CREATE DATABASE development; ``` before even trying to run the DataMapper setup code.
Perhaps you are missing this line: DataMapper.auto\_upgrade!
PostgreSQL & DataMapper issues
[ "", "sql", "sqlite", "postgresql", "sinatra", "" ]
I am experiencing an interesting problem with a NULL value in the following table: ``` CREATE TABLE dbo.STAFF ( staffID int PRIMARY KEY NOT NULL, lastname varchar(50) NULL, firstname varchar(50) NULL, pay money NULL, supervisorID int NULL ) INSERT INTO STAFF (staffID, lastname, firstname, pay, supervisorID) VALUES (1, 'Smith', 'John', 60000, NULL), (2,'Jones', 'Bridget', 13000, 1), (3,'Robinson', 'Smokey', 14000, 1), (4, 'Vedder', 'Eddie', 13000, 3), (5, 'Almighty', 'Bruce', 12000, 2), (6, 'Addington', 'Jane', 11000, 2), (7, 'Hogan', 'Hulk', 10000, 3), (8, 'Jackson', 'Jacky', 15000, 2), (9, 'Samuelson', 'Sammy', 14400, 1) ; ``` When I run this query the first row isn't returned, and it seems it is being ignored because of the null value. Why does this not work? I assume it is something to do with the NULL value in a CASE statement within a self-join? I am really interested as to why this won't work. ``` SELECT (a.firstname + ' ' + a.lastname) AS "Employee Name", CASE WHEN a.supervisorID IS NOT NULL THEN (b.firstname + ' ' + b.lastname) ELSE 'No supervisor' END AS "supervisor Name" FROM dbo.STAFF a, dbo.STAFF b WHERE a.supervisorID = b.staffID; ``` ![query output](https://i.stack.imgur.com/SCdqt.jpg)
You have to use a LEFT JOIN. If a join type isn't given, INNER JOIN is the default.
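The difference is easy to reproduce with a trimmed-down staff table (a sketch using Python's bundled `sqlite3`; the T-SQL string concatenation is dropped since only the join behaviour matters here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE staff(staffID INTEGER PRIMARY KEY, firstname TEXT,
                       supervisorID INTEGER);
    INSERT INTO staff VALUES (1,'John',NULL), (2,'Bridget',1), (3,'Smokey',1);
""")

# Comma-join with a WHERE equality is an INNER join: John's NULL
# supervisorID never equals any staffID, so his row vanishes.
inner = con.execute("""
    SELECT a.firstname FROM staff a, staff b
    WHERE a.supervisorID = b.staffID ORDER BY a.staffID
""").fetchall()

# A LEFT JOIN keeps the unmatched row, and the CASE can then fill in text.
left = con.execute("""
    SELECT a.firstname,
           CASE WHEN a.supervisorID IS NOT NULL THEN b.firstname
                ELSE 'No supervisor' END
    FROM staff a LEFT JOIN staff b ON a.supervisorID = b.staffID
    ORDER BY a.staffID
""").fetchall()
```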
You can try this: ``` SELECT (a.firstname + ' ' + a.lastname) AS "Employee Name", CASE WHEN a.supervisorID IS NOT NULL THEN (b.firstname + ' ' + b.lastname) ELSE 'No supervisor' END AS "supervisor Name" FROM dbo.STAFF a LEFT JOIN dbo.STAFF b WHERE a supervisorID= b.staffID; ```
SQL Server: CASE ignoring rows with NULL values within SELF JOIN
[ "", "sql", "sql-server", "" ]
I am new to Netezza. I want to write a SQL statement that returns the rows on either side of a given date: for example, return the rows after 2014-06-10, or return the rows before a certain date such as 2014-03-10. How can I do that? The attribute in the where clause is a timestamp. I assume it is a simple process; however, I cannot find how. Thank you in advance, guys!
You can compare the date or timestamp field to a date created with an explicit call to the to\_date function, or you can simply compare it to a character literal of the date in a format that the compiler will recognize and implicitly cast to a date. For example: ``` TESTDB.ADMIN(ADMIN)=> select * from date_test; COL1 --------------------- 2014-06-23 00:00:00 2014-06-22 00:00:00 2014-06-24 20:44:51 (3 rows) TESTDB.ADMIN(ADMIN)=> select * from date_test where col1 < '2014-06-23'; COL1 --------------------- 2014-06-22 00:00:00 (1 row) TESTDB.ADMIN(ADMIN)=> select * from date_test where col1 < to_date('2014-06-23', 'YYYY-MM-DD'); COL1 --------------------- 2014-06-22 00:00:00 (1 row) ```
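Both forms in the answer reduce to an ordinary comparison. A quick sketch (Python's `sqlite3` standing in for Netezza, values taken from the session above) shows ISO-formatted timestamp strings comparing correctly even as plain literals, since ISO-8601 text sorts in date order; on Netezza the same literals are implicitly cast to real dates:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE date_test(col1 TEXT)")
con.executemany("INSERT INTO date_test VALUES (?)",
                [("2014-06-23 00:00:00",), ("2014-06-22 00:00:00",),
                 ("2014-06-24 20:44:51",)])

# "before a date" and "on or after a date" are just < and >= comparisons.
before = con.execute(
    "SELECT col1 FROM date_test WHERE col1 < '2014-06-23' ORDER BY col1"
).fetchall()
after = con.execute(
    "SELECT col1 FROM date_test WHERE col1 >= '2014-06-10' ORDER BY col1"
).fetchall()
```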
Netezza has built-in function for this by simply using: ``` SELECT DATE(STATUS_DATE) AS DATE, COUNT(*) AS NUMBER_OF_ FROM X GROUP BY DATE(STATUS_DATE) ORDER BY DATE(STATUS_DATE) ASC ``` This will return just the date portion of the timetamp and much more useful than casting it to a string with "TO\_CHAR()" because it will work in GROUP BY, HAVING, and with other netezza date functions. (Where as the TO\_CHAR method will not) Also, the DATE\_TRUNC() function will pull a specific value out of Timestamp ('Day', 'Month, 'Year', etc..) but not more than one of these without multiple functions and concatenate. DATE() is the perfect and simple answer to this and I am surprised to see so many misleading answers to this question on Stack. I see TO\_DATE a lot, which is Oracle's function for this but will not work on Netezza. Also, above the TO\_DATE function is Oracle's version of this but will not work on Netezza.
How to compare dates in Netezza?
[ "", "sql", "netezza", "" ]
I have two tables in my database. One is a list of drivers and one is a list of violations. Each violation has a status, either yellow or red, and I need to count the number of yellow violations and red violations that a particular driver has and ORDER BY either red or yellow ASC or DESC. Drivers Table: AutoID (primary key and foreign key to violations table driverID) clientID employerID Violations Table: violationID (primary key) status driverID (foreign key to autoID in Drivers table) I ran the query below to get all the red or yellow violations ``` SELECT d.autoID, d.clientID, v.status FROM edsp_drivers d, edsp_violations v WHERE d.employerID='000000028' AND d.autoID=v.driverID AND (v.status = 'yellow' OR v.status='red') ORDER BY d.autoID ASC ``` Results as follows: ``` autoID clientID status 000000206 000000015 Yellow 000000206 000000015 Red 000000206 000000015 Yellow 000000206 000000015 Yellow 000000206 000000015 Yellow 000000206 000000015 Yellow 000000206 000000015 Yellow 000000206 000000015 Yellow 000000206 000000015 Yellow 000000206 000000015 Yellow 000000207 000000015 Yellow 000000207 000000015 Yellow 000000367 000000015 Yellow 000000367 000000015 Yellow 000000367 000000015 Red 000000368 000000015 Red 000000369 000000015 Yellow 000000369 000000015 Yellow 000000369 000000015 Red 000000369 000000015 Yellow 000000369 000000015 Yellow 000000369 000000015 Yellow 000000398 000000015 Yellow 000000398 000000015 Yellow ``` What I need to do is count all the yellow or red alerts grouped by driver and ORDER BY red/yellow DESC - this is where I am stuck. Hopefully someone can help here?
Sounds like you want to group by driver and violation color: ``` SELECT d.autoID, v.status, COUNT(v.status) FROM edsp_drivers d LEFT JOIN edsp_violations v ON d.autoID = v.driverID AND (v.status = 'yellow' OR v.status='red') WHERE d.employerID = '000000028' GROUP BY d.autoID, v.status ORDER BY v.status ASC, COUNT(v.status) DESC ```
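An alternative to grouping by status is one row per driver with a column per colour, via conditional aggregation (the `SUM(CASE ...)` idea from the other answer). A sketch with a trimmed, invented subset of the data, using Python's `sqlite3`; note that `SUM` over a bare comparison is SQLite shorthand for `SUM(CASE WHEN ... THEN 1 ELSE 0 END)`, which T-SQL and MySQL would spell out:

```python
import sqlite3

# Trimmed, invented subset of the result set in the question.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE violations(driverID TEXT, status TEXT);
    INSERT INTO violations VALUES
        ('206','Yellow'), ('206','Red'), ('206','Yellow'),
        ('207','Yellow'), ('367','Red'), ('367','Yellow');
""")

# One row per driver, one count per colour, sortable by either count.
rows = con.execute("""
    SELECT driverID,
           SUM(status = 'Yellow') AS yellow_count,
           SUM(status = 'Red')    AS red_count
    FROM violations
    GROUP BY driverID
    ORDER BY red_count DESC, yellow_count DESC
""").fetchall()
```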
Try: ``` SELECT d.autoID , SUM( CASE WHEN v.status = 'yellow' THEN 1 ELSE 0 END) as Yellow_Count , SUM( CASE WHEN v.status = 'red' THEN 1 ELSE 0 END) as Red_Count FROM edsp_drivers d, edsp_violations v WHERE d.employerID='000000028' AND d.autoID=v.driverID AND (v.status = 'yellow' OR v.status='red') GROUP BY d.autoID ORDER BY d.autoID ASC; ```
SQL Query to COUNT all rows with duplicate values broken down by primary key
[ "", "mysql", "sql", "count", "" ]
I have 3 models: ``` class Student < ActiveRecord::Base has_many :student_enrollments, dependent: :destroy has_many :courses, through: :student_enrollments end class Course < ActiveRecord::Base has_many :student_enrollments, dependent: :destroy has_many :students, through: :student_enrollments end class StudentEnrollment < ActiveRecord::Base belongs_to :student belongs_to :course end ``` I wish to query for the list of courses in the Courses table that have no StudentEnrollments rows associated with a certain student. I found that perhaps Left Join is the way to go, but it seems that joins() in Rails only accepts a table as argument. The SQL query that I think would do what I want is: ``` SELECT * FROM Courses c LEFT JOIN StudentEnrollment se ON c.id = se.course_id WHERE se.id IS NULL AND se.student_id = <SOME_STUDENT_ID_VALUE> and c.active = true ``` How do I execute this query the Rails 4 way? Any input is appreciated.
You can pass a string that is the join-sql too. eg `joins("LEFT JOIN StudentEnrollment se ON c.id = se.course_id")` Though I'd use rails-standard table naming for clarity: ``` joins("LEFT JOIN student_enrollments ON courses.id = student_enrollments.course_id") ```
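For readers who want to sanity-check the anti-join itself, here is a sketch of the generated SQL run against SQLite via Python (made-up seed data). One caveat worth noting: the student filter has to live in the ON clause, because putting it in the WHERE clause would discard exactly the NULL rows the anti-join is looking for:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE courses(id INTEGER PRIMARY KEY, name TEXT, active INTEGER);
    CREATE TABLE student_enrollments(id INTEGER PRIMARY KEY,
        student_id INTEGER, course_id INTEGER);
    INSERT INTO courses VALUES (1,'Maths',1), (2,'History',1), (3,'Art',0);
    INSERT INTO student_enrollments VALUES (1, 42, 1);
""")

student_id = 42
# Active courses with no enrollment row for this student: the LEFT JOIN
# leaves se.id NULL exactly where no match exists.
rows = con.execute("""
    SELECT c.name
    FROM courses c
    LEFT JOIN student_enrollments se
      ON c.id = se.course_id AND se.student_id = ?
    WHERE se.id IS NULL AND c.active = 1
""", (student_id,)).fetchall()
```

Student 42 is enrolled in Maths and Art is inactive, so only History qualifies.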
If anyone came here looking for a generic way to do a left outer join in Rails 5, you can use the [`#left_outer_joins`](http://guides.rubyonrails.org/active_record_querying.html#left-outer-joins) function. Multi-join example: Ruby: ``` Source. select('sources.id', 'count(metrics.id)'). left_outer_joins(:metrics). joins(:port). where('ports.auto_delete = ?', true). group('sources.id'). having('count(metrics.id) = 0'). all ``` SQL: ``` SELECT sources.id, count(metrics.id) FROM "sources" INNER JOIN "ports" ON "ports"."id" = "sources"."port_id" LEFT OUTER JOIN "metrics" ON "metrics"."source_id" = "sources"."id" WHERE (ports.auto_delete = 't') GROUP BY sources.id HAVING (count(metrics.id) = 0) ORDER BY "sources"."id" ASC ```
LEFT OUTER JOIN in Rails 4
[ "", "sql", "ruby-on-rails", "ruby-on-rails-4", "rails-activerecord", "mysql2", "" ]
I have a table set up that tracks changes to a user's account. It has `ID`, `UserAccountNo`, `OldVal`, `NewVal`, `ChangeColumnName` columns. I have a query set up similar to this: ``` Select case when ChangeColumnName = 'Address1' then NewVal else '' end as Address1, when ChangeColumnName = 'Address2' then NewVal else '' end as Address2, when ChangeColumnName = 'City' then NewVal else '' end as City, when ChangeColumnName = 'State' then NewVal else '' end as State, when ChangeColumnName = 'Zip' then NewVal else '' end as Zip, when ChangeColumnName = 'Phone' then NewVal else '' end as Phone from table Where (Conditions) ``` If someone changes the city, state, and zip, there are 3 entries in the table. When I run this query, I get 3 rows returned. I would like to get them all together in one row, and haven't been able to figure out how. When I tried using groupby with max(colname) as suggested in other posts, it gives the max NewVal value, so I end up with email addresses in Phone columns. Is this possible to do in SQL 2008 without reforming the entire table?
Try this ``` create table #t ( id int, userAccountNo int, oldVal varchar(255), newVal varchar(255), changeColName varchar(255) ); insert #t values (1, 1, '123 main st', '123 s. main st.', 'Address1'), (2, 1, 'Springville', 'Springfield', 'City'), (3, 1, 'Springfield', 'N. Springfield', 'City'), (4, 2, '12345', '12346', 'Zip'); with U as (select distinct userAccountNo from #t), Address1 as (select userAccountNo, newVal from #t as T1 where changeColName = 'Address1' and id >=ALL (select id from #t as T2 where T1.userAccountNo = T2.userAccountNo and T1.changeColName = T2.changeColName)), City as (select userAccountNo, newVal from #t as T1 where changeColName = 'City' and id >=ALL (select id from #t as T2 where T1.userAccountNo = T2.userAccountNo and T1.changeColName = T2.changeColName)), Zip as (select userAccountNo, newVal from #t as T1 where changeColName = 'Zip' and id >=ALL (select id from #t as T2 where T1.userAccountNo = T2.userAccountNo and T1.changeColName = T2.changeColName)) select U.userAccountNo, A1.newVal as [Address1], C.newVal as [City], Z.newVal as [Zip] from U full outer join Address1 as A1 on U.userAccountNo = A1.userAccountNo full outer join City as C on U.userAccountNo = C.userAccountNo full outer join Zip as Z on U.userAccountNo = Z.userAccountNo; ``` And if it seems to work it can be extended to cover all of your columns.
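The core of the approach above (keep only each account/column's latest change, then pivot the survivors) can also be sketched with conditional aggregation. This SQLite version, runnable via Python, uses shortened invented column names and a correlated subquery in place of the `>= ALL` comparisons:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE changes(id INTEGER, acct INTEGER, newVal TEXT, col TEXT);
    INSERT INTO changes VALUES
        (1, 1, '123 s. main st.', 'Address1'),
        (2, 1, 'Springfield',     'City'),
        (3, 1, 'N. Springfield',  'City'),
        (4, 2, '12346',           'Zip');
""")

# For each account and column keep only the highest id (the latest
# change), then pivot with one MAX(CASE ...) per tracked column. Picking
# the latest row by id is what stops values landing in the wrong columns.
rows = con.execute("""
    SELECT acct,
           MAX(CASE WHEN col = 'Address1' THEN newVal END) AS address1,
           MAX(CASE WHEN col = 'City'     THEN newVal END) AS city,
           MAX(CASE WHEN col = 'Zip'      THEN newVal END) AS zip
    FROM changes c
    WHERE id = (SELECT MAX(id) FROM changes
                WHERE acct = c.acct AND col = c.col)
    GROUP BY acct ORDER BY acct
""").fetchall()
```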
I suggest that you use pivot command, use this script and let me know : ``` IF OBJECT_ID('_temp') IS NOT NULL DROP TABLE _temp SELECT * INTO _temp FROM ( Select 'PostalCode' AS ChangeColumnName, '95100' AS NewValue UNION ALL Select 'City' AS ChangeColumnName, 'Argenteuil' AS NewValue UNION ALL Select 'LastName' AS ChangeColumnName, 'DAOUI' AS NewValue UNION ALL Select 'FirstName' AS ChangeColumnName, 'Youssef' AS NewValue UNION ALL Select 'Phone Number' AS ChangeColumnName, '00212 6 60 93 36 12' AS NewValue ) AS Temp DECLARE @v_ListeColonnes VARCHAR(MAX) = '' ,@v_sql VARCHAR(MAX) = '' SELECT @v_ListeColonnes = @v_ListeColonnes + ',' + QUOTENAME(ChangeColumnName) FROM _temp IF LEN(@v_ListeColonnes) > 1 BEGIN SELECT @v_ListeColonnes = RIGHT(@v_ListeColonnes, LEN(@v_ListeColonnes)-1) SET @v_sql = 'SELECT '+CHAR(13) +' ' + @v_ListeColonnes + ' '+CHAR(13) +'FROM _temp '+CHAR(13) +'PIVOT (MAX(NewValue) '+CHAR(13) +' FOR ChangeColumnName in(' + @v_ListeColonnes + ')) as pvt '+CHAR(13) EXEC(@v_sql) END IF OBJECT_ID('_temp') IS NOT NULL DROP TABLE _temp ``` I hope this will help you.
SQL Merge Data into Single Row
[ "", "sql", "sql-server", "sql-server-2008", "" ]
The query should output a certain list of items, along with info like store information and manager info. It uses a cursor to flip through a list of various levels of management, selects the relevant information, then emails that person what the query returned for their district/region/store. My issue is with the SSIS leg of the journey. Although the code acts like it runs, if I run Itemdata.dtsx separately (so as to see errors), it throws me the error: "Warning: The external columns for component "Sources-ItemData"(1) are out of sync with the data source columns. The external column "RM\_Email"(49) needs to be removed from the external columns. The external column "SM\_Email"(46) needs to be removed from the external columns. The external column "DM\_Email"(43) needs to be removed from the external columns." This results in SQL Server Management Studio saying it ran, but the emails' contents are nothing but the table headers; no data, and the table headers don't change regardless of what I do. I have eliminated these columns from any mention in my source code (posted below) and in the table it uses. What am I missing?
``` BEGIN SET NOCOUNT ON; DECLARE @ProcedureName varchar(255) DECLARE @ExportFolder varchar(255) DECLARE @TempFolder varchar(255) DECLARE @WarningLevel varchar(255) = 'log' DECLARE @MsgDescription varchar(2000) = '' DECLARE @RecordCount int = 0 DECLARE @ReportDate date = getdate() DECLARE @Begdate date = convert(date,(dateadd(month,-1,getdate()))) DECLARE @Enddate date = convert(date,(dateadd(day,-1,getdate()))) DECLARE @Siteid int DECLARE @Subject varchar(75) = '' DECLARE @Body varchar(50) = '' DECLARE @RMLastName varchar(25) DECLARE @RMFirstName varchar(25) DECLARE @RegionKey int DECLARE @DistrictKey int DECLARE @Email varchar(50) BEGIN TRY --Table used as data source for each pass Truncate Table Example.dbo.itemdata --Send reports to Regional Managers by building Cursor RMListCursor, --then running SELECT statement against each name (using @RMLastName and @RMFirstName to discern), --then emails results of SELECT statement to that Regional Manager. --Goes through CursorList, then ends. BEGIN --Set cursor for RM Email; returns all regional managers. DECLARE RMListCursor CURSOR FOR SELECT distinct t.lastname, t.firstname, t.Email FROM Example.[dbo].[tblUser] t JOIN example.dbo.vStoreDistrictRegionActive vs ON t.LastName = vs.RMLastName and t.FirstName = vs.RMFirstName ORDER BY LastName OPEN RMListCursor FETCH NEXT FROM RMListCursor INTO @RMLastName , @RMFirstName , @Email WHILE @@FETCH_STATUS = 0--(@SetInt < 6) BEGIN Truncate table Example.dbo.itemdata --Builds data, then inserts into Table built for this query. Note that there is no mention of DM_EMAIL, RM_EMAIL, or SM_EMail anywhere. INSERT INTO Example.dbo.itemdata SELECT InvoiceNumber, shipFROMid, ad.SiteId, MfrCode, PartCode, UnitNetCore, ad.QuantityShipped, ShipDate, --First/Last Name of this item's store's District Manager. 
rtrim(substring((SELECT ISNULL(DMfirstName,'') FROM example.dbo.vSiteRegionDistrictActiveV2 dm WHERE ad.siteid = dm.SiteNumber),1,30)) + ' ' + substring((SELECT ISNULL(DMLastName,'') FROM example.dbo.vSiteRegionDistrictActiveV2 dm WHERE ad.siteid = dm.SiteNumber),1,30) DM --This is where DM_EMAIL, RM_EMAIL, and SM_EMail originally were before they were removed from both here and .ItemData. FROM example.dbo.vInvoiceHeaderDetail_Adis ad join example.dbo.Site ss on ad.SiteId=ss.siteid join example.dbo.vStoreDistrictRegionActive vs on ad.SiteId = vs.SiteId WHERE ad.siteid is not null and UnitNetCore>=250 and SUBSTRING(InvoiceNumber,2,1)='D' and QuantityShipped>0 and isactive=1 and isowned=1 and ShipDate between @Begdate and @Enddate and vs.RMFirstName = @RMFirstName and vs.RMLastname = @RMLastName ORDER BY ad.SiteId,ShipFROMID,shipdate -- Execute SSIS package which downloads table to d: for email. set @RecordCount=@@ROWCOUNT --Quick check so that if the results were blank, don't bother sending a blank email. IF @RecordCount<>0 BEGIN set @Subject = 'Cores billed from PWI >= $250 ' + cast(CONVERT(date,GETDATE()) as varchar(12)) set @Body = 'Run date/time- ' + cast(GETDATE() as CHAR(20)) EXEC xp_cmdshell 'd:\"Program Files (x86)"\"Microsoft SQL Server"\100\DTS\Binn\DTexec.exe /f "D:\etl\bulk\ssis\Misc\ItemInfo.dtsx"' EXEC msdb.dbo.sp_send_dbmail @profile_name ='SQL Mail', @recipients ='test', --@email @subject = @Subject, @body = @Body, @body_format = 'HTML', @File_attachments = 'D:\export\temp\ItemInfo.xls', @attach_query_result_as_file =0, @query_attachment_filename='\ItemInfo.xls', @query_result_width = 500 END --Brings CURSOR back up to next name on List, repeats process. 
FETCH NEXT FROM RMListCursor INTO @RMLastName , @RMFirstName , @Email END END CLOSE RMListCursor DEALLOCATE RMListCursor END TRY BEGIN CATCH SET @WarningLevel = 'error' SET @MsgDescription = 'SQL Err= [' + CAST(ERROR_MESSAGE() as varchar(200)) + ' (' + CAST(ERROR_LINE() as varchar) + ')]' EXEC example.dbo.spAddSysMessage 'Store Aging', @WarningLevel , @ProcedureName , '', 'EOM Store Aging Report', @RecordCount END CATCH END ```
Recheck the column assignments between the source, the destination, and any in-between components as well. Give it a try by setting the data flow component's ValidateExternalMetadata property to False. Please have a look at [this](https://stackoverflow.com/questions/8656287/the-external-column-y-col1-598-needs-to-be-removed-from-the-external-columns) as well.
It seems like you were changing the OLEDB Source or other type of source recently. Thus, you must delete OLEDB Source and create a new one again. Then also delete the mapping of particular field, save, go back to mapping again and map it back. Than it should work fine.
SSIS Error: External Column for Source out of sync with Data Source columns; How do I remove External Columns?
[ "", "sql", "sql-server", "t-sql", "ssis", "" ]
I have the following query to find the rank ``` SELECT a.appName,RANK() OVER (PARTITION BY a.appName, b.depName ORDER BY a.appName) as RANK,b.depName,Count(distinct a.userName) as visitors FROM tbl_app_webstats a inner join tbl_users b on b.userName = a.userName where a.date_time between '01-JUN-12' and '20-JUN-14' and a.appName like '%' group by a.appName,b.depName order by a.appName; ``` but it is giving me an incorrect result ``` appName RANK depName visitors app1 1 dep1 1 app1 1 dep2 1 app1 1 dep3 2 app2 1 dep1 3 app2 1 dep2 1 app2 1 dep3 5 app3 1 dep2 2 app3 1 dep5 8 app4 1 dep1 2 app4 1 dep5 13 app5 1 dep5 2 ``` The result should be like this ``` appName RANK depName visitors app1 1 dep1 1 app1 2 dep2 1 app1 3 dep3 2 app2 1 dep1 3 app2 2 dep2 1 app2 3 dep3 5 app3 1 dep2 2 app3 2 dep5 8 app4 1 dep1 2 app4 2 dep5 13 app5 1 dep5 2 ``` Any idea what might be wrong here?
Try below query: ``` SELECT a.appName, RANK() OVER (PARTITION BY a.appName ORDER BY b.depName) as RANK, b.depName,Count(distinct a.userName) as visitors FROM tbl_app_webstats a inner join tbl_users b on b.userName = a.userName where a.date_time between '01-JUN-12' and '20-JUN-14' and a.appName like '%' group by a.appName,b.depName order by a.appName; ``` **EDIT:** To get top 10 ranks ``` SELECT * FROM ( SELECT a.appName, RANK() OVER (PARTITION BY a.appName ORDER BY b.depName) as RANK, b.depName,Count(distinct a.userName) as visitors FROM tbl_app_webstats a inner join tbl_users b on b.userName = a.userName where a.date_time between '01-JUN-12' and '20-JUN-14' and a.appName like '%' group by a.appName,b.depName order by a.appName ) WHERE RANK <=10 order by appName; ```
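The fix both answers converge on (partition by appName only, order by depName) can be checked with a small SQLite sketch run from Python; window functions need SQLite 3.25+, which recent Pythons bundle, and the data here is trimmed from the question. Partitioning by both columns, as in the original query, makes every partition a single row, so every rank comes out 1:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
con.executescript("""
    CREATE TABLE visits(appName TEXT, depName TEXT, visitors INTEGER);
    INSERT INTO visits VALUES
        ('app1','dep1',1), ('app1','dep2',1), ('app1','dep3',2),
        ('app2','dep1',3), ('app2','dep2',1);
""")

# Partition by appName only; the rank then counts up within each app.
rows = con.execute("""
    SELECT appName,
           RANK() OVER (PARTITION BY appName ORDER BY depName) AS rnk,
           depName
    FROM visits ORDER BY appName, depName
""").fetchall()
```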
Maybe something like this: ``` SELECT a.appName,RANK() OVER (PARTITION BY a.appName ORDER BY b.depName) ...... ``` You could also do this: ``` SELECT a.appName,ROW_NUMBER() OVER(PARTITION BY a.appName ORDER BY b.depName) ...... ``` To address your comment, you could do something like this: ``` WITH CTE AS ( SELECT RANK() OVER(PARTITION BY appName ORDER BY depName) AS Rank .... ) SELECT * FROM CTE WHERE CTE.Rank<=10; ```
Finding rank with two columns in Oracle
[ "sql", "oracle" ]
I've a very big query, that in the end I've to ORDER BY some criterias. The main table of the query has about 1 500 000 rows, and I've a lot of JOIN statements. When I run the query it takes more than 9 minutes, when a run the EXPLAIN SELECT, it was clear it is the "ORDER BY" Clause, that takes a lot of time. The columns that are after the ORDER BY statement are indexed with btree, and are BIGINT. Can anyone suggest what can I do to make the execution time less? Below is a part of the EXPLAIN ANALYZE SELECT... (it takes more than 9 minutes to complete) The Query: ``` explain ANALYSE select aa2.aid as cid ,aa2.id ,aa2.ty ,coalesce(sign.bb,'') as tyy ,aa2.rd ,aa2.dt ,aa2.appliedon ,aa2.approvedon ,aa2.pprec ,aa2.ppret ,aa2.cdate ,pm.pmd,tblapplic.dob ,coalesce(tblapplic.firstname ,'') as fname ,coalesce(tblapplic.middlename ,'') as mname ,coalesce(tblapplic.surname ,'') as sname ,coalesce(Upper(dcou.descr),'') as dcouu ,coalesce(aa2.visaid,0) as visaid ,coalesce(tblcountry.descr,'') as visaname ,(SELECT chr(9) || array_to_string(ARRAY(SELECT descr FROM tblcountry WHERE dcountry=travdets.dcountry ORDER BY descr),chr(9))) as visanamecombo ,coalesce(aa2.stype,0) as stypeid ,coalesce(stype.descr,'') as stype ,coalesce(aa2.sts,0) as appStatusid ,coalesce(sta.descr,'') as appStatus ,lofficesuperuser.descr as lofficesuperuser,coalesce(aa2.loffice,0) as lofficeid ,coalesce(lofficesuperuser.descr,coalesce(loffice.descr,'')) as loffice ,coalesce(pm.id,0) as pmid ,(SELECT chr(9) || array_to_string(ARRAY(SELECT descr FROM stype WHERE typee=aa2.ty and visaa=coalesce(aa2.visaid,0) ORDER BY descr),chr(9))) as sTypeCombo ,(SELECT chr(9) || array_to_string(ARRAY(SELECT descr FROM sta WHERE typee=aa2.ty ORDER BY descr),chr(9))) as appStatusCombo ,coalesce(aa2.colltype,0) as colltypeid ,coalesce(tblcolltype.descr,'') as colltype,aa2.embcolldt as embcolldt ,aa2.clcolldt as clbcolldt,travdets.leavingdt as leaving,coalesce(aa2.pptrnum,'') as ppTrackNumber ,coalesce(aa2.rt,0) as 
regtypeid,coalesce(tbllist1.descr,'') as regtype ,coalesce(aa2.reff,0) as referralid ,coalesce(tblreferral.descr,'') as referral ,coalesce(aa2.scaller,0) as callerid ,coalesce(namess.descr,'') as caller,coalesce(aa2.travdets,0) as travdetsid ,coalesce(curact.luser,-1) as actionluser ,aa2.iss from aa2 LEFT JOIN tblapplic on aa2.aid=tblapplic.idnumber left JOIN travdets on aa2.travdets=travdets.idnumber left JOIN pm on aa2.id=pm.aa2id and pm.pagee=aa2.ty LEFT JOIN sign ON aa2.ty = sign.aa LEFT JOIN tblnationality dcou ON dcou.idnumber=travdets.dcountry left join tblcountry on aa2.visaid=tblcountry.idnumber left join aa2pre on aa2.id=aa2pre.aa2id left join tbluser on aa2pre.addusr=tbluser.idnumber left join loffice on tbluser.office=loffice.idnumber left join loffice lofficesuperuser on aa2.loffice=lofficesuperuser.idnumber left outer join stype on aa2.stype=stype.id left outer join sta on aa2.sts=sta.idnumber and sta.typee=aa2.ty left join tblcolltype on aa2.colltype=tblcolltype.idnumber left join tbllist1 on aa2.rt=tbllist1.idnumber left join tblreferral on aa2.reff=tblreferral.idnumber left join namess on aa2.scaller=namess.idnumber left join aa2_curr_act curact on aa2.id=curact.aa2id where aa2.op_status=0 and aa2.ty>0 ORDER BY aa2.aid DESC, aa2.ty DESC LIMIT 1000 ``` The plan: ``` Limit (cost=2502213.29..2502215.79 rows=1000 width=555) (actual time=569132.700..569133.021 rows=1000 loops=1) -> Sort (cost=2502213.29..2502284.78 rows=28593 width=555) (actual time=569132.699..569132.870 rows=1000 loops=1) Sort Key: aa2.aid, aa2.ty Sort Method: top-N heapsort Memory: 3539kB -> Hash Left Join (cost=424537.96..2500645.57 rows=28593 width=555) (actual time=3817.372..565313.067 rows=1175709 loops=1) Hash Cond: (aa2.id = curact.aa2id) -> Hash Left Join (cost=424498.48..469748.41 rows=28593 width=547) (actual time=3816.435..28001.006 rows=1175709 loops=1) Hash Cond: (aa2.scaller = namess.idnumber) -> Hash Left Join (cost=424492.25..469350.45 rows=28593 width=532) (actual 
time=3816.326..27006.364 rows=1175709 loops=1) Hash Cond: (aa2.reff = tblreferral.idnumber) -> Hash Left Join (cost=424452.85..468924.32 rows=28593 width=506) (actual time=3815.846..25976.031 rows=1175709 loops=1) Hash Cond: (aa2.rt = tbllist1.idnumber) -> Nested Loop Left Join (cost=424451.44..468537.08 rows=28593 width=487) (actual time=3815.819..25098.987 rows=1175709 loops=1) Join Filter: (aa2.colltype = tblcolltype.idnumber)" Rows Removed by Join Filter: 8229962 -> Hash Left Join (cost=424451.44..465533.73 rows=28593 width=451) (actual time=3815.793..21647.918 rows=1175709 loops=1) Hash Cond: ((aa2.sts = public.sta.idnumber) AND (aa2.ty = public.sta.typee)) -> Hash Left Join (cost=424443.94..464745.17 rows=28593 width=424) (actual time=3815.677..20748.314 rows=1175709 loops=1) Hash Cond: (aa2.stype = public.stype.id)" -> Hash Left Join (cost=424367.49..464239.83 rows=28593 width=383) (actual time=3814.669..19892.991 rows=1175709 loops=1) Hash Cond: (aa2.loffice = lofficesuperuser.idnumber) -> Hash Left Join (cost=424366.18..464131.28 rows=28593 width=374) (actual time=3814.651..19300.840 rows=1175709 loops=1) Hash Cond: (tbluser.office = loffice.idnumber) -> Hash Left Join (cost=424364.86..464002.72 rows=28593 width=373) (actual time=3814.628..18721.215 rows=1175709 loops=1) Hash Cond: (aa2pre.addusr = tbluser.idnumber) -> Hash Left Join (cost=424329.53..463574.24 rows=28593 width=373) (actual time=3813.991..18128.257 rows=1175709 loops=1) Hash Cond: (aa2.id = aa2pre.aa2id) -> Hash Left Join (cost=424309.57..463447.04 rows=28593 width=365) (actual time=3813.404..17427.883 rows=1175708 loops=1) Hash Cond: (aa2.visaid = public.tblcountry.idnumber) -> Hash Left Join (cost=424286.05..463120.22 rows=28593 width=342) (actual time=3813.207..16687.672 rows=1175708 loops=1) Hash Cond: (travdets.dcountry = dcou.idnumber) -> Hash Left Join (cost=424277.35..462718.39 rows=28593 width=330) (actual time=3813.056..15885.221 rows=1175708 loops=1) Hash Cond: (aa2.ty = sign.aa) 
-> Hash Left Join (cost=424275.90..462337.40 rows=28593 width=319) (actual time=3813.019..15036.693 rows=1175708 loops=1) Hash Cond: (aa2.travdets = travdets.idnumber) -> Hash Left Join (cost=408250.71..445773.33 rows=28593 width=307) (actual time=3559.650..12275.923 rows=1175708 loops=1)" Hash Cond: (aa2.aid = tblapplic.idnumber) -> Hash Right Join (cost=301618.16..338733.36 rows=28593 width=287) (actual time=2854.476..9334.815 rows=1175708 loops=1) Hash Cond: ((pm.aa2id = aa2.id) AND (pm.pagee = aa2.ty)) -> Seq Scan on pm (cost=0.00..23718.76 rows=595176 width=22) (actual time=0.148..270.163 rows=583001 loops=1) -> Hash (cost=301189.26..301189.26 rows=28593 width=275) (actual time=2854.215..2854.215 rows=1167712 loops=1) Buckets: 4096 Batches: 4 (originally 1) Memory Usage: 65537kB -> Seq Scan on aa2 (cost=0.00..301189.26 rows=28593 width=275) (actual time=0.010..1806.989 rows=1167712 loops=1) Filter: ((ty > 0) AND (op_status = 0))" Rows Removed by Filter: 98527 -> Hash (cost=98044.47..98044.47 rows=687047 width=28) (actual time=705.012..705.012 rows=447141 loops=1) Buckets: 131072 Batches: 1 Memory Usage: 28576kB" -> Seq Scan on tblapplic (cost=0.00..98044.47 rows=687047 width=28) (actual time=0.012..366.413 rows=447141 loops=1) -> Hash (cost=10560.64..10560.64 rows=437164 width=20) (actual time=253.078..253.078 rows=419170 loops=1) Buckets: 65536 Batches: 1 Memory Usage: 22917kB" -> Seq Scan on travdets (cost=0.00..10560.64 rows=437164 width=20) (actual time=0.006..122.472 rows=419170 loops=1) -> Hash (cost=1.20..1.20 rows=20 width=19) (actual time=0.014..0.014 rows=20 loops=1)" Buckets: 1024 Batches: 1 Memory Usage: 2kB" -> Seq Scan on sign (cost=0.00..1.20 rows=20 width=19) (actual time=0.004..0.009 rows=20 loops=1) -> Hash (cost=5.53..5.53 rows=253 width=20) (actual time=0.143..0.143 rows=253 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 14kB -> Seq Scan on tblnationality dcou (cost=0.00..5.53 rows=253 width=20) (actual time=0.007..0.064 rows=253 loops=1) 
-> Hash (cost=19.90..19.90 rows=290 width=31) (actual time=0.189..0.189 rows=294 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 20kB" -> Seq Scan on tblcountry (cost=0.00..19.90 rows=290 width=31) (actual time=0.004..0.094 rows=294 loops=1) -> Hash (cost=17.76..17.76 rows=176 width=16) (actual time=0.576..0.576 rows=431 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 21kB -> Seq Scan on aa2pre (cost=0.00..17.76 rows=176 width=16) (actual time=0.030..0.438 rows=494 loops=1) -> Hash (cost=28.48..28.48 rows=548 width=16) (actual time=0.623..0.623 rows=507 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 22kB -> Seq Scan on tbluser (cost=0.00..28.48 rows=548 width=16) (actual time=0.005..0.526 rows=507 loops=1) -> Hash (cost=1.14..1.14 rows=14 width=17) (actual time=0.010..0.010 rows=13 loops=1)" Buckets: 1024 Batches: 1 Memory Usage: 1kB" -> Seq Scan on loffice (cost=0.00..1.14 rows=14 width=17) (actual time=0.003..0.005 rows=13 loops=1)" -> Hash (cost=1.14..1.14 rows=14 width=17) (actual time=0.008..0.008 rows=13 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 1kB -> Seq Scan on loffice lofficesuperuser (cost=0.00..1.14 rows=14 width=17) (actual time=0.002..0.005 rows=13 loops=1) -> Hash (cost=55.09..55.09 rows=1709 width=49) (actual time=0.998..0.998 rows=1717 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 144kB -> Seq Scan on stype (cost=0.00..55.09 rows=1709 width=49) (actual time=0.005..0.431 rows=1717 loops=1) -> Hash (cost=5.40..5.40 rows=140 width=37) (actual time=0.100..0.100 rows=145 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 11kB -> Seq Scan on sta (cost=0.00..5.40 rows=140 width=37) (actual time=0.004..0.046 rows=145 loops=1) -> Materialize (cost=0.00..1.10 rows=7 width=44) (actual time=0.000..0.001 rows=7 loops=1175709) -> Seq Scan on tblcolltype (cost=0.00..1.07 rows=7 width=44) (actual time=0.013..0.014 rows=7 loops=1) -> Hash (cost=1.18..1.18 rows=18 width=27) (actual time=0.012..0.012 rows=18 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 2kB" 
-> Seq Scan on tbllist1 (cost=0.00..1.18 rows=18 width=27) (actual time=0.004..0.007 rows=18 loops=1) -> Hash (cost=29.18..29.18 rows=818 width=34) (actual time=0.469..0.469 rows=827 loops=1)" Buckets: 1024 Batches: 1 Memory Usage: 57kB" -> Seq Scan on tblreferral (cost=0.00..29.18 rows=818 width=34) (actual time=0.007..0.181 rows=827 loops=1) -> Hash (cost=3.88..3.88 rows=188 width=23) (actual time=0.099..0.099 rows=191 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 11kB -> Seq Scan on namess (cost=0.00..3.88 rows=188 width=23) (actual time=0.007..0.042 rows=191 loops=1) -> Hash (cost=23.10..23.10 rows=1310 width=16) (actual time=0.000..0.000 rows=0 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 0kB -> Seq Scan on aa2_curr_act curact (cost=0.00..23.10 rows=1310 width=16) (actual time=0.000..0.000 rows=0 loops=1) SubPlan 2 -> Result (cost=20.66..20.67 rows=1 width=0) (actual time=0.156..0.156 rows=1 loops=1175709) InitPlan 1 (returns $1) -> Sort (cost=20.65..20.66 rows=3 width=23) (actual time=0.135..0.137 rows=25 loops=1175709) Sort Key: public.tblcountry.descr Sort Method: quicksort Memory: 28kB -> Seq Scan on tblcountry (cost=0.00..20.62 rows=3 width=23) (actual time=0.017..0.063 rows=25 loops=1175709) Filter: (dcountry = travdets.dcountry) Rows Removed by Filter: 269 SubPlan 4 -> Result (cost=44.29..44.31 rows=1 width=0) (actual time=0.237..0.237 rows=1 loops=1175709) InitPlan 3 (returns $4) -> Sort (cost=44.29..44.29 rows=1 width=41) (actual time=0.224..0.225 rows=12 loops=1175709) Sort Key: public.stype.descr Sort Method: quicksort Memory: 25kB -> Bitmap Heap Scan on stype (cost=4.93..44.28 rows=1 width=41) (actual time=0.108..0.197 rows=12 loops=1175709) Recheck Cond: (typee = aa2.ty) Filter: (visaa = COALESCE(aa2.visaid, 0::bigint)) Rows Removed by Filter: 629 -> Bitmap Index Scan on stypeindtypee (cost=0.00..4.93 rows=90 width=0) (actual time=0.073..0.073 rows=745 loops=1175709) Index Cond: (typee = aa2.ty) SubPlan 6 -> Result (cost=5.87..5.88 rows=1 
width=0) (actual time=0.056..0.056 rows=1 loops=1175709) InitPlan 5 (returns $6) -> Sort (cost=5.85..5.87 rows=7 width=27) (actual time=0.044..0.045 rows=13 loops=1175709) Sort Key: public.sta.descr Sort Method: quicksort Memory: 25kB -> Seq Scan on sta (cost=0.00..5.75 rows=7 width=27) (actual time=0.010..0.025 rows=13 loops=1175709) Filter: (typee = aa2.ty) Rows Removed by Filter: 132 Total runtime: 569133.611 ms ``` LATEST EDIT: Here is a workaround, for those that have no time to solve the problem. Just wrap the big query, and then make a ORDER BY. Something like this: ``` explain ANALYSE SELECT * FROM (select aa2.aid as cid ,aa2.id ,aa2.ty ,coalesce(sign.bb,'') as tyy ,aa2.rd ,aa2.dt ,aa2.appliedon ,aa2.approvedon ,aa2.pprec ,aa2.ppret ,aa2.cdate ,pm.pmd,tblapplic.dob ,coalesce(tblapplic.firstname ,'') as fname ,coalesce(tblapplic.middlename ,'') as mname ,coalesce(tblapplic.surname ,'') as sname ,coalesce(Upper(dcou.descr),'') as dcouu ,coalesce(aa2.visaid,0) as visaid ,coalesce(tblcountry.descr,'') as visaname ,(SELECT chr(9) || array_to_string(ARRAY(SELECT descr FROM tblcountry WHERE dcountry=travdets.dcountry ORDER BY descr),chr(9))) as visanamecombo ,coalesce(aa2.stype,0) as stypeid ,coalesce(stype.descr,'') as stype ,coalesce(aa2.sts,0) as appStatusid ,coalesce(sta.descr,'') as appStatus ,lofficesuperuser.descr as lofficesuperuser,coalesce(aa2.loffice,0) as lofficeid ,coalesce(lofficesuperuser.descr,coalesce(loffice.descr,'')) as loffice ,coalesce(pm.id,0) as pmid ,(SELECT chr(9) || array_to_string(ARRAY(SELECT descr FROM stype WHERE typee=aa2.ty and visaa=coalesce(aa2.visaid,0) ORDER BY descr),chr(9))) as sTypeCombo ,(SELECT chr(9) || array_to_string(ARRAY(SELECT descr FROM sta WHERE typee=aa2.ty ORDER BY descr),chr(9))) as appStatusCombo ,coalesce(aa2.colltype,0) as colltypeid ,coalesce(tblcolltype.descr,'') as colltype,aa2.embcolldt as embcolldt ,aa2.clcolldt as clbcolldt,travdets.leavingdt as leaving,coalesce(aa2.pptrnum,'') as ppTrackNumber 
,coalesce(aa2.rt,0) as regtypeid,coalesce(tbllist1.descr,'') as regtype ,coalesce(aa2.reff,0) as referralid ,coalesce(tblreferral.descr,'') as referral ,coalesce(aa2.scaller,0) as callerid ,coalesce(namess.descr,'') as caller,coalesce(aa2.travdets,0) as travdetsid ,coalesce(curact.luser,-1) as actionluser ,aa2.iss from aa2 LEFT JOIN tblapplic on aa2.aid=tblapplic.idnumber left JOIN travdets on aa2.travdets=travdets.idnumber left JOIN pm on aa2.id=pm.aa2id and pm.pagee=aa2.ty LEFT JOIN sign ON aa2.ty = sign.aa LEFT JOIN tblnationality dcou ON dcou.idnumber=travdets.dcountry left join tblcountry on aa2.visaid=tblcountry.idnumber left join aa2pre on aa2.id=aa2pre.aa2id left join tbluser on aa2pre.addusr=tbluser.idnumber left join loffice on tbluser.office=loffice.idnumber left join loffice lofficesuperuser on aa2.loffice=lofficesuperuser.idnumber left outer join stype on aa2.stype=stype.id left outer join sta on aa2.sts=sta.idnumber and sta.typee=aa2.ty left join tblcolltype on aa2.colltype=tblcolltype.idnumber left join tbllist1 on aa2.rt=tbllist1.idnumber left join tblreferral on aa2.reff=tblreferral.idnumber left join namess on aa2.scaller=namess.idnumber left join aa2_curr_act curact on aa2.id=curact.aa2id where aa2.op_status=0 and aa2.ty>0) AS sub ORDER BY sub.cid DESC, sub.ty DESC LIMIT 1000 ```
Here is a workaround for those who have no time to solve the underlying problem: just wrap the big query in a subquery, then apply the ORDER BY to the outer query.
The reason behind this is that your query is not using any of that table's indexes. > For a query that requires scanning a large fraction of the table, an explicit sort is likely to be faster than using an index because it requires less disk I/O due to following a sequential access pattern. Indexes are more useful when only a few rows need be fetched. [1](http://www.postgresql.org/docs/current/static/indexes-ordering.html) If an `ORDER BY` can use an index, you will see something like this in your `EXPLAIN`: ``` Index Scan using test_idx on test_tbl (cost=0.15..24.28 rows=285 width=27) ``` For example, in my (smaller) table the index is used if I use `LIMIT 210`, but a full table scan is used if I supply `LIMIT 211` (your table may differ, however; the query planner can take row count and width into account when deciding whether to use an index).
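The index-versus-sort distinction is easy to watch in miniature. The sketch below uses SQLite as a stand-in (its planner, like PostgreSQL's, can satisfy an ORDER BY ... LIMIT from an index instead of sorting); the table and index names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, i * 7 % 100) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM t ORDER BY val DESC LIMIT 10"
before = plan(q)   # no index: full scan plus an explicit sort step
conn.execute("CREATE INDEX idx_val ON t(val)")
after = plan(q)    # with index: ordered index scan, no sort step

print(before)
print(after)
```

Without the index the plan contains a "USE TEMP B-TREE FOR ORDER BY" step (SQLite's analogue of the Sort node in the question's PostgreSQL plan); with it, the ordered scan replaces the sort entirely.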
Postgresql ORDER BY of indexed columns is slow
[ "sql", "postgresql" ]
I have a scenario ``` Key Name Amt 101 Nikh 100 101 Mark 150 101 Jess 75 102 Sam 200 102 Lee 150 ``` Observe that Key is the same for multiple records. Also, it comes from a different table than Name and Amt. I want to return the records having the minimum Amt. For example: ``` Key Name Amt 101 Jess 75 102 Lee 150 ``` So far, I have code running which returns the Key and Amt records. How do I add a Name column and make sure it returns exactly the name associated with the min(Amt)? Please note: I am working on SSRS 2008. For better understanding, I am pasting my code here (edited): ``` SELECT A.AppID,A.AppDetailID, MIN(B.Amt) AS LOWAmt FROM AppDetail AS A LEFT OUTER JOIN (SELECT TOP (100) PERCENT C.AppID, C.PartyID, MIN(C.TotalScore) AS Amt, D.Name, D.Indicator, E.FirstName, E.LastName FROM DetailGuarantor AS D RIGHT OUTER JOIN Applicant AS E ON D.BeginDate = E.BeginDate AND D.PartyID = E.PartyID AND D.AppID = E.AppID RIGHT OUTER JOIN ApplicationScore AS C ON E.AppID = C.AppID AND E.PartyID = C.PartyID GROUP BY C.AppID, C.PartyID, D.Name, D.Indicator, E.FirstName, E.LastName ORDER BY C.AppID, Amt) AS B ON A.AppID = B.AppID GROUP BY A.AppID,A.AppDetailID ```
You should approach this using `row_number()`: ``` select [key], name, amt from (select t.*, row_number() over (partition by [key] order by amt asc) as seqnum from table t ) t where seqnum = 1; ```
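The row_number() pattern above can be checked against the question's sample data. The sketch below runs it through SQLite (whose window functions behave the same way as SQL Server's for this query); the table name is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE amounts ("key" INTEGER, name TEXT, amt INTEGER);
INSERT INTO amounts VALUES
  (101,'Nikh',100),(101,'Mark',150),(101,'Jess',75),
  (102,'Sam',200),(102,'Lee',150);
""")

# Number the rows within each key, cheapest first, then keep row 1.
rows = conn.execute("""
    SELECT "key", name, amt
    FROM (SELECT a.*,
                 ROW_NUMBER() OVER (PARTITION BY "key" ORDER BY amt) AS seqnum
          FROM amounts a)
    WHERE seqnum = 1
    ORDER BY "key"
""").fetchall()

print(rows)  # [(101, 'Jess', 75), (102, 'Lee', 150)]
```

This returns exactly the desired result: the name associated with the minimum Amt for each key.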
Classic question and answer ``` ;WITH CTE as ( SELECT row_number() over (partition by [Key] order by Amt) rn, [Key], Name, Amt FROM <table> ) SELECT [Key], Name, Amt FROM CTE WHERE rn = 1 ``` Edit: If, in case of ties, you want to show all results with the lowest Amt, replace `row_number()` with `rank()`
Return rows with column having min value
[ "sql", "sql-server", "reporting-services", "ssrs-2008" ]
I am trying to pass a complete query to the Source Qualifier of Informatica PowerCenter Designer. Since the query is large, I cannot parameterize it (UNIX limitations also come into play here). So, is there a way I can reference and run a query without hardcoding it inside Informatica, and get the result in Informatica? Any help is appreciated!
You can put the query in a view and reference it in the parameter.
You can create a pipeline function or table function and use it in the SQ Override. They have an advantage over views in that you can pass parameters to the function.
Pass a query to Informatica
[ "sql", "teradata", "informatica", "informatica-powercenter" ]
In the code below I am inserting values into a table and getting the error "String or binary data would be truncated." My table definition: ``` CREATE TABLE urs_prem_feed_out_control ( bd_pr_cntl_rec_type char(7) NULL , pd_pr_cntl_acctg_dte char(6) NULL , bd_pr_cntl_run_dte char(10) NULL , bd_pr_cntl_start_dte char(10) NULL , bd_pr_cntl_end_dte char(10) NULL , bd_pr_cntl_rec_count char(16) NULL , bd_pr_tot_premium char(16) NULL , bd_pr_tot_commission char(16) NULL , fd_ctl_nbr integer NOT NULL ) DECLARE @cur_fd_ctl_nbr INT = 2, @acctg_cyc_ym_2 CHAR(6) = '201402', @rundate CHAR (10) = CONVERT(CHAR(10),GETDATE(),101), @cycle_start_dt DATETIME = '2014-02-17', @cycle_end_dt DATETIME = '2014-02-24', @record_count INT = 24704, @tot_pr_premium DECIMAL(18,2) = 476922242, @tot_pr_comm DECIMAL(18,2) = 2624209257 ``` Insert code (I've declared the variables as constant values for testing, I took these values from what they were at runtime): ``` INSERT INTO urs_prem_feed_out_control SELECT fd_ctl_nbr = @cur_fd_ctl_nbr, bd_pr_cntl_rec_type = 'CONTROL', bd_pr_cntl_acctg_dte = @acctg_cyc_ym_2, bd_pr_cntl_run_dte = @rundate, bd_pr_cntl_start_dte = CONVERT(CHAR(10),@cycle_start_dt,101), bd_pr_cntl_end_dte = CONVERT(CHAR(10),@cycle_end_dt,101), bd_pr_cntl_rec_count = RIGHT('0000000000000000' + RTRIM(CONVERT(CHAR(16),@record_count)),16), bd_pr_tot_premium = CASE WHEN @tot_pr_premium < 0 THEN '-' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_premium)*100))),18),1,15) ELSE '+' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_premium)*100))),18),1,15) END, bd_pr_tot_commission = CASE WHEN @tot_pr_comm < 0 THEN '-' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_comm)*100))),18),1,15) ELSE '+' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_comm)*100))),18),1,15) END ``` When I look at each value individually it seems like they are all within the variable length 
constraints of the table. Any idea why I'm getting this error? Thanks!
The problem with your insert query is *THE ORDER OF INSERTION*: ``` SELECT fd_ctl_nbr = @cur_fd_ctl_nbr, ``` Without an explicit column list, this column must come last in the `INSERT`, as it is the last column defined in the CREATE TABLE script. Change your query to this: ``` INSERT INTO urs_prem_feed_out_control (fd_ctl_nbr, bd_pr_cntl_rec_type, pd_pr_cntl_acctg_dte, bd_pr_cntl_run_dte, bd_pr_cntl_start_dte, bd_pr_cntl_end_dte, bd_pr_cntl_rec_count, bd_pr_tot_premium, bd_pr_tot_commission) SELECT fd_ctl_nbr = @cur_fd_ctl_nbr, bd_pr_cntl_rec_type = 'CONTROL', bd_pr_cntl_acctg_dte = @acctg_cyc_ym_2, bd_pr_cntl_run_dte = @rundate, bd_pr_cntl_start_dte = CONVERT(CHAR(10),@cycle_start_dt,101), bd_pr_cntl_end_dte = CONVERT(CHAR(10),@cycle_end_dt,101), bd_pr_cntl_rec_count = RIGHT('0000000000000000' + RTRIM(CONVERT(CHAR(16),@record_count)),16), bd_pr_tot_premium = CASE WHEN @tot_pr_premium < 0 THEN '-' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_premium)*100))),18),1,15) ELSE '+' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_premium)*100))),18),1,15) END, bd_pr_tot_commission = CASE WHEN @tot_pr_comm < 0 THEN '-' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_comm)*100))),18),1,15) ELSE '+' + SUBSTRING(RIGHT('000000000000000' + LTRIM(RTRIM(CONVERT(VARCHAR,ABS(@tot_pr_comm)*100))),18),1,15) END ``` With the explicit column list, the `SELECT` can stay in exactly the order you provided in your question. See it here: <http://sqlfiddle.com/#!3/0e09b/1>. Hope this helps!
This is why you should never write an INSERT statement without specifying the columns. Since you did not, it will try to put the data into the columns in the order they appear in the table, which is not at all the order you have them in. Another thing that can happen when you get this sort of message (but which I don't think applies in your case; I include it for people searching later) is that the error is actually coming from a trigger and not the main insert. Finally, a note on database design: you should not be using char for dates; you should be using date fields. You cannot do date math on a char field, and it will accept incorrect date values like Feb 30, 2014. It is always a bad idea to store dates as anything except date or datetime values. In general, char should be used only rarely, when a column will always have the same number of characters (like a 2-character state abbreviation); it should not be used as the default datatype. You need to do a better job of defining datatypes that match the type and size of the data being stored. You can run into problems with queries, as 'VA' is not the same thing as 'VA '. In general, my experience is that less than 1% of all database fields should be char.
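The positional-matching pitfall both answers describe can be shown in a few lines. The sketch below uses SQLite with made-up column names; SQLite does not enforce character lengths, so instead of the SQL Server truncation error we can inspect the misplaced values directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (short_col TEXT, long_col TEXT)")

# No column list: values are matched to columns strictly by position, so
# the long value lands in short_col. On SQL Server, this kind of mismatch
# against a CHAR(7) column is exactly what raises "String or binary data
# would be truncated".
conn.execute("INSERT INTO target VALUES ('a-much-longer-value', 'short')")

# Explicit column list: each value goes to the column it is named for,
# regardless of the order the columns were created in.
conn.execute(
    "INSERT INTO target (long_col, short_col) VALUES ('a-much-longer-value', 'short')")

rows = conn.execute("SELECT short_col, long_col FROM target ORDER BY rowid").fetchall()
print(rows)
# [('a-much-longer-value', 'short'), ('short', 'a-much-longer-value')]
```

The first row shows the silent misplacement; the second shows the values landing where they belong once the columns are named.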
TSQL error on insert "String or binary data would be truncated"
[ "sql", "sql-server", "t-sql", "insert", "truncate" ]
I am struggling with a SELECT query I'm trying to create to join two tables that have a one-to-many relationship. ### Core Table ``` id | client_id | vehicle_code | risk | .... --------------------------------------------------------------------- 20 | 2 | C2E | There is no risk | ``` ### Secondary Table ``` id | LTEQR_id | period | milage | advanced_payments | subsequent_payments | total | .... --------------------------------------------------------------------------------------------------------------------- 10 | 20 | 1 Year | 2000 | NULL | NULL | NULL | 11 | 20 | 2 years | 1000 | NULL | 23 | 16000.00 | ``` The issue I am having is selecting the rows based on whether certain columns are populated or not. I have a query to select all the rows which have a `subsequent_payments` of NULL ``` SELECT l.id ,l.client_id ,l.vehicle_code ,l.risk FROM core_table l INNER JOIN secondary_table p ON p.LTEQR_id = l.id WHERE p.subsequent_payments IS NULL ``` which works fine for `subsequent_payments` and other columns. But when I try the opposite, `IS NOT NULL`, it will still select the row, because in the secondary table id 11 has the value 23 for `subsequent_payments`. I want it to select the row only if all rows in the secondary table are not null, and I'm struggling to get my head around the logic, so help with this is appreciated. I'm open to any solution, including changing the table design.
You can do that with a subquery instead of a join ``` SELECT * FROM core_table WHERE NOT EXISTS ( SELECT LTEQR_id FROM secondary_table WHERE LTEQR_id = core_table.id AND subsequent_payments IS NULL) ``` Another way would be to `GROUP BY` and use a `CASE`: you would group the records from `secondary_table`, use a case to return 1 or 0 depending on the `NULL` status of the `subsequent_payments` column, which you can then filter with a `HAVING` clause. It's a bit more complicated than the trivial solution above, but possibly a more performant alternative if you have problems with that.
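The NOT EXISTS approach above can be exercised against the question's sample rows. A small sketch using SQLite (the query is standard SQL, so it behaves the same here); the extra core row with id 21 is made up so there is something left to return:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE core (id INTEGER, risk TEXT);
CREATE TABLE secondary (id INTEGER, LTEQR_id INTEGER, subsequent_payments INTEGER);
INSERT INTO core VALUES (20, 'There is no risk'), (21, 'Some risk');
INSERT INTO secondary VALUES (10, 20, NULL), (11, 20, 23), (12, 21, 5);
""")

# Keep a core row only if NO related secondary row has a NULL
# subsequent_payments; a single NULL disqualifies the whole group.
rows = conn.execute("""
    SELECT id FROM core
    WHERE NOT EXISTS (SELECT 1 FROM secondary
                      WHERE LTEQR_id = core.id
                        AND subsequent_payments IS NULL)
""").fetchall()

print(rows)  # only id 21 survives; id 20 has a NULL in row 10
```

This is the inversion the question struggled with: the join-based query asks "does any row match?", while NOT EXISTS asks "does no row violate?", which is what "all rows are not null" requires.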
You want to select records from one table based on whether a matching record exists (or does not exist) in another table, so use an EXISTS clause rather than joining the tables. EDIT: You can also use an IN clause.
SQL table join selection
[ "sql" ]
So, I've got a query (which I use as a stored procedure) in SQL Server 2008 R2. It works, but I cannot believe there isn't a more efficient way. The data is in a table 'ServiceInstance'. This is one completely flat table containing **16** 'instances' per IPAddress, each of them having a unique TCPPort for that IPAddress. The columns 'IsRestarting' and 'IsInUse' are not really important to this system; when either 'IsRestarting' or 'IsInUse' is *True*, 'IsAvailable' is *False*. The column 'CPUId' matters: each server has 4 CPUs, and the Delphi application I'm running can only have 1 application on a CPU at the same time. So, when CPU #1 on the server with IP '192.168.4.151' is in use, that CPUId on that IP is not allowed to be returned from the query. (There are 16 instances on one server with 4 cores.) ![The table with data](https://i.stack.imgur.com/kL0lO.png) So, the SP must get an available instance, which must comply with the following: * The SP should always return 1 row. + This is a low priority for me at the moment - I can rerun the SP if needed * The ServerInstance's IsEnabled must be true * **When the ServerInstance is returned, it must be set to 'IsAvailable' = False so it won't be picked again until work is done (reset is done by my application logic)** * No instance with the same CPUId on that IP may have 'IsAvailable' = False + This makes sure the CPU # on that server is free * The ServerInstance which hasn't been used for the longest time is preferred. + Because I have a large pool and want to load balance traffic * The 'found' ServerInstance is to be updated immediately. + The 'LastRequestDate' should be stamped + The 'IsInUse' set to True + The 'IsAvailable' set to **False** So.
With this information I've created this monster: ``` UPDATE top(1) ServiceInstance SET LastRequestDate=GETDATE() ,IsInUse=1 ,IsAvailable=0 OUTPUT inserted.ServiceInstanceId, inserted.IpAddress, inserted.TcpPort, inserted.LastRequestDate, inserted.IsInUse WHERE ServiceInstanceId IN ( SELECT Top (1) ServiceInstanceId FROM ServiceInstance WHERE (ServiceInstance.IsAvailable = 1 AND ServiceInstance.IsEnabled = 1) AND ServiceInstanceId NOT IN ( SELECT NGI1.ServiceInstanceId FROM (SELECT CpuId,IpAddress FROM [ServiceInstance] NGI WHERE IsInUse=1) a INNER JOIN ServiceInstance AS NGI1 ON a.IpAddress = NGI1.IpAddress AND a.CpuId = NGI1.CpuId ) ORDER BY LastRequestDate ASC ) ``` However, I feel this cannot be the most efficient way to go about things. This query runs ~10 times per second at peak hours and is currently putting some heavy CPU pressure on my SQL Server. Any tips welcome! I feel I should be able to use PARTITION/OVER or a join to my own table, but I cannot seem to create it successfully! OK, the table structure is as follows: * ServiceInstanceId INT NOT NULL * IPAddress varchar(20) NOT NULL * TCPPort varchar(5) NOT NULL * LastRequestDate DateTime NOT NULL * IsEnabled BIT NOT NULL * IsAvailable BIT NOT NULL * IsRestarting BIT NOT NULL * IsInuse BIT NOT NULL * CPU INT NOT NULL At the moment, I have no indexes. This is because the table mutates a lot (every time a ServerInstance is 'used' the table mutates 3 or 4 times: 1 = *use*, 2 = *restart after use*, 3 = set *IsAvailable*, 4 = *restart on failure*). My guess was that, if I made indexes, these would have to be updated on every mutation. I'm not sure, but I felt it would decrease performance :) Exec plan: ![Execution plan for my query:](https://i.stack.imgur.com/02ZPW.png) IMPORTANT ADDITION after some load tests: I really needed to use `Exec @RC =sp_getapplock @Resource='MyLock', @LockMode='Exclusive', @LockOwner='Transaction', @LockTimeout = 1000` for this stored procedure.
Doesn't work well without it!
``` WITH UnAvailableCpus AS ( SELECT IpAddress ,CpuId FROM ServiceInstance WHERE IsAvailable = 0 GROUP BY IpAddress ,CpuId ) ,AvailableInstances AS ( SELECT ServiceInstanceId ,LastRequestDate FROM ServiceInstance LEFT JOIN UnavailableCpus ON UnavailableCpus.IpAddress = ServiceInstance.IpAddress AND UnavailableCpus.CpuId = ServiceInstance.CpuId WHERE ServiceInstance.IsAvailable = 1 AND ServiceInstance.IsEnabled = 1 AND UnavailableCpus.IpAddress IS NULL ) ,PreferredInstance AS ( SELECT TOP 1 ServiceInstanceId FROM AvailableInstances ORDER BY LastRequestDate ) UPDATE ServiceInstance SET LastRequestDate=GETDATE() ,IsInUse=1 ,IsAvailable=0 OUTPUT inserted.ServiceInstanceId ,inserted.IpAddress ,inserted.TcpPort ,inserted.LastRequestDate ,inserted.IsInUse WHERE ServiceInstanceId IN (SELECT ServiceInstanceId FROM PreferredInstance) ```
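The selection logic of the chained CTEs above can be tested in isolation (the UPDATE ... OUTPUT part is SQL Server specific, so this sketch covers only the SELECT side). It uses SQLite with made-up sample data: instance 3 shares a CPU with the busy instance 1, so the fully free instance 2 should win:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE si (ServiceInstanceId INTEGER, IpAddress TEXT, CpuId INTEGER,
                 IsAvailable INTEGER, IsEnabled INTEGER, LastRequestDate TEXT);
INSERT INTO si VALUES
  (1, '10.0.0.1', 1, 0, 1, '2014-01-01'),
  (2, '10.0.0.1', 2, 1, 1, '2014-01-02'),
  (3, '10.0.0.1', 1, 1, 1, '2014-01-01');
""")

row = conn.execute("""
WITH UnAvailableCpus AS (
    SELECT IpAddress, CpuId FROM si WHERE IsAvailable = 0
    GROUP BY IpAddress, CpuId),
AvailableInstances AS (
    SELECT si.ServiceInstanceId, si.LastRequestDate
    FROM si LEFT JOIN UnAvailableCpus u
      ON u.IpAddress = si.IpAddress AND u.CpuId = si.CpuId
    WHERE si.IsAvailable = 1 AND si.IsEnabled = 1 AND u.IpAddress IS NULL)
SELECT ServiceInstanceId FROM AvailableInstances
ORDER BY LastRequestDate LIMIT 1
""").fetchone()

print(row)  # (2,) -- instance 3 is excluded because CPU 1 is busy
```

Instance 3 is older (and would win on LastRequestDate alone), but the anti-join against UnAvailableCpus removes it, which is exactly the per-CPU exclusion the question requires.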
It may be related to the structure of the table (if you post it, we can elaborate further). The best guess is to create indexes for any column used in joins, if not already done: * CpuId * IpAddress * IsInUse It is also not a bad idea to create indexes for columns used in WHERE conditions: * IsAvailable * IsEnabled Anyway, what you present here is a "planning" algorithm for process execution, like in a distributed operating system. Those algorithms are always programmed to keep all information resident in memory, with the planning step (selecting which process executes what, and where) compiled to native code; in the case of the Linux kernel, for example, that code is C. If you delegate the information to a database engine and the planning to a three-level nested SQL query, performance is going to be bad. Databases are better suited for processing large quantities of information for a result, serving a lot of clients querying information at the same time, or logging and storing a lot of data quickly. Maybe you should try to program everything in Delphi, with an object-oriented approach, without a database and SQL.
SQL query improved performance needed (1 table)
[ "sql", "sql-server" ]
I have the following table: ``` | ID | Name | DateA | TimeToWork | TimeWorked | |:--:|:----:|:----------:|:----------:|:----------:| | 1 |Frank | 2013-01-01 | 8 | 5 | | 2 |Frank | 2013-01-02 | 8 | NULL | | 3 |Frank | 2013-01-03 | 8 | 7 | | 4 |Jules | 2013-01-01 | 4 | 9 | | 5 |Jules | 2013-01-02 | 4 | NULL | | 6 |Jules | 2013-01-03 | 4 | 3 | ``` The table is very long; every person has an entry for every day of the year. For each person I have the date he worked (`DateA`), the hours he has to work according to contract (`TimeToWork`) and the hours he worked (`TimeWorked`). As you can see, some days a person didn't work on a day he had to. This is when a person took a full day off against overtime. What I'm trying to accomplish is to get the following table out of the first one above. ``` | Name | January | February | March | ... | Sum | |:----:|:----------:|:--------:|:-----:|:---:|:---:| |Frank | 2 | 0 | 1 | ... | 12 | |Jules | 5 | 1 | 3 | ... | 10 | ``` For each month I want to count all days where a person took A FULL day off and sum it all up in the `Sum` column. I tried something like `Select (case when Datetime(month, DateA = 1 then count(case when timetowork - (case when timeworked then 0 end) = timetowork then 1 else 0 end) end) as 'January'` but my TSQL is just not that good and the code doesn't work at all. By the way, using this my select command would be about 40 lines. I would really appreciate it if anyone could help me or give me a link to a good source so I can read up on it.
If I understand the question right, then Gordon Linoff's answer is a good beginning, but doesn't deal with "full day off". ``` select Name, sum(case when month(DateA) = 01 and TimeWorked is null then 1 else 0 end) as Jan, sum(case when month(DateA) = 02 and TimeWorked is null then 1 else 0 end) as Feb, ... sum(case when month(DateA) = 12 and TimeWorked is null then 1 else 0 end) as Dec, sum(case when TimeWorked is null then 1 else 0 end) as Sum from table T where year(DateA) = 2013 group by name ``` Does this method solve the problem?
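The conditional-aggregation shape above is easy to test locally. A sketch with Python's `sqlite3` (which has no `month()`/`year()`, so `strftime` is used instead; only two months are shown, and the sample rows are invented):

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE worklog (Name TEXT, DateA TEXT, TimeToWork INT, TimeWorked INT)")
c.executemany("INSERT INTO worklog VALUES (?,?,?,?)", [
    ('Frank', '2013-01-01', 8, 5),     # worked
    ('Frank', '2013-01-02', 8, None),  # full day off in January
    ('Frank', '2013-02-03', 8, None),  # full day off in February
    ('Jules', '2013-01-02', 4, None),  # full day off in January
])
# One SUM(CASE ...) per month counts the NULL (day off) rows; the last SUM
# ignores the month and produces the overall total.
rows = c.execute("""
    SELECT Name,
           SUM(CASE WHEN strftime('%m', DateA) = '01' AND TimeWorked IS NULL
                    THEN 1 ELSE 0 END) AS Jan,
           SUM(CASE WHEN strftime('%m', DateA) = '02' AND TimeWorked IS NULL
                    THEN 1 ELSE 0 END) AS Feb,
           SUM(CASE WHEN TimeWorked IS NULL THEN 1 ELSE 0 END) AS Total
    FROM worklog
    WHERE strftime('%Y', DateA) = '2013'
    GROUP BY Name ORDER BY Name""").fetchall()
print(rows)  # [('Frank', 1, 1, 2), ('Jules', 1, 0, 1)]
```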
The correct syntax is conditional aggregation: ``` select name, sum(case when month(datea) = 1 then timeworked else 0 end) as Jan, sum(case when month(datea) = 2 then timeworked else 0 end) as Feb, . . . sum(case when month(datea) = 12 then timeworked else 0 end) as Dec, sum(timeworked) from table t where year(datea) = 2013 group by name; ```
Get Month columns from datetime column and count entries
[ "sql", "t-sql" ]
I am trying to execute the below code... ``` select a.name, a.phone, b.mobile, b.relation, case when a.phone<>'' and b.mobile<>'' then a.phone + ' ' + b.mobile when a.phone<>'' and b.mobile='' then a.phone when a.phone='' and b.mobile<>'' then b.mobile else '' end as phone from abc a join bcdb where a.id=b.id and b.relation='a123' ``` but on executing the first case, the values are summed up instead of being concatenated... Can you please guide me?
Since you are using MySQL, you've got to use the [CONCAT](https://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_concat) function to concatenate your strings, not the `+` operator. ``` select a.name, a.phone, b.mobile, b.relation, case when a.phone<>'' and b.mobile<>'' then CONCAT(a.phone, ' ', b.mobile) when a.phone<>'' and b.mobile='' then a.phone when a.phone='' and b.mobile<>'' then b.mobile else '' end as phone from abc a join bcdb where a.id=b.id and b.relation='a123' ``` Remark: You should take care that none of the operands should be NULL.
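SQLite shows the same symptom, which makes the failure easy to reproduce: its `+` operator also performs numeric addition on strings, and the fix there is the `||` concatenation operator (in MySQL it is `CONCAT`, as in the answer above). A quick demo — the phone numbers are made up:

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
# '+' coerces both strings to numbers and adds them -- the bug in the question
summed = c.execute("SELECT '0171' + ' ' + '5551234'").fetchone()[0]
# '||' is SQL string concatenation (CONCAT(...) in MySQL)
joined = c.execute("SELECT '0171' || ' ' || '5551234'").fetchone()[0]
print(summed, joined)  # 5551405 0171 5551234
```

`'0171'` parses as 171 and `' '` as 0, so the "sum" is 171 + 0 + 5551234 — the same silent data loss the question describes.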
**UPDATED**: You seem to be storing phone and mobile columns as numbers. SO you'll have to convert them to strings for concatenation. Try this: ``` CONVERT(a.phone, CHAR) + ' ' + CONVERT(b.mobile, CHAR) ``` So your query effectively becomes this: ``` select a.name, a.phone, b.mobile, b.relation, case when a.phone<>'' and b.mobile<>'' then CONVERT(a.phone, CHAR) + ' ' + CONVERT(b.mobile, CHAR) when a.phone<>'' and b.mobile='' then a.phone when a.phone='' and b.mobile<>'' then b.mobile else '' end as phone from abc a join bcdb where a.id=b.id and b.relation='a123' ``` Hope this helps!!!
concatenation in sql not working as expected
[ "mysql", "sql", "concatenation" ]
I have two tables that I need to combine into one query (bad software), from tables with different schema. I can fake one table easily to take care of the schema differences but the kicker is that I only want a subset of rows from the second table based on (joined with) info from the first table. My last resort is making a stored proc and temporary table, but I am wondering if it's possible to do this with a query. This is in MySQL 5.5 Say I have something like: ``` Cats BadCats ---- ------- id cat_id name color weight badness length color awesomeness ``` I want to select all blue cats and bad cat values for cats that are in the blue result set. Like: ``` SELECT id,name,awesomeness from Cats where color = 'blue' UNION SELECT cat_id,null,badness from BadCats where BadCats.cat_id = Cats.id and BadCats.color = 'blue' -- or BadCats.color = Cats.color ``` Another contrived example of what I'd like to do would be: ``` SELECT id,name from Cats c JOIN BadCats bc on bc.cat_id=c.id ``` But fake BadCat's columns somehow, so the join would work. My stored proc would look something like: ``` CREATE TABLE temp SELECT id,name,awesomeness from Cats where color = 'blue' INTO temp SELECT cat_id,null,badness from BadCats where cat_id in (select distinct id from Cats) INTO temp return * from temp ``` Or potentially: ``` CREATE TABLE temp SELECT cat_id,null,badness from BadCats INTO temp -- same schema SELECT id,name,awesomeness from Cats LEFT JOIN BadCats on id=cat_id where Cats.color = 'blue' ``` Or even more potentially, a view on BadCats with the same schema as Cats... The end goal here is to get a list of blue cats and an idea of how awesome/bad they are by including rows in BadCats matching the Cats that were found, so the software can process the rows based on id.
``` SELECT id,name,awesomeness from Cats where color = 'blue' UNION SELECT cat_id,null,badness from BadCats where cat_id in ( SELECT id from Cats where color = 'blue') and color = 'blue' ```
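A self-contained check of that UNION approach, using Python's `sqlite3` and invented rows (blue cat Tom has one matching BadCats row; red Felix is filtered out of both branches):

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE Cats (id INT, name TEXT, color TEXT, awesomeness INT)")
c.execute("CREATE TABLE BadCats (cat_id INT, color TEXT, badness INT)")
c.executemany("INSERT INTO Cats VALUES (?,?,?,?)",
              [(1, 'Tom', 'blue', 7), (2, 'Felix', 'red', 9)])
c.executemany("INSERT INTO BadCats VALUES (?,?,?)",
              [(1, 'blue', 4), (2, 'red', 8)])
# First branch: the blue cats themselves.  Second branch: BadCats rows for
# those same cats, with name faked as NULL so the column lists line up.
rows = c.execute("""
    SELECT id, name, awesomeness FROM Cats WHERE color = 'blue'
    UNION
    SELECT cat_id, NULL, badness FROM BadCats
    WHERE cat_id IN (SELECT id FROM Cats WHERE color = 'blue')
      AND color = 'blue'""").fetchall()
print(rows)
```

Both rows share id 1, so downstream code can group the base row with its BadCats companion row by id, as the question's "end goal" describes.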
This doesn't need to be a UNION, this is simply an outer join. ``` SELECT id, name from Cats c LEFT OUTER JOIN BadCats bc on bc.cat_id=c.id WHERE c.color = 'blue' ``` You get all the blue cats from Cats, whether there is a matching bc.cat\_id or not. Then *if* there is a matching row in BadCats with the same id, it is joined. If there is not a matching bad cat for a given cat, all the columns from BadCats are "faked" with NULLs. See <http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/> for more on join types. It would be worth your time to learn how to use them.
In MySQL, can I SELECT with UNION but limit the rows from the second query based on the first query?
[ "mysql", "sql", "union", "mysql-5.5" ]
I am developing a stateless API that needs to support pagination. I use an Oracle database. I use Spring with MyBatis for database access. From the documentation, I understand that I can use the RowBounds class to limit the number of rows returned by the query. However, it seems that there's no special optimization done to the query to support pagination. For example, if I set RowBounds to offset 100 with 50 records, I'd expect the query to have the following added: ``` (original query with the where clause...) and ROWNUM < 150 and ROWNUM >= 100 ``` But there's nothing there, it's just the query that I defined manually. This is terrible for performance, since I might have several thousand results. What am I doing wrong? Thanks.
`Mybatis` leaves many things up to the `SQL driver` that is being used, and it appears the exact behavior surrounding`RowBounds` is one of those. See <http://mybatis.github.io/mybatis-3/java-api.html>, particularly the section that says: > Different drivers are able to achieve different levels of efficiency > in this regard. For the best performance, use result set types of > SCROLL\_SENSITIVE or SCROLL\_INSENSITIVE (in other words: not > FORWARD\_ONLY). The default is apparently `UNSET`, but you could try to use `SCROLL_SENSITIVE` as the `ResultSetType` attribute in the `select` tag and see if that helps. See <http://mybatis.github.io/mybatis-3/sqlmap-xml.html> for more info on that. If that doesn't work you can always work around the issue by ditching the use of `RowBounds` and implement a **SettingsBean** class (or similar) that your `select` tag would take as a `parameterType`, and which contains fields for the `offset` and `limit` (or perhaps `rowStart` and `rowEnd` make more sense for `Oracle`, and then you can set those at runtime as needed and interpolate them dynamically into the SQL at the time the `select` is executed. While a bit more code, you get to control the behavior exactly as you want through pure dynamic SQL. I have used an approach like this with `Mybatis` and `Postgres` and it has worked well. So you would implement your **SettingsBean** class with those fields and their getters and setters, and your `select` statement might then look something like: ``` <select id="selectFoo" parameterType="com.foo.bar.SettingsBean"> select * from foo where rownum >= #{rowStart} and rownum < #{rowEnd} </select> ```
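The "pass offset/limit in a settings bean and splice them into the SQL" idea boils down to parameterized pagination. A runnable sketch with Python's `sqlite3` (SQLite speaks LIMIT/OFFSET; on Oracle the same two bound values would feed a ROWNUM window instead, as in the question):

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
c.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 301)])

offset, limit = 100, 50  # what the mapper would receive from a settings bean
# The page bounds are bind parameters, so the database does the skipping
# instead of the client discarding rows -- the RowBounds problem avoided.
page = c.execute("SELECT id FROM t ORDER BY id LIMIT ? OFFSET ?",
                 (limit, offset)).fetchall()
print(len(page), page[0][0], page[-1][0])  # 50 101 150
```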
I found a simple work around to this issue. I had followed the Mybatis instructions @khampson recommended and was passing a RowBounds instance to the mapper with no limits being enforced. ``` RowBounds rowbounds = new RowBounds(0, resultLimit); roster = tableMapper.selectAll(rowbounds); ``` mapper java ``` public List<Row> selectAll(RowBounds rowbounds); ``` mapper xml ``` <select id="com.TableMapper.selectAll" resultMap="row" timeout="10"> SELECT * FROM table; </select> ``` simply appending "LIMIT #{param1.offset}, #{param1.limit}" to the mapper's xml produced the behavior I wanted! ``` <select id="com.TableMapper.selectAll" resultMap="row" timeout="10"> SELECT * FROM table LIMIT #{param1.offset}, #{param1.limit}; </select> ```
MyBatis RowBounds doesn't limit query results
[ "sql", "spring", "oracle", "mybatis" ]
TSQL Says : Invalid Column Name 'VehicleId' ``` SELECT (SELECT dbo.Vehicles.Id FROM dbo.Vehicles JOIN dbo.Movements ON dbo.Vehicles.Id=dbo.Movements.VehicleId WHERE dbo.Movements.Id=dbo.Alarms.MovementId) As VehicleId FROM [dbo].[Alarms] WHERE VehicleId =1 ``` What is the problem ?
I think, you are looking for something like this: ``` SELECT Alarms.* FROM dbo.Vehicles JOIN dbo.Movements ON dbo.Vehicles.Id=dbo.Movements.VehicleId JOIN dbo.Alarms ON dbo.Movements.Id=dbo.Alarms.MovementId WHERE VehicleId =1 ```
The table `Alarms` does not have a column named `VehicleId` The where clause does not use the alias ``` SELECT (SELECT dbo.Vehicles.Id FROM dbo.Vehicles JOIN dbo.Movements ON dbo.Vehicles.Id=dbo.Movements.VehicleId WHERE dbo.Movements.Id=dbo.Alarms.MovementId) As VehicleId FROM [dbo].[Alarms] WHERE VehicleId = 1 -- this is wrong ``` Rather than using a correlated subquery try multiple joins ``` SELECT dbo.Vehicles.Id FROM dbo.Vehicles JOIN dbo.Movements ON dbo.Vehicles.Id = dbo.Movements.VehicleId JOIN dbo.Alarms ON dbo.Alarms.MovementId = dbo.Movement.Id WHERE dbo.Movements.VehicleId = 1 ```
SQL Subquery, Invalid Column Name
[ "sql", "sql-server" ]
Table data looks like: ``` EventID | MPID | rundate | Horizon | otherData 1 | 1 | 23-Jun-2014 | 360 | other value 1 | 1 | 23-Jun-2014 | 365 | pther value 1 | 1 | 23-Jun-2014 | 300 | pther value 1 | 1 | 22-Jun-2014 | 700 | pther value 1 | 2 | 23-Jun-2014 | 400 | other value 1 | 2 | 23-Jun-2014 | 340 | oth 2 | 3 | 23-Jun-2014 | 360 | pther value 2 | 3 | 23-Jun-2014 | 300 | pther value 2 | 3 | 22-Jun-2014 | 365 | pther value ``` I want to select the max rundate for each event and marketplace group and then select max horizon among that group and then print the entire row. Desired Result is : ``` EventID | MPID | rundate | Horizon | otherData 1 | 1 | 23-Jun-2014 | 365 | pther value 1 | 2 | 23-Jun-2014 | 400 | other value 2 | 3 | 23-Jun-2014 | 360 | pther value ``` Please let me know the SQL query for this. I tried following query but its not working: ``` SELECT * from dsie_result_overalls where id in ( SELECT k.id from dsie_result_overalls k, ( SELECT a.event_id, a.marketplaceid, MAX(a.horizon) as horizon FROM dsie_result_overalls a, ( SELECT id, event_id, marketplaceid, MAX(rundate) AS rundate FROM dsie_result_overalls GROUP BY event_id, marketplaceid ) b WHERE a.event_id = b.event_id AND a.marketplaceid = b.marketplaceid AND a.rundate = b.rundate GROUP BY a.event_id, a.marketplaceid ) l WHERE k.event_id = l.event_id AND k.marketplaceid = l.marketplaceid AND k.horizon = l.horizon ); ``` It selects the multiple rundate for max horizon.
Try this query ``` Select T.* From Tbl T JOIN ( Select Max(S.Horizon) MaxHorizon,Max(S.rundate) As dte,S.EventID,S.MPID From Tbl S Join ( Select T1.EventID,Max(T1.rundate) As Maxrundate,T1.MPID From Tbl T1 Group By T1.EventID,T1.MPID ) JR On S.rundate = JR.Maxrundate AND S.EventID = JR.EventID AND S.MPID = JR.MPID Group By S.MPID,S.EventID )R ON T.Horizon = R.MaxHorizon AND T.EventID = R.EventID AND T.MPID = R.MPID AND T.rundate = R.dte ``` **[Fiddle Demo](http://sqlfiddle.com/#!2/2ebc5/7)** --- Output would be --- ``` EventID | MPID | rundate | Horizon | otherData 1 | 1 | 23-Jun-2014 | 365 | pther value 1 | 2 | 23-Jun-2014 | 400 | other value 2 | 3 | 23-Jun-2014 | 360 | pther value ```
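The same two-level greatest-per-group logic can be checked against the question's sample data with Python's `sqlite3` (dates stored as ISO strings so `MAX(rundate)` compares correctly; the inner join keeps only rows on the newest rundate per group, the outer aggregate takes the highest horizon within it):

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE results (EventID INT, MPID INT, rundate TEXT, Horizon INT)")
c.executemany("INSERT INTO results VALUES (?,?,?,?)", [
    (1, 1, '2014-06-23', 360), (1, 1, '2014-06-23', 365), (1, 1, '2014-06-23', 300),
    (1, 1, '2014-06-22', 700), (1, 2, '2014-06-23', 400), (1, 2, '2014-06-23', 340),
    (2, 3, '2014-06-23', 360), (2, 3, '2014-06-23', 300), (2, 3, '2014-06-22', 365),
])
rows = c.execute("""
    SELECT t.EventID, t.MPID, t.rundate, MAX(t.Horizon)
    FROM results t
    JOIN (SELECT EventID, MPID, MAX(rundate) AS maxdate
          FROM results GROUP BY EventID, MPID) m
      ON t.EventID = m.EventID AND t.MPID = m.MPID AND t.rundate = m.maxdate
    GROUP BY t.EventID, t.MPID, t.rundate
    ORDER BY t.EventID, t.MPID""").fetchall()
print(rows)  # matches the desired output table
```

Note that the (1, 1) group's horizon 700 is discarded even though it is the largest overall, because it belongs to the older rundate — the detail the question's own attempt got wrong.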
The proper way... ``` SELECT x.* FROM dsie_result_overalls x JOIN ( SELECT a.eventid , a.mpid , a.rundate , MAX(a.horizon) max_horizon FROM dsie_result_overalls a JOIN ( SELECT eventid , mpid , MAX(rundate) max_rundate FROM dsie_result_overalls GROUP BY eventid , mpid ) b ON b.eventid = a.eventid AND b.mpid = a.mpid AND b.max_rundate = a.rundate GROUP BY a.eventid , a.mpid , a.rundate ) y ON y.eventid = x.eventid AND y.mpid = x.mpid AND y.rundate = x.rundate AND y.max_horizon = x.horizon; ``` The hack way... ``` SELECT * FROM ( SELECT * FROM dsie_result_overalls ORDER BY eventid , mpid , rundate DESC , horizon DESC ) x GROUP BY eventid , mpid; ``` The old-fashioned way... ``` SELECT x.* FROM dsie_result_overalls x LEFT JOIN dsie_result_overalls y ON y.eventid = x.eventid AND y.mpid = x.mpid AND (y.rundate > x.rundate OR (y.rundate = x.rundate AND y.horizon > x.horizon)) WHERE y.id IS NULL; ```
Query to find result with max rundate with max horizon
[ "mysql", "sql", "max" ]
I have a long-running report written in SQL\*Plus with a couple of SELECTs. I'd like to change the transaction isolation level to get a consistent view of the data. I found two possible solutions: ``` SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; ``` and ``` SET TRANSACTION READ ONLY; ``` Which one do I use for a report and why? Any performance implications? Any implications for other sessions (this is a production database)? Please note that the question is specifically about the two options above, not about the various isolation levels. Does `SERIALIZABLE` block changes to a table that is queried for my report? I would naively assume that `READ ONLY` is a little bit less stressful for the database, as there are no data changes to be expected. Is this true, and does Oracle take advantage of that?
This article from the Oracle documentation gives a lot of detailed info about the different transaction isolation levels. <http://docs.oracle.com/cd/B10501_01/server.920/a96524/c21cnsis.htm> In your example, it sounds like you are wanting `Serializable`. Oracle does not block when reading data, so using serializable in your read-only query should not block queries or crud operations in other transactions. As mentioned in other answers, using the `read only` isolation level is similar to using `serializable`, except that `read only` does not allow inserts, updates, or deletes. However, since `read only` is not an SQL standard and `serializable` is, then I would use `serializable` in this situation since it should accomplish the same thing, will be clear for other developers in the future, and because Oracle provides more detailed documentation about what is going with "behind the scenes" with the `serializable` isolation level. Here is some info about serializable, from the article referenced above (I added some comments in square brackets for clarification): > Serializable isolation mode provides somewhat more consistency by > **protecting against phantoms [reading inserts from other transactions] and nonrepeatable reads [reading updates/deletes from other transactions]** and can be > important where a read/write transaction executes a query more than > once. > > Unlike other implementations of serializable isolation, which lock > blocks for read as well as write, **Oracle provides nonblocking queries [non-blocking reads]** > and the fine granularity of row-level locking, both of which reduce > write/write contention.
In Oracle, you can really choose between `SERIALIZABLE` and `READ COMMITTED`. `READ ONLY` is the same as serializable, in regard to the way it sees other sessions' changes, with the exception it does not allow table modifications. With `SERIALIZABLE` or `READ ONLY` your queries won't see the changes made to the database after your serializable transaction had begun. With `READ COMMITTED`, your queries won't see the changes made to the database during the queries' lifetime. ``` SERIALIZABLE READ COMMITTED ANOTHER SESSION (or READ ONLY) Change 1 Transaction start Transaction start Change 2 Query1 Start Query1 Start ... ... Change 3 Query1 End Query1 End Query2 Start Query2 Start ... ... Change 4 Query2 End Query2 End ``` With serializable, query1 and query2 will only see change1. With read committed, query1 will see changes 1 and 2, and query2 will see changes 1 through 3.
For a long running report, do I use a read only or serializable transaction?
[ "sql", "oracle" ]
This is the result of separating a single table in two: ``` Table users: user_id (pk, ai) email password last_login Table data: user_id (fk to users.user_id) data_1 data_2 ``` To select a single record when there was only one table: ``` SELECT users.email, users.password, data.data_1, data.data_2 FROM users,data WHERE users.email='$user_email' AND users.user_id=data.user_id"; ``` How do I get all records from both tables having the rows connected by users.user\_id=data.user\_id? ``` Row1: email, password, data_1, data2 Row2: email, password, data_1, data2 Row3: email, password, data_1, data2 Row4: email, password, data_1, data2 ... ```
Using explicit `join` syntax could help you. Rewrite your query to: ``` SELECT users.email, users.password, data.data_1, data.data_2 FROM users INNER JOIN data ON users.user_id=data.user_id WHERE users.email='$user_email' ``` and get all rows without a WHERE condition: ``` SELECT users.email, users.password, data.data_1, data.data_2 FROM users INNER JOIN data ON users.user_id=data.user_id ``` It separates the concerns: conditions that join tables from conditions that restricts the result set.
have you tried this? ``` SELECT users.email, users.password, data.data1, data.data2 FROM users,data WHERE users.user_id=data.user_id ``` or this? ``` SELECT users.email, users.password, data.data1, data.data2 FROM users inner join data on users.user_id=data.user_id ```
How to get all data from 2 tables using foreign key
[ "mysql", "sql" ]
I know the importance of indexes and how the order of joins can change performance. I've done a bunch of reading related to multi-column indexes and haven't found the answer to my question. I'm curious whether, if I create a multi-column index, the order in which the columns are specified matters at all. My guess is that it would not, and that the engine would treat them as a group, where ordering doesn't matter. But I wish to verify. For example, from MySQL's website (<http://dev.mysql.com/doc/refman/5.0/en/multiple-column-indexes.html>) ``` CREATE TABLE test ( id INT NOT NULL, last_name CHAR(30) NOT NULL, first_name CHAR(30) NOT NULL, PRIMARY KEY (id), INDEX name (last_name,first_name) ); ``` Would there be any benefit in any cases where the following would be better, or is it equivalent? ``` CREATE TABLE test ( id INT NOT NULL, last_name CHAR(30) NOT NULL, first_name CHAR(30) NOT NULL, PRIMARY KEY (id), INDEX name (first_name,last_name) ); ``` Specifically: ``` INDEX name (last_name,first_name) ``` vs ``` INDEX name (first_name,last_name) ```
When discussing multi-column indexes, I use an analogy to a telephone book. A telephone book is basically an index on last name, then first name. So the sort order is determined by which "column" is first. Searches fall into a few categories: 1. If you look up people whose last name is Smith, you can find them easily because the book is sorted by last name. 2. If you look up people whose first name is John, the telephone book doesn't help because the Johns are scattered throughout the book. You have to scan the whole telephone book to find them all. 3. If you look up people with a specific last name Smith and a specific first name John, the book helps because you find the Smiths sorted together, and within that group of Smiths, the Johns are also found in sorted order. If you had a telephone book sorted by first name then by last name, the sorting of the book would assist you in the above cases #2 and #3, but not case #1. That explains cases for looking up exact values, but what if you're looking up by ranges of values? Say you wanted to find all people whose first name is John and whose last name begins with 'S' (Smith, Saunders, Staunton, Sherman, etc.). The Johns are sorted under 'J' within each last name, but if you want all Johns for all last names starting with 'S', the Johns are not grouped together. They're scattered again, so you end up having to scan through all the names with last name starting with 'S'. Whereas if the telephone book were organized by first name then by last name, you'd find all the Johns together, then within the Johns, all the 'S' last names would be grouped together. So the order of columns in a multi-column index definitely matters. One type of query may need a certain column order for the index. If you have several types of queries, you might need several indexes to help them, with columns in different orders. 
You can read my presentation [How to Design Indexes, Really](http://www.slideshare.net/billkarwin/how-to-design-indexes-really) for more information, or the [video](https://www.youtube.com/watch?v=ELR7-RdU9XU).
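You can also watch the telephone-book effect directly with SQLite's query planner (bundled with Python): EXPLAIN QUERY PLAN reports SEARCH when the index prefix is usable and SCAN when it is not. The table and index names here are invented:

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, last_name TEXT, first_name TEXT)")
c.execute("CREATE INDEX name_idx ON test (last_name, first_name)")

def plan(sql):
    # last column of each EXPLAIN QUERY PLAN row is the human-readable detail
    return " ".join(row[-1] for row in c.execute("EXPLAIN QUERY PLAN " + sql))

by_last = plan("SELECT id FROM test WHERE last_name = 'Smith'")   # index prefix usable
by_first = plan("SELECT id FROM test WHERE first_name = 'John'")  # no usable prefix
print(by_last)   # SEARCH ... USING ... INDEX name_idx (last_name=?)
print(by_first)  # SCAN ...
```

The exact wording of the plan text varies between SQLite versions, but the SEARCH/SCAN distinction — lookup versus full pass — is stable and mirrors cases #1 and #2 above.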
The two indexes are different. This is true in MySQL and in other databases. MySQL does a pretty good job of explaining the different in the [documentation](http://dev.mysql.com/doc/refman/5.7/en/multiple-column-indexes.html). Consider the two indexes: ``` create index idx_lf on name(last_name, first_name); create index idx_fl on name(first_name, last_name); ``` Both of these should work equally well on: ``` where last_name = XXX and first_name = YYY ``` idx\_lf will be optimal for the following conditions: ``` where last_name = XXX where last_name like 'X%' where last_name = XXX and first_name like 'Y%' where last_name = XXX order by first_name ``` idx\_fl will be optimal for the following: ``` where first_name = YYY where first_name like 'Y%' where first_name = YYY and last_name like 'X%' where first_name = XXX order by last_name ``` For many of these cases, both indexes *could possibly* be used, but one is optimal. For instance, consider idx\_lf with the query: ``` where first_name = XXX order by last_name ``` MySQL could read the entire table using idx\_lf and then do the filtering after the `order by`. I don't think this is an optimization option in practice (for MySQL), but that can happen in other databases.
Does Order of Fields of Multi-Column Index in MySQL Matter
[ "mysql", "sql", "performance", "indexing" ]
I have a "stock market" data table with `contract` values. I want to get the volume (number of trades executed per day), and the close price, with the close price being the last recorded contract price that trading day. ``` CREATE TABLE IF NOT EXISTS `contracts` ( `id` int(11) NOT NULL AUTO_INCREMENT, `price` decimal(5,2) NOT NULL, `created_at` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1502 ; ``` I'm able to get the data I want (mostly!) by doing a query like this: ``` SELECT count(id) as volume, price, DATE(created_at) FROM `contracts` GROUP BY DATE(created_at) ``` However, I want the close price-- the price returned is the price of whatever record happens to be last by ID. Is there a way to get the last `price` value set by the `created_at` column?
One solution is to use a sub-query to get that price: ``` SELECT count(ctr.id) as volume, (SELECT cc.price FROM contracts as cc WHERE DATE(cc.created_at) = DATE(ctr.created_at) ORDER BY cc.created_at DESC LIMIT 1) as price, DATE(ctr.created_at) FROM `contracts` as ctr GROUP BY DATE(ctr.created_at) ``` **EDIT:** Changed the query to use `LIMIT` and select the price. I used `ORDER BY cc.created_at DESC` because it will get the row with the latest date.
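Here is that correlated sub-query running end-to-end on invented tick data, via Python's `sqlite3` (SQLite's `DATE()` plays the role of MySQL's `DATE()` here):

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE contracts (id INTEGER PRIMARY KEY, price REAL, created_at TEXT)")
c.executemany("INSERT INTO contracts (price, created_at) VALUES (?,?)", [
    (10.0, '2014-06-23 09:30'),  # first trade of the day
    (12.5, '2014-06-23 15:59'),  # last trade of the day -> the close
    (11.0, '2014-06-24 10:00'),
])
# The sub-query re-scans the same day's rows, newest first, and LIMIT 1
# picks the true closing price rather than an arbitrary group member.
rows = c.execute("""
    SELECT COUNT(ctr.id) AS volume,
           (SELECT cc.price FROM contracts cc
            WHERE DATE(cc.created_at) = DATE(ctr.created_at)
            ORDER BY cc.created_at DESC LIMIT 1) AS close_price,
           DATE(ctr.created_at) AS day
    FROM contracts ctr
    GROUP BY DATE(ctr.created_at)
    ORDER BY day""").fetchall()
print(rows)  # [(2, 12.5, '2014-06-23'), (1, 11.0, '2014-06-24')]
```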
``` SELECT id, count(id) AS volume, DATE(created_at), (SELECT price FROM `contracts` as innerContracts WHERE innerContracts.id = outerContracts.id ORDER BY created_at DESC LIMIT 1) FROM `contracts` AS outerContracts GROUP BY DATE(created_at) ```
SQL query with GROUP BY and conditions
[ "mysql", "sql", "group-by" ]
I have a table called `Users` which is currently holding data on both `Customers` and `Staff`. It has their names and emails and passwords etc. It also has a field called `TypeOfUserID` which holds a value to say what type of user they are .e.g Customer or Staff Would it be better to have two separate tables: `Customers` and `Staff`? It seems like duplication because the fields are the same for both types of user. The only field I can get rid of is the `TypeOfUserID` column. However, having them both in one table called `Users` means that in my front-end application I have to keep adding a clause to check what type of user they are. If for any reason I need to allow a different type of user access e.g. `External Supplier` then I have to manage the addition of `TypeOfUserID` in multiple places in the WHERE clauses.
# Short Answer: It depends. If your current needs are met, and you don't foresee this model needing to be changed for a long time / it would be easy to change if you had to, stick with it. # Longer answer: If staff members are just a special case of user, I don't see any reason you'd want to change anything about the database structure. Yes, for staff-specific stuff you'd need to be sure the person was staff, but I don't really see any way around that- you always have to know they're staff, first. If, however, you want finer-grained permissions than binary (a person can belong to the 'staff' group but that doesn't necessarily say whether or not they're in the users' group, for instance), you might want to change the database. The easiest way to do that, of course, would be to have a unique ID associated with each user, and use that key to look up their group permissions in a different table. Something like: ``` uid | group ------------ 1 | users 1 | staff 2 | users 3 | staff 4 | users 5 | admin ``` Although you may or may not want an actual string for each group; most likely you'd want another level of indirection by having a 'groups' table. So, that table above would be a 'group\_membership' table, and it could look more like: ``` uid | gid ------------ 1 | 1 1 | 2 2 | 1 3 | 2 4 | 1 5 | 3 ``` To go along with it, you'd have the 'groups' table, which would be: ``` gid | group ------------- 1 | users 2 | staff 3 | admin ``` But, again, that's only if you're imagining a larger number of roles and you want more flexibility. If you only ever plan on having 'users' and 'staff' and staff are just highly privileged users, all of that extra stuff would be a waste of your time. 
However, if you want really fine grained permissions, with maximum flexibility, you can use the above to make them happen via a 'permissions' table: ``` gid | can_create_user | can_fire_people | can_ban_user ------------------------------------------------------- 1 | false | false | false 2 | true | false | true 3 | true | true | true ``` ## Some Example Code Here's a working PostgreSQL example of getting permissions `can_create_user` and `can_fire_people` for a user with uid 1: ``` SELECT bool_or(can_create_user) AS can_create_user, bool_or(can_fire_people) AS can_fire_people FROM permissions WHERE gid IN (SELECT gid FROM group_membership WHERE uid = 1); ``` Which would return: ``` can_create_user | can_fire_people ---------------------------------- true | false ``` because user 1 is in groups 1 and 2, and group 2 has the `can_create_user` permission, but neither group has the `can_fire_people` permission. ((I know you're using SQL Server, but I only have access to a PostgreSQL server at the moment. Sorry about that. The difference should be minor, though.) ## Notes You'll want to make sure that `uid` and `gid` are primary keys in the `users` and `groups` table, and that there are foreign key constraints for those values in every other table which uses them; you don't want nonexistent groups to have permissions, or nonexistent users to be accidentally added to groups. # Alternatively A graph database solves this problem pretty elegantly; you'd simply create edges linking users to groups, and edges linking groups to permissions. If you want to work with a technology that's currently sexy / buzzword compliant, might want to give that a try, depending on how enormous of a change that'd be. # Further information The phrase you'll want to google is "access control". You'll probably want to implement access control lists (as outlined above) or something similar. 
Since this is primarily a security-related topic, you might also want to ask this question on [sec.se](https://security.stackexchange.com), or at least look around there for related answers.
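The permission lookup ports to other engines too; SQLite has no `bool_or`, but `MAX` over 0/1 flags behaves the same way. A runnable version of the uid-1 example, with the same invented table contents as above:

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE group_membership (uid INT, gid INT)")
c.execute("CREATE TABLE permissions (gid INT, can_create_user INT, can_fire_people INT)")
c.executemany("INSERT INTO group_membership VALUES (?,?)", [(1, 1), (1, 2)])
c.executemany("INSERT INTO permissions VALUES (?,?,?)",
              [(1, 0, 0),   # users
               (2, 1, 0),   # staff
               (3, 1, 1)])  # admin
# MAX over the user's groups = "any group grants it" (bool_or in PostgreSQL)
row = c.execute("""
    SELECT MAX(can_create_user), MAX(can_fire_people)
    FROM permissions
    WHERE gid IN (SELECT gid FROM group_membership WHERE uid = 1)""").fetchone()
print(row)  # (1, 0): can create users, cannot fire people
```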
Even though they look similar, they are logically from different areas. You will never need a union between those tables. But as your application develops, you will need to add more and more specific fields to these tables, and they will become more different than similar.
Does it follow best-practice DB design to mix staff and customer details in 1 table?
[ "sql", "sql-server" ]
I have a table that contains basically an error log. These errors arrive based on various factors and have various severities. I am trying to put together a query that totals each severity per hour over a 24 hour period. I have it working, except for one small item. The results are sorted by the Hour, which always starts at 0. What I need is for it to be in true time sequence, meaning that if the query is run at noon on 6/23, the query needs to have the first record at noon 6/22. Sample table data: ID, created, severity

```
FL41988194-51133   4/13/2014 20:21   critical
JO03982444-74849   4/14/2014 12:46   major
JO03982444-74852   4/14/2014 12:46   major
JO03982444-74855   4/14/2014 12:46   major
BY79194841-06182   4/19/2014 19:54   major
BY79194841-06183   4/19/2014 19:54   major
BY79194841-06184   4/19/2014 19:54   major
TV90425333-88384   5/20/2014 7:02    major
FZ23706935-25024   6/7/2014 14:56    major
SY05532197-47119   6/12/2014 3:57    major
```

Here is my current code:

```
select [Hour],
       ISNULL([Critical],0) as Critical,
       ISNULL([Major],0) as Major,
       ISNULL([Minor],0) as Minor,
       ISNULL([Warning],0) as Warning,
       ISNULL([Information],0) as Information,
       ISNULL([Critical],0) + ISNULL([Warning],0) + ISNULL([Major],0)
         + ISNULL([Minor],0) + ISNULL([Information],0) as [Total]
from (SELECT DATEPART(hh, created) as 'Hour',
             [severity],
             count([id]) as incidents
      FROM [ALARM_TRANSACTION_SUMMARY]
      where created >= GETDATE()-1
      group by DATEPART(hh, created), severity
      Union ALL
      select NumberValue, NULL, NULL
      From NumberTable
     ) PS
PIVOT (SUM (incidents)
       FOR severity IN ([Critical], [Information], Major, Minor, Warning)
      ) AS pvt
```

The current output would look like this:

```
Hour Critical Major Minor Warning Information Total
---- -------- ----- ----- ------- ----------- -----
0    0        0     0     0       0           0
1    0        0     0     0       0           0
2    0        0     0     0       0           0
3    0        1     0     0       0           1
4    0        0     0     0       0           0
5    0        0     0     0       0           0
6    0        0     0     0       0           0
7    0        1     0     0       0           1
8    0        0     0     0       0           0
9    0        0     0     0       0           0
10   0        0     0     0       0           0
11   0        0     0     0       0           0
12   0        3     0     0       0           3
13   0        0     0     0       0           0
14   0        1     0     0       0           1
15   0        0     0     0       0           0
16   0        0     0     0       0           0
17   0        0     0     0       0           0
18   0        0     0     0       0           0
19   0        2     0     0       0           2
20   1        0     0     0       0           1
21   0        0     0     0       0           0
22   0        0     0     0       0           0
23   0        0     0     0       0           0
```

As I said above, this does give me the hourly totals, but the totals are in the wrong sort order... so if I ran this at 2pm, I need the output to look like this:

```
Hour Critical Major Minor Warning Information Total
---- -------- ----- ----- ------- ----------- -----
14   0        1     0     0       0           1
15   0        0     0     0       0           0
16   0        0     0     0       0           0
17   0        0     0     0       0           0
18   0        0     0     0       0           0
19   0        2     0     0       0           2
20   1        0     0     0       0           1
21   0        0     0     0       0           0
22   0        0     0     0       0           0
23   0        0     0     0       0           0
0    0        0     0     0       0           0
1    0        0     0     0       0           0
2    0        0     0     0       0           0
3    0        1     0     0       0           1
4    0        0     0     0       0           0
5    0        0     0     0       0           0
6    0        0     0     0       0           0
7    0        1     0     0       0           1
8    0        0     0     0       0           0
9    0        0     0     0       0           0
10   0        0     0     0       0           0
11   0        0     0     0       0           0
12   0        3     0     0       0           3
13   0        0     0     0       0           0
```

One other thing. This is feeding a report that requires the data to be presented as above (the charting software isn't smart enough to use the raw data and subtotal by the timestamps).
Try this query.

```
select [Hour],
       ISNULL([Critical],0) as Critical,
       ISNULL([Major],0) as Major,
       ISNULL([Minor],0) as Minor,
       ISNULL([Warning],0) as Warning,
       ISNULL([Information],0) as Information,
       ISNULL([Critical],0) + ISNULL([Warning],0) + ISNULL([Major],0)
         + ISNULL([Minor],0) + ISNULL([Information],0) as [Total]
from (SELECT DATEPART(hh, created) as 'Hour',
             (DATEPART(hh, created) - DATEPART(hh, GETDATE()) + 24) % 24 AS SortHour,
             [severity],
             count([id]) as incidents
      FROM [ALARM_TRANSACTION_SUMMARY]
      where created >= GETDATE()-1
      group by DATEPART(hh, created), severity
      Union ALL
      select NumberValue,
             (NumberValue - DATEPART(hh, GETDATE()) + 24) % 24,
             NULL, NULL
      From NumberTable
     ) PS
PIVOT (SUM (incidents)
       FOR severity IN ([Critical], [Information], Major, Minor, Warning)
      ) AS pvt
order by SortHour
```

`SortHour` is the number of hours elapsed since the current hour, so the current hour sorts first and the sequence wraps past midnight.
```
(original query)
ORDER BY CASE WHEN [Hour] >= DATEPART(HOUR, GETDATE())
              THEN [Hour]
              ELSE [Hour] + 24
         END
```
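The wrap-around ordering both answers rely on can be sanity-checked outside SQL. A minimal Python sketch (the helper name is mine, not from either answer) of the same idea as the `ORDER BY CASE` expression above:

```python
# Order a 24-hour sequence so it starts at the current hour.
def sort_key(hour, current_hour):
    # Hours at or after current_hour keep their value; earlier hours
    # are pushed past 23 by adding 24 -- exactly what the CASE does.
    return hour if hour >= current_hour else hour + 24

current = 14  # pretend the query runs at 2pm
ordered = sorted(range(24), key=lambda h: sort_key(h, current))
print(ordered)  # [14, 15, ..., 23, 0, 1, ..., 13]
```

The same key could be written as `(hour - current_hour) % 24`, which is the modular form used by the first answer.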
mssql calculate total by hour
[ "", "sql", "sql-server", "" ]
Given the following SQL

```
SELECT T1."PN" as "Part Number",
       T2."QTY" as "Quantity",
       T2."BRANCH" AS "Location",
       T3."STOCK" as "Bin"
FROM "XYZ"."PARTS" T1,
     "XYZ"."BALANCES" T2,
     "XYZ"."DETAILS" T3
WHERE (T2."PART_ID" = T1."PART_ID")
  AND (T3."PART_ID" = T1."PART_ID")
ORDER BY "Part Number" ASC, "Location" ASC
```

We get results such as

```
YZ-7-CA-080   88   01   STOCK7
YZ-7-CA-080   88   01   03482
YZ-7-CA-080   88   01   A8K2D
```

For location 01, there are 88 pieces of part number YZ-7-CA-080 and they can be found in any of the 3 bins STOCK7, 03482, or A8K2D. The location value refers to a common branch like a warehouse, and the quantity is for the entire warehouse, not the bins. I need to change the output so we write out one entry instead, with the bins as a list:

```
YZ-7-CA-080   88   01   STOCK7,03482,A8K2D
```

So I am looking for a good way to do this refactoring of the results in SQL. I feel there should be a way to use a function or subquery or something like that, and I am hoping there is a single multi-db solution, but I assume there could be a need for different solutions on different dbs. (Oracle is the primary db we are trying to solve this for; the secondary priority db we need this for is SQL Server.)

Note: There are multiple locations per part number, so it's not enough to set distinct on the first column to reduce the multiple part number entries. There would be multiple rows for the same part number at location 02, also with the same issue.

Ideas?
For older versions, I guess **WM\_CONCAT** would work. Modifying Gordon Linoff's query:

```
SELECT T1."PN" as "Part Number",
       max(T2."QTY") as "Quantity",
       T2."BRANCH" AS "Location",
       WM_CONCAT(T3."STOCK") as Bins
FROM "XYZ"."PARTS" T1 JOIN
     "XYZ"."BALANCES" T2
     ON T2."PART_ID" = T1."PART_ID" JOIN
     "XYZ"."DETAILS" T3
     ON T3."PART_ID" = T1."PART_ID"
GROUP BY t1.PN, t2.Branch
ORDER BY "Part Number", "Location";
```

Also refer to [this](https://stackoverflow.com/a/470663/3492139) link for an alternate approach. Including the answer from the link for reference:

```
create table countries ( country_name varchar2 (100));
insert into countries values ('Albania');
insert into countries values ('Andorra');
insert into countries values ('Antigua');

SELECT SUBSTR (SYS_CONNECT_BY_PATH (country_name , ','), 2) csv
FROM (SELECT country_name,
             ROW_NUMBER () OVER (ORDER BY country_name) rn,
             COUNT (*) OVER () cnt
      FROM countries)
WHERE rn = cnt
START WITH rn = 1
CONNECT BY rn = PRIOR rn + 1;

CSV
--------------------------
Albania,Andorra,Antigua
```
You can do this in Oracle using `listagg()` (the second argument is the delimiter):

```
SELECT T1."PN" as "Part Number",
       max(T2."QTY") as "Quantity",
       T2."BRANCH" AS "Location",
       listagg(T3."STOCK", ',') within group (order by t3.stock) as Bins
FROM "XYZ"."PARTS" T1 JOIN
     "XYZ"."BALANCES" T2
     ON T2."PART_ID" = T1."PART_ID" JOIN
     "XYZ"."DETAILS" T3
     ON T3."PART_ID" = T1."PART_ID"
GROUP BY t1.PN, t2.Branch
ORDER BY "Part Number", "Location";
```

I also fixed the query to use proper explicit join syntax.
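The same aggregation is easy to prototype with SQLite's `group_concat`, the rough analogue of Oracle's `LISTAGG` and SQL Server's string aggregation. A small sketch (table and column names are simplified stand-ins for the question's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE details (part_id TEXT, stock TEXT);
    INSERT INTO details VALUES
        ('YZ-7-CA-080', 'STOCK7'),
        ('YZ-7-CA-080', '03482'),
        ('YZ-7-CA-080', 'A8K2D');
""")
# One row per part, with the bins collapsed into a comma-separated list
part, bins = con.execute(
    "SELECT part_id, group_concat(stock, ',') FROM details GROUP BY part_id"
).fetchone()
print(part, bins)
```

Note that unlike `LISTAGG ... WITHIN GROUP`, SQLite's `group_concat` does not guarantee element order, so this is a prototype of the shape of the result rather than of its ordering.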
How to group multiple values into a single column in SQL
[ "", "sql", "sql-server", "oracle", "" ]
I am having a problem with an SQL search. I am trying to search by datetime, because in my project time is very important. When I use a query like this

```
where registration_date = '2014-06-19 00:12:08.940'
```

it works, but as soon as I try to search using

```
where registration_date like '2014-06-19%'
```

I get no results. I want to search for all the registrations that took place on a specific date, including time; I am assuming that I don't know the specific time...
You will have to cast to `date` to compare two dates on equality, e.g.

```
CAST(registration_date AS DATE) = CAST('2014-06-19' AS DATE)
```

An alternative:

```
DECLARE @DateToFilter DATE = (CAST('2014-06-19' AS DATE))
[..]
WHERE registration_date >= @DateToFilter
  AND registration_date < DATEADD(d, 1, @DateToFilter)
```

EDIT: Regarding performance, and *assuming* an index exists on the date column, the index can be used in both cases (yes, `CAST(x AS DATE)` is SARGable). There is a very interesting analysis of the performance differences [here](https://dba.stackexchange.com/questions/51340/should-i-join-datetime-to-a-date-using-cast-or-range) and [here](https://dba.stackexchange.com/questions/34047/cast-to-date-is-sargable-but-is-it-a-good-idea). As always, test.
Simply search for everything that's on or after that date, and earlier than the next day.

```
registration_date >= '2014-06-19' AND registration_date < '2014-06-20'
```
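The half-open range trick generalizes beyond SQL. A small Python illustration of the same bounds logic (the sample timestamps here are invented for the demo):

```python
from datetime import datetime, timedelta

registrations = [
    datetime(2014, 6, 18, 23, 59, 59),
    datetime(2014, 6, 19, 0, 12, 8),
    datetime(2014, 6, 19, 14, 30, 0),
    datetime(2014, 6, 20, 0, 0, 0),   # midnight of the NEXT day: excluded
]
start = datetime(2014, 6, 19)         # inclusive lower bound
end = start + timedelta(days=1)       # exclusive upper bound
on_day = [r for r in registrations if start <= r < end]
print(len(on_day))  # 2
```

The exclusive upper bound is what makes this safe for any time precision: no `23:59:59.997`-style edge cases.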
How to Search datetime in Sql server 2008R2
[ "", "sql", "sql-server", "datetime", "" ]
Tables are:

```
Patient(PatientID, FirstName, LastName)
Disease(DiseaseID, DiseaseName)
PatientDisease(PatientID, DateID, DiseaseID)
```

It's clear that the *Patient* and *Disease* tables have *PatientID* and *DiseaseID* as PKs, but what about the *PatientDisease* table, given that each patient in a single day can be diagnosed with a single disease?
I think that here you should use a composite PK with all 3 fields: PatientId, DateId, DiseaseId. Whether or not only one disease per day can be diagnosed, all 3 fields are mandatory for a patient disease record.
Either you can go with a composite key, or add another column, a patient-disease id, to work as a unique row number.
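Given the stated rule (one diagnosis per patient per day), a composite PK on (PatientID, DateID) alone is already enough to enforce it. A runnable sketch with SQLite (the schema is a simplified stand-in for the question's tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE patient_disease (
        patient_id INTEGER,
        date_id    INTEGER,
        disease_id INTEGER,
        PRIMARY KEY (patient_id, date_id)  -- one diagnosis per patient per day
    )
""")
con.execute("INSERT INTO patient_disease VALUES (1, 20140623, 10)")
try:
    # Same patient, same day, different disease -> rejected by the PK
    con.execute("INSERT INTO patient_disease VALUES (1, 20140623, 11)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

If the one-per-day rule could ever be relaxed, adding DiseaseID to the key (as the first answer suggests) keeps the table valid without a schema change.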
Identify the PK in given tables
[ "", "mysql", "sql", "database", "" ]
I want to calculate average hours per day for a week. E.g. my weekly hours are 44:43 (hh:mm) and my working days are 5, then it should return **8:57**:

```
44:43 / 5 ==> 8:57
```

How can I achieve this in SQL Server?
Here's the query that I have constructed as per your requirements:

```
DECLARE @TotalHours VARCHAR(5) = '44:43'

SELECT CAST(CAST(AvgMinutes AS INT)/60 AS VARCHAR) + ':' +
       CAST(CAST(AvgMinutes AS INT)%60 AS VARCHAR)
FROM (
    SELECT CEILING((CAST(LEFT(@TotalHours, CHARINDEX(':', @TotalHours) - 1) AS INT) * 60
                  + CAST(RIGHT(@TotalHours, LEN(@TotalHours) - CHARINDEX(':', @TotalHours)) AS INT)) / 5.0) AvgMinutes
) AS TotalMinutes
```
You can convert the time to seconds, then divide, and convert back to time format. Assuming that `WeeklyHours` are in a `time` format, something like this: ``` select dateadd(second, datediff(second, 0, WeeklyHours) / WorkingDays, cast(0 as time)) ``` This expression should definitely work if `WeeklyHours` is stored as a `datetime`.
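The underlying arithmetic in both answers is the same: convert to a single unit (minutes or seconds), divide, and convert back. A quick Python version of that calculation (the function name and rounding-up choice mirror the first answer's `CEILING`, and are assumptions, not part of either SQL answer):

```python
import math

def average_hhmm(total: str, days: int) -> str:
    hours, minutes = map(int, total.split(":"))
    avg_minutes = math.ceil((hours * 60 + minutes) / days)  # round up, like CEILING
    return f"{avg_minutes // 60}:{avg_minutes % 60:02d}"

print(average_hhmm("44:43", 5))  # 8:57
```

44:43 is 2683 minutes; 2683 / 5 = 536.6, rounded up to 537 minutes, which is 8 hours 57 minutes.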
SQL Server Average Hours per day
[ "", "sql", "sql-server-2008", "" ]
I want to join two tables (`sales_flat_order` & `sales_flat_order_address`). I want to use the `entity_id` from one table to match with the `parent_id` in the other table. There is one row for every `entity_id`, but there are two rows for every `parent_id`. I just want to have one row after the join. I am using this statement, but it still returns two rows for every `entity_id` after the join.

```
select o.customer_email, a.prefix
from sales_flat_order o
left JOIN sales_flat_order_address a on o.entity_id = a.parent_id
```

Does anybody have an idea?
```
select o.customer_email, a.prefix
from sales_flat_order o
left JOIN (select distinct prefix, parent_id
           from sales_flat_order_address) a
       on o.entity_id = a.parent_id
```
Just add a distinct?

```
select DISTINCT o.customer_email, a.prefix
from sales_flat_order o
left JOIN sales_flat_order_address a on o.entity_id = a.parent_id
```

Obviously this will only remove multiple rows where o.customer\_email, a.prefix are in fact the same; otherwise you would need to find some way to decide which record you actually wanted... eg:

```
select o.customer_email, MIN(a.prefix)
from sales_flat_order o
left JOIN sales_flat_order_address a on o.entity_id = a.parent_id
group by o.customer_email
```
SQL joining tables (a,b) - but only rows from a
[ "", "sql", "magento", "join", "" ]
I have inherited a MySQL database that has a table as follows:

```
mysql> describe stock_groups;
+--------+--------------+------+-----+---------+----------------+
| Field  | Type         | Null | Key | Default | Extra          |
+--------+--------------+------+-----+---------+----------------+
| id     | int(11)      | NO   | PRI | NULL    | auto_increment |
| group  | varchar(5)   | YES  |     | NULL    |                |
| name   | varchar(255) | YES  |     | NULL    |                |
| parent | varchar(5)   | YES  |     | NULL    |                |
| order  | int(11)      | YES  |     | NULL    |                |
+--------+--------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)
```

When I run ``select * from stock_groups where `group`='D2';`` I get:

```
mysql> select * from stock_groups where `group`='D2';
+----+-------+------+--------+-------+
| id | group | name | parent | order |
+----+-------+------+--------+-------+
| 79 | D2    | MENS | D      | 51    |
+----+-------+------+--------+-------+
1 row in set (0.00 sec)
```

and I also have a table:

```
mysql> describe stock_groups_styles_map;
+-------+-------------+------+-----+---------+----------------+
| Field | Type        | Null | Key | Default | Extra          |
+-------+-------------+------+-----+---------+----------------+
| id    | int(11)     | NO   | PRI | NULL    | auto_increment |
| group | varchar(5)  | NO   |     | NULL    |                |
| style | varchar(25) | NO   |     | NULL    |                |
+-------+-------------+------+-----+---------+----------------+
3 rows in set (0.01 sec)
```

When I run:

```
mysql> select `group` from stock_groups_styles_map where style='N26';
+-------+
| group |
+-------+
| B1    |
| B11   |
| D2    |
| V2    |
+-------+
4 rows in set (0.00 sec)
```

how do I get the `stock_groups.name`?
```
SELECT stock_groups.name
FROM stock_groups_styles_map, stock_groups
WHERE stock_groups_styles_map.style = 'N26'
  AND stock_groups_styles_map.`group` = stock_groups.`group`;
```

worked for me.
Join the tables, and select only the data you need. If you need unique rows, use the `distinct` keyword:

```
select -- If you need unique names, use "select distinct" instead of "select"
    sg.name
from
    stock_groups_styles_map as sgs
    inner join stock_groups as sg on sgs.`group` = sg.`group`
where
    sgs.style = 'N26'
```

You could also solve this using subqueries, but that would be rather inefficient in this case.

***Something important***

You should add the appropriate indexes to your tables. It will improve the performance of your database.
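Because `group` is a reserved word, the join reads most clearly with quoted identifiers. A runnable SQLite sketch of the same join (SQLite uses double quotes where MySQL uses backticks; the sample rows are abbreviated from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript('''
    CREATE TABLE stock_groups ("group" TEXT, name TEXT);
    CREATE TABLE stock_groups_styles_map ("group" TEXT, style TEXT);
    INSERT INTO stock_groups VALUES ('D2', 'MENS'), ('B1', 'BOYS');
    INSERT INTO stock_groups_styles_map VALUES ('D2', 'N26'), ('B1', 'N26'), ('V9', 'X01');
''')
rows = con.execute('''
    SELECT sg.name
    FROM stock_groups_styles_map AS sgs
    JOIN stock_groups AS sg ON sgs."group" = sg."group"
    WHERE sgs.style = 'N26'
    ORDER BY sg.name
''').fetchall()
print(rows)  # [('BOYS',), ('MENS',)]
```

The 'BOYS' name for group B1 is invented for the demo; the question only shows the D2/MENS row.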
mysql INNER_JOIN subquery
[ "", "mysql", "sql", "" ]
I have a SQL Server 2008 database with multiple tables. I am writing a stored procedure for the user to get some market values from the tables.

E.g. I have a table which has columns `Currency`, `[Market Value]`, `[Hedging]` and `[Customer ID]`. Now the user can choose to see the Currency, Market Value and Hedging by specifying a Customer ID. However, I would like the user to be able to set multiple customer IDs and see a table with aggregated values. So for instance I would like to see the Currency, Market Value and Hedging for Customer IDs 2, 3 and 7. The `Select` statement would look something like this:

```
Select [Currency], sum([Market Value]), sum([Hedging])
from myDatabase
where ([Customer ID] = 2 or [Customer ID] = 3 or [Customer ID] = 7)
Group by [Currency], [Customer ID]
```

I could then specify a parameter and let the user decide which customer ID to use. The problem is, I do not always know which combination of customer IDs the user would like to see. If I knew that there would always be three customer IDs I could just set three parameters, but sometimes it might be only one customer ID, other times it might be 5. Is there a smooth way to get around this?

I was thinking about setting my select statement as a string and then executing it with the exec command, but my procedure is quite long with a couple of inner and left joins, temporary tables etc., so that solution might not be the best option. Does anyone have a bright idea on how to solve this?
Since you are using SQL 2008, I would suggest a better alternative that is both strongly typed and avoids the need for dynamic SQL: User Defined Table Types (UDTT).

You can create a generic table type that holds integer values like so:

```
CREATE TYPE IntegerList AS TABLE
(
    Value int NOT NULL
)
GO
```

Then in your stored procedure you would specify a parameter of this type:

```
CREATE PROCEDURE [dbo].[MyProcedure]
    @customerIds IntegerList READONLY
AS
BEGIN
    -- here you can use the @customerIds variable as a (read only) table:
    SELECT Value FROM @customerIds
END
```
```
where [Customer ID] IN ({ids placeholder})
```

You can build the `SELECT`, replacing the `{ids placeholder}` with a "1,3,5,...,n" string.
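If you do go the placeholder route from application code, it is safer to generate one parameter marker per id than to splice the values into the SQL string. A Python/SQLite sketch of that pattern (the table shape is invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (customer_id INTEGER, usd_value REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(2, 10.0), (3, 0.0), (5, 7.5), (7, 2.0)])

ids = [2, 3, 7]
placeholders = ", ".join("?" for _ in ids)  # -> "?, ?, ?"
rows = con.execute(
    f"SELECT customer_id FROM sales WHERE customer_id IN ({placeholders})",
    ids,
).fetchall()
print(sorted(r[0] for r in rows))  # [2, 3, 7]
```

Only the markers are interpolated into the statement; the values themselves travel as bound parameters, which sidesteps SQL injection.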
SQL Server stored procedure unkown number of variables
[ "", "sql", "sql-server-2008", "stored-procedures", "" ]
I am using SQL Server 2012. I have two tables, `Sales` & `Country`. `Sales` has approx. 15 columns, one of which is a primary key called `s_id`; `Country` has 5 columns and contains a foreign key also called `s_id`.

I want to write a query which will select only records in the Sales table where the s\_id has more than one record in the Country table. Then, from the list returned by this query, perform another query which should return only rows where the field `usd_value > 0` or `eur_value > 0`. Both `usd_value` and `eur_value` are fields in the `Sales` table.

Below is a brief and simple example which hopefully helps.

```
Sales
s_id   usd_value   eur_value
MN13   0           5
MN14   0           0
MN15   325         125
MN28   320         0

Country
s_id   country
MN13   NL
MN13   FR
MN14   GB
MN14   US
MN15   US
MN28   CA
MN28   US
MN28   MX

Result I would like to see
s_id   usd_value   eur_value
MN13   0           5
MN28   320         0
```
Try it like this

```
SELECT s.[s_id], s.[usd_value], s.[eur_value]
FROM Sales s
LEFT JOIN Country c ON c.[s_id] = s.[s_id]
GROUP BY s.[s_id], s.[usd_value], s.[eur_value]
HAVING COUNT(c.[s_id]) > 1
   AND (s.[usd_value] > 0 OR s.[eur_value] > 0)
```
Try this: ``` select s.* from sales s where ( s.eur_value > 0 or s.usd_value > 0 ) and s.s_id in ( select s_id from country group by s_id having count(*) > 1 ) ```
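Running the question's sample data through this subquery approach reproduces the expected rows. Here it is in SQLite, which accepts the same query shape, so the whole thing is runnable end to end:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Sales (s_id TEXT, usd_value INTEGER, eur_value INTEGER);
    CREATE TABLE Country (s_id TEXT, country TEXT);
    INSERT INTO Sales VALUES ('MN13',0,5),('MN14',0,0),('MN15',325,125),('MN28',320,0);
    INSERT INTO Country VALUES ('MN13','NL'),('MN13','FR'),('MN14','GB'),('MN14','US'),
                               ('MN15','US'),('MN28','CA'),('MN28','US'),('MN28','MX');
""")
rows = con.execute("""
    SELECT s.s_id, s.usd_value, s.eur_value
    FROM Sales s
    WHERE (s.eur_value > 0 OR s.usd_value > 0)
      AND s.s_id IN (SELECT s_id FROM Country GROUP BY s_id HAVING COUNT(*) > 1)
    ORDER BY s.s_id
""").fetchall()
print(rows)  # [('MN13', 0, 5), ('MN28', 320, 0)]
```

MN14 is dropped by the value filter, and MN15 is dropped by the `HAVING COUNT(*) > 1` condition (it has only one Country row), matching the desired result.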
multiple select queries using primary and foreign key
[ "", "sql", "sql-server", "" ]
Right now I have: ``` year(epe_curremploymentdate) = year(DATEADD(year, -1, SYSDATETIME())) ``` But that is giving me everything from last year. All I want is the data from today's date, one year ago.
It should be: ``` epe_curremploymentdate = DATEADD(year, -1, GETDATE()) ``` Don't check for **year** in your where clause. **EDIT:** Simple: use ``` cast(epe_curremploymentdate AS DATE) = cast(DATEADD(year, -1, GETDATE()) AS DATE) ``` That is, if you are using SQL Server 2008 and above.
This should get you to where you need to go:

```
Declare @StartDate datetime, @EndDate datetime

-- @StartDate is midnight on today's date, last year
set @StartDate = Convert(date, (DATEADD(year, -1, getdate())))
set @EndDate = DATEADD(Day, 1, @StartDate)

select *
from YourTable
where epe_curremploymentdate >= @StartDate
  and epe_curremploymentdate < @EndDate
-- This where clause will get you everything that happened at any time
-- on today's date last year.
```
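The midnight-to-midnight window in Python terms, for comparison (the sample events are invented; note that subtracting a year naively would need special-casing for Feb 29):

```python
from datetime import datetime, date, timedelta

today = date(2014, 6, 23)                                 # stand-in for getdate()
start = datetime(today.year - 1, today.month, today.day)  # midnight, one year back
end = start + timedelta(days=1)

events = [
    datetime(2013, 6, 22, 23, 59),  # too early
    datetime(2013, 6, 23, 0, 0),    # in range
    datetime(2013, 6, 23, 18, 45),  # in range
    datetime(2013, 6, 24, 0, 0),    # next day's midnight: excluded
]
hits = [e for e in events if start <= e < end]
print(len(hits))  # 2
```

As in the SQL, the `< end` exclusive bound is what keeps timestamps with a time component from slipping through.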
How do I find data from this day exactly one year ago?
[ "", "sql", "sql-server", "datetime", "" ]
I have a very simple request to do with **SQL Server 2008**, but I can't manage to achieve what I want.

![Schema SQL](https://i.stack.imgur.com/ExoIA.png)

For a given `FAVORITE`, a user can have zero or many `COMMENTs`; however, if the user has more than one comment, all `COMMENTs` have a `DeletedDate` `NOT NULL` (except possibly the last one). But a user can have a `FAVORITE` without a `COMMENT`. `COMMENT` is totally optional.

---

I'm trying to create a request which, for a given user, will return all its valid favorites (where `FAVORITE.DeleteDate` is `NOT NULL`) and the valid `COMMENT` associated to the `FAVORITE` (if it exists).

Here is my request:

```
SELECT *
FROM FAVORITE f
LEFT JOIN COMMENT co ON f.IdReferenced = co.IdReferenced
WHERE f.IdUser = 7
  AND f.DeletedDate IS NULL
  AND co.IdUser = 7
```

*(Please use IdUser = 7 to test)*

However, this request returns all `COMMENTs` of all `FAVORITEs` of user 7, even comments where `DeletedDate` is NOT NULL. I tried to add `AND co.DeletedDate IS NOT NULL` to the above request, but then it didn't return `FAVORITEs` which don't have any `COMMENT`.

**FIDDLE**

---

To reproduce the problem, I created this [SQLFIDDLE](http://sqlfiddle.com/#!3/584e46/5)

![Attempted result](https://i.stack.imgur.com/qmeKC.png)

The request must return the above lines, minus the red-crossed line.

**IMPORTANT** I just made an error in my SQLFIDDLE; the fourth line of COMMENT should be

```
INSERT [dbo].[COMMENT] ([IdComment], [IdUser], [IdReferenced], [CommentText], [CreationDate], [ModificationDate], [DeletedDate])
VALUES (8, 7, 2869, N'Must appear 3', CAST(0x0000A33500EC1133 AS DateTime), NULL, NULL)
```

Could you please improve my request? I'll have to write it in LINQ, but I should be able to translate it from SQL to LINQ. Thank you for your time!
If I understand your requirement, you need to put the condition for the right table (`co.DeletedDate IS NULL`) in the `JOIN` portion of your query, like:

```
SELECT *
FROM FAVORITE f
INNER JOIN COMMENT co ON f.IdReferenced = co.IdReferenced
                     AND co.DeletedDate IS NULL
                     AND co.IdUser = f.IdUser
WHERE f.IdUser = 7
  AND f.DeletedDate IS NULL
```

EDIT: In your sql fiddle the row below had a value inserted in DeletedDate. If this value is `null`, the above query gives the desired result. I have also changed the `LEFT JOIN` to an `INNER JOIN` in order to not show different users.

```
INSERT [dbo].[COMMENT] ([IdComment], [IdUser], [IdReferenced], [CommentText], [CreationDate], [ModificationDate], [DeletedDate])
VALUES (8, 7, 2869, N'Must appear 3', CAST(0x0000A33500EC1133 AS DateTime), NULL, NULL)
```
If I understood your problem correctly, you could use `EXISTS` to check if there is such a record. [SQLFiddle](http://sqlfiddle.com/#!3/3c35a/8)

```
SELECT *
FROM FAVORITE f
LEFT JOIN COMMENT co ON f.IdReferenced = co.IdReferenced
WHERE f.IdUser = 7
  AND f.DeletedDate IS NULL
  AND EXISTS (SELECT 1
              FROM COMMENT
              WHERE COMMENT.IdUser = co.IdUser
                AND co.DeletedDate IS NULL)
```
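The behavioral difference at the heart of this question, filtering the right table in `ON` keeps unmatched favorites while filtering in `WHERE` drops them, can be seen with a tiny SQLite model of the schema (column names simplified):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE favorite (id INTEGER, deleted_date TEXT);
    CREATE TABLE comment  (fav_id INTEGER, txt TEXT, deleted_date TEXT);
    INSERT INTO favorite VALUES (1, NULL), (2, NULL);  -- favorite 2 has no comment
    INSERT INTO comment  VALUES (1, 'old', '2014-01-01'), (1, 'live', NULL);
""")
# Filtering the comment table inside ON keeps favorite 2 as a NULL row:
rows = con.execute("""
    SELECT f.id, c.txt
    FROM favorite f
    LEFT JOIN comment c ON c.fav_id = f.id AND c.deleted_date IS NULL
    WHERE f.deleted_date IS NULL
    ORDER BY f.id
""").fetchall()
print(rows)  # [(1, 'live'), (2, None)]
```

Moving `c.deleted_date IS NULL` into the `WHERE` clause would instead discard favorite 2, because its joined comment columns are NULL, which is exactly the symptom described in the question.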
SQL - Join 2 tables using a join column AND a condition
[ "", "sql", "sql-server", "sql-server-2008", "left-join", "sqlfiddle", "" ]
I would like to get your advice on how to achieve the following. I have two tables: one is called Category and the other Trxn. I am trying to find how many transactions are in each category. I understand I need to left join my Trxn table with the Category table. But the issue is that CategoryId in the Category table is 6 characters, and this same ID is split into two columns in the Trxn table, as a 4-character and a 2-character column respectively. How can I concatenate both columns and left join the Category table? Is it possible? My aspx application has to show how many transactions have been done in each category. I have attached a sample database schema via the SQLFiddle link below.

```
SELECT Category.ID, count(Trxn.Cat_ID)
FROM Category
LEFT JOIN Trxn on Category.ID = (Trxn.Cat_ID + Trxn.STV)
where Category.Status='A' AND Trxn.Status='ok'
group by Category.ID
```

<http://sqlfiddle.com/#!3/6feef/2/0>

Please advise.
You have part of your left outer join table in the where clause. You can't do that; it makes it become a normal join. This works:

```
SELECT Category.ID, COUNT(Trxn.Cat_ID)
FROM Category
LEFT JOIN Trxn ON Category.ID = (rtrim(Trxn.Cat_ID) + ltrim(Trxn.STV))
              AND Trxn.Status = 'ok'
WHERE Category.Status = 'A'
GROUP BY Category.ID
```
For some reason there was a space after `cat_id` when you concatenate the values; the column's data type is `char`, so it is padded to its full length. I was able to find this by just running:

```
select cat_id + stv from trxn
```

I trimmed off the space and the query is working.

```
SELECT Category.ID, count(Trxn.Cat_ID)
FROM Category
LEFT JOIN Trxn on Category.ID = (rtrim(Trxn.Cat_ID) + ltrim(Trxn.STV))
              AND Trxn.Status = 'ok'
where Category.Status = 'A'
group by Category.ID
```
Counting the number of transactions by left joining a category table
[ "", "sql", "sql-server", "left-join", "" ]
I have a table with a column of 9-character digit strings, where the data looks like:

```
999999969
000000089
666666689
```

I want to delete only those rows in which any digit from 1-9 repeats more than 5 times.
This can be **much simpler** with a regular expression using a [**back reference**](http://www.postgresql.org/docs/current/interactive/functions-matching.html#POSIX-BRACKET-EXPRESSIONS). ``` DELETE FROM tbl WHERE col ~ '([1-9])\1{5}'; ``` That's all. ### Explain `([1-9])` ... a character class with digits from 1 to 9, parenthesized for the following back reference. `\1` ... back reference to first (and only in this case) parenthesized subexpression. `{5}` .. exactly (another) 5 times, making it "more than 5". [Per documentation:](http://www.postgresql.org/docs/current/interactive/functions-matching.html#POSIX-BRACKET-EXPRESSIONS) > A *back reference (**\n**)* matches the same string matched by the previous > parenthesized subexpression specified by the number **`n`** [...] For example, `([bc])\1` matches `bb` or `cc` but not `bc` or `cb`. [SQL Fiddle demo.](http://sqlfiddle.com/#!15/d41d8/2349)
OK, so the logic here can be summed up as:

* Find the longest series of the same consecutive digit in any given number; and
* Return true if that longest series is > 5 digits

Right? So, let's split it into series of consecutive digits:

```
regress=> SELECT regexp_matches('666666689', '(0+|1+|2+|3+|4+|5+|6+|7+|8+|9+)', 'g');
 regexp_matches
----------------
 {6666666}
 {8}
 {9}
(3 rows)
```

then filter for the longest:

```
regress=> SELECT x[1]
          FROM regexp_matches('6666666898', '(0+|1+|2+|3+|4+|5+|6+|7+|8+|9+)', 'g') x
          ORDER BY length(x[1]) DESC
          LIMIT 1;
    x
---------
 6666666
(1 row)
```

... but really, we don't actually care about that, just whether any entry is longer than 5 digits, so:

```
SELECT x[1]
FROM regexp_matches('6666666898', '(0+|1+|2+|3+|4+|5+|6+|7+|8+|9+)', 'g') x
WHERE length(x[1]) > 5;
```

can be used as an `EXISTS` test, e.g.

```
WITH blah(n) AS (VALUES('999999969'),('000000089'),('666666689'),('15552555'))
SELECT n
FROM blah
WHERE EXISTS (
    SELECT x[1]
    FROM regexp_matches(n, '(0+|1+|2+|3+|4+|5+|6+|7+|8+|9+)', 'g') x
    WHERE length(x[1]) > 5
)
```

which is actually pretty efficient and returns the correct result (always nice). But it can be simplified a little more with:

```
WITH blah(n) AS (VALUES('999999969'),('000000089'),('666666689'),('15552555'))
SELECT n
FROM blah
WHERE EXISTS (
    SELECT x[1]
    FROM regexp_matches(n, '(0{6}|1{6}|2{6}|3{6}|4{6}|5{6}|6{6}|7{6}|8{6}|9{6})', 'g') x
)
```

You can use the same `WHERE` clause in a `DELETE`.
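Both answers hinge on matching six identical consecutive digits, and the back-reference version is easy to verify with Python's `re` module, whose back references behave the same way here as PostgreSQL's:

```python
import re

six_in_a_row = re.compile(r'([1-9])\1{5}')  # a digit 1-9 followed by 5 more of it

values = ['999999969', '000000089', '666666689', '15552555']
to_delete = [v for v in values if six_in_a_row.search(v)]
print(to_delete)  # ['999999969', '666666689']
```

Note that '000000089' is correctly excluded: zeros repeat, but the character class is `[1-9]`, matching the question's "any number from 1-9".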
Deleting records with number repeating more than 5
[ "", "sql", "regex", "database", "postgresql", "pattern-matching", "" ]