Can someone guide me on how to convert seconds into a total hours, minutes and seconds format? For example: 2,652,819 secs should give 736:53:39. I am able to get the same in `days:hours:minutes:seconds` format, but I specifically need to convert the days into total hours as well.
Assuming you've got an integral value for `tot_secs` (floating point will require some judicious use of `round()`, `ceil()` and/or `floor()`), you can try something like this: ``` select hours = ( t.tot_secs / 3600 ) , -- hours is total seconds / (secs/hr) minutes = ( t.tot_secs % 3600 ) / 60 , -- minutes is whatever's left over / 60 seconds = t.tot_secs % 60 , -- whatever's left over is seconds hh_mm_ss = convert(varchar, t.tot_secs / 3600 ) + ':' + right( '0' + convert(varchar, ( t.tot_secs % 3600 ) / 60 ) , 2 ) + ':' + right( '0' + convert(varchar, t.tot_secs % 60 ) , 2 ) from ( select tot_secs = 2652819 ) t ``` The above query yields ``` hours minutes seconds hh_mm_ss ----- ------- ------- -------- 736 53 39 736:53:39 ```
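The hour/minute/second arithmetic in the answer above is database-agnostic; a quick Python sketch (the helper name `to_hms` is made up for illustration) confirms the breakdown:

```python
def to_hms(tot_secs):
    # hours is total seconds / 3600; the remainder splits into minutes and seconds
    hours, rem = divmod(tot_secs, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{seconds:02d}"

print(to_hms(2652819))  # 736:53:39
```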
This will parse seconds into days:hours:minutes:seconds. ``` declare @seconds int = 2652819 select @seconds / (60 * 60 * 24) [days], (@seconds / (60 * 60)) % 24 [hours], (@seconds / 60) % 60 [minutes], @seconds % 60 seconds select cast(@seconds / (60 * 60 * 24) as varchar(10)) + ':' + cast((@seconds / (60 * 60)) % 24 as varchar(2)) + ':' + cast((@seconds / 60) % 60 as varchar(2)) + ':' + cast(@seconds % 60 as varchar(2)) as [days:hours:minutes:seconds] ``` Output: ``` days|hours|minutes|seconds ----|-----|-------|------- 30 | 16 | 53 | 39 days:hours:minutes:seconds -------------------------- 30:16:53:39 ```
Converting Seconds into Total Hours:minutes:seconds
[ "sql", "t-sql", "sql-server-2008-r2" ]
I'm trying to implement the search feature on my project, so I add the class method: ``` class Donation < ActiveRecord::Base belongs_to :member def self.search_by_name name includes(:member).where(member: { name: name } ) end end ``` And I call it in the donations#index ``` class DonationsController < ApplicationController def index search_donations end private def search_donations if !params[:name].blank? @donations = Donation.search_by_name params[:name] else @donations = Donation.all end end end ``` Finally, I implement the index template: ``` <%= form_tag donations_path,:method => 'get' do %> <%= text_field_tag :name %> <%= submit_tag "search", name: nil %> <% end %> <table> <tbody> <% @donations.each do |donation| %> <tr> <td><%= donation.total %></td> </tr> <% end %> </tbody> </table> ``` I got the error message after I type "Ted Wang" in the form ``` SQLite3::SQLException: no such column: member.name: SELECT "donations"."id" AS t0_r0, "donations"."organization_id" AS t0_r1, "donations"."category" AS t0_r2, "donations"."borrow_id" AS t0_r3, "donations"."member_id" AS t0_r4, "donations"."total" AS t0_r5, "donations"."note" AS t0_r6, "donations"."created_at" AS t0_r7, "donations"."updated_at" AS t0_r8, "donations"."time" AS t0_r9, "members"."id" AS t1_r0, "members"."no" AS t1_r1, "members"."name" AS t1_r2, "members"."nick_name" AS t1_r3, "members"."introducer" AS t1_r4, "members"."role" AS t1_r5, "members"."gender" AS t1_r6, "members"."birthday" AS t1_r7, "members"."lunar_birthday" AS t1_r8, "members"."lunar_deathday" AS t1_r9, "members"."identified_number" AS t1_r10, "members"."home_phone" AS t1_r11, "members"."company_phone" AS t1_r12, "members"."bbcall" AS t1_r13, "members"."mobile" AS t1_r14, "members"."address" AS t1_r15, "members"."permanent_address" AS t1_r16, "members"."education" AS t1_r17, "members"."job" AS t1_r18, "members"."created_at" AS t1_r19, "members"."updated_at" AS t1_r20, "members"."organization_type" AS t1_r21, "members"."organization_id" AS 
t1_r22, "members"."involved" AS t1_r23 FROM "donations" LEFT OUTER JOIN "members" ON "members"."id" = "donations"."member_id" WHERE "member"."name" = 'Ted Wang' ``` Did I use `includes` wrong? `member` does have the attribute `name`, and I also ran db:migrate.
As per Rails convention, table names are lower-cased, pluralized model names, so in the where clause you should specify `members` (*plural*) and NOT `member` (*singular*). You can easily debug this by looking at the error message: `SQLite3::SQLException: no such column: member.name`. From the error message, one can see that the table name is `members`, so something in the query is definitely wrong, as it's referring to `member.name` instead of `members.name`. ``` def self.search_by_name name includes(:member).where(members: { name: name } ) ## ^ ## plural end ```
``` def self.search_by_name name includes(:member).where("members.name = ?", name) end ``` Everything else stays the same.
The includes and where don't retrieve the correctly associated object
[ "sql", "ruby-on-rails", "ruby", "rails-activerecord" ]
I am attempting to use Bulk Insert to upload a very large data file (5M rows). All columns are just varchars, no conversion, so the format file is simple... ``` 11.0 29 1 SQLCHAR 0 8 "" 1 AccountId "" 2 SQLCHAR 0 10 "" 2 TranDate "" 3 SQLCHAR 0 4 "" 3 TransCode "" 4 SQLCHAR 0 2 "" 4 AdditionalCode "" 5 SQLCHAR 0 11 "" 5 CurrentPrincipal "" 6 SQLCHAR 0 11 "" 6 CurrentInterest "" 7 SQLCHAR 0 11 "" 7 LateInterest "" ... 27 SQLCHAR 0 8 "" 27 Operator "" 28 SQLCHAR 0 10 "" 28 UpdateDate "" 29 SQLCHAR 0 12 "" 29 TimeUpdated "" ``` but each time, at some point, I get the same error: > Msg 4832, Level 16, State 1, Line 1 Bulk load: An unexpected end of > file was encountered in the data file. Msg 7399, Level 16, State 1, > Line 1 The OLE DB provider "BULK" for linked server "(null)" reported > an error. The provider did not give any information about the error. > Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB > provider "BULK" for linked server "(null)". I have tried the following: ``` Bulk Insert [TableName] From 'dataFilePPathSpecification' With (FORMATFILE = 'formatFilePPathSpecification') ``` but I get the error after about 5-6 minutes, and no data has been inserted. When I added the BatchSize parameter, I get the error after a much longer time, near the end of the file, after all except a very few of the rows have been inserted successfully. ``` Bulk Insert [TableName] From 'dataFilePPathSpecification' With (BATCHSIZE = 200, FORMATFILE = 'formatFilePPathSpecification') ``` When I set the BatchSize to 2000 it runs much faster (fewer, larger transactions, I assume), but it still fails. Does this have something to do with how Bulk Insert recognizes the end of the file? If so, what do I need to do to the format file to fix it?
Explicitly state your row terminator: ``` BULK INSERT TableName FROM 'Path' WITH ( DATAFILETYPE = 'char', ROWTERMINATOR = '\r\n', FORMATFILE = 'formatFilePPathSpecification' ); ``` If this still fails, check your file to see if you have unexpected terminators embedded in text fields.
If you still have problems even after enabling the errorfile output, you can do a binary search for the problem by setting the FirstRow and LastRow options and running bulk insert repeatedly to isolate it. To be honest, your input format looks so simple that it might be a good idea to write a small C#, Python, or whatever-floats-your-boat app to quality-check your data before attempting the import. You could simply discard invalid rows (or possibly fix them), write them to an exceptions file for hand processing, or simply stop the job -- i.e., the file must be perfect or it is considered corrupted. Validating 5M rows this way will be quite fast -- essentially as fast as you can read (and possibly write) the file.
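The pre-validation idea above can be sketched in a few lines of Python; the field widths here are a hypothetical subset of the 29 columns, not taken from the actual format file:

```python
# Widths of a few fixed-width fields (hypothetical subset of the 29 columns)
WIDTHS = [8, 10, 4, 2, 11]
ROW_LEN = sum(WIDTHS)

def validate_lines(lines):
    """Split input lines into good rows and an exceptions list by a length check."""
    good, bad = [], []
    for lineno, line in enumerate(lines, start=1):
        row = line.rstrip("\r\n")
        if len(row) == ROW_LEN:
            good.append(row)
        else:
            bad.append((lineno, row))  # hand-process or discard these later
    return good, bad

good, bad = validate_lines(["A" * 35, "too short"])
```

A real version would stream the file and could also range-check individual fields, but even this length check catches the truncated last row that typically triggers Msg 4832.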
SQL Server Bulk Insert fixed width file failure
[ "sql", "sql-server", "bulkinsert" ]
I have this table [myTable] in SQL Server: ``` ID | Timestamp [datatype= datetime] 1 | 2013-08-05 23:09:02.000 2 | 2013-11-05 23:00:00.000 3 | 2013-08-05 23:09:02.000 4 | 2013-11-05 23:00:00.000 5 | 2014-01-01 22:00:00.000 ``` In the table I have [sometimes] some duplicated timestamps. Using a SELECT statement, how can I add 1 millisecond to each of the duplicated values so then I have only unique timestamps? Output would be: ``` ID | Timestamp 1 | 2013-08-05 23:09:02.000 2 | 2013-11-05 23:00:00.000 3 | 2013-08-05 23:09:02.001 4 | 2013-11-05 23:00:00.001 5 | 2014-01-01 22:00:00.000 ``` Thanks in advance.
``` SELECT id, DATEADD(MILLISECOND, ROW_NUMBER() OVER (PARTITION BY Timestamp ORDER BY Id) - 1, Timestamp) Timestamp FROM [myTable] ```
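The ROW_NUMBER trick adds n-1 milliseconds to the n-th row sharing a timestamp; the same offset logic, sketched in plain Python over integer millisecond values:

```python
from collections import defaultdict

def deduplicate(timestamps_ms):
    """Add n-1 ms to the n-th occurrence of each duplicated timestamp."""
    seen = defaultdict(int)
    out = []
    for ts in timestamps_ms:
        out.append(ts + seen[ts])  # offset equals ROW_NUMBER() - 1 within the partition
        seen[ts] += 1
    return out

print(deduplicate([1000, 2000, 1000, 2000, 3000]))  # [1000, 2000, 1001, 2001, 3000]
```

Note that, like the SQL version, this can itself collide with a genuinely distinct timestamp 1 ms later; for the question's data that case does not arise.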
``` SELECT DATEADD(ms,(ID*10),tbl.[TIMESTAMP]),tbl.* FROM [TABLE] tbl INNER JOIN ( SELECT [TIMESTAMP],Count(*) [CNT] FROM [TABLE] GROUP BY [TIMESTAMP] HAVING count(*)>1 ) dupe ON dupe.[TIMESTAMP]=tbl.[TIMESTAMP] ```
Duplicate Timestamps in SQL Server Query: How to add millisecond and avoid duplicates?
[ "sql", "sql-server" ]
I am having trouble getting the last SQL statement to work. The first two groupings give me the right answer, but the last one, which involves joining 2 tables, gives me a number that's too high compared to manually adding the first two together. The SQL is meant to generate figures 1, 2 and (1+2) as 3. Please help. SQL below.... ``` select [OrderDate] as "Date", 'Cobs' as "Payment Source", COUNT(*) as "Quantity", SUM([Amount]) as "Value in Pounds" from cobaltins where PN like 'BT%' group by OrderDate union all select [OrderDate] as "Date", 'Cobs Adhoc' as "Payment Source", COUNT(*) as "Quantity", SUM([Amount]) as "Value in Pounds" FROM cobaltins_adhoc where name = 'Vauz' group by OrderDate union all select cba.OrderDate as "Date", 'Cumulative' as "Payment Source", COUNT(*) as "Quantity", sum((cb.Amount)+(cba.Amount)) as "Value in Pounds" FROM cobaltins as cb left join cobaltins_adhoc as cba on cb.OrderDate = cba.OrderDate where cb.PN like 'BT%' or cba.name = 'Vauz' group by cba.OrderDate ```
Rather than try and get it in one you might want to try and sum an inline view of your UNION ``` SELECT "Date", "Payment Source", SUM(Quantity) Quantity, SUM("Value in Pounds") "Value in Pounds" FROM ( select [OrderDate] as "Date", 'Cobs' as "Payment Source", COUNT(*) as "Quantity", SUM([Amount]) as "Value in Pounds" from cobaltins where PN like 'BT%' group by OrderDate union all select [OrderDate] as "Date", ' Cobs Adhoc' as "Payment Source",COUNT(*) as "Quantity", SUM([Amount]) as "Value in Pounds" FROM cobaltins_adhoc where name = 'Vauz' group by OrderDate ) t GROUP BY "Date", "Payment Source" ```
The way that you are doing your join ``` FROM cobaltins as cb LEFT JOIN cobaltins_adhoc as cba ON cb.OrderDate = cba.OrderDate ``` is only joining on the order date. If you have multiple orders per date then it's going to try to join those together arbitrarily, multiplying rows. You will want to add something else into that join to make it distinct.
sql to sum counts and add amounts from 2 tables
[ "sql" ]
I have a model named `Application`. `Application` is associated via `has_many` to a model named `Location`: `Application` has many `Location`. In my Application query: ``` $this->Application->find('all', array('conditions' => array('Application.status' => 'accepted'))); ``` I'm finding `applications` where `status` is `accepted`. The next thing that I would like to achieve is to find `Application` records where the associated `Location` is empty/null, or in other words where the `count` of `Location` records is 0. I tried to make a `join` query like this: ``` $join_query = array( 'table' => 'locations', 'alias' => 'Location', 'type' => 'INNER', 'conditions' => array( 'Location.application_id = Application.id', 'OR' => array( array('Location.id' => NULL) ) ) ); ``` But it seems like it's just querying `Application` records that do have associated `Location` records. Thanks in advance if you guys have any idea(s).
You need to use a left join, not an inner join. Inner join will get only those results that have a row in both of the tables you are joining, where you want only results where there is only a row in the left table. Left joins will get all the results in the left table, regardless if there's a row associated with it in the right table. Then add a condition after the join is complete, to only select those joined results where Location.id is null. ``` $this->Application->find('all', array( 'conditions' => array('Location.id' => null), 'joins' => array( array( 'table' => 'locations', 'alias' => 'Location', 'type' => 'LEFT', 'conditions' => array('Location.application_id = Application.id') ), ), ) ); ```
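The LEFT JOIN / IS NULL pattern is easy to verify against an in-memory database; the schema below is a minimal stand-in for the applications/locations tables (SQLite rather than the app's database, but the pattern is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE applications (id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE locations (id INTEGER PRIMARY KEY, application_id INTEGER);
INSERT INTO applications VALUES (1, 'accepted'), (2, 'accepted');
INSERT INTO locations VALUES (10, 1);  -- application 2 has no location
""")

# Keep only left-side rows with no match on the right
rows = conn.execute("""
    SELECT a.id
    FROM applications a
    LEFT JOIN locations l ON l.application_id = a.id
    WHERE l.id IS NULL
""").fetchall()
print(rows)  # [(2,)]
```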
Your query says "find any application and its location with application\_id = id, AND (1 OR where location.id = null)", so that will match any application that has a location. What I'd do is leave out joins and just use containable and counts. With plain SQL I'd use a left join and count the `Locations`, like in [this example](https://stackoverflow.com/questions/2215754/sql-left-join-count). But cake doesn't behave well with unnamed columns, like "COUNT(\*) AS num\_locations", so I tend to avoid that. I'd transform your query into a containable one ``` $apps = $this->Application->find('all', array('contain' => 'Location')); foreach($apps as $app) { if (count($app['Location']) <= 0) //delete record } ``` You could also implement a [counterCache](http://book.cakephp.org/2.0/en/models/associations-linking-models-together.html#countercache-cache-your-count), and keep the number of locations per application in a DB column, so the query can be a simple find like ``` $this->Application->find('all', array('conditions'=>array('location_count'=>0))); ``` Ooooor, you could add a [virtual field](http://book.cakephp.org/2.0/en/models/virtual-fields.html) with "SUM(\*) as num\_locations" and then use your join with "left outer join" and compare "num\_locations = 0" in the conditions. Those are the options that come to mind. Personally I'd use the first one if the query is a one-time/not-very-used one. Probably put it in the Application model like ``` public function findAppsWithNoLocations() { $apps = $this->Application->find('all', array('contain' => 'Location')); foreach($apps as $app) { if (count($app['Location']) <= 0) //delete record } } ``` But the other two options would be better if the sum of locations per app is going to be a recurrent query you'll search for. **EDIT** And of course Kai's answer does what you want xD. This tendency to complicate things will be the end of me... 
Well, I'll leave the answer here as a reference to other convoluted options (specifically counterCache, if you'll need to count the relations a lot of times).
Query to find where associated model (has many) is empty in cakephp
[ "mysql", "sql", "cakephp" ]
I have a CTE query that updates a single column in a table with approx 2.5 million rows. I let the query run and it took about 16 hours!! How can I change this procedure so it will be faster? I read that SELECT INTO and creating a new table should be a better way. I just don't know how to convert this CTE into a SELECT INTO. ``` WITH CubeWithRowNumber AS ( SELECT rownum = ROW_NUMBER() OVER ( ORDER BY CustomerId, Period ), c.Period, c.CustomerId, c.PayDate, NS_Regular, NS_Single, NySales FROM Cube2 c ) UPDATE Cube2 SET MonthlySales = ( SELECT CASE WHEN YEAR(cu.Period) = YEAR(cu.PayDate) THEN cu.NySales ELSE CASE WHEN prev.Period IS NULL OR YEAR(cu.Period) <> YEAR(prev.Period) THEN cu.NS_Regular + cu.NS_Single ELSE cu.NS_Regular + cu.NS_Single - prev.NS_Regular - prev.NS_Single END END AS Result FROM CubeWithRowNumber cu LEFT JOIN CubeWithRowNumber prev ON prev.rownum = cu.rownum - 1 AND cu.CustomerId = prev.CustomerId WHERE cu.CustomerId = Cube2.CustomerId AND cu.Period = Cube2.Period) ```
It's possible to avoid calling the `CTE` twice by remodelling the query ``` UPDATE Cube2 SET MonthlySales = CASE WHEN YEAR(cu.Period) = YEAR(cu.PayDate) THEN cu.NySales WHEN YEAR(cu.Period) <> YEAR(COALESCE(prev.Period, 0)) THEN cu.NS_Regular + cu.NS_Single ELSE cu.NS_Regular + cu.NS_Single - prev.NS_Regular - prev.NS_Single END FROM Cube2 cu CROSS APPLY (SELECT TOP 1 Period, NS_Regular, NS_Single FROM cube2 WHERE cu.CustomerId = cube2.CustomerId AND cu.Period > cube2.Period ORDER BY Period Desc) prev; ``` which can improve performance, even more so if supported by an index on CustomerId and Period, but it introduces an `ORDER BY` that is somewhat costly, so you may want to check it on a reduced set of data. Another little trouble is that `CROSS APPLY` is like an `INNER JOIN`, and the first period for every customer has no previous period. To fix that it's possible to change the `CROSS APPLY` to an `OUTER APPLY`, which is like a `LEFT JOIN`, but that will destroy the performance; alternatively, we can create some values from nothing. Coalescing a grouping function can do that: if the row exists its values will remain the same; if the subquery is empty the `MAX` (or `MIN` or `AVG`, your choice) will create a new row, as the `MAX` of a table with no rows is `NULL`. The updated `UPDATE` is: ``` UPDATE Cube2 SET MonthlySales = CASE WHEN YEAR(cu.Period) = YEAR(cu.PayDate) THEN cu.NySales WHEN YEAR(cu.Period) <> YEAR(COALESCE(prev.Period, 0)) THEN cu.NS_Regular + cu.NS_Single ELSE cu.NS_Regular + cu.NS_Single - prev.NS_Regular - prev.NS_Single END FROM Cube2 cu CROSS APPLY (SELECT COALESCE(MAX(Period), 0) Period , COALESCE(MAX(NS_Regular), 0) NS_Regular , COALESCE(MAX(NS_Single), 0) NS_Single FROM (SELECT TOP 1 Period, NS_Regular, NS_Single FROM cube2 WHERE cu.CustomerId = cube2.CustomerId AND cu.Period > cube2.Period ORDER BY Period Desc) a ) prev; ``` There is some extra work for the grouping, but hopefully not that much. 
Sometimes converting the `CASE` logic to math operators can help further improve performance, but in addition to the fact that it doesn't always work, the query becomes less readable. If you want to try it, here is the converted version ``` UPDATE Cube2 SET MonthlySales = cu.NySales * (1 - CAST((YEAR(cu.Period) - YEAR(cu.PayDate)) as BIT)) + (cu.NS_Regular + cu.NS_Single) * (0 + CAST(YEAR(cu.Period) - YEAR(COALESCE(prev.Period, 0)) as BIT)) * (0 + CAST((YEAR(cu.Period) - YEAR(cu.PayDate)) as BIT)) + (cu.NS_Regular + cu.NS_Single - prev.NS_Regular - prev.NS_Single) * (1 - CAST(YEAR(cu.Period) - YEAR(COALESCE(prev.Period, 0)) as BIT)) * (0 + CAST((YEAR(cu.Period) - YEAR(cu.PayDate)) as BIT)) FROM Cube2 cu CROSS APPLY (SELECT COALESCE(MAX(Period), 0) Period , COALESCE(MAX(NS_Regular), 0) NS_Regular , COALESCE(MAX(NS_Single), 0) NS_Single FROM (SELECT TOP 1 Period, NS_Regular, NS_Single FROM cube2 WHERE cu.CustomerId = cube2.CustomerId AND cu.Period > cube2.Period ORDER BY Period Desc) a ) prev; ```
I don't think you need to reference the table three times. Your CTE is updatable, so I think the following is equivalent: ``` WITH CubeWithRowNumber AS ( SELECT c.*, rownum = ROW_NUMBER() OVER (ORDER BY CustomerId, Period) FROM Cube2 c ) UPDATE CubeWithRowNumber crn SET MonthlySales = (SELECT (CASE WHEN YEAR(crn.Period) = YEAR(crn.PayDate) THEN crn.NySales ELSE (CASE WHEN prev.Period IS NULL OR YEAR(crn.Period) <> YEAR(prev.Period) THEN crn.NS_Regular + crn.NS_Single ELSE crn.NS_Regular + crn.NS_Single - prev.NS_Regular - prev.NS_Single END) END) AS Result FROM CubeWithRowNumber prev WHERE prev.rownum = crn.rownum - 1 AND crn.CustomerId = prev.CustomerId ); ``` There might be further optimizations along these lines, although if you are using a more recent version of SQL Server, the `lag()` function would be a better choice.
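The suggested `lag()` approach can be tried out in any database with window functions; here is a minimal sketch in Python with SQLite (3.25+), using a toy sales table that stands in for Cube2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (CustomerId INTEGER, Period INTEGER, amount INTEGER);
INSERT INTO sales VALUES (1, 201401, 10), (1, 201402, 25), (2, 201401, 7);
""")

# LAG fetches the previous row's value per customer, NULL for the first row
rows = conn.execute("""
    SELECT CustomerId, Period, amount,
           LAG(amount) OVER (PARTITION BY CustomerId ORDER BY Period) AS prev_amount
    FROM sales
    ORDER BY CustomerId, Period
""").fetchall()
print(rows)  # [(1, 201401, 10, None), (1, 201402, 25, 10), (2, 201401, 7, None)]
```

This replaces both the self-join on `rownum - 1` and the APPLY subquery with a single pass over the ordered partition.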
Convert CTE to better performing query
[ "sql", "sql-server", "sql-server-2005", "common-table-expression", "database-performance" ]
I have a table like this: ![Table view](https://i.stack.imgur.com/jppmT.png) The `RP Type` column specifies the type of rate plan, which can be (low, high, normal). I want to create a view in which I can see the number of each subscriber's plans that are high, normal, or low. It should be like this, I guess: ``` Create view t as select SubID, Count(something) as NumeOfHIGH, Count(Something) as NumOfLOW, Count(Something) as NumOfNormal Group by SubID ``` I might be wrong.. Thank you
You can form your query in the following way: ``` SELECT SubID, SUM ( CASE WHEN RP_Type='High' THEN 1 ELSE 0 END ) AS NumOfHigh, SUM ( CASE WHEN RP_Type='Low' THEN 1 ELSE 0 END ) AS NumOfLow, SUM ( CASE WHEN RP_Type='Normal' THEN 1 ELSE 0 END ) AS NumOfNormal FROM <table_name> Group by SubID ``` If there are multiple `RP_Type` values in each of the High, Low and Normal categories, you can include each type with a `WHEN` in the respective category. For more information and reference: [Conditional Processing using CASE](http://technet.microsoft.com/en-us/library/aa213265%28v=sql.80%29.aspx)
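The conditional-aggregation pattern from this answer can be checked quickly against an in-memory SQLite database (table name and sample rows are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE plans (SubID INTEGER, RP_Type TEXT);
INSERT INTO plans VALUES (1,'High'), (1,'High'), (1,'Low'), (2,'Normal');
""")

# One SUM(CASE ...) per category pivots the counts into columns
rows = conn.execute("""
    SELECT SubID,
           SUM(CASE WHEN RP_Type='High'   THEN 1 ELSE 0 END) AS NumOfHigh,
           SUM(CASE WHEN RP_Type='Low'    THEN 1 ELSE 0 END) AS NumOfLow,
           SUM(CASE WHEN RP_Type='Normal' THEN 1 ELSE 0 END) AS NumOfNormal
    FROM plans
    GROUP BY SubID
    ORDER BY SubID
""").fetchall()
print(rows)  # [(1, 2, 1, 0), (2, 0, 0, 1)]
```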
I tried this: ``` select SubscriberID ,COUNT(*) AS NumOfPlans ,SUM([SP active days]) as ActiveDays ,SUM(Case when [RP Type]='low' then 1 else 0 end ) as #LowPlan ,SUM(Case when [RP Type]='high' then 1 else 0 end ) as #NormalPlan ,SUM(Case when [RP Type]='high' then 1 else 0 end ) as #HighPlan ,SUM(Case when [RP Type]='promotion' then 1 else 0 end ) as #PromotionsPlan ,SUM(Case when [RP Type]='portal' then 1 else 0 end ) as #PortablePlan ,SUM(Case when [RP Type]='newyear' then 1 else 0 end ) as #NewYearPlan ,SUM(Case when [RP Type]='hamaval1' then 1 else 0 end ) as #HamrahAval1Plan ,SUM(Case when [RP Type]='hamaval2' then 1 else 0 end ) as #HamrahAval2Plan ,SUM(Case when [RP Type]='samsung' then 1 else 0 end ) as #SamsungPlan ,SUM(Case when [RP Type]='DEMO' then 1 else 0 end ) as #DemoPlan ,SUM(Case when [RP Type]IS null then 1 else 0 end ) as #NuLL ,SUM([Extra Quota]) as SumGIG from [Cus Consumption].[dbo].CustData where [Expiry Date]<= '2014 - 01 - 15' group by [SubscriberID] ``` The ironic thing is that it does not return a correct answer.. I don't know why @Rohit Aggrawal
Counting specific cells in SQL Server
[ "sql", "sql-server", "count" ]
I have 3 tables: user, recommendation (post\_id, user\_id), post When a user votes on a post, a new recommendation record gets created with the post\_id, user\_id, and vote value. I want to have a query that shows a random post that a user hasn't seen/voted on yet. My thought on this is that it needs to join all recommendations of a user to the post table... and then select the records that don't have a joined recommendation. Not sure how to do this though... What I have so far that definitely doesn't work: ``` SELECT "posts".* FROM "posts" INNER JOIN "recommendations" ON "recommendations"."post_id" = "posts"."id" ORDER BY RANDOM() LIMIT 1 ```
There are several good ways to exclude rows that already have a recommendation from a given user: [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694) The important question is: **arbitrary** or **random**? For an *arbitrary* pick (any qualifying row is good enough), this should be cheapest: ``` SELECT * FROM posts p WHERE NOT EXISTS ( SELECT 1 FROM recommendations WHERE post_id = p.id AND user_id = $my_user_id ) LIMIT 1; ``` The sort step might be expensive (and unnecessary) for *lots of posts*. In such a use case most of the posts will typically have no recommendation from the user at hand, yet. You'd have to order all those rows by `random()` every time. If *any* post without recommendation is good enough, dropping `ORDER BY` will make it considerably faster. Postgres can just return the first qualifying post it finds.
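The NOT EXISTS variant can be exercised against a toy posts/recommendations schema (SQLite here; the question is about PostgreSQL, but the pattern carries over):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY);
CREATE TABLE recommendations (post_id INTEGER, user_id INTEGER);
INSERT INTO posts VALUES (1), (2), (3);
INSERT INTO recommendations VALUES (1, 42), (3, 42), (2, 99);
""")

# Posts user 42 has not voted on yet: only post 2 qualifies
rows = conn.execute("""
    SELECT p.id FROM posts p
    WHERE NOT EXISTS (
        SELECT 1 FROM recommendations r
        WHERE r.post_id = p.id AND r.user_id = 42
    )
    LIMIT 1
""").fetchall()
print(rows)  # [(2,)]
```

Note that post 2 has a vote from user 99, but that does not disqualify it for user 42: the anti-join is per user.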
You can do this with a left outer join: ``` SELECT p.* FROM posts p LEFT OUTER JOIN recommendations r ON r.post_id = p.id and r.userid = YOURUSERID WHERE r.post_id IS NULL ORDER BY RANDOM() LIMIT 1; ``` Note that I simplified the query by removing the double quotes (not needed for your identifier names) and adding table aliases. These changes make the query easier to write and to read.
postgresql query to determine recommendation records that haven't been seen by a user
[ "sql", "postgresql" ]
I have hierarchical data that links instances of an entity using `DATE_FROM` and `DATE_TO`. Please see [sqlfiddle](http://sqlfiddle.com/#!4/7c19c/1). Using `CONNECT_BY` I can determine the number of contiguous instances for each entity, i.e., the length of the "islands", which is mostly what I want. E.g., this gives expected lengths of islands for each entity with a `DATE_FROM` in 2014: ``` -- QUERY 1 SELECT T.ENTITY_ID, MAX(LEVEL) MAX_LEVEL FROM TEST T WHERE EXTRACT(YEAR FROM T.DATE_FROM) = 2014 CONNECT BY T.ENTITY_ID = PRIOR T.ENTITY_ID AND T.DATE_FROM = PRIOR T.DATE_TO GROUP BY T.ENTITY_ID ``` However, what I would like to do is count the rows in the islands where the `DATE_FROM` and `DATE_TO` span some minimum number of days. I don't want to break the island hierarchy when I do this. So I tried this, but it's wrong. Results are not consistently what I am looking for. ``` -- QUERY 2 SELECT T.ENTITY_ID, MAX(LEVEL) MAX_LEVEL, SUM( CASE WHEN PRIOR T.DATE_TO - PRIOR T.DATE_FROM > 183 THEN 1 ELSE 0 END ) LONG_TERM_COUNT FROM TEST T WHERE EXTRACT(YEAR FROM T.DATE_FROM) = 2014 CONNECT BY T.ENTITY_ID = PRIOR T.ENTITY_ID AND T.DATE_FROM = PRIOR T.DATE_TO GROUP BY T.ENTITY_ID ``` Which gives ``` +-----------+-----------+-----------------+ | ENTITY_ID | MAX_LEVEL | LONG_TERM_COUNT | +-----------+-----------+-----------------+ | 1 | 4 | 3 | | 2 | 5 | 4 | +-----------+-----------+-----------------+ ``` but I am looking for ``` +-----------+-----------+-----------------+ | ENTITY_ID | MAX_LEVEL | LONG_TERM_COUNT | +-----------+-----------+-----------------+ | 1 | 4 | 4 | | 2 | 5 | 4 | +-----------+-----------+-----------------+ ``` I need an Oracle solution. Thanks for reading.
A WHERE-condition is evaluated after CONNECT BY, so your query does not start with the rows from 2014. It creates the hierarchy for each and every row in your table, you can easily see that when you remove the WHERE and the aggregation: ``` SELECT T.ENTITY_ID, LEVEL, T.DATE_TO, T.DATE_FROM, prior T.DATE_TO, prior T.DATE_FROM FROM TEST T CONNECT BY T.ENTITY_ID = PRIOR T.ENTITY_ID AND T.DATE_TO = PRIOR T.DATE_FROM order by 1,2 ``` You need to use START WITH instead of a WHERE-condition: ``` SELECT T.ENTITY_ID, LEVEL, T.DATE_TO, T.DATE_FROM, prior T.DATE_TO, prior T.DATE_FROM FROM TEST T START WITH EXTRACT(YEAR FROM T.DATE_FROM) = 2014 CONNECT BY T.ENTITY_ID = PRIOR T.ENTITY_ID AND T.DATE_TO = PRIOR T.DATE_FROM ``` So finally it's: ``` SELECT T.ENTITY_ID, MAX(LEVEL) MAX_LEVEL, -- or COUNT(*) SUM( CASE WHEN T.DATE_TO - T.DATE_FROM > 183 THEN 1 ELSE 0 END ) LONG_TERM_COUNT FROM TEST T CONNECT BY T.ENTITY_ID = PRIOR T.ENTITY_ID AND T.DATE_TO = PRIOR T.DATE_FROM START WITH EXTRACT(YEAR FROM T.DATE_FROM) = 2014 GROUP BY T.ENTITY_ID ``` And you might get a wrong result if two rows in 2014 exist, so you need to start with the latest row in 2014: ``` SELECT T.ENTITY_ID, MAX(LEVEL) MAX_LEVEL, SUM( CASE WHEN T.DATE_TO - T.DATE_FROM > 183 THEN 1 ELSE 0 END ) LONG_TERM_COUNT FROM TEST T CONNECT BY T.ENTITY_ID = PRIOR T.ENTITY_ID AND T.DATE_TO = PRIOR T.DATE_FROM START WITH T.DATE_FROM = ( SELECT MAX(T2.DATE_FROM) FROM TEST T2 WHERE T.ENTITY_ID = T2.ENTITY_ID AND T2.DATE_FROM >= DATE '2014-01-01' AND T2.DATE_FROM <= DATE '2014-12-31' ) GROUP BY T.ENTITY_ID ``` [Fiddle](http://sqlfiddle.com/#!4/bd792/1)
Your SQL statement is correct. But one scenario needs to be considered: when the `CASE WHEN T.DATE_TO - PRIOR T.DATE_FROM > 183` expression evaluates to `null`, the row will not be counted. ``` INSERT INTO TEST VALUES (1,TO_DATE('20130101','YYYYMMDD'),TO_DATE('20140101','YYYYMMDD')); INSERT INTO TEST VALUES (1,TO_DATE('20140101','YYYYMMDD'),TO_DATE('20150101','YYYYMMDD')); ``` From your data example, the equivalent CASE is: ``` CASE WHEN TO_DATE('20140101','YYYYMMDD') - PRIOR TO_DATE('20140101','YYYYMMDD') > 183 ``` This gives a `null` value.
Counting Rows with Constraint in Hierarchical Data
[ "sql", "oracle", "hierarchical-data", "gaps-and-islands" ]
I need to get all company ids from a table that have both city ids (let's say 7333 and 10906), but it doesn't work the way I do it. This is my table: ![enter image description here](https://i.stack.imgur.com/BuMeC.png) and this is my code ``` SELECT `company_id` as id FROM `logistics_companies_destinations` WHERE `city_id`= 7333 and `city_id` = 10906 ```
You can also solve this using an `INNER JOIN` of the table to itself, joining on the same `company_id` and requiring both `city_id`s to be present: ``` SELECT `lcd1`.`company_id` AS id FROM `logistics_companies_destinations` AS lcd1 INNER JOIN `logistics_companies_destinations` AS lcd2 ON `lcd1`.`company_id` = `lcd2`.`company_id` WHERE `lcd1`.`city_id` = 7333 AND `lcd2`.`city_id` = 10906 ``` *Sorry, I usually also frown upon stupidly abbreviated table names like this, but did not come up with better aliases ;)*
MySQL does not have an `INTERSECT` keyword, but one way to implement this is: ``` SELECT `company_id` as id FROM `logistics_companies_destinations` WHERE `city_id` = 10906 and `company_id` IN (SELECT `company_id` as id FROM `logistics_companies_destinations` WHERE `city_id`= 7333) ``` [Another way (as noted by another poster) is to join the table to itself and apply the filter conditions one to each join].
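Both approaches can be sanity-checked in SQLite; here is the IN-subquery form, with hypothetical data in which only company 5 serves both cities:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logistics_companies_destinations (company_id INTEGER, city_id INTEGER);
INSERT INTO logistics_companies_destinations VALUES
    (5, 7333), (5, 10906), (6, 7333), (7, 10906);
""")

# Semi-join: keep rows for one city whose company also serves the other city
rows = conn.execute("""
    SELECT company_id AS id
    FROM logistics_companies_destinations
    WHERE city_id = 10906
      AND company_id IN (SELECT company_id
                         FROM logistics_companies_destinations
                         WHERE city_id = 7333)
""").fetchall()
print(rows)  # [(5,)]
```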
get all rows from table that have both variables in a same column
[ "mysql", "sql" ]
I'm hoping this is a straightforward question, but I cannot work out the syntax for it. I just need to find if a value within "ID" also exists in "ID2"; let's say the table is called "teacher" ``` ID-ID2 10-1 11-2 12-13 13-4 ``` The only match there is row 4, as 13 also exists in ID2, so I would need to pull that out with a select query. Can anybody advise? Thanks. On top of this I have a second table called staff with the following setup ``` ID-Name 1-smith 2-jones 3-bruce ``` whereby ID is the same ID as in the teacher table. I think I need to join them here, but I'm not sure what to do with the ID in the second table. The only information I need from the second table is the name, so the Cartesian product should look like the above, only with the processing done from table 1. Thanks in advance. Scrap that, solved it, thanks
Is this what you want? ``` select t.* from teacher t where exists (select 1 from teacher t2 where t2.id = t.id2); ```
``` select * from teacher where id in (select distinct id2 from teacher) ``` or ``` select t1.* from teacher t1 join teacher t2 on t1.id = t2.id2 ```
sql compare two unique rows from the same table
[ "sql" ]
I have the following 2 tables. First is: `idea_box` ![enter image description here](https://i.stack.imgur.com/3YV1j.png) Second is: `idea_box_voting` ![enter image description here](https://i.stack.imgur.com/tDDDz.png) Now I want a result something like this, based on the `0 and 1` count from the thumbs field. ![enter image description here](https://i.stack.imgur.com/WnCQa.png) I have never created an SQL query like this one, so I am hoping that someone can help me. Thanks.
Try this: ``` SELECT a.idea_id, a.property_id, a.the_idea, a.user_id, a.added_date, a.status, SUM(b.thumbs = 1) AS up, SUM(b.thumbs = 0) AS down FROM idea_box a LEFT JOIN idea_box_voting b ON a.idea_id = b.idea_id GROUP BY a.idea_id; ```
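`SUM(b.thumbs = 1)` relies on the comparison itself evaluating to 1 or 0 — MySQL behaviour that SQLite happens to share — so a small in-memory demo shows the counts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE idea_box (idea_id INTEGER PRIMARY KEY);
CREATE TABLE idea_box_voting (idea_id INTEGER, thumbs INTEGER);
INSERT INTO idea_box VALUES (1), (2);
INSERT INTO idea_box_voting VALUES (1, 1), (1, 1), (1, 0), (2, 0);
""")

# Summing boolean comparisons counts matching rows per group
rows = conn.execute("""
    SELECT a.idea_id,
           SUM(b.thumbs = 1) AS up,
           SUM(b.thumbs = 0) AS down
    FROM idea_box a
    LEFT JOIN idea_box_voting b ON a.idea_id = b.idea_id
    GROUP BY a.idea_id
    ORDER BY a.idea_id
""").fetchall()
print(rows)  # [(1, 2, 1), (2, 0, 1)]
```

One caveat: for an idea with no votes at all, the LEFT JOIN produces NULLs and the sums come back NULL rather than 0; `COALESCE(SUM(...), 0)` covers that case.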
``` SELECT t1.*, sum(t2.thumbs=1) as up, sum(t2.thumbs=0) as down FROM table1 as t1 left join table2 as t2 using(idea_id) group by t1.idea_id; ```
MySQL: get specific table field counts
[ "mysql", "sql", "select", "group-by", "sum" ]
===== EDIT: Modified original question ===== Based on the comments, I have edited the original question (I left the original question at the end) as follows: I have sample employee data stored in a table (emp) like this: ``` ID | Key | Value 1000 | 1 | Engineer 1000 | 2 | Male 1000 | 3 | 30 1001 | 1 | Manager 1001 | 2 | Female 1001 | 3 | 35 ``` where the keys are mapped in another table (key_prop): ``` Key | Value 1 | Type 2 | Sex 3 | Age 4 | ID ``` I'm looking for statistics of the employees grouped by Employee Type: ``` Type | Count | Number of Males | Number of Females | Avg Age Engineers | | | | Managers | | | | ``` It's easy to get the 1st column (I am using Oracle SQL): ``` SELECT key_prop.value, COUNT(key_prop.value) FROM emp INNER JOIN key_prop ON key_prop.id = emp.id AND key_prop.value = 1 GROUP BY key_prop.value ``` But I can't seem to get the other columns in the same query. Do I have to group several inner joins together or use nested SQL queries? == ORIGINAL POSTING BELOW == I have sample employee data stored as rows of KVPs like this: ``` Type | Engineer Sex | Male Age | 30 Type | Manager Sex | Female Age | 35 ... ``` I'm looking for statistics of the employees grouped by Employee Type: ``` Type | Count | Number of Males | Number of Females | Avg Age Engineers | | | | Managers | | | | ``` It's easy to get the first column (I am using Oracle SQL). But I can't seem to get the other columns in the same query.
You can do this by first aggregating by the employee id and then reaggregating: ``` select type, count(*) as num, sum(case when sex = 'male' then 1 else 0 end) as males, sum(case when sex = 'female' then 1 else 0 end) as females, avg(cast(age as decimal(5, 2))) as avgage from (select employeeid, max(case when col1 = 'type' then value end) as type, max(case when col1 = 'sex' then value end) as sex, max(case when col1 = 'age' then value end) as age from employees e group by employeeid ) t group by type; ```
You should group by type and then use aggregate functions with appropriate conditions: ``` SELECT type, COUNT(*) AS count, COUNT(CASE WHEN Sex = 'Male' THEN 1 ELSE NULL END) AS number_of_males, COUNT(CASE WHEN Sex = 'Female' THEN 1 ELSE NULL END) AS number_of_females, AVG(age) AS avg_age FROM employees GROUP BY type ```
Select averages in each GROUP BY group?
[ "", "sql", "oracle", "count", "group-by", "aggregate-functions", "" ]
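The accepted answer's two-step shape — pivot the key/value rows per employee with `MAX(CASE ...)`, then aggregate the pivoted rows — can be checked end-to-end in SQLite. The data mirrors the question's `emp` sample (plus one extra employee so the averages are interesting); the columns are renamed `k`/`v` here only to avoid keyword clashes, an assumption of this sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (id INTEGER, k INTEGER, v TEXT);
INSERT INTO emp VALUES
 (1000, 1, 'Engineer'), (1000, 2, 'Male'),   (1000, 3, '30'),
 (1001, 1, 'Manager'),  (1001, 2, 'Female'), (1001, 3, '35'),
 (1002, 1, 'Engineer'), (1002, 2, 'Female'), (1002, 3, '40');
""")

rows = con.execute("""
SELECT type,
       COUNT(*)                                        AS cnt,
       SUM(CASE WHEN sex = 'Male'   THEN 1 ELSE 0 END) AS males,
       SUM(CASE WHEN sex = 'Female' THEN 1 ELSE 0 END) AS females,
       AVG(CAST(age AS REAL))                          AS avg_age
FROM (SELECT id,
             MAX(CASE WHEN k = 1 THEN v END) AS type,
             MAX(CASE WHEN k = 2 THEN v END) AS sex,
             MAX(CASE WHEN k = 3 THEN v END) AS age
      FROM emp
      GROUP BY id) t
GROUP BY type
ORDER BY type
""").fetchall()
print(rows)
```

The inner query turns each employee's three KVP rows into one wide row; the outer query is then an ordinary grouped aggregation over those wide rows.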
I want to align a column in my table. In this case 'Title' ``` mysql> SELECT Titel, KuenstlerName from Bild; +--------------------------+---------------+ | Title | Artist| +--------------------------+---------------+ | Drei Musikanten | Picasso | | Buveuse assoupie | Picasso | | Die FlΓΆte des Pan | Picasso | | Paolo als Harlekin | Picasso | | Die Umarmung | Picasso | | Sitzende Frau | Picasso | | Sternennacht | van Gogh | | Der Park | Klingt | | Liegender Hund im Schnee | Marc | | Hauptweg und Nebenwege | Klee | | Jungfrau im Baum | Klee | | Das gelbe Haus | Klee | +--------------------------+---------------+ ``` Is it possible to align the left column to the right, like this: ``` +--------------------------+---------------+ | Title | Artist | +--------------------------+---------------+ | Drei Musikanten| Picasso | | Buveuse assoupie| Picasso | | Die FlΓΆte des Pan| Picasso | +--------------------------+---------------+ ``` Thanks for your help!
If your `Titel` length is 150, then you can query like this: ``` SELECT LPAD(Titel,150,' ') as Titel , KuenstlerName from Bild; ``` Check the [**manual**](http://www.w3resource.com/mysql/string-functions/mysql-lpad-function.php) for the `LPAD` function.
If `Titel` is 100 characters, then do this ``` SELECT LPAD(Titel, 100, ' '), ... ``` or ``` SELECT RIGHT(CONCAT(REPEAT(' ', 100), Titel), 100), ... ```
How to align a column right-adjusted
[ "", "mysql", "sql", "select", "format", "tabular", "" ]
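`LPAD` is MySQL-specific, but the effect both answers rely on — left-padding a value to a fixed width so a text client renders the column right-aligned — is easy to verify. SQLite has no `LPAD`, so this sketch substitutes its `printf('%Ns', ...)`, which pads on the left exactly like `LPAD(x, N, ' ')` for values shorter than N:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Bild (Titel TEXT)")
con.executemany("INSERT INTO Bild VALUES (?)",
                [("Drei Musikanten",), ("Die Umarmung",)])

# Right-align Titel in a 20-character column.
rows = [r[0] for r in con.execute("SELECT printf('%20s', Titel) FROM Bild")]
for r in rows:
    print(repr(r))
```

Every value comes back exactly 20 characters wide, with the padding spaces on the left — which is all "right alignment" means in a monospaced console table.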
I am using SQL Server 2008 R2 on an EC2 instance. I have a table with 4 fields, all varchar(50). ``` SELECT * FROM xyz WHERE DataDate ='20140609' ``` This query gives no result. ``` SELECT * FROM xyz WHERE DataDate = (Select MAX(DataDate) from xyz) ``` This query runs perfectly. ``` Select MAX(DataDate) from xyz ``` This query results in 20140609. I cannot understand why this is happening. Can someone please explain this?
As stated in comments it's likely due to leading spaces in the values. If you do the following to remove any spaces from the values it should work: ``` SELECT * FROM xyz WHERE REPLACE(DataDate, ' ', '')='20140609' ``` Alternately, you could use [**LTRIM**](http://msdn.microsoft.com/en-gb/library/ms177827.aspx) / [**RTRIM**](http://msdn.microsoft.com/en-us/library/ms178660.aspx) functions to do this. ## [Sample SQL Fiddle](http://sqlfiddle.com/#!3/b394b/7) ``` create table SampleData(myDate varchar(50)) insert into SampleData(myDate) values(' 20140609 ') insert into SampleData(myDate) values('20140608') insert into SampleData(myDate) values('20140607') insert into SampleData(myDate) values('20140606') ``` Both of these queries work in the fiddle: ``` SELECT * FROM SampleData WHERE REPLACE(myDate, ' ', '')='20140609' Select MAX(REPLACE(myDate, ' ', '')) from SampleData ``` ## Future Advice: Even so, it's not a good idea to save dates as varchar for numerous reasons so I would suggest changing that if possible. The below would perform a conversion on your existing data to convert them to valid dates, whilst removing any spaces, assuming your dates are always in the format `YYYYMMDD`: ``` SELECT cast(REPLACE(DataDate, ' ', '') as DateTime) FormattedDate FROM xyz ``` To implement this, you could create a new `DateTime` column on the table and insert the correct date values in to there with something like below and then modify your code to use the new column: First add a new column to the table: ``` ALTER TABLE xyz ADD FormattedDate DateTime NULL ``` Then update the data in the new column so it holds the converted dates: ``` UPDATE xyz SET FormattedDate = cast(REPLACE(DataDate, ' ', '') as DateTime) ```
In SQL, spaces count as characters. You can use the LIKE operator instead: ``` SELECT * FROM xyz WHERE DataDate like '%20140609%' ```
Filter dates stored as varchar in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "" ]
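The question's symptom — equality fails, but `MAX()` round-trips "perfectly" — is exactly what a hidden space produces, as the accepted answer suggests. A small SQLite reproduction (the stray leading space in the inserted row is the assumed culprit):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xyz (DataDate TEXT)")
# The stored value carries a stray leading space, invisible in most
# query-tool output but fatal to equality comparisons.
con.execute("INSERT INTO xyz VALUES (' 20140609')")

exact   = con.execute("SELECT COUNT(*) FROM xyz WHERE DataDate = '20140609'").fetchone()[0]
trimmed = con.execute("SELECT COUNT(*) FROM xyz WHERE TRIM(DataDate) = '20140609'").fetchone()[0]
maxed   = con.execute("SELECT MAX(DataDate) FROM xyz").fetchone()[0]
print(exact, trimmed, repr(maxed))
```

The `MAX` subquery comparison "works" in the question only because it compares the padded string with itself; `repr(maxed)` exposes the space that the display hid. `TRIM` (or the answer's `REPLACE`) makes the literal comparison succeed.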
Say I have the following data set ``` Column1 (VarChar(50 or something)) Elias Sails Pails Plane Games ``` What I'd like to produce from this column is the following set: ``` LETTER COUNT E 3 L 4 I 3 A 5 S 5 And So On... ``` One solution I thought of was combining all strings into a single string, and then count each instance of the letter in that string, but that feels sloppy. This is more an exercise of curiosity than anything else, but, is there a way to get a count of all distinct letters in a dataset with SQL?
I would do this by creating a table of your letters similar to: ``` CREATE TABLE tblLetter ( letter varchar(1) ); INSERT INTO tblLetter ([letter]) VALUES ('a'), ('b'), ('c'), ('d'); -- etc ``` Then you could join the `letters` to your table where your data is like the letter: ``` select l.letter, count(n.col) Total from tblLetter l inner join names n on n.col like '%'+l.letter+'%' group by l.letter; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/f837e/1). This would give a result: ``` | LETTER | TOTAL | |--------|-------| | a | 5 | | e | 3 | | g | 1 | | i | 3 | | l | 4 | | m | 1 | | p | 2 | | s | 4 | ```
If you create a table of letters, like this: ``` create table letter (ch char(1)); insert into letter(ch) values ('A'),('B'),('C'),('D'),('E'),('F'),('G'),('H') ,('I'),('J'),('K'),('L'),('M'),('N'),('O'),('P') ,('Q'),('R'),('S'),('T'),('U'),('V'),('W'),('X'),('Y'),('Z'); ``` you could do it with a cross join, like this: ``` select ch, SUM(len(str) - len(replace(str,ch,''))) from letter cross join test -- <<== test is the name of the table with the string group by ch having SUM(len(str) - len(replace(str,ch,''))) <> 0 ``` [Here is a running demo on sqlfiddle.](http://www.sqlfiddle.com/#!3/0d47a/2) You can do it without defining a table by embedding a list of letters into a query itself, but the idea of cross-joining and grouping by the letter would remain the same. Note: [see this answer](https://stackoverflow.com/a/9789266/335858) for the explanation of the expression inside the `SUM`.
Count Of Distinct Characters In Column
[ "", "sql", "sql-server", "count", "distinct-values", "" ]
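Note the two answers count different things: the accepted one counts *rows containing* each letter (so 'Sails' adds only 1 to 's'), while the second counts *occurrences* via the `LENGTH - LENGTH(REPLACE(...))` trick, which matches the asker's expected output. A SQLite check of the occurrence-count variant — with two adjustments that are assumptions of this sketch: `LOWER()` is added because `REPLACE` is case-sensitive, and a `CROSS JOIN` replaces the answer's implicit join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE letter (ch TEXT);
INSERT INTO letter VALUES ('a'), ('e'), ('s');
CREATE TABLE names (col TEXT);
INSERT INTO names VALUES ('Elias'), ('Sails'), ('Plane');
""")

# Occurrences of each letter = total length minus length with the letter removed.
rows = con.execute("""
SELECT l.ch,
       SUM(LENGTH(LOWER(n.col)) - LENGTH(REPLACE(LOWER(n.col), l.ch, ''))) AS total
FROM letter l
CROSS JOIN names n
GROUP BY l.ch
ORDER BY l.ch
""").fetchall()
print(rows)
```

'Sails' contributes 2 to the 's' count here, which a `LIKE '%s%'` row count would miss.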
How do I get the average from multiple columns? for example: ``` Columns: ID 125Hz 250Hz 500Hz 750Hz 1000Hz 1500Hz 2000Hz 3000Hz 4000Hz 6000Hz 8000Hz Values: 1 92 82 63 83 32 43 54 56 54 34 54 ``` I want to get the average of all the columns except the ID. How do I do that?
You have to manually add the columns, since there are no built-in functions for horizontal aggregation. ``` select (125Hz+250Hz+500Hz+750Hz+1000Hz+1500Hz+2000Hz+3000Hz+4000Hz+6000Hz+8000Hz)/11 as aveHz from table_name ```
In SQL-SERVER you can use this ``` DECLARE @total int DECLARE @query varchar(550) DECLARE @ALLColumns VARCHAR(500) SET @ALLColumns = '' ----Build the string columns SELECT @ALLColumns = @ALLColumns + '+' + '['+sc.NAME+']' FROM sys.tables st INNER JOIN sys.columns sc ON st.object_id = sc.object_id WHERE st.name LIKE '%YOUR_TABLE_NAME%' AND sc.NAME LIKE '[0-9]%';--[0-9]% just get the column that start with number ----Get the total number of column, SELECT @total = count(*) FROM sys.tables st INNER JOIN sys.columns sc ON st.object_id = sc.object_id WHERE st.name LIKE '%YOUR_TABLE_NAME%' AND sc.NAME LIKE '[0-9]%';--[0-9]% just get the column that start with number SET @query = 'SELECT SUM('+ SUBSTRING(@ALLColumns,2,LEN(@ALLColumns))+')/' +CAST(@total as varchar(4))+ ' AS [AVG] FROM [YOUR_TABLE_NAME] GROUP BY [ID]' --SELECT @query EXECUTE(@query) ``` This will execute a query like this one: ``` SELECT SUM([125Hz]+[250Hz]+[500Hz]+[750Hz]+[1000Hz]+[1500Hz]+[2000Hz] +[3000Hz]+[4000Hz]+[6000Hz]+[8000Hz])/11 AS [AVG] FROM [YOUR_TABLE_NAME] GROUP BY [ID] ``` **UPDATE** Add a column to store the avg, I called it [AVG] and chage the value of @query to ``` SET @query = ' CREATE TABLE #Medition (ID int,[AVG] decimal(18,4)) INSERT INTO #Medition (ID,[AVG]) SELECT ID,SUM ('+ SUBSTRING(@ALLColumns,2,LEN(@ALLColumns))+')/' +CAST(@total as varchar(10)) + ' AS [AVG] FROM Medition GROUP BY ID UPDATE YOUR_TABLE_NAME SET YOUR_TABLE_NAME.[AVG] = #Medition.[AVG] FROM YOUR_TABLE_NAME INNER JOIN #Medition ON YOUR_TABLE_NAME.ID =#Medition.ID DROP TABLE #Medition ' ``` Note: Build queries string is a little ugly
SQL average from multiple columns
[ "", "mysql", "sql", "sql-server", "mysql-workbench", "" ]
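The accepted answer's add-and-divide approach is trivial to verify; a three-column SQLite miniature of the question's table (quoted identifiers because the column names start with digits). One detail worth flagging: with integer columns, dividing by the literal `11` truncates in SQL Server and SQLite alike, so this sketch divides by a float literal to keep the fraction:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE readings (id INTEGER, "125Hz" INTEGER, "250Hz" INTEGER, "500Hz" INTEGER)')
con.execute("INSERT INTO readings VALUES (1, 92, 82, 63)")

# Dividing by 3.0 (not 3) avoids integer truncation.
avg = con.execute('SELECT ("125Hz" + "250Hz" + "500Hz") / 3.0 FROM readings').fetchone()[0]
print(avg)
```

For the full eleven-column table the expression is the same, just longer, with `/ 11.0` at the end.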
I just learned about the wonders of columnstore indexes and how you can "Use the columnstore index to achieve up to 10x query performance gains over traditional row-oriented storage, and up to 7x data compression over the uncompressed data size." With such sizable performance gains, is there really any reason to NOT use them?
The main disadvantage is that you'll have a hard time reading only a part of the index if the query contains a selective predicate. There are ways to do it (partitioning, segment elimination) but those are neither particularly easy to reliably implement nor do they scale to complex requirements. For scan-only workloads columnstore indexes are pretty much ideal.
Columnstore indexes are especially beneficial for **data warehousing (DW)**, where you only perform updates or deletes at certain times. This is due to their special design with delta loading and other features. This video gives great detail and a nice basic overview of the exact difference: [Columnstore Index](https://www.youtube.com/watch?v=772Go3sewHI). ## Traditional If, however, your application has high **I/O** (input and output), a columnstore index is not ideal, since traditional **row indexing** will find and manipulate that specific target (using the rows found through the index). An example of this would be an ATM application which frequently changes the values in the **rows** of a given person's accounts. ## ColumnStore **Columnstore indexing** indexes across the **COLUMNS**, which is not ideal in this case since the row values will be spread throughout the segments (column indexes). I highly recommend the video! I also want to elaborate on non-clustered vs clustered columnstore: > A non-clustered columnstore index (introduced in 2012) stores the **WHOLE** data again, meaning twice the data (2X). > > Whereas a clustered columnstore index (introduced in 2014) only takes up about 5MB for roughly 16GB of data. This is due to run-length encoding (RLE), which reduces the amount of duplicate data stored for each column, making the index take up less extra storage.
Columnstore index proper usage
[ "", "sql", "sql-server", "database", "t-sql", "optimization", "" ]
I know there is no "last" row so I hope I'm clear that isn't what I'm really looking for. I want to select the rows in a table if the value of one particular field is the last alphabetically. I'll try my best to draw it out below. I'm a bit of a novice so please bear with me... TABLE [Order Number], [Delivery Date], [Order Qty], [Shipped Qty], [Bill To], [Ship To], [Invoice Number] There are many times when we will reissue invoices and that invoice number will increment by a letter. This will also update additional field values as well. Below is a typical set with multiples invoices... ``` '987654', '2014-05-01 00:00:00', '100', '90', 'BillToXYZ', 'ShipToXYZ', '987654A' - NEW RECORD - '987654', '2014-05-01 00:00:00', '-100', '-90', 'BillToXYZ', 'ShipToXYZ', '987654B' - NEW RECORD - '987654', '2014-05-01 00:00:00', '100', '100', 'BillToXYZ', 'ShipToNEWSHIPTOLOCATION', '987654C' - NEW RECORD - '987654', '2014-05-01 00:00:00', '10', '10', 'BillToXYZ', '2ndNEWSHIPTOLOCATION', '987654D' - NEW RECORD - ``` What I need is to query all the above fields and only bring back those where the [Invoice Number] is the last(alphabetically) (in this case 987654D) but also have it SUM the values of the [Order Qty] and [Shipped Qty] for all of the records regardless of [Invoice Number]. If I can provide any additional information please let me know. Thank you in advance.
It is possible to use the `ROW_NUMBER` function to get the last row in a group by setting the `ORDER BY` descending and then filtering for the row with the value 1. `SUM` and `MAX` with windowing help to get the other aggregate values. ``` WITH D AS ( SELECT [Order Number] , [Delivery Date] , SUM([Order Qty]) OVER (PARTITION BY [Order Number]) [Total Order Qty] , [Total Shipped Qty] = SUM([Shipped Qty]) OVER (PARTITION BY [Order Number]) , [Bill To] , [Ship To] , [Last Invoice Number] = MAX([Invoice Number]) OVER (PARTITION BY [Order Number]) , ID = ROW_NUMBER() OVER (PARTITION BY [Order Number] ORDER BY [Invoice Number] DESC) FROM Table1 ) SELECT [Order Number] , [Delivery Date] , [Total Order Qty] , [Total Shipped Qty] , [Bill To] , [Ship To] , [Last Invoice Number] FROM D WHERE ID = 1 ``` SQLFiddle demo
Is this what you're going after? **EDIT 2:** Got clarification from OP. Have edited my solution to bring up aggregation for complete order but only the line item information of the latest invoice. Fiddle here: <http://sqlfiddle.com/#!3/08bb0/6> ``` SELECT t1.[Order Number], t2.[Max Invoice Number], t1.[Ship To], t2.[Sum Order Qty], t2.[Sum Shipped Qty] FROM Table1 t1 INNER JOIN (SELECT [Order Number], MAX([Invoice Number] ) AS [Max Invoice Number], SUM([Order Qty]) AS [Sum Order Qty], SUM([Shipped Qty]) AS [Sum Shipped Qty] FROM Table1 GROUP BY [Order Number]) t2 ON t1.[Invoice Number] = t2.[Max Invoice Number] ORDER BY t1.[Order Number]; ``` You can add more columns from the D invoice by placing them in the SELECT list from t1, as I did with [Ship To]. **EDIT 1:** Added grouping so whole table is considered. To be fair, Martin K. posted this answer first.
Find the most recent row of a group
[ "", "sql", "sql-server-2008-r2", "" ]
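The second answer's shape — aggregate per order in a derived table, then join back on the `MAX` invoice to pick up that row's other columns — is portable enough to verify in SQLite with the question's own numbers (columns trimmed to the ones that matter; the reversed B invoice nets out against A, as in the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (order_number TEXT, order_qty INTEGER, shipped_qty INTEGER, invoice_number TEXT);
INSERT INTO orders VALUES
 ('987654',  100,  90, '987654A'),
 ('987654', -100, -90, '987654B'),
 ('987654',  100, 100, '987654C'),
 ('987654',   10,  10, '987654D');
""")

rows = con.execute("""
SELECT o.order_number, t.max_invoice, t.sum_order_qty, t.sum_shipped_qty
FROM orders o
JOIN (SELECT order_number,
             MAX(invoice_number) AS max_invoice,
             SUM(order_qty)      AS sum_order_qty,
             SUM(shipped_qty)    AS sum_shipped_qty
      FROM orders
      GROUP BY order_number) t
  ON o.invoice_number = t.max_invoice
""").fetchall()
print(rows)
```

Only the alphabetically last invoice row (987654D) survives the join, yet the totals cover all four invoices. Note `MAX` over the invoice string works here because the suffix letter sorts last; invoice numbers of varying length would need a more careful ordering.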
I'm trying to fetch some data from a db. I've got a `Taak` table with an optional `idPartij` column. Optional, because it can be a real `idPartij`, but it can also be `null`. The query I've got: ``` SELECT T.idTaak, T.Taaktype, P.Partijnaam, T.Naar, T.UltimatumDatum, T.Opmerking, T.Status, T.Prioriteit FROM Taak AS T, Partij AS P WHERE T.idPartij = P.idPartij ORDER BY idTaak DESC ``` This is working fine when I've got an id in `T.idPartij`, but as mentioned earlier, that id can be `null`. And when that is the case, the row won't be in the result. Only problem: when I remove the where clause, I get far too many rows (because the `Partij` table isn't filtered anymore)... I'm a total noob at SQL, so I can't think of a way to "fix" this problem. The only thing I can think of is creating 2 queries, but I don't think that is a nice way to do it...
Since you are trying to link two tables, you should use a `LEFT OUTER JOIN` instead: ``` SELECT T.idTaak, T.Taaktype, P.Partijnaam, T.Naar, T.UltimatumDatum, T.Opmerking, T.Status, T.Prioriteit FROM Taak AS T LEFT OUTER JOIN Partij AS P ON T.idPartij = P.idPartij ORDER BY idTaak DESC ``` If you don't want to include the `NULL` values use an `INNER JOIN` instead. Note that you should not use these old-style-joins [it's a bad habit to kick](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins). --- According to the `NULL` issue: You cannot use `= NULL` or `<> NULL` because `NULL` is not equal or unequal to anything. `NULL` means unknown. So use `IS NULL` or `IS NOT NULL`: ``` WHERE T.idPartij IS NULL OR T.idPartij = P.idPartij ``` From [MSDN](http://msdn.microsoft.com/en-us/library/ms191504%28v=sql.105%29.aspx) > A value of NULL indicates that the value is unknown. A value of NULL > is different from an empty or zero value. No two null values are > equal. Comparisons between two null values, or between a NULL and any > other value, return unknown because the value of each NULL is unknown.
You didn't specify a database platform, so it is safe to use ANSI syntax. Do a `coalesce` to compare `P.idPartij` when `T.idPartij` is `null`: ``` where P.idPartij = coalesce(T.idPartij, P.idPartij) ``` It will match all `T` rows if `T.idPartij` is `null`. If you want to return no `T` row when `idPartij` is `null`, use a `left outer join`: ``` from Taak t left outer join Partij p on t.idPartij = p.idPartij ```
SQL Where ID equals ID but can be null
[ "", "sql", "where-clause", "" ]
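The difference between the question's comma-join (which behaves like an `INNER JOIN`) and the `LEFT OUTER JOIN` the accepted answer recommends shows up with just two rows; a minimal SQLite sketch using stand-ins for the `Taak`/`Partij` tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Partij (idPartij INTEGER PRIMARY KEY, Partijnaam TEXT);
CREATE TABLE Taak   (idTaak INTEGER PRIMARY KEY, idPartij INTEGER);
INSERT INTO Partij VALUES (1, 'Acme');
INSERT INTO Taak VALUES (10, 1), (11, NULL);   -- task 11 has no party
""")

# INNER JOIN silently drops the NULL-keyed task...
inner = con.execute(
    "SELECT COUNT(*) FROM Taak T JOIN Partij P ON T.idPartij = P.idPartij"
).fetchone()[0]

# ...LEFT JOIN keeps it, with NULL for the missing party name.
left = con.execute("""
SELECT T.idTaak, P.Partijnaam
FROM Taak T LEFT JOIN Partij P ON T.idPartij = P.idPartij
ORDER BY T.idTaak
""").fetchall()
print(inner, left)
```

Task 11 survives only in the `LEFT JOIN` version, carrying `NULL` where the `Partij` columns would be — exactly the behaviour the asker wanted.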
Say we have a simple table of blog articles, for example: ``` aid title content 1 Foo Lorem Ips… 2 Bar Dolor Sit… 3 Boo Amet Cons… ``` … and another table for the comments: ``` cid aid name comment date 1 1 zaphod First! 1404294939 2 1 arthur Not you again!!111 1404296182 3 1 marvin It’s all useless anyw… 1404299811 ``` And now I want to generate a list of all the articles with the most recent comment of the respective article. That means I have to left join the `articles` table on the `comments` table, but for each article row I only want the comment row with the highest `date` value or in other words: only the first row when I do `ORDER BY date DESC LIMIT 1` for every article-comment connection. What is the best (or maybe easiest) way to do this?
The methodology needed is to first find the latest comment date per article: ``` SELECT aid, MAX(Date) AS Date FROM Comments GROUP BY aid; ``` You can then place this logic inside a subquery and join back to comments using `aid` and `date` to get all the fields: ``` SELECT c.* FROM Comments AS c INNER JOIN ( SELECT aid, MAX(Date) AS Date FROM Comments GROUP BY aid ) AS mc ON mc.aid = c.aid AND mc.Date = c.Date; ``` Then you can add a join to your Articles table. If all articles have comments, or you only want articles with comments, then you can use: ``` SELECT * FROM Articles AS a INNER JOIN Comments AS c ON c.Aid = a.aid INNER JOIN ( SELECT aid, MAX(Date) AS Date FROM Comments GROUP BY aid ) AS mc ON mc.aid = c.aid AND mc.Date = c.Date; ``` If not, you will need to use a LEFT JOIN with parentheses to ensure the `INNER JOIN` on the subquery does not remove articles with no comments: ``` SELECT * FROM Articles AS a LEFT JOIN (Comments AS c INNER JOIN ( SELECT aid, MAX(Date) AS Date FROM Comments GROUP BY aid ) AS mc ON mc.aid = c.aid AND mc.Date = c.Date) ON c.Aid = a.aid; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!2/51a5bb/1)** This is really just shorthand for: ``` SELECT * FROM Articles AS a LEFT JOIN ( SELECT c.* FROM Comments AS c INNER JOIN ( SELECT aid, MAX(Date) AS Date FROM Comments GROUP BY aid ) AS mc ON mc.aid = c.aid AND mc.Date = c.Date ) AS c ON c.Aid = a.aid; ``` But since MySQL materialises all subqueries, this avoids the unnecessary materialisation of `Comments`. [Comparing the execution plans](http://sqlfiddle.com/#!2/51a5bb/3) of the two queries shows that the former will perform better.
You can do this using joins only. It is much faster than a subquery. ``` select articles.*,comments1.* from Articles articles LEFT JOIN (Comments comments1 LEFT JOIN Comments comments2 on comments1.date<comments2.date and comments1.aid=comments2.aid ) on comments2.cid IS NULL and comments1.aid=articles.aid; ``` SQLFiddle example: <http://sqlfiddle.com/#!2/51a5bb/4>
MySQL Join left row with only the first right row
[ "", "mysql", "sql", "join", "relational-database", "" ]
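The greatest-per-group pattern from the accepted answer can be run against the question's own sample data. This sketch uses the answer's expanded subquery form rather than the parenthesised-join shorthand, since that shorthand's alias scoping is MySQL-specific; the data is the article/comment sample from the question, with article 2 left commentless to exercise the `LEFT JOIN`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE articles (aid INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE comments (cid INTEGER PRIMARY KEY, aid INTEGER, name TEXT, date INTEGER);
INSERT INTO articles VALUES (1, 'Foo'), (2, 'Bar');
INSERT INTO comments VALUES
 (1, 1, 'zaphod', 1404294939),
 (2, 1, 'arthur', 1404296182),
 (3, 1, 'marvin', 1404299811);
""")

# Latest comment per article; article 2 has no comments and still appears.
rows = con.execute("""
SELECT a.aid, a.title, c.name
FROM articles a
LEFT JOIN (SELECT c.*
           FROM comments c
           JOIN (SELECT aid, MAX(date) AS date
                 FROM comments GROUP BY aid) mc
             ON mc.aid = c.aid AND mc.date = c.date) c
  ON c.aid = a.aid
ORDER BY a.aid
""").fetchall()
print(rows)
```

Article 1 pairs with its newest commenter (marvin, the highest timestamp), and article 2 survives with `NULL` — the behaviour the `LEFT JOIN` variant exists to guarantee.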
I have a table with sell orders and I want to list the `COUNT` of sell orders per day, between two dates, without leaving date gaps. This is what I have currently: ``` SELECT COUNT(*) as Norders, DATE_FORMAT(date, "%M %e") as sdate FROM ORDERS WHERE date <= NOW() AND date >= NOW() - INTERVAL 1 MONTH GROUP BY DAY(date) ORDER BY date ASC; ``` The result I'm getting is as follows: ``` 6 May 1 14 May 4 1 May 5 8 Jun 2 5 Jun 15 ``` But what I'd like to get is: ``` 6 May 1 0 May 2 0 May 3 14 May 4 1 May 5 0 May 6 0 May 7 0 May 8 ..... 0 Jun 1 8 Jun 2 ..... 5 Jun 15 ``` Is that possible?
Creating a range of dates on the fly and joining that against your orders table: ``` SELECT sub1.sdate, COUNT(ORDERS.id) as Norders FROM ( SELECT DATE_FORMAT(DATE_SUB(NOW(), INTERVAL units.i + tens.i * 10 + hundreds.i * 100 DAY), "%M %e") as sdate FROM (SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9)units CROSS JOIN (SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9)tens CROSS JOIN (SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9)hundreds WHERE DATE_SUB(NOW(), INTERVAL units.i + tens.i * 10 + hundreds.i * 100 DAY) BETWEEN DATE_SUB(NOW(), INTERVAL 1 MONTH) AND NOW() ) sub1 LEFT OUTER JOIN ORDERS ON sub1.sdate = DATE_FORMAT(ORDERS.date, "%M %e") GROUP BY sub1.sdate ``` This copes with date ranges of up to 1000 days. Note that it could be made more efficient easily depending on the type of field you are using for your dates. EDIT - as requested, to get the count of orders per month: ``` SELECT aMonth, COUNT(ORDERS.id) as Norders FROM ( SELECT DATE_FORMAT(DATE_SUB(NOW(), INTERVAL months.i MONTH), "%Y%m") as sdate, DATE_FORMAT(DATE_SUB(NOW(), INTERVAL months.i MONTH), "%M") as aMonth FROM (SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 10 UNION SELECT 11)months WHERE DATE_SUB(NOW(), INTERVAL months.i MONTH) BETWEEN DATE_SUB(NOW(), INTERVAL 12 MONTH) AND NOW() ) sub1 LEFT OUTER JOIN ORDERS ON sub1.sdate = DATE_FORMAT(ORDERS.date, "%Y%m") GROUP BY aMonth ```
You are going to need to generate a virtual (or physical) table, containing every date in the range. That can be done as follows, using a sequence table. ``` SELECT mintime + INTERVAL seq.seq DAY AS orderdate FROM ( SELECT CURDATE() - INTERVAL 1 MONTH AS mintime, CURDATE() AS maxtime ) AS minmax JOIN seq_0_to_999999 AS seq ON seq.seq < TIMESTAMPDIFF(DAY,mintime,maxtime) ``` Then, you join this virtual table to your query, as follows. ``` SELECT IFNULL(orders.Norders,0) AS Norders, /* show zero instead of null*/ DATE_FORMAT(alldates.orderdate, "%M %e") as sdate FROM ( SELECT mintime + INTERVAL seq.seq DAY AS orderdate FROM ( SELECT CURDATE() - INTERVAL 1 MONTH AS mintime, CURDATE() AS maxtime ) AS minmax JOIN seq_0_to_999999 AS seq ON seq.seq < TIMESTAMPDIFF(DAY,mintime,maxtime) ) AS alldates LEFT JOIN ( SELECT COUNT(*) as Norders, DATE(date) AS orderdate FROM ORDERS WHERE date <= NOW() AND date >= NOW() - INTERVAL 1 MONTH GROUP BY DAY(date) ) AS orders ON alldates.orderdate = orders.orderdate ORDER BY alldates.orderdate ASC ``` Notice that you need the `LEFT JOIN` so the rows in your output result set will be preserved even if there's no data in your `ORDERS` table. Where do you get this sequence table `seq_0_to_999999`? You can make it like this. 
``` DROP TABLE IF EXISTS seq_0_to_9; CREATE TABLE seq_0_to_9 AS SELECT 0 AS seq UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9; DROP VIEW IF EXISTS seq_0_to_999; CREATE VIEW seq_0_to_999 AS ( SELECT (a.seq + 10 * (b.seq + 10 * c.seq)) AS seq FROM seq_0_to_9 a JOIN seq_0_to_9 b JOIN seq_0_to_9 c ); DROP VIEW IF EXISTS seq_0_to_999999; CREATE VIEW seq_0_to_999999 AS ( SELECT (a.seq + (1000 * b.seq)) AS seq FROM seq_0_to_999 a JOIN seq_0_to_999 b ); ``` You can find an explanation of all this in more detail at <http://www.plumislandmedia.net/mysql/filling-missing-data-sequences-cardinal-integers/> If you're using MariaDB version 10+, these sequence tables are built in.
Aggregating data by date in a date range without date gaps in result set
[ "", "mysql", "sql", "time-series", "mariadb", "" ]
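Both answers share the same skeleton: build a calendar table covering every date, then `LEFT JOIN` the orders onto it so gap days surface as zero counts. In SQLite the calendar is most naturally built with a recursive CTE rather than digit cross-joins — an adaptation of the idea, not what either answer runs on MySQL — and the zero comes from `COUNT` over a non-nullable column of the joined table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, date TEXT);
INSERT INTO orders VALUES (1, '2014-05-01'), (2, '2014-05-01'), (3, '2014-05-03');
""")

rows = con.execute("""
WITH RECURSIVE cal(d) AS (
  SELECT '2014-05-01'
  UNION ALL
  SELECT date(d, '+1 day') FROM cal WHERE d < '2014-05-04'
)
SELECT cal.d, COUNT(o.id) AS norders
FROM cal
LEFT JOIN orders o ON o.date = cal.d
GROUP BY cal.d
ORDER BY cal.d
""").fetchall()
print(rows)
```

May 2 and May 4 have no orders, yet they appear with a count of 0 — the gapless output the question asks for. `COUNT(o.id)` (rather than `COUNT(*)`) is what makes the empty days count as zero instead of one.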
Consider these two PostgreSQL functions: ``` CREATE OR REPLACE FUNCTION f_1 (v1 INTEGER, v2 OUT INTEGER) AS $$ BEGIN v2 := v1; END $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION f_2 (v1 INTEGER) RETURNS TABLE(v2 INTEGER) AS $$ BEGIN v2 := v1; END $$ LANGUAGE plpgsql; ``` In any "ordinary" procedural SQL language (e.g. Transact-SQL), the two types of functions would be quite different. `f_1` would actually be a procedure, whereas `f_2` would be a table-valued function. In SQL Server, the latter is returned from `INFORMATION_SCHEMA.ROUTINES` like so: ``` SELECT r.routine_schema, r.routine_name FROM information_schema.routines r WHERE r.routine_type = 'FUNCTION' AND r.data_type = 'TABLE' ``` In PostgreSQL, this doesn't work, however. The following query shows that there is essentially no difference between the signatures of `f_1` and `f_2`: ``` SELECT r.routine_name, r.data_type, p.parameter_name, p.data_type FROM information_schema.routines r JOIN information_schema.parameters p USING (specific_catalog, specific_schema, specific_name); ``` The above yields: ``` routine_name | data_type | parameter_name | data_type -------------+-----------+----------------+---------- f_1 | integer | v1 | integer f_1 | integer | v2 | integer f_2 | integer | v1 | integer f_2 | integer | v2 | integer ``` Things don't get much better when I have multiple columns returned from the functions, in case of which I don't even have a "formal" return type anymore. Just `record`: ``` CREATE OR REPLACE FUNCTION f_3 (v1 INTEGER, v2 OUT INTEGER, v3 OUT INTEGER) AS $$ BEGIN v2 := v1; END $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION f_4 (v1 INTEGER) RETURNS TABLE(v2 INTEGER, v3 INTEGER) AS $$ BEGIN v2 := v1; END $$ LANGUAGE plpgsql; ``` ... 
I'll get: ``` routine_name | data_type | parameter_name | data_type -------------+-----------+----------------+---------- f_3 | record | v1 | integer f_3 | record | v2 | integer f_3 | record | v3 | integer f_4 | record | v1 | integer f_4 | record | v2 | integer f_4 | record | v3 | integer ``` If coming from other databases, clearly the *intent* of the lexical signature is quite different. As an Oracle person, I expect `PROCEDURES` to have side-effects, whereas `FUNCTIONS` don't have any side-effects (unless in an autonomous transaction) and can be safely embedded in SQL. I know that PostgreSQL cleverly treats all functions as tables, but I don't think it's a good idea to design `OUT` parameters as table columns in any query... ### My question is: Is there any formal difference at all between the two ways to declare functions? If there is, how can I discover it from the `INFORMATION_SCHEMA` or from the `PG_CATALOG`?
`\df public.f_*` does this ``` select n.nspname as "Schema", p.proname as "Name", pg_catalog.pg_get_function_result(p.oid) as "Result data type", pg_catalog.pg_get_function_arguments(p.oid) as "Argument data types", case when p.proisagg then 'agg' when p.proiswindow then 'window' when p.prorettype = 'pg_catalog.trigger'::pg_catalog.regtype then 'trigger' else 'normal' end as "Type" from pg_catalog.pg_proc p left join pg_catalog.pg_namespace n on n.oid = p.pronamespace where p.proname ~ '^(f_.*)$' and n.nspname ~ '^(public)$' order by 1, 2, 4; ``` which returns this ``` List of functions Schema | Name | Result data type | Argument data types | Type --------+------+-------------------------------+--------------------------------------------+-------- public | f_1 | integer | v1 integer, OUT v2 integer | normal public | f_2 | TABLE(v2 integer) | v1 integer | normal public | f_3 | record | v1 integer, OUT v2 integer, OUT v3 integer | normal public | f_4 | TABLE(v2 integer, v3 integer) | v1 integer | normal (4 rows) ``` To drop a function it is necessary to pass its *input* (`IN` and `INOUT`) arguments data types. Then I guess the function name and its *input* arguments data types do form its signature. And to change the returned data type it is necessary to first drop it and recreate.
It *appears* that the `pg_catalog.pg_proc.proretset` flag contains a *hint* about whether the function returns a set (i.e. a table): ``` SELECT r.routine_name, r.data_type, p.parameter_name, p.data_type, pg_p.proretset FROM information_schema.routines r JOIN information_schema.parameters p USING (specific_catalog, specific_schema, specific_name) JOIN pg_namespace pg_n ON r.specific_schema = pg_n.nspname JOIN pg_proc pg_p ON pg_p.pronamespace = pg_n.oid AND pg_p.proname = r.routine_name WHERE r.routine_schema = 'public' AND r.routine_name IN ('f_1', 'f_2', 'f_3', 'f_4') ORDER BY routine_name, parameter_name; ``` The above would yield: ``` routine_name | data_type | parameter_name | data_type | proretset -------------+-----------+----------------+-----------+---------- f_1 | record | v1 | integer | f f_1 | record | v2 | integer | f f_2 | record | v1 | integer | t f_2 | record | v2 | integer | t f_3 | record | v1 | integer | f f_3 | record | v2 | integer | f f_3 | record | v3 | integer | f f_4 | record | v1 | integer | t f_4 | record | v2 | integer | t f_4 | record | v3 | integer | t ``` ### INFORMATION\_SCHEMA.COLUMNS emulation For what it's worth and in case someone needs this crazy thing, here's the beautiful query I came up with to emulate SQL Server's nice `INFORMATION_SCHEMA.COLUMNS` implementation that returns table-valued function columns (which is what we really needed when supporting table-valued functions in [jOOQ's code generator](https://github.com/jOOQ/jOOQ/issues/3375)): ``` SELECT p.proname AS TABLE_NAME, columns.proargname AS COLUMN_NAME, ROW_NUMBER() OVER(PARTITION BY p.oid ORDER BY o.ordinal) AS ORDINAL_POSITION, format_type(t.oid, t.typtypmod) AS DATA_TYPE, information_schema._pg_char_max_length(t.oid, t.typtypmod) AS CHARACTER_MAXIMUM_LENGTH, information_schema._pg_numeric_precision(t.oid, t.typtypmod) AS NUMERIC_PRECISION, information_schema._pg_numeric_scale(t.oid,t.typtypmod) AS NUMERIC_SCALE, not(t.typnotnull) AS IS_NULLABLE FROM pg_proc p, 
LATERAL generate_series(1, array_length(p.proargmodes, 1)) o(ordinal), LATERAL ( SELECT p.proargnames[o.ordinal], p.proargmodes[o.ordinal], p.proallargtypes[o.ordinal] ) columns(proargname, proargmode, proargtype), LATERAL ( SELECT pg_type.oid oid, pg_type.* FROM pg_type WHERE pg_type.oid = columns.proargtype ) t WHERE p.proretset AND proargmode = 't' AND p.proname LIKE 'f%'; ``` The above nicely returns (column names shortened for SO): ``` table_name | column_name | ordinal | data_type | length | precision | scale | nullable f_2 | v2 | 1 | integer | | 32 | 0 | t f_4 | v2 | 1 | integer | | 32 | 0 | t f_4 | v3 | 2 | integer | | 32 | 0 | t ```
Is there any formal difference at all between PostgreSQL functions with OUT parameters and with TABLE results?
[ "", "sql", "postgresql", "stored-procedures", "user-defined-functions", "out-parameters", "" ]
I have two tables, named StationTabs and Links, in SQL Server 2008. ``` StationTabs ------------ StationId(int) Longitude(float) Latitude(float) Links ---- id(int) StationStart(int) StationEnd(int) ``` StationStart and StationEnd are foreign keys of StationId. I should get Longitude and Latitude values for StationStart and StationEnd with the same query. I've tried something like this; ``` Select Longitude,Latitude From StationTabs, (Select * From Links ) As ST1, (Select * From Links) As ST2 Where ST1.StationStart=StationTabs.StationId AND ST2.StationEnd=StationTabs.StationId; ``` This statement only returns Longitude and Latitude values of StationEnd. Can somebody help? For instance; ``` StationId = 1, Longitude = 35,18, Latitude = 37,01 StationId = 2, Longitude = 36,43, Latitude = 38,88 ``` In the Links table; ``` id = 3, StationStart = 2 , StationEnd = 1 ``` As a result, I have to get the values of ``` lonStart=36,43 and latStart = 38,88 lonEnd=35,18 and latEnd=37,01 ```
If you want the start and end lat/long values for each `Links` entry, you've doubled up references to the wrong table; it's `StationTabs` that needs two aliases, not `Links`: ``` SELECT ST1.Longitude as StartLong, ST1.Latitude as StartLat, ST2.Longitude as EndLong, ST2.Latitude as EndLat, l.ID as LinkID FROM Links l inner join StationTabs ST1 on l.StationStart = ST1.StationId inner join StationTabs ST2 on l.StationEnd = ST2.StationId ```
This should work: ``` SELECT Longitude, Latitude FROM StationTabs st INNER JOIN Links lStart ON st.StationId = lStart.StationStart INNER JOIN Links lEnd ON st.StationId = lEnd.StationEnd ``` If one of both (or both) can be null but you want to include this record anyway you should use `LEFT OUTER JOIN` instead.
Properly Joining Tables in SQL
[ "", "sql", "sql-server-2008", "" ]
I want to select 1000 rows from one table using the last 1000 IDs from another table. This is the query, but it returned an error message. What did I do wrong with this query? ``` SELECT * FROM table1 WHERE id IN ( SELECT id FROM table2 LIMIT 50 ) ``` Error message received: ``` Error Code : 1235 This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery' ```
``` SELECT * FROM table1 x JOIN ( SELECT id FROM table2 ORDER BY id LIMIT 50 ) y ON y.id = x.id; ```
You should join both tables and sort in descending order and *then* apply the limit. ``` SELECT table1.* FROM table1 JOIN table2 USING (id) ORDER BY id DESC LIMIT 1000; ``` This will give you 1000 entries with highest `id`in descending order if, and only if, they exist in both tables.
SELECT ID FROM subquery with limit 1000 rows in MySQL
[ "", "mysql", "sql", "subquery", "" ]
I have a number of sp's that create a temporary table `#TempData` with various fields. Within these sp's I call a common processing sp that operates on `#TempData`. The temp data processing depends on the sp's input parameters. The sp code is: ``` CREATE PROCEDURE [dbo].[tempdata_proc] @ID int, @NeedAvg tinyint = 0 AS BEGIN SET NOCOUNT ON; if @NeedAvg = 1 Update #TempData set AvgValue = 1 Update #TempData set Value = -1; END ``` Then, this sp is called in an outer sp with the following code: ``` USE [BN] --GO --DBCC FREEPROCCACHE; GO Create table #TempData ( tele_time datetime , Value float --, AvgValue float ) Create clustered index IXTemp on #TempData(tele_time); insert into #TempData(tele_time, Value ) values( GETDATE(), 50 ); --sample data declare @ID int, @UpdAvg int; select @ID = 1000, @UpdAvg = 1 ; Exec dbo.tempdata_proc @ID, @UpdAvg ; select * from #TempData; drop table #TempData ``` This code throws an error: *Msg 207, Level 16, State 1, Procedure tempdata_proc, Line 8: Invalid column name "AvgValue".* But if I just uncomment the declaration `AvgValue float`, everything works OK. The question: is there any workaround that lets the stored proc code remain the same while giving the optimizer a hint to skip this statement, since the AvgValue column will not be used by the sp given the parameters passed? Dynamic SQL is not a welcome solution, BTW. Using an alternative to the `#TempData` table name is also undesirable given the existing T-SQL code (huge modifications would be necessary for that). I have tried SET FMTONLY, tempdb.sys.columns, and try-catch wrapping without any success.
The way that stored procedures are processed is split into two parts - one part, checking for syntactical correctness, is performed at the time that the stored procedure is created or altered. The remaining part of compilation is deferred until the point in time at which the store procedure is *executed*. This is referred to as [Deferred Name Resolution](http://technet.microsoft.com/en-us/library/ms190686%28v=sql.105%29.aspx) and allows a stored procedure to include references to tables (not just limited to temp tables) that do not exist at the point in time that the procedure is created. Unfortunately, when it comes to the point in time that the procedure is executed, it needs to be able to *compile* all of the individual statements, and it's at this time that it will discover that the table exists but that the column doesn't - and so at this time, it will generate an error and refuse to run the procedure. The T-SQL language is unfortunately a very simplistic compiler, and doesn't take runtime control flow into account when attempting to perform the compilation. It doesn't analyse the control flow or attempt to defer the compilation in conditional paths - it just fails the compilation because the column doesn't (at this time) exist. Unfortunately, there aren't any mechanisms built in to SQL Server to control this behaviour - this is the behaviour you get, and anything that addresses it is going to be perceived as a workaround - as evidenced already by the (valid) suggestions in the comments - the two main ways to deal with it are to use dynamic SQL or to ensure that the temp table always contains all columns required. 
One way to workaround your concerns about maintenance if you go down the "all uses of the temp table should have all columns" is to move the column definitions into a separate stored procedure, that can then augment the temporary table with all of the required columns - something like: ``` create procedure S_TT_Init as alter table #TT add Column1 int not null alter table #TT add Column2 varchar(9) null go create procedure S_TT_Consumer as insert into #TT(Column1,Column2) values (9,'abc') go create procedure S_TT_User as create table #TT (tmp int null) exec S_TT_Init insert into #TT(Column1) values (8) exec S_TT_Consumer select Column1 from #TT go exec S_TT_User ``` Which produces the output `8` and `9`. You'd put your temp table definition in `S_TT_Init`, `S_TT_Consumer` is the inner query that multiple stored procedures call, and `S_TT_User` is an example of one such stored procedure.
Create the table with the column initially. If you're populating the TEMP table with SPROC output just make it an IDENTITY INT (1,1) so the columns line up with your output. Then drop the column and re-add it as the appropriate data type later on in the SPROC.
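A rough sketch of that idea applied to the question's code (hypothetical and untested against the original procedures; the column is simply created up front so the inner proc compiles, then dropped afterwards): ``` Create table #TempData ( tele_time datetime , Value float , AvgValue float -- always created so tempdata_proc compiles ); Exec dbo.tempdata_proc @ID, @UpdAvg; ALTER TABLE #TempData DROP COLUMN AvgValue; -- remove it again if it is not wanted ```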
StoredProc manipulating Temporary table throws 'Invalid column name' on execution
[ "", "sql", "t-sql", "sql-server-2005", "tempdata", "" ]
Consider the following: ``` class User < ActiveRecord::Base has_many :events end class Event < ActiveRecord::Base belongs_to :user #this user is the event owner has_many :members end class Members < ActiveRecord::Base belongs_to :user belongs_to :event end ``` Now, I need to list all the members for which `current_user` is the owner. so I have come up with this: ``` @members = Member.where event_id: current_user.events ``` which produces the following query: ``` SELECT "members".* FROM "members" WHERE "members"."event_id" IN (SELECT "events"."id" FROM "events" WHERE "events"."user_id" = 1) ``` This works as expected but uses subqueries instead of `JOIN`. Does anyone know a better way to write this same query?
Add a has\_many :through association to your User model: ``` class User < ActiveRecord::Base has_many :events has_many :members, :through => :events end ``` Now you can query for all a user's members through the members association: ``` user.members ``` The SQL generated will look something like: ``` SELECT "members".* FROM "members" INNER JOIN "events" ON "members"."id" = "events"."member_id" WHERE "events"."user_id" = 1 ```
I guess this will work. ``` Member.joins(:event).where("events.user_id = ?" , current_user.id) ```
How to write this ActiveRecord Query using Join instead of subquery in Rails 4
[ "", "sql", "ruby-on-rails", "postgresql", "activerecord", "ruby-on-rails-4", "" ]
I'm working on a project and need some help trying to get this XML query to work. I'm trying to query the XML below in SQL as a new column "Company" with the Company values. The column name is `CustomData` and is varchar(3000) so it looks like it needs to be converted to XML, unless there is another way to do this. ``` <?xml version="1.0" encoding="UTF-8"?> <ArrayOfCustomData xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <CustomData name="Company" value="WT001" /> <CustomData name="Location" value="123456" /> <CustomData name="DealerCode" value="WT" /> <CustomData name="Domain" value="WT" /> <CustomData name="Region" value="1234" /> </ArrayOfCustomData> ``` Here is the SQL code I'm using. ``` select CustomData.value('(/ArrayOfCustomData/CustomData name="Company")[1]','varchar(30)') as Company from tbl_User_Ref ``` I get this error: > "Cannot find either column "CustomData" or the user-defined function or aggregate "CustomData.value", or the name is ambiguous." Any help is appreciated. Thanks!
I managed to get it working by using the code below. Not sure why but it looks like my XML is actually UTF-16 and not UTF-8, which might've been giving me problems as well. Thanks for the help. ``` CAST(CAST(CustomData AS NTEXT) AS XML).value('(/ArrayOfCustomData/CustomData[@name="Company"]/@value)[1]','varchar(3000)') as Company ```
Your XPath is invalid, try this way instead : ``` select CustomData.value('(/ArrayOfCustomData/CustomData[@name="Company"]/@value)[1]','varchar(30)') as Company from tbl_User_Ref ``` Notice the use of `@` at the beginning of attribute name to address an XML attribute, and the use of expression within square-brackets (`[....]`) to filter XML data (to filter `<CustomData>` by value of `name` attribute to be specific). **UPDATE :** If you're using SQL Server you can use `CAST()` function to convert the column to XML type, for example : ``` select (CAST(CustomData as xml)). value('(/ArrayOfCustomData/CustomData[@name="Company"]/@value)[1]','varchar(30)') as Company from tbl_User_Ref ``` ## [SQL Fiddle demo](http://sqlfiddle.com/#!3/7d894/1) Another approach using CTE : [Select XML from varchar(max) column](https://stackoverflow.com/questions/4429568/select-xml-from-varcharmax-column)
SQL Query - Querying XML, having issues
[ "", "sql", "xml", "" ]
I have a table with hierarchical data in it; the structure goes like this: ``` ID ParentId ---- ---------- 1 NULL 2 1 3 2 4 2 5 3 6 5 ``` If I pass a node Id, I would like to get the top-most node Id/details by traversing through all its parents in SQL. I tried a CTE, but I somehow cannot get the combination correct. However, I got this working as a function, but it is so slow that I had to post this question. In the above example, if I pass 6 I would want to get the top-most node, i.e. 1, by traversing through 6 => 5 => 3 => 2 => [1] (result). Thanks in advance for your help.
``` DECLARE @id INT = 6 ;WITH parent AS ( SELECT id, parentId, 1 AS [level] from tbl WHERE id = @id UNION ALL SELECT t.id, t.parentId, [level] + 1 FROM parent INNER JOIN tbl t ON t.id = parent.parentid ) SELECT TOP 1 id FROM parent ORDER BY [level] DESC ``` @TechDo's answer assumes the lowest ID will be the parent. If you don't want to rely on this then the above query will sort by the depth.
Please try: ``` declare @id int=6 ;WITH parent AS ( SELECT id, parentId from tbl WHERE id = @id UNION ALL SELECT t.id, t.parentId FROM parent INNER JOIN tbl t ON t.id = parent.parentid ) SELECT TOP 1 id FROM parent order by id asc ```
Get Root parent of child in Hierarchical table
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "" ]
I'm confused about why this happens. I have a query that is excluding records that I want to include. The record in question has these values: ``` MEMBER_ID NAME_FIRST NAME_LAST START_DATE END_DATE PROGRAM_ID INDICATOR -------------------- -------------------- ------------------------------ ---------- --------- ---------- ------------------------- M######## BOB JOHN 01-FEB-10 30-APR-14 M90 plan ``` (changed values slightly to preserve HIPAA compliance). So the end date is clearly 30-APR-14. And yet when I run this SQL, the record does NOT come back: ``` SELECT HNO.MEMBER_ID, HNAME.NAME_FIRST, HNAME.NAME_LAST, HDATE.START_DATE, HDATE.END_DATE, HNAME.PROGRAM_ID, HDATE.INDICATOR FROM HCFA_NAME_ORG HNO INNER JOIN NAME HNAME ON HNO.NAME_ID = HNAME.NAME_ID INNER JOIN HCFA_DATE HDATE ON HNO.NAME_ID = HDATE.NAME_ID WHERE INSTR(HNO.MEMBER_ID,'M',1,1)>0 AND MEMBER_ID='M20012289' and INDICATOR='plan' AND HDATE.START_DATE <= LAST_DAY(ADD_MONTHS(SYSDATE,-2)) AND HNAME.PROGRAM_ID != 'XXX' AND (HDATE.END_DATE IS NULL OR HDATE.END_DATE>=LAST_DAY(ADD_MONTHS(SYSDATE,-2))) ``` When I comment out the last line, it does come back. Why is this? The value is clearly equal to LAST_DAY(ADD_MONTHS(SYSDATE,-2)), so why does this get excluded? Maybe I am not understanding something about Oracle date fields or how to properly query them.
Use the TRUNC() function to strip the time portion from the computed boundary date. LAST_DAY(ADD_MONTHS(SYSDATE,-2)) carries the current time of day, so a stored END_DATE at midnight on the same day fails the >= comparison; truncating the boundary to midnight lets the dates compare as you intend.
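A sketch of what that change could look like in the query's last condition (same columns as in the question): ``` AND (HDATE.END_DATE IS NULL OR HDATE.END_DATE >= TRUNC(LAST_DAY(ADD_MONTHS(SYSDATE,-2)))) ``` TRUNC sets the time portion of the boundary to midnight, so a stored END_DATE of 30-APR-14 00:00:00 now satisfies the >= comparison.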
The last line looks like: ``` AND (HDATE.END_DATE IS NULL OR HDATE.END_DATE>=LAST_DAY(ADD_MONTHS(SYSDATE,-2))) 30-APR-14-00.00:00 >= 30-APR-14-09.xx:xx (it includes the time part), ``` therefore it returns FALSE and you don't get the record. To truncate the time part, use the TRUNC() function: ``` AND (HDATE.END_DATE IS NULL OR HDATE.END_DATE>=TRUNC(LAST_DAY(ADD_MONTHS(SYSDATE,-2)))) ``` Just a small hint that there are months with 31 days :)
Need help querying Oracle date field
[ "", "sql", "oracle", "" ]
I have a `TABLE` with columns `USER_ID`, `TIMESTAMP` and `ACTION`. Every row tells me which user did what action at a certain time-stamp. **Example:** * Alice starts the application at 2014-06-12 16:37:46 * Alice stops the application at 2014-06-12 17:48:55 I want a list of users with the time difference between the first row in which they start the application and the last row in which they close it. Here is how I'm trying to do it: ``` SELECT USER_ID,DATEDIFF( (SELECT timestamp FROM MOBILE_LOG WHERE ACTION="START_APP" AND USER_ID="Alice" order by TIMESTAMP LIMIT 1), (SELECT timestamp FROM MOBILE_LOG WHERE ACTION ="CLOSE_APP" AND USER_ID="Alice" order by TIMESTAMP LIMIT 1) ) AS Duration FROM MOBILE_LOG AS t WHERE USER_ID="Alice"; ``` I ask for the DATEDIFF between two SELECT queries, but I just get a list of Alice rows with -2 as Duration. Am I on the right track?
I think you should group this table by USER_ID and find the minimum date of "START_APP" and the maximum date of "CLOSE_APP" for each user. Also, in [DATEDIFF](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_datediff) you should pass the CLOSE_APP time first and the START_APP time second; that way you will get a positive result. ``` SELECT USER_ID, DATEDIFF(MAX(CASE WHEN ACTION="CLOSE_APP" THEN timestamp END), MIN(CASE WHEN ACTION="START_APP" THEN timestamp END) ) AS Duration FROM MOBILE_LOG AS t GROUP BY USER_ID ``` `SQLFiddle demo`
``` SELECT user_id, start_time, close_time, DATEDIFF(close_time, start_time) duration FROM (SELECT MIN(timestamp) start_time, user_id FROM MOBILE_LOG WHERE action="START_APP" GROUP BY user_id) start_action JOIN (SELECT MAX(timestamp) close_time, user_id FROM MOBILE_LOG WHERE ACTION ="CLOSE_APP" GROUP BY user_id) close_action USING (user_id) WHERE USER_ID="Alice"; ``` You make two "tables" with the earliest time for start for each user, and the latest time for close for each user. Then join them so that the actions of the same user are together. Now that you have everything setup you can easily subtract between them.
MySQL Date difference between two rows
[ "", "mysql", "sql", "row", "datediff", "" ]
I have a simple table with the following: ``` TABLE Color( Color_id int IDENTITY(1,1) NOT NULL PRIMARY KEY, Coloring_type [varchar](20) NULL, ColorA int NOT NULL, ColorB int NULL, ColorC int NULL) ``` and I want to execute the following query to insert some data and return the inserted Color_id to do some stuff with in C#: ``` INSERT INTO Color OUTPUT INSERTED.Color_id VALUES ('solid' ,'red' ,'green' ,'blue') ``` However, an insert like this can create duplicates when the same values are inserted again, so I want a query that either inserts the data if it is new and outputs the inserted id, or outputs the id of the existing row if the same data is already in the table. What changes should I make to the query?
You can try using the following one: ``` IF EXISTS (SELECT Color_id FROM Color WHERE Coloring_type = 'solid' AND ColorA = 'red' AND ColorB = 'green' AND ColorC = 'blue') BEGIN SELECT Color_id FROM Color WHERE Coloring_type = 'solid' AND ColorA = 'red' AND ColorB = 'green' AND ColorC = 'blue' END ELSE BEGIN INSERT INTO Color OUTPUT INSERTED.Color_id VALUES ('solid' ,'red' ,'green' ,'blue') SELECT SCOPE_IDENTITY() --THIS WILL RETURN THE NEWLY ADDED Color_id END ```
You can check for existence of a record with an [`EXISTS`](http://msdn.microsoft.com/en-us/library/ms188336.aspx) check beforehand and then execute code based on the result. You can output the id in 2 ways, with an output parameter or the easiest way would be a simple `SELECT` and in C# you would call [`.ExecuteScalar`](http://msdn.microsoft.com/en-gb/library/system.data.sqlclient.sqlcommand.executescalar%28v=vs.110%29.aspx), to return a single value: ``` IF EXISTS (SELECT 1 FROM Color WHERE Coloring_type = 'solid' AND ColorA = 'red' AND ColorB = 'green' AND ColorC = 'blue') BEGIN -- IT EXISTS UPDATE CODE GOES HERE -- GET ID SELECT ID FROM Color WHERE Coloring_type = 'solid' AND ColorA = 'red' AND ColorB = 'green' AND ColorC = 'blue' END ELSE BEGIN -- IT DOES NOT EXIST SO INSERT CODE GOES HERE -- GET ID USING @@IDENTITY TO GET LAST INSERTED ID SELECT @@IDENTITY END ``` If doing this in a stored procedure, you would take the values in as parameters and replace the colour values with `@params`: ``` WHERE Coloring_type = @Color_type AND ColorA = @ColorA AND ColorB = @ColorB AND ColorC = @ColorC ```
SQL output the id of inserted new data or output the id if duplicated without insertion
[ "", "sql", "sql-server", "" ]
Hi, I'm trying to implement SQLCipher. I use the code from GitHub (<https://github.com/sqlcipher/android-database-sqlcipher>). I extracted the source code and imported it into Eclipse, but it's not running; I got the following exception. Can anyone please tell me what I've done wrong? ``` FATAL EXCEPTION: main java.lang.UnsatisfiedLinkError: Couldn't load stlport_shared: findLibrary returned null at java.lang.Runtime.loadLibrary(Runtime.java:365) at java.lang.System.loadLibrary(System.java:535) at net.sqlcipher.database.SQLiteDatabase.loadLibs(SQLiteDatabase.java:120) at net.sqlcipher.database.SQLiteDatabase.loadLibs(SQLiteDatabase.java:115) at example.SQLDemoActivity.onCreate(SQLDemoActivity.java:20) at android.app.Activity.performCreate(Activity.java:5020) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1080) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2148) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2211) at android.app.ActivityThread.access$600(ActivityThread.java:149) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1300) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:153) at android.app.ActivityThread.main(ActivityThread.java:4987) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:511) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:821) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:584) at dalvik.system.NativeStart.main(Native Method) ```
I found the solution: initially I forgot to add the `armeabi`, `armeabi-v7a`, and `x86` folders, which contain the .so files. In `armeabi`: ``` libdatabase_sqlcipher.so libsqlcipher_android.so libstlport_shared.so ``` In `armeabi-v7a`: ``` libdatabase_sqlcipher.so libsqlcipher_android.so libstlport_shared.so ``` In `x86`: ``` libdatabase_sqlcipher.so libsqlcipher_android.so libstlport_shared.so ``` Once I added the armeabi, armeabi-v7a, and x86 folders under libs, it worked.
Your project is not properly including the native .so files with your application. These are required for SQLCipher for Android to operate. They should be in a platform specific folder beneath the `libs` folder. As a reference, please review the project structure of the SQLCipher for Android [test suite](https://github.com/sqlcipher/sqlcipher-android-tests).
Implement sqlcipher-for-android
[ "", "android", "sql", "sqlcipher", "" ]
Given the following body of a case statement: ``` 1 WHEN r.code= '00' then 'A1' 2 WHEN r.code ='01' AND r.source = 'PXWeb' then 'A2' < 3 WHEN r.code ='0120' then 'A3' 4 WHEN r.code ='01' then 'A4' < 5 WHEN r.code ='1560' then 'A5' 6 WHEN r.code ='1530' then 'A6' 7 WHEN r.code ='1550' then 'A7' ``` I'm assuming line 2 will always execute before line 4? Then I read statements like '*SQL is a declarative language, meaning that it tells the SQL engine what to do, not how*' in [Order Of Execution of the SQL query](https://stackoverflow.com/questions/4596467/order-of-execution-of-the-query) and wonder if this also relates to the order of execution in the CASE statement. Essentially, can i leave the code above as it is without having to change line 4 to ``` 4 WHEN r.code ='01' AND r.source != 'PXWeb' then 'A4' ```
The value that is returned will be the value of the `THEN` expression for the earliest `WHEN` clause (textually) that matches. That does mean that if your line 2 conditions are met, the result will be `A2`. But, if your `THEN` expressions were more complex than just literal values, some of the work to *evaluate* those expressions may happen even when that expression is not required. E.g. ``` WHEN r.code= '00' then 'A1' WHEN r.code ='01' AND r.source = 'PXWeb' then 'A2' WHEN r.code ='0120' then 1/0 WHEN r.code ='01' then 'A4' ``` *could* generate a division by zero error even if `r.code` isn't equal to `0120`, and even if it's equal to `00`, say. I don't know what the standard has to say on this particular issue but I know that it is true of some products.
Never mind: "The CASE statement evaluates its conditions sequentially and stops with the first condition whose condition is satisfied." <http://msdn.microsoft.com/en-gb/library/ms181765.aspx>
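A toy query makes the sequential behaviour easy to see (both branches match, but only the first fires): ``` SELECT CASE WHEN 1 = 1 THEN 'first match' WHEN 1 = 1 THEN 'second match' END; -- returns 'first match' ```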
Execution order of WHEN clauses in a CASE statement
[ "", "sql", "case", "sybase", "case-when", "" ]
I have three columns with binary values and a query that builds a string based on those values. I made this work using `case`, but it's quite bulky (it's part of a bigger query), and I was wondering whether there is a better way of doing this. ``` SELECT (CASE WHEN TableA.flag1 = 1 AND TableA.flag2 = 1 AND TableA.flag3 = 1 THEN 'CGR' WHEN TableA.flag1 = 1 AND TableA.flag2 = 1 THEN 'CG' WHEN TableA.flag1 = 1 AND TableA.flag3 = 1 THEN 'CR' WHEN TableA.flag2 = 1 AND TableA.flag3 = 1 THEN 'GR' WHEN TableA.flag1 = 1 THEN 'C' WHEN TableA.flag2 = 1 THEN 'G' WHEN TableA.flag3 = 1 THEN 'R' ELSE 'nothing' END) FROM TableA ``` I'm working on an MSSQL 2000 server.
You can use `left()` instead; `left('C', 1)` returns `'C'` while `left('C', 0)` returns an empty string, so each flag contributes its letter only when it is 1. ``` select left('C', T.flag1)+ left('G', T.flag2)+ left('R', T.flag3) from TableA as T ```
I don't know if this is "better" but it is more concise: ``` SELECT ((CASE WHEN TableA.flag1 = 1 THEN 'C' ELSE '' END) + (CASE WHEN TableA.flag2 = 1 THEN 'G' ELSE '' END) + (CASE WHEN TableA.flag3 = 1 THEN 'R' ELSE '' END) ) FROM TableA; ``` Okay, this isn't exactly the same because you get `''` instead of `'nothing'`. But I think the empty string does a better job of representing "no flags" than `'nothing'` does.
Replacing CASE in SQL query
[ "", "sql", "sql-server", "case", "" ]
I have a daily sessions table with columns user\_id and date. I'd like to graph out DAU/MAU (daily active users / monthly active users) on a daily basis. For example: ``` Date MAU DAU DAU/MAU 2014-06-01 20,000 5,000 20% 2014-06-02 21,000 4,000 19% 2014-06-03 20,050 3,050 17% ... ... ... ... ``` Calculating daily active users is straightforward but calculating the monthly active users e.g. the number of users that logged in today minus 30 days, is causing problems. How is this achieved without a left join for each day? Edit: I'm using Postgres.
Assuming you have values for each day, you can get the total counts using a subquery and a window frame (`rows between`): ``` with dau as ( select date, count(userid) as dau from dailysessions ds group by date ) select date, dau, sum(dau) over (order by date rows between 29 preceding and current row) as mau from dau; ``` Unfortunately, I think you want distinct users rather than just user counts. That makes the problem much more difficult, especially because Postgres doesn't support `count(distinct)` as a window function. I think you have to do some sort of self join for this. Here is one method: ``` with dau as ( select date, count(distinct userid) as dau from dailysessions ds group by date ) select date, dau, (select count(distinct user_id) from dailysessions ds where ds.date between date - 29 * interval '1 day' and date ) as mau from dau; ```
This one uses COUNT DISTINCT to get the rolling 30 days DAU/MAU: (calculating reddit's user engagement in BigQuery - but the SQL is standard enough to be used on other databases) ``` SELECT day, dau, mau, INTEGER(100*dau/mau) daumau FROM ( SELECT day, EXACT_COUNT_DISTINCT(author) dau, FIRST(mau) mau FROM ( SELECT DATE(SEC_TO_TIMESTAMP(created_utc)) day, author FROM [fh-bigquery:reddit_comments.2015_09] WHERE subreddit='AskReddit') a JOIN ( SELECT stopday, EXACT_COUNT_DISTINCT(author) mau FROM (SELECT created_utc, subreddit, author FROM [fh-bigquery:reddit_comments.2015_09], [fh-bigquery:reddit_comments.2015_08]) a CROSS JOIN ( SELECT DATE(SEC_TO_TIMESTAMP(created_utc)) stopday FROM [fh-bigquery:reddit_comments.2015_09] GROUP BY 1 ) b WHERE subreddit='AskReddit' AND SEC_TO_TIMESTAMP(created_utc) BETWEEN DATE_ADD(stopday, -30, 'day') AND TIMESTAMP(stopday) GROUP BY 1 ) b ON a.day=b.stopday GROUP BY 1 ) ORDER BY 1 ``` I went further at [How to calculate DAU/MAU with BigQuery (engagement)](https://stackoverflow.com/questions/33226570/how-to-calculate-dau-mau-with-bigquery/33226571)
Querying DAU/MAU over time (daily)
[ "", "sql", "postgresql", "" ]
I've created a query. In that query, there are some tables which are joined to another table. I'm directly getting the values of the columns ContactId, FirstName, LastName, CreatedOn. For the paymenttype column, I'm parsing the value inside P.new_name (the values are something like 'Credit Card - 2014-06-29'). There are 2 types of values for paymenttype: 'Credit Card' and 'Miles Point'. My main purpose in this query is getting the sum of the profits of flight reservations per contact; that's why I'm grouping the tables by all columns except the totalamount column to make the aggregate function work. I'm grouping the table by paymenttype because I have to, otherwise it gives an error as expected. The problem is that if a contact made reservations with two types of payment, the contactid is displayed in two rows. I want to make it display in one row, getting the sum as usual in the query and writing 'all' in the paymenttype column. Is that possible? If it is, any idea about how I can do that? Also I should mention that I'll use this query in SSRS, so any idea to solve the problem on the SSRS side would help me too. ``` Select C.ContactId,C.FirstName,C.LastName,C.CreatedOn, LTRIM(RTRIM(left(P.new_name ,(CHARINDEX('-',P.new_name)-1)))) as paymenttype, SUM(F.new_totalamounttl) as totalamount From Contact C left join SalesOrder S on C.ContactId=S.ContactId left join new_flightreservation F on S.SalesOrderId=F.new_salesorderid left join new_payment P on F.new_paymentid=P.new_paymentId where (LTRIM(RTRIM(left(P.new_name ,(CHARINDEX('-',P.new_name)-1))))) IN ( CASE @paymenttype WHEN 'all' THEN (LTRIM(RTRIM(left(P.new_name ,(CHARINDEX('-',P.new_name)-1))))) ELSE @paymenttype END ) GROUP BY C.ContactId,C.FirstName,C.LastName,C.CreatedOn, (LTRIM(RTRIM(left(P.new_name ,(CHARINDEX('-',P.new_name)-1))))) ```
Drawing on the ideas of [my previous (and hackish) answer](https://stackoverflow.com/a/24512477/1535706) , I could devise the following (even more hackish, though more terse) solution: ``` SELECT C.ContactId,C.FirstName,C.LastName,C.CreatedOn, CASE WHEN ( COUNT(CASE x.paymenttype WHEN 'Credit Card' THEN 1 ELSE NULL END) * COUNT(CASE x.paymenttype WHEN 'Miles Point' THEN 1 ELSE NULL END) ) > 0 THEN 'all' WHEN COUNT(CASE x.paymenttype WHEN 'Credit Card' THEN 1 ELSE NULL END) > 0 THEN 'Credit Card' ELSE 'Miles Point' END AS paymenttype, SUM(F.new_totalamounttl) AS totalamount FROM Contact C left join SalesOrder S on C.ContactId=S.ContactId left join new_flightreservation F on S.SalesOrderId=F.new_salesorderid left join ( Select P.new_paymentId, LTRIM(RTRIM(left(P.new_name ,(CHARINDEX('-',P.new_name)-1)))) as paymenttype, From new_payment P ) AS x on F.new_paymentid=x.new_paymentId WHERE @paymenttype = x.paymenttype OR @paymenttype = 'all' -- alternatively, for the RDBMS that allows it: -- @paymenttype IN (x.paymenttype, 'all') GROUP BY C.ContactId,C.FirstName,C.LastName,C.CreatedOn ``` **Edit:** simplified the query somewhat to avoid one extra anonymous view
I think by splitting it into two parts, you can get a fairly elegant solution. The first part deals with the PaymentType and the second does the grouping/summing based on that. ``` SELECT ContactId ,FirstName ,LastName ,CreatedOn ,SUM(subamount) totalamount ,paymenttype FROM ( Select C.ContactId ,C.FirstName ,C.LastName ,C.CreatedOn ,@paymenttype as paymenttype ,CASE WHEN @paymenttype = β€˜all’ THEN F.new_totalamounttl WHEN @paymenttype = LTRIM(RTRIM(LEFT(P.new_name ,(CHARINDEX('-',P.new_name)-1))))) THEN F.new_totalamounttl ELSE 0 END as subamount From Contact C left join SalesOrder S on C.ContactId=S.ContactId left join new_flightreservation F on S.SalesOrderId=F.new_salesorderid left join new_payment P on F.new_paymentid=P.new_paymentId ) sub GROUP BY ContactId ,FirstName ,LastName ,CreatedOn ,paymenttype ```
SQL How To Control The Grouping Column
[ "", "sql", "" ]
Good morning my beloved sql wizards and sorcerers, I want to substitute on 3 columns of data across 3 tables. Currently I am using the NVL function, however that is restricted to two arguments. See below for an example: ``` SELECT ccc.case_id, NVL (ccvl.descr, ccc.char) char_val FROM case_char ccc, char_value ccvl, lookup_value lval1 WHERE ccvl.descr(+) = ccc.value AND ccc.value = lval1.descr (+) AND ccc.case_id IN ('123') case_char table case_id|char |value 123 |email| work_email 124 |issue| tim_ char_value table char | descr work_email | complaint mail tim_ | timeliness lookup_value table descr | descrlong work_email| xxx@blah.com ``` Essentially what I am trying to do is: if there exists a match for case_char.value with lookup_value.descr then display it; if not, then if there exists a match with case_char.value and char_value.char then display it. I am just trying to return the description for 'issue' from the char_value table, but for 'email' I want to return the descrlong from the lookup_value table (all under the same alias 'char_val'). So my question is, how do I achieve this, keeping in mind that I want them to appear under the same alias? Let me know if you require any further information. Thanks guys
You could nest NVL: ``` NVL(a, NVL(b, NVL(c, d))) ``` But even better, use the SQL-standard [COALESCE](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm), which does take multiple arguments and also works on non-Oracle systems: ``` COALESCE(a, b, c, d) ```
How about using [**COALESCE**](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm#SQLRF00617): ``` COALESCE(ccvl.descr, ccc.char) ```
Using NVL for multiple columns - Oracle SQL
[ "", "sql", "oracle", "coalesce", "nvl", "" ]
Intention: detect whether a numeric sequence contains gaps. No need to identify the missing elements, just flag (true / false) the sequence if it contains gaps. ``` CREATE TABLE foo(x INTEGER); INSERT INTO foo(x) VALUES (1), (2), (4); ``` Below is my (apparently correctly functioning) query to detect gaps: ``` WITH cte AS (SELECT DISTINCT x FROM foo) SELECT ( (SELECT COUNT(*) FROM cte a CROSS JOIN cte b WHERE b.x=a.x-1) =(SELECT COUNT(*)-1 FROM cte)) OR (NOT EXISTS (SELECT 1 FROM cte)) ``` where the `OR` is needed for the edge case where the table is empty. The query's logic is based on the observation that in a contiguous sequence the number of links equals the number of elements minus 1. Anything more idiomatic or performant (should I be worried by the CROSS JOIN in particularly long sequences?)
Try this: ``` SELECT CASE WHEN ((MAX(x)-MIN(x)+1 = COUNT(DISTINCT X)) OR (COUNT(DISTINCT X) = 0) ) THEN 'TRUE' ELSE 'FALSE' END FROM foo ``` `SQLFiddle demo`
The following should detect whether or not there are gaps: ``` select (case when max(x) - min(x) + 1 = count(distinct x) then 'No Gaps' else 'Some Gaps' end) from foo; ``` If there are no gaps or duplicates, then the number of *distinct* values of x is the max minus the min plus 1.
detect gaps in integer sequence
[ "", "sql", "postgresql", "" ]
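The max/min/count identity behind both answers can be checked with a short script; this sketch uses Python's built-in `sqlite3` (the logic is engine-agnostic, and the empty-table edge case falls out for free because the aggregates return NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo(x INTEGER)")
conn.executemany("INSERT INTO foo(x) VALUES (?)", [(1,), (2,), (4,)])

def has_gaps(conn):
    # A contiguous, duplicate-free run has exactly MAX - MIN + 1 distinct
    # values. On an empty table the expression is NULL, mapped to False.
    row = conn.execute(
        "SELECT MAX(x) - MIN(x) + 1 <> COUNT(DISTINCT x) FROM foo"
    ).fetchone()
    return bool(row[0])

gap_before = has_gaps(conn)                    # True: 3 is missing from 1,2,4
conn.execute("INSERT INTO foo(x) VALUES (3)")
gap_after = has_gaps(conn)                     # False: 1..4 is contiguous
print(gap_before, gap_after)
```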
I want to get values which are not present in the table ,by considering the values only in the IN clause . For example , ``` select orderNumber from order where orderNumber in (List of orderNumbers) ``` This will give the orderNumbers which are present in the table by considering the values present in the IN clause , likewise I want the orderNumbers which are not present in the table by considering the values present in the IN clause . Is there any query logic present to handle this situation?
You can do this with a `left outer join` and a `where` clause: ``` select ordn.* from (select 1 as OrderNumber from dual union all select 2 from dual . . . select n from dual ) ordn left outer join orders o on o.ordernumber = ordn.ordernumber where o.ordernumber is NULL; ```
I think what you are looking for is the `MINUS` keyword. This works without an `IN` clause though, but you should be able to make this work for your usecase. This gives all ordernumbers in `in_clause_table` that are not in `order`. ``` SELECT ordernumbers FROM in_clause_table MINUS SELECT orderNumber FROM order; ``` Note: incidentally, there are many ways to achieve this, but I think this might be most concise one.
Get values from a table which are not present only in the 'IN' clause
[ "", "mysql", "sql", "oracle", "plsql", "" ]
This is my current select on my table: ``` SELECT count(*) as Titel, KuenstlerName FROM Bild GROUP BY KuenstlerName; +-------+---------------+ | Titel | KuenstlerName | +-------+---------------+ | 3 | Klee | | 1 | Klingt | | 1 | Marc | | 6 | Picasso | | 1 | van Gogh | +-------+---------------+ ``` Is there a way to add a new row/column, or to just display a message, which returns the total of the 'Titel' column? **For Example:** ``` +-------+---------------+ | Titel | KuenstlerName | +-------+---------------+ | 3 | Klee | | 1 | Klingt | | 1 | Marc | | 6 | Picasso | | 1 | van Gogh | | 12 | Total | +-------+---------------+ +-------+---------------+-----+ | Titel | KuenstlerName |Total| +-------+---------------+-----+ | 3 | Klee | 12 | | 1 | Klingt | | | 1 | Marc | | | 6 | Picasso | | | 1 | van Gogh | | +-------+---------------+-----+ ``` **or:** ``` +-------+---------------+ | Titel | KuenstlerName | +-------+---------------+ | 3 | Klee | | 1 | Klingt | | 1 | Marc | | 6 | Picasso | | 1 | van Gogh | +-------+---------------+ Total: 12 ``` *Thanks for your help!*
Probably not the most elegant solution but this would work: ``` SELECT count(*) as Titel, KuenstlerName FROM Bild GROUP BY KuenstlerName UNION ALL SELECT count(*) as Titel, 'Total' AS KuenstlerName FROM Bild ``` (someone jumped in with virtually the same solution while I was typing, UNION ALL should be marginally faster than UNION alone though).
Try this: `SELECT sum(titel),coalesce(KuenstlerName,'Total') FROM testdb.titel GROUP BY KuenstlerName with rollup;` Thanks. Added screenshot of result. ![enter image description here](https://i.stack.imgur.com/l9Gxy.png)
SQL: Count column and show the total result
[ "", "mysql", "sql", "select", "count", "sum", "" ]
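The `UNION ALL` pattern from the accepted answer is portable; here is a small sketch of it on Python's built-in `sqlite3` (where MySQL's `WITH ROLLUP` is unavailable), with the question's row counts reproduced as sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Bild(KuenstlerName TEXT)")
conn.executemany(
    "INSERT INTO Bild(KuenstlerName) VALUES (?)",
    [("Klee",)] * 3 + [("Klingt",)] + [("Marc",)]
    + [("Picasso",)] * 6 + [("van Gogh",)],
)
# Per-artist counts plus one extra 'Total' row appended via UNION ALL.
rows = conn.execute("""
    SELECT COUNT(*) AS Titel, KuenstlerName FROM Bild GROUP BY KuenstlerName
    UNION ALL
    SELECT COUNT(*), 'Total' FROM Bild
""").fetchall()
print(rows)
```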
I have these tables I want to create for a Grooming Shop: ``` CREATE TABLE Pets( Id int NOT NULL PRIMARY KEY IDENTITY ,Name varchar(20) ,Breed varchar(35) ,[Weight] decimal (10,2) NOT NULL ,Cat bit NOT NULL ,Dog bit NOT NULL ) CREATE TABLE UserInfo( Id int NOT NULL PRIMARY KEY IDENTITY ,FirstName varchar(15) NOT NULL ,LastName varchar(30) NOT NULL ,PhoneNumber varchar(10) NOT NULL ,EmailAddress varchar(30) NOT NULL ,AddressId int NOT NULL FOREIGN KEY REFERENCES [Address](Id)--Address Table already created ,PetId int NOT NULL FOREIGN KEY REFERENCES Pets(Id) ) CREATE TABLE Appointments( Id int NOT NULL PRIMARY KEY IDENTITY ,[Date] date NOT NULL ,UserInfoId int NOT NULL FOREIGN KEY REFERENCES UserInfo(Id) ,PetId int NOT NULL FOREIGN KEY REFERENCES Pets(Id)--? ) ``` My Appointments table will have the UserInfo but should it also have the Pets info? There can be more than 1 pet for each appointment, and more than one appointment for each pet... I wanted the UserInfo table to point to the Pets table, because each user will have at least one pet.
Since each user will have one or more pets and a pet presumably has only one owner, I suggest you add UserInfoId to the Pets table and remove PetId from the UserInfo table. I also suggest you remove PetId from the Appointment table and create a separate table for the pets included in the appointment. So the appointment is actually with the owner, who may bring one or more pets. ``` CREATE TABLE UserInfo( Id int NOT NULL PRIMARY KEY IDENTITY ,FirstName varchar(15) NOT NULL ,LastName varchar(30) NOT NULL ,PhoneNumber varchar(10) NOT NULL ,EmailAddress varchar(30) NOT NULL ,AddressId int NOT NULL FOREIGN KEY REFERENCES [Address](Id) ); CREATE TABLE Pets( Id int NOT NULL PRIMARY KEY IDENTITY ,UserInfoId int NOT NULL FOREIGN KEY REFERENCES UserInfo(Id) ,Name varchar(20) ,Breed varchar(35) ,[Weight] decimal (10,2) NOT NULL ,Cat bit NOT NULL ,Dog bit NOT NULL ); CREATE TABLE Appointments( Id int NOT NULL PRIMARY KEY IDENTITY ,[Date] date NOT NULL ,UserInfoId int NOT NULL FOREIGN KEY REFERENCES UserInfo(Id) ); CREATE TABLE AppointmentPets( AppointmentId int NOT NULL ,PetId int NOT NULL CONSTRAINT PK_AppointmentPets PRIMARY KEY(AppointmentId, PetId) ); ```
No, you needn't have pets info in appointment table because the information related to pet can be retrieved from the UserInfo table itself.
sql server relationships between 3 tables
[ "", "sql", "sql-server", "" ]
I have the following tables in my database A table with users: ``` --------------------- | userId | username | --------------------- | 1 | john doe | | 2 | jane doe | --------------------- ``` A list of "interests" ``` --------------------- | intId | interest | --------------------- | 1 | books | | 2 | cars | --------------------- ``` And a table in which I save what interests a user has ``` -------------------- | userId | intId | -------------------- | 1 | 1 | | 1 | 2 | | 2 | 1 | -------------------- ``` Now I also have a web page where I can select one or more interests. Based on that I want to retrieve the users that have those interests. So suppose I select "books" and "cars". Then I should get two records back, "jonh doe" and "jane doe". But the thing is that I'm not really sure how to create a query for this. I could do a simple `LEFT JOIN`. But then I would get 3 records back. Since "john doe" is interested in "books" AND "cars". But I want each user to appear only once in the result. So how should I create this query?
Something like this should do it with a `DISTINCT` if you're working with interest names: ``` SELECT DISTINCT u.UserName FROM Users u INNER JOIN UserInterests ui ON ui.userId = u.userId INNER JOIN Interest i on i.intId = ui.intId WHERE i.Interest in ('books', 'cars') ``` If you want to do it based on interest IDs: ``` SELECT DISTINCT u.UserName FROM Users u INNER JOIN UserInterests ui ON ui.userId = u.userId WHERE ui.intId in (1,2) ```
``` SELECT *, STUFF((SELECT DISTINCT ','+u.username FROM dbo.UserInterests ui INNER JOIN dbo.[user] u ON u.userid = ui.UserId WHERE ui.IntId=i.intId FOR XML PATH('')),1,1,'') FROM dbo.Interests i ```
Find interests of a user and let it appear only once in the result set
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
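The accepted query can be exercised end to end with the question's sample data; this sketch runs it on Python's built-in `sqlite3`, where the `DISTINCT` is what keeps john doe from appearing once per matching interest:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users(userId INTEGER, username TEXT);
    CREATE TABLE Interests(intId INTEGER, interest TEXT);
    CREATE TABLE UserInterests(userId INTEGER, intId INTEGER);
    INSERT INTO Users VALUES (1, 'john doe'), (2, 'jane doe');
    INSERT INTO Interests VALUES (1, 'books'), (2, 'cars');
    INSERT INTO UserInterests VALUES (1, 1), (1, 2), (2, 1);
""")
names = [r[0] for r in conn.execute("""
    SELECT DISTINCT u.username
    FROM Users u
    JOIN UserInterests ui ON ui.userId = u.userId
    JOIN Interests i ON i.intId = ui.intId
    WHERE i.interest IN ('books', 'cars')
    ORDER BY u.username
""")]
print(names)  # ['jane doe', 'john doe'] -- each user appears once
```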
I need to convert datetime to ddmmyyyy as string/varchar in sql server. Let me repeat, not dd/mm/yyyy. I have searched and found only related to dd/mm/yyyy ``` Datetime VARCHAR '02-07-2014' '02072014' ```
`Select Replace(CONVERT(VARCHAR(20), GETDATE(), 103),'/','')` OR `select substring(CONVERT(VARCHAR(10),GETDATE(),103),1,2) + substring(CONVERT(VARCHAR(10),GETDATE(),103),4,2) + substring(CONVERT(VARCHAR(10),GETDATE(),103),7,4)` does the same. Input: `02-07-2014` This will give output: `02072014`
**SQL Code:** ``` Replace(CONVERT(VARCHAR(20), GETDATE(), 103),'/','') ```
Convert datetime to ddmmyyyy in SQL server
[ "", "sql", "sql-server", "sql-server-2008", "datetime", "sql-server-2008-r2", "" ]
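If the reformatting can happen in the application layer instead, the same `ddmmyyyy` rendering is a one-liner in most languages; in Python, for example, `%d%m%Y` is the `strftime` equivalent of style 103 with the slashes stripped:

```python
from datetime import datetime

d = datetime(2014, 7, 2)
compact = d.strftime("%d%m%Y")  # day, month, year with no separators
print(compact)  # 02072014
```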
Hi, I'm using this command to left join a table with another table that has a few more records, but it keeps printing NULL. I want that NULL to become 0. ``` SELECT * FROM TABLE1 Left JOIN Table2 ON TABLE1.ID=Table2.IDRel UPDATE Table2 SET IDRel = 0 where IDRel = NULL ```
The update should not be needed. As you said, Table2 `has a little more records`, and that's the key thing here: for any row from Table1 for which no matching `IDRel` value could be found, that column (`IDRel`) will stay `NULL`. Now you could for example use `ISNULL(IDRel, 0)` to replace the null value with 0, but maybe an `INNER JOIN` instead of the `LEFT JOIN` could get you the right result by throwing out all rows that could not be matched... However... If you really were to update that column, it would only work if you did the correct comparison against `NULL` (that is `IS`, not `=`); that would mean changing your update query into: `UPDATE Table2 SET IDRel = 0 where IDRel IS NULL`
IS the update even needed? ``` SELECT T1.*, coalesce(t2.IDREL,0) as IDREL FROM TABLE1 T1 Left JOIN Table2 ON TABLE1.ID=Table2.IDRel ```
Can't update from NULL to 0
[ "", "sql", "sql-server", "database", "sql-server-2008", "dbo", "" ]
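The root cause is SQL's three-valued logic: any comparison with `NULL`, including `IDRel = NULL`, evaluates to unknown, so the `UPDATE`'s `WHERE` clause matches no rows. A small sketch on Python's built-in `sqlite3`, which follows the same rule as SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(IDRel INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(None,), (7,)])

# `= NULL` is never true (unknown filters the row out); `IS NULL` works.
eq_matches = conn.execute(
    "SELECT COUNT(*) FROM t WHERE IDRel = NULL").fetchone()[0]
is_matches = conn.execute(
    "SELECT COUNT(*) FROM t WHERE IDRel IS NULL").fetchone()[0]
print(eq_matches, is_matches)  # 0 1
```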
I have a table `playday` with the following columns: date\_played, winner, loser, with the following values: ``` (Jun-03-14, USA, China) (Jun-05-14, USA, Russia) (Jun-06-14, France, Germany) . . . . (Jun-09-14, USA, Russia) ``` I need to obtain all instances where USA has won exactly 3 times in a row. I tried the following query: ``` Select date, winner, loser, RANK() OVER (PARTITION BY winner ORDER BY date rows 2 preceding) as rank from playday; ```
You can use the following query. ``` select winner,loser,date,cnt from (select winner, loser, date, date - lag(date,3) over ( order by date) as cnt from playday) where cnt >=3 ```
first you need to find out when was the last time they lost. second count the number of wins, greater than (>) the date of the last time they lost. third return all rows greater than last loss, if count > 3. sorry, don't have an SQL parser in front of me to put it in code properly. ``` Set @team_name = "USA"; select date, winner, loser from playday where (select count(*) as wins_since_loss from playday where playday.winner = @team_name and playday.date > (select max(date) as losing_date from playday where playday.loser = @team_name)) = 3 ```
SQL - Row Sequencing
[ "", "sql", "analytical", "" ]
I have the following data in a table. The number of values in each row can vary and the number of rows could also vary. The table has 1 column with csv formatted values. The values will always be numeric ``` Data 1,2 4 5,12, 10 6,7,8,9,10 15,17 ``` I would like to end up with a temp table with the following ``` Data Lowest Highest 1,2 1 2 4 4 4 5,12, 10 5 12 6,7,8,9,10 6 10 15,17 15 17 ``` Can anyone help with writing a sql query or function to achieve this
Instead of a function, you can achieve it like this: ``` ;WITH tmp AS (SELECT A.rn,split.a.value('.', 'VARCHAR(100)') AS String FROM (SELECT Row_number() OVER(ORDER BY (SELECT NULL)) AS RN, Cast ('<M>' + Replace([data], ',', '</M><M>') + '</M>' AS XML) AS String FROM table1) AS A CROSS apply string.nodes ('/M') AS Split(a)) SELECT X.data,Tmp.lower,Tmp.higher FROM (SELECT rn,Min(Cast(string AS INT)) AS Lower,Max(Cast(string AS INT)) AS Higher FROM tmp GROUP BY rn) Tmp JOIN (SELECT Row_number() OVER(ORDER BY (SELECT NULL)) AS RN1,data FROM table1) X ON X.rn1 = Tmp.rn ``` **[FIDDLE DEMO](http://sqlfiddle.com/#!6/b54d0/22)** *Output would be:* --- ``` Data Lower Higher 1,2 1 2 4 4 4 5,12, 10 5 12 6,7,8,9,10 6 10 15,17 15 17 ```
First create a user defined function to convert each row of 'DATA' column to a intermediate table as: ``` /****** Object: UserDefinedFunction [dbo].[CSVToTable]******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [dbo].[CSVToTable] (@InStr VARCHAR(MAX)) RETURNS @TempTab TABLE (id int not null) AS BEGIN ;-- Ensure input ends with comma SET @InStr = REPLACE(@InStr + ',', ',,', ',') DECLARE @SP INT DECLARE @VALUE VARCHAR(1000) WHILE PATINDEX('%,%', @INSTR ) <> 0 BEGIN SELECT @SP = PATINDEX('%,%',@INSTR) SELECT @VALUE = LEFT(@INSTR , @SP - 1) SELECT @INSTR = STUFF(@INSTR, 1, @SP, '') INSERT INTO @TempTab(id) VALUES (@VALUE) END RETURN END GO ``` Function is explained further [here](http://www.codeproject.com/Tips/584680/Using-comma-separated-value-parameter-strings-in-S). Then Using `Cross Apply` we can get the desired output as: ``` With CTE as ( select T.Data, Min(udf.Id) as [Lowest],Max(udf.Id) as [Highest] from Test T CROSS APPLY dbo.CSVToTable(T.Data) udf Group By Data ) Select * from CTE ``` [Sample Code here...](http://rextester.com/TEM85821) What a `Cross Apply` does is : it applies the right table expression to each row from the left table and produces a result table with the unified result sets.
SQL Query for Min and Max Values
[ "", "sql", "sql-server", "" ]
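As a sanity check for either SQL splitter, the same per-row min/max can be computed in a few lines of Python (assuming, as in the sample data, comma-separated integers with optional whitespace):

```python
# Split each CSV row, parse the pieces as integers, and record the extremes.
# int() tolerates the stray spaces in values like "5,12, 10".
rows = ["1,2", "4", "5,12, 10", "6,7,8,9,10", "15,17"]
result = []
for data in rows:
    nums = [int(v) for v in data.split(",")]
    result.append((data, min(nums), max(nums)))
print(result)
```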
I am building a small query to find all CustomerNumbers where all of their policies are in a certain status (terminated). Here is the query I am working on ``` select a.cn ,p.pn , tp = COUNT(p.pn) , tp2 = SUM(case when p.status = 4 then 1 else 0 end) from ( select cn, cn2 from bc union select cn, cn2= fn from ic ) as a left join p as p on a.cn = p.cn group by a.cn, pn ``` My issue is when I add the clause: ``` WHERE cn = tp ``` It says the columns are invalid. Am I missing something incredibly obvious?
You can't use aliases at the same level of a query. The reason is that the `where` clause is *logically* evaluated before the `select`, so the aliases defined in the `select` are not available in the `where`. A typical solution is to repeat the expression (other answers) or use a subquery or cte: ``` with cte as ( <your query here> ) select cte.* from cte where TotalPolicies = TermedPolicies; ``` However, in your case, you have an easier solution, because you have an aggregation query. So just use: ``` having TotalPolicies = TermedPolicies ```
You cannot use the aliased aggregate column names in the where clause. You have to use the expression itself instead. Also you cannot use it as where cluase, but use it in the having clause ``` HAVING COUNT(p.PolicyNumber) = SUM(case when p.status = 4 then 1 else 0 end) ```
How do I compare SUM and COUNT()s in SQL?
[ "", "sql", "" ]
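The `HAVING` fix can be seen working on a toy version of the policies table; this sketch uses Python's built-in `sqlite3` with invented sample rows (customer 1 has only terminated policies, customer 2 does not):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE p(cn INTEGER, status INTEGER)")
conn.executemany("INSERT INTO p VALUES (?, ?)",
                 [(1, 4), (1, 4), (2, 4), (2, 1)])
# Aggregates can be compared directly in HAVING; no alias is needed,
# because HAVING is evaluated after GROUP BY, unlike WHERE.
fully_termed = [r[0] for r in conn.execute("""
    SELECT cn
    FROM p
    GROUP BY cn
    HAVING COUNT(*) = SUM(CASE WHEN status = 4 THEN 1 ELSE 0 END)
""")]
print(fully_termed)  # [1] -- customer 2 still has an active policy
```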
Hey all, I have the following query: ``` SELECT LTRIM(STR(CAST(CAST('7/1/2014 9:07:00 AM' as datetime) AS float), 20, 10)) ContactDateTime ``` This query produces this number: **41819.3798611111** Now what I need to do is convert that back to the normal **'7/1/2014 9:07:00 AM'** date (if that's even possible) that it was. However, I am not sure how to go about doing that in MS SQL. Any help would be appreciated!
Here's a method for rounding the DateTime to the nearest second: ``` DECLARE @Float FLOAT SET @Float = 41819.3798611111 SELECT DATEADD(ms, 500 - DATEPART(ms, CAST(@Float AS DATETIME) + '00:00:00.500'), CAST(@Float AS DATETIME)) ``` `2014-07-01 09:07:00.000`
SELECT CAST(41819.3798611111 AS DATETIME)
SQL date to a float then back to a date
[ "", "sql", "sql-server", "date", "casting", "floating-point", "" ]
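The round trip can also be reasoned about directly: SQL Server's datetime-to-float cast is days since 1900-01-01, so the reverse is plain date arithmetic. A Python sketch, rounding to the nearest second the same way the answer's 500 ms trick does:

```python
from datetime import datetime, timedelta

BASE = datetime(1900, 1, 1)  # day zero of SQL Server's datetime type
f = 41819.3798611111
dt = BASE + timedelta(days=f)
# Add half a second, then truncate microseconds, to absorb float noise.
dt = (dt + timedelta(milliseconds=500)).replace(microsecond=0)
print(dt)  # 2014-07-01 09:07:00
```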
I have altered my stored procedure by mistake and I have no backup of the database. Is there any way of recovering my work?
Try running this. It gets a list of the queries that are in the plan cache, it may be buried in there. ``` SELECT execquery.last_execution_time AS [Date Time], execsql.text AS [Script] FROM sys.dm_exec_query_stats AS execquery CROSS APPLY sys.dm_exec_sql_text(execquery.sql_handle) AS execsql ORDER BY execquery.last_execution_time DESC ```
No, there isn't. The stored procedures are part of the database. Take this as an opportunity to learn to do backups.
Is there any way to recover a stored procedure overwritten with an ALTER statement?
[ "", "sql", "sql-server", "database", "sql-server-2008", "" ]
I'm trying to get data from one table, so I join it like so: ``` SELECT A.ACCT_NO FROM ACCOUNT AS A RIGHT OUTER JOIN CUSTOMER AS C ON A.CUST_ID = C.CUST_ID WHERE C.FIRST_NM = 'ADAM'; ``` The problem right now is that I'm somewhat confused with regards to how the joins are functioning. If I change it to: ``` LEFT OUTER JOIN CUSTOMER AS C ``` Or even: ``` INNER JOIN CUSTOMER AS C ``` I get the same exact data! I'm following this chart right here: <https://i.stack.imgur.com/1UKp7.png> Based on that, I assumed that only the RIGHT OUTER JOIN would work because it is only pulling the columns from CUSTOMER. However, that doesn't seem to be the case, as both outer joins work. In addition, I was shocked to see INNER JOIN work as well. Why is this? Or is it because I'm missing key concepts with regards to join in SQL? Thanks!
When using outer joins, the limiting criteria on the table where you only want matching records MUST be placed on the join OR you have to use an or condition to handle NULL. If you don't, you will get the same results as an INNER Join. It has to do with when the join occurs vs when the where clause criteria is applied. When the limiting criteria is on the join, it occurs before the join, in the where clause, after. When in the where clause, the NULL values generated by the outer join are then excluded... by the where clause... making it look like an inner join. Since I think you want all accounts but only customer information where first name is Adam, you need a left join but move the where clause criteria to the join. **PREFERRED:** ``` SELECT A.ACCT_NO FROM ACCOUNT AS A LEFT OUTER JOIN CUSTOMER AS C ON A.CUST_ID = C.CUST_ID AND C.FIRST_NM = 'ADAM'; ``` OR If you must ``` SELECT A.ACCT_NO FROM ACCOUNT AS A LEFT OUTER JOIN CUSTOMER AS C ON A.CUST_ID = C.CUST_ID WHERE (C.FIRST_NM = 'ADAM' OR c.First_NM is null) ``` but c.First\_NM must not be nullable or you can't distinguish between the null on the join vs the null on the record. <http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/> for help on joins. ORDER of the tables matter... * If you want all customers and only those accounts which match a customer then use a `RIGHT JOIN` as you have listed. * If you want all accounts and only those customers having a related account, then use a `LEFT JOIN` * If you want all accounts and all customers use a `FULL OUTER JOIN` * If you only want customers with accounts use an `INNER JOIN`
This query will return nulls for the values that are not in Account table and all the records from the Customer table ``` SELECT * FROM ACCOUNT AS A RIGHT OUTER JOIN CUSTOMER AS C ON A.CUST_ID = C.CUST_ID ``` This query will return nulls for the values that are not in Customer table and all the records from the Account table ``` SELECT * FROM ACCOUNT AS A LEFT JOIN CUSTOMER AS C ON A.CUST_ID = C.CUST_ID ``` This query will not return rows that do not have equivalents in customer table so only rows from accounts that have customers. ``` SELECT * FROM ACCOUNT AS A INNER JOIN CUSTOMER AS C ON A.CUST_ID = C.CUST_ID ```
Inner Join, and Outer Join confusion
[ "", "sql", "join", "" ]
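The ON-versus-WHERE distinction in miniature: filtering the outer table in the `WHERE` clause discards the NULL-extended rows and collapses a `LEFT JOIN` back into an `INNER JOIN`. A sketch on Python's built-in `sqlite3` with invented two-row tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account(acct_no INTEGER, cust_id INTEGER);
    CREATE TABLE customer(cust_id INTEGER, first_nm TEXT);
    INSERT INTO account VALUES (10, 1), (20, 2);
    INSERT INTO customer VALUES (1, 'ADAM'), (2, 'EVE');
""")
# Filter in WHERE: NULL-extended rows fail the test, so this acts inner.
in_where = conn.execute("""
    SELECT COUNT(*) FROM account a
    LEFT JOIN customer c ON c.cust_id = a.cust_id
    WHERE c.first_nm = 'ADAM'
""").fetchone()[0]
# Filter in ON: all accounts survive; non-ADAM customers become NULLs.
in_on = conn.execute("""
    SELECT COUNT(*) FROM account a
    LEFT JOIN customer c ON c.cust_id = a.cust_id AND c.first_nm = 'ADAM'
""").fetchone()[0]
print(in_where, in_on)  # 1 2
```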
I have the following SQL query: ``` SELECT v.PassNo, VName, e.eName, d.dtName, v.EntryTime FROM Visitorlogo_tbl v LEFT JOIN EmployeeMaster_tbl e ON v.empid=e.eId JOIN DepartmentMaster_tbl d ON v.Deptid=d.dtId WHERE v.EntryTime >= '2014-06-29' ``` And I'm getting the following output: ``` PassNo VName eName dtName EntryTime 100 jaseem null admin 2014-06-29 23:17:47.257 101 deepu sabu sales 2014-06-29 24:17:47.257 103 rabeeh null IT 2014-06-30 23:17:47.257 ``` Because I want to concatenate **`ename`** and **`dtname`**, I wrote the query like this: ``` SELECT v.PassNo AS 'Badge Id', VName AS VisitorName, e.eName + ' ('+ d.dtName+')' AS 'Name(Department)', v.EntryTime FROM Visitorlogo_tbl v JOIN DepartmentMaster_tbl d ON v.Deptid = d.dtId LEFT JOIN EmployeeMaster_tbl e ON v.empid = e.eId WHERE v.EntryTime >= '2014-06-29' ``` However, this leads to a mismatch in the output: ``` Badge Id VisitorName Name(Department) EntryTime 100 jaseem null 2014-06-29 23:17:47.257 101 deepu sabu 2014-06-29 24:17:47.257 103 rabeeh null 2014-06-30 23:17:47.257 ``` Why am I seeing `NULL` instead of the department name for badge id's `100` and `103`?
Concatenating strings, where one of your strings are `NULL` will always yield `NULL`. For example: `SELECT NULL + 'Moo'` will result in `NULL` - always. `ISNULL()` accepts 2 arguments and will return the second when the first resolves to `NULL` - [BOL](http://msdn.microsoft.com/en-us/library/ms184325.aspx) Do you mean to do this? ``` SELECT [Badge Id] = v.PassNo ,VisitorName = VName ,[Name(Department)] = ISNULL(e.eName, '') + ISNULL(' (' + d.dtName + ')', '') ,v.EntryTime FROM Visitorlogo_tbl V JOIN DepartmentMaster_tbl D ON v.Deptid=d.dtId LEFT JOIN EmployeeMaster_tbl E ON v.empid=e.eId WHERE v.EntryTime >= '2014-06-29' ```
If you concatenate null with anything, it will return null. So you need to check for null to get the proper result. You can use the following query instead ``` select v.PassNo as 'Badge Id',VName as VisitorName,ISNULL(e.eName,'') + ISNULL(' ('+ d.dtName+')','') as 'Name(Department)',v.EntryTime from Visitorlogo_tbl v join DepartmentMaster_tbl d on v.Deptid=d.dtId left join EmployeeMaster_tbl e on v.empid=e.eId where v.EntryTime >='2014-06-29' ```
While concatenating two columns an output mismatch occurs
[ "", "sql", "sql-server", "" ]
I'm trying to write an SQL Server query to extract rows with the highest 'Revision' number for each distinct ColumnId. For example if I have a data set such as: ``` Id ColumnId Revision ------------------------ 1 1 1 2 1 2 3 1 3 4 2 1 5 2 2 6 2 3 7 2 4 ``` My desired result is as follows: ``` Id ColumnId Revision ------------------------ 3 1 3 7 2 4 ``` I tried to use the following SQL statement: ``` SELECT Id, ColumnId, MAX(Revision) As Revision from Bug GROUP BY ColumnId ``` If I remove the Id from the query above it partially returns what I need. However I also need the Id column. What am I missing here to get this query to work as expected? **EDIT:** One thing I haven't made clear from the data set above is that highest Revision number for a particular ColumnId does not necessarily have the highest Id. ``` Id ColumnId Revision ------------------------ 1 1 1 2 1 3 <- Note this has a higher revision number than row Id 3. 3 1 2 <- This has a lower revision number than row Id 2. 4 2 1 5 2 2 6 2 3 7 2 4 ```
You could use a subquery and then inner join back on the ColumnId and the MaxRevision like this: ``` SELECT A.Id, A.ColumnId, A.Revision from Bug A INNER JOIN (SELECT ColumnId, MAX(Revision) As MaxRevision FROM BUG GROUP BY ColumnId ) B ON A.ColumnId = B.ColumnId AND A.Revision = B.MaxRevision ```
You can use `ROW_NUMBER`, for example with a common-table-expression(CTE): ``` WITH CTE AS ( SELECT Id, ColumnId, Revision, RN = ROW_NUMBER() OVER (PARTITION BY ColumnId ORDER BY Revision DESC) FROM Bug ) SELECT Id, ColumnId, Revision FROM CTE WHERE RN = 1 ``` `Demo`
SQL Query group by highest revision number
[ "", "sql", "sql-server", "" ]
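The accepted max-per-group join can be run against the question's own sample data; this sketch uses Python's built-in `sqlite3` and includes the edge case where the highest revision does not sit on the highest Id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Bug(Id INTEGER, ColumnId INTEGER, Revision INTEGER)")
conn.executemany("INSERT INTO Bug VALUES (?, ?, ?)",
                 [(1, 1, 1), (2, 1, 3), (3, 1, 2),      # max revision on Id 2
                  (4, 2, 1), (5, 2, 2), (6, 2, 3), (7, 2, 4)])
rows = conn.execute("""
    SELECT a.Id, a.ColumnId, a.Revision
    FROM Bug a
    JOIN (SELECT ColumnId, MAX(Revision) AS MaxRevision
          FROM Bug GROUP BY ColumnId) b
      ON a.ColumnId = b.ColumnId AND a.Revision = b.MaxRevision
    ORDER BY a.ColumnId
""").fetchall()
print(rows)  # [(2, 1, 3), (7, 2, 4)]
```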
I have a #temptable which I'm trying to populate but it's not working. ``` DECLARE @nBranchId int ,@tmStartDate datetime ,@tmEndDate datetime SELECT @nBranchId = 3483 ,@tmStartDate = DATEADD(DAY, -10, GETDATE()) ,@tmEndDate = GETDATE() CREATE table #temptable ( nResultsId int ,nInstrId int ,nBranchId int ,nFoldersId int ,strPaperId varchar(50) ,strPosName varchar(50) ,fQuantity float ,fRevaluationPrice float ,fHistRevaluationPrice float ,tmDate datetime ,nPrevResultsId int ) INSERT INTO #temptable SELECT xpr.nResultsId ,xpr.nInstrId ,xpr.nBranchId ,xpr.nFoldersId ,xpr.strPaperId ,xpr.strPosName ,xpr.fQuantity ,xpr.fRevaluationPrice ,xpr.fHistRevaluationPrice ,xpr.tmDate ,nPrevResultsId = dbo.fnGetPrevTradeResultId(xpr.nBranchId, xpr.nInstrId, xpr.strPaperId, xpr.strPosName,xpr.tmDate, xpr.nFoldersId) FROM dbo.XP_Results AS xpr WITH(READUNCOMMITTED) WHERE 1 = 1 AND xpr.nBranchId = ISNULL(@nBranchId, xpr.nBranchId) AND xpr.tmDate BETWEEN @tmStartDate AND @tmEndDate AND xpr.nInstrId <> 18 DROP table #temptable ``` Getting this error: Msg 8152, Level 16, State 14, Line 28 String or binary data would be truncated. The statement has been terminated. Where am I missing it? I have looked and looked but can't solve it.
One of the inserted values is longer than the column it targets (most likely a string longer than 50 characters going into one of the `varchar(50)` columns). To avoid this problem, use a [`SELECT INTO`](http://www.w3schools.com/sql/sql_select_into.asp) statement: `#Temptable` is then created automatically with the correct data types (as an extra benefit, you don't have to script the `CREATE` statement). ``` DECLARE @nBranchId int ,@tmStartDate datetime ,@tmEndDate datetime SELECT @nBranchId = 3483 ,@tmStartDate = DATEADD(DAY, -10, GETDATE()) ,@tmEndDate = GETDATE() SELECT xpr.nResultsId ,xpr.nInstrId ,xpr.nBranchId ,xpr.nFoldersId ,xpr.strPaperId ,xpr.strPosName ,xpr.fQuantity ,xpr.fRevaluationPrice ,xpr.fHistRevaluationPrice ,xpr.tmDate ,nPrevResultsId = dbo.fnGetPrevTradeResultId(xpr.nBranchId, xpr.nInstrId, xpr.strPaperId, xpr.strPosName,xpr.tmDate, xpr.nFoldersId) INTO #temptable FROM dbo.XP_Results AS xpr WITH(READUNCOMMITTED) WHERE 1 = 1 AND xpr.nBranchId = ISNULL(@nBranchId, xpr.nBranchId) AND xpr.tmDate BETWEEN @tmStartDate AND @tmEndDate AND xpr.nInstrId <> 18 DROP table #temptable ```
Should be fixed by changing these two columns to look like this. Likely what is going on is, you are trying to insert varchars greater than 50 characters into a varchar(50) column. ``` strPaperId varchar(max), strPosName varchar(max) ```
Getting error when inserting into temptable
[ "", "sql", "t-sql", "" ]
Besides the Import and Export Wizard, is there any query to transfer a table from one server to another in SQL Server 2008? ``` SELECT * INTO [Server1].[db_name].[dbo].[table_name] FROM [Server2].[db_name].[dbo].[table_name] ```
There are several options. If you have the correct permissions, you can create a Linked Server following instructions here: <http://msdn.microsoft.com/en-us/library/ff772782.aspx> and then use OPENQUERY (<http://msdn.microsoft.com/en-us/library/ms188427.aspx>) to select the records: ``` SELECT * INTO new_db.new_schema.new_table FROM OPENQUERY(linked_server_name, 'select * from old_db.old_schema.old_table'); ``` Or you can do something similar with OPENROWSET (<http://technet.microsoft.com/en-us/library/ms190312.aspx>) if you don't want to go through setting up the linked server: ``` SELECT * INTO new_db.new_schema.new_table FROM OPENROWSET('SQLNCLI', 'Server=OldServer;Trusted_Connection=yes;', 'select * from old_db.old_schema.old_table'); ``` Both may require some tweaking based on the authentication method you're using, usernames, privileges, and all that.
Assuming both server1 and server2 are sql server then You do not need a openquery. Create a linked server on server1 for server2 and write query select \* into server1.schema.table\_name from server2.schema.table\_name
Transfer a table from a server to another server
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have a select that brings me some primary keys. I need to SELECT all values from another table that are not listed in the previous query. How can I do it? I have been trying: ``` SELECT id FROM tab1, (SELECT id... WHERE LIKE '%abc%' ) AS result WHERE result != tab1.id; ``` But it didn't work; it also brings me the data from the subquery. I'm using PostgreSQL.
You can try this: `SELECT * FROM table1 WHERE id NOT IN (SELECT id FROM table2 WHERE <condition>);` where `id` is a column common to both tables, i.e. to `table1` and `table2`, and `<condition>` is whatever criterion you need to fetch records from `table2` in the inner query.
``` SELECT tab1.id FROM tab1 LEFT OUTER JOIN (SELECT id FROM ... WHERE LIKE '%abc%' ) AS result ON result.ID = tab1.id where result.ID is null; ```
SELECT values excluding values from other SELECT
[ "", "sql", "postgresql", "" ]
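The `LEFT JOIN ... IS NULL` anti-join from the accepted answer, plus the classic caveat: `NOT IN` returns nothing if the subquery yields a NULL, which is one reason to prefer the join (or `NOT EXISTS`). A sketch on Python's built-in `sqlite3` with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab1(id INTEGER);
    CREATE TABLE matches(id INTEGER);
    INSERT INTO tab1 VALUES (1), (2), (3);
    INSERT INTO matches VALUES (2), (NULL);
""")
# Anti-join: keep tab1 rows whose join partner is missing.
anti_join = [r[0] for r in conn.execute("""
    SELECT t.id FROM tab1 t
    LEFT JOIN matches m ON m.id = t.id
    WHERE m.id IS NULL ORDER BY t.id
""")]
# NOT IN: the NULL in the list makes every test unknown, so nothing passes.
not_in = [r[0] for r in conn.execute(
    "SELECT id FROM tab1 WHERE id NOT IN (SELECT id FROM matches)")]
print(anti_join, not_in)  # [1, 3] []
```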
I have a table `node_weather` that looks like this: ``` +--------------------+--------------+------+-----+--------------------+ | Field | Type | Null | Key | Default/Extra | +--------------------+--------------+------+-----+--------------------+ | W_id | mediumint(9) | NO | PRI | auto_increment | | temperature | int(10) | YES | | | | humidity | int(10) | YES | | | | wind_direction | int(10) | YES | | | | wind_speed | int(10) | YES | | | | wet_temperature | int(10) | YES | | | | P_id | mediumint(9) | YES | MUL | | | time | timestamp | NO | | CURRENT_TIMESTAMP | | level_current | int(10) | YES | | | +--------------------+--------------+------+-----+--------------------+ ``` In this table, I am recording the data from 4 different weather stations, defined by the P\_id field (so P\_id can be 1, 2, 3 or 4). The data is continuously filling the table, coming from all the stations. How do I create a query to get the last recorded row for each of the weather stations?
Maybe something like this: ``` SELECT * FROM node_weather JOIN ( SELECT tblInner.P_id, MAX(tblInner.time) AS maxTime FROM node_weather as tblInner GROUP BY tblInner.P_id ) AS tblMax ON tblMax.maxTime = node_weather.time AND tblMax.P_id = node_weather.P_id ``` this will get the latest combination with max `time` and `P_id`
Try this query. It is faster than sub query.. ``` select nw1.* from node_weather nw1 LEFT JOIN node_weather nw2 on nw2.P_id = nw1.P_id and nw2.time>nw1.time where nw2.W_ID is null; ```
MySQL - how to retrieve the last recorded values grouped by one of the fields?
[ "", "mysql", "sql", "" ]
I have a `order` table and each order in that table belongs to either a `device` or `part` (these two have no specific intersection in fields). So I need to store the `did` or `pid` as FK in `order` table. ``` "device" +----+.... | id |<---------+ +----+.... : : : : : . : : : "part" : +----+.... : | id |<-------+ : +----+.... : : : : : : . : : : : : : "order" @ @ +-----+-------+.... | id | for |.... +-----+-------+.... : : : : : : : : . ``` Now How should I do this? * Add a `type` field to `order` table and store both `pid` or `did` on one column(e.g. `for`) * Create an intermediate view with `type`,`pid`,`did` columns * Create a higher level table (e.g. `goods`) and make its PK to be FK on `order`,`part`,`device` Which one is the best method? or some other method?
Either use **exclusive foreign keys** or **inheritance**, as explained [here](https://stackoverflow.com/a/13317463/533120). Just be careful about CHECK, which is [not enforced](https://stackoverflow.com/a/2115641/533120) by MySQL, so you'll have to "emulate" it via trigger. Please don't do the "type" approach, it can lead to [problems](https://stackoverflow.com/a/20873843/533120).
I would go with two tables, one for the order and the other for item. Item table will have ID, type (part or device; P or D) and the rest of the details. In the order table, I will just have a FK to the Item Table ID.
Foreign Key From two tables in one column in SQL
[ "", "mysql", "sql", "database-design", "relational-database", "foreign-key-relationship", "" ]
I would like to return table entries from an inner join which do not have any matching entries in the second column. Lets consider the following two tables: Table one: ``` Name Number A 1 A 2 A 4 ``` Table two: ``` Name ID A 3 ``` The query should return Name=A ID=3. If ID would be 4, the query should not return anything. Is this even possible in SQL? Thanks for any hints! Edit: the joined table would look like this: ``` Name Number ID A 1 3 A 2 3 A 4 3 ``` So if I do this query I get no entries in the result set: ``` SELECT * FROM TABLE_ONE INNER JOIN TABLE_TWO ON TABLE_ONE.NAME=TABLE_TWO.NAME WHERE NUMBER=ID ``` Exactly in this situation I would like to get the Name returned!
``` SELECT * FROM Table2 WHERE NOT EXISTS ( SELECT * FROM Table1 WHERE Table1.Name = Table2.Name AND Table1.Number = Table2.ID ) ```
Yes, instead of using an INNER join, use a LEFT or a FULL OUTER join. This will allow null values from the other table to appear when you have a value in one of your tables. The FULL OUTER JOIN keyword returns all rows from the left table (table1) and from the right table (table2). The LEFT JOIN keyword returns all rows from the left table (table1), with the matching rows in the right table (table2). The result is NULL in the right side when there is no match. (There is also a RIGHT join, but it does the same thing as the left join, just returning all rows from the RIGHT table instead of the left).
SQL Inner Join and further requirements
[ "", "sql", "join", "where-clause", "" ]
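A runnable illustration of the accepted NOT EXISTS anti-join, using Python's sqlite3 with the question's sample data (lower-case table names assumed for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_one (name TEXT, number INTEGER);
CREATE TABLE table_two (name TEXT, id INTEGER);
INSERT INTO table_one VALUES ('A', 1), ('A', 2), ('A', 4);
INSERT INTO table_two VALUES ('A', 3);
""")
query = """
SELECT name, id FROM table_two t2
WHERE NOT EXISTS (SELECT 1 FROM table_one t1
                  WHERE t1.name = t2.name AND t1.number = t2.id)
"""
unmatched = conn.execute(query).fetchall()
print(unmatched)  # [('A', 3)] -- ID 3 has no matching Number

conn.execute("UPDATE table_two SET id = 4")
matched_now = conn.execute(query).fetchall()
print(matched_now)  # [] -- ID 4 matches a Number, so nothing is returned
```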
The architecture of my DB involves records in a Tags table. Each record in the `Tags` table has a string which is a `Name` and a foreign key to the `PrimaryID`s of records in another `Worker` table. Records in the Worker table have tags. Every time we create a Tag for a worker, we add a new row in the Tags table with the inputted Name and a foreign key to the worker's PrimaryID. Therefore, we can have multiple Tags with different names for the same worker. **Worker Table** ``` ID | Worker Name | Other Information __________________________________________________________________ 1 | Worker1 | .......................... 2 | Worker2 | .......................... 3 | Worker3 | .......................... 4 | Worker4 | .......................... ``` **Tags Table** ``` ID |Foreign Key(WorkerID) | Name __________________________________________________________________ 1 | 1 | foo 2 | 1 | bar 3 | 2 | foo 5 | 3 | foo 6 | 3 | bar 7 | 3 | baz 8 | 1 | qux ``` My goal is to filter `WorkerID`s based on an inputted table of strings. I want to get the set of `WorkerID`s that have the same tags as the inputted ones. For example, if the inputted strings are `foo` and `bar`, I would like to return `WorkerID`s 1 and 3. Any idea how to do this? I was thinking it has something to do with GROUP BY or JOINing tables. I am new to SQL and can't seem to figure it out.
This is a variant of relational division. Here's one attempt: ``` select workerid from tags where name in ('foo', 'bar') group by workerid having count(distinct name) = 2 ```
You can use the following: ``` select WorkerID from tags where name in ('foo', 'bar') group by WorkerID having count(*) = 2 ``` and this will retrieve your desired result/ Regards.
Multiple records in a table matched with a column
[ "", "sql", "sql-server", "database", "procedures", "" ]
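The accepted relational-division query can be checked against the question's Tags data. A sketch in Python's sqlite3, with the tag list parameterized so the HAVING count follows the number of inputted strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (id INTEGER PRIMARY KEY, workerid INTEGER, name TEXT)")
conn.executemany("INSERT INTO tags (workerid, name) VALUES (?, ?)", [
    (1, "foo"), (1, "bar"), (2, "foo"),
    (3, "foo"), (3, "bar"), (3, "baz"), (1, "qux"),
])
wanted = ["foo", "bar"]
placeholders = ", ".join("?" for _ in wanted)
rows = conn.execute(
    f"SELECT workerid FROM tags WHERE name IN ({placeholders}) "
    f"GROUP BY workerid HAVING COUNT(DISTINCT name) = ?",
    (*wanted, len(wanted)),
).fetchall()
print(sorted(r[0] for r in rows))  # [1, 3]
```

COUNT(DISTINCT name) guards against a worker having the same tag twice, as the accepted answer notes.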
I have used the following query to get records newer than 2012, but I am not getting the correct data when comparing dates. Please give me the correct query. MySQL query: ``` SELECT a.invoiceno, a.invoicerefno, a.invoicedate, c.companyname, a.grandtotal, a.twempname, itemdescription, quantity FROM twsql_twalldata.t_invoicedet a INNER JOIN twsql_twalldata.t_salesinv_items b ON a.invoiceno=b.invoiceno INNER JOIN twsql_twalldata.t_customerdet c ON a.customercode=c.customercode WHERE a.twempname NOT LIKE '%Auto%' AND itemdescription LIKE '%AMC%' OR itemdescription LIKE '%annual maintenance contract%' AND invoicecancelled=0 AND a.invoicedate > '2012-04-01 00:00:00'; ```
Try this: ``` SELECT a.invoiceno, a.invoicerefno, a.invoicedate, c.companyname, a.grandtotal, a.twempname, itemdescription, quantity FROM twsql_twalldata.t_invoicedet a INNER JOIN twsql_twalldata.t_salesinv_items b ON a.invoiceno=b.invoiceno INNER JOIN twsql_twalldata.t_customerdet c ON a.customercode=c.customercode WHERE a.twempname NOT LIKE '%Auto%' AND (itemdescription LIKE '%AMC%' OR itemdescription LIKE '%annual maintenance contract%') AND invoicecancelled=0 AND DATE(a.invoicedate) > DATE('2012-04-01'); ```
If the `invoicedate` data type is datetime then you will need one correction i.e. using proper parentheses for the OR condition. ``` WHERE a.twempname NOT LIKE '%Auto%' AND ( itemdescription LIKE '%AMC%' OR itemdescription LIKE '%annual maintenance contract%') AND invoicecancelled=0 AND a.invoicedate > '2012-04-01 00:00:00'; ```
mysql query with date comparison
[ "", "mysql", "sql", "date", "select", "" ]
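The parenthesization point made in both answers is easy to demonstrate: without parentheses, AND binds tighter than OR, so cancelled rows matching the first LIKE leak through. A small sqlite3 sketch with made-up invoice rows (column names shortened, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inv (descr TEXT, cancelled INTEGER)")
conn.executemany("INSERT INTO inv VALUES (?, ?)", [
    ("AMC renewal", 1),                  # cancelled invoice
    ("annual maintenance contract", 0),
    ("printer", 0),
])
# Without parentheses the cancelled AMC row still comes back:
bad = conn.execute(
    "SELECT descr FROM inv WHERE descr LIKE '%AMC%' "
    "OR descr LIKE '%annual maintenance%' AND cancelled = 0"
).fetchall()
# With parentheses the cancelled filter applies to both LIKE branches:
good = conn.execute(
    "SELECT descr FROM inv WHERE (descr LIKE '%AMC%' "
    "OR descr LIKE '%annual maintenance%') AND cancelled = 0"
).fetchall()
print(len(bad), len(good))  # 2 1
```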
I need to find active records that fall between a range of date parameters from a table containing applications. First, I look for a record between the date range in a table called 'app\_notes' and check if is linked to an application. If there is no app\_note record in the date range, I must look at the most recent app note from before the date range. If this app note indicates a status of active, I select it. The app\_indiv table connects an individual to an application. There can be multiple app\_indiv records for each individual and multiple app\_notes for each app\_indiv. Here is what I have so far: ``` SELECT DISTINCT individual.indiv_id FROM individual INNER JOIN app_indiv ON app_indiv.indiv_id = individual.indiv_id INNER JOIN app_note ON app_indiv.app_indiv_id = app_note.app_indiv_id WHERE (app_note.when_mod BETWEEN @date_from AND @date_to) /* OR most recent app_note indicates active */ ``` How can I get the most recent app\_note record if there is not one in the date range? Since there are multiple app\_note records possible, I don't know how to make it only retrieve the most recent.
``` SELECT * FROM individual i INNER JOIN app_indiv ai ON ai.indiv_id = i.indiv_id OUTER APPLY ( SELECT TOP 1 * FROM app_note an WHERE an.app_indiv_id = ai.app_indiv_id AND an.when_mod < @date_to ORDER BY an.when_mod DESC ) d WHERE d.status = 'active' ``` Find the last note less than end date, check to see if it's active and if so show the individual record.
(untested) You'll need to use a [CASE](http://www.dotnet-tricks.com/Tutorial/sqlserver/1MS1120313-Understanding-Case-Expression-in-SQL-Server-with-Example.html) switch. ``` SELECT DISTINCT individual.indiv_id FROM individual INNER JOIN app_indiv ON app_indiv.indiv_id = individual.indiv_id INNER JOIN app_note ON app_indiv.app_indiv_id = app_note.app_indiv_id WHERE (CASE WHEN app_note.when_mod BETWEEN @date_from AND @date_to THEN (SELECT appnote.when_mod from individual where appnote.when_mod BETWEEN @date_from AND @date_to) WHEN app_note.when_mod NOT BETWEEN @date_from and @date_to THEN (SELECT appnote.when_mod from individual appnote.when_mod LIMIT 1)) ``` Query might not be correct. Switch might need to be in the first SELECT part of the query.
Query for active records between date range or most recent before date range
[ "", "sql", "sql-server", "" ]
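SQLite has no OUTER APPLY, but the accepted answer's "TOP 1 ... ORDER BY when_mod DESC" idea can be expressed as a correlated MAX subquery. A runnable sketch with assumed sample notes (dates as ISO strings; the real schema has more tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE app_note (app_indiv_id INTEGER, when_mod TEXT, status TEXT);
INSERT INTO app_note VALUES
  (1, '2014-01-05', 'active'),
  (1, '2014-03-01', 'inactive'),
  (2, '2014-02-10', 'active');
""")
date_to = "2014-02-28"
# For each individual: take the newest note before @date_to, keep it if active.
rows = conn.execute("""
SELECT a.app_indiv_id, a.when_mod, a.status
FROM app_note a
WHERE a.when_mod = (SELECT MAX(b.when_mod) FROM app_note b
                    WHERE b.app_indiv_id = a.app_indiv_id AND b.when_mod < ?)
  AND a.status = 'active'
ORDER BY a.app_indiv_id
""", (date_to,)).fetchall()
print(rows)  # [(1, '2014-01-05', 'active'), (2, '2014-02-10', 'active')]
```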
I have a SQL Server table like this: `table(Id, DestId, WeekDay)` WeekDay is nvarchar holding values like `Monday`, `Tuesday`, `Wednesday`, etc. Here `Id` and `WeekDay` together form a composite primary key. I want the result of the select query to be sorted in `WeekDay` order, i.e. Monday should come before Tuesday and Wednesday and so on. Right now `Select * from table where Id=1001` gives records sorted in `WeekDay`'s alphabetic order
Solution here: <http://dbaspot.com/sqlserver-faq/393109-order-day-week.html> You have to make a case structure in your SQL statement like this one: ``` select * from table ORDER BY case WeekDay when 'Monday' then 1 when 'Tuesday' then 2 when 'Wednesday' then 3 when 'Thursday' then 4 when 'Friday' then 5 when 'Saturday' then 6 when 'Sunday' then 7 else 8 END ; ```
You can try this: ``` SELECT * FROM Table ORDER BY CHARINDEX(LEFT(WeekDay,3),'MonTueWedThuFriSatSun') ``` `CHARINDEX(LEFT(WeekDay,3),'MonTueWedThuFriSatSun')` will return the index position. If you want to **sort from Sunday**, the you can change the string to `'SunMonTueWedThuFriSat'`
Sort table data on weekday
[ "", "sql", "sql-server", "sql-server-2008", "" ]
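The accepted CASE-based ordering works unchanged in most engines. A quick sqlite3 check with a few sample rows (DestId omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedule (id INTEGER, weekday TEXT)")
conn.executemany("INSERT INTO schedule VALUES (?, ?)", [
    (1001, "Wednesday"), (1001, "Monday"), (1001, "Friday"), (1001, "Tuesday"),
])
rows = conn.execute("""
SELECT weekday FROM schedule WHERE id = 1001
ORDER BY CASE weekday
    WHEN 'Monday' THEN 1 WHEN 'Tuesday' THEN 2 WHEN 'Wednesday' THEN 3
    WHEN 'Thursday' THEN 4 WHEN 'Friday' THEN 5 WHEN 'Saturday' THEN 6
    WHEN 'Sunday' THEN 7 ELSE 8 END
""").fetchall()
print([r[0] for r in rows])  # ['Monday', 'Tuesday', 'Wednesday', 'Friday']
```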
I'm trying to find the sum of the 2 count values in the query below. The query works fine when I'm just retrieving the count values, but when I try to find the sum of the counts I get an error. Please help. ``` SELECT COUNT(BUG_ID) BUG_COUNT, COUNT(STATUS) CLOSED_COUNT, (BUG_COUNT + CLOSED_COUNT) AS TOTAL_COUNT FROM bugzilla_dashboard WHERE CATEGORY='Legacy'; ``` The error message I'm getting: ``` ORA-00904: "CLOSED_COUNT": invalid identifier 00904. 00000 - "%s: invalid identifier" *Cause: *Action: Error at Line: 3 Column: 14 ```
Try like this ``` SELECT COUNT(BUG_ID) BUG_COUNT,COUNT(STATUS) CLOSED_COUNT, (COUNT(BUG_ID) + COUNT(STATUS)) AS TOTAL_COUNT FROM bugzilla_dashboard WHERE CATEGORY='Legacy'; ``` *Or:* ``` SELECT BUG_COUNT,CLOSED_COUNT,(BUG_COUNT + CLOSED_COUNT) AS TOTAL_COUNT FROM ( SELECT COUNT(BUG_ID) BUG_COUNT, COUNT(STATUS) CLOSED_COUNT FROM bugzilla_dashboard WHERE CATEGORY='Legacy' ) Tmp ```
``` SELECT COUNT(BUG_ID) BUG_COUNT, COUNT(STATUS) CLOSED_COUNT, (COUNT(BUG_ID) + COUNT(STATUS)) AS TOTAL_COUNT FROM bugzilla_dashboard WHERE CATEGORY='Legacy'; ``` obviously...
Find sum of multiple sql counts
[ "", "sql", "aggregate-functions", "jdeveloper", "" ]
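The accepted fix works because the aggregates are repeated instead of reusing the SELECT-list aliases, which Oracle does not allow within the same SELECT. A sqlite3 sketch with made-up bug rows (note that COUNT(status) skips NULLs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bugzilla_dashboard (bug_id INTEGER, status TEXT, category TEXT)")
conn.executemany("INSERT INTO bugzilla_dashboard VALUES (?, ?, ?)", [
    (1, "CLOSED", "Legacy"),
    (2, None,     "Legacy"),   # open bug: no status recorded
    (3, "CLOSED", "Legacy"),
    (4, "CLOSED", "Other"),
])
row = conn.execute(
    "SELECT COUNT(bug_id) AS bug_count, COUNT(status) AS closed_count, "
    "COUNT(bug_id) + COUNT(status) AS total_count "
    "FROM bugzilla_dashboard WHERE category = 'Legacy'"
).fetchone()
print(row)  # (3, 2, 5)
```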
How to compare only day and month with date field in mysql? For example, I've a date in one table: `2014-07-10` Similarly, another date `2000-07-10` in another table. I want to compare only day and month is equal on date field. I tried this format. But I can't get the answer ``` select * from table where STR_TO_DATE(field,'%m-%d') = STR_TO_DATE('2000-07-10','%m-%d') and id = "1" ```
Use DATE\_FORMAT instead: ``` SELECT DATE_FORMAT('2000-07-10','%m-%d') ``` yields ``` 07-10 ``` Here's your query re-written with `DATE_FORMAT()`: ``` SELECT * FROM table WHERE DATE_FORMAT(field, '%m-%d') = DATE_FORMAT('2000-07-10', '%m-%d') AND id = "1" ```
you can do it with the `DAYOFMONTH` and `MONTH` function: ``` SELECT * FROM table WHERE DAYOFMONTH(field) = 31 AND MONTH(field) = 12 AND id = 1; ``` **EDIT:** Of course you can write following too if you want to compare two fields: ``` SELECT * FROM table WHERE DAYOFMONTH(field) = DAYOFMONTH(field2) AND MONTH(field) = MONTH(field2) AND id = 1 ...; ``` for further information have a look at the manual: 1. [`MONTH`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_month) 2. [`DAYOFMONTH`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_dayofmonth)
Compare only day and month with date field in mysql
[ "", "mysql", "sql", "date", "select", "" ]
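The same day-and-month comparison can be checked in SQLite, where strftime plays the role of MySQL's DATE_FORMAT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, field TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "2014-07-10"),  # same month/day, different year -> should match
    (1, "2013-07-10"),
    (1, "2014-08-10"),  # different month -> should not match
])
rows = conn.execute(
    "SELECT field FROM t "
    "WHERE strftime('%m-%d', field) = strftime('%m-%d', '2000-07-10') AND id = 1"
).fetchall()
print(sorted(r[0] for r in rows))  # ['2013-07-10', '2014-07-10']
```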
I'm trying to add up the amounts from a table if the customer id (KUNDENID) is the same. These are my two tables: ``` +----------+------------+--------+ | KUNDENID | datum | Betrag | +----------+------------+--------+ | 2 | 2013-06-05 | 120.5 | | 1 | 2013-02-05 | 20.95 | | 3 | 2013-02-05 | 250.3 | | 3 | 2013-05-13 | 100 | | 4 | 2013-01-01 | 1500 | +----------+------------+--------+ +----------+---------+-----------+------------------------------+--------------+-------------+ | KUNDENID | vorname | nachname | email | geburtsdatum | telefon | +----------+---------+-----------+------------------------------+--------------+-------------+ | 1 | Homer | Simpson | chunkylover53@aol.com | 1956-05-02 | 555-364 | | 2 | Doug | Heffernan | doug.heffernan@hotmail.com | 1965-02-09 | 555-3684 | | 3 | Dexter | Morgan | dexter.morgan@gmail.com | 1971-02-01 | 555-3684 | | 4 | Sheldon | Cooper | sheldoncooper@gmail.com | 1981-04-22 | 555-2274648 | | 5 | Maurice | Moss | moss@Reynholm-Industries.com | 1972-03-01 | 555-6677 | +----------+---------+-----------+------------------------------+--------------+-------------+ ``` I tried this INNER JOIN: ``` SELECT k.KUNDENID, vorname, nachname, SUM(Betrag) FROM kunde k INNER JOIN verkauf v ON k.KUNDENID = v.KUNDENID GROUP BY k.KUNDENID; ``` Output: ``` +----------+---------+-----------+------------------+ | KUNDENID | vorname | nachname | SUM(Betrag) | +----------+---------+-----------+------------------+ | 1 | Homer | Simpson | 20.9500007629395 | | 2 | Doug | Heffernan | 120.5 | | 3 | Dexter | Morgan | 350.300003051758 | | 4 | Sheldon | Cooper | 1500 | +----------+---------+-----------+------------------+ ``` So I know the JOIN is correct, but how can I also show the customer who didn't buy anything in the same JOIN?
My desired Output: ``` +----------+---------+-----------+------------------+ | KUNDENID | vorname | nachname | SUM(Betrag) | +----------+---------+-----------+------------------+ | 1 | Homer | Simpson | 20.9500007629395 | | 2 | Doug | Heffernan | 120.5 | | 3 | Dexter | Morgan | 350.300003051758 | | 4 | Sheldon | Cooper | 1500 | | 5 | Maurice | Moss | 0 | +----------+---------+-----------+------------------+ ``` Thanks for your help!
Use `LEFT JOIN` instead of `INNER JOIN`. ``` SELECT k.KUNDENID, vorname, nachname, IFNULL(SUM(Betrag), 0) AS gesamt FROM kunde k LEFT JOIN verkauf v ON k.KUNDENID = v.KUNDENID GROUP BY k.KUNDENID; ``` Here's [Wikipedia](http://en.wikipedia.org/wiki/Join_%28SQL%29#Left_outer_join)'s basic explanation. They show an example as well. > The result of a left outer join (or simply left join) for tables A and > B always contains all records of the "left" table (A), even if the > join-condition does not find any matching record in the "right" table > (B). This means that if the ON clause matches 0 (zero) records in B > (for a given record in A), the join will still return a row in the > result (for that record)β€”but with NULL in each column from B. A left > outer join returns all the values from an inner join plus all values > in the left table that do not match to the right table.
Yes you can use Left outer join in place of Inner Join Like : ``` SELECT k.KUNDENID, vorname, nachname, SUM(Betrag) AS Betrag FROM kunde k LEFT OUTER JOIN verkauf v ON k.KUNDENID = v.KUNDENID GROUP BY k.KUNDENID; ```
INNER JOIN: add up amounts with same id
[ "", "mysql", "sql", "join", "inner-join", "" ]
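The accepted LEFT JOIN plus IFNULL can be verified on a trimmed copy of the sample data. A sqlite3 sketch (only three customers kept, ORDER BY added so the result is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE kunde (kundenid INTEGER PRIMARY KEY, vorname TEXT, nachname TEXT);
CREATE TABLE verkauf (kundenid INTEGER, betrag REAL);
INSERT INTO kunde VALUES (1, 'Homer', 'Simpson'), (2, 'Doug', 'Heffernan'),
                         (5, 'Maurice', 'Moss');
INSERT INTO verkauf VALUES (1, 20.95), (2, 120.5), (2, 10.0);
""")
rows = conn.execute("""
SELECT k.kundenid, k.nachname, IFNULL(SUM(v.betrag), 0) AS gesamt
FROM kunde k
LEFT JOIN verkauf v ON k.kundenid = v.kundenid
GROUP BY k.kundenid
ORDER BY k.kundenid
""").fetchall()
print(rows)  # [(1, 'Simpson', 20.95), (2, 'Heffernan', 130.5), (5, 'Moss', 0)]
```

Moss has no verkauf rows, so SUM returns NULL and IFNULL turns it into 0, exactly the behavior the asker wanted.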
I have two tables, Alpha & Bravo. Bravo has a column `id` (integer, primary key) and some other columns that are not relevant for this question. Alpha has columns `id` (integer, primary key), `bravo_id` (foreign key to table Bravo), `special` (a single char, null for most rows but has a value for certain important rows), `created` (a DATETIME), and some others not relevant to this question. I would like to get all the special rows from Alpha, *plus* for each special row I would like to get the "previous" non-special row from Alpha associated with the same row of Bravo (that is, I would like to get the Alpha row with the same `bravo_id` and the most recent `created` that is older than the `created` of the special row), and I need to keep the special row & its previous row linked. Currently I'm doing this with n+1 queries: ``` SELECT id, bravo_id, created FROM Alpha WHERE special IS NOT NULL ``` followed by a query like this for each result in the initial query: ``` SELECT id, created FROM Alpha WHERE special IS NULL AND bravo_id = BrvN AND created < CrtN ORDER BY created DESC ``` Obviously that's wildly inefficient. Is there a way I can retrieve this information in a single query that will put each special row & its previous non-special row in a single row of the result? Our product supports both SQL Server (2008 R2 if relevant) and Oracle (11g if relevant) so a query that works for both of those would be ideal, but a query for only one of the two would be fine. **EDIT:** "Created" is perhaps a misnomer. The datetime in that column is when the referenced object was created, and *not* when it was entered into the database (which could be anywhere from seconds to years later). An ordering of the rows of Alpha based on the `created` column would have little or no correlation to an ordering based on the `id` column (which is a traditional incrementing identity/sequence).
This works in both SQL Server & Oracle: ``` select A.id, A.bravo_id, A.created, B.id, B.created from Alpha A left join Alpha B on A.bravo_id = B.bravo_id and B.created < A.created and B.special is null where A.special is not null and (B.created is null or B.created = (select max(S.created) from Alpha S where S.special is null and S.bravo_id = A.bravo_id and S.created < A.created)) ``` It left joins in all rows with the same foreign key and a lower/older created, then uses the where clause to filter them out (being careful not to exclude A rows that have no older row).
``` SELECT a.Id, a.Bravo_Id, a.Created, d.Id, d.Created FROM @Alpha a OUTER APPLY ( SELECT TOP 1 da.id, da.Created FROM @Alpha da WHERE da.Special IS NULL AND da.Bravo_Id = a.Bravo_Id AND da.Created < a.Created ORDER BY da.Created DESC ) d WHERE a.Special IS NOT NULL ``` You can bind both queries with apply (ms sql server query)
How can I get certain rows and the "previous" rows from a table that includes a datetime column?
[ "", "sql", "sql-server", "oracle", "query-optimization", "" ]
I have the following table in my SQL database and need to group it by the max value within each 5 minute period of time. ``` +------+--------+------------------+ | Path | Sample | DateTime | +------+--------+------------------+ | Srv1 | 0.5 | 2014-07-04 10:48 | | Srv1 | 0.7 | 2014-07-04 10:50 | | Srv1 | 0.9 | 2014-07-04 10:52 | | Srv1 | 0.6 | 2014-07-04 10:54 | | Srv2 | 8.2 | 2014-07-04 10:48 | | Srv2 | 7.4 | 2014-07-04 10:50 | | Srv2 | 10.9 | 2014-07-04 10:52 | | Srv2 | 9.9 | 2014-07-04 10:54 | | Srv3 | 7.8 | 2014-07-04 10:48 | | Srv3 | 1.3 | 2014-07-04 10:50 | | Srv3 | 5.7 | 2014-07-04 10:52 | | Srv3 | 2.4 | 2014-07-04 10:54 | | Srv4 | 4.2 | 2014-07-04 10:47 | | Srv4 | 3.8 | 2014-07-04 10:49 | | Srv4 | 5.4 | 2014-07-04 10:51 | | Srv4 | 2.4 | 2014-07-04 10:53 | | Srv5 | 1.6 | 2014-07-04 10:48 | | Srv5 | 1.3 | 2014-07-04 10:50 | | Srv5 | 1.6 | 2014-07-04 10:52 | | Srv5 | 1.3 | 2014-07-04 10:54 | +------+--------+------------------+ ``` The following table would be my goal: ``` +------+--------+------------------+ | Path | Sample | DateTime | +------+--------+------------------+ | Srv1 | 0.5 | 2014-07-04 10:45 | | Srv1 | 0.9 | 2014-07-04 10:50 | | Srv2 | 8.2 | 2014-07-04 10:45 | | Srv2 | 10.9 | 2014-07-04 10:50 | | Srv3 | 7.8 | 2014-07-04 10:45 | | Srv3 | 5.7 | 2014-07-04 10:50 | | Srv4 | 6.8 | 2014-07-04 10:45 | | Srv4 | 5.4 | 2014-07-04 10:50 | | Srv5 | 1.6 | 2014-07-04 10:45 | | Srv5 | 1.6 | 2014-07-04 10:50 | +------+--------+------------------+ ``` I tried the following code but it didn't produce the expected result.
`GROUP BY me.Path, pd.DateTime, DATEPART(mi, pd.DateTime) % 10` My whole SQL is: ``` SELECT TOP (100) PERCENT me.Path, MAX(pd.SampleValue) AS Sample, pd.DateTime FROM Perf.vPerfRaw AS pd INNER JOIN dbo.vPerformanceRuleInstance AS pri ON pri.PerformanceRuleInstanceRowId = pd.PerformanceRuleInstanceRowId INNER JOIN dbo.vPerformanceRule AS pr ON pr.RuleRowId = pri.RuleRowId INNER JOIN dbo.vManagedEntity AS me ON me.ManagedEntityRowId = pd.ManagedEntityRowId INNER JOIN dbo.vRule AS vr ON vr.RuleRowId = pri.RuleRowId INNER JOIN OperationsManager.dbo.RelationshipGenericView AS rgv ON rgv.TargetObjectDisplayName = me.Path WHERE (pr.CounterName = '% Processor Time') AND (vr.RuleDefaultName = 'Processor % Processor Time Total 2003' OR vr.RuleDefaultName = 'Processor % Processor Time Total Windows Server 2008' OR vr.RuleDefaultName = 'Processor Information % Processor Time Total Windows Server 2008 R2' OR vr.RuleDefaultName = 'Processor Information % Processor Time Total Windows Server 2012' OR vr.RuleDefaultName = 'Processor Information % Processor Time Total Windows Server 2012 R2') AND (rgv.SourceObjectDisplayName = 'SVM') GROUP BY me.Path, pd.DateTime, DATEPART(mi, pd.DateTime) % 10 ORDER BY me.Path ```
You are putting `pd.DateTime` in the selected columns AND the group by clause, so you're not actually performing any aggregation. There are many ways to do this, some better than others depending on your data, but try this : ``` SELECT TOP (100) PERCENT me.Path, MAX(pd.SampleValue) AS Sample, DATEADD(MI, -(DATEPART(MI, pd.DateTime) % 5), pd.DateTime) AS PeriodStart FROM ... WHERE ... GROUP BY me.Path, DATEADD(MI, -(DATEPART(MI, pd.DateTime) % 5), pd.DateTime) ORDER BY me.Path ``` Note that this will only work if seconds in your datetime values are always 0. You'll need an additionnal `DATEADD` to offset them otherwise.
Try this ``` SELECT Path, MAX(Sample), DATEADD(MINUTE, (CASE WHEN DATEPART(MINUTE, Created)%5 BETWEEN 1 AND 2 THEN 0 ELSE 5 END)-(DATEPART(MINUTE, Created))%5, Created) AS NewDate FROM ... GROUP BY Path, DATEADD(MINUTE, (CASE WHEN DATEPART(MINUTE, Created)%5 BETWEEN 1 AND 2 THEN 0 ELSE 5 END)-(DATEPART(MINUTE, Created))%5, Created) ORDER BY Path ```
SQL Group by time
[ "", "sql", "sql-server", "group-by", "" ]
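The 5-minute bucketing itself can be illustrated independently of the monitoring views. A sqlite3 sketch that floors each timestamp to a 300-second bucket via unix epoch arithmetic (an alternative to the DATEADD/DATEPART approach in the answers; timestamps treated as UTC, sample data from the Srv1 rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE perf (path TEXT, sample REAL, dt TEXT)")
conn.executemany("INSERT INTO perf VALUES (?, ?, ?)", [
    ("Srv1", 0.5, "2014-07-04 10:48:00"),
    ("Srv1", 0.7, "2014-07-04 10:50:00"),
    ("Srv1", 0.9, "2014-07-04 10:52:00"),
    ("Srv1", 0.6, "2014-07-04 10:54:00"),
])
rows = conn.execute("""
SELECT path,
       MAX(sample),
       datetime((strftime('%s', dt) / 300) * 300, 'unixepoch') AS bucket
FROM perf
GROUP BY path, bucket
ORDER BY bucket
""").fetchall()
print(rows)
# [('Srv1', 0.5, '2014-07-04 10:45:00'), ('Srv1', 0.9, '2014-07-04 10:50:00')]
```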
I want my query to return the rows of the table where a column contains a specific value first, so I wrote a query like this: ``` SELECT IDName FROM Type_ofID_tbl WHERE deleted = 0 ORDER BY(IDName ='Emirates ID'), IDName ``` but this shows the error Incorrect syntax near '='.
Try this ``` select IDName from Type_ofID_tbl where deleted=0 order by case when IDName ='Emirates ID' then 1 else 2 end ```
There is no need for `(IDName ='Emirates ID')` in the `ORDER BY` clause; it should be like this ``` SELECT IDName FROM Type_ofID_tbl WHERE deleted = 0 ORDER BY IDName ASC ``` If you want to order columns who has the `IDName ='Emirates ID'` ``` SELECT IDName FROM Type_ofID_tbl WHERE deleted = 0 AND IDName = 'Emirates ID' ORDER BY IDName ASC ``` The [ORDER BY](http://www.w3schools.com/sql/sql_orderby.asp) keyword sorts the records in ascending order by default. To sort the records in a descending order, you can use the `DESC` keyword.
while return rows with a specific value first getting syntax error
[ "", "sql", "sql-server", "" ]
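The accepted CASE-in-ORDER-BY fix, run against a few made-up rows via sqlite3 (the extra ID names are assumptions for illustration; a secondary sort on IDName keeps the rest alphabetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE type_ofid_tbl (idname TEXT, deleted INTEGER)")
conn.executemany("INSERT INTO type_ofid_tbl VALUES (?, ?)", [
    ("Passport", 0), ("Emirates ID", 0), ("Driving License", 0), ("Old ID", 1),
])
rows = conn.execute("""
SELECT idname FROM type_ofid_tbl WHERE deleted = 0
ORDER BY CASE WHEN idname = 'Emirates ID' THEN 1 ELSE 2 END, idname
""").fetchall()
print([r[0] for r in rows])  # ['Emirates ID', 'Driving License', 'Passport']
```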
I have a table called `course offerings`. In this table you will find each course with its respective `course_start_date` and `course_end_date`. I am also using an enum field called `course_status`. There are two possible statuses for each course, `IN SESSION` or `Closed`, but not both. How could I create a query that will change the status to `Closed` if it is past its `course_end_date`? [SQLFIDDLE](http://sqlfiddle.com/#!2/812d1) ``` CREATE TABLE course_offerings ( id int auto_increment primary key, course_name varchar(20), course_start_date date, course_end_date date, course_status enum('CLOSED','IN SESSION') ); INSERT INTO course_offerings (course_name, course_start_date, course_end_date, course_status) VALUES ('Math', '2013-02-20', '2014-02-20', 'IN SESSION'), ('Science', '2013-02-20', '2014-04-18', 'IN SESSION'); ```
I don't know if there is a dynamic way to do what you are trying. This simple SQL query can help you update such records: ``` UPDATE course_offerings SET course_status = 'Closed' WHERE course_end_date < SYSDATE() ```
Try this ``` UPDATE course_offerings SET course_status = 'Closed' WHERE course_end_date = "passed_date"; ```
Expiring a value based on a date field
[ "", "mysql", "sql", "" ]
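The accepted UPDATE, made runnable: sqlite3 has neither enum nor SYSDATE, so in this sketch the status is plain text and today's date is passed as a parameter (one end date pushed to 2999 so the second course stays open):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE course_offerings (
    id INTEGER PRIMARY KEY,
    course_name TEXT,
    course_end_date TEXT,   -- ISO dates compare correctly as strings
    course_status TEXT
)""")
conn.executemany(
    "INSERT INTO course_offerings (course_name, course_end_date, course_status) VALUES (?, ?, ?)",
    [("Math", "2014-02-20", "IN SESSION"), ("Science", "2999-04-18", "IN SESSION")],
)
conn.execute(
    "UPDATE course_offerings SET course_status = 'CLOSED' WHERE course_end_date < ?",
    (date.today().isoformat(),),
)
rows = conn.execute(
    "SELECT course_name, course_status FROM course_offerings ORDER BY id"
).fetchall()
print(rows)  # [('Math', 'CLOSED'), ('Science', 'IN SESSION')]
```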
I'm struggling to even explain what I need to do so please be patient with me. I have the following table and rows in it: TBNAME: **`Distances`** # Track, Person, Date, Distance ``` TRACK1, P1, 1/1/2014, 15 TRACK2, P1, 13/1/2014, 12 TRACK1, P1, 20/2/2014, 10 TRACK2, P1, 15/1/2014, 9 TRACK1, P2, 2/1/2014, 11 TRACK2, P2, 14/1/2014, 13 TRACK1, P2, 21/2/2014, 8 TRACK2, P2, 16/1/2014, 6 ``` What I would, ideally, like to see as a result is something like this: ``` P1, TRACK1, 20/2/2014, 10, TRACK2, 15/1/2014, 9 P2, TRACK1, 21/2/2014, 8, TRACK2, 16/1/2014, 6 ``` Or, in other words, for each person, the most recent date and distance for that date for each track in one row. Hope someone can understand this and offer a solution too :) Cheers, Pierre
Try this: ``` SELECT T1.Person, T1.Track, MAX(T1.Date), MIN(T1.Distance), T2.Track, MAX(T2.Date), MIN(T2.Distance) FROM Distances AS T1 INNER JOIN Distances AS T2 ON T1.Person = T2.Person WHERE T1.Track <> T2.Track AND T1.Track = 'Track1' GROUP BY T1.Track, T1.Person, T2.Track ``` The output result of the query is showing exactly the same of your expected result.
Look for [`ROW_NUMBER()`](http://technet.microsoft.com/library/ms186734%28v=sql.110%29.aspx) and [`OVER PARITION BY`](http://technet.microsoft.com/library/ms189461%28v=sql.110%29.aspx). Idea is something like (I did not try to run this query): ``` ;WITH data AS ( SELECT *, -- returns number for each pair of person and track starting from most recent date --Code enhanced at here row_number() over (partition BY person, track order by dte DESC) nr FROM distances ) SELECT * FROM data WHERE nr = 1 -- we want just the most recent one ORDER BY person, -- "group by" person track ; ``` It's still doesn't support showing one row for each person... I don't think you can do it with SQL (because of unknown number of tracks). There is [`PIVOT/UNPIVOT`](http://technet.microsoft.com/library/ms177410%28v=sql.105%29.aspx), but I don't think it fits here.
Select values from multiple rows from 1 table as 1 record
[ "", "sql", "sql-server", "sql-server-2008", "" ]
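One standard way to get "most recent row per (person, track)" is ROW_NUMBER, as hinted at in the second answer. A sketch using sqlite3 (needs SQLite 3.25+ for window functions; the question's d/m/Y dates are rewritten as ISO strings here so ORDER BY works lexically, and only the P1 rows are kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE distances (track TEXT, person TEXT, dte TEXT, distance INTEGER)")
conn.executemany("INSERT INTO distances VALUES (?, ?, ?, ?)", [
    ("TRACK1", "P1", "2014-01-01", 15),
    ("TRACK1", "P1", "2014-02-20", 10),
    ("TRACK2", "P1", "2014-01-13", 12),
    ("TRACK2", "P1", "2014-01-15", 9),
])
rows = conn.execute("""
SELECT person, track, dte, distance FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY person, track ORDER BY dte DESC) AS rn
    FROM distances
) AS t
WHERE rn = 1
ORDER BY track
""").fetchall()
print(rows)  # [('P1', 'TRACK1', '2014-02-20', 10), ('P1', 'TRACK2', '2014-01-15', 9)]
```

Pivoting the per-track results onto one row per person is a separate step and, as the second answer notes, awkward when the number of tracks is unknown.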
I have created a View: ``` CREATE VIEW Lunch AS SELECT 'Beer' AS item UNION SELECT 'Olives' UNION SELECT 'Bread' UNION SELECT 'Salami' UNION SELECT 'Calamari' UNION SELECT 'Coffee'; GO ``` Then I executed the following query in a separate query window: ``` SELECT item FROM Lunch ``` This results in the data from the view being returned. There aren't any underlying tables to hold data, but the system is still showing records. How is this possible?
It doesn't actually hold data, but it can evaluate SQL, which is all you're doing in the definition. So the following can be evaluated: ``` SELECT 'Beer' AS item UNION SELECT 'Olives' UNION SELECT 'Bread' UNION SELECT 'Salami' UNION SELECT 'Calamari' UNION SELECT 'Coffee'; ``` The same as a view defined with: ``` SELECT * FROM Customers ``` ## [SQL View Reference:](http://technet.microsoft.com/en-us/library/aa214068%28v=sql.80%29.aspx) > A view can be thought of as either a virtual table or a stored query. The data accessible through a view **is not stored in the database as a distinct object**. What is stored in the database is a **SELECT statement**. The result set of the SELECT statement **forms the virtual table** returned by the view. A user can use this virtual table by referencing the view name in Transact-SQL statements the same way a table is referenced.
I don't fully understand the question. The view is not "holding data", it is holding a query. The query statement has constants in it that are turned into a result set that can be used by other queries. You can think of a query as substituting the text of the query directly into a statement. So, when you do: ``` select * from lunch; ``` SQL really processes this as: ``` select * from ( SELECT 'Beer' AS item UNION SELECT 'Olives' UNION SELECT 'Bread' UNION SELECT 'Salami' UNION SELECT 'Calamari' UNION SELECT 'Coffee' ) t ``` This is a good model of what happens, although it is not quite what really happens. What really happens is that the view is compiled and the compiled code is inserted into the compiled code for the query. There is another concept of "materialized views". These are views where you have indexes and the values are actually stored in the database. However, this is not an example of a materialized view.
Can a View hold some data
[ "", "sql", "sql-server", "sql-server-2008", "" ]
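The "a view stores a query, not rows" point can be demonstrated directly: after creating such a view, the catalog contains only its defining SELECT. A sqlite3 sketch, where sqlite_master plays the role of SQL Server's catalog views (view trimmed to three items for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE VIEW lunch AS
SELECT 'Beer' AS item UNION SELECT 'Olives' UNION SELECT 'Bread'
""")
rows = conn.execute("SELECT item FROM lunch ORDER BY item").fetchall()
print([r[0] for r in rows])  # ['Beer', 'Bread', 'Olives']

# The catalog holds only the view's defining SELECT, not any rows:
view_sql = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'lunch'"
).fetchone()[0]
print(view_sql.startswith("CREATE VIEW"))  # True
```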
If I have names of locations along with location ids in the same table, how can I display the location name based on the location id? The location id will never change, but the name might, so using the value of the id instead of the name will keep all records available. However, I want the name to be shown and not the id. Right now I have: ``` select loc_name from location where loc_id = loc_name ``` but there's nothing in the output.
Loc\_ID is presumably an integer, and Loc\_Name is presumably VarChar. Those 2 fields will never equal, so asking for a dataset where they equal each other will not give you any output. Instead, you want something like: ``` SELECT loc_name FROM location WHERE loc_id = YourLocID ``` Where YourLocID above is the value of Loc\_ID for the Loc\_Name you want to display.
You need to specifiy the ID you want to see the location for, i.e.: ``` select loc_name from location where loc_id = 3; ```
Show name based on ID
[ "", "sql", "select", "where-clause", "" ]
``` DECLARE @Chat varchar(100) = 'GiveSilk:[Axmeed] Amount:[10]' ,@TargetCharName varchar(32),@Amount int SET @TargetCharName = (REPLACE(@Chat,'GiveSilk:[','')) select @TargetCharName ``` That's my SQL query. I want to select 'Axmeed' as the result, but the problem is that the 'Axmeed' string is not static; it can be any text. How do I select it alone?
``` DECLARE @Chat varchar(100) = 'GiveSilk:[Axmeed] Amount:[10]' SELECT SUBSTRING(@Chat,CHARINDEX('[',@Chat)+1,CHARINDEX(']',@Chat)-CHARINDEX('[',@Chat)-1) ```
One option is to use Regular Expressions. A CaptureGroup is what you are looking for. For example: ``` SELECT SQL#.RegEx_CaptureGroup( N'GiveSilk:[Axmeed] Amount:[10]', -- input string N'GiveSilk:\[(\w+)\]', -- Regular Expression 1, -- Capture Group Number NULL, -- Not Found Replacement 1, -- StartAt -1, -- Length N'IgnoreCase' -- RegEx Options List ); ``` The example makes use of the [SQL#](http://www.SQLsharp.com/) library but you can always write your own SQLCLR function if you prefer. (note: I am the author of the SQL# library, but the function noted here is available in the Free version.)
SQL Query string replace
[ "", "sql", "sql-server", "string", "t-sql", "" ]
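The accepted CHARINDEX/SUBSTRING answer simply takes the text between the first '[' and the first ']'. The same index arithmetic in Python, as a quick sanity check:

```python
chat = "GiveSilk:[Axmeed] Amount:[10]"
# Equivalent of SUBSTRING(@Chat, CHARINDEX('[', @Chat) + 1,
#                         CHARINDEX(']', @Chat) - CHARINDEX('[', @Chat) - 1):
start = chat.index("[") + 1   # position just after the first '['
end = chat.index("]")         # position of the first ']'
target_char_name = chat[start:end]
print(target_char_name)  # Axmeed
```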
I have data in my SQL database and need to create a changeControl value. ``` data changeControl M-0101 0 M-0101 0 M-02 1 M-03 0 OT-014 1 OT-014 1 M-228 0 M-228 0 ``` Is there any way to create this changeControl value depending on the data?
There is an easy way, but it depend on your DB. In SQL Server and in Oracle you can use DENSE\_RANK() to get the order of the values, using that you'll have the same number for equal values In SQL Server ``` SELECT data , (DENSE_RANK() OVER (ORDER BY data) - 1) % 2 changeControl FROM Table1 ``` In Oracle the `%` is not present, we need to use the function `MOD()` ``` SELECT data , MOD((DENSE_RANK() OVER (ORDER BY data) - 1), 2) changeControl FROM Table1 ``` In MySQL there is no `DENSE_RANK`, but there is another way to get the result ``` SELECT data, changeControl FROM (SELECT @last , data , @id := CASE WHEN @last <> data THEN 1 - @id WHEN @last = data THEN @id ELSE 0 END changeControl , @last:= data FROM Table1 , (SELECT @id := 0) a) b ``` if you don't mind additional column you can run directly the subquery. p.s.: If you don't specify what DB you're using we have to guess.
You need to first break your data into groups, and assign each group a number. Then join this information with main table, sort by data, and take modulo by two from group number - this will be your changeControl column. ``` select tn.*, t.RN % 2 as changeControl from tableName tn left join (select ROW_NUMBER() OVER (ORDER BY data) AS RN, data from tableName group by data) t on t.data = tn.data order by tn.data ``` This problem cannot be solved by taking modulo by two of id or data hash - id keeps changing even for identical values, and two sequential hashes although may be different, can both be even, producing same 'changeControl' value.
Creating alternating 0 and 1 values based on changing data value
[ "", "sql", "" ]
SQL newbie here, tearing my hair out trying to work this one out! I have a problem that is similar to this. I have the following data and all fields are defined as CHARACTER - no DATE or TIME unfortunately, thanks to poor design by the original DBA ``` Surname Name LoginDate LoginTime Smith John 2014-06-25 13.00 Smith John 2014-06-24 14.00 Smith Susan 2014-06-26 09.00 Smith Susan 2014-06-26 11.30 Jones Bill 2014-06-25 09.30 Jones Bill 2014-06-25 12.30 Jones Bill 2014-06-26 07.00 ``` What I'm trying to get in my output is the most recent login for each person, so I would expect to see ``` Smith John 2014-06-25 13.00 Smith Susan 2014-06-26 11.30 Jones Bill 2014-06-26 07.00 ``` I've tried different combinations of temporary tables, using CONCAT on the Date and Time and the MAX function, but I just keep drawing a blank. I think I know the tools and commands I need to use, I just can't seem to string them all together properly. I know I have to group them by name/surname then somehow combine the date and time in a way that lets me use the MAX function, but when I output them I can never seem to get the LoginDate and LoginTime to appear as separate fields in the output because they're not included in any GROUP BY that I use. Is anyone able to show me how to do this as I haven't got a lot of hair to start with :)
Try this query - ``` With MaxTimeStamp as ( SELECT Surname, Name, Max(TIMESTAMP(LoginDate, LoginTime)) as LoginDateTime FROM YourTable group by Surname, Name ) select c.Surname, c.Name, d.LoginDate, d.LoginTime from MaxTimeStamp c Join YourTable d on c.Surname = d.Surname and c.Name = d.Name and Date(c.LoginDateTime) = d.LoginDate and Time(c.LoginDateTime) = d.LoginTime ```
I have split it into two parts, to find the newest login per user and do the group by. I then join to the data again to find the record. ``` select dt.Surname , dt.Name , dt.LoginDate , dt.LoginTime from dataTable dt join (select Surname, Name, MAX(LoginDate+LoginTime) ts from #temp group by surname, name) sub on sub.Name = dt.Name and sub.Surname = dt.Surname and dt.LoginDate+dt.LoginTime = ts WHERE not sub.Name is null ``` You can also do it without the need to join, by splitting the timestamp out again. ``` select sub.Surname , sub.Name , LEFT(ts,10) LoginDate , RIGHT(ts,5) LoginTime from (select Surname, Name, MAX(LoginDate+LoginTime) ts from #temp group by surname, name) sub ```
SQL To Find Most RECENT row Using GROUP BY
[ "", "sql", "" ]
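For readers without a DB2 instance handy, the greatest-login-per-person pattern from the answers above (MAX over the concatenated date + time, then join back) ports directly to SQLite, because the zero-padded, year-first text format sorts correctly as a string. This is a minimal sketch with Python's stdlib; the table name `logins` is an illustrative assumption, not the original schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logins (Surname TEXT, Name TEXT, LoginDate TEXT, LoginTime TEXT);
INSERT INTO logins VALUES
  ('Smith','John','2014-06-25','13.00'),
  ('Smith','John','2014-06-24','14.00'),
  ('Smith','Susan','2014-06-26','09.00'),
  ('Smith','Susan','2014-06-26','11.30'),
  ('Jones','Bill','2014-06-25','09.30'),
  ('Jones','Bill','2014-06-25','12.30'),
  ('Jones','Bill','2014-06-26','07.00');
""")
# MAX over the concatenated "date time" string is safe here because both
# parts are zero-padded, fixed-width, and ordered most-significant-first.
latest = conn.execute("""
    SELECT l.Surname, l.Name, l.LoginDate, l.LoginTime
    FROM logins l
    JOIN (SELECT Surname, Name,
                 MAX(LoginDate || ' ' || LoginTime) AS ts
          FROM logins
          GROUP BY Surname, Name) m
      ON m.Surname = l.Surname
     AND m.Name = l.Name
     AND l.LoginDate || ' ' || l.LoginTime = m.ts
    ORDER BY l.Surname, l.Name
""").fetchall()
print(latest)
```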
I'm writing an application that implements a message system through a 'memos' table in a database. The table has several fields that look like this: ``` id, date_sent, subject, senderid, recipients,message, status ``` When someone sends a new memo, it will be entered into the memos table. A memo can be sent to multiple people at the same time and the recipients userid's will be inserted into the 'recipients' field as comma separated values. It would seem that an SQL query like this would work to see if a specific `userid` is included in a memo: ``` SELECT * FROM memos WHERE recipients LIKE %15% ``` But I'm not sure this is the right solution. If I use the SQL statement above, won't that return everything that "contains" 15? For example, using the above statement, user 15, 1550, 1564, 2015, would all be included in the result set (and those users might not actually be on the recipient list). What is the best way to resolve this so that ONLY the user 15 is pulled in if they are in the recipient field instead of everything containing a 15? Am I misunderstanding the LIKE statement?
I think you would be better off having your recipients as a child table of the memos table. So your memo's table has a memo ID which is referenced by the child table as ``` MemoRecipients ----- MemoRecipientId INT PRIMARY KEY, IDENTITY, MemoId INT FK to memos NOT NULL UserId INT NOT NULL ``` for querying specific memos from a user you would do something like ``` SELECT * FROM MEMOS m INNER JOIN memoRecipients mr on m.Id = mr.memoId WHERE userId = 15 ```
I am not sure how your application is exactly pulling these messages, but I imagine that better way would be creating a table `message_recepient`, which will represent many-to-many relationship between recipients and memos ``` id, memoId, recepientId ``` Then your application could pull messages like this ``` SELECT m.* FROM memos m inner join message_recepient mr on m.id = mr.memoId WHERE recepientId = 15 ``` This way you will get messages for the specific user. Again, don't know what your `status` field is for but if this is for `new/read/unread`, you could add in your where ``` and m.status = 'new' Order by date_set desc ``` This way you could just accumulate messages, those that are new
How to tightly contain an SQL query result
[ "", "sql", "" ]
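The normalized child-table design recommended in the answers above can be sketched end to end in a few lines of SQLite. The table and sample values below are illustrative assumptions, not the original application; the point is that user 1550 no longer produces a false match for user 15, which is exactly the failure mode of `LIKE '%15%'` on a comma-separated column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE memos (id INTEGER PRIMARY KEY, subject TEXT);
CREATE TABLE memo_recipients (memo_id INTEGER, user_id INTEGER);
INSERT INTO memos VALUES (1, 'budget'), (2, 'party');
-- user 15 only receives memo 1; 1550 and 2015 are different users
INSERT INTO memo_recipients VALUES (1, 15), (1, 1550), (2, 2015);
""")
memos_for_15 = conn.execute("""
    SELECT m.id, m.subject
    FROM memos m
    JOIN memo_recipients r ON r.memo_id = m.id
    WHERE r.user_id = 15
""").fetchall()
print(memos_for_15)
```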
I have a `varchar` which is coming through as the following format: `yyyy-dd-MM hh:mm:ss` I need to be able to convert that to a valid `datetime`, what will be the best method? **Edit for example:** ``` DECLARE @d varchar = '2014-17-01 12:00:00' SELECT CONVERT(DATETIME, @d, 103); ``` returns ``` Msg 241, Level 16, State 1, Line 3 Conversion failed when converting date and/or time from character string. ```
Via `CONVERT`? ``` SELECT CONVERT(DATETIME, '2014-31-01 17:00:00', 103 /* British */); (No column name) 2014-01-31 17:00:00.000 ```
EDITED to add SET DATEFORMAT. This resolved the error for me and ensures everything gets into the correct bucket. ``` SET DATEFORMAT ydm DECLARE @d VARCHAR(20) = '2014-17-01 12:00:00'; DECLARE @targetDate DATETIME = GETDATE(); -- Control Date DECLARE @passedDate DATETIME = CAST(@d AS DATETIME); SELECT @targetDate, @passedDate; ```
SQL Custom datetime format to datetime
[ "", "sql", "sql-server", "converters", "" ]
I have a table: ``` [letter] [Name] [status] [price] A row1 1 11 A row1 1 15 B row2 2 9 B row2 3 23 B row2 3 30 ``` And want to select data something like this: ``` SELECT letter, Name, COUNT(*), CASE WHEN price>10 THEN COUNT(*) ELSE NULL END GROUP BY letter, Name ``` the result is: ``` A row1 2 2 B row2 1 null B row2 2 2 ``` But I want this format: ``` A row1 2 2 B row2 3 2 ``` Please, help me to modify my query
Close. Probably want this instead: ``` SELECT letter, Name, COUNT(*), SUM(CASE WHEN price>10 THEN 1 ELSE 0 END) FROM TableThatShouldHaveAppearedInTheQuestionInTheFromClause GROUP BY letter, Name ``` should work. Assuming that the intention of the fourth column is to return the count of the number of rows, within each group, with a `price` greater than `10`. It's also possible to do this as a `COUNT()` over a `CASE` then returns non-`NULL` and `NULL` results for the rows that should and should not be counted, but I find the above form easier to quickly reason about.
Since nulls are not used in aggregate functions: ``` SELECT letter , name , count(*) , count( case when price > 10 then 1 end ) FROM t GROUP BY letter, name ``` You were very close.
One column condition in sql
[ "", "sql", "aggregate-functions", "" ]
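The accepted `SUM(CASE ...)` conditional-aggregation pattern is portable across engines; here is a runnable sketch in SQLite using the question's sample data (the table name `items` is an assumption for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (letter TEXT, Name TEXT, status INTEGER, price INTEGER);
INSERT INTO items VALUES
  ('A','row1',1,11), ('A','row1',1,15),
  ('B','row2',2,9),  ('B','row2',3,23), ('B','row2',3,30);
""")
# COUNT(*) counts every row in the group; the SUM(CASE ...) counts only
# the rows matching the condition, without dropping any group.
rows = conn.execute("""
    SELECT letter, Name, COUNT(*) AS total,
           SUM(CASE WHEN price > 10 THEN 1 ELSE 0 END) AS over_10
    FROM items
    GROUP BY letter, Name
    ORDER BY letter
""").fetchall()
print(rows)
```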
I need to make the results of CONCAT(ROUND(oi.OutboundUnits/oi.OutboundCalls *100),'%') AS OutboundConv return in a 2-digit format even if the result is 9 or below.
You can use the LPAD function, with a different condition for 100% using the CASE statement, as below: ``` CASE WHEN ROUND(oi.OutboundUnits/oi.OutboundCalls *100) = 100 THEN '100%' ELSE LPAD(CONCAT(ROUND(oi.OutboundUnits/oi.OutboundCalls *100),'%'), 3, '0') END ``` **Reference**: [LPAD on MySQL Reference Manual](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lpad)
You can use the CONCAT and RIGHT functions: ``` right(concat('0', OutboundConv), 3) ```
Return 2 digit computation even if result is 1 digit MYSQL
[ "", "mysql", "sql", "" ]
I have this query: ``` if OBJECT_ID('tempdb..#tempA') is not null drop table #tempA create table #tempA ( tempid varchar(5), tempdate smalldatetime ) declare @loopdate smalldatetime set @loopdate = '4/2/2013' while (@loopdate <= '4/28/2013') begin --Purpose is to get IDs not in TableB for each date period insert into #tempA (tempid, tempdate) select storeid, @loopdate from ( select tableAid from tableA except select tableBid from tableB where tableBdate = @loopdate ) as idget set @loopdate = DATEADD(day, 1, @loopdate) end ``` Is there a way to make the while loop set-based or is this best that could be done? EDIT: made changes for correctness EDIT: end result ``` ID1 4/2/2014 ID2 4/2/2014 ID4 4/2/2014 ID2 4/3/2014 ID1 4/4/2014 ID5 4/4/2014 ID3 4/5/2014 ```
Still a loop but maybe a little more efficient ``` while (@loopdate <= '4/28/2013') begin --Purpose is to get IDs not in TableB for each date period insert into #tempA (tempid, tempdate) select storeid, @loopdate from ( select tableAid from tableA left join tableB on tableB.tableBid = tableA.tableAid and tableB.tableBdate = @loopdate where tableB.tableBid is null ) as idget set @loopdate = DATEADD(day, 1, @loopdate) end ``` This needs some work but may get you all the way with a set ``` ;WITH Days as ( SELECT cast('4/2/2013' AS datetime ) as 'Day' UNION ALL SELECT DATEADD(DAY, +1, Day) as 'Day' FROM Days where [DAY] <= '4/28/2013' ) SELECT tableA.tableAid, Days.[Day] from Days left join tableB on tableB.tableBdate = Days.[Day] full join tableA on tableB.tableBid = tableA.tableAid where tableB.tableBid is null ```
It depends on whether or not tableA has a date on it; if not, then: ``` WITH DateList(DateDay) AS ( SELECT CAST('2013-04-28' AS DATETIME) UNION ALL SELECT DATEADD(DAY, DATEDIFF(DAY,0,DATEADD(DAY, -1, DateDay)),0) FROM DateList WHERE DateDay between '2013-04-03' and '2013-04-28' ) SELECT DISTINCT tableAid , DateDay FROM DateList cross join #tableA a left join #tableB b on tableAid = b.tableBid and b.tableBdate = DateDay where b.tableBid is null ORDER BY DateDay ASC ```
Can this while be converted to a set based query?
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
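The set-based shape suggested in the answers (generate the date range with a recursive CTE, cross join to the id list, anti-join to the existing rows) also works in SQLite, which supports recursive CTEs. A small sketch with illustrative table names (`stores`, `visits` are assumptions standing in for tableA/tableB):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stores (storeid INTEGER);
CREATE TABLE visits (storeid INTEGER, visitdate TEXT);
INSERT INTO stores VALUES (1), (2), (3);
INSERT INTO visits VALUES (1, '2013-04-02'), (2, '2013-04-03');
""")
# 3 stores x 3 days = 9 pairs; 2 exist, so 7 (store, day) pairs are missing.
missing = conn.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT '2013-04-02'
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < '2013-04-04'
    )
    SELECT s.storeid, days.d
    FROM days
    CROSS JOIN stores s
    LEFT JOIN visits v ON v.storeid = s.storeid AND v.visitdate = days.d
    WHERE v.storeid IS NULL
""").fetchall()
print(len(missing))
```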
So I have this "table" which is the result of the query ``` SELECT Valoare FROM GEConfig WHERE Cimp IN('Societate','Adresa','Banca','CapitalSocial','Cont','CUI','NrRegCom','ModulReceptiiExtCotaTVA') ``` GEConfig is a table that is used to configure the application for each client, so the data returned by the query above will be different for each client. ``` Valoare ======================== 1 aaa 2 bbb 3 ccc 4 ddd 5 eee 6 fff 7 ggg 8 hhh ``` I want to pivot this table so it will look like ``` col1 col2 col3 col4 col5 col6 col7 col8 aaa bbb ccc ddd eee fff ggg hhh ``` I don't have any aggregate; I only have that one column with 8 rows, which I want to turn into 1 row with 8 columns. Why do I want this? I have to use it in a Rave Report. I tried something like this ``` select Valoare , [1] , [2] , [3] from ( select Valoare from GEConfig ) x pivot ( max(Valoare) for Valoare in([1], [2], [3]) )p ``` But it is clear that this is very wrong, as I have no idea how to use a PIVOT.
A more general approach is to add a row number and use it as anchor ``` WITH _ID AS ( SELECT Valoare , _ID = Row_Number() OVER (ORDER BY Valoare) FROM _ID ) SELECT [1] = MAX(CASE WHEN _ID = 1 THEN Valoare ELSE NULL END) , [2] = MAX(CASE WHEN _ID = 2 THEN Valoare ELSE NULL END) , [3] = MAX(CASE WHEN _ID = 3 THEN Valoare ELSE NULL END) , [4] = MAX(CASE WHEN _ID = 4 THEN Valoare ELSE NULL END) , [5] = MAX(CASE WHEN _ID = 5 THEN Valoare ELSE NULL END) , [6] = MAX(CASE WHEN _ID = 6 THEN Valoare ELSE NULL END) , [7] = MAX(CASE WHEN _ID = 7 THEN Valoare ELSE NULL END) , [8] = MAX(CASE WHEN _ID = 8 THEN Valoare ELSE NULL END) FROM _ID ``` --- **Static case** You can use a real `PIVOT` as in the answer from MarkD, or you can use a fake one ``` SELECT [1] = MAX(CASE WHEN Valoare = 'aaa' THEN 'aaa' ELSE NULL END) , [2] = MAX(CASE WHEN Valoare = 'bbb' THEN 'bbb' ELSE NULL END) , [3] = MAX(CASE WHEN Valoare = 'ccc' THEN 'ccc' ELSE NULL END) , [4] = MAX(CASE WHEN Valoare = 'ddd' THEN 'ddd' ELSE NULL END) , [5] = MAX(CASE WHEN Valoare = 'eee' THEN 'eee' ELSE NULL END) , [6] = MAX(CASE WHEN Valoare = 'fff' THEN 'fff' ELSE NULL END) , [7] = MAX(CASE WHEN Valoare = 'ggg' THEN 'ggg' ELSE NULL END) , [8] = MAX(CASE WHEN Valoare = 'hhh' THEN 'hhh' ELSE NULL END) FROM GEConfig ```
Try this: ``` SELECT * from (SELECT cast([name] AS nvarchar(MAX)) [name], row_number() over( ORDER BY name) AS id FROM TEMP)a pivot ( max([name]) FOR id IN ([1], [2], [3], [4], [5]) )pvt ```
SQL PIVOT one column only
[ "", "sql", "sql-server", "pivot", "" ]
Table example: ``` time a b c ------------- 12:00 1 0 1 12:00 2 3 1 13:00 3 2 1 13:00 3 3 3 14:00 1 1 1 ``` How can I get `AVG(a)` from the rows `WHERE b!=0`, and `AVG(c)`, grouped by time? Is it possible to solve this with `sql` only? I mean that the query should not count the 1st row when computing `AVG(a)`, but it should still count it for `AVG(c)`.
You can utilize `CASE` statements to get conditional aggregates: ``` SELECT AVG(CASE WHEN b != 0 THEN a END) ,AVG(c) FROM YourTable GROUP BY time ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!15/612f6/8/0) This works because a value not captured by `WHEN` criteria in a `CASE` statement will default to `NULL`, and `NULL` values are ignored by aggregate functions.
``` SELECT AVG(a), AVG(c) from table WHERE b != 0 group by time ``` Yea... is this what you need?
Very special kind of AVG statement
[ "", "sql", "postgresql", "" ]
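The accepted trick relies on aggregate functions ignoring NULLs, and it behaves the same way in SQLite; here is a runnable sketch with the question's data (the table name `samples` is an assumption for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE samples (t TEXT, a INTEGER, b INTEGER, c INTEGER);
INSERT INTO samples VALUES
  ('12:00',1,0,1), ('12:00',2,3,1),
  ('13:00',3,2,1), ('13:00',3,3,3),
  ('14:00',1,1,1);
""")
rows = conn.execute("""
    SELECT t,
           AVG(CASE WHEN b != 0 THEN a END) AS avg_a,  -- NULLs are ignored
           AVG(c) AS avg_c
    FROM samples
    GROUP BY t
    ORDER BY t
""").fetchall()
print(rows)
```

Note the first 12:00 row contributes to `avg_c` (1.0) but not to `avg_a` (2.0, not 1.5), which is exactly the asymmetry the question asks for.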
I have two tables: one is member, the other is member_status. I need to get the count of members in each status.

member

```
+----+-------------+-----------+---------+--------+--------+--------------+---------+---------------------+---------------------+
| id | full_name | mobile_no | address | status | remark | edit_user_id | user_id | created_at | updated_at |
+----+-------------+-----------+---------+--------+--------+--------------+---------+---------------------+---------------------+
| 1 | John Doe | 123 | | 1 | | 1 | 1 | 2014-06-19 15:51:08 | 2014-06-19 15:51:08 |
| 2 | Michael Bay | 123 | | 1 | | 1 | 1 | 2014-06-19 15:51:08 | 2014-06-19 15:51:08 |
| 3 | Hey Hey | 123 | | 3 | | 1 | 1 | 2014-06-19 15:51:08 | 2014-06-19 15:51:08 |
+----+-------------+-----------+---------+--------+--------+--------------+---------+---------------------+---------------------+
```

member_status

```
+----+---------------------+----------------------+--------+--------------+---------+---------------------+---------------------+
| id | name | description | status | edit_user_id | user_id | created_at | updated_at |
+----+---------------------+----------------------+--------+--------------+---------+---------------------+---------------------+
| 1 | Visitor | Visitor | 1 | 1 | 1 | 2014-06-19 15:51:08 | 2014-06-19 15:51:08 |
| 3 | Member | Member | 1 | 1 | 1 | 2014-06-19 15:51:08 | 2014-06-19 15:51:08 |
| 4 | Level 1 | Level 1 | 1 | 1 | 1 | 2014-06-19 15:51:08 | 2014-06-19 15:51:08 |
| 11 | Level 12 | Level 12 | 1 | 1 | 1 | 2014-07-03 17:27:00 | 2014-07-03 17:27:00 |
+----+---------------------+----------------------+--------+--------------+---------+---------------------+---------------------+
```

I tried this query, but its output is NOT correct...
``` select `member_status`.`name`, count(*) as totalNum from member right join `member_status` on (member.`status` = `member_status`.id) group by `member_status`.name ``` This is the output I would like to have: ``` +---------------------+----------+ | name | totalNum | +---------------------+----------+ | Level 1 | 0 | | Level 12 | 0 | | Member | 1 | | Visitor | 2 | +---------------------+----------+ ``` However, somehow, the output is this one: ``` +---------------------+----------+ | name | totalNum | +---------------------+----------+ | Level 1 | 1 | | Level 12 | 1 | | Member | 1 | | Visitor | 2 | +---------------------+----------+ ``` Can anyone please advise?
Try `select * from member right join member_status on (member.status = member_status.id)` and you'll see NULLs for the member fields. That's why you are getting higher counts than you want. One of the solutions: `select member_status.name, count(member.id) as totalNum from member right join member_status on (member.status = member_status.id) group by member_status.name` or `select member_status.name, sum(case when member.id is null then 0 else 1 END) as totalNum from member right join member_status on (member.status = member_status.id) group by member_status.name`
`SELECT S.NAME, SUM(CASE WHEN S.ID = T.STATUS THEN 1 ELSE 0 END ) FROM member_status S, member T GROUP BY S.NAME`
get all record with number of one to many table count
[ "", "sql", "" ]
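The key point from the accepted answer, counting the joined column instead of `COUNT(*)` so empty statuses report 0, can be demonstrated in SQLite. SQLite historically lacks RIGHT JOIN, so this sketch flips it to a LEFT JOIN from the status table; the result is equivalent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE member (id INTEGER, full_name TEXT, status INTEGER);
CREATE TABLE member_status (id INTEGER, name TEXT);
INSERT INTO member VALUES (1,'John Doe',1), (2,'Michael Bay',1), (3,'Hey Hey',3);
INSERT INTO member_status VALUES (1,'Visitor'), (3,'Member'), (4,'Level 1'), (11,'Level 12');
""")
# COUNT(m.id) counts only matched rows, so statuses with no members get 0;
# COUNT(*) would count the NULL row produced by the outer join as 1.
counts = conn.execute("""
    SELECT s.name, COUNT(m.id) AS totalNum
    FROM member_status s
    LEFT JOIN member m ON m.status = s.id
    GROUP BY s.name
    ORDER BY s.name
""").fetchall()
print(counts)
```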
I have this error from SSIS when I'm trying to import an Access 2013 database into a SQL Server 2014 database. I used `Microsoft Office 15.0 Access Database Engine OLE DB Provider` for the Source and `SQL Server Native Client 11.0` for the Destination. ## TITLE: SQL Server Import and Export Wizard Could not connect source component. Error 0xc0202009: Source - Amounts [1]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E37. Error 0xc02020e8: Source - Amounts [1]: Opening a rowset for "`Amounts`" failed. Check that the object exists in the database. --- ADDITIONAL INFORMATION: Exception from HRESULT: 0xC02020E8 (Microsoft.SqlServer.DTSPipelineWrap) --- Is the error about the provider? I just noticed that when I choose another Destination for the data (the same as the Source), it throws a similar error but with another row index, for example "Error 0xc0202009: Source - Amounts [43]: SSIS Error..."
The error indicates that it is not able to find the Amounts table in your Access file. Ensure that the table is present and that you have not specified any extra spaces in the table name. Looking at the versions of SQL Server and Access you are using I have a feeling it might have to do with the drivers. Can you make sure you are using - ![enter image description here](https://i.stack.imgur.com/shzE1.png)
There is an import wizard bug: when you select 'copy data from one or more tables...', the automatically generated SQL puts single quotes around the table name, which causes the error message [Opening a rowset for "Amounts" failed. Check that the object exists in the database.] Click the 'Preview' button (next to the 'Edit mappings' button) and you can see the SQL. The solution is to remove the single quotes around the table name: go back, choose 'Write a query to specify the data to transfer', and write your query. Then click Next, then double-click [dbo].[query] and change it to the table name you want, [dbo].[your_table_name]. You have to do this one table at a time; I finally figured it out and made it work.
SSIS error on access import
[ "", "sql", "sql-server", "ms-access", "ssis", "" ]
I am trying to write a trigger that selects values from some tables and then inserts them into another table. So, for now I have this. There are a lot of columns, so I haven't copied them; they are only varchar2 values, and this part works, so I don't think showing them is useful: ``` create or replace TRIGGER TRIGGER_FICHE AFTER INSERT ON T_AG BEGIN declare begin INSERT INTO t_ag_hab@DBLINK_DEV () values (); /*commit;*/ end; END; ``` Stored procedure from which the trigger will be fired (again, a lot of parameters, not relevant to copy them): ``` INSERT INTO T_AG() VALUES (); commit work; ``` The thing is, we cannot commit inside a trigger. I have read that, and I understand it. But how can I see an update of my table, with the new value? When the process is running, there is no error, but I don't see the new line in t_ag_hab. I know it's not very clear, but I don't know how to explain it any other way. How can I fix this?
Because you're inserting into a remove table via a database link, you have [a distributed transaction](http://docs.oracle.com/cd/E29505_01/server.1111/e25789/transact.htm#CHDJCBCF): > ... distributed transaction processing is more complicated because the database must coordinate the committing or rolling back of the changes in a transaction as an atomic unit. The entire transaction must commit or roll back. When you commit, you're committing both the local insert and the remote insert performed by your trigger, as an atomic unit. You cannot commit one without the other, and you don't have to do anything extra to commit the remote change: > The two-phase commit mechanism is transparent to users who issue distributed transactions. In fact, users need not even know the transaction is distributed. A COMMIT statement denoting the end of a transaction automatically triggers the two-phase commit mechanism. No coding or complex statement syntax is required to include distributed transactions within the body of a database application. If you can't see the inserted data from the remote database afterwards then something else has deleted it after the commit, or more likely you're looking at the wrong database. One slight downside (though also a feature) of a database link is that it hides the details of where the work is being done. You can drop and recreate a link to make your code update a different target database without having to modify the code itself. But that means your code doesn't know where the insert is actually going - you'd need to check the data dictionary to see where the link is pointing. And even then you might not know as the link can be using a TNS alias to identify the database, and changes to the `tnsnames.ora` aren't visible from within the database. 
If you can see the data after committing by querying `t_ag_hab@dblink_dev` from the same database in which you ran your procedure, but you can't see it when you query locally from the database you expect the link to be pointing to, then the link isn't pointing where you think it is. The insert is going to one database, and you are performing your query against a different one. Only you can decide which is the 'correct' database though; either redefine the link (or TNS entry, if appropriate), or change where you're doing the query.
I am not able to understand your requirement clearly. For updating records in the main table and inserting the old records into an audit table, we can use the query below as a trigger (MS SQL). ``` Create trigger trg_update ON T_AGENT AFTER UPDATE AS BEGIN UPDATE Tab1 SET COL1 = I.COL1, COL2=I.COL2 FROM INSERTED I INNER JOIN Tab1 ON I.COL3=Tab1.Col3 INSERT Tab1_Audit(COL1,COL2,COL3) SELECT COL1, COL2, COL3 FROM DELETED RETURN; END; ```
Trigger that insert into a dblink table
[ "", "sql", "triggers", "oracle10g", "" ]
Here is my Table1: ``` [letter] [Name] [status] [price] A row1 1 11 A row1 1 15 B row2 2 9 B row2 3 23 B row2 3 30 ``` And I want to select the data like this: ``` SELECT letter, Name, COUNT(*), CASE WHEN price>10 THEN COUNT(*) ELSE NULL END GROUP BY letter, Name ``` the result is: ``` A row1 2 2 B row2 1 null B row2 2 2 ``` But I want this format: ``` A row1 2 2 B row2 3 2 ``` Please help me to modify my query
You can union the results together. ``` SELECT COUNT(1) AS abc, 'P' AS xyz FROM #temp WHERE b > a UNION SELECT COUNT(1) AS abc, 'U' AS xyz FROM #temp WHERE b = 0 UNION SELECT COUNT(1) AS abc, 'PL' AS xyz FROM #temp WHERE b <= a AND b != 0 ``` I had to add b != 0 into the PL condition so the b = 0 rows are counted only in the U bucket, which matches your expected results.
Use a case statement; the expression in the GROUP BY has to match the one in the SELECT exactly: ``` SELECT Count(*) as abc, CASE WHEN b = 0 THEN 'U' WHEN b > a THEN 'P' ELSE 'PL' END as xyz FROM [Table 1] GROUP BY CASE WHEN b = 0 THEN 'U' WHEN b > a THEN 'P' ELSE 'PL' END ```
sql server query count data
[ "", "sql", "sql-server", "sql-server-2008", "" ]
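As a cross-check of the three conditions from the question, here is a conditional-aggregation sketch in SQLite (a different shape from the UNION answer above: one wide row instead of one row per label; SQLite lets boolean expressions be summed directly as 0/1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (a INTEGER, b INTEGER, c INTEGER);
INSERT INTO t1 VALUES
  (100,110,1), (200,0,2),   (300,150,3),
  (100,0,2),   (200,300,2), (300,106,3);
""")
# In SQLite a comparison evaluates to 0 or 1, so SUM(condition) is a count.
counts = conn.execute("""
    SELECT SUM(b > a)             AS P,
           SUM(b = 0)             AS U,
           SUM(b <= a AND b <> 0) AS PL
    FROM t1
""").fetchone()
print(counts)
```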
We have two tables table1 and table2 Table 1 ![Table 1](https://i.stack.imgur.com/fTfda.jpg) Table 2 ![enter image description here](https://i.stack.imgur.com/YR3R7.jpg) **We need the resultant table as :** ![enter image description here](https://i.stack.imgur.com/8GvS1.png) **All this should be done in a single SQL query** Thanks in advance.
I think you can do it without the subquery; I tested it and it returned your expected values. ``` select table_2.id_pro, product_name, SUM(1) as Quantity, priceprod, SUM(1) * priceprod as 'quantity * priceprod' from Table_2 inner join Table_1 t1 on table_2.id_pro = t1.id_pro group by table_2.id_pro, product_name, priceprod ``` And my SqlFiddle test <http://sqlfiddle.com/#!3/08c2ef/1>
I believe this should be what you need, or fairly close anyway! You need to group up your results from your first table to get your quantity value and then join those results to your second table to be able to create your desired output. ``` SELECT t1.id_pro, t2.product_name, s.Quantity, t2.priceperprod, s.Quantity * t2.priceperprod FROM table_2 t2 INNER JOIN ( SELECT COUNT(*) AS Quantity, t.id_pro FROM table_1 t GROUP BY t.id_pro ) t1 ON t2.id_pro = t1.id_pro ```
Join two tables and retrieve relevant data and calculate the result
[ "", "sql", "" ]
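The join-then-group pattern from both answers runs unchanged in SQLite; this sketch uses illustrative table names and values (`order_lines` standing in for table1, `products` for table2), since the question's data is only shown as images:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE order_lines (id_pro INTEGER);
CREATE TABLE products (id_pro INTEGER, product_name TEXT, priceprod REAL);
INSERT INTO products VALUES (1, 'pen', 5.0), (2, 'book', 10.0);
INSERT INTO order_lines VALUES (1), (1), (2);
""")
# One row per product: COUNT(*) gives the quantity of matching order lines,
# and quantity * price gives the line total.
totals = conn.execute("""
    SELECT p.id_pro, p.product_name, COUNT(*) AS quantity,
           p.priceprod, COUNT(*) * p.priceprod AS line_total
    FROM order_lines o
    JOIN products p ON p.id_pro = o.id_pro
    GROUP BY p.id_pro, p.product_name, p.priceprod
    ORDER BY p.id_pro
""").fetchall()
print(totals)
```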
I have the following database:

```
 paperid | authorid | name
---------+----------+---------------
 1889374 | 897449 | D. N. Page
 1889374 | 1795881 | C. N. Pope
 1889374 | 1952069 | S. W. Hawking
```

I would like to create a table having the following columns:

* paperid
* author name - for each author of this paperid
* coauthors - for each coauthor of that paper

The result should look like this:

```
 paperid | author | coauthors
---------+---------------+---------------------------
 1889374 | D. N. Page | C. N. Pope S. W. Hawking
 1889374 | C. N. Pope | D. N. Page S. W. Hawking
 1889374 | S. W. Hawking | D. N. Page C. N. Pope
```

and this is achieved with the following queries:

```
SELECT foo.paperid, npa.name as author, foo.coauthors INTO npatest FROM newpaperauthor npa CROSS JOIN ( SELECT paperid, string_agg(name, ' ') as coauthors FROM newpaperauthor GROUP BY paperid ORDER BY paperid) foo;

UPDATE npatest SET coauthors = regexp_replace(coauthors, author, '');

SELECT * FROM npatest;
```

The problem arises when there are more `paperid`s in the database, like:

```
 paperid | authorid | name | affiliation
---------+----------+------------------+------------------------
 1889373 | 122817 | Kazuhiro Hongo |
 1889373 | 1091191 | Hiroshi NAKAGAWA |
 1889373 | 1874415 | Hiroshi Nakagawa | University of Oklahoma
 1889373 | 2149773 | Han Soo Chang |
 1889374 | 897449 | D. N. Page |
 1889374 | 1795881 | C. N. Pope |
 1889374 | 1952069 | S. W. Hawking |
```

Then I will get a cartesian product of them like:

```
 paperid | author | coauthors
---------+------------------+----------------------------------------------------------------
 1889373 | Kazuhiro Hongo | Hiroshi NAKAGAWA Hiroshi Nakagawa Han Soo Chang
 1889374 | Kazuhiro Hongo | D. N. Page C. N. Pope S. W. Hawking
 1889373 | Hiroshi NAKAGAWA | Kazuhiro Hongo Hiroshi Nakagawa Han Soo Chang
 1889374 | Hiroshi NAKAGAWA | D. N. Page C. N. Pope S. W. Hawking
 1889373 | Hiroshi Nakagawa | Kazuhiro Hongo Hiroshi NAKAGAWA Han Soo Chang
 1889374 | Hiroshi Nakagawa | D. N. Page C. N. Pope S. W. Hawking
 1889373 | Han Soo Chang | Kazuhiro Hongo Hiroshi NAKAGAWA Hiroshi Nakagawa
 1889374 | Han Soo Chang | D. N. Page C. N. Pope S. W. Hawking
 1889373 | D. N. Page | Kazuhiro Hongo Hiroshi NAKAGAWA Hiroshi Nakagawa Han Soo Chang
 1889374 | D. N. Page | C. N. Pope S. W. Hawking
 1889373 | C. N. Pope | Kazuhiro Hongo Hiroshi NAKAGAWA Hiroshi Nakagawa Han Soo Chang
 1889374 | C. N. Pope | D. N. Page S. W. Hawking
 1889373 | S. W. Hawking | Kazuhiro Hongo Hiroshi NAKAGAWA Hiroshi Nakagawa Han Soo Chang
 1889374 | S. W. Hawking | D. N. Page C. N. Pope
```

How can I get rid of that cartesian product?
This can be **surprisingly simple** with [`array_agg()`](http://www.postgresql.org/docs/current/interactive/functions-aggregate.html) as window aggregate function combined with [`array_remove()`](http://www.postgresql.org/docs/current/interactive/functions-array.html#ARRAY-FUNCTIONS-TABLE) (introduced with pg 9.3): ``` CREATE TABLE npatest AS SELECT paperid, name AS author , array_to_string(array_remove(array_agg(name) OVER (PARTITION BY paperid), name), ', ') AS coauthors FROM newpaperauthor n; ``` If author names are not unique, there are complications. Then again, if author names are not unique, your whole operation is flawed. Using `array_agg()` and `array_remove()` instead of `string_agg()` and `regexp_replace()`, because the latter would fail easily for similar names like 'Jon Fox' and 'Jon Foxy', and also be messy with delimiters. [`array_to_string()`](http://www.postgresql.org/docs/9.3/interactive/functions-array.html#ARRAY-FUNCTIONS-TABLE) transforms the array to a string. I used `', '` as separator, which seems more sensible to me than just a space. The use of [`SELECT INTO`](http://www.postgresql.org/docs/current/interactive/sql-selectinto.html) is discouraged. Use the superior [`CREATE TABLE AS`](http://www.postgresql.org/docs/current/interactive/sql-createtableas.html) instead. [Per documentation:](http://www.postgresql.org/docs/current/interactive/sql-selectinto.html#AEN81072) > `CREATE TABLE AS` is the recommended syntax, since this form of > `SELECT INTO` is not available in ECPG or PL/pgSQL, because they > interpret the `INTO` clause differently. Furthermore, `CREATE TABLE AS` > offers a superset of the functionality provided by `SELECT INTO`. [**SQL Fiddle.**](http://sqlfiddle.com/#!15/fae38/1)
Here is a way to approach this problem: Generate the list of all co-authors as a subquery. Generate the list of all authors. Then join these together and do the string manipulation to get what you want. --- The authors is easy: ``` select paperid, npa.name as author from newpaperauthor npa; ``` The co-authors is easy: ``` select paperid, string_agg(npa.name, ' ') as coauthors from newpaperauthor npa group by paperid; ``` The combination requires some list substitution (the inner replace removes the author's own name, the outer one collapses the resulting double space): ``` select a.paperid, a.author, replace(replace(coauthors, author, ''), '  ', ' ') as coauthors from (select paperid, npa.name as author from newpaperauthor npa ) a join (select paperid, string_agg(npa.name, ' ') as coauthors from newpaperauthor npa group by paperid ) ca on a.paperid = ca.paperid; ```
How to eliminate cartesian product with joins while using subquery?
[ "", "sql", "postgresql", "join", "postgresql-9.3", "cartesian-product", "" ]
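The `array_remove` idea from the accepted answer (drop the author's own name from the aggregated list instead of doing fragile string replacement) can also be sketched client-side in plain Python, which is an option when the database at hand lacks array types. The sample rows are taken from the question:

```python
from collections import defaultdict

rows = [
    (1889374, 'D. N. Page'),
    (1889374, 'C. N. Pope'),
    (1889374, 'S. W. Hawking'),
    (1889373, 'Kazuhiro Hongo'),
    (1889373, 'Han Soo Chang'),
]

# Group author names per paper, then emit one row per (paper, author)
# with the other names joined by ', ' - the array_remove equivalent.
by_paper = defaultdict(list)
for paperid, name in rows:
    by_paper[paperid].append(name)

result = [(paperid, author,
           ', '.join(n for n in names if n != author))
          for paperid, names in by_paper.items()
          for author in names]
print(result)
```

As with the SQL version, this assumes author names are unique within a paper.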
I have seen many similar questions but none that meet my needs exactly, and I cannot seem to deduce a solution on my own from inspecting the other questions. I have the following (mock) table below. My actual table has many more columns. ``` TableA: ID | color | feel | size | alive | age ------------------------------------------ 1 | blue | soft | large | true | 36 2 | red | soft | large | true | 36 2 | blue | hard | small | false | 37 2 | blue | soft | large | true | 36 2 | blue | soft | small | false | 39 15 | blue | soft | medium | true | 04 15 | blue | soft | large | true | 04 15 | green | soft | large | true | 15 40 | pink | sticky | large | true | 83 51 | brown | rough | tiny | false | 01 51 | gray | soft | tiny | true | 59 34 | blue | soft | large | true | 02 ``` I want the result to look like: ``` Result of query on TableA: ID | color | feel | size | alive | age ------------------------------------------- 1 | blue | soft | large | true | 36 2 | red | soft | large | true | 36 15 | blue | soft | medium | true | 04 40 | pink | sticky | large | true | 83 51 | brown | rough | tiny | false | 01 34 | blue | soft | large | true | 02 ``` I want one row for every unique ID column, but I do not want to check the other columns. I need the other columns returned in my result set, but I do not want to filter on them. I just need one row for every unique ID - I do not care which row. In my example, I selected the first row of every unique ID. I have tried variations of ``` select * from TableA group by ID having ID = max(ID) ``` Most examples I have seen with group by and max and/or min functions involve only 2 columns. I have many more columns, however. I have also seen examples using CTE, but I am not using SQL Server (I am using Sybase). How can I achieve the result set described? **EDIT** We are using Sybase version 15.1.
There are a variety of ways to do this. If you have a more recent version of Sybase, you can use `row_number()`: ``` select t.* from (select t.*, row_number() over (partition by id order by id) as seqnum from table t ) t where seqnum = 1; ```
Your solution with MIN has some drawbacks. It doesn't return a specific row but the MIN values from the group of rows, so you can get result rows that are not in the database. Is that OK for you? Row_number is supported in Sybase 15.2: <http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc38151.1520/html/iqrefbb/iqrefbb262.htm> It's a pity if it is not supported in 15.1. You can then use an identity column and a temporary table to achieve what you want.
Sybase select distinct on one column, do not care about others
[ "", "sql", "distinct", "aggregate-functions", "sybase", "" ]
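For experimenting with the "any one row per ID" requirement outside Sybase: old SQLite builds lack `row_number()`, but the same effect can be had by picking one physical row per key via `MIN(rowid)` - a deliberately different technique from the accepted answer, shown here only as a portable sketch with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (ID INTEGER, color TEXT);
INSERT INTO TableA VALUES
  (1,'blue'), (2,'red'), (2,'blue'), (15,'blue'), (15,'green');
""")
# rowid is SQLite's implicit row identifier; MIN(rowid) per ID picks the
# first inserted row for each ID, satisfying "one row, don't care which".
one_per_id = conn.execute("""
    SELECT ID, color
    FROM TableA
    WHERE rowid IN (SELECT MIN(rowid) FROM TableA GROUP BY ID)
    ORDER BY ID
""").fetchall()
print(one_per_id)
```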
Query ``` SELECT * from Table_2 WHERE name like ('Joe'); ``` Output ``` 1 100 Joe 2 200 JOE 3 300 jOE 4 400 joe ``` Why is it not case sensitive?
Problem: > Query not case sensitive Cause: Column 'Name' has a case-insensitive (`CI`) collation. Solution: You have to use a `CS` collation: `SELECT * FROM fn_helpcollations() WHERE description LIKE N'%case-sensitive%'`. Note: There is a database collation and column level collation. And, there is, also, a server level collation. ``` SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS DatabaseCollation /* -- Sample output (my database) DatabaseCollation ---------------------------- SQL_Latin1_General_CP1_CI_AS */ SELECT col.collation_name AS ColumnCollation FROM sys.columns col WHERE col.object_id = OBJECT_ID(N'dbo.Table_2') AND col.name = N'Name' /* -- Sample output (my database) ColumnCollation ---------------------------- SQL_Latin1_General_CP1_CI_AS */ ``` Simply changing database collation will **NOT** change the collation for existing user tables and columns: > This statement does not change the collation of the columns in any > existing user-defined tables. These can be changed by using the > COLLATE clause of ALTER TABLE. [Source](http://msdn.microsoft.com/en-us/library/ms175835.aspx) After **changing database collation**, the output of above queries will be: ``` /* DatabaseCollation -- changed ---------------------------- SQL_Latin1_General_CP1_CS_AS */ /* ColumnCollation -- no change ---------------------------- SQL_Latin1_General_CP1_CI_AS */ ``` and, as you can see the collation of column `Name` remains CI. More, changing database collation will affect only the new created tables and columns. Thus, changing database collation could generate strange results (in my *opinion*) because some `[N][VAR]CHAR` columns will be CI and the new columns will be CS. 
Detailed solution #1: if just some queries for column `Name` need to be `CS`, then I would rewrite the `WHERE` clause of these queries thus: ``` SELECT Name FROM dbo.Table_2 WHERE Name LIKE 'Joe' AND Name LIKE 'Joe' COLLATE SQL_Latin1_General_CP1_CS_AS ``` ![enter image description here](https://i.stack.imgur.com/WgEJL.png) This will give a chance to SQL Server to do an `Index Seek` on column `Name` (if there is an index on column `Name`). Also, the execution plan will include an implicit conversion (see the `Predicate` property of the `Index Seek`) because of the following predicate: `Name = N'Joe' COLLATE SQL_Latin1_General_CP1_CS_AS`. Detailed solution #2: if all queries for column `Name` need to be CS, then I would change the collation only for column `Name` thus: ``` -- Drop all objects that depend on this column (ex. indexes, constraints, defaults) DROP INDEX IX_Table_2_Name ON dbo.Table_2 -- Change column's collation ALTER TABLE dbo.Table_2 ALTER COLUMN Name VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CS_AS -- Replace VARCHAR(50) with proper data type and max. length -- Replace COLLATE SQL_Latin1_General_CP1_CS_AS with the right CS collation -- Recreate all objects that depend on column Name (ex. indexes, constraints, defaults) CREATE INDEX IX_Table_2_Name ON dbo.Table_2 (Name) -- Test query SELECT Name FROM dbo.Table_2 WHERE Name LIKE 'Joe' ``` ![enter image description here](https://i.stack.imgur.com/Ww2VV.png)
If you want your query to be case sensitive on few occasions only, then you can try below query: ``` SELECT * FROM TableName where Col1 = 'abcdEfhG' COLLATE SQL_Latin1_General_CP1_CS_AS ``` Just add **"COLLATE SQL\_Latin1\_General\_CP1\_CS\_AS"** in front of the query.
SQL Server Like Query not case sensitive
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "sql-server-2012", "" ]
I have the following database schema: ``` CREATE TABLE public.sgclasstab_id67 ( oid bigint NOT NULL, att_1113 bigint, att_1114 bigint, att_1115 character varying(500), att_1116 character varying(2000), att_1578 double precision, CONSTRAINT sgclasstab_id67_pkey PRIMARY KEY (oid) ) CREATE TABLE public.sgclasstab_id68 ( oid bigint NOT NULL, att_1119 bigint, att_1139 bigint, att_1496 character varying(2000), CONSTRAINT sgclasstab_id68_pkey PRIMARY KEY (oid) ) CREATE TABLE public.sggeofacelist ( oid bigint NOT NULL, meanid smallint, numofislands smallint DEFAULT 0, compound smallint DEFAULT 0, extra character varying(512), crs bigint DEFAULT (-1), crsapp bigint DEFAULT (-1), version bigint DEFAULT 0, feature geometry, CONSTRAINT sggeofacelist_pkey PRIMARY KEY (oid), CONSTRAINT enforce_dims_feature CHECK (st_ndims(feature) = 3), CONSTRAINT enforce_srid_feature CHECK (st_srid(feature) = 0) ) CREATE TABLE public.sggeopointlist ( oid bigint NOT NULL, angle double precision, meanid smallint, crs bigint DEFAULT (-1), crsapp bigint DEFAULT (-1), origx double precision, origy double precision, feature geometry, CONSTRAINT sggeopointlist_pkey PRIMARY KEY (oid), CONSTRAINT enforce_dims_feature CHECK (st_ndims(feature) = 3), CONSTRAINT enforce_srid_feature CHECK (st_srid(feature) = 0) ) ``` The column sgclasstab\_id67.att\_1114 references geometries in the table sggeofacelist, which contains only polygons, sgclasstab\_id68.att\_1139 references geometries in the table sggeopointlist, which contains only point geometries. Both tables can contain hundred thousands of geometries, only a small percentage of them being related to the above tables. All geometries use GIST indexes. 
Now, when I run the following query ``` UPDATE sgclasstab_id68 SET att_1496 = ( SELECT t3943814704643.att_1115 FROM sgclasstab_id67 t3943814704643, sggeofacelist t3943863539361, sgclasstab_id68 t3943875447103, sggeopointlist t3943875522916 WHERE ((t3943814704643.att_1114=t3943863539361.oid )) AND ((t3943875447103.att_1139=t3943875522916.oid )) AND ((t3943863539361.feature && (t3943875522916.feature) AND ST_Intersects(t3943863539361.feature,(t3943875522916.feature)))) AND (t3943863539361.oid=t3943814704643.att_1114) AND sgclasstab_id68.oid = t3943875447103.oid LIMIT 1 ) ``` it literally runs forever (I cancelled it after 4 days). This is no surprise, looking at the execution plan: ``` Update on sgclasstab_id68 (cost=0.00..1076.63 rows=100 width=736) -> Seq Scan on sgclasstab_id68 (cost=0.00..1076.63 rows=100 width=736) SubPlan 1 -> Limit (cost=0.70..10.48 rows=1 width=516) -> Nested Loop (cost=0.70..10.48 rows=1 width=516) -> Nested Loop (cost=0.55..10.18 rows=1 width=524) -> Nested Loop (cost=0.29..9.33 rows=1 width=5482) -> Seq Scan on sgclasstab_id67 t3943814704643 (cost=0.00..1.01 rows=1 width=524) -> Index Scan using sggeofacelist_pkey on sggeofacelist t3943863539361 (cost=0.29..8.31 rows=1 width=4974) Index Cond: (oid = t3943814704643.att_1114) -> Index Scan using sggeopointlist_idx on sggeopointlist t3943875522916 (cost=0.27..0.84 rows=1 width=48) Index Cond: ((t3943863539361.feature && feature) AND (t3943863539361.feature && feature)) Filter: _st_intersects(t3943863539361.feature, feature) -> Index Scan using sgclasstab_id68a1139_idx on sgclasstab_id68 t3943875447103 (cost=0.14..0.29 rows=1 width=8) Index Cond: (att_1139 = t3943875522916.oid) Filter: (sgclasstab_id68.oid = oid) ``` If I didn't misread anything here, Postgres first performs the intersection and then rules out all irrelevant geometries that are not referenced by objects from sgclasstab_id68.
Wouldn't it be much more performant to swap those two operations, or did I do something in this query that makes this option unavailable? If not, is there a way to force Postgres to reconsider? PostgreSQL 9.3, PostGIS 2.1.1 r12113. Thanks in advance (and sorry for the hard-to-read query, it's autogenerated).
Taking joop's answers into account, I came up with the following cleaned-up and modified version of my first statement ``` UPDATE sgclasstab_id68 SET att_1496 = ( SELECT sgclasstab_id67.att_1115 FROM sgclasstab_id67, sggeofacelist, sggeopointlist WHERE ((sgclasstab_id67.att_1114=sggeofacelist.oid )) AND ((sgclasstab_id68.att_1139=sggeopointlist.oid )) AND (ST_Intersects(sggeofacelist.feature,(sggeopointlist.feature))) AND (sggeofacelist.oid=sgclasstab_id67.att_1114) ) ``` This yields the following execution plan: ``` Update on sgclasstab_id68 (cost=0.00..1074.13 rows=100 width=736) -> Seq Scan on sgclasstab_id68 (cost=0.00..1074.13 rows=100 width=736) SubPlan 1 -> Nested Loop (cost=0.55..10.18 rows=1 width=516) -> Nested Loop (cost=0.29..9.33 rows=1 width=5482) -> Seq Scan on sgclasstab_id67 (cost=0.00..1.01 rows=1 width=524) -> Index Scan using sggeofacelist_pkey on sggeofacelist (cost=0.29..8.31 rows=1 width=4974) Index Cond: (oid = sgclasstab_id67.att_1114) -> Index Scan using sggeopointlist_idx on sggeopointlist (cost=0.27..0.84 rows=1 width=40) Index Cond: (sggeofacelist.feature && feature) Filter: ((sgclasstab_id68.att_1139 = oid) AND _st_intersects(sggeofacelist.feature, feature)) ``` The results seem to be identical and the first tests already imply that it is faster by orders of magnitude.
[not an answer] FYI: the cleaned-up query (hope I didn't make any mistakes): ``` UPDATE sgclasstab_id68 dst SET att_1496 = ( SELECT cla.att_1115 FROM sgclasstab_id67 cla JOIN sggeofacelist fac ON cla.att_1114 = fac.oid AND fac.oid = cla.att_1114 JOIN sggeopointlist pnt ON fac.feature && (pnt.feature) AND ST_Intersects(fac.feature, pnt.feature) JOIN sgclasstab_id68 cla2 ON cla2.att_1139 = pnt.oid WHERE 1=1 AND dst.oid = cla2.oid -- AND cla2.oid = cla2.oid LIMIT 1 ) ; ``` [ANSWER] It appears that the unaliased cla2 (classtab\_id68) reference refers to the **inner** query, and not to the destination table in the outer UPDATE statement. So all the `classtab_id68` rows will be updated with the same value. Also there is a lack of FK constraints/indexes for the joined columns. --- UPDATE: on second thought the table reference `JOIN sgclasstab_id68 cla2` is not necessary; it refers to the same row as the target row, so the query can be further reduced to: ``` UPDATE sgclasstab_id68 dst SET att_1496 = ( SELECT c67.att_1115 FROM sgclasstab_id67 c67 JOIN sggeofacelist fl ON c67.att_1114 = fl.oid AND fl.oid = c67.att_1114 JOIN sggeopointlist pnt ON (fl.feature && pnt.feature) AND ST_Intersects(fl.feature, pnt.feature) WHERE dst.att_1139 = pnt.oid LIMIT 1 ) ; ``` [but there still is a need for proper FKs/indexes.] --- Addendum: the `LIMIT 1` in the subquery (without an order by) is suspect, too. You'll want your update to be *kind of* deterministic, at least. This subquery just picks one random row from the result set (if there are more than one) and assigns that to the destination table. Does not seem logical; at least not to me. --- Finally you don't really need the scalar(?)
subquery, but can use the normal `UPDATE` syntax instead (I also removed the bounding-box join, which is already implied by the `ST_intersects()` join): ``` UPDATE sgclasstab_id68 dst SET att_1496 = c67.att_1115 FROM sgclasstab_id67 c67 JOIN sggeofacelist fl ON c67.att_1114 = fl.oid AND fl.oid = c67.att_1114 JOIN sggeopointlist pnt ON ST_Intersects(fl.feature, pnt.feature) WHERE dst.att_1139 = pnt.oid ; ```
"Unfavourable" execution plan with Postgres and PostGIS
[ "", "sql", "postgresql", "postgis", "" ]
Let's say I have some timestamp fields in a column that look like this "02-JUL-14 10.36.07.713000000 PM". Is there a way for me to left-click the column in that table and do a search that contains "JUL" so that I only have rows show up that contain "JUL"? I have read articles of people saying to include % and ' characters around the word, and using the LIKE ' ' syntax, but nothing has worked out for me. So in the picture below, I wish I could just type in "JUL", and only rows with the TIME\_OF\_ENTRY column that contain the letters "JUL" will appear. I do realize that typing in the filter window below is an EXACT search, but I want to know if there is a way to do a "contains" type of search. ![enter image description here](https://i.stack.imgur.com/hDE5i.png)
Paste ``` like '%JUL%' ``` in the filter field, and any rows whose value in that column contains "JUL" will show up. The <http://www.thatjeffsmith.com/archive/2011/11/sql-developer-quick-tip-filtering-your-data-grids/> link from @OldProgrammer contained the solution that I found. There may be more helpful tips in that link for more advanced filtering.
Type ``` to_char(TIME_OF_ENTRY) like '%JUL%' ``` in the filter field above.
How to filter a column in Oracle Sql Developer
[ "", "sql", "oracle-sqldeveloper", "" ]
I have the following table: ``` Object Field Values ------------------------------------ 1 1 A;A;A;B;A;A 2 1 A;B;C;C 2 2 X 3 1 X;Y;Z 3 2 V;V;V;V;V;V;V;V;V;V;V ``` How can I select from this table only the unique values from the concatenated `values`? So: ``` Object Field Values --------------------- 1 1 A;B 2 1 A;B;C 2 2 X 3 1 X;Y;Z 3 2 V ``` In any scripting language, I would loop through the values from `Values`, explode on `;` and loop through that array with some logic filtering out duplicates. However, I need to do this only using SQL (Server 2008). Can anybody tell me if and how this can be done? Any help is greatly appreciated :-)
To do this first create a split function. This is the one I use but if you search the internet (or even SO) for "SQL Server Split Function" you will find a number of alternatives if you don't like this: ``` CREATE FUNCTION [dbo].[Split](@StringToSplit NVARCHAR(MAX), @Delimiter NCHAR(1)) RETURNS TABLE AS RETURN ( SELECT ID = ROW_NUMBER() OVER(ORDER BY n.Number), Position = Number, Value = SUBSTRING(@StringToSplit, Number, CHARINDEX(@Delimiter, @StringToSplit + @Delimiter, Number) - Number) FROM ( SELECT TOP (LEN(@StringToSplit) + 1) Number = ROW_NUMBER() OVER(ORDER BY a.object_id) FROM sys.all_objects a CROSS JOIN sys.all_objects b ) n WHERE SUBSTRING(@Delimiter + @StringToSplit + @Delimiter, n.Number, 1) = @Delimiter ); ``` Then you can split your field. So running: ``` SELECT t.Object, t.Field, s.Value FROM T CROSS APPLY dbo.Split(t.[Values], ';') AS s ``` Will turn this: ``` Object Field Values ------------------------------------ 1 1 A;A;A;B;A;A ``` into: ``` Object Field Values ------------------------------------ 1 1 A 1 1 A 1 1 A 1 1 B 1 1 A 1 1 A ``` Then you can apply the `DISTINCT` operator: ``` SELECT DISTINCT t.Object, t.Field, s.Value FROM T CROSS APPLY dbo.Split(t.[Values], ';') AS s; ``` To give: ``` Object Field Values ------------------------------------ 1 1 A 1 1 B ``` Then you can concatenate your rows back into a single column giving a final query: ``` SELECT t.Object, t.Field, [Values] = STUFF(x.value('.', 'NVARCHAR(MAX)'), 1, 1, '') FROM T CROSS APPLY ( SELECT DISTINCT ';' + s.Value FROM dbo.Split(t.[Values], ';') AS s FOR XML PATH(''), TYPE ) AS s (x) ``` --- SQL Fiddle appears to be down, but once you have the Split function created the below is a full working example: ``` CREATE TABLE #T (Object INT, Field INT, [Values] VARCHAR(MAX)); INSERT #T VALUES (1, 1, 'A;A;A;B;A;A'), (2, 1, 'A;B;C;C'), (2, 2, 'X'), (3, 1, 'X;Y;Z'), (3, 2, 'V;V;V;V;V;V;V;V;V;V;V'); SELECT t.Object, t.Field, [Values] = STUFF(x.value('.', 'NVARCHAR(MAX)'), 1, 1, '') FROM
#T AS T CROSS APPLY ( SELECT DISTINCT ';' + s.Value FROM dbo.Split(t.[Values], ';') AS s FOR XML PATH(''), TYPE ) AS s (x); ``` **EDIT** Based on your comment that you can't create tables or modify the DDL, I thought I would account for the situation where you can't create a function either. You can expand the above split function out into your query, so you don't actually need to create a function: ``` CREATE TABLE #T (Object INT, Field INT, [Values] VARCHAR(MAX)); INSERT #T VALUES (1, 1, 'A;A;A;B;A;A'), (2, 1, 'A;B;C;C'), (2, 2, 'X'), (3, 1, 'X;Y;Z'), (3, 2, 'V;V;V;V;V;V;V;V;V;V;V'); SELECT t.Object, t.Field, [Values] = STUFF(x.value('.', 'NVARCHAR(MAX)'), 1, 1, '') FROM #T AS T CROSS APPLY ( SELECT DISTINCT ';' + SUBSTRING(t.[Values], Number, CHARINDEX(';', t.[Values] + ';', Number) - Number) FROM ( SELECT TOP (LEN(t.[Values]) + 1) Number = ROW_NUMBER() OVER(ORDER BY a.object_id) FROM sys.all_objects a CROSS JOIN sys.all_objects b ) n WHERE SUBSTRING(';' + t.[Values] + ';', n.Number, 1) = ';' FOR XML PATH(''), TYPE ) AS s (x); ```
Here is a standalone solution: ``` DECLARE @t table(Object int, Field int, [Values] varchar(max)) INSERT @t values (1, 1, 'A;A;A;B;A;A'), (2, 1, 'A;B;C;C'), (3, 1, 'X'), (4, 1, 'X;Y;Z'), (5, 1, 'V;V;V;V;V;V;V;V;V;V;V') SELECT t.Object, t.Field, x.[NewValues] FROM @t t CROSS APPLY ( SELECT STUFF(( SELECT distinct ';'+t.c.value('.', 'VARCHAR(2000)') value FROM ( SELECT x = CAST('<t>' + REPLACE([Values], ';', '</t><t>') + '</t>' AS XML) ) a CROSS APPLY x.nodes('/t') t(c) for xml path(''), type ).value('.', 'varchar(max)'), 1, 1, '') [NewValues] ) x ``` Result: ``` Object Field NewValues 1 1 A;B 2 1 A;B;C 3 1 X 4 1 X;Y;Z 5 1 V ``` According to @GarethD's comment this may perform slow. Test data: ``` create table #t(Object int identity(1,1), Field int, [Values] varchar(max)) INSERT #t values (1, 'A;A;A;B;A;A'),(1, 'A;B;C;C'), (1, 'X'), (1, 'X;Y;Z'),(1, 'V;V;V;V;V;V;V;V;V;V;V') insert #t select field, [values] from #t union all select field, [values] from #t union all select field, [values] from #t insert #t select field, [values] from #t union all select field, [values] from #t union all select field, [values] from #t insert #t select field, [values] from #t union all select field, [values] from #t union all select field, [values] from #t insert #t select field, [values] from #t union all select field, [values] from #t union all select field, [values] from #t insert #t select field, [values] from #t union all select field, [values] from #t union all select field, [values] from #t insert #t select field, [values] from #t union all select field, [values] from #t union all select field, [values] from #t ``` Performance testing my script: ``` SELECT t.Object, t.Field, x.[NewValues] FROM #t t CROSS APPLY ( SELECT STUFF(( SELECT distinct ';'+t.c.value('.', 'VARCHAR(2000)') value FROM ( SELECT x = CAST('<t>' + REPLACE([Values], ';', '</t><t>') + '</t>' AS XML) ) a CROSS APPLY x.nodes('/t') t(c) for xml path(''), type ).value('.', 'varchar(max)'), 1, 1, '') [NewValues] ) x ``` 
Result less than 1 sec. Performance testing Garath script (had to edit testdata to get all rows. Identical rows were considered as 1 row): ``` WITH CTE AS ( SELECT DISTINCT t.Object, t.Field, s.Value FROM #T AS T CROSS APPLY ( SELECT ID = ROW_NUMBER() OVER(ORDER BY n.Number), Position = Number, Value = SUBSTRING(t.[Values], Number, CHARINDEX(';', t.[Values] + ';', Number) - Number) FROM ( SELECT TOP (LEN(t.[Values]) + 1) Number = ROW_NUMBER() OVER(ORDER BY a.object_id) FROM sys.all_objects a CROSS JOIN sys.all_objects b ) n WHERE SUBSTRING(';' + t.[Values] + ';', n.Number, 1) = ';' ) AS s ) SELECT Object, Field, [Values] = STUFF((SELECT ';' + Value FROM CTE AS T2 WHERE T2.Object = T.Object AND T2.Field = T.Field FOR XML PATH(''), TYPE ).value('.', 'VARCHAR(MAX)'), 1, 1, '') FROM CTE AS T GROUP BY Object, Field; ``` Result 6 seconds If any row has null in values this script will also crash.
Delete duplicate values from concatenated string
[ "", "sql", "sql-server", "sql-server-2008", "" ]
If I have a table with columns: a, b, c and later I do a ALTER TABLE command to add a new column "d", is it possible to add it between a and b for example, and not at the end? I heard that the position of the columns affects performance.
It's not possible to add a column between two existing columns with an ALTER TABLE statement in SQLite. This [works as designed](http://www.sqlite.org/lang_altertable.html). > The new column is always appended to the end of the list of existing > columns. As far as I know, MySQL is the only SQL (ish) dbms that lets you [determine the placement of new columns](http://dev.mysql.com/doc/refman/5.7/en/alter-table.html). > To add a column at a specific position within a table row, use FIRST > or AFTER col\_name. The default is to add the column last. You can also > use FIRST and AFTER in CHANGE or MODIFY operations to reorder columns > within a table. But this isn't a feature I'd use regularly, so "as far as I know" isn't really very far.
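Since SQLite can only append new columns, the usual workaround is to rebuild the table in the desired column order and rename it. A minimal sketch, assuming a hypothetical table `t(a, b, c)` and a new column `d` that should sit between `a` and `b` (any indexes, triggers, views, and foreign keys on `t` would need to be recreated afterwards):

```
BEGIN TRANSACTION;
CREATE TABLE t_new (a, d, b, c);              -- new layout, d in the desired spot
INSERT INTO t_new (a, b, c) SELECT a, b, c FROM t;  -- d stays NULL for existing rows
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;
```

Wrapping the rebuild in a transaction keeps the table from disappearing if any step fails.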
With every SQL platform I've seen, the only way to do this is to drop the table and re-create it. However, I question if the position of the column affects performance... In what way would it? What operations are you doing that you think it will make a difference? --- I will also note that dropping the table and recreating it is often not a heavy lift. Making a backup of a table and restoring that table is easy on all major platforms, so scripting a backup - drop - create - restore is an easy task for a competent DBA. In fact I've done so often when users ask -- but I always find it a little silly. The most frequent reason given is that the tool of choice behaves nicer when the columns are created in a certain order. (This was also @Jarad's reason below) So this is a good lesson for tool makers: make your tool able to reorder columns (and remember it between runs) -- then everyone is happy.
Sqlite ALTER TABLE - add column between existing columns?
[ "", "sql", "performance", "sqlite", "alter-table", "" ]
My current query pulls something like this: ``` ╔════════════════════════════════════════╗ β•‘ A Monthly 123 123 123 123 123 123 β•‘ β•‘ B Quarterly 123 123 123 123 123 123 β•‘ β•‘ C SemiAnnual 123 123 123 123 123 123 β•‘ β•‘ D Annual 123 123 123 123 123 123 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• ``` The problem comes in when I don't get a row returned for a particular mode. When that happens it just drops the mode. I want it to show: ``` ╔════════════════════════════════════════╗ β•‘ A Monthly 123 123 123 123 123 123 β•‘ β•‘ B Quarterly 0 0 0 0 0 0 β•‘ β•‘ C SemiAnnual 123 123 123 123 123 123 β•‘ β•‘ D Annual 123 123 123 123 123 123 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• ``` I know there is a way to do this. Just drawing a blank. Something like pulling the modes separately and then pulling the sums? Here is my current query: ``` SELECT MODE, SUM(POLCT) AS POLCT, SUM(RDRCNT) AS RDRCNT, SUM(INCCNT) AS INCCNT, SUM(INCINS) AS INCINS, SUM(INS_AMOUNT) AS INS_AMOUNT, SUM (PREM) AS PREM FROM (SELECT CASE WHEN MODE = '12' THEN 'D Annual' WHEN MODE = '03' THEN 'B Quarterly' WHEN MODE = '06' THEN 'C SemiAnnual' WHEN MODE = '01' THEN 'A Monthly' ELSE ' ' END AS MODE, POLICY_COUNT * NEGATIVE AS POLCT, RIDER_COUNT * NEGATIVE AS RDRCNT, INCNT * NEGATIVE AS INCCNT, INS_AMOUNT * NEGATIVE * INCNT AS INCINS, INS_AMOUNT * NEGATIVE AS INS_AMOUNT, PRSC * NEGATIVE AS PREM FROM DIST_OF_ISSUES AS a LEFT OUTER JOIN DIST_OF_ISSUES_PLANS AS b ON a.PLANID = b.PLANID LEFT OUTER JOIN DIST_OF_ISSUES_TYPE_ORDER AS c ON b.TYPE = c.TYPE WHERE SUBSTRING(a.PLANID, 1, 4) NOT IN ( '1020', '2599', '1600', '1601', '2597', '2598' ) AND ( a.MONTH < 4 AND a.MONTH > 0 ) AND a.YEAR = 2014) AS A GROUP BY MODE ORDER BY MODE ```
This ended up solving the issue: ``` Select MODE, SUM(POLCT) AS POLCT, SUM(RDRCNT ) AS RDRCNT, SUM(INCCNT) AS INCCNT, SUM(INCINS ) AS INCINS, SUM(INS_AMOUNT) AS INS_AMOUNT, SUM (PREM) AS PREM FROM (SELECT CASE when mode = '12' then 'D Annual' when mode = '03' then 'B Quarterly' when mode = '06' then 'C SemiAnnual' when mode = '01' then 'A Monthly' else ' ' end as MODE, POLICY_COUNT * NEGATIVE AS POLCT, RIDER_COUNT * NEGATIVE AS RDRCNT, Incnt * NEGATIVE AS INCCNT, INS_AMOUNT * NEGATIVE * Incnt AS INCINS, INS_AMOUNT * NEGATIVE AS INS_AMOUNT, PRSC * NEGATIVE AS PREM FROM Dist_Of_Issues as a Left Outer Join Dist_Of_Issues_Plans as b on a.PlanID = b.PlanID Left Outer Join Dist_Of_Issues_Type_Order as c on b.Type = c.Type where substring(a.PlanID,1,4) not in ('1020','2599','1600','1601','2597','2598') and (a.Month < 4 and a.Month > 0) and a.Year = 2014 UNION ALL Select distinct (CASE when mode = '12' then 'D Annual' when mode = '03' then 'B Quarterly' when mode = '06' then 'C SemiAnnual' when mode = '01' then 'A Monthly' else ' ' end) as MODE, 0 as POLCT, 0 as RDRCNT, 0 as INCCNT, 0 as INCINS, 0 as INS_AMOUNT, 0 as PREM from Dist_Of_Issues) as A Group BY Mode ORDER BY Mode ```
You need to create a pseudo table with all the values you want in it: ``` SELECT * FROM ( SELECT 'D Annual' AS Mode UNION ALL SELECT 'B Quarterly' AS Mode UNION ALL SELECT 'C SemiAnnual' AS Mode UNION ALL SELECT 'A Monthly' AS Mode UNION ALL SELECT ' ' AS Mode ) AS m ``` Then you can LEFT JOIN your main query to this, meaning rows are returned even if there are no records: ``` SELECT m.MODE, SUM(a.POLCT) AS POLCT, SUM(a.RDRCNT ) AS RDRCNT, SUM(a.INCCNT) AS INCCNT, SUM(a.INCINS ) AS INCINS, SUM(a.INS_AMOUNT) AS INS_AMOUNT, SUM(a.PREM) AS PREM FROM ( SELECT 'D Annual' AS Mode UNION ALL SELECT 'B Quarterly' AS Mode UNION ALL SELECT 'C SemiAnnual' AS Mode UNION ALL SELECT 'A Monthly' AS Mode UNION ALL SELECT ' ' AS Mode ) AS m LEFT JOIN ( SELECT CASE WHEN mode = '12' THEN 'D Annual' WHEN mode = '03' THEN 'B Quarterly' WHEN mode = '06' THEN 'C SemiAnnual' WHEN mode = '01' THEN 'A Monthly' ELSE ' ' END AS MODE, POLICY_COUNT * NEGATIVE AS POLCT, RIDER_COUNT * NEGATIVE AS RDRCNT, Incnt * NEGATIVE AS INCCNT, INS_AMOUNT * NEGATIVE * Incnt AS INCINS, INS_AMOUNT * NEGATIVE AS INS_AMOUNT, PRSC * NEGATIVE AS PREM FROM Dist_Of_Issues AS a LEFT OUTER JOIN Dist_Of_Issues_Plans AS b ON a.PlanID = b.PlanID LEFT OUTER JOIN Dist_Of_Issues_Type_Order AS c ON b.Type = c.Type WHERE SUBSTRING(a.PlanID,1,4) NOT IN ('1020','2599','1600','1601','2597','2598') AND (a.Month < 4 AND a.Month > 0) AND a.Year = 2014 ) AS a ON a.Mode = m.Mode GROUP BY m.Mode ORDER BY m.Mode; ```
How to pull summary data even when there are no rows returned?
[ "", "sql", "sum", "" ]
I have two table. 1st table => member {member\_id, name, active} 2nd table => savings {savings\_id, member\_id, month, year, amount, type, paid} Member Table ``` +-----------+--------+--------+ | member_id | name | active | +-----------+--------+--------+ | 105 | Andri | 1 | | 106 | Steve | 1 | | 110 | Soraya | 1 | | 111 | Eva | 1 | | 112 | Sonia | 1 | +-----------+--------+--------+ ``` Savings Table ``` +------------+-----------+-------+------+--------+------+------+ | savings_id | member_id | month | year | amount | type | paid | +------------+-----------+-------+------+--------+------+------+ | 1 | 120 | NULL | NULL | 150000 | 1 | 1 | | 14 | 105 | 7 | 2014 | 80000 | 2 | 1 | | 15 | 105 | 7 | 2014 | 25000 | 3 | 1 | | 16 | 105 | 7 | 2014 | 60000 | 4 | 1 | | 17 | 105 | 7 | 2014 | 100000 | 5 | 1 | | 18 | 106 | 7 | 2014 | 80000 | 2 | 1 | | 19 | 106 | 7 | 2014 | 25000 | 3 | 1 | | 20 | 106 | 7 | 2014 | 60000 | 4 | 1 | | 21 | 106 | 7 | 2014 | 100000 | 5 | 1 | | 31 | 110 | 7 | 2014 | 25000 | 3 | 1 | | 32 | 110 | 7 | 2014 | 60000 | 4 | 1 | | 33 | 110 | 7 | 2014 | 100000 | 5 | 1 | | 34 | 111 | 7 | 2014 | 80000 | 2 | 1 | | 35 | 111 | 7 | 2014 | 25000 | 3 | 1 | | 36 | 111 | 7 | 2014 | 60000 | 4 | 1 | | 37 | 111 | 7 | 2014 | 100000 | 5 | 1 | | 38 | 112 | 7 | 2014 | 80000 | 2 | 1 | | 39 | 112 | 7 | 2014 | 25000 | 3 | 1 | | 40 | 112 | 7 | 2014 | 60000 | 4 | 1 | | 41 | 112 | 7 | 2014 | 100000 | 5 | 1 | | 85 | 105 | 7 | 2015 | 80000 | 2 | 1 | | 86 | 105 | 7 | 2015 | 25000 | 3 | 1 | | 87 | 105 | 7 | 2015 | 60000 | 4 | 1 | | 88 | 105 | 7 | 2015 | 100000 | 5 | 1 | | 89 | 106 | 7 | 2015 | 80000 | 2 | | | 90 | 106 | 7 | 2015 | 25000 | 3 | | | 91 | 106 | 7 | 2015 | 60000 | 4 | | | 92 | 106 | 7 | 2015 | 100000 | 5 | | | 101 | 110 | 7 | 2015 | 80000 | 2 | | | 102 | 110 | 7 | 2015 | 25000 | 3 | | | 103 | 110 | 7 | 2015 | 60000 | 4 | | | 104 | 110 | 7 | 2015 | 100000 | 5 | | | 105 | 111 | 7 | 2015 | 80000 | 2 | 1 | | 106 | 111 | 7 | 2015 | 25000 | 3 | 1 | | 107 | 111 | 7 | 2015 | 60000 | 4 
| 1 | | 108 | 111 | 7 | 2015 | 100000 | 5 | 1 | | 109 | 112 | 7 | 2015 | 80000 | 2 | | | 110 | 112 | 7 | 2015 | 25000 | 3 | | | 111 | 112 | 7 | 2015 | 60000 | 4 | | | 144 | 110 | 7 | 2014 | 50000 | 1 | 1 | +------------+-----------+-------+------+--------+------+------+ ``` When members make a savings deposit, they can choose from 5 types of savings. What I want to do is make a list of members and all of their savings. This is my MySQL query ``` SELECT m.member_id, name, SUM(s1.amount) as savings1, SUM(s2.amount) as savings2, SUM(s3.amount) as savings3, SUM(s4.amount) as savings4, SUM(s5.amount) as savings5 FROM members m LEFT JOIN savings s1 ON s1.member_id = m.member_id AND s1.type = 1 AND s1.paid = 1 LEFT JOIN savings s2 ON s2.member_id = m.member_id AND s2.type = 2 AND s2.paid = 1 LEFT JOIN savings s3 ON s3.member_id = m.member_id AND s3.type = 3 AND s3.paid = 1 LEFT JOIN savings s4 ON s4.member_id = m.member_id AND s4.type = 4 AND s4.paid = 1 LEFT JOIN savings s5 ON s5.member_id = m.member_id AND s5.type = 5 AND s5.paid = 1 WHERE active = 1 GROUP BY m.member_id ``` This is the output ``` +-----------+--------+----------+----------+----------+----------+----------+ | member_id | name | savings1 | savings2 | savings3 | savings4 | savings5 | +-----------+--------+----------+----------+----------+----------+----------+ | 105 | Andri | NULL | 1280000 | 400000 | 960000 | 1600000 | | 106 | Steve | NULL | 80000 | 25000 | 60000 | 100000 | | 110 | Soraya | 50000 | NULL | 25000 | 60000 | 100000 | | 111 | Eva | NULL | 1280000 | 400000 | 960000 | 1600000 | | 112 | Sonia | NULL | 80000 | 25000 | 60000 | 100000 | +-----------+--------+----------+----------+----------+----------+----------+ ``` As you can see, the calculation is not right; for example, savings2 for member 105 should be 160K. Any suggestion what the query should be for this case? <http://sqlfiddle.com/#!2/9eca9/1>
The problem is you're forming the full join product before summation. So if there is more than one row in more than one of your joined `savings` aliases, you will end up with duplication. If you [perform the join without summation](http://sqlfiddle.com/#!2/9eca9/72), you can clearly see what is going on. There are two ways around this. 1. Do all summations in derived tables: ``` SELECT m.member_id, name, s1.amount as savings1, s2.amount as savings2, ... FROM members m LEFT JOIN ( select SUM(amount) as amount, member_id from savings where type = 1 and paid = 1 group by member_id ) s1 ON s1.member_id = m.member_id LEFT JOIN ( select SUM(amount) as amount, member_id from savings where type = 2 and paid = 1 group by member_id ) s2 ON s2.member_id = m.member_id ... WHERE active = 1 GROUP BY m.member_id ``` 2. Join once and use a conditional sum: ``` SELECT m.member_id, name, SUM(CASE WHEN s.type = 1 then s.amount ELSE NULL END) as savings1, SUM(CASE WHEN s.type = 2 then s.amount ELSE NULL END) as savings2, ... FROM members m LEFT JOIN savings s ON s.member_id = m.member_id AND s.paid = 1 WHERE active = 1 GROUP BY m.member_id ```
You probably do not need multiple left joins; it could be done as: ``` SELECT m.member_id, m.name, SUM(case when s.type= 1 then s.amount end) as savings1, SUM(case when s.type= 2 then s.amount end) as savings2, SUM(case when s.type= 3 then s.amount end) as savings3, SUM(case when s.type= 4 then s.amount end) as savings4, SUM(case when s.type= 5 then s.amount end) as savings5 FROM savings s join members m on m.member_id = s.member_id WHERE m.active = 1 GROUP BY m.member_id ```
Multiple Left Join on same table
[ "", "mysql", "sql", "left-join", "" ]
SQL Query: **Aim:** > Write a SQL query to retrieve the address of the highest-rated Phills > Coffee in United States addressing format. ``` id name house street city state zip country rating 1 Best Buy 34 Main St Carson CA 98064 USA 9 2 Phills Coffee 4568 Sepulveda Blvd Torrance CA 50833 USA 6 3 Starbucks 3 Ocean Blvd Long Beach WA 45093 USA 9 4 Phills Coffee 214 Carson St Huntington Beach PA 89435 USA 4 ``` **US Addressing Format (For people outside USA):** ``` http://bitboost.com/ref/international-address-formats/united_states/ ``` **My attempt:** ``` SELECT house, street, city, state,country,zip FROM table WHERE name="Phills Coffee" ORDER BY rating DESC LIMIT 1 ``` Am I doing it wrong? Or how can I improve this query? Thanks,
US address format would be like: 4568 Sepulveda Blvd, Torrance, CA 50833 USA. So your select would look like: ``` SELECT CONCAT(house, ' ', street, ', ', city, ', ', state, ' ', zip, ' ', country) FROM table WHERE name="Phills Coffee" ORDER BY rating DESC LIMIT 1 ```
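If the MySQL version at hand supports it, `CONCAT_WS` keeps the separator handling a little tidier (a sketch only; the table name `shops` is a placeholder for the actual table, which the question left unnamed):

```
SELECT CONCAT_WS(', ',
         CONCAT(house, ' ', street),
         city,
         CONCAT(state, ' ', zip, ' ', country)) AS address
FROM shops
WHERE name = 'Phills Coffee'
ORDER BY rating DESC
LIMIT 1;
```

`CONCAT_WS` inserts the separator only between non-NULL arguments, so a missing address part does not leave a dangling comma.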
You've been asked to retrieve the address, and in United States addressing format. So use that: > SELECT CONCAT(street, ' ', country, ',') ... I don't know what the US addressing format is, but use CONCAT to get it done. Your WHERE condition and ORDER BY are OK
SQL Query to find highest rated
[ "", "mysql", "sql", "" ]
Here is a sample database ``` First_name country Andy US Manu India Paul Pakistan Ramesh Pakistan Rich India ``` So, what I want is to select all records from the above table and display them in a custom country order. For example: I want the names of people whose country is India to be displayed first, then US, then Pakistan. How can I perform this task in a single SQL query? **Update** I don't know how many countries there are. The country that needs to be displayed first will be input by the user.
Use a CASE statement to give each record a sortkey. 0 for a country match, 1 for a mismatch, so the desired country comes first. ``` select * from mytable order by case when country = @country then 0 else 1 end, first_name ```
Maybe something like this ``` Select * From Table1 Order By CASE WHEN country = 'INDIA' THEN 0 WHEN country = 'US' THEN 1 Else 2 END; ``` Or you can use **[FIELD](http://www.w3resource.com/mysql/string-functions/mysql-field-function.php)** ``` Select * From Table1 Order By FIELD(country, 'India', 'US', 'Pakistan') ; ```
How to sort based on keyword first?
[ "", "mysql", "sql", "sorting", "select", "sql-order-by", "" ]