First I need to point out that I read [Database columns type prefix](https://stackoverflow.com/questions/995512/database-columns-type-prefix) but it's not the same issue.

A long time ago someone I worked with told me that in a project I took part in, all columns needed to have a unique prefix. For example, for the `users` table I use the prefix `u_`, so all columns are named `u_id`, `u_name` and so on. The same goes for all other tables; for example, for `products` it will be the `p_` prefix. The reason for that was easier SQL JOINs - all columns will have unique names if 2 or more tables are joined. To be honest I've used this suggestion so far, but in fact I don't know if it is used by many of you or if it's really so useful. What's your opinion on that? Do you use such column naming, or maybe you think this is completely unnecessary and a waste of time? (When displaying data you need to use the prefixes unless you remove them using a function or a foreach.)

**EDIT**

Just in case, more explanation. Assume we have a `users` table with fields `id`, `name` and an `address` table with fields `id`, `name`, `user_id`. If this method is used and we want to get all fields, we can do:

```
SELECT *
FROM users u
LEFT JOIN address a ON u.u_id = a.a_user_id
```

And in case we don't use prefixes for columns, we should use:

```
SELECT u.id AS `u_id`, u.name AS `u_name`, a.id AS `a_id`, a.name AS `a_name`, a.user_id
FROM users u
LEFT JOIN address a ON u.id = a.user_id
```

assuming of course we want to access columns by name and not by numeric indexes 0, 1 and so on (for example in PHP).

**EDIT2**

It seems that I haven't explained the problem well enough - in MySQL, of course, both cases work just fine and that's not the problem. The problem is when I want to use the data in a programming language, for example PHP.
If I use:

```
<?php
$db = new mysqli('localhost','root','','test_prefix');
$result = $db->query("SELECT * FROM `user` u LEFT JOIN `address` a ON u.id = a.user_id");
while ($data = $result->fetch_array()) {
    var_dump($data);
}
```

I get:

> array(8) { [0]=> string(1) "1" ["id"]=> string(1) "1" [1]=> string(5) "Frank" ["name"]=> string(3) "USA" [2]=> string(1) "1" [3]=> string(3) "USA" [4]=> string(1) "1" ["user\_id"]=> string(1) "1" }
> array(8) { [0]=> string(1) "2" ["id"]=> string(1) "2" [1]=> string(4) "John" ["name"]=> string(6) "Canada" [2]=> string(1) "2" [3]=> string(6) "Canada" [4]=> string(1) "2" ["user\_id"]=> string(1) "2" }

Whereas the result in PhpMyAdmin for that query looks like this:

![PhpMyAdmin result](https://i.stack.imgur.com/avevw.jpg)

In PHP I get all the data, but I can only access it reliably using numerical indexes: `$data[0]`, `$data[1]`, and that's not very convenient. I cannot get the user name, because `$data['name']` holds only the address name; the same happens with `id`. If I used either of the two - aliases for columns or prefixes for columns - I would be able to use string indexes to access the data, for example `$data['user_name']` for the user name and `$data['address_name']` for the address name.
I believe this is stupid. You actually end up prefixing all columns in all queries with their table names (or "identifiers"), even where there is no ambiguity.

If you compare:

```
SELECT t1_col1, t2_col1 FROM t1, t2;
```

... with:

```
SELECT t1.col1, t2.col1 FROM t1, t2;
```

... then the recommendation may appear sensible. Now compare:

```
SELECT t3_col3, t4_col4 FROM t3, t4;
```

... with:

```
SELECT col3, col4 FROM t3, t4;
-- assuming col3 exists only in t3, and col4 only in t4
```

Now where is the benefit? One can still argue that a one-or-two letter prefix is still preferable to a long table name:

```
SELECT t1_col1, t2_col1 FROM very_long_table_name1, very_long_table_name2;
```

But why bother with a prefix when you can do:

```
SELECT t1.col1, t2.col1 FROM very_long_table_name1 AS t1, very_long_table_name2 AS t2;
```

---

Actually, there could be cases where the prefix might come in handy (*handy* does not mean *recommended* in my mind). For example, some drivers (I'm thinking old PHP) may get confused when multiple columns of a result set have the same name (because they return rows as an array indexed by column name). The problem can still be worked around by aliasing the columns in the result set.
I think it's unnecessary, BUT in the name of consistency on an existing project you should maintain it or refactor the whole database.

Why is it unnecessary? Well, take a look at the following query, which illustrates how you can get whatever you want out of the database. I also think it's more readable, and the aliasing works fine. In the cases where your column names collide (which doesn't work that well with some drivers) you can use the AS keyword to get that specific field, because you can JOIN the same table twice, which gives you the exact same problem anyway even when you use the prefixes.

```
SELECT
    `m`.*,
    `u1`.`username` AS `sender`,
    `u2`.`username` AS `receiver`
FROM `messages` `m`
INNER JOIN `users` `u1` ON `m`.`sender` = `u1`.`id`
INNER JOIN `users` `u2` ON `m`.`receiver` = `u2`.`id`
```
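The driver confusion both answers mention is easy to reproduce outside PHP. Below is a minimal sketch in Python's `sqlite3` (hypothetical `users`/`address` tables mirroring the question) showing how a `SELECT *` over a join yields duplicate column names, and how per-column aliases restore name-based access:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users   (id INTEGER, name TEXT);
    CREATE TABLE address (id INTEGER, name TEXT, user_id INTEGER);
    INSERT INTO users   VALUES (1, 'Frank');
    INSERT INTO address VALUES (1, 'USA', 1);
""")
conn.row_factory = sqlite3.Row

# Without aliases, both tables contribute an 'id' and a 'name' column,
# so a name-based lookup cannot distinguish between them.
row = conn.execute(
    "SELECT * FROM users u LEFT JOIN address a ON u.id = a.user_id"
).fetchone()
print(row.keys())  # contains 'id' and 'name' twice each

# Aliasing every column removes the ambiguity.
row = conn.execute("""
    SELECT u.id AS u_id, u.name AS u_name,
           a.id AS a_id, a.name AS a_name
    FROM users u LEFT JOIN address a ON u.id = a.user_id
""").fetchone()
print(row["u_name"], row["a_name"])  # Frank USA
```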
Column prefixes in tables
[ "", "mysql", "sql", "database", "database-schema", "prefix", "" ]
I have data similar to the following:

```
Date       ID         Amount
10-Jun-14  978500302  163005350
17-Jun-14  978500302  159947117
24-Jun-14  978500302  159142342
1-Jul-14   978500302  159623201
8-Jul-14   978500302  143066033
14-Jul-14  978500302  145852027
15-Jul-14  978500302  148595751
```

Is there a way in Oracle that I can get an average of this data which excludes the highest and lowest values? I can get the overall average by doing a `GROUP BY ID` and then `AVG(Amount)`. But how can I do this while excluding min and max?
The easiest way is to use analytic functions to get the minimum and maximum values before aggregating:

```
select id, avg(amount)
from (select d.*,
             min(amount) over (partition by id) as mina,
             max(amount) over (partition by id) as maxa
      from data d
     ) d
where amount > mina and amount < maxa
group by id;
```
```
select id, avg(amount)
from tbl t
where id not in (select id from tbl group by id having count(distinct amount) > 2)
   or (amount <> (select min(x.amount) from tbl x where x.id = t.id)
       and amount <> (select max(x.amount) from tbl x where x.id = t.id))
group by id
```

The first line in the WHERE clause is there to retain IDs that do not have more than 2 values. They would otherwise be excluded from the results. If you would rather they be excluded, you can get rid of that line.

You could also try the following:

```
select id, avg(amount)
from (select id, amount from tbl
      minus
      select id, min(amount) from tbl group by id
      minus
      select id, max(amount) from tbl group by id)
group by id
```
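The exclude-the-extremes average is easy to sanity-check outside the database. A small Python sketch (using the question's sample data, mirroring the `amount > mina AND amount < maxa` filter of the analytic-function answer) computes the per-ID average with every row tied to the minimum or maximum removed:

```python
from collections import defaultdict

rows = [
    ("978500302", 163005350),
    ("978500302", 159947117),
    ("978500302", 159142342),
    ("978500302", 159623201),
    ("978500302", 143066033),
    ("978500302", 145852027),
    ("978500302", 148595751),
]

def trimmed_averages(rows):
    """Average per ID, excluding every row equal to that ID's min or max,
    exactly like the WHERE amount > mina AND amount < maxa filter."""
    by_id = defaultdict(list)
    for id_, amount in rows:
        by_id[id_].append(amount)
    result = {}
    for id_, amounts in by_id.items():
        lo, hi = min(amounts), max(amounts)
        kept = [a for a in amounts if lo < a < hi]
        if kept:  # IDs with fewer than 3 distinct values drop out, as in the SQL
            result[id_] = sum(kept) / len(kept)
    return result

print(trimmed_averages(rows))
```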
Group by and exclude minimum and maximum from results
[ "", "sql", "database", "oracle", "select", "" ]
Looking for some advice on how to tackle the following select query with EF.

*Note: I just need help with the SELECT via EF; the create table/insert stuff is in there just as a sample of my table structure.*

```
CREATE TABLE SampleTable(Foo INT, Bar VARCHAR(50))

INSERT SampleTable VALUES (1,'GoodData'), (2, 'BetterData'), (2, 'Whatever'), (10, 'GoodData')

SELECT *
FROM SampleTable st
WHERE (Foo = 2 AND Bar = 'BetterData')
   OR (Foo = 1 AND Bar = 'GoodData')
--There could be a 1000 of these 'OR' lines

DROP TABLE SampleTable
```
Here are a couple of extension methods to help:

```
public static class PredicateBuilder
{
    public static Expression<Func<T, bool>> True<T>() { return f => true; }
    public static Expression<Func<T, bool>> False<T>() { return f => false; }

    public static Expression<Func<T, bool>> Or<T>(this Expression<Func<T, bool>> expr1,
                                                  Expression<Func<T, bool>> expr2)
    {
        var invokedExpr = Expression.Invoke(expr2, expr1.Parameters.Cast<Expression>());
        return Expression.Lambda<Func<T, bool>>(Expression.OrElse(expr1.Body, invokedExpr), expr1.Parameters);
    }

    public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> expr1,
                                                   Expression<Func<T, bool>> expr2)
    {
        var invokedExpr = Expression.Invoke(expr2, expr1.Parameters.Cast<Expression>());
        return Expression.Lambda<Func<T, bool>>(Expression.AndAlso(expr1.Body, invokedExpr), expr1.Parameters);
    }
}
```

Then to use it for your dynamic query needs (note that an OR chain has to start from the `False` predicate; starting from `True` would match every row):

```
Expression<Func<SampleTableObject, bool>> predicate = PredicateBuilder.False<SampleTableObject>();

foreach (var item in conditionsList)
{
    predicate = predicate.Or(x => x.Foo == item.Foo && x.Bar == item.Bar);
}

var data = EFContext.SampleTableObjects.Where(predicate);
```
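The composition pattern behind PredicateBuilder is not EF-specific. As a rough illustration only - plain Python functions instead of expression trees, using the question's hypothetical Foo/Bar pairs - the same start-from-false, OR-each-condition loop looks like this:

```python
def false_pred(_):
    """The neutral element for OR: matches nothing."""
    return False

def or_(p1, p2):
    """Combine two predicates with short-circuit OR."""
    return lambda x: p1(x) or p2(x)

conditions = [(2, "BetterData"), (1, "GoodData")]  # (Foo, Bar) pairs

predicate = false_pred
for foo, bar in conditions:
    # Default args freeze foo/bar per iteration (avoids late-binding bugs).
    predicate = or_(predicate, lambda x, f=foo, b=bar: x["Foo"] == f and x["Bar"] == b)

rows = [
    {"Foo": 1, "Bar": "GoodData"},
    {"Foo": 2, "Bar": "BetterData"},
    {"Foo": 2, "Bar": "Whatever"},
    {"Foo": 10, "Bar": "GoodData"},
]
matched = [r for r in rows if predicate(r)]
print(matched)  # the (1, GoodData) and (2, BetterData) rows
```

The default-argument capture inside the loop is the Python analogue of the expression-tree invocation in the C# version: each added condition must hold its own copy of the values, not a reference to the loop variable.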
There is another namespace in LINQ named **System.Linq.Dynamic**. It is very useful if you want to build your query dynamically in your code at runtime. Have a look at it!
How to do this dynamic where clause in EF
[ "", "sql", "entity-framework", "" ]
I am using SQL Server 2012 to write a procedure and I am receiving this error when trying to run this select statement.

> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

I think it has something to do with the `Select` statement in the `where` clause, but I'm not too sure. The procedure:

```
select top 100 Team.ID, Team.Name, sum(Results.Points) as PointsTotal
from Results
inner join Team on Results.TeamID = Team.ID
where Results.TeamID = Team.ID
and Results.Date > DATEADD(YY, -1, GETDATE())
and Results.Date <> (Select Min(Date)
                     from Results
                     Inner join Competition on Competition.ID = Results.CompetitionID
                     Inner join Team on Team.ID = Results.TeamID
                     where CompetitionID = 3
                     and TeamID = Team.ID
                     and Date > (DATEADD(YY, -1, GETDATE()))
                     group by Team.Name
                     Having Count(Competition.ID) > 1)
group by Team.ID, Team.Name
order by PointsTotal desc
```
Since your WHERE clause is grouping by Team.Name and selecting the MIN(Date), you are going to return the minimum date for each team (based on the other criteria too). If you are using a <> operator you can only have one value on the other side of that operator; if you need more than one value, use the NOT IN operator instead.

If you need the individual teams to match up with the minimum date, I would LEFT JOIN to a subquery on the TeamId and minimum date, and then in the WHERE clause I would make sure that one of the fields you are joining on IS NULL (which indicates that the join condition did not match the sub-query, and that filters out any records with that team and the minimum date). The query would look something like this:

```
select top 100 Team.ID, Team.Name, sum(Results.Points) as PointsTotal
from Results
inner join Team on Results.TeamID = Team.ID
left join (
    Select Team.Id, Min(Date) AS MinDate
    from Results
    Inner join Competition on Competition.ID = Results.CompetitionID
    Inner join Team on Team.ID = Results.TeamID
    where CompetitionID = 3
    and TeamID = Team.ID
    and Date > (DATEADD(YY, -1, GETDATE()))
    group by Team.ID
    Having Count(Competition.ID) > 1
) MinimumDateQuery
    ON Results.TeamID = MinimumDateQuery.Id
   AND Results.Date = MinimumDateQuery.MinDate
where Results.Date > DATEADD(YY, -1, GETDATE())
and MinimumDateQuery.Id IS NULL
group by Team.ID, Team.Name
order by PointsTotal desc
```
If the subquery may return more than one result, you should replace the `<>` operator with `not in`.
Select Statement in Where Clause Returning More Than 1 Value
[ "", "sql", "sql-server", "select", "sql-server-2012", "" ]
I have a problem and 2 solutions for it. I have to write a procedure which will return rows falling into required categories. The required categories will be passed as boolean (or bit) values.

The first solution is:

```
DECLARE @IsRowType1Req bit;
DECLARE @IsRowType2Req bit;

DECLARE @tbl1 table (ID int, RowType varchar(50));
DECLARE @tmpTbl table (ID int, RowType varchar(50));

IF @IsRowType1Req = 1
    INSERT INTO @tmpTbl SELECT * FROM @tbl1 WHERE RowType = 'RowType1';

IF @IsRowType2Req = 1
    INSERT INTO @tmpTbl SELECT * FROM @tbl1 WHERE RowType = 'RowType2';

SELECT * FROM @tmpTbl;
```

Now, this solution uses an IF clause to select only the required types of rows. The other one is:

```
DECLARE @IsRowType1Req bit;
DECLARE @IsRowType2Req bit;

DECLARE @tbl1 table (ID int, RowType varchar(50));

SELECT * FROM @tbl1 WHERE RowType = 'RowType1' AND @IsRowType1Req = 1
UNION
SELECT * FROM @tbl1 WHERE RowType = 'RowType2' AND @IsRowType2Req = 1
```

This solution uses the WHERE clause to restrict unwanted types of rows. Can anybody please tell me which one will be faster and why?
The answer would likely depend on the size of the table as well as the size of each subset returned.

The IF solution has to insert rows into another table, which doesn't come cheap. The WHERE solution, on the other hand, uses UNION, which implies sorting to remove duplicate entries. If you replaced UNION with UNION ALL (which you could safely do because your two subsets would never have duplicates), the WHERE solution would become the better one of the two hands down.

However, since rows in `@tbl1` cannot be of more than one type, you could solve the problem differently. You could match `@tbl1` against a dynamically built table consisting of 0, 1 or 2 rows depending on the values of `@IsRowType1Req` and `@IsRowType2Req`. You would build that table like this:

```
SELECT 'RowType1' WHERE @IsRowType1Req = 1
UNION ALL
SELECT 'RowType2' WHERE @IsRowType2Req = 1
```

and then join it to `@tbl1`:

```
SELECT *
FROM @tbl1 AS t
INNER JOIN (
    SELECT 'RowType1' WHERE @IsRowType1Req = 1
    UNION ALL
    SELECT 'RowType2' WHERE @IsRowType2Req = 1
) AS f (RowType)
ON t.RowType = f.RowType;
```

In a way, this might mean that WHERE still "wins", but you could rewrite the virtual table without using WHERE:

```
SELECT CASE @IsRowType1Req WHEN 1 THEN 'RowType1' END
UNION ALL
SELECT CASE @IsRowType2Req WHEN 1 THEN 'RowType2' END
```

or like this, using the VALUES constructor introduced in SQL Server 2008:

```
VALUES (CASE @IsRowType1Req WHEN 1 THEN 'RowType1' END),
       (CASE @IsRowType2Req WHEN 1 THEN 'RowType2' END)
```

That way the table would always consist of 2 rows, each row containing either a requested type or NULL. The result of the join with that table would still be the same and match your desired result.
With the information you have given there seems to be no need to use a `UNION` or an `IF`:

```
SELECT *
FROM @tbl1
WHERE (RowType = 'RowType2' AND @IsRowType2Req = 1)
   OR (RowType = 'RowType1' AND @IsRowType1Req = 1);
```
Checking condition in IF vs WHERE clause
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I have a derived table with a list of relative seconds to a foreign key (ID):

```
CREATE TABLE Times (
    ID INT
    , TimeFrom INT
    , TimeTo INT
);
```

The table contains mostly non-overlapping data, but there are occasions where I have a TimeTo < TimeFrom of another record:

```
+----+----------+--------+
| ID | TimeFrom | TimeTo |
+----+----------+--------+
| 10 | 10       | 30     |
| 10 | 50       | 70     |
| 10 | 60       | 150    |
| 10 | 75       | 150    |
| .. | ...      | ...    |
+----+----------+--------+
```

The result set is meant to be a flattened linear idle report, but with too many of these overlaps, I end up with negative time in use. I.e. if the window above for `ID = 10` was 150 seconds long, and I summed the differences of relative seconds to subtract from the window size, I'd wind up with `150-(20+20+90+75)=-55`. This approach I've tried, and it is what led me to realizing there were overlaps that needed to be flattened.

So, what I'm looking for is a solution to flatten the overlaps into one set of times:

```
+----+----------+--------+
| ID | TimeFrom | TimeTo |
+----+----------+--------+
| 10 | 10       | 30     |
| 10 | 50       | 150    |
| .. | ...      | ...    |
+----+----------+--------+
```

Considerations: performance is very important here, as this is part of a larger query that performs well on its own, and I'd rather not impact its performance much if I can help it. On a comment regarding "which seconds have an interval": this is something I have tried for the end result, and I am looking for something with better performance.
Adapted to my example:

```
SELECT SUM(C.N)
FROM (
    SELECT A.N, ROW_NUMBER() OVER (ORDER BY A.N) RowID
    FROM (SELECT TOP 60 1 N FROM master..spt_values) A
       , (SELECT TOP 720 1 N FROM master..spt_values) B
) C
WHERE EXISTS (
    SELECT 1
    FROM Times SE
    WHERE SE.ID = 10
    AND SE.TimeFrom <= C.RowID
    AND SE.TimeTo >= C.RowID
    AND EXISTS (
        SELECT 1
        FROM Times2 D
        WHERE ID = SE.ID
        AND D.TimeFrom <= C.RowID
        AND D.TimeTo >= C.RowID
    )
    GROUP BY SE.ID
)
```

The problem I have with this solution is that I get a Row Count Spool out of the EXISTS query in the query plan, with a number of executions equal to COUNT(C.\*). I left the real numbers in that query to illustrate that getting around this approach is for the best: even with the Row Count Spool reducing the cost of the query by quite a bit, its execution count increases the cost of the query as a whole by quite a bit as well.

Further edit: the end goal is to put this in a procedure, so table variables and temp tables are also possible tools to use.
Left join each row to its successor overlapping row on the same ID value (where such exist). Now for each row in the result-set of `LHS left join RHS` the contribution to the elapsed time for the ID is: `isnull(RHS.TimeFrom,LHS.TimeTo) - LHS.TimeFrom as TimeElapsed` Summing these by ID should give you the correct answer. Note that: - where there isn't an overlapping successor row the calculation is simply `LHS.TimeTo - LHS.TimeFrom` - where there is an overlapping successor row the calculation will net to `(RHS.TimeFrom - LHS.TimeFrom) + (RHS.TimeTo - RHS.TimeFrom)` which simplifies to `RHS.TimeTo - LHS.TimeFrom`
OK. I'm still trying to do this with just one `SELECT`, but this totally works:

```
DECLARE @tmp TABLE (ID INT, GroupId INT, TimeFrom INT, TimeTo INT)

INSERT INTO @tmp
SELECT ID, 0, TimeFrom, TimeTo
FROM Times
ORDER BY Id, TimeFrom

DECLARE @timeTo int, @id int, @groupId int
SET @groupId = 0

UPDATE @tmp
SET @groupId = CASE WHEN id != @id THEN 0
                    WHEN TimeFrom > @timeTo THEN @groupId + 1
                    ELSE @groupId
               END,
    GroupId = @groupId,
    @timeTo = TimeTo,
    @id = id

SELECT Id, MIN(TimeFrom), Max(TimeTo)
FROM @tmp
GROUP BY ID, GroupId
ORDER BY ID
```
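All three answers are variations on the classic gaps-and-islands problem: sort by start time, then extend the current island while the next row overlaps it. A plain Python sketch of that sweep (using the sample rows from the question) can help when checking a SQL version against known output:

```python
def flatten(intervals):
    """Merge overlapping [from, to] intervals per ID; input is (id, t_from, t_to)."""
    merged = []
    for id_, t_from, t_to in sorted(intervals):
        if merged and merged[-1][0] == id_ and t_from <= merged[-1][2]:
            # Overlaps the current island: extend it.
            merged[-1][2] = max(merged[-1][2], t_to)
        else:
            # Gap (or new ID): start a new island.
            merged.append([id_, t_from, t_to])
    return [tuple(m) for m in merged]

rows = [(10, 10, 30), (10, 50, 70), (10, 60, 150), (10, 75, 150)]
print(flatten(rows))  # [(10, 10, 30), (10, 50, 150)]
```

Summing `t_to - t_from` over the flattened islands then gives the non-negative busy time the question is after.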
How to consolidate blocks of time?
[ "", "sql", "sql-server", "t-sql", "" ]
![enter image description here](https://i.stack.imgur.com/5U2sN.png)

I have 2 tables with the above fields. I have created the below query:

```
SELECT DISTINCT H.HolidayId, H.HolidayDate, H.Description, CH.ClientId,
       case when CH.HolidayId is not null then 1 else 0 end as IsHoliday
FROM Holiday H
LEFT JOIN ClientHolidays CH ON h.HolidayId = CH.HolidayId
```

![enter image description here](https://i.stack.imgur.com/vLXWE.png)

But I need to list all holidays with an IsHoliday field: if the client has added the date as a holiday, IsHoliday should come back as 1, else 0. And when querying with a ClientId, it should list all holidays for that client. I need the result as below:

![enter image description here](https://i.stack.imgur.com/rG68O.png)

While passing a client id I need to get all holidays, with the IsHoliday field coming back as 0 or 1.
I used your queries, and only after 2 days did I find a bug in them. Then I found a solution for that. Below is the solution:

```
SELECT h.HolidayId, HolidayDate, Description,
       CASE WHEN ClientID IS NULL THEN 0
            WHEN ClientID != @ClientId THEN 0
            ELSE 1
       END AS IsHoliday
FROM HOLIDAY h
LEFT OUTER JOIN CLIENTHOLIDAYS ch ON ch.HolidayId = h.HolidayId
WHERE ClientId = @ClientId OR ClientID is null or ClientID is not null
```

Thank you
Try below query, it will work as per the requirement :) ``` SELECT h.HolidayId, HolidayDate, CASE WHEN ClientID IS NULL THEN 0 ELSE 1 END AS IsHoliday FROM HOLIDAY h LEFT OUTER JOIN CLIENTHOLIDAYS ch ON ch.HolidayId = h.HolidayId WHERE ClientId = 1 OR ClientID is null ```
select table using complicated query
[ "", "sql", "sql-server", "" ]
Seeing slowness in insert/select after truncating a table that has over 6 million rows.

I daily insert 5 to 6 million records into a table, and I was able to insert/select data without any issue for some 7 or 8 days, but when the table size went above 10 GB / 30 million rows there were a few timeout issues. So I thought of truncating the table daily before uploading data, since 1 day's data is enough for me.

Now I am seeing extreme slowness in insert/select until I rebuild the index in the middle of the upload. Over the last 2 days it took 5 hours to insert 1 million rows; after an index rebuild the remaining 3.5 to 4 million rows went into the table in less than 15 minutes. I'd prefer not to rebuild the index in the middle of my upload process. I don't do a .NET bulk insert; I insert rows in batches using a stored proc, since I do some validation. I am using SQL Server 2008.
After enabling auto stats update, everything went back to normal. From now on, after truncating big tables I will make sure the table stats get updated if the DB's auto stats update is disabled. Thanks everyone for all your valuable suggestions.
Make sure you update your statistics on all your indexes... and it probably wouldn't hurt to shrink your log file. There are articles on how to do both all over the internet. But that's my best guess at what's going on.
SQL Server - insert/select slow after TRUNCATE TABLE has 6 over million rows
[ "", "sql", "sql-server", "performance", "sql-server-2008", "truncate", "" ]
Could someone help me with this table?

```
col1  col2
0     1
0     1
1     0
1     1
2     0
2     1
3     1
3     1
```

(<http://sqlfiddle.com/#!2/67d076>)

I'd like to select distinct values from the first column and filter out the values that have zeroes in the second column. The result should be the following list: [0, 3]
```
select distinct col1
from test
where col1 not in (select col1 from test where col2 = 0)
```
It sounds like you're looking for a query like this:

```
SELECT DISTINCT `col1`
FROM `test`
WHERE `col1` NOT IN (SELECT `col1` FROM `test` WHERE `col2` = 0)
```

Or using a `JOIN`, like this:

```
SELECT DISTINCT a.`col1`
FROM `test` a
LEFT JOIN `test` b ON a.`col1` = b.`col1` AND b.`col2` = 0
WHERE b.`col1` IS NULL
```

[**Demonstration**](http://sqlfiddle.com/#!2/67d076/20)
Select distinct rows and apply a filter
[ "", "mysql", "sql", "distinct", "" ]
The 672 is the number of quarter-hours in a week, and I need the avg(value) of all quarters across 5 weeks that fall on the same day and the same quarter:

```
select value, DATEADD(MINUTE, a.QuarterNumber * 15, '2000-01-01') AS [Timestamp]
from measurements.Archive a
INNER JOIN measurements.Points p ON a.PointId = p.Id
INNER JOIN fifthcore..cm_lod_devices ld ON ld.Uuid = p.LogicalDeviceUuid
WHERE ld.Id IN (SELECT Value FROM @LodDeviceIds)
AND (
    a.QuarterNumber = 510176
    OR a.QuarterNumber = 510176 - 672
    OR a.QuarterNumber = 510176 - (672*2)
    OR a.QuarterNumber = 510176 - (672*3)
    OR a.QuarterNumber = 510176 - (672*4)
    OR a.QuarterNumber = 510176 - (672*5)
    ...
)
```
I do not know which is the last number you want, but the principle would be to work out how many multiples you need. In your example I will assume 5 as the max multiplier. So, knowing the first quarter, which is `510176`, I would find the min quarter, which in your example would be `510176 - (672*5)`, and I would test the remainder of the division by 672:

```
select value, DATEADD(MINUTE, a.QuarterNumber * 15, '2000-01-01') AS [Timestamp]
from measurements.Archive a
INNER JOIN measurements.Points p ON a.PointId = p.Id
INNER JOIN fifthcore..cm_lod_devices ld ON ld.Uuid = p.LogicalDeviceUuid
WHERE ld.Id IN (SELECT Value FROM @LodDeviceIds)
AND ((510176 - (672*5)) - a.QuarterNumber) % 672 = 0
AND a.QuarterNumber >= 510176 - (672*5)
AND a.QuarterNumber <= 510176
```

In the above you would only need to change `5` to the number of your expected quarters minus 1.
Use `in` instead:

```
where a.QuarterNumber in (510176, 510176 - 672, 510176 - (672*2),
                          510176 - (672*3), 510176 - (672*4),
                          510176 - (672*5), . . .)
```

If there is some sort of coding scheme, and you essentially want an infinite list, then use modulo arithmetic. In many databases, this will work:

```
where mod(a.QuarterNumber, 672) = mod(510176, 672)
and a.QuarterNumber <= 510176
```
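The modulo trick in the second suggestion is easy to sanity-check. A short Python sketch with the question's numbers confirms that the same-remainder test selects exactly the quarters spaced 672 apart at or below 510176 (here bounded to 5 weeks back, matching the question's OR list):

```python
BASE, STEP = 510176, 672  # last quarter, and quarter-hours per week

# The quarters the original OR list enumerates explicitly.
wanted = {BASE - STEP * k for k in range(6)}

# The WHERE-clause equivalent: same remainder mod 672, not above the base.
selected = {q for q in range(BASE - STEP * 5, BASE + STEP)
            if q % STEP == BASE % STEP and q <= BASE}

print(sorted(selected))
```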
how to rewrite this SQL Query without the or 's
[ "", "sql", "" ]
```
DECLARE @TodayDayOfWeek INT
DECLARE @EndOfPrevWeek DateTime
DECLARE @StartOfPrevWeek DateTime
DECLARE @EndOfThisWeek DateTime
DECLARE @StartOfThisWeek DateTime
```

--Declared the parameters

```
SET @TodayDayOfWeek = datepart(dw, GetDate())
SET @EndOfPrevWeek = DATEADD(dd, -@TodayDayOfWeek, GetDate())
SET @StartOfPrevWeek = DATEADD(dd, -(@TodayDayOfWeek+6), GetDate())
SET @StartOfThisWeek = DATEADD(dd, -@TodayDayOfWeek, GetDate())+1
SET @EndOfThisWeek = DATEADD(dd, -@TodayDayOfWeek, GetDate())+7
```

--Set the parameter dates to use for this week or last week

```
select st.SALESID
     , st.SALESNAME
     , st.CUSTACCOUNT
     , st.CUSTOMERREF
     , st.SALESSTATUS
     , cast (sl.LINENUM as decimal (18,0))
     , sl.ITEMID
     , cast (sl.SALESQTY as decimal (18,0))
     , cast (sl.LINEAMOUNT as decimal(18,2))
     , case when st.CREATEDDATETIME between @StartOfPrevWeek and @EndOfPrevWeek then 'L'
            when st.CREATEDDATETIME between @StartOfThisWeek and @EndOfThisWeek then 'T'
       end 'TorL'
from salestable st
inner join salesline sl on st.SALESID = sl.SALESID
where st.DATAAREAID = 'fr'
order by TorL
```

TorL is the case result. I would like to filter this using WHERE, to only show rows where TorL is T for this week or L for last week. Thanks
```
;WITH MyCTE AS
(
    select st.SALESID
         , st.SALESNAME
         , st.CUSTACCOUNT
         , st.CUSTOMERREF
         , st.SALESSTATUS
         , cast (sl.LINENUM as decimal (18,0)) AS LINENUM
         , sl.ITEMID
         , cast (sl.SALESQTY as decimal (18,0)) AS SALESQTY
         , cast (sl.LINEAMOUNT as decimal(18,2)) AS LINEAMOUNT
         , case when st.CREATEDDATETIME between @StartOfPrevWeek and @EndOfPrevWeek then 'L'
                when st.CREATEDDATETIME between @StartOfThisWeek and @EndOfThisWeek then 'T'
           end Torl
    from salestable st
    inner join salesline sl on st.SALESID = sl.SALESID
    where st.DATAAREAID = 'fr'
)
SELECT *
FROM MyCTE
WHERE Torl = 'T'
```
The nasty way to do this would be to just add the conditions for setting 'T' or 'L' to your WHERE clause:

```
DECLARE @TodayDayOfWeek INT
DECLARE @EndOfPrevWeek DATE
DECLARE @StartOfPrevWeek DATE
DECLARE @EndOfThisWeek DATE
DECLARE @StartOfThisWeek DATE

--Declared the parameters
SET @TodayDayOfWeek = datepart(dw, GetDate())
SET @EndOfPrevWeek = DATEADD(dd, -@TodayDayOfWeek, GetDate())
SET @StartOfPrevWeek = DATEADD(dd, -(@TodayDayOfWeek+6), GetDate())
SET @StartOfThisWeek = DATEADD(dd, -@TodayDayOfWeek, GetDate())+1
SET @EndOfThisWeek = DATEADD(dd, -@TodayDayOfWeek, GetDate())+7

select st.SALESID
     , st.SALESNAME
     , st.CUSTACCOUNT
     , st.CUSTOMERREF
     , st.SALESSTATUS
     , cast (sl.LINENUM as decimal (18,0))
     , sl.ITEMID
     , cast (sl.SALESQTY as decimal (18,0))
     , cast (sl.LINEAMOUNT as decimal(18,2))
     , case when CAST(st.CREATEDDATETIME AS DATE) between @StartOfPrevWeek and @EndOfPrevWeek then 'L'
            when CAST(st.CREATEDDATETIME AS DATE) between @StartOfThisWeek and @EndOfThisWeek then 'T'
       end 'TorL'
from salestable st
inner join salesline sl on st.SALESID = sl.SALESID
where st.DATAAREAID = 'fr'
AND (CAST(st.CREATEDDATETIME AS DATE) between @StartOfPrevWeek and @EndOfThisWeek)
order by TorL
```

EDIT: Making the variables DATETIME can cause unwanted issues where you eliminate certain dates just because of the timestamp. For this solution, it is recommended to make the variables of type DATE and then CAST `st.CREATEDDATETIME` AS DATE.
Using Case results in where clause - i would like to use the results of TorL - SQL Server
[ "", "sql", "sql-server", "case", "" ]
```
SELECT *
FROM Products
WHERE Name LIKE '%' + (SELECT TOP 1 Gift.Name FROM Gift WHERE Id = 65) + '%'
```

The subquery returns something like "toy gun". Some of the names even consist of three or more words. Obviously the main query looks for Names that include "toy gun". What I want to do is return all results for "toy" or "gun". Any suggestions?
You will need to create a function that splits the text of your column. Here is one possible way to do it: <http://www.sqlservercentral.com/blogs/querying-microsoft-sql-server/2013/09/19/how-to-split-a-string-by-delimited-char-in-sql-server/>

Once you have that, you can split the gift name and join the pieces back to Products with a LIKE:

```
DECLARE @giftName varchar(100);
SELECT TOP 1 @giftName = Gift.Name FROM Gift WHERE Id = 65;

SELECT p.*
FROM Products p
INNER JOIN dbo.fnSplitString(@giftName, ' ') sub
    ON p.Name LIKE '%' + sub.splitdata + '%'
```

(The exact name and signature of the split function depend on how you create it.)
If you have Full Text indexing enabled, then a `FREETEXT` query produces what you want. You can check whether Full Text is enabled by running the below query:

```
SELECT FULLTEXTSERVICEPROPERTY('IsFullTextInstalled')
```

If the above returns 1, then you're good to go. If 0 is returned, then search for installation instructions for your SQL Server version. Here is a good guide to setting up the FullText index: <http://blog.sqlauthority.com/2008/09/05/sql-server-creating-full-text-catalog-and-index/>. Once set up, you can use the below code to perform your required search:

```
Declare @searchString nvarchar (100);
Select top 1 @searchString = Gift.Name FROM Gift WHERE Id = 65;

Select * From Products Where FREETEXT (Name, @searchString);
```
Query with LIKE statement for any of the words, not whole
[ "", "sql", "sql-server", "t-sql", "sql-like", "" ]
I have two tables: **products** and **posts**.

Table **products** has a field **model** with values like "AH0002", "O-PO-201", "O-PO-304" etc.

Table **posts** has a field **post\_title** with values like "Product AH0002 is the best", "Red O-PO-201 is really good".

What I need is to display the products rows whose model doesn't appear anywhere in the posts table (in the post\_title field). How do I do it in MySQL?
Try: ``` select pr.* from products pr left join posts po on post_title like concat('%', pr.model, '%') where post_title is null ``` The above assumes mysql syntax, it will vary slightly by database. [SQL Fiddle](http://sqlfiddle.com/#!2/ec9d6/5)
I think you want something like this: ``` SELECT p.* FROM products p WHERE NOT EXISTS (SELECT 1 FROM posts WHERE post_title LIKE '%' + p.model + '%') ``` This syntax assumes Microsoft SQL Server. For MySQL, just change the LIKE clause to be LIKE CONCAT('%', p.model, '%')
SQL - Get records from one table that aren't "LIKE" in another table
[ "", "mysql", "sql", "" ]
I'm attempting to create a SQL query to delete child records when the parent has been deleted. The thing is, the children and parents are both stored in the same table. Here's the (simplified) data model:

***ThingTable***

```
ID   | Name   | ParentID
1000 | Thing1 | NULL
1001 | Thing2 | 1000
1002 | Thing3 | 1000
1003 | Thing4 | 1000
1004 | Thing5 | 1003
```

***ChildThingTable***

```
ID   | Color
1001 | Blue
1002 | Black
1003 | Green
1004 | Red
```

Assuming ID 1000 (the parent) was deleted, I'd need to delete the corresponding records from ChildThingTable as well as ThingTable. My only restriction is that I cannot use a trigger or alter the underlying database structure in any way.

Here's the pseudocode that I've worked out, but I'm having difficulty translating it into SQL:

1. Delete from ChildThingTable where the matching record in ThingTable does not have a parent, based on ID and ParentID.
2. Delete from ThingTable where the matching record does not have a parent.

Any assistance would be greatly appreciated!
You can use a Common Table Expression to recurse the hierarchy and collect every descendant before deleting:

```
-- Begin Create Test Data
SET NOCOUNT ON

CREATE TABLE #ThingTable (
    ID INT NOT NULL,
    [Name] VARCHAR(255) NOT NULL,
    [ParentID] INT NULL
)

CREATE TABLE #ChildThingTable (
    ID INT NOT NULL,
    [Color] VARCHAR(255) NOT NULL
)

INSERT INTO #ThingTable (ID,[Name],ParentID) VALUES (1000,'Thing1',NULL)
INSERT INTO #ThingTable (ID,[Name],ParentID) VALUES (1001,'Thing2',1000)
INSERT INTO #ThingTable (ID,[Name],ParentID) VALUES (1002,'Thing3',1000)
INSERT INTO #ThingTable (ID,[Name],ParentID) VALUES (1003,'Thing4',1000)
INSERT INTO #ThingTable (ID,[Name],ParentID) VALUES (1004,'Thing5',1003)

INSERT INTO #ChildThingTable ( ID, Color ) VALUES ( 1001 , 'Blue')
INSERT INTO #ChildThingTable ( ID, Color ) VALUES ( 1002 , 'Black')
INSERT INTO #ChildThingTable ( ID, Color ) VALUES ( 1003 , 'Green')
INSERT INTO #ChildThingTable ( ID, Color ) VALUES ( 1004 , 'Red')

SET NOCOUNT OFF
GO
-- End Create Test Data

-- This is a batch, but could easily be a stored procedure.
DECLARE @InputID INT
SET @InputID = 1000;

SET NOCOUNT ON

DECLARE @Temp TABLE(ID INT NOT NULL);

WITH ThingCTE (ID, ParentID, [Level]) AS
(
    SELECT tt1.ID, tt1.ParentID, 1 AS [Level]
    FROM #ThingTable tt1
    WHERE tt1.ID = @InputID
    UNION ALL
    SELECT tt2.ID, tt2.ParentID, tc1.[Level]+1
    FROM #ThingTable tt2
    JOIN ThingCTE tc1 ON (tt2.ParentID = tc1.ID)
)
INSERT INTO @Temp ( ID )
SELECT ID FROM ThingCTE

SET NOCOUNT OFF

DELETE ctt
-- Output is for debug purposes, should be commented out in production.
OUTPUT Deleted.*
FROM #ChildThingTable ctt
JOIN @Temp t ON (ctt.ID = t.ID);

DELETE tt
-- Output is for debug purposes, should be commented out in production.
OUTPUT Deleted.*
FROM #ThingTable tt
JOIN @Temp t ON (tt.ID = t.ID)

DROP TABLE #ChildThingTable;
DROP TABLE #ThingTable;
```
Does adding a constraint count as 'altering the underlying database structure in any way'? If not, I would consider using a constraint with a cascading delete. This requires the ParentID column to be a foreign key referencing the ID column; once the constraint is added, deleting the primary-key row deletes the matching foreign-key rows too. ``` ALTER TABLE dbo.T2 ADD CONSTRAINT FK_T1_T2_Cascade FOREIGN KEY (EmployeeID) REFERENCES dbo.T1(EmployeeID) ON DELETE CASCADE ``` There are also recursive queries that can do similar things, but I would start with this. Then something like the following should work (note that START WITH / CONNECT BY is Oracle syntax): <http://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL> ``` DELETE FROM thingtable START WITH ID = 1000 CONNECT BY PRIOR ID = ParentID; ```
Delete Child Records with Missing Parent
[ "sql", "sql-server", "sql-server-2008" ]
My table looks like this AFTER UNION ``` St ---- FUT 80CE 79CE 81PE 79PE ``` I need ``` St ---- FUT 79CE 80CE 81PE 79PE ``` Using this: ``` order by case St when 'FUT' then 1 else 2 end , case when St like '%PE' then St end, case when St like '%CE' then St end ``` I get this: ``` St ---- FUT 79CE 80CE 79PE 81PE ```
Well, you could use `ROW_NUMBER()` to order the data in a common table expression, that would allow you to easily use the row number in the ordering `CASE`; ``` WITH unionresult AS ( <original query here> ), cte AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY st) rn FROM unionresult ) SELECT st FROM cte ORDER BY CASE WHEN st = 'FUT' THEN 1 ELSE 2 END, CASE WHEN St LIKE '%PE' THEN -rn END, CASE WHEN St LIKE '%CE' THEN rn END ``` [A simple SQLfiddle to test with](http://sqlfiddle.com/#!3/7adc2/1).
Try this: ``` order by case St when 'FUT' then 1 else 2 end , case when St like '%CE' then 1 else 2 end , case when St like '%PE' then St end DESC, case when St like '%CE' then St end ```
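The CASE-based ordering can be verified against the sample data. A sketch using SQLite via Python's `sqlite3`, assuming the UNION result has been materialized in a table `t`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (St TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("FUT",), ("80CE",), ("79CE",), ("81PE",), ("79PE",)])

# FUT first, then the CE group ascending, then the PE group descending.
rows = [r[0] for r in con.execute("""
SELECT St FROM t
ORDER BY CASE St WHEN 'FUT' THEN 1 ELSE 2 END,
         CASE WHEN St LIKE '%CE' THEN 1 ELSE 2 END,
         CASE WHEN St LIKE '%PE' THEN St END DESC,
         CASE WHEN St LIKE '%CE' THEN St END
""")]
```

This reproduces the desired output: FUT, 79CE, 80CE, 81PE, 79PE.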
SQL Server 2005 using case in order by
[ "sql", "sql-server" ]
From a table I retrieve values, for example, ``` 7752652:1,7752653:2,7752654:3,7752655:4 ``` or ``` 7752941:1,7752942:2 ``` i.e. the string may contain any number of substrings. What I need: remove every run of characters from the ':' character up to (but not including) the following comma. For example, ``` 7752652:1,7752653:2,7752654:3,7752655:4 ``` should become ``` 7752652,7752653,7752654,7752655 ``` How do I do it?
I solved this problem with a CLR function. It is faster, and the function can be used in complex queries: ``` using System.Data.SqlTypes; using System.Text.RegularExpressions; using Microsoft.SqlServer.Server; [SqlFunction] public static SqlString fnRemoveSuffics(SqlString source) { string pattern = @":(\d+)"; string replacement = ""; string result = Regex.Replace(source.Value, pattern, replacement); return new SqlString(result); } ```
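The CLR function's regex can be sanity-checked outside the database; a minimal Python sketch of the same replacement (the `remove_suffixes` name is just for illustration):

```python
import re

def remove_suffixes(source: str) -> str:
    # Same pattern as the CLR function: strip every ":<digits>" run.
    return re.sub(r":\d+", "", source)

out = remove_suffixes("7752652:1,7752653:2,7752654:3,7752655:4")
```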
Replace `:` with start tag `<X>`. Replace `,` with end tag `</X>` and an extra comma. Add an extra end tag to the end `</X>`. That will give you a string that looks like `7752941<X>1</X>,7752942<X>2</X>`. Cast to XML and use `query(text())` to get the root text values. Cast the result back to string. [SQL Fiddle](http://sqlfiddle.com/#!6/71fd0/1) **MS SQL Server 2012 Schema Setup**: ``` create table T ( C varchar(100) ) insert into T values ('7752652:1,7752653:2,7752654:3,7752655:4'), ('7752941:1,7752942:2') ``` **Query 1**: ``` select cast(cast(replace(replace(T.C, ':', '<X>'), ',', '</X>,')+'</X>' as xml).query('text()') as varchar(100)) as C from T ``` **[Results](http://sqlfiddle.com/#!6/71fd0/1/0)**: ``` | C | |---------------------------------| | 7752652,7752653,7752654,7752655 | | 7752941,7752942 | ```
T-SQL - remove chars from string beginning from specific character
[ "sql", "sql-server", "t-sql" ]
Hi I have the following script: ``` DECLARE @sql VARCHAR(2000); DECLARE @tableName SYSNAME; DECLARE @columnName SYSNAME; DECLARE @count INT; DECLARE @NotCursor TABLE(ID INT IDENTITY(1, 1), TableName SYSNAME, ColumnName SYSNAME) DECLARE @StartLoop INT DECLARE @EndLoop INT DECLARE @SQLFinalQuery VARCHAR(MAX) INSERT INTO @NotCursor SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE (DATA_TYPE = 'date' OR DATA_TYPE = 'datetime') AND table_name NOT LIKE '%[_]%' ORDER BY TABLE_NAME SELECT @StartLoop = MIN(ID), @EndLoop = MAX(ID) FROM @NotCursor SET @SQLFinalQuery = ';WITH cte_Resultset AS'+CHAR(13)+CHAR(10) +'(' WHILE @StartLoop <= @EndLoop BEGIN SELECT @tableName = TableName, @columnName = ColumnName FROM @NotCursor WHERE ID = @StartLoop SET @sql = 'SELECT ''' + @tableName + ''' as [TableName], ' + '''' + @columnName + ''' AS [ColumnName], ' + 'DATEPART(yy, ' + QUOTENAME(@columnName) + ') AS [Year], COUNT(1) AS [NumberofRows]'+CHAR(13)+CHAR(10) +'FROM ' + QUOTENAME(@tableName) +CHAR(13)+CHAR(10) +'GROUP BY DATEPART(yy, ' + QUOTENAME(@columnName) + ')'; SET @SQLFinalQuery = @SQLFinalQuery+CHAR(13)+CHAR(10)+@sql; SET @SQLFinalQuery = CASE WHEN @StartLoop = @EndLoop THEN @SQLFinalQuery+CHAR(13)+CHAR(10)+')' ELSE @SQLFinalQuery+CHAR(13)+CHAR(10)+'UNION ALL' END SET @StartLoop = @StartLoop + 1 END SET @SQLFinalQuery = @SQLFinalQuery +'SELECT TOP 10 SUM(NumberofRows) AS NumberOfRows,TableName,ColumnName,Year'+CHAR(13)+CHAR(10) +'FROM cte_Resultset'+CHAR(13)+CHAR(10) +'WHERE Year IS NOT NULL'+CHAR(13)+CHAR(10) +'GROUP BY TableName, ColumnName, Year'+CHAR(13)+CHAR(10) +'ORDER BY SUM(NumberofRows) DESC'+CHAR(13)+CHAR(10) EXEC (@SQLFinalQuery) ``` The output of this script provides me with the NumberofRows, TableName, ColumnName, and Year. However, I also want to additionally filter the results. Currently, the script searches through every table without "\_" in it. 
But I also want it to only look at tables that are related to a calendar (the table must be joined to a calendar). The table where the table name and calendar name can be found is called TimeDependencies. Is there any way to join my current code and this table, so that the result of the script will filter out tables which don't have an associated calendar? Thanks. Sample Data: ``` T002 dtCodeObjective T002 dtDCNandPersistency T002 dtServiceFee T004 dtMilitaryCommission ```
@user3712641, this answer is related to the question from [SQL Server 2008 - Optimizing Select statement [duplicate]](https://stackoverflow.com/questions/24998798/sql-server-2008-optimizing-select-statement). By the time I could answer, the original thread had been closed as a duplicate. However, as I had already put some time into optimizing this query, I am going ahead and adding it to this post instead. What I did was remove the looped inserts and eliminate the cursors. Tests on a local machine showed some performance improvement. Please try it and let me know if it works for you. ``` DECLARE @sql VARCHAR(2000); DECLARE @tableName SYSNAME; DECLARE @columnName SYSNAME; DECLARE @count INT; DECLARE @NotCursor TABLE(ID INT IDENTITY(1, 1), TableName SYSNAME, ColumnName SYSNAME) DECLARE @StartLoop INT DECLARE @EndLoop INT DECLARE @SQLFinalQuery VARCHAR(MAX) INSERT INTO @NotCursor SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE (DATA_TYPE = 'date' OR DATA_TYPE = 'datetime') AND table_name NOT LIKE '%[_]%' ORDER BY TABLE_NAME SELECT @StartLoop = MIN(ID), @EndLoop = MAX(ID) FROM @NotCursor SET @SQLFinalQuery = ';WITH cte_Resultset AS'+CHAR(13)+CHAR(10) +'(' WHILE @StartLoop <= @EndLoop BEGIN SELECT @tableName = TableName, @columnName = ColumnName FROM @NotCursor WHERE ID = @StartLoop SET @sql = 'SELECT ''' + @tableName + ''' as [TableName], ' + '''' + @columnName + ''' AS [ColumnName], ' + 'DATEPART(yy, ' + QUOTENAME(@columnName) + ') AS [Year], COUNT(1) AS [NumberofRows]'+CHAR(13)+CHAR(10) +'FROM ' + QUOTENAME(@tableName) +CHAR(13)+CHAR(10) +'GROUP BY DATEPART(yy, ' + QUOTENAME(@columnName) + ')'; SET @SQLFinalQuery = @SQLFinalQuery+CHAR(13)+CHAR(10)+@sql; SET @SQLFinalQuery = CASE WHEN @StartLoop = @EndLoop THEN @SQLFinalQuery+CHAR(13)+CHAR(10)+')' ELSE @SQLFinalQuery+CHAR(13)+CHAR(10)+'UNION ALL' END SET @StartLoop = @StartLoop + 1 END SET @SQLFinalQuery = @SQLFinalQuery +'SELECT TOP 10 SUM(NumberofRows) AS NumberOfRows,TableName,Year'+CHAR(13)+CHAR(10) +'FROM
cte_Resultset'+CHAR(13)+CHAR(10) +'GROUP BY TableName, Year'+CHAR(13)+CHAR(10) +'ORDER BY SUM(NumberofRows) DESC'+CHAR(13)+CHAR(10) EXEC (@SQLFinalQuery) ``` If you need to include columns, please replace the last six lines with this: ``` SET @SQLFinalQuery = @SQLFinalQuery +'SELECT TOP 10 SUM(NumberofRows) AS NumberOfRows,TableName,ColumnName,Year'+CHAR(13)+CHAR(10) +'FROM cte_Resultset'+CHAR(13)+CHAR(10) +'GROUP BY TableName, ColumnName, Year'+CHAR(13)+CHAR(10) +'ORDER BY SUM(NumberofRows) DESC'+CHAR(13)+CHAR(10) EXEC (@SQLFinalQuery) ```
Based on the updated comments that the OP wants to get the counts for all tables, not just the 10 largest counts this can be done with no loops. ``` declare @SQL nvarchar(max) select @SQL = STUFF(( select top 20 'SELECT ''' + TABLE_NAME + ''' as [TableName], ' + '''' + COLUMN_NAME + ''' AS [ColumnName], ' + 'DATEPART(yy, ' + QUOTENAME(COLUMN_NAME) + ') AS [Year], COUNT(1) AS [NumberofRows] FROM ' + QUOTENAME(TABLE_NAME) + ' GROUP BY DATEPART(yy, ' + QUOTENAME(COLUMN_NAME) + ') union all ' from INFORMATION_SCHEMA.COLUMNS WHERE (DATA_TYPE = 'date' or DATA_TYPE = 'datetime') and table_name not like '%[_]%' ORDER BY TABLE_NAME for xml path('')),1 , 0 , '') select @SQL = stuff(@SQL, len(@SQL) - 9, 10, '') + ' order by NumberOfRows desc' exec sp_executesql @SQL ```
SQL Server 2008 - How to find top 10 tables and order them
[ "sql", "sql-server" ]
I have a logging table collecting values from many probes: ``` CREATE TABLE [Log] ( [LogID] int IDENTITY (1, 1) NOT NULL, [Minute] datetime NOT NULL, [ProbeID] int NOT NULL DEFAULT 0, [Value] FLOAT(24) NOT NULL DEFAULT 0.0, CONSTRAINT Log_PK PRIMARY KEY([LogID]) ) GO CREATE INDEX [Minute_ProbeID_Value] ON [Log]([Minute], [ProbeID], [Value]) GO ``` Typically, each probe generates a value every minute or so. Some example output: ``` LogID Minute ProbeID Value ====== ================ ======= ===== 873875 2014-07-27 09:36 1972 24.4 873876 2014-07-27 09:36 2001 29.7 873877 2014-07-27 09:36 3781 19.8 873878 2014-07-27 09:36 1963 25.6 873879 2014-07-27 09:36 2002 22.9 873880 2014-07-27 09:36 1959 -30.1 873881 2014-07-27 09:36 2005 20.7 873882 2014-07-27 09:36 1234 23.8 873883 2014-07-27 09:36 1970 19.9 873884 2014-07-27 09:36 1991 22.4 873885 2014-07-27 09:37 1958 1.7 873886 2014-07-27 09:37 1962 21.3 873887 2014-07-27 09:37 1020 23.1 873888 2014-07-27 09:38 1972 24.1 873889 2014-07-27 09:38 3781 20.1 873890 2014-07-27 09:38 2001 30 873891 2014-07-27 09:38 2002 23.4 873892 2014-07-27 09:38 1963 26 873893 2014-07-27 09:38 2005 20.8 873894 2014-07-27 09:38 1234 23.7 873895 2014-07-27 09:38 1970 19.8 873896 2014-07-27 09:38 1991 22.7 873897 2014-07-27 09:39 1958 1.4 873898 2014-07-27 09:39 1962 22.1 873899 2014-07-27 09:39 1020 23.1 ``` What is the most efficient way to get just the latest reading for each Probe? e.g.of desired output (note: the "Value" is not e.g. 
a Max() or an Avg()): ``` LogID Minute ProbeID Value ====== ================= ======= ===== 873899 27-Jul-2014 09:39 1020 3.1 873894 27-Jul-2014 09:38 1234 23.7 873897 27-Jul-2014 09:39 1958 1.4 873880 27-Jul-2014 09:36 1959 -30.1 873898 27-Jul-2014 09:39 1962 22.1 873892 27-Jul-2014 09:38 1963 26 873895 27-Jul-2014 09:38 1970 19.8 873888 27-Jul-2014 09:38 1972 24.1 873896 27-Jul-2014 09:38 1991 22.7 873890 27-Jul-2014 09:38 2001 30 873891 27-Jul-2014 09:38 2002 23.4 873893 27-Jul-2014 09:38 2005 20.8 873889 27-Jul-2014 09:38 3781 20.1 ```
This is another approach ``` select * from log l where minute = (select max(x.minute) from log x where x.probeid = l.probeid) ``` You can compare the execution plan w/ a fiddle - <http://sqlfiddle.com/#!3/1d3ff/3/0>
Try this: ``` SELECT T1.* FROM Log T1 INNER JOIN (SELECT Max(Minute) Minute, ProbeID FROM Log GROUP BY ProbeID)T2 ON T1.ProbeID = T2.ProbeID AND T1.Minute = T2.Minute ``` You can play around with it on [SQL Fiddle](http://sqlfiddle.com/#!3/d0d9e/2)
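The join-against-per-probe-MAX pattern is portable; a sketch with SQLite via Python's `sqlite3`, using a few of the question's rows (ISO-formatted `Minute` strings compare correctly as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Log (LogID INTEGER, Minute TEXT, ProbeID INTEGER, Value REAL)")
con.executemany("INSERT INTO Log VALUES (?,?,?,?)", [
    (873875, "2014-07-27 09:36", 1972, 24.4),
    (873888, "2014-07-27 09:38", 1972, 24.1),
    (873885, "2014-07-27 09:37", 1958, 1.7),
    (873897, "2014-07-27 09:39", 1958, 1.4),
])

# Join each row to its probe's maximum Minute; only the latest row survives.
latest = dict(con.execute("""
SELECT t1.ProbeID, t1.LogID
FROM Log t1
JOIN (SELECT ProbeID, MAX(Minute) AS Minute FROM Log GROUP BY ProbeID) t2
  ON t1.ProbeID = t2.ProbeID AND t1.Minute = t2.Minute
"""))
```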
How to select most recent values?
[ "sql", "sql-server" ]
Here is the result I need, simplified: ``` select name, phonenumber from contacttmp left outer join phonetmp on (contacttmp.id = phonetmp.contact_id); name | phonenumber -------+-------------- bob | 111-222-3333 bob | 111-222-4444 bob | 111-222-5555 frank | 111-222-6666 joe | 111-222-7777 ``` The query, however displays the name, I'm trying to omit the name after the first result: ``` name | phonenumber -------+-------------- bob | 111-222-3333 | 111-222-4444 | 111-222-5555 frank | 111-222-6666 joe | 111-222-7777 ``` Here's how I made the example tables and the data: ``` create table contacttmp (id serial, name text); create table phonetmp (phoneNumber text, contact_id integer); select * from contacttmp; id | name ----+------- 1 | bob 2 | frank 3 | joe select * from phonetmp ; phonenumber | contact_id --------------+------------ 111-222-3333 | 1 111-222-4444 | 1 111-222-5555 | 1 111-222-6666 | 2 111-222-7777 | 3 ``` ### Old part of question I'm working on a contacts program in PHP and a requirement is to display the results but omit the other fields after the first record is displayed if there are multiple results of that same record. From the [postgres tutorial join](http://www.postgresql.org/docs/9.3/static/tutorial-join.html) examples I'm doing something like this with a left outer join: ``` SELECT * FROM weather LEFT OUTER JOIN cities ON (weather.city = cities.name); city | temp_lo | temp_hi | prcp | date | name | location --------------+---------+---------+------+------------+---------------+----------- Hayward | 37 | 54 | | 1994-11-29 | | San Francisco | 46 | 50 | 0.25 | 1994-11-27 | San Francisco | (-194,53) San Francisco | 43 | 57 | 0 | 1994-11-29 | San Francisco | (-194,53) ``` I can't figure out how to, or if it is possible to, alter the above query to not display the other fields after the first result. 
For example, if we add the clause "WHERE location = '(-194,53)'" we don't want the second (and third if there is one) results to display the columns other than location, so the query (plus something extra) and the result would look like this: ``` SELECT * FROM weather LEFT OUTER JOIN cities ON (weather.city = cities.name) WHERE location = '(-194,53)'; city | temp_lo | temp_hi | prcp | date | name | location --------------+---------+---------+------+------------+---------------+----------- San Francisco | 46 | 50 | 0.25 | 1994-11-27 | San Francisco | (-194,53) | | | | | | (-194,53) ``` Is this possible with some kind of JOIN or exclusion or other query? Or do I have to remove these fields in PHP after getting all the results (would rather not do). To avoid confusion, I'm required to achieve a result set like: ``` city | temp_lo | temp_hi | prcp | date | name | location --------------+---------+---------+------+------------+---------------+----------- San Francisco | 46 | 50 | 0.25 | 1994-11-27 | San Francisco | (-194,53) | | | | | | (-19,5) | | | | | | (-94,3) Philadelphia | 55 | 60 | 0.1 | 1995-12-12 | Philadelphia | (-1,1) | | | | | | (-77,55) | | | | | | (-3,33) ``` Where any additional results for the same record (city) with different locations would only display the different location.
Consider the slightly modified test case in the fiddle below. ### Simple case For the simple case dealing with a **single column** from each table, comparing to the *previous* row with the window function `lag()` does the job: ``` SELECT CASE WHEN lag(c.contact) OVER (ORDER BY c.contact, p.phone_nr) = c.contact THEN NULL ELSE c.contact END , p.phone_nr FROM contact c LEFT JOIN phone p USING (contact_id); ``` You could repeat that for *n columns*, but that's tedious. ### For many columns ``` SELECT c.*, p.phone_nr FROM ( SELECT * , row_number() OVER (PARTITION BY contact_id ORDER BY phone_nr) AS rn FROM phone ) p LEFT JOIN contact c ON c.contact_id = p.contact_id AND p.rn = 1; ``` Something like a "reverse LEFT JOIN". This assumes referential integrity (no missing rows in `contact`). Also, contacts without any entries in `phone` are not in the result. Easy to add if needed. [**SQL Fiddle.**](http://sqlfiddle.com/#!15/ef550/1) Aside, your query in the first example exhibits a rookie mistake. ``` SELECT * FROM weather LEFT OUTER JOIN cities ON (weather.city = cities.name) WHERE location = '(-194,53)'; ``` One does not combine a `LEFT JOIN` with a `WHERE` clause on the right table. That doesn't make sense. Details: * [Explain JOIN vs. LEFT JOIN and WHERE condition performance suggestion in more detail](https://stackoverflow.com/questions/24876673/explain-join-vs-left-join-and-where-condition-performance-suggestion-in-more-de/24876797#24876797) Except to test for existence ... * [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
You can do this type of logic in SQL, but it is not recommended. The result set from SQL queries is in a table format. Tables represented unordered sets and generally have all columns meaning the same thing. So, having a result set that depends on the values from the "preceding" row is not a proper way to use SQL. Although you can get this result in Postgres, I do not recommend it. Usually, this type of formatting is done on the application side.
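A minimal application-side sketch of that formatting in Python (the `blank_repeats` helper is hypothetical, just for illustration):

```python
def blank_repeats(rows):
    """Blank the name column whenever it repeats the previous row's name."""
    out, prev = [], object()  # sentinel that never equals a real name
    for name, phone in rows:
        out.append(("" if name == prev else name, phone))
        prev = name
    return out

result = blank_repeats([
    ("bob", "111-222-3333"), ("bob", "111-222-4444"),
    ("bob", "111-222-5555"), ("frank", "111-222-6666"),
])
```

The rows must already be sorted by name, just as the SQL variants require an ORDER BY.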
SQL JOIN to omit other columns after first result
[ "sql", "postgresql", "window-functions" ]
I have a MySQL table where I have a certain id as a foreign key coming from another table. This id is not unique to this table so I can have many records holding the same id. I need to find out which ids are seen the least amount of times in this table and pull up a list containing them. For example, if I have 5 records with id=1, 3 records with id=2 and 3 records with id=3, I want to pull up only ids 2 & 3. However, the data in the table changes quite often so I don't know what that minimum value is going to be at any given moment. The task is quite trivial if I use two queries but I'm trying to do it with just one. Here's what I have: ``` SELECT id FROM table GROUP BY id HAVING COUNT(*) = MIN(SELECT COUNT(*) FROM table GROUP BY id) ``` If I substitute COUNT(\*) = 3, then the results come up but using the query above gives me an error that MIN is not used properly. Any tips?
You need a double `select` in the `having` clause: ``` SELECT id FROM table GROUP BY id HAVING COUNT(*) = (SELECT MIN(cnt) FROM (SELECT COUNT(*) as cnt FROM table GROUP BY id) t); ```
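The nested-MIN pattern works the same way in SQLite; a quick sketch with Python's `sqlite3`, using the counts from the question (five rows with id 1, three each with ids 2 and 3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER)")
con.executemany("INSERT INTO t VALUES (?)",
                [(1,)] * 5 + [(2,)] * 3 + [(3,)] * 3)

# Inner query: per-id counts; middle query: the minimum of those counts;
# outer HAVING keeps only ids that hit that minimum.
least = [r[0] for r in con.execute("""
SELECT id FROM t GROUP BY id
HAVING COUNT(*) = (SELECT MIN(cnt) FROM
                   (SELECT COUNT(*) AS cnt FROM t GROUP BY id))
ORDER BY id
""")]
```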
I would try with: ``` SELECT id FROM table GROUP BY id HAVING COUNT(*) = (SELECT COUNT(*) FROM table GROUP BY id ORDER BY COUNT(*) LIMIT 1); ``` This gets the minimum selecting the first row from the set of counts in ascendent order.
Obtain a list with the items found the minimum amount of times in a table
[ "mysql", "sql" ]
I need a query that could remove unnecessary characters (a not-so-needed trailing comma as an example) from the string stored in my database table. So that ``` EMAIL_ADD abc@gmail.com, abc@yahoo.com,def@example.org, abs-def@ac.uk, ``` would update it into something like this: ``` EMAIL_ADD abc@gmail.com abc@yahoo.com,def@example.org abs-def@ac.uk ```
Using the `TRIM()` function with the `TRAILING` option removes a specific unwanted character from the end of a string; in your case, the comma at the end. ``` UPDATE tableName SET EMAIL_ADD = TRIM(TRAILING ',' FROM EMAIL_ADD) ``` [**See documentation here TRIM()**](http://www.techonthenet.com/oracle/functions/trim.php)
If you have a specific list of characters to filter out at the start and end, use `trim` functions: ``` select ltrim(ltrim(rtrim(rtrim(email_add, ','), ' '), ','), ' ') from tableX ``` Here I nested `ltrim` and `rtrim` to remove leading and trailing `,` and space characters. Or using `trim`: ``` select trim(trim(both ',' from email_add)) from tableX ```
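The same cleanup is easy to check outside SQL; a Python sketch (the `strip_trailing` helper is hypothetical) mirroring strip-spaces-then-trailing-comma:

```python
def strip_trailing(s: str) -> str:
    # Equivalent of trimming surrounding spaces plus a trailing comma;
    # internal commas between addresses are kept.
    return s.strip().rstrip(",").strip()

emails = ["abc@gmail.com,", "abc@yahoo.com,def@example.org,", "abs-def@ac.uk,"]
cleaned = [strip_trailing(e) for e in emails]
```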
How could I remove unnecessary characters in SQL
[ "sql", "database", "oracle" ]
Apologies if I'm not phrasing this well. I've searched for some time but I somehow have been missing how to do this. It would be great if someone could point me in the right direction. Basically, I have a table with 2 columns: `Serv_No` / `Prd_Name` Each `Serv_No` (1,2,3,4,5 etc.) may have unlimited varying `Prd_Name` (A, B, C, D, AA, BB, CC etc.) I want to only include a `Serv_No` where a `Serv_No` has `Prd_Name = AA` and `Prd_Name <> BB`. If a `Serv_No` has both `A1 & B1` then exclude all instances of that `Serv_No`, even if the other rows with that `Serv_No` have a different `Prd_Name`. Thanks
This is usually done using GROUP BY/HAVING over CASEs: ``` select serv_no from tab group by serv_no having -- include AA sum(case when prd_name = 'AA' then 1 else 0 end) = 1 -- AA plus at least 1 other row <> BB and sum(case when prd_name <> 'BB' then 1 else 0 end) >= 2 -- exclude if both A1 & B1 are present and sum(case when prd_name in ('A1', 'B1') then 1 else 0 end) <> 2 ``` If you want to get the detail rows, not only the serv\_no you can move the SUMs as a Windowed Aggregate using *OVER (PARTITION BY serv\_no)* into the QUALIFY clause.
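The conditional-SUM filters can be exercised with SQLite via Python's `sqlite3`; the sample rows below are invented for illustration (serv_no 1 qualifies, 2 has only the AA row, 3 carries both A1 and B1, 4 lacks AA):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (serv_no INTEGER, prd_name TEXT)")
con.executemany("INSERT INTO tab VALUES (?,?)", [
    (1, "AA"), (1, "C"),              # AA plus another non-BB row: kept
    (2, "AA"),                        # AA alone: fails the >= 2 test
    (3, "AA"), (3, "A1"), (3, "B1"),  # both A1 and B1 present: excluded
    (4, "C"),                         # no AA: excluded
])

kept = [r[0] for r in con.execute("""
SELECT serv_no FROM tab GROUP BY serv_no
HAVING SUM(CASE WHEN prd_name = 'AA' THEN 1 ELSE 0 END) = 1
   AND SUM(CASE WHEN prd_name <> 'BB' THEN 1 ELSE 0 END) >= 2
   AND SUM(CASE WHEN prd_name IN ('A1','B1') THEN 1 ELSE 0 END) <> 2
""")]
```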
Try this: ``` SELECT * FROM [TABLE] WHERE [serv_no] IN( SELECT [serv_no] FROM [TABLE] GROUP BY [serv_no] having COUNT([serv_no])=1 ) AND prd_name = 'AA' ``` This will select all serv\_no values that have only one prd\_name and then filter the table by them. You can change the condition by adding more criteria at the end.
Excluding all ID values based on criteria, with a one to many relationship - SQL
[ "sql", "teradata" ]
I have two tables called Recovery and Installments. ``` Recovery ([pk] RID, emp_id, amount, duration, type) Installments ([pk] ID, [fk] recovery_id, recovered_amount, date) ``` When an employee is granted a loan, that information is stored in the `Recovery` table. The loan is then recovered on an installment basis from the monthly salary (the installment is `[amount / duration]`). Once recovered, details about the recovered installment should be stored in the `Installments` table. Since a loan may be recovered in several installments, the relationship of Recovery to Installment is `1:M`. Now let's say I just want to find all recovery details that should be deducted from the salary. I have written this query. Here I take the sum of all paid installments, deduct it from the loan amount, and if the balance is greater than 0 the installment is calculated. The expected result is `Emp_DI | Installment` ``` WITH Summary (RID, emp_id, amount, duration, installment) AS( SELECT Recovery.RID, Recovery.emp_id, Recovery.amount, Recovery.duration, SUM(Installment.recovered_installment) AS Installments FROM Recovery LEFT OUTER JOIN Installment ON Recovery.RID = Installment.recovery_id WHERE Recovery.type = 'Loan' GROUP BY Recovery.duration, Recovery.amount, Recovery.emp_id, Recovery.RID ) SELECT emp_id, SUM(amount / duration) AS INS FROM Summary WHERE ((CASE WHEN installment != NULL THEN (amount-installment) ELSE(amount) END)> 0 ) GROUP BY emp_id ``` This query runs fine and gives the desired output with one exception, i.e. it seems that there is an issue in the WHERE clause. ``` WHERE ((CASE WHEN installment != NULL THEN (amount-installment) ELSE(amount) END)> 0 ) ``` This does not filter out the records with a zero balance as expected. I couldn't understand the reason for this, so could you please help me? Your help is really appreciated!
You need `is not null`: ``` WHERE ((CASE WHEN installment IS NOT NULL THEN (amount-installment) ELSE(amount) END)> 0 ) ``` Almost any comparison to `NULL`, including `= NULL` and `<> NULL` returns `NULL`. And, `NULL` is treated as false. Use `IS NULL` and `IS NOT NULL`.
The criteria `installment != NULL` needs to be replaced with `installment is not null`. [All comparisons to NULL return FALSE.](http://technet.microsoft.com/en-us/library/ms191270(v=sql.105).aspx) For the specific comparison you are making, this WHERE clause might be easier on the eyes: ``` WHERE (amount - COALESCE(installment,0)) > 0 ```
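Both points, that `!= NULL` never matches and that the COALESCE rewrite keeps the open balances, can be demonstrated with SQLite via Python's `sqlite3` (invented sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE s (amount REAL, installment REAL)")
con.executemany("INSERT INTO s VALUES (?,?)",
                [(100.0, None), (100.0, 100.0), (100.0, 40.0)])

# "!= NULL" evaluates to NULL for every row, so nothing matches.
broken = con.execute(
    "SELECT COUNT(*) FROM s WHERE installment != NULL").fetchone()[0]

# COALESCE treats a missing installment as 0 and keeps the two open balances.
open_balances = con.execute(
    "SELECT COUNT(*) FROM s WHERE amount - COALESCE(installment, 0) > 0"
).fetchone()[0]
```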
SQL Where clause does not filter out "Zero" values
[ "sql", "sql-server", "filtering", "where-clause", "common-table-expression" ]
I had to make a new table to get the Include statement working in Entity Framework since EF was looking for a table called `be_PostTagbe_Posts`. I was using EF Code First from DB. But now the question is about SQL. I added one row of data and now the include works. But what I am looking for is a SQL command that can copy data from 1 column in 1 table and 1 column in another into the new `be_PostTagbe_Posts` table. In the be\_Posts table I need the data in `PostRowID` to go into `be_Posts_PostRowID` and `PostTagId` to go into `be_PostTag_PostTagID`. Both `be_PostTag_PostTagID` and `be_Posts_PostRowID` are in the new `be_PostTagbe_Posts` table. I am not very good with SQL so not sure how to do this. Edit: Thanks for the answers. I tried 2 separate queries but only data was inserted into the the `be_PostTag_PostTagID` while `be_PostTag_PostRowID` remained null. And I tried this query which returned `The multi-part identifier "be_PostTag.PostID" could not be bound.` ``` INSERT INTO be_PostTagbe_Posts(be_PostTag_PostTagID, be_Posts_PostRowID) SELECT be_PostTag.PostTagID, be_Posts.PostRowID WHERE be_PostTag.PostID = be_Posts.PostID ``` EDIT: This only inserted half the data - even 2 inserts leave one column null ``` INSERT INTO be_PostTagbe_Posts (be_Posts_PostRowID) SELECT PostRowID FROM be_Posts; INSERT INTO be_PostTagbe_Posts (be_PostTag_PostTagID) SELECT PostTagID FROM be_PostTag; ``` And yet management studio tells me the query executed successfully but one column is still null. Weird. Here are screenshots of the tables: ![enter image description here](https://i.stack.imgur.com/kSZAk.jpg) ![enter image description here](https://i.stack.imgur.com/pLxqE.jpg) ![enter image description here](https://i.stack.imgur.com/qOVtH.jpg)
``` SELECT PostTagID AS be_PostTag_PostTagID, PostRowID AS be_Posts_PostRowID INTO be_PostTagbe_Posts FROM be_PostTag Inner JOIN be_Posts ON be_PostTag.PostID=be_Posts.PostID ``` That command created the new table with the 2 columns populated.
If I understand you, you want to copy `Table Z's Column A` to `Table X` and `Table Z's Column B` to `Table Y`. If so: your question does not make the table structures of `TableX` and `TableY` clear. Assuming `TableX` and `TableY` are single-column tables (apart from an identity column), the queries would be ``` INSERT INTO TableX SELECT ColumnA FROM TableZ INSERT INTO TableY SELECT ColumnB FROM TableZ ``` Otherwise, please post the entire structure of your tables to get more help, because these queries are based on assumptions.
SQL Command to copy data from 1 column in table and 1 column in another table into a new table?
[ "sql", "sql-server" ]
I am using Oracle 11g and trying to insert a string containing special UTF-8 characters, e.g. '(ε- c'. The NLS character sets for the database are... ``` NLS_NCHAR_CHARACTERSET AL16UTF16 NLS_CHARACTERSET WE8ISO8859P1 ``` When I copy and paste the above string into an `NVARCHAR` field it works fine. If I execute the below I get an upside-down question mark in the field ``` insert into title_debug values ('(ε- c'); ``` where the title_debug table consists of a single `NVARCHAR2(100)` field called title. I have attempted to assign this string to an `NVARCHAR2(100)` variable and then insert it. I have also attempted all the different CAST / CONVERT etc. functions I can find, and nothing is working. Any assistance would be greatly appreciated. UPDATE I have executed ``` select dump(title, 1016), dump(title1, 1016) into v_title, v_title1 from dual ``` where title is the string passed in as a VARCHAR and title1 is the string passed in as an NVARCHAR. Unsurprisingly the encodings come through as WE8ISO8859P1 and AL16UTF16, but in both the ε comes through as hex 'BF'. This is the upside-down question mark. My only remaining thought is to pass this through as a RAW and then do something with it. However, I have not yet been able to figure out how to convert the string into an acceptable format with XQuery (OSB). Continued thanks for assistance.
Our DBA found the solution to this issue. The answer lay in a setting on the JDBC connection on the bus that tells it to convert UTF-8 literals to NCHAR. On the connection pool page, add the following lines to the Properties box: `oracle.jdbc.convertNcharLiterals=true` and `oracle.jdbc.defaultNchar=true`. This will allow you to insert into NVARCHAR2 fields while maintaining the UTF-8 characters. Cheers
First verify the data is being stored correctly, then use the correct NLS\_LANG settings. See my answer to this question: [when insert persian character in oracle db i see the question mark](https://stackoverflow.com/questions/24584283/when-insert-persian-character-in-oracle-db-i-see-the-question-mark/24597374#24597374)
Losing special characters on insert
[ "sql", "oracle", "xpath", "utf-8", "osb" ]
Suppose I have a list of inventory I have acquired/discarded: ``` Date Quantity Item 1/1/2000 10 Apple 2/2/2000 5 Orange 21/10/2000 1 Banana 4/1/2001 2 Apple 8/8/2000 -3 Orange ``` I would now like to generate a query that can produce the number of oranges, bananas and apples I have at every time point: ``` Date Quantity Item 1/1/2000 10 Apple 2/2/2000 5 Orange 21/10/2000 1 Banana 4/1/2001 12 Apple 8/8/2000 2 Orange ``` Is something like this possible in Access? I'm stumped and don't really know where to start on this.
Try this: ``` select curr.[Date], curr.[Quantity] as change_in_qty, sum(prev.[Quantity])+curr.[Quantity] as total_qty, curr.[Item] from tblname curr inner join tblname prev on curr.[Item] = prev.[Item] where prev.[Date] < curr.[Date] group by curr.[Date], curr.[Quantity], curr.[Item] union all select [Date], [Quantity], [Quantity], [Item] from tblname x where not exists ( select 1 from tblname y where y.[Item] = x.[Item] and y.[Date] < x.[Date] ) ``` This should sum up all rows for the given item where the date is less than the date on the current row.
If you are looking to see the quantity of each item by date, you may want to try this statement using a `GROUP BY`. ``` Select [Date],Count(*) As Quantity, Item from Inventory group by Date,Item ```
How do I use an Access SQL query to track inventory composition?
[ "sql", "ms-access" ]
Is there a way to achieve the following union query using only one query using IN clause with multiple columns? Instead of using ``` select * from table_a where field_a in (select field_1 from table_b) union select * from table_a where field_a in (select field_2 from table_b); ``` I want to create something that will look like the following: ``` select * from table_a where field_a in (select field_1,field_2 from table_b) ; ```
The most you can get is to put the union inside the subquery: ``` select * from table_a where field_a in (select field_1 from table_b union select field_2 from table_b) ``` Or: ``` select * from table_a where field_a in (select field_1 from table_b) or field_a in ( select field_2 from table_b) ```
The equivalent is this: ``` select * from table_a where field_a in (select field_1 from table_b) or field_a in (select field_2 from table_b) ``` NOT THIS: ``` select * from table_a where field_a in (select field_1, field_2 from table_b) ``` Because in the latter case, field1 and field2 would have to be appear on the same row of table\_b. In the UNION query that you want to mimic, that is not the case. You need 2 separate INs to mimic what that UNION query is doing. I answered a similar question on the difference between the above not long ago here: [Difference in two SQL query, but same result](https://stackoverflow.com/questions/24975902/difference-in-two-sql-query-but-same-result/24976129)
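That the union form and the two-IN form agree can be checked with SQLite via Python's `sqlite3` (invented rows; note the NULL in field_2 is handled identically by both):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_a (field_a INTEGER);
CREATE TABLE table_b (field_1 INTEGER, field_2 INTEGER);
INSERT INTO table_a VALUES (1),(2),(3),(4);
INSERT INTO table_b VALUES (1,3),(2,NULL);
""")

via_union = sorted(r[0] for r in con.execute("""
SELECT * FROM table_a
WHERE field_a IN (SELECT field_1 FROM table_b
                  UNION SELECT field_2 FROM table_b)"""))

via_or = sorted(r[0] for r in con.execute("""
SELECT * FROM table_a
WHERE field_a IN (SELECT field_1 FROM table_b)
   OR field_a IN (SELECT field_2 FROM table_b)"""))
```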
IN clause with multiple columns
[ "mysql", "sql" ]
The idea is to check for the last time the systems were synced up by selecting the start time of the last sync. The problem is that the table that holds the sync info is new and as it moves to production the table will be empty so this query breaks the system because it does not return anything: ``` select ST_DT_TM from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN' and SYNC_ID = (select MAX(SYNC_ID) from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN') ``` I've tried going the route of using a case to check if anything exists like this: ``` select case when exists (select ST_DT_TM from SYNCJOBLOG where SYS_ID = 'MF') then ST_DT_TM else '' end as ST_DT_TM from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN' and SYNC_ID = (select MAX(SYNC_ID) from [XREFDB].[dbo].[SYNCJOBLOG] where SYS_ID = 'MF' and STAT_CDE = 'FN')) ``` When something does exist in the table then it works but if nothing does then it returns an empty set of rows and breaks the system. If I take out the outer where clause and nothing is in the table then it works to return the blank row but if something is in the table then it returns multiple rows and I can't have that. Is there another way to go about checking if anything exists in the table and if not then return a blank row otherwise just return the row of the max SYNC\_ID?
I would change your system code to check the count of records returned and solve your issue that way. If that isn't possible try: ``` select ST_DT_TM from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN' and SYNC_ID = (select MAX(SYNC_ID) from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN') union select '' where not exists( select 1 from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN' and SYNC_ID = (select MAX(SYNC_ID) from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN')) ```
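A quick way to sanity-check this union-with-fallback pattern is SQLite from Python — a stand-in here, since the question targets SQL Server; the table name and columns follow the question, and the table is assumed empty at first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SYNCJOBLOG (SYNC_ID INT, SYS_ID TEXT, STAT_CDE TEXT, ST_DT_TM TEXT)")

query = """
SELECT ST_DT_TM FROM SYNCJOBLOG
WHERE SYS_ID = 'MF' AND STAT_CDE = 'FN'
  AND SYNC_ID = (SELECT MAX(SYNC_ID) FROM SYNCJOBLOG
                 WHERE SYS_ID = 'MF' AND STAT_CDE = 'FN')
UNION
SELECT '' WHERE NOT EXISTS
  (SELECT 1 FROM SYNCJOBLOG WHERE SYS_ID = 'MF' AND STAT_CDE = 'FN')
"""

# Empty table: the fallback branch contributes exactly one blank row.
empty_result = conn.execute(query).fetchall()

# With data present, the fallback disappears and the real value comes back.
conn.execute("INSERT INTO SYNCJOBLOG VALUES (1, 'MF', 'FN', '2014-01-01 08:00')")
filled_result = conn.execute(query).fetchall()
```

Either way the query returns exactly one row, which is what the consuming system needs.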
Try this: ``` select top 1 ST_DT_TM from ( select ST_DT_TM, 0 as IsAlternate from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN' and SYNC_ID = (select MAX(SYNC_ID) from SYNCJOBLOG where SYS_ID = 'MF' and STAT_CDE = 'FN') union all select 0 as ST_DT_TM, 1 as IsAlternate ) t order by IsAlternate; ```
SQL case with exists not working
[ "", "sql", "sql-server", "" ]
Let's say I have a simple schema of two tables, `users` and `posts`. The `posts` table has a foreign key to `users` indicating who authored the post. Also, let's say I want to list the users and their 3 most-recent posts. I can do this in O(n) queries (1 to list users, 1 for each user getting their posts), but how would I do this in O(1) queries? Either one query to get the users and posts all at once, or 2 queries, one to get the users and one to get the posts. Assume I would de-dupe any repeated user data.
A self join that should work on most DBs. This assumes that `post_time` is unique per user. If that's not the case, then you can replace `on p2.post_time >= p.post_time` with `on p2.id >= p.id`. ``` select u.username, p.id, p.title from users u join ( select p.id, p.title, p.user_id from posts p join posts p2 on p2.user_id = p.user_id and p2.post_time >= p.post_time group by p.id, p.title, p.user_id having count(*) <= 3 ) p on u.id = p.user_id ```
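To see the `HAVING COUNT(*) <= 3` trick in action, here is a small sketch using SQLite from Python (table and column names are assumed to match the question's schema, and `post_time` is unique per user as the answer requires):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INT, username TEXT);
CREATE TABLE posts (id INT, user_id INT, title TEXT, post_time INT);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO posts VALUES
  (1, 1, 'a1', 1), (2, 1, 'a2', 2), (3, 1, 'a3', 3),
  (4, 1, 'a4', 4), (5, 1, 'a5', 5),
  (6, 2, 'b1', 1);
""")

# For each post, the self join counts the same user's posts that are at
# least as recent; HAVING keeps rows where that count is 3 or fewer.
rows = conn.execute("""
    SELECT u.username, p.title
    FROM users u
    JOIN (SELECT p.id, p.title, p.user_id
          FROM posts p
          JOIN posts p2 ON p2.user_id = p.user_id
                       AND p2.post_time >= p.post_time
          GROUP BY p.id, p.title, p.user_id
          HAVING COUNT(*) <= 3) p ON u.id = p.user_id
    ORDER BY u.username, p.title
""").fetchall()
# alice keeps only her three most-recent posts; bob keeps his single post.
```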
You didn't state your DBMS, so this is ANSI SQL (supported by a wide range of DBMS): ``` select * from ( select u.username, p.title, row_number() over (partition by u.id order by p.post_time desc) as rn from users u join posts p on u.id = p.user_id ) t where rn <= 3 order by username; ```
SQL query: how to get users and their three latest posts?
[ "", "sql", "greatest-n-per-group", "" ]
I have a table where each address has different types, one row per type. How do I find the addresses that have all of the types I need? E.g. ``` ID TypID Street 1 1 Street 1 1 2 Street 1 2 2 Street 2 3 1 Street 3 3 2 Street 3 ``` In the above I need to find the addresses which have both type 1 and type 2. That query result should be addresses with id 1 and 3.
Group by the `id` and then count the different `typeid`s in the `having` clause ``` select id from your_table where typeid in (1,2) group by id having count(distinct typeid) = 2 ```
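As a sketch of why this works, here is the question's sample data run through SQLite from Python (a stand-in for the asker's SQL Server): the `WHERE` keeps only the wanted types, and the `HAVING` insists that both distinct types are present for an id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE addresses (id INT, typid INT, street TEXT)")
conn.executemany("INSERT INTO addresses VALUES (?, ?, ?)", [
    (1, 1, 'Street 1'), (1, 2, 'Street 1'),
    (2, 2, 'Street 2'),
    (3, 1, 'Street 3'), (3, 2, 'Street 3'),
])

rows = conn.execute("""
    SELECT id FROM addresses
    WHERE typid IN (1, 2)
    GROUP BY id
    HAVING COUNT(DISTINCT typid) = 2
    ORDER BY id
""").fetchall()
# Only addresses 1 and 3 have both type 1 and type 2; id 2 is filtered out.
```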
You can use INTERSECT for this ``` select id from tbl where typid = 1 intersect select id from tbl where typid = 2 ``` although it won't work in mysql if that happens to be the database you're using.
TSQL - Query for multiple types in multiple rows
[ "", "sql", "sql-server", "t-sql", "" ]
first table: ``` create table login( id int auto_increment, user varchar(20), pass varchar(20), primary key(id) ); ``` first table is created with primary key.. second table: ``` create table check( cid int, rid int, usname varchar(20), word varchar(20), FOREIGN KEY(rid) REFERENCES login(id) ); ``` Error: ``` ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'check (cid int, rid int, usname varchar(20), word varchar(20), FOREIGN KEY(rid) R' at line 1 ```
The problem isn't the foreign key. `check` is a reserved word in MySQL. So you need to choose another name or escape it: ``` create table `check`( cid int, rid int, usname varchar(20), word varchar(20), FOREIGN KEY(rid) REFERENCES login(id) ); ``` Ironically, although it is a reserved word, it is not actually used. Alas, I do wish that MySQL supported `check` constraints. By the way, [here](http://www.sqlfiddle.com/#!2/63e967) is a SQL Fiddle.
`check` is a reserved word in `MySQL`; either surround it with backticks or choose a different name.
foreign key not creating in table
[ "", "mysql", "sql", "" ]
I'm attempting to do something similar to [this question](https://stackoverflow.com/questions/1250156/how-do-i-return-rows-with-a-specific-value-first). I have this table: ![<code>tab_id</code> is the second column. <code>order_in_tab</code> is the fourth column.](https://i.stack.imgur.com/rgZyX.jpg) `tab_id` is the second column. `order_in_tab` is the fourth column. I want to order by `tab_id` equal to `2` first, then rest of `tab_id` ascending, then `order_in_tab` ascending. ``` select * from cam_to_tab_mapping where unit_id='90013550' order by (tab_id='2') asc, tab_id asc, order_in_tab asc ``` However, it says `Incorrect syntax at '='.`. I'm a complete SQL newbie, so I'm not sure what is wrong (or if I'm misunderstanding the linked solution from above).
Try changing the query like this: ``` select * from cam_to_tab_mapping where unit_id='90013550' order by CASE WHEN tab_id='2' THEN 1 ELSE 0 END DESC, tab_id asc, order_in_tab asc ```
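The `CASE` expression simply maps each row to a sort key, which is then ordered before the regular columns. A minimal SQLite sketch from Python (only the `tab_id` column is modeled; the question's table is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cam_to_tab_mapping (tab_id INT)")
conn.executemany("INSERT INTO cam_to_tab_mapping VALUES (?)", [(1,), (3,), (2,)])

# Rows with tab_id = 2 get sort key 1 (placed first via DESC);
# everything else gets 0 and falls back to plain ascending order.
rows = conn.execute("""
    SELECT tab_id FROM cam_to_tab_mapping
    ORDER BY CASE WHEN tab_id = 2 THEN 1 ELSE 0 END DESC,
             tab_id ASC
""").fetchall()
```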
I think you have a copy & paste error in your query. ``` select * from cam_to_tab_mapping where unit_id='90013550' order by (tab_id='2') asc, tab_id asc, order_in_tab asc ``` You have a logical expression as the first `order by` criteria. Maybe you meant ``` select * from cam_to_tab_mapping where unit_id='90013550' and tab_id='2' order by tab_id asc, order_in_tab asc ```
Incorrect syntax at '='
[ "", "sql", "sql-server", "" ]
``` SELECT s1.id, s3.food_name, Count(*) AS TotalRefill FROM (SELECT ( s1.food_value - s2.food_value ) AS difference FROM `serving_info` s1, `serving_info` s2 WHERE s1.id - s2.id = '1' AND s1.food_name = 'Shrimp' AND s2.food_name = 'Shrimp') AS diff, `serving_info` s3 WHERE s3.id = diff.id AND s3.food_value >= '990' AND diff.difference >= '150' ``` Result: `#1054 - Unknown column 's1.id' in 'field list'` ``` -- -- Table structure for table `employees` -- CREATE TABLE IF NOT EXISTS `employees` ( `id_user` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(128) NOT NULL, `email` varchar(64) NOT NULL, `phone_number` varchar(16) NOT NULL, `username` varchar(16) NOT NULL, `password` varchar(32) NOT NULL, `confirmcode` varchar(32) DEFAULT NULL, PRIMARY KEY (`id_user`), KEY (name) ); -- -- Table structure for table `Foods` -- CREATE TABLE IF NOT EXISTS `Foods` ( `Food_name` varchar(40) NOT NULL, `CostPerRefill` double NOT NULL, PRIMARY KEY (`Food_name`) ); -- -- Table structure for table `Serving_Info` -- CREATE TABLE IF NOT EXISTS `Serving_Info` ( `id` int(255) NOT NULL AUTO_INCREMENT, `Food_Value` varchar(40) NOT NULL, `Food_name` varchar(40) NOT NULL, `Served_On` TIMESTAMP DEFAULT CURRENT_TIMESTAMP, `Oncall` varchar(128), Foreign key(`Oncall`) REFERENCES `employees`(`name`), Foreign key(`Food_name`) REFERENCES `Foods`(`Food_name`), PRIMARY KEY (`id`), UNIQUE (`Oncall`,`Food_name`, `Served_On`) ); ``` What is causing the s1 to not be declared? For some reason the s1 instance isn't detected by the s1.id. I have been trying to figure out from a while by changing different brackets but I really cannot figure out how to debug this..I have tried changing the position of the close bracket but that would mess up the query.
What all the others here said is right, but just to give you a more precise solution, look at the query below: it should not be s1.id but diff.id ``` SELECT diff.id, s3.food_name, Count(*) AS TotalRefill FROM (SELECT ( s1.food_value - s2.food_value ) AS difference, s1.id FROM `serving_info` s1, `serving_info` s2 WHERE s1.id - s2.id = '1' AND s1.food_name = 'Shrimp' AND s2.food_name = 'Shrimp') AS diff, `serving_info` s3 WHERE s3.id = diff.id AND s3.food_value >= '990' AND diff.difference >= '150' ```
`s1` is defined inside the parentheses (the derived table) and thus not visible outside them, where you reference it.
SQL SELECT query not working
[ "", "sql", "" ]
I am using SQL to write up a query in which I am trying to choose records whose date\_from column (of type date/time) satisfies these conditions: * it can be any year * the months are June, July, August and September * AND IF IT IS JUNE, IT SHOULD ONLY CONSIDER FROM THE 15th OF JUNE ONWARDS Here's what I've tried: ``` Select name, surname FROM employees emp where month(emp.date_from) IN ('06' , '07' , '08', '09') ``` I have also tried using CASE but failed. Any help please?
``` WHERE MONTH(emp.date_from) IN (7,8,9) OR (MONTH(emp.date_from) = 6 AND DAY(emp.date_from) >= 15); ``` **UPDATE** Or as dates are treated as strings in `MySQL` ``` WHERE RIGHT(emp.date_from, 5) BETWEEN '06-15' AND '09-30'; ``` I don't know which would perform better however.
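The same month/day logic can be sketched with SQLite from Python (SQLite has no `MONTH()`/`DAY()`, so `strftime` stands in, and the dates are assumed ISO-formatted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, date_from TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)", [
    ('too_early', '2013-06-14'),
    ('june_ok',   '2013-06-15'),
    ('july_ok',   '2014-07-01'),
    ('too_late',  '2014-10-01'),
])

# July-September unconditionally, June only from the 15th; year is ignored.
rows = conn.execute("""
    SELECT name FROM employees
    WHERE CAST(strftime('%m', date_from) AS INT) IN (7, 8, 9)
       OR (CAST(strftime('%m', date_from) AS INT) = 6
           AND CAST(strftime('%d', date_from) AS INT) >= 15)
    ORDER BY date_from
""").fetchall()
```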
SQL Server 2012: ``` SELECT * FROM employees WHERE DateFromParts(2000,month(date_from),day(date_from)) between '2000-06-15' and '2000-09-30' ``` The year 2000 is chosen because it is a leap year and will handle leap-year issues around 2000-02-29.
SQL Date Wildcards
[ "", "mysql", "sql", "sql-server", "" ]
I have a database with two different tables. ``` **call_code** activity call_id activity_id maintCall_plan activity_desc maintCall_unplanned contact_person creditCalls day newBussCalls activity_date phoneCalls revenue time ``` I am having trouble debugging my code. The problem is syntax error in `insert into` statement and `conConnection.Execute` is highlighted. Here's my code snippet: ``` Private Sub Command1_Click() Dim conConnection As ADODB.Connection Dim cmdCommand As New ADODB.Command Dim strSql As String Set conConnection = New ADODB.Connection conConnection.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _ App.Path & "\" & "db_weekActRep.mdb;Mode=Read|Write" conConnection.CursorLocation = adUseClient conConnection.Open sqlStr = "INSERT INTO activity(activity_desc, contact_person, day, activity_date, revenue, time) VALUES (" sqlStr = sqlStr & "'" & txtAct8am.Text & "'," sqlStr = sqlStr & "'" & txtComp8am.Text & "'," sqlStr = sqlStr & "'" & Label31.Caption & "'," sqlStr = sqlStr & "'" & Label20.Caption & "'," sqlStr = sqlStr & "'" & txtRev8am.Text & "'," sqlStr = sqlStr & "'" & Label9.Caption & "')" conConnection.Execute sqlStr Select Case Combo1.ListIndex Case 0 sqlStr = "INSERT INTO call_code(maintCall_plan) VALUES (" sqlStr = sqlStr & "'" & "1" & "')" conConnection.Execute sqlStr Case 1 sqlStr = "INSERT INTO call_code(maintCall_unplanned) VALUES (" sqlStr = sqlStr & "'" & "2" & "')" conConnection.Execute sqlStr Case 2 sqlStr = "INSERT INTO call_code(creditCalls) VALUES (" sqlStr = sqlStr & "'" & "3" & "')" conConnection.Execute sqlStr Case 3 sqlStr = "INSERT INTO call_code(newBussCalls) VALUES (" sqlStr = sqlStr & "'" & "4" & "')" conConnection.Execute sqlStr Case 4 sqlStr = "INSERT INTO call_code(phoneCalls) VALUES (" sqlStr = sqlStr & "'" & "5" & "')" conConnection.Execute sqlStr End Select End Sub ``` Also, I wanted to know if is it possible to insert data into two different tables using one click-event or button? 
If it is, am I doing it right? By the way, I didn't include the `activity_id` for the activity table because it is auto-incrementing. Same with the `call_id` for the `call_code` table. Any help would be much appreciated.
I get it now guys.. I'm just missing brackets around some of my fields. Here's the code snippet: ``` sqlStr = "INSERT INTO activity(activity_desc, contact_person, revenue, [day], activity_date, [time]) VALUES (" sqlStr = sqlStr & "'" & txtAct8am.Text & "'," sqlStr = sqlStr & "'" & txtComp8am.Text & "'," sqlStr = sqlStr & "'" & txtRev8am.Text & "'," sqlStr = sqlStr & "'" & Label31.Caption & "'," sqlStr = sqlStr & "'" & Label20.Caption & "'," sqlStr = sqlStr & "'" & Label9.Caption & "')" conConnection.Execute sqlStr ``` I just needed to enclose the day and time fields in brackets. Thanks for your response, I really appreciate it. My program is now running. Thanks again
Your first `INSERT` statement is declared the wrong way: you list two fields (`activity_desc, contact_person`) but you try to insert more than two values (six, to be precise). So you must add the other 4 missing fields ``` sqlStr = "INSERT INTO activity(activity_desc, contact_person) VALUES (" sqlStr = sqlStr & "'" & txtAct8am.Text & "'," sqlStr = sqlStr & "'" & txtComp8am.Text & "'," sqlStr = sqlStr & "'" & Label31.Caption & "'," sqlStr = sqlStr & "'" & Label20.Caption & "'," sqlStr = sqlStr & "'" & txtRev8am.Text & "'," sqlStr = sqlStr & "'" & Label9.Caption & "')" ``` Add the 4 missing fields in the INSERT INTO statement. The second `INSERT` statements (in the `CASE` statement) don't fill the primary key. If you use an auto-incremental field that's OK; otherwise it's wrong, because a primary key can't be NULL.
Syntax error in my SQL statement "INSERT INTO"
[ "", "sql", "ms-access", "vb6", "" ]
I'm working through a problem, working with SQL Oracle. I'm getting the right results beside a row that should be showing up that has a count of 0. The question is **Question:** For each section of the Project Management course, list the section ID, location and number of students enrolled. Sort by section ID. **My Code:** ``` SELECT s.Section_Id, s.Location, COUNT(*) AS Enrolled FROM Course c, Section s, Enrollment e WHERE c.Course_No = s.Course_No AND s.Section_Id = e.Section_Id AND c.Description = 'Project Management' GROUP BY c.Course_No, s.Location, s.Section_Id ORDER BY s.Section_Id; ``` **My Results:** ``` SECTION_ID LOCATION ENROLLED ---------- ------------------- ---------- 48 L211 4 119 L211 3 120 L214 2 ``` **Expected Results:** ``` SECTION_ID LOCATION ENROLLED ---------- ------------------- ---------- 48 L211 4 119 L211 3 120 L214 2 121 L507 0 ``` So as you can see I'm missing the row with 0 enrolled on my results and can't seem to get that row to appear. Also you will notice that it is a section id and location that go with it for that project management class but it won't appear. I'm not sure what I'm doing wrong. Any help would be great, also [here is the Schema](http://authors.phptr.com/rischert/documents/AppendixD.pdf). DBMS: I'm using Oracle SQL Developer
How about joining to the tables instead of using the WHERE clause: ``` SELECT s.Section_Id, s.Location, COUNT(e.Section_Id) AS Enrolled FROM course c LEFT JOIN section s ON c.Course_No = s.Course_No LEFT JOIN enrollment e ON s.Section_Id = e.Section_Id WHERE c.Description = 'Project Management' GROUP BY c.Course_No, s.Location, s.Section_Id ORDER BY s.Section_Id; ```
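The key points are the `LEFT JOIN` and counting a column from the outer-joined table (`COUNT(e.Section_Id)` rather than `COUNT(*)`), so unmatched sections count as 0 instead of disappearing. A stripped-down SQLite sketch from Python (the course table is omitted for brevity; names are assumed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE section (section_id INT, location TEXT);
CREATE TABLE enrollment (section_id INT, student_id INT);
INSERT INTO section VALUES (48, 'L211'), (121, 'L507');
INSERT INTO enrollment VALUES (48, 1), (48, 2);
""")

# COUNT(e.section_id) ignores the NULLs produced by the outer join,
# so section 121 shows up with an enrollment of 0.
rows = conn.execute("""
    SELECT s.section_id, s.location, COUNT(e.section_id) AS enrolled
    FROM section s
    LEFT JOIN enrollment e ON s.section_id = e.section_id
    GROUP BY s.section_id, s.location
    ORDER BY s.section_id
""").fetchall()
```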
You need to outer join to course and enrollment otherwise you won't see sections that don't have any courses/enrollments. You can either use the ANSI syntax ...LEFT JOIN... etc or the old Oracle syntax using (+) against the columns of the deficient table:- ``` SELECT s.Section_Id, s.Location, COUNT(*) AS Enrolled FROM Course c, Section s, Enrollment e WHERE c.Course_No (+) = s.Course_No AND s.Section_Id = e.Section_Id (+) AND c.Description (+) = 'Project Management' GROUP BY c.Course_No, s.Location, s.Section_Id ORDER BY s.Section_Id; ``` These days I would use the ANSI syntax...
Row that should have a 0 count not showing
[ "", "sql", "oracle", "join", "" ]
**Is there a way to store the WHERE clause statements as a callable variable?** I need to run the query below about 20 times using different 'Date' and 'ID' values, but the 'Code' values will stay the same. However, after the 20 queries, I will need to change the 'Code' values to another set of values and use the same 20 'Date' and 'ID' combinations. I am using SQL Server Management Studio 2012. **Edit:** This is actually a subquery for me to count the number of records that result from it. Each count query is union'ed to additional count queries so I can execute all queries at once and have the result show 1 column with the counts. I would like to know how to just make an excel table with the results where each column is specific to a 'Code' set, but I haven't looked into it yet. ``` SELECT DISTINCT a,b,c FROM mytable WHERE ((Code BETWEEN '201' AND '205') OR (Code BETWEEN '211' AND '215') OR (Code BETWEEN '241' AND '245') OR (Code = '450') OR (Code BETWEEN '381' AND '387') OR (Code BETWEEN '391' AND '397') OR (Code BETWEEN '401' AND '420') OR (Code BETWEEN '441' AND '444') OR (Code BETWEEN '358' AND '360') OR (Code BETWEEN '371' AND '937') OR (Code = '499') OR (Code BETWEEN '218' AND '239')) AND (Date > '20101231' AND Date < '20120101') AND (ID IN ('3','6','7')) ``` How my Code values change: ``` WHERE (Code IN ('791','792')) OR (Code BETWEEN '801' AND '899') OR (Code BETWEEN '101' AND '125') OR (Code BETWEEN '401' AND '429') WHERE (Code BETWEEN '281' AND '749') OR Code = '2037' ``` There are 2 other Code lists, for a total of 5, which involve BETWEEN, IN, and = statements. I can post these as well if it helps, but they follow the same manner of declaration as above. The Date values are either for calendar years or quarters. The ID values are either stated via IN, NOT IN, or =. They are all char, varchar, tinyint, or date. **Alternative:** I could just copy and paste over and over, replacing the code, but would like to improve my coding ability.
Also, Find and Replace only reads single lines. I can't get it to work for multiple lines, such as the WHERE clause. I've heard of regular expressions but am not sure how to use them to do what I want. Thanks for the help!
In this scenario, I would use a table variable to store the code values, like below. The @t table here is a table variable, i.e. an in-memory table, but it could also be a permanent table. The table contains a column called `[key]` (bracketed because `KEY` is a reserved word in SQL Server); the key column holds an integer for each set of code ranges you want to store. For the example below I used the values 1 and 2. ``` declare @t TABLE ( [key] INT, FROM_code VARCHAR(10), TO_CODE VARCHAR(10) ) insert into @T VALUES (1,'791','792') insert into @T VALUES (1,'801','899') insert into @T VALUES (1,'101','125') insert into @T VALUES (1,'401','429') Insert into @t values (2,'281','749') Insert into @t values (2,'2037','2037') ``` Now the select statement can be written as below ``` Declare @KEY INT -- pass key as an input parameter to the stored procedure. Set @key = 1 SELECT distinct a,b,c From <my_table> M JOIN @T T ON T.[key] = @key WHERE m.code between from_code and to_code ``` For the second iteration just replace @key=1 with @key=2
If this is an ad hoc query that you're just going to run manually a handful of times, this is ugly but correct. If this is something you'll be doing somewhat regularly, you can store your values in another table. You could have a table called "coderange" that had two columns, "minCode" and "maxCode". Then you'd rewrite this query: ``` SELECT DISTINCT a,b,c FROM mytable m JOIN coderange c ON m.Code BETWEEN c.minCode AND c.maxCode WHERE m.Date > '20101231' AND m.Date < '20120101' AND m.ID IN ('3','6','7'); ```
How to store a list of values as a variable to perform multiple SQL queries?
[ "", "sql", "sql-server", "sql-server-2012", "table-variable", "" ]
I have the following query in a stored procedure in SQL server: ``` SELECT TLI.LESNumber ,COUNT(TLT.PL) INTO #PWCM FROM #tmpLESImport TLI INNER JOIN tbl_LES L on TLI.LESNumber=L.NUMB WHERE ISNULL(L.DELT_FLAG,0)=0 AND L.SCHL_PK=@SCHL_PK AND TLI.PL IS NOT NULL AND LEN(TLI.PL)>0 GROUP BY LESNumber HAVING COUNT(PL)>1 ``` When the query is run I get the following error: `An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name.` Can anyone tell me why? `#PWCM` does not appear anywhere until this query.
When you `SELECT INTO` a table, it creates the table (in this case, a temp table). In order to create a table, each column needs a name, which your `count` column does not. You just need to give it a name: ``` SELECT TLI.LESNumber,COUNT(TLT.PL) [NumRecords] INTO #PWCM FROM #tmpLESImport TLI ... ```
I had this error for this query ``` SELECT CASE WHEN COALESCE([dbo].[my-table].[field],"") = '...' THEN 'A' WHEN COALESCE([dbo].[my-table].[field],"") = '...' THEN 'B' ... END AS aaa INTO ##TEMPTABLE FROM [dbo].[my-table] ``` It turns out I had to change the `""` inside the COALESCE into `''`. That solved it for me.
SQL Server query erroring with 'An object or column name is missing or empty'
[ "", "sql", "sql-server", "" ]
I have a table that looks something like this: ``` __________________________________________________________________________________ |id |code |code_status |code_due_date|code2 |code_status2 |code_due_date2|...| |1 |ABCD |A |MMDDYYYY |Null |Null |Null |...| |2 |Null |Null |Null |Null |Null |Null |...| |3 |EFGH |A |MMDDYYYY |ABCD |B |MMDDYYYY |...| |... |... |... |... |... |... |... |...| ---------------------------------------------------------------------------------- ``` Actual Table structure is: ``` CREATE TABLE MY_DATA ( PRSN_UNIV_ID VARCHAR2(11), CHKL_ITM_CD_1 VARCHAR2(6), CHKL_ITM_CD_2 VARCHAR2(6), CHKL_ITM_CD_3 VARCHAR2(6), CHKL_ITM_CD_4 VARCHAR2(6), CHKL_ITM_CD_5 VARCHAR2(6), CHKL_ITM_CD_6 VARCHAR2(6), CHKL_ITM_CD_7 VARCHAR2(6), CHKL_ITM_CD_8 VARCHAR2(6), CHKL_ITM_CD_9 VARCHAR2(6), CHKL_ITM_CD_10 VARCHAR2(6), CHKL_ITM_CD_11 VARCHAR2(6), CHKL_ITM_CD_12 VARCHAR2(6), CHKL_ITM_CD_13 VARCHAR2(6), CHKL_ITM_CD_14 VARCHAR2(6), CHKL_ITM_CD_15 VARCHAR2(6), CHKL_ITM_CD_16 VARCHAR2(6), CHKL_ITM_CD_17 VARCHAR2(6), CHKL_ITM_CD_18 VARCHAR2(6), CHKL_ITM_CD_19 VARCHAR2(6), CHKL_ITM_CD_20 VARCHAR2(6), CHKL_ITM_CD_21 VARCHAR2(6), CHKL_ITM_CD_22 VARCHAR2(6), CHKL_ITM_CD_23 VARCHAR2(6), CHKL_ITM_CD_24 VARCHAR2(6), CHKL_ITM_CD_25 VARCHAR2(6), CHKL_ITM_CD_26 VARCHAR2(6), CHKL_ITM_CD_27 VARCHAR2(6), CHKL_ITM_CD_28 VARCHAR2(6), CHKL_ITM_CD_29 VARCHAR2(6), CHKL_ITM_CD_30 VARCHAR2(6), CHKL_ITM_STAT_CD_1 VARCHAR2(1), CHKL_ITM_STAT_CD_2 VARCHAR2(1), CHKL_ITM_STAT_CD_3 VARCHAR2(1), CHKL_ITM_STAT_CD_4 VARCHAR2(1), CHKL_ITM_STAT_CD_5 VARCHAR2(1), CHKL_ITM_STAT_CD_6 VARCHAR2(1), CHKL_ITM_STAT_CD_7 VARCHAR2(1), CHKL_ITM_STAT_CD_8 VARCHAR2(1), CHKL_ITM_STAT_CD_9 VARCHAR2(1), CHKL_ITM_STAT_CD_10 VARCHAR2(1), CHKL_ITM_STAT_CD_11 VARCHAR2(1), CHKL_ITM_STAT_CD_12 VARCHAR2(1), CHKL_ITM_STAT_CD_13 VARCHAR2(1), CHKL_ITM_STAT_CD_14 VARCHAR2(1), CHKL_ITM_STAT_CD_15 VARCHAR2(1), CHKL_ITM_STAT_CD_16 VARCHAR2(1), CHKL_ITM_STAT_CD_17 VARCHAR2(1), CHKL_ITM_STAT_CD_18 VARCHAR2(1), CHKL_ITM_STAT_CD_19 
VARCHAR2(1), CHKL_ITM_STAT_CD_20 VARCHAR2(1), CHKL_ITM_STAT_CD_21 VARCHAR2(1), CHKL_ITM_STAT_CD_22 VARCHAR2(1), CHKL_ITM_STAT_CD_23 VARCHAR2(1), CHKL_ITM_STAT_CD_24 VARCHAR2(1), CHKL_ITM_STAT_CD_25 VARCHAR2(1), CHKL_ITM_STAT_CD_26 VARCHAR2(1), CHKL_ITM_STAT_CD_27 VARCHAR2(1), CHKL_ITM_STAT_CD_28 VARCHAR2(1), CHKL_ITM_STAT_CD_29 VARCHAR2(1), CHKL_ITM_STAT_CD_30 VARCHAR2(1), CHKL_ITM_DUE_DT_1 DATE, CHKL_ITM_DUE_DT_2 DATE, CHKL_ITM_DUE_DT_3 DATE, CHKL_ITM_DUE_DT_4 DATE, CHKL_ITM_DUE_DT_5 DATE, CHKL_ITM_DUE_DT_6 DATE, CHKL_ITM_DUE_DT_7 DATE, CHKL_ITM_DUE_DT_8 DATE, CHKL_ITM_DUE_DT_9 DATE, CHKL_ITM_DUE_DT_10 DATE, CHKL_ITM_DUE_DT_11 DATE, CHKL_ITM_DUE_DT_12 DATE, CHKL_ITM_DUE_DT_13 DATE, CHKL_ITM_DUE_DT_14 DATE, CHKL_ITM_DUE_DT_15 DATE, CHKL_ITM_DUE_DT_16 DATE, CHKL_ITM_DUE_DT_17 DATE, CHKL_ITM_DUE_DT_18 DATE, CHKL_ITM_DUE_DT_19 DATE, CHKL_ITM_DUE_DT_20 DATE, CHKL_ITM_DUE_DT_21 DATE, CHKL_ITM_DUE_DT_22 DATE, CHKL_ITM_DUE_DT_23 DATE, CHKL_ITM_DUE_DT_24 DATE, CHKL_ITM_DUE_DT_25 DATE, CHKL_ITM_DUE_DT_26 DATE, CHKL_ITM_DUE_DT_27 DATE, CHKL_ITM_DUE_DT_28 DATE, CHKL_ITM_DUE_DT_29 DATE, CHKL_ITM_DUE_DT_30 DATE ) ``` I need to do something similar the the following pseudo code, but I do not know how to write it or if it even possible. I need to first loop through each row and test if a column in that row is equal to a code. If it is I want to take the status and due date of the same number and put it in the correct variable. There are about 30 different codes that I need to loop through to test for. ``` Declare variables variable_code_status variable_code_due_date variable_code_status2 variable_code_due_date2 variable_code_status3 variable_code_due_date3 ... 
for each row for each column if column = code set variable_code_status = code_status set variable_code_due_date = code_due_date end if if column = code2 set variable_code_status = code_status2 set variable_code_due_date = code_due_date2 end if if column = code3 set variable_code_status = code_status3 set variable_code_due_date = code_due_date3 end if ... end for end for insert into table( variable_code_status variable_code_due_date variable_code_status2 variable_code_due_date2 variable_code_status3 variable_code_due_date3 ...); ``` I'm not sure if this will make sense to anyone or not.
You can do it like this one: ``` declare id MY_DATA.PRSN_UNIV_ID%type; cd varchar2(6); stat varchar2(1); due date; cur sys_refcursor; begin for i in 1..30 loop open cur for 'select PRSN_UNIV_ID, CHKL_ITM_CD_'||i||', CHKL_ITM_STAT_CD_'||i||', CHKL_ITM_DUE_DT_'||i||' from MY_DATA'; loop fetch cur into id, cd, stat, due; exit when cur%notfound; if .... then --> put here all your conditions insert into .... end if; end loop; end loop; end; ```
You could use a `SYS_REFCURSOR` and the `DBMS_SQL` package to dynamically address this requirement. The below procedure is a generic example of how to dynamically interact with a query, in this case it simply prints the results to the console. You could modify it to test the column descriptions at each iteration and perform some different operation. (This works in 11.2.0.3.0) ``` PROCEDURE Output_Any_Query ( pcRefCursor IN OUT SYS_REFCURSOR ) IS vnCursorNum NUMBER; vnColCount NUMBER; vtabDescriptions DBMS_SQL.DESC_TAB; vsColumnValue VARCHAR2(4000); BEGIN vnCursorNum := DBMS_SQL.to_cursor_number(pcRefCursor); DBMS_SQL.DESCRIBE_COLUMNS(vnCursorNum, vnColCount, vtabDescriptions); --Print the column headers FOR i IN 1..vnColCount LOOP dbms_sql.define_column(vnCursorNum, i, vsColumnValue, 4000); dbms_output.put(RPAD(LPAD(vtabDescriptions(i).col_name,15),20)||'|'); END LOOP; --End of line dbms_output.put_line(''); --loop through the rows WHILE ( dbms_sql.fetch_rows(vnCursorNum) > 0 ) LOOP --loop across the columns FOR i IN 1 .. vnColCount LOOP dbms_sql.column_value(vnCursorNum, i, vsColumnValue); dbms_output.put(RPAD(NVL(vsColumnValue,' '),20)||'|'); END LOOP; --End of line dbms_output.put_line(''); END LOOP; END; ```
Looping through rows and then columns in Oracle
[ "", "sql", "oracle", "plsql", "" ]
I am trying to update a column of the table using the below query.. But I get an error > An aggregate may not appear in the set list of an UPDATE statement Code: ``` UPDATE Test.dbo.Table1 SET InDate = MIN(b.Date) FROM Test.dbo.Table1 a LEFT OUTER JOIN Test.dbo.Table2 b ON a.ID1 = b.ID2 WHERE b.Code = 'IN'; ``` I want to update the InDate column in my table with the oldest date from Table2 (b.Date) column where (b.code) is 'IN' What is wrong in here?
You need to put the aggregate in a temp table or subquery and you need an explicit `GROUP BY` statement. ``` UPDATE Test.dbo.Table1 SET InDate = min_date FROM Test.dbo.Table1 c inner join (SELECT a.id1, MIN(b.Date) min_date FROM Test.dbo.Table1 a LEFT OUTER JOIN Test.dbo.Table2 b ON a.ID1 = b.ID2 Group by a.id1) d ON c.ID1 = d.ID1 WHERE c.Code = 'IN'; ```
Assuming your data model is something like the following, joining to a derived table should do the trick: ``` --Data Setup: DECLARE @Table1 TABLE (ID1 INT, InDate DATETIME) DECLARE @Table2 TABLE (ID2 INT, ID1 INT, Date DATETIME, Code VARCHAR(12)) INSERT INTO @Table1 (ID1) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10) INSERT INTO @Table2 (ID2, ID1, Date, Code) VALUES (1, 1, '1/1/2014', 'OUT'), (2, 1, '5/1/2014', 'IN'), (3, 1, '3/1/2013', 'IN'), (4, 2, '1/1/2014', 'OUT'), (5, 2, '1/1/2014', 'IN'), (6, 3, '1/1/2014', 'IN'), (7, 4, '1/1/2014', 'IN'), (8, 5, '1/1/2014', 'IN'), (9, 6, '2/1/2014', 'OUT'), (10, 7, '3/1/2014', 'IN'), (11, 8, '4/1/2014', 'IN'), (12, 9, '2/1/2014', 'IN'), (12, 9, '2/1/2014', 'IN'), (12, 10, '1/2/2014', 'IN'), (12, 10, '1/3/2014', 'IN'), (12, 10, '1/4/2014', 'IN'), (12, 10, '1/1/2014', 'OUT') --Actual Update: UPDATE T1 SET InDate = T2.MinDate FROM @Table1 T1 JOIN (SELECT T2.ID1, MIN(Date) AS MinDate FROM @Table2 T2 WHERE T2.Code = 'IN' GROUP BY T2.ID1) T2 ON T2.ID1 = T1.ID1 --Results SELECT * FROM @Table1 ```
Updating Using Aggregate Function
[ "", "sql", "sql-server", "" ]
**I currently have** a table `Bugs` that looks like this: ``` |ID |Priority |Created |Updated |Status |Category |X123 |Major |01/01 |01/03 |Open |A |X145 |Normal |01/01 |01/02 |Closed |B |X678 |Minor |01/03 |01/03 |Open |C |X763 |Major |01/02 |01/03 |Closed |C ``` All columns are varchar(25) except Created and Updated, which are dates. **I need to create** a view with the following format: ``` |Date |Major |Normal |Minor |Category |01/01 |4 |3 |4 |A |01/01 |3 |5 |2 |B |01/01 |2 |4 |7 |C |01/02 |7 |3 |4 |A |01/02 |3 |9 |5 |B |01/02 |1 |6 |3 |C ``` Where the numbers under Major, Normal, and Minor are the count of CURRENTLY OPEN bugs of that priority on a given date. By currently open, I mean this: open bugs are active on the interval `Created`-`GETDATE()`, closed bugs are active on the interval `Created`-`Updated`. **I have** a list of all the dates I need through this query: ``` WITH D AS ( SELECT Dates AS DateValue FROM DatesTable WHERE Dates >= '2012-03-23' AND Dates <= GETDATE() ), ``` Any ideas of how I might do this? I've played with the idea of a pivot query and grouping, but I've not been able to cover everything I need. Your help is greatly appreciated! --- EDIT: The `Bugs` table with example data ``` CREATE TABLE [dbo].[Bugs]( [ID] [varchar](25) NOT NULL, [Priority] [varchar](25) NOT NULL, [Updated] [date] NOT NULL, [Created] [date] NOT NULL, [Status] [varchar](25) NOT NULL, [Category] [varchar](25) NOT NULL, ) ON [PRIMARY] INSERT INTO Bugs VALUES ('X123', 'Major', '01/01/12', '01/03/12', 'Open', 'A') INSERT INTO Bugs VALUES ('X145', 'Normal', '01/01/12', '01/02/12', 'Closed', 'B') INSERT INTO Bugs VALUES ('X678', 'Minor', '01/03/12', '01/03/12', 'Open', 'C') INSERT INTO Bugs VALUES ('X763', 'Major', '01/02/12', '01/03/12', 'Closed', 'C') ```
This is what I tested at last: ``` SELECT d.Dates as Date, SUM(CASE Priority WHEN 'Major' THEN 1 ELSE 0 END) as Major, SUM(CASE Priority WHEN 'Normal' THEN 1 ELSE 0 END) as Normal, SUM(CASE Priority WHEN 'Minor' THEN 1 ELSE 0 END) as Minor, b.Category FROM Bugs b INNER JOIN DatesTable d ON d.Dates >= b.Created WHERE (Status = 'Closed' AND d.Dates <= Updated OR Status = 'Open') AND d.Dates <= GETDATE() AND d.Dates >= '2012-01-01' GROUP BY d.Dates, b.Category ORDER BY d.Dates ```
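The moving parts here are the date-range join (a bug is "active" on every date from `Created` until `Updated`, or until today if still open) plus conditional aggregation per priority. A reduced SQLite sketch from Python (two bugs, one category, three dates; the `GETDATE()` upper bound is dropped for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bugs (id TEXT, priority TEXT, created TEXT, updated TEXT,
                   status TEXT, category TEXT);
INSERT INTO bugs VALUES
  ('X1', 'Major',  '2012-01-01', '2012-01-03', 'Open',   'A'),
  ('X2', 'Normal', '2012-01-01', '2012-01-02', 'Closed', 'A');
CREATE TABLE dates (d TEXT);
INSERT INTO dates VALUES ('2012-01-01'), ('2012-01-02'), ('2012-01-03');
""")

# Each bug row is multiplied across every date on which it was active,
# then SUM(CASE ...) pivots the priorities into columns per date/category.
rows = conn.execute("""
    SELECT d.d,
           SUM(CASE b.priority WHEN 'Major'  THEN 1 ELSE 0 END) AS major,
           SUM(CASE b.priority WHEN 'Normal' THEN 1 ELSE 0 END) AS normal,
           b.category
    FROM bugs b
    JOIN dates d ON d.d >= b.created
    WHERE b.status = 'Open' OR (b.status = 'Closed' AND d.d <= b.updated)
    GROUP BY d.d, b.category
    ORDER BY d.d
""").fetchall()
# The closed Normal bug stops counting after its Updated date.
```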
Something like this might what you are looking for. ``` select b.Date , SUM(case when b.Priority = 'Major' then 1 end) as Major , SUM(case when b.Priority = 'Normal' then 1 end) as Normal , SUM(case when b.Priority = 'Minor' then 1 end) as Minor , b.Category from Bugs b group by b.Date, b.Category order by b.Date ```
Grouping types of bugs by date and priority
[ "", "sql", "sql-server", "t-sql", "" ]
This code compiles, but in TOAD it won't show the "hi wo" output ``` CREATE OR REPLACE PROCEDURE AdelTest IS tmpVar NUMBER; BEGIN DBMS_OUTPUT.ENABLE(100: in INTEGER); DBMS_OUTPUT.PUT_LINE('hi wo'); tmpVar := 0; EXCEPTION WHEN NO_DATA_FOUND THEN NULL; WHEN OTHERS THEN --consider logging error then r-raise RAISE; END AdelTest; ``` How do I show the output(similar to how println shoots to console in Java ) ?
You need to enable DBMS Output. If working in the Editor right-click and choose DBMS Output off of the Desktop flyout menu. There's a button that is red if it's disabled. Click it and it will turn green. Then execute your code. Output will display there. If you are working outside of the Editor (in Schema Browser for instance) select DBMS Output off of the main Toad View menu. Enable your output there. Output will display in that window.
You can also view server output if, for some reason, the DBMS Output window is not getting enabled: use the `QSR` editor embedded within Toad for Oracle. How to use it: 1. select the code in place 2. Editor > Execute `SQL` via `QSR` Put `set serveroutput on` as the first statement in the QSR window and run; it will then show the output in that window. Hope it helps!
How to get my PUT_LINE statement to display in TOAD?
[ "", "sql", "toad", "" ]
I need to SUM Col3 but the non-empty value for Col2 needs to be retained. Is there a way to accomplish this with a GROUP BY? Table ``` Col1 Col2 Col3 ----------------------------------- 123 abc 2 123 3 ``` `SELECT` query: ``` SELECT Col1, Col2, SUM(Col3) FROM Table GROUP BY Col1, Col2 ``` Desired result: ``` Col1 Col2 Col3 -------------------------------- 123 abc 5 ```
Group by only Col1: ``` SELECT Col1, Max(Col2), SUM(Col3) FROM Table GROUP BY Col1 ```
Add a filter on col2 with a `HAVING` clause. This removes the second row, which has the NULL value. Note that we cannot drop that row from the result and still use it in `SUM(col3)`; hence the result would be 123, abc, 2 and not 123, abc, 5. ``` select col1, col2, sum(col3) as c from table2 group by col1, col2 having col2 is not null ```
How to overwrite empty column values in a GROUP BY with SQL Server?
[ "", "sql", "sql-server", "group-by", "" ]
In my query's `where` clause I have the condition: ``` User.DateOfBirth BETWEEN @startDate AND @endDate ``` Where `@startDate` and `@endDate` are nullable. If `@startDate` is null, I want all values less than or equal to `@endDate`; if `@endDate` is null, I want all values greater than or equal to `@startDate`; if both are null I want all values. My failed attempt returns 0 results: ``` ((( User.DateOfBirth > @startDate) OR (@startDate Is null)) AND (( User.DateOfBirth < @endDate) OR (@endDate is null)) ) ``` (Editor's note: `BETWEEN` includes the end points (less/greater than *or equal*), but the description for the null cases didn't. I've assumed this is an oversight.)
Try this: ``` [User].DateOfBirth BETWEEN ISNULL(@startDate,[User].DateOfBirth) AND ISNULL(@endDate,[User].DateOfBirth) ```
Two approaches spring to mind: ### Treat the four cases separately and then OR them together: 1. start and end are null: any date matches, 2. start is null, so need DoB <= end 3. end is null, so need DoB >= start 4. neither is null, so need between This will lead to a long expression. ### Use [`IsNull`](https://stackoverflow.com/a/24881013/67392): As shown by mehdi lotfi in his [answer](https://stackoverflow.com/a/24881013/67392).
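For reference, a sketch of the OR-combined predicate; in practice the four cases collapse into two NULL-tolerant conditions (column and variable names taken from the question):

```sql
-- The two NULL checks cover cases 1-3; the AND of both ranges covers case 4
WHERE (@startDate IS NULL OR [User].DateOfBirth >= @startDate)
  AND (@endDate IS NULL OR [User].DateOfBirth <= @endDate)
```

Unlike the `ISNULL` rewrite, this form stays NULL-safe even if `DateOfBirth` itself can be NULL (such rows are simply excluded by both comparisons).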
How can I include null values in BETWEEN clause?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I've moved to PostgreSQL from MySQL, but got a strange problem. My query worked perfectly in MySQL. ``` SELECT MIN("events"."schedule") AS schedule FROM "events" WHERE ("events"."state" IN (1)) AND ("events"."schedule" >= '2014-07-01') GROUP BY EXTRACT(YEAR FROM "events"."schedule"), EXTRACT(MONTH FROM "events"."schedule") ORDER BY "events"."schedule" DESC ``` PostgreSQL gave me this error: ``` ERROR: column "events.schedule" must appear in the GROUP BY clause or be used in an aggregate function LINE 6: ORDER BY "events"."schedule" DESC ^ ********** Error ********** ERROR: column "events.schedule" must appear in the GROUP BY clause or be used in an aggregate function SQL state: 42803 Character: 238 ``` How should I rewrite this query to have "events.schedule" in the GROUP BY clause when I group data by two columns at once?
If you want to order by the resulting column (the min(schedule)) then you can use the following: ``` SELECT MIN("events"."schedule") AS schedule FROM "events" WHERE "events"."state" IN (1) AND "events"."schedule" >= '2014-07-01' GROUP BY EXTRACT(YEAR FROM "events"."schedule"), EXTRACT(MONTH FROM "events"."schedule") ORDER BY 1 desc ``` The reason you're getting an error is because you're trying to order by the schedule column itself, not the aggregated min(schedule)
You can also simplify quite a bit: ``` SELECT MIN(schedule) AS schedule FROM events WHERE state = 1 AND schedule >= '2014-07-01' GROUP BY date_trunc('month', schedule) ORDER BY 1 DESC; ``` In particular, replace the two `EXTRACT` calls with a single [`date_trunc()`](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC). Cheaper.
PostgreSQL grouping error in a query after moving from MySQL
[ "", "mysql", "sql", "postgresql", "" ]
I'm working through a problem, working with SQL Oracle. For some reason I am getting the result of 13 for instructors teaching that course when I should only be getting 1. There is only 1 Instructor that teaches 3 or more students in that room. **Question:** Create a query to determine the number of instructors who have taught more than 3 students in a course section taught in Room L211. **My Code:** ``` SELECT COUNT(Instructor_Id) AS NumberOfInstructors FROM Section s, Enrollment e WHERE s.Section_Id = e.Section_Id AND Location = 'L211' HAVING COUNT(Student_Id) = (SELECT COUNT(Student_iD) FROM Section s, Enrollment e WHERE s.Section_Id = e.Section_Id AND Location = 'L211') ORDER BY s.Course_No; ``` **My Results:** ``` NUMBEROFINSTRUCTORS ------------------- 13 ``` **Expected Results:** ``` NUMBEROFINSTRUCTORS ------------------- 1 ``` So I feel like I'm kind of going in the right direction, but it seems to be adding up all the instructors that teach in that room. I've messed around a lot with the code I've given, so if anyone can point me in the right direction that would be great. I'm guessing I need a count of the students who have been in a class with that room number; I think I've tried that and got a result of 4, so I'm not sure. Any help would be great. Also, [here is the Schema](http://authors.phptr.com/rischert/documents/AppendixD.pdf). DBMS: I'm using Oracle SQL Developer
try this: ``` SELECT COUNT(*) NumberOfInstructors From (Select Instructor_Id FROM Section s join Enrollment e on e.Section_Id = s.Section_Id WHERE s.Location = 'L211' Group By Instructor_Id HAVING COUNT(Student_Id) >= 3) Z ```
I **did not** test the code, but it seems to me you are missing a condition. Your question requires you to find "Instructor that teaches only 3 or more students in that room", but you did not have any condition to check for student number > 3. Try this: ``` SELECT COUNT(Instructor_Id) AS NumberOfInstructors FROM Section s, Enrollment e WHERE s.Section_Id = e.Section_Id AND Location = 'L211' HAVING COUNT(Student_Id) = (SELECT COUNT(Student_iD) FROM Section s, Enrollment e WHERE s.Section_Id = e.Section_Id AND Location = 'L211' AND COUNT(Student_Id) > 3) ORDER BY s.Course_No; ```
A Query of certain students in a certain room
[ "", "sql", "oracle", "" ]
I want to end up with a sorted table, i.e. when I run the query `select * from NewTable` I get the sorted table. I've tried the following, but it does not sort the table the way I specify: ``` select column1,column2,column3,column4 into NewTable from Table1,Table2 order by column1,column2 ```
The clustered index of a table decides how the data is ordered, this example will demonstrate it: ``` CREATE TABLE test (id int, value varchar) INSERT INTO test VALUES(1, 'z') INSERT INTO test VALUES(2, 'y') INSERT INTO test VALUES(3, 'x') SELECT * FROM test CREATE CLUSTERED INDEX IX_test ON test (value ASC) SELECT * FROM test ``` This is the result: ``` id value ----------- ----- 1 z 2 y 3 x id value ----------- ----- 3 x 2 y 1 z ``` After creating the index, the result is reversed, since the index is sorting the value-column ascending. However please note, as others have mentioned, that the only 100% guaranteed way to get a correctly ordered result is to use an `ORDER BY` clause.
You only get result sets in a particular order when you use `order by`. Tables represent unordered sets, so they have no order except when being output as result sets. However, you can use a trick in SQL Server to make that `order by` fast. The trick is to use the `order by` in the insert and have an identity primary key. Then ordering by the primary key should be very efficient. You could do this as: ``` create table NewTable ( NewTableId int identity(1, 1) not null primary key, column1 . . . . . . ); insert into NewTable(column1, column2, column3, column4) select column1, column2, column3, column4 from Table1 cross join Table2 order by column1, column2; ``` Now when you select from the table doing: ``` select column1, column2, column3, column4 from NewTable order by NewTableId; ``` You are ordering by the primary key and no real sort is being done.
How to sort a table by columns without query
[ "", "sql", "sql-server", "jquery-ui-sortable", "" ]
Is there a function in ***PostgreSQL*** that is the same as `NUMTODSINTERVAL(n, interval unit)` in *Oracle*?
If you want a functionality similar to [this](http://docs.oracle.com/cd/B12037_01/server.101/b10759/functions093.htm) function (i.e. the unit is variable -- not constant): a simple concatenation & a cast is enough in PostgreSQL: ``` select cast(num || unit as interval) ``` [SQLFiddle](http://sqlfiddle.com/#!15/d41d8/2704) You can read more about `interval`'s input formats [here](http://www.postgresql.org/docs/current/static/datatype-datetime.html#DATATYPE-INTERVAL-INPUT).
Just multiply your variable with the desired interval: ``` interval '1' day * n ``` Since Postgres 9.4 you can also use the function [make\_interval()](https://www.postgresql.org/docs/9.4/functions-datetime.html) ``` make_interval(days => n) ```
NUMTODSINTERVAL in PostgreSQL
[ "", "sql", "oracle", "postgresql", "" ]
I have the following tables ``` AdmittedPatients(pid, workerid, admitted, discharged) Patients(pid, firstname, lastname, admitted, discharged) DiagnosticHistory(diagnosisID, workerid, pid, timeofdiagnosis) Diagnosis(diagnosisID, description) ``` Here is an SQL Fiddle: <http://sqlfiddle.com/#!15/e7403> Things to note: * AdmittedPatients is a history of all admissions/discharges of patients at the hospital. * Patients contain all patients who have records at the hospital. Patients also lists who are currently staying at the hospital (i.e. discharged is NULL). * DiagnosticHistory contains all diagnosis made. * Diagnosis has the description of the diagnosis made Here is my task: list patients who were admitted to the hospital within 30 days of their last discharge date. For each patient list their patient identification number, name, diagnosis, and admitting doctor. This is what I've cooked up so far: ``` select pid, firstname, lastname, admittedpatients.workerid, patients.admitted, admittedpatients.discharged from patients join admittedpatients using (pid) group by pid, firstname, lastname, patients.admitted, admittedpatients.workerid, admittedpatients.discharged having patients.admitted <= admittedpatients.discharged; ``` This returns pid's from 0, 1, and 4 when it should 0, 1, 2, and 4.
Not sure why you need group by or having here... no aggregate... ``` SELECT A.pid, firstname, lastname, A.workerid, P.admitted, A.discharged FROM patients P INNER JOIN admittedpatients A on P.pID = A.pID WHERE date_add(a.discharged, interval 30 day)>=p.admitted and p.admitted >=a.discharged ``` updated fiddle: <http://sqlfiddle.com/#!2/dc33c/30/0> Didn't get into returning all your needed fields but as this gets the desired result set I imagine it's just a series of joins from here... Updated to PostgreSQL: ``` SELECT A.pid, firstname, lastname, A.workerid, P.admitted, A.discharged FROM patients P INNER JOIN admittedpatients A on P.pID = A.pID WHERE a.discharged+ interval '30 day' >=p.admitted and p.admitted >=a.discharged ``` <http://sqlfiddle.com/#!15/e7403/1/0>
I didn't see any diagnostic info in the fiddle, so I didn't return any. ``` select pid ,p.lastname,p.firstname ,ad.lastname,ad.firstname from AdmittedPatients as a join AdmittedPatients as d using (pid) join Patients as p using (pid) join AdminDoctors as ad on ad.workerid=a.workerid where d.discharged between a.admitted-30 and a.admitted ```
Postgresql - retrieve rows within criteria within 30 day span
[ "", "sql", "postgresql", "" ]
I'm having trouble finding the right sql query. I want to select all the rows with a unique x value and if there are rows with the same x value, then I want to select the row with the greatest y value. As an example I've put a part of my database below. ``` ID x y 1 2 3 2 1 5 3 4 6 4 4 7 5 2 6 ``` The selected rows should then be those with ID 2, 4 and 5. This is what I've got so far ``` SELECT * FROM base WHERE x IN ( SELECT x FROM base HAVING COUNT(*) > 1 ) ``` But this only results in the rows that occur more than once. I've added the tags R, postgresql and sqldf because I'm working in R with those packages.
Here is a typical way to formulate the query in ANSI SQL: ``` select b.* from base b where not exists (select 1 from base b2 where b2.x = b.x and b2.y > b.y ); ``` In Postgres, you would use `distinct on` for performance: ``` select distinct on (x) b.* from base b order by x, y desc; ```
You could try this query: ``` select x, max(y) from base group by x; ``` And, if you'd also like the `id` column in the result: ``` select base.* from base join (select x, max(y) from base group by x) as maxima on (base.x = maxima.x and base.y = maxima.max); ```
SQL query: same rows
[ "", "sql", "r", "postgresql", "sqldf", "" ]
I am looking for a way to change the output of an Access query to return either 1, 2 or 3 in place of Low, Medium or High. I would like to convert the format of the field from Text to Numeric, since I wish to perform calculations using these numbers. Any ideas would be appreciated.
Another possibility is [switch](http://www.techonthenet.com/access/functions/advanced/switch.php), for example: ``` SELECT Field1, Switch([field1]="Low",1,[Field1]="Medium",2,[Field1]="High",3) AS SwitchValue FROM aTable ``` But it may be more convenient to simply create a small table with the substitute values and Join.
You can use `iif()` in MS Access: ``` select iif(col = 'High', 3, iif(col = 'Low', 1, 2)) as ColNumeric ```
Microsoft Access Query - Convert Text to Numeric Depending on Field Contents - Low, Medium, High to 1, 2, 3
[ "", "sql", "ms-access", "text", "format", "numeric", "" ]
I'm using SQL Server 2012. When I run this query... ``` select count(*) from MembershipStatusHistory msh join gym.Account a on msh.AccountID = a.AccountID join gym.MembershipType mt on a.MembershipTypeID = mt.MembershipTypeID join MemberTypeGroups mtg on mt.MemberTypeGroupID = mtg.MemberTypeGroupID where mtg.MemberTypeGroupID IN (1,2) and msh.NewMembershipStatus = 'Cancelled' and year(msh.ChangeDate) = year(getdate()) and month(msh.ChangeDate) = month(getdate()) and day(msh.ChangeDate) = day(getdate()) ``` ...it returns almost instantly. Great. Now, when I run the same exact query like this: ``` declare @CancellationsToday int SET @CancellationsToday = ( select count(*) from MembershipStatusHistory msh join gym.Account a on msh.AccountID = a.AccountID join gym.MembershipType mt on a.MembershipTypeID = mt.MembershipTypeID join MemberTypeGroups mtg on mt.MemberTypeGroupID = mtg.MemberTypeGroupID where mtg.MemberTypeGroupID IN (1,2) and msh.NewMembershipStatus = 'Cancelled' and year(msh.ChangeDate) = year(getdate()) and month(msh.ChangeDate) = month(getdate()) and day(msh.ChangeDate) = day(getdate()) ) ``` ...it takes 1.5 MINUTES to return. Consistently, every time. What the \*\*\*\* is going on? I have to use a variable because I need to sum the result later on in my stored proc. I am storing the results of other queries in the same proc and they are fast. I am stumped. Here is the execution plan from the SLOW query: ![enter image description here](https://i.stack.imgur.com/ev6pp.png) And here is the execution plan from the FAST query: ![enter image description here](https://i.stack.imgur.com/x7UAw.png) I'll be honest, I don't know what these execution plans mean or what I need to correct.
Very strange but try something like this.... ``` declare @CancellationsToday int; select @CancellationsToday = count(*) from MembershipStatusHistory msh join gym.Account a on msh.AccountID = a.AccountID join gym.MembershipType mt on a.MembershipTypeID = mt.MembershipTypeID join MemberTypeGroups mtg on mt.MemberTypeGroupID = mtg.MemberTypeGroupID where mtg.MemberTypeGroupID IN (1,2) and msh.NewMembershipStatus = 'Cancelled' and year(msh.ChangeDate) = year(getdate()) and month(msh.ChangeDate) = month(getdate()) and day(msh.ChangeDate) = day(getdate()) ```
Mmmm strange, try this: ``` SELECT @CancellationsToday = COUNT(*) FROM ...... ``` Another thing worth mentioning: don't use functions on columns in the `WHERE` clause. I think you only have the date part in `msh.ChangeDate`; make a variable with today's date like this: ``` DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) ``` and use that in the `WHERE` clause.
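As a hedged sketch of that rewrite (assuming `ChangeDate` is a datetime; the half-open range keeps the predicate sargable so an index on `ChangeDate` can be used):

```sql
DECLARE @CancellationsToday int;
DECLARE @today datetime = DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()));

SELECT @CancellationsToday = COUNT(*)
FROM MembershipStatusHistory msh
-- ... same joins and MemberTypeGroupID filter as in the question ...
WHERE msh.NewMembershipStatus = 'Cancelled'
  AND msh.ChangeDate >= @today              -- midnight today
  AND msh.ChangeDate < DATEADD(dd, 1, @today); -- before midnight tomorrow
```

The `year()/month()/day()` calls in the original force a scan of every row; the range form compares the raw column instead.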
Horrible sql server performance when capturing result in variable
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Hi, I have 2 tables. ``` A B |-----|--------------| |------|-------|--------| | ID | aText | | ID | aId | bText | |-----|--------------| |------|-------|--------| | 1 | test | | 10 | 1 | zzzzz | |-----|--------------| |------|-------|--------| ``` Is it possible to create a unique key on bText combined with aText? aText and bText aren't unique in their separate tables.
Exactly: no. You should use one table with a parent column. ``` |------|-------|--------| | ID |Parent | Text | |------|-------|--------| | 1 | | zzzzz | |------|-------|--------| | 10 | 1 | yyyyy | ------------------------- ``` The Parent column is a foreign key to the ID column, i.e. it is self-referencing.
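A sketch of that design in DDL (table and column names here are made up, and the per-parent `UNIQUE` constraint is one assumed way to express the combined-uniqueness requirement):

```sql
CREATE TABLE item (
    ID     INT PRIMARY KEY,
    Parent INT NULL REFERENCES item (ID),  -- self-referencing foreign key
    Txt    VARCHAR(100) NOT NULL,
    UNIQUE (Parent, Txt)                   -- Txt must be unique within its parent
);
```

Note that most engines treat NULLs as distinct in unique constraints, so top-level rows (Parent IS NULL) would not be deduplicated by this constraint.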
Yes, if `aText` is `UNIQUE` in table `A`: create a `UNIQUE INDEX` on `(aId, bText)` for table `B`. Otherwise I don't think it is possible with your current data structure.
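A sketch of those two statements (the constraint and index names are made up):

```sql
-- aText must already be unique in A for the pair (aText, bText) to be unique overall
ALTER TABLE A ADD CONSTRAINT uq_a_atext UNIQUE (aText);

-- each aId (and therefore each aText) can then pair with a given bText only once
CREATE UNIQUE INDEX uq_b_aid_btext ON B (aId, bText);
```

The second index enforces uniqueness of `(aId, bText)`; combined with the first constraint that is equivalent to uniqueness of `(aText, bText)`.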
Unique key on 2 columns over joined tables
[ "", "mysql", "sql", "database", "" ]
I'm faced with a big query that is generated in a string and executed with "OPEN pCursor FOR vQuery", and I'm trying to get the query out of the string variable and into a proper "compilable" query. I'm having this problem where a different table is queried depending on a variable ``` vQuery := 'SELECT ...'; IF pVar = 1 Then vQuery := vQuery || ' FROM table1'; ELSE vQuery := vQuery || ' FROM table2'; END IF; vQuery := vQuery || ' WHERE ...'; ``` The two tables have pretty much the same column names. Is there a way to have this as a single query ``` OPEN Pcursorout FOR SELECT ... FROM CASE WHEN pVar = 1 THEN table1 ELSE table2 END WHERE ...; ``` Or am I stuck with two queries? ``` IF pVar = 1 Then OPEN Pcursorout FOR SELECT ... FROM table1 WHERE ...; ELSE OPEN Pcursorout FOR SELECT ... FROM table2 WHERE ...; END IF; ``` The select and where parts are large and exactly the same for both tables.
You could use a [UNION](http://www.w3schools.com/sql/sql_union.asp) and use your variable pVar to only include the results from one query in the result set. ``` SELECT t1.col1, t1.col2, ..., t1.col10 FROM table1 t1 WHERE pVar = 1 and ... UNION SELECT t2.col1, t2.col2, ..., t2.col10 FROM table1 t2 WHERE pVar <> 1 and ... ``` This isn't exactly what you asked about -- not being required to have duplicate lists of columns for the two select statements -- but I think it might capture your intent. It will require that the columns selected by both queries have the same datatype so there will be a (somewhat weak) constraint that the columns of both query results are the same. For example, you won't be able to add a new column to one query but not the other.
Perhaps using UNION / UNION ALL to unite both queries? The requirement for using UNION/UNION ALL is that all SELECTs being united must return the same number of columns, with compatible types. So if you have ``` SELECT t.f1, t.f2, t.f3 FROM t WHERE ... ``` and your other query is ``` SELECT q.f1, q.f2, q.f3 FROM q WHERE ... ``` you can have both running as a single SQL statement with UNION: ``` SELECT t.f1, t.f2, t.f3 FROM t WHERE ... UNION SELECT q.f1, q.f2, q.f3 FROM q WHERE ... ``` Keep in mind that if you need to return columns that exist in one table but not in the other, you can still use UNION, just return NULL and name the column correspondingly to the column name in the table that has it.
Same query but different tables
[ "", "sql", "oracle", "" ]
I have this table: ``` Date |StockCode|DaysMovement|OnHand 29-Jul|SC123 |30 |500 28-Jul|SC123 |15 |NULL 27-Jul|SC123 |0 |NULL 26-Jul|SC123 |4 |NULL 25-Jul|SC123 |-2 |NULL 24-Jul|SC123 |0 |NULL ``` The reason only the top row has an OnHand value is that I can get this from another table that stores the current qty on hand for any stock code. The other records in the table are taken from another table that logs all the movement for any given day. I want to update the above table so that the OnHand column shows the QtyOnHand for that row's date based on the previous record's stock and movement, such that it looks like this at the end of the update: ``` Date |StockCode|DaysMovement|OnHand 29-Jul|SC123 |30 |500 28-Jul|SC123 |15 |470 27-Jul|SC123 |0 |455 26-Jul|SC123 |4 |455 25-Jul|SC123 |-2 |451 24-Jul|SC123 |0 |453 ``` I'm currently achieving this with a CURSOR. But performance really sucks over thousands of records. Is there some SET-based UPDATE statement I can run that will achieve the same result?
Try this (**[Fiddle demo](http://sqlfiddle.com/#!3/a9164/33)**) ``` DECLARE @Movement INT , @OnHandRunning INT ;WITH CTE AS ( SELECT TOP 100 percent DaysMovement, OnHand FROM Table1 ORDER BY [StockCode], [Date] DESC ) UPDATE CTE SET @OnHandRunning = OnHand = COALESCE(@OnHandRunning - @Movement, OnHand), @Movement = DaysMovement ``` **UPDATE:** For multiple `StockCodes` you can modify above query like below (**[Fiddle demo 2](http://sqlfiddle.com/#!3/10f26/1)**): ``` DECLARE @Movement INT , @OnHandRunning INT, @StockCode VARCHAR(10) = '' ;WITH CTE AS ( SELECT TOP 100 percent DaysMovement, OnHand, StockCode FROM Table1 ORDER BY [StockCode],[Date] DESC ) UPDATE CTE SET @OnHandRunning = OnHand = CASE WHEN @StockCode<> StockCode THEN OnHand ELSE @OnHandRunning - @Movement END, @Movement = DaysMovement, @StockCode = StockCode ```
This works, no idea how it performs compared to a cursor though? ``` --Data DECLARE @Table TABLE ( [Date] DATE, StockCode VARCHAR(50), DaysMovement INT, OnHand INT); INSERT INTO @Table VALUES ('20140729', 'SC123', 30, 500); INSERT INTO @Table VALUES ('20140728', 'SC123', 15, NULL); INSERT INTO @Table VALUES ('20140727', 'SC123', 0, NULL); INSERT INTO @Table VALUES ('20140726', 'SC123', 4, NULL); INSERT INTO @Table VALUES ('20140725', 'SC123', -2, NULL); INSERT INTO @Table VALUES ('20140724', 'SC123', 0, NULL); --Query SELECT t1.[Date], t1.StockCode, t1.DaysMovement, CASE WHEN t1.OnHand IS NULL THEN MAX(t2.OnHand) - SUM(t2.DaysMovement) ELSE t1.OnHand END AS OnHand FROM @Table t1 LEFT JOIN @Table t2 ON t1.[Date] < t2.[Date] GROUP BY t1.[Date], t1.StockCode, t1.DaysMovement, t1.OnHand ORDER BY t1.[Date] DESC; ``` Results are: ``` Date StockCode DaysMovement OnHand 2014-07-29 SC123 30 500 2014-07-28 SC123 15 470 2014-07-27 SC123 0 455 2014-07-26 SC123 4 455 2014-07-25 SC123 -2 451 2014-07-24 SC123 0 453 ```
SQL Update table with cumulative value
[ "", "sql", "sql-server", "t-sql", "" ]
Say I have these tables: ``` create table letter (id_letter bigint, id_group_table bigint, letter char(1)); create table group_table (id_group_table bigint, id_whatever bigint, champion char(1)); create table whatever (id_whatever bigint); ``` I want to update `group_table` so I can set in the `champion` column the letter that has the most occurrences in the `letter` table related to each row of `group_table`. Today I have to iterate over all rows from `group_table` in my application and run a query for each row to discover which letter is most used... I want to do that in one update, is it possible? Here's what I'm trying (and it does not work): ``` update group_table gt set gt.champion = ( select inner_champ from ( select le.letter as inner_champ, count(*) from letter le where le.id_group_table = gt.id_group_table group by le.letter order by count(*) desc limit 1 ) ) where gt.id_whatever in (1,2,3,4); ``` MySQL does not allow me to reference `group_table` inside the subquery with `gt.id_group_table`... is it possible to do that? Thanks!!
``` update group_table join ( select tab1.letter, tab1.id_group_table from ( select letter, id_group_table, count(letter) as occurrences from letter group by id_group_table, letter order by occurrences desc ) tab1 group by tab1.id_group_table having max(tab1.occurrences) ) tab2 on group_table.id_group_table = tab2.id_group_table set champion = tab2.letter where group_table.id_whatever in (1,2,3,4); ```
You can do a `JOIN` with `group_table` separately in inner query like below ``` update group_table join ( select letter, id_group_table, count(distinct id_group_table) as occurences from letter group by letter having max(occurences) ) tab on group_table.id_group_table = tab.id_group_table set champion = tab.letter where group_table.id_whatever in (1,2,3,4); ```
How to reference outer table in a SET sub query?
[ "", "mysql", "sql", "" ]
As I am not good with MySQL queries, I wish someone would help me create this kind of SQL query. I have two MySQL tables, which are described below: ``` Table Name: rating ------------------- property_id user_id area_rate_count safety_rate_count friendly_rate_count walkability_rate_count 4 28 1 1 1 2 5 38 2 3 4 1 5 40 2 2 3 1 6 40 2 3 1 4 10 43 2 2 3 1 Table Name: listing ------------------- property_id title 4 Sample 1 5 Sample 2 6 Sample 3 10 Sample 4 11 Sample 5 12 Sample 6 ``` Now first I want to sum the columns (area_rate_count, safety_rate_count, friendly_rate_count, walkability_rate_count) and divide. For example, `property_id` 5 appears two times, so first calculate the column sum and divide by 2. **After calculation we will get this output:** ``` Table Name: rating (After Calculation) -------------------------------------- property_id rate 4 5 5 9 (Divided by 2 because this property_id is two times in table) 6 10 10 8 ``` And finally I want to join this result to my `listing` table so the result looks something like this: ``` Table Name: listing ------------------- property_id title rate 4 Sample 1 5 5 Sample 2 9 (Divided by 2 because property_id is two times in table) 6 Sample 3 10 10 Sample 4 8 11 Sample 5 0 12 Sample 6 0 ``` Thanks.
I think you want the `avg()` aggregation function along with a `join`: ``` select l.property_id, l.title, coalesce(avg(area_rate_count + safety_rate_count + friendly_rate_count + walkability_rate_count ), 0) as rate from listing l left outer join rating r on l.property_id = r.property_id group by l.property_id, l.title ; ```
If I understood it right I think you need this: ``` select l.property_id, l.title, coalesce(r.ssum/if(r.ct=0,1,r.ct), 0) as rate from listing l LEFT JOIN (select property_id, sum(area_rate_count+safety_rate_count +friendly_rate_count+walkability_rate_count) ssum, count(*) ct from rating group by property_id ) r ON l.property_id = r.property_id order by l.property_id ``` See it here on fiddle: <http://sqlfiddle.com/#!2/589d6/5> **Edit** As OP asked on the comments that he wants all columns from `listing` here is what he want: ``` select l.*, coalesce(r.ssum/if(r.ct=0,1,r.ct), 0) as rate from listing l LEFT JOIN (select property_id, sum(area_rate_count+safety_rate_count +friendly_rate_count+walkability_rate_count) ssum, count(*) ct from rating group by property_id ) r ON l.property_id = r.property_id order by l.property_id ```
SQL - How to calculate column value and join with another table
[ "", "mysql", "sql", "sum", "" ]
I have a varchar value that displays the same as a SQL datetime. I need to convert it to the SQL datetime type. Can anyone help me? ``` 2014-07-28 18:05:14(varchar) --> 2014-07-28 18:05:14(DateTime) ```
``` select cast('2014-07-28 18:05:14' as datetime) ```
You can try this: ``` cast('2014-07-28 18:05:14' AS DATETIME) ```
SQL Convert varchar value to datetime
[ "", "sql", "" ]
Fairly new to SQL programming. I'm trying to create a stored procedure which will take the users name, compare it to the users table to retrieve their 'role' and then select all pieces of Feedback from the Feedback table which are related to them. There are currently 3 possible values: - User - Governance Board - Admin If the Role is either 'Governance Board' or 'Admin' I want all entries in the table to be retrieved However if the Role is 'User' I only want the entries where the [publish] column is set to true or the 'Author' or 'IdentifiedBy' column is set to themselves. This is what I currently have however it is throwing errors around the case statement ``` @Alias varchar(max), AS DECLARE @Role varchar(max) SET @Role = ( SELECT [Role] FROM [CSLL].[Users] WHERE [Alias] = @Alias ) CASE WHEN @Role = "Admin" THEN SELECT * FROM [CSLL].[Feedback] WHEN @Role = "Governance Board" THEN SELECT * FROM [CSLL].[Feedback] WHEN @Role = "User" THEN SELECT * FROM [CSLL].[Feedback] WHERE [Publish] = True OR [Author] = @Alias or [IdentifiedBy]=@Alias ELSE END ``` Any help would be most appreciated. I'm sure it's just something that I'm doing which is dumb but being new to SQL I can't spot it. Thanks in advance Tom
You can write this query as a single `select`. ``` select f.* from Feedback f cross join (SELECT [Role] FROM [CSLL].[Users] WHERE [Alias] = @Alias) r where (r.[role] in ('Admin', 'Governance Board') ) or (r.[role] = 'User' and ([Publish] = 1 OR [Author] = @Alias or [IdentifiedBy] = @Alias)) ``` I think keeping the logic in one place and having a single query is better from the perspective of understanding the logic and maintaining the code.
You can't use CASE this way to control flow. You have to use IF. ``` IF @Role='Admin' SELECT * FROM [CSLL].[Feedback] ELSE IF @Role='Governance Board' SELECT * FROM [CSLL].[Feedback] ELSE IF ... etc ```
SQL CASE Selection Statement Error
[ "", "sql", "sql-server", "stored-procedures", "case", "" ]
I have a list of items and, per deposit, stocks, purchase orders, sales orders, forecasts of consumption or production, etc. I want to set up a query that, for each item, returns all this information, deposit by deposit. Obviously, this information is not necessarily available for every item. For example, considering the following tables: ``` T1 | REF | |--------| | 1 | T2 | REF | DEPOT | |--------|--------| | 1 | A | | 1 | B | T3 | REF | DEPOT | |--------|--------| | 1 | A | | 1 | C | T4 | REF | DEPOT | |--------|--------| | 1 | B | | 1 | C | | 1 | D | ``` If I take only the first three tables (just to start), I have not found anything better than: ``` SELECT T1.ref AS T1ref , T2.ref AS T2ref, T2.depot AS T2depot , T3.ref AS T3ref, T3.depot AS T3depot FROM T1 LEFT JOIN T2 ON T2.ref = T1.ref FULL JOIN T3 ON T3.ref = T1.ref AND T3.depot = T2.depot ``` The output: ``` | T1REF | T2REF | T2DEPOT | T3REF | T3DEPOT | |--------|--------|---------|--------|---------| | 1 | 1 | A | 1 | A | | 1 | 1 | B | (null) | (null) | | (null) | (null) | (null) | 1 | C | ``` What I want: ``` | T1REF | T2REF | T2DEPOT | T3REF | T3DEPOT | |--------|--------|---------|--------|---------| | 1 | 1 | A | 1 | A | | 1 | 1 | B | (null) | (null) | | 1 | (null) | (null) | 1 | C | ``` There must be a clean method to do this, but I didn't find anything. And it's hard to find material on this. Does anyone have a hint? The sqlfiddle: <http://sqlfiddle.com/#!3/19014/2> Thank you, David. **Edit:** And with T4: ``` | T1REF | T2REF | T2DEPOT | T3REF | T3DEPOT | T4REF | T4DEPOT | |--------|--------|---------|--------|---------|--------|---------| | 1 | 1 | A | 1 | A | (null) | (null) | | 1 | 1 | B | (null) | (null) | 1 | B | | 1 | (null) | (null) | 1 | C | 1 | C | | 1 | (null) | (null) | (null) | (null) | 1 | D | ``` I should have named my tables with better names: T1 = items, T2 = stocks, T3 = purchases, T4 = sales. So, T1 will always have all the refs, and also a lot of other information I need.
An example with all the 4 tables and Coalesce as stated by Gordon Linoff ``` ;with[T1]([REF])as( select * from (values(1),(2))[A]([REF]) ), [T2]([REF],[DEPOT])as( select*from(values (1,'A'), (1,'B'), (1,'E') )[a]([REF],[DEPOT]) ), [T3]([REF],[DEPOT])as( select*from(values (1,'A'), (1,'C') )[a]([REF],[DEPOT]) ), [T4]([REF],[DEPOT])as( select*from(values (1,'B'), (1,'C'), (1,'D'), (1,'E') )[a]([REF],[DEPOT]) ) select * from [T1] outer apply ( select T2.REF T2REF, T2.DEPOT T2DEPOT, T3.REF T3REF, T3.DEPOT T3DEPOT, T4.REF T4REF, T4.DEPOT T4DEPOT from T2 full outer join T3 on T2.REF = T3.REF and T2.DEPOT = T3.DEPOT full outer join T4 on COALESCE(T3.REF,T2.REF) = T4.REF and COALESCE(T3.DEPOT,T2.DEPOT) = T4.DEPOT where COALESCE(T2.REF,T3.REF,T4.REF) = T1.REF ) TR ```
You can fix your particular query by using `coalesce()`: ``` SELECT coalesce(T1.ref, t2.ref, t3.ref) AS T1ref ``` However, instead of using `full outer join`, I find it easier to start with the lists and combinations that I care about. In this case, you seem to care about `t1.ref` and `depots` from all the tables. Perhaps this is closer to what you really want to do: ``` SELECT t1ref.ref, T2.ref AS T2ref, T2.depot AS T2depot, T3.ref AS T3ref, T3.depot AS T3depot FROM (select ref from T1 union select ref from T2 union select ref from T3 ) t1ref cross join (select depot from T2 union select depot from t3 ) d LEFT JOIN T2 ON T2.ref = T1ref.ref and t2.depot = d.depot LEFT JOIN T3 ON T3.ref = T1ref.ref AND T3.depot = d.depot --OR T2.depot IS NULL) ```
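To sanity-check the list-then-left-join approach, here is a small sketch in Python with SQLite (an assumption for illustration only — the question targets SQL Server, but this rewrite avoids `FULL JOIN` and so runs on most engines), using T1–T3 from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T1 (ref INT);
    CREATE TABLE T2 (ref INT, depot TEXT);
    CREATE TABLE T3 (ref INT, depot TEXT);
    INSERT INTO T1 VALUES (1);
    INSERT INTO T2 VALUES (1,'A'),(1,'B');
    INSERT INTO T3 VALUES (1,'A'),(1,'C');
""")

# Build the ref list and the depot list first, then LEFT JOIN each table
# against the (ref, depot) combinations -- no FULL JOIN required.
rows = conn.execute("""
    SELECT t1ref.ref,
           T2.ref, T2.depot,
           T3.ref, T3.depot
    FROM (SELECT ref FROM T1
          UNION SELECT ref FROM T2
          UNION SELECT ref FROM T3) t1ref
    CROSS JOIN (SELECT depot FROM T2
                UNION SELECT depot FROM T3) d
    LEFT JOIN T2 ON T2.ref = t1ref.ref AND T2.depot = d.depot
    LEFT JOIN T3 ON T3.ref = t1ref.ref AND T3.depot = d.depot
    ORDER BY d.depot
""").fetchall()

for r in rows:
    print(r)
```

The three result rows match the "What I want" table in the question, including the `T1REF = 1` on the depot-C row that the plain `FULL JOIN` lost.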
FULL JOIN on severals tables
[ "sql", "sql-server", "join", "outer-join" ]
This has been stumping me for a little bit and I can't seem to get the right query Table 1: ``` ╔════════════════════╗ ║ Record Level ║ ╠════════════════════╣ ║ 1 0 ║ ║ 1 1 ║ ║ 1 2 ║ ║ 2 3 ║ ║ 2 4 ║ ║ 2 5 ║ ║ 3 2 ║ ╚════════════════════╝ ``` What I'm looking for is the count of the max level of each unique record ie should return ``` ╔════════════════════╗ ║ level 2, 2 records ║ ║ level 5, 1 record ║ ╚════════════════════╝ ``` etc.
You can use a sub-query to first determine the max levels and then do a count of the max levels, as below: ``` SELECT max_level, count(*) FROM ( SELECT max(level) AS max_level FROM table1 GROUP BY Record ) max_levels GROUP BY max_level ORDER BY max_level; ```
You need two steps, (1) find the max level for each record and (2) count the records that have that max level for each level. You can use a subquery to create the Record/MaxLevel table (note that the derived table needs an alias, `t`, in SQL Server): ``` SELECT MaxLevel, COUNT(*) AS RecordCount FROM ( SELECT Record, MAX(Level) AS MaxLevel FROM Table1 GROUP BY Record ) t GROUP BY MaxLevel ``` You may also want a third select to make sure you include all levels if you want 0 counts for levels that have no records with their value as the max level. You can select the distinct values for level as subquery 'a' and left join into the max level counts subquery 'b'. ``` SELECT a.Level, COUNT(b.Record) AS RecordCount FROM (SELECT DISTINCT Level FROM Table1) a LEFT OUTER JOIN ( SELECT Record, MAX(Level) AS MaxLevel FROM Table1 GROUP BY Record ) b ON b.MaxLevel = a.Level GROUP BY a.Level ```
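Both answers share the same two-step shape; a quick sketch in Python with SQLite (chosen here only to make the example self-contained and runnable) against the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Record INT, Level INT)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)",
                 [(1, 0), (1, 1), (1, 2), (2, 3), (2, 4), (2, 5), (3, 2)])

# Step 1 (inner query): max level per record.
# Step 2 (outer query): count records per max level.
rows = conn.execute("""
    SELECT MaxLevel, COUNT(*) AS RecordCount
    FROM (SELECT Record, MAX(Level) AS MaxLevel
          FROM Table1
          GROUP BY Record) t
    GROUP BY MaxLevel
    ORDER BY MaxLevel
""").fetchall()
print(rows)
```

Records 1 and 3 both top out at level 2, record 2 at level 5, which reproduces the "level 2, 2 records / level 5, 1 record" result from the question.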
SELECT Count of MAX value over 2 columns
[ "sql", "sql-server", "sql-server-2008-r2" ]
I am having a problem with the last part of the select statement, the OR. What we want is to include records that have TIER = 'T1', but only if they have COST > 500 and SKILLLEVEL = 'S'. The commented-out last line has to be an OR; an AND would leave out most of the records. ``` Insert Into @tblMain Select * from ( SELECT distinct a.ID_KEY,a.STAT, a.TIER, f.FACILITY_ID, f.FACILITY_NAME Name, a.RX, a.PATIENTNAME, a.MEDICATION , (a.COST) as Cost, (a.COST) as Interchanges, s.Savings Savings FROM[PBM].[T_CHARGES] a --Inner Join PBM.FACILITY f on a.FACILITYNPI = f.FACILITY_NPI inner Join PBM.PHARMACY_NPI pn on a.PHARMACYNPI = pn.NPI Inner join PBM.PHARMACY_FACILITY pp on pn.PHARMACY_ID = pp.PHARMACY_ID inner Join PBM.FACILITY f on pp.FACILITY_ID = f.FACILITY_ID AND a.FACILITYNPI = f.FACILITY_NPI Left Outer Join @TableSum s on a.ID_KEY = s.ID_KEY where f.FACILITY_NAME is not null and month(a.DATEDISPENSED) = @Month and year(a.DATEDISPENSED) = @Year and f.FACILITY_ID = @FacilityId and a.STAT not in (3,4,5) and (TIER <> 'T1' ) --OR (TIER = 'T1' AND COST>500 AND SKILLLEVEL = 'S' ) ```
SQL Server will respect parentheses for order of evaluation, which I think you need: ``` ... AND ( (TIER <> 'T1') OR (TIER = 'T1' AND COST>500 AND SKILLLEVEL = 'S') ) ```
By default in SQL Server, `NOT` takes precedence over `AND`, and `AND` takes precedence over `OR`: ``` NOT --> AND --> OR ``` To enforce an order other than the default, you can use parentheses. ``` ........... and f.FACILITY_ID = @FacilityId and a.STAT not in (3,4,5) and ( (TIER <> 'T1') OR (TIER = 'T1' AND COST > 500 AND SKILLLEVEL = 'S' ) ) ```
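The precedence pitfall both answers describe is easy to reproduce; here is a sketch in Python with SQLite (an illustration only — AND/OR precedence is the same there as in SQL Server), showing how the missing outer parentheses let a row that should be filtered out leak back in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (stat INT, tier TEXT, cost INT, skilllevel TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)",
                 [(1, 'T2', 50,  'X'),    # passes stat filter, non-T1
                  (1, 'T1', 600, 'S'),    # passes stat filter, qualifying T1
                  (4, 'T1', 600, 'S')])   # stat 4 should ALWAYS be excluded

# Unparenthesized: AND binds tighter than OR, so the stat filter is
# silently dropped for the OR branch -- the stat = 4 row leaks in.
wrong = conn.execute("""
    SELECT stat, tier FROM t
    WHERE stat NOT IN (3,4,5) AND tier <> 'T1'
       OR (tier = 'T1' AND cost > 500 AND skilllevel = 'S')
""").fetchall()

# Parenthesized: every row must also pass the stat filter.
right = conn.execute("""
    SELECT stat, tier FROM t
    WHERE stat NOT IN (3,4,5)
      AND ( tier <> 'T1'
            OR (tier = 'T1' AND cost > 500 AND skilllevel = 'S') )
""").fetchall()

print(wrong)
print(right)
```

This is exactly why the answers wrap the whole `OR` in an extra pair of parentheses before ANDing it onto the other filters.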
Or on Query not working
[ "sql" ]
Is there any difference when I create a table index on multiple columns depending on the column order? Specifically, what is the difference between the `ID, isValid, Created` and `ID, Created, isValid` indexes? And is there any difference in the ordering of the query conditions? ``` where ID = 123 and isValid = 1 and Created < getdate() ``` vs. ``` where ID = 123 and Created < getdate() and isValid = 1 ``` Column types: ID *[int]*, isValid *[bit]*, Created *[datetime]*
> Exactly what is the difference between `ID, isValid, Created` and `ID, Created, isValid` indices? If you always use **all three** columns in your `WHERE` clause - there's no difference. (*as Martin Smith points out in his comment - since one of the criteria is not an equality check, the sequence of the columns in the index **does** matter*) However: an index can only ever be used if the n left-most columns (here: n between 1 and 3) are used. So if you have a query that might only use `ID` and `isValid` for querying, then the first index *can* be used - but the second one will certainly *never* be used. And if you have queries that use `ID` and `Created` as their `WHERE` parameters, then your second index *might* be used, but the first one can never be used.
`AND` is commutative, so the order of ANDed expressions in `WHERE` doesn't matter. Order of columns in an index *does* matter, it should match your queries. If `ID` is your table's clustered primary key and your queries ask for specific `ID`, don't bother creating an index. That would be like giving an index to a book saying "page 123 is on page 123" etc.
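The commutativity point can be observed directly with SQLite's `EXPLAIN QUERY PLAN` (an illustration only — SQLite's planner, not SQL Server's, so the output format and costing differ): both orderings of the `WHERE` conditions produce the same index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INT, isValid INT, Created TEXT)")
conn.execute("CREATE INDEX ix_id_valid_created ON t (ID, isValid, Created)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports which index (if any) the optimizer picks;
    # the detail text is the last column of each row.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# AND is commutative: the optimizer matches conditions to index columns
# regardless of their textual order in the WHERE clause.
p1 = plan("SELECT * FROM t "
          "WHERE ID = 123 AND isValid = 1 AND Created < '2014-01-01'")
p2 = plan("SELECT * FROM t "
          "WHERE ID = 123 AND Created < '2014-01-01' AND isValid = 1")
print(p1)
print(p2)
```

Both plans name the same index, while the order of columns *inside* the index (the point of the accepted answer) still decides which queries can use it at all.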
SQL Server table index columns order
[ "sql", "sql-server", "indexing", "query-performance" ]
I have a problem when I try to run the following query: ``` declare @j int = 1; declare @column varchar(255); set @column = 'last_name'; declare @tmp varchar(255); declare @query nvarchar(255) = N'select ' + @column + N' from TEST where id = @j' declare @tbl table(tmp varchar(255)) insert into @tbl exec sp_executesql @query select top 1 @tmp = tmp from @tbl select @tmp select * from @tbl; ``` The problem is that if I change the variable `@j` to a numeric value in the declaration of the `@query` variable, like this ``` declare @query nvarchar(255) = N'select ' + @column + N' from TEST where id = 1' ``` the query runs successfully, but if I leave the `@j` variable there, like this ``` declare @query nvarchar(255) = N'select ' + @column + N' from TEST where id = @j' ``` I get an error message: "Must declare the scalar variable `@j`." Why? And how can I solve this so that my query works with the variable `@j`?
In place of `where id = @j`, try converting the variable like this - it may work: `Convert(nvarchar(2), @j)`
You need to cast the `@j` variable to a string. As its type is [INT](http://msdn.microsoft.com/en-us/library/ms187745.aspx) you should cast it as ``` CAST(@j AS VARCHAR(12)) ``` Also, you can pass parameters to a dynamic SQL statement when it is executed using `sp_executesql`. In your case it will be something like this: ``` declare @j int = 1; declare @column varchar(255); set @column = 'last_name'; declare @tmp varchar(255); declare @query nvarchar(255) = N'select ' + @column + N' from TEST where id = @j' declare @tbl table(tmp varchar(255)) insert into @tbl exec sp_executesql @query, N'@j INT', @j = @j select top 1 @tmp = tmp from @tbl select @tmp select * from @tbl; ``` In the following line: exec sp\_executesql @query, N'@j INT', @j = @j 1. `@query` is your dynamic T-SQL statement 2. `N'@j INT'` is the declaration of parameters 3. `@j = @j` is the parameter assignment [sp\_executesql documentation](http://msdn.microsoft.com/en-us/library/ms188001.aspx)
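The same split — splice identifiers into the string, bind values as parameters — applies in any client library; here is a sketch in Python with SQLite, where the `?` placeholder plays the role of `sp_executesql`'s declared `@j` parameter (the `TEST` table and sample names are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TEST (id INT, last_name TEXT)")
conn.executemany("INSERT INTO TEST VALUES (?, ?)", [(1, "Smith"), (2, "Jones")])

j = 1
column = "last_name"

# The column name cannot be a bound parameter, so it is spliced into the
# string (whitelist-check it in real code!); the value travels separately
# as a bound parameter -- the counterpart of N'@j INT', @j = @j.
query = "SELECT {} FROM TEST WHERE id = ?".format(column)
row = conn.execute(query, (j,)).fetchone()
print(row[0])
```

Binding the value also sidesteps the casting issue entirely, since no string concatenation of the integer is needed.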
Variable in a dynamic query
[ "sql", "sql-server", "variables", "dynamic" ]
Hello I am currently developing a small website for a real estate and I want to design a database for it. I have implemented user sign-up but I want the user to select one or more roles. To be more specific a **user** can be someone who can **sell** or **buy** houses or **both**. So for each user-type I want to have a table with its properties. My problem is that I don't know how to relate the users with attributes. So far I have these tables: ``` User: varchar userid; varchar password; varchar registrationDate; Buyer: varchar id; varchar housesBought; Seller varchar id; varchar rating; varchar housesSold; ``` Somehow I need to connect these entities but so far I cannot find any good practice. Any guideline or solution would be appreciated. Thx in advance
It has already been said that you can use a foreign key to link your buyer and seller tables to the user they represent. As a minor improvement I propose using `user_id` as the primary key for both your `seller` and `buyer` tables, while it is the foreign key at the same time. ``` User: int userid; (Primary Key) varchar password; varchar registrationDate; Buyer: int user_id; (Primary Key, ForeignKey to User(userid)) varchar housesBought; Seller int user_id; (Primary Key, ForeignKey to User(userid)) varchar rating; varchar housesSold; ``` This guarantees that no user can have two seller (or buyer) "accounts" and removes an unnecessary primary key. It is a so-called [identifying relationship](https://stackoverflow.com/questions/2814469/still-confused-about-identifying-vs-non-identifying-relationships). [This](https://stackoverflow.com/questions/8842876/primary-and-foreign-key-at-the-same-time) and [this](https://stackoverflow.com/questions/10565846/use-composite-primary-key-as-foreign-key) show the technical details of how to implement such a relationship in DDL. P.S.: [Don't store passwords as plain text](https://security.stackexchange.com/questions/7118/what-to-do-about-websites-that-store-plain-text-passwords).
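A sketch of this shared-primary-key pattern in Python with SQLite (SQLite is chosen only to make the example runnable; the DDL translates directly to other engines), showing that the engine itself rejects both a duplicate buyer row and an orphaned one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite leaves FK checks off by default
conn.executescript("""
    CREATE TABLE User (
        userid INTEGER PRIMARY KEY,
        password TEXT,                -- store a hash, never plain text
        registrationDate TEXT
    );
    -- user_id is primary key AND foreign key: at most one buyer row
    -- (and one seller row) can ever exist per user.
    CREATE TABLE Buyer (
        user_id INTEGER PRIMARY KEY REFERENCES User(userid),
        housesBought INT
    );
    CREATE TABLE Seller (
        user_id INTEGER PRIMARY KEY REFERENCES User(userid),
        rating INT,
        housesSold INT
    );
""")
conn.execute("INSERT INTO User VALUES (1, 'pbkdf2-hash-here', '2014-07-29')")
conn.execute("INSERT INTO Buyer VALUES (1, 0)")
conn.execute("INSERT INTO Seller VALUES (1, 5, 2)")

def rejected(sql):
    # Returns True if the engine refuses the insert.
    try:
        conn.execute(sql)
        return False
    except sqlite3.IntegrityError:
        return True

dup_buyer = rejected("INSERT INTO Buyer VALUES (1, 3)")    # second buyer row, same user
orphan    = rejected("INSERT INTO Buyer VALUES (99, 0)")   # buyer without a user
print(dup_buyer, orphan)
```

Enforcing the one-row-per-user rule in the schema, rather than in application code, is the whole point of reusing `user_id` as the primary key.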
You don't say what database you are using, but I'll guess maybe it's MySql? Anyway, the following principles will apply to almost any relational database package (MS Access, SQL Server, Postgres, etc.) For any table that has a relationship with the user table, you need to have a "foreign key", which is a field that relates back to another table. For example, taking your design: ``` User int id; varchar user_type; varchar password; varchar registrationDate; Buyer: int id; int user_id; varchar housesBought; Seller int id; int user_id; varchar rating; varchar housesSold; ``` Note that I've changed the primary keys (id fields) to integers. You should not use varchar for numerical data, as it's very inefficient when it comes to indexing. The user\_id fields link to the id field of the User table. However, I would actually recommend a different design altogether. Your design, with separate tables for buyer and seller attributes, has some limitations. For example, what if someone is both a buyer and a seller - then you have data stored in two different places, which is unnecessary in this simple example. Also, the relationship that we have defined between the user table and the other two tables is a one-to-one relationship. For any given user, you'll always have only one record in each table. This implies that you might be better off just storing all the information in one table. Generally in relational database design, we use 1-to-many relationships as an efficient way to partition and link data. For example, let's say you have a table storing addresses for your users. Each user could have more than one address, so we have something like this: ``` Address int id; int user_id; bool is_default_address; varchar address_line1; varchar address_line2; varchar county; varchar postcode; ``` Because we can have many addresses per one user, this is therefore a one-to-many relationship, using the foreign key user\_id to link the tables. 
Now, my final recommendation is to overhaul your database design to get away from those 1-to-1 relationships. I suggest you go for something like this: ``` User: int userid; varchar password; varchar registrationDate; int rating; int housesSold; int housesBought; Property int id; int seller_id; int buyer_id; varchar description; varchar address; varchar postcode; decimal price; ``` Now you have all user details stored in one table - yes there's a small amount of redundancy, but it's way more flexible, and easier to retrieve the data you need, without your website having to make unnecessary table joins. The fields seller\_id and buyer\_id in the Property table are both foreign keys linking to the User.id field. Your web app can then count the number of links to a particular user, and use that to autopopulate the housesSold and housesBought fields in the user table. Relational database design is a huge topic, so I've only scratched the surface, but I hope these are enough pointers to get you going. It'd be worth investing in a good book, perhaps the relevant O'Reilly book for your chosen database platform.
How to connect base table with its specific role variation tables?
[ "sql", "database-design" ]
I am having issues forming this SQL string in VB6: ``` Select A.ID, A.AstTp, A.Offset, A.Age, B.LNo, B.ACnt, B.CommCnt Into [LnReg] From [ALPA] In [c:\Temp\ALPA.mdb] As A LEFT OUTER JOIN [ALX] IN [c:\Temp\ALX.mdb] As B On A.ID = B.ID Where (A.AstTp="Sealed") ``` My ADO connection is to the mdb containing LnReg. The error thrown is: '[c:\Temp\ALPA.mdb]' is not a valid name. **[EDIT]** After switching to quoted paths as below, the error is now "Syntax error in FROM Clause": ``` Select A.ID, A.AstTp, A.Offset, A.Age, B.LNo, B.ACnt, B.CommCnt Into [LnReg] From [ALPA] In "c:\Temp\ALPA.mdb" As A LEFT OUTER JOIN [ALX] IN "c:\Temp\ALX.mdb" As B On A.ID = B.ID Where (A.AsTp="Sealed") ```
I prefer linking tables from other mdb files into the main mdb file. I use these functions to link the tables. With a linked table you can do anything you can do with a local table. ``` Function AccessLinkToTable(sLinkFromDB As String, sLinkToDB As String, sLinkToTable As String, Optional sNewLinkTableName As String, Optional sPassword As String) As Boolean 'Inputs : sLinkFromDB The path to the original database. ' sLinkToDB The path to the database to link to. ' sLinkToTable The table name to link to in sLinkToDB. ' [sNewLinkTableName] The name of the new link table. sLinkFromDB. 'Outputs : Returns True if succeeded in linking to the table 'Author : Andrew Baker www.vbusers.com 'Date : 03/09/2000 14:17 'Notes : Requires a reference to both ADO (MS ActiveX Data Objects) and MSADOX.DLL ' (MS ADO Ext. 2.5 DLL and Security). 'Revisions : 21.1.2002, Roman Plischke, password Dim catDB As ADOX.Catalog Dim TblLink As ADOX.Table On Error GoTo ErrFailed If Len(Dir$(sLinkFromDB)) > 0 And Len(Dir$(sLinkToDB)) > 0 Then 'Databases exist Set catDB = New ADOX.Catalog 'Open a Catalog on database in which to create the link. catDB.ActiveConnection = "Provider=Microsoft.Jet.OLEDB.4.0;" & "Data Source=" & sLinkFromDB Set TblLink = New ADOX.Table With TblLink 'Name the new Table If Len(sNewLinkTableName) Then .Name = sNewLinkTableName Else .Name = sLinkToTable End If 'Set ParentCatalog property to the open Catalog. 'This allows access to the Properties collection. Set .ParentCatalog = catDB 'Set the properties to create the link. .Properties("Jet OLEDB:Create Link") = True .Properties("Jet OLEDB:Link Datasource") = sLinkToDB .Properties("Jet OLEDB:Remote Table Name") = sLinkToTable If Len(sPassword) Then .Properties("Jet OLEDB:Link Provider String") = "MS Access;Pwd=" & sPassword End If End With 'Append the table to the Tables collection. 
catDB.Tables.Append TblLink Set catDB = Nothing 'Set return as success AccessLinkToTable = True End If Exit Function ErrFailed: On Error GoTo 0 AccessLinkToTable = False End Function Function AccessLinkTableUpdate(sLinkDatabasePath As String, sLinkToNewDatabase As String, sLinkTableName As String) As Boolean Dim catDB As ADOX.Catalog On Error GoTo ErrFailed Set catDB = New ADOX.Catalog 'Open a catalog on the database which contains the table to refresh. catDB.ActiveConnection = "Provider=Microsoft.Jet.OLEDB.4.0;" & "Data Source=" & sLinkDatabasePath If catDB.Tables(sLinkTableName).Type = "LINK" Then catDB.Tables(sLinkTableName).Properties("Jet OLEDB:Link Datasource") = sLinkToNewDatabase AccessLinkTableUpdate = True End If Set catDB = Nothing Exit Function ErrFailed: On Error GoTo 0 AccessLinkTableUpdate = False End Function ```
``` Select A.ID, A.AstTp, A.Offset, A.Age, B.LNo, B.ACnt, B.CommCnt Into [LnReg] From [c:\Temp\ALPA.mdb].ALPA As A LEFT OUTER JOIN [c:\Temp\ALX.mdb].ALX As B On A.ID = B.ID Where (A.AstTp="Sealed") ```
Using two mdbs to Select Into a third
[ "sql", "vb6", "adodb" ]
My goal is to create a query that will return a count of unique customers who have purchased in a 365 day window. I created the query below in Postgres and the resulting query is very slow. My table is 812,024 rows of just order dates and customer id's. When I remove the distinct statement, I can get the query to return a result in approx 60 seconds; with it, I have yet to see it finish. I created an index on (order\_date, id). I am a complete newb to SQL, this is really the first time I have ever done anything with it, and after trying to find a solution to this problem all day, I could not find anything that I could get to work, even though I have seen a lot about the slow performance of distinct. ``` SELECT (d1.Ordered) AS Ordered, COUNT(distinct d2.ID) Users FROM ( SELECT order_date AS Ordered FROM orders GROUP BY order_date ) d1 INNER JOIN ( SELECT order_date AS Ordered, id FROM orders ) d2 ON d2.Ordered BETWEEN d1.Ordered - 364 AND d1.Ordered GROUP BY d1.Ordered ORDER BY d1.Ordered "Sort (cost=3541596.30..3541596.80 rows=200 width=29)" " Sort Key: orders_1.order_date" " -> HashAggregate (cost=3541586.66..3541588.66 rows=200 width=29)" " -> Nested Loop (cost=16121.73..3040838.52 rows=100149627 width=29)" " -> HashAggregate (cost=16121.30..16132.40 rows=1110 width=4)" " -> Seq Scan on orders orders_1 (cost=0.00..14091.24 rows=812024 width=4)" " -> Index Only Scan using x on orders (cost=0.43..1822.70 rows=90225 width=29)" " Index Cond: ((order_date >= (orders_1.order_date - 364)) AND (order_date <= orders_1.order_date))" ```
Assuming actual `date` types. ``` SELECT d.day, count(distinct o.id) AS users_past_year FROM ( SELECT generate_series(min(order_date), max(order_date), '1 day')::date AS day FROM orders -- single query ) d LEFT JOIN ( -- fold duplicates on same day right away SELECT id, order_date FROM orders GROUP BY 1,2 ) o ON o.order_date > d.day - interval '1 year' -- exclude AND o.order_date <= d.day -- include GROUP BY 1 ORDER BY 1; ``` Folding multiple purchases from the same user on the same day first only makes sense if that's a common thing. Else it will be faster to omit that step and simply left-join to the table `orders` instead. It's rather odd that `orders.id` would be the ID of the user. Should be named something like `user_id`. If you are not comfortable with `generate_series()` in the `SELECT` list (which works just fine), you can replace that with a `LATERAL JOIN` in Postgres 9.3+. ``` FROM (SELECT min(order_date) AS a , max(order_date) AS z FROM orders) x , generate_series(x.a, x.z, '1 day') AS d(day) LEFT JOIN ... ``` Note that `day` is type `timestamp` in this case. Works the same. You may want to cast. ## General performance tips I understand this is a read-only table for a single user. This simplifies things. You already seem to have an index: ``` CREATE INDEX orders_mult_idx ON orders (order_date, id); ``` That's good. Some things to try: ### Basics Of course, the usual performance advice applies: <https://wiki.postgresql.org/wiki/Slow_Query_Questions> <https://wiki.postgresql.org/wiki/Performance_Optimization> ### Streamline table Cluster your table using this index once: ``` CLUSTER orders USING orders_mult_idx; ``` This should help a bit. It also effectively runs `VACUUM FULL` on the table, which removes any dead rows and compacts the table if applicable. 
### Better statistic ``` ALTER TABLE orders ALTER COLUMN number SET STATISTICS 1000; ANALYZE orders; ``` Explanation here: * [Configuration parameter work\_mem in PostgreSQL on Linux](https://stackoverflow.com/questions/8106181/work-mem-in-postgresql-on-linux) ### Allocate more RAM Make sure you have ample resources allocated. In particular for [`shared_buffers` and `work_mem`](http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY). You can do this temporarily for your session. ### Experiment with [planner methods](http://www.postgresql.org/docs/current/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-ENABLE) Try disabling nested loops ([`enable_nestloop`](http://www.postgresql.org/docs/current/interactive/runtime-config-query.html#GUC-ENABLE-NESTLOOP)) (in your session only). Maybe hash joins are faster. (I would be surprised, though.) ``` SET enable_nestedloop = off; -- test ... RESET enable_nestedloop; ``` ### Temporary table Since this seems to be a "temporary table" by nature, you could try and make it an actual temporary table saved in RAM only. You need enough RAM to allocate enough [`temp_buffers`](http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html#GUC-TEMP-BUFFERS). Detailed instructions: * [How to delete duplicate entries?](https://stackoverflow.com/questions/1746213/how-to-delete-duplicate-entries/8826879#8826879) Be sure to run `ANALYZE` manually. Temp tables are not covered by autovacuum.
No need for the self-join, use `generate_series` ``` select g.order_date as "Ordered", count(distinct o.id) as "Users" from generate_series( (select min(order_date) from orders), (select max(order_date) from orders), '1 day' ) g (order_date) left join orders o on o.order_date between g.order_date - 364 and g.order_date group by 1 order by 1 ```
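`generate_series` is Postgres-specific; as an illustration of the same calendar-then-left-join shape, here is a sketch in Python with SQLite, where a recursive CTE stands in for `generate_series` (and a 2-day window stands in for the 365-day one so the tiny dataset is easy to check by hand):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INT, order_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, '2014-01-01'), (2, '2014-01-01'),
                  (1, '2014-01-02'), (3, '2014-01-04')])

# Recursive CTE generates one row per calendar day between the first and
# last order; the LEFT JOIN + COUNT(DISTINCT) is the same as in the answer.
rows = conn.execute("""
    WITH RECURSIVE days(day) AS (
        SELECT (SELECT MIN(order_date) FROM orders)
        UNION ALL
        SELECT date(day, '+1 day') FROM days
        WHERE day < (SELECT MAX(order_date) FROM orders)
    )
    SELECT d.day, COUNT(DISTINCT o.id) AS users_in_window
    FROM days d
    LEFT JOIN orders o
      ON o.order_date >  date(d.day, '-2 day')   -- 2-day window for the demo;
     AND o.order_date <= d.day                   -- use '-365 day' in practice
    GROUP BY d.day
    ORDER BY d.day
""").fetchall()
print(rows)
```

Note that January 3rd appears in the output even though no orders exist that day — exactly what the calendar-driven join buys you over self-joining the orders table.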
Slow query in postgres using count distinct
[ "sql", "postgresql" ]
I have the following table: ``` my_table ------------------------ | common_id | uniq_val | ------------------------ | 1 | foo | ------------------------ | 1 | bar | ------------------------ ``` And I want to aggregate values from it such that the resulting query looks like: ``` DESIRED RESULT --------------------------------------- | common_id | uniq_val_1 | uniq_val_2 | --------------------------------------- | 1 | foo | bar | --------------------------------------- OR --------------------------------------- | common_id | uniq_val_1 | uniq_val_2 | --------------------------------------- | 1 | bar | foo | --------------------------------------- ``` So I've written the query: ``` SELECT t1.common_id, t1.uniq_val, t2.uniq_val FROM my_table t1 JOIN my_table AS t2 ON t1.common_id=t2.common_id WHERE t1.uniq_val!=t2.uniq_val; ``` Which results in ``` RESULTING SELECT --------------------------------------- | common_id | uniq_val_1 | uniq_val_2 | --------------------------------------- | 1 | foo | foo | --------------------------------------- | 1 | bar | bar | --------------------------------------- ``` But I only need one of those columns, so I should be able to do a GROUP BY t1.common\_id, like: ``` SELECT t1.common_id, t1.uniq_val, t2.uniq_val FROM my_table t1 JOIN my_table AS t2 ON t1.common_id=t2.common_id WHERE t1.uniq_val!=t2.uniq_val GROUP BY t1.common_id; ``` Unfortunately this returns the error: ``` ERROR: column "t1.uniq_val" must appear in the GROUP BY clause or be used in an aggregate function ``` Can anyone point out the error in my logic?
How about simple aggregation? ``` select common_id, min(uniq_val) as uniq_val_1, max(uniq_val) as uniq_val_2 from my_table group by common_id; ```
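A quick check of the min/max trick on the sample data, sketched in Python with SQLite (chosen only to make the example self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (common_id INT, uniq_val TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [(1, 'foo'), (1, 'bar')])

# With exactly two values per common_id, MIN() picks one and MAX() picks
# the other -- no self-join, no WHERE uniq_val != uniq_val needed.
rows = conn.execute("""
    SELECT common_id,
           MIN(uniq_val) AS uniq_val_1,
           MAX(uniq_val) AS uniq_val_2
    FROM my_table
    GROUP BY common_id
""").fetchall()
print(rows)
```

This yields the second of the two acceptable layouts from the question ('bar' first, since it sorts before 'foo').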
You can try `DISTINCT ON`: ``` SELECT distinct on (t1.common_id) t1.common_id, t1.uniq_val, t2.uniq_val FROM my_table t1 JOIN my_table AS t2 ON t1.common_id=t2.common_id WHERE t1.uniq_val!=t2.uniq_val; ``` I think it will produce what you need!
Aggregating a unique pair of column values from the same table based on a common column value
[ "sql", "postgresql", "join", "group-by" ]
I have many, many rows in the table. The table has a Date column (which includes date and time). I want to be able to loop through the table and find any gap between two consecutive rows where the difference is more than 5 minutes. For example: ``` ID Date 1 2014-07-29 13:00:00 2 2014-07-29 13:01:00 3 2014-07-29 13:07:00 ``` As you can see, the time difference between the first row and second row is 1 minute, so I ignore it; then I should be checking the time between the second row and third row. Since that difference is 6 minutes, I want to spit out an exception with the dates that were compared. The table could contain many rows, so I would go and check each record against the previous one, and so on... How could I achieve this in SQL Server? I can do a datediff, but I will have a lot of rows and I don't want to perform this manually. Any suggestions? NOTE: I don't need to worry about crossing over from one day to another, since this task is only going to be used for a single day. I will specify in the SQL statement where date = getdate()
One way to do it ``` WITH ordered AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY date) rn FROM table1 ) SELECT o1.id id1, o1.date date1, o2.id id2, o2.date date2, DATEDIFF(s, o1.date, o2.date) diff FROM ordered o1 JOIN ordered o2 ON o1.rn + 1 = o2.rn WHERE DATEDIFF(s, o1.date, o2.date) > 60 ``` Output: ``` | ID1 | DATE1 | ID2 | DATE2 | DIFF | |-----|-----------------------------|-----|-----------------------------|------| | 2 | July, 29 2014 13:01:00+0000 | 3 | July, 29 2014 13:07:00+0000 | 360 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!6/a40a1/8)** demo
Since you are on SQL Server 2012, you can make use of the `LEAD` and `LAG` functions, like so: ``` ;with cte as (select id, [date], datediff(minute,[date],lead([date],1,0) over (order by [date])) difference, row_number() over (order by [date]) rn from tbl) select * from cte where rn <> 1 --Skip first row ordered by the date and difference > 5 ``` This will return all rows which have a difference of more than 5 minutes with the next row. The assumption is that the rows are sorted in ascending order of date. [Demo](http://rextester.com/TFQNH17288)
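For engines without `LEAD`/`LAG` (or just to check the expected output), the same row pairing can be written with a correlated subquery; here is a sketch in Python with SQLite using the question's sample rows (an illustration, not the SQL Server syntax of the answers above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ID INT, Date TEXT)")
conn.executemany("INSERT INTO log VALUES (?, ?)",
                 [(1, '2014-07-29 13:00:00'),
                  (2, '2014-07-29 13:01:00'),
                  (3, '2014-07-29 13:07:00')])

# Pair each row with the next row by timestamp, then keep only the pairs
# more than 5 minutes (300 seconds) apart.
gaps = conn.execute("""
    SELECT a.ID, a.Date, b.ID, b.Date,
           CAST(strftime('%s', b.Date) AS INT)
         - CAST(strftime('%s', a.Date) AS INT) AS diff_s
    FROM log a
    JOIN log b
      ON b.Date = (SELECT MIN(Date) FROM log WHERE Date > a.Date)
    WHERE CAST(strftime('%s', b.Date) AS INT)
        - CAST(strftime('%s', a.Date) AS INT) > 300
""").fetchall()
print(gaps)
```

Only the 13:01 → 13:07 pair (360 seconds) survives the filter, matching the gap the question wants flagged.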
Find a gap between time stamp
[ "sql", "sql-server" ]
I have a table structure like this: ``` ------------------------------------- | Country | Count 1 | Count 2 | ------------------------------------- | USA | 10 | 10 | ------------------------------------- | USA | 10 | 10 | ------------------------------------- | France | 200 | 200 | ------------------------------------- | USA | 10 | 10 | ------------------------------------- | France | 100 | 100 | ------------------------------------- ``` I would like to select the total of Count 1 and Count 2 for each country. So my output would be like ``` ------------------------- | Country | Count | ------------------------- | USA | 60 | ------------------------- | France | 600 | ------------------------- ```
Try using [`SUM()`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_sum) ``` SELECT Country, SUM(`Count 1` + `Count 2`) AS count FROM table_name GROUP BY country ```
This is pretty straight-forward, you can use the aggregate function [`SUM()`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_sum) with a `GROUP BY`: ``` SELECT country, sum(count1 + count2) as Total FROM yourtable GROUP BY country; ``` See [Demo](http://sqlfiddle.com/#!2/1ef0e/1)
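A quick check against the sample data, sketched in Python with SQLite (the column names are written without spaces for simplicity; the question's MySQL version would backtick-quote `` `Count 1` `` instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Country TEXT, Count1 INT, Count2 INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [('USA', 10, 10), ('USA', 10, 10), ('France', 200, 200),
                  ('USA', 10, 10), ('France', 100, 100)])

# SUM over the per-row sum of both columns, grouped by country.
rows = conn.execute("""
    SELECT Country, SUM(Count1 + Count2) AS total
    FROM t
    GROUP BY Country
    ORDER BY Country
""").fetchall()
print(rows)
```

USA's three rows give 3 × (10 + 10) = 60 and France's two give 300 + 300 = 600, matching the desired output.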
SQL: Selecting sum of two columns across multiple rows
[ "mysql", "sql" ]
I am receiving the 0x84BB0001 Error after attempting to install a new instance of SQL Server 2012. This is happening after an uninstall of a previous version of SQL Server 2012. I have pasted the log file below. Any help is greatly appreciated. ``` Overall summary: Final result: Failed: see details below Exit code (Decimal): -2068119551 Exit facility code: 1211 Exit error code: 1 Exit message: The system cannot find the file specified. Start time: 2014-07-29 11:59:09 End time: 2014-07-29 12:09:12 Requested action: Install Exception help link: http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.3128.0&EvtType=0x37D77D9E%400xDC80C325&EvtType=0x37D77D9E%400xDC80C325 Setup completed with required actions for features. Troubleshooting information for those features: Next step for SQLEngine: SQL Server Setup was canceled before completing the operation. Try the setup process again. Next step for Replication: SQL Server Setup was canceled before completing the operation. Try the setup process again. Next step for SSMS: SQL Server Setup was canceled before completing the operation. Try the setup process again. Next step for SNAC: SQL Server Setup was canceled before completing the operation. Try the setup process again. Next step for SNAC_SDK: SQL Server Setup was canceled before completing the operation. Try the setup process again. Next step for LocalDB: SQL Server Setup was canceled before completing the operation. Try the setup process again. Next step for Writer: SQL Server Setup was canceled before completing the operation. Try the setup process again. Next step for Browser: SQL Server Setup was canceled before completing the operation. Try the setup process again. 
Machine Properties: Machine name: MAPCOM444 Machine processor count: 4 OS version: Windows 7 OS service pack: Service Pack 1 OS region: United States OS language: English (United States) OS architecture: x64 Process architecture: 64 Bit OS clustered: No Product features discovered: Product Instance Instance ID Feature Language Edition Version Clustered Package properties: Description: Microsoft SQL Server 2012 Service Pack 1 ProductName: SQL Server 2012 Type: RTM Version: 11 SPLevel: 0 Installation location: c:\a95a4ef80214055fe53d\x64\setup\ Installation edition: Express Product Update Status: None discovered. User Input Settings: ACTION: Install ADDCURRENTUSERASSQLADMIN: true AGTSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE AGTSVCPASSWORD: ***** AGTSVCSTARTUPTYPE: Disabled ASBACKUPDIR: Backup ASCOLLATION: Latin1_General_CI_AS ASCONFIGDIR: Config ASDATADIR: Data ASLOGDIR: Log ASPROVIDERMSOLAP: 1 ASSERVERMODE: MULTIDIMENSIONAL ASSVCACCOUNT: <empty> ASSVCPASSWORD: <empty> ASSVCSTARTUPTYPE: Automatic ASSYSADMINACCOUNTS: <empty> ASTEMPDIR: Temp BROWSERSVCSTARTUPTYPE: Disabled CLTCTLRNAME: <empty> CLTRESULTDIR: <empty> CLTSTARTUPTYPE: 0 CLTSVCACCOUNT: <empty> CLTSVCPASSWORD: <empty> CLTWORKINGDIR: <empty> COMMFABRICENCRYPTION: 0 COMMFABRICNETWORKLEVEL: 0 COMMFABRICPORT: 0 CONFIGURATIONFILE: CTLRSTARTUPTYPE: 0 CTLRSVCACCOUNT: <empty> CTLRSVCPASSWORD: <empty> CTLRUSERS: <empty> ENABLERANU: true ENU: true ERRORREPORTING: false FEATURES: SQLENGINE, REPLICATION, SSMS, SNAC_SDK, LOCALDB FILESTREAMLEVEL: 0 FILESTREAMSHARENAME: <empty> FTSVCACCOUNT: <empty> FTSVCPASSWORD: <empty> HELP: false IACCEPTSQLSERVERLICENSETERMS: false INDICATEPROGRESS: false INSTALLSHAREDDIR: c:\Program Files\Microsoft SQL Server\ INSTALLSHAREDWOWDIR: c:\Program Files (x86)\Microsoft SQL Server\ INSTALLSQLDATADIR: <empty> INSTANCEDIR: C:\Program Files\Microsoft SQL Server\ INSTANCEID: SQLEXPRESS INSTANCENAME: SQLEXPRESS ISSVCACCOUNT: NT AUTHORITY\Network Service ISSVCPASSWORD: <empty> ISSVCSTARTUPTYPE: 
Automatic MATRIXCMBRICKCOMMPORT: 0 MATRIXCMSERVERNAME: <empty> MATRIXNAME: <empty> NPENABLED: 0 PID: ***** QUIET: false QUIETSIMPLE: false ROLE: AllFeatures_WithDefaults RSINSTALLMODE: DefaultNativeMode RSSHPINSTALLMODE: DefaultSharePointMode RSSVCACCOUNT: <empty> RSSVCPASSWORD: <empty> RSSVCSTARTUPTYPE: Automatic SAPWD: <empty> SECURITYMODE: <empty> SQLBACKUPDIR: <empty> SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS SQLSVCACCOUNT: NT Service\MSSQL$SQLEXPRESS SQLSVCPASSWORD: <empty> SQLSVCSTARTUPTYPE: Automatic SQLSYSADMINACCOUNTS: <empty> SQLTEMPDBDIR: <empty> SQLTEMPDBLOGDIR: <empty> SQLUSERDBDIR: <empty> SQLUSERDBLOGDIR: <empty> SQMREPORTING: false TCPENABLED: 0 UIMODE: AutoAdvance UpdateEnabled: true UpdateSource: MU X86: false Configuration file: C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20140729_114734\ConfigurationFile.ini Detailed results: Feature: Database Engine Services Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. Try the setup process again. Feature: SQL Server Replication Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. Try the setup process again. Feature: Management Tools - Basic Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. Try the setup process again. Feature: SQL Client Connectivity Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. Try the setup process again. Feature: SQL Client Connectivity SDK Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. 
Try the setup process again. Feature: LocalDB Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. Try the setup process again. Feature: SQL Writer Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. Try the setup process again. Feature: SQL Browser Status: Failed: see logs for details Reason for failure: Setup was canceled for the feature. Next Step: SQL Server Setup was canceled before completing the operation. Try the setup process again. Rules with failures: Global rules: Scenario specific rules: Rules report file: C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20140729_114734\SystemConfigurationCheck_Report.htm Exception summary: The following is an exception stack listing the exceptions in outermost to innermost order Inner exceptions are being indented Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException Message: The system cannot find the file specified. 
HResult : 0x84bb0001 FacilityCode : 1211 (4bb) ErrorCode : 1 (0001) Data: WatsonData = SQLBrowser DisableRetry = true HelpLink.EvtType = 0x37D77D9E@0xDC80C325 DisableWatson = true Stack: at Microsoft.SqlServer.Configuration.Sco.Service.GetStartAccount() at Microsoft.SqlServer.Configuration.SqlBrowser.SqlBrowserPublicConfig.CalculateUserNamePassword() at Microsoft.SqlServer.Configuration.SqlBrowser.SqlBrowserPublicConfig.Calculate() at Microsoft.SqlServer.Chainer.Infrastructure.InputSettingService.CalculateSettings(IEnumerable`1 settingIds) at Microsoft.SqlServer.Configuration.Settings.Calculate() at Microsoft.SqlServer.Configuration.InstallWizardFramework.ConfigurationController.LoadData() at Microsoft.SqlServer.Configuration.InstallWizard.ServicesController.LoadData() at Microsoft.SqlServer.Configuration.InstallWizardFramework.InstallWizardPageHost.PageEntered(PageChangeReason reason) at Microsoft.SqlServer.Configuration.InstallWizardFramework.InstallWizardTabbedPageHost.PageEntered(PageChangeReason reason) at Microsoft.SqlServer.Configuration.WizardFramework.UIHost.set_SelectedPageIndex(Int32 value) at Microsoft.SqlServer.Configuration.InstallWizardFramework.ConfigurationController.OnPageNavigationRequest(PageChangeReason reason) at Microsoft.SqlServer.Configuration.InstallWizard.DiskUsageController.LoadData() at Microsoft.SqlServer.Configuration.InstallWizardFramework.InstallWizardPageHost.PageEntered(PageChangeReason reason) at Microsoft.SqlServer.Configuration.WizardFramework.UIHost.set_SelectedPageIndex(Int32 value) at Microsoft.SqlServer.Configuration.WizardFramework.NavigationButtons.nextButton_Click(Object sender, EventArgs e) at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at 
System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) Inner exception type: System.ComponentModel.Win32Exception Message: The system cannot find the file specified. HResult : 0x80004005 Error : 2 Stack: at Microsoft.SqlServer.Configuration.Sco.Service.QueryServiceConfig() at Microsoft.SqlServer.Configuration.Sco.Service.GetStartAccount() ```
I was able to get past the error by changing the instance name. The old instance name from the uninstalled version was SQLEXPRESS, I changed this to SQLEXPRESS2012 and the installation proceeded.
Have you tried deleting the contents of the folder C:/Users/[UserName]/AppData/Local/Microsoft\_Corporation? Then run the installation again as a repair; hope this will work.
Error 0x84BB0001 while installing Sql Server 2012
[ "sql", "sql-server", "database" ]
I'm running the below SQL query:

```
SELECT share_name AS "Server", Capacity, Available,
       ((available * 1.00) / capacity) * 100 as "% Free"
FROM storage_info
WHERE share_name LIKE '%192.168.1.1%'
   OR share_name LIKE '%192.168.1.2%'
   OR share_name LIKE '%192.168.1.3%'
   OR share_name LIKE '%192.168.1.4%';
```

Which returns the results of the 'share\_name' column as IP addresses, as they are stored in the table. Is it possible to change what the output of the IP addresses will be, within the query - in the same way that 'AS' changes the name of the column?
```
SELECT CASE
         WHEN share_name LIKE '%192.168.1.1%' THEN 'SERVER 1'
         WHEN share_name LIKE '%192.168.1.2%' THEN 'SERVER 2'
         WHEN share_name LIKE '%192.168.1.3%' THEN 'SERVER 3'
         WHEN share_name LIKE '%192.168.1.4%' THEN 'SERVER 4'
         ELSE share_name
       END AS "Server",
       Capacity, Available,
       ((available * 1.00) / capacity) * 100 as "% Free"
FROM storage_info
WHERE share_name LIKE '%192.168.1.1%'
   OR share_name LIKE '%192.168.1.2%'
   OR share_name LIKE '%192.168.1.3%'
   OR share_name LIKE '%192.168.1.4%';
```
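As an editor's aside: the relabeling pattern above is easy to sanity-check outside SQL Server. This is a minimal sketch using SQLite from Python; the share names and capacity figures are made up, and only the CASE/LIKE shape is taken from the answer.

```python
import sqlite3

# Made-up sample rows -- names and numbers are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE storage_info (share_name TEXT, capacity INT, available INT)")
conn.executemany("INSERT INTO storage_info VALUES (?, ?, ?)",
                 [("nas-192.168.1.1", 100, 25), ("nas-192.168.1.2", 200, 50)])

# Same CASE idea: relabel matching rows while leaving the WHERE filter untouched.
rows = conn.execute("""
    SELECT CASE
             WHEN share_name LIKE '%192.168.1.1%' THEN 'SERVER 1'
             WHEN share_name LIKE '%192.168.1.2%' THEN 'SERVER 2'
             ELSE share_name
           END AS server,
           (available * 1.0) / capacity * 100 AS pct_free
    FROM storage_info
    ORDER BY server
""").fetchall()
print(rows)  # [('SERVER 1', 25.0), ('SERVER 2', 25.0)]
```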
You can do it as follows:

```
SELECT CASE
         WHEN share_name = '192.168.1.1' THEN 'Server 1'
         ELSE share_name
       END AS "Server",
       Capacity, Available,
       ((available * 1.00) / capacity) * 100 as "% Free"
FROM storage_info
WHERE share_name LIKE '%192.168.1.1%'
   OR share_name LIKE '%192.168.1.2%'
   OR share_name LIKE '%192.168.1.3%'
   OR share_name LIKE '%192.168.1.4%';
```
How can I change the results of a single column of a sql query, within the query itself?
[ "sql", "sql-server", "database" ]
I am having problems with the following script. I am trying to return a table where recipient counts for both mailings are returned simultaneously. I know that what I have is wrong, but it may give you an idea of what I am looking for.

```
SELECT count( mailing_recipient_id ) AS CountA
FROM mailing_recipient
WHERE `mailing_id` = ( SELECT mailing_id FROM mailing WHERE mailing_name = 'Mailing A' )
UNION
SELECT COUNT( mailing_recipient_id ) AS CountB
FROM mailing_recipient
WHERE `mailing_id` = ( SELECT mailing_id FROM mailing WHERE mailing_name = 'Mailing B' );
```

Thank you muchly.
You can use a `JOIN` and `GROUP BY` to achieve the result you're looking for, as:

```
SELECT m.mailing_name, count(mr.mailing_recipient_id)
FROM mailing_recipient mr
INNER JOIN mailing m ON mr.mailing_id = m.mailing_id
GROUP BY m.mailing_name
ORDER BY m.mailing_name;
```
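For what it's worth, this JOIN/GROUP BY shape is portable across engines. Here is the same idea run against SQLite from Python; the table and column names follow the question, but the rows are invented sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mailing (mailing_id INT, mailing_name TEXT);
    CREATE TABLE mailing_recipient (mailing_recipient_id INT, mailing_id INT);
    INSERT INTO mailing VALUES (1, 'Mailing A'), (2, 'Mailing B');
    INSERT INTO mailing_recipient VALUES (10, 1), (11, 1), (12, 1), (13, 2);
""")

# One row per mailing, with its recipient count -- no UNION needed.
rows = conn.execute("""
    SELECT m.mailing_name, COUNT(mr.mailing_recipient_id)
    FROM mailing_recipient mr
    INNER JOIN mailing m ON mr.mailing_id = m.mailing_id
    GROUP BY m.mailing_name
    ORDER BY m.mailing_name
""").fetchall()
print(rows)  # [('Mailing A', 3), ('Mailing B', 1)]
```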
Just throw them in a subquery:

```
SELECT SUM(tot.CountA)
FROM (
    SELECT count( mailing_recipient_id ) AS CountA
    FROM mailing_recipient
    WHERE `mailing_id` = ( SELECT mailing_id FROM mailing WHERE mailing_name = 'Mailing A' )
    UNION
    SELECT COUNT( mailing_recipient_id ) AS CountA
    FROM mailing_recipient
    WHERE `mailing_id` = ( SELECT mailing_id FROM mailing WHERE mailing_name = 'Mailing B' )
) tot
```

Match the aliases in both of the union queries and SUM the alias.
MYSQL: Return Counts from Multiple Tables
[ "mysql", "sql", "join", "return" ]
I have a table with columns A, B, C and I'd like to get all combinations of records having {B,C} unique. That is, both the B value and the C value will appear only once in one set. Do you have any ideas how to achieve that? I assume the output has to contain one combination on a single row, which is not a problem. To make it clear, here is an example:

* 1,1,0
* 6,1,1
* 1,1,2
* 3,2,0
* 5,2,1
* 1,2,3

One possible combination is {1,1,0},{1,2,3}, while {6,1,1},{5,2,1} isn't, because the C column value '1' is not unique. What I'd like to get is such an output:

```
1,1,0,1,2,3
6,1,1,1,2,3
```

IOW the output will be n-tuples having B,C values unique.
I think you want a strange version of a self join:

```
select t1.*, t2.*
from table t1 join
     table t2
     on t1.b <> t2.b and t1.c <> t2.c;
```

This will return all pairs from the table where the `b` columns have distinct values and the `c` columns have distinct values.
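Using the sample rows from the question, the self join is quick to check in SQLite from Python (column names a, b, c as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INT, b INT, c INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 1, 0), (6, 1, 1), (1, 1, 2), (3, 2, 0), (5, 2, 1), (1, 2, 3)])

# Keep only pairs whose b values differ AND whose c values differ.
pairs = conn.execute("""
    SELECT t1.a, t1.b, t1.c, t2.a, t2.b, t2.c
    FROM t t1 JOIN t t2 ON t1.b <> t2.b AND t1.c <> t2.c
""").fetchall()

print((1, 1, 0, 1, 2, 3) in pairs)  # True  -- B and C both unique in the pair
print((6, 1, 1, 5, 2, 1) in pairs)  # False -- the C value 1 repeats
```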
This is what group by is for. It combines all records with the same values in the group by list into a single row.

```
select B, C
from my_table
group by B, C
```
SQL restricted set of combinations
[ "mysql", "sql", "combinations", "restrict" ]
I have a database which is in single-user mode. If I want to access tables in the database, I change the property from single to multi-user. How can I make the database multi-user permanently?
`ALTER DATABASE [MyDB] SET MULTI_USER` If it throws an error like `user is already connected to it`, select 'master' db in the dropdown and try it that way. If that doesn't do it, use `sp_who` to find what spid is accessing the DB and kill it.
If you want to do it in SSMS Object Explorer: right click on your database, go to Properties > Options, scroll to the bottom, find "Restrict Access" and change it to multi\_user, then click OK. Just an alternative to the query window; both do the same thing.
How to change database from Single user mode to multi user
[ "sql", "sql-server", "database", "database-administration", "administrator" ]
I have an application that will provide me with a string from which some unknown portion of that string may be the entire string found in the column of another record, i.e., I am provided with "A12345B" and there may be a record in the table whose [SerialNumber] column is equal to "12345" or perhaps to "123". Is there a sql query or algorithm you can suggest I use to find the matching records for such a situation? Thanks!
Here's a couple of ways to achieve that. Assuming we have a search variable:

```
DECLARE @Input NVARCHAR(50) = 'ABC12345'
```

**Using [LIKE](http://msdn.microsoft.com/en-gb/library/ms179859.aspx) Operator**

```
SELECT *
FROM MyTable
WHERE @Input LIKE '%' + SerialNumber + '%'
```

If you need to exclude blank `SerialNumber`s, then add this line:

```
AND SerialNumber <> ''
```

**Using [CHARINDEX](http://msdn.microsoft.com/en-gb/library/ms186323.aspx) Function**

```
SELECT *
FROM MyTable
WHERE CHARINDEX(SerialNumber, @Input) > 0
```
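The "reverse LIKE" trick (the input on the left, the column inside the wildcards) is worth seeing in action. A small sketch in SQLite from Python follows; note SQLite concatenates with `||` where T-SQL uses `+`, and the serial numbers here are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (serial_number TEXT)")
conn.executemany("INSERT INTO parts VALUES (?)",
                 [("12345",), ("123",), ("999",), ("",)])

user_input = "A12345B"
# The column value is the pattern's core; the user input is the tested string.
rows = conn.execute(
    "SELECT serial_number FROM parts "
    "WHERE serial_number <> '' AND ? LIKE '%' || serial_number || '%' "
    "ORDER BY serial_number",
    (user_input,),
).fetchall()
print(rows)  # [('123',), ('12345',)]
```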
If I understand your question correctly you're looking for `CHARINDEX`. It accepts two strings and looks for one in the other.

```
CHARINDEX ( expressionToFind, expressionToSearch [ , start_location ] )
```
SQL query to find record from superset of characters
[ "sql", "sql-server", "sql-server-ce" ]
I am struggling to write a (to me!) complex query which will calculate the total number of units sold of items, pulling data from two separate tables. I am using MySQL. The first table is a simple 'order\_contents' table (`sale_ord_contents` in my example) with item\_id's and the quantity sold. Some items may be an Assembly, in which case their contents are stored in another table, `sale_ord_assembly_contents`, along with the quantity of each subcomponent. In my example here, the only 'Assembly' is the Fruit Basket. I have made a note that one of the Fruit Basket entries has a quantity sold of (2), meaning all of its contents should have their quantity multiplied by two.

![enter image description here](https://i.stack.imgur.com/xD618.png)

My goal is to find the 20 most common items by their cumulative quantity sold between the two tables. The part I am struggling with the most is how to handle multiplying the values of subcomponents by their quantity of the parent in the `sale_ord_contents` table. I have written this query which handles the `sale_ord_contents` calculation, but I am having trouble joining in the data from the `sale_ord_assembly_contents` table...

```
SELECT item_id, name, sku, SUM(quantity) AS purchases
FROM sale_ord_contents
GROUP BY item_id
ORDER BY purchases DESC
LIMIT 20
```

Am I going to be better off doing this in PHP, or is this feasible in MySQL? If so, could a SQL guru assist me in writing a query that provides the desired results? Thank you!
Updated my answer; please try the following again:

```
SELECT IF(soac.item_id IS NULL, soc.item_id, soac.item_id) as item_id,
       IF(soac.name IS NULL, soc.name, soac.name) as name,
       SUM(soc.quantity * COALESCE(soac.quantity, 1)) AS total_sold
FROM sale_ord_contents soc
LEFT OUTER JOIN sale_ord_assembly_contents soac ON soac.order_item_id = soc.id
GROUP BY IF(soac.quantity IS NULL, soc.item_id, soac.item_id),
         IF(soac.name IS NULL, soc.name, soac.name)
ORDER BY total_sold DESC
LIMIT 20
```
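To see the quantity multiplication work end to end, here is the same join translated for SQLite (run from Python). SQLite has no `IF()`, so `COALESCE` stands in for it, and the order data is invented: 5 loose Apples plus 2 Fruit Baskets that each contain 3 Apples and 1 Orange.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sale_ord_contents (id INT, item_id INT, name TEXT, quantity INT);
    CREATE TABLE sale_ord_assembly_contents (order_item_id INT, item_id INT,
                                             name TEXT, quantity INT);
    -- 5 loose apples, plus 2 fruit baskets...
    INSERT INTO sale_ord_contents VALUES (1, 100, 'Apple', 5), (2, 900, 'Fruit Basket', 2);
    -- ...each basket containing 3 apples and 1 orange.
    INSERT INTO sale_ord_assembly_contents VALUES (2, 100, 'Apple', 3), (2, 300, 'Orange', 1);
""")

rows = conn.execute("""
    SELECT COALESCE(soac.name, soc.name) AS name,
           SUM(soc.quantity * COALESCE(soac.quantity, 1)) AS total_sold
    FROM sale_ord_contents soc
    LEFT JOIN sale_ord_assembly_contents soac ON soac.order_item_id = soc.id
    GROUP BY COALESCE(soac.item_id, soc.item_id)
    ORDER BY total_sold DESC
""").fetchall()
print(rows)  # [('Apple', 11), ('Orange', 2)]  -- 5 + 2*3 apples, 2*1 oranges
```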
The key to solving your problem is a left join. Essentially what this does is it matches up values between the table on the left and the table on the right side of the `left join`. If the table on the right doesn't have any values that match up correctly, `null` is inserted instead. Doing a left join between sale\_ord\_contents and sale\_ord\_assembly\_contents will yield results like this:

> ```
> Sale_ord_contents_name | quantity | sale_ord_assembly_contents_name | quantity
> apple                  | 5        | null                            | null
> fruitbasket 1          | 1        | apple                           | 3
> fruitbasket 1          | 1        | orange                          | 1
> ```

These numbers don't match up with the example you provided, but you get the point. This is exactly what you will need to find the correct numbers.

The next keyword that is important is `coalesce`. What this does is it goes through the list of items that you have provided and returns the first value that is not null. So if I use coalesce(sale\_ord\_assembly\_contents\_name, sale\_ord\_contents\_name), for the above example the null in the assembly table will be ignored and apple will be returned. Then, since the assembly name exists for the next two rows, apple and orange will be the values returned by the coalesce.

I believe the following query is what you will need:

```
SELECT coalesce(soa.item_id, so.item_id) as item_id,
       coalesce(soa.name, so.name) as name,
       so.sku,
       SUM(so.quantity * coalesce(soa.quantity, 1)) AS purchases
FROM sale_ord_contents so
LEFT JOIN sale_ord_assembly_contents soa ON so.id = soa.order_item_id
GROUP BY item_id, name, so.sku
ORDER BY purchases DESC
LIMIT 20
```
Multiple Table Query with SUM and COUNT
[ "mysql", "sql", "count", "sum", "multiple-tables" ]
Suppose I have these two tables with the sample data:

![Employee Table](https://i.stack.imgur.com/4XdFO.png)

![Employee_Roles Table](https://i.stack.imgur.com/l4bHQ.png)

I want to find which employees don't have the role, say 5330. The answer should be employees 2 & 3. If I simply try doing an employee left outer join with employee\_roles with where role\_id <> 5330, it includes employee 1 as well, but that should not be the case. Any help is greatly appreciated.
You can do this with a `Where Not Exists`:

```
Select e.*
From   employee e
Where  Not Exists
       (
           Select 1
           From   employee_roles r
           Where  r.employee_id = e.employee_id
           And    r.role_id = 5330
       )
```
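The anti-join behavior is easy to verify in a throwaway SQLite session from Python; the ids and role rows below are made up to reproduce the question's expected answer (employees 2 and 3).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (employee_id INT);
    CREATE TABLE employee_roles (employee_id INT, role_id INT);
    INSERT INTO employee VALUES (1), (2), (3);
    INSERT INTO employee_roles VALUES (1, 5330), (1, 1000), (2, 1000);
""")

# Employees with NO row carrying role 5330 -- employee 1 is correctly excluded
# even though it also has other roles.
rows = conn.execute("""
    SELECT e.employee_id
    FROM employee e
    WHERE NOT EXISTS (
        SELECT 1 FROM employee_roles r
        WHERE r.employee_id = e.employee_id AND r.role_id = 5330
    )
    ORDER BY e.employee_id
""").fetchall()
print(rows)  # [(2,), (3,)]
```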
Here is the syntax for the `left outer join` version:

```
select e.*
from employee e
left outer join employee_roles er
     on er.employee_id = e.employee_id and er.role_id = 5330
where er.employee_id is null;
```

The logic is that you use the `left outer join` to look for matches and return the rows where there is no match. It is very important that the condition on `role_id` be in the `on` clause rather than the `where` clause. You can also phrase this query using `NOT EXISTS` and `NOT IN`.
sql with set difference capability
[ "sql", "db2" ]
How can I get the count of the subgroup? I have Table1 and Table2 (below). How can I get the table Result? I tried GROUP BY but only managed to get the total rows. I am using Microsoft SQL Server.

```
Table1
ID  PriID  Function
1   500    A
2   500    B
3   500    C
4   501    A
5   501    B
6   502    C
7   503    A

Table2
PriID  Color
500    Green
501    Red
502    Green
503    Green

Result
Function  Green  Red  Total
A         2      1    3
B         1      1    2
C         2      0    2
```

Below is the current one I tried:

```
SELECT t1.Function, Count(t1.PriID) as Total
FROM Table1 t1
LEFT JOIN Table2 t2 on t1.PriID = t2.PriID
GROUP BY t1.function
```

and I get this:

```
Result
Function  Total
A         3
B         2
C         2
```
Try this:

```
SELECT t1.Function,
       SUM(CASE WHEN Color = 'Green' THEN 1 ELSE 0 END) AS Green,
       SUM(CASE WHEN Color = 'Red' THEN 1 ELSE 0 END) AS Red,
       Count(t1.PriID) as Total
FROM Table1 t1
LEFT JOIN Table2 t2 on t1.PriID = t2.PriID
GROUP BY t1.function
```
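The conditional-aggregation pattern reproduces the asker's Result table exactly. Here it is against SQLite from Python, using the question's data; the Function column is renamed `func` here only to avoid quoting a keyword-like name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INT, pri_id INT, func TEXT);
    CREATE TABLE t2 (pri_id INT, color TEXT);
    INSERT INTO t1 VALUES (1,500,'A'),(2,500,'B'),(3,500,'C'),(4,501,'A'),
                          (5,501,'B'),(6,502,'C'),(7,503,'A');
    INSERT INTO t2 VALUES (500,'Green'),(501,'Red'),(502,'Green'),(503,'Green');
""")

rows = conn.execute("""
    SELECT t1.func,
           SUM(CASE WHEN color = 'Green' THEN 1 ELSE 0 END) AS green,
           SUM(CASE WHEN color = 'Red'   THEN 1 ELSE 0 END) AS red,
           COUNT(t1.pri_id) AS total
    FROM t1 LEFT JOIN t2 ON t1.pri_id = t2.pri_id
    GROUP BY t1.func
    ORDER BY t1.func
""").fetchall()
print(rows)  # [('A', 2, 1, 3), ('B', 1, 1, 2), ('C', 2, 0, 2)]
```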
This is the best approach for doing this stuff:

```
;with cte as
(
    select [Function], t1.priID, color
    from tbl1 t1
    join tbl2 t2 on t1.priID = t2.priid
), def as
(
    select [Function], [Green], [Red]
    from ( select [Function], color, priID from cte ) dk
    Pivot ( count(priID) for color in ([Green], [Red]) ) pvt
)
select *, green + red [Total]
from def
```
How to get the Count of the subgroup in SQL
[ "sql", "t-sql" ]
I am using a CASE statement to return Morning, Evening or Not Yet Closed on the basis of a comparison. It works, except for 'Not Yet Closed': it returns NULL, which I don't want. If no closing date has been inserted, it should return 'Not Yet Closed' instead of NULL.

```
ALTER PROCEDURE [dbo].[USP_Report_SelectComplaintsByShift]
@IsMorningEvening bit
AS
BEGIN
Begin Try
    Select ComplaintID, ComplaintSubject, Complainants.ComplainantName as Complainant,
    Case
        When (datepart(hour, Complaints.ClosingDateTime) < 17) then 'Morning'
        When (datepart(hour, Complaints.ClosingDateTime) >= 17) then 'Evening'
        When ( Complaints.ClosingDateTime = '' ) then 'Not Closed Yet'
    End as ClosingShift
    from Complaints
    Inner Join Complainants ON Complaints.Complainant_ID = Complainants.ComplainantI
END
```
You must use **`Else`** in your case statement.

```
Select ComplaintID, ComplaintSubject, Complainants.ComplainantName as Complainant,
Case
    When (datepart(hour, Complaints.ClosingDateTime) < 17) then 'Morning'
    When (datepart(hour, Complaints.ClosingDateTime) >= 17) then 'Evening'
    Else 'Not Closed Yet'
End as ClosingShift
from Complaints
Inner Join Complainants ON Complaints.Complainant_ID = Complainants.ComplainantI
```

If you want to check for the NULL value explicitly, use the following query:

```
Select ComplaintID, ComplaintSubject, Complainants.ComplainantName as Complainant,
Case
    When (datepart(hour, Complaints.ClosingDateTime) < 17) then 'Morning'
    When (datepart(hour, Complaints.ClosingDateTime) >= 17) then 'Evening'
    When (Complaints.ClosingDateTime IS NULL) Then 'Not Closed Yet'
End as ClosingShift
from Complaints
Inner Join Complainants ON Complaints.Complainant_ID = Complainants.ComplainantI
```
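The difference the ELSE makes can be shown with three toy rows, one of them with a NULL closing time. In this sketch (SQLite from Python), `strftime('%H', ...)` plays the role of `datepart(hour, ...)`; the NULL row falls through both WHEN branches and is caught by the ELSE.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE complaints (id INT, closing TEXT)")  # NULL = still open
conn.executemany("INSERT INTO complaints VALUES (?, ?)",
                 [(1, '2014-07-29 09:00:00'), (2, '2014-07-29 18:30:00'), (3, None)])

rows = conn.execute("""
    SELECT id,
           CASE
             WHEN CAST(strftime('%H', closing) AS INTEGER) < 17  THEN 'Morning'
             WHEN CAST(strftime('%H', closing) AS INTEGER) >= 17 THEN 'Evening'
             ELSE 'Not Closed Yet'   -- NULL comparisons are never true, so NULL lands here
           END AS shift
    FROM complaints
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Morning'), (2, 'Evening'), (3, 'Not Closed Yet')]
```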
Check with **`IS NULL`** and an **`ELSE`** part. Try like this:

```
Select ComplaintID, ComplaintSubject, Complainants.ComplainantName as Complainant,
Case
    When (datepart(hour, Complaints.ClosingDateTime) < 17) Then 'Morning'
    When (datepart(hour, Complaints.ClosingDateTime) >= 17) Then 'Evening'
    When (Complaints.ClosingDateTime IS NULL) Then 'Not Closed Yet'
    ELSE 'Not Closed Yet'
End as ClosingShift
from Complaints
Inner Join Complainants ON Complaints.Complainant_ID = Complainants.ComplainantI
```
Returning message instead of null in case statement
[ "sql", "sql-server", "t-sql" ]
I've started to use Access recently. I am trying to insert a few rows into the database; however, I am stuck as it is throwing an error:

> Too few parameters.

I have a table Test with only one column in it, named start\_date. I want to insert all the dates between two dates. For example, if I consider 1/7/2014 to 3/7/2014, I need the dates 1/7/2014, 2/7/2014, 3/7/2014 in my table, but I have a problem inserting them. The code I used is as follows:

```
Private Sub createRec_Click()
    Dim StrSQL As String
    Dim InDate As Date
    Dim DatDiff As Integer
    Dim db As database

    InDate = Me.FromDateTxt
    'here I have used a code to find out the difference between two dates that i've not written
    For i = 1 To DatDiff
        StrSQL = "INSERT INTO Test (Start_Date) VALUES ('" & InDate & "' );"
        StrSQL = StrSQL & "SELECT 'Test'"
        db.Execute StrSQL
        db.close
        i = i + 1
    Next i
End Sub
```

My code throws the "too few parameters" error on the line `db.Execute StrSQL`.
Since you mentioned you are quite new to Access, I'd invite you to first remove the errors in the code (the incomplete For loop and the SQL statement). Otherwise, you surely need the For loop to insert dates in a certain range. Now, please use the code below to insert the date values into your table. I have tested the code and it works. You can try it too. After that, add your For loop to suit your scenario.

```
Dim StrSQL As String
Dim InDate As Date
Dim DatDiff As Integer

InDate = Me.FromDateTxt

StrSQL = "INSERT INTO Test (Start_Date) VALUES ('" & InDate & "' );"

DoCmd.SetWarnings False
DoCmd.RunSQL StrSQL
DoCmd.SetWarnings True
```
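The shape of the fix (one INSERT per day inside the loop, with the loop counter driving the date) translates to any stack. This is not Access/VBA, but the same idea sketched in Python with SQLite, inserting every date between two endpoints via parameterized statements:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (start_date TEXT)")

start, end = date(2014, 7, 1), date(2014, 7, 3)
days = (end - start).days

# One parameterized INSERT per day, analogous to the For loop above.
for i in range(days + 1):
    conn.execute("INSERT INTO test (start_date) VALUES (?)",
                 ((start + timedelta(days=i)).isoformat(),))

rows = [r[0] for r in conn.execute("SELECT start_date FROM test ORDER BY start_date")]
print(rows)  # ['2014-07-01', '2014-07-02', '2014-07-03']
```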
You can't run two SQL statements rolled into one like you are doing. You can't "execute" a select query. `db` is an object and you haven't set it to anything (e.g. `set db = currentdb`). In VBA, Integer types can hold a max of 32767 - I would be tempted to use Long. You might want to be a bit more specific about the date you are inserting:

```
INSERT INTO Test (Start_Date) VALUES ('#" & format(InDate, "mm/dd/yyyy") & "#' );"
```
Error "Too few parameters." when attempting to insert values into a database table using VBA in MS access
[ "sql", "ms-access", "vba", "ms-access-2010" ]
I want to add a clause that does the following:

```
DECLARE @RequiresApproval BIT
SET @RequiresApproval = 0

SELECT *
FROM [user] u
WHERE school_id = 1
```

Here's the logic I want:

* when @RequiresApproval = 1, then add the clause: AND approved = 1
* when @RequiresApproval = 0, then leave out the clause: AND approved = 1

My goal is to have the SQL end up like either one of the following, depending on the value of @RequiresApproval. Is this possible to do?

```
-- when @RequiresApproval = 0, use this
SELECT *
FROM [user] u
WHERE school_id = 1

-- when @RequiresApproval = 1, use this
SELECT *
FROM [user] u
WHERE school_id = 1 AND approved = 1
```
Ok. This should implement your logic:

```
SELECT *
FROM [user] u
WHERE school_id = 1
  AND (@RequiresApproval = 0 or approved = 1);
```
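The `(@RequiresApproval = 0 OR approved = 1)` trick works in any database with bound parameters: when the flag is 0 the left disjunct is always true and the filter disappears. A quick check in SQLite from Python, with a made-up three-row table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INT, school_id INT, approved INT)")
conn.executemany("INSERT INTO user VALUES (?, ?, ?)",
                 [(1, 1, 1), (2, 1, 0), (3, 2, 1)])

query = """
    SELECT id FROM user
    WHERE school_id = 1
      AND (? = 0 OR approved = 1)
    ORDER BY id
"""
everyone = conn.execute(query, (0,)).fetchall()  # @RequiresApproval = 0
approved = conn.execute(query, (1,)).fetchall()  # @RequiresApproval = 1
print(everyone, approved)  # [(1,), (2,)] [(1,)]
```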
You can use this query - you just need to use `OR` between your two conditions.

```
SELECT *
FROM [user] u
WHERE school_id = 1
  AND (approved = 1 OR @RequiresApproval = 0)
```
SQL Conditional AND Clause
[ "sql", "sql-server" ]
I have a table with arrays as one column, and I want to sum the array elements together:

```
> create table regres(a int[] not null);
> insert into regres values ('{1,2,3}'), ('{9, 12, 13}');
> select * from regres;
     a
-----------
 {1,2,3}
 {9,12,13}
```

I want the result to be:

```
{10, 14, 16}
```

that is: `{1 + 9, 2 + 12, 3 + 13}`. Does such a function already exist somewhere? The `intagg` extension looked like a good candidate, but such a function does not already exist.

The arrays are expected to be between 24 and 31 elements in length, all elements are `NOT NULL`, and the arrays themselves will also always be `NOT NULL`. All elements are basic `int`. There will be more than two rows per aggregate. All arrays will have the same number of elements, in a query. Different queries will have different numbers of elements. My implementation target is: PostgreSQL 9.1.13.
**General solutions** for any number of arrays with any number of elements. Individual elements or the whole array can be NULL, too:

### Simpler in 9.4+ using [`WITH ORDINALITY`](http://www.postgresql.org/docs/current/functions-srf.html)

```
SELECT ARRAY (
   SELECT sum(elem)
   FROM  tbl t
       , unnest(t.arr) WITH ORDINALITY x(elem, rn)
   GROUP BY rn
   ORDER BY rn
   );
```

See:

* [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450)

### Postgres 9.3+

This makes use of an implicit [`LATERAL JOIN`](http://www.postgresql.org/docs/current/sql-select.html):

```
SELECT ARRAY (
   SELECT sum(arr[rn])
   FROM  tbl t
       , generate_subscripts(t.arr, 1) AS rn
   GROUP BY rn
   ORDER BY rn
   );
```

See:

* [What is the difference between LATERAL JOIN and a subquery in PostgreSQL?](https://stackoverflow.com/questions/28550679/what-is-the-difference-between-lateral-join-and-a-subquery-in-postgresql/28557803#28557803)

### Postgres 9.1

```
SELECT ARRAY (
   SELECT sum(arr[rn])
   FROM (
      SELECT arr, generate_subscripts(arr, 1) AS rn
      FROM   tbl t
      ) sub
   GROUP BY rn
   ORDER BY rn
   );
```

The same works in later versions, but *set-returning functions in the `SELECT` list* are not standard SQL and were frowned upon by some. Should be OK since Postgres 10, though. See:

* [What is the expected behaviour for multiple set-returning functions in SELECT clause?](https://stackoverflow.com/questions/39863505/what-is-the-expected-behaviour-for-multiple-set-returning-functions-in-select-cl/39864815#39864815)

*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=28f17ec272ac2f05665d44a2f7a0da78)*
Old [sqlfiddle](http://sqlfiddle.com/#!17/09a4c/1)

Related:

* [Is there something like a zip() function in PostgreSQL that combines two arrays?](https://stackoverflow.com/questions/12414750/is-there-something-like-a-zip-function-in-postgresql-that-combines-two-arrays/12414884#12414884)
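The unnest-with-ordinality idea boils down to "group elements by position, then sum each group". Outside Postgres you can model that in a few lines of plain Python, using the question's sample arrays:

```python
# Each inner list is one row's array; same sample data as the question.
rows = [[1, 2, 3], [9, 12, 13]]

# zip(*rows) groups elements by ordinality (position), like GROUP BY rn;
# summing each group gives the pairwise-sum array.
pairwise_sum = [sum(col) for col in zip(*rows)]
print(pairwise_sum)  # [10, 14, 16]
```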
If you need better performances and can install Postgres extensions, the [agg\_for\_vecs](https://github.com/pjungwir/aggs_for_vecs) C extension provides a `vec_to_sum` function that should meet your need. It also offers various aggregate functions like `min`, `max`, `avg`, and `var_samp` that operate on arrays instead of scalars.
Pairwise array sum aggregate function?
[ "sql", "arrays", "postgresql", "aggregate-functions" ]
Need some help with the logic for a SQL join.

Table1

* CustomerID (ex. 1234123)
* Customer\_name (ex. string)

Table 2

* CustomerID (ex. 1234, but it appears multiple times in the column)
* products (dog food, cat food, etc. on different rows within the same column)
* revenue for product

How do I join two tables together when one table has multiple rows of data that aggregate into one ID with one row of data?

Row: CustomerID ---- Customer\_name ------- dog food (revenue) ------ cat food (revenue)

Hope this makes sense; Google searching this didn't really find what I was looking for.
Try:

```
select c.customerid, c.Customer_name,
       sum(case when product = 'dog food' then revenue else 0 end) as dog_food_rev,
       sum(case when product = 'cat food' then revenue else 0 end) as cat_food_rev
from table1 c
join table2 p on c.customerid = p.customerid
group by c.customerid, c.Customer_name
```

This assumes the revenue field is named REVENUE. It also assumes that the product values are entitled 'dog food' and 'cat food' exactly (rename as necessary).

[SQL Fiddle](http://sqlfiddle.com/#!3/413cb/4)
Pivoting the table:

```
Select CustomerID, Customer_name,
       IsNull([Dog Food], 0) [Dog Food],
       IsNull([Cat Food], 0) [Cat Food]
from (
    select A.*, B.product, B.Revenue
    from Table1 A
    inner join Table2 B on A.CustomerID = B.CustomerID
) A
pivot (sum(revenue) for product in ([Dog Food], [Cat Food])) B
```

[SQL Fiddle](http://sqlfiddle.com/#!3/413cb/2)
logic on join two tables and using sum formula
[ "sql" ]
I am looking for some help regarding the format in which I want my data output. Below is a snippet of the code I am using. It currently converts the value from the database in seconds and outputs it like:

> 1d 12:05:52

I want it to output the information so the days are folded into the hours, basically dropping the '1d' like below:

> 36:05:52

```
CAST(FLOOR([Running] / 86400) AS VARCHAR(10)) + 'd '
  + CONVERT(VARCHAR(8), DATEADD(SECOND, [Running], '19000101'), 8) AS [Running]
```

Can someone please point me in the right direction using the code above? Thanks in advance for your help.
This should work:

```
SELECT CASE WHEN [Running]/3600 <= 9 THEN '0' ELSE '' END +
       CONVERT(VARCHAR(10), [Running]/3600) + ':' +
       RIGHT('00' + CONVERT(VARCHAR(2), ([Running]%3600)/60), 2) + ':' +
       RIGHT('00' + CONVERT(VARCHAR(2), [Running]%60), 2) AS [Running]
```

I tested it using this:

```
DECLARE @Running int
SET @Running = 60*60*24*30 + 60*3 + 3 -- should output 720:03:03

SELECT CASE WHEN @Running/3600 <= 9 THEN '0' ELSE '' END +
       CONVERT(VARCHAR(10), @Running/3600) + ':' +
       RIGHT('00' + CONVERT(VARCHAR(2), (@Running%3600)/60), 2) + ':' +
       RIGHT('00' + CONVERT(VARCHAR(2), @Running%60), 2) AS [Running]
```

Output:

```
Running
----------------
720:03:03

(1 row(s) affected)
```
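The hours/minutes/seconds arithmetic is the same in any language, which makes the T-SQL easy to cross-check. A small Python sketch of the identical divmod logic, including the answer's own 720:03:03 test value:

```python
def running_hms(seconds: int) -> str:
    """Cumulative hours (allowed to exceed 24, unlike a time-of-day), then MM:SS."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"

print(running_hms(60 * 60 * 24 * 30 + 60 * 3 + 3))  # 720:03:03
print(running_hms(36 * 3600 + 5 * 60 + 52))         # 36:05:52
```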
As @Hadi said in his comment, you can use the TimeSpan object in VB.Net (you've tagged the question with this so it seems reasonable to suggest), but you could also use this bit of SQL instead, which I think is slightly simpler than the other suggestion:

```
CAST(CAST(FLOOR([Running] / 3600) AS INT) AS VARCHAR)
  + RIGHT(CONVERT(VARCHAR, DATEADD(SECOND, [Running], '1900-01-01'), 108), 6) as [Running]
```
SQL statement DateTime format
[ "sql", "sql-server", "vb.net" ]
I'm working in an IO-bound system (and this isn't going to change), so I'm rewriting some SQL to only update when it needs to, and it's going really well. I'm seeing about a 70% increase in performance. The only problem is the SQL is more bloated, which isn't the end of the world, just more code to maintain. So my question is: is there an easier way to get Oracle to only update when it needs to, compared to adding a where clause like this:

```
update table_name
set field_one = 'one'
where field_one != 'one';
```

Note: The real code is much more complex, so adding a 'where' like this can sometimes double the length of the query.

Using 11g.
Given the nature of how SQL works, this is exactly what you need to do. If you tell it:

```
update table_name
set field_one = 'one';
```

that means something entirely different in SQL than:

```
update table_name
set field_one = 'one'
where field_one != 'one';
```

The database can only process what you told it to process. In the first case, because there is no where clause, you have told it to process all the records. In the second case you have put a filter on it to process only some specific records. It is up to the code **writer**, not the database, to determine the content of the query. If you didn't want every record updated, you should not have told it to do so. The database is quite literal about the commands you give it.

Yes, the second set of queries is longer because they are more specific. They have a different meaning than the original queries. That is all to the good, as it is far faster to update the ten records you are interested in than all 1,000,000 records in the table. You need to get over the idea that longer is somehow a bad thing in database queries. Often it is a good thing, as you are being more correct in what you are asking for. Your original queries were simply incorrect, and now you have to pay the price to fix what was a systemically bad practice.
I guess there isn't an easier way....
Avoid redundant updates
[ "sql", "oracle", "plsql" ]
```
CUSTOMER(ID,NAME,ENTRY_DT)
1,Dave,8312012
2,Tom,11262013
3,Iva,3312012
. . .
```

So the ENTRY\_DT column has a numeric string value in MDDYYYY or MMDDYYYY format. I want to write a simple select query so that ENTRY\_DT shows:

```
8/31/2012
11/26/2013
3/31/2012
. . .
```
```
DECLARE @TABLE TABLE (Int_Date INT)

INSERT INTO @TABLE VALUES (8312012), (11262013), (3312012)

SELECT LEFT(RIGHT('00000000' + CAST(Int_Date AS VARCHAR(10)), 8), 2)
       + '/' + REVERSE(SUBSTRING(REVERSE(Int_Date), 5, 2))
       + '/' + RIGHT(Int_Date, 4)
FROM @TABLE
```

## Result

```
08/31/2012
11/26/2013
03/31/2012
```
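The pad-to-eight-digits-then-slice approach is easy to mirror outside T-SQL. A Python sketch of the same logic on the question's three values:

```python
def int_date_to_mdy(value: int) -> str:
    """Left-pad MDDYYYY to MMDDYYYY with a zero, then slice out month/day/year."""
    s = str(value).zfill(8)
    return f"{s[:2]}/{s[2:4]}/{s[4:]}"

print([int_date_to_mdy(v) for v in (8312012, 11262013, 3312012)])
# ['08/31/2012', '11/26/2013', '03/31/2012']
```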
```
Select convert(date, right(('00000000' + entry_dt), 8), 101)
```

---

if it is SQL Server 2008 or above. Replace `date` with `datetime` if it is 2005 or below.
How to select numeric string as a valid DATE
[ "sql", "sql-server", "date", "select", "case" ]
I'm trying to read a column from a database using a SQL query. The column consists of empty strings or numbers as strings, such as:

```
"7500"
"4460"
""
"2900"
"2640"
"1850"
""
"2570"
"9050"
"8000"
"9600"
```

I'm trying to find the right SQL query to extract all the numbers (as integers) and remove the empty ones, but I'm stuck. So far I've got:

```
SELECT *
FROM base
WHERE CONVERT(INT, code) IS NOT NULL
```

Done in R (package sqldf).
If all values are valid integers, you could use:

```
select *, cast(code as int) IntCode
from base
where code <> ''
```

To prevent cases when field `code` is not a valid number, use:

```
select *, cast(codeN as int) IntCode
from base
cross apply (
    select case
             when code <> '' and not code like '%[^0-9]%' then code
             else NULL
           end
) N(codeN)
where codeN is not null
```

[SQL Fiddle](http://sqlfiddle.com/#!3/e6ab10/6)

**UPDATE**

To find rows where code is not a valid number, use:

```
select *
from base
where code like '%[^0-9]%'
```
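Since the asker mentions running this through sqldf (which defaults to SQLite), note that SQLite lacks the T-SQL `'%[^0-9]%'` LIKE class; `GLOB` provides an equivalent. A Python/SQLite sketch with a deliberately non-numeric '12a' row added:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE base (code TEXT)")
conn.executemany("INSERT INTO base VALUES (?)",
                 [("7500",), ("4460",), ("",), ("2900",), ("12a",)])

# GLOB '*[^0-9]*' matches any value containing a non-digit, mirroring the
# T-SQL LIKE '%[^0-9]%' pattern; NOT GLOB keeps only all-digit strings.
rows = conn.execute("""
    SELECT CAST(code AS INTEGER) FROM base
    WHERE code <> '' AND code NOT GLOB '*[^0-9]*'
""").fetchall()
print(rows)  # [(7500,), (4460,), (2900,)]
```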
Using sqldf with the default sqlite database and this test data: ``` DF <- data.frame(a = c("7500", "4460", "", "2900", "2640", "1850", "", "2570", "9050", "8000", "9600"), stringsAsFactors = FALSE) ``` try this: ``` library(sqldf) sqldf("select cast(a as aint) as aint from DF where length(a) > 0") ``` giving: ``` aint 1 7500 2 4460 3 2900 4 2640 5 1850 6 2570 7 9050 8 8000 9 9600 ``` **Note** In plain R one could write: ``` transform(subset(DF, nchar(a) > 0), a = as.integer(a)) ```
SQL query: convert
[ "sql", "sql-server", "r", "postgresql", "sqldf" ]
I have been working on SQL Server for a long time and recently I was moved to a project which uses Oracle. While I have heard of the term `Cursor`, I have not encountered one in the SQL Server. So my questions are: 1. Why do you even bother declaring `Cursor` in SQL Server? (or under what circumstances do you actually need a `cursor`?) 2. Why declaring `cursor` is mandatory in Oracle?
**Why do you even bother declaring Cursor in SQL Server? (or under what circumstances do you actually need a cursor?)** Any time you can't do set-based processing, record-by-record processing may be needed; then you would need to use a cursor. Set-based logic is all or nothing within the transaction. Perhaps I'm processing individual records and I'm willing to accept situations where partial work succeeds. In this case I could manage each record individually, get 99% complete, and have the one that "fails" write out to a log. However *usually* this too can be done via set-based logic if one thinks it through. **Why declaring cursor is mandatory in Oracle?** Unlike SQL Server, which returns data sets directly, Oracle returns data sets via REF CURSOR from packages, procedures and functions. So if you want a dataset back to work with, you must use a REF cursor.
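A minimal sketch of that record-by-record pattern in T-SQL (the table and procedure names here are made up for illustration), where each row is processed in its own TRY/CATCH so a single failure is logged instead of aborting the rest: ``` DECLARE @id INT; DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR SELECT id FROM dbo.work_queue; /* hypothetical table */ OPEN row_cursor; FETCH NEXT FROM row_cursor INTO @id; WHILE @@FETCH_STATUS = 0 BEGIN BEGIN TRY EXEC dbo.process_one_row @id; /* hypothetical per-row procedure */ END TRY BEGIN CATCH INSERT INTO dbo.error_log (row_id, msg) VALUES (@id, ERROR_MESSAGE()); /* log the failure and keep going */ END CATCH FETCH NEXT FROM row_cursor INTO @id; END CLOSE row_cursor; DEALLOCATE row_cursor; ``` This is the "accept partial work" scenario described above; a pure set-based statement would have rolled back all rows on the first error.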
A `Cursor` returns data consistent with the time of the cursor opening. To show this, I will open a cursor, then change a row, and compare the results of the database and the cursor: ``` SQL> conn hr/hr Connected. SQL> select employee_id, email from employees; EMPLOYEE_ID EMAIL ----------- ------------------------- 100 SKING 101 NKOCHHAR ... 205 SHIGGINS 206 WGIETZ 107 rows selected. SQL> var rc refcursor SQL> ed Wrote file afiedt.buf 1 begin 2 open :rc for 3 select employee_id 4 , email 5 from employees 6 order by 1; 7* end; SQL> / PL/SQL procedure successfully completed. SQL> update employees set email = 'xxxxxx' where employee_id = 206; 1 row updated. SQL> commit; Commit complete. SQL> print rc EMPLOYEE_ID EMAIL ----------- ------------------------- 100 SKING 101 NKOCHHAR ... 205 SHIGGINS 206 WGIETZ 107 rows selected. SQL> select employee_id, email from employees; EMPLOYEE_ID EMAIL ----------- ------------------------- 100 SKING 101 NKOCHHAR ... 205 SHIGGINS 206 xxxxxx 107 rows selected. ``` As you can see, the cursor has the data that was in the database at the time of cursor opening. This is very important behavior: assume you want to work with bank accounts, for example, to calculate a sum. Even if someone changes the table data, you will have consistent data and give the right answer anyway.
Why do you use Cursor in SQL Server?
[ "sql", "sql-server", "oracle" ]
I have a database with a list of old ItemID's that need updating to a new format. The old format is of the form 8046Y and the new format moves the 4th digit to the end and prepends a hyphen and adds a 0 if it's a single digit. The old format also uses alpha characters when the number goes over 9 for example 464HB where the H represents 17. I also need to add a 1 to the beginning of the new format. All this can be done with string manipulation in SQL I hope. Some examples: ``` 8046Y becomes 1804Y-06 464HB becomes 1464B-17 (H = 17) ``` Can anyone give me some pointers as to how to go about this in SQL? I got as far as: ``` select '1' + LEFT(ItemID, 3) + RIGHT(ItemID,1) + '-' + '0' + SUBSTRING(ItemID,3,1) from items ``` But the conversion from a=10 to z=36 stumped me
Try this: ``` select '1' + left(@str,3) + right(@str,1) + '-' + case when substring(@str,4,1) like '%[0-9]%' then right('00' + substring(@str,4,1),2) else cast(10 + ascii(substring(@str,4,1))-ascii('A') as varchar(2)) end ``` Explanation: If the 4th character is a digit, then do not change the value and append it after padding with a zero. Otherwise, use `ASCII` to get the ASCII value for that character, take its difference from the value for 'A' and add the offset of 10. [Demo](http://rextester.com/BOG70440)
``` select '1' + LEFT(ItemID, 3) + RIGHT(ItemID,1) + '-' + CASE SUBSTRING(ItemID,4,1) WHEN 'a' THEN '10' WHEN 'b' THEN '11' /* etc... */ END [NewItemID] from items ``` Just add the appropriate cases in that format, keeping the results as strings so the concatenation doesn't fail.
SQL String Manipulation and character replacement
[ "sql", "sql-server" ]
Is there a query which drops / deletes databases with no tables in them (deletes empty databases)? Server is Microsoft SQL Server 2005
This should do it. Tested on a lab machine and it dropped all databases with 0 user tables. Note, however, that tables aren't the only things in a database, necessarily. There could be stored procedures, functions, etc that someone might still need. NOTE THAT THIS IS A VERY DANGEROUS OPERATION, AS IT DROPS DATABASES. USE AT YOUR OWN RISK. I AM NOT RESPONSIBLE FOR DAMAGE YOU CAUSE. ``` USE [master]; DECLARE @name varchar(50); DECLARE @innerQuery varchar(max); DECLARE tableCursor CURSOR FOR SELECT name FROM sys.databases where owner_sid != 0x01; OPEN tableCursor; FETCH NEXT FROM tableCursor INTO @name WHILE @@FETCH_STATUS = 0 BEGIN SET @innerQuery = 'USE [' + @name + ']; IF (SELECT COUNT(*) FROM sys.objects WHERE type = ''U'') = 0 BEGIN USE [master]; DROP DATABASE [' + @name + '] END' EXEC(@innerQuery) FETCH NEXT FROM tableCursor INTO @name END CLOSE tableCursor; DEALLOCATE tableCursor; ``` Note also that, if a database is in use, SQL Server will refuse to drop it. So, if there are other connections to a particular database that this tries to drop, the command will abort. To avoid that problem, you can set the database in question to single-user mode. The following script is the same as the above, except it also sets the target databases to single-user mode to kill active connections. BE EVEN MORE CAREFUL WITH THIS, AS IT'S ESSENTIALLY THE NUCLEAR OPTION: ``` use [master]; DECLARE @name varchar(50); DECLARE @innerQuery varchar(max); DECLARE tableCursor CURSOR FOR SELECT name FROM sys.databases where owner_sid != 0x01; OPEN tableCursor; FETCH NEXT FROM tableCursor INTO @name WHILE @@FETCH_STATUS = 0 BEGIN SET @innerQuery = 'USE [' + @name + ']; IF (SELECT COUNT(*) FROM sys.objects WHERE type = ''U'') = 0 BEGIN USE [master]; ALTER DATABASE [' + @name + '] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE [' + @name + ']; END' EXEC(@innerQuery) FETCH NEXT FROM tableCursor INTO @name END CLOSE tableCursor; DEALLOCATE tableCursor; ```
I think it is not possible (at least not with a TSQL command, might be a stored procedure somewhere). You would have to query `sysobjects` and abort if any found. (And you have to decide if you want to ignore some of the system objects like the design tables).
Drop databases with no tables
[ "sql", "sql-server", "sql-server-2005" ]
I need some help to complete a query in SQL (Oracle). I am trying to have it select the row(s), i.e. the security/securities, with the highest (max) gain from price1 to price2. Here is what I have already. When I try to use subqueries in the select or where clauses, it fails, and I also cannot use hard-coded queries because the data in the table will increase periodically to 1000 securities. Any help is appreciated. My query: ``` select security, price1, price2, ((price2 - price1)/price1)*100 as Percentage FROM market ``` My table and data: ``` CREATE TABLE market (security VARCHAR2(30) PRIMARY KEY, PRICE1 NUMBER(6, 2), PRICE2 NUMBER(6, 2)); INSERT INTO market VALUES('A', 62, 178); INSERT INTO market VALUES('B', 80, 328); INSERT INTO market VALUES('C', 72, 736); INSERT INTO market VALUES('D', 270, 565); INSERT INTO market VALUES('E', 570, 665); INSERT INTO market VALUES('F', 400, 1640); INSERT INTO market VALUES('G', 800, 3280); COMMIT; ```
There are several ways of doing this. Here is one way: ``` select m.* from (select security, price1, price2, ((price2 - price1)/price1)*100 as Percentage, dense_rank() over (order by ((price2 - price1)/price1)*100 desc) as ranking from market ) m where ranking = 1; ```
Here is another alternative. ``` select * from (select security, price1, price2, ((price2 - price1)/price1)*100 as Percentage from market order by ((price2 - price1)/price1)*100 desc ) where rownum = 1 ; ```
SQL - Oracle - how to select securities from table with top change %
[ "sql", "oracle", "stocks" ]
I am using SQL Server 2012 & MATLAB. I have a table of 5 columns (1 char, 1 datetime and 3 floats). I have a simple query shown below that returns the data from this table which contains over a million records - this however causes an out of memory error in MATLAB. **simple query** ``` select id_co, date_r, FAM_Score, FAM_A_Score, FAM_Score from GSI_Scores where id_co <> 'NULL' order by id_co, date_rating ``` So I was looking to breakdown the query select the data in batches of 250,000 records. I have just come across the ROW\_NUMBER function which I added to my query, please see below. This numbers all the records for me. However I am having trouble selecting say records between 250,000 and 500,000. How do I do this? **updated query** ``` select id_co, date_r, FAM_Score, FAM_A_Score, FAM_Score, row_number() over (order by id_co) as num_co from GSI_Scores where id_co <> 'NULL' and num_sedol between 250000 and 500000 order by id_co, date_rating ```
Simply use a sub query or [Common Table Expression](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx) (CTE). ``` ;WITH CTE AS ( --Your query ) SELECT * FROM CTE WHERE num_co BETWEEN 250000 AND 500000 ```
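Filled in with the columns from the question (and keeping its original `id_co <> 'NULL'` filter as-is), a sketch of the batched fetch could look like: ``` ;WITH CTE AS ( SELECT id_co, date_r, FAM_Score, FAM_A_Score, ROW_NUMBER() OVER (ORDER BY id_co, date_r) AS num_co FROM GSI_Scores WHERE id_co <> 'NULL' ) SELECT id_co, date_r, FAM_Score, FAM_A_Score FROM CTE WHERE num_co BETWEEN 250001 AND 500000 /* second batch of 250,000 rows */ ORDER BY num_co; ``` The window alias `num_co` is usable in the outer `WHERE` because the ranking is computed inside the CTE first, which is exactly why the bare `row_number()` in the question's own `WHERE` clause fails.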
You could try using the OFFSET x ROWS FETCH NEXT y ROWS ONLY commands like this: ``` CREATE TABLE TempTable ( TempID INT IDENTITY(1,1) NOT NULL, SomeDescription VARCHAR(255) NOT NULL, PRIMARY KEY(TempID)) INSERT INTO TempTable (SomeDescription) VALUES ('Description 1'), ('Description 2'), ('Description 3'), ('Description 4'), ('Description 5'), ('Description 6'), ('Description 7'), ('Description 8'), ('Description 9'), ('Description 10') SELECT * FROM TempTable ORDER BY TempID OFFSET 3 ROWS FETCH NEXT 2 ROWS ONLY; ```
using row_number to return specific rows of query
[ "sql", "sql-server" ]
I am very new to SQL, so I am confused about how to join and get values from two tables. First table: ``` ID P_ID Name AGE U_ID 1 5 B 8 5w 2 8 D 17 6j 3 7 R 67 0qw ``` Second Table: ``` ID P_ID Address Edu 1 6 Bddd +2 2 7 Dssss Bachelor 3 2 rress Phd ``` Here, I want to match the tables on P\_ID, but I only have a U\_ID. For example, assume I have **U\_ID=0qw**. How do I get the **Address** and **Edu** values from the second table, along with the **Age**? Thanks in advance.
Join on the column that both tables have in common. ``` select t1.age, t2.address, t2.edu from table1 t1 join table2 t2 on t1.p_id = t2.p_id where t1.u_id = '0qw' ``` Then use the table names or alias names (like `t1` for `table1`) to pick columns from the tables you join.
I think you are looking forward to this: ``` SELECT t2.Address, t2.Edu, t1.Age FROM firstTable t1 JOIN secondTable t2 ON t1.P_ID = t2.P_ID WHERE t1.U_ID = '0qw' ```
How to get value from two table if id is same in sql
[ "mysql", "sql", "sql-server" ]
I have two tables in two different databases. My first table is an older version and has fewer columns than the second table. I want to copy the contents of my old table to my new table. There are several such table pairs in each database in this case. How can I quickly copy data from the old tables to the new ones without having to write the column names manually for each table? Thanks!
The following code should do the work. Basically what it does is: *1. Collects column names from both tables. 2. Intersects the column names in order to filter out columns that exist only in one table. 3. Gets a string which is the column names delimited by a comma. 4. Using the string from stage #3, creates the insert command. 5. Executes the command from stage #4.* ``` --BEGIN TRAN DECLARE @oldName NVARCHAR(50) = 'OldTableName', @newName NVARCHAR(50) = 'newTableName' DECLARE @oldDBName NVARCHAR(50) = '[OldDBName].[dbo].['+@oldName+']', @newDBName NVARCHAR(50) = '[newDBName].[dbo].['+@newName+']' /*This table variable will have columns that exist in both tables*/ DECLARE @tCommonColumns TABLE( ColumnsName NVARCHAR(max) NOT NULL ); INSERT INTO @tCommonColumns SELECT column_name --,* FROM information_schema.columns WHERE table_name = @oldName AND COLUMNPROPERTY(object_id(@oldName), column_name, 'IsIdentity') = 0 --this will make sure you omit identity columns INTERSECT SELECT column_name --, * FROM information_schema.columns WHERE table_name = @newName AND COLUMNPROPERTY(object_id(@newName), column_name,'IsIdentity') = 0--this will make sure you omit identity columns --SELECT * FROM @tCommonColumns /*Get the columns as a comma-separated string*/ DECLARE @columns NVARCHAR(max) SELECT DISTINCT @columns = STUFF((SELECT ', ' + cols.ColumnsName FROM @tCommonColumns cols FOR XML Path('')),1,1,'') FROM @tCommonColumns PRINT @columns /*Create the insert command*/ DECLARE @InserCmd NVARCHAR(max) SET @InserCmd = 'INSERT INTO '+@newDBName +' ('+@columns +') SELECT '+@columns +' FROM '+@oldDBName PRINT @InserCmd /*Execute the command*/ EXECUTE sp_executesql @InserCmd --ROLLBACK ``` Please note that this script might fail if you have [FOREIGN KEY Constraints](http://technet.microsoft.com/en-us/library/ms175464(v=sql.105).aspx) that are fulfilled in the old table but not in the new table.
**Edit:** The query was updated to omit [`Identity`](http://msdn.microsoft.com/en-us/library/ms186775.aspx) columns. **Edit 2:** query updated for supporting different databases for the tables (make sure you set the `@oldName` ,`@newName`, `@oldDBName`, `@newDBName` variables to match actual credentials).
You can "avoid writing the column names manually" in SSMS by dragging and dropping the "Columns" folder under the table in the Object Explorer over to a query window (just hold the dragged item over whitespace or the character position where you want the names to appear). All the column names will be displayed separated by commas. ![SSMS Table Columns folder](https://i.stack.imgur.com/i41dn.gif) You could also try something like this to get just the list of columns that are common between two tables (then writing the INSERT statement is trivial). ``` SELECT Substring(( SELECT ', ' + S.COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS S INNER JOIN INFORMATION_SCHEMA.COLUMNS D ON S.COLUMN_NAME = D.COLUMN_NAME WHERE S.TABLE_SCHEMA = 'dbo' AND S.TABLE_NAME = 'Source Table' AND D.TABLE_SCHEMA = 'dbo' AND D.TABLE_NAME = 'Destination Table' FOR XML PATH(''), TYPE ).value('.[1]', 'nvarchar(max)'), 3, 2147483647) ; ``` You could also create an SSIS package that simply moves all the data from one table to the other. Column names that match would automatically be linked up. Depending on your familiarity with SSIS, this could take you 2 minutes, or it could take you 2 hours.
SQL Server : query to insert data into table from another table with different struct
[ "sql", "sql-server" ]
there are so many questions here on this subject (see [here](https://stackoverflow.com/questions/7745609/sql-select-only-rows-with-max-value-on-a-column) or [here](https://stackoverflow.com/questions/24842959/mysql-performance-query) for examples), but I cannot figure out how to correctly run my particular case. Here is the [SQLFiddle](http://sqlfiddle.com/#!2/9fc8e/6) that contains table schema, records, and queries. At this moment I have a query that works, but it is very inefficient because it uses a dependent subquery: ``` SELECT id_document, version, id_plant_mov, id_production_type, is_ext, id_cost_center, id_import_kpi_code, id_plant_tag, value FROM document_production_history2 WHERE id_document = 751 AND id_production_type IN (1, 3) AND is_group_production = 0 AND (id_document, id_plant_mov, id_production_type, id_cost_center, id_import_kpi_code, id_plant_tag, is_ext, version) IN ( SELECT id_document, id_plant_mov, id_production_type, id_cost_center, id_import_kpi_code, id_plant_tag, is_ext, MAX(version) FROM document_production_history2 GROUP BY id_document, id_plant_mov, id_production_type, id_cost_center, id_import_kpi_code, id_plant_tag, is_ext); ``` I tried to rewrite the above query like this: ``` SELECT d.id_document id_doc, d.version, d.id_plant_mov, d.id_production_type id_prod_type, d.is_ext, d.id_cost_center, d.id_import_kpi_code kpi_code, d.id_plant_tag, d.value FROM document_production_history2 d JOIN ( SELECT id_document, id_plant_mov, id_production_type, is_ext, id_cost_center, id_import_kpi_code, id_plant_tag, is_group_production, MAX(version) version FROM document_production_history2 WHERE id_document = 751 AND id_production_type IN (1, 3) AND is_group_production = 0 GROUP BY id_document, id_plant_mov, id_production_type, id_cost_center, id_import_kpi_code, id_plant_tag, is_group_production ) m ON d.version = m.version AND d.id_document = m.id_document AND d.id_production_type = m.id_production_type AND d.id_plant_mov = m.id_plant_mov AND 
d.id_plant_tag = m.id_plant_tag AND d.id_cost_center = m.id_cost_center AND d.id_import_kpi_code = m.id_import_kpi_code AND d.is_ext = m.is_ext AND d.is_group_production = m.is_group_production; ``` but it returns 27 rows instead of the 10 rows expected. Thanks in advance.
Here's a `JOIN` that's equivalent to your first query. You need to take `is_group_production = 0` out of the subquery, and only do it in the main query. This filters out rows where the max version is a group production. ``` SELECT d.id_document id_doc, d.version, d.id_plant_mov, d.id_production_type id_prod_type, d.is_ext, d.id_cost_center, d.id_import_kpi_code kpi_code, d.id_plant_tag, d.value FROM document_production_history2 d JOIN ( SELECT id_document, id_plant_mov, id_production_type, is_ext, id_cost_center, id_import_kpi_code, id_plant_tag, MAX(version) version FROM document_production_history2 WHERE id_document = 751 AND id_production_type IN (1, 3) GROUP BY id_document, id_plant_mov, id_production_type, id_cost_center, id_import_kpi_code, id_plant_tag ) m ON d.version = m.version AND d.id_document = m.id_document AND d.id_production_type = m.id_production_type AND d.id_plant_mov = m.id_plant_mov AND d.id_plant_tag = m.id_plant_tag AND d.id_cost_center = m.id_cost_center AND d.id_import_kpi_code = m.id_import_kpi_code AND d.is_ext = m.is_ext WHERE d.is_group_production = 0; ``` [DEMO](http://sqlfiddle.com/#!2/9fc8e/21)
A derived table is the way to go here. Your problem in the second example was that you were getting max(version) for every row returned, including differences in is\_group\_production - this is where the extra rows crept in. So the where clause needs to stay in the outer query for that reason. Theoretically you could move the other two parts of the where clause to the inner query, but I find that quite unreadable and unintuitive. This returns 10 rows: ``` SELECT d.id_document, d.version, d.id_plant_mov, d.id_production_type, d.is_ext, d.id_cost_center, d.id_import_kpi_code, d.id_plant_tag, d.value FROM document_production_history2 d JOIN (SELECT id_document, id_plant_mov, id_production_type, id_cost_center, id_import_kpi_code, id_plant_tag, is_ext, MAX(VERSION) AS maxversion FROM document_production_history2 GROUP BY id_document, id_plant_mov, id_production_type, id_cost_center, id_import_kpi_code, id_plant_tag, is_ext) m ON d.version = m.maxversion and d.id_document = m.id_document and d.id_production_type = m.id_production_type and d.id_plant_mov = m.id_plant_mov and d.id_plant_tag = m.id_plant_tag and d.id_cost_center = m.id_cost_center and d.id_import_kpi_code = m.id_import_kpi_code and d.is_ext = m.is_ext WHERE d.id_document = 751 and d.id_production_type IN (1, 3) and d.is_group_production = 0 ``` And as far as I can see it performs pretty well. Derived tables aren't ideal solutions, but they're an order of magnitude better than sub-queries, as MySQL doesn't have to execute them for every row of the result-set.
MySQL select max value without subquery
[ "mysql", "sql", "performance", "greatest-n-per-group" ]
I would just like to copy a specific `LOAD_NO` from a table called `LOADS` and insert it back into the same `LOADS` table, but without using the old `LOAD_NO` (aka `ID`).... The table's name is `LOADS`. 1. I would like to copy a specific row and insert it into the table without needing to specify all of the fields (because the table has a lot of fields) 2. I need to exclude the field named: `LOAD_NO` 3. Then delete the old row I tried something like this: I cloned the `LOADS` table without cloning the data, named that table `LOADS_TempTable`, and set the `LOAD_NO` identity seed to start at 11600 ``` SELECT * INTO LOADS_TempTable FROM LOADS WHERE LOAD_NO = 200; ALTER TABLE LOADS_TempTable DROP COLUMN LOAD_NO; INSERT INTO LOADS SELECT * FROM LOADS_TempTable; DELETE FROM LOADS WHERE LOAD_NO = 200; ``` Here is how the column `LOAD_NO` in the table `LOADS` looks: ![enter image description here](https://i.stack.imgur.com/tTJ29.png) The error I get says the following: > Msg 8101, Level 16, State 1, Line 3 > An explicit value for the identity column in table 'LOADS' can only be specified when a column list is used and IDENTITY\_INSERT is ON. I tried to add: ``` set identity_insert LOADS OFF GO ``` and also ``` set identity_insert LOADS ON GO ``` I always get the same error. THANKS FOR ANY HELP :)
Since `LOAD_NO` is an **identity** column, you should not be specifying a value for it when inserting. So you need to modify your `INSERT` statement to **not** include the `LOAD_NO` column - something like this: ``` INSERT INTO LOADS(col1, col2, ..., colN) -- all columns *EXCEPT* LOAD_NO SELECT col1, col2, ...., colN -- all columns *EXCEPT* LOAD_NO FROM LOADS_TempTable; ``` SQL Server will automatically assign a new `LOAD_NO` to the row you're inserting - after all, that's the *job* of the **identity** column!
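As a tiny illustration (the column names are made up here, since the real `LOADS` schema isn't shown), the identity column is simply left out of both lists and SQL Server fills it in: ``` INSERT INTO LOADS (CARRIER, WEIGHT) /* no LOAD_NO in either list */ SELECT CARRIER, WEIGHT FROM LOADS_TempTable; SELECT SCOPE_IDENTITY() AS new_load_no; /* identity value generated for the last inserted row */ ``` `SCOPE_IDENTITY()` is handy when the next statement (such as the `DELETE` of the old row) needs to know which `LOAD_NO` was just assigned.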
`IDENTITY_INSERT` only affects `INSERT` statements (you cannot `UPDATE` an identity column), so copy the row with an explicit new value and then remove the old one: ``` set identity_insert LOADS ON INSERT INTO LOADS (LOAD_NO, col1, col2 /* ...all the other columns */) SELECT (select max(load_no) + 1 from loads), col1, col2 FROM LOADS WHERE load_no = 200 set identity_insert LOADS OFF DELETE FROM LOADS WHERE load_no = 200 ``` You probably also want to wrap that in a transaction.
SQL Server : copy row and Insert into to the same table but with a different ID
[ "sql", "sql-server" ]
I have a list of data that looks like this: ``` Name Date Weight Person 1 01/01/2014 89KG Person 2 01/01/2014 62KG Person 1 07/01/2014 88KG Person 2 07/01/2014 62KG Person 1 21/01/2014 85KG Person 2 21/01/2014 63KG ``` What I would like to do is select only the records with a distinct name and are the latest dates in a given month. So for this example I would like to only select the person 1 and person 2 records for 21/01/2014 (as this is the latest date). I'm using SQL 2008.
Please see if this works for you. **Sample Data:** ``` IF OBJECT_ID(N'tempdb..#TEMP') IS NOT NULL BEGIN DROP TABLE #TEMP END CREATE TABLE #TEMP(Name VARCHAR(20), WDate VARCHAR(20), Weight VARCHAR(20)) INSERT INTO #TEMP VALUES ('Person 1', '01/01/2014', '89KG'), ('Person 2', '01/01/2014', '62KG'), ('Person 1', '07/01/2014', '88KG'), ('Person 1', '07/01/2014', '88KG'), ('Person 2', '07/02/2014', '62KG'), ('Person 1', '21/01/2014', '85KG'), ('Person 2', '21/01/2014', '63KG'); ``` **Script:** ``` ;WITH cte_DateFormat AS ( SELECT Name, CONVERT(DATE, WDate, 103) AS WDate, Weight FROM #TEMP ) , cte_Rank AS ( SELECT ROW_NUMBER() OVER (PARTITION BY Name, CAST(YEAR(WDate) AS VARCHAR(4)) + CAST(MONTH(WDate) AS VARCHAR(2)) ORDER BY WDate DESC) AS ID, Name, WDate, Weight FROM cte_DateFormat ) SELECT Name, WDate, Weight FROM cte_Rank WHERE ID = 1 ``` **Cleanup Script:** ``` IF OBJECT_ID(N'tempdb..#TEMP') IS NOT NULL BEGIN DROP TABLE #TEMP END ```
Please try using `DENSE_RANK`: ``` select * From ( select *, DENSE_RANK() over(PARTITION BY YEAR([Date]), MONTH([Date]) ORDER BY [Date] desc) Rnk From tbl )x where Rnk=1 ```
Select 1 Distinct Record per month by latest date
[ "sql", "sql-server" ]
I'd like to check the status of the snapshot agent after I start it using this statement ``` EXEC sp_startpublication_snapshot @publication ``` because I want to perform a next step that requires the job to have already started.
After some research I found a workaround: ``` SELECT snapshot_ready FROM sysmergepublications ``` This query returns 0 if the snapshot is not ready and 1 if it has been generated. Thanks all for your contribution :)
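Building on that, a simple polling loop (the publication name here is hypothetical) can wait until the snapshot is ready before running the dependent step: ``` DECLARE @ready INT = 0; WHILE @ready = 0 BEGIN SELECT @ready = snapshot_ready FROM sysmergepublications WHERE name = 'MyPublication'; /* hypothetical publication name */ IF @ready = 0 WAITFOR DELAY '00:00:10'; /* poll every 10 seconds */ END /* snapshot_ready = 1 here, so it is safe to continue */ ```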
I do not believe there is a built-in replication stored procedure to check the snapshot agent status, I could be wrong. However, you could query MSsnapshot\_history. Something like this should do the trick: ``` SELECT agent_id, runstatus, start_time, time, duration, comments, delivered_transactions, delivered_commands, delivery_rate, error_id, timestamp FROM dbo.MSsnapshot_history WHERE comments = 'Starting agent.' ``` Likewise, you can check when the snapshot agent is finished: ``` SELECT agent_id, runstatus, start_time, time, duration, comments, delivered_transactions, delivered_commands, delivery_rate, error_id, timestamp FROM dbo.MSsnapshot_history WHERE comments = '[100%] A snapshot of 68 article(s) was generated.' ``` Alternatively, you could the status of the Snapshot Agent job using sp\_help\_job.
How to check replication snapshot agent status?
[ "sql", "sql-server", "sql-server-2008-r2", "database-replication", "merge-replication" ]
I have 3 different tables as shown below: ``` | DataEntry: | | OOW: | | ContractExpired: | | +----+-----------------------| | +----+-------------| | +----+-------------| | Country | valid | invalid | | Country | Warranty | | Country | Expired | | AU | 1 | 2 | | AU | 1 | | AU | 1 | | CN | 22 | 3 | | CN | 1 | | CN | 1 | | JP | 1 | 1 | | +----+-------------| | +----+-------------| | KR | 1 | 1 | | SG | 1 | 1 | | +----+-----------------------| ``` The query I have written looks like this: ``` SELECT Country, (SELECT SUM(CASE WHEN CallType = 'valid' THEN 1 ELSE Null END) AS [valid] FROM DataEntry AS DataEntry_1) AS [valid], (SELECT SUM(CASE WHEN CallType = 'Invalid' THEN 1 ELSE Null END) AS [Invalid] FROM DataEntry AS DataEntry_2) AS [Invalid], (SELECT COUNT(CallType) AS [Warranty] FROM OOW) AS [Warranty], (SELECT COUNT(CallType) AS [Expired] FROM ContractExpired) AS [Expired] ``` The result returned is shown below: ``` | valid | invalid | warranty | Expired | | 26 | 8 | 2 | 2 | ``` I wanted to change it to group by country: (SAMPLE) ``` | Country | valid | invalid | warranty | Expired | | AU | 1 | 2 | 1 | 1 | | CN | 22 | 3 | 1 | 1 | | JP | 1 | 1 | null | null | | KR | 1 | 1 | null | null | | SG | 1 | 1 | null | null | ``` I tried GROUP BY but the query returns the wrong result: ``` SELECT Country, (SELECT SUM(CASE WHEN CallType = 'valid' THEN 1 ELSE Null END) AS [valid] FROM DataEntry AS DataEntry_1) AS [valid], (SELECT SUM(CASE WHEN CallType = 'Invalid' THEN 1 ELSE Null END) AS [Invalid] FROM DataEntry AS DataEntry_2) AS [Invalid], (SELECT COUNT(CallType) AS [Warranty] FROM OOW) AS [Warranty], (SELECT COUNT(CallType) AS [Expired] FROM ContractExpired) AS [Expired] FROM DataEntry AS DataEntry_3 Group by Country ``` Result: ``` | Country | valid | invalid | warranty | Expired | | AU | 26 | 8 | 2 | 2 | | CN | 26 | 8 | 2 | 2 | | JP | 26 | 8 | 2 | 2 | | KR | 26 | 8 | 2 | 2 | | SG | 26 | 8 | 2 | 2 | ``` Please provide some guidance to solve this.
If you really want to do this without JOINs, you can, and it is quite simple. You have to use SUBSELECTs, and filter them with a WHERE clause: ``` SELECT de.Country, SUM(CASE WHEN de.CallType = 'valid' THEN 1 ELSE Null END) AS [valid], SUM(CASE WHEN de.CallType = 'Invalid' THEN 1 ELSE Null END) AS [Invalid], (SELECT COUNT(CallType) FROM OOW o WHERE o.Country=de.Country) AS [Warranty], (SELECT COUNT(CallType) FROM ContractExpired ce WHERE ce.Country=de.Country) AS [Expired] FROM DataEntry de GROUP BY de.Country ``` There is an [SQLFiddle here](http://sqlfiddle.com/#!3/d93ba2/1). This SQL assumes you have the following table structure, which matches your SQL, but not the structure you gave us: ``` CREATE TABLE DataEntry( Country CHAR, Calltype CHAR); CREATE TABLE OOW( Country CHAR, Calltype CHAR); CREATE TABLE ContractExpired( Country CHAR, Calltype CHAR); ``` Be sure to also have indexes on Country on all three tables. But this is good advice for any answer posted here, with JOINs or SUBSELECTs.
shouldn't your query be something like below. This may not be the exact query but will give you an idea as to what needs to be done to get it working. ``` SELECT de.Country, SUM(CASE WHEN de.CallType = 'valid' THEN 1 ELSE Null END) AS [valid], SUM(CASE WHEN de.CallType = 'Invalid' THEN 1 ELSE Null END) AS [Invalid], MAX(tab.[Warranty]) AS [Warranty], MAX(tab1.[Expired]) AS [Expired] FROM DataEntry de join ( select country, COUNT(CallType) AS [Warranty] FROM OOW group by country ) tab on de.country = tab.country join ( SELECT country, COUNT(CallType) AS [Expired] FROM ContractExpired group by country ) tab1 on de.country = tab1.country Group by de.Country ```
Combine 3 tables without using join but need group by
[ "sql" ]