Columns: Prompt (string, 10–31k chars), Chosen (string, 3–29.4k chars), Rejected (string, 3–51.1k chars), Title (string, 9–150 chars), Tags (list, 3–7 items)
My problem is this: I have a table "users": ``` id_us | name ------------- 1 | bob 2 | ken 3 | jones ``` a table for pets, "pets": ``` id_pet | pet ------------- 1 | dog 2 | cat 3 | fish ``` and a table "user\_pets" storing the user-pet relation: ``` id | id_us | pet ------------------- 1 | 1 | 1 --------> (dog) 2 | 1 | 2 --------> (cat) 3 | 2 | 1 --------> (dog) 4 | 3 | 3 --------> (fish) 5 | 3 | 2 --------> (cat) ``` I have been trying to create a query that gives me all the users that do not have dogs. The problem with my query is that a user like "Bob", who has two pets where one of them is not a dog, is returned in the result even though he is a dog owner. Query: ``` SELECT usuario.name FROM usuario JOIN user_pets ON usuario.id_us = user_pets.id_us WHERE user_pets.pet != 1 GROUP BY usuario.name ```
You might consider using `EXISTS`. The inner query brings back all the dogs for each user; if `NOT EXISTS` finds no such row, it means that user (Bob, for example) doesn't have a dog. ``` SELECT * FROM users u WHERE NOT EXISTS ( SELECT * FROM user_pets up WHERE up.id_pet = 1 AND up.id_us = u.id_us ) ```
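To make the anti-join concrete, here is a quick sketch run against an in-memory SQLite database seeded with the sample data from the question (table and column names follow the question; SQLite stands in for MySQL here):

```python
# Hypothetical demo: the NOT EXISTS anti-join against the question's
# sample data, using SQLite as a stand-in engine (pet 1 = dog).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id_us INTEGER, name TEXT);
CREATE TABLE user_pets (id INTEGER, id_us INTEGER, id_pet INTEGER);
INSERT INTO users VALUES (1, 'bob'), (2, 'ken'), (3, 'jones');
-- pets: 1 = dog, 2 = cat, 3 = fish
INSERT INTO user_pets VALUES (1, 1, 1), (2, 1, 2), (3, 2, 1), (4, 3, 3), (5, 3, 2);
""")

no_dog = conn.execute("""
    SELECT u.name
    FROM users u
    WHERE NOT EXISTS (
        SELECT 1 FROM user_pets up
        WHERE up.id_pet = 1 AND up.id_us = u.id_us
    )
""").fetchall()
print(no_dog)  # only jones owns no dog
```

Bob and ken each own a dog, so only jones comes back.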
``` SELECT usuario.name FROM usuario LEFT JOIN user_pets ON usuario.id_us = user_pets.id_us AND user_pets.pet = 1 WHERE user_pets.id IS NULL ``` OR ``` SELECT usuario.name FROM usuario WHERE NOT EXISTS (SELECT id FROM user_pets WHERE user_pets.pet = 1 AND user_pets.id_us = usuario.id_us) ``` OR ``` SELECT usuario.name FROM usuario WHERE usuario.id_us NOT IN (SELECT id_us FROM user_pets WHERE user_pets.pet = 1) ```
How to query a relationship table in SQL
[ "mysql", "sql" ]
I've got a SQL table where I want to find the first and last dates of a group of records, providing they're sequential. ``` Patient | TestType | Result | Date ------------------------------------------ 1 | 1 | A | 2012-03-04 1 | 1 | A | 2012-08-19 1 | 1 | B | 2013-05-27 1 | 1 | A | 2013-06-20 1 | 2 | X | 2012-08-19 1 | 2 | X | 2013-06-20 2 | 1 | B | 2014-09-09 2 | 1 | B | 2015-04-19 ``` Should be returned as ``` Patient | TestType | Result | StartDate | EndDate -------------------------------------------------------- 1 | 1 | A | 2012-03-04 | 2012-08-19 1 | 1 | B | 2013-05-27 | 2013-05-27 1 | 1 | A | 2013-06-20 | 2013-06-20 1 | 2 | X | 2012-08-19 | 2013-06-20 2 | 1 | B | 2014-09-09 | 2015-04-19 ``` The problem is that if I just group by `Patient`, `TestType`, and `Result`, then the first and third rows in the example above would become a single row. ``` Patient | TestType | Result | StartDate | EndDate -------------------------------------------------------- 1 | 1 | A | 2012-03-04 | 2013-06-20 1 | 1 | B | 2013-05-27 | 2013-05-27 1 | 2 | X | 2012-08-19 | 2013-06-20 2 | 1 | B | 2014-09-09 | 2015-04-19 ``` I feel like there's got to be something clever I can do with a partition, but I can't quite figure out what it is.
There are several ways to approach this. I like identifying the groups using the difference of row number values: ``` select patient, testtype, result, min(date) as startdate, max(date) as enddate from (select t.*, (row_number() over (partition by patient, testtype order by date) - row_number() over (partition by patient, testtype, result order by date) ) as grp from mytable t ) t group by patient, testtype, result, grp order by patient, startdate; ```
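As a sanity check, the row-number-difference trick can be run against the question's sample rows in SQLite (window functions need SQLite 3.25+; the table name `tests` is made up):

```python
# Gaps-and-islands check: difference of two ROW_NUMBER sequences is
# constant within each run of equal Result values.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tests (Patient INT, TestType INT, Result TEXT, Date TEXT);
INSERT INTO tests VALUES
 (1,1,'A','2012-03-04'),(1,1,'A','2012-08-19'),
 (1,1,'B','2013-05-27'),(1,1,'A','2013-06-20'),
 (1,2,'X','2012-08-19'),(1,2,'X','2013-06-20'),
 (2,1,'B','2014-09-09'),(2,1,'B','2015-04-19');
""")

rows = conn.execute("""
SELECT Patient, TestType, Result, MIN(Date) AS StartDate, MAX(Date) AS EndDate
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY Patient, TestType ORDER BY Date)
           - ROW_NUMBER() OVER (PARTITION BY Patient, TestType, Result ORDER BY Date) AS grp
      FROM tests t) t
GROUP BY Patient, TestType, Result, grp
ORDER BY Patient, StartDate
""").fetchall()
print(rows)
```

This yields the five runs from the desired output, with the two separate `A` runs kept apart.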
``` select patient, testtype, result, date as startdate, isnull(lead(date) over(partition by patient, testtype, result order by date), date) as enddate from tablename; ``` You can use the `lead` function to get the date value (as enddate) from the next row in each group. [SQL Fiddle](http://www.sqlfiddle.com/#!6/55a9f/1) with sample data.
Aggregate/Windowed Function To Find Min and Max of Sequential Rows
[ "sql", "sql-server", "sql-server-2012" ]
I've come across ``` create table new_table as select * from old_table where 1=2; ``` What is the meaning of the `where 1=2` condition and what function does it perform?
`1=2` always evaluates to false. This is a common trick: it uses `create table ... as select` to copy a table's structure without copying any of its rows (as none of them will pass the `1=2` test).
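The same idiom works in SQLite, so it is easy to illustrate (table and column names here are made up for the demo):

```python
# WHERE 1=2 copies the column layout of old_table but none of its rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE old_table (id INTEGER, name TEXT);
INSERT INTO old_table VALUES (1, 'a'), (2, 'b');
CREATE TABLE new_table AS SELECT * FROM old_table WHERE 1=2;
""")

cols = [r[1] for r in conn.execute("PRAGMA table_info(new_table)")]
count = conn.execute("SELECT COUNT(*) FROM new_table").fetchone()[0]
print(cols, count)  # same columns, zero rows
```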
It just copies the structure of the `table`, not the `data` present in it. `1=0` or `1=2` always returns false in a where clause, so the `select` query returns no `rows`; it's simply a trick to create a `table` like another `table` in your `schema`.
What is the meaning of 1=2 when copying database structure?
[ "sql", "oracle", "ddl", "create-table" ]
I know the question is a duplicate, but I am not able to find a solution to my problem below: ``` SELECT CAST(747448809352908.49434500000 AS Decimal(25,12)) AS [Value] ``` This is the value I am getting from one of my calculations. I am trying to insert into a Table with Value column designed as Decimal(25,12). With a value as above I am getting an overflow error. Please assist. The problem being I cannot update the settings of the column. Therefore I need to fit into the the column. I cannot can I??
The total length of your value does not match your data type: you declared `Decimal(25,12)`, but the value needs a precision of more than 25, actually 27. ``` SELECT CAST('747448809352908.49434500000' AS Decimal(27,12)) AS [Value] ``` In SQL Server, decimal and numeric are numeric data types that have fixed precision and scale. A decimal(5,2) variable can only hold three (5-2) digits before the decimal point and two digits after it. "1234" has four digits before the decimal point, so it exceeds the precision and the error occurs. The same issue happens if you assign the integer value 1234 to a decimal(5,2). [decimal and numeric](https://msdn.microsoft.com/en-IN/library/ms187746.aspx)
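The digit arithmetic behind the overflow can be checked quickly; a small Python sketch using the value from the question:

```python
# decimal(p, s) allows at most p - s digits before the decimal point.
from decimal import Decimal

value = Decimal("747448809352908.49434500000")
int_digits = len(str(abs(int(value))))  # digits before the point
scale = 12                              # digits kept after the point
min_precision = int_digits + scale      # smallest precision that fits at scale 12
allowed_with_25 = 25 - scale            # integer digits that fit in Decimal(25,12)
print(int_digits, min_precision, allowed_with_25)
```

The value has 15 integer digits, so it needs precision 27 at scale 12, while `Decimal(25,12)` leaves room for only 13 integer digits, hence the overflow.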
tinka's answer is correct, but here is the explanation: the length of `747448809352908.49434500000` is 27 digits, and the `DECIMAL` precision must cover the maximum length of the number, hence the error when you specify 25. The 12 is the number of digits after the decimal point; `49434500000` is 11 digits.
Arithmetic overflow error converting numeric to data type numeric for Decimal(25,12)
[ "sql", "sql-server", "sql-server-2008", "sql-server-2012" ]
I have a table called `ranks` which stores a 1-7 value and connects it with an item and user id. An item's rating is the average of all ratings it has been given. Ranks table: `id | userid | value | itemid.` To sort by average highest rating, I do (from an answer here): ``` select avg(value), itemid from ranks group by itemid order by avg(value) desc ``` However, I also want to be able to filter items with a certain rating, not just highest rated -> lowest rated. For example, items with an average rating between 5 and 6. ``` select avg(value), itemid from ranks having avg(value) between 5 and 6 group by itemid order by avg(value) desc ``` This gives me a nonspecific error at `group by`. Can anyone point me in the right direction?
Your `having` clause needs to come after your `group by` clause, so just reverse the order they appear in: ``` select avg(value), itemid from ranks group by itemid having avg(value) between 5 and 6 order by avg(value) desc ```
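The corrected clause order can be verified against SQLite with toy ratings (the data below is made up; the schema follows the question):

```python
# HAVING must follow GROUP BY; here item 20 alone has an average in [5, 6].
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ranks (id INT, userid INT, value INT, itemid INT);
INSERT INTO ranks VALUES
 (1,1,7,10),(2,2,7,10),   -- item 10: avg 7
 (3,1,5,20),(4,2,6,20),   -- item 20: avg 5.5
 (5,1,1,30),(6,2,2,30);   -- item 30: avg 1.5
""")

rows = conn.execute("""
    SELECT AVG(value), itemid FROM ranks
    GROUP BY itemid
    HAVING AVG(value) BETWEEN 5 AND 6
    ORDER BY AVG(value) DESC
""").fetchall()
print(rows)  # only item 20 survives the HAVING filter
```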
``` select * from ( select avg(value) as avgval, itemid from ranks group by itemid) t where avgval between 5 and 6; ``` You can do it with a sub-query.
Sort by average rating between 2 values
[ "mysql", "sql" ]
I have an attendance SQL table that stores each day's start and end punch for an employee. Each punch (punch in and punch out) is a separate record. I want to calculate the total working hours of each employee for a requested month. I tried to write a scalar function that takes two dates and an employee ID and returns that calculation, but it calculates only the difference of one date across all dates. The data looks like this: ``` 000781 2015-08-14 08:37:00 AM EMPIN 539309898 000781 2015-08-14 08:09:48 PM EMPOUT 539309886 ``` My code is: ``` @FromDate NVARCHAR(10) ,@ToDate NVARCHAR(10) ,@EmpID NVARCHAR(6) CONVERT(NVARCHAR,DATEDIFF(HOUR ,(SELECT Time from PERS_Attendance att where attt.date between convert(date,@fromDate) AND CONVERT(Date,@toDate) AND (EmpID= @EmpID OR ISNULL(@EmpID, '') = '') AND Funckey = 'EMPIN') ,(SELECT Time from PERS_Attendance att where attt.date between convert(date,@fromDate) AND CONVERT(Date,@toDate) AND (EmpID= @EmpID OR ISNULL(@EmpID, '') = '') AND Funckey = 'EMPOUT') )) FROM PERS_Attendance attt ```
One more approach that I think is simple and efficient. * It doesn't require modern functions like `LEAD` * it works correctly if the same person goes in and out several times during the same day * it works correctly if the person stays in over midnight, or even for several days in a row * it works correctly if a period when the person is "in" overlaps the start OR end date-time. * **it does assume that the data is correct**, i.e. each "in" is matched by an "out", except possibly the last one. Here is an illustration of a time-line. Note that the `start` time falls while a person was "in", and the `end` time also falls while a person was still "in". All we need to do is calculate a plain sum of time differences between each event (both `in` and `out`) and the `start` time, then do the same for the `end` time. If the event is `in`, the added duration gets a positive sign; if the event is `out`, a negative sign. The final result is the difference between the sum for the end time and the sum for the start time. ``` summing for start: |---| + |----------| - |-----------------| + |--------------------------| - |-------------------------------| + --|====|--------|======|------|===|=====|---|==|---|===|====|----|=====|--- time in out in out in start out in out in end out in out summing for end: |---| + |-------| - |----------| + |--------------| - |------------------------| + |-------------------------------| - |--------------------------------------| + |-----------------------------------------------| - |----------------------------------------------------| + ``` I would recommend calculating durations in minutes and then dividing the result by 60 to get hours, but it really depends on your requirements. By the way, it is a bad idea to store dates as `NVARCHAR`.
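The signed-sum idea above can be sketched in a few lines of plain Python, with times expressed as minutes (the numbers below are invented for the demo):

```python
# Each event is (minute, sign): +1 for "in", -1 for "out".
def in_time(events, start, end):
    # Sum sign * (ref - t) over events before ref, for ref = start and
    # ref = end; the difference is total time spent "in" within [start, end].
    s_start = sum(sign * (start - t) for t, sign in events if t < start)
    s_end = sum(sign * (end - t) for t, sign in events if t < end)
    return s_end - s_start

# in 10..20 and 30..50 -> 30 minutes inside the [0, 60] window
total_a = in_time([(10, +1), (20, -1), (30, +1), (50, -1)], 0, 60)
# in from -10..5 (overlapping the window start) and 10..20 -> 5 + 10 = 15
total_b = in_time([(-10, +1), (5, -1), (10, +1), (20, -1)], 0, 60)
print(total_a, total_b)
```

The constant reference times cancel out for each matched in/out pair, which is exactly why the trick works.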
``` DECLARE @StartDate datetime = '2015-08-01 00:00:00'; DECLARE @EndDate datetime = '2015-09-01 00:00:00'; DECLARE @EmpID nvarchar(6) = NULL; WITH CTE_Start AS ( SELECT EmpID ,SUM(DATEDIFF(minute, (CAST(att.[date] AS datetime) + att.[Time]), @StartDate) * CASE WHEN Funckey = 'EMPIN' THEN +1 ELSE -1 END) AS SumStart FROM PERS_Attendance AS att WHERE (EmpID = @EmpID OR @EmpID IS NULL) AND att.[date] < @StartDate GROUP BY EmpID ) ,CTE_End AS ( SELECT EmpID ,SUM(DATEDIFF(minute, (CAST(att.[date] AS datetime) + att.[Time]), @EndDate) * CASE WHEN Funckey = 'EMPIN' THEN +1 ELSE -1 END) AS SumEnd FROM PERS_Attendance AS att WHERE (EmpID = @EmpID OR @EmpID IS NULL) AND att.[date] < @EndDate GROUP BY EmpID ) SELECT CTE_End.EmpID ,(SumEnd - ISNULL(SumStart, 0)) / 60.0 AS SumHours FROM CTE_End LEFT JOIN CTE_Start ON CTE_Start.EmpID = CTE_End.EmpID OPTION(RECOMPILE); ``` There is a `LEFT JOIN` between the sums for end and start times, because there can be an `EmpID` that has no records before the start time. `OPTION(RECOMPILE)` is useful when you use [Dynamic Search Conditions in T-SQL](http://www.sommarskog.se/dyn-search-2008.html). If `@EmpID` is `NULL`, you'll get results for all people; if it is not `NULL`, you'll get the result for just one person. If you need just one number (a grand total) for all people, then wrap the calculation in the last `SELECT` in `SUM()`. If you always want a grand total for all people, then remove the `@EmpID` parameter altogether. It would be a good idea to have an index on `(EmpID,date)`.
My approach would be as follows: ``` CREATE FUNCTION [dbo].[MonthlyHoursByEmpID] ( @StartDate Date, @EndDate Date, @Employee NVARCHAR(6) ) RETURNS FLOAT AS BEGIN DECLARE @TotalHours FLOAT DECLARE @In TABLE ([Date] Date, [Time] Time) DECLARE @Out TABLE ([Date] Date, [Time] Time) INSERT INTO @In([Date], [Time]) SELECT [Date], [Time] FROM PERS_Attendance WHERE [EmpID] = @Employee AND [Funckey] = 'EMPIN' AND ([Date] > @StartDate AND [Date] < @EndDate) INSERT INTO @Out([Date], [Time]) SELECT [Date], [Time] FROM PERS_Attendance WHERE [EmpID] = @Employee AND [Funckey] = 'EMPOUT' AND ([Date] > @StartDate AND [Date] < @EndDate) SET @TotalHours = (SELECT SUM(CONVERT([float],datediff(minute,I.[Time], O.[Time]))/(60)) FROM @in I INNER JOIN @Out O ON I.[Date] = O.[Date]) RETURN @TotalHours END ```
Get total working hours from SQL table
[ "sql", "sql-server", "t-sql" ]
I have this query, ``` SELECT DISTINCT username, publishedapp FROM tbl_name WHERE username in ( SELECT username FROM tbl_name WHERE publishedapp = "Mencap@Work2-1" ) AND publishedapp NOT LIKE "%Desktop%" AND publishedapp NOT LIKE "%RDP%" GROUP BY username HAVING COUNT(DISTINCT publishedapp) > 1 ``` which works perfectly on one database/table but not on another. The DBs have the same type of content; the one that doesn't work just has more rows, as it has been gathering information for a longer time. On the newer table I'm getting 0 rows as the result of the query. The table definitely has content, as I've seen the rows and columns in phpMyAdmin, and I have also run this query ``` SELECT username, publishedapp FROM tbl_name ORDER BY username ``` The original table [![enter image description here](https://i.stack.imgur.com/0Ll6z.png)](https://i.stack.imgur.com/0Ll6z.png) The new table [![enter image description here](https://i.stack.imgur.com/veoA5.png)](https://i.stack.imgur.com/veoA5.png) I'm expecting something like this coming out [![enter image description here](https://i.stack.imgur.com/VYcYF.png)](https://i.stack.imgur.com/VYcYF.png) Any ideas what this could be?
What you do to troubleshoot something like this is break the query down into pieces until you find the source of the problem. Sometimes you may need to use select \* or specify columns in the join or the where clause that you are not returning in the final product to see what is going on. I would do something like the following, running each one until you see where the records fall out: ``` SELECT username FROM tbl_name WHERE publishedapp = "Mencap@Work2-1" SELECT * FROM tbl_name WHERE username in ( SELECT username FROM tbl_name WHERE publishedapp = "Mencap@Work2-1" ) AND publishedapp NOT LIKE "%Desktop%" SELECT * FROM tbl_name WHERE username in ( SELECT username FROM tbl_name WHERE publishedapp = "Mencap@Work2-1" ) AND publishedapp NOT LIKE "%Desktop%" AND publishedapp NOT LIKE "%RDP%" SELECT username, publishedapp, COUNT(DISTINCT publishedapp) FROM tbl_name WHERE username in ( SELECT username FROM tbl_name WHERE publishedapp = "Mencap@Work2-1" ) AND publishedapp NOT LIKE "%Desktop%" AND publishedapp NOT LIKE "%RDP%" GROUP BY username, publishedapp ``` Note that I correctly used group by. You should never, under any circumstances, use group by the way you did; your results can be incorrect. You should use group by the way it was intended to work, and the way every other database works, by including in the select all columns that are not part of the aggregate.
Try simplifying the query. If I understand correctly: ``` SELECT username, group_concat(publishedapp) FROM tbl_name WHERE publishedapp NOT LIKE '%Desktop%' AND publishedapp NOT LIKE '%RDP%' GROUP BY username HAVING COUNT(DISTINCT publishedapp) > 1 AND SUM(publishedapp = 'Mencap@Work2-1') > 0; ``` There is also the possibility that although names *look* similar they are not. For instance, there could be trailing spaces at the end of an app name. Does this return what you expect? ``` select * from tbl_name where publishedapp = 'Mencap@Work2-1' ```
Query not working
[ "mysql", "sql" ]
I am unsure how to get SQL to do what I am trying to figure out. I have one student, and depending on their grade level, would be in one of three schools (schoolid). I need to return only the StudentID and SchoolID of the school the student is currently in, based on the grade level. How do I search in this way please? [![enter image description here](https://i.stack.imgur.com/G7whk.png)](https://i.stack.imgur.com/G7whk.png) Desired Result (the student is in grade 12, therefore the SchoolID is 500): [![enter image description here](https://i.stack.imgur.com/f8wtD.png)](https://i.stack.imgur.com/f8wtD.png) [SQL Fiddle](http://sqlfiddle.com/#!6/caa7d/2/0) Thank you.
``` where GradesOfferedInSchool like '%' + cast(StudentCurrentGrade as varchar(max)) + '%' ``` But that is a terrible data design Should not have multiple values in one column This is still just a hack as this would find '12b' If this is a view then look for normalized data better but still a hack ``` where GradesOfferedInSchool like '% ' + cast(StudentCurrentGrade as varchar(max)) + ', %' or GradesOfferedInSchool like cast(StudentCurrentGrade as varchar(max)) + ', %' or GradesOfferedInSchool like '%, ' + cast(StudentCurrentGrade as varchar(max)) or GradesOfferedInSchool = cast(StudentCurrentGrade as varchar(max)) ```
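The '12' vs '12b' false positive, and the delimiter-aware workaround, can be demonstrated against SQLite (the rows below are invented; column names follow the question):

```python
# Naive '%12%' matches '12b' too; padding with the ', ' delimiter does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE StudentFeederPatterns (
    StudentID INT, SchoolID INT, GradesOfferedInSchool TEXT, StudentCurrentGrade INT);
INSERT INTO StudentFeederPatterns VALUES
 (1, 500, '9, 10, 11, 12', 12),   -- grade 12 really offered
 (1, 400, '11, 12b, 13', 12);     -- '12b' only; naive '%12%' still matches
""")

naive = conn.execute("""
    SELECT COUNT(*) FROM StudentFeederPatterns
    WHERE GradesOfferedInSchool LIKE '%' || StudentCurrentGrade || '%'
""").fetchone()[0]

strict = conn.execute("""
    SELECT StudentID, SchoolID FROM StudentFeederPatterns
    WHERE ', ' || GradesOfferedInSchool || ',' LIKE '%, ' || StudentCurrentGrade || ',%'
""").fetchall()
print(naive, strict)
```

Wrapping both sides with the delimiter avoids the four-branch `OR` by making every grade, including the first and last, delimiter-bounded.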
This works for me using Microsoft SQL Server. The `CAST` is needed otherwise it was throwing an error due to the different datatypes of the fields. ``` SELECT StudentID, SchoolID FROM dbo.StudentFeederPatterns WHERE GradesOfferedInSchool LIKE '%' + CAST(StudentCurrentGrade AS VARCHAR) + '%' ``` [SQL Fiddle](http://sqlfiddle.com/#!6/caa7d/19/0 "SQL Fiddle")
TSQL - How to search a string in multiple rows?
[ "sql", "sql-server", "t-sql" ]
I have the following table of transport records for a bus company: ``` CREATE TABLE ride_txn( passenger_no int(11) pk, txn_time timestamp, action varchar(10) ) ``` where the action could be "Board" or "Deboard". Say I have 2 rows where for passenger\_no. 100, he boarded at 1.30pm and alighted at 4.30pm. ``` passenger_no txn_time action 100 13:30:00 Board 100 16:30:00 Deboard ``` Can I write an sql query to retrieve the hours that he is in the bus? I do the count at the beginning of each hour so he was in the bus at 2pm, 3pm and 4pm. In other words, I am trying to get something like ``` passenger_no hour_in_bus 100 2 100 3 100 4 ```
Here's a start: ``` select passenger_no, hr from ride_txn rt, ( select 0 hr union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 union all select 10 union all select 11 union all select 12 union all select 13 union all select 14 union all select 15 union all select 16 union all select 17 union all select 18 union all select 19 union all select 20 union all select 21 union all select 22 union all select 23 ) hrs where action = 'Board' and hrs.hr between hour(txn_time) /* could add 3599 seconds to only count top of the hour */ and ( select min(txt_time) from ride_txn rt2 where rt2.passenger_no = rt.passenger_no and rt2.txt_time > rt.txt_time and action = 'Deboard' ) ``` I had to assume that the events will pair off correctly and also that the "deboard" time will be greater than the "board" time. So nothing spans midnight and it takes place within a single day. It would handle multiple pairs within the day though. I'm not sure if `hour()` is actually a MySQL function but I'm sure you can find the equivalent one. I'm also assuming it returns a number from 0 to 23.
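The "counted at the top of each hour" rule itself is easy to pin down in plain Python before wiring it into SQL; a sketch with the question's 13:30 to 16:30 example (24-hour form, where the question's 2 pm, 3 pm, 4 pm are 14, 15, 16):

```python
# A passenger is counted at every top-of-hour strictly after boarding
# and at-or-before deboarding (edge semantics are an assumption here).
board, deboard = (13, 30), (16, 30)   # (hour, minute)

hours = [h for h in range(24)
         if board < (h, 0) <= deboard]
print(hours)  # the top-of-hour marks spent on the bus
```

Tuple comparison keeps the minute component in play, so boarding at exactly 14:00 would not count hour 14 under this rule; adjust the strictness of the bounds if your counting convention differs.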
``` select id,y.tm from ( select id, max(case when action = 'Deboard' then hour(dt) end) as d_time, max(case when action = 'Board' then hour(dt) end) as b_time from tablename group by id) x join ( select distinct hour(dt) as tm from tablename ) y on y.tm between x.b_time and x.d_time ``` This would work assuming you have all the 24 hours in the table.
Finding hourly passenger loads given boarding and deboarding time
[ "mysql", "sql" ]
My table ``` ID Name Addr tDate ------------------------------- | 1 | Aa | street | 20151231 | 2 | Aa | street | 20130202 | 2 | Aa | street | 20120101 | 3 | Aa | way | 20150821 | 4 | Bb | street | 20150821 | 7 | Xb | street | 20150821 | 5 | Cc | way | 20150821 | 5 | Cc | way | 20150821 | 6 | Cc | no way | 20150821 ``` Result 01 ``` ID Name Addr | tDate ------------------------------- | 1 | Aa | street | 20151231 | 2 | Aa | street | 20130202 | 2 | Aa | street | 20120101 ``` I am going to create a new `nID`. It should be copied as-is, OR merged if Name and Addr are identical, selecting the `ID` with the newest `tDate`. Result 02 ``` ID Name Addr tDate nID ------------------------------------ | 1 | Aa | street | 20151231 | 1 | 2 | Aa | street | 20120101 | 1 <-- nID != ID | 2 | Aa | street | 20130202 | 1 <-- nID != ID | 3 | Aa | way | 20150821 | 3 | 4 | Bb | street | 20150821 | 4 | 7 | Xb | street | 20150821 | 7 | 5 | Cc | way | 20150821 | 5 | 5 | Cc | way | 20150821 | 5 | 6 | Cc | no way | 20150821 | 6 ``` I've tried this, but I'm not sure it's correct. ``` SELECT DISTINCT dr.* FROM MyTable dr inner join( SELECT ID, Name, Addr FROM MyTable GROUP BY ID, Name, Addr ) ss on dr.Name = ss.Name and dr.Addr = ss.Addr and dr.ID <> ss.ID order by Name ```
**EDIT** : *Complete change after your addition of tDate, and need for two result sets* Result Set 1: ``` SELECT id, Name, Addr, tDate FROM ( SELECT *, COUNT(*) OVER (PARTITION BY Name, Addr) AS occurrences FROM MyTable ) AS parsed WHERE occurrences > 1 ``` Result Set 2: ``` SELECT *, FIRST_VALUE(ID) OVER (PARTITION BY Name, Addr ORDER BY tDate DESC ROWS UNBOUNDED PRECEDING) AS nID FROM MyTable ORDER BY ID ``` Example : <http://sqlfiddle.com/#!6/9285ae/9>
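The `FIRST_VALUE` query for Result Set 2 can be spot-checked on a subset of the sample rows using SQLite (3.25+ for window functions):

```python
# FIRST_VALUE over tDate DESC gives every row in a (Name, Addr) group
# the ID of that group's newest record.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (ID INT, Name TEXT, Addr TEXT, tDate TEXT);
INSERT INTO MyTable VALUES
 (1,'Aa','street','20151231'),(2,'Aa','street','20130202'),
 (2,'Aa','street','20120101'),(3,'Aa','way','20150821'),
 (4,'Bb','street','20150821');
""")

rows = conn.execute("""
    SELECT ID, FIRST_VALUE(ID) OVER (
               PARTITION BY Name, Addr ORDER BY tDate DESC
               ROWS UNBOUNDED PRECEDING) AS nID
    FROM MyTable ORDER BY ID
""").fetchall()
print(rows)  # (ID, nID) pairs; the 'Aa'/'street' rows all map to nID 1
```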
The [`dense_rank`](https://msdn.microsoft.com/en-us/library/ms173825.aspx) window function should do the trick: ``` SELECT ID, Name, Addr, DENSE_RANK() OVER (PARTITION BY Name, Addr ORDER BY ID) AS nID FROM mytable ORDER BY 1, 4 ```
SELECT and create new ID for unique ids having identical values on 2 columns
[ "sql", "sql-server", "select", "sql-server-2012" ]
Say I have a table matching `person_id`s to `pet`s. I have a list of the ideal pets that a homeowner must have (at least one of each) and, from the following table, I want to see who meets the requirements. That list is, of course, `(dog, cat, tiger)`. People can definitely have more than one of each, but these are essential (therefore `person_id = 1` is the only one that works). ``` +---------+-----------+--------+ | home_id | person_id | pet | +---------+-----------+--------+ | 1 | 1 | dog | | 2 | 1 | dog | | 3 | 1 | cat | | 4 | 1 | tiger | | 5 | 2 | dog | | 6 | 2 | cat | | 7 | 3 | <null> | | 8 | 4 | tiger | | 9 | 4 | tiger | | 10 | 4 | tiger | +---------+-----------+--------+ ``` I've been able to check who has a tiger or a cat by running: ``` select person_id, pet from house group by person_id having pet in ('dog','cat','tiger'), ``` but obviously this gives the `person_id`s that have at least one of those pets – not all of them.
One way of doing this is to count how many different pets each person has and to compare it (i.e. join it) with the total number of different pets: ``` SELECT person_id FROM (SELECT person_id, COUNT(DISTINCT pet) AS dp FROM pets GROUP BY person_id) a JOIN (SELECT COUNT(DISTINCT pet) AS dp FROM pets) b ON a.dp = b.dp ``` EDIT: If just some pets are considered "ideal", and this list is known upfront, the query can be greatly simplified by introducing this information in a `where` clause: ``` SELECT person_id FROM pets WHERE pet IN ('dog', 'cat', 'tiger') GROUP BY person_id HAVING COUNT(DISTINCT pet) = 3 ```
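The simplified `HAVING COUNT(DISTINCT pet) = 3` form can be checked against the question's table in SQLite:

```python
# Relational division: keep only people owning all three required pets.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE house (home_id INT, person_id INT, pet TEXT);
INSERT INTO house VALUES
 (1,1,'dog'),(2,1,'dog'),(3,1,'cat'),(4,1,'tiger'),
 (5,2,'dog'),(6,2,'cat'),(7,3,NULL),
 (8,4,'tiger'),(9,4,'tiger'),(10,4,'tiger');
""")

rows = conn.execute("""
    SELECT person_id FROM house
    WHERE pet IN ('dog', 'cat', 'tiger')
    GROUP BY person_id
    HAVING COUNT(DISTINCT pet) = 3
""").fetchall()
print(rows)  # duplicates (person 4's three tigers) don't fool COUNT(DISTINCT)
```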
``` select person_id from house where pet = 'dog' intersect select person_id from house where pet = 'cat' intersect select person_id from house where pet = 'tiger' ``` You can use `intersect` to get people who have all the 3 pets.
Group by having at least one of each item
[ "sql", "postgresql", "select", "relational-division" ]
I'm trying to combine 2 fields from 2 tables into one field, not one record, using **Delphi SQL**. This is what my tables look like: Table1: ``` ItemID ------ E001 E002 E004 I001 ``` Table2: ``` ItemID ------ E002 E003 I001 I002 ``` ItemID serves as a key between the two tables. I am trying to write a query that will return the following: ``` ItemID E001 E002 E003 E004 I001 I002 ``` Is this possible and if so how? `JOIN` didn't seem to work and `UNION` didn't either, unless there is something I must do in my actual database design. Any ideas as to what will work?
Seems like what you want is the following query: ``` select ItemID from Table1 union select ItemID from Table2 ``` The `union` eliminates duplicate rows. If you want to keep the duplicate, use `union all`.
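The dedupe behaviour of `UNION` versus `UNION ALL` is quick to confirm with the question's data in SQLite:

```python
# UNION removes duplicates across the two tables; UNION ALL keeps them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (ItemID TEXT);
CREATE TABLE Table2 (ItemID TEXT);
INSERT INTO Table1 VALUES ('E001'),('E002'),('E004'),('I001');
INSERT INTO Table2 VALUES ('E002'),('E003'),('I001'),('I002');
""")

merged = [r[0] for r in conn.execute(
    "SELECT ItemID FROM Table1 UNION SELECT ItemID FROM Table2")]
with_dupes = [r[0] for r in conn.execute(
    "SELECT ItemID FROM Table1 UNION ALL SELECT ItemID FROM Table2")]
print(len(merged), len(with_dupes))  # 6 distinct values vs 8 raw rows
```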
Simple approach (assumed MySQL due to the mysql tag): ``` select ItemId as ItemId from Table1 as t1 union select ItemId as ItemId from Table2 as t2 ``` [DEMO HERE](http://sqlfiddle.com/#!9/1a291/5) If you really want to have `*` as a part of your result, then use the following code (simulated full outer join): ``` select IFNULL(t.ItemId1, t.ItemId2), case when t.ItemId1 = t.ItemId2 then '*' else '' end as star from ( select t1.ItemId as ItemId1, t2.ItemId as ItemId2 FROM Table1 t1 left join Table2 t2 on t1.ItemId = t2.ItemId union select t1.ItemId as ItemId1, t2.ItemId as ItemId2 FROM Table1 t1 right join Table2 t2 on t1.ItemId = t2.ItemId ) as t ``` [2ND DEMO HERE](http://sqlfiddle.com/#!9/1a291/21)
How to combine 2 fields from 2 tables into one field using Delphi SQL
[ "sql", "delphi" ]
I have this query ``` SELECT DISTINCT publishedapp FROM tbl_name WHERE publishedapp LIKE "%@%" OR publishedapp LIKE "%Desk%" OR publishedapp LIKE "%RDP%" OR publishedapp LIKE "%CTX%" ORDER BY publishedapp ``` It returns a list, but from that list I want to filter out a couple of entries. I've tried adding `AND publishedapp NOT LIKE "%ActiveH Desktop June 2013%"` like this ``` SELECT DISTINCT publishedapp FROM tbl_name WHERE publishedapp LIKE "%@%" OR publishedapp LIKE "%Desk%" OR publishedapp LIKE "%RDP%" OR publishedapp LIKE "%CTX%" AND publishedapp NOT LIKE "%ActiveH Desktop June 2013%" ORDER BY publishedapp ``` but it doesn't exclude "ActiveH Desktop June 2013" from the list. Any ideas how I can selectively take rows out of a filtered list? From this table ``` Username Client Name Date Time Published App abim 009283-LAP 01/08/2015 19:18:40.90 Mencap@Work2-1 adetolaok 005421-DSK 01/08/2015 15:14:24.51 Mencap@Work2-1 amandawo AMANDA-FIXED-PC 01/08/2015 9:20:29.01 Mencap@Work2-1 amandawo 009759-DSK 01/08/2015 11:15:14.18 Mencap@Work2-1 AndreasR 015029-LAP 01/08/2015 16:17:08.15 Mencap@Work2-1 AnneG 009255-LAP 01/08/2015 8:36:16.91 Mencap Desktop with Acrobat AnneG 009255-LAP 01/08/2015 10:27:40.10 Mencap Desktop with Acrobat AnneG 009255-LAP 01/08/2015 11:32:57.52 Mencap Desktop with Acrobat AntonyT ANTONY 01/08/2015 11:22:10.08 Mencap@Work2-1 assend XL3SS 01/08/2015 12:02:30.32 Desktop on NC-CITRIXIT01 BrianW BRIAN-HP 01/08/2015 19:00:00.02 Mencap Desktop with Office 2010 CandiceL 010198-LAP 01/08/2015 21:05:40.67 Mencap@Work2-1 carolinej 009132-LAP 01/08/2015 14:52:02.40 Mencap Desktop with Acrobat CharlotteWi 015084-DSK 01/08/2015 16:09:17.25 Mencap@Work2-1 ChelseaS 005240-LAP 01/08/2015 11:15:11.69 Mencap@Work2-1 chrisch CHRIS-PC 01/08/2015 8:11:42.06 Powerplan Ciaram 008615-LAP 01/08/2015 8:46:31.71 Mencap@Work2-1 ClaireTu 009588-DSK 01/08/2015 11:40:15.15 Mencap @ Work Desktop clemmiet 008956-LAP 01/08/2015 21:17:45.47 Mencap Desktop with Office 2010 ColetteP 
009363-LAP 01/08/2015 9:36:48.10 Mencap@Work2-1 danielleba 009723-DSK 01/08/2015 13:40:36.72 Mencap@Work2-1 danielleba 009723-DSK 01/08/2015 13:41:01.34 Mencap@Work2-1 danielleyo 004425-DSK 01/08/2015 19:46:38.96 Mencap @ Work Desktop darrenp 015148-DSK 01/08/2015 12:05:03.50 Mencap@Work2-1 davidf roid37e2c5c861b3993 01/08/2015 21:43:51.36 Mencap@Work2-1 davidpar 004451-DSK 01/08/2015 8:48:57.15 Mencap@Work2-1 dawnpo 009359-LAP 01/08/2015 12:50:37.28 Thin Client Desktop deboraho 007019-LAP 01/08/2015 15:25:38.81 Mencap@Work2-1 debradu 009410-LAP 01/08/2015 18:11:43.92 Mencap @ Work Desktop deen 010336-LAP 01/08/2015 10:50:40.99 Mencap@Work2-1 dhumisaniM 007019-LAP 01/08/2015 8:22:30.58 Mencap@Work2-1 dianeh NICKS 01/08/2015 11:03:26.73 Mencap@Work2-1 dianeh NICKS 01/08/2015 11:05:26.17 Mencap@Work2-1 Eileenh 009786-LAP 01/08/2015 10:02:20.25 Mencap@Work2-1 Gabby 008518-LAP 01/08/2015 19:55:49.40 Mencap@Work2-1 GaliniP 005703-DSK 01/08/2015 22:19:34.59 Mencap@Work2-1 garypl 005635-DSK 01/08/2015 10:40:43.18 Mencap@Work2-1 gillt GILLSCOMPUTER 01/08/2015 21:36:55.34 Mencap@Work2-1 h2005 005359-DSK 01/08/2015 8:02:32.38 Mencap@Work2-1 h2033 009434-DSK 01/08/2015 16:43:40.31 Internet Explorer h2033 009434-DSK 01/08/2015 22:06:46.48 Internet Explorer h2053 009216-DSK 01/08/2015 14:31:46.27 Mencap@Work2-1 h2087 005618-RMS 01/08/2015 12:37:57.20 Internet Explorer h3012 009723-DSK 01/08/2015 7:24:28.99 Internet Explorer h3012 009723-DSK 01/08/2015 7:24:29.16 Internet Explorer misactive AMSDSSMORRIS 16/07/2015 11:06:42.95 ActiveH Desktop June 2013 ``` I'd like to get back ``` Mencap@Work2-1 Mencap Desktop with Acrobat Desktop on NC-CITRIXIT01 Mencap Desktop with Office 2010 Mencap @ Work Desktop Thin Client Desktop ```
Just put all the OR conditions in brackets. ``` SELECT DISTINCT publishedapp FROM tbl_name WHERE (publishedapp LIKE '%@%' OR publishedapp LIKE '%Desk%' OR publishedapp LIKE '%RDP%' OR publishedapp LIKE '%CTX%') AND publishedapp NOT LIKE '%ActiveH Desktop June 2013%' ORDER BY publishedapp ``` The problem is that AND binds more tightly than OR, so your NOT LIKE applies only to the last LIKE condition; a row matching any of the earlier ORs passes the WHERE clause regardless, which is why 'ActiveH Desktop June 2013' is not filtered out.
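The precedence point is easy to demonstrate side by side in SQLite with a few sample app names taken from the question:

```python
# AND binds tighter than OR: without parentheses, the NOT LIKE only
# restricts the '%CTX%' branch, so 'ActiveH Desktop June 2013' slips
# through via '%Desk%'.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_name (publishedapp TEXT);
INSERT INTO tbl_name VALUES
 ('Mencap@Work2-1'), ('Thin Client Desktop'),
 ('ActiveH Desktop June 2013'), ('Powerplan');
""")

without = {r[0] for r in conn.execute("""
    SELECT DISTINCT publishedapp FROM tbl_name
    WHERE publishedapp LIKE '%@%' OR publishedapp LIKE '%Desk%'
       OR publishedapp LIKE '%RDP%' OR publishedapp LIKE '%CTX%'
      AND publishedapp NOT LIKE '%ActiveH Desktop June 2013%'""")}

withparens = {r[0] for r in conn.execute("""
    SELECT DISTINCT publishedapp FROM tbl_name
    WHERE (publishedapp LIKE '%@%' OR publishedapp LIKE '%Desk%'
        OR publishedapp LIKE '%RDP%' OR publishedapp LIKE '%CTX%')
      AND publishedapp NOT LIKE '%ActiveH Desktop June 2013%'""")}
print(without - withparens)  # what the parentheses actually filter out
```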
try the below code ``` SELECT DISTINCT publishedapp FROM tbl_name WHERE (publishedapp LIKE '%@%' OR publishedapp LIKE '%Desk%' OR publishedapp LIKE '%RDP%' OR publishedapp LIKE '%CTX%') ORDER BY publishedapp ```
SQL WHERE LIKE OR FILTER
[ "mysql", "sql" ]
Suppose I have a table as follows: ``` Name Part Address City ... Stuff Bob X1 Y1 Z1 ... Stuff1 Bob X2 Y1 Z1 ... Stuff1 Bob X3 Y1 Z1 ... Stuff1 Susan V1 Y2 Z2 ... Stuff2 Susan V2 Y2 Z2 ... Stuff2 .... ``` Here, Stuff is many columns. Notice that Address, City, Stuff doesn't change. So I just want to return the first row. I know I need to do something like ``` SELECT * FROM myTable GROUP_BY (NAME) ``` but I'm not sure how to select the first row only after the group by? I saw other posts but they all were selecting based upon a min.. since my columns aren't numeric, I'm not sure how they would apply?
Use [`ROW_NUMBER`](https://msdn.microsoft.com/en-us/library/ms186734.aspx) instead. > Returns the sequential number of a row within a partition of a result > set, starting at 1 for the first row in each partition. ``` SELECT * FROM ( SELECT *, rn = ROW_NUMBER() OVER(PARTITION BY Name ORDER BY Part) FROM tbl ) AS t WHERE rn = 1 ``` You can replace the `ORDER BY` column depending on your definition of first.
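The same pattern runs under SQLite's window functions, so it can be checked against the question's sample data (trimmed to three columns for brevity):

```python
# ROW_NUMBER restarts at 1 per Name partition; keeping rn = 1 returns
# exactly one row per group, ordered by Part within the partition.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (Name TEXT, Part TEXT, City TEXT);
INSERT INTO tbl VALUES
 ('Bob','X1','Z1'),('Bob','X2','Z1'),('Bob','X3','Z1'),
 ('Susan','V1','Z2'),('Susan','V2','Z2');
""")

rows = conn.execute("""
    SELECT Name, Part, City FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Part) AS rn
        FROM tbl) t
    WHERE rn = 1
    ORDER BY Name
""").fetchall()
print(rows)  # one row per Name
```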
try this ``` WITH CTE AS ( SELECT *, ROW_NUMBER() OVER(PARTITION BY GroupColumn ORDER BY SomeColumn) AS indx FROM YourTableName ) SELECT * FROM CTE WHERE indx = 1 ```
Select First Row from each group when many columns
[ "sql", "sql-server" ]
I am storing some filter data in my table. Let me make it more clear: I want to store some `where` clauses and their values in a database and use them when I want to retrieve data from a database. For example, consider a `people` table (entity set) and some filters on it in another table: ``` "age" , "> 70" "gender" , "= male" ``` Now when I retrieve data from the `people` table I want to get these filters to filter my data. I know I can generate a SQL query as a string and execute that but is there any other better way in EF, LINQ?
One solution is to use the [Dynamic Linq Library](https://www.nuget.org/packages/System.Linq.Dynamic.Library/); using this library you can write: ``` var filterTable = //some code to retrieve it var whereClause = string.Join(" AND ", filterTable.Select(x => x.Left + x.Right)); var result = context.People.Where(whereClause).ToList(); ``` This assumes that the filter table has columns `Left` and `Right` and that you want to join the filters with `AND`. My suggestion is to include more detail in the filter table: for example, separate the operators from the operands, add a column that determines whether the join is `AND` or `OR`, and a column that determines which other row it joins with. You need a tree structure if you want to handle more complex queries like `(A AND B) OR (C AND D)`. Another solution is to build an expression tree from the filter table. Here is a simple example: ``` var arg = Expression.Parameter(typeof(People)); Expression whereClause = null; foreach(var row in filterTable) { Expression rowClause = null; var left = Expression.PropertyOrField(arg, row.PropertyName); //here a type cast is needed, for example //var right = Expression.Constant(int.Parse(row.Right)); var right = Expression.Constant(row.Right, left.Type); switch(row.Operator) { case "=": rowClause = Expression.Equal(left, right); break; case ">": rowClause = Expression.GreaterThan(left, right); break; case ">=": rowClause = Expression.GreaterThanOrEqual(left, right); break; } if(whereClause == null) { whereClause = rowClause; } else { whereClause = Expression.AndAlso(whereClause, rowClause); } } var lambda = Expression.Lambda<Func<People, bool>>(whereClause, arg); context.People.Where(lambda); ``` This is a very simplified example; you would need to add validation, type casting, and more to make it work for all kinds of queries.
This is an interesting question. First off, make sure you're honest with yourself: you are creating a new query language, and this is *not* a trivial task (however trivial your expressions may seem). If you're certain you're not underestimating the task, then you'll want to look at [LINQ expression trees](https://msdn.microsoft.com/en-us/library/bb397951) ([reference documentation](https://msdn.microsoft.com/en-us/library/system.linq.expressions)). Unfortunately, it's quite a broad subject, I encourage you to learn the basics and ask more specific questions as they come up. Your goal is to interpret your filter expression records (fetched from your table) and create a LINQ expression tree for the predicate that they represent. You can then pass the tree to `Where()` calls as usual.
Entity Framework filter data by string sql
[ "", "sql", "sql-server", "linq", "entity-framework", "linq-expressions", "" ]
I'm implementing in my application an event logging system to save some event types from my code, so I've created a table to store the log type and an Incremental ID: ``` |LogType|CurrentId| |info | 1 | |error | 5 | ``` And also a table to save the concrete log record ``` |LogType|IdLog|Message | |info |1 |Process started| |error |5 |some error | ``` So, every time I need to save a new record I call a SPROC to calculate the new id for the log type, basically: `newId = (currentId + 1)`. But I am facing an issue with that calculation because if multiple processes calls the SPROC at the same time the "generated Id" is the same, so I'm getting log records with the same Id, and every record must be Id-unique. This is my SPROC written for **SQL Server 2005**: ``` ALTER PROCEDURE [dbo].[usp_GetLogId] @LogType VARCHAR(MAX) AS BEGIN SET NOCOUNT ON; BEGIN TRANSACTION BEGIN TRY DECLARE @IdCreated VARCHAR(MAX) IF EXISTS (SELECT * FROM TBL_ApplicationLogId WHERE LogType = @LogType) BEGIN DECLARE @CurrentId BIGINT SET @CurrentId = (SELECT CurrentId FROM TBL_ApplicationLogId WHERE LogType = @LogType) DECLARE @NewId BIGINT SET @NewId = (@CurrentId + 1) UPDATE TBL_ApplicationLogId SET CurrentId = @NewId WHERE LogType = @LogType SET @IdCreated = CONVERT(VARCHAR, @NewId) END ELSE BEGIN INSERT INTO TBL_ApplicationLogId VALUES(@LogType, 0) EXEC @IdCreated = usp_GetLogId @LogType END END TRY BEGIN CATCH DECLARE @ErrorMessage NVARCHAR(MAX) SET @ErrorMessage = ERROR_MESSAGE() IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION; RAISERROR (@ErrorMessage, 16, 1) END CATCH IF @@TRANCOUNT > 0 COMMIT TRANSACTION SELECT @IdCreated END ``` I would appreciate your help to fix the sproc to return an unique id on every call. It has to work on **SQL Server 2005**. Thanks
Can you achieve what you want with an identity column? Then you can just let SQL Server guarantee uniqueness. Example:

```
create table my_test_table
(
     ID int identity
    ,SOMEVALUE nvarchar(100)
);
insert into my_test_table(somevalue)values('value1');
insert into my_test_table(somevalue)values('value2');
select * from my_test_table
```

If you **must** issue the new ID values yourself for some reason, try using a sequence, as shown here (note: `CREATE SEQUENCE` requires SQL Server 2012 or later, so it is not an option on the SQL Server 2005 instance mentioned in the question):

```
if object_id('my_test_table') is not null
begin
    drop table my_test_table;
end;
go

create table my_test_table
(
     ID int
    ,SOMEVALUE nvarchar(100)
);
go

if object_id('my_test_sequence') is not null
begin
    drop sequence my_test_sequence;
end;
go

CREATE SEQUENCE my_test_sequence
    AS INT --other options are here: https://msdn.microsoft.com/en-us/library/ff878091.aspx
    START WITH 1
    INCREMENT BY 1
    MINVALUE 0
    NO MAXVALUE;
go

insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value1');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value2');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value3');
select * from my_test_table
```

One more edit: I think this is an improvement to the existing stored procedure, given the requirements. Include the new value calculation directly in the UPDATE, ultimately return the value directly from the table (not from a variable which could be out of date) and avoid recursion. A full test script is below.
``` if object_id('STACKOVERFLOW_usp_getlogid') is not null begin drop procedure STACKOVERFLOW_usp_getlogid; end go if object_id('STACKOVERFLOW_TBL_ApplicationLogId') is not null begin drop table STACKOVERFLOW_TBL_ApplicationLogId; end go create table STACKOVERFLOW_TBL_ApplicationLogId(CurrentID int, LogType nvarchar(max)); go create PROCEDURE [dbo].[STACKOVERFLOW_USP_GETLOGID](@LogType VARCHAR(MAX)) AS BEGIN SET NOCOUNT ON; BEGIN TRANSACTION BEGIN TRY DECLARE @IdCreated VARCHAR(MAX) IF EXISTS (SELECT * FROM STACKOVERFLOW_TBL_ApplicationLogId WHERE LogType = @LogType) BEGIN UPDATE STACKOVERFLOW_TBL_APPLICATIONLOGID SET CurrentId = CurrentID + 1 WHERE LogType = @LogType END ELSE BEGIN --first time: insert 0. INSERT INTO STACKOVERFLOW_TBL_ApplicationLogId(CurrentID,LogType) VALUES(0,@LogType); END END TRY BEGIN CATCH DECLARE @ErrorMessage NVARCHAR(MAX) SET @ErrorMessage = ERROR_MESSAGE() IF @@TRANCOUNT > 0 begin ROLLBACK TRANSACTION; end RAISERROR(@ErrorMessage, 16, 1); END CATCH select CurrentID from STACKOVERFLOW_TBL_APPLICATIONLOGID where LogType = @LogType; IF @@TRANCOUNT > 0 begin COMMIT TRANSACTION END end go exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1'; exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1'; exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1'; exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2'; exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2'; exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2'; ```
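A side note on the idea above: the essential fix is performing the increment inside the `UPDATE` statement itself (one atomic statement) rather than selecting the current value into a variable and writing it back. Here is a minimal, hypothetical sketch of that pattern using Python's bundled `sqlite3` module — SQLite standing in for SQL Server, and `INSERT OR IGNORE` standing in for the `IF EXISTS` branch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TBL_ApplicationLogId (LogType TEXT PRIMARY KEY, CurrentId INTEGER)")

def next_log_id(log_type):
    # One transaction: create the counter row if missing, bump it in a single
    # UPDATE (no read-then-write race), then return the stored value.
    with con:
        con.execute("INSERT OR IGNORE INTO TBL_ApplicationLogId VALUES (?, 0)", (log_type,))
        con.execute("UPDATE TBL_ApplicationLogId SET CurrentId = CurrentId + 1 "
                    "WHERE LogType = ?", (log_type,))
        return con.execute("SELECT CurrentId FROM TBL_ApplicationLogId "
                           "WHERE LogType = ?", (log_type,)).fetchone()[0]

ids = [next_log_id("info"), next_log_id("info"), next_log_id("info"), next_log_id("error")]
print(ids)
```

Each log type gets its own independent sequence: the calls above yield 1, 2, 3 for `info` and 1 for `error`.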
Not much you can do here, but validate that: * table TBL\_ApplicationLogId is indexed by column LogType. * @LogType sp parameter is the same data type as column LogType in table TBL\_ApplicationLogId, so it can actually use the index if/when it exists. * If you have a concurrency issue, maybe forcing the lock level on table TBL\_ApplicationLogId during select and update can help. Just add (ROWLOCK) after the table name, Eg: TBL\_ApplicationLogId (ROWLOCK)
SPROC that returns unique calculated INT for each call
[ "", "sql", "t-sql", "sql-server-2005", "" ]
After updating mysql version 5.7.8-rc-log, I granted privileges like this: ``` GRANT select ON test_db.* TO 'test'@'host'; ``` and getting following error: > SELECT command denied to user 'test'@'host' for table 'session\_variables' but when I grant privileges like this: ``` GRANT select ON *.* TO 'test'@'host'; ``` it works. Can anybody help?
Here are the [article1](http://planet.mysql.com/entry/?id=5991741), [article2](https://github.com/rails/rails/issues/21108), [article3](http://code.openark.org/blog/mysql/baffling-5-7-globalstatus-variables-issues-unclean-migration-path) related to this issue. As per these articles, Workaround is setting `show_compatibility_56 = on` in `/etc/my.cnf` and restart mysql server. MySQL 5.7 introduces a change in the way we query for global variables and status variables: the `INFORMATION_SCHEMA.(GLOBAL|SESSION)_(VARIABLES|STATUS)` tables are now deprecated and empty. Instead, we are to use the respective performance\_schema.(global|session)\_(variables|status) tables. But the change goes farther than that; there is also a security change. So non-root user gets: ``` mysql> show session variables like 'tx_isolation'; ERROR 1142 (42000): SELECT command denied to user 'normal_user'@'my_host' for table 'session_variables' ``` **Solutions?** The following are meant to be solutions, but do not really solve the problem: * SHOW commands. SHOW GLOBAL|SESSION VARIABLES|STATUS will work properly, and will implicitly know whether to provide the results via information\_schema or performance\_schema tables. But, aren't we meant to be happier with SELECT queries? So that I can really do stuff that is smarter than LIKE 'variable\_name%'? And of course you cannot use SHOW in server side cursors. Your stored routines are in a mess now. This does not solve the GRANTs problem. * show\_compatibility\_56: an introduced variable in 5.7, boolean. It truly is a time-travel-paradox novel in disguise, in multiple respects. Documentation introduces it, and says it is deprecated. 
* time-travel-paradox :O But it actually works in 5.7.8 (latest)
* time-travel-paradox, plot thickens: your automation scripts do not know in advance whether your MySQL has this variable
  * Hence `SELECT @@global.show_compatibility_56` will produce an error on 5.6
  * But the "safe" way of `SHOW GLOBAL VARIABLES LIKE 'show_compatibility_56'` will fail on a privilege error on 5.7 — time-travel-paradox :O
* Actually advised by my colleague Simon J. Mudd, `show_compatibility_56` defaults to OFF. I support this line of thought. Or else it's `old_passwords=1` all over again.
* `show_compatibility_56` doesn't solve the GRANTs problem.
* This does not solve any migration path. It just postpones the moment when I will hit the same problem. When I flip the variable from "1" to "0", I'm back at square one.

**Suggestion**

I claim security is not the issue, as presented above. I claim Oracle will yet again fall into the trap of no-easy-way-to-migrate-to-GTID in 5.6 if the current solution is unchanged. I claim that there have been too many changes at once. Therefore, I suggest one of two alternative flows:

* Flow 1: keep information\_schema, later migrate into performance\_schema
  * In 5.7, information\_schema tables should still produce the data, with no security constraints on information\_schema.
  * Generate WARNINGs on reading from information\_schema ("...this will be deprecated...").
  * performance\_schema also available, with security constraints, whatever.
  * In 5.8, remove information\_schema tables; we are left with performance\_schema only.
* Flow 2: easy migration into performance\_schema:
  * In 5.7, performance\_schema tables should not require any special privileges; any user can read from them.
  * Keep show\_compatibility\_56 as it is. SHOW commands choose between information\_schema or performance\_schema on their own -- just as things are done now.
  * In 5.8, performance\_schema tables will require SELECT privileges.

Hope this will help you.
Try with the following way, maybe you will get the result. ``` GRANT ALL PRIVILEGES ON bedgeaj_medmax.transactions to 'bedgeaj_root'@'%' IDENTIFIED BY 'password'; ``` (OR) ``` GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%' WITH GRANT OPTION; ```
Command denied for table 'session_variables'
[ "", "mysql", "sql", "t-sql", "privileges", "sql-grant", "" ]
I have the following table (call it just `table`): ``` id name value group_id PK varchar(32) integer integer ``` Now, I need to write a query that returns me `SUM(value)` grouped by `group_id` and `COUNT(name)` where `name like '%me%'`( I don't need to just compute count grouped by `group_id`, I need to compute the count satisfies by the condition). But, I need to do that without writing subqueries, and I'm limited to 9.2. I tend to write a custom aggregate function specific for that needs. Would it be a right solution?
With 9.4 you can use a filtered aggregate:

```
select group_id, 
       sum(value), 
       count(name) filter (where name like '%me%')
from the_table
group by group_id;
```

For earlier versions, you need to use a CASE statement:

```
select group_id, 
       sum(value), 
       count(case when name like '%me%' then 1 end)
from the_table
group by group_id;
```

This works because aggregates ignore null values and the `case` expression returns `NULL` if the name doesn't match.
It should be noted that case can be replaced by casting a bool to int which is shorter (`LIKE` can be replaced by `~~` but that is both shorter and less readable). ``` SELECT group_id ,Sum(value) AS sum_value ,Sum((name LIKE '%me%')::int) AS count_value FROM table GROUP BY group_id; ```
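Both variants can be sanity-checked quickly. The sketch below uses Python's bundled `sqlite3` with invented sample rows; the `CASE` form is used because it is the portable one (SQLite only gained `FILTER` in later versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE the_table (id TEXT PRIMARY KEY, name TEXT, value INTEGER, group_id INTEGER)")
con.executemany("INSERT INTO the_table VALUES (?, ?, ?, ?)", [
    ("a", "acme", 10, 1), ("b", "home", 20, 1),   # both names in group 1 match '%me%'
    ("c", "dome", 5, 2),  ("d", "xyz",  7, 2),    # only one match in group 2
])
# COUNT(expr) counts non-NULL values, so the CASE acts as a conditional count.
result = con.execute("""
    SELECT group_id,
           SUM(value),
           COUNT(CASE WHEN name LIKE '%me%' THEN 1 END)
    FROM the_table
    GROUP BY group_id
    ORDER BY group_id
""").fetchall()
print(result)
```

Group 1 sums to 30 with 2 matching names; group 2 sums to 12 with 1 matching name.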
How to compute the count by a condition
[ "", "sql", "postgresql", "postgresql-9.2", "" ]
I need your help! I have a table:

```
CREATE TABLE `table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `res` varchar(255) DEFAULT NULL,
  `value` int(6) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8;

-- Records of table
INSERT INTO `table` VALUES (1, 'gold', 44);
INSERT INTO `table` VALUES (2, 'gold', 44);
INSERT INTO `table` VALUES (3, 'gold', 45);
INSERT INTO `table` VALUES (4, 'gold', 46);
INSERT INTO `table` VALUES (5, 'gold', 44);
INSERT INTO `table` VALUES (6, 'gold', 44);
INSERT INTO `table` VALUES (7, 'gold', 44);
INSERT INTO `table` VALUES (8, 'gold', 47);
```

I need to write a SELECT query that ignores the next/previous duplicated rows (i.e. collapses consecutive duplicates), so that I receive data like this:

```
- gold:44 (ignored 1 record)
- gold:45
- gold:46
- gold:44 (ignored 2 records)
- gold:47
```

It does not matter which of the duplicated records is kept (first, second, last). (I tried GROUP BY value and DISTINCT, but those also remove the later, non-consecutive records with the same value.)
You can solve this with a `gaps and islands` solution. - Normally that involves `ROW_NUMBER()` which is not present in MySQL - The solution below mimics `ROW_NUMBER()` with variables and `ORDER BY` Link to example : <http://sqlfiddle.com/#!9/32e72/12> ``` SELECT MIN(id) AS id, res, value FROM ( SELECT IF (@res = res AND @val = value, @row := @row + 1, @row := 1) AS val_ordinal, id AS id, res_ordinal AS res_ordinal, @res := res AS res, @val := value AS value FROM ( SELECT IF (@res = res , @row := @row + 1, @row := 1) AS res_ordinal, id AS id, @res := res AS res, @val := value AS value FROM `table`, ( SELECT @row := 0, @res := '', @val := 0 ) AS initialiser ORDER BY res, id ) AS sequenced_res_id, ( SELECT @row := 0, @res := '', @val := 0 ) AS initialiser ORDER BY res, value, id ) AS sequenced_res_val_id GROUP BY res, value, res_ordinal - val_ordinal ORDER BY MIN(id) ; ``` If I add `res_ordinal`, `val_ordinal` and `res_ordinal - val_ordinal` to your data, it can be seen that you can now differentiate between the two sets of `44` ``` GROUP INSERT INTO `table` VALUES ('1', 'gold', '44'); 1 - 1 = 0 (Gold, 44, 0) INSERT INTO `table` VALUES ('2', 'gold', '44'); 2 - 2 = 0 INSERT INTO `table` VALUES ('3', 'gold', '45'); 3 - 1 = 2 (Gold, 45, 2) INSERT INTO `table` VALUES ('4', 'gold', '46'); 4 - 1 = 3 (Gold, 46, 3) INSERT INTO `table` VALUES ('5', 'gold', '44'); 5 - 3 = 2 (Gold, 44, 2) INSERT INTO `table` VALUES ('6', 'gold', '44'); 6 - 4 = 2 INSERT INTO `table` VALUES ('7', 'gold', '44'); 7 - 5 = 2 INSERT INTO `table` VALUES ('8', 'gold', '47'); 8 - 1 = 7 (Gold, 47, 7) ``` NOTE: According to your data I could use `id` instead of making my own `res_ordinal`. doing it this way, however, copes with gaps in the `id` sequence and having multiple different resources. This means that in the following example the two golds are considered to be duplicates of each other... 
```
1 Gold 44    1 - 1 = 0  (Gold, 44, 0)
2 Poop 45    1 - 1 = 0  (Poop, 45, 0)
3 Gold 44    2 - 2 = 0  (Gold, 44, 0)  -- Duplicate
4 Gold 45    3 - 1 = 2  (Gold, 45, 2)
```
``` select t1.* from `table` t1 where not exists ( select 1 from `table` t2 where t1.id = 1+t2.id and t1.res = t2.res and t1.value = t2.value ); ``` works fine
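For reference, the "collapse only *consecutive* duplicates" requirement is exactly what Python's `itertools.groupby` does (it groups adjacent equal keys only), which makes it a convenient way to state the expected output for the question's data:

```python
from itertools import groupby

rows = [(1, "gold", 44), (2, "gold", 44), (3, "gold", 45), (4, "gold", 46),
        (5, "gold", 44), (6, "gold", 44), (7, "gold", 44), (8, "gold", 47)]

# Keep the first row of each run of identical (res, value) pairs.
collapsed = [next(group) for _, group in groupby(rows, key=lambda r: (r[1], r[2]))]
print(collapsed)
```

The result is the sequence 44, 45, 46, 44, 47 — the two runs of 44 are kept separate because they are not adjacent.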
How to ignore next duplicated row?
[ "", "mysql", "sql", "" ]
I have a simple query that give me the count of application types; it looks something like this: ``` SELECT Application_Type, COUNT(*) FROM Loan_Applications GROUP BY Application_Type; ``` It returns something like this: ``` Home 3 Car 21 Commercial 16 ``` There is a field in the database called ***Submission\_Date*** (Of type Date) How can I query and break up this data by week? ``` Type This week Last week 2 weeks ago Home 1 1 1 Car 9 6 6 Commercial 10 0 3 ```
You can try something like:

```
SELECT
    Application_Type,
    SUM(IF(Submission_Date BETWEEN CURRENT_DATE - INTERVAL 1 WEEK AND CURRENT_DATE, 1, 0)) AS 'This week',
    SUM(IF(Submission_Date BETWEEN CURRENT_DATE - INTERVAL 2 WEEK AND CURRENT_DATE - INTERVAL 1 WEEK, 1, 0)) AS 'Last week',
    SUM(IF(Submission_Date BETWEEN CURRENT_DATE - INTERVAL 3 WEEK AND CURRENT_DATE - INTERVAL 2 WEEK, 1, 0)) AS '2 weeks ago'
FROM Loan_Applications
GROUP BY Application_Type;
```

Or:

```
SET @date1w = CURRENT_DATE - INTERVAL 1 WEEK;
SET @date2w = CURRENT_DATE - INTERVAL 2 WEEK;
SET @date3w = CURRENT_DATE - INTERVAL 3 WEEK;

SELECT
    Application_Type,
    SUM(IF(Submission_Date BETWEEN @date1w AND CURRENT_DATE, 1, 0)) AS 'This week',
    SUM(IF(Submission_Date BETWEEN @date2w AND @date1w, 1, 0)) AS 'Last week',
    SUM(IF(Submission_Date BETWEEN @date3w AND @date2w, 1, 0)) AS '2 weeks ago'
FROM Loan_Applications
GROUP BY Application_Type;
```

Note that `BETWEEN` takes the lower bound first and is inclusive at both ends, so a row falling exactly on a boundary date is counted in two buckets; use `>`/`<=` pairs if that matters.
You can make a SUMIF type of calculation. The following sums the number of rows where the submission date is within the last week. ``` SUM(CASE WHEN submission_date >= CURDATE() - 7 THEN 1 ELSE 0 END) ``` You could then repeat this for different ranges, to get any "bands" that you desire.
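The per-week bucketing can be verified on toy data. This hypothetical sketch uses Python's bundled `sqlite3` — SQLite's `date(..., '-7 days')` stands in for MySQL's `CURDATE() - INTERVAL 7 DAY`, a fixed "today" keeps the result reproducible, and half-open ranges (`>` low, `<=` high) avoid counting boundary dates twice:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Loan_Applications (Application_Type TEXT, Submission_Date TEXT)")
con.executemany("INSERT INTO Loan_Applications VALUES (?, ?)", [
    ("Home", "2015-08-20"), ("Car", "2015-08-19"), ("Car", "2015-08-12"),
    ("Home", "2015-08-11"), ("Home", "2015-08-05")])

today = "2015-08-22"  # fixed reference date instead of CURRENT_DATE
rows = con.execute("""
    SELECT Application_Type,
           SUM(CASE WHEN Submission_Date >  date(?, '-7 days')  THEN 1 ELSE 0 END) AS this_week,
           SUM(CASE WHEN Submission_Date <= date(?, '-7 days')
                     AND Submission_Date >  date(?, '-14 days') THEN 1 ELSE 0 END) AS last_week,
           SUM(CASE WHEN Submission_Date <= date(?, '-14 days')
                     AND Submission_Date >  date(?, '-21 days') THEN 1 ELSE 0 END) AS two_weeks_ago
    FROM Loan_Applications
    GROUP BY Application_Type
    ORDER BY Application_Type
""", (today,) * 5).fetchall()
print(rows)
```

With these invented rows, "Car" lands one application in each of the first two buckets and "Home" lands one in each of the three.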
Grouping COUNT by Time in MySql
[ "", "mysql", "sql", "" ]
I currently use a pretty basic backup script to backup my SQL databases to a given directory, zipped with Winrar. I am looking to use the SQL compression command (currently commented out) prior to the Winrar IF the version of SQL the script is being used on is SQL Standard or higher. Here is what my current script looks like: ``` Declare @backupPath nvarchar(1000); set @backupPath = 'C:\Backups\Auto\'; Declare @fileName nvarchar(100); Declare @currentDate datetime Declare @fullPath nvarchar(1000); Declare @databaseName nvarchar(100); set @databaseName = 'Database_name'; -- Do not change these values set @currentDate = GETDATE(); set @fileName = @databaseName + '_' + REPLACE(REPLACE(REPLACE((CONVERT(nvarchar(24), GETDATE(), 120)), ':', ''),' ', ''),'-', '') + '.bak' set @fullPath = @backupPath + @fileName; print 'adding device ' + @fileName EXEC sp_addumpdevice 'disk', @fileName, @fullPath; BACKUP database @databaseName to @fileName --WITH COMPRESSION print 'dropping device ' + @fileName EXEC sp_dropdevice @fileName ``` I would like the script to check for version/edition, then if the Version/Edition is Standard or higher, to run the WITH COMPRESSION command.
This may not be the full script, but I think you will get the point:

```
DECLARE @databaseName nvarchar(100)
DECLARE @fileName nvarchar(100)
DECLARE @serverEdition int;
DECLARE @useCompression bit;

SELECT @serverEdition = Cast(SERVERPROPERTY('EditionID') as int);
-- Reference: http://stackoverflow.com/questions/2070396/how-can-i-tell-what-edition-of-sql-server-runs-on-the-machine

SET @useCompression = 0;

IF @serverEdition IN (
    1804890536,  -- Enterprise
    610778273,   -- Enterprise Eval
    -1534726760  -- Standard
)
BEGIN
    SET @useCompression = 1; -- Supports compression
END

IF @useCompression = 1
BEGIN
    BACKUP database @databaseName to @fileName WITH COMPRESSION;
END
ELSE
BEGIN
    BACKUP database @databaseName to @fileName;
END
```
Without checking anything. If compression is supported compress, if not error is raised and then backup normally. ``` BEGIN TRY BACKUP DATABASE @databaseName TO DISK = @filename WITH COMPRESSION; END TRY BEGIN CATCH BACKUP DATABASE @databaseName TO DISK = @filename; END CATCH ```
SQL Script to compress database backups if version allows it
[ "", "sql", "sql-server", "database", "backup", "" ]
Using MySQL, is there an efficient way to filter records on one table by checking for the presence of multiple rows on an associated table? For example, I have a table of images and another table identifying objects in that image: ``` table: images id | url ----+-------------------------------- 1 | http://www.example.com/foo.png 2 | http://www.example.com/bar.png 3 | http://www.example.com/baz.png table: image_tags image_id | tag ----------+------------- 1 | cats 1 | living-room 2 | beach 2 | towel 2 | dogs 3 | cats 3 | dogs 3 | goldfish ``` If I wanted to query for all images that contain both `cats` and `dogs`, what would be the best way to approach this? What if I wanted images with `cats`, `dogs`, and `goldfish`? One approach I've tried is to query for all `image_ids` of images containing dogs, then of those containing cats, and take the intersection. I can do this in my application but I was wondering if there was a way to run the same operation using SQL handling an arbitrary number of tags.
I like to approach these types of queries ("set-within-set" queries) with `group by` and `having`: ``` select it.image_id from image_tags it where it.tag in ('cats', 'dogs') group by it.image_id having count(*) = 2; ``` Note the "2" needs to equal the number of tags you want matched in the `in` list.
Another alternative, although more verbose but perhaps more performant (?? - sorry don't have MySQL to test) than Gordon's answer is: ``` select i.id, i.url from images i where exists( select 1 from image_tags g where g.image_id=i.id and g.tag='cats' ) and exists( select 1 from image_tags g where g.image_id=i.id and g.tag='dogs' ) ```
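Either way, the core "set-within-set" logic is easy to verify with the question's data; a small sketch with Python's bundled `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE image_tags (image_id INTEGER, tag TEXT)")
con.executemany("INSERT INTO image_tags VALUES (?, ?)", [
    (1, "cats"), (1, "living-room"),
    (2, "beach"), (2, "towel"), (2, "dogs"),
    (3, "cats"), (3, "dogs"), (3, "goldfish")])

# HAVING COUNT(*) must equal the number of tags in the IN list
both = con.execute("""
    SELECT image_id FROM image_tags
    WHERE tag IN ('cats', 'dogs')
    GROUP BY image_id
    HAVING COUNT(*) = 2
    ORDER BY image_id
""").fetchall()
all_three = con.execute("""
    SELECT image_id FROM image_tags
    WHERE tag IN ('cats', 'dogs', 'goldfish')
    GROUP BY image_id
    HAVING COUNT(*) = 3
    ORDER BY image_id
""").fetchall()
print(both, all_three)
```

Only image 3 carries both `cats` and `dogs` (and also all three tags), so both queries return just `3`.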
MySQL filter on multiple rows on associated table
[ "", "mysql", "sql", "" ]
I'm working with an Oracle database, and what I basically need is to concatenate, into one column, the values from multiple columns for every row. Something like this:

```
col1  col2  col3  col4  col5
____________________________________
 1     A     B     C     D
 2     A     B     C
 3     C     A
 4     D     A     C


col1   col2
____________
 1     A,B,C,D
 2     A,B,C
 3     C,A
 4     D,A,C
```
Use below query ``` select col1,rtrim( col2||','||col3||','||col4||','||col5,' ,') as col2 from table_name ```
[SQL Fiddle](http://sqlfiddle.com/#!4/f4ea2/2) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE test (col1, col2, col3, col4, col5 ) AS SELECT 1, 'A', 'B', 'C', 'D' FROM DUAL UNION ALL SELECT 2, 'A', 'B', 'C', NULL FROM DUAL UNION ALL SELECT 3, 'C', 'A', NULL, NULL FROM DUAL UNION ALL SELECT 4, 'D', 'A', 'C', NULL FROM DUAL UNION ALL SELECT 5, NULL, NULL, NULL, NULL FROM DUAL UNION ALL SELECT 6, NULL, NULL, NULL, 'A' FROM DUAL UNION ALL SELECT 7, 'B', NULL, NULL, 'A' FROM DUAL UNION ALL SELECT 8, NULL, 'C', NULL, 'A' FROM DUAL; ``` **Query 1**: If there are no `NULL` values between other values (it will introduce multiple commas in rows 7 & 8): ``` SELECT col1, TRIM( ',' FROM col2||','||col3||','||col4||','||col5 ) AS col2 FROM test ``` **[Results](http://sqlfiddle.com/#!4/f4ea2/2/0)**: ``` | COL1 | COL2 | |------|---------| | 1 | A,B,C,D | | 2 | A,B,C | | 3 | C,A | | 4 | D,A,C | | 5 | (null) | | 6 | A | | 7 | B,,,A | | 8 | C,,A | ``` The last two queries will work for all examples: **Query 2**: ``` SELECT col1, TRIM( ',' FROM col2 || NVL2( col3, ','||col3, NULL ) || NVL2( col4, ','||col4, NULL ) || NVL2( col5, ','||col5, NULL ) ) AS col2 FROM test ``` **[Results](http://sqlfiddle.com/#!4/f4ea2/2/1)**: ``` | COL1 | COL2 | |------|---------| | 1 | A,B,C,D | | 2 | A,B,C | | 3 | C,A | | 4 | D,A,C | | 5 | (null) | | 6 | A | | 7 | B,A | | 8 | C,A | ``` **Query 3**: ``` SELECT col1, REGEXP_REPLACE( col2||','||col3||','||col4||','||col5, '(^|,),+|,+($)', '\1' ) AS col2 FROM test ``` **[Results](http://sqlfiddle.com/#!4/f4ea2/2/2)**: ``` | COL1 | COL2 | |------|---------| | 1 | A,B,C,D | | 2 | A,B,C | | 3 | C,A, | | 4 | D,A,C | | 5 | (null) | | 6 | A | | 7 | B,A | | 8 | C,A | ```
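One portability caveat when trying these outside Oracle: in standard SQL (and SQLite), `'A' || NULL` yields NULL, whereas Oracle treats NULL as an empty string in concatenation. The hypothetical sketch below, using Python's bundled `sqlite3`, therefore wraps each column in `COALESCE` before applying the same trim-the-commas idea:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# TRIM(x, ',') strips commas from both ends, like Oracle's TRIM(',' FROM x)
rows = con.execute("""
    SELECT id,
           TRIM(COALESCE(col2,'') || ',' || COALESCE(col3,'') || ',' ||
                COALESCE(col4,'') || ',' || COALESCE(col5,''), ',') AS col2
    FROM (SELECT 1 id, 'A' col2, 'B' col3, 'C' col4, 'D' col5
          UNION ALL SELECT 2, 'A', 'B', 'C', NULL
          UNION ALL SELECT 3, 'C', 'A', NULL, NULL)
    ORDER BY id
""").fetchall()
print(rows)
```

Like Query 1 above, this only strips leading and trailing commas; a NULL between two non-NULL columns would still leave a doubled comma.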
Concatenate values from multiple columns in Oracle
[ "", "sql", "oracle", "loops", "" ]
I'm trying to write a query that lists each model with its most used RAM configuration.

Table:

```
PC (code, model, speed, ram, hd, cd, price)
```

So far, I was able to list every model with every RAM config and the number of times that config has been used.

```
select model, ram, max(config) from
(select model,ram,count(ram) as config from pc
group by model, ram)
group by model, ram
```

Output:

```
MODEL   RAM  MAX(CONFIG)
------- ---- -----------
   1232   64           2
   1232   32           2
   1233  128           3
   1121  128           3
   1233   64           1
   1260   32           1
```

I run into problems when I try to list each model only with its most used RAM.

```
select model, ram from
(select model, ram, count(ram) as config from pc
group by model, ram)
group by model
having config = max(config);

Error : ORA-00979: not a GROUP BY expression
```
``` with x as (select model,ram,count(ram) as config from pc group by model,ram) , y as (select model, max(config) as mxconfig from x group by model) select x.model, x.ram --choose max(x.ram) or min(x.ram) in case of a tie and group by x.model from x join y on x.model = y.model and x.config = y.mxconfig ``` This solution uses `cte` to achieve what you need. If you need to get either `max` or `min` ram when there is a tie for config, you should have one more `group by` on model.
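The CTE logic can be checked against the question's counts; here is a hypothetical sketch using Python's bundled `sqlite3` (SQLite also supports `WITH`), with rows invented to reproduce the question's output table and `MAX(x.ram)` used to break the tie for model 1232:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pc (model INTEGER, ram INTEGER)")
con.executemany("INSERT INTO pc VALUES (?, ?)", [
    (1232, 64), (1232, 64), (1232, 32), (1232, 32),      # tie: 64 and 32 both used twice
    (1233, 128), (1233, 128), (1233, 128), (1233, 64),   # 128 wins with 3 uses
    (1121, 128), (1121, 128), (1121, 128), (1260, 32)])

rows = con.execute("""
    WITH x AS (SELECT model, ram, COUNT(ram) AS config FROM pc GROUP BY model, ram),
         y AS (SELECT model, MAX(config) AS mxconfig FROM x GROUP BY model)
    SELECT x.model, MAX(x.ram)            -- tie-break as suggested in the answer
    FROM x JOIN y ON x.model = y.model AND x.config = y.mxconfig
    GROUP BY x.model
    ORDER BY x.model
""").fetchall()
print(rows)
```

Model 1232's tie resolves to 64 via `MAX`, and every other model gets its single most used configuration.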
I think what you are looking for is: ``` SELECT model,ram FROM (SELECT model,ram,count(ram) AS config FROM pc GROUP BY model,ram) WHERE config=max(config) ``` The records should already be grouped by your sub query
sql issue with having clause
[ "", "sql", "oracle", "group-by", "having", "ora-00979", "" ]
I am trying to write a query to update several rows of my SQL table at once. Below is the code I have tried, and it doesn't appear to be proper SQL as it doesn't work. Is there a way to accomplish this is one query? ``` $query = "UPDATE table_names SET Name='Bob' WHERE ID=7 SET Name='Mike' WHERE ID=34" ```
One way to do this is with a `case` expression: ``` UPDATE table_name SET name = CASE id WHEN 7 THEN 'Bob' WHEN 34 THEN 'Mike' END WHERE id IN (7, 34) ```
This solution is more practical than the "case when" approach I found, and you can update as many columns as wanted:

```
UPDATE searches_group_library
SET compiled_query = new_val.compiledq
FROM (VALUES (28, 'fakecompilation'),
             (57, 'fakecompilation2')
     ) new_val (id, compiledq)
WHERE new_val.id = searches_group_library.id;
```
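Both forms are easy to try out locally; this sketch runs the `CASE`-expression variant with Python's bundled `sqlite3` (table contents invented, ids from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_name (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO table_name VALUES (?, ?)",
                [(7, "Ann"), (34, "Joe"), (50, "Pat")])

# The WHERE clause keeps rows outside the CASE from being set to NULL.
con.execute("""
    UPDATE table_name
    SET name = CASE id WHEN 7 THEN 'Bob' WHEN 34 THEN 'Mike' END
    WHERE id IN (7, 34)
""")
rows = con.execute("SELECT id, name FROM table_name ORDER BY id").fetchall()
print(rows)
```

Rows 7 and 34 are renamed in one statement while row 50 is left untouched.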
SQL update multiple rows on specific ID's
[ "", "sql", "sql-update", "" ]
Suppose we have the database: ``` ----------- | A -|- B | |----|----| | 1 | 1 | | 1 | 2 | | 2 | 1 | ----------- ``` Where `A` and `B` is the primary key. Suppose we want to get all `A`s that contain the elements in `B` of 1 and 2. ``` SELECT A FROM Table WHERE B = 1 AND B = 2; ``` The above fails because it never holds true as the query is only for a single record. ``` SELECT A FROM Table WHERE B = 1 OR B = 2; ``` Works but erroneously includes the primary key value 2, which only maps to 1 in `B`, and not both 1 and 2 in `B`.
`GROUP BY` solution, return all a's that have more than 1 different b value in (1,2): ``` select a from table where b in (1,2) group by a having count(distinct b) > 1 ``` Or, `JOIN` solution: ``` select distinct a from (select a from table where b = 1) t1 join (select a from table where b = 2) t2 on t1.a = t2.a ``` Or an `INTERSECT` solution: ``` select a from table where b = 1 intersect select a from table where b = 2 ``` **Edit:** `GROUP BY` query that *perhaps* is faster then the `HAVING count distinct` version: ``` select a from table where b in (1,2) group by a having max(b) <> min(b) ```
You can use the group by method from jarlh or make a Join with a 'distinct': ``` select distinct a from (select a from table where b = 1) t1 join (select a from table where b = 2) t2 on t1.a = t2.a ```
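With the question's three rows, both the `GROUP BY ... HAVING` form and the `INTERSECT` form return only A = 1; a quick check using Python's bundled `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, PRIMARY KEY (a, b))")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 1), (1, 2), (2, 1)])

via_having = con.execute("""
    SELECT a FROM t WHERE b IN (1, 2)
    GROUP BY a HAVING COUNT(DISTINCT b) > 1
    ORDER BY a
""").fetchall()
via_intersect = con.execute("""
    SELECT a FROM t WHERE b = 1
    INTERSECT
    SELECT a FROM t WHERE b = 2
""").fetchall()
print(via_having, via_intersect)
```

A = 2 is correctly excluded because it only maps to B = 1.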
How to get an ID associated with at least all contents?
[ "", "sql", "" ]
So I have the following query, which does what I expect it to: it gets the latest published article from each author. However, now I want to get the latest two articles from each author. How can I do this?

```
SELECT author_article.ID FROM
(
    SELECT sorted_articles.ID, sorted_articles.AuthorID, sorted_articles.PublishedDate
    FROM ArticlePage sorted_articles
    ORDER BY PublishedDate DESC
) author_article
GROUP BY author_article.AuthorID
ORDER BY author_article.PublishedDate DESC;
```

[![enter image description here](https://i.stack.imgur.com/WWNDT.png)](https://i.stack.imgur.com/WWNDT.png)

So what I need is the latest 2 articles for each author.
If you want the authors and the article ids, then this will put them in one row:

```
SELECT ap.AuthorId,
       SUBSTRING_INDEX(GROUP_CONCAT(ap.ID ORDER BY ap.PublishedDate DESC),
                       ',', 2) as Top2Articles
FROM ArticlePage ap
GROUP BY ap.AuthorId;
```

Note: the default length for the group concat intermediate value is limited, but can be changed if some authors have lots and lots and lots of articles.

Also, your original query is using a (mis)feature of MySQL that is explicitly documented not to work as you intend. You have columns in the `SELECT` that are not in the `GROUP BY`. These values come from *indeterminate* rows, so the `ORDER BY` in the subquery may not affect the results the way you intend.
Use a correlated sub-query to count all more recent articles by the same author. If there are 1 or less more recent articles, return the row. ``` SELECT * FROM ArticlePage t1 WHERE (select count(*) from ArticlePage t2 where t2.AuthorID = t1.AuthorID and t2.PublishedDate > t1.PublishedDate) <= 1 ```
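The correlated-subquery approach — keep a row if at most one newer row exists for the same author — can be verified on small invented data with Python's bundled `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ArticlePage (ID INTEGER PRIMARY KEY, AuthorID INTEGER, PublishedDate TEXT)")
con.executemany("INSERT INTO ArticlePage VALUES (?, ?, ?)", [
    (1, 10, "2015-01-01"), (2, 10, "2015-02-01"), (3, 10, "2015-03-01"),
    (4, 20, "2015-01-15"), (5, 20, "2015-02-15")])

rows = con.execute("""
    SELECT t1.ID, t1.AuthorID
    FROM ArticlePage t1
    WHERE (SELECT COUNT(*) FROM ArticlePage t2
           WHERE t2.AuthorID = t1.AuthorID
             AND t2.PublishedDate > t1.PublishedDate) <= 1
    ORDER BY t1.AuthorID, t1.PublishedDate DESC
""").fetchall()
print(rows)
```

Author 10 has three articles, so only the two newest (IDs 3 and 2) survive; author 20 has only two, so both are kept.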
GROUP BY and get the top two row
[ "", "mysql", "sql", "subquery", "" ]
I have 2 tables: Name, and Birthday. I need to get Names of people who are over 18, but I cannot manually enter a date. ``` UserID = int Name = varchar(50) Birthday = Date UserID Name UserID Birthday ----------- ----------------- 1 ABC 1 1997-05-15 2 DEF 2 1997-09-21 3 GHI 3 2011-02-01 ``` I currently have this: ``` select u.UserID from tbl_Name as u join tbl_Birthday as b on u.UserId=b.UserID where Birthday < '1997-08-22'; ``` I tried changing the last line to the following and it still didn't work: ``` where datediff(year, birthday, convert(date, getdate())) > 18; where Birthday - convert(date,getdate()) > 18; ``` Edit: I mixed up startdate and enddate in DATEDIFF, but this gives me problems with people who are 18 (born in 1997). Edit 2: Made question more clear by specifying that a date cannot be manually entered. Edit 3: Changed birthday dates.
Do not do date math on the `Birthday` column. That requires calculating the expression for every row in the table! Instead, do the math on the other side of the equality or inequality.

If your birth dates are stored with no time portion:

```
WHERE Birthday <= DateAdd(year, -18, GetDate())
```

If they are stored with a time portion that can be set to other than midnight, by convention a person is 18 on the date of their birthday, so:

```
WHERE Birthday < DateAdd(year, -18, Convert(date, GetDate() + 1))
```

Note: subtracting `datetime` values in SQL Server yields the number of days apart, explaining why one of your expressions was not working correctly (you treated it like years).
``` where DATEADD(YY,18,Birthday) >= getdate() ```
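The "shift the arithmetic onto today's date" idea is easy to confirm. In this hypothetical `sqlite3` sketch, `date(..., '-18 years')` plays the role of `DATEADD(year, -18, ...)`, and a fixed reference date of 2015-08-22 (from the question) keeps the result reproducible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_Birthday (UserID INTEGER, Birthday TEXT)")
con.executemany("INSERT INTO tbl_Birthday VALUES (?, ?)",
                [(1, "1997-05-15"), (2, "1997-09-21"), (3, "2011-02-01")])

# Anyone born on or before (reference date - 18 years) is at least 18.
adults = con.execute("""
    SELECT UserID FROM tbl_Birthday
    WHERE Birthday <= date('2015-08-22', '-18 years')
    ORDER BY UserID
""").fetchall()
print(adults)
```

`date('2015-08-22', '-18 years')` evaluates to 1997-08-22, so only user 1 qualifies — matching the question's expected result.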
T-SQL Date difference greater than an integer
[ "", "sql", "sql-server", "t-sql", "" ]
I have found a few posts (ex. [SQL Query to find the last day of the month](https://stackoverflow.com/questions/16646585/sql-query-to-find-the-last-day-of-the-month) , [Get the last day of the month in SQL](https://stackoverflow.com/questions/1051488/get-the-last-day-of-the-month-in-sql)) to **get** the last day of the month, but is there a way to determine if a date **is** the last day of the month? For example, if I had a list of these dates- ``` 8/29/2015 --fail 8/30/2015 --fail 8/31/2015 --pass 9/1/2015 --fail ``` How could I write a query to determine which dates pass / fail? I could not simply test if DAY() = 31, because the months do not all have the same number of days in them.
If you are using SQL Server 2012 or above, use `EOMONTH`: ``` SELECT my_date, IsEndOfMonth = IIF(my_date = EOMONTH(my_date), 1, 0) FROM my_table ```
There are two simple ways that come to mind. First, would be: ``` select testDate , case when month(testDate) <> month(dateadd(d,1,testDate)) then 'pass' else 'fail' end endOfMonth from tblTest ``` This tests if 1 day after the testDate falls on a month other than the month testDate is currently on. If so, testDate must be the last day of its current month. The second would be: ``` select testDate , case when day(dateadd(d,1,testDate)) = 1 then 'pass' else 'fail' end endOfMonth from tblTest ``` This is a similar test, but in a different way. If 1 day after testDate is the first of a month, then testDate must be the **last** day of the month.
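Both answers rest on the same fact: a date is the last day of its month exactly when the following day falls on the 1st. A hypothetical sketch with Python's bundled `sqlite3`, using the question's dates (`date(d, '+1 day')` and `strftime('%d', ...)` stand in for the SQL Server equivalents):

```python
import sqlite3

con = sqlite3.connect(":memory:")
rows = con.execute("""
    SELECT d,
           CASE WHEN strftime('%d', date(d, '+1 day')) = '01'
                THEN 'pass' ELSE 'fail' END AS endOfMonth
    FROM (SELECT '2015-08-29' AS d UNION ALL SELECT '2015-08-30'
          UNION ALL SELECT '2015-08-31' UNION ALL SELECT '2015-09-01')
    ORDER BY d
""").fetchall()
print(rows)
```

Only 2015-08-31 passes, matching the pass/fail list in the question.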
SQL - How to tell if a date is the last day of the month?
[ "", "sql", "sql-server", "t-sql", "datetime", "" ]
I have two tables : ``` video (ID, TITLE, ..., UPLOADED_DATE) join_video_category (ID (not used), ID_VIDEO_ ID_CATEGORY) ``` rows in video : 4 500 000 | rows in join\_video\_category : 5 800 000 1 video can have many category. I have a query works perfectly, 20 ms max to get result : ``` SELECT * FROM video WHERE ID IN (SELECT ID_VIDEO FROM join_video_category WHERE ID_CATEGORY=11) LIMIT 1000; ``` This query take 1000 video, the order is not important. BUT, when i would like to get 10 latest video from a category, my query take arround 30-40 seconds : ``` SELECT * FROM video WHERE ID IN (SELECT ID_VIDEO FROM join_video_category WHERE ID_CATEGORY=11) ORDER BY UPLOADED_DATE DESC LIMIT 10; ``` I have index on ID\_CATEGORY, ID\_VIDEO, UPLOADED\_DATE, PRIMARY ON ID video and join\_video\_category. I have tested it with JOIN on my query, it's the same result.
First, the comparisons are to two very different queries. The first returns a bunch of videos whenever it encounters them. The second has to read *all* the videos and then sort them. Try rewriting this as a `JOIN`: ``` SELECT v.* FROM video v JOIN join_video_category vc ON v.id = vc.id_video WHERE vc.ID_CATEGORY = 11 ORDER BY v.UPLOADED_DATE DESC LIMIT 10; ``` That may or may not help. You have a lot of data and so you might have a lot of videos for a given category. If so, a `where` clause that gets more recent data might really help: ``` SELECT v.* FROM video v JOIN join_video_category vc ON v.id = vc.id_video WHERE vc.ID_CATEGORY = 11 AND v.UPLOADED_DATE >= '2015-01-01' ORDER BY v.UPLOADED_DATE DESC LIMIT 10; ``` Finally, if that doesn't work, consider adding something like `UPLOADED_DATE` into `join_video_category`. Then, this query should blaze: ``` select vc.video_id from join_video_category vc where vc.ID_CATEGORY = 11 order by vc.UPLOADED_DATE desc limit 10; ``` with an index on `join_video_category(id_category, uploaded_date, video_id)`.
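To see that the `IN` subquery and the `JOIN` rewrite return the same rows, here is a SQLite sketch with hypothetical miniature versions of the two tables (names shortened; SQLite stands in for MySQL, so this checks the logic, not the performance):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE video (id INTEGER PRIMARY KEY, title TEXT, uploaded_date TEXT);
CREATE TABLE join_video_category (id_video INTEGER, id_category INTEGER);
INSERT INTO video VALUES (1,'a','2015-01-01'),(2,'b','2015-02-01'),(3,'c','2015-03-01');
INSERT INTO join_video_category VALUES (1,11),(2,11),(3,7);
""")

in_query = """
SELECT v.id FROM video v
WHERE v.id IN (SELECT id_video FROM join_video_category WHERE id_category = 11)
ORDER BY v.uploaded_date DESC LIMIT 10
"""
join_query = """
SELECT v.id FROM video v
JOIN join_video_category vc ON v.id = vc.id_video
WHERE vc.id_category = 11
ORDER BY v.uploaded_date DESC LIMIT 10
"""
# Both forms yield the same rows, newest first.
assert con.execute(in_query).fetchall() == con.execute(join_query).fetchall() == [(2,), (1,)]
```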
If it is 1:Many, don't use an extra table between Video and Category. However, your row counts imply that it is Many:Many. If it is 1:Many, simply have the category\_id in the Video table, then simplify all the queries. If it is Many:Many, then be sure to use *this* pattern for the junction table: ``` CREATE TABLE map_video_category ( video_id ..., category_id ..., PRIMARY KEY(video_id, category_id), -- both ids, one direction INDEX (category_id, video_id) -- both ids, the other direction ) ENGINE=InnoDB; -- significantly better than MyISAM on INDEX handling here ``` The ID that you mentioned is a waste. The composite keys are optimal for all situations, and will improve performance in most situations. Do not use `IN ( SELECT ... )`; the optimizer does a poor job of optimizing it. Change to a `JOIN`, `LEFT JOIN`, `EXISTS`, or some other construct.
order by makes query slow
[ "", "mysql", "sql", "performance", "database-indexes", "" ]
I have a simple table `data` to record a stream of data with columns `recorded_at` DateTime and `value` Integer. The data is not distributed at regular intervals, there may be multiple rows per minute, or none for hours. If I want to select a single value this query works great for my purposes, selecting the closest result greater than where I'm searching: `select * from data where recorded_at > '2015-01-01 01:01:01' limit 1` However, I can't find an efficient way to select the single closest row to multiple search dates aside from using a union of the above query repeated, which creates an enormous query if I want to select several hundred points. For example, if I wanted the single row closest to '2015-01-01 01:01:01' and also '2015-02-02 02:02:02'. Is there a better way to accomplish this? Basically the pseudo-query I'm after is: `select * from data where recorded_at NEAR (date1, date2, date3)` where NEAR works like IN but fuzzier, because I don't know the exact dates beforehand.
To get multiple points: ``` select d.* from data d where recorded_at > '2015-01-01 01:01:01' order by recorded_at limit 100 ``` To get them efficiently, create an index on `data(recorded_at)`: ``` create index idx_data_recorded_at on data(recorded_at); ``` Also, a `union` would probably *not* help your cause. Although `limit` without an `order by` is not guaranteed to return any particular row, it would generally return the same row, over and over. EDIT (based on question edit): For multiple dates, this gets a bit harder. You are probably best off using `union all`: ``` (select d.* from data d where recorded_at > '2015-01-01 01:01:01' order by recorded_at limit 1 ) union all (select d.* from data d where recorded_at > '2015-02-01 01:01:01' order by recorded_at limit 1 ) ``` Although complicated, each subquery should be blazingly fast with the right index so the overall query should be quite fast.
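The union-all approach amounts to running the single-date `LIMIT 1` query once per target. A SQLite sketch of that per-target lookup (hypothetical data; ISO-8601 strings compare correctly as text, so `>` plus `ORDER BY ... LIMIT 1` finds the closest later row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (recorded_at TEXT, value INTEGER)")
con.executemany("INSERT INTO data VALUES (?, ?)", [
    ("2015-01-01 00:59:00", 10),
    ("2015-01-01 01:30:00", 20),
    ("2015-02-02 03:00:00", 30),
])

targets = ["2015-01-01 01:01:01", "2015-02-02 02:02:02"]
q = ("SELECT recorded_at, value FROM data "
     "WHERE recorded_at > ? ORDER BY recorded_at LIMIT 1")
closest = [con.execute(q, (t,)).fetchone() for t in targets]
assert closest == [("2015-01-01 01:30:00", 20), ("2015-02-02 03:00:00", 30)]
```

An index on `recorded_at` makes each lookup a single range seek.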
You need ORDER BY: ``` select * from data where recorded_at > '2015-01-01 01:01:01' and recorded_at < '2015-01-02 01:01:01' order by recorded_at limit 1 ``` This is edited after the OP edited his question. If you want to have multiple dates and want to get the closest row for each of them, I would suggest using a lookup table that has all your searched dates and trying the following. keycol is the unique column in table data: ``` select t1.* from data as t1 inner join ( select t.keycol, min(t.recorded_at) as recorded_at from data as t inner join lookup_table as dates on t.recorded_at > dates.recorded_at group by t.keycol ) as t2 on t1.keycol = t2.keycol and t1.recorded_at = t2.recorded_at ```
SQL - how can I efficiently select multiple closest time series data points
[ "", "mysql", "sql", "" ]
I need to check whether a column is numeric or not in SQL Server 2012. This is my CASE code: ``` CASE WHEN ISNUMERIC(CUST_TELE) = 1 THEN CUST_TELE ELSE NULL END AS CUSTOMER_CONTACT_NO ``` But when the `'78603D99'` value is reached, it returns `1`, which means SQL Server considered this string numeric. Why is that? How can I avoid this kind of issue?
Unfortunately, the `ISNUMERIC()` function in SQL Server has many quirks. It's not exactly buggy, but it rarely does what people expect it to when they first use it. However, since you're using SQL Server 2012 you can use the `TRY_PARSE()` function which will do what you want. This returns NULL: `SELECT TRY_PARSE('7860D399' AS int)` This returns 7860399 `SELECT TRY_PARSE('7860399' AS int)` <https://learn.microsoft.com/en-us/sql/t-sql/functions/try-parse-transact-sql?view=sql-server-ver16> Obviously, this works for datatypes other than `INT` as well. You say you want to check that a value is numeric, but I think you mean `INT`.
Although `try_convert()` or `try_parse()` works for a built-in type, it might not do exactly what you want. For instance, it might allow decimal points, negative signs, and limit the length of digits. Also, `isnumeric()` is going to recognize negative numbers, decimals, and exponential notation. If you want to test a string only for digits, then you can use `not like` logic: ``` (CASE WHEN CUST_TELE NOT LIKE '%[^0-9]%' THEN CUST_TELE END) AS CUSTOMER_CONTACT_NO ``` This simply says that `CUST_TELE` contains no characters that are not digits.
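The digits-only rule is easy to mirror outside T-SQL. A Python sketch of the same test (any character outside 0-9 disqualifies the string; note that, like the `NOT LIKE` test, an empty string would pass):

```python
def digits_only(s: str) -> bool:
    # Equivalent of NOT LIKE '%[^0-9]%': no character outside 0-9.
    # An empty string passes, just as it does in the T-SQL version.
    return all("0" <= ch <= "9" for ch in s)

assert not digits_only("78603D99")   # the value ISNUMERIC() accepted
assert digits_only("78603991")
assert not digits_only("-42")        # signs are rejected too
assert not digits_only("4.2")        # so are decimal points
```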
SQL Server's ISNUMERIC function
[ "", "sql", "sql-server-2012", "" ]
Every 5 minutes my historical table is updated (by another process) with a new record (TAG_NAME remains the same but TIME and VALUE are updated). I am using the following query to return the LATEST record while also grouping by TAG_NAME: ``` SELECT TAG_NAME, max(TIME) as max_time, (select value from historical where TAG_NAME=hist.TAG_NAME order by time desc limit 1) as max_value FROM historical hist WHERE TIME IS NOT NULL GROUP BY TAG_NAME; ``` This query is taking as long as 1 minute on a table with 100k rows. Can anyone help me optimize this query? Many thanks!
You could get the combined maximum, so if two times are the same you get the biggest value. ``` select TAG_NAME, max(concat(time,value)) as time_value from historical group by TAG_NAME ``` If necessary you can split time and value in MySQL, but you can also split them in the app.
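The concat trick relies on the time values being fixed-width so that string order matches chronological order. A SQLite sketch of the same idea (`||` in place of `CONCAT`; the explicit separator is an addition not in the original answer, made only to keep the split unambiguous):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE historical (tag_name TEXT, time TEXT, value TEXT)")
con.executemany("INSERT INTO historical VALUES (?,?,?)", [
    ("temp", "2015-06-01 10:00:00", "21"),
    ("temp", "2015-06-01 10:05:00", "22"),
    ("rpm",  "2015-06-01 10:05:00", "900"),
])

# MAX over 'time|value' picks the latest row per tag because the
# fixed-width timestamp dominates the string comparison.
rows = con.execute("""
    SELECT tag_name, MAX(time || '|' || value)
    FROM historical
    GROUP BY tag_name
""").fetchall()
latest = {tag: tv.split("|", 1) for tag, tv in rows}
assert latest == {"temp": ["2015-06-01 10:05:00", "22"],
                  "rpm":  ["2015-06-01 10:05:00", "900"]}
```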
Try this: ``` SELECT TAG_NAME, max(`TIME`) as max_time, max(`value`) as max_value FROM historical WHERE `TIME` IS NOT NULL GROUP BY TAG_NAME ; ```
Mysql query with sub-selects taking too long..
[ "", "mysql", "sql", "" ]
I use 3 tables to insert data into a 4th table. My tables: ``` Typology id_typo------PK name_typology---- Country id_country---PK name_country--- Brut id_brut------PK name_typology-- name_country--- Structure id_struct---PK id_typo-----fk id_country---fk ``` I want to transfer the data from the table Brut to the table Structure. The problem is that only one line is inserted. Why? My query: ``` INSERT INTO structure (id_struct,id_typo,id_country) SELECT x.id_struct, y.id_typo, z.id_country FROM brut AS x, typology AS y, country AS z WHERE x.name_typology = y.name_typology AND x.name_country = z.name_country ```
Use a left join if you don't have matching rows in tables y and z, so that you at least get all rows from table x: ``` INSERT INTO structure (id_struct,id_typo,id_country) SELECT x.id_struct, y.id_typo, z.id_country FROM brut AS x left join typology AS y on trim(x.name_typology) = trim(y.name_typology) left join country AS z on trim(x.name_country) = trim(z.name_country) ```
The issue may be stray spaces in the name values. Use this query, which trims the spaces before comparing: ``` INSERT INTO structure (id_struct,id_typo,id_country) SELECT x.id_struct, y.id_typo, z.id_country FROM brut AS x, typology AS y, country AS z WHERE trim(x.name_typology) = trim(y.name_typology) AND trim(x.name_country) = trim(z.name_country) ```
Why one line only is inserted?
[ "", "mysql", "sql", "" ]
I have a city column in my table with city names starting with @, &, *, ', $... (example: %york, $pradesh, @lorida, but also values like delhi, New york...). My requirement is to get only the city names (like delhi, New york) where the field starts with an alphabetic character A-Z. Please help me out.
You can do this with the REGEXP (Regular Expression): ``` SELECT * FROM cities where name REGEXP '^[A-Za-z]'; ``` It now gets you the lines only Starting with A-Z and a-z: Before: [![enter image description here](https://i.stack.imgur.com/NpqJL.png)](https://i.stack.imgur.com/NpqJL.png) After: [![enter image description here](https://i.stack.imgur.com/2ZNWD.png)](https://i.stack.imgur.com/2ZNWD.png)
Depending on your DBMS system you can use regular expressions in WHERE clause. For example, PostgreSQL provides them like this. ``` SELECT * FROM cities WHERE name ~* '^[A-Z]'; ``` MySQL/MariaDB provides them as well (see [here](https://dev.mysql.com/doc/refman/5.1/en/regexp.html)).
How to Select rows from a field starts only with A-Z in sql
[ "", "mysql", "sql", "" ]
I was wondering if it is possible to set a declared variable to a return value from a select result? Something like: ``` @WatchedSeconds SET @WatchedSeconds = 200 DECLARE @SelectedVideo int SET @SelectedVideo = (SELECT TOP 1 * FROM Video v WHERE v.VideoID = 12) IF @SelectedVideo IS NOT NULL BEGIN IF @SelectedVideo.VideoLength = @WatchedSeconds BEGIN --DO SOMETHING END IF @SelectedVideo.SomeOtherColumn = @SomethingElse BEGIN END END ``` It's for using some information from the `SELECT` result multiple places in a `Stored Procedure`. I know that I can set a variable to e.g, a integer, and set it to the selected result, if it returns a integer, e.g: ``` DECLARE @VideoSeconds int SET @VideoSeconds = (SELECT v.Length FROM Video v WHERE v.VideoID = @VideoID) ``` This way I have to make multiple variables, and multiple `SELECT` calls if I need to use more values from the `Video` result. And that's what I want to avoid.
You can try something like (declare variables first...) ``` SELECT TOP 1 @var1=col1, @var2=col2, @var3=col3, [...] FROM YourTable WHERE YourFilter ``` EDIT: Altogether this seems not to be the best approach... With SQL you should not think in values and single rows but rather in result sets (set-based programming). Your approach leads to many tiny selects, while loops, cursors and all the stuff one should avoid.
You can do this simply by running: ``` SELECT @videoSeconds = v.Length FROM Video v WHERE v.VideoID = @VideoID ``` so as to not add the `SET` part. Also, you must make sure that only 1 row is being returned by the query, otherwise it will generate an error.
SQL Set variable to select result
[ "", "sql", "sql-server", "stored-procedures", "" ]
Having trouble getting only values that appear once. I currently have some SQL code that gets all the entries that have 0 percent. The problem is that two rows can contain the same person with different percentages. If one of these is above 0 then I don't want it to come out in the query. ``` abridged table: Name - Percent steve 0 dan 0 mike 100 harold 50 steve 80 carl 0 carl 0 Result: dan - 0 Carl - 0 ``` Here is how far I've gotten, but I haven't managed to make any variation of COUNT(), HAVING, or GROUP BY work. ``` select person, Value2, Value3, Value4, percent from Table1 INNER JOIN Table1 ON Table2.valueNum = Table1.valueNum INNER JOIN Table1 ON Table3.valueNum = Table1.valueNum INNER JOIN Table1 ON Table4.valueNum = Table1.valueNum WHERE (@date BETWEEN table1.FROMDATE AND table1.todate) AND table1.percent = 0 AND table1.varchar IN ('T', 'X') ```
Your example SQL and abridged table don't match. However, this looks like the basic idea you are after: ``` select * from dbo.table a where a.percent = 0 and not exists ( select 'x' from dbo.table b where a.Name = b.Name and b.percent > 0 ); ```
This is one method ``` select name,0 as percent from abridged group by name having min(percent)=0 and max(percent)=0 ```
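Both answers can be checked against the abridged table. A SQLite sketch running the `NOT EXISTS` form and the `HAVING` form side by side (table and column names are assumptions based on the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pct (name TEXT, percent INTEGER)")
con.executemany("INSERT INTO pct VALUES (?,?)", [
    ("steve", 0), ("dan", 0), ("mike", 100), ("harold", 50),
    ("steve", 80), ("carl", 0), ("carl", 0),
])

# Anti-join: a zero row with no positive row for the same name.
not_exists = con.execute("""
    SELECT DISTINCT a.name FROM pct a
    WHERE a.percent = 0
      AND NOT EXISTS (SELECT 1 FROM pct b
                      WHERE b.name = a.name AND b.percent > 0)
    ORDER BY a.name
""").fetchall()

# Aggregate form: the group's min and max are both zero.
having = con.execute("""
    SELECT name FROM pct GROUP BY name
    HAVING MIN(percent) = 0 AND MAX(percent) = 0
    ORDER BY name
""").fetchall()

# steve is excluded (he has an 80% row); dan and carl remain.
assert not_exists == having == [("carl",), ("dan",)]
```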
SQL getting values that apear once, not distinct
[ "", "sql", "sql-server", "" ]
I have a table in SQL Server with a huge range of numbers. I want to be able to set a color for a specific range of values. If it falls between 0 to 20% it should be red then 21 to 40% it should be blue and on so on. I have tried some code but it is returning some NULL values and i don't understand why. How do I fix this? ``` WITH cte as ( SELECT [% APL Population], ([% APL Population] / NULLIF(Max([% APL Population]) OVER(), 0) ) * 100 AS [% APL PopulationPercent] FROM dbo.Indicators) SELECT [% APL Population], [% APL Populationcolor] = CASE WHEN [% APL PopulationPercent] BETWEEN 0 AND 20 THEN 'red' WHEN [% APL PopulationPercent] BETWEEN 21 AND 40 THEN 'blue' WHEN [% APL PopulationPercent] BETWEEN 41 AND 60 THEN 'green' WHEN [% APL PopulationPercent] BETWEEN 61 AND 100 THEN 'purple' END FROM cte ``` [![enter image description here](https://i.stack.imgur.com/anB1p.png)](https://i.stack.imgur.com/anB1p.png) Thank you in advance.
Using BETWEEN you create gaps between values. You need to use comparison operators instead. Try this: ``` WITH cte([% APL Population],[% APL PopulationPercent]) AS ( SELECT [% APL Population] ,[% APL PopulationPercent] = ( [% APL Population] / NULLIF(Max([% APL Population]) OVER(), 0) ) * 100 FROM dbo.Indicators ) SELECT [% APL Population], [% APL Populationcolor] = CASE WHEN [% APL PopulationPercent] >= 0 AND [% APL PopulationPercent] < 20 THEN 'red' WHEN [% APL PopulationPercent] >= 20 AND [% APL PopulationPercent] < 40 THEN 'blue' WHEN [% APL PopulationPercent] >= 40 AND [% APL PopulationPercent] < 60 THEN 'green' WHEN [% APL PopulationPercent] >= 60 AND [% APL PopulationPercent] <= 100 THEN 'purple' ELSE 'unknown' END FROM cte ```
You can't use `between` in this scenario. ``` WHEN [% APL PopulationPercent] < 20 THEN 'red' WHEN [% APL PopulationPercent] >= 20 AND [% APL PopulationPercent] < 40 THEN 'blue' ... ``` The issue is that 20.5 is greater than 20 and less than 21, so it doesn't match any `CASE` condition.
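The boundary gap is easy to demonstrate in plain Python: half-open intervals cover every value, while the original `BETWEEN` ranges let 20.5 fall through to NULL:

```python
def band(p):
    # Half-open intervals: no value between 0 and 100 is unclassified.
    if 0 <= p < 20:  return "red"
    if 20 <= p < 40: return "blue"
    if 40 <= p < 60: return "green"
    if 60 <= p <= 100: return "purple"
    return "unknown"

def band_with_between(p):
    # The original BETWEEN logic, which leaves gaps (20, 21), (40, 41), ...
    if 0 <= p <= 20:  return "red"
    if 21 <= p <= 40: return "blue"
    if 41 <= p <= 60: return "green"
    if 61 <= p <= 100: return "purple"
    return None  # a CASE with no ELSE yields NULL

assert band(20.5) == "blue"
assert band_with_between(20.5) is None   # falls through every branch
assert band(20) == "blue" and band(40) == "green"
```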
How to set a color for a specific range of values in SQL Server?
[ "", "sql", "sql-server", "" ]
I have a very simple table where I keep player bans. There are only two columns (not actually, but for simplicity's sake) - a player's unique ID (uid) and the ban expire date (expiredate) I don't want to keep expired bans in the table, so all bans where expiredate < currentdate need to be deleted. To see if a player is banned, I query this table with his uid and the current date to see if there are any entries. If there are - we determine that the player is banned. So I need to run two queries. One that would fetch me the bans, and another that would clean up the table of redundant bans. I was wondering if it would be possible to combine these queries into one. Select and return the entry if it is still relevant, and remove the entry and return nothing if it is not. Are there any nice ways to do this in a single select query? Edit: To clarify, I actually have some other information in the table, such as the ban reason, the ban date, etc. so I need to return the row as well as delete irrelevant entries.
Unfortunately you cannot have a delete statement inside a select statement... Here is a [link to the SQLite docs](https://www.sqlite.org/lang_select.html) for your reference. Select statements are only used for retrieving data. Although a `select` could be used in a sub-query for a `delete` statement, it would still be used for retrieving data. You must execute the two statements separately in your case.
``` DELETE FROM TableName WHERE expiredate < CURDATE(); ```
DELETE rows in a SELECT statement based on condition
[ "", "sql", "sqlite", "select", "sql-delete", "" ]
I have a table like this: | id | conn\_id | read\_date | | --- | --- | --- | | 1 | 1 | 2010-02-21 | | 2 | 1 | 2011-02-21 | | 3 | 2 | 2011-02-21 | | 4 | 2 | 2013-02-21 | | 5 | 2 | 2014-02-21 | I want the second highest read\_date for particular 'conn\_id's i.e. I want a group by on conn\_id. Please help me figure this out.
Here's a solution for a particular `conn_id` : ``` select max (read_date) from my_table where conn_id=1 and read_date<( select max (read_date) from my_table where conn_id=1 ) ``` If you want to get it for all `conn_id` using `group by`, do this: ``` select t.conn_id, (select max(i.read_date) from my_table i where i.conn_id=t.conn_id and i.read_date<max(t.read_date)) from my_table t group by conn_id; ```
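The per-group idea can be sanity-checked in SQLite. This sketch uses a slightly different but equivalent shape, a plain correlated subquery for each group's maximum, rather than the answer's outer-aggregate reference, since the plain form is portable:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (id INTEGER, conn_id INTEGER, read_date TEXT)")
con.executemany("INSERT INTO readings VALUES (?,?,?)", [
    (1, 1, "2010-02-21"), (2, 1, "2011-02-21"),
    (3, 2, "2011-02-21"), (4, 2, "2013-02-21"), (5, 2, "2014-02-21"),
])

# Second-highest date per conn_id: the max among dates strictly
# below that conn_id's overall max.
rows = con.execute("""
    SELECT conn_id, MAX(read_date) FROM readings r
    WHERE read_date < (SELECT MAX(read_date) FROM readings r2
                       WHERE r2.conn_id = r.conn_id)
    GROUP BY conn_id
    ORDER BY conn_id
""").fetchall()
assert rows == [(1, "2010-02-21"), (2, "2013-02-21")]
```

Note this collapses ties: if a group's top date appears twice, the tied date is skipped entirely, whereas `ROW_NUMBER()` would return it.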
The following answer should work in MS SQL Server: ``` select id, conn_id, read_date from ( select *, ROW_NUMBER() over(partition by conn_id order by read_date desc) as RN from my_table ) as ranked where RN = 2 ``` (Note the derived table needs an alias in SQL Server.) There is an interesting article on the use of ranking functions in MySQL here: [ROW\_NUMBER() in MySQL](https://stackoverflow.com/questions/1895110/row-number-in-mysql)
Get second highest values from a table
[ "", "mysql", "sql", "" ]
I recently took a new job and I am trying to simplify some older queries left around, and for the life of me I cannot figure out how to get this down into two queries using a union. There's got to be a way, but I can't simplify it more than what I have. ``` select ( select avg (Employee.salary) from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='0' group by Employee.Exempt_Non_Exempt_Status ) as AverageSalary, 'Non-Exempt' as Status , ( select MIN (Employee.salary) from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='0' group by Employee.Exempt_Non_Exempt_Status ) as MinimumSalary, 'Non-Exempt' as Status , ( select MAX (Employee.salary) from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='0' group by Employee.Exempt_Non_Exempt_Status ) as MaximumSalary, 'Non_Exempt' as Status UNION select ( select avg (Employee.salary) from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='1' group by Employee.Exempt_Non_Exempt_Status ) as AverageSalary, 'Exempt' as Status , ( select MIN (Employee.salary) from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='1' group by Employee.Exempt_Non_Exempt_Status ) as MinimumSalary, 'Exempt' as Status , ( select MAX (Employee.salary) from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='1' group by Employee.Exempt_Non_Exempt_Status ) as MaximumSalary, 'Exempt' as Status ```
Since you are only selecting 1 field in each of the subqueries, you don't need a "Group By". Also, depending on the data, you might not need to link in the Job_title table. Note that SQL Server requires an alias on each derived table. If you need the Job_title table then... ``` SELECT Min(salary), Max(salary), Avg(salary), 'Non-Exempt' AS Status FROM (SELECT employee.salary FROM employee LEFT JOIN job_title ON employee.job_title = job_title.job_title WHERE employee.exempt_non_exempt_status = '0') AS s UNION SELECT Min(salary), Max(salary), Avg(salary), 'Exempt' AS Status FROM (SELECT employee.salary FROM employee LEFT JOIN job_title ON employee.job_title = job_title.job_title WHERE employee.exempt_non_exempt_status = '1') AS s ``` If you don't need to link in the Job_title table (since you're not using it anywhere) then... ``` SELECT Min(salary), Max(salary), Avg(salary), 'Non-Exempt' AS Status FROM (SELECT employee.salary FROM employee WHERE employee.exempt_non_exempt_status = '0') AS s UNION SELECT Min(salary), Max(salary), Avg(salary), 'Exempt' AS Status FROM (SELECT employee.salary FROM employee WHERE employee.exempt_non_exempt_status = '1') AS s ```
You should be able to do something like this: ``` select min(salary), max(salary), avg(salary), 'Non-Exempt' as otherThingy from ( select Employee.salary from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='0' ) thingy UNION select min(salary), max(salary), avg(salary), 'Exempt' as otherThingy from ( select Employee.salary from Employee left join Job_title on Employee.job_title = Job_title.Job_title where Employee.Exempt_Non_Exempt_Status='1' ) thingy ``` or to get really fancy you could do it all in one go most likely: <http://sqlfiddle.com/#!6/8b437/1> ``` select min(salary), max(salary), avg(salary), case when Exempt_Non_Exempt_Status = 0 then 'Non-Exempt' else 'Exempt' end as exemptStatus from ( select Employee.salary, Exempt_Non_Exempt_Status from Employee ) thingy group by Exempt_Non_Exempt_Status ```
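The single-query `CASE` version translates directly to SQLite. A sketch with made-up salary data (the `exempt_status` column name is shortened from the question's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (salary REAL, exempt_status INTEGER)")
con.executemany("INSERT INTO employee VALUES (?,?)", [
    (30000, 0), (40000, 0), (50000, 1), (90000, 1),
])

# One pass over the table: group by the status flag and label it.
rows = con.execute("""
    SELECT MIN(salary), MAX(salary), AVG(salary),
           CASE WHEN exempt_status = 0 THEN 'Non-Exempt' ELSE 'Exempt' END
    FROM employee
    GROUP BY exempt_status
    ORDER BY exempt_status
""").fetchall()
assert rows == [(30000.0, 40000.0, 35000.0, "Non-Exempt"),
                (50000.0, 90000.0, 70000.0, "Exempt")]
```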
Sql 2012 Simplify Query Union
[ "", "sql", "left-join", "union", "" ]
I would like to know what I'm doing wrong. What I'm trying to do is get the half of each month; let's say if I input 8/1/2015 it should return 7/16/2015, and if I use 8/31/2015 it should return 8/15/2015. I created a SQL script which isn't running; here is the code. ``` SELECT case when @rank = 'R' then month(DATEADD(day, -15, @date_from)+'/'+ case when day(DATEADD(day, -15, @date_from)) between 1 and 15 then 1 else 16 end +'/'+year(DATEADD(day, -15, @date_from) DATEADD(month, -1, @date_from) end ``` But I get an error. Is there a way for me to do this more efficiently, and what is the solution for the error?
That is an invalid statement as it is missing a few brackets, and it may be missing an else statement for the first case when. I am unsure if this is what you are trying to achieve. ``` DECLARE @rank AS VARCHAR(50) = 'R'; DECLARE @date_from AS DATETIME = '1/1/2015'; SELECT CASE WHEN @rank = 'R' THEN CAST( -- get month Cast(Month(Dateadd( day, -15, @date_from )) AS VARCHAR(2)) + '/' + -- get day CASE WHEN Day(Dateadd( day, -15, @date_from )) BETWEEN 1 AND 15 THEN '1' ELSE '16' END + '/' + -- get year Cast(Year(Dateadd( day, -15, @date_from )) AS VARCHAR(5)) AS DATETIME) ELSE DATEADD(month, -1, @date_from) END ```
I am creating a subquery, sir; the complete code looks like this: ``` set @refdatefrom = ( SELECT case when @rank = 'R' then month(DATEADD(day, -15, @date_from)+'/'+ case when day(DATEADD(day, -15, @date_from)) between 1 and 15 then 1 else 16 end +'/'+year(DATEADD(day, -15, @date_from) DATEADD(month, -1, @date_from) end ) ``` Then each value which I add will be inserted into a table. The error which I get is ``` Server: Msg 156, Level 15, State 1, Procedure finalize, Line 65 Incorrect syntax near the keyword 'end'. Server: Msg 156, Level 15, State 1, Procedure finalize, Line 107 Incorrect syntax near the keyword 'end'. ```
Nested case statement in sql for date
[ "", "sql", "sql-server", "" ]
I have a table that contains some corrupted records, because I forgot to add a `UNIQUE` index for two columns. Take a look at the following table for an example: ``` +----+-------------+--------+------------+ | id | uuid | object | project_id | +----+-------------+--------+------------+ | 1 | 73621000001 | screw | 1 | | 2 | 73621000002 | screw | 1 | | 3 | 73621000003 | screw | 1 | | 4 | 73621000004 | tube | 1 | | 5 | 73621000005 | plate | 2 | | 6 | 73621000006 | plate | 2 | | 7 | 73621000007 | plate | 2 | | 8 | 73621000008 | plate | 2 | | 9 | 73621000009 | plate | 2 | | 10 | 73621000010 | gear | 4 | | 11 | 73621000011 | gear | 4 | +----+-------------+--------+------------+ ``` As you can see, there are some `object`-`project_id`-combinations that occur multiple times, but have different `uuids`. I want to delete all duplicate records, but keep those with the highest `uuid`. The resulting table should be this: ``` +----+-------------+--------+------------+ | id | uuid | object | project_id | +----+-------------+--------+------------+ | 3 | 73621000003 | screw | 1 | | 4 | 73621000004 | tube | 1 | | 9 | 73621000009 | plate | 2 | | 11 | 73621000011 | gear | 4 | +----+-------------+--------+------------+ ``` I can see which `object`s have duplicates using the following query: ``` SELECT uuid, object, project_id, COUNT(*) FROM uuid_object_mapping GROUP BY object, project_id HAVING COUNT(*) > 1; ``` I can get the 'clean' table using this query: ``` SELECT MAX(uuid) as uuid, object, project_id FROM uuid_object_mapping GROUP BY object, project_id; ``` And I can verify that the 'clean' table does not contain duplicates using ``` SELECT uuid, object, project_id, COUNT(*) FROM ( SELECT MAX(uuid) as uuid, object, project_id FROM uuid_object_mapping GROUP BY object, project_id ) AS clean GROUP BY object, project_id HAVING COUNT(*) > 1; ``` But how can I delete everything that is not in the 'clean' table?
In MySQL, you can use a `join`, but you need to be careful about the `NULL` values: ``` delete om from uuid_object_mapping om join (select MAX(uuid) as uuid, object, project_id from uuid_object_mapping group by object, project_id ) omkeep on omkeep.object = om.object and omkeep.project_id <=> om.project_id where om.uuid <> omkeep.uuid; ``` The `NULL` values seem to have disappeared, so you can use this `on` clause: ``` on omkeep.object = om.object and omkeep.project_id = om.project_id ```
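With the sample data, where no `uuid` is NULL, a `NOT IN` against the per-group maxima also works and is easy to verify in SQLite; the join form in the answer remains the safer MySQL pattern when NULLs are possible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE uuid_object_mapping
               (id INTEGER, uuid INTEGER, object TEXT, project_id INTEGER)""")
con.executemany("INSERT INTO uuid_object_mapping VALUES (?,?,?,?)", [
    (1, 73621000001, "screw", 1), (2, 73621000002, "screw", 1),
    (3, 73621000003, "screw", 1), (4, 73621000004, "tube", 1),
    (5, 73621000005, "plate", 2), (6, 73621000006, "plate", 2),
    (7, 73621000007, "plate", 2), (8, 73621000008, "plate", 2),
    (9, 73621000009, "plate", 2), (10, 73621000010, "gear", 4),
    (11, 73621000011, "gear", 4),
])

# Keep only the highest uuid per (object, project_id) combination.
con.execute("""
    DELETE FROM uuid_object_mapping
    WHERE uuid NOT IN (SELECT MAX(uuid) FROM uuid_object_mapping
                       GROUP BY object, project_id)
""")
ids = [r[0] for r in con.execute(
    "SELECT id FROM uuid_object_mapping ORDER BY id")]
assert ids == [3, 4, 9, 11]   # matches the desired 'clean' table
```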
Please use PARTITION BY along with an ORDER BY uuid clause. Search for PARTITION BY; it is the best technique to remove duplicates.
How to remove 'duplicate' records?
[ "", "mysql", "sql", "" ]
There is a table of the following structure: ``` CREATE TABLE history ( pk serial NOT NULL, "from" integer NOT NULL, "to" integer NOT NULL, entity_key text NOT NULL, data text NOT NULL, CONSTRAINT history_pkey PRIMARY KEY (pk) ); ``` The `pk` is a primary key, `from` and `to` define a position in the sequence and the sequence itself for a given entity identified by `entity_key`. So the entity has one sequence of 2 rows in case if the first row has the `from = 1; to = 2` and the second one has `from = 2; to = 3`. So the point here is that the `to` of the previous row matches the `from` of the next one. The order to determine "next"/"previous" row is defined by `pk` which grows monotonously (since it's a `SERIAL`). The sequence does not have to start with 1 and the `to - from` does not necessary 1 always. So it can be `from = 1; to = 10`. What matters is that the "next" row in the sequence matches the `to` exactly. Sample dataset: ``` pk | from | to | entity_key | data ----+--------+------+--------------+------- 1 | 1 | 2 | 42 | foo 2 | 2 | 3 | 42 | bar 3 | 3 | 4 | 42 | baz 4 | 10 | 11 | 42 | another foo 5 | 11 | 12 | 42 | another baz 6 | 1 | 2 | 111 | one one one 7 | 2 | 3 | 111 | one one one two 8 | 3 | 4 | 111 | one one one three ``` And what I cannot realize is how to partition by "sequences" here so that I could apply window functions to the group that represents a single "sequence". Let's say I want to use the `row_number()` function and would like to get the following result: ``` pk | row_number | entity_key ----+-------------+------------ 1 | 1 | 42 2 | 2 | 42 3 | 3 | 42 4 | 1 | 42 5 | 2 | 42 6 | 1 | 111 7 | 2 | 111 8 | 3 | 111 ``` For convenience I created an SQLFiddle with initial seed: <http://sqlfiddle.com/#!15/e7c1c> PS: It's not the "give me the codez" question, I made my own research and I just out of ideas how to partition. 
It's obvious that I need to `LEFT JOIN` with the `next.from = curr.to`, but then it's still not clear how to reset the partition on `next.from IS NULL`. PS: It will be a 100 points bounty for the most elegant query that provides the requested result PPS: the desired solution should be an SQL query not pgsql due to some other limitations that are out of scope of this question.
I don’t know if it counts as “elegant,” but I think this will do what you want: ``` with Lagged as ( select pk, case when lag("to",1) over (order by pk) is distinct from "from" then 1 else 0 end as starts, entity_key from history ), LaggedGroups as ( select pk, sum(starts) over (order by pk) as groups, entity_key from Lagged ) select pk, row_number() over ( partition by groups order by pk ) as "row_number", entity_key from LaggedGroups ```
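The two window passes (flag island starts where `lag("to")` differs from `"from"`, then turn the running sum of flags into group ids and number rows within each group) can be replayed in plain Python against the sample rows to confirm the expected row numbers:

```python
rows = [  # (pk, frm, to, entity_key) from the sample dataset
    (1, 1, 2, 42), (2, 2, 3, 42), (3, 3, 4, 42),
    (4, 10, 11, 42), (5, 11, 12, 42),
    (6, 1, 2, 111), (7, 2, 3, 111), (8, 3, 4, 111),
]

result, group, prev_to = [], 0, None
counts = {}
for pk, frm, to, ek in sorted(rows):   # order by pk
    if prev_to != frm:                 # lag("to") is distinct from "from"
        group += 1                     # running sum of start flags
    counts[group] = counts.get(group, 0) + 1
    result.append((pk, counts[group], ek))  # row_number within the island
    prev_to = to

assert result == [(1, 1, 42), (2, 2, 42), (3, 3, 42),
                  (4, 1, 42), (5, 2, 42),
                  (6, 1, 111), (7, 2, 111), (8, 3, 111)]
```

Note the lag runs over all rows ordered by `pk`, not per `entity_key`; that still works here because each entity's islands occupy contiguous `pk` ranges in the sample data.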
Just for fun & completeness: a recursive solution to reconstruct the (doubly) linked lists of records. [ this will **not** be the fastest solution ] NOTE: I commented out the ascending pk condition(s) since they are not needed for the connection logic. ``` WITH RECURSIVE zzz AS ( SELECT h0.pk , h0."to" AS next , h0.entity_key AS ek , 1::integer AS rnk FROM history h0 WHERE NOT EXISTS ( SELECT * FROM history nx WHERE nx.entity_key = h0.entity_key AND nx."to" = h0."from" -- AND nx.pk > h0.pk ) UNION ALL SELECT h1.pk , h1."to" AS next , h1.entity_key AS ek , 1+zzz.rnk AS rnk FROM zzz JOIN history h1 ON h1.entity_key = zzz.ek AND h1."from" = zzz.next -- AND h1.pk > zzz.pk ) SELECT * FROM zzz ORDER BY ek,pk ; ```
Partitioning function for continuous sequences
[ "", "sql", "postgresql", "postgresql-9.3", "window-functions", "" ]
Tried to show it as simply as possible. I want to SELECT Subject if 2 cells in a row are equal. Table ``` --------------------------------------------- Subject --- username --- Lastpostername --------------------------------------------- subject A --- user1 --- user3 Subject B --- user2 --- user3 Subject C --- user3 --- user3 Subject D --- user4 --- user1 ``` The result I need is to select Subject C, because its username and lastpostername are equal (sorted DESC by ID, so the newest comes first).
``` SELECT t.subject FROM tab AS t WHERE t.username = t.Lastpostername ORDER BY t.id DESC ```
Suppose your table name is 'abc'; then you can use the query below: ``` SELECT Subject from abc WHERE abc.username = abc.Lastpostername ORDER BY abc.id DESC; ```
SELECT if 2 cells in a row are equal
[ "", "mysql", "sql", "" ]
I have read lots of posts about how to update multiple columns but still can't find the right answer. I have one table and I would like to update this table from another table. ``` Update table1 set (a,b,c,d,e,f,g,h,i,j,k)=(t2.a,t2.b,t2.c,t2.d,t2.e,t2.f,t2.g,t2.h,t2.i,t2.j,t2.k) from ( SELECT ..... with join ... where .... ) t2 where table1.id=table2.id ``` If I run only the SELECT statement (between the brackets), the script returns values, but it does not work within the UPDATE.
TSQL does not support [row-value constructor](https://connect.microsoft.com/SQLServer/feedback/details/299231/add-support-for-ansi-standard-row-value-constructors). Use this instead: ``` UPDATE table1 SET a = t2.a, b = t2.b, (...) FROM ( SELECT ..... with join ... WHERE .... ) t2 WHERE table1.id = table2.id ```
You don't need to use a sub-query; you can simply do the following... ``` Update t1 set t1.a = t2.a ,t1.b = t2.b ,t1.c = t2.c ,t1.d = t2.d ....... from table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE ....... ```
Sql server update multiple columns from another table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I don't know how to phrase my question but I will try my best. I am trying to accomplish something like this for a report in SSRS. ``` SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE WHERE COLUMN1 = CASE WHEN @VAR1 IS NOT NULL THEN @VAR1 WHEN @VAR1 IS NULL THEN IN (SELECT COLUMN1 FROM TABLE2) END ``` How do I rewrite my case statement to allow this type of logic? Should I be writing it in a different way? I know that this question can be interpreted as an open ended question but I will mark the answer as answered as soon as I test and it is working for me. Thanks in advance. **EDIT** Column1 from Table2 has NULLs in the Column. I know that the 'IN' doesn't produce NULLs. I need to pull EVERYTHING in that column from the subquery.
No - you can't use the IN as you'd like. You just need to re-work your two possibilities a little. ``` SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE WHERE (@VAR1 IS NOT NULL AND COLUMN1 = @VAR1) OR (@VAR1 IS NULL AND COLUMN1 IN (SELECT COLUMN1 FROM TABLE2) ) ``` **For NULLs:** To match NULLs, you can try: ``` SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE WHERE (@VAR1 IS NOT NULL AND COLUMN1 = @VAR1) OR (@VAR1 IS NULL AND (COLUMN1 IN (SELECT COLUMN1 FROM TABLE2) OR COLUMN1 IS NULL) ) ```
How about this: ``` SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE WHERE COLUMN1 in (SELECT distinct Coalesce(@VAR1, TABLE2.COLUMN1) FROM TABLE2 UNION SELECT @VAR1 --In case it is possible TABLE2 is empty ) ``` It should make use of any indexes on `Column1` in `TABLE`. If your column is nullable, any indexes will probably be useless, but to return the records with null values if null values are in `TABLE2` you could use: ``` SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE WHERE COLUMN1 in (SELECT distinct Coalesce(@VAR1, TABLE2.COLUMN1) FROM TABLE2 UNION SELECT @VAR1 ) OR (COLUMN1 IS NULL AND EXISTS(SELECT distinct TABLE2.COLUMN1 FROM TABLE2 WHERE TABLE2.COLUMN1 IS NULL ) ) ```
Can 'IN' Be Used In A Case Statement?
[ "", "sql", "sql-server", "t-sql", "reporting-services", "" ]
The query should return the names Naveed, Rizwan, Fayaz and Ahmed: a SQL query for students who have enrolled in the English or Urdu course but not in both. ``` declare @Student table(sid int identity(1, 1), sname varchar(250)) declare @Course table(cid int identity(1, 1), cname varchar(250)) declare @StudentCourse table(cid int, sid int) insert into @Student(sname) select 'Mehboob' union all --1 select 'Rahim' union all -- 2 select 'Naveed' union all --3 select 'Rizwan' union all --4 select 'Fayaz' union all --5 select 'Ahmed' -- 6 insert into @Course(cname) select 'English' union all select 'Urdu' insert into @StudentCourse(sid ,cid) select 1,1 union all select 2,1 union all select 3,1 union all select 4,1 union all select 5,2 union all select 6,2 union all select 1,2 union all select 2,2 ```
``` select sid, max(cid) from @StudentCourse group by sid having count(*)=1 ```
I kept practicing and finally I did it with this query; do you guys have any other in mind? ``` select s.*, count(1) as TotalEnrolledCourses from @Student s inner join @StudentCourse sc on s.sid = sc.sid where sc.cid in (1,2) group by s.sid, sname having TotalEnrolledCourses = 1 ```
SQL query to get students who have enrolled in English or Urdu course but not in both
[ "", "sql", "sql-server", "database", "" ]
I'm trying to execute the SQL query below, but I am getting a 'Missing right parenthesis' error: ``` ((SELECT CUST_ACCT FROM CUSTMASTER WHERE SITE_ID IN (SELECT SITE_ID FROM HOUSEMASTER WHERE FTAX=1)) AND (SELECT CUST_ACCT FROM CUSTMASTER WHERE SITE_ID IN (SELECT SITE_ID FROM HOUSEMASTER WHERE MGT=1))) ``` I feel like the parentheses are properly balanced. Please let me know what is missing.
It looks like you're trying to get the results of two different queries as one by adding an `and` operator between them. This won't work. You could use a `union` query, but it looks like it can be reduced to the query below. Maybe this is what you really wanted? ``` SELECT CUST_ACCT FROM CUSTMASTER WHERE SITE_ID IN ( SELECT SITE_ID FROM HOUSEMASTER WHERE FTAX=1 OR MGT=1 ) ```
I think there has to be a UNION: ``` ((SELECT CUST_ACCT FROM CUSTMASTER WHERE SITE_ID IN (SELECT SITE_ID FROM HOUSEMASTER WHERE FTAX = 1)) UNION (SELECT CUST_ACCT FROM CUSTMASTER WHERE SITE_ID IN (SELECT SITE_ID FROM HOUSEMASTER WHERE MGT=1))) ```
Unable to execute the query
[ "", "sql", "" ]
I have the following ranking system. ``` SET @i = 0; SELECT id, username, magic_xp, @i:=@i+1 AS rank FROM hs_users ORDER BY magic_xp DESC; hs_users id username magic_xp rank 988 5hapescape 14926854 1 737 Ozan 13034431 2 989 Kurt 13034431 3 6 LEGACY 0 4 11 Bobby 0 5 276 Bobby123 0 6 345 Mynamesjason 0 7 450 Demon Spawn 0 8 987 Satan 0 9 ``` As you can see, I have 2 users with the same xp. I want to make them both have `rank = 2` and the rest should follow from `3`. How can I group them like this? ``` | username | magic_xp | rank | | ---------- + -------- + ---- | | ShapeScape | 1000 | 1 | | Kurt | 100 | 2 | | Ozan | 100 | 2 | | Legacy | 10 | 3 | ```
**query** ``` set @i := 0; set @lagxp := null; select id, username, magic_xp, @i := if(@lagxp = magic_xp, @i, if(@lagxp := magic_xp, @i + 1, @i + 1)) as rank from hs_users order by magic_xp desc ; ``` or ``` SELECT id, username, magic_xp, IF (@score=hs_users.magic_xp, @rank:=@rank, @rank:=@rank+1) as rank, @score:=hs_users.magic_xp score FROM hs_users, (SELECT @score:=0, @rank:=0) r ORDER BY magic_xp DESC; ``` **output** ``` +-----+------------+----------+------+----------+ | id | username | magic_xp | rank | lagxp | +-----+------------+----------+------+----------+ | 988 | Shapescape | 14926894 | 1 | 14926894 | | 737 | Ozan | 13034431 | 2 | 13034431 | | 989 | Kurt | 13034431 | 2 | 13034431 | | 6 | Legacy | 0 | 3 | 0 | +-----+------------+----------+------+----------+ ``` [sqlfiddle](http://sqlfiddle.com/#!9/1d72d5/12)
In MySQL, the most efficient way is to use variables: ``` select t.*, (@rank := if(@magic_xp = magic_xp, @rank, if(@magic_xp := magic_xp, @rank + 1, @rank + 1) ) ) as rank from table t cross join (select @rank := 0, @magic_xp := NULL) params order by magic_xp desc; ``` Note the complicated expression for the variables. The assignment of both variables is in a single expression. This is on purpose. MySQL does not guarantee the order of assignment of expressions in a `SELECT`, and sometimes, it does not even evaluate them in order. A single expression is the safe way to do this logic. A more standard approach in SQL is to use a correlated subquery: ``` select t.*, (select count(distinct t2.magic_xp) from table t2 where t2.magic_xp >= t.magic_xp ) as rank from table t; ```
SQL User Score ranking by grouping
[ "", "mysql", "sql", "mysql-variables", "" ]
I will start out by saying, I am working with two tables, that are a bit of a mess in terms of how they are organized. I am currently pleading my case to get this adjusted, but have no control over them. So the organization of the tables is out of my control. I have one table that looks like this **Customer** ``` | id | name | date_joined | |----|------|-------------| | 1 | Bob | 2012-01-01 | | 2 | Jack | 2012-01-01 | | 3 | Jill | 2012-01-01 | ``` Since the customer table was created, another table has been created to hold customer information **Customer2** ``` | id | name | year_joined | month_joined | day_joined | |----|---------|-------------|--------------|------------| | 4 | Ken | 2013 | 1 | 1 | | 5 | Lindsey | 2013 | 1 | 1 | | 1 | Bob | 2012 | 1 | 1 | ``` You will notice that while Customer2 contains new customers for the first two entries it also contains Bob, who is the same Bob that is in the first table with updated information to match the new Customer2 table layout. I need to select all of these records into one result set based on a list of ID's, it is assumed that the ID's are unique. Therefore we can be sure that Bob in Customer is the same as Bob in Customer2. I am currently doing this with a union statement like this. ``` select * from ( select id, name, date_joined, '' as year_joined, '' as month_joined, '' as day_joined from customer union select id, name, '' as date_joined, year_joined, month_joined, day_joined from customer2 ) as U where U.id in (list of ID's) ``` However this results in a table that looks like this ``` | id | name | date_joined | year_joined | month_joined | day_joined | |----|---------|-------------|-------------|--------------|------------| | 4 | Ken | | 2013 | 1 | 1 | | 5 | Lindsey | | 2013 | 1 | 1 | | 1 | Bob | | 2012 | 1 | 1 | | 1 | Bob | 2012-01-01 | | | | | 2 | Jack | 2012-01-01 | | | | | 3 | Jill | 2012-01-01 | | | | ``` As we can see we have 'duplicate' records for **Bob**, which are not really duplicates. My question is this: When I encounter 'duplicates' like this, is there any way I can specify to only pick the record from the **Customer2** table? I need this because I do not want duplicates and prefer to keep the record that matches the latest table layout.
This should get you started: ``` SELECT id , name , date_joined , NULL year_joined , NULL month_joined , NULL day_joined FROM Customer WHERE id NOT IN (SELECT id FROM Customer2) UNION ALL SELECT id , name , NULL , year_joined , month_joined , day_joined FROM Customer2 ORDER BY id ; ``` Alternatively, using just **SET** operations: ``` (SELECT id , name , date_joined , NULL year_joined , NULL month_joined , NULL day_joined FROM Customer EXCEPT (SELECT id , name , CONVERT(DATE, CAST(year_joined AS VARCHAR(4)) + '-' + CAST(month_joined AS VARCHAR(2)) + '-' + CAST(day_joined AS VARCHAR(2)) , 102 ) , NULL , NULL , NULL FROM Customer2 ) ) UNION ALL SELECT id , name , NULL , year_joined , month_joined , day_joined FROM Customer2 ORDER BY id ; ``` Either one takes from the **Customer** those, *not* found in **Customer2**, and combines them with all found in **Customer2**: ``` | id | name | date_joined | year_joined | month_joined | day_joined | |----|---------|-------------|-------------|--------------|------------| | 1 | Bob | (null) | 2012 | 1 | 1 | | 2 | Jack | 2012-01-01 | (null) | (null) | (null) | | 3 | Jill | 2012-01-01 | (null) | (null) | (null) | | 4 | Ken | (null) | 2013 | 1 | 1 | | 5 | Lindsey | (null) | 2013 | 1 | 1 | ``` See it in action: [SQL Fiddle](http://sqlfiddle.com/#!3/7423d/2). Myself though, I'd usually prefer a genuine date column over three columns with date particles... Please comment, if and as this requires adjustment / further detail.
You would have to make the data match up exactly. Something like this could work: ``` select * from ( select id, name, datepart(year,date_joined) as year_joined, datepart(month,date_joined) as month_joined, datepart(day,date_joined) as day_joined from customer union select id, name, year_joined, month_joined, day_joined from customer2 ) as U where U.id in (list of ID's) ```
SQL Union prefer records from one table?
[ "", "sql", "sql-server", "duplicates", "union", "" ]
I am trying to connect two tables with a left join and a date. **My SQL Query** ``` SELECT ord.`ordernumber` bestellnummer, his.`change_date` zahldatum FROM `s_order` ord LEFT JOIN `s_order_history` his ON ((ord.`id`=his.`orderID`) AND (ord.`cleared`=his.`payment_status_id`)) #AND MIN(his.`change_date`) WHERE ord.`ordertime` >= \''.$dateSTART.'\' AND ord.`ordertime` <= \''.$dateSTOP.'\'' ; ``` **s\_order** ``` +----+---------------------+---------+-------------+ | id | ordertime | cleared | ordernumber | +----+---------------------+---------+-------------+ | 1 | 2014-08-11 19:53:43 | 2 | 123 | | 2 | 2014-08-15 18:33:34 | 2 | 125 | +----+---------------------+---------+-------------+ ``` **s\_order\_history** ``` +----+-------------------+-----------------+---------+---------------------+ | id | payment_status_id | order_status_id | orderID | change_date | +----+-------------------+-----------------+---------+---------------------+ | 1 | 1 | 5 | 1 | 2014-08-11 20:53:43 | | 2 | 2 | 5 | 1 | 2014-08-11 22:53:43 | | 3 | 2 | 7 | 1 | 2014-08-12 19:53:43 | | 4 | 1 | 5 | 2 | 2014-08-15 18:33:34 | | 5 | 1 | 6 | 2 | 2014-08-16 18:33:34 | | 6 | 2 | 6 | 2 | 2014-08-17 18:33:34 | +----+-------------------+-----------------+---------+---------------------+ ``` **Wanted result:** ``` +-------------+---------------------+ | ordernumber | change_date | +-------------+---------------------+ | 123 | 2014-08-11 22:53:43 | | 125 | 2014-08-17 18:33:34 | +-------------+---------------------+ ``` The problem I have is getting only the date where the cleared/payment\_status\_id value has been changed in s\_order. I currently get all dates where the payment\_status\_id matches the current cleared value, but I only need the one where it happened first. This is only an excerpt of the actual query, since the original is a lot longer (mostly more left joins and a lot more tables).
You can group data by `ordernumber` ``` SELECT ord.`ordernumber` bestellnummer, MIN(his.`change_date`) as zahldatum FROM `s_order` ord LEFT JOIN `s_order_history` his ON ((ord.`id`=his.`orderID`) AND (ord.`cleared`=his.`payment_status_id`)) WHERE ord.`ordertime` >= \''.$dateSTART.'\' AND ord.`ordertime` <= \''.$dateSTOP.'\'' GROUP BY ord.`ordernumber`; ``` or you can group data in a subquery: ``` SELECT ord.`ordernumber` bestellnummer, his.`min_change_date` zahldatum FROM `s_order` ord LEFT JOIN ( SELECT orderID, payment_status_id, MIN(change_date) as min_change_date FROM s_order_history GROUP BY orderID, payment_status_id ) his ON (ord.`id` = his.`orderID` AND ord.`cleared` = his.`payment_status_id`) WHERE ord.`ordertime` >= \''.$dateSTART.'\' AND ord.`ordertime` <= \''.$dateSTOP.'\''; ```
MIN is an aggregate function so you can't use it in a JOIN straight up like you've tried above. You also are not comparing it to a value in your JOIN. You'll want to do something like: ``` his.`change_date` = (SELECT MIN(his.`change_date`) FROM s_order_history where ord.`id` = his.`orderID`) ``` in your `JOIN`.
Using left join with min
[ "", "mysql", "sql", "left-join", "min", "" ]
I'm looking for the following query in SQL - i.e. select ID from table where entry is within 'last hour' and **the last** check-in value was 'false'. Sample Data 'Table1': ``` ID(int), Check-In(boolean), Name(nvarchar), Entry(DateTime)*, PersonID(int) *DateTime Format: DD/MM/YYYY HH:MM:SS 1, true, Klaus, 14/05/2015 15:45:21, 100 2, true, Klaus, 14/05/2015 16:05:22, 100 3, false, Klaus, 14/05/2015 16:06:04, 100 4, true, Pete, 14/05/2015 16:20:33, 101 5, false, Michelle, 14/05/2015 16:24:22, 105 6, true, Pete, 14/05/2015 16:25:55, 101 7, false, Pete, 14/05/2015 16:28:44, 101 8, true, Pete, 14/05/2015 16:29:36, 101 ``` Result of Query: Select ID from Table1 where time = last\_hour and (LAST) Check-In was false' **= 3 and 5 (do not select 7)** In the above example, I don't want to select ID 7 as the last check-in of Pete was true (ID 8). Any ideas how I can achieve that with a SQL query? Is this possible with a simple query?
This is another way, using a `partition`. I can't run the explain now, but my guess is this version has to do fewer scans than an `exists` for each row. [**SQL Fiddle Demo**](http://sqlfiddle.com/#!3/09463/2) > Take note the fiddle doesn't have the 1-hour validation > Also the SQL Server bit field is (0,1), not (false, true) * `row = 1`: select the last entry for each user * `CheckIn = 0` is CheckIn = False. ``` WITH last_entry as ( SELECT *, ROW_NUMBER() OVER(PARTITION BY Name ORDER BY Entry DESC) AS row FROM table1 WHERE entry > DATEADD(HOUR, -1, GETDATE()) ) SELECT * FROM last_entry WHERE row = 1 and CheckIn = 0 ```
To make sure the **last** checkin is `false`, you must check that there is not a newer row in the table. You can do that with a "not exists" clause. Try this one: ``` select * from table1 t1 where entry > DATE_SUB(NOW(), INTERVAL 1 HOUR) and checkin = false and not exists ( select * from table1 t2 where t2.name = t1.name and t2.entry > t1.entry) ```
Sql query to only select rows if last entry of item meets condition
[ "", "sql", "sql-server", "" ]
I have two tables with a many to many relationship and I am trying to merge the 2 tables in a select statement. I want to see all of the records from both tables, but only match 1 record from table A to 1 record to table b, so null values are ok. For example table A has 20 records that match only 15 records from table B. I want to see all 20 records, the 5 that are unable to be matched can show null. ## Table 1 ## Something | Code# apple | 75 pizza | 75 orange | 6 Ball | 75 green | 4 red | 6 ## Table 2 ## date | id# Feb-15 | 75 Feb-11 | 75 Jan-10 | 6 Apr-08 | 4 ## The result I need is Something | Date | Code# | ID# --- apple | Feb-15 | 75 | 75 pizza | Feb-11 | 75 | 75 orange | Jan-10 | 6 | 6 Ball | NULL | 75 | NULL green | Apr-08 | 4 | 4 red | NULL | 6 | NULL
I'm imagining something like this. You want to pair the rows up side by side, but one side is going to have more than the other. ``` select * /* change to whatever you need */ from ( select *, row_number() over (partition by "code#" order by "something") as rn from tableA ) as a full outer join /* sounds like maybe left outer join will work too */ ( select *, row_number() over (partition by "id#" order by "date" desc) as rn from tableB ) as b on b."id#" = a."code#" and b.rn = a.rn ``` Actually I don't know how you're going to get "ball" to come after "apple" and "pizza" without some other column to sort on. Rows in SQL tables don't have any ordering and you can't rely on the default listing from `select *...` or assume that the order of insertion is significant.
A regular Left-join should do it for you. ``` select tableA.* , tableB.* from tableA left join tableB on tableB.PrimaryKey = tableA.PrimaryKey ```
How to query 2 tables in sql server with many to many relationship to identify differences
[ "", "sql", "sql-server", "join", "m2m", "" ]
I have a query ``` with x as (select row_number() over(partition by FirstName order by Investment_DT desc) as rn, * from [dbSuppHousing].[dbo].[tblABC]) select Login_Name ,r.Role_Name ,Investment_DT ,FirstName ,LastName ,Login_Name ,Investment_DT ,Investment_ID from x join tblUsers t on t.UserName = x.Login_Name join tblUser_Roles ur on t.User_Id=ur.USER_ID join tblRoles r on r.Role_Id=ur.Role_ID where x.rn = 1 order by x.FirstName ``` I want to insert the result of this query into another table as is. Typically I use a query like: ``` insert into tblABC2 select * from tblABC ``` But I'm not sure how to do this in this case which is a query which begins with `with x as`
``` with x as (select row_number() over(partition by FirstName order by Investment_DT desc) as rn, * from [dbSuppHousing].[dbo].[tblABC]) select Login_Name ,r.Role_Name ,Investment_DT ,FirstName ,LastName ,Login_Name ,Investment_DT ,Investment_ID into #temptable from x join tblUsers t on t.UserName = x.Login_Name join tblUser_Roles ur on t.User_Id=ur.USER_ID join tblRoles r on r.Role_Id=ur.Role_ID where x.rn = 1 -- order by x.FirstName ``` You can use `into` to insert into the table you need. Also note that you can't do an `order by` when doing this (which has been commented out).
You simply need to put the insert statement between the common table expression (CTE) and your select statement: ``` with x as (select row_number() over(partition by FirstName order by Investment_DT desc) as rn, * from [dbSuppHousing].[dbo].[tblABC]) /* place insert statement here */ select Login_Name ,r.Role_Name ,Investment_DT ,FirstName ,LastName ,Login_Name ,Investment_DT ,Investment_ID from x join tblUsers t on t.UserName = x.Login_Name join tblUser_Roles ur on t.User_Id=ur.USER_ID join tblRoles r on r.Role_Id=ur.Role_ID where x.rn = 1 order by x.FirstName ``` If you are running this on SQL Server 2008+, you need to ensure that any statement previous to the CTE is terminated with a semicolon.
Create a new table based on a query SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
I'm trying to find the geometric average of values from a table with millions of rows. For those that don't know: to find the geometric average, you multiply the values together and then take the nth root, where n is the number of rows. You probably already see the problem; the running product will quickly exceed the system maximum. I found a great solution that uses the natural log. <http://timothychenallen.blogspot.com/2006/03/sql-calculating-geometric-mean-geomean.html> However, that got me to wonder: wouldn't the same problem apply to the arithmetic mean? If you have N records, and N is very large, the running sum can also exceed the system maximum. So how do RDBMSs calculate averages during queries?
Very easy to check. For example, SQL Server 2008. ``` DECLARE @T TABLE(i int); INSERT INTO @T(i) VALUES (2147483647), (2147483647); SELECT AVG(i) FROM @T; ``` **result** ``` (2 row(s) affected) Msg 8115, Level 16, State 2, Line 7 Arithmetic overflow error converting expression to data type int. ``` There is no magic. The column type is `int`, the server adds values together using an internal variable of the same type `int`, and the intermediary result exceeds the range for `int`. You can run a similar check for any other DBMS that you use. Different engines may behave differently, but I would expect all of them to stick to the original type of the column. For example, averaging two `int` values `100` and `101` may result in `100` or `101` (still `int`), but never `100.5`. For SQL Server this behavior is [documented](https://msdn.microsoft.com/en-us/library/ms177677.aspx). I would expect something similar for all other engines: > AVG () computes the average of a set of values by dividing the sum of those values by the count of nonnull values. If the sum exceeds the maximum value for the data type of the return value an error will be returned. So, you have to be careful when calculating a simple average as well, not just the product. --- Here is an extract from the [SQL 92 Standard](http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt): > 6) Let DT be the data type of the < value expression >. > 9) If SUM or AVG is specified, then: > a) DT shall not be character string, bit string, or datetime. > b) If SUM is specified and DT is exact numeric with scale S, then the data type of the result is exact numeric with implementation-defined precision and scale S. > c) If AVG is specified and DT is exact numeric, then the data type of the result is exact numeric with implementation-defined precision not less than the precision of DT and implementation-defined scale not less than the scale of DT. > d) If DT is approximate numeric, then the data type of the result is approximate numeric with implementation-defined precision not less than the precision of DT. > e) If DT is interval, then the data type of the result is interval with the same precision as DT. So, a DBMS can convert `int` to a larger type when calculating `AVG`, but it has to be an `exact numeric` type, not floating-point. In any case, depending on the values you can still get arithmetic overflow.
Most databases don't support a `product()` function the way they support an average. However, you can use do what you want with logs. The product (simplified) is like: ``` select exp(sum(ln(x)) as product ``` The average would be: ``` select power(exp(sum(ln(x))), 1.0 / count(*)) as geoaverage ``` or ``` select EXP(AVG(LN(x))) as geoaverage ``` The LN() function might be LOG() on some platforms... These are schematics. The functions for `exp()` and `ln()` and `power()` vary, depending on the database. Plus, if you have to take into account zero or negative numbers, the logic is more complicated.
How does the Average function work in relational databases?
[ "", "sql", "math", "rdbms", "" ]
``` cCode cName2 cFather iLevel cCatCode 1 Assets NULL 1 NULL 11 Current assets 1 2 NULL 1101 Cash on Hand 11 3 12012 110101 Cash on Hand 1101 4 **NULL** 110102 NULL 1101 4 **NULL** 110103 cashier 1101 4 **NULL** 110104 Cash on Hand 1101 4 **NULL** ``` I want to update the value of `cCatCode` where `cFather` matches a parent `cCode`, copying `cCatCode` `12012` to all subsidiary accounts like 110101, 110102, 110103, 110104, so the table looks like this ``` cCode cName2 cFather iLevel cCatCode 1 Assets NULL 1 NULL 11 Current assets 1 2 NULL 1101 Cash on Hand 11 3 12012 110101 Cash on Hand 1101 4 12012 110102 NULL 1101 4 12012 110103 cashier 1101 4 12012 110104 Cash on Hand 1101 4 12012 ``` I ran this but it is not working ``` update chart set cCatCode = ccatcode where cfather = ccode ```
One method (that should work in all databases) is using a correlated subquery. For your particular data, this (or something very similar) will work in most databases: ``` update chart set cCatCode = (select c2.cCatCode from chart c2 where c2.cCode = chart.cFather ) where cCatCode is NULL; ```
1. For **SQL Server:** ``` update c1 set c1.cCatCode=c2.cCatCode from chart c1 join chart c2 on c1.cFather LIKE CAST(c2.cfather as varchar(max))+'%' and c1.cCode<>c2.cCode where c1.cCatCode IS NULL and c2.cCatCode IS NOT NULL ``` [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/0ba29/1) demo for SQL Server 2. For **MySQL:** ``` update chart c1 join chart c2 on c1.cFather LIKE CONCAT(c2.cfather,'%') and c1.cCode<>c2.cCode set c1.cCatCode=c2.cCatCode where c1.cCatCode IS NULL and c2.cCatCode IS NOT NULL ``` [**SQL Fiddle**](http://www.sqlfiddle.com/#!9/10c0c/1) demo for MySQL Result: ``` cCode cName2 cFather iLevel cCatCode ------------------------------------------------ 1 Assets (null) 1 (null) 11 Current assets 1 2 (null) 1101 Cash on Hand 11 3 12012 110101 Cash on Hand 1101 4 12012 110102 (null) 1101 4 12012 110103 cashier 1101 4 12012 110104 Cash on Hand 1101 4 12012 ```
I want to run a sql update query to a database to do the following
[ "", "sql", "" ]
Below is my table structure with data: ``` ID Date EmpId 67 2015-08-24 10:44:33.087 293 68 2015-08-24 10:41:49.950 293 69 2015-08-24 10:42:49.951 293 70 2015-08-24 10:45:15.157 013 71 2015-08-24 10:46:33.131 013 72 2015-08-24 10:42:33.134 013 73 2015-08-24 10:47:33.136 013 ``` The ID column is an auto-generated column. For every employee there are 2 or more records; in this case, EmpId 293 has got three records and 013 has got four records. I am looking for a query to return the record with the maximum Date for every employee. In this case the result should be as below: ``` ID Date EmpId 67 2015-08-24 10:44:33.087 293 73 2015-08-24 10:47:33.136 013 ``` My requirement is to return all 3 columns in the result set.
``` select ID,Date,EMPID from ( select ID, Date, EMPId, ROW_NUMBER() over (partition by EMPId order by Date desc) as RowNUm from Employee ) A where RowNUm = 1 ```
Try this ``` WITH TempId AS ( SELECT *, row_number() OVER(PARTITION BY EmpID ORDER BY ID desc) AS [Num] FROM Employee1 ) DELETE TempId WHERE [Num] > 1 --For deleting record Select * from Employee1 --For Selecting record ``` Find solution in Fiddle <http://sqlfiddle.com/#!6/80f3d/3>
Delete Duplicate Records with condition
[ "", "sql", "sql-server", "t-sql", "" ]
I have query ``` SELECT Site, Risk_Time_Stamp,COMPUTER_NAME, [IP_ADDR1_TEXT],Number_of_Risks FROM dbo.sem_computer WHERE [dbo].[V_SEM_COMPUTER].COMPUTER_ID = SEM_COMPUTER.COMPUTER_ID GROUP BY Site, Risk_Time_Stamp,COMPUTER_NAME, [IP_ADDR1_TEXT],Number_of_Risks ``` which gives 4 rows of output and another query ``` select * from SEPM_site ``` which gives 1000 rows of output I wish to concat both outputs such that there are 1000 rows of output However when I do ``` ; with cte as (SELECT Site, Risk_Time_Stamp,COMPUTER_NAME, [IP_ADDR1_TEXT],Number_of_Risks FROM dbo.sem_computer WHERE [dbo].[V_SEM_COMPUTER].COMPUTER_ID = SEM_COMPUTER.COMPUTER_ID GROUP BY Site, Risk_Time_Stamp,COMPUTER_NAME, [IP_ADDR1_TEXT],Number_of_Risks) select * from cte join SEPM_Site ss on cte.Site = ss.Site ``` I get 4 rows of output instead of 1000 How to fix this?
You are not getting 1,000 rows because the JOIN doesn't have 1,000 matches. If you want to get all the rows, even when there's no match, use a LEFT JOIN starting on SEPM\_Site table. ``` ; with cte as (SELECT Site, Risk_Time_Stamp,COMPUTER_NAME, [IP_ADDR1_TEXT],Number_of_Risks FROM dbo.sem_computer WHERE [dbo].[V_SEM_COMPUTER].COMPUTER_ID = SEM_COMPUTER.COMPUTER_ID GROUP BY Site, Risk_Time_Stamp,COMPUTER_NAME, [IP_ADDR1_TEXT],Number_of_Risks) select * from SEPM_Site ss LEFT join cte on cte.Site = ss.Site ```
I think you want a `left join`: ``` with cte as (. . .) select * from SEPM_Site ss left join cte on cte.Site = ss.Site ``` Note: I reversed the order of the tables. For your original query, you would want a `right join`. I prefer `left join`.
SQL with cte, how to concat data, not join
[ "", "sql", "sql-server", "common-table-expression", "" ]
I have a very unusual request. I have some filtered data in a table, the columns being ID, Date, and Event. Event is an XML column where one of the tags holds either StartWork or EndWork. From a huge data set I have filtered out the data for one request. My sample data has 6 rows, in the sequence StartWork, EndWork, StartWork, EndWork and so on. What I want to do is find the time difference between each pair: EndWork - StartWork = Difference, the next EndWork - StartWork = Difference2, and so on. [![enter image description here](https://i.stack.imgur.com/cQsQT.png)](https://i.stack.imgur.com/cQsQT.png) Basically I want 2-1, 4-3, 6-5 and so on. I tried doing it with Pivot, but couldn't get the desired result.
``` Select DateDiff(minute,StartWork.datacreated,EndWork.datacreated) from (Select datacreated,LineNb=row_number() over(Order by datacreated) from Table where eventdata.value('(/data/status/text())[1]','varchar(15)')='StartWork') StartWork INNER JOIN (Select datacreated,LineNb=row_number() over(Order by datacreated) from Table where eventdata.value('(/data/status/text())[1]','varchar(15)')='EndWork') EndWork ON StartWork.LineNb=EndWork.LineNb ```
This should help you: ``` declare @xmlStart XML='<data><status>StartWork</status></data>'; declare @xmlENd XML='<data><status>EndWork</status></data>'; declare @tbl TABLE(id INT,datecreated DATETIME,eventdata XML); INSERT INTO @tbl VALUES(1,{ts'2015-07-29 09:17:34'},@xmlStart) ,(2,{ts'2015-07-29 09:20:24'},@xmlEnd) ,(3,{ts'2015-07-29 10:05:41'},@xmlStart) ,(4,{ts'2015-07-29 10:18:34'},@xmlEnd); WITH resolvedCTE AS ( SELECT TOP 100 PERCENT id,datecreated,eventdata.value('(/data/status)[1]','varchar(max)') AS EventStatus FROM @tbl ) ,StartEvnets AS ( SELECT ROW_NUMBER() OVER(ORDER BY datecreated) AS inx,id,datecreated FROM resolvedCTE WHERE EventStatus='StartWork' ) ,EndEvnets AS ( SELECT ROW_NUMBER() OVER(ORDER BY datecreated) AS inx,id,datecreated FROM resolvedCTE WHERE EventStatus='EndWork' )SELECT StartEvnets.id, CAST(EndEvnets.datecreated - StartEvnets .datecreated AS TIME) FROM StartEvnets INNER JOIN EndEvnets ON StartEvnets.inx =EndEvnets.inx ```
transform sql rows to column
[ "", "sql", "sql-server", "pivot", "" ]
I want to get records whose ProjectEndDt is within 2 days of reaching today's date. I want to show projects that are going to end within 2 days, i.e. show them starting 2 days before the end date.
**Query** ``` SELECT ProjectEndDt , ProjectRenewalDt , ProjectUpgradeDt FROM tbl_project WHERE CONVERT(DATE, DATEADD(DAY, 2, ProjectEndDt)) >= CONVERT(DATE, GETDATE()); ```
Try this: ``` SELECT ProjectEndDt ,ProjectRenewalDt ,ProjectUpgradeDt FROM tbl_project WHERE CONVERT(DATE,[ProjectEndDt]) < DATEADD(DAY, 3,CONVERT(DATE,GETDATE())) ```
How to get records that are reaching today's date
[ "", "sql", "sql-server", "" ]
I want to select a record from a table where the date is stored in the format **'Jan 27 2015 12:00AM'**. When I select, I need to convert the date to the following format **'27/01/2015 00:00:00'**. I tried the following; it works for the date but not for the time, so I used GETDATE(). ``` select CONVERT(VARCHAR(10), CONVERT(date,StartDate,106), 103) + ' ' + convert(VARCHAR(8), GETDATE(), 14) as StartDate from Logistic ``` Can someone help me to convert the date format correctly? I tried the answer: ``` CONVERT(VARCHAR(10), StartDate, 103) + ' ' + CONVERT(VARCHAR(8), StartDate, 108) ``` Result: **Aug 25 201 Aug 25 2**
## Convert MMM dd yyyy HH:MM[AM|PM] to dd/MM/yyyy hh:mm:ss ``` Declare @Date VARCHAR(20) = 'Jan 27 2015 12:05AM' SELECT CONVERT(VARCHAR(10), CONVERT(DATETIME, @Date),103) + ' ' + CONVERT(VARCHAR(8), CONVERT(DATETIME, @Date),108) Result: 27/01/2015 00:05:00 ```
Check this: hope this answers your question... EDIT: Hope this is the format you need: ``` declare @d VARCHAR(100) ='Jan 27 2015 11:59AM'; SELECT CONVERT(VARCHAR(10),CAST(@d AS DATE),103) + ' ' + CAST(CAST(@d AS TIME) AS VARCHAR(8)) ``` I'm not used to the AM/PM format and I had assumed that 12:00AM is midday and 12:00PM is midnight, but it is exactly the other way round... I'm a little puzzled at the moment... ;-) ``` declare @d VARCHAR(100) ='Jan 27 2015 11:59AM'; SELECT CONVERT(DATETIME,@d,100); SET @d ='Jan 27 2015 12:00AM'; SELECT CONVERT(DATETIME,@d,100); SET @d ='Jan 27 2015 12:01AM'; SELECT CONVERT(DATETIME,@d,100); --some more, just to check... SET @d ='Jan 27 2015 10:00AM'; SELECT CONVERT(DATETIME,@d,100); SET @d ='Jan 27 2015 10:00PM'; SELECT CONVERT(DATETIME,@d,100); SET @d ='Jan 27 2015 00:00AM'; SELECT CONVERT(DATETIME,@d,100); ```
SQL Select Specific Date format dd/mm/YYYY hh:mm:ss
[ "", "sql", "sql-server", "date", "datetime", "" ]
The query below calculates what we need, but only for one specific column. How can we do this for all the columns within that table, without having to duplicate the case statement multiple times? This needs to be done for hundreds of tables, so duplicating the case statement is not ideal. ``` Select SUM(cast(case when column is null then 0 else 1 end as float))/count(*) from [Table] ``` So the output would be something like: Column Name: Customer Name, Data completeness: 88%
Solution by Jens Suessmeyer from [Finding the percentage of NULL values for each column in a table](http://blogs.msdn.com/b/samlester/archive/2012/09/22/tsql-solve-it-your-way-finding-the-percentage-of-null-values-for-each-column-in-a-table.aspx) ``` SET NOCOUNT ON DECLARE @Statement NVARCHAR(MAX) = '' DECLARE @Statement2 NVARCHAR(MAX) = '' DECLARE @FinalStatement NVARCHAR(MAX) = '' DECLARE @TABLE_SCHEMA SYSNAME = <SCHEMA_NAME> DECLARE @TABLE_NAME SYSNAME = <TABLE_NAME> SELECT @Statement = @Statement + 'SUM(CASE WHEN ' + COLUMN_NAME + ' IS NULL THEN 1 ELSE 0 END) AS ' + COLUMN_NAME + ',' + CHAR(13) , @Statement2 = @Statement2 + COLUMN_NAME + '*100 / OverallCount AS ' + COLUMN_NAME + ',' + CHAR(13) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @TABLE_NAME AND TABLE_SCHEMA = @TABLE_SCHEMA IF @@ROWCOUNT = 0 RAISERROR('TABLE OR VIEW with schema "%s" and name "%s" does not exists or you do not have appropriate permissions.',16,1, @TABLE_SCHEMA, @TABLE_NAME) ELSE BEGIN SELECT @FinalStatement = 'SELECT ' + LEFT(@Statement2, LEN(@Statement2) -2) + ' FROM (SELECT ' + LEFT(@Statement, LEN(@Statement) -2) + ', COUNT(*) AS OverallCount FROM ' + @TABLE_SCHEMA + '.' + @TABLE_NAME + ') SubQuery' EXEC(@FinalStatement) END ```
First, you can simplify the logic to: ``` Select AVG(case when column is null then 0.0 else 1.0 end) from [Table] ``` Then, you can generate the code. The following generates the `from` expressions. You can copy them over into the query: ``` select replace(' avg(case when [@col] is null then 0.0 else 1.0 end) as [@col],', '@col', column_name) from information_schema.columns where table_name = @TableName and table_schema = @SchemaName ``` Note: `quotename()` is more correct, but the above should work for reasonable column names (I never have column names that need to be quoted).
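The generate-the-SQL idea in both answers can be sketched end to end outside SQL Server. This is a hedged example using Python's `sqlite3` with a made-up `customer` table, where `PRAGMA table_info` stands in for `INFORMATION_SCHEMA.COLUMNS` as the source of column names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (name TEXT, city TEXT);
    INSERT INTO customer VALUES ('a', 'x'), ('b', NULL), ('c', 'y'), (NULL, NULL);
""")

# Build one AVG(CASE ...) expression per column, as the generated T-SQL does.
cols = [row[1] for row in conn.execute("PRAGMA table_info(customer)")]
exprs = ", ".join(
    f"AVG(CASE WHEN \"{c}\" IS NULL THEN 0.0 ELSE 1.0 END)" for c in cols
)
completeness = dict(zip(cols, conn.execute(f"SELECT {exprs} FROM customer").fetchone()))
print(completeness)  # {'name': 0.75, 'city': 0.5}
```

The same loop over tables (instead of columns) would cover the "hundreds of tables" requirement.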
How do you calculate data completeness for multiple tables based on null values within columns?
[ "", "sql", "sql-server", "database-design", "" ]
I have a table `borrower` with a column named `given_names`. I want a SQL query to display all the initials of the `given_names` column, meaning the first letter of every word in the `given_names` column. I tried: ``` select substring(cast(given_names as text),1,1) + substring(given_names, charindex(' ', given_names)+1, 1) from borrower ```
As mentioned in my comment above, `regexp_split_to_table` will help out a lot here. The idea is that we split your names up into separate records using a space as a delimiter. `split_to_table` will generate a new record for each token encountered. We can also capture a `row_number()` for each `firstname` so we can stitch the records back together using `string_agg` after using `substring()` to get the initial. ``` SELECT string_agg(initial, '') as initials FROM ( SELECT row_number() OVER (ORDER BY firstname) as recnum, substring(regexp_split_to_table(t.firstname, '\s+') FROM 1 FOR 1) as initial FROM test as t ) t_init GROUP BY recnum ``` You can check out a working example at [sqlfiddle](http://sqlfiddle.com/#!15/299fd/12). The cool thing about using this method is that the firstname can be a single word, or 100 words. A name like "John Jacob Jingleheimer Schmidt" will turn into "JJJS" just as easily as the name "Harold" will turn into "H", all within the same query.
I got another solution using *regex* ``` SELECT string_agg(arr [1], '') AS initials FROM ( SELECT row_number() OVER () AS rn ,regexp_matches(fname, '\y(?!(the|of)\y)\w', 'gi') arr FROM tblnames ) t GROUP BY rn ``` [**SQL Fiddle example**](http://sqlfiddle.com/#!15/a684e/2) --- Solution 1 ``` SELECT string_agg(col, '') initials FROM ( SELECT row_number() OVER () AS rn ,left(unnest(string_to_array(fname, ' ')), 1) col FROM tblnames ) t GROUP BY rn ``` --- Solution 2 With the following table as an example ``` CREATE TABLE tblnames ( fname varchar(100) ); INSERT INTO tblnames VALUES ('Stack Overflow'), ('Stack Over Flow'), ('Stackoverflow'); ``` running this select statement ``` select string_agg(col,'') from ( SELECT left(unnest(string_to_array('Stack Over Flow', ' ')),1) col )t ``` gives output as ``` string_agg text --------- SOF ``` So we can wrap the above select statement into a function like below ``` create or replace function getInitials(col text) returns text As $$ SELECT string_agg(col, '') FROM ( SELECT left(unnest(string_to_array(col, ' ')), 1) col ) t $$ language sql ``` so that we can get the initial letter of each *fname* in table *tblnames* with the following select ``` select fname,getInitials(fname) initials from tblnames ``` ---
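For comparison, the split-then-take-first-letter logic in both answers is trivial outside the database. A small Python sketch, with made-up sample names including the multi-word example from the accepted answer:

```python
# Mirrors the answers above: split on whitespace, keep the first
# character of each word, concatenate the results.
def initials(full_name: str) -> str:
    return "".join(word[0] for word in full_name.split())

names = ["John Jacob Jingleheimer Schmidt", "Harold", "Stack Over Flow"]
result = [initials(n) for n in names]
print(result)  # ['JJJS', 'H', 'SOF']
```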
First letter of each word in a column
[ "", "sql", "postgresql", "" ]
Input: 123. Output: One Two Three. No hundreds, no thousands, no thirty or twenty: only clean digits are to be spelled. Has anyone got an idea about it?
In *SQL-Server* you can do It with `CASE` statement, you have to use `COALESCE` to convert null values to blank strings, something like: ``` DECLARE @val INT = 123 SELECT COALESCE(CASE WHEN @val LIKE '%0%' THEN 'Zero ' END,'') + COALESCE(CASE WHEN @val LIKE '%1%' THEN 'One ' END,'') + COALESCE(CASE WHEN @val LIKE '%2%' THEN 'Two ' END,'') + COALESCE(CASE WHEN @val LIKE '%3%' THEN 'Three ' END,'') + COALESCE(CASE WHEN @val LIKE '%4%' THEN 'Four ' END,'') + COALESCE(CASE WHEN @val LIKE '%5%' THEN 'Five ' END,'') + COALESCE(CASE WHEN @val LIKE '%6%' THEN 'Six ' END,'') + COALESCE(CASE WHEN @val LIKE '%7%' THEN 'Seven ' END,'') + COALESCE(CASE WHEN @val LIKE '%8%' THEN 'Eight ' END,'') + COALESCE(CASE WHEN @val LIKE '%9%' THEN 'Nine ' END,'') ``` **OUTPUT:** ``` One Two Three ``` --- As per *Gordon Linoff* comment you can avoid `COALESCE` by adding `ELSE` clause: ``` SELECT CASE WHEN @val LIKE '%0%' THEN 'Zero ' ELSE '' END + CASE WHEN @val LIKE '%1%' THEN 'One ' ELSE '' END + CASE WHEN @val LIKE '%2%' THEN 'Two ' ELSE '' END + CASE WHEN @val LIKE '%3%' THEN 'Three ' ELSE '' END + CASE WHEN @val LIKE '%4%' THEN 'Four ' ELSE '' END + CASE WHEN @val LIKE '%5%' THEN 'Five ' ELSE '' END + CASE WHEN @val LIKE '%6%' THEN 'Six ' ELSE '' END + CASE WHEN @val LIKE '%7%' THEN 'Seven ' ELSE '' END + CASE WHEN @val LIKE '%8%' THEN 'Eight ' ELSE '' END + CASE WHEN @val LIKE '%9%' THEN 'Nine ' ELSE '' END ``` **UPDATE** As per comment that this will not work with number like '3421', updated code to work in situation like this: ``` DECLARE @val NVARCHAR(40) = '412' DECLARE @i INT = 1 CREATE TABLE #Temp ( Val NVARCHAR(40), IdRn INT ) WHILE (@i < LEN(@val) + 1) BEGIN INSERT INTO #Temp VALUES (CASE WHEN SUBSTRING(@val,@i,1) LIKE '0' THEN 'Zero ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '1' THEN 'One ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '2' THEN 'Two ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '3' THEN 'Three ' ELSE '' END, @i), (CASE WHEN 
SUBSTRING(@val,@i,1) LIKE '4' THEN 'Four ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '5' THEN 'Five ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '6' THEN 'Six ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '7' THEN 'Seven ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '8' THEN 'Eight ' ELSE '' END, @i), (CASE WHEN SUBSTRING(@val,@i,1) LIKE '9' THEN 'Nine ' ELSE '' END, @i) SET @i = @i + 1 END DECLARE @Nums VARCHAR(8000) SELECT @Nums = COALESCE(@Nums + ' ', '') + Val FROM #Temp WHERE Val <> '' SELECT @Nums ``` **OUTPUT** ``` Four One Two ```
Try this ``` declare @Input varchar(max)='123' declare @ouput varchar(max)='' declare @temp int=1 declare @char nvarchar(max)='' WHILE @temp <=LEN(@Input) BEGIN set @char = SUBSTRING(@Input, @temp, 1) set @ouput += (case when @char=1 then 'One ' when @char=2 then 'Two ' when @char=3 then 'Three ' else 'so on ' end) SET @temp = @temp + 1; END; print @ouput ```
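The digit-by-digit spelling that both answers implement reduces to a lookup table indexed by each character of the number. A minimal Python sketch (sample inputs only), which also preserves digit order and repeats, the weakness of the `LIKE '%d%'` approach:

```python
WORDS = ["Zero", "One", "Two", "Three", "Four", "Five",
         "Six", "Seven", "Eight", "Nine"]

def spell_digits(value: int) -> str:
    # Spell each digit in order, preserving repeats ("3421" -> "Three Four Two One").
    return " ".join(WORDS[int(ch)] for ch in str(value))

result = spell_digits(123)
print(result)  # One Two Three
```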
Do we have a SQL query that returns single digits like "One Two Three" when we give input 123?
[ "", "sql", "sql-server", "" ]
I have to update a table in an Oracle DB in batches of 10k. I tried this: ``` BEGIN WHILE (true) LOOP UPDATE TOP (10000) CUSTOMERS SET ACTIVE = 'N' WHERE ACTIVE='Y'; IF sql%notfound THEN EXIT; END IF; COMMIT; END LOOP; END; ``` It doesn't work, as PL/SQL doesn't support TOP. Any suggestions?
Your PL/SQL block, using JVA's suggestion, should be completed like this (since you are new to PL/SQL I am adding some syntactical suggestions that you could be interested in): ``` BEGIN -- WHILE (TRUE) can be omitted: "loop ... end loop;" -- already is an endless loop LOOP UPDATE CUSTOMERS SET ACTIVE = 'N' WHERE ACTIVE='Y' AND rownum <= 1000; exit when sql%notfound; -- notice that exit accepts "when condition" --IF sql%notfound THEN -- you can avoid an if/endif by using "exit when" -- EXIT; -- END IF; COMMIT; END LOOP; commit; -- you missed this commit for the last iteration END; ``` Don't be tempted to place "commit" before the "exit when sql%notfound": after a "commit", sql%notfound is always false and your loop would be really endless. **Let me point out that, in order to be efficient, this approach requires the "ACTIVE" column to be indexed!** If you don't have an index on the "active" column, each "update" will be forced to restart a full table scan from the beginning just to find the next 1000 records that still need to be updated. This other approach I am proposing uses some advanced PL/SQL features you, as a learner, might be interested in (rowid, "table of", cursor bulk fetches and "forall") and does only one scan of the table to be updated, so (in the absence of indexes) it performs better than the previous approach. Keep in mind that if you have indexes, this is slower *(but using foralls, bulk collects and rowid accesses, it is not that much slower)*, but it can come in handy in cases where things are more complex (for example: when the where condition needs to access data from other tables using complex joins that can't be made faster). There are cases when the "where" is so complex and slow that you really don't want to re-execute it over and over using a "where rownum<=1000" approach.
``` declare type rowid_array is table of rowid; ids rowid_array; cursor cur is select rowid as id from CUSTOMERS where ACTIVE='Y'; begin open cur; loop fetch cur bulk collect into ids limit 1000; exit when ids.count = 0; forall c in ids.first .. ids.last update CUSTOMERS set ACTIVE='N' where rowid = ids(c); commit; end loop; end; ```
``` UPDATE CUSTOMERS SET ACTIVE = 'N' WHERE ACTIVE='Y' AND ROWNUM <= 10000; -- first 10k rows ```
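The batch-and-commit loop in the answers translates to most environments. Here is a hedged sketch using Python's `sqlite3` with a made-up `customers` table: `LIMIT` inside a `rowid IN (...)` subquery plays the role of `rownum <= 1000`, and the loop stops when an update touches no rows (the `sql%notfound` check).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, active TEXT)")
conn.executemany("INSERT INTO customers (active) VALUES (?)", [("Y",)] * 25)

BATCH = 10
batches = 0
while True:
    cur = conn.execute(
        """UPDATE customers SET active = 'N'
           WHERE rowid IN (SELECT rowid FROM customers
                           WHERE active = 'Y' LIMIT ?)""",
        (BATCH,),
    )
    conn.commit()          # commit after every batch, as in the PL/SQL loop
    if cur.rowcount == 0:  # nothing left to update -> done
        break
    batches += 1

remaining = conn.execute(
    "SELECT COUNT(*) FROM customers WHERE active = 'Y'").fetchone()[0]
print(batches, remaining)  # 3 0
```

With 25 rows and a batch size of 10, three batches (10, 10, 5) do the work before the empty fourth pass ends the loop.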
Updating 1 million records in an Oracle DB in batches of 10k
[ "", "sql", "database", "oracle", "plsql", "sqlplus", "" ]
I'm trying to perform a simple `IF` statement. I have a table named 'products' that has a column `count` which is a current count of that specific product in stock. The `count` column in the database is of data type `INT`. I can only get this to work if I statically assign the variable `@count`. If I assign a select statement to `@count` the script fails. Even though running the query alone will return a result. Working query: ``` DECLARE @count INT SET @count = 2 IF (@count > 1) BEGIN PRINT 'It works!' END ``` Failing query: ``` DECLARE @count INT SET @count = (SELECT TOP 1 count from products WHERE count > 1) IF (@count > 1) BEGIN PRINT 'It works!' END ```
Try: ``` DECLARE @count INT SET @count = isnull((SELECT TOP 1 [count] from products WHERE [count] > 1),0) IF (@count > 1) BEGIN PRINT 'It works!' END ``` `count` is a [SQL keyword](https://msdn.microsoft.com/en-us/library/ms189822.aspx), which is why you need to use `[]` when you use a keyword as a column name.
You'd normally use EXISTS ``` IF EXISTS (SELECT * from products WHERE count > 1) BEGIN PRINT 'It works!' END ```
SQL Server : Not So Simple IF Statement
[ "", "sql", "sql-server", "database", "" ]
I want to use IF-ELSE in an INSERT query in a stored procedure. I have tried this code and am getting an error. What's wrong? ``` create procedure sp_m_ii_2_kaydet @mtu int, @mod int, @moid int, @uid int as insert into m_ii2_alisfatura (muhtur, mturid, muhodm, modmid, uid) values (@mtu, IF (@mtu = 3) SELECT MAX(id) FROM m_ii1_vsf_personel ELSE IF (@mtu = 2) SELECT MAX(id) FROM m_ii1_vsf_firma ELSE IF (@mtu = 4) SELECT MAX(id) FROM m_ii1_vsf_taseron, @mod, @moid, @uid) ```
That's not valid syntax; try this. You need to use three different insert statements: ``` CREATE PROCEDURE Sp_m_ii_2_kaydet @mtu INT, @mod INT, @moid INT, @uid INT AS IF ( @mtu = 3 ) INSERT INTO m_ii2_alisfatura (muhtur,mturid,muhodm,modmid,uid) SELECT @mtu,Max(id),@mod,@moid,@uid FROM m_ii1_vsf_personel ELSE IF ( @mtu = 2 ) INSERT INTO m_ii2_alisfatura (muhtur,mturid,muhodm,modmid,uid) SELECT @mtu,Max(id),@mod,@moid,@uid FROM m_ii1_vsf_firma ELSE IF ( @mtu = 4 ) INSERT INTO m_ii2_alisfatura (muhtur,mturid,muhodm,modmid,uid) SELECT @mtu,Max(id),@mod,@moid,@uid FROM m_ii1_vsf_taseron ``` or use `union all`, filtering the data in the where clause ``` CREATE PROCEDURE Sp_m_ii_2_kaydet @mtu INT, @mod INT, @moid INT, @uid INT AS INSERT INTO m_ii2_alisfatura (muhtur,mturid,muhodm,modmid,uid) SELECT @mtu,Max(id),@mod,@moid,@uid FROM m_ii1_vsf_personel WHERE @mtu = 3 UNION ALL SELECT @mtu,Max(id),@mod,@moid,@uid FROM m_ii1_vsf_firma WHERE @mtu = 2 UNION ALL SELECT @mtu,Max(id),@mod,@moid,@uid FROM m_ii1_vsf_taseron WHERE @mtu = 4 ``` or even a `case` statement ``` CREATE PROCEDURE Sp_m_ii_2_kaydet @mtu INT, @mod INT, @moid INT, @uid INT AS INSERT INTO m_ii2_alisfatura (muhtur,mturid,muhodm,modmid,uid) SELECT @mtu,CASE WHEN @mtu = 3 THEN (SELECT Max(id) FROM m_ii1_vsf_personel) WHEN @mtu = 2 THEN (SELECT Max(id) FROM m_ii1_vsf_firma) WHEN @mtu = 4 THEN (SELECT Max(id) FROM m_ii1_vsf_taseron) END,@mod,@moid,@uid ``` **Note:** The first two queries will not insert any record if none of the conditions match, but the case statement will insert a row with a NULL value even when none of the conditions is matched.
I am getting wrong MAX(ID) values with this combined code; ``` ...procedure [dbo].[sp_m_ii_2_kaydet] @mtu int, @mod int, @uid int as declare @mtuid int declare @modid int if (@mtu = 3 and @mod = 1) set @mtuid = (select MAX(id) from m_ii1_vsf_personel) set @modid = (select MAX(id) from m_kasa_personel); if (@mtu = 3 and @mod = 2) set @mtuid = (select MAX(id) from m_ii1_vsf_personel) set @modid = (select MAX(id) from m_hesap_personel); if (@mtu = 3 and @mod = 3) set @mtuid = (select MAX(id) from m_ii1_vsf_personel) set @modid = (select MAX(id) from m_senet_personel); if (@mtu = 3 and @mod = 4) set @mtuid = (select MAX(id) from m_ii1_vsf_personel) set @modid = (select MAX(id) from m_cekcikis_personel); if (@mtu = 2 and @mod = 1) set @mtuid = (select MAX(id) from m_ii1_vsf_firma) set @modid = (select MAX(id) from m_kasa_firma); if (@mtu = 2 and @mod = 2) set @mtuid = (select MAX(id) from m_ii1_vsf_firma) set @modid = (select MAX(id) from m_hesap_firma); if (@mtu = 2 and @mod = 3) set @mtuid = (select MAX(id) from m_ii1_vsf_firma) set @modid = (select MAX(id) from m_senet_firma); if (@mtu = 2 and @mod = 4) set @mtuid = (select MAX(id) from m_ii1_vsf_firma) set @modid = (select MAX(id) from m_cekcikis_firma); if (@mtu = 4 and @mod = 1) set @mtuid = (select MAX(id) from m_ii1_vsf_taseron) set @modid = (select MAX(id) from m_kasa_taseron); if (@mtu = 4 and @mod = 2) set @mtuid = (select MAX(id) from m_ii1_vsf_taseron) set @modid = (select MAX(id) from m_hesap_taseron); if (@mtu = 4 and @mod = 3) set @mtuid = (select MAX(id) from m_ii1_vsf_taseron) set @modid = (select MAX(id) from m_senet_taseron); if (@mtu = 4 and @mod = 4) set @mtuid = (select MAX(id) from m_ii1_vsf_taseron) set @modid = (select MAX(id) from m_cekcikis_taseron); insert into m_ii2_alisfatura (muhtur, mturid, muhodm, modmid, uid) values (@mtu, @mtuid, @mod, @modid, @uid) ```
Using IF - ELSE in SQL Server Insert Query
[ "", "sql", "sql-server", "t-sql", "" ]
What are alternatives to UNION ALL for joining many tables to see aggregated data from many source systems? I am looking for a solution beyond T-SQL. Suppose we have 3 source tables and we want to join them by id: TableA ``` id Adata 1 8383 2 2707 3 6181 4 6708 ``` TableB ``` id Bdata 1 2669 1 8582 2 6335 2 7355 3 7355 3 2277 4 2789 4 8566 ``` TableC ``` id Cdata 1 2856 2 1364 3 4476 4 3311 4 8073 ``` **The desired result table I want to get:** [![enter image description here](https://i.stack.imgur.com/hXno7.png)](https://i.stack.imgur.com/hXno7.png) **The primitive way I do now to get it (by UNION ALL):** [![enter image description here](https://i.stack.imgur.com/IlQ8U.png)](https://i.stack.imgur.com/IlQ8U.png) Reaching the desired results by creating a huge table consisting of many tables under each other and many nulls seems to be a very frustrating and primitive solution if we have many columns and many tables. I think this is a common problem if we want to join different information systems like accounting, payroll, purchasing etc. The final aim is to give an Excel user aggregated data with a chance to drill down, limited to the selected table. For example, by double-clicking the first Bdata value 11251 in the desired results table, the user is interested in seeing only the data that make up that figure. But instead the user gets this: [![enter image description here](https://i.stack.imgur.com/salNG.png)](https://i.stack.imgur.com/salNG.png) **Update.** I want to preserve the ability to drill down into the data. So I want to be able to see all details for each ID if I double-click the results table in Excel.
With your example, a simple join seems to be enough, why would you need to `UNION ALL` ? ``` SELECT A.id, SUM(A.Adata), SUM(B.Bdata), SUM(C.Cdata) FROM TableA A JOIN TableB B ON A.id = B.id JOIN TableC C ON A.id = C.id GROUP BY A.id ``` If you want to show the overall total, you can use `WITH ROLLUP`: ``` SELECT ISNULL(sub.id,'Total') AS "id", sub.Adata, sub.Bdata, sub.Cdata FROM ( SELECT A.id, SUM(A.Adata) AS Adata, SUM(B.Bdata) AS Bdata, SUM(C.Cdata) AS Cdata FROM TableA A JOIN TableB B ON A.id = B.id JOIN TableC C ON A.id = C.id GROUP BY A.id WITH ROLLUP) sub ```
You are mixing concepts. UNION ALL is something rarely needed and used. You use it when gluing similar result sets together. This is not the case here. In order to join tables, you should use joins of course. You get a result row per group you state with GROUP BY (the ID in your case). You use aggregation functions such as SUM, MAX, COUNT, etc. to aggregate data. The query to write depends on whether all IDs are to be present in table A and the other tables. The difference is mainly the kind of join used then. **ID must be present in all tables:** ``` select id, sum(a.adata), sum(b.bdata), sum(c.cdata) from a join b using (id) join c using (id) group by id; ``` **ID must be present in table a only:** ``` select id, sum(a.adata), coalesce(sum(b.bdata),0), coalesce(sum(c.cdata),0) from a left join b using (id) left join c using (id) group by id; ``` **ID doesn't have to exist in any particular table:** ``` select id, coalesce(sum(a.adata),0), coalesce(sum(b.bdata),0), coalesce(sum(c.cdata),0) from a full outer join b using (id) full outer join c using (id) group by id; ``` EDIT: I should add that SQL Server doesn't support the USING clause (which is standard SQL2003). You can replace it with an ON clause, which is easy as long as you don't need full outer joins, which are much more complicated without a using clause. And: you get a sum row at the end by using `group by rollup(id)` instead of only `group by id`.
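One caveat worth checking against either answer: in the question's sample data, some ids have several rows in more than one of the joined tables, and a plain multi-way join multiplies those rows before `SUM`, inflating the totals. Pre-aggregating each table first avoids this. A hedged demonstration on a made-up two-table extract (id 4 from the question's TableB and TableC), run on SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE b (id INT, bdata INT);
    CREATE TABLE c (id INT, cdata INT);
    INSERT INTO b VALUES (4, 2789), (4, 8566);  -- two B rows for id 4
    INSERT INTO c VALUES (4, 3311), (4, 8073);  -- two C rows for id 4
""")

# Plain join: the 2x2 row combinations double every per-table sum.
joined = conn.execute("""
    SELECT SUM(b.bdata), SUM(c.cdata)
    FROM b JOIN c ON b.id = c.id
    GROUP BY b.id
""").fetchone()

# Pre-aggregating each table first keeps the sums correct.
pre = conn.execute("""
    SELECT tb.bsum, tc.csum
    FROM (SELECT id, SUM(bdata) AS bsum FROM b GROUP BY id) tb
    JOIN (SELECT id, SUM(cdata) AS csum FROM c GROUP BY id) tc ON tb.id = tc.id
""").fetchone()

print(joined, pre)  # (22710, 22768) (11355, 11384)
```

The expected per-id sums are 2789+8566 = 11355 and 3311+8073 = 11384; the raw join reports double.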
Alternatives to UNION ALL to get aggregate data
[ "", "sql", "sql-server", "join", "union-all", "" ]
In this query it is partitioning the records, but I need to assign a row number across the whole result set; a row number should not be repeated, it should be unique for each row. Can anyone help? Thanks in advance. ``` select * from Store order by row_number() over (partition by category order by storename),category ``` Table column sample in SQLFIDDLE: <http://sqlfiddle.com/#!6/767ab/86> I want output in the format of: ``` RowNo Category 1 Fruits 2 Chocs 3 Vegetables 4 Fruits 5 Chocs 6 Vegetables .......... ```
Try something like this: ``` WITH data AS ( SELECT ROW_NUMBER() OVER (PARTITION BY Category ORDER BY Category) rn, * FROM STORE ) SELECT ROW_NUMBER() OVER (ORDER BY rn, Category) rn , StoreId , StoreName , Category , Price , Quantity , Address FROM data ORDER BY 1 ``` This is producing the following sequence: ``` 1 CHOCOS 2 FRUITS 3 ICECREAM 4 VEGETABLES 5 CHOCOS 6 FRUITS 7 ICECREAM 8 VEGETABLES 9 CHOCOS 10 FRUITS ```
``` SELECT ROW_NUMBER() OVER (ORDER BY Category) AS Id, -- New Id * FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY Category ORDER BY StoreName) PartitionId, * FROM Store ) data WHERE PartitionId = 1 -- Get one record for each category ``` [SQL Fiddle](http://sqlfiddle.com/#!6/767ab/122)
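Both answers follow the same two-step shape: number the rows within each category, then number the interleaved result globally. A hedged sketch of that shape on SQLite (3.25+ for window functions) via Python, with made-up store rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE store (storename TEXT, category TEXT);
    INSERT INTO store VALUES
        ('s1','Fruits'), ('s2','Fruits'),
        ('s3','Chocs'),  ('s4','Chocs'),
        ('s5','Vegetables'), ('s6','Vegetables');
""")

# Inner ROW_NUMBER restarts per category; the outer one runs over the
# interleaved (rn, category) ordering, giving a unique global number.
rows = conn.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY rn, category) AS rowno, category
    FROM (SELECT category,
                 ROW_NUMBER() OVER (PARTITION BY category
                                    ORDER BY storename) AS rn
          FROM store)
    ORDER BY rowno
""").fetchall()
print(rows)
```

The categories cycle Chocs, Fruits, Vegetables, Chocs, ... while `rowno` stays strictly 1..6.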
Assigning unique ROW NUMBER() for all the PARTITIONED RECORDS
[ "", "sql", "sql-server", "window-functions", "" ]
Every time I run this part of my select statement I get a divide-by-zero exception. I tried replacing ISNULL with NULLIF; same error. Here is my code: ``` isnull([Balance], 0) * isnull(sce.tradepoundsperunit, 0) * (isnull(limitallocation_limitcommodity.priceperpound, 0) / CASE WHEN ISNULL(limit_limitcommodity.priceperpound, 1) = 0 THEN 1 ELSE ISNULL(limit_limitcommodity.priceperpound, 1) END ) / isnull(CASE WHEN ISNULL(l.PoundsPerUnit, 1) = 0 THEN 1 ELSE ISNULL(l.PoundsPerUnit, 1) END * ISNULL(targetu.bushelsperunit, 1) ,1) ``` AS Limitconvertedbalance,
I think any of the clauses: ISNULL(limit\_limitcommodity.priceperpound, 1) ISNULL(l.PoundsPerUnit, 1) ISNULL(targetu.bushelsperunit, 1) could be returning 0 because you are only checking for null and not zero. e.g. if l.PoundsPerUnit=0 then checking ISNULL(l.PoundsPerUnit,1) is still going to return zero. I think that something like this should solve your problem. Instead of checking for null, it checks for both null and zero for all denominators and inserts 1 instead. ``` isnull([Balance],0) * isnull(sce.tradepoundsperunit,0) * ( isnull(limitallocation_limitcommodity.priceperpound,0) / CASE WHEN limit_limitcommodity.priceperpound IS NULL OR limit_limitcommodity.priceperpound=0 THEN 1 ELSE limit_limitcommodity.priceperpound END ) / ( CASE WHEN l.PoundsPerUnit IS NULL OR l.PoundsPerUnit =0 THEN 1 ELSE l.PoundsPerUnit END * CASE WHEN targetu.bushelsperunit IS NULL OR targetu.bushelsperunit=0 THEN 1 ELSE targetu.bushelsperunit END ) AS Limitconvertedbalance, ```
I believe the answer may be in this bit: ``` * ISNULL(targetu.bushelsperunit, 1) ``` If targetu.bushelsperunit is 0, the code could be executing that \* before it executes the previous /, resulting in a divide by 0. Is that field meant to be in the denominator?
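A compact alternative to the `CASE` guards, assuming the denominators may be zero or NULL, is `x / NULLIF(y, 0)` wrapped in `COALESCE`; SQL Server supports both functions. A sketch of the pattern on SQLite via Python (SQLite itself returns NULL on division by zero rather than raising, so this only illustrates how NULLIF/COALESCE behave):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULLIF turns a zero denominator into NULL (so the division yields NULL
# instead of failing), and COALESCE maps that NULL back to a default.
guarded = conn.execute(
    "SELECT COALESCE(10.0 / NULLIF(?, 0), 0)", (0,)
).fetchone()[0]
normal = conn.execute(
    "SELECT COALESCE(10.0 / NULLIF(?, 0), 0)", (4,)
).fetchone()[0]
print(guarded, normal)  # 0 2.5
```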
Divide by zero error in Sql View
[ "", "sql", "select", "view", "dividebyzeroexception", "" ]
We are using a 3-Tier Architecture in ASP.Net. There are 3 layers: 1. Presentation 2. Business 3. Data Access. The Data Access Layer contains the GetData and ExecuteQuery etc. functions. What I want to know: I want to call the view directly from the Presentation Layer. Is there any chance of SQL injection in calling a view from the front-end without using a stored procedure? Presentation Layer (C#) ``` protected void btnView_Click(object sender, EventArgs e) { DL obj = new DL(); DataTable tb = new DataTable(); string query = "select * from ViewTeacher where FID = " + txtName.Text; tb = obj.GetData(query); } ``` DBAccess ``` public DataTable GetData(string query) { DataTable datatable = new DataTable(); SqlCommand cmd = new SqlCommand(); cmd.Connection = con; cmd.CommandText = query; try { if (cmd.Connection.State != ConnectionState.Open) { cmd.Connection.Open(); } using (SqlDataAdapter da = new SqlDataAdapter(cmd)) { da.Fill(datatable); } } catch (Exception ex) { throw new ArgumentException(ex.Message); } return datatable; } ```
How are you "calling a view"? If you're running an ad-hoc query of: ``` SELECT <columns> FROM View WHERE ColumnX = 'Y' ``` and if that query is being constructed using (potentially) hostile input then yes, of course that can be subject to SQL injection - the whole point of injection is that the attacker can change the nature of the query: ``` SELECT <columns> FROM View WHERE ColumnX = 'Z' UNION ALL SELECT name,0,0,0,0 FROM INFORMATION_SCHEMA.TABLES --' ``` The attacker isn't limited to just the objects that are present in the original query. --- The untrustworthy input in the two above queries was: ``` Y ``` and ``` Z' UNION ALL SELECT name,0,0,0,0 FROM INFORMATION_SCHEMA.TABLES -- ```
As you are writing the query as follows, taking the value from a textbox, there is 100% a possibility of SQL injection. ``` string query = "select * from ViewTeacher where FID = " + txtName.Text; ```
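The standard fix is to stop concatenating user input into the SQL text and pass it as a parameter instead (in the question's ADO.NET code, via `cmd.Parameters`). The contrast can be demonstrated with Python's `sqlite3` and a made-up table: the concatenated payload changes the query, the parameterized one cannot.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE viewteacher (fid TEXT, name TEXT);
    INSERT INTO viewteacher VALUES ('1', 'alice'), ('2', 'bob');
""")

hostile = "'1' OR '1'='1'"  # classic injection payload

# Concatenation splices the payload into the SQL text (the vulnerable pattern).
injected = conn.execute(
    "SELECT name FROM viewteacher WHERE fid = " + hostile
).fetchall()

# A parameter placeholder treats the whole payload as one literal value.
safe = conn.execute(
    "SELECT name FROM viewteacher WHERE fid = ?", (hostile,)
).fetchall()

print(len(injected), len(safe))  # 2 0
```

The injected query returns every row; the parameterized query matches nothing because no `fid` equals the literal payload string.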
SQL Injection on Views
[ "", "sql", "sql-injection", "3-tier", "" ]
I require data to be returned from a table in the following format. ``` <Root> <Property name="test1">text1</Property> <Property name="test2">text2</Property> <Property name="test3">text3</Property> <Property name="test4">text4</Property> </Root> ``` I've tried the code from another post [SQL Server for xml path add attributes and values](https://stackoverflow.com/questions/25237439/sql-server-for-xml-path-add-attributes-and-values) and can get it to work for a single line but not multiple, e.g. ``` Select 'test1' as [@name], 'text1' FOR XML PATH('Property'), ROOT('Root') ``` Works, giving ``` <Root> <Property name="test1">text1</Property> </Root> ``` but ``` Select 'test1' as [@name], 'text1' ,'test2' as [@name], 'text2' ,'test3' as [@name], 'text3' ,'test4' as [@name], 'text4' FOR XML PATH('Property'), ROOT('Root') ``` Fails with: Attribute-centric column '@name' must not come after a non-attribute-centric sibling in XML hierarchy in FOR XML PATH. I can't find how to get the format I require above.
The element name used in PATH is repeated for each row returned. If you want to repeat the same element name within one row, you need to specify the element name in the column alias, and you need to separate the different elements with an empty column value. ``` select 'test1' as [Property/@name], 'text1' as Property, null, 'test2' as [Property/@name], 'text2' as Property, null, 'test3' as [Property/@name], 'text3' as Property, null, 'test4' as [Property/@name], 'text4' as Property for xml path(''), root('Root'); ```
You can achieve your desired output using following query: ``` select T.Name as 'Property/@name', T.[Text] as 'Property' from ( select 'test1' as Name, 'text1' as [Text] union all select 'test2', 'text2' union all select 'test3', 'text3' union all select 'test4', 'text4' ) as T for xml path(''), root('Root') ```
SQL FOR XML Path, returning multiple child elements
[ "", "sql", "sql-server", "xml", "xpath", "" ]
Lets say I have a table like this: ``` ID_COMPANY ID_EMPLOYEE SALARY COMP0001 EMP0001 200 COMP0001 EMP0002 100 COMP0001 EMP0003 300 COMP0002 EMP0001 200 COMP0002 EMP0003 200 COMP0003 EMP0002 200 ``` I want to add a column to my table so it can `SUM` salary from based on `ID_Employee` like this: ``` ID_COMPANY ID_EMPLOYEE SALARY TOTAL COMP0001 EMP0001 200 400 COMP0001 EMP0002 100 300 COMP0001 EMP0003 300 500 COMP0002 EMP0001 200 400 COMP0002 EMP0003 200 500 COMP0003 EMP0002 200 300 ``` Thanks in advance
You can pre-calculate amounts grouped by employee in a CTE and then join that CTE in your query: ``` ;with cte_Totals as ( select ID_EMPLOYEE, sum(SALARY) as TOTAL from your_table group by ID_EMPLOYEE ) select ID_COMPANY, ID_EMPLOYEE, SALARY, T1.TOTAL from your_Table as T left outer join cte_Totals as T1 on T1.ID_EMPLOYEE = T.ID_EMPLOYEE ```
``` select id_employee, sum(salary) from [table] group by id_employee ``` This will not 'add' the column to the table, but will output the result. Since this data would change if a new record is added it would be better to generate it dynamically with a query. If you must store it, you should store the result in a materialized view or a temporary table, not add the column to the original table.
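Where window functions are available (SQL Server 2005 and later), `SUM(...) OVER (PARTITION BY ...)` puts the per-employee total on every row without a separate CTE or second query. A hedged sketch of that variant on SQLite (3.25+) via Python, using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sample (id_company TEXT, id_employee TEXT, salary INT);
    INSERT INTO sample VALUES
        ('COMP0001','EMP0001',200), ('COMP0001','EMP0002',100),
        ('COMP0001','EMP0003',300), ('COMP0002','EMP0001',200),
        ('COMP0002','EMP0003',200), ('COMP0003','EMP0002',200);
""")

# The window sum is computed per employee but emitted on every detail row.
rows = conn.execute("""
    SELECT id_company, id_employee, salary,
           SUM(salary) OVER (PARTITION BY id_employee) AS total
    FROM sample
    ORDER BY id_company, id_employee
""").fetchall()
print(rows[0])  # ('COMP0001', 'EMP0001', 200, 400)
```

This reproduces the question's expected TOTAL column (EMP0001 totals 400, EMP0002 totals 300, EMP0003 totals 500).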
How to Select X, SUM (X)?
[ "", "sql", "sql-server-2008", "" ]
I need the sum of the employees' money made, grouped into the months between two dates; however, I have a date field and want the name of the month and the year as two separate fields. This query shows the un-grouped information, minus the year and month in the formats required. Once I have the employees' money made for each month, I need it sorted by name (already done) and by month/year within each employee. ``` SELECT Employee.Name , Employee.ID , Sales.Date , Sales.Money FROM Database.Sales.Sales INNER JOIN Database.Employee.Employee ON Sales.ID=Employee.ID WHERE Sales.Date BETWEEN '2000-01-01' AND '2001-01-01' ORDER BY Employee.Name ``` Thanks
Here is a query: ``` SELECT Employee.Name , Employee.ID , Sum(Sales.Money) , Year(Sales.Date) , Month(Sales.Date) FROM Database.Sales.Sales INNER JOIN Database.Employee.Employee ON Sales.ID=Employee.ID WHERE Sales.Date BETWEEN '2000-01-01' AND '2001-01-01' group by Employee.Name , Employee.ID , Year(Sales.Date) , Month(Sales.Date) ORDER BY Employee.Name, Year(Sales.Date), Month(Sales.Date) ```
There are `Month()` and `Year()` functions you can use and group by. Incorporate something like this into your query: ``` SELECT Employee.Name, Employee.ID, MONTH(Sales.Date) as `month`, YEAR(Sales.Date) as `year`, SUM(Sales.Money) FROM Database.Sales.Sales INNER JOIN Database.Employee.Employee ON Sales.ID=Employee.ID WHERE Sales.Date BETWEEN '2000-01-01' AND '2001-01-01' GROUP by Employee.Name , Employee.ID, `month` , `year` ORDER BY Employee.Name ```
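The group-by-year-and-month shape in both answers ports to other engines. A hedged sketch on SQLite via Python, where `strftime('%Y', ...)` and `strftime('%m', ...)` stand in for `YEAR()`/`MONTH()`; the sales rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (name TEXT, date TEXT, money INT);
    INSERT INTO sales VALUES
        ('bob', '2000-01-05', 10), ('bob', '2000-01-20', 15),
        ('bob', '2000-02-03', 7),  ('ken', '2000-01-09', 4);
""")

# One output row per (employee, year, month), with the month's total.
rows = conn.execute("""
    SELECT name,
           CAST(strftime('%Y', date) AS INTEGER) AS yr,
           CAST(strftime('%m', date) AS INTEGER) AS mon,
           SUM(money) AS total
    FROM sales
    GROUP BY name, yr, mon
    ORDER BY name, yr, mon
""").fetchall()
print(rows)
# [('bob', 2000, 1, 25), ('bob', 2000, 2, 7), ('ken', 2000, 1, 4)]
```

Month names (rather than numbers) would come from a formatting function such as T-SQL's `DATENAME(month, ...)`.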
SQL Grouping by Month on dates fields
[ "", "mysql", "sql", "sql-server", "t-sql", "date", "" ]
I have to display a table of data sorted by the following condition. First should be: Private with field1 and field2 matching; Private with field1 matching; Private; then Non-Private with field1 and field2 matching; Non-Private with field1 matching; Non-Private. I'm very weak in SQL, and the only ORDER BY I know is ``` select * from sampleTable order by field1 ``` I know how to sort the data according to this order using C# and jQuery. Is there any way to accomplish the same using T-SQL? The table is like this ``` id field1 field2 private --------------------------- 316 test1 test 1 319 test2 NULL 0 320 test3 NULL 0 321 test4 test4 1 322 NULL NULL 0 323 NULL NULL 1 ```
You could try something like: ``` SELECT * FROM sampleTable ORDER BY private DESC, CASE WHEN field1 IS NULL THEN 1 ELSE 0 END, CASE WHEN field2 IS NULL THEN 1 ELSE 0 END ``` I'm assuming 'Match' means 'Not Null'? **UPDATE** I should point out that if non-null values in field1 and field2 are to be sorted too, then an additional sort clause is needed: ``` SELECT * FROM sampleTable ORDER BY private DESC, CASE WHEN field1 IS NULL THEN 1 ELSE 0 END, field1, CASE WHEN field2 IS NULL THEN 1 ELSE 0 END, field2 ``` So, to explain it in a little more detail: The `private` field comes first, and because it's sorted in descending order, rows marked private will come first. Effectively, we've now got two groups of rows that each move on to the next sort expression. Unfortunately, sorting each by `field1` or `field2` will put NULLs first, which is not what you want. The `CASE` expression effectively sorts by '1' if it's NULL, and a '0' otherwise. This moves all the NULLs to the end, because 1 > 0. Now that they've been separated, we can further sort each group by the field value.
I completely misunderstood your question at first; here is a new answer: ``` SELECT * from sampleTable ORDER BY private desc, CASE WHEN field1=field2 THEN 2 WHEN field1>'' THEN 1 END desc ``` Again the meaning of `'Match'` is unclear. But seeing you have accepted Michael's answer already, I guess what you really mean is "is anything but not null". Then my version would be: ``` SELECT * from sampleTable ORDER BY private desc, CASE WHEN field1>='' THEN 1 ELSE 0 END +CASE WHEN field2>='' THEN 1 ELSE 0 END desc, field1, field2 ```
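The accepted `CASE`-based ORDER BY can be verified against the question's sample table. A sketch on SQLite via Python; the ORDER BY expressions carry over unchanged from the T-SQL answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INT, field1 TEXT, field2 TEXT, private INT);
    INSERT INTO t VALUES
        (316,'test1','test',1), (319,'test2',NULL,0), (320,'test3',NULL,0),
        (321,'test4','test4',1), (322,NULL,NULL,0), (323,NULL,NULL,1);
""")

# private rows first; within each group, non-NULL field1 before NULL,
# then non-NULL field2 before NULL, with value ordering as tiebreaker.
ids = [r[0] for r in conn.execute("""
    SELECT id FROM t
    ORDER BY private DESC,
             CASE WHEN field1 IS NULL THEN 1 ELSE 0 END, field1,
             CASE WHEN field2 IS NULL THEN 1 ELSE 0 END, field2
""")]
print(ids)  # [316, 321, 323, 319, 320, 322]
```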
ORDER BY using specific order in SQL Server 2012
[ "", "sql", "sql-server-2012", "" ]
I have the following SELECT statements in a stored procedure: ``` SELECT COUNT(*) AS TotalCreditDenied FROM Table1 WHERE StatusId = 8 AND ManufacturerId = 1; SELECT COUNT(*) AS TotalCreditApproved FROM Table1 WHERE StatusId = 7 AND ManufacturerId = 1; SELECT SUM(CreditInvoiceAmount) AS TotalCreditPaid FROM Table1 WHERE ManufacturerId = 1 SELECT SUM(ApproxCreditDue) AS OutstandingBalance FROM Table1 WHERE StatusId = 3 AND ManufacturerId = 1; SELECT AVG(DATEDIFF(day, ReturnDate, CreditInvoiceDate)) AS AverageTimeToPay FROM Table1 WHERE ManufacturerId = 1 ``` I am trying to find the fastest and least costly way to get these return values back as a single result set with one row. Right now, it returns 5 result sets. I can use `UNION` to get one column of all the values, but how could I get the return values in 5 columns and one row. Additionally, is there any way to execute these queries so the database only has to be scanned once instead of 5 times? Thanks!
You can do this with a `SELECT ... WHERE ManufacturerId = 1` and `CASE` ``` SELECT COUNT(CASE WHEN StatusId = 8 THEN 1 END) AS TotalCreditDenied, COUNT(CASE WHEN StatusId = 7 THEN 1 END) AS TotalCreditApproved, SUM(CreditInvoiceAmount) AS TotalCreditPaid, SUM(CASE WHEN StatusId = 3 THEN ApproxCreditDue END) AS OutstandingBalance, AVG(DATEDIFF(day, ReturnDate, CreditInvoiceDate)) AS AverageTimeToPay FROM Table1 WHERE ManufacturerId = 1; ```
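To sanity-check the shape of this conditional aggregation, here is a minimal runnable sketch using Python's built-in `sqlite3` (the table and values are invented for the demo; the T-SQL above applies the same CASE-per-aggregate pattern in one pass):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (StatusId INT, Amount REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(8, 10.0), (8, 20.0), (7, 5.0), (3, 40.0)])

# One pass over the table; each aggregate filters its own rows via CASE.
denied, approved, outstanding = conn.execute("""
    SELECT COUNT(CASE WHEN StatusId = 8 THEN 1 END) AS denied,
           COUNT(CASE WHEN StatusId = 7 THEN 1 END) AS approved,
           SUM(CASE WHEN StatusId = 3 THEN Amount END) AS outstanding
    FROM t
""").fetchone()
print(denied, approved, outstanding)  # 2 1 40.0
```

`COUNT` ignores the NULL that a `CASE` without `ELSE` produces, which is what makes each aggregate see only its own subset of rows.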
You should be able to consolidate those into something like the following: ``` Select SUM(CASE StatusId WHEN 8 THEN 1 ELSE 0 END) as TotalCreditDenied , SUM(CASE StatusId WHEN 7 THEN 1 ELSE 0 END) as TotalCreditApproved , SUM(CreditInvoiceAmount) as TotalCreditPaid , SUM(CASE StatusId WHEN 3 THEN ApproxCreditDue ELSE 0 END) as OutstandingBalance , AVG(DATEDIFF(day,ReturnDate, CreditInvoiceDate)) as AverageTimeToPay from Table1 where ManufacturerId = 1; ```
Least costly method of combining multiple select statements to output one row
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table of `posts` like this

```
+--------------------+--------------+
| Field              | Type         |
+--------------------+--------------+
| id                 | int(11)      |
| title              | varchar(255) |
| body               | text         |
| published_at       | datetime     |
+--------------------+--------------+
```

What I want to achieve is to `order` by `published_at`. Normally I would do:

```
SELECT * FROM posts ORDER BY published_at;
```

But my requirement here is that the query should fetch the results from the current date on top, then the previous ones, and after that fetch those from the future. Currently my results are as follows:

```
+-------------------------------+----+---------------------+
| title                         | id | published_at        |
+-------------------------------+----+---------------------+
| Hello world                   | 1  | 2015-01-06 12:21:16 |
| 20+ Tools For RoR Development | 2  | 2015-08-25 12:21:23 |
| Angular JS tutorial           | 3  | 2015-09-31 10:51:55 |
| Visual search                 | 4  | 2015-03-12 12:27:26 |
| Ruby on Rails best practices  | 5  | 2015-01-21 00:00:00 |
+-------------------------------+----+---------------------+
```

Whereas my desired outcome would be:

```
+-------------------------------+----+---------------------+
| title                         | id | published_at        |
+-------------------------------+----+---------------------+
| 20+ Tools For RoR Development | 2  | 2015-08-25 12:21:23 |
| Hello world                   | 1  | 2015-01-06 12:21:16 |
| Ruby on Rails best practices  | 5  | 2015-01-21 00:00:00 |
| Visual search                 | 4  | 2015-03-12 12:27:26 |
| Angular JS tutorial           | 3  | 2015-09-31 10:51:55 |
+-------------------------------+----+---------------------+
```
Solution without using UNION/CASE:

```
SELECT *
FROM posts
ORDER BY DATE(published_at) = DATE(NOW()) DESC,
         DATE(published_at) < DATE(NOW()) DESC,
         DATE(published_at) > DATE(NOW()) ASC
```

Check if this works; it's working fine with simulated data. You can change `DESC` to `ASC` according to your sorting requirement for past and future dates.
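The trick relies on comparisons evaluating to 0/1, which also holds in SQLite, so the logic can be checked deterministically from Python. Assumptions in this sketch: a fixed `today` stands in for `DATE(NOW())`, a final `published_at` tiebreaker is added so each group is internally ordered, and the sample's impossible Sep 31 date is nudged to Sep 30:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INT, published_at TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", [
    (1, '2015-01-06'), (2, '2015-08-25'), (3, '2015-09-30'),
    (4, '2015-03-12'), (5, '2015-01-21'),
])

today = '2015-08-25'  # stand-in for DATE(NOW()) so the demo is deterministic
ids = [r[0] for r in conn.execute("""
    SELECT id FROM posts
    ORDER BY (published_at = ?) DESC,   -- today's posts first
             (published_at < ?) DESC,   -- then past posts...
             published_at ASC           -- ...oldest first; future posts last
""", (today, today))]
print(ids)  # [2, 1, 5, 4, 3]
```

The output matches the desired ordering from the question: today's post, then past posts, then the future one.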
There is a possible solution without UNION (ALL):

```
select id, title, published_at,
  case
    when date(published_at) = curdate() then '1-now'
    when date(published_at) < curdate() then '2-past'
    else '3-future'
  end as order_group
from t
order by order_group asc, published_at asc;
```

<http://sqlfiddle.com/#!9/59c48/2>

It needs a filesort to run, so it may not be as fast as UNION ALL for bigger data. UNION has to remove duplicates, so I expect it to be similar.
Sorting data according to current date
[ "", "mysql", "sql", "" ]
I support a SQL database for a third party software package. They have a lot of what they call "Shadow Tables", really just audit tables. This is all fine and good but their system does not clean up these tables so it is up to me to do so. They also add new "Shadow Tables" without notice with every upgrade. The old way we were purging the tables was with a long list of `DELETE FROM` statements but this list has become very long and hard to maintain. To try to make the purge process easier to maintain and automatically catch new "Shadow Tables" I wrote the following stored procedure. The stored procedure works but I would prefer to figure out a way without using a cursor and dynamic queries since this will be running daily on a lot of different tables. Is there an alternative way of doing this without using a cursor and dynamic queries? ``` DECLARE @workingTable varchar(128); DECLARE @sqlText varchar(250); DECLARE @CheckDate DATETIME = DATEADD(yy, -2, GETDATE()); DECLARE curKey SCROLL CURSOR FOR SELECT name AS TableName FROM dataTEST.sys.tables WHERE (name like '%[_]h' OR name like '%[_]dh') ORDER BY name OPEN curKey WHILE @@fetch_status = 0 BEGIN FETCH NEXT FROM curKey INTO @workingTable SET @sqlText = 'DELETE FROM DataTEST.dbo.' + @workingTable + ' WHERE LAST_MOD < ''' + CONVERT(CHAR(10), @CheckDate, 101) + ''';' --PRINT @sqlText EXEC (@sqlText) END CLOSE curKey DEALLOCATE curKey ```
I do not know of any way to get away from dynamic SQL when you do not know the table names ahead of time. SQL Server has a feature where you can do variable assignment in a `select` statement, once for each row returned. This can be used to eliminate the cursor and pass one string with all the `delete` statements to SQL Server to execute

```
DECLARE @sqlText nvarchar(MAX) = ''; -- initialize because NULL + 'x' is NULL
DECLARE @CheckDate DATETIME = DATEADD(YEAR, -2, GETDATE());

SELECT @sqlText = @SqlText
    + 'DELETE FROM dataTEST.dbo.' + QUOTENAME(name)
    + ' WHERE LAST_MOD < @CheckDate ; '
FROM dataTEST.sys.tables
WHERE (name like '%[_]h' OR name like '%[_]dh')
ORDER BY name

IF @@ROWCOUNT > 0
   EXEC sp_executesql @sqlText
      , N'@CheckDate DATETIME'
      , @CheckDate
```
I don't think using a cursor and dynamic query here is a bad idea. One way is to append the delete queries and execute them at the end, after generating all the delete queries. By the way, the cursor is just used for framing the dynamic query, so it is not a big deal

```
DECLARE @workingTable varchar(128);
DECLARE @sqlText nvarchar(max)='';
DECLARE @CheckDate DATETIME = DATEADD(yy, -2, GETDATE());

DECLARE curKey SCROLL CURSOR FOR
    SELECT name AS TableName 
    FROM dataTEST.sys.tables
    WHERE (name like '%[_]h' OR name like '%[_]dh')
    ORDER BY name
OPEN curKey

WHILE @@fetch_status = 0
    BEGIN
        FETCH NEXT FROM curKey INTO @workingTable
        SET @sqlText += 'DELETE FROM DataTEST.dbo.' + @workingTable + ' WHERE LAST_MOD < ''' + CONVERT(CHAR(10), @CheckDate, 101) + ''';'
    END
CLOSE curKey
DEALLOCATE curKey

--PRINT @sqlText
EXEC (@sqlText)
```
Run the same query against multiple tables without dynamic sql
[ "", "sql", "sql-server", "t-sql", "dynamic-sql", "" ]
I'm storing UUID v4 values in a PostgreSQL v9.4 table, under column "id". When I create the table, is there any difference in write or read performance depending on whether I define the "id" column as [VARCHAR(36), CHAR(36)](http://www.postgresql.org/docs/9.4/static/datatype-character.html), or [UUID](http://www.postgresql.org/docs/9.4/static/datatype-uuid.html) data type? Thanks!
Use `uuid`. PostgreSQL has the native type for a reason. It stores the uuid internally as a 128-bit binary field. Your other proposed options store it as hexadecimal, which is very inefficient in comparison. Not only that, but: * `uuid` does a simple bytewise sort for ordering. `text`, `char` and `varchar` consider collations and locales, which is nonsensical for a uuid. * There is only one canonical respresentation of a `uuid`. The same is not true for text etc; you have to consider upper vs lower case hex, presence or absence of `{...-...}`s etc. There's just no question. Use `uuid`. The only other type that makes any sense is `bytea`, which at least can be used to store the 16 bytes of the uuid directly. This is what I'd do if I was using systems that couldn't cope with data types outside the basic set, like a really dumb ORM of some kind.
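The size and canonical-form arguments are easy to verify from client code; for example, in Python, the binary form of a UUID is 16 bytes while its text form is 36 characters, and differently-cased spellings collapse to the same value:

```python
import uuid

u = uuid.uuid4()
print(len(u.bytes), len(str(u)))  # 16 36

# Many textual spellings, one value: case differences disappear on parse.
assert uuid.UUID(str(u).upper()) == uuid.UUID(str(u).lower())
```

That factor-of-2+ size difference applies to every index entry and comparison, which is where the performance gap comes from.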
UUID would be the fastest because it's 128 bits -> 16 bytes and comparisons are done numerically. Char(36) and varchar(36) seem to be the same and slow: <http://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/>. The server has to check, for each character, whether it has reached the end of the value. Also, text comparison is slower than numerical comparison. And because a UUID consists of 16 bytes, comparing UUIDs is much faster than comparing two texts of 36 characters. Use the native UUID type for performance.
Performance difference between UUID, CHAR, and VARCHAR in PostgreSql table?
[ "", "sql", "postgresql", "database-performance", "sqldatatypes", "" ]
I am a software developer and I was recently approached by a DBA to optimize a query that an app of mine is using. The DBA reported that the query takes about 50% of CPU and causes high I/O when it runs. The query is pretty straightforward and I am unsure how to optimize it.

**Question 1:** How can I optimize this query?

**Question 2:** Is it even my job to do so? Shouldn't the DBA be more knowledgeable in this? Mind you, we have no DB developers, just DBAs and software developers.

The DB has approximately 30-50 million records; it is constantly maintained/monitored by the DBA, but I am unsure how. The server is on a dedicated machine and is `Microsoft SQL Server 2005 - 9.00.5057.00 (X64)`

**PS:** Please do not suggest ways to improve the DB by structural changes. I know it's a bad design to have currency stored as varchar, but it is what it is; we can't change the DB structure, only the queries accessing it.

Thank you for any insight.

**Query:**

```
SELECT COALESCE(CAST([PH].[PAmount] AS decimal(15, 2)) + CAST([PH].[Fee] AS decimal(15, 2)), 0.0) AS [PayAmount],
       [PH].[PDate] AS [PayDate]
FROM [History] AS [PH] WITH (NOLOCK)
WHERE [PH].[PMode] IN ('C', 'P')
  AND [PH].[INNO] = 'XYZ'
  AND [PH].[PStatus] IN ('CONSERVED', 'EXPECTING', 'REFRIGERATED', 'POSTPONED', 'FILED')
  AND [PH].[Locked] = 1
  AND [PH].[PDate] >= 'Jan 1, 2015'
ORDER BY [PH].[PDate] ASC
```

**Fields:**

`PAmount` - non-clustered index, `varchar(50)`

`Fee` - not indexed, `decimal(6,2)`

`PDate` - clustered index, `datetime`

`PMode` - non-clustered index, `varchar(5)`

`INNO` - non-clustered index, `varchar(50)`

`PStatus` - non-clustered index, `varchar(50)`

`Locked` - not indexed, `bit`

**Execution plan:**

```
SELECT --- Compute Scalar --- Filter --- Nested Loops ---|--- Index Seek (cost 4%)
Cost 0%    Cost 0%            Cost 0%    (Inner Join)    |
                                         Cost 0%         |--- Key Lookup (cost 96%)
```
It seems like you have a misconception about indexes. Indexes don't combine with each other, so it's not a question of having a column "indexed" or "not indexed". It's not good to have a separate index for individual columns. It's about having indexes with several columns that match up with individual queries. An index on a column won't help a query if it's still more efficient for the database to select on another column first. I'm getting a little stale at this, but for this query I'd recommend an index that looks something like this:

```
CREATE NONCLUSTERED INDEX [ix_History_XXXXX] ON [History]
(
    [INNO] ASC,
    [Locked] ASC,
    [PDate] ASC,
    [PMode] ASC
)
INCLUDE ( PStatus, PAmount, Fee)
```

You may want to swap around PDate, PMode, and PStatus, depending on their [selectivity](http://www.programmerinterview.com/index.php/database-sql/selectivity-in-sql-databases/). When building an index, you want to list the most specific items first. The general idea is that an index stores each successive item in order. With this index, rows for all of the `XYZ` values for `INNO` will be grouped together, and so the query engine can **seek** right to that section of the index. The next most specific column is `Locked`. Even though this is a `bit` value, because it is limited to exactly one value we are still able to **seek** directly to the one specific part of the index that will matter for the entire query. Again: I haven't had to do this kind of thing for a while, so you might do as well listing `PMode` here; I just don't recall whether the Sql Server query optimizer is smart enough to handle the two values in an efficient way. From here on out the best option for the index depends on how much each of the query values limits the results. Since we're no longer able to get all of the results into one space, we're gonna have to **scan** the relevant parts of the index. My instinct here is to use the `Date` value next.
This will allow the scan to walk the index starting with the first date that matches your result, and help it get the records in the correct order, but again: this is just my instinct. You may be able to do better by listing PMode or PStatus first. Finally, the additional columns in the `INCLUDE` clause will allow you to entirely complete this query from the index, without actually going back to the full table. You use an `INCLUDE` clause rather than just appending the values to the index key to avoid making Sql Server rebuild the index for updates to these columns. This is why PStatus, for example, probably should not be part of the main index, if the status is something that can change, and why you *might* be better off also leaving `Locked` out of the index. These are things you'll want to **measure** and test for yourself, though.
I would see if I got better results with ISNULL instead of COALESCE. The other thing is looking at the indexes. You listed the fields that are indexed. If those fields are covered by several indexes, I suggest making one good covering index for this query. A covering index is one where all of the data needed by the query are contained in the index. If an index used by the query is not covering, then there is an extra trip (or trips) to the table to get the rest of the fields. It is more efficient if all of the data is right there in the query. Check out these articles: [What are Covering Indexes and Covered Queries in SQL Server?](https://stackoverflow.com/questions/609343/what-are-covering-indexes-and-covered-queries-in-sql-server) <https://www.simple-talk.com/sql/learn-sql-server/using-covering-indexes-to-improve-query-performance/> For the data that is not part of a join or in the where clause, you can use the include keyword. Included fields are not searchable parts of the index, but will save the trip to the database. Try the the index below. All of the fields in the where clause are part of the searchable part of the index, and all of the returned fields that are not part of the where clause are included. You might need to play with the order after looking at the execution plan, but I took my best guess. ``` Create Nonclustered Index Ix_Ncl_History_CoveringBigSelect on History(PDate, PMode, INNO, PStatus, Locked) Include (PAmount, Fee) ``` Here is an article about included columns.
Optimize SQL query, TSQL
[ "", "sql", "sql-server", "t-sql", "query-optimization", "" ]
I am trying to create a table using following code in oracle 10g and each time I am getting the following error: > ORA-00923: FROM keyword not found where expected The query is as follows: ``` Create table Tab2 nologging as select /*+parallel(a,6)*/ Tab1col1, Tab1col2, MAX(case when tab1Col5 = '21-aug-2015' then Tab1Col3 end) 21AUGBALANCE, MAX(case when tab1Col5 = '22-aug-2015' then Tab1Col3 end) 22AUGBALANCE, MAX(case when tab1Col5 = '23-aug-2015' then Tab1Col3 end) 23AUGBALANCE from Tab1 a GROUP BY msisdn, sdp_node order by msisdn, sdp_node ``` Table 1 has 5 columns namely `tab1Col1, tab1Col2, Tab1Col3, Tab1Col4 and Tab1Col5.` I need to create Tab2 from Tab1 which also has 5 columns 1,2,3,4,5. But what is the error in this code?
try

```
Create table Tab2 nologging as
  select /*+parallel(a,6)*/
    Tab1col1,
    Tab1col2,
    MAX(case when tab1Col5 = '21-aug-2015' then Tab1Col3 end) "21AUGBALANCE",
    MAX(case when tab1Col5 = '22-aug-2015' then Tab1Col3 end) "22AUGBALANCE",
    MAX(case when tab1Col5 = '23-aug-2015' then Tab1Col3 end) "23AUGBALANCE"
  from Tab1 a
  GROUP BY msisdn, sdp_node
  order by msisdn, sdp_node
```

Oracle only supports column names starting with numbers if you quote them; an unquoted identifier must start with a letter. Alternatively, pick different names (e.g. BALANCE21AUG).
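The quoting rule is not Oracle-specific; most engines only accept identifiers that start with a digit when they are quoted. A quick illustration in SQLite via Python, where double quotes delimit identifiers as well:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Quoted: an alias starting with a digit is accepted.
cur = conn.execute('SELECT 42 AS "21AUGBALANCE"')
print(cur.description[0][0], cur.fetchone())  # 21AUGBALANCE (42,)

# Unquoted: the tokenizer rejects an identifier starting with a digit.
try:
    conn.execute("SELECT 42 AS 21AUGBALANCE")
    unquoted_ok = True
except sqlite3.OperationalError:
    unquoted_ok = False
print(unquoted_ok)  # False
```

The exact error text differs per engine, but the quoted/unquoted distinction is the same idea as the Oracle fix above.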
You have a problem with the column aliases; try using double quotes `""` like this

```
Create table Tab2 nologging as
  select /*+parallel(a,6)*/
    Tab1col1,
    Tab1col2,
    MAX(case when tab1Col5 = '21-aug-2015' then Tab1Col3 end) "21AUGBALANCE",
    MAX(case when tab1Col5 = '22-aug-2015' then Tab1Col3 end) "22AUGBALANCE",
    MAX(case when tab1Col5 = '23-aug-2015' then Tab1Col3 end) "23AUGBALANCE"
  from Tab1 a
  GROUP BY msisdn, sdp_node
  order by msisdn, sdp_node
```
Error - Keyword not found where expected
[ "", "sql", "oracle", "ddl", "" ]
I have one table which contains a foreign key column "columnA" and one more column "columnB". I want to prevent adding the same value in "columnB", but only for the same value in "columnA"...

```
columnA    columnB
1          'a'
1          'a'    - this is not allowed
2          'a'    - this is allowed
```

From my perspective the only way to do that is by using a trigger, but I suppose there is a better, more elegant way to enforce this constraint. Do you know the best way to implement this logic?
A Unique constraint would work. ``` alter table TableName add constraint UQ_consrtaint unique(columnA, columnB); ``` That should do it.
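The same composite unique constraint exists in every engine; a runnable check of the behaviour using Python's built-in `sqlite3` (table name invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE t (
        columnA INTEGER,
        columnB TEXT,
        UNIQUE (columnA, columnB)
    )
""")
conn.execute("INSERT INTO t VALUES (1, 'a')")   # ok
conn.execute("INSERT INTO t VALUES (2, 'a')")   # ok: different columnA

try:
    conn.execute("INSERT INTO t VALUES (1, 'a')")  # duplicate (columnA, columnB) pair
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

print(duplicate_rejected)  # True
```

The constraint is on the *pair*, so `'a'` may repeat across different `columnA` values but not within the same one — exactly the rule from the question.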
It looks like you need to create a primary key like this:

```
DECLARE @DataSource TABLE
(
    [A] TINYINT
   ,[B] CHAR
   ,PRIMARY KEY([A], [B])
);

INSERT INTO @DataSource ([A], [B])
VALUES (1, 'a'); -- ok

INSERT INTO @DataSource ([A], [B])
VALUES (2, 'a'); -- ok

INSERT INTO @DataSource ([A], [B])
VALUES (1, 'a'); -- error
```

It will give you the following error:

> Msg 2627, Level 14, State 1, Line 14 Violation of PRIMARY KEY
> constraint 'PK\_\_#B1CFBEC\_\_D86D1834E734E52B'. Cannot insert duplicate
> key in object 'dbo.@DataSource'. The duplicate key value is (1, a).

in the case above. Or a unique constraint on the two columns:

```
DECLARE @DataSource TABLE
(
    [A] TINYINT
   ,[B] CHAR
   ,UNIQUE ([A], [B])
);
```
How to add unique constraint that depends of the foreign key values?
[ "", "sql", "sql-server", "t-sql", "" ]
I need to round off a couple of fields that I extract from Teradata SQL Assistant. Currently I am using `CAST(Field1 as numeric(20,2)) as Field1`. 18.529 is rounded to 18.53, but 36.425 is rounded to 36.42; instead I am expecting 36.43. How can this be achieved?
The rounding rules for CASTs depend on a global setting, *RoundHalfwayMagUp* in dbscontrol. You might try the ROUND function which defaults to the rounding rules you prefer: ``` ROUND(36.425,2) ```
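The same halfway-value surprise is easy to reproduce outside Teradata. In Python, the binary float for 36.425 actually sits slightly *below* 36.425, so plain rounding yields 36.42, while explicit half-up decimal rounding gives the value the question expects:

```python
from decimal import Decimal, ROUND_HALF_UP

print(round(36.425, 2))  # 36.42 — the stored float is really 36.42499...

half_up = Decimal("36.425").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(half_up)  # 36.43
```

This is why an explicit rounding mode (like `ROUND`, or the `RoundHalfwayMagUp` setting above) matters: the default behaviour on halfway values is not what most people expect.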
I found an old post on a forum [here](https://forums.teradata.com/forum/database/how-to-use-round-function-in-teradata) which states that the `RoundHalfwayMagUp` controls whether .5 rounds up or down. See [the docs](http://www.info.teradata.com/HTMLPubs/DB_TTU_13_10/index.html#page/Utilities/B035_1102_109A/DBScontrol.14.734.html#ww15141404) for more info
Issue in rounding off the values extracted from Teradata SQL assistant
[ "", "sql", "decimal", "rounding", "teradata", "" ]
I need to find duplicate record (with master record id and duplicate record ids): ``` select ciid, name from ( select ciid, name, row_number() over ( partition by related_id, name order by updatedate desc) rn ) where rn = 1; ``` This gives me the master record IDs, but it also includes records without duplicates. If I use ``` select ciid, name from ( select ciid, name, row_number() over ( partition by related_id, name order by updatedate desc) rn ) where rn > 1; ``` This gets me all the duplicate records, but not the master record. I was wishing if I do something like: ``` select ciid, name from ( select ciid, name, row_number() over ( partition by related_id, name order by updatedate desc ) rn, count(*) over ( partition by related_id, name order by updatedate desc ) cnt ) where rn = 1 and cnt > 1; ``` But I was worried about the performance, or even is it actually doing what I want. How do I get the master record only for the ones with duplicates? Please note that `name` is not unique column. Only `ciid` is unique.
I haven't tested this (because I don't have real data and am too lazy to create some), but it seems something along these lines might work:

```
with has_duplicates as (
  select related_id, name
  from yourtable
  group by related_id, name
  having count (*) > 1
),
with_dupes as (
  select y.ciid, y.name,
         row_number() over (partition by y.related_id, y.name
                            order by y.updatedate desc) rn
  from yourtable y, has_duplicates d
  where y.related_id = d.related_id
    and y.name = d.name
)
select ciid, name
from with_dupes
where rn = 1
```
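To check the "only groups that actually have duplicates" logic end to end, here is a window-free variant of the same idea (join back to a grouped `MAX`/`COUNT`), runnable on SQLite via Python; the toy data is invented, and the master is taken as the row with the most recent `updatedate`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ciid INT, related_id INT, name TEXT, updatedate TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 10, 'alice', '2015-01-01'),
    (2, 10, 'alice', '2015-06-01'),   # duplicate of the same (related_id, name)
    (3, 20, 'bob',   '2015-03-01'),   # no duplicate -> must not appear
])

masters = conn.execute("""
    WITH grouped AS (
        SELECT related_id, name,
               MAX(updatedate) AS max_updatedate,
               COUNT(*) AS cnt
        FROM t
        GROUP BY related_id, name
    )
    SELECT t.ciid, t.name
    FROM t
    JOIN grouped g
      ON t.related_id = g.related_id
     AND t.name = g.name
     AND t.updatedate = g.max_updatedate
    WHERE g.cnt > 1
""").fetchall()
print(masters)  # [(2, 'alice')]
```

Only the duplicated group survives, and within it only the most recent row — the master.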
I ended up using a query similar to the one in my question:

```
select ciid, name
from (
  select ciid, name,
         row_number() over (
             partition by related_id, name
             order by updatedate desc
         ) rn,
         count(*) over (
             partition by related_id, name
         ) cnt
  )
where rn = 1 and cnt > 1;
```

Works surprisingly well. The master record is where rn = 1 and duplicates are where rn > 1. Make sure the `count(*) over (partition ...)` does not have an `order by` clause.
How to select both row_number and count over partition?
[ "", "sql", "oracle", "window-functions", "" ]
I have done a very simple example with 2 tables:

**Table A: user\_client**

```
id_us | name
---------------
2222  | test
```

and I also have a table for pets **Table B: user\_client\_contact**

```
client_id | country
--------------------
2222      | latvia
2222      | estonia
```

I get the result:

```
id
-----
2222
2222
```

but the result I expect is:

```
id
-----
2222
```

SQL:

```
select 
user_client.id
from user_client
left join user_client_contact on user_client_contact.client_id = user_client.id
```

See the sqlfiddle here: <http://sqlfiddle.com/#!9/270cc/7>
Just use the [distinct](http://www.w3schools.com/sql/sql_distinct.asp) keyword to eliminate the duplicate: ``` select distinct user_client.id from user_client left join user_client_contact on user_client_contact.client_id = user_client.id ``` if you only want `user_client.ids` that have an entry in the `user_client_contact` table, you should be using an `inner join`, not a `left join`: ``` select distinct user_client.id from user_client inner join user_client_contact on user_client_contact.client_id = user_client.id ``` The reason you're getting two rows, is because your 1 row in `user_client` is able to join to two rows within `user_client_contact`. If you were selecting ***everything*** from the query like with: ``` select * from user_client left join user_client_contact on user_client_contact.client_id = user_client.id ``` you would see that you're getting two distinct rows, one as `2222,2222,latvia` and one as `2222,2222,estonia`. Since you're only selecting one column of 3 from the columns between the two tables, they look to be duplicate. As I said at the beginning, using distinct would give you only the unique values.
That's what a left join does. If you don't want information from table B, it sounds like maybe you want a semi-join. Like this: ``` select user_client.id from user_client where exists ( SELECT 'contact' FROM user_client_contact WHERE user_client_contact.client_id = user_client.id ) ``` Or, if you only want a single row from table B, you just need to specify more conditions in your WHERE clause.
Left join brings every row from table B
[ "", "sql", "" ]
I just want to create a SQL query whose result is something like the image below — a running total, something like a Fibonacci sequence in SQL. Ex: if Column1 in the first row is 10, then the value of the Result column is 10, since that is the first row. Then, assuming the value of Column1 in the 2nd row is 50, the value of Result in the 2nd row will be 60 (Result: 60), and so on. A sample is in the image below. How can I do that continuously? Any help would be appreciated. Thanks

[![enter image description here](https://i.stack.imgur.com/pFdGm.jpg)](https://i.stack.imgur.com/pFdGm.jpg)
If you are using `MSSQL2012` or higher you can use `OVER` clause. ``` SELECT t2.id, t2.value, SUM(t2.value) OVER (ORDER BY t2.id) as [Result] FROM Test01 t2 ORDER BY t2.id; ``` **`sql fiddle demo`**
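`SUM(value) OVER (ORDER BY id)` is just a prefix sum; the logic it computes, sketched with the question's sample values in plain Python:

```python
from itertools import accumulate

rows = [(1, 10), (2, 50), (3, 30)]            # (id, value), already id-ordered
running = list(accumulate(v for _, v in rows))
table = [(i, v, r) for (i, v), r in zip(rows, running)]
print(table)  # [(1, 10, 10), (2, 50, 60), (3, 30, 90)]
```

Each row's Result is the previous Result plus the current value — exactly what the window frame `ORDER BY id` gives you in one pass, with no self-join or recursion.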
You can try this ``` CREATE TABLE #TEST(ID INT,VALUE INT) INSERT INTO #TEST VALUES (1,10),(2,20),(3,30),(4,40),(5,50),(6,60),(7,70) ;WITH CTE as ( SELECT ID,VALUE,VALUE AS RESULT FROM #TEST WHERE ID=1 UNION ALL SELECT T.ID,T.VALUE,T.VALUE+C.RESULT FROM #TEST T INNER JOIN CTE C ON T.ID = C.ID+1 ) SELECT * FROM CTE ``` [![Result](https://i.stack.imgur.com/pIYtn.png)](https://i.stack.imgur.com/pIYtn.png)
How to continuously add values of starting row and next row to it
[ "", "sql", "sql-server-2012", "cumulative-sum", "" ]
I have a table with ID and Dept. Table is ``` id dept salary date 1 30 2000 8/25/2015 12:06:54.870 PM 2 20 5500 7/12/2015 12:06:54.870 PM 3 30 6700 11/21/2013 12:06:54.870 PM 4 30 8900 4/16/2009 12:06:54.870 PM 5 30 9900 6/29/2014 12:06:54.870 PM 6 10 1120 7/3/2015 12:06:54.870 PM 7 20 8900 4/13/2013 12:06:54.870 PM 8 10 2400 7/23/2015 12:06:54.870 PM 9 30 2600 8/21/2015 12:06:54.870 PM 10 10 2999 8/3/2015 12:06:54.870 PM ``` Just need the output like this ``` Dept ID 30 1,3,4,5,9 ```
This is the best way I know. Please do post if anyone knows a better solution: I have named your table `sal` ``` DECLARE @id INT , @max INT , @dep INT , @all VARCHAR(255) SELECT @id = 1 , @max = MAX(id) FROM sal SELECT * INTO #tmp FROM sal WHILE (1=1) BEGIN SELECT @dep = dept FROM #tmp WHERE id = @id IF @dep IS NULL BEGIN SELECT @id = @id + 1 IF @id > @max BREAK ELSE CONTINUE END UPDATE #tmp SET @all = @all + ',' + CONVERT(VARCHAR, id) WHERE dept = @dep --remove last comma select @all = RIGHT(@all, LEN(@all)-1) DELETE #tmp WHERE dept = @dep -- selecting the output. insert into table if you want SELECT @dep, @all SELECT @dep = NULL , @all = NULL SELECT @id = @id + 1 IF @id > @max BREAK -- fail safe IF @id > 100 BREAK END drop table #tmp ```
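To make the target output concrete, here is the same id-per-dept concatenation the loop builds, checked in Python against the question's data (this is only an illustration of the expected result, not a Sybase alternative):

```python
from itertools import groupby

rows = [(1, 30), (2, 20), (3, 30), (4, 30), (5, 30), (6, 10),
        (7, 20), (8, 10), (9, 30), (10, 10)]   # (id, dept) from the question

by_dept = sorted(rows, key=lambda r: r[1])     # stable sort keeps id order within dept
result = {
    dept: ",".join(str(i) for i, _ in grp)
    for dept, grp in groupby(by_dept, key=lambda r: r[1])
}
print(result[30])  # 1,3,4,5,9
```

The loop above produces one such comma-joined id list per department, which is what the expected-output row `30 | 1,3,4,5,9` shows.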
A somewhat simpler solution for those who want this to work for a particular query:

```
DECLARE @res_csv VARCHAR(10000)

BEGIN

SELECT SomeIntField
INTO #tmp
FROM YourTblName
WHERE 1=1 -- a hardcoded query

UPDATE #tmp
SET @res_csv = @res_csv + case when @res_csv is not NULL then ',' end + CONVERT(VARCHAR, SomeIntField)

drop table #tmp

print @res_csv

END
```
Sybase: How do I concatenate rows in sybase column
[ "", "sql", "t-sql", "concatenation", "sybase", "" ]
<http://sqlfiddle.com/#!9/ea4d2/1>

Here is the SQLfiddle to look at. storepkid 16 is repeated 2 times, but I want it only one time. I tried DISTINCT, but it only eliminates entirely duplicate rows, so it's not working in this case. What changes do I need to make in the query to get the proper result? Please help.

NOTE: I forgot to mention in the initial question that I tried GROUP BY too, but it's giving incorrect data in the result set. I tried to use GROUP BY in the following ways.

<http://sqlfiddle.com/#!9/ea4d2/20>

<http://sqlfiddle.com/#!9/ea4d2/17>

NOTE2: @Gabriel Valdez Timbol, I want a result like this. Rows with a duplicate storepkid should be eliminated.

```
| Storepkid | selldate                 |
+-----------+--------------------------+
| 19        | August, 25 2015 10:00:00 |
| 12        | August, 24 2015 19:00:00 |
| 16        | August, 24 2015 16:00:00 |
| 15        | August, 23 2015 13:00:00 |
| 17        | August, 21 2015 10:00:00 |
```
You can use `max` function. Use the below query ``` SELECT p.storepkid, max(p.selldate) AS recentselldate FROM (SELECT storepkid, purchasedatetime AS selldate FROM t_product_purchase UNION ALL SELECT storepkid, starttime AS selldate FROM t_service_purchase UNION ALL SELECT storepkid, selldatetime AS selldatetime FROM t_coupon_purchase ) p GROUP BY storepkid order by max(p.selldate) DESC LIMIT 0,5 ``` OUTPUT: ``` | Storepkid | selldate | +---------------------------------------+ | 19 | August, 25 2015 10:00:00 | | 12 | August, 24 2015 19:00:00 | | 16 | August, 24 2015 16:00:00 | | 15 | August, 23 2015 13:00:00 | | 14 | August, 21 2015 13:15:00 | ``` Check the [DEMO HERE](http://sqlfiddle.com/#!9/ea4d2/60)
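`MAX` + `GROUP BY` collapses each `storepkid` to one row. A SQLite sketch of that pattern via Python, with invented data and one table standing in for the three-way `UNION ALL` (ISO timestamps are used so string `MAX`/`ORDER BY` behave date-wise):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (storepkid INT, selldate TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    (16, '2015-08-24 16:00:00'),
    (16, '2015-08-20 09:00:00'),   # older row for the same store
    (19, '2015-08-25 10:00:00'),
    (12, '2015-08-24 19:00:00'),
])

rows = conn.execute("""
    SELECT storepkid, MAX(selldate) AS recentselldate
    FROM sales
    GROUP BY storepkid
    ORDER BY MAX(selldate) DESC
""").fetchall()
print(rows)
```

Store 16 appears exactly once, carrying its most recent sale — the duplicate row problem from the question disappears.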
Here is the correct code; I tried it at your fiddle: [Fiddle](http://sqlfiddle.com/#!9/ea4d2/7). I also ran into a similar problem ---> [Joining Two Tables](https://stackoverflow.com/a/32178452/4947143)

```
SELECT p.storepkid, p.selldate AS recentselldate
FROM
(SELECT storepkid, purchasedatetime AS selldate
 FROM t_product_purchase
 UNION ALL
 SELECT storepkid, starttime AS selldate
 FROM t_service_purchase
 UNION ALL
 SELECT storepkid, selldatetime AS selldatetime
 FROM t_coupon_purchase) p
GROUP BY p.storepkid
ORDER BY recentselldate DESC
LIMIT 0,5
```
Want to eliminate duplicate record based on only column value
[ "", "mysql", "sql", "" ]
I have a date value as below in my table.

```
2015-05-25
```

I want to convert the values as below.

```
05/25
```

How can I do this? The date value has the `date` datatype.
Use the `CONVERT` function to change it to `mm/dd/yy` (style 1) and the `LEFT` function to only keep `mm/dd` (integer\_expression 5):

```
SELECT LEFT(CONVERT(varchar(8), [date], 1), 5) AS [date] FROM yourtable
```

(The target type of `CONVERT` must be a character type; `[date]` here is the column holding the value.)

Input:

```
2015-05-25
```

Output:

```
date
05/25
```
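For reference, the same `mm/dd` projection outside SQL Server, assuming the value arrives as an ISO date string (Python):

```python
from datetime import datetime

formatted = datetime.strptime("2015-05-25", "%Y-%m-%d").strftime("%m/%d")
print(formatted)  # 05/25
```

`%m` and `%d` are zero-padded, matching the `05/25` output the question asks for.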
Try: ``` select convert(varchar(5),getdate(),101) ```
change the date format in sql server
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table with data in the following format ``` col1 col2 a1 a2;a3;a4 b1 b2 c1 c2;c3 d1 null ... ``` I'm trying to split the strings, get unique combinations of col1/col2 and insert them into tableB. So the expected outcome should look like this: ``` a1 a2 a1 a3 a1 a4 b1 b2 c1 c2 c1 c3 ... ``` I tried the following query: ``` INSERT INTO tableB (col1, col2) SELECT col1, (regexp_substr(col2,'[^;]+', 1, LEVEL)) FROM tableA CONNECT BY regexp_substr(col2, '[^;]+', 1, LEVEL) IS NOT NULL; ``` Not sure what's going wrong here but it keeps executing (it actually went on for more than an hour) and when I finally cancel the task, nothing's been inserted. The table is quite large (around 25000 rows) but I've done similar inserts with larger tables and they worked fine. I also tried adding a where clause (although it seems redundant) with ``` WHERE col2 LIKE'%;%' ``` That didn't help either. Any suggestions would be great. Edit: I tried counting the max number of substrings in col2, to ballpark the number of rows to be inserted, and found the max to be 42 substrings. The whole table has 25814 rows, so worst case scenario, it's inserting 1084104 rows. If that has anything to do with it.
Don't use connect by to split a string into rows. Use a PL/SQL procedure that does a varchar2 -> collection split. For an ad-hoc kind of query, stick with `xmltable` as a simple way to split a string into rows (it is a bit slower than PL/SQL). The following kind of query is expected to take 3-4 seconds for each 1000 input rows.

```
select t.col1, c2.val
from ( select 'a1' col1, 'a2;a3;a4' col2 from dual union all
       select 'b1', 'b2' from dual union all
       select 'c1', 'c2;c3' from dual union all
       select 'd1', null from dual
     ) t
   , xmltable('$WTF'
       passing xmlquery(('"'||replace(replace(t.col2,'"','""'),';','","')||'"')
               returning sequence ) as wtf
       columns val varchar2(4000) path '.'
     )(+) c2
```

Fiddle: <http://sqlfiddle.com/#!4/9eecb7d/5059>
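Whichever mechanism does the split server-side (`connect by`, `xmltable`, or a PL/SQL table function), the target transformation is the same fan-out; here it is checked in plain Python against the question's sample rows:

```python
rows = [("a1", "a2;a3;a4"), ("b1", "b2"), ("c1", "c2;c3"), ("d1", None)]

pairs = [
    (col1, part)
    for col1, col2 in rows
    if col2 is not None          # NULL col2 produces no output rows
    for part in col2.split(";")
]
print(pairs)
# [('a1', 'a2'), ('a1', 'a3'), ('a1', 'a4'), ('b1', 'b2'), ('c1', 'c2'), ('c1', 'c3')]
```

With 25814 rows and at most 42 substrings each, the worst-case output of roughly a million rows is modest; the runaway runtime in the question comes from the `connect by` form cross-joining rows, not from the data volume itself.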
One thing you can try is to select all the `distinct` values. ``` INSERT INTO tableB (col1, col2) SELECT distinct col1, (regexp_substr(col2,'[^;]+', 1, LEVEL)) FROM tableA CONNECT BY regexp_substr(col2, '[^;]+', 1, LEVEL) IS NOT NULL; commit; ``` You should also `commit` the transaction if you need the changes to be permanent.
SQL insert with regexp
[ "", "sql", "oracle-sqldeveloper", "oracle11gr2", "" ]
Assume I have table `execution_records` which has the data: ``` +----+----------+---------------------+ | id | handle | finishedAt | +----+----------+---------------------+ | 1 | task_foo | 2015-08-16 03:10:33 | | 2 | task_foo | 2015-08-15 04:00:27 | | 3 | task_foo | 2015-08-14 02:10:25 | | 4 | task_bar | 2015-08-17 03:00:25 | | 5 | task_bar | 2015-08-16 02:01:25 | | 6 | task_bar | 2015-08-13 06:02:50 | +----+----------+---------------------+ ``` Now I want to get the row where `finishedAt` is at its most recent timestamp for each unique handle, that is: ``` +----+----------+---------------------+ | id | handle | finishedAt | +----+----------+---------------------+ | 1 | task_foo | 2015-08-16 03:10:33 | | 4 | task_bar | 2015-08-17 03:00:25 | +----+----------+---------------------+ ``` I know that there is [`MAX`](https://dev.mysql.com/doc/refman/5.0/en/example-maximum-column.html) in MySQL. I could get the very latest record for each task via: ``` SELECT *,MAX(finishedAt) FROM db.execution_records where handle = 'task_foo'; SELECT *,MAX(finishedAt) FROM db.execution_records where handle = 'task_bar'; ``` Yet I do not want to issue multiple queries but one, and I do not want to name the handles. How could I achieve my query?
In MySQL, the simplest way is to have a sub-query that finds the last finished time for each handle, then join those results back onto your table to pick out the whole row. ``` SELECT execution_records.* FROM ( SELECT handle, MAX(finished_at) AS max_finished_at FROM execution_records GROUP BY handle ) AS summary INNER JOIN execution_records ON execution_records.handle = summary.handle AND execution_records.finished_at = summary.max_finished_at ``` You can still filter this down to specific handles *(rather than all of them)* with a simple `WHERE` clause. ``` WHERE summary.handle IN ('task_foo','task_bar') ``` The optimiser will then use macro-like expansion to push that where clause down into your aggregate query, as well as the outer query.
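The aggregate-then-join-back pattern runs on any engine with GROUP BY and joins; here is a quick self-contained check using SQLite through Python (the data mirrors the question; `finished_at` follows the answer's column naming):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE execution_records (id INTEGER, handle TEXT, finished_at TEXT);
INSERT INTO execution_records VALUES
  (1, 'task_foo', '2015-08-16 03:10:33'),
  (2, 'task_foo', '2015-08-15 04:00:27'),
  (3, 'task_foo', '2015-08-14 02:10:25'),
  (4, 'task_bar', '2015-08-17 03:00:25'),
  (5, 'task_bar', '2015-08-16 02:01:25'),
  (6, 'task_bar', '2015-08-13 06:02:50');
""")

# Aggregate once per handle, then join back to recover the whole row.
rows = conn.execute("""
SELECT execution_records.*
FROM (
    SELECT handle, MAX(finished_at) AS max_finished_at
    FROM execution_records
    GROUP BY handle
) AS summary
INNER JOIN execution_records
  ON execution_records.handle = summary.handle
 AND execution_records.finished_at = summary.max_finished_at
ORDER BY execution_records.id
""").fetchall()
print(rows)
```

Exactly one row per handle comes back, each carrying its latest timestamp.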
``` select * from execution_records join (select handle, max(finishedAt) max from execution_records group by handle) m on execution_records.finishedAt=max ``` [Demo on sqlfiddle](http://sqlfiddle.com/#!9/b782d/1)
How to query the row with the max value for each unique identifier?
[ "", "mysql", "sql", "select", "max", "" ]
Using PostgreSQL 9.4, I store data as JSON array in plpgsql code, for example: ``` j := '[1,2,3,4,5]'::json ``` Next I loop through this array and do something with digits. ``` FOR i1 IN 0..(json_array_length(j)-1) LOOP RAISE NOTICE 'i1=%', j->>(i1); END LOOP; ``` I get this output: ``` 1 2 3 4 5 ``` How can I get a random sort order for the loop? Like: ``` 3 5 1 2 4 ```
Keep it simple, just use `json_array_elements_text` and `ORDER BY random()`. ``` DECLARE item text; BEGIN FOR item IN SELECT json_array_elements_text('[1,2,3,4,5]'::json) ORDER BY random() LOOP RAISE NOTICE 'item is %', item; END LOOP; END; ```
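`json_array_elements_text` is PostgreSQL-specific, but the `ORDER BY random()` half of the idea is portable. A small sanity check using SQLite through Python (the elements come from a plain hypothetical table `elems` here, since SQLite's JSON support varies by build):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elems (v INTEGER)")
conn.executemany("INSERT INTO elems VALUES (?)", [(i,) for i in (1, 2, 3, 4, 5)])

# ORDER BY random() shuffles the rows; every element still appears exactly once.
shuffled = [r[0] for r in conn.execute("SELECT v FROM elems ORDER BY random()")]
print(shuffled)
```

The order differs run to run, but the multiset of values never changes, which is exactly what the loop needs.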
1. Make new array of keys (from 0 to 4) ``` SELECT INTO intarray array_agg(s.n) FROM (SELECT generate_series(0,json_array_length(j)-1) AS n ORDER BY random()) s ``` 2. In main loop get values from j array by value from new array ``` FOR i1 IN 0..(json_array_length(j)-1) LOOP RAISE NOTICE 'i1=%', j->>(intarray[i1]); END LOOP; ```
Get elements from JSON array in random order
[ "", "sql", "json", "postgresql", "plpgsql", "" ]
The following SQL query produces an error in MySQL Workbench: the table "new_table" is not recognized. Does that mean MySQL does not support the SELECT ... INTO statement? ``` SELECT student_id into new_table FROM students; ``` thanks in advance, Lin
The problem is that [MySQL does not support the SELECT ... INTO table syntax](https://dev.mysql.com/doc/refman/5.0/en/ansi-diff-select-into-table.html); its SELECT ... INTO only targets variables and outfiles. If `new_table` already exists, insert into it instead: ``` INSERT INTO new_table SELECT student_id FROM students; ```
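Both workable patterns (creating the target table on the fly versus appending into an existing one) can be checked quickly; a sketch using SQLite through Python, which accepts the same `CREATE TABLE ... AS SELECT` and `INSERT ... SELECT` forms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (student_id INTEGER, name TEXT);
INSERT INTO students VALUES (1, 'bob'), (2, 'ken');
""")

# Create-and-fill in one statement ...
conn.execute("CREATE TABLE new_table AS SELECT student_id FROM students")
# ... or append into the now-existing table.
conn.execute("INSERT INTO new_table SELECT student_id FROM students")

count = conn.execute("SELECT COUNT(*) FROM new_table").fetchone()[0]
print(count)
```

After the create-and-fill plus one append, `new_table` holds two copies of each student id.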
Use ``` CREATE TABLE new_table as SELECT student_id FROM students; ``` If the table already exists: ``` INSERT INTO new_table SELECT * FROM students; ``` [This thread](https://dev.mysql.com/doc/refman/5.0/en/ansi-diff-select-into-table.html) has some details on the syntax differences.
MySQL does not support select INTO?
[ "", "mysql", "sql", "" ]
Consider a table like with the following data ``` column_a (boolean) | column_order (integer) TRUE | 1 NULL | 2 NULL | 3 TRUE | 4 NULL | 5 FALSE | 6 NULL | 7 ``` I would like to write a queries that replaces each `NULL` value in `column_a` with the last non-`NULL` value out of the previous values of the column according to the order specified by `column_order` The result should look like: ``` column_a (boolean) | column_order (integer) TRUE | 1 TRUE | 2 TRUE | 3 TRUE | 4 TRUE | 5 FALSE | 6 FALSE | 7 ``` For simplicity, we can assume that the first value is never null. The following works if there are no more than one consecutive `NULL` values: ``` SELECT COALESCE(column_a, lag(column_a) OVER (ORDER BY column_order)) FROM test_table ORDER BY column_order; ``` However, the above does not work for an arbitrary number of consecutive `NULL` values. What is a Postgres query that is able to achieve the results above? Is there an efficient query that scales well to a large number of rows?
You can use a handy trick where you `sum` over a `case` to create partitions based on the divisions between null and non-null series, then `first_value` to bring them forward. e.g. ``` select *, sum(case when column_a is not null then 1 else 0 end) OVER (order by column_order) as partition from table1; column_a | column_order | partition ----------+--------------+----------- t | 1 | 1 | 2 | 1 | 3 | 1 t | 4 | 2 | 5 | 2 f | 6 | 3 | 7 | 3 (7 rows) ``` then ``` select first_value(column_a) OVER (PARTITION BY partition ORDER BY column_order), column_order from ( select *, sum(case when column_a is not null then 1 else 0 end) OVER (order by column_order) as partition from table1 ) partitioned; ``` gives you: ``` first_value | column_order -------------+-------------- t | 1 t | 2 t | 3 t | 4 t | 5 f | 6 f | 7 (7 rows) ```
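The two-step trick above (running sum to label each null-run, then `first_value` within the label) is portable to any engine with window functions. A runnable end-to-end check using SQLite through Python, with booleans stored as 1/0/NULL since SQLite has no boolean type, and the group column named `grp` instead of `partition`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_table (column_a INTEGER, column_order INTEGER);
INSERT INTO test_table VALUES
  (1, 1), (NULL, 2), (NULL, 3), (1, 4), (NULL, 5), (0, 6), (NULL, 7);
""")

# Step 1: a running count of non-NULLs gives every NULL the same label
# as the last non-NULL before it. Step 2: first_value() in each label
# carries that leading value forward.
rows = conn.execute("""
SELECT first_value(column_a) OVER (PARTITION BY grp ORDER BY column_order),
       column_order
FROM (
    SELECT *,
           SUM(CASE WHEN column_a IS NOT NULL THEN 1 ELSE 0 END)
               OVER (ORDER BY column_order) AS grp
    FROM test_table
) labeled
ORDER BY column_order
""").fetchall()
print(rows)
```

Every NULL is replaced by the last preceding non-NULL value, matching the expected result in the question.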
I'm more familiar with SqlServer, but this should do what you need. ``` update tableA as a2 set column_a = b2.column_a from ( select a.column_order, max(b.column_order) from tableA as a inner join tableA as b on a.column_order > b.column_order and b.column_a is not null where a.column_a is null group by a.column_order ) as junx inner join tableA as b2 on junx.max =b2.column_order where a2.column_order = junx.column_order ``` [SQL Fiddle](http://www.sqlfiddle.com/#!15/302a2/11/1)
Update ordered row with last not-null value
[ "", "sql", "postgresql", "" ]
I have **Payments** table with multiple columns, including **Student,** **Value** and **Payment\_type.** I want to create a query that will calculate the sum of values, if *all the records* of the same student have only NULL as *Payment type*. If a student has at least one Payment type different than NULL, that student shouldn't be included. Example: ``` Student Payment Value Payment_type 1 1 100 NULL 1 2 200 NULL 2 1 200 NULL 3 1 150 Cash 2 2 100 Cash 3 2 200 NULL 1 3 200 NULL ``` If you look at the example, it should give me result 500, because the sum of values of student 1 is 500, and his/her ALL payment types are NULL.
``` select student, sum(value) from payments group by student having sum(case when Payment_type is not null then 1 else 0 end) = 0 ``` The `HAVING` clause counts each student's rows with a non-NULL `Payment_type`; only students where that count is zero, i.e. every payment type is NULL, survive the filter and have their values summed.
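The conditional-count-in-HAVING pattern is easy to verify end to end; a sketch using SQLite through Python with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payments (student INTEGER, payment INTEGER,
                       value INTEGER, payment_type TEXT);
INSERT INTO payments VALUES
  (1, 1, 100, NULL), (1, 2, 200, NULL), (2, 1, 200, NULL),
  (3, 1, 150, 'Cash'), (2, 2, 100, 'Cash'), (3, 2, 200, NULL),
  (1, 3, 200, NULL);
""")

# A student survives only if the count of non-NULL payment types is zero.
rows = conn.execute("""
SELECT student, SUM(value)
FROM payments
GROUP BY student
HAVING SUM(CASE WHEN payment_type IS NOT NULL THEN 1 ELSE 0 END) = 0
""").fetchall()
print(rows)
```

Only student 1 qualifies, with the expected total of 500; students 2 and 3 are excluded by their single 'Cash' rows.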
[SQL Fiddle](http://sqlfiddle.com/#!9/1e58a/1) **MySQL 5.6 Schema Setup**: ``` CREATE TABLE Payments (`Student` int, `Payment` int, `Value` int, `Payment_type` varchar(4)) ; INSERT INTO Payments (`Student`, `Payment`, `Value`, `Payment_type`) VALUES (1, 1, 100, NULL), (1, 2, 200, NULL), (2, 1, 200, NULL), (3, 1, 150, 'Cash'), (2, 2, 100, 'Cash'), (3, 2, 200, NULL), (1, 3, 200, NULL) ; ``` **Query 1**: ``` select student, sum(value) from payments group by student having max(Payment_type) IS NULL ``` **[Results](http://sqlfiddle.com/#!9/1e58a/1/0)**: ``` | Student | sum(value) | |---------|------------| | 1 | 500 | ```
SQL: How to find the sum of values where all the records have the same value in a column?
[ "", "sql", "select", "where-clause", "" ]
I'm trying to restore a database from a dump generated with `mysqldump`. However it contains binary data. I made a `mysqldump` of a database which has an Entity Framework migration history table. > mysqldump.exe --opt --user=root foo > dump.sql This table has a column with binary data (longblob) and it's causing me issues when trying to restore. I first tried to restore via WorkBench, but it failed. I then copied the command that workbench used and ran it manually. It obviously had the same result. > mysql.exe --protocol=tcp --host=localhost --user=root --port=3306 --default-character-set=utf8 --comments --database=foo < dump.sql > > ERROR: ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected. Query: ' ■-'. It tells me to add `--binary-mode=1`, so I did. > mysql.exe --binary-mode=1 --protocol=tcp --host=localhost --user=root --port=3306 --default-character-set=utf8 --comments --database=foo < dump.sql > > ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '??-' at line 1 But it still doesn't work. Then I tried to find `??-` in the dump file but couldn't. I read somewhere that I shouldn't change the charset. So I tried removing `--default-character-set=utf8` from the command. 
> mysql.exe --binary-mode=1 --protocol=tcp --host=localhost --user=root --port=3306 --comments --database=foo < dump.sql > > ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' ■-' at line 1 Now I'm able to find `■-` in the dump file, but it doesn't really help me :/ --- Content of `dump.sql` ``` -- MySQL dump 10.13 Distrib 5.7.7-rc, for Win64 (x86_64) -- -- Host: localhost Database: foo -- ------------------------------------------------------ -- Server version 5.7.7-rc-log /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */; /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */; /*!40103 SET TIME_ZONE='+00:00' */; /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */; /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */; /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */; /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */; -- -- Table structure for table `__migrationhistory` -- DROP TABLE IF EXISTS `__migrationhistory`; /*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET character_set_client = utf8 */; CREATE TABLE `__migrationhistory` ( `MigrationId` varchar(100) NOT NULL, `ContextKey` varchar(200) NOT NULL, `Model` longblob NOT NULL, `ProductVersion` varchar(32) NOT NULL, PRIMARY KEY (`MigrationId`,`ContextKey`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; /*!40101 SET character_set_client = @saved_cs_client */; -- -- Dumping data for table `__migrationhistory` -- LOCK TABLES `__migrationhistory` WRITE; /*!40000 ALTER TABLE `__migrationhistory` DISABLE KEYS */; INSERT INTO `__migrationhistory` VALUES ('123456789012345_InitialCreate', ```
Found the problem! I was running `mysqldump` via a PowerShell script, which was causing the dump.sql file to be encoded incorrectly: in Windows PowerShell, the `>` redirection re-encodes output as UTF-16 by default instead of passing the raw bytes through, which corrupts the binary blobs in the dump. Switched to a bat script instead, whose redirection writes the bytes unchanged, and now it works
**do not use** IO redirect, use mysqldump option instead. > -r, --result-file=name > Direct output to a given file. This option should be used > in systems (e.g., DOS, Windows) that use carriage-return > linefeed pairs (\r\n) to separate text lines. This option > ensures that only a single newline is used. the OS IO redirect will change the encoding of the result file.
Restore mysqldump with binary data from EF migration history table
[ "", "sql", "mysql", "entity-framework-migrations", "" ]
I have a question: how do I query the tuple(s) that have the latest date for each name? This is my example table. ``` ID || NAME || DATE || INFOA || INFOB || INFOC 1 || Alice || 2015-08-20 12:0:0 || Y || N || Y 2 || Bob || 2015-08-20 12:0:0 || Y || N || Y 3 || Cheschire || 2015-08-20 12:0:0 || N || Y || Y 4 || Alice || 2015-08-25 12:0:0 || N || Y || N 5 || Bob || 2015-08-15 12:0:0 || Y || Y || N ``` Query I used ``` SELECT NAME, MAX(DATE), INFOA, INFOB, INFOC FROM EXAMPLE_TABLE GROUP BY NAME,INFOA,INFOB,INFOC ``` Result is... ``` Alice || 2015-08-20 12:0:0 || Y || N || Y Bob || 2015-08-20 12:0:0 || Y || N || Y Cheschire || 2015-08-20 12:0:0 || N || Y || Y Alice || 2015-08-25 12:0:0 || N || Y || N Bob || 2015-08-15 12:0:0 || Y || Y || N ``` But my expected result is... ``` Bob || 2015-08-20 12:0:0 || Y || N || Y Cheschire || 2015-08-20 12:0:0 || N || Y || Y Alice || 2015-08-25 12:0:0 || N || Y || N ``` What should I do?
Please try with the below code snippet. ``` DECLARE @userData TABLE( ID INT NOT NULL, Name VARCHAR(MAX) NOT NULL, [Date] DATETIME NOT NULL, INFOA VARCHAR(MAX) NOT NULL, INFOB VARCHAR(MAX) NOT NULL, INFOC VARCHAR(MAX) NOT NULL ); INSERT INTO @userData VALUES ('1','Alice','2015-08-20 12:0:0','Y','N','Y') INSERT INTO @userData VALUES ('2','Bob','2015-08-20 12:0:0','Y','N','Y') INSERT INTO @userData VALUES ('3','Cheschire','2015-08-20 12:0:0','N','Y','Y') INSERT INTO @userData VALUES ('4','Alice','2015-08-25 12:0:0','N','Y','N') INSERT INTO @userData VALUES ('5','Bob','2015-08-15 12:0:0','Y','Y','N') SELECT a.ID,a.Name,a.Date, a.INFOA,a.INFOB,a.INFOC FROM ( select *,RANK() OVER (PARTITION BY [Name] ORDER BY [DATE] DESC) AS [Rank] from @userData ) a where a.[Rank] = 1 ORDER BY a.ID ```
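The same rank-within-partition idea works outside SQL Server too; a self-contained check using SQLite through Python (the `DATE` column is renamed `dt`, and the timestamps are zero-padded so plain text ordering sorts them correctly; otherwise the data follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE example_table (id INTEGER, name TEXT, dt TEXT,
                            infoa TEXT, infob TEXT, infoc TEXT);
INSERT INTO example_table VALUES
  (1, 'Alice',     '2015-08-20 12:00:00', 'Y', 'N', 'Y'),
  (2, 'Bob',       '2015-08-20 12:00:00', 'Y', 'N', 'Y'),
  (3, 'Cheschire', '2015-08-20 12:00:00', 'N', 'Y', 'Y'),
  (4, 'Alice',     '2015-08-25 12:00:00', 'N', 'Y', 'N'),
  (5, 'Bob',       '2015-08-15 12:00:00', 'Y', 'Y', 'N');
""")

# RANK() = 1 within each name picks that name's most recent row.
rows = conn.execute("""
SELECT name, dt, infoa, infob, infoc
FROM (
    SELECT *, RANK() OVER (PARTITION BY name ORDER BY dt DESC) AS rnk
    FROM example_table
)
WHERE rnk = 1
ORDER BY name
""").fetchall()
print(rows)
```

Each name appears once, carrying its latest row, matching the expected result in the question.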
Use `NOT EXISTS` to return a row if there are no other row with same name but a later date: ``` select * from tablename t1 where NOT EXISTS (select 1 from tablename t2 where t2.name = t1.name and t2.date > t1.date) ```
How to query the latest date from each duplicated name
[ "", "sql", "oracle", "" ]
In PostgreSQL, what is the `ROW()` function used for? Specifically what is the difference between ``` SELECT ROW(t.f1, t.f2, 42) FROM t; ``` where `f1` is of type `int`, `f2` is of type `text` and ``` CREATE TYPE myrowtype AS (f1 int, f2 text, f3 numeric); ```
You are confusing levels of abstraction. As other answers already point out, `CREATE TYPE` only registers a (composite / row) type in the system. While a `ROW` constructor actually returns a row. A row type created with the `ROW` constructor does not preserve column names, which becomes evident when you try to convert the row to JSON. While being at it, `ROW` is just a ***noise word*** most of the time. [The manual:](https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS) > The key word `ROW` is optional when there is more than one expression in the list. Demo: ``` SELECT t AS r1, row_to_json(t) AS j1 , ROW(1, 'x', numeric '42.1') AS r2, row_to_json(ROW(1, 'x', numeric '42.1')) AS j2 , (1, 'x', numeric '42.1') AS r3, row_to_json( (1, 'x', numeric '42.1')) AS j3 , (1, 'x', '42.1')::myrowtype AS r4, row_to_json((1, 'x', '42.1')::myrowtype) AS j4 FROM (SELECT 1, 'x', numeric '42.1') t; ``` *db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=37e82cba88dc7a62538ab096acd1060d)* Old [sqlfiddle](http://sqlfiddle.com/#!15/af436/1) `r1` and `j1` preserve original column names. `r2` and `j2` do not. `r3` and `j3` are the same; to demonstrate how `ROW` is just noise. `r4` and `j4` carry the column names of the registered type. You can cast the row (record) to a registered row type if *number* and *data types* of the elements match the row type - *names* of input fields are ignored. * [Return multiple columns of the same row as JSON array of objects](https://stackoverflow.com/questions/26486784/return-multiple-columns-of-the-same-row-as-json-array-of-objects)
Row constructors can be used to build composite values to be stored in a composite-type table column, or to be passed to a function that accepts a composite parameter. Also, it is possible to compare two row values or test a row with IS NULL or IS NOT NULL. [4.2.13. Row Constructors](http://www.postgresql.org/docs/9.4/static/sql-expressions.html) Example: ``` CREATE TYPE myrowtype AS (f1 int, f2 text, f3 numeric); CREATE TABLE mytable (ct myrowtype); INSERT INTO mytable(ct) VALUES (CAST(ROW(11,'this is a test',2.5) AS myrowtype)); ```
What is a row constructor used for?
[ "", "sql", "postgresql", "types", "row", "" ]
``` select * from employees where last_name between 'A' AND 'E'; ``` Why does the result only go up to 'D' and not include names starting with 'E'? Is there any other way to fetch the details?
> where last\_name between 'A' AND 'E'; **String comparison** is not the same as comparing numbers. String comparison is done on their **ASCII** values. So, you are comparing the whole **last\_name** string with a **single character**, which will not give your desired output. ``` SQL> SELECT ename, ASCII(ename), ASCII('A'), ASCII('E') FROM emp; ENAME ASCII(ENAME) ASCII('A') ASCII('E') ---------- ------------ ---------- ---------- SMITH 83 65 69 ALLEN 65 65 69 WARD 87 65 69 JONES 74 65 69 MARTIN 77 65 69 BLAKE 66 65 69 CLARK 67 65 69 SCOTT 83 65 69 KING 75 65 69 TURNER 84 65 69 ADAMS 65 65 69 JAMES 74 65 69 FORD 70 65 69 MILLER 77 65 69 14 rows selected. SQL> ``` Based on above **ASCII values**, you would get only those rows where you have the ASCII value of the `ename` between `65` and `69`. You need to use **SUBSTR** to first extract the first character of last\_name and compare it with `'A'` and `'E'`. For example, I am using the standard `emp` table in SCOTT schema and adding two rows with ename starting with `'D'` and `'E'`. ``` SQL> WITH DATA AS( 2 SELECT ename FROM emp 3 UNION 4 SELECT 'DAWSON' FROM DUAL 5 UNION 6 SELECT 'EINSTEIN' FROM DUAL 7 ) 8 SELECT * FROM DATA 9 WHERE substr(ename,1,1) BETWEEN 'A' AND 'E'; ENAME ---------- ADAMS ALLEN BLAKE CLARK DAWSON EINSTEIN 6 rows selected. ```
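The whole-string versus first-character distinction is easy to verify outside Oracle as well; a sketch using SQLite through Python (the sample surnames are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)",
                 [('Adams',), ('Dawson',), ('Einstein',), ('Ford',), ('Smith',)])

# Comparing whole strings: 'Einstein' sorts after 'E', so BETWEEN 'A' AND 'E'
# silently drops it.
between = [r[0] for r in conn.execute(
    "SELECT last_name FROM employees "
    "WHERE last_name BETWEEN 'A' AND 'E' ORDER BY last_name")]

# Comparing first letters via SUBSTR keeps every A-E surname, 'Einstein' included.
first_letter = [r[0] for r in conn.execute(
    "SELECT last_name FROM employees "
    "WHERE SUBSTR(last_name, 1, 1) BETWEEN 'A' AND 'E' ORDER BY last_name")]
print(between, first_letter)
```

The first query loses 'Einstein'; the `SUBSTR` form recovers it while still excluding 'Ford' and 'Smith'.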
Your command is perfect for "employees whose names *are* (not start with) from A to E". `"E"` would be in. `"Einstein"` is out, as it is *later* than `"E"`. ``` WHERE last_name >= 'A' AND last_name < 'F' ``` will give you what you want. Alternately, you can do ``` WHERE SUBSTR(last_name, 1, 1) BETWEEN 'A' AND 'E' ``` but it would be slow, as it would not be able to use the index.
String comparison in Oracle SQL
[ "", "sql", "oracle", "oracle11g", "oracle10g", "string-comparison", "" ]