I have a query which needs to know how many days have passed since the 1st of January of the current year. This means that if the query runs on, for example: > 2nd Jan 2017, then it should return 2 (as 2 days passed since 1st Jan > 2017). > > 10th Feb 2016, then it should return 41 (as 41 days passed since 1st > Jan 2016). Basically it needs to take the `Current Year` from the `Current Date` and count the days since `1/1/(Year)`. I have the current year with: `SELECT EXTRACT(year FROM CURRENT_DATE);` I created the 1st of Jan with: ``` select (SELECT EXTRACT(year FROM CURRENT_DATE)::text || '-01-01')::date ``` How do I get the difference from this date to `Current_Date`? Basically this question can be reduced to: **given two dates, how many days are between them?** Something like `age(timestamp '2016-01-01', timestamp '2016-06-15')` **isn't good** because I need the result only in days, while `age` gives years, months and days.
An easier approach may be to extract the day-of-year ("doy") field from the date: ``` db=> SELECT EXTRACT(DOY FROM CURRENT_DATE); date_part ----------- 41 ``` And if you need it as a number, you can just cast it: ``` db=> SELECT EXTRACT(DOY FROM CURRENT_DATE)::int; date_part ----------- 41 ``` Note: the result 41 was produced by running the query today, February 10th.
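For readers outside Postgres: the day-of-year trick has equivalents elsewhere. A minimal sketch using SQLite's `strftime('%j', ...)` from Python (the fixed date `2016-02-10` is just the question's example; `%j` returns a zero-padded string, hence the cast):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# strftime('%j') returns the zero-padded day-of-year as text, e.g. '041'
doy = conn.execute(
    "SELECT CAST(strftime('%j', '2016-02-10') AS INTEGER)"
).fetchone()[0]
print(doy)  # 41
```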
Another solution is to use **DATE_PART** to compute the difference in days: ``` SELECT DATE_PART('day', now()::timestamp - '2016-01-01 00:00:00'::timestamp); ```
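Note the off-by-one between the two answers: subtracting dates counts whole days elapsed (40 for Feb 10), while `DOY` counts Jan 1 itself as day 1 (41). A plain-Python sketch of the general "days between two dates" question, using the example dates:

```python
from datetime import date

# whole days elapsed between Jan 1 and Feb 10, 2016
delta = (date(2016, 2, 10) - date(2016, 1, 1)).days
print(delta)      # 40
print(delta + 1)  # 41, if Jan 1 itself counts as day 1 (matches DOY)
```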
How to get how many days passed since start of this year?
[ "sql", "postgresql", "date", "select" ]
The SQL query looks like this: ``` Declare @Values nvarchar(max) , @From int , @To int Set @Values = 'a,b,c,d,e,f,g,h,i,j' Set @From = 3 , @To = 7 ``` The values must be split into rows based on `@From` and `@To`. Given the values above, the required output is ``` c d e f g ``` Any help would be appreciated.
**Disclaimer**: I'm the owner of the project [Eval SQL.NET](http://eval-sql.net/) If you have a C# background, you might consider this library, which lets you use C# syntax directly in T-SQL: ``` DECLARE @Values NVARCHAR(MAX) , @From INT , @To INT SELECT @Values = 'a,b,c,d,e,f,g,h,i,j' , @From = 3 , @To = 7 -- The index start at 0 and not at 1 in C# DECLARE @sqlnet SQLNET = SQLNET::New(' values.Split('','').Skip(from - 1).Take(to - from + 1)') .ValueString('values', @Values) .Val('from', @From) .Val('to', @to) -- In multiple rows SELECT * FROM [dbo].[SQLNET_EvalTVF_1](@sqlnet) -- In same rows SELECT SQLNET::New(' var splits = values.Split('','').Skip(from - 1).Take(to - from + 1).ToList(); return string.Join('' '', splits);') .ValueString('values', @Values) .Val('from', @From) .Val('to', @to) .Eval() ```
You need a string splitter for this. Here is [DelimitedSplit8k](http://www.sqlservercentral.com/articles/Tally+Table/72993/) by Jeff Moden. ``` DECLARE @Values NVARCHAR(MAX), @From INT, @To INT SELECT @Values = 'a,b,c,d,e,f,g,h,i,j', @From = 3, @To = 7 SELECT STUFF(( SELECT ' ' + Item FROM dbo.DelimitedSplit8K(@Values, ',') WHERE ItemNumber BETWEEN @From AND @To FOR XML PATH('') ), 1, 1, '') ```
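Outside T-SQL, the same "split and take a 1-based inclusive range" operation is a one-liner; a sketch in Python using the question's sample values:

```python
values = 'a,b,c,d,e,f,g,h,i,j'
start, end = 3, 7  # 1-based, inclusive, as in the question

# Python slices are 0-based and end-exclusive, so shift the start by one
items = values.split(',')[start - 1:end]
print(' '.join(items))  # c d e f g
```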
SQL Server : comma-separated List based on offset values
[ "sql", "sql-server", "sql-server-2008" ]
I want to grab a list of values from an SQL table, and then run a query using each of these values to update other fields in the same table. In another language, I'd use a for loop for this, but I've read that looping operations should be avoided/aren't supported in SQL. Conceptually (obviously the code isn't valid), this is what I'm trying to achieve: ``` my_list = SELECT item_id FROM my_table WHERE value = "my_value" AND field_id = 1 for x in my_list: UPDATE my_table SET meta_value = "my_value" WHERE field_id = 2 AND item_id = x ``` What's the proper way to achieve this in SQL?
You should be able to join the table with a subset of itself: ``` UPDATE my_table t1 INNER JOIN (SELECT item_id FROM my_table WHERE value = "my_value" AND field_id = 1) t2 ON t1.item_id = t2.item_id SET t1.meta_value = "my_value" WHERE t1.field_id = 2 ```
Try this update with a join: ``` update My_Table t JOIN My_Table s on(t.item_id = s.item_id and t.field_id = 2 and s.field_id = 1) set t.meta_value = 'my_value' where s.value = 'MY_VALUE' ```
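Both answers express the same idea: drive the UPDATE from a query instead of looping row by row. A runnable sketch using SQLite from Python (SQLite has no `UPDATE ... JOIN`, so the self-join is phrased as an `IN (subquery)`; the table and rows below are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (item_id INT, field_id INT, value TEXT, meta_value TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?,?,?,?)", [
    (1, 1, 'my_value', None),  # source row that selects item_id 1
    (1, 2, 'x', None),         # target row to update
    (2, 2, 'x', None),         # item_id 2 has no matching source row
])
# the set-based equivalent of the conceptual for loop in the question
conn.execute("""
    UPDATE my_table SET meta_value = 'my_value'
    WHERE field_id = 2
      AND item_id IN (SELECT item_id FROM my_table
                      WHERE value = 'my_value' AND field_id = 1)
""")
rows = conn.execute(
    "SELECT item_id, meta_value FROM my_table WHERE field_id = 2 ORDER BY item_id"
).fetchall()
print(rows)  # [(1, 'my_value'), (2, None)]
```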
Proper way to update SQL table without using loops
[ "mysql", "sql", "loops" ]
I have the following two queries. How do I use `union` so that I can see both results in a single query execution? ``` select TOP 1 AGE, DIAGNOSIS_CODE_1, count(DIAGNOSIS_CODE_1) as total_count from Health where age = 7 group by AGE, DIAGNOSIS_CODE_1 order by total_count DESC; select TOP 1 AGE, DIAGNOSIS_CODE_1, count(DIAGNOSIS_CODE_1) as total_count from Health where age = 9 group by AGE, DIAGNOSIS_CODE_1 order by total_count DESC; ``` Sample output [![enter image description here](https://i.stack.imgur.com/YXc8B.png)](https://i.stack.imgur.com/YXc8B.png) Sample output [![enter image description here](https://i.stack.imgur.com/OwDqa.png)](https://i.stack.imgur.com/OwDqa.png)
Just add UNION ALL between the queries. An ORDER BY clause is not accepted directly on the branches of a UNION ALL, so wrap the union in a derived table and order the combined result: ``` SELECT * FROM ( SELECT TOP 1 AGE, DIAGNOSIS_CODE_1, COUNT(DIAGNOSIS_CODE_1) AS TOTAL_COUNT FROM HEALTH WHERE AGE = 7 GROUP BY AGE, DIAGNOSIS_CODE_1 UNION ALL SELECT TOP 1 AGE, DIAGNOSIS_CODE_1, COUNT(DIAGNOSIS_CODE_1) AS TOTAL_COUNT FROM HEALTH WHERE AGE = 9 GROUP BY AGE, DIAGNOSIS_CODE_1 ) AS A ORDER BY TOTAL_COUNT DESC; ``` If instead you need each query ordered separately, put the ORDER BY inside each derived table (TOP is what makes ORDER BY legal there, and no semicolons are allowed inside the derived tables): ``` SELECT * FROM ( SELECT TOP 1 AGE, DIAGNOSIS_CODE_1, COUNT(DIAGNOSIS_CODE_1) AS TOTAL_COUNT FROM HEALTH WHERE AGE = 7 GROUP BY AGE, DIAGNOSIS_CODE_1 ORDER BY TOTAL_COUNT DESC ) AS B UNION ALL SELECT * FROM ( SELECT TOP 1 AGE, DIAGNOSIS_CODE_1, COUNT(DIAGNOSIS_CODE_1) AS TOTAL_COUNT FROM HEALTH WHERE AGE = 9 GROUP BY AGE, DIAGNOSIS_CODE_1 ORDER BY TOTAL_COUNT DESC ) AS A ```
You can do this with `row_number() over (partition by ...)`, like: ``` select AGE, DIAGNOSIS_CODE_1, total_count from ( select AGE, DIAGNOSIS_CODE_1, count(DIAGNOSIS_CODE_1) as total_count, row_number() over (partition by AGE order by count(DIAGNOSIS_CODE_1) desc) rnk from Health where age in (7, 9) group by AGE, DIAGNOSIS_CODE_1 ) x where rnk = 1 ``` Or you can use `union all`, like: ``` with tmp_1 as ( select TOP 1 AGE, DIAGNOSIS_CODE_1, count(DIAGNOSIS_CODE_1) as total_count from Health where age = 7 group by AGE, DIAGNOSIS_CODE_1 order by total_count DESC ), tmp_2 as ( select TOP 1 AGE, DIAGNOSIS_CODE_1, count(DIAGNOSIS_CODE_1) as total_count from Health where age = 9 group by AGE, DIAGNOSIS_CODE_1 order by total_count DESC ) select AGE, DIAGNOSIS_CODE_1, total_count from tmp_1 union all select AGE, DIAGNOSIS_CODE_1, total_count from tmp_2 ```
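The `row_number()` approach can be tried end-to-end in SQLite (3.25 or newer for window functions) from Python; the `health` table and diagnosis codes below are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE health (age INT, code TEXT)")
conn.executemany("INSERT INTO health VALUES (?,?)", [
    (7, 'A'), (7, 'A'), (7, 'B'),            # age 7: code A appears twice
    (9, 'C'), (9, 'D'), (9, 'D'), (9, 'D'),  # age 9: code D appears three times
])
# aggregate first, then rank each age group by its counts
rows = conn.execute("""
    SELECT age, code, total_count FROM (
        SELECT age, code, total_count,
               ROW_NUMBER() OVER (PARTITION BY age ORDER BY total_count DESC) AS rnk
        FROM (SELECT age, code, COUNT(*) AS total_count
              FROM health WHERE age IN (7, 9)
              GROUP BY age, code)
    ) WHERE rnk = 1 ORDER BY age
""").fetchall()
print(rows)  # [(7, 'A', 2), (9, 'D', 3)]
```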
How to use union of two queries in SQL that has order by and count()
[ "sql", "sql-server", "union" ]
Could somebody explain to me what's wrong with this? My query is like this... ``` select t1.year, t1.empid, t2.tcost from (select year,empid,sum(cost) as total from orders group by year,empid) t1 inner join (select year, max(total) from t1 group by year) t2 on t1.year= t2.year ``` I am getting the error message below ``` ERROR at line 3: ORA-00942: table or view does not exist ``` I know I can achieve this using a WITH clause, but I want to know how to use a join in this case. Thanks in advance.
If you use a self-join in your query, there is usually an equivalent query based on analytic functions. The advantage is, in most cases, shorter execution time. Here you could use [`max ... keep dense rank ...`](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions056.htm): ``` select year, empid, sum_cost from ( select year, empid, sum(cost) sum_cost, max(sum(cost)) keep (dense_rank last order by sum(cost)) over (partition by year) max_cost from orders group by year, empid ) where sum_cost = max_cost ``` Sample data and output: ``` create table orders (year number(4), empid number(5), cost number(10,2)); insert into orders values (2010, 1, 100); insert into orders values (2010, 1, 115); insert into orders values (2010, 1, 207); insert into orders values (2010, 2, 104); insert into orders values (2011, 1, 90); insert into orders values (2011, 2, 15); insert into orders values (2011, 2, 107); insert into orders values (2011, 3, 100); ``` Output: ``` YEAR EMPID SUM_COST ---- ----- -------- 2010 1 422 2011 2 122 ``` --- **Edit:** I doubt you could eliminate the `with` clause if you want to do a self-join here. `with` is used especially when complex sub-queries are used twice or more. And if you insist on `join` instead of `where (year,tcost) in ...`, as you suggested in one of the comments, please use: ``` with vvn as (select year, empid, sum(cost) as sc from orders group by year, empid) select v1.year, v1.empid, v1.sc from vvn v1 join (select year, max(sc) msc from vvn group by year) v2 on v1.year = v2.year and v1.sc = v2.msc; ``` BTW, a shortened version of my first answer does not really need the `keep dense rank` part; simpler is: ``` select year, empid, sum_cost from (select year, empid, sum(cost) sum_cost, max(sum(cost)) over (partition by year) max_cost from orders group by year, empid ) where sum_cost = max_cost ``` The version with the somewhat modified `keep...` is still valid and interesting, but you probably noticed this.
``` SELECT t1.year, t1.empid, t2.tcost FROM (SELECT year, empid, sum(cost) AS total FROM orders GROUP BY year, empid) t1 INNER JOIN (SELECT year, max(total) **AS tcost** FROM t1 **<-- ?? No, you need to specify a table** GROUP BY year) t2 ON t1.year = t2.year ``` The syntax is wrong in `FROM t1`: you cannot reference the alias of one derived table inside another derived table, which is why Oracle raises ORA-00942 (`t1` is not a real table or view). Also, `max(total)` needs to be aliased. Both issues are annotated above.
how can I join two queries in which one is derived from other?
[ "sql", "oracle", "join" ]
I want to index data in high dimensions (128-dimensional vectors of integers in the range [0,254]): ``` | id | vector | | 1 | { 1, 0, ..., 254} | | 2 | { 2, 128, ...,1} | | . | { 1, 0, ..., 252} | | n | { 1, 2, ..., 251} | ``` I saw that PostGIS implements R-trees. So can I use these trees in PostGIS to index and query multidimensional vectors in Postgres? I also saw that there is an [index implementation for int arrays](http://www.postgresql.org/docs/current/static/intarray.html). Now I have questions about how to perform a query. Can I perform a kNN search and a radius search on an integer array? Maybe I must also define my own distance function. Is this possible? I want to use the [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry) (block distance) for my queries. I can also represent my vector as a string with the pattern `v1;v2;...;vn`. Does this help to perform the search? For example, for these two strings: ``` 1;2;1;1 1;3;2;2 ``` the result / distance between these two strings should be 3.
Perhaps a better choice would be the [cube extension](http://www.postgresql.org/docs/current/static/cube.html), since your area of interest is not the individual integers but the full vector. Cube supports GiST indexing, and Postgres 9.6 will also bring KNN indexing to cubes, supporting [Euclidean, taxicab (aka Manhattan) and Chebyshev distances](http://www.postgresql.org/docs/devel/static/cube.html#CUBE-OPERATORS-TABLE). It is a bit annoying that 9.6 is still in development; however, there's no problem backporting the patch for the cube extension to 9.5, and I say that from experience. Hopefully 128 dimensions will still be enough to get [meaningful results](https://en.wikipedia.org/wiki/Curse_of_dimensionality). **How to do this?** First have an example table: ``` create extension cube; create table vectors (id serial, vector cube); ``` Populate the table with example data: ``` insert into vectors select id, cube(ARRAY[round(random()*1000), round(random()*1000), round(random()*1000), round(random()*1000), round(random()*1000), round(random()*1000), round(random()*1000), round(random()*1000)]) from generate_series(1, 2000000) id; ``` Then try selecting: ``` explain analyze SELECT * from vectors order by cube(ARRAY[966,82,765,343,600,718,338,505]) <#> vector asc limit 10; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------- Limit (cost=123352.07..123352.09 rows=10 width=76) (actual time=1705.499..1705.501 rows=10 loops=1) -> Sort (cost=123352.07..129852.07 rows=2600000 width=76) (actual time=1705.496..1705.497 rows=10 loops=1) Sort Key: (('(966, 82, 765, 343, 600, 718, 338, 505)'::cube <#> vector)) Sort Method: top-N heapsort Memory: 26kB -> Seq Scan on vectors (cost=0.00..67167.00 rows=2600000 width=76) (actual time=0.038..998.864 rows=2600000 loops=1) Planning time: 0.172 ms Execution time: 1705.541 ms (7 rows) ``` We should create an index: ``` create index vectors_vector_idx on vectors (vector); ``` Does it help: ``` explain analyze SELECT * from vectors order by cube(ARRAY[966,82,765,343,600,718,338,505]) <#> vector asc limit 10; -------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.41..1.93 rows=10 width=76) (actual time=41.339..143.915 rows=10 loops=1) -> Index Scan using vectors_vector_idx on vectors (cost=0.41..393704.41 rows=2600000 width=76) (actual time=41.336..143.902 rows=10 loops=1) Order By: (vector <#> '(966, 82, 765, 343, 600, 718, 338, 505)'::cube) Planning time: 0.146 ms Execution time: 145.474 ms (5 rows) ``` At 8 dimensions, it does help.
(Addendum to the selected answer) For people wanting more than 100 dimensions, beware: there's [a 100-dimension limit in the cube extension](https://www.postgresql.org/docs/current/static/cube.html#AEN176262). The tricky part is that Postgres allows you to create cubes with more than 100 dimensions just fine. It's when you try to restore a backup that it is refused (the worst time to realize that). As recommended in the documentation, I patched the cube extension to support more dimensions. I made a Docker image for it, and you can look at the Dockerfile to see how to do it yourself, from the [GitHub repo](https://github.com/oelmekki/postgres-350d).
indexing and query high dimensional data in postgreSQL
[ "sql", "postgresql", "multidimensional-array", "indexing", "postgresql-9.3" ]
I'm getting a syntax error while following the MySQL guide for `IF` syntax. My query is: ``` if 0=0 then select 'hello world'; end if; ``` Logically, this should select `'hello world'`, but instead I get ``` ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'if (0=0) then select 'hello world'' at line 1 ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'end if' at line 1 ```
Your query is only valid in a stored procedure/function context. See [here](http://dev.mysql.com/doc/refman/5.7/en/if.html) for reference.
Using IF statements like this is valid only inside stored procedures or functions. What you probably want is the [if()](http://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_if) function, which you can use like this: ``` select IF(0=0, 'hello world',''); ```
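For engines without MySQL's `IF()` function, the portable spelling is a `CASE` expression; a quick check using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# MySQL's IF(cond, a, b) is the standard CASE expression elsewhere
result = conn.execute(
    "SELECT CASE WHEN 0=0 THEN 'hello world' ELSE '' END"
).fetchone()[0]
print(result)  # hello world
```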
IF syntax error
[ "mysql", "sql", "if-statement", "syntax", "syntax-error" ]
I have two tables: university and university\_list. Table 1 - university [![enter image description here](https://i.stack.imgur.com/6Q1rA.png)](https://i.stack.imgur.com/6Q1rA.png) Table 2 - university\_list [![enter image description here](https://i.stack.imgur.com/5Qe6E.png)](https://i.stack.imgur.com/5Qe6E.png) I added `university_id` to table 2 and I need to connect the two tables: if `university_name` from table 1 and `name` from table 2 are identical, take the `id` from table 1 and put it into table 2's `university_id` column. Thank you in advance!
``` UPDATE university_list a JOIN university b ON a.name = b.university_name SET a.university_id = b.id ```
``` select a.id, b.name from table1 as a inner join table2 as b on a.university_name = b.name ``` The query above will return the id and name of each university that matches. Hold both values in variables and pass the variables into an update query: ``` update table2 set university_id = '$val' where name = '$name'; ```
SQL How to connect two tables together by a specific column?
[ "mysql", "sql", "phpmyadmin" ]
I've got a query which gets the total sales figure for the current day ``` SELECT SUM(cso.SubTotal) - ( SELECT SUM(cso.CreditAvailable) / 1.1 FROM dbo.CustomerCredit cso WHERE CONVERT(VARCHAR(10), cso.DateCreated, 102) = CONVERT(VARCHAR(10), SYSDATETIME(), 102) ) AS total_value FROM dbo.CustomerInvoice cso WHERE Convert(VARCHAR(10), cso.InvoiceDate, 102) = Convert(VARCHAR(10), SYSDATETIME(), 102) ``` What I now need, is a table with a list of all the dates in the current month on the left column, and on the right column, the total sales for each date (Using the query above) ``` +---------+---------+ | date | total | +---------+---------+ | 1/2/16 | 256232 | | 2/2/16 | 285632 | | 3/2/16 | 265231 | | 4/2/16 | 254215 | | 5/2/16 | 0 | | ....... | ..... | | 28/2/16 | 0 | | 29/2/16 | 0 | +-------------------+ ``` It doesn't matter if there are zero sales values for dates which occur in the future or for weekend dates. I've racked my brains for a solution, but as I'm only new to SQL I decided to reach out to the community.
You can use query like this: ``` ;with date_cur_month as( select convert(datetime, convert(varchar(6), getdate(), 112) + '01', 112) as dt union all select dateadd(day, 1, dt) as dt from date_cur_month where datepart(month, dateadd(day, 1, dt)) = datepart(month, getdate())) select convert(nvarchar(8), d.dt, 3) as date , isnull(sum(ci.SubTotal), 0) - isnull((select sum(cc.CreditAvailable) / 1.1 from dbo.CustomerCredit cc where cast(cc.DateCreated as date) = d.dt ), 0) as total from dbo.CustomerInvoice ci right outer join date_cur_month d on cast(ci.InvoiceDate as date) = d.dt group by d.dt ```
First, I believe your query can be written as: ``` SELECT SUM(SubTotal) - (SUM(CreditAvailable) / 1.1) AS total FROM dbo.CustomerInvoice WHERE CAST(InvoiceDate AS DATE) = CAST(SYSDATETIME() AS DATE) ``` To get the total for each date, just add a `GROUP BY`: ``` SELECT SUM(SubTotal) - (SUM(CreditAvailable) / 1.1) AS total FROM dbo.CustomerInvoice GROUP BY CAST(InvoiceDate AS DATE) ``` Now, to include all dates, even the dates with 0 total, you have to use a [tally table](http://www.sqlservercentral.com/articles/T-SQL/62867/). ``` DECLARE @fromDate AS DATE = '20160101', @toDate AS DATE = '20160131' ;WITH E1(N) AS( SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N) ), E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), CteTally(N) AS( SELECT TOP(DATEDIFF(DAY, @fromDate, @toDate) + 1) ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) FROM E4 ), CteDates(dt) AS( SELECT DATEADD(DAY, N - 1, @fromDate) FROM CteTally ) SELECT d.dt, t.total FROM CteDates d LEFT JOIN ( SELECT dt = CAST(InvoiceDate AS DATE), total = SUM(SubTotal) - (SUM(CreditAvailable) / 1.1) FROM dbo.CustomerInvoice WHERE InvoiceDate >= @fromDate AND InvoiceDate < DATEADD(DAY, 1, @toDate) GROUP BY CAST(InvoiceDate AS DATE) ) t ON t.dt = d.dt ```
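The tally-table idea generalizes: any engine with recursive CTEs can generate the month's dates and LEFT JOIN the sales totals against them. A sketch generating February 2016 in SQLite from Python (dates as ISO strings; the sales join is omitted to keep it short):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# recursive CTE producing one row per day of February 2016
rows = conn.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT '2016-02-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < '2016-02-29'
    )
    SELECT d FROM days
""").fetchall()
print(len(rows), rows[0][0], rows[-1][0])  # 29 2016-02-01 2016-02-29
```

A `LEFT JOIN` from `days` to the aggregated invoice totals, exactly as in the answer above, then fills in zero-sales dates.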
Get daily sales for every day this month
[ "sql", "sql-server", "sql-server-2008" ]
I have two tables. They have the same data, but from different sources. I would like to return all columns from both tables where an id in table 2 occurs more than once in table 1. Another way to look at it: if table2.id occurs only once in table1.id, don't bring it back. I have been thinking some combination of GROUP BY and ORDER BY clauses could get this done, but it's not producing the right results. How would you express this in a SQL query? ``` Table1 | id | info | state | date | | 1 | 123 | TX | 12-DEC-09 | | 1 | 123 | NM | 12-DEC-09 | | 2 | 789 | NY | 14-DEC-09 | Table2 | id | info | state | date | | 1 | 789 | TX | 14-DEC-09 | | 2 | 789 | NY | 14-DEC-09 | Output |table2.id| table2.info | table2.state| table2.date|table1.id|table1.info|table1.state|table1.date| | 1 | 789 | TX | 14-DEC-09 | 1 | 123 | TX | 12-DEC-09 | | 1 | 789 | TX | 14-DEC-09 | 1 | 123 | NM | 12-DEC-09 | ```
If you're using MSSQL, try a Common Table Expression: ``` WITH cte AS (SELECT T1.ID, COUNT(*) as Num FROM Table1 T1 INNER JOIN Table2 T2 ON T1.ID = T2.ID GROUP BY T1.ID HAVING COUNT(*) > 1) SELECT * FROM cte INNER JOIN Table1 T1 ON cte.ID = T1.ID INNER JOIN Table2 T2 ON cte.ID = T2.ID ```
I find this a much simpler way to do it: ``` select TableA.*,TableB.* from TableA inner join TableB on TableA.id=TableB.id where TableA.id in (select distinct id from TableA group by id having count(*) > 1) ```
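The core of both answers is the `GROUP BY ... HAVING COUNT(*) > 1` filter; a minimal check of that filter with invented data, using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INT, state TEXT)")
# id 1 appears twice, id 2 only once
conn.executemany("INSERT INTO t1 VALUES (?,?)", [(1, 'TX'), (1, 'NM'), (2, 'NY')])
dup_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t1 GROUP BY id HAVING COUNT(*) > 1")]
print(dup_ids)  # [1]
```

Joining back to the source tables on these ids, as both answers do, then yields the full rows.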
SQL Query to Bring Back where Row Count is Greater than 1
[ "sql", "sql-server", "group-by" ]
I have this request: ``` select COALESCE (SUBSTR(description, 0, INSTR(description, '#')-1),description) FROM dual ``` Test: When description = `123456789 # 11` => Result = `123456789` When description = `123456789` => Result = `123456789` Is there a way to make this less complicated? As you can see, it's hard to read.
To remove all characters after `#`, use the regexp\_replace in the first line; note that the trailing blank is preserved. ``` regexp_replace(description, '([^#]*).*', '\1') ``` The second line, `D2`, removes anything after the first `#` or *blank*: ``` regexp_replace(description, '([^ #]*).*', '\1') ``` Here is a sample query - the quotes just show the presence of blanks: ``` select description, '"'||regexp_replace(description, '([^#]*).*', '\1')||'"' d1, '"'||regexp_replace(description, '([^ #]*).*', '\1')||'"' d2 FROM (select '123456789 # 11' description from dual union all select '123456789' description from dual); ``` result ``` DESCRIPTION D1 D2 -------------- -------------- -------------- 123456789 # 11 "123456789 " "123456789" 123456789 "123456789" "123456789" ```
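The same "keep everything before the first `#` or blank" pattern ports directly to other regex engines; a sketch in Python mirroring the `D2` variant (the `^` anchor restricts the substitution to a single match from the start of the string):

```python
import re

def before_hash_or_blank(desc):
    # capture the leading run of non-'#', non-blank characters, drop the rest
    return re.sub(r'^([^ #]*).*', r'\1', desc)

for desc in ('123456789 # 11', '123456789'):
    print(before_hash_or_blank(desc))  # 123456789 in both cases
```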
You can use `regexp_substr()`: ``` select regexp_substr(description, '^[^ #]*[ ]') ``` I doubt this would be any faster.
How to less complicated this request
[ "sql", "oracle" ]
We have the following tree structure. The aim is to construct a table in the database using a PostgreSQL query. The table should contain the following information: the first column contains the node, and the second contains the parent node. ``` |--node1.1.1 |-node1.1--| | |--node1.1.2 | | | node1--|-node1.2--|--node1.2.1 | | | | |-node1.3 ``` Table tree: [![enter image description here](https://i.stack.imgur.com/5DuTx.png)](https://i.stack.imgur.com/5DuTx.png) Below you will find the query that will generate the initial table: ``` CREATE TABLE tree ( node character varying NOT NULL, node_parent character varying, CONSTRAINT tree_pkey PRIMARY KEY (node), CONSTRAINT fk_ FOREIGN KEY (node_parent) REFERENCES tree (node) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION ) WITH ( OIDS=FALSE ); ALTER TABLE tree OWNER TO postgres; INSERT INTO tree(node, node_parent) VALUES ('node1', null); INSERT INTO tree(node, node_parent) VALUES ('node1.1', 'node1'); INSERT INTO tree(node, node_parent) VALUES ('node1.2', 'node1'); INSERT INTO tree(node, node_parent) VALUES ('node1.3', 'node1'); INSERT INTO tree(node, node_parent) VALUES ('node1.1.1', 'node1.1'); INSERT INTO tree(node, node_parent) VALUES ('node1.1.2', 'node1.1'); INSERT INTO tree(node, node_parent) VALUES ('node1.2.1', 'node1.2'); ``` Then from this table (tree) we aim to generate the following table. The query should return, for each composite node, the list of subnodes. This query should generate results similar to the example below. To do this we have proposed the following query.
``` SELECT node, node_parent FROM tree t where not (node in(select distinct node_parent from tree where not node_parent is null)) union all SELECT tn2.node, tn1.node_parent FROM tree tn1 join tree tn2 on tn1.node = tn2.node_parent where not tn1.node_parent is null and not (tn2.node in(select distinct node_parent from tree where not node_parent is null)) ``` Result : [![enter image description here](https://i.stack.imgur.com/j8BzI.png)](https://i.stack.imgur.com/j8BzI.png) The problem with the query we have proposed is that it is not generic and does not work for all cases (this query works only for a tree with a depth of three). We want a query that works in all cases.
Thank you for all your contributions. Thanks to them, we have succeeded in solving the issue as follows: ``` WITH RECURSIVE hierarchy(node, node_parent) AS ( SELECT node, node_parent FROM tree UNION ALL SELECT t.node, h.node_parent FROM hierarchy h JOIN tree t ON h.node = t.node_parent ) SELECT * FROM hierarchy WHERE NOT (node IN ( SELECT node_parent FROM tree where not node_parent is null)) and not node_parent is null ```
Take a look at the [WITH statement](http://www.postgresql.org/docs/current/static/queries-with.html). Your final query will look like the following: ``` with recursive hierarchy(node, node_parent, level) as ( select node, node_parent, 0 from tree where node_parent is null union all select t.node, t.node_parent, level + 1 from tree t join hierarchy h on t.node_parent = h.node) select * from hierarchy; ```
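The recursive CTE is standard enough to test outside Postgres; a sketch of the accepted transitive-closure idea in SQLite from Python, on a three-node slice of the sample tree (this version keeps every node/ancestor pair rather than filtering down to leaves):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tree (node TEXT, node_parent TEXT)")
conn.executemany("INSERT INTO tree VALUES (?,?)", [
    ('node1', None), ('node1.1', 'node1'), ('node1.1.1', 'node1.1'),
])
# each recursive step lifts a node one level up its ancestor chain
rows = conn.execute("""
    WITH RECURSIVE hierarchy(node, node_parent) AS (
        SELECT node, node_parent FROM tree
        UNION ALL
        SELECT t.node, h.node_parent
        FROM hierarchy h JOIN tree t ON h.node = t.node_parent
    )
    SELECT node, node_parent FROM hierarchy
    WHERE node_parent IS NOT NULL ORDER BY node, node_parent
""").fetchall()
print(rows)
# [('node1.1', 'node1'), ('node1.1.1', 'node1'), ('node1.1.1', 'node1.1')]
```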
Processing a generated tree dynamically
[ "sql", "postgresql" ]
My understanding of using summary functions in SQL is that each field in the select statement that doesn't use a summary function, should be listed in the `group by` statement. ``` select a, b, c, sum(n) as sum_of_n from table group by a, b, c ``` My question is, why do we need to list the fields? Shouldn't the SQL syntax parser be implemented in a way that we can just tell it to `group` and it can figure out the groups based on whichever fields are in the `select` and aren't using summary functions?: ``` select a, b, c, sum(n) as sum_of_n from table group ``` I feel like I'm unnecessarily repeating myself when I write SQL code. What circumstances exist where we would not want it to automatically figure this out, or where it couldn't automatically figure this out?
To decrease the chances of errors in your statement. Explicitly spelling out the `GROUP BY` columns helps to ensure that the user wrote what they intended to write. You might be surprised at the number of posts that show up on Stack Overflow in which the user is grouping on columns that make no sense, but they have no idea why they aren't getting the data that they expect. Also, consider the scenario where a user might want to group on more columns than are actually in the `SELECT` statement. For example, if I wanted the average of the most money that my customers have spent, then I might write something like this: ``` SELECT AVG(max_amt) FROM (SELECT MAX(amt) AS max_amt FROM Invoices GROUP BY customer_id) SQ ``` In this case I can't simply use `GROUP`; I need to spell out the column(s) on which I'm grouping. The SQL engine could allow the user to explicitly list columns but use a default if they are not listed, but then the chances of bugs drastically increase. One way to think of it is like strongly typed programming languages: making the programmer explicitly spell things out decreases the chance of bugs popping up because the engine made an assumption that the programmer didn't expect.
This is required to state explicitly how you want to group the records because, for example, you may group by columns that are not listed in the result set. That said, there are RDBMSs, such as MySQL, that allow you to omit columns from the `GROUP BY` clause when using aggregate functions.
SQL Syntax - Why do we need to list individual fields in an SQL group-by statement?
[ "sql" ]
[![sample table](https://i.stack.imgur.com/655fK.png)](https://i.stack.imgur.com/655fK.png) I have this issue and I can't find a correct solution. The following image shows a table where I have different records going in. The keys for the records are `RID` and `NAME`, and I would like to create a query that returns only the most recent dates for both keys (marked in grey in the image). I would appreciate this community's help in trying to make it work; I have already tried joining the table with itself and trying to get Date1 > Date2, without success. I solved this by using this query: ``` SELECT * FROM <table> as o inner join ( select RID, NAME, max(CREATED) as CREATED from <table> group by RID, NAME ) as t on t.NAME=o.NAME and t.RID=o.RID and o.CREATED=t.CREATED order by ID ``` I would appreciate a better solution that also gets the ID in the query.
Since maximum `ID` is related to maximum `CREATED` you can use aggregates to find maximum `CREATED` and `ID` for each distinct pair of `RID, NAME`: ``` select RID, NAME, max(ID), max(CREATED) from <your-table-name> group by RID, NAME ```
I solved this by using this query: ``` SELECT * FROM <table> as o inner join ( select RID, NAME, max(CREATED) as CREATED from <table> group by RID, NAME ) as t on t.NAME=o.NAME and t.RID=o.RID and o.CREATED=t.CREATED order by ID ``` I would appreciate a better solution that also gets the ID in the query.
SQL Get most recent date from table to be included in a VIEW with inner join
[ "sql", "t-sql", "date", "datetime", "join" ]
I'm looking to copy the values of three columns (Column 1, Column 2, and Column 3) to another table; however, I don't want values to be copied if there is a duplicate value in Column 2. An example is below: ``` UserID Item Date ------------------------ 101 1 < 2-10-2016 101 1 < 2-9-2016 101 2 2-11-2016 101 3 2-11-2016 102 5 2-11-2016 102 6 2-14-2016 103 1 2-11-2016 103 4 < 2-11-2016 103 4 < 2-11-2016 ``` I want to INSERT INTO only: * UserID 101 Item 1 w/ date * UserID 101 Item 2 w/ date * UserID 101 Item 3 w/ date * UserID 102 Item 5 w/ date * UserID 102 Item 6 w/ date * UserID 103 Item 1 w/ date * UserID 103 Item 4 w/ date I've tried finding a way to filter duplicate Items (GROUP BY) from the table to no avail. Is there an efficient way to do this without using loops? There is also a unique identifier column that indexes these values.
Just do a `GROUP BY UserId, Item` and use `HAVING` to determine the group population: ``` INSERT INTO TableB (Col1, Col2) SELECT UserId, Item FROM TableA GROUP BY UserId, Item HAVING COUNT(*) = 1 ``` This will insert only non-duplicated `UserId, Item` pairs into TableB. If you want to insert **all** `UserId, Item` pairs **just once**, then use: ``` INSERT INTO TableB (Col1, Col2) SELECT UserId, Item FROM TableA GROUP BY UserId, Item ``` Try this if you have additional fields: ``` ;WITH ToBeInserted AS ( SELECT UserID, Item, [Date], ROW_NUMBER() OVER (PARTITION BY UserID, Item ORDER BY [Date] DESC) AS rn FROM TableA ) INSERT INTO TableB (UserID, Item, [Date]) SELECT UserID, Item, [Date] FROM ToBeInserted WHERE rn = 1 ``` The [**`ROW_NUMBER`**](https://msdn.microsoft.com/en-us/library/ms186734.aspx) window function is used to enumerate records that belong to the same `UserID, Item` partition: the record having the most recent `[Date]` value has a row number equal to one, the next record has row number = 2, etc. The `INSERT` operation uses this row number value in order to select just one record from each `UserID, Item` partition.
Try ``` INSERT INTO TableB (Col1, Col2, Col3) SELECT UserId, Item, Max([Date]) FROM TableA GROUP BY UserId, Item ``` Use Min() if you want the smallest date to be inserted.
How to copy a column without duplicate values? SQL
[ "sql", "database", "t-sql" ]
I need to know the structure of a query result. Let's say I have this query: ``` SELECT T.name as names FROM (SELECT name,sex FROM user) T WHERE T.sex='male' ``` What I need to know is the structure of the result of this query, something like this: ``` column_name : names TYPE : varchar(60) ``` Is there a way to get this?
This can be a bit complicated to do. One method that works across databases is to do the following: * Create a table or view with the structure * Investigate the metadata In MySQL, you can do: ``` create table temp_table as select t.name as names from (select name, sex from user) t where t.sex = 'male' limit 0; ``` This should create an empty table with the right columns. You can then look at `INFORMATION_SCHEMA.COLUMNS` to get the information you want. In MySQL, a temporary table is preferable to a view, because (older versions of) MySQL severely limit the queries that can be used for views.
First create a `view` ``` CREATE VIEW view_name AS SELECT T.name as names FROM (SELECT name,sex FROM user) T WHERE T.sex='male' ``` Then simply run a `desc` or `describe` on the `view` like ``` desc view_name ``` You can get the result of this query using your code and use it as you need.
Sql query result structure
[ "mysql", "sql", "select", "subquery" ]
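Both routes to the result structure (the driver's own metadata, and the answer's empty `LIMIT 0` table) can be sketched with SQLite standing in for MySQL; `PRAGMA table_info` plays the role of `INFORMATION_SCHEMA.COLUMNS` here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE user (name VARCHAR(60), sex VARCHAR(10))")

# Option 1: run the query and read the driver's result metadata.
cur.execute(
    """SELECT T.name AS names
       FROM (SELECT name, sex FROM user) T
       WHERE T.sex = 'male'"""
)
col_names = [d[0] for d in cur.description]

# Option 2 (the answer's trick): materialise an empty table with the same
# shape, then inspect it via schema metadata.
cur.execute(
    """CREATE TEMP TABLE temp_table AS
       SELECT T.name AS names
       FROM (SELECT name, sex FROM user) T
       WHERE T.sex = 'male' LIMIT 0"""
)
meta = cur.execute("PRAGMA table_info(temp_table)").fetchall()
print(col_names, meta)
```

Note that SQLite's loose typing means the declared type it reports may differ from MySQL's `varchar(60)`; the column names, however, come through exactly.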
I have two tables named `tblStockManagement` and `tblFolding` in my database. I have a column `Name` in the `tblFolding` table and a column `FoldingID` as a foreign key in the `tblStockManagement` table. Now I have a comboBox in my WinForm, and I want the `Names` of items in the combobox from the `tblFolding` table, but only those items that are not in the `tblStockManagement` table (because I don't want to select data again if it is already in the `tblStockManagement` table; instead I will update the quantity later). [![tblFolding Table](https://i.stack.imgur.com/UEAMY.png)](https://i.stack.imgur.com/UEAMY.png) [![tblStockManagement Table](https://i.stack.imgur.com/E2S6T.png)](https://i.stack.imgur.com/E2S6T.png) These are the screenshots of both tables. Please tell me how I can do that.
This is what you need. Basically, a subquery gets all the FoldingIDs already in stock, and the NOT IN operator excludes those matching rows: ``` SELECT Name FROM tblFolding WHERE FoldingID NOT IN ( SELECT FoldingID FROM tblStockManagement ) ; ```
`NOT EXISTS` version: ``` select * from tblFolding f where not exists (select * from tblStockManagement SM where sm.FoldingID = f.FoldingID) ``` `NOT EXISTS` is "NULL safe", which `NOT IN` isn't.
SQL Query to match if data is not present in another table
[ "sql", "sql-server" ]
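A runnable sketch of the accepted `NOT IN` query, plus the `NOT EXISTS` variant from the other answer, with SQLite standing in for SQL Server (the folding names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tblFolding (FoldingID INTEGER, Name TEXT)")
cur.execute("CREATE TABLE tblStockManagement (StockID INTEGER, FoldingID INTEGER)")
cur.executemany("INSERT INTO tblFolding VALUES (?, ?)",
                [(1, "Plain"), (2, "Gusset"), (3, "Twin")])
cur.execute("INSERT INTO tblStockManagement VALUES (1, 1)")  # folding 1 already stocked

not_in = cur.execute(
    """SELECT Name FROM tblFolding
       WHERE FoldingID NOT IN (SELECT FoldingID FROM tblStockManagement)
       ORDER BY FoldingID"""
).fetchall()

# Equivalent NOT EXISTS form; unlike NOT IN, it stays correct even if the
# stock table ever held a NULL FoldingID.
not_exists = cur.execute(
    """SELECT f.Name FROM tblFolding f
       WHERE NOT EXISTS (SELECT 1 FROM tblStockManagement s
                         WHERE s.FoldingID = f.FoldingID)
       ORDER BY f.FoldingID"""
).fetchall()
print(not_in, not_exists)
```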
This is probably a very simple question, but my attempts to search for an answer are thwarted by Google finding answers showing how to reuse a query by making a stored procedure instead. I want to reuse the results of a query inside a stored procedure. Here's a cut-down example where I've chopped out `NOCOUNT`, `XACT_ABORT`, `TRANSACTION`, `TRY`, and much of the logic. ``` CREATE PROCEDURE Do_Something @userId UNIQUEIDENTIFIER AS BEGIN DELETE FROM LikedItems WHERE likedItemId IN ( SELECT Items.id FROM Items WHERE Items.userId = @userId ) DELETE FROM FollowedItems WHERE followedItemId IN ( SELECT Items.id FROM Items WHERE Items.userId = @userId ) END ``` What is the syntax to reuse the results of the duplicated nested `SELECT` rather than doing it twice?
You can INSERT the result of the SELECT into a temporary table or table variable, but that doesn't automatically mean that the overall performance will be better. You need to measure it. **Temp Table** ``` CREATE PROCEDURE Do_Something @userId UNIQUEIDENTIFIER AS BEGIN CREATE TABLE #Temp(id int); INSERT INTO #Temp(id) SELECT Items.id FROM Items WHERE Items.userId = @userId; DELETE FROM LikedItems WHERE likedItemId IN ( SELECT id FROM #Temp ) DELETE FROM FollowedItems WHERE followedItemId IN ( SELECT id FROM #Temp ) DROP TABLE #Temp; END ``` **Table variable** ``` CREATE PROCEDURE Do_Something @userId UNIQUEIDENTIFIER AS BEGIN DECLARE @Temp TABLE(id int); INSERT INTO @Temp(id) SELECT Items.id FROM Items WHERE Items.userId = @userId; DELETE FROM LikedItems WHERE likedItemId IN ( SELECT id FROM @Temp ) DELETE FROM FollowedItems WHERE followedItemId IN ( SELECT id FROM @Temp ) END ```
If the subquery is fast and simple - no need to change anything. The Items data is in the cache after the first query (if it was not already), and locks are obtained. If the subquery is slow and complicated - store it into a table variable and reuse it via *the same subquery* as listed in the question. If your question is not related to performance and you are wary of copy-paste: there is no copy-paste. There is the same logic, similar structure and references - yes, you will have almost the same query source code. In general, it is not the same. Some rows could be deleted from or inserted into the Items table after the first query unless you are running under the SERIALIZABLE isolation level. Many different things could happen during the first delete, and between the first and second delete statements. Each delete statement also requires its own execution plan - thus all the information about the affected tables and joins must be provided to the server anyway. You need to filter by the same source again - yes, you provide the subquery with the same source again. There is no "twice" or "reuse" of partial code. Data collected by a complicated query - yes, it can be reused (*without running the same complicated query* - by *simply querying from a prepared source*) via temp tables/table variables as mentioned before.
Reuse results of SELECT query inside a stored procedure
[ "sql", "sql-server", "t-sql", "stored-procedures" ]
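The temp-table pattern from the accepted answer can be sketched with SQLite, whose `TEMP TABLE` stands in for SQL Server's `#Temp` (the user id 7 and the sample rows are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE Items (id INTEGER, userId INTEGER);
    CREATE TABLE LikedItems (likedItemId INTEGER);
    CREATE TABLE FollowedItems (followedItemId INTEGER);
    INSERT INTO Items VALUES (1, 7), (2, 7), (3, 8);
    INSERT INTO LikedItems VALUES (1), (3);
    INSERT INTO FollowedItems VALUES (2), (3);
""")

# Run the subquery once into a temp table, then reuse its rows in both deletes.
cur.executescript("""
    CREATE TEMP TABLE Temp (id INTEGER);
    INSERT INTO Temp (id) SELECT id FROM Items WHERE userId = 7;
    DELETE FROM LikedItems WHERE likedItemId IN (SELECT id FROM Temp);
    DELETE FROM FollowedItems WHERE followedItemId IN (SELECT id FROM Temp);
""")
liked = cur.execute("SELECT likedItemId FROM LikedItems").fetchall()
followed = cur.execute("SELECT followedItemId FROM FollowedItems").fetchall()
print(liked, followed)
```

Only item 3 (owned by another user) survives in each table, which is exactly what the duplicated subquery in the question would have produced.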
Using this SQL query: ``` Select CityName, count(UserID) as UserCount from tblUsers group by CityName ``` I get these results: ``` CityName UserCount --------------------- City 1 10 City 2 15 ``` Expected output: ``` CityName UserCount Perc --------------------------- City 1 10 40 City 2 15 60 ``` As per above datasets, I want to get the % distribution of rows based on total sum of the value from 1 column. Please advise.
Try using a subquery to select the total count, like this: ``` Select CityName, count(userID) as userCount, count(UserID) * 100.0 / (select count(*) from tblUsers) as Perc from tblUsers group by CityName ``` (The `100.0` forces decimal division; with plain integers, SQL Server would truncate the ratio.) You can also try a JOIN like this: ``` SELECT t.CityName, t.userCount, t.userCount * 100.0 / s.totalCount as Perc FROM (Select CityName, count(UserID) as UserCount from tblUsers group by CityName) t INNER JOIN (SELECT count(*) as totalCount from tblUsers) s ON(1=1) ```
The simplest way is to use window functions: ``` Select CityName, count(UserID) as UserCount , count(UserId) * 100.0 / sum(count(UserId)) over () as Perc from tblUsers group by CityName; ``` A subquery is not necessary.
Get % of rows based on total number of rows when using GROUP BY clause in SQL Server
[ "sql", "sql-server", "database", "sql-server-2008" ]
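The per-group percentage can be sketched with SQLite standing in for SQL Server; note the `100.0`, without which both engines would truncate the integer ratio:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tblUsers (UserID INTEGER, CityName TEXT)")
# 10 users in City 1 and 15 in City 2, matching the question's counts.
rows = [(i, "City 1") for i in range(10)] + [(100 + i, "City 2") for i in range(15)]
cur.executemany("INSERT INTO tblUsers VALUES (?, ?)", rows)

result = cur.execute(
    """SELECT CityName,
              COUNT(UserID) AS UserCount,
              COUNT(UserID) * 100.0 / (SELECT COUNT(*) FROM tblUsers) AS Perc
       FROM tblUsers
       GROUP BY CityName
       ORDER BY CityName"""
).fetchall()
print(result)
```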
I have 2 tables: a Device table and a User table. The Device table has 2 columns: ID and MacAddress. The User table has 4 columns: ID, Name, Phone, MacAddress. There will be a fixed list populating the User table, for example: 1, Steve Marks, 219-373-1485, 5A:2B:3C:8D 2, Dan Marks, 310-248-1455, 5C:3A:2B:8A Every 5 mins the Device table will be populated with MacAddresses and device information within the local vicinity. I want to create a view that gets the name and phone for MacAddresses that are repeated more than twice in the Device table and have a corresponding MacAddress match in the User table. Thanks! Sam
As I understood it, your User table contains, for example, all users of a WLAN hotspot. And the device list contains all MAC addresses having accessed the hotspot at a given time. What about ``` SELECT name, macAddress, count(macAddress) FROM Device INNER JOIN USER ON User.MacAddress = Device.MacAddress GROUP BY name, macAddress HAVING COUNT(macAddress) >= 2 ``` If I understood your use case right, I'd advise adding another timestamp column to the device table. So you are prepared for the data to change and are able to select within desired time periods. E.g. "who had access at least 2 times between 10 and 11 o'clock?": ``` SELECT name FROM Device INNER JOIN USER ON User.MacAddress = Device.MacAddress WHERE timestamp between '2016-02-14 10:00:00' and '2016-02-14 11:00:00' GROUP by name, macAddress HAVING COUNT(macAddress) >= 2 ORDER BY name ```
You can use this query ``` SELECT name,phone, MacAddress, COUNT(name) as total FROM user GROUP BY name HAVING ( COUNT(name) > 1 ) ```
MySQL query with if condition
[ "mysql", "sql", "database" ]
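The accepted join-plus-`HAVING` query runs as-is against SQLite in place of MySQL (sample rows from the question, plus a hypothetical unknown device):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE User (ID INTEGER, Name TEXT, Phone TEXT, MacAddress TEXT)")
cur.execute("CREATE TABLE Device (ID INTEGER, MacAddress TEXT)")
cur.executemany("INSERT INTO User VALUES (?, ?, ?, ?)", [
    (1, "Steve Marks", "219-373-1485", "5A:2B:3C:8D"),
    (2, "Dan Marks", "310-248-1455", "5C:3A:2B:8A"),
])
cur.executemany("INSERT INTO Device VALUES (?, ?)", [
    (1, "5A:2B:3C:8D"), (2, "5A:2B:3C:8D"),   # Steve's device seen twice
    (3, "5C:3A:2B:8A"),                        # Dan's device seen once
    (4, "FF:FF:FF:FF"),                        # unknown device, no User match
])

repeat_visitors = cur.execute(
    """SELECT u.Name, u.MacAddress, COUNT(*) AS sightings
       FROM Device d
       JOIN User u ON u.MacAddress = d.MacAddress
       GROUP BY u.Name, u.MacAddress
       HAVING COUNT(*) >= 2"""
).fetchall()
print(repeat_visitors)
```

The inner join silently discards the unknown MAC address, and `HAVING` keeps only users sighted at least twice.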
My problem here is to find the non-active persons, i.e. those who didn't have any activity after Dec-01-2011 (their last activity is on or before that date). ``` PersonID Activity ---------------------- Alvin Jan-08-2010 Alvin Mar-11-2011 Alvin Feb-11-2015 Simon Nov-20-2010 Simon Jan-23-2011 Simon Jul-03-2011 Simon Nov-04-2011 Theodore Mar-09-2010 Theodore Oct-08-2013 Dave Aug-13-2012 Dave Jun-01-2014 Dave Apr-23-2015 Ian Aug-09-2010 Ian Nov-30-2010 Ian Jan-25-2011 Ian Mar-14-2011 Clare Sep-03-2011 Clare Aug-15-2014 Gale Jun-18-2010 Gale Dec-03-2010 ``` Output: ``` PersonID Activity ---------------------- Simon Nov-20-2010 Simon Jan-23-2011 Simon Jul-03-2011 Simon Nov-04-2011 Ian Aug-09-2010 Ian Nov-30-2010 Ian Jan-25-2011 Ian Mar-14-2011 Gale Jun-18-2010 Gale Dec-03-2010 ``` Desired Output: ``` PersonID --------- Simon Ian Gale ``` The desired result is preferred, as it tells me only which persons are not active.
A simple `GROUP BY` and `HAVING` will do the trick: [**SQL Fiddle**](http://sqlfiddle.com/#!6/690fdc/1/0) ``` SELECT PersonID FROM tbl GROUP BY PersonID HAVING COUNT(CASE WHEN Activity > '20111201' THEN 1 END) = 0 ```
Off the top of my head.. something like this should do it ``` SELECT PersonID FROM TableName GROUP BY PersonID HAVING MAX(Activity) <= '2011-12-01' ```
Finding a person with no Activity before a certain date
[ "sql", "sql-server", "sql-server-2008" ]
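The `MAX(Activity) <= cutoff` idea from the second answer can be verified with SQLite in place of SQL Server, using ISO-8601 date strings (a condensed subset of the question's rows; ISO strings sort correctly as plain text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Activities (PersonID TEXT, Activity TEXT)")
cur.executemany("INSERT INTO Activities VALUES (?, ?)", [
    ("Alvin", "2010-01-08"), ("Alvin", "2015-02-11"),   # active again in 2015
    ("Simon", "2010-11-20"), ("Simon", "2011-11-04"),
    ("Ian",   "2010-08-09"), ("Ian",   "2011-03-14"),
    ("Gale",  "2010-06-18"), ("Gale",  "2010-12-03"),
])

# A person is inactive when their latest activity is on or before the cutoff.
inactive = [r[0] for r in cur.execute(
    """SELECT PersonID FROM Activities
       GROUP BY PersonID
       HAVING MAX(Activity) <= '2011-12-01'
       ORDER BY PersonID"""
)]
print(inactive)
```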
I have 3 tables (stars match the ids from the table before): ``` product: prod_id* prod_name prod_a_id prod_b_id prod_user keywords: key_id** key_word key_prod* key_country data: id dat_id** dat_date dat_rank_a dat_traffic_a dat_rank_b dat_traffic_b ``` I want to run a query (in a function that gets a `$key_id`) that outputs all these columns, but only for the last 2 dates (dat_date) from the 'data' table for the given key_id - so that for every key_word I have the two last dat_dates plus all the other variables included in my SQL query: So... this is what I have so far, and I don't know how to get only the MAX values. I tried using "max(dat_date)" in different ways that didn't work. ``` SELECT prod_id, prod_name, prod_a_id, prod_b_id, key_id, key_word, key_country, dat_date, dat_rank_a, dat_rank_b, dat_traffic_a, dat_traffic_b FROM keywords INNER JOIN data ON keywords.key_id = data.dat_id INNER JOIN prods ON keywords.key_prod = prods.prod_id ``` Is there a possibility to do this with only one query? **EDIT (FOR IgorM):** ``` public function newnew() { $query = $this->db->query('WITH CTE AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY dat_id ORDER BY dat_date ASC) AS RowNo FROM data ) SELECT * FROM CTE INNER JOIN keywords ON keywords.key_id = CTE.dat_id INNER JOIN prods ON keywords.key_prod = prods.prod_id WHERE RowNo < 3 '); $result = $query->result(); return $result; } ``` This is the error on the output: ``` You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CTE AS ( SELECT *, ROW_NUMBER() OVER (' at line 1 WITH CTE AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY dat_id ORDER BY dat_date ASC) AS RowNo FROM data ) SELECT * FROM CTE INNER JOIN keywords ON keywords.key_id = CTE.dat_id INNER JOIN prods ON keywords.key_prod = prods.prod_id WHERE RowNo < 3 ```
If you only want to do this for one key\_id at a time (as alluded to in your responses to other answers) and only want two rows, you can just do: ``` SELECT p.prod_id, p.prod_name, p.prod_a_id, p.prod_b_id, k.key_id, k.key_word, k.key_country, d.dat_date, d.dat_rank_a, d.dat_rank_b, d.dat_traffic_a, d.dat_traffic_b FROM keywords k JOIN data d ON k.key_id = d.dat_id JOIN prods p ON k.key_prod = p.prod_id WHERE k.key_id = :key_id /* Bind in key id */ ORDER BY d.dat_date DESC LIMIT 2; ``` Whether you want this depends on your data structure and whether there is more than one key/prod combination per date. Another option limiting just the data rows would be: ``` SELECT p.prod_id, p.prod_name, p.prod_a_id, p.prod_b_id, k.key_id, k.key_word, k.key_country, d.dat_date, d.dat_rank_a, d.dat_rank_b, d.dat_traffic_a, d.dat_traffic_b FROM keywords k JOIN ( SELECT dat_id, dat_date, dat_rank_a, dat_rank_b, dat_traffic_a, dat_traffic_b FROM data WHERE dat_id = :key_id /* Bind in key id */ ORDER BY dat_date DESC LIMIT 2 ) d ON k.key_id = d.dat_id JOIN prods p ON k.key_prod = p.prod_id; ``` If you want some kind of grouped results for all the keywords, you'll need to look at the other answers.
For SQL ``` WITH CTE AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY dat_id ORDER BY dat_date ASC) AS RowNo FROM data ) SELECT * FROM CTE INNER JOIN keywords ON keywords.key_id = CTE.dat_id INNER JOIN prods ON keywords.key_prod = prods.prod_id WHERE RowNo < 3 ``` For MySQL (not tested) ``` SET @row_number:=0; SET @dat_id = ''; SELECT *, @row_number:=CASE WHEN @dat_id=dat_id THEN @row_number+1 ELSE 1 END AS row_number, @dat_id:=dat_id AS dat_id_row_count FROM data d INNER JOIN keywords ON keywords.key_id = d.dat_id INNER JOIN prods ON keywords.key_prod = prods.prod_id WHERE d.row_number < 3 ``` The other approach is self joining. I don't want to take credit for somebody else's job, so please look on the following example: [ROW\_NUMBER() in MySQL](https://stackoverflow.com/questions/1895110/row-number-in-mysql) Look for the following there: ``` SELECT a.i, a.j, ( SELECT count(*) from test b where a.j >= b.j AND a.i = b.i ) AS row_number FROM test a ```
SQL: How to get cells by 2 last dates from 3 different tables?
[ "mysql", "sql" ]
I have a table like this: ``` DECLARE @T TABLE (note VARCHAR (50)) INSERT @T SELECT 'Amplifier' UNION ALL SELECT ';' UNION ALL SELECT 'Regulator' ``` How can I replace the semicolon (`';'`) with a blank (`''`)? Expected output: ``` Amplifier '' -- here the semicolon is replaced with a blank Regulator ```
If you want to replace ALL semicolons from any outputted cell you can use `REPLACE` like this: ``` SELECT REPLACE(note,';','') AS [note] FROM @T ```
Fetching from the given table, use a `CASE` statement: ``` SELECT CASE WHEN note = ';' THEN '' ELSE note END AS note FROM @T; ``` [replace()](https://msdn.microsoft.com/en-us/library/ms186862.aspx) would replace *all* occurrences of the character. Doesn't seem like you'd want that. This expression only replaces exact matches of the whole string.
Replace ; with blank in SQL
[ "sql" ]
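Both answers (the global `REPLACE` and the exact-match `CASE`) can be compared side by side in SQLite; on this data they coincide, but `REPLACE` would also strip semicolons embedded inside longer strings:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE T (note VARCHAR(50))")
cur.executemany("INSERT INTO T VALUES (?)",
                [("Amplifier",), (";",), ("Regulator",)])

# Exact-match rewrite: only a cell that IS ';' becomes blank.
case_rows = [r[0] for r in cur.execute(
    "SELECT CASE WHEN note = ';' THEN '' ELSE note END FROM T ORDER BY rowid")]

# Global rewrite: every ';' inside every cell is stripped.
replace_rows = [r[0] for r in cur.execute(
    "SELECT REPLACE(note, ';', '') FROM T ORDER BY rowid")]
print(case_rows, replace_rows)
```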
I am currently generating a table which converts row values into new columns; the following is my code: ``` SELECT ref_no, (CASE WHEN code = 1 THEN code END) AS 'count_1', (CASE WHEN code = 2 THEN code END) AS 'count_2', (CASE WHEN code = 3 THEN code END) AS 'count_3', (CASE WHEN code = 4 THEN code END) AS 'count_4', (CASE WHEN code = 5 THEN code END) AS 'count_5', (CASE WHEN code = 6 THEN code END) AS 'count_6' FROM data ``` The output is: [![enter image description here](https://i.stack.imgur.com/ISJgP.png)](https://i.stack.imgur.com/ISJgP.png) However, I need those duplicated rows to be combined; is there any way to do that? I don't need to sum up the values, as there is no overlap among them. I've tried `group by`, but it does not work as expected: [![enter image description here](https://i.stack.imgur.com/hnufT.png)](https://i.stack.imgur.com/hnufT.png) My expected output is like: ``` ref c_1 c_2 c_3 c_4 c_5 c_6 1 1 2 3 - - - ``` This shows that adding an `ORDER BY` clause does not work in my context. [![enter image description here](https://i.stack.imgur.com/uAbGF.png)](https://i.stack.imgur.com/uAbGF.png) Updated: complete query in sqldf [![enter image description here](https://i.stack.imgur.com/GTFTZ.png)](https://i.stack.imgur.com/GTFTZ.png)
The answer is: yes. By using GROUP BY and MAX like this: ``` SELECT ref_no, max(CASE WHEN code = 1 THEN code END) AS 'count_1', max(CASE WHEN code = 2 THEN code END) AS 'count_2', max(CASE WHEN code = 3 THEN code END) AS 'count_3', max(CASE WHEN code = 4 THEN code END) AS 'count_4', max(CASE WHEN code = 5 THEN code END) AS 'count_5', max(CASE WHEN code = 6 THEN code END) AS 'count_6' FROM data GROUP BY ref_no ORDER BY ref_no ```
The easiest would either be to use GROUP BY or a PIVOT function. GROUP BY example below: ``` SELECT ref_no, sum(CASE WHEN code = 1 THEN code ELSE 0 END) AS 'count_1', sum(CASE WHEN code = 2 THEN code ELSE 0 END) AS 'count_2', sum(CASE WHEN code = 3 THEN code ELSE 0 END) AS 'count_3', sum(CASE WHEN code = 4 THEN code ELSE 0 END) AS 'count_4', sum(CASE WHEN code = 5 THEN code ELSE 0 END) AS 'count_5', sum(CASE WHEN code = 6 THEN code ELSE 0 END) AS 'count_6' FROM data GROUP BY ref_no ``` A really long way of doing this using your existing code and a CTE table: ``` WITH results as ( SELECT ref_no, (CASE WHEN code = 1 THEN code END) AS 'count_1', (CASE WHEN code = 2 THEN code END) AS 'count_2', (CASE WHEN code = 3 THEN code END) AS 'count_3', (CASE WHEN code = 4 THEN code END) AS 'count_4', (CASE WHEN code = 5 THEN code END) AS 'count_5', (CASE WHEN code = 6 THEN code END) AS 'count_6' FROM data) SELECT ref_no , sum(coalesce(count_1),0) -- for sum , max(coalesce(count_1),0) -- for just the highest value -- Repeat for other ones FROM results GROUP BY ref_no ```
Remove duplicate rows when using CASE WHEN statement
[ "sql", "sql-server" ]
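The accepted `MAX(CASE ...)` pivot can be verified with SQLite as a stand-in (a minimal three-row version of the question's data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE data (ref_no INTEGER, code INTEGER)")
cur.executemany("INSERT INTO data VALUES (?, ?)", [(1, 1), (1, 2), (1, 3)])

# MAX() picks the single non-NULL value out of each CASE column, and
# GROUP BY collapses the three rows for ref_no 1 into one.
pivoted = cur.execute(
    """SELECT ref_no,
              MAX(CASE WHEN code = 1 THEN code END) AS count_1,
              MAX(CASE WHEN code = 2 THEN code END) AS count_2,
              MAX(CASE WHEN code = 3 THEN code END) AS count_3
       FROM data
       GROUP BY ref_no
       ORDER BY ref_no"""
).fetchall()
print(pivoted)
```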
I have this query that locates all users that have the same email as (memberid) in the membership table and brings back any other users with matching emails ``` select tenant,first,last,memberid,email,ipaddress,password from membership where email IN ( SELECT email FROM membership where memberid = <Parameters.Member ID> GROUP BY email) ``` I also need to be able to bring back a count of all users that have the same password or ipaddress as the current row's user. I have tried using a case statement and cannot get the correct results.
Hogan posted the correct answer before I was done, +1. You can also use a join: ``` select match.email, match.tenant, match.first, match.last , match.memberid, match.ipaddress, match.password , count(match.memberid) OVER (partition by match.ipaddress) as ipCount , count(match.memberid) OVER (partition by match.password) as passwordCount , member.memberid as [member.memberid], member.ipaddress as [member.ipaddress], member.password as [member.password] from membership as match join membership as member on member.email = match.email and member.memberid = <Parameters.Member ID> ```
On many platforms (including SQL Server) you can use a windowing function, which would give a count of the users that have the same password or ipaddress as the current row. If this is what you want: ``` select tenant,first,last,memberid,email,ipaddress,password, count(memberid) OVER (partition by ipaddress) as ipaddress_count, count(memberid) OVER (partition by password) as password_count from membership where email IN ( SELECT email FROM membership where memberid = <Parameters.Member ID> GROUP BY email) ```
Returning count in SQL grouped by two other partitions with an in clause on an id
[ "sql", "sql-server" ]
I have a table with many columns, among which the fields **year**, **folder** and **seq\_no** serve as an identification method for the record. I'd like to assign the same id to those records that have the same combination of these fields, and (if possible) the ids should be sequential, to make good use of the id column values. An example: ``` +-----+-----+------+------+-----+ | id |year |folder|seq_no|count| +=====+=====+======+======+=====+ | 1 |1973 | 5 | 11 | 2 | << 1973, 5, 11 +-----+-----+------+------+-----+ | 2 |2010 | 4 | 7 | 2 | << 2010, 4, 7 +-----+-----+------+------+-----+ | 3 |1973 | 11 | 12 | 1 | << 1973, 11, 12 +-----+-----+------+------+-----+ | 1 |1973 | 5 | 11 | 2 | << 1973, 5, 11 +-----+-----+------+------+-----+ | 4 |1500 | 4 | 7 | 1 | << 1500, 4, 7 +-----+-----+------+------+-----+ | 2 |2010 | 4 | 7 | 2 | << 2010, 4, 7 +-----+-----+------+------+-----+ ``` However, I'd prefer that the id is not assigned by calculation in the PHP part, but that the table itself checks on every new entry whether this entry has the same combination as other ones. If yes, assign the same id. If not, assign the next available id. Also, I would like to count for each row how many records there are with the same id, and this should also be done automatically. I was thinking of using triggers or functions ... not sure how to do that.
I would not try to store such IDs and especially COUNT in the table. Imagine, that in your example you want to insert one more row with ``` +-----+------+------+ |year |folder|seq_no| +-----+------+------+ |1973 | 5 | 11 | +-----+------+------+ ``` The server would have to find all existing rows with the same combination and update them with the new value of COUNT. Each `INSERT`, `UPDATE` and `DELETE` becomes really expensive. This kind of information can be calculated when needed with `DENSE_RANK` and `COUNT`: ``` SELECT year ,folder ,seq_no ,DENSE_RANK() OVER(ORDER BY year, folder, seq_no) AS ID ,COUNT(*) OVER(PARTITION BY year, folder, seq_no) AS cnt FROM YourTable ```
Do it at query time: ``` with t (year, folder, seq_no) as (values (1973,5,11), (2010,4,7), (1973,11,12), (1973,5,11), (1500,4,4), (2010,4,7) ) select dense_rank() over (order by year, folder, seq_no) as id, year, folder, seq_no, count(*) over (partition by year, folder, seq_no) as "count" from t ; id | year | folder | seq_no | count ----+------+--------+--------+------- 1 | 1500 | 4 | 4 | 1 2 | 1973 | 5 | 11 | 2 2 | 1973 | 5 | 11 | 2 3 | 1973 | 11 | 12 | 1 4 | 2010 | 4 | 7 | 2 4 | 2010 | 4 | 7 | 2 ```
Assign same id to rows with same combination of data
[ "sql", "postgresql" ]
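The accepted query-time approach runs unchanged on SQLite 3.25+ (window-function support required), so it can be exercised without a Postgres instance; a sketch with the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (year INTEGER, folder INTEGER, seq_no INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1973, 5, 11), (2010, 4, 7), (1973, 11, 12),
    (1973, 5, 11), (1500, 4, 4), (2010, 4, 7),
])

# DENSE_RANK numbers distinct (year, folder, seq_no) combinations 1, 2, 3...
# with no gaps; COUNT(*) OVER gives each combination's population.
rows = cur.execute(
    """SELECT DENSE_RANK() OVER (ORDER BY year, folder, seq_no) AS id,
              year, folder, seq_no,
              COUNT(*) OVER (PARTITION BY year, folder, seq_no) AS cnt
       FROM t
       ORDER BY id"""
).fetchall()
print(rows)
```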
I just hit a wall with my SQL query fetching data from my MS SQL Server. To simplify, say I have one table for sales and one table for customers. They each have a corresponding userId which I can use to join the tables. I wish to first SELECT from the sales table where, say, price is equal to 10, and then join it on the userId, in order to get access to the name, address, etc. from the customer table. In which order should I structure the query? Do I need some sort of subquery, or what do I do? I have tried something like this ``` SELECT * FROM Sales WHERE price = 10 INNER JOIN Customers ON Sales.userId = Customers.userId; ``` Needless to say this is very simplified and not my database schema, yet it explains my problem simply. Any suggestions? I am at a loss here.
A `SELECT` has a certain order to its components. In the simple form this is: * What do I select: column list * From where: table name and joined tables * Are there filters: WHERE * How to sort: ORDER BY So most likely it is enough to change your statement to: ``` SELECT * FROM Sales INNER JOIN Customers ON Sales.userId = Customers.userId WHERE price = 10; ```
The `WHERE` clause must follow the joins: ``` SELECT * FROM Sales INNER JOIN Customers ON Sales.userId = Customers.userId WHERE price = 10 ``` This is simply the way SQL syntax works. You seem to be trying to put the clauses in the order that you think they should be applied, but SQL is a declarative languages, not a procedural one - you are defining what you want to occur, not how it will be done. You could also write the same thing like this: ``` SELECT * FROM ( SELECT * FROM Sales WHERE price = 10 ) AS filteredSales INNER JOIN Customers ON filteredSales.userId = Customers.userId ``` This may seem like it indicates a different order for the operations to occur, but it is logically identical to the first query, and in either case, the database engine may determine to do the join and filtering operations in either order, as long as the result is identical.
Specifying SELECT, then joining with another table
[ "sql", "sql-server" ]
What mistake am I making? ``` $ sqlite3 test.db SQLite version 3.8.5 2014-08-15 22:37:57 Enter ".help" for usage hints. sqlite> create table t (s text not null, i integer); sqlite> select * from t where s="somestring"; /* works */; sqlite> select * from t where i=0; /* works */; sqlite> select * from t where s="somestring" and where i=0; Error: near "where": syntax error ```
You don't need to specify WHERE twice in this query: ``` sqlite> select * from t where s="somestring" and i=0; ``` should be enough.
try ``` select * from t where s="somestring" and i=0; ``` instead of ``` select * from t where s="somestring" and where i=0; ```
sqlite: multiple where clause syntax: where s="somestring" and where i=0;
[ "sql", "sqlite", "android-sqlite" ]
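For completeness, the corrected statement as a runnable `sqlite3` snippet. Note the single quotes in the literal: in SQL, `'...'` is a string literal, while `"..."` is an identifier that SQLite only falls back to treating as a string when no such column exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (s TEXT NOT NULL, i INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [("somestring", 0), ("somestring", 1), ("other", 0)])

# One WHERE clause; the conditions are chained with AND.
rows = cur.execute(
    "SELECT s, i FROM t WHERE s = 'somestring' AND i = 0"
).fetchall()
print(rows)
```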
I'm trying to create a view in PostgreSQL, but when I run this code, this error appears: > syntax error at or near " THEN " ``` CREATE OR REPLACE VIEW VW_MONITOR_DEVICE AS SELECT P.POSIZIONE_DEVICE_ID AS MONITOR_DEVICE_ID, P.VALID AS VALID, [...] IF (VALID == FALSE THEN 'Valid' ELSE P.REASON_FOR_INVALID) AS DESCRIPTION, [...] FROM public.TA_POSIZIONI_DEVICE P JOIN ... ``` TA\_POSIZIONI\_DEVICE * VALID (Boolean not null)
You should use [CASE](http://www.postgresql.org/docs/9.5/static/functions-conditional.html) > The SQL **CASE** expression is a generic conditional expression, similar > to **if/else** statements in other programming languages ``` CASE WHEN condition THEN result [WHEN ...] [ELSE result] END ``` So, ``` CREATE OR REPLACE VIEW VW_MONITOR_DEVICE AS SELECT P.POSIZIONE_DEVICE_ID AS MONITOR_DEVICE_ID, P.VALID AS VALID, [...] CASE WHEN VALID = false THEN 'Valid' ELSE P.REASON_FOR_INVALID END AS DESCRIPTION, [...] FROM public.TA_POSIZIONI_DEVICE P JOIN ... ```
you can use case ``` case when VALID = FALSE THEN 'Valid' ELSE P.REASON_FOR_INVALID end DESCRIPTION, ```
Postgresql Syntax error at or near " THEN "
[ "sql", "postgresql", "syntax-error", "create-view" ]
I've selected table **A1**, joined **A2** to **A1**, and **A3** to **A2**. **A1**.level and **A2**.level are always not null; a NULL **A3**.level means that the joined row is missing. **A1**.level is bigger than **A2**.level, and **A3**.level is smaller than **A2**.level or is set to NULL. So I need a result like this: ``` ╔══════════╦══════════╦══════════╗ ║ A1.level ║ A2.level ║ A3.level ║ ╠══════════╬══════════╬══════════╣ ║ 3 ║ 2 ║ 1 ║ ║ 14 ║ 10 ║ 5 ║ ║ 15 ║ 13 ║ (NULL) ║ ╚══════════╩══════════╩══════════╝ ``` I've tried to write a statement like this ``` SELECT A1.level, A2.level, A3.level FROM A1, LEFT JOIN A2 ON A1.parentID = A2.id LEFT JOIN A3 ON A2.parentID = A3.id WHERE A1.level > A2.level AND A2.level > A3.level OR A3.level IS NULL ``` but it doesn't work. How do I write an IF (or CASE) statement for this? Thanks
You need to add parentheses: ``` SELECT A1.level, A2.level, A3.level FROM A1 JOIN A2 -- no need for LEFT JOIN ON A1.parentID = A2.id LEFT JOIN A3 ON A2.parentID = A3.id WHERE (A1.level > A2.level) AND (A2.level > A3.level OR A3.level IS NULL) ``` Otherwise the order of precedence is: ``` NOT - AND - OR ```
Use parenthesis; ``` SELECT A1.level, A2.level, A3.level FROM A1, LEFT JOIN A2 ON A1.parentID = A2.id LEFT JOIN A3 ON A2.parentID = A3.id WHERE (A1.level is not null and A2.level is not null) and --A1.level and A2.level always not null (A1.level > A2.level) and --A1.level is bigger then A2.level. (A2.level > A3.level OR A3.level IS NULL) --A3.level is smaller than A2.level or setted to (NULL) ```
SQL: whats wrong with this "where if" query?
[ "mysql", "sql" ]
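The precedence point in the accepted answer (AND binds tighter than OR) can be demonstrated with SQLite on a flattened stand-in table; the level triples below are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# One row per (l1, l2, l3) triple, mimicking the joined result.
cur.execute("CREATE TABLE lv (l1 INTEGER, l2 INTEGER, l3 INTEGER)")
cur.executemany("INSERT INTO lv VALUES (?, ?, ?)",
                [(3, 2, 1), (15, 13, None), (5, 9, None)])

# Without parentheses, AND binds tighter than OR, so the (5, 9, NULL) row
# slips through on "l3 IS NULL" even though l1 > l2 is false.
loose = cur.execute(
    "SELECT * FROM lv WHERE l1 > l2 AND l2 > l3 OR l3 IS NULL "
    "ORDER BY l1").fetchall()

strict = cur.execute(
    "SELECT * FROM lv WHERE l1 > l2 AND (l2 > l3 OR l3 IS NULL) "
    "ORDER BY l1").fetchall()
print(len(loose), len(strict))
```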
I'm building a system that should show when students missed two days in a row. For example, this table contains the absences: ``` day | id | missed ---------------------------------- 2016-10-6 | 1 | true 2016-10-6 | 2 | true 2016-10-6 | 3 | false 2016-10-7 | 1 | true 2016-10-7 | 2 | false 2016-10-7 | 3 | true 2016-10-10 | 1 | false 2016-10-10 | 2 | true 2016-10-10 | 3 | true ``` > (days 2016-10-8 and 2016-10-9 are a weekend) In the case above: * student 1 missed the 1st and 2nd days (consecutive) * student 2 missed the 1st and 3rd days (nonconsecutive) * student 3 missed the 2nd and 3rd days (consecutive) The query should select only students 1 and 3. Is it possible to do this with a single SQL query?
Use an inner join to connect two instances of the table: one with the 'first' day and one with the 'second' day. Then just look for rows where both are missed: ``` select a.id from yourTable as a inner join yourTable as b on a.id = b.id and a.day = b.day-1 where a.missed = true and b.missed = true ``` **EDIT** Now that you changed the rules... and made the day column a date and not an int, this is what I'll do: 1. Use the [DAYOFWEEK()](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_dayofweek) function to convert a day to a number 2. Filter out weekends 3. Use modulo to get Sunday as the next day after Thursday: ``` select a.id from yourTable as a inner join yourTable as b on a.id = b.id and DAYOFWEEK(a.day) % 5 = DAYOFWEEK(b.day-1) % 5 where a.missed = true and b.missed = true and DAYOFWEEK(a.day) < 6 and DAYOFWEEK(b.day) < 6 ```
similar approach as other answers, but different syntax ``` select distinct id from t where missed=true and exists ( select day from t as t2 where t.id=t2.id and t.day+1=t2.day and t2.missed=true ) ```
How to select where there are 2 consecutives rows with a specific value using MySQL?
[ "mysql", "sql", "select", "group-by" ]
I have an SQL query that returns the following values: ``` BC - Worces BC Bristol BC Central BC Torquay BC-Bath BC-Exeter BC-Payroll ``` So, we have some BC with just a space, some with a dash and some with a dash with spaces on either side. When returning these values, I want to replace any of these BC variants with "Business Continuity: " followed by Bath, or Exeter etc. Is there a way of checking what value is returned and (I'm assuming in a separate column) returning a field based on it? If every iteration was the same, I could just use Trim, but it's the variation that's throwing me out.
You could use a CASE in the SELECT: ``` CASE WHEN Left(`colname`, 5) = 'BC - ' THEN CONCAT('Business Continuity: ', SUBSTRING(`colname`, 6)) WHEN Left(colname, 3) = 'BC ' THEN CONCAT('Business Continuity: ', SUBSTRING(`colname`, 4)) WHEN Left(`colname`, 3) = 'BC-' THEN CONCAT('Business Continuity: ', SUBSTRING(`colname`, 4)) ELSE `colname` END as `colname` ```
You can use [`REPLACE`](http://dev.mysql.com/doc/refman/5.7/en/replace.html) function along with `CASE` statement for this. ``` SELECT CASE WHEN `col_name` LIKE 'BC %' THEN REPLACE(`col_name`, 'BC ', 'Business Continuity: ') WHEN `col_name` LIKE 'BC-%' THEN REPLACE(`col_name`, 'BC-', 'Business Continuity: ') ELSE `col_name` END as `col_name` FROM `table_name`; ```
Check for a substring in MySQL and return a value based on it
[ "mysql", "sql", "string", "substring" ]
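The same prefix rewriting can be sketched in SQLite, with `||` and `SUBSTR` substituted for MySQL's `CONCAT`/`SUBSTRING` (the substitution is mine; branch order matters, longest prefix first):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE depts (name TEXT)")
cur.executemany("INSERT INTO depts VALUES (?)",
                [("BC - Worces",), ("BC Bristol",), ("BC-Bath",)])

# Test 'BC - ' (5 chars) before the shorter 'BC ' and 'BC-' prefixes,
# otherwise 'BC - Worces' would be rewritten as 'Business Continuity: - Worces'.
cleaned = [r[0] for r in cur.execute(
    """SELECT CASE
                WHEN name LIKE 'BC - %' THEN 'Business Continuity: ' || SUBSTR(name, 6)
                WHEN name LIKE 'BC %'   THEN 'Business Continuity: ' || SUBSTR(name, 4)
                WHEN name LIKE 'BC-%'   THEN 'Business Continuity: ' || SUBSTR(name, 4)
                ELSE name
              END
       FROM depts
       ORDER BY rowid""")]
print(cleaned)
```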
I've got two tables, A and B. A one-to-many relationship exists between A and B; A\_Id is a foreign key. ``` Create table A(Id int,Name varchar(50)) create table B(Id int,A_Id int,Title varchar(50)) insert into A Values(1,'name1'); insert into A Values(2,'name2'); insert into A Values(3,'name3'); insert into A Values(4,'name4'); insert into B Values(10,1,'title1'); insert into B Values(11,1,'title5'); insert into B Values(12,2,'title2'); insert into B Values(13,2,'title6'); insert into B Values(14,3,'title3'); ``` I need to fetch records from table A and the title from table B for each matched record. If more than one value exists in table B, then I need to select the record with the max Id (table B). For example: there are two records in table B for A\_Id 1. I need to select the row from table A and 'title5' from table B for matching records. I tried ``` SELECT A.*, B.Title FROM A JOIN B ON A.Id = B.A_Id ```
You can use a derived table that uses `ROW_NUMBER` to enumerate records within `A_Id` partitions: ``` select A.Id, A.Name, B.Title from A inner join ( select A_Id, Title, ROW_NUMBER() OVER (PARTITION BY A_Id ORDER BY Id DESC) AS rn from B ) AS B on A.Id = B.A_Id and B.rn = 1 ``` The record of derived table `B` with `B.rn = 1` is the one having the maximum `Id` value within its partition and is the one being used in the `INNER JOIN` operation. [**Demo here**](http://sqlfiddle.com/#!6/ce67b/1)
This may not be very performant, but it surely works :) SELECT a.*, b.title FROM a, b WHERE a.id = b.a_id AND b.id = (SELECT MAX(b.id) FROM b WHERE A_Id = a.id)
sql select from one to many
[ "", "sql", "join", "greatest-n-per-group", "" ]
I am taking a Coursera course on SQL and there is one line of code I cannot understand. What does 'hex(name || age)' mean? I know it turns the string into hexadecimal format using the hex() function, but what does 'name || age' do? I cannot find any documentation about the '||' operator.
`||` is the SQLite concatenation operator. So `hex(name || age)` will pass a concatenated string of `name` and `age` into the `hex()` function. From the SQLite [documentation](https://www.sqlite.org/lang_corefunc.html): > The hex() function interprets its argument as a BLOB and returns a string which is the upper-case hexadecimal rendering of the content of that blob.
The [documentation](http://www.sqlite.org/lang_expr.html#collateop) says: > The **||** operator is "concatenate" - it joins together the two strings of its operands.
SELECT hex(name || age) AS X FROM Ages ORDER BY X
[ "", "sql", "sqlite", "hex", "" ]
I have a table and I am trying to get the first person in the table with gender = 'M' and the first person with gender = 'F' First person in this case = ORDER BY name in alphabetical order ``` name | gender | .......other data A M B M C F D F E F M G F ``` **How do I get a result table with the first instance of 'M' , 'F' without the null/empty column?** Ideal result: ``` name | gender | ........other data A M C F ``` Thanks for the help!
You can use the `row_number()` window function for that, like this (note that PostgreSQL requires an alias on the subquery): ``` SELECT name, gender FROM ( SELECT name, gender, row_number() OVER (PARTITION BY gender ORDER BY name ASC) AS rnk FROM YourTable) AS t WHERE rnk = 1 ``` You can add your other columns after the gender if you want.
``` SELECT name, gender FROM your table WHERE gender = "M" ORDER BY NAME Fetch first row only Union all SELECT name, gender FROM your table WHERE gender = "F" ORDER BY NAME Fetch first row only ```
How to get the first instance of a value partitioned by another column?
[ "", "sql", "postgresql", "" ]
Raw sql query: ``` SELECT * FROM (SELECT p.id, p.title, p.mark, (SELECT max(created) FROM comments c WHERE c.post_id=p.id AND c.mark=1) AS latest_at FROM posts p) AS Post WHERE Post.latest_at IS NOT NULL ORDER BY latest_at DESC LIMIT 10 ``` I'm trying to write equivalent sql query with join. How can i do it? ``` mysql> describe posts; +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | user_id | int(11) | NO | | NULL | | | title | varchar(255) | NO | | NULL | | | body | text | YES | | NULL | | | category_id | int(11) | YES | | NULL | | | tags | varchar(50) | YES | | NULL | | | mark | tinyint(4) | NO | | 1 | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-------------+--------------+------+-----+---------+----------------+ mysql> describe comments; +----------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | post_id | int(11) | NO | MUL | NULL | | | name | varchar(255) | NO | | NULL | | | email | varchar(255) | NO | | NULL | | | body | varchar(500) | NO | | NULL | | | mark | tinyint(4) | NO | | 1 | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------+--------------+------+-----+---------+----------------+ ``` Any answer will be highly appreciated. Thanks in advance.
Try this: ``` SELECT p.id, p.title, p.mark, c.latest_at FROM posts p LEFT JOIN ( SELECT post_id, MAX(created) AS latest_at FROM comments WHERE mark = 1 GROUP BY post_id ) AS c ON c.post_id = p.id WHERE c.latest_at IS NOT NULL ORDER BY c.latest_at DESC LIMIT 10 ``` or just this: ``` SELECT p.id, p.title, p.mark, c.latest_at FROM posts p INNER JOIN ( SELECT post_id, MAX(created) AS latest_at FROM comments WHERE mark = 1 GROUP BY post_id ) AS c ON c.post_id = p.id ORDER BY c.latest_at DESC LIMIT 10 ``` because the `c.latest_at IS NOT NULL` predicate of the `WHERE` clause of the first query turns `LEFT JOIN` into an `INNER JOIN`.
I would suggest that: ``` SELECT p.id, p.title, p.mark, c.latest_at FROM posts p LEFT OUTER JOIN (select post_id, max(created) latest_at from comments where mark=1 group by post_id ) c ON (c.post_id=p.id) WHERE c.latest_at IS NOT NULL ORDER BY c.latest_at DESC LIMIT 10; ```
How can I write an equivalent SQL query with a join?
[ "", "mysql", "sql", "" ]
I'm trying to load data from a csv file into an SQL table. My DDL: ``` CREATE TABLE pcm.dbo.partitiondocumentcount ( partitionkey NVARCHAR(30) NOT NULL, documentcount INT NOT NULL, datetime DATETIME2(3) DEFAULT CURRENT_TIMESTAMP, CONSTRAINT pk_partitiondocumentcount PRIMARY KEY (partitionkey ASC) ) CREATE NONCLUSTERED INDEX partitionkey_index ON pcm.dbo.partitiondocumentcount (partitionkey ASC) ``` My file (I also tried without the quote marks; that didn't work either): ``` "partition-1",1 "partition-2",1 ``` My query: ``` BULK INSERT partitionDocumentCount FROM 'C:\files\pcmInitialConfiguration\partitionCount.csv' WITH( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) GO SELECT * FROM partitionDocumentCount ``` The error I get from DBVisualizer: ``` 15:36:26 [BULK - 0 row(s), 0.008 secs] [Error Code: 4864, SQL State: S0001] Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 2 (documentcount). 15:36:26 [SELECT - 0 row(s), 0.004 secs] Empty result set fetched ... 2 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.012/0.000 sec [0 successful, 1 warnings, 1 errors] ```
The solution was simply to add a column. I guess since my table had three columns, it expected three in the file. Here's the working file (the trailing comma gives each row an empty third field): ``` partition-1,1, partition-2,1, ```
I see one problem in your approach, but I'm not sure it would generate that error. The table has three columns but your data only has two. Even with a `default` constraint, `bulk insert` still looks for the third column. It might get that error because it is looking for a comma but encounters an end-of-line. The solution to that is to use a view: ``` create view pcm.dbo.partitiondocumentcount_2 as select partitionkey, documentcount from pcm.dbo.partitiondocumentcount; ``` Then: ``` BULK INSERT partitionDocumentCount_2 FROM 'C:\files\pcmInitialConfiguration\partitionCount.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ); ```
Bulk load data conversion error
[ "", "sql", "sql-server", "" ]
I have a table like this, for example. I need to select each unique value of the ver column and calculate the total number of rows for each one; I think each total should be added as a new column. ``` table: id name ver 1 one 5 2 two 5 3 three 6 4 four 6 5 five 8 ``` The SELECT should return something like this: ``` id ver total 1 5 2 2 6 2 4 8 1 ```
**Query** ``` select MIN(id) as id, ver, COUNT(ver) as total from your_table_name group by ver; ```
I think what you are looking for is: ``` select min(id), ver, count(id) from table group by ver; ```
How to select rows with a unique value of a column and calculate the count?
[ "", "mysql", "sql", "" ]
This is for SQL Server 2008 R2; I'm a novice at SQL, so please be as specific as you can. `Table1` has some recursive structure built into it, where `ParentId` is either `Null`, meaning the row is a root, or the `Id` of another row in `Table1`, which denotes it as a child. Example data set: ``` Table1Id ParentId -------------------------------------------- 1 NULL 2 1 3 1 4 2 5 NULL 6 2 7 6 8 NULL 9 8 ``` With the above example the table then has the following tree structure with 3 root nodes: ``` Root 1 5 8 Child(tier1) 2 3 9 Child(tier2) 4 6 Child(tier3) 7 .... ``` Is there a way to return only the Root row given any of the row Ids? For example: ``` InputId ReturnedRowId ---------------------------- 1 1 2 1 3 1 4 1 5 5 6 1 7 1 8 8 9 8 ``` Any help would be appreciated.
You can use a CTE and traverse the hierarchy ``` IF OBJECT_ID('tempdb..#testData') IS NOT NULL DROP TABLE #testData CREATE TABLE #testData ( Table1Id INT ,ParentId INT NULL ) INSERT INTO #testData ( Table1Id, ParentId ) VALUES (1, NULL ) ,(2, 1 ) ,(3, 1 ) ,(4, 2 ) ,(5, NULL ) ,(6, 2 ) ,(7, 6 ) ,(8, NULL ) ,(9, 8 ) DECLARE @InputId INT SET @InputId = 2 --<<--Change this as appropriate ;WITH cteTraverse AS ( SELECT T.Table1Id, T.ParentId FROM #testData T WHERE Table1Id = @InputId UNION ALL SELECT T1.Table1Id, T1.ParentId FROM #testData T1 INNER JOIN cteTraverse T2 ON T1.Table1Id = T2.ParentId ) SELECT @InputId '@InputId', Table1Id 'ReturnedRowId' FROM cteTraverse WHERE ParentId IS NULL ```
This query does the job. ``` with CTE as ( Select Table1ID as ID, Table1ID as Ancestor, 0 as level from Table1 UNION ALL Select ID, ParentID, level + 1 from Table1 inner join CTE on CTE.Ancestor = Table1.Table1ID where ParentID is not NULL ) , R_only as ( Select ID as ID, MAX(level) as max_level from CTE group by ID ) select CTE.ID, Ancestor from CTE inner join R_only on CTE.ID = R_only.ID and CTE.level = R_only.max_level order by CTE.ID ```
How to use recursive logic to return only the Root row in a sql table (SQL Server 2008 R2)
[ "", "sql", "sql-server-2008-r2", "recursive-query", "" ]
I can't find a solution for splitting a comma-delimited string in Oracle. I have searched a lot, but nothing works for my case. Code: ``` DECLARE TYPE T_ARRAY_OF_VARCHAR IS TABLE OF VARCHAR2(2000) INDEX BY BINARY_INTEGER; MY_ARRAY T_ARRAY_OF_VARCHAR; MY_STRING VARCHAR2(2000) := '12 3,456,,abc,def'; BEGIN FOR CURRENT_ROW IN ( with test as (select MY_STRING from dual) select regexp_substr(MY_STRING, '[^,]+', 1, rownum) SPLIT from test connect by level <= length (regexp_replace(MY_STRING, '[^,]+')) + 1) LOOP DBMS_OUTPUT.PUT_LINE('>' || CURRENT_ROW.SPLIT || '<'); --DBMS_OUTPUT.PUT_LINE(CURRENT_ROW.SPLIT); MY_ARRAY(MY_ARRAY.COUNT) := CURRENT_ROW.SPLIT; END LOOP; DBMS_OUTPUT.PUT_LINE('Array Size:' || MY_ARRAY.COUNT); END; ``` / The output is: ``` >12 3< >456< >abc< >def< >< Array Size:5 ``` The empty value is out of order!
Try this for the list-parsing part. It handles NULLs: ``` SQL> select regexp_substr('12 3,456,,abc,def', '(.*?)(,|$)', 1, level, null, 1) SPLIT, level from dual connect by level <= regexp_count('12 3,456,,abc,def',',') + 1 ORDER BY level; SPLIT LEVEL ----------------- ---------- 12 3 1 456 2 3 abc 4 def 5 SQL> ``` Unfortunately, when you search for regexes for parsing lists, you will always find this form, which does NOT handle nulls and should be avoided: `'[^,]+'`. See here for more info: [Split comma separated values to columns in Oracle](https://stackoverflow.com/questions/31464275/split-comma-separated-values-to-columns-in-oracle/31464699#31464699).
Try xmltable with a FLWOR expression. The following example is not robust (it throws an error if you pass a string without a comma), but it is simpler to understand. ``` select xmlcast(column_value as varchar2(2000)) value_list from xmltable('for $val in ora:tokenize($strList,",") return $val' passing '12 3,456,,abc,def' as "strList" ); ``` And a safer version: ``` select xmlcast(column_value as varchar2(2000)) value_list from xmltable('for $val at $index in ora:tokenize(concat(",",$strList),",") where $index > 1 return $val' passing '12 3,456,,abc,def' as "strList" ); ```
Oracle- Split string comma delimited (string contains spaces and consecutive commas)
[ "", "sql", "oracle", "plsql", "split", "" ]
I'm able to join three tables together, but I can't figure out a way to save the resulting table. The join statement I am using is: ``` SELECT * FROM Table1 INNER JOIN Table2 ON Table1.id = Table2.id INNER JOIN Table3 ON Table1.id = Table3.id ``` I've tried INSERT INTO, SELECT INTO, etc., and I am still unable to find a way to save the following query as a table. This must be possible, but I can't figure it out! id is a shared identifier for all tables.
``` SELECT Table1.*, Table2.OtherColumn, Table3.AnotherColumn, Table3.OneMoreColumn INTO TablesAsOf_20160215 FROM Table1 INNER JOIN Table2 ON Table1.id = Table2.id INNER JOIN Table3 ON Table1.id = Table3.id ```
The problem you are having is that you cannot create a new table with the same column names. Since your tables share some of the same fields, you have to rename those when inserting by aliasing them. Here's a quick example: ``` select t1.id as t1id, t2.id as t2id, t3.id as t3id into Table4 from Table1 t1 join Table2 t2 on t1.id = t2.id join Table3 t3 on t1.id = t3.id ``` This would create a new table, `Table4`, with 3 columns, `t1id`, `t2id` and `t3id`. Add as many columns as you need at this point.
Save the results of a join statement? SQL
[ "", "sql", "t-sql", "" ]
I have the three tables following in my SQL Server 2008 database: * Cars * Drivers * UnavailableDrivers What I want is to show unavailableDrivers to the end users if they set @isAvailable = 0. On the other hand, if isAvailable=1, then the end user should see only available drivers. Finally, if isAvailable is NULL then the user should see all the drivers. ``` declare @isAvailable bit; ``` I need to write a query like that: ``` Select * From Cars c Inner join Drivers d on (c.driverId = d.Id) CASE WHEN @isAvailable = 0 THEN inner join UnavailableDrivers uc on (c.driverId= uc.Id) WHEN @isAvailable = 1 THEN inner join UnavailableDrivers uc on (c.driverId != uc.Id) ELSE @IsAvailable END -- ELSE -> basically DO NOT JOIN UnavailableDrivers ``` It is giving incorrect syntax error and I couldn't find the right syntax for a few hours unfortunately. Actually I don't feel the query is right either. So any help to fix my query on this logic would be appreciated!
The following query does what you want: ``` SELECT * FROM Cars c INNER JOIN Drivers d ON (c.driverId = d.Id) WHERE (@isAvailable IS NULL) -- shows all drivers if isAvailable is null OR (@isAvailable = 0 AND c.driverId IN (SELECT uc.id FROM UnavailableDrivers uc)) -- matches UnavailableDrivers only if isAvailable equals 0 OR (@isAvailable = 1 AND c.driverId NOT IN (SELECT uc.id FROM UnavailableDrivers uc)) -- excludes UnavailableDrivers if isAvailable equals 1 ```
Terrible approach, but you asked for it ``` USE [DB] GO CREATE PROCEDURE [GetDrivers] @isAvailable bit AS BEGIN IF (@isAvailable = 0) BEGIN Select * From Cars c Inner join Drivers d on (c.driverId = d.Id) inner join UnavailableDrivers uc on (c.driverId= uc.Id) END ELSE begin Select * From Cars c Inner join Drivers d on (c.driverId = d.Id) inner join UnavailableDrivers uc on (c.driverId!= uc.Id) end END ```
Joining a table (or not) based on a condition in SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table with the following schema in my DB2 database. ``` CREATE TABLE IDN_OAUTH_CONSUMER_APPS ( CONSUMER_KEY VARCHAR (255) NOT NULL, CONSUMER_SECRET VARCHAR (512), USERNAME VARCHAR (255), TENANT_ID INTEGER DEFAULT 0, APP_NAME VARCHAR (255), OAUTH_VERSION VARCHAR (128), CALLBACK_URL VARCHAR (1024), GRANT_TYPES VARCHAR (1024) ) / ``` I need to add a new column ID of type INTEGER NOT NULL with auto-increment, and make it the primary key. How can I do that without deleting the table?
I could do this successfully using the following set of queries. ``` ALTER TABLE IDN_OAUTH_CONSUMER_APPS ADD COLUMN ID INTEGER NOT NULL DEFAULT 0 CREATE SEQUENCE IDN_OAUTH_CONSUMER_APPS_SEQUENCE START WITH 1 INCREMENT BY 1 NOCACHE CREATE TRIGGER IDN_OAUTH_CONSUMER_APPS_TRIGGER NO CASCADE BEFORE INSERT ON IDN_OAUTH_CONSUMER_APPS REFERENCING NEW AS NEW FOR EACH ROW MODE DB2SQL BEGIN ATOMIC SET (NEW.ID) = (NEXTVAL FOR IDN_OAUTH_CONSUMER_APPS_SEQUENCE); END REORG TABLE IDN_OAUTH_CONSUMER_APPS UPDATE IDN_OAUTH_CONSUMER_APPS SET ID = IDN_OAUTH_CONSUMER_APPS_SEQUENCE.NEXTVAL ``` And then add the primary key using ALTER TABLE.
I recommend using this approach. It does not require creating any satellite objects - no triggers, sequences, etc... ``` alter table test.test2 add column id integer not null default 0; alter table test.test2 alter column id drop default; alter table test.test2 alter column id set generated always as identity; call sysproc.admin_cmd ('reorg table test.test2'); update test.test2 set id = default; commit; ``` If using "db2" cli then the reorg command may be run directly without the "call sysproc.admin\_cmd" wrapper.
DB2 add auto increment column to an existing table
[ "", "sql", "db2", "auto-increment", "dbmigrate", "" ]
Really this is two related questions **Question:** Why would a query run faster with a function in a select clause than in the where clause **Question:** Why would an inline view take longer when moving the where clause from the inline view to the outer query. I'm not going to dump the entire query since it has columns and tables related to my work, but this is basically it. If you need a working example I will write a SQLFiddle that is similar to what I'm doing. **Run Time: 117s, Returns: 93 records** ``` SELECT COL1, COL2, COL3 FROM my_table WHERE [CONDITIONS...] and my_package.my_function(:bind_var, COL1) = 'Y'; ``` If I were to run the function by itself with what would be the bind var and what would be one of the COL1 values it would take .06s. So, ``` SELECT my_package.my_function(VAL1, VAL2) FROM DUAL; ``` So I rewrote the query like this: ``` SELECT * FROM ( SELECT COL1, COL2, COL3 FROM my_table WHERE [CONDITIONS...] ) temp_tbl WHERE my_package.my_function(:bind_var, COL1) = 'Y'; ``` **Run Time: 116s, Returns: 93 records** The query without the function takes ~3 seconds to run, but it doesn't make sense that a function that takes .06s for 93 records would take ~116s to run. I tried seeing what happens if I moved the function to the SELECT clause. ``` SELECT * FROM ( SELECT COL1, COL2, COL3, my_package.my_function(:bind_var, COL1) as fn_indc FROM my_table WHERE [CONDITIONS...] ) temp_tbl WHERE fn_indc = 'Y'; ``` When I run the inline view query it takes ~3 seconds to run. When I add the `WHERE fn_indc = 'Y';` it takes ~116 seconds to run. Why would moving the function from the WHERE to the SELECT matter? Comparison of CHAR does not take that long to perform. Also, if I made an inline view that retrieved the value from the function and performed my where conditions in the outer query, what would cause this to run longer?
How many times is the function being executed in each case? Without seeing query plans, I would wager that the query runs quickly when the other predicates are evaluated first, paring the result set down as much as possible before the function is called. When the function is only called 93 times (plus however many additional executions are required for the rows that aren't eliminated by any other predicate) the query runs quickly. On the other hand, if the function is called earlier in the query plan, it will be called many more times, potentially once for every row in the table, and the query will return much more slowly. You could validate this by looking at the query plans or using some instrumentation to measure exactly how many times the function is called in the different cases. The Oracle optimizer is free to evaluate predicates in whatever order it deems appropriate based on statistics. It is possible that rewriting a query will cause the optimizer to choose a different plan that is better or worse. But tomorrow, the optimizer is perfectly free to change its mind and to use the slower plan for any of the variants that you posted. Of course, Murphy being the law of the land, the optimizer is likely to wait for the worst possible time to decide to flip the query plan on you when it will cause you the most pain and suffering. If the optimizer thinks that both the fast plan and the slow plan are roughly equally costly, that probably implies that it thinks that the function is either much less expensive to evaluate than it actually is or much more selective than it actually is. The best way to correct that mistaken belief is to [associate statistics](https://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_4006.htm) with the function. This lets you tell the optimizer how expensive the function is and how selective it is. That, in turn, lets the optimizer make better estimates and makes it likely that it will pick the more efficient plan regardless of how you write the query (and makes it much less likely that the plan will change for the worse in the future). Now, you can also cheat a bit by writing the query in a way that prevents the optimizer from merging the predicate, either by using hints or by putting something in the inline view that prevents the predicate from being pushed. One old trick is to throw a `rownum` into the inline view. Of course, it is possible that some future version of the optimizer will be smart enough to figure out that `rownum` isn't doing anything here and can safely be removed. And you'd need to leave a nice long comment for the next person who comes along and wonders why you put a `rownum` in a query when you're not doing anything with it. ``` SELECT * FROM ( SELECT COL1, COL2, COL3, rownum FROM my_table WHERE [CONDITIONS...] ) temp_tbl WHERE my_package.my_function(:bind_var, COL1) = 'Y'; ```
You didn't give us much information, so I will guess... The following query most probably makes use of some indexes, so it runs faster compared to an FTS (Full Table Scan): ``` SELECT * FROM ( SELECT COL1, COL2, COL3, my_package.my_function(:bind_var, COL1) as fn_indc FROM my_table WHERE [CONDITIONS...] ) temp_tbl WHERE fn_indc = 'Y'; ``` So it would access 'my_table' by the corresponding index(es), then apply the my_package.my_function(:bind_var, COL1) function only to rows belonging to the result set (i.e. those that got through the filtering). If you didn't define a function-based index, Oracle is not able to use indexes for queries like: ``` SELECT * FROM ( SELECT COL1, COL2, COL3 FROM my_table WHERE [CONDITIONS...] ) temp_tbl WHERE my_package.my_function(:bind_var, COL1) = 'Y'; ``` So it does the following: 1. FTS (Full Table Scan) of my_table 2. apply the filter to each row: my_package.my_function(:bind_var, COL1) = 'Y' PS: if you changed your function so that it returned the :bind_var instead of expecting it as a parameter, then you could build a function-based index and make use of it as follows: ``` SELECT COL1, COL2, COL3 FROM my_table WHERE [CONDITIONS...] and my_package.my_function(COL1) = :bind_var; ```
Oracle SQL Function in WHERE clause takes longer to run than in SELECT clause
[ "", "sql", "oracle", "oracle11g", "" ]
I have the problem that I need to insert into a table with 2 entries, where one value is constant but fetched from another table, and the other one is the actual content that changes. Currently I have something like ``` INSERT INTO table (id, content) VALUES ((SELECT id FROM customers WHERE name = 'Smith'), 1), ((SELECT id FROM customers WHERE name = 'Smith'), 2), ((SELECT id FROM customers WHERE name = 'Smith'), 5), ... ``` As this is super ugly, how can I do the above in Postgres without the constant SELECT repetition?
Yet another solution: ``` insert into table (id, content) select id, unnest(array[1, 2, 5]) from customers where name = 'Smith'; ```
You can cross join the result of the select with your values: ``` INSERT INTO table (id, content) select c.id, d.nr from ( select id from customers where name = 'Smith' ) as c cross join (values (1), (2), (5) ) as d (nr); ``` This assumes that the name is unique (but so does your original solution).
PostgreSQL: How to insert multiple values without multiple selects?
[ "", "sql", "postgresql", "sql-insert", "" ]
I have two tables called Table_A and Table_B, like below. **Table_A** ``` +-------+------------+------+---------+-----+ |MonthId|DepartmentId|Salary|Deduction| Paid| +-------+------------+------+---------+-----+ |01/2016| DEP01 | 1000 | 100 | 900| |01/2016| DEP02 | 2000 | 50 | 1950| |01/2016| DEP03 | 1500 | 0 | 1500| | | | | | | |02/2016| DEP01 | 1000 | 100 | 900| |02/2016| DEP02 | 2000 | 50 | 1950| |02/2016| DEP03 | 1500 | 0 | 1500| +-------------------------------------------+ ``` **Table_B** ``` +-------+------------+------+---------+-----+------+------+ |MonthId|DepartmentId|Salary|Deduction| Paid|Bank_1|Bank_2| +-------+------------+------+---------+-----+------+------+ |01/2016| DEP01 | 1000 | 100 | 900| 400 | 500 | |01/2016| DEP02 | 2000 | 50 | 1950| 1400 | 550 | |01/2016| DEP03 | 1500 | 0 | 1500| 0 | 1500 | +---------------------------------------------------------+ ``` So when I try ``` SELECT MonthId, DepartmentId, Salary, Deduction, Paid, Null AS Bank_1, Null AS Bank_2 FROM Table_A WHERE MonthId = '02/2016' UNION SELECT MonthId, DepartmentId, Salary, Deduction, Paid, Bank_1, Bank_2 FROM Table_B WHERE MonthId = '02/2016' ``` > with WHERE MonthId = '02/2016' I'm getting what I need, but when I try with > WHERE MonthId = '01/2016' I'm getting the result below, so please help me to solve this problem. ``` +-------+------------+------+---------+-----+------+------+ |MonthId|DepartmentId|Salary|Deduction| Paid|Bank_1|Bank_2| +-------+------------+------+---------+-----+------+------+ |01/2016| DEP01 | 1000 | 100 | 900| NULL | NULL | |01/2016| DEP01 | 1000 | 100 | 900| 400 | 500 | |01/2016| DEP02 | 2000 | 50 | 1950| NULL | NULL | |01/2016| DEP02 | 2000 | 50 | 1950| 1400 | 550 | |01/2016| DEP03 | 1500 | 0 | 1500| NULL | NULL | |01/2016| DEP03 | 1500 | 0 | 1500| 0 | 1500 | +---------------------------------------------------------+ ```
I suspect you want aggregation: ``` SELECT MonthId, DepartmentId, Salary, Deduction, Paid, MAX(Bank_1), MAX(Bank_2) FROM (SELECT MonthId, DepartmentId, Salary, Deduction, Paid, Null AS Bank_1, Null AS Bank_2 FROM Table_A WHERE MonthId = '02/2016' UNION ALL SELECT MonthId, DepartmentId, Salary, Deduction, Paid, Bank_1, Bank_2 FROM Table_B WHERE MonthId = '02/2016' ) ab GROUP BY MonthId, DepartmentId, Salary, Deduction, Paid; ```
I think a `LEFT JOIN` should do it: Sample data: ``` CREATE TABLE #Table_A (MonthId VARCHAR(10), DepartmentId varchar(10), Salary int ,Deduction int, Paid int) INSERT INTO #Table_A VALUES ('01/2016','DEP01','1000','100','900'), ('01/2016','DEP02','2000','50','1950'), ('01/2016','DEP03','1500','0','1500'), ('02/2016','DEP01','1000','100','900'), ('02/2016','DEP02','2000','50','1950'), ('02/2016','DEP03','1500','0','1500') CREATE TABLE #Table_B (MonthId VARCHAR(10), DepartmentId varchar(10), Salary int ,Deduction int, Paid int, Bank_1 int ,Bank_2 int ) INSERT INTO #Table_B VALUES ('01/2016','DEP01','1000','100','900','400','500'), ('01/2016','DEP02','2000','50','1950','1400','550'), ('01/2016','DEP03','1500','0','1500','0','1500') ``` Query with WHERE clause: ``` SELECT A.MonthId, A.DepartmentId, A.Salary, A.Deduction, A.Paid, B.Bank_1, B.Bank_2 FROM #Table_A AS A LEFT OUTER JOIN #Table_B AS B ON A.MonthId = B.MonthId AND A.DepartmentId = B.DepartmentId AND A.Salary = B.Salary AND A.Deduction = B.Deduction AND A.Paid = B.Paid WHERE A.MonthId = '01/2016' ``` Result: [![enter image description here](https://i.stack.imgur.com/OeglL.png)](https://i.stack.imgur.com/OeglL.png) Query without the WHERE clause: ``` SELECT A.MonthId, A.DepartmentId, A.Salary, A.Deduction, A.Paid, B.Bank_1, B.Bank_2 FROM #Table_A AS A LEFT OUTER JOIN #Table_B AS B ON A.MonthId = B.MonthId AND A.DepartmentId = B.DepartmentId AND A.Salary = B.Salary AND A.Deduction = B.Deduction AND A.Paid = B.Paid ``` Result (you will not have any values in Bank\_1 ,Bank\_2 for 02/2016 as there are no records for that MonthId in #Table\_B ): [![enter image description here](https://i.stack.imgur.com/VJNJs.png)](https://i.stack.imgur.com/VJNJs.png)
How do I Union two tables with different columns
[ "", "sql", "" ]
I'm importing an Excel document into a SQL table via the Import Wizard. My table contains many columns and a few of them contain phone numbers, and number/letter identifiers that are rather long (10+ numbers). Once the Excel document is imported, these numbers get modified. An example: 22222222222222200000 gets stored as 2.22222E+19 in the SQL database. I've tried formatting the columns in Excel to be strings, general, integer, and etc. but it doesn't seem to matter. When the columns are set to integer, they show the number accurately in Excel, but upon import, they are modified to contain E+. While uploading, the Import Wizard automatically assumes these columns are "Float," and I've tried altering them to nvarchar as well. The import will still modify the large numbers. Is there any way to import these values exactly as they appear in Excel?
I've seen this issue before; it's Excel that is the problem, not SSIS. Excel samples the first few rows and then infers the data type, even if you explicitly set it to text. What you need to do is put this into the Excel file connection string in the SSIS package. This instruction tells Excel that the columns contain mixed data types and hints it to do extra checking before deciding that the column is a numeric type when in fact it's not. ``` ;Extended Properties="IMEX=1" ``` It should work with this (in most cases). The safer thing to do is export the Excel data to tab-delimited text and use SSIS to import that.
There is an option to edit the mappings of the columns to the data type you choose; it's in one of the last stages of the wizard. Be careful not to lose the 0 at the beginning of phone numbers due to data transformation; this can sometimes be a real headache when you miss it.
Importing Excel into SQL with mixed columns - Large Numbers Contain E+
[ "", "sql", "excel", "t-sql", "" ]
I have a Transactions table with over 2,500,000 rows and three columns (that are relevant): id, company\_id, and created\_at. id identifies the transaction, company\_id identifies which company received it, created\_at is a timestamp with the time that transaction was performed. What I want is to get a list of the differences between every consecutive pair of transactions of a given company. In other words, if my table goes: ``` id | company_id | created_at ------------------------------ 01 | ab | 2016/01/02 02 | ab | 2016/01/03 03 | cd | 2016/01/03 04 | ab | 2016/01/03 05 | cd | 2016/01/04 06 | ab | 2016/01/05 ``` (Note that there may be an arbitrary number of transactions of other companies between two consecutive transaction of a given company.) Then I want the output to be: ``` diff | company_id ------------------- 01 | ab 00 | ab 01 | cd 02 | ab ``` (I wrote the created\_at and diff values in days, but that's just for ease of visualisation.) I tried using [this](https://stackoverflow.com/questions/7937288/calculate-the-time-difference-between-of-two-rows) but it was too slow. --EDIT: "This" is: ``` SELECT (B.created_at - A.created_at) AS diff, A.company_id FROM Transactions A CROSS JOIN Transactions B WHERE B.id IN (SELECT MIN (C.id) FROM Transactions C WHERE C.id > A.id AND C.company_id = A.company_id) ORDER BY A.id ASC ```
To get a result like the one it looks like you're expecting, I will sometimes make use of MySQL user-defined variables, and have MySQL perform the processing of the rows "in order", so I can compare the current row to values from the previous row. For this to run efficiently, we'll want an appropriate index, to avoid an expensive "Using filesort" operation. (We're going to need the rows in `company_id` order, then by `id` order, so those will be the first two columns in the index. While we're at it, we might just as well include the `created_at` column and make it a covering index. ``` ... ON Transactions (company_id, id, created_at) ``` Then we can try a query like this: ``` SELECT t.diff , t.company_id FROM ( SELECT IF(r.company_id = @pv_company_id, r.created_at - @pv_created_at, NULL) AS diff , IF(r.company_id = @pv_company_id, 1, 0) AS include_ , @pv_company_id := r.company_id AS company_id , @pv_created_at := r.created_at AS created_at FROM (SELECT @pv_company_id := NULL, @pv_created_at := NULL) i CROSS JOIN Transactions r ORDER BY r.company_id , r.id ) t WHERE t.include_ ``` The MySQL Reference Manual explicitly warns against using user-defined variables like this within a statement. But the behavior we observe in MySQL 5.1 and 5.5 is consistent. (The big problem is that some future version of MySQL could use a different execution plan.) The inline view aliased as `i` is just to initialize a couple of user-defined variables. We could just as easily do that as a separate step, before we run our query. But I like to include the initialization right in the statement itself, so I don't need a separate SELECT/SET statement. MySQL accesses the Transactions table, and processes the `ORDER BY` first, ordering the rows from `Transactions` in (company\_id,id) order. (We prefer to have this done via an index, rather than via an expensive "Using filesort" operation, which is why we want that index defined, with `company_id` and `id` as the leading columns. 
The "trick" is saving the values from the current row into user-defined variables. When processing the next row, the values from the previous row are available in the user-defined variables, for performing comparisons (is the current row for the same company_id as the previous row?) and for performing a calculation (the difference between the `created_at` values of the two rows).

Based on the usage of the subtraction operation, I'm assuming that the `created_at` column is integer/numeric. That is, I'm assuming that `created_at` is *not* `DATE`, `DATETIME`, or `TIMESTAMP` datatype, because for those types we would not use the subtraction operation to find a difference.

```
SELECT a
     , b
     , a - b AS `subtraction`
     , DATEDIFF(a,b) AS `datediff`
     , TIMESTAMPDIFF(DAY,b,a) AS `tsdiff`
FROM ( SELECT DATE('2015-02-17') AS a
            , DATE('2015-01-16') AS b
     ) t
```

returns:

```
a           b           subtraction  datediff  tsdiff
----------  ----------  -----------  --------  ------
2015-02-17  2015-01-16          101        32      32
```

(The subtraction operation doesn't throw an error. But what it returns may be unexpected. In this example, it returns the difference between two integer values `20150217` and `20150116`, which is *not* the number of days between the two `DATE` expressions.)

**EDIT**

I notice that the original query includes an `ORDER BY`. If you need the rows returned in a specific order, you can include that column in the inline view query, and use an `ORDER BY` on the outer query.

```
SELECT t.diff
     , t.company_id
FROM ( SELECT IF(r.company_id = @pv_company_id, r.created_at - @pv_created_at, NULL) AS diff
            , IF(r.company_id = @pv_company_id, 1, 0) AS include_
            , @pv_company_id := r.company_id AS company_id
            , @pv_created_at := r.created_at AS created_at
            , r.id AS id
       FROM (SELECT @pv_company_id := NULL, @pv_created_at := NULL) i
       CROSS JOIN Transactions r
       ORDER BY r.company_id
              , r.id
     ) t
WHERE t.include_
ORDER BY t.id
```

Sorry, there's no getting around a "Using filesort" for the `ORDER BY` on the outer query.
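On MySQL 8.0+ (and most engines that support window functions) the same consecutive-row difference can be expressed with `LAG()` instead of user-defined variables. A minimal runnable sketch, using SQLite through Python as a stand-in (the sample data mirrors the question; `julianday()` plays the role of MySQL's date arithmetic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (id INTEGER, company_id TEXT, created_at TEXT);
INSERT INTO transactions VALUES
  (1, 'ab', '2016-01-02'), (2, 'ab', '2016-01-03'), (3, 'cd', '2016-01-03'),
  (4, 'ab', '2016-01-03'), (5, 'cd', '2016-01-04'), (6, 'ab', '2016-01-05');
""")

rows = conn.execute("""
    SELECT CAST(julianday(created_at) -
                julianday(LAG(created_at) OVER
                          (PARTITION BY company_id ORDER BY id)) AS INTEGER) AS diff,
           company_id
    FROM transactions
    ORDER BY id
""").fetchall()

# The first transaction of each company has no predecessor, so its diff is NULL.
diffs = [r for r in rows if r[0] is not None]
# diffs matches the expected output in the question.
```

This avoids both the self-join and the variable trick, and the optimizer is free to satisfy the window ordering from the same (company_id, id) index.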
Try this ``` SELECT t1.company_id, t2.created_at - t1.created_at as diff FROM Transactions t1 LEFT JOIN Transactions t2 on t2.created_at > t1.created_at and t2.company_id = t1.company_id ```
MySQL timestamp differences between two rows large table
[ "", "mysql", "sql", "" ]
Consider the following table:

```
id Part_no cust work_order
1  abc     xyz  111
2  abc     xyz  123
3  abc     xyz  121
4  qqq     xyz  222
```

Now when I enter a particular work order (for example 111), I want every row that shares its part number displayed:

```
part_no cust work_order
abc     xyz  111
abc     xyz  123
abc     xyz  121
```
@Wyatt Shipman's answer should work. Here's another way to do it: a self-join on `part_no`, filtering the second alias on the entered work order so the first alias returns every row for that part:

```
select a.part_no, a.cust, a.work_order
from thetable as a
inner join thetable as b
on a.part_no = b.part_no
where b.work_order = 111;
```

[SQL Fiddle](http://sqlfiddle.com/#!15/03923/2)
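A runnable sanity check of the self-join idea, using SQLite through Python (table and column names are simplified stand-ins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (part_no TEXT, cust TEXT, work_order INTEGER);
INSERT INTO orders VALUES
  ('abc', 'xyz', 111), ('abc', 'xyz', 123),
  ('abc', 'xyz', 121), ('qqq', 'xyz', 222);
""")

# b picks the row for the entered work order; a brings back every row
# that shares its part_no.
rows = conn.execute("""
    SELECT a.part_no, a.cust, a.work_order
    FROM orders a
    INNER JOIN orders b ON a.part_no = b.part_no
    WHERE b.work_order = 111
    ORDER BY a.work_order
""").fetchall()
```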
If you use MySQL you can force a specific ordering based on a field's value:

```
select part_no, cust, work_order
from your_table
where part_no = 'abc'
order by FIELD(work_order,111,123,121);
```

For Oracle you can use DECODE:

```
select part_no, cust, work_order
from your_table
where part_no = 'abc'
order by decode(work_order,111,1, 123,2,121,3);
```

*(It may not make much sense to hard-code the order like this, but you can do it.)*
How to display columns having same value?
[ "", "sql", "" ]
There are the tables:

User

```
id | login | password | creationDate
_____________________________________
1  | user1 | 123      | 12.12.12
2  | user2 | 123      | 12.12.12
3  | user3 | 123      | 12.12.12
```

AdditionalParams

```
id | userId | name     | value
_______________________________
1  | 1      | petName  | jim
2  | 1      | houseNum | 2
3  | 1      | favTea   | black
4  | 2      | favTea   | black
5  | 2      | petName  | jam
6  | 3      | favTea   | green
7  | 3      | lang     | C++
8  | 3      | petName  | jem
```

So, it contains different data for different users, and not all of them have the same set of additional params.

My goal is to select something like this: *"select all users with their petNames, but only if favTea=black"*

Result

```
User.id | User.login | User.password | petName
_____________________________________________
1       | user1      | 123           | jim
2       | user2      | 123           | jam
```

I've tried many variants, but none of them returns what I want. Here is my last try:

```
WITH Results_CTE AS (
    SELECT DISTINCT AdditionalParams.value,
        User.id as 'id',
        User.login as 'login',
        User.password as 'password',
        User.creationDate as 'creationDate',
        AdditionalParams.value as 'pet'
    FROM User
    INNER JOIN AdditionalParams ON User.id = AdditionalParams.userId
    WHERE AdditionalParams.name = 'PetName'
        AND AdditionalParams.value = 'black'
        AND AdditionalParams.name = 'PetName'
        OR AdditionalParams.name = 'favTea'
)
SELECT id as 'id',
    login as 'login',
    password as 'password',
    status as 'status',
    batchType as 'batchType',
    creationDate as 'creationDate',
    ExportDate as 'ExportDate',
    pet as 'pet'
FROM Results_CTE
```

PS: Database is MS SQL Server 2008
You can use: ``` SELECT u.id, u.login, u.password, ap2.value AS petName FROM User AS u INNER JOIN AdditionalParams AS ap ON u.Id = ap.userId AND ap.name = 'favTea' AND ap.value = 'black' LEFT JOIN AdditionalParams AS ap2 ON u.Id = ap2.userId AND ap2.name = 'petName' ``` It's just a query with two `JOIN` operations. The predicates of the `ON` clause implement the 'business' logic. [**Demo here**](http://sqlfiddle.com/#!9/e8a172/1)
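The two-join pattern can be verified end to end with SQLite through Python (same data as the question, with lower-cased names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (id INTEGER, login TEXT, password TEXT);
CREATE TABLE additional_params (id INTEGER, user_id INTEGER, name TEXT, value TEXT);
INSERT INTO user VALUES (1, 'user1', '123'), (2, 'user2', '123'), (3, 'user3', '123');
INSERT INTO additional_params VALUES
  (1, 1, 'petName',  'jim'),   (2, 1, 'houseNum', '2'),
  (3, 1, 'favTea',   'black'), (4, 2, 'favTea',   'black'),
  (5, 2, 'petName',  'jam'),   (6, 3, 'favTea',   'green'),
  (7, 3, 'lang',     'C++'),   (8, 3, 'petName',  'jem');
""")

# Inner join filters to users whose favTea is black;
# left join then attaches the pet name if one exists.
rows = conn.execute("""
    SELECT u.id, u.login, u.password, ap2.value AS petName
    FROM user u
    INNER JOIN additional_params ap
        ON u.id = ap.user_id AND ap.name = 'favTea' AND ap.value = 'black'
    LEFT JOIN additional_params ap2
        ON u.id = ap2.user_id AND ap2.name = 'petName'
    ORDER BY u.id
""").fetchall()
```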
Just a shot in the dark, but I think you could replace your entire query with something this simple. Please note that storing passwords in clear text is horrific. Passwords should be salted and hashed at all times; the original value should NOT be recoverable. I would also make the join to AdditionalParams a LEFT JOIN in case somebody doesn't have a pet.

```
SELECT DISTINCT User.id
    , User.login
    , User.password
    , User.creationDate
    , AdditionalParams.value as 'pet'
FROM User u
LEFT JOIN AdditionalParams ap ON u.id = ap.userId
WHERE ap.name = 'PetName'
```
Select from 2 table
[ "", "sql", "sql-server", "database", "select", "" ]
As mentioned in the title, I would like to have the optimal query to do the following, **WITHOUT SUBQUERIES**.

I have this

[![enter image description here](https://i.stack.imgur.com/O9bsF.png)](https://i.stack.imgur.com/O9bsF.png)

So here, each client has commandes (orders), each client has factures (bills), a bill contains orders, and the amounts of the bills and orders are in MontantCommande and MontantFacture.

I need to get the total sum of orders Commande (MontantCommande) and the total sum of bills Facture (MontantFacture) for each client. I tried the following but I get a wrong result:

```
SELECT cl.idClient
      ,SUM(cm.MontantCommande) AS TotalCommande
      ,SUM(f.MontantFacture) AS TotalFacture
FROM client cl
INNER JOIN commande cm ON cl.idClient = cm.idClientCommande
INNER JOIN facture f ON cl.idclient = f.idclientFacture
GROUP BY cl.IdClient
```

How can I do this?

EDIT: Here is the result of my query with a select from all concerned tables

[![enter image description here](https://i.stack.imgur.com/bJsof.png)](https://i.stack.imgur.com/bJsof.png)

The sum result is wrong here; as you can see it must be 1050-1610, 680-750, 600-1000.

**EDIT 2: I have to mention that I need to do this without subqueries. Thanks**
You can use correlated subqueries like this:

```
SELECT t.idClient,
(select sum(s.MontantCommande) from Commande s Where t.idClient = s.idClientCommande) as TotalCommande,
(select sum(f.MontantFacture) from Facture f Where t.idClient = f.idClientFacture) as TotalFacture
FROM Client t
```

Or with a join:

```
SELECT commande.idClient, commande.sum1 as TotalCommande, facture.sum2 as TotalFacture
FROM (SELECT t.idClient, sum(s.MontantCommande) as sum1
      FROM Client t
      INNER JOIN Commande s ON(t.idClient = s.idClientCommande)
      GROUP BY t.idClient) commande
INNER JOIN
     (SELECT t.idClient, sum(s.MontantFacture) as sum2
      FROM Client t
      INNER JOIN Facture s ON(t.idClient = s.idClientFacture)
      GROUP BY t.idClient) facture
ON(facture.idClient = commande.idClient)
```
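The reason the original double join gives wrong totals is that every bill row multiplies every order row before `SUM` runs. A runnable sketch of the correlated-subquery version with SQLite through Python (simplified column names; the data reproduces the totals from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE client   (id INTEGER);
CREATE TABLE commande (client_id INTEGER, montant INTEGER);
CREATE TABLE facture  (client_id INTEGER, montant INTEGER);
INSERT INTO client VALUES (1), (2), (3);
INSERT INTO commande VALUES (1, 500), (1, 550), (2, 680), (3, 600);
INSERT INTO facture  VALUES (1, 1610), (2, 750), (3, 1000);
""")

# Each child table is summed on its own, so the row counts never multiply.
rows = conn.execute("""
    SELECT c.id,
           (SELECT SUM(cm.montant) FROM commande cm WHERE cm.client_id = c.id) AS total_commande,
           (SELECT SUM(f.montant)  FROM facture  f  WHERE f.client_id  = c.id) AS total_facture
    FROM client c
    ORDER BY c.id
""").fetchall()
# Totals come out as 1050-1610, 680-750, 600-1000, as the question expects.
```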
Please use this query ``` SELECT idClient,SUM(MontantCommande) AS TotalCommande , SUM(MontantFacture) AS TotalFacture from client c Inner join commande a on c.IdClient = a.idClientCommande inner join facture b on a.IdCommande = b.IdCommandFacture AND a.idClientCommande = b.idclientFacture group by idClient ```
Get sum of two different columns on two different tables linked to a third table
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to incorporate old help desk records into a newer help desk app. I still have all the data from the old help desk app and it uses the same fields as the new one but its in a different file. This select statement works fine for the newer application to search all the past calls for something specific, whether its a keyword or who it is allocated to and so on. ``` SELECT status, identity, description, contact, scan_text, extended_desc, allocated_to FROM helpdesk.table1 WHERE UPPER(allocated_to) = coalesce(?, allocated_to) AND identity = coalesce(?, identity) AND description = coalesce(?, description) AND contact= coalesce(?, contact) AND UPPER(scan_text) LIKE coalesce(?,scantext) and upper(extended_desc) like coalesce(?, extended_desc) ORDER by allocated, identity desc ``` What I am trying to do is use some kind of union so I can just have one box to search both the newer and the old records, instead of two different boxes and having to remember where the data might be stored. I was thinking something like this might work but I think my where clause is too ambiguous and I tried just about every combination of including a library in front of the fields. ``` Select * From ( select status, identity, description, contact, scan_text, extended_desc, allocated_to from helpdesk.table1 Union select status, identity, description, contact, scan_text, extended_desc, allocated_to from helpdesk.table2 ) WHERE UPPER(allocated_to) = coalesce(?, allocated_to) AND identity = coalesce(?, identity) AND description = coalesce(?, description) AND contact= coalesce(?, contact) AND UPPER(scan_text) LIKE coalesce(?,scantext) and upper(extended_desc) like coalesce(?, extended_desc) ORDER by allocated, identity desc ``` If I do just the union with the two selects, I get all the records from both tables but i need to be able to narrow the results to a keyword or some other kind of field like in the first block of code. 
If someone could point me in the right direction I would really appreciate it. Also I probably should have said that it is db2 and the sql is running on a web app. So when this sql runs it generates either drop down boxes or a text field to put your own words into to narrow down the results of all the help desk calls.
I don't know if this is specific to the web app generator that I am using or if you can use it other places in different SQL, but you can reference the parameters and pass them behind the scenes with something called a **parameter parameter** which are the ??1, ??2, ??3 in the second half of the union. This solved the problem of having two sets of boxes for user input by just passing what you enter in the first where. Here is the final version of code that works for me ``` SELECT status, identity, description, contact, scan_text, extended_desc, allocated_to FROM helpdesk.table1 WHERE UPPER(allocated_to) = coalesce(?, allocated_to) AND identity = coalesce(?, identity) AND description = coalesce(?, description) AND contact= coalesce(?, contact) AND UPPER(scan_text) LIKE coalesce(?,scantext) and upper(extended_desc) like coalesce(?, extended_desc) Union SELECT status, identity, description, contact, scan_text, extended_desc, allocated_to FROM helpdesk.table2 WHERE UPPER(allocated_to) = coalesce(??1, allocated_to) AND identity = coalesce(??2, identity) AND description = coalesce(??3, description) AND contact= coalesce(??4, contact) AND UPPER(scan_text) LIKE coalesce(??5,scantext) and upper(extended_desc) like coalesce(??6, extended_desc) ORDER by allocated, identity desc ``` Thank you for those that tried to help me out though.
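Outside that particular web-app generator, the usual way to reuse one set of user inputs across both halves of a UNION is simply to bind the same parameter values twice. A sketch with SQLite through Python; the tables and data here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (status TEXT, allocated_to TEXT, scan_text TEXT);
CREATE TABLE table2 (status TEXT, allocated_to TEXT, scan_text TEXT);
INSERT INTO table1 VALUES ('open',   'Alice', 'printer jam');
INSERT INTO table2 VALUES ('closed', 'Alice', 'printer driver'),
                          ('closed', 'Bob',   'password reset');
""")

# Same filter applied to each table, glued together with UNION ALL.
branch = """
    SELECT status, allocated_to, scan_text FROM {table}
    WHERE UPPER(allocated_to) = COALESCE(?, UPPER(allocated_to))
      AND UPPER(scan_text) LIKE COALESCE(?, UPPER(scan_text))
"""
query = branch.format(table="table1") + " UNION ALL " + branch.format(table="table2")

params = ("ALICE", "%PRINTER%")
rows = conn.execute(query, params * 2).fetchall()  # same inputs bound to both branches
```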
You can always just put the where in both select statements that you're trying to union. Might be faster than combining both tables, then filtering. ``` SELECT * FROM helpdesk.table1 WHERE UPPER(allocated_to) = coalesce(?, allocated_to) AND identity = coalesce(?, identity) AND description = coalesce(?, description) AND contact= coalesce(?, contact) AND UPPER(scan_text) LIKE coalesce(?,scantext) and upper(extended_desc) like coalesce(?, extended_desc) UNION ALL SELECT * FROM helpdesk.table2 WHERE UPPER(allocated_to) = coalesce(?, allocated_to) AND identity = coalesce(?, identity) AND description = coalesce(?, description) AND contact= coalesce(?, contact) AND UPPER(scan_text) LIKE coalesce(?,scantext) and upper(extended_desc) like coalesce(?, extended_desc) ORDER BY allocated, identity desc; ``` just put the order by after the second select. `UPPER(allocated_to) = coalesce(?, UPPER(allocated_to))` syntax might work better also if you're worried about case.
SQL Union with coalesce(?) in where clause
[ "", "sql", "web-applications", "db2", "union", "" ]
I need Oracle SQL that returns the 'working' week number in a year:

* no overflowing weeks from one year to another
* each week starts from Monday
* the first few days of the year are week 01

So the result should be:

```
2015-12-28 - MON - week 53
2015-12-29 - TUE - week 53
2015-12-30 - WED - week 53
2015-12-31 - THU - week 53
===
2016-01-01 - FRI - week 01 - resetting yearly week counter
2016-01-02 - SAT - week 01
2016-01-03 - SUN - week 01
---
2016-01-04 - MON - week 02 - monday start of new week
2016-01-05 - TUE - week 02
...
2016-12-31 - SAT - week 53
===
2017-01-01 - SUN - week 01 - resetting yearly week counter
2017-01-02 - MON - week 02 - monday start of new week
...
```
I found the answer myself. The `TO_CHAR(date,'IW')` format is of no use, because according to that (ISO) standard the very first week of a year can start not only after the New Year but also before it (look at `TO_CHAR(TO_DATE('2014-12-31','YYYY-MM-DD'),'IW')=01`, the first week that belongs to the next year!)

```
           | DAY | WW | IW | MY
===========+=====+====+====+====
2014-12-28 | SUN | 52 | 52 | 52
2014-12-29 | MON | 52 | 01 | 53
2014-12-30 | TUE | 52 | 01 | 53
2014-12-31 | WED | 53 | 01 | 53
2015-01-01 | THU | 01 | 01 | 01
...        | ... | .. | .. | ..
2015-12-31 | THU | 53 | 53 | 53
2016-01-01 | FRI | 01 | 53 | 01
2016-01-02 | SAT | 01 | 53 | 01
2016-01-03 | SUN | 01 | 53 | 01
2016-01-04 | MON | 01 | 01 | 02
2016-01-05 | TUE | 01 | 01 | 02
2016-01-06 | WED | 01 | 01 | 02
2016-01-07 | THU | 01 | 01 | 02
2016-01-08 | FRI | 02 | 01 | 02
```

The logic is quite simple: look at the very first day of the year and its offset from Monday. If the current day's day-of-week number is smaller than that first-day offset, the week number should be incremented by 1.

The day-of-week number of the very first day of the year (its offset from Monday) is calculated with:

```
TO_CHAR(TO_DATE(TO_CHAR(dt,'YYYY')||'0101','YYYYMMDD'),'D')
```

So the final SQL statement is:

```
WITH DATES AS (
  SELECT DATE '2014-12-25' + LEVEL -1 dt FROM DUAL CONNECT BY LEVEL <= 500
)
SELECT dt,TO_CHAR(dt,'DY') DAY,TO_CHAR(dt,'WW') WW,TO_CHAR(dt,'IW') IW,
  CASE WHEN TO_CHAR(dt,'D')<TO_CHAR(TO_DATE(TO_CHAR(dt,'YYYY')||'0101','YYYYMMDD'),'D')
       THEN LPAD(TO_CHAR(dt,'WW')+1,2,'0')
       ELSE TO_CHAR(dt,'WW')
  END MY
FROM dates
```

Of course, one can create a function for that purpose like:

```
CREATE OR REPLACE FUNCTION WorkingWeek(dt IN DATE) RETURN CHAR IS
BEGIN
  IF(TO_CHAR(dt,'D')<TO_CHAR(TO_DATE('0101'||TO_CHAR(dt,'YYYY'),'DDMMYYYY'),'D')) THEN
    RETURN LPAD(TO_CHAR(dt,'WW')+1,2,'0');
  ELSE
    RETURN TO_CHAR(dt,'WW');
  END IF;
END WorkingWeek;
/
```
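The same logic ports to plain Python for sanity-checking, assuming a session where the day-of-week numbering starts at Monday (which is what the `'D'` comparison above relies on):

```python
from datetime import date

def working_week(d: date) -> str:
    """Oracle-style 'WW' week number, bumped by one once a Monday has
    passed inside the current seven-day block (mirrors the CASE above)."""
    ww = (d.timetuple().tm_yday - 1) // 7 + 1   # TO_CHAR(dt, 'WW')
    jan1_dow = date(d.year, 1, 1).isoweekday()  # 'D' of Jan 1, Monday = 1
    if d.isoweekday() < jan1_dow:               # TO_CHAR(dt,'D') < Jan 1's 'D'
        ww += 1
    return f"{ww:02d}"
```

The test dates below reproduce the expected results from the question, including the week-53 endings of 2015 and 2016.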
> W - week number in a month > > WW - week number in a year, **week 1 starts at 1st of Jan** > > IW - week number in a year, according to ISO standard For your requirement, you need to use combination of `IW` and `WW` format. You could combine them using a **CASE** expression. If you want to generate the list of dates for entire year, then you could use the **[row generator](https://lalitkumarb.com/2015/04/15/generate-date-month-name-week-number-day-number-between-two-dates-in-oracle-sql/)** method. ``` SQL> WITH sample_data AS( 2 SELECT DATE '2015-12-28' + LEVEL -1 dt FROM dual 3 CONNECT BY LEVEL <= 15 4 ) 5 -- end of sample_data mimicking real table 6 SELECT dt, 7 TO_CHAR(dt, 'DY') DAY, 8 NVL( 9 CASE 10 WHEN dt < DATE '2016-01-01' 11 THEN TO_CHAR(dt, 'IW') 12 WHEN dt >= next_day(TRUNC(DATE '2016-01-01', 'YYYY') - 1, 'Monday') 13 THEN TO_CHAR(dt +7, 'IW') 14 END, '01') week_number 15 FROM sample_data; DT DAY WEEK_NUMBER ---------- --- ----------- 2015-12-28 MON 53 2015-12-29 TUE 53 2015-12-30 WED 53 2015-12-31 THU 53 2016-01-01 FRI 01 2016-01-02 SAT 01 2016-01-03 SUN 01 2016-01-04 MON 02 2016-01-05 TUE 02 2016-01-06 WED 02 2016-01-07 THU 02 2016-01-08 FRI 02 2016-01-09 SAT 02 2016-01-10 SUN 02 2016-01-11 MON 03 15 rows selected. ``` **NOTE:** The value `15` to generate 15 rows and the dates are hard-coded above just for demonstration using the WITH clause since OP did not provide the test case with create and insert statements. In reality, you need to use your table and column names.
SQL working week in Oracle
[ "", "sql", "oracle", "week-number", "" ]
I have the below query:

```
DECLARE @url varchar (max)='http://v.mercola.com/blogs/public_blog/New-Diet-Pill-Expands-1-000-Times-in-Your-Stomach-24728.aspx'

SELECT replace(replace(RIGHT(@URL , CHARINDEX ('/' ,REVERSE(@URL))-1),'.aspx',''),'-',' ') as abc
```

which returns the below output:

**Actual output**

[![enter image description here](https://i.stack.imgur.com/5WAhU.jpg)](https://i.stack.imgur.com/5WAhU.jpg)

**Expected output**

[![enter image description here](https://i.stack.imgur.com/BKQJL.jpg)](https://i.stack.imgur.com/BKQJL.jpg)

i.e. I want to eliminate the string after the last occurrence of `-`. What changes do I have to make to get the expected output?

In all, I want the substring after the last occurrence of `/` and before the last occurrence of `-`, as shown above. Please help, and thanks in advance!
Try this ``` DECLARE @url VARCHAR (max)='http://v.mercola.com/blogs/public_blog/New-Diet-Pill-Expands-1-000-Times-in-Your-Stomach-24728.aspx' SELECT Reverse(LEFT(mid, Charindex('/', mid) - 1)) FROM (SELECT Substring(Reverse(@url), Charindex('-', Reverse(@url)) + 1, Len(@url)) AS mid) a ```
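For comparison, the same slicing is straightforward outside SQL; a Python sketch of the intent (split on the last `/`, then drop everything after the last `-`):

```python
url = ('http://v.mercola.com/blogs/public_blog/'
       'New-Diet-Pill-Expands-1-000-Times-in-Your-Stomach-24728.aspx')

last_segment = url.rsplit('/', 1)[-1]   # text after the last '/'
stem = last_segment.rsplit('-', 1)[0]   # drop everything after the last '-'
title = stem.replace('-', ' ')          # same as the REPLACE(...) step in the query
```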
Something like this: ``` DECLARE @url varchar (max)='http://v.mercola.com/blogs/public_blog/New-Diet-Pill-Expands-1-000-Times-in-Your-Stomach-24728.aspx' declare @suffix varchar(max) select @suffix = RIGHT(@URL , CHARINDEX ('/' ,REVERSE(@URL))-1) select left(@suffix, len(@suffix) - charindex('-', reverse(@suffix))) ``` Output: ``` New-Diet-Pill-Expands-1-000-Times-in-Your-Stomach ```
Return a substring from a specified string in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "substring", "charindex", "" ]
I have two tables `Products` and `ProductProperties`.

```
Products
name - string
description - text
etc etc

ProductProperties
product_id - integer
property_id - integer
```

There is also a table `Properties` which basically stores the list of property names and their attributes.

**How can I implement a SQL command that finds a product with the property\_ids (A or B or C) AND (X or Y or Z)?**

I've got up to here:

```
SELECT DISTINCT "products".* FROM "products"
INNER JOIN "product_properties" ON "product_properties"."product_id" = "products"."id"
  AND "product_properties"."deleted_at" IS NULL
WHERE "products"."deleted_at" IS NULL
  AND (product_properties.property_id IN ('504, 506, 403'))
  AND (product_properties.property_id IN ('520, 501, 502'))
```

But it doesn't really work, since it's looking for a product property which has both values 504 and 520, which will never exist.

Would appreciate some help!
You need to define intermediate resultsets on a property group basis: ``` SELECT DISTINCT p.* FROM products p JOIN product_properties groupA ON groupA.product_id = p.id AND groupA.deleted_at IS NULL AND groupA.property_id IN ('504') JOIN product_properties groupB ON groupB.product_id = p.id AND groupB.deleted_at IS NULL AND groupB.property_id IN ('520') WHERE p.deleted_at IS NULL ``` You see, you detected the problem yourself very nicely: "But it doesn't really work since it's looking for a Product Property which has both values 504 and 520, which will never exist." Indeed, recordsets are immutable within a query, all single criteria applied to them are applied all at once. You need to duplicate each table and apply individual criteria to them.
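A compact check of the one-join-per-property-group idea with SQLite through Python (property IDs taken from the question, products made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER);
CREATE TABLE product_properties (product_id INTEGER, property_id TEXT);
INSERT INTO products VALUES (1), (2), (3);
INSERT INTO product_properties VALUES
  (1, '504'), (1, '520'),   -- satisfies both groups
  (2, '504'),               -- group A only
  (3, '520');               -- group B only
""")

# Each group gets its own join, so the OR happens inside a group
# and the AND happens across groups.
rows = conn.execute("""
    SELECT DISTINCT p.id
    FROM products p
    JOIN product_properties ga
      ON ga.product_id = p.id AND ga.property_id IN ('504', '506', '403')
    JOIN product_properties gb
      ON gb.product_id = p.id AND gb.property_id IN ('520', '501', '502')
    ORDER BY p.id
""").fetchall()
```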
Hi try this queries i just thinking about it so i didn't try any of them check i got the idea i want to do ``` SELECT DISTINCT "products".* FROM products pr WHERE id IN ( SELECT product_id FROM ProductProperties WHERE property_id IN (504,520) GROUP BY product_id HAVING Count(*) = 2 ) AND "products"."deleted_at" IS NULL SELECT DISTINCT "products".* FROM products pr, INNER JOIN ( SELECT product_id,count(*) as nbr FROM ProductProperties WHERE property_id IN (504,520) GROUP BY product_id ) as temp ON temp.product_id = pr.id WHERE "products"."deleted_at" IS NULL AND temp.nbr = 2 ``` and also you can check this one as well ( you can use also the join in where clause instead of using INNER JOIN) ``` SELECT DISTINCT products.* FROM products as p INNER JOIN product_properties as p1 ON p1.product_id = p.id INNER JOIN product_properties as p2 ON p2.product_id = p.id WHERE p.deleted_at IS NULL AND p1.property_id = '504' AND p1.deleted_at IS NULL AND p2.property_id = '520' AND p2.deleted_at IS NULL ```
WHERE Clause for One-To-Many Association
[ "", "sql", "postgresql", "join", "" ]
I'm honestly really confused here, so I'll try to keep it simple.

We have

Table A: id

Table B: id || number

Table A is a "prefilter" to B, since B contains a lot of different objects, including A.

So my query, trying to get all A's with a filter:

```
SELECT *
FROM A a
JOIN B b ON b.id = a.id
WHERE CAST(SUBSTRING(b.number, 2, 30) AS integer) between 151843 and 151865
```

Since ALL instances of A start with a letter ("X******"), I just want to truncate the first letter to let the filter do its work with the number specified by the user. At first glance, there should be absolutely no worries. But it seems I was wrong. And on something I didn't expect...

It seems like my WHERE clause is executed BEFORE my JOIN. Therefore, since many B's have numbers starting with more than one letter, I have an invalid conversion happening, despite the fact that it would NEVER happen if we stayed within A's.

I always thought that the WHERE clause was executed after joins, but in this case, it seems Postgres wants to prove me wrong. Any explanations?

SQLFiddle demonstrating the problem: http://sqlfiddle.com/#!15/cd7e6e/7

And even with a subquery, it still makes the same error...
You can use the regex substring function to keep only the digits: `CAST(substring(B.number from '\d+') AS integer)`. See a working example here: <http://sqlfiddle.com/#!15/cd7e6e/18>
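The regex idea translates directly to other languages; a Python sketch of extracting the numeric part and filtering (the sample values are made up):

```python
import re

numbers = ["X151843", "AB151860", "X151900", "Y007"]

def numeric_part(s: str) -> int:
    """Return the first run of digits in s as an integer."""
    return int(re.search(r"\d+", s).group())

in_range = [n for n in numbers if 151843 <= numeric_part(n) <= 151865]
```

Because the conversion only ever sees the digit run, it cannot fail on values with more than one leading letter.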
SQL is a declarative language. For a select statement, you declare the criteria the data you are looking for must meet. You don't get to choose the execution path, your query isn't executed procedurally. Thus the optimizer is free to choose any execution plan it likes, as long as it returns records specified by your criteria. I suggest you change your query to cast to string instead of to integer. Something like: ``` WHERE SUBSTRING(b.number, 2, 30) between CAST(151843 AS varchar) and CAST(151865 AS varchar) ```
Query applying Where clause BEFORE Joins?
[ "", "sql", "postgresql", "" ]
What is the best way to format a date in a MySQL query? I am currently running the query below, which selects the data but doesn't order the information correctly. I presume this is due to the current format of the 'updatetime' column.

```
SELECT * FROM updates WHERE udcode='Remote Connection' ORDER BY updatetime DESC LIMIT 20;
```

The current format is as follows:

> 31/03/2015 13:41:45

How should this date be formatted in order for the ORDERING to work correctly? Thanks in advance.
Use: ``` ORDER BY DATE_FORMAT(updatetime, '%Y-%m-%d %H:%i:%S') DESC ```
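If `updatetime` is stored as text in `dd/mm/yyyy` form, plain ordering compares characters and sorts by day first. The effect is easy to reproduce in Python; the fix is to parse the value into a real date before sorting (on the MySQL side, `STR_TO_DATE` can do that parsing):

```python
from datetime import datetime

stamps = ["31/03/2015 13:41:45", "02/04/2015 09:00:00", "28/03/2015 18:30:00"]

as_text = sorted(stamps, reverse=True)   # character comparison: day field wins, wrong order
as_dates = sorted(stamps, reverse=True,
                  key=lambda s: datetime.strptime(s, "%d/%m/%Y %H:%M:%S"))
```

In the text sort, `31/03` lands first even though `02/04` is newer; parsing puts April ahead of March as expected.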
you can change the format of your date in MySQL with DATE\_FORMAT ``` SELECT *, DATE_FORMAT(`updatetime`, '%Y-%m-%d %H:%i:%S') AS mydate FROM updates WHERE udcode='Remote Connection' ORDER BY mydate DESC LIMIT 20; ``` <http://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-format>
MySQL Date Format For Ordering By Date
[ "", "mysql", "sql", "" ]
I have four queries.

Query #1:

```
select satisfaction_score,count(satisfaction_score) as Satisfaction_count
from j_survey_response
where satisfaction_score != 0
group by satisfaction_score
```

The output will be:

```
satisfaction_score  Satisfaction_count
1                   5
2                   8
3                   97
4                   329
5                   859
```

Query #2:

```
select response_score,count(response_score) as response_count
from j_survey_response
where response_score != 0
group by response_score
```

OUTPUT

```
response_score  response_count
1               28
2               8
3               42
4               250
5               980
```

Query #3:

```
select responder_score,count(responder_score) as responder_count
from j_survey_response
where responder_score != 0
group by responder_score
```

OUTPUT

```
responder_score  responder_count
1                24
2                3
3                30
4                236
5                987
```

Query #4:

```
select service_score,count(service_score) as service_count
from j_survey_response
where service_score != 0
group by service_score
```

OUTPUT

```
service_score  service_count
1              22
2              2
3              34
4              270
5              966
```

But I need the output as below:

```
score  satisfaction_count  response_count  responder_count  service_count
1      5                   28              24               22
2      8                   8               3                2
3      97                  42              30               34
4      329                 250             236              270
5      859                 980             987              966
```
You can `UNION ALL` the separate queries and apply *conditional* aggregation on the resulting set: ``` select score, max(case when type = 'satisfaction' then count end) as satisfaction_count, max(case when type = 'response' then count end) as response_count, max(case when type = 'responder' then count end) as responder_count, max(case when type = 'service' then count end) as service_count from ( select satisfaction_score as score, count(satisfaction_score) as count, 'satisfaction' as type from j_survey_response where satisfaction_score != 0 group by satisfaction_score union all select response_score, count(response_score) as count, 'response' as type from j_survey_response where response_score != 0 group by response_score union all select responder_score, count(responder_score) as count, 'responder' as type from j_survey_response where responder_score != 0 group by responder_score union all select service_score, count(service_score) as count, 'service' as type from j_survey_response where service_score != 0 group by service_score) as t group by score ```
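The UNION ALL plus conditional-aggregation pattern is easy to check on a small sample with SQLite through Python (only two of the four score columns, with made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE j_survey_response (satisfaction_score INTEGER, response_score INTEGER)")
conn.executemany("INSERT INTO j_survey_response VALUES (?, ?)",
                 [(5, 5), (5, 4), (4, 5), (0, 5), (5, 0)])

# Each per-column count is tagged with its type, stacked with UNION ALL,
# then pivoted back into one row per score with MAX(CASE ...).
rows = conn.execute("""
    SELECT score,
           MAX(CASE WHEN type = 'satisfaction' THEN cnt END) AS satisfaction_count,
           MAX(CASE WHEN type = 'response'     THEN cnt END) AS response_count
    FROM (
        SELECT satisfaction_score AS score, COUNT(*) AS cnt, 'satisfaction' AS type
        FROM j_survey_response WHERE satisfaction_score != 0
        GROUP BY satisfaction_score
        UNION ALL
        SELECT response_score, COUNT(*), 'response'
        FROM j_survey_response WHERE response_score != 0
        GROUP BY response_score
    ) t
    GROUP BY score
    ORDER BY score
""").fetchall()
```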
Move them in subqueries and join them by score columns. Like this ``` select q1.satisfaction_score as score, q1.satisfaction_count, q2.response_count, q3.responder_count , q4.service_count, from (query 1) q1 join (query 2) q2 on q1.satisfaction_score=q2.response_score join (query 3) q3 on q1.satisfaction_score=q3.responder_score join (query 4) q4 on q1.satisfaction_score=q3.service_score ```
count multiple columns in one query
[ "", "mysql", "sql", "" ]
I have the following table:

![product](https://i.stack.imgur.com/x5Ejt.png)

The products are in different categories, and I am expecting the output:

![output](https://i.stack.imgur.com/57k05.png)

That is, each product and its cost need to be displayed under its category (for the category's cost value I want to display the total cost of its products). I tried different approaches using ROLLUP and grouping, but I am not getting the expected output.
Here it goes: Sample Data: ``` CREATE TABLE #product (ID INT, Category VARCHAR(50), Product VARCHAR(50), Value INT) INSERT INTO #product VALUES(1,'Non-veg','Chicken',150), (2,'Non-veg','Mutton',200), (3,'Non-veg','Fish',220), (4,'Non-veg','Prawns',250), (5,'Veg','Gobi',100), (6,'Veg','Parota',45), (7,'Veg','vegbirani',150) ``` Query using [GROUP BY with ROLLUP](https://technet.microsoft.com/en-us/library/ms189305(v=sql.90).aspx) ``` SELECT Category, Product, SUM(Value) AS Value FROM #product GROUP BY Category, Product WITH ROLLUP ``` Results: [![enter image description here](https://i.stack.imgur.com/a5aBy.png)](https://i.stack.imgur.com/a5aBy.png) you can further manipulate the results: ``` SELECT COALESCE(product,category,'Total') Category, SUM(Value) AS Value FROM #product GROUP BY Category, Product WITH ROLLUP ``` Result: [![enter image description here](https://i.stack.imgur.com/vsMl5.png)](https://i.stack.imgur.com/vsMl5.png) To answer the comment below: "is there any way to display Category first then Products" this seemed to work: ``` ;WITH CTE AS ( SELECT Category, Product, SUM(Value) AS Value, ROW_NUMBER() OVER (PARTITION BY Category ORDER BY Product ) AS rn FROM #product GROUP BY Category, Product WITH ROLLUP) SELECT Category = COALESCE(A.product,A.category,'Total') , A.Value FROM CTE AS A ORDER BY ISNULL(A.category,'zzzzzz') ,rn ``` Results: [![enter image description here](https://i.stack.imgur.com/yyKPs.png)](https://i.stack.imgur.com/yyKPs.png)
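`WITH ROLLUP` is specific to engines that support it; where it is missing, the same shape can be emulated with `UNION ALL`. A runnable sketch with SQLite through Python, on a trimmed copy of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (category TEXT, product TEXT, value INTEGER);
INSERT INTO product VALUES
  ('Non-veg', 'Chicken', 150), ('Non-veg', 'Mutton', 200),
  ('Veg',     'Gobi',    100), ('Veg',     'Parota',  45);
""")

rows = conn.execute("""
    SELECT COALESCE(product, category, 'Total') AS label, SUM(value) AS value
    FROM (
        SELECT category, product, value FROM product   -- detail rows
        UNION ALL
        SELECT category, NULL, value FROM product      -- per-category subtotal
        UNION ALL
        SELECT NULL, NULL, value FROM product          -- grand total
    )
    GROUP BY category, product
    ORDER BY category IS NULL,     -- grand total last
             category,
             product IS NOT NULL,  -- category line before its products
             product
""").fetchall()
```

The ordering trick puts each category total directly above its products, which is the "category first, then products" layout asked about in the comments.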
Using Rollup you would do it like this. ``` SELECT COALESCE(product,category,'Total') Category, SUM(VALUE) cost FROM products GROUP BY ROLLUP(category,product) ```
How to display products under Category in sql in a table
[ "", "sql", "sql-server", "grouping", "rollup", "" ]
I have one master table and 5 child tables called

```
@POWER_CHILD, @AUDIO_CHILD, @RESISTOR_CHILD, @CAPACITOR_CHILD, @INDUCTOR_CHILD
```

I need to pull data only when there is a match with the child tables, but any column that contains a blank (i.e. no data in the master table) should be treated as valid and appear in the output. I did an `INNER JOIN`, but no luck. I wasn't sure how to handle the blanks, so I thought to ask the experts. Kindly don't treat this as a basic fundamentals question; please share your thoughts.

DDL:

```
DECLARE @MASTER TABLE
(
  PowerAmplifierID VARCHAR (50),
  AudioAmplifierID VARCHAR (50),
  ResistorID VARCHAR (50),
  CapacitorID VARCHAR (50),
  InductorID VARCHAR (50),
  Years VARCHAR (50)
)

INSERT @MASTER
SELECT '24456', '5392','', '2190', '10', '1959' UNION ALL
SELECT '24456', '', '', '8888', '29', '1959' UNION ALL
SELECT '30583', '4233', '', '2190', '56', '1959' UNION ALL
SELECT '24455', '333333', '','2190','10', '1958' UNION ALL
SELECT '696969', '7879', '1xt','5000','29', '2015' UNION ALL
SELECT '24456', '5392', '', '2190', '29', '1959' UNION ALL
SELECT '24455', '4233', '', '2190', '56', '1959'

DECLARE @POWER_CHILD TABLE
(
  PowerAmplifierID VARCHAR (50),
  PowerAmplifier VARCHAR (50)
)

INSERT @POWER_CHILD
SELECT '24456', 'Class A Power Amplifiers' UNION ALL
SELECT '24455', 'Class B Power Amplifiers'

DECLARE @AUDIO_CHILD TABLE
(
  AudioAmplifierID VARCHAR (50),
  AudioAmplifier VARCHAR (50)
)

INSERT @AUDIO_CHILD
SELECT '5392' ,'Transconductance' UNION ALL
SELECT '4233' ,'Transresistance' UNION ALL
SELECT '7879', 'Vacuum-tube'

DECLARE @RESISTOR_CHILD TABLE
(
  ResistorID VARCHAR (50),
  Resistor VARCHAR (50)
)

INSERT @RESISTOR_CHILD
SELECT '1xt', 'Thick film' UNION ALL
SELECT '2xt', 'Metal film' UNION ALL
SELECT '3xt', 'Wirewound'

DECLARE @CAPACITOR_CHILD TABLE
(
  CapacitorID VARCHAR (50),
  Capacitor VARCHAR (50)
)

INSERT @CAPACITOR_CHILD
SELECT '2190', 'Film' UNION ALL
SELECT '3536', 'tantalum' UNION ALL
SELECT '9999', 'niobium'

DECLARE @INDUCTOR_CHILD TABLE
(
  InductorID VARCHAR (50),
  Inductor VARCHAR (50)
)

INSERT @INDUCTOR_CHILD
SELECT '29', 'air core' UNION ALL
SELECT '56', 'parasitic' UNION ALL
SELECT '35', 'Spiderweb'
```

Expected output:

```
PowerAmplifierID PowerAmplifier           AudioAmplifierID AudioAmplifier   ResistorID Resistor CapacitorID Capacitor InductorID Inductor  Year
24456            Class A Power Amplifiers 5392             Transconductance ''         ''       2190        Film      29         air core  1959
24455            Class B Power Amplifiers 4233             Transresistance  ''         ''       2190        Film      56         parasitic 1959
```

Thanks a lot...

```
24456 5392 2190 10 1959      -- Invalid InductorID, not available in child table, so remove from output
24456 8888 29 1959           -- Invalid CapacitorID, not available in child table, so remove from output
30583 4233 2190 56 1959      -- Invalid PowerAmplifierID, not available in child table, remove from output
24455 333333 2190 10 1958    -- Invalid AudioAmplifierID, not available in child table, remove from output
696969 7879 1xt 5000 29 2015 -- Invalid PowerAmplifierID, not available in child table, remove from output
24456 5392 2190 29 1959      -- all valid IDs and ResistorID is blank, so treated as a valid record
24455 4233 2190 56 1959      -- all valid IDs and ResistorID is blank, so treated as a valid record
```
Not sure what is described about *blanks* but to return rows from `master` and related details (if any) `LEFT JOIN` may be applied: ``` SELECT * FROM @MASTER m LEFT JOIN @POWER_CHILD pc on pc.PowerAmplifierID = m.PowerAmplifierID LEFT JOIN @AUDIO_CHILD ac on ac.AudioAmplifierID = m.AudioAmplifierID LEFT JOIN @RESISTOR_CHILD rc on rc.ResistorID = m.ResistorID LEFT JOIN @CAPACITOR_CHILD cc on cc.CapacitorID = m.CapacitorID LEFT JOIN @INDUCTOR_CHILD ic on ic.InductorID = m.InductorID WHERE (m.PowerAmplifierID = '' or pc.PowerAmplifierID is not NULL) and (m.AudioAmplifierID = '' or ac.AudioAmplifierID is not NULL) and (m.ResistorID = '' or rc.ResistorID is not NULL) and (m.CapacitorID = '' or cc.CapacitorID is not NULL) and (m.InductorID = '' or ic.InductorID is not NULL) ```
Here's my SQL - I use multiple left joins and coalesce to replace the nulls. You could add a where clause to remove completely unmatched lines. ``` select m.PowerAmplifierID , Coalesce(pc.PowerAmplifier,'') as PowerAmplifier, m.AudioAmplifierID , Coalesce(ac.AudioAmplifier,'') as AudioAmplifier, m.ResistorID , Coalesce(rc.Resistor,'') as Resistor, m.CapacitorID , Coalesce(cc.Capacitor,'') as Capacitor, m.InductorID , Coalesce(ic.Inductor,'') as Inductor, m.Years from @Master m left join @Power_Child pc on pc.PowerAmplifierID = m.PowerAmplifierID left join @Audio_Child ac on ac.AudioAmplifierID = m.AudioAmplifierID left join @RESISTOR_CHILD rc on rc.ResistorID = m.ResistorID left join @Capacitor_Child cc on cc.CapacitorID = m.CapacitorID left join @Inductor_Child ic on ic.InductorID = m.InductorID ``` To show only rows which have no invalid Id's in the master table add this where clause ``` where not (((m.PowerAmplifierID is not null) and pc.PowerAmplifierID is null) Or ((m.AudioAmplifierID is not null) and ac.AudioAmplifierID is null) Or ((m.ResistorID is not null) and rc.ResistorID is null) Or ((m.CapacitorID is not null) and cc.CapacitorID is null) Or ((m.InductorID is not null) and ic.InductorID is null)) ``` if you want to show only rows which do have an invalid id in @master use this where clause ``` where ((m.PowerAmplifierID is not null) and pc.PowerAmplifierID is null) Or ((m.AudioAmplifierID is not null) and ac.AudioAmplifierID is null) Or ((m.ResistorID is not null) and rc.ResistorID is null) Or ((m.CapacitorID is not null) and cc.CapacitorID is null) Or ((m.InductorID is not null) and ic.InductorID is null) ``` In the data set you provided all rows in @master have an invalid id
Complicated Join in SQL Server 2008
[ "", "sql", "sql-server", "t-sql", "" ]
By using the below query the output is displayed as below ``` SELECT dlr_acc_acct_id , ( NVL (dlr_jan, 'D') || NVL (dlr_feb, 'D') || NVL (dlr_mar, 'D') || NVL (dlr_apr, 'D') || NVL (dlr_may, 'D') || NVL (dlr_jun, 'D') || NVL (dlr_jul, 'D') || NVL (dlr_aug, 'D') || NVL (dlr_sep, 'D') || NVL (dlr_oct, 'D') || NVL (dlr_nov, 'D') || NVL (dlr_dec, 'D') ) payment_history_profile FROM daybreak.accounts acc, daybreak.delinquency_ratings dr WHERE dlr_acc_acct_id = acc.ACC_ACCT_ID and dlr_acc_acct_id= '20090305975688' and dlr_year in (2011,2010,2009) ``` the output is ``` DLR_ACC_ACCT_ID,PAYMENT_HISTORY_PROFILE 20090305975688,DD0000000000 20090305975688,000000000000 20090305975688,0000000DDDDD ``` My expected output is ``` dlr_acc_acct_id payment_history_profile 20090305975688 DD00000000000000000000000000000DDDDD ```
If you are on 11gR2 and above, use **LISTAGG**: ``` SELECT DLR_ACC_ACCT_ID, listagg(PAYMENT_HISTORY_PROFILE, '') within GROUP( ORDER BY PAYMENT_HISTORY_PROFILE) PAYMENT_HISTORY_PROFILE FROM table_name GROUP BY DLR_ACC_ACCT_ID; ``` Use appropriate column in the `ORDER BY` clause.
Pushing what you have down into a subquery and adding the year for ordering listagg(), the following should work: ``` SELECT dlr_acc_acct_id , listagg(payment_history_profile,'') WITHIN GROUP (ORDER BY dlr_year) as full_history_profile FROM ( SELECT dlr_acc_acct_id ,dlr_year , ( NVL (dlr_jan, 'D') || NVL (dlr_feb, 'D') || NVL (dlr_mar, 'D') || NVL (dlr_apr, 'D') || NVL (dlr_may, 'D') || NVL (dlr_jun, 'D') || NVL (dlr_jul, 'D') || NVL (dlr_aug, 'D') || NVL (dlr_sep, 'D') || NVL (dlr_oct, 'D') || NVL (dlr_nov, 'D') || NVL (dlr_dec, 'D') ) payment_history_profile FROM daybreak.accounts acc, daybreak.delinquency_ratings dr WHERE dlr_acc_acct_id = acc.ACC_ACCT_ID and dlr_acc_acct_id= '20090305975688' and dlr_year in (2011,2010,2009) ) GROUP BY dlr_acc_acct_id ```
to combine the output of three rows in to single value in sql
[ "", "sql", "oracle", "" ]
I have two tables: `Products` and `CategoryProducts`. `CategoryProducts` contains: ``` PrdID Category ----------------- ``` `Products` contains: ``` PrdID Barcode Url ----------------------- ``` I have a product's Barcode value, for example 111111. > Need to select all Products.Url with the same Category as this item's > Category. Having Products.PrdID, I can get all needed PrdIDs from CategoryProducts like this: ``` select distinct c1.PrdID from CategoryProduct_MM c1 where c1.CategoryID in (select c2.CategoryID from CategoryProduct_MM c2 where c2.PrdID = 175) ```
Based on your comment, this is what you want: ``` SELECT t.Url FROM Products t INNER JOIN CategoryProducts s ON(s.PrdID = t.PrdID) WHERE s.CategoryID = (select p.categoryID from CategoryProducts p INNER JOIN Products f ON(p.prdID = f.prdID) WHERE f.barcode = 42244) ``` Selects all the URLS , that their users are in the same category as PrdID ->175
``` SELECT p.url FROM Products p JOIN CategoryProducts cp ON cp.PrdID = p.id // WHERE p.id = 175 GROUP BY p.url, p.Category ``` And if you need add comment query
Sqlite select statement from 2 tables
[ "", "sql", "sqlite", "" ]
I need some help with this. My **case** is: 1 - two parameters, date from and date to; 2 - a number-of-teams parameter that is manually entered by the user, for later use in some calculation. **Requirement**: count only working days (6 days per week, without Friday) based on the filtered period (date from and date to). **Code** ``` =(COUNT(IIF(Fields!Job_Status.Value="Closed",1,Nothing))) / ((DateDiff(DateInterval.day,Parameters!DateFrom.Value,Parameters!ToDate.Value )) * (Parameters!Number_of_teams.Value)) ``` **Note** this code is working fine but it counts all days. Thanks in advance
Try this: ``` =(DATEDIFF(DateInterval.Day, CDATE("2016-02-14"), CDATE("2016-02-17")) + 1) -(DATEDIFF(DateInterval.WeekOfYear, CDATE("2016-02-14"), CDATE("2016-02-17")) * 2) -(IIF(WeekdayName(DatePart(DateInterval.Weekday,CDATE("2016-02-14"),FirstDayOfWeek.System))="sunday",1,0) -(IIF(WeekdayName(DatePart(DateInterval.Weekday,CDATE("2016-02-17"),FirstDayOfWeek.System))="saturday",1,0) )) ``` It will return the count of Monday to Friday within the given range; in the above case it returns 3. For StartDate = 2016-02-14 and EndDate = 2016-02-21 it returns 5. **UPDATE:** Expression to exclude Friday from the count. ``` =(DATEDIFF(DateInterval.Day, Parameters!DateFrom.Value, Parameters!ToDate.Value) + 1) -(DATEDIFF(DateInterval.WeekOfYear, Parameters!DateFrom.Value, Parameters!ToDate.Value) * 1) -(IIF(WeekdayName(DatePart(DateInterval.Weekday,Parameters!ToDate.Value,FirstDayOfWeek.System))="friday",1,0)) ``` Tested with: ``` DateFrom ToDate Result 2016-02-12 2016-02-19 6 2016-02-12 2016-02-18 6 2016-02-12 2016-02-15 3 ``` It is very strange to me to see Saturday and Sunday as working days instead of Friday. Let me know if this helps you.
The most sustainable solution for this kind of question, in the long term, is to create a "[date dimension](https://www.mssqltips.com/sqlservertip/4054/creating-a-date-dimension-or-calendar-table-in-sql-server/)" aka "calendar table". That way any quirks in the classification of dates that don't conform to some neat mathematical pattern can be accommodated. If your government decides to declare date X a public holiday starting from next year, just add it to your public holidays column (attribute). If you want to group by say "work days, weekends, and public holidays" no need to reinvent the wheel, just add that classification to the calendar table and everyone has the benefit of it and you don't need to worry about inconsistency in calculation/classification. You might want the first or last working day of the month. Easy, filter by that column in the calendar table.
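For illustration, here is a minimal sketch of such a calendar table in T-SQL. The table and column names are invented for the example, and the working-day flag excludes Friday as in the question; adjust both to your own conventions. ``` -- Hypothetical date dimension; names are illustrative only CREATE TABLE dbo.Calendar ( CalendarDate DATE NOT NULL PRIMARY KEY, DayName VARCHAR(10) NOT NULL, IsWorkingDay BIT NOT NULL, -- 0 for Friday in this locale IsPublicHoliday BIT NOT NULL DEFAULT 0 ) -- Counting working days in a range then becomes a simple filter SELECT COUNT(*) FROM dbo.Calendar WHERE CalendarDate BETWEEN @DateFrom AND @ToDate AND IsWorkingDay = 1 AND IsPublicHoliday = 0 ``` Populate the table once (say, a few decades of dates) and every report gets the same classification for free.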
SSRS count working days only
[ "", "sql", "reporting-services", "ssrs-2012", "reportbuilder3.0", "" ]
Started BI Publisher about a week ago. When working on a new data model, about one or two queries in, I get this error when I try to save: ``` Failed to load servlet/res?s=%252F~developer1%252Ftest%252FJustin%2520Tests%252FOSRP%2520Information.xdm&desc=&_sTkn=9ba70c01152efbcb413. ``` I can no longer save my data model. I tried deleting my queries, logging in and out, turning machine off and on, but no luck. I'm currently resolved to saving all of my queries locally in notepad. I can create a whole new data model and it will save fine, but then after two or three queries the same thing happens. What's going on and why would anyone design such a confusing error message? Any help would be greatly appreciated.
After restarting your server, you won't get this issue. It happens sometimes due to a connection problem, so a restart should work for this. It resolved my problem.
None of the proposed solutions worked for me. I found out, on my own, that any unnecessary brackets around CASE in a select statement will cause this error. Remove the unnecessary brackets and the error goes away.
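For illustration, a pattern like the following reportedly causes the save to fail, while removing the outer brackets around the `CASE` fixes it (the table and column names here are invented for the example): ``` -- Fails to save: CASE wrapped in unnecessary brackets SELECT (CASE WHEN status = 'A' THEN 'Active' ELSE 'Other' END) AS status_label FROM orders -- Saves fine: same logic without the brackets SELECT CASE WHEN status = 'A' THEN 'Active' ELSE 'Other' END AS status_label FROM orders ```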
BI Publisher - Fail to load and save data model
[ "", "sql", "datamodel", "bi-publisher", "" ]
I have a result set which provide me 2 columns named `Sequence` and `CorrectAns` and it contains N rows(100 rows right now to be specific). Now what I want is to divide these 100 rows to N columns(right now into 4 columns). So, how to do that? Any help would be appreciated. [![enter image description here](https://i.stack.imgur.com/KMcGI.png)](https://i.stack.imgur.com/KMcGI.png) This is the result that i am getting. Now what I want is something like this: Seq ColA Seq ColB Seq ColC Seq ColD 1     C       4       A       7       C       10       D 2       A       5       C       8       A       11       C 3       A       6       A       9       C       12       A and so on. Hope this helps
What you want is to pivot your data. Aside from the `PIVOT` command, one way to do that is to use conditional aggregation: [**SQL Fiddle**](http://sqlfiddle.com/#!6/ce882/1/0) ``` ;WITH Cte AS( SELECT *, grp = (ROW_NUMBER() OVER(ORDER BY Seq) -1) % (SELECT CEILING(COUNT(*) / (4 * 1.0)) FROM tbl) FROM tbl ), CteFinal AS( SELECT *, rn = ROW_NUMBER() OVER(PARTITION BY grp ORDER BY Seq) FROM Cte ) SELECT SeqA = MAX(CASE WHEN rn = 1 THEN Seq END), ColA = MAX(CASE WHEN rn = 1 THEN CorrectAns END), SeqB = MAX(CASE WHEN rn = 2 THEN Seq END), ColB = MAX(CASE WHEN rn = 2 THEN CorrectAns END), SeqC = MAX(CASE WHEN rn = 3 THEN Seq END), ColC = MAX(CASE WHEN rn = 3 THEN CorrectAns END), SeqD = MAX(CASE WHEN rn = 4 THEN Seq END), ColD = MAX(CASE WHEN rn = 4 THEN CorrectAns END) FROM CteFinal GROUP BY grp ```
use the following query, ``` SELECT * FROM ( SELECT Seq, CorrectAns, gro FROM your_table ) as t PIVOT ( SUM(gro) FOR CorrectAns IN (A,B,C,D....) )AS pvt ```
Divide N rows to N columns
[ "", "sql", "sql-server", "" ]
Is it possible to insert multiple rows into one table using values from a select statement? An example: ``` INSERT INTO SomeTable (UserID, ModuleID, ModuleRights) VALUES (u.UserId, 1, 15), (u.UserId, 2, 1), (u.UserId, 4, 3), (u.UserId, 8, 7) SELECT * FROM Users u ```
Yes, but you need to be careful how you do it. In this case, it appears you want a `cross join`: ``` INSERT INTO SomeTable (UserID, ModuleID, ModuleRights) SELECT u.UserId, v.ModuleID, v.ModuleRights FROM Users u CROSS JOIN (VALUES (1, 15), (2, 1), (4, 3), (8, 7) ) v(ModuleID, ModuleRights); ```
``` INSERT INTO SomeTable (UserID, ModuleID, ModuleRights) SELECT u.UserID, u.ModuleID, u.ModuleRights FROM Users u; ``` If your `ModuleID` and `ModuleRights` are not part of the `users` table then insert nulls or dummy values and replace on needed condition.
Inserting multiple rows into some table using values from select statement
[ "", "sql", "sql-server", "" ]
I need to join data from two different tables as shown below. Is that possible with only one SQL query? If the "key" and "name" are on both tables, then those rows are identical. "Status" is always "-" on table1 and "T" on table2. Also "name" and "comp" match on both tables (example: Name3-C and Name4-B). I have tried with "union" and "join" but no solution yet! table1 ``` t1.name t1.time t1.comp t1.key t1.status name1 1 B 106 - name2 2 B - - name3 1 C 102 - name4 3 B 103 - name7 1 C 104 - ``` table2 ``` t2.name t2.time t2.comp t2.key t2.status name5 6 B 100 T name6 5 B - T name3 7 C 102 T name4 9 B 103 T ``` RESULT should be... ``` name time1 time2 t.comp t.key t.status name1 1 - B 106 - name2 2 - B - - name3 1 7 C 102 T name4 3 9 B 103 T name5 - 6 C 100 T name6 - 5 B - T name7 1 - C 104 - ``` Thanks in advance!
If I understand correctly, you want a `full outer join`: ``` select coalesce(t1.name, t2.name) as name, t1.time as time1, t2.time as time2, coalesce(t1.comp, t2.comp) as comp, coalesce(t1.key, t2.key) as key, t2.status from table1 t1 full outer join table2 t2 on t1.key = t2.key and t1.name = t2.name; ```
``` SELECT T1.time AS Time1, T2.Time AS Time2, ISNULL(T1.COMP,T2.Comp) AS Comp, ISNULL(T1.Key,T2.Key) FROM TABLE1 T1 FULL OUTER JOIN TABLE2 T2 ON T1.key = T2.key ```
SQL query, join two tables
[ "", "sql", "join", "union", "" ]
I have one query which is like this: ``` CREATE TABLE #Employee (Id INT, Name NVARCHAR(100), Status TINYINT) GO INSERT INTO #Employee ( Id, Name, Status) Values (1, 'Basavaraj Biradar', 0), (2, 'Shree Biradar', 0), (3, 'Kalpana Biradar', 0) GO DECLARE @LoopCounter INT = 1, @MaxEmployeeId INT = 10 , @EmployeeName NVARCHAR(100) WHILE(@LoopCounter <= @MaxEmployeeId) BEGIN SELECT @EmployeeName = (Name) FROM #Employee WHERE Id = @LoopCounter SET @LoopCounter = @LoopCounter + 1 PRINT @EmployeeName END ``` Giving Result: ``` Basavaraj Biradar Shree Biradar Kalpana Biradar Kalpana Biradar Kalpana Biradar Kalpana Biradar Kalpana Biradar Kalpana Biradar Kalpana Biradar Kalpana Biradar ``` Expected Output: ``` 1 Basavaraj Biradar 2 Shree Biradar 3 Kalpana Biradar 4 Basavaraj Biradar 5 Shree Biradar 6 Kalpana Biradar 7 Basavaraj Biradar 8 Shree Biradar 9 Kalpana Biradar 10 Basavaraj Biradar 11 Shree Biradar 12 Kalpana Biradar ```
This can be done without a `WHILE` loop or a `CURSOR`. All you need is a [Tally Table](https://stackoverflow.com/questions/32096103/selecting-n-rows-in-sql-server/32096374#32096374): [**SQL Fiddle**](http://sqlfiddle.com/#!6/292b8/1/0) ``` DECLARE @MaxEmployeeId INT = 10; DECLARE @N INT; SELECT @N = CEILING(@MaxEmployeeId/(COUNT(*)*1.0)) FROM #Employee; ;WITH E1(N) AS( SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N) ), E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), CteTally(N) AS( SELECT TOP(@N) ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) FROM E4 ), CteFinal AS( SELECT *, Rn = ROW_NUMBER() OVER(ORDER BY N, Id) FROM #Employee e CROSS JOIN CteTally t ) SELECT Rn, Name FROM CteFinal WHERE Rn <= @MaxEmployeeId ORDER BY Rn ``` --- If it's only the number of times the records should be repeated: [**SQL Fiddle**](http://sqlfiddle.com/#!6/292b8/2/0) ``` DECLARE @RepeatTimes INT = 4 ;WITH E1(N) AS( SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N) ), E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), CteTally(N) AS( SELECT TOP(@RepeatTimes) ROW_NUMBER() OVER(ORDER BY(SELECT NULL)) FROM E4 ) SELECT Rn = ROW_NUMBER() OVER(ORDER BY N, Id), e.Name FROM Employee e CROSS JOIN CteTally t ORDER BY Rn ``` Result: ``` | Rn | Name | |----|-------------------| | 1 | Basavaraj Biradar | | 2 | Shree Biradar | | 3 | Kalpana Biradar | | 4 | Basavaraj Biradar | | 5 | Shree Biradar | | 6 | Kalpana Biradar | | 7 | Basavaraj Biradar | | 8 | Shree Biradar | | 9 | Kalpana Biradar | | 10 | Basavaraj Biradar | | 11 | Shree Biradar | | 12 | Kalpana Biradar | ```
You will need two loops. The outer loop defines the number of repetitions, while the inner loop will print all available names from the table. Try this - ``` CREATE TABLE #Employee (Id INT, Name NVARCHAR(100), Status TINYINT) GO INSERT INTO #Employee ( Id, Name, Status) Values (1, 'Basavaraj Biradar', 0), (2, 'Shree Biradar', 0), (3, 'Kalpana Biradar', 0) GO DECLARE @OuterLoopCounter INT = 1, @InnerLoopCounter INT=1, @MaxEmployeeID INT, @EmployeeName nvarchar(100) WHILE @OuterLoopCounter <= 4 -- Change to whatever value BEGIN SELECT @MaxEmployeeID = MAX(ID) FROM #Employee WHILE @InnerLoopCounter <= @MaxEmployeeID BEGIN SELECT @EmployeeName = Name FROM #Employee WHERE Id = @InnerLoopCounter PRINT @EmployeeName SET @InnerLoopCounter = @InnerLoopCounter + 1 END SET @InnerLoopCounter = 1 SET @OuterLoopCounter =@OuterLoopCounter +1 END ```
How to update incrementally the data into another table using loops?
[ "", "sql", "sql-server", "t-sql", "" ]
Basic SQL question but I have a mind blank. I have a table with the following setup: ``` date eventType ----------------------- 01/01/2016 0 02/01/2016 0 03/01/2016 2 03/01/2016 2 04/01/2016 6 04/01/2016 6 04/01/2016 6 04/01/2016 6 05/01/2016 0 06/01/2016 ... ``` I want to return the "next set of events where eventType<>0" So, if "today" was 01/01/2016, the query would return: ``` 03/01/2016 2 03/01/2016 2 ``` If "today" was 03/01/2016, the query would return: ``` 04/01/2016 6 04/01/2016 6 04/01/2016 6 04/01/2016 6 ``` Etc. Many thanks
I have a different solution, check this: ``` DECLARE @dtEventType DATE = '20160101' DECLARE @table TABLE ( cDate DATE , eventType TINYINT ) INSERT INTO @table VALUES( '20160101' , 0 ) , ( '20160102' , 0 ) , ( '20160103' , 2 ) , ( '20160103' , 2 ) , ( '20160104' , 6 ) , ( '20160104' , 6 ) , ( '20160104' , 6 ) , ( '20160104' , 6 ) , ( '20160105' , 0 ) SELECT * FROM @table L WHERE cDate = ( SELECT MIN( cDate ) AS mnDate FROM @table WHERE eventType <> 0 AND cDate > @dtEventType ) ``` But I liked the @GordonLiff's 3rd solution .
Hmmm. I think this may be a bit trickier than it seems. This does what you want for the data in the question: ``` select e.* from events e cross join (select top 1 eventType from events where date > getdate() and eventType <> 0 order by date ) as nexte where e.date > getdate() and e.eventType = nexte.eventType; ``` Or, perhaps a better fit: ``` select e.* from events e cross join (select top (1) e.* from events where date > getdate() and eventType <> 0 order by date ) as nexte where e.date > nexte.date and e.eventType = nexte.eventType; ``` Or, more simply: ``` select top (1) with ties e.* from events e where date > getdate() and eventType <> 0 order by date, eventType ```
SQL Server query return next date with an event
[ "", "sql", "sql-server", "" ]
I would like to know what options we have to avoid dynamic queries. For example: ``` IF @Mail <> '' BEGIN SELECT @Where = @Where + ' AND Mail = @Mail ' END ELSE IF @Phone <> '' BEGIN SELECT @Where = @Where + ' AND Phone like ''%'' + @Phone ' END ``` I would like not to do this; I would like to avoid dynamic queries, if someone can help me. By the way, I want to filter by `Mail`, but if `Mail` does not exist then I have to filter by `Phone`, but never by both.
You can try this ``` AND ((ISNULL(@Mail,'') = '' or mail = @mail) OR (ISNULL(@Phone,'') = '' or Phone like '%' + @Phone)) ```
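Since the question asks for `Mail` to take precedence over `Phone` (never filtering by both), here is a rough sketch of how that precedence could be written as a static predicate (the table name `Contacts` is assumed for the example): ``` SELECT * FROM Contacts -- hypothetical table name WHERE (ISNULL(@Mail, '') <> '' AND Mail = @Mail) OR (ISNULL(@Mail, '') = '' AND ISNULL(@Phone, '') <> '' AND Phone LIKE '%' + @Phone) OR (ISNULL(@Mail, '') = '' AND ISNULL(@Phone, '') = '') ``` The last branch returns all rows when neither parameter is supplied; drop it if an empty filter should return nothing.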
If you are going for performance, the best solution is to make separate queries for each of the cases. Gordon's solution is fine, but SQL Server will not use any indexes you may want to use on the columns you filter by. It can be enhanced to use indexes by adding OPTION(RECOMPILE), but this will cause the query to be recompiled each time it is run. It will considerably improve performance if your table has a lot of rows (and you define indexes on the columns), but decrease performance for tables with few rows (or without indexes) - it will have basically the same performance as a dynamic query.
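As a rough illustration of the `OPTION (RECOMPILE)` pattern being described (the table name is assumed for the example): ``` SELECT * FROM Contacts -- hypothetical table name WHERE (@Mail = '' OR Mail = @Mail) AND (@Phone = '' OR Phone LIKE '%' + @Phone) OPTION (RECOMPILE) -- lets the optimizer see the actual parameter values ``` With the hint, the plan is built per execution for the concrete parameter values, so an index on `Mail` or `Phone` can be used when the corresponding branch applies.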
SQL Server alternative to dynamic query
[ "", "sql", "sql-server", "dynamic-sql", "" ]
I have this Oracle table which I want to clean from time to time when I reach 2000 rows of data: ``` CREATE TABLE AGENT_HISTORY( EVENT_ID INTEGER NOT NULL, AGENT_ID INTEGER NOT NULL, EVENT_DATE DATE NOT NULL ) / ``` How can I delete the oldest rows from the table when the table reaches 2000 rows?
You can delete all but the newest 2000 rows with the following query: ``` DELETE FROM agent_history a WHERE 2000 < ( SELECT COUNT(1) cnt FROM agent_history b WHERE b.event_date > a.event_date ) ``` The query checks every row in the table (a) to see how many rows have an event_date GREATER than that row, i.e. how many rows are newer. If there are more than 2000 rows newer than it, then it will delete that row. Let me know if this doesn't work.
Create a DBMS\_JOB or DBMS\_SCHEDULER, that kicks off after certain interval and call a procedure. In that procedure check the count and delete the rows based on event\_date. --- Sorry, I didn't see your comment until now. Here is the code you were looking for. Make sure you have the grants to create scheduler program and jobs. This code assumes that the event\_id is a sequence of #s and keeps up with the event\_date. Otherwise change the rank based on both time and id or of your choice. Also you can change time interval. Check DBMS\_SCHEDULER package documentation for any errors and corrections. ``` create or replace procedure proc_house_keeping is begin delete from ( select rank() over (order by event_id desc) rnk from agent_history ) where rnk > 2000; commit; end; / begin dbms_scheduler.create_program( program_name => 'PROG_HOUSE_KEEPING', program_type => 'STORED_PROCEDURE', program_action => 'PROC_HOUSE_KEEPING', number_of_arguments => 0, enabled => FALSE, comments => 'Procedure to delete rows greater than 2000'); end; / begin dbms_scheduler.create_job( job_name => 'table_house_keeping', program_name => 'PROG_HOUSE_KEEPING', start_date => dbms_scheduler.stime, repeat_interval => 'FREQ=MINUTELY;INTERVAL=1', end_date => dbms_scheduler.stime+1, enabled => false, auto_drop => false, comments => 'table house keeping, runs every minute'); end; / ```
Delete Oracle rows based on size
[ "", "sql", "oracle", "oracle11g", "oracle10g", "" ]
I want to select rows where one particular column has different values while another particular column has the same value. Example: ``` COLUMN_A | COLUMN_B __________|___________ | 1 | 2002 1 | 2002 2 | 2001 2 | 2007 3 | 2010 3 | 2010 ``` Now, suppose I want to know which rows have the same A but different B; the query would return the rows ``` 2 | 2001 2 | 2007 ``` or just ``` 2 ``` as long as I know which one it is ...
This is the case for Count(**Distinct** *ColumnName*). It ensures that only unique values are taken into account. ``` With Src As ( Select * From (Values (1, 2002), (1, 2002), (2, 2001), (2, 2007), (3, 2010), (3, 2010) ) V (COLUMN_A, COLUMN_B) ) Select * From Src Where COLUMN_A In ( Select COLUMN_A From Src Group By COLUMN_A Having Count(Distinct COLUMN_B) > 1 --<- "More than one unique value" condition ) ``` --- ``` COLUMN_A COLUMN_B 2 2001 2 2007 ```
You can use: ``` SELECT COLUMN_A FROM dbo.YourTable GROUP BY COLUMN_A HAVING MIN(COLUMN_B) <> MAX(COLUMN_B); ``` Another way can be using `EXISTS`: ``` SELECT * FROM dbo.YourTable A WHERE EXISTS(SELECT 1 FROM dbo.YourTable WHERE COLUMN_A = A.COLUMN_A AND COLUMN_B <> A.COLUMN_B); ```
SQL select row where (this column has many different values in the other)
[ "", "sql", "sql-server", "" ]
Hello, I need to get the countries with the highest counts from the DB. Here is how it looks: ``` id name country 1 fsdf Sweden 2 dfdf Brazil 3 fgfg Sweden 4 gfgg Germany 5 fgfg Germany 6 fgfg Poland 7 fdff Germany 8 iuii Brazil 9 tyyt Sweden 10 tyut Sweden 11 gfgf Germany 12 fdff Holland ``` And I want output from this, ordered from the highest count, like: ``` 1 Germany count 4 2 Sweden count 4 3 Brazil count 2 4 Poland count 1 5 Holland count 1 ``` I tried something like this but it's not working ``` $top5 = $mysqli -> query ("SELECT top 5 country, count(*) from list_ots group by country order by country desc"); while ($row = mysql_fetch_assoc($top5)) { echo $row["country"]; } mysql_free_result($result); ```
The correct syntax in MySQL uses `LIMIT`, not `TOP`: ``` select country, count(*) as cnt from list_ots group by country order by cnt desc limit 5; ```
Use must use `LIMIT` to get top 5: ``` SELECT country, count(*) as count FROM list_ots GROUP BY country ORDER by count desc LIMIT 5; ```
Selecting top 5 most count countries in SQL
[ "", "mysql", "sql", "database", "" ]
I am running into a problem where a part of one of my triggers is failing. This failure is causing the transaction that wraps the trigger to roll back. The problem is that it is not raising an error to the invoke point of the command. It is acting like there was no error within the transaction, and the only way it is caught is that the data that should have been changed isn't. In my example there are many changes in the transaction. The error occurs in the trigger and everything is rolled back; however, the caller of the command doesn't see a SQL exception. The caller is never notified that there was an issue. Is there any way to instruct T-SQL to throw the exception in a way that it will report an error to the invoke point of the command? ``` CREATE TABLE Archive ( aColumn INT NOT NULL ) CREATE TABLE Source ( aColumn INT NULL ) -- CREATE TRIGGER Archive_Trigger ON Source AFTER UPDATE AS BEGIN INSERT INTO Archive SELECT DELETED.aColumn FROM DELETED END -- Other attempts CREATE TRIGGER Archive_Trigger ON Source AFTER UPDATE AS BEGIN BEGIN TRY INSERT INTO Archive SELECT DELETED.aColumn FROM DELETED END TRY BEGIN CATCH SET XACT_ABORT ON; DECLARE @ErrorMessage NVARCHAR(4000) ; SET @ErrorMessage = ERROR_MESSAGE() ; RAISERROR('Error %s occurred in Archive Trigger', 16, 1, @ErrorMessage) ; END CATCH END ``` This is a really naive version of what the trigger is doing. The problem isn't happening on a production system; where it is happening is in a development environment. When the /Source/ table changes, the /Archive/ table must also change. From time to time the archive is forgotten, and that is when the error occurs. For example, if aColumn was not null in Archive and null in Source, that would cause the problem. After having a row in the table with a null aColumn, if I execute the following code, the result I expect is to get an exception. I don't.
``` DataSource source; Connection connection = source.getConnection(); PreparedStatement statement = connection.prepareStatement("UPDATE Source SET aColumn = NULL"); statement.executeUpdate(); ```
The `RAISERROR` statement is not a severe error that halts processing. In fact, [MSDN says](https://msdn.microsoft.com/en-us/library/ms188792(v=sql.110).aspx): > The THROW statement honors SET XACT\_ABORT. RAISERROR does not. New > applications should use THROW instead of RAISERROR. You can verify this behavior by running the following SQL: ``` set xact_abort on begin tran raiserror('error 1', 16, 1, null) raiserror('error 2', 16, 1, null) ``` The output will be: ``` Msg 50000, Level 16, State 1, Line 3 error 1 Msg 50000, Level 16, State 1, Line 4 error 2 ``` Any attempt to raise a severe error will fail unless you are a sysadmin: > Msg 2754, Level 16, State 1, Line 8 Error severity levels greater than > 18 can only be specified by members of the sysadmin role, using the > WITH LOG option. I do think you should use SET XACT\_ABORT ON, but either: 1) Do a `SET XACT_ABORT ON` during the actual `INSERT`. This will cause the original error to terminate the connection. For example: ``` CREATE TRIGGER Archive_Trigger ON Source AFTER UPDATE AS BEGIN SET XACT_ABORT ON INSERT INTO Archive SELECT DELETED.aColumn FROM DELETED END ``` 2) Alternatively, you could cause a more severe error in the `BEGIN CATCH` statement. For example, cause a divide by 0 error with a `SELECT 1/0`. The client will have to ignore the divide by zero error to get the error message you raised. Note that [the original SET XACT\_ABORT setting will be restored after leaving the trigger](https://msdn.microsoft.com/en-us/library/ms190356(v=sql.110).aspx) (search for "trigger"). When the trigger does cause an error that cancels the trigger, SQL Server will raise the error 3609 to the client with the text: > The transaction ended in the trigger. The batch has been aborted. We had this specific issue happen in our application. A transaction was killed inside a trigger. The client application did not receive the error and continued onward outside of a transaction. Very bad things happened at this point. 
We solved this by having our data access object (DAO) detect when the current database connection is closed, broken, or the transaction is null. In this case, if a SQL error was returned in a SqlException.Errors with a number of 3609, the DAO throws a specific "the transaction ended in the trigger" exception.
Some exceptions have [severity](https://msdn.microsoft.com/en-us/library/ms164086.aspx) that is not enough to break execution of current batch. It just goes on and performs subsequent statements. If I'm not mistaken, constraint violation is not a batch-breaking error for example. To ensure whole batch breaks on noticeable error you may set [XACT\_ABORT](https://msdn.microsoft.com/en-us/library/ms188792(v=sql.120).aspx) option to `ON` which also will rollback *entire transaction* if established. Best way to manage exception is to surround your code with TRY-CATCH block which is [supposed](https://msdn.microsoft.com/en-us/library/ms175976(v=sql.120).aspx) to react on all errors with severity higher than info messages and lower than errors breaking connection. After falling into CATCH block (which can not be avoided unless you are intentionally silencing exceptions) you may throw another user-defined exception, rollback and so on, as you know it yourself. TRY-CATCH guaranties that your code will react on exceptions if any. Enabling `XACT_ABORT` inside of `TRY-CATCH` is not recommended because it's a bit senseless: you are attempting to *control* behavior of your code in case of exception (with try-catch) and at the same time telling to server *"abort'em all!"* (with xact\_abort).
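Putting that together, here is a sketch of the trigger from the question using TRY-CATCH with a re-raise (`THROW` requires SQL Server 2012 or later): ``` CREATE TRIGGER Archive_Trigger ON Source AFTER UPDATE AS BEGIN BEGIN TRY INSERT INTO Archive SELECT DELETED.aColumn FROM DELETED END TRY BEGIN CATCH -- Re-raise the original error with its severity and state, -- so the caller's batch receives it instead of a silent rollback THROW; END CATCH END ``` A parameterless `THROW` inside the CATCH block propagates the original constraint-violation error out of the trigger, so the client sees the failure rather than an apparently successful statement whose changes quietly disappeared.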
TSQL: Prevent trigger suppressing error but rolling back transaction
[ "", "sql", "sql-server", "hibernate", "t-sql", "" ]
I'm using SSRS/SSDT in Visual Studio 2015 and SQL Server 2014. There's a bug that's been present for > 8 years where [you can't select multiple columns from different tables that have the same name](https://stackoverflow.com/questions/14466874/in-ssrs-why-do-i-get-the-error-item-with-same-key-has-already-been-added-wh). To get around this, I need to use a subquery. Every single answer I find rewrites the given query to remove the subquery, which would normally be great but **is not applicable in this case**. How do I pass a parameter to a subquery in SQL Server? Column aliases do not work with this bug--Using `AS` returns an unknown column error on the "duplicate" columns even though it works with all others. The last two lines in the `SELECT` clause work because the values are being queried so the report can use them, but the remainder of the actual query doesn't use them. Here's my current code (doesn't work because the subquery returns multiple rows). ``` SELECT t.[Description], t.RequestedCompletionDate, t.CommitDate, t.StatusId, t.PriorityId, p.ProjectNumber, s.Name AS StatusDescription, pr.Name AS PriorityDescription FROM ProjectTask t inner join Project p on p.Id = t.ProjectId inner join Project_TaskStatus s on s.Id = t.StatusId inner join Project_Priority pr on pr.Id = t.PriorityId WHERE t.Type = 'ET' AND t.StatusId NOT IN (4,7) AND ( SELECT StatusId FROM Project -- WHERE ? ) NOT IN (3, 4) ORDER BY t.PriorityId, t.CommitDate, t.RequestedCompletionDate ``` This is the code with aliases as requested in the comments. 
It throws an error: ``` SELECT t.[Description], t.RequestedCompletionDate, t.CommitDate, t.StatusId AS TaskStatusId, t.PriorityId, p.ProjectNumber, p.StatusId AS ProjectStatusId, s.Name AS StatusDescription, pr.Name AS PriorityDescription FROM ProjectTask t inner join Project p on p.Id = t.ProjectId inner join Project_TaskStatus s on s.Id = TaskStatusId inner join Project_Priority pr on pr.Id = t.PriorityId WHERE t.Type = 'ET' AND TaskStatusId NOT IN (4,7) AND ProjectStatusId NOT IN (3,4) ORDER BY t.PriorityId, t.CommitDate, t.RequestedCompletionDate -- Invalid column name 'TaskStatusId'. -- Invalid column name 'TaskStatusId'. -- Invalid column name 'TaskStatusId'. -- Invalid column name 'ProjectStatusId'. -- Invalid column name 'ProjectStatusId'. ``` The ideal code is below, but it throws the error `An item with the same key has already been added`, which is the error that SSRS/SSDT throws when trying to return multiple columns of the same name. ``` SELECT t.[Description], t.RequestedCompletionDate, t.CommitDate, t.StatusId, t.PriorityId, p.ProjectNumber, p.StatusId, s.Name AS StatusDescription, pr.Name AS PriorityDescription FROM ProjectTask t inner join Project p on p.Id = t.ProjectId inner join Project_TaskStatus s on s.Id = t.StatusId inner join Project_Priority pr on pr.Id = t.PriorityId WHERE t.Type = 'ET' AND t.StatusId NOT IN (4,7) AND p.StatusId NOT IN (3,4) ORDER BY t.PriorityId, t.CommitDate, t.RequestedCompletionDate ```
I feel very stupid for this, but apparently it works to just call `p.Id` in the subquery. It knows that I'm referencing the parent query's value even though it's in the subquery. Thank you everyone for your help. ``` SELECT t.[Description], t.RequestedCompletionDate, t.CommitDate, t.StatusId, t.PriorityId, p.ProjectNumber, s.Name AS StatusDescription, pr.Name AS PriorityDescription FROM ProjectTask t inner join Project p on p.Id = t.ProjectId inner join Project_TaskStatus s on s.Id = t.StatusId inner join Project_Priority pr on pr.Id = t.PriorityId WHERE t.Type = 'ET' AND t.StatusId NOT IN (4,7) AND ( SELECT StatusId FROM Project WHERE Id = p.Id ) NOT IN (3, 4) ORDER BY t.PriorityId, t.CommitDate, t.RequestedCompletionDate ```
> current code (doesn't work because the subquery returns multiple > rows). So instead of this ``` AND ( SELECT StatusId FROM Project -- WHERE ? ) NOT IN (3, 4) ``` You could do ``` AND EXISTS ( SELECT 1 FROM Project p2 WHERE p2.StatusId IN (3, 4) AND p2.Id = p.Id ) ```
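Both answers rely on the same mechanism: a subquery can reference a column of the outer query's alias, turning it into a correlated subquery that is evaluated once per outer row. Here is a minimal runnable sketch of that mechanism using Python's `sqlite3`, with simplified hypothetical tables and ids (not the asker's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Project (Id INTEGER, StatusId INTEGER);
CREATE TABLE ProjectTask (Id INTEGER, ProjectId INTEGER);
INSERT INTO Project VALUES (1, 2), (2, 3);
INSERT INTO ProjectTask VALUES (10, 1), (11, 2);
""")
# The inner SELECT uses p.Id from the outer query, so it returns exactly
# one StatusId per task row instead of "multiple rows".
rows = conn.execute("""
    SELECT t.Id
    FROM ProjectTask t
    JOIN Project p ON p.Id = t.ProjectId
    WHERE (SELECT StatusId FROM Project WHERE Id = p.Id) NOT IN (3, 4)
""").fetchall()
task_ids = [r[0] for r in rows]
```

Task 10 belongs to a project with status 2 and survives the filter; task 11's project has status 3 and is excluded.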
Pass parameter to subquery
[ "", "sql", "sql-server", "reporting-services", "" ]
could you please help me to get a solution for my issue. I was using sybase database. I have two table trans\_status, trans\_ref trans\_status :- ``` | Corre_ID | Pro_type | Desc | Datetime | |----------|----------|-----------|-------------------------| | ABC_01 | Books | Started | 17/02/2016 00:17:18.963 | | ABC_01 | Books | Inprocess | 17/02/2016 00:18:18.963 | | ABC_01 | Books | Finished | 17/02/2016 00:19:18.963 | | ABC_02 | XXXX | Started | 16/02/2016 00:17:18.963 | | ABC_02 | XXXX | Inprocess | 16/02/2016 00:18:18.963 | | ABC_02 | XXXX | Finished | 16/02/2016 00:19:18.963 | | ABC_03 | yyyy | Started | 15/02/2016 00:17:18.963 | | ABC_03 | yyyy | Inprocess | 15/02/2016 00:18:18.963 | | ABC_03 | yyyy | Finished | 15/02/2016 00:19:18.963 | | ABC_04 | zzzz | Started | 14/02/2016 00:19:18.963 | ``` trans\_ref :- ``` | Payment_ID | Corre_ID | |------------|----------| | 1111 | ABC_01 | | 2222 | ABC_02 | | 3333 | ABC_03 | | 4444 | ABC_04 | ``` Desired Output :- ``` Corre_ID-----Payment_ID-----StartDate-----EndDate-----Response Time in Hours ABC_01-----1111-----17/02/2016 00:17:18.963-----17/02/2016 00:19:18.963-----1 ABC_02-----2222-----16/02/2016 00:17:18.963-----16/02/2016 00:19:18.963-----1 ABC_03-----3333-----15/02/2016 00:17:18.963-----15/02/2016 00:19:18.963-----1 ABC_04-----4444-----14/02/2016 00:19:18.963-----EMPTY-----EMPTY ``` could you please help me to build an sql query please. Here my description was not standard all the time.
So what you want is this: ``` SELECT t.Corre_ID,s.Payment_ID, max(case when t.Desc = 'Started' then t.Datetime end) as startDate, max(case when t.Desc = 'Finished' then t.Datetime end) as startDate, TIMESTAMPDIFF(HOUR,max(case when t.Desc = 'Started' then t.Datetime end), max(case when t.Desc = 'Inprocess' then t.Datetime end)) as response FROM trans_status t INNER JOIN trans_ref s ON (t.Corre_ID = s.Corre_ID) GROUP BY t.Corre_ID,s.Payment_ID ``` If I understood correctly, then the response time is the difference between start\_date and inprocess date
You must join the trans\_status Table twice to get the start and end date. Since the end date may be missing you have to make the second join a left join. Try the following query: ``` SELECT r.Corre_ID, Payment_ID, s_start.Datetime AS StartDate, s_end.Datetime AS EndDate, TIMEDIFF(s_end.Datetime, s_start.Datetime) AS 'Response Time in Hours' FROM trans_ref r JOIN trans_status s_start ON (r.Corre_ID = s_start.Corre_ID AND s_start.Desc = 'Started') LEFT JOIN trans_status s_end ON (r.Corre_ID = s_end.Corre_ID AND s_end.Desc = 'Finished') ``` If the process is not finished, the fields EndDate and 'Response Time in Hours' will be NULL.
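Neither `TIMESTAMPDIFF` nor `TIMEDIFF` exists in Sybase (they are MySQL functions; Sybase would use `datediff`), but the join-twice shape of the second answer is portable. Below is a hedged sqlite3 sketch of that pattern, with `julianday` arithmetic standing in for the hour difference and the reserved word `Desc` renamed to `descr`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trans_ref (payment_id TEXT, corre_id TEXT);
CREATE TABLE trans_status (corre_id TEXT, descr TEXT, dt TEXT);
INSERT INTO trans_ref VALUES ('1111', 'ABC_01'), ('4444', 'ABC_04');
INSERT INTO trans_status VALUES
  ('ABC_01', 'Started',  '2016-02-17 00:17:18'),
  ('ABC_01', 'Finished', '2016-02-17 00:19:18'),
  ('ABC_04', 'Started',  '2016-02-14 00:19:18');
""")
# Inner join for the mandatory 'Started' row, left join for the
# optional 'Finished' row; unfinished processes keep NULL end values.
rows = conn.execute("""
    SELECT r.corre_id,
           s.dt AS start_dt,
           e.dt AS end_dt,
           (julianday(e.dt) - julianday(s.dt)) * 24 AS hours
    FROM trans_ref r
    JOIN trans_status s ON s.corre_id = r.corre_id AND s.descr = 'Started'
    LEFT JOIN trans_status e ON e.corre_id = r.corre_id AND e.descr = 'Finished'
    ORDER BY r.corre_id
""").fetchall()
```

ABC_01 yields both dates plus a fractional-hour difference; ABC_04 has no 'Finished' row, so its end date and response time come back as NULL.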
How to get start/end date from single date column from an sql query
[ "", "mysql", "sql", "sybase", "" ]
I've had a problem in MySQL for a few days and I can't solve it (I'm pretty bad with DBs), and I hope you can help me :) I'm going to simplify the problem so you can see where I'm stuck! I have two tables: ``` current_stock (article_id, ...) stock_record (id, article_id) ``` current\_stock describes the actual stock, and stock\_record the articles which have already been inventoried. The problem comes when I want to get the articles left to inventory (those in current\_stock but not in stock\_record), and when I have multiple rows with the same article\_id. I tried: ``` SELECT * FROM current_stock WHERE NOT EXISTS ( SELECT * FROM stock_record) ``` And other forms like: ``` SELECT * FROM current_stock LEFT JOIN (SELECT * FROM stock_record) second WHERE second.article_id IS NULL ``` BUT, when there are multiple lines with the same article\_id in current\_stock, AND there is one line with this article\_id in stock\_record, I'm not getting the right result! For example: ``` current_stock (article_id) 8315 8315 8315 stock_record (id, article_id) 15, 8315 ``` I want the result to have two rows with the article\_id 8315! (Sorry for my bad English, hope you guys understand me!) EDIT: With these forms I'm getting nothing in the result, because the "NOT EXISTS" or the "LEFT JOIN" is excluding ALL the lines from current\_stock with this article\_id. I would like to exclude only as many lines as there are in stock\_record. For example: I have 10 articles in my current\_stock (with article\_id = 8315). I have 3 articles already scanned in my stock\_record (with article\_id = 8315). I want the result to have 7 lines of this article\_id.
You can use variables to enumerate records. Then join on row number and ignore rows having a match in `stock_record`: ``` SELECT t1.* FROM ( SELECT *, @rn1 := IF(@aid = article_id, @rn1 + 1, IF(@aid := article_id, 1, 1)) AS rn1 FROM current_stock CROSS JOIN (SELECT @rn1 := 0, @aid := 0) AS vars ORDER BY article_id) AS t1 LEFT JOIN ( SELECT article_id, @rn2 := IF(@aid = article_id, @rn2 + 1, IF(@aid := article_id, 1, 1)) AS rn2 FROM stock_record CROSS JOIN (SELECT @rn2 := 0, @aid := 0) AS vars ORDER BY article_id ) AS t2 ON t1.article_id = t2.article_id AND t1.rn1 = t2.rn2 WHERE t2.article_id IS NULL ```
Your correct exists() query: ``` SELECT * FROM current_stock t WHERE NOT EXISTS (SELECT * FROM stock_record s where t.article_id = s.article_id) ``` The problem in your query, that you need to check if a **specific** id not exists in stock\_record, and not if any id not exists there. And your correct left join query: ``` SELECT * FROM current_stock s LEFT JOIN stock_record t on( t.article_id = s.article_id) WHERE t.id IS NULL ``` The problem with your left join query is that you didn't specify a relation condition (see the ON()) so, it didn't know which row to join to which row. You can also use NOT IN like this: ``` SELECT * FROM current_stock t WHERE article_id NOT IN(SELECT article_id FROM stock_record) ```
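Stated abstractly, the requirement in the EDIT is a multiset difference: remove one `current_stock` row per matching `stock_record` row and keep the surplus. A small Python sketch of that semantics (hypothetical article ids), independent of how it is later expressed in SQL:

```python
from collections import Counter

def remaining_to_scan(current_stock, stock_record):
    """Multiset difference: drop one stock row per matching scanned row."""
    return sorted((Counter(current_stock) - Counter(stock_record)).elements())

# 10 units of article 8315 in stock and 3 already scanned -> 7 remain;
# article 4000 was never scanned, so it stays too.
left = remaining_to_scan([8315] * 10 + [4000], [8315] * 3)
```

`Counter` subtraction drops at most as many occurrences as were scanned, which is exactly the 10 - 3 = 7 behavior asked for.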
Mysql : How to find non duplicate rows with left join?
[ "", "mysql", "sql", "database", "join", "exists", "" ]
I'm building code that learns tic-tac-toe by saving info in a database. I have two tables, `Games(ID,Winner)` and `Turns(ID,Turn,GameID,Place,Shape)`. I want to find a parent row by multiple child conditions. For example: ``` SELECT GameID FROM Turns WHERE GameID IN (WHEN Turn = 1 THEN Place = 1) AND GameID IN (WHEN Turn = 2 THEN Place = 4); ``` Is something like this possible? I'm using MS Access. > Turn - Game turn > GameID - Game ID > Place - Place on the matrix > 1 = top right, 9 = bottom left > Shape - X or circle Thanks in advance
This very simple query will do the trick in a single scan, and doesn't require you to violate **First Normal Form** by storing multiple values in a string (shudder). ``` SELECT T.GameID FROM Turns AS T WHERE (T.Turn = 1 AND T.Place = 1) OR (T.Turn = 2 AND T.Place = 4) GROUP BY T.GameID HAVING Count(*) = 2; ``` There is no need to join to determine this information, as is suggested by other answers. **Please use proper database design principles in your database, and don't violate First Normal Form by storing multiple values together in a single string!**
The general solution to your problem can be accomplished by using a sub-query that contains a self-join between two instances of the Turns table: ``` SELECT * FROM Games WHERE GameID IN ( SELECT Turns1.GameID FROM Turns AS Turns1 INNER JOIN Turns AS Turns2 ON Turns1.GameID = Turns2.GameID WHERE ( (Turns1.Turn=1 AND Turns1.Place = 1) AND (Turns2.Turn=2 AND Turns2.Place = 4)) ); ``` The Self Join between Turns (aliased Turns1 and Turns2) is key, because if you just try to apply both sets of conditions at once like this: ``` WHERE ( (Turns.Turn=1 AND Turns.Place = 1) AND (Turns.Turn=2 AND Turns.Place = 4)) ``` you will never get any rows back. This is because in your table there is no way for an individual row to satisfy both conditions at the same time. My experience using Access is that to do a complex query like this you have to use the SQL View and type the query in on your own, rather than use the Query Designer. It may be possible to do in the Designer, but it's always been far easier for me to write the code myself.
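Both formulations are easy to check against a toy dataset. A runnable sqlite3 sketch of the GROUP BY / HAVING version (simplified columns; game 2 satisfies only one of the two turn conditions, so it drops out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Turns (GameID INTEGER, Turn INTEGER, Place INTEGER);
INSERT INTO Turns VALUES (1, 1, 1), (1, 2, 4), (2, 1, 1), (2, 2, 5);
""")
# A game qualifies only when BOTH (turn, place) conditions matched,
# i.e. two distinct rows of that game satisfied the OR filter.
games = [r[0] for r in conn.execute("""
    SELECT GameID
    FROM Turns
    WHERE (Turn = 1 AND Place = 1) OR (Turn = 2 AND Place = 4)
    GROUP BY GameID
    HAVING COUNT(*) = 2
""")]
```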
Selecting rows from Parent Table only if multiple rows in Child Table match
[ "", "sql", "ms-access", "" ]
I have the below Table Table1 ``` Emp ID | Emp Name 001 | ABC 002 | DEF 003 | GHI 004 | ABC 005 | XYZ ``` I am trying to get EMP ID and Emp Name where Emp Name is same but Emp ID is different. There is primary key in the table Here the output will be ``` Emp ID | Emp Name 001 | ABC 004 | ABC ```
You didn't specify your DBMS, so this is ANSI SQL: ``` select emp_id, emp_name from ( select emp_id, emp_name, count(*) over (partition by emp_name) as name_count from employee ) t where name_count > 1; ```
I have the same problem yesterday :) and i believe below code will give you what you want. Use `INNER JOIN` to make a self join query, then use `HAVING` clause. ``` CREATE TABLE #Table1 (Emp_ID int, Emp_Name varchar(50)) INSERT INTO [#Table1] ( [Emp_ID], [Emp_Name] ) SELECT '001','ABC' UNION ALL SELECT '002','DEF' UNION ALL SELECT '003','GHI' UNION ALL SELECT '004','ABC' UNION ALL SELECT '005','XYZ' SELECT [t1].[Emp_ID], [t1].[Emp_Name] FROM [#Table1] t1 INNER JOIN ( SELECT [#Table1].[Emp_Name] FROM [#Table1] GROUP BY [#Table1].[Emp_Name] HAVING COUNT([#Table1].[Emp_ID]) > 1 ) t2 ON [t1].[Emp_Name] = [t2].[Emp_Name] DROP TABLE [#Table1] ``` Below is the result: ``` Emp_ID Emp_Name 1 ABC 4 ABC ``` **[SQL Fiddle Demo - Click here](http://sqlfiddle.com/#!3/a69a1/1)**
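A third equivalent formulation filters with `IN` against a grouped subquery. Here is a runnable sqlite3 sketch of it, using the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (emp_id TEXT, emp_name TEXT);
INSERT INTO emp VALUES ('001', 'ABC'), ('002', 'DEF'), ('003', 'GHI'),
                       ('004', 'ABC'), ('005', 'XYZ');
""")
# Keep every row whose name occurs more than once in the table.
dupes = conn.execute("""
    SELECT emp_id, emp_name
    FROM emp
    WHERE emp_name IN (SELECT emp_name FROM emp
                       GROUP BY emp_name
                       HAVING COUNT(*) > 1)
    ORDER BY emp_id
""").fetchall()
```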
SQL Query to get name that appear twice
[ "", "sql", "" ]
I need a query to get strings with a space in between, i.e. it should not return strings like ``` ' abc', 'abc ' and ' abc ' ``` and it should return strings like ``` 'ab c', ' ab c', 'ab c ' and ' ab c ' ``` I tried the query below: ``` select user_fname,user_lname from user where user_fname like '% %'; ``` but it is returning all the rows.
`%` matches [*zero* or more characters](https://msdn.microsoft.com/en-us/library/ms179859.aspx). I'd suggest adding some `_`s in: ``` select user_fname,user_lname from user where user_fname like '%_ _%'; ``` If that's still matching too much, perhaps: ``` select user_fname,user_lname from user where user_fname like '%[^ ] [^ ]%'; ``` Which will match zero or more characters, then something that definitely isn't a space, a space, something that definitely isn't a space and then zero or more characters.
Try this: ``` SELECT user_fname,user_lname FROM user WHERE user_fname LIKE '% %' AND user_fname NOT LIKE ' %' AND user_fname NOT LIKE '% ' ```
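The condition being expressed here, a space flanked by a non-space character on each side, is also easy to state as a regular expression. A Python sketch of the same filter, handy for sanity-checking candidate strings before settling on a `LIKE` pattern (the `[^ ]` class above is SQL Server specific):

```python
import re

def has_interior_space(s):
    """True when at least one single space sits between two non-space characters."""
    return re.search(r"\S \S", s) is not None

checks = {s: has_interior_space(s)
          for s in [" abc", "abc ", " abc ", "ab c", " ab c", "ab c ", " ab c "]}
```

Leading and trailing spaces alone never match, which is exactly the split the question asks for.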
Strings with space in between
[ "", "sql", "string", "sql-server-2008", "" ]
I have two models `Product` and `ProductProperties`. So, I store the properties for products in the Product Properties model which is associated with another model `Properties` How can I implement a scope that finds a product with the properties (A or B or C) AND (X or Y or Z) Filters I currently have are like so -- ``` scope :product_type_filter, lambda {|property_id| return nil if property_id.blank? joins(:product_properties).where('product_properties.property_id IN (?)', property_id).distinct } scope :metal_filter, lambda {|property_id| return nil if property_id.blank? joins(:product_properties).where('product_properties.property_id IN (?)', property_id).distinct } ``` And product the following SQL - `SELECT DISTINCT "products".* FROM "products" INNER JOIN "product_properties" ON "product_properties"."product_id" = "products"."id" AND "product_properties"."deleted_at" IS NULL WHERE "products"."deleted_at" IS NULL AND (product_properties.property_id IN ('504')) AND (product_properties.property_id IN ('520'))` But it doesn't really work since it's looking for a Product Property which has both values 504 and 520, which will never exist. Would appreciate some help!
So this is the join that I used -- ``` def self.find_with_properties property_ids, group_name joins(:product_properties).joins('JOIN product_properties '+group_name+' ON '+group_name+'.product_id = products.id AND '+group_name+'.property_id IN ('+property_ids.to_s.tr('[', '').tr(']', '').tr('"', '') +')') end ```
First of all I think you should not use scope for this task You should create a Class method to do it. It's best practice to not use scope when complex logic is involved. Secondly it's nothing wrong the above mentioned code. It's about calling them wrong. I recon you are chaining above scopes. By definition both above mentioned scope are same so you don't need to define twice. Try this one ``` def self.find_with_properties property_ids joins(:product_properties).where('product_properties.property_id IN (?)', property_ids) end ``` And Call it like ``` Product.find_with_properties([1,2,3]).find_with_properties([4,5,6]).uniq ``` If `'A' , 'B' , 'C'` are properties name then you should do it like: ``` self.find_by_properties_names(property_names) self.joins(:product_properties=>[:property]).where("properties.name IN(?)",property_names) end ``` Then can call like ``` Product.find_by_properties_names(["A","B","C"]).find_by_properties_names(["C","D","E"]) ```
Filtering with Joins - Rails
[ "", "sql", "ruby-on-rails", "activerecord", "" ]
I have a table like: ``` +------------+-------------------+--------------+------------+ | listing_id | transaction_title | image_thumb | sale_date | +------------+-------------------+--------------+------------+ | 226835186 | Title Version 11 | Img Style 11 | 2016-02-08 | +------------+-------------------+--------------+------------+ | 226835186 | Title Version 11 | Img Style 12 | 2016-02-16 | +------------+-------------------+--------------+------------+ | 228703248 | Title Version 21 | Img Style 21 | 2016-02-15 | +------------+-------------------+--------------+------------+ | 228703248 | Title Version 22 | Img Style 22 | 2016-02-17 | +------------+-------------------+--------------+------------+ | 228703248 | Title Version 23 | Img Style 21 | 2016-02-16 | +------------+-------------------+--------------+------------+ | 230105831 | Title Version 31 | Img Style 31 | 2016-02-12 | +------------+-------------------+--------------+------------+ | 230105831 | Title Version 32 | Img Style 31 | 2016-02-06 | +------------+-------------------+--------------+------------+ ``` I am trying to get a query of distinct `listing_id` with a latest used version of `transaction_title` and `image_thumb`. For the above table query output will be: ``` +------------+-------------------+--------------+------------+ | listing_id | transaction_title | image_thumb | sale_date | +------------+-------------------+--------------+------------+ | 226835186 | Title Version 11 | Img Style 12 | 2016-02-16 | +------------+-------------------+--------------+------------+ | 228703248 | Title Version 22 | Img Style 22 | 2016-02-17 | +------------+-------------------+--------------+------------+ | 230105831 | Title Version 31 | Img Style 31 | 2016-02-12 | +------------+-------------------+--------------+------------+ ``` I've tried different combinations of `select distinct, num_rows and max()` but can't get the desired result. 
Latest I've tried: ``` SELECT listing_id,transaction_title,image_thumb,sale_date FROM ( SELECT * FROM sales ORDER BY sale_date DESC ) AS transaction_title GROUP BY listing_id ``` Please help!
You can use row\_number base approach with data sorted descending first on listing\_id and sale\_date and then picking the rows with row\_number 1. This will give you the required dataset. A query template for this approach is as follows: ``` SELECT INVW.listing_id, INVW.transaction_title, INVW.image_thumb, INVW.sale_date FROM ( SELECT listing_id, transaction_title, image_thumb, sale_date ,@rank := if(@listing_id = listing_id or listing_id is null, @rank + 1, 1) as row_number ,@listing_id := listing_id as dummy FROM <###REPLACE_ME_WITH_TABLE_NAME###>, (select @rank := 0,@listing_id := '') rank ORDER BY listing_id,sale_date DESC ) INVW where INVW.row_number = 1; ```
You can use a derived table containing maximum dates per `listing_id`. If you `INNER JOIN` to this table you can get the exprected result set: ``` select t1.listing_id, transaction_title, image_thumb, sale_date from mytable as t1 inner join ( select listing_id, max(sale_date) max_date from mytable group by listing_id ) as t2 on t1.listing_id = t2.listing_id and sale_date = max_date ```
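The derived-table approach is straightforward to verify. A sqlite3 sketch with a trimmed, hypothetical version of the sample data; ISO-formatted date strings compare correctly as text, so `MAX(sale_date)` picks the latest day per listing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (listing_id INTEGER, transaction_title TEXT, sale_date TEXT);
INSERT INTO sales VALUES
  (226835186, 'Title Version 11',  '2016-02-08'),
  (226835186, 'Title Version 11b', '2016-02-16'),
  (228703248, 'Title Version 22',  '2016-02-17'),
  (228703248, 'Title Version 23',  '2016-02-16');
""")
# Join each row back to its listing's maximum date; only the latest
# row per listing_id survives.
latest = conn.execute("""
    SELECT t1.listing_id, t1.transaction_title, t1.sale_date
    FROM sales t1
    JOIN (SELECT listing_id, MAX(sale_date) AS max_date
          FROM sales
          GROUP BY listing_id) t2
      ON t1.listing_id = t2.listing_id AND t1.sale_date = t2.max_date
    ORDER BY t1.listing_id
""").fetchall()
```

One caveat of this pattern: if a listing has two rows on its maximum date, both are returned.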
Select the latest version based on date for each ID in MYSQL
[ "", "mysql", "sql", "max", "distinct", "mysql-num-rows", "" ]
When a particular form loads I need to grab a distinct list of locations from a table, with the eventual goal of displaying them to the user (baby steps though, I'll get to that). The code below generates no error, but when I try to loop through the recordset returned by my query, I get an error in relation to the integer `i`. > Run-time error '6': Overflow I've tested the query and it does return the results that I expect, so I believe that my handling of the `Recordset` object my be the issue. what am I doing wrong here? ``` Private Sub Form_load() Dim DB As DAO.Database Set DB = CurrentDb ' Set the DB object to use '** ' Grab a recordset containing distinct locations '* Dim RS As DAO.Recordset Set RS = DB.OpenRecordset( _ "SELECT DISTINCT [Active Directory].[AD Location] FROM [Active Directory]" _ ) Dim i As Integer: i = 0 Dim locations() As String ReDim locations(0) '** ' Make an array of the locations to display '* If Not (RS.EOF And RS.BOF) Then ' Ensure that the recordset is not empty RS.MoveFirst ' Move to the first record (unnecessary here, but good practice) '** ' Loop through the recordset and extract the locations '* Do Until RS.EOF = True locations(i) = RS![AD Location] i = i + 1 ReDim Preserve locations(i) Loop Else '** ' Tell the user that there are no records to display '* Call MsgBox( _ "Sorry, something went wrong and there are no locations to display." & vbCrLf & vbCrLf & _ "Please ensure that the Active Directory table is not empty.", _ vbExclamation, _ "You'd better sit down, it's not good news..." _ ) End If RS.Close ' Close the recordset Set RS = Nothing ' Be a hero and destroy the now defunct record set End Sub ```
If I'm not missing something, you could just use *GetRows*: ``` Dim locations As Variant RS.MoveLast i = RS.RecordCount RS.MoveFirst locations = RS.GetRows(i) ```
Thanks to @Arvo who commented that I had forgotten to move to the next record in my `do` loop. Adding `RS.MoveNext` to the loop fixed the problem. ``` Do Until RS.EOF = True locations(i) = RS![AD Location] i = i + 1 ReDim Preserve locations(i) RS.MoveNext Loop ```
Recordset with few results causing 'Overflow' error
[ "", "sql", "ms-access", "distinct", "" ]
I faced the following requirement. The following query is called by a procedure. The value `p_pac_code` is the input parameter of the procedure. The requirement is that the query should have an additional condition `sp_sbsp.SUBSPCLTY_CODE != 'C430'` if the `p_pac_code` value is '008'. For any other `p_pac_code` value, it should run as it is below. Is there a way to do this by adding an additional condition in the WHERE clause? For now, I have done this using IF...ELSE, running the query two times separately depending on the `p_pac_code` value. But I am required to find a way to do it by just adding a condition to this single query. ``` SELECT ptg.group_cid FROM PRVDR_TYPE_X_SPCLTY_SUBSPCLTY ptxss, PT_X_SP_SSP_STATUS pxsst, pt_sp_ssp_x_group ptg, group_x_group_store gg, specialty_subspecialty sp_sbsp, treatment_type tt, provider_type pt WHERE pt.PRVDR_TYPE_CODE = ptxss.PRVDR_TYPE_CODE AND tt.TRTMNT_TYPE_CODE = pxsst.TRTMNT_TYPE_CODE AND ptxss.PRVDR_TYPE_X_SPCLTY_SID = pxsst.PRVDR_TYPE_X_SPCLTY_SID AND tt.TRTMNT_TYPE_CODE = p_pac_code AND TRUNC(SYSDATE) BETWEEN TRUNC(PXSST.FROM_DATE) AND TRUNC(PXSST.TO_DATE) AND ptg.prvdr_type_code = ptxss.prvdr_type_code AND ptg.spclty_subspclty_sid = ptxss.spclty_subspclty_sid AND ptxss.spclty_subspclty_sid = sp_sbsp.spclty_subspclty_sid AND ptg.spclty_subspclty_sid = sp_sbsp.spclty_subspclty_sid AND ptg.status_cid = 2 AND ptg.group_cid = gg.group_cid AND gg.group_store_cid = 16 AND gg.status_cid = 2; ``` Thanks in advance.
Just add: ``` AND NOT ( NVL( sp_sbsp.SUBSPCLTY_CODE, 'x' ) = 'C430' AND NVL( p_pac_code value, 'x' ) = '008' ) ``` to the where clause. The `NVL` function is used so that it will match `NULL` values (if they exist in your data); otherwise, even though `NULL` does not match `C430` you will still find that `NULL = 'C430'` and `NULL <> 'C430'` and `NOT( NULL = 'C430' )` will all return false.
You can simply add a condition like this: ``` ... and ( ( sp_sbsp.SUBSPCLTY_CODE!='C430' and p_pac_code = '008') OR NVL(p_pac_code, '-') != '008' ) ``` This can be re-written in different ways, this one is quite self-explanatory
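The idea in both answers, making the exclusion itself conditional on the parameter inside a single predicate, can be exercised outside Oracle too. A sqlite3 sketch using `IFNULL` in place of `NVL`, with a hypothetical one-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (subspclty_code TEXT);
INSERT INTO t VALUES ('C430'), ('C431'), (NULL);
""")

def matching_rows(p_pac_code):
    # Exclude C430 rows only when the parameter is '008'; IFNULL keeps
    # NULL codes from silently failing the comparison.
    return conn.execute("""
        SELECT COUNT(*) FROM t
        WHERE NOT (IFNULL(subspclty_code, 'x') = 'C430' AND ? = '008')
    """, (p_pac_code,)).fetchone()[0]

counts = (matching_rows('008'), matching_rows('009'))
```

With '008' the C430 row is excluded (2 rows remain); with any other value all 3 rows pass.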
change where condition based on column value
[ "", "sql", "oracle", "" ]
This is being done on Oracle 11, and I'm trying to create a trigger where if the incoming meets certain values, to update another table. ``` --CREATE TRIGGER ON ITX CREATE OR REPLACE TRIGGER TRG_I_LAST_RECEIVED AFTER INSERT ON ITEM_TRANSACTION_LOG FOR EACH ROW WHEN ( NEW.ITEM_CONDITION_ID = 5005 -- REPAIRED AND NEW.WORKCENTER_ID IN (5001,5006,5063,5000,5022,5062) -- EWRC AND NEW.ORDER_ITEM_OPER_ID = 5009 -- Perform Exit Routing ) BEGIN UPDATE PART SET LAST_REPAIRED = SYSDATE WHERE PART_NO = :NEW.ITEM_PART_NO END; ``` When I do, it adds it, but I get this (ORA-24344): [![oracle error screen](https://i.stack.imgur.com/YZkyy.png)](https://i.stack.imgur.com/YZkyy.png) Now, I understand SQL fairly well (T-SQL) and maybe I'm missing something - can someone eyeball this and tell me why it's not valid? **EDIT**: The issue was missing semi-colon at the end of the update statement. Oracle's semi-colon discrepencies drive me bonkers sometimes. Here's the correct statement, and thanks to Justin Cave for finding it: ``` --CREATE TRIGGER ON ITX CREATE OR REPLACE TRIGGER TRG_I_LAST_RECEIVED AFTER INSERT ON ITEM_TRANSACTION_LOG FOR EACH ROW WHEN ( NEW.ITEM_CONDITION_ID = 5005 -- REPAIRED AND NEW.WORKCENTER_ID IN (5001,5006,5063,5000,5022,5062) -- EWRC AND NEW.ORDER_ITEM_OPER_ID = 5009 -- Perform Exit Routing ) BEGIN UPDATE PART SET LAST_REPAIRED = SYSDATE WHERE PART_NO = :NEW.ITEM_PART_NO; --add that semicolon! END; ```
Your `update` statement is missing a semicolon at the end. Since it sounds like you come from a T-SQL background, in SQL Server, semicolons are generally optional. They're always required in Oracle ``` BEGIN UPDATE PART SET LAST_REPAIRED = SYSDATE WHERE PART_NO = :NEW.ITEM_PART_NO; END; ```
Most probably you have missed the ":" before "NEW.ITEM\_CONDITION\_ID" and others columns, and ofcourse as "Justin Cave" mentioned you need ";" after update statement.
Creating a Trigger on Oracle
[ "", "sql", "oracle11g", "triggers", "" ]
I have a non-periodic art course. Every day I add a new "present" entry to a table. I would like to know which students missed the last 2 days. Just the last 2 days. > Important: each student chooses a different weekday to have a class, and some of them have 2 days per week. I have 2 tables, **Students** and **Presences**. Presences is a 3-column table: student\_id, day, present. I would like to get a query highlighting all students that missed the last 2 days. This would return a table with 3 columns: Student\_Id, Name, and whether they missed the two last days. How can I get this result? The table structure is * **Student** id: int, name: varchar * **Presence** student\_id: int, day: date, present: boolean A data example: ``` presences students student_id day present id name --------------------------------- ------------ 1 2016-01-01 0 1 'Bob' 1 2016-01-10 1 2 'Carol' 1 2016-01-20 0 2 2016-01-15 1 2 2016-01-27 0 2 2016-01-21 0 ``` In this case Bob misses just the last day, and Carol misses the last 2 days. The expected result would be: ``` student_id name misses_two_last_days ---------------------------------------- 1 Bob FALSE 2 Carol TRUE ```
Lots of problems with your query. Try rewriting it to use `joins`. You also have to define the last 2 days. I've attempted to do that with a `subquery` below. Then you need to use `aggregation` to see if the student missed both days. Here's something that should be close: ``` select s.id, s.name from students s inner join presences p on s.id = p.student_id and p.present = false inner join (select distinct day from presences order by day desc limit 2) t on p.day = t.day group by s.id, s.name having count(p.day) = 2 ``` --- Rereading your question, if you want to return all students and not just those that have missed the last 2 days, you need to use `outer joins` instead and remove the `having` clause and replace with a `case` statement: ``` select s.id, s.name, case when count(p.day) = 2 then 'missed' else '' end as Missed from students s left join presences p on s.id = p.student_id and p.present = false left join (select distinct day from presences order by day desc limit 2) t on p.day = t.day group by s.id, s.name ```
Another solution with one subuqery. Please check if it meets your requirement. ``` SELECT s.id, s.name, ( SELECT SUM(p.present = false) FROM p WHERE p.student_id = s.id ORDER BY p.day DESC LIMIT 2 ) AS misses_two_last_days FROM s; ```
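The left-join variant shown above can be exercised end to end. A sqlite3 sketch with trimmed hypothetical data: the last two distinct class days are 2016-01-21 and 2016-01-27; Carol was marked absent on both, Bob on neither of them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (id INTEGER, name TEXT);
CREATE TABLE presences (student_id INTEGER, day TEXT, present INTEGER);
INSERT INTO students VALUES (1, 'Bob'), (2, 'Carol');
INSERT INTO presences VALUES
  (1, '2016-01-10', 1), (1, '2016-01-20', 0),
  (2, '2016-01-21', 0), (2, '2016-01-27', 0);
""")
# Count each student's absences that fall on the last two distinct days;
# the comparison COUNT(...) = 2 yields the boolean-like flag directly.
rows = conn.execute("""
    SELECT s.id, s.name, COUNT(t.day) = 2 AS missed_last_two
    FROM students s
    LEFT JOIN presences p ON p.student_id = s.id AND p.present = 0
    LEFT JOIN (SELECT DISTINCT day FROM presences
               ORDER BY day DESC LIMIT 2) t ON p.day = t.day
    GROUP BY s.id, s.name
    ORDER BY s.id
""").fetchall()
```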
SQL how count last entries for each id based on a criteria
[ "", "mysql", "sql", "nested", "" ]
I'm trying to create a computed column in order to have a unique index on a nullable column that ignores NULL rows1. I've composed this test case: ``` SELECT TEST_ID, CODE, UNIQUE_CODE, CAST(UNIQUE_CODE AS VARBINARY(4000)) AS HEX FROM ( SELECT TEST_ID, CODE, ISNULL(CODE, CONVERT(VARCHAR, SPACE(10)) + CONVERT(VARCHAR, TEST_ID)) AS UNIQUE_CODE FROM ( SELECT 1 AS TEST_ID, 'ABCDEFGHIJ' AS CODE UNION ALL SELECT 2, 'XYZ' UNION ALL SELECT 3, NULL ) TEST ) X; ``` It works as expected when `CODE` is not null but I only get a string of whitespaces when `CODE` is null (i.e., the trailing `TEST_ID` is missing): ``` TEST_ID | CODE | UNIQUE_CODE | HEX --------+------------+-------------+----------------------- 1 | ABCDEFGHIJ | ABCDEFGHIJ | 0x4142434445464748494A 2 | XYZ | XYZ | 0x58595A 3 | NULL | | 0x20202020202020202020 ``` The funny thing is that I already use this technique successfully in another table and I can't spot the difference: ``` CREATE TABLE SOME_OTHER_TABLE ( SOME_OTHER_TABLE_ID INT IDENTITY(1, 1) NOT NULL, NOMBRE VARCHAR(50), -- This works just fine: NOMBRE_UNICO AS ISNULL(NOMBRE, CONVERT(VARCHAR, SPACE(50)) + CONVERT(VARCHAR, SOME_OTHER_TABLE_ID)), CONSTRAINT SOME_OTHER_TABLE_PK PRIMARY KEY (SOME_OTHER_TABLE_ID) ); ``` What am I missing? *(1) This was a workaround for SQL Server 2005 that's no longer necessary in later versions thanks to filtered indexes.*
There you go with `" 3"` ``` SELECT TEST_ID, CODE, UNIQUE_CODE, CAST(UNIQUE_CODE AS VARBINARY(4000)) AS HEX FROM ( SELECT TEST_ID, CODE, ISNULL(CODE, CONVERT(VARCHAR(20), SPACE(10)) + CONVERT(VARCHAR(20), TEST_ID)) AS UNIQUE_CODE FROM ( SELECT 1 AS TEST_ID, cast('ABCDEFGHIJ' as varchar(20)) AS CODE UNION ALL SELECT 2, 'XYZ' UNION ALL SELECT 3, NULL ) TEST ) X; ``` [![enter image description here](https://i.stack.imgur.com/KBnnV.png)](https://i.stack.imgur.com/KBnnV.png) `'ABCDEFGHIJ'` (first value in the union list) is exactly 10 characters and this column is a first argument of IsNull. So it takes 10 characters as size for IsNull result. Which is enough only for spaces. Replacing this constant with `'ABCDEFGHIJKLMNOPQR'` would do the trick also.
It looks like SQL is trying to help out to define the column length in your inner query. By casting/converting to a specific size this fixes the problem. Once your `UNIQUE_CODE` field exceeds this value, the returned value is limited to the size of the column. ``` SELECT TEST_ID, CODE, UNIQUE_CODE, CAST(UNIQUE_CODE AS VARBINARY(4000)) AS HEX FROM ( SELECT TEST_ID, CODE, ISNULL(CODE, CONVERT(VARCHAR, SPACE(10)) + CONVERT(VARCHAR, TEST_ID)) AS UNIQUE_CODE FROM ( SELECT 1 AS TEST_ID, CONVERT(VARCHAR(50), 'ABCDEFGHIJ') AS CODE UNION ALL SELECT 2, 'XYZ' UNION ALL SELECT 3, NULL ) TEST ) X; ```
Computed column does not include one of the values
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have this query, which returns the ids of the audios favorited by the user, ordered by the date when they marked that ID as a favorite: ``` SELECT audio_id FROM favorites WHERE user_id = ? ORDER BY `time` DESC ``` Now I try to display those audios on the screen, also ordered by date: ``` SELECT * FROM audios WHERE id IN ( SELECT audio_id FROM favorites WHERE user_id = ? ORDER BY `time` DESC ) AND status = '1' ``` But after that query they are not ordered by date. The order of the IDs in the second query is not the same as the order of the IDs in the first one. What can I do? I don't want to store the result and then re-order it.
The reason that it is not ordered by time is because your `ORDER BY` is in the inner `SELECT` statement. so it does not apply to the outer `SELECT` This should work for you: ``` SELECT DISTINCT A.* FROM audios AS A INNER JOIN favorites AS F ON A.id = F.audio_id AND F.user_id = ? AND A.status = '1' ORDER BY F.[time] DESC ```
The `ORDER BY` on the query in the `WHERE` clause does not have an impact on the data in the `SELECT` statement. Making the query a join, allows the data to be stored in it's current location, and still be able to sort accordingly. ``` SELECT A.*, f.[time] FROM audios A INNER JOIN favorites f ON A.audio_id = f.audio_id WHERE a.status = '1' and f.user_id = ? ORDER BY [time] DESC ```
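Both answers make the same point: an `ORDER BY` inside an `IN` subquery cannot influence the outer result, but a join exposes `favorites.time` to the outer `ORDER BY`. A runnable sqlite3 sketch (hypothetical user id 7 and timestamps):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audios (id INTEGER, status TEXT);
CREATE TABLE favorites (user_id INTEGER, audio_id INTEGER, time TEXT);
INSERT INTO audios VALUES (1, '1'), (2, '1'), (3, '1');
INSERT INTO favorites VALUES
  (7, 2, '2016-02-01'), (7, 3, '2016-02-03'), (7, 1, '2016-02-02');
""")
# Joining keeps the favorite timestamp available for the final sort.
ordered_ids = [r[0] for r in conn.execute("""
    SELECT a.id
    FROM audios a
    JOIN favorites f ON f.audio_id = a.id
    WHERE f.user_id = ? AND a.status = '1'
    ORDER BY f.time DESC
""", (7,))]
```

The ids come back newest-favorite-first, matching the first query's order.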
Order result of query with the order of IN subquery
[ "", "mysql", "sql", "" ]
``` SELECT departments.department_id, department_name, departments.manager_id, first_name, max(salary) as Maximum_Salary, min(salary) Minimum_Salary FROM departments,employees WHERE departments.department_id = employees.department_id AND departments.manager_id = employee_id GROUP BY departments.department_id, department_name, departments.manager_id, first_name ORDER BY department_id; SELECT departments.department_id, department_name, departments.manager_id, max(salary) as Maximum_Salary, min(salary) Minimum_Salary FROM departments,employees WHERE departments.department_id = employees.department_id GROUP BY departments.department_id, department_name, departments.manager_id ORDER BY department_id; ``` I have tried two queries. I am getting manager's salary every time instead of Maximum and Minimum Salary of the department, but if I remove 'first\_name' from the 1st query and modify the joining clause, then the answer is right (i.e. 2nd Query), but I want to get the name of the manager.
You have to make two JOINs to the employees table : first join between employee and manager\_id, and second join between employee and department\_id. Something like this should work: ``` SELECT d.department_id, d.department_name, d.manager_id, m.first_name, max(e.salary) as Maximum_Salary, min(e.salary) Minimum_Salary FROM departments d LEFT JOIN employees e ON d.department_id=e.department_id LEFT JOIN employees m ON d.manager_id = m.employee_id GROUP BY d.department_id, d.department_name, d.manager_id, m.first_name ORDER BY d.department_id; ```
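Here's a self-contained check of the double-join idea, using SQLite from Python with a tiny made-up department (SQLite instead of Oracle/HR schema, but the join logic is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (department_id INT, department_name TEXT, manager_id INT);
    CREATE TABLE employees (employee_id INT, first_name TEXT, salary INT, department_id INT);
    INSERT INTO departments VALUES (10, 'Sales', 100);
    INSERT INTO employees VALUES
        (100, 'Ann', 5000, 10),   -- the manager
        (101, 'Bob', 9000, 10),
        (102, 'Cal', 3000, 10);
""")
# e: every employee of the department (drives MIN/MAX salary)
# m: a second join back to employees, only to fetch the manager's name
row = conn.execute("""
    SELECT d.department_id, d.department_name, m.first_name,
           MAX(e.salary), MIN(e.salary)
    FROM departments d
    LEFT JOIN employees e ON d.department_id = e.department_id
    LEFT JOIN employees m ON d.manager_id = m.employee_id
    GROUP BY d.department_id, d.department_name, m.first_name
""").fetchone()
```

Because `MAX`/`MIN` aggregate over the `e` join, the manager's own salary (5000) no longer masks the department-wide 9000/3000.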
You'll need to qualify the columns with the employees table, as follows: *employees.salary* and *employees.first\_name*. The query will look like this: ``` select departments.department_id,department_name,departments.manager_id,employees.first_name,max(employees.salary) as Maximum_Salary,min(employees.salary) Minimum_Salary from departments,employees where departments.department_id=employees.department_id and departments.manager_id=employee_id group by departments.department_id,department_name,departments.manager_id,first_name order by department_id; ```
How to get the maximum and minimum salary of all the departments along with department name,manager id, manager name of each department?
[ "sql" ]
I have a question regarding SQL dates. The table I am working with has a date field in the following format: "22-SEP-08". The field is a date column. I am trying to figure out how to output records from 1/1/2000 to present day. The code below is not filtering the date field: ``` Select distinct entity.lt_date from feed.entitytable entity where entity.lt_date >= '2000-01-01' ``` Any help regarding this issue is much appreciated. Thanks! Edit: I am using Oracle SQL Developer to write my code.
DATEs do ***not*** have "a format". Any format you *see* is applied by the application *displaying* the date value. You can either change the configuration of SQL Developer to display dates in a different format, or you can use `to_char()` to format the date the way you want. --- The reason your statement does not work is most probably the implicit data type conversion that you are relying on. `'2000-01-01'` is a string value, not a date, and the string is converted using the NLS settings of your session. The fact that you see dates displayed as `DD-MON-YY` means that that is the format used by the evil implicit data type conversion. You should **always** supply date values as real date literals. There are two ways of specifying a *real* date literal. The first is ANSI SQL and simply uses the keyword `DATE` in front of an ISO-formatted string: ``` where entity.lt_date >= DATE '2000-01-01' ``` Note the `DATE` keyword in front of the string, which makes it a real date literal, not a string expression. The other option is to use `to_date()` to convert a character value into a date: ``` where entity.lt_date >= to_date('2000-01-01', 'yyyy-mm-dd'); ``` More details about specifying date literals can be found in the manual: * [Date literals](http://docs.oracle.com/database/121/SQLRF/sql_elements003.htm#SQLRF51062) * [to\_date function](http://docs.oracle.com/database/121/SQLRF/functions219.htm#SQLRF06132)
My guess is the data type isn't a Date. Just in case it's a char type, try to convert it using the Oracle TO\_DATE() function. The Oracle documentation below should help you with the parameters. <http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions183.htm>
SQL Date Format Conversion
[ "sql", "oracle" ]
**Data:** ``` id uid type 1 20 A 2 20 B 3 20 A 4 6 A 5 1 A 6 3 A 7 6 A 8 1 B ``` **Scenario:** I want to group by `type` and sort it by `id`. I am using group by to group the `uid`. **Current Query:** ``` SELECT type, GROUP_CONCAT(DISTINCT uid) AS users, COUNT(type) AS typeCount FROM `test2` GROUP BY type ``` **Problem:** But the order of the `uid` is incorrect, it should be in descending order according to `id`. **Expected Result:** ``` type users typeCount A 6,3,1,20 6 B 1,20 2 ``` **My results:** ``` type users typeCount A 20,6,1,3 6 B 20,1 2 ```
The mystery of MySQL: the engine actually takes the first value in ASC order, no matter that you are asking for DESC by ID, so first "flip" the table, then: ``` SELECT type, GROUP_CONCAT(DISTINCT uid ORDER BY id DESC) AS users, COUNT(type) AS typeCount FROM (SELECT * FROM `test2` ORDER BY id DESC) test2 GROUP BY type ``` [SQLFiddleDemo](http://sqlfiddle.com/#!9/bfcfec/4/0)
The answer from @mitkosoft is already right. I am posting this just to analyze what the right expected result should be. From the following output, we can see that, for the type 'A' group, before DISTINCT takes effect, after ORDER BY id DESC the rows are: 6 3 1 6 20 20. DISTINCT can then produce two possible results: 6,3,1,20 or 3,1,6,20. Which one is produced is undetermined and implementation-dependent, so we can't rely on it. **Therefore, the expected result for group 'A' should be 6,3,1,20 or 3,1,6,20. Both are correct.** ``` mysql> SELECT * FROM test2; +------+------+------+ | id | uid | type | +------+------+------+ | 1 | 20 | A | | 2 | 20 | B | | 3 | 20 | A | | 4 | 6 | A | | 5 | 1 | A | | 6 | 3 | A | | 7 | 6 | A | | 8 | 1 | B | +------+------+------+ 8 rows in set (0.00 sec) mysql> SELECT uid FROM test2 WHERE type='A' ORDER BY id DESC; +------+ | uid | +------+ | 6 | | 3 | | 1 | | 6 | | 20 | | 20 | +------+ 6 rows in set (0.00 sec) ```
Sorting in group_concat
[ "mysql", "sql", "group-by", "group-concat" ]
I am trying to do a three-table join to display all columns. Only two of the tables have a key in common. Below is the code for the tables I already have: ``` CREATE TABLE Faculty ( FacultyID int, FirstName varchar(30), LastName varchar(30), EMail varchar(60), BirthDate DATE, Numbr_Courses int, PRIMARY KEY (FacultyID) ); CREATE TABLE Courses( CourseID int, CourseDisc varchar(4), CourseNum varchar(4), NumbrCred varchar(1), FirstYrOffered int, CourseTitle varchar(75), PRIMARY KEY (CourseID) ); CREATE TABLE Faculty_Courses( InstanceID int, FacultyID int, CourseDisc varchar(4), CourseNum varchar(4), CourseTitle varchar(75), PRIMARY KEY (InstanceID), FOREIGN KEY (FacultyID) REFERENCES Faculty(FacultyID) ); ``` The two tables that have the same key are Faculty and Faculty_Courses. I have attempted one piece of code that only gave me back an error. I guess I am having real trouble understanding how to write proper code for joining tables. My attempted code is below: ``` SELECT Faculty.*, Faculty_Courses.* FROM Faculty INNER JOIN Courses ON Faculty.FacultyID=Faculty_Courses.FacultyID ``` This gave me back the following error: **ERROR 1066 (42000): Not unique table/alias: 'Faculty'** Any help will be appreciated.
I would change your `Faculty_Courses` table: ``` CREATE TABLE Faculty_Courses( InstanceID int, FacultyID int, CourseID int, PRIMARY KEY (InstanceID), FOREIGN KEY (FacultyID) REFERENCES Faculty(FacultyID), FOREIGN KEY (CourseID) REFERENCES Courses(CourseID) ); ``` And then join all three tables together: ``` SELECT F.*, FC.*, C.* FROM Faculty F INNER JOIN Faculty_Courses FC ON F.FacultyID = FC.FacultyID INNER JOIN Courses C ON C.CourseID = FC.CourseID ``` You probably only really want some of the columns from `F` (Faculty) and `C` (Courses) and can mostly ignore the `FC` columns as they are just used for mapping between `F` and `C`
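Here is a quick way to verify the three-way join shape, using SQLite from Python with trimmed-down versions of the tables (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Faculty (FacultyID INT, LastName TEXT);
    CREATE TABLE Courses (CourseID INT, CourseTitle TEXT);
    CREATE TABLE Faculty_Courses (FacultyID INT, CourseID INT);
    INSERT INTO Faculty VALUES (1, 'Turing'), (2, 'Hopper');
    INSERT INTO Courses VALUES (10, 'Logic'), (20, 'Compilers');
    INSERT INTO Faculty_Courses VALUES (1, 10), (2, 20), (2, 10);
""")
# The bridge table Faculty_Courses appears in FROM, so both join
# conditions can legally reference it.
rows = conn.execute("""
    SELECT F.LastName, C.CourseTitle
    FROM Faculty F
    JOIN Faculty_Courses FC ON F.FacultyID = FC.FacultyID
    JOIN Courses C ON C.CourseID = FC.CourseID
    ORDER BY F.LastName, C.CourseTitle
""").fetchall()
```

Each row pairs a faculty member with one of their courses, which is exactly what the bridge table encodes.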
I'm surprised your `faculty_courses` table doesn't have a reference to `courses`. Either way, with your current query, you are selecting from `faculty` and `courses` but referencing `faculty_courses` in the join condition -- you cannot do that. This is what your current query should look like: ``` select * from faculty f join faculty_courses fc on f.facultyid = fc.facultyid ``` --- To `join` the 3rd table, if you had a `courseid` in your `faculty_courses` table, perhaps something like this: ``` select * from faculty f join faculty_courses fc on f.facultyid = fc.facultyid join courses c on fc.courseid = c.courseid ``` * [A Visual Explanation of SQL Joins](http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/)
Trying to do a three table join displaying all columns
[ "sql", "inner-join" ]
I have a table in SQL Server database that has a 2 fields X, Y. These fields represent coordinates on a field (american football field). 0,0 (X,Y) is the starting point with the largest value and furthest away point being 145,90 (X,Y). Each row in the table is a coordinate from the pitch. I want to be able to measure the distance between different coordinates on the field (different rows in the database) eg. 3,9 to 15,20. I also want to be able to measure the angle between coordinates. I looked the spatial types etc in SQL Server but its seems to me that is related to Latitude, Longitude. Is there any way in SQL Server that can I set such up in SQL with a custom map to measure between custom points, angles etc?
You can use the Geometry type. <https://msdn.microsoft.com/en-us/library/bb895270.aspx> Here's some quick examples. Use a derived column to create a Point: ``` CREATE TABLE #Test ( X INT, Y INT, POINT AS GEOMETRY::STGeomFromText('POINT('+CONVERT(VARCHAR(20),X)+' '+CONVERT(VARCHAR(20),Y)+')',0 ) ) INSERT INTO #Test( X, Y) VALUES (3, 9),(15,20) ``` If you want to view your points in SSMS adding a buffer can make things easier to see: ``` SELECT *,POINT.STBuffer(5) Pt FROM #Test ``` Distance is simple (say from the origin): ``` SELECT X, Y, POINT.STDistance(GEOMETRY::STGeomFromText('POINT(0 0)',0)) DistFromOrigin FROM #Test X Y DistFromOrigin ----------- ----------- ---------------------- 3 9 9.48683298050514 15 20 25 ``` Between any two points is just a matter of selecting the points and doing ``` Point1.STDistance(Point2) ``` And for angles, this function should work fine when changed to the `Geometry` type [Determining cardinal (compass) direction between points](https://stackoverflow.com/questions/14736464/determining-cardinal-compass-direction-between-points)
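The Geometry type does this inside the engine, but since the field is a plain Cartesian grid you can sanity-check distances and angles with ordinary math first. A quick sketch in plain Python (not SQL Server; the angle convention, counter-clockwise degrees from the positive X axis, is my assumption):

```python
import math

def distance(p1, p2):
    # straight-line distance between two field coordinates
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def bearing_deg(p1, p2):
    # angle of the line p1 -> p2, in degrees, measured
    # counter-clockwise from the positive X axis
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
```

For the example coordinates in the question, `distance((3, 9), (15, 20))` is the square root of 12^2 + 11^2, i.e. about 16.28 units.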
Do you remember the Pythagorean Theorem from algebra class? <http://betterexplained.com/articles/measure-any-distance-with-the-pythagorean-theorem/> That is what is used to measure the "long" side of a triangle, which is what you are measuring here. You don't need anything special other than some basic math for this. Going from (3, 9) to (15, 50), the side-to-side distance, or x, is 12 (15 - 3). You also know the distance down field is 41 (50 - 9). To determine the distance between these points, use x^2 + y^2 = c^2. In this example that is 12^2 + 41^2 = c^2, which simplifies to 144 + 1681 = c^2. To solve, you take the square root of 1825, which resolves to approximately 42.72. If you wanted to store this value you could easily make a computed column that calculates it for you. --EDIT-- I don't remember the formula for calculating the angle of the hypotenuse off the top of my head, but it should be easy to find on the internet.
Storing custom coordinates in SQL Server
[ "sql", "sql-server", "geospatial", "geography" ]
I need to write a query in sql server that let's me get get month registered in my table and somehow covert those months into 4 seasons(autumn,fall,spring,summer). Does anyone know how to make this done??
One method is a `case`. Your question doesn't clarify the logic, but here is an example: ``` (case when month(date) in (12, 1, 2) then 'winter' when month(date) in (3, 4, 5) then 'spring' when month(date) in (6, 7, 8) then 'summer' when month(date) in (9, 10, 11) then 'autumn' end) as season ```
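A quick way to try the `CASE` mapping without a SQL Server instance is SQLite via Python; `strftime('%m', ...)` stands in for `month()` here, which is a dialect difference, not part of the original answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One sample date per season, mapped through the same CASE logic.
rows = conn.execute("""
    SELECT d,
           CASE
               WHEN CAST(strftime('%m', d) AS INTEGER) IN (12, 1, 2) THEN 'winter'
               WHEN CAST(strftime('%m', d) AS INTEGER) IN (3, 4, 5)  THEN 'spring'
               WHEN CAST(strftime('%m', d) AS INTEGER) IN (6, 7, 8)  THEN 'summer'
               ELSE 'autumn'
           END AS season
    FROM (SELECT '2016-01-15' AS d UNION ALL
          SELECT '2016-04-15' UNION ALL
          SELECT '2016-07-15' UNION ALL
          SELECT '2016-10-15')
    ORDER BY d
""").fetchall()
seasons = [s for _, s in rows]
```

The four test dates come back as winter, spring, summer, autumn in calendar order.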
``` (CHOOSE(month(TheDate),'Winter', 'Winter', 'Spring', 'Spring', 'Spring', 'Summer', 'Summer', 'Summer', 'Autumn', 'Autumn', 'Autumn', 'Winter')) ```
Sql Query that somehow converts months to 4 seasons
[ "sql", "sql-server" ]
I have a large data set which for the purpose of this question has 3 fields: * Group Identifier * From Date * To Date On any given row the `From Date` will always be less than the `To Date` but within each group the time periods (which are in no particular order) represented by the date pairs could overlap, be contained one within another, or even be identical. What I'd like to end up with is a query that condenses the results for each group down to just the continuous periods. For example a group that looks like this: ``` | Group ID | From Date | To Date | -------------------------------------- | A | 01/01/2012 | 12/31/2012 | | A | 12/01/2013 | 11/30/2014 | | A | 01/01/2015 | 12/31/2015 | | A | 01/01/2015 | 12/31/2015 | | A | 02/01/2015 | 03/31/2015 | | A | 01/01/2013 | 12/31/2013 | ``` Would result in this: ``` | Group ID | From Date | To Date | -------------------------------------- | A | 01/01/2012 | 11/30/2014 | | A | 01/01/2015 | 12/31/2015 | ``` I've read a number of articles on date packing but I can't quite figure out how to apply that to my data set. How can construct a query that would give me those results?
The solution from the book "Microsoft® SQL Server® 2012 High-Performance T-SQL Using Window Functions": ``` ;with C1 as( select GroupID, FromDate as ts, +1 as type, 1 as sub from dbo.table_name union all select GroupID, dateadd(day, +1, ToDate) as ts, -1 as type, 0 as sub from dbo.table_name), C2 as( select C1.* , sum(type) over(partition by GroupID order by ts, type desc rows between unbounded preceding and current row) - sub as cnt from C1), C3 as( select GroupID, ts, floor((row_number() over(partition by GroupID order by ts) - 1) / 2 + 1) as grpnum from C2 where cnt = 0) select GroupID, min(ts) as FromDate, dateadd(day, -1, max(ts)) as ToDate from C3 group by GroupID, grpnum; ``` Create table: ``` if object_id('table_name') is not null drop table table_name create table table_name(GroupID varchar(100), FromDate datetime,ToDate datetime) insert into table_name select 'A', '01/01/2012', '12/31/2012' union all select 'A', '12/01/2013', '11/30/2014' union all select 'A', '01/01/2015', '12/31/2015' union all select 'A', '01/01/2015', '12/31/2015' union all select 'A', '02/01/2015', '03/31/2015' union all select 'A', '01/01/2013', '12/31/2013' ```
I'd use a `Calendar` table. This table simply has a list of dates for several decades. ``` CREATE TABLE [dbo].[Calendar]( [dt] [date] NOT NULL, CONSTRAINT [PK_Calendar] PRIMARY KEY CLUSTERED ( [dt] ASC )) ``` There are many ways to [populate such table](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1). For example, 100K rows (~270 years) from 1900-01-01: ``` INSERT INTO dbo.Calendar (dt) SELECT TOP (100000) DATEADD(day, ROW_NUMBER() OVER (ORDER BY s1.[object_id])-1, '19000101') AS dt FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2 OPTION (MAXDOP 1); ``` Once you have a `Calendar` table, here is how to use it. Each original row is joined with the `Calendar` table to return as many rows as there are dates between From and To. Then possible duplicates are removed. Then classic gaps-and-islands by numbering the rows in two sequences. Then grouping found islands together to get the new From and To. **Sample data** I added a second group. ``` DECLARE @T TABLE (GroupID int, FromDate date, ToDate date); INSERT INTO @T (GroupID, FromDate, ToDate) VALUES (1, '2012-01-01', '2012-12-31'), (1, '2013-12-01', '2014-11-30'), (1, '2015-01-01', '2015-12-31'), (1, '2015-01-01', '2015-12-31'), (1, '2015-02-01', '2015-03-31'), (1, '2013-01-01', '2013-12-31'), (2, '2012-01-01', '2012-12-31'), (2, '2013-01-01', '2013-12-31'); ``` **Query** ``` WITH CTE_AllDates AS ( SELECT DISTINCT T.GroupID ,CA.dt FROM @T AS T CROSS APPLY ( SELECT dbo.Calendar.dt FROM dbo.Calendar WHERE dbo.Calendar.dt >= T.FromDate AND dbo.Calendar.dt <= T.ToDate ) AS CA ) ,CTE_Sequences AS ( SELECT GroupID ,dt ,ROW_NUMBER() OVER(PARTITION BY GroupID ORDER BY dt) AS Seq1 ,DATEDIFF(day, '2001-01-01', dt) AS Seq2 ,DATEDIFF(day, '2001-01-01', dt) - ROW_NUMBER() OVER(PARTITION BY GroupID ORDER BY dt) AS IslandNumber FROM CTE_AllDates ) SELECT GroupID ,MIN(dt) AS NewFromDate ,MAX(dt) AS NewToDate FROM CTE_Sequences GROUP BY GroupID, IslandNumber ORDER BY GroupID, NewFromDate; ``` **Result** ``` 
+---------+-------------+------------+ | GroupID | NewFromDate | NewToDate | +---------+-------------+------------+ | 1 | 2012-01-01 | 2014-11-30 | | 1 | 2015-01-01 | 2015-12-31 | | 2 | 2012-01-01 | 2013-12-31 | +---------+-------------+------------+ ```
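Either SQL approach can be sanity-checked outside the database; the same date-packing logic fits in a few lines of plain Python (sample data copied from the question; ranges whose gap is exactly one day are treated as continuous, matching the expected output):

```python
from datetime import date, timedelta

def condense(periods):
    """Merge overlapping and back-to-back (gap of one day) date ranges."""
    merged = []
    for start, end in sorted(periods):
        if merged and start <= merged[-1][1] + timedelta(days=1):
            # extends or overlaps the previous island: stretch its end
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [(s, e) for s, e in merged]

group_a = [
    (date(2012, 1, 1),  date(2012, 12, 31)),
    (date(2013, 12, 1), date(2014, 11, 30)),
    (date(2015, 1, 1),  date(2015, 12, 31)),
    (date(2015, 1, 1),  date(2015, 12, 31)),
    (date(2015, 2, 1),  date(2015, 3, 31)),
    (date(2013, 1, 1),  date(2013, 12, 31)),
]
```

`condense(group_a)` reproduces the two continuous periods from the question, which makes it easy to cross-check the T-SQL results.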
Condense Time Periods with SQL
[ "sql", "sql-server", "sql-server-2012" ]
The data in the Phone\_number column of my Temp\_table looks like this: ``` 1234560200 1234560201 1234560202 2264540300 2264540301 2264540302 2264540303 2264540304 2264540305 2264540306 ``` I want to find sequences in the last 4 digits and return the first and last number of each sequence. For example, the first 3 rows form a sequence of `0200, 0201, 0202, so First = 0200 and Last = 0202`. The final output of this query should be ``` First Last 0200 0202 0300 0306 ``` I tried the query below, but I am not sure about this approach. ``` WITH get_nxt_range AS ( select substr(a.PHONE_NUMBER,7,4) range1, LEAD(substr(a.PHONE_NUMBER,7,4)) OVER (ORDER BY a.PHONE_NUMBER ) nxt_range from Temp_table a ) SELECT range1,nxt_range FROM get_nxt_range WHERE nxt_range = range1 +1 ORDER BY range1 ```
I think something like this might work: ``` select min (substr (phone_number, -4, 4)) as first, max (substr (phone_number, -4, 4)) as last from temp_table group by substr (phone_number, -4, 2) ```
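To see the `MIN`/`MAX` grouping in action, here is the same idea against SQLite from Python (SQLite's `substr()` also accepts a negative start position; sample numbers copied from the question). Note that, like the answer above, grouping on two digits of the suffix assumes each run stays within one "hundreds" block:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_table (phone_number TEXT)")
conn.executemany(
    "INSERT INTO temp_table VALUES (?)",
    [("123456020%d" % i,) for i in range(3)] +   # 1234560200..1234560202
    [("226454030%d" % i,) for i in range(7)])    # 2264540300..2264540306
rows = conn.execute("""
    SELECT MIN(substr(phone_number, -4)) AS first,
           MAX(substr(phone_number, -4)) AS last
    FROM temp_table
    GROUP BY substr(phone_number, -4, 2)
    ORDER BY first
""").fetchall()
```

Each group's smallest and largest 4-digit suffix become the `First`/`Last` pair from the expected output.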
One method to get sequences is to use the difference of row numbers approach. This works in your case as well: ``` select substr(phone_number, 1, 6), min(substr(phone_number, 7, 4)), max(substr(phone_number, 7, 4)) from (select t.*, (row_number() over (order by phone_number) - row_number() over (partition by substr(phone_number, 1, 6) order by phone_number) ) as grp from temp_table t ) t group by substr(phone_number, 1, 6), grp; ```
Finding sequence in data and grouping by it
[ "sql", "oracle", "gaps-and-islands" ]
I am working in SQL Server for a client and they have given me a request. They have a table `products` with many columns, one of which is `article`. There are about 28,000 rows in `products`. They wish to create a new table `articles` that contains only the article numbers from `products`, but set up so that the new table takes the `article` column and splits it into 3 columns, each with (up to) 10,000 rows. I explained that this is not the best software to do this in, but they insist (and they're the ones paying me!). The new table has columns `Article1`, `Article2`, and `Article3`. Can someone help me out with this? All I have succeeded in so far is getting the first 10,000 article numbers in correctly using ``` insert into articles (Article1) select top 10000 article from products ``` But now I am stuck as to how to insert the remaining values into the 2nd and 3rd columns. I know what I really need is some sort of `UPDATE` query but I can't get anywhere. I am running SSMS 2014.
If I understand your question correctly, you want to transform rows of `article` into a table with 3 columns. Here is a try: ``` ;WITH Cte AS( SELECT article, grp = (ROW_NUMBER() OVER(ORDER BY article) -1) % (SELECT CEILING(COUNT(*) / (3 * 1.0)) FROM products) FROM products ), CteFinal AS( SELECT *, rn = ROW_NUMBER() OVER(PARTITION BY grp ORDER BY article) FROM Cte ) INSERT INTO articles(Article1, Article2, Article3) SELECT Article1 = MAX(CASE WHEN rn = 1 THEN article END), Article2 = MAX(CASE WHEN rn = 2 THEN article END), Article3 = MAX(CASE WHEN rn = 3 THEN article END) FROM CteFinal GROUP BY grp ```
You can use a NOT EXISTS clause: when inserting into the articles2 table, you can check it with NOT EXISTS (SELECT ARTICLE\_ID FROM ARTICLES2). I hope you are getting my point. Apart from that, if you can modify the structure of the main ARTICLE table, you can add a column named, say, PROCESSED (a boolean column with a default value of 0). Once you have inserted data into the articles1 table, update the PROCESSED column for those articles to 1, then process the remaining articles (which have a processed column value of 0) for article2 and update those with processed = 1, and similarly for articles 3. Let me know if you have any questions.
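Whichever SQL route you take, the column-major layout itself is easy to prototype outside the database. A scaled-down pure-Python sketch (7 values into 3 columns of up to 3 rows, standing in for 28,000 into 3 columns of 10,000; the helper name is mine):

```python
def into_columns(values, n_cols, col_len):
    """Lay out a flat list column-by-column: the first col_len values
    fill column 1, the next col_len fill column 2, and so on."""
    cols = [values[i * col_len:(i + 1) * col_len] for i in range(n_cols)]
    # pad the short columns with None so every row has n_cols cells
    rows = []
    for r in range(col_len):
        rows.append(tuple(col[r] if r < len(col) else None for col in cols))
    return rows

# scaled-down stand-in for the 28,000 articles
articles = list(range(1, 8))          # 7 "article numbers"
rows = into_columns(articles, 3, 3)   # 3 columns, up to 3 rows each
```

The resulting row tuples map directly onto a parameterized `INSERT INTO articles (Article1, Article2, Article3) VALUES (?, ?, ?)`.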
SQL Server long column into multiple shorter columns
[ "sql", "sql-server", "sql-update", "sql-insert", "ssms-2014" ]
When I run the code below without the where clause, it shows rows with a `NULL` value, but when I add a where clause like the one below, there are no rows with a `NULL` value. I want the rows with a `NULL` value even with the where clause. ``` SELECT * FROM project_category as a LEFT OUTER JOIN project_estimate_detail as b ON a.id = b.project_cat_id where b.project_cat_id not in ('21','22','2') ```
Try this: ``` SELECT * FROM project_category as a LEFT OUTER JOIN project_estimate_detail as b ON a.id = b.project_cat_id where (b.project_cat_id IS NULL) OR (b.project_cat_id not in ('21','22','2')) ``` When `b.project_cat_id` is `NULL` then `NOT IN` evaluates to `NULL`. So you have to *explicitly* check for `NULL` using the `IS NULL` expression.
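The three-valued-logic behaviour is easy to demonstrate in miniature; here it is with SQLite from Python (a single synthetic NULL row, not the original tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A NULL value compared with NOT IN yields NULL (unknown), not TRUE,
# so the row silently disappears from the result.
dropped = conn.execute("""
    SELECT COUNT(*) FROM (SELECT NULL AS project_cat_id)
    WHERE project_cat_id NOT IN ('21', '22', '2')
""").fetchone()[0]
# Adding an explicit IS NULL check keeps the row.
kept = conn.execute("""
    SELECT COUNT(*) FROM (SELECT NULL AS project_cat_id)
    WHERE project_cat_id IS NULL
       OR project_cat_id NOT IN ('21', '22', '2')
""").fetchone()[0]
```

The first query returns 0 rows and the second returns the row, which is exactly why the `IS NULL` branch is needed after a `LEFT JOIN`.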
``` SELECT * FROM project_category AS a LEFT OUTER JOIN project_estimate_detail AS b ON a.id = b.project_cat_id WHERE b.project_cat_id NOT IN ( '21', '22', '2' ) AND b.project_cat_id IS NULL ```
Also show row with NULL value
[ "mysql", "sql" ]
This question is regarding query optimization to avoid multiple call to database via PHP. So Here is scenario, I have two tables one contains information you can call this as reference table and another one is data table, fields `key1` and `key2` are common in both table, based on these fields, we can join them. I don't know whether query can be made even simpler than what I am doing right now, what I want to achieve is as follows : > I would like to find distinct `key1,key2,info1,info2` from `main_info` > table, whenever serial value is less than 10 and `key1,key2` of both > table matches, and then group them by `info1,info2`, while grouping > count the repeated `key1,key2` for duplicates of `info1,info2` fields > and `group_concat` those keys **Contents of table `main_info`** ``` MariaDB [demos]> select * from main_info; +------+------+-------+-------+----------+ | key1 | key2 | info1 | info2 | date | +------+------+-------+-------+----------+ | 1 | 1 | 15 | 90 | 20120501 | | 1 | 2 | 14 | 92 | 20120601 | | 1 | 3 | 15 | 82 | 20120801 | | 1 | 4 | 15 | 82 | 20120801 | | 1 | 5 | 15 | 82 | 20120802 | | 2 | 1 | 17 | 90 | 20130302 | | 2 | 2 | 17 | 90 | 20130302 | | 2 | 3 | 17 | 90 | 20130302 | | 2 | 4 | 16 | 88 | 20130601 | +------+------+-------+-------+----------+ 9 rows in set (0.00 sec) ``` **Contents of table `product1`** ``` MariaDB [demos]> select * from product1; +------+------+--------+--------------+ | key1 | key2 | serial | product_data | +------+------+--------+--------------+ | 1 | 1 | 0 | NaN | | 1 | 1 | 1 | NaN | | 1 | 1 | 2 | NaN | | 1 | 1 | 3 | NaN | | 1 | 2 | 0 | 12.556 | | 1 | 2 | 1 | 13.335 | | 1 | 3 | 1 | NaN | | 1 | 3 | 2 | 13.556 | | 1 | 3 | 3 | 14.556 | | 1 | 4 | 3 | NaN | | 1 | 5 | 3 | NaN | | 2 | 1 | 0 | 12.556 | | 2 | 1 | 1 | 13.553 | | 2 | 1 | 2 | NaN | | 2 | 2 | 12 | 129 | | 2 | 3 | 22 | NaN | +------+------+--------+--------------+ 16 rows in set (0.00 sec) ``` **Via PHP I group fields `info1` and `info2` of table `main_info`, in current 
context `serial`,`product_data` of table `product1`, multiple times one after another (here I am running query twice as you can see)** **For field `serial` - 1st query** ``` MariaDB [demos]> select * , count(*) as serial_count,GROUP_CONCAT(key1,' ',key2) as serial_ids from -> ( -> SELECT distinct -> if(b.serial < 10,a.key1,null) AS `key1`, -> if(b.serial < 10,a.key2,null) AS `key2`, -> if(b.serial < 10,a.info1,null) AS `info1`, -> if(b.serial < 10,a.info2,null) AS `info2` -> FROM main_info a inner join product1 b on a.key1 = b.key1 AND a.key2= b.key2 -> ) as sub group by info1,info2 -> ; +------+------+-------+-------+--------------+-------------+ | key1 | key2 | info1 | info2 | serial_count | serial_ids | +------+------+-------+-------+--------------+-------------+ | NULL | NULL | NULL | NULL | 1 | NULL | | 1 | 2 | 14 | 92 | 1 | 1 2 | | 1 | 3 | 15 | 82 | 3 | 1 3,1 4,1 5 | | 1 | 1 | 15 | 90 | 1 | 1 1 | | 2 | 1 | 17 | 90 | 1 | 2 1 | +------+------+-------+-------+--------------+-------------+ 5 rows in set (0.00 sec) ``` **For field `product_data` - 2nd query** ``` MariaDB [demos]> select * , count(*) as product_data_count,GROUP_CONCAT(key1,' ',key2) as product_data_ids from -> ( -> SELECT distinct -> if(b.product_data IS NOT NULL,a.key1,null) AS `key1`, -> if(b.product_data IS NOT NULL,a.key2,null) AS `key2`, -> if(b.product_data IS NOT NULL,a.info1,null) AS `info1`, -> if(b.product_data IS NOT NULL,a.info2,null) AS `info2` -> FROM main_info a inner join product1 b on a.key1 = b.key1 AND a.key2= b.key2 -> ) as sub group by info1,info2 -> ; +------+------+-------+-------+--------------------+------------------+ | key1 | key2 | info1 | info2 | product_data_count | product_data_ids | +------+------+-------+-------+--------------------+------------------+ | 1 | 2 | 14 | 92 | 1 | 1 2 | | 1 | 3 | 15 | 82 | 3 | 1 3,1 4,1 5 | | 1 | 1 | 15 | 90 | 1 | 1 1 | | 2 | 2 | 17 | 90 | 3 | 2 2,2 3,2 1 | +------+------+-------+-------+--------------------+------------------+ 4 rows in 
set (0.01 sec) ``` **I would like to get output like this using one query, Group by info1, info2** ``` +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | key1 | key2 | info1 | info2 | serial_count | serial_ids | product_data_count | product_data_ids | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | NULL | NULL | NULL | NULL | 1 | NULL | NULL | NULL | | 1 | 2 | 14 | 92 | 1 | 1 2 | 1 | 1 2 | | 1 | 3 | 15 | 82 | 3 | 1 3,1 4,1 5 | 3 | 1 3,1 4,1 5 | | 1 | 1 | 15 | 90 | 1 | 1 1 | 1 | 1 1 | | 2 | 1 | 17 | 90 | 1 | 2 1 | 3 | 2 2,2 3,2 1 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ ``` **Below is structure of tables** ``` DROP TABLE IF EXISTS `main_info`; CREATE TABLE `main_info` ( `key1` int(11) NOT NULL, `key2` int(11) NOT NULL, `info1` int(11) NOT NULL, `info2` int(11) NOT NULL, `date` int(11) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1; LOCK TABLES `main_info` WRITE; INSERT INTO `main_info` VALUES (1,1,15,90,20120501),(1,2,14,92,20120601),(1,3,15,82,20120801),(1,4,15,82,20120801),(1,5,15,82,20120802),(2,1,17,90,20130302),(2,2,17,90,20130302),(2,3,17,90,20130302),(2,4,16,88,20130601); UNLOCK TABLES; DROP TABLE IF EXISTS `product1`; CREATE TABLE `product1` ( `key1` int(11) NOT NULL, `key2` int(11) NOT NULL, `serial` int(11) NOT NULL, `product_data` varchar(1000) DEFAULT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1; LOCK TABLES `product1` WRITE; INSERT INTO `product1` VALUES (1,1,0,'NaN'),(1,1,1,'NaN'),(1,1,2,'NaN'),(1,1,3,'NaN'),(1,2,0,'12.556'),(1,2,1,'13.335'),(1,3,1,'NaN'),(1,3,2,'13.556'),(1,3,3,'14.556'),(1,4,3,'NaN'),(1,5,3,'NaN'),(2,1,0,'12.556'),(2,1,1,'13.553'),(2,1,2,'NaN'),(2,2,12,'129'),(2,3,22,'NaN'); UNLOCK TABLES; ``` Someone please help me to get result in one query.
try this one ``` SELECT key1, key2, info1, info2, SUM(Scount) AS serial_count, GROUP_CONCAT(Skey1, ' ', Skey2) AS serial_ids, SUM(Pcount) AS product_data_count, GROUP_CONCAT(Pkey1, ' ', Pkey2) AS product_data_ids FROM ( SELECT DISTINCT IF(b.serial < 10 OR b.product_data IS NOT NULL,a.key1, NULL) AS `key1`, IF(b.serial < 10 OR b.product_data IS NOT NULL,a.key2, NULL) AS `key2`, IF(b.serial < 10 OR b.product_data IS NOT NULL,a.info1, NULL) AS `info1`, IF(b.serial < 10 OR b.product_data IS NOT NULL,a.info2, NULL) AS `info2`, IF(b.serial < 10,a.key1, NULL) AS `Skey1`, IF(b.serial < 10,a.key2, NULL) AS `Skey2`, IF(b.product_data IS NOT NULL,a.key1, NULL) AS `Pkey1`, IF(b.product_data IS NOT NULL,a.key2, NULL) AS `Pkey2`, IF(b.serial < 10, 1, NULL) AS `Scount`, IF(b.product_data IS NOT NULL, 1, NULL) AS `Pcount` FROM main_info a INNER JOIN product1 b ON a.key1 = b.key1 AND a.key2= b.key2 UNION ALL SELECT DISTINCT NULL AS `key1`, NULL AS `key2`, NULL AS `info1`, NULL AS `info2`, NULL AS `Skey1`, NULL AS `Skey2`, NULL AS `Pkey1`, NULL AS `Pkey2`, IF(serial > 9, 1, NULL) AS `Scount`, IF(product_data IS NULL, 1, NULL) AS `Pcount` FROM product1 WHERE serial > 9 xor product_data IS NULL ) AS sub GROUP BY info1,info2 ``` **RESULT (data from question)** ``` +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | key1 | key2 | info1 | info2 | serial_count | serial_ids | product_data_count | product_data_ids | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | NULL | NULL | NULL | NULL | 1 | NULL | NULL | NULL | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 1 | 2 | 14 | 92 | 1 | 1 2 | 1 | 1 2 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 1 | 3 | 15 | 82 | 3 | 1 3,1 4,1 5 | 3 | 1 3,1 4,1 5 | 
+------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 1 | 1 | 15 | 90 | 1 | 1 1 | 1 | 1 1 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ ``` **RESULT (data from comment)** ``` +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | key1 | key2 | info1 | info2 | serial_count | serial_ids | product_data_count | product_data_ids | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | NULL | NULL | NULL | NULL | 1 | NULL | 1 | NULL | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 1 | 2 | 14 | 92 | 1 | 1 2 | 1 | 1 2 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 1 | 3 | 15 | 82 | 3 | 1 3,1 4,1 5 | 3 | 1 3,1 4,1 5 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 1 | 1 | 15 | 90 | 1 | 1 1 | 1 | 1 1 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 2 | 4 | 16 | 88 | 1 | 2 4 | 1 | 2 4 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ | 2 | 1 | 17 | 90 | NULL | NULL | 3 | 2 1,2 2,2 3 | +------+------+-------+-------+--------------+-------------+--------------------+------------------+ ``` **NOTE:** There is something that I can't really understand about the base logic behind the question, so this answer is based mainly on the expected result. For example, if the group fields (`info1` and `info2`) are null, the other results will always be null, except for `serial_count` and `product_data_count`, which can be 1 or null; did you really mean to get that? Notice that this answer uses another sub query with `UNION ALL` to satisfy that.
How about combining your two queries with a JOIN? SQL:

```
SELECT
    tbl1.key1, tbl1.key2, tbl1.info1, tbl1.info2, tbl1.serial_count, tbl1.serial_ids,
    tbl2.product_data_count, tbl2.product_data_ids
FROM
(
    select *, count(*) as serial_count, GROUP_CONCAT(key1,' ',key2) as serial_ids from
    (
        SELECT distinct
            if(b.serial < 10,a.key1,null) AS `key1`,
            if(b.serial < 10,a.key2,null) AS `key2`,
            if(b.serial < 10,a.info1,null) AS `info1`,
            if(b.serial < 10,a.info2,null) AS `info2`
        FROM main_info a inner join product1 b on a.key1 = b.key1 AND a.key2 = b.key2
    ) as sub group by info1,info2
) tbl1
LEFT OUTER JOIN
(
    select *, count(*) as product_data_count, GROUP_CONCAT(key1,' ',key2) as product_data_ids from
    (
        SELECT distinct
            if(b.product_data IS NOT NULL,a.key1,null) AS `key1`,
            if(b.product_data IS NOT NULL,a.key2,null) AS `key2`,
            if(b.product_data IS NOT NULL,a.info1,null) AS `info1`,
            if(b.product_data IS NOT NULL,a.info2,null) AS `info2`
        FROM main_info a inner join product1 b on a.key1 = b.key1 AND a.key2 = b.key2
    ) as sub group by info1,info2
) tbl2
ON tbl1.info1 = tbl2.info1 AND tbl1.info2 = tbl2.info2
ORDER BY 3,4;
```

Output:

```
+------+------+-------+-------+--------------+-------------+--------------------+------------------+
| key1 | key2 | info1 | info2 | serial_count | serial_ids  | product_data_count | product_data_ids |
+------+------+-------+-------+--------------+-------------+--------------------+------------------+
| NULL | NULL | NULL  | NULL  | 1            | NULL        | NULL               | NULL             |
| 1    | 2    | 14    | 92    | 1            | 1 2         | 1                  | 1 2              |
| 1    | 3    | 15    | 82    | 3            | 1 3,1 4,1 5 | 3                  | 1 3,1 4,1 5      |
| 1    | 1    | 15    | 90    | 1            | 1 1         | 1                  | 1 1              |
| 2    | 1    | 17    | 90    | 1            | 2 1         | 3                  | 2 2,2 3,2 1      |
+------+------+-------+-------+--------------+-------------+--------------------+------------------+
5 rows in set (0.01 sec)

mysql> select version();
+-----------------+
| version()       |
+-----------------+
| 10.1.10-MariaDB |
+-----------------+
1 row in set (0.00 sec)
```
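The shape of this query, aggregating twice over different row subsets and stitching the per-group results back together with a `LEFT JOIN`, can be sketched in miniature (SQLite through Python's `sqlite3` stands in for MariaDB here; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (grp INTEGER, serial INTEGER, product_data TEXT);
INSERT INTO t VALUES
  (1, 5, 'x'), (1, 12, 'y'),
  (2, 3, NULL), (2, 7, 'z');
""")

# One derived table per aggregation criterion, joined on the group key,
# so each group carries both counts in a single result row.
rows = conn.execute("""
    SELECT a.grp, a.serial_count, b.product_count
    FROM (SELECT grp, COUNT(*) AS serial_count
          FROM t WHERE serial < 10 GROUP BY grp) AS a
    LEFT JOIN (SELECT grp, COUNT(*) AS product_count
               FROM t WHERE product_data IS NOT NULL GROUP BY grp) AS b
      ON a.grp = b.grp
    ORDER BY a.grp
""").fetchall()

print(rows)
```

The `LEFT JOIN` (rather than an inner join) keeps groups that match the first criterion even when no row in that group matches the second.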
Mysql group_concat of repeated keys and count of repetition of multiple columns in 1 query ( Query Optimization )
[ "", "mysql", "sql", "join", "mariadb", "" ]
Does it make any difference omitting the "AS" before defining the name of a column when creating a new view?

```
SELECT T.STUFF AS MY_STUFF
```

VS

```
SELECT T.STUFF MY_STUFF
```
A view is just a [stored query](http://docs.oracle.com/cd/E11882_01/server.112/e40540/schemaob.htm#CNCPT311), so [the `select` syntax](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_10002.htm#i2126854) applies. As you can see from the syntax diagram for the select list items:

[![enter image description here](https://i.stack.imgur.com/n9L8q.gif)](https://i.stack.imgur.com/n9L8q.gif)

... the AS keyword in the `expr AS c_alias` section is optional. So no, it makes no difference to the query, database or view. It's for readability and consistency with other database systems. I prefer to use it for anything other than a quick ad hoc query, and you may have coding standards that require it, but Oracle does not care.
It doesn't make any difference from the DB's point of view. Personally, I prefer the explicit `AS` and a separate line per column in my SQL statement, for a couple of reasons:

* you can easily search for a given column alias (just grep for `AS <alias>`)
* without the `AS`, you might forget a comma - this results in one aliased column instead of two non-aliased ones. If you always use an `AS`, you can spot this kind of error more easily
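Both points above, that the alias works identically with or without `AS`, and that a forgotten comma silently turns a column name into an alias, are easy to verify. A quick sketch (using SQLite through Python's `sqlite3`; the optional-`AS` rule for column aliases is the same standard SQL behavior Oracle follows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (stuff TEXT, extra TEXT)")
conn.execute("INSERT INTO t VALUES ('a', 'b')")

# 'AS' is optional: both queries yield the same column alias.
with_as    = conn.execute("SELECT stuff AS my_stuff FROM t")
without_as = conn.execute("SELECT stuff my_stuff FROM t")
print(with_as.description[0][0], without_as.description[0][0])  # my_stuff my_stuff

# The comma pitfall: this was meant to be "SELECT stuff, extra", but
# without the comma it returns ONE column, with stuff aliased as 'extra'.
cur = conn.execute("SELECT stuff extra FROM t")
print([c[0] for c in cur.description])  # ['extra']
```

Writing `AS` everywhere makes the missing-comma case stand out immediately, which is the maintainability argument for keeping it.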
Oracle column alias in select: is "AS" mandatory?
[ "", "sql", "oracle", "oracle11g", "" ]