I'm embarrassed to say that I've been trying to accomplish this for hours without success. I've read dozens of similar questions on StackOverflow and tried countless different things, but I simply do not have a good enough grasp of SQL to achieve what I'm trying to accomplish. I have two tables, `products` and `product_prices`. For simplicity, suppose they look like the following: ``` products: id ``` ``` product_prices: id | p_id | price | date_added ``` What I need to do is get the most recently added price, along with the date that price was added. So, in other words, for each product, I need to get the most recent `price` and `date_added` (along with the product id, `p_id`, of course). If I only needed to get the most recent date and price for one product whose ID is known, then I can do it with this: ``` SELECT price, date_added FROM product_prices WHERE p_id = 1 ORDER BY date_added DESC LIMIT 1 ``` However, this type of query will not work when I need to get the most recent date and price for *all* of the products. I believe the solution is to use the `MAX()` aggregate function in conjunction with `GROUP BY` and a subquery, but I simply cannot get it to work. Here is a test database on SQL Fiddle: <http://sqlfiddle.com/#!2/881cae/3> I realize there are a lot of similar questions on here, but I have read many of them and haven't been able to solve my problem, so I would appreciate direct help instead of just crying "duplicate!" and linking to another post. Thank you. 
EDIT: SQL Fiddle seems to be down at the moment, so here is the database schema I had on there: ``` CREATE TABLE products ( id int auto_increment primary key, name varchar(20) ); INSERT INTO products ( name ) VALUES ('product 1'), ('product 2'); CREATE TABLE product_prices ( id int auto_increment primary key, p_id int, price decimal(10,2), date_added int ); INSERT INTO product_prices ( p_id, price, date_added ) VALUES (1, 1.99, 1000), (1, 2.99, 2000), (1, 3.99, 3000), (1, 4.99, 4000), (1, 5.99, 5000), (1, 6.99, 6000), (2, 1.99, 1000), (2, 2.99, 2000), (2, 3.99, 3000), (2, 4.99, 4000), (2, 5.99, 5000), (2, 6.99, 6000); ```
Here is how you can do it: ``` SELECT pp.* FROM product_prices pp JOIN ( SELECT p_id, MAX(date_added) as max_date FROM product_prices GROUP BY p_id ) x ON pp.p_id = x.p_id AND pp.date_added = x.max_date ``` The idea is to make a set of tuples `{p_id, max_date}` for each product id (that's the inner query) and filter the `product_prices` data using these tuples (that's the `ON` clause in the inner join). [Demo on sqlfiddle.](http://sqlfiddle.com/#!2/525d9/8)
``` SELECT DISTINCT ON (p_id) price, date_added FROM product_prices ORDER BY p_id, date_added DESC ``` (note that `DISTINCT ON` is PostgreSQL-specific syntax) OR ``` SELECT price, date_added FROM product_prices join ( SELECT p_id, max(date_added) as max_date FROM product_prices group by p_id ) as last_price on last_price.p_id = product_prices.p_id and last_price.max_date = product_prices.date_added ``` Neither is tested, so they might contain a bug or two.
How to get other columns from a row when using an aggregate function?
[ "mysql", "sql" ]
What is the difference between the following two SQL queries? ``` select a.id, a.name, a.country from table a left join table b on a.id = b.id where a.name is not null ``` and ``` select a.id, a.name, a.country from table a left join table b on a.id = b.id and a.name is not null ```
Based on the following two test results, ``` select a.id, a.name,a.country from table a left join table b on a.id = b.id where a.name is not null ``` ***is faster (237 vs. 460).*** As far as I know, it is the standard form. ![enter image description here](https://i.stack.imgur.com/JJf5V.png) ![enter image description here](https://i.stack.imgur.com/rZFsu.png)
There is no difference other than the syntax.
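A small runnable illustration of the semantics (SQLite via Python, hypothetical tables): for an INNER join the two placements are equivalent, but for a LEFT join they can differ when the condition refers to the left table, because the ON clause never removes left-table rows.

```python
import sqlite3

# Hypothetical tables: one row of `a` has a NULL name.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER, name TEXT);
CREATE TABLE b (id INTEGER);
INSERT INTO a VALUES (1, 'x'), (2, NULL);
INSERT INTO b VALUES (1), (2);
""")

# Condition in WHERE: applied after the join, so the NULL-name row is dropped.
where_rows = conn.execute("""
    SELECT a.id FROM a LEFT JOIN b ON a.id = b.id
    WHERE a.name IS NOT NULL
    ORDER BY a.id
""").fetchall()

# Condition in ON: a LEFT JOIN always preserves the left table's rows,
# so the NULL-name row survives (with b's columns NULL).
on_rows = conn.execute("""
    SELECT a.id FROM a LEFT JOIN b ON a.id = b.id AND a.name IS NOT NULL
    ORDER BY a.id
""").fetchall()
print(where_rows, on_rows)
```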
Difference between WHERE and AND clause in a SQL JOIN query
[ "mysql", "sql", "relational-database" ]
I tried to get the customer that pays the maximum amount. It gave me the maximum amount but the wrong customer. What should I do? ``` SELECT temp.customerNumber, MAX( temp.max ) AS sum FROM ( SELECT p.customerNumber, p.amount AS max FROM payments p GROUP BY p.customerNumber ) AS temp ```
Using a join, possibly as follows: ``` SELECT * FROM payments INNER JOIN ( SELECT MAX(amount) AS MaxAmount FROM payments ) Sub1 ON payments.amount = Sub1.MaxAmount ``` The downside of this is that if two people both have the same highest payment, then both will be returned.
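The tie behaviour described above is easy to demonstrate; a sketch with SQLite from Python (made-up data, two customers tied at 75):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payments (customerNumber INTEGER, amount REAL);
INSERT INTO payments VALUES (101, 50.0), (102, 75.0), (103, 75.0), (101, 10.0);
""")

# Join every payment against the single overall maximum amount.
rows = conn.execute("""
    SELECT p.customerNumber, p.amount
    FROM payments p
    INNER JOIN (SELECT MAX(amount) AS MaxAmount FROM payments) Sub1
        ON p.amount = Sub1.MaxAmount
    ORDER BY p.customerNumber
""").fetchall()
print(rows)  # both tied customers come back
```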
I don't think you need the Subquery here: ``` SELECT p.customerNumber, MAX(p.amount) AS max FROM payments p GROUP BY p.customerNumber ORDER BY max DESC LIMIT 1 ```
Why does this give me the wrong customerNumber?
[ "mysql", "sql" ]
I just need to merge two tables (into player\_new) without conflicts. In table 1 (player\_new) I have 65,000 records. In table 2 (player\_old) I have 47,500 records. The table structure for both is: ``` -- ---------------------------- -- Table structure for player_new -- ---------------------------- DROP TABLE IF EXISTS `player_new`; CREATE TABLE `player_new` ( `id` int(11) NOT NULL AUTO_INCREMENT, `account_id` int(11) NOT NULL DEFAULT '0', `name` varbinary(24) NOT NULL DEFAULT 'NONAME' ........................ ) ENGINE=MyISAM AUTO_INCREMENT=1000 DEFAULT CHARSET=latin1; -- ---------------------------- -- Table structure for player_old -- ---------------------------- DROP TABLE IF EXISTS `player_old`; CREATE TABLE `player_old` ( `id` int(11) NOT NULL AUTO_INCREMENT, `account_id` int(11) NOT NULL DEFAULT '0', `name` varbinary(24) NOT NULL DEFAULT 'NONAME' ........................ ) ENGINE=MyISAM AUTO_INCREMENT=1000 DEFAULT CHARSET=latin1; ``` Some names are duplicated, and I just need to rename the duplicate names to %s\_x (in the player\_new table) so the player can change his name later. Any ideas?
You should probably try restructuring your table as suggested by @echo\_me, but what you want can still be achieved by merging the data of both tables into a separate table and then renaming that new table to `player_new`, as below. See a demo fiddle [here](http://sqlfiddle.com/#!2/c22d7e/2) ``` create table merged_player ( `id` int(11) NOT NULL AUTO_INCREMENT primary key, `account_id` int(11) NOT NULL DEFAULT '0', `name` varbinary(24) NOT NULL DEFAULT 'NONAME' ); insert into merged_player(account_id,name) select account_id,name from player_new union select account_id,name from player_old; drop table player_new; rename table merged_player to player_new; ```
Why don't you use just one table, like this: ``` CREATE TABLE `player` ( `id` int(11) NOT NULL AUTO_INCREMENT, `account_id` int(11) NOT NULL DEFAULT '0', `Newname` varbinary(24) DEFAULT 'NONAME', `Oldname` varbinary(24) DEFAULT 'NONAME' ...... ) ENGINE=MyISAM AUTO_INCREMENT=1000 DEFAULT CHARSET=latin1; ``` Then when you insert a new name, just update the Newname column and leave Oldname as it is.
Merging two tables in one
[ "mysql", "sql" ]
I'm pretty new to Perl and I've been stumped by an error with DBI. I'm trying to query for a series of characters defined by their hex values: ``` my @compare = ( '\'%\'+x\'0A\'+\'%\'', '\'%\'+x\'92\'+\'%\'', '\'%\'+x\'96\'+\'%\'', '\'%\'+x\'A0\'+\'%\'' ); my $fetch_bad_rows = $dbh->prepare( qq{ SELECT B.* FROM ... AS B WHERE B. ... LIKE ? OR B. ... LIKE ? OR B. ... LIKE ? OR B. ... LIKE ? }); $fetch_bad_rows->execute(@compare) my $counter; for( $counter = 0; $fetch_bad_rows->fetch; $counter++ ) {}; print LOG sprintf("Found %d bad rows\n", $counter); ``` Executing the statement this way finds 0 rows. However, if I hard code the strings in the query, I get 437 records: ``` my $fetch_bad_rows = $dbh->prepare( qq{ SELECT B.* FROM ... AS B WHERE B. ... LIKE '%'+x'0A'+'%' OR B. ... LIKE '%'+x'92'+'%' OR B. ... LIKE '%'+x'96'+'%' OR B. ... LIKE '%'+x'A0'+'%' }); $fetch_bad_rows->execute ``` I haven't been able to figure out what I'm doing wrong with passing in the bind values. Anyone have any ideas? Thanks in advance.
Thanks for the responses. I did some more digging based on the info provided and found a way to get it working: ``` my @compare = ( '0A','92','96','A0' ); my $fetch_bad_rows = $dbh->prepare( qq{ SELECT B.* FROM ... AS B WHERE B. ... LIKE '%' + UNHEX(?) + '%' OR B. ... LIKE '%' + UNHEX(?) + '%' OR B. ... LIKE '%' + UNHEX(?) + '%' OR B. ... LIKE '%' + UNHEX(?) + '%' }); $fetch_bad_rows->execute(@compare) ``` I wasn't able to use 'x?' since the prepare call treated it as a table reference and complained that no table 'x' was found. The UNHEX routine turned out to be what I needed, though, since it takes a string input. Thanks again everyone
The `?` in the `prepare` will make sure that everything is escaped. So if you pass in stuff that has `'` it will escape the quotes: ``` '\'%\'+x\'0A\'+\'%\'' ``` Which can be more easily written as: ``` q{'%'+x'0A'+'%'} ``` will turn into: ``` ... LIKE '\'%\'+x\'0A\'+\'%\'' ``` And thus it does not find anything.
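The same point holds in any DB API: placeholders bind whole *values*, never SQL fragments, so the expression must live in the SQL text or be built client-side. A sketch in Python's `sqlite3` (made-up table), binding the finished wildcard string instead of an expression:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (txt TEXT);
INSERT INTO t VALUES ('line one' || char(10) || 'line two'), ('clean line');
""")

# Binding the whole expression "'%' + x'0A' + '%'" would search for that
# literal text. Build the pattern client-side and bind the finished value.
pattern = "%" + chr(0x0A) + "%"  # match rows containing a line feed
rows = conn.execute("SELECT txt FROM t WHERE txt LIKE ?", (pattern,)).fetchall()
print(rows)
```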
DBI prepared statement - bind hex-wildcard string
[ "sql", "arrays", "perl", "prepared-statement", "dbi" ]
In the query below, I want to cast the DENSE\_RANK function as nvarchar(255), but it gives a syntax error. I have the following questions: 1. Is it possible to cast a value returned by the dense rank function? 2. If yes, what is the syntax? --- ``` SELECT cast('P' AS NVARCHAR(3)) AS ADDRESS_TYPE_CD, DENSE_RANK() OVER(PARTITION BY [CUSTOMER KEY] ORDER BY [PRIMARY ADDRESS LINE 1], [PRIMARY ADDRESS LINE 2], [PRIMARY ADDRESS LINE 3] + [PRIMARY ADDRESS LINE 4], [PRIMARY CITY], [PRIMARY STATE], [PRIMARY ZIP], [PRIMARY COUNTRY] ) AS ADDRESS_FLAG, [CUSTOMER KEY], [PRIMARY ADDRESS LINE 1] AS PA1, CASE WHEN [PRIMARY ADDRESS LINE 1] = [PRIMARY ADDRESS LINE 2] THEN NULL ELSE [PRIMARY ADDRESS LINE 2] END AS PA2, [PRIMARY ADDRESS LINE 3] + [PRIMARY ADDRESS LINE 4] AS PA3, [PRIMARY CITY] AS PCity, [PRIMARY STATE] AS PS, [PRIMARY ZIP] AS PZ, [PRIMARY COUNTRY] AS PC FROM mtb.DBO.EnrichedFile WHERE APPLICATION <> 'RBC' ```
``` SELECT CAST((DENSE_RANK() OVER(PARTITION BY [CUSTOMER KEY] ORDER BY [MAILING ADDRESS LINE 1], [MAILING ADDRESS LINE 2], [MAILING ADDRESS LINE 3]+[MAILING ADDRESS LINE 4], [MAILING CITY], [MAILING STATE], [MAILING ZIP], [MAILING COUNTRY]) as nvarchar(255)) AS ADDRESS_FLAG ``` Should be ``` SELECT CAST(DENSE_RANK() OVER(PARTITION BY [CUSTOMER KEY] ORDER BY [MAILING ADDRESS LINE 1], [MAILING ADDRESS LINE 2], [MAILING ADDRESS LINE 3]+[MAILING ADDRESS LINE 4], [MAILING CITY], [MAILING STATE], [MAILING ZIP], [MAILING COUNTRY]) as nvarchar(255)) AS ADDRESS_FLAG ``` You have a surplus opening bracket. Why are you casting this to `nvarchar(255)` anyway though? Even if there is some legitimate reason for wanting it as a string the maximum value it can possibly have is `9223372036854775807` so `varchar(19)` would be sufficient.
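With the brackets balanced, casting a window function's result works like any other expression. A quick runnable check (SQLite 3.25+ via Python; the table and columns are invented stand-ins for the address data):

```python
import sqlite3

# Hypothetical rows: two identical "cities" and one distinct one for customer 1,
# so DENSE_RANK assigns the duplicates the same rank.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE addr (cust INTEGER, city TEXT);
INSERT INTO addr VALUES (1, 'York'), (1, 'York'), (1, 'Leeds'), (2, 'Hull');
""")

# CAST wraps the whole window expression: CAST(DENSE_RANK() OVER (...) AS ...)
rows = conn.execute("""
    SELECT cust,
           CAST(DENSE_RANK() OVER (PARTITION BY cust ORDER BY city) AS TEXT) AS flag
    FROM addr
    ORDER BY cust, flag
""").fetchall()
print(rows)
```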
``` cast(DENSE_RANK() OVER(PARTITION BY [CUSTOMER KEY] ORDER BY [PRIMARY ADDRESS LINE 1],[PRIMARY ADDRESS LINE 2],[PRIMARY ADDRESS LINE 3] + [PRIMARY ADDRESS LINE 4],[PRIMARY CITY],[PRIMARY STATE],[PRIMARY ZIP],[PRIMARY COUNTRY] ) as nvarchar(255)) AS ADDRESS_FLAG ```
How to cast a value pulled out of dense rank function in SQL Server
[ "sql", "sql-server", "dense-rank" ]
My tables are: ``` customer(cid,name,city,state) orders(oid,cid,date) product(pid,productname,price) lineitem(lid,pid,oid,number,totalprice) ``` I want to select the products bought by all the customers of city 'X'. It means I need to intersect the products bought by all the customers living in city 'X'. Example: if there are 3 customers c1, c2 and c3, my answer is c1.product(intersect)c2.product(intersect)c3.product. I want to implement this only using `where exists` or `where not exists`, because I need to write the relational calculus for the same, in which `where not in` or `where in` is not available. My partial query is this: ``` select * from product p, lineitem l, customer c1 where exists( select * from customer c, orders o where o.cid=c.cid and c.city='X' and l.oid=o.oid and l.pid=p.pid and c1.cid=c.cid) ``` The above query gives me the pid, cid, oid, lid, totalprice, city, productname of all the customers living in city X. Now I need to figure out how to select the products common to all customers. Note: I can't use any of the aggregate functions because they are not available in relational calculus. I have a working query using aggregate functions, which is ``` select p.productname from product p, orders s, lineitem l, customer c where l.pid=p.pid and l.oid=s.oid and c.cid=s.cid and c.city='X' group by p.productname having count(distinct c.cid)=(select count(*) from customer c1 where c1.city='X') ``` It's OK if someone can convert the above query into `where exists` or `where not exists` form without `count` and `group by`. I am sure that it can be done, because I can do it in relational algebra, and according to [Codd's theorem](http://en.wikipedia.org/wiki/Codd%27s_theorem) tuple relational calculus and relational algebra are logically equivalent, and any query expressed in one can be expressed in the other. As both relational algebra and relational calculus do not support aggregate functions, the query can be expressed in SQL without aggregate functions.
``` select * from product p where not exists( select * from customer c where c.city = 'x' and not exists ( select * from lineitem l where l.pid = p.pid and exists ( select * from orders o where o.oid = l.oid and o.cid = c.cid ) ) ) ``` Relational calculus: ``` {T| ∃pЄproduct (¬(∃cЄcustomer(c.city="Newark")^¬(∃lЄlineitem(l.pid=p.pid)^∃oЄorders(o.oid=l.oid^o.cid=c.cid))))} ``` Relational algebra: ![enter image description here](https://i.stack.imgur.com/0FS3q.png)
This is my answer. I created a sqlfiddle at <http://www.sqlfiddle.com/#!2/f2fb85/1>, so you can try it. The query is: ``` SELECT p.* FROM product p WHERE NOT EXISTS ( SELECT c.cid FROM customer c WHERE NOT EXISTS ( SELECT l.lid FROM lineitem l JOIN orders o ON o.oid = l.oid WHERE l.pid = p.pid AND o.cid = c.cid ) AND c.city = 'X' ) AND EXISTS ( SELECT c.cid FROM customer c WHERE c.city = 'X' ) ```
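The core double `NOT EXISTS` (relational division) pattern used by both answers can be checked on toy data; a runnable sketch with SQLite from Python (invented customers and products, two customers in city 'X', only the 'widget' bought by both):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cid INTEGER, city TEXT);
CREATE TABLE orders   (oid INTEGER, cid INTEGER);
CREATE TABLE product  (pid INTEGER, productname TEXT);
CREATE TABLE lineitem (lid INTEGER, pid INTEGER, oid INTEGER);

INSERT INTO customer VALUES (1, 'X'), (2, 'X'), (3, 'Y');
INSERT INTO orders   VALUES (10, 1), (20, 2), (30, 3);
INSERT INTO product  VALUES (100, 'widget'), (200, 'gadget');
-- customer 1 bought both products; customer 2 only the widget
INSERT INTO lineitem VALUES (1, 100, 10), (2, 200, 10), (3, 100, 20), (4, 200, 30);
""")

# "Products with no X-city customer who did NOT buy them" == products
# bought by every customer of city 'X' (relational division).
rows = conn.execute("""
    SELECT p.productname FROM product p
    WHERE NOT EXISTS (
        SELECT 1 FROM customer c
        WHERE c.city = 'X'
        AND NOT EXISTS (
            SELECT 1 FROM lineitem l
            JOIN orders o ON o.oid = l.oid
            WHERE l.pid = p.pid AND o.cid = c.cid
        )
    )
""").fetchall()
print(rows)
```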
How to implement intersect using where exists or not exists in mysql
[ "mysql", "sql", "database", "relational-database", "intersection" ]
In a SQL Server 2000 table, there is a column called `Kpi` of float type. When I try to convert that column to varchar with `cast(kpi as varchar(3))`, it gives an error: ``` Msg 232, Level 16, State 2, Line 1 Arithmetic overflow error for type varchar, value = 26.100000. ``` The thing is, the column has only one distinct value, and that is 26.1. I cannot understand why it shows an error when converting it to varchar!
Your error is triggered by the decimal point in 26.1. You have applied this command: ``` cast(kpi as varchar(3)) ``` but the string `26.1` needs four characters, so first change the cast from varchar(3) to varchar(4) (to keep the first digit after the decimal point). That converts the value 26.100000 to 26.1. Also note that `varchar(26.1)` is not correct: the varchar type only accepts an integer value as its size. Why do you apply the outer varchar at all? I think you must remove the external varchar and leave only the cast function.
Try this instead: ``` declare @x float set @x = 26.1 select left(@x, 3) ``` The result will be `26.` (the first three characters of the converted string).
Arithmetic overflow error for type in Sql Server 2000
[ "sql", "sql-server", "sql-server-2000" ]
``` SiteVisitID siteName visitDate ------------------------------------------------------ 1 site1 01/03/2014 2 Site2 01/03/2014 3 site1 02/03/2014 4 site1 03/03/2014 5 site2 03/03/2014 6 site1 04/03/2014 7 site2 04/03/2014 8 site2 05/03/2014 9 site1 06/03/2014 10 site2 06/03/2014 11 site1 08/03/2014 12 site2 08/03/2014 13 site1 09/03/2014 14 site2 10/03/2014 ``` There are two sites, and each needs to have a visit entry for every day of the month. So, considering that today is 11/03/2014, we are expecting 22 entries, but there are only 14, so 8 are missing. Is there any way in SQL we could pull out the missing date entries, up to the current day of the month, per site? ``` siteName missingDate ----------------------- site2 02/03/2014 site1 05/03/2014 site1 07/03/2014 site2 07/03/2014 site2 09/03/2014 site1 10/03/2014 site1 11/03/2014 site2 11/03/2014 ``` Here is my unsuccessful attempt, which I believe is wrong both logically and syntactically: ``` select siteName, visitDate from SiteVisit not in (SELECT siteName, visitDate FROM SiteVisit WHERE Day(visitDate) != Day(CURRENT_TIMESTAMP) AND Month(visitDate) = Month(CURRENT_TIMESTAMP)) ``` Note: the above data and columns are a simplified version of the actual table
I would recommend you use a `table valued function` to get all the days between 2 selected dates as a table [(Try it out in this fiddle)](http://sqlfiddle.com/#!3/f389c/5): ``` CREATE FUNCTION dbo.GetAllDaysInBetween(@FirstDay DATETIME, @LastDay DATETIME) RETURNS @retDays TABLE ( DayInBetween DATETIME ) AS BEGIN DECLARE @currentDay DATETIME SELECT @currentDay = @FirstDay WHILE @currentDay <= @LastDay BEGIN INSERT @retDays (DayInBetween) SELECT @currentDay SELECT @currentDay = DATEADD(DAY, 1, @currentDay) END RETURN END ``` (I include a simple table setup for easy copy-paste tests) ``` CREATE TABLE SiteVisit (ID INT PRIMARY KEY IDENTITY(1,1), visitDate DATETIME, visitSite NVARCHAR(512)) INSERT INTO SiteVisit (visitDate, visitSite) SELECT '2014-03-11', 'site1' UNION SELECT '2014-03-12', 'site1' UNION SELECT '2014-03-15', 'site1' UNION SELECT '2014-03-18', 'site1' UNION SELECT '2014-03-18', 'site2' ``` Now you can simply check on which days no visit occurred when you know the "boundary days", like this: ``` SELECT DayInBetween AS missingDate, 'site1' AS visitSite FROM dbo.GetAllDaysInBetween('2014-03-11', '2014-03-18') AS AllDaysInBetween WHERE NOT EXISTS (SELECT ID FROM SiteVisit WHERE visitDate = AllDaysInBetween.DayInBetween AND visitSite = 'site1') ``` Or if you would like to know all days where any site was not visited, you could use this query: ``` SELECT DayInBetween AS missingDate, Sites.visitSite FROM dbo.GetAllDaysInBetween('2014-03-11', '2014-03-18') AS AllDaysInBetween CROSS JOIN (SELECT DISTINCT visitSite FROM SiteVisit) AS Sites WHERE NOT EXISTS (SELECT ID FROM SiteVisit WHERE visitDate = AllDaysInBetween.DayInBetween AND visitSite = Sites.visitSite) ORDER BY visitSite ``` Just on a side note: it seems you have some duplication in your table (it is not normalized); `siteName` should really go into a separate table and only be referenced from `SiteVisit`
Maybe you can use this as a starting point: ``` -- Recursive CTE to generate a list of dates for the month -- You'll probably want to play with making this dynamic WITH Dates AS ( SELECT CAST('2014-03-01' AS datetime) visitDate UNION ALL SELECT visitDate + 1 FROM Dates WHERE visitDate + 1 < '2014-04-01' ) -- Build a list of siteNames with each possible date in the month , SiteDates AS ( SELECT s.siteName, d.visitDate FROM SiteVisitEntry s CROSS APPLY Dates d GROUP BY s.siteName, d.visitDate ) -- Use a left anti-semi join to find the missing dates SELECT d.siteName, d.visitDate AS missingDate FROM SiteDates d LEFT JOIN SiteVisitEntry e /* Plug your actual table name here */ ON d.visitDate = e.visitDate AND d.siteName = e.siteName WHERE e.visitDate IS NULL OPTION (MAXRECURSION 0) ```
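The same calendar-plus-anti-join idea can be tested quickly with a recursive CTE; a sketch using SQLite from Python (a three-day window and two sites, invented data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SiteVisit (siteName TEXT, visitDate TEXT);
INSERT INTO SiteVisit VALUES
    ('site1', '2014-03-01'), ('site1', '2014-03-03'),
    ('site2', '2014-03-01'), ('site2', '2014-03-02');
""")

# Generate every date in the window, cross join with the distinct sites,
# then anti-join against the actual visits to find the gaps.
rows = conn.execute("""
    WITH RECURSIVE dates(d) AS (
        SELECT '2014-03-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM dates WHERE d < '2014-03-03'
    )
    SELECT s.siteName, dates.d AS missingDate
    FROM (SELECT DISTINCT siteName FROM SiteVisit) s
    CROSS JOIN dates
    LEFT JOIN SiteVisit v ON v.siteName = s.siteName AND v.visitDate = dates.d
    WHERE v.visitDate IS NULL
    ORDER BY missingDate, s.siteName
""").fetchall()
print(rows)
```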
sql server displaying missing dates
[ "sql", "sql-server" ]
How can I display the maximum OrderId for a CustomerId along with many columns? I have a table with the following columns: ``` CustomerId, OrderId, Status, OrderType, CustomerType ``` A customer with the same customer id could have many order ids (1, 2, 3, ...). I want to be able to display the max order id along with the rest of the columns in a SQL view. How can I achieve this? Sample Data: ``` CustomerId OrderId OrderType 145042 1 A 110204 1 C 145042 2 D 162438 1 B 110204 2 B 103603 1 C 115559 1 D 115559 2 A 110204 3 A ```
I'd use a common table expression and [`ROW_NUMBER`](http://technet.microsoft.com/en-us/library/ms186734.aspx): ``` ;With Ordered as ( select *, ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderId desc) as rn from [Unnamed table from the question] ) select * from Ordered where rn = 1 ```
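A runnable check of the `ROW_NUMBER` approach on a subset of the question's sample data (SQLite 3.25+ via Python; the table name `orders_t` is made up, since the question doesn't give one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders_t (CustomerId INTEGER, OrderId INTEGER, OrderType TEXT);
INSERT INTO orders_t VALUES
    (145042, 1, 'A'), (110204, 1, 'C'), (145042, 2, 'D'),
    (110204, 2, 'B'), (115559, 1, 'D'), (110204, 3, 'A');
""")

# Number the rows per customer, newest order first, then keep rank 1.
rows = conn.execute("""
    WITH Ordered AS (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY CustomerId
                                  ORDER BY OrderId DESC) AS rn
        FROM orders_t
    )
    SELECT CustomerId, OrderId, OrderType FROM Ordered
    WHERE rn = 1
    ORDER BY CustomerId
""").fetchall()
print(rows)
```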
``` select * from table_name where orderid in (select max(orderid) from table_name group by customerid) ```
Display max on one column with multiple columns in output
[ "sql", "sql-server-2008" ]
**How to write sql statement?** ``` Table_Product +------------------+ | Product | +------------------+ | AAA | | ABB | | ABC | | ACC | +------------------+ ``` ``` Table_Mapping +---------------+---------------+ | ProductGroup | ProductName | +---------------+---------------+ | A* | Product1 | | ABC | Product2 | +---------------+---------------+ ``` I need the following result: ``` +------------+---------------+ | Product | ProductName | +------------+---------------+ | AAA | Product1 | | ABB | Product1 | | ABC | Product2 | | ACC | Product1 | +------------+---------------+ ``` Thanks, TOM
The following query does what you describe when run from within the Access application itself: ``` SELECT Table_Product.Product, Table_Mapping.ProductName FROM Table_Product INNER JOIN Table_Mapping ON Table_Product.Product = Table_Mapping.ProductGroup WHERE InStr(Table_Mapping.ProductGroup, "*") = 0 UNION ALL SELECT Table_Product.Product, Table_Mapping.ProductName FROM Table_Product INNER JOIN Table_Mapping ON Table_Product.Product LIKE Table_Mapping.ProductGroup WHERE InStr(Table_Mapping.ProductGroup, "*") > 0 AND Table_Product.Product NOT IN (SELECT ProductGroup FROM Table_Mapping) ORDER BY 1 ```
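Outside Access the same two-branch idea works with `%` instead of `*` and `instr` in place of `InStr`; a sketch using SQLite from Python with the question's sample data (the wildcard mapping row is stored as `A%` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table_Product (Product TEXT);
CREATE TABLE Table_Mapping (ProductGroup TEXT, ProductName TEXT);
INSERT INTO Table_Product VALUES ('AAA'), ('ABB'), ('ABC'), ('ACC');
-- '%' plays the role of Access's '*' wildcard
INSERT INTO Table_Mapping VALUES ('A%', 'Product1'), ('ABC', 'Product2');
""")

# Branch 1: exact matches. Branch 2: wildcard matches, but only for
# products that do not already have an exact mapping entry.
rows = conn.execute("""
    SELECT p.Product, m.ProductName
    FROM Table_Product p JOIN Table_Mapping m ON p.Product = m.ProductGroup
    UNION ALL
    SELECT p.Product, m.ProductName
    FROM Table_Product p JOIN Table_Mapping m ON p.Product LIKE m.ProductGroup
    WHERE instr(m.ProductGroup, '%') > 0
      AND p.Product NOT IN (SELECT ProductGroup FROM Table_Mapping)
    ORDER BY 1
""").fetchall()
print(rows)
```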
You would want to use CASE WHEN. Try this: ``` Select Product, (CASE WHEN Product = 'AAA' THEN 'Product1' WHEN Product = 'ABB' THEN 'Product1' WHEN Product = 'ABC' THEN 'Product2' WHEN Product = 'ACC' THEN 'Product1' ELSE Null END) as 'ProductName' from Table_Product order by Product ```
How to map data with multi-level text?
[ "sql", "ms-access" ]
I have a table FieldList (ID int, Title varchar(50)) and want to create a temp table with a column for each record in FieldList, with the column name = FieldList.Title and the type varchar. This all happens in a stored proc, and the temp table is returned to the client for reporting and data analysis. e.g. FieldList table: ID Title 1 City 2 UserSuppliedFieldName 3 SomeField Resultant temp table columns: City UserSuppliedFieldName SomeField
You can use the following proc to do what you are wanting. It just requires that you: 1. Create the Temp Table before calling the proc (you will pass in the Temp Table name to the proc). This allows the temp table to be used in the current scope as Temp Tables created in Stored Procedures are dropped once that proc ends / returns. 2. Put just one field in the Temp Table; the datatype is irrelevant as the field will be dropped (you will pass in the field name to the proc) [*Be sure to change the proc name to whatever you like, but the temp proc name is used in the example that follows*] ``` CREATE PROCEDURE #Abracadabra ( @TempTableName SYSNAME, @DummyFieldName SYSNAME, @TestMode BIT = 0 ) AS SET NOCOUNT ON DECLARE @SQL NVARCHAR(MAX) SELECT @SQL = COALESCE(@SQL + N', [', N'ALTER TABLE ' + @TempTableName + N' ADD [') + [Title] + N'] VARCHAR(100)' FROM #FieldList ORDER BY [ID] SET @SQL = @SQL + N' ; ALTER TABLE ' + @TempTableName + N' DROP COLUMN [' + @DummyFieldName + N'] ; ' IF (@TestMode = 0) BEGIN EXEC(@SQL) END ELSE BEGIN PRINT @SQL END GO ``` The following example shows the proc in use. The first execution is in Test Mode that simply prints the SQL that will be executed. The second execution runs the SQL and the SELECT following that EXEC shows that the fields are what was in the FieldList table. 
``` /* -- HIGHLIGHT FROM "SET" THROUGH FINAL "INSERT" AND RUN ONCE -- to setup the example SET NOCOUNT ON; --DROP TABLE #FieldList CREATE TABLE #FieldList (ID INT, Title VARCHAR(50)) INSERT INTO #FieldList (ID, Title) VALUES (1, 'City') INSERT INTO #FieldList (ID, Title) VALUES (2, 'UserSuppliedFieldName') INSERT INTO #FieldList (ID, Title) VALUES (3, 'SomeField') */ IF (OBJECT_ID('tempdb.dbo.#Tmp') IS NOT NULL) BEGIN DROP TABLE #Tmp END CREATE TABLE #Tmp (Dummy INT) EXEC #Abracadabra @TempTableName = N'#Tmp', @DummyFieldName = N'Dummy', @TestMode = 1 -- look in "Messages" tab EXEC #Abracadabra @TempTableName = N'#Tmp', @DummyFieldName = N'Dummy', @TestMode = 0 SELECT * FROM #Tmp ``` Output from `@TestMode = 1`: > ALTER TABLE #Tmp ADD [City] VARCHAR(100), [UserSuppliedFieldName] > VARCHAR(100), [SomeField] VARCHAR(100) ; ALTER TABLE #Tmp DROP COLUMN > [Dummy] ;
Create this function and pass to it the list you require. It will generate a table for you temporarily that you can use in real-time with any other SQL query. I've provided an example as well. ``` CREATE FUNCTION [dbo].[fnMakeTableFromList] (@List varchar(MAX), @Delimiter varchar(255)) RETURNS table AS RETURN (SELECT Item = CONVERT(VARCHAR, Item) FROM (SELECT Item = x.i.value('(./text())[1]', 'varchar(max)') FROM (SELECT [XML] = CONVERT(XML, '<i>' + REPLACE(@List, @Delimiter, '</i><i>') + '</i>').query('.')) AS a CROSS APPLY [XML].nodes('i') AS x(i)) AS y WHERE Item IS NOT NULL); GO ``` And you can use it like this ... Parm1 = a list in a string seperated by a delimiter Parm2 = the delimiter character ``` SELECT * FROM fnMakeTableFromList('a,b,c,d,e',',') ``` Result is a table ... a b c d e
How to create a temp table from a dynamic list
[ "sql", "sql-server-2008" ]
I am trying to fetch column names from a table in Oracle, but I am not getting the column names. I used many queries, and found many queries on Stack Overflow, but I didn't get an answer. I used the queries below: ``` 1. SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME='TABLE_NAME'; 2. SELECT COLUMN_NAME from ALL_TAB_COLUMNS where TABLE_NAME='TABLE_NAME'; ``` But the output is ``` no rows selected ``` What is the problem here? Thank you very much.
Both of the queries are correct. The thing that can cause this problem is that maybe you didn't write your table name in `capital letters`. You must do something like this: ``` SELECT COLUMN_NAME FROM ALL_TAB_COLUMNS where TABLE_NAME = UPPER('TABLE_NAME'); ``` OR ``` SELECT COLUMN_NAME FROM USER_TAB_COLUMNS where TABLE_NAME = UPPER('TABLE_NAME'); ```
Try this: ``` SELECT column_name FROM all_tab_cols WHERE upper(table_name) = 'TABLE_NAME' AND owner = ' || +_db+ || ' AND column_name NOT IN ( 'password', 'version', 'id' ) ``` or ``` SELECT COLUMN_NAME FROM ALL_TAB_COLUMNS where TABLE_NAME = UPPER('TABLE_NAME'); ```
How to fetch Column Names from Table?
[ "sql", "oracle", "oracle11g" ]
## The Problem I have a PostgreSQL database on which I am trying to summarize the revenue of a cash register over time. The cash register can either have status ACTIVE or INACTIVE, but I only want to summarize the earnings created when it was ACTIVE for a given period of time. I have two tables; one that marks the revenue and one that marks the cash register status: ``` CREATE TABLE counters ( id bigserial NOT NULL, "timestamp" timestamp with time zone, total_revenue bigint, id_of_machine character varying(50), CONSTRAINT counters_pkey PRIMARY KEY (id) ) CREATE TABLE machine_lifecycle_events ( id bigserial NOT NULL, event_type character varying(50), "timestamp" timestamp with time zone, id_of_affected_machine character varying(50), CONSTRAINT machine_lifecycle_events_pkey PRIMARY KEY (id) ) ``` A counters entry is added every 1 minute and total\_revenue only increases. A machine\_lifecycle\_events entry is added every time the status of the machine changes. I have added an image illustrating the problem. It is the revenue during the blue periods which should be summarized. ![Timeline showing problem.](https://i.stack.imgur.com/GGChv.png) ## What I have tried so far I have created a query which can give me the total revenue in a given instant: ``` SELECT total_revenue FROM counters WHERE timestamp < '2014-03-05 11:00:00' AND id_of_machine='1' ORDER BY timestamp desc LIMIT 1 ``` ## The questions 1. How do I calculate the revenue earned between two timestamps? 2. How do I determine the start and end timestamps of the blue periods when I have to compare the timestamps in machine\_lifecycle\_events with the input period? Any ideas on how to attack this problem? 
## Update Example data: ``` INSERT INTO counters VALUES (1, '2014-03-01 00:00:00', 100, '1') , (2, '2014-03-01 12:00:00', 200, '1') , (3, '2014-03-02 00:00:00', 300, '1') , (4, '2014-03-02 12:00:00', 400, '1') , (5, '2014-03-03 00:00:00', 500, '1') , (6, '2014-03-03 12:00:00', 600, '1') , (7, '2014-03-04 00:00:00', 700, '1') , (8, '2014-03-04 12:00:00', 800, '1') , (9, '2014-03-05 00:00:00', 900, '1') , (10, '2014-03-05 12:00:00', 1000, '1') , (11, '2014-03-06 00:00:00', 1100, '1') , (12, '2014-03-06 12:00:00', 1200, '1') , (13, '2014-03-07 00:00:00', 1300, '1') , (14, '2014-03-07 12:00:00', 1400, '1'); INSERT INTO machine_lifecycle_events VALUES (1, 'ACTIVE', '2014-03-01 08:00:00', '1') , (2, 'INACTIVE', '2014-03-03 00:00:00', '1') , (3, 'ACTIVE', '2014-03-05 00:00:00', '1') , (4, 'INACTIVE', '2014-03-06 12:00:00', '1'); ``` [SQL Fiddle with sample data.](http://sqlfiddle.com/#!15/2ae72) Example query: The revenue between '2014-03-02 08:00:00' and '2014-03-06 08:00:00' is 300. 100 for the first ACTIVE period, and 200 for the second ACTIVE period.
## DB design To make my work easier I sanitized your DB design before I tackled the questions: ``` CREATE TEMP TABLE counter ( id bigserial PRIMARY KEY , ts timestamp NOT NULL , total_revenue bigint NOT NULL , machine_id int NOT NULL ); CREATE TEMP TABLE machine_event ( id bigserial PRIMARY KEY , ts timestamp NOT NULL , machine_id int NOT NULL , status_active bool NOT NULL ); ``` [Test case in the fiddle.](http://sqlfiddle.com/#!15/24e30/2) ### Major points * Using `ts` instead of "timestamp". Never use basic type names as column names. * Simplified & unified the name `machine_id` and made it out to be `integer` as it should be, instead of `varchar(50)`. * `event_type varchar(50)` should be an `integer` foreign key, too, or an `enum`. Or even just a `boolean` for only active / inactive. Simplified to `status_active bool`. * Simplified and sanitized `INSERT` statements as well. ## Answers ### Assumptions * `total_revenue only increases` (per question). * Borders of the outer time frame are *included*. * Every "next" row per machine in `machine_event` has the opposite `status_active`. > **1.** How do I calculate the revenue earned between two timestamps? ``` WITH span AS ( SELECT '2014-03-02 12:00'::timestamp AS s_from -- start of time range , '2014-03-05 11:00'::timestamp AS s_to -- end of time range ) SELECT machine_id, s.s_from, s.s_to , max(total_revenue) - min(total_revenue) AS earned FROM counter c , span s WHERE ts BETWEEN s_from AND s_to -- borders included! AND machine_id = 1 GROUP BY 1,2,3; ``` > **2.** How do I determine the start and end timestamps of the blue periods when I have to compare the timestamps in `machine_event` with the input period? This query for *all* machines in the given time frame (`span`). Add `WHERE machine_id = 1` in the CTE `cte` to select a specific machine. 
``` WITH span AS ( SELECT '2014-03-02 08:00'::timestamp AS s_from -- start of time range , '2014-03-06 08:00'::timestamp AS s_to -- end of time range ) , cte AS ( SELECT machine_id, ts, status_active, s_from , lead(ts, 1, s_to) OVER w AS period_end , first_value(ts) OVER w AS first_ts FROM span s JOIN machine_event e ON e.ts BETWEEN s.s_from AND s.s_to WINDOW w AS (PARTITION BY machine_id ORDER BY ts) ) SELECT machine_id, ts AS period_start, period_end -- start in time frame FROM cte WHERE status_active UNION ALL -- active start before time frame SELECT machine_id, s_from, ts FROM cte WHERE NOT status_active AND ts = first_ts AND ts <> s_from UNION ALL -- active start before time frame, no end in time frame SELECT machine_id, s_from, s_to FROM ( SELECT DISTINCT ON (1) e.machine_id, e.status_active, s.s_from, s.s_to FROM span s JOIN machine_event e ON e.ts < s.s_from -- only from before time range LEFT JOIN cte c USING (machine_id) WHERE c.machine_id IS NULL -- not in selected time range ORDER BY e.machine_id, e.ts DESC -- only the latest entry ) sub WHERE status_active -- only if active ORDER BY 1, 2; ``` Result is the list of blue periods in your image. [**SQL Fiddle demonstrating both.**](http://sqlfiddle.com/#!15/24e30/2) Recent similar question: [Sum of time difference between rows](https://stackoverflow.com/questions/22114645/sum-of-time-difference-between-rows/22117315)
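The heart of question 2 — pairing each status change with the next one — can be isolated into a small runnable sketch (SQLite 3.25+ via Python, using the sample `machine_lifecycle_events` data in simplified form; the edge cases at the time-frame borders, which the full answer handles, are ignored here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE machine_event (ts TEXT, machine_id INTEGER, status_active INTEGER);
INSERT INTO machine_event VALUES
    ('2014-03-01 08:00', 1, 1),
    ('2014-03-03 00:00', 1, 0),
    ('2014-03-05 00:00', 1, 1),
    ('2014-03-06 12:00', 1, 0);
""")

# LEAD() pairs every event with the following one; keeping only the
# ACTIVE rows turns the event log into a list of active periods.
rows = conn.execute("""
    SELECT period_start, period_end
    FROM (
        SELECT ts AS period_start, status_active,
               LEAD(ts) OVER (PARTITION BY machine_id ORDER BY ts) AS period_end
        FROM machine_event
    )
    WHERE status_active = 1
    ORDER BY period_start
""").fetchall()
print(rows)
```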
Use a self-join to build an intervals table with the actual status of each interval.

```
with intervals as (
  select e1.timestamp time1, e2.timestamp time2, e1.EVENT_TYPE as status
  from machine_lifecycle_events e1
  left join machine_lifecycle_events e2 on e2.id = e1.id + 1
)
select *
from counters c
join intervals i
  on (timestamp between i.time1 and i.time2 or i.time2 is null)
  and i.status = 'ACTIVE';
```

I didn't use aggregation in the result set; you can add that easily, I think. I also omitted machineId to simplify the demonstration of this pattern.
Summarize values across timeline in SQL
[ "", "sql", "postgresql", "aggregate-functions", "date-arithmetic", "window-functions", "" ]
I have a problem with `GROUP_CONCAT` and a WHERE filter. In my table I have module names which are linked to a client. I want to search clients by module name, but in the `GROUP_CONCAT` I still want to see all modules that are owned by the client. Currently it will display all clients with those modules, but it will only display that specific module. I can't figure out how to make them both work together. Any suggestions on how to get my expected result? These are some basic tables, the query I tried, the results I get, and the result I really wanted:

```
Client
+--------------------+
| id | name |
+--------------------+
| 1 | client1 |
| 2 | client2 |
| 3 | client3 |
| 4 | client4 |
+--------------------+

Module
+--------------------+
| id | name |
+--------------------+
| 1 | module1 |
| 2 | module2 |
| 3 | module3 |
| 4 | module4 |
+--------------------+

Client_Module
+-------------------------+
| client_id | module_id |
+-------------------------+
| 1 | 2 |
| 1 | 3 |
| 2 | 1 |
| 2 | 2 |
| 2 | 4 |
| 3 | 4 |
| 4 | 1 |
| 4 | 2 |
| 4 | 3 |
| 4 | 4 |
+-------------------------+
```

Query:

```
SELECT client.id, client.name, GROUP_CONCAT(module.name) AS modules
FROM client
LEFT JOIN client_module ON client_module.client_id = client.id
LEFT JOIN module ON module.id = client_module.module_id
WHERE module.id IN (1,2)
```

Results:

```
Received
+--------------------------------------------------+
| id | name | modules |
+--------------------------------------------------+
| 1 | client1 | module2 |
| 2 | client2 | module1,module2 |
| 4 | client4 | module1,module2 |
+--------------------------------------------------+

Expected
+------------------------------------------------------+
| id | name | modules |
+------------------------------------------------------+
| 1 | client1 | module2,module3 |
| 2 | client2 | module1,module2,module4 |
| 4 | client4 | module1,module2,module3,module4 |
+------------------------------------------------------+
```
You can try it like this, grouping first and then filtering the concatenated list with `FIND_IN_SET` in the `HAVING` clause:

```
SELECT client.id, client.name, GROUP_CONCAT(module.name) AS modules
FROM client
LEFT JOIN client_module ON client_module.client_id = client.id
LEFT JOIN module ON module.id = client_module.module_id
group by client.id
Having Find_In_Set('module1',modules)>0 or Find_In_Set('module2',modules)>0
```

[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!2/b2734/7)
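The grouping-then-filtering idea above can be checked end to end with SQLite, whose `GROUP_CONCAT` behaves like MySQL's for this purpose. This is a minimal sketch using the question's sample data; `SUM(m.id IN (1, 2))` stands in for the `FIND_IN_SET` test and counts how many of the wanted modules each client owns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE module (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE client_module (client_id INTEGER, module_id INTEGER);
INSERT INTO client VALUES (1,'client1'),(2,'client2'),(3,'client3'),(4,'client4');
INSERT INTO module VALUES (1,'module1'),(2,'module2'),(3,'module3'),(4,'module4');
INSERT INTO client_module VALUES
 (1,2),(1,3),(2,1),(2,2),(2,4),(3,4),(4,1),(4,2),(4,3),(4,4);
""")
rows = conn.execute("""
    SELECT c.id, GROUP_CONCAT(m.name) AS modules
    FROM client c
    JOIN client_module cm ON cm.client_id = c.id
    JOIN module m ON m.id = cm.module_id
    GROUP BY c.id
    HAVING SUM(m.id IN (1, 2)) > 0   -- filter on the aggregated group
    ORDER BY c.id
""").fetchall()
client_ids = [r[0] for r in rows]
# GROUP_CONCAT order is not guaranteed, so sort before comparing
client1_modules = sorted(rows[0][1].split(","))
```

Client 3 owns only module 4, so it drops out, while the surviving rows keep their full module lists.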
You are using `client_module.module_id`; change it to `client_module.client_id`. Use `GROUP BY` with `GROUP_CONCAT`:

```
SELECT client.id, client.name, GROUP_CONCAT(module.name) AS modules
FROM client
LEFT JOIN client_module ON client_module.client_id = client.id
LEFT JOIN module ON module.id = client_module.module_id
WHERE client_module.client_id IN (1,2,4)
group by client.id, client.name
```

[fiddle](http://sqlfiddle.com/#!2/97585/8)
MySQL group_concat with where clause
[ "", "mysql", "sql", "where-clause", "group-concat", "clause", "" ]
Here is my sql script ``` CREATE TABLE dbo.calendario ( datacal DATETIME NOT NULL PRIMARY KEY, horautil BIT NOT NULL DEFAULT 1 ); -- DELETE FROM dbo.calendario; DECLARE @dtmin DATETIME, @dtmax DATETIME, @dtnext DATETIME; SELECT @dtmin = '2014-03-11 00:00:00' , @dtmax = '2030-12-31 23:50:00' , @dtnext = @dtmin; WHILE (@dtnext <= @dtmax) BEGIN INSERT INTO dbo.calendario(datacal) VALUES (@dtnext); SET @dtnext = DATEADD(MINUTE, 10, @dtnext); END; ``` Basically, I want to create a table with date intervals of 10 minutes each. The loop inserts a lot of records, but I thought it would be fast to execute that. It takes several minutes... I'm using sql server 2008 r2. Any help is appreciated.
You should avoid loops etc and try to approach this set-based. (google for "RBAR SQL") Anyway, this runs in 1 sec on my laptop: ``` DROP TABLE dbo.calendario GO CREATE TABLE dbo.calendario ( datacal DATETIME NOT NULL PRIMARY KEY, horautil BIT NOT NULL DEFAULT 1 ); -- DELETE FROM dbo.calendario; DECLARE @dtmin DATETIME, @dtmax DATETIME, @intervals int SELECT @dtmin = '2014-03-11 00:00:00' , @dtmax = '2030-12-31 23:50:00' SELECT @intervals = DateDiff(minute, @dtmin, @dtmax) / 10 ;WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A, L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A, L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A, L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A, L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A, L4 AS B), L6 AS(SELECT 1 AS c FROM L5 AS A, L5 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY c) AS n FROM L6) INSERT INTO dbo.calendario(datacal) SELECT DateAdd(minute, 10 * (n - 1), @dtmin) FROM Nums WHERE n BETWEEN 1 AND @intervals + 1 -- SELECT * FROM dbo.calendario ORDER BY datacal ```
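The same set-based idea can be sketched in SQLite, which has no `sys.all_objects` to cross-join but can generate the number series with a recursive CTE instead. The range here is a single assumed day rather than the 17-year span from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calendario (datacal TEXT PRIMARY KEY)")
conn.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT 0
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 143   -- one day of 10-minute slots
    )
    INSERT INTO calendario
    SELECT datetime('2014-03-11 00:00:00', '+' || (n * 10) || ' minutes')
    FROM nums
""")
row_count = conn.execute("SELECT COUNT(*) FROM calendario").fetchone()[0]
last_slot = conn.execute("SELECT MAX(datacal) FROM calendario").fetchone()[0]
```

One INSERT ... SELECT replaces 144 single-row inserts; the same shape scales to the full range by raising the recursion bound.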
This code takes 23 seconds on my machine (and most of it in sorting) ``` DECLARE @DateMin AS datetime = '2014-03-11 00:00:00'; DECLARE @DateMax AS datetime = '2030-12-31 23:50:00'; DECLARE @Test AS Table ( datacal DATETIME NOT NULL PRIMARY KEY ); WITH Counter AS ( SELECT ROW_NUMBER() OVER (ORDER BY a.object_id) -1 AS Count FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b ) INSERT INTO @Test (datacal) SELECT DATEADD(minute, 10 * Count, @DateMin) FROM Counter WHERE DATEADD(minute, 10 * Count, @DateMin) <= @DateMax ```
Why is this sql script so slow and how to make it lightning fast?
[ "", "sql", "sql-server", "optimization", "sql-server-2008-r2", "" ]
I have a table called "Books" that has\_many "Chapters" and I would like to get all Books that have more than 10 chapters. How do I do this in a single query? I have this so far...

```
Books.joins('LEFT JOIN chapters ON chapters.book_id = books.id')
```
Found the solution... ``` query = <<-SQL UPDATE books AS b INNER JOIN ( SELECT books.id FROM books JOIN chapters ON chapters.book_id = books.id GROUP BY books.id HAVING count(chapters.id) > 10 ) i ON b.id = i.id SET long_book = true; SQL ActiveRecord::Base.connection.execute(query) ```
Here is the query using Rails 4, ActiveRecord ``` Book.includes(:chapters).references(:chapters).group('books.id').having('count(chapters.id) > 10') ```
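Both ActiveRecord versions boil down to a `GROUP BY` with a `HAVING COUNT(...)` filter. A minimal sketch of that underlying SQL in SQLite, with the threshold lowered to 2 chapters so the sample data stays short:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE chapters (id INTEGER PRIMARY KEY, book_id INTEGER);
INSERT INTO books VALUES (1, 'short'), (2, 'long');
INSERT INTO chapters (book_id) VALUES (1), (2), (2), (2);
""")
long_books = conn.execute("""
    SELECT books.id
    FROM books
    JOIN chapters ON chapters.book_id = books.id
    GROUP BY books.id
    HAVING COUNT(chapters.id) > 2
""").fetchall()
```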
Rails SQL Query counts
[ "", "sql", "ruby-on-rails", "ruby", "" ]
I have this SQL: ``` SELECT sets.set_id, responses.Question_Id, replies.reply_text, responses.Response FROM response_sets AS sets INNER JOIN responses AS responses ON sets.set_id = responses.set_id INNER JOIN replies AS replies ON replies.reply_id = responses.reply_id WHERE sets.Sctn_Id_Code IN ('668283', '664524') ``` A partial result: ![enter image description here](https://i.stack.imgur.com/vkrpy.jpg) I want to replace the `reply_text` and response columns with one column that will have the value of the response if it isn't null and the reply text value otherwise. I'm not sure if this can be done with a case statement, at least nothing I've dug up leads me to think that I can. Am I incorrect in assuming this?
You could use a case statement, but `COALESCE` would be easier: ``` Select sets.set_id, responses.Question_Id, COALESCE(responses.Response, replies.reply_text) From response_sets as sets inner join responses as responses on sets.set_id = responses.set_id inner join replies as replies on replies.reply_id = responses.reply_id where sets.Sctn_Id_Code in ('668283', '664524') ``` [COALESCE](http://msdn.microsoft.com/en-us/library/ms190349.aspx) allows you to list two or more expressions, and takes the first one that isn't null.
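`COALESCE` simply returns its first non-NULL argument, which is easy to verify in any engine; a quick sketch in SQLite with hypothetical response/reply values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (response TEXT, reply_text TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(None, "fallback reply"), ("real response", "ignored reply")])
# NULL response -> fall back to reply_text; non-NULL response wins
merged = [r[0] for r in conn.execute(
    "SELECT COALESCE(response, reply_text) FROM t ORDER BY rowid")]
```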
Yes, `ISNULL(responses.Response, replies.reply_text)` or `COALESCE` also works. Please see [this post](https://stackoverflow.com/questions/7408893/using-isnull-vs-using-coalesce-for-checking-a-specific-condtion) for more information and examples. This [article dives](http://sqlmag.com/t-sql/coalesce-vs-isnull) deeper into `ISNULL` vs `COALESCE`.
Can SQL Case statement be used to select a database column
[ "", "sql", "sql-server", "" ]
I have a table and there are blank values in several columns, scattered all over the place. I want to replace '' with NULL. What's the quickest way to do this? Is there a trick I'm not aware of?
I did it like this: ``` DECLARE @ColumnNumber INT DECLARE @FullColumnName VARCHAR(50) DECLARE @SQL NVARCHAR(500) SET @ColumnNumber = 0 WHILE (@ColumnNumber <= 30) BEGIN SET @FullColumnName = 'Column' + CAST(@ColumnNumber AS VARCHAR(10)) SET @SQL = 'UPDATE [].[].[] SET ' + @FullColumnName + ' = NULL WHERE ' + @FullColumnName + ' = ''''' EXECUTE sp_executesql @SQL; SET @ColumnNumber = @ColumnNumber + 1 END ```
`update <table name> set <column 1> = case when <column 1> = '' then null else <column 1> end, <column 2> = case when <column 2> = '' then null else <column 2> end` you can add as many lines as you have columns. No need for a where clause (unless you have massive amounts of data - then you may want to add a where clause that limits it to rows that have empty values in each of the columns you are checking)
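A third option neither answer mentions is `NULLIF(x, '')`, which returns `NULL` when `x` is the empty string and `x` otherwise, so one expression per column blanks out the empties in a single pass. Sketched in SQLite on an assumed two-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT)")
conn.execute("INSERT INTO t VALUES ('', 'kept'), ('kept', '')")
# NULLIF(col, '') maps '' -> NULL and leaves other values alone
conn.execute("UPDATE t SET a = NULLIF(a, ''), b = NULLIF(b, '')")
rows = conn.execute("SELECT a, b FROM t ORDER BY rowid").fetchall()
```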
quickest way to update a table to set blank values to NULLs?
[ "", "sql", "sql-server", "" ]
I have a "Location" data set returned by a simple query from a MySQL database: ``` A1 A10, A2 A3 ``` It is sequenced by an "Order By Location" statement. The issue is that I would like the returned sequence to be: ``` A1 A2 A3 A10 ``` I am not sure if this is achievable with a MySQL Order By statement?
Try this:

```
order by CAST(replace((Location),'A','') as signed )
```

[**DEMO HERE**](http://sqlfiddle.com/#!2/048db/5)

EDIT: if you have letters other than A, consider cutting off the first letter and ordering the rest as integers.

```
ORDER BY CAST(SUBSTR(loc, 2) as signed )
```

[**DEMO HERE**](http://sqlfiddle.com/#!2/048db/9)
I think the easiest way to do this is to order by the length and then the value: ``` order by length(location), location ```
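The length-then-value trick is portable; the same `ORDER BY` in SQLite yields the natural `A1, A2, A3, A10` order (it relies on the letter prefixes having equal length, as in the question's data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (location TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("A1",), ("A10",), ("A2",), ("A3",)])
# shorter strings sort first, ties broken alphabetically
ordered = [r[0] for r in conn.execute(
    "SELECT location FROM t ORDER BY LENGTH(location), location")]
```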
MySQL Order by numeric sequence
[ "", "mysql", "sql", "" ]
1st query - `select * from a full outer join b on a.x = b.y where b.y = 10`

2nd query - `select * from a full outer join b on a.x = b.y and b.y = 10`

Consider these table extensions:

```
Table a Table b
======= =======
x y
----- -----
1 2
5 5
10 10
```

The first query will return:

```
10 10
```

And, the second query will return:

```
1 NULL
5 NULL
10 10
```

Could you please let me know the reasons in detail?
The first query gives the expected result. The second query's result is different because you are joining the tables on both conditions (`a.x = b.y and b.y = 10`). Since it is an outer join, it prints the rows that satisfy the join condition as well as the unmatched rows padded with `NULL` — hence the output. I created a [sql fiddle](http://sqlfiddle.com/#!6/fc386/2) so that you can understand it better.
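The ON-versus-WHERE distinction is easy to reproduce. The sketch below uses a `LEFT JOIN` instead of `FULL OUTER JOIN` only because the latter arrived in SQLite 3.39; the principle is the same — a condition in `ON` decides what pairs up but keeps unmatched rows, while the same condition in `WHERE` filters the joined result and discards the NULL-extended rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x INTEGER);
CREATE TABLE b (y INTEGER);
INSERT INTO a VALUES (1), (5), (10);
INSERT INTO b VALUES (2), (5), (10);
""")
# extra condition in ON: unmatched a-rows survive with NULL
in_on = conn.execute(
    "SELECT a.x, b.y FROM a LEFT JOIN b ON a.x = b.y AND b.y = 10"
    " ORDER BY a.x").fetchall()
# same condition in WHERE: NULL-extended rows are filtered out
in_where = conn.execute(
    "SELECT a.x, b.y FROM a LEFT JOIN b ON a.x = b.y WHERE b.y = 10"
    " ORDER BY a.x").fetchall()
```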
The second query has the condition in the `ON` part, so all the records are included even if they don't have a pair in the joined table. The first one has the condition in the `WHERE` part, so the `NULL`s are filtered out.
Why the below join query returns different result?
[ "", "sql", "" ]
I have a Table with `ID` (workers\_id), `Name`, `time_worked`, `time_to_work`, `Contract_Start_Date`, `Date_of_Entry`. This table holds the entries for every day for a worker. I want to calculate the overtime he collected up till now. I have the same entry for each contract in that table for every day where the only difference between the entries is the `Contract_STart_Date` and the `time_to_work`. As soon as he gets a new contract he gets a new entrie for every day in that table (I have to correct that one day but have no time atm, so take that as unflexible for that problem). I have the following table ``` | ID | Name | time_worked | time_to_work | Contract_Start_Date | Date_of_Entry | | -- | ---- | ----------- | ------------ | ------------------- | ------------- | | 11 | Jack | 8 | 8 | 2013-01-01 | 2013-01-01 | | 11 | Jack | 8 | 8 | 2013-04-01 | 2013-01-01 | | 11 | Jack | 8 | 8 | 2013-01-01 | 2013-01-02 | | 11 | Jack | 8 | 8 | 2013-04-01 | 2013-01-02 | ... | 11 | Jack | 6 | 8 | 2013-01-01 | 2013-04-15 | | 11 | Jack | 6 | 4 | 2013-04-15 | 2013-04-15 | | 11 | Jack | 8 | 8 | 2013-01-01 | 2013-04-16 | | 11 | Jack | 8 | 4 | 2013-04-15 | 2013-04-16 | ``` I want to add up the overtime for Jack for the relevant contract. I think I found a way to solve this (logically) but cannot transfer my thoughts into code. This is the approach: I set a number (`SeqNumber`) for each day by contract (already accomplished by my code below). ``` | ID | Name | time_worked | time_to_work | Contract_Start_Date | Date_of_Entry | SeqNumber | -- | ---- | ----------- | ------------ | ------------------- | ------------- |---------- | 11 | Jack | 8 | 8 | 2013-01-01 | 2013-01-01 |1 | 11 | Jack | 8 | 8 | 2013-04-01 | 2013-01-01 |2 | 11 | Jack | 8 | 8 | 2013-01-01 | 2013-01-02 |1 | 11 | Jack | 8 | 8 | 2013-04-01 | 2013-01-02 |2 ... 
| 11 | Jack | 6 | 8 | 2013-01-01 | 2013-04-15 |1
| 11 | Jack | 6 | 4 | 2013-04-15 | 2013-04-15 |2
| 11 | Jack | 8 | 8 | 2013-01-01 | 2013-04-16 |1
| 11 | Jack | 8 | 4 | 2013-04-15 | 2013-04-16 |2
```

Now I set a number (`ConSeqNumber`) indicating which `Contract_Start_Date` the `Date_of_Entry` belongs to:

```
| ID | Name | time_worked | time_to_work | Contract_Start_Date | Date_of_Entry | SeqNumber| ConSeqNumber
| -- | ---- | ----------- | ------------ | ------------------- | ------------- |----------| ------------
| 11 | Jack | 8 | 8 | 2013-01-01 | 2013-01-01 |1 |1
| 11 | Jack | 8 | 8 | 2013-04-01 | 2013-01-01 |2 |1
| 11 | Jack | 8 | 8 | 2013-01-01 | 2013-01-02 |1 |1
| 11 | Jack | 8 | 8 | 2013-04-01 | 2013-01-02 |2 |1
...
| 11 | Jack | 6 | 8 | 2013-01-01 | 2013-04-15 |1 |2
| 11 | Jack | 6 | 4 | 2013-04-15 | 2013-04-15 |2 |2
| 11 | Jack | 8 | 8 | 2013-01-01 | 2013-04-16 |1 |2
| 11 | Jack | 8 | 4 | 2013-04-15 | 2013-04-16 |2 |2
```

The solution would be to sum every entry where the SeqNumber and the ConSeqNumber are equal. My output would be (according to the calculation `time_worked` - `time_to_work`, summing the values): (8-8) + (8-8) + (6-4) + (8-4) = 6

```
| Overtime |
| -------- |
| 6 |
```

My full code is:

```
select ID, Name,(sum(time_worked)-sum(time_to_work)) as 'overtime'
from (
  Select *, ROW_NUMBER() over (partition by Date_of_Entry order by Contract_Start_Date asc) as seqnum
  from MyTable
  where Contract_Start_Date <= Date_of_Entry
) MyTable
WHERE seqnum = 1
AND YearA = DATEPART(YEAR, GETDATE()) -1
AND DATE_of_Entry <= GETDATE()
AND DATEPART(MONTH, Date_of_Entry) BETWEEN 4 and 9
GROUP BY ID, Name
```
OK, looks like I found the solution:

Data Sample

```
CREATE TABLE #test(WorkerID int, TimeWorked int, TimeToWork int, ContractStartDate datetime, DateOfEntry datetime)

INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 8, 8, '2013-01-01', '2013-01-01');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 8, 4, '2013-04-15', '2013-01-01');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 8, 6, '2013-08-15', '2013-01-01');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 8, 8, '2013-01-01', '2013-01-02');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 8, 4, '2013-04-15', '2013-01-02');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 8, 6, '2013-08-15', '2013-01-02');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 7, 8, '2013-01-01', '2013-04-15');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 6, 4, '2013-04-15', '2013-04-15');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 6, 6, '2013-08-15', '2013-04-15');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 4, 8, '2013-01-01', '2013-04-16');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 8, 4, '2013-04-15', '2013-04-16');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 4, 6, '2013-08-15', '2013-04-16');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 2, 8, '2013-01-01', '2013-08-16');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 2, 6, '2013-04-15',
'2013-08-16');
INSERT INTO #test (WorkerID, TimeWorked, TimeToWork, ContractStartDate, DateOfEntry) VALUES (11, 2, 5, '2013-08-15', '2013-08-16');
```

and with this I get what I want. Thanks a lot to all helping me here!

```
---select WorkerID,(sum(TimeWorked)-sum(TimeToWork)) as 'overtime'
select *
---sum(timeworked - timetowork)
from (
  Select *, ROW_NUMBER() over (partition by DateOfEntry order by ContractStartDate desc) as seqnum
  from #test
  where ContractStartDate <= DateOfEntry) #test
where seqnum = 1

drop table #test
```
I have taken the same data samples as @Riley. If I take your sample data, the overtime also comes out right, i.e. 6.

```
;with CTE as
(
select *,ROW_NUMBER() over (partition by workerid,DateofEntry order by ContractStartDate asc) as seqnum,
ROW_NUMBER() over (partition by workerid order by workerid asc) as seqnum1 from @Hours
)
,CTE1 as
(
select WorkerID,sum(timeworked - timetowork)overtime from cte
where seqnum=1
group by WorkerID
)
select a.WorkerID,a.WorkerName,b.overtime from cte a inner join cte1 b
on a.WorkerID=b.WorkerID
where a.seqnum1=1
```
Conditional sum based on date (sum overtime by contract)
[ "", "sql", "sql-server", "t-sql", "" ]
I have a User model which has a `languages` attribute as an array (postgres)

A `User has_many :documents` and a `document belongs_to :user`

I want to find all documents that are written by users who know English and French

```
@langs = ["English", "French"]
Document.joins(:user).where(user.languages & @langs != nil )
```

This doesn't work. What is the correct way to do this?

Schema for languages

```
t.string "languages", default: [], array: true
```
Try this (the joined table is named `users` in the generated SQL):

```
Document.joins(:user).where("users.languages @> ARRAY[?]::varchar[]", @langs)
```

This should work; I tried it on my models with a similar structure.
Perhaps you need a "contains" operation on your array in the database (note the joined table is `users` in the generated SQL):

```
Document.joins(:user).where("users.languages @> ARRAY[?]", @langs)
```
Using Where on association's attributes condition
[ "", "sql", "ruby-on-rails", "postgresql", "ruby-on-rails-4", "where-clause", "" ]
This is my table structure

![enter image description here](https://i.stack.imgur.com/Kudft.png)

I tried this before posting this question:

```
select x.col1,x.col2
from
(
 (select A from #t union all select C from #t) col1,
 (select B from #t union all select D from #t) col2
)as x
```
You can try it like this:

```
Select A,B FROM #T
UNION ALL
Select C,D FROM #T WHERE C is not null
```
I would do it like this, just so it is absolutely clear:

```
SELECT T1.A AS A_or_C, T1.B AS B_or_D
FROM table_name T1
UNION
SELECT T2.C AS A_or_C, T2.D AS B_or_D
FROM table_name T2
```

Cheers
How to write sql query for following
[ "", "sql", "sql-server", "" ]
I want to add up the total sales by date using `HAVING SUM()`, but it is not working as expected.

```
SELECT sum(SalesA+SalesB) as Sales, sum(tax) as tax,
count(distinct SalesID) as NumOfSales, Date
FROM SalesTable
WHERE Date >= '2014-03-01'
GROUP BY Date, SalesA
HAVING sum(SalesA+SalesB) >= 7000
ORDER BY Date
```

The results are:

```
|Sales| tax | NumOfSales | Date |
 10224| 345 | 1 |2014-03-06|
 9224| 245 | 1 |2014-03-06|
 7224| 145 | 1 |2014-03-06|
```

If I remove the SalesA in the GROUP BY clause it seems to ignore my HAVING sum clause and adds all the totals. I would like the results to sum all by date, like this:

```
|Sales| tax | NumOfSales | Date |
 26672| 735 | 3 |2014-03-06
```

Thank you for any help you can provide.
Do you want to remove individual rows whose `SalesA + SalesB < 7000`, so that you only sum rows whose total `SalesA + SalesB >= 7000`?

```
SELECT sum(SalesA+SalesB) as Sales, sum(tax) as tax,
count(distinct SalesID) as NumOfSales, Date
FROM SalesTable
WHERE Date >= '2014-03-01'
and SalesA+SalesB >= 7000
GROUP BY Date
ORDER BY Date
```
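The crux of both answers — a per-row threshold belongs in `WHERE` (applied before grouping), while `HAVING` tests the grouped totals — can be seen side by side in SQLite with assumed sample figures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (d TEXT, a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("2014-03-06", 5000, 5224),   # 10224
                  ("2014-03-06", 5000, 4224),   #  9224
                  ("2014-03-06", 3000, 3000)])  #  6000: below 7000
# WHERE drops the 6000 row before summing
per_row_filter = conn.execute("""
    SELECT d, SUM(a + b) FROM sales
    WHERE a + b >= 7000
    GROUP BY d
""").fetchall()
# HAVING tests the grouped total, so every row is summed
group_filter = conn.execute("""
    SELECT d, SUM(a + b) FROM sales
    GROUP BY d
    HAVING SUM(a + b) >= 7000
""").fetchall()
```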
You can try rewriting your SQL statement as follows. ``` SELECT sum(SalesA+SalesB) as Sales, sum(tax) as tax, count(distinct SalesID) as NumOfSales, Date FROM SalesTable WHERE Date >= '2014-03-01' AND SalesA+SalesB >= 10000 GROUP BY Date ORDER BY Date ```
HAVING SUM() Issue
[ "", "sql", "" ]
I need to average a column that contains characters, not integers. For example, clients order from my company in two ways: phone and internet. I am being asked to get a percentage of how they all order.

```
Cust | OrderType
-----------------
A | Phone
A | Phone
A | Phone
A | Internet
B | Internet
B | Internet
B | Phone
```

How can I pull this data and show my managers that Customer A orders by phone 75% of the time and Internet 25% of the time, and Customer B orders by phone 33% of the time and Internet 67% of the time?
``` SELECT Cust, SUM(CASE WHEN OrderType = 'Phone' THEN 1 ELSE 0 END) * 100 / COUNT(*) 'Phone percentage', SUM(CASE WHEN OrderType = 'Phone' THEN 0 ELSE 1 END) * 100 / COUNT(*) 'Internet percentage' FROM Table1 GROUP BY Cust ```
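The same conditional-aggregation idea, sketched in SQLite (where a boolean like `order_type = 'Phone'` evaluates to 0/1, much like the `CASE` expression above); note the `100.0`, which avoids the integer-division truncation the answer's `* 100 / COUNT(*)` can hit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (cust TEXT, order_type TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("A", "Phone")] * 3 + [("A", "Internet")] +
                 [("B", "Internet")] * 2 + [("B", "Phone")])
rows = conn.execute("""
    SELECT cust,
           SUM(order_type = 'Phone') * 100.0 / COUNT(*) AS phone_pct
    FROM orders
    GROUP BY cust
    ORDER BY cust
""").fetchall()
phone_pct = {cust: round(pct) for cust, pct in rows}
```

With the question's table this gives A = 75% phone and B = 33% phone.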
You basically want the total number of orders divided by the count of each type of order. ``` /* create table #tmp (Cust CHAR(1), OrderType VARCHAR(10)) INSERT #tmp VALUES ('A', 'Phone') INSERT #tmp VALUES ('A', 'Phone') INSERT #tmp VALUES ('A', 'Phone') INSERT #tmp VALUES ('A', 'Internet') INSERT #tmp VALUES ('B', 'Internet') INSERT #tmp VALUES ('B', 'Internet') INSERT #tmp VALUES ('B', 'Phone') */ SELECT Cust, OrderType, (1.0 * COUNT(*)) / (SELECT COUNT(*) FROM #tmp t2 WHERE t2.Cust = t.Cust) pct, cast(cast(((1.0 * COUNT(*)) / (SELECT COUNT(*) FROM #tmp t2 WHERE t2.Cust = t.Cust) ) * 100 as int) as varchar) + '%' /* Formatted as a percent */ FROM #tmp t GROUP BY Cust, OrderType ```
Average of character fields?
[ "", "sql", "average", "percentage", "" ]
I have a table with two date columns, **ArrivalDate** and **DepartureDate**. I need to calculate the total time period in hours and minutes (not as a **date**) across all the entries in the table. I can get the time period of a particular record via DATEDIFF, but what I need is the **sum of all the date differences in my table, in hours and minutes**. What I am doing to get a single date difference is this:

```
set @StartDate = '10/01/2012 08:40:18.000'
set @EndDate = '10/04/2012 09:52:48.000'

SELECT CONVERT(CHAR(8), CAST(CONVERT(varchar(23),@EndDate,121) AS DATETIME)
-CAST(CONVERT(varchar(23),@StartDate,121)AS DATETIME),8) AS TimeDiff
```

This only gives a particular date difference. I need all the date differences in my table and then their sum.
This query will generate the results you are looking for: ``` SELECT SUM(DATEDIFF(minute, R.ArrivalDate, R.DepartureDate)) / 60 as TotalHours , SUM(DATEDIFF(minute, R.ArrivalDate, R.DepartureDate)) % 60 as TotalMinutes FROM TableWithDates R ``` The first column returned divides the total minutes by 60 to give you the whole number of hours. The second column returned calculates the remainder when dividing the total minutes by 60 to give you the additional minutes remaining. Combined the 2 columns give you the sum total of the elapsed hours and minutes between all of your Arrival and Departure dates.
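A portable equivalent of the minute-based `DATEDIFF` sum, using SQLite's `julianday()` to get each stay's length in minutes before splitting it into hours and minutes. The timestamps are assumed sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stays (arrival TEXT, departure TEXT)")
conn.executemany("INSERT INTO stays VALUES (?, ?)",
                 [("2012-10-01 08:00:00", "2012-10-01 09:30:00"),   # 90 min
                  ("2012-10-02 10:00:00", "2012-10-02 10:45:00")])  # 45 min
# julianday() returns fractional days; scale to minutes and sum
total_minutes = conn.execute("""
    SELECT CAST(ROUND(SUM((julianday(departure) - julianday(arrival))
                          * 24 * 60)) AS INTEGER)
    FROM stays
""").fetchone()[0]
hours, minutes = divmod(total_minutes, 60)
```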
Try this (note that `DATEDIFF` takes the start date first, then the end date):

```
SELECT SUM(DATEDIFF(HOUR, t.StartDate, t.EndDate)) AS Hours,
( SUM(DATEDIFF(MINUTE, t.StartDate, t.EndDate)) -
SUM(DATEDIFF(HOUR, t.StartDate, t.EndDate)) * 60 ) AS Minutes
FROM YOUR_TABLE t
```

It should do the trick.
Adding date difference of two dates in Sql for all columns
[ "", "sql", "sql-server", "sql-server-2008", "date", "datetime", "" ]
I select email across two tables as follows: ``` select email from table1 inner join table2 on table1.person_id = table2.id and table2.contact_id is null; ``` Now I have a column in table2 called `email` I want to update **email** column of table2 with email value as selected above. Please tell me the update with select sql syntax for POSTGRES **EDIT:** I did not want to post another question. So I ask here: The above select statement returns multiple rows. What I really want is: ``` update table2.email if table2.contact_id is null with table1.email where table1.person_id = table2.id ``` I am not sure how to do this. My above select statement seems incorrect. Please help. I may have found the solution: [Update a column of a table with a column of another table in PostgreSQL](https://stackoverflow.com/questions/13473499/update-a-column-of-a-table-with-a-column-of-another-table-in-postgresql?rq=1)
I was looking for the following solution: [Update a column of a table with a column of another table in PostgreSQL](https://stackoverflow.com/questions/13473499/update-a-column-of-a-table-with-a-column-of-another-table-in-postgresql?rq=1)

```
UPDATE table2 t2
SET val2 = t1.val1
FROM table1 t1
WHERE t2.table2_id = t1.table2_id
AND t2.val2 IS DISTINCT FROM t1.val1 -- to avoid empty updates
```
Have you tried something like: ``` UPDATE table2 SET email = (SELECT email FROM table1 INNER JOIN table2 ON table1.person_id = table2.id AND table2.contact_id IS NULL) WHERE ... ```
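The `UPDATE ... FROM` form in the accepted approach is PostgreSQL syntax; the portable fallback is a correlated subquery, which this sketch exercises in SQLite with hypothetical emails:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (person_id INTEGER, email TEXT);
CREATE TABLE table2 (id INTEGER, contact_id INTEGER, email TEXT);
INSERT INTO table1 VALUES (1, 'a@x.com'), (2, 'b@x.com');
INSERT INTO table2 VALUES (1, NULL, NULL), (2, 99, NULL);
""")
# correlated subquery looks up the matching email per row; the WHERE
# clause restricts the update to rows with no contact_id
conn.execute("""
    UPDATE table2
    SET email = (SELECT email FROM table1 WHERE table1.person_id = table2.id)
    WHERE contact_id IS NULL
""")
emails = conn.execute("SELECT email FROM table2 ORDER BY id").fetchall()
```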
updating a column in one table with value extracted from other table
[ "", "sql", "postgresql", "" ]
I've looked at various answers on StackOverflow for selecting the max value or replacing column values, but I can't seem to figure out how I could do both and return all the rows. Basically I want to return all rows but replace the value of a column in a given row if another row has a higher number in the same column and the same identifier. I'm not an SQL expert and this has me scratching my head and pulling my hair out... I'm hoping this can be done via a query without updating the data. Maybe I need to rework the data but this would be a huge manual task. Maybe I could do this in the view? I'd appreciate and be open to any suggestions for how to do this. Below is an example of the view that is being queried. The **"code"** is the common field.

```
+-------+-------+-----------------------------------+
| type1 | type2 | code | amt | |
+-------+-------+-----------------------------------+
| 1 | A | 100 | 59 | |
| 1 | B | 200 | 75 | |
| 2 | C | 100 | 65 <-- Max for code 100 |
| 2 | D | 200 | 80 <-- Max for code 200 |
| 3 | E | 100 | 55 | |
| 3 | F | 200 | 70 | |
+-------+-------+-----------------------------------+
```

I need to return all rows but replace the "amt" with the max value if the "code" is the same and the number is higher in another row. Here's an example of the output I'm looking for:

```
+-------+-------+------------------------------------------------+
| type1 | type2 | code | amt | |
+-------+-------+------------------------------------------------+
| 1 | A | 100 | 65 <-- replaced w/max for code 100 |
| 1 | B | 200 | 80 <-- replaced w/max for code 200 |
| 2 | C | 100 | 65 | |
| 2 | D | 200 | 80 | |
| 3 | E | 100 | 65 <-- replaced w/max for code 100 |
| 3 | F | 200 | 80 <-- replaced w/max for code 200 |
+-------+-------+------------------------------------------------+
```

The reason for trying this in a query is to keep the original data. Is this possible with a query or do I need to try to update the data instead? Many thanks!
You can try it like this:

```
Select type1,type2,code,Max(Amt) Over(PARTITION BY Code) AS MaxAmt from Table1
```

[**Sql Fiddle**](http://www.sqlfiddle.com/#!3/b2d64/1)
Try this ``` Select Type1,type2,code,Max(Amt) Over(PARTITION BY CODE) AS Val from Table1 Order by Type1 ``` **[Fiddle](http://sqlfiddle.com/#!6/b2d64/1)**
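On engines without window functions (or versions predating them), the same per-code replacement can be done by joining back to a grouped subquery of maxima; a sketch with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (type1 INTEGER, type2 TEXT, code INTEGER, amt INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                 [(1, "A", 100, 59), (1, "B", 200, 75),
                  (2, "C", 100, 65), (2, "D", 200, 80),
                  (3, "E", 100, 55), (3, "F", 200, 70)])
# the derived table mx holds one max per code; joining it back replaces
# every row's amt with its group's maximum while keeping all rows
rows = conn.execute("""
    SELECT t.type1, t.type2, t.code, mx.max_amt
    FROM t
    JOIN (SELECT code, MAX(amt) AS max_amt FROM t GROUP BY code) mx
      ON mx.code = t.code
    ORDER BY t.type1, t.type2
""").fetchall()
amts = [r[3] for r in rows]
```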
SQL - select all rows and replace column value(s) with max column value with identifier
[ "", "sql", "sql-server", "sql-server-2008", "max", "" ]
I am trying to get the complete row with the lowest price, not just the field with the lowest price.

Create table:

```
CREATE TABLE `Products` (
  `SubProduct` varchar(100),
  `Product` varchar(100),
  `Feature1` varchar(100),
  `Feature2` varchar(100),
  `Feature3` varchar(100),
  `Price1` float,
  `Price2` float,
  `Price3` float,
  `Supplier` varchar(100)
);
```

Insert:

```
INSERT INTO `Products` (`SubProduct`, `Product`, `Feature1`, `Feature2`, `Feature3`, `Price1`, `Price2`, `Price3`, `Supplier`) VALUES
('Awesome', 'Product', 'foo', 'foo', 'foor', '1.50', '1.50', '0', 'supplier1'),
('Awesome', 'Product', 'bar', 'foo', 'bar', '1.25', '1.75', '0', 'supplier2');
```

Select:

```
SELECT `SubProduct`, `Product`, `Feature1`, `Feature2`, `Feature3`, MIN(`Price1`), `Price2`, `Price3`, `Supplier`
FROM `Products`
GROUP BY `SubProduct`, `Product`
ORDER BY `SubProduct`, `Product`;
```

You can see that at <http://sqlfiddle.com/#!2/c0543/1/0> I get the first inserted row with the content of the column price1 from the second inserted row. I expect to get the complete row with the right features, supplier and other columns. In this example it should be the complete second inserted row, because it has the lowest price in column price1.
You need to get the MIN price rows and then JOIN those rows with the main table, like this: ``` SELECT P.`SubProduct`, P.`Product`, P.`Feature1`, P.`Feature2`, P.`Feature3`, `Price` AS Price1, P.`Price2`, P.`Price3`, P.`Supplier` FROM `Products` AS P JOIN ( SELECT `SubProduct`, `Product`, MIN(`Price1`) AS Price FROM `Products` GROUP BY `SubProduct`, `Product` ) AS `MinPriceRows` ON P.`SubProduct` = MinPriceRows.`SubProduct` AND P.`Product` = MinPriceRows.`Product` AND P.Price1 = MinPriceRows.Price ORDER BY P.`SubProduct`, P.`Product`; ``` Working Demo: <http://sqlfiddle.com/#!2/c0543/20> Here what I have done is to get a temporary recordset as `MinPriceRows` table which will give you MIN price per SubProduct and Product. Then I am joining these rows with the main table so that main table rows can be reduced to only those rows which contain MIN price per SubProduct and Product.
Try with this: ``` SELECT `p`.`SubProduct`, `p`.`Product`, `p`.`Feature1`, `p`.`Feature2`, `p`.`Feature3`, `p`.`Price1`, `p`.`Price2`, `p`.`Price3`, `p`.`Supplier` FROM `Products` `p` inner join (select MIN(`Price1`)as `Price1` From `Products` ) `a` on `a`.`Price1` = `p`.`Price1` ORDER BY `p`.`SubProduct`, `p`.`Product`; ``` demo: <http://sqlfiddle.com/#!2/c0543/24>
MySQL: Get the full row with min value
[ "", "mysql", "sql", "" ]
I'd like to add trailing zeros to a data set, however there is a `WHERE` clause involved. In a `DOB` field I have a date of `1971` and I'd like to add `0000` to make the length equal 8 characters. Sometimes there is `197108`, in which case I'd only need to add two zeros (`00`). The fields that are `null` are OK. Any ideas? Thanks in advance...
```
Update table
set Dob = LEFT(CONCAT(TRIM(Dob), '0000000'), 8)
where LEN(TRIM(Dob)) < 8
```
You can add trailing zeros by doing: ``` select left(col+space(8), 8) ``` However, you probably shouldn't be storing date in a character field.
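Note that `left(col + space(8), 8)` pads with spaces rather than the zeros the question asks for; appending a run of zeros and truncating gives true zero padding, and passes `NULL` through untouched. The SQLite spelling of that idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (dob TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("1971",), ("197108",), (None,)])
# NULL || '...' is NULL, so NULL rows stay NULL as the question requires
padded = [r[0] for r in conn.execute(
    "SELECT SUBSTR(dob || '00000000', 1, 8) FROM t ORDER BY rowid")]
```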
Adding trailing zeros to rows of data
[ "", "sql", "t-sql", "" ]
I have created a SQL query that will return rows from an Oracle linked server. The query works fine and will return 40 rows, for example. I would like the results to only be inserted into a table if the number of rows returned is greater than 40. My thinking would then be that I could create a trigger to fire out an email to say the number has been breached.
``` DECLARE @cnt INT SELECT @cnt = COUNT(*) FROM LinkedServer.database.schemaname.tablename IF @cnt > 40 INSERT INTO table1 VALUES(col1, col2, col3 .....) ```
Let's say that the query is:

```
select a.*
from remote_table a
```

Now you can modify the query:

```
select a.*, count(*) over () as cnt
from remote_table a
```

and `cnt` will contain the number of rows. Next,

```
select *
from (
   select a.*, count(*) over () as cnt
   from remote_table a
)
where cnt > 40;
```

will return rows only if the number of rows is greater than 40. All you have to do is

```
insert into your_table
select columns
from (
   select columns, count(*) over () as cnt
   from remote_table a
)
where cnt > 40;
```

and it will insert only if you have more than 40 rows in the source.
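The same guard can also be written without the window-function count: a scalar subquery counts the source rows, and the `INSERT ... SELECT` inserts nothing unless the count clears the threshold. A SQLite sketch with the threshold lowered to 3 to keep the sample small:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (v INTEGER);
CREATE TABLE dest (v INTEGER);
INSERT INTO src VALUES (1), (2);
""")
guarded_insert = ("INSERT INTO dest SELECT v FROM src "
                  "WHERE (SELECT COUNT(*) FROM src) > 3")
conn.execute(guarded_insert)          # only 2 source rows: no-op
after_two_rows = conn.execute("SELECT COUNT(*) FROM dest").fetchone()[0]
conn.execute("INSERT INTO src VALUES (3), (4)")
conn.execute(guarded_insert)          # 4 source rows: all copied
after_four_rows = conn.execute("SELECT COUNT(*) FROM dest").fetchone()[0]
```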
SQL - Insert if the number of rows is greater than
[ "", "sql", "sql-server", "oracle", "" ]
``` SELECT VehicleOwner, COALESCE(CarMileage, MotorcycleMileage, BicycleMileage, 0) AS Mileage, Count(*) FROM VehicleMileage Group by VehicleOwner Having Count(*)>1 ```
I figured it out... ``` SELECT b.VehicleOwner, City FROM (SELECT a.VehicleOwner, COALESCE(a.City, (Select b.City from a INNER Place b), (Select b.City from a INNER Place b)) AS City FROM VehicleMileage a) AS b WHERE b.VehicleOwner IN (SELECT VehicleOwner FROM VehicleMileage GROUP BY VehicleOwner HAVING COUNT(*)>1); ```
This works. ``` SELECT VehicleOwner, COALESCE(CarMileage, MotorcycleMileage, BicycleMileage, 0) AS Mileage, Count(*) FROM VehicleMileage Group by VehicleOwner, COALESCE(CarMileage, MotorcycleMileage, BicycleMileage, 0) Having Count(*)>1 ``` You can also put each column from COALESCE in GROUP BY section. Depends on what you want to achieve
How to group from COALESCE? SQL
[ "", "sql", "" ]
I have a research table like ``` researcher award person1 award1 person1 award2 person2 award3 ``` What I want is to count researchers based on their awards, but a researcher shouldn't be counted more than once. So in this example the result should be ``` 2 ``` because award1 and award2 belong to the same person, plus award3 which belongs to a different person. I already tried ``` SELECT count(award) from research where researcher=(select distinct(researcher) from researcher) ``` But it says ``` ERROR: more than one row returned by a subquery used as an expression ``` So, any alternative solution or changes?
``` select count(*) from (select researcher, count(*) from MyTable group by researcher) as tempTable; ```
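A minimal check that this nested form agrees with a plain `COUNT(DISTINCT ...)`, using SQLite via Python with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE research (researcher TEXT, award TEXT)")
conn.executemany("INSERT INTO research VALUES (?, ?)",
                 [("person1", "award1"), ("person1", "award2"), ("person2", "award3")])

# Count groups of researchers via a derived table...
nested = conn.execute(
    "SELECT COUNT(*) FROM (SELECT researcher FROM research GROUP BY researcher)"
).fetchone()[0]
# ...which is equivalent to counting distinct researchers directly.
direct = conn.execute(
    "SELECT COUNT(DISTINCT researcher) FROM research"
).fetchone()[0]
print(nested, direct)  # 2 2
```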
This will give you researcher and count ``` select researcher, count(*) as c from table group by researcher ``` maybe you only want awarded ones? ``` select researcher, count(*) as c from table where award is not null group by researcher ```
Count + Distinct SQL Query
[ "", "sql", "postgresql", "" ]
I'm trying to write an SQL query to show the `name_id` and `name` attributes for all the people who have only grown tomato (`veg_grown`), with the results in ascending order of the name attribute. ``` CREATE TABLE people ( name_id# CHAR(4) PRIMARY KEY, name VARCHAR2(20) NOT NULL, address VARCHAR2(80) NOT NULL, tel_no CHAR(11) NOT NULL ) CREATE TABLE area ( area_id# CHAR(5) PRIMARY KEY, name_id# REFERENCES people, area_location_adress VARCHAR2(80) NOT NULL ) CREATE TABLE area_use ( area_id# REFERENCES area, veg_grown VARCHAR (20) NOT NULL ) ``` The `veg_grown` attribute has no relation to the `people` table, but the `people` and `area_use` tables are linked through the `area` table, so I tried using `INNER JOIN` like this, which confused me and didn't even work: ``` SELECT name, name_id FROM people INNER JOIN area USING (name_id) SELECT area_id FROM area INNER JOIN area_use USING (area_id) WHERE veg_grown = 'tomato' ORDER BY name ASC; ``` Surely there must be a way to select the `name_id` and `name` of people who have only grown tomato. I will take any help or advice :) thanks
``` SELECT p.name, p.name_id FROM people p JOIN area a ON p.name_id = a.name_id JOIN area_use au ON a.area_id = au.area_id AND au.veg_grown = 'tomato' LEFT JOIN area_use au2 ON a.area_id = au2.area_id AND au2.veg_grown <> 'tomato' WHERE au2.area_id IS NULL; ``` This will use a `LEFT JOIN` to find people that *only* grow tomatoes. To find people that grow tomatoes and possibly anything else too, remove the `LEFT JOIN` part and everything below it. [An SQLfiddle to test with](http://sqlfiddle.com/#!4/7afd1/2). EDIT: If your field names contain `#` in the actual table, you'll need to quote the identifiers and add the `#`, I left them out in this sample.
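The exclusion logic can be exercised with a tiny in-memory database; this sketch uses SQLite via Python, and the names and sample rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people   (name_id TEXT, name TEXT);
CREATE TABLE area     (area_id TEXT, name_id TEXT);
CREATE TABLE area_use (area_id TEXT, veg_grown TEXT);
INSERT INTO people VALUES ('P1', 'Alice'), ('P2', 'Bob');
INSERT INTO area   VALUES ('A1', 'P1'), ('A2', 'P2');
-- Alice grows only tomato; Bob grows tomato AND carrot.
INSERT INTO area_use VALUES ('A1', 'tomato'), ('A2', 'tomato'), ('A2', 'carrot');
""")
rows = conn.execute("""
    SELECT p.name
    FROM people p
    JOIN area a       ON p.name_id = a.name_id
    JOIN area_use au  ON a.area_id = au.area_id  AND au.veg_grown = 'tomato'
    LEFT JOIN area_use au2 ON a.area_id = au2.area_id AND au2.veg_grown <> 'tomato'
    WHERE au2.area_id IS NULL
""").fetchall()
print(rows)  # [('Alice',)] -- Bob is excluded because he also grows carrot
```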
AFAICT you only want entries where all info is available, so there are no left/right joins. ``` SELECT p.name_id, p.name FROM people p JOIN area a ON p.name_id = a.name_id JOIN area_use au ON a.area_id = au.area_id WHERE au.veg_grown = 'tomato' ORDER BY p.name ASC ```
Using advance SELECT statement for SQL QUERY
[ "", "sql", "oracle", "oracle11g", "" ]
I'm creating a filtered index such that the WHERE filter includes the complete query criteria. With such an index, it seems that a key column would be unnecessary, though SQL requires me to add one. For example, consider the table: ``` CREATE TABLE Invoice ( Id INT NOT NULL IDENTITY PRIMARY KEY, Data VARCHAR(MAX) NOT NULL, IsProcessed BIT NOT NULL DEFAULT 0, IsInvalidated BIT NOT NULL DEFAULT 0 ) ``` Queries on the table look for new invoices to process, i.e.: ``` SELECT * FROM Invoice WHERE IsProcessed = 0 AND IsInvalidated = 0 ``` So, I can tune for these queries with a filtered index: ``` CREATE INDEX IX_Invoice_IsProcessed_IsInvalidated ON Invoice (IsProcessed) WHERE (IsProcessed = 0 AND IsInvalidated = 0) GO ``` **My question: What should the key column(s) for `IX_Invoice_IsProcessed_IsInvalidated` be?** Presumably the key column isn't being used. My intuition leads me to pick a column that is small and will keep the index structure relatively flat. Should I pick the table primary key (`Id`)? One of the filter columns, or both of them?
Because you have a clustered index on that table it doesn't really matter what you put in the key columns of that index; meaning `Id` is there free of charge. The only thing you can do is `include` everything in the included section of the index to actually have data handy at the leaf level of the index to exclude key lookups to the table. Or, if the queue is huge, then, perhaps, some other column would be useful in the key section. Now, if that table didn't have a primary key then you would have to `include` or specify as key columns all the columns that you need for joining or other purposes. Otherwise, RID lookups on heap would occur because on the leaf level of indexes you would have references to data pages.
What percentage of the table does this filtered index cover? If it's small, you may want to cover the entire table to handle the "SELECT \*" from the index without hitting the table. If it's a large portion of the table though this would not be optimal. Then I'd recommend using the clustered index or primary key. I'd have to research more because I forget which is optimal right now but if they're the same you should be set.
What key columns to use on filtered index with covering WHERE clause?
[ "", "sql", "sql-server", "sqlperformance", "filtered-index", "" ]
Apologies, this question is a little abstract and as such is a little hard to define so I will probably need to edit the question a couple of times to clarify: I've got a configuration file that I need to parse where each relevant line contains one of the following formats: ``` FieldName = Value FieldName(Index) = Value FieldName(Index1, Index2) = Value FieldName(Index1, Index2, ...IndexN) = Value ``` For example: ``` Field0 = 0 Field1(0, 0) = 0.01 Field1(0, 1) = 0.02 Field1(1, 0) = 0.03 Field1(1, 1) = 0.04 Field1(2, 0) = ADF0102BC5 Field1(2, 1) = ADF0102BC6 Field2(0, 0) = 0 Field2(0, 1) = 2 Field3(1) = 5 Field3(2) = 7 Field3(3) = 9 Field4(0, 0, 1) = 64.75 Field4(0, 1, 0) = 65.25 Field4(1, 0, 0) = 72.25 ``` The relevant lines are simple enough to parse from the file using regular expressions and I've got that bit handled already. What I'm having a problem with is how to model the data in the database so that as a new index comes into scope for a field, it can automatically be added without requiring new columns to be added to the table. The FieldName is always a Varchar of max length 50 The Value is always a numeric value represented in one of many string formats that need parsing individually and for the purpose of this question is largely irrelevant. Each of the indexes (if a field has them) is an integer value. Each has a meaning in its own right, but are used together as a mapping for a bunch of values to a field name. Each instance of a fieldname *i.e. Field1* will have a constant number of indexes, i.e. you will never have Field1(0, 0) and Field1(0, 0, 0). If Field1 has 2 indexes in one line of the configuration file, then all instances of Field1 would have 2 indexes. I need for the system to be flexible enough to parse the file and attach as many indexes as necessary for each field. 
I'm in 2 minds - do I treat the entire left side of the "equation" as the label, so that Field1(0, 0) becomes the "FieldName" (which makes querying by index quite difficult), or do I model my data such that these indexes effectively become coordinates for a field's value? If the indexes remained constant across all files I could model this using: ``` Table Fields( FieldId Integer Identity(1, 1) Primary Key, FieldName VarChar(50) ) Table FieldValues( FieldId Integer Constraint FK_FV_FID Foreign Key References Fields(FieldId) Index1 Integer Index2 Integer Index3 Integer Index4 Integer Value Varchar(50) ) ``` Unfortunately, since the number of indexes is unknown until the file is parsed, modeling that relationship is more complex. Once the data is stored, I then need to be able to simply query using the fieldname to get a list of all corresponding index references with their values, i.e. ``` Field1 ------ 0, 0 = 0.01 0, 1 = 0.02 1, 0 = 0.03 1, 1 = 0.04 2, 0 = ADF0102BC5 2, 1 = ADF0102BC6 ``` Or ``` Field1 Where Index1 = 0 ----------------------- 0, 0 = 0.01 0, 1 = 0.02 ``` Or ``` Field1 Where Index 2 = 1 ------------------------ 0, 1 = 0.02 1, 1 = 0.04 2, 1 = ADF0102BC6 ``` Or ``` Field1 Where Index1 = 0 And Index2 = 1 -------------------------------------- 0, 1 = 0.02 ``` If I've got a complicated table structure, it makes simplified querying a bit more of a pain in the neck.
Here is my thinking process on this situation: there will be two major kinds of queries. One where results are not sliced by IndexPosition and/or IndexValue, and a second where results are sliced by them. No single-table design can give that result without a trade-off; the trade-off might be storage, performance, or query complexity. The solution below gives up some storage, but takes care of performance and query simplicity while accessing this schema. **For the first type of queries only the table "SO\_FieldIndexValue" will be used.** **But for the second type of queries we need to join it with the other two, where we need the result filtered by IndexPosition/IndexPositionValue.** ![Schema Design](https://i.stack.imgur.com/VvYhW.jpg) ``` IF OBJECT_ID('SO_FieldIndexPositionValue') IS NOT NULL DROP TABLE SO_FieldIndexPositionValue IF OBJECT_ID('SO_FieldIndexValue') IS NOT NULL DROP TABLE SO_FieldIndexValue IF OBJECT_ID('SO_IndexPositionValue') IS NOT NULL DROP TABLE SO_IndexPositionValue CREATE TABLE SO_FieldIndexValue ( FIV_ID BIGINT NOT NULL IDENTITY CONSTRAINT XPK_SO_FieldIndexValue PRIMARY KEY NONCLUSTERED ,FieldName NVARCHAR(50) NOT NULL ,FieldIndex NVARCHAR(10) NOT NULL ,FieldValue NVARCHAR(500) NULL ) CREATE UNIQUE CLUSTERED INDEX CIDX_SO_FieldIndexValue ON SO_FieldIndexValue(FIV_ID ASC,FieldName ASC,FieldIndex ASC) CREATE NONCLUSTERED INDEX NCIDX_SO_FieldIndexValue ON SO_FieldIndexValue (FIV_ID,FieldName) INCLUDE (FieldIndex,FieldValue) CREATE TABLE SO_IndexPositionValue ( IPV_ID BIGINT NOT NULL IDENTITY CONSTRAINT XPK_SO_IndexPositionValue PRIMARY KEY NONCLUSTERED ,IndexName SYSNAME NOT NULL ,IndexPosition INT NOT NULL ,IndexPositionValue BIGINT NOT NULL ) CREATE UNIQUE CLUSTERED INDEX CIDX_SO_IndexPositionValue ON SO_IndexPositionValue(IPV_ID ASC,IndexPosition ASC, IndexPositionValue ASC) CREATE TABLE SO_FieldIndexPositionValue ( FIPV_ID BIGINT NOT NULL IDENTITY CONSTRAINT XPK_SO_FieldIndexPositionValue PRIMARY KEY NONCLUSTERED ,FIV_ID BIGINT NOT NULL REFERENCES
SO_FieldIndexValue (FIV_ID) ,IPV_ID BIGINT NOT NULL REFERENCES SO_IndexPositionValue (IPV_ID) ) CREATE CLUSTERED INDEX CIDX_SO_FieldIndexPositionValue ON SO_FieldIndexPositionValue(FIPV_ID ASC,FIV_ID ASC,IPV_ID ASC) ``` **I have provided a simple SQL API to just demonstrate the how insert into this schema can be handle easily using single API.** **There is plenty of opportunity to play with this API and make customization as needed. for example add validation if input is in proper format.** ``` IF object_id('pr_FiledValueInsert','p') IS NOT NULL DROP PROCEDURE pr_FiledValueInsert GO CREATE PROCEDURE pr_FiledValueInsert ( @FieldIndexValue NVARCHAR(MAX) ,@FieldValue NVARCHAR(MAX)=NULL ) AS BEGIN SET NOCOUNT ON BEGIN TRY BEGIN TRAN DECLARE @OriginalFiledIndex NVARCHAR(MAX)=@FieldIndexValue DECLARE @FieldName sysname='' ,@FIV_ID BIGINT ,@FieldIndex sysname ,@IndexName sysname ,@IndexPosition BIGINT ,@IndexPositionValue BIGINT ,@IPV_ID BIGINT ,@FIPV_ID BIGINT ,@CharIndex1 BIGINT ,@CharIndex2 BIGINT ,@StrLen BIGINT ,@StartPos BIGINT ,@EndPos BIGINT SET @CharIndex1 = CHARINDEX('(',@OriginalFiledIndex) SET @StrLen = LEN(@OriginalFiledIndex) SET @CharIndex2 = CHARINDEX(')',@OriginalFiledIndex) SET @FieldName = RTRIM(LTRIM(SUBSTRING(@OriginalFiledIndex,1,@CharIndex1-1))) SET @FieldIndex = RTRIM(LTRIM(SUBSTRING(@OriginalFiledIndex,@CharIndex1+1,@StrLen-@CharIndex1-1))) --Insert FieldIndexValue and Get @FIV_ID SELECT @FIV_ID = FIV_ID FROM SO_FieldIndexValue WHERE FieldName=@FieldName AND FieldIndex=@FieldIndex IF @FIV_ID IS NULL BEGIN INSERT INTO SO_FieldIndexValue ( FieldName,FieldIndex,FieldValue ) SELECT @FieldName,@FieldIndex,@FieldValue SELECT @FIV_ID = SCOPE_IDENTITY() END ELSE BEGIN RAISERROR('Filed and Index Combination already Exists',16,1) END --Find the First IndexPosition and IndexPositionValue and Get @IPV_ID SELECT @StartPos=CHARINDEX('(',@OriginalFiledIndex,1)+1 SELECT @EndPos = CASE WHEN CHARINDEX(',',@OriginalFiledIndex,@StartPos)<>0 THEN 
CHARINDEX(',',@OriginalFiledIndex,@StartPos)- @StartPos ELSE CHARINDEX(')',@OriginalFiledIndex,@StartPos) - @StartPos END SELECT @IndexPosition = 1 SELECT @IndexPositionValue = SUBSTRING(@OriginalFiledIndex,@StartPos,@EndPos) SELECT @IndexName = 'Index'+CAST(@IndexPosition AS Sysname) --Insert IndexPositionvalue SELECT @IPV_ID = IPV_ID FROM SO_IndexPositionValue WHERE IndexPosition=@IndexPosition AND IndexPositionValue = @IndexPositionValue IF @IPV_ID IS NULL BEGIN INSERT SO_IndexPositionValue ( IndexName , IndexPosition , IndexPositionValue ) SELECT @IndexName,@IndexPosition,@IndexPositionValue SET @IPV_ID = SCOPE_IDENTITY() END --Insert the First FieldIndexPositionValue IF NOT EXISTS( SELECT TOP(1) 1 FROM SO_FieldIndexPositionValue WHERE FIV_ID = @FIV_ID AND IPV_ID = @IPV_ID ) BEGIN INSERT SO_FieldIndexPositionValue( FIV_ID, IPV_ID ) SELECT @FIV_ID,@IPV_ID END --If More than One Index exist, process remining indexpositions WHILE @StrLen>@StartPos+@EndPos BEGIN SET @StartPos = @StartPos+@EndPos+1 SET @EndPos = CASE WHEN CHARINDEX(',',@OriginalFiledIndex,@StartPos)<>0 THEN CHARINDEX(',',@OriginalFiledIndex,@StartPos)- @StartPos ELSE CHARINDEX(')',@OriginalFiledIndex,@StartPos) - @StartPos END SELECT @IndexPosition = @IndexPosition+1 SELECT @IndexPositionValue = SUBSTRING(@OriginalFiledIndex,@StartPos,@EndPos) SELECT @IndexName = 'Index'+CAST(@IndexPosition AS Sysname) --Insert IndexPositionvalue SET @IPV_ID = NULL SELECT @IPV_ID = IPV_ID FROM SO_IndexPositionValue WHERE IndexPosition=@IndexPosition AND IndexPositionValue = @IndexPositionValue IF @IPV_ID IS NULL BEGIN INSERT SO_IndexPositionValue ( IndexName , IndexPosition , IndexPositionValue ) SELECT @IndexName,@IndexPosition,@IndexPositionValue SET @IPV_ID = SCOPE_IDENTITY() END --Insert FieldIndexPositionValue IF NOT EXISTS( SELECT TOP(1) 1 FROM SO_FieldIndexPositionValue WHERE FIV_ID = @FIV_ID AND IPV_ID = @IPV_ID ) BEGIN INSERT SO_FieldIndexPositionValue( FIV_ID, IPV_ID ) SELECT @FIV_ID,@IPV_ID END END COMMIT 
TRAN END TRY BEGIN CATCH ROLLBACK TRAN SELECT ERROR_MESSAGE() END CATCH SET NOCOUNT OFF END GO ``` **Now Sample Input Data** ``` EXECUTE pr_FiledValueInsert 'FIELD1(0,1,0)',101 EXECUTE pr_FiledValueInsert 'FIELD1(0,1,2)','ABCDEF' EXECUTE pr_FiledValueInsert 'FIELD1(1,0,1)','hello1' EXECUTE pr_FiledValueInsert 'FIELD2(1,0,0)',102 EXECUTE pr_FiledValueInsert 'FIELD2(1,1,0)','hey2' EXECUTE pr_FiledValueInsert 'FIELD2(1,0,1)','hello2' ``` **Sample Query1** ``` SELECT FieldName,FieldIndex,FieldValue FROM dbo.SO_FieldIndexValue WHERE FieldName = 'Field1' ``` **Sample Result1** ![SampleResult1](https://i.stack.imgur.com/7vmlH.jpg) **Sample Query2** ``` SELECT FieldName,FieldIndex AS CompeleteIndex,IndexPosition,IndexPositionValue,FieldValue FROM SO_FieldIndexPositionValue fipv JOIN dbo.SO_IndexPositionValue ipv ON ipv.IPV_ID=fipv.IPV_ID JOIN dbo.SO_FieldIndexValue fiv ON fiv.FIV_ID=fipv.FIV_ID WHERE (IndexPosition=2 AND IndexPositionValue=1) AND FieldName = 'Field1' ``` **Sample Result2** ![SampleResult2](https://i.stack.imgur.com/TQMbA.jpg)
One thing my SQL experience has taught me - if you don't know how many of them there are, then they belong in rows rather than in columns. I suggest two tables structured like this : ## Row Row\_Id, Field\_Name, Value ## Index Row\_Id, Index\_Position, Index\_Value To look up a parameter value by its indices, you would do multiple joins to the Index table e.g. ``` select r.Row_Id, r.Value from Row r join Index i1 on r.Row_Id = i1.Row_Id join Index i2 on r.Row_Id = i2.Row_Id join Index i3 on r.Row_Id = i3.Row_Id where i1.Index_Position = 1 and i1.Index_Value = '3' and i2.Index_Position = 2 and i2.Index_Value = '7' and i3.Index_Position = 3 and i3.Index_Value = '42' ``` EDIT : which basically comes down to conforming to [first normal form](http://en.wikipedia.org/wiki/First_normal_form). Having multiple pieces of information within one column (e.g. allowing your FieldName column to contain "FieldName(0,1)") violates this - which will lead to headaches later (as you noted - how to parse? how to compare rows with different numbers of entries? how to query?). EDIT 2 : sample data for the first three rows of the config file listed in your question. Basically every row in the config file maps to an entry in the Row table. And every single index parameter maps to an entry in the Index table (with a link back to which row it came from) : ## Row Row\_Id, Field\_Name, Value 1, "Field0", "0" 2, "Field1", "0.01" 3, "Field1", "0.02" ## Index Row\_Id, Index\_Position, Index\_Value 2, 1, 0 2, 2, 0 3, 1, 0 3, 2, 1
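A quick way to see the one-join-per-index lookup in action (SQLite via Python; the table names are changed to `row_tbl`/`idx_tbl` because `Index` is awkward as an identifier, and the sample rows mirror the question's `Field1` values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE row_tbl (row_id INTEGER, field_name TEXT, value TEXT);
CREATE TABLE idx_tbl (row_id INTEGER, index_position INTEGER, index_value INTEGER);
-- Field1(0, 1) = 0.02
INSERT INTO row_tbl VALUES (1, 'Field1', '0.02');
INSERT INTO idx_tbl VALUES (1, 1, 0), (1, 2, 1);
-- Field1(1, 1) = 0.04
INSERT INTO row_tbl VALUES (2, 'Field1', '0.04');
INSERT INTO idx_tbl VALUES (2, 1, 1), (2, 2, 1);
""")
# Look up Field1 where Index1 = 0 and Index2 = 1: one join per index position.
rows = conn.execute("""
    SELECT r.value
    FROM row_tbl r
    JOIN idx_tbl i1 ON r.row_id = i1.row_id AND i1.index_position = 1 AND i1.index_value = 0
    JOIN idx_tbl i2 ON r.row_id = i2.row_id AND i2.index_position = 2 AND i2.index_value = 1
    WHERE r.field_name = 'Field1'
""").fetchall()
print(rows)  # [('0.02',)]
```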
How can I model the following data structure?
[ "", "sql", "data-modeling", "" ]
I have written this code in Java and want to know if i can do it more efficiently in a SQL statement. I have two tables. One holds member's data, the other holds their measurements. ``` Members ID | Name ---------- 001 | Mary 002 | Jane 003 | Anne Measurements idMember | date | weight ------------------------------- 001 | 2013-06-21 | 65 002 | 2013-06-23 | 68 003 | 2013-06-21 | 75 001 | 2013-09-20 | 64 002 | 2013-06-21 | 70 001 | 2014-01-18 | 62 003 | 2013-06-21 | 74 002 | 2013-06-21 | 69 ``` What I need to be able to do is to find a total amount of weight lost (or gained). This means looking at each member and subtracting the last measurement from the first and then summing the total. Thanks!
``` select sum(s0.w - s1.w) as total_weight_loss from (select idMember, max(weight) as w from Measurements m where m.date = (select min(date) from Measurements n where m.idMember = n.idMember) group by idMember) s0 join (select idMember, min(weight) as w from Measurements m where m.date = (select max(date) from Measurements n where m.idMember = n.idMember) group by idMember) s1 on s0.idMember = s1.idMember ``` **EDIT** Since it turned out that there also is a Measurements.ID, all difficulties related to the handling of the measurements on the same date can be avoided altogether like this: ``` select sum(s0.w - s1.w) as total_weight_loss from (select idMember, weight as w from Measurements m where m.ID = (select min(ID) from Measurements n where m.idMember = n.idMember)) s0 join (select idMember, weight as w from Measurements m where m.ID = (select max(ID) from Measurements n where m.idMember = n.idMember)) s1 on s0.idMember = s1.idMember ```
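The min(ID)/max(ID) variant can be checked with a small in-memory table (SQLite via Python; the sample weights are abbreviated from the question, and rows are inserted in chronological order so the auto-increment id tracks the date):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (id INTEGER PRIMARY KEY, idMember INTEGER, weight REAL)")
# Insertion order is chronological, so min(id)/max(id) pick first/last measurement.
conn.executemany("INSERT INTO m (idMember, weight) VALUES (?, ?)",
                 [(1, 65), (2, 68), (1, 64), (1, 62), (2, 69)])
total = conn.execute("""
    SELECT SUM(s0.w - s1.w)
    FROM (SELECT idMember, weight AS w FROM m a
          WHERE a.id = (SELECT MIN(id) FROM m b WHERE a.idMember = b.idMember)) s0
    JOIN (SELECT idMember, weight AS w FROM m a
          WHERE a.id = (SELECT MAX(id) FROM m b WHERE a.idMember = b.idMember)) s1
      ON s0.idMember = s1.idMember
""").fetchone()[0]
print(total)  # 2.0 -- member 1 lost 3, member 2 gained 1
```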
Try: ``` select members.name, curr.date as curr_date, curr.weight as curr_weight, prev.weight as prev_weight, curr.weight - prev.weight as weight_change from members join measurements curr on members.id = curr.idmember join measurements prev on members.id = prev.idmember where prev.date = (select max(x.date) from measurements x where x.idmember = prev.idmember and x.date < curr.date) ``` SQL fiddle: <http://sqlfiddle.com/#!2/c16123/7/0>
MySQL: Subtract two values in rows in table A for each member in table B
[ "", "mysql", "sql", "" ]
I am trying to count the number of times that a word appears across all the columns of a row, for a given id. ``` +-------+------+------+------+------+------+------+ | id | pos1 | pos2 | pos3 | pos4 | pos5 | pos6 | +-------+------+------+------+------+------+------+ | core1 | AA | BB | CC | HH | YY | var1 | | core2 | AA | BB | var3 | TT | var2 | YY | | core5 | AA | BB | EE | GG | YY | ZZ | +-------+------+------+------+------+------+------+ ``` I've tried this code, but it only returns '1' and not '2', which is what I want. ``` SELECT count(id) FROM customize WHERE pos1 OR pos2 OR pos3 OR pos4 OR pos5 OR pos6 LIKE 'var%' AND id = 'core2'; ``` Any ideas of how I could do it?
``` SELECT (IF(pos1 LIKE 'var%', 1, 0) + IF(pos2 LIKE 'var%', 1, 0) + IF(pos3 LIKE 'var%', 1, 0) + IF(pos4 LIKE 'var%', 1, 0) + IF(pos5 LIKE 'var%', 1, 0) + IF(pos6 LIKE 'var%', 1, 0)) AS done FROM customize WHERE id = 'core2' ```
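Since `IF()` is MySQL-specific, the same summing trick works with standard `CASE` expressions; a reduced sketch with SQLite via Python (three columns instead of six, sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customize (id TEXT, pos1 TEXT, pos2 TEXT, pos3 TEXT)")
conn.execute("INSERT INTO customize VALUES ('core2', 'AA', 'var3', 'var2')")

# Each CASE yields 1 for a match and 0 otherwise; the sum is the match count.
done = conn.execute("""
    SELECT (CASE WHEN pos1 LIKE 'var%' THEN 1 ELSE 0 END)
         + (CASE WHEN pos2 LIKE 'var%' THEN 1 ELSE 0 END)
         + (CASE WHEN pos3 LIKE 'var%' THEN 1 ELSE 0 END) AS done
    FROM customize WHERE id = 'core2'
""").fetchone()[0]
print(done)  # 2
```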
Maybe do a load of unioned queries (one for each column) and then count the results. ``` SELECT COUNT(*) FROM ( SELECT id FROM customize WHERE id = 'core2' AND pos1 LIKE 'var%' UNION ALL SELECT id FROM customize WHERE id = 'core2' AND pos2 LIKE 'var%' UNION ALL SELECT id FROM customize WHERE id = 'core2' AND pos3 LIKE 'var%' UNION ALL SELECT id FROM customize WHERE id = 'core2' AND pos4 LIKE 'var%' UNION ALL SELECT id FROM customize WHERE id = 'core2' AND pos5 LIKE 'var%' UNION ALL SELECT id FROM customize WHERE id = 'core2' AND pos6 LIKE 'var%' ) AS t ``` Or if you want a count of the rows that have a column with that value: ``` SELECT COUNT(id) FROM customize WHERE id = 'core2' AND (pos1 LIKE 'var%' OR pos2 LIKE 'var%' OR pos3 LIKE 'var%' OR pos4 LIKE 'var%' OR pos5 LIKE 'var%' OR pos6 LIKE 'var%') ``` But it would be better to normalise your database design.
Count number of text coincidence in all columns of a row
[ "", "mysql", "sql", "rows", "" ]
I have this query wherein I want to find out the sales for the current year and the sales for last year. I cannot make it into 2 separate queries since it has to be of the same item code. Meaning the item codes used in the sales for the current year must also be the item codes used for the sales last year. The code below is working but it takes almost 8 to 9 minutes to fetch ``` select p.itemcode, p.itemdescription, ( select round((SUM(SA.QUANTITY*P.SellingPrice)),2) from sales s join product p on s.itemcode=p.itemcode where YEAR(s.date) = 2013 ), ( select round((SUM(SA.QUANTITY * P.SellingPrice)),2) from sales s join product p on s.itemcode=p.itemcode where YEAR(s.date) = 2012 ) from product p join supplier s on p.suppliercode = s.suppliercode join currency c on c.countrycode=s.countrycode join country co on co.countrycode=c.countrycode JOIN SALES SA ON SA.ITEMCODE=P.ITEMCODE where c.countrycode = 'NZ' group by p.itemcode limit 10 ``` Ideally the output should be ``` Itemcode Itemdescription SalesforCurrentYear SalesforLastYear GS771516 BUBBLE PARTY MACHINE 1035300.00 2079300.00 GSNBC-025 X'MAS HOUSE 600612.25 1397163.25 GSNBC-031 BRANDENBURGER TOR 741010.75 1572207.25 ``` Thanks!!
The query can be simplified by eliminating two joins: ``` select ....... ....... from product p join supplier s on p.suppliercode = s.suppliercode JOIN SALES SA ON SA.ITEMCODE=P.ITEMCODE where s.countrycode = 'NZ' group by p.itemcode limit 10 ``` Afterwards, two dependent subqueries in the select clause can be reduced to one outer join: ``` select p.itemcode, p.itemdescription, round((SUM( CASE WHEN YEAR(s.date) = 2013 THEN SA.QUANTITY*P.SellingPrice ELSE 0 END )),2) As Sum2013, round((SUM( CASE WHEN YEAR(s.date) = 2012 THEN SA.QUANTITY * P.SellingPrice ELSE 0 END )),2) As Sum2012 from product p join supplier s on p.suppliercode = s.suppliercode LEFT JOIN SALES SA ON SA.ITEMCODE=P.ITEMCODE where s.countrycode = 'NZ' group by p.itemcode limit 10 ``` Please try this query and let us know how it will perform.
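The conditional-aggregation step (one `SUM(CASE ...)` per year) can be illustrated in isolation; this sketch uses SQLite via Python with made-up sales rows, and `strftime('%Y', ...)` playing the role of MySQL's `YEAR()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (itemcode TEXT, qty INTEGER, price REAL, date TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("A", 2, 10.0, "2013-05-01"),
    ("A", 1, 10.0, "2013-08-01"),
    ("A", 4, 10.0, "2012-03-01"),
])
# One pass over the table produces both per-year totals side by side.
row = conn.execute("""
    SELECT itemcode,
           SUM(CASE WHEN strftime('%Y', date) = '2013' THEN qty * price ELSE 0 END) AS cur,
           SUM(CASE WHEN strftime('%Y', date) = '2012' THEN qty * price ELSE 0 END) AS prev
    FROM sales GROUP BY itemcode
""").fetchone()
print(row)  # ('A', 30.0, 40.0)
```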
Follow any of these steps: 1. Parse your query. 2. Remove redundant statements. 3. Use an inner join or outer join.
How to minimize time in mySQL select query?
[ "", "mysql", "sql", "select", "sql-optimization", "" ]
My SQL table structure is: ``` Order No. | Carton No | ... & so on D1 1 D1 2 D1 3 D1 4 D2 5 D2 6 ``` I want to get the count of Carton No grouped by Order No., together with the records included in the count, like this: ``` OrderNo | Count | Carton No D1 4 1,2,3,4 D2 2 5,6 ``` Is it possible to get the desired result using an SQL query?
Oracle: ``` SELECT "Order No.",COUNT("Order No.")as Count , listagg("Carton No" , ',') within group (order by "Carton No") "Carton No" FROM tableName GROUP BY "Order No." ``` SQL Server: ``` SELECT [Order No.],COUNT([Order No.])as Count , [Carton No]=STUFF((SELECT ','+[Carton No] FROM tableName WHERE [Order No.]=A.[Order No.] FOR XML PATH('')) , 1 , 1 , '' ) FROM tableName A GROUP BY [Order No.] ```
The `COUNT()` function gets you the count, but the comma-delimited list is much more difficult. As far as ANSI standard SQL, no. What you're asking denormalizes the query results, therefore the standard SQL response is to do this in the application, not in SQL. It's a display issue. Many RDBMSs do have vendor-specific aggregate functions that do this, however: Oracle: `wm_concat()` and `LISTAGG()`. `wm_concat()` is older, but it was never considered a documented function. MySQL: `GROUP_CONCAT()`. PostgreSQL: `array_agg()` and `string_agg()`. Older versions may not have both or either function. MS SQL: This one is extremely difficult. You generally have to use `STUFF()` in combination with `FOR XML PATH('')`. However, this behavior is not a documented behavior, so should be considered deprecated. Prior to SQL Server 2005's introduction of the `FOR XML PATH('')` statement, I don't believe there was any good way to do this. SQLite: `GROUP_CONCAT()`.
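For instance, SQLite's `GROUP_CONCAT()` produces the question's desired shape directly (sketched via Python; note that the order of items inside the list is not guaranteed without extra work):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no TEXT, carton_no INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("D1", 1), ("D1", 2), ("D1", 3), ("D1", 4), ("D2", 5), ("D2", 6)])

# COUNT gives the per-order count; GROUP_CONCAT gives the comma-delimited list.
rows = conn.execute("""
    SELECT order_no, COUNT(*) AS cnt, GROUP_CONCAT(carton_no) AS cartons
    FROM orders GROUP BY order_no ORDER BY order_no
""").fetchall()
print(rows)  # e.g. [('D1', 4, '1,2,3,4'), ('D2', 2, '5,6')]
```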
How to get Count of Records in sql
[ "", "sql", "sql-server", "" ]
I have the below table structure: Table1 ``` ╔═════╦══════╦═════════════╦═════════════╗ ║Col1 ║ Col2 ║ TableName ║ ColumnName ║ ╠═════╬══════╬═════════════╬═════════════╣ ║ 1 ║ abc ║ Table2 ║ column2 ║ ║ 2 ║ xyz ║ ║ ║ ║ 3 ║ pqr ║ Table1 ║ column1 ║ ║ 4 ║ jbn ║ ║ ║ ╚═════╩══════╩═════════════╩═════════════╝ Table2 : ╔════════╦═════════╗ ║Column1 ║ Column2 ║ ╠════════╬═════════╣ ║ 1 ║ A ║ ║ 2 ║ B ║ ║ 3 ║ C ║ ║ 4 ║ D ║ ╚════════╩═════════╝ Table3 ╔════════╦═════════╗ ║Column1 ║ Column2 ║ ╠════════╬═════════╣ ║ 1 ║ X ║ ║ 2 ║ Y ║ ║ 3 ║ Z ║ ║ 4 ║ A ║ ╚════════╩═════════╝ ``` I want to write a stored procedure that will select data from Table1, plus data from another table depending on the values of the TableName and ColumnName columns in Table1. I want the data in the following format: ``` ╔═════╦═════╦════════╗ ║Col1 ║ Col2║ List ║ ╠═════╬═════╬════════╣ ║ 1 ║ abc ║A,B,C,D ║ ║ 2 ║ xyz ║ ║ ║ 3 ║ pqr ║1,2,3,4 ║ ║ 4 ║ jbn ║ ║ ╚═════╩═════╩════════╝ ```
You will need a dynamic sql to get such a select. Check out the link <http://www.mssqltips.com/sqlservertip/1160/execute-dynamic-sql-commands-in-sql-server/> **EDIT:** The following code should do the trick. I have assumed that the column `Col1` in `Table1` is of type int. I have used Temp table to generate the required table. You can replace it will your table as per your convenience. Also I have used `#table1` which you can replace with your `Table1`. Also this might not be very good in terms of performance but this is the best I could come up with right now. ``` declare @count int, @Query VARCHAR(5000), @counter int, @tableName VARCHAR(50), @ColumnName VARCHAR(50), @Col1 INT, @Col2 VARCHAR(50) select @count = count(0) from #table1 SET @counter = 1 CREATE TABLE #table4 ( Col1 INT, Col2 VARCHAR(50), List VARCHAR(50) ) WHILE @counter <= @count BEGIN SELECT @tableName = TableName, @ColumnName = columnName, @Col1 = Col1, @Col2 = Col2 FROM #Table1 WHERE Col1 = @counter SELECT @Query = 'INSERT INTO #table4 (Col1 , Col2) VALUES (' + CONVERT(varchar(50),@Col1) + ', ''' + @Col2 + ''')' EXEC (@Query) SELECT @Query = '' IF ISNULL(@tableName, '') != '' AND ISNULL(@ColumnName, '') != '' BEGIN SELECT @Query = 'UPDATE #table4 SET LIST = STUFF((SELECT '','' + CONVERT(VARCHAR(50), ' + @ColumnName + ') FROM ' + @tableName + ' FOR XML PATH('''')),1,1,'''') WHERE Col1 = ' + CONVERT(varchar(50),@Col1) EXEC (@Query) END SET @counter = @counter + 1 END SELECT * FROM #table4 ``` Hope this helps
Try a temporary table. Have a look here: <http://www.sqlteam.com/article/temporary-tables>
Use result of one query in another sql query
[ "", "sql", "sql-server", "stored-procedures", "dynamic-sql", "" ]
I have a query here that doesn't work and I'm having trouble pinpointing my mistake. Any help would be great. Thanks. I am trying to retrieve records with a program name starting with 'C', but my query returns zero records. My `PROGRAM` table has an entry with a `ProgName` of `Chemistry`. ``` SELECT P.ProgNumber, ProgName, StudID, DateEnrolled FROM PROGRAM AS P, STUDENT AS S WHERE P.ProgNo = S.ProgNo AND ProgName LIKE 'C%'; ```
Use ``` LIKE "C*" ``` MS Access uses `*` rather than `%` as the `LIKE` wildcard (unless the database is set to ANSI-92 query mode).
``` SELECT P.ProgNumber, P.ProgName, S.StudID, S.DateEnrolled FROM PROGRAM P JOIN STUDENT S ON S.ProgNo = P.ProgNo WHERE P.ProgName LIKE 'C%'; ``` should work... you said you changed it to ='Chemistry', do you get the same result if you use lowercase c in chemistry?
SQL - Getting Name Starting with a particular letter
[ "", "sql", "ms-access-2013", "" ]
4am here... this is driving me nuts. I have a report table: * id\_report * invoice\_date * id\_user (sales person) I need to display, by month, how many items were sold, and also how many salespersons sold those items. For example, if in January user 3 sold 10 items and user 8 sold 7 items, that should return: ``` date | items | salespersons 2014-01 | 17 | 2 ``` This was my first approach, but it doesn't bring me the 3rd column: ``` SELECT DATE_FORMAT(invoice_date, "%Y-%m") AS date, COUNT(*) AS items FROM report GROUP BY date ``` Thanks!
In your query you haven't added the third column. Try this: ``` SELECT DATE_FORMAT(invoice_date, "%Y-%m") AS date, COUNT(*) AS items, COUNT(DISTINCT id_user) AS salespersons FROM report GROUP BY date ``` Working demo: <http://sqlfiddle.com/#!2/03e45/1>
It's important to use the `DISTINCT` keyword, or you will have the same count as items. ``` SELECT DATE_FORMAT(invoice_date, '%Y-%m') AS date, COUNT(*) AS items, COUNT(DISTINCT id_user) as sales_persons FROM report GROUP BY date ```
SQL query to return amount of sales and amount of salespersons
[ "", "mysql", "sql", "count", "group-by", "" ]
Let's take an example: there are two tables as below. ``` OldData ----------- id name address NewData ----------- nid name address ``` I want to update the `OldData` table with the `NewData` table. For that purpose I am trying to use the following query: ``` UPDATE OldData SET (name, address) = (SELECT name, address FROM NewData WHERE nid = 234) WHERE id = 123 ``` But it gives a syntax error. What is the proper way of doing what I'm trying to do?
``` UPDATE OldData o, NewData n SET o.name = n.name, o.address = n.address where n.nid=234 and o.id=123; ```
Try this: ``` Update oldData set name = (select name from newData where nid = 234), address = (select address from newData where nid = 234) where id = 123; ```
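The two-subquery pattern (one correlated subquery per column, with the target row pinned by an outer `WHERE`) can be verified with SQLite via Python; the table contents here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OldData (id INTEGER, name TEXT, address TEXT);
CREATE TABLE NewData (nid INTEGER, name TEXT, address TEXT);
INSERT INTO OldData VALUES (123, 'old name', 'old addr');
INSERT INTO NewData VALUES (234, 'new name', 'new addr');
""")
# One scalar subquery per column, restricted to the one target row.
conn.execute("""
    UPDATE OldData
    SET name    = (SELECT name    FROM NewData WHERE nid = 234),
        address = (SELECT address FROM NewData WHERE nid = 234)
    WHERE id = 123
""")
row = conn.execute("SELECT name, address FROM OldData").fetchone()
print(row)  # ('new name', 'new addr')
```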
How to update multiple columns with a single select query in MySQL?
[ "", "mysql", "sql", "database", "" ]
I have a table called `Info` with `ID, tName, restName` and `isOpen` columns, with values like this for example: ``` ID (Unique) | tName | restName | isOpen ----------------------------------------- 9 - TN10 - RN10 - 0 10 - TN10 - RN10 - 1 11 - TN11 - RN11 - 1 ``` I want to swap `tName` and `restName` values where `isOpen=1`. ``` ID (Unique) | tName | restName | isOpen ---------------------------------------- 9 - TN10 - RN10 - 0 10 - TN11 - RN11 - 1 11 - TN10 - RN10 - 1 ``` I know before running the query which values should change, i.e. which `tName-restName` pair should become which `tName-restName` pair. I am very new to queries and I just can't figure out how to swap values. If this were a language like C, I would just use a temp value and swap them. Can it be done with 1 query? I saw that it is easy when swapping columns, but I couldn't find any useful material for my situation. # UPDATE I forgot to mention that I don't know the ID values at the time, and I don't want to get them with another query. I want to make a query like "swap values where `restName=RN10` and `tName=TN10` with `restName=RN11` and `tName=TN11` and `isOpen=1`", if that is possible. So the selection is about `tName`, `restName` and `isOpen`, I guess. Thank you for your replies, have a nice day
I wrote a query after long tries :) that does what I want completely, but I can't be sure whether it is completely correct. I would be happy if someone could check it and tell me. Thank you... ``` UPDATE Info SET tName = CASE tName WHEN @tNameOld THEN @tNameNew WHEN @tNameNew THEN @tNameOld END, restName = CASE restName WHEN @restNameOld THEN @restNameNew WHEN @restNameNew THEN @restNameOld END WHERE tName in (@tNameOld,@tNameNew) AND isOpen=1 AND restName in (@restNameOld,@restNameNew) ```
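The CASE-based swap is easy to sanity-check. Here is a sketch using Python's `sqlite3` with the question's sample rows; SQLite has no `@user` variables, so named bind parameters stand in for them (an assumption of this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Info (ID INTEGER PRIMARY KEY, tName TEXT, restName TEXT, isOpen INTEGER);
INSERT INTO Info VALUES (9,'TN10','RN10',0), (10,'TN10','RN10',1), (11,'TN11','RN11',1);
""")

# Bind parameters play the role of @tNameOld/@tNameNew/@restNameOld/@restNameNew.
params = {"tOld": "TN10", "tNew": "TN11", "rOld": "RN10", "rNew": "RN11"}
conn.execute("""
UPDATE Info
SET tName    = CASE tName    WHEN :tOld THEN :tNew WHEN :tNew THEN :tOld END,
    restName = CASE restName WHEN :rOld THEN :rNew WHEN :rNew THEN :rOld END
WHERE tName IN (:tOld, :tNew)
  AND isOpen = 1
  AND restName IN (:rOld, :rNew)
""", params)

print(conn.execute("SELECT ID, tName, restName FROM Info ORDER BY ID").fetchall())
# → [(9, 'TN10', 'RN10'), (10, 'TN11', 'RN11'), (11, 'TN10', 'RN10')]
```

The row with `isOpen = 0` is untouched, and the two open rows end up swapped, matching the desired output in the question.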
For a more generic swap, let's say we have this: ``` declare @Swaps table ( tFirst varchar(10) not null, rFirst varchar(10) not null, tSecond varchar(10) not null, rSecond varchar(10) not null ) INSERT INTO @Swaps (tFirst,rFirst,tSecond,rSecond) VALUES ('TN10','RN10','TN11','RN11') --And more rows UPDATE i SET tName = o.tName, restName = o.restName FROM @Swaps s inner join Info i on ((s.tFirst = i.tName and s.rFirst = i.restName) or (s.tSecond = i.tName and s.rSecond = i.restName)) and i.IsOpen = 1 inner join Info o on ((s.tFirst = o.tName and s.rFirst = o.restName) or (s.tSecond = o.tName and s.rSecond = o.restName)) and o.IsOpen = 1 and (i.tName <> o.tName or i.restName <> o.restName) ``` --- (*Earlier answer*) For this specific swap, it can be done as: ``` UPDATE i SET tName = o.tName restName = o.restName FROM Info i INNER JOIN Info o on ( (i.ID = 10 and o.ID = 11) or (i.ID = 11 and o.ID = 10) ) ``` But I'm not sure how large your actual problem size is. If there are lots of swaps, you might want to store all of those combination in another (temp) table and do a further join to that.
How to swap the values of 2 columns in a row? SQL Server
[ "sql", "sql-server" ]
I have 2 tables (item and item\_historic) and I'm looking for a query that keeps the last item\_historic row (max date) and joins it to the item table. **item :** ``` id_item state_item 5560 complete 5570 removed ``` **item\_historic :** ``` id_historic id_item state_historic date 2002 5560 declared 2011-01-13 13:32:15 2198 5560 complete 2011-03-14 11:44:40 1780 5570 declared 2011-03-15 15:26:55 2208 5570 removed 2011-04-15 08:17:59 ``` **result :** ``` id_item id_historic state_item date state_historic 5560 2198 complete 2011-03-14 11:44:40 complete 5570 2208 removed 2011-04-15 08:17:59 removed ``` I want one id\_item only. I hope that makes sense; thanks in advance. EDIT: my attempt below gives the wrong result, and my question is what should the query look like? Tried: ``` select ah.id_item, ah.id_historic, at.state, date, ah.id_type, ah.state_item from item at left join item_historic ah on ah.id_item = at.id_item group by ah.id_item order by max(date) ; ```
In MySQL, the `not exists` approach is often the most efficient: ``` select ih.id_item, ih.id_historic, i.state_item, ih.date, ih.state_historic from item i left join item_historic ih on ih.id_item = i.id_item where not exists (select 1 from item_historic ih2 where ih2.id_item = ih.id_item and ih2.date > ih.date ); ``` This query will work best with an index on `item_historic(id_item, date)`. The `not exists` clause takes some getting used to in this context. It is saying "choose the row from `ih` where there is no more recent row from the table" -- which is the same as "get the maximum date". The advantage in MySQL is that no aggregation is required, and aggregation can be an expensive operation. But, for performance, you really need to compare the two approaches.
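The `not exists` pattern can be verified against the question's sample data. The sketch below uses Python's `sqlite3` with the question's `item`/`item_historic` tables, comparing the dates as ISO strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (id_item INTEGER, state_item TEXT);
CREATE TABLE item_historic (id_historic INTEGER, id_item INTEGER,
                            state_historic TEXT, date TEXT);
INSERT INTO item VALUES (5560,'complete'), (5570,'removed');
INSERT INTO item_historic VALUES
  (2002, 5560, 'declared', '2011-01-13 13:32:15'),
  (2198, 5560, 'complete', '2011-03-14 11:44:40'),
  (1780, 5570, 'declared', '2011-03-15 15:26:55'),
  (2208, 5570, 'removed',  '2011-04-15 08:17:59');
""")

# Keep only the historic row that has no newer row for the same item.
rows = conn.execute("""
SELECT i.id_item, h.id_historic, i.state_item, h.date, h.state_historic
FROM item i
JOIN item_historic h ON h.id_item = i.id_item
WHERE NOT EXISTS (SELECT 1 FROM item_historic h2
                  WHERE h2.id_item = h.id_item AND h2.date > h.date)
ORDER BY i.id_item
""").fetchall()
print(rows)
# → [(5560, 2198, 'complete', '2011-03-14 11:44:40', 'complete'),
#    (5570, 2208, 'removed', '2011-04-15 08:17:59', 'removed')]
```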
The following is a way of doing it with a sub-query: ``` SELECT i.id_item, ih.id_historic, i.state_item, sq.max_date as `date`, ih.state_historic FROM attestation i INNER JOIN (SELECT id_item, MAX(`date`) AS max_date FROM attestation_historic GROUP BY id_item) AS sq ON i.id_item = sq.id_item INNER JOIN attestation_historic ih ON i.id_item = ih.id_item AND sq.max_date = ih.`date`; ```
Query to join 2 tables and keep only 1 row from one of the tables
[ "mysql", "sql" ]
``` select * from dc_deal group by collection_id ``` In the collection\_id column I have the values **(1,3,3,4,4,5,NULL,NULL)**. The above query will return rows with **(1,3,4,5,NULL)**, but I want to skip grouping on NULL values and need a result like **(1,3,4,5,NULL,NULL)**
If we have a unique column (or set of columns) in the table, then we can add another expression to the `GROUP BY`. The expression needs to return a unique value for each row when `collection_id` is null. Otherwise, it returns a constant. Assuming we have a unique `id` column in the table, then we can do something like this: ``` ... GROUP BY collection_id, IF(collection_id IS NULL, id, 0) ``` That second expression in the `GROUP BY` returns a constant value `0` when `collection_id` is not null. But it returns a unique value for each row when collection\_id is null. Note that `id` here is just a reference to a column that is defined to be unique within the table. The PRIMARY KEY is a good candidate. If we don't have a unique index on a single column, then we can repeat this same type of expression for each column in our unique constraint, or for any set of expressions that is guaranteed to be unique on each row. ``` ... GROUP BY collection_id , IF(collection_id IS NULL, col1, '') , IF(collection_id IS NULL, col2, NULL) , IF(collection_id IS NULL, col3, collection_id) ``` Alternatively, we can use an expression that generates a unique value: ``` ... GROUP BY IFNULL(collection_id,UUID()) ```
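The idea can be checked with SQLite (used here purely as a stand-in; SQLite lacks MySQL's `IF()` and `UUID()`, so the sketch substitutes the equivalent `CASE` expression, and the ids are generated by the primary key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dc_deal (id INTEGER PRIMARY KEY, collection_id INTEGER);
INSERT INTO dc_deal (collection_id) VALUES (1),(3),(3),(4),(4),(5),(NULL),(NULL);
""")

# The CASE expression is a constant for non-NULL collection_ids,
# and the unique id for NULL ones, so NULL rows are never merged.
rows = conn.execute("""
SELECT collection_id
FROM dc_deal
GROUP BY collection_id,
         CASE WHEN collection_id IS NULL THEN id ELSE 0 END
ORDER BY collection_id
""").fetchall()
print(rows)
# → [(None,), (None,), (1,), (3,), (4,), (5,)]
```

Six groups come back: one per distinct non-NULL value, plus one per NULL row.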
Try this: ``` SELECT * FROM dc_deal GROUP BY collection_id, case WHEN collection_id IS NULL THEN ID ELSE 0 END ``` Replace `ID` with another column in the table to group by. See an example in [**SQL Fiddle**](http://sqlfiddle.com/#!3/6873a/3).
MySQL Group By And Skip Grouping On Null Values
[ "mysql", "sql", "null", "group-by", "skip" ]
Here is my table; the table name is "loan": ``` ID(int) Loan Amount(Decimal) Initial Amount(decimal) Interest(Decimal) 1 100000 0 5 2 230000 0 10 3 150000 0 2 ``` I want to update all the values in the "Initial Amount" column. I will set each one to the result of (loan amount x interest rate). Example computation: ``` 100,000 x 5% = 5,000 ``` Then the value of 5,000 will be written to "Initial Amount".
I think something like this would work: ``` update loan set `Initial Amount` = `Loan Amount` * (`Interest` / 100) ```
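A quick check of the arithmetic, sketched with Python's `sqlite3` (the column names with spaces are rewritten with underscores here, which is an assumption of the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE loan (ID INTEGER PRIMARY KEY, loan_amount REAL,
                   initial_amount REAL, interest REAL);
INSERT INTO loan VALUES (1,100000,0,5), (2,230000,0,10), (3,150000,0,2);
""")

# Divide by 100.0 so the interest column is treated as a percentage.
conn.execute("UPDATE loan SET initial_amount = loan_amount * (interest / 100.0)")

print(conn.execute("SELECT initial_amount FROM loan ORDER BY ID").fetchall())
# → [(5000.0,), (23000.0,), (3000.0,)]
```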
``` update your_table set `initial amount` = `loan amount` * (`interest rate`/100.0) ```
How can I update 1 Column in SQL with different formula
[ "mysql", "sql", "sql-update" ]
Could anybody explain to me what's wrong with this query? ``` SELECT Attempts, (CASE WHEN LastLogin IS NOT NULL AND DATE_ADD(LastLogin, INTERVAL 1 MINUTE)>NOW() THEN 1 ELSE 0) AS Denied FROM tbl_loginLocks WHERE IP = "192.168.178.43"; ``` I got an error that an operation is missing. Any help would be appreciated. Thanks!
MS Access does not have a CASE statement; use IIf: ``` SELECT Attempts, IIf(Not IsNull(LastLogin) AND DATEADD("n",1,LastLogin)>NOW(),1,0) AS Denied FROM tbl_loginLocks WHERE IP = "192.168.178.43" ``` [Dateadd](http://office.microsoft.com/en-ie/access-help/dateadd-function-HA001228810.aspx) [IIF](http://www.techonthenet.com/access/functions/advanced/iif.php)
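For engines that do support `CASE`, the same Denied flag looks like the sketch below, written with Python's `sqlite3`; the sample rows and SQLite's date functions are assumptions of this sketch, since the question targets Access:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_loginLocks (IP TEXT, Attempts INTEGER, LastLogin TEXT);
INSERT INTO tbl_loginLocks VALUES
  ('192.168.178.43', 3, datetime('now')),               -- just logged in -> denied
  ('192.168.178.44', 1, datetime('now', '-2 minutes')); -- old enough -> allowed
""")

rows = conn.execute("""
SELECT IP, Attempts,
       CASE WHEN LastLogin IS NOT NULL
             AND datetime(LastLogin, '+1 minute') > datetime('now')
            THEN 1 ELSE 0 END AS Denied
FROM tbl_loginLocks
ORDER BY IP
""").fetchall()
print(rows)
# → [('192.168.178.43', 3, 1), ('192.168.178.44', 1, 0)]
```

The `END` keyword closing the `CASE` is exactly what the query in the question was missing.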
You're missing `END` at the end of the `CASE`: ``` SELECT Attempts, (CASE WHEN LastLogin IS NOT NULL AND DATE_ADD(LastLogin, INTERVAL 1 MINUTE)>NOW() THEN 1 ELSE 0 END) AS Denied FROM tbl_loginLocks WHERE IP = '192.168.178.43'; ``` Alternatively, use an IIF statement: ``` SELECT Attempts, IIF(LastLogin IS NOT NULL AND DATEADD("n", 1, LastLogin) >NOW(), 1, 0) AS Denied FROM tbl_loginLocks WHERE IP = '192.168.178.43'; ```
MS Access query not working
[ "sql", "ms-access", "case" ]
How do I find the data whose 3rd character is an underscore ('\_')? Like this: ``` Names ------ al_en vi_i sm_th ```
you could do: ``` select name from t1 where SUBSTRING(name, 3 , 1) = '_' ``` [working fiddle](http://sqlfiddle.com/#!3/40045/5)
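In SQLite (and Oracle) the function is `substr` rather than `SUBSTRING`; here is a sketch in Python's `sqlite3`, with a couple of extra invented rows that should not match:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (name TEXT);
INSERT INTO t1 VALUES ('al_en'), ('vi_i'), ('sm_th'), ('a_en'), ('awen');
""")

# substr(name, 3, 1) is the single character at position 3 (1-based).
rows = conn.execute("""
SELECT name FROM t1
WHERE substr(name, 3, 1) = '_'
ORDER BY rowid
""").fetchall()
print(rows)
# → [('al_en',), ('vi_i',), ('sm_th',)]
```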
You can make use [LIKE](http://docs.oracle.com/cd/E11882_01/server.112/e41084/conditions007.htm#SQLRF52141) operator with [ESCAPE](http://docs.oracle.com/cd/E11882_01/server.112/e41084/conditions007.htm#SQLRF52148) clause. Since '\_' is a pattern-matching character, you need to use ESCAPE clause. ``` where names like '__\_%' escape '\'; ``` Here first to underscores match any two characters, escape character '\' causes the third underscore to be interpreted literally. ``` SQL> with x(y) as ( select 'al_en' from dual union all select 'a_en' from dual union all select 'awen' from dual union all select '___en' from dual union all select 'sm_' from dual ) select y from x where y like '__\_%' escape '\'; Y ----- al_en ___en sm_ ```
To retrieve data whose third character is '_' underscore?
[ "sql", "oracle" ]
I have a select statement which might return more than one value, so I would like to store the selected values in a table (a kind of array, since there is no concept of an array in SQL) and also reuse those stored values. My select query is as follows: ``` if exists( Select StartDate,EndDate FROM Reservations1 where (DateAdd(day,0,StartDate) >= @StartDate) and (DateAdd(day,0,EndDate) <= @EndDate) and Requestid in (Select RequestId from RequestModelMap1 where ModelSerialNumber=@ModelSerialNumber) ) ``` If it returns some values then I want to store them.
Depending on the size of the "array" you might just want to store it in a `table variable`. Smaller sets I store in table variables, but larger ones I store in temp tables. ``` DECLARE @MyDateArray TABLE ( StartDate DATETIME ,EndDate DATETIME ) INSERT INTO @MyDateArray ( StartDate ,EndDate ) SELECT StartDate ,EndDate FROM Reservations1 WHERE ( DATEADD(day, 0, StartDate) >= @StartDate ) AND ( DATEADD(day, 0, EndDate) <= @EndDate ) AND Requestid IN ( SELECT RequestId FROM RequestModelMap1 WHERE ModelSerialNumber = @ModelSerialNumber ) SELECT * FROM @MyDateArray ```
Some pseudo SQL since I'm not sure what db system you are using: ``` -- create a table to store results CREATE TABLE SelectedDates ( StartDate DATETIME, EndDate DATETIME ); -- empty it TRUNCATE TABLE SelectedDates -- insert data INSERT INTO SelectedDates (StartDate, EndDate) Select StartDate,EndDate FROM Reservations1 where (DateAdd(day,0,StartDate) >= @StartDate) and (DateAdd(day,0,EndDate) <= @EndDate) and Requestid in (Select RequestId from RequestModelMap1 where ModelSerialNumber=@ModelSerialNumber) ```
How to use Arrays in sql
[ "sql", "sql-server", "sql-server-2008" ]
I have the following table structure, which can't be changed. ![enter image description here](https://i.stack.imgur.com/Ic3Ai.png) I'm trying to join these tables and want to avoid duplicate records as well. ``` select p.ProductId,p.ProductName,inv.Details from Products p inner join Inventory inv on(p.ProductId = inv.ProductId) ``` Here is [**SqlFiddle**](http://sqlfiddle.com/#!3/502f0/2/0).
From SQL Server 2005 onwards you can use CROSS APPLY. With CROSS APPLY you can make a join inside the subselect, as demonstrated here. With TOP 1 you get at most one row from the Inventory table. It is also possible to add an ORDER BY clause to the subselect (without one, which row TOP 1 returns is not guaranteed); however, that seems out of scope for your question. ``` select p.ProductId,p.ProductName,x.Details from Products p cross apply (SELECT top 1 inv.Details FROM Inventory inv WHERE p.ProductId = inv.ProductId) x ```
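Engines without `APPLY`, such as SQLite, can express the same "one Inventory row per product" with a correlated scalar subquery. A sketch in Python's `sqlite3` with invented rows; the `ORDER BY` inside the subquery makes the picked row deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (ProductId INTEGER, ProductName TEXT);
CREATE TABLE Inventory (ProductId INTEGER, Details TEXT);
INSERT INTO Products VALUES (1,'Widget'), (2,'Gadget');
INSERT INTO Inventory VALUES (1,'bin A'), (1,'bin B'), (2,'bin C');
""")

# LIMIT 1 plays the role of TOP 1: at most one Details value per product,
# so product 1's two inventory rows no longer produce duplicates.
rows = conn.execute("""
SELECT p.ProductId, p.ProductName,
       (SELECT i.Details FROM Inventory i
        WHERE i.ProductId = p.ProductId
        ORDER BY i.Details
        LIMIT 1) AS Details
FROM Products p
ORDER BY p.ProductId
""").fetchall()
print(rows)
# → [(1, 'Widget', 'bin A'), (2, 'Gadget', 'bin C')]
```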
You can use row\_number function to remove duplicates ``` WITH New_Inventory AS ( SELECT Productid, Details, ROW_NUMBER() OVER (Partition by Productid ORDER BY details) AS RowNumber FROM Products ) select p.ProductId,p.ProductName,inv.Details from Products p inner join New_Inventory inv on(p.ProductId = inv.ProductId) where RowNumber = 1 ```
sql remove duplicate record while using join
[ "sql", "sql-server" ]
I am trying to run a query on the following: Users table has id and is\_paying. Subscriptions table has id, user\_id, and subscription\_plan I want to get the list of user id's of users that have subscription\_plan = 81 and is\_paying = 'Y'. When I run ``` select count(distinct users.email) from users,subscriptions where subscription_plan_id=81 and users.is_paying_customer = 'Y'; ``` The query runs for too long because I don't think it's joining the 2 tables together and I'm unsure how to do this. Any advice?
To join tables correctly... you need to find something common between them. The basic syntax is ``` SELECT fields from Table1 aliasname JOIN Table2 aliasname ON table1aliasname.commonfield = table2aliasname.commonfield WHERE wherecriteria ``` In this case: ``` select count(distinct users.email) from users,subscriptions where subscription_plan_id=81 and users.is_paying_customer = 'Y'; ``` What ties users to subscriptions? I will assume you have a field called userID; try something like this: ``` select count(distinct u.email) from users u JOIN subscriptions s on u.userid = s.userid where s.subscription_plan_id=81 and u.is_paying_customer = 'Y'; ```
To get the list of users remove the `COUNT`. Also, you need to put a JOIN condition to combine users with subscriptions: ``` select DISTINCT users.email from users JOIN subscriptions ON users.id = subscriptions.user_id where subscription_plan_id=81 and users.is_paying_customer = 'Y'; ```
How to run a join query in sql?
[ "sql" ]
The subject may seem confusing, much as I am confused about how to resolve the problem at hand. I have a table TESTCODES with data ``` testcode1 testdesc1 testcode2 testdesc2 testcode3 testdesc3 SB00001 CBC 1234-1 CBC Panel NULL NULL NULL NULL 1234-4 XRAY NULL NULL SB00002 'Culture B' 1234-2 Blood Cltr 989889 Blood Cnt ``` I have a query that gets me the TestCode from these fields: ``` SELECT COALESCE(TestCode1, TestCode2, TestCode3, '') TestCode FROM SOMEORDER ``` The coalesce works fine and returns the data as expected. But I want the result to show which test code it selected. E.g.: ``` TestCode SB00001 - TestCode1 1234-4 - TestCode2 SB00002 - TestCode1 ``` Can I do it with COALESCE itself, or do I need to change this query to use CASE statements?
It looks like you want to concatenate the field name to the actual value. You can do this inside the `coalesce()`: ``` SELECT COALESCE(TestCode1 + ' - TestCode1', TestCode2 + ' - TestCode2', TestCode3 + ' - TestCode3', '' ) TestCode FROM SOMEORDER; ``` The values that are `NULL` will still be `NULL` after the concatenation.
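The NULL-propagation trick is easy to demonstrate. The sketch below uses Python's `sqlite3` with rows shaped like the question's table; note SQLite concatenates with `||` (which, like SQL Server's `+`, yields NULL when either side is NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SOMEORDER (TestCode1 TEXT, TestCode2 TEXT, TestCode3 TEXT);
INSERT INTO SOMEORDER VALUES
  ('SB00001', '1234-1', NULL),
  (NULL,      '1234-4', NULL),
  ('SB00002', '1234-2', '989889');
""")

# NULL || ' - TestCodeN' stays NULL, so COALESCE falls through
# to the first non-NULL code and carries its label along.
rows = conn.execute("""
SELECT COALESCE(TestCode1 || ' - TestCode1',
                TestCode2 || ' - TestCode2',
                TestCode3 || ' - TestCode3',
                '') AS TestCode
FROM SOMEORDER
ORDER BY rowid
""").fetchall()
print(rows)
# → [('SB00001 - TestCode1',), ('1234-4 - TestCode2',), ('SB00002 - TestCode1',)]
```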
I think you can do it concatenating the fields with the field names: ``` SELECT COALESCE(TestCode1 + ' - TestCode1', TestCode2 + ' - TestCode2', TestCode3 + ' - TestCode3', '') TestCode FROM SOMEORDER ```
COALESCE function and to get the data from the supporting field
[ "sql", "sql-server-2008", "coalesce" ]
How do I code the following algorithm as a SQL query (`a`, `b` and `c` are tables in my database)? ``` for all a: case c.column=0 set a.columnname+=b.column1*b.column2 case c.column=1 ... earlier i solved a similar problem by: UPDATE a set a.column= (select SUM(column1) from b where a.column2=b.column2) ``` But since it is summing a product of two columns, I don't think I can do the same. Also, the real problem lies with the one-to-many relationship `a` has with `c`. Relationships: `a` one-to-many `b`; `b` one-to-one `c`. Let's say A is a table of company data, B is a table of employee data and C tells us if an employee is male or female (just an example, not really my problem). Now I need to calculate the total salary given to the employees of each company and store it in a field in the company table; let's say I calculate this differently based on the employee's gender. Note that there are hundreds of companies and each company has thousands of employees.
To reduce a set you use `GROUP BY` and an aggregate function, like `SUM`. For each record in A you want to get the sum of some function on all related records in B and C. First we'll make a `SELECT` statement to be sure we get the right data. This will look something like: ``` SELECT a.id, SUM(CASE WHEN c.[column]=0 THEN b.column1 * b.column2 ELSE 0 END) + SUM(CASE WHEN c.[column]=1 THEN b.column1 + b.column2 ELSE 0 END) AS new_value FROM a INNER JOIN b ON a.id = b.a_id INNER JOIN c ON b.id = c.b_id GROUP BY a.id --and all other columns from A that you select/update, making sure it contains at least the columns required to select a unique record from a. ``` To convert this to an update statement you can do a 1:1 join to the regular table A: ``` UPDATE updated_a SET columnname = new_value FROM a AS updated_a INNER JOIN (SELECT a.id, SUM(CASE WHEN c.[column]=0 THEN b.column1 * b.column2 ELSE 0 END) + SUM(CASE WHEN c.[column]=1 THEN b.column1 + b.column2 ELSE 0 END) AS new_value FROM a INNER JOIN b ON a.id = b.a_id INNER JOIN c ON b.id = c.b_id GROUP BY a.id) AS calculation ON updated_a.id = calculation.id; ``` <http://sqlfiddle.com/#!6/2dffa/14>
you probably want to break it down into a series of queries with the "cases" from the case statement as different WHERE clauses : ``` UPDATE <thing> SET ....... FROM a,b c INNER JOIN ....... WHERE c.column=0; UPDATE <thing> SET ....... FROM a,b c INNER JOIN ....... WHERE c.column=1; ``` See these for examples of the type of syntax : * [SQL update query syntax with inner join](https://stackoverflow.com/questions/3867164/sql-update-query-syntax-with-inner-join) [SQL update query using joins](https://stackoverflow.com/questions/982919/sql-update-query-using-joins?rq=1)
How to write a loop as a SQL query efficiently
[ "sql" ]
I have an interest table with interest indicator as either Credit or Debit. The requirement is to get the Difference between the Credit and Debit Interest grouping by the Branch. My separated queries are: ``` select sol_id, sum(AMOUNT_IN_LCY)DEBIT_INTEREST from INTEREST_DETAILS where INT_ID = 'D' group by sol_id; ``` My other query is: ``` select sol_id, sum(AMOUNT_IN_LCY)CREDIT_INTEREST from INTEREST_DETAILS where INT_ID = 'C' group by sol_id; ``` I am totally Stuck on adding or subtracting the two queries into One resultSet grouped by sol\_id. Any ideas?? Is there a way to subtract the figures first and then group them?
``` select sol_id, (sum(CASE WHEN INT_ID = 'C' THEN AMOUNT_IN_LCY ELSE 0 END)- sum(CASE WHEN INT_ID = 'D' THEN AMOUNT_IN_LCY ELSE 0 END)) as Difference from INTEREST_DETAILS group by sol_id; ```
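The conditional-aggregation idea can be checked with a tiny dataset; here is a sketch using Python's `sqlite3` (the amounts are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE INTEREST_DETAILS (sol_id INTEGER, INT_ID TEXT, AMOUNT_IN_LCY REAL);
INSERT INTO INTEREST_DETAILS VALUES
  (1,'C',100), (1,'D',40), (1,'C',10),
  (2,'D',25),  (2,'C',5);
""")

# Each SUM only counts rows of one indicator; the ELSE 0 keeps
# the other rows from contributing.
rows = conn.execute("""
SELECT sol_id,
       SUM(CASE WHEN INT_ID = 'C' THEN AMOUNT_IN_LCY ELSE 0 END)
     - SUM(CASE WHEN INT_ID = 'D' THEN AMOUNT_IN_LCY ELSE 0 END) AS difference
FROM INTEREST_DETAILS
GROUP BY sol_id
ORDER BY sol_id
""").fetchall()
print(rows)
# → [(1, 70.0), (2, -20.0)]
```

Because the subtraction happens inside one grouped query, no second result set needs to be joined back.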
Try this: ``` select sol_id, ABS(sum(DECODE(INT_ID, 'D', -1, 1) * AMOUNT_IN_LCY)) DIFFERENCE from INTEREST_DETAILS group by sol_id ``` It will give you the difference in positive between debit and credit for each `sol_id`.
Subtract grouped figures in Oracle select Statement
[ "sql", "oracle" ]
So I have two Tables: ``` People Jobs id | name id | personId | Status | place -------------- --------------------------------- 1 | John 1 | 1 | Active | Home 2 | Melinda 2 | 1 | Active | Office 3 | Samuel 3 | 2 | Active | Home 4 | 3 | Active | Office 5 | 1 | Active | Garden ``` **Problem:** I want to display the names of the people who have more than one active job at either Home or the Office. I could do the count and display the number, but I can't display the names. Here is my code (which isn't working): ``` SELECT t.name FROM people p JOIN jobs j ON j.personId = p.id WHERE j.status = 'Active' AND j.place='Home' OR j.place='Office' HAVING count(j.personId) > 1 ```
Your AND and OR are mixed in the WHERE condition (AND binds more tightly than OR); change this ``` WHERE j.status = 'Active' AND j.place='Home' OR j.place='Office' ``` to ``` WHERE j.status = 'Active' AND (j.place='Home' OR j.place='Office') ``` Try this: ``` SELECT p.name FROM People p JOIN jobs t ON t.personId = p.id WHERE t.status = 'Active' AND (t.place='Home' OR t.place='Office') GROUP BY p.name HAVING count(t.personId) > 1 ``` [**DEMO HERE**](http://sqlfiddle.com/#!2/202cf6/8)
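With the parentheses in place, the query returns only people holding more than one active Home/Office job. A sketch with Python's `sqlite3` and the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE People (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Jobs (id INTEGER PRIMARY KEY, personId INTEGER, status TEXT, place TEXT);
INSERT INTO People VALUES (1,'John'), (2,'Melinda'), (3,'Samuel');
INSERT INTO Jobs VALUES
  (1,1,'Active','Home'), (2,1,'Active','Office'), (3,2,'Active','Home'),
  (4,3,'Active','Office'), (5,1,'Active','Garden');
""")

rows = conn.execute("""
SELECT p.name
FROM People p
JOIN Jobs j ON j.personId = p.id
WHERE j.status = 'Active' AND (j.place = 'Home' OR j.place = 'Office')
GROUP BY p.name
HAVING COUNT(j.personId) > 1
""").fetchall()
print(rows)
# → [('John',)]
```

Only John qualifies: his Garden job is filtered out by the place condition, but Home plus Office still leaves two rows.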
try this query ``` SELECT p.name FROM people p JOIN jobs j ON j.personId = p.id WHERE j.status = 'Active' AND (j.place='Home' OR j.place='Office') GROUP BY p.name HAVING count(j.personId) > 1 ```
SQL Show the values in a count()
[ "mysql", "sql" ]
In a column named "server\_url" there are links: * <http://www.abc.com/xyz/cv> * <https://www.abc.com/lmn/rq> * <https://www.abcd.com/kl> After .com there is always a "/" present. I want to get only the domain name as the output: www.abc.com , www.abcd.com I have to remove the http:// from the start and whatever is there after the third "/". Tried: ``` SUBSTRING( server_url, (charindex(':',server_url)+3), (charindex('/',server_url,10))) ``` I get the http removed, but it's not removing the part after the third '/'. Please suggest a function to do the same.
the third parameter is length, so you need to remove the length of the ignored portion at the start, so something like (beware of fence posts): ``` SUBSTRING( server_url, (charindex(':',server_url)+3), (charindex('/',server_url,10) - (charindex(':',server_url)+3))) ```
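The same start/length arithmetic, sketched in Python's `sqlite3` where the functions are `instr`/`substr` instead of `CHARINDEX`/`SUBSTRING` (anchoring on `'://'` rather than a fixed position 10 is an assumption of this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (server_url TEXT);
INSERT INTO t VALUES ('http://www.abc.com/xyz/cv'),
                     ('https://www.abc.com/lmn/rq'),
                     ('https://www.abcd.com/kl');
""")

# Start just after '://'; the length is the distance to the next '/'.
rows = conn.execute("""
SELECT substr(server_url,
              instr(server_url, '://') + 3,
              instr(substr(server_url, instr(server_url, '://') + 3), '/') - 1) AS domain
FROM t
ORDER BY rowid
""").fetchall()
print(rows)
# → [('www.abc.com',), ('www.abc.com',), ('www.abcd.com',)]
```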
Assuming SQL Server, this seems to work: ``` declare @t table (server_url varchar(max) not null) insert into @t(server_url) values ('http://www.abc.com/xyz/cv'), ('https://www.abc.com/lmn/rq'), ('https://www.abcd.com/kl') ;With Positions as ( select server_url, CASE WHEN SUBSTRING(server_url,5,1)='s' THEN 8 ELSE 7 END as StartPosition, CHARINDEX('/',server_url,9) as EndPosition from @t ) select SUBSTRING(server_url,StartPosition,EndPosition-StartPosition) from Positions ``` I could do it without the Common Table Expression by repeating the `StartPosition` `CASE` expression multiple times, but I felt that this was cleaner.
Substring in MS SQL to get the required o/p
[ "sql", "sql-server", "substring" ]
I am using SQL Server 2012 to build an inventory planning / reorder engine. I have a bunch of dated transactions, call them credits and debits. I want to do two things at once: 1. Generate a Running Total (Daily net balance) 2. Generate replenish recommendations. Replenish will reset Running Total (in #1) back to zero. The table looks like this: ``` CREATE TABLE TX (TDate DATETIME, Qty INT); INSERT INTO TX VALUES ('2014-03-01', 20); INSERT INTO TX VALUES ('2014-03-02',-10); INSERT INTO TX VALUES ('2014-03-03',-20); INSERT INTO TX VALUES ('2014-03-04',-10); INSERT INTO TX VALUES ('2014-03-05', 30); INSERT INTO TX VALUES ('2014-03-06',-20); INSERT INTO TX VALUES ('2014-03-07', 10); INSERT INTO TX VALUES ('2014-03-08',-20); INSERT INTO TX VALUES ('2014-03-09', -5); ``` I am using the SQL 2012 SUM OVER() window function to show the running total of these. ``` select TDate, Qty, RunningTotal, RecommendedReplenish from ( select TDate, Qty, SUM(Qty) OVER (ORDER BY TDate ROWS UNBOUNDED PRECEDING) as RunningTotal, -1 * (CASE WHEN Qty < 0 AND SUM(Qty) OVER (ORDER BY TDate ROWS UNBOUNDED PRECEDING) < 0 THEN CASE WHEN Qty > SUM(Qty) OVER (ORDER BY TDate ROWS UNBOUNDED PRECEDING) THEN Qty ELSE SUM(Qty) OVER (ORDER BY TDate ROWS UNBOUNDED PRECEDING) END ELSE 0 END) as RecommendedReplenish /* Wrong, does not account for balance resetting to zero */ from TX ) T order by TDate ``` I need to find a way to reset the running total (aka RT) to zero if it dips below zero. My query finds rows where both Qty and RT are negative, and takes the greater (less negative) of these as the first recommended replenishment. This works correctly the first time. I am not sure how to deduct this from the window running total... I would like to do this in a single statement if possible.
Here is a summary of the output I am seeking: ``` TDate Qty R.Tot Replenish New RT ----------- ---- ----- ----------- --------- 3/1/2014 20 20 20 3/2/2014 -10 10 10 3/3/2014 -20 -10 10 0 3/4/2014 -10 -20 10 0 3/5/2014 30 10 30 3/6/2014 -20 -10 10 3/7/2014 10 0 20 3/8/2014 -20 -20 0 3/9/2014 - 5 -25 5 0 ``` Itzik Ben-Gan, Joe Celko, or other SQL hero, are you out there? :) Thanks in advance!
This can be done using a set-based solution: 1. Compute the normal running total (call it RT). 2. Compute the running minimum of RT (call it MN). When MN is negative, -MN is the total quantity you had to replenish so far. Let replenish\_rt be -MN when MN is negative. So, the new running total (call it new\_rt) is rt + replenish\_rt. And if you need to return the current replenish quantity needed, subtract the previous replenish\_rt (using LAG) from the current one. Here's the complete solution query: ``` with c1 as ( select *, sum(qty) over(order by tdate rows unbounded preceding) as rt from tx ), c2 as ( select *, -- when negative, mn is the total qty that had to be -- replenished until now, inclusive min(rt) over(order by tdate rows unbounded preceding) as mn_cur from c1 ) select tdate, qty, rt, replenish_rt - lag(replenish_rt, 1, 0) over(order by tdate) as replenish, rt + replenish_rt as new_rt from c2 cross apply(values(case when mn_cur < 0 then -mn_cur else 0 end)) as a1(replenish_rt); ``` Cheers, Itzik
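The same plan ports to any engine with window functions (SQLite 3.25+, PostgreSQL, etc.). Here is a sketch with Python's `sqlite3`; SQLite has no `CROSS APPLY (VALUES ...)`, so `replenish_rt` is spelled out as a `CASE` expression instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TX (TDate TEXT, Qty INTEGER);
INSERT INTO TX VALUES
  ('2014-03-01', 20), ('2014-03-02',-10), ('2014-03-03',-20),
  ('2014-03-04',-10), ('2014-03-05', 30), ('2014-03-06',-20),
  ('2014-03-07', 10), ('2014-03-08',-20), ('2014-03-09', -5);
""")

rows = conn.execute("""
WITH c1 AS (
  SELECT *, SUM(Qty) OVER (ORDER BY TDate) AS rt FROM TX
), c2 AS (
  SELECT *,
         CASE WHEN MIN(rt) OVER (ORDER BY TDate) < 0
              THEN -MIN(rt) OVER (ORDER BY TDate)
              ELSE 0 END AS replenish_rt
  FROM c1
)
SELECT TDate, Qty, rt,
       replenish_rt - LAG(replenish_rt, 1, 0) OVER (ORDER BY TDate) AS replenish,
       rt + replenish_rt AS new_rt
FROM c2
""").fetchall()

for r in rows:
    print(r)
# new_rt comes out as 20,10,0,0,30,10,20,0,0 with replenishments of
# 10, 10 and 5 on 3/3, 3/4 and 3/9, matching the table in the question.
```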
Ugh, based on your comments, the only thing I can think to do is use a cursor, which I hate doing. [SQL Fiddle](http://sqlfiddle.com/#!6/592b3/1) ``` declare @Date date declare @Qty int declare @RR int declare @running int = 0 declare @results table (dt date, qty int, rt int, rr int ) declare C cursor for select TDate, Qty, RecommendedReplenish from ( select TDate, Qty, -1 * (CASE WHEN Qty < 0 AND SUM(Qty) OVER (ORDER BY TDate ROWS UNBOUNDED PRECEDING) < 0 THEN CASE WHEN Qty > SUM(Qty) OVER (ORDER BY TDate ROWS UNBOUNDED PRECEDING) THEN Qty ELSE SUM(Qty) OVER (ORDER BY TDate ROWS UNBOUNDED PRECEDING) END ELSE 0 END) as RecommendedReplenish /* Wrong, does not account for balance resetting to zero */ from TX ) T order by TDate open c fetch next from c into @date,@qty,@rr WHILE @@FETCH_STATUS = 0 BEGIN set @running = @running + @qty if @running <0 begin set @running = 0 end insert into @results values (@date,@qty,@running,@rr) fetch next from c into @date,@qty,@rr end close c deallocate c select * from @results ``` Which as far as I can tell, gives you the desired result. It ain't pretty, I'm sure it could use some cleanup, but it works. ``` +-------------+------+-----+----+ | DT | QTY | RT | RR | +-------------+------+-----+----+ | 2014-03-01 | 20 | 20 | 0 | | 2014-03-02 | -10 | 10 | 0 | | 2014-03-03 | -20 | 0 | 10 | | 2014-03-04 | -10 | 0 | 10 | | 2014-03-05 | 30 | 30 | 0 | | 2014-03-06 | -20 | 10 | 10 | | 2014-03-07 | 10 | 20 | 0 | | 2014-03-08 | -20 | 0 | 20 | | 2014-03-09 | -5 | 0 | 5 | +-------------+------+-----+----+ ```
Window Functions - Running Total with reset
[ "sql", "sql-server", "t-sql", "sql-server-2012" ]
I want to delete a row/tuple from a parent table, but it is throwing an error message because it has a FOREIGN KEY reference in its child table. However, in my case I want to **delete the record only from the parent table and maintain the data in the child table**. Is it possible to achieve this? I know the usage of [`ON DELETE CASCADE`](https://stackoverflow.com/questions/3574290/how-we-delete-a-row-from-parent-table-when-child-table-connected-with-paren-tabl), but I want to know if there is a solution for the secenario I described?
It is possible, with some compromises in your data. To keep the child table **data** you'll have to use `ON DELETE SET NULL`. This will leave the data, but set the FK to `NULL` (in the child table). That is because of data integrity: while you *can* keep your data, your FK *cannot* refer to a non-existent row of the parent table while the `FK` constraint is enforced. Thus, it will be set to `NULL`. If you want to "save" the **value of the FK**, then you should not use an FK at all, because such behavior violates what an FK is. In that case just don't use the constraint, but be aware of possible integrity failures.
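`ON DELETE SET NULL` is easy to see in action. A sketch with Python's `sqlite3` (foreign-key enforcement is off by default in SQLite, hence the PRAGMA; the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child (id INTEGER PRIMARY KEY,
                    parent_id INTEGER REFERENCES parent(id) ON DELETE SET NULL);
INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (10, 1);
""")

# Deleting the parent keeps the child row but nulls its reference.
conn.execute("DELETE FROM parent WHERE id = 1")

print(conn.execute("SELECT id, parent_id FROM child").fetchall())
# → [(10, None)]
```

The child's data survives; only the now-dangling reference is cleared.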
The point of a foreign key constraint is to prevent orphan records in the child table. So, no, it's not possible to do that, unless you drop the foreign key relationship. If you rely on 'ON DELETE CASCADE', then deleting the parent record will result in all the corresponding children to be deleted. If you want to delete the parent, but keep the children, you need to drop the foreign key constraint, or set the constraint to be 'ON DELETE SET NULL'. If you set 'ON DELETE SET NULL', then when you delete the parent record, the child records will remain, but the foreign key column value will be set to NULL.
How to delete a row ONLY in parent table, which is referenced by a Foreign Key from the child table
[ "sql", "foreign-keys", "foreign-key-relationship" ]
I have a varchar column called date\_submitted and it's formatted like so: 03-06-2014 4:32 pm 02-14-2014 2:44 am And so on... I was wondering how I would get all of the dates from a certain date or from something like 20 days ago. This select statement is something along the lines of what I need. ``` SELECT date_submitted FROM myTable WHERE date_submitted < 02-14-2014 12:00 am ```
Try this with `STR_TO_DATE` and `DATE_SUB`: ``` SELECT date_submitted FROM myTable WHERE STR_TO_DATE(date_submitted, '%m-%d-%Y %l:%i %p') > DATE_SUB(NOW(), INTERVAL 20 DAY) ``` `STR_TO_DATE` turns the stored string into a real DATETIME (the format must match the whole stored value, including the time part), which can then be compared directly against `DATE_SUB(NOW(), INTERVAL 20 DAY)`; comparing formatted strings would not order correctly.
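When the column really must stay a varchar, another option is to rearrange `MM-DD-YYYY` into the sortable `YYYY-MM-DD` form before comparing. A sketch with Python's `sqlite3` using the question's two sample values and an assumed fixed cutoff date in place of `DATE_SUB(NOW(), INTERVAL 20 DAY)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (date_submitted TEXT);
INSERT INTO myTable VALUES ('03-06-2014 4:32 pm'), ('02-14-2014 2:44 am');
""")

# Rebuild YYYY-MM-DD from the MM-DD-YYYY prefix so plain string
# comparison orders the values chronologically.
rows = conn.execute("""
SELECT date_submitted
FROM myTable
WHERE substr(date_submitted, 7, 4) || '-' ||
      substr(date_submitted, 1, 2) || '-' ||
      substr(date_submitted, 4, 2) < '2014-02-15'
""").fetchall()
print(rows)
# → [('02-14-2014 2:44 am',)]
```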
You need to [CONVERT](http://www.sqlusa.com/bestpractices/datetimeconversion/) your datetime column appropriately like this: ``` SELECT date_submitted FROM myTable WHERE convert(datetime,date_submitted,100) < DATE_SUB(convert(datetime,'02-14-2014 12:00 am',100), INTERVAL 20 DAY) ```
Varchar date in SQL query
[ "mysql", "sql", "date", "where-clause" ]
For example purposes, let's say I'm trying to figure out the average score for males and females from each parent. Example data looks like this: ``` parentID childID sex score ------------------------------------ 1 21 m 17 1 23 f 12 2 33 f 55 2 55 m 22 3 67 m 26 3 78 f 29 3 93 m 31 ``` This is the result I want: ``` parentID offspring m f avg-m avg-f avg-both ---------------------------------------------------- 1 2 1 1 17 12 14.5 2 2 1 1 22 55 38.5 3 3 2 1 28.5 29 28.67 ``` With the query below I can find a figure for males and females together, but I'm not sure how to get the average for males or females separately: ``` SELECT parentID, COUNT( childID ) AS offspring, SUM( IF( sex = 'm', 1, 0 ) ) AS m, SUM( IF( sex = 'f', 1, 0 ) ) AS f, max(score) as avg-both FROM sexb_1 WHERE avg-both > 11 GROUP BY parentID ``` I tried something like this in the query, but it returns an error: ``` AVG(IF(sex = 'm', max(score),0)) as avg-m ```
> I tried something like this in the query but it returns an error > > ``` > AVG(IF(sex = 'm', max(score),0)) as avg-m > ``` You can't use one aggregate function within another (in this case, `MAX()` within `AVG()`)—what would that even mean? Once one has discovered the `MAX()` of the group, over what is there to take an average? Instead, you want to take the `AVG()` of `score` values where the sex matches your requirement; since `AVG()` ignores `NULL` values and the default for unmatched `CASE` expressions is `NULL`, one can simply do: ``` SELECT parentID, COUNT(*) offspring, SUM(sex='m') m, SUM(sex='f') f, AVG(CASE sex WHEN 'm' THEN score END) `avg-m`, AVG(CASE sex WHEN 'f' THEN score END) `avg-f`, AVG(score) `avg-both` FROM sexb_1 GROUP BY parentID HAVING `avg-both` > 11 ``` See it on [sqlfiddle](http://sqlfiddle.com/#!2/a8b47/6/0).
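Both behaviors (summing 0/1 flags, and `AVG` ignoring the NULLs from unmatched `CASE` branches) can be verified against the question's data. A sketch with Python's `sqlite3` (the `HAVING` filter is omitted here since every group passes it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sexb_1 (parentID INTEGER, childID INTEGER, sex TEXT, score REAL);
INSERT INTO sexb_1 VALUES
  (1,21,'m',17), (1,23,'f',12), (2,33,'f',55), (2,55,'m',22),
  (3,67,'m',26), (3,78,'f',29), (3,93,'m',31);
""")

# AVG skips the NULLs produced by the unmatched CASE branches,
# so each average only sees scores of the requested sex.
rows = conn.execute("""
SELECT parentID, COUNT(*) AS offspring,
       SUM(sex = 'm') AS m, SUM(sex = 'f') AS f,
       AVG(CASE sex WHEN 'm' THEN score END) AS avg_m,
       AVG(CASE sex WHEN 'f' THEN score END) AS avg_f,
       AVG(score) AS avg_both
FROM sexb_1
GROUP BY parentID
ORDER BY parentID
""").fetchall()
print(rows[0])
# → (1, 2, 1, 1, 17.0, 12.0, 14.5)
```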
You can try below query- ``` SELECT parentID, COUNT(childID) AS `offspring`, COUNT(IF(sex = 'm',sex ,NULL )) AS `m`, COUNT(IF(sex = 'f', sex,NULL)) AS `f`, AVG(IF(sex = 'm',score,NULL )) AS `avg-m`, COUNT(IF(sex = 'f', score,NULL)) AS `avg-f`, AVG(score) AS `avg-both` FROM sexb_1 GROUP BY parentID HAVING `avg-both` > 11; ```
Mysql average based on sum if in another column
[ "mysql", "sql", "if-statement", "sumifs" ]
I have 2 tables like this: Table1 doc\_id (1) doc\_folder (1010) doc\_title (invoice 2020) Table2 file\_id (1) file\_doc\_id (1) file\_name (invoice.pdf) Now, I perform a query to get all titles: ``` SELECT * FROM Table1 WHERE doc_folder='1010' ``` I'd like to get the file name from Table2 and have a result like this: ``` FOLDER | TITLE | FILE NAME 1010 invoice invoice.pdf ``` How do I get the file name from Table2? Thanks, all.
``` select table1.*, table2.file_name from table1 inner join table2 on table2.file_doc_id = table1.doc_id where table1.doc_folder='1010' ``` You can use `table1.doc_folder, table1.doc_title` instead of `table1.*` if you do not need to fetch all columns from `table1`.
``` SELECT Table1.*, Table2.file_id, Table2.file_name FROM Table1 INNER JOIN Table2 ON Table1.doc_id = Table2.file_doc_id WHERE (Table1.doc_folder = '1010') ```
Get value from another table
[ "", "mysql", "sql", "" ]
``` table1 ----------------------------- | id (int) | dt (datetime) | ----------------------------- | 1 | 12-12-2012 | | 2 | 13-11-2013 | | 3 | 23-07-2014 | | 4 | 13-06-2014 | ----------------------------- table2 ----------------------------- | id (int) | dt2 (datetime) | ----------------------------- | 1 | 12-12-2012 | | 1 | 13-11-2013 | -> update table1 id=1 with this dt2 | 2 | 23-07-2014 | | 2 | 13-06-2014 | | 2 | 12-12-2012 | -> update table1 id=2 with this dt2 | 3 | 13-11-2013 | -> update table1 id=3 with this dt2 | 3 | 23-07-2014 | | 3 | 13-06-2014 | | 4 | 23-07-2014 | | 4 | 13-02-2014 | -> update table1 id=4 with this dt2 ----------------------------- ``` I want to update `table1.dt` with the corresponding `dt2` from `table2` based on the `id`. However, I do not want to update `table1.dt` to the largest corresponding datetime value from `table2.dt2`. I only want to update `dt` to the largest corresponding `dt2` which is not greater than the current date. So far what I have gotten is: ``` update table1 set table1.dt = table2.dt2 from table2 inner join table1 on table1.id = table2.id where ?table1.id=table2.id and...? ``` No idea how to modify the sql statement such that it will only update `dt` to the largest corresponding `dt2` which is not greater than the current date. Hope it isn't too confusing...
You could either use GROUP BY with aggregates in a derived table, or possibly look into CROSS APPLY. Here's an example of the former; the date filter keeps only values not greater than the current date, per the requirement: ``` update table1 set table1.dt = derivedTable2.maxDT from table1 join ( select id, max(dt2) as maxDT from table2 where dt2 <= getdate() group by id) as derivedTable2 on derivedTable2.id = table1.id ```
Try this ``` UPDATE table1 SET [dbo].[Table1].dt = [dbo].[Table2].dt2 FROM [dbo].[Table2] where [dbo].[Table2].id = [dbo].[Table1].id and [dbo].[Table2].dt2 > [dbo].[Table1].dt and [dbo].[Table2].dt2 <= getdate() ``` If I'm correct about what you're trying to achieve, you are after the "where" statement that says: where the ids are equal, the date in Table2 is greater than what you already have in Table1, and the date in Table2 is not later than today. Then update.
update data in table based on data in another table
[ "", "sql", "sql-server", "" ]
I am trying to rename my database by the following query: ``` RENAME DATABASE my_db TO newDB; ``` but its showing me the following error response: ``` Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DATABASE activation_server_db TO activationserver' at line 1 ``` Please help me find where I am going wrong?
Use these few simple commands ``` mysqldump -u username -p -v olddatabase > olddbdump.sql mysqladmin -u username -p create newdatabase mysql -u username -p newdatabase < olddbdump.sql ``` Or, for InnoDB, the following seems to work: create the new empty database, then rename each table in turn into the new database: ``` RENAME TABLE old_db.table TO new_db.table; ``` You will need to adjust the permissions after that.
I follow these simple steps: 1. Create new database 2. Backup the old database 3. Restore old database under new database
How to rename database in MySQL without losing data?
[ "", "mysql", "sql", "database", "" ]
I have the following table in a sql-server database: ![enter image description here](https://i.stack.imgur.com/PTjjY.jpg) I want to count the total number of rows where state = 'Alabama' and the ID is not repeated. For this example, the count for Alabama is 2. And I want to count the number of IDs that are repeated (here it is 2).
``` SELECT count(id), state FROM table_name WHERE id IN (SELECT id FROM table_name GROUP BY id having count(id) = 1) and state = 'Alabama' GROUP BY state ```
Use ``` SELECT COUNT(DISTINCT ID) AS id FROM MYTABLE where State = 'Alabama' ``` If you need the count of IDs that are repeated: ``` SELECT COUNT(ID) AS id FROM MYTABLE where ID= '105' ```
sql-server, count rows where column matches string and id is not repeated
[ "", "sql", "sql-server", "" ]
I'm attempting to implement a search function on two tables with a one-to-many relationship. Think of it as a **post** with multiple **tags**. Each tag has its own row in the `tag` table. I'd like to retrieve a post if all of the search terms can be found in either a) the **post text**, b) the **post tags** or c) **both**. Let's say I've created my tables like this: ``` CREATE TABLE post ( id MEDIUMINT NOT NULL AUTO_INCREMENT, text VARCHAR(100) NOT NULL ); CREATE TABLE tag ( id MEDIUMINT NOT NULL AUTO_INCREMENT, name VARCHAR(30) NOT NULL, post MEDIUMINT NOT NULL ); ``` And I create indexes like this: ``` CREATE FULLTEXT INDEX post_idx ON post(text); CREATE FULLTEXT INDEX tag_idx ON tag(name); ``` If my search query were "TermA TermB" and I wanted to search **just** in the post text, I'd formulate my SQL query like this: ``` SELECT * FROM post WHERE MATCH(text) AGAINST('+TermA +TermB' IN BOOLEAN MODE); ``` Is there a way to add tags into the mix? My previous attempt was this: ``` SELECT * FROM post RIGHT JOIN tag ON tag.post = post.id WHERE MATCH(post.text) AGAINST('TermA TermB' IN BOOLEAN MODE) OR MATCH(tag.name) AGAINST('TermA TermB' IN BOOLEAN MODE); ``` The problem is, this is only an **any words** query and not an **all words** query. By this I mean, I'd like to retrieve the post if TermA is in the text and TermB is in the tags. What am I missing here? Is this even possible using a fulltext search? Is there a better way to approach this?
Try this one: ``` SELECT post.* FROM post INNER JOIN (SELECT post, GROUP_CONCAT(name SEPARATOR ' ') tags FROM tag GROUP BY post) tag ON post.id=tag.post WHERE MATCH(post.text) AGAINST('+TermA +TermB' IN BOOLEAN MODE) OR MATCH(tags) AGAINST('+TermA +TermB' IN BOOLEAN MODE) ``` This might work to also get results that match from either content or tags, but it didn't work in the MySQL 5.1: ``` SELECT post.*, GROUP_CONCAT(tag.name SEPARATOR ' ') tags FROM post LEFT JOIN tag ON post.id=tag.post GROUP BY post.id HAVING MATCH(post.text,tags) AGAINST('+TermA +TermB' IN BOOLEAN MODE) ``` so I rewrote it as: ``` SELECT post.*, tags FROM post LEFT JOIN (SELECT post, GROUP_CONCAT(tag.name SEPARATOR ' ') tags FROM tag GROUP BY post) tags ON post.id=tags.post WHERE MATCH(post.text, tags) AGAINST('+TermA +TermB' IN BOOLEAN MODE) ```
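The `GROUP_CONCAT` join shape above can be exercised without MySQL's fulltext engine. The sketch below uses Python's sqlite3 with plain `LIKE` standing in for `MATCH ... IN BOOLEAN MODE` (the posts and tags are invented); it shows that concatenating the tags per post lets an "all words" test span text and tags together:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE post (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE tag  (id INTEGER PRIMARY KEY, name TEXT, post INT);
    INSERT INTO post VALUES (1, 'a post about TermA only'), (2, 'nothing relevant');
    INSERT INTO tag  VALUES (1, 'TermB', 1), (2, 'TermC', 2);
""")

# Tags are collapsed to one string per post, then text and tags are
# searched together; LIKE stands in for the fulltext MATCH here.
rows = con.execute("""
    SELECT p.id
    FROM post p
    INNER JOIN (SELECT post, GROUP_CONCAT(name, ' ') AS tags
                FROM tag GROUP BY post) t ON p.id = t.post
    WHERE (p.text || ' ' || t.tags) LIKE '%TermA%'
      AND (p.text || ' ' || t.tags) LIKE '%TermB%'
""").fetchall()

print(rows)  # post 1 matches: TermA from its text, TermB from its tag
```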
This is possible, but I'm guessing that in your `tag` table, you have one row for each tag per post. So one row containing the tag 'TermA' for post 1 and another record with the tag 'TermB', right? The all words query (with `+`) only returns rows where the searched field contains all the specified words. For the tags table, that is never the case. One possible solution would be to store all tags in a single field in the posts table itself. Then it would be easy to do advanced matching on the tags as well. Another possibility is to change the condition for tags altogether. That is, use an `all` query for the text and an `any` query for the tags. To do that, you'll have to modify the search query yourself, which can fortunately be as easy as removing the plusses from the query. You can also query for an exact match, like this: ``` SELECT * FROM post p WHERE MATCH(p.text) AGAINST('TermA TermB' IN BOOLEAN MODE) AND /* Number of matching tags .. */ (SELECT COUNT(*) FROM tag t WHERE t.post = p.id AND t.name IN ('TermA', 'TermB')) = /* .. must be .. */ 2 /* .. number of searched tags */ ``` In this query, I count the number of matching tags. In this case I want it to be exactly 2, meaning that both tags match (provided that tags are unique per post). You could also check for >= 1 to see if any tags match. But as you can see, this also requires parsing of the search string. You will have to remove the plusses (or even check their existence to understand whether you want 'any' or 'all'). And you will have to split it as well to get the number of searched words, and get the separate words themselves. All in all, adding all tags to a 'tags' field in `post` is the easiest way. Not ideal from a normalisation point of view, but that is manageable, I think.
MySQL Fulltext Search: One-to-many Relationships
[ "", "mysql", "sql", "one-to-many", "relationship", "" ]
I'd like to display the results of a query, and I'd like to capture a column's value at the same time. The `FROM` and `WHERE` are the same in both queries. Can I do both in one query, or is it easier/better to just do the query twice, like this? ``` DECLARE @fooId INT SET @fooId = (SELECT FooId FROM Bar WHERE SnuhId = 5) SELECT * FROM Bar WHERE SnuhId = 5 ```
Unfortunately it has to be done in two operations. Test: ``` DECLARE @VAR DATETIME SELECT @VAR=GETDATE(), GETDATE() ``` Yields error message 141. [Here's another SO post on this.](https://stackoverflow.com/questions/4608140/a-select-statement-that-assigns-a-value-to-a-variable-must-not-be-combined-with)
If you try to do it in one query, you will get an error msg - `A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations.` To avoid this, the possibilities are 1) Make sure that all columns are assigned to a local variable. Ex ``` DECLARE @fooId INT DECLARE @barId INT SELECT @fooId = FooId, @barId = BarId FROM Bar WHERE SnuhId = 5 ``` 2) Simply remove the `*` from the SELECT statement, so no bare column is retrieved alongside the assignment ``` SELECT @fooId = FooId FROM Bar WHERE SnuhId = 5 -- the BarId column was removed from this query ``` 3) If you really need to do both, meaning to assign the value to local variables and to return the columns as a result set, you have to do it in 2 steps instead of combining them into one SELECT statement.
Select and assign to variable in one statement?
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I have a column named `0` in the table `mytable`. I want to select the contents of this column. When I tried this: ``` SELECT 0 FROM mytable WHERE key_id = '8' ``` it didn't show the column's data; it only showed the value 0, even though this column doesn't contain the value 0. How can I get the contents of this column?
``` create table test_0(`0` INT ); INSERT INTO test_0 VALUES(1); SELECT `0` FROM test_0; +------+ | 0 | +------+ | 1 | +------+ 1 row in set (0.00 sec) ```
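The same backtick trick can be reproduced with Python's sqlite3, since SQLite accepts MySQL-style backtick quoting for identifiers (used here only as a convenient stand-in; the table is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_0 (`0` INT)")
con.execute("INSERT INTO test_0 VALUES (1)")

# Without backticks, 0 is the literal number zero, not the column.
literal_zero = con.execute("SELECT 0 FROM test_0").fetchone()[0]
# With backticks, `0` refers to the column and returns its data.
column_value = con.execute("SELECT `0` FROM test_0").fetchone()[0]

print(literal_zero, column_value)
```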
Since zero is a number, you need to quote it with backticks: ``` SELECT `0` FROM mytable WHERE key_id = '8' ```
How to select a column with a number name in a table with SQL
[ "", "mysql", "sql", "" ]
I have a table like this: ``` Year Month Code 1850 January 5210 1850 February 3524 1851 January 6752 1851 January 9877 1851 February 3698 ``` I want to delete repeated months within a year (e.g. 1851 January). I don't mind losing one code (6752 or 9877). I thought of using: ``` Select * from table1 group by Month ``` But I need to group for each year. Otherwise I will select only one January from the three in the table, and I need to select two of them (one in 1850 and one in 1851). Of course my table is huge and I cannot do it manually. Thanks.
If you want to have only the entries with count>1 then you can do this: ``` Select year, month, code, count(1) as cnt from table1 group by year, month having cnt>1; ``` If the table is huge, make sure that both `year` and `month` are indexes, otherwise you'll spend lot of time waiting for results. <http://sqlfiddle.com/#!2/eb325/3> UPDATE: for the case where there are more than 2 rows (and actually in general, if you don't care about the lost "code" entries), it might make sense to select one entry from each year-month into a new table (which will leave you with unique year-month combinations) and then discard the old table, like that: ``` CREATE TABLE table1_temp SELECT year, month, MIN(code) as code FROM table1 GROUP BY year, month; DROP TABLE table1; RENAME TABLE table1_temp TO table1; ``` <http://sqlfiddle.com/#!2/113954/1>
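Here is the rebuild-and-swap approach run end to end on the question's sample rows, using Python's sqlite3 as a stand-in for MySQL (SQLite spells the rename `ALTER TABLE ... RENAME TO` instead of `RENAME TABLE`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (Year INT, Month TEXT, Code INT)")
con.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    (1850, 'January', 5210), (1850, 'February', 3524),
    (1851, 'January', 6752), (1851, 'January', 9877),
    (1851, 'February', 3698),
])

# Keep one row (the MIN code) per Year/Month, then swap the tables.
con.executescript("""
    CREATE TABLE table1_temp AS
        SELECT Year, Month, MIN(Code) AS Code
        FROM table1
        GROUP BY Year, Month;
    DROP TABLE table1;
    ALTER TABLE table1_temp RENAME TO table1;
""")

rows = con.execute("SELECT Year, Month, Code FROM table1").fetchall()
print(rows)  # the duplicate 1851 January collapses to a single row
```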
The query suggested by @Ashalynd will work if you have only 2 duplicate rows, but it will not work if you have 3 rows for year 1851 and month January... The query below will take care of it. You can remove all rows returned by this query. ``` SELECT DISTINCT b.CODE FROM (SELECT YEAR, MONTH, CODE, COUNT(1) AS cnt FROM table1 GROUP BY YEAR, MONTH HAVING cnt>1) a, table1 b WHERE a.year=b.year AND a.month=b.month AND a.code<>b.code; ``` Another approach can be as per below- ``` CREATE TABLE table1_new LIKE table1; ALTER TABLE table1_new ADD UNIQUE KEY (YEAR, MONTH); INSERT IGNORE INTO table1_new SELECT * FROM table1; TRUNCATE TABLE table1; INSERT INTO table1 SELECT * FROM table1_new; DROP TABLE table1_new; ``` Note: if you want to keep your values unique based on these fields, then create a unique index on your main table.
MySQL query for each subgroup
[ "", "mysql", "sql", "" ]
I have written the following SQL command to create a database, then create a table, and then add data to the created\_db.created\_table. However, it seems to be wrong, since SQL takes the values as field names. What is the problem with it? I am using Microsoft SQL Server 2012 Enterprise Edition. ``` CREATE DATABASE dbsample; CREATE TABLE passwording ( passwording_id INT ,passwording_username VARCHAR(256) ,passwording_date INT ); INSERT INTO passwording (passwording_username, passwording_date) VALUES("myUserName", 754254354); ``` And the error I'm getting: ``` Msg 207, Level 16, State 1, Line 11 Invalid column name 'myUserName'. ```
remove " in `"myUserName"` use `'myUserName'` ``` INSERT INTO passwording (passwording_username,passwording_date) VALUES ('myUserName',754254354); ``` [**Working Fiddle**](http://sqlfiddle.com/#!3/f512a)
In your table, you have specified datatype `INT` for column `passwording_date`. But you had entered a value like `25/12/2011`, which is a string. Would that work? You should have used the [Date](http://msdn.microsoft.com/en-us/library/ms187752.aspx) datatype, right? **EDIT**: With reference to your updated question, why use `""` to insert a string value into your table? Your query should rather be ``` INSERT INTO passwording (passwording_username, passwording_date) VALUES('myUserName', 754254354); ```
SQL insertion syntax error. Taking value as field
[ "", "sql", "sql-server", "" ]
I am using a `PreparedStatement` with a join. Now my question is: Does the `MySQL` Server join those two tables again every time, for example in a batch of 1000 of the same preparedStatements?
Yes, each time you execute the statement, the server will re-execute the query and have to redo the join. What the server should NOT have to do is re-parse and re-plan the query.
> Does the MySQL Server join those two tables again every time Yes. > in a batch of 1000 of the same preparedStatements If you are running a batch of 1000 joins you are apparently doing something wrong.
Does the MySQL Server join tables every time a PreparedStatement is ran?
[ "", "mysql", "sql", "optimization", "prepared-statement", "" ]
I'm using the ***MySql*** membership provider with a ***.NET MVC 4*** application and got it all set up as per [this tutorial](http://dev.mysql.com/doc/connector-net/en/connector-net-tutorials-asp-roles.html). The problem is that the provider sets up the `mysql_aspnet_users` table with the `UserID` ***PK*** as an `INT(11)`, whereas the ***MSSQL*** provider uses a `UNIQUEIDENTIFIER`. I need to migrate existing users to this database and I would much prefer to keep a `Guid` as the primary key within the Users and Membership tables. Is there any way to do this with the ***MySql membership provider***? Or do I need to write my own ***MySql membership provider*** just to use `UUIDs` as the primary keys? I've had a look for any documentation or "non-hacky" ways to do this but haven't had any luck so far.
I don't think you can avoid creating a custom membership user class with a custom membership provider. Something like this tutorial: <http://msdn.microsoft.com/en-us/library/ms366730%28v=VS.85%29.aspx> They use GUIDs as well. You need to change the SQL a bit so it works with MySQL.
You can store a GUID as a CHAR(16) BINARY if you want to make the most efficient use of storage space, or VARCHAR(36) if that's acceptable. --- <http://mysqlbackupnet.codeplex.com/wikipage?title=Using%20MySQL%20With%20GUID%20or%20UUID>
Using GUID for MySQL membership provider user
[ "", "mysql", "asp.net", "sql", "asp.net-mvc", "asp.net-mvc-4", "" ]
I have a database that's used to keep a history of the registers that are open and closed ``` id | name | assigned_to | state | created_on | created_by --------------------------------------------------------- 1 | Jay | 1 | OPEN | 01/01/2011 | 1 2 | Kay | 2 | OPEN | 01/01/2011 | 1 3 | Jay | 1 | CLOSED| 01/01/2111 | 1 ``` "assigned\_to" is the employee's id. "name" is the name of the register that they used. What the database is saying is that "Jay" has been opened and closed but "Kay" has not been closed at all. I can't figure out a clever way to determine how to get only the registers that are currently open. Any ideas on how to approach this? Keep in mind that there will be thousands of these as time progresses.
Making a few assumptions, you can solve this as a "set-within-sets" subquery. The major assumption is that a `name` is "closed" only once and is not opened again afterwards. The following query checks that there is no `closed` state: ``` select r.name from registers r group by r.name having sum(case when state = 'Closed' then 1 else 0 end) = 0; ``` If a `name` can be opened after being closed, then you want to check the most recent time of each: a register is still open when its latest "open" event comes after its latest "close". You can try this: ``` select r.name from registers r group by r.name having sum(case when state = 'Closed' then 1 else 0 end) = 0 or max(case when state = 'Opened' then id end) > max(case when state = 'Closed' then id end); ```
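A quick way to check this kind of set-within-sets logic is to run it against the question's three rows with Python's sqlite3 (note SQLite string comparisons are case-sensitive, so the states below are spelled exactly as in the data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE registers (id INT, name TEXT, assigned_to INT, state TEXT)")
con.executemany("INSERT INTO registers VALUES (?, ?, ?, ?)", [
    (1, 'Jay', 1, 'OPEN'),
    (2, 'Kay', 2, 'OPEN'),
    (3, 'Jay', 1, 'CLOSED'),
])

# Currently open: never closed, or last opened after last closed
# (id serves as the event order, matching the answer's assumptions).
open_names = [r[0] for r in con.execute("""
    SELECT name
    FROM registers
    GROUP BY name
    HAVING SUM(CASE WHEN state = 'CLOSED' THEN 1 ELSE 0 END) = 0
        OR MAX(CASE WHEN state = 'OPEN' THEN id END) >
           MAX(CASE WHEN state = 'CLOSED' THEN id END)
""")]

print(open_names)  # Jay was closed after opening; Kay is still open
```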
You can simply use `ROW_NUMBER()` to get the latest value per name, and list those with the latest state equal to `OPEN`; ``` WITH cte AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY name ORDER BY created_on DESC, id DESC) rn FROM myTable ) SELECT id, name, assigned_to, state, created_on, created_by FROM cte WHERE rn=1 AND state='OPEN'; ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!15/e88ea/3).
SQL Determine flag of a field
[ "", "sql", "database", "" ]
I used LPAD and LEFT function for query and I can get a 7 digits column. ``` Left(LPAD(id, 7, '0'), 2) as Year ``` First two digit is year (yy). Now I want to convert it to four digit year (yyyy). How can I do it? ``` 1249208 -> 2012 1122222 -> 2011 0911111 -> 2009 ... ... 9301010 -> 1993 ``` ### update I use MySQL. Data range from 1980-2014 so far. No more.
If you are using Oracle, use the to\_date function. The format 'RR' rounds to a year in the range 1950 to 2049. Thus, 06 is considered 2006 instead of 1906. ``` select extract(year FROM to_date(Left(LPAD(id, 7, '0'), 2), 'rr')) as Year FROM test ``` Here's an example of it working: ``` select extract(year FROM to_date('96', 'rr')) from dual ``` NOTE: The other format 'YY' will add the current century to the front of the year, so `select extract(year FROM to_date('96', 'yy')) from dual` will give 2096.
If you're using MySQL, you can use the `STR_TO_DATE()` function and `%y` format specifier (for 2-digit years): ``` SELECT YEAR(STR_TO_DATE(Left(LPAD(datestamp, 7, '0'), 2), "%y")) as Year FROM test ``` MySQL follows the following for disambiguation: * Year values in the range 00-69 are converted to 2000-2069. * Year values in the range 70-99 are converted to 1970-1999. (See <https://dev.mysql.com/doc/refman/5.1/en/two-digit-years.html>)
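The 2-digit-year pivot that MySQL documents (00-69 map to 2000-2069, 70-99 map to 1970-1999) is easy to sketch directly in Python, which also confirms it covers the question's 1980-2014 data range:

```python
def expand_two_digit_year(yy):
    """Apply MySQL's documented 2-digit year rule:
    00-69 map to 2000-2069, 70-99 map to 1970-1999."""
    return 2000 + yy if yy <= 69 else 1900 + yy

# Two-digit prefixes taken from the question's sample ids.
samples = {s: expand_two_digit_year(int(s)) for s in ('12', '11', '09', '93', '80')}
print(samples)
```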
SQL convert two digits (yy) to four digits (yyyy)
[ "", "sql", "" ]
I have been trying to get the grasps of SQL, recently, and I was wondering if you could explain how you could use a `WHERE/AND OR` in a `COUNT()` to return the number of rows with a certain criteria... Or am I looking into the wrong function? **Any helpful comments are welcome** EDIT: Heres what I'm working with, I have a simple `SELECT WHERE` query to print out a `profileName` and `profileIcon`. *I am using this as a test to see if `COUNT` works for what I want...* I then want to run a `COUNT` to see how many ROWs are within the criteria. ``` $query = $con->query("SELECT * FROM user_settings WHERE Username='" . $_SESSION['user'] . "'"); while($row = $query->fetch_array()) { echo '<tr><th>Current Icon:</th><td><img src="' . $row['ProfileIcon'] . '" /></td></tr>'; echo '<tr><th>Current Name:</th><td>' . $row['ProfileName'] . '</td></tr>'; } $query = $con->query("SELECT SUM(CASE WHEN Username='" . $_SESSION['user'] ."' THEN 1 ELSE 0 END) FROM user_settings"); var_dump($query); ``` The `var_dump` produces: `object(mysqli_result)#3 (0) { }`
I'd recommend that you read through [the documentation for COUNT](https://dev.mysql.com/doc/refman/5.1/en/counting-rows.html). It includes examples with fairly detailed explanations. An example: ``` SELECT COUNT(*) FROM pet; ``` this would count all rows found in the `pet` table. You can append a WHERE-statement after this to filter on certain criteria.
For a "conditional" `COUNT`, you can use a `CASE` expression and `SUM`, e.g. ``` SUM(CASE WHEN ColumnA = 'Bob' THEN 1 ELSE 0 END) ``` will count the number of rows where `ColumnA` is `Bob`, but will include all rows (whether they have `Bob` or not) in the result set.
An explanation on COUNT()
[ "", "sql", "" ]
In my rails 4 application, a client (clients table) can have many projects (projects table). I have a column called `name` in each table. I am trying to write a `join` and then `select` which uses projects as the base table and clients as the lookup table. `client_id` is the `foreign_key` in the projects table: I am writing my query as follows: ``` Project.joins(:client).select('projects.id,projects.name,clients.name') ``` I get the following response: ``` Project Load (0.6ms) SELECT projects.id,projects.name,clients.name FROM "projects" INNER JOIN "clients" ON "clients"."id" = "projects"."client_id" => #<ActiveRecord::Relation [#<Project id: 1, name: "Fantastico Client">]> ``` If I try to alias it like so: ``` Project.joins(:client).select('projects.id,projects.name,clients.name as client_name') ``` Then I get the following response: ``` Project Load (0.8ms) SELECT projects.id,projects.name,clients.name as client_name FROM "projects" INNER JOIN "clients" ON "clients"."id" = "projects"."client_id" => #<ActiveRecord::Relation [#<Project id: 1, name: "The Dream Project">]> ``` In either case, ActiveRecord looses one of the names as you can see from the above response. How should I be writing this query?
If the column in `select` is not one of the attributes of the model on which the `select` is called on then those columns are not displayed. All of these attributes are still contained in the objects within `AR::Relation` and are accessible as any other public instance attributes. You could verify this by calling `first.client_name`: ``` Project.joins(:client) .select('projects.id,projects.name,clients.name as client_name') .first.client_name ```
You can use `:'clients.name'` as one of your symbols. For instance: ``` Project.select(:id, :name, :'clients.name').joins(:client) ``` I like it better because it seems like Rails understands it, since it quotes all parameters: ``` SELECT "projects"."id", "projects"."name", "clients"."name" FROM "projects" INNER JOIN "clients" ON "clients"."id" = "projects"."client_id" ``` (I'm not 100% sure that's the exact SQL query, but I'm fairly certain and I promise it *will* use `"clients"."name"`)
ActiveRecord Join Query and select in Rails
[ "", "sql", "ruby-on-rails", "activerecord", "" ]
I need to cast a DATE value in a query to a TIMESTAMP WITH TIME ZONE, but currently I'm getting the TimeZone Region ('Europe / Paris') which is not valid to be used by EF. For example, when doing this: ``` select CAST(FECHA AS TIMESTAMP WITH TIME ZONE) from test; ``` I currently get this output: ``` 07/03/14 09:22:00,000000000 EUROPE/PARIS ``` But I need it to be like: ``` 07/03/14 09:22:00,000000000 +01:00 ``` Any idea how to accomplish this?
You can cast the `DATE` to a `TIMESTAMP`, then use [`FROM_TZ`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions068.htm#SQLRF00644) to convert this timestamp to a timestamp with time zone: ``` SQL> SELECT from_tz(CAST (SYSDATE AS TIMESTAMP), '+01:00') tz FROM dual; TZ ------------------------------------------------- 07/03/14 09:47:06,000000 +01:00 ```
With @Vincent Malgrat's solution, you need to get the TIMEZONE_HOUR and then format it for use in your query. I don't know if there is any way to make it automatic. I can suggest nesting some functions. It is not the cleanest solution, but it works for me: ``` SELECT TO_TIMESTAMP_TZ(TO_CHAR(CAST(FECHAHORA AS TIMESTAMP WITH TIME ZONE), 'DD-MM-YY HH24:MI:SS TZH:TZM'), 'DD-MM-YY HH24:MI:SS TZH:TZM' ) FROM TEST; ``` And the result will be something like ``` 03/03/14 09:58:02,000000000 +01:00 ``` Regards!
ORACLE Casting DATE to TIMESTAMP WITH TIME ZONE WITH OFFSET
[ "", "sql", "oracle", "timestamp-with-timezone", "" ]
We have an Oracle table called "Names" which has approximately 10k records. The table has a column called "ID". Now this column is totally messed up and the records look somewhat similar to the following. ``` ID A.011 A.012 A.013 B.014 B.015 B.016 A-B-C X_Y_Z randomName OtherName ``` Now, what we have decided is to remove the dot(.) separated portion of the record and leave the other records as is for now. So the records should look like ``` ID 011 012 013 014 015 016 A-B-C X_Y_Z randomName OtherName ```
You could use this statement: ``` update names set id=regexp_replace(id,'^[^\.]+\.','') ``` as you see here, ids without `.` are simply untouched: ``` select regexp_replace('tst234','^[^\.]+\.','') from dual ``` and here a test with a `.` ``` select regexp_replace('A.0234','^[^\.]+\.','') from dual ```
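The pattern used in the `UPDATE` can be checked in isolation; here is a small Python sketch of the same regular expression applied to the question's sample ids (rows without a dot pass through untouched, exactly as required):

```python
import re

samples = ['A.011', 'B.014', 'A-B-C', 'X_Y_Z', 'randomName']

# Same idea as REGEXP_REPLACE(id, '^[^\.]+\.', ''): drop everything
# up to and including the first dot; no dot means no change.
cleaned = [re.sub(r'^[^.]+\.', '', s) for s in samples]
print(cleaned)
```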
use regular expression: ``` update names set id = case when instr(id,'.') between 0 and length(id) - 1 then regexp_replace(id,'^[^\.]*\.(.+)$','\1') else id end; ```
How to remove the dot-separated part of the column record from the oracle table
[ "", "sql", "oracle", "" ]
I'll try to explain my problem: I've got 2 similar queries. First one: ``` SELECT MOD_DESC, MOD_COD, COUNT(MOD_DESC) AS NUM FROM MODELLI, PV_PROPOSTI, GESTIONE_COMMESSE WHERE Something ``` Second one: ``` SELECT MOD_DESC, MOD_COD, COUNT(MOD_DESC) AS NUM FROM DISTRIB, MODELLI WHERE Something ``` Now, the results are really similar, since the only difference is the WHERE conditions: ``` MOD_DESC MOD_COD NUM desc1 cod1 2 desc2 cod2 1 desc3 cod3 3 ``` and ``` MOD_DESC MOD_COD NUM desc1 cod1 5 desc2 cod2 2 desc4 cod4 3 ``` But now I want to "merge" the 2 queries and have a result like ``` MOD_DESC MOD_COD NUM1 NUM2 desc1 cod1 2 5 desc2 cod2 1 2 desc3 cod3 3 0 desc4 cod4 0 3 ``` I've tried this query: ``` SELECT t2.MOD_DESC, t2.MOD_COD, COUNT(t2.MOD_DESC) as NUM1, COUNT(t1.MOD_DESC) as NUM2 FROM ( SELECT MOD_DESC, MOD_COD FROM MODELLI, PV_PROPOSTI, GESTIONE_COMMESSE WHERE something ) t1 RIGHT JOIN ( SELECT MOD_DESC, MOD_COD FROM DISTRIB, MODELLI WHERE Something ) t2 ON t1.MOD_COD = t2.MOD_COD GROUP BY t2.MOD_DESC, t2.MOD_COD ORDER BY t2.MOD_DESC, t2.MOD_COD ``` But I don't get the correct result. Any help? (I'm on an Oracle DB.)
A few notes. 1) Do the counting in the sub-queries, not in the outer query. Joining your results before counting will change the results of the counts. 2) Use a FULL OUTER JOIN in case there are records present in one sub-query but not the other 3) Join on BOTH fields (MOD\_DESC and MOD\_COD), not just one ``` SELECT * FROM ( SELECT MOD_DESC, MOD_COD, COUNT(MOD_DESC) AS NUM FROM MODELLI, PV_PROPOSTI, GESTIONE_COMMESSE WHERE Something ) AS MPG FULL OUTER JOIN ( SELECT MOD_DESC, MOD_COD, COUNT(MOD_DESC) AS NUM FROM DISTRIB, MODELLI WHERE Something ) AS DM ON DM.MOD_DESC = MPG.MOD_DESC AND DM.MOD_COD = MPG.MOD_COD ```
You don't explain what kind of incorrect result you have. But I suggest you to try something like that: ``` SELECT COALESCE(t1.MOD_DESC, t2.MOD_DESC) AS MOD_DESC, COALESCE(t1.MOD_COD, t2.MOD_COD) AS MOD_COD, COUNT(t2.MOD_DESC) as NUM1, COUNT(t1.MOD_DESC) as NUM2 FROM ( query 1 ) AS t1 FULL OUTER JOIN ( query 2 ) AS t2 ON t1.MOD_COD = t2.MOD_COD GROUP BY COALESCE(t1.MOD_DESC, t2.MOD_DESC) AS MOD_DESC, COALESCE(t1.MOD_COD, t2.MOD_COD) AS MOD_COD ORDER BY MOD_DESC, MOD_COD ```
SQL Join 2 tables with 2 counts
[ "", "sql", "oracle", "t-sql", "" ]
I have a database which contains a field named groupid and a field named groupname. Sample data: ``` groupid groupname 123 abc 234 bcr 1237 cde ``` I'd like to compare groupid with another input value whose length is greater than the length of groupid. I tried a query that does not return the correct answer: ``` SELECT * FROM mydata WHERE groupid LIKE '12309098'; ``` My expected answer is `abc`. What changes should be made to get the correct answer? Thanks in advance.
Since you want the value in the row to be the prefix of your input and not the other way around, you can just turn `LIKE` around the other way; ``` SELECT * FROM mydata WHERE '12309098' LIKE CONCAT(groupid, '%'); ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!6/d77fd/1); EDIT: Since you asked about SQLite, there you need to use `||` for concatenation; ``` SELECT * FROM mydata WHERE '12309098' LIKE `groupid` || '%'; ``` [Another SQLfiddle](http://sqlfiddle.com/#!7/b480d/1).
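The reversed `LIKE` can be tried out with Python's sqlite3, where string concatenation is spelled `||` (sample rows copied from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mydata (groupid TEXT, groupname TEXT)")
con.executemany("INSERT INTO mydata VALUES (?, ?)", [
    ('123', 'abc'), ('234', 'bcr'), ('1237', 'cde'),
])

# The stored groupid acts as the prefix pattern for the input value.
rows = con.execute(
    "SELECT groupname FROM mydata WHERE ? LIKE groupid || '%'",
    ('12309098',),
).fetchall()

print(rows)  # only groupid 123 is a prefix of 12309098
```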
You could do it like below: ``` SELECT * FROM mydata WHERE '12309098' LIKE CONCAT('%', groupid, '%'); ```
How to properly use LIKE statement in sql query
[ "", "sql", "sql-server", "" ]
I have a list of ids. e.g. (1, 2, 10). Is it possible to use that list as a table in Oracle ? I mean something like that: ``` select * from (1, 2, 10) as x, some_table where x not in some_table.value ```
``` select to_number(regexp_substr(str, '[^,]+', 1, level)) ids --put your comma-delimited list in the string below from (select '0,12,2,3,4,54,6,7,8,97' as str from dual) connect by level <= length(regexp_replace(str, '[^,]+')) + 1 minus --here comes select from some_table select level from dual connect by level <= 10; ```
You can use a *with* construction ``` with x as ( select 1 as v from dual union all select 2 from dual union all select 10 from dual ) select * from x, some_table where x.v not in some_table.value -- <- or whatever condition(s) required ```
SQL-Statement to use predefined values list as an SQL-table
[ "", "sql", "oracle", "list", "" ]
I have two tables named TEST and STEPS which are related by Test-Id column. I am able to get all required columns by doing a join as below. ``` select t.id,t.name,s.step_no,s.step_data from test t,steps s where t.id = s.testid ``` What I require is that, apart fro the columns, I also need the total count of rows for each match. **Fiddle: <http://sqlfiddle.com/#!6/794508/1>** **Current Output:** ``` ID NAME STEP_NO STEP_DATA -- ---- ------- --------- 1 TC1 1 Step 1 1 TC1 2 Step 2 1 TC1 3 Step 3 2 TC2 1 Step 1 ``` **Required Output:** ``` ID NAME STEP_NO STEP_DATA COUNT -- ---- ------- --------- ----- 1 TC1 1 Step 1 3 1 TC1 2 Step 2 3 1 TC1 3 Step 3 3 2 TC2 1 Step 1 1 ``` Where count is the total number of rows from the STEPS table for each Id in TEST table. Please let me know if you need any information.
``` select t.id,t.name,s.step_no,s.step_data,counts.count from test t join steps s ON t.id = s.testid join (select testid, count(*) as count from steps group by testid) counts ON t.id = counts.testid ```
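The derived-table join above can be replayed against the fiddle's sample data using Python's sqlite3 (standing in for SQL Server), showing the per-test count repeated on every joined row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE test  (id INT, name TEXT);
    CREATE TABLE steps (testid INT, step_no INT, step_data TEXT);
    INSERT INTO test  VALUES (1, 'TC1'), (2, 'TC2');
    INSERT INTO steps VALUES (1, 1, 'Step 1'), (1, 2, 'Step 2'),
                             (1, 3, 'Step 3'), (2, 1, 'Step 1');
""")

# The derived table pre-computes one count per testid, which is then
# repeated on every joined row for that test.
rows = con.execute("""
    SELECT t.id, t.name, s.step_no, s.step_data, c.cnt
    FROM test t
    JOIN steps s ON t.id = s.testid
    JOIN (SELECT testid, COUNT(*) AS cnt FROM steps GROUP BY testid) c
         ON t.id = c.testid
    ORDER BY t.id, s.step_no
""").fetchall()

print(rows)
```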
You could just add `count(*) over ...` to your query: ``` SELECT t.id, t.name, s.step_no, s.step_data, [count] = COUNT(*) OVER (PARTITION BY s.testid) FROM test t, steps s WHERE t.id = s.testid ``` You can read more about the OVER clause here: * [OVER Clause (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms189461.aspx "OVER Clause (Transact-SQL)") Please consider also getting into the habit of * [**always specifying the schema for your tables**](https://sqlblog.org/2009/10/11/bad-habits-to-kick-avoiding-the-schema-prefix "Bad habits to kick : avoiding the schema prefix"), e.g. ``` test -> dbo.test ``` * [**using the proper JOIN syntax**](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins "Bad habits to kick : using old-style JOINs"), i.e. instead of ``` FROM a, b WHERE a.col = b.col ``` do ``` FROM a INNER JOIN b ON a.col = b.col ``` * [**ending your statements with a semicolon**](https://sqlblog.org/2009/09/03/ladies-and-gentlemen-start-your-semi-colons "Ladies and gentlemen, start your semi-colons!"). So, taking all those points into account, we could rewrite the above query like this: ``` SELECT t.id, t.name, s.step_no, s.step_data, [count] = COUNT(*) OVER (PARTITION BY s.testid) FROM dbo.test AS t INNER JOIN dbo.steps AS s ON t.id = s.testid ; ```
Count value in output with normal rows
[ "sql", "sql-server" ]
I'm struggling with the following SQL: I have a table that has a site\_id and the referrer domain for a site. I'm trying to simply count occurrences by referrer and site and then calculate the total of all referrers for one site. In my results below the count and site total are incorrect. The count should be half, e.g. for site id 1 the count for google and amazon should be 10 and the total for site id 2 should be 18. Any ideas what I could be doing wrong? ``` SELECT site_id, I.referrer_domain AS referrer_domain, COUNT(*) AS items, t.site_total FROM qVisitor_Tracking_1 as I, (SELECT COUNT(1) AS site_total FROM `qVisitor_Tracking_1` WHERE `referrer_domain` != '' group by site_id) AS T WHERE `referrer_domain` != '' GROUP BY `site_id`, referrer_domain site_id referrer count site total 1 amazon 20 24 1 google 20 24 2 amazon 12 24 2 google 18 24 ```
If I understand your requirements it looks like you are missing the ON clause to join the sub query to the main table. ``` SELECT I.site_id, I.referrer_domain AS referrer_domain, COUNT(*) AS items, T.site_total FROM qVisitor_Tracking_1 as I INNER JOIN ( SELECT site_id, COUNT(1) AS site_total FROM qVisitor_Tracking_1 WHERE referrer_domain != '' GROUP BY site_id ) AS T ON I.site_id = T.site_id WHERE referrer_domain != '' GROUP BY I.site_id, referrer_domain, T.site_total ```
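The fix above (joining the per-site totals back on `site_id`) can be sketched with SQLite; the table name is shortened to `tracking` and the sample rows are mine:

```python
import sqlite3

# Minimal stand-in for qVisitor_Tracking_1; sample data is illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tracking (site_id INTEGER, referrer_domain TEXT);
INSERT INTO tracking VALUES
  (1, 'google'), (1, 'google'), (1, 'amazon'),
  (2, 'google'), (2, 'amazon'), (2, 'amazon'), (2, '');
""")

# Per-referrer counts joined with per-site totals (empty referrers excluded).
rows = conn.execute("""
    SELECT i.site_id, i.referrer_domain, COUNT(*) AS items, t.site_total
    FROM tracking i
    JOIN (SELECT site_id, COUNT(*) AS site_total
          FROM tracking WHERE referrer_domain != ''
          GROUP BY site_id) t ON i.site_id = t.site_id
    WHERE i.referrer_domain != ''
    GROUP BY i.site_id, i.referrer_domain, t.site_total
    ORDER BY i.site_id, i.referrer_domain
""").fetchall()
print(rows)
```

Without the `ON` join condition, every site's rows would pair with every subquery row, which is exactly the inflated-count symptom in the question.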
In mysql you can use the `GROUP BY` modifier **`WITH ROLLUP`**: ``` SELECT site_id, referrer_domain, COUNT(*) AS items FROM qVisitor_Tracking_1 WHERE referrer_domain != '' GROUP BY site_id, referrer_domain WITH ROLLUP ``` It should give a result like this: ``` site_id referrer items 1 amazon 7 1 google 13 1 NULL 20 2 amazon 6 2 google 9 2 NULL 15 NULL NULL 35 ``` see: <http://dev.mysql.com/doc/refman/5.0/en/group-by-modifiers.html>
MySQL: Count and Total for two different groupings
[ "mysql", "sql" ]
I have many access points; each AP has two RF cards, and each RF card has a channel number. ``` AP_idx | RF_idx | Channel | 0 | 0 | 7 | 0 | 1 | 136 | 1 | 0 | 11 | 1 | 1 | 128 | 2 | 0 | 4 | 2 | 1 | 149 | ``` Now I need a mapping between AP\_idx and (RF0 channel, RF1 channel); see the following example: ``` AP_idx | ChannelA | ChannelB | 0 | 7 | 136 | 1 | 11 | 128 | 2 | 4 | 149 | ``` I want the channel as ChannelA when RF\_idx == 0, and the channel as ChannelB when RF\_idx == 1. How do I design the SQL statement?
If I understand correctly, you want to "pivot" the data. In SQLite, one way to do this by using `group by`: ``` select AP_idx, max(case when RF_idx = 0 then Channel end) as ChannelA, max(case when RF_idx = 1 then Channel end) as ChannelB from table t group by AP_idx; ``` Another way is by using `join`: ``` select ta.AP_idx, ta.channel as ChannelA, tb.channel as ChannelB from table ta join table tb on ta.AP_idx = tb.AP_idx and ta.RF_idx = 0 and tb.RF_idx = 1; ``` This might have better performance with the right indexes. On the other hand, the aggregation method is safer if some of the channel values are missing.
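Since the question is tagged sqlite, the conditional-aggregation pivot from the first query can be run directly through Python's sqlite3 module; the table name `rf` and the setup are mine, the data is from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rf (AP_idx INTEGER, RF_idx INTEGER, Channel INTEGER);
INSERT INTO rf VALUES (0,0,7),(0,1,136),(1,0,11),(1,1,128),(2,0,4),(2,1,149);
""")

# Pivot: one row per AP, RF0 channel in ChannelA, RF1 channel in ChannelB.
rows = conn.execute("""
    SELECT AP_idx,
           MAX(CASE WHEN RF_idx = 0 THEN Channel END) AS ChannelA,
           MAX(CASE WHEN RF_idx = 1 THEN Channel END) AS ChannelB
    FROM rf
    GROUP BY AP_idx
    ORDER BY AP_idx
""").fetchall()
print(rows)
```

If an AP is missing one of its RF cards, the corresponding column comes back NULL instead of dropping the row, which is the safety property mentioned above.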
``` select AP_idx, ChannelA, ChannelB from (select AP_idx, Channel AS ChannelA from your_table WHERE RF_idx = 0) AS T1 inner join (select AP_idx, Channel AS ChannelB from your_table WHERE RF_idx = 1) AS T2 using (AP_idx) ``` (Here `your_table` is a placeholder for the real table name; each subquery needs its own FROM clause.)
SQL: How to extract data from one column as different columns, according to different conditions?
[ "sql", "sqlite" ]
I have a MySQL table like below. ``` ID NAME SURNAME 1 Joe Black 2 Mary Peterson 3 Bob Marley 4 Andy Murray ... ``` I want to write a **SELECT** command which will return every row **N** times. Example for N = 3. ``` SELECT NAME, SURNAME blah-blah (3 Times ) FROM Person NAME SURNAME Joe Black Joe Black Joe Black Mary Peterson Mary Peterson Mary Peterson Bob Marley Bob Marley Bob Marley Andy Murray Andy Murray ``` Thank you.
You could use `UNION ALL` to get the result you want, but you'd better not do such job with SQL. ``` SELECT * FROM ( SELECT NAME, SURNAME FROM Person UNION ALL SELECT NAME, SURNAME FROM Person UNION ALL SELECT NAME, SURNAME FROM Person ) A ORDER BY NAME ```
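A variation that avoids repeating the SELECT N times is to cross join against a small derived table of N rows. A SQLite sketch (data and names from the question; building the numbers list in Python is my addition, and is safe here only because N is an integer we control, not user input):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (ID INTEGER, NAME TEXT, SURNAME TEXT);
INSERT INTO Person VALUES (1, 'Joe', 'Black'), (2, 'Mary', 'Peterson');
""")

N = 3
# Derived table of N rows: SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
numbers = " UNION ALL ".join(f"SELECT {i + 1}" for i in range(N))

# The comma join (cross join) repeats every Person row once per number row.
rows = conn.execute(
    f"SELECT NAME, SURNAME FROM Person, ({numbers}) AS n ORDER BY NAME"
).fetchall()
print(rows)
```

The same derived-table trick works in MySQL, and it scales to any N without stacking unions by hand.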
What about looping through it in a stored procedure, like this: ``` delimiter // CREATE PROCEDURE ntimes() BEGIN DECLARE i INT DEFAULT 0; SET i = 0; WHILE i < 3 DO SELECT `NAME`, `SURNAME` FROM Person; SET i = i + 1; END WHILE; END; // DELIMITER ; ``` Here you just change 3 to the number of repeats you want, and you don't need a mass of unions. Note that each iteration returns its own result set rather than one combined set.
How to select every row N times in a MySQL table?
[ "mysql", "sql", "select" ]
I have a series of SQL commands I would like to run on about 40 different tables. There must be a way to do this without writing 40 different commands... I am running this in SQL Server. All tables have different names, and the column I want to manipulate (VariableColumn below) also varies in name. I do have a list of the names for both the tables and the columns. The end effect of this code: I am connecting VariableColumn as a foreign key to the DOC\_ID column in the DOCS table. Some tables have values in their VariableColumn that do not correspond to any in the DOC\_ID column (outdated data), so I am first deleting any such rows. The command: ``` -- Delete rows in VariableTable that have invalid VariableColumn values DELETE FROM VariableTable FROM VariableTable v LEFT OUTER JOIN DOCS d ON d.DOC_ID = v.VariableColumn WHERE d.DOC_ID IS NULL -- Add foreign key to VariableTable table ALTER TABLE VariableTable ADD CONSTRAINT FK_DOCS_VariableTable_VariableColumn FOREIGN KEY (VariableColumn) REFERENCES DOCS(DOC_ID); ```
Since you have the list of table and column names you can have them in a table. And you can use them in a cursor to build and execute your commands. For example: ``` DECLARE @Target TABLE (tbl SYSNAME,col SYSNAME) INSERT @Target VALUES ('tbl_1','col_a'),('tbl_2','col_b') DECLARE @tbl SYSNAME DECLARE @col SYSNAME DECLARE @sql NVARCHAR(MAX) DECLARE work CURSOR FOR SELECT tbl,col FROM @Target OPEN work FETCH NEXT FROM work INTO @tbl,@col WHILE @@FETCH_STATUS = 0 BEGIN SET @sql = 'PRINT ''Do something to table: ' + @tbl + ' column: '+ @col + '''' EXECUTE sp_executesql @sql FETCH NEXT FROM work INTO @tbl,@col END CLOSE work DEALLOCATE work ```
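If you would rather generate the script outside the database and review it before running, the same list of (table, column) pairs can drive plain string templating. A minimal sketch; the pair values below are placeholders, not real table names:

```python
# Generate the DELETE + ALTER statements for each (table, column) pair.
# Table and column names here are illustrative.
targets = [("tbl_1", "col_a"), ("tbl_2", "col_b")]

template = """\
-- Delete rows in {tbl} that have invalid {col} values
DELETE FROM {tbl}
FROM {tbl} v
LEFT OUTER JOIN DOCS d ON d.DOC_ID = v.{col}
WHERE d.DOC_ID IS NULL;

-- Add foreign key to {tbl}
ALTER TABLE {tbl}
ADD CONSTRAINT FK_DOCS_{tbl}_{col}
FOREIGN KEY ({col}) REFERENCES DOCS(DOC_ID);
"""

script = "\n".join(template.format(tbl=t, col=c) for t, c in targets)
print(script)
```

The generated text can then be pasted into SSMS and executed as one batch, which is often easier to audit than dynamic SQL run from a cursor.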
Assuming this is a one-off batch you want to run, you could generate this with a simple generator such as NimbleText (<http://NimbleText.com/Live>) The data is a list of the tables and columns you want to edit, e.g. ``` Person, PersonID Document, DocumentID Vehicle, VehicleID etc... ``` The pattern is like this: ``` -- Delete rows in $0 that have invalid $1 values DELETE FROM $0 FROM $0 v LEFT OUTER JOIN DOCS d ON d.DOC_ID = v.$1 WHERE d.DOC_ID IS NULL -- Add foreign key to $0 table ALTER TABLE $0 ADD CONSTRAINT FK_DOCS_$0_$1 FOREIGN KEY ($1) REFERENCES DOCS(DOC_ID); ``` Press "Calculate", grab the result and execute it in SQL.
Performing one SQL command on multiple tables (without re-writing the SQL command)
[ "sql", "sql-server" ]
I have a problem trying to run this script where I got the error: > ORA-00001: unique constraint (constraint\_name) violated So, the thing is I can't insert the data into the table, and the other error is: > ORA-01438: value larger than specified precision allowed for this column ``` CREATE TABLE TEST ( TEST_ID INT NOT NULL, COMPONENTS VARCHAR2(30) NOT NULL, MATTER VARCHAR2(2) NOT NULL, LIC NUMBER(9, 2), MIC NUMBER(9, 2), UIP NUMBER(9, 2), CONSTRAINT TEST_pk PRIMARY KEY (TEST_ID)); --SEQUENCE TEST_ID CREATE SEQUENCE SEQ_TEST_ID START WITH 1 INCREMENT BY 1 MINVALUE 1 MAXVALUE 9999; --INSERTING VALUE TO TABLE INSERT ALL INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Food waste','OG','50.50','39.00','13.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Paper' ,'OG','05.50','19.00','32.50') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Cardboard' ,'OG','00.00','00.00','10.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Plastic' ,'OG','03.00','04.00','00.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Textiles' ,'OG','03.00','06.00','04.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Ruber' ,'OG','03.00','00.00','01.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Leather' ,'OG','03.00','02.50','01.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Yard waste' ,'OG','00.00','00.00','15.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Wood' ,'OG','03.00','05.50','02.50') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Misc. 
Organic','OG','03.00','04.00','00.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Glass' ,'IG','05.50','05.50','08.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Tin Cans' ,'IG','00.50','00.50','05.00') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Aluminum' ,'IG','03.00','03.00','00.50') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Other metals' ,'IG','00.00','00.00','02.50') INTO TEST (TEST_ID,COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_TEST_ID.NEXTVAL,'Drit,ash,etc' ,'IG','20.50','15.50','05.00') SELECT * FROM DUAL; ```
If you are creating the table in Oracle, then `AUTO_INCREMENT` is not supported. Create the table using `MSW_ID` as a number and create a sequence to use its value in your table, as: ``` CREATE TABLE MSW ( MSW_ID number, COMPONENTS VARCHAR2(15) NOT NULL, MATTER VARCHAR2(2) NOT NULL, LIC NUMBER(4, 2), MIC NUMBER(4, 2), UIP NUMBER(4, 2), CONSTRAINT NSW_pk PRIMARY KEY (MSW_ID)); ``` You can learn more about sequences [here](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_6015.htm#SQLRF01314); ``` CREATE SEQUENCE SEQ_MSW_ID START WITH 1 INCREMENT BY 1 MINVALUE 1 MAXVALUE 99999999999; ``` and then your insert query becomes ``` INSERT INTO MSW ( MSW_ID, COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_MSW_ID.NEXTVAL, 'Food waste' ,'OG','50.50','39.00','13.00'); --> semicolon INSERT INTO MSW ( MSW_ID, COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_MSW_ID.NEXTVAL, 'Paper' ,'OG','05.50','19.00','32.50'); --> semicolon . . . INSERT INTO MSW ( MSW_ID, COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_MSW_ID.NEXTVAL, 'Drit,ash,etc' ,'IG','20.50','15.50','05.00'); --> semicolon ``` Or use INSERT ALL as a single statement, as ``` INSERT ALL INTO MSW ( MSW_ID, COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_MSW_ID.NEXTVAL, 'Food waste' ,'OG','50.50','39.00','13.00') INTO MSW ( MSW_ID, COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_MSW_ID.NEXTVAL, 'Paper' ,'OG','05.50','19.00','32.50') . . . INTO MSW ( MSW_ID, COMPONENTS, MATTER, LIC, MIC, UIP) VALUES (SEQ_MSW_ID.NEXTVAL, 'Drit,ash,etc' ,'IG','20.50','15.50','05.00') SELECT * FROM DUAL; --> semicolon ```
You can increase your sequence cache size to 200; the default is 20. ``` alter sequence SEQ_TEST_ID cache 200; ``` Try again.
Getting ORA-00001 and ORA-01438 errors when inserting data into a table
[ "sql", "oracle", "oracle11g" ]
I did some research on things like queries for numeric ranges, interval queries and more, but none of the answers gave me the correct result. For simplicity I will present the info without using ids; the database has a proper schema, but I want to create a query to obtain data that fits an exact integer range. I have this table (the info is abbreviated to clarify my problem): ``` +------+------+--------+---------------+ | year | make | model | motor | +------+------+--------+---------------+ | 1998 | Audi | A3 | 4 Cil 1.8 Lts | | 1999 | Audi | A3 | 4 Cil 1.8 Lts | | 2000 | Audi | A3 | 4 Cil 1.8 Lts | | 2001 | Audi | A3 | 4 Cil 1.8 Lts | | 2002 | Audi | A3 | 4 Cil 1.8 Lts | | 2003 | Audi | A3 | 4 Cil 1.8 Lts | | 2004 | Audi | A3 | 4 Cil 1.8 Lts | | 2004 | Audi | A3 | 4 Cil 2.0 Lts | | 2005 | Audi | A3 | 4 Cil 2.0 Lts | +------+------+--------+---------------+ ``` For example, I want to obtain the motor that fits the 1998 - 2004 range (4 Cil 1.8 Lts), but this query also gives me 4 Cil 2.0 Lts: ``` SELECT DISTINCT motor FROM general G WHERE G.year BETWEEN 1998 AND 2004 ``` **Resulting table:** ``` +---------------+ | motor | +---------------+ | 4 Cil 1.8 Lts | | 4 Cil 2.0 Lts | +---------------+ ``` I also want this query to return an empty result if nothing fits the range exactly. I created another query trying to obtain a different result, but it did not work. ``` SELECT DISTINCT motor FROM general G WHERE G.year IN (1998, 2005); ``` **Resulting table:** ``` +---------------+ | motor | +---------------+ | 4 Cil 1.8 Lts | | 4 Cil 2.0 Lts | +---------------+ ``` Thank you all --- **Update:** My problem is not related to the make or model parameters; I'm struggling with the year range, which in this specific example should give this result if the range is between 1998 and 2004. **Result:** ``` +---------------+ | motor | +---------------+ | 4 Cil 1.8 Lts | +---------------+ ``` If the year range is between 1998 and 2005, I want to obtain an empty result because the data in my table does not fit this range exactly.
It looks like you are asking "what values of the `motor` column appears in all years from 1998 to 2004?" The query in your question answers a related question, which is "what values of `motor` appear in any year from 1998 - 2004?" This query (<http://sqlfiddle.com/#!2/53d6d/15/0>) answers your question by comparing the number of years where Audi made that engine with the total number of years. ``` SELECT motors.years, motors.motor FROM ( SELECT COUNT(DISTINCT year) AS years, motor FROM general WHERE year BETWEEN 1998 AND 2004 GROUP BY motor ) AS motors JOIN ( SELECT COUNT(DISTINCT year) AS years FROM general WHERE year BETWEEN 1998 AND 2004 ) years ON motors.years = years.years ``` This query (<http://sqlfiddle.com/#!2/53d6d/17/0>) takes a slightly different approach, by first computing the production lifetime of each engine in the table, and then comparing your date range to the production range. ``` SELECT motor FROM ( SELECT MIN(year) AS firstyear, MAX(year) AS lastyear, motor FROM general GROUP BY motor ) AS motors_years WHERE firstyear=1998 AND lastyear=2004 ```
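The "appears in every year of the range" condition can also be written as a single `HAVING COUNT(DISTINCT year) = hi - lo + 1` check. A SQLite sketch under that assumption (the data mirrors the question's table, limited to the relevant columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE general (year INTEGER, motor TEXT)")
conn.executemany("INSERT INTO general VALUES (?, ?)",
                 [(y, '4 Cil 1.8 Lts') for y in range(1998, 2005)] +
                 [(y, '4 Cil 2.0 Lts') for y in (2004, 2005)])

def motors_covering(lo, hi):
    # Keep only motors that appear in every year of [lo, hi].
    return [r[0] for r in conn.execute("""
        SELECT motor
        FROM general
        WHERE year BETWEEN ? AND ?
        GROUP BY motor
        HAVING COUNT(DISTINCT year) = ? - ? + 1
    """, (lo, hi, hi, lo))]

print(motors_covering(1998, 2004))  # ['4 Cil 1.8 Lts']
print(motors_covering(1998, 2005))  # []
```

This assumes every year in the range has at least one row somewhere in the table; with gap years, comparing against the count of distinct years actually present (as the first query above does) is the safer form.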
I think you can just add a second where condition against motor column to be like `4 Cil 1.8 Lts` ``` SELECT DISTINCT motor FROM general G WHERE G.year BETWEEN 1998 AND 2004 AND G.motor LIKE '4 Cil 1.8 Lts' ```
MySQL for exact integer range
[ "mysql", "sql" ]
This is the table ``` CREATE TABLE Employee (EmpID number(5) primary key, SIN Number(9) Not null, LastName Varchar2(25) Not null, FirstName Varchar2(25), Street Varchar2(30), City Varchar2(25), Province Char(2), PostalCode Varchar2(7), JobCode Number(4) Not null, Foreign Key(JobCode) REFERENCES Job, IncomeTax Char(1), BirthDate Date, HireDate Date, JobCodeDate Date) TABLESPACE users; ``` This is the line I am trying to insert; there are only three numeric values, and all of them are numbers as far as I can see. ``` INSERT INTO Employee VALUES(93284,3249028,'fnwei','wefw','213jnijni','32n4','AB','s8j 900',3000,'N','24-Aug-86','07-Jul-03','07-Jul-03'); ERROR at line 1: ORA-01858: a non-numeric character was found where a numeric was expected ```
``` alter SESSION set NLS_DATE_FORMAT = 'DD-Mon-YY'; ``` I just had to type this in so that SQL would interpret the date format in my insert queries correctly.
There's possibly a discrepancy between the order of fields as laid out in the INSERT statement, and the order that Oracle is expecting them. I suggest trying again using the full syntax of the INSERT (i.e. specify field names when doing the INSERT). That way, it's absolutely clear the value to field correlation being made. So something like this: ``` INSERT INTO Employee (EmpID, SIN, LastName, FirstName, Street, City, Province, PostalCode, JobCode, IncomeTax, BirthDate, HireDate, JobCodeDate) VALUES(111, 111,'DEF','ABC','111 111 111.','4535','CC','S6H 1X7',3000,'N','1111-08-24','1211-07-07','1213-07-07'); ```
ORA-01858: a non-numeric character was found where a numeric was expected? Even when the values are numbers?
[ "sql", "oracle" ]
`Flag1` is a `varchar` column with values "true" and "false". I need to convert this into bit column. When I try to do this: ``` Convert(Bit,Flag1) ``` it shows an error ``` Msg 245, Level 16, State 1, Line 2 Syntax error converting the varchar value 'False' to a column of data type bit. ```
I suspect that there are other values in addition to 'true' and 'false' in the field 'Flag1'. So check for the values in Flag1: `select distinct Flag1 from YourTable`. Here is my proof: ``` declare @Flag varchar(25) = 'False' select CONVERT(Bit, @Flag) ``` It works fine. However, this will give the same error. ``` declare @Flag varchar(25) = ' False' -- Pay attention to the space in ' False'! select CONVERT(Bit, @Flag) ``` -> Msg 245, Level 16, State 1, Line 2 Conversion failed when converting the varchar value ' False' to data type bit. Pay attention to the space in ' False' in the error message!
While selecting from table, you can do this: ``` SELECT CASE Flag1 WHEN 'true' THEN 1 ELSE 0 END AS FlagVal ``` Syntax: ``` CASE input_expression WHEN when_expression THEN result_expression [ ...n ] [ ELSE else_result_expression ] END Searched CASE expression: CASE WHEN Boolean_expression THEN result_expression [ ...n ] [ ELSE else_result_expression ] END ```
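To see the CASE mapping in action, here is a small SQLite sketch; the `TRIM`/`LOWER` normalization is my addition, and it also handles the stray-whitespace values that the other answer identifies as the likely cause of the conversion error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Flag1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("true",), ("false",), (" False",)])

# Map 'true' (case-insensitive, whitespace-trimmed) to 1, everything else to 0.
rows = conn.execute("""
    SELECT Flag1,
           CASE WHEN LOWER(TRIM(Flag1)) = 'true' THEN 1 ELSE 0 END AS FlagVal
    FROM t
    ORDER BY rowid
""").fetchall()
print(rows)
```

In T-SQL the same normalization would be spelled with `LTRIM(RTRIM(...))` on older versions, since `TRIM` only arrived in SQL Server 2017.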
How to convert a varchar column to bit column in SQL SERVER
[ "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2000" ]
I'm trying to count duplicates based on a column of a table in an Oracle Database. This query using group by: ``` select count(dockey), sum(total) from ( select doc1.xdockeyphx dockey, count(doc1.xdockeyphx) total from ecm_ocs.docmeta doc1 where doc1.xdockeyphx is not null group by doc1.xdockeyphx having count(doc1.xdockeyphx) > 1 ) ``` Returns `count = 94408` and `sum(total) = 219330`. I think this is the correct value. Now, trying this other query using a self join: ``` select count(distinct(doc1.xdockeyphx)) from ecm_ocs.docmeta doc1, ecm_ocs.docmeta doc2 where doc1.did > doc2.did and doc1.xdockeyphx = doc2.xdockeyphx and doc1.xdockeyphx is not null and doc2.xdockeyphx is not null ``` The result is also 94408, but this one: ``` select count(*) from ecm_ocs.docmeta doc1, ecm_ocs.docmeta doc2 where doc1.did > doc2.did and doc1.xdockeyphx = doc2.xdockeyphx and doc1.xdockeyphx is not null and doc2.xdockeyphx is not null ``` is returning 1567466, which I think is wrong. The column I'm using to find duplicates is XDOCKEYPHX and DID is the primary key of the table. Why is the value `sum(total)` different from the result of the last query? I can't see why the last query is returning more duplicate rows than expected.
Thanks to @vogomatix, since his answer helped me understand my problem and where I was wrong. The last query actually results in a number of rows showing each pair of duplicates with no repetitions, but it's not suitable for counting them like the `sum(total)` from the first one. Given this case: ``` DID | XDOCKEYPHX --------------- 1 | 1 2 | 1 3 | 1 4 | 2 5 | 2 6 | 3 7 | 3 8 | 3 9 | 3 ``` The first inner query would return ``` XDOCKEYPHX | TOTAL ------------------ 1 | 3 2 | 2 3 | 4 ``` And the full query would be `count = 3`, meaning there are 3 document keys with duplicates, and the total of duplicated documents `sum(total) = 9`. Now, for the second and third query, if we use just a `select *`, we will get something like: ``` DID_1 | XDOCKEYPHX | DID_2 -------------------------- 2 | 1 | 1 3 | 1 | 1 3 | 1 | 2 5 | 2 | 4 7 | 3 | 6 8 | 3 | 6 8 | 3 | 7 9 | 3 | 6 9 | 3 | 7 9 | 3 | 8 ``` So now, the second query `select count(distinct(xdockeyphx))` will give the correct value 3, but the third query `select count(*)` will give 10, which, well, is incorrect for me since I wanted to know the sum of duplicates for each DID (9). What the third query gives you is all the pairs of duplicates, so you can then compare them or whatever. My misunderstanding was thinking that if I counted all the rows in the third query, I should get the sum of duplicates for each DID (the `sum(total)` of the first query), which was a wrong idea, and now I realize it.
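The arithmetic in the 9-row example can be checked mechanically. A SQLite sketch (the key column is renamed to `doc_key` here; the data is the 3/2/4 grouping from the worked example, where the pair count is C(3,2) + C(2,2) + C(4,2) = 3 + 1 + 6 = 10):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docmeta (did INTEGER PRIMARY KEY, doc_key TEXT)")
# Keys 1,1,1,2,2,3,3,3,3 with dids 1..9, as in the example above.
conn.executemany("INSERT INTO docmeta VALUES (?, ?)",
                 list(enumerate(['1','1','1','2','2','3','3','3','3'],
                                start=1)))

# Group-by version: distinct duplicated keys, and total duplicated rows.
dup_keys, dup_rows = conn.execute("""
    SELECT COUNT(*), SUM(total) FROM
      (SELECT doc_key, COUNT(*) AS total FROM docmeta
       GROUP BY doc_key HAVING COUNT(*) > 1)
""").fetchone()

# Self-join version: number of duplicate PAIRS, not duplicate rows.
pairs = conn.execute("""
    SELECT COUNT(*) FROM docmeta a JOIN docmeta b
      ON a.doc_key = b.doc_key AND a.did > b.did
""").fetchone()[0]

print(dup_keys, dup_rows, pairs)  # 3 9 10
```

The pair count grows quadratically with group size, which is why the self-join total diverges so sharply from `sum(total)` on the real 1.5M-pair data set.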
You don't need the complexity of your last where clause ``` where doc1.did > doc2.did and doc1.xdockeyphx = doc2.xdockeyphx and doc1.xdockeyphx is not null and doc2.xdockeyphx is not null ``` If you think about it, `doc2.xdockeyphx` cannot be null if `doc1.xdockeyphx` is not null. Perhaps it is better expressed by joining the tables: ``` select count(*) from ecm_ocs.docmeta doc1 join ecm_ocs.docmeta doc2 on doc1.xdockeyphx = doc2.xdockeyphx where doc1.xdockeyphx is not null and doc1.did > doc2.did ``` Your first two queries report distinct/grouped results where your last one simply reports all results, which is why the counts differ.
Self join vs group by when counting duplicates
[ "sql", "oracle", "group-by", "duplicates", "self-join" ]
I'm getting the below collation conflict error when I try to join two fairly large tables via a UNION ALL statement. ``` SELECT * FROM [TABLEA] UNION ALL SELECT * FROM [TABLEB] Msg 457, Level 16, State 1, Line 1 Implicit conversion of varchar value to varchar cannot be performed because the collation of the value is unresolved due to a collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "SQL_Latin1_General_CP1_CS_AS" in UNION ALL operator. ``` I would like to identify which columns exactly are mismatched but am unfamiliar with querying sys.columns or information\_schema.
This should do (assuming that the columns on both tables have the same names): ``` SELECT * FROM ( SELECT * FROM sys.columns WHERE OBJECT_NAME(object_id) = 'TABLEA') A INNER JOIN (SELECT * FROM sys.columns WHERE OBJECT_NAME(object_id) = 'TABLEB') B ON A.name = B.name WHERE A.collation_name <> B.collation_name ```
Just replace `MyTable1` and `MyTable2` with your tables names ``` SELECT OBJECT_NAME(c.object_id) as TableName ,c.name AS ColumnName ,c.collation_name as CollationName FROM sys.columns AS c JOIN sys.tables AS t ON c.object_id = t.object_id WHERE t.name IN ( 'MyTableA', 'MyTableB' ) AND c.collation_name IS NOT NULL ``` In case column names are exactly the same than you can do this ``` WITH TableA AS ( SELECT OBJECT_NAME(c.object_id) AS TableName ,c.name AS ColumnName ,c.collation_name AS CollationName FROM sys.columns AS c JOIN sys.tables AS t ON c.object_id = t.object_id WHERE t.name IN ( 'TableA' ) AND c.collation_name IS NOT NULL ), TableB AS ( SELECT OBJECT_NAME(c.object_id) AS TableName ,c.name AS ColumnName ,c.collation_name AS CollationName FROM sys.columns AS c JOIN sys.tables AS t ON c.object_id = t.object_id WHERE t.name IN ( 'TableB' ) AND c.collation_name IS NOT NULL ) SELECT a.TableName ,a.ColumnName ,a.CollationName ,b.TableName ,b.ColumnName ,b.CollationName FROM tableA AS a JOIN TableB AS b ON a.ColumnName = b.ColumnName AND a.CollationName <> b.CollationName ```
Determining where column collations don't match
[ "sql", "sql-server", "t-sql" ]
I have a table called 'Task' with four columns (id, x, y and x\*y). I only enter the values (id, x and y). I need a trigger or something else so that when I enter the values, the 'x\*y' value is calculated automatically. Thanks.
If you *do* want a trigger (maybe the value can be changed later, and `x*y` is just the default?), then yes this *can* be done: ``` CREATE TRIGGER T_task on task instead of insert as insert into task (id,x,y,[x*y] /* Really need to rename this */) select id,x,y,x*y from inserted ``` See [Use the inserted and deleted Tables](http://technet.microsoft.com/en-us/library/ms191300.aspx)
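For comparison, here is a SQLite sketch of the same trigger idea. SQLite only allows INSTEAD OF triggers on views, so this uses an AFTER INSERT trigger that back-fills the product column (the column is named `xy` here, since `x*y` is an awkward identifier):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task (id INTEGER, x INTEGER, y INTEGER, xy INTEGER);

-- After each insert, fill in xy for the row that was just inserted.
CREATE TRIGGER task_xy AFTER INSERT ON task
BEGIN
  UPDATE task SET xy = NEW.x * NEW.y WHERE rowid = NEW.rowid;
END;

INSERT INTO task (id, x, y) VALUES (1, 6, 7);
""")
row = conn.execute("SELECT id, x, y, xy FROM task").fetchone()
print(row)  # (1, 6, 7, 42)
```

As with the SQL Server version, a computed/generated column (where the engine supports it) is usually the simpler choice when the value should always equal x*y.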
In SQL Server, you can just plain use a [computed column](http://technet.microsoft.com/en-us/library/ms188300.aspx), no need for triggers; ``` CREATE TABLE task ( id INT, x INT, y INT, xy AS x*y); -- xy will always be selected as x*y ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!6/9452a/1). Other databases have similar functionality, for example [Oracle's virtual columns](http://www.oracle-base.com/articles/11g/virtual-columns-11gr1.php).
How can I multiply two values in a row in SQL and put the result in another cell of the same row?
[ "sql", "sql-server", "triggers" ]
I have 2 tables like below: ``` Table 1 --------- 1 2 3 Table 2 -------- A B C ``` How do I join to get an output like below: ``` Output --------- 1 A 1 B 1 C 2 A 2 B 2 C 3 A 3 B 3 C ```
Use `Cross join`: ``` SELECT * FROM Table1 CROSS JOIN Table2 ``` You can change the order by replacing `*` with table fields. Read more about [**Cross Join**](http://docs.oracle.com/javadb/10.8.2.2/ref/rrefsqljcrossjoin.html). **OR** you could do this: ``` SELECT Table1.*,Table2.* FROM Table2,Table1 ``` Result: ``` ID NAME 1 A 1 B 1 C 2 A 2 B 2 C 3 A 3 B 3 C ```
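The CROSS JOIN (Cartesian product) behaves the same way in any engine; a quick SQLite check with the question's data (table and column names are mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (n INTEGER);
CREATE TABLE t2 (c TEXT);
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES ('A'), ('B'), ('C');
""")

# Cartesian product: every row of t1 paired with every row of t2.
rows = conn.execute(
    "SELECT n, c FROM t1 CROSS JOIN t2 ORDER BY n, c").fetchall()
print(rows)
```

With 3 rows in each table, the result has 3 x 3 = 9 rows, matching the required output.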
You want to do a CROSS JOIN and that will give you the Cartesian product of all the rows. See <http://en.m.wikipedia.org/wiki/Join_(SQL)>
Oracle: Join two tables to duplicate rows for 2 tables
[ "sql", "oracle", "join" ]
I need to sum all the temperatures for all stations by hour: ``` station hour temperature ----------------------------------- 100 1 2 101 1 2 100 2 4 101 2 4 ``` I want the result to be like this: ``` hour temperature --------------------- 1 4 2 8 ``` ... where the temperature for station 100 gets added to station 101 (2 + 2 = 4) and so on. What would be the best way to do this? Thanks in advance.
try something like this: ``` select hour,sum(temperature) from MyTable group by hour ```
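A quick SQLite check of the GROUP BY with the question's data (table name `readings` is mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE readings (station INTEGER, hour INTEGER, temperature INTEGER)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [(100, 1, 2), (101, 1, 2), (100, 2, 4), (101, 2, 4)])

# One row per hour, temperatures summed across stations.
rows = conn.execute("""
    SELECT hour, SUM(temperature) FROM readings GROUP BY hour ORDER BY hour
""").fetchall()
print(rows)  # [(1, 4), (2, 8)]
```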
``` select hour,sum(temperature) from table_name group by hour; ```
Sum different values from same column
[ "sql", "sum" ]
I'm getting the conversion error in SQL. I have tried Casting and Convert as int but failed. ``` SELECT ProdId, ParentCat, ProdFullName, ProdImage FROM Product WHERE ProdId IN (SELECT Compared FROM Product WHERE ProdId = 10803) ``` * `ProdId` is `int` * `Compared` column is `varchar` `Compared` column values are : ``` 32879, 10844, 10747, 81113, 8001, 81469 ```
In this case, you should build the query as a string and execute it: ``` DECLARE @query VARCHAR(MAX) SELECT @query = 'SELECT ProdId, ParentCat, ProdFullName, ProdImage FROM Product WHERE ProdId IN (' + Compared + ')' FROM Product WHERE ProdId = 10803 EXEC (@query) ```
``` SELECT ProdId, ParentCat, ProdFullName, ProdImage FROM Product WHERE ProdId IN (SELECT convert(int,ltrim(rtrim(Compared))) FROM Product WHERE ProdId = 10803) ``` Read more about [CAST and CONVERT](http://msdn.microsoft.com/en-us/library/ms187928%28v=sql.100%29.aspx)
Conversion failed when converting the varchar to int
[ "sql", "sql-server", "sql-server-2008" ]
In SQL Server, trying to write a age-off report for inventory purposes. Each week, the inventory system marks thousands of rows for deletion. This takes place on Sundays @ 06:00:00 as part of weekly SQL DB purge schedule. Using (yyyy-mm-dd hh:mm:ss:ms) format for closed\_time, how can I calculate the numbers of days between that date, until next Sunday of the current week? And to be more elaborate, is there a way to narrow it down to the exact DD:HH:MM? The problem is the each client's Sunday DB schedule for purge varies. So that might be difficult to compute. Might be easier to just calculate whole days until Sunday 00:00:00. I tried using the DATEDIFF function with no success. ``` SELECT Yada DATEDIFF(DAY, closed_time,DW) AS Days_Until_Purged FROM DB1 WHERE closed_time DESC ``` Thx in advance
try this: ``` select 8-DATEpart(w, closed_time) AS Days_Until_Purged from DB1 ... ```
If you choose any Sunday in the past (Such as `06:00 Sunday 2nd January 2000`), you can calculate time that has GONE BY since then. Then, if you take that and do `modulo 7-days` you get the time that has gone by since the most recent Sunday. Then, if you do `7 - time_gone_by_since_last_sunday` you get the time until the next sunday. I'm going to do this in minutes to cope with a client that has a setting of `06:30`. ``` DECLARE @batch_processing_time SMALLDATETIME SET @batch_processing_time = '2000-01-02 06:00' SELECT (60*24*7) - DATEDIFF(minute, @batch_processing_time, closed_time) % (60*24*7) FROM yourTable ``` That's the number of minutes from each record's `closed_time` until the next `@batch_processing_time`. Divide by (24\*60) to get it in days.
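The modulo-since-a-past-anchor idea translates directly to plain Python, which can be handy for checking the SQL. A sketch using the same 06:00 Sunday 2000-01-02 anchor (the sample closed time below is mine):

```python
from datetime import datetime

# Any past datetime that falls on the purge schedule works as the anchor.
ANCHOR = datetime(2000, 1, 2, 6, 0)   # a Sunday, 06:00
WEEK_MIN = 7 * 24 * 60                # minutes in a week

def minutes_until_purge(closed_time):
    # Minutes elapsed since the anchor, wrapped to one week,
    # then subtracted from a full week to get time-to-next-purge.
    elapsed = int((closed_time - ANCHOR).total_seconds() // 60)
    return WEEK_MIN - elapsed % WEEK_MIN

# A record closed Saturday 2014-03-15 06:00 is purged 24h later.
print(minutes_until_purge(datetime(2014, 3, 15, 6, 0)))  # 1440
```

Note the edge case: a record closed exactly at purge time yields a full week (10080 minutes), not zero, in both this function and the SQL above; adjust the formula if zero is the desired answer there.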
DATEDIFF - How many days until Sunday
[ "sql", "sql-server", "datediff" ]
How to find a month which is incomplete in ORACLE? For example, I have a column called `month_str` in table A, this `month_str` is from 1/1/2014 to 3/13/2014, thus Jan and Feb are complete months, and March is incomplete. Do you know how to find the incomplete one? Thank you! The month\_str data type is varchar2.
Simples: the incomplete month will be the current month. Look up SYSDATE.
``` SQL> with t (month_str) as ( 2 select rownum - 1 + trunc(sysdate,'YEAR') 3 from dual connect by level <= 72 4 ) 5 select to_char(max(month_str),'MON YYYY') 6 from t 7 group by trunc(month_str,'MM') 8 having max(month_str) != last_day(max(month_str)) 9 / TO_CHAR(MAX(MONTH ----------------- MAR 2014 ```
ORACLE how to find partial month
[ "sql", "oracle", "plsql" ]