Columns: Prompt (string, 10 to 31k chars), Chosen (string, 3 to 29.4k chars), Rejected (string, 3 to 51.1k chars), Title (string, 9 to 150 chars), Tags (list, 3 to 7 items)
I have this table ``` ID Name Price 1 John 12 € 2 John 35 € 3 Alex 15 € 4 Alex 12 € 5 James 10 € ``` I need a query that updates a field in another table, summing up all the values in the Price field that have the same name. For example, the result of the query in this case would be: ``` ID Name Price 1 John 47 € 2 Alex 27 € 3 James 10 € ```
With Access 2010 and later we can use [Data Macros](http://office.microsoft.com/en-ca/access-help/create-a-data-macro-HA010378170.aspx) to accomplish this: ![MacroList.png](https://i.stack.imgur.com/5MZRW.png) ![AfterDelete.png](https://i.stack.imgur.com/ulXuZ.png) ![AfterUpdate.png](https://i.stack.imgur.com/5PSoY.png) ![AfterInsert.png](https://i.stack.imgur.com/kzbHS.png) ![UpdateOtherTable.png](https://i.stack.imgur.com/LxZZE.png)
``` INSERT INTO tableName2(name, price) SELECT name, SUM(price) FROM tableName GROUP BY name ``` TableName2 is the second table you showed in your question, while tableName is the first one. Note that this ADDs the rows to the table rather than really updating it (otherwise you'd have to insert all the data manually).
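The aggregate-insert technique in this answer can be sketched with SQLite via Python's `sqlite3` module (a runnable stand-in for the database in the question; table and column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tableName (ID INTEGER, Name TEXT, Price INTEGER)")
cur.execute("CREATE TABLE tableName2 (Name TEXT, Price INTEGER)")
cur.executemany("INSERT INTO tableName VALUES (?, ?, ?)",
                [(1, "John", 12), (2, "John", 35), (3, "Alex", 15),
                 (4, "Alex", 12), (5, "James", 10)])

# One aggregate INSERT: one row per name, prices summed
cur.execute("""INSERT INTO tableName2 (Name, Price)
               SELECT Name, SUM(Price) FROM tableName GROUP BY Name""")

rows = cur.execute("SELECT Name, Price FROM tableName2 ORDER BY Name").fetchall()
print(rows)  # [('Alex', 27), ('James', 10), ('John', 47)]
```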
Update totals in another table when changes made to the current table
[ "sql", "ms-access", "sql-update", "ms-access-2013" ]
I'm a complete SQL novice. I want to display a simple grid..... 1. **Sign up......>30.....30-60.....60-90.....>90.....Total** 2. **Feb**...............4..........30........... 6 ......... 0 .......40 3. **Mar** ............. 0 .........11 ...........1 ..........4 .......16 4. **Apr** 5. **May** etc 6. **Jun** etc Where **Total** represents the total number of customer sign-ups per month, and where the likes of '30-60' represents the number of sign-ups dated 30-60 days before today's date. The query I am using is as follows...... ``` SELECT case MONTH(Datecreated) when 2 then 'February' when 3 then 'March' when 4 then 'April' when 5 then 'May' when 6 then 'June' end as 'Signup Month', (select count(*) from WS_USER_DETAILS where datediff(day, datecreated, GETUTCDATE()) < 30 ) as '<= 30 days', (select count(*) from WS_USER_DETAILS where datediff(day, datecreated, GETUTCDATE()) between 30 and 60) as '<= 60 days', (select count(*) from WS_USER_DETAILS where datediff(day, datecreated, GETUTCDATE()) between 60 and 90) as '<= 90 days', (select count(*) from WS_USER_DETAILS where datediff(day, datecreated, GETUTCDATE()) >90) as '<= 120 days', count(userid) AS 'Total' FROM WS_USER_DETAILS group by MONTH(Datecreated) ``` The problem I have is that the subquery counts are not restricted to the month of each row, so every month shows the same values. For example...... 1. **Sign up......>30.....30-60.....60-90.....>90.....Total** 2. **Feb**...............0..........11........... 1 ......... 4 .......40 3. **Mar** ............. 0 .........11 ...........1 ..........4 .......16 If possible can someone advise me on how to run the select statements for each month? Thanks
First select the time span per record, then group and count. Be careful not to have overlapping ranges. ``` select monthname, isnull(sum(lessthan30),0) as lt30, isnull(sum(between30and59),0) as btw30a59, isnull(sum(between60and89),0) as btw60a89, isnull(sum(between90and119),0) as btw90a119, isnull(sum(morethan119),0) as mt119, isnull(sum(lessthan30),0) + isnull(sum(between30and59),0) + isnull(sum(between60and89),0) + isnull(sum(between90and119),0) + isnull(sum(morethan119),0) as total from ( select month(datecreated) as mon, datename(month,datecreated) as monthname, case when datediff(day, datecreated, GETUTCDATE()) < 30 then 1 else 0 end as lessthan30, case when datediff(day, datecreated, GETUTCDATE()) between 30 and 59 then 1 else 0 end as between30and59, case when datediff(day, datecreated, GETUTCDATE()) between 60 and 89 then 1 else 0 end as between60and89, case when datediff(day, datecreated, GETUTCDATE()) between 90 and 119 then 1 else 0 end as between90and119, case when datediff(day, datecreated, GETUTCDATE()) >= 120 then 1 else 0 end as morethan119 from WS_USER_DETAILS ) as dummy group by monthname, mon order by mon; ```
I think something like this would fit your needs better. You can use the SUM function with a CASE expression to act as a conditional count. ``` SELECT DATENAME(MM, Datecreated), --This function gets the name of the month --Here each SUM(CASE ...) works like a conditional count SUM(CASE WHEN DATEDIFF(dd, datecreated, GETUTCDATE()) < 30 THEN 1 ELSE 0 END) '<= 30 days', SUM(CASE WHEN DATEDIFF(dd, datecreated, GETUTCDATE()) < 60 THEN 1 ELSE 0 END) '<= 60 days', SUM(CASE WHEN DATEDIFF(dd, datecreated, GETUTCDATE()) < 90 THEN 1 ELSE 0 END) '<= 90 days', SUM(CASE WHEN DATEDIFF(dd, datecreated, GETUTCDATE()) < 120 THEN 1 ELSE 0 END) '<= 120 days' FROM WS_USER_DETAILS GROUP BY DATENAME(MM, Datecreated) ``` ## Sources: 1. <http://msdn.microsoft.com/en-us/library/ms189794.aspx> 2. <http://msdn.microsoft.com/en-us/library/ms174420.aspx>
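Both answers rely on conditional aggregation. Here is a minimal sketch in Python with SQLite, using a hypothetical `signups` table with a precomputed `days_ago` column standing in for the `DATEDIFF` call, and the non-overlapping buckets the first answer recommends:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE signups (month TEXT, days_ago INTEGER)")
cur.executemany("INSERT INTO signups VALUES (?, ?)",
                [("Feb", 5), ("Feb", 40), ("Feb", 70), ("Mar", 10), ("Mar", 95)])

# One pass over the table; each SUM(CASE ...) acts as a conditional count
rows = cur.execute("""
    SELECT month,
           SUM(CASE WHEN days_ago < 30 THEN 1 ELSE 0 END)              AS lt30,
           SUM(CASE WHEN days_ago BETWEEN 30 AND 59 THEN 1 ELSE 0 END) AS d30_59,
           SUM(CASE WHEN days_ago BETWEEN 60 AND 89 THEN 1 ELSE 0 END) AS d60_89,
           SUM(CASE WHEN days_ago >= 90 THEN 1 ELSE 0 END)             AS ge90,
           COUNT(*)                                                    AS total
    FROM signups
    GROUP BY month
    ORDER BY month
""").fetchall()
print(rows)  # [('Feb', 1, 1, 1, 0, 3), ('Mar', 1, 0, 0, 1, 2)]
```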
Running a select statement for each case
[ "sql" ]
``` SELECT *, COUNT(examID) AS ExamCount FROM ExamSession GROUP BY [examID], [userID], [sessionID] ``` This gives me a result set, but the `ExamCount` has a '1' in each row even if an exam is displayed in more than 1 row... I'm trying to get the number of times an `examID` appears in the result set. SO the result set looks like this: ``` examID | userID | sessionID | ExamCount --------------------------------------------------------- 1111 | xxxxxx | xxxxxx | 1 1111 | xxxxxx | xxxxxx | 1 1111 | xxxxxx | xxxxxx | 1 2222 | xxxxxx | xxxxxx | 1 2222 | xxxxxx | xxxxxx | 1 3333 | xxxxxx | xxxxxx | 1 3333 | xxxxxx | xxxxxx | 1 3333 | xxxxxx | xxxxxx | 1 3333 | xxxxxx | xxxxxx | 1 ``` How can I get a count of the number of times an `examID` appears? Thanks!
# Code: ``` COUNT(examID) OVER(PARTITION BY examID) AS ExamCount ```
To elaborate a bit on jbarker's answer: ``` if object_id(N'dbo.groupTry',N'U') is not null drop table dbo.groupTry create table dbo.groupTry ( examID int, userID int, sessionID int, ExamCount int ) insert into dbo.groupTry values (1111, 1234, 4321, 1), (1111, 9876, 6789, 1), (1111, 8765, 5678, 1), (2222, 7654, 4567, 1), (2222, 6543, 3456, 1), (3333, 5432, 2345, 1), (3333, 1987, 1789, 1), (3333, 1876, 1678, 1), (3333, 1765, 1567, 1) select count(g.examID) over(partition by examID) as ExamCount, g.examID, g.userID, g.sessionID, g.ExamCount from dbo.groupTry g group by examID, userID, sessionID, ExamCount ``` In my own simple words, `over(partition by examID)` just means: don't count across all the columns; count the rows within each group of identical examID values (because we are partitioning by examID).
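The windowed count can be sketched in Python with SQLite (window functions need SQLite 3.25 or later; the user and session values here are made up):

```python
import sqlite3  # COUNT(*) OVER (PARTITION BY ...) needs SQLite >= 3.25

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE ExamSession (examID INTEGER, userID INTEGER, sessionID INTEGER)")
cur.executemany("INSERT INTO ExamSession VALUES (?, ?, ?)",
                [(1111, 1, 10), (1111, 2, 20), (1111, 3, 30),
                 (2222, 4, 40), (2222, 5, 50)])

# The window count keeps every row while counting rows per examID partition
rows = cur.execute("""
    SELECT examID, userID, sessionID,
           COUNT(*) OVER (PARTITION BY examID) AS ExamCount
    FROM ExamSession
    ORDER BY examID, userID
""").fetchall()
print(rows)
```

Every 1111 row carries ExamCount 3 and every 2222 row carries 2, with no rows collapsed.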
count and group by to get totals
[ "sql", "sql-server", "t-sql" ]
I have a table with the following columns: ``` Id [int] UserId [int] ProductId [int] Created [datetime] ``` Every user can select any product at any time. No restrictions. For a report I have to select all products which haven't been chosen for four weeks. I tried the following approach: ``` SELECT DISTINCT ProductId FROM table WHERE Created > 'Three weeks ago' AND Created < 'today' ``` Problem: This result also gives Ids for products which were added today if they were also added three weeks ago. For example, if I have the following data: ``` Id UserId ProductId Created 1 1 3 2014-09-26 2 2 1 2014-08-12 3 3 3 2014-07-26 4 1 2 2014-06-26 5 6 4 2014-05-26 ``` I want the query to return the ProductIds 1, 2 and 4 NOT 3, since it has been chosen by a user during the last four weeks. I'm a little stuck. Does anyone know a way to achieve what I'm looking for via MSSQL?
Using a simple GROUP BY query, you can select distinct products and their respective last `Created` dates: ``` SELECT ProductId, MAX(Created) AS LastCreated FROM table GROUP BY ProductId ; ``` Now, this would just return all the products without filtering, but you need to filter on `LastCreated`. Since that column is aggregated, you can filter on it using a HAVING clause, like this: ``` SELECT ProductId FROM table GROUP BY ProductId HAVING MAX(Created) < 'Three weeks ago' ; ``` The actual `'Three weeks ago'` expression would depend on the flavour of SQL your product is using. In SQL Server, for instance, the condition would look like this: ``` MAX(Created) < DATEADD(WEEK, -3, CURRENT_TIMESTAMP) ```
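A sketch of the `HAVING MAX(Created)` filter in Python with SQLite, using the sample rows from the question and a fixed cutoff date standing in for the 'three weeks ago' expression:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE picks (Id INTEGER, UserId INTEGER, ProductId INTEGER, Created TEXT)")
cur.executemany("INSERT INTO picks VALUES (?, ?, ?, ?)",
                [(1, 1, 3, "2014-09-26"), (2, 2, 1, "2014-08-12"),
                 (3, 3, 3, "2014-07-26"), (4, 1, 2, "2014-06-26"),
                 (5, 6, 4, "2014-05-26")])

cutoff = "2014-09-07"  # fixed stand-in for DATEADD(WEEK, -3, CURRENT_TIMESTAMP)
rows = cur.execute("""
    SELECT ProductId FROM picks
    GROUP BY ProductId
    HAVING MAX(Created) < ?  -- ISO-8601 strings compare correctly as text
    ORDER BY ProductId
""", (cutoff,)).fetchall()
print(rows)  # product 3 was picked on 2014-09-26, so it is excluded
```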
if you use MSSQL: ``` SELECT DISTINCT ProductId FROM table WHERE Created BETWEEN dateadd(week,-3,getdate()) and dateadd(day,-1,getdate()) ; ``` **Edit:** Exclude today
Select fields from table which weren't added for the last four weeks
[ "sql" ]
I have two stored procedures. The values produced by the first stored procedure are used in the second one, so I want to combine the two stored procedures into a single one. ``` Create procedure [carcallD] @carid varchar(10) as Begin select t.dtime, t.locid, t.vtid from Transaction_tbl t where Tbarcode = @carid End ``` If I execute this with `carid` of `413` I will get output like this: ``` dtime locid vtid ----------------------- ----------- ----------- 2014-06-09 14:59:47 5 8 ``` My other stored procedure looks like this: ``` ALTER procedure [dbo].[Weekend] @wday varchar(50), @yr varchar(50), @vtid integer, @locid integer as begin set nocount on DECLARE @todaysdate date Declare @checkWeekend integer select @todaysdate = CONVERT(varchar(10), GETDATE(), 111) select @checkWeekend = Weekend from weekends_tbl where weekdays = @wday if @checkWeekend = 1 begin select Hamount as amount from locvtypeassign_tbl where vtid = @vtid and locid = @locid and active = 0 end else begin if @todaysdate in (select Hdate from Pholidays_tbl where year = @yr) begin select Hamount as amount from locvtypeassign_tbl where vtid = @vtid and locid = @locid and active = 0 end else begin select Namount as amount from locvtypeassign_tbl where vtid = @vtid and locid = @locid and active = 0 end end end ``` Here I am using these parameters: * `@wday =` I want to pass the particular day of dtime from my output * `@yr =` pass the particular year of dtime from my output * `@vtid =` pass vtid from my output * `@locid =` pass locid from my output So can I combine these two stored procedures into a single one? If anyone is able to help me, I'd appreciate it. Thanks in advance. I want to get output like this: ``` dtime locid vtid amount ----------------------- ----------- ----------- --------- 2014-06-09 14:59:47 5 8 100 ```
Try this ``` ALTER procedure [dbo].[Weekend] @carid varchar(50) as begin Declare @wday datetime, @yr varchar(50), @vtid integer, @locid integer, @day varchar(10), @year integer -- taking parameter value select @wday = t.dtime from Transaction_tbl t where Tbarcode=@carid set @day=datename(Weekday,@wday) set @year=year(@wday) set @vtid = (select t.vtid from Transaction_tbl t where Tbarcode=@carid); set @locid = (select t.locid from Transaction_tbl t where Tbarcode=@carid); set nocount on DECLARE @todaysdate date Declare @checkWeekend integer select @todaysdate = CONVERT(varchar(10), GETDATE(), 111) --End --check current day is holiday(Weeknd) select @checkWeekend= Weekend from weekends_tbl where weekdays=@day if @checkWeekend= 1 begin Select t.dtime, k.HBarcode, m.make,t.Compl, t.plateno,t.self,t.dtime, v.vtype, l.locname,case when l.edt is null or l.edt =0 then l.minEdt +l.BuffrEDT else l.edt + l.BuffrEDT end as EDT, t.locid,t.vtid,t.lsttic, c.Colname, te.UniqueName,DATEDIFF(minute,t.dtime,getdate()) as Duration,pl.PS,pc.PlateCode,t.Paid,t.Status,t.DelDate,t.vtid,t.Locid, Hamount as amount from Transaction_tbl t left JOIN KHanger_tbl k ON t.transactID = k.transactID left JOIN make_tbl m ON t.mkid = m.mkid left join PlateSource_tbl pl on t.PSID=pl.PSID left join PlateCode_tbl pc on t.PCdID=pc.PCdID left JOIN vtype_tbl v ON v.vtid = t.vtid left JOIN Location_tbl l ON t.locid = l.locid left JOIN Color_tbl C ON t.colid = c.colid left JOIN Terminals_tbl te ON k.tid = te.tid left join locvtypeassign_tbl loc on t.Locid=loc.locid where loc.vtid=@vtid and loc.locid=@locid and loc.active=0 and t.TBarcode=@carid end else --Check current day belongs to any public holiday begin if @todaysdate in(select Hdate from Pholidays_tbl where year=@year) begin Select t.dtime, k.HBarcode, m.make,t.Compl, t.plateno,t.self,t.dtime, v.vtype, l.locname,case when l.edt is null or l.edt =0 then l.minEdt +l.BuffrEDT else l.edt + l.BuffrEDT end as EDT, t.locid,t.vtid,t.lsttic, c.Colname, 
te.UniqueName,DATEDIFF(minute,t.dtime,getdate()) as Duration,pl.PS,pc.PlateCode,t.Paid,t.Status,t.DelDate,t.vtid,t.Locid, Hamount as amount from Transaction_tbl t left JOIN KHanger_tbl k ON t.transactID = k.transactID left JOIN make_tbl m ON t.mkid = m.mkid left join PlateSource_tbl pl on t.PSID=pl.PSID left join PlateCode_tbl pc on t.PCdID=pc.PCdID left JOIN vtype_tbl v ON v.vtid = t.vtid left JOIN Location_tbl l ON t.locid = l.locid left JOIN Color_tbl C ON t.colid = c.colid left JOIN Terminals_tbl te ON k.tid = te.tid left join locvtypeassign_tbl loc on t.Locid=loc.locid where loc.vtid=@vtid and loc.locid=@locid and loc.active=0 and t.TBarcode=@carid end -- so calculating normal day amount else begin Select t.dtime, k.HBarcode, m.make,t.Compl, t.plateno,t.self,t.dtime, v.vtype, l.locname,case when l.edt is null or l.edt =0 then l.minEdt +l.BuffrEDT else l.edt + l.BuffrEDT end as EDT, t.locid,t.vtid,t.lsttic, c.Colname, te.UniqueName,DATEDIFF(minute,t.dtime,getdate()) as Duration,pl.PS,pc.PlateCode,t.Paid,t.Status,t.DelDate,t.vtid,t.Locid, Namount as amount from Transaction_tbl t left JOIN KHanger_tbl k ON t.transactID = k.transactID left JOIN make_tbl m ON t.mkid = m.mkid left join PlateSource_tbl pl on t.PSID=pl.PSID left join PlateCode_tbl pc on t.PCdID=pc.PCdID left JOIN vtype_tbl v ON v.vtid = t.vtid left JOIN Location_tbl l ON t.locid = l.locid left JOIN Color_tbl C ON t.colid = c.colid left JOIN Terminals_tbl te ON k.tid = te.tid left join locvtypeassign_tbl loc on t.Locid=loc.locid where loc.vtid=@vtid and loc.locid=@locid and loc.active=0 and t.TBarcode=@carid end end --fetching amount nd details part over--- --Checking corresponding barcde complimentry or not.if compl taking deltails if(select COUNT(t1.Compl) from dbo.Transaction_tbl t1 where T1.TBarcode=@Carid)=1 begin declare @compl integer =null, @transid integer=null, @complid integer=null select @transid=t.transactID from dbo.Transaction_tbl t where t.TBarcode=@carid Select @compl=co.Cmplid from 
dbo.ComplimentTransactAssign_tbl co where co.TransactID=@transid select c.CompName,c1.Remarks from Complimentary_tbl c inner join ComplimentTransactAssign_tbl c1 on c.CmplID=c1.Cmplid where c.CmplID=@compl and c1.TransactID=@transid end --End Compl Checking--- declare @locatnid integer, @location nvarchar(100) begin select @locatnid= t.Locid from dbo.Transaction_tbl t where t.TBarcode=@carid select l1.StartTime,l1.EndTime from dbo.Location_tbl l1 where l1.Locid=@locatnid end end ```
You can use output parameters: ``` CREATE PROCEDURE GetImmediateManager @employeeID INT, @managerID INT OUTPUT AS BEGIN SELECT @managerID = ManagerID FROM HumanResources.Employee WHERE EmployeeID = @employeeID END ``` Take a look at this link: <http://technet.microsoft.com/en-us/library/ms378108(v=sql.110).aspx>
Stored procedure take value from stored procedure itself in SQL Server
[ "sql", "sql-server", "stored-procedures" ]
I have a schema that looks like this: ``` create table image_tags ( image_tag_id serial primary key, image_id int not null ); create index on image_tags(image_id); ``` When I execute a query with two columns, it is ridiculously slow (eg, `select * from image_tags order by image_id desc, image_tag_id desc limit 10;`). If I drop one of those columns in the sort (doesn't matter which), it is super fast. I used `explain` on both queries, but it didn't help me understand why two columns in the `order by` clause were so slow, it just showed me how much slower using two columns was.
For `order by image_id desc, image_tag_id desc` sorting to be optimized via an index, you need this index: ``` create index image_tags_id_tag on image_tags(image_id, image_tag_id); ``` Only a composite index (with few exceptions, I presume, but not in this case) lets the optimizer use the index to determine the order straight away.
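The effect of the composite index can be observed with SQLite from Python; like PostgreSQL, SQLite can walk a composite index in reverse to satisfy a DESC, DESC order, so the plan should no longer need a separate sort step:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE image_tags (image_tag_id INTEGER PRIMARY KEY, image_id INTEGER)")
cur.executemany("INSERT INTO image_tags VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 10), (4, 30), (5, 20)])
cur.execute("CREATE INDEX image_tags_id_tag ON image_tags (image_id, image_tag_id)")

query = """SELECT image_tag_id, image_id FROM image_tags
           ORDER BY image_id DESC, image_tag_id DESC LIMIT 10"""
# With the composite index, the plan typically shows the index scan and
# no "USE TEMP B-TREE FOR ORDER BY" step
print(cur.execute("EXPLAIN QUERY PLAN " + query).fetchall())

rows = cur.execute(query).fetchall()
print(rows)
```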
``` create index on image_tags(image_id, image_tag_id); ``` Try a composite index covering both sort columns.
ORDER BY with two columns is much slower than ORDER BY on either column
[ "sql", "postgresql" ]
I have a table in SQL Server with two numeric columns. At least one of these numeric fields must be filled. How do I write a check constraint to verify this?
This can be done with a check constraint that tests each column for NULL and combines the results with `or`: ``` create table #t (i int , j int , constraint chk_null check (i is not null or j is not null)) ``` The following are the test cases: ``` insert into #t values (null, null) --> error insert into #t values (1, null) --> ok insert into #t values (null, 1) --> ok insert into #t values (1, 1) --> ok ```
A late answer, but here is a solution for SQL Server, for any number of columns to check: ``` CONSTRAINT CK_one_is_not_null CHECK (COALESCE(col1, col2, col3) IS NOT NULL ) ```
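A quick sketch of the COALESCE-based constraint in Python with SQLite, which enforces CHECK constraints the same way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# COALESCE returns its first non-NULL argument, so it is NULL only
# when every listed column is NULL
cur.execute("""CREATE TABLE t (
                   i INTEGER,
                   j INTEGER,
                   CONSTRAINT ck_one_is_not_null
                       CHECK (COALESCE(i, j) IS NOT NULL))""")

cur.execute("INSERT INTO t VALUES (1, NULL)")  # ok
cur.execute("INSERT INTO t VALUES (NULL, 1)")  # ok
try:
    cur.execute("INSERT INTO t VALUES (NULL, NULL)")
    rejected = False
except sqlite3.IntegrityError:  # the all-NULL row is refused
    rejected = True

stored = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rejected, stored)
```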
I need a check constraint on two columns, at least one must be not null
[ "sql", "sql-server-2012" ]
I have the following query: ``` SELECT ShipperName, S.ShipmentID, DepartureDate FROM SHIPMENT S, SHIPMENT_ITEM SI WHERE S.ShipmentID = SI.ShipmentID AND Value > 1000 ORDER BY ShipperName ASC, DepartureDate DESC; ``` Returns these results: ``` International 4 2013-06-02 International 4 2013-06-02 International 4 2013-06-02 Worldwide 3 2013-05-05 ``` The first three results are identical but what I am looking for is this: ``` International 4 2013-06-02 Worldwide 3 2013-05-05 ``` What have I done wrong?
You are selecting data that appear to come exclusively from the Shipment table (`ShipperName`, `S.ShipmentID`, `DepartureDate`), but you are joining that table to the `ShipmentItem` table, which apparently may have more than one row per Shipment. Your joined table has one row for each item (that is related to a shipment in the Shipment table). Evidently you want only *shipments* that contain at least one item having `value` greater than 1000, but as your query is written you will get a result row for each *item* with such a value. The simplest way to achieve what you want is probably just to add the `DISTINCT` keyword to your query, as others have suggested. There are other alternatives, though, some of which might be more appropriate for different, more complex requirements.
You may need to add the DISTINCT keyword on your select statement. Like: ``` SELECT DISTINCT ShipperName, S.ShipmentID, DepartureDate FROM SHIPMENT S, SHIPMENT_ITEM SI WHERE S.ShipmentID = SI.ShipmentID AND Value > 1000 ORDER BY ShipperName ASC, DepartureDate DESC; ```
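The duplication and the DISTINCT fix can be reproduced in Python with SQLite (the item values here are made up; each shipment-4 item over 1000 produces its own joined row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE SHIPMENT (ShipmentID INTEGER, ShipperName TEXT, DepartureDate TEXT)")
cur.execute("CREATE TABLE SHIPMENT_ITEM (ShipmentID INTEGER, Value INTEGER)")
cur.executemany("INSERT INTO SHIPMENT VALUES (?, ?, ?)",
                [(4, "International", "2013-06-02"), (3, "Worldwide", "2013-05-05")])
cur.executemany("INSERT INTO SHIPMENT_ITEM VALUES (?, ?)",
                [(4, 1500), (4, 2000), (4, 3000), (3, 1200)])

base = """ FROM SHIPMENT S JOIN SHIPMENT_ITEM SI ON S.ShipmentID = SI.ShipmentID
           WHERE SI.Value > 1000"""
dup = cur.execute("SELECT ShipperName, S.ShipmentID, DepartureDate" + base).fetchall()
uniq = cur.execute("SELECT DISTINCT ShipperName, S.ShipmentID, DepartureDate" + base).fetchall()
print(len(dup), len(uniq))  # one row per matching item vs one row per shipment
```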
Microsoft SQL Server duplicate results from query
[ "sql", "join", "duplicates" ]
I want to find a 16-digit credit card number inside a SQL string. For example: ``` DECLARE @value1 NVARCHAR(MAX) = 'The payment is the place 1234567812345678' DECLARE @value2 NVARCHAR(MAX) = 'The payment is the place 123456aa7812345678' DECLARE @value3 NVARCHAR(MAX) = 'The payment1234567812345678is the place' ``` The results should be: ``` @value1Result 1234567812345678 @value2Result NULL @value3Result 1234567812345678 ``` The 16 digits must be consecutive, without spaces. How do I do this in a SQL script or a function? Edit: what if I want to find these 2 credit card values? ``` @value4 = 'card 1 is : 4034349183539301 and the other one is 3456123485697865' ``` How should I implement the script?
You can write as: ``` SELECT case when Len(LEFT(subsrt, PATINDEX('%[^0-9]%', subsrt + 't') - 1)) = 16 then LEFT(subsrt, PATINDEX('%[^0-9]%', subsrt + 't') - 1) else '' end FROM ( SELECT subsrt = SUBSTRING(string, pos, LEN(string)) FROM ( SELECT string, pos = PATINDEX('%[0-9]%', string) FROM table1 ) d ) t ``` `Demo`
You can use PATINDEX as ``` PATINDEX('%[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]%', yourStr) ``` If the result is 0 then the string doesn't contain 16 consecutive digits; otherwise it does. It can be used within a `WHERE` clause or `SELECT` statement based on your needs.
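For comparison, the same 'exactly 16 consecutive digits' rule is easy to express as a regular expression outside SQL. A Python sketch using the question's sample values, which also returns both cards in the multi-card case:

```python
import re

# Exactly 16 digits: no digit immediately before or after the run
CARD = re.compile(r"(?<!\d)\d{16}(?!\d)")

value1 = "The payment is the place 1234567812345678"
value2 = "The payment is the place 123456aa7812345678"
value3 = "The payment1234567812345678is the place"
value4 = "card 1 is : 4034349183539301 and the other one is 3456123485697865"

for v in (value1, value2, value3, value4):
    print(CARD.findall(v))
```

value2 yields an empty list because its longest digit runs are only 6 and 10 characters.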
search in a string creditcard numeric value
[ "sql", "sql-server", "t-sql" ]
I have a table with ``` Name EnteredDateID RecvdDateID A 20140901 20240901 B 20140901 20140901 C 20140901 20140901 D 20140901 20140901 E 20140901 20110901 F 20140901 20140901 G 20140901 20110901 ``` I need to write a query that would do the following ``` SELECT * FROM TABLEA WHERE ``` and this is where I'm falling over ``` IF RecvdDateID > GETDATE() THEN USE EnteredDateID > 20140630 ELSE USE THE RecvdDateID > 20140630 ``` So I would end up with ``` Name EnteredDateID RecvdDateID A 20140901 20240901 B 20140901 20140901 C 20140901 20140901 D 20140901 20140901 F 20140901 20140901 ```
You can use ``` SELECT * FROM TABLEA WHERE 20140630 < CASE WHEN RecvdDateID > GETDATE() THEN EnteredDateID ELSE RecvdDateID END ```
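The CASE-in-WHERE approach can be sketched in Python with SQLite, with a fixed `today` value standing in for `GETDATE()` converted to a date ID:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE TABLEA (Name TEXT, EnteredDateID INTEGER, RecvdDateID INTEGER)")
cur.executemany("INSERT INTO TABLEA VALUES (?, ?, ?)",
                [("A", 20140901, 20240901), ("B", 20140901, 20140901),
                 ("E", 20140901, 20110901), ("G", 20140901, 20110901)])

today = 20141001  # stands in for GETDATE() as a yyyymmdd date id
rows = cur.execute("""
    SELECT Name FROM TABLEA
    WHERE 20140630 < CASE WHEN RecvdDateID > :today
                          THEN EnteredDateID
                          ELSE RecvdDateID END
    ORDER BY Name
""", {"today": today}).fetchall()
print(rows)  # E and G fail the RecvdDateID comparison and drop out
```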
``` SELECT * FROM TABLEA WHERE (RecvdDateID > GETDATE() AND EnteredDateID > 20140630) OR (RecvdDateID <= GETDATE() AND RecvdDateID > 20140630) ```
Using IF or Case to determine what field to use in a WHERE statement
[ "sql", "sql-server-2008", "t-sql" ]
I have a table filled with profile IDs in a SQL Server database: ``` select WhateverID from Whatevers ``` For each ID returned from this query, I need to run a series of 5 stored procs in order against it, like so: ``` EXEC MyStoredProc WhateverID, 'Value 1' EXEC MyStoredProc WhateverID, 'Value 2' EXEC MyStoredProc WhateverID, 'Value 3' EXEC MyStoredProc WhateverID, 'Value 4' EXEC MyStoredProc WhateverID, 'Value 5' ``` What is the best way to combine these two structures so that it runs until all IDs from the initial query are covered?
You can use dynamic SQL here quite effectively by building the dynamic string and then executing the whole thing in one step. Here is a simple example of how you could do this; note the doubled single quotes so each literal such as 'Value 1' stays quoted inside the dynamic string. The exec statement is commented out, but you should get the idea. ``` create table #Something ( WhateverID int ) insert #Something (WhateverID) Values(1),(2),(3),(4),(5) declare @SQL nvarchar(max) = '' select @SQL = @SQL + 'EXEC MyStoredProc ' + CAST(WhateverID as varchar(4)) + ', ''Value 1'';' + 'EXEC MyStoredProc ' + CAST(WhateverID as varchar(4)) + ', ''Value 2'';' + 'EXEC MyStoredProc ' + CAST(WhateverID as varchar(4)) + ', ''Value 3'';' + 'EXEC MyStoredProc ' + CAST(WhateverID as varchar(4)) + ', ''Value 4'';' + 'EXEC MyStoredProc ' + CAST(WhateverID as varchar(4)) + ', ''Value 5'';' from #Something select @SQL --exec sp_executesql @SQL drop table #Something ```
Don't use a cursor; a simple, narrow (IDs only) in-memory table variable will suffice. ``` declare @Ids table (id integer primary key not null) Insert @ids(id) Select whateverId from whatevers Declare @id integer While exists(Select * from @ids) Begin Select @id = Min(id) From @ids EXEC MyStoredProc @id, 'Value 1' EXEC MyStoredProc @id, 'Value 2' EXEC MyStoredProc @id, 'Value 3' EXEC MyStoredProc @id, 'Value 4' EXEC MyStoredProc @id, 'Value 5' Delete @Ids where id = @id End ``` or ``` declare @Ids table (id integer primary key not null) Insert @ids(id) Select whateverId from whatevers Declare @id integer Declare @param varchar(20) Declare @i tinyInt = 0 While exists(Select * from @ids) Begin Select @id = Min(id) From @ids while @i < 5 Begin Set @i += 1 Set @param = 'Value ' + str(@i, 1,0) Exec MyStoredProc @id, @param End Set @i = 0 Delete @Ids where id = @id End ```
running a stored proc from a select statement with or without loop
[ "sql", "sql-server", "t-sql" ]
I am trying to get a row count in SQL Server 2008. I am using a join which works perfectly; I just want to get the total number of records, because at the moment it goes beyond a thousand. ``` select tablet.Approach.ApproachId,StartDateTime,QuestionId from Tablet.Approach join tablet.DataCapture on tablet.Approach.ApproachId = tablet.DataCapture.ApproachId where QuestionId = 25 ``` I want the existing result plus one extra field which displays the total number of rows.
You can achieve it like this - **Solution: 1** ``` SELECT tablet.Approach.ApproachId ,StartDateTime ,QuestionId ,COUNT(*) OVER () AS TotalRowCount FROM Tablet.Approach JOIN tablet.DataCapture ON tablet.Approach.ApproachId = tablet.DataCapture.ApproachId WHERE QuestionId = 25 ``` **Solution: 2** (Using CTE) ``` ;WITH SourceData as ( SELECT tablet.Approach.ApproachId ,StartDateTime ,QuestionId FROM Tablet.Approach JOIN tablet.DataCapture ON tablet.Approach.ApproachId = tablet.DataCapture.ApproachId WHERE QuestionId = 25 ) ,RecordCnt AS ( SELECT COUNT(ApproachId) AS TotalRowCount FROM SourceData ) Select * from SourceData Cross join RecordCnt ```
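Solution 1's empty `OVER ()` window can be sketched in Python with SQLite (needs SQLite 3.25+; the table and values are simplified from the question):

```python
import sqlite3  # the empty OVER () window needs SQLite >= 3.25

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE capture (ApproachId INTEGER, QuestionId INTEGER)")
cur.executemany("INSERT INTO capture VALUES (?, ?)",
                [(1, 25), (2, 25), (3, 25), (4, 99)])

# COUNT(*) OVER () counts the whole filtered result and repeats it on every row
rows = cur.execute("""
    SELECT ApproachId, COUNT(*) OVER () AS TotalRowCount
    FROM capture
    WHERE QuestionId = 25
    ORDER BY ApproachId
""").fetchall()
print(rows)  # [(1, 3), (2, 3), (3, 3)]
```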
try this ``` select count(*), tablet.Approach.ApproachId,StartDateTime,QuestionId from Tablet.Approach join tablet.DataCapture on tablet.Approach.ApproachId = tablet.DataCapture.ApproachId where QuestionId = 25 group by tablet.Approach.ApproachId,StartDateTime,QuestionId ```
How to use Count(*) with sql joins
[ "sql", "sql-server" ]
I am trying to migrate a Drupal site to a new host. According to the tutorials I read, I need an .sql export of the database as well as the files (I have the files). I don't have access to the old host's phpMyAdmin, but I used the Backup and Migrate module in Drupal, which created a file with a .mysql extension. After a few failed migration attempts, the new host's support says they need a .sql file. From what I see in my searches here, some have mentioned simply changing the extension of the file. Can I just change the .mysql extension to .sql and give that to my new host? Is it really that simple?
You should try the Backup & Migrate module to import the file smoothly. A .mysql file may not work if you try to import it from phpMyAdmin. Instead, install Backup & Migrate on the new site and then restore the DB with it.
**Yes**, you can just change it to .sql. Every relational database server uses a dialect of SQL that differs a bit from the SQL standard, but they usually share the same file extension: .sql.
Can I change a .mysql file to .sql or is it more complex than that?
[ "mysql", "sql", "drupal" ]
I have the following table containing data about how each group solves problems: ``` PROB | GROUP | REPNO | STATUS -----+---------+-------+------- 1 | Juniors | 0 | 1 | Seniors | 1 | 1 | Juniors | 2 | 1 | Experts | 3 | SOLVED 2 | Juniors | 0 | 2 | Seniors | 1 | SOLVED ``` Column PROB identifies the problem the groups were solving, column GROUP identifies which group was working on it, column REPNO is the repetition number of the attempt (repeated tries until the problem was finally solved; 0 means first try, no repetition yet), and the final column STATUS shows whether the problem was solved on that particular try. I can already build a distribution of how efficiently problems were solved per repetition across ALL GROUPS (SELECT ... GROUP BY repno). But I want to show how efficiently problems were solved by each particular group (a distribution over each group's own repetition order). For example, PROB 1 was tried 2 times by group Juniors and once by group Seniors without being solved, and was finally solved by group Experts on their first try. So I need to recalculate the repetition number for each particular group: ``` PROB | GROUP | REPNO | REPNO_J | REPNO_S | REPNO_E | STATUS -----+---------+-------+---------+---------+---------+------- 1 | Juniors | 0 | 0 | | | 1 | Seniors | 1 | | 0 | | 1 | Juniors | 2 | 1 | | | 1 | Experts | 3 | | | 0 | SOLVED <-- experts solved in first try 2 | Juniors | 0 | 0 | | | 2 | Seniors | 1 | | 0 | | SOLVED <-- seniors solved in first try ``` How do I make this recalculation?
USE RANK() `SQL FIDDLE` ``` select PROB, GROUP1, REPNO, DECODE(GROUP1,'Juniors', rank() over (partition by prob, group1 order by repno) - 1) as REPNO_J, DECODE(GROUP1,'Seniors', rank() over (partition by prob, group1 order by repno) - 1) as REPNO_S, DECODE(GROUP1,'Experts', rank() over (partition by prob, group1 order by repno) - 1) as REPNO_E, STATUS from mytable order by prob,repno ```
Use the ROW_NUMBER windowing function to get the number per group: ``` select prob, "GROUP", repno, case when "GROUP" = 'Juniors' then try end as repno_j, case when "GROUP" = 'Seniors' then try end as repno_s, case when "GROUP" = 'Experts' then try end as repno_e, status from ( select prob, "GROUP", repno, status, row_number() over(partition by prob, "GROUP" order by repno) - 1 as try from mytable ) order by prob, repno; ``` By the way: it is not a good idea to name a column GROUP. This is a reserved word in SQL, so you must quote it and mind upper/lower case whenever you use the name in a query.
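The ROW_NUMBER pivot can be sketched in Python with SQLite (the GROUP column is renamed `grp` here, following the answer's advice about reserved words; only the PROB 1 rows are used):

```python
import sqlite3  # row_number() needs SQLite >= 3.25

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tries (prob INTEGER, grp TEXT, repno INTEGER)")
cur.executemany("INSERT INTO tries VALUES (?, ?, ?)",
                [(1, "Juniors", 0), (1, "Seniors", 1),
                 (1, "Juniors", 2), (1, "Experts", 3)])

# Number attempts within each (prob, grp) pair, then spread per-group
# counters into their own columns with CASE
rows = cur.execute("""
    SELECT prob, grp, repno,
           CASE WHEN grp = 'Juniors' THEN try END AS repno_j,
           CASE WHEN grp = 'Seniors' THEN try END AS repno_s,
           CASE WHEN grp = 'Experts' THEN try END AS repno_e
    FROM (SELECT prob, grp, repno,
                 ROW_NUMBER() OVER (PARTITION BY prob, grp ORDER BY repno) - 1 AS try
          FROM tries) AS numbered
    ORDER BY prob, repno
""").fetchall()
for r in rows:
    print(r)
```

The second Juniors attempt (repno 2) gets repno_j 1, while each other group's single attempt gets 0.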
Reorder within a group
[ "sql", "oracle" ]
We are using the following trigger in SQL Server to maintain history; now I need to identify the operation type (insert, update or delete). I found some information [HERE](https://stackoverflow.com/questions/2965521/oracle-and-triggers-inserted-updated-deleted) but it doesn't work with SQL Server. ``` CREATE TRIGGER audit_guest_details ON [PMS].[GSDTLTBL] FOR INSERT,UPDATE,DELETE AS DECLARE @SRLNUB1 INT; DECLARE @UPDFLG1 DECIMAL(3,0); SELECT @SRLNUB1 = I.SRLNUB FROM inserted I; SELECT @UPDFLG1 = I.UPDFLG FROM inserted I; BEGIN /* Here I need to identify the operation and insert the operation type in the GUEST_ADT 3rd field */ insert into dbo.GUEST_ADT values(@SRLNUB1,@UPDFLG1,?); PRINT 'BEFORE INSERT trigger fired.' END; GO ``` Here I need to identify the operation and insert the operation type accordingly. I don't want to create three separate triggers, one per operation.
For an insert: rows are in `inserted` only. For an update: rows are in both `inserted` and `deleted`. For a delete: rows are in `deleted` only. ``` DECLARE @event_type varchar(42) IF EXISTS(SELECT * FROM inserted) IF EXISTS(SELECT * FROM deleted) SELECT @event_type = 'update' ELSE SELECT @event_type = 'insert' ELSE IF EXISTS(SELECT * FROM deleted) SELECT @event_type = 'delete' ELSE --no rows affected - cannot determine event SELECT @event_type = 'unknown' ```
This is a simplified version of Mikhail's answer that uses a [searched CASE expression](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/case-transact-sql?view=sql-server-ver15). ``` DECLARE @Operation varchar(7) = CASE WHEN EXISTS(SELECT * FROM inserted) AND EXISTS(SELECT * FROM deleted) THEN 'Update' WHEN EXISTS(SELECT * FROM inserted) THEN 'Insert' WHEN EXISTS(SELECT * FROM deleted) THEN 'Delete' ELSE NULL --Unknown END; ```
How to identify the operation type(insert,update,delete) in SQL Server trigger
[ "sql", "sql-server", "triggers" ]
I am accessing the database using a DTO with some entities, and I want to improve the request without modifying my DTO, removing the fetch so Hibernate doesn't return all the data from all the entities (Hibernate is set to lazy). I tried the following, but it's not working: ``` StringBuilder hql = new StringBuilder(); hql.append(" select d.id as id, ce.cod as clasification.cod "); hql.append(" from Document d"); hql.append(" join d.clasificacionEntity ce"); ``` The working HQL request: ``` StringBuilder hql = new StringBuilder(); hql.append(" select d"); hql.append(" from Document d"); hql.append(" join fetch d.clasificacionEntity ce"); ``` The problem is when I try to use "ce.cod as clasification.cod"; the second dot gives me an error. Is there another way to do that? Thanks a lot!!! My DTO result is: ``` DocumentDto{ private id private clasificacionEntityDto; } And clasificacionEntityDto { private cod } ```
In the end I just created a DTO like this: ``` DocumentDto{ private id private String clasificacionEntityDtoCod; } ``` and populated it like this: ``` StringBuilder hql = new StringBuilder(); hql.append(" select d.id as id, ce.cod as clasificacionEntityDtoCod "); hql.append(" from Document d"); hql.append(" join d.clasificacionEntity ce"); ```
When you write this code: ``` hql.append(" select d.id as id, ce.cod as clasification.cod "); ``` you tell the parser to name `d.id` as `id` and `ce.cod` as `clasification.cod`. The last alias is invalid! A valid name would be `clasification` or `cod`. If you want to populate a nested property like `clasification.cod`, you can map it after extracting your result.
HQL query using dots in the response parameters
[ "", "sql", "spring", "hibernate", "hql", "dto", "" ]
I have a stored procedure which takes as input two table types (each of which contains only one ID column). These are filters which need to be applied to the main query inside the stored procedure. I was doing it by joining the main query with these two table type parameters. Something like below: ``` SELECT <cols> FROM <table> t join @tableVar1 v1 on t.c1 = v1.c1 join @tableVar2 v2 on t.c2 = v2.c2 ``` Now I am required to change the procedure such that if a table variable contains no rows it should return the entire set. So if @tableVar1 has 0 rows I do not join the main query with it. What I want to do, and am struggling with, is having the same query conditionally skip the join if the variable has 0 rows. I can add if statements for each combination, but that would mean I have to write 4 queries. The query is complex, and I am afraid changes made afterwards might go out of sync. So is it possible to do this in the same query with minimal performance impact? Appreciate your help.
Something like this maybe - instead of inner joins, I have used left joins and then all 4 conditions go into the where clause. ``` declare @countv1 int, @countv2 int select @countv1 = COUNT(1) from @tableVar1 select @countv2 = COUNT(1) from @tableVar2 SELECT <cols> FROM <table> t left join @tableVar1 v1 on t.c1 = v1.c1 left join @tableVar2 v2 on t.c2 = v2.c2 where (@countv1=0 or v1.c1 is not null) and (@countv2=0 or v2.c1 is not null) ```
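The `(@count = 0 OR joined_key IS NOT NULL)` trick above is easy to try outside SQL Server. Below is a hedged Python + sqlite3 sketch of the same idea (table names and data are invented, and an ordinary table stands in for the table variable, since SQLite has none):

```python
import sqlite3

# Invented schema standing in for the tables in the thread; the
# "(count = 0 OR joined key IS NOT NULL)" shape is the point, not the names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t  (c1 INTEGER);
    CREATE TABLE v1 (c1 INTEGER);
    INSERT INTO t  VALUES (1), (2), (3);
    INSERT INTO v1 VALUES (2);          -- filter table with one row
""")

def filtered_rows(conn):
    count_v1 = conn.execute("SELECT COUNT(*) FROM v1").fetchone()[0]
    rows = conn.execute("""
        SELECT t.c1
        FROM t
        LEFT JOIN v1 ON t.c1 = v1.c1
        WHERE (? = 0 OR v1.c1 IS NOT NULL)
        ORDER BY t.c1
    """, (count_v1,)).fetchall()
    return [r[0] for r in rows]

with_filter = filtered_rows(conn)        # v1 has rows -> acts as a filter
conn.execute("DELETE FROM v1")
without_filter = filtered_rows(conn)     # v1 empty -> returns everything
```

When the filter table is empty, the predicate is true for every row, which is exactly why this approach avoids writing the four-way IF.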
You can achieve conditional Join using [`Dynamic SQL Query`](http://msdn.microsoft.com/en-us/library/ms188001.aspx) like ``` Declare @CntVar1 int Declare @CntVar2 int Declare @ExSQL nvarchar(max) select @CntVar1 = COUNT(1) from @tableVar1 select @CntVar2 = COUNT(1) from @tableVar2 Set @ExSQL= 'Select <cols> FROM <table> t' IF @CntVar1>=1 Begin Set @ExSQL = @ExSQL + 'join @tableVar1 v1 on t.c1 = v1.c1' End IF @CntVar2>=1 Begin Set @ExSQL = @ExSQL +'join @tableVar2 v2 on t.c2 = v2.c2' End Exec Sp_executeSQL @ExSQL ```
Conditionally Adding / Removing Joins in a SQL statement
[ "", "sql", "sql-server", "t-sql", "" ]
When I am using the **rownum** and **between** keywords, the query doesn't return any rows. Can anyone explain the reason why the query is not retrieving any rows? ``` select * from cus where rownum between 2 and 6; ``` I just want to check whether rownum will work when it is used with the between keyword. So I tried the above query to display the rows which are between 2 and 6, but when I executed the query, it didn't retrieve any rows. Thanks in advance
Oracle rownum starts at 1, so you will never get the first rownum if you say `between 2 and N`. It takes a row to "initiate" the rownum pseudocolumn sequence, so by eliminating rownum 1 in your criteria, you eliminate all rownums (or every row essentially has rownum `0`). Look at it like this. You don't get a ROWNUM until the database returns a row to you. The first row of any criteria will always be ROWNUM 1. Now, the trick you can use is to use a subquery. Each subquery will have its own rownum, and if you alias it to another column name, you can preserve it into outer queries, and treat it however you like. So if you are looking to implement paging of a result set, you would normally alias rownum from inner results as rownum\_ to an outer subquery to limit with BETWEEN. ``` select * from (select t.*, rownum as rownum_ from t) where rownum_ between 2 and 6 ``` But note, that the outer result set will have its own rownum, so you could do: ``` select t2.*, rownum from (select a, b, rownum as rownum_ from t) t2 where rownum_ between 2 and 6 ``` You will see `rownum` on the final result still starts at 1, but your inner result will have `rownum_` starting at 2.
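ROWNUM is Oracle-specific, but the core idea above (compute the row number under an alias in a subquery, then filter on the alias in the outer query) can be sketched anywhere. A hedged Python + sqlite3 illustration follows; the table and data are invented, and a correlated COUNT(*) stands in for ROWNUM since SQLite has no such pseudo-column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cus (name TEXT);
    INSERT INTO cus VALUES ('a'), ('b'), ('c'), ('d'), ('e'), ('f'), ('g');
""")

# Number the rows under the alias rn in the inner query, then filter on
# that alias in the outer query, mirroring the rownum_ trick above.
rows = conn.execute("""
    SELECT name, rn
    FROM (SELECT name,
                 (SELECT COUNT(*) FROM cus c2 WHERE c2.name <= c.name) AS rn
          FROM cus c)
    WHERE rn BETWEEN 2 AND 6
    ORDER BY rn
""").fetchall()
```

The outer `WHERE rn BETWEEN 2 AND 6` now works, because `rn` was assigned before the filter ran, unlike a bare ROWNUM predicate.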
> select \* from cus where rownum between 2 and 6; That is completely wrong, because `ROWNUM` is a pseudo-column that can only reach 2 after a first row has been assigned `ROWNUM` 1. In your query, the predicate `ROWNUM BETWEEN 2 AND 6` can never be satisfied, since ROWNUM 1 is never kept. Let's understand this step-by-step : 1. **Understand how a SQL statement is interpreted. After the query is parsed, the filter is applied.** 2. **In your query, the filter `ROWNUM BETWEEN 2 AND 6` is meaningless, since Oracle has not yet assigned `ROWNUM` 1, as the first row is not yet fetched.** 3. **When the first row is fetched, `ROWNUM` is assigned as a pseudo-number. But the filter in your query asks directly for rows between 2 and 6, which is impossible, so no rows are returned.**
rownum is not displaying any rows when used with the between keyword
[ "", "sql", "oracle", "oracle11g", "between", "rownum", "" ]
This is probably a pretty basic question but I'm trying to produce a histogram of order total values (exclusive of shipping & tax) for a given month. Unfortunately there's no column in the table for the total, so it needs to be calculated from the subtotal minus any discounts or applied credits. I thought something like this might work, but I don't think the SUM expression is being evaluated correctly in the case statement as it returns only the "else" condition. ``` select t.range as [price range], COUNT(*) as [orders] from ( select case when SUM(o.subtotal - o.discount - o.credit) between 0 and 49.99 then '0-49.99' when SUM(o.subtotal - o.discount - o.credit) between 50 and 99.99 then '50-99.99' when SUM(o.subtotal - o.discount - o.credit) between 100 and 149.99 then '100-149.99' when SUM(o.subtotal - o.discount - o.credit) between 150 and 199.99 then '150-199.99' else '200+' end as range from dbo.[order] o where o.date_placed BETWEEN '4/1/14' AND '4/30/14') t group by t.range ``` What am I doing wrong? This is in MS SQL Server btw.
This should work: ``` select t.range as [price range], COUNT(*) as [orders] from ( select case when (o.subtotal - o.discount - o.credit) between 0 and 49.99 then '0-49.99' when (o.subtotal - o.discount - o.credit) between 50 and 99.99 then '50-99.99' when (o.subtotal - o.discount - o.credit) between 100 and 149.99 then '100-149.99' when (o.subtotal - o.discount - o.credit) between 150 and 199.99 then '150-199.99' else '200+' end as range from dbo.[order] o where o.date_placed BETWEEN '4/1/14' AND '4/30/14') t group by t.range ```
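As a sanity check of the bucket-then-group shape above, here is a hedged Python + sqlite3 sketch with invented data (SQLite in place of SQL Server, and only three buckets to keep it short):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (subtotal REAL, discount REAL, credit REAL);
    INSERT INTO orders VALUES (60, 5, 5), (45, 0, 0), (210, 0, 0), (55, 0, 0);
""")

# Label each row with its bucket in a derived table, then GROUP BY the
# label; the raw (not SUMmed) expression picks the bucket per row.
rows = conn.execute("""
    SELECT t.price_range, COUNT(*) AS orders
    FROM (SELECT CASE
                   WHEN subtotal - discount - credit < 50  THEN '0-49.99'
                   WHEN subtotal - discount - credit < 100 THEN '50-99.99'
                   ELSE '100+'
                 END AS price_range
          FROM orders) t
    GROUP BY t.price_range
""").fetchall()
histogram = dict(rows)
```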
Try this format for your case statements: ``` select sum(case when o.subtotal - o.discount - o.credit between 0 and 49.99 then 1 else 0 end) as bucket1, sum(case when o.subtotal - o.discount - o.credit between 50 and 99.99 then 1 else 0 end) as bucket2, sum(case when o.subtotal - o.discount - o.credit between 100 and 149.99 then 1 else 0 end) as bucket3, sum(case when o.subtotal - o.discount - o.credit between 150 and 199.99 then 1 else 0 end) as bucket4, sum(case when o.subtotal - o.discount - o.credit >= 200 then 1 else 0 end) as bucket5 from dbo.[order] o where o.date_placed BETWEEN '4/1/14' AND '4/30/14' ```
SQL select case with SUM?
[ "", "sql", "sql-server", "" ]
How can I avoid getting this MySQL error **Incorrect column specifier for column topic\_id** ? MySQL Error... ``` #1063 - Incorrect column specifier for column 'topic_id' ``` SQL Schema... ``` CREATE TABLE discussion_topics ( topic_id char(36) NOT NULL AUTO_INCREMENT, project_id char(36) NOT NULL, topic_subject VARCHAR(255) NOT NULL, topic_content TEXT default NULL, date_created DATETIME NOT NULL, date_last_post DATETIME NOT NULL, created_by_user_id char(36) NOT NULL, last_post_user_id char(36) NOT NULL, posts_count char(36) default NULL, PRIMARY KEY (topic_id) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1; ```
To use `AUTO_INCREMENT` you need to define the column as `INT` or a floating-point type, not `CHAR`. `AUTO_INCREMENT` uses only unsigned values, so it's good to use `UNSIGNED` as well: ``` CREATE TABLE discussion_topics ( topic_id INT UNSIGNED NOT NULL AUTO_INCREMENT, project_id char(36) NOT NULL, topic_subject VARCHAR(255) NOT NULL, topic_content TEXT default NULL, date_created DATETIME NOT NULL, date_last_post DATETIME NOT NULL, created_by_user_id char(36) NOT NULL, last_post_user_id char(36) NOT NULL, posts_count char(36) default NULL, PRIMARY KEY (topic_id) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1; ```
The `auto_increment` property only works for numeric columns (integer and floating point), not `char` columns: ``` CREATE TABLE discussion_topics ( topic_id INT NOT NULL AUTO_INCREMENT, project_id char(36) NOT NULL, topic_subject VARCHAR(255) NOT NULL, topic_content TEXT default NULL, date_created DATETIME NOT NULL, date_last_post DATETIME NOT NULL, created_by_user_id char(36) NOT NULL, last_post_user_id char(36) NOT NULL, posts_count char(36) default NULL, PRIMARY KEY (topic_id) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1; ```
How can I avoid getting this MySQL error Incorrect column specifier for column COLUMN NAME?
[ "", "mysql", "sql", "auto-increment", "ddl", "" ]
I've seen a few answers to questions similar to mine but I cannot get them to work. I have several date fields in my query that return the date and time, like 7/1/2014 12:00:00 AM. Is there a way I can just have the fields show 7/1/2014? ``` SELECT DISTINCT C.RECEIPTDATE, (I.CLIENTID ||' - '||PO.CLIENTNAME) AS CLIENT, D.INVOICEID, D.SVCFROMDATE, D.SVCTODATE, D.SVCCODE FROM M_EQP_ORDERS WHERE..... ``` I basically would like to cut down the two date fields to the shorter date format, minus the time. Thanks in advance!
Use to\_char function: ``` SELECT DISTINCT to_char(C.RECEIPTDATE,'DD/MM/YYYY'), (I.CLIENTID ||' - '||PO.CLIENTNAME) AS CLIENT, D.INVOICEID, D.SVCFROMDATE, D.SVCTODATE, D.SVCCODE FROM M_EQP_ORDERS WHERE..... ```
Just use the function TRUNC. ``` SELECT DISTINCT TRUNC(C.RECEIPTDATE), (I.CLIENTID ||' - '||PO.CLIENTNAME) AS CLIENT, D.INVOICEID, TRUNC(D.SVCFROMDATE), TRUNC(D.SVCTODATE), D.SVCCODE FROM M_EQP_ORDERS WHERE..... ```
Convert datetime field to just a date field in SQL (Oracle)
[ "", "sql", "oracle", "datetime", "" ]
How do we check version of Oracle on which we are working? How do we check the version of the interface on which we are working? I have tried ``` select v$ from version ; ```
`select banner from v$version;` should work. You can also use: `select version from PRODUCT_COMPONENT_VERSION where rownum = 1;`
There are multiple ways, to list a few of them : 1. ``` SQL> select banner from v$version where rownum=1; BANNER -------------------------------------------------------------------------------- Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production ``` 2. ``` SQL> set serveroutput on; SQL> exec dbms_output.put_line(dbms_db_version.version||'.'||dbms_db_version.release); 12.1 PL/SQL procedure successfully completed. ``` 3. ``` SQL> SELECT VERSION FROM V$INSTANCE; VERSION ----------------- 12.1.0.1.0 ``` 4. ``` SQL> select version from PRODUCT_COMPONENT_VERSION where rownum = 1; VERSION -------------------------------------------------------------------------------- 12.1.0.1.0 ```
How do we check version of Oracle
[ "", "sql", "oracle", "version", "" ]
Why won't `SSMS` process this script for me? ``` DECLARE @containsReturn AS CHAR(500) = 'ME@ME.co.uk; YOU@YOU.co.uk;' CREATE TABLE #x(ProblemColumn CHAR(500)); INSERT INTO #x VALUES(@containsReturn); SELECT * FROM #X; UPDATE #x SET ProblemColumn = REPLACE(ProblemColumn, char(13), '') WHERE CONTAINS(ProblemColumn, CHAR(13)) ``` I get this error, focused on the final `CHAR(13)` in the `UPDATE`'s `WHERE` clause: > Msg 102, Level 15, State 1, Line 14 > Incorrect syntax near 'CHAR'.
One quick thought: why do you even need the WHERE clause in this case? Just run the update statement like this: ``` UPDATE #x SET ProblemColumn = REPLACE(ProblemColumn, char(13), '') ``` It will do the right thing, and it will perform better too.
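The claim that the unfiltered UPDATE does the right thing is easy to verify: REPLACE is a no-op on rows that do not contain the character. A hedged Python + sqlite3 sketch (invented rows; sqlite's replace() and char() in place of the T-SQL ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (c TEXT);
    INSERT INTO x VALUES ('a' || char(13) || 'b'), ('plain');
""")

# No WHERE clause: rows without CHAR(13) come back unchanged anyway.
conn.execute("UPDATE x SET c = replace(c, char(13), '')")
values = [r[0] for r in conn.execute("SELECT c FROM x ORDER BY c").fetchall()]
```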
Because the TSQL [`CONTAINS`](http://msdn.microsoft.com/en-us/library/ms187787.aspx) method takes an `NVARCHAR` as the second parameter and it can't implicitly convert `CHAR(13)`, you need to declare it as a variable, which can be: ``` DECLARE @srch NVARCHAR(1) = CHAR(13) UPDATE #x SET ProblemColumn = REPLACE(ProblemColumn, char(13), '') WHERE CONTAINS(ProblemColumn, @srch) ``` Incidentally, when you do this, you just get another error: > Msg 7601, Level 16, State 2, Line 15 > Cannot use a CONTAINS or FREETEXT predicate on table or indexed view '#x' because it is not full-text indexed. If your real table is not a temp table, and really is Full-Text indexed, you'll be fine. Otherwise you could just do something much simpler: ``` UPDATE #x SET ProblemColumn = REPLACE(ProblemColumn, char(13), '') WHERE ProblemColumn LIKE '%' + CHAR(13) + '%' ```
CONTAINS unhappy with char(13)
[ "", "sql", "sql-server", "" ]
If I have the following sample data: ``` ╔══════════════╦══════════════════╦════════════╦═══════╗ ║ Client ║ con_id ║ mat1_07_03 ║ Ccode ║ ╠══════════════╬══════════════════╬════════════╬═══════╣ ║ Clients Name ║ C13109BBFD511534 ║ $1,062.00 ║ NOFL ║ ║ Clients Name ║ C11AC9BBF74D6882 ║ $879.73 ║ NOFL ║ ║ Clients Name ║ C12A69BBF1ACB578 ║ $2,790.29 ║ NOFA ║ ║ Clients Name ║ C12A69BBF1ACB578 ║ $912.00 ║ NOFL ║ ║ Clients Name ║ C6B0CA1A767C9744 ║ $2,180.11 ║ NOFL ║ ║ Clients Name ║ C11AC9BBF74D6882 ║ $878.67 ║ NOFA ║ ║ Clients Name ║ C13B79BBF4F1F450 ║ $300.00 ║ NOFL ║ ║ Clients Name ║ C12A69BBF1ACB578 ║ $1,790.67 ║ NOFL ║ ║ Clients Name ║ CA6869E2FE38A449 ║ $240.00 ║ NOFA ║ ║ Clients Name ║ C46439FB0D847140 ║ $3,392.66 ║ NOFL ║ ║ Clients Name ║ C12A69BBF1ACB578 ║ $1,791.73 ║ NOFA ║ ║ Clients Name ║ C13B49BBF12ED236 ║ $0.00 ║ NOFL ║ ║ Clients Name ║ C12A69BBF1ACB578 ║ $879.73 ║ NOFL ║ ╚══════════════╩══════════════════╩════════════╩═══════╝ ``` And Apply the following query: ``` SELECT [Client]=MAX(m.Client) ,[CaseCount]=COUNT(m.con_id) ,[AmtInDispute]=CONVERT(char, SUM(Convert(money, m.mat1_07_03)), 101) FROM lntmu11.matter m GROUP BY m.con_id ORDER BY COUNT(m.Client) DESC ``` How can I further group the `Ccode` Column to get the `COUNT` of how many rows are either `NOFA` or `NOFL` My desired output would appear as: ``` ╔══════════╦═══════════╦═══════════════╦═══════════════╦══════════════╗ ║ Client ║ CaseCount ║ NOFACaseTotal ║ NOFLCaseTotal ║ AmtInDispute ║ ╠══════════╬═══════════╬═══════════════╬═══════════════╬══════════════╣ ║ Client A ║ 3548 ║ 2000 ║ 1548 ║ 5,658,307.60 ║ ║ Client B ║ 3366 ║ 100 ║ 3266 ║ 2,885,649.48 ║ ║ Client C ║ 3014 ║ 800 ║ 2214 ║ 2,851,507.13 ║ ║ Client D ║ 2340 ║ 340 ║ 2000 ║ 3,467,207.12 ║ ╚══════════╩═══════════╩═══════════════╩═══════════════╩══════════════╝ ```
Just use conditional aggregation: ``` SELECT m.Client, CaseCount = COUNT(m.con_id), CONVERT(varchar(255), SUM(Convert(money, m.mat1_07_03)), 101) as AmtInDispute, sum(case when cCode = 'NOFA' then 1 else 0 end) as NOFACaseTotal, sum(case when cCode = 'NOFL' then 1 else 0 end) as NOFLCaseTotal FROM lntmu11.matter m GROUP BY m.client ORDER BY COUNT(m.Client) DESC; ``` Note: when doing conversions to a character type, always include the length (in T-SQL).
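The SUM(CASE ...) pivot above translates to almost any SQL dialect. A hedged Python + sqlite3 sketch with a tiny invented data set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE matter (client TEXT, ccode TEXT);
    INSERT INTO matter VALUES
        ('A', 'NOFA'), ('A', 'NOFL'), ('A', 'NOFL'), ('B', 'NOFL');
""")

# One SUM(CASE ...) per category turns row values into columns of counts.
rows = conn.execute("""
    SELECT client,
           COUNT(*) AS case_count,
           SUM(CASE WHEN ccode = 'NOFA' THEN 1 ELSE 0 END) AS nofa_total,
           SUM(CASE WHEN ccode = 'NOFL' THEN 1 ELSE 0 END) AS nofl_total
    FROM matter
    GROUP BY client
    ORDER BY client
""").fetchall()
```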
You can use conditional aggregation to get the other 2 columns. Using an aggregate function with a CASE expression to count the `Ccode` values that you want for each of the columns. ``` SELECT [Client]=MAX(m.Client) ,[CaseCount]=COUNT(m.con_id) ,NOFACaseTotal = sum(case when m.Ccode = 'NOFA' then 1 else 0 end) ,NOFLCaseTotal = sum(case when m.Ccode = 'NOFL' then 1 else 0 end) ,[AmtInDispute]=CONVERT(char(50), SUM(Convert(money, m.mat1_07_03)), 101) FROM lntmu11.matter m GROUP BY m.con_id ORDER BY COUNT(m.Client) DESC; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/32a78/2).
SQL GROUP BY with FILTER IN SELECT
[ "", "sql", "sql-server", "t-sql", "pivot", "" ]
I have two tables SurrogateKeys and SurrogateKeyRelatives. I will find out of there are records that satisfy a query joining both tables. I am playing with two options: ``` select count(*) from ( select top(1) sk.Username from SurrogateKeys as sk inner join SurrogateKeyRelatives as skr on sk.Id = skr.SurrKeyId where sk.Username='John.Doe' and skr.SomeField = 1 ) ``` and ``` if exists( select top(1) sk.Username from SurrogateKeys as sk inner join SurrogateKeyRelatives as skr on sk.Id = skr.SurrKeyId where sk.Username='John.Doe' and skr.SomeField = 1 ) begin select 1 end else begin select 0 end ``` Attempting option 1 gives me Incorrect Syntax near ')'. What am I missing in the above query?
``` select count(*) from ( select top(1) sk.Username from SurrogateKeys as sk inner join SurrogateKeyRelatives as skr on sk.Id = skr.SurrKeyId where sk.Username='John.Doe' and skr.SomeField = 1 ) Q1 ```
``` select top(1) CASE WHEN skr.SurrKeyId is null THEN 0 ELSE 1 END from SurrogateKeys as sk left join SurrogateKeyRelatives as skr on sk.Id = skr.SurrKeyId and sk.Username='John.Doe' and skr.SomeField = 1 ```
Select count from subselect syntax
[ "", "sql", "sql-server", "t-sql", "left-join", "" ]
I have a table with data like: ``` col1 col2 col3 col4 1 A B Z-A.P-E-001.02 2 A B Z.01 3 A B Z.02 4 A B Z-A.P-E-001.01 5 C D M.01 6 C D M.02 ``` I want to find values in col4 having numbers at the end (the number after the last '.'). e.g. in this scenario the expected output is: ``` value col4 Z Z.01 Z Z.02 Z Z.03 M M.01 M M.02 Z-A.P-E-001 Z-A.P-E-001.01 Z-A.P-E-001 Z-A.P-E-001.02 ``` Can you please give me an idea on how to do it?
This is how I came to an answer, with help from Melvin Smith: ``` SELECT SUBSTR(col4, 0, instr(col4, regexp_substr(col4, '\.\d+$')) -1) AS VALUE, col4 FROM new_test where regexp_like(col4, '\.\d+$'); ``` @Melvin: Thanks a ton!
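REGEXP_SUBSTR and REGEXP_LIKE are Oracle features, but the regular expression itself (`\.\d+$`, a dot followed by trailing digits) is portable. A hedged sketch of the same split in plain Python, using sample values borrowed from the question plus one invented non-match:

```python
import re

rows = ["Z-A.P-E-001.02", "Z.01", "no-trailing-number"]

def split_trailing_number(value):
    # Same pattern as the Oracle query: ".<digits>" anchored at the end.
    match = re.search(r"\.(\d+)$", value)
    if match is None:
        return None
    return value[: match.start()], match.group(0)

parsed = [split_trailing_number(v) for v in rows]
```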
Use Oracle's regex functions, `regexp_substr()` and `regexp_like()`. I'm not sure what your actual requirements are, but something like this (off the top of my head): ``` -- will match up to dot, but not including select regexp_substr(col4, '^\w+') as value, col4 from tab where regexp_like(col4, '\d'); -- if any digit in column ``` Or if you only want to look for values with numbers at the end of the field, use the $ anchor ``` where regexp_like(col4, '\d$'); ```
Find value and corresponding entries with number at the end
[ "", "sql", "oracle", "" ]
A very simple representation of my problem is: Reviews ``` | mentor_id | mentee_id | status | created | | 1 | 2 | active | 2014-08-13 | | 1 | 2 | inactive | 2014-08-20 | | 1 | 2 | inactive | 2014-08-27 | | 1 | 3 | inactive | 2014-08-20 | | 1 | 2 | inactive | 2014-09-03 | ``` User Table ``` | id | first_name | last_name | | 1 | Ivan | Pietro | | 2 | Alexander | Summers | | 3 | Mark | Xavier | ``` Mentorship Table ``` | id | mentee_id | mentor_id | created | | 1 | 2 | 1 | 2014-08-06 | | 2 | 3 | 1 | 2014-08-06 | ``` `mentor_id` and `mentee_id` are ids of the users table. I want to get the `users.id`, `users.first_name` and `users.last_name` of the mentees(`user` table) that have a `status` of `inactive` for the last 14 days after they were created in the mentorship table.
Your question isn't crystal clear, but I think you are asking for: ``` SELECT DISTINCT u.* FROM reviews r JOIN mentorship m ON m.mentee_id = r.mentee_id AND m.mentor_id = r.mentor_id JOIN user u ON u.id = r.mentee_id WHERE r.status = 'inactive' AND DATEDIFF(r.created, m.created) >= 14 ``` This would give you: ``` id first_name last_name -- ---------- ---------- 2 Alexander Summers 3 Mark Xavier ``` It's difficult to know if this is what you are after because the dataset is limited (there are only 2 mentees anyway). Hopefully it is!
Check this ``` select user.id,user.first_name,user.last_name,max(Reviews.created) RCreated, Mentorship.created MCreated from Reviews inner join User on Reviews.mentee_id=USer.id inner join Mentorship on Reviews.mentee_id=Mentorship.mentee_id where status='inactive' group by User.id having DATEDIFF(RCreated, MCreated) >= 14 ``` <http://sqlfiddle.com/#!2/dfc71/26>
Get results depending on time-based conditions
[ "", "mysql", "sql", "" ]
I need to be able to change an AND condition of an inner join based on a flag. I tried the following, based on [this SO answer](https://stackoverflow.com/questions/10256848/can-i-use-case-statement-in-a-join-condition); while it compiles, it never completes when you run it. Any thoughts? ``` SELECT Stuff, ... FROM Table t1 INNER JOIN ... INNER JOIN ... ... INNER JOIN OTHER_TABLE t2 ON CASE WHEN @ParticipantsFlag = 'S' AND t1.ID = t2.ID AND t2.STUDENT_ID IS NOT NULL THEN 1 WHEN @ParticipantsFlag = 'A' AND t1.ID = t2.ID AND t2.ADULT_ID IS NOT NULL THEN 1 ELSE 0 END = 1 ```
``` SELECT Stuff, ... FROM Table t1 INNER JOIN ... INNER JOIN ... ... INNER JOIN OTHER_TABLE t2 ON t1.ID = t2.ID AND ( ( @ParticipantsFlag = 'S' AND t2.STUDENT_ID IS NOT NULL ) or ( @ParticipantsFlag = 'A' AND t2.ADULT_ID IS NOT NULL ) ) ``` +1 to linoff but I this may be more efficient two join approach ``` LEFT JOIN OTHER_TABLE t2s ON t1.ID = t2s.ID AND @ParticipantsFlag = 'S' LEFT JOIN OTHER_TABLE t2a ON t1.ID = t2a.ID AND @ParticipantsFlag = 'A' WHERE t2s.STUDENT_ID IS NOT NULL or t2a.ADULT_ID IS NOT NULL ```
`or` conditions in joins can be quite expensive. I would recommend that you join the table twice: ``` LEFT JOIN OTHER_TABLE t2s ON @ParticipantsFlag = 'S' and t1.ID = t2s.ID AND t2s.STUDENT_ID IS NOT NULL LEFT JOIN OTHER_TABLE t2a ON @ParticipantsFlag = 'A' AND t1.ID = t2a.ID AND t2a.ADULT_ID IS NOT NULL ``` Note the use of `left join` instead of `inner join`. Then adjust the `select` and `where` clauses to get what you need from the two tables. To handle the implicit filtering of one-or-the-other: ``` where (t2s.id is not null or t2a.id is not null) ``` To combine the values in the select, use `coalesce()`: ``` select coalesce(t2s.name, t2a.name) as name ```
Changing the AND condition of an INNER JOIN based on a flag?
[ "", "sql", "sql-server-2008", "t-sql", "case", "inner-join", "" ]
We already have this table (`IPConfig`) in db. (Sql Server 2k8) ``` IP | Member ----------------------- 10.1.2.100 | Joe 10.2.2.200 | Maley ``` Now, I should have a query which should do this: 1. Leave current table intact. If table is not there, create one. 2. Insert new records (defined in the next version deployment) I started with this but could not proceed. ``` IF (NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'IPConfig')) BEGIN CREATE TABLE dbo.IPConfig ( IP CHAR(10) PRIMARY KEY NOT NULL, Member VARCHAR(32) NOT NULL) END DECLARE @TempTable Table( -- Create a temp table. IP CHAR(10) PRIMARY KEY NOT NULL, Member VARCHAR(32) NOT NULL) INSERT INTO @TempTable( IP, Member) SELECT '10.1.2.100', --Already existing Ip; dont insert 'Joe' UNION SELECT '10.2.2.200', --Already existing Ip; dont insert 'Maley', UNION SELECT '10.3.5.200', --New; Insert. 'NewUser', UPDATE TABLE dbo.IPConfig /// Here what should be done ? Should I loop through the temp table using triggers or what is the best way ? ``` A super simple example please. Sorry if this is a naive question; I'm totally new to MSSQL programming.
If I understand correctly you only insert new unmatched IPs to dbo.IPConfig. Left outer join is the best choice here. Insert the results to your target table. ``` INSERT INTO dbo.IPConfig([IP], [Member]) SELECT a.[IP], a.[Member] FROM @TempTable a LEFT OUTER JOIN dbo.IPConfig b ON a.[IP]=b.[IP] WHERE b.[IP] is null ```
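The anti-join insert above (LEFT JOIN, then keep only rows where the target key IS NULL) works the same way in most databases. A hedged Python + sqlite3 sketch using the question's sample IPs, with a plain staging table standing in for the table variable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ipconfig (ip TEXT PRIMARY KEY, member TEXT);
    CREATE TABLE staging  (ip TEXT, member TEXT);
    INSERT INTO ipconfig VALUES ('10.1.2.100', 'Joe'), ('10.2.2.200', 'Maley');
    INSERT INTO staging  VALUES ('10.1.2.100', 'Joe'),
                                ('10.2.2.200', 'Maley'),
                                ('10.3.5.200', 'NewUser');
""")

# Only rows with no match in the target survive the WHERE ... IS NULL test.
conn.execute("""
    INSERT INTO ipconfig (ip, member)
    SELECT s.ip, s.member
    FROM staging s
    LEFT JOIN ipconfig t ON s.ip = t.ip
    WHERE t.ip IS NULL
""")
count = conn.execute("SELECT COUNT(*) FROM ipconfig").fetchone()[0]
new_row = conn.execute(
    "SELECT member FROM ipconfig WHERE ip = '10.3.5.200'").fetchone()[0]
```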
You don't need any temp table, you could create a stored procedure though. The simplest way is described here [INSERT VALUES WHERE NOT EXISTS](https://stackoverflow.com/questions/17991479/insert-values-where-not-exists) ``` INSERT IPConfig (IP, Member) SELECT '10.1.2.100', 'Joe' WHERE NOT EXISTS ( SELECT 1 FROM IPConfig WHERE IP = '10.1.2.100' ); ``` As stored procedure: ``` CREATE PROCEDURE InsertIfNotExist @IP nvarchar(50), @Member nvarchar(50) AS INSERT IPConfig (IP, Member) SELECT @IP, @Member WHERE NOT EXISTS ( SELECT 1 FROM IPConfig WHERE IP = @IP ); GO EXECUTE InsertIfNotExist '10.1.2.100', 'Joe'; EXECUTE InsertIfNotExist '10.2.2.200', 'Maley' ```
Insert rows (if they do not already exist) into an already existing table (in MS-SQL / T-SQL)
[ "", "sql", "sql-server", "t-sql", "triggers", "" ]
I want to access a weather table and summarise it in terms of days and months. I want some of the values to be AVG and some to be SUM. I want to underpin the resulting record with values from the collective data that represent the maximum count, but after a few combinations I have not managed it. **EXAMPLE DATA:** ``` day_date main_weather temp 2012-01-01 07:00:00 Cloudy 8.0 2012-01-01 08:00:00 Cloudy 10.0 2012-01-01 09:00:00 Sunny 12.0 2012-01-01 10:00:00 Sunny 16.0 2012-01-01 11:00:00 Sunny 18.0 ``` **WANTED RESULT:** ``` DATE(day_date) MAX(COUNT(main_weather) AVG(temp) 2012-01-01 Sunny 12.8 ``` Here's my first SQL to show what I am trying to do: ``` SELECT DATE(`day_date`), MAX(COUNT(`main_weather`)), <--- this is the piece I am stuck with the max values. AVG(`temp`) FROM `sma_weather` GROUP BY `day_date`; ```
Looks like a correlated subquery would do it, but not sure how well this will scale. ``` SELECT DATE(`day_date`) AS day , ( SELECT w1.`main_weather` FROM `weather` w1 where DATE(w1.`day_date`) = DATE(`weather`.`day_date`) GROUP BY DATE(`day_date`) , `main_weather` order by count(*) DESC limit 1 ) AS max_count_weather , AVG(`temp`) AS av_temp FROM `weather` GROUP BY DATE(`day_date`) ; ```
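The correlated-subquery shape above (pick the most frequent value per group while the outer query aggregates) is portable. Here is a hedged Python + sqlite3 re-run of it against the question's sample rows, with dates simplified to plain strings so no DATE() wrapping is needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE weather (day TEXT, main_weather TEXT, temp REAL);
    INSERT INTO weather VALUES
        ('2012-01-01', 'Cloudy', 8.0),
        ('2012-01-01', 'Cloudy', 10.0),
        ('2012-01-01', 'Sunny', 12.0),
        ('2012-01-01', 'Sunny', 16.0),
        ('2012-01-01', 'Sunny', 18.0);
""")

# The scalar subquery ranks each day's weather by frequency and keeps the
# top one, while the outer GROUP BY averages the temperature.
row = conn.execute("""
    SELECT day,
           (SELECT w1.main_weather
            FROM weather w1
            WHERE w1.day = w.day
            GROUP BY w1.main_weather
            ORDER BY COUNT(*) DESC
            LIMIT 1) AS max_count_weather,
           AVG(temp) AS avg_temp
    FROM weather w
    GROUP BY day
""").fetchone()
```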
Try something like (untested): ``` select day_date, (select top 1 main_weather from MyTable t2 where date(t1.day_date) = date(t2.day_date) group by main_weather order by count(*) desc) as Max_MainWeather, avg (temp) as AvgTemp from MyTable t1 group by (day_date) ```
SQL - return rows for values with MAX COUNT
[ "", "mysql", "sql", "" ]
I have 2 tables like this: ![enter image description here](https://i.stack.imgur.com/Bmxn9.png) ![enter image description here](https://i.stack.imgur.com/Eh9tM.png) For example in first row from table 2 col value is 3661 and its equal to 'med' from table 1 I want to update class from table 1 with 'MED' and ... This is the code I wrote but something is wrong :( ``` UPDATE table1 SET class = ( CASE WHEN table2.col = table1.med THEN 'MED' -------------------------------- WHEN table2.col = table1.mgl THEN 'MGL' -------------------------------- WHEN table2.col = table1.rhp THEN 'RHP' -------------------------------- WHEN table2.col = table1.epd THEN 'EPD' -------------------------------- WHEN table2.col = table1.jpa THEN 'JPA' -------------------------------- ELSE 'NULL' FROM table1 LEFT outer JOIN table2 ) ```
I edited your code. You did not mention which database you use; anyway, try this: ``` UPDATE table1 SET class = ( CASE WHEN table2.col = table1.med THEN 'MED' -------------------------------- WHEN table2.col = table1.mgl THEN 'MGL' -------------------------------- WHEN table2.col = table1.rhp THEN 'RHP' -------------------------------- WHEN table2.col = table1.epd THEN 'EPD' -------------------------------- WHEN table2.col = table1.jpa THEN 'JPA' -------------------------------- ELSE 'NULL' END) from table1 left join table2 on table1.number=table2.number ```
You want something like this: ``` UPDATE table1 SET class = COALESCE((SELECT MIN(CASE WHEN table2.col = table1.med THEN 'MED' WHEN table2.col = table1.mgl THEN 'MGL' . . . END) as newval FROM table2 ), 'NULL') ``` This is a bit tricky. You need to decide which row you want if there are multiple matches. The above chooses an arbitrary value among the matches. The `coalesce()` is to handle the case where there are no matches. The subquery will return `NULL` in that case. This is standard SQL and should work in any database. Specific databases might have other ways of writing this query.
Compare 2 table columns and update new column
[ "", "sql", "" ]
For example, something like: `SELECT * FROM myTable WHERE myColumn LIKE '[0-9]n.[0-9]n'` Of course I know the syntax above will not work, but I'm trying to explain: n digits, followed by a period, followed by n digits. Thanks
You could use the following patterns for **unsigned** decimal values: ``` SELECT *, CASE WHEN (x.Col1 LIKE '%[0-9]%.%[0-9]%' /*OR x.Col1 LIKE '.%[0-9]%' OR x.Col1 LIKE '%[0-9]%.' OR x.Col1 LIKE '%[0-9]%'*/) AND x.Col1 NOT LIKE '%[^0-9.]%' THEN 1 ELSE 0 END AS IsDecimal, ISNUMERIC(x.Col1) AS [IsNumeric] FROM ( SELECT '1.23' UNION ALL SELECT '.23' UNION ALL SELECT '1.' UNION ALL SELECT '123' UNION ALL SELECT NULL UNION ALL SELECT '' UNION ALL SELECT '1A.23' UNION ALL SELECT '1.2A3' UNION ALL SELECT 'A.B' UNION ALL SELECT '.' UNION ALL SELECT '$' ) x(Col1) /* Col1 IsDecimal IsNumeric ----- ----------- ----------- 1.23 1 1 .23 0 1 1. 0 1 123 0 1 NULL 0 0 0 0 1A.23 0 0 1.2A3 0 0 A.B 0 0 . 0 1 $ 0 1 */ ``` If you uncomment `/*OR x.Col1 LIKE '.%[0-9]%' OR x.Col1 LIKE '%[0-9]%.' OR x.Col1 LIKE '%[0-9]%'*/` then `.23 , 1. and 123` will be considered (also) valid decimal values.
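The LIKE character classes above are T-SQL-specific. For comparison, here is the same "digits, a dot, digits" test written as an ordinary regular expression in plain Python; the inputs are a subset of the ones in the answer's demo:

```python
import re

# ^\d+\.\d+$ mirrors the strict form of the check: at least one digit,
# a literal dot, then at least one digit, and nothing else.
pattern = re.compile(r"^\d+\.\d+$")
results = {s: bool(pattern.match(s))
           for s in ["1.23", ".23", "1.", "123", "1A.23"]}
```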
Do you need exactly n digits, or just anything that will cast to decimal? ``` select cast(varDec as Decimal(p,s)) where varDec not like '%[^.0-9]%' and varDec not like '%.%.%' and len(varDec) > 0 and varDec <> '.' ```
Is there a way to match any decimal number with T-SQL 'LIKE' expression?
[ "", "sql", "sql-server", "regex", "t-sql", "" ]
``` UPDATE Contact_UDF SET hasAttendedEvent = ( SELECT CASE WHEN c.Contact_ID IN (SELECT DISTINCT Contact_ID FROM Action a WHERE a.Action_Type_ID =39) THEN 1 ELSE 0 END FROM Contact c) ``` I'm running this small bit of SQL to set a flag in the Contact\_UDF table but keep running into the following error: > Msg 512, Level 16, State 1, Line 1 Subquery returned more than 1 > value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. The statement has been terminated. I can't for the life of me understand why, as I'm using IN not any of the operators it describes. Can anyone help?
To see why this gives this error, simply execute the subquery and conclude that it indeed returns more than one row: ``` SELECT CASE WHEN c.Contact_ID IN (SELECT DISTINCT Contact_ID FROM Action a WHERE a.Action_Type_ID =39) THEN 1 ELSE 0 END FROM Contact c ``` It returns 1 row for each record in Contact. You forgot a clause to make a 1 to 1 mapping between Contact\_UDF and Contact: ``` UPDATE Contact_UDF udf SET hasAttendedEvent = (SELECT CASE WHEN c.Contact_ID IN (SELECT DISTINCT Contact_ID FROM Action a WHERE a.Action_Type_ID =39) THEN 1 ELSE 0 END FROM Contact c WHERE c.Contact_ID = udf.Contact_ID) ``` Of course, this assumes that 'c.Contact\_ID = udf.Contact\_ID' restricts the subquery to one row. I can't be sure that this is true without more information about your schema.
Popovitsj and Laurence have it fairly well covered: your subquery can return multiple rows, which don't behave like a list of values that you can use in an IN expression. However, you can use the EXISTS function instead, like so: ``` UPDATE Contact_UDF udf SET hasAttendedEvent = (SELECT CASE WHEN exists (SELECT a.Contact_ID FROM Action a WHERE a.Action_Type_ID = 39 and a.Contact_ID = c.Contact_ID) THEN 1 ELSE 0 END FROM Contact c WHERE c.Contact_ID = udf.Contact_ID) ``` Documentation on the exists function: (http://technet.microsoft.com/en-us/library/ms189259(v=sql.105).aspx) Documentation on referencing a column from the main query in a subquery: (http://technet.microsoft.com/en-us/library/ms189575(v=sql.105).aspx)
"Subquery returned more than 1 value." error in SQL Manager
[ "", "sql", "sql-server", "" ]
I have a problem in SQL Server selecting only the date part from a DateTime. The value of the DateTime is 2014-05-01 00:00:00.000. If I try this query I get no error and the output is correct: ``` SELECT CONVERT(VARCHAR(10),'2014-05-01 00:00:00.000',110) 2014-05-01 ``` If I try this other query on the DOTABLE: ``` SELECT TOP 100 * FROM [n].[a2].[DOTABLE] WHERE CONVERT(VARCHAR(10),data,110) > DATEADD(DAY, - 1, getdate()) ORDER BY data DESC; ``` I get this error: ``` SQL Server Error Messages - Msg 242 - The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value. ``` The version of SQL Server is: ``` Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Oct 14 2005 00:33:37 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 6.1 (Build 7600: ) ``` I suppose I'm not doing it right, but I don't know why.
I think the following is a better way to do what you want: ``` where date >= dateadd(day, 0, datediff(day, 0, getdate()) - 1) ``` This truncates the current date to midnight yesterday, which I am guessing is what you really want. For your method, try using format 120: ``` SELECT TOP 100 * FROM [n].[a2].[DOTABLE] WHERE CONVERT(VARCHAR(10), data, 120) > DATEADD(DAY, - 1, getdate()) ORDER BY data DESC; ``` You can do this on both sides: ``` SELECT TOP 100 * FROM [n].[a2].[DOTABLE] WHERE CONVERT(VARCHAR(10), data, 120) > CONVERT(varchar(10), DATEADD(DAY, - 1, getdate()), 120) ORDER BY data DESC; ``` This format is YYYY-MM-DD which is useful for comparisons. Then, upgrade SQL Server, and use the `date` data type instead.
I verified your query in SQL Server 2008 and it runs fine, so this may be a specific issue related to SQL Server 2005's conversion between varchar and datetime. You can add an explicit conversion to the datetime type here: ``` SELECT TOP 100 * FROM [n].[a2].[DOTABLE] WHERE CAST( CONVERT(VARCHAR(10),data,110) as datetime) > DATEADD(DAY, - 1, getdate()) ORDER BY data DESC; ```
Sql Server Select Only Date Part From DateTime
[ "sql", "sql-server", "datetime" ]
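The underlying lesson in the accepted answer generalizes: compare the raw datetime column against half-open day boundaries instead of converting it to a string. A hypothetical SQLite sketch, where ISO-8601 text dates stand in for SQL Server datetimes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dotable (data TEXT)")  # ISO-8601 datetimes sort correctly as text
conn.executemany("INSERT INTO dotable VALUES (?)",
                 [("2014-05-01 00:00:00",),
                  ("2014-05-01 23:59:59",),
                  ("2014-05-02 00:10:00",)])
# Half-open range covers the whole day without wrapping the column in CONVERT/CAST.
rows = conn.execute(
    "SELECT COUNT(*) FROM dotable WHERE data >= '2014-05-01' AND data < '2014-05-02'"
).fetchone()[0]
print(rows)  # 2
```

Because the column itself is untouched, this predicate stays index-friendly on any engine.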
I'm in the process of writing a stored procedure (for SQL Server 2012) that is supposed to calculate the number of hours for our employees from the 16th to the 15th of every month. I have the following database structure ![My Database Structure](https://i.stack.imgur.com/IxAr0.jpg) I have written a stored procedure to calculate the hours, but I can only use the week start date to filter my condition. The stored procedure is returning the wrong result because the weekly start date is not always the 16th. ``` CREATE PROCEDURE [dbo].[spGetTotalHoursBetween16to15EveryMonth] AS BEGIN SET NOCOUNT ON BEGIN TRY DECLARE @SixteenthDate datetime2(7) = DATEADD(DAY, 15, DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)) DECLARE @currentDate datetime2(7) = getDate() DECLARE @LastSixteenthDate datetime2(7) = DATEADD(DAY, 15, DATEADD(MONTH, DATEDIFF(MONTH, 0, DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0)), 0)) IF(@currentDate >= @SixteenthDate) BEGIN SELECT (Sum(Day1Hours) + sum(Day2Hours) + Sum(Day3Hours) + sum(Day4Hours) + Sum(Day5Hours) + sum(Day6Hours) + Sum(Day7Hours)) AS Total FROM dbo.TimeSheets WHERE WeekStartDate BETWEEN DATEADD(wk, DATEDIFF(wk, 0, @SixteenthDate), -1) AND @currentDate END ELSE BEGIN SELECT (Sum(Day1Hours) + sum(Day2Hours) + Sum(Day3Hours) + sum(Day4Hours) + Sum(Day5Hours) + sum(Day6Hours) + Sum(Day7Hours)) AS Total FROM dbo.TimeSheets WHERE WeekStartDate BETWEEN DATEADD(wk, DATEDIFF(wk, 0, @LastSixteenthDate), -1) AND @currentDate END END TRY BEGIN CATCH THROW END CATCH END ```
I'd probably just do it the simple way: ``` declare @today date = convert(date,current_timestamp) declare @prev_month_end date = dateadd( day , -day(@today) , @today ) declare @period_start date = dateadd( day , 16 , @prev_month_end ) -- 16th of THIS month declare @period_end date = dateadd( month , 1 , @period_start ) -- 16th of NEXT month select @period_start = dateadd(month, -1 , @period_start ) , @period_end = dateadd(month, -1 , @period_end ) where day(@today) < 16 select total_hours = coalesce(sum(t.hours),0) from ( select id = t.id , report_date = dateadd(day,0,t.WeekStartDate) , hours = t.Day1Hours from dbo.TimeSheets t union all select id = t.id , report_date = dateadd(day,1,t.WeekStartDate) , hours = t.Day2Hours from dbo.TimeSheets t union all select id = t.id , report_date = dateadd(day,2,t.WeekStartDate) , hours = t.Day3Hours from dbo.TimeSheets t union all select id = t.id , report_date = dateadd(day,3,t.WeekStartDate) , hours = t.Day4Hours from dbo.TimeSheets t union all select id = t.id , report_date = dateadd(day,4,t.WeekStartDate) , hours = t.Day5Hours from dbo.TimeSheets t union all select id = t.id , report_date = dateadd(day,5,t.WeekStartDate) , hours = t.Day6Hours from dbo.TimeSheets t union all select id = t.id , report_date = dateadd(day,6,t.WeekStartDate) , hours = t.Day7Hours from dbo.TimeSheets t ) t where t.report_date >= @period_start and t.report_date < @period_end ```
Always anchor to the start or end of a month. E.g., here's some logic: **Start Date = Start date of previous month + 16** **End Date = Start date of current month + 15** The following might help you figure out the dates: ``` -- First Day of the month select DATEADD(mm, DATEDIFF(mm,0,getdate()), 0) -- Last Day of previous month select dateadd(ms,-3,DATEADD(mm, DATEDIFF(mm,0,getdate() ), 0)) ``` More examples are [here](http://www.databasejournal.com/features/mssql/article.php/3076421/Examples-of-how-to-Calculate-Different-SQL-Server-Dates.htm)
Stored Procedure for Calculating hours from 16th of Every Month to 15 of every month
[ "sql", "sql-server", "sql-server-2008", "t-sql", "stored-procedures" ]
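The 16th-to-15th boundary arithmetic is easier to reason about outside T-SQL. A small Python sketch of the same period logic (the function names are mine, not from the question):

```python
from datetime import date, timedelta

def period_start(today):
    """16th of this month once we've reached it, else 16th of the previous month."""
    if today.day >= 16:
        return today.replace(day=16)
    last_of_prev = today.replace(day=1) - timedelta(days=1)
    return last_of_prev.replace(day=16)

def period_end(today):
    """Exclusive end of the period: the 16th of the month after period_start."""
    start = period_start(today)
    # Jumping 32 days from the 1st always lands in the next month.
    return (start.replace(day=1) + timedelta(days=32)).replace(day=16)

print(period_start(date(2014, 9, 20)), period_end(date(2014, 9, 20)))
# 2014-09-16 2014-10-16
```

The SQL filter then becomes a simple half-open range on the timesheet date, `>= period_start AND < period_end`, rather than week-based DATEADD arithmetic.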
I have a simple table structure in my postgres database: ``` CREATE TABLE device ( id bigint NOT NULL, version bigint NOT NULL, device_id character varying(255), date_created timestamp without time zone, last_updated timestamp without time zone, CONSTRAINT device_pkey PRIMARY KEY (id ) ) ``` I'm often querying data based on the device\_id column. The table has 3.5 million rows, so this leads to performance issues: ``` "Seq Scan on device (cost=0.00..71792.70 rows=109 width=8) (actual time=352.725..353.445 rows=2 loops=1)" " Filter: ((device_id)::text = '352184052470420'::text)" "Total runtime: 353.463 ms" ``` Hence I've created an index on the device\_id column: ``` CREATE INDEX device_device_id_idx ON device USING btree (device_id ); ``` However my problem is that the database still uses a sequential scan, not an index scan. The query plan after creating the index is the same: ``` "Seq Scan on device (cost=0.00..71786.33 rows=109 width=8) (actual time=347.133..347.508 rows=2 loops=1)" " Filter: ((device_id)::text = '352184052470420'::text)" "Total runtime: 347.538 ms" ``` The result of the query is 2 rows, so I'm not selecting a big portion of the table. I don't really understand why the index is disregarded. What can I do to improve the performance? *edit:* My query: ``` select id from device where device_id ='357560051102491A'; ``` I've run `analyse` on the device table, which didn't help. device\_id also contains non-numeric characters.
It seems like time resolved everything. I'm not sure what has happened, but currently it's working fine. Since I posted this question I haven't changed anything, and now I get this query plan: ``` "Bitmap Heap Scan on device (cost=5.49..426.77 rows=110 width=166)" " Recheck Cond: ((device_id)::text = '357560051102491'::text)" " -> Bitmap Index Scan on device_device_id_idx (cost=0.00..5.46 rows=110 width=0)" " Index Cond: ((device_id)::text = '357560051102491'::text)" ``` Time breakdown (timezone GMT+2): * ~15:50 I've created the index * ~16:00 I've dropped and recreated the index several times, since it was not working * 16:05 I've run `analyse device` (didn't help) * 16:44:49 from the app server request\_log, I can see that the requests executing the query are still taking around **500 ms** * 16:56:59 I can see the first request which takes **23 ms** (the index started to work!) The question remains: why did it take around 1 hour 10 minutes for the index to be applied? When I was creating indexes in the same database a few days ago the changes were immediate.
Indexes are not used when you cast a column to a different type: ``` ((device_id)::text = '352184052470420'::text) ``` Instead, you can do this way: ``` (device_id = ('352184052470420'::character varying)) ``` (or maybe you can change device\_id to TEXT in the original table, if you wish.) Also, remember to run `analyze device` after index has been created, or it will not be used anyway.
Index doesn't improve performance
[ "sql", "postgresql", "indexing", "query-performance" ]
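The expected behavior — an equality lookup on an indexed text column using the index — can be simulated on SQLite, whose `EXPLAIN QUERY PLAN` plays the role of Postgres's `EXPLAIN` here. A toy reproduction, not the poster's database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device (id INTEGER PRIMARY KEY, device_id TEXT)")
conn.executemany("INSERT INTO device (device_id) VALUES (?)",
                 [(f"35218405247{n:04d}",) for n in range(1000)])
conn.execute("CREATE INDEX device_device_id_idx ON device (device_id)")
# The last column of each EXPLAIN QUERY PLAN row describes the access path.
plan = " ".join(row[-1] for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM device WHERE device_id = '352184052470420'"))
print(plan)  # e.g. "SEARCH device USING COVERING INDEX device_device_id_idx (device_id=?)"
```

An equality predicate on the indexed column produces a SEARCH using the index rather than a full SCAN — the same outcome the poster eventually saw in Postgres.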
I am creating a table to store data from various government reports. An example row in the table would be: Values\_table ``` Date Location Report Attribute Value '2014-09-29' 'U.S.' 'LM_HG201' 'Price' 210 ``` Obviously I do not want to save strings for all of these columns, so I will be making subtables with ids to use instead. In order to balance table speed and ease of use, would it be better to make multiple subtables like: Location\_table ``` ID Location 1 'U.S' 2 'World' ``` Report\_table ``` ID Report 1 'LM_HG201' ``` Attribute\_table ``` ID Attribute 1 'Price' ``` Where my query of the values table is something like: ``` SELECT value FROM Values_table WHERE location = (SELECT id FROM locations_table WHERE location = 'U.S') AND attribute = (SELECT id FROM attributes_table WHERE attribute = 'price') ``` Or would it be better to use a single descriptors subtable, like this: Descriptors\_table ``` ID Location Report Attribute 1 'U.S.' 'LM_HG201' 'Price' ``` Where my query of the values table would be: ``` SELECT value FROM values_table WHERE descriptor_id IN (SELECT id FROM descriptors_table WHERE location = 'U.S.' AND report = 'LM_HG201' AND attribute = 'Price') ``` In my mind the second approach seems better but I've never seen it done this way. Any thoughts on which is worse/better?
Normalization rules and business requirements ought to drive your design. If Location is repeated for several reports, it might make sense to normalize it. I would not split out Price, because that would appear to be unique to a report. Repetitions down to the penny would be serendipity. You should learn [how to design relational models](http://en.wikipedia.org/wiki/Database_normalization) and what normalization rules are.
``` SELECT V.Value FROM Values_table V INNER JOIN locations_table L ON L.id = V.location INNER JOIN attributes_table A ON A.Id = V.attribute INNER JOIN descriptors_table D ON D.id = V.descriptor_id WHERE L.location = 'U.S' AND A.attribute = 'price' AND D.report = 'LM_HG201' ```
MYSQL - Should I use multiple subtables or one subtable for data descriptors?
[ "mysql", "sql" ]
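Whichever layout is chosen, lookup tables are normally queried with joins rather than nested `IN` subqueries. A minimal SQLite sketch of the separate-lookup-table design, with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE locations (id INTEGER PRIMARY KEY, location TEXT);
    CREATE TABLE attributes (id INTEGER PRIMARY KEY, attribute TEXT);
    CREATE TABLE vals (location_id INTEGER, attribute_id INTEGER, value INTEGER);
    INSERT INTO locations VALUES (1, 'U.S.'), (2, 'World');
    INSERT INTO attributes VALUES (1, 'Price');
    INSERT INTO vals VALUES (1, 1, 210), (2, 1, 999);
""")
# Joins resolve each surrogate key back to its human-readable label.
row = conn.execute("""
    SELECT v.value
    FROM vals v
    JOIN locations  l ON l.id = v.location_id
    JOIN attributes a ON a.id = v.attribute_id
    WHERE l.location = 'U.S.' AND a.attribute = 'Price'
""").fetchone()
print(row[0])  # 210
```

The same join shape works whether each descriptor has its own lookup table or they are combined into one descriptors table.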
I am very very new to SQL language, and trying to execute the following code. It gives me the error message below. What did I do wrong? Error Message: ``` ORA-00928: missing SELECT keyword 00928. 00000 - "missing SELECT keyword" ``` Code: ``` WITH ABC(one,two,three) AS( select 25, 15,23 from dual ) INSERT INTO ABC(one, two, three) VALUES (10,11,12) select * from ABC; ``` Thank you very much in advance! **EDIT** -- Explanation I need to create a table with more than 100 rows. I am not allowed to create a new table due to insufficient privileges, so I am trying to create a virtual table. The idea in my head is inserting rows into the virtual table with a for-loop. But this 'missing select keyword' error occurs when I try to insert any record... **EDIT** -- Regarding the words "very very new to SQL language" I am currently a computer engineering student in University, and studied a 'Database' course last year. Now, I am doing an internship. So like.. I am not a beginner who needs a w3schools tutorial, but I am new to 'SQL language for real life'
You need another "table" that returns the number of rows you want. In Oracle this is usually done using an undocumented feature of the `connect by` operator: ``` select level as rn from dual connect by level <= 100 ``` This returns 100 rows. The "undocumented" part is the fact that no real "connect" is done and no `start with` is supplied. Look up `connect by` in the manual for more details on this. You can combine this with your initial CTE and do a cross join to return the single row a hundred times: ``` WITH abc (one,two,three) AS ( select 25, 15,23 from dual ), num_rows as ( select level as rn from dual connect by level <= 100 ) select abc.* from abc cross join num_rows; ```
Why don't you try with `union all` like this: ``` WITH ABC(one,two,three) AS( select 25, 15,23 from dual union all select 10,11,12 from dual ) select * from ABC ```
SQL insert statement - missing select keyword
[ "sql", "oracle" ]
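Oracle's `CONNECT BY LEVEL` row generator from the accepted answer has a portable cousin: a recursive CTE. A SQLite sketch that builds the same 100-row virtual table by cross-joining the one-row CTE with a generator:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE
         abc(one, two, three) AS (SELECT 25, 15, 23),
         num_rows(rn) AS (
             SELECT 1
             UNION ALL
             SELECT rn + 1 FROM num_rows WHERE rn < 100   -- generate 1..100
         )
    SELECT abc.one, abc.two, abc.three
    FROM abc CROSS JOIN num_rows
""").fetchall()
print(len(rows), rows[0])  # 100 (25, 15, 23)
```

No INSERT is needed at all: the "virtual table" exists only for the lifetime of the query, which matches the poster's no-create-table constraint.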
I have a table in SQL Server with these fields: t1(Amount, Tax, Extra, Option1, Option2, Option3) I have a query like this: ``` Select (Amount/Tax*Extra)+Option1 as Value1, (Amount/Tax*Extra)+Option2 as Value2, (Amount/Tax*Extra)+Option3 as Value3 From t1 ``` Is it possible to precalculate (Amount/Tax\*Extra) and use the precalculated value in all columns, like this? ``` Select @pre_calculated_value+Option1 as Value1, @pre_calculated_value+Option2 as Value2, @pre_calculated_value+Option3 as Value3 ``` Like a variable, but for each column? Thanks for your help!
I usually use a CTE as follows: ``` ;WITH cte AS ( SELECT (t1.Amount / t1.Tax * t1.Extra) AS [CalculatedValue], t1.Option1, t1.Option2, t1.Option3 FROM t1 ) SELECT cte.[CalculatedValue] + Option1 AS [Value1], cte.[CalculatedValue] + Option2 AS [Value2], cte.[CalculatedValue] + Option3 AS [Value3] FROM cte ``` This assumes that you only need these calculations in maybe just this, or relatively few, queries. You can use a computed column (persisted or non-persisted) if the calculation will be used in many / most queries. At that point, the difference between PERSISTED and non-PERSISTED is a trade-off between performance and disk space: PERSISTED columns take up actual disk space but are only calculated upon INSERT and if any of the referenced columns change value, while non-PERSISTED are virtual columns that are calculated only upon being queried, but for every run of every query that they are referenced in.
I found the solution: the "PERSISTED" option during column creation. Ref. <http://blog.sqlauthority.com/2010/08/03/sql-server-computed-column-persisted-and%C2%A0performance/>
Avoid repetitive calculations in columns? SQL Server query
[ "sql", "sql-server", "calculated-columns" ]
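The CTE from the accepted answer runs almost unchanged on other engines. A self-contained SQLite sketch with toy numbers (the values here are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (Amount REAL, Tax REAL, Extra REAL,
                     Option1 REAL, Option2 REAL, Option3 REAL);
    INSERT INTO t1 VALUES (100.0, 2.0, 3.0, 1.0, 2.0, 3.0);
""")
# The shared expression is written once in the CTE, then reused three times.
row = conn.execute("""
    WITH cte AS (
        SELECT (Amount / Tax * Extra) AS calc, Option1, Option2, Option3 FROM t1
    )
    SELECT calc + Option1, calc + Option2, calc + Option3 FROM cte
""").fetchone()
print(row)  # (151.0, 152.0, 153.0)
```

100 / 2 * 3 = 150, so the three output columns are 151, 152 and 153 — each built from the single precomputed `calc`.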
I want to update a field when its create date is from yesterday (2014-09-24), but the update does not happen although the conditions are fulfilled. I guess the problem is the time in the CreateDate field, like "2014-09-24 12:25:00", so the match does not work. How can I compare only the date part? ``` UPDATE MyTable SET SentDate = NULL WHERE Category = 'test' AND CreateDate like '2014-09-24' ```
Perhaps try to convert your column like this: ``` ... AND CONVERT(VARCHAR(10), CreateDate, 101) = '09/24/2014' -- SQL Server syntax used for conversion ``` Another option would be to select a range: ``` ... AND CreateDate >= '2014-09-24' AND CreateDate < '2014-09-25' ```
Ideally, you'd do it this way: ``` UPDATE MyTable SET SentDate = NULL WHERE Category = 'test' AND CAST(CreateDate as Date) = '2014-09-24' ``` However, I see you're on Sql Server 2005, and the `Date` type isn't supported until Sql Server 2008. So instead, you should do it this way: ``` UPDATE MyTable SET SentDate = NULL WHERE Category = 'test' AND dateadd(dd, datediff(dd,0, CreateDate), 0) = '2014-09-24' ``` This is typically a lot better than any string-based method for excluding the time portion of a date. Alternatively, you could do this: ``` UPDATE MyTable SET SentDate = NULL WHERE Category = 'test' AND CreateDate >= '2014-09-24' and CreateDate < '2014-09-25' ```
Query a datetime field without considering the time in T-SQL
[ "sql", "t-sql" ]
Hi, I have the tables **"users"** and **"jobs"** with the following data USERS ``` +------+---------------+-----------+---------+---------------------+ | id | first_name | last_name | role_id | created | +------+---------------+-----------+---------+---------------------+ | 1026 | Administrator | Larvol | 2 | 2014-07-25 22:28:21 | | 20 | Worker | Larvol | 3 | 2014-07-24 20:14:18 | | 22 | test | user | 3 | 2014-07-25 16:06:27 | +------+---------------+-----------+---------+---------------------+ ``` JOBS ``` +----+--------+---------+ | id | status | user_id | +----+--------+---------+ | 1 | 3 | 20 | | 2 | 4 | 22 | +----+--------+---------+ ``` What I have done so far to fetch data from the tables is ``` SELECT Worker.id, first_name, last_name, role_id, Worker.created, (COUNT( NULLIF(Job.id, 0) )) AS JobsAmount, ((SUM( IF( status <> 0, 1, 0 ) )-SUM( IF( status = 1, 1, 0 ) ))) AS JobsReviewed FROM alpha_dev.users AS Worker LEFT JOIN jobs AS Job ON Job.user_id = Worker.id WHERE Worker.role_id = 3 GROUP BY Worker.id; ``` and the result that I got is ``` +----+------------+-----------+---------+---------------------+------------+--------------+ | id | first_name | last_name | role_id | created | JobsAmount | JobsReviewed | +----+------------+-----------+---------+---------------------+------------+--------------+ | 20 | Worker | Larvol | 3 | 2014-07-24 20:14:18 | 1 | 1 | | 22 | test | user | 3 | 2014-07-25 16:06:27 | 1 | 1 | +----+------------+-----------+---------+---------------------+------------+--------------+ ``` Now I want to create an OR condition on **"(COUNT( NULLIF(Job.id, 0) )) AS JobsAmount,"** something like ``` WHERE Worker.role_id = 3 OR (COUNT( NULLIF(Job.id, 0) )) = 1 ``` but that was not working, so instead I ended up with a HAVING clause ``` WHERE Worker.role_id = 3 GROUP BY Worker.id HAVING COUNT(NULLIF(`Job`.`id`, 0)) = 0; ``` HAVING here works as an AND condition and gives me an empty set, whereas I want the condition in HAVING to work as an OR condition, so that either ``` Worker.role_id = 3 OR COUNT(NULLIF(`Job`.`id`, 0)) = 0 ``` should be true. Any help would be appreciated. Thanks
The expression `COUNT(NULLIF(Job.id, 0))` will be zero if there isn't an associated row in `Job`. Another way to check that there isn't an associated row in `Job` is to see if a required column in `Job` is null. I say *required column* because an optional column may be null anyway, whereas a required column will only be null if there aren't any matches in the left-joined table. Put more concretely, in your specific scenario I'm pretty sure `COUNT(NULLIF(Job.id, 0)) = 0` is the same condition as `Job.user_id IS NULL`. Because the expression `Job.user_id IS NULL` isn't based on an aggregate, it can go into the `WHERE` instead of the `HAVING`: * role=3 OR job count = 0 ``` SELECT Worker.id, first_name, last_name, role_id, Worker.created, (COUNT( NULLIF(Job.id, 0) )) AS JobsAmount, ((SUM( IF( status <> 0, 1, 0 ))-SUM( IF( status = 1, 1, 0 ) ))) AS JobsReviewed FROM alpha_dev.users AS Worker LEFT JOIN jobs AS Job ON Job.user_id = Worker.id WHERE Worker.role_id = 3 OR Job.user_id IS NULL GROUP BY Worker.id ``` This will only work for COUNT=0. If you need to check for COUNT=2 or COUNT=5 or whatever, you'll need to push your existing query into a subquery, then have the outer query apply the `OR` logic: * role=3 OR job count = 5 ``` SELECT * FROM ( SELECT Worker.id, first_name, last_name, role_id, Worker.created, (COUNT( NULLIF(Job.id, 0) )) AS JobsAmount, ((SUM(IF(status <> 0, 1, 0))-SUM( IF(status = 1, 1, 0)))) AS JobsReviewed FROM alpha_dev.users AS Worker LEFT JOIN jobs AS Job ON Job.user_id = Worker.id GROUP BY Worker.id ) WorkerSummary WHERE role_id = 3 OR JobsAmount = 5 ```
COUNT is an aggregation function. So without grouping something it doesn't mean anything. If you want an OR for those conditions then a solution might be a union of two queries with on one side the query without the having and just the role\_id = 3 condition and on the other side the query with the having COUNT condition and role\_id not equal to 3. ``` SELECT Worker.id, first_name, last_name, role_id, Worker.created, (COUNT( NULLIF(Job.id, 0) )) AS JobsAmount, ((SUM( IF( status <> 0, 1, 0 ) )-SUM( IF( status = 1, 1, 0 ) ))) AS JobsReviewed FROM alpha_dev.users AS Worker LEFT JOIN jobs AS Job ON Job.user_id = Worker.id WHERE Worker.role_id = 3 GROUP BY Worker.id UNION SELECT Worker.id, first_name, last_name, role_id, Worker.created, (COUNT( NULLIF(Job.id, 0) )) AS JobsAmount, ((SUM( IF( status <> 0, 1, 0 ) )-SUM( IF( status = 1, 1, 0 ) ))) AS JobsReviewed FROM alpha_dev.users AS Worker LEFT JOIN jobs AS Job ON Job.user_id = Worker.id WHERE Worker.role_id <> 3 GROUP BY Worker.id HAVING COUNT(NULLIF(`Job`.`id`, 0)) = 0; ``` It might be possible to use a UNION ALL instead of a UNION, which has a better performance, but that depends on whether or not you want or will have duplicate rows because of the union.
Use Count in mysql with AND, OR or NOT condition with or without HAVING?
[ "mysql", "sql", "database", "having" ]
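The subquery pattern from the accepted answer — aggregate in an inner query, then apply the OR in the outer WHERE — reduced to a tiny SQLite example with a simplified version of the question's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, role_id INTEGER);
    CREATE TABLE jobs (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users VALUES (20, 3), (22, 3), (30, 2), (31, 2);
    INSERT INTO jobs VALUES (1, 20), (2, 31);
""")
rows = conn.execute("""
    SELECT id, role_id, jobs_amount FROM (
        SELECT u.id, u.role_id, COUNT(j.id) AS jobs_amount
        FROM users u LEFT JOIN jobs j ON j.user_id = u.id
        GROUP BY u.id
    ) s
    WHERE s.role_id = 3 OR s.jobs_amount = 0   -- OR applied after aggregation
    ORDER BY id
""").fetchall()
print(rows)  # users 20 and 22 (role 3) plus user 30 (zero jobs)
```

User 31 is excluded because it is neither role 3 nor jobless — exactly the OR-of-aggregate semantics the HAVING clause couldn't express in place.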
I have a table where customer transactions are stored in this format: ``` Account Tran_type Tran_Amount tran_particular Tran_date 165266 C 5000 deposit 19-SEP-2014 165266 D 3000 withdrawal 20-SEP-2014 165266 C 8000 Deposit 21-SEP-2014 ``` I am attempting to extract the information for a statement like this: ``` select tran_date, tran_particular, (case when tran_type = 'C' then tran_amt else 0 end) CREDIT, (case when tran_type = 'D' then tran_amt else 0 end) DEBIT from tran_table order by tran_date asc; ``` Is there a way to add a Balance column to each row so it would show the balance after each transaction? Say: ``` DATE DESC CREDIT DEBIT BALANCE 19-SEP-2014 DEPOSIT 5000 0 5000 20-SEP-2014 WITHDRAWAL 0 3000 2000 21-SEP-2014 DEPOSIT 8000 0 10000 ``` Please assist. ***EDIT*** I have tried the answers suggested but it seems my balance is tagged to the date. See the output I have currently: ![enter image description here](https://i.stack.imgur.com/golzi.png) Notice the balance does not change until the date changes.
``` select tran_date, tran_particular, Credit, Debit, SUM(Delta) OVER (ORDER BY tran_date) AS Balance from ( select tran_date, tran_particular, Case Tran_Type When 'C' THEN Tran_Amount Else 0 End AS Credit, Case Tran_Type When 'D' THEN Tran_Amount Else 0 End AS Debit, Case Tran_Type When 'C' THEN Tran_Amount When 'D' THEN -1 * Tran_Amount Else 0 End AS Delta from TRANSACTIONS order by tran_date ) ``` Should do it
That will cost a sub-query: ``` SELECT tran_date, tran_particular, (CASE when tran_type = 'C' THEN tran_amt ELSE 0 end) CREDIT, (CASE when tran_type = 'D' THEN tran_amt ELSE 0 end) DEBIT, (SELECT SUM(CASE when type = 'C' THEN tran_amt ELSE (-1) * tran_amt end) FROM tran_table trn2 WHERE trn2.Account = trn1.Account AND trn2.tran_id <= trn1.tran_id -- AND trn2.tran_date <= trn1.tran_date ) BALANCE FROM tran_table trn1 ORDER BY tran_date asc; ``` On large-scale data, having such a sub-query is not recommended. Having a materialized view is more rational.
Incremental adding in sql select
[ "sql", "oracle" ]
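The running-total window function from the accepted answer also works on SQLite 3.25+ (bundled with recent Python builds). A self-contained sketch using the question's sample transactions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trans (tran_date TEXT, tran_type TEXT, amt INTEGER);
    INSERT INTO trans VALUES
        ('2014-09-19', 'C', 5000),
        ('2014-09-20', 'D', 3000),
        ('2014-09-21', 'C', 8000);
""")
rows = conn.execute("""
    SELECT tran_date,
           CASE tran_type WHEN 'C' THEN amt ELSE 0 END AS credit,
           CASE tran_type WHEN 'D' THEN amt ELSE 0 END AS debit,
           SUM(CASE tran_type WHEN 'C' THEN amt ELSE -amt END)
               OVER (ORDER BY tran_date) AS balance   -- running total
    FROM trans ORDER BY tran_date
""").fetchall()
for r in rows:
    print(r)
# ('2014-09-19', 5000, 0, 5000)
# ('2014-09-20', 0, 3000, 2000)
# ('2014-09-21', 8000, 0, 10000)
```

The signed delta (+amount for credits, -amount for debits) summed over an ordered window reproduces the statement's balance column exactly.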
I want to create a stored procedure with the following parameter: `@WorkspaceID int` What I have now is this: ``` DECLARE @WorkspaceRuleIds varchar(max) SELECT @WorkspaceRuleIds = COALESCE(@WorkspaceRuleIds + ', ', '') + CAST(RuleID AS varchar(5)) FROM Dgn_Workspace_Rules WHERE WorkspaceID = @WorkspaceID; SELECT ag.* FROM Dgn_Rules ag LEFT JOIN Dgn_Workspace_Rules wr ON ag.RuleID IN (@WorkspaceRuleIds) WHERE wr.WorkspaceID = @WorkspaceID ``` If `@WorkspaceID` receives a value of 25, then the condition should effectively become `ag.RuleID IN (80,82)`... but I get this error instead > Msg 245, Level 16, State 1, Line 3 > Conversion failed when converting the nvarchar value '80,82' to data type int. How do I fix this? Thanks!
Thanks guys for your help but.. I found an answer for my question. ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [dbo].[CSVToTable] (@InStr VARCHAR(MAX)) RETURNS @TempTab TABLE (id int not null) AS BEGIN ;-- Ensure input ends with comma SET @InStr = REPLACE(@InStr + ',', ',,', ',') DECLARE @SP INT DECLARE @VALUE VARCHAR(1000) WHILE PATINDEX('%,%', @INSTR ) <> 0 BEGIN SELECT @SP = PATINDEX('%,%',@INSTR) SELECT @VALUE = LEFT(@INSTR , @SP - 1) SELECT @INSTR = STUFF(@INSTR, 1, @SP, '') INSERT INTO @TempTab(id) VALUES (@VALUE) END RETURN END GO ``` then execute this: ``` DECLARE @WorkspaceRuleIds varchar(max) SELECT @WorkspaceRuleIds = COALESCE(@WorkspaceRuleIds + ', ', '') + CAST(RuleID AS varchar(5)) FROM Dgn_Workspace_Rules WHERE WorkspaceID = @WorkspaceID; SELECT ag.* FROM Dgn_Rules ag LEFT JOIN Dgn_Workspace_Rules wr ON ag.RuleID IN (SELECT * FROM dbo.CSVToTable(@WorkspaceRuleIds)) WHERE wr.WorkspaceID = @WorkspaceID ``` and it outputs exactly what I need.
1. there are several well-known techniques for concatenating row values. Read [Concatenating Row Values in Transact-SQL](https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/) for pros and cons of each. 2. `ag.RuleID IN (@WorkspaceRuleIds)` this does not do anything close to what you expect. It is not a macro substitution. It is a check for the `IN` predicate with the scalar value `@WorkspaceRuleIds`. Therefore `@WorkspaceRuleIds` will be coerced to an int, according to the [Data Type Precedence Rules](http://msdn.microsoft.com/en-us/library/ms190309.aspx). Which results in the cast error you see. 3. Your query makes absolutely no sense. JOIN on an IN condition? 4. **NEVER** use comma separated lists in SQL. **NEVER**.
How do I return a string of numbers separated by comma as an array parameter in SQL
[ "sql", "sql-server" ]
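The split-function workaround reflects its era; from application code, the safer pattern is a placeholder list with bound parameters, which avoids building an `IN` list by string concatenation entirely. A hypothetical Python/SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rules (RuleID INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO rules VALUES (80, 'a'), (82, 'b'), (90, 'c');
""")
rule_ids = [80, 82]                       # a real list, not the string '80,82'
placeholders = ",".join("?" * len(rule_ids))   # -> "?,?"
rows = conn.execute(
    f"SELECT RuleID FROM rules WHERE RuleID IN ({placeholders}) ORDER BY RuleID",
    rule_ids).fetchall()
print(rows)  # [(80,), (82,)]
```

Each `?` is bound to one integer, so there is no varchar-to-int coercion and no injection risk — the two problems the CSV-string approach runs into.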
I have four tables ``` store[store_id(pk),name] itemsA(item_id(pk),store_id,name) itemB(item_id(pk),store_id,name) itemC(item_id(pk),store_id,name) ``` I want a query to retrieve a store and the number of items it has, something like: ``` select s.store_id ,s.name,count() as numberOfItems from store limit 100 ``` What is the optimal query to achieve that with the following constraints: cannot create a function in the db, cannot create a view, I can only run queries on the db. Thanks
I would recommend doing this with correlated subqueries: ``` select s.store_id, s.name, ((select count(*) from itemsA a where a.store_id = s.store_id) + (select count(*) from itemsB b where b.store_id = s.store_id) + (select count(*) from itemsC c where c.store_id = s.store_id) ) as numberOfItems from store s limit 100; ``` You then want an index in each of the item tables: `itemsA(store_id)`, `itemsB(store_id)`, and `itemsC(store_id)`. The reason this is optimized is because it only has to calculate the values for the arbitrary 100 stores chosen by the `limit`. And, the calculation can be done directly from the index. Other approaches will require doing the calculation for all the stores. Note: usually when using `limit` you want an `order by` clause.
Stores with no items will not show up with this query. If this is a requirement it will have to be tweaked somewhat. ``` SELECT s.store_id, COUNT(*) FROM Store s JOIN ItemA a ON a.store_id = s.store_id JOIN ItemB b ON b.store_id = s.store_id JOIN ItemC c ON c.store_id = s.store_id GROUP BY s.store_id ``` A simple modification to also include stores with 0 items: ``` SELECT s.store_id, COUNT(a.store_id) + COUNT(b.store_id) + COUNT(c.store_id) FROM Store s LEFT JOIN ItemA a ON a.store_id = s.store_id LEFT JOIN ItemB b ON b.store_id = s.store_id LEFT JOIN ItemC c ON c.store_id = s.store_id GROUP BY s.store_id ```
optimise query with calculate field from several tables
[ "mysql", "sql" ]
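The correlated-subquery shape of the accepted answer, condensed to a runnable SQLite sketch (two item tables instead of three, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE store (store_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE itemsA (item_id INTEGER PRIMARY KEY, store_id INTEGER);
    CREATE TABLE itemsB (item_id INTEGER PRIMARY KEY, store_id INTEGER);
    INSERT INTO store VALUES (1, 'north'), (2, 'south');
    INSERT INTO itemsA VALUES (1, 1), (2, 1);
    INSERT INTO itemsB VALUES (1, 2);
""")
# One scalar COUNT per item table, correlated on store_id, summed per store.
rows = conn.execute("""
    SELECT s.store_id,
           (SELECT COUNT(*) FROM itemsA a WHERE a.store_id = s.store_id) +
           (SELECT COUNT(*) FROM itemsB b WHERE b.store_id = s.store_id) AS n_items
    FROM store s ORDER BY s.store_id
""").fetchall()
print(rows)  # [(1, 2), (2, 1)]
```

Note that stores with zero items still appear with a count of 0, which the plain INNER JOIN version in the other answer would drop.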
The simple version What is the correct syntax of this? ``` INSERT INTO foo(IP, referer) VALUES(SELECT bin FROM dbo.bar("foobar"),"www.foobar.com/test/") ``` I am getting syntax errors near 'SELECT' and ')' The long version: I want to insert using a Function and a string (this is simplified, in reality there will be a few other values including datetime, ints, etc to insert along with the function result). I have a function, itvfBinaryIPv4, which was set up to convert IPs to a binary(4) datatype to allow for easy indexing, I used this as a reference: [Datatype for storing ip address in SQL Server](https://stackoverflow.com/questions/1385552/datatype-for-storing-ip-address-in-sql-server). So this is what I am trying to accomplish: ``` INSERT INTO foo (IP, referer) VALUES(SELECT bin FROM dbo.itvfBinaryIPv4("192.65.68.201"), "www.foobar.com/test/") ``` However, I get syntax errors near 'SELECT' and ')'. What is the correct syntax to insert with function results and direct data?
It should be like this - ``` INSERT INTO foo (IP, referer) SELECT bin, 'www.foobar.com/test/' FROM dbo.itvfBinaryIPv4('192.65.68.201') ``` Here I assume `dbo.itvfBinaryIPv4('192.65.68.201')` is a table-valued function.
The `INSERT` command comes in two flavors: **(1)** either you have all your values available, as literals or SQL Server variables - in that case, you can use the `INSERT .. VALUES()` approach: ``` INSERT INTO dbo.YourTable(Col1, Col2, ...., ColN) VALUES(Value1, Value2, @Variable3, @Variable4, ...., ValueN) ``` Note: I would recommend to **always** explicitly specify the list of column to insert data into - that way, you won't have any nasty surprises if suddenly your table has an extra column, or if your tables has an `IDENTITY` or computed column. Yes - it's a tiny bit more work - **once** - but then you have your `INSERT` statement as solid as it can be and you won't have to constantly fiddle around with it if your table changes. **(2)** if you **don't** have all your values as literals and/or variables, but instead you want to rely on another table, multiple tables, or views, to provide the values, then you can use the `INSERT ... SELECT ...` approach: ``` INSERT INTO dbo.YourTable(Col1, Col2, ...., ColN) SELECT SourceColumn1, SourceColumn2, @Variable3, @Variable4, ...., SourceColumnN FROM dbo.YourProvidingTableOrView ``` Here, you must define exactly as many items in the `SELECT` as your `INSERT` expects - and those can be columns from the table(s) (or view(s)), or those can be literals or variables. Again: explicitly provide the list of columns to insert into - see above. You can use **one or the other** - but you **cannot** mix the two - you cannot use `VALUES(...)` and then have a `SELECT` query in the middle of your list of values - pick one of the two - stick with it.
SQL Server: Inserting from various sources
[ "sql", "sql-server" ]
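The "pick one flavor" rule above can be shown in miniature: a literal is allowed inside the SELECT list of an `INSERT ... SELECT`, so the constant referer rides along with the value coming from the row source. A SQLite sketch with a stand-in table instead of the table-valued function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (bin TEXT);               -- stands in for the TVF's result
    CREATE TABLE foo (ip TEXT, referer TEXT);
    INSERT INTO src VALUES ('C04144C9');
""")
# INSERT ... SELECT: the computed column and the literal travel together.
conn.execute("""
    INSERT INTO foo (ip, referer)
    SELECT bin, 'www.foobar.com/test/' FROM src
""")
row = conn.execute("SELECT ip, referer FROM foo").fetchone()
print(row)  # ('C04144C9', 'www.foobar.com/test/')
```

Mixing the two flavors — a SELECT nested inside a `VALUES(...)` list alongside other literals — is exactly what produced the original syntax errors.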
I want to index this query clause -- note that the text is static. `SELECT * FROM tbl where flags LIKE '%current_step: complete%'` To re-iterate, the `current_step: complete` never changes in the query. I want to build an index that will effectively pre-calculate this boolean value, thereby preventing full table scans... I would prefer not to add a boolean column to store the pre-calculated value as this would necessitate code changes in the application....
If you don't want to change the query, and it isn't just an issue of not changing the data maintenance (in which case a virtual column and/or index would do the job), you could use a materialised view that applies the filter, and let query rewrite take care of using that instead of the real table. Which may well be overkill but is an option. The original plan for a mocked-up version: ``` explain plan for SELECT * FROM tbl where flags LIKE '%current_step: complete%'; select * from table(dbms_xplan.display); -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 2 | 60 | 3 (0)| 00:00:01 | |* 1 | TABLE ACCESS FULL| TBL | 2 | 60 | 3 (0)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("FLAGS" IS NOT NULL AND "FLAGS" LIKE '%current_step: complete%') ``` A materialised view that will only hold the records your query is interested in (this is a simple example but you'd need to decide how to refresh and add a log if needed): ``` create materialized view mvw enable query rewrite as SELECT * FROM tbl where flags LIKE '%current_step: complete%'; ``` Now your query hits the materialised view, thanks to [query rewrite](http://docs.oracle.com/cd/B28359_01/server.111/b28313/qrbasic.htm): ``` explain plan for SELECT * FROM tbl where flags LIKE '%current_step: complete%'; select * from table(dbms_xplan.display); ------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 2 | 60 | 3 (0)| 00:00:01 | | 1 | MAT_VIEW REWRITE ACCESS FULL| MVW | 2 | 60 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------------- ``` But any other query will still use the original table: ``` explain plan for SELECT * FROM tbl where flags LIKE '%current_step: working%'; select * from table(dbms_xplan.display); -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 27 | 3 (0)| 00:00:01 | |* 1 | TABLE ACCESS FULL| TBL | 1 | 27 | 3 (0)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("FLAGS" LIKE '%current_step: working%' AND "FLAGS" IS NOT NULL) ``` Of course a virtual column would be simpler if you are allowed to modify the query...
A full text search index might be what you are looking for. There are a few ways you can implement this:

* Oracle has Oracle Text where you can define which type of full text index you want.
* Lucene is a Java full text search framework.
* Solr is a server product that provides full text search.
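As a rough sketch of the Oracle Text route (the index name is an assumption, and the Oracle Text option requires CTXSYS privileges; a word-based search is not a character-for-character replacement for the original `LIKE`):

```sql
-- Hypothetical sketch: a CONTEXT full text index on the flags column.
-- Unlike a B-tree index, this can serve word searches without a full table scan,
-- but it must be synchronized after DML on the table.
CREATE INDEX tbl_flags_ctx ON tbl (flags) INDEXTYPE IS CTXSYS.CONTEXT;

-- Query through the text index instead of a leading-wildcard LIKE:
SELECT * FROM tbl WHERE CONTAINS(flags, 'complete') > 0;
```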
Oracle index for a static like clause
[ "", "sql", "oracle", "database-performance", "" ]
```
Declare int @NumberofPriorYears=2;
Declare int @currentYear=2014;

SELECT * FROM MyTable a WHERE a.FiscalYear in (
```

When @NumberofPriorYears is 2 then I want to pass `@currentYear-@NumberofPriorYears` (i.e. 2014, 2013, 2012) or when `@NumberofPriorYears` is null then pass `@currentYear`, i.e. 2014. Appreciate any help on this.
Can you use a range?

```
a.FiscalYear >= @currentYear-isnull(@NumberofPriorYears,0)
and a.FiscalYear < @currentYear+1
```
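Dropped into the query from the question, that condition would read something like this sketch (untested):

```sql
SELECT *
FROM MyTable a
WHERE a.FiscalYear >= @currentYear - isnull(@NumberofPriorYears, 0)
  AND a.FiscalYear < @currentYear + 1
```

When `@NumberofPriorYears` is NULL, `isnull()` collapses the lower bound to `@currentYear`, so only the current year qualifies.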
Try this

```
Declare int @numberofyears=2;
Declare int @currentYear=2014;

set @currentYear = datepart(year,getdate())

SELECT * FROM MyTable a
WHERE datepart(year,a.FiscalYear) between @currentYear and @currentYear+@numberofYears
```
SQL Where condition for number of years
[ "", "sql", "sql-server", "" ]
Below is my scenario. I have a table with columns Country and State, and only one row, with values USA and NY. Now if the user chooses NY then I have to display 'New York', and for anything other than NY display 'No Records'. I tried the Oracle SQL below and it is failing for me: when I enter another state like MA, I expect 'No Records', but I get a null value.

```
SELECT CASE WHEN STATE='NY' THEN 'New York'
            when state <> 'NY' THEN 'No Records'
       end
from abc
where state in ('&state')
```

Please advise.
Use `ELSE` instead.

```
SELECT CASE WHEN STATE='NY' THEN 'New York'
            ELSE 'No Records'
       end
from abc
where state in ('&state')
```
If you want to be sure that you get exactly one row, use aggregation:

```
SELECT (CASE WHEN max(STATE) = 'NY' THEN 'New York'
             ELSE 'No Records'
        END)
from abc
where state in ('&state');
```
Case statement for null Records
[ "", "sql", "oracle", "" ]
I'm running a SQL query to select data from a table. I want to include headers as the first row in the result of the query. The headers will be coming from some other table, or we can hardcode the values. Below is my query to fetch data:

```
select invoice_type_code, shipper_short_code , charge_type_code , incurred_date , charge_amount , charge_description
from prepayment_charge_calc ;
```

I want a header above the data of the first row of every column. These headers shouldn't be the column names, e.g.

```
header1 header2 header3 header4 header5 header6
1       2       3       4       5       6
```

header1 to header6 are coming from another table or can be hardcoded values. Below these headers we should have the data extracted from the "prepayment_charge_calc" table; 1, 2, 3, 4 are the data from the "prepayment_charge_calc" table. Can anyone suggest a query for this?
I don't see any extra effort to have the column names in the header the way you want, just mention an `ALIAS` to the columns the way you want. For example : `select col1 as "header1", col2 as "header2".... from table` In any `GUI/non GUI based tool`, the `scroll pane` automatically places the `table header` at the top of the `resultset`, which is actually the column headers you need.
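Applied to the query in the question, that looks like the following sketch (the header values are hardcoded as aliases here, an assumption, since the question says they may instead come from another table):

```sql
select invoice_type_code  as "header1",
       shipper_short_code as "header2",
       charge_type_code   as "header3",
       incurred_date      as "header4",
       charge_amount      as "header5",
       charge_description as "header6"
from   prepayment_charge_calc;
```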
Assuming an Oracle DBMS, you can create the header row manually using the DUAL table and then union it with the real data. Use a dummy pseudo-column ("rno" in the example below) to sort the data. However, you have to convert any other datatype to VARCHAR to make this work. The idea is illustrated below:

```
select 'header1', 'header2', 'header3', 'header4', 'header5', 'header6', 1 rno
from dual
union
select invoice_type_code,
       shipper_short_code,
       charge_type_code,
       incurred_date,      --convert this using to_char if date datatype
       charge_amount,      --convert this using to_char if numeric datatype
       charge_description,
       2 rno
from prepayment_charge_calc
order by rno;
```
to include a header into result of sQL query
[ "", "sql", "oracle", "" ]
```
MERGE DestinationTable AS D
USING SourceTable AS S
    ON D.Alternate_ID = S.ID
WHEN MATCHED AND (
       D.ID <> S.ID
    OR D.col1 <> S.col1
    OR D.col2 <> S.col2
    OR D.col3 <> S.col3
    OR D.col4 <> S.col4
    OR D.col5 <> S.col5
    OR D.col6 <> S.col6
    OR D.col7 <> S.col7
    OR D.col8 <> S.col8
)
```

Hi all, I'm trying to update DestinationTable if any of the column values in SourceTable have changed, using the MERGE statement snippet above. However, if I have a NULL value in the destination column and a string or bit value in the source, the comparison `D.col8 <> S.col8` will return false because of the way SQL handles comparisons to NULL values. As a result DestinationTable is not updated with new values from SourceTable.

What is the better way to handle this issue? If D.Col8 is NULL and S.Col8 has a string or bit value, I still want to return true for an expression like `D.col8 <> S.col8`. So if I have a value of "Test" in S.Col8 and NULL in D.Col8, I want to update the destination column from NULL to "Test".
```
MERGE DestinationTable AS D
USING SourceTable AS S
    ON D.Alternate_ID = S.ID
WHEN MATCHED AND (
       D.ID <> S.ID
    OR (D.col1 IS NULL AND S.col1 IS NOT NULL) OR (D.col1 IS NOT NULL AND S.col1 IS NULL) OR D.col1 <> S.col1
    OR (D.col2 IS NULL AND S.col2 IS NOT NULL) OR (D.col2 IS NOT NULL AND S.col2 IS NULL) OR D.col2 <> S.col2
    OR (D.col3 IS NULL AND S.col3 IS NOT NULL) OR (D.col3 IS NOT NULL AND S.col3 IS NULL) OR D.col3 <> S.col3
    OR (D.col4 IS NULL AND S.col4 IS NOT NULL) OR (D.col4 IS NOT NULL AND S.col4 IS NULL) OR D.col4 <> S.col4
    OR (D.col5 IS NULL AND S.col5 IS NOT NULL) OR (D.col5 IS NOT NULL AND S.col5 IS NULL) OR D.col5 <> S.col5
    OR (D.col6 IS NULL AND S.col6 IS NOT NULL) OR (D.col6 IS NOT NULL AND S.col6 IS NULL) OR D.col6 <> S.col6
    OR (D.col7 IS NULL AND S.col7 IS NOT NULL) OR (D.col7 IS NOT NULL AND S.col7 IS NULL) OR D.col7 <> S.col7
    OR (D.col8 IS NULL AND S.col8 IS NOT NULL) OR (D.col8 IS NOT NULL AND S.col8 IS NULL) OR D.col8 <> S.col8
)
```
You could do it a lot simpler with [`BINARY_CHECKSUM`](http://msdn.microsoft.com/en-us/library/ms173784.aspx):

```
MERGE DestinationTable AS D
USING SourceTable AS S
    ON D.Alternate_ID = S.ID
WHEN MATCHED AND BINARY_CHECKSUM(d.col1, d.col2, ...,d.col8) <>
                 BINARY_CHECKSUM(s.col1, ..., s.col8)
```

There is a small collision probability of having false negatives (values *did* change, but the checksum is the same), but it is negligible.
Comparing NULL against strings in SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
I have columns A, B, C, D. I am trying to update column C with divide functionality, e.g. if column C contains the value 0.9 then I want to update that value to 1/0.9 = 1.33333 (rounded to 1.3), so the column goes from 0.9 to 1.3. Is there a way to do this with a `SQL` query, without a stored procedure?
Since you haven't indicated which database you are using, I will offer a query that works for mysql, postgres and sqlserver (that covers most database instances):

```
UPDATE mytable
SET columnC = CAST( 1 / columnC AS DECIMAL(8,1))
WHERE columnC = 0.9
```
Try this:

```
UPDATE table
SET columnC = 1 / columnC
where columnC = 0.9
```
update a column with condition thru Sql Query
[ "", "sql", "" ]
I have a SQL table which holds a unique REFID (int) and many other columns. I want to search for rows using half of the REFID, so if someone searches **0001** then 50001, 00015, ... come up. I have tried:

```
SELECT TOP 10 REFID FROM Tablename where REFID LIKE '%' + cast(0001 as varchar(10)) +'%'
```

However, the problem is that it is also giving me 150100, and I wanted **0001** to be matched in order. **'0001' is passed in as a parameter** from my C# application. I know I can convert the '0001' to a string/varchar before sending it to SQL; however, I was looking for a way to do it within the SQL so I can pass in the int from the C# application.
I have found:

1. `cast('0001' as varchar(10))` as 0001 === 1, thanks to ALEX K.
2. SQL will strip the leading zero and there is no way of keeping the zero if you don't know the length.

My solution: I will send a string from my application and let SQL search it using the string.
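A sketch of that approach (the `@search` parameter name is an assumption; it would be passed as a varchar from the C# side so its leading zeros survive):

```sql
-- @search arrives as a string (e.g. '0001'), so SQL never collapses it to 1;
-- REFID is cast to a string for the pattern match.
SELECT TOP 10 REFID
FROM Tablename
WHERE CAST(REFID AS varchar(10)) LIKE '%' + @search + '%'
```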
# Code: ``` SELECT TOP 10 REFID FROM Tablename where REFID LIKE '%0001%' ```
Search half the refID in SQL
[ "", "sql", "" ]
Is there a way to, in a single query, return rows of `my_table where column_1 = "xxx"` if it does have results, and if it doesn't, return `where column_2 = "xxx"`? Like a `select * from my_table where column_1 = "xxx" or column_2 = "xxx"`, but only displaying the `column_2 = "xxx"` rows if there are no results for `column_1 = "xxx"`. This table has 400M+ rows, so please take performance into account.
Try this:

```
select *
from my_table
where (column_1 = "xxx" and exists (select null from my_table where column_1 = "xxx"))
   or (column_2 = "xxx" and not exists (select null from my_table where column_1 = "xxx"))
```
You can try this query:

```
select t.*
from my_table t
where column_1 = 'xxx'
union all
select t.*
from my_table t
where column_2 = 'xxx' and
      not exists (select 1 from my_table where column_1 = 'xxx');
```

This should have reasonable performance if you have an index on `column_1` and another index on `column_2`.

If you are only expecting one value, you can do something like this:

```
select t.*
from (select t.*
      from my_table t
      where column_1 = 'xxx' or column_2 = 'xxx'
      order by (case when column_1 = 'xxx' then 1 else 2 end)
     ) t
where rownum = 1;
```

This will not make use of an index (at least not very well), but it will accomplish what you want with a single scan of the table.
How to query a table for different columns based on first column result
[ "", "sql", "oracle11g", "" ]
If primary keys are always sorted, how can I store Guids in random order as primary key.
Well, *primary keys* aren't necessarily stored in sorted order on the disk. But *clustered indexes* are. And in the vast majority of cases the primary key is the clustered index. Though this doesn't necessarily guarantee sorting of results, it's just that results are usually sorted by the clustered index by default.

> how can I store Guids in random order as primary key

GUIDs don't make for good clustered indexes for exactly this reason. SQL Server does have something called a [Sequential GUID](http://msdn.microsoft.com/en-us/library/ms189786.aspx) to address this. The resulting GUIDs won't be consecutive, but they will be sequential. It has some caveats, though:

> Creates a GUID that is greater than any GUID previously generated by this function **on a specified computer since Windows was started**.

If the system reboots, the sequence is lost. If multiple systems create keys, the sequence is lost. Additionally, there's the problem that we're still relying on the SQL Server to generate the key, which kind of defeats a significant reason to use a GUID.

In general I would suggest not using a GUID as a clustered index. As an alternative one might use a normal `IDENTITY` key as the clustered index and create a separate GUID column (with potentially an index of its own, and even a unique constraint just to make sure applications don't try to re-insert an existing record). That separate column becomes a kind of "global identifier" in a more business-logic sense, and not so much in a data persistence implementation sense.
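As a rough sketch of that alternative (all names here are hypothetical, not from the question): the `IDENTITY` surrogate is the clustered primary key, while the GUID gets its own nonclustered unique constraint.

```sql
-- Hypothetical table: monotonically increasing IDENTITY as the clustered PK
-- avoids the page-split churn of random GUID inserts; the GUID remains the
-- application-facing "global identifier", guarded by a unique constraint.
CREATE TABLE dbo.Customers
(
    CustomerID int IDENTITY(1,1) NOT NULL,
    PublicID   uniqueidentifier NOT NULL
               CONSTRAINT DF_Customers_PublicID DEFAULT NEWID(),
    Name       nvarchar(100) NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID),
    CONSTRAINT UQ_Customers_PublicID UNIQUE NONCLUSTERED (PublicID)
);
```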
No, the table data is not always stored in the order of the primary key, but usually the primary key has a clustered index, and the data is always stored in the order of the clustered index. If you don't want the data stored in the order of the primary key, you should use a non-clustered index for it. Note that eventhough you usually get the data in the order that it is stored, the order is not guaranteed unless you use an `order by` clause. If the order is at all important, you should always specify what it should be.
Are primary keys stored in sorted order or do they just appear sorted by SQL statement?
[ "", "sql", "sql-server", "guid", "" ]
I have to query an `Oracle 11` db. With the query below I get the most recent `TAG_VALUE, TAG_DESC, INSERTION_DATE and PROJECT_ID` from my database.

```
SELECT *
FROM (SELECT t.tag_value,
             t.tag_desc,
             u.update_as_of AS INSERTION_DATE,
             p.proj_id      AS PROJECT_ID,
             Row_number() over( PARTITION BY p.proj_id ORDER BY u.update_as_of DESC) RN
      FROM project p
      join update u ON p.project_id = u.project_id
      join tag t ON t.tag_id = u.tag_id
      WHERE t.tag_desc LIKE 'Equity%')
WHERE rn = 1;
```

However, I came across cases where the answer to my request (without sorting it by date) can look like this:

```
+----------------------------------------------+
| TAG_VALUE TAG_DESC INSERTION_DATE PROJECT_ID |
+----------------------------------------------+
| null      Equity   14-DEC-14      1          |
| 0         Equity   14-DEC-14      1          |
| 312       Equity   14-DEC-14      1          |
| 23343     Equity   17-DEC-11      5          |
| 1263      Equity   16-DEC-11      5          |
| null      Equity   22-JÄN-14      2          |
| null      Equity   11-JÄN-14      2          |
| null      Equity   25-SEPT-13     2          |
| 0         Equity   20-SEPT-13     2          |
| 1234      Equity   19-SEPT-13     2          |
| 13415     Equity   18-SEPT-13     2          |
| 99999     Equity   16-OCT-10      9          |
+----------------------------------------------+
```

My `Result Set` should look like this:

```
+----------------------------------------------+
| TAG_VALUE TAG_DESC INSERTION_DATE PROJECT_ID |
+----------------------------------------------+
| 312       Equity   14-DEC-14      1          |
| 23343     Equity   17-DEC-11      5          |
| 1234      Equity   19-SEPT-13     2          |
| 99999     Equity   16-OCT-10      9          |
+----------------------------------------------+
```

There are two cases, which basically focus on the same problem:

* As you can see, when `project_id = 1` the insertion date is always the same. However, with my query above I still get `null` back because of the `ordering`. How can I get the number `312` back without getting the `null` or the `0` value?
* If `projectID = 2` there are different insertion dates, and some rows have `null` as their `TAG_VALUE`. However, I would like to have the `tagValue` of `| 1234 Equity 19-SEPT-13 2 |`, because it is the latest valid value. How can I basically ignore all `null` and also `0` values and only take a numeric value greater than `0` for the most recent date?

I really appreciate your answer!
Considering that your grouping happens within PROJECT_ID for INSERTION_DATE DESC and a positive TAG_VALUE, I tweaked the analytical functions to achieve the result. This may not be a robust solution, but will certainly help you.

**Data Setup:**

```
CREATE TABLE Table1
    ("TAG_VALUE" varchar2(5), "TAG_DESC" varchar2(6), "INSERTION_DATE" varchar2(10), "PROJECT_ID" int)
;

INSERT ALL
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES (NULL, 'Equity', '14-DEC-14', 1)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('0', 'Equity', '14-DEC-14', 1)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('312', 'Equity', '14-DEC-14', 1)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('23343', 'Equity', '17-DEC-11', 5)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('1263', 'Equity', '16-DEC-11', 5)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES (NULL, 'Equity', '22-JÄN-14', 2)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES (NULL, 'Equity', '11-JÄN-14', 2)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES (NULL, 'Equity', '25-SEPT-13', 2)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('0', 'Equity', '20-SEPT-13', 2)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('1234', 'Equity', '19-SEPT-13', 2)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('13415', 'Equity', '18-SEPT-13', 2)
    INTO Table1 ("TAG_VALUE", "TAG_DESC", "INSERTION_DATE", "PROJECT_ID")
         VALUES ('99999', 'Equity', '16-OCT-10', 9)
SELECT * FROM dual
;
```

**Query:**

```
SELECT tag_value,
       tag_desc,
       insertion_date,
       project_id
FROM   (SELECT tag_value,
               tag_desc,
               insertion_date,
               project_id,
               Last_value(Decode(tag_value, 0, NULL, tag_value) ignore nulls)
                 over (
                   PARTITION BY project_id
                   ORDER BY insertion_date
                   ROWS BETWEEN unbounded preceding AND unbounded following ) new_tag_value
        FROM   table1)
WHERE  tag_value = new_tag_value;
```

**Result:**

```
TAG_VALUE TAG_DESC INSERTION_DATE PROJECT_ID
312       Equity   14-DEC-14      1
1234      Equity   19-SEPT-13     2
23343     Equity   17-DEC-11      5
99999     Equity   16-OCT-10      9
```

Here is the [fiddle](http://www.sqlfiddle.com/#!4/013c3/31)
Your question is: "How can I basically ignore all null and also 0 values?"

The simple answer is: by removing those records in the WHERE clause. I use `AND t.tag_value > 0` here. You can replace it with `AND t.tag_value <> 0 AND t.tag_value IS NOT NULL` if you want to allow negative values.

```
SELECT *
FROM (
    SELECT t.tag_value, t.tag_desc, u.update_as_of AS INSERTION_DATE, p.proj_id AS PROJECT_ID,
           ROW_NUMBER() OVER(PARTITION BY p.proj_id ORDER BY u.update_as_of DESC) RN
    FROM updated u
    JOIN project p ON p.project_id = u.project_id
    JOIN tag t ON t.tag_id = u.tag_id
    WHERE t.tag_desc LIKE 'Equity%'
      AND t.tag_value > 0
)
WHERE RN = 1;
```
Query only numeric values of the earliest possible date
[ "", "sql", "oracle", "oracle11g", "" ]
Using Oracle, how do you select the current date (i.e. SYSDATE) at 23:59:59 (one second before midnight)? Take into account that the definition of midnight [might be a little ambiguous](https://english.stackexchange.com/questions/6459/how-should-midnight-on-be-interpreted) (does "midnight Thursday" mean straddling Thursday and Friday, or straddling Wednesday and Thursday?).
To select the current date (today) one second before midnight you can use any of the following statements:

```
SELECT TRUNC(SYSDATE + 1) - 1/(24*60*60) FROM DUAL

SELECT TRUNC(SYSDATE + 1) - INTERVAL '1' SECOND FROM DUAL;
```

What it does:

1. Sum one day to `SYSDATE`: `SYSDATE + 1`, now the date is tomorrow
2. Remove the time part of the date with `TRUNC`, now the date is tomorrow at 00:00
3. Subtract one second from the date: `- 1/(24*60*60)` or `- INTERVAL '1' SECOND`, now the date is today at 23:59:59

**Note 1:** If you want to check date intervals you might want to check @Allan's answer below.

**Note 2:** As an alternative you can use this other one (which is easier to read):

```
SELECT TRUNC(SYSDATE) + INTERVAL '23:59:59' HOUR TO SECOND FROM DUAL;
```

1. Remove the time part of the current date with `TRUNC`, now the date is today at 00:00
2. Add a time interval of `23:59:59`, now the date is today at 23:59:59

**Note 3:** To check the results you might want to add a format:

```
SELECT TO_CHAR(TRUNC(SYSDATE + 1) - 1/(24*60*60),'yyyy/mm/dd hh24:mi:ss') FROM DUAL

SELECT TO_CHAR(TRUNC(SYSDATE + 1) - INTERVAL '1' SECOND,'yyyy/mm/dd hh24:mi:ss') FROM DUAL

SELECT TO_CHAR(TRUNC(SYSDATE) + INTERVAL '23:59:59' HOUR TO SECOND,'yyyy/mm/dd hh24:mi:ss') FROM DUAL
```
Personally, I dislike using one second before midnight. Among other things, if you're using a `timestamp`, there's a possibility that the value you're comparing to falls between the gaps (i.e. 23:59:59.1). Since this kind of logic is typically used as a boundary for a range condition, I'd suggest using "less than midnight", rather than "less than or equal to one second before midnight" if at all possible. The syntax for this simplifies as well. For instance, to get a time range that represents "today", you could use either of the following:

```
date_value >= trunc(sysdate)
and date_value < trunc(sysdate) + 1

date_value >= trunc(sysdate)
and date_value < trunc(sysdate) + interval '1' day
```

It's a little more cumbersome than using `between`, but it ensures that you never have a value that falls outside of the range you're considering.
Oracle: How to select current date (Today) before midnight?
[ "", "sql", "database", "oracle", "" ]
I've only used MySQL before. Postgres is a little different for me. I'm trying to use the Postgres.app for OSX. I have a database dump from our development server, and I want to create the correct user roles and import the database to my local machine so I can do development at home (can't access the database remotely). I *think* I've created the user. **\du** shows the appropriate user with the CreateDB permission. Then I used **\i ~/dump.sql** which seems to have imported the database. However when I use **\l** to list databases, it doesn't show up. So then I tried logging in with **psql -U *username***, but then it tells me "**FATAL: database *username* does not exist.**" Is that not the right switch for login? It's what the help said. I'm getting frustrated with something simple so I appreciate any help. With the Postgres.app, how can I create the necessary users with passwords and import the database? Thanks for any help!
It sounds like you probably loaded the dump into the database you were connected to. If you didn't specify a database when you started `psql` it'll be the database named after your username. It depends a bit on the options used with `pg_dump` when the dump file was created though. Try:

```
psql -v ON_ERROR_STOP=1

CREATE DATABASE mynewdb TEMPLATE template0 OWNER whateverowneruser;
\c mynewdb
\i /path/to/dump/file.sql
```

Personally, I recommend always using `pg_dump -Fc` to create custom-format dumps instead of SQL dumps. They're a *lot* easier to work with and `pg_restore` is a lot nicer than using `psql` for restoring dumps.
**Mac users: If you are here in 2023 and are on the Mac operating system and you have created a database dump using the pg_dump utility**

**A. The dump is taken in SQL format (the simple one)**

```
pg_dump -U <USER_NAME> -h <DATABASE_HOST> <DB_NAME> > sample.sql
```

Then in order to restore it, use the commands below. First, connect using the command line/terminal:

```
psql -U <USER_NAME> -h <DATABASE_HOST>
```

Once connected, create the database using the command:

```
create database test;
\q
```

Now restore the dump using the below command and you are done :)

```
psql -U <USER_NAME> -d <DATABASE_NAME> -h <DATABASE_HOST> < sample.sql
```

For localhost use 127.0.0.1 as the database host.

**(B) The dump is taken in binary format.**

```
pg_dump -U <USER_NAME> -h <DATABASE_HOST> -d <DB_NAME> -Fc > sample.dump
```

Then in order to restore it, use the commands below. First, connect using the command line/terminal:

```
psql -U <USER_NAME> -h <DATABASE_HOST>
```

Once connected, create the database using the command:

```
create database test;
\q
```

Now restore the dump using the below command:

```
pg_restore -Fc -U <USER_NAME> -d <DATABASE_NAME> -h <DATABASE_HOST> sample.dump
```

For localhost use 127.0.0.1 as the database host.

Note: Binary backups are a more practical choice for backing up production databases because they provide faster, more reliable, and more consistent backups. However, text backups may still be useful in certain scenarios, such as archiving or manual inspection of the backup file.
Import PostgreSQL dump on OSX
[ "", "sql", "database", "macos", "postgresql", "postgres.app", "" ]
I have a table with columns for a start- and end date. My goal is to get a list of each year in that timespan for each row, so

```
+-------------------------+
| startdate  | enddate    |
+------------+------------+
| 2004-08-01 | 2007-01-08 |
| 2005-06-02 | 2007-05-08 |
+------------+------------+
```

should output this:

```
+-------+
| years |
+-------+
| 2004  |
| 2005  |
| 2006  |
| 2007  |
| 2005  |
| 2006  |
| 2007  |
+-------+
```

I'm having trouble generating the years in between the two dates. My first approach was to use a UNION (the order of dates is irrelevant), but the years in between are missing in this case...

```
Select Extract(Year From startdate) From table1
Union
Select Extract(Year From enddate) From table1
```

Thanks for any advice!
Try this Query

```
; with CTE as
(
    select datepart(year, '2005-12-25') as yr
    union all
    select yr + 1
    from CTE
    where yr < datepart(year, '2013-11-14')
)
select yr from CTE
```
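Since the question is tagged Oracle, the same recursive-CTE idea in Oracle syntax might look like this sketch (untested; Oracle 11gR2+ requires the column alias list on a recursive `WITH`, and the hardcoded dates mirror the example above):

```sql
-- Counts up one year at a time from the start year to the end year.
WITH cte (yr) AS (
    SELECT EXTRACT(YEAR FROM DATE '2005-12-25') FROM dual
    UNION ALL
    SELECT yr + 1 FROM cte
    WHERE yr < EXTRACT(YEAR FROM DATE '2013-11-14')
)
SELECT yr FROM cte;
```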
***Row Generator technique***

```
SQL> WITH DATA1 AS(
  2  SELECT TO_DATE('2004-08-01','YYYY-MM-DD') STARTDATE, TO_DATE('2007-01-08','YYYY-MM-DD') ENDDATE FROM DUAL UNION ALL
  3  SELECT TO_DATE('2005-06-02','YYYY-MM-DD') STARTDATE, TO_DATE('2007-05-08','YYYY-MM-DD') ENDDATE FROM DUAL
  4  ),
  5  DATA2 AS(
  6  SELECT EXTRACT(YEAR FROM STARTDATE) ST, EXTRACT(YEAR FROM ENDDATE) ED FROM DATA1
  7  ),
  8  data3
  9  AS
 10  (SELECT level-1 line
 11  FROM DUAL
 12  CONNECT BY level <=
 13  (SELECT MAX(ed-st) FROM data2
 14  )
 15  )
 16  SELECT ST+LINE FROM
 17  DATA2, DATA3
 18  WHERE LINE <= ED-ST
 19  ORDER BY 1
 20  /

   ST+LINE
----------
      2004
      2005
      2005
      2006
      2006
      2007

6 rows selected.

SQL>
```
List of years between two dates
[ "", "sql", "oracle", "" ]
After much searching of the web and Stack Overflow, I am still looking for a way to use an ALIAS to return columns without producing a new row/line. The following works to create the columns "Sig_1_Emp" and "Sig_3_Staff", but the data for Sig_1_Emp and Sig_3_Staff are not in the same row, but in two rows:

```
CASE WHEN VisitSignatures.order = 1 THEN Employees.last_name END AS Sig_1_Emp,
CASE WHEN VisitSignatures.order = 3 THEN Employees.last_name END AS Sig_3_Staff
```

So, would something like the following work?

```
CASE WHEN VisitSignatures.order = 1 THEN Employees.last_name AS Sig_1_Emp
     WHEN VisitSignatures.order = 3 THEN Employees.last_name AS Sig_3_Staff
END
```

Here's the full query:

```
Select CV.clientvisit_id, CV.program_id, CV.visittype, CV.non_billable, CV.rev_timein,
       CASE WHEN CVSig.ord = 1 THEN Employees.last_name ELSE Null END AS Sig_1_Emp,
       CASE WHEN CVSig.ord = 3 THEN Employees.last_name ELSE Null END AS Sig_3_Staff
From CV
Inner Join CVSig On CV.clientvisit_id = CVSig.clientvisit_id
Inner Join EmpSig On CVSig.employeesignature_id = EmpSig.employeesignature_id
Inner Join Employees On EmpSig.emp_id = Employees.emp_id
Where CV.program_id In (121, 123)
  And CV.rev_timein >= @param1
  And CV.rev_timein <= DATEADD(d, 1, @param2)
```

And a sample of the results:

```
+----------------+------------+-----------+------------+-----------+-------------+
| clientvisit_id | program_id | visittype | rev_timein | sig_1_emp | sig_3_staff |
+----------------+------------+-----------+------------+-----------+-------------+
| 1001           | 121        | M_Mgmnt   | 7/1/2014   |           | Nurse_Pat   |
| 1001           | 121        | M_Mgmnt   | 7/1/2014   | Doc_Sue   |             |
+----------------+------------+-----------+------------+-----------+-------------+
```

And this is what I'm hoping to achieve:

```
+----------------+------------+-----------+------------+-----------+-------------+
| clientvisit_id | program_id | visittype | rev_timein | sig_1_emp | sig_3_staff |
+----------------+------------+-----------+------------+-----------+-------------+
| 1001           | 121        | M_Mgmnt   | 7/1/2014   | Doc_Sue   | Nurse_Pat   |
+----------------+------------+-----------+------------+-----------+-------------+
```

My apologies for the confusion, and thank you for your patience. If this does not clarify my issue, please delete this post.
Given your update to the question, it's clear that what you're missing is a `GROUP BY` clause with appropriate aggregation functions.

Note that in your sample result, the first four columns (`clientvisit_id`, `program_id`, `visittype`, `rev_timein`) contain identical values in both sample rows. If you `GROUP BY` those four columns, the sample rows will be grouped into a single row because their values are the same. I'm not sure why you aren't showing `non_billable` here, but the concept extends to as many columns as you want to use to define distinct groups.

Since the `sig_1_emp` and `sig_3_staff` columns are not used for grouping, they must be used with [one of the available aggregate functions](http://msdn.microsoft.com/en-us/library/ms173454.aspx) as opposed to "naked" in the column list. The typical thing to do in this situation would be to use `MIN()` or `MAX()`, both of which ignore `NULL` rows and return only one of many values. They're equivalent when all the values in the group are equal, otherwise they give you whichever value is the minimum or maximum.

If you don't know confidently that all the non-`NULL` rows have the same value, then you'll have to figure out some other way of choosing or deriving a value, based on the rows that do *not* appear in the `GROUP BY` clause, or live with arbitrarily taking whichever value compares as the maximum or minimum. Keeping in mind that each data type may have its own comparison rules. Seems inadvisable.

So, try adding this to the end of your query:

```
GROUP BY clientvisit_id, program_id, visittype, rev_timein, non_billable
```

And wrapping your `CASE` expressions like so:

```
MAX(CASE WHEN CVSig.ord = 1 THEN Employees.last_name END) AS Sig_1_Emp,
MAX(CASE WHEN CVSig.ord = 3 THEN Employees.last_name END) AS Sig_3_Staff
```

And see if that does the trick (keeping in mind the caveats already stated). You could keep the `ELSE NULL` if you prefer, but the expression will be `NULL` if none of the cases are true regardless; I don't think being explicit is worth the real estate in this case (pun intended).
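Putting those two pieces into the query from the question, the whole statement would look something like this sketch (untested; aliases and parameters taken from the question):

```sql
Select CV.clientvisit_id, CV.program_id, CV.visittype, CV.non_billable, CV.rev_timein,
       MAX(CASE WHEN CVSig.ord = 1 THEN Employees.last_name END) AS Sig_1_Emp,
       MAX(CASE WHEN CVSig.ord = 3 THEN Employees.last_name END) AS Sig_3_Staff
From CV
Inner Join CVSig On CV.clientvisit_id = CVSig.clientvisit_id
Inner Join EmpSig On CVSig.employeesignature_id = EmpSig.employeesignature_id
Inner Join Employees On EmpSig.emp_id = Employees.emp_id
Where CV.program_id In (121, 123)
  And CV.rev_timein >= @param1
  And CV.rev_timein <= DATEADD(d, 1, @param2)
Group By CV.clientvisit_id, CV.program_id, CV.visittype, CV.non_billable, CV.rev_timein
```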
If I'm understanding your question correctly, you're trying to `pivot` your results. You can use the `max` aggregate with the `case` statement to do this:

```
select otherfields,
       max(CASE WHEN VisitSignatures.order = 1 THEN Employees.last_name END) AS Sig_1_Emp,
       max(CASE WHEN VisitSignatures.order = 3 THEN Employees.last_name END) AS Sig_3_Staff
from tables (VisitSignatures and Employees and others)
group by otherfields
```

Whichever other fields you are selecting, you'll want to include in the `group by` clause.
sql, case when then as
[ "", "sql", "case", "alias", "" ]
```
table1

time           userid  id1  id2
9/1/2014 3:30  user1   123  555
9/1/2014 3:32  user1   123  555
9/1/2014 3:13  user1   123  555
9/1/2014 3:15  user1   123  555
9/1/2014 3:38  user2   321  555
9/1/2014 3:21  user2   321  555
9/1/2014 3:38  user2   456  666
9/1/2014 3:21  user2   456  666

table2

id1  orderid
321  order1
123  order2
```

Explain query:

```
select_type  table   type  possible index  key          key_len  ref         row     Extra
SIMPLE       table1  ALL                                                     934420  Using where; Using temporary; Using filesort
SIMPLE       table2  ref   lookupindex     lookupindex  33       table1.id1  1
```

My table1 has about 1 billion rows, and table2 is a lookup table with about 20k rows; order 555 accounts for about 100 million rows. id2 = 555 is about 10% of table1. table2 is basically a lookup table that holds every id1. id1 -> orderid is a many-to-one relation; in other words, one id1 belongs to only one orderid. table2 and table1 do not have null values except in userid.

I want to calculate unique users for every orderid. My query is taking a long time to run (it did not finish within 5 hours, so I stopped it), and I am not sure how to optimize it other than with an index. I have an index on table2.id1.

```
select table2.orderid, count(distinct userid)
from table1
left join table2 on table1.id1 = table2.id1
where table1.id2="555"
group by table2.orderid
```

Does MySQL do the left join first or the where clause first? Should I store order 555 in a different table and then run the query?
**Q: Does MySQL do the left join first or the where clause first? Should I store order 555 in a different table and then run the query?**

In theory, the optimizer is free to choose any execution plan that produces the specified result. The optimizer is supposed to be smart enough to choose an order of operations that it thinks is most efficient. In practice, the way we write a statement, and what indexes we make available, can have a significant influence on the options available to MySQL.

---

To see the execution plan that MySQL is choosing, we can use `EXPLAIN`. That shows us a summary of the operations that MySQL will perform. [**Understanding the Query Execution Plan**](http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html)

Having appropriate indexes available makes more efficient access paths available to MySQL. Without seeing the EXPLAIN output, or a definition of the tables, and what indexes are available, we're just guessing. Given that the statement is very slow, we're going to venture a guess that suitable indexes aren't available, and secondly, that MySQL is spending a boatload of time on the "Using filesort" operation for the `GROUP BY` operation.

It's also likely that the statement can be re-written to return an equivalent result, or a result that is nearly equivalent. We can throw out some recommendations to "try this" or "try that". But let's understand the operations that MySQL needs to perform.

Firstly, there's an equality predicate on the `id2` column. If this is fairly selective (less than 10% or 20% of the total rows in `table1`), then likely an index on `table1` with `id2` as the leading column would provide efficient access, which may give some performance benefit. (This is efficient because MySQL can use a range scan operation against the index to quickly narrow in on the requested rows, without having to look at every flipping row in the table.)
Secondly, in your query there's an "outer join" operation to find matching rows in `table2`, with an equality predicate on the `id1` column. So, likely, an index on `table2` with `id1` as the leading column may be of benefit. The query also accesses the `orderid` column from the matched rows in `table2`; if we also include that column in the index, that would make it a "covering index", which is just a short way of saying that MySQL will have the ability to retrieve all of the values it needs directly from the index, without requiring lookups to pages in the underlying table.

If that's a lot of rows being retrieved, we could spend a lot of time sorting them (the sort operation required by the GROUP BY). There's a lot of information we don't have, about the cardinality of the orderid column, whether that column can be null, the cardinality of the userid column, whether that can be null, how many rows we're expecting to be returned, and so on.

---

Before we launch into tuning this particular statement, I think we need to understand what question this query is trying to answer, and make sure that this query would in fact return the answer you're looking for. We should be open to exploring whether an equivalent answer could be returned from a different query.

It looks like you want a distinct list of `orderid` from `table2` (including possible NULL values) — not all of them, but only a subset that meet certain criteria. And along with that `orderid` value, you want a count (the number of distinct `userid` values) from the rows in table1 that have a particular value in the `id2` column.

For example, if we weren't concerned with the NULL values of `orderid`... (That is, the NULL values that would be produced by the original query due to the outer join, when there are rows from table1 that don't have a matching row in table2... for every row in `table1` that doesn't have a matching row in `table2`, we know that `table2.orderid` will be NULL...)
Aside from the count for NULL orderid, the following query would return the same list of orderid and counts...

```
SELECT b.orderid
     , COUNT(DISTINCT a.userid)
  FROM table2 b
  JOIN table1 a
    ON a.id1 = b.id1
   AND a.id2 = '555'
 WHERE b.orderid IS NOT NULL
 GROUP BY b.orderid
```

For optimum performance of that query, I'd recommend a covering index on table2:

```
ON table2 (orderid, id1)
```

and a covering index on table1, either/or both of:

```
ON table1 (id2, id1, userid)
ON table1 (id1, id2, userid)
```

(It's possible that we might be able to get MySQL to perform a Tight Index Scan operation to satisfy the GROUP BY, rather than an expensive sort of a temporary table ("Using filesort; Using temporary").)

What we'd really like to see is the output from an `EXPLAIN` for that query, and for the original query.

(If we really do need the count for `NULL` values for `orderid`, we could write another query to get that separately.)
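The inner-join rewrite above can be sanity-checked against the sample rows from the question. The sketch below uses Python's built-in sqlite3 as a stand-in for MySQL (the SQL itself is dialect-neutral); the index definitions mirror the covering indexes the answer recommends, though SQLite's planner will not necessarily use them the way MySQL would:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (time TEXT, userid TEXT, id1 INTEGER, id2 TEXT);
CREATE TABLE table2 (id1 INTEGER, orderid TEXT);
-- covering-style indexes, as suggested in the answer
CREATE INDEX ix_t1 ON table1 (id2, id1, userid);
CREATE INDEX ix_t2 ON table2 (orderid, id1);
""")
rows = [
    ("9/1/2014 3:30", "user1", 123, "555"),
    ("9/1/2014 3:32", "user1", 123, "555"),
    ("9/1/2014 3:13", "user1", 123, "555"),
    ("9/1/2014 3:15", "user1", 123, "555"),
    ("9/1/2014 3:38", "user2", 321, "555"),
    ("9/1/2014 3:21", "user2", 321, "555"),
    ("9/1/2014 3:38", "user2", 456, "666"),
    ("9/1/2014 3:21", "user2", 456, "666"),
]
conn.executemany("INSERT INTO table1 VALUES (?,?,?,?)", rows)
conn.executemany("INSERT INTO table2 VALUES (?,?)",
                 [(321, "order1"), (123, "order2")])

# inner-join rewrite: the id2 filter is applied while joining, and
# rows with no matching orderid are excluded up front
result = conn.execute("""
    SELECT b.orderid, COUNT(DISTINCT a.userid)
    FROM table2 b
    JOIN table1 a ON a.id1 = b.id1 AND a.id2 = '555'
    WHERE b.orderid IS NOT NULL
    GROUP BY b.orderid
    ORDER BY b.orderid
""").fetchall()
print(result)  # [('order1', 1), ('order2', 1)]
```

On the sample data, order1 maps to id1 = 321 (only user2) and order2 maps to id1 = 123 (only user1), so each order counts one distinct user.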
The problem is your distinct operation; that one is very expensive. You might make it more efficient by putting an index on userid, and the two join keys should also have indexes. I'm not sure what you are trying to do functionally, but maybe something else is possible as an alternative.
mysql query is slow, need help to optimize
[ "", "mysql", "sql", "" ]
So I have 1 **(one)** table that contains a list of games. For example:

1st row: `team1 = abc; team2 = xyz;`

2nd row: `team1 = xyz; team2 = abc;`

3rd row: `team1 = asd; team2 = jkl;`

I've got about 4,000 rows like this. 2 teams may have played each other many times. Each row is a separate game.

A user enters a `team1` name and a `team2` name and I basically want to display all rows that match that. So basically all games that have those 2 teams playing.

I have tried:

```
SELECT * FROM matches WHERE team1 = '$team1' OR team1 = '$team2'
UNION
SELECT * FROM matches WHERE team2 = '$team2' OR team2 = '$team1'
```

I've tried a few more things using an `INNER JOIN` but it didn't seem to work either. Pretty new to SQL so any help would be appreciated! Thank you so much.
Your query would return any game in which either `$team1` or `$team2` has played. From your question I understand that you really want any game where `$team1` played against `$team2`, so I'd try

```
SELECT *
FROM matches
WHERE (team1 = '$team1' AND team2 = '$team2')
   OR (team1 = '$team2' AND team2 = '$team1')
```

While you *could* also achieve the same result using a `union`, it will be easier to understand and probably also quicker to do it this way, given you have an index on both columns.
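The symmetric predicate above can be demonstrated with the three sample rows from the question. This sketch uses Python's sqlite3 (the dialect-neutral SQL is unchanged), and — as an aside — passes the team names as bound parameters rather than interpolating `'$team1'` into the string, which is the safer pattern in application code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE matches (id INTEGER, team1 TEXT, team2 TEXT)")
conn.executemany("INSERT INTO matches VALUES (?,?,?)",
                 [(1, "abc", "xyz"), (2, "xyz", "abc"), (3, "asd", "jkl")])

def games_between(a, b):
    # symmetric predicate: matches either home/away orientation
    return conn.execute("""
        SELECT id FROM matches
        WHERE (team1 = ? AND team2 = ?) OR (team1 = ? AND team2 = ?)
        ORDER BY id
    """, (a, b, b, a)).fetchall()

print(games_between("abc", "xyz"))  # [(1,), (2,)]
print(games_between("xyz", "abc"))  # same two games, either way round
```

Rows 1 and 2 are both returned regardless of which team the user enters first, which is exactly the behaviour the question asks for.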
You could do this with logical operators:

```
SELECT *
FROM matches
WHERE (team1 = '$team1' AND team2 = '$team2')
   OR (team1 = '$team2' AND team2 = '$team1')
```
Only showing results that are the same from 2 queries
[ "", "mysql", "sql", "" ]
Can anyone with NOLOCK experience enlighten me? I read that it can cause log file corruption - is that possible? I think MS would never do that. Also, if "some situations", like mine, are okay with NOLOCK, why not use it?

I have no datasets or returned tables (like other posts on Stack Overflow). I have one SQL statement with an ID select which returns only one row, like:

```
sqlstr = "SELECT Parameter1 FROM Companies WITH (NOLOCK) WHERE ID = 25"
```

Also, this parameter does not change. But as this is a heavily loaded ASP.NET application (not a web site) and I run this kind of query again and again, every SQL read causes a lock in SQL Server. If possible I'd prefer to avoid that.

Every post on this site is about multiple records, recordsets, dirty reads. I could not find anything about "reading a single record which is not changing all the time". Any expert's opinion, please?
After deep searches and asking questions of many experts, I found out that using the NOLOCK hint causes no problem in this scenario, yet it's not advised. There is nothing wrong with NOLOCK here, but as I use SQL Server 2014 I "should" use the ISOLATION LEVEL option; it's the mechanism that came along as an alternative to NOLOCK. For example, for huge table selects that cause deadlocks:

```
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM HugeTable;
COMMIT TRANSACTION;
```

is very handy.

I had HugeTable and a web form that uses a sqlAdapter and RadGrid to show this data. Whenever I ran this report, though the indexes and paging of the RadGrid were fine, it caused deadlocks, which makes sense. I changed the select statement of the sqlAdapter to the above; it's perfect now.

Best.
`NOLOCK` has two main disadvantages: It can return uncommitted data (you don't seem worried about that) and it can cause queries to spuriously fail under very rare circumstances. Never will `NOLOCK` cause physical database corruption. Consider using snapshot isolation for transactions that only read data. Readers under SI do not lock or block. SI takes them out of the picture. It provides perfect consistency for read-only transactions. Be sure to find out about the drawbacks.
Using NOLOCK for reading single static row. What's the harm?
[ "", "sql", "sql-server", "deadlock", "" ]
I have two tables (SQL Fiddle available [here](http://sqlfiddle.com/#!2/0bcb5b)): People ``` ID Company_ID Name 1 1 Jones 2 2 Smith 3 3 Kim 4 2 Takahashi 5 3 Patel 6 1 Muler ``` Companies ``` ID Name 1 QQQ 2 AAA 3 MMM ``` I wish to order a selection of People by the Name of the Company they work for. SELECT \* FROM People WHERE (Some where clause) ORDER BY **HELP!**
You should `join` both tables:

```
SELECT p.*
FROM People p
JOIN Companies c ON c.ID = p.Company_ID
WHERE --(Some where clause)
ORDER BY c.Name ASC
```
``` SELECT * FROM People INNER JOIN Companies ON Company_ID = Companies.ID ORDER BY Companies.Name ```
Order BY one table in MySQL
[ "", "mysql", "sql", "" ]
I have the following scenario. **SOURCE TABLE 1** ``` CREATE TABLE #Table1 ( Div varchar(10), Dept varchar(10), States varchar(10) ) INSERT INTO #Table1 SELECT 'Div1','Dept1','CA,NV,TX' UNION ALL SELECT 'Div2','Dept2','MI,OH,IN' UNION ALL SELECT 'Div3','Dept2','NY,NJ,PA' UNION ALL SELECT 'Div4','Dept1',NULL ``` **SOURCE TABLE 2** ``` CREATE TABLE #Table2 ( Div varchar(10), Dept varchar(10), States varchar(10) ) INSERT INTO #Table2 SELECT 'Div1','Dept1','CA' UNION ALL SELECT 'Div1','Dept1','NV, TX' UNION ALL SELECT 'Div1','Dept1','TX, CA' UNION ALL SELECT 'Div1','Dept1','CA, NV' UNION ALL SELECT 'Div2','Dept2','MI, OH' UNION ALL SELECT 'Div2','Dept2','MI, IN' UNION ALL SELECT 'Div2','Dept2','OH' UNION ALL SELECT 'Div3','Dept2','NY, NJ, PA' ``` **DESIRED OUTPUT** ``` CREATE TABLE #Table3 ( Div varchar(10), Dept varchar(10), States varchar(50) ) INSERT INTO #Table3 SELECT 'Div1','Dept1','CA - (3), NV - (2), TX - (2)' UNION ALL SELECT 'Div2','Dept2','MI - (2), OH - (2), IN - (1)' UNION ALL SELECT 'Div3','Dept2','NY - (1), NJ - (1), PA - (1)' UNION ALL SELECT 'Div4','Dept1',NULL SELECT * FROM #Table1 SELECT * FROM #Table2 SELECT * FROM #Table3 DROP TABLE #Table1 DROP TABLE #Table2 DROP TABLE #Table3 ``` [SQLFIDDLE](http://sqlfiddle.com/#!6/9801f) Goal: Based on `#Table1` and `#Table2`, join both table on `Div` and `Dept` fields and then aggregate the counts for distinct states in `States` field and create an output where you have `Div`, `Dept`, and `States` with unique count of each one of those states printed next to the state. I am not sure how to achieve this. I am trying `LIKE` but can't quite figure out how to make it dynamic. I will continue trying to see if I can figure out. Thought I would post this question here and see if I can get some assist. Thank you UPDATE: Desired Output ``` Div Dept States Div1 Dept1 CA - (3), NV - (2), TX - (2) Div2 Dept2 MI - (2), OH - (2), IN - (1) Div3 Dept2 NY - (1), NJ - (1), PA - (1) Div4 Dept1 NULL ```
Your requirements are very nasty but as developers we have to work with what we've got. Here's a solution using Common Table Expression (CTE) extensively: ``` ;WITH CTE1 AS ( SELECT Div, Dept, REPLACE(States,' ','') + ',' AS States FROM Table2 ), CTE2 AS ( SELECT c1.Div, c1.Dept, LEFT(c1.States,CHARINDEX(',', c1.States)-1) AS IndividualState, RIGHT(c1.States,LEN(c1.States)-CHARINDEX(',', c1.States)) AS RemainingStates FROM CTE1 c1 UNION ALL SELECT c2.Div, c2.Dept, LEFT(c2.RemainingStates,CHARINDEX(',', c2.RemainingStates)-1), RIGHT(c2.RemainingStates,LEN(c2.RemainingStates) - CHARINDEX(',', c2.RemainingStates)) FROM CTE2 c2 WHERE LEN(c2.RemainingStates) > 0 ), CTE3 AS ( SELECT Div, Dept, IndividualState, COUNT(*) AS StateCount FROM CTE2 GROUP BY Div, Dept, IndividualState ), CTE4 AS ( SELECT t1.Div, t1.Dept, ( SELECT c3.IndividualState + ' - (' + CONVERT(varchar(10),c3.StateCount) + '), ' FROM CTE3 c3 WHERE c3.Div = t1.Div AND c3.Dept = t1.Dept FOR XML PATH('') ) AS States FROM Table1 t1 ) SELECT Div, Dept, LEFT(States, LEN(States) - 1) AS States FROM CTE4 ``` ### Explanation 1. `CTE1` cleans up the data in `Table2`: remove spaces, add a comma to the end 2. `CTE2` does the normalization 3. `CTE3` does the counting 4. `CTE4` does the final assembly, putting `CA | 3` into `CA - (3), ...` The last `SELECT` remove the trailing comma for neater output. To better understand each step, you can replace the final `SELECT` with `SELECT * FROM CTE1`, `SELECT * FROM CTE2`, etc.
Ok, so first of all, you'll need to split the concatenated values in `#Temp1` and `#Temp2`. There are various methods for doing so, I'll use the **numbers table** one that is described [in this awesome blog](http://sqlperformance.com/2012/07/t-sql-queries/split-strings) post from Aaron Bertrand. So, we'll need a numbers table, which can be done this way: ``` ;WITH n AS ( SELECT x = ROW_NUMBER() OVER (ORDER BY s1.[object_id]) FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2 ) SELECT Number = x INTO #Numbers FROM n WHERE x BETWEEN 1 AND 8000; ``` Then, you'll need to actually do the splitting and then a group concatenation method for your result: ``` ;WITH T1 AS ( SELECT * FROM #Table1 T OUTER APPLY (SELECT Item = SUBSTRING(T.States, Number, CHARINDEX(',',T.States + ',', Number) - Number) FROM #Numbers WHERE Number <= CONVERT(INT, LEN(T.States)) AND SUBSTRING(',' + T.States, Number, LEN(',')) = ',') N ), T2 AS ( SELECT * FROM #Table2 T OUTER APPLY (SELECT Item = SUBSTRING(T.States, Number, CHARINDEX(', ',T.States + ', ', Number) - Number) FROM #Numbers WHERE Number <= CONVERT(INT, LEN(T.States)) AND SUBSTRING(', ' + T.States, Number, LEN(', ')) = ', ') N ), T3 AS ( SELECT T1.Div, T1.Dept, T1.Item, COUNT(*) N FROM T1 LEFT JOIN T2 ON T1.Div = T2.Div AND T1.Dept = T2.Dept AND T1.Item = T2.Item GROUP BY T1.Div, T1.Dept, T1.Item ) SELECT A.Div, A.Dept, States = STUFF((SELECT ',' + CONVERT(VARCHAR(20), Item) + ' - (' + CAST(N AS VARCHAR(4)) + ')' FROM T3 WHERE Div = A.Div AND Dept = A.Dept FOR XML PATH(''), TYPE).value('.[1]','nvarchar(max)'),1,1,'') FROM T3 A ORDER BY Div, Dept, Item ``` The results are: ``` ╔══════╦═══════╦════════════════════════════╗ ║ Div ║ Dept ║ States ║ ╠══════╬═══════╬════════════════════════════╣ ║ Div1 ║ Dept1 ║ CA - (3),NV - (2),TX - (2) ║ ║ Div1 ║ Dept1 ║ CA - (3),NV - (2),TX - (2) ║ ║ Div1 ║ Dept1 ║ CA - (3),NV - (2),TX - (2) ║ ║ Div2 ║ Dept2 ║ IN - (1),MI - (2),OH - (2) ║ ║ Div2 ║ Dept2 ║ IN - (1),MI - (2),OH - (2) ║ ║ Div2 ║ 
Dept2 ║ IN - (1),MI - (2),OH - (2) ║ ║ Div3 ║ Dept2 ║ NJ - (1),NY - (1),PA - (1) ║ ║ Div3 ║ Dept2 ║ NJ - (1),NY - (1),PA - (1) ║ ║ Div3 ║ Dept2 ║ NJ - (1),NY - (1),PA - (1) ║ ║ Div4 ║ Dept1 ║ NULL ║ ╚══════╩═══════╩════════════════════════════╝ ```
T-SQL | String "Manipulation" and Aggregation
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
In the spark docs it's clear how to create parquet files from an `RDD` of your own case classes (from the docs):

```
val people: RDD[Person] = ??? // An RDD of case class objects, from the previous example.

// The RDD is implicitly converted to a SchemaRDD by createSchemaRDD, allowing it to be stored using Parquet.
people.saveAsParquetFile("people.parquet")
```

But it's not clear how to convert back; really we want a method `readParquetFile` where we can do:

```
val people: RDD[Person] = sc.readParquetFile[Person](path)
```

where the values of the case class are those which are read by the method.
The best solution I've come up with that requires the least amount of copy and pasting for new classes is as follows (I'd still like to see another solution though) First you have to define your case class, and a (partially) reusable factory method ``` import org.apache.spark.sql.catalyst.expressions case class MyClass(fooBar: Long, fred: Long) // Here you want to auto gen these functions using macros or something object Factories extends java.io.Serializable { def longLong[T](fac: (Long, Long) => T)(row: expressions.Row): T = fac(row(0).asInstanceOf[Long], row(1).asInstanceOf[Long]) } ``` Some boiler plate which will already be available ``` import scala.reflect.runtime.universe._ val sqlContext = new org.apache.spark.sql.SQLContext(sc) import sqlContext.createSchemaRDD ``` The magic ``` import scala.reflect.ClassTag import org.apache.spark.sql.SchemaRDD def camelToUnderscores(name: String) = "[A-Z]".r.replaceAllIn(name, "_" + _.group(0).toLowerCase()) def getCaseMethods[T: TypeTag]: List[String] = typeOf[T].members.sorted.collect { case m: MethodSymbol if m.isCaseAccessor => m }.toList.map(_.toString) def caseClassToSQLCols[T: TypeTag]: List[String] = getCaseMethods[T].map(_.split(" ")(1)).map(camelToUnderscores) def schemaRDDToRDD[T: TypeTag: ClassTag](schemaRDD: SchemaRDD, fac: expressions.Row => T) = { val tmpName = "tmpTableName" // Maybe should use a random string schemaRDD.registerAsTable(tmpName) sqlContext.sql("SELECT " + caseClassToSQLCols[T].mkString(", ") + " FROM " + tmpName) .map(fac) } ``` Example use ``` val parquetFile = sqlContext.parquetFile(path) val normalRDD: RDD[MyClass] = schemaRDDToRDD[MyClass](parquetFile, Factories.longLong[MyClass](MyClass.apply)) ``` See also: <http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-Convert-SchemaRDD-back-to-RDD-td9071.html> Though I failed to find any example or documentation by following the JIRA link.
An easy way is to provide your own converter `(Row) => CaseClass`. This is a bit more manual, but if you know what you are reading it should be quite straightforward. Here is an example: ``` import org.apache.spark.sql.SchemaRDD case class User(data: String, name: String, id: Long) def sparkSqlToUser(r: Row): Option[User] = { r match { case Row(time: String, name: String, id: Long) => Some(User(time,name, id)) case _ => None } } val parquetData: SchemaRDD = sqlContext.parquetFile("hdfs://localhost/user/data.parquet") val caseClassRdd: org.apache.spark.rdd.RDD[User] = parquetData.flatMap(sparkSqlToUser) ```
How to convert spark SchemaRDD into RDD of my case class?
[ "", "sql", "apache-spark", "parquet", "" ]
I know that you can get column names from a table via the following trick in hive: ``` hive> set hive.cli.print.header=true; hive> select * from tablename; ``` Is it also possible to **just** get the column names from the table? I dislike having to change a setting for something I only need once. My current solution is the following: ``` hive> set hive.cli.print.header=true; hive> select * from tablename; hive> set hive.cli.print.header=false; ``` This seems too verbose and against the DRY-principle.
If you simply want to see the column names this one line should provide it without changing any settings: ``` describe database.tablename; ``` However, if that doesn't work for your version of hive this code will provide it, but your default database will now be the database you are using: ``` use database; describe tablename; ```
you could also do `show columns in $table` or see [Hive, how do I retrieve all the database's tables columns](https://stackoverflow.com/questions/29239565/hive-how-do-i-retrieve-all-the-databases-tables-columns/33154251#33154251) for access to hive metadata
Just get column names from hive table
[ "", "sql", "hadoop", "hive", "" ]
I have data of the following format: ``` Date Value 08/28 100 09/01 1 09/01 5 09/10 2 ``` I would like my output to be: ``` Date Value 08/28 100 08/29 100 08/30 100 08/31 100 09/01 106 09/02 106 . . . 09/10 108 ``` I'm just getting started with SQL, so any help would be appreciated. What I have right now is below, but that's not really close to what I seek: ``` SELECT Date, COUNT(DISTINCT(Service)) AS Value FROM [Directory] WHERE Date <= @myDate GROUP BY Date ORDER BY Date ```
First, you can use a sub query to get the aggregate values

```
SELECT Date,
    (SELECT SUM(Value)
     FROM Directory d
     WHERE d.Date <= Directory.Date)
FROM [Directory]
WHERE Date <= @myDate
ORDER BY Date
```

Which would give you something that looks like this (note that both 09/01 rows get the same running total, since their dates tie):

```
Date    Value
08/28   100
09/01   106
09/01   106
09/10   108
```

Then you can add a Date table as sgeddes suggested. This article explains it fairly well: <http://michaelmorley.name/how-to/create-date-dimension-table-in-sql-server>

Then you can modify your query like so, driving from the date table so every date appears even when there is no matching row in [Directory]:

```
SELECT DateTable.Date,
    (SELECT SUM(Value)
     FROM Directory d
     WHERE d.Date <= DateTable.Date)
FROM DateTable
LEFT OUTER JOIN [Directory] on Directory.Date = DateTable.Date
WHERE DateTable.Date <= @myDate
ORDER BY DateTable.Date
```

To get the data format you're looking for.
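The correlated-subquery running total can be checked on the question's sample values. This is a sketch with Python's sqlite3 rather than SQL Server, and it assumes ISO-formatted date strings so that plain string comparison orders them correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE directory (date TEXT, value INTEGER)")
conn.executemany("INSERT INTO directory VALUES (?,?)",
                 [("2014-08-28", 100), ("2014-09-01", 1),
                  ("2014-09-01", 5), ("2014-09-10", 2)])

# correlated subquery: for each row, sum every value dated on or before it
result = conn.execute("""
    SELECT date,
           (SELECT SUM(value) FROM directory d WHERE d.date <= directory.date)
    FROM directory
    ORDER BY date
""").fetchall()
print(result)
# [('2014-08-28', 100), ('2014-09-01', 106),
#  ('2014-09-01', 106), ('2014-09-10', 108)]
```

Because the predicate is `d.date <= directory.date`, rows that share a date also share a running total — both 09-01 rows show 106.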
Based on sgeddes' suggestion:

```
SELECT a.Date, COUNT(DISTINCT(d.Service)) AS Value
FROM [Directory] d
LEFT OUTER JOIN [Date Table] a on d.Date = a.Date
WHERE Date <= @myDate
GROUP BY Date
ORDER BY Date
```
Query to aggregate totals between dates
[ "", "sql", "aggregate-functions", "" ]
I'm new to SQL and am having trouble understanding why there's a `FROM` keyword in a `JOIN` statement if I use dot notation to select the `tables.columns` that I want. Does it matter which table I choose out of the two? I didn't see any explanation for this in w3schools definition on which table is the `FROM` table. In the example below, how do I know which table to choose for the `FROM`? Since I essentially already selected which `table.column` to select, can it be either? For example: ``` SELECT Customers.CustomerName, Orders.OrderID FROM Customers INNER JOIN Orders ON Customers.CustomerID=Orders.CustomerID ORDER BY Customers.CustomerName; ```
The order doesn't matter in an `INNER JOIN`. However, it does matter in `LEFT JOIN` and `RIGHT JOIN`. In a `LEFT JOIN`, the table in the `FROM` clause is the primary table; the result will contain every row selected from this table, while rows named in the `LEFT JOIN` table can be missing (these columns will be `NULL` in the result). `RIGHT JOIN` is similar but the reverse: rows can be missing in the table named in `FROM`. For instance, if you change your query to use `LEFT JOIN`, you'll see customers with no orders. But if you swapped the order of the tables and used a `LEFT JOIN`, you wouldn't see these customers. You would see orders with no customer (although such rows probably shouldn't exist).
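The answer's point — table order is irrelevant for `INNER JOIN` but decisive for `LEFT JOIN` — is easy to see on a two-row example. A sketch with Python's sqlite3, using a hypothetical customer "Bob" who has no orders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, name TEXT);
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER);
INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO orders VALUES (10, 1);   -- Bob has no orders
""")

inner = conn.execute("""
    SELECT c.name, o.order_id FROM customers c
    JOIN orders o ON o.customer_id = c.id
""").fetchall()

left = conn.execute("""
    SELECT c.name, o.order_id FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    ORDER BY c.id
""").fetchall()

swapped = conn.execute("""
    SELECT c.name, o.order_id FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
""").fetchall()

print(inner)    # [('Ann', 10)]  -- swapping the tables wouldn't change this
print(left)     # [('Ann', 10), ('Bob', None)]  -- Bob kept, order_id is NULL
print(swapped)  # [('Ann', 10)]  -- swapping the LEFT JOIN tables drops Bob
```

With `LEFT JOIN`, every row of the table named first (the `FROM` table) survives; swap the tables and the order-less customer disappears, exactly as the answer describes.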
The FROM clause refers to the join, not just one of the tables. The join of the tables creates a set from which you will be selecting columns.
Explain which table to choose "FROM" in a JOIN statement
[ "", "mysql", "sql", "join", "" ]
I have the following table: ``` ID GROUPID oDate oValue 1 A 2014-06-01 100 2 A 2014-06-02 200 3 A 2014-06-03 300 4 A 2014-06-04 400 5 A 2014-06-05 500 FF. until the end of the month 30 A 2014-06-30 600 ``` I have 3 kinds of GROUPID, and each group will create one record per day. I want to calculate the total of oValue from the 2nd day of each month until the end of the month. So the total of June would be from 2/Jun/2014 until 30/Jun/2014. If July, then the total would be from 2/Jul/2014 until 31/Jul/2014. The output will be like this (sample): ``` GROUPID MONTH YEAR tot_oValue A 6 2014 2000 A 7 2014 3000 B 6 2014 1500 B 7 2014 5000 ``` Does anyone know how to solve this with sql syntax? Thank you.
You can use a correlated subquery to get this: ``` SELECT T.ID, T.GroupID, t.oDate, T.oValue, ct.TotalToEndOfMonth FROM T OUTER APPLY ( SELECT TotalToEndOfMonth = SUM(oValue) FROM T AS T2 WHERE T2.GroupID = T.GroupID AND T2.oDate >= T.oDate AND T2.oDate < DATEADD(MONTH, DATEDIFF(MONTH, 0, T.oDate) + 1, 0) ) AS ct; ``` For your example data this gives: ``` ID GROUPID ODATE OVALUE TOTALTOENDOFMONTH 1 A 2014-06-01 100 2100 2 A 2014-06-02 200 2000 3 A 2014-06-03 300 1800 4 A 2014-06-04 400 1500 5 A 2014-06-05 500 1100 30 A 2014-06-30 600 600 ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/29f72/1)** For future reference if you ever upgrade, in SQL Server 2012 (and later) this becomes even easier with windowed aggregate functions that allow ordering: ``` SELECT T.*, TotalToEndOfMonth = SUM(oValue) OVER (PARTITION BY GroupID, DATEPART(YEAR, oDate), DATEPART(MONTH, oDate) ORDER BY oDate DESC) FROM T ORDER BY oDate; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!6/29f72/2)** **EDIT** If you only want this for the 2nd of each month, but still need all the fields then you can just filter the results of the first query I posted: ``` SELECT T.ID, T.GroupID, t.oDate, T.oValue, ct.TotalToEndOfMonth FROM T OUTER APPLY ( SELECT TotalToEndOfMonth = SUM(oValue) FROM T AS T2 WHERE T2.GroupID = T.GroupID AND T2.oDate >= T.oDate AND T2.oDate < DATEADD(MONTH, DATEDIFF(MONTH, 0, T.oDate) + 1, 0) ) AS ct WHERE DATEPART(DAY, T.oDate) = 2; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/29f72/3)** If you are only concerned with the total then you can use: ``` SELECT T.GroupID, [Month] = DATEPART(MONTH, oDate), [Year] = DATEPART(YEAR, oDate), tot_oValue = SUM(T.oValue) FROM T WHERE DATEPART(DAY, T.oDate) >= 2 GROUP BY T.GroupID, DATEPART(MONTH, oDate), DATEPART(YEAR, oDate); ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/29f72/8)**
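The windowed variant at the end of the answer — a descending running sum per group and calendar month — can be sketched with Python's sqlite3, which supports window functions in SQLite 3.25+ (bundled with recent Python builds). `strftime('%Y-%m', …)` stands in for SQL Server's `DATEPART(YEAR/MONTH, …)` pair here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (groupid TEXT, odate TEXT, ovalue INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("A", "2014-06-01", 100), ("A", "2014-06-02", 200),
    ("A", "2014-06-05", 500), ("A", "2014-06-30", 600),
])

# running total taken in DESCENDING date order within each
# (group, calendar month) partition = "this date to end of month"
result = conn.execute("""
    SELECT odate, ovalue,
           SUM(ovalue) OVER (PARTITION BY groupid, strftime('%Y-%m', odate)
                             ORDER BY odate DESC) AS to_month_end
    FROM t
    ORDER BY odate
""").fetchall()
for row in result:
    print(row)
# ('2014-06-01', 100, 1400) down to ('2014-06-30', 600, 600)
```

Reading the partition backwards is the trick: the earliest date accumulates everything up to month end, and the last day of the month accumulates only itself.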
If you want grouped per GROUPID, year and month this should do it: ``` SELECT GROUPID, [MONTH] = MONTH(oDate), [YEAR] = YEAR(oDate), tot_oValue = SUM(ovalue) FROM your_table WHERE DAY(odate) > 1 GROUP BY GROUPID, YEAR(oDate), MONTH(oDate) ORDER BY GROUPID, YEAR(oDate), MONTH(oDate) ```
SUM from Specific Date until the end of the month SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'd like to insert these values in the following fashion: ``` insert into table (name, action-id) values ('user', select action from actions where name='user2'); ``` The result being: Inserts along the line of, `('user', 1) ('user', 2) ('user', 3)` I'm noticing this isn't correct sql. How would I go about accomplishing this? note) ``` select action from actions where name='user2' ``` would return: (1, 2, 3)
You can do it with a loop:

```
BEGIN
  FOR x IN (select action from actions where name='user2')
  LOOP
    insert into table (name, action-id) values ('user', x.action);
  END LOOP;
END;
```

or you could use the [INSERT/SELECT](http://docs.oracle.com/cd/E17952_01/refman-5.1-en/insert-select.html) syntax:

```
INSERT INTO table (name, action-id)
SELECT 'user', action
FROM actions
WHERE name='user2';
```
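The `INSERT ... SELECT` form can be demonstrated with sqlite3 (the syntax is essentially the same in Oracle; the table name `target` here is hypothetical, since `table` and `action-id` from the question aren't legal identifiers as written):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actions (name TEXT, action INTEGER);
CREATE TABLE target (name TEXT, action_id INTEGER);
INSERT INTO actions VALUES ('user2', 1), ('user2', 2), ('user2', 3), ('other', 9);
""")

# the constant 'user' is just another column expression in the SELECT,
# repeated once per row the SELECT produces
conn.execute("""
    INSERT INTO target (name, action_id)
    SELECT 'user', action FROM actions WHERE name = 'user2'
""")

result = conn.execute("SELECT * FROM target ORDER BY action_id").fetchall()
print(result)  # [('user', 1), ('user', 2), ('user', 3)]
```

One statement inserts all three rows — no loop needed, which is generally both simpler and faster than the PL/SQL loop version.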
Add the fixed value as a column in your query, and use insert-select instead of insert-values: ``` insert into table (name, action-id) select 'user', action from actions where name='user2'; ```
Oracle) Insert multiple rows with one fixed value
[ "", "sql", "oracle", "" ]
I have a table with 782,856 records. There is a column PEOPLE\_TYPE in this table that is varchar(20). I don't think table schema matters but if it does I will gladly post it. It has these distinct values (parens is a count of each type): ``` NULL (782,101) ANONYMOUS (1) BOARD (530) USER (224) ``` So why does this select return these results??? ``` select * from people where PEOPLE_TYPE != 'BOARD' ``` This return 225 rows...USER & ANONYMOUS....why aren't my nulls included...because I have now performed a text search and NULLs can't really be compared so they are eliminated? Thank You for your patience with my remedial question.
NULL is a strange thing. Any comparison with NULL evaluates to UNKNOWN, which a WHERE clause treats like false:

NULL = NULL is not true

NULL != anything is not true

NULL != NULL is also not true.

You have to say things like column is null, or column is not null. Your query would need

```
select * from people where PEOPLE_TYPE != 'BOARD' or PEOPLE_TYPE is null
```
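This three-valued-logic behaviour is the same across SQL engines (with default ANSI null handling), so it can be shown in miniature with Python's sqlite3, using one row per distinct PEOPLE_TYPE from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, people_type TEXT)")
conn.executemany("INSERT INTO people VALUES (?,?)",
                 [(1, None), (2, "BOARD"), (3, "USER"), (4, "ANONYMOUS")])

# NULL != 'BOARD' evaluates to UNKNOWN, so the NULL row is filtered out
without_nulls = conn.execute(
    "SELECT id FROM people WHERE people_type != 'BOARD' ORDER BY id"
).fetchall()

# an explicit IS NULL test brings the uncategorised row back
with_nulls = conn.execute("""
    SELECT id FROM people
    WHERE people_type != 'BOARD' OR people_type IS NULL
    ORDER BY id
""").fetchall()

print(without_nulls)  # [(3,), (4,)]       -- NULL row silently dropped
print(with_nulls)     # [(1,), (3,), (4,)] -- NULL row included
```

The first query mirrors the asker's surprise: only USER and ANONYMOUS survive; adding `OR ... IS NULL` restores the NULL row.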
You can read this for details on why records with NULL are not being returned: <http://msdn.microsoft.com/en-us/library/ms188048.aspx> If you want records with NULL to be returned you need to write the query like this: ``` select * from people where ISNULL(PEOPLE_TYPE, '0') != 'BOARD' ``` Or this: ``` select * from people where PEOPLE_TYPE != 'BOARD' OR PEOPLE_TYPE IS NULL ```
Why Select eliminating NULL records on a varchar comparison
[ "", "sql", "sql-server", "t-sql", "" ]
How to write a query to just determine that the values in a column are unique?
Try this: ``` SELECT CASE WHEN count(distinct col1)= count(col1) THEN 'column values are unique' ELSE 'column values are NOT unique' END FROM tbl_name; ``` Note: This only works if 'col1' does not have the data type 'ntext' or 'text'. If you have one of these data types, use 'distinct CAST(col1 AS nvarchar(4000))' (or similar) instead of 'distinct col1'.
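The count-comparison trick above can be wrapped in a tiny sqlite3 sketch. Note that both `COUNT(DISTINCT col1)` and `COUNT(col1)` ignore NULLs, so duplicate NULLs would not be flagged by this check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_name (col1 TEXT)")

def col_is_unique():
    # equal counts <=> no non-NULL value appears twice
    distinct_n, total_n = conn.execute(
        "SELECT COUNT(DISTINCT col1), COUNT(col1) FROM tbl_name"
    ).fetchone()
    return distinct_n == total_n

conn.executemany("INSERT INTO tbl_name VALUES (?)", [("a",), ("b",), ("c",)])
unique_before = col_is_unique()

conn.execute("INSERT INTO tbl_name VALUES ('a')")  # introduce a duplicate
unique_after = col_is_unique()

print(unique_before, unique_after)  # True False
```

An alternative that also tells you *which* values repeat is `SELECT col1, COUNT(*) FROM tbl_name GROUP BY col1 HAVING COUNT(*) > 1`.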
``` select count(distinct column_name), count(column_name) from table_name; ``` If the # of unique values is equal to the total # of values, then all values are unique.
SQL query to determine that values in a column are unique
[ "", "sql", "sql-server", "t-sql", "" ]
I am having a bit of trouble with a query for SQL Server 2008. I have a table with some values and a category. This category can be e.g. Stock, Bond or NULL. Then I may want to see everything in my table that is not bonds: ``` SELECT Value, Name, Currency, Assetclass FROM MyTable WHERE Assetclass <> 'Bond' ``` Here I expect to see all my assets that are Stock and uncategorised (NULL). But instead I only see the stocks. I get the same result as setting my Where-condition to Assetclass = 'Stock'. I am aware that NULL is treated as an unidentified value, but I would expect it to only disregard rows that contain exactly 'Bond' and keep everything else, but this is apparently not the case?
As others have pointed out, this is the expected behavior. If you don't want to do an `OR` you could always replace null with something else in your comparison: ``` WHERE ISNULL(Assetclass, 'Anything but Bond') <> 'Bond' ```
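The sentinel-replacement idea can be shown with sqlite3; SQLite has no `ISNULL(a, b)`, so the portable `COALESCE` is used here instead, which behaves the same way for this purpose:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (name TEXT, assetclass TEXT)")
conn.executemany("INSERT INTO assets VALUES (?,?)",
                 [("AAPL", "Stock"), ("T-Bill", "Bond"), ("Misc", None)])

# replace NULL with a sentinel that can never equal 'Bond',
# so uncategorised rows survive the <> comparison
result = conn.execute("""
    SELECT name FROM assets
    WHERE COALESCE(assetclass, 'Anything but Bond') <> 'Bond'
    ORDER BY name
""").fetchall()
print(result)  # [('AAPL',), ('Misc',)]
```

Both the stock and the uncategorised row come back, and only the bond is excluded — the result the asker expected from the plain `<> 'Bond'` predicate.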
This is the expected behaviour. You are asking for all the rows that have a value that is different from 'Bond'. `NULL` is not a value but a 'marker' stating that the system has no clue about the content of that field; since the content is unknown, the system cannot say for sure that the value is different from 'Bond', hence the row is not returned.
Query not returning values for NULL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have the following table: ``` A B C D E F James Michael 123 Hello World 1 James Michael 123 Hello World 5 James Michael 123 Hello World 7 Harold Reynolds 345 There Poop 1 John Lowland 555 Woh Pop 1 Howard Yow 255 Man That 1 ``` I want to be able to select ALL the rows based on the MAX value of F. Result should be: ``` James Michael 123 Hello World Harold Reynolds 345 There Poop John Lowland 555 Woh Pop Howard Yow 255 Man That ```
You can do this with the `ROW_NUMBER()` function: ``` ;with cte AS (SELECT *,ROW_NUMBER() OVER(PARTITION BY "C" ORDER BY "F" DESC) AS RN FROM Table1) SELECT "A", "B", "C", "D", "E" FROM cte WHERE RN = 1 ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!15/a5878/3/0) The `ROW_NUMBER()` function generates a number for every row starting from 1 for each group of fields utilized in the `PARTITION BY` clause (optional) and the order is determined by the `ORDER BY` clause (required). Note: I'm assuming your `C` field is sufficient for identifying a row, but you may need to add fields to the `PARTITION BY` clause if that's not the case.
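The `ROW_NUMBER()` pattern runs unchanged in SQLite 3.25+ (bundled with recent Python builds), so a compact check with sqlite3 is possible; two of the question's groups are enough to show the mechanics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, c INTEGER, f INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    ("James", "Michael", 123, 1), ("James", "Michael", 123, 5),
    ("James", "Michael", 123, 7), ("Harold", "Reynolds", 345, 1),
])

# number rows 1..n inside each C-group, highest F first,
# then keep only the first row of every group
result = conn.execute("""
    WITH cte AS (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY c ORDER BY f DESC) AS rn
        FROM t)
    SELECT a, b, c, f FROM cte WHERE rn = 1
    ORDER BY c
""").fetchall()
print(result)
# [('James', 'Michael', 123, 7), ('Harold', 'Reynolds', 345, 1)]
```

Each group contributes exactly one row — the one with the maximum F — because only the row numbered 1 inside its partition survives the `rn = 1` filter.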
We can use row\_number and partition the result and get only 1 row from the partition ``` Select * from ( select *, row_number() over ( partition by A, B, C,D ,E order by F desc ) as seq from tableA) T Where T.seq =1 ```
How do I select distinct data in a table based on a max value of one column?
[ "", "sql", "postgresql", "" ]
I need to select rows from the BUNDLES table which have one of several SAP_STATE_ID values. Those values depend on whether the respective SAP status is supposed to be exported or not.

This query runs really fast (there is an index on the SAP_STATE_ID field) -

```
SELECT b.*
FROM BUNDLES b
WHERE b.SAP_STATE_ID IN (2,3,5,6)
```

But... I'd like to fetch the list of IDs dynamically, like this:

```
SELECT b.*
FROM BUNDLES b
WHERE b.SAP_STATE_ID IN (SELECT s.SAP_STATE_ID FROM SAP_STATES s WHERE s.EXPORT_TO_SAP = 1)
```

And ouch, this query is suddenly taking too much time. I would expect SQL Server to run the subquery first (it doesn't depend on anything from the main query) and then run the whole thing just like in my first example.

I tried to rewrite it to use joins instead of a subquery:

```
SELECT b.*
FROM BUNDLES b
JOIN SAP_STATES s ON (s.SAP_STATE_ID = b.SAP_STATE_ID)
WHERE s.EXPORT_TO_SAP = 1
```

but it has the same poor performance. It seems like it is running the subquery for each row of the BUNDLES table or something like that.

I am not very skilled in reading execution plans, but I tried. It says that 81% of the cost is for scanning the primary key index of BUNDLES (I have no idea why it should do such a thing; there is a BUNDLE_ID field defined as PRIMARY KEY, but it doesn't appear in the query at all...)

Does anyone have an explanation why SQL Server is so "stupid"? Is there a way to achieve what I want with good performance but without the need to provide a static list of SAP_STATE_IDs?

script for both tables and relevant indexes - <http://mab.to/xbYiI0wKj>

execution plan for subquery version - <http://mab.to/8Qh6gpdYZ>

query plan for version with joins - <http://mab.to/YCqeGCUbr>

(for some reason these two plans look the same, and both suggest creating a BUNDLES.SAP_STATE_ID index, which is already there)
I am pretty sure your statistics are off on the tables. If you want to get it working in a hurry I would write the query as: ``` SELECT b.* FROM SAP_STATES s INNER LOOP JOIN BUNDLES b ON s.SAP_STATE_ID = b.SAP_STATE_ID WHERE s.EXPORT_TO_SAP = 1 ``` This forces a nested loops join over `SAP_STATES` which filters on `BUNDLES`
When you use tables (temporary or physical), the SQL engine builds statistics against them and thus has a very clear idea of the number of rows and the best execution approach. On the other hand, a computed table (subquery) doesn't have statistics against it. So while it might seem simple for a human to deduce the number of rows in it, the "stupid" SQL engine is unaware of all this.

Now, coming to the query, the `WHERE s.EXPORT_TO_SAP = 1` clause is making a world of difference here. The clustered index is sorted and built on SAP\_STATE\_ID, but to additionally check the WHERE clause, it has no option but to scan the entire table (in the final dataset)! I bet that if, instead of a clustered index, there was a non-clustered index on the SAP\_STATE\_ID column which covered the EXPORT\_TO\_SAP field, it might have done the trick.

Since clustered index scans are generally bad for performance, I would suggest you take the approach below:

```
SELECT s.SAP_STATE_ID
INTO #Sap_State
FROM SAP_STATES s
WHERE s.EXPORT_TO_SAP = 1

SELECT b.*
FROM BUNDLES b
JOIN #Sap_State a ON a.sap_state_id = b.sap_state_id
```
Why is subquery and join so slow
[ "", "sql", "sql-server", "join", "" ]
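A quick way to sanity-check the equivalence of the three query shapes discussed in the entry above (static IN-list, IN-subquery, join) is to run them against a toy copy of the schema before tuning the real one. A minimal sketch using Python's `sqlite3` as a stand-in engine; the table contents and IDs here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SAP_STATES (SAP_STATE_ID INTEGER PRIMARY KEY, EXPORT_TO_SAP INTEGER);
CREATE TABLE BUNDLES (BUNDLE_ID INTEGER PRIMARY KEY, SAP_STATE_ID INTEGER);
INSERT INTO SAP_STATES VALUES (1,0),(2,1),(3,1),(4,0),(5,1),(6,1);
INSERT INTO BUNDLES VALUES (10,1),(11,2),(12,3),(13,5),(14,6),(15,4);
""")

# Static IN-list, dynamic IN-subquery, and join: all should return the same bundles.
static_ids = [r[0] for r in conn.execute(
    "SELECT BUNDLE_ID FROM BUNDLES WHERE SAP_STATE_ID IN (2,3,5,6) ORDER BY BUNDLE_ID")]
subq_ids = [r[0] for r in conn.execute("""
    SELECT b.BUNDLE_ID FROM BUNDLES b
    WHERE b.SAP_STATE_ID IN (SELECT s.SAP_STATE_ID FROM SAP_STATES s
                             WHERE s.EXPORT_TO_SAP = 1)
    ORDER BY b.BUNDLE_ID""")]
join_ids = [r[0] for r in conn.execute("""
    SELECT b.BUNDLE_ID FROM BUNDLES b
    JOIN SAP_STATES s ON s.SAP_STATE_ID = b.SAP_STATE_ID
    WHERE s.EXPORT_TO_SAP = 1
    ORDER BY b.BUNDLE_ID""")]

print(static_ids, subq_ids, join_ids)
```

The performance difference in the question is a SQL Server planner/statistics issue, not a correctness one; this sketch only demonstrates that the three forms are logically interchangeable.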
So I'm stumped at the approach I need to take. I have a query here:

```
select type_type, RCD_REASON, rc.description, MATERIAL_ID, USERID, DATETIME_SCRAPPED,
       ORDER_NUMBER, TRIMDATE,
       SUBSTR(SALES_ORDER,3,8) || '-' || SUBSTR(SOL_LINE_NUMBER,3,4) AS SalesOrder,
       QTY, COMPONENT_MATERIAL_ID
from WORKCELL.history hh
inner join WORKCELL.reason_codes rc on hh.rcd_reason = rc.reason
where rc.type_type in ('FAB','TRM')
  AND TO_CHAR(DATETIME_SCRAPPED,'YYYYMMDD') between to_char(:FromDate,'YYYYMMDD') AND to_char(:ToDate,'YYYYMMDD')
order by 1,2,9
```

That's pulling records based off two date parameters in SSRS. However, the problem is that I do not want duplicate records for SALES\_ORDER or ORDER\_NUMBER. I realize I can't use SELECT DISTINCT for my current query because technically the rows are all distinct based off DATETIME\_SCRAPPED, since it also includes the time. What would be the best way to organize this query to get all the columns I specified, with no duplicates based off those two columns?
This is usually solved using window functions (you didn't specify your DBMS, but this is ANSI-standard SQL supported by a wide range of DBMSs).

```
select *
from (
  select type_type, RCD_REASON, rc.description, MATERIAL_ID, USERID, DATETIME_SCRAPPED,
         ORDER_NUMBER, TRIMDATE,
         SUBSTR(SALES_ORDER,3,8) || '-' || SUBSTR(SOL_LINE_NUMBER,3,4) AS SalesOrder,
         QTY, COMPONENT_MATERIAL_ID,
         row_number() over (partition by material_id, order_number
                            order by DATETIME_SCRAPPED desc) as rn
  from WORKCELL.history hh
  inner join WORKCELL.reason_codes rc on hh.rcd_reason = rc.reason
  where rc.type_type in ('FAB','TRM')
    AND TO_CHAR(DATETIME_SCRAPPED,'YYYYMMDD') between to_char(:FromDate,'YYYYMMDD') AND to_char(:ToDate,'YYYYMMDD')
) t
where rn = 1
order by 1,2,9;
```

Through the `order by` in the window definition you can control which row is returned if there are multiple rows with the same material\_id and order\_number. Using `order by DATETIME_SCRAPPED desc` picks the "latest" row based on `DATETIME_SCRAPPED`.
Would it be something like?

```
select type_type, RCD_REASON, rc.description, MATERIAL_ID, USERID,
       MAX(DATETIME_SCRAPPED) as DATETIME_SCRAPPED,
       ORDER_NUMBER, TRIMDATE,
       SUBSTR(SALES_ORDER,3,8) || '-' || SUBSTR(SOL_LINE_NUMBER,3,4) AS SalesOrder,
       QTY, COMPONENT_MATERIAL_ID
from WORKCELL.history hh
inner join WORKCELL.reason_codes rc on hh.rcd_reason = rc.reason
where rc.type_type in ('FAB','TRM')
  AND TO_CHAR(DATETIME_SCRAPPED,'YYYYMMDD') between to_char(:FromDate,'YYYYMMDD') AND to_char(:ToDate,'YYYYMMDD')
Group By type_type, RCD_REASON, rc.description, MATERIAL_ID, USERID,
         ORDER_NUMBER, TRIMDATE, SalesOrder, QTY, COMPONENT_MATERIAL_ID
order by 1,2,9
```

The results I get in SSRS look like this; below this table is how I would like it to look.

```
QTY  Fabric/Pc       Cut For  When                 Prd Ord       Sched     Sales Ord      Name
1    906700021-5F72  T500173  09/30/2014 07:36:37  001038881084  20140929  05594568-4170  rg
2    906700021-5F72  T500173  09/30/2014 06:22:12  001038881084  20140929  05594568-4170  rg
2    906700021-5F72  T500175  09/30/2014 02:04:07  001038881052  20140929  05594568-4210  rg
1    906700021-5F72  T500175  09/30/2014 10:45:42  001038881052  20140929  05594568-4210  rg
1    906700021-5F72  T500176  09/30/2014 07:13:45  001038881057  20140929  05594568-4240  rg
```

I would like it to only return this:

```
QTY  Fabric/Pc       Cut For  When                 Prd Ord       Sched     Sales Ord      Name
1    906700021-5F72  T500173  09/30/2014 07:36:37  001038881084  20140929  05594568-4170  rg
2    906700021-5F72  T500175  09/30/2014 02:04:07  001038881052  20140929  05594568-4210  rg
1    906700021-5F72  T500176  09/30/2014 07:13:45  001038881057  20140929  05594568-4240  rg
```

In the second data set, rows that have the same Prd Ord and Sales Ord have been removed and the most recent time is used for the duplicate row.
Select Distinct or Group By for all but one column
[ "", "sql", "performance", "group-by", "duplicates", "distinct", "" ]
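The `row_number() over (partition by ... order by ... desc)` dedup pattern from the accepted answer above can be exercised end to end on a tiny stand-in table. A sketch using SQLite (window functions need SQLite 3.25+; the order numbers and timestamps are invented sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scrap_history (order_number TEXT, scrapped TEXT, qty INTEGER);
INSERT INTO scrap_history VALUES
  ('T500173', '2014-09-30 06:22:12', 2),
  ('T500173', '2014-09-30 07:36:37', 1),
  ('T500175', '2014-09-30 02:04:07', 2),
  ('T500175', '2014-09-30 10:45:42', 1);
""")

# Keep exactly one row per order_number: the one with the latest timestamp.
rows = conn.execute("""
    SELECT order_number, scrapped, qty
    FROM (SELECT h.*,
                 ROW_NUMBER() OVER (PARTITION BY order_number
                                    ORDER BY scrapped DESC) AS rn
          FROM scrap_history h)
    WHERE rn = 1
    ORDER BY order_number""").fetchall()
print(rows)
```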
I have two tables with similar fields, but the query fails when I use a `SELECT` subquery in a `WHERE` clause.

```
SELECT foo FROM bar WHERE fizz IN (SELECT fizz FROM fuzz)
```

I deleted the `error.log` from AWS, but the error was something to the effect that Hive did not recognize the `SELECT`. How do I need to restructure this query? Thanks.
From the Subqueries in the WHERE Clause section of the [HIVE Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SubQueries):

```
SELECT b.foo FROM bar b WHERE b.fizz IN (SELECT f.fizz FROM fuzz f)
```
Hive has problems with a subquery in the WHERE clause; use a JOIN instead:

```
SELECT foo
FROM bar
JOIN fuzz ON bar.fizz = fuzz.fizz
```
HIVE SQL Subquery in WHERE Clause
[ "", "sql", "hive", "" ]
I have a function in a VB.NET class library which inserts XML text into a VARCHAR(MAX) column. The column ends up with an extra "?" at the front of the data, and I do not want that character in my data. The column data starts like:

```
?<?xml version="1.0" encoding="utf-8"?><Registration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"....
```

The insert statement is:

```
INSERT INTO Table (Data) OUTPUT Inserted.ID VALUES (@Data)
```

The table has two columns, Data and ID. Am I doing something wrong? The XML is created by the .NET XmlSerializer. Thanks.
First, all XML in SQL Server is stored in Unicode (UCS-2, to be precise), and data access libraries probably know that. So storing their output in a `varchar` column isn't the best idea; you might run into various issues with implicit conversion, and such. Try switching the column data type to `nvarchar` and see whether it helps.

Second, it might be the byte order mark (BOM) bytes that are usually found at the start of disk files stored in UTF-8. Since SQL Server doesn't support this encoding, these bytes might have been converted (again, implicitly) into something unreadable. Try something like this query:

```
select cast(substring(XMLField, 1, 10) as varbinary)
from dbo.MyTable;
```

It will show you the byte values of those leading characters, at least.

My best guess, however, would be to get rid of UTF-8 completely; the only way to store such data in SQL Server is via `varbinary` columns, but I doubt you will like the resulting overhead. Try switching to UTF-16, which is backward compatible with UCS-2 (unless you deal in something truly exotique).
`varchar` can only hold characters from a single, non-Unicode code page. My guess would be that you have some Unicode character at the beginning of that string. Switch to `nvarchar`; you won't get rid of that initial character, but you won't lose it either.
XML text in Varchar(max) mysterious question mark
[ "", "sql", ".net", "sql-server", "xml", "vb.net", "" ]
I have a requirement in which I need to group the output result based on a running count. Here is the result set which I get from my SQL:

```
ID  Date        Count
1   10/01/2013  50
1   10/02/2013  25
1   10/03/2013  100
1   10/04/2013  200
1   10/05/2013  175
1   10/06/2013  45
2   10/01/2013  85
2   10/02/2013  100
```

Can I have them as:

```
id  date        Count
1   10/03/2013  175
1   10/04/2013  200
1   10/05/2013  175
1   10/06/2013  45
2   10/02/2013  185
```

I need to reduce the result set by grouping rows whose combined count stays <= 200, per ID. For example, the combined sum of 10/01, 10/02 and 10/03 comes to 175, so I need to group them into one single row; whereas adding the values for 10/05 and 10/06 would be > 200, so they are left ungrouped. Is it possible in Oracle 11g to solve this using PL/SQL or SQL analytic functions?

**New result set requested**

Is there a way to return the result with an additional StartD column? For each row, StartD should take the previous row's EndDate:

```
ID  StartD      EndDate     Count
1   10/01/2013  10/03/2013  175
1   10/03/2013  10/04/2013  200
1   10/04/2013  10/05/2013  250
1   10/05/2013  10/06/2013  190
1   10/06/2013  10/08/2013  45
2   10/01/2013  10/01/2013  185
```
For such a task, you could use a [pipelined table function](http://www.oracle-base.com/articles/misc/pipelined-table-functions.php) to generate the required result. There is a little "plumbing", as it requires defining some additional types, but the function itself is a simple loop over a cursor, accumulating values and generating a row either on a change of `id`, or when the accumulated total exceeds the limit.

You could implement that many ways. Here, using a plain old loop instead of a *for in cursor*, I obtain something *not that inelegant*:

```
CREATE OR REPLACE TYPE stuff_row AS OBJECT (
    id int,
    stamp date,
    last_stamp date,
    num int
);

CREATE OR REPLACE TYPE stuff_tbl AS TABLE OF stuff_row;
```

```
CREATE OR REPLACE FUNCTION partition_by_200
RETURN stuff_tbl PIPELINED
AS
    CURSOR data IS SELECT id, stamp, num
                   FROM stuff
                   ORDER BY id, stamp;

    curr data%ROWTYPE;
    acc  stuff_row := stuff_row(NULL, NULL, NULL, NULL);
BEGIN
    OPEN data;
    FETCH data INTO acc.id, acc.stamp, acc.num;
    acc.last_stamp := acc.stamp;

    IF data%FOUND THEN
        LOOP
            FETCH data INTO curr;

            IF data%NOTFOUND OR curr.id <> acc.id OR acc.num + curr.num > 200 THEN
                PIPE ROW(stuff_row(acc.id, acc.stamp, acc.last_stamp, acc.num));
                EXIT WHEN data%NOTFOUND;

                -- reset the accumulator
                acc := stuff_row(curr.id, curr.stamp, curr.stamp, curr.num);
            ELSE
                -- accumulate value
                acc.num := acc.num + curr.num;
                acc.last_stamp := curr.stamp;
            END IF;
        END LOOP;
    END IF;
    CLOSE data;
END;
```

Usage:

```
SELECT * FROM TABLE(partition_by_200());
```

Using the same test data as Mat in his [own answer](https://stackoverflow.com/a/26201065/2363712), this produces:

```
ID  STAMP       LAST_STAMP  NUM
1   10/01/2013  10/03/2013  175
1   10/04/2013  10/04/2013  200
1   10/05/2013  10/05/2013  250
1   10/06/2013  10/07/2013  190
1   10/08/2013  10/08/2013  45
2   10/01/2013  10/02/2013  185
```
You can do this in Oracle 12c with a [`MATCH_RECOGNIZE`](http://docs.oracle.com/database/121/DWHSG/pattern.htm#DWHSG8956) pattern-matching technique.

Setup (added a few rows, including some with a count above 200, for testing):

```
create table stuff (id int, stamp date, num int);
insert into stuff values (1, to_date('10/01/2013', 'MM/DD/RRRR'), 50);
insert into stuff values (1, to_date('10/02/2013', 'MM/DD/RRRR'), 25);
insert into stuff values (1, to_date('10/03/2013', 'MM/DD/RRRR'), 100);
insert into stuff values (1, to_date('10/04/2013', 'MM/DD/RRRR'), 200);
insert into stuff values (1, to_date('10/05/2013', 'MM/DD/RRRR'), 250);
insert into stuff values (1, to_date('10/06/2013', 'MM/DD/RRRR'), 175);
insert into stuff values (1, to_date('10/07/2013', 'MM/DD/RRRR'), 15);
insert into stuff values (1, to_date('10/08/2013', 'MM/DD/RRRR'), 45);
insert into stuff values (2, to_date('10/01/2013', 'MM/DD/RRRR'), 85);
insert into stuff values (2, to_date('10/02/2013', 'MM/DD/RRRR'), 100);
commit;
```

The query would be:

```
select id, first_stamp, last_stamp, partial_sum
from stuff
match_recognize (
  partition by id
  order by stamp
  measures first(a.stamp) as first_stamp
         , last(a.stamp)  as last_stamp
         , sum(a.num)     as partial_sum
  pattern (A+)
  define A as (sum(a.num) <= 200 or (count(*) = 1 and a.num > 200))
);
```

Which gives:

```
        ID FIRST_STAMP LAST_STAMP PARTIAL_SUM
---------- ----------- ---------- -----------
         1 01-OCT-13   03-OCT-13          175
         1 04-OCT-13   04-OCT-13          200
         1 05-OCT-13   05-OCT-13          250
         1 06-OCT-13   07-OCT-13          190
         1 08-OCT-13   08-OCT-13           45
         2 01-OCT-13   02-OCT-13          185

6 rows selected
```

How this works:

* The pattern matching is done over the whole table, partitioned by `id` and ordered by timestamp.
* The pattern `A+` says we want groups of consecutive (according to the partition and order-by clauses) rows that satisfy condition `A`.
* The condition `A` is that the set satisfies:
  + the sum of num in the set is 200 or less,
  + or the set has a single row with num greater than 200 (otherwise such rows would never match and wouldn't be output).
* The `measures` clause indicates what the match returns (on top of the partition key):
  + the first and last timestamps from each group,
  + the sum of num for each group.

---

Here's an approach with a table-valued function that should work in 11g (and 10g, I think). Rather inelegant, but it does the job: it traverses the table in order, outputting groups whenever they're "full". You could add a parameter for the group size too.

```
create or replace type my_row is object (id int, stamp date, num int);
create or replace type my_tab as table of my_row;

create or replace function custom_stuff_groups return my_tab pipelined as
  cur_sum number;
  cur_id number;
  cur_dt date;
begin
  cur_sum := null;
  cur_id := null;
  cur_dt := null;
  for x in (select id, stamp, num from stuff order by id, stamp) loop
    if (cur_sum is null) then
      -- very first row
      cur_id := x.id;
      cur_sum := x.num;
    elsif (cur_id != x.id) then
      -- changed ID, so output last line for previous id and reset
      pipe row(my_row(cur_id, cur_dt, cur_sum));
      cur_id := x.id;
      cur_sum := x.num;
    elsif (cur_sum + x.num > 200) then
      -- same id, sum overflows.
      pipe row(my_row(cur_id, cur_dt, cur_sum));
      cur_sum := x.num;
    else
      -- same id, sum still below 200
      cur_sum := cur_sum + x.num;
    end if;
    cur_dt := x.stamp;
  end loop;
  if (cur_sum is not null) then
    -- output the last line, if any
    pipe row(my_row(cur_id, cur_dt, cur_sum));
  end if;
end;
```

Use as:

```
select * from table(custom_stuff_groups());
```
Group the Result based on RowCount in Oracle
[ "", "sql", "oracle", "plsql", "" ]
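The accumulate-until-the-limit-overflows logic that both answers above implement (in PL/SQL and in `MATCH_RECOGNIZE`) can be sketched as an ordinary generator, which is handy for verifying the expected groups before writing the database version. The function name and tuple layout below are illustrative choices, and the data is the same extended test set used in the answers:

```python
def partition_by_limit(rows, limit=200):
    """rows: iterable of (id, stamp, num), pre-sorted by (id, stamp).
    Yields (id, start_stamp, end_stamp, total) groups whose total stays
    <= limit; a single row above the limit forms its own group."""
    acc = None
    for rid, stamp, num in rows:
        if acc is None:
            acc = [rid, stamp, stamp, num]          # first row overall
        elif rid != acc[0] or acc[3] + num > limit:
            yield tuple(acc)                        # flush and reset
            acc = [rid, stamp, stamp, num]
        else:
            acc[2] = stamp                          # extend the group
            acc[3] += num
    if acc is not None:
        yield tuple(acc)

data = [
    (1, '10/01', 50), (1, '10/02', 25), (1, '10/03', 100),
    (1, '10/04', 200), (1, '10/05', 250), (1, '10/06', 175),
    (1, '10/07', 15), (1, '10/08', 45),
    (2, '10/01', 85), (2, '10/02', 100),
]
groups = list(partition_by_limit(data))
print(groups)
```

This reproduces the six groups shown in both answers for the extended test data.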
I would like to determine the ownership of a folder. Every folder follows a constant naming convention, which is stored in the table OWNER (ident\_string). Using the ident\_string, I want to determine the owner\_id and write it (via an update) into the table FOLDER (owner\_id).

I have the following tables in PostgreSQL:

```
create table owner(
  owner_id serial PRIMARY KEY,
  owner_name varchar(100),
  ident_string varchar(100));

create table folder(
  folder_id serial PRIMARY KEY,
  folder_name varchar(80),
  folder_path varchar(800),
  owner_id integer references owner(owner_id));

insert into owner (owner_name, ident_string) values ('Jonny English','b-jonny');
insert into owner (owner_name, ident_string) values ('Hanna Babara','b-hanna');
insert into owner (owner_name, ident_string) values ('Mary Marmelade','b-mary');

insert into folder (folder_name,folder_path) values ('b-jonny-20130101','/archive/backup/b-jonny-20130101');
insert into folder (folder_name,folder_path) values ('b-jonny-20130103','/archive/backup/b-jonny-20130103');
insert into folder (folder_name,folder_path) values ('b-hanna-20140101','/archive/backup/b-jonny-20140101');
insert into folder (folder_name,folder_path) values ('b-mary-20120303','/archive/backup/b-mary-20120303');
```

I think the only possibility to do so is via PL/pgSQL:

* iterate over every folder\_name in FOLDER,
* check every ident\_string in OWNER to look up the owner\_id.

Could somebody help me out?
Try this:

```
update folder as f
set owner_id = o.owner_id
from owner as o
where o.ident_string = left(f.folder_name, length(o.ident_string));
```

-g
Maybe something similar to the following?

```
update folder
set folder.owner_id = owner.owner_id
from folder
join owner on folder_name like owner.ident_string + '%'
```

(`LIKE` is SQL's simple pattern-matching operator.)
SQL - Dynamic Lookup with Regex
[ "", "sql", "postgresql", "plpgsql", "" ]
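The prefix-match update from the accepted answer above uses PostgreSQL's `UPDATE ... FROM` syntax; the same idea can be expressed as a correlated subquery, which also runs on SQLite. A sketch against a trimmed copy of the question's schema (the `folder_path` column is dropped here for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE owner (owner_id INTEGER PRIMARY KEY, owner_name TEXT, ident_string TEXT);
CREATE TABLE folder (folder_id INTEGER PRIMARY KEY, folder_name TEXT, owner_id INTEGER);
INSERT INTO owner (owner_name, ident_string) VALUES
  ('Jonny English','b-jonny'), ('Hanna Babara','b-hanna'), ('Mary Marmelade','b-mary');
INSERT INTO folder (folder_name) VALUES
  ('b-jonny-20130101'), ('b-jonny-20130103'), ('b-hanna-20140101'), ('b-mary-20120303');
""")

# Assign each folder the owner whose ident_string is a prefix of its name.
conn.execute("""
    UPDATE folder SET owner_id = (
      SELECT o.owner_id FROM owner o
      WHERE substr(folder.folder_name, 1, length(o.ident_string)) = o.ident_string)""")

assigned = conn.execute("""
    SELECT f.folder_name, o.owner_name
    FROM folder f JOIN owner o ON o.owner_id = f.owner_id
    ORDER BY f.folder_id""").fetchall()
print(assigned)
```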
If I have three tables such as `dog`, `cat`, and `animal`, where a property of `animal` is `name`, how can I join `dog` and `cat` and return both names? The structure would look something like this:

```
---Dog---
AnimalID char(9)
ChasesID char(9)

---Cat---
AnimalID char(9)

--Animal--
AnimalID char(9)
Name char(20)
```

I want to join `Dog` and `Cat` on the ChasesID (which references AnimalID in the Cat table) and return the names of both animals from `Animal`.
I'm not sure what you are trying to do this way, but if you only want to get the names of the animals, you can try something like this:

```
SELECT Animal.Name AS Name
FROM (SELECT AnimalID FROM Cat
      UNION
      SELECT AnimalID FROM Dog) AS AnimalsUnion
LEFT JOIN Animal ON Animal.AnimalID = AnimalsUnion.AnimalID
```

It should do the trick.
You need two joins for this:

```
select ad.name as dogname, ac.name as chases
from dog d
join animal ad on d.animalid = ad.animalid
join animal ac on d.chasesid = ac.animalid;
```

From what I can tell, the `cat` table is superfluous. It doesn't have a separate id, so you don't need it. You can get the name directly from `animal`.
SQL Joins and Renaming columns
[ "", "mysql", "sql", "join", "inner-join", "renaming", "" ]
I'm building a query to display the names of athletes who have participated in more than one event. For this I have to use two tables, as shown:

```
CREATE TABLE ATHLETE(
  ATHLETEID CHAR(4),
  ATHLETEFIRSTNAME VARCHAR2(20),
  ATHLETELASTNAME VARCHAR2(20),
  ATHLETEDOB DATE,
  REPCOUNTRY VARCHAR2(12),
  COACHID CHAR(4),
  CONSTRAINT ATHLETE_PK PRIMARY KEY(ATHLETEID),
  CONSTRAINT ATHLETE_FK1 FOREIGN KEY(COACHID) REFERENCES COACH(COACHID));

CREATE TABLE RESULTS(
  EVENTID CHAR(4),
  ATHLETEID CHAR(4),
  RANK NUMBER(1),
  CONSTRAINT RESULTS_PK PRIMARY KEY(EVENTID,ATHLETEID),
  CONSTRAINT RESULTS_FK1 FOREIGN KEY(EVENTID) REFERENCES EVENTSCHEDULE(EVENTID),
  CONSTRAINT RESULTS_FK2 FOREIGN KEY(ATHLETEID) REFERENCES ATHLETE(ATHLETEID));
```

Using the query below I am able to show the ATHLETEIDs that have participated in more than one event; what I'm struggling with is displaying the name of each athlete as well, because it's in a different table. I'm pretty sure I'm supposed to use a subquery, but I'm not sure how to build it.

```
SELECT A.ATHLETEID
FROM RESULTS A
GROUP BY A.ATHLETEID
HAVING COUNT(*) > 1;
```

Thanks in advance!
ShoutCase SQL aside, you'll need to join back to ATHLETE and then group by all non-aggregated columns, like so (note I've switched your aliasing to align with the table names):

```
SELECT a.ATHLETEID, a.ATHLETEFIRSTNAME, a.ATHLETELASTNAME,
       COUNT(r.EVENTID) as NumEvents
FROM RESULTS r
INNER JOIN ATHLETE a ON r.ATHLETEID = a.ATHLETEID
GROUP BY a.ATHLETEID, a.ATHLETEFIRSTNAME, a.ATHLETELASTNAME
HAVING COUNT(r.EVENTID) > 1;
```
Simply use a join with COUNT (**no subquery needed**):

```
SELECT A.ATHLETEID, ATH.ATHLETEFIRSTNAME
FROM RESULTS A
JOIN ATHLETE ATH ON ATH.ATHLETEID = A.ATHLETEID
GROUP BY A.ATHLETEID, ATH.ATHLETEFIRSTNAME
HAVING COUNT(*) > 1;
```
SQL query that displays the names of Athletes who have participated in more than one event
[ "", "mysql", "sql", "sql-server", "" ]
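The join-plus-`GROUP BY ... HAVING COUNT(*) > 1` shape from the accepted answer above is easy to exercise on a miniature version of the schema. A sketch using SQLite with invented athletes and events (the IDs and names are placeholders, not data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE athlete (athleteid TEXT PRIMARY KEY, firstname TEXT, lastname TEXT);
CREATE TABLE results (eventid TEXT, athleteid TEXT, PRIMARY KEY (eventid, athleteid));
INSERT INTO athlete VALUES ('A001','Ann','Ash'), ('A002','Ben','Bay'), ('A003','Cid','Cole');
INSERT INTO results VALUES ('E1','A001'),('E2','A001'),
                           ('E1','A002'),('E2','A002'),('E3','A002'),
                           ('E1','A003');
""")

# Athletes appearing in more than one event, with their names and event counts.
multi = conn.execute("""
    SELECT a.athleteid, a.firstname, a.lastname, COUNT(*) AS events
    FROM results r
    JOIN athlete a ON a.athleteid = r.athleteid
    GROUP BY a.athleteid, a.firstname, a.lastname
    HAVING COUNT(*) > 1
    ORDER BY a.athleteid""").fetchall()
print(multi)
```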
I have two tables. I want to select a record from the first table only if the condition is true in the second table (active = 0 for every matching row).

table Lead:

```
-------------
| id | name |
-------------
| 1  | abc1 |
| 2  | abc2 |
| 3  | abc3 |
| 4  | abc4 |
| 5  | abc5 |
-------------
```

table LeadsDetails:

```
-------------------------
| id | lead_id | active |
-------------------------
| 1  | 1       | 1      |
| 2  | 1       | 0      |
| 3  | 2       | 0      |
| 4  | 3       | 1      |
| 5  | 4       | 0      |
| 6  | 5       | 0      |
| 7  | 5       | 0      |
-------------------------
```

expected output:

```
--------------
| id | name |
--------------
| 2  | abc2 |
| 4  | abc4 |
| 5  | abc5 |
--------------
```

This is the query I have tried:

```
SELECT `Lead`.`id`, `Lead`.`name`, `Lead`.`unsubscribe`
FROM `leads` AS `Lead`
inner JOIN `LeadsDetails` AS `LeadsDetails`
  ON (`LeadsDetails`.`lead_id` = `Lead`.`id`)
WHERE `LeadsDetails`.`active` = 0
```
This should run faster than `NOT EXISTS` because the subquery won't run for every row. In this case I'm counting the situations where the active field value in table leadsdetails is not 0 for the given id, and showing only rows where that count is 0 (i.e. for the given id the active field is ALWAYS 0).

```
select l.id, l.name
from lead l
join leadsdetails ld on l.id = ld.lead_id
group by l.id, l.name
having sum(case when ld.active <> 0 then 1 else 0 end) = 0
```

**Fiddle:** <http://www.sqlfiddle.com/#!2/00970/2/0>
As you need to get the records only when the active column never has a 1, use `NOT EXISTS`.

SQL Fiddle demo: <http://www.sqlfiddle.com/#!2/00970/1>

```
SELECT *
FROM Lead L
WHERE NOT EXISTS
(
  SELECT 1
  FROM LeadsDetails LD
  WHERE L.id = LD.lead_id
    AND LD.active = 1
)
```
Select 1 record from first table if condition is true in second table (all reference rows active = 0)
[ "", "mysql", "sql", "join", "phpmyadmin", "inner-join", "" ]
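The conditional-aggregation answer above can be checked against the question's own sample data. A sketch using SQLite; the table is named `leads` here (rather than `lead`) purely to sidestep any keyword ambiguity, which is a choice of this sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE leads (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE leadsdetails (id INTEGER PRIMARY KEY, lead_id INTEGER, active INTEGER);
INSERT INTO leads VALUES (1,'abc1'),(2,'abc2'),(3,'abc3'),(4,'abc4'),(5,'abc5');
INSERT INTO leadsdetails VALUES
  (1,1,1),(2,1,0),(3,2,0),(4,3,1),(5,4,0),(6,5,0),(7,5,0);
""")

# Keep only leads whose detail rows are ALL active = 0.
rows = conn.execute("""
    SELECT l.id, l.name
    FROM leads l
    JOIN leadsdetails ld ON ld.lead_id = l.id
    GROUP BY l.id, l.name
    HAVING SUM(CASE WHEN ld.active <> 0 THEN 1 ELSE 0 END) = 0
    ORDER BY l.id""").fetchall()
print(rows)
```

This reproduces the expected output from the question: leads 2, 4 and 5.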
I have this query:

```
SELECT `shift`.`uid`, `shift`.`activity`, `users`.`fname`, `users`.`lname`
FROM `shift`, `users`
WHERE `shift`.`uid` = `users`.`id`
```

It works fine just like that, but I need to add a new column from another table and order by it.

`times`:

```
| uid | User | time |
+++++++++++++++++++++
|  3  | bob  | 1231 |
|  3  | bob  | 1291 |
|  4  | ned  | 1651 |
|  5  | ted  | 5679 |
|  6  | joe  | 7665 |
|  6  | joe  | 7864 |
```

How can I include the maximum time from the times table for each user (WHERE `times`.`uid` = `shift`.`uid`) and then order by that column? The trouble is, the other tables have one row per user, but the times table has multiple, and I can't figure out the correct combination of joins and GROUP BY.
You could join on an aggregate query:

```
SELECT `shift`.`uid`, `shift`.`activity`, `users`.`fname`, `users`.`lname`, t.max_time
FROM `shift`
JOIN `users` ON `shift`.`uid` = `users`.`id`
JOIN (SELECT `uid`, MAX(`time`) AS max_time
      FROM `times`
      GROUP BY `uid`) t ON shift.uid = t.uid
ORDER BY t.max_time
```
```
SELECT s.uid, s.activity, u.fname, u.lname, MAX(t.time) as maxtime
FROM shift s
INNER JOIN users u ON u.id = s.uid
INNER JOIN times t ON t.uid = u.id
GROUP BY s.uid, s.activity, u.fname, u.lname
ORDER BY maxtime
```
Joining Max Row where Col
[ "", "mysql", "sql", "aggregate-functions", "" ]
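The join-on-a-derived-aggregate pattern from the accepted answer above works on the question's sample data. A sketch using SQLite, with the activity values invented since the question doesn't show them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, fname TEXT);
CREATE TABLE shift (uid INTEGER, activity TEXT);
CREATE TABLE times (uid INTEGER, time INTEGER);
INSERT INTO users VALUES (3,'bob'),(4,'ned'),(6,'joe');
INSERT INTO shift VALUES (3,'a'),(4,'b'),(6,'c');
INSERT INTO times VALUES (3,1231),(3,1291),(4,1651),(6,7665),(6,7864);
""")

# Collapse times to one MAX per uid first, then join, then order by it.
rows = conn.execute("""
    SELECT s.uid, u.fname, t.max_time
    FROM shift s
    JOIN users u ON u.id = s.uid
    JOIN (SELECT uid, MAX(time) AS max_time
          FROM times GROUP BY uid) t ON t.uid = s.uid
    ORDER BY t.max_time""").fetchall()
print(rows)
```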
I have the following data set returned by a query (I am displaying some rows below, but the actual data returned is over 500k rows):

```
Date       | amount
01-01-2010 | 200
01-02-2010 | 50
01-03-2010 | 400
01-04-2010 | 50
01-05-2010 | 0
01-06-2010 | 0
01-07-2010 | 100
```

I would like the query to also return a **Remaining Amount** column, something like this:

```
Date       | amount | Remaining
01-01-2010 | 200    | 600
01-02-2010 | 50     | 550
01-03-2010 | 400    | 150
01-04-2010 | 50     | 100
01-05-2010 | 0      | 100
01-06-2010 | 0      | 100
01-07-2010 | 100    | 0
```

*Remaining Amount* starts from the grand total, which is the sum of all records' amount column.
You can use Oracle analytic functions:

```
SELECT DATE,
       AMOUNT,
       (SUM(AMOUNT) OVER ()) - (SUM(AMOUNT) OVER (ORDER BY DATE)) AS REMAINING
FROM TABLE
```
You can also use the [Windowing Clause](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions004.htm#SQLRF06174) to avoid the `SUM(AMOUNT) - (...)`:

```
SELECT dt,
       AMOUNT,
       nvl(sum(amount) over (order by dt desc
                             rows between unbounded preceding and 1 preceding), 0) as REMAINING
FROM yourtable
order by dt
```

Using the analytic functions should give better performance than a correlated subquery, because the table is scanned only once.
How to write oracle sql query for remaining totals
[ "", "sql", "oracle", "oracle11g", "" ]
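The "grand total minus running total" expression from the accepted answer above can be reproduced outside Oracle; window functions with the same semantics exist in SQLite 3.25+, which this sketch assumes. The dates are stored as ISO strings so the default `ORDER BY` works:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tx (dt TEXT PRIMARY KEY, amount INTEGER);
INSERT INTO tx VALUES
  ('2010-01-01',200),('2010-01-02',50),('2010-01-03',400),('2010-01-04',50),
  ('2010-01-05',0),('2010-01-06',0),('2010-01-07',100);
""")

# Remaining = total over the whole table minus the running total up to this row.
rows = conn.execute("""
    SELECT dt, amount,
           SUM(amount) OVER () - SUM(amount) OVER (ORDER BY dt) AS remaining
    FROM tx
    ORDER BY dt""").fetchall()
print(rows)
```

The remaining column matches the expected output in the question (600, 550, 150, 100, 100, 100, 0).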
I am trying to query two databases located on the same server. The first database query is a bit complex, with JOIN and UNION clauses, but joining to the other database is pretty straightforward. I keep getting an error on the last line of code, at `SITE_ID_NUMBER`. I have never attempted this before; any clues as to what I may be doing wrong?

Additional note: the JOIN between the databases is made up of one native column (i.e. "6041") and one calculated column; I had to use a `SUBSTRING` and add the 6 to get the matching value.

Update: after a few changes I am actually getting the error:

> Msg 8156, Level 16, State 1, Line 32
> The column 'RECORD_DATE_TIME' was specified multiple times for 'P'.

Thank you.

```
SELECT P.[6] + SUBSTRING(A.SHIP_TO,6,3) AS 'STORE',
       S.SITE_ID_NUMBER, S.DM, S.AREA, S.STORE_NAME, S.LOCATION,
       P.WHSE AS 'DIVISION_DESC', P.ORDER_STATUS, P.MATERIAL, P.DESCRIPTION,
       P.PO_NUMBER AS 'CUSTOMER_PO_NUMBER', P.ORDER_QUANTITY AS 'QTY',
       P.RSHIP_DATE AS 'REQUESTED_SHIP_DATE', S.TRANSIT_6040_6041
FROM (SELECT *
      FROM PDX_SAP_USER.dbo.VW_ADIDAS_RETAIL_aRI A
      JOIN PDX_SAP_USER.dbo.VW_WB_DELIVERIES D ON A.DELIVERY_NUMBER = D.DELIVERY_NUMBER
      UNION ALL
      SELECT *
      FROM PDX_SAP_USER.dbo.VW_ADIDAS_RETAIL_aRO A
      JOIN PDX_SAP_USER.dbo.VW_WB_DELIVERIES D ON A.DELIVERY_NUMBER = D.DELIVERY_NUMBER) P
JOIN [ADI_USER.MAINTAINED].dbo.SiteDataAdiRbk S ON S.SITE_ID_NUMBER = P.STORE;
```
> Msg 8156, Level 16, State 1, Line 32
> The column 'RECORD_DATE_TIME' was specified multiple times for 'P'.

RECORD\_DATE\_TIME is nowhere in the query you gave. Anyway, the error is clear. I'll show you a simple example which reproduces your error:

```
CREATE TABLE [dbo].[t1](
    [id] [int] NULL,
    [name] [nchar](10) NULL
) ON [PRIMARY]
```

Join the table to itself as shown to get this error:

```
select *
from (
    select *
    from t1
    inner join t1 as t2 on t1.id = t2.id
) as t

Msg 8156, Level 16, State 1, Line 6
The column 'id' was specified multiple times for 't'.
```
At the outermost level you have two table aliases accessible:

* P: the result set from the subquery with the UNION
* S: an alias for the table [ADI_USER.MAINTAINED].dbo.SiteDataAdiRbk

The very first item in your select list is *P.[6] + SUBSTRING(**A**.SHIP\_TO,6,3) AS 'STORE'*. You are referencing a table or alias named A, which is not accessible.
Joining multiple databases on the same SQL Server instance
[ "", "sql", "sql-server", "join", "" ]
I would like to apply a total $10.00 discount for each customer. The discount should be applied across multiple transactions until the whole $10.00 is used. Example:

```
CustomerID  Transaction Amount  Discount  TransactionID
1           $8.00               $8.00     1
1           $6.00               $2.00     2
1           $5.00               $0.00     3
1           $1.00               $0.00     4
2           $5.00               $5.00     5
2           $2.00               $2.00     6
2           $2.00               $2.00     7
3           $45.00              $10.00    8
3           $6.00               $0.00     9
```
The query below keeps track of the running sum and calculates the discount depending on whether the running sum is greater than or less than the discount amount.

```
select customerid, transaction_amount, transactionid,
       (case when 10 > (sum_amount - transaction_amount)
             then (case when transaction_amount >= 10 - (sum_amount - transaction_amount)
                        then 10 - (sum_amount - transaction_amount)
                        else transaction_amount
                   end)
             else 0
        end) discount
from (
  select customerid, transaction_amount, transactionid,
         sum(transaction_amount) over (partition by customerid
                                       order by transactionid) sum_amount
  from Table1
) t1
order by customerid, transactionid
```

<http://sqlfiddle.com/#!6/552c2/7>

The same query with a self join, which should work on most DBs, including MSSQL 2008:

```
select customerid, transaction_amount, transactionid,
       (case when 10 > (sum_amount - transaction_amount)
             then (case when transaction_amount >= 10 - (sum_amount - transaction_amount)
                        then 10 - (sum_amount - transaction_amount)
                        else transaction_amount
                   end)
             else 0
        end) discount
from (
  select t1.customerid, t1.transaction_amount, t1.transactionid,
         sum(t2.transaction_amount) sum_amount
  from Table1 t1
  join Table1 t2 on t1.customerid = t2.customerid
                and t1.transactionid >= t2.transactionid
  group by t1.customerid, t1.transaction_amount, t1.transactionid
) t1
order by customerid, transactionid
```

<http://sqlfiddle.com/#!3/552c2/2>
You can do this with recursive common table expressions, although it isn't particularly pretty; SQL Server struggles to optimize these types of query. See [Sum of minutes between multiple date ranges](https://stackoverflow.com/questions/13464281/sum-of-minutes-between-multiple-date-ranges) for some discussion. If you wanted to go further with this approach, you'd probably need to make a temporary table of `x`, so you can index it on `(customerid, rn)`.

```
;with x as (
  select tx.*,
         row_number() over (partition by customerid
                            order by transaction_amount desc, transactionid) rn
  from tx
), y as (
  select x.transactionid, x.customerid, x.transaction_amount,
         case when 10 >= x.transaction_amount then x.transaction_amount else 10 end as discount,
         case when 10 >= x.transaction_amount then 10 - x.transaction_amount else 0 end as remainder,
         x.rn as rn
  from x
  where rn = 1
  union all
  select x.transactionid, x.customerid, x.transaction_amount,
         case when y.remainder >= x.transaction_amount then x.transaction_amount else y.remainder end,
         case when y.remainder >= x.transaction_amount then y.remainder - x.transaction_amount else 0 end,
         x.rn
  from y
  inner join x on y.rn = x.rn - 1 and y.customerid = x.customerid
  where y.remainder > 0
)
update tx
set discount = y.discount
from tx
inner join y on tx.transactionid = y.transactionid;
```

[Example SQLFiddle](http://sqlfiddle.com/#!3/944e5/1)
SQL Deduct value from multiple rows
[ "", "sql", "sql-server", "sql-server-2008", "" ]
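The allocation rule behind both answers above ("spend the budget against each transaction in order until it runs out, per customer") is easy to state as a plain function, which also makes a handy reference implementation for testing the SQL. A sketch; the function name and `budget` parameter are illustrative choices:

```python
def allocate_discount(transactions, budget=10.0):
    """transactions: list of (customer_id, amount) in transaction order.
    Returns a discount per transaction, spending at most `budget`
    per customer across that customer's transactions."""
    remaining = {}
    discounts = []
    for cust, amount in transactions:
        left = remaining.setdefault(cust, budget)   # budget left for this customer
        d = min(left, amount)                       # never discount more than the amount
        remaining[cust] = left - d
        discounts.append(d)
    return discounts

txns = [(1, 8.0), (1, 6.0), (1, 5.0), (1, 1.0),
        (2, 5.0), (2, 2.0), (2, 2.0),
        (3, 45.0), (3, 6.0)]
print(allocate_discount(txns))
```

Run against the question's example, this yields the same discount column shown in the expected output.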
Hi, how can I get the percentage of each record over the total? Let's imagine I have one table with the following:

```
ID  code  Points
1   101   2
2   201   3
3   233   4
4   123   1
```

The percentage for `ID 1` is `20%`, for `2` it is `30%`, and so on. How do I get it?
Try like this:

```
select id, code, points,
       (points * 100) / (select sum(points) from table1)
from table1
```
There are a couple of approaches to getting that result. You essentially need the "total" points from the whole table (or whatever subset), repeated on each row. Getting the percentage is a simple matter of arithmetic; the expression you use for that depends on the datatypes and how you want it formatted.

Here's one way (out of a couple of possible ways) to get the specified result:

```
SELECT t.id
     , t.code
     , t.points
  -- , s.tot_points
     , ROUND(t.points * 100.0 / s.tot_points,1) AS percentage
  FROM onetable t
 CROSS
  JOIN ( SELECT SUM(r.points) AS tot_points
           FROM onetable r
       ) s
 ORDER BY t.id
```

The inline view query `s` is run first; that gives a single row. The join operation matches that row with every row from `t`, and that gives us the values we need to calculate a percentage.

Another way to get this result, without using a join operation, is to use a subquery in the SELECT list to return the total.

---

Note that the join approach can be extended to get a percentage for each "group" of records:

```
id  type    points  %type
--  ----    ------  -----
1   sold    11      22%
2   sold    4       8%
3   sold    25      50%
4   bought  1       50%
5   bought  1       50%
6   sold    10      20%
```

To get that result, we can use the same query, but with a view query for `s` that returns totals `GROUP BY r.type`, and then the join operation isn't a CROSS join, but a match based on type:

```
SELECT t.id
     , t.type
     , t.points
  -- , s.tot_points_by_type
     , ROUND(t.points * 100.0 / s.tot_points_by_type,1) AS `%type`
  FROM onetable t
  JOIN ( SELECT r.type
              , SUM(r.points) AS tot_points_by_type
           FROM onetable r
          GROUP BY r.type
       ) s
    ON s.type = t.type
 ORDER BY t.id
```

To get the same result with a subquery, it would have to be a correlated subquery, and that subquery is likely to get executed for every row in `t`. This is why it's more natural for me to use a join operation rather than a subquery in the SELECT list, even when a subquery works the same.

(The patterns we use for more complex queries, like assigning aliases to tables, qualifying all column references, and formatting the SQL... those patterns just work their way back into simple queries. The rationale for these patterns is kind of lost in simple queries.)
SQL percentage of the total
[ "sql", "percentage" ]
I'm stuck. I've looked for an answer, but can't seem to find one that fits: subtracting a time in one row from a time in a different row of the same table. In the table below, I want the difference between the TimeOut of one row and the TimeIn of the next row — for example, the difference in minutes between the TimeOut in Row 1 (10:35am) and the TimeIn in Row 2 (10:38am). Table 1: `TIMESHEET` ``` ROW EmpID TimeIn TimeOut ---------------------------------------------------------------- 1 138 2014-01-05 10:04:00 2014-01-05 10:35:00 2 138 2014-01-05 10:38:00 2014-01-05 10:59:00 3 138 2014-01-05 11:05:00 2014-01-05 11:30:00 ``` Expected results ``` ROW EmpID TimeIn TimeOut Minutes ---------------------------------------------------------------------------- 1 138 2014-01-05 10:04:00 2014-01-05 10:35:00 2 138 2014-01-05 10:38:00 2014-01-05 10:59:00 3 3 138 2014-01-05 11:05:00 2014-01-05 11:30:00 6 etc etc etc ``` Basically, I need to compute the time differences in the query to show how long employees were on break. I've tried doing a join, but that doesn't seem to work, and I don't know if `OVER` with `PARTITION` is the way to go, because I cannot seem to follow the logic (yeah, I'm still learning). I also considered two temp tables and comparing them, but that doesn't work when I start changing days or employee IDs. Finally, I am thinking maybe `LEAD` in an `OVER` statement? Or is it just simpler to do a `DATEDIFF` with a `CAST`?
I have solved similar problems this way, and the rows don't even need to be sorted: ``` select t1.EmpID, t1.TimeIn, t1.TimeOut, datediff(minute, max(t2.TimeOut), t1.TimeIn) as minutes from timesheet t1 left join timesheet t2 on t1.EmpID = t2.EmpID and t2.TimeOut < t1.TimeIn group by t1.EmpID, t1.TimeIn, t1.TimeOut ``` Let me know if this works. Here is a SQL Fiddle: <http://sqlfiddle.com/#!3/89a43/1>
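Here is the same self-join idea verified against the sample data with Python's `sqlite3`. SQLite has no `datediff()`, so the gap is computed from `julianday()` values instead — that substitution is the only change:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE timesheet (EmpID INTEGER, TimeIn TEXT, TimeOut TEXT)")
conn.executemany("INSERT INTO timesheet VALUES (?,?,?)", [
    (138, "2014-01-05 10:04:00", "2014-01-05 10:35:00"),
    (138, "2014-01-05 10:38:00", "2014-01-05 10:59:00"),
    (138, "2014-01-05 11:05:00", "2014-01-05 11:30:00"),
])

# Same self-join shape as the answer; the julianday() difference is in
# days, so multiply by 24 * 60 to get whole minutes.
rows = conn.execute("""
    SELECT t1.EmpID, t1.TimeIn, t1.TimeOut,
           CAST(ROUND((julianday(t1.TimeIn) - julianday(MAX(t2.TimeOut)))
                      * 24 * 60) AS INTEGER) AS minutes
    FROM timesheet t1
    LEFT JOIN timesheet t2
      ON t1.EmpID = t2.EmpID AND t2.TimeOut < t1.TimeIn
    GROUP BY t1.EmpID, t1.TimeIn, t1.TimeOut
    ORDER BY t1.TimeIn
""").fetchall()
```

The first row has no earlier `TimeOut`, so its break length comes back NULL (`None`), and the other two match the expected 3 and 6 minutes.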
Try something like this: ``` select *, DATEDIFF(minute, ( select max(b.TimeOut) from TIMESHEET as b where a.EmpID=b.EmpID and b.ROW<a.ROW ), a.TimeIn ) as diff from TIMESHEET as a ```
Find the time difference between two consecutive rows in the same table in sql
[ "sql", "sql-server", "sql-server-2008", "ssms" ]
I wrote a query in MySQL using group\_concat like ``` SELECT c1,group_concat(c2) FROM table1 where sno in(1,4,8,10) group by c1; ``` and it gives the expected result. Now I want to write the same query using Hibernate criteria.
The simple answer is **No**. **Why?** Hibernate supports only common functions/syntax available across multiple databases. There is no `group_concat` function in Microsoft SQL Server, and it may be missing in other databases as well. **Solution:** You have to execute it as a plain (native) SQL query.
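For what it's worth, `group_concat` runs fine as a plain SQL string on engines that have it (MySQL, SQLite, ...). A quick check of the query shape from the question using Python's `sqlite3` — the sample data below is made up, since the question doesn't show any:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (sno INTEGER, c1 TEXT, c2 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?,?,?)", [
    (1, "a", "x"), (4, "a", "y"), (8, "b", "z"), (10, "b", "w"), (12, "a", "q"),
])

# Same shape as the question's query; sno = 12 is filtered out by the IN list.
rows = conn.execute("""
    SELECT c1, GROUP_CONCAT(c2)
    FROM table1
    WHERE sno IN (1, 4, 8, 10)
    GROUP BY c1
""").fetchall()
d = dict(rows)
```

Note that without an explicit ordering inside `GROUP_CONCAT`, the order of the concatenated values is not guaranteed.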
You have two options (depending on your hibernate version). **Override the dialect class** *any hibernate version* You will need to subclass your dialect to add `group_concat()` 1. Introduce the dialect override class Create the following class somewhere in your app (e.g. util package) ``` package com.myapp.util; import org.hibernate.dialect.MySQL5Dialect; import org.hibernate.dialect.function.StandardSQLFunction; import org.hibernate.type.StandardBasicTypes; public class MySQLCustomDialect extends MySQL5Dialect { public MySQLCustomDialect() { super(); registerFunction("group_concat", new StandardSQLFunction("group_concat", StandardBasicTypes.STRING)); } } ``` 2. Map the dialect override class to boot properties Add the following property to your application.properities `spring.jpa.properties.hibernate.dialect = com.myapp.util.MySQLCustomDialect` **Use JPA Metadata Builder Contributor** *hibernate 5.2.18 or newer only* 1. Introduce metadata builder class Create the following class, remember to add package & resolve imports. ``` public class SqlFunctions implements MetadataBuilderContributor { @Override public void contribute(MetadataBuilder metadataBuilder) { metadataBuilder.applySqlFunction( "group_concat", new StandardSQLFunction( "group_concat", StandardBasicTypes.STRING ) ); } } ``` 2. Map new class in application boot properties > Leave the dialect properties as is ``` spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect spring.jpa.properties.hibernate.metadata_builder_contributor = com.myapp.util.SqlFunctions ```
How to use group_concat in hibernate criteria?
[ "sql", "hibernate" ]
I'm using JavaDB and working with two tables in the same database. I'm trying to update the PrevSales column in the "SalesResp" table with the values that are in the "SellDate" column of the "Vehicles" table when the Vehicle table's SellDate is before the specified date. I'm matching the data between the two tables by using the "VIN" column that is present in both tables. I continually get errors surrounding the inner join in my code, SQL state 42X01. ``` UPDATE SALESRESP JOIN VEHICLES ON (SALESRESP.VIN = VEHICLES.VIN) SET SALESRESP.PrevSale = VEHICLES.SELLDATE WHERE (((VEHICLES.SELLDATE)<'2013-09-24')); ``` I've tried various configurations, swapping the INNER JOIN and the SET placements, and I've tried including a "FROM" clause. Nothing seems to work. Any advice? EDIT: Exact error code: ``` Error code 30000, SQL state 42X01: Syntax error: Encountered "JOIN" at line 1, column 19. ```
Use the following code, substituting your own date for the literal: ``` UPDATE SALESRESP SET SALESRESP.PrevSale = (SELECT VEHICLES.SELLDATE FROM VEHICLES WHERE SALESRESP.VIN = VEHICLES.VIN AND VEHICLES.SELLDATE <'2013-09-24'); ```
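A runnable sketch of the correlated-subquery UPDATE using Python's `sqlite3` (lower-cased names, made-up sample rows). Note the added `WHERE EXISTS` clause, which is an extra safeguard not in the answer above: it keeps rows without a qualifying vehicle untouched instead of setting their `prevsale` to NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vehicles  (vin TEXT, selldate TEXT);
    CREATE TABLE salesresp (vin TEXT, prevsale TEXT);
    INSERT INTO vehicles  VALUES ('V1', '2013-09-01'), ('V2', '2013-10-01');
    INSERT INTO salesresp VALUES ('V1', NULL), ('V2', NULL);
""")

# Correlated subquery instead of UPDATE .. JOIN (which Derby/JavaDB rejects).
conn.execute("""
    UPDATE salesresp
    SET prevsale = (SELECT v.selldate FROM vehicles v
                    WHERE v.vin = salesresp.vin AND v.selldate < '2013-09-24')
    WHERE EXISTS (SELECT 1 FROM vehicles v
                  WHERE v.vin = salesresp.vin AND v.selldate < '2013-09-24')
""")
rows = conn.execute("SELECT vin, prevsale FROM salesresp ORDER BY vin").fetchall()
```

Only `V1` (sold before the cutoff) gets its `prevsale` filled in; `V2` stays NULL because it never matched.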
OK, let's try being more explicit: `UPDATE SALESRESP SET SALESRESP.PrevSale = VEHICLES.SELLDATE FROM VEHICLES JOIN SALESREP ON (SALESRESP.VIN = VEHICLES.VIN) WHERE (((VEHICLES.SELLDATE)<'2013-09-24'))` See if that will work.
UPDATE with INNER JOIN
[ "sql", "sql-update", "inner-join", "javadb" ]
I have a Joomla website and I want to make a query that could run daily as a cron job. It would look for articles that are 30 days old and delete them. I can make a simple query like this: ``` SELECT * FROM `ibps6_content` WHERE created > '2013-10-16' AND created < '2013-10-16' ``` But I don't know how to make it target the last 30 days, instead of hardcoding the dates.
You need MySQL's [DATE\_FORMAT(date,format)](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format) function. Try setting your query up like this: ``` DELETE FROM `ibps6_content` WHERE created < DATE_FORMAT(NOW() - INTERVAL 1 MONTH, '%Y-%m-%d'); ``` (Note the `<` comparison — you want to delete articles *older* than the cutoff, not newer.) Hope this helps
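The same date arithmetic can be checked with Python's `sqlite3`; `date('now', '-30 day')` is SQLite's spelling of MySQL's `NOW() - INTERVAL 30 DAY`. A fixed "today" is used below so the example is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE content (id INTEGER, created TEXT)")
conn.executemany("INSERT INTO content VALUES (?,?)",
                 [(1, "2013-09-10"), (2, "2013-10-15")])

# Cutoff 30 days before the fixed "today"; in production you'd use 'now'.
cutoff = conn.execute("SELECT date('2013-10-16', '-30 day')").fetchone()[0]
conn.execute("DELETE FROM content WHERE created < date('2013-10-16', '-30 day')")
remaining = [r[0] for r in conn.execute("SELECT id FROM content ORDER BY id")]
```

Only the article older than the cutoff is deleted; the recent one survives.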
If created is a date datatype in the table: ``` DELETE FROM ibps6_content WHERE created = CURRENT_DATE - INTERVAL 30 DAY ``` If it's a datetime, use DATE(created).
MySQL query to select a certain day in the past?
[ "mysql", "sql", "joomla" ]
Is there any way to insert data into a table from a variable? Variable contents example: ``` 123;1;500;some text here; 145;0;250;and some more text; 146;1;0;; 146;0;3;this field in previous line is empty; ``` * Column dividers: `;` * Line dividers: `\r\n`
If the semicolon-separated contents were in a file you could use [BULK INSERT](http://msdn.microsoft.com/en-us/library/ms188365.aspx). You can find a Stack Overflow question about this, with a solution, [here](https://stackoverflow.com/questions/15242757/import-csv-file-into-sql-server). So dumping the contents of your variable to a file and using BULK INSERT would be one way to do it.
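If client-side code is an option, the variable can also be parsed outside T-SQL before inserting. A hedged Python sketch (the table and column names `t`, `c1`..`c4` are made up):

```python
import sqlite3

# The variable contents from the question, with the stated dividers.
raw = ("123;1;500;some text here;\r\n"
       "145;0;250;and some more text;\r\n"
       "146;1;0;;\r\n"
       "146;0;3;this field in previous line is empty;")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT)")

# Split on the line divider, then the column divider; keep 4 columns per row
# (the trailing ';' produces an extra empty field, which [:4] drops).
rows = [line.split(";")[:4] for line in raw.split("\r\n") if line]
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", rows)
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

The empty fourth field in line 3 survives as an empty string rather than being lost.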
```
DECLARE @table_var TABLE( col_list VARCHAR(50) )

INSERT INTO @table_var VALUES ('123;1;500;some text here');
INSERT INTO @table_var VALUES ('145;0;250;and some more text;');
INSERT INTO @table_var VALUES ('146;1;0;;');
INSERT INTO @table_var VALUES ('146;0;3;this field');
```

Solution 1:

```
SELECT DISTINCT
       S.a.value('(/H/r)[1]', 'VARCHAR(25)') AS col1,
       S.a.value('(/H/r)[2]', 'VARCHAR(25)') AS col2,
       S.a.value('(/H/r)[3]', 'VARCHAR(25)') AS col3,
       S.a.value('(/H/r)[4]', 'VARCHAR(25)') AS col4
FROM (
    SELECT *, CAST(N'<H><r>' + REPLACE(col_list, ';', '</r><r>') + '</r></H>' AS XML) AS [vals]
    FROM @table_var
) d
CROSS APPLY d.[vals].nodes('/H/r') S(a)
```

Solution 2:

```
SELECT col_list,
       NewXML.value('/col_list[1]/Attribute[1]', 'varchar(25)') AS [col1],
       NewXML.value('/col_list[1]/Attribute[2]', 'varchar(25)') AS [col2],
       NewXML.value('/col_list[1]/Attribute[3]', 'varchar(25)') AS [col3],
       NewXML.value('/col_list[1]/Attribute[4]', 'varchar(25)') AS [col4]
FROM @table_var t1
CROSS APPLY (
    SELECT XMLEncoded = (
        SELECT col_list AS [*]
        FROM @table_var t2
        WHERE t1.col_list = t2.[col_list]
        FOR XML PATH('')
    )
) EncodeXML
CROSS APPLY (
    SELECT NewXML = CAST('<col_list><Attribute>' + REPLACE(XMLEncoded, ';', '</Attribute><Attribute>') + '</Attribute></col_list>' AS XML)
) CastXML
```
Insert data into table from a CSV-formatted text variable using T-SQL
[ "sql", "sql-server", "t-sql" ]
I have a problem with my MySQL query. Currently I have the following ``` SELECT DISTINCT members.userid, messaging.toName, members.url, messaging.timestamp AS utime FROM messaging JOIN members ON members.userid = messaging.senderid WHERE ( messaging.userid = '$profileid' OR messaging.senderid = '1000450' ) ORDER BY messaging.timestamp DESC ``` which brings back ``` userid toName url timestamp ``` The above query brings back all the different timestamp values, but what I need it to do is just bring back the latest timestamp. For example ``` userid | toName | url | timestamp | ------------------------------------------ Test | Dave | url | 7787263803 | Test | Dave | url | 7787263804 | Test | Dave | url | 7787263805 | Test | Dave | url | 7787263806 | ``` How can I edit the query so it will bring back just the maximum timestamp, like below? ``` userid | toName | url | timestamp | ------------------------------------------ Test | Dave | url | 7787263806 | ``` Any help would be great
One way to do that would be (using your query) to limit the output to 1 row, since you've already ordered your result set by the timestamp column in descending order. ``` SELECT DISTINCT members.userid, messaging.toName, members.url, messaging.timestamp AS utime FROM messaging JOIN members ON members.userid = messaging.senderid WHERE ( messaging.userid = '$profileid' OR messaging.senderid = '1000450' ) ORDER BY messaging.timestamp DESC LIMIT 1 ``` Another way of doing this would be to use the aggregate function `MAX()` ``` SELECT DISTINCT members.userid, messaging.toName, members.url, MAX(messaging.timestamp) AS utime FROM messaging JOIN members ON members.userid = messaging.senderid WHERE ( messaging.userid = '$profileid' OR messaging.senderid = '1000450' ) GROUP BY 1,2,3 -- ORDER BY messaging.timestamp DESC # Unnecessary sorting in this solution ``` which will produce a maximum timestamp for each distinct (userid, toName, url) triple. Also, you may get the same result using a window function: ``` SELECT * FROM ( SELECT DISTINCT members.userid, messaging.toName, members.url, messaging.timestamp AS utime, rank() OVER (ORDER BY messaging.timestamp DESC) AS position FROM messaging JOIN members ON members.userid = messaging.senderid WHERE ( messaging.userid = '$profileid' OR messaging.senderid = '1000450' ) -- ORDER BY messaging.timestamp DESC # Unnecessary sorting in this solution ) ranked WHERE position = 1 ``` (Note the `DESC` inside `rank() OVER (...)` — ranking ascending would return the *earliest* timestamp instead.) A big advantage of this solution is that by changing the WHERE clause you can get the second, third, ... very quickly.
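The `MAX()`-per-group idea, stripped down to a single table so it can be run directly with Python's `sqlite3` (data copied from the question's sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msg (userid TEXT, toName TEXT, url TEXT, ts INTEGER)")
conn.executemany("INSERT INTO msg VALUES (?,?,?,?)", [
    ("Test", "Dave", "url", 7787263803), ("Test", "Dave", "url", 7787263804),
    ("Test", "Dave", "url", 7787263805), ("Test", "Dave", "url", 7787263806),
])

# One row per (userid, toName, url) triple, carrying the latest timestamp.
latest = conn.execute("""
    SELECT userid, toName, url, MAX(ts)
    FROM msg
    GROUP BY userid, toName, url
""").fetchall()
```

The four sample rows collapse to one, with the maximum timestamp 7787263806 — the question's expected output.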
Just add `LIMIT 1` at the end of your query.
Retrieve MAX timestamp from query
[ "mysql", "sql" ]
I want to select `Cars` from the database with a where clause, and I'm looking for the best DRY approach for my issue. For example, I have these parameters ``` params[:car_model_id] (int) params[:transmission_id] (int) params[:from_date] params[:to_date] ``` but I don't know which ones will be nil ``` if params[:car_model_id].nil? && !params[:transmission_id].nil? if params[:from_date].nil? && params[:to_date].nil? return Car.where(:transmission_id => params[:transmission_id]) else return Car.where(:transmission_id => params[:transmission_id], :date => params[:from_date]..params[:to_date]) end elsif !params[:car_model_id].nil? && params[:transmission_id].nil? if params[:from_date].nil? && params[:to_date].nil? return Car.where(:car_model_id => params[:car_model_id]) else return Car.where(:car_model_id => params[:car_model_id], :date => params[:from_date]..params[:to_date]) end else return Car.where(:car_model_id => params[:car_model_id], :transmission_id => params[:transmission_id], :date => params[:from_date]..params[:to_date]) end ``` What is the best approach to avoid such bad code and check whether a parameter is nil inline (in `where`)?
You can do: ``` car_params = params.slice(:car_model_id, :transmission_id).reject{|k, v| v.nil? } ``` and then: ``` Car.where(car_params) ``` **Explanation:** You're checking whether the particular keys, i.e. `:car_model_id` and `:transmission_id`, exist in `params`. The above code would produce something like this when you have just `:transmission_id` in `params`: ``` Car.where(:transmission_id => '1') ``` or this when you have `:car_model_id` in `params`: ``` Car.where(:car_model_id => '3') ``` or this when you'll have both: ``` Car.where(:transmission_id => '1', :car_model_id => '3') ``` NOTE: This will work only when the `params` keys match the column names you're querying on. If you intend to have a key in `params` that doesn't match a column name, then I'd suggest you rename it to the column name in the controller itself before `slice`. **UPDATE**: Since the OP has edited his question and introduced more `if.. else` conditions now, one way to go about solving that — and one thing to always keep in mind — is to have `car_params` hold exactly the values you want to query the model class with (here it's `Car`). So, in this case: ``` car_params = params.slice(:car_model_id, :transmission_id).reject{|k, v| v.nil? } if params[:from_date].present? && params[:to_date].present? car_params.merge!(date: params[:from_date]..params[:to_date]) end ``` and then: ``` Car.where(car_params) ```
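The slice-and-reject idea is language-agnostic; here is the same filtering restated in Python for readers outside Rails (the helper name `car_filters` is made up):

```python
# Keep only whitelisted, non-nil keys, then hand the resulting dict to
# whatever query builder you use — the analogue of slice(...).reject.
def car_filters(params):
    allowed = ("car_model_id", "transmission_id")
    return {k: v for k, v in params.items() if k in allowed and v is not None}

filters = car_filters({"car_model_id": 3, "transmission_id": None, "page": 2})
```

Here the `None`-valued `transmission_id` and the non-whitelisted `page` both drop out, leaving just `{"car_model_id": 3}`.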
> what is best approach to avoid such bad code and check if parameter is > nil inline(in where) Good Question ! I will make implementation with two extra boolean variables (`transmission_id_is_valid` and `car_model_id_is_valid`) ``` transmission_id_is_valid = params[:car_model_id].nil? && !params[:transmission_id].nil? car_model_id_is_valid = !params[:car_model_id].nil? && params[:transmission_id].nil? if transmission_id_is_valid return Car.where(:transmission_id => params[:transmission_id]) elseif car_model_id_is_valid return Car.where(:car_model_id=> params[:car_model_id]) .... end ``` I think now is more human readable.
Rails ActiveRecord where clause
[ "sql", "ruby-on-rails", "ruby", "activerecord", "ruby-on-rails-4" ]
I was working in Access to make a query of a few tables, and realized that a column of a table does not meet a specific requirement. I have a field, across thousands of records, that contains "years" in the following format (an example): `1915-1918`. What I want to accomplish is to expand that value into the individual years, so the end result would be: `1915,1916,1917,1918`. Basically, I want to convert `1915-1918` to `1915,1916,1917,1918`. I thought a simple concatenation would suffice, but could not wrap my head around how to do it for all thousands of records. I did some research and reached the conclusion that a user-defined function might be the way to go. How would I go about this?
When your field value consists of 4 digits followed by a dash followed by 4 more digits, this function returns a comma-separated list of years. In any other case (Null, a single year such as "1915" instead of a year range, or anything else), the function returns the starting value. ``` Public Function YearList(ByVal pInput As Variant) As Variant Dim astrPieces() As String Dim i As Long Dim lngFirst As Long Dim lngLast As Long Dim varReturn As Variant If pInput Like "####-####" Then astrPieces = Split(pInput, "-") lngFirst = CLng(astrPieces(0)) lngLast = CLng(astrPieces(1)) For i = lngFirst To lngLast varReturn = varReturn & "," & CStr(i) Next If Len(varReturn) > 0 Then varReturn = Mid(varReturn, 2) End If Else varReturn = pInput End If YearList = varReturn End Function ``` However, this approach assumes the start year in each range will be less than the end year. In other words, you would need to invest more effort to make `YearList("1915-1912")` return a list of years instead of an empty string. If that function returns what you want, you could use it in a `SELECT` query. ``` SELECT years_field, YearList(years_field) FROM YourTable; ``` Or if you want to replace the stored values in your years field, you can use the function in an `UPDATE` query. ``` UPDATE YourTable SET years_field = YearList(years_field); ```
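The same logic outside VBA — a Python equivalent that also tolerates the reversed-range case (`"1915-1912"`) mentioned above:

```python
import re

def year_list(value):
    """Expand '1915-1918' to '1915,1916,1917,1918'; pass anything else through."""
    if value is None:
        return value
    m = re.fullmatch(r"(\d{4})-(\d{4})", value.strip())
    if not m:
        return value
    first, last = int(m.group(1)), int(m.group(2))
    if first > last:
        first, last = last, first  # tolerate reversed ranges
    return ",".join(str(y) for y in range(first, last + 1))
```

Non-matching inputs (a single year, Null/None, free text) come back unchanged, mirroring the VBA function's fallback behaviour.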
You can use the `Split` function to return an array from the "years" field that contains the upper and lower year. Then loop from the lower year to the upper year and build the concatenated string. For example: ``` Public Function SplitYears(Years As String) As String Dim v As Variant Dim i As Long Dim s As String v = Split(Years, "-", 2) If UBound(v) = 1 Then For i = v(0) To v(1) s = s & "," & CStr(i) Next i s = Right(s, Len(s) - 1) Else s = v(0) End If SplitYears = s End Function ```
How do I split up a year range into individual years in a given field?
[ "sql", "ms-access", "vba", "ms-access-2013" ]
Using the [w3schools db](http://www.w3schools.com/sql/trysql.asp?filename=trysql_select_groupby), I have to create these queries. I am stuck on a couple of questions and thought I'd ask. Q2: Show the top 5 employees, excluding the top employee (show employees ranked 2-5), in terms of total sales done by those employees. In the query, display the employee's first name, last name and TotalSales, sorted in descending order. Filter the data for only orders done in the year 1996. My code, without the year filtering, is: ``` SELECT e.LastName, e.FirstName, SUM(od.Quantity*p.Price) AS OrderTotal FROM [Employees] AS e JOIN [Orders] AS o ON o.EmployeeID=e.EmployeeID JOIN [OrderDetails] AS od ON od.OrderID=o.OrderID JOIN [Products] AS p ON od.ProductID=p.ProductID GROUP BY e.EmployeeID ORDER BY OrderTotal DESC LIMIT 4 OFFSET 1 ``` I am not sure how to return only orders from 1996 — grouping by the year doesn't seem to help. Thank you!
Since the WebSQL database used for w3schools doesn't seem to support year() I think you could use this instead: ``` SELECT e.LastName, e.FirstName, SUM(od.Quantity*p.Price) AS OrderTotal FROM [Employees] AS e JOIN [Orders] AS o ON o.EmployeeID=e.EmployeeID JOIN [OrderDetails] AS od ON od.OrderID=o.OrderID JOIN [Products] AS p ON od.ProductID=p.ProductID WHERE OrderDate BETWEEN '1996-01-01' AND '1996-12-31' GROUP BY e.EmployeeID ORDER BY OrderTotal DESC LIMIT 4 OFFSET 1 ``` It seems to work when I tested with Chrome (but not Firefox)
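Since the w3schools editor runs on a WebSQL/SQLite engine, both the `BETWEEN` filter and SQLite's `strftime('%Y', ...)` (its stand-in for `YEAR()`) can be checked with Python's `sqlite3` on made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, orderdate TEXT)")
conn.executemany("INSERT INTO orders VALUES (?,?)", [
    (1, "1996-07-04"), (2, "1997-01-02"), (3, "1996-12-31"),
])

# Date-range filter, as in the answer above.
in_1996 = [r[0] for r in conn.execute("""
    SELECT id FROM orders
    WHERE orderdate BETWEEN '1996-01-01' AND '1996-12-31'
    ORDER BY id
""")]

# Equivalent year extraction, SQLite-style.
by_strftime = [r[0] for r in conn.execute("""
    SELECT id FROM orders WHERE strftime('%Y', orderdate) = '1996' ORDER BY id
""")]
```

Both spellings return the same order IDs; with date-only strings the `BETWEEN` upper bound of `'1996-12-31'` is inclusive, but if the column carried times you'd want `< '1997-01-01'` instead.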
You can use the YEAR() function. Check out this list of useful [datetime](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html) functions. Try this new query: ``` SELECT e.LastName, e.FirstName, SUM(od.Quantity*p.Price) AS OrderTotal FROM [Employees] AS e JOIN [Orders] AS o ON o.EmployeeID=e.EmployeeID JOIN [OrderDetails] AS od ON od.OrderID=o.OrderID JOIN [Products] AS p ON od.ProductID=p.ProductID WHERE YEAR([OrderDate]) = 1996 GROUP BY e.EmployeeID ORDER BY OrderTotal DESC LIMIT 4 OFFSET 1 ```
SQL Grouping by Year
[ "sql" ]
I have read many threads on this subject now and tried a few things, but it has not worked as I hoped. I need some clarification and apologize if this is considered a duplicate thread. A client of mine hosts a Postgres database where one table holds a little more than 12 million records. They have tasked me with finding duplicate records, extracting them for viewing and, if everything looks ok, deleting the duplicates. My main concern has been performance on the server — running DISTINCT queries on 12 million records must consume a lot of resources? Since my first task is to extract the records for viewing in, say, a CSV, and not simply delete them, my approach in pgAdmin was executing this to a file. ``` SELECT * FROM my_table WHERE my_table_id NOT IN ( SELECT DISTINCT ON ( num_1, num_2, num_3, num_4, num_5, my_date ) my_table_id FROM my_table ); ``` However this query takes way too long. After 20 minutes of execution time I halted the execution. To make things more complex, my client is reluctant to allow me to clone a local copy of the table because of strict security. They prefer that it all be done in the live hosting environment. The table definition is quite simple. It looks like this ``` CREATE TABLE my_table ( my_table_id bigserial NOT NULL, num_1 bigserial NOT NULL, num_2 bigserial NOT NULL, num_3 bigserial NOT NULL, num_4 numeric, num_5 integer, my_date date, my_text character varying ) ``` The primary key "my\_table\_id" has not been compromised and is always unique. The col "my\_text" is not interesting in the query since it will be empty for all duplicates. It is only the numeric fields and the date that need matching. All columns (except my\_table\_id and my\_text) must match across records to qualify as a duplicate. What is the best way to solve this? Is there a server-friendly way that won't eat all resources on the host environment? Please help me understand the best approach! Thank you!
You need to use `GROUP BY` and `HAVING` to get the duplicate records, instead of `DISTINCT`; the subquery below finds all duplicated combinations: ``` SELECT * FROM my_table mt JOIN ( SELECT num_1, num_2, num_3, num_4, num_5, my_date FROM my_table GROUP BY num_1, num_2, num_3, num_4, num_5, my_date HAVING COUNT(*) >1 ) T ON mt.num_1= T.num_1 and mt.num_2= T.num_2 and mt.num_3= T.num_3 and mt.num_4= T.num_4 and mt.num_5= T.num_5 and mt.my_date= T.my_date ```
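A toy-scale check of the `GROUP BY` / `HAVING` approach with Python's `sqlite3` (three rows instead of 12 million; the schema follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE my_table (
    my_table_id INTEGER PRIMARY KEY, num_1 INT, num_2 INT, num_3 INT,
    num_4 NUMERIC, num_5 INT, my_date TEXT, my_text TEXT)""")
conn.executemany("INSERT INTO my_table VALUES (?,?,?,?,?,?,?,?)", [
    (1, 1, 2, 3, 4.0, 5, "2014-01-01", "keep"),
    (2, 1, 2, 3, 4.0, 5, "2014-01-01", None),   # duplicate of id 1
    (3, 9, 9, 9, 9.0, 9, "2014-02-02", "unique"),
])

# IDs of every row belonging to a duplicated combination.
dupes = conn.execute("""
    SELECT mt.my_table_id
    FROM my_table mt
    JOIN (SELECT num_1, num_2, num_3, num_4, num_5, my_date
          FROM my_table
          GROUP BY num_1, num_2, num_3, num_4, num_5, my_date
          HAVING COUNT(*) > 1) d
      ON  mt.num_1 = d.num_1 AND mt.num_2 = d.num_2 AND mt.num_3 = d.num_3
      AND mt.num_4 = d.num_4 AND mt.num_5 = d.num_5 AND mt.my_date = d.my_date
    ORDER BY mt.my_table_id
""").fetchall()
```

Both members of the duplicated combination come back (IDs 1 and 2), so you can review them before deciding which to delete.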
Another way is to use analytic functions (note: partition by *all* the columns that define a duplicate): ``` select * from ( select * , count(*) over (partition by num_1, num_2, num_3, num_4, num_5, my_date) cnt from my_table ) t1 where cnt > 1 ```
Find duplicate records in large table on multiple columns the right way
[ "sql", "postgresql", "postgresql-8.4" ]
I have two tables, `follow` and `followed`. I want to get all the rows in the `follow` table whose `follow.screen_name` does not appear in `followed.following_screen_name`. **follow table** ``` ID screen_name ----------------- 1 eddie 2 jason 3 omar 4 jonathan 5 jack ``` **followed table** ``` ID my_screen_name following_screen_name ------------------------------------------- 1 john eddie 2 kenny eddie 3 kenny omar 4 john jason 5 john omar ``` --- **Query I tried which didn't work** ``` SELECT follow.screen_name from follow, followed where followed.my_screen_name='john' AND follow.screen_name != followed.following_screen_name ``` **Expected results** ``` ID screen_name ----------------- 1 jonathan 2 jack ```
You can get this by doing a `LEFT JOIN`: ``` SELECT F.screen_name FROM follow F LEFT JOIN followed FD on F.screen_name = FD.my_screen_name OR F.screen_name = FD.following_screen_name WHERE FD.my_screen_name IS NULL and FD.following_screen_name IS NULL ``` Another way is to use `NOT EXISTS`: get all rows that exist in `followed`, and use a `NOT EXISTS` clause to get the desired result. ``` SELECT F.screen_name FROM follow F WHERE NOT EXISTS ( SELECT 1 FROM followed FD WHERE F.screen_name = FD.my_screen_name OR F.screen_name = FD.following_screen_name ) ```
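Both anti-join variants are easy to verify against the sample data; here is the `NOT EXISTS` form run through Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE follow   (id INTEGER, screen_name TEXT);
    CREATE TABLE followed (id INTEGER, my_screen_name TEXT,
                           following_screen_name TEXT);
    INSERT INTO follow VALUES
        (1,'eddie'),(2,'jason'),(3,'omar'),(4,'jonathan'),(5,'jack');
    INSERT INTO followed VALUES
        (1,'john','eddie'),(2,'kenny','eddie'),(3,'kenny','omar'),
        (4,'john','jason'),(5,'john','omar');
""")

# Anti-join: names in follow that appear in neither followed column.
names = [r[0] for r in conn.execute("""
    SELECT f.screen_name FROM follow f
    WHERE NOT EXISTS (SELECT 1 FROM followed fd
                      WHERE f.screen_name = fd.my_screen_name
                         OR f.screen_name = fd.following_screen_name)
    ORDER BY f.id
""")]
```

The output matches the question's expected results: jonathan and jack.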
There are plenty of ways to solve this, but common to all is that you need to compare the follow.screen\_name to both followed.my\_screen\_name and followed.following\_screen\_\_name. One way is to use NOT IN with a UNION: ``` select screen_name from follow where screen_name not in ( select following_screen_name from followed where following_screen_name is not null union all select my_screen_name from followed where my_screen_name is not null ) ``` While this approach is nice for clarity, it may not be as good for performance as using a left join or not exists.
MySQL table comparison between two tables
[ "mysql", "sql" ]
I only found slightly different examples that I couldn't adapt to my needs due to my very limited SQL skills. I have a table with 3 relevant columns: ``` ItemID Date Result 1 1.2.2014 A 5 6.4.2014 B 9 7.4.2014 A 1 8.4.2014 A 1 9.4.2014 A 1 10.4.2014 A ``` I want to find the Items that had a particular result (let's say A) 3 times consecutively. In the sample above it would be Item 1. The dates are not normally consecutive. It should work in Oracle SQL. Many thanks for the help!
Not sure if this is the *best* way of achieving it, but it works. I created the following table to put your example data into: ``` CREATE TABLE DATE_RESULT( ITEM_ID INT, DATE_COL DATE, RESULT_COL VARCHAR2(255 CHAR) ); ``` Then ran this query: ``` SELECT ITEM_ID FROM( SELECT ITEM_ID, TO_CHAR(DATE_COL,'DD-MON-YYYY') AS DATE_COL, RESULT_COL, LAG(ITEM_ID,1,NULL) OVER (ORDER BY ITEM_ID ASC, DATE_COL ASC) AS LAST_ITEM_ID, LAG(ITEM_ID,2,NULL) OVER (ORDER BY ITEM_ID ASC, DATE_COL ASC) AS LAST_ITEM_ID2, LAG(TO_CHAR(DATE_COL,'DD-MON-YYYY'),1,NULL) OVER (ORDER BY ITEM_ID ASC, DATE_COL ASC) AS LAST_DATE, LAG(TO_CHAR(DATE_COL,'DD-MON-YYYY'),2,NULL) OVER (ORDER BY ITEM_ID ASC, DATE_COL ASC) AS LAST_DATE2, LAG(RESULT_COL,1,NULL) OVER (ORDER BY ITEM_ID ASC, DATE_COL ASC) AS LAST_RESULT, LAG(RESULT_COL,2,NULL) OVER (ORDER BY ITEM_ID ASC, DATE_COL ASC) AS LAST_RESULT2 FROM DATE_RESULT ) WHERE ITEM_ID = LAST_ITEM_ID AND ITEM_ID = LAST_ITEM_ID2 AND TO_DATE(DATE_COL)-1 = TO_DATE(LAST_DATE) AND TO_DATE(DATE_COL)-2 = TO_DATE(LAST_DATE2) AND RESULT_COL = LAST_RESULT AND RESULT_COL = LAST_RESULT2; ``` The query uses Oracle's LAG() function to get the values from previous rows. So in this example, LAST\_ITEM\_ID is the item ID from the previous row, and LAST\_ITEM\_ID2 is the item ID from 2 rows previous. In the WHERE clause I make sure that the ITEM\_ID matches the previous two ITEM\_IDs and that the RESULT\_COL matches the previous two RESULT\_COLs. I also make sure that the last two dates were consecutive.
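A variant of the `LAG()` idea run through Python's `sqlite3` (requires a build with SQLite 3.25+ for window functions). Unlike the query above, this one treats "consecutively" as consecutive *rows* per item ordered by date, rather than consecutive calendar days, which matches the question's note that the dates are not normally consecutive:

```python
import sqlite3  # window functions need SQLite 3.25+ (Python 3.8+ builds)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE date_result (item_id INT, date_col TEXT, result_col TEXT)")
conn.executemany("INSERT INTO date_result VALUES (?,?,?)", [
    (1, "2014-02-01", "A"), (5, "2014-04-06", "B"), (9, "2014-04-07", "A"),
    (1, "2014-04-08", "A"), (1, "2014-04-09", "A"), (1, "2014-04-10", "A"),
])

# An item qualifies when a row and its two predecessors (per item, by date)
# all carry the sought result.
hits = conn.execute("""
    SELECT DISTINCT item_id FROM (
        SELECT item_id, result_col,
               LAG(result_col, 1) OVER
                   (PARTITION BY item_id ORDER BY date_col) AS r1,
               LAG(result_col, 2) OVER
                   (PARTITION BY item_id ORDER BY date_col) AS r2
        FROM date_result
    )
    WHERE result_col = 'A' AND r1 = 'A' AND r2 = 'A'
""").fetchall()
```

Only item 1 comes back, matching the expected answer from the question.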
``` SQL> WITH DATA AS( 2 SELECT 1 ITEM_ID, TO_DATE('1.2.2014','DD.MM.YYYY') DT, 'A' RSLT FROM DUAL UNION ALL 3 SELECT 5, TO_DATE('6.4.2014','DD.MM.YYYY') , 'B' RSLT FROM DUAL UNION ALL 4 SELECT 9, TO_DATE('7.4.2014','DD.MM.YYYY') , 'A' RSLT FROM DUAL UNION ALL 5 SELECT 1, TO_DATE('8.4.2014','DD.MM.YYYY') , 'A' RSLT FROM DUAL UNION ALL 6 SELECT 1, TO_DATE('9.4.2014','DD.MM.YYYY') , 'A' RSLT FROM DUAL UNION ALL 7 SELECT 1, TO_DATE('10.4.2014','DD.MM.YYYY') , 'A' RSLT FROM DUAL) 8 SELECT ITEM_ID FROM( 9 SELECT A.*, ROW_NUMBER() OVER(PARTITION BY ITEM_ID ORDER BY RSLT) RN 10 FROM DATA A) 11 WHERE RN =3 12 / ITEM_ID ---------- 1 SQL> ```
Sql query to find items that had n consecutive failed attempts
[ "sql", "oracle", "find-occurrences" ]