Dataset columns: Prompt (string, 10-31k chars) · Chosen (string, 3-29.4k chars) · Rejected (string, 3-51.1k chars) · Title (string, 9-150 chars) · Tags (list, 3-7 items)
I have a table `report_business_likes` with these relevant fields: ``` id, facebook_id, created_at, some_info -- ----------- ----------- ---------- 1 123456789 '2013-12-23' blabla ``` I have another table named `businesses` with the following structure: ``` id, fb_id, data -- ----- ---- 33 123456789 xxx ``` I want to replace the `facebook_id` field in `report_business_likes` with the `id` from table `businesses`. In my case, the result should be: ``` id, facebook_id, created_at, some_info -- ----------- ----------- ---------- 1 33 '2013-12-23' blabla ``` As you can see, I replaced `123456789` with `33`. How can I achieve that? I tried: ``` UPDATE `report_business_likes` SET facebook_id = BZ.id from (select id from `businesses` where fb_id = 123456789 ) as BZ, where facebook_id = 123456789 AND date(created_at) = '2013-12-23'; ``` But I get a syntax error: ``` [SQL] /*SELECT * FROM `report_business_likes` where facebook_id = 123456789 AND date(created_at) = '2013-12-23';*/ UPDATE `report_business_likes` SET facebook_id = BZ from (select id from `businesses` where fb_id = 123456789) as BZ, where facebook_id = 123456789AND date(created_at) = '2013-12-23'; [Err] 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'from (select id from `businesses` where fb_id = 123456789) as BZ,' at line 3 ``` Please help.
``` UPDATE report_business_likes SET facebook_id = (select id from businesses where fb_id = 123456789 ) WHERE facebook_id = 123456789 AND date(created_at) = '2013-12-23' ``` OR (note that this `UPDATE ... FROM` form is SQL Server syntax, not MySQL) ``` UPDATE RBL SET RBL.facebook_id = B.id FROM report_business_likes RBL INNER JOIN businesses B ON RBL.facebook_id = B.fb_id ```
``` UPDATE report_business_likes r JOIN businesses b ON r.facebook_id=b.fb_id SET r.facebook_id=b.id ```
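The multi-table `UPDATE ... JOIN` form above is MySQL-specific. As a hedged sketch, the same replacement can also be written as a correlated subquery, which works in engines without `UPDATE ... JOIN` (demonstrated here with Python's `sqlite3` and the question's sample rows; the schema is reduced to the columns that matter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE report_business_likes (id INTEGER, facebook_id INTEGER, created_at TEXT, some_info TEXT)")
cur.execute("CREATE TABLE businesses (id INTEGER, fb_id INTEGER, data TEXT)")
cur.execute("INSERT INTO report_business_likes VALUES (1, 123456789, '2013-12-23', 'blabla')")
cur.execute("INSERT INTO businesses VALUES (33, 123456789, 'xxx')")

# Correlated subquery: rewrite facebook_id with the matching businesses.id
cur.execute("""
    UPDATE report_business_likes
    SET facebook_id = (SELECT b.id FROM businesses b
                       WHERE b.fb_id = report_business_likes.facebook_id)
    WHERE EXISTS (SELECT 1 FROM businesses b
                  WHERE b.fb_id = report_business_likes.facebook_id)
""")
conn.commit()
new_fb_id = cur.execute("SELECT facebook_id FROM report_business_likes WHERE id = 1").fetchone()[0]
```

The `WHERE EXISTS` guard keeps rows without a match unchanged instead of nulling them out.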
How to update field value based on other table in mySql?
[ "", "mysql", "sql", "" ]
I have an `Emp` table with the following data. ``` Eno Ename Location Deptid ------------------------------- 1 Alex Delhi 10 2 John Mumbai 10 ............................. ``` I have about 1000 records like this, and I need to sort them by the `Deptid` column and `Location`. The result after sorting should be like this (if I sort by `deptid` and `location=Mumbai`): if deptid=10 has 300 records (of which 150 are Delhi, 100 Mumbai, 50 Chennai), then I should get all the Mumbai records (only with deptid=10) first, then the other locations of the same deptid, and then the records from the other deptids.
``` SELECT Eno, Ename, Location, Deptid FROM employee WHERE Deptid = 10 AND Location = 'Mumbai' UNION ALL SELECT Eno, Ename, Location, Deptid FROM employee WHERE Deptid = 10 AND Location <> 'Mumbai' UNION ALL SELECT Eno, Ename, Location, Deptid FROM employee WHERE Deptid <> 10 ``` An Order by will probably screw it up, unless you add another column like ``` SELECT Eno, Ename, Location, Deptid FROM ( SELECT Eno, Ename, Location, Deptid, 1 OrderBy FROM employee WHERE Deptid = 10 AND Location = 'Mumbai' UNION ALL SELECT Eno, Ename, Location, Deptid, 2 OrderBy FROM employee WHERE Deptid = 10 AND Location <> 'Mumbai' UNION ALL SELECT Eno, Ename, Location, Deptid, 3 OrderBy FROM employee WHERE Deptid <> 10) a ORDER BY OrderBy, Deptid, Location ```
Try this: ``` SELECT Eno, Ename, Location, Deptid FROM employee ORDER BY CASE deptid WHEN 10 THEN 0 ELSE deptid END, CASE location WHEN 'Mumbai' THEN 1 WHEN 'Delhi' THEN 2 WHEN 'Chennai' THEN 3 END ``` **OR** If you want data only for `deptid = 10` then use the query below: ``` SELECT Eno, Ename, Location, Deptid FROM employee WHERE Deptid = 10 ORDER BY deptid, CASE location WHEN 'Mumbai' THEN 1 WHEN 'Delhi' THEN 2 WHEN 'Chennai' THEN 3 END ```
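Both answers rely on a `CASE` expression inside `ORDER BY`. A minimal runnable sketch of that idea, using Python's `sqlite3` and a few hypothetical rows modeled on the question (names and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (Eno INTEGER, Ename TEXT, Location TEXT, Deptid INTEGER)")
rows = [(1, 'Alex', 'Delhi', 10), (2, 'John', 'Mumbai', 10),
        (3, 'Ravi', 'Chennai', 10), (4, 'Mary', 'Mumbai', 20)]
cur.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)", rows)

# Dept 10 first, then within each dept: Mumbai first, then the rest alphabetically
ordered = cur.execute("""
    SELECT Ename, Location, Deptid FROM employee
    ORDER BY CASE Deptid WHEN 10 THEN 0 ELSE Deptid END,
             CASE Location WHEN 'Mumbai' THEN 0 ELSE 1 END,
             Location
""").fetchall()
```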
how to sort data based on two different (text and number) conditions in sql server?
[ "", "sql", "sql-server", "sorting", "select", "sql-order-by", "" ]
I need to combine two select statements into a single select statement. Select #1: ``` SELECT Product_Name as [Product Name], Product_Id as [Product Id] from tb_new_product_Name_id where Product_Name LIKE '%' + @product_name_id + '%' or Product_Id like '%' + @product_name_id + '%' ; ``` Select #2: ``` SELECT COUNT(Product_id) + 1 as duplicate_id FROM tb_new_product_Name_id_duplicate WHERE Product_id = (SELECT Product_id FROM tb_new_product_Name_id WHERE Product_Name = @product_name_id); ``` How can I combine the above two queries into a single select statement? I need to display three columns: duplicate\_id, [Product Name], [Product Id]. Thanks.
I think this is what you're looking for ``` SELECT A.Product_Name AS [Product Name], A.Product_Id AS [Product Id], B.duplicate_id FROM tb_new_product_Name_id AS A, ( SELECT COUNT(Product_id)+1 AS duplicate_id FROM tb_new_product_Name_id_duplicate WHERE Product_id= (SELECT Product_id FROM tb_new_product_Name_id WHERE Product_Name=@product_name_id ) ) AS B WHERE A.Product_Name LIKE '%'+@product_name_id+'%' OR Product_Id like '%'+@product_name_id+'%'; ```
You can use `subquery` to get your desired results ``` SELECT Product_Name as [Product Name], Product_Id as [Product Id],(SELECT COUNT(Product_id) + 1 as duplicate_id FROM tb_new_product_Name_id_duplicate WHERE Product_id = (SELECT Product_id FROM tb_new_product_Name_id WHERE Product_Name = @product_name_id)) as duplicate_id from tb_new_product_Name_id where Product_Name LIKE '%' + @product_name_id + '%' or Product_Id like '%' + @product_name_id + '%' ; ```
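Both answers hinge on a scalar subquery: a `SELECT` in the column list that returns a single value, repeated on every output row. A reduced sketch via `sqlite3` (table and column names shortened, data hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE product (name TEXT, id TEXT)")
cur.execute("CREATE TABLE product_dup (id TEXT)")
cur.executemany("INSERT INTO product VALUES (?, ?)", [("Widget", "W1"), ("Widget Pro", "W2")])
cur.executemany("INSERT INTO product_dup VALUES (?)", [("W1",), ("W1",)])

# The scalar subquery produces one value, attached as a third column to every row
rows = cur.execute("""
    SELECT name, id,
           (SELECT COUNT(*) + 1 FROM product_dup d WHERE d.id = 'W1') AS duplicate_id
    FROM product
    WHERE name LIKE '%Widget%'
""").fetchall()
```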
How to combine two select statements in SQL Server
[ "", "sql", "sql-server", "" ]
Is there any method to close/dispose existing SQL Server connections when a user's session ends in ASP.NET? I get the error below. I also use Entity Framework in my application. > Timeout expired. The timeout period elapsed prior to obtaining a > connection from the pool. This may have occurred because all pooled > connections were in use and max pool size was reached.
Firstly, forget about sessions. Your connections should not be tied to the session at all - if they are, there is a problem. If the issue is that your EF contexts are tied to the session: then again, I'd say you're doing it very wrong. There are (IMO) two reasonable scopes for connections in a web app: * per call-site - i.e. where you obtain the connection whenever you need it (perhaps multiple times per request) and immediately dispose it before exiting the same method. This is usually achieved via `using` blocks. * per request - where you open the connection at the start of the request (or on first use) and re-use it, then close / dispose it at the end of each request. This can be achieved using the global context and the end-request event.
Use the Session\_End in Global.asax ``` protected void Session_End(object sender, EventArgs e) { // End connection } ```
Is it possible to dispose all SQL Server connections in the session-end method in ASP.NET
[ "", "asp.net", "sql", ".net", "sql-server", "" ]
I'm trying to get the rates from anonymous people plus the ones who are registered. They are in different tables. ``` SELECT product.id, (SUM( users.rate + anonymous.rate ) / COUNT( users.rate + anonymous.rate )) FROM products AS product LEFT JOIN users ON users.id_product = product.id LEFT JOIN anonymous ON anonymous.id_product = product.id GROUP BY product.id ORDER BY product.date DESC ``` So, the tables are like the following: ``` users--> id | rate | id_product | id_user 1 2 2 1 2 4 1 1 3 5 2 2 anonymous--> id | rate | id_product | ip 1 2 2 192..etc 2 4 1 198..etc 3 5 2 201..etc ``` What I'm trying with my query is: for each product, I would like to have the average of rates. Currently the output is null, but I have values in both tables. Thanks.
Try it like this: ``` SELECT product.id, (SUM( ifnull(ur.rate,0) + ifnull(ar.rate,0) ) / (COUNT(ur.rate)+Count(ar.rate))) FROM products AS product LEFT JOIN users_rate AS ur ON ur.id_product = product.id LEFT JOIN anonymous_rate AS ar ON ar.id_product = product.id GROUP BY product.id ``` [**Sql Fiddle Demo**](http://sqlfiddle.com/#!2/54b12/12)
You cannot simply use COUNT and SUM over two joined tables when you GROUP BY - the two LEFT JOINs multiply the rows. ``` CREATE TABLE products (id integer); CREATE TABLE users_rate (id integer, id_product integer, rate integer, id_user integer); CREATE TABLE anonymous_rate (id integer, id_product integer, rate integer, ip varchar(25)); INSERT INTO products VALUES (1); INSERT INTO products VALUES (2); INSERT INTO products VALUES (3); INSERT INTO products VALUES (4); INSERT INTO users_rate VALUES(1, 1, 3, 1); INSERT INTO users_rate VALUES(1, 2, 3, 1); INSERT INTO users_rate VALUES(1, 3, 3, 1); INSERT INTO users_rate VALUES(1, 4, 3, 1); INSERT INTO anonymous_rate VALUES(1, 1, 3, '192..'); INSERT INTO anonymous_rate VALUES(1, 2, 3, '192..'); select p.id, ifnull( ( ifnull( ( select sum( rate ) from users_rate where id_product = p.id ), 0 ) + ifnull( ( select sum( rate ) from anonymous_rate where id_product = p.id ), 0 ) ) / ( ifnull( ( select count( rate ) from users_rate where id_product = p.id ), 0 ) + ifnull( ( select count( rate ) from anonymous_rate where id_product = p.id ), 0 )), 0 ) from products as p group by p.id ``` <http://sqlfiddle.com/#!2/a2add/8> I've checked it on SQLFiddle. When there are no rates, 0 is given; you may change that.
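The point of the second answer is that two simultaneous `LEFT JOIN`s multiply rows before `SUM`/`COUNT` run. One way to sidestep that (a sketch, not either answer verbatim) is to `UNION ALL` the two rating tables first and aggregate once, shown here via `sqlite3` with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE products (id INTEGER);
    CREATE TABLE users_rate (id_product INTEGER, rate INTEGER);
    CREATE TABLE anonymous_rate (id_product INTEGER, rate INTEGER);
    INSERT INTO products VALUES (1), (2);
    INSERT INTO users_rate VALUES (1, 4), (1, 2);
    INSERT INTO anonymous_rate VALUES (1, 3), (2, 5);
""")

# UNION ALL both rating tables first, then aggregate once: no row multiplication
avgs = dict(cur.execute("""
    SELECT p.id, AVG(r.rate)
    FROM products p
    LEFT JOIN (SELECT id_product, rate FROM users_rate
               UNION ALL
               SELECT id_product, rate FROM anonymous_rate) r
      ON r.id_product = p.id
    GROUP BY p.id
""").fetchall())
```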
MySQL sum plus count in same query
[ "", "mysql", "sql", "rating", "rate", "" ]
I'm trying to create a procedure and it gives me "No errors" and then "ORA-24344 Success with compilation error". If I run everything inside the procedure it executes correctly, but when I try to create the package body it does not work. I narrowed it down to this one procedure: ``` CREATE OR REPLACE PACKAGE TEG.SPCKG_AEC_CIS_SVC_PIPE_COMP IS TYPE OUT_CURSOR IS REF CURSOR; PROCEDURE CreateRptTables; END; GRANT EXECUTE ON TEG.SPCKG_AEC_CIS_SVC_PIPE_COMP TO TEG_USER; CREATE OR REPLACE PACKAGE BODY TEG.SPCKG_AEC_CIS_SVC_PIPE_COMP IS -------------------------------------------------------------------------------- PROCEDURE CreateRptTables IS /*========================================================================== 12/20/2013 TFS 24446 - Created function ==========================================================================*/ DECLARE CURSOR Cur_Comp IS SELECT * FROM TEG.AEC_CIS_SVC_PIPE_COMP; BEGIN FOR compRow in Cur_Comp LOOP If (compRow.cis_bus_res_loop <> compRow.cis_bus_res_loop_c) Then --Insert information into the details table INSERT INTO TEG.AEC_CIS_SVC_PIPE_DET( Facility_id, Serv_Pipe_Num) VALUES(compRow.Facility_ID, compRow.Serv_Pipe_Num); End If; END LOOP; END; END; SHOW ERRORS ```
You need to remove the "DECLARE" keyword. That is only needed in an anonymous PL/SQL block.
You can query `user_errors` or `all_errors` to see the issue, if `show errors` doesn't show you anything for some reason. An obvious problem in you procedure is that you have the `DECLARE` keyword. You only use that for anonymous blocks. Everything between the `PROCEDURE ... IS` and `BEGIN` is declaration in a named block.
Oracle Procedure compile error with declared cursor
[ "", "sql", "oracle", "procedure", "" ]
I'm trying to update my database to change some dates. I've done some research but did not find a solution, so I used two timestamps. I tried this method: ``` UPDATE `ps_blog_post` SET `time_add` = ROUND((RAND() * (1387888821-1357562421)+1357562421)) ``` Now the new date everywhere is: ``` 0000:00:00 ``` Any kind of help will be much appreciated.
You have the right idea, your conversion from the int literals you're using back to the timestamp seems off though - you're missing an explicit call to `FROM_UNIXTIME`: ``` UPDATE `ps_blog_post` SET `time_add` = FROM_UNIXTIME(ROUND((RAND() * (1387888821 - 1357562421) + 1357562421))) ```
Try this one to get timestamp between two timestamps ``` SET @MIN = '2013-01-07 00:00:00'; SET @MAX = '2013-12-24 00:00:00'; UPDATE `ps_blog_post` SET `time_add` = TIMESTAMPADD(SECOND, FLOOR(RAND() * TIMESTAMPDIFF(SECOND, @MIN, @MAX)), @MIN); ``` [**Fiddle**](http://sqlfiddle.com/#!2/d41d8/27851)
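Both answers use the same arithmetic: pick a random fraction of the seconds between the two bounds and add it to the lower bound. As a sketch, the same idea in plain Python (bounds taken from the second answer):

```python
import random
from datetime import datetime, timedelta

lo = datetime(2013, 1, 7)
hi = datetime(2013, 12, 24)

def random_timestamp(lo, hi):
    """Pick a uniformly random timestamp in [lo, hi)."""
    span = int((hi - lo).total_seconds())
    return lo + timedelta(seconds=random.randrange(span))

sample = random_timestamp(lo, hi)
```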
sql update random between two dates
[ "", "mysql", "sql", "random", "timestamp", "sql-update", "" ]
I'm using the row\_number() expression but I don't get the result I expected. I have a SQL table where some rows are duplicated. They have the same 'BATCHID', and for those I want to get the second row number; for the others I use the first row number. How can I do it? ``` SELECT * FROM (SELECT * , ROW_NUMBER() OVER (PARTITION BY BATCHID ORDER BY SCAQTY) Rn FROM SAYIMDCPC ) t WHERE Rn=1 ``` This code returns only the first rows, but I want to get the second rows for duplicated items.
`ROW_NUMBER()` gives every row a unique counter. You'd want to use `RANK()`, which is similar, but gives rows with identical values the same score: ``` SELECT * FROM (SELECT * , RANK() OVER (PARTITION BY batchid ORDER BY scaqty) rk FROM sayimdcpc) t WHERE rk = 1 ```
If some values are only shown once, but some twice (and perhaps more than twice), you don't want the "first" row, you want the "max" row. Try reversing your order condition: ``` SELECT * FROM (SELECT * , ROW_NUMBER() OVER (PARTITION BY BATCHID ORDER BY SCAQTY DESC) Rn FROM SAYIMDCPC ) t WHERE Rn=1 ``` As a side note, it's still better to explicitly list out all columns; for instance, you probably don't need `Rn` outside of this query...
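The difference between `ROW_NUMBER()` (always unique) and `RANK()` (ties share a number) can be seen side by side. A sketch via `sqlite3`, which supports window functions from SQLite 3.25 on (table name from the question, data hypothetical):

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sayimdcpc (batchid INTEGER, scaqty INTEGER)")
cur.executemany("INSERT INTO sayimdcpc VALUES (?, ?)",
                [(1, 10), (1, 10), (2, 5)])  # batch 1 has a tied duplicate

# ROW_NUMBER breaks ties arbitrarily; RANK gives tied rows the same number
rows = cur.execute("""
    SELECT batchid,
           ROW_NUMBER() OVER (PARTITION BY batchid ORDER BY scaqty) AS rn,
           RANK()       OVER (PARTITION BY batchid ORDER BY scaqty) AS rk
    FROM sayimdcpc
    ORDER BY batchid, rn
""").fetchall()
```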
SQL Server ROW_NUMBER() Issue
[ "", "sql", "sql-server", "row-number", "" ]
I want to combine the results of these two queries: ``` SELECT odate, Count(odate), Sum(dur) FROM table1 t1 GROUP BY odate ORDER BY odate; SELECT cdate, Count(cdate), Sum(dur) FROM table2 t2 GROUP BY cdate ORDER BY cdate; ``` and get something like this as a result: ``` odate,t1.count(odate),t1.sum(dur),t2.count(cdate),t2.sum(dur) order by odate ``` How can I do that? I get an error when I run this one: ``` select odate,count(odate),sum(dur) from table1 t1 group by odate order by odate union select cdate,count(cdate),sum(dur) from table2 t2 group by cdate order by cdate; ```
Looking at your desired result you need a `JOIN` rather then a `UNION`. You can do it like this ``` select coalesce(odate, cdate) odate, count1, sum1, count2, sum2 from ( select odate, count(odate) count1, sum(dur) sum1 from table1 group by odate ) t1 full join ( select cdate, count(cdate) count2, sum(dur) sum2 from table2 group by cdate ) t2 on t1.odate = t2.cdate order by odate; ``` Sample output: ``` | ODATE | COUNT1 | SUM1 | COUNT2 | SUM2 | |--------------------------------|--------|--------|--------|--------| | January, 01 2013 00:00:00+0000 | 2 | 30 | 2 | 30 | | January, 02 2013 00:00:00+0000 | 1 | 30 | (null) | (null) | | January, 03 2013 00:00:00+0000 | (null) | (null) | 1 | 30 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!4/2087c/1)** demo
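One caveat with the query above: `FULL JOIN` is Oracle/SQL Server syntax, and MySQL does not support it. A common workaround (sketched here via `sqlite3`, with tiny pre-aggregated tables standing in for the two grouped subqueries) is a `LEFT JOIN` unioned with the rows that exist only on the right:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE t1 (odate TEXT, cnt1 INTEGER, sum1 INTEGER);
    CREATE TABLE t2 (cdate TEXT, cnt2 INTEGER, sum2 INTEGER);
    INSERT INTO t1 VALUES ('2013-01-01', 2, 30), ('2013-01-02', 1, 30);
    INSERT INTO t2 VALUES ('2013-01-01', 2, 30), ('2013-01-03', 1, 30);
""")

# Emulate FULL JOIN: a left join, plus the rows that exist only on the right
rows = cur.execute("""
    SELECT COALESCE(t1.odate, t2.cdate) AS odate, cnt1, sum1, cnt2, sum2
    FROM t1 LEFT JOIN t2 ON t1.odate = t2.cdate
    UNION ALL
    SELECT t2.cdate, NULL, NULL, cnt2, sum2
    FROM t2
    WHERE t2.cdate NOT IN (SELECT odate FROM t1)
    ORDER BY odate
""").fetchall()
```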
What you're looking for is a UNION statement. Basically, you take the two queries and tell SQL that they can be joined together. the simplest (if all the fields are the same data type) would be something like: ``` select odate,count(odate),sum(dur) from table1 t1 group by odate UNION (select cdate,count(cdate),sum(dur) from table2 t2 group by cdate) ORDER BY odate; ``` Edit: your error is in where you put the ORDER BY statement. There's no point to order inside the UNION in most cases anyways. For more information see: <http://www.w3schools.com/sql/sql_union.asp>
combine outputs of two queries group by a common field
[ "", "sql", "database", "oracle", "" ]
This is my sql query: ``` SELECT c.*, CAST ( 0 as int ) Score FROM Caregiver c JOIN Elderly e ON EXISTS ( SELECT x.LanguageID FROM ( SELECT 1 AS LanguageID WHERE e.Chinese = 1 UNION ALL SELECT 2 AS LanguageID WHERE e.Malay = 1 UNION ALL SELECT 3 AS LanguageID WHERE e.Tamil = 1 UNION ALL SELECT 4 AS LanguageID WHERE e.English = 1 UNION ALL SELECT 5 AS LanguageID WHERE e.Others = 1 ) x INTERSECT SELECT y.LanguageID FROM ( SELECT 1 AS LanguageID WHERE c.Chinese = 1 UNION ALL SELECT 2 AS LanguageID WHERE c.Malay = 1 UNION ALL SELECT 3 AS LanguageID WHERE c.Tamil = 1 UNION ALL SELECT 4 AS LanguageID WHERE c.English = 1 UNION ALL SELECT 5 AS LanguageID WHERE c.Others = 1 ) y ) WHERE e.NRIC=@nric2 AND c.CaregiverID != (SELECT CaregiverID FROM RequestPairing WHERE ReqID=@reqid2) ``` which does not work because the subquery `( SELECT CaregiverID FROM RequestPairing WHERE ReqID=@reqid2)` is returning multiple values. My intention is to make use of the subquery to exclude certain rows from being returned by the main query. So any workaround for this?
You can change this condition to `NOT EXISTS`: ``` WHERE e.NRIC=@nric2 AND NOT EXISTS (SELECT CaregiverID FROM RequestPairing WHERE ReqID=@reqid2 AND CaregiverID = c.CaregiverID) ```
I think you want `not in`: ``` WHERE e.NRIC=@nric2 and c.CaregiverID not in (SELECT CaregiverID FROM RequestPairing WHERE ReqID=@reqid2) ```
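A caveat worth knowing when choosing between these two answers: if the subquery can return a `NULL`, `NOT IN` returns no rows at all, while `NOT EXISTS` behaves as expected. A small demonstration via `sqlite3` (hypothetical tables modeled on the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE caregiver (id INTEGER)")
cur.execute("CREATE TABLE pairing (caregiver_id INTEGER)")
cur.executemany("INSERT INTO caregiver VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO pairing VALUES (?)", [(1,), (None,)])  # note the NULL

# NOT IN against a list containing NULL is never true for any row
not_in = cur.execute(
    "SELECT id FROM caregiver WHERE id NOT IN (SELECT caregiver_id FROM pairing)"
).fetchall()
# NOT EXISTS ignores the NULL row and excludes only real matches
not_exists = cur.execute(
    """SELECT id FROM caregiver c
       WHERE NOT EXISTS (SELECT 1 FROM pairing p WHERE p.caregiver_id = c.id)"""
).fetchall()
```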
workaround for getting multiple values from subquery
[ "", "sql", "sql-server-2012", "subquery", "" ]
I need help with SQL query performance. I have a view, and when I run it as ``` select * from udv_salesAnalyze where _month=12 and _year=2012 ``` I get the result in 2 seconds, but when I add another filter, as in ``` select * from udv_salesAnalyze where _month=12 and _year=2012 and userRef in (1,2,5,6,9,11,12,13,14 ,19,22,25,26,27,31,34,35,37,38,39,41,47,48,49,53,54,57,59,61,62 ,65,66,67,68,69,70,74,77,78,79,80,83,86,87,88,90,91,92,94) ``` I get the result in 1 min 38 seconds. I modified the query to ``` select * from udv_salesAnalyze where _month=12 and _year=2012 and userRef in (select * from udf_dependedUsers(2)) ``` (here udf\_dependedUsers is a table-valued function) and got the result in 38 seconds. I also joined the table-valued function to the view, but again got the result in 38-40 seconds. Is there any other way to get the result faster? I'd very much appreciate a solution. Thanks a lot. ![execution plan :](https://i.stack.imgur.com/giyWw.png) Here is the code for udf\_dependedUsers: ``` ALTER FUNCTION [dbo].[udfn_dependedUsers] (@userId int) RETURNS @dependedUsers table (userRef int) AS BEGIN DECLARE @ID INT SET @ID = @userId ;WITH ret AS(SELECT userId FROM users WHERE userId = @ID UNION ALL SELECT t.userId FROM users t INNER JOIN ret r ON t.Manager = r.userId ) insert into @dependedUsers (userRef) select * from ret order by userId RETURN END ```
The problem was the indexes on the table which holds the user data. Here is the solution: 1- Write your query in the Query Editor and click the "Display Estimated Execution Plan" button. ![enter image description here](https://i.stack.imgur.com/GSwjB.png) 2- SQL Server gives you hints in the "Execution Plan" window about the index that should be created on the table. ![enter image description here](https://i.stack.imgur.com/AVdpK.png) 3- Right-click on the Execution Plan window and choose "Missing Index Details". ![enter image description here](https://i.stack.imgur.com/xGuty.png) 4- In the opened query page, rename the index (change [] to whatever name you want) and run the query. ![enter image description here](https://i.stack.imgur.com/iH3TH.png) 5- Then run your own query, which was slow as I mentioned in my question. After these steps my query ran in 4 seconds instead of 38.
Try using a left join ``` select * from udv_salesAnalyze MainTable LEFT JOIN (select * from udf_dependedUsers(2)) SelectiveInTable --Try direct query like that you wrote in user function ON SelectiveInTable.userRef = MainTable.userRef where _month=12 and _year=2012 and SelectiveInTable.userRef IS NOT NULL ```
Sql IN clause slows down performance
[ "", "sql", "sql-server", "performance", "in-clause", "" ]
I'm trying to query the SKUs that are not duplicated in the product table, like this: ``` SELECT entity_id, sku FROM catalog_product_entity WHERE sku NOT IN (SELECT sku FROM catalog_product_entity GROUP BY sku HAVING Count(*) > 1) ``` But it runs very slowly; it even hangs my PC. Does anyone have a better solution for optimizing this query? Please help!
Does the below query achieve the same thing? ``` SELECT entity_id, sku FROM catalog_product_entity GROUP BY sku HAVING Count(*) = 1 ```
Make sure you have index on `sku`. Also try to use this query: ``` select MAX(entity_id), sku FROM catalog_product_entity GROUP BY sku HAVING count(*)=1 ```
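Both answers collapse the self-referencing `NOT IN` into a single `GROUP BY ... HAVING COUNT(*) = 1` pass. A runnable sketch of the second answer's form via `sqlite3` (hypothetical data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE catalog_product_entity (entity_id INTEGER, sku TEXT)")
cur.executemany("INSERT INTO catalog_product_entity VALUES (?, ?)",
                [(1, 'A'), (2, 'B'), (3, 'B')])  # sku 'B' is duplicated

# One pass over the table: keep only skus that appear exactly once
unique_skus = cur.execute("""
    SELECT MAX(entity_id), sku
    FROM catalog_product_entity
    GROUP BY sku
    HAVING COUNT(*) = 1
""").fetchall()
```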
mysql query select not in run too slow
[ "", "mysql", "sql", "performance", "" ]
I am trying to calculate age from Date of Birth, which I was able to do successfully using [this thread](https://stackoverflow.com/questions/5773405/calculate-age-in-mysql-innodb). However, some of my DateOfBirth values are null, and using my formula below, the result comes back as "2012" instead of (blank/null). **Here is my table:** ``` 10/06/1990 01/09/1998 *null* *null* 02/16/1991 ``` **Here is my desired result:** ``` 23 25 (blank) (blank) 22 ``` **Here is my formula so far:** ``` year(curdate())-year(user.DateOfBirth) - (dayofyear(curdate()) < dayofyear(user.DateOfBirth)) AS 'Age' ``` **Here is what I'm actually getting:** ``` 23 25 2012 2012 22 ``` **Here are a couple of things I've tried to eliminate the "2012", which results in some encrypted text:** ``` IF(user.DateOfBirth > '0001-01-01',AboveFormula,'') CASE AboveFormula WHEN 2012 THEN '' ELSE AboveFormula END AS 'Age' ```
Try this: ``` SELECT CASE WHEN user.DateOfBirth IS NULL THEN "" ELSE year(curdate())-year(user.DateOfBirth) - (dayofyear(curdate()) < dayofyear(user.DateOfBirth)) END AS 'Age' FROM myTable ```
``` SELECT CASE WHEN DateOfBirth IS NULL THEN "" ELSE DATE_FORMAT(NOW(), '%Y') - DATE_FORMAT(DateOfBirth , '%Y') - (DATE_FORMAT(NOW(), '00-%m-%d') < DATE_FORMAT(DateOfBirth , '00-%m-%d')) END AS age FROM myTable ```
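Both answers wrap the age formula in a `CASE` on `NULL`. The same "subtract the years, minus one if this year's birthday is still ahead" logic in plain Python, with `None` standing in for SQL `NULL` (the reference date is hypothetical, so the resulting ages follow from it rather than matching the question's sample output exactly):

```python
from datetime import date

def age(dob, today):
    """Age in whole years; None (NULL) yields an empty string, like the SQL CASE."""
    if dob is None:
        return ""
    # Subtract one year if this year's birthday has not happened yet
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

today = date(2013, 12, 24)
ages = [age(d, today) for d in
        [date(1990, 10, 6), date(1998, 1, 9), None, None, date(1991, 2, 16)]]
```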
MySQL Age from Date of Birth (ignore nulls)
[ "", "mysql", "sql", "" ]
How do I check whether a parameter is null in a stored procedure? E.g. ``` select * from tb_name where name=@name ``` I need to check whether @name has a value or is null. How do I do it? Thanks.
Is this what you want? ``` select * from tb_name where name=@name and @name is not null ``` Actually, the extra check is unnecessary, because `NULL` will fail any comparison. Sometimes, `NULL` is used to mean "get all of them". In that case, you want: ``` select * from tb_name where name=@name or @name is null ```
In case you want results where Name is not null **and** equal to `@name` Try: ``` select * from tb_name where name=@name AND @name IS NOT NULL ``` If you want results where Name is null **Or** equal to `@name` Try: ``` select * from tb_name where name=@name OR @name IS NULL ``` Were you looking for one of those?
How to check parameter is not null value sql server
[ "", "sql", "sql-server", "" ]
I am writing a procedure to query some data in Oracle and grouping it: ``` Account Amt Due Last payment Last Payment Date (mm/dd/yyyy format) 1234 10.00 5.00 12/12/2013 1234 35.00 8.00 12/12/2013 3293 15.00 10.00 11/18/2013 4455 8.00 3.00 5/23/2013 4455 14.00 5.00 10/18/2013 ``` I want to group the data, so there is one record per account, the Amt due is summed, as well as the last payment. Unless the last payment date is different -- if the date is different, then I just want the last payment. So I would want to have a result of something like this: ``` Account Amt Due Last payment Last Payment Date 1234 45.00 13.00 12/12/2013 3293 15.00 10.00 11/18/2013 4455 22.00 5.00 10/18/2013 ``` I was doing something like ``` select Account, sum (AmtDue), sum (LastPmt), Max (LastPmtDt) from all my tables group by Account ``` But, that doesn't work for the last record above, because the last payment was only the $5.00 on 10/18, not the sum of them on 10/18. If I group by Account and LastPmtDt, then I get two records for the last, but I only want one per account. I have other data I'm querying, and I'm using a CASE, INSTR, and LISTAGG on another field (if combining them gives me this substring and that, then output 'Both'; else if it only gives me this substring, then output the substring; else if it only gives me the other substring, then output that one). It seems like I may need something similar, but not by looking for a specific date. If the dates are the same, then sum (LastPmt) and max (LastPmtDt) works fine, if they are not the same, then I want to ignore all but the most recent LastPmt and LastPmtDt record(s). Oh, and my LastPmt and LastPmtDt fields are already case statements within the select. They aren't fields that I already can just access. I'm reading other posts about RANK and KEEP, but to involve both fields, I'd need all that calculation of each field as well. 
Would it be more efficient to query everything, and then wrap another query around that to do the grouping, summing, and selecting fields I want? Related: [HAVING - GROUP BY to get the latest record](https://stackoverflow.com/questions/17380456/having-group-by-to-get-the-latest-record) Can someone provide some direction on how to solve this?
Try this: ``` select Account, sum ( Amt_Due), sum (CASE WHEN Last_Payment_Date = last_dat THEN Last_payment ELSE 0 END), Max (Last_Payment_Date) from ( SELECT t.*, max( Last_Payment_Date ) OVER( partition by Account ) last_dat FROM table1 t ) group by Account ``` Demo --> <http://www.sqlfiddle.com/#!4/fc650/8>
Rank is the right idea. Try this ``` select a.Account, a.AmtDue, a.LastPmt, a.LastPmtDt from ( select Account, sum (AmtDue) AmtDue, sum (LastPmt) LastPmt, LastPmtDt, RANK() OVER (PARTITION BY Account ORDER BY LastPmtDt desc) as rnk from all my tables group by Account, LastPmtDt ) a where a.rnk = 1 ``` I haven't tested this, but it should give you the right idea.
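Both answers use an analytic function to find each account's latest date. A condensed sketch of the first answer's approach - a window `MAX` per account, then a conditional `SUM` - via `sqlite3` (3.25+ for window functions), with the question's sample rows trimmed down:

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE pay (account INTEGER, amt_due REAL, last_pmt REAL, last_dt TEXT)")
cur.executemany("INSERT INTO pay VALUES (?, ?, ?, ?)", [
    (1234, 10.0, 5.0, '2013-12-12'), (1234, 35.0, 8.0, '2013-12-12'),
    (4455, 8.0, 3.0, '2013-05-23'), (4455, 14.0, 5.0, '2013-10-18'),
])

# Sum everything, but count payments only from the account's latest date
rows = cur.execute("""
    SELECT account, SUM(amt_due),
           SUM(CASE WHEN last_dt = max_dt THEN last_pmt ELSE 0 END),
           MAX(last_dt)
    FROM (SELECT pay.*, MAX(last_dt) OVER (PARTITION BY account) AS max_dt FROM pay)
    GROUP BY account ORDER BY account
""").fetchall()
```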
Select most current data in grouped set in Oracle
[ "", "sql", "oracle", "oracle11g", "group-by", "" ]
I need to select three columns from two different tables in SQL Server 2008. I tried the query below, but it shows this error message: ``` Column 'tb_new_product_Name_id.Product_Name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. ``` Query: ``` select pn.Product_Name as [Product Name], pn.Product_Id as [Product Id],COUNT(pnd.Product_id)+1 as duplicate_id from tb_new_product_Name_id as pn,tb_new_product_Name_id_duplicate as pnd where pn.Product_Name LIKE '%'+@product_name_id+'%' or (pn.Product_Id like '%'+@product_name_id+'%' and pnd.Product_Id like '%'+@product_name_id+'%' ); ``` Where did I make a mistake?
If you're going to have a `count` in your select statement you have to group on the other columns ``` select pn.Product_Name as [Product Name], pn.Product_Id as [Product Id],COUNT(pnd.Product_id)+1 as duplicate_id from tb_new_product_Name_id as pn ,tb_new_product_Name_id_duplicate as pnd where pn.Product_Name LIKE '%'+@product_name_id+'%' or (pn.Product_Id like '%'+@product_name_id+'%' and pnd.Product_Id like '%'+@product_name_id+'%' ) group by pn.Product_name, pn.Product_ID ``` You should also look at using [explicit join](https://stackoverflow.com/questions/44917/explicit-vs-implicit-sql-joins) syntax
You're using [aggregate function](http://en.wikipedia.org/wiki/Aggregate_function) COUNT, so you need to group by the other column that are not part in the aggregate. Try this: ``` select pn.Product_Name as [Product Name], pn.Product_Id as [Product Id],COUNT(pnd.Product_id)+1 as duplicate_id from tb_new_product_Name_id as pn,tb_new_product_Name_id_duplicate as pnd where pn.Product_Name LIKE '%'+@product_name_id+'%' or (pn.Product_Id like '%'+@product_name_id+'%' and pnd.Product_Id like '%'+@product_name_id+'%' ) group by pn.Product_Name, pn.Product_Id; ```
How to select columns from two tables sql server
[ "", "sql", "sql-server-2008", "" ]
I have fields called NoteID and VersionID in my SQL select statement. I need to include a calculated column in the select SQL query that will create a column called "Version No" in the result. Higher "Version ID" gets a higher "Version No" for the same NoteID So, in my query ``` select NoteID, VersionID, VersionNo from Notes ``` VersionNo should be calculated on the fly.
Try it this way ``` SELECT NoteID, VersionID, ROW_NUMBER() OVER (PARTITION BY NoteID ORDER BY VersionID) VersionNo FROM Notes ``` Sample Output: ``` | NOTEID | VERSIONID | VERSIONNO | |--------|-----------|-----------| | 1 | 1 | 1 | | 1 | 3 | 2 | | 1 | 5 | 3 | | 2 | 2 | 1 | | 2 | 6 | 2 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!3/256f4/2)** demo
Give it a try ``` select NoteID, VersionID, row_number() OVER (ORDER BY VersionID) AS VersionNo from Notes ``` Output ``` | NOTEID | VERSIONID | VERSIONNO | |--------|-----------|-----------| | 1 | 1 | 1 | | 2 | 2 | 2 | | 3 | 3 | 3 | | 4 | 5 | 4 | | 5 | 6 | 5 | ```
How can I add a calculated column based on another column in an SQL select statement
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I need to create an excel sheet that takes a series of characters and numbers as an input, checks it against an Access database and then returns a value that corresponds to the 7th column of the same row of the database. I do not know much of VBA, and i managed to compile this code, taking tidbits from various sources such as StackOverflow, MS Office website, ExelForum, and AllenBrowne.com However, when I run the code, I get an error, "No value given for one or more required parameters", and frankly speaking, I am stumped as to where exactly the error is originating at, and why it is doing so. My code is as follows: ``` Sub Query() On Error GoTo errhandler: Dim con As String Dim sql As String Dim inputc con = "Provider=Microsoft.Jet.OLEDB.4.0;" _ & "Jet OLEDB:Engine Type=" & Jet4x _ & "; Data Source=" & "C:\Path\file.mdb;" Dim cn As New ADODB.Connection Dim rs As New ADODB.Recordset inputc = Range("B1").Value sql = "SELECT * FROM TABLE1 WHERE 'Custno' = " & input & ";" Set cn = New ADODB.Connection cn.Open con cn.Execute sql Set rs = new ADODB.Recordset rs.Open sql, cn, adOpenKeyset, adLockOptimistic Debug.Print rs.Fields("7") MsgBox rs.Fields("7") Exit Sub errhandler: MsgBox Err.Description End Sub ``` Any help provided is highly appreciated. 
Edit: I changed the code to this, but now I get an error that no values match: ``` Sub Query() On Error GoTo errhandler: Dim con As String Dim sql As String Dim inputc con = "Provider=Microsoft.Jet.OLEDB.4.0;" _ & "Jet OLEDB:Engine Type=" & Jet4x _ & "; Data Source=" & "C:\Path\file.mdb;" Dim cn As New ADODB.Connection Dim rs As New ADODB.Recordset inputc = Range("B1").Value sql = "SELECT * FROM TABLE1 WHERE 'Custno' = 'inputc';" Set cn = New ADODB.Connection cn.Open con cn.Execute sql Set rs = new ADODB.Recordset rs.Open sql, cn, adOpenKeyset, adLockOptimistic Debug.Print rs.Fields("7") MsgBox rs.Fields("7") Exit Sub errhandler: MsgBox Err.Description End Sub ``` I have checked the values, and I am searching for values that exist in the database.
Try this: Public variables: ``` Public db As DAO.Database Public rsttemp As DAO.Recordset Public acApp As Access.Application ``` Check if the database exists and open it: ``` Sub CheckDB() 'database path s = "C:\Path\file.mdb" If db Is Nothing Then Set acApp = New Access.Application acApp.OpenCurrentDatabase s Set db = acApp.CurrentDb End If End Sub ``` Close the DB: ``` Sub CloseDB() If Not acApp Is Nothing Then acApp.CloseCurrentDatabase End If End Sub ``` Main Sub where you open the recordset: ``` Sub SqlExecute() Call CheckDB inputc = Range("B1").Value sql = "SELECT * FROM TABLE1 WHERE Custno = """ & inputc & """;" Set rsttemp = db.OpenRecordset(sql, dbOpenSnapshot) MsgBox "Records count: " & rsttemp.RecordCount MsgBox rsttemp.Fields("7") rsttemp.Close Set rsttemp = Nothing Call CloseDB End Sub ```
A few things: * Remove the `cn.Execute` line. * Use the field name in the `Print` and `MsgBox` statements (in quotes). * Put `Jet4x` inside the quotes, since it is not a variable. Also make sure to close your connection and recordset and set them to `Nothing` at the end.
Error while querying from access in Excel
[ "", "sql", "vba", "excel", "" ]
``` Account Number Balance SequenceNo 12345 100,00 1 12345 120,52 2 12345 90,02 3 54646 100,56 1 51224 98 1 51224 52 2 ``` I have a table with two columns: account number and balance. How can I generate the SequenceNo over account number, as shown above? Each account has its own sequence numbers. Please help.
You can achieve this simply by using [row\_number() over()](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions156.htm#i86310) analytic function: ``` SQL> with t1(Account_Number, Balance) as( 2 select 12345, 100.00 from dual union all 3 select 12345, 120.52 from dual union all 4 select 12345, 90.02 from dual union all 5 select 54646, 100.56 from dual union all 6 select 51224, 98 from dual union all 7 select 51224, 52 from dual 8 ) 9 select Account_Number 10 , balance 11 , row_number() over(partition by account_number 12 order by account_number) as sequence_no 13 from t1 14 ; ``` Result: ``` ACCOUNT_NUMBER BALANCE SEQUENCE_NO -------------- ---------- ----------- 12345 100 1 12345 120.52 2 12345 90.02 3 51224 98 1 51224 52 2 54646 100.56 1 6 rows selected ```
The solution is to use the row\_number() analytic function. Here is a [SQLFiddle](http://sqlfiddle.com/#!4/1438a/2 "SQL Fiddle").
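For reference, a minimal sketch of that pattern (the `accounts` table and column names follow the question; the `created_at` ordering column is an assumption, since ordering the window by the partition key itself yields an arbitrary sequence):

```
SELECT account_number,
       balance,
       ROW_NUMBER() OVER (PARTITION BY account_number
                          ORDER BY created_at) AS sequence_no
FROM   accounts;
```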
How to get sequence number for each value set over a column
[ "", "sql", "performance", "oracle", "plsql", "" ]
I have a table of categories and a table of items. Each item has latitude and longitude to allow me to search by distance. What I want to do is show each category and how many items are in that category, within a distance chosen. E.g. Show all TVs in Electronics category within 1 mile of my own latitude and longitude. Here's what I'm trying but I cannot have two columns within an alias, obviously, and am wondering if there is a better way to do this? [Here is a SQL fiddle](http://sqlfiddle.com/#!2/b22a7/42) Here's the query: ``` SELECT *, ( SELECT count(*),( 3959 * acos( cos( radians(52.993252) ) * cos( radians( latitude ) ) * cos( radians( longitude ) - radians(-0.412470) ) + sin( radians(52.993252) ) * sin( radians( latitude ) ) ) ) AS distance FROM items WHERE category = category_id group by item_id HAVING distance < 1 ) AS howmanyCat, ( SELECT name FROM categories WHERE category_id = c.parent ) AS parname FROM categories c ORDER BY category_id, parent ```
First, start with the distance calculation for each item, then join in the category information and aggregate and filter:
```
select c.*, count(i.item_id) as numitems
from categories c left outer join
     (SELECT i.*,
             ( 3959 * acos( cos( radians(52.993252) ) * cos( radians( latitude ) ) *
               cos( radians( longitude ) - radians(-0.412470) ) + sin( radians(52.993252) ) *
               sin( radians( latitude ) ) )
             ) AS distance
      FROM items i
     ) i
     on c.category_id = i.category_id and distance < 1
group by c.category_id;
```
Is this what you're looking for: ``` SELECT categories.name, count(items.item_id) as cnt FROM items JOIN categories ON categories.category_id=items.category WHERE ( 3959 * acos( cos( radians(52.993252) ) * cos( radians( latitude ) ) * cos( radians( longitude ) - radians(-0.412470) ) + sin( radians(52.993252) ) * sin( radians( latitude ) ) ) ) < 1 GROUP BY categories.category_id; ``` this gives: Tvs | 1
Search by alias without showing the alias
[ "", "mysql", "sql", "" ]
I have a delicate situation wherein some records in my database are inexplicably missing. Each record has a sequential number, and the number sequence skips over entire blocks. My server program also keeps a log file of all the transactions received and posted to the database, and those missing records do appear in the log, but not in the database. The gaps of missing records coincide precisely with the dates and times of the records that show in the log. The project, still currently under development, consists of a server program (written by me in Visual Basic 2010) running on a development computer in my office. The system retrieves data from our field personnel via their iPhones (running a specialized app also developed by me). The database is located on another server in our server room. No one but me has access to my development server, which holds the log files, but there is one other person who has full access to the server that hosts the database: our head IT guy, who has complained that he believes he should have been the developer on this project. It's very difficult for me to believe he would sabotage my data, but so far there is no other explanation that I can see. Anyway, enough of my whining. What I need to know is, is there a way to determine who has done what to my database?
If you are using identity for your "sequential number", and your insert statement errors out the identity value will still be incremented even though no record has been inserted. Just another possible cause for this issue outside of "tampering".
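A minimal T-SQL sketch of that effect (the table name is hypothetical); the failed insert still consumes an identity value, so the next successful insert leaves a gap:

```
CREATE TABLE #t (id INT IDENTITY(1,1) PRIMARY KEY, val INT NOT NULL);

INSERT INTO #t (val) VALUES (1);        -- gets id = 1
BEGIN TRY
    INSERT INTO #t (val) VALUES (NULL); -- errors out (NOT NULL violation)
END TRY
BEGIN CATCH
END CATCH;
INSERT INTO #t (val) VALUES (2);        -- gets id = 3, not 2
```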
Look at the transaction log if it hasn't been truncated yet: * *[How to view transaction logs in SQL Server 2008](https://stackoverflow.com/questions/4507509/how-to-view-transaction-logs-in-sql-server-2008)* * *[How do I view the transaction log in SQL Server 2008?](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/e64f6f30-fd62-4ac4-b8bf-bef98b85ecbe/how-do-i-view-the-transaction-log-in-sql-server-2008?forum=sqldisasterrecovery)*
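One commonly cited way to peek at an untruncated log is the undocumented `fn_dblog` function; it is unsupported and its output varies by version, so treat this as a rough sketch and run it against a restored copy where possible:

```
SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName
FROM   fn_dblog(NULL, NULL)
WHERE  Operation = 'LOP_DELETE_ROWS';
```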
SQL Server 2012: A way to see if the database has been tampered with?
[ "", "sql", "sql-server", "database", "sql-server-2012", "" ]
For example I have this query:
```
SELECT * FROM `table` WHERE `id` = '5123'
```
In this query it will search for all results which have the id of 5123. But I only want 1 row returned, so I could add LIMIT 1 to the query. But let's say that `id` is a unique index; will it still continue to search for results after it has found one?
No. One of the things indexes do is precisely what you suggest: > MySQL uses indexes for these operations: > > * To find the rows matching a WHERE clause quickly. > * To eliminate rows from consideration. If there is a choice between multiple indexes, MySQL normally uses the index that finds the smallest number of rows (the most selective index). From <http://dev.mysql.com/doc/refman/5.6/en/mysql-indexes.html>
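You can verify this yourself with `EXPLAIN`: when `id` has a UNIQUE (or PRIMARY KEY) index, the access type is `const`, meaning MySQL reads at most one row and stops, so `LIMIT 1` adds nothing:

```
EXPLAIN SELECT * FROM `table` WHERE `id` = '5123';
-- type = const: the unique index guarantees at most one matching row
```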
The more clauses you have in your query, the more burden it puts on your server. If there is a unique constraint defined on this column then there isn't a chance of duplicates in it, so I would personally avoid using LIMIT.
Should I put a limit when searching for unique indexes?
[ "", "mysql", "sql", "" ]
I know there is no `WHERE` clause in Firebase. But how can we implement pattern searching in Firebase, as we can do in SQL using the `LIKE` operator? How can we implement the following SQL query in Firebase?
```
SELECT * FROM Customers
WHERE City LIKE 's%';
```
# The Idea A near solution is to create an index of your data which is indexed by City (or whichever other field you'd like to search by) in a server-side NodeJS script. This will *not quite* replicate the behavior of `LIKE`, however it will allow for easy querying by City name, which may eliminate the need for that behavior. # The Node Script This is typically done via a simple server-side script written in NodeJS. Imagine a scenario where your user data was stored in `/users/data`. Your script would look like this. ``` var ref = new Firebase("<Your Firebase>.firebaseio.com"); // A Firebase ref where all our users are stored var userDataRef = ref.child('users/data'); // A Firebase ref which is where we store our index var byCityRef = ref.child("users/byCity"); // Then bind to users/data so we can index each user as they're added userDataRef.on("child_added", function (snapshot) { // Load the user details var user = snapshot.val(); // Use the snapshot name as an ID (i.e. /users/data/Tim has an ID of "Tim") var userID = snapshot.name(); // Push the userID into users/byCity/{city} byCityRef.child(user.city).push(userID); }); ``` This script will create a structure like this: ``` { "users": { "data": { "Tim": {"hair": "red", "eyes": "green", "city": "Chicago"} }, "byCity": { "Chicago": { "-asd09u12": "Tim" } } } } ``` # The Client Script Once we've indexed our data, querying against it is simple and can be done in two easy steps. 
```
var ref = new Firebase("<Your Firebase>.firebaseio.com");
var userDataRef = ref.child('users/data');
var byCityRef = ref.child('users/byCity');

// Load children of /users/byCity/Chicago
byCityRef.child('Chicago').on("child_added", function (snapshot) {
    // Find each user's unique ID
    var userID = snapshot.val();

    // Then load the User's data from /users/data/{ID}
    userDataRef.child(userID).once("value", function (snapshot) {
        // userID = "Tim"
        // user = {"hair": "red", "eyes": "green", "city": "Chicago"}
        var user = snapshot.val();
    });
});
```
Now you have the near-realtime load speed of Firebase with powerful querying capabilities!
You can't. You would need to manually enumerate your objects and search yourself. For a small data set that might be OK, but you'd be burning bandwidth with larger data sets. Firebase supports some limited querying capabilities using priorities, but still not what you are asking for. The reason they don't support broad queries like that is because they are optimized for speed. You should consider another service more appropriate for search, such as Elasticsearch or a traditional RDBMS. You can still use Firebase alongside those other systems to take advantage of its strengths - near-realtime object fetching and synchronization.
firebase: implementing pattern searching in firebase
[ "", "sql", "angularjs", "firebase", "" ]
I am trying to separate a `city/state/zip` field into the city, state, and zip. Normally I would do this with `charindex` of `','` to get the city and state, and `isnumeric` and `right()` for the zip. This will work fine for the zip, but most of the rows in the data I am working with now have no commas `City ST Zip`. Is there a way to identify the index of two upper case characters? If not, does anybody have a better idea than just a case statement checking for each state individually? **EDIT:** I found the PATINDEX/COLLATE option to work fairly intermittently. See my answer below.
I found the PATINDEX/COLLATE option to work fairly intermittently. Here is what I ended up doing: ``` --get rid of the sparsely used commas --get rid of the duplicate spaces update MyTable set CityStZip= replace( replace( replace(CityStZip,' ',' '), ' ',' '), ',','') select --check if state and zip are there and then grab the city case when isNumeric(right(CityStZip,1))=1 then left(CityStZip,len(CityStZip)-charindex(' ',reverse(CityStZip), charindex(' ',reverse(CityStZip))+1)+1) --no zip. check for state when left(right(CityStZip,3),1) = ' ' then left(CityStZip,len(CityStZip)-charIndex(' ',reverse(CityStZip))) else CityStZip end as City, --check if zip is there and then grab the city case when isNumeric(right(CityStZip,1))=1 then substring(CityStZip, len(CityStZip)-charindex(' ',reverse(CityStZip), charindex(' ',reverse(CityStZip))+1)+2, 2) --no zip. check if 3rd to last char is a space and grab the last two chars when left(right(CityStZip,3),1) = ' ' then right(CityStZip,2) end as [State], --grab everything after the last space if the last character is numeric case when isNumeric(right(CityStZip,1))=1 then substring(CityStZip, len(CityStZip)-charindex(' ',reverse(CityStZip))+1, charindex(' ',reverse(CityStZip))) end as Zip from MyTable ```
The reason why `PATINDEX` appears to work intermittently is that you cannot use a character range (i.e. `A-Z`) to accomplish a case-sensitive search, even if using a case-sensitive collation. The issue is that character ranges work like sorting, and case-sensitive sorting groups the upper-case letters with their lower-case equivalents, just like it would be ordered in a dictionary. Range sorting is really: a,A,b,B,c,C,d,D,etc. Or, depending on the collation, it might be: A,a,B,b,C,c,D,d,etc (there are 31 Collations that sort upper-case first). When doing this in a case-sensitive collation, that merely groups all `A` entries together, separate from the `a` entries, whereas in a case-*in*sensitive sort they would be intermixed. But if you specify each of the letters individually (hence not using a range), then it will work as expected: ``` PATINDEX(N'%[ABCDEFGHIJKLMNOPQRSTUVWXYZ][ABCDEFGHIJKLMNOPQRSTUVWXYZ]%', [CityStZip] COLLATE Latin1_General_100_CS_AS) ``` The reason that `PATINDEX` and `LIKE` (both of which allow for a single character class of `[A-Z]`) work this way is that the `[start-end]` syntax is *not* a Regular Expression. Many people claim that `PATINDEX` and `LIKE` support "limited" RegEx due to supporting this syntax, but that is not true. It is merely a very similar (and a confusingly similar) syntax to RegEx where `[A-Z]` would normally *not* include any lower-case matches. Of course, if you are guaranteed to only be searching on the US-English letters of A-Z, then a binary collation (i.e. one ending in `_BIN2`; don't use ones ending in `_BIN` as they have been deprecated since SQL Server 2005 was introduced, I believe) should work. 
``` PATINDEX(N'%[A-Z][A-Z]%', [CityStZip] COLLATE Latin1_General_100_BIN2) ``` --- For more details about case-sensitive matching, especially in regards to including Unicode / NVARCHAR data, please see my related answer on DBA.StackExchange: [How to find values with multiple consecutive upper case characters](https://dba.stackexchange.com/questions/122612/how-to-find-values-with-multiple-consecutive-upper-case-characters/122625#122625)
Get index of two consecutive upper case characters
[ "", "sql", "sql-server", "collation", "string-parsing", "" ]
I haven't created any columns (`PAYMENTTERM`) in my tables with double quotes, but I'm still getting the following error:
```
Error(26,9): PL/SQL: SQL Statement ignored
Error(27,29): PL/SQL: ORA-00904: "P"."PAYMENTTERM": invalid identifier
```
Please point out what I am doing wrong and what needs to be corrected:
```
CREATE OR REPLACE PROCEDURE PAYTERMUPDATE
IS
  RecordCount INT;
BEGIN
  SELECT count(1)
  INTO RecordCount
  FROM docmeta d
  INNER JOIN temp_pay_term p ON d.XPROJECT_ID = p.PID
  WHERE lower(d.PAYMENTTERM) <> lower(p.PAYMENTTERM);

  DBMS_OUTPUT.PUT_LINE('');
  DBMS_OUTPUT.PUT_LINE('There were ' || to_char(RecordCount) || ' records where payment term is mismatch.');
  DBMS_OUTPUT.PUT_LINE('');

  FOR X IN ( SELECT p.PID, p.PAYMENTTERM
             FROM docmeta d, temp_pay_term p
             WHERE d.XPROJECT_ID = p.PID
               AND d.PAYMENTTERM <> p.PAYMENTTERM)
  LOOP
    UPDATE docmeta
    SET d.PAYMENTTERM = p.PAYMENTTERM
    WHERE XPROJECT_ID = X.PID;
  END LOOP;
  COMMIT;

EXCEPTION
  WHEN OTHERS THEN
    raise_application_error(-1000, 'Error occured, No payment term were updated');
END PAYTERMUPDATE;
```
In this line: ``` UPDATE docmeta SET d.PAYMENTTERM = p.PAYMENTTERM WHERE XPROJECT_ID = X.PID ; ``` You must add an alias on docmeta (d) and the p.PAYMENTTERM alias must be X So, change in this way your query: ``` UPDATE docmeta d SET d.PAYMENTTERM = X.PAYMENTTERM WHERE XPROJECT_ID = X.PID ; ```
Use the loop variable (`X` in your case) as the alias for `PAYMENTTERM`, and also declare the alias `d` for `DOCMETA`.
Invalid identifier in Oracle SQL
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
I am working with SQL Server. I have a SQL query like this:
```
select t.TBarcode, l.Timeinterval
from Transaction_tbl t
LEFT OUTER JOIN Location_tbl l ON t.Locid = l.Locid
```
I'm getting this result:
```
Tbarcode   Timeinterval:
1          00:10:00
2          00:05:00
3          00:20:00
```
Instead of that `timeinterval`, I want to get my `timeinterval` output like this:
```
Timeinterval:
10
05
20
```
What changes do I have to make in my query to get this result?
If the `SQL Datatype` of `l.TimeInterval` is `datetime` or `time` then : ``` select t.TBarcode, CAST(DATEPART(minute,l.Timeinterval) as varchar(2)) from Transaction_tbl t LEFT OUTER JOIN Location_tbl l ON t.Locid = l.Locid ```
If `TimeInterval` is a date/time value, you could use `DATEDIFF`. [DATEDIFF](http://technet.microsoft.com/en-us/library/ms189794.aspx)
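For example, a sketch using the tables from the question (this assumes `Timeinterval` is a time/datetime value; note that `DATEDIFF` returns the total elapsed minutes as an integer, which differs from `DATEPART(minute, ...)` once the interval exceeds an hour):

```
SELECT t.TBarcode,
       DATEDIFF(minute, '00:00:00', l.Timeinterval) AS minutes_elapsed
FROM Transaction_tbl t
LEFT OUTER JOIN Location_tbl l ON t.Locid = l.Locid;
```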
Convert time result to single digit value in SQL Server
[ "", "sql", "sql-server", "" ]
```
select e.employee_id,e.last_name,e.department_id,
       d.department_name,l.city,l.location_id
from employees e
join departments d
on e.department_id=d.department_id
join locations l
on l.location_id=d.location_id
and e.manager_id=149;
```
Can we replace the `ON` clause with the `USING` clause? I used the Employees, Departments, and Locations tables in Oracle 11g. Try it.
No you cannot just replace `ON` with `USING`. But you can rewrite the query to contain `USING` clause in joins. See correct syntax below: ``` select e.employee_id, e.last_name, department_id, --Note there is no table prefix d.department_name, l.city, location_id --Note there is no table prefix from employees e join departments d using (department_id) join locations l using (location_id) where e.manager_id = 149; ```
The USING **clause** can be substituted for the ON **clause**, but just replacing `ON` with `USING` is not sufficient. * The USING clause doesn't use the equals sign syntax, so the replacement cannot be done word for word - This URL states the USING clause can be used in place of the ON clause; but this refers to the **CLAUSES**, not to only the keywords: <http://docs.oracle.com/javadb/10.6.2.1/ref/rrefsqljusing.html> - The syntax of the USING clause would be:
```
select *
from table1 P join table2 S
using (col1);
```
Here `col1` is present in both `table1` and `table2`. - You can't use a table prefix in the USING clause as you can in the ON clause (you can't use `P.col1` in a USING statement in ORACLE).
USING clause in oracle 11g
[ "", "sql", "oracle", "join", "oracle11g", "" ]
I am developing iPhone applications and using an SQLite database. I have an answers\_ELA table as follows:
```
Id  |  observation_id | question_no   | answer |  answer2
-------------------------------------------------------------------
1   |     60          |      0        |   2    |    1
2   |     60          |      1        |   3    |    4
3   |     61          |      0        |   3    |    4
4   |     61          |      1        |   3    |    2
5   |     62          |      0        |   3    |    1
6   |     62          |      1        |   1    |    2
```
And I want this result:
```
question_no  |  total
-------------------------
0            |   14
1            |   15
```
But it's giving:
```
question_no  |  total
-------------------------
0            |   4
1            |   3
```
i.e. the totals come from the last observation\_id (62) only. The query is as follows:
```
SELECT question_no, (answer+answer2) total
FROM answers_ELA
WHERE observation_id IN (60,61,62)
GROUP BY question_no
```
What's wrong with this, and what should I do to get the desired result?
Try this: ``` SELECT question_no, SUM(answer+answer2) total FROM answers_ELA WHERE observation_id IN (60,61,62) GROUP BY question_no ```
You have no aggregation function assigned. Try SUM(answer+answer2).
group by clause is not producing the expected sum
[ "", "sql", "sqlite", "select", "group-by", "sum", "" ]
```
DELETE edms_personalisation
WHERE mail_id NOT IN (SELECT mail_id
                      FROM edms_mail)
AND rownum <= 25000;
```
Hello, can the SQL above (run from a SQLPLUS interface) be written in a more efficient way? I was recently alerted to the fact that the edms\_personalisation table has approx 80 million rows and the edms\_mail table has approx 120,000 rows. I want to create a clear-down script that only affects a limited number of rows per run, so I can put it on a cron job and have it run every 5 minutes or so. Any ideas welcome. (Note: appropriate indexes do EXIST on the DB tables.)
```
DELETE edms_personalisation p
WHERE NOT EXISTS (SELECT 'X'
                  FROM edms_mail m
                  WHERE m.mail_id = p.mail_id)
AND rownum <= 25000;
```
or
```
DELETE edms_personalisation
WHERE mail_id IN (SELECT mail_id FROM edms_personalisation
                  MINUS
                  SELECT mail_id FROM edms_mail)
AND rownum <= 25000;
```
If this is Oracle, I would write PL/SQL to bulk collect all the qualifying mail ids and issue a FORALL DELETE querying the index directly (bulk binding). You can do it in batches too. Otherwise, since the table being deleted from is so big, it is wiser to copy the good data into a temp table, truncate the table, and reload it from the temp table. When it has to be done in a frequent cycle, the methods above have to be used! Try this! Good luck!
1. I think the delete statement in the question will work just fine. The question is how much redo log the delete statement will generate.
2. The general rule of thumb would be to delete rows batch-wise with a commit in between, although the batch size should not burst the online redo log files. [I suppose the question is related to Oracle.]
3. If the delete is a one-time activity, but you are doing it every 5 minutes with a batch of 25000 to cope with the number of rows to be deleted, then copy the required rows out to a new table, truncate the actual table, and transfer the data from the new table back to the actual table. Of course, doing it every five minutes would not make sense, according to me.
4. If the data to be deleted will be huge for the first run but not for the subsequent runs, then I would suggest following the method mentioned in the 3rd point for the first run and the method mentioned in the 1st point for the subsequent runs.

***DISCLAIMER: I think others would have faced the same problem and would have solved it with a better solution than mentioned above.***
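A sketch of the batch-wise delete described in point 2, written as an Oracle PL/SQL loop (batch size and commit frequency are illustrative; size them so each cycle's redo stays within the online redo log capacity):

```
BEGIN
  LOOP
    DELETE FROM edms_personalisation p
     WHERE NOT EXISTS (SELECT 1
                         FROM edms_mail m
                        WHERE m.mail_id = p.mail_id)
       AND ROWNUM <= 25000;
    EXIT WHEN SQL%ROWCOUNT = 0;   -- stop once nothing is left to delete
    COMMIT;                       -- commit each batch to cap redo per cycle
  END LOOP;
  COMMIT;
END;
/
```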
SQL delete query optimization
[ "", "sql", "query-optimization", "sqlplus", "sql-delete", "" ]
I'm using SQL Server. I'm also relatively new to writing SQL... in a strong way. It's mostly self-taught, so I'm probably missing key ideas in terms of proper format. I have a table called 'SiteResources' and a table called 'ResourceKeys'. SiteResources has an integer that corresponds to the placement of a string ('siteid') and a 'resourceid', which is an integer id that corresponds to 'resourceid' in ResourceKeys. ResourceKeys also contains a string for each key it contains ('resourcemessage'). Basically, these two tables are responsible for representing how strings are stored and displayed on a web page. What is the best way to consistently update these two tables? Let's say I have 5000 rows in SiteResources and 1000 rows in ResourceKeys. I could have an Excel sheet, or a small program, which generates 5000 singular update statements, like:
```
update SiteResources set resourceid = 0
WHERE siteid IS NULL
AND resourceid IN (select resourceid from ResourceKeys where resourcemessage LIKE 'FooBar')
```
I could have thousands of those singular update statements, with FooBar representing each string in the database I might want to change at once, but isn't there a cleaner way to write such a massive number of update statements? From what I understand, I should be wrapping all of my statements in begin/end/go too, just in case of failure - which leads me to believe there is a more systematic way of writing these update statements? Is my hunch correct? Or is the way I'm going about this correct / ideal? I could change the structure of my tables, I suppose, or the structure of how I store data - that might simplify things - but let's just say I can't do that, in this instance.
As far as I understand, you just need to update every row in `SiteResources` whose 'placement of a string' parameter (`siteid`) is empty. If so, here is the code:
```
UPDATE a
SET resourceid = 0
FROM SiteResources a
WHERE EXISTS (select * from ResourceKeys b where a.resourceid = b.resourceid)
AND a.siteid IS NULL
```
For some specific things like 'FooBar'-rows you can add it like this:
```
UPDATE a
SET resourceid = 0
FROM SiteResources a
WHERE EXISTS (select * from ResourceKeys b
              where a.resourceid = b.resourceid
                and b.resourcemessage IN ('FooBar', 'FooBar2', 'FooBar3', ...))
AND a.siteid IS NULL
```
With table-valued parameters, you can pass a table from your client app to the SQL batch that your app submits for execution. You can use this to pass a list of all the strings you need to update to a single `UPDATE` that updates all rows at once. That way you don't have to worry about any of your concerns: the number of updates, transactional atomicity, error handling. As a bonus, performance will be improved. I recommend that you do a bit of research into what TVPs are and how they are used.
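A sketch of what that could look like in T-SQL (the type and procedure names are invented for illustration; the UPDATE mirrors the one in the question):

```
-- Table type holding the list of strings to update in one call
CREATE TYPE dbo.ResourceMessageList AS TABLE (resourcemessage NVARCHAR(400));
GO
CREATE PROCEDURE dbo.ResetSiteResources
    @messages dbo.ResourceMessageList READONLY
AS
BEGIN
    -- One set-based UPDATE replaces thousands of singular statements
    UPDATE sr
       SET sr.resourceid = 0
      FROM SiteResources AS sr
      JOIN ResourceKeys  AS rk ON rk.resourceid = sr.resourceid
      JOIN @messages     AS m  ON rk.resourcemessage = m.resourcemessage
     WHERE sr.siteid IS NULL;
END
GO
```

The client then fills a DataTable (or equivalent) with the strings and passes it as the `@messages` parameter in a single call, so the whole change runs in one implicit transaction.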
Structuring many update statements in SQL Server
[ "", "sql", "sql-server", "sql-update", "structure", "" ]
In a MySQL Database, I have two tables: Users and Items The idea is that Users can create as many Items as they want, each with unique IDs, and they will be connected so that I can display all of the Items from a particular user. Which is the better method in terms of performance and clarity? Is there even a real difference? 1. Each User will contain a column with a list of Item IDs, and the query will retrieve all matching Item rows. 2. Each Item will contain a column with the User's ID that created it, and the query will call for all Items with a specific User ID.
**The 2nd approach is better**, because it defines a `one-to-many` relationship from the `USER` table to the `ITEM` table. You can create a foreign key on the `ITEM` table's `USERID` column which refers to the `USERID` column in the `USER` table. You can easily join both tables, and an index can also be used for that query.
Let me just clarify why approach 2 is superior... The approach 1 means you'd be packing several distinct pieces of information within the same database field. That violates the principle of [atomicity](http://en.wikipedia.org/wiki/First_normal_form#Atomicity) and therefore the [1NF](http://en.wikipedia.org/wiki/First_normal_form). As a consequence: * Indexing won't work (bad for performance). * [FOREIGN KEY](http://en.wikipedia.org/wiki/Foreign_key)s and type safety won't work (bad for data integrity). Indeed, the approach 2 is the standard way for representing such "one to many" relationship.
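A minimal schema sketch of approach 2 (names adapted from the question); the FOREIGN KEY guarantees every item points at an existing user, and the index on `user_id` keeps the per-user lookup fast:

```
CREATE TABLE users (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE items (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    title   VARCHAR(100) NOT NULL,
    CONSTRAINT fk_items_user FOREIGN KEY (user_id) REFERENCES users (id),
    INDEX idx_items_user (user_id)
);

-- All items created by one user:
SELECT i.* FROM items i WHERE i.user_id = 42;
```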
Connecting Two Items in a Database - Best method?
[ "", "mysql", "sql", "performance", "database-design", "" ]
I have two tables in MySQL: Products: ``` id | value ================ 1 | foo 2 | bar 3 | foobar 4 | barbar ``` And properties: ``` product_id | property_id ============================= 1 | 10 1 | 11 2 | 15 2 | 16 3 | 10 3 | 11 4 | 10 4 | 16 ``` I want to get products that have determined properties. For example I need to get all products that have properties with ids 10 and 11. And I expect products with ids 1 and 3 but not 4! Is it possible in mysql or I need to use PHP for it? Thank you!
> with ids 10 and 11 Here are 2 solutions:
```
SELECT p.id,
       p.value
FROM   products p
       INNER JOIN properties pr
               ON p.id = pr.product_id
                  AND pr.property_id IN ( 10, 11 )
GROUP  BY p.id, p.value
HAVING Count(DISTINCT pr.property_id) = 2;
```
or....
```
SELECT p.id,
       p.value
FROM   products p
       INNER JOIN properties pr1
               ON p.id = pr1.product_id
                  AND pr1.property_id = 10
       INNER JOIN properties pr2
               ON p.id = pr2.product_id
                  AND pr2.property_id = 11;
```
As for excluding rows - add a NOT EXISTS clause, or do an additional left join and exclude matching rows.
```
SELECT *
FROM products
WHERE id IN (SELECT product_id
             FROM properties
             WHERE property_id IN ( 10, 11 )
             GROUP BY product_id
             HAVING Count(DISTINCT property_id) = 2);
```
Select row from one table where multiple rows in another table have determined values
[ "", "mysql", "sql", "" ]
I am using two subqueries. If I pass a null or empty value, it throws an exception. Exception message:
```
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
```
My query:
```
SELECT A.Product_Name AS [Product Name],
       A.Product_Id AS [Product Id],
       B.[DuplicateId]
FROM tb_new_product_Name_id AS A,
     ( SELECT COUNT(Product_id)+1 AS [Duplicate Id]
       FROM tb_new_product_Name_id_duplicate
       WHERE Product_id= (SELECT Product_id
                          FROM tb_new_product_Name_id
                          WHERE Product_Name=@product_name_id ) ) AS B
WHERE Product_Name LIKE '%'+@product_name_id+'%'
   OR Product_Id like '%'+@product_name_id+'%';
```
Where is my mistake?
Change this part ``` WHERE Product_id= (SELECT Product_id FROM tb_new_product_Name_id WHERE Product_Name=@product_name_id ) ``` to using IN instead ``` WHERE Product_id IN (SELECT Product_id FROM tb_new_product_Name_id WHERE Product_Name=@product_name_id ) ``` Have a look at [IN (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms177682.aspx) > Determines whether a specified value matches any value in a subquery > or a list.
Usually, a subquery should return only one record, but sometimes it can also return multiple records when used with operators like IN, NOT IN in the where clause. The query would be like, ``` SELECT COUNT(Product_id)+1 AS [Duplicate Id] FROM tb_new_product_Name_id_duplicate WHERE Product_id IN (SELECT Product_id FROM tb_new_product_Name_id WHERE Product_Name=@product_name_id ) ```
how to pass null value in sub query sql server
[ "", "sql", "sql-server", "" ]
I have a table imported from a CSV file. However, the date field isn't formatted nicely. Is it possible to convert this string using the MySQL **STR\_TO\_DATE** function? I need `'05/11/2009 16:07:53:052'` to be converted to a datetime format such as `'2009-05-11 16:07:53'`, ignoring the microseconds. I tried using something like this:
```
UPDATE mytable SET updated_on = DATE(STR_TO_DATE(updated_on, '%Y-%m-%d %H:%i:%s'))
```
And
```
UPDATE mytable SET updated_on = DATE(STR_TO_DATE(updated_on, GET_FORMAT(DATETIME,'ISO')))
```
But no luck, please help! Thanks
Check [**STR\_TO\_DATE**](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date) function Try this: If the date format is `mm/dd/yyyy hh:mm:ss:sss` then ``` UPDATE mytable SET updated_on = STR_TO_DATE(updated_on, '%m/%d/%Y %H:%i:%s'); ``` If the date format is `dd/mm/yyyy hh:mm:ss:sss` then ``` UPDATE mytable SET updated_on = STR_TO_DATE(updated_on, '%d/%m/%Y %H:%i:%s'); ```
You need proper symbol to represent `microsecond`. It is `%f`. ``` mysql> select str_to_date( '05/11/2009 16:07:53:052', '%d/%m/%Y %H:%i:%s:%f' ); +------------------------------------------------------------------+ | str_to_date( '05/11/2009 16:07:53:052', '%d/%m/%Y %H:%i:%s:%f' ) | +------------------------------------------------------------------+ | 2009-11-05 16:07:53.052000 | +------------------------------------------------------------------+ 1 row in set (0.00 sec) ``` You can omit the time format part just to return date part, but with a warning on data truncation. ``` mysql> select str_to_date( '05/11/2009 16:07:53:052', '%d/%m/%Y' ); +------------------------------------------------------+ | str_to_date( '05/11/2009 16:07:53:052', '%d/%m/%Y' ) | +------------------------------------------------------+ | 2009-11-05 | +------------------------------------------------------+ 1 row in set, 1 warning (0.00 sec) mysql> show warnings; +---------+------+-----------------------------------------------------------+ | Level | Code | Message | +---------+------+-----------------------------------------------------------+ | Warning | 1292 | Truncated incorrect date value: '05/11/2009 16:07:53:052' | +---------+------+-----------------------------------------------------------+ 1 row in set (0.00 sec) ``` ***Refer to***: 1. [***MySQL: Date and Time Functions***](http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html) 2. [***on the same page, a useful reference table on format specifier symbols***](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format)
Can I convert a MySQL field with an invalid microseconds value to a datetime format?
[ "", "mysql", "sql", "datetime", "sql-update", "date-format", "" ]
With SQL commands, is it possible to pre-define a column value? Let's assume I have `cars` and `car_model` tables.
```
cars
id
name

car_model
id
name
car_id
```
I want to insert some rows into `car_model`, but like this:
```
INSERT INTO car_model (name, 1)
VALUES ("A1"),
("A3"),
("A4")
```
"Pre-defining values" is basically the definition of "variable". So, you could use a variable. ``` SET @carID = 1; INSERT INTO car_model ( name ,car_id ) VALUES ("A1", @carID) ,("A3", @carID) ,("A4", @carID) ; ```
If you are inserting some constant or pre-defined value, you can write your query like this:
```
INSERT INTO car_model (name,car_id )
VALUES ("A1","predefineVal"),
("A3","predefineVal"),
("A4","predefineVal");
```
where `predefineVal` is the pre-defined column value which you want to insert; in your case it is `1`.
How to insert values with pre-defined column value?
[ "", "mysql", "sql", "" ]
I don't know how I can select the top 8 rows from the following query. I am very new to SQL.
```
SELECT TOP 100 PERCENT *
FROM   (
select TEXT, ID, Details
from tblTEXT
where (ID = 12 or ID = 13 or ID =15)
) X order by newid()
```
This query is giving me 31 random rows, but I want to select the top 8. I am using the following, but it's not working:
```
select top 8 from (
SELECT TOP 100 PERCENT *
FROM   (
select TEXT, ID, Details
from tblTEXT
where (ID = 12 or ID = 13 or ID =15)
) X order by newid()
)
```
What's wrong with just: ``` SELECT TOP 8 * FROM ( select TEXT, ID, Details from tblTEXT where (ID = 12 or ID = 13 or ID =15) ) X order by newid() ``` (The other answers are currently incorrect because they don't give a name for the outer subquery, but only an `ORDER BY` on the outermost query will affect result order anyway, so they have other issues when it comes to reliability)
Try this:
```
select top 8 * from (
SELECT TOP 100 PERCENT *
FROM   (
select TEXT, ID, Details
from tblTEXT
where (ID = 12 or ID = 13 or ID =15)
) X order by newid()
)Y
```
And of course, since you are selecting the top 8 from the top 100 percent of rows, you can also select the top 8 directly from the inner query by replacing `top 100 percent` with `top 8`, like this:
```
SELECT TOP 8 *
FROM   (
select TEXT, ID, Details
from tblTEXT
where (ID = 12 or ID = 13 or ID =15)
) X order by newid()
```
select top 8 rows from a random selection SQLSERVER
[ "", "sql", "sql-server", "" ]
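The same "random sample of 8" idea can be sketched outside SQL Server; in SQLite, `TOP 8 ... ORDER BY NEWID()` becomes `ORDER BY RANDOM() LIMIT 8`. A hypothetical reproduction of the question's 31-row setup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblTEXT (TEXT TEXT, ID INTEGER, Details TEXT)")
# 31 rows spread over the three IDs from the question
conn.executemany(
    "INSERT INTO tblTEXT VALUES (?, ?, ?)",
    [(f"text{n}", [12, 13, 15][n % 3], f"d{n}") for n in range(31)],
)

# TOP 8 ... ORDER BY NEWID() in T-SQL maps to ORDER BY RANDOM() LIMIT 8 here
rows = conn.execute(
    "SELECT TEXT, ID, Details FROM tblTEXT "
    "WHERE ID IN (12, 13, 15) ORDER BY RANDOM() LIMIT 8"
).fetchall()
```

The 8 rows differ per run, but the count and the ID filter always hold.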
I have a table (MyTable) with the following data. (Ordered by Order_No, Category, Type) ``` Order_No Category Type Ord1 A Main Unit Ord1 A Other Ord1 A Other Ord2 B Main Unit Ord2 B Main Unit Ord2 B Other ``` What I need to do is to scan through the table and see if any 'Category' has more than one 'Main Unit'. If so, give a warning for the whole Category. Expected results should look like this. ``` Order_No Category Type Warning Ord1 A Main Unit Ord1 A Other Ord1 A Other Ord2 B Main Unit More than one Main Units Ord2 B Main Unit More than one Main Units Ord2 B Other More than one Main Units ``` I tried a couple of ways (using a subquery) to achieve the results, but no luck. Please help! ``` (Case When (Select t1.Category From MyTable as t1 Where MyTable.Order_No = t1.Order_No AND MyTable.Category = t1.Category AND MyTable.Type = t1.Type AND MyTable.Type = 'Main Unit' Group by t1.Order_No, t1.Category, t1.Type Having Count(*) >1) = 1 Then 'More than one Main Units' Else '' End ) as Warning ```
One option would be using `COUNT() OVER()` to count the main units, partitioning by category: ``` SELECT Order_No, Category, Type, CASE WHEN COUNT(CASE WHEN Type='Main Unit' THEN 1 ELSE NULL END) OVER (PARTITION BY Category) > 1 THEN 'More than one Main Units' ELSE '' END Warning FROM MyTable ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!3/69f0e/2).
Do you need to list all records? If you only need the duplicates, you could do something like this: ``` select Order_No, Category, Type, count(*) as dupes from MyTable where Type='Main Unit' group by Order_No, Category, Type having count(*)>1 order by count(*) DESC; ```
How to find duplicates within a group - SQL 2008
[ "", "sql", "database", "sql-server-2008", "" ]
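Where window functions are unavailable (the accepted answer needs a reasonably modern SQL Server), the same warning column can be produced portably by joining the per-category 'Main Unit' counts back to the table. A sketch with the question's sample data, using Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Order_No TEXT, Category TEXT, Type TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?, ?)", [
    ("Ord1", "A", "Main Unit"), ("Ord1", "A", "Other"), ("Ord1", "A", "Other"),
    ("Ord2", "B", "Main Unit"), ("Ord2", "B", "Main Unit"), ("Ord2", "B", "Other"),
])

# Count 'Main Unit' rows per category once, then LEFT JOIN the counts back
rows = conn.execute("""
    SELECT t.Order_No, t.Category, t.Type,
           CASE WHEN m.n > 1 THEN 'More than one Main Units' ELSE '' END AS Warning
    FROM MyTable t
    LEFT JOIN (SELECT Category, COUNT(*) AS n
               FROM MyTable WHERE Type = 'Main Unit'
               GROUP BY Category) m ON m.Category = t.Category
    ORDER BY t.Order_No, t.Category, t.Type
""").fetchall()
```

Category A has one Main Unit (empty warning); category B has two, so all of its rows get flagged, matching the expected output.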
I'm providing maintenance support for some SSIS packages. The packages have some data flow sources with complex embedded SQL scripts that need to be modified from time to time. I'm thinking about moving those SQL scripts into stored procedures and call them from SSIS, so that they are easier to modify, test, and deploy. I'm just wondering if there is any negative impact for the new approach. Can anyone give me a hint?
Yes, there are issues with using stored procs as data sources (though not when using them in Execute SQL tasks in the control flow). You might want to read this: <http://www.jasonstrate.com/2011/01/31-days-of-ssis-no-more-procedures-2031/> Basically the problem is that SSIS cannot always figure out the result set, and thus the columns, from a stored proc. I personally have run into this if you write a stored proc that uses a temp table. I don't know that I would go as far as the author of the article and not use procs at all, but be careful that you are not trying to do too much with them, and if you have to do something complicated, do it in an Execute SQL task before the dataflow.
I can honestly see nothing but improvements. Stored procedures will offer better security, the possibility for better performance due to cached execution plans, and easier maintenance, like you pointed out. Refactor away!
Stored Procedure vs direct SQL command in SSIS data flow source
[ "", "sql", "stored-procedures", "ssis", "" ]
I have a table in Oracle with several constraints. When I insert a new record and not all constraints are valid, Oracle raises only the "first" error. How can I get all violations for my record? ``` CREATE TABLE A_TABLE_TEST ( COL_1 NUMBER NOT NULL, COL_2 NUMBER NOT NULL, COL_3 NUMBER NOT NULL, COL_4 NUMBER NOT NULL ); INSERT INTO A_TABLE_TEST values (1,null,null,2); ORA-01400: cannot insert NULL into ("USER_4_8483C"."A_TABLE_TEST"."COL_2") ``` I would like to get something like this: ``` Column COL_2: cannot insert NULL Column COL_3: cannot insert NULL This would be also sufficient: Column COL_2: not valid Column COL_3: not valid ``` Of course I could write a trigger and check each column individually, but I prefer constraints over triggers; they are easier to maintain and don't require manually written code. Any idea?
In the meantime I found a lean solution using deferred constraints: ``` CREATE TABLE A_TABLE_TEST ( COL_1 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED, COL_2 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED, COL_3 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED, COL_4 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED ); INSERT INTO A_TABLE_TEST values (1,null,null,2); DECLARE CHECK_CONSTRAINT_VIOLATED EXCEPTION; PRAGMA EXCEPTION_INIT(CHECK_CONSTRAINT_VIOLATED, -2290); REF_CONSTRAINT_VIOLATED EXCEPTION; PRAGMA EXCEPTION_INIT(REF_CONSTRAINT_VIOLATED , -2292); CURSOR CheckConstraints IS SELECT TABLE_NAME, CONSTRAINT_NAME, COLUMN_NAME FROM USER_CONSTRAINTS JOIN USER_CONS_COLUMNS USING (TABLE_NAME, CONSTRAINT_NAME) WHERE TABLE_NAME = 'A_TABLE_TEST' AND DEFERRED = 'DEFERRED' AND STATUS = 'ENABLED'; BEGIN FOR aCon IN CheckConstraints LOOP BEGIN EXECUTE IMMEDIATE 'SET CONSTRAINT '||aCon.CONSTRAINT_NAME||' IMMEDIATE'; EXCEPTION WHEN CHECK_CONSTRAINT_VIOLATED OR REF_CONSTRAINT_VIOLATED THEN DBMS_OUTPUT.PUT_LINE('Constraint '||aCon.CONSTRAINT_NAME||' at Column '||aCon.COLUMN_NAME||' violated'); END; END LOOP; END; ``` It works with any check constraint (not only `NOT NULL`). Checking `FOREIGN KEY` Constraint should work as well. Add/Modify/Delete of constraints does not require any further maintenance.
There is no straightforward way to report all possible constraint violations, because when Oracle stumbles on the first violation of a constraint, no further evaluation is possible and the statement fails, unless that constraint is a deferred one or the `log errors` clause has been included in the DML statement. It should be noted that the `log errors` clause won't be able to catch all possible constraint violations either; it just records the first one. One possible approach is to: 1. create the `exceptions` table. It can be done by executing the `ora_home/rdbms/admin/utlexpt.sql` script. The table's structure is pretty simple; 2. disable all table constraints; 3. execute DMLs; 4. enable all constraints with the `exceptions into <<exception table name>>` clause. If you executed the `utlexpt.sql` script, the name of the table the exceptions are going to be stored in will be `exceptions`. Test table: ``` create table t1( col1 number not null, col2 number not null, col3 number not null, col4 number not null ); ``` Try to execute an `insert` statement: ``` insert into t1(col1, col2, col3, col4) values(1, null, 2, null); Error report - SQL Error: ORA-01400: cannot insert NULL into ("HR"."T1"."COL2") ``` Disable all the table's constraints: ``` alter table T1 disable constraint SYS_C009951; alter table T1 disable constraint SYS_C009950; alter table T1 disable constraint SYS_C009953; alter table T1 disable constraint SYS_C009952; ``` Try to execute the previously failed `insert` statement again: ``` insert into t1(col1, col2, col3, col4) values(1, null, 2, null); 1 rows inserted. commit; ``` Now, enable the table's constraints and store exceptions, if there are any, in the `exceptions` table: ``` alter table T1 enable constraint SYS_C009951 exceptions into exceptions; alter table T1 enable constraint SYS_C009950 exceptions into exceptions; alter table T1 enable constraint SYS_C009953 exceptions into exceptions; alter table T1 enable constraint SYS_C009952 exceptions into exceptions; ``` Check the `exceptions` table: ``` column row_id format a30; column owner format a7; column table_name format a10; column constraint format a12; select * from exceptions ROW_ID OWNER TABLE_NAME CONSTRAINT ------------------------------ ------- ------- ------------ AAAWmUAAJAAAF6WAAA HR T1 SYS_C009951 AAAWmUAAJAAAF6WAAA HR T1 SYS_C009953 ``` Two constraints have been violated. To find out the column names, simply refer to the `user_cons_columns` data dictionary view: ``` column table_name format a10; column column_name format a7; column row_id format a20; select e.table_name , t.COLUMN_NAME , e.ROW_ID from user_cons_columns t join exceptions e on (e.constraint = t.constraint_name) TABLE_NAME COLUMN_NAME ROW_ID ---------- ---------- -------------------- T1 COL2 AAAWmUAAJAAAF6WAAA T1 COL4 AAAWmUAAJAAAF6WAAA ``` The above query gives us the column names and rowids of the problematic records. Having the rowids at hand, there should be no problem finding the records that cause the constraint violations, fixing them, and re-enabling the constraints once again. Here is the script that has been used to generate the `alter table` statements for enabling and disabling the constraints: ``` column cons_disable format a50 column cons_enable format a72 select 'alter table ' || t.table_name || ' disable constraint '|| t.constraint_name || ';' as cons_disable , 'alter table ' || t.table_name || ' enable constraint '|| t.constraint_name || ' exceptions into exceptions;' as cons_enable from user_constraints t where t.table_name = 'T1' order by t.constraint_type ```
Multiple constraints in table: How to get all violations?
[ "", "sql", "oracle", "constraints", "" ]
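The deferred-constraint trick above is Oracle-specific. If the goal is simply to report every violated NOT NULL column for a candidate row, the same check can also be done in application code before the insert; the sketch below is only an analogue of the idea, not the Oracle mechanism (the column list is assumed from `A_TABLE_TEST`):

```python
# Application-side analogue: instead of stopping at the first NOT NULL
# violation, collect every violated column for the candidate row.
NOT_NULL_COLUMNS = ("COL_1", "COL_2", "COL_3", "COL_4")  # from A_TABLE_TEST

def violated_columns(row):
    """Return every NOT NULL column that the candidate row leaves empty."""
    return [col for col in NOT_NULL_COLUMNS if row.get(col) is None]

# The failing insert from the question: (1, null, null, 2)
violations = violated_columns({"COL_1": 1, "COL_2": None, "COL_3": None, "COL_4": 2})
```

For the question's failing row, this reports both `COL_2` and `COL_3` at once.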
Good morning, I have a table `TestSeed` that stores a multiple-choice test with the following structure: ``` QNo QText QA1 QA2 QA3 QA4 ``` It already contains data. I would like to move some of the columns to a temp table with the following structure: ``` QNo QA ``` Where `QNo` will store the question number from the first table and `QA` will store `QA1`, `QA2`, `QA3` and `QA4` over four rows of data. I am trying to do it in a SQL stored procedure, and it got down to the following situation: I want to create a nested loop where I can go through the `TestSeed` table rows in the outer loop and then go through the four `QA` fields and insert them in the inner loop. So my code will look something like this: ``` Declare @TempAnswers as table ( [QNo] int, [QAnswer] [nvarchar](50) NULL, ) DECLARE @QNO int DECLARE QROW CURSOR LOCAL FOR select QNo from @TempSeed OPEN QROW FETCH NEXT FROM QROW into @QNO WHILE @@FETCH_STATUS = 0 BEGIN DECLARE @i INT SET @i = 1 WHILE (@i <=4) Begin insert into @TempAnswers ( [QNo], [QAnswer] ) select QNo, 'QA'+@i --This is the part I need from @TempSeed SET @i = @i +1 END FETCH NEXT FROM QROW into @QNO END CLOSE IDs DEALLOCATE IDs ``` So I guess my question is: can I use a concatenated string to refer to a column name in SQL, and if so, how? I am sort of a beginner. I would appreciate any help I can get.
No need for loop, you can simply use the [`UNPIVOT`](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) table operator to do this: ``` INSERT INTO temp SELECT QNO, val FROM Testseed AS t UNPIVOT ( val FOR col IN([QA1], [QA2], [QA3], [QA4]) ) AS u; ``` For example, if you have the following sample data: ``` | QNO | QTEXT | QA1 | QA2 | QA3 | QA4 | |-----|-------|-----|-----|-----|-----| | 1 | q1 | a | b | c | d | | 2 | q2 | b | c | d | e | | 3 | q3 | e | a | b | c | | 4 | q4 | a | c | d | e | | 5 | q5 | c | d | e | a | ``` The previous query will fill the `temp` table with: ``` | QNO | QA | |-----|----| | 1 | a | | 1 | b | | 1 | c | | 1 | d | | 2 | b | | 2 | c | | 2 | d | | 2 | e | | 3 | e | | 3 | a | | 3 | b | | 3 | c | | 4 | a | | 4 | c | | 4 | d | | 4 | e | | 5 | c | | 5 | d | | 5 | e | | 5 | a | ``` * [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/96d11/1) The `UNPIVOT` table operator, will convert the values of the four columns `[QA1], [QA2], [QA3], [QA4]` into rows, only one row. Then you can put that query inside a stored procedure.
So, to answer your last question, you can use dynamic SQL, which involves creating your query as a STRING and then executing it, in case you really want to stick to the method you already started. You will have to declare a variable to store the text of your query: ``` DECLARE @query NVARCHAR(MAX) SET @query = 'SELECT QNo, QA' + CAST(@i AS varchar(10)) + ' FROM @TempSeed' EXEC sp_executesql @query ``` Note that `@i` is an `INT`, so it has to be cast to a string before concatenation, and a table variable like `@TempSeed` is not visible inside the dynamic SQL batch, so you would need a real or temporary table there. This will have to be done every time you build a query which is to be executed (declaration, setting the text of the query and executing it). If you want **something simpler**, there are other answers here which will work.
Inserting four columns into one
[ "", "sql", "sql-server", "stored-procedures", "unpivot", "" ]
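`UNPIVOT` is SQL Server syntax; engines without it can get the identical row set with a `UNION ALL` over the four columns. A minimal sketch in Python's `sqlite3`, reusing the `Testseed` shape from the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Testseed (QNo INTEGER, QText TEXT, "
    "QA1 TEXT, QA2 TEXT, QA3 TEXT, QA4 TEXT)"
)
conn.execute("INSERT INTO Testseed VALUES (1, 'q1', 'a', 'b', 'c', 'd')")

# SQLite has no UNPIVOT; a UNION ALL over the four columns is equivalent
rows = conn.execute("""
    SELECT QNo, QA1 AS QA FROM Testseed
    UNION ALL SELECT QNo, QA2 FROM Testseed
    UNION ALL SELECT QNo, QA3 FROM Testseed
    UNION ALL SELECT QNo, QA4 FROM Testseed
    ORDER BY QNo, QA
""").fetchall()
```

Each source row fans out into four `(QNo, QA)` rows, which is exactly what `UNPIVOT` produces.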
I wrote a script that runs each time a user logs into a computer in our domain. This script makes a record of the user as well as the computer they logged into. Any number of users can log into any number of computers. I just inherited this IT environment from a consultant who is no longer around, and I'm writing this little query so when I get a call from a user, I can search by that user's name and reasonably predict which computer they are using by the number of times they've logged into any given computer. Here's a sample of the data in the 'login' table: ``` COMPUTER USER ncofp02 lee ncofp02 lee ncofp02 andy ncodc01 andy ncodc01 andy ncodc01 lee ``` What I'm banging my head on is the logic to count distinct values across multiple columns. I'd like to see a result like this: ``` COMPUTER USER COUNT ncofp02 lee (2) ncofp02 andy (1) ncodc01 lee (1) ncodc01 andy (2) ``` Is there a way to accomplish this with a single query within mysql, or should I start looping some php? (booooo!)
Just list multiple columns in the `GROUP BY` clause. ``` SELECT computer, user, count(*) AS count FROM login GROUP BY computer, user ```
Try this: ``` SELECT l.computer, l.user, COUNT(DISTINCT l.computer, l.user) AS count FROM login l GROUP BY l.computer, l.user ```
MySQL Counting Distinct Values on Multiple Columns
[ "", "mysql", "sql", "select", "group-by", "distinct", "" ]
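The accepted `GROUP BY computer, user` runs unchanged on most engines. Here it is against the question's sample data in Python's `sqlite3`, confirming the per-(computer, user) counts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE login (computer TEXT, user TEXT)")
conn.executemany("INSERT INTO login VALUES (?, ?)", [
    ("ncofp02", "lee"), ("ncofp02", "lee"), ("ncofp02", "andy"),
    ("ncodc01", "andy"), ("ncodc01", "andy"), ("ncodc01", "lee"),
])

# One group per (computer, user) pair; COUNT(*) is the login count
rows = conn.execute(
    "SELECT computer, user, COUNT(*) AS count FROM login "
    "GROUP BY computer, user ORDER BY computer, user"
).fetchall()
```

The counts match the expected output in the question (lee logged into ncofp02 twice, etc.).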
I have the following two tables: ``` BillingMatrixDefinition - id - amount BillingMatrix - definition (FK to table above) - service_id (FK) - provider_id (FK) - amount (Decimal) ``` I need to get all BillingMatrixDefinitions that have the `service_id` and `provider_id` that I specify. Here is the SQL query I currently have: ``` select def.id, service_id, provider_id, (case when matrix.amount is not null then matrix.amount else def.amount end) amount from billing_billingdefinition def left outer join billing_billingmatrix matrix on matrix.definition_id=def.id where (service_id = 25 or service_id is null) and (provider_id = 24 or provider_id is null) ``` This gives me the following results: ``` id service_id provider_id amount 1 25 24 200.00 1 NULL 24 300.00 2 NULL 24 800.00 3 NULL NULL 750.00 5 NULL NULL 450.00 6 NULL NULL 750.00 ``` However, I need to get the billing amount per id, so I can only get **ONE** item/amount for each `id`. In which case, I want to get the item where the `service_id=24`, and if that doesn't exist, then get it where `service_id=NULL`. The correct query should give me the following results: ``` id service_id provider_id amount 1 25 24 200.00 2 NULL 24 800.00 3 NULL NULL 750.00 5 NULL NULL 450.00 6 NULL NULL 750.00 ``` Notice how now there is no duplicate entry for 1, and I use the line item where a service\_id has been entered (use that one if it exists, else use NULL). What would be the correct query to do this?
Another way: ``` SELECT def.id AS id, COALESCE(matrix.service_id, matrix2.service_id) AS service_id, COALESCE(matrix.provider_id, matrix2.provider_id) AS provider_id, COALESCE(matrix.amount, matrix2.amount, def.amount) AS amount FROM billing_billingdefinition AS def LEFT JOIN billing_billingmatrix AS matrix ON matrix.definition_id = def.id AND matrix.service_id = 25 AND matrix.provider_id = 24 LEFT JOIN billing_billingmatrix AS matrix2 ON matrix2.definition_id = def.id AND matrix2.service_id IS NULL AND matrix2.provider_id = 24 ; ```
Try something along these lines (utilizing a temporary table): ``` CREATE TEMPORARY TABLE Results select def.id, service_id, provider_id, (case when matrix.amount is not null then matrix.amount else def.amount end) amount from billing_billingdefinition def left outer join billing_billingmatrix matrix on matrix.definition_id=def.id where (service_id = 25 or service_id is null) and (provider_id = 24 or provider_id is null); SELECT * FROM Results r1 WHERE IFNULL(r1.service_id, 0) = ( SELECT MAX(IFNULL(r2.service_id, 0)) FROM Results r2 WHERE r2.id = r1.id ); ``` [**SQL Fiddle**](http://sqlfiddle.com/#!2/14ed3/4) for the 2nd part only (uses already created `Results` table)
Challenging LEFT OUTER JOIN query grouping by MAX
[ "", "mysql", "sql", "select", "group-by", "left-join", "" ]
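The chosen double-LEFT-JOIN-plus-COALESCE pattern (prefer the exact `service_id` row, fall back to the NULL-service row, then to the definition's default) can be checked on a reduced, hypothetical data set; table and column names here are simplified from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE def_tbl (id INTEGER, amount REAL);
    CREATE TABLE matrix (definition_id INTEGER, service_id INTEGER,
                         provider_id INTEGER, amount REAL);
    INSERT INTO def_tbl VALUES (1, 100.0), (2, 900.0), (3, 750.0);
    INSERT INTO matrix VALUES (1, 25, 24, 200.0);   -- exact match wins
    INSERT INTO matrix VALUES (1, NULL, 24, 300.0); -- fallback row, unused
    INSERT INTO matrix VALUES (2, NULL, 24, 800.0); -- only a fallback row
""")

rows = conn.execute("""
    SELECT d.id,
           COALESCE(m.amount, m2.amount, d.amount) AS amount
    FROM def_tbl d
    LEFT JOIN matrix m  ON m.definition_id = d.id
                       AND m.service_id = 25 AND m.provider_id = 24
    LEFT JOIN matrix m2 ON m2.definition_id = d.id
                       AND m2.service_id IS NULL AND m2.provider_id = 24
    ORDER BY d.id
""").fetchall()
```

Definition 1 takes the exact-match amount, definition 2 the NULL-service fallback, and definition 3 its own default, with no duplicate rows per id.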
I have two tables, that are something like this: ``` records [id, submitted_date, username] 1, 2012-10-20, jjones 2, 2012-10-22, jsmith subrecords [id, record, name, value] 1, 1, Street, 123 Elm 2, 1, City, Chicago 3, 2, Street, 321 Maple 4, 2, City, New York ``` I am using a form tool that saves things this way. In separate custom PHP, however, I want to get query results like: ``` jjones, 123 Elm, Chicago jsmith, 321 Maple, New York ``` How do I do this? The best I can figure out so far is output like ``` jjones, Street, 123 Elm jjones, City, Chicago jsmith, Street, 321 Maple jsmith, City, New York ``` Which is not what I need. I want to get the results all together in the data rows returned from the query.
Just join to subrecords twice, once for the street and once for the city: ``` select r.username, street.value, city.value from records r join subrecords street on street.record = r.id and street.name = 'Street' join subrecords city on city.record = r.id and city.name = 'City' order by r.username; ```
I have no MySQL server in front of me to test it, but something like ``` select records.username, group_concat(subrecords.value separator ', ') as concatenated from records join subrecords on records.id = subrecords.record group by records.username ``` I am not sure it works, as I have not tested. Let me know if there is any problem so I can edit the answer to make it more helpful. Cheers.
mySQL - how to flatten records and subrecords
[ "", "mysql", "sql", "" ]
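The accepted answer's self-join (one alias of the key/value table per attribute) reproduces the question's expected output exactly; here is a runnable check in Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (id INTEGER, username TEXT);
    CREATE TABLE subrecords (id INTEGER, record INTEGER, name TEXT, value TEXT);
    INSERT INTO records VALUES (1, 'jjones'), (2, 'jsmith');
    INSERT INTO subrecords VALUES
        (1, 1, 'Street', '123 Elm'), (2, 1, 'City', 'Chicago'),
        (3, 2, 'Street', '321 Maple'), (4, 2, 'City', 'New York');
""")

# One join per attribute flattens the key/value rows into columns
rows = conn.execute("""
    SELECT r.username, street.value, city.value
    FROM records r
    JOIN subrecords street ON street.record = r.id AND street.name = 'Street'
    JOIN subrecords city   ON city.record = r.id AND city.name = 'City'
    ORDER BY r.username
""").fetchall()
```

Each user comes back as one row with the street and city side by side.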
I have 2 tables. Table 1 has 2 columns, `key` and `id2` (not primary). Table 2 has 2 columns, `key` and `id2` (not primary), plus other information in each row that can't be ignored. How can I import or update `key` from the 1st table using something like: "if 1st table id2 = 2nd table id2, then import the 1st table's `key` into `key`"? I simply don't understand...
Try this: ``` UPDATE dt SET dt.key = st.anotherkey FROM DestinationTable AS dt INNER JOIN SourceTable AS st ON dt.Key2 = st.Key ``` But before you run the above query, try this: ``` SELECT dt.*, st.* FROM DestinationTable AS dt INNER JOIN SourceTable AS st ON dt.Key2 = st.Key ```
Answer according to what I understand of your question: ``` UPDATE Table2 SET key=Table1.key WHERE Table1.id2=Table2.id2; ```
How to import \ update data from column from one table to column from another table by comparing their ids
[ "", "sql", "database", "import", "compare", "" ]
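SQLite (before 3.33) has no `UPDATE ... FROM` join, so the portable form of the accepted answer is a correlated subquery plus an `EXISTS` guard so unmatched rows are left alone. A hypothetical sketch (`k` stands in for the question's `key`, which is a reserved word in some engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (k INTEGER, id2 INTEGER);
    CREATE TABLE dst (k INTEGER, id2 INTEGER, info TEXT);
    INSERT INTO src VALUES (10, 100), (20, 200);
    INSERT INTO dst VALUES (NULL, 100, 'a'), (NULL, 200, 'b'), (NULL, 300, 'c');
""")

# Portable form of the UPDATE-from-join: a correlated subquery per row.
# The EXISTS guard keeps rows with no match from being set to NULL.
conn.execute("""
    UPDATE dst
    SET k = (SELECT s.k FROM src s WHERE s.id2 = dst.id2)
    WHERE EXISTS (SELECT 1 FROM src s WHERE s.id2 = dst.id2)
""")
rows = conn.execute("SELECT k, id2 FROM dst ORDER BY id2").fetchall()
```

Matched rows pick up the source key; the unmatched `id2 = 300` row is untouched.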
I was wondering if it's possible to have 2 columns referencing a single column in a different table. I have 2 tables, `users` and `matches`. The `users` table contains all the user ids in the column `user_id`, and `matches` contains two columns, `userA` and `userB`. Would it be wise, if possible at all, to reference both columns to the same column in the `users` table? The reason I want to use it is that I've been told it's best practice and that it allows the DBMS to optimize queries. To explain a bit further, the matches table holds the ids of `userA` who has selected `userB`. If `userB` also selects `userA` then there is a match. Boils down to a dating app.
Yes, you can do it: userA from matches pointing to user_id from users, and also userB from matches pointing to user_id from users. This is valid.
Logically `usera` and `userb` are different entities, so they shouldn't have the same `userid`. You can have something like a `users` table with `userid`, and a `matches` table whose `usera` and `userb` columns both reference `userid` in the `users` table. So the result would be ``` Users userid 1 2 3 Matches usera userb 1 2 2 3 ```
multiple foreign keys referencing single column in other table
[ "", "mysql", "sql", "" ]
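Two foreign keys referencing the same column is indeed valid; the sketch below declares both and shows an insert being rejected once one side is not a real user (Python's `sqlite3` with `PRAGMA foreign_keys = ON`; the schema is assumed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY);
    CREATE TABLE matches (
        userA INTEGER NOT NULL REFERENCES users(user_id),
        userB INTEGER NOT NULL REFERENCES users(user_id)
    );
    INSERT INTO users VALUES (1), (2);
    INSERT INTO matches VALUES (1, 2);  -- both columns point at users.user_id
""")

try:
    conn.execute("INSERT INTO matches VALUES (1, 99)")  # 99 is not a user
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Both FK columns are checked independently against `users.user_id`, so the bad pair never lands in the table.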
This is my string ``` Declare @qstr as varchar(max)='hireteammember.aspx?empemail=kuldeep@asselsolutions.com&empid=376&empname=kuldeep&adminname=TMA1&term=5&teamid=161&contactid=614Β₯1&WP=100Β₯5Β₯Months&Amt=500&DueDay=5&StrDt=12/31/2013&MemCatg=Employees&StrTm=21:05&PlnHrs=5&WrkDays=trueΒ₯trueΒ₯trueΒ₯trueΒ₯trueΒ₯falseΒ₯false' ``` I want to extract the values of empid, empname, adminname, term, teamid, contactid, WP, Months, DueDay, StrDt, MemCatg, StrTm, PlnHrs and WrkDays and assign them to new variables. I have used ``` select ( SUBSTRING(@qstr,CHARINDEX('=',@qstr)+1,CHARINDEX('&',@qstr)-CHARINDEX('=',@qstr)-1))) ``` but I am only getting `empemail`; for the next occurrence of the special char '&' I am not able to get the values of the further terms, whether I use '&' or '='. Help me split the whole string.
``` CREATE FUNCTION dbo.SplitQueryString (@s varchar(8000)) RETURNS table AS RETURN ( WITH splitter_cte AS ( SELECT CHARINDEX('&', @s) as pos, 0 as lastPos UNION ALL SELECT CHARINDEX('&', @s, pos + 1), pos FROM splitter_cte WHERE pos > 0 ), pair_cte AS ( SELECT chunk, CHARINDEX('=', chunk) as pos FROM ( SELECT SUBSTRING(@s, lastPos + 1, case when pos = 0 then 80000 else pos - lastPos -1 end) as chunk FROM splitter_cte) as t1 ) SELECT substring(chunk, 0, pos) as keyName, substring(chunk, pos+1, 8000) as keyValue FROM pair_cte ) GO declare @queryString varchar(2048) set @queryString = 'foo=bar&temp=baz&key=value'; SELECT * FROM dbo.SplitQueryString(@queryString) OPTION(MAXRECURSION 0); ``` when run produces the following output. ``` keyName keyValue ------- -------- foo bar temp baz key value (3 row(s) affected) ``` I believe that this will do exactly what you are asking. [SQL FIDDLE DEMO](http://www.sqlfiddle.com/#!3/d1c48/1)
How about using XML to split the values into rows, and then splitting them into columns. Something like ``` Declare @qstr as varchar(max)='hireteammember.aspx?empemail=kuldeep@asselsolutions.com&empid=376&empname=kuldeep&adminname=TMA1&term=5&teamid=161&contactid=614Β₯1&WP=100Β₯5Β₯Months&Amt=500&DueDay=5&StrDt=12/31/2013&MemCatg=Employees&StrTm=21:05&PlnHrs=5&WrkDays=trueΒ₯trueΒ₯trueΒ₯trueΒ₯trueΒ₯falseΒ₯false' DECLARe @str VARCHAR(MAX) = SUBSTRING(@qstr,CHARINDEX('?',@qstr,0) + 1, LEN(@qstr)-CHARINDEX('?',@qstr,0)) DECLARE @xml XML SELECT @xml = CAST('<d>' + REPLACE(@str, '&', '</d><d>') + '</d>' AS XML) ;WITH Vals AS ( SELECT T.split.value('.', 'nvarchar(max)') AS data FROM @xml.nodes('/d') T(split) ) SELECT LEFT(data,CHARINDEX('=',data,0) - 1), RIGHT(data,LEN(data) - CHARINDEX('=',data,0)) FROM Vals ``` ## [SQL Fiddle DEMO](http://www.sqlfiddle.com/#!3/d41d8/27377)
Split the query string with repeatative special characters using SQL
[ "", "sql", "sql-server", "" ]
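If the string ever reaches the application layer, the `key=value&...` part is an ordinary query string and needs no hand-rolled `CHARINDEX` logic; Python's standard library parses it in one call. A sketch on a shortened copy of the question's string (the `Β₯`-delimited sub-values would still need a further split of their own):

```python
from urllib.parse import parse_qsl

qstr = ("hireteammember.aspx?empemail=kuldeep@asselsolutions.com"
        "&empid=376&empname=kuldeep&adminname=TMA1&term=5")

# Everything after '?' is a standard key=value&key=value list
pairs = dict(parse_qsl(qstr.split("?", 1)[1]))
```

Each parameter then comes out by name, e.g. `pairs["empid"]`.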
I am doing the BI reports for a group of 5 companies. Since the information is more or less the same for all the companies, I am consolidating all the data of the 5 companies in one DB, restructuring the important data, indexing the tables (I cannot do that in the original DB because of ERP restrictions) and creating the views with all the information required. In the group, I have some corporate roles that would benefit from having the data of the 5 companies in one view; nevertheless, I do not want an employee of company 1 to see the information of company 2, nor the other way around. Is there any way to grant permissions restricting the information to the rows that contain the employee's company name in a specific column? I know that I could replicate the view and filter the information using the WHERE clause, but I really want to avoid this. Please help. Thanks!
What you are talking about is row-level security. There is little to no support for this out of the box. Here are a couple of articles on design patterns that can be used. <http://sqlserverlst.codeplex.com/> <http://msdn.microsoft.com/en-us/library/bb669076(v=vs.110).aspx> What is the goal of consolidating all the companies into one database? Here are some ideas. 1 - Separate databases make it easier to secure data; however, it is hard to aggregate data, and you duplicate all objects. 2 - Use schemas to separate the data. Security can be given out at the schema level. This does have the same duplicate objects, less the database container, but a super user group can see all schemas and write aggregated reports. **I think schemas are underused by DBAs and developers.** 3 - Code either stored procedures and/or duplicate views to ensure security. While tables are not duplicated, some code is. Again, there is no silver bullet for this problem. However, this is a green-field project and you can dictate which way you want to implement it.
As of SQL Server 2016 there is support specifically for this problem. The MSDN link in the accepted answer already forwards to [the right article](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/granting-row-level-permissions-in-sql-server). I decided to post again though as the relevant answer changed. You can now create security policies which implement row level permissions like this (code from MSDN; assuming per-user permissions and a column named `UserName` in your table): ``` CREATE SCHEMA Security GO CREATE FUNCTION Security.userAccessPredicate(@UserName sysname) RETURNS TABLE WITH SCHEMABINDING AS RETURN SELECT 1 AS accessResult WHERE @UserName = SUSER_SNAME() GO CREATE SECURITY POLICY Security.userAccessPolicy ADD FILTER PREDICATE Security.userAccessPredicate(UserName) ON dbo.MyTable, ADD BLOCK PREDICATE Security.userAccessPredicate(UserName) ON dbo.MyTable GO ``` Furthermore it's advisable to create stored procedures which check permission too for accessing the data as a second layer of security, as users might otherwise find out details about existing data they don't have access to i.e. by trying to violate constraints. For details see the MSDN article, which is exactly on this topic. It points out workarounds for older versions of SQL Server too.
SQL Server Grantting permission for specific rows
[ "", "sql", "sql-server", "" ]
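Pre-2016 row-level security usually ends up as a predicate applied for the caller, whether inside a view, a table-valued function, or the data-access layer. Below is a minimal application-side analogue of that predicate idea, with made-up company data; it illustrates the filtering pattern only, not the SQL Server security-policy feature:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (company TEXT, amount REAL);
    INSERT INTO sales VALUES ('Co1', 10.0), ('Co1', 5.0), ('Co2', 7.0);
""")

def sales_for(company):
    # The WHERE clause plays the role of the row-level security predicate:
    # each caller only ever sees rows tagged with their own company.
    return conn.execute(
        "SELECT company, amount FROM sales WHERE company = ?", (company,)
    ).fetchall()

co1 = sales_for("Co1")
```

A corporate role would simply call this without the filter (or with a predicate that always passes) to see all five companies.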
`ActiveRecord::Base.connection.execute(sql)` The results are not typecast, so they are all strings. As an example: `ActiveRecord::Base.connection.execute(sql).entries` `=> [{"id" => "1", "length" => "120", "content" => "something"},{"id" => "2", "length" => "200", "content" => "blahblah"}]` Is it possible to execute raw transactions in ActiveRecord and return typecast results?
I think you are referring to [ORM](http://edgeguides.rubyonrails.org/active_record_basics.html) (Object-relational mapping). First of all, `connection.execute` will return a MySQL adapter result where you can just iterate over the rows. You can't convert an array of strings (the result you have) to ActiveRecord objects just like that (I guess this is what you referred to as typecasting). What you can do is use `find_by_sql`. **Ex:** ``` Blog.find_by_sql("select * from blog") # => [#<Blog id: 1, name: "first blog", description: nil, record_status_id: 1>] ``` Using this method you can get ActiveRecord objects from raw SQL.
Consider manifesting your SQL statement as a view, and creating a new record to interface with the view. Here's a project where I'm backing AR with a view: <https://github.com/michaelkirk/household-account-mgmt/blob/develop/app/models/monthly_report.rb> ``` class CreateMonthlyReports < ActiveRecord::Migration def up sql = <<-SQL create view monthly_reports as select date_part('year', created_at) as year, date_part('month', created_at) as month, sum(purchase_amount) as purchases_amount, sum(investment_amount) as investments_amount from ( select * from transactions left join (select id as purchase_id, amount as purchase_amount from transactions where credit = false) as purchases on transactions.id = purchases.purchase_id left join (select id as investment_id, amount as investment_amount from transactions where credit = true) as investments on transactions.id = investments.investment_id) as classified_transactions group by year, month order by year, month SQL execute(sql) end def down sql = <<-SQL drop view monthly_reports SQL execute(sql) end ``` Then, since you've abstracted your complexity into a database view, which for all AR's intents/purposes works like a table your model and controller look completely vanilla. ``` class MonthlyReport < ActiveRecord::Base MONTHS = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] def time_period "#{month} #{year}" end def month MONTHS[self[:month] - 1] end def year self[:year].to_i end end ``` Then you can do things like ``` class MonthlyReportsController < ApplicationController def index @monthly_reports = MonthlyReport.all end end ``` Note that because this is a DB view, you won't be able to do inserts. I'm not sure what would happen if you tried.
ActiveRecord execute raw transaction typecasting?
[ "", "sql", "ruby-on-rails", "activerecord", "" ]
Can we find the history of values, updates, deletes or inserts for a specific table? I don't have access to create a `stored procedure`. Can you please provide a query in the answer or some other way to find the history?
Two possibilities come to mind. 1.) If you have auditing enabled, you are all set. But, I'm guessing if that was the case, you wouldn't be asking the question. If you think this request will come up again, you should investigate setting up auditing for future requests. 2.) If auditing isn't set up, there's LogMiner, which allows you to examine the contents of the archived and online redo logs. This is probably your only solution, if you need the details of inserts, updates, deletes to a specific table. Hope that helps.
It is possible if `FLASHBACK` has been enabled on the schema or table. The most critical tables may have it enabled; please check with the DBA on this. If you have DBA access, then select the table name in `SQL Developer`, press `Shift+F4` and move to the Flashback tab to find details. If enabled, you can use the query below (just a sample): ``` SELECT * FROM employee AS OF TIMESTAMP TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS') WHERE name = 'JOHN'; ``` If not enabled, you may have to write `TRIGGERS` for every `DML` over that table. I agree the history of data from before the `TRIGGERS` existed is gone forever, unless the DBA is able to do some magic with the redo logs!
history of a specific table in oracle
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
``` CREATE TABLE [M].[SocialInfo] ( [Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT newid(), [MemberId] UNIQUEIDENTIFIER DEFAULT newid(), [GroupId] UNIQUEIDENTIFIER DEFAULT newid(), [NewsURL] VARCHAR(200) NULL, CONSTRAINT [FK_SocialInfo_Member] FOREIGN KEY ([MemberId]) REFERENCES [M].[Member]([Id]), CONSTRAINT [FK_SocialInfo_Group] FOREIGN KEY ([GroupId]) REFERENCES [M].[Group]([Id]) ) ``` > How can I make both FKs nullable? > > I'm unable to get the syntax right.
I think you just left out the column creation piece (note that the foreign key columns must have the same type, `UNIQUEIDENTIFIER`, as the referenced `[Id]` columns). ``` CREATE TABLE [M].[SocialInfo] ( [Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT newid(), [NewsURL] VARCHAR(200) NULL, MemberId UNIQUEIDENTIFIER NULL, GroupId UNIQUEIDENTIFIER NULL, CONSTRAINT [FK_SocialInfo_Member] FOREIGN KEY ([MemberId]) REFERENCES [M].[Member]([Id]), CONSTRAINT [FK_SocialInfo_Group] FOREIGN KEY ([GroupId]) REFERENCES [M].[Group]([Id]) ) ```
A default value for a foreign key makes no sense - use NULL instead ``` CREATE TABLE [M].[SocialInfo] ( [Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT newid(), [MemberId] UNIQUEIDENTIFIER NULL, [GroupId] UNIQUEIDENTIFIER NULL, [NewsURL] VARCHAR(200) NULL, CONSTRAINT [FK_SocialInfo_Member] FOREIGN KEY ([MemberId]) REFERENCES [M].[Member]([Id]), CONSTRAINT [FK_SocialInfo_Group] FOREIGN KEY ([GroupId]) REFERENCES [M].[Group]([Id]) ) ```
How to make foreign key as NULLABLE in SQL?
[ "", "sql", "sql-server", "sql-server-2005", "" ]
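Making the FK columns `NULL`-able works because a NULL foreign key is simply not checked. A reduced sketch of the accepted fix in Python's `sqlite3` (one FK instead of two, and types simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE Member (Id INTEGER PRIMARY KEY);
    CREATE TABLE SocialInfo (
        Id INTEGER PRIMARY KEY,
        MemberId INTEGER NULL REFERENCES Member(Id),
        NewsURL TEXT
    );
    INSERT INTO Member VALUES (1);
    -- NULL in a FK column is allowed: the constraint is simply not checked
    INSERT INTO SocialInfo VALUES (10, NULL, 'http://example.com');
    INSERT INTO SocialInfo VALUES (11, 1, NULL);
""")
member_ids = [r[0] for r in conn.execute(
    "SELECT MemberId FROM SocialInfo ORDER BY Id")]
```

Both the NULL row and the referencing row insert cleanly, while a non-NULL value that is not in `Member` would still be rejected.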
I'm trying to calculate a month-to-date total using SQL Server 2008. I'm trying to generate a month-to-date count at the level of activities and representatives. Here are the results I want to generate: ``` | REPRESENTATIVE_ID | MONTH | WEEK | TOTAL_WEEK_ACTIVITY_COUNT | MONTH_TO_DATE_ACTIVITIES_COUNT | |-------------------|-------|------|---------------------------|--------------------------------| | 40 | 7 | 7/08 | 1 | 1 | | 40 | 8 | 8/09 | 1 | 1 | | 40 | 8 | 8/10 | 1 | 2 | | 41 | 7 | 7/08 | 2 | 2 | | 41 | 8 | 8/08 | 4 | 4 | | 41 | 8 | 8/09 | 3 | 7 | | 41 | 8 | 8/10 | 1 | 8 | ``` From the following tables: **ACTIVITIES\_FACT table** ``` +-------------------+------+-----------+ | Representative_ID | Date | Activity | +-------------------+------+-----------+ | 41 | 8/03 | Call | | 41 | 8/04 | Call | | 41 | 8/05 | Call | +-------------------+------+-----------+ ``` **LU\_TIME table** ``` +-------+-----------------+--------+ | Month | Date | Week | +-------+-----------------+--------+ | 8 | 8/01 | 8/08 | | 8 | 8/02 | 8/08 | | 8 | 8/03 | 8/08 | | 8 | 8/04 | 8/08 | | 8 | 8/05 | 8/08 | +-------+-----------------+--------+ ``` I'm not sure how to do this: I keep running into problems with multiple-counting or aggregations not being allowed in subqueries.
> A [running total](http://en.wikipedia.org/wiki/Running_total) is the summation of a sequence of numbers which is > updated each time a new number is added to the sequence, simply by > adding the value of the new number to the running total. I **THINK** He wants a running total for Month by each Representative\_Id, so a simple `group by` week isn't enough. He probably wants his `Month_To_Date_Activities_Count` to be updated at the end of every week. This query gives a **running total (month to end-of-week date)** ordered by Representative\_Id, Week ``` SELECT a.Representative_ID, l.month, l.Week, Count(*) AS Total_Week_Activity_Count ,(SELECT count(*) FROM ACTIVITIES_FACT a2 INNER JOIN LU_TIME l2 ON a2.Date = l2.Date AND a.Representative_ID = a2.Representative_ID WHERE l2.week <= l.week AND l2.month = l.month) Month_To_Date_Activities_Count FROM ACTIVITIES_FACT a INNER JOIN LU_TIME l ON a.Date = l.Date GROUP BY a.Representative_ID, l.Week, l.month ORDER BY a.Representative_ID, l.Week ``` --- ``` | REPRESENTATIVE_ID | MONTH | WEEK | TOTAL_WEEK_ACTIVITY_COUNT | MONTH_TO_DATE_ACTIVITIES_COUNT | |-------------------|-------|------|---------------------------|--------------------------------| | 40 | 7 | 7/08 | 1 | 1 | | 40 | 8 | 8/09 | 1 | 1 | | 40 | 8 | 8/10 | 1 | 2 | | 41 | 7 | 7/08 | 2 | 2 | | 41 | 8 | 8/08 | 4 | 4 | | 41 | 8 | 8/09 | 3 | 7 | | 41 | 8 | 8/10 | 1 | 8 | ``` [SQL Fiddle Sample](http://sqlfiddle.com/#!3/3b785/4/0)
As I understand your question: ``` SELECT af.Representative_ID , lt.Week , COUNT(af.Activity) AS Qnt FROM ACTIVITIES_FACT af INNER JOIN LU_TIME lt ON lt.Date = af.date GROUP BY af.Representative_ID, lt.Week ``` [SqlFiddle](http://sqlfiddle.com/#!3/181cfb/1)
How to calculate running total (month to date) in SQL Server 2008
[ "", "sql", "sql-server", "t-sql", "aggregation", "cumulative-sum", "" ]
[SQL FIDDLE](http://sqlfiddle.com/#!6/20480/1) ``` CREATE TABLE STUDY ( [ID][INT], STUDY_DATE VARCHAR(40), START_TIME VARCHAR (40), END_TIME VARCHAR (40) ) INSERT INTO STUDY VALUES(1,'2013-12-23','11:30:00','11:31:00') SELECT STUDY_DATE,START_TIME,END_TIME FROM STUDY WHERE (STUDY_DATE >='2013-12-22' AND CAST(START_TIME AS DATETIME) >='19:12:01') AND (STUDY_DATE <='2013-12-23' AND CAST(END_TIME AS DATETIME) <='13:12:14') ``` i have to fetch records from table with above criteria.. however my STUDY\_DATE criteria is fullfill but START\_TIME criteria not. thats the reason records not fetch from table.. What should i do.
In your example, '11:30:00' is NOT greater than or equal to '19:12:01' (when cast to datetime, but it doesn't matter). Do what people suggest - store dates as datetime, don't use varchars for them. Upd: OK, if you can't change your table: ``` SELECT STUDY_DATE,START_TIME,END_TIME FROM STUDY WHERE CAST(STUDY_DATE + 'T' + START_TIME AS DATETIME) >='2013-12-22T19:12:01' AND CAST(STUDY_DATE + 'T' + END_TIME AS DATETIME) <='2013-12-23T13:12:14' ```
Although it's better to use proper data types (`datetime` in this case), if you can't change the data types of these fields you can add [computed columns](http://technet.microsoft.com/en-us/library/ms191250%28v=sql.105%29.aspx) and even create indexes on them (if PERSISTED). ``` CREATE TABLE #STUDY ( [ID][INT], STUDY_DATE VARCHAR(40), START_TIME VARCHAR (40), END_TIME VARCHAR (40), START_DATETIME AS CAST(REPLACE(STUDY_DATE,'-','')+' '+START_TIME as datetime), END_DATETIME AS CAST(REPLACE(STUDY_DATE,'-','')+' '+END_TIME as datetime) ) INSERT INTO #STUDY VALUES(1,'2013-12-23','11:30:00','11:31:00') SELECT STUDY_DATE,START_TIME,END_TIME FROM #STUDY WHERE START_DATETIME >='20131222 19:12:01' AND END_DATETIME <='20131223 13:12:14' drop table #STUDY ```
Datetime search in sql
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I want all the rows between two dates. I tried **between**, but it doesn't return rows when both dates are the same, i.e. 2013-01-01 to 2013-01-01; in this case the result is empty. But if I give dates like 2013-01-01 to 2013-01-02, it shows 10 rows for the whole day of 2013-01-01. Any ideas how to solve this in MySQL? My query is like this: ``` SELECT MONTHNAME(access_date) as date, DATE_FORMAT( access_date,'%d/%m/%Y') as month_date , COUNT( log_id ) as total_count FROM user_activity_log WHERE dam_id = (SELECT dam_id FROM dam_content_details WHERE content_type= 'userLogin') AND access_date >= '2013-01-03' AND access_date <= '2013-01-03' GROUP BY MONTH( access_date ) ORDER BY access_date ASC ```
This is like your test: ``` mysql> select '2013-12-30 12:11:23' BETWEEN '2013-12-30' AND '2013-12-30' as test; +------+ | test | +------+ | 0 | +------+ ``` This is the correct way to works with datetime values: ``` mysql> select '2013-12-30 12:11:23' BETWEEN '2013-12-30 00:00:00' AND '2013-12-30 23:59:59' as test; +------+ | test | +------+ | 1 | +------+ ``` This is the workaround in order to work between date and datetime values: ``` mysql> select CAST('2013-12-30 12:11:23' AS DATE) BETWEEN '2013-12-30' AND '2013-12-30' as test; +------+ | test | +------+ | 1 | +------+ ``` From [MySQL doc](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-type-conversion.html): > MySQL converts a time value to a date or date-and-time value by > parsing the string value of the time as a date or date-and-time. This > is unlikely to be useful. For example, '23:12:31' interpreted as a > date becomes '2032-12-31'. Time values not valid as dates become > '0000-00-00' or NULL. > > Explicit conversion can be used to override implicit conversion. For > example, in comparison of DATE and DATETIME values, the DATE value is > coerced to the DATETIME type by adding a time part of '00:00:00'. To > perform the comparison by ignoring the time part of the DATETIME value > instead, use the CAST() function in the following way: > > date\_col = CAST(datetime\_col AS DATE) Probably a bug in my opinion...
``` SELECT * FROM your_table WHERE date BETWEEN '2012-02-28' AND '2012-03-30' ```
Retrive the rows for a same date in mysql
[ "", "mysql", "sql", "" ]
I need to subtract a number of business days (1-15 days) from a specific date, for example subtracting **5** business days from `2013-12-27` should return `2013-12-20`, is there an easy way to do that?
This post explains how to do it using a recursive CTE: [Business Days Calc Using Recursive CTE](http://waheedrous.wordpress.com/2013/12/24/transact-sql-user-function-to-subtract-business-days/)
One way to do that is to pre-create a table with all the dates for a couple of years in advance and select from that table. This way you can mark Saturdays, Sundays, holidays, etc.
Subtract Business Days from a Specific date
[ "", "sql", "sql-server", "t-sql", "" ]
The tables are: `contracts {id, user_id, unit_id}` AND `users {id}` A user can have several contracts, but EACH contract, can have only one user. So, the question is: the many-to-many relates to the table itself *(meaning: there are many contracts for many users*, ***so it's many-to-many)***, or to the record? *(meaning: each contract can have only one user*, ***so it's one-to-many)*** --- edit: Some clarification - contracts ``` user_id | unit_id | id - - - - - - - - - - - - - - - - - - - - - - - - - 455 | 455 | 1 455 | 115 | 1 344 | 225 | 1 222 | 455 | 1 ``` Please note, again: EACH user record can have many contracts, and EACH contract record can have one user, but in the table scope: they both can have many of the other.
One user to many contracts, in your own words. "Many to many" means that there are one-to-many relationships both ways (a user can have many contracts and a contract can have many users), not just that there are many records in both tables. Many-to-many relationships involve a junction table in most RDBMSs.
The relationship between *users* and *contracts* is one-to-many. That is, one user has many contracts. This does not need an association table. You can just put the `user_id` in the `contracts` table. An example of a many-to-many relationship would be items and orders. Items can be in many orders. Orders can have many items. In this case, you need an association table to properly represent the relationship in a relational database.
Is it Many-To-Many or Many-To-One?
[ "", "sql", "many-to-many", "one-to-many", "" ]
I know this is micro-optimization - this is more a question of curiosity. I am curious which is faster of the two options below: ``` 1) :new.a := upper(:new.a); 2) select upper(:new.a) into :new.a from dual; ``` I didn't find any performance info [here](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/assignment_statement.htm) or [here](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/selectinto_statement.htm#i36066), though those are usage docs, so I'm probably looking in the wrong place. I did run an explain plan on the second, but couldn't figure out how to get one working on the first.
```
SQL> set timing on
SQL> DECLARE
  2    i number;
  3  BEGIN
  4    FOR j IN 1..100000 LOOP
  5      i:=i+j;
  6    END LOOP;
  7  END;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> DECLARE
  2    i number;
  3  BEGIN
  4    FOR j IN 1..100000 LOOP
  5      SELECT i+j INTO i FROM dual;
  6    END LOOP;
  7  END;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.98
SQL>
``` 300 milliseconds vs. 6 seconds ===> **~ 20 times faster**
[Tom Kyte says](http://asktom.oracle.com/pls/apex/f?p=100:11:0%3a%3a%3a%3aP11_QUESTION_ID:60122715103602#2788690300346955974): > I have no idea why someone would want to code: > > ``` > select f(x) into l_x from dual; > ``` > > over > > ``` > l_x := f(x); > ``` > > It is just "stupid" (it must come from T-SQL and Sqlserver/Sybase > coders where they had to do it that way) I realise it doesn't entirely address your question. You will be doing a context switch in your second version, though I would expect the effect to be negligible except under extreme conditions, e.g. in a very tight loop. I have a vague memory that selecting from dual has been optimised out of PL/SQL calls when it is seen to be uneccessary but I can't find anything to back that up, so I might have imagined it. (And kordirko's test suggests I did!) There is certainly no benefit in using the `select` over the assignment, though.
Oracle Assignment vs Select Into
[ "", "sql", "performance", "oracle", "" ]
I'm using Oracle. In a column I'm getting data like: ``` my_name 1234 my_name1 1234 my name (1234) my name : 1234 ``` These are a name and an ID (numeric). The name can contain numbers as well, but the ID is always a number of 4-6 digits. I'm interested in the ID only. I have a decent understanding of regex in JS and Perl, but I have no idea how it works in Oracle SQL. I tried this: `regexp_replace('my_name - 7203', '[^[:digit:]]')`, which works fine but fails when the name contains numbers.
Try this: ``` select regexp_replace('my_4name - 7203', '.*?([[:digit:]]+)[)]?$','\1') from dual; ``` Hope this helps.
It is not a pretty solution, it it should work as you want. Also I break it down on subqueries to be more understandeable. The final replace would be: ``` substr( trim(regexp_replace(name, '[^[:digit:]]', ' ')), regexp_instr( trim( regexp_replace(name, '[^[:digit:]]', ' ')), '[^[:digit:]]')+1, length(trim(regexp_replace(name, '[^[:digit:]]', ' ')))) as ID ``` Check it here: ``` select substr( trim(regexp_replace(name, '[^[:digit:]]', ' ')), regexp_instr(trim( regexp_replace( name, '[^[:digit:]]', ' ')), '[^[:digit:]]')+1, length(trim(regexp_replace(name, '[^[:digit:]]', ' ')))) as ID from ( select 'my_name 1234' name from dual union all select 'my_name1 1234' name from dual union all select 'my name (1234)' name from dual union all select 'my_name1:1234' name from dual union all select 'my name : 1234' name from dual) ``` This is the version to be understanded: ``` select substr( justNumbers, regexp_instr(justNumbers, '[^[:digit:]]')+1, length(justNumbers)) as ID from (select trim(regexp_replace(name, '[^[:digit:]]', ' ')) as justNumbers from ( select 'my_name 1234' name from dual union all select 'my_name1 1234' name from dual union all select 'my name (1234)' name from dual union all select 'my_name1:1234' name from dual union all select 'my name : 1234' name from dual) ) ``` The idea is to left just numbers then get it from the next character +1 that isn't a number (`regexp_instr(justNumbers, '[^[:digit:]]')+1`).
How to get numbers from a string
[ "", "sql", "regex", "oracle", "" ]
I am very confused about these two structures. What are the advantages and disadvantages of these two tables? Which one is better, and why? ***TABLE1*** ``` id, name, age, birthdate, address somedata1 somedata1 somedata1 somedata1 somedata1 somedata2 somedata2 somedata2 somedata2 somedata2 somedata3 somedata3 somedata3 somedata3 somedata3 ``` ***TABLE2*** ``` id, col_name, col_value somedata name somedata somedata age somedata somedata birthdate somedata somedata address somedata somedata2 name somedata2 somedata2 age somedata2 somedata2 birthdate somedata2 somedata2 address somedata2 somedata3 name somedata3 somedata3 age somedata3 somedata3 birthdate somedata3 somedata3 address somedata3 ```
# Antipattern? In the common case, the second table is an **anti-pattern** in the context of database design. Moreover, it has a specific name: *Entity-Attribute-Value* (EAV). There are some cases when using this design is justified, but they are rare - and even there it can usually be avoided. # Why EAV is bad **Data integrity support** Despite the fact that such a structure seems more "flexible" or "advanced", this design has weaknesses. * **Impossible to make mandatory attributes**. You cannot make an attribute mandatory, since an attribute is now stored as a row - and the only sign that an attribute is not set is that the corresponding row is absent from the table. SQL will not allow you to build such a constraint natively - thus, you'll have to check it in the application - and, yes, *query your table each time* * **Mixing of data types**. You will not be able to use standard SQL data types, because your value column must be a "super-type" for all values stored in it. That means you'll generally have to store all data as *raw strings*. Then you'll see how painful it is to work with dates as strings, casting data types each time, checking data integrity, etc. * **Impossible to enforce referential integrity**. In a normal situation, you can use a foreign key to restrict your values to those defined in a parent table. But not in this case - that's because referential integrity applies to whole rows in a table, not to row values. So you'll lose this advantage - and it's *one of the fundamentals* of relational DBs * **Impossible to set attribute names**. That means you can't properly restrict attribute names at the DB level. For example, you'll write `"customer_name"` as an attribute name in one place - and another developer will forget that and use `"name_of_customer"`. And... it's ok, the DB will accept it, and you'll end up with hours spent debugging this case. **Row reconstruction** In addition, row reconstruction will be awful in the common case. If you have, for example, 5 attributes, that will be a 5-way self-join. Too bad for such a simple - at first glance - case. So I don't even want to imagine how you'll maintain 20 attributes. # Can it be justified? My point is - no. In an RDBMS there will always be a way to avoid this. It's horrible. And if EAV really must be used, then the best choice may be a *non-relational* database.
In the second case (table2), queries are complex and take much more time to find data. This design is used when you don't know the number of columns, or when they vary. If you have a fixed set of columns, use the first case (table1), because data is found faster that way.
Difference between two table structure
[ "", "mysql", "sql", "database", "database-design", "" ]
So I want to create a single query which will return the names of `projects` and the number of `assignments` related to each `project` with a particular severity. For example: table `assigment` contains: ``` id | name | project_id | severity | -----+-------------------------------------+-------------+-----------+ 148 | Create background | 1 | 1| 184 | Create frontend | 1 | 1| 151 | Create Budged | 1 | 2| 155 | Assign all tasks | 1 | 3| 179 | Drink Beer | 1 | 1| ``` Table `project` contains only `name` and `id`, as follows: ``` id | name -----+------------------------------------- 1 | Very Important Project ``` I would like to create a single query which will return something like this: ``` projectid | projectname | CriticalAssig| MediumAssig | LowAssig ------------+-----------------------+--------------+-------------+---------- ``` This is what I have at the moment: ``` SELECT p.id, p.name, Count(a1.id) AS one, Count(a2.id) AS two, Count (a3.id) AS three FROM project p INNER JOIN assign a1 ON a1.project_id = p.id INNER JOIN assign a2 ON a2.project_id = p.id INNER JOIN assign a3 ON a3.project_id = p.id WHERE a2.severity = '2' AND a1.severity = '1' AND a3.severity = '3' GROUP BY p.id, p.name; ``` But the result of this query is ridiculous: in columns `one`, `two`, `three` I get numbers like `90000` (the same number everywhere), while the simple query `select count(*) from assig where project_id=x` returns `300`. Can anyone point out where my mistake is?
You can do it without any joins: ``` count(case when severity = 1 then 1 else null end) as one ``` (The problem in your current query is that you're multiplying the number of rows before grouping. When developing aggregates, run the query without the aggregate function calls and group by/having clauses, to see what you're actually counting, summing, etc.)
Try this: ``` SELECT p.id, p.name, SUM(a.severity = 1) AS one, SUM(a.severity = 2) AS two, SUM(a.severity = 3) AS three FROM project p INNER JOIN assign a ON a.project_id = p.id GROUP BY p.id, p.name ``` **OR** ``` SELECT p.id, p.name, SUM(CASE WHEN a.severity = 1 THEN 1 ELSE 0 END) AS one, SUM(CASE WHEN a.severity = 2 THEN 1 ELSE 0 END) AS two, SUM(CASE WHEN a.severity = 3 THEN 1 ELSE 0 END) AS three FROM project p INNER JOIN assign a ON a.project_id = p.id GROUP BY p.id, p.name ```
SQL query to count rows under 3 different conditions
[ "", "sql", "postgresql", "select", "count", "group-by", "" ]
I have an SQL table below that contains data I need to collate. What I would like to do is combine all data for the ID of 746 contained in column B, so that in the Results table column R2 contains the sum of column E when column C is non-zero and column R4 contains the sum of column E when column D is non-zero. *Both sums are then reduced by the percentage displayed in column F.* Column R3 will be the sum of column C and column R5 is the sum of column D. **Source Data** ``` +----+------+-------------+------+------------+----+ | A | B | C | D | E | F | +----+------+-------------+------+------------+----+ | 78 | 746 | 27 | 0 | 592.38 | 50 | | 78 | 746 | 27 | 0 | 592.38 | 50 | | 78 | 746 | 0 | 52.5 | 3178.36 | 50 | | 78 | 746 | 484.25 | 0 | 10616.8450 | | | 78 | 827 | 875 | 0 | 19215 | 50 | | 78 | 827 | 125 | 0 | 2745 | 50 | | 78 | 1078 | 63.59999847 | 0 | 1272 | 50 | +----+------+-------------+------+------------+----+ ``` **Results** ``` +-----+---------+--------+---------+------+ | R1 | R2 | R3 | R4 | R5 | +-----+---------+--------+---------+------+ | 746 | 5900.80 | 511.25 | 1589.18 | 52.5 | +-----+---------+--------+---------+------+ ``` This script should populate the initial data: ``` create table #Test ( A int, B int, C decimal(10,2), D decimal(10,2), E decimal(10,2), F int ) insert into #Test select 78, 746, 27, 0, 0, 50 insert into #Test select 78, 746, 27, 0, 592.38, 50 insert into #Test select 78, 746, 0, 52.5, 3178.36, 50 insert into #Test select 78, 746, 484.25, 0, 10616.8450, 50 insert into #Test select 78, 827, 875, 0, 19215, 50 insert into #Test select 78, 827, 125, 0, 2745, 50 insert into #Test select 78, 1078,63.60, 0, 1272, 50 ``` As this is not something I have done a lot of in SQL Server, I am feeling a little flummoxed. I think what I need is a subquery, but I am not exactly sure; any help would be fantastic. Thanks
Ok, it seems that this is what you want: ``` SELECT B AS R1, SUM(CASE WHEN C != 0 THEN E END)*MIN(F)/100 AS R2, SUM(C) AS R3, SUM(CASE WHEN D != 0 THEN E END)*MIN(F)/100 AS R4, SUM(D) AS R5 FROM #test WHERE B = 746 GROUP BY B ``` Results: ``` ╔═════╦═════════════╦════════╦═════════════╦═══════╗ β•‘ R1 β•‘ R2 β•‘ R3 β•‘ R4 β•‘ R5 β•‘ ╠═════╬═════════════╬════════╬═════════════╬═══════╣ β•‘ 746 β•‘ 5900.805000 β•‘ 538.25 β•‘ 1589.180000 β•‘ 52.50 β•‘ β•šβ•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β• ``` The difference in the result of the column `R3` is because you are not considering one of the rows.
``` SELECT 746 AS R1, SUM(c) AS R3, SUM(D) AS R5 FROM tablename WHERE B = 746; ``` * [Demo](http://www.sqlfiddle.com/#!3/4bfa5/7)
Combining multiple rows into one row SQL query
[ "", "sql", "sql-server", "" ]
I have a table in the database that consists of 10 columns; the header name for each column is > 'Col' + column number It looks like this: ``` id|Col1|Col2|Col3|Col4|Col5|Col6 ``` I have an SQL statement that runs through a stored procedure, and the procedure takes a column number as a parameter: ``` @ColumnNumber nvarchar(10) ``` All I want to do is order the result by the column number passed in the parameter, something like this: ``` Select * from [Table] order by ('Col' + @ColumnNumber) ``` but it doesn't work for me.
You need to create dynamic SQL to make it work like ``` DECLARE @ColumnNumber nvarchar(10) DECLARE @strSQL nvarchar(max) SET @ColumnNumber = '1' SET @strSQL = 'Select * from [Table] order by Col' + @ColumnNumber EXEC(@strSQL) ``` **UPDATE :** If you have @List variable of XML type which have multiple ids then you can get in comma separated format like below ``` DECLARE @RecordIds nvarchar(max) DECLARE @List XML = '<Records><id>1</id></Records><Records><id>2</id></Records>' SELECT @RecordIds = STUFF((SELECT ',' + RecordId.value('.','varchar(5)') FROM @List.nodes('Records/id') AS Test(RecordId) FOR XML PATH ('')) , 1, 1, '') ``` Then you can use that comma separated string in your query like ``` SET @strSQL = 'Select * from [Table] where Id in ( ' + @RecordIds + ' ) order by Col' + @ColumnNumber ```
Try this one - ``` DECLARE @ColumnNumber INT SET @ColumnNumber = 1 SELECT * FROM [Table] ORDER BY CASE @ColumnNumber WHEN 1 THEN Col1 WHEN 2 THEN Col2 WHEN 3 THEN Col3 END ```
how to use string value in Order By clause?
[ "", "sql", "sql-server", "" ]
I have the following query: ``` SELECT * FROM `shop` WHERE `name` LIKE '%[0-9]+ store%' ``` I wanted to match strings that says `'129387 store'`, but the above regex doesn't work. Why is that?
Use [**REGEXP**](http://dev.mysql.com/doc/refman/5.5/en/regexp.html#operator_regexp) operator instead of **LIKE** operator Try this: ``` SELECT '129387 store' REGEXP '^[0-9]* store$'; SELECT * FROM shop WHERE `name` REGEXP '^[0-9]+ store$'; ``` Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/92552/6) **OUTPUT** ``` | NAME | |--------------| | 129387 store | ```
If you mean MySQL, `LIKE` does not implement regular expressions. It implements the much more restricted SQL pattern matching, which just has two special operators: `%` matches any sequence of characters, and `_` matches any single character. If you want regular expression matching, you must use the `REGEXP` or `RLIKE` operator: ``` SELECT * FROM shop WHERE name REGEXP '[0-9]+ store' ``` MySQL's regular expression language doesn't include `\d` to specify digits, but you could write that as: ``` SELECT * FROM shop WHERE name REGEXP '[[:digit:]]+ store' ``` If the store name must begin with digits, you need an anchor: ``` SELECT * FROM shop WHERE name REGEXP '^[0-9]+ store' ``` You can learn more about regular expression syntax at [regular-expressions.info](http://regular-expressions.info).
regex in SQL to detect one or more digit
[ "", "mysql", "sql", "regex", "select", "sql-like", "" ]
I have a table as shown below: ``` cust_id/frequency 01/null 01/null 02/null 02/null 01/null 03/null 04/null 04/null 03/null ``` After updating the table with the frequency, it should look like this: ``` cust_id/frequency 01/3 01/3 02/2 02/2 01/3 03/2 04/2 04/2 03/2 ``` I have tried the query below, but it only shows the result I want; it doesn't update the table. ``` select cust_id, count(*) as frequency from table A group by cust_id order by count(*) desc ``` Is there any query that can help me update this, as I have 10000 plus cust_id values? Thank you very much for spending your time looking at my question.
Something like this: ``` UPDATE t1 SET t1.frequency = cnt FROM tbl t1 JOIN (SELECT cust_id, Count(1) cnt FROM tbl GROUP BY cust_id) t2 ON t1.cust_id = t2.cust_id ```
SQL Server has a very nice feature, which is that CTEs (and subqueries) can be updatable. You can do what you want using updatable CTEs and window functions: ``` with toupdate as ( select t.*, count(*) over (partition by cust_id) as newfrequency from table t ) update toupdate set frequency = newfrequency; ```
updating with frequency for customer ID
[ "", "sql", "sql-server", "sql-server-2008", "sql-update", "frequency", "" ]
I'm trying to execute a query in VB6. At the moment I have this: ``` Dim conn As ADODB.Connection Dim rs As ADODB.Recordset Private Sub Form_Load() Set conn = New ADODB.Connection conn.ConnectionString = "Provider = Microsoft.Jet.OLEDB.4.0; Data Source = " & App.Path & "\DB1.mdb; Persist Security Info = False" conn.Open End Sub Private Sub Command1_Click() Dim strSQL As String strSQL = "SELECT * FROM customers WHERE name = " & Text2.Text rs.Open strSQL, conn End Sub ``` This results in a `object variable or with block variable not set`. I'm kind of new to VB and Access databases, so any answers or links to good tutorials would be appreciated.
`rs` is never instantiated. Try ``` Dim rs As New ADODB.Recordset ```
Add ``` Set rs = New ADODB.Recordset ``` before running rs.Open. Same idea as the connection, the object needs to be instantiated first. Other than that you're ok. It's also good practice to close the rs and connection when you're done with them.
Executing a query in VB6
[ "", "sql", "ms-access", "vb6", "" ]
I have three tables: ab, a, and b. Tables a and b can have multiple occurrences of the same tourid. ``` SELECT * FROM ab tourid tourname ------ ------------ 100 hundred 110 one ten 120 one twenty select * from a; imageid tourid filesize ------- ------ ---------- 1 100 10 2 100 20 SELECT * FROM b uid tourid filesize ------ ------ ---------- 5 100 5 ``` SQL query: ``` SELECT a.tourid, SUM(a.filesize) AS a_sum, SUM(b.filesize) AS b_sum FROM ab LEFT JOIN a ON a.tourid=ab.tourid LEFT JOIN b ON b.tourid=ab.tourid WHERE ab.tourid=100 ``` gives the result as: ``` tourid a_sum b_sum ------ ------ -------- 100 30 10 ``` But the result should be: ``` tourid a_sum b_sum ------ ------ -------- 100 30 5 ``` The result in the b_sum column is wrong.
In the first table, you have one row: ``` tourid tourname 100 hundred ``` Joining the a table will produce 2 rows: ``` tourid tourname imageid filesize 100 hundred 1 10 100 hundred 2 20 ``` Joining the b table will keep 2 rows: ``` tourid tourname imageid filesize uid filesize 100 hundred 1 10 5 5 100 hundred 2 20 5 5 ``` The correct query is: ``` select tourid, sum_a, sum_b from ab left join (select tourid, sum(filesize) as sum_a from a group by tourid) a on ab.tourid = a.tourid left join (select tourid, sum(filesize) as sum_b from b group by tourid) b on ab.tourid = b.tourid ```
You need to add `GROUP BY ab.tourid` to deal with the duplicates; the joined rows are repeated, which is why you are getting the sum twice. ``` SELECT a.tourid, SUM(a.filesize) AS a_sum, SUM(b.filesize) AS b_sum FROM ab LEFT JOIN a ON a.tourid=ab.tourid LEFT JOIN b ON b.tourid=ab.tourid WHERE ab.tourid=100 GROUP BY ab.tourid ```
left join produces wrong results
[ "", "mysql", "sql", "" ]
Table A structure: ![enter image description here](https://i.stack.imgur.com/dO1Aq.png) Table B structure: ![enter image description here](https://i.stack.imgur.com/LQZOs.png) Above are two tables, TableB.TableARelationID is a relationID which used to map table A. Desired output: ![enter image description here](https://i.stack.imgur.com/elEUt.png) The desired result would be taking TableA.RecordID and TableB.Text, but only of Type 2 in table B, i.e. ignore Type 1 Below is the SQL query which I used: ``` SELECT tablea.recordid, tableb.text FROM tablea LEFT JOIN tableb ON tablea.relationid = tableb.tablearelationid WHERE type = 2 ``` But the above query would output: ![enter image description here](https://i.stack.imgur.com/LG7qB.png) i.e. RecordID 1 was missing, as the "where" clause filtered. So how can I show RecordID 1 from Table A?
You need to move the `type = 2` filter to the join condition: ``` SELECT TableA.RecordID, TableB.Text FROM TableA LEFT JOIN TableB ON TableA.RelationID = TableB.TableARelationID AND TableB.Type = 2; ``` Consider the result of just this: ``` SELECT TableA.RecordID, TableB.Text, TableB.Type FROM TableA LEFT JOIN TableB ON TableA.RelationID = TableB.TableARelationID; ``` You would get ``` RecordID | Text | Type 1 | NULL | NULL 2 | B | 2 3 | C | 2 4 | D | 2 ``` Then you are filtering on the type column, so for recordID = 1 you have where `NULL = 2` which is false (it is not actually false, it is null, but it is not true), so this record is eliminated from the final result. Whenever you left join, any filtering criteria for the outer (left-joined) table must go in the join condition, not the where clause; otherwise you effectively turn it into an inner join.
If you filter using the `Where` statement the join will be treated as a `inner join`. ``` select TableA.RecordID , TableB.Text from TableA left join TableB on TableA.RelationID = TableB.TableARelationID AND TableB.Type = 2 ```
SQL join two tables with specific condition
[ "", "sql", "select", "join", "left-join", "where-clause", "" ]
I have 7 columns (s0, s1, s2, s3, s4, s5, s6) in my table and I want to execute the following query: ``` UPDATE myTable SET s0=ROUND(s0/(s0+s1+s2+s3+s4+s5+s6)*100)/100, s1=ROUND(s1/(s0+s1+s2+s3+s4+s5+s6)*100)/100, s2=ROUND(s2/(s0+s1+s2+s3+s4+s5+s6)*100)/100, s3=ROUND(s3/(s0+s1+s2+s3+s4+s5+s6)*100)/100, s4=ROUND(s4/(s0+s1+s2+s3+s4+s5+s6)*100)/100, s5=ROUND(s5/(s0+s1+s2+s3+s4+s5+s6)*100)/100, s6=ROUND(s6/(s0+s1+s2+s3+s4+s5+s6)*100)/100; ``` The problem is that MySQL updates s0 and then uses the new value when calculating s1, and so on. How can I fix the value of `(s0+s1+s2+s3+s4+s5+s6)` for each row for the duration of the statement?
I believe it should do the job as well (assuming table has a primary key): ``` UPDATE myTable a INNER JOIN ( SELECT ROUND(s0/(s0+s1+s2+s3+s4+s5+s6)*100)/100 AS new_s0, ROUND(s1/(s0+s1+s2+s3+s4+s5+s6)*100)/100 AS new_s1, ROUND(s2/(s0+s1+s2+s3+s4+s5+s6)*100)/100 AS new_s2, ROUND(s3/(s0+s1+s2+s3+s4+s5+s6)*100)/100 AS new_s3, ROUND(s4/(s0+s1+s2+s3+s4+s5+s6)*100)/100 AS new_s4, ROUND(s5/(s0+s1+s2+s3+s4+s5+s6)*100)/100 AS new_s5, ROUND(s6/(s0+s1+s2+s3+s4+s5+s6)*100)/100 AS new_s6, pk_column FROM myTable )b ON (b.pk_column = a.pk_column) SET a.s0 = b.new_s0, .... ```
An obvious solution is: Add column sum to your table. Update the sum (populate it with value for each row, this is trivial). Then use the sum column in your query (instead of s0+s1+s2+s3+s4+s5+s6). Finally drop the sum column.
Update table with constant value
[ "", "mysql", "sql", "" ]
I have two tables, "Table A" and "Table B". Table A is the result of joins with other tables, and Table B is a separate table with one field in common with Table A. Table A: ``` Year Name Value 2011 A Item1 2010 B 1 2011 C Item2 ``` Table B: ``` id Value 1 Item1 2 Item2 3 Item3 4 Item4 ``` I want the result to be like: ``` Year Name Value 2011 A Item1 2010 B NULL 2011 C Item2 ``` My efforts so far: ``` SELECT d.Portfolio, d.Name, d.AccountName, d.CashGAAP, d.OriginalDate, d.Amount, d.AccountNumber, d.AttributeSetName, d.TheDate, d.Year, d.Value FROM (SELECT Portfolio.LegalEntityName AS Portfolio, Forecast.Name, CoA.AccountName, Forecast_Data.CashGAAP, CONVERT(DATE, Forecast_Data.TheDate) AS OriginalDate, SUM(Forecast_Data.Amount) AS Amount, CoA.AccountNumber , Attribute_Set.AttributeSetName, '' + CONVERT(VARCHAR, YEAR(Forecast_Data.TheDate)) + '-' + CONVERT(VARCHAR, MONTH(Forecast_Data.TheDate)) + '-01' AS TheDate, YEAR(Forecast_Data.TheDate) AS Year, Forecast_Attribute.Value FROM Portfolio INNER JOIN Forecast ON Portfolio.PortfolioID = Forecast.PortfolioID INNER JOIN Forecast_Account ON Forecast.ForecastID = Forecast_Account.ForecastID INNER JOIN Forecast_Data ON Forecast_Account.ForecastAccountID = Forecast_Data.ForecastAccountID INNER JOIN CoA ON CoA.AccountNumber = Forecast_Account.AccountNumber INNER JOIN Attribute_Set ON CoA.AttributeSetID = Attribute_Set.AttributeSetID INNER JOIN Forecast_Attribute ON Forecast_Account.ForecastAccountID = Forecast_Attribute.ForecastAccountID WHERE (Forecast.ForecastID = 5) OR (Forecast.ForecastID = 6) GROUP BY Portfolio.LegalEntityName, Forecast.Name, CoA.AccountName, Forecast_Data.CashGAAP, Forecast_Data.TheDate, CoA.AccountNumber, Attribute_Set.AttributeSetName, Forecast_Attribute.Value) AS d LEFT OUTER JOIN Vendor ON d.Value = Vendor.VendorName ``` I have renamed the example tables explained in the question: ``` Table A = d Table B = Vendor ```
You should LEFT JOIN B to A and show `B.Value` instead of `A.Value` in the SELECT list: ``` SELECT Year, Name, B.Value FROM A LEFT JOIN B on A.Value=B.Value ```
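Editorial note: the LEFT JOIN behavior described above can be sketched with Python's `sqlite3` module (not from the original answers); the simplified `A`/`B` tables and sample values mirror the question, and the unmatched row comes back with `None` (SQL `NULL`).

```python
import sqlite3

# In-memory sketch of the simplified A/B tables from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (Year INTEGER, Name TEXT, Value TEXT);
    CREATE TABLE B (id INTEGER, Value TEXT);
    INSERT INTO A VALUES (2011,'A','Item1'), (2010,'B','1'), (2011,'C','Item2');
    INSERT INTO B VALUES (1,'Item1'), (2,'Item2'), (3,'Item3'), (4,'Item4');
""")

# LEFT JOIN keeps every row of A; B.Value is NULL where nothing matches.
rows = conn.execute("""
    SELECT A.Year, A.Name, B.Value
    FROM A LEFT JOIN B ON A.Value = B.Value
    ORDER BY A.Name
""").fetchall()
```

The row for `B` surfaces as `(2010, 'B', None)`: the `NULL` from the unmatched right side.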
The `INNER JOIN` keyword selects all rows from both tables as long as there is a match between the columns in both tables You need to apply `LEFT JOIN` ``` SELECT column_name(s) FROM TableA LEFT OUTER JOIN TableB ON tableA.Value=tableB.Value; ``` The `LEFT JOIN` keyword returns all rows from the left table (tableA), with the matching rows in the right table (tableB). The result is `NULL` in the right side when there is no match.
show null values that does not match with another table sql server
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
Thanks for reviewing this. I have a fairly simple problem that I'm sure you can help me solve. I have a table with 14 columns; 8 of them are numeric (int, bigint, numeric(18,0)) and the rest are text types such as char(10), varchar(255), and nvarchar(4000). The table is filled with random values, and every column except ID contains a lot of NULL values. I want to write one or more stored procedures that set every NULL in the numeric columns to 0 and every NULL in the string columns to 'no value'. If the SP could also return the count of all NULL values across all columns, that would be amazing. Thanks so much.
You can use a loop in a stored procedure, but the best way is to check each column individually for NULL values, because the columns may have various data types. I hope this is helpful.
You can update as stated above, but if you need a field to default to 0, then you should have used DEFAULT at table creation: ``` CREATE TABLE dbo.Something ( id INT IDENTITY(1, 1), col1 INT DEFAULT 0 ) ``` But if it's legacy code, then in the insert proc you could have used something like: ``` CREATE PROCEDURE [dbo].[spa_CRUDSomething] @flag CHAR(1) , @col1 INT = 0 AS ``` Usually the problem isn't in applying fixes, but rather in the way the fixes are applied.
How To Fill All Null Values In Table with Stored Procedure
[ "", "sql", "sql-server", "stored-procedures", "sqldatatypes", "" ]
I have three tables `Color`, `Shade`, and `Activity`: Color: ``` id color --- ------ 1 red 2 green 3 white Shade id shade ---- ------- 1 light 2 dark ``` Activity: ``` user_id shade_id color_id ------ -------- -------- 1 1 1 1 1 2 2 2 3 ``` I am using mysql and can easily find the colors belonging to a user by a specific shade: ``` select c.name, 'assigned' from color c left join activity a on c.id = a.color_id where a.shade_id = 1 and a.user_id = 1; ``` The above will give me: ``` Color Status ----- ------- red assigned green assigned ``` **Question** However, I want a list of all the colors with `assigned` for the ones that belong to her and `not assigned` for the ones that don't. So I would want: ``` Color Status ----- ------- red assigned green assigned white not assigned ```
``` SELECT Color, B.shade_id Status FROM Color A LEFT JOIN (SELECT * FROM Activity WHERE shade_id = 1) B ON A.id = B.color_id ``` `NULL` represents unassigned.
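Editorial note: the accepted answer leaves `NULL` in place of a label; a minimal sketch (not from either answer, run here with Python's `sqlite3`) turns that `NULL` into the literal `'not assigned'` string the question asked for, using the same pre-filtered LEFT JOIN plus a `CASE` expression.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE color (id INTEGER, color TEXT);
    CREATE TABLE activity (user_id INTEGER, shade_id INTEGER, color_id INTEGER);
    INSERT INTO color VALUES (1,'red'), (2,'green'), (3,'white');
    INSERT INTO activity VALUES (1,1,1), (1,1,2), (2,2,3);
""")

# Pre-filter activity in a derived table, LEFT JOIN it, then label the
# NULLs produced by unmatched rows.
rows = conn.execute("""
    SELECT c.color,
           CASE WHEN a.color_id IS NULL THEN 'not assigned' ELSE 'assigned' END AS status
    FROM color c
    LEFT JOIN (SELECT * FROM activity WHERE shade_id = 1 AND user_id = 1) a
           ON c.id = a.color_id
    ORDER BY c.id
""").fetchall()
```

Filtering `activity` *before* the join matters: putting `a.shade_id = 1` in an outer `WHERE` would silently turn the LEFT JOIN back into an inner join.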
Join it with `shade` table as well, like below ``` select name, case when shade is null then 'Not Assigned' else 'Assigned' end as Status from (select c.name, s.shade from color c left join shade s on c.id = s.id left join activity a on c.id = a.color_id where a.shade_id = 1 and a.user_id = 1; ) tab ```
how to join two tables and get null values
[ "", "mysql", "sql", "" ]
I'm having trouble joining two count statements; what's throwing me off is that the teamids are different. ``` SELECT COUNT(teamid) AS homescore FROM goals WHERE teamid = 1 AND gameid = 1 SELECT COUNT(teamid) AS awayscore FROM goals WHERE teamid = 2 AND gameid = 1 ``` *query 1 result* ``` homescore 0 ``` *query 2 result* ``` awayscore 0 ``` What I'm trying to achieve: ``` homescore | awayscore 0 | 0 ```
Not sure exactly what you are trying to do, but maybe this: ``` SELECT COUNT(teamid) AS homescore FROM goals WHERE teamid = 1 AND gameid = 1 union SELECT COUNT(teamid) AS awayscore FROM goals WHERE teamid = 2 AND gameid = 1 ``` **EDIT**: to bring back columns and take care of null values. ``` select nvl((SELECT COUNT(teamid) AS score FROM goals WHERE teamid = 1 AND gameid = 1),0) as homescore, nvl((SELECT COUNT(teamid) AS score FROM goals WHERE teamid = 2 AND gameid = 1),0) as awayscore from dual ```
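Editorial note: `nvl()` and `dual` in the edited answer are Oracle-isms; since `COUNT` already returns 0 (never `NULL`) when no rows match, a portable sketch needs only two plain scalar subqueries side by side. Run here with Python's `sqlite3` for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE goals (teamid INTEGER, gameid INTEGER);
    INSERT INTO goals VALUES (1,1), (1,1), (2,1), (1,2);
""")

# Two scalar subqueries produce the two columns in one row; COUNT yields 0
# rather than NULL for an empty match, so no NVL/COALESCE wrapper is needed.
row = conn.execute("""
    SELECT (SELECT COUNT(*) FROM goals WHERE teamid = 1 AND gameid = 1) AS homescore,
           (SELECT COUNT(*) FROM goals WHERE teamid = 2 AND gameid = 1) AS awayscore
""").fetchone()
```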
I THINK this is what you're looking for. ``` select sum(case when teamid = 1 then 1 else 0 end) as homescore, sum(case when teamid = 2 then 1 else 0 end) as awayscore from goals where gameid = 1 ```
Inner join 2 count statements
[ "", "mysql", "sql", "" ]
So I want to remove the first 4 characters from a string in oracle. Those characters can be different every time. In my case I need to take away the first 4 characters of an IBAN and put them at the end of the string. I got the part of putting them to the end of the string but I can't get the first 4 characters to be removed. Every solution I find on the internet removes specified characters, not characters from a certain position in the string (1 to 4). I used the code below to get the first 4 characters to the end of the string and wanted to try something similar for removing them at the front but without success. ``` SELECT SUBSTR(iban_nummer, 1, 4) INTO iban_substring FROM dual; iban_nummer := iban_nummer || iban_substring; ```
See the [docs](http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_2101.htm#OLADM679): > substring\_length ... > When you do not specify a value for this argument, then the function returns all characters to the end of string. When you specify > a value that is less than 1, the function returns NA. So `iban_nummer := substr(iban_nummer, 5) || substr(iban_nummer, 1,4)` should work. The first part selects all characters beginning from the 5th, the second character numbers 1..4.
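Editorial note: SQLite's `substr()` happens to share the Oracle semantics used above (1-based positions; omitting the length argument takes everything to the end of the string), so the rotation can be sketched and run via Python's `sqlite3`. The IBAN-shaped value is purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
iban = "NL91ABNA0417164300"  # illustrative IBAN-shaped value

# substr(v, 5) takes everything from the 5th character onward;
# substr(v, 1, 4) takes the first four characters, appended at the end.
rotated = conn.execute(
    "SELECT substr(:v, 5) || substr(:v, 1, 4)", {"v": iban}
).fetchone()[0]
```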
``` update table_name set col_name=substr(col_name,5); ```
Oracle: remove first 4 characters from a string
[ "", "sql", "string", "oracle", "" ]
In my **android** project's **SqLite** database I have to create a **foreign key** which is the **primery key** of the table. I wrote the sql statement as below using **SQLiteManager**. ``` CREATE TABLE OBTTourVehicleUpdate ( TourHeaderCode INT PRIMARY KEY NOT NULL, TourVehicleProcessCode INT NOT NULL, VehicleCode CHAR(10), TourStart TEXT , TourEnd TEXT , LastMilage DOUBLE, NewMilage DOUBLE, CONSTRAINT FOREIGN KEY (TourHeaderCode) REFERENCES OBTTourHeader(TourHeaderCode) ); ``` It gives me an error message saying that > Internal Error. near FOREIGN: syntax error. The **table structure** of two tables are as below. ![enter image description here](https://i.stack.imgur.com/6NJhL.jpg) how can I fix this.
Remove `CONSTRAINT` from your code. Just do the ``` FOREIGN KEY (TourHeaderCode) REFERENCES OBTTourHeader(TourHeaderCode). ``` Remember to create the table in `onCreate()` and handle it in `onUpgrade()`, and also bump the `DB_version`. And to be on the safe side, do not declare your FK as the PK too.
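Editorial note: a minimal sketch (columns simplified, run via Python's `sqlite3`) showing the corrected `FOREIGN KEY` clause in action. One SQLite quirk worth knowing: foreign keys are only *enforced* when `PRAGMA foreign_keys = ON` is set on the connection; it is off by default.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is OFF by default
conn.executescript("""
    CREATE TABLE OBTTourHeader (TourHeaderCode INTEGER PRIMARY KEY);
    CREATE TABLE OBTTourVehicleUpdate (
        TourHeaderCode INTEGER PRIMARY KEY NOT NULL,
        VehicleCode TEXT,
        FOREIGN KEY (TourHeaderCode) REFERENCES OBTTourHeader(TourHeaderCode)
    );
    INSERT INTO OBTTourHeader VALUES (1);
""")

conn.execute("INSERT INTO OBTTourVehicleUpdate VALUES (1, 'V-01')")  # parent exists: OK
try:
    conn.execute("INSERT INTO OBTTourVehicleUpdate VALUES (99, 'V-02')")  # no parent row
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
```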
You get that error because your syntax is indeed incorrect. Read the [official SQLite documentation](http://www.sqlite.org/foreignkeys.html) to learn more about how it should be used. ``` CREATE TABLE OBTTourVehicleUpdate ( TourHeaderCode INT PRIMARY KEY NOT NULL, TourVehicleProcessCode INT NOT NULL, VehicleCode CHAR(10), TourStart TEXT , TourEnd TEXT , LastMilage DOUBLE, NewMilage DOUBLE, FOREIGN KEY(TourHeaderCode) REFERENCES OBTTourHeader(TourHeaderCode) ); ``` Something like that should work.
SQLite FOREIGN KEY Constraint
[ "", "android", "sql", "sqlite", "" ]
I have one table with two DATETIME columns, like the following: ``` CreatedDate DeliveryDate Diff_Date 2013-05-12 18:30:00.000 2013-05-17 18:30:00.000 1900-01-06 00:00:00.000 ``` I want Diff_Date; for that I wrote an SQL query like ``` select (DeliveryDate - CreatedDate) as Diff_Date from MYTABLE ``` but I'm getting the wrong answer :( Please correct me.
Try this: For `MySQL`, the [**DATEDIFF()**](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_datediff) function. ``` SELECT DATEDIFF(DeliveryDate, CreatedDate) AS Diff_Date FROM MYTABLE ``` For `SQL SERVER`, the [**DATEDIFF()**](http://technet.microsoft.com/en-us/library/ms189794%28v=sql.105%29.aspx) function ``` SELECT DATEDIFF(dd, CreatedDate, DeliveryDate) AS Diff_Date FROM MYTABLE ```
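Editorial note: a runnable sketch of the same day-difference computation. SQLite (used here just to execute it via Python's `sqlite3`) has no `DATEDIFF`; the difference of two `julianday()` values gives the gap in days, which is the equivalent of the MySQL/SQL Server calls above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# julianday() converts each date to a Julian day number; subtracting
# two of them yields the number of days between the dates.
days = conn.execute(
    "SELECT CAST(julianday('2013-05-17') - julianday('2013-05-12') AS INTEGER)"
).fetchone()[0]
```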
If you want the number of days as an integer between two values then use: ``` select datediff(day, DeliveryDate, CreatedDate) ```
Getting wrong difference of dates column in sql?
[ "", "mysql", "sql", "sql-server", "select", "datediff", "" ]
First query ``` select id from posts where post_title='abc' || post_title='xyz' order by id desc limit 1; ``` Let's say the returned values are 730 and 735. Next query ``` delete from posts where id in(730,735); ``` I want both of these queries to be combined into one statement. How can it be done? Please help. I have tried the one below; it doesn't work. ``` delete from posts where id in ( select id from posts where post_title='abc' order by id desc limit 1, select id from posts where post_title='xyz' order by id desc limit 1 ); ```
Try this: ``` DELETE FROM posts WHERE id IN (SELECT id FROM (SELECT post_title, MAX(id) id FROM posts WHERE post_title IN ('abc', 'xyz') GROUP BY post_title ) A ) ``` **OR** ``` DELETE FROM posts WHERE id IN (SELECT id FROM (SELECT post_title, id FROM posts WHERE post_title IN ('abc', 'xyz') ORDER BY post_title, id DESC ) A GROUP BY post_title) ```
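Editorial note: a sketch of the accepted pattern, run via Python's `sqlite3`. MySQL needs the extra derived-table wrapper shown above because it refuses to delete from a table it is simultaneously selecting from; SQLite has no such restriction, so the plain `MAX(id) ... GROUP BY` subquery suffices here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, post_title TEXT);
    INSERT INTO posts VALUES (730,'abc'), (731,'abc'), (734,'xyz'), (735,'xyz');
""")

# Delete the highest id for each of the two titles in one statement.
conn.execute("""
    DELETE FROM posts
    WHERE id IN (SELECT MAX(id) FROM posts
                 WHERE post_title IN ('abc','xyz')
                 GROUP BY post_title)
""")
remaining = [r[0] for r in conn.execute("SELECT id FROM posts ORDER BY id")]
```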
In my opinion, we must use an **IF EXISTS** clause when executing delete-and-select in one query, because if the select returns null it will throw an exception, so try this: ``` IF EXISTS (SELECT id FROM [Posts] WHERE post_title IN ('abc', 'xyz')) BEGIN DELETE FROM posts WHERE id IN (SELECT id FROM [Posts] WHERE post_title IN ('abc', 'xyz') ORDER BY post_title, id DESC ) END ```
Making select and delete queries as single statement
[ "", "mysql", "sql", "select", "join", "sql-delete", "" ]
Which ERD is more correct re proper Database Design? ## ERD #1 ![ERD 1](https://i.stack.imgur.com/3vwxn.png) or ## ERD #2 ![ERD 2](https://i.stack.imgur.com/z640z.png) Please explain why one is more correct than the other. * ERD 1 Author: [Ben Grunfeld](http://github.com/bengrunfeld) * ERD 2 Author: [Darren Frenkel](http://github.com/darrenfrenkel)
In my opinion **ERD #1** is a better design. **Reasons** > The Orders/Product table will have an OrderID for a particular order and, for each product ordered, > the quantity of that product for that order. No repetition/no redundant data. **ERD #2** > Is a poor design, as each order can have multiple products and you will > be adding the same OrderID, CustomerID, and invoice details for multiple > products in the same order; in simple words, more redundant data. **Edit** > ERD #2 also violates the database normalization rules. In the Orders table > you have InvoiceID and then Invoice_Creation_Date, which depends only > on the InvoiceID. > > Normalization rules say that if a column in a table doesn't directly depend > on the primary key of that table, it should be in a separate table. In > other words, all the columns in a table should depend directly on the > primary key only.
1. In ERD 1 you can only have one item per order -- hence the need for a join table in ERD2 2. ERD2 - Why would you have a FK in the invoice table for order_ID, and also have invoice_ID in the order table? One FK field is enough. I assume this is one order to many invoices, in which case you would put the order_ID in the invoice table
Which ERD is more correct re proper Database Design?
[ "", "sql", "database", "database-design", "erd", "" ]
I have a table which has rows for each product that a customer has purchased. I want to output a column from a SELECT query which shows the time it takes to deliver said item based on whether the customer has other items that need to be delivered. The first item takes 5 mins to deliver and all subsequent items take 2 mins to deliver e.g. 3 items would take 5+2+2=9 mins to deliver. This is what I have at the moment(Using the Northwind sample database on w3schools to test the query): ``` SELECT orders.customerid, orders.orderid, orderdetails.productid, CASE((SELECT Count(orders.customerid) FROM orders GROUP BY orders.customerid)) WHEN 1 THEN '00:05' ELSE '00:02' END AS DeliveryTime FROM orders LEFT JOIN orderdetails ON orderdetails.orderid = orders.orderid ``` This outputs '00:05' for every item due to the COUNT in my subquery(I think?), any ideas on how to fix this?
Gregory's answer works a treat and here are my attempts ``` -- Without each product line item listed SELECT O.CustomerId, O.OrderId, COUNT(*) AS 'NumberOfProductsOrdered', CASE COUNT(*) WHEN 1 THEN 5 ELSE (COUNT(*) * 2) + 3 END AS 'MinutesToDeliverAllProducts' FROM Orders AS O INNER JOIN OrderDetails AS D ON D.OrderId = O.OrderId GROUP BY O.CustomerId, O.OrderId -- With each product line item listed SELECT O.CustomerId, O.OrderId, D.ProductId, CASE WHEN P.ProductsInOrder = 1 THEN 5 ELSE (P.ProductsInOrder * 2) + 3 END AS 'MinutesToDeliverAllProducts' FROM Orders AS O INNER JOIN OrderDetails AS D ON D.OrderId = O.OrderId INNER JOIN ( SELECT OrderId, COUNT(*) AS ProductsInOrder FROM OrderDetails GROUP BY OrderId ) AS P ON P.OrderId = O.OrderId GROUP BY O.CustomerId, O.OrderId, D.ProductId, P.ProductsInOrder ```
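Editorial note: the `CASE` arithmetic above encodes "5 minutes for the first item, 2 for each additional one", since `COUNT(*) * 2 + 3 = 5 + 2*(n-1)`. A minimal sketch (simplified tables, run via Python's `sqlite3`) checks that identity on sample orders.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE OrderDetails (OrderId INTEGER, ProductId INTEGER);
    INSERT INTO OrderDetails VALUES (1,10), (1,11), (1,12), (2,10);
""")

# 5 min for the first item, 2 min per extra item: COUNT(*)*2 + 3 == 5 + 2*(n-1).
rows = conn.execute("""
    SELECT OrderId,
           CASE COUNT(*) WHEN 1 THEN 5 ELSE COUNT(*) * 2 + 3 END AS minutes
    FROM OrderDetails
    GROUP BY OrderId
    ORDER BY OrderId
""").fetchall()
```

Order 1 has three items (5 + 2 + 2 = 9 minutes); order 2 has one item (5 minutes).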
Try this ``` SELECT orders.customerid, orders.orderid, orderdetails.productid, numberorders, 2 * ( numberorders - 1 ) + 5 AS deleveryMinutes FROM orders INNER JOIN (SELECT orders.customerid AS countId, Count(1) AS numberOrders FROM orders GROUP BY orders.customerid) t1 ON t1.countid = orders.customerid LEFT JOIN orderdetails ON orderdetails.orderid = orders.orderid ORDER BY customerid ```
SQL Change output of column if duplicate
[ "", "sql", "count", "duplicates", "subquery", "case", "" ]
I've got the following query: ``` SELECT DATE(created) AS con_date, COUNT(*) AS con_per_day, GROUP_CONCAT(word_id) FROM connections WHERE created > (NOW() - INTERVAL 30 DAY) GROUP BY con_date ``` It counts words added per day. ![enter image description here](https://i.stack.imgur.com/0EHad.png) All good, but I want to count uniques! Note word_id has 14,14 in the last row. This must be counted as 1. I thought a double GROUP BY would cut it, but it seems to be the wrong way to do it. Any ideas?
Use `distinct` ``` SELECT DATE(created) AS con_date, COUNT(distinct word_id) AS con_per_day, GROUP_CONCAT(word_id) FROM connections WHERE created > (NOW() - INTERVAL 30 DAY) GROUP BY con_date ```
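Editorial note: a minimal sketch contrasting `COUNT(word_id)` with `COUNT(DISTINCT word_id)` per day, run via Python's `sqlite3` (the date-window predicate is dropped here since only the distinct-count behavior is being shown).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE connections (word_id INTEGER, created TEXT);
    INSERT INTO connections VALUES
        (14,'2013-11-01'), (14,'2013-11-01'), (7,'2013-11-01'), (9,'2013-11-02');
""")

# Plain COUNT counts every row; COUNT(DISTINCT ...) collapses the duplicate 14s.
rows = conn.execute("""
    SELECT DATE(created), COUNT(word_id), COUNT(DISTINCT word_id)
    FROM connections
    GROUP BY DATE(created)
    ORDER BY DATE(created)
""").fetchall()
```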
You just have to use `distinct` where you say what to count; you would end up with `COUNT(distinct word_id)`.
Mysql - double group by
[ "", "mysql", "sql", "" ]
Struggling to get this at any sensible run time. I have three tables: temp_company * id (PRIMARY KEY), number (KEY), s_code (KEY) company * id (PRIMARY KEY), number (KEY) company_scode * company_id (UNIQUE on company_id and code), code (KEY) There is also a foreign key between code and the code column in the code_description table. There is also a foreign key between company_id and the id in the company table. I need to match up the temp_company table to the company table on the number field; I then want to check if the s_code in the temporary table exists for the company in the company_scode table, and if it doesn't, then select that row. So far I have: ``` SELECT temp_company.s_code FROM temp_company WHERE temp_company.s_code NOT IN (SELECT code FROM company LEFT JOIN company_scode ON company.id = company_scode.company_id WHERE company.number = temp_company.number ) ``` but this is very slow. I would appreciate a better way to select every temp_company record where its s_code does not exist in the many-to-many relationship between company and company_scode. **\* UPDATE *\**** Thank you to Loc and Ollie for your answers; these are still taking a very long time (I left Ollie's for 8 hours and it was still going). In terms of indexes, I have updated the info above. I've put the explains below for the two answers to try to shed some light and hopefully get this faster.
EXPLAIN for Ollie's answer: ``` | id | select_type | table | type | possible_keys | key | key_len | ref | rows | extra | +----+--------------------+------------+-------+---------------+------------+---------+-----------------------+---------+--------------------------+ | 1 | PRIMARY | tc | ALL | (NULL) | (NULL) | (NULL) | (NULL) | 3216320 | | +----+--------------------+------------+-------+---------------+------------+---------+-----------------------+---------+--------------------------+ | 1 | PRIMARY | <derived2> | ALL | (NULL) | (NULL) | (NULL) | (NULL) | 2619433 | Using where; Not exists | +----+--------------------+------------+-------+---------------+------------+---------+-----------------------+---------+--------------------------+ | 2 | DERIVED | s | index | company_id | code | 62 | (NULL) | 2405379 | Using index | +----+--------------------+------------+-------+---------------+------------+---------+-----------------------+---------+--------------------------+ | 2 | DERIVED | c | eq_ref| PRIMARY | PRIMARY | 4 | mydbname.s.company_id | 1 | | +----+--------------------+------------+-------+---------------+------------+---------+-----------------------+---------+--------------------------+ ``` EXPLAIN for Loc's answer: ``` | id | select_type | table | type | possible_keys | key | key_len | ref | rows | extra | +----+--------------------+-------+-------+---------------+------------+---------+---------------+---------+--------------------------+ | 1 | PRIMARY | tc | ALL | (NULL) | (NULL) | (NULL) | (NULL) | 3216320 | Using where | +----+--------------------+-------+-------+---------------+------------+---------+---------------+---------+--------------------------+ | 2 | DEPENDENT SUBQUERY | c | index | (NULL) | number | 63 | (NULL) | 3189756 | Using where; Using index | +----+--------------------+-------+-------+---------------+------------+---------+---------------+---------+--------------------------+ | 2 | DEPENDENT SUBQUERY | cc | ref | company_id | company_id | 4 | mydbname.c.id | 1 | Using where; Using Index | +----+--------------------+-------+-------+---------------+------------+---------+---------------+---------+--------------------------+ ```
In the end I got this down to a managable time of about 1 minute with the following: ``` CREATE TEMPORARY TABLE tc AS (SELECT company.id AS cid, temp_company.scode AS tcode FROM temp_company INNER JOIN company ON temp_company.number = company.number WHERE temp_company.scode IS NOT NULL AND temp_company.scode != "") CREATE TEMPORARY TABLE rc AS (SELECT tc.cid as cid FROM tc LEFT JOIN company_scode ON tc.cid = company_scode.company_id WHERE tc.tcode = company_scode.code) SELECT * FROM tc WHERE tc.cid NOT IN (SELECT cid FROM rc) ``` I'd rather not be using temporary tables so if anyone posts a solution in a similar or quicker timeframe then I'll happily update the answer to that.
TEST this: ``` SELECT tc.* FROM temp_company tc WHERE NOT EXISTS ( SELECT 1 FROM company c LEFT JOIN company_scode cc ON c.id = cc.company_id WHERE c.number = tc.number ) ```
Select records from table where field not in left join of different table in MySql
[ "", "mysql", "sql", "" ]
How would you get the last Primary Key/Auto Increment value in a table using OleDb? I need to get this value so I can create a folder for a record before it is added, so that files can be copied to the folder when it is added. Any ideas? I have tried the following. `@@Identity` 'Need to insert a record first and I can't do that without copying the files first `SELECT SCOPE_IDENTITY()` 'Doesn't work with `OleDb` This is the error message I get: ![enter image description here](https://i.stack.imgur.com/mmdiU.png)
I think this might work: ``` SELECT MAX(ID) FROM MyTable ```
You can do it like this, because the Jet 4.0 provider supports `@@Identity` ([Reference](http://www.mikesdotnetting.com/Article/54/Getting-the-identity-of-the-most-recently-added-record)): ``` Dim query As String = "Insert Into Categories (CategoryName) Values (?)" Dim query2 As String = "Select @@Identity" Dim ID As Integer Dim connect As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=|DataDirectory|Northwind.mdb" Using conn As New OleDbConnection(connect) Using cmd As New OleDbCommand(query, conn) cmd.Parameters.AddWithValue("", Category.Text) conn.Open() cmd.ExecuteNonQuery() cmd.CommandText = query2 ID = cmd.ExecuteScalar() End Using End Using ```
How to get last auto increment value in a table? VB.NET
[ "", "sql", "vb.net", "oledb", "" ]
I am constructing a sql database with each record having a unique id as my primary key. I want the ids to be human readable with various parts representing sub-classifications. For example the first two-three digits represent product category, the next two-three represent location, and the last records the item. If I use a numeric code, I will need at least nine digits for each number. Example, 101-001-001 would mean category one, warehouse aisle one, item one. If I use alpha-numeric I could represent the same data with six digits, A1-A1-A1. What is the difference in space required to store each number (alpha-numeric vs numeric)? Since these id numbers will appear millions of times in my database, size considerations would be helpful in deciding which way to go. Is there any other reason I should prefer one over the other?
Using encoded keys is generally a bad idea. For instance, what happens if -- at some point in the future -- a warehouse has more than 999 items. You have no space for it in the code. Instead, use an auto-incremented (identity) integer for the id. This occupies 4 bytes, unless you are going to be supporting billions of rows in your table. In that case, use a big integer. You can have separate columns for the warehouse number, the aisle number, and the item number. These, in fact, should probably be foreign keys to reference tables.
Storing values that encode multiple pieces of information in the same column is a bad idea. Suppose you want to fetch the records for a particular aisle: you would have to apply a split function to each column value to retrieve the aisle number and match on it.
Should I Use Numeric or Alphanumeric IDs
[ "", "sql", "" ]
I have a simple table T1 with the schema ``` T1 (id NUMBER, val VARCHAR2(10), dat DATE) ``` Considering that `dat` can be `NULL`, I want to execute a query to select the `MAX(dat)` for a specific `val`, or `NULL` if there's a row having `dat = NULL` for the same `val`. For example: ``` id val dat -- --- --- 1 a 12-NOV-13 2 b 23-MAY-13 3 b 26-JAN-14 4 a NULL ``` the query should return `NULL` `WHERE val = a`; the query should return `26-JAN-14` `WHERE val = b`. Is it possible in a simple SELECT query? Thanks in advance.
You can try selecting what you want, excluding duplicates, and then doing a union, similar to this; ``` SELECT VAL, MAX(DAT) FROM T1 WHERE VAL NOT IN (select VAL from T1 where DAT is NULL GROUP BY VAL, DAT) GROUP BY VAL, DAT UNION select VAL, DAT from T1 where DAT is NULL GROUP BY VAL, DAT ```
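Editorial note: an alternative single-pass sketch (not from either answer, run via Python's `sqlite3`): `COUNT(*)` counts rows with `NULL` dates while `COUNT(dat)` skips them, so comparing the two detects a `NULL` in the group without a `UNION`. ISO-formatted dates are used so `MAX` on text sorts correctly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, val TEXT, dat TEXT);
    INSERT INTO t1 VALUES
        (1,'a','2013-11-12'), (2,'b','2013-05-23'),
        (3,'b','2014-01-26'), (4,'a',NULL);
""")

# COUNT(*) includes NULL rows, COUNT(dat) does not: a mismatch means the
# group contains a NULL date, so report NULL instead of MAX(dat).
rows = conn.execute("""
    SELECT val,
           CASE WHEN COUNT(*) <> COUNT(dat) THEN NULL ELSE MAX(dat) END AS max_dat
    FROM t1
    GROUP BY val
    ORDER BY val
""").fetchall()
```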
If I were you, I would use an IF clause for specific conditions ([this link is useful](http://www.techonthenet.com/oracle/loops/if_then.php)): ``` IF dat <= 0 THEN returnValue := null; ELSE returnValue := Max(dat); END IF; RETURN returnValue; ```
SQL select MAX value or NULL
[ "", "sql", "oracle", "" ]
I am building an case management application and one of the requirements is that each client gets their own Database with their own URL. However, it is starting to be a nightmare to maintain multiple instances of the application when upgrading. I am using IIS 7 ASP.NET MVC. I would like to have one application and have the application be aware of which database to get the data from depending on the User authentication. Is there a viable alternative?
Yes its better to have one instance of the application if possible otherwise it gets complicated. If the features for different clients are different then use feature flags to turn-on/turn-off features for each client/user. See below article by Martin Fowler about it. Feature flags can be per user or client. Facebook and few other major sites use feature flags effectively. <http://martinfowler.com/bliki/FeatureToggle.html> Regarding the database, I think you can have a **Common database** which has all basic information including all clients and then have **other databases specific to each client** which has all other data for the client. When any user hits a client specific URL in the **Session\_Start** method of **Global.asax** add logic to fetch the appropriate connection string and store it in session (`Session["ClientDbConnectionString"]`) so that you can use it whenever you need data from client DB. I would suggest you to store all the connection strings in a table in Common database (with a key identifying each client) so that you can add new connection string row when you want to on-board a new client. Whenever you do a new release I would suggest to update all the client databases together instead of just updating one client DB otherwise it will become unmanageable after a while.
Your question is really just a tip-of-the-iceberg question. Multi-tenancy is a complex topic. Having a per-client connection to the tenant database is just one aspect. There are other aspects to consider: * balancing of tenants: how do you ensure that tenants that grow much faster than the average get adequate resources *and* do not overwhelm other tenants that are collocated on the same storage (eg. same instance). How do you move a tenant that had grown? How do you aggregate small tenants with low activity? * isolation of resources: a tenant can consume a large percent of resources (eg. it can run a query in the database that take sup all the CPU and starves all the other queries), how do you ensure fairness to other tenants that are collocated? * shared data. Replicating changes to shared data (data that is common among all tenants, eg. lookup tables) *can be* problematic. * schema changes. This is one of the trickiest topics. Database schema and code are usually changed in-sync (code expects a certain schema) and deploying a schema change to all tenants *can be* problematic. I recommend you go over the [Multi-Tenant Data Architecture](http://msdn.microsoft.com/en-us/library/aa479086.aspx) white paper. This presents three basic approaches: * [Separate Databases](http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_sepdat) * [Shared Database, Separate Schemas](http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_sdss) * [Shared Database, Shared Schema](http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_sdshs) In addition, I would add the [SQL Azure Federations](http://msdn.microsoft.com/en-us/library/windowsazure/hh597452.aspx) option (which was not available at the time the white paper was written). 
The paper discusses pros and cons of these approaches from a database/storage point of view, considering things like: * storage cost * security * availability * scalability * ease of schema change (app upgrade) * extensibility From the client side, I'm not aware of any MVC extension that helps for the multi-tenant case, something along the lines of the [act_as_tenant](https://github.com/ErwinM/acts_as_tenant) gem in Rails. > is starting to be a nightmare to maintain multiple instances of the application when upgrading. This is actually one of the biggest problems in multi-tenant architectures. Even if you have eliminated the friction of actually doing the upgrade in a DB, it is still a difficult problem to solve. Unless you can afford downtime *on all tenants* and take the entire system offline, upgrade all tenant databases, deploy new code that understands the new schema, then bring all tenants online, doing it online is challenging because the app code (the ASP/MVC code) must be able to understand both versions (old and new) at once. It's not impossible to solve, but it is difficult and must be coded carefully. That being said, an important part is 'eliminating the friction of actually upgrading'. I don't know what procedure you employ to deploy an upgrade. It is critical that the upgrade is automated and scripted, without any manual intervention. Diff based tools are sometimes used, like [SQL Compare](http://www.red-gate.com/products/sql-development/sql-compare/) from Red-Gate or even Visual Studio's [`vsdbcmd.exe`](http://msdn.microsoft.com/en-us/library/aa833435%28v=vs.100%29.aspx#comparewithvsdbcmd). My favorite approach though is using upgrade scripts and metadata versioning in the application. For more details see [Version Control and your Database](http://rusanu.com/2009/05/15/version-control-and-your-database/). For reference, this approach is basically the [Rails Migrations](http://guides.rubyonrails.org/migrations.html) approach.
One Application Multiple Instances and Different DB
[ "", "sql", "asp.net-mvc", "database-connection", "multi-tenant", "" ]
Currently I have a column in my SQL table that saves payment information. For example, a column might have "VISA-2435 exp:12/13 Auth#32423". I want to edit the VISA-2435 to display VISA-XXXX instead. Each row is different, so I can't do a simple search and replace for a static string. I tried the following query, but I'm not taking into account how the string might differ: ``` UPDATE messages SET message = REPLACE(message, LIKE'%Visa-2334%', 'VISA-xxxx') WHERE message LIKE '%Visa' ``` Also, I could change my mind and just edit the exp:12/13 portion of the string instead. Does anybody have any suggestions?
You can use **[Stuff()](http://technet.microsoft.com/en-us/library/ms188043.aspx)** function to do the job: Sample table and data: ``` Create table table1 (val varchar(50)) Insert into table1 (val) values ('VISA-2435 exp:12/13 Auth#32423') ``` Query 1: ``` --To replace 4 numbers after VISA- Select stuff(val, 6, 4, 'XXXX') col1 From Table1; ``` Query 2: ``` --To replace numbers after VISA- and exp: Select stuff(stuff(val, 6, 4, 'XXXX'), 15,5,'YY/MM') col1 From Table1 ``` **[Fiddle demo](http://sqlfiddle.com/#!3/fb958/2)**
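Editorial note: `STUFF()` is SQL Server-specific; the same fixed-position splice can be sketched portably with `substr` concatenation, run here via Python's `sqlite3`. This is the equivalent of `STUFF(val, 6, 4, 'XXXX')` from the answer above: keep characters 1-5, splice in the mask, keep everything from position 10 onward.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
msg = "VISA-2435 exp:12/13 Auth#32423"

# substr(v,1,5) = 'VISA-', then the mask, then substr(v,10) = the remainder.
masked = conn.execute(
    "SELECT substr(:v, 1, 5) || 'XXXX' || substr(:v, 10)", {"v": msg}
).fetchone()[0]
```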
Use a table value function (TVF) or CLR function. I think the TVF might be faster but you will have to test. Here is a quick implementation of a TVF. It masks all numbers. From a privacy standpoint, that is best! ``` -- -- Create a table value function -- CREATE FUNCTION MaskNumbers (@input_txt varchar(128)) RETURNS TABLE AS RETURN ( SElECT replace( replace( replace( replace( replace( replace( replace( replace( replace( replace(@input_txt, '0', 'X') , '1', 'X') , '2', 'X') , '3', 'X') , '4', 'X') , '5', 'X') , '6', 'X') , '7', 'X') , '8', 'X') , '9', 'X') as masked_txt ); ``` Sample call using your data. ``` declare @sample varchar(128) = 'VISA-2435 exp:12/13 Auth#32423' select * from MaskNumbers(@sample); ``` ![enter image description here](https://i.stack.imgur.com/m3Ehx.jpg) Sample call using adventure works credit card table. ``` use AdventureWorks2012; go select top 5 * from [Sales].[CreditCard] cross apply MaskNumbers(CardNumber); go ``` ![enter image description here](https://i.stack.imgur.com/0AW8a.jpg) **If you just want to change the 4 digits after VISA, use CHARINDEX() and STUFF();** ``` -- Raw data declare @sample varchar(128) = 'Random Stuff Before vIsA-2435 ExP:12/13 AuTh#32423 Random Words After'; -- Masked data select @sample as old_sample, case when charindex('visa-', @sample) > 0 then stuff(@sample, charindex('visa-', @sample) + 5, 4, 'XXXX') else @sample end as new_sample go ``` ![enter image description here](https://i.stack.imgur.com/ptOw2.jpg)
Edit string in SQL
[ "", "sql", "sql-server", "" ]
I have one table with some data, shown below. ![enter image description here](https://i.stack.imgur.com/MsRTD.png) I want to get unique data from the table, and I have tried the code below: ``` SELECT sa.EMPID, sa.EMPNAME, sa.DEPTID FROM dbo.Sample AS sa LEFT OUTER JOIN dbo.Sample AS se ON sa.EMPID = se.EMPID ``` but I am not able to get the result. I want the query to return the data below: ![enter image description here](https://i.stack.imgur.com/IuJiT.png) Can anyone please help me solve this query?
Try this: ``` SELECT EMPNAME, DEPTID, MIN(EMPID) AS EMPID FROM dbo.Sample GROUP BY EMPNAME, DEPTID ```
Try this one - ``` SELECT EMPID, EMPNAME, DEPTID FROM ( SELECT EMPID, EMPNAME, DEPTID, RowNum = ROW_NUMBER() OVER (PARTITION BY EMPNAME, DEPTID ORDER BY 1/0) FROM dbo.[Sample] ) t WHERE RowNum = 1 ```
Get Unique data from table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Suppose I have a gigantic table to save lots of logs from the beginning of time with this format: ``` ========================================================== | Name | Log | Date (type datetime) | ---------------------------------------------------------- | Bob | {Some:[sort,of,json]} | 1-May-2013 09:12:45 | | Josh | {Another:[sort,of,json]} | 1-May-2013 09:13:45 | | Fred | {Yada:[yada,yada,yada]} | 1-May-2013 09:14:45 | | Josh | {Ahoy:[whee,whee,whee]} | 1-May-2013 09:15:45 | | Lucy | {Ahem:[blagh,blgh,blgh]} | 1-May-2013 09:16:45 | | Bob | {Chih:[aw,ew,ow]} | 2-May-2013 09:12:45 | .......................................................... | Fred | {Cheh:[saw,sew,sow]} | 1-May-2014 09:12:45 | | Bob | {Chah:[waw,wew,wow]} | 1-May-2014 09:15:45 | ========================================================== ``` Now, given two datetimes, I need to grab one log for each person between those datetimes (any log within that time will do, but preferably the earliest within those two datetimes). Here's a query I've tried, but it still takes too long: ``` select * from ( select Name, Log, rank() over (partition by Name order by Date asc) ranks from Table ) alias where ranks = 1 ```
You should use a `BETWEEN` condition, but inside the subquery: ``` select * from ( select Name, Log, rank() over (partition by Name order by Date asc) ranks from Table WHERE Date BETWEEN @DateBegin AND @DateEnd ) alias where ranks = 1 ``` To make it run faster, create indexes on the `Name` and `Date` fields.
You have to use `BETWEEN` ``` SELECT * FROM ( SELECT Name, Log, rank() OVER (partition by Name ORDER by Date ASC) ranks FROM Table WHERE Date BETWEEN '2012-04-01 02:00:00' AND '2012-04-20 02:00:00' ) alias WHERE ranks = 1; ``` You may also need to create an `INDEX` on the fields you are filtering on to speed up the execution of the query.
How to find specific records between two dates
[ "", "sql", "" ]
I have a requirement to order a list of countries alphabetically but with a specific country on TOP. After that country, the rest should be ordered alphabetically. Example ``` India Afghanistan Albania Algeria American Samoa Andorra Angola Anguilla Antigua and Barbuda Argentina Armenia Aruba ``` ........... I tried the answer provided here [Sorting certain values to the top](https://stackoverflow.com/questions/1079068/sorting-certain-values-to-the-top) but it was not working. I am using the PL/SQL Developer tool. Thanks in advance.
Thanks all for the responses. This is how I tried it: ``` SELECT * FROM (SELECT L.LOOK_UP_CODE, TO_CHAR(L.CITY) LOOK_UP_DESC FROM GHCM_IN_CITIES_MAPPING_DTLS L, GHCM_LOOK_UP_TBL A WHERE L.ENABLED_FLAG = 'Y' AND L.STATE = IN_STATE AND A.LOOK_UP_TYPE = 'LOCATION_VALUE' UNION SELECT A.LOOK_UP_CODE LOOK_UP_CODE, A.LOOK_UP_DESC LOOK_UP_DESC FROM GHCM_LOOK_UP_TBL A WHERE A.LOOK_UP_TYPE = 'LOCATION_VALUE') ORDER BY (CASE WHEN LOOK_UP_DESC = 'Others' THEN 'ZZZ' ELSE LOOK_UP_DESC END); ``` And it's working perfectly.
Something like this should work: **MySQL VERSION** ``` ORDER BY (country = 'India') DESC, country ASC ``` [SQLFIDDLE DEMO](http://www.sqlfiddle.com/#!2/da6cf/1) -- or -- ``` ORDER BY CASE WHEN country = 'India' THEN 1 ELSE 2 END, country ASC ``` [SQLFIDDLE DEMO](http://www.sqlfiddle.com/#!2/ad515/5) **ORACLE VERSION** ``` ORDER BY CASE WHEN country = 'India' THEN 1 ELSE 2 END ``` or you can have more than one specific value at top: ``` ORDER BY CASE WHEN country = 'India' THEN 1 WHEN country = 'United Kingdom' THEN 2 ELSE 3 END ```
Order by with a particular value on TOP
[ "", "sql", "oracle", "plsql", "" ]
Hi guys, I need help getting the sum of a column after filtering out duplicate values. Sample table: ``` refwo hrs 123 1 123 1 123 1 456 2 768 2 ``` How can I write SQL to get the total "5"? I've searched on this forum and most are saying to use "GROUP BY", but I think it will not work since I need 1 value instead of 2 columns.
Use the **SUM** function to get the sum of the column, after de-duplicating the `(refwo, hrs)` pairs in a derived table. Here it is. ``` SELECT SUM(hrs) AS Total FROM (SELECT DISTINCT refwo, hrs FROM your_Table_Name) t ```
``` SELECT SUM(DISTINCT hrs) FROM sample_table; ``` The key is the `DISTINCT` inside the aggregate function! Note, though, that this sums each distinct `hrs` value only once (giving 3 here); if you expected 5 instead, go for a derived table: ``` SELECT SUM(hrs) FROM (SELECT DISTINCT hrs , refwo FROM sample_table) as my_alias; ``` [SQL Fiddle](http://www.sqlfiddle.com/#!3/0b8c6/4)
Get SUM after filtering distinct values
[ "", "sql", "" ]
How to get the table structure in MS Access with a SQL query? Using the following query: ``` SELECT name FROM MSysObjects ``` Results in the following exception: > Exception: [Microsoft][ODBC Microsoft Access Driver] Record(s) cannot be read; no read permission on 'MSysObjects'.
Use [This](http://www.perfectparadigm.com/tip001.html) ``` SELECT * FROM MSysObjects WHERE Type=1 AND Flags=0 ``` Ms Access has several system tables that are, by default, hidden from tables list. You can show them. In Ms Access 2007 do a right click on tables list and select Navigation Options. At the bottom of the form you will find Show System Objects check box. Check it and system tables will show up in tables list. They all start with MSys. Alternatively, options form can be activated from application menu - click button Access options -> select Current Database and there is Navigation Options button. Now you can examine structure and contents and generate queries of all system tables with MsAccess tools. [Source](https://stackoverflow.com/questions/2629211/can-we-list-all-tables-in-msaccess-database-using-sql)
Set up an ODBC connection for your Postgres database and then call the transfer database command for each of your tables.
Getting the table structure in MS Access with an SQL query?
[ "", "sql", "ms-access", "table-structure", "" ]
I have two tables. **Technology** ``` +--------+------+ | tid(P) | name | +--------+------+ | 1 | Java | | 2 | PHP | +--------+------+ ``` **Employee** ``` +--------+--------+------------+ | eid(P) | tid(F) | join_date | +--------+--------+------------+ | 1 | 1 | 2013-10-01 | | 2 | 1 | 2013-10-10 | | 3 | 1 | 2013-10-12 | | 4 | 1 | 2013-09-10 | | 5 | 1 | 2013-11-10 | | 6 | 1 | 2013-12-10 | | 7 | 2 | 2013-08-01 | | 8 | 2 | 2013-10-28 | | 9 | 2 | 2013-05-12 | | 10 | 2 | 2013-10-10 | | 11 | 2 | 2013-11-10 | | 12 | 2 | 2013-12-05 | +--------+--------+------------+ ``` I need to get the data of the **three most recently joined employees for each technology**. I tried different joins and also googled this, but didn't get any success.
Expected result: ``` +-----+------+-----+------------+ | tid | name | eid | join_date | +-----+------+-----+------------+ | 1 | Java | 6 | 2013-12-10 | | 1 | Java | 5 | 2013-11-10 | | 1 | Java | 3 | 2013-10-12 | | 2 | PHP | 12 | 2013-12-05 | | 2 | PHP | 11 | 2013-11-10 | | 2 | PHP | 8 | 2013-10-28 | +-----+------+-----+------------+ ``` What should be my query? Please guide. Thanks, Ankur
Try this: ``` SELECT A.tid, A.name, A.eid, A.join_date FROM (SELECT IF(@tid = @tid:=t.tid, @cnt:=@cnt+1, @cnt:=0) rowNo, t.tid, t.name, e.eid, e.join_date FROM Technology t INNER JOIN Employee e ON t.tid = e.tid, (SELECT @tid:=0, @cnt:=0) A ORDER BY t.tid, e.join_date DESC ) AS A WHERE A.rowNo < 3; ``` **EDIT::** ``` SELECT * FROM( SELECT *, IF(@tid=@tid:=result.tid,@count:=@count+1,@count:=0) as pos FROM (SELECT t.tid, t.name, e.eid, e.join_date FROM Technology t JOIN Employee e WHERE e.tid = t.tid ORDER BY t.tid, e.join_date DESC) result JOIN (SELECT @tid:=0, @count:=0) c) finalre where finalre.pos < 3; ```
You can try [GROUP\_CONCAT](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) and [SUBSTRING\_INDEX](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index) ``` SELECT T.tid,T.name, SUBSTRING_INDEX(GROUP_CONCAT(eid ORDER BY join_date DESC), ',', 3) AS `eid`, SUBSTRING_INDEX(GROUP_CONCAT(join_date ORDER BY join_date DESC), ',', 3) AS `join_date` FROM Technology T LEFT OUTER JOIN Employee E ON E.tid = T.tid GROUP BY T.tid ``` Here if you need x entries just change the count variable in SUBSTRING\_INDEX to x You need to process the field eid and join\_date in your application logic. You can try some string tokenizer. Here is the [SQLFIDDLE](http://sqlfiddle.com/#!2/794858/7) REF: [How to hack MySQL GROUP\_CONCAT to fetch a limited number of rows?](https://stackoverflow.com/questions/1522509/how-to-hack-mysql-group-concat-to-fetch-a-limited-number-of-rows/10986929#10986929)
MySQL Join two Tables to get records with limit from child
[ "", "mysql", "sql", "select", "join", "sql-order-by", "" ]
I would like to alter the table if the table has the column with same data type and number exists Original tTable structure is TableName ``` ColumnName NVARCHAR(100) ``` Code for altering column if `ColumnName` with `NVARCHAR` and length 100 exists ``` IF EXISTS(...) BEGIN ALTER TABLE [dbo].[TableName] ALTER COLUMN [ColumnName] NVARCHAR(200) [NULL|NOT NULL] END ``` What find query I need to insert at `IF EXISTS(...)`?
I personally always opt for the SQL Server system views rather than the INFORMATION\_SCHEMA for reasons [detailed by Aaron Bertrand](https://sqlblog.org/2011/11/03/the-case-against-information_schema-views). The added advantage is that in this situation you can exclude computed columns, which just appear as normal columns in the table `INFORMATION_SCHEMA.COLUMNS`. ``` IF EXISTS ( SELECT 1 FROM sys.columns c INNER JOIN sys.types t ON t.system_type_id = c.system_type_id AND t.user_type_id = c.user_type_id WHERE c.name = 'ColumnName' AND c.[object_id] = OBJECT_ID(N'dbo.TableName', 'U') AND t.name = 'nvarchar' AND c.max_length = 100 AND c.is_computed = 0 ) BEGIN ALTER TABLE [dbo].[TableName] ALTER COLUMN [ColumnName] NVARCHAR(200) [NULL|NOT NULL] END; ``` As shown in [this SQL Fiddle](http://sqlfiddle.com/#!3/fa15a/4) when using the information schema method you may try and alter a computed column and get an error.
Me, I'm way too lazy to type in all those system table joins (let alone the INFORMATION\_YADA schema), I just use the metadata functions: ``` IF columnproperty(object_id('dbo.TableName'), 'ColumnName', 'ColumnId') is not null and columnproperty(object_id('dbo.TableName'), 'ColumnName', 'Precision') = 200 ALTER... ``` I believe this works for SQL 2005, but you'll need to check.
ALTER COLUMN IF EXISTS
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2008-r2", "" ]
I have a problem with LIMIT 1 in SQLite: my query without LIMIT 1 works fine, but it removes every duplicate element, and I'd like to delete only 1 row. This works: ``` myDataBase.execSQL("DELETE FROM praca WHERE pomieszczenie_id="+pomieszczenieid+" AND (praca.usluga_id IN (SELECT usluga_id FROM usluga WHERE usluga_nazwa='"+childvalue+"')) AND (sciana_id IN(SELECT sciana_id from sciana WHERE sciana_nazwa='"+rodzic_nazwa+"'))"); ``` This returns a Near "LIMIT" syntax error: ``` myDataBase.execSQL("DELETE FROM praca WHERE pomieszczenie_id="+pomieszczenieid+" AND (praca.usluga_id IN (SELECT usluga_id FROM usluga WHERE usluga_nazwa='"+childvalue+"')) AND (sciana_id IN(SELECT sciana_id from sciana WHERE sciana_nazwa='"+rodzic_nazwa+"')) LIMIT 1"); ``` How can I fix that?
You may need to enable `SQLITE_ENABLE_UPDATE_DELETE_LIMIT`, OR use a workaround like this: ``` delete from tablename where rowid in ( select rowid from tablename condition LIMIT 1) ``` The above query is just an example; replace it with your own query and conditions.
I think you should enable **SQLITE_ENABLE_UPDATE_DELETE_LIMIT** <http://www.sqlite.org/compile.html#enable_update_delete_limit> You must compile SQLite using the NDK and add the parameter to enable SQLITE_ENABLE_UPDATE_DELETE_LIMIT.
DELETE from LIMIT 1 error
[ "", "android", "mysql", "sql", "sqlite", "" ]
I am asked to generate custom ID values for primary key columns. The query is as follows: ``` SELECT * FROM SC_TD_GoodsInward WHERE EntityId = @EntityId SELECT @GoodsInwardId=IIF((SELECT COUNT(*) FROM SC_TD_GoodsInward)>0, (Select 'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)+'_'+CONVERT(varchar,@EntityId)+'_'+(SELECT RIGHT('0000'+CONVERT(VARCHAR,CONVERT(INT,RIGHT(MAX(GoodsInwardId),4))+1),4) from SC_TD_GoodsInward)), (SELECT 'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)+'_'+CONVERT(varchar,@EntityId)+'_0001')) ``` Here SC_TD_GoodsInward is a table and GoodsInwardId is the value to be generated. I am getting the desired outputs too. Examples: ``` GI_131118_1_0001 GI_131212_1_0002 GI_131212_1_0003 ``` But the above condition fails when the last digits reach **9999**. I simulated the query and the results were: ``` GI_131226_1_9997 GI_140102_1_9998 GI_140102_1_9999 GI_140102_1_0000 GI_140102_1_0000 GI_140102_1_0000 GI_140102_1_0000 GI_140102_1_0000 ``` After 9999, it goes to 0000 and does not increment thereafter. So, in the future, I will eventually run into a PK duplicate error. How can I recycle the values so that after 9999 it goes on as 0000, 0001, etc.? What am I missing in the above query? NOTE: Please consider the `@EntityId` value to be 1 in the query. I am using SQL SERVER 2012.
Before giving a solution, a few points on your question: 1. The custom primary key consists of three parts: the date (140102), the physical location where the transaction takes place (entityID), and a 4-place number (9999). 2. According to the design, on a single date in a single physical location there cannot be more than 9999 transactions -- my solution carries the same limitation. Some points on my solution: 1. The 4-place number is tied to the date, which means for a new date the count starts from 0000. For example: GI_140102_1_0001, GI_140102_1_0002, GI_140102_1_0003, GI_140103_1_0000, GI_140104_1_0000. Either way this field will be unique. 2. The solution compares the latest date in the records to the current date. The logic: if the current date matches the latest date in the records, it increments the 4-place number by 1; if they do not match, it resets the 4-place number to 0000. The solution (the code below gives out the value which will be the next GoodsInwardId; use it as required to fit into your solution): ``` declare @previous nvarchar(30); declare @today nvarchar(30); declare @newID nvarchar(30); select @previous=substring(max(GoodsInwardId),4,6) from SC_TD_GoodsInward; Select @today=RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2) +RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2); if @previous=@today BEGIN Select @newID='GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2) +RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2) +'_'+CONVERT(varchar,1)+'_'+(SELECT RIGHT('0000'+ CONVERT(VARCHAR,CONVERT(INT,RIGHT(MAX(GoodsInwardId),4))+1),4) from SC_TD_GoodsInward); END else BEGIN SET @newID='GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2) +RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2) +'_'+CONVERT(varchar,1)+'_0000'; END select @newID; ``` T-SQL to create the required structure (a probable guess). For the table: ``` CREATE TABLE [dbo].[SC_TD_GoodsInward]( [EntityId] [int] NULL, [GoodsInwardId] [nvarchar](30) NULL ) ``` Sample records for the table: ``` insert into dbo.SC_TD_GoodsInward values(1,'GI_140102_1_0000'); insert into dbo.SC_TD_GoodsInward values(1,'GI_140101_1_9999'); insert into dbo.SC_TD_GoodsInward values(1,'GI_140101_1_0001'); ``` It's a workable solution in your situation, although the cleaner solution would be to have an identity column (use reseed if required) and tie it with the current date as a computed column.
You get this problem because once the last 4 digits reach `9999`, `9999` will remain the highest number no matter how many rows are inserted, and you are throwing away the most significant digit(s). I would remodel this to track the last used INT portion value of `GoodsInwardId` in a separate counter table (as an INTEGER), and then MODULUS (%) this by 10000 if need be. If there are concurrent calls to the PK generator, remember to lock the counter table row. Also, even if you kept all the digits (e.g. in another field), note that ordering a `CHAR` is as follows ``` 1 11 2 22 3 ``` and then applying `MAX()` will return 3, not 22. **Edit - Clarification of counter table alternative** The counter table would look something like this: ``` CREATE TABLE PK_Counters ( TableName NVARCHAR(100) PRIMARY KEY, LastValue INT ); ``` (Your `@EntityID` might be another candidate for the counter PK column.) You then increment and fetch the applicable counter on each call to your custom PK Key generation PROC: ``` UPDATE PK_Counters SET LastValue = LastValue + 1 WHERE TableName = 'SC_TD_GoodsInward'; Select 'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2) +RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2) +RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)+'_' +CONVERT(varchar,@EntityId)+'_' +(SELECT RIGHT('0000'+ CONVERT(NVARCHAR, LastValue % 10000),4) FROM PK_Counters WHERE TableName = 'SC_TD_GoodsInward'); ``` You could also modulo the LastValue in the counter table (and not in the query), although I believe there is more information about the number of records inserted by leaving the counter un-modulo-ed. [Fiddle here](http://sqlfiddle.com/#!6/3f832/1) Re : Performance - Selecting a single integer value from a small table by its PK and then applying modulo will be significantly quicker than selecting MAX from a `SUBSTRING` (which would almost certainly be a scan)
Incrementing custom primary key values in SQL
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a table containing about 500 points and am looking for duplicates within a tolerance. This takes less than a second and gives me 500 rows. Most have a distance of zero because it gives the same point (PointA = PointB) ``` DECLARE @TOL AS REAL SET @TOL = 0.05 SELECT PointA.ObjectId as ObjectIDa, PointA.Name as PTNameA, PointA.[Description] as PTdescA, PointB.ObjectId as ObjectIDb, PointB.Name as PTNameB, PointB.[Description] as PTdescB, ROUND(PointA.Geometry.STDistance(PointB.Geometry),3) DIST FROM CadData.Survey.SurveyPoint PointA JOIN [CadData].Survey.SurveyPoint PointB ON PointA.Geometry.STDistance(PointB.Geometry) < @TOL -- AND -- PointA.ObjectId <> PointB.ObjectID ORDER BY ObjectIDa ``` If I use the commented out lines near the bottom, I get 14 rows but the execution time goes up to 14 seconds. Not that big a deal until my point table expands to 10's of thousands. I apologize in advance if the answer is already out there. I did look, but being new I get lost reading posts which are way over my head. ADDENDUM: ObjectID is a bigint and the PK for the table, so I realized that I could change the statement to ``` AND PointA.ObjectID > PointB.ObjectID ``` This now takes half the time and gives me half the results (7 rows in 7 seconds). I now don't get duplicates (as in Point 4 is close to Point 8 followed by Point 8 is close to Point 4). However the performance still concerns me as the table will be very large, so any performance issues will become problems. ADDENDUM 2: Changing the order of the JOIN and AND (or WHERE as suggested) as below makes no difference either. 
``` DECLARE @TOL AS REAL SET @TOL = 0.05 SELECT PointA.ObjectId as ObjectIDa, PointA.Name as PTNameA, PointA.[Description] as PTdescA, PointB.ObjectId as ObjectIDb, PointB.Name as PTNameB, PointB.[Description] as PTdescB, ROUND(PointA.Geometry.STDistance(PointB.Geometry),3) DIST FROM CadData.Survey.SurveyPoint PointA JOIN [CadData].Survey.SurveyPoint PointB ON PointA.ObjectId < PointB.ObjectID WHERE PointA.Geometry.STDistance(PointB.Geometry) < @TOL ORDER BY ObjectIDa ``` I find it fascinating that I can change the @Tol value to something large that returns over 100 rows with no change in performance even though it requires many computations. But then adding a simple A
This is a fun question. It's not unrealistic that you get a large performance improvement by changing from "<>" to ">". As others have mentioned, the trick is to get the most out of your indexes. Certainly by using ">", you should easily get the server to limit to that specific range on your PK - avoiding looking "backwards" when you've already checked looking "forwards". This improvement will scale - will help as you add rows. But you're right to worry it won't help prevent any increase in work. As you're correctly thinking, as long as you have to scan a larger number of rows, it will take longer. And that's the case here because we always want to compare everything. If the first part is looking good, just the TOL check, have you thought about splitting out the second part entirely? Change the first part to dump into a temp table as ``` SELECT PointA.ObjectId as ObjectIDa, PointA.Name as PTNameA, PointA.[Description] as PTdescA, PointB.ObjectId as ObjectIDb, PointB.Name as PTNameB, PointB.[Description] as PTdescB, ROUND(PointA.Geometry.STDistance(PointB.Geometry),3) DIST into #AllDuplicatesWithRepeats FROM CadData.Survey.SurveyPoint PointA JOIN [CadData].Survey.SurveyPoint PointB ON PointA.Geometry.STDistance(PointB.Geometry) < @TOL ORDER BY ObjectIDa ``` And they you can write the direct query that skips duplicates, below. It isn't special, but against that small set in the temp table it should be perfectly speedy. ``` Select * from #AllDuplicatesWithRepeats d1 left join #AllDuplicatesWithRepeats d2 on ( d1.objectIDa = d2.objectIDb and d1.objectIDb = d2.objectIDa ) where d2.objectIDb is null ```
The execution plan is probably doing something behind the scenes when you add in the `ObjectID` comparison. Check the execution plan to see if the two different versions of the query are, for example, using an index seek vs. a table scan. If so, consider experimenting with [query hints](http://technet.microsoft.com/en-us/library/ms181714.aspx). As a workaround, you could always use a subquery: ``` DECLARE @TOL AS REAL SET @TOL = 0.05 SELECT ObjectIDa, PTNameA, PTdescA, ObjectIDb, PTNameB, PTdescB, DIST FROM ( SELECT PointA.ObjectId as ObjectIDa, PointA.Name as PTNameA, PointA.[Description] as PTdescA, PointB.ObjectId as ObjectIDb, PointB.Name as PTNameB, PointB.[Description] as PTdescB, ROUND(PointA.Geometry.STDistance(PointB.Geometry),3) DIST FROM CadData.Survey.SurveyPoint PointA JOIN [CadData].Survey.SurveyPoint PointB ON PointA.Geometry.STDistance(PointB.Geometry) < @TOL -- AND -- PointA.ObjectId <> PointB.ObjectID ) Subquery WHERE ObjectIDa <> ObjectIDb ORDER BY ObjectIDa ```
Adding simple AND after JOIN kills performance
[ "", "sql", "sql-server", "geospatial", "spatial-query", "" ]
I have a table as shown below: ![enter image description here](https://i.stack.imgur.com/IlEF1.png) Note: the MAX last_orderdate is 20131015 and the format is yyyymmdd. I would like the final result to look like below: ![enter image description here](https://i.stack.imgur.com/g99m5.png) Is there any query to help me with this, as I have 200,000-plus records? Thank you very much for spending your time looking at my question.
For [**DATEDIFF()**](http://technet.microsoft.com/en-us/library/ms189794%28v=sql.105%29.aspx) function Try this: ``` UPDATE A SET A.monthDiff = DATEDIFF(mm, CONVERT(DATE, A.orderDate, 112), B.lastOrderDate), A.dayDiff = DATEDIFF(dd, CONVERT(DATE, A.orderDate, 112), B.lastOrderDate) FROM tableA A, (SELECT MAX(CONVERT(DATE, orderDate, 112)) lastOrderDate FROM tableA) B ``` Check the [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/d7f4c/2) **OUTPUT** ``` | ID | ORDERDATE | MONTHDIFF | DAYDIFF | |----|-----------|-----------|---------| | 1 | 20130105 | 9 | 283 | | 2 | 20130205 | 8 | 252 | | 3 | 20130305 | 7 | 224 | | 4 | 20130909 | 1 | 36 | | 5 | 20131001 | 0 | 14 | | 6 | 20131015 | 0 | 0 | ```
Try something like this: ``` declare @a date set @a='20130105' declare @b date set @b='20131015' select datediff(d,@a,@b) as date_diff,datediff(m,@a,@b) as month_diff ```
updates in month and day differences
[ "", "sql", "sql-server", "sql-server-2008", "sql-update", "datediff", "" ]
I am learning how to create dynamic Access databases using VBA code and SQL. One of my big "pet peeves" is that the user interface be slick and very professional looking. I am looking for some cosmetic touch-up techniques to make the user interface more professional looking. For example, I would like it so that when the user clicks on a button to activate a sub form which is hidden, the form appears with a bit of smooth animation, such as an exploding transition-type effect, instead of the standard hard visual appearance which is the default. Any advice on what may be available will be very appreciated. Thank you, Andrew
This cannot be done within Access; it's not designed for that. Access is a database application that comes packaged with Office. It's not going to provide any crazy out-of-the-box animation; it's just not meant for that. It's meant to be a very simple, stripped-down database application. It is **somewhat** feasible that you could create animations in a 3rd-party app (such as Adobe After Effects) and have those animations run prior to the form opening, similar to how one would create motion menus for DVDs.
You could put a bunch of small animated graphics (like lights on a Christmas tree or tiny arrows) all around the sub-form and time it to display for a short while.
adding animation to an access form - front end
[ "", "sql", "vba", "" ]
I have SQL code like this ``` IF Object_id('tempdb..#empDate') IS NOT NULL DROP TABLE #empDate CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) ``` After the above code some more lines of SQL follow, and then it is repeated. I get the following error. > Msg 2714, Level 16, State 1, Line 589 > There is already an object named '#empDate' in the database. I replaced the ``` IF Object_id('tempdb..#empDate') IS NOT NULL ``` with ``` IF Object_id('tempdb..#empDate%') IS NOT NULL ``` since it is written on the forums that SQL Server appends a number to the subsequent temp table(s). Source: [Check if a temporary table exists and delete if it exists before creating a temporary table](https://stackoverflow.com/questions/659051/check-if-temp-table-exist-and-delete-if-it-exists-before-creating-a-temp-table) <http://blog.sqlauthority.com/2009/05/17/sql-server-how-to-drop-temp-table-check-existence-of-temp-table/> <http://blog.sqlauthority.com/2009/03/29/sql-server-fix-error-msg-2714-level-16-state-6-there-is-already-an-object-named-temp-in-the-database/> I am using Microsoft SQL Server 2008 on Windows 7 Enterprise. I am not able to understand the cause of the error. Please help.
**Sample One** This will fail...... Executing the same code again, will throw the error you are getting now ``` IF Object_id('tempdb..#empDate') IS NOT NULL BEGIN DROP TABLE #empDate END CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) IF Object_id('tempdb..#empDate') IS NOT NULL BEGIN DROP TABLE #empDate END CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) ``` **Sample Two (Fixed)** ``` IF Object_id('tempdb..#empDate') IS NOT NULL BEGIN DROP TABLE #empDate END CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) GO --<-- Adding this Batch Separator will eliminate the Error IF Object_id('tempdb..#empDate') IS NOT NULL BEGIN DROP TABLE #empDate END CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) ``` **Test** If you try Executing the following Statements in ONE BATCH they will fail even though there isnt any table at all with the name `#empDate`, it will not even execute the very 1st Create table Statement. and will throw an error. ``` CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) DROP TABLE #empDate CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) ``` But if you separate all the statement in separate batches they will be executed successfully something like this.. ``` CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) GO DROP TABLE #empDate GO CREATE TABLE #empDate ( [empID] INT, [AddLoc] VARCHAR(1000) ) GO ```
Pass the database name with `OBJECT_ID`. Example: ``` DECLARE @db_id int; DECLARE @object_id int; SET @db_id = DB_ID(N'AdventureWorks2012'); SET @object_id = OBJECT_ID(N'AdventureWorks2012.Person.Address'); IF @db_id IS NULL BEGIN; PRINT N'Invalid database'; END; ELSE IF @object_id IS NULL BEGIN; PRINT N'Invalid object'; END; ELSE BEGIN; SELECT * FROM sys.dm_db_index_operational_stats(@db_id, @object_id, NULL, NULL); END; GO ```
Deletion\Creation of Temp tables in SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm attempting to pull some custom reports from a mandated SQL program that we must use at work and I'm running into a couple issues. I can pull all the data I need easily, but for each unique person id/task id combination I only need the most current value. Additionally, if possible I want the latest value from either the due date or the waiver date column, whichever is greater. --- ``` PersonnelTrainingEvent PersonnelID TrainingEventTypeID DueDate WaiverDate Personnel ID TrainingEventType ID Taskcode PersonnelDetail PersonnelID 5351 25947 1/1/1900 1/1/1900 5351 25947 Mob2 5351 5351 28195 8/1/2012 1/1/1900 5351 28195 CA01 5351 5351 26551 7/29/2010 1/1/1900 5351 26551 Mob10 5351 5351 25947 1/31/2012 1/1/1900 5351 25947 Mob2 5351 5351 28196 11/1/2012 1/1/1900 5351 28196 CA02 5351 5418 28195 1/1/1900 1/1/1900 5418 28195 CA01 5418 5418 30174 1/1/1900 1/1/1900 5418 30174 PJ18 5418 5418 28624 1/31/2014 2/1/2014 5418 28624 GA42 5418 5418 28595 6/30/2014 6/30/2014 5418 28595 GA43 5418 5418 28196 1/1/1900 1/1/1900 5418 28196 CA02 5418 6022 28195 3/3/2011 1/1/1900 6022 28195 CA01 6022 6022 28885 10/31/2012 1/1/1900 6022 28885 CA07 6022 6022 28884 1/1/1900 1/1/1900 6022 28884 CA06 6022 6022 28884 1/31/1901 1/1/1900 6022 28884 CA06 6022 6022 28196 1/1/1900 1/1/1900 6022 28196 CA02 6022 6022 28196 2/28/2011 1/1/1900 6022 28196 CA02 6022 6022 28624 9/30/2013 1/1/1900 6022 28624 GA42 6022 6022 28595 2/28/2014 1/1/1900 6022 28595 GA43 6022 6022 30174 2/28/2014 1/1/1900 6022 30174 PJ18 6022 ``` --- Here is the query I'm using... 
``` SELECT PersonnelTrainingEvent.PersonnelID AS [PersonnelTrainingEvent PersonnelID] ,PersonnelTrainingEvent.TrainingEventTypeID ,PersonnelTrainingEvent.DueDate ,PersonnelTrainingEvent.WaiverDate ,Personnel.ID AS [Personnel ID] ,TrainingEventType.ID AS [TrainingEventType ID] ,TrainingEventType.Taskcode ,PersonnelDetail.PersonnelID AS [PersonnelDetail PersonnelID] FROM PersonnelTrainingEvent INNER JOIN TrainingEventType ON PersonnelTrainingEvent.TrainingEventTypeID = TrainingEventType.ID INNER JOIN Personnel ON PersonnelTrainingEvent.PersonnelID = Personnel.ID INNER JOIN PersonnelDetail ON Personnel.ID = PersonnelDetail.PersonnelID WHERE TrainingEventType.Taskcode IN (N'GA43', N'MOB2', N'CA01', N'CA02', N'Mob10', N'PJ67', N'CA06', N'CA07', N'T104', N'GA42', N'PJ18') Group By Personnel.ID, TrainingEventType.Taskcode; ``` --- I'm currently on vacation and getting glared at by my wife but I've been working on this query for 3 weeks now and I'm pounding my head against the wall. I've included a sample of the preferred outcome below... 
---

```
PersonnelTrainingEvent PersonnelID  TrainingEventTypeID  DueDate     WaiverDate  Personnel ID  TrainingEventType ID  Taskcode  PersonnelDetail PersonnelID
5351                                28195                8/1/2012    1/1/1900    5351          28195                 CA01      5351
5351                                26551                7/29/2010   1/1/1900    5351          26551                 Mob10     5351
5351                                25947                1/31/2012   1/1/1900    5351          25947                 Mob2      5351
5351                                28196                11/1/2012   1/1/1900    5351          28196                 CA02      5351
5418                                28195                1/1/1900    1/1/1900    5418          28195                 CA01      5418
5418                                30174                1/1/1900    1/1/1900    5418          30174                 PJ18      5418
5418                                28624                1/31/2014   2/1/2014    5418          28624                 GA42      5418
5418                                28595                6/30/2014   6/30/2014   5418          28595                 GA43      5418
5418                                28196                1/1/1900    1/1/1900    5418          28196                 CA02      5418
6022                                28195                3/3/2011    1/1/1900    6022          28195                 CA01      6022
6022                                28885                10/31/2012  1/1/1900    6022          28885                 CA07      6022
6022                                28884                1/31/1901   1/1/1900    6022          28884                 CA06      6022
6022                                28196                2/28/2011   1/1/1900    6022          28196                 CA02      6022
6022                                28624                9/30/2013   1/1/1900    6022          28624                 GA42      6022
6022                                28595                2/28/2014   1/1/1900    6022          28595                 GA43      6022
6022                                30174                2/28/2014   1/1/1900    6022          30174                 PJ18      6022
```

---

Here are the links to the other answers I've looked at but I'm a learn by doing kinda guy and these seemed to help a little but I'm not understanding all the syntax... 
[SQL Select, Specific Rows based on multiple conditions?](https://stackoverflow.com/questions/16027485/sql-select-specific-rows-based-on-multiple-conditions) [SQL server select distinct rows using most recent value only](https://stackoverflow.com/questions/3442931/sql-server-select-distinct-rows-using-most-recent-value-only) [SQL Select with Group By and Order By Date](https://stackoverflow.com/questions/15758508/sql-select-with-group-by-and-order-by-date?rq=1) [SQL server select distinct rows using values before a certain date](https://stackoverflow.com/questions/19807469/sql-server-select-distinct-rows-using-values-before-a-certain-date?lq=1) [How to select only the latest rows for each user?](https://stackoverflow.com/questions/15644519/how-to-select-only-the-latest-rows-for-each-user?lq=1) [Get Distinct rows from a result of JOIN in SQL Server](https://stackoverflow.com/questions/18717839/get-distinct-rows-from-a-result-of-join-in-sql-server?rq=1) <http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=47479> [Selecting latest rows in subgroups](https://stackoverflow.com/questions/10935996/selecting-latest-rows-in-subgroups) I appreciate any help as I'm working to learn to do this myself. I will provide any input requested or a screenshot if I increase my rep enough to allow that. Thanks!
Thanks for all the assistance, but I went back to basics. I cut the joined tables out and went down to the basic four columns that I needed, without the translation data. Once I did that, I saw that the table only created duplicate entries with a 1900 date field, so I simply used a WHERE clause to strip out the extra entries. I was really interested in Clockwork-Muse's method, but I kept receiving a syntax error that didn't make sense, and once I saw the issue I was having, it was less code to strip out what I needed. Thank you again for the support and information.
What you *probably* want is something like this:

```
SELECT Event.personnelId, Event.trainingEventTypeId,
       Event.dueDate, Event.waiverDate, Event.taskCode
FROM (SELECT Event.personnelId, Event.trainingEventTypeId,
             Event.dueDate, Event.waiverDate,
             TrainingEventType.taskCode,
             ROW_NUMBER() OVER(PARTITION BY Event.personnelId, Event.trainingEventTypeId
                               ORDER BY CASE WHEN Event.dueDate >= Event.waiverDate
                                             THEN Event.dueDate
                                             ELSE Event.waiverDate END DESC) rn
      FROM PersonnelTrainingEvent Event
      JOIN TrainingEventType
          ON TrainingEventType.id = Event.trainingEventTypeId
          AND TrainingEventType.taskCode IN (N'GA43', N'MOB2', N'CA01', N'CA02', N'Mob10',
                                             N'PJ67', N'CA06', N'CA07', N'T104', N'GA42', N'PJ18')) Event
WHERE Event.rn = 1
```

However, it's difficult to tell because the query you've provided has syntax errors, has additional unneeded columns, and references tables which should not influence the results (the tables are likely to have at least one row, but no data from those tables is referenced, and multiple rows are usually unwanted).
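If it helps to experiment with this pattern outside of SQL Server, here's a minimal sketch of the same `ROW_NUMBER()`-per-group idea using Python's built-in `sqlite3` module (SQLite 3.25+ is needed for window functions). The table and rows are made up to mirror the question's columns, not the real schema, and SQLite's two-argument scalar `max()` stands in for the `CASE` expression that picks the greater of the two dates:

```python
import sqlite3

# Keep only the latest row per (person, event type), where "latest"
# means the greater of dueDate and waiverDate. ISO-format date strings
# compare correctly as text, so scalar max() works here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PersonnelTrainingEvent (
    personnelId INTEGER, trainingEventTypeId INTEGER,
    dueDate TEXT, waiverDate TEXT);
INSERT INTO PersonnelTrainingEvent VALUES
    (5351, 25947, '1900-01-01', '1900-01-01'),
    (5351, 25947, '2012-01-31', '1900-01-01'),
    (5418, 28624, '2014-01-31', '2014-02-01');
""")
rows = conn.execute("""
SELECT personnelId, trainingEventTypeId, dueDate, waiverDate
FROM (SELECT *,
             ROW_NUMBER() OVER (
                 PARTITION BY personnelId, trainingEventTypeId
                 ORDER BY max(dueDate, waiverDate) DESC) AS rn
      FROM PersonnelTrainingEvent)
WHERE rn = 1
ORDER BY personnelId
""").fetchall()
for r in rows:
    print(r)
```

Only one row per (5351, 25947) survives — the one with the 2012 due date — which matches the "most current value" requirement in the question.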
SQL select latest row using multiple groups
[ "sql", "sql-server", "sql-server-2008" ]
I'm using a query where I am selecting some attributes from the table based on a WHERE condition. My WHERE condition is:

```
date>GetDate();
```

I have tried this:

```
SELECT TOP 2 img, name, substring(description,1,80) as description,
       Convert(nvarchar,date,106) as date
FROM tbl_test
where date >= Convert(nvarchar,GetDate(),106)
order by date Asc;
```

This query runs fine but shows a different result compared to a similar query in which I am not converting the date format:

```
SELECT TOP 2 img, name, substring(description,1,80) as description, date
FROM tbl_test
where date >= GetDate()
order by date Asc;
```

Please guide me where I am going wrong?
Your first query will convert getdate() into the nvarchar data type, and it will compare the date with a string, while the 2nd query will compare 2 dates. **So the 2nd option is better.**

Still, if you want to convert the date into a string, then use the 102 format in the comparison, like `WHERE CONVERT(varchar(20),date,102) >= CONVERT(varchar(20), getdate(),102)`

For the select column you can use whichever format you want, like

```
SELECT CONVERT(varchar(20),date,106)
```

**Final query is:**

```
SELECT TOP 2 img, name,
       SUBSTRING(description,1,80) as description,
       CONVERT(varchar(20),date,106) as [DisplayDate]
FROM tbl_test
WHERE CONVERT(varchar(20),date,102) >= CONVERT(varchar(20), getdate(),102)
ORDER BY date ASC;
```

**Without converting to varchar, you can cast getdate() to date to remove the time part:**

```
SELECT TOP 2 img, name,
       SUBSTRING(description,1,80) as description,
       CONVERT(varchar(20),date,106) as [DisplayDate]
FROM tbl_test
WHERE date >= CAST(getdate() as date)
ORDER BY date ASC;
```

**[SQL Fiddle Demo](http://sqlfiddle.com/#!3/579ce/10)**
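The reason style 102 is safe in a comparison while style 106 is not comes down to string ordering: `yyyy.mm.dd` strings sort chronologically, `dd mon yyyy` strings do not. A quick sketch with Python's standard `datetime` module (hypothetical sample dates; `%b` output assumes an English/C locale):

```python
from datetime import datetime

# Three dates formatted two ways, then sorted as strings, the way an
# nvarchar comparison would order them.
dates = [datetime(2013, 12, 16), datetime(2014, 1, 1), datetime(2013, 5, 2)]

style_102 = [d.strftime("%Y.%m.%d") for d in dates]   # like CONVERT(..., 102)
style_106 = [d.strftime("%d %b %Y") for d in dates]   # like CONVERT(..., 106)

# 102 strings sort in the same order as the dates themselves...
print(sorted(style_102))
# ...but 106 strings sort by day-of-month first, so "01 Jan 2014"
# lands before "02 May 2013".
print(sorted(style_106))
```

This is the same trap the original query with `Convert(nvarchar,GetDate(),106)` in the WHERE clause falls into.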
```
DECLARE @Date Datetime;
SET @Date = GETDATE();

SELECT CONVERT(VARCHAR(12), @Date, 113) AS Date
```

**RESULT**

```
╔══════════════╗
β•‘ Date         β•‘
╠══════════════╣
β•‘ 01 Jan 2014  β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
```

**Edit**

As Upendra Chaudhari has explained, when you compare the column `Date` with a string `Convert(varchar(20),GetDate(),102)`, what actually happens behind the scenes is that `Convert(varchar(20),GetDate(),102)` returns a string `2014.01.01`, but to compare this string with a Datetime column SQL Server does an implicit conversion. SQL Server has to have both values in the same datatype to compare them. The Datetime datatype has precedence over the nvarchar/varchar datatype, so SQL Server converts the string into the Datetime datatype, which returns something like:

```
SELECT CAST('2014.01.01' AS DATETIME)

Result : 2014-01-01 00:00:00.000
```

In this process of converting your values to a string and then back to Datetime, you have actually lost all the time values in your comparison values, and this is the reason why you are getting unexpected results back. So make sure, whenever you are comparing, to have exactly the same datatype on both sides, and take control of any data conversions in your code rather than letting SQL Server do the datatype conversions for you.

I hope this explains why you are getting different results.
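The two points above — formatting for display versus what a date-only round trip does to the time portion — can be sketched outside SQL Server with Python's standard `datetime` module. The `2014-01-01 15:30` value below is just a stand-in for `GETDATE()`, and `%b` output assumes an English/C locale:

```python
from datetime import datetime

# Formatting a datetime as "dd Mon yyyy", like CONVERT(..., 113)/106.
now = datetime(2014, 1, 1, 15, 30)          # stand-in for GETDATE()
display = now.strftime("%d %b %Y")          # "01 Jan 2014" in a C locale

# Round-tripping through a date-only string (like CONVERT(..., 102)
# and back via implicit conversion) zeroes out the time portion,
# which is why the string-comparing query returns different rows.
round_trip = datetime.strptime(now.strftime("%Y.%m.%d"), "%Y.%m.%d")

print(display)
print(round_trip)    # midnight -- the 15:30 is gone
```

The lost time component is exactly the difference between the two queries in the question.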
How to convert GetDate() to 1 Jan 2014 format?
[ "sql" ]
I have a database table called "Prices" which holds data for different items with different prices and other properties. My goal is to group this data by hour (many samples are collected hourly). The summarized data should be inserted into a table called "HourlyData".

The problem I am having is that my database, which takes approximately 2 GB of space at the moment, ends up using over 40 GB during the query execution. That is bad news for me because I am using a VPS where HDD capacity is limited, and I end up having no space left on the device to finish the query execution.

I have included the SQLfiddle to ensure better code readability here: **<http://sqlfiddle.com/#!15/38783/4>**

There are two functions in the sqlfiddle paste; keep in mind that sqlfiddle doesn't support functions (as far as I know), so the code won't work without adjustments, but for demonstration purposes it should be enough. Also, the schema is built without the correct ID syntax — the IDs should auto-increment and be primary keys of my tables — if that's somehow relevant.

Since I am a newbie to postgresql/SQL, to maintain this ton of code I have chopped it into temporary tables instead of one big query with many subqueries inside. I am unsure whether this usage of temporary tables is bad practice or not, but I haven't noticed a difference in cache either way.

**My question is, how could I go about this current execution more effectively?**

I am using lots of AVG and LAG functions at the end of my second SQL function; I assume they are the ones to blame for the cache building up, but I haven't figured out how I could do this piece by piece, which would require less cache, since the LAG function isn't usable with the UPDATE command.

One of the solutions, of course, is to summarize the data more often in the future. The other solution I've come up with is to dump the database, download it from the VPS, and run the functions from my local computer, where HDD space isn't a problem. 
However I am hoping that this is more solvable with more experienced SQL programming - if its not then **is there anything I could do to optimize the current code anyway in terms of cache size?** **Table code** ``` -- Table: "Prices" -- DROP TABLE "Prices"; CREATE TABLE "Prices" ( data_id integer, level smallint, sell_price integer, buy_price integer, sell_count integer, buy_count integer, -- name character varying(100), date timestamp without time zone DEFAULT ('now'::text)::timestamp(1) without time zone, "ID" serial NOT NULL, CONSTRAINT "Prices_pkey" PRIMARY KEY ("ID") ) WITH ( OIDS=FALSE ); -- Index: "Prices_ID_idx" CREATE INDEX "Prices_ID_idx" ON "Prices" USING btree ("ID"); -- Index: "Prices_data_id_idx" CREATE INDEX "Prices_data_id_idx" ON "Prices" USING btree (data_id); -- Table: "HourlyData" CREATE TABLE "HourlyData" ( data_id bigint, name character varying(100), date_time timestamp without time zone, hour integer, day integer, buy numeric(20,4), sell numeric(20,4), prev_buy numeric(20,4), prev_sell numeric(20,4), buy_count integer, sell_count integer, prev_buy_count integer, prev_sell_count integer, ab_change numeric(10,2), as_change numeric(10,2), abc_change numeric(10,2), asc_change numeric(10,2), "ID" integer NOT NULL DEFAULT nextval('"DailyData_ID_seq"'::regclass), CONSTRAINT "DailyData_pkey" PRIMARY KEY ("ID") ) WITH ( OIDS=FALSE ); -- Index: "DailyData_data_id_idx" CREATE INDEX "DailyData_data_id_idx" ON "HourlyData" USING btree (data_id); INSERT INTO Prices ("data_id", "level", "sell_price", "buy_price", "sell_count", "buy_count", "name", "date") VALUES (28262, 80, 18899, 15000, 53, 66, 'random_item', '2013-12-16 01:38:07'), (28262, 80, 18899, 15000, 53, 66, 'random_item', '2013-12-16 01:44:31'), (28262, 80, 18987, 15000, 46, 65, 'random_item', '2013-12-16 01:30:22'), (28262, 80, 18987, 16000, 49, 65, 'random_item', '2013-12-16 01:00:19'), (28265, 80, 18987, 16000, 48, 64, 'random_itema', '2013-12-16 01:30:20'), (28265, 80, 18987, 16000, 48, 64, 
'random_itema', '2013-12-16 01:00:21'), (28265, 80, 17087, 16000, 49, 63, 'random_itema', '2013-12-16 01:30:22'), (28262, 80, 18980, 5028, 48, 62, 'random_item', '2013-12-16 10:00:28'), (28262, 80, 18975, 5528, 50, 60, 'random_item', '2013-12-16 10:30:30'), (28262, 80, 18975, 5228, 51, 59, 'random_item', '2013-12-16 10:00:27'), (28262, 80, 18975, 5500, 52, 59, 'random_item', '2013-12-16 10:30:21'), (28262, 80, 18975, 5600, 53, 59, 'random_item', '2013-12-16 10:00:23'), (28262, 80, 18979, 5700, 50, 58, 'random_item', '2013-12-16 10:30:28'), (28262, 80, 18977, 5028, 51, 56, 'random_item', '2013-12-16 10:00:23'), (28264, 80, 18978, 5028, 51, 54, 'random_itemaw', '2013-12-16 10:30:25'), (28264, 80, 18979, 5628, 50, 54, 'random_itemaw', '2013-12-16 10:00:28'), (28264, 80, 18979, 5028, 52, 64, 'random_itemaw', '2013-12-16 10:30:26'), (28264, 80, 18979, 15028, 52, 64, 'random_item', '2013-12-16 11:00:25'), (28264, 80, 17977, 15028, 56, 63, 'random_item', '2013-12-16 11:30:24'), (28264, 80, 17977, 15029, 58, 62, 'random_item', '2013-12-16 11:00:30'), (28262, 80, 17977, 15027, 58, 62, 'random_item', '2013-12-16 11:30:22'), (28262, 80, 16000, 15022, 59, 49, 'random_item', '2013-12-16 11:00:26'), (28262, 80, 17979, 15021, 56, 49, 'random_item', '2013-12-16 11:30:26'), (28262, 80, 17969, 15023, 58, 44, 'random_item', '2013-12-16 11:00:31'), (28262, 80, 18987, 15027, 48, 44, 'random_item', '2013-12-16 12:30:33'), (28262, 80, 20819, 15027, 40, 43, 'random_item', '2013-12-16 12:00:32'), (28262, 80, 21810, 15034, 37, 48, 'random_item', '2013-12-16 12:30:24'), (28262, 80, 21810, 15037, 39, 49, 'random_item', '2013-12-16 22:00:18'), (28262, 80, 21810, 15038, 39, 49, 'random_item', '2013-12-16 22:30:25'), (28262, 80, 21810, 15038, 39, 49, 'random_item', '2013-12-16 22:00:25'), (28262, 80, 21710, 15039, 40, 49, 'random_item', '2013-12-16 22:30:24'), (28262, 80, 21709, 15040, 41, 49, 'random_item', '2013-12-16 22:00:24'), (28262, 80, 21709, 15040, 41, 49, 'random_item', '2013-12-16 
22:30:22'), (28262, 80, 21709, 15040, 41, 49, 'random_item', '2013-12-16 23:00:24'), (28262, 80, 21709, 15041, 41, 49, 'random_item', '2013-12-16 23:30:27'), (28266, 80, 21708, 15042, 42, 50, 'random_item1', '2013-12-17 05:00:26'), (28266, 80, 20000, 15041, 43, 49, 'random_item1', '2013-12-17 05:30:21'), (28266, 80, 20000, 15097, 43, 52, 'random_item1', '2013-12-17 05:00:28'), (28262, 80, 20000, 15097, 43, 52, 'random_item', '2013-12-17 05:30:28'), (28262, 80, 20000, 15097, 43, 52, 'random_item', '2013-12-17 05:00:31'), (28262, 80, 20000, 15097, 44, 51, 'random_item', '2013-12-17 05:30:34'), (28262, 80, 19997, 15097, 44, 47, 'random_item', '2013-12-17 05:00:20'), (28262, 80, 19997, 15098, 44, 50, 'random_item', '2013-12-17 05:30:26'), (28262, 80, 19997, 15098, 44, 50, 'random_item', '2013-12-17 05:00:24'), (28262, 80, 19997, 15098, 44, 49, 'random_item', '2013-12-17 05:35:44'), (28262, 80, 19996, 15098, 45, 48, 'random_item', '2013-12-17 05:00:22'), (28262, 80, 19996, 15097, 46, 47, 'random_item', '2013-12-17 05:30:24'), (28262, 80, 19996, 15097, 46, 47, 'random_item', '2013-12-17 05:00:29'), (28262, 80, 19996, 15097, 46, 47, 'random_item', '2013-12-17 05:30:24'), (28262, 80, 19996, 15041, 47, 46, 'random_item', '2013-12-17 05:00:25') ; ``` **Functions** ``` -- Function: percentageincrease(numeric, numeric) CREATE OR REPLACE FUNCTION percentageincrease(lugeja numeric, nimetaja numeric) RETURNS numeric AS $BODY$ BEGIN IF Nimetaja IS NULL or Nimetaja = 0 THEN RETURN 0; ELSE RETURN ROUND((Lugeja - Nimetaja) / Nimetaja * 100, 2); END IF; END; $BODY$ LANGUAGE plpgsql; -- Function: process_hourly_data() CREATE OR REPLACE FUNCTION process_hourly_data() RETURNS void AS $BODY$ CREATE TEMP TABLE "TEMP_summarize1" AS SELECT prices.data_id AS data_id, prices.name AS name, date_part('hour', prices.date) AS hour, date_part('day', prices.date) AS day, prices.date AS date_var, prices.buy_price, prices.sell_price, prices.sell_count, prices.buy_count FROM "Prices" as prices; CREATE 
TEMP TABLE "TEMP_summarize2" ( item_name character varying(100), data_id bigint, hour integer, day integer, date_var timestamp without time zone, avgbuy smallint, avgsell smallint, avgsellCount smallint, avgbuyCount smallint ); PREPARE TEMP2 AS INSERT INTO "TEMP_summarize2" SELECT MAX(whatever.name) as item_name, whatever.data_id, prices.hour, MAX(prices.day) as day, MAX(prices.date_var) as date_var, AVG(prices.buy_price) AS avgbuy, AVG(prices.sell_price) AS avgsell, AVG(prices.sell_count) AS avgsellCount, AVG(prices.buy_count) AS avgbuyCount FROM "TEMP_summarize1" AS prices, (SELECT data_id, name FROM "TEMP_summarize1" as whatever) AS whatever WHERE whatever.data_id = prices.data_id AND whatever.name = prices.name GROUP BY hour, whatever.data_id; PREPARE TEMP3 AS INSERT INTO "HourlyData" SELECT data_id, item_name, date_var, hour, day, avgbuy, avgsell, LAG(avgbuy, 1, NULL) OVER(PARTITION BY data_id) AS last_avgbuy, LAG(avgsell, 1, NULL) OVER(PARTITION BY data_id) AS last_avgsell, avgsellCount, avgbuyCount, LAG(avgsellCount, 1, NULL) OVER (PARTITION BY data_id) AS last_avgsellCount, LAG(avgbuyCount, 1, NULL) OVER (PARTITION BY data_id) AS last_avgbuyCount, percentageincrease(LAG(avgbuy, 1, NULL) OVER(PARTITION BY data_id), avgbuy), percentageincrease(LAG(avgsell, 1, NULL) OVER(PARTITION BY data_id), avgsell), percentageincrease(LAG(avgsellCount, 1, NULL) OVER (PARTITION BY data_id), avgsellCount), percentageincrease(LAG(avgbuyCount, 1, NULL) OVER (PARTITION BY data_id), avgbuyCount) FROM "TEMP_summarize2" ORDER BY data_id, hour; EXECUTE TEMP2; EXECUTE TEMP3; $BODY$ LANGUAGE sql; ```
You might want to try something like the following:

```
WITH Date_Range as
    (SELECT calendarDate, timeOfDay,
            calendarDate + timeOfDay as rangeStart,
            (calendarDate + timeOfDay) + INTERVAL '1 hour' as rangeEnd
     FROM Calendar
     CROSS JOIN TimeOfDay
     WHERE calendarDate >= CAST('2013-12-16' as DATE)
           AND calendarDate < CAST('2013-12-18' as DATE))
INSERT INTO HourlyData
SELECT data_id, randomName, latestPriceChangeAt,
       EXTRACT(HOUR FROM timeOfDay) as hourOfDay,
       EXTRACT(DAY FROM calendarDate) as dayOfMonth,
       averageBuyPrice, averageSellPrice,
       previousAverageBuyPrice, previousAverageSellPrice,
       averageBuyCount, averageSellCount,
       previousAverageBuyCount, previousAverageSellCount,
       -- put the calls to your function here instead of these operations
       averageBuyPrice - previousAverageBuyPrice,
       averageSellPrice - previousAverageSellPrice,
       averageBuyCount - previousAverageBuyCount,
       averageSellCount - previousAverageSellCount
FROM (SELECT data_id, calendarDate, timeOfDay,
             MAX(date) as latestPriceChangeAt,
             MAX(name) as randomName,
             AVG(sell_price) as averageSellPrice,
             AVG(sell_count) as averageSellCount,
             AVG(buy_price) as averageBuyPrice,
             AVG(buy_count) as averageBuyCount,
             LAG(AVG(buy_price)) OVER(PARTITION BY data_id ORDER BY calendarDate, timeOfDay) as previousAverageBuyPrice,
             LAG(AVG(sell_price)) OVER(PARTITION BY data_id ORDER BY calendarDate, timeOfDay) as previousAverageSellPrice,
             LAG(AVG(buy_count)) OVER(PARTITION BY data_id ORDER BY calendarDate, timeOfDay) as previousAverageBuyCount,
             LAG(AVG(sell_count)) OVER(PARTITION BY data_id ORDER BY calendarDate, timeOfDay) as previousAverageSellCount
      FROM Date_Range
      JOIN Prices
          ON Prices.date >= Date_Range.rangeStart
          AND Prices.date < Date_Range.rangeEnd
      GROUP BY data_id, calendarDate, timeOfDay) data;
```

(Using your initial [SQL Fiddle as the base](http://sqlfiddle.com/#!15/e5979/20))

Note that I've changed the rollup to also be by *day*, not just hour, as otherwise your result columns are slightly odd. 
This is trivial to change if you wanted to group all days together, though. I'm also assuming you have a calendar/time-of-day table, which are ridiculously handy tools to have for analysis purposes. If you don't have one/can't create one, they can be created pretty easily with recursive CTEs. All query work assumes you also have appropriate indices - otherwise, your system will likely create some temporary ones...

I personally think your biggest problem is attempting to "optimize" your query with temp tables, for what should be a simple aggregate query. What you're doing *does* work in some situations (and is occasionally necessary), but the version you were using wasn't helping any - there was a self-join in the `prices` table, which was likely a huge offender.
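The shape of that answer — one aggregate per (id, hour) bucket, then window functions over the already-grouped rows instead of staging temp tables — can be tried out quickly with Python's built-in `sqlite3` module (SQLite 3.25+ for window functions). The rows below are a tiny made-up subset of the question's data, and `strftime('%Y-%m-%d %H', ...)` plays the role of the calendar/hour bucketing:

```python
import sqlite3

# Average buy_price per (data_id, hour) in a single grouped pass, then
# LAG over the grouped rows for the previous hour's average -- no
# temporary tables and no self-join.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Prices (data_id INTEGER, buy_price INTEGER, date TEXT);
INSERT INTO Prices VALUES
    (28262, 15000, '2013-12-16 01:00:19'),
    (28262, 16000, '2013-12-16 01:30:22'),
    (28262, 5028,  '2013-12-16 10:00:28'),
    (28262, 5528,  '2013-12-16 10:30:30');
""")
rows = conn.execute("""
SELECT data_id, hour, avg_buy,
       LAG(avg_buy) OVER (PARTITION BY data_id ORDER BY hour) AS prev_buy
FROM (SELECT data_id,
             strftime('%Y-%m-%d %H', date) AS hour,
             AVG(buy_price) AS avg_buy
      FROM Prices
      GROUP BY data_id, hour)
ORDER BY data_id, hour
""").fetchall()
for r in rows:
    print(r)
```

The first hourly bucket gets `NULL` for the previous average (the same `LAG(..., 1, NULL)` default the question's function relies on), and a percentage-change calculation can then be a plain expression over `avg_buy` and `prev_buy`.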
I have faced the same kind of issue. The issue might be caused by the `UPDATE` statements. Are you using `COMMIT` to commit your changes to the database? If you use `COMMIT` for each `UPDATE`, then the cache size will definitely grow.
postgresql query optimization - 2GB database ends up being 50GB during query execution
[ "sql", "postgresql", "lag", "average", "temporary" ]