I have a table with columns `foo_name` and `foo_type`. `foo_type` can have values of `A`, `B` or `C`. I want to find all `foo_name` where the table does not have rows with all possible values of `foo_type`. In other words, ``` DISTINCT(foo_name) WHERE COUNT(rows grouped by foo_name) for all foo_name is less than 3 ``` Sample Data ``` Foo, A Foo, B Foo, C Bar, B Bar, C Baz, A Qux, A Qux, B Qux, C ``` My query should return *Bar* and *Baz* because those `foo_name` don't have rows for all possible values of `foo_type`. I'd appreciate pointers to working SQL that does the above. Additionally, I want to be able to extend the above from count() to find all `foo_name` where `foo_type` of some value (or values) is not found. In the above sample data, I would want to be able to search for all `foo_name` where no row matching `foo_name` with `foo_type = A` is found (Answer: *Bar*)
That's a simple aggregate. You want a row per foo\_name in your results, so you GROUP BY foo\_name. Then limit your results in HAVING: ``` select foo_name from my_table group by foo_name having count(distinct foo_type) < 3; ``` You can easily change your HAVING clause in order to know what types were found for a foo\_name, e.g.: ``` select foo_name from my_table group by foo_name having max(case when foo_type = 'A' then 1 else 0 end) = 0 -- A not found and max(case when foo_type = 'B' then 1 else 0 end) = 1 -- B found and max(case when foo_type = 'C' then 1 else 0 end) = 1 -- C found ``` EDIT: Here is the same with another HAVING clause which may be easier to understand: ``` select foo_name from my_table group by foo_name having group_concat(distinct foo_type order by foo_type) = 'B,C'; ```
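The grouped-HAVING approach above can be verified quickly with an in-memory SQLite database via Python's `sqlite3`; the table name and sample data mirror the question (a minimal sketch, not part of the original answer).

```python
import sqlite3

# Build the question's sample data in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (foo_name TEXT, foo_type TEXT)")
rows = [("Foo", "A"), ("Foo", "B"), ("Foo", "C"),
        ("Bar", "B"), ("Bar", "C"),
        ("Baz", "A"),
        ("Qux", "A"), ("Qux", "B"), ("Qux", "C")]
conn.executemany("INSERT INTO my_table VALUES (?, ?)", rows)

# Names that do NOT have all three foo_type values.
incomplete = [r[0] for r in conn.execute(
    "SELECT foo_name FROM my_table "
    "GROUP BY foo_name "
    "HAVING COUNT(DISTINCT foo_type) < 3 "
    "ORDER BY foo_name")]
print(incomplete)  # ['Bar', 'Baz']
```

As expected, `Foo` and `Qux` (which have all three types) are filtered out by the HAVING clause.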
There are probably several ways to do this, but I would do it with a subquery. So something like this ... ``` SELECT foo_name FROM ( SELECT foo_name, COUNT(DISTINCT(foo_type)) AS foo_type_count FROM foo_table GROUP BY foo_name ) as sq WHERE foo_type_count != 3 ``` The subquery (inside the parentheses) returns all of the foo\_name values and a count of how many different foo\_types are set for each of those values. Then you select some of the foo\_name values depending on some other criteria - in the case I provided, you could pull out all of the foo\_names that do not have all of the three foo\_types associated with them. If you want to tweak this, you can then add WHERE clauses inside the subquery, so you could do `WHERE foo_type != A` inside the subquery and change the WHERE clause outside the subquery to match `WHERE foo_type_count = 2` - this would return all foo\_names which have foo\_types B and C, whether or not they have A. So for your sample data set, it should return Foo, Bar, and Qux, but not Baz.
MySQL: find all rows where the number of rows for the possible values of a column is less than n
[ "", "mysql", "sql", "" ]
I have a column name which is a SQL reserved word. How do I run this query in the query editor in SQL Server Management Studio? ``` SELECT Name, [myTable].Schema FROM myTable ```
Add `[` `]` around your reserved word. ``` SELECT [Name], myTable.[Schema] FROM myTable ``` Actually it doesn't hurt to add brackets around every segment: ``` SELECT [Name], [myTable].[Schema] FROM [myTable] ```
Use brackets around that reserved word. ``` SELECT [Name], [myTable].[Schema] FROM [myTable] ```
SQL query not running where column name is reserved word
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a schema with three tables with the following structure ``` donuts (name: string, price: integer) grocery (no: string, gname: string, credit: integer) distributor (dname: string, gno: string, deliverydate: date) ``` distributor.dname and distributor.gno are foreign keys that reference donuts and grocery, whose keys are name and no respectively. I am having trouble with 2 queries I am trying to write: 1. I am trying to query the names of all donuts that are in every "Vons" or "Smiths" grocery store (specified in gname). I tried ``` SELECT d.name FROM donuts d, grocery g, distributor dd WHERE d.name = dd.cname AND dd.gno = g.no AND g.gname = 'Vons' OR g.gname = 'Smiths' ``` 2. I am trying to query the grocery number (no in grocery) shared by at least two different donuts. I tried ``` SELECT g.no FROM donuts d, grocery g, distributor dd WHERE g.no = dd.gno ``` This doesn't seem to return every occurrence or account for OR. How could I fix my queries? I am a little new to SQL, please pardon my lack of knowledge.
1) You are using a combination of AND and OR in your query and you need to use brackets to specify the [order of operations](https://en.wikipedia.org/wiki/Order_of_operations), otherwise the OR clause will not be interpreted in the way you are intending. Example using brackets: ``` SELECT d.name FROM donuts d, grocery g, distributor dd WHERE d.name = dd.cname AND dd.gno = g.no AND ( g.gname = 'Vons' OR g.gname = 'Smiths' ) ``` In the example above, the brackets ensure that the OR operation is only between the two gname values, and not between gname = 'Smiths' and the rest of the where clause. 2) Assuming the data only has one instance of each grocery - distributor - donut relationship, and each grocery only gets each donut type from one distributor, you should be able to count the rows and apply a HAVING clause to find where there are two or more relationships: ``` SELECT g.no, COUNT(*) as Quantity FROM distributor dd, grocery g, donuts d WHERE dd.dname = d.name AND dd.gno = g.no GROUP BY g.no HAVING COUNT(*) >= 2 ```
Try grouping the last OR in brackets like this; otherwise the query might be looking for your key join with Vons, then Smiths without the key joins. ``` SELECT d.name FROM donuts d, grocery g, distributor dd WHERE d.name = dd.cname AND dd.gno = g.no AND (g.gname = 'Vons' OR g.gname = 'Smiths') ```
SQL querying to get every possible link
[ "", "mysql", "sql", "" ]
I am not sure how to word this question, but here is the issue. I have a table `subscribers` of emails that looks like this: ``` Email - Event_id 1@1.com - 123 1@1.com - 456 2@2.com - 123 3@3.com - 123 3@3.com - 123 ``` I have a query that looks like this: ``` select email as "Email Address" from subscribers where event_id='123' GROUP BY email ``` I want the result of this query to be: ``` Email Address 2@2.com 3@3.com ``` But obviously, based on the query I have, I get: ``` Email Address 1@1.com 2@2.com 3@3.com ``` Basically I would like to exclude emails that are associated with other event IDs and only collect those that appear ONLY in the event with ID `123`.
You can filter results you are not interested in using `IN`: ``` select email as "Email Address" from subscribers where event_id='123' and email not in( select email as "Email Address" from subscribers where event_id<>'123') group by email ```
Just use a `having` clause and move the condition there: ``` select email as "Email Address" from subscribers group by email having min(event_id) = max(event_id) and min(event_id) = '123'; ``` This says that the minimum event\_id on the email is the same as the maximum (hence all are equal or `NULL`) and the value is `'123'`.
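The min/max HAVING trick above can be checked with an in-memory SQLite database via Python's `sqlite3`, using the question's sample rows (a quick sketch, not part of the original answer).

```python
import sqlite3

# Recreate the question's subscribers table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (email TEXT, event_id TEXT)")
conn.executemany("INSERT INTO subscribers VALUES (?, ?)", [
    ("1@1.com", "123"), ("1@1.com", "456"),
    ("2@2.com", "123"),
    ("3@3.com", "123"), ("3@3.com", "123"),
])

# Emails whose only event_id is '123': min and max of the group are equal,
# and that common value is '123'.
only_123 = [r[0] for r in conn.execute(
    "SELECT email FROM subscribers "
    "GROUP BY email "
    "HAVING MIN(event_id) = MAX(event_id) AND MIN(event_id) = '123' "
    "ORDER BY email")]
print(only_123)  # ['2@2.com', '3@3.com']
```

`1@1.com` is excluded because its group spans two distinct event IDs ('123' and '456').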
SQL query to exclude records that appear in other rows?
[ "", "mysql", "sql", "database", "" ]
***BACKGROUND:*** I have the tables: **TUTPRAC:** CLASSID, UNITCODE, STAFFNO, CLASSDAY, CLASSTIME, CLASSTYPE, ROOMNUM **UNITSTREAM:** STREAMID, UNITCODE, STAFFNO, DAY, TIME, LOCATION ***PROBLEM:*** I've got a database and tables, and I'm trying to make a procedure that checks whether a class already exists before opening a new class. The procedure has to make sure that there isn't any overlap (clash in terms of time-day or location) between streams (lectures), tute sessions and prac classes for that particular unit. TUTPRAC contains the records for both tutorials and practicals while UNITSTREAM contains the records for streams (lectures). I also want the procedure to raise an exception called CLASS\_OVERLAPS that prints an alert message in case of a clash. I'm quite new to PL/SQL so any help would be greatly appreciated. ``` CREATE OR REPLACE PROCEDURE OPEN_CLASS( p_class IN TUTPRAC.CLASSID%TYPE, p_unitc IN TUTPRAC.UNITCODE%TYPE, p_classd IN TUTPRAC.CLASS_DAY%TYPE, p_classt IN TUTPRAC.CLASS_TIME%TYPE, p_classtp IN TUTPRAC.CLASS_TYPE%TYPE, p_roomnm IN TUTPRAC.ROOMNUM%TYPE) IS BEGIN INSERT INTO TUTPRAC ("CLASSID", "UNITCODE", "CLASS_DAY", "CLASS_TIME", "CLASS_TYPE", "ROOMNUM") VALUES (p_class, p_unitc, p_classd, p_classt, p_classtp, p_roomnm); COMMIT; END; ```
You can try like this: ``` CREATE OR REPLACE PROCEDURE OPEN_CLASS( p_class IN TUTPRAC.CLASSID%TYPE, p_unitc IN TUTPRAC.UNITCODE%TYPE, p_classd IN TUTPRAC.CLASS_DAY%TYPE, p_classt IN TUTPRAC.CLASS_TIME%TYPE, p_classtp IN TUTPRAC.CLASS_TYPE%TYPE, p_roomnm IN TUTPRAC.ROOMNUM%TYPE) IS x NUMBER := 0; BEGIN /* check whether a matching row already exists */ SELECT nvl((SELECT 1 FROM TUTPRAC WHERE CLASSID = p_class and UNITCODE = p_unitc and CLASS_DAY = p_classd and CLASS_TIME = p_classt and CLASS_TYPE = p_classtp and ROOMNUM = p_roomnm), 0) INTO x FROM dual; /* insert only when no matching row was found */ IF (x = 0) THEN INSERT INTO TUTPRAC ("CLASSID", "UNITCODE", "CLASS_DAY", "CLASS_TIME", "CLASS_TYPE", "ROOMNUM") VALUES (p_class, p_unitc, p_classd, p_classt, p_classtp, p_roomnm); END IF; END; ``` Or you can use NOT EXISTS like this: ``` CREATE OR REPLACE PROCEDURE OPEN_CLASS( p_class IN TUTPRAC.CLASSID%TYPE, p_unitc IN TUTPRAC.UNITCODE%TYPE, p_classd IN TUTPRAC.CLASS_DAY%TYPE, p_classt IN TUTPRAC.CLASS_TIME%TYPE, p_classtp IN TUTPRAC.CLASS_TYPE%TYPE, p_roomnm IN TUTPRAC.ROOMNUM%TYPE) IS BEGIN INSERT INTO TUTPRAC ("CLASSID", "UNITCODE", "CLASS_DAY", "CLASS_TIME", "CLASS_TYPE", "ROOMNUM") SELECT p_class, p_unitc, p_classd, p_classt, p_classtp, p_roomnm FROM dual WHERE NOT EXISTS (SELECT NULL FROM TUTPRAC WHERE CLASSID = p_class and UNITCODE = p_unitc and CLASS_DAY = p_classd and CLASS_TIME = p_classt and CLASS_TYPE = p_classtp and ROOMNUM = p_roomnm); END; ```
Do you know merge? [Merge](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_9016.htm#SQLRF01606) ``` CREATE OR REPLACE PROCEDURE open_class( p_class IN TUTPRAC.CLASSID%TYPE, p_unitc IN TUTPRAC.UNITCODE%TYPE, p_classd IN TUTPRAC.CLASS_DAY%TYPE, p_classt IN TUTPRAC.CLASS_TIME%TYPE, p_classtp IN TUTPRAC.CLASS_TYPE%TYPE, p_roomnm IN TUTPRAC.ROOMNUM%TYPE) IS BEGIN merge into TUTPRAC a using (select p_class CLASSID, p_unitc UNITCODE, p_classd CLASS_DAY, p_classt CLASS_TIME, p_classtp CLASS_TYPE, p_roomnm ROOMNUM from dual) b on (a.CLASSID = b.CLASSID and a.UNITCODE = b.UNITCODE and a.CLASS_DAY = b.CLASS_DAY and a.CLASS_TIME = b.CLASS_TIME and a.CLASS_TYPE = b.CLASS_TYPE and a.ROOMNUM = b.ROOMNUM) WHEN NOT MATCHED THEN INSERT (a.CLASSID, a.UNITCODE, a.CLASS_DAY, a.CLASS_TIME, a.CLASS_TYPE, a.ROOMNUM) values (b.CLASSID, b.UNITCODE, b.CLASS_DAY, b.CLASS_TIME, b.CLASS_TYPE, b.ROOMNUM); if sql%ROWCOUNT = 0 then dbms_output.put_line('Class already exists'); else dbms_output.put_line('Class added'); end if; commit; END; / ```
How to write PL/SQL to check if record exists first, before inserting
[ "", "sql", "oracle", "stored-procedures", "plsql", "" ]
I have a table **Table1** which has 5 columns like this ``` | ID | Name | V1 | V2 | V3 | | 1 | A | 103 | 507 | 603 | | 2 | B | 514 | 415 | 117 | ``` and another table **Table2** which has values like this ``` | Values | Rooms | | 103 | ABC | | 507 | DEF | | 603 | GHI | | 514 | JKL | | 415 | MNO | | 117 | PQR | ``` I am running a join query to get rooms from **Table2** joined by **Table1** as ``` SELECT t2.values, t2.rooms, t1.Name FROM Table2 t2 INNER JOIN Table1 t1 ON t1.V1 = t2.Values OR t1.V2 = t2.Values OR t1.V3 = t2.Values; ``` This query gets the result, but in ascending order of t2.values. I do not want to change any order. I just want to get the result in whatever order **Table1** has the values. ``` | Values | Rooms | Names | | 103 | ABC | A | | 117 | PQR | B | | 415 | MNO | B | | 507 | DEF | A | | 514 | JKL | B | | 603 | GHI | A | ``` The above result is ordered according to t2.Values, and these values come from t1.V1, t1.V2, t1.V3. I do not want that ordering. I want the result to be ordered according to the t1.V1, t1.V2, t1.V3 values. If we look at **Table1**, the values would be 103, 507, 603, 514, 415, 117 and therefore the result should be ``` | Values | Rooms | Names | | 103 | ABC | A | | 507 | DEF | A | | 603 | GHI | A | | 415 | MNO | B | | 514 | JKL | B | | 117 | PQR | B | ``` I hope I made my explanation somewhat clearer. If it still isn't clear, let me know and I will edit it more. As **paxdiablo** suggested, I tried adding ORDER BY t1.name but that is not sorting and the result is the same. Why?
I know you've already accepted an answer, but it looks to me like you want them sorted by the order of ID in table1, and then order of the column (v1, v2, v3) that you've matched on. In which case, something like this should work: ``` SELECT t2.`values`, t2.rooms, t1.Name FROM Table2 t2 INNER JOIN Table1 t1 ON t1.V1 = t2.`values` OR t1.V2 = t2.`values` OR t1.V3 = t2.`values` ORDER BY t1.id, CASE WHEN t1.v1 = t2.`values` THEN 1 WHEN t1.v2 = t2.`values` THEN 2 WHEN t1.V3 = t2.`values` THEN 3 END ``` (Note I'm quoting `values` because it's a keyword in SQL...) What I'm doing here is: First, I'm ordering by t1.id, which gets you the rough sort order based on the rows in the t1 tables. Then I'm adding a secondary sort based on which `Values` column was matched in the result row, using a [`CASE`](https://dev.mysql.com/doc/refman/5.0/en/case.html) statement. For each row of your query results, if the result was produced by a match between `t1.v1` and `t2.values`, then the CASE statement evaluates to 1. If the result was because of a match between `t1.v2` and `t2.values`, then we get 2. If the result was because of a match between `t1.v3` and `t2.values`, then we get 3. So the overall sort order is based first on the order of the rows in `t1`, and then within that on the order of which column got matched between `t1` and `t2` for each row in your results, which seems to be the requirement (though it's hard to put into words!)
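The CASE-based sort key above can be sanity-checked with an in-memory SQLite database via Python's `sqlite3` (a sketch: the `Values` column is renamed `val` to sidestep the keyword, and the OR chain is written with IN for brevity).

```python
import sqlite3

# Recreate the question's two tables with the sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, name TEXT, v1 INTEGER, v2 INTEGER, v3 INTEGER);
INSERT INTO t1 VALUES (1, 'A', 103, 507, 603), (2, 'B', 514, 415, 117);
CREATE TABLE t2 (val INTEGER, room TEXT);
INSERT INTO t2 VALUES (103,'ABC'),(507,'DEF'),(603,'GHI'),
                      (514,'JKL'),(415,'MNO'),(117,'PQR');
""")

# Sort by t1 row order first, then by which column (v1, v2, v3) matched.
rows = conn.execute("""
SELECT t2.val, t2.room, t1.name
FROM t2
JOIN t1 ON t2.val IN (t1.v1, t1.v2, t1.v3)
ORDER BY t1.id,
         CASE WHEN t1.v1 = t2.val THEN 1
              WHEN t1.v2 = t2.val THEN 2
              ELSE 3 END
""").fetchall()
print([r[0] for r in rows])  # [103, 507, 603, 514, 415, 117]
```

The output matches the ordering the question asks for: row A's values left to right, then row B's.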
> I just want to get result in whatever the Table1 has values. This is where you've made your mistake. `Table1`, at least as far as SQL is concerned, doesn't *have* an order. Tables are unordered sets to which you *impose* order when extracting the data (if you wish). SQL `select` statements make absolutely *no* guarantee on the order in which results are returned, unless you specifically use `order by` or `group by`. Even `select * from table1` can return the rows in whatever order the DBMS sees fit to give them to you. If you want a specific ordering, you need to ask for it explicitly. For example, if you want them ordered by the room name, whack an `order by t1.name` at the end of your query. Though I'd probably go the whole hog and use a secondary sort order as well, with `order by t1.name, t2.rooms`. Or, to sort on the values, add `order by t2.values`. --- For example, punching this schema/data into [SQLFiddle](http://sqlfiddle.com/): ``` create table table1( id integer, name varchar(10), v1 integer, v2 integer, v3 integer); insert into table1 (id,name,v1,v2,v3) values (1,'a',103,507,603); insert into table1 (id,name,v1,v2,v3) values (2,'b',514,415,117); create table table2 ( val integer, room varchar(10)); insert into table2(val,room) values (103,'abc'); insert into table2(val,room) values (507,'def'); insert into table2(val,room) values (603,'ghi'); insert into table2(val,room) values (514,'jkl'); insert into table2(val,room) values (415,'mno'); insert into table2(val,room) values (117,'pqr'); ``` and then executing: ``` select t2.val, t2.room, t1.name from table2 t2 inner join table1 t1 on t1.v1 = t2.val or t1.v2 = t2.val or t1.v3 = t2.val ``` gives us an arbitrary ordering (it may *look* likes it's ordering by `rooms` within `name` but that's not guaranteed): ``` | val | room | name | |-----|------|------| | 103 | abc | a | | 507 | def | a | | 603 | ghi | a | | 514 | jkl | b | | 415 | mno | b | | 117 | pqr | b | ``` When we change that to sort on two 
descending keys `order by t1.name desc, t2.room desc`, we can see it re-orders based on that: ``` | val | room | name | |-----|------|------| | 117 | pqr | b | | 415 | mno | b | | 514 | jkl | b | | 603 | ghi | a | | 507 | def | a | | 103 | abc | a | ``` And, finally, changing the ordering clause to `order by t2.val asc`, we get it in value order: ``` | val | room | name | |-----|------|------| | 103 | abc | a | | 117 | pqr | b | | 415 | mno | b | | 507 | def | a | | 514 | jkl | b | | 603 | ghi | a | ``` --- Finally, if your intent is to order it by the order of columns in each row of `table1` (so the order is left to right `v1`, `v2`, `v3`, you can introduce an artificial sort key, either by using a `case` statement to select based on which column matched, or by running multiple queries which may be more efficient since: * you're not executing per-row functions, which tend not to scale very well; and * in larger DBMS', they can be parallelised. The multiple query option would go something like: ``` select 1 as minor, t2.val as val, t2.room as room, t1.name as name from table2 t2 inner join table1 t1 on t1.v1 = t2.val union all select 2 as minor, t2.val as val, t2.room as room, t1.name as name from table2 t2 inner join table1 t1 on t1.v2 = t2.val union all select 3 as minor, t2.val as val, t2.room as room, t1.name as name from table2 t2 inner join table1 t1 on t1.v3 = t2.val order by name, minor ``` and generates: ``` | minor | val | room | name | |-------|-----|------|------| | 1 | 103 | abc | a | | 2 | 507 | def | a | | 3 | 603 | ghi | a | | 1 | 514 | jkl | b | | 2 | 415 | mno | b | | 3 | 117 | pqr | b | ``` You can see there that it uses `name` as the primary key and the position of the value in the row as the minor key. Now some people may think it an ugly approach to introduce a fake column for sorting but it's a tried and tested method for increasing performance. However, you shouldn't trust me (or anyone) on that. 
My primary mantra for optimisation is *measure, don't guess.*
MySQL Inner Join changes the order of records
[ "", "mysql", "sql", "join", "" ]
I have a table with year, month, date, project and income columns. Each entry is added on the first of every month. I have to be able to get the total income for a particular project for every financial year. What I've got so far (which I think kind of works for yearly?) is something like this: ``` SELECT year, SUM(TotalIncome) FROM myTable WHERE ((date Between #1/1/2007# And #31/12/2015#) AND (project='aproject')) GROUP BY year; ``` *Essentially, rather than grouping the data by year, I would like to group the results by financial year.* I have used `DatePart('yyyy', date)` and the results are the same. I'll be running the query from excel to a database. I need to be able to select the number of years (e.g. 2009 to 2014, or 2008 to 2010, etc). I'll be taking the years from user input in excel (taking two dates, i.e. startYear, endYear). The results from the current query give me, each year would be data from 1st January to 31st December of the same year: ``` Year | Income 2009 | $123.12 2010 | $321.42 2011 | $231.31 2012 | $426.37 ``` I want the results to look something like this, where each financial year would be 1st July to 30th June of the *following* year: ``` FinancialYear | Income 2009-10 | $123.12 2010-11 | $321.42 2011-12 | $231.31 2012-13 | $426.37 ``` If possible, I'd also like to be able to do it per quarter. > Also, if it matters, I have read-only access so I can't make any modifications to the database.
This hasn't been tested but the logic is the same as the answer from [SQL query to retrieve financial year data grouped by the year](https://stackoverflow.com/questions/2591554/sql-query-to-retrieve-financial-year-data-grouped-by-the-year). ``` SELECT fy.FinancialYear, SUM(fy.TotalIncome) FROM ( SELECT IIF( MONTH(date) >= 7, YEAR(date) & "-" & YEAR(date)+1, YEAR(date)-1 & "-" & YEAR(date) ) AS FinancialYear, TotalIncome FROM myTable WHERE date BETWEEN #1/1/2007# AND #31/12/2015# AND project = 'aproject' ) AS fy GROUP BY fy.FinancialYear; ``` Extending this further you can get per quarter as well: ``` SELECT fy.FinancialQuarter, SUM(fy.TotalIncome) FROM ( SELECT IIF( MONTH(date) >= 10, "Q2-" & YEAR(date) & "-" & YEAR(date)+1, IIF( MONTH(date) >= 7, "Q1-" & YEAR(date) & "-" & YEAR(date)+1, IIF( MONTH(date) >= 4, "Q4-" & YEAR(date)-1 & "-" & YEAR(date), "Q3-" & YEAR(date)-1 & "-" & YEAR(date) ) ) ) AS FinancialQuarter, TotalIncome FROM myTable WHERE date BETWEEN #1/1/2007# AND #31/12/2015# AND project = 'aproject' ) AS fy GROUP BY fy.FinancialQuarter; ```
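The IIF logic above boils down to a small month test; as a sketch (in Python rather than Access SQL, with the label format taken from the question's desired `FinancialYear` column), the same July-June mapping looks like this:

```python
from datetime import date

def financial_year(d: date) -> str:
    """July-June financial year label, e.g. 2009-10 for Jul 2009 - Jun 2010."""
    if d.month >= 7:
        return f"{d.year}-{(d.year + 1) % 100:02d}"
    return f"{d.year - 1}-{d.year % 100:02d}"

def financial_quarter(d: date) -> str:
    # Q1 = Jul-Sep, Q2 = Oct-Dec, Q3 = Jan-Mar, Q4 = Apr-Jun,
    # matching the nested IIF above.
    q = {7: 1, 8: 1, 9: 1, 10: 2, 11: 2, 12: 2,
         1: 3, 2: 3, 3: 3, 4: 4, 5: 4, 6: 4}[d.month]
    return f"Q{q}-{financial_year(d)}"

print(financial_year(date(2009, 8, 1)))   # 2009-10
print(financial_year(date(2010, 3, 1)))   # 2009-10
print(financial_quarter(date(2009, 8, 1)))  # Q1-2009-10
```

A date in August 2009 and a date in March 2010 land in the same financial year, which is exactly what the GROUP BY on the computed label relies on.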
You need to create a `financial_year` table ``` financial_year_id int primary key period varchar startDate date endDate date ``` with this data (each financial year runs from 1 July to the following 1 July, exclusive) ``` 1 | 2009-10 | #1/7/2009# | #1/7/2010# 2 | 2010-11 | #1/7/2010# | #1/7/2011# 3 | 2011-12 | #1/7/2011# | #1/7/2012# 4 | 2012-13 | #1/7/2012# | #1/7/2013# ``` then perform a join with your original table ``` SELECT FY.period, SUM (YT.TotalIncome) FROM YourTable YT INNER JOIN financial_year FY ON YT.date >= FY.startDate and YT.date < FY.endDate GROUP BY FY.period ``` For Quarter: ``` SELECT FY.period, DatePart ('q', date) as quarter, SUM (YT.TotalIncome) FROM YourTable YT INNER JOIN financial_year FY ON YT.date >= FY.startDate and YT.date < FY.endDate GROUP BY FY.period, DatePart ('q', date) ``` **NOTE** I wasn't sure if your `date` is just `date` or `datetime`, so I went the safest way. If it is just `date` you could use inclusive end dates ``` 1 | 2009-10 | #1/7/2009# | #30/6/2010# 2 | 2010-11 | #1/7/2010# | #30/6/2011# ``` AND ``` ON YT.date BETWEEN FY.startDate AND FY.endDate ```
SQL group data by financial year
[ "", "sql", "excel", "ms-access", "financial", "" ]
I'm trying to create 20 unique cards with numbers, but I'm struggling a bit. Basically I need to create 20 unique 3x3 matrices with numbers 1-10 in the first column, numbers 11-20 in the second column and 21-30 in the third column. Any ideas? I'd prefer to have it done in R, especially as I don't know Visual Basic. In Excel I know how to generate the cards, but I'm not sure how to ensure they are unique. It seems quite precise and straightforward to me. Anyway, I needed to create 20 matrices that would look like: ``` [,1] [,2] [,3] [1,] 5 17 23 [2,] 8 18 22 [3,] 3 16 24 ``` Each of the matrices should be unique and each of the columns should consist of three unique numbers (the 1st column - numbers 1-10, the 2nd column - numbers 11-20, the 3rd column - numbers 21-30). Generating random numbers is easy, but how do I make sure that the generated cards are unique? Please have a look at the post that I accepted as the answer - it gives a thorough explanation of how to achieve it.
*(N.B. : I misread "rows" instead of "columns", so the following code and explanation will deal with matrices with random numbers 1-10 on 1st row, 11-20 on 2nd row etc., instead of columns, but it's exactly the same just transposed)* This code should guarantee uniqueness and good randomness : ``` library(gtools) # helper function getKthPermWithRep <- function(k,n,r){ k <- k - 1 if(n^r < k){ stop('k is greater than possible permutations') } v <- rep.int(0,r) index <- length(v) while ( k != 0 ) { remainder <- k %% n k <- k %/% n v[index] <- remainder index <- index - 1 } return(v+1) } # get all possible permutations of 10 elements taken 3 at a time # (singlerowperms = 720) allperms <- permutations(10,3) singlerowperms <- nrow(allperms) # get 20 random and unique bingo cards cards <- lapply(sample.int(singlerowperms^3,20),FUN=function(k){ perm2use <- getKthPermWithRep(k,singlerowperms,3) m <- allperms[perm2use,] m[2,] <- m[2,] + 10 m[3,] <- m[3,] + 20 return(m) # if you want to transpose the result just do: # return(t(m)) }) ```
Generating all of them is not practical since the occupied memory would be huge, thus we need to find a way to get 20 random elements in this big range of possibilities without actually keeping them in memory. The solution comes from the function `getKthPermWithRep`, which, given an index `k`, returns the k-th permutation with repetition of `r` elements taken from `1:n` (note that in this case permutation with repetition corresponds to the cartesian product). e.g. ``` # all permutations with repetition of 2 elements in 1:3 are permutations(n = 3, r = 2,repeats.allowed = TRUE) # [,1] [,2] # [1,] 1 1 # [2,] 1 2 # [3,] 1 3 # [4,] 2 1 # [5,] 2 2 # [6,] 2 3 # [7,] 3 1 # [8,] 3 2 # [9,] 3 3 # using the getKthPermWithRep you can get directly the k-th permutation you want : getKthPermWithRep(k=4,n=3,r=2) # [1] 2 1 getKthPermWithRep(k=8,n=3,r=2) # [1] 3 2 ``` Hence now we just choose 20 random indexes in the range `1:720^3` (using the `sample.int` function), then for each of them we get the corresponding permutation of 3 numbers taken from `1:720` using function `getKthPermWithRep`. Finally these triplets of numbers can be converted to actual card rows by using them as indexes to subset `allperms` and get our final matrix (after, of course, adding `+10` and `+20` to the 2nd and 3rd row). --- # Bonus ## Explanation of getKthPermWithRep If you look at the example above (permutations with repetition of 2 elements in 1:3), and subtract 1 from all numbers of the results you get this : ``` > permutations(n = 3, r = 2,repeats.allowed = T) - 1 [,1] [,2] [1,] 0 0 [2,] 0 1 [3,] 0 2 [4,] 1 0 [5,] 1 1 [6,] 1 2 [7,] 2 0 [8,] 2 1 [9,] 2 2 ``` If you consider each number of each row as a digit, you can notice that those rows (00, 01, 02...) are all the numbers from 0 to 8, represented in base 3 (yes, 3 as n). So, when you ask for the k-th permutation with repetition of `r` elements in `1:n`, you are also asking to translate `k-1` into base `n` and return the digits increased by `1`. 
Therefore, given the algorithm to change any number from base 10 to base n : ``` changeBase <- function(num,base){ v <- NULL while ( num != 0 ) { remainder = num %% base # assume K > 1 num = num %/% base # integer division v <- c(remainder,v) } if(is.null(v)){ return(0) } return(v) } ``` you can easily obtain `getKthPermWithRep` function.
One 3x3 matrix with the desired value range can be generated with the following code: ``` mat <- matrix(c(sample(1:10,3), sample(11:20,3), sample(21:30, 3)), nrow=3) ``` Furthermore, you can use a for loop to build a list of 20 such matrices (note this does not by itself guarantee uniqueness across cards): ``` mats <- list() for (i in 1:20) { mats[[i]] <- matrix(c(sample(1:10,3), sample(11:20,3), sample(21:30,3)), nrow=3) print(mats[[i]]) } ```
Create 20 unique bingo cards
[ "", "sql", "r", "excel", "" ]
I'm fairly new to SQL so this may be simple, but I'm trying to write a script in SQL that will give me a set of data without the first or last result of the query. I can find lots on how to remove the first result and how to remove the last result, but not both. This is my query so far: ``` SELECT * FROM itinerary Where ID = 'A1234' ORDER BY DateTime ASC ``` I want to remove the first and the last record of that select based on the DateTime.
This may not be the most performant way to do this, but you didn't give any schema info, and it looks like your ID column is not unique. It would be easier if you had a primary key to work with. ``` SELECT * FROM itinerary WHERE ID = 'A1234' AND DateTime < (SELECT MAX(DateTime) FROM itinerary WHERE ID = 'A1234') AND DateTime > (SELECT MIN(DateTime) FROM itinerary WHERE ID = 'A1234') ORDER BY DateTime ASC ``` This will basically select every record where the ID is A1234 and the DateTime doesn't equal the max or min datetime. Please note, if you have multiple records with the same value for DateTime and that also happens to be the min or max value, you might exclude more than just the first or last. This might be good enough though. If not, you might need to write a stored procedure and not just straight ANSI SQL.
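The min/max-exclusion query above can be exercised against an in-memory SQLite database via Python's `sqlite3` (a sketch with made-up rows, since the question gives no sample data).

```python
import sqlite3

# Hypothetical itinerary rows for ID 'A1234', one per day.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE itinerary (ID TEXT, DateTime TEXT)")
conn.executemany("INSERT INTO itinerary VALUES (?, ?)", [
    ("A1234", "2023-01-01"), ("A1234", "2023-01-02"),
    ("A1234", "2023-01-03"), ("A1234", "2023-01-04"),
])

# Keep only rows strictly between the earliest and latest DateTime.
middle = [r[1] for r in conn.execute("""
SELECT ID, DateTime FROM itinerary
WHERE ID = 'A1234'
  AND DateTime < (SELECT MAX(DateTime) FROM itinerary WHERE ID = 'A1234')
  AND DateTime > (SELECT MIN(DateTime) FROM itinerary WHERE ID = 'A1234')
ORDER BY DateTime
""")]
print(middle)  # ['2023-01-02', '2023-01-03']
```

The first and last dates are dropped; as the answer warns, duplicate min or max DateTime values would drop more than one row each.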
Try this: ``` select * from (select a.*, row_number() over (order by DateTime desc) as rnm from itinerary a where ID = 'A1234') x where rnm <> 1 and rnm <> (select max(rnm) from (select row_number() over (order by DateTime desc) as rnm from itinerary where ID = 'A1234') y) ``` Row number 1 is the latest DateTime and the maximum row number is the earliest, so excluding both removes the first and last records.
Excluding first and last result from sql query
[ "", "mysql", "sql", "" ]
I plan to create a database that consists of these attributes 1. id 2. name 3. job 4. work\_schedule For example, I want to input Mike the Janitor, who works every Wednesday, Thursday and Sunday. How do I input this into SQL effectively? I've tried to use an array (for example work\_schedule = [3,4,7]). But is there any method that is easier?
Use bitwise operations. Use these constant values ``` Monday = 1 Tuesday = 2 Wednesday = 4 Thursday = 8 Friday = 16 Saturday = 32 Sunday = 64 ``` Then the work schedule for days 3, 4 and 7 (Wednesday, Thursday, Sunday) will be ``` SET workschedule = 4 + 8 + 64; ``` And a select to get jobs on Wednesday will be ``` SELECT * FROM YourTable WHERE workschedule & 4 > 0 ```
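The same bit-flag encoding is easy to try outside the database; this sketch in Python mirrors the constants above and the `workschedule & 4 > 0` test from the SQL.

```python
# Day-of-week bit flags: Monday = 1, Tuesday = 2, ..., Sunday = 64.
MON, TUE, WED, THU, FRI, SAT, SUN = (1 << i for i in range(7))

# Mike the Janitor works Wednesday, Thursday and Sunday.
mike = WED | THU | SUN  # 4 + 8 + 64 = 76, the value stored in workschedule

def works_on(schedule: int, day: int) -> bool:
    # The SQL predicate `workschedule & day > 0` in Python form.
    return schedule & day != 0

print(works_on(mike, WED), works_on(mike, FRI))  # True False
```

A single integer column then holds the whole weekly schedule, and any day (or combination of days) can be tested with one AND operation.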
This is actually an interesting question. There are a handful of methods. I can readily think of three, any of which might be appropriate given the circumstances. * Have a separate table `WorkSchedule` for each possible combination of days when someone could work. * Have a separate table of `WorkerDays` that has a separate row for each worker and each day when s/he could work. * Store the information in one row. The middle one is the most "SQL-like" in the sense that it is normalized, and should be flexible for most needs. The third alternative seems to be the path you are going down. A typical method is to store a separate flag for each day: `MondayFlag`, `TuesdayFlag`, etc. An alternative method is to store the flags within a single column, using bit-masks to identify the information you want. Of course, this depends on the bit-fiddling capabilities of the database you are working with. The actual choice of how to model the data depends on how it will be used. You need to think about the types of questions that will be asked about work days.
Input multiple data into SQL column
[ "", "sql", "" ]
I am trying to create a SQL database query where I need to get the mark of a student. More specifically: I need to JOIN all the way through mark, subject, studentsubject, student because I cannot just select all the marks, only marks from a specific subject, for a specific student. Any ideas? So far I have only managed a rough query that wrongly selects all the marks. I will be thankful for any help. [database diagram](http://prntscr.com/8uvru3). My original query: ``` SELECT Value FROM Mark JOIN Subject ON Mark.SubjectID = Subject.ID JOIN StudentSubject ON StudentSubject.subjectID = Subject.ID JOIN Student ON StudentSubject.studentID = Student.ID WHERE Student.NameStudent = 'Mira' ```
You can do joins like this and modify your filtering as needed: ``` select st.namestudent, st.surname, c.nameclass, su.namesubject, m.value from studentsubject ss inner join student st on st.id = ss.studentid inner join subject su on su.id = ss.subjectid left join mark m on ss.studentid = m.studentid and ss.subjectid = m.subjectid left join class c on c.id = st.classid where st.namestudent = 'Mira' and su.namesubject = 'Science' and c.classname = '10A' ```
``` SELECT * FROM Student St, Mark M, Subject Su WHERE St.ID = M.StudentID AND M.SubjectID = Su.ID AND St.NameStudent = 'me' AND Su.NameSubject = 'sql'; ``` I'd be wary of a relationship diagram with loops. Shouldn't the mark be in the Student[Takes]Subject table, or do you really need StudentSubject?
SQL data query to join tables
[ "", "sql", "" ]
I have two tables that are in a "one to many" relation. ``` TblProjects ProjectID ......... TblCustomers ProjectID Number ......... ``` How can I get all `ProjectIDs` for which **all** `Customers` satisfy this condition ``` Number % 100 = 0 ```
A general solution is to use `NOT EXISTS` with a reverse condition (`<>` instead of `=`): ``` SELECT DISTINCT p.ProjectID FROM TblProjects p INNER JOIN TblCustomers ct ON ct.ProjectID = p.ProjectID WHERE NOT EXISTS (SELECT 1 FROM TblCustomers c WHERE c.ProjectID = p.ProjectID AND (Number % 100) <> 0) ``` Here's a [SQLFiddle](http://sqlfiddle.com/#!3/15af6/4). --- Alternatively, specific for this use case, you can use a cleaner query: ``` SELECT p.ProjectID FROM TblProjects p INNER JOIN TblCustomers ct ON ct.ProjectID = p.ProjectID GROUP BY p.ProjectID HAVING MAX(ct.Number % 100) = 0 ``` Here's a [SQLFiddle](http://sqlfiddle.com/#!3/e800e/1). --- P.S. if you *only* need `ProjectID`, you don't need to join anything at all, just use `TblCustomers` directly.
You can just use inner join ``` Select * from tblProjects pro inner join tblCustomers cst on pro.projectID = cst.ProjectID and cst.Number % 100 = 0 ``` It will give you what you asked for
How to select parent rows where all children satisfy a condition in SQL?
[ "", "sql", "sql-server-2008", "t-sql", "" ]
Is there a way for me to do this using a query or a stored procedure? Example Table: ``` ID TYPE TIMESTAMP QTY P12345.1 A 2015-10-22 90 P12345.2 A 2015-10-22 0 P12001.1 A 2015-10-22 87 P12345.3 A 2015-10-23 92 P19000.1 B 2015-10-23 75 ``` I want to only select the rows provided that they have the same prefix in the ID (characters prior to period (.)), and they have the same type and same timestamp. In the example above, 3 rows have the same prefix: P12345.1, P12345.2 and P12345.3. However, only P12345.1 and P12345.2 have the same timestamp so I will be selecting the row of P12345.1 and not P12345.2. This should be the resulting table: ``` ID TYPE TIMESTAMP QTY P12345.1 A 2015-10-22 90 P12001.1 A 2015-10-22 87 P12345.3 A 2015-10-23 92 P19000.1 B 2015-10-23 75 ``` I'm really having a hard time solving this and I need to accomplish this using a query or a stored procedure. Thank you in advance. Would really appreciate your help.
Keep only the row with the smallest id within each prefix/timestamp group: ``` select ID, TYPE, TIMESTAMP, QTY from tablename t1 where not exists (select 1 from tablename t2 where LEFT(t2.id, 6) = LEFT(t1.id, 6) and t2.TIMESTAMP = t1.TIMESTAMP and t2.id < t1.id) ```
Try this, ``` SELECT ID ,TYPE ,TIMESTAMP ,QTY FROM PrefixTable t1 WHERE NOT EXISTS ( SELECT 1 FROM PrefixTable t2 WHERE SUBSTRING(t2.id ,1 ,6) = SUBSTRING(t1.id ,1 ,6) AND t2.TIMESTAMP = t1.TIMESTAMP AND t2.id < t1.id ) ```
Is there a way to select rows based on data row as criteria?
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a data table with `MobileNo` and `RequestTime`; there are two rows with a mobile no and a date. I want to get all records with the current date: ``` SELECT * FROM Requester WHERE (Convert(datetime, RequestTime, 103)) = GETDATE() ``` I have the datetime as a string, so I need to convert it to `DateTime` in SQL, but the result is still null. Have a look at the code and result in SQL Server: [![enter image description here](https://i.stack.imgur.com/IwlUT.png)](https://i.stack.imgur.com/IwlUT.png)
`SELECT CONVERT(DATETIME, '21/10/2015', 103)` returns `2015-10-21 00:00:00.000`, while `SELECT GETDATE()` returns the current date *and time*, e.g. `2015-10-20 16:42:01.154`, so a direct equality comparison never matches. You can try your code like this: ``` WHERE CONVERT(DATETIME, RequestTime, 103) = CONVERT(DATETIME, CAST(GETDATE() AS DATE), 103) ```
Please try this: ``` SELECT REQUESTERMOBILE,REQUESTTIME FROM REQUESTER WHERE CONVERT(DATE,CONVERT(DATETIME,REQUESTTIME),103) = CONVERT(DATE,GETDATE(),103) ```
How to filter rows by the current date in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a dataframe (df) with three columns (a, b, c). I am inserting these values into a SQL database using ``` df <- data.frame(a=1:10, b=10:1, c=11:20) values <- paste("(",df$a,",", df$b,",",df$c,")", sep="", collapse=",") cmd <- paste("insert into MyTable values ", values) result <- sqlQuery(con, cmd, as.is=TRUE) ``` Source: [How to insert a dataframe into a SQL Server table?](https://stackoverflow.com/questions/14334840/how-to-insert-a-dataframe-into-a-sql-server-table) My question is: what is the update equivalent of this example? How can I use ``` cmd <- paste("update MyTable values ", values) result <- sqlQuery(con, cmd, as.is=TRUE) ``` I tried sqlSave and sqlUpdate, but they got sticky too soon, with error messages about missing columns, indexes etc., so I am falling back to the example above and trying to learn how to use update statements on that toy dataset. Any help is much appreciated, folks.
So, firstly, you have the wrong syntax for `UPDATE`. In general, ``` UPDATE table_name SET column1 = value1, column2 = value2, ..., columnN = valueN WHERE [condition]; ``` so you can't build up the values as a concatenated vector as you have done. If you don't select a particular row with the `WHERE`, you will update `column1` to `value1` across **all** rows. EDIT: If you can't match the condition, then you aren't actually updating, you are inserting, which can be written in two forms. The first form does not specify the column names where the data will be inserted, only their values: ``` INSERT INTO table_name VALUES (value1,value2,value3,...); ``` The second form specifies both the column names and the values to be inserted: ``` INSERT INTO table_name (column1,column2,column3,...) VALUES (value1,value2,value3,...); ``` If you want to do anything more complicated, you will need to build up the query in SQL, probably in something other than R first, at least to learn. A possibility for experimentation could be [SQL fiddle](http://sqlfiddle.com/) if you aren't comfortable with SQL yet.
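To make this concrete for the data frame in the question (columns a, b, c; the assumption here is that column a identifies the row to change), the statement generated for one row might look like:

```sql
-- Hypothetical per-row UPDATE built from the data frame:
-- sets b and c on the row whose key column a is 1.
UPDATE MyTable
SET b = 10,
    c = 11
WHERE a = 1;
```

In R you would then build one such string per data-frame row and send each with sqlQuery, rather than collapsing all rows into a single statement.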
I know this question was posted over 4 years ago but I hope this will help out other userRs who are searching for an answer to this. ``` table <- [NAME OF THE TABLE YOU WANT TO UPDATE] x <- [YOUR DATA SET] # We'll need the column names of the table for our INSERT/UPDATE statement rs <- dbSendQuery(con, paste0('SHOW COLUMNS FROM ', table, ';')) col_names <- dbFetch(rs) dbClearResult(rs) # Find which columns are primary keys as we shouldn't need to update these pri <- which(col_names$Key == "PRI") # For each row in your data frame, build an UPDATE statement and query your db for(i in 1:nrow(x)) { # Transform ith row of dataset into character vector values <- sapply(x[i, ], as.character) # Build the INSERT/UPDATE query myquery <- paste0("INSERT INTO ", table, "(", paste(col_names$Field, collapse = ", "), ") ", # column names "VALUES", "('", paste(values, collapse = "', '"), "') ", # new records "ON DUPLICATE KEY UPDATE ", paste(col_names$Field[-pri], values[-pri], sep = " = '", collapse = "', "), # everything minus primary keys "';") # Show full query for clarity cat("Performing query", i, "of", nrow(x), ":\n", myquery, "\n\n") # Send query to database dbSendQuery(con, myquery) } ``` I just posted my solution to this problem on [GitHub](https://github.com/datawranglerai/r-mysql) if you're looking for a more guided walkthrough.
sql update data from r dataframe not insert
[ "", "mysql", "sql", "sql-server", "r", "rodbc", "" ]
table1: ``` | id | int(11) | name | varchar(255) | type | varchar(255) | property1 | varchar(255) | property2 | varchar(255) | property3 | varchar(255) ``` table2: ``` | id | int(11) | name | varchar(255) | type_level | varchar(255) | property11 | varchar(255) | property12 | varchar(255) .... | property33 | varchar(255) ``` These tables basically represent the same entity. table1 is like a short list of properties, table2 contains more information. The `name` property has the same value in both tables for any given entity, **not** the ID; `type` and `type_level` contain the same value but are unfortunately just named differently. I just started this project and I think this situation isn't very good, I'd like to merge the tables. I'd like to merge the tables to create this table: table\_merged ``` | id | int(11) | name | varchar(255) | type | varchar(255) | property1 | varchar(255) | property2 | varchar(255) | property3 | varchar(255) | property11 | varchar(255) | property12 | varchar(255) | property13 | varchar(255) .... | property33 | varchar(255) ``` The resulting table should confer a unique ID to every row, and the rows with the same `name` should be joined in one single row.... I hope I explained it well enough. How can I accomplish this? I tried `INNER JOIN` and `UNION` but so far not in the correct way.
Assuming there are no duplicate names in either table and that they match, you can do what you want with a `join`: ``` create table table_merged as select (@rn := @rn + 1) as id, t1.name, t1.type, t1.property1, t1.property2, t1.property3, t2.property11, t2.property12, t2.property13, . . . from table1 t1 left join table2 t2 on t1.name = t2.name and t1.type = t2.type_level cross join (select @rn := 0) params; ``` This version adds yet another new id for the merged table.
Create a new table, assign `AUTO_INCREMENT` to the id and just insert-select: ``` insert into table_merged (name, property1, ...) select a.name, a.property1, a.property2, a.property3, b.property11, ... from table1 a left join table2 b on a.name = b.name ```
Merging two tables in MySQL
[ "", "mysql", "sql", "" ]
I have this query: ``` SELECT Column1, Column2, Column3, /* computed column */ AS SortColumn FROM Table1 ORDER BY SortColumn ``` `SortColumn` serves no other purpose as to define an order for sorting the result set. Thus I'd like to omit it in the result set to decrease the size of the data sent to the client. The following fails … ``` SELECT Column1, Column2, Column3 FROM ( SELECT Column1, Column2, Column3, /* computed column */ AS SortColumn FROM Table1 ORDER BY SortColumn ) AS SortedTable1 ``` … because of: > Msg 1033, Level 15, State 1 > > The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified. So there's this hacky solution: ``` SELECT Column1, Column2, Column3 FROM ( SELECT TOP /* very high number */ Column1, Column2, Column3, /* computed column */ AS SortColumn FROM Table1 ORDER BY SortColumn ) AS SortedTable1 ``` Is there a clean solution I'm not aware of, since this doesn't sound like a rare scenario? --- **Edit:** The solutions already given work indeed fine for the query I referred to. Unfortunately, I left out an important detail: The (already existent) query consists of two `SELECT`s with a `UNION` in between, which changes the matter pretty much (again simplified, and hopefully not too simplified): ``` SELECT Column1, Column2, Column3 FROM Table1 UNION ALL SELECT Column1, Column2, Column3 FROM Table1 ORDER BY /* computed column */ ``` > Msg 104, Level 16, State 1 > > ORDER BY items must appear in the select list if the statement contains a UNION, INTERSECT or EXCEPT operator. So this error message clearly says that I have to put the computed column in both of the select lists. So there we are again with the subquery solution which doesn't reliably work, as pointed out in the answers.
If, for whatever reason, it's not practical to do the calculation in the `ORDER BY`, you can do something quite similar to your attempt: ``` SELECT Column1, Column2, Column3 FROM ( SELECT Column1, Column2, Column3, /* computed column */ AS SortColumn FROM Table1 ) AS SortedTable1 ORDER BY SortColumn ``` Note that all that's changed here is that the `ORDER BY` is applied to the outer query. It's perfectly valid to reference columns in the `ORDER BY` that don't appear in the `SELECT` clause.
You don't need to have a computed column in the select statement to use it in an order by ``` SELECT Column1, Column2, Column3 FROM Table1 ORDER BY /* computed column */ ``` If you need to do it using UNION, then do the UNION in a cte, and the order by in the select, making sure to include all the columns you need to do the calculation in the CTE ``` WITH src AS ( SELECT Column1, Column2, Column3, /* computation */ ColumnNeededForOrderBy FROM Table1 UNION ALL SELECT Column1, Column2, Column3, /* computation */ ColumnNeededForOrderBy FROM Table2 ) SELECT Column1, Column2, Column3 FROM src ORDER BY ColumnNeededForOrderBy ``` If you don't care to be specific with the column name, you can use the column index and skip the CTE. I don't like this because you might add a column to the query later and forget to update the index in the ORDER BY clause (I've done it before). Also, the query plans will likely be the same, so it's not like the CTE will cost you anything. ``` SELECT Column1, Column2, Column3, /* computation */ FROM Table1 UNION ALL SELECT Column1, Column2, Column3, /* computation */ FROM Table2 ORDER BY 4 ```
Is it possible to ORDER BY a computed column without including it in the result set?
[ "", "sql", "sql-server", "" ]
I have a MySQL query that is super simple, but I'm having an issue and wondering if someone can shed some light. All I am trying to do is include two aggregate functions that add "this\_year" + "last\_year" and at the same time filter out any results with less than 200 total\_votes. Currently, the query works and the output looks like this: ``` name | total_votes -------------------- apple | 119 lemon | 218 orange | 201 pear | 111 ``` --- However when I add a where statement I get a syntax error: ``` select name, sum(this_year)+sum(last_year) as total_votes from fruit_sales group by name where total_votes>200 ``` The above results in this syntax error in my SQL fiddle: ``` "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'where total>200' at line 6" ``` I've also tried: ``` select name, sum(this_year)+sum(last_year)>200 as total_votes from fruit_sales group by name ``` Here is an SQLfiddle with the table and my query in the works: <http://sqlfiddle.com/#!9/a6862/11> Any help here would be greatly appreciated!
``` SELECT name, SUM(this_year)+SUM(last_year) as total_votes FROM fruit_sales GROUP BY name HAVING SUM(this_year)+SUM(last_year) > 200 ``` `SqlFiddleDemo` You can also calculate sum as: ``` SELECT name, SUM(this_year + last_year) as total_votes FROM fruit_sales GROUP BY name HAVING total_votes > 200; ``` `SqlFiddleDemo2` For `@lcm` without `HAVING` and using subquery: ``` SELECT * FROM ( SELECT name, SUM(this_year + last_year) as total_votes FROM fruit_sales GROUP BY name) AS sub WHERE total_votes > 200; ``` `SqlFiddleDemo3`
You can also use the field name (the column alias) from the select: ``` select name, sum(this_year)+sum(last_year) as total_votes from fruit_sales group by name having total_votes>200; ```
MySQL query with less than or greater than in the where statement using aggregate function?
[ "", "mysql", "sql", "" ]
I have the following table. ``` Table A: ID ProductFK Quantity Price ------------------------------------------------ 10 1 2 100 11 2 3 150 12 1 1 120 ---------------------------------------------- ``` I need a select that repeats each row N times according to the Quantity column value. So I need the following select result: ``` ID ProductFK Quantity Price ------------------------------------------------ 10 1 1 100 10 1 1 100 11 2 1 150 11 2 1 150 11 2 1 150 12 1 1 120 ```
You could do that with a recursive CTE using `UNION ALL`: ``` ;WITH cte AS ( SELECT * FROM Table1 UNION ALL SELECT cte.[ID], cte.ProductFK, (cte.[Order] - 1) [Order], cte.Price FROM cte INNER JOIN Table1 t ON cte.[ID] = t.[ID] WHERE cte.[Order] > 1 ) SELECT [ID], ProductFK, 1 [Order], Price FROM cte ORDER BY 1 ``` (Here `[Order]` corresponds to the `Quantity` column from the question.) Here's a working [SQLFiddle](http://sqlfiddle.com/#!6/d0f383/7). [Here's a longer explanation of this technique](https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx). --- Since your input is too large for this recursion, you could use an auxiliary table to have "many" dummy rows and then use `SELECT TOP([Order])` for each input row (`CROSS APPLY`): ``` ;WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1), E02(N) AS (SELECT 1 FROM E00 a, E00 b), E04(N) AS (SELECT 1 FROM E02 a, E02 b), E08(N) AS (SELECT 1 FROM E04 a, E04 b), E16(N) AS (SELECT 1 FROM E08 a, E08 b) SELECT t.[ID], t.ProductFK, 1 [Order], t.Price FROM Table1 t CROSS APPLY ( SELECT TOP(t.[Order]) N FROM E16) ca ORDER BY 1 ``` (The auxiliary table is borrowed from [here](https://stackoverflow.com/questions/10819/sql-auxiliary-table-of-numbers); it allows up to 65536 rows per input row and can be extended if required.) Here's a working [SQLFiddle](http://sqlfiddle.com/#!6/b2cb3/1).
You can use a simple `JOIN` to get the desired result as below: ``` SELECT t1.*, t2.number + 1 RepeatNumber FROM TableA t1 JOIN master.dbo.spt_values t2 ON t2.type = 'P' AND t2.number < t1.Quantity ``` The above query repeats each record by the number specified in the `Quantity` column. **Note on `master.dbo.spt_values` with `type = 'P'`:** the rows of this system table with `type = 'P'` contain a ready-made consecutive series of numbers, which is used here as a numbers table.
Repeat Rows N Times According to Column Value
[ "", "sql", "sql-server-2008", "t-sql", "repeat", "" ]
I'm building a simple friendship table with 3 columns: id, user1, user2 Once a user becomes friends with another user, their ids will be added to the table like this: ``` +----+-------+-------+ | id | user1 | user2 | +----+-------+-------+ | 1 | 15 | 9 | +----+-------+-------+ | 2 | 9 | 32 | +----+-------+-------+ ``` The above table is ok but in some cases, a user might want to become friends with a user they are already friends with, resulting in the following table: ``` +----+-------+-------+ | id | user1 | user2 | +----+-------+-------+ | 1 | 15 | 9 | +----+-------+-------+ | 2 | 9 | 32 | +----+-------+-------+ | 3 | 9 | 15 | +----+-------+-------+ ``` In this table, index 1 and 3 are in conflict with each other (2 & 3 are NOT) and I would therefore like an error returned upon insertion (duplicate entry). Is there a way to do this?
When inserting into this table, before you issue the query, you should always make sure that `user1` has the smaller of the two user IDs. Then the situation you described will never happen. You can implement this either in your application code, or as a stored procedure that you execute to insert a new relation between two users.
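A sketch of that idea (the table name `friendship` is an assumption for illustration, and MySQL's LEAST/GREATEST functions are used to order the pair): normalize the pair on insert and back it with a unique constraint, so a duplicate in either direction raises an error:

```sql
-- Enforce uniqueness of the ordered pair.
ALTER TABLE friendship ADD UNIQUE KEY uq_friend_pair (user1, user2);

-- Always store the smaller id first.
INSERT INTO friendship (user1, user2)
VALUES (LEAST(15, 9), GREATEST(15, 9));   -- stored as (9, 15)

-- Inserting the same pair in the opposite order now fails
-- with a duplicate-key error instead of creating row (15, 9).
INSERT INTO friendship (user1, user2)
VALUES (LEAST(9, 15), GREATEST(9, 15));
```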
Let me offer another perspective. You might want to keep the friends table as a reciprocal relationship. So, *both* directions would be stored in the table. The correct table would look like: ``` +----+-------+-------+ | id | user1 | user2 | +----+-------+-------+ | 1 | 15 | 9 | +----+-------+-------+ | 2 | 9 | 15 | +----+-------+-------+ | 3 | 9 | 32 | +----+-------+-------+ | 4 | 32 | 9 | +----+-------+-------+ ``` Why would you want to do such a seemingly absurd thing, doubling the size of the data? Typical queries on such data are about who is a friend of a friend, or listing all the friends of a given user. Such queries require traversing this data as a graph structure, and you need both links. Not only do such queries become much more complicated with a single row per friendship, but because subqueries (derived tables) are often involved, the query loses the ability to use indexes. With this structure, you need to be careful when inserting to insert both directions of the relationship. Then a simple `unique` constraint on the two columns ensures that duplicates are not inserted.
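A sketch of the inserts for this reciprocal design (table name assumed): both directions go in with one statement, and listing a user's friends then needs no OR or UNION:

```sql
-- Insert both directions of the friendship at once.
INSERT INTO friendship (user1, user2)
VALUES (15, 9),
       (9, 15);

-- All friends of user 9, with a single indexed lookup.
SELECT user2 AS friend_id
FROM friendship
WHERE user1 = 9;
```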
Unique value combination in MySQL
[ "", "mysql", "sql", "" ]
My script for the pizza table is: ``` use sys; create table pizzas ( name varchar(50), toppings varchar(50), price float); insert into pizzas (name, toppings, price) values ('margarita', 'cheese, tomato', 5.0); insert into pizzas (name, toppings, price) values ('hawaiian', 'cheese, tomato, ham, pineapple', 5.5 ); insert into pizzas (name, toppings, price) values ('hot veggie', 'cheese, tomato, chilli, onion', 5.5 ); ``` I'm trying to create a query to determine which pizzas contain cheese.. What i've tried is: SELECT pizzas FROM toppings WHERE toppings="$cheese$"; but it doesn't work
``` SELECT name FROM pizzas WHERE toppings LIKE '%cheese%' ``` Should do the trick.
Try the below query: ``` SELECT * FROM PIZZAS WHERE TOPPINGS LIKE '%CHEESE%' ```
Simple SQL Query workbench
[ "", "mysql", "sql", "workbench", "" ]
I need help removing standalone numeric words (whole words consisting only of digits). Example: 1. Dustun 0989 LLC 2. Dustun\_0989 LLC 3. 457 Dustun LLC 4. 457\_Dustun LLC 5. 334 Dunlop 987 Output: 1. Dustun LLC 2. Dustun\_0989 LLC 3. Dustun LLC 4. 457\_Dustun LLC 5. Dunlop
You could get it done with regular expressions. For example, something like this: ``` WITH the_table AS (SELECT 'Dustun 0989 LLC' field FROM dual UNION SELECT 'Dustun_0989 LLC' field FROM dual UNION SELECT '457 Dustun LLC' field FROM dual UNION SELECT '457_Dustun LLC' field FROM dual UNION SELECT 'Dunlop 987' field FROM dual UNION SELECT '222 333 ADIS GROUP 422 123' field FROM dual) SELECT field, TRIM(REGEXP_REPLACE(field,'((^|\s|\W)(\d|\s)+($|\s|\W))',' ')) FROM the_table ``` Note that (^|\s|\W) and ($|\s|\W) are Oracle regexp equivalent to \b, as explained in [Oracle REGEXP\_LIKE and word boundaries](https://stackoverflow.com/questions/7567700/oracle-regexp-like-and-word-boundaries) Where: * (^|\s|\W) is either the beginning of line, a blank space or a non-word character. * (\s|\d)+ is a combination of one or more digits and spaces. * ($|\s|\W) is either the end of line, a blank space or a non-word character.
No need for PL/SQL here, a simple SQL statement will do: ``` regexp_replace(the_column, '(\s[0-9]+\s)|(^[0-9]+\s)|(\s[0-9]+$)', ' ') ``` This replaces any run of digits between two whitespace characters, digits at the start of the value followed by whitespace, or whitespace followed by digits at the end of the input value. The following: ``` with sample_data (the_column) as ( select 'Dustun 0989 LLC' from dual union all select 'Dustun_0989 LLC' from dual union all select '457 Dustun LLC' from dual union all select '457_Dustun LLC' from dual union all select '334 Dunlop 987' from dual ) select regexp_replace(the_column, '(\s[0-9]+\s)|(^[0-9]+\s)|(\s[0-9]+$)', ' ') as new_value from sample_data ``` will output: ``` NEW_VALUE --------------- Dustun LLC Dustun_0989 LLC Dustun LLC 457_Dustun LLC Dunlop ``` To get rid of the leading (or trailing) spaces, use the `trim` function: `trim(regexp_replace(...))`
RegExp_Replace only numeric words in PLSQL
[ "", "sql", "regex", "oracle", "regexp-replace", "" ]
First of all, thank you for reading this. I want to do a query between 2 tables, but I don't know how to do it. I have a table called `products` and another called `product_photos`. I want to query ALL the `products` and, on every row of the result, add two fields from the table `product_photos`. The problem is that when I execute it, my query works but only shows the first matching row from `product_photos`, and I want to show every row. I got this: ``` select p.*, ps.url_little, ps.url_big from product p LEFT join product_photos ps on (p.id_prod = ps.id_product) ``` How can I do this? Do I have to use a subquery or a union? Thank you all. EDIT: example of json result: `{\"id\":\"1\",\"id_prod\":\"375843\",\"ref\":\"5943853\",\"ean\":\"894378432831283\",\"concept\":\"Portamatr\\u00edculas Barracuda\",\"description\":\"Portamatr\\u00edculas Barracuda FZ6 a\\u00f1o 2004-2008\",\"price\":\"19.99\",\"old_price\":\"25.58\",\"category\":\"Motor\",\"family\":\"Accesorio veh\\u00edculo a motor\",\"sub_family\":\"Accesorio veh\\u00edculo a motor de dos ruedas\",\"gender\":\"\",\"sub_gender\":\"\",\"photo\":\"\",\"thumbnail\":\"\",\"type\":\"1\",\"size\":\"\",\"color\":\"\",\"weave\":\"\",\"motiu\":\"\",\"material\":\"\",\"artist\":\"\",\"technique\":\"\",\"paper\":\"\",\"tittle\":\"\",\"measure\":\"\",\"edition\":\"\",\"status\":\"\",\"reference\":\"\",\"cost\":\"0\",\"url_little\":\"urllittlee kgjhdfjfd\",\"url_big\":\"url bigota\"}` As you can see, I got the two fields, url\_little and url\_big, but only from one row, and I have two rows in the table product\_photos. I want both to appear.
Second edit (I'm terrible at explaining my problems, sorry): I receive this json: `{\"id\":\"1\",\"id_prod\":\"375843\",\"ref\":\"5943853\",\"ean\":\"894378432831283\",\"concept\":\"Portamatr\\u00edculas Barracuda\",\"description\":\"Portamatr\\u00edculas Barracuda FZ6 a\\u00f1o 2004-2008\",\"price\":\"19.99\",\"old_price\":\"25.58\",\"category\":\"Motor\",\"family\":\"Accesorio veh\\u00edculo a motor\",\"sub_family\":\"Accesorio veh\\u00edculo a motor de dos ruedas\",\"gender\":\"\",\"sub_gender\":\"\",\"photo\":\"\",\"thumbnail\":\"\",\"type\":\"1\",\"size\":\"\",\"color\":\"\",\"weave\":\"\",\"motiu\":\"\",\"material\":\"\",\"artist\":\"\",\"technique\":\"\",\"paper\":\"\",\"tittle\":\"\",\"measure\":\"\",\"edition\":\"\",\"status\":\"\",\"reference\":\"\",\"cost\":\"0\",\"url_little\":\"urllittlee kgjhdfjfd\",\"url_big\":\"url bigota\"},{\"id\":\"1\",\"id_prod\":\"375843\",\"ref\":\"5943853\",\"ean\":\"894378432831283\",\"concept\":\"Portamatr\\u00edculas Barracuda\",\"description\":\"Portamatr\\u00edculas Barracuda FZ6 a\\u00f1o 2004-2008\",\"price\":\"19.99\",\"old_price\":\"25.58\",\"category\":\"Motor\",\"family\":\"Accesorio veh\\u00edculo a motor\",\"sub_family\":\"Accesorio veh\\u00edculo a motor de dos ruedas\",\"gender\":\"\",\"sub_gender\":\"\",\"photo\":\"\",\"thumbnail\":\"\",\"type\":\"1\",\"size\":\"\",\"color\":\"\",\"weave\":\"\",\"motiu\":\"\",\"material\":\"\",\"artist\":\"\",\"technique\":\"\",\"paper\":\"\",\"tittle\":\"\",\"measure\":\"\",\"edition\":\"\",\"status\":\"\",\"reference\":\"\",\"cost\":\"0\",\"url_little\":\"SISI\",\"url_big\":\"NONO\"}` and I want to receive this: `{\"id\":\"1\",\"id_prod\":\"375843\",\"ref\":\"5943853\",\"ean\":\"894378432831283\",\"concept\":\"Portamatr\\u00edculas Barracuda\",\"description\":\"Portamatr\\u00edculas Barracuda FZ6 a\\u00f1o 2004-2008\",\"price\":\"19.99\",\"old_price\":\"25.58\",\"category\":\"Motor\",\"family\":\"Accesorio veh\\u00edculo a motor\",\"sub_family\":\"Accesorio veh\\u00edculo a 
motor de dos ruedas\",\"gender\":\"\",\"sub_gender\":\"\",\"photo\":\"\",\"thumbnail\":\"\",\"type\":\"1\",\"size\":\"\",\"color\":\"\",\"weave\":\"\",\"motiu\":\"\",\"material\":\"\",\"artist\":\"\",\"technique\":\"\",\"paper\":\"\",\"tittle\":\"\",\"measure\":\"\",\"edition\":\"\",\"status\":\"\",\"reference\":\"\",\"cost\":\"0\",\"url_little\":\"urllittlee kgjhdfjfd , SISI\",\"url_big\":\"url bigota, NONO\"}` As you can see, the fields url\_little and url\_big now contain all the results, not only the first one. Thank you!
Judging by the final edit, I think this is what you want. It uses `GROUP_CONCAT` to join all the values in the linked table together. ``` select p.*, group_concat(ps.url_little SEPARATOR ', '), group_concat(ps.url_big SEPARATOR ', ') from product p LEFT join product_photos ps on (p.id_prod = ps.id_product) group by p.id_prod ```
You could use the FULL OUTER JOIN keyword (note that MySQL itself does not support FULL OUTER JOIN, so there it has to be emulated with a UNION of two outer joins). In your case that would mean something like this: ``` SELECT p.*, ps.url_little, ps.url_big FROM product p FULL OUTER JOIN product_photos ps ON (p.id_prod = ps.id_product); ``` This will leave you with the full array of photos of a product.
sql join or subquery?
[ "", "mysql", "sql", "join", "union", "" ]
I'm using Oracle and I want to turn the result from a select count into a "binary" 0/1 value ... 0 = 0 ... non-zero = 1. From what I read online, in MS SQL, you can cast it to a "bit" but Oracle doesn't appear to support that. Here's my simple example query (the real query is much more complex). I want MATCH\_EXISTS to always be 0 or 1. Is this possible? ``` select count(*) as MATCH_EXISTS from MY_TABLE where MY_COLUMN is not null; ```
This should be fastest... get at most one row. ``` SELECT COUNT(*) AS MATCH_EXISTS FROM MY_TABLE WHERE MY_COLUMN IS NOT NULL AND rownum <= 1; ```
If you use an `exists` clause this should be faster for large tables because Oracle doesn't need to scan the whole table. As soon as there is one row, it can stop retrieving it: ``` select count(*) as match_exists from dual where exists (select * from my_table where my_column is not null); ```
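If you want the 0/1 conversion to be explicit no matter how the counting is done, a CASE expression can wrap either approach; this variant combines it with the ROWNUM trick from the other answer:

```sql
-- Returns exactly 0 or 1; stops after the first matching row.
SELECT CASE WHEN COUNT(*) > 0 THEN 1 ELSE 0 END AS match_exists
FROM my_table
WHERE my_column IS NOT NULL
  AND ROWNUM <= 1;
```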
Oracle SQL - convert select count(*) into zero or one
[ "", "sql", "oracle", "count", "binary", "boolean", "" ]
I have the following SQL query: ``` select * from a inner join b on b.id in (select c.id from c where c.someid = a.someid) or a.someid = b.id ``` This is working as expected, but the execution time is bad (10 seconds for 4 rows). I tried many alternatives, but their results are different. I'm having a hard time getting rid of the IN statement.
Thank you for the answers. I learned a lot. Unfortunately, the `EXISTS` didn't work for my case. I used `UNION` and the execution time is now 2 seconds. ``` select * from a inner join b on b.id in (select c.id from c where c.someid = a.someid) union select * from a inner join b on b.someid = a.id ```
Your query looks fine. Either `b.id` matches `a.someid`, or we must look up `c` entries for that `a.someid`. There is not much we can do about this; it is simply laborious to have to look in two places. There should be indexes on all IDs involved of course, but it would also be advisable to have a composite index on `c(someid,id)` for a quicker lookup. Apart from that, you can try with `EXISTS` instead of `IN`. One would expect the two to result in about the same execution plan, but some DBMS handle `EXISTS` better than `IN` for some reason. ``` select * from a inner join b on b.id = a.someid or exists ( select * from c where c.someid = a.someid and c.id = b.id ) ```
In statement alternative
[ "", "sql", "sql-server", "" ]
I have 5 tables. I want to get common users in table 1, 2 and 3 that are not in table 4 and 5. Can someone please help me :) **Tables** ``` table1(userid,discount) table2(userid,discount) table3(userid,discount) table4(userid,discount) table5(userid,discount) ```
One way: left join the tables whose rows you want to omit, and keep only the rows with no match: ``` select * from table1 a join table2 b on (a.userid = b.userid) join table3 c on (a.userid = c.userid) left join table4 d on (a.userid = d.userid) left join table5 e on (a.userid = e.userid) where d.userid is null and e.userid is null; ```
Getting the users common to tables 1, 2, 3 is easy -- just do an inner join. To get all of those users which are not in tables 4 or 5, you could test for their non-existence in these tables in the `where` clause. ``` select * from table1 join table2 on table1.userid = table2.userid join table3 on table1.userid = table3.userid where not exists (select * from table4 where table4.userid = table1.userid) and not exists (select * from table5 where table5.userid = table1.userid) ```
Find common users in multiple tables in SQL
[ "", "mysql", "sql", "" ]
``` rN rD rnc d expectedResult abc1m 2010-03-31 abc 5.7 5.7 + 1.7 + 9.6 abc3m 2010-03-31 abc 5.7 5.7 + 1.7 + 9.6 abc1y 2010-03-31 abc 5.7 5.7 + 1.7 + 9.6 xfx1m 2010-03-31 xfx 1.7 5.7 + 1.7 + 9.6 xfx3m 2010-03-31 xfx 1.7 5.7 + 1.7 + 9.6 xfx1y 2010-03-31 xfx 1.7 5.7 + 1.7 + 9.6 tnt1m 2010-03-31 tnt 9.6 5.7 + 1.7 + 9.6 tnt3m 2010-03-31 tnt 9.6 5.7 + 1.7 + 9.6 tnt1y 2010-03-31 tnt 9.6 5.7 + 1.7 + 9.6 ------------------------------------ abc1m 2010-04-01 abc 2.2 2.2 + 8.9 + 5.5 abc3m 2010-04-01 abc 2.2 2.2 + 8.9 + 5.5 abc1y 2010-04-01 abc 2.2 2.2 + 8.9 + 5.5 xfx1m 2010-04-01 xfx 8.9 2.2 + 8.9 + 5.5 xfx3m 2010-04-01 xfx 8.9 2.2 + 8.9 + 5.5 xfx1y 2010-04-01 xfx 8.9 2.2 + 8.9 + 5.5 tnt1m 2010-04-01 tnt 5.5 2.2 + 8.9 + 5.5 tnt3m 2010-04-01 tnt 5.5 2.2 + 8.9 + 5.5 tnt1y 2010-04-01 tnt 5.5 2.2 + 8.9 + 5.5 ``` **expectedResult** is, for each date, the sum of d taken once per distinct rnc. How can I achieve this? I would like to use something like the code below, but it doesn't work. ``` select *, sum (d) over (partition by rD, distinct rnc) as expectedResult from myTable where ...--some condition order by ...--order by some columns ``` Using SQL Server 2012, thanks. **edit:** Regarding the question being on hold: how is this unclear? If one only looks at the column `expectedResult`, isn't it quite clear? What should I add in order to make it better? And every rnc has a d; just assume every set is of the form given in the example (answering one comment).
Since the last 2 characters of the first column repeat, and that is effectively the partition you are summing over, partition by that suffix and the date — give it a go and let me know if that's what you asked for ``` create table #TempTable (rn nvarchar(10), rD date, rnc nvarchar(10), d decimal(5,2)) insert into #TempTable (rn, rD, rnc, d) values ('abc1m','2010-03-31','abc', 5.7), ('abc3m','2010-03-31','abc', 5.7), ('abc1y','2010-03-31','abc', 5.7), ('xfx1m','2010-03-31','xfx', 1.7), ('xfx3m','2010-03-31','xfx', 1.7), ('xfx1y','2010-03-31','xfx', 1.7), ('tnt1m','2010-03-31','tnt', 9.6), ('tnt3m','2010-03-31','tnt', 9.6), ('tnt1y','2010-03-31','tnt', 9.6), ------------------------------------ ('abc1m','2010-04-01','abc', 2.2), ('abc3m','2010-04-01','abc', 2.2), ('abc1y','2010-04-01','abc', 2.2), ('xfx1m','2010-04-01','xfx', 8.9), ('xfx3m','2010-04-01','xfx', 8.9), ('xfx1y','2010-04-01','xfx', 8.9), ('tnt1m','2010-04-01','tnt', 5.5), ('tnt3m','2010-04-01','tnt', 5.5), ('tnt1y','2010-04-01','tnt', 5.5) select rn, rD, rnc, d, SUM(d) over (partition by right(rn,2), rD) as 'Sum' from #TempTable order by Rd ```
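Where the suffix trick isn't available, a portable way to get "sum of one `d` per distinct `rnc` per date" is to de-duplicate first and join the per-date total back on. A runnable sketch using Python's built-in sqlite3 (not part of the original answer; it assumes, as the question states, that every `rnc` has a single `d` per date):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (rn TEXT, rD TEXT, rnc TEXT, d REAL)")
cur.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    ("abc1m", "2010-03-31", "abc", 5.7),
    ("abc3m", "2010-03-31", "abc", 5.7),
    ("xfx1m", "2010-03-31", "xfx", 1.7),
    ("tnt1m", "2010-03-31", "tnt", 9.6),
])

rows = cur.execute("""
    SELECT t.rn, t.rD, s.expectedResult
    FROM t
    JOIN (SELECT rD, SUM(d) AS expectedResult
          FROM (SELECT DISTINCT rD, rnc, d FROM t)
          GROUP BY rD) s ON s.rD = t.rD
    ORDER BY t.rn
""").fetchall()
# every row for 2010-03-31 carries 5.7 + 1.7 + 9.6 = 17.0
```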
Here we use cte to group rows that are the same together. This way, we can sum only the first row of each group in the select. ``` ;WITH cte AS ( SELECT *, GroupRowIndex = ROW_NUMBER() OVER (PARTITION BY rateDate, rnc, d ORDER BY (SELECT 1)) FROM myTable ) SELECT *, expectedResult = SUM(d) OVER (PARTITION BY rateDate) FROM cte WHERE GroupRowIndex = 1 AND ...--some condition ORDER BY ...--order by some columns ```
SQL SERVER - sum over partition by distinct
[ "sql", "sql-server", "t-sql", "sql-server-2012", "window-functions" ]
How can I group a number of rows into a set of columns based on the grouping criteria? For example, ``` ID Type Total ============================== 36197 Deduction -9 36200 Deduction -1 36337 Deduction 1 36363 Deduction 0 36364 Deduction 0 36200 Safety -1 36342 Safety 0 36350 Safety 10 36363 Safety 0 36364 Safety 1 ``` Into ``` ID Deduction Safety ========================================== 36197 -9 0 36200 -1 -1 36337 1 0 36363 0 0 36364 0 1 36342 0 0 36350 0 10 ```
You can use case statements to conditionally aggregate: ``` select id, sum(case when type = 'Deduction' then total else 0 end) as deduction, sum(case when type = 'Safety' then total else 0 end) as safety from tbl group by id ```
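A runnable sketch of conditional aggregation, using Python's built-in sqlite3 with a subset of the question's rows (the harness is illustrative, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl (id INTEGER, type TEXT, total INTEGER)")
cur.executemany("INSERT INTO tbl VALUES (?,?,?)", [
    (36197, "Deduction", -9),
    (36200, "Deduction", -1),
    (36200, "Safety", -1),
    (36350, "Safety", 10),
])

pivoted = cur.execute("""
    SELECT id,
           SUM(CASE WHEN type = 'Deduction' THEN total ELSE 0 END) AS deduction,
           SUM(CASE WHEN type = 'Safety'    THEN total ELSE 0 END) AS safety
    FROM tbl
    GROUP BY id
    ORDER BY id
""").fetchall()
print(pivoted)  # [(36197, -9, 0), (36200, -1, -1), (36350, 0, 10)]
```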
``` SELECT DISTINCT ID, (SELECT TOTAL FROM TABLE AS A WHERE A.ID = X.ID AND A.TYPE = 'DEDUCTION') AS DEDUCTION,(SELECT TOTAL FROM TABLE AS B WHERE B.ID = X.ID AND B.TYPE = 'SAFETY') AS SAFETY FROM TABLE AS X; ```
Show grouped rows in columns
[ "sql", "sql-server", "t-sql", "grouping" ]
I have a situation here to get data(i.e. fname, lname and count(jobid)) from two different table comparing their jobid, deliverymanid, pickupmanid and employeeid from job and employee table and combine it in one row. This is the job table ``` jobid pickupmanid deliverymanid ----- ---------------- ------------------ 1 1 2 2 2 2 3 1 1 ``` This is the employee table ``` employeeid fname lname ------------ ----------- ------------- 1 ABC XYZ 2 LMN OPR ``` This is how i should get output ``` employeeid totalpickupjobs totaldeliveryjobs fname lname ---------- --------------- ----------------- ----------- ----------- 1 2 1 ABC XYZ 2 1 2 LMN OPR ```
Try this: ``` SELECT e.employeeid, (SELECT COUNT(*) FROM jobtable j WHERE j.pickupmanid = e.employeeid) as totalpickupjobs, (SELECT COUNT(*) FROM jobtable j WHERE j.deliverymanid = e.employeeid) as totaldeliveryjobs, e.fname, e.lname FROM employeetable e ``` Go [**Sql Fiddle**](http://sqlfiddle.com/#!9/7f2d36/1/0)
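The correlated-subquery counts can be verified against the question's sample data with Python's built-in sqlite3 (illustrative harness, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE job (jobid INTEGER, pickupmanid INTEGER, deliverymanid INTEGER)")
cur.execute("CREATE TABLE employee (employeeid INTEGER, fname TEXT, lname TEXT)")
cur.executemany("INSERT INTO job VALUES (?,?,?)", [(1, 1, 2), (2, 2, 2), (3, 1, 1)])
cur.executemany("INSERT INTO employee VALUES (?,?,?)",
                [(1, "ABC", "XYZ"), (2, "LMN", "OPR")])

result = cur.execute("""
    SELECT e.employeeid,
           (SELECT COUNT(*) FROM job j WHERE j.pickupmanid   = e.employeeid) AS totalpickupjobs,
           (SELECT COUNT(*) FROM job j WHERE j.deliverymanid = e.employeeid) AS totaldeliveryjobs,
           e.fname, e.lname
    FROM employee e
    ORDER BY e.employeeid
""").fetchall()
print(result)  # matches the desired output in the question
```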
Try this: ``` WITH x AS (SELECT 1 AS jobid,1 AS pickupmaid, 1 AS delivery_manid FROM dual UNION ALL SELECT 2 AS jobid,2 AS pickupmaid, 2 AS delivery_manid FROM dual UNION ALL SELECT 3 AS jobid,1 AS pickupmaid, 1 AS delivery_manid FROM dual ), y AS (SELECT 1 AS employeeid,'ABC' AS fname, 'XYZ' AS lname FROM dual UNION ALL SELECT 2 AS employeeid,'LMN' AS fname, 'OPR' AS lname FROM dual ) SELECT y.employeeid as employee_id, count(pickupmaid) as totalpickup, count(delivery_manid) as totaldelivery, y.fname as firstname, y.lname as lastname FROM y FULL OUTER JOIN x ON X.pickupmaid=y.employeeid group by y.employeeid, y.fname, y.lname; ```
MySQL query I cannot solve
[ "mysql", "sql", "database" ]
I have two tables. First one ``` farmer_id Farmer_name 1 Raju 2 Jay 3 Ram 5 Vinay ``` Second one ``` farmer_id registered_farmer_id Season 1 2 2014-15 3 5 2015-16 ``` Table one is the look up table which looks for the name of the farmer. Table two has two columns namely farmer\_id and registered-farmer\_id those look up for the same table. i.e table 1. What I need is ``` Farmer_id farmer_name Registered_farmer_id Registered_farmer_name 1 Raju 2 Jay 3 Ram 5 Vinay ```
``` SELECT T2.farmer_id, T1.farmer_name, T2.registered_farmer_id, T3.farmer_name FROM TABLE2 T2 LEFT JOIN TABLE1 T1 ON T2.farmer_id = T1.farmer_id LEFT JOIN TABLE1 T3 ON T2.registered_farmer_id = T3.farmer_id; ``` With TABLE1 the name of table 1 and TABLE2 the name of table 2.
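Joining the same lookup table twice under two aliases is easy to verify with Python's built-in sqlite3 (illustrative harness using the question's data, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE farmer (farmer_id INTEGER, farmer_name TEXT)")
cur.execute("CREATE TABLE registration (farmer_id INTEGER, registered_farmer_id INTEGER, season TEXT)")
cur.executemany("INSERT INTO farmer VALUES (?,?)",
                [(1, "Raju"), (2, "Jay"), (3, "Ram"), (5, "Vinay")])
cur.executemany("INSERT INTO registration VALUES (?,?,?)",
                [(1, 2, "2014-15"), (3, 5, "2015-16")])

rows = cur.execute("""
    SELECT r.farmer_id, f1.farmer_name, r.registered_farmer_id, f2.farmer_name
    FROM registration r
    LEFT JOIN farmer f1 ON r.farmer_id = f1.farmer_id
    LEFT JOIN farmer f2 ON r.registered_farmer_id = f2.farmer_id
    ORDER BY r.farmer_id
""").fetchall()
print(rows)  # each farmer_id resolved to a name via its own alias
```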
In MS SQL 2005+, you can use [CROSS APPLY](https://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx): ``` SELECT A.farmer_id, A.Farmer_name, B.farmer_id, B.Farmer_name FROM ( SELECT T1.farmer_id, T2.Farmer_name, T3.Registered_farmer_id FROM Table1 T1 INNER JOIN Table T2 ON T1.farmer_id = T2.farmer_id ) AS A CROSS APPLY ( SELECT farmer_id, Farmer_name FROM Table1 WHERE farmer_id = A.Registered_farmer_id ) AS B ``` In Oracle, you use INNER JOIN instead: ``` SELECT A.farmer_id, A.Farmer_name, B.farmer_id, B.Farmer_name FROM ( SELECT T1.farmer_id, T2.Farmer_name, T3.Registered_farmer_id FROM Table1 T1 INNER JOIN Table T2 ON T1.farmer_id = T2.farmer_id ) AS A INNER JOIN ( SELECT farmer_id, Farmer_name FROM Table1 ) AS B WHERE B.farmer_id = A.Registered_farmer_id ```
retrieve lookup column twice with associated rows
[ "sql", "database", "oracle" ]
I have a students table and I need to find, for each student, his next birthday that will occur on a Sunday. How can I do that in Oracle SQL? Example: Bob, found in the students table, has his dob '23/10/1994'. The output would be '23/10/2016' because this is the first date (starting from sysdate) that will occur on a Sunday and will also be his birthday.
[SQL Fiddle](http://sqlfiddle.com/#!4/b436c/7) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE Students ( Name, DateOfBirth ) AS SELECT 'Alice', DATE '1994-10-22' FROM DUAL UNION ALL SELECT 'Bob', DATE '1994-10-23' FROM DUAL UNION ALL SELECT 'Carol', DATE '1992-02-29' FROM DUAL; ``` **Query 1**: ``` WITH Dates ( Name, DateOfBirth, Birthday, Incr ) AS ( SELECT Name, DateOfBirth, ADD_MONTHS( DateOfBirth, ( EXTRACT( YEAR FROM SYSDATE ) - EXTRACT( YEAR FROM DateOfBirth ) ) * 12 ), EXTRACT( YEAR FROM SYSDATE ) - EXTRACT( YEAR FROM DateOfBirth ) FROM Students UNION ALL SELECT Name, DateOfBirth, ADD_MONTHS( DateOfBirth, ( Incr + 1 ) * 12 ), Incr + 1 FROM Dates WHERE TO_CHAR( ADD_MONTHS( DateOfBirth, Incr * 12 ), 'DAY' ) <> 'SUNDAY ' ) SELECT Name, DateOfBirth, Birthday FROM Dates WHERE TO_CHAR( Birthday, 'DAY' ) = 'SUNDAY ' ``` **[Results](http://sqlfiddle.com/#!4/b436c/7/0)**: ``` | NAME | DATEOFBIRTH | BIRTHDAY | |-------|----------------------------|----------------------------| | Bob | October, 23 1994 00:00:00 | October, 23 2016 00:00:00 | | Alice | October, 22 1994 00:00:00 | October, 22 2017 00:00:00 | | Carol | February, 29 1992 00:00:00 | February, 28 2021 00:00:00 | ```
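Outside the database, the same "walk forward year by year until the anniversary lands on a Sunday" logic can be cross-checked with a few lines of Python (the function name is made up; a fixed "today" of 2015-10-14 is assumed so the result is reproducible, and Feb 29 birthdays are deferred to the next leap year here, which differs from Oracle's ADD_MONTHS clamping seen for Carol above):

```python
from datetime import date

def next_sunday_birthday(dob, today):
    year = today.year
    while True:
        try:
            bday = dob.replace(year=year)
        except ValueError:      # Feb 29 in a non-leap year: try the next year
            year += 1
            continue
        if bday > today and bday.weekday() == 6:   # Monday=0 ... Sunday=6
            return bday
        year += 1

print(next_sunday_birthday(date(1994, 10, 23), date(2015, 10, 14)))  # 2016-10-23
```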
``` --prepare test data create table students(name varchar2(50), dob date) insert into students(name, dob) select 'Student '||rownum, to_date('1970.12.31','yyyy.mm.dd')-rownum from all_objects where rownum < 365 --the query select s.* ,to_char(s.next_dob,'DAY') day_of_week from ( select st.* ,to_date((extract(YEAR from sysdate) + addyear)*10000+extract(MONTH from dob)*100+extract(DAY from dob),'YYYYMMDD') as next_dob from students st ,(select rownum -1 as addyear from all_objects where rownum < 9) )s where s.next_dob > sysdate and rtrim(to_char(s.next_dob,'DAY')) = 'SUNDAY' order by next_dob ```
Find next birthday that occurs on a specific day in Oracle SQL
[ "sql", "oracle", "date" ]
I am practicing sql, and I came upon this problem. I am unsure of the sql logic, more specifically, it appears that I am using group by completely wrong. Problem: Print the name and age of the employee(s) with the highest salary among all employee of the same age. ``` employee(sid, sname, sex, age, year, salary) ``` First Attempt: ``` select E.sname, E.age from employee E where E.salary= max(E.salary) group by E.sname, E.age Error: Invalid SQL: ORA-00934: group function is not allowed here ``` Second attempt ``` select E.sname, E.age from employee E group by E.sname, E.age having E.salary= max(E.salary) Error: Invalid SQL: ORA-00979: not a GROUP BY expression ```
This may work: ``` select E.sname, E.age from employee E where E.salary= (select max(salary) from employee a where a.age = E.age ) ```
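The correlated "max salary per age" subquery can be checked quickly with Python's built-in sqlite3 (illustrative names and data, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (sname TEXT, age INTEGER, salary INTEGER)")
cur.executemany("INSERT INTO employee VALUES (?,?,?)", [
    ("Ann", 20, 100), ("Bob", 20, 200), ("Cid", 30, 150),
])

top = cur.execute("""
    SELECT e.sname, e.age
    FROM employee e
    WHERE e.salary = (SELECT MAX(a.salary) FROM employee a WHERE a.age = e.age)
    ORDER BY e.age
""").fetchall()
print(top)  # [('Bob', 20), ('Cid', 30)] -- the top earner within each age
```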
A simpler approach to the problem is using analytic function `rank`. ``` select sname, age, salary from ( select sname, age, salary, rank() over(partition by age order by salary desc) rnk from employee) t where rnk = 1 ``` Or to correct the query you have ``` select E.sname, E.age, E.salary from employee E where (age, salary) in (select age,max(salary) from employee group by age) ```
SQL error, incorrect implementation of group by
[ "sql" ]
Consider a scenario: ``` id name info done ----------------------- 1 abc x 0 2 abc y 1 <-- I have this id 3 pqr g 1 4 pqr h 0 5 pqr i 1 <-- I have this id ``` I have id for the last entry of every name. The result I'm expecting consists of 2 things: 1. info for last entry of the name 2. number of done [having value 1] for that name (1) can be easily achieved by `select info from table where id = myid` But how can (2) be achieved in the same query? Can it be achieved in the same query? Something like ``` select info, count(done) from table where id = myid group by name where ...... ```
This is a bit complicated, but can be done using conditional aggregation: ``` select max(case when t.id = myid then info end), sum(done) from table t where t.name = (select name from table t2 where t2.id = myid); ``` The key is getting all the rows for the given name. If you had multiple columns, then a correlated subquery might be the way to go: ``` select t.*, (select sum(t2.done) from table t2 where t2.name = t.name) as numdone from table t where t.id = myid; ```
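A runnable sketch of the conditional-aggregation version, using Python's built-in sqlite3 with the question's sample rows and `myid = 5` (the harness is illustrative, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, name TEXT, info TEXT, done INTEGER)")
cur.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1, "abc", "x", 0), (2, "abc", "y", 1),
    (3, "pqr", "g", 1), (4, "pqr", "h", 0), (5, "pqr", "i", 1),
])

myid = 5
info, done_count = cur.execute("""
    SELECT MAX(CASE WHEN t.id = ? THEN t.info END), SUM(t.done)
    FROM t
    WHERE t.name = (SELECT name FROM t t2 WHERE t2.id = ?)
""", (myid, myid)).fetchone()
print(info, done_count)  # info of row 5, plus count of done=1 rows for 'pqr'
```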
Considering done is either 1 or 0 you could just get the sum and display that. ``` select info, sum(done) from table where id = mid group by info ``` EDIT: ``` select info, s from table inner join ( select name, sum(done) as s from table group by name ) as zzz on zzz.name = table.name where id = myid ```
sql combining count with other fields
[ "sql", "sqlite" ]
I have the following tables ``` CREATE TABLE `constraints` ( `id` int(11), `name` varchar(64), `type` varchar(64) ); CREATE TABLE `groups` ( `id` int(11), `name` varchar(64) ); CREATE TABLE `constraints_to_group` ( `groupid` int(11), `constraintid` int(11) ); ``` With the following data: ``` INSERT INTO `groups` (`id`, `name`) VALUES (1, 'group1'), (2, 'group2'); INSERT INTO `constraints` (`id`, `name`, `type`) VALUES (1, 'cons1', 'eq'), (2, 'cons2', 'inf'); INSERT INTO `constraints_to_group` (`groupid`, `constraintid`) VALUES (1, 1), (1, 2), (2, 2); ``` I want to get all constraints for all groups, so I do the following: ``` SELECT groups.*, t.* FROM groups LEFT JOIN (SELECT * FROM constraints LEFT JOIN constraints_to_group ON constraints.id=constraints_to_group.constraintid) as t ON t.groupid=groups.id ``` And get the following result: ``` id| name | id | name type groupid constraintid ----------------------------------------------------- 1 | group1 | 1 | cons1 | eq | 1 | 1 1 | group1 | 2 | cons2 | inf | 1 | 2 2 | group2 | 2 | cons2 | inf | 2 | 2 ``` **What I'd like to get:** ``` group_id | group_name | cons_id | cons_name | cons_type | groupid | constraintid ------------------------------------------------------------------------------------- 1 | group1 | 1 | cons1 | eq | 1 | 1 1 | group1 | 2 | cons2 | inf | 1 | 2 2 | group2 | 2 | cons2 | inf | 2 | 2 ``` **This is an example; in my real case my tables have many more columns, so using `SELECT groups.name as group_name, ...` would lead to queries that are very hard to maintain.**
Try this way ``` SELECT groups.id as group_id, groups.name as group_name , t.id as cons_id, t.name as cons_name, t.type as cons_type, a.groupid , a.constraintid FROM constraints_to_group as a JOIN groups on groups.id=a.groupid JOIN constraints as t on t.id=a.constraintid ```
Possible duplicate of this [issue](https://stackoverflow.com/questions/13153344/in-a-join-how-to-prefix-all-column-names-with-the-table-it-came-from)
SQL JOIN : Prefix fields with table name
[ "mysql", "sql" ]
I have `table1` and `table2`. I need to delete from table1 all rows. When I try `DELETE FROM table1` this error appear > 1451 - Cannot delete or update a parent row: a foreign key constraint > fails (db\_name.table2, CONSTRAINT table2\_ibfk\_4 FOREIGN KEY > (table1\_id) REFERENCES table1 (id)) How can I delete all rows in table1, and all rows in table2, which `table1_id` match with `id` from `table1`?
``` DELETE FROM table2 WHERE EXISTS ( SELECT 1 FROM table1 WHERE table1.id = table2.table1_id ); DELETE FROM table1; ```
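The children-first delete order can be demonstrated with Python's built-in sqlite3, which enforces foreign keys only when asked (illustrative harness, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
cur.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE table2 (table1_id INTEGER REFERENCES table1(id))")
cur.executemany("INSERT INTO table1 VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO table2 VALUES (?)", [(1,), (1,), (2,)])

# children first, then parents -- the reverse order would raise an FK error
cur.execute("""DELETE FROM table2
               WHERE EXISTS (SELECT 1 FROM table1
                             WHERE table1.id = table2.table1_id)""")
cur.execute("DELETE FROM table1")
leftover = cur.execute(
    "SELECT (SELECT COUNT(*) FROM table1) + (SELECT COUNT(*) FROM table2)"
).fetchone()[0]
print(leftover)  # 0 -- both tables emptied without violating the constraint
```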
Your error indicates an issue where there is a foreign key reference from `table2` to `table1`. To solve this you have a few options: * Delete the records referring to the `table1` record from `table2`, before deleting the record in `table1` * Add a `ON DELETE CASCADE` to the foreign key
How can I delete rows from two tables?
[ "sql" ]
Hi I'm looking to create two additional fields with my query. Status (ACTIVE/INACTIVE) and REASON (BAU/EXPIRY) based on criteria. Currently it is one column, but I want to split it into two. Below is the code: ``` select first_name,last_name,end_date as "Contract end date",date_removed, case when END_DATE > sysdate and DATE_REMOVED is null then 'ACTIVE' when END_DATE <= sysdate then 'INACTIVE - Contract Expired' when DATE_REMOVED is not null then 'INACTIVE - BAU' end as status from DEPARTMENTS d inner join employees e on e.DEPARTMENT_ID = d.DEPARTMENT_ID ``` Want it to display ``` **STATUS** **REASON** ACTIVE ACTIVE INACTIVE CONTRACT EXPIRY INACTIVE BAU ... ... ```
I don't normally answer my own question but below was what I was looking for ``` select first_name,last_name,end_date,date_removed, case when END_DATE > sysdate and DATE_REMOVED is null then 'ACTIVE' when END_DATE <= sysdate then 'INACTIVE' end as status, case when END_DATE <= sysdate then 'Contract Expired' when DATE_REMOVED is not null then 'BAU' end as Reason from DEPARTMENTS d inner join employees e on e.DEPARTMENT_ID = d.DEPARTMENT_ID; ```
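The two-CASE-column idea can be checked with Python's built-in sqlite3. This sketch is simplified and not part of the original answer: it uses a fixed "today" instead of sysdate, and adds an explicit ELSE so a removed-but-unexpired contract still reads INACTIVE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (name TEXT, end_date TEXT, date_removed TEXT)")
cur.executemany("INSERT INTO emp VALUES (?,?,?)", [
    ("A", "2099-01-01", None),          # still under contract
    ("B", "2001-01-01", None),          # contract expired
    ("C", "2099-01-01", "2015-05-05"),  # removed as business-as-usual
])

rows = cur.execute("""
    SELECT name,
           CASE WHEN end_date > :today AND date_removed IS NULL THEN 'ACTIVE'
                ELSE 'INACTIVE' END AS status,
           CASE WHEN end_date <= :today       THEN 'Contract Expired'
                WHEN date_removed IS NOT NULL THEN 'BAU' END AS reason
    FROM emp ORDER BY name
""", {"today": "2020-01-01"}).fetchall()
print(rows)
```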
Based on your sample... ``` select case when STATUS like '%INACTIVE%' and DATE_REMOVED is not null then 'BAU' when STATUS like '%INACTIVE%' and END_DATE <= sysdate then 'Contract Expired' end as REASON, case when STATUS like '%INACTIVE%' then 'INACTIVE' else 'ACTIVE' end as STATUS from ( select first_name, last_name, end_date, date_removed, case when END_DATE > sysdate and DATE_REMOVED is null then 'ACTIVE' when END_DATE <= sysdate then 'INACTIVE - Contract Expired' when DATE_REMOVED is not null then 'INACTIVE - BAU' end as status from DEPARTMENTS d inner join employees e on e.DEPARTMENT_ID = d.DEPARTMENT_ID) ```
How do I create two additional fields to a view based on output from two case statements
[ "sql", "oracle" ]
I have data which looks something like this ``` stories value -------------------------- 0 2194940472.78964 1 1651820586.1447 2 627935051.75 3 586994698.4272 4 89132137.57 5 134608008 6 40759564 7 0 8 0 10 0 11 0 12 0 13 26060602 17 0 18 0 19 84522335 20 316478066.045 24 0 ``` I want to sum it up per range. The output I expect: ``` stories value 0-3 125201021 4-7 215453123 8-12 453121545 12-max(numstories) 21354322 ``` I tried this but I am not able to figure out what is wrong ``` select t.NumStories, SUM(t.bldnvalue) from (select a.NumStories, case when a.NumStories between 0 and 3 then sum(a.BldgValue) when a.NumStories between 4 and 7 then sum(a.BldgValue) when a.NumStories between 8 and 12 then sum(a.BldgValue) when a.NumStories between 13 and max(a.NumStories) then sum(a.BldgValue) end as bldnvalue from dbo.EDM_CocaCola_Coca_Cola_Company_1_1 a group by a.NumStories) t group by t.NumStories ``` With this query I am getting this output ``` NumStories value ------------------------------- 0 2194940472.78964 3 586994698.4272 12 0 6 40759564 7 0 1 1651820586.1447 24 0 18 0 10 0 4 89132137.57 19 84522335 13 26060602 5 134608008 2 627935051.75 17 0 11 0 20 316478066.045 8 0 ```
Just build the grouping label you want and group by that expression. SQL Server does not allow grouping by the ordinal `1`, so the CASE expression is repeated in the GROUP BY: ``` select case when a.NumStories between 0 and 3 then '0-3' when a.NumStories between 4 and 7 then '4-7' when a.NumStories between 8 and 12 then '8-12' when a.NumStories >= 13 then '13-max' end as stories, sum(a.BldgValue) as value from dbo.EDM_CocaCola_Coca_Cola_Company_1_1 a group by case when a.NumStories between 0 and 3 then '0-3' when a.NumStories between 4 and 7 then '4-7' when a.NumStories between 8 and 12 then '8-12' when a.NumStories >= 13 then '13-max' end; ``` If you really want to print the max too, then you can put in a subquery in the "13-max" line as `(SELECT MAX(BldgValue) FROM dbo.EDM_CocaCola_Coca_Cola_Company_1_1)`
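A quick runnable check of the bucket-label idea, using Python's built-in sqlite3 with made-up numbers (SQLite happens to allow grouping by the output alias; engines that don't need the CASE expression repeated in the GROUP BY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE b (stories INTEGER, value INTEGER)")
cur.executemany("INSERT INTO b VALUES (?,?)", [(0, 10), (2, 5), (5, 7), (9, 1)])

bins = cur.execute("""
    SELECT CASE WHEN stories BETWEEN 0 AND 3 THEN '0-3'
                WHEN stories BETWEEN 4 AND 7 THEN '4-7'
                ELSE '8+' END AS bucket,
           SUM(value) AS total
    FROM b
    GROUP BY bucket        -- SQLite resolves the alias here
    ORDER BY bucket
""").fetchall()
print(bins)  # [('0-3', 15), ('4-7', 7), ('8+', 1)]
```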
I like this result, I tried to use the BIN concept. I think the only issue would be with your max bin. I don't understand how you got your output sums. the first records value is '2,194,940,472.78964' which is bigger than your value in 0-3 bin ``` if OBJECT_ID('tempdb..#Test') is not null drop table #Test; Create table #Test ( Stories int , Value float ) insert into #Test values (0 , 2194940472.78964) , (1 , 1651820586.1447 ) , (2 , 627935051.75 ) , (3 , 586994698.4272 ) , (4 , 89132137.57 ) , (5 , 134608008 ) , (6 , 40759564 ) , (7 , 0 ) , (8 , 0 ) , (10, 0 ) , (11, 0 ) , (12, 0 ) , (13, 26060602 ) , (17, 0 ) , (18, 0 ) , (19, 84522335 ) , (20, 316478066.045 ) , (24, 0 ) if OBJECT_ID('tempdb..#Bins') is not null drop table #Bins; create Table #Bins( Label varchar(20) , Min int , Max int ) insert into #Bins values ('0-3', 0, 3) , ('4-7', 4, 7) , ('8-12', 8, 12) , ('13 - Max', 13, 999999999) Select b.Label , sum(t.Value) as Value from #Test t join #Bins b on t.stories between b.Min and b.Max Group by b.Label order by 1 ``` Output: ``` Label Value -------------------- ---------------------- 0-3 5061690809.11154 13 - Max 427061003.045 4-7 264499709.57 8-12 0 ```
SQL Server : data between specific range
[ "sql", "sql-server", "range", "sql-server-2014" ]
When dealing with large datasets where table A has considerably fewer rows than B, is there any difference between joining table **A** with **B** and **B** with **A** in the query below? ``` SELECT A.col_PK, B.col_PK FROM A INNER JOIN B ON A.col_PK = B.col_PK WHERE A.col_2 = '0001'; ```
This is a bit long for a comment. For an `INNER JOIN`, there is no difference in functionality (order does make a difference for outer joins). For performance, the optimizer generally chooses the best ordering. Under some circumstances, when your query has long sequences of joins, the optimizer may not be able to consider all options. In those cases, the ordering *could* make a difference to the performance.
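The "order doesn't change the result set for an inner join" point is easy to verify empirically. A small sketch with Python's built-in sqlite3 (schema and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (pk INTEGER, col2 TEXT)")
cur.execute("CREATE TABLE b (pk INTEGER)")
cur.executemany("INSERT INTO a VALUES (?,?)", [(1, "0001"), (2, "0001"), (3, "x")])
cur.executemany("INSERT INTO b VALUES (?)", [(1,), (2,), (4,)])

ab = cur.execute("""SELECT a.pk, b.pk FROM a JOIN b ON a.pk = b.pk
                    WHERE a.col2 = '0001' ORDER BY a.pk""").fetchall()
ba = cur.execute("""SELECT a.pk, b.pk FROM b JOIN a ON a.pk = b.pk
                    WHERE a.col2 = '0001' ORDER BY a.pk""").fetchall()
print(ab == ba)  # True -- inner join order does not change the result set
```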
The result depends on which type of `JOIN` you use. Refer to [TYPES OF JOINS AND THEIR WORKING](http://www.codeproject.com/Tips/712941/Types-of-Join-in-SQL-Server)
A JOIN B vs B JOIN A
[ "mysql", "sql", "database", "join", "rdbms" ]
I'm comparing two days' ranks. If old day rank > new rank then increased, else decreased, but I'm getting decreased in both cases ``` SELECT USERNAME , SUM(IMPROVED) IMPROVED , SUM(DECREASED) DECREASED , SUM(NoChange) NoChange , SUM(TimeLineCOMPLETED) TimeLineCOMPLETED , SUM(TLNotCompleted) TLNotCompleted FROM (SELECT (k.keyword) ,p.projectname ,pa.username ,CASE WHEN a.currentposition > b.currentposition THEN 1 ELSE 0 END IMPROVED ,CASE WHEN a.currentposition < b.currentposition THEN 1 ELSE 0 END DECREASED ,CASE WHEN a.currentposition = b.currentposition THEN 1 ELSE 0 END NoChange ,CASE WHEN pa.KeywordStatus = 'Stopped' THEN 1 ELSE 0 END TimeLineCOMPLETED ,CASE WHEN pa.KeywordStatus = 'InProgress' THEN 1 ELSE 0 END TLNotCompleted ,a.currentposition AS oldposition ,b.currentposition AS newposition ,pa.KeywordStatus AS TimeLineStatus ,k.targetdate ,k.positionExp FROM seo.tbl_keywordposition a JOIN seo.tbl_keywordposition b ON a.keywordid = b.keywordid AND a.psnupdatedate ='10/5/2015' AND b.psnupdatedate ='10/10/2015' LEFT JOIN tbl_keywords k ON k.keywordid = b.keywordid AND a.keywordid = b.keywordid LEFT JOIN tbl_project p ON p.ProjectId = k.ProjId LEFT JOIN tbl_projAssignment pa ON pa.ProjId = p.ProjectId AND pa.KeywordID = k.keywordid WHERE p.Projectname = 'october_project' AND a.psnupdatedate ='10/5/2015' AND b.psnupdatedate ='10/10/2015' and pa.KeywordStatus!='NULL' )INNERQUERY GROUP by USERNAME ``` **Sample data** ``` SELECT * FROM seo.Tbl_KeywordPosition where KeywordId in ('1514','1515') PositionId ProjectId KeywordId CurrentPosition PsnUpdateDate 31592 129 1514 10 2015-10-05 31593 129 1514 11 2015-10-10 31594 129 1515 10 2015-10-05 31595 129 1515 9 2015-10-10 ``` **Current output** ``` USERNAME IMPROVED DECREASED NoChange Ananth 0 2 0 ``` **Desired output** ``` USERNAME IMPROVED DECREASED NoChange Ananth 1 1 0 ``` Because `keywordid 1514` - old rank 10 and new rank 11, so it should be decreased.
`keywordid 1515` - old rank 10 and new rank 9 , so it should be increased
The problem is that CurrentPosition is varchar; the comparison should be `cast(a.currentposition as int)` vs `cast(b.currentposition as int)`.
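The pitfall is easy to reproduce: text comparison goes character by character, so '10' sorts before '9'. A quick check using Python's built-in sqlite3 (illustrative, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# as text, '10' < '9' because '1' < '9'; cast to int for a numeric comparison
as_text = cur.execute("SELECT '10' < '9'").fetchone()[0]
as_int  = cur.execute("SELECT CAST('10' AS INTEGER) < CAST('9' AS INTEGER)").fetchone()[0]
print(as_text, as_int)  # 1 0
```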
Based on the sample and result: ``` SELECT A.KeywordId, CASE WHEN B.CurrentPosition - A.CurrentPosition > 0 THEN 1 ELSE 0 END AS IMPROVED, CASE WHEN B.CurrentPosition - A.CurrentPosition < 0 THEN 1 ELSE 0 END AS DECREASED, CASE WHEN B.CurrentPosition - A.CurrentPosition = 0 THEN 1 ELSE 0 END AS NoChange FROM (SELECT * FROM seo.Tbl_KeywordPosition WHERE KeywordId IN ('1514','1515') AND PsnUpdateDate = '2015-10-05' ) AS A INNER JOIN (SELECT * FROM seo.Tbl_KeywordPosition WHERE KeywordId IN ('1514','1515') AND PsnUpdateDate = '2015-10-10' ) AS B ON A.ProjectId = B.ProjectId AND A.KeywordId = B.KeywordId ```
Incorrect results in SQL statement while using case statement
[ "sql", "sql-server", "sql-server-2008" ]
I have a SQL issue that's kept me busy for a couple of days now. All I want is to copy most rows from one table (ORIGIN) to another (DESTINATION), but in DESTINATION there should not be rows with the same PRIMARY KEY. It is possible that in ORIGIN we will have many duplicates at a given moment. Well, so far so good, we have many methods for this and they always worked for me in MySQL (which is what I have always worked with). But now I am forced to use SQL Server 2008. So far I have tried the NOT EXISTS, NOT IN and JOIN methods. It doesn't matter what I do, I always get the infamous dupe key error. For example, if I use this: ``` insert into DESTINATION select * from ORIGIN where ORIGIN.[MY-PRIMARY-KEY-FIELD] not in (select [MY-PRIMARY-KEY-FIELD] from DESTINATION) ``` Then I get this: > Violation of PRIMARY KEY constraint 'PK__DESTINAT__52F5604C11CBEE86'. Cannot insert duplicate key in object 'dbo.DESTINATION'. The duplicate key value is (00013884880195). The specified value is, of course, duplicated in ORIGIN, but that's why I am checking if it also exists in DESTINATION before inserting it. I get the same exact error message with the other methods. Nothing works. The only thing I can think of is that, maybe, this is threading or doing weird stuff so that the rows are not still there when we go checking for it. Is there a way to force some kind of "synchronous mode" so that the info is pushed into the table as soon as possible to ensure the data is there when we go checking for it? That'd be a simple way to discard that... Or maybe I am overlooking something silly here. I don't know. So, any hint on what might be happening here will be welcome. Thanks beforehand.
Probably this is a better option: ``` insert into DESTINATION select * from ( select <column_list>, row_number() over (partition by PK order by PK) as rnm from ORIGIN ) Temp where rnm = 1; ``` The duplicates are inside ORIGIN itself, so the NOT IN check against DESTINATION cannot catch them; numbering the rows per key and keeping only the first removes them before the insert.
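A runnable sketch of the ROW_NUMBER de-duplication pattern, using Python's built-in sqlite3 (requires SQLite 3.25+ for window functions; table and column names are illustrative, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE origin (pk INTEGER, val TEXT)")
cur.execute("CREATE TABLE destination (pk INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO origin VALUES (?,?)",
                [(1, "a"), (1, "b"), (2, "c")])   # pk 1 is duplicated in ORIGIN

cur.execute("""
    INSERT INTO destination
    SELECT pk, val FROM (
        SELECT pk, val,
               ROW_NUMBER() OVER (PARTITION BY pk ORDER BY pk) AS rnm
        FROM origin
    ) WHERE rnm = 1
""")
keys = [r[0] for r in cur.execute("SELECT pk FROM destination ORDER BY pk")]
print(keys)  # [1, 2] -- one row per key, no primary-key violation
```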
You can use `NOT EXISTS` in query to find records that do not match ``` INSERT INTO DESTINATION (colfk,col1, col2, col3) SELECT * FROM (SELECT coppiesfk,coppiesRow1, coppiesRow2, coppiesRow3) AS tmp WHERE NOT EXISTS ( SELECT colfk FROM DESTINATION WHERE colfk = coppiesfk ) LIMIT 1; ```
SQL Server : copy rows to another table without duplicate keys
[ "sql", "sql-server" ]
I have these two tables (Moodle 2.8): ``` CREATE TABLE `mdl_course` ( `id` bigint(10) NOT NULL AUTO_INCREMENT, `category` bigint(10) NOT NULL DEFAULT '0', `sortorder` bigint(10) NOT NULL DEFAULT '0', `fullname` varchar(254) NOT NULL DEFAULT '', `shortname` varchar(255) NOT NULL DEFAULT '', `idnumber` varchar(100) NOT NULL DEFAULT '', `summary` longtext, `summaryformat` tinyint(2) NOT NULL DEFAULT '0', `format` varchar(21) NOT NULL DEFAULT 'topics', `showgrades` tinyint(2) NOT NULL DEFAULT '1', `newsitems` mediumint(5) NOT NULL DEFAULT '1', `startdate` bigint(10) NOT NULL DEFAULT '0', `marker` bigint(10) NOT NULL DEFAULT '0', `maxbytes` bigint(10) NOT NULL DEFAULT '0', `legacyfiles` smallint(4) NOT NULL DEFAULT '0', `showreports` smallint(4) NOT NULL DEFAULT '0', `visible` tinyint(1) NOT NULL DEFAULT '1', `visibleold` tinyint(1) NOT NULL DEFAULT '1', `groupmode` smallint(4) NOT NULL DEFAULT '0', `groupmodeforce` smallint(4) NOT NULL DEFAULT '0', `defaultgroupingid` bigint(10) NOT NULL DEFAULT '0', `lang` varchar(30) NOT NULL DEFAULT '', `theme` varchar(50) NOT NULL DEFAULT '', `timecreated` bigint(10) NOT NULL DEFAULT '0', `timemodified` bigint(10) NOT NULL DEFAULT '0', `requested` tinyint(1) NOT NULL DEFAULT '0', `enablecompletion` tinyint(1) NOT NULL DEFAULT '0', `completionnotify` tinyint(1) NOT NULL DEFAULT '0', `cacherev` bigint(10) NOT NULL DEFAULT '0', `calendartype` varchar(30) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `mdl_cour_cat_ix` (`category`), KEY `mdl_cour_idn_ix` (`idnumber`), KEY `mdl_cour_sho_ix` (`shortname`), KEY `mdl_cour_sor_ix` (`sortorder`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; CREATE TABLE `mdl_log` ( `id` bigint(10) NOT NULL AUTO_INCREMENT, `time` bigint(10) NOT NULL DEFAULT '0', `userid` bigint(10) NOT NULL DEFAULT '0', `ip` varchar(45) NOT NULL DEFAULT '', `course` bigint(10) NOT NULL DEFAULT '0', `module` varchar(20) NOT NULL DEFAULT '', `cmid` bigint(10) NOT NULL DEFAULT '0', `action` varchar(40) NOT NULL DEFAULT '', `url` 
varchar(100) NOT NULL DEFAULT '', `info` varchar(255) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `mdl_log_coumodact_ix` (`course`,`module`,`action`), KEY `mdl_log_tim_ix` (`time`), KEY `mdl_log_act_ix` (`action`), KEY `mdl_log_usecou_ix` (`userid`,`course`), KEY `mdl_log_cmi_ix` (`cmid`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ``` And this query: ``` SELECT l.id, l.userid AS participantid, l.course AS courseid, l.time, l.ip, l.action, l.info, l.module, l.url FROM mdl_log l INNER JOIN mdl_course c ON l.course = c.id AND c.category <> 0 WHERE l.id > [some large id] AND l.time > [some unix timestamp] ORDER BY l.id ASC LIMIT 0,200 ``` mdl\_log table has over 200 milion records, and I need to export it into file using PHP and not die in intent. The main problem here is that executing this is too slow. The main killer here is the join to the mdl\_course table. If I remove it, everything works fast. Here is the explain: > ``` > +----+-------------+-------+-------+---------------------------------------------+----------------------+---------+----------------+------+-----------------------------------------------------------+ > | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | > +----+-------------+-------+-------+---------------------------------------------+----------------------+---------+----------------+------+-----------------------------------------------------------+ > | 1 | SIMPLE | c | range | PRIMARY,mdl_cour_cat_ix | mdl_cour_cat_ix | 8 | NULL | 3152 | Using where; Using index; Using temporary; Using filesort | > | 1 | SIMPLE | l | ref | PRIMARY,mdl_log_coumodact_ix,mdl_log_tim_ix | mdl_log_coumodact_ix | 8 | xray2qasb.c.id | 618 | Using index condition; Using where | > +----+-------------+-------+-------+---------------------------------------------+----------------------+---------+----------------+------+-----------------------------------------------------------+ > ``` Is there any way to remove usage of temporary and 
filesort? What do you propose here?
After some testing this query works fast as expected: ``` SELECT l.id, l.userid AS participantid, l.course AS courseid, l.time, l.ip, l.action, l.info, l.module, l.url FROM mdl_log l WHERE l.id > 123456 AND l.time > 1234 AND EXISTS (SELECT * FROM mdl_course c WHERE l.course = c.id AND c.category <> 0 ) ORDER BY l.id ASC LIMIT 0,200 ``` Thanks to JamieD77 for his suggestion! execution plan: > ``` > +----+--------------------+-------+--------+-------------------------+---------+---------+--------------------+----------+-------------+ > | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | > +----+--------------------+-------+--------+-------------------------+---------+---------+--------------------+----------+-------------+ > | 1 | PRIMARY | l | range | PRIMARY,mdl_log_tim_ix | PRIMARY | 8 | NULL | 99962199 | Using where | > | 2 | DEPENDENT SUBQUERY | c | eq_ref | PRIMARY,mdl_cour_cat_ix | PRIMARY | 8 | xray2qasb.l.course | 1 | Using where | > +----+--------------------+-------+--------+-------------------------+---------+---------+--------------------+----------+-------------+ > ```
Try moving the category selection outside the `JOIN`. Here I put it in an `IN()` which the engine will cache on successive runs. I don't have 200M rows to test on, so YMMV. ``` DESCRIBE SELECT l.id, l.userid AS participantid, l.course AS courseid, l.time, l.ip, l.action, l.info, l.module, l.url FROM mdl_log l WHERE l.id > 1234567890 AND l.time > 1234567890 AND l.course IN (SELECT c.id FROM mdl_course c WHERE c.category > 0) ORDER BY l.id ASC LIMIT 0,200; ```
How to optimize this query in MySQL
[ "mysql", "sql", "performance", "filesort" ]
I hope this is concise. I am basically looking for a methodology on how to improve queries, after watching one of my colleagues speed up my query almost 10 fold with a quick change. I had a query with two tables, `t_item` and `t_action`. `t_item` is basically an item with characteristics, and `t_action` is the events or actions performed on this item, with a time stamp for each action; each action also has an id. My query joined the two tables on id. There were also some criteria on `t_action.action_type`, which is free text. My simplified original query was like below ``` SELECT * FROM t_item JOIN t_action ON t_item.pk = t_action.fk WHERE t_action.action_type LIKE ('%PURCHASE%') AND t_item.location = 'DE' ``` This ran OK, it came back in roughly 8 mins. My colleague changed it so that the `t_action.action_type` condition ended up in the FROM portion of the SQL. This reduced the time to 2 mins ``` SELECT * FROM t_item JOIN t_action ON t_item.pk = t_action.fk AND t_action.action_type LIKE ('%PURCHASE%') WHERE t_item.location = 'DE' ``` My question is: **Generally, how do you know when to put limits in the FROM clause vs in the WHERE clause?** I thought that the Teradata SQL optimizer does this automatically. Thank you for your help
In this case, you don't actually need to understand the plan. You just need to see if the two plans are the same. Teradata has a pretty good optimizer, so I would not expect there to be a difference between the two versions (could be, but I would be surprised). Hence, caching is a possibility for explaining the difference in performance. For this query: ``` SELECT * FROM t_item JOIN t_action ON t_item.pk = t_action.fk AND t_action.action_type LIKE '%PURCHASE%' WHERE t_item.location = 'DE'; ``` The best indexes are probably on `t_item(location, pk)` and `t_action(action_type)`. However, you should try to get rid of the wildcards for a production query. This makes the query harder to optimize, which in turn might have a large impact on performance.
I tried to create a similar query but didn't see any difference in the explain plan, though my record counts were smaller: trans (15k) and accounts (10k), with indexes on Account\_number. It is probably what Gordon has specified; try running the query at a different time, and also check the explain plan for both queries to see any difference.

```
Explain select *
from trans t
inner join ap.accounts a
on t.account_number = a.account_number
where t.trans_id like '%DEP%';

4) We do an all-AMPs JOIN step from ap.a by way of a RowHash match scan
   with no residual conditions, which is joined to ap.t by way of a
   RowHash match scan with a condition of ("ap.t.Trans_ID LIKE '%DEP%'").
   ap.a and ap.t are joined using a merge join, with a join condition of
   ("ap.t.Account_Number = ap.a.Account_Number"). The result goes into
   Spool 1 (group_amps), which is built locally on the AMPs. The size of
   Spool 1 is estimated with no confidence to be 11,996 rows
   (1,511,496 bytes). The estimated time for this step is 0.25 seconds.
-> The contents of Spool 1 are sent back to the user as the result of
   statement 1. The total estimated time is 0.25 seconds.

Explain select *
from trans t
inner join ap.accounts a
on t.account_number = a.account_number
and t.trans_id like '%DEP%';

4) We do an all-AMPs JOIN step from ap.a by way of a RowHash match scan
   with no residual conditions, which is joined to ap.t by way of a
   RowHash match scan with a condition of ("ap.t.Trans_ID LIKE '%DEP%'").
   ap.a and ap.t are joined using a merge join, with a join condition of
   ("ap.t.Account_Number = ap.a.Account_Number"). The result goes into
   Spool 1 (group_amps), which is built locally on the AMPs. The size of
   Spool 1 is estimated with no confidence to be 11,996 rows
   (1,511,496 bytes). The estimated time for this step is 0.25 seconds.
-> The contents of Spool 1 are sent back to the user as the result of
   statement 1. The total estimated time is 0.25 seconds.
```
Teradata SQL Optimization
[ "", "sql", "query-optimization", "teradata", "" ]
I'm trying to print, for each person, their age using this format, e.g.: 19 years, 8 months, 13 days. I've googled a lot and I've noticed that there is a specific function to calculate the difference between dates, `DATEDIFF`. However, this function does not exist in `SQL*Plus`, so I went on trying with `MONTHS_BETWEEN()` and some operators.

My attempt:

```
SELECT name ,
' ' || FLOOR(MONTHS_BETWEEN(to_date(SYSDATE),to_date(date_of_birth))/12)||' years ' ||
FLOOR(MOD(MONTHS_BETWEEN(to_date(SYSDATE),to_date(date_of_birth)),12)) || ' months ' ||
FLOOR(MOD(MOD(MONTHS_BETWEEN(to_date(SYSDATE),to_date(date_of_birth)),12),4))|| ' days ' AS "Age"
FROM persons;
```

My issue lies in getting the days. I don't know how I should calculate the days using this function (I tried dividing by 4, or by 30). I'm thinking my logic is bad but I can't figure it out. Any ideas?
Very similar to Lalit's answer, but you can get an accurate number of days without assuming 30 days per month, by using `add_months` to adjust by the total whole-month difference: ``` select sysdate, hiredate, trunc(months_between(sysdate,hiredate) / 12) as years, trunc(months_between(sysdate,hiredate) - (trunc(months_between(sysdate,hiredate) / 12) * 12)) as months, trunc(sysdate) - add_months(hiredate, trunc(months_between(sysdate,hiredate))) as days from emp; SYSDATE HIREDATE YEARS MONTHS DAYS ---------- ---------- ---------- ---------- ---------- 2015-10-26 1980-12-17 34 10 9 2015-10-26 1981-02-20 34 8 6 2015-10-26 1981-02-22 34 8 4 2015-10-26 1981-04-02 34 6 24 2015-10-26 1981-09-28 34 0 28 2015-10-26 1981-05-01 34 5 25 2015-10-26 1981-06-09 34 4 17 2015-10-26 1982-12-09 32 10 17 2015-10-26 1981-11-17 33 11 9 2015-10-26 1981-09-08 34 1 18 2015-10-26 1983-01-12 32 9 14 2015-10-26 1981-12-03 33 10 23 2015-10-26 1981-12-03 33 10 23 2015-10-26 1982-01-23 33 9 3 ``` You can verify by reversing the calculation: ``` with tmp as ( select trunc(sysdate) as today, hiredate, trunc(months_between(sysdate,hiredate) / 12) as years, trunc(months_between(sysdate,hiredate) - (trunc(months_between(sysdate,hiredate) / 12) * 12)) as months, trunc(sysdate) - add_months(hiredate, trunc(months_between(sysdate,hiredate))) as days from emp ) select * from tmp where today != add_months(hiredate, (12 * years) + months) + days; no rows selected ```
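The same whole-months-then-remainder arithmetic can be sketched in plain Python. The `add_months` and `age_parts` helpers below are hypothetical names of my own, and the whole-month count only approximates `TRUNC(MONTHS_BETWEEN(...))` (the two can disagree in end-of-month corner cases), but it reproduces the rows of the output above:

```python
from datetime import date
import calendar

def add_months(d, n):
    # Minimal analogue of Oracle's ADD_MONTHS: shift by n months,
    # clamping the day to the target month's length.
    y, m = divmod(d.year * 12 + (d.month - 1) + n, 12)
    m += 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

def age_parts(born, today):
    # Whole months between the two dates: a month is complete once the
    # day-of-month of `today` reaches the day-of-month of `born`.
    months = (today.year - born.year) * 12 + (today.month - born.month)
    if today.day < born.day:
        months -= 1
    years, rem = divmod(months, 12)
    # Same trick as the SQL: subtract the whole-month-shifted birth date.
    days = (today - add_months(born, months)).days
    return years, rem, days

# First row of the output above: hired 1980-12-17, run on 2015-10-26.
print(age_parts(date(1980, 12, 17), date(2015, 10, 26)))  # (34, 10, 9)
```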
Getting the age in terms of **YEARS** and **MONTHS** is easy, but the tricky part is the **DAYS**. If you can fix the number of days in a month, you could get the number of days in the same SQL. For example, using the standard **SCOTT.EMP** table and assuming every month has `30` days:

```
SQL> SELECT SYSDATE,
  2         hiredate,
  3         TRUNC(months_between(SYSDATE,hiredate)/12) years,
  4         TRUNC(months_between(SYSDATE,hiredate) -
  5              (TRUNC(months_between(SYSDATE,hiredate)/12)*12)) months,
  6         TRUNC((months_between(SYSDATE,hiredate) -
  7              TRUNC(months_between(SYSDATE,hiredate)))*30) days
  8  FROM emp;

SYSDATE    HIREDATE        YEARS     MONTHS       DAYS
---------- ---------- ---------- ---------- ----------
2015-10-26 1980-12-17         34         10          9
2015-10-26 1981-02-20         34          8          6
2015-10-26 1981-02-22         34          8          4
2015-10-26 1981-04-02         34          6         23
2015-10-26 1981-09-28         34          0         28
2015-10-26 1981-05-01         34          5         24
2015-10-26 1981-06-09         34          4         17
2015-10-26 1982-12-09         32         10         17
2015-10-26 1981-11-17         33         11          9
2015-10-26 1981-09-08         34          1         18
2015-10-26 1983-01-12         32          9         14
2015-10-26 1981-12-03         33         10         22
2015-10-26 1981-12-03         33         10         22
2015-10-26 1982-01-23         33          9          3

14 rows selected.
```

But be aware that not every month has `30` days, so you cannot be fully accurate with the number of days this way.

---

**UPDATE**

I missed the total whole-month difference which @Alex Poole has explained in his accepted answer. I will leave this answer for future readers to understand the part that was missed about calculating the number of days.

**Modify this:**

```
TRUNC((months_between(SYSDATE,hiredate) -
TRUNC(months_between(SYSDATE,hiredate)))*30) days
```

**With this:**

```
TRUNC(SYSDATE) - add_months(hiredate, TRUNC(months_between(sysdate,hiredate)))
```
How to get age in years, months and days using Oracle
[ "", "sql", "oracle", "date-arithmetic", "" ]
Any thoughts on how to write this in SQL?

> List all the customers who bought products bought by every other customer

The schema is:

* `Customer` (cId, cName)
* `Buys` (tId, cId, pId)
* `Product` (pId, pName)

I have been trying to select all the distinct items and group the table `Buys` by that, but without luck.
``` -- Customers who bought products bought by every other customer SELECT DISTINCT cId,cName FROM Customer c1 ,Buys b1 WHERE c1.cId = b1.cId AND pId IN -- Products bought by every other customer (SELECT pId FROM (SELECT pId,COUNT(1) count_p FROM (SELECT DISTINCT cId,pId FROM Buys) GROUP BY pId) t1 ,(SELECT COUNT(1) count_c FROM Customer) t2 WHERE t1.count_p = t2.count_c) ``` Try the above query and see if it works. I didn't get a chance to test this.
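The counting idea here (keep only products whose distinct-buyer count equals the total customer count, then list the customers who bought such a product) can be checked on a tiny made-up dataset. This sketch uses SQLite and an explicit join in place of the comma-join, but it is the same logic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer (cId INTEGER, cName TEXT);
    CREATE TABLE Buys (tId INTEGER, cId INTEGER, pId INTEGER);
    INSERT INTO Customer VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
    -- product 10 is bought by all three customers, product 20 only by Ann
    INSERT INTO Buys VALUES (1, 1, 10), (2, 2, 10), (3, 3, 10), (4, 1, 20);
""")

rows = conn.execute("""
    SELECT DISTINCT c.cName
    FROM Customer c
    JOIN Buys b ON b.cId = c.cId
    WHERE b.pId IN (            -- products bought by every customer
        SELECT pId
        FROM (SELECT DISTINCT cId, pId FROM Buys)
        GROUP BY pId
        HAVING COUNT(*) = (SELECT COUNT(*) FROM Customer)
    )
    ORDER BY c.cName
""").fetchall()
print(rows)  # everyone bought product 10, so all three names come back
```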
Try this; tested on random data, it works well. It may be slow on large amounts of data.

```
SELECT cid, NAME
FROM (SELECT c.cid, c.name, cc.cnt, COUNT(b.pid) cpid
      FROM customer c, buys b, product p,
           (SELECT COUNT(*) cnt FROM customer) cc
      WHERE c.cid = b.cid
        AND b.pid = p.pid
      GROUP BY c.cid, c.name, cc.cnt)
WHERE cpid = cnt
```
SQL list items bought by everyone
[ "", "sql", "oracle", "items", "" ]
`select to_char(sysdate, 'DAY') from dual;` will return `TUESDAY` correctly. But why is it that when I run `select to_char(to_date(sysdate, 'DD-MON-YYYY'), 'DAY') from dual;` on the same day, it returns `SUNDAY`?

Thanks to an `ORA-01835: day of week conflicts with Julian date` error when I try to run `select to_date('27-OCT-15', 'D') from dual;`, I have to end up converting the date (that I pass into a function dynamically) into the `DD-MON-YYYY` format before attempting to get the day of the week from it.

Does anyone know why I am seeing this inconsistency? A better example is here (I have replaced the column and table names in the example):

```
SELECT DATE, to_char(DATE, 'DAY'), to_char(to_date(DATE, 'DD-MON-YYYY'), 'DAY')
FROM DATES
ORDER BY DATE;
```

The result is:

```
DATE      TO_CHAR(D TO_CHAR(T
--------- --------- ---------
19-OCT-15 MONDAY    SATURDAY
19-OCT-15 MONDAY    SATURDAY
19-OCT-15 MONDAY    SATURDAY
21-OCT-15 WEDNESDAY MONDAY
21-OCT-15 WEDNESDAY MONDAY
02-NOV-15 MONDAY    SATURDAY
02-NOV-15 MONDAY    SATURDAY
```
> to\_date(sysdate Never apply **TO\_DATE** on a **DATE**. It forces Oracle to: * first convert it into a string * then convert it back to date based on the **locale-specific NLS settings**. You need **TO\_DATE** to convert a literal into date. For **date-arithmetic**, leave the date as it is. ``` SQL> SELECT to_char(to_date('27-OCT-2015', 'DD-MON-YYYY'), 'DAY') FROM dual; TO_CHAR(T --------- TUESDAY ``` **ROOT CAUSE** ``` SQL> alter session set nls_date_format='DD-MON-RR'; Session altered. SQL> select to_char(to_date(sysdate,'DD-MON-YYYY'),'DD-MON-YYYY') from dual; TO_CHAR(TO_ ----------- 27-OCT-0015 ``` So the year is changed to `0015`. Because, you converted **RR** format to **YYYY** for a value which is already a **DATE**. Now, the **27th day** of **October 0015** was a **SUNDAY**. ``` SQL> select to_char(to_date('27-OCT-0015','DD-MON-YYYY'),'DAY') FROM DUAL; TO_CHAR(T --------- SUNDAY ``` To understand the internal behaviour, let's see a small demo: ``` SQL> SET AUTOT ON EXPLAIN SQL> SELECT * FROM dual WHERE SYSDATE = TO_DATE(SYSDATE); no rows selected Execution Plan ---------------------------------------------------------- Plan hash value: 3752461848 --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 | |* 1 | FILTER | | | | | | | 2 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(SYSDATE@!=TO_DATE(TO_CHAR(SYSDATE@!))) ``` Do you see the `filter(SYSDATE@!=TO_DATE(TO_CHAR(SYSDATE@!)))`. What happened with the filter? **TO\_DATE** on **SYSDATE** is internally interpreted as `TO_DATE(TO_CHAR(SYSDATE@!))`. 
So, first **TO\_CHAR** is applied, which will have a format depending on your local **NLS\_DATE\_FORMAT**, and then converted back to **DATE**. --- **UPDATE** Based on OP's comments below... Use: * **TO\_DATE** to convert a date literal(string) into a **DATE**. Since, **TO\_DATE** is NLS dependent, I would prefer to mention the format and **NLS\_DATE\_LANGUAGE** explicitly at individual statement level. ``` SQL> alter session set nls_date_format='YYYY-MM-DD'; Session altered. SQL> SELECT TO_DATE('27-OCT-2015', 'DD-MON-YYYY','NLS_DATE_LANGUAGE=ENGLISH') 2 FROM DUAL; TO_DATE('2 ---------- 2015-10-27 ``` If you depend on the **client's NLS settings**, it might not work for a person who uses a different NLS in some **different location**. ``` SQL> alter session set NLS_DATE_LANGUAGE='FRENCH'; Session altered. SQL> SELECT TO_DATE('27-OCT-2015', 'DD-MON-YYYY') from dual; SELECT TO_DATE('27-OCT-2015', 'DD-MON-YYYY') from dual * ERROR at line 1: ORA-01843: not a valid month ``` * **TO\_CHAR** to convert a DATE into a string to **display** in your **desired format** using a proper **FORMAT MODEL**. Don't do it vice-versa.
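The mask-mismatch part of this can be mimicked outside Oracle. The sketch below is only a Python analogy (not Oracle's actual conversion path): render a date with a two-digit year, then take those digits at face value the way a `YYYY` mask would, and the century silently disappears:

```python
from datetime import date

d = date(2015, 10, 27)

# The implicit TO_CHAR step: render the date with a two-digit year,
# as a 'DD-MON-RR'-style NLS_DATE_FORMAT would.
s = d.strftime('%d-%b-%y')         # e.g. '27-Oct-15' in a C locale
year_text = s.rsplit('-', 1)[1]    # just the two digits, '15'

# The explicit TO_DATE step with a YYYY mask takes '15' at face value,
# so the year 2015 silently becomes the year 15 AD:
relogged = date(int(year_text), d.month, d.day)
print(relogged.isoformat())        # 0015-10-27
```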
Since `sysdate` is already a date this expression: ``` to_date(sysdate, 'DD-MON-YYYY') ``` ... is both redundant and fragile. You ask Oracle to cast a date to string using whatever value `NLS_DATE_FORMAT`, `NLS_TERRITORY` and `NLS_LANGUAGE` currently have and then parse it as date using a fixed (and potentially different) format. Garbage in, garbage out. A couple of tips: 1. Don't stringify dates without a good reason. 2. When casting back and forth, always specify the format. I'm also not very fond of using month names as exchange format; I think numbers are less error prone.
Why is Tuesday sometimes Sunday?
[ "", "sql", "oracle", "date", "" ]
I have looked around and tried a couple of solutions I have read about. I am trying to create a PL/SQL trigger that will copy data from one table to another if a condition is met. I feel I may just be making a silly syntax error rather than hitting a full critical failure, but I would appreciate some help.

```
create or replace TRIGGER TRG_APPLICATIONS
  BEFORE INSERT or UPDATE OF APP_ID, Status_id ON APPLICATIONS
  FOR EACH ROW
BEGIN
  :new.APP_ID := SEQ_APP_ID.nextval;
  :new.APP_DATE := SYSDATE;
  IF STATUS_ID = 2 OR STATUS_ID = 5 OR STATUS_ID = 7 OR STATUS_ID = 8 THEN
    INSERT INTO APP_HISTORY
    SELECT SRN, STATUS_ID, APP_DATE FROM APPLICATIONS;
  END IF;
END;
```

Here are the errors:

> 6 4 PLS-00201: identifier 'STATUS\_ID' must be declared
>
> 6 1 PL/SQL: Statement ignored
Once you note that you need to reference the values with :NEW to get the current value of STATUS of ID for this insert or update, you will then hit your second error - you can't query the table on which the trigger exists as its content is in flux. You will get a mutating table error. Not to mention that you had no where clause on the SELECT so you would be dumping all of APPLICATIONS into APP\_HISTORY. I'm betting that all you want is to copy over the row as it was before the update. Of course there was no row before the insert, so there would be nothing to copy over. Or do you want to copy the NEW values into the HISTORY table on insert? Assuming that you want to keep the old values on update, then you would: ``` create or replace TRIGGER TRG_APPLICATIONS BEFORE INSERT or UPDATE OF APP_ID, Status_id ON APPLICATIONS FOR EACH ROW BEGIN -- APP_ID better not be the PK or it is changing on UPDATE! -- IF you only want this value set once on INSERT, wrap it in an IF INSERTING ... END IF; structure :new.APP_ID := SEQ_APP_ID.nextval; :new.APP_DATE := SYSDATE; IF UPDATING AND ( :NEW.STATUS_ID = 2 OR :NEW.STATUS_ID = 5 OR :NEW.STATUS_ID = 7 OR :NEW.STATUS_ID = 8 ) THEN INSERT INTO APP_HISTORY (SRN, STATUS_ID, APP_DATE) VALUES (:OLD.SRN, :OLD.STATUS_ID, :OLD.APP_DATE); END IF; END; ``` Other thoughts on your first cut of the trigger - your trigger fires on update of app\_id, but then changes the app\_id again inside the trigger. So if your UI is setting an APP\_ID value and then using that value to insert dependent records - you've just messed that up.
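For readers without an Oracle instance handy, the conditional-archiving part of the trigger can be sketched with SQLite's trigger syntax (this is a stand-in, not PL/SQL: there are no sequences and no `:new` assignments here, just the OLD/NEW references and the history insert; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE applications (srn INTEGER, status_id INTEGER, app_date TEXT);
    CREATE TABLE app_history  (srn INTEGER, status_id INTEGER, app_date TEXT);

    -- Same shape as the corrected trigger: on UPDATE, when the NEW status
    -- is one of the interesting ones, archive the OLD row values.
    CREATE TRIGGER trg_applications
    BEFORE UPDATE ON applications
    FOR EACH ROW
    WHEN NEW.status_id IN (2, 5, 7, 8)
    BEGIN
        INSERT INTO app_history (srn, status_id, app_date)
        VALUES (OLD.srn, OLD.status_id, OLD.app_date);
    END;

    INSERT INTO applications VALUES (100, 1, '2015-10-01');
    UPDATE applications SET status_id = 2, app_date = '2015-10-27'
     WHERE srn = 100;                                   -- archived
    UPDATE applications SET status_id = 3 WHERE srn = 100;  -- not archived
""")

history = conn.execute("SELECT * FROM app_history").fetchall()
print(history)  # only the pre-update image of the first UPDATE
```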
You probably need to add `:NEW`:

```
IF :NEW.STATUS_ID = 2 OR :NEW.STATUS_ID = 5 OR :NEW.STATUS_ID = 7 OR :NEW.STATUS_ID = 8 THEN
  INSERT INTO APP_HISTORY (srn, status_id, app_date)
  VALUES (:NEW.SRN, :NEW.STATUS_ID, :NEW.APP_DATE);
END IF;
```
PL/SQL Trigger issues (copy data from table 1 to table 2)
[ "", "sql", "database", "oracle", "plsql", "triggers", "" ]
I'm using Node.js and the mysql package. I want to use a LIKE SQL statement with a variable. This is the source code:

```
var likemobile = '%'+mobile;
var query = "SELECT vtiger_contactaddress.contactaddressid as 'leadid', vtiger_contactaddress.mobile, vtiger_contactaddress.phone FROM `vtiger_contactaddress` INNER JOIN `vtiger_crmentity` ON vtiger_crmentity.crmid=vtiger_contactaddress.contactaddressid AND vtiger_crmentity.deleted=0 AND vtiger_contactaddress.mobile LIKE "+likemobile+" OR vtiger_contactaddress.phone LIKE "+likemobile;
```

and this is the error it returns:

```
Error: ER_PARSE_ERROR: You have an error in your SQL syntax; check the manual th
at corresponds to your MySQL server version for the right syntax to use near '%8
8436500 OR vtiger_leadaddress.phone LIKE %88436500' at line 1
```
If you indeed are using `node-mysql`, you should run queries as instructed in [the documentation](https://github.com/felixge/node-mysql#performing-queries). Assuming you have a `connection` object already, querying with bind variables becomes simple: ``` connection.query({ sql : "SELECT vtiger_contactaddress.contactaddressid as leadid, " + " vtiger_contactaddress.mobile, " + " vtiger_contactaddress.phone " + "FROM `vtiger_contactaddress` " + "INNER JOIN `vtiger_crmentity` " + " ON vtiger_crmentity.crmid=vtiger_contactaddress.contactaddressid" + " AND vtiger_crmentity.deleted=0 AND " + " (vtiger_contactaddress.mobile LIKE concat('%', ?) OR " + " vtiger_contactaddress.phone LIKE concat('%', ?))", values: [mobile, mobile] }, function (error, results, fields) { // error will be an Error if one occurred during the query // results will contain the results of the query // fields will contain information about the returned results fields (if any) }); ``` Things to note here: * Bind variables are used, preventing [SQL Injection](https://en.wikipedia.org/wiki/SQL_injection) * The [`concat`](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat) function is used for prefixing the (sanitized) input with the wildcard character (`%`) * The two query conditions that are `OR`ed together are now separated with parenthesis from `vtiger_crmentity.deleted=0`, which is probably what you want * You need to write the resultset-handling code in the callback function, accessing the data through the `results` and `fields` variables
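The same two ideas (bind variables plus building the wildcard inside SQL) can be shown with Python's built-in sqlite3 driver, where `'%' || ?` plays the role of `CONCAT('%', ?)`; the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, mobile TEXT, phone TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?, ?, ?)",
                 [("Ana", "09188436500", ""), ("Ben", "5551234", "88436500")])

mobile = "88436500"  # untrusted input goes in as a bind value, not SQL text
rows = conn.execute(
    """
    SELECT name FROM contacts
    WHERE mobile LIKE '%' || ?   -- build the pattern in SQL itself
       OR phone  LIKE '%' || ?
    ORDER BY name
    """,
    (mobile, mobile),
).fetchall()
print(rows)  # each row matches on one of the two columns
```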
You need to enclose the like pattern in single quotes. Two other suggestions: * Only use single quotes for string and date constants (not column aliases). * Use parameterized queries, so you are not putting user input into query strings.
error in LIKE SQL syntax
[ "", "mysql", "sql", "node.js", "" ]
My task is to combine two tables in a specific way. I have a table `Demands` that contains demands of some goods (tovar). Each record has its own ID, Tovar, Date of demand and Amount. And I have another table `Unloads` that contains unloads of tovar. Each record has its own ID, Tovar, Order of unload and Amount. `Demands` and `Unloads` are not corresponding to each other and amounts in demands and unloads are not exactly equal. One demand may be with 10 units and there can be two unloads with 4 and 6 units. And two demands may be with 3 and 5 units and there can be one unload with 11 units. The task is to get a table which will show how demands are covering by unloads. I have a solution ([SQL Fiddle](http://sqlfiddle.com/#!6/118841/1)) but I think that there is a better one. Can anybody tell me how such tasks are solved? What I have: ``` ------------------------------------------ | DemandNumber | Tovar | Amount | Order | |--------------------------------|-------- | Demand#1 | Meat | 2 | 1 | | Demand#2 | Meat | 3 | 2 | | Demand#3 | Milk | 6 | 1 | | Demand#4 | Eggs | 1 | 1 | | Demand#5 | Eggs | 5 | 2 | | Demand#6 | Eggs | 3 | 3 | ------------------------------------------ ------------------------------------------ | SaleNumber | Tovar | Amount | Order | |--------------------------------|-------- | Sale#1 | Meat | 6 | 1 | | Sale#2 | Milk | 2 | 1 | | Sale#3 | Milk | 1 | 2 | | Sale#4 | Eggs | 2 | 1 | | Sale#5 | Eggs | 1 | 2 | | Sale#6 | Eggs | 4 | 3 | ------------------------------------------ ``` What I want to receive ``` ------------------------------------------------- | DemandNumber | SaleNumber | Tovar | Amount | ------------------------------------------------- | Demand#1 | Sale#1 | Meat | 2 | | Demand#2 | Sale#1 | Meat | 3 | | Demand#3 | Sale#2 | Milk | 2 | | Demand#3 | Sale#3 | Milk | 1 | | Demand#4 | Sale#4 | Eggs | 1 | | Demand#5 | Sale#4 | Eggs | 1 | | Demand#5 | Sale#5 | Eggs | 1 | | Demand#5 | Sale#6 | Eggs | 3 | | Demand#6 | Sale#6 | Eggs | 1 | 
------------------------------------------------- ``` Here is additional explanation from author's comment: * Demand#1 needs 2 Meat and it can take them from Sale#1. * Demand#2 needs 3 Meat and can take them from Sale#1. * Demand#3 needs 6 Milk but there is only 2 Milk in Sale#3 and 1 Milk in Sale#4, so we show only available amounts. * And so on. The field `Order` in the example determine the order of calculations. We have to process Demands according to their Order. Demand#1 must be processed before Demand#2. And Sales also must be allocated according to their Order number. We cannot assign eggs from sale if there are sales with eggs with lower order and non-allocated eggs. --- The only way I can get this is using loops. Is it posible to avoid loops and solve this task only with t-sql?
If the `Amount` values are `int` and not too large (not millions), then I'd use a [table of numbers](http://web.archive.org/web/20141209041043/http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html) to generate as many rows as the value of each `Amount`. Here is a good [article](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1) describing how to generate it. Then it is easy to join `Demand` with `Sale` and group and sum as needed. Otherwise, a plain straight-forward cursor (in fact, two cursors) would be simple to implement, easy to understand and with `O(n)` complexity. If `Amounts` are small, set-based variant is likely to be faster than cursor. If `Amounts` are large, cursor may be faster. You need to measure performance with actual data. Here is a query that uses a table of numbers. To understand how it works run each query in the CTE separately and examine its output. [SQLFiddle](http://sqlfiddle.com/#!6/edcf1/1/0) ``` WITH CTE_Demands AS ( SELECT D.DemandNumber ,D.Tovar ,ROW_NUMBER() OVER (PARTITION BY D.Tovar ORDER BY D.SortOrder, CA_D.Number) AS rn FROM Demands AS D CROSS APPLY ( SELECT TOP(D.Amount) Numbers.Number FROM Numbers ORDER BY Numbers.Number ) AS CA_D ) ,CTE_Sales AS ( SELECT S.SaleNumber ,S.Tovar ,ROW_NUMBER() OVER (PARTITION BY S.Tovar ORDER BY S.SortOrder, CA_S.Number) AS rn FROM Sales AS S CROSS APPLY ( SELECT TOP(S.Amount) Numbers.Number FROM Numbers ORDER BY Numbers.Number ) AS CA_S ) SELECT CTE_Demands.DemandNumber ,CTE_Sales.SaleNumber ,CTE_Demands.Tovar ,COUNT(*) AS Amount FROM CTE_Demands INNER JOIN CTE_Sales ON CTE_Sales.Tovar = CTE_Demands.Tovar AND CTE_Sales.rn = CTE_Demands.rn GROUP BY CTE_Demands.Tovar ,CTE_Demands.DemandNumber ,CTE_Sales.SaleNumber ORDER BY CTE_Demands.DemandNumber ,CTE_Sales.SaleNumber ; ``` --- Having said all this, usually it is better to perform this kind of processing on the client using procedural 
programming language. You still have to transmit all rows from `Demands` and `Sales` to the client. So, by joining the tables on the server you don't reduce the amount of bytes that must go over the network. In fact, you increase it, because original row may be split into several rows. This kind of processing is sequential in nature, not set-based, so it is easy to do with arrays, but tricky in SQL.
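For readers who want to try the tally-table approach without SQL Server, here is a compact SQLite sketch of the same unit-by-unit matching on a two-demand, one-sale subset; it builds the numbers on the fly with a recursive CTE instead of a persistent numbers table, and needs SQLite 3.25+ for `ROW_NUMBER()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Demands (DemandNumber TEXT, Tovar TEXT, Amount INT, SortOrder INT);
    CREATE TABLE Sales   (SaleNumber   TEXT, Tovar TEXT, Amount INT, SortOrder INT);
    INSERT INTO Demands VALUES ('Demand#1', 'Meat', 2, 1), ('Demand#2', 'Meat', 3, 2);
    INSERT INTO Sales   VALUES ('Sale#1',  'Meat', 6, 1);
""")

rows = conn.execute("""
    WITH RECURSIVE Numbers(n) AS (     -- tally table, built on the fly
        SELECT 1 UNION ALL SELECT n + 1 FROM Numbers WHERE n < 100
    ),
    D AS (   -- one row per demanded unit, numbered per Tovar
        SELECT DemandNumber, Tovar,
               ROW_NUMBER() OVER (PARTITION BY Tovar ORDER BY SortOrder, n) AS rn
        FROM Demands JOIN Numbers ON n <= Amount
    ),
    S AS (   -- one row per sold unit, numbered per Tovar
        SELECT SaleNumber, Tovar,
               ROW_NUMBER() OVER (PARTITION BY Tovar ORDER BY SortOrder, n) AS rn
        FROM Sales JOIN Numbers ON n <= Amount
    )
    SELECT D.DemandNumber, S.SaleNumber, D.Tovar, COUNT(*) AS Amount
    FROM D JOIN S ON S.Tovar = D.Tovar AND S.rn = D.rn
    GROUP BY D.DemandNumber, S.SaleNumber, D.Tovar
    ORDER BY D.DemandNumber, S.SaleNumber
""").fetchall()
print(rows)  # Sale#1's six units are split 2 + 3 across the two demands
```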
I have no idea what your requirements are or what the business rules or goals are, but I can say this: you are doing it wrong. This is SQL. In SQL you do not do loops. In SQL you work with sets. Sets are defined by select statements. If this problem is not resolved with a select statement (maybe with sub-selects) then you probably want to implement this in another way (a C# program? some other ETL system?).

However, I can also say there is probably a way to do this with a single select statement. But you have not given enough information for me to know what that statement is. To say you have a working example and that should be enough fails on this site, because this site is about answering questions about problems, and you don't have a problem, you have some code. Re-phrase the question with inputs, expected outputs, what you have tried, and what your question is. This is covered well in the FAQ. Or, if you have working code you want reviewed, it may be appropriate for the code review site.
How to combine two tables allocating Sold amounts vs Demand without loops/cursor
[ "", "sql", "sql-server", "t-sql", "sql-server-2014", "" ]
I am trying to extract distinct values from pipe-delimited text. When I searched on Google I got the expression below, but it is not working in some cases. E.g.:

```
select regexp_replace('Bhal|Bhal|Bhal|Bhaloo|Bhaloo|Bhaloo|Bhaloooo|Bhaloooo|Bhaloooo|Baker|Baker|Baker',
                      '([^|]+)(\|\1)+', '\1')
from dual;
```

Expected output:

> Bhal|Bhaloo|Bhaloooo|Baker

I have tried some combinations in the regex but it is not working for me. Any help would be appreciated.
This one sure is a challenge. First understand why the original was failing. The first string found of `'Bhal'` also was the first part of the second string `'Bhaloo'`. So the part of the string matched by the original regex of `'([^|]+)(\|\1)+'` (read as: match a group of one or more characters that are not a pipe followed by one or more groups consisting of a pipe followed by the string remembered in the first group) included the first 4 characters of the first occurrence of `Bhaloo`, causing the regex engine to consume those characters from the string as it was processed. The same for the remaining patterns found. The key is to include the ending pattern too, which would be the ending pipe or the end-of-line character if the regex engine is at the end of the string. Here I added the ending pattern group of `(\||$)` which reads as 'where followed by a pipe or the end of the line'. This ensures if the string happens to match the beginning of the next string, it will not be consumed by the regex engine. Then the replace pattern adds the end string as `\3` to ensure it gets printed in the output (basically adds it back since it got consumed by checking for it). ``` SQL> select regexp_replace('ABhal|Bhal|Bhal|Bhal|Bhaloo|Bhaloo|Bhaloo|Bhaloooo|Bhaloooo|Bhaloooo|||||Baker|Baker|Baker', 2 '([^|]*)(\|\1)*(\||$)', '\1\3') as unique_values 3 from dual; UNIQUE_VALUES --------------------------------- ABhal|Bhal|Bhaloo|Bhaloooo||Baker SQL> ``` EDIT: Slight tweak handles NULLS when in between other values. Not sure how useful this really is. Changed test case. Also changed the regex to match zero or more instead of one or more (asterisk instead of the plus sign). Caveats: I took my own advice and tested with unexpected values. Always expect the unexpected! Perhaps these could be factors for you? This expects the list to already be in order. i.e. if there is another 'Bhal' at the end, it will be treated as a new value. Nulls are not handled gracefully either. Well, sort of. 
Changed test case above to illustrate.
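The corrected pattern ports directly to other regex engines as well. A Python sketch of the same idea, using the sample string from the question (the terminator group `(\||$)` is put back via `\3`, exactly as in the Oracle version):

```python
import re

s = ('Bhal|Bhal|Bhal|Bhaloo|Bhaloo|Bhaloo|'
     'Bhaloooo|Bhaloooo|Bhaloooo|Baker|Baker|Baker')

# A value, any adjacent repeats of it, then the terminator (pipe or end
# of string); the terminator is re-emitted so the list stays delimited.
deduped = re.sub(r'([^|]*)(\|\1)*(\||$)', r'\1\3', s)
print(deduped)  # Bhal|Bhaloo|Bhaloooo|Baker
```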
I had to add a | at the end of the string to make it work, so it's not the most elegant solution, but I believe it works: ``` select rtrim(regexp_replace('Bhal|Bhal|Bhal|Bhaloo|Bhaloo|Bhaloo|Bhaloooo|Bhaloooo|Baker|Baker|Baker'||'|' , '([^|]+\|)(\1)+', '\1'),'|')from dual ```
Distinct values from regexp_replace in oracle not working
[ "", "sql", "regex", "oracle", "oracle11g", "" ]
**Is there any way in which I could use a window function like `over (partition by column)` without using it as an aggregate function?** I have lots of columns and **I don't want to use group by** since I'll then have to specify both in the select and in the group by. I am giving a syntax example, which needs somehow to be corrected (by you guys cause it's not working when I adapt it on my real query(real query is too long and time consuming to explain it so just go with an example)). let's assume this works: ``` select *, ( select sum (column1) over (partition by column2) as sumCol1 from myTable where column20 = column21 ) from myTable ``` ok, now I want to do the same thing with two changes: **1:** NO aggregate function **2:** `column1` this time would be a `DATE` (I wouldn't be able to use an aggregate function with a `date` as far as I know, but as I am trying eliminate the aggregate, that shouldn't matter.) What i want should be somehow like this (query not correct, since this is what i'm trying to achieve) ``` select *, ( select column1 over (partition by column2) as Col1New from myTable where column20 = column21 ) from myTable ``` SQL Server 2012 Thanks edit: sample data: ``` rN rD rnc d e name abc1m 2010-03-31 abc 5.7 2 blue abc3m 2010-04-15 abc 5.7 3 blue abc1y 2010-02-14 abc 5.7 4 blue xfx1m 2010-02-31 xfx 1.7 2 blue xfx3m 2010-03-24 xfx 1.7 1 blue xfx1y 2012-03-30 xfx 1.7 1.7 red <= d=e use this date for "red" rows tnt1m 2010-03-28 tnt 9.6 2 red tnt3m 2010-01-12 tnt 9.6 9.6 blue <= d=e use this date for "blue" rows tnt1y 2010-08-20 tnt 9.6 2 red ``` Expected table, please look at `expectedCol` ``` rN rD rnc d e name expectedCol abc1m 2010-03-31 abc 5.7 2 blue 2010-01-12 abc3m 2010-04-15 abc 5.7 3 blue 2010-01-12 abc1y 2010-02-14 abc 5.7 4 blue 2010-01-12 xfx1m 2010-02-31 xfx 1.7 2 blue 2010-01-12 xfx3m 2010-03-24 xfx 1.7 1 blue 2010-01-12 xfx1y 2012-03-30 xfx 1.7 1.7 red 2012-03-30 tnt1m 2010-03-28 tnt 9.6 2 red 2012-03-30 tnt3m 2010-01-12 tnt 9.6 9.6 
blue 2010-01-12 tnt1y 2010-08-20 tnt 9.6 2 red 2012-03-30 ``` The logic is this: when d = e then look at rD and take that date and put it into expectedCol1 group by name So, I would want to write soemthing like this: ``` select *, (select rD over (partition by name) as expectedCol1 from myTable where d = e) from myTable ```
Just calculate the `rD` for each `name` where `d = e` ``` WITH myDate AS ( SELECT name, rD FROM YourTable WHERE d = e ) SELECT t.*, m.rD as expectedCol FROM YourTable t JOIN myDate m ON t.name = m.name ```
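Using the question's own rows, the CTE-plus-join can be exercised in SQLite (only four of the nine rows are loaded here to keep it short; the result matches the `expectedCol` column above for those rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE myTable (rN TEXT, rD TEXT, name TEXT, d REAL, e REAL);
    INSERT INTO myTable VALUES
        ('xfx1y', '2012-03-30', 'red',  1.7, 1.7),   -- d = e: anchor row
        ('tnt1m', '2010-03-28', 'red',  9.6, 2),
        ('tnt3m', '2010-01-12', 'blue', 9.6, 9.6),   -- d = e: anchor row
        ('abc1m', '2010-03-31', 'blue', 5.7, 2);
""")

rows = conn.execute("""
    WITH myDate AS (SELECT name, rD FROM myTable WHERE d = e)
    SELECT t.rN, m.rD AS expectedCol
    FROM myTable t JOIN myDate m ON t.name = m.name
    ORDER BY t.rN
""").fetchall()
print(rows)  # every row inherits the rD of its name's d = e row
```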
From your sample data, something simpler like this looks like it should work: ``` select t1.*,t2.rD as expectedCol1 from myTable t1 inner join (select name,rD from myTable where e = d) t2 on t1.name = t2.name ``` Since you've stated that the e=d combination only occurs once for each name, the `t2` subquery should contain one row for each name. If some names may not have any rows where e=d, if you want them included, you should change to a `left join` and then consider what `expectedCol1` should be in that case.
over partition by without an aggregate function, avoiding group by
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "window-functions", "" ]
I'm trying to fetch data from a table where the id is not in another table. I referenced some code examples to solve this, but it still didn't work. I tried two styles. My attempted SQL code is below; please help. Thanks.

**Style 1**

```
SELECT *
FROM quotation q
WHERE q.quotationId NOT IN (SELECT ip.quotation_id FROM invest_project ip)
```

**Style 2**

```
SELECT *
FROM quotation q
LEFT JOIN invest_project ip ON q.quotationId = ip.quotation_id
WHERE q.quotationId IS NULL
```
```
Select * from [Table1] AS T1
where not exists(
    select * from [Table2] AS T2
    where T1.ID = T2.ID)
```

This works for me.
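A side note worth knowing: `NOT EXISTS` and `NOT IN` differ when the subquery can return NULLs, which may be exactly why the original `NOT IN` attempt appeared not to work. A small SQLite sketch with the question's table names (the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE quotation (quotationId INTEGER);
    CREATE TABLE invest_project (quotation_id INTEGER);
    INSERT INTO quotation VALUES (1), (2), (3);
    INSERT INTO invest_project VALUES (2), (NULL);  -- note the NULL
""")

not_exists = conn.execute("""
    SELECT q.quotationId FROM quotation q
    WHERE NOT EXISTS (SELECT 1 FROM invest_project ip
                      WHERE ip.quotation_id = q.quotationId)
    ORDER BY 1
""").fetchall()

not_in = conn.execute("""
    SELECT quotationId FROM quotation
    WHERE quotationId NOT IN (SELECT quotation_id FROM invest_project)
""").fetchall()

# NOT EXISTS finds the missing ids; NOT IN returns nothing at all,
# because x NOT IN (..., NULL) is never true.
print(not_exists, not_in)
```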
Try without alias... ``` SELECT * FROM quotation WHERE quotation.quotationId NOT IN (SELECT quotation_id FROM invest_project) ```
select fields from table where id not in another table in mysql [not working]
[ "", "mysql", "sql", "database", "mysql-workbench", "dml", "" ]
I have one table called `Incidents`. I want a SQL query that returns the count of incidents split out by year, by month. The start year is 2010, however, the end year will be variable. Example `Incidents` table: ``` DateLogged IncidentRef ----------- ------------ 2015-04-05 1 2014-06-04 2 2013-01-01 3 2012-12-10 4 2011-10-15 5 2010-10-01 6 2012-12-11 7 2011-10-10 8 2010-10-10 9 ``` Query Returns: ``` Year Jan Fev Mar Abr Mai Jun Jul Ago Set Out Nov Dez 2010 0 0 0 0 0 0 0 0 0 2 0 0 2011 0 0 0 0 0 0 0 0 0 2 0 0 2012 0 0 0 0 0 0 0 0 0 0 0 2 2013 1 0 0 0 0 0 0 0 0 0 0 0 2014 0 0 0 0 0 1 0 0 0 0 0 0 2015 0 0 0 1 0 0 0 0 0 0 0 0 ``` How can I do that query? Where do I start?
One option is to use conditional aggregation: ``` select year(datelogged), sum(case when month(datelogged) = 1 then 1 else 0 end) Jan, sum(case when month(datelogged) = 2 then 1 else 0 end) Feb, ..., sum(case when month(datelogged) = 12 then 1 else 0 end) Dec from yourtable group by year(datelogged) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/82af54/1)
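The conditional-aggregation pattern can be tried on the question's sample rows in SQLite, with `strftime` standing in for SQL Server's `YEAR()`/`MONTH()`; only three of the twelve month columns are spelled out here, the real query just repeats the same `SUM(CASE ...)` once per month:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incidents (DateLogged TEXT, IncidentRef INT)")
conn.executemany("INSERT INTO incidents VALUES (?, ?)", [
    ("2015-04-05", 1), ("2014-06-04", 2), ("2013-01-01", 3),
    ("2012-12-10", 4), ("2011-10-15", 5), ("2010-10-01", 6),
    ("2012-12-11", 7), ("2011-10-10", 8), ("2010-10-10", 9),
])

rows = conn.execute("""
    SELECT CAST(strftime('%Y', DateLogged) AS INT) AS yr,
           SUM(CASE WHEN strftime('%m', DateLogged) = '04' THEN 1 ELSE 0 END) AS Apr,
           SUM(CASE WHEN strftime('%m', DateLogged) = '10' THEN 1 ELSE 0 END) AS Oct,
           SUM(CASE WHEN strftime('%m', DateLogged) = '12' THEN 1 ELSE 0 END) AS Dec_
    FROM incidents
    GROUP BY yr
    ORDER BY yr
""").fetchall()
for r in rows:
    print(r)   # e.g. (2010, 0, 2, 0): two October incidents in 2010
```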
With pivoting: ``` ;with cte as(select year(date) y, month(date) m, ref from table) select * from cte pivot(count(ref) for m in([1],[2],...,[12]))p ```
SQL query to count incidents by year by month
[ "", "sql", "sql-server", "count", "" ]
I have the following tables:

1. restaurants:

```
Restaurant_ID | Restaurant_Name | Phone_1
```

2. menu:

```
Item_ID | Rest_ID | Item_Name | Item_price
```

I want this output:

```
Restaurant_ID | Item_Name | Item_price
```

I tried this query:

```
SELECT r.Restaurant_ID, m.Item_Name, m.Item_price
FROM restaurants r, menu m
WHERE r.Restaurant_ID = (SELECT `Rest_ID` FROM menu WHERE Item_ID=1)
LIMIT 0,1;
```

But the output of this query is all rows from both tables. I don't actually want to display all rows, which is why I used LIMIT, but I'm not getting the desired output.
Try this: ``` SELECT r.Restaurant_ID, m.Item_Name, m.Item_price FROM restaurants r LEFT JOIN menu m ON r.Restaurant_ID=m.Rest_ID WHERE Item_ID=1 ```
This will Get you the desired result ``` SELECT r.Restaurant_ID, m.Item_Name, m.Item_price FROM restaurants r JOIN ( SELECT Item_ID, Rest_ID, Item_Name, Item_price FROM Menu GROUP BY Item_ID, Rest_ID, Item_Name, Item_price) m ON r.Restaurant_ID = m.Rest_ID WHERE m.Item_Id = 1 ```
MySQL display columns from two tables without repeatation
[ "", "mysql", "sql", "" ]
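Here is the accepted JOIN shape run end-to-end on SQLite; the schema follows the question, while the restaurant names and prices are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE restaurants (Restaurant_ID INTEGER, Restaurant_Name TEXT, Phone_1 TEXT);
CREATE TABLE menu (Item_ID INTEGER, Rest_ID INTEGER, Item_Name TEXT, Item_price REAL);
INSERT INTO restaurants VALUES (1, 'Rosati', '555-0100'), (2, 'Benny''s', '555-0101');
INSERT INTO menu VALUES (1, 1, 'Margherita', 8.5), (2, 2, 'Calzone', 9.0);
""")

# Join on the Rest_ID foreign key; the WHERE clause picks a single item,
# so no LIMIT is needed.
rows = conn.execute("""
    SELECT r.Restaurant_ID, m.Item_Name, m.Item_price
    FROM restaurants r
    LEFT JOIN menu m ON r.Restaurant_ID = m.Rest_ID
    WHERE m.Item_ID = 1
""").fetchall()
print(rows)  # [(1, 'Margherita', 8.5)]
```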
I am trying to sum the values from multiple rows. For example: **My output is:** ``` ID Value --------- 1 3 1 4 ``` **What I would like to see is:** ``` ID Value --------- 1 7 ``` **This is my code:** ``` SELECT id CASE sum(cast(value as float)) WHEN 0 THEN [other_value] else ISNULL([value] ,'') end AS 'Value' FROM table1 WHERE id = 1 GROUP BY id ``` I saw some solutions online saying I had to include the `GROUP BY` in order to avoid this error: > is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause But still no sum. **NOTE:** The values are `varchar`; therefore, I need the `cast`.
You are trying to sum a string (even if you cast it) The query will only work if your ISNULL goes to a 0 or some numeric value ``` SELECT id, SUM(CASE WHEN CAST(value AS FLOAT) = 0 THEN CAST([other_value] AS FLOAT) ELSE ISNULL([value], 0) END) AS 'Value' FROM table1 WHERE id = 1 GROUP BY id ```
You just need to `group by` the id column. ``` select id, sum(cast(value as numeric)) from tablename group by id ``` Edit: ``` SELECT id, sum ( case when cast(value as numeric) = 0 THEN cast([other_value] as numeric) else cast(ISNULL([value] ,0) as numeric) ) end AS totalvalue FROM table1 GROUP BY id ```
SQL Server: Sum values from multiple rows into one row
[ "", "sql", "sql-server", "" ]
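The core of the fix, cast each varchar value to a number and then aggregate, can be checked on SQLite; the `other_value` fallback from the answers is left out for brevity, and the sample rows are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, value TEXT);  -- value stored as text, as in the question
INSERT INTO table1 VALUES (1, '3'), (1, '4'), (2, '10');
""")

# CAST each text value to a number before summing, grouped by id.
rows = conn.execute("""
    SELECT id, SUM(CAST(value AS REAL)) AS total
    FROM table1
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 7.0), (2, 10.0)]
```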
I am in need of some help with figuring out how to go about using a **REGEX** to determine what number is *after* the **TOP** in a `SELECT` query string. Example(s): ``` SELECT top 1000 First_name, Last_name FROM PERS_DAT SELECT top 50 First_name, Middle_name, Last_Name FROM PERS_DAT SELECT top 105 Last_name FROM PERS_DAT ``` From the example query strings above, I would like it to output as follows: ``` 1000 50 105 ``` How would the **REGEX** be in order to find that information out and replace it with my own value?
It depends on how flexible the pattern needs to be (how many different kinds of variations in the input that it needs to support), but something like this would be a good start for a regex pattern: ``` (?<=^\s*SELECT\s+TOP\s+)\d+ ``` You need to specify that the pattern matching is case-insensitive so that it will match any of the following: ``` SELECT TOP 50 SELECT top 50 select top 50 ... etc. ``` Also, if the input string will contain multiple SQL statements, you'll need to specify the multi-line option so that the `^` matches the beginning of each line rather than the beginning of the string. Here's a complete example: ``` Dim input As String = _ "SELECT top 1000 First_name, Last_name FROM PERS_DAT" & Environment.NewLine & "SELECT top 50 First_name, Middle_name, Last_Name FROM PERS_DAT" & Environment.NewLine & "SELECT top 105 Last_name FROM PERS_DAT" Dim pattern As String = "(?<=^\s*SELECT\s+TOP\s+)\d+" For Each m As Match In RegEx.Matches(input, pattern, RegexOptions.IgnoreCase Or RegexOptions.Multiline) Console.WriteLine(m.Value) Next ``` The regex pattern uses a look-behind (i.e. `(?<=)`) statement to specify text that must precede any valid match. The only part of the input string that is captured as the value of the match is the `\d+` part (i.e. the number). Everything else is just saying that the number must be preceded by the beginning of a new line followed by the words "select" and "top" separated by any number of white-space characters. **Update 1** Based on your comments below, here is an example of how you can use `Regex.Replace` to replace the number with a different value: ``` Dim input As String = "SELECT top 1000 First_name, Last_name FROM PERS_DAT" Dim pattern As String = "(?<=^\s*SELECT\s+TOP\s+)\d+" Dim output As String = Regex.Replace(input, pattern, Function(x) (Integer.Parse(x.Value) * 10).ToString(), RegexOptions.IgnoreCase) ``` In the above example, I am passing a lambda expression for the `MatchEvaluator` parameter. The lambda converts the matched number to an `Integer`, multiplies it by `10`, and then converts the result to a string and returns that as the replacement value. So, by providing that lambda expression, it will cause it to replace `SELECT top 1000` with `SELECT top 10000`. **Update 2** As a more complex example, if you created a method like this: ``` Private Function CapAt350(m As Match) As String If Integer.Parse(m.Value) < 350 Then Return m.Value Else Return "350" End If End Function ``` Then you could use it as your match evaluator like this: ``` Dim input As String = "SELECT top 1000 First_name, Last_name FROM PERS_DAT" Dim pattern As String = "(?<=^\s*SELECT\s+TOP\s+)\d+" Dim output As String = Regex.Replace(input, pattern, AddressOf CapAt350, RegexOptions.IgnoreCase) ```
If I understood your question correctly. ``` Imports System.Text.RegularExpressions ``` code: ``` Dim qry As String qry = "SELECT top 1000 First_name, Last_name FROM PERS_DAT" qry = qry.Replace(Integer.Parse(Regex.Replace(qry, "[^\d]", "")), "10") /*'here 10 is a custom value,If you want replace 1000 with 10 /*'output : SELECT top 10 First_name, Last_name FROM PERS_DAT ``` If you want then create a function like below ``` Function Tune_Query_String(ByVal qry As String, ByVal myval As String) Tune_Query_String = qry.Replace(Integer.Parse(Regex.Replace(qry, "[^\d]", "")), myval) Return Tune_Query_String End Function ``` usage: ``` Dim qry As String qry = "SELECT top 50 First_name, Middle_name, Last_Name FROM PERS_DAT" qry = Tune_Query_String(qry,"5") /*'output:SELECT top 5 First_name, Middle_name, Last_Name FROM PERS_DAT,replaced 50 with 5 ```
RegEX to replace "TOP ZZZZ" in SQL Server query string
[ "", "sql", "sql-server", "regex", "vb.net", "sql-server-2012", "" ]
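The same idea ports to other regex engines, with one wrinkle: Python's `re` only allows fixed-width look-behinds, so the variable-width `(?<=^\s*SELECT\s+TOP\s+)` has to become a capture group instead. A sketch using the question's sample queries:

```python
import re

queries = """SELECT top 1000 First_name, Last_name FROM PERS_DAT
SELECT top 50 First_name, Middle_name, Last_Name FROM PERS_DAT
SELECT top 105 Last_name FROM PERS_DAT"""

# Group 1 keeps the "SELECT TOP " prefix, group 2 is the number itself.
pattern = re.compile(r"^(\s*SELECT\s+TOP\s+)(\d+)", re.IGNORECASE | re.MULTILINE)

numbers = [m.group(2) for m in pattern.finditer(queries)]
print(numbers)  # ['1000', '50', '105']

# Replace each number, capping it at 350, while keeping the prefix as-is.
capped = pattern.sub(
    lambda m: m.group(1) + ("350" if int(m.group(2)) > 350 else m.group(2)),
    queries)
print(capped.splitlines()[0])  # SELECT top 350 First_name, Last_name FROM PERS_DAT
```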
I have a table that logs logins for each user and the current points total of that user at each login. What I'd like to do is get their first login for each day and calculate the points difference between their previous days' points total. To make things clearer, I have written a statement that gets each user's min login per day as follows: ``` SELECT loginLog.username, A.logInDate, loginLog.pointsTotal FROM loginLog JOIN (SELECT MIN(logID) AS logID, username, CAST(logInTime AS DATE) AS logInDate FROM loginLog GROUP BY username, CAST(logInTime AS DATE)) A ON loginLog.logID = A.logID ORDER BY username, logInDate DESC ``` Which produces the following results set: ``` username logInDate pointsTotal user1 2015-10-28 82685 user1 2015-10-27 51330 user1 2015-10-26 7810 user2 2015-10-28 221223 user2 2015-10-27 207234 user2 2015-10-26 178781 user3 2015-10-28 616120 user3 2015-10-27 598715 user3 2015-10-26 591289 user4 2015-10-28 187654 user4 2015-10-27 198378 user4 2015-10-26 115014 user5 2015-10-28 248138 user5 2015-10-27 224729 user5 2015-10-26 216229 user6 2015-10-28 68546 user6 2015-10-28 24139 user6 2015-10-27 33171 user6 2015-10-27 6459 user6 2015-10-26 6391 ``` So for example, on the first record I'd like to add a column dailyGrowth that would calculate 82685 - 51330, on the second record if would calculate 51330 - 7810 etc etc. Is this possible?
Since you are using `Sql Server 2012`, `lead` window function is easiest here: ``` DECLARE @t TABLE ( username VARCHAR(10) , logInDate DATE , pointsTotal INT ) INSERT INTO @t VALUES ( 'user1', '2015-10-28', 82685 ), ( 'user1', '2015-10-27', 51330 ), ( 'user1', '2015-10-26', 7810 ), ( 'user2', '2015-10-28', 221223 ), ( 'user2', '2015-10-27', 207234 ), ( 'user2', '2015-10-26', 178781 ) select *, pointsTotal - lead(pointsTotal) over(partition by username order by logInDate desc) AS dailyGrowth from @t ``` Output: ``` username logInDate pointsTotal dailyGrowth user1 2015-10-28 82685 31355 user1 2015-10-27 51330 43520 user1 2015-10-26 7810 NULL user2 2015-10-28 221223 13989 user2 2015-10-27 207234 28453 user2 2015-10-26 178781 NULL ``` To use your existing query: ``` select *, pointsTotal - lead(pointsTotal) over(partition by username order by logInDate desc) AS dailyGrowth from (existing query goes here)t ```
Use correlated sub-query to find previous row's `pointsTotal`: ``` select t1.username, t1.logInDate, t1.pointsTotal, t1.pointsTotal - (select TOP 1 t2.pointsTotal from tablename t2 where t2.username = t1.username and t2.logInDate < t1.logInDate order by t2.logInDate desc) from tablename t1 ``` (`SELECT TOP 1` is SQL Server specific. The ANSI SQL way is to add `fetch first 1 row only` after the `ORDER BY` clause. Some other dbms products have `LIMIT 1` instead.)
SQL - Calculate difference between two records
[ "", "sql", "sql-server", "sql-server-2012", "" ]
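The `LEAD` answer is easy to test locally, since SQLite (3.25+) supports the same window function; one user's rows from the question are enough to see the day-over-day difference.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions require SQLite 3.25+
conn.executescript("""
CREATE TABLE loginLog (username TEXT, logInDate TEXT, pointsTotal INTEGER);
INSERT INTO loginLog VALUES
 ('user1', '2015-10-28', 82685),
 ('user1', '2015-10-27', 51330),
 ('user1', '2015-10-26', 7810);
""")

# LEAD reads the next row of the date-descending partition, i.e. the
# previous day's total; the earliest day has no successor, hence NULL.
rows = conn.execute("""
    SELECT username, logInDate, pointsTotal,
           pointsTotal - LEAD(pointsTotal) OVER
               (PARTITION BY username ORDER BY logInDate DESC) AS dailyGrowth
    FROM loginLog
    ORDER BY logInDate DESC
""").fetchall()
print(rows)
```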
I need an SQL query that merge some results, but they have to match... For example, see this image : [![Image of the tables](https://i.stack.imgur.com/UgoLI.jpg)](https://i.stack.imgur.com/UgoLI.jpg) How can I do this? Is it possible? Thanks!
``` select t1.product, t1.quantity, t1.avgprice, coalesce(t2.buyprice, 0) from table1 t1 left join table2 t2 on t1.product = t2.product ``` The `LEFT JOIN` is there to also return table1 rows without a matching table2 row. `coalesce(t2.buyprice, 0)` is used to return 0 if no t2.buyprice (i.e. t2.buyprice is null).
You can use left join, like this: ``` SELECT t1.product, t1.quantity, t1.avgprice, ISNULL(t2.buyprice, 0) FROM table1 t1 LEFT JOIN table2 t2 ON t1.product = t2.product ```
Join SQL results in one table
[ "", "sql", "sql-server", "" ]
The database is MySQL. For example, I have one table and several SELECT queries with GROUP BY: ``` SELECT MIN(price) FROM `table` GROUP BY field1, field2, field3, field4; SELECT MIN(price) FROM `table` GROUP BY field1, field2, field3, field5; SELECT MIN(price) FROM `table` GROUP BY field1, field2, field3, field6; ``` All queries perform grouping by field1, field2, field3. Is there any way to optimize or cache the same operations?
What you are trying to do sounds simple but it actually only makes sense from a "code conservation" perspective (meaning less code, not really less work for the database). In reality, "GROUP BY field1, field2, field3" is actually a subset of what's returned by "GROUP BY field1, field2, field3, field4". Let me illustrate with a data set: ``` price | field1 | field2 | field3 | field4 ------|--------|--------|--------|------- 1.00 | 1 | 1 | 1 | 1 1.50 | 1 | 1 | 1 | 2 2.00 | 1 | 1 | 2 | 3 3.00 | 1 | 1 | 2 | 3 ``` "GROUP BY field1, field2, field3" returns: ``` min_price | field1 | field2 | field3 ----------|--------|--------|-------- 1.00 | 1 | 1 | 1 2.00 | 1 | 1 | 2 ``` "GROUP BY field1, field2, field3, field4" returns *more* rows: ``` min_price | field1 | field2 | field3 | field4 ----------|--------|--------|--------|------- 1.00 | 1 | 1 | 1 | 1 1.50 | 1 | 1 | 1 | 2 2.00 | 1 | 1 | 2 | 3 ``` As you can see, you cannot somehow reuse the first group by statement to get the second result set. If performance is your question, @deadzone's suggestion of a materialized view is a good one. You could create a materialized view on a group by of *all* the fields if it would pare down the rows some (GROUP BY field1, field2, field3, field4, field5, field6). Other than that, you'll just need to ensure that each query is optimized. If code conservation is your concern, MySQL doesn't give you a lot of options because it doesn't support dynamic SQL. SQL is a language where the best option is all too often writing more code (insert inner programmer **sigh**).
If you're looking to improve the performance of your select queries that use aggregated data, I'd suggest looking into one or more [Materialized Views](http://www.fromdual.com/mysql-materialized-views). This is (under the covers) somewhat similar to having additional tables. But they are views on the source table and will need to be refreshed periodically. And although creating/refreshing the MV's might not be very fast, querying them for these queries should provide significant performance gains.
Optimizing several queries with GROUP BY that starts with the same fields
[ "", "mysql", "sql", "group-by", "" ]
I need to create a trigger that does the following: if the column "idStatoTicket" is updated to the value "3", I have to set another column ("dataChiusura") to the current timestamp. I've tried this: ``` CREATE TRIGGER Customer_UPDATE ON TICKET FOR UPDATE AS BEGIN DECLARE @dataChiusura datetime IF UPDATE(idStatoTicket = 3) BEGIN SET @dataChiusura = CURRENT_TIMESTAMP END INSERT INTO TICKET(dataChiusura) VALUES(@dataChiusura) END ``` Any help?
You need to join the `INSERTED` to your original table to be able to change the value to what you want it to be. Here is a working example for you: ``` Drop Table MyTable GO Create Table MyTable ( Id Int Identity (1, 1), Value VarChar (100), idStatoTicket Int, DataChiusura DateTime ) GO Insert Into MyTable Values ('Test 1', 1, GetDate()) Insert Into MyTable Values ('Test 2', 3, GetDate()-1) Select * From MyTable GO Create Trigger MyTable_Update ON MyTable FOR Update as Begin Update T Set DataChiusura = GetDate() From MyTable T Inner Join Inserted i On T.Id = i.id And i.idStatoTicket = 3 End GO Update MyTable Set Value = 'Test 22' Where Value = 'Test 2' Select * From MyTable ```
You should use the INSERTED data set to check for the value. Please try something like that: ``` if exists ( select * from INSERTED where idStatoTicket = 3 ) begin SET @dataChiusura = CURRENT_TIMESTAMP end ```
Create a trigger in sql server after a column is update to specified value
[ "", "sql", "sql-server", "triggers", "" ]
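SQLite's trigger dialect differs from SQL Server's (row-level `NEW` values instead of the `inserted` pseudo-table), but the same rule, stamp `dataChiusura` when `idStatoTicket` becomes 3, can be sketched and tested like this; the schema is a simplified guess at the question's table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ticket (id INTEGER PRIMARY KEY, idStatoTicket INTEGER, dataChiusura TEXT);
INSERT INTO ticket (idStatoTicket) VALUES (1), (1);

-- Row-level trigger: when a ticket moves to state 3, stamp the close date.
CREATE TRIGGER ticket_closed AFTER UPDATE OF idStatoTicket ON ticket
WHEN NEW.idStatoTicket = 3
BEGIN
    UPDATE ticket SET dataChiusura = CURRENT_TIMESTAMP WHERE id = NEW.id;
END;
""")

conn.execute("UPDATE ticket SET idStatoTicket = 3 WHERE id = 1")
rows = conn.execute(
    "SELECT id, dataChiusura IS NOT NULL FROM ticket ORDER BY id").fetchall()
print(rows)  # [(1, 1), (2, 0)] -- only the closed ticket got a timestamp
```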
I am using Postgres version 9.4 and I have a `full_name` field in a table. In some cases, I want to put initials instead of the full\_name of the person in my table. Something like: ``` Name | Initials ------------------------ Joe Blow | J. B. Phil Smith | P. S. ``` The `full_name` field is a string value (obviously) and I think the best way to go about this is to split the string into an array for each space, i.e.: ``` select full_name, string_to_array(full_name,' ') initials from my_table ``` This produces the following result-set: ``` Eric A. Korver;{Eric,A.,Korver} Ignacio Bueno;{Ignacio,Bueno} Igmar Mendoza;{Igmar,Mendoza} ``` Now, the only thing I am missing is how to loop through each array element and pull the 1st character out of it. I will end up using `substring()` to get the initial character of each element - however, I am just stuck on how to loop through them on the fly. Does anybody have a simple way to go about this?
Use `unnest` with `string_agg`: ``` select full_name, string_agg(substr(initials, 1,1)||'.', ' ') initials from ( select full_name, unnest(string_to_array(full_name,' ')) initials from my_table ) sub group by 1; full_name | initials ------------------------+------------- Phil Smith | P. S. Joe Blow | J. B. Jose Maria Allan Pride | J. M. A. P. Eric A. Korver | E. A. K. (4 rows) ``` In Postgres 14+ you can replace `unnest(string_to_array(...))` with `string_to_table(...)`. Test it in [db<>fiddle.](https://dbfiddle.uk/YDYto5uY)
**[SqlFiddleDemo](http://sqlfiddle.com/#!15/aa6f9/7)** ``` WITH add_id AS ( SELECT n.*, row_number() OVER (ORDER BY "Name") AS id FROM names n ), split_names AS ( SELECT id, regexp_split_to_table("Name", E'\\s+') AS single_name FROM add_id ), initials AS ( SELECT id, left(single_name, 1) || '.' AS initial FROM split_names ), final AS ( SELECT id, string_agg(initial, ' ') FROM initials GROUP BY id ) SELECT a.*, f.* FROM add_id a JOIN final f USING (id) ``` For debug I create the `Initial` to Show how match the `string_agg` ``` | Name | Initials | id | id | string_agg | |----------------|----------|----|----|------------| | Eric A. Korver | E. A. K. | 1 | 1 | E. A. K. | | Igmar Mendoza | I. M. | 2 | 2 | I. M. | | Ignacio Bueno | I. B. | 3 | 3 | I. B. | | Joe Blow | J. B. | 4 | 4 | J. B. | | Phil Smith | P. S. | 5 | 5 | P. S. | ``` After some work I got a compact version [**SqlFiddleDemo**](http://sqlfiddle.com/#!15/aa6f9/14) ``` SELECT "Name", string_agg(left(single_name, 1) || '.', '') AS Initials FROM ( SELECT "Name", regexp_split_to_table("Name", E'\\s+') AS single_name FROM names ) split_names GROUP BY "Name" ``` **OUTPUT** ``` | Name | initials | |----------------|----------| | Eric A. Korver | E.K.A. | | Igmar Mendoza | M.I. | | Ignacio Bueno | I.B. | | Joe Blow | B.J. | | Phil Smith | P.S. | ```
How to get initials easily out of text field using Postgres
[ "", "sql", "arrays", "postgresql", "aggregate-functions", "" ]
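For comparison, the same transformation takes a few lines outside the database; this Python sketch mirrors the split-then-aggregate logic of the accepted answer, using names from the question.

```python
import re

def initials(full_name):
    """Split on whitespace and keep the first letter of each part."""
    parts = re.split(r"\s+", full_name.strip())
    return " ".join(p[0].upper() + "." for p in parts if p)

result = [initials(n) for n in ["Joe Blow", "Phil Smith", "Eric A. Korver"]]
print(result)  # ['J. B.', 'P. S.', 'E. A. K.']
```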
My table has a bunch of columns in the following format: ``` _settingA _settingB _settingB ``` And I want to rename them simply to add a prefix as follows: ``` _1_settingA _1_settingB _1_settingC ``` I have a *lot* more than three columns to rename in this way. If I had just three, I'd just do it manually one by one. What is the quickest / most efficient way to achieve this?
There's no single-command approach. Obviously you could type multiple commands for `RENAME` by yourself, but let me introduce some improvement:) As I said in [this answer](https://stackoverflow.com/a/33266974/5404196) > ...for all such bulk-admin-operations you could use PostgreSQL system tables to generate queries for you instead of writing them by hand In your case it would be: ``` SELECT 'ALTER TABLE ' || tab_name || ' RENAME COLUMN ' || quote_ident(column_name) || ' TO ' || quote_ident( '_1' || column_name) || ';' FROM ( SELECT quote_ident(table_schema) || '.' || quote_ident(table_name) as tab_name, column_name FROM information_schema.columns WHERE table_schema = 'schema_name' AND table_name = 'table_name' AND column_name LIKE '\_%' ) sub; ``` That'll give you a set of strings which are SQL commands like: ``` ALTER TABLE schema_name.table_name RENAME COLUMN "_settingA" TO "_1_settingA"; ALTER TABLE schema_name.table_name RENAME COLUMN "_settingB" TO "_1_settingB"; ... ``` There is no need to use `table_schema` in the `WHERE` clause if your table is in the `public` schema. **Also remember to use the function `quote_ident()`** -- read my original answer for more explanation. Edit: I've changed my query so now it works for all columns with names beginning with an underscore `_`. Because the underscore is a special character in SQL pattern matching, we must escape it (using `\`) to actually find it.
Something simple like this will work. ``` SELECT FORMAT( 'ALTER TABLE %I.%I.%I RENAME %I TO %I;', table_catalog, table_schema, table_name, column_name, '_PREFIX_' + column_name ) FROM information_schema.columns WHERE table_name = 'foo'; ``` `%I` will do `quote_ident`, which is substantially nicer. If you're in PSQL you can run it with `\gexec`
Renaming multiple columns in PostgreSQL
[ "", "sql", "postgresql", "automation", "sql-update", "ddl", "" ]
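The generate-and-run pattern from the answers works anywhere the catalog is queryable. Here it is against SQLite, with `PRAGMA table_info` playing the role of `information_schema.columns` (`RENAME COLUMN` needs SQLite 3.25+); the column names come from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # RENAME COLUMN requires SQLite 3.25+
conn.execute("CREATE TABLE settings (_settingA TEXT, _settingB TEXT, _settingC TEXT)")

# Read the catalog, build one ALTER statement per matching column, run them.
cols = [r[1] for r in conn.execute("PRAGMA table_info(settings)")]
for col in cols:
    if col.startswith("_"):
        conn.execute('ALTER TABLE settings RENAME COLUMN "%s" TO "%s"'
                     % (col, "_1" + col))

renamed = [r[1] for r in conn.execute("PRAGMA table_info(settings)")]
print(renamed)  # ['_1_settingA', '_1_settingB', '_1_settingC']
```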
This is a really interesting problem I have been struggling with all day. I have a table with an ID, Latitude, and Longitude of a location. This is a subset of locations from a larger set of locations. What I am trying to do is use this subset of locations and, for each location, return locations within 20 miles from the larger set using air distance. My issue is not with calculating the air distances, that is working, my issue is with how to operate on each row like a `For` loop but with a set-based method. So let's say this is my subset table, LocationSubset ``` +----+---------+----------+ | ID | Lat | Lon | +----+---------+----------+ | 1 | 41.0575 | -92.1364 | +----+---------+----------+ | 2 | 47.0254 | -92.5723 | +----+---------+----------+ | 3 | 38.9897 | -88.7623 | +----+---------+----------+ ``` And I am looking towards the larger table, Locations ``` +----+---------+-----------+ | ID | Lat | Lon | +----+---------+-----------+ | 1 | 41.0575 | -92.1364 | +----+---------+-----------+ | 2 | 47.0254 | -92.5723 | +----+---------+-----------+ | 2 | 38.9897 | -88.7623 | +----+---------+-----------+ | 4 | 36.2137 | -91.6528 | +----+---------+-----------+ | 5 | 39.2643 | -123.0073 | +----+---------+-----------+ | 6 | 39.941 | -123.0073 | +----+---------+-----------+ | 7 | 35.7683 | -91.6528 | +----+---------+-----------+ | 8 | 45.8406 | -91.6528 | +----+---------+-----------+ ``` Let's assume, using the Haversine formula, that locations 5 and 6 are within 20 miles of location 1 and locations 4 and 8 are within 20 miles of location 2. I am looking to return something like this: ``` +----+------------+----------+ | ID | LocationID | Distance | +----+------------+----------+ | 1 | 5 | 15.4 | +----+------------+----------+ | 1 | 6 | 16 | +----+------------+----------+ | 2 | 4 | 17.4 | +----+------------+----------+ | 2 | 8 | 2.5 | +----+------------+----------+ ``` Each location could have zero to many locations within 20 miles and I am trying to capture this in another table. I can add clarification if necessary. Thank you for your time.
``` SELECT LS.ID, L.ID as LocationID, MS_DISTANCE (LS.Lat , LS.Lon, L.Lat , L.Lon) as Distance FROM LocationSubset LS JOIN Locations L ON MS_DISTANCE (LS.Lat , LS.Lon, L.Lat , L.Lon) < 20 AND LS.ID <> L.ID -- if you want, remove the comparison with the same object ``` But you are probably better off using SQL Server spatial functions, because those allow the use of a spatial index. **[Spatial Data](https://msdn.microsoft.com/en-us/library/bb933790.aspx)**
``` SELECT subtable.ID as ID1, subtable.Lat as Lat1, subtable.Lon AS Lon1, Locations.ID as ID2, Locations.Lat as Lat2, Locations.Lon AS Lon2, (CalculatedDistance using Lat1,Lon1,Lat2,Lon2) AS Distance FROM subtable CROSS JOIN maintable WHERE (CalculatedDistance using Lat1,Lon1,Lat2,Lon2)<20 ``` Where `subtable` a query/view that generates your subtable
In SQL Server how can I return values in one table that are less than 20 miles from a coordinate?
[ "", "sql", "sql-server", "" ]
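Under the hood the accepted query is just a cross join filtered by a distance predicate. A self-contained sketch with a stand-in Haversine function; the coordinates for ID 5 are invented so that exactly one pair falls inside 20 miles.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle ("air") distance between two points, in statute miles."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 mi = Earth's mean radius

subset = {1: (41.0575, -92.1364)}                  # LocationSubset
locations = {1: (41.0575, -92.1364),               # Locations (5's coords invented)
             4: (36.2137, -91.6528),
             5: (41.2000, -92.2000)}

# The set-based idea: join every subset row against every location,
# keep pairs under 20 miles, and skip the row matching itself.
pairs = [(sid, lid, round(haversine_miles(*s, *l), 1))
         for sid, s in subset.items()
         for lid, l in locations.items()
         if sid != lid and haversine_miles(*s, *l) < 20]
print(pairs)  # one (ID, LocationID, Distance) row, roughly 10 miles apart
```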
Let me add more detail and rephrase my question, since I rushed writing it as I was leaving work. Firstly, the tables: TABLE A has a 1:1 relationship to TABLE B. TABLE A has a 1:M relationship to TABLE XYZ (the table we want to update). I have a stored procedure named sp_parent that calls a different stored procedure named sp_update_child (whose main function is to update a table). In my sp_update_child I have a variable set like this: ``` SET @trustee_variable_id = SELECT TOP 1 ID_A FROM TABLE A WHERE clause1 AND clause2 AND etc ``` It returns an ID, let's say 3000. Then it goes to the update statement: ``` UPDATE TABLE_XYZ SET TABLE_XYZ.trustee_id = (@trustee_variable_id = TABLE_XYZ.trustee_id` ``` However, it cannot be updated because the ID retrieved from TABLE A, 3000, is not in TABLE B, and the only way to update that specific column is if ID 3000 is in TABLE B. How do I add a check to say: if the ID retrieved from TABLE A is not in TABLE B, then update TABLE_XYZ.trustee_id with the original ID that is already in the trustee_id column? Below is my script - not sure if I am heading in the right direction: ``` UPDATE TABLE_XYZ SET @trustee_variable_id = CASE WHEN @trustee_variable_id NOT IN (SELECT ID_A FROM TABLE_B) THEN (SELECT trustee_id FROM TABLE_XYZ WHERE clause1 = clause2) ``` Can anyone point me in the right direction, please?
I'm definitely confused by your update statement, but I think you might mean this. ``` update table_xyz set trustee_id = @trustee_variable_id where exists ( select * from table_b where id_b = @trustee_variable_id ) ```
If I understand your logic, then.... Table\_a and Table\_b contains two ids (and maybe other fields). The one you want to look for, and the one you want to use. For the sake of your example, the id\_a and id\_b are the id you wish to look for, and I made a column named return\_id which holds the value you want to put in your table you are updating. I assume table\_a might look like the following: **id\_a, return\_id** 1, 10 2, 20 4, 40 And table\_b might look like this: **id\_b, return\_id** 1, 100 2, 200 3, 300 4, 400 Now here is the sql: ``` declare @trustee_variable_id int = 3 select case when not exists (select return_id from table_a where id_a=@trustee_variable_id ) then (select return_id from table_b where id_b=@trustee_variable_id) else (select return_id from table_a where id_a=@trustee_variable_id) end ``` Since 3 does not exist in table\_a, it looks to table\_b to return the value of 300. If you run it with `declare @trustee_variable_id int = 2` it will return 20 since 2 exists in table\_a The example SQL above is a select statement, convert it into an update: ``` update [SomeTable] set [SomeColumn] = case when not exists (select return_id from table_a where id_a=@trustee_variable_id ) then (select return_id from table_b where id_b=@trustee_variable_id) else (select return_id from table_a where id_a=@trustee_variable_id) end ``` And don't forget a WHERE clause at the end of your update statement or you will alter ALL rows ;) Unless thats your intention.
If value is not in Table A, then select original value
[ "", "sql", "sql-server", "t-sql", "case-when", "" ]
I'm sorting products by title using `ORDER BY title ASC`, and get: ``` "Some" title // " 1 More title // 1 Another title // A Third title // T Yet another title // Y ``` I'd like to ignore quotation marks and other non-alpha characters as part of the query, so that it would yield: ``` Another title // A 1 More title // M "Some" title // S Third title // T Yet another title // Y ``` Is it possible to strip this stuff out with Postgres or pre-treat as part of the query, or would I need another column? **Update** This works great: `LOWER(regexp_replace(title, '[^[:alpha:]]', '', 'g')) ASC` More options here: <http://www.postgresql.org/docs/current/interactive/functions-matching.html#POSIX-CLASS-SHORTHAND-ESCAPES-TABLE>
One method is to use `regexp_replace()`: ``` order by regexp_replace(title, '[^a-zA-Z]', '', 'g') ``` You can also use: ``` order by regexp_replace(title, '[^[:alpha:]]', '', 'g') ``` This is safer for non-ASCII character sets.
use replace ``` ORDER BY replace(title, '"', '') asc ``` more advance is ``` ORDER BY regexp_replace(title, '[^a-zA-Z]', '', 'g') ```
Ignore quotation marks, numbers, spaces, etc in ORDER BY clause
[ "", "sql", "postgresql", "sorting", "" ]
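The `regexp_replace` sort key from the question's update can be reproduced in Python to check the resulting order; note how "A Third title" sorts under its literal letters rather than under T.

```python
import re

titles = ['"Some" title', '1 More title', 'Another title',
          'A Third title', 'Yet another title']

# Sort key: lowercase the title with every non-letter stripped out,
# mirroring LOWER(regexp_replace(title, '[^[:alpha:]]', '', 'g')).
def sort_key(title):
    return re.sub(r"[^a-zA-Z]", "", title).lower()

result = sorted(titles, key=sort_key)
print(result)
```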
I need to insert an almost duplicated row into table, while changing few values. For example insert duplicated row with new id (I don't want automatic id) and different name but all other values the same. The problem is that I need to make a select \* I know that there is a way to insert from select and changing values this way : ``` insert into Table1(id,name,surname) select newid(),'David',surname from Table1 where id=1 ``` but I don't want to enlist all fields ,instead I want to use select \*, so if fields added I won't have to change my stored procedure. I want something like : ``` insert into Table1 ( update (SELECT * FROM Table1 WHERE id= 1 ) t set t.id= newid(),name='David') ``` Is there a way to do it ?
The code I use: ``` declare @table sysname declare @excludecols nvarchar(max) declare @uniqueWhereToCopy nvarchar(max) declare @valuesToChange nvarchar(max) --copy settings set @table = 'orsrg' --the tablename set @excludecols='' --columnnames to exclude from the copy, separated by commas set @uniqueWhereToCopy = 'ID=1188' set @valuesToChange = 'regel='' 4''' --columnName=<value>,columnName2=<value2>, .... (needed for unique indexes) set @excludecols=@excludecols + ',' set @valuesToChange=@valuesToChange + ',' --get the columnnames to copy declare @sqlcolumns nvarchar(max) set @sqlcolumns = '' SELECT @sqlcolumns = @sqlcolumns + name from (select '[' + c.name + '], ' as name FROM sys.COLUMNS c inner join sys.objects o on c.object_id = o.object_id WHERE o.name = @table and is_identity = 0 /*exclude identity*/ and is_rowguidcol = 0 /*exclude rowguids*/ and is_computed = 0 /*exclude computed columns*/ and system_type_id <> 189 /*exclude timestamp*/ and charindex(c.name, @excludecols,1) = 0 /*exclude user specified columns*/)q --get the select columns and values declare @sqlselectvalues nvarchar(max) set @sqlselectvalues = @sqlcolumns while len(@valuesToChange)>1 begin declare @colValueSet nvarchar(max) declare @colname sysname declare @value nvarchar(max) set @colValueSet = left(@valuesToChange,charindex(',',@valuesToChange,1)-1) set @valuesToChange = substring(@valuesToChange,charindex(',',@valuesToChange,1)+1,len(@valuesToChange)) set @colname = '[' + left(@colValueSet,charindex('=',@colValueSet,1)-1) +']' set @value = substring(@colValueSet,charindex('=',@colValueSet,1)+1,len(@colValueSet)) set @sqlselectvalues = REPLACE(@sqlselectvalues,@colname,@value) end --remove the last comma set @sqlcolumns = left(@sqlcolumns, len(@sqlcolumns)-1) set @sqlselectvalues = left(@sqlselectvalues, len(@sqlselectvalues)-1) --create the statement declare @stmt nvarchar(max) set @stmt = 'Insert into ' + @table + '(' + @sqlcolumns + ') select ' + @sqlselectvalues + ' from ' + @table + ' with (nolock) where ' + @uniqueWhereToCopy --copy the row exec sp_executesql @stmt ```
You can use temp hash table to accomplish this. ``` SELECT * INTO #temp FROM Table1 WHERE id= 1; UPDATE #temp SET ID = newid(), Name='David' INSERT INTO Table1 SELECT * FROM #temp; ``` Note that the #temp table is automatically dropped when the client disconnect from the DB server. Also, as previously noted, I prefer to use column names separately instead of \*. Example: [SQL Fiddle](http://sqlfiddle.com/#!6/fcf89/1)
SQL Server : Duplicate row in table while changing some values with select *
[ "", "sql", "t-sql", "insert", "sql-server-2014", "insert-update", "" ]
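The temp-table route from the second answer is portable enough to demo on SQLite; the row values below are made up, and a plain integer stands in for `newid()`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, name TEXT, surname TEXT);
INSERT INTO table1 VALUES (1, 'John', 'Smith');

-- Copy the row into a temp table, tweak it, insert it back with SELECT *.
CREATE TEMP TABLE copyrow AS SELECT * FROM table1 WHERE id = 1;
UPDATE copyrow SET id = 2, name = 'David';
INSERT INTO table1 SELECT * FROM copyrow;
DROP TABLE copyrow;
""")

rows = conn.execute("SELECT * FROM table1 ORDER BY id").fetchall()
print(rows)  # [(1, 'John', 'Smith'), (2, 'David', 'Smith')]
```

Because `SELECT *` is used throughout, a new column added to `table1` later is picked up automatically, which was the point of the question.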
Currently I have two tables **Table\_A** and **Table\_B**. **Table\_A** ``` ID 1 2 2 3 3 4 ``` **Table\_B** ``` ID Alphabet 1 X 1 Y 2 X 2 Y 2 Z 3 X 3 Z 4 X ``` I want to group the column ID and Alphabet from Table\_B and find out which group in the combination has the alphabet Z in it. Then whichever group has Z in it, its ID would not be displayed when I look up IDs for Table\_A. **Expected Result** ``` ID 1 4 ``` I'm currently using this SELECT statement: ``` SELECT A.ID FROM Table_A A LEFT JOIN Table_B B ON A.ID = B.ID WHERE A.ID NOT IN (SELECT B.ID FROM Table_B WHERE Alphabet = 'Z' GROUP BY B.ID, Alphabet) ``` **Actual Result** ``` ID 1 1 2 2 3 4 ``` It removes the IDs that has the Alphabet Z in it but it does not relate to other duplicate IDs.
what about this: ``` select id from table_a where not exists ( select 1 from table_b where table_b.id = table_a.id and table_b.alphabet = 'Z') ```
Use this concatenation to create group alphabet [How can I combine multiple rows into a comma-delimited list in Oracle?](https://stackoverflow.com/questions/468990/how-can-i-combine-multiple-rows-into-a-comma-delimited-list-in-oracle) then you will have ``` id alphabet 1 x,y 2 x,y,z 3 x,z 4 x ``` then ``` SELECT id FROM newQuery WHERE INSTR(LOWER(alphabet), 'z') = 0; ```
SQL: How to relate duplicate rows to unique rows and remove them from another table?
[ "", "sql", "oracle", "join", "group-by", "db2-luw", "" ]
While trying out different compression settings in Redshift it would be very useful to know the size of each column. I know how to get the size of a table, but I want to know the size of each individual column in that table.
This query will give you the size (MB) of each column. What it does is that it counts the number of data blocks, where each block uses 1 MB, grouped by table and column. ``` SELECT TRIM(name) as table_name, TRIM(pg_attribute.attname) AS column_name, COUNT(1) AS size FROM svv_diskusage JOIN pg_attribute ON svv_diskusage.col = pg_attribute.attnum-1 AND svv_diskusage.tbl = pg_attribute.attrelid GROUP BY 1, 2 ``` You can read more about the two tables involved in the query here: [SVV\_DISKUSAGE](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_DISKUSAGE.html) & [pg\_attribute](http://www.postgresql.org/docs/9.2/static/catalog-pg-attribute.html).
A more accurate size of the table would include the hidden system columns `deletexid`, `insertxid`, `oid` (ROW ID) as well. One of my tables was using 752 blocks without including the hidden columns. When I added the hidden columns, it went up to 1063 blocks. ``` SELECT col, attname, COUNT(*) AS "mbs" FROM stv_blocklist bl JOIN stv_tbl_perm perm ON bl.tbl = perm.id AND bl.slice = perm.slice LEFT JOIN pg_attribute attr ON attr.attrelid = bl.tbl AND attr.attnum-1 = bl.col WHERE perm.name = '<TABLE-NAME>' GROUP BY col, attname ORDER BY col; ```
How can I find out the size of each column in a Redshift table?
[ "", "sql", "amazon-redshift", "" ]
``` update [RegistryCopy].[dbo].[RegFile_New] SET r.[FileNo] = v.[Cont#Account (New ACC)] where r.[FileNo] IN ( SELECT r.[FileNo] FROM [RegistryCopy].[dbo].[RegFile_New] as R INNER JOIN [Registry_Malta_Recovered].[dbo].OLD_NEW_AccountsFromSap as v ON r.[FileNo] = v.[Acct in legacy (Old ACC)] where r.[FileNo] = ('1000040000') ) ``` I'm getting the error message: > The multi-part identifier "r.FileNo" could not be bound.
Your statement is invalid. You're referring to aliases in the outer query that are defined in the inner query (subquery), and you're also referring to fields in the outer query for tables that are only defined in the inner query. This is the closest approximation of your query: ``` UPDATE r SET [FileNo] = v.[Cont#Account (New ACC)] FROM [RegistryCopy].[dbo].[RegFile_New] AS R INNER JOIN [Registry_Malta_Recovered].[dbo].OLD_NEW_AccountsFromSap AS v ON r.[FileNo] = v.[Acct in legacy (Old ACC)] WHERE r.[FileNo] = ('1000040000') ```
It seems that the "r" and "v" identifiers are available for the sub-query only. Please try to use the `UPDATE FROM` syntax as it was done here: [Update a table using JOIN in SQL Server?](https://stackoverflow.com/questions/1604091/update-a-table-using-join-in-sql-server)
could not update
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I would like to change increment value of IDENTITY column in database and I have below restrictions: 1. Can't drop the column. 2. Having around 20k rows. Dropping table and recreate table with changed increment value would be the solution. I don't know the syntax. Can you please suggest the syntax or other best possible solution? Thanks in Advance!
If I understand you correctly based on your response to my comment, you backed up the data of the original table in a temp table, then deleted the original table, and now want to recreate it. If that is the case, you need to set `IDENTITY_INSERT` `ON` and `OFF`, because the identity of the table is unique. The syntax is: ``` SET IDENTITY_INSERT [TableName] ON -- set to on -- Put your insert statement here -- insert the data from the backed-up temp table into your new table SET IDENTITY_INSERT [TableName] OFF -- set to off ```
If you can accept recreating the table, there is no magic to the syntax. ``` CREATE TABLE temp_Table ( -- Identity column with new settings -- other columns ); INSERT INTO temp_Table SELECT -- Columns except identity column FROM old_table; DROP TABLE old_Table; EXEC sp_rename 'temp_Table', 'old_Table'; ``` However, you have to handle any foreign keys yourself.
Change Increment value for Identity - SQL Server 2005
[ "", "sql", "sql-server", "" ]
I have two tables, Customers and Images. I join these and I select an Image Id from the Images table ONLY if the image is the profile picture. However, if a user has several pictures but only one of the pictures is the profile picture, I still get back several rows for the same user. My code: ``` SELECT DISTINCT c.Id, c.Name, c.LastName, c.Email, c.MobilePhoneNumber, (SELECT CASE WHEN im.Isprofile = 1 THEN im.Id END) AS ImageId, im.ContentType FROM Customers c JOIN Images im ON im.CustomerId = c.Id ``` The "CASE WHEN" is obviously wrong and this feels like an overly simple thing to do, but I have been at this for a while and can't figure it out. EDIT: I get this result: [![enter image description here](https://i.stack.imgur.com/CPbyX.png)](https://i.stack.imgur.com/CPbyX.png) I only want one row here since ImageId is null.
``` SELECT DISTINCT c.Id, c.Name, c.LastName, c.Email, c.MobilePhoneNumber, (im.Id) AS ImageId, im.ContentType FROM Customers c LEFT JOIN Images im ON im.CustomerId = c.Id and im.Isprofile = 1 ``` Hope this helps
Use `OUTER APPLY`: ``` SELECT c.Id, c.Name, c.LastName, c.Email, c.MobilePhoneNumber, im.ID AS ImageID, im.ContentType FROM Customers c OUTER APPLY(SELECT TOP 1 ID, ContentType FROM Images im WHERE im.CustomerId = c.Id AND im.Isprofile = 1) im ```
How to select a column only if condition is met in TSQL
[ "", "sql", "sql-server", "t-sql", "" ]
I've a table structured somewhat similar to this: ``` CREATE TABLE `user` (`id` int, `name` varchar(7)); CREATE TABLE `email` (`id` int, `email_address` varchar(50), `verified_flag` tinyint(1),`user_id` int); CREATE TABLE `social` (`id` int,`user_id` int); INSERT INTO `user` (`id`, `name`) VALUES (1,'alex'), (2,'jon'), (3,'arya'), (4,'sansa'), (5,'hodor') ; INSERT INTO `email` (`id`,`email_address`,`verified_flag`,`user_id`) VALUES (1,'alex@gmail.com','1',1), (2,'jon@gmail.com','0',1), (3,'arya@gmail.com','0',3), (4,'sansa@gmail.com','1',4), (5,'reek@gmail.com','0',3), (6,'hodor@gmail.com','0',5), (7,'tyrion@gmail.com','0',1) ; INSERT INTO `social` (`id`,`user_id`) VALUES (1,4), (2,4), (3,5), (4,4), (5,4) ; ``` What I want to get is all emails: 1. which are not verified 2. which belongs to a user who has no, i.e 0, verified emails 3. which belongs to a user who has no, i.e 0, social records With the below query I'm able to apply the 1st and 3rd condition but not the 2nd one: ``` SELECT * FROM `email` INNER JOIN `user` ON `user`.`id` = `email`.`user_id` LEFT JOIN `social` ON `user`.`id` = `social`.`user_id` WHERE `email`.`verified_flag` = 0 GROUP BY `email`.`user_id`,`email`.`email_address` HAVING COUNT(`social`.`id`) = 0 ``` How can I achieve the result? Here's the [sqlfiddle](http://sqlfiddle.com/#!9/938e88/3) as well
You can use the following query: ``` SELECT e.`id`, e.`email_address`, e.`verified_flag`, e.`user_id` FROM ( SELECT `id`,`email_address`,`verified_flag`,`user_id` FROM `email` WHERE `verified_flag` = 0) AS e INNER JOIN ( SELECT `id`, `name` FROM `user` AS t1 WHERE NOT EXISTS (SELECT 1 FROM `email` AS t2 WHERE `verified_flag` = 1 AND t1.`id` = t2.`user_id`) AND NOT EXISTS (SELECT 1 FROM `social` AS t3 WHERE t1.`id` = t3.`user_id`) ) AS u ON u.`id` = e.`user_id`; ``` This query uses two derived tables: * `e` implements the first condition, i.e. returns all emails which are not verified * `u` implements the 2nd and 3rd condition, i.e. it returns a set of all users that have no verified emails *and* have no social records. Performing an `INNER JOIN` between `e` and `u` returns all emails satisfying condition no. 1 which belong to users satisfying conditions no. 2 and 3. [**Demo here**](http://sqlfiddle.com/#!9/938e88/5) You can alternatively use this query: ``` SELECT * FROM `email` WHERE `user_id` IN ( SELECT `email`.`user_id` FROM `email` INNER JOIN `user` ON `user`.`id` = `email`.`user_id` LEFT JOIN `social` ON `user`.`id` = `social`.`user_id` GROUP BY `email`.`user_id` HAVING COUNT(`social`.`id`) = 0 AND COUNT(CASE WHEN `email`.`verified_flag` = 1 THEN 1 END) = 0 ) ``` The subquery is used in order to select all `user_id` satisfying conditions no. 2 and 3. Condition no. 1 is redundant since if the user has no verified emails, then there is no way a verified email is related to this user. [**Demo here**](http://sqlfiddle.com/#!9/938e88/11)
Interesting and tricky one. I see you've got something going on there. But **having** and **sub queries** become a **VERY** bad idea when your tables become large. See below for an approach. Don't forget to set up your indexes! ``` SELECT * from email LEFT JOIN social on email.user_id = social.user_id -- tricky ... i'm going back to email table to pick verified emails PER user LEFT JOIN email email2 on email2.user_id = email.user_id AND email2.verified_flag = 1 WHERE -- you got this one going already :) email.verified_flag = 0 -- user does not have any social record AND social.id is null -- email2 comes in handy here ... we limit resultset to include only users that DO NOT have a verified email AND email2.id is null ORDER BY email.user_id asc; ```
SQL Multiple table JOINS, GROUP BY and HAVING
[ "", "mysql", "sql", "join", "group-by", "having", "" ]
I have this query that gives the sums per year, but I want to add a new row at the end of each *TARMA* that gives the differences between the years. Here is the query: ``` Select VPC.Armazem as TARMA ,YEAR(VPC.DATA) as DataTotal ,SUM(CASE WHEN VP.COMBUSTIVEL = 1 THEN VPL.QTD ELSE 0 END) as ADITIVADA ,SUM(CASE WHEN VP.COMBUSTIVEL = 2 THEN VPL.QTD ELSE 0 END) as X98 ,SUM(CASE WHEN VP.COMBUSTIVEL = 3 THEN VPL.QTD ELSE 0 END)as X95 ,SUM(CASE WHEN VP.COMBUSTIVEL = 4 THEN VPL.QTD ELSE 0 END) as XGAS ,SUM(CASE WHEN VP.COMBUSTIVEL = 5 THEN VPL.QTD ELSE 0 END) as XGPL ,SUM(CASE WHEN VP.COMBUSTIVEL = 6 THEN VPL.QTD ELSE 0 END) as XAGR ,SUM(CASE WHEN VP.COMBUSTIVEL = 7 THEN VPL.QTD ELSE 0 END) as MISTURA ,SUM(CASE WHEN VP.COMBUSTIVEL = 9 THEN VPL.QTD ELSE 0 END) as XAQ ,SUM(CASE WHEN VP.COMBUSTIVEL = 10 THEN VPL.QTD ELSE 0 END) as ADIESEL ,SUM(CASE WHEN VP.COMBUSTIVEL = 11 THEN VPL.QTD ELSE 0 END) as ADBLUE ,SUM(CASE WHEN VP.COMBUSTIVEL = 12 THEN VPL.QTD ELSE 0 END) as O95 ,SUM(CASE WHEN VP.COMBUSTIVEL = 13 THEN VPL.QTD ELSE 0 END) as O98 WHERE (MONTH(VPC.DATA) >= MONTH('2015-09-01') AND MONTH(VPC.DATA) <= MONTH('2015-09-30')) and (YEAR(VPC.DATA) >= YEAR('2014-09-01') AND YEAR(VPC.DATA) <= YEAR('2015-09-30')) and VPT.armazem IN ('454','457') and FACT_VD NOT IN ('A', 'I', 'G', 'M') GROUP BY YEAR(VPC.DATA) ,VPC.Armazem ORDER BY VPC.Armazem ,YEAR(VPC.DATA) ``` And here is the result without the difference (screenshot omitted). For example: > TARMA: 454 for X98 > > 2014: 1849.14077 > > 2015: 2571.47750 > > Difference: -722,33673 I'm using MS SQL. Is it done with a `UNION`? How can I get the difference?
I got it: ``` ;with dados as ( Select VPC.Armazem as TARMA ,YEAR(VPC.DATA) as DataTotal1 ,SUM(CASE WHEN VP.COMBUSTIVEL = 1 THEN VPL.QTD ELSE 0 END) as SomaADITIVADA ,SUM(CASE WHEN VP.COMBUSTIVEL = 2 THEN VPL.QTD ELSE 0 END) as SomaX98 ,SUM(CASE WHEN VP.COMBUSTIVEL = 3 THEN VPL.QTD ELSE 0 END)as SomaX95 ,SUM(CASE WHEN VP.COMBUSTIVEL = 4 THEN VPL.QTD ELSE 0 END) as SomaXGAS ,SUM(CASE WHEN VP.COMBUSTIVEL = 5 THEN VPL.QTD ELSE 0 END) as SomaXGPL ,SUM(CASE WHEN VP.COMBUSTIVEL = 6 THEN VPL.QTD ELSE 0 END) as SomaXAGR ,SUM(CASE WHEN VP.COMBUSTIVEL = 7 THEN VPL.QTD ELSE 0 END) as SomaMISTURA ,SUM(CASE WHEN VP.COMBUSTIVEL = 9 THEN VPL.QTD ELSE 0 END) as SomaXAQ ,SUM(CASE WHEN VP.COMBUSTIVEL = 10 THEN VPL.QTD ELSE 0 END) as SomaADIESEL ,SUM(CASE WHEN VP.COMBUSTIVEL = 11 THEN VPL.QTD ELSE 0 END) as SomaADBLUE ,SUM(CASE WHEN VP.COMBUSTIVEL = 12 THEN VPL.QTD ELSE 0 END) as SomaO95 ,SUM(CASE WHEN VP.COMBUSTIVEL = 13 THEN VPL.QTD ELSE 0 END) as SomaO98 ,row_number() over (partition by VPC.Armazem order by YEAR(VPC.DATA) ASC) as NAno WHERE (MONTH(VPC.DATA) >= MONTH('2015-09-01') AND MONTH(VPC.DATA) <= MONTH('2015-09-30')) and (YEAR(VPC.DATA) >= YEAR('2014-09-01') AND YEAR(VPC.DATA) <= YEAR('2015-09-30')) and VPT.armazem IN ('454','457') and FACT_VD NOT IN ('A', 'I', 'G', 'M') GROUP BY YEAR(VPC.DATA) ,VPC.Armazem ) SELECT anosDetalhados.TARMA as TARMA ,anosDetalhados.DataTotal1 as DataTotal ,SUM(anosDetalhados.SomaADITIVADA) as ADITIVADA ,SUM(anosDetalhados.SomaX98) as X98 ,SUM(anosDetalhados.SomaX95) as X95 ,SUM(anosDetalhados.SomaXGAS) as XGAS ,SUM(anosDetalhados.SomaXGPL) as XGPL ,SUM(anosDetalhados.SomaXAGR) as XAGR ,SUM(anosDetalhados.SomaMISTURA) as MISTURA ,SUM(anosDetalhados.SomaXAQ) as XAQ ,SUM(anosDetalhados.SomaADIESEL) as ADIESEL ,SUM(anosDetalhados.SomaADBLUE) as ADBLUE ,SUM(anosDetalhados.SomaO95) as O95 ,SUM(anosDetalhados.SomaO98) as O98 FROM dados as anosDetalhados (nolock) GROUP BY anosDetalhados.DataTotal1 ,anosDetalhados.TARMA UNION ALL SELECT ano1.TARMA as TARMA ,NULL as DataTotal ,SUM(coalesce(ano1.SomaADITIVADA-ano2.SomaADITIVADA, 0)) as ADITIVADA ,SUM(coalesce(ano1.SomaX98-ano2.SomaX98, 0)) as X98 ,SUM(coalesce(ano1.SomaX95-ano2.SomaX95, 0)) as X95 ,SUM(coalesce(ano1.SomaXGAS-ano2.SomaXGAS, 0)) as XGAS ,SUM(coalesce(ano1.SomaXGPL-ano2.SomaXGPL, 0)) as XGPL ,SUM(coalesce(ano1.SomaXAGR-ano2.SomaXAGR, 0)) as XAGR ,SUM(coalesce(ano1.SomaMISTURA-ano2.SomaMISTURA, 0)) as MISTURA ,SUM(coalesce(ano1.SomaXAQ-ano2.SomaXAQ, 0)) as XAQ ,SUM(coalesce(ano1.SomaADIESEL-ano2.SomaADIESEL, 0)) as ADIESEL ,SUM(coalesce(ano1.SomaADBLUE-ano2.SomaADBLUE, 0)) as ADBLUE ,SUM(coalesce(ano1.SomaO95-ano2.SomaO95, 0)) as O95 ,SUM(coalesce(ano1.SomaO98-ano2.SomaO98, 0)) as O98 FROM dados as ano1 (nolock) LEFT JOIN dados as ano2 on ano1.TARMA=ano2.TARMA and ano1.NAno > ano2.NAno GROUP BY ano1.TARMA ORDER BY TARMA ,anosDetalhados.DataTotal1 ASC ```
Using your query as a subquery (without the `order by`), you can do something like this: ``` with cte as ( <your query here> ) select cte.*, (additava - lag(additava) over (partition by Armazem order by DataTotal) ) as additava_diff, . . . from cte; ``` Note: `lag()` requires SQL Server 2012+. EDIT: Prior to SQL Server 2012+, you could do: ``` with cte as ( <your query here> ) select cte.*, (additava - cte2.additava) as additava_diff, . . . from cte outer apply (select top 1 cte.* from cte cte2 where cte2.Armazem = cte.Armazem and cte2.DataTotal < cte.DataTotal order by cte2.DataTotal ) cte2 ```
SQL Select Adding a new row with difference between numbers
[ "", "sql", "sql-server", "" ]
Here is the database layout. I have a table with sparse sales over time, aggregated per day. If for an item I have 10 sales on the 01-01-2015, I will have an entry, but If I have 0, then I have no entry. Something like this. ``` |--------------------------------------| | day_of_year | year | sales | item_id | |--------------------------------------| | 01 | 2015 | 20 | A1 | | 01 | 2015 | 11 | A2 | | 07 | 2015 | 09 | A1 | | ... | ... | ... | ... | |--------------------------------------| ``` This is how I get a time series for 1 item. ``` SELECT doy, max(sales) FROM ( SELECT day_of_year AS doy, sales AS sales FROM myschema.entry_daily WHERE item_id = theNameOfmyItem AND year = 2015 AND day_of_year < 150 UNION SELECT doy AS doy, 0 AS sales FROM generate_series(1, 149) AS doy) as t GROUP BY doy ORDER BY doy; ``` And I currently loop with R making 1 query for every item. I then aggregate the results in a dataframe. But this is very slow. I would actually like to have only one query that would aggregate all the data in the following form. ``` |----------------------------------------------| | item_id | 01 | 02 | 03 | 04 | 05 | ... | 149 | |----------------------------------------------| | A1 | 10 | 00 | 00 | 05 | 12 | ... | 11 | | A2 | 11 | 00 | 30 | 01 | 15 | ... | 09 | | A3 | 20 | 00 | 00 | 05 | 17 | ... | 20 | | ... | |----------------------------------------------| ``` Would this be possible? By the way I am using a Postgres database.
## Solution 1. Simple query with an aggregate. The simplest and fastest way to get the expected result. It is easy to parse the `sales` column within a client program. ``` select item, string_agg(coalesce(sales, 0)::text, ',') sales from ( select distinct item_id item, doy from generate_series (1, 10) doy -- change 10 to given n cross join entry_daily ) sub left join entry_daily on item_id = item and day_of_year = doy group by 1 order by 1; item | sales ------+---------------------- A1 | 20,0,0,0,0,0,9,0,0,0 A2 | 11,0,0,0,0,0,0,0,0,0 (2 rows) ``` ## Solution 2. Dynamically created view. Based on the solution 1 with `array_agg()` instead of `string_agg()`. The function creates a view with a given number of columns. ``` create or replace function create_items_view(view_name text, days int) returns void language plpgsql as $$ declare list text; begin select string_agg(format('s[%s] "%s"', i::text, i::text), ',') into list from generate_series(1, days) i; execute(format($f$ drop view if exists %s; create view %s as select item, %s from ( select item, array_agg(coalesce(sales, 0)) s from ( select distinct item_id item, doy from generate_series (1, %s) doy cross join entry_daily ) sub left join entry_daily on item_id = item and day_of_year = doy group by 1 order by 1 ) q $f$, view_name, view_name, list, days) ); end $$; ``` Usage: ``` select create_items_view('items_view_10', 10); select * from items_view_10; item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 ------+----+---+---+---+---+---+---+---+---+---- A1 | 20 | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 A2 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 (2 rows) ``` ## Solution 3. Crosstab. Easy to use, but very uncomfortable with the greater number of columns due to the need to define the row format. ``` create extension if not exists tablefunc; select * from crosstab ( 'select item_id, day_of_year, sales from entry_daily order by 1', 'select i from generate_series (1, 10) i' ) as ct (item_id text, "1" int, "2" int, "3" int, "4" int, "5" int, "6" int, "7" int, "8" int, "9" int, "10" int); item_id | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 ---------+----+---+---+---+---+---+---+---+---+---- A1 | 20 | | | | | | 9 | | | A2 | 11 | | | | | | | | | (2 rows) ```
First you need a table with [all dates](https://stackoverflow.com/questions/14113469/generating-time-series-between-two-dates-in-postgresql) to fill the blank dates. 100 years of date mean 36,000 rows so no very big. Instead of calculate every time. allDates: ``` date_id s_date ``` or created calculating the fields ``` date_id s_date doy = EXTRACT(DOY FROM s_date) year = EXTRACT(YEAR FROM s_date) ``` Your base query will be **[SQL FIDDLE DEMO](http://sqlfiddle.com/#!15/300f8/1)**: ``` SELECT AD.year, AD.doy, allitems.item_id, COALESCE(SUM(ED.sales), 0) as max_sales FROM (SELECT DISTINCT item_id FROM entry_daily ) as allitems CROSS JOIN alldates AD LEFT JOIN entry_daily ED ON ED.day_of_year = AD.doy AND ED.year = AD.year AND ED.item_id = allitems.item_id WHERE AD.year = 2015 GROUP BY AD.year, AD.doy, allitems.item_id ORDER BY AD.year, AD.doy, allitems.item_id ``` You will have this OUTPUT ``` | year | doy | item_id | max_sales | |------|-----|---------|-----------| | 2015 | 1 | A1 | 20 | | 2015 | 1 | A2 | 11 | | 2015 | 2 | A1 | 0 | | 2015 | 2 | A2 | 0 | | 2015 | 3 | A1 | 0 | | 2015 | 3 | A2 | 0 | | 2015 | 4 | A1 | 0 | | 2015 | 4 | A2 | 0 | | 2015 | 5 | A1 | 0 | | 2015 | 5 | A2 | 0 | | 2015 | 6 | A1 | 0 | | 2015 | 6 | A2 | 0 | | 2015 | 7 | A1 | 39 | | 2015 | 7 | A2 | 0 | | 2015 | 8 | A1 | 0 | | 2015 | 8 | A2 | 0 | | 2015 | 9 | A1 | 0 | | 2015 | 9 | A2 | 0 | | 2015 | 10 | A1 | 0 | | 2015 | 10 | A2 | 0 | ``` Then you need install [tablefunc](http://www.postgresql.org/docs/9.1/static/tablefunc.html) and use crosstab to pivot this table [SAMPLE](https://stackoverflow.com/questions/15415446/pivot-on-multiple-columns-using-tablefunc/15421607#15421607)
How to generate multiple time series in one sql query?
[ "", "sql", "r", "postgresql", "" ]
Let's say you have tables: ``` CREATE TABLE TABLE1 (ID INTEGER PRIMARY KEY, VAL INT, CONDITION INT); CREATE TABLE TABLE2 (ID INTEGER PRIMARY KEY, VAL INT, CONDITION INT); ``` with data: ``` INSERT INTO TABLE1 VALUES (0, 0, 100); ``` and ``` INSERT INTO TABLE2 VALUES (0, 2, 100), (1, 1, 100), (3, 3, 100) ``` What will be in TABLE1 after this query, and why? ``` UPDATE TABLE1 SET VAL = SOURCE.VAL FROM TABLE2 SOURCE WHERE TABLE1.CONDITION = SOURCE.CONDITION ``` I got: ``` 0;2;100 ``` Does this mean that the RDBMS executed the `UPDATE` operation several times and I see only the final result? Or does it update `VAL` only once? Explain: ``` "Update on table1 (cost=270.68..562.65 rows=18818 width=24)" " -> Merge Join (cost=270.68..562.65 rows=18818 width=24)" " Merge Cond: (table1.condition = source.condition)" " -> Sort (cost=135.34..140.19 rows=1940 width=14)" " Sort Key: table1.condition" " -> Seq Scan on table1 (cost=0.00..29.40 rows=1940 width=14)" " -> Sort (cost=135.34..140.19 rows=1940 width=14)" " Sort Key: source.condition" " -> Seq Scan on table2 source (cost=0.00..29.40 rows=1940 width=14)" ``` I used PostgreSQL.
From the [documentation](http://www.postgresql.org/docs/current/static/sql-update.html): > a target row shouldn't join to more than one row from the other table(s). If it does, then only one of the join rows will be used to update the target row, but which one will be used is not readily predictable. So the behaviour you see is exactly what is supposed to happen.
Your question seems to be about what happens when there are multiple matches in the tables being updated. Only one value gets set in the output. An arbitrary matching row is used for the update. The [documentation](http://www.postgresql.org/docs/9.4/static/sql-update.html) explains what happens: > **Notes** > > When a FROM clause is present, what essentially happens is that the > target table is joined to the tables mentioned in the from\_list, and > each output row of the join represents an update operation for the > target table. When using FROM you should ensure that the join produces > at most one output row for each row to be modified. In other words, a > target row shouldn't join to more than one row from the other > table(s). If it does, then only one of the join rows will be used to > update the target row, but which one will be used is not readily > predictable. > > Because of this indeterminacy, referencing other tables only within > sub-selects is safer, though often harder to read and slower than > using a join.
Conditional update of SQL table with JOIN
[ "", "sql", "postgresql", "" ]
My first time creating a stored procedure here, please go easy on me. I was trying to `SELECT` my data from Table 1 (`EmpTBL`), then `INSERT` it into Table 2 (`EventTBL`). I think the culprit is this line right here: ``` @Ename varchar(250) = NULL, @Edate varchar(250) = NULL, ``` I think my logic is in the right direction; I just don't know why it doesn't work. Not sure what to do next. ``` ALTER PROCEDURE spBdayEmp (@Ename varchar(250) = NUll, @Edate varchar(250)= NUll, @Etype varchar(250) = 'Bday') AS INSERT INTO EventTBL(EventName, EventDate, EventType) VALUES (@Ename, @Edate, @Etype) SELECT @Ename = (Ename + ' ' + Lname), @Edate = DATEADD(YY, DATEPART(YYYY, GETDATE()) - DATEPART(YYYY,dateOfBirth), dateOfBirth) FROM EmpTBL WHERE DATEADD( Year, DATEPART( Year, GETDATE()) - DATEPART( Year, DateOfBirth), DateOfBirth) BETWEEN CONVERT( DATE, GETDATE()) AND CONVERT( DATE, GETDATE() + 30); ``` How would I get the values for @Ename and @Edate? I would like the result of the SELECT query's "Ename + ' ' + Lname" to be assigned to "@Ename" and "DATEADD(YY, DATEPART(YYYY, GETDATE()) - DATEPART(YYYY,dateOfBirth), dateOfBirth)" to be assigned to "@Edate".
After reading [INSERT statement cannot contain a SELECT statement -sql server2012](https://stackoverflow.com/questions/25132481/insert-statement-cannot-contain-a-select-statement-sql-server2012) I realized I had it all wrong, so after some experimenting I finally found a solution. ``` ALTER PROCEDURE spBdayEmp AS INSERT INTO EventTBL SELECT (Ename + ' ' + Lname), DATEADD(YY, DATEPART(YYYY, GETDATE()) - DATEPART(YYYY,dateOfBirth), dateOfBirth),('Bday')FROM EmpTBL WHERE DATEADD( Year, DATEPART( Year, GETDATE()) - DATEPART( Year, DateOfBirth), DateOfBirth) BETWEEN CONVERT( DATE, GETDATE()) AND CONVERT( DATE, GETDATE() + 30); ``` Thanks to everyone who's been trying to help out!
I hope you have only 3 columns in the EventTBL table. Please change your query as below and then try: ``` ALTER PROCEDURE spBdayEmp AS INSERT INTO EventTBL(EventName, EventDate, EventType) SELECT @Ename = (Ename + ' ' + Lname), @Edate = DATEADD(YY, DATEPART(YYYY, GETDATE()) - DATEPART(YYYY,dateOfBirth), dateOfBirth) ,'Bday' FROM EmpTBL WHERE DATEADD( Year, DATEPART( Year, GETDATE()) - DATEPART( Year, DateOfBirth), DateOfBirth) BETWEEN CONVERT( DATE, GETDATE()) AND CONVERT( DATE, GETDATE() + 30); ```
Insert Into Select stored procedure not working inserting NULL
[ "", "sql", "sql-server", "stored-procedures", "insert-into", "select-into", "" ]
**Is there any way to skip "locked rows" when we make "SELECT FOR UPDATE" in MySQL with an InnoDB table?** E.g.: terminal t1 ``` mysql> start transaction; Query OK, 0 rows affected (0.00 sec) mysql> select id from mytable ORDER BY id ASC limit 5 for update; +-------+ | id | +-------+ | 1 | | 15 | | 30217 | | 30218 | | 30643 | +-------+ 5 rows in set (0.00 sec) mysql> ``` At the same time, terminal t2: ``` mysql> start transaction; Query OK, 0 rows affected (0.00 sec) mysql> select id from mytable where id>30643 order by id asc limit 2 for update; +-------+ | id | +-------+ | 30939 | | 31211 | +-------+ 2 rows in set (0.01 sec) mysql> select id from mytable order by id asc limit 5 for update; ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction mysql> ``` So if I launch a query forcing it to select other rows, it's fine. But is there a way to skip the locked rows? I guess this should be a recurring problem in concurrent processing, but I did not find any solution. --- **EDIT**: In reality, my different concurrent processes are doing something apparently really simple: 1. take the first rows (which don't contain a specific flag - e.g.: "WHERE myflag_inUse!=1"). 2. Once I get the result of my "select for update", I update the flag and commit the rows. So I just want to select the rows which are not already locked and where myflag_inUse!=1... --- The following link helps me to understand why I get the timeout, but not how to avoid it: [MySQL 'select for update' behaviour](https://stackoverflow.com/questions/8849518/mysql-select-for-update-behaviour) --- ``` mysql> SHOW VARIABLES LIKE "%version%"; +-------------------------+-------------------------+ | Variable_name | Value | +-------------------------+-------------------------+ | innodb_version | 5.5.46 | | protocol_version | 10 | | slave_type_conversions | | | version | 5.5.46-0ubuntu0.14.04.2 | | version_comment | (Ubuntu) | | version_compile_machine | x86_64 | | version_compile_os | debian-linux-gnu | +-------------------------+-------------------------+ 7 rows in set (0.00 sec) ```
MySQL 8.0 introduced support for both `SKIP LOCKED` and `NO WAIT`. `SKIP LOCKED` is useful for implementing a job queue (a.k.a. batch queue) so that you can skip over rows that are already locked by a concurrent transaction. `NO WAIT` is useful for avoiding waiting until a concurrent transaction releases the locks that we are also interested in locking. Without `NO WAIT`, we either have to wait until the locks are released (at commit or release time by the transaction that currently holds the locks) or the lock acquisition times out. `NO WAIT` acts as a lock timeout with a value of `0`. For more details, see [`SKIP LOCKED`](https://vladmihalcea.com/database-job-queue-skip-locked/) and `NO WAIT`.
This appears to now exist in MySQL starting in 8.0.1: <https://mysqlserverteam.com/mysql-8-0-1-using-skip-locked-and-nowait-to-handle-hot-rows/> > Starting with MySQL 8.0.1 we are introducing the SKIP LOCKED modifier > which can be used to non-deterministically read rows from a table > while skipping over the rows which are locked. This can be used by > our booking system to skip orders which are pending. For example: However, I think that version is not necessarily production ready.
MySQL InnoDB "SELECT FOR UPDATE" - SKIP LOCKED equivalent
[ "", "mysql", "sql", "multithreading", "concurrency", "innodb", "" ]
How do I merge these three queries into one? 1. ``` SELECT "Skills"."name", "Skills"."id", "TrainerScores"."fellow_uid", MIN("TrainerScores"."score") AS "score" FROM "TrainerScores" INNER JOIN "Skills" ON "TrainerScores"."skill_id" = "Skills"."id" WHERE "TrainerScores"."fellow_uid" = 'google:105697533513134511631' AND DATE("TrainerScores"."created_at") BETWEEN '2015-10-01' AND '2015-10-30' GROUP BY "Skills"."name", "Skills"."id", "TrainerScores"."fellow_uid" ``` 2. ``` Select "Skills"."name", "Skills"."id", MIN("PeerScores"."score") AS "score" FROM "PeerScores" LEFT OUTER JOIN "Skills" ON "PeerScores"."skill_id" = "Skills"."id" WHERE "PeerScores"."evaluatee_uid" = 'google:105697533513134511631' AND DATE("PeerScores"."created_at") BETWEEN '2015-10-01' AND '2015-10-30' GROUP BY "Skills"."name", "Skills"."id" ``` 3. ``` Select "Skills"."name", "Skills"."id", MIN("SelfScores"."score") AS "score" FROM "SelfScores" LEFT OUTER JOIN "Skills" ON "SelfScores"."skill_id" = "Skills"."id" WHERE "SelfScores"."fellow_uid" = 'google:105697533513134511631' AND DATE("SelfScores"."created_at") BETWEEN '2015-10-01' AND '2015-10-30' GROUP BY "Skills"."name", "Skills"."id" ``` I want to use this as a report and I do not want to call each one of the queries every time I want to get the data.
Basically, use `UNION ALL` like [@jarlh already provided](https://stackoverflow.com/a/33413757/939860). Details in the manual in the chapter [**"Combining Queries"**](https://www.postgresql.org/docs/current/interactive/queries-union.html). But there is a ***lot more***. My educated guess, you really want this: ``` WITH vals AS (SELECT timestamp '2015-10-01 00:00' AS ts_low -- incl. lower bound , timestamp '2015-10-31 00:00' AS ts_hi -- excl. upper bound , text 'google:105697533513134511631' AS uid) SELECT s.name, sub.* FROM ( SELECT skill_id AS id, min(score) AS score, 'T' AS source FROM "TrainerScores", vals v WHERE fellow_uid = v.uid AND created_at >= v.ts_low AND created_at < v.ts_hi GROUP BY 1 UNION ALL SELECT skill_id, min(score), 'P' FROM "PeerScores", vals v WHERE evaluatee_uid = v.uid AND created_at >= v.ts_low AND created_at < v.ts_hi GROUP BY 1 UNION ALL SELECT skill_id, min(score), 'S' FROM "SelfScores", vals v WHERE fellow_uid = v.uid AND created_at >= v.ts_low AND created_at < v.ts_hi GROUP BY 1 ) sub JOIN "Skills" s USING (id); ``` ### Major points * First I trimmed the noise from your syntax (probably produced by your ORM) to make it human-readable: remove redundant double-quotes, add table aliases, trim noise words ... * Your use of `LEFT [OUTER] JOIN` was broken, since you filter on columns of the left table, which counteracts the `LEFT JOIN`. Replace with `[INNER] JOIN`. * Use [**sargable**](https://en.wiktionary.org/wiki/sargable) expressions in the `WHERE` clause or your query can't use plain indexes and will be very slow for big tables. Related: + [Get difference of another field between first and last timestamps of grouping](https://stackoverflow.com/questions/20565421/get-difference-of-another-field-between-first-and-last-timestamps-of-grouping/20574117#20574117) * Provide your parameters ***once*** in a CTE (`WITH` clause) - which is not needed in a prepared statement where you pass `uid`, `ts_low` and `ts_hi` as parameters instead. * I removed `"TrainerScores"."fellow_uid"` from the output in your first query to simplify the query. That's just your input parameter anyway. * You can aggregate your respective main tables ***before*** you join to `"Skills"` *once*. * I added a column `source` to signify the source of each row. Aside: It seems like you want to match the whole of October 2015, but then you exclude Oct. 31. Is that on purpose?
Alternative 1, simply a huge `UNION ALL`: ``` SELECT "Skills"."name", "Skills"."id", "TrainerScores"."fellow_uid", MIN("TrainerScores"."score") AS "score" FROM "TrainerScores" INNER JOIN "Skills" ON "TrainerScores"."skill_id" = "Skills"."id" WHERE "TrainerScores"."fellow_uid" = 'google:105697533513134511631' AND DATE("TrainerScores"."created_at") BETWEEN '2015-10-01' AND '2015-10-30' GROUP BY "Skills"."name", "Skills"."id", "TrainerScores"."fellow_uid" UNION ALL Select "Skills"."name", "Skills"."id", NULL, MIN("PeerScores"."score") AS "score" FROM "PeerScores" LEFT OUTER JOIN "Skills" ON "PeerScores"."skill_id" = "Skills"."id" WHERE "PeerScores"."evaluatee_uid" = 'google:105697533513134511631' AND DATE("PeerScores"."created_at") BETWEEN '2015-10-01' AND '2015-10-30' GROUP BY "Skills"."name", "Skills"."id" UNION ALL Select "Skills"."name", "Skills"."id", NULL, MIN("SelfScores"."score") AS "score" FROM "SelfScores" LEFT OUTER JOIN "Skills" ON "SelfScores"."skill_id" = "Skills"."id" WHERE "SelfScores"."fellow_uid" = 'google:105697533513134511631' AND DATE("SelfScores"."created_at") BETWEEN '2015-10-01' AND '2015-10-30' GROUP BY "Skills"."name", "Skills"."id" ```
How to return rows from three tables at once?
[ "", "sql", "node.js", "postgresql", "sequelize.js", "" ]
I have three tables, of which 2 are regular data tables and 1 is a many-to-many junction table. The two data tables: ``` table products product_id | product_name | product_color ----------------------------------------- 1 | Pear | Green 2 | Apple | Red 3 | Banana | Yellow ``` and ``` table shops shop_id | shop_location -------------------------- 1 | Foo street 2 | Bar alley 3 | Fitz lane ``` I have a junction table which contains the `shop_id`'s and `product_id`'s: ``` table shops_products shop_id | product_id -------------------- 1 | 1 1 | 2 2 | 1 2 | 2 2 | 3 3 | 2 3 | 3 ``` I want to select data from products that are in the shop with shop_id 3. I tried many examples from here with joins, left joins, inner joins, but I just don't know what I'm doing here and what is going wrong. The query I had, which just returned all products regardless of whether they are in the specified shop, is the following: ``` SELECT products.product_name, products.product_color FROM products LEFT OUTER JOIN shops_products ON products.product_id = shops_products.product_id AND shops_products.shop_id = 3 LEFT OUTER JOIN shops ON shops_products.shop_id = shops.shop_id ``` The expected output is the following: ``` product_name | product_color ---------------------------- Apple | Red Banana | Yellow ``` This is in MySQL, thank you for any help, I really appreciate it.
I like to start from the outside and move in. So imagine all the columns were jammed together in just one table; you could write something like: ``` SELECT * FROM products WHERE shop_id = 3 ``` You then just need to add the joins to make this statement possible. We know we need to add the join table next (as it's the one that joins directly onto the products table due to it having the product_id in it). So that join is what goes next: ``` SELECT products.* FROM products INNER JOIN shops_products ON products.product_id = shops_products.product_id WHERE shops_products.shop_id = 3 ``` and actually you can stop right here... because `shop_id` exists on the join table already. But let's say you also wanted the shop's location in the set of final columns; you'd then add the shop-table join. ``` SELECT products.*, shops.shop_location FROM products INNER JOIN shops_products ON products.product_id = shops_products.product_id INNER JOIN shops ON shops_products.shop_id = shops.shop_id WHERE shops_products.shop_id = 3 ```
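As a quick, runnable sanity check of the final join, here is a sketch using Python's built-in `sqlite3` with an in-memory database. The table names and sample rows are copied from the question; SQLite only stands in for MySQL here, but this join syntax is common to both.

```python
import sqlite3

# In-memory stand-in for the question's three tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (product_id INTEGER, product_name TEXT, product_color TEXT);
CREATE TABLE shops (shop_id INTEGER, shop_location TEXT);
CREATE TABLE shops_products (shop_id INTEGER, product_id INTEGER);
INSERT INTO products VALUES (1,'Pear','Green'),(2,'Apple','Red'),(3,'Banana','Yellow');
INSERT INTO shops VALUES (1,'Foo street'),(2,'Bar alley'),(3,'Fitz lane');
INSERT INTO shops_products VALUES (1,1),(1,2),(2,1),(2,2),(2,3),(3,2),(3,3);
""")

# The INNER JOIN through the junction table filters products to shop 3.
rows = conn.execute("""
    SELECT products.product_name, products.product_color
    FROM products
    INNER JOIN shops_products
        ON products.product_id = shops_products.product_id
    WHERE shops_products.shop_id = 3
    ORDER BY products.product_id
""").fetchall()
print(rows)  # [('Apple', 'Red'), ('Banana', 'Yellow')]
```

Changing the `WHERE` value to 2 would return all three products, since shop 2 stocks everything.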
You can try this. ``` SELECT products.product_name, products.product_color FROM products INNER JOIN shops_products ON products.product_id = shops_products.product_id WHERE shops_products.shop_id = 3 ```
SQL join on junction table with many to many relation
[ "", "mysql", "sql", "join", "junction-table", "" ]
I have multiple tables in a PostgreSQL 9.4 database, where each row contains an interval as two columns "start" (inclusive) and "stop" (exclusive). Consider the following pseudo-code (the tables are more complicated). ``` CREATE TABLE left ( start TIMESTAMP, stop TIMESTAMP, [...] ); CREATE TABLE right ( start TIMESTAMP, stop TIMESTAMP, [...] ); ``` The intervals are inclusive of the start, but exclusive of the stop. I now need a query to find all possible intervals of time where there is a row in "left" covering the interval, but not simultaneously a row in "right" covering the same interval. One interval in "left" can be cut up into any number of intervals in the result, be shortened, or be entirely absent. Consider the following graph, with time progressing from left to right: ``` left [-----row 1------------------) [--row 2--) [--row 3----) right [--row1--) [--row2--) [--row3--) result [----) [----) [-------) [-----------) ``` In this tiny example, "left" has three rows, each representing an interval, and "right" has three rows, each representing another interval. The result has four rows of intervals, which together cover all possible timestamps where there is a row/interval in "left" covering that timestamp, but not a row/interval in "right" covering the same timestamp. The tables are of course in reality very much larger than three rows each - in fact I will frequently want to perform the algorithm between two subqueries that have the "start" and "stop" columns. I have hit a dead end (multiple dead ends, in fact), and am on the verge of just fetching all records into memory and applying some procedural programming to the problem... Any solutions or suggestions of what thinking to apply are greatly appreciated.
Change the types of the columns to `tsrange` (or create appropriate views): ``` CREATE TABLE leftr ( duration tsrange ); CREATE TABLE rightr ( duration tsrange ); insert into leftr values ('[2015-01-03, 2015-01-20)'), ('[2015-01-25, 2015-02-01)'), ('[2015-02-08, 2015-02-15)'); insert into rightr values ('[2015-01-01, 2015-01-06)'), ('[2015-01-10, 2015-01-15)'), ('[2015-01-18, 2015-01-26)'); ``` The query: ``` select duration * gap result from ( select tsrange(upper(duration), lower(lead(duration) over (order by duration))) gap from rightr ) inv join leftr on duration && gap result ----------------------------------------------- ["2015-01-06 00:00:00","2015-01-10 00:00:00") ["2015-01-15 00:00:00","2015-01-18 00:00:00") ["2015-01-26 00:00:00","2015-02-01 00:00:00") ["2015-02-08 00:00:00","2015-02-15 00:00:00") (4 rows) ``` The idea: ``` l [-----row 1------------------) [--row 2--) [--row 3----) r [--row1--) [--row2--) [--row3--) inv(r) [----) [----) [-------------------------> l*inv(r) [----) [----) [-------) [-----------) ```
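For readers who do end up "fetching all records into memory and applying some procedural programming", the interval-subtraction idea is easy to mirror in application code. This is a hypothetical sketch: plain integers stand in for timestamps, and the interval layout mirrors the question's diagram (start inclusive, stop exclusive).

```python
# Each interval is (start, stop), start inclusive, stop exclusive,
# shaped like the question's diagram with integers as timestamps.
left = [(3, 20), (25, 32), (39, 46)]
right = [(1, 6), (10, 15), (18, 26)]

def subtract(left, right):
    """Return the sub-intervals covered by some 'left' interval
    but by no 'right' interval."""
    result = []
    for ls, le in left:
        pieces = [(ls, le)]
        for rs, re_ in right:
            nxt = []
            for s, e in pieces:
                # keep the parts of (s, e) that fall outside (rs, re_)
                if rs > s:
                    nxt.append((s, min(e, rs)))
                if re_ < e:
                    nxt.append((max(s, re_), e))
            # drop empty or fully covered pieces
            pieces = [(s, e) for s, e in nxt if s < e]
        result.extend(pieces)
    return result

print(subtract(left, right))  # [(6, 10), (15, 18), (26, 32), (39, 46)]
```

Like the `tsrange` query, each left interval may survive whole, shrink, split, or vanish.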
If the type change to `tsrange` is not an option, here is an alternative solution using *window functions*. The important idea is to realize that *only the start and end points of the intervals are relevant*. In the first step, a transformation into a sequence of starting and ending timestamps is performed. (I use numbers to simplify the example). ``` insert into t_left select 1,4 from dual union all select 6,9 from dual union all select 12,13 from dual ; insert into t_right select 2,3 from dual union all select 5,7 from dual union all select 8,10 from dual union all select 11,14 from dual ; with event as ( select i_start tst, 1 left_change, 0 right_change from t_left union all select i_stop tst, -1 left_change, 0 right_change from t_left union all select i_start tst, 0 left_change, 1 right_change from t_right union all select i_stop tst, 0 left_change, -1 right_change from t_right ) select tst, left_change, right_change, sum(left_change) over (order by tst) as is_left, sum(right_change) over (order by tst) as is_right, '['||tst||','||lead(tst) over (order by tst) ||')' intrvl from event order by tst; ``` This ends with two records for each interval: one for the start (+1) and one for the end (-1 in the CHANGE column). ``` TST LEFT_CHANGE RIGHT_CHANGE IS_LEFT IS_RIGHT INTRVL ``` --- ``` 1 1 0 1 0 [1,2) 2 0 1 1 1 [2,3) 3 0 -1 1 0 [3,4) 4 -1 0 0 0 [4,5) 5 0 1 0 1 [5,6) 6 1 0 1 1 [6,7) 7 0 -1 1 0 [7,8) 8 0 1 1 1 [8,9) 9 -1 0 0 1 [9,10) 10 0 -1 0 0 [10,11) 11 0 1 0 1 [11,12) 12 1 0 1 1 [12,13) 13 -1 0 0 1 [13,14) 14 0 -1 0 0 [14,) ``` The window SUM function ``` sum(left_change) over (order by tst) ``` adds all changes so far, yielding 1 for being **in the interval** and 0 for being **out of the interval**. The filter to get all (sub)intervals that are *left only* is therefore trivial: ``` is_left = 1 and is_right = 0 ``` The (sub)interval starts with the timestamp of the current row and ends with the timestamp of the *next* row.
Final notes: * You may need to add logic to ignore intervals of length 0 * I'm testing in Oracle, so please re-check the Postgres functionality
SQL query to find all timestamps covered by an interval in A but not covered by an interval in B ("subtract" or "except" between multiple intervals)
[ "", "sql", "postgresql", "intervals", "" ]
I'm trying to query 3 tables to produce a dataset that gives a Yes/No result based on whether there are any fields in one of the tables that meet the criteria, grouping this by the customer. It's probably easiest to explain by giving an example of the code and results I'm trying to achieve. There are 3 tables: customers, orders, and products. I want a list of customers that have ordered specific products and then a simple yes/no rating on whether any of those products are fragile. I can get the results I want if I count the number of fragile products like so: ``` SELECT c.cust_id, c.cust_name, sum(CASE WHEN o.purchases IN(1000,2000,3000) THEN 1 ELSE 0 END) "Products Ordered", sum(CASE WHEN o.purchases IN(1000,2000,3000) AND p.fragility IN(4,5) THEN 1 ELSE 0 END) "Fragile product?" FROM customers_details c LEFT OUTER JOIN orders o ON (c.cust_id = o.cust_id) LEFT OUTER JOIN products p ON (o.product_id = p.product_id) GROUP BY c.cust_id, c.cust_name ``` This will produce the following: ``` cust_id | cust_name | Products Ordered | Fragile product? 100 | Joe Bloggs | 2 | 1 200 | Jane Bloggs | 3 | 2 ``` But what I want is for the last column to give a simple yes/no, so I've tried this: ``` SELECT c.cust_id, c.cust_name, sum(CASE WHEN o.purchases IN(1000,2000,3000) THEN 1 ELSE 0 END) "Products Ordered", CASE WHEN o.purchases IN(1000,2000,3000) AND p.fragility IN(4,5) THEN 1 ELSE 0 END "Fragile product?" FROM customers_details c LEFT OUTER JOIN orders o ON (c.cust_id = o.cust_id) LEFT OUTER JOIN products p ON (o.product_id = p.product_id) GROUP BY c.cust_id, c.cust_name, o.purchases ``` However that produces a dataset like so: ``` cust_id | cust_name | Products Ordered | Fragile product? 100 | Joe Bloggs | 1 | YES 100 | Joe Bloggs | 1 | NO 200 | Jane Bloggs | 1 | YES 200 | Jane Bloggs | 1 | YES 200 | Jane Bloggs | 1 | NO ``` I've tried using SELECT DISTINCT but that just removes one of the Jane Bloggs entries like so: ``` cust_id | cust_name | Products Ordered | Fragile product? 
100 | Joe Bloggs | 1 | YES 100 | Joe Bloggs | 1 | NO 200 | Jane Bloggs | 2 | YES 200 | Jane Bloggs | 1 | NO ``` Is what I'm trying to do possible? I'm querying a PostgreSQL DB. Edit: Using outer join because we want a full list of customers, and all orders that met the criteria rather than just the ones that are fragile.
You should be able to put the case when outside the sum: ``` SELECT c.cust_id, c.cust_name, sum(CASE WHEN o.purchases IN(1000,2000,3000) THEN 1 ELSE 0 END) "Products Ordered", CASE WHEN sum(CASE WHEN o.purchases IN(1000,2000,3000) AND p.fragility IN(4,5) THEN 1 ELSE 0 END) > 0 THEN 'Yes' ELSE 'No' END "Fragile product?" FROM customers_details c LEFT OUTER JOIN orders o ON (c.cust_id = o.cust_id) LEFT OUTER JOIN products p ON (o.product_id = p.product_id) GROUP BY c.cust_id, c.cust_name ```
It's kind of ugly, but you could try: ``` MAX(CASE WHEN o.purchases IN(1000,2000,3000) AND p.fragility IN(4,5) THEN 'YES' ELSE NULL END) "Fragile product?" ``` If you have to see a literal 'no' then you could wrap it in a COALESCE statement.
SQL using Case to produce a Yes/No result from a SUM operator
[ "", "sql", "postgresql", "" ]
I have the following table UserId [nvarchar(128)], Rating [varchar(170)]: values will be mostly 1, 2 or 3 but can have exceptions. Rating contains 3 values [1, 2, or 3]. I want to get a result something like ``` UserId Count(1's),Count(2's) Count(3's) 1. 1001 10 8 2 2. 1002 5 10 3 ``` Is it possible in a single query?
Do a `GROUP BY UserId` to count for each user-id. Use `CASE` to count 1's, 2's and 3's separately: ``` select UserId, count(case when Rating = 1 then 1 end) as [Count(1's)], count(case when Rating = 2 then 1 end) as [Count(2's)], count(case when Rating = 3 then 1 end) as [Count(3's)] from tablename group by UserId ```
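The same conditional-count pattern can be exercised end-to-end against an in-memory SQLite database via Python (hypothetical sample rows). It works because a `CASE` with no `ELSE` yields `NULL`, and `COUNT` ignores `NULL`s:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ratings (UserId TEXT, Rating TEXT);
INSERT INTO ratings VALUES
 ('1001','1'),('1001','1'),('1001','2'),('1001','3'),
 ('1002','2'),('1002','2'),('1002','3');
""")

# One row per user; each COUNT only sees rows matching its CASE.
rows = conn.execute("""
    SELECT UserId,
           COUNT(CASE WHEN Rating = '1' THEN 1 END) AS ones,
           COUNT(CASE WHEN Rating = '2' THEN 1 END) AS twos,
           COUNT(CASE WHEN Rating = '3' THEN 1 END) AS threes
    FROM ratings
    GROUP BY UserId
    ORDER BY UserId
""").fetchall()
print(rows)  # [('1001', 2, 1, 1), ('1002', 0, 2, 1)]
```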
Use [PIVOT](https://technet.microsoft.com/da-dk/library/ms177410(v=sql.105).aspx): ``` SELECT UserId, COALESCE([1],0) [Count(1's)], COALESCE([2],0) [Count(2's)], COALESCE([3],0) [Count(3's)] FROM your_table PIVOT (COUNT([Rating]) FOR Rating in([1],[2],[3]) ) AS p ORDER BY UserId ```
Count Values from Table for each type
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table like the following: ``` id | col1 | col2 | col3 | col4 ---+------+------+--------+----------- 1 | abc | 23 | data1 | otherdata1 2 | def | 41 | data2 | otherdata2 3 | ghi | 41 | data3 | otherdata3 4 | jkl | 58 | data4 | otherdata4 5 | mno | 23 | data1 | otherdata5 6 | pqr | 41 | data3 | otherdata6 7 | stu | 76 | data2 | otherdata7 ``` How can I quickly select rows where the col2+col3 combination has no duplicates? There are over 15 million rows in the table, so a join may not be suitable. The final result should look like this: ``` id | col1 | col2 | col3 | col4 ---+------+------+--------+----------- 2 | def | 41 | data2 | otherdata2 4 | jkl | 58 | data4 | otherdata4 7 | stu | 76 | data2 | otherdata7 ```
Window functions are definitely one possibility. But, if you care about performance, it is also worth trying another approach and comparing the speed. `NOT EXISTS` comes to mind: ``` select t.* from table t where not exists (select 1 from table t2 where t2.col2 = t.col2 and t2.col3 = t.col3 and t2.id <> t.id ); ``` This can take advantage of an index on `table(col2, col3)`.
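Here is the `NOT EXISTS` shape run against the question's sample data, using in-memory SQLite via Python as a stand-in for PostgreSQL (only `id` is selected to keep the check short):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (id INTEGER, col1 TEXT, col2 INTEGER, col3 TEXT);
INSERT INTO mytable VALUES
 (1,'abc',23,'data1'), (2,'def',41,'data2'), (3,'ghi',41,'data3'),
 (4,'jkl',58,'data4'), (5,'mno',23,'data1'), (6,'pqr',41,'data3'),
 (7,'stu',76,'data2');
""")

# A row survives only if no OTHER row shares its (col2, col3) pair.
ids = [r[0] for r in conn.execute("""
    SELECT t.id FROM mytable t
    WHERE NOT EXISTS (
        SELECT 1 FROM mytable t2
        WHERE t2.col2 = t.col2 AND t2.col3 = t.col3 AND t2.id <> t.id
    )
    ORDER BY t.id
""")]
print(ids)  # [2, 4, 7]
```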
Not sure how fast this will be, but this should work: ``` select id, col1, col2, col3, col4 from ( select id, col1, col2, col3, col4, count(*) over (partition by col2, col3) as cnt from the_table ) t where cnt = 1 order by id; ```
Exclude rows with the same values in some columns
[ "", "sql", "postgresql", "" ]
I am a Java developer, but I have a requirement to print a given number in a specific pattern in Oracle, like below: [![enter image description here](https://i.stack.imgur.com/qgpvj.png)](https://i.stack.imgur.com/qgpvj.png) I tried the approach below, but it didn't work for the last scenario mentioned in the input/output table above, i.e. input **123456** gives output **P12345**: ``` SELECT CONCAT('P', LPAD(my_number, 5, '0')) FROM DUAL; ``` Can you please guide me on which function is useful for this?
try this ``` SELECT CONCAT('P', LPAD(my_number, case when length(my_number) < 6 then 5 else length(my_number) end, '0')) FROM DUAL; ```
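The intent of the `CASE`-guarded `LPAD` (pad short inputs to five digits, pass longer ones through untouched) maps directly onto Python's `str.zfill`, which pads but never truncates. A small sketch for checking the expected outputs; `format_num` is a hypothetical name:

```python
def format_num(n):
    # zfill pads to width 5 but never truncates, matching the
    # CASE-based LPAD above: shorter inputs are zero-padded,
    # longer inputs pass through unchanged.
    return 'P' + str(n).zfill(5)

print(format_num(123))     # P00123
print(format_num(123456))  # P123456
```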
This is a flexible solution , covering all your examples but also even longer numbers ``` set serveroutput on; declare my_number varchar2(255); retval varchar2(255); begin my_number := '123456'; select case when length(my_number) < 6 then 'P' || LPAD(my_number, 5, '0') else 'P' || my_number end into retval FROM DUAL; dbms_output.put_line(retval); end; ```
How to format given number in Oracle to specific pattern?
[ "", "sql", "oracle", "" ]
I'm looking for a very basic version of proper case usable across SQL languages. This would only uppercase the first letter, and lowercase the rest (per record, not per word). For example, if a record has the value of `x` "the sky is GRAY", it would become "The sky is gray".
You can use `CONCAT` and concatenate part of string. ``` CREATE TABLE tab(x VARCHAR(1000)); INSERT INTO tab VALUES ('the sky is GRAY'); SELECT CONCAT(UPPER(LEFT(x, 1)), LOWER(RIGHT(x, LENGTH(x) - 1))) AS result FROM tab; ``` `SqlFiddleDemo` For more secure solution I would also trim text, because: ``` INSERT INTO tab VALUES ( ' the sky is GRAY'); ``` You will get: ``` the sky is gray ``` With trim: ``` SELECT CONCAT(UPPER(LEFT(TRIM(x), 1)), LOWER(RIGHT(TRIM(x), LENGTH(TRIM(x)) - 1))) AS result FROM tab; ``` `SqlFiddleDemo2` *Warning:* I highly doubt there's one query to "rule them all." Depending on your RDBMS, you may need to use: * `SUBSTRING` instead of `LEFT/RIGHT` * `LTRIM(RTRIM)` instead of `TRIM` * `LEN/DATALENGTH` instead of `LENGTH` * `+/||` instead of `CONCAT`
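The same first-letter-upper, rest-lower expression can be exercised in SQLite through Python. Per the portability warning above, `SUBSTR` and `||` replace `LEFT`/`RIGHT`/`CONCAT` here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# TRIM first, then uppercase character 1 and lowercase the remainder.
row = conn.execute("""
    SELECT UPPER(SUBSTR(TRIM(x), 1, 1)) || LOWER(SUBSTR(TRIM(x), 2))
    FROM (SELECT ' the sky is GRAY' AS x)
""").fetchone()
print(row[0])  # The sky is gray
```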
An answer in pure Standard SQL would be: ``` UPPER(SUBSTRING(x FROM 1 FOR 1)) || LOWER(SUBSTRING(x FROM 2)) ```
Basic proper case string in generic SQL
[ "", "sql", "" ]
I have a string with coordinates like: '0 0, 1 2, 3 4, 0 0' and I want to split it into two columns x and y. The result I want to receive: [![enter image description here](https://i.stack.imgur.com/RMXNi.png)](https://i.stack.imgur.com/RMXNi.png) I found a solution for T-SQL, but unfortunately SQL Server does not have a built-in split function. I also found some functions that do this, but only for one kind of separator, and I can't 'upgrade' them by myself, as this is my very beginning with SQL. Please, help ;)
Check out this answer here... you should be able to tweak it for your needs: [TSQL: Nested split/parse of string into table (multiple concatenated Tag:Value in one string)](https://stackoverflow.com/questions/28772227/tsql-nested-split-parse-of-string-into-table-multiple-concatenated-tagvalue-i/28773089#28773089)
Here I use a recursive CTE **[SqlFiddleDemo](http://www.sqlfiddle.com/#!6/ca938/9)** ``` ;WITH tmp(id, DataItem, coord) as ( SELECT id, LEFT(coord, CHARINDEX(', ', coord + ', ')-1), STUFF(coord, 1, CHARINDEX(', ', coord + ', ')+1 , '') FROM coordinates UNION ALL SELECT id, LEFT(coord, CHARINDEX(', ', coord + ', ')-1), STUFF(coord, 1, CHARINDEX(', ', coord +', ')+1, '') FROM tmp WHERE coord > '' ) SELECT id, DataItem, coord, SUBSTRING(DataItem, 1, CASE CHARINDEX(' ', DataItem) WHEN 0 THEN LEN(DataItem) ELSE CHARINDEX(' ', DataItem)-1 END) AS X, SUBSTRING(DataItem, CASE CHARINDEX(' ', DataItem) WHEN 0 THEN LEN(DataItem)+1 ELSE CHARINDEX(' ', DataItem)+1 END, 1000) AS Y FROM tmp ORDER BY id ``` OUTPUT ``` | id | DataItem | coord | X | Y | |----|----------|---------------|---|---| | 1 | 0 0 | 1 2, 3 4, 0 0 | 0 | 0 | | 1 | 1 2 | 3 4, 0 0 | 1 | 2 | | 1 | 3 4 | 0 0 | 3 | 4 | | 1 | 0 0 | | 0 | 0 | ```
SQL Server split function with two delimiters
[ "", "sql", "sql-server", "split", "" ]
I have a table with these rows as an example: *City:* ``` San Francisco (CA) Miami (FL) ``` As a result I want this: *City:* ``` San Francisco Miami ``` Any help? Regards.
You can use `REGEXP_REPLACE`: ``` CREATE TABLE tab(City VARCHAR(120)); INSERT INTO tab VALUES ('San Francisco (CA)'); INSERT INTO tab VALUES ('Miami (FL)'); SELECT TRIM(REGEXP_REPLACE(city, '\((.+?)\)', '')) AS City FROM tab; ``` `SqlFiddleDemo` If you want to `UPDATE` you can use: ``` UPDATE tab SET City = TRIM(REGEXP_REPLACE(city, '\((.+?)\)', '')) WHERE INSTR(City, '(') > 0; SELECT City FROM tab; ``` `SqlFiddleDemo2`
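The regular expression itself, `\((.+?)\)` plus a trim, behaves the same outside the database. A quick Python check of the pattern used above (`strip_parens` is a hypothetical helper name):

```python
import re

def strip_parens(city):
    # Same pattern as the REGEXP_REPLACE above: drop any
    # parenthesised group, then trim leftover whitespace.
    return re.sub(r'\((.+?)\)', '', city).strip()

print(strip_parens('San Francisco (CA)'))  # San Francisco
print(strip_parens('Miami (FL)'))          # Miami
```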
For the sake of argument, here's a REGEXP\_SUBSTR( ) example. It assumes you want to keep everything from the start of the strin up to but not including the space before the first open paren. ``` SQL> with tbl(city) as ( select 'San Francisco (CA)' from dual union select 'Miami (FL)' from dual ) select regexp_substr(city, '^(.*) \(.*$', 1, 1, null, 1) new_city from tbl; NEW_CITY ------------------ Miami San Francisco SQL> ``` EDIT: Added a REGEXP\_REPLACE solution that does not require a TRIM(): ``` with tbl(city) as ( select 'San Francisco (CA)' from dual union select 'Miami (FL)' from dual ) select regexp_replace(city, '^(.*) \(.*$', '\1') new_city from tbl; ```
No show string between round brackets SQL Oracle
[ "", "sql", "oracle", "" ]
I am trying to write code that allows me to check if there are any cases of a particular pattern inside a table. The way I am currently doing it is with something like ``` select count(*) from database.table where column like (some pattern) ``` and seeing if the count is greater than 0. I am curious to see if there is any way I can speed up this process, as this type of pattern finding happens in a loop in my query, and all I need to know is if there is even one such case rather than the total number of cases. Any suggestions will be appreciated. EDIT: I am running this inside a Teradata stored procedure for the purpose of data quality validation.
If you don't need the actual count, the most efficient way in Teradata will use `EXISTS`: ``` select 1 where exists ( select * from database.table where column like (some pattern) ) ``` This will return an empty result set if the pattern doesn't exist.
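The bare `SELECT 1 WHERE EXISTS (...)` form is Teradata syntax, but the short-circuiting `EXISTS` idea is portable. A hypothetical SQLite sketch via Python, where `EXISTS` can be selected directly as a 1/0 scalar:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bigTbl (label TEXT);
INSERT INTO bigTbl VALUES ('alpha'), ('unit test run'), ('beta');
""")

# EXISTS can stop scanning at the first match, unlike COUNT(*).
found = conn.execute(
    "SELECT EXISTS (SELECT 1 FROM bigTbl WHERE label LIKE '%test%')"
).fetchone()[0]
missing = conn.execute(
    "SELECT EXISTS (SELECT 1 FROM bigTbl WHERE label LIKE '%zzz%')"
).fetchone()[0]
print(found, missing)  # 1 0
```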
Using `EXISTS` will be faster if you don't actually need to know how many matches there are. Something like this would work: ``` IF EXISTS ( SELECT * FROM bigTbl WHERE label LIKE '%test%' ) SELECT 'match' ELSE SELECT 'no match' ``` This is faster because once it finds a single match it can return a result.
Fastest way to check if any case of a pattern exist in a column using SQL
[ "", "sql", "performance", "count", "teradata", "" ]
I want to select the shortest and longest name, and if there is more than one name with the same length, I should get the first one ordered by the name For example if I have these values: ``` 'abc', 'def', 'abcd', 'defghi', 'ghi' ``` I should get something like ``` abc 3 defghi 6 ``` This is my query ``` select Name, LEN(Name) from Customer where len(Name) = (select min(len(Name)) from Customer) or len(Name) = (select max(len(Name)) from Customer) order by Name; ``` but here is what I get ``` abc 3 def 3 defghi 6 ghi 3 ```
I think this is easiest as a `union all` with `top`: ``` select * from ((select top 1 c.* from customer order by len(name) desc, name ) union all (select top 1 c.* from customer order by len(name) asc, name ) ) t; ```
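`TOP` is SQL Server syntax; on engines that use `LIMIT`, the same two-branch `union all` looks like this. A runnable SQLite sketch via Python with the question's sample names; results are sorted in Python because a bare `UNION ALL` does not guarantee branch order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (Name TEXT);
INSERT INTO Customer VALUES ('abc'),('def'),('abcd'),('defghi'),('ghi');
""")

# One branch picks the shortest name (ties broken by name),
# the other picks the longest; LIMIT 1 plays the role of TOP 1.
rows = sorted(conn.execute("""
    SELECT Name, LENGTH(Name) FROM
      (SELECT Name FROM Customer ORDER BY LENGTH(Name) ASC,  Name LIMIT 1)
    UNION ALL
    SELECT Name, LENGTH(Name) FROM
      (SELECT Name FROM Customer ORDER BY LENGTH(Name) DESC, Name LIMIT 1)
""").fetchall())
print(rows)  # [('abc', 3), ('defghi', 6)]
```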
With window functions: ``` with cte as(select *, row_number() over(order by len(name), name) rn1, row_number() over(order by len(name) desc, name) rn2 from Customer) select name, len(name) from cte where rn1 = 1 or rn2 = 1 ```
How to get distinct values from aggregate method in SQL Server
[ "", "sql", "sql-server", "" ]
I have a table T1 with the following values: [![Table](https://i.stack.imgur.com/AF5Ai.png)](https://i.stack.imgur.com/AF5Ai.png) I need a result table with an additional column which is the running average up to that date, i.e., `x1= 1000.45` `x2= (1000.45+2000.00)/2` `x3= (1000.45+2000.00+3000.50)/3` `x4= (1000.45+2000.00+3000.50+4000.24)/4` The result table should look like the following: [![expected result](https://i.stack.imgur.com/Ce6ZT.png)](https://i.stack.imgur.com/Ce6ZT.png) I need to write an SQL statement in an Oracle database to add a column to the result table with column values x1, x2, x3, x4.
You need to use an analytic function for this. My untested SQL is as follows: ``` SELECT date, division, sum_sales, AVG( sum_sales ) OVER ( ORDER BY date ROWS UNBOUNDED PRECEDING ) FROM table; ``` `date` is a reserved word in Oracle, so if you are using that as your real column name you will need to include it in quotes.
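The `AVG(...) OVER (ORDER BY ... ROWS UNBOUNDED PRECEDING)` form is the right tool in Oracle. As a portable cross-check, the same running average can be computed with a correlated subquery. A hypothetical SQLite sketch via Python using the question's numbers; the column is named `d` since `date` is reserved:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (d TEXT, division TEXT, sum_sales REAL);
INSERT INTO sales VALUES
 ('2015-01-01','A',1000.45),
 ('2015-01-02','A',2000.00),
 ('2015-01-03','A',3000.50),
 ('2015-01-04','A',4000.24);
""")

# For each row, average every row up to and including its date,
# which is exactly what the UNBOUNDED PRECEDING frame does.
rows = conn.execute("""
    SELECT d, sum_sales,
           (SELECT AVG(s2.sum_sales) FROM sales s2 WHERE s2.d <= s1.d)
               AS running_avg
    FROM sales s1
    ORDER BY d
""").fetchall()
for r in rows:
    print(r)
```

The correlated form is O(n^2) and only a cross-check; the window function is the one to use at scale.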
``` select date,division,sum_sales,avg(sum_sales) over ( order by sum_sales ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) from table group by date,division,sum_sales ```
SQL statement to update a column
[ "", "sql", "database", "oracle", "sql-update", "" ]
I am being supplied a single integer that is supposed to represent an hour. So if it returns 1 it is 1:00 am, and so forth on a 24-hour clock; 13 for example is 1:00 pm. I need to convert this into a time in SQL. I know MySQL has a function which does this: SEC_TO_TIME(TheHour*60*60) Is there an equivalent I can use in SQL Server? How do I do this?
You could do something like this. ``` select cast(DATEADD(hour, 13, 0) as time) ``` The upside is that it will still work even with negative numbers or values over 24.
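The `CAST(DATEADD(hour, n, 0) AS time)` trick (add n hours to a zero date, keep only the time part) translates directly to Python's `datetime` arithmetic, which shows why negatives and values over 24 still wrap correctly:

```python
from datetime import datetime, timedelta

def hour_to_time(h):
    # Mirror CAST(DATEADD(hour, h, 0) AS time): add h hours to an
    # arbitrary base date and keep only the time component, so
    # negatives and values over 24 wrap instead of failing.
    return (datetime(1900, 1, 1) + timedelta(hours=h)).time()

print(hour_to_time(13))  # 13:00:00
print(hour_to_time(25))  # 01:00:00
print(hour_to_time(-1))  # 23:00:00
```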
There are two T-SQL functions: ``` DATEFROMPARTS ( year, month, day ) ``` and ``` TIMEFROMPARTS ( hour, minute, seconds, fractions, precision ) ``` then you can use CONVERT if you need to format it.
Convert an Integer to time
[ "", "sql", "sql-server", "" ]
I need to get the MAX date from a bunch of different dates across different tables. I have a query that returns me 6 dates. I need to somehow get that max of that query. Here is my first query that works and gives me the 6 different MAX dates: ``` SELECT MAX(mi.last_updated_dts), (SELECT MAX(lt.created_dts) FROM live_training as lt WHERE lt.me_id = 1156), (SELECT MAX(gm.created_dts) FROM group_member as gm WHERE gm.me_id = 1156), (SELECT MAX(clm.created_dts) FROM contact_list_member as clm WHERE clm.me_id = 1156), (SELECT MAX(mc.created_dts) FROM member_case as mc WHERE mc.me_id = 1156), (SELECT MAX(mcc.created_dts) FROM member_case_comment as mcc INNER JOIN member_case as mc ON (mcc.member_case_id = mc.id) WHERE mc.me_id = 1156) FROM member_info AS mi WHERE mi.id = 1276 ``` Here is kind of what I'm trying to do but it doesn't work: ``` SELECT MAX( SELECT MAX(mi.last_updated_dts), (SELECT MAX(lt.created_dts) FROM live_training as lt WHERE lt.me_id = 1156), (SELECT MAX(gm.created_dts) FROM group_member as gm WHERE gm.me_id = 1156), (SELECT MAX(clm.created_dts) FROM contact_list_member as clm WHERE clm.me_id = 1156), (SELECT MAX(mc.created_dts) FROM member_case as mc WHERE mc.me_id = 1156), (SELECT MAX(mcc.created_dts) FROM member_case_comment as mcc INNER JOIN member_case as mc ON (mcc.member_case_id = mc.id) WHERE mc.me_id =1156) FROM member_info AS mi WHERE mi.id = 1276 ) as theMostMaxDate ``` I don't know what to put in the from statement or if this is even possible. Thanks for any help!
One solution could be to rewrite the query to use `union all` in a derived table and get the max from that, although this would only get you a date and no information about the source of it (but if that is important you could add a column with a literal value indicating what query gave you the max date): ``` SELECT MAX(max_date) FROM ( SELECT MAX(mi.last_updated_dts) AS max_date FROM member_info AS mi WHERE mi.id = 1276 UNION ALL SELECT MAX(lt.created_dts) FROM live_training as lt WHERE lt.me_id = 1156 UNION ALL SELECT MAX(gm.created_dts) FROM group_member as gm WHERE gm.me_id = 1156 UNION ALL SELECT MAX(clm.created_dts) FROM contact_list_member as clm WHERE clm.me_id = 1156 UNION ALL SELECT MAX(mc.created_dts) FROM member_case as mc WHERE mc.me_id = 1156 UNION ALL SELECT MAX(mcc.created_dts) FROM member_case_comment as mcc INNER JOIN member_case as mc ON (mcc.member_case_id = mc.id) WHERE mc.me_id = 1156 ) t ```
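The shape of this `UNION ALL` approach is easy to verify on a reduced example: two stand-in tables instead of six, run on in-memory SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (dts TEXT);
CREATE TABLE b (dts TEXT);
INSERT INTO a VALUES ('2015-03-01'), ('2015-06-15');
INSERT INTO b VALUES ('2015-09-30');
""")

# Each branch contributes its per-table MAX; the outer MAX picks
# the overall latest (ISO-8601 strings compare chronologically).
latest = conn.execute("""
    SELECT MAX(max_date) FROM (
        SELECT MAX(dts) AS max_date FROM a
        UNION ALL
        SELECT MAX(dts) FROM b
    ) t
""").fetchone()[0]
print(latest)  # 2015-09-30
```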
@jpw has the right solution for this problem. I just want to point out that you can do it without subqueries: ``` SELECT MAX(mi.last_updated_dts) AS max_date FROM member_info AS mi WHERE mi.id = 1276 UNION ALL SELECT MAX(lt.created_dts) FROM live_training as lt WHERE lt.me_id = 1156 UNION ALL SELECT MAX(gm.created_dts) FROM group_member as gm WHERE gm.me_id = 1156 UNION ALL SELECT MAX(clm.created_dts) FROM contact_list_member as clm WHERE clm.me_id = 1156 UNION ALL SELECT MAX(mc.created_dts) FROM member_case as mc WHERE mc.me_id = 1156 UNION ALL SELECT MAX(mcc.created_dts) FROM member_case_comment as mcc INNER JOIN member_case as mc ON (mcc.member_case_id = mc.id) WHERE mc.me_id = 1156 ORDER BY max_date DESC LIMIT 1; ```
How to select max date from a list of max date subqueries
[ "", "mysql", "sql", "date", "" ]
I am wondering if someone can explain the concept of uniquely identifying SQL Server objects in a join. In my example there are 2 schemas and 2 tables (with the same name). My assumption was that even though the table name might be the same between the 2 schemas, as long as they are referenced with their fully qualified name databasename.schemaname.objectname, SQL Server should be able to tell the difference. **However that does not seem to be the case, and the workaround for this is to use an alias**. I would appreciate it if someone can explain or point out some literature around **why SQL Server cannot uniquely identify these**. ``` CREATE SCHEMA [Sch1] GO CREATE SCHEMA [Sch2] GO CREATE TABLE [Sch1].[Table_1]( [ID] [int] NULL, [DESC] [nchar](10) NULL ) ON [PRIMARY] GO CREATE TABLE [Sch2].[Table_1]( [ID] [int] NULL, [DESC] [nchar](10) NULL ) ON [PRIMARY] GO Select * From Sch1.Table_1 Join Sch2.Table_1 on Sch1.Table_1.Id = Sch2.Table_1.Id ```
**SQL Server** supports multi-part identifiers: ``` linked_server.db_name.schema.table_name ``` In your case you have: ``` Select * From Sch1.Table_1 Join Sch2.Table_1 on Sch1.Table_1.Id = Sch2.Table_1.Id ``` Now you wonder why `SQL Server` cannot differentiate between them: ``` Sch1.Table_1 != Sch2.Table_1 ``` This is because `SQL Server` uses something called an **`exposed name`**. > **exposed name** > > which is the **last part of the multi-part table name** (if there is no > alias), or alias name when present Returning to your query, you have exposed names `Table_1` and `Table_1`, which are duplicates, so you need to use aliases. From **`SQL Server 2005+`**: > Duplicate table detection algorithm has been changed correspondingly, > so that any tables with the same exposed names will be considered > duplicates I suspect that your code could work with SQL Server 2000, but I cannot check it for sure. For more info read **[`Msg 1013`](http://blogs.msdn.com/b/ialonso/archive/2007/12/21/msg-1013-the-object-s-and-s-in-the-from-clause-have-the-same-exposed-names-use-correlation-names-to-distinguish-them.aspx)**
As far as I can tell, I don't see any errors in your sample code. Please explain in detail what errors you're encountering. As for the four-part naming convention, the full object name syntax is: ``` server.database.schema.object ``` So a complete usage would be, for example: ``` select * from servername.databasename.Sch1.Table_1 ``` or ``` select * from servername.databasename.Sch2.Table_2 ``` from which you can ignore any part as long as there is no ambiguity. Therefore in your example you can ignore servername and databasename as they are the same. But you cannot ignore schema names as they are not. Addendum: Based on the error message you posted later, you need to employ correlation naming on the join syntax: ``` select * from Sch1.Table_1 as t1 inner join Sch2.Table_1 as t2 on t1.ID=t2.ID ```
SQL Server Object Names
[ "", "sql", "sql-server", "join", "objectname", "" ]
I am not sure how to ask this question... but here it goes. I have a table called `lists` that has four columns, `id`, `name`, `type`, and `group`. The `type` column values are not unique and is a text column. I need a query to return `id` and `type` from table `lists`, but I only need one of each `type`. It should also be in alphabetical order. It can be the first occurrence of each type. In postgreSQL 9.4.5 I have tried `select distinct list_type, id from lists order by list_type;`, but this of course returns duplicate types that have unique ids. I only need the first occurrence of each `type` and that `id`. Some sample data: ``` id | type --------------------------------------+-------------------------------- 0014dea9-f73f-46d0-bf7d-d52717029582 | invest_fees 004665b5-4657-4cbc-8534-1aa9e4ef305f | invest_fees 00910f6c-bdf0-4991-ac0c-969b3b9c6b84 | invest_fees 009ed283-531b-4d7b-b51d-0c0e0e7a5707 | countries 00b4f8e2-ae47-4083-ae6e-8d6dbaa2cd63 | invest_fees 00ca1972-cf70-4fa2-bfde-bc89fce19433 | invest_fees 00feb6a2-4ee7-4e31-8780-cb5938587669 | countries 010777bc-7e74-4c13-8808-4c35cfdbf988 | pick_banks 01852693-7560-4de5-a534-0519e7c04c51 | countries 01bee5a4-23f7-427d-9b84-4c707154a812 | countries 01bf29f9-70af-4b3c-b7f9-d01e0f0f142c | invest_fees 01d51fe3-4c32-4d21-b38c-8e84a92ff0aa | invest_fees 01d981dd-13d4-4098-a7e3-bd1bb5f02f2b | countries 01de77bb-ff82-4c3c-b26f-f3829d84dd29 | invest_fees 01df6e6c-9a77-4b83-a825-09949768df54 | countries 01f11d01-f490-48a9-b21c-803f7c03f5f6 | invest_mtos ```
You can use the `row_number` function to get one `id` (ordered ascending) per `type`. ``` select id, type from ( select id, name, type, "group", row_number() over(partition by type order by type) as rn from lists ) t where rn = 1 ```
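`row_number()` is the standard tool here. On engines (or versions) without window functions, the same "first id per type" can be had by joining back to a grouped `MIN(id)`; note this picks the *smallest* id per type rather than an arbitrary first one. A hypothetical SQLite sketch via Python with shortened ids from the question's sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lists (id TEXT, type TEXT);
INSERT INTO lists VALUES
 ('0014dea9','invest_fees'),
 ('004665b5','invest_fees'),
 ('009ed283','countries'),
 ('010777bc','pick_banks'),
 ('01f11d01','invest_mtos');
""")

# One representative row per type: the one with the smallest id.
rows = conn.execute("""
    SELECT l.id, l.type
    FROM lists l
    JOIN (SELECT type, MIN(id) AS first_id FROM lists GROUP BY type) f
      ON l.id = f.first_id
    ORDER BY l.type
""").fetchall()
for r in rows:
    print(r)
```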
In Postgres you can use ``` SELECT DISTINCT ON (type) type, many, other, columns FROM lists ORDER BY type ASC; ``` This should select only as many rows as there are different types. In your case there would be only one row where the type is `invest_fees`. You can read more about selections with `DISTINCT ON` right [here](http://www.postgresqltutorial.com/postgresql-select-distinct/)
SQL query to return only first occurance of one column value
[ "", "sql", "postgresql", "" ]
I have this table structure ``` +----------------+----------------+ | DATE | VALUE | |----------------|----------------| | 2015-01-01 | 5 | | 2015-01-02 | 4 | | 2015-01-03 | NULL | | 2015-02-10 | 2 | | 2015-02-25 | 1 | +----------------+----------------+ ``` I'm trying to get the most recent **non null** value within each month. In this case it would be this: ``` +----------------+----------------+ | MONTH | VALUE | |----------------|----------------| | 2015-01 | 4 | | 2015-02 | 1 | +----------------+----------------+ ``` I've tried DENSE\_RANK but i'm having a difficult time dealing with the null values. Using: ``` SELECT TO_CHAR(date,'YYYY-MM'), MAX(value) KEEP (DENSE_RANK FIRST ORDER BY date DESC) FROM mytable GROUP BY TO_CHAR(date,'YYYY-MM') ``` I'm getting ``` +----------------+----------------+ | MONTH | VALUE | |----------------|----------------| | 2015-01 | NULL | | 2015-02 | 1 | +----------------+----------------+ ``` Obviously I'm doing something wrong. Can you help me figure this out? Thanks in advance. **EDIT:** Unfortunately, adding the condition ``` "WHERE value IS NOT NULL" ``` can't be applied to my situation.
Unfortunately, `MAX() KEEP` doesn't have an `IGNORE NULLS` clause, as far as I know. But `LAST_VALUE` does. So, how about this:

```
SELECT mth, MAX(last_val)
FROM (SELECT TO_CHAR(d, 'YYYY-MM') AS mth,
             d,
             n,
             LAST_VALUE(n IGNORE NULLS)
                 OVER (PARTITION BY TO_CHAR(d, 'YYYY-MM')
                       ORDER BY d ASC
                       ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS last_val
      FROM matt_test)
GROUP BY mth
```
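Oracle's `IGNORE NULLS` isn't portable, so as a hedge, here is a small Python/SQLite sketch that checks the same "latest non-NULL value per month" result with a correlated subquery instead (table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (d TEXT, n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("2015-01-01", 5), ("2015-01-02", 4), ("2015-01-03", None),
    ("2015-02-10", 2), ("2015-02-25", 1),
])

# For each month, keep the non-NULL row with the latest date.
rows = conn.execute("""
    SELECT strftime('%Y-%m', a.d) AS month, a.n
    FROM t AS a
    WHERE a.n IS NOT NULL
      AND a.d = (SELECT MAX(b.d) FROM t AS b
                 WHERE b.n IS NOT NULL
                   AND strftime('%Y-%m', b.d) = strftime('%Y-%m', a.d))
    ORDER BY month
""").fetchall()
print(rows)  # [('2015-01', 4), ('2015-02', 1)]
```

This matches the expected output table from the question, including the January row where the latest value was NULL.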
I have a personal aversion to constructions like `SELECT ... FROM (SELECT ... FROM ...)`, so this is my proposal (ordering the window by date descending over the full partition, so every row in a month carries the latest non-NULL value and `DISTINCT` collapses them to one row):

```
SELECT DISTINCT
       TRUNC(THE_DATE, 'MM') AS MONTH,
       FIRST_VALUE(THE_VALUE IGNORE NULLS)
           OVER (PARTITION BY TRUNC(THE_DATE, 'MM')
                 ORDER BY THE_DATE DESC
                 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS VALUE
FROM MY_TABLE;
```
Oracle - Dealing with NULLS in DENSE_RANK
[ "sql", "oracle" ]
I have 5 tables. Given a *discount* value, I want to find how many of those tables (out of 5) that *discount* value appears in. Is it in all 5 tables, or just in 2, etc.?

E.g., let's say a 12% discount. I want to check how many tables (out of the 5) 12% exists in.

Note: a *discount* value appears only one time in a given table (unique discounts).

Can someone please help me :)

**Tables**

```
table1(userid,discount)
table2(userid,discount)
table3(userid,discount)
table4(userid,discount)
table5(userid,discount)
```
You can use `UNION`: ``` SELECT COUNT(*) AS numOfDiscounts FROM ( SELECT discount FROM table1 WHERE discount = 12 UNION ALL SELECT discount FROM table2 WHERE discount = 12 UNION ALL SELECT discount FROM table3 WHERE discount = 12 UNION ALL SELECT discount FROM table4 WHERE discount = 12 UNION ALL SELECT discount FROM table5 WHERE discount = 12) AS t ``` The above query gives the number of tables containing a row with `discount = 12`. [**Demo here**](http://sqlfiddle.com/#!9/f978bc/2) Alternatively, you can use: ``` SELECT COALESCE((SELECT COUNT(*) FROM table1 WHERE discount = 12),0) + COALESCE((SELECT COUNT(*) FROM table2 WHERE discount = 12),0) + COALESCE((SELECT COUNT(*) FROM table3 WHERE discount = 12),0) + COALESCE((SELECT COUNT(*) FROM table4 WHERE discount = 12),0) + COALESCE((SELECT COUNT(*) FROM table5 WHERE discount = 12),0) AS numOfDiscounts ``` [**Demo here**](http://sqlfiddle.com/#!9/f978bc/3) or: ``` SELECT (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table1) + (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table2) + (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table3) + (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table4) + (SELECT COUNT(CASE WHEN discount=12 THEN 1 END) FROM table5) AS numOfDiscounts ``` [**Demo here**](http://sqlfiddle.com/#!9/fc8ab/1)
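A quick way to verify the `UNION ALL` counting pattern is an in-memory SQLite sketch (three tables instead of five just to keep it short; the pattern extends identically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Made-up sample data: 12 appears in table1 and table3 but not table2.
for name, discounts in [("table1", [12, 5]), ("table2", [7]), ("table3", [12])]:
    conn.execute(f"CREATE TABLE {name} (userid INTEGER, discount INTEGER)")
    conn.executemany(f"INSERT INTO {name} VALUES (?, ?)",
                     [(i, d) for i, d in enumerate(discounts)])

# One SELECT per table, each contributing at most one row (discounts
# are unique per table), then count the combined rows.
count = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT discount FROM table1 WHERE discount = 12
        UNION ALL
        SELECT discount FROM table2 WHERE discount = 12
        UNION ALL
        SELECT discount FROM table3 WHERE discount = 12
    )
""").fetchone()[0]
print(count)  # 2
```

Because each discount is unique within a table, the row count equals the number of tables containing the value.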
The only option you can use is a `LEFT JOIN` with all the tables; that will give you the tables where your data resides.
How to find how many tables a given value appears in (MySQL)
[ "mysql", "sql" ]
I have a table of `tickets` with columns called `ticket_number` and `date`, which holds the date of creation. The data in that table:

```
ticket_number | date
150910.001      2015/09/10
150910.002      2015/09/10
150911.001      2015/09/11
150911.002      2015/09/11
150911.003      2015/09/11
```

I want to make a SQL function that creates a `ticket_number` based on the date. When the date changes, the last 3 digits reset to 001 again. I am having trouble writing a SQL query to return such information. So far, I have already tried:

```
CREATE FUNCTION FcNoTicket(@date as datetime)
RETURNS CHAR(10)
AS
BEGIN
DECLARE @newNumber char(10),@date_k char(7), @ticketNo as int

SELECT @date_k = RIGHT(Year(@date),2)+RIGHT('00'+CONVERT(NVARCHAR(2),DATEPART(MONTH,@date)),2)
+RIGHT('00'+CONVERT(NVARCHAR(2),DATEPART(day,@date)),2)+'.' from tickets

SELECT @ticketNo=ROW_NUMBER() OVER(PARTITION BY @date ORDER BY @date) from ticket

SET @newNumber=@date_k+right('00'+cast(@ticketNo+1 as varchar(3)),3)
RETURN @newNumber
END
```

That code returns a continuously increasing number instead of restarting at 001 for each date. Does anyone have any ideas on how to do this?
What about something like this: all tickets in a single query. Just add a `CREATE TABLE ... AS` if you need to persist the result.

[**SqlFiddle Demo**](http://sqlfiddle.com/#!6/db40e/7)

```
WITH tickets as (
      SELECT [date],
             row_number() over( partition by [date] order by [date]) as rn
      FROM YourTable
)
SELECT [date],
       RIGHT(Year([date]),2) +
       RIGHT('00' + CONVERT(NVARCHAR(2),DATEPART(MONTH,[date])),2) +
       RIGHT('00' + CONVERT(NVARCHAR(2),DATEPART(DAY  ,[date])),2) +
       '.' +
       RIGHT('000'+ CONVERT(NVARCHAR(3), rn), 3)  as ticket_id
FROM tickets
```

**OUTPUT**

```
| date                        | ticket_id  |
|-----------------------------|------------|
| September, 10 2015 00:00:00 | 150910.001 |
| September, 10 2015 00:00:00 | 150910.002 |
| September, 11 2015 00:00:00 | 150911.001 |
| September, 11 2015 00:00:00 | 150911.002 |
| September, 11 2015 00:00:00 | 150911.003 |
```
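The same restart-per-date numbering can be sketched with Python and SQLite, replacing the T-SQL date formatting with string functions (the single-column table and ISO date strings are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (d TEXT)")
conn.executemany("INSERT INTO tickets VALUES (?)",
                 [("2015-09-10",), ("2015-09-10",), ("2015-09-11",)])

# ROW_NUMBER restarts for each date; the yymmdd prefix is built by
# stripping the dashes and the century from the ISO date string.
rows = [r[0] for r in conn.execute("""
    SELECT substr(replace(d, '-', ''), 3) || '.' ||
           printf('%03d', ROW_NUMBER() OVER (PARTITION BY d ORDER BY d))
    FROM tickets
    ORDER BY 1
""")]
print(rows)  # ['150910.001', '150910.002', '150911.001']
```

The key point carries over to any engine with window functions: `PARTITION BY` the date so numbering resets, then zero-pad the counter.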
[**SQL Fiddle**](http://sqlfiddle.com/#!3/7019af/2) ``` SELECT Replace(CONVERT(VARCHAR, ydate, 11), '/', '') + '.' + RIGHT('000'+Cast(Row_number() OVER(partition BY ydate ORDER BY ydate) AS VARCHAR(10)), 3) RN, ydate FROM yourtable ```
How to create a SQL function that fills a field with an incrementing number
[ "sql", "sql-server" ]
I built this code as a test to delete a range of records from an Access 2013 database based upon a range of dates. I'm getting a missing operator error in query expression 'START\_DATE >= .....etc. I have tried the select statement with apostrophes as well. NOTE: the CALL line is all one line in the actual code. Also, if I run the CALL line with Between/AND instead of >= / <= , the code completes with no errors, but does not accomplish anything. It does not find and delete the rows. ``` Function Delete_Range() Dim begdt As Date Dim enddt As Date 'user inputs date range begdt = InputBox("Enter beginning date as mm/01/yyyy", "BEGINNING DATE") enddt = InputBox("Enter ending date as mm/01/yyyy", "ENDING DATE") Dim objectrecordset As ADODB.Recordset Set objectrecordset = New ADODB.Recordset 'initiate recordset object objectrecordset.ActiveConnection = CurrentProject.Connection Call objectrecordset.Open("select START_DATE from TEMP_DATE_RANGE where START_DATE IS >= " & begdt & " AND <= " & enddt, , , adLockBatchOptimistic) While objectrecordset.EOF = False 'delete record objectrecordset.Delete objectrecordset.UpdateBatch 'move to next record objectrecordset.MoveNext Wend End Function ``` Thank you everybody for your help. Here is the code that worked. 
```
Function Delete_Range()

DoCmd.SetWarnings False

'Declare variables
Dim begdt As String
Dim enddt As String

'User inputs variables
begdt = InputBox("Enter beginning date as mm/01/yyyy", "BEGINNING DATE")
enddt = InputBox("Enter ending date as mm/01/yyyy", "ENDING DATE")

'Format variable as date and error handling
If Not (IsDate(begdt) And IsDate(enddt)) Then
    MsgBox "Please enter a date using the date format", vbOKOnly
    GoTo Finished
Else
    begdt = Format(begdt, "\#yyyy\/mm\/dd\#")
    enddt = Format(enddt, "\#yyyy\/mm\/dd\#")
End If

'Delete records from tables based upon user input date range
Dim SQL As String
Dim SQL2 As String
Dim SQL3 As String

SQL = "DELETE * FROM TEST_TBL_1 WHERE START_DATE BETWEEN " & begdt & " AND " & enddt & ""
SQL2 = "DELETE * FROM TEST_TBL_2 WHERE START_DATE BETWEEN " & begdt & " AND " & enddt & ""
SQL3 = "DELETE * FROM TEST_TBL_3 WHERE START_DATE BETWEEN " & begdt & " AND " & enddt & ""

DoCmd.RunSQL SQL
DoCmd.RunSQL SQL2
DoCmd.RunSQL SQL3

'Close form and show process complete page
DoCmd.SetWarnings True
DoCmd.Close acForm, "DELETE HISTORY", acSaveNo
DoCmd.OpenForm "COMPLETE", acNormal, "", "", , acNormal

Finished:
End Function
```
You are mixing up date values and string expressions for these, and it all starts with your InputBox, which always returns a string:

```
Dim begdt As String
Dim enddt As String

' user inputs date range
begdt = InputBox("Enter beginning date as mm/01/yyyy", "BEGINNING DATE")
enddt = InputBox("Enter ending date as mm/01/yyyy", "ENDING DATE")

If Not (IsDate(begdt) And IsDate(enddt)) Then
    ' Show error.
Else
    begdt = Format(begdt, "\#yyyy\/mm\/dd\#")
    enddt = Format(enddt, "\#yyyy\/mm\/dd\#")

    Dim SQL As String
    SQL = "Select START_DATE From TEMP_DATE_RANGE Where START_DATE Between " & begdt & " And " & enddt & ""

    Dim objectrecordset As ADODB.Recordset
    Set objectrecordset = New ADODB.Recordset

    'initiate recordset object
    objectrecordset.ActiveConnection = CurrentProject.Connection
    Call objectrecordset.Open(SQL, , , adLockBatchOptimistic)

    ' <Snip>

    ' Clean up.
End If
```
First work out the query logic and syntax in the Access query designer. Assuming *START\_DATE* is Date/Time datatype, pick a couple static values for the start and end of your target date range: ``` SELECT START_DATE FROM TEMP_DATE_RANGE WHERE START_DATE BETWEEN #2015-1-1# AND #2015-10-30# ``` Adjust as needed. Once you have the proper SQL statement, build the corresponding statement text in your VBA code. ``` Dim strSelect As String strSelect = "SELECT START_DATE FROM TEMP_DATE_RANGE " & _ "WHERE START_DATE BETWEEN " & Format(begdt,"\#yyyy-m-d\#") & _ " AND " & Format(enddt,"\#yyyy-m-d\#") Debug.Print strSelect '<- inspect this in Immediate window; Ctrl+g will take you there ``` Then you can use *strSelect* with `objectrecordset.Open` However, since your goal is to delete those rows, you don't actually need a recordset. You can simply execute a `DELETE` statement instead. ``` DELETE FROM TEMP_DATE_RANGE WHERE START_DATE BETWEEN #2015-1-1# AND #2015-10-30# ```
What is the missing operator in this VBA Code?
[ "sql", "ms-access", "vba", "ms-access-2013" ]
I need to do the same thing I have done [here](https://stackoverflow.com/questions/33435290/mysql-getting-count-and-sum-of-integer-values-in-multiple-columns). This time I want to have a result based on two criterion. So I have a table like so. ``` images | videos | reviews | publisherId ---------------------------------------------------------------------------- 0 | 2 | 1 | 91 4 | 5 | 0 | 91 11 | 1 | 4 | 12 0 | 1 | 7 | 12 2 | 1 | 9 | 12 ``` I need to write a query that counts the rows in each column that is not equal to `0`, and also that gives the actual sum of the numbers in the columns and arranges the result based on the `publisherId`. So the result will look like this. ``` Columns | Counts(91) | Sum(91) | Counts(12) | Sum(12) ---------------------------------------------------------------------------- images | 1 | 4 | 2 | 13 videos | 2 | 7 | 3 | 3 reviews | 1 | 1 | 3 | 20 ``` And the query that gives result for one `publisherId`from the [answer](https://stackoverflow.com/questions/33435290/mysql-getting-count-and-sum-of-integer-values-in-multiple-columns/33435345#33435345) to my other question is this. ``` SELECT 'images' as `columns`, SUM(images <> 0) as counts, SUM(images) as `sum` FROM dpsreport where publisherId = 91 UNION ALL SELECT 'videos', SUM(videos <> 0), SUM(videos) FROM dpsreport where publisherId = 91 UNION ALL SELECT 'reviews', SUM(reviews <> 0), SUM(reviews) FROM dpsreport where publisherId = 91; ``` How do I achieve this result?
This doesn't require a `join` at all. This is just more conditional aggregation. I will show the logic for publisher 91. You can add two more columns yourself for 12: ``` SELECT 'images' as `columns`, SUM(images <> 0 and publisherid = 91) as counts_91, SUM(case when publisherid = 91 then images else 0 end) as sum_91 FROM dpsreport UNION ALL SELECT 'videos', SUM(videos <> 0 and publisherid = 91) as counts_91, SUM(case when publisherid = 91 then videos else 0 end) as sum_91 FROM dpsreport UNION ALL SELECT 'reviews', SUM(reviews <> 0 and publisherid = 91) as counts_91, SUM(case when publisherid = 91 then reviews else 0 end) as sum_91 FROM dpsreport; ``` As you practice the logic, it should get easier for you to figure out how to solve these problems yourself.
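As a sanity check on the sample data, here is the conditional-aggregation idea run in SQLite via Python (like MySQL, SQLite evaluates boolean expressions to 0/1, so `SUM` over a condition counts matching rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dpsreport
                (images INT, videos INT, reviews INT, publisherid INT)""")
conn.executemany("INSERT INTO dpsreport VALUES (?, ?, ?, ?)", [
    (0, 2, 1, 91), (4, 5, 0, 91),
    (11, 1, 4, 12), (0, 1, 7, 12), (2, 1, 9, 12),
])

# Count and sum of images for publisher 91 in a single pass.
row = conn.execute("""
    SELECT SUM(images <> 0 AND publisherid = 91)                  AS counts_91,
           SUM(CASE WHEN publisherid = 91 THEN images ELSE 0 END) AS sum_91
    FROM dpsreport
""").fetchone()
print(row)  # (1, 4)
```

This matches the expected `images` row for publisher 91 (one non-zero row, total 13 - 9 = 4); adding the publisher-12 columns follows the same pattern.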
``` SELECT 'images' as `columns`, IF(publisherId=91, SUM(images <> 0), 0) as counts91, IF(publisherId=91, SUM(images), 0) as `sum91`, IF(publisherId=12, SUM(images <> 0), 0) as counts12, IF(publisherId=12, SUM(images), 0) as `sum12` FROM dpsreport group by publisherId UNION ALL SELECT 'videos' as `columns`, IF(publisherId=91, SUM(videos<> 0), 0) as counts91, IF(publisherId=91, SUM(videos), 0) as `sum91`, IF(publisherId=12, SUM(videos<> 0), 0) as counts12, IF(publisherId=12, SUM(videos), 0) as `sum12` FROM dpsreport group by publisherId UNION ALL SELECT 'reviews' as `columns`, IF(publisherId=91, SUM(reviews<> 0), 0) as counts91, IF(publisherId=91, SUM(reviews), 0) as `sum91`, IF(publisherId=12, SUM(reviews<> 0), 0) as counts12, IF(publisherId=12, SUM(reviews), 0) as `sum12` FROM dpsreport group by publisherId ```
MySQL: Count and sum of integer values in multiple columns on two criterion
[ "mysql", "sql", "count", "addition" ]
<http://sqlzoo.net/wiki/AdventureWorks_hard_questions>

Question 11 states:

> For every customer with a 'Main Office' in Dallas show AddressLine1 of the 'Main Office' and AddressLine1 of the 'Shipping' address - if there is no shipping address leave it blank. Use one row per customer.

Relevant Tables/IDs are:

```
Customer      CustomerAddress     Address
Customer ID   CustomerID          AddressID
              AddressID           AddressLine1
              AddressType         City
```

My current code is

```
SELECT CA.CustomerID,
CASE WHEN CA.AddressType = 'Main Office' THEN A.AddressLine1 ELSE "" END,
CASE WHEN CA.AddressType = 'Shipping' THEN A2.AddressLine1 ELSE "" END
FROM Address A
JOIN CustomerAddress CA ON A.AddressID = CA.AddressID
JOIN Address A2 ON A.AddressID = A2.AddressID
WHERE A.City = 'Dallas'
```

There are 5 total main offices in Dallas, and only one has a shipping address. When I tried to "GROUP BY CustomerID" it only returned one of the addresses, even though I've searched for both of them like in the query above. How do I get BOTH addresses to return on the same row?
2 weeks later, I've finally figured it out. ``` SELECT T0.MainAddress AS 'Main Address', isnull(T1.ShippingAddress,' ') AS 'Shipping Address' FROM (SELECT CA.CustomerID AS [CustomerID], A.AddressLine1 AS [MainAddress] FROM CustomerAddress CA INNER JOIN Address A ON A.AddressID = CA.AddressID WHERE CA.AddressType = 'Main Office' AND A.City = 'Dallas') T0 LEFT JOIN (SELECT CA.CustomerID AS [CustomerID], A.AddressLine1 AS [ShippingAddress] FROM CustomerAddress CA INNER JOIN Address A ON A.AddressID = CA.AddressID WHERE CA.AddressType = 'Shipping' AND A.City = 'Dallas') T1 ON T0.CustomerID = T1.CustomerID ``` I eventually had to just make two different tables to draw from, as opposed to trying to draw from existing data. It's similar to the answer above, but it actually functions in the website that I was working in.
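Here is a cut-down Python/SQLite check of the same two-derived-tables LEFT JOIN (the Customer table and non-Dallas rows are omitted, the addresses are made up, and T-SQL's `isnull` becomes SQLite's `IFNULL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CustomerAddress (CustomerID INT, AddressID INT, AddressType TEXT);
    CREATE TABLE Address (AddressID INT, AddressLine1 TEXT, City TEXT);
    INSERT INTO CustomerAddress VALUES
        (1, 10, 'Main Office'), (1, 11, 'Shipping'), (2, 20, 'Main Office');
    INSERT INTO Address VALUES
        (10, '100 Main St', 'Dallas'),
        (11, '200 Dock Rd', 'Dallas'),
        (20, '300 Elm St', 'Dallas');
""")

# Main offices in Dallas, left-joined to shipping addresses so that
# customers without one still appear (with a blank shipping column).
rows = conn.execute("""
    SELECT t0.MainAddress, IFNULL(t1.ShippingAddress, '') AS ShippingAddress
    FROM (SELECT ca.CustomerID, a.AddressLine1 AS MainAddress
          FROM CustomerAddress ca JOIN Address a ON a.AddressID = ca.AddressID
          WHERE ca.AddressType = 'Main Office' AND a.City = 'Dallas') t0
    LEFT JOIN
         (SELECT ca.CustomerID, a.AddressLine1 AS ShippingAddress
          FROM CustomerAddress ca JOIN Address a ON a.AddressID = ca.AddressID
          WHERE ca.AddressType = 'Shipping') t1
      ON t0.CustomerID = t1.CustomerID
    ORDER BY t0.CustomerID
""").fetchall()
print(rows)  # [('100 Main St', '200 Dock Rd'), ('300 Elm St', '')]
```

Customer 2 has no shipping row, yet still appears with a blank second column, which is exactly the requirement.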
First, the CTE filters the customers with a 'Main Office' in Dallas; then left joins find their shipping addresses:

```
;with cte as(select ca.customerid, a.addressline1
             from customeraddress ca
             join address a on ca.addressid = a.addressid
             where ca.addresstype = 'Main Office' and a.City = 'Dallas')
select c.customerid, c.addressline1 as mainaddress, a.addressline1 as shippingaddress
from cte c
left join customeraddress ca on c.customerid = ca.customerid
                            and ca.addresstype = 'Shipping'
left join address a on ca.addressid = a.addressid
```
SQLZoo Hard Questions: Get results that share an ID on the same row
[ "sql" ]
Let's say you have a table with 10,000 records of different email addresses, but within this table there are a few hundred (the exact number can vary and should not matter) addresses that contain a specific domain name, e.g. @horses.com. I would like, **in one single query**, to retrieve all 10,000 records, but the ones that contain @horses.com should always be on top of the list. Something like this:

"SELECT TOP 10000 * FROM dbo.Emails ORDER BY -- the records that contain @horses.com come first"

Or: give me 10,000 records from the table dbo.Emails, but make sure every record that contains "@horses.com" comes first, no matter how many there are.

BTW, this is on a SQL Server 2012 instance. Anyone?
Try this: ``` SELECT TOP 10000 * FROM dbo.Emails ORDER BY IIF(Email LIKE '%@horses.com', 0, 1) ``` This assumes the email ends in '@horses.com', which isn't unreasonable. If you really want a `contains`-like function, add another `%` after the `.com`. **Edit:** The `IIF` function is only available in sql server 2012 and later, for a more portable solution use `CASE WHEN Email LIKE '%@horses.com' THEN 0 ELSE 1 END`.
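`IIF` is SQL Server 2012+ only; the equivalent `CASE` form runs on virtually any engine, which makes it easy to demonstrate with Python and SQLite (sample addresses are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Emails (Email TEXT)")
conn.executemany("INSERT INTO Emails VALUES (?)", [
    ("a@other.com",), ("b@horses.com",), ("c@other.com",), ("d@horses.com",),
])

# Matching addresses sort into bucket 0, everything else into bucket 1;
# a secondary key keeps each bucket alphabetical.
rows = [r[0] for r in conn.execute("""
    SELECT Email FROM Emails
    ORDER BY CASE WHEN Email LIKE '%@horses.com' THEN 0 ELSE 1 END, Email
""")]
print(rows)  # ['b@horses.com', 'd@horses.com', 'a@other.com', 'c@other.com']
```

All rows are returned; the `ORDER BY` expression only controls which ones come first.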
``` SELECT TOP 10000 * FROM dbo.Emails ORDER BY case when charindex('@horses.com', email) > 0 then 1 else 2 end, email ```
SQL query: order by parts of an email address
[ "sql", "sql-server-2012" ]