Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
I have a table with timestamp column i want to get the values where the timestamp in specific month (for example where the timpestamp between 1 september and 30 septemper) taking in considration if the month is 31 day. I use this query: ``` SELECT users.username, users.id, count(tahminler.tahmin)as tahmins_no FROM users LEFT JOIN tahminler ON users.id = tahminler.user_id GROUP BY users.id having count(tahminler.tahmin) > 0 ``` Can i add `where timestamp IN(dates_array)`?? date\_array will be the dates of the whole month??
``` SELECT users.username, users.id, count(tahminler.tahmin)as tahmins_no FROM users LEFT JOIN tahminler ON users.id = tahminler.user_id where year(timestamp) = 2013 and month(timestamp) = 9 GROUP BY users.id having count(tahminler.tahmin) > 0 ``` To make it work with indexes you can do ``` SELECT users.username, users.id, count(tahminler.tahmin)as tahmins_no FROM users LEFT JOIN tahminler ON users.id = tahminler.user_id where timestamp >= '2013-09-01' and timestamp < '2013-10-01' GROUP BY users.id having count(tahminler.tahmin) > 0 ```
In case of TIMESTAMP ``` YEAR (TIMESTAMP) = 2013 AND MONTH (TIMESTAMP) = 9 ``` To include in the same clause ``` DATE_FORMAT(TIMESTAMP,'%Y-%m')='2013-09' ``` For unix time stamp ``` YEAR (FROM_UNIXTIME(TIMESTAMP)) = 2013 AND MONTH(FROM_UNIXTIME(TIMESTAMP))=9 ``` To include in the same clause ``` DATE_FORMAT(FROM_UNIXTIME(TIMESTAMP),'%Y-%m')='2013-09' ```
How to compare Timestamp in where clause
[ "", "mysql", "sql", "timestamp", "where-clause", "" ]
I have table here. ![enter image description here](https://i.stack.imgur.com/snntW.jpg) How can i ``` SELECT sum(gload1) as 'g1', sum(gload2) as 'g2'. sum(gload3) as 'g3', sum(gload4) as 'g4' FROM member WHERE age = 15-20,21-25,26-30 GROUP BY gender; ``` result like this : Just sample ![enter image description here](https://i.stack.imgur.com/JwITb.jpg) How can i query like this. Thank for advamce.
Try like this: ``` SELECT CASE WHEN age BETWEEN 15 AND 20 THEN '15-20' WHEN age BETWEEN 20 AND 30 THEN '20-30' ELSE '30-...' END as age_group, gender, sum(gload1) as gload1_total FROM member GROUP BY CASE WHEN age BETWEEN 15 AND 20 THEN '15-20' WHEN age BETWEEN 20 AND 30 THEN '20-30' ELSE '30-...' END, gender ``` if it doesn't work, you might have to do a subquery: inner query will add the range string to every row and outer query will group by it.
``` SELECT CASE WHEN `age`>=15 AND `age`<=20 THEN '15-20' WHEN `age`>=21 AND `age`<=25 THEN '21-25' WHEN `age`>=26 AND `age`<=30 THEN '26-30' END AS `AgeRange`, SUM(`gload1`) AS `g1`, SUM(`gload2`) AS `g2`, SUM(`gload3`) AS `g3`, SUM(`gload4`) AS `g4` FROM member GROUP BY `AgeRange` ``` **See** [**SQL Fiddle**](http://sqlfiddle.com/#!2/9f006/36)
How to query group by 2 condition?
[ "", "mysql", "sql", "" ]
My input to the stored procedure is a string (e.g '2 years 3 months 4 days') which is a future date. How to convert this to a date by comparing with current date?
You can use below Query in SP ``` select dateadd(yy,2,dateadd(m,3,dateadd(d,4,GETDATE()))) ^Year ^Month ^Days ``` Here is the SP ``` create procedure test1 ( @year INT, @month INT, @day INT ) AS BEGIN select dateadd(yy,@year,dateadd(m,@month,dateadd(d,@day,GETDATE()))) END ```
``` declare @S varchar(50) set @S = '2 years 3 months 4 days' select dateadd(day, D.D, dateadd(month, D.M, dateadd(year, D.Y, getdate()))) as TheDate from (select replace(replace(replace(@S, ' years ', '.'), ' months ', '.'), ' days', '')) as T(S) cross apply ( select cast(parsename(T.S, 1) as int), cast(parsename(T.S, 2) as int), cast(parsename(T.S, 3) as int)Y ) as D(D, M, Y) ``` [SQL Fiddle](http://sqlfiddle.com/#!6/d41d8/8389)
Converting year month day to date in sql server
[ "", "sql", "sql-server-2008", "" ]
I have a database table that has more than 50 Million record and to improve searching i had to create a non clustered indexes, and once i create one it takes 5 ~ 10 minutes to be created so i guess in the background it sorts the data according to the index. So for example before adding index to my table searching was awful and takes long time and when i added the non clustered index, searching was fast. But that was only when i had 50 million records. The question is, what if i defined the index at the very beginning when creating the table before adding any data to the table? Would it give the same search performance i am getting right now? or do i have to delete and recreate the index every now and then to sort the data regularly? I am sorry if my question seemed stupid, i just started learning about indexes and it is a confusing topic for me.
A non-clustered index keeps a copy of the indexed fields in a special structure optimised for searching. Creating an index on 50 million records obviously takes some time. Once the index is created, it"s maintained automatically as records are added, deleted or updated, so you should only need to reindex if you've had a serious crash of the system or the disk. So generally, it's best to create the index at the time you create the table. There is an operation called 'updating statistics' which helps the query optimiser to improve its search performance. The details vary between database engines.
Databases indexes work like those in books. It's actually a pointer to the right rows in your table, based and ordered on a specific key (the column for which you define the index). So, basically, yes, if you create the index before inserting data, you should get the same search speed when you use it later on when the table is loaded with lots of records. Although, since each time you insert (or delete, or update the specific key) a record the index needs to be updated, inserting (or deleting or updating) large amount of data will be a bit slower. Indexes can get fragmented if you do a lot of insert and delete on the table. Thus, deleting and recreating them is usually part of a good maintenance plan.
Do i have to create a certain nonclustered index every now and then?
[ "", "mysql", "sql", "sql-server", "database", "database-indexes", "" ]
MY question is simple, How do you avoid the automatic sorting which the UNION ALL query does? This is my query ``` SELECT * INTO #TEMP1 FROM Final SELECT * INTO #TEMP2 FROM #TEMP1 WHERE MomentId = @MomentId SELECT * INTO #TEMP3 FROM #TEMP1 WHERE RowNum BETWEEN @StartRow AND @EndRow SELECT * INTO #TEMP4 FROM (SELECT *FROM #TEMP3 UNION ALL SELECT *FROM #TEMP2) as tmp SELECT DISTINCT * FROM #TEMP4 ``` I'm using SQL Server 2008. I need the Union ALL to perform like a simple Concatenate, which it isn't! Appreciate your help in this.
I think you're mistaken on which operation is actually causing the sort. Check the code below, UNION ALL will *not* cause a sort. You may be looking at the DISTINCT operation, which uses a sort (it sorts all items and the eliminates duplicates) ``` CREATE TABLE #Temp1 ( i int ) CREATE TABLE #temp2 ( i int ) INSERT INTO #Temp1 SELECT 3 UNION ALL SELECT 1 UNION ALL SELECT 8 UNION ALL SELECT 2 INSERT INTO #Temp2 SELECT 7 UNION ALL SELECT 1 UNION ALL SELECT 5 UNION ALL SELECT 6 SELECT * INTO #TEMP3 FROM (SELECT * FROM #Temp1 UNION ALL SELECT * FROM #temp2) X ```
`UNION ALL` adds all the records where as `UNION` adds only new/distinct records. Since you are using `UNION ALL` and using `DISTINCT` soon after, I think you are looking for `UNION` ``` SELECT * INTO #TEMP4 FROM ( SELECT * FROM #TEMP3 UNION --JUST UNION SELECT * FROM #TEMP2 ) AnotherTemp ``` Or you can simplify it as ``` SELECT * INTO #TEMP4 FROM SELECT DISTINCT * FROM Final WHERE MomentId = @MomentId OR RowNum BETWEEN @StartRow AND @EndRow ```
How to avoid Sorting in Union ALL
[ "", "sql", "sql-server", "" ]
I'm working on a program in Go, that makes heavy usage of MySQL. For sake of readability, is it possible to include the value of a column after each column name in an INSERT statement? Like: ``` INSERT INTO `table` (`column1` = 'value1', `column2` = 'value2'...); ``` instead of ``` INSERT INTO `table` (`column`, `column2`,...) VALUES('value1', 'value2'...); ``` so that it's easier to see which value is associated with which column, considering the SQL strings can often get fairly long
No, you cannot use your proposed syntax (though it would be nice). One way is to line up column names and values: ``` INSERT INTO `table` (`column`, `column2`,...) VALUES ('value1', 'value2'...); ``` Update in response to your comment "the statements contain variables from outside the string": if you parameterise your SQL statements then matching up column names to variables is easy to check if the parameters are named for their respective columns: `@ColumnName`. This is actually how I do it in my TSQL scripts: ``` INSERT INTO `table` ( `column`, `column2`, ... ) VALUES ( 'value1', 'value2', ... ); ``` (It's also common to put the commas at the start of the lines) but to be honest, once you get enough columns it is easy to mix up the position of columns. And if they have the same type (and similar range of values) you might not notice straight away....
Although this question is a bit older I will put that here for future researchers. I'd suggest to use the SET syntax instead of the ugly VALUES list syntax. ``` INSERT INTO table SET column1 = 'value1', column2 = 'value2'; ``` IMHO this is the cleanest way in MySQL.
Making SQL INSERT statement easier to read
[ "", "sql", "go", "code-readability", "" ]
I currently have 2 SQL tables that look like this: ![Data Table](https://i.stack.imgur.com/o7wsW.png) and... ![filter table](https://i.stack.imgur.com/1v3u2.png) I need to write a SELECT statement that retrieves all products from the DataTable that contain rows that match the FilterTable. So based on my example tables above, if I were to run the query, it would return the following result: ![Result table](https://i.stack.imgur.com/3wrw1.png) I recently found a question that kind of attempts this: [SQL query where ALL records in a join match a condition?](https://stackoverflow.com/questions/4764650/sql-query-where-all-records-in-a-join-match-a-condition?lq=1) but have been unsuccessful in implementing something similar Note - I am using Microsoft SQL Server 2008
This is a little complicated, but here is one solution. Basically you need to check to see how many records from the datatable match all the records from the filtertable. This uses a subquery to do that: ``` SELECT * FROM DataTable WHERE ID IN ( SELECT DT.ID FROM DataTable DT JOIN FilterTable FT ON FT.Name = DT.Name AND FT.Value = DT.VALUE GROUP BY DT.ID HAVING COUNT(*) = (SELECT COUNT(*) FROM FilterTable) ) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/bdccb/1)
This will work: ``` SELECT * FROM Data WHERE ID NOT IN ( SELECT ID FROM Data JOIN Filter on Data.Name = Filter.Name and Data.Value <> Filter.Value ) ``` I set up a SQL Fiddle if you want to try other things: <http://sqlfiddle.com/#!3/38b87/6> EDIT: Better answer: ``` SELECT * FROM DATA WHERE ID NOT IN ( SELECT ID FROM DATA JOIN Filter ON DATA.Name = Filter.Name AND DATA.Value <> Filter.Value ) AND ID IN ( SELECT ID FROM DATA JOIN Filter ON DATA.Name = Filter.Name ) ``` This now fits where there is at least one filter that matches, and none that don't.
SQL Query to Filter a Table using another Table
[ "", "sql", "sql-server-2008", "t-sql", "select", "filter", "" ]
I have the following two Select statements - ``` SELECT * FROM tblAllocations WHERE AllocID IN ( SELECT MAX(AllocID) FROM tblAllocations WHERE FeeEarner = 'KLW' AND [Date] <= '2013-12-31' GROUP BY FeeEarner, CaseNo ``` and ``` SELECT UserID, CaseNo, SUM(Fees) AS [Fees] FROM tblTimesheetEntries WHERE UserID = 'KLW' AND [Date] <= '2013-12-31' GROUP BY UserID, CaseNo ``` Which return the following results - ![Results of Separate Queries](https://i.stack.imgur.com/PVCz7.png) What I want is to combine them in a Select statement which extracts some fields from the First Query and other fields from the Second Query. Based on the above results there should be just 5 lines returned, three of which would have fields from both Query, and two would only have fields from one Query (hence it would have some NULL values) I tried the following - ``` SELECT q1.CaseNo, q1.FeeEarner, q2.Fees AS [Fees], q1.Fees AS [Billed], (q2.Fees - q1.Fees) AS WIP FROM ( SELECT * FROM tblAllocations WHERE AllocID IN ( SELECT MAX(AllocID) FROM tblAllocations WHERE FeeEarner = 'KLW' AND [Date] <= '2013-12-31' GROUP BY FeeEarner, CaseNo ) ) AS q1, ( SELECT UserID, CaseNo, SUM(Fees) AS [Fees] FROM tblTimesheetEntries WHERE UserID = 'KLW' AND [Date] <= '2013-12-31' GROUP BY UserID, CaseNo ) AS q2 ``` However this acts like a Cross Join and gives me 15 lines as follows - ![Erroneous Results](https://i.stack.imgur.com/YncmA.png) Could some one advice me how to combine these two queries correctly so as to only return 5 lines.
Below is the corrected query: ``` SELECT q1.CaseNo, q1.FeeEarner, q2.Fees AS [Fees], q1.Fees AS [Billed], (q2.Fees - q1.Fees) AS WIP FROM ( SELECT * FROM tblAllocations WHERE AllocID IN ( SELECT MAX(AllocID) FROM tblAllocations WHERE FeeEarner = 'KLW' AND [Date] <= '2013-12-31' GROUP BY FeeEarner, CaseNo ) ) AS q1, ( SELECT UserID, CaseNo, SUM(Fees) AS [Fees] FROM tblTimesheetEntries WHERE UserID = 'KLW' AND [Date] <= '2013-12-31' GROUP BY UserID, CaseNo ) AS q2 where q1.CaseNo = q2.CaseNo ```
It's doing a cross join because you're not saying how `q1` and `q2` are "related" (joined). Also, since you want five rows (that is, all rows in `tblTimesheetEntries` regardless of a match in `tblAllocations`), you should use a right join (or a left but inverting `q1` and `q2`): ``` SELECT q2.CaseNo, q1.FeeEarner, q2.Fees AS [Fees], q1.Fees AS [Billed], (q2.Fees - q1.Fees) AS WIP FROM ( SELECT * FROM tblAllocations WHERE AllocID IN ( SELECT MAX(AllocID) FROM tblAllocations WHERE FeeEarner = 'KLW' AND [Date] <= '2013-12-31' GROUP BY FeeEarner, CaseNo )) AS q1 RIGHT JOIN ( SELECT UserID, CaseNo, SUM(Fees) AS[Fees] FROM tblTimesheetEntries WHERE UserID = 'KLW' AND [Date] <= '2013-12-31' GROUP BY UserID, CaseNo ) AS q2 ON q1.CaseNo = q2.CaseNo ``` SqlFiddle [here](http://sqlfiddle.com/#!6/7373a/1).
JOINING Results from two SELECT stataments each with their own WHERE criteria and GROUPING
[ "", "sql", "sql-server-express", "" ]
I have a SQL Query, checking if one of a number of given values already exists in a table, using a bunch of OR x = y statements. I then do a row count on the result. ``` $exists = db_query("SELECT * FROM {leads_client} WHERE (companyName = '".$form_state['values']['company_name']."' OR billingEmail = '".$form_state['values']['billing_email']."' OR leadEmail = '".$form_state['values']['lead_email']."' OR contactEmail = '".$form_state['values']['contact_email']."' OR url = '".$form_state['values']['company_url']."') AND NOT clientId = '".$clientId."'"); if($exists->rowCount() > 0){ //Do something } ``` What is the cleanest way to determine which of the OR statements was true, without breaking this into multiple queries?
You can do raw comparisons in the select: ``` SELECT *, companyName = '".$form_state['values']['company_name']."' AS companyNameMatch, billingEmail = '".$form_state['values']['billing_email']."' AS billingEmailMatch, ... FROM {leads_client} WHERE (companyName = '".$form_state['values']['company_name']."' OR billingEmail = '".$form_state['values']['billing_email']."' OR leadEmail = '".$form_state['values']['lead_email']."' OR contactEmail = '".$form_state['values']['contact_email']."' OR url = '".$form_state['values']['company_url']."') AND NOT clientId = '".$clientId."'"); ``` This will return a resultset like: ``` |------------------|-------------------| | companyNameMatch | billingEmailMatch | |------------------|-------------------| | 0 | 1 | |------------------|-------------------| ``` This way, you'll know which match by the columns with 1's.
Your web site is vulnerable to SQL injection attacks. You need to **immediately** read this article and fix **all** of your database queries to use parameters correctly. Drupal Writing Secure Code: <https://drupal.org/writing-secure-code> Drupal Database Access: <https://drupal.org/node/101496>
SQL Query - determine which logical operator is true
[ "", "mysql", "sql", "select", "" ]
I have a table that has values and group ids (simplified example). I need to get the average for each group of the middle 3 values. So, if there are 1, 2, or 3 values it's just the average. But if there are 4 values, it would exclude the highest, 5 values the highest and lowest, etc. I was thinking some sort of window function, but I'm not sure if it's possible. <http://www.sqlfiddle.com/#!11/af5e0/1> For this data: ``` TEST_ID TEST_VALUE GROUP_ID 1 5 1 2 10 1 3 15 1 4 25 2 5 35 2 6 5 2 7 15 2 8 25 3 9 45 3 10 55 3 11 15 3 12 5 3 13 25 3 14 45 4 ``` I'd like ``` GROUP_ID AVG 1 10 2 15 3 21.6 4 45 ```
Another option using analytic functions; ``` SELECT group_id, avg( test_value ) FROM ( select t.*, row_number() over (partition by group_id order by test_value ) rn, count(*) over (partition by group_id ) cnt from test t ) alias where cnt <= 3 or rn between floor( cnt / 2 )-1 and ceil( cnt/ 2 ) +1 group by group_id ; ``` Demo --> <http://www.sqlfiddle.com/#!11/af5e0/59>
I'm not familiar with the Postgres syntax on windowed functions, but I was able to solve your problem in SQL Server with this [SQL Fiddle](http://www.sqlfiddle.com/#!3/07bc9b/8/0). Maybe you'll be able to easily migrate this into Postgres-compatible code. Hope it helps! A quick primer on how I worked it. 1. Order the test scores for each group 2. Get a count of items in each group 3. Use that as a subquery and select only the middle 3 items (that's the where clause in the outer query) 4. Get the average for each group -- ``` select group_id, avg(test_value) from ( select t.group_id, convert(decimal,t.test_value) as test_value, row_number() over ( partition by t.group_id order by t.test_value ) as ord, g.gc from test t inner join ( select group_id, count(*) as gc from test group by group_id ) g on t.group_id = g.group_id ) a where ord >= case when gc <= 3 then 1 when gc % 2 = 1 then gc / 2 else (gc - 1) / 2 end and ord <= case when gc <= 3 then 3 when gc % 2 = 1 then (gc / 2) + 2 else ((gc - 1) / 2) + 2 end group by group_id ```
How to get average of the 'middle' values in a group?
[ "", "sql", "postgresql", "" ]
This is a questions about the two ways to use the SELECT CASE in MS SQL [CASE WHEN X = Y] and [CASE X WHEN Y] I am trying to define buckets for a field based on its values. I would need to use ranges, so it is necessary to use the "<" or ">" identifiers. As a simple example, I know it works like this: ``` SELECT CASE WHEN x < 0 THEN 'a' WHEN X > 100 THEN 'b' ELSE 'c' END ``` Now I have to write a lot of these, there will be more than 3 buckets and the field names are quite long, so this becomes very difficult to keep clean and easy to follow. I was hoping to use the other way of the select command but to me it looks like I can only use it with equals: ``` SELECT CASE X WHEN 0 then 'y' ELSE 'z' END ``` But how can I use this form to specify range conditions just as above? Something like: ``` SELECT CASE X WHEN < 0 THEN 'a' WHEN > 100 THEN 'b' ELSE "c" END ``` This one does not work. Thank You!
As an alternative approach, remember that it is possible to do math on the value that is the input to the "simple" CASE statement. I often use ROUND for this purpose, like this: ``` SELECT CASE ROUND(X, -2, 1) WHEN 0 THEN 'b' -- 0-99 WHEN 100 THEN 'c' -- 100-199 ELSE 'a' -- 200+ END ``` Since your example includes both positive and negative open-ended ranges, this approach may not work for you. Still another approach: if you are only thinking about readability in the SELECT statement, you could write a scalar-valued function to hide all the messiness: ``` CREATE FUNCTION dbo.ufn_SortValuesIntoBuckets (@inputValue INT) RETURNS VARCHAR(10) AS BEGIN DECLARE @outputValue VARCHAR(10); SELECT @outputValue = CASE WHEN @inputValue < 0 THEN 'a' WHEN @inputValue BETWEEN 0 AND 100 THEN 'b' WHEN @inputValue > 100 THEN 'c' END; RETURN @outputValue; END; GO ``` So now your SELECT statement is just: ``` SELECT dbo.ufn_SortValuesIntoBuckets(X); ``` One final consideration: I have often found, during benchmark testing, that the "searched" form (which you are trying to avoid) actually has better performance than the "simple" form, depending how many CASEs you have. So if performance is a consideration, it might be worth your while to do a little benchmarking before you change things around too much.
There is no such "third form" of the `CASE` - only the *searched* and the *simple* cases are supported \*. You need to use the searched kind (i.e. with separate conditions) even though the variable to which you apply the condition is always the same. If you are looking to avoid repetition in your SQL when `X` represents a complex expression, use `WITH` clause or a nested query to assign a name to the expression that you are selecting. --- \* The official name of your first example is "searched `CASE` expression"; your second example is called the "simple `CASE` expression".
SQL 'CASE WHEN x' vs. 'CASE x WHEN' with greater-than condition?
[ "", "sql", "sql-server", "t-sql", "select", "" ]
I'm trying to use variable order by in row\_number, it works when each case has a single column defined with no asc, or desc. but it doesn't work when I define more than one column or use asc or desc. this works: ``` select top(@perpage) *,row_number() over (order by case @orderby when 'updated' then date_edited when 'oldest' then date_added end ) as rid from (select * from ads) a where rid>=@start and rid<@start+@perpage ``` but I want to do this: ``` select top(@perpage) *,row_number() over (order by case @orderby when 'updated' then date_edited, user_type desc when 'oldest' then date_added, user_type desc when 'newest' then date_added desc, user_type desc when 'mostexpensive' then price desc, user_type desc when 'cheapest' then pricedesc, user_type desc when '' then user_type desc end ) as rid from (select * from ads) a where rid>=@start and rid<@start+@perpage ```
try this ``` select top(@perpage) *,row_number() over (order by case @orderby when 'updated' then date_edited end, case @orderby when 'oldest' then date_added end, case @orderby when 'newest' then date_added end desc, case @orderby when 'mostexpensive' then price end desc, case @orderby when 'cheapest' then price end, user_type desc ) as rid from (select * from ads) a where rid>=@start and rid<@start+@perpage ```
Try this instead: ``` select top(@perpage) *,row_number() over (order by case @orderby when 'updated' then date_edited when 'oldest' then date_added end, case @orderby when 'newest' then date_added end desc, case @orderby when 'mostexpensive' then -price when 'cheapest' then pricedesc end, user_type desc ) as rid from ads a where rid>=@start and rid<@start+@perpage ```
sql server: using variable order by in row_number()
[ "", "sql", "" ]
I want to build custom Where condition based on stored procedure inputs, if not null then I will use them in the statement, else I will not use them. ``` if @Vendor_Name is not null begin set @where += 'Upper(vendors.VENDOR_NAME) LIKE "%"+ UPPER(@Vendor_Name) +"%"' end else if @Entity is not null begin set @where += 'AND headers.ORG_ID = @Entity' end select * from table_name where @where ``` But I get this error ``` An expression of non-boolean type specified in a context where a condition is expected, near 'set'. ```
Use this : ``` Declare @Where NVARCHAR(MAX) ...... Create your Where DECLARE @Command NVARCHAR(MAX) Set @Command = 'Select * From SEM.tblMeasureCatalog AS MC ' ; If( @Where <> '' ) Set @Comand = @Command + ' Where ' + @Where Execute SP_ExecuteSQL @Command ``` I tested this and it Worked
You cannot simply put your variable in normal SQL as you have in this line: ``` select * from table_name where @where; ``` You need to use dynamic SQL. So you might have something like: ``` DECLARE @SQL NVARCHAR(MAX) = 'SELECT * FROM Table_Name WHERE 1 = 1 '; DECLARE @Params NVARCHAR(MAX) = ''; IF @Vendor_Name IS NOT NULL BEGIN SET @SQL += ' AND UPPER(vendors.VENDOR_NAME) LIKE ''%'' + UPPER(@VendorNameParam) + ''%'''; END ELSE IF @Entity IS NOT NULL BEGIN SET @SQL += ' AND headers.ORG_ID = @EntityParam'; END; EXECUTE SP_EXECUTESQL @SQL, N'@VendorNameParam VARCHAR(50), @EntityParam INT', @VendorNameParam = @Vendor_Name, @EntityParam = @Entity; ``` I assume your actual problem is more complex and you have simplified it for this, but if all your predicates are added using `IF .. ELSE IF.. ELSE IF`, then you don't need dynamic SQL at all, you could just use: ``` IF @Vendor_Name IS NOT NULL BEGIN SELECT * FROM Table_Name WHERE UPPER(vendors.VENDOR_NAME) LIKE '%' + UPPER(@Vendor_Name) + '%'; END ELSE IF @Entity IS NOT NULL BEGIN SELECT * FROM Table_Name WHERE headers.ORG_ID = @Entity; END ```
Building dynamic where condition in SQL statement
[ "", "sql", "sql-server", "stored-procedures", "where-clause", "dynamic-sql", "" ]
Given the following query: ``` SELECT dbo.ClientSub.chk_Name as Details, COUNT(*) AS Counts FROM dbo.ClientMain INNER JOIN dbo.ClientSub ON dbo.ClientMain.ate_Id = dbo.ClientSub.ate_Id WHERE chk_Status=1 GROUP BY dbo.ClientSub.chk_Name ``` I want to display the rows in the aggregation even if there are filtered in the WHERE clause.
`NULL` values are considered as zeros for aggregation purposes. For your intention you should use `GROUP BY ALL` that returns rows which were filtered also, with zero as aggregated value.
in case you use oracle sql: Have you tried `Select nvl(dbo.ClientSub.chk_name,'0') ....` ?
How to display filtered rows in aggregation
[ "", "sql", "sql-server-2005", "isnull", "group-by", "" ]
I've got the follow 2 SQL statements ``` select Fleet, SUM(hours) as Operating from table where colE = 'Operating' and START_TIME >= '2012-01-01' and END_TIME<='2013-01-01' and FLEET IS NOT NULL group by fleet order by FLEET asc select Fleet, SUM(hours) as Delay from table where colE='Delay' and START_TIME>='2012-01-01' and END_TIME<='2013-01-01' and FLEET IS NOT NULL group by fleet order by FLEET asc ``` I need the result of both these statments to basically show ``` select Fleet, (Operating / Delay ) as Calculated Col from table ``` Could anyone help me lead me in the direction of how to do this? New to sql so I believe I should use temp tables? Thanks!
The simplest way is to join the two results: ``` select a.Fleet, (Operating / Delay ) as Calculated_Col from (select Fleet, SUM(hours) as Operating from table where colE = 'Operating' and START_TIME >= '2012-01-01' and END_TIME<='2013-01-01' and FLEET IS NOT NULL group by fleet) a join (select Fleet, SUM(hours) as Delay from table where colE='Delay' and START_TIME>='2012-01-01' and END_TIME<='2013-01-01' and FLEET IS NOT NULL group by fleet) b on a.fleet = b.fleet order by a.FLEET asc ```
Simple `INNER JOIN` should do: ``` SELECT op.Fleet, (Operating / Delay) AS Calc FROM (select Fleet, SUM(hours) as Operating from table where colE = 'Operating' and START_TIME >= '2012-01-01' and END_TIME<='2013-01-01' and FLEET IS NOT NULL group by fleet ) AS op INNER JOIN ( select Fleet, SUM(hours) as Delay from table where colE='Delay' and START_TIME>='2012-01-01' and END_TIME<='2013-01-01' and FLEET IS NOT NULL group by fleet ) AS de ON op.Fleet = de.Fleet ORDER BY op.Fleet ASC ```
Using the result of a statement in another
[ "", "sql", "" ]
TABLE EventID --- EventTime (`TIMESTAMP`) ``` 1 2013-09-29 12:00:00.0 2 2013-09-29 12:01:00.0 3 2013-09-29 12:03:00.0 4 2013-09-28 1:03:00.0 5 2013-09-27 23:03:00.0 6 2013-09-26 17:03:00.0 7 2013-09-25 12:01:00.0 8 2013-09-24 20:03:00.0 9 2013-09-23 5:03:00.0 10 2013-09-23 12:01:00.0 ``` I want to retrieve rows, in MySQL, which satisfy given date range with **same time**. So, if I query to retrieve rows with '2013-09-**26 12:01**' as EventTime value for +/- 5 days, I expect to get 2nd, 7th and 10th rows. Please help frame SQL statement. Thanks!
``` WHERE EventTime >= '2013-09-26 12:01' - INTERVAL 5 DAY AND EventTime <= '2013-09-26 12:01' + INTERVAL 5 DAY AND TIME(EventTime) = TIME('2013-09-26 12:01') ```
You could use the mysql *HOUR* and *MINUTE* functions <https://dev.mysql.com/doc/refman/5.0/fr/date-and-time-functions.html>
MySQL select query for date range with same time
[ "", "mysql", "sql", "date", "time", "timestamp", "" ]
What is the right way to write an sql that'll update all `type = 1` with the sum of totals of rows where parent = id of row with `type=1`. Put simply: update likesd set totals = sum of all totals where parent = id of row where type = 1 ``` "id" "type" "parent" "country" "totals" "3" "1" "1" "US" "6" "4" "2" "3" "US" "6" "5" "3" "3" "US" "5" ``` **Desired results** ``` "id" "type" "parent" "country" "totals" "3" "1" "1" "US" "17" ->6+6+5=17 "4" "2" "3" "US" "6" "5" "3" "3" "US" "5" ``` **I was trying with (and failed)** ``` UPDATE likesd a INNER JOIN ( SELECT parent, sum(totals) totalsNew FROM likesd WHERE b.parent = a.id GROUP BY parent ) b ON a.id = b.parent SET a.totals = b.totalsNew; ```
You can do this with the multiple table syntax described [in the MySQL Reference Manual](http://dev.mysql.com/doc/refman/5.0/en/update.html): ``` update likesd a, (select parent, sum(totals) as tsum from likesd group by parent) b set a.totals = a.totals + b.tsum where a.type = 1 and b.parent = a.id; ``` The query updates one row and results in: ``` +------+------+--------+---------+--------+ | id | type | parent | country | totals | +------+------+--------+---------+--------+ | 3 | 1 | 1 | US | 17 | | 4 | 2 | 3 | US | 6 | | 5 | 3 | 3 | US | 5 | +------+------+--------+---------+--------+ ```
here is command that does what you want ``` update likesd as upTbl inner join (select tbl.id, tbl.totals + sum(tbl2.totals) as totals from likesd tbl inner join likesd tbl2 ON tbl2.parent = tbl.id where tbl.type = 1 group by tbl.id) as results ON upTbl.id = results.id set upTbl.totals = results.totals; ``` tested on MySql 5.5
Update rows with data from the same table
[ "", "mysql", "sql", "" ]
This is a really simple question, I know it is. But can't for the life of me figure it out. I have a few coloumns but the main ones are ReservationFee and Invoice ID. Basically, there can be multiple "ReservationFees" in a single invoice. This value will be put on a crystal report, so I ideally need to sum up reservation fees for each invoice id. **Example Data** ``` Invoice ID Reservation Fee 1 200 1 300 2 100 3 350 3 100 ``` **Expected Output** ``` Invoice ID Reservation Fee 1 500 2 100 3 450 ``` I have tried a few different sums and groupings but can't get it right, I'm blaming Monday morning!
If you want to SUM in the server then: ``` SELECT [Invoice ID], SUM([Reservation Fee]) FROM Table GROUP BY [Invoice ID] ``` If you want to SUM in the Crystal then add a `command` or drag and drop your table ``` SELECT [Invoice ID], [Reservation Fee] FROM Table ``` Then right click in the `details section` and Select `Insert Group`. Add the fields in the details section, Right Click on the `Reservation Fee` field and select insert running total. In the window choose a name, select evaluate for each row and Reset on change of group the one you entered before. Place the newly created field in the `Group Footer`.
Try `GROUP BY` clause: ``` Select [Invoice ID], SUM([Reservation Fee]) [Reservation Fee] From YourTable GROUP BY [Invoice ID] ```
Conditional summing T-SQL
[ "", "sql", "sql-server", "t-sql", "crystal-reports", "" ]
Let's say I have two related tables `parents` and `children` with a one-to-many relationship (one `parent` to many `children`). Normally when I need to process the information on these tables together, I do a query such as the following (usually with a `WHERE` clause added in): ``` SELECT * FROM parents INNER JOIN children ON (parents.id = children.parent_id); ``` How can I select all `parents` that have at least one `child` without wasting time joining all of the `children` to their `parents`? I was thinking of using some sort of `OUTER JOIN` but I am not sure exactly what to do with it. (Note that I am asking this question generally, so don't give me an answer that is tied to a specific RDBMS implementation unless there is no general solution.)
As I put earlier in comments: Solution with `LEFT JOIN` and `GROUP BY`: ``` SELECT p.parents.id FROM parents p LEFT JOIN children c ON (p.parents.id = c.children.parent_id) WHERE children.parent_id IS NOT NULL GROUP BY p.parents_id ``` The same with `DISTINCT`: ``` SELECT DISTINCT p.parents.id FROM parents p LEFT JOIN children c ON (p.parents.id = c.children.parent_id) WHERE children.parent_id IS NOT NULL ``` It should work in most SQL dialects, though some require `as` when assigning `table aliases`. The above is **not** tested. Hopefully I made no typos.
I think that the simplest solution that avoids a `JOIN` would be: ``` SELECT * FROM parents WHERE id IN (SELECT parent_id FROM children); ```
In SQL, how can I select all parents that have children?
[ "", "sql", "join", "query-optimization", "relational-database", "" ]
Let us say I have an array (received from the client side) of ids: `myArray = [1,5,19,27]` And I would like to return ALL items for which the (secondary) id is in that list. In SQL this would be: `SELECT * FROM Items WHERE id IN (1,5,19,27)` I am aware that I could do: `Item.where(id: [1,5,9,27])`, however the longer where query that this would be tacked onto uses the prepared statement syntax `Item.where('myAttrib = ? AND myOtherAttrib <> ? AND myThirdAttrib = ?', myVal[0], myVa[1], myVal[2])` with that in mind, what I would like is the following: `Item.where('id IN ?', myArray)` However, that produces a syntax error: `ActiveRecord::StatementInvalid: PG::Error: ERROR: syntax error at or near "1" LINE 1: SELECT "items".* FROM "items" WHERE (id in 1,2,3,4)` How can I work around this? What is the right way to use where with the prepared statement syntax for `IN` expressions.
You need to change your query to the following: ``` Item.where('id = ?', myArray) ``` ActiveRecord will then convert this to an IN clause if myArray is, in fact, an array.
I ended up using: `Item.where('id IN (?)', myArray)`
Using WHERE IN with Rails prepared statement syntax
[ "", "sql", "ruby", "ruby-on-rails-3", "prepared-statement", "rails-activerecord", "" ]
Im using OrientDB type graph. I need syntax of Gremlin for search same SQL LIKE operator ``` LIKE 'search%' or LIKE '%search%' ``` I've check with has and filter (in <http://gremlindocs.com/>). However it's must determine exact value is passed with type property. I think this is incorrect with logic of search. Thanks for anything.
Try: ``` g.V().filter({ it.getProperty("foo").startsWith("search") }) ``` or ``` g.V().filter({ it.getProperty("foo").contains("search") }) ```
For Cosmos Db Gremlin support ``` g.V().has('foo', TextP.containing('search')) ``` You can find the documentation [Microsoft Gremlin Support docs](https://learn.microsoft.com/en-us/azure/cosmos-db/gremlin-support) And [TinkerPop Reference](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates)
How Gremlin query same sql like for search feature
[ "", "sql", "gremlin", "orientdb", "" ]
I have this weird scenario (at least it is for me) where I have a table (actually a result set, but I want to make it simpler) that looks like the following: ``` ID | Actions ------------------ 1 | 10,12,15 2 | 11,12,13 3 | 15 4 | 14,15,16,17 ``` And I want to count the different actions in the all the table. In this case, I want the result to be 8 (just counting 10, 11, ...., 17; and ignoring the repeated values). In case it matters, I am using MS SQL 2008. If it makes it any easier, the Actions were previously on XML that looks like ``` <root> <actions>10,12,15</actions> </root> ``` I doubt it makes it easier, but somebody might comeback with an xml function that I am not aware and just makes everything easier. Let me know if there's something else I should say.
Using approach similar to <http://codecorner.galanter.net/2012/04/25/t-sql-string-deaggregate-split-ungroup-in-sql-server/>: First you need a function that would split string, there're many examples on SO, here's one of them: ``` CREATE FUNCTION dbo.Split (@sep char(1), @s varchar(512)) RETURNS table AS RETURN ( WITH Pieces(pn, start, stop) AS ( SELECT 1, 1, CHARINDEX(@sep, @s) UNION ALL SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1) FROM Pieces WHERE stop > 0 ) SELECT pn, SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop-start ELSE 512 END) AS s FROM Pieces ) ``` Using this you can run a simple query: ``` SELECT COUNT(DISTINCT S) FROM MyTable CROSS APPLY dbo.Split(',', Actions) ``` Here is the demo: <http://sqlfiddle.com/#!3/5e706/3/0>
[SQL Fiddle](http://sqlfiddle.com/#!3/8b27e/46) **MS SQL Server 2008 Schema Setup**: ``` CREATE TABLE Table1 ([ID] int, [Actions] varchar(11)) ; INSERT INTO Table1 ([ID], [Actions]) VALUES (1, '10,12,15'), (2, '11,12,13'), (3, '15'), (4, '14,15,16,17') ; ``` **Query 1**: ``` DECLARE @S varchar(255) DECLARE @X xml SET @S = (SELECT Actions + ',' FROM Table1 FOR XML PATH('')) SELECT @X = CONVERT(xml,'<root><s>' + REPLACE(@S,',','</s><s>') + '</s></root>') SELECT count(distinct [Value]) FROM ( SELECT [Value] = T.c.value('.','varchar(20)') FROM @X.nodes('/root/s') T(c)) AS Result WHERE [Value] > 0 ``` **[Results](http://sqlfiddle.com/#!3/8b27e/46/0)**: ``` | COLUMN_0 | |----------| | 8 | ``` **EDIT :** I think this is exactly what you are looking for : [SQL Fiddle](http://sqlfiddle.com/#!3/d41d8/20918) **MS SQL Server 2008 Schema Setup**: **Query 1**: ``` DECLARE @X xml SELECT @X = CONVERT(xml,replace(' <root> <actions>10,12,15</actions> <actions>11,12,13</actions> <actions>15</actions> <actions>14,15,16,17</actions> </root> ',',','</actions><actions>')) SELECT count(distinct [Value]) FROM ( SELECT [Value] = T.c.value('.','varchar(20)') FROM @X.nodes('/root/actions') T(c)) AS Result ``` **[Results](http://sqlfiddle.com/#!3/d41d8/20918/0)**: ``` | COLUMN_0 | |----------| | 8 | ```
SQL Count distinct values within the field
[ "", "sql", "sql-server", "xml", "sql-server-2008", "" ]
I try to use stored procedured first time in my project. So I have syntax error if I try to use "if","else". If NewProductId equals "0" or null, I do not want to update. else I want to update my NewProductId ``` ALTER PROCEDURE MyProcedured ( @CustomerId INT, @CustomerName VARCHAR(80), @NewProductId INT ) AS BEGIN UPDATE CUSTOMERS SET CustomerName =@CustomerName , if(@ProductId !=null && @ProductId !=0) { ProductId =@NewProductId } WHERE CustomerId = @CustomerId END ```
Looks like you are mixing two different languages (C# with SQL). Would suggest using a SQL `CASE` statement to do this: ``` ALTER PROCEDURE MyProcedured (@CustomerId INT, @CustomerName VARCHAR(80), @NewProductId INT) AS BEGIN UPDATE CUSTOMERS SET CustomerName = @CustomerName, ProductId = CASE WHEN @ProductId IS NOT NULL AND ProductId <> 0 THEN @NewProductId ELSE ProductId END WHERE CustomerId = @CustomerId END ```
T-SQL is not C# ``` if @NewProductId is not null and @NewProductId <> 0 BEGIN UPDATE CUSTOMERS SET CustomerName =@CustomerName, ProductId =@NewProductId WHERE CustomerId = @CustomerId END ELSE BEGIN UPDATE CUSTOMERS SET CustomerName =@CustomerName WHERE CustomerId = @CustomerId END ``` Also notice that you don't have any variable named `@ProductID`. I suppose that you want to test the value in `@NewProductID`
Stored Procedure if else syntax error
[ "", "sql", "stored-procedures", "if-statement", "" ]
I have three tables: ``` Shop_Table shop_id shop_name Sells_Table shop_id item_id price Item_Table item_id item_name ``` The `Sells_Table` links the item and shop tables via FK's to their ids. I am trying to get the most expensive item from each store, i.e., output of the form: ``` (shop_name, item_name, price) (shop_name, item_name, price) (shop_name, item_name, price) (shop_name, item_name, price) ... ``` where price is the max price item for each shop. I can seem to achieve (shop\_name, max(price)) but when I try to include the item\_name I am getting multiple entries for the shop\_name. My current method is ``` create view shop_sells_item as select s.shop_name as shop, i.item_name as item, price from Shop_Table s join Sells_Table on (s.shop_id = Sells_Table.shop_id) join Item_Table i on (i.item_id = Sells_Table.item_id); select shop, item, max(price) from shop_sells_item group by shop; ``` However, I get an error saying that `item must appear in the GROUP BY clause or be used in an aggregate function`, but if I include it then I don't get the max price for each shop, instead I get the max price for each shop,item pair which is of no use. Also, is using a view the best way? could it be done via a single query?
Please note that the query below doesn't deal with a situation where multiple items in one store have the same maximum price (they are all the most expensive ones): ``` SELECT s.shop_name, i.item_name, si.price FROM Sells_Table si JOIN Shop_Table s ON si.shop_id = s.shop_id JOIN Item_Table i ON si.item_id = i.item_id WHERE (shop_id, price) IN ( SELECT shop_id, MAX(price) AS price_max FROM Sells_Table GROUP BY shop_id ); ```
You can do it Postgresql way: ``` select distinct on (shop_name) shop_name, item_name, price from shop_table join sells_table using (shop_id) join item_table using (item_id) order by shop_name, price; ```
How to join many to many relationship on max value for relationship field
[ "", "sql", "postgresql", "" ]
Im having problems with constructing a query which filters out posts from table a, that already existing in table b. Table A: ``` `bok_id` int(6) NOT NULL AUTO_INCREMENT, `tid` time NOT NULL, `datum` date NOT NULL, `datum_end` date NOT NULL, `framtid` varchar(5) NOT NULL, `h_adress` varchar(100) NOT NULL, `l_adress` varchar(100) NOT NULL, `kund` varchar(100) NOT NULL, `typ` varchar(5) NOT NULL, `bil` varchar(99) NOT NULL, `sign` varchar(2) NOT NULL, `tilldelad` varchar(12) NOT NULL, `skapad` datetime NOT NULL, `endrad` datetime NOT NULL, `endrad_av` varchar(2) NOT NULL, `kommentar` text NOT NULL, `weektype` varchar(3) NOT NULL, `Monday` tinyint(1) NOT NULL, `Tuesday` tinyint(1) NOT NULL, `Wednesday` tinyint(1) NOT NULL, `Thursday` tinyint(1) NOT NULL, `Friday` tinyint(1) NOT NULL, `Saturday` tinyint(1) NOT NULL, `Sunday` tinyint(1) NOT NULL, `avbokad` varchar(2) NOT NULL, `unika_kommentarer` varchar(2) NOT NULL, UNIQUE KEY `bok_id` (`bok_id`) ``` Table B: ``` `id` int(6) NOT NULL AUTO_INCREMENT, `bok_id` int(6) NOT NULL, `tid` time NOT NULL, `datum` date NOT NULL, `datum_end` date NOT NULL, `framtid` varchar(5) NOT NULL, `h_adress` varchar(100) NOT NULL, `l_adress` varchar(100) NOT NULL, `kund` varchar(100) NOT NULL, `typ` varchar(5) NOT NULL, `bil` varchar(99) NOT NULL, `sign` varchar(2) NOT NULL, `tilldelad` varchar(12) NOT NULL, `skapad` datetime NOT NULL, `endrad` datetime NOT NULL, `endrad_av` varchar(2) NOT NULL, `kommentar` text NOT NULL, `weektype` varchar(3) NOT NULL, `Monday` tinyint(1) NOT NULL, `Tuesday` tinyint(1) NOT NULL, `Wednesday` tinyint(1) NOT NULL, `Thursday` tinyint(1) NOT NULL, `Friday` tinyint(1) NOT NULL, `Saturday` tinyint(1) NOT NULL, `Sunday` tinyint(1) NOT NULL, `avbokad` varchar(2) NOT NULL, `unika_kommentarer` varchar(2) NOT NULL, UNIQUE KEY `id` (`id`) ``` What i want is a query which hides all rows in Table A which exists in table B IF Table B.tilldelad=Requested Date. (f.e 2013-09-30) Im not sure this makes any sense? What i want is to filter out period bookings as they have been executed and therefore exists in table b which indicates it has been. Since its recurring events the same bok\_id can exists SEVERAL times in table B, but only ONCE in table A... ``` SELECT * FROM bokningar WHERE bokningar.datum <= '2013-09-30' AND bokningar.datum_end >= '2013-09-30' AND bokningar.typ >= '2' AND bokningar.weektype = '1' AND bokningar.Monday = '1' ## Monday this is dynamically changed to current date AND bokningar.avbokad < '1' AND NOT EXISTS ( SELECT 1 FROM tilldelade WHERE tilldelade.tilldelad = '2013-09-30' ) ``` The above code does the trick regarding filtering out rows not in table B, however, if one row in table B has the current date, all results are filtered out. Only rows with corresponding bok\_id's is supposted to get filtered out. Any thoughts on how to do that? Perhaps a Distinct select?
Typically, NOT IN subselects are quite expensive in querying time. This can normally be handled by doing a LEFT-JOIN and Keeping only those where the other table IS NULL like... ``` SELECT * FROM bokningar LEFT JOIN tilldelade on bokningar.bok_id = tilldelade.bok_id AND tilldelade.tilldelad = '2013-09-30' WHERE bokningar.datum <= '2013-09-30' AND bokningar.datum_end >= '2013-09-30' AND bokningar.typ >= '2' AND bokningar.weektype = '1' AND bokningar.Monday = '1' ## Monday this is dynamically changed to current date AND bokningar.avbokad < '1' AND tilldelade.bok_id IS NULL ```
try something like ``` SELECT ... FROM TABLEA WHERE TABLEA.tilldelad=Requested Date AND NOT EXISTS (SELECT 1 FROM TABLEB WHERE TABLEB.tilldelad=Requested Date AND TABLEA.bok_id = TABLEB.bok_id AND...) ```
Mysql select * from a where doesnt exist in b
[ "", "mysql", "sql", "" ]
I was wondering how I can left join a table to itself or use a case statement to assign max values within a view. Say I have the following table: ``` Lastname Firstname Filename Smith John 001 Smith John 002 Smith Anna 003 Smith Anna 004 ``` I want to create a view that lists all the values but also has another column that displays whether the current row is the max row, such as: ``` Lastname Firstname Filename Max_Filename Smith John 001 NULL Smith John 002 002 Smith Anna 003 NULL Smith Anna 004 NULL ``` Is this possible? I have tried the following query: ``` SELECT Lastname, Firstname, Filename, CASE WHEN Filename = MAX(FileName) THEN Filename ELSE NULL END AS Max_Filename ``` but I am told that Lastname is not in the group by clause. However, if I group on Lastname, firstname, filename, then everything in the max\_filename is the same as filename. Can you please help me understand what I'm doing wrong and how to make this query work?
actually you're very close, but instead of using `max` as simple aggregate you can use `max` as window function: ``` select Lastname, Firstname, Filename, case when Filename = max(Filename) over(partition by Lastname, Firstname) then Filename else null end as Max_Filename from Table1 ``` **`sql fiddle demo`**
It could be something like that: ``` SELECT T.Lastname, T.FirstName, T.Filename, CASE (SELECT MAX(T1.Filename) FROM MyTable T1 WHERE T.Lastname = T1.Lastname AND T.FirstName = T1.FirstName) WHEN T.Filename THEN T.Filename ELSE NULL END FROM MyTable T ``` But I'm not sure what you mean by max filename? Total max from all records? Or separately for each name? Your expected result don't match either. Let me know and I'll modify the query.
Join based on max value
[ "", "sql", "sql-server", "t-sql", "max", "aggregate-functions", "" ]
I am getting a ORA-00923 (FROM keyword not found where expected) error when i run this query in sql\*plus. ``` SELECT EMPLOYEE_ID, FIRST_NAME||' '||LAST_NAME AS FULLNAME FROM EMPLOYEES WHERE (JOB_ID, DEPARTMENT_ID) IN (SELECT JOB_ID, DEPARTMENT_ID FROM JOB_HISTORY) AND DEPARTMENT_ID=80; ``` I ran that query in sql developer and guess what, it works without any problem, why I'm getting this error message when I try in sql\*plus.
``` SELECT EMPLOYEE_ID, FIRST_NAME || ' ' || LAST_NAME AS FULLNAME FROM EMPLOYEES WHERE JOB_ID IN (SELECT JOB_ID FROM JOB_HISTORY WHERE DEPARTMENT_ID = 80); ``` OR ``` SELECT EMPLOYEE_ID, FIRST_NAME || ' ' || LAST_NAME AS FULLNAME FROM EMPLOYEES WHERE JOB_ID IN (SELECT JOB_ID FROM JOB_HISTORY) AND DEPARTMENT_ID = 80; ``` OR ``` SELECT EMPLOYEE_ID, FIRST_NAME || ' ' || LAST_NAME AS FULLNAME FROM EMPLOYEES E WHERE EXISTS (SELECT NULL FROM JOB_HISTORY J WHERE J.JOB_ID = E.JOB_ID) AND DEPARTMENT_ID = 80; ```
Your query is totally valid and runs in sqlplus exactly as it should: ``` 14:04:01 (41)HR@sandbox> l 1 SELECT EMPLOYEE_ID, FIRST_NAME||' '||LAST_NAME AS FULLNAME 2 FROM EMPLOYEES 3 WHERE (JOB_ID, DEPARTMENT_ID) 4 IN (SELECT JOB_ID, DEPARTMENT_ID FROM JOB_HISTORY) 5* AND DEPARTMENT_ID=80 14:04:05 (41)HR@sandbox> / 34 rows selected. Elapsed: 00:00:00.01 ``` You encounter ORA-00923 only when you have a syntax error. Like this: ``` 14:04:06 (41)HR@sandbox> ed Wrote file S:\spool\sandbox\BUF_HR_41.sql 1 SELECT EMPLOYEE_ID, FIRST_NAME||' '||LAST_NAME AS FULLNAME X 2 FROM EMPLOYEES 3 WHERE (JOB_ID, DEPARTMENT_ID) 4 IN (SELECT JOB_ID, DEPARTMENT_ID FROM JOB_HISTORY) 5* AND DEPARTMENT_ID=80 14:05:17 (41)HR@sandbox> / SELECT EMPLOYEE_ID, FIRST_NAME||' '||LAST_NAME AS FULLNAME X * ERROR at line 1: ORA-00923: FROM keyword not found where expected ``` Probably you made one while copying your query from sqldeveloper to sqlplus? Are you sure that your post contains exactly, symbol-to-symbol, the query you're actually trying to execute? I would pay more attention to query text and error message - it usually points at an error, like `*` under `X` in my example.
Problems with query in SQL*PLUS
[ "", "sql", "oracle", "sqlplus", "" ]
I have a table that looks something like this: ``` SetId ID Premium 2012 5 Y 2012 6 Y 2013 5 N 2013 6 N ``` I want to update the 2013 records with the premium values where the setid equals 2012. So after the query it would look like this: ``` SetId ID Premium 2012 5 Y 2012 6 Y 2013 5 Y 2013 6 Y ``` Any help greatly appreciated
It's not clear which 2012 value you want to use to update which 2013 value, i've assumed that the `ID` should be the same. Full example using table variables that you can test yourself in management studio. ``` DECLARE @Tbl TABLE ( SetId INT, Id INT, Premium VARCHAR(1) ) INSERT INTO @Tbl VALUES (2012, 5, 'Y') INSERT INTO @Tbl VALUES (2012, 6, 'Y') INSERT INTO @Tbl VALUES (2013, 5, 'N') INSERT INTO @Tbl VALUES (2013, 6, 'N') --Before Update SELECT * FROM @Tbl --Something like this is what you need UPDATE t SET t.Premium = t2.Premium FROM @Tbl t INNER JOIN @Tbl t2 ON t.Id = t2.Id WHERE t2.SetId = 2012 AND t.SetId = 2013 --After Update SELECT * FROM @Tbl ```
``` UPDATE t SET t.Premium = (SELECT TOP 1 t2.Premium FROM dbo.TableName t2 WHERE t2.SetId = 2012) FROM dbo.TableName t WHERE t.SetId = 2013 ``` `Demonstration`
SQL Server - Update column from data in the same table
[ "", "sql", "sql-server", "" ]
I want to use **If Else** (or similar case determining tech:) under filterring clause of **WHERE**. Here is my sample code. ``` select * from some_table where record_valid_flg = 1 and if(user_param = '1') then age > 20 else name like 'Mr%' end if ``` How can I figure it out. I would really appreciated any suggestions.. Thanks!
Use CASE like this: ``` select * from some_table where record_valid_flg = 1 and CASE WHEN user_param = '1' THEN age > 20 ELSE name like 'Mr%' END; ```
``` CASE WHEN user_param = '1' THEN age > 20 ELSE name like 'Mr%' END ```
How can I achieve IF ELSE under clause of WHERE?
[ "", "mysql", "sql", "" ]
I'm trying to create a trigger for a Oracle table. Here are the requirements I have two tables Books, Copies (Books & copies have a 1 to n relationship. Each book can have 0 to n copies) Book Table: ``` CREATE TABLE Book ( book_id INTEGER NOT NULL , isbn VARCHAR2 (20) NOT NULL, publisher_id INTEGER NOT NULL , tittle VARCHAR2 (100) NOT NULL , cat_id INTEGER NOT NULL , no_of_copies INTEGER NOT NULL , .... CONSTRAINT isbn_unique UNIQUE (isbn), CONSTRAINT shelf_letter_unique UNIQUE (shelf_letter, call_number) ) ; ``` Copies Table ``` CREATE TABLE Copies ( copy_id INTEGER NOT NULL , book_id INTEGER NOT NULL , copy_number INTEGER NOT NULL, constraint copy_number_unique unique(book_id,copy_number) ) ; ``` The trigger (on update,edit of a the Book table) should add its corresponding copy records into Copies table. So if a insert into Books table has Book.no\_of\_copies as 5 then five new records should be inserted into the Copies table..
This is kinda long, but actually quite straightforward. Tested on a Oracle 10gR2 setup. Table: ``` CREATE TABLE books ( book_id INTEGER NOT NULL, no_of_copies INTEGER NOT NULL, CONSTRAINT pk_book_id PRIMARY KEY (book_id) ); CREATE TABLE copies ( book_id INTEGER NOT NULL, copy_no INTEGER NOT NULL, CONSTRAINT fk_book_id FOREIGN KEY (book_id) REFERENCES books (book_id) ON DELETE CASCADE ); ``` Then trigger: ``` CREATE TRIGGER tri_books_add AFTER INSERT ON books FOR EACH ROW DECLARE num INTEGER:=1; BEGIN IF :new.no_of_copies>0 THEN WHILE num<=:new.no_of_copies LOOP INSERT INTO copies (book_id,copy_no) VALUES (:new.book_id,num); num:=num+1; END LOOP; END IF; END; / CREATE TRIGGER tri_books_edit BEFORE UPDATE ON books FOR EACH ROW DECLARE num INTEGER:=1; BEGIN IF :new.no_of_copies<:old.no_of_copies THEN RAISE_APPLICATION_ERROR(-20001,'Decrease of copy number prohibited.'); ELSIF :new.no_of_copies>:old.no_of_copies THEN SELECT max(copy_no)+1 INTO num FROM copies WHERE book_id=:old.book_id; WHILE num<=:new.no_of_copies LOOP INSERT INTO copies (book_id,copy_no) VALUES (:old.book_id,num); num:=num+1; END LOOP; END IF; END; / ``` What the trigger do: * For the `tri_books_add` 1. use a `num` to "remember" `copy_no`; 2. use a [`WHILE-LOOP`](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/controlstatements.htm#CJACCEAC) statement to add copies. * For the `tri_books_edit` 1. use a `num` to "remember" `copy_no`; 2. [check if](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/controlstatements.htm#CJAJGBEE) new `no_of_copies` is illegally decreased; if so, [raise a custom error](https://stackoverflow.com/q/6020450); 3. append copies. The reason I separate books inserting and editing into two triggers, is because I employed a [`foreign key` constraint](http://docs.oracle.com/cd/E11882_01/server.112/e41084/clauses002.htm#i1002118), so `after insert` would be needed for inserting (correct me if I'm wrong on this, though). I then run some test: ``` INSERT INTO books (book_id,no_of_copies) VALUES (1,3); INSERT INTO books (book_id,no_of_copies) VALUES (2,5); ``` ``` SQL> select * from copies; BOOK_ID COPY_NO ---------- ---------- 1 1 1 2 1 3 2 1 2 2 2 3 2 4 2 5 8 rows selected. SQL> update books set no_of_copies=5 where book_id=1; 1 row updated. SQL> select * from copies; BOOK_ID COPY_NO ---------- ---------- 1 1 1 2 1 3 2 1 2 2 2 3 2 4 2 5 1 4 1 5 10 rows selected. SQL> update books set no_of_copies=3 where book_id=1; update books set no_of_copies=3 where book_id=1 * ERROR at line 1: ORA-20001: Decrease of copy number prohibited. ORA-06512: at "LINEQZ.TRI_BOOKS_EDIT", line 5 ORA-04088: error during execution of trigger 'LINEQZ.TRI_BOOKS_EDIT' ``` (I don't seem to be able to make [sqlfiddle](http://sqlfiddle.com/) to work on triggers so no online demos, sorry.)
Below code works for INSERT and UPDATE in table BOOK. It would insert rows in COPIES table only when either a new row is inserted in table BOOK, or an existing row in table BOOK is updated with no\_of\_copies greater than its current value. Table creation: ``` CREATE TABLE Book ( book_id INTEGER NOT NULL , isbn VARCHAR2 (20) NOT NULL, publisher_id INTEGER NOT NULL , tittle VARCHAR2 (100) NOT NULL , cat_id INTEGER NOT NULL , no_of_copies INTEGER NOT NULL , CONSTRAINT isbn_unique UNIQUE (isbn) ) ; CREATE TABLE Copies ( copy_id INTEGER NOT NULL , book_id INTEGER NOT NULL , copy_number INTEGER NOT NULL, constraint copy_number_unique unique(book_id,copy_number) ); CREATE SEQUENCE COPY_SEQ MINVALUE 1 MAXVALUE 999999 START WITH 1 INCREMENT BY 1 NOCACHE; ``` --- Trigger: ``` CREATE OR REPLACE TRIGGER TR_TEST BEFORE INSERT OR UPDATE ON BOOK FOR EACH ROW DECLARE V_CURR_COPIES NUMBER; V_COUNT NUMBER := 0; BEGIN IF :NEW.NO_OF_COPIES > NVL(:OLD.NO_OF_COPIES, 0) THEN SELECT COUNT(1) INTO V_CURR_COPIES --# of rows in COPIES table for a particular book. FROM COPIES C WHERE C.BOOK_ID = :NEW.BOOK_ID; WHILE V_COUNT < :NEW.NO_OF_COPIES - V_CURR_COPIES LOOP INSERT INTO COPIES ( COPY_ID, BOOK_ID, COPY_NUMBER ) SELECT COPY_SEQ.NEXTVAL, :NEW.BOOK_ID, V_COUNT + V_CURR_COPIES + 1 FROM DUAL; V_COUNT := V_COUNT + 1; END LOOP; END IF; END; ``` --- Testing: ``` INSERT INTO BOOK VALUES (1, 'ABCDEF', 2, 'TEST BOOK', 1, 3); UPDATE BOOK B SET B.NO_OF_COPIES = 4 WHERE B.BOOK_ID = 1; ```
Oracle SQL Trigger insert new records based on a insert column value
[ "", "sql", "database", "oracle", "triggers", "oracle-sqldeveloper", "" ]
I see lot of debate of 'Select' being called as DML. Can some one explain me why its DML as its not manipulating any data on schema? As its puts locks on table should this be DML? \**In Wikipedia \** I can see > "The purely read-only SELECT query statement is classed with the > 'SQL-data' statements[2] and so is considered by the standard to be > outside of DML. The SELECT ... INTO form is considered to be DML > because it manipulates (i.e. modifies) data. In common practice > though, this distinction is not made and SELECT is widely considered > to be part of DML.[3]" But in SELECT \* FROM INSERT select will just perform selection nothing other than this!! Please someone help me in understanding this concept. Thanks
The distinction that people usually make is between DDL (data definition language, i.e. managing schema objects) and DML (data manipulation language, i.e. managing data within the schema created by DDL). Clearly a SELECT is not DDL.
Data Manipulation Language (DML) is a vocabulary used to query/retrieve and work with data. Don't go by the word Manipulation, such statement allows both data accessing and processing. As you have tagged question with SQL Server 2005 following link can be referred: <http://technet.microsoft.com/en-US/library/ms177591(v=SQL.90).aspx>
Why 'Select' is called as DML statement ?
[ "", "sql", "database", "sql-server-2005", "" ]
I am trying to get the specific day of a Year. Here's what I have tried till now:- ``` -- Declare few variables DECLARE @Currentdate AS DATETIME DECLARE @DueDate AS DATETIME DECLARE @NewDate AS DATETIME -- Set the variables properly, just for testing SET @Currentdate = GETDATE() SET @DueDate = DATEADD(MONTH, 2, DATEADD(YEAR, 1, @Currentdate)) -- Check the output SELECT @Currentdate -- 2013-09-30 00:00:00.000 SELECT @DueDate -- 2014-11-30 00:00:00.000 ``` So, I want to get the `@NewDate` based on the `@Currentdate` year. For this I tried:- ``` SELECT @NewDate = DATEADD(DAY, DAY(DATEDIFF(day, 1, @DueDate)), DATEADD(MONTH, DATEDIFF(MONTH, 0, @Currentdate), 0)) SELECT @NewDate -- 2013-09-30 00:00:00.000 ``` But it didn't worked. :( My expected result is like: ``` -- 2013-11-30 00:00:00.000 -- Having the due date month and date same, but the year as current date one. ``` Any help is appreciated! **UPDATE** Sorry for all the confusion I have created. My question in simple words is:- I want to get the a new date variable having the date and the month same as `@DueDate` variable but the year as given in the `@Currentdate` variable. I hope that would clear things up a bit.
If the question is "given I have a particular datetime value in one variable, can I set another variable to be for the same day and month but in the current year" then the answer would be: ``` declare @DueDate datetime declare @NewDate datetime set @DueDate = '20141130' --Need to set @NewDate to the same month and day in the current year set @NewDate = DATEADD(year, --Here's how you work out the offset DATEPART(year,CURRENT_TIMESTAMP) - DATEPART(year,@DueDate), @DueDate) select @DueDate,@NewDate ``` --- > I want to get the a new date variable having the date and the month same as @DueDate variable but the year as given in the @Currentdate variable. Well, that's simply the above query with a single tweak: ``` set @NewDate = DATEADD(year, --Here's how you work out the offset DATEPART(year,@Currentdate) - DATEPART(year,@DueDate), @DueDate) ```
Try this one: ``` CAST(CAST( -- cast INT to VARCHAR and then to DATE YEAR(GETDATE()) * 10000 + MONTH(@DueDate) * 100 + DAY(@DueDate) -- convert current year + @DueDate's month and year parts to YYYYMMDD integer representation + CASE -- change 29th of February to 28th if current year is a non-leap year WHEN MONTH(@DueDate) = 2 AND DAY(@DueDate) = 29 AND ((YEAR(GETDATE()) % 4 = 0 AND YEAR(GETDATE()) % 100 <> 0) OR YEAR(GETDATE()) % 400 = 0) THEN 0 ELSE -1 END AS VARCHAR(8)) AS DATE) ```
How to get date to be the same day and month from one variable but in the year based on another variable?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Is it possible to have SQL stop checking the WHERE clause once a condition is met? For instance, if I have a statement as below: ``` SELECT * FROM Table1 WHERE Table1.SubID = (SELECT TOP 1 SubID FROM Table2 ORDER BY Date DESC) OR Table1.OrderID = (SELECT TOP 1 OrderID FROM Table2 ORDER BY Date DESC) ``` Is it possible to stop execution after the first check? In essence, only one of the two checks in the where clause should be used, giving precedence to the first. Example cases below. Example Cases: **Case1** ``` Table1 SubID=600 OrderID=5 Table2 TOP 1 SubID=NULL Table2 TOP 1 OrderID=5 Matches the OrderID to 5 ``` **Case 2** ``` Table1 SubId=600 OrderId=5 Table2 Top 1 SubID=600 Table2 Top 1 OrderID=3 Matches to SubID=600, not OrderID=3 ``` Given suggested answers, a `with` seems the best possible solution to solve what SQL is not inherently able to do. For my specific situation, the issue comes when attempting to put this into an `outer apply`, as below. ``` SELECT * FROM tbl_MainFields OUTER APPLY ( WITH conditional AS ( SELECT 1 AS 'choice', PlanCode, Carrier FROM tbl_payers WHERE tbl_payers.PlanCode = ( SELECT TOP 1 PlanCode FROM tbl_payerDenials WHERE tbl_payerDenials.AccountNumber = tbl_mainFields.AccountNumber ORDER BY InsertDate DESC ) UNION ALL SELECT 2 AS 'choice', PlanCode, Carrier FROM tbl_payers WHERE tbl_payers.OrderNum = ( SELECT TOP 1 DenialLevel FROM tbl_payerDenials WHERE tbl_payerDenials.AccountNumber = tbl_mainFields.AccountNumber ORDER BY InsertDate DESC ) ) SELECT PlanCode AS DenialPC, Carrier AS DenialCAR FROM conditional WHERE choice = (SELECT MIN(choice) FROM conditional) ) denialData ```
Credit to @RaduGheorghiu for the inspiration. This functionality is similar to the `WITH` and `MIN` combination suggested, but allows for use within an `OUTER APPLY` ``` SELECT * FROM tbl_MainFields OUTER APPLY ( SELECT TOP 1 PlanCode AS DenialPC, Carrier AS DenialCAR, Precedence FROM ( SELECT 1 AS 'Precedence', PlanCode, Carrier FROM tbl_payers WHERE tbl_payers.PlanCode = ( SELECT TOP 1 PlanCode FROM tbl_payerDenials WHERE tbl_payerDenials.AccountNumber = tbl_mainFields.AccountNumber ORDER BY InsertDate DESC ) UNION ALL SELECT 2 AS 'Precedence', PlanCode, Carrier FROM tbl_payers WHERE tbl_payers.OrderNum = ( SELECT TOP 1 DenialLevel FROM tbl_payerDenials WHERE tbl_payerDenials.AccountNumber = tbl_mainFields.AccountNumber ORDER BY InsertDate DESC ) ) AS denialPrecedence ORDER BY Precedence ) denialData ```
I think you can try something like this ``` WITH conditional AS( SELECT 1 AS 'choice', PlanCode, Carrier FROM tbl_payers WHERE tbl_payers.PlanCode = ( SELECT TOP 1 PlanCode FROM tbl_payerDenials JOIN tbl_mainFields ON tbl_payerDenials.AccountNumber = tbl_mainFields.AccountNumber ORDER BY InsertDate DESC ) UNION ALL SELECT 2 AS 'choice', PlanCode, Carrier FROM tbl_payers WHERE tbl_payers.OrderNum = ( SELECT TOP 1 DenialLevel FROM tbl_payerDenials JOIN tbl_mainFields ON tbl_payerDenials.AccountNumber = tbl_mainFields.AccountNumber ORDER BY InsertDate DESC ) ) SELECT * FROM tbl_MainFields tMF OUTER APPLY ( SELECT * FROM conditional c WHERE c.choice = (SELECT MIN(choice) FROM conditional) ) denialData ``` I'm using the `1` and `2` values to `mark` the queries, and then select the information from the first, if it returns values, otherwise it returns values from the second query (the `MIN(choice)` part). I hope it is clear.
SQL Or with a stop after first check
[ "", "sql", "sql-server", "t-sql", "select", "where-clause", "" ]
``` CREATE FUNCTION [dbo].[udfGetNextEntityID] () RETURNS INT AS BEGIN ;WITH allIDs AS ( SELECT entity_id FROM Entity UNION SELECT entity_id FROM Reserved_Entity ) RETURN (SELECT (MAX(entity_id) FROM allIDs ) END GO ``` SQL isn't my strong point, but I can't work out what I'm doing wrong here. I want the function to return the largest entity\_id from a union of 2 tables. Running the script gives the error: ``` Incorrect syntax near the keyword 'RETURN'. ``` I looked to see if there was some restriction on using CTEs in functions but couldn't find anything relevant. How do I correct this?
``` CREATE FUNCTION [dbo].[udfGetNextEntityID]() RETURNS INT AS BEGIN DECLARE @result INT; WITH allIDs AS ( SELECT entity_id FROM Entity UNION SELECT entity_id FROM Reserved_Entity ) SELECT @result = MAX(entity_id) FROM allIDs; RETURN @result; END GO ```
While you can do it, why do you need a CTE here? ``` RETURN ( SELECT MAX(entity_id) FROM ( SELECT entity_id FROM dbo.Entity UNION ALL SELECT entity_id FROM dbo.Reserved_Entity ) AS allIDs ); ``` Also there is no reason to use `UNION` instead of `UNION ALL` since this will almost always introduce an expensive distinct sort operation. And [please always use the schema prefix when creating / referencing any object](https://sqlblog.org/2009/10/11/bad-habits-to-kick-avoiding-the-schema-prefix).
Error using Common Table Expression in SQL User Defined Function
[ "", "sql", "sql-server", "sql-function", "" ]
I am in a logjam. When I run the following query, it works: ``` select DISTINCT l.Seating_Capacity - (select count(*) from tblTrainings t1, tbllocations l where l.locationId = t1.LocationId) as availableSeats from tblTrainings t1, tbllocations l where l.locationId = t1.LocationId ``` However, we would like to add a CASE statement that says, when Seating\_Capacity - total count as shown above = 0 then show 'FULL' message. Otherwise, show remaining number. Here is that query: ``` select DISTINCT case when l.Seating_Capacity - (select count(*) from tblTrainings t1, tbllocations l where l.locationId = t1.LocationId) = 0 then 'full' else STR(Seating_Capacity) end) availableSeats from tblTrainings t1, tbllocations l where l.locationId = t1.LocationId ``` I am getting 'Incorrect syntax near ')' which is near 'End' I am also getting an error that the inner Seating\_Capacity is invalid column name. Your assistance is greatly appreciated. I must have been in a dream land because I thought it was working during testing. Now, the app is live and it isn't working. Thanks a lot in advance ``` select DISTINCT l.LocationId,c.courseId, c.coursename, l.Seating_Capacity - (select count(*) from tblTrainings t1 where l.locationId = t1.LocationId and c.courseId = t1.courseId) as availableSeats,d.dateid,d.trainingDates,d.trainingtime,c.CourseDescription, i.instructorName,l.location,l.seating_capacity from tblLocations l Inner Join tblCourses c on l.locationId = c.locationId left join tblTrainings t on l.locationId = t.LocationId and c.courseId = t.courseId Inner Join tblTrainingDates d on c.dateid=d.dateid Inner Join tblCourseInstructor ic on c.courseId = ic.CourseId Inner Join tblInstructors i on ic.instructorId = i.instructorId WHERE CONVERT(VARCHAR(10), d.trainingDates, 101) >= CONVERT(VARCHAR(10), GETDATE(), 101) ```
I think you are over complicating the query with your subquery. As I understand it then the following should work as you need: ``` SELECT AvailableSeats = CASE WHEN l.Seating_Capacity - COUNT(*) = 0 THEN 'Full' ELSE STR(l.Seating_Capacity - COUNT(*)) END FROM tblTrainings t1 INNER JOIN tblLocations l ON l.LocationID = t1.LocationID GROUP BY l.Seating_Capacity; ``` I have changed your else to `STR(l.Seating_Capacity - COUNT(*))` because I assume you want to know the seats remaining, rather than just the capacity? If I have misinterpreted the requirement, just change it to `STR(l.Seating_Capacity)`. I have also switched your ANSI 89 implicit joins to ANSI 92 explicit joins, the standard changed over 20 years, and there are [plenty of good reasons](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins) to switch to the newer syntax. But for completeness the ANSI 89 version of the above query would be: ``` SELECT AvailableSeats = CASE WHEN l.Seating_Capacity - COUNT(*) = 0 THEN 'Full' ELSE STR(l.Seating_Capacity - COUNT(*)) END FROM tblTrainings t1, tblLocations l WHERE l.LocationID = t1.LocationID GROUP BY l.Seating_Capacity; ``` --- **EDIT** To adapt your full query you can simply replace your subquery in the select, with a joined subquery: ``` SELECT l.LocationId, c.courseId, c.coursename, CASE WHEN l.Seating_Capacity - t.SeatsTaken = 0 THEN 'Full' ELSE STR(l.Seating_Capacity - t.SeatsTaken) END AS availableSeats, d.dateid, d.trainingDates, d.trainingtime, c.CourseDescription, i.instructorName, l.location, l.seating_capacity FROM tblLocations l INNER JOIN tblCourses c ON l.locationId = c.locationId LEFT JOIN ( SELECT t.LocationID, t.CourseID, SeatsTaken = COUNT(*) FROM tblTrainings t GROUP BY t.LocationID, t.CourseID ) t ON l.locationId = t.LocationId AND c.courseId = t.courseId INNER JOIN tblTrainingDates d ON c.dateid=d.dateid INNER JOIN tblCourseInstructor ic ON c.courseId = ic.CourseId INNER JOIN tblInstructors i ON ic.instructorId = i.instructorId WHERE d.trainingDates >= CAST(GETDATE() AS DATE); ``` JOINs tend to optimise better than correlated subqueries (although sometimes the optimiser can determine that a JOIN would work instead), it also means that you can reference the result (`SeatsTaken`) multiple times without re-evaluating the subquery. In addition, by moving the count to a subquery, and removing the join to `tblTrainings` I think you eliminate the need for `DISTINCT` which should improve the performance. Finally I have changed this line: ``` WHERE CONVERT(VARCHAR(10), d.trainingDates, 101) >= CONVERT(VARCHAR(10), GETDATE(), 101) ``` To ``` WHERE d.trainingDates >= CAST(GETDATE() AS DATE); ``` I don't know if you do, but if you had an index on d.TrainingDates then by converting it to varchar to compare it to today you remove the ability for the optimiser to use this index, since you are only saying `>=` midnight today, there is no need to perform any conversion on d.TrainingDates, and all you need to do is remove the time portion of GETDATE(), which can be done by casting to DATE. More on this is contained [in this article](https://sqlblog.org/2009/10/16/bad-habits-to-kick-mis-handling-date-range-queries) (Yet another gem from [Aaron Bertrand](https://stackoverflow.com/users/61305/))
To avoid repeating the expression, you can use a [`WITH` clause](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) to simplify your query: ``` WITH ( -- Start with your query that already works SELECT DISTINCT l.Seating_Capacity - (select count(*) from tblTrainings t1, tbllocations l where l.locationId = t1.LocationId) AS availableSeats FROM tblTrainings t1, tbllocations l WHERE l.locationId = t1.LocationId ) AS source SELECT -- Add a CASE statement on top of it CASE WHEN availableSeats = 0 THEN 'Full' ELSE STR(availableSeats) END AS availableSeats FROM source ```
My CASE statement is wrong. Any idea what I am doing wrong?
[ "", "sql", "sql-server-2008", "sql-server-2005", "" ]
I'd prefer to use set logic rather than iterating over the table with a cursor or something, but if that's what is needed it can be done. I am basically preparing a view in a stored procedure that will be used elsewhere for BI. Right now the stored procedure is just a select statement pulling from various tables with a decent amount of joins and other random logic. Here is an example of about what the tables would look like. First, the table that will be returned, second what exclusions the user wants to make. ![Example](https://i.stack.imgur.com/AWOiM.jpg) I want to look at each record in the excludes table and then apply that as a filter to the first table, so it would eliminate all the rows where the items match. (It will get a bit more complex down the road because they could select to eliminate an entire LocationCode, which would then cascade through all the WarehouseCodes and everything below it. It's basically a hierarchy. But I want to get the general idea down). I wasn't sure how to do it with a NOT EXISTS since I have to go row by row. I'm not sure if i have to use a cursor or iterate over another way. I'm wondering if there is another tool in SQL I am not aware of. Any suggestions on how to efficiently eliminate rows based on another tables values would be appreciated, thanks.
if you want to get whole rows from Table1 except rows which have similars in Table2, you can do this: ``` select * -- Column list here from Table1 as t where not exists ( select t.LocationCode, t.WarehouseCode, t.WarehouseName, t.StorageAddress intersect select t2.LocationCode, t2.WarehouseCode, t2.WarehouseName, t2.StorageAddress from Table2 as t2 ) ``` or more convinient ``` select * -- Column list here from Table1 as t where not exists ( select * from Table2 as t2 where t2.LocationCode = t.LocationCode and t2.WarehouseCode = t.WarehouseCode and t2.WarehouseName = t.WarehouseName and t2.StorageAddress = t.StorageAddress ) ```
Look into the [`EXCEPT`](http://technet.microsoft.com/en-us/library/ms188055%28v=sql.105%29.aspx) clause, which works similarly to a `UNION` ``` select col1, col2, col3 from yourtable EXCEPT select col1, col2, col3 from exceptions ```
Complex SQL Exclude Logic with set based operations
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am using mysql Ver 14.14 Distrib 5.5.31, for debian-linux-gnu (x86\_64). This query fails with a `You have an error in your SQL syntax[...]near 'read ASC'` message : ``` SELECT 'messages'.* FROM 'messages' WHERE 'messages'.'user_id' = 2 ORDER BY read ASC; ``` where the `read` column is a `TINYINT(1)` value generated by the Rails ActiveRecord interface to store boolean values. The same action works when switching to postgresql, but i currently have no access to the pg generated queries. Is there something wrong with the actual query? (maybe i cannot order by a tinyint) or should I file a bug report?
Read is reserve keyword in mysql <http://dev.mysql.com/doc/mysqld-version-reference/en/mysqld-version-reference-reservedwords-5-5.html> you have to add 'read' ASC in your query
In addition to [naveen's answer](https://stackoverflow.com/a/19001703/866022), you'll need to [change your single quotation marks into backticks](https://stackoverflow.com/a/261476/190597): ``` SELECT `messages`.* FROM `messages` WHERE `messages`.`user_id` = 2 ORDER BY read ASC; ``` Better yet, do not use MySQL reserved words as column names. To change the name, use `ALTER`: ``` ALTER TABLE messages CHANGE read seen TINYINT ```
MySQL Order by boolean value tinyint
[ "", "mysql", "sql", "ruby-on-rails", "postgresql", "innodb", "" ]
I have two tables. ``` Table 1 columns are ==================== (MAINID, XID, Name) ==================== (A1 1 SAP) (B2 2 BAPS) (C3 3 SWAMI) Table 2 columns are =================== (ID COL1) =================== (1 XYZ) (2 ABC) ``` Now, **I want to find which XID value is not in Table2's ID column**. In Table 1 XID is unique and also in Table 2 ID is PK.
Aln alternative solution is by using LEFT JOIN. ``` SELECT tb1.* FROM Table1 AS tb1 LEFT JOIN Table2 AS tb2 ON tb1.XID = tb2.ID WHERE tb2.ID IS NULL ```
``` select xid from table1 where xid not in (select id from table2) ```
SQL join, compare two column values, match col values
[ "", "sql", "join", "" ]
I've a problem with the following SQL query: ``` DELETE FROM table1 WHERE uid = (SELECT table1.uid from table1 INNER JOIN table2 ON table2.user = table1.uid INNER JOIN table3 ON table3.uid = table2.seminar WHERE table3.end_date < CURDATE())) ``` The error is: You can't specify target table 'table1' for update in FROM clause Does anybody have an idea how to fix that?
Try this:- ``` DELETE table1 FROM table1 INNER JOIN table2 ON table2.user = table1.uid INNER JOIN table3 ON table3.uid = table2.seminar WHERE table3.end_date < CURDATE() ```
This may not be the most elegant way, but you can select your criteria into a temp table then DELETE FROM WHERE IN #MyTempTable ``` SELECT table1.uid INTO #MyTemp from table1 INNER JOIN table2 ON table2.user = table1.uid INNER JOIN table3 ON table3.uid = table2.seminar WHERE table3.end_date < CURDATE()) DELETE FROM table1 WHERE uid IN (SELECT uid from #MyTemp) DROP TABLE #MyTemp ```
SQL error: DELETE and SELECT FROM conflict
[ "", "mysql", "sql", "inner-join", "" ]
I have a temp table and in that one of my column `total_amount` is of integer type and NOT `NULL`. While querying data, I received `NULL` values for `total_Amount` column. How ever I used following syntax to remove nulls but some how still NULL values appearing, correct me if I am wrong. ``` Create table #Temp1 ( issue varchar(100) NOT NULL, total_amount int NOT NULL ) ``` This is my query ``` Case when total_amount = 0 then 0 else isnull(total_amount, 0) end as total_amount ``` I am facing issue at my else part.
You can use the COALESCE function to automatically return null values as 0. Syntax is as shown below: ``` SELECT COALESCE(total_amount, 0) from #Temp1 ```
The coalesce() is the best solution when there are multiple columns [and]/[or] values and you want the first one. However, looking at books on-line, the query optimize converts it to a case statement. ***MSDN excerpt*** The COALESCE expression is a syntactic shortcut for the CASE expression. That is, the code COALESCE(expression1,...n) is rewritten by the query optimizer as the following CASE expression: ``` CASE WHEN (expression1 IS NOT NULL) THEN expression1 WHEN (expression2 IS NOT NULL) THEN expression2 ... ELSE expressionN END ``` **With that said, why not a simple ISNULL()? Less code = better solution?** Here is a complete code snippet. ``` -- drop the test table drop table #temp1 go -- create test table create table #temp1 ( issue varchar(100) NOT NULL, total_amount int NULL ); go -- create test data insert into #temp1 values ('No nulls here', 12), ('I am a null', NULL); go -- isnull works fine select isnull(total_amount, 0) as total_amount from #temp1 ``` **Last but not least, how are you getting null values into a NOT NULL column?** I had to change the table definition so that I could setup the test case. When I try to alter the table to NOT NULL, it fails since it does a nullability check. ``` -- this alter fails alter table #temp1 alter column total_amount int NOT NULL ```
replace NULL with Blank value or Zero in sql server
[ "", "sql", "sql-server-2008", "sql-null", "" ]
The 3 table schema: ``` create table EMP (ID char(9) not null primary key, NAME varchar(20) not null, AGE integer not null, SALARY number not null, constraint min_salary check (salary>30000)); create table DEPARTMENT (DNUMBER integer not null primary key, DNAME varchar(15) not null unique, BUDGET float not null, MANAGER char(9) not null references EMP); create table WORKS (EMP char(9) not null references EMP, DEPT integer not null references DEPARTMENT, PCT_TIME integer, constraint check_pct check (PCT_TIME between 0 and 100), constraint WORKS_PK PRIMARY KEY (EMP,DEPT)); ``` What kind of query is this: ``` select name,age from emp,department,works where id=emp and dname='Software' and dept=dnumber ``` it select from 3 table but without a join keyword?
I believe that is called a cartesian product. It is defined as tables being linked without a limiting factor, such as a join statement. Just confirmed it from here: <http://wiki.answers.com/Q/What_is_sql_cartesian_product>
``` select name,age from emp,department,works where id=emp and dname='Software' and dept=dnumber ``` ...would appear to be an implicit inner join query. The `,` between the table are effectively doing a `CROSS JOIN`, returning the Cartesian Product, and the `where id=emp and dname='Software' and dept=dnumber` is effectively filtering the Cartesian product, producing the same effect as an `INNER JOIN`. The criterion fields are not qualified with table names as they are not ambiguous. Some database engines may not create as efficient a query plan for this type of join as they would for an explicit inner join query.
What type of this sql query is this select from multiply table without join?
[ "", "sql", "" ]
I want to create summary of my table with aggragate functions such as max,min,avg The query is something like this ``` select 'avg',avg(column1) as 'avg_resp',avg(col2) as 'ncount' from table union all select 'max',max(column1),max(col2) from table union all select 'min',min(column1),min(col2) from table; ``` Can it be done in better way?? PS: I want to use it for REST API so I am trying not to transpose/elongate the final result. REST API model from my current code: ``` { avg_resp:[min,max,avg], ncount: [min,max,avg] } ```
yes it is possible ``` select avg(column1) as avg_resp, avg(col2) as ncount, max(column1) as max_col1, max(col2) as max_col2, min(column1) as min_col1, min(col2) as min_col2 from table ```
Why not: ``` select avg(column1) as 'avg_resp', avg(col2) as 'ncount', max(column1) AS Max1, max(col2) AS MAx2, min(column1) AS Min1, min(col2) AS Min2 from table ```
how to calculate multiple aggregates on multiple columns
[ "", "sql", "sqlite", "" ]
I have a situation where I want to search for a field in the where clause only if the bit variable is 1 else ignore it. ``` @Active bit = 0 select * from Foo where firstname = 'xyz' and if(@active=1) ``` then search on the Active column else ignore the filtering on the Active column. How can I have that in a simple condition instead of checking each parameter seperately and then building the where clause
Just simple logic will usually suffice: ``` select * from Foo where firstname = 'xyz' and (@Active = 0 or Active = <filter condition>) ``` For general advice about writing code for arbitrary search conditions, you could do worse than read Erland Sommarskog's [Dynamic Search Conditions in T-SQL](http://www.sommarskog.se/dyn-search.html)
How about: ``` DECLARE @Active bit if @Active = 1 BEGIN (select * from Foo where firstname = 'bar' and Active = --condition) END else BEGIN (select * from Foo where firstname = 'bar') END ``` of course, something will have to set the value for @Active somewhere between the declaration and the `if...else` statement.
Bit field in where clause
[ "", "sql", "sql-server", "" ]
I have a rather long and complex SQL query that I'm working on at the moment. What I want to do is just add `LIMIT 10` ... But every time I end it to the end of the query I'm getting errors. Any help would be amazing, the query is as follows: ``` sqlQuery = " select DATENAME(Month,i.Datecreated) + ' ' + DATENAME(day,i.Datecreated) + ' ' + DATENAME(year,i.Datecreated) AS USDateCreated, i.imageId, GalleryName,Fullpath,MediumPath,ThumbPath,ViewCounter, i.DateCreated,ItemNumber,Gender,Minutes, right(convert(varchar(3), 100 + Seconds),2) as Seconds, FramesPerSecond,WeekNumber,Filename, (round(cast(Size as Decimal(16,2))/1024,2)) as Size, FlvFilename,FlvSize,NumberOfMovies, Free,Comment, (case when sum(rating)/count(i.imageId) is null then 0 else sum(rating)/count(i.imageId) end) as ratingResult, dbo.getTagNames(i.imageid) as tagsname,'' as yourTagNames, dbo.getTagNames(i.imageid) as memberTagNames,max(weekNumber)-1 as lastWeek from images as i left join Imagerating as ir on i.imageId = ir.imageId left join tag as t on i.imageId = t.imageId where 1=1 and galleryName = 'pictures' and weekNumber = '4' group by i.imageId,GalleryName,Fullpath,MediumPath,ThumbPath, ViewCounter,i.DateCreated,ItemNumber,Gender,Minutes,Seconds, FramesPerSecond,WeekNumber,Filename,Size,FlvFilename,FlvSize, NumberOfMovies,Free,Comment order by filename " ```
T-SQL doesn't support `LIMIT`. Instead, use [`TOP`](http://technet.microsoft.com/en-us/library/ms189463.aspx) in your `SELECT`: ``` SELECT TOP 100 -- Rather than LIMIT 100 at the bottom Field1, Field2 -- etc. FROM YourTable GROUP BY Field1 ORDER BY Field2 ``` If you are using SQL Server 2012 or greater you can use [`OFFSET` and `FETCH [FIRST|NEXT]`](http://technet.microsoft.com/en-us/library/ms188385.aspx) to get `LIMIT`'s ability to page through the result set.
LIMIT doesn't work in Sql Server. It's a MySql proprietary extension to standard SQL. In Sql Server, you can use the simple `TOP n` for the first page, but that's not really a good choice either if you're trying to do pagination. Fortunately, a more recent version of the SQL standard specifies a syntax you can use for pagination, and if you're lucky enough to be on Sql Server 2012 or later you can use it. It's called [OFFSET/FETCH](http://technet.microsoft.com/en-us/library/gg699618.aspx), and it looks something like this: ``` SELECT <columns> FROM <table> ORDER BY <order> OFFSET 30 ROWS FETCH NEXT 15 ROWS ONLY; ``` That would fetch the *3rd* page if the page size if 15. Note that the ORDER BY clause is required. Otherwise, an offset has no meaning. This is *standard sql*. Not only does Sql Server support it, but so does Oracle, PostGre, and a few others, and you can export more coming over time.
SQL Server : LIMIT not working on complex query
[ "", "sql", "sql-server", "" ]
I am trying to do a like query like so ``` def self.search(search, page = 1 ) paginate :per_page => 5, :page => page, :conditions => ["name LIKE '%?%' OR postal_code like '%?%'", search, search], order => 'name' end ``` But when it is run something is adding quotes which causes the sql statement to come out like so ``` SELECT COUNT(*) FROM "schools" WHERE (name LIKE '%'havard'%' OR postal_code like '%'havard'%')): ``` So you can see my problem. I am using Rails 4 and Postgres 9 both of which I have never used so not sure if its and an activerecord thing or possibly a postgres thing. How can I set this up so I have like `'%my_search%'` in the end query?
Your placeholder is replaced by a string and you're not handling it right. Replace ``` "name LIKE '%?%' OR postal_code LIKE '%?%'", search, search ``` with ``` "name LIKE ? OR postal_code LIKE ?", "%#{search}%", "%#{search}%" ```
Instead of using the `conditions` syntax from Rails 2, use Rails 4's `where` method instead: ``` def self.search(search, page = 1 ) wildcard_search = "%#{search}%" where("name ILIKE :search OR postal_code LIKE :search", search: wildcard_search) .page(page) .per_page(5) end ``` NOTE: the above uses parameter syntax instead of ? placeholder: these both should generate the same sql. ``` def self.search(search, page = 1 ) wildcard_search = "%#{search}%" where("name ILIKE ? OR postal_code LIKE ?", wildcard_search, wildcard_search) .page(page) .per_page(5) end ``` NOTE: using `ILIKE` for the name - postgres case insensitive version of LIKE
Rails 4 LIKE query - ActiveRecord adds quotes
[ "", "sql", "ruby-on-rails", "ruby", "postgresql", "activerecord", "" ]
I have two tables `AgentFare` and `SalesHeader` having `hdrGuid` and `hdrGuid` , `DocumentNumber` as columns respectively. I have to perform a query based on Document number, however I am not able to figure out which of the two following queries are most suitable. ``` SELECT agf.Fare , agf.Tax . agf.Commission FROM AgentFare as agf INNER JOIN SalesHeader as h ON agf.hdrGuid = h.hdrGuid AND h.DocumentNumber = 'XYZ' ``` OR ``` SELECT agf.Fare , agf.Tax . agf.Commission FROM AgentFare as agf INNER JOIN SalesHeader as h ON agf.hdrGuid = h.hdrGuid WHERE h.DocumentNumber = 'XYZ' ``` Which of the 2 are more appropriate ?
The "ON" statement should be used to define on what columns 2 tables should be joined. The "WHERE" statement is intended to filter your results. The second option is the more appropriate to use.
First method is more appropriate.Since it will take less time as compared to second query.
SQL statement structuring
[ "", "sql", "sql-server", "" ]
I have windows server 2008 r2 with microsoft sql server installed. In my application, I am currently designing a tool for my users, that is querying database to see, if user has any notifications. Since my users can access the application multiple times in a short timespan, i was thinking about putting some kind of a cache on my query logic. But then I thought, that my ms sql server probably does that already for me. Am I right? Or do I need to configure something to make it happen? If it does, then for how long does it keep the cache up?
It's safe to assume that MSSQL will has the caching worked out pretty well =) Don't bother trying to build anything yourself on top of it, simply make sure that the method you use to query for changes is efficient (eg. don't query on non-indexed columns). PS: wouldn't caching locally defeat the whole purpose of checking for changes on the database?
Internally the database does all sorts of things, including 'caching', but at all times it works *incredibly* hard to make sure your users see up-to-date data. So it has to do some work each time your application makes a request. If you want to reduce the workload by keeping static data in your application then you have to implement it yourself. The later versions of the .net framework have caching features built in so you should take a look at those (building your own caching can get very complex).
Database caching
[ "", "sql", "sql-server", "sql-server-2008", "caching", "" ]
Let's say I'm creating an address book in which the main table contains the basic contact information and a phone number sub table - ``` Contact =============== Id [PK] Name PhoneNumber =============== Id [PK] Contact_Id [FK] Number ``` So, a Contact record may have zero or more related records in the PhoneNumber table. There is no constraint on uniqueness of any column other than the primary keys. In fact, this must be true because: 1. Two contacts having different names may share a phone number, and 2. Two contacts may have the same name but different phone numbers. I want to import a large dataset which may contain duplicate records into my database and then filter out the duplicates using SQL. The rules for identifying duplicate records are simple ... they must share the same name and the same number of phone records having the same content. Of course, this works quite effectively for selecting duplicates from the Contact table but doesn't help me to detect actual duplicates given my rules: ``` SELECT * FROM Contact WHERE EXISTS (SELECT 'x' FROM Contact t2 WHERE t2.Name = Contact.Name AND t2.Id > Contact.Id); ``` It seems as if what I want is a logical extension to what I already have, but I must be overlooking it. Any help? Thanks!
The author stated the requirement of "two people being the same person" as: 1. Having the same name and 2. Having the same number of phone numbers **and** all of which are the same. So the problem is a bit more complex than it seems (or maybe I just overthought it). Sample data and (an ugly one, I know, but the general idea is there) a sample query which I tested on below test data which seems to be working correctly (I'm using Oracle 11g R2): ``` CREATE TABLE contact ( id NUMBER PRIMARY KEY, name VARCHAR2(40)) ; CREATE TABLE phone_number ( id NUMBER PRIMARY KEY, contact_id REFERENCES contact (id), phone VARCHAR2(10) ); INSERT INTO contact (id, name) VALUES (1, 'John'); INSERT INTO contact (id, name) VALUES (2, 'John'); INSERT INTO contact (id, name) VALUES (3, 'Peter'); INSERT INTO contact (id, name) VALUES (4, 'Peter'); INSERT INTO contact (id, name) VALUES (5, 'Mike'); INSERT INTO contact (id, name) VALUES (6, 'Mike'); INSERT INTO contact (id, name) VALUES (7, 'Mike'); INSERT INTO phone_number (id, contact_id, phone) VALUES (1, 1, '123'); -- John having number 123 INSERT INTO phone_number (id, contact_id, phone) VALUES (2, 1, '456'); -- John having number 456 INSERT INTO phone_number (id, contact_id, phone) VALUES (3, 2, '123'); -- John the second having number 123 INSERT INTO phone_number (id, contact_id, phone) VALUES (4, 2, '456'); -- John the second having number 456 INSERT INTO phone_number (id, contact_id, phone) VALUES (5, 3, '123'); -- Peter having number 123 INSERT INTO phone_number (id, contact_id, phone) VALUES (6, 3, '456'); -- Peter having number 123 INSERT INTO phone_number (id, contact_id, phone) VALUES (7, 3, '789'); -- Peter having number 123 INSERT INTO phone_number (id, contact_id, phone) VALUES (8, 4, '456'); -- Peter the second having number 456 INSERT INTO phone_number (id, contact_id, phone) VALUES (9, 5, '123'); -- Mike having number 456 INSERT INTO phone_number (id, contact_id, phone) VALUES (10, 5, '456'); -- Mike having number 456 INSERT INTO phone_number (id, contact_id, phone) VALUES (11, 6, '123'); -- Mike the second having number 456 INSERT INTO phone_number (id, contact_id, phone) VALUES (12, 6, '789'); -- Mike the second having number 456 -- Mike the third having no number COMMIT; -- does not meet the requirements described in the question - will return Peter when it should not SELECT DISTINCT c.name FROM contact c JOIN phone_number pn ON (pn.contact_id = c.id) GROUP BY name, phone_number HAVING COUNT(c.id) > 1 ; -- returns correct results for provided test data -- take all people that have a namesake in contact table and -- take all this person's phone numbers that this person's namesake also has -- finally (outer query) check that the number of both persons' phone numbers is the same and -- the number of the same phone numbers is equal to the number of (either) person's phone numbers SELECT c1_id, name FROM ( SELECT c1.id AS c1_id, c1.name, c2.id AS c2_id, COUNT(1) AS cnt FROM contact c1 JOIN contact c2 ON (c2.id != c1.id AND c2.name = c1.name) JOIN phone_number pn ON (pn.contact_id = c1.id) WHERE EXISTS (SELECT 1 FROM phone_number WHERE contact_id = c2.id AND phone = pn.phone) GROUP BY c1.id, c1.name, c2.id ) WHERE cnt = (SELECT COUNT(1) FROM phone_number WHERE contact_id = c1_id) AND (SELECT COUNT(1) FROM phone_number WHERE contact_id = c1_id) = (SELECT COUNT(1) FROM phone_number WHERE contact_id = c2_id) ; -- cleanup DROP TABLE phone_number; DROP TABLE contact; ``` Check at SQL Fiddle: <http://www.sqlfiddle.com/#!4/36cdf/1> **Edited** Answer to author's comment: Of course I didn't take that into account... here's a revised solution: ``` -- new test data INSERT INTO contact (id, name) VALUES (8, 'Jane'); INSERT INTO contact (id, name) VALUES (9, 'Jane'); SELECT c1_id, name FROM ( SELECT c1.id AS c1_id, c1.name, c2.id AS c2_id, COUNT(1) AS cnt FROM contact c1 JOIN contact c2 ON (c2.id != c1.id AND c2.name = c1.name) LEFT JOIN phone_number pn ON (pn.contact_id = c1.id) WHERE pn.contact_id IS NULL OR EXISTS (SELECT 1 FROM phone_number WHERE contact_id = c2.id AND phone = pn.phone) GROUP BY c1.id, c1.name, c2.id ) WHERE (SELECT COUNT(1) FROM phone_number WHERE contact_id = c1_id) IN (0, cnt) AND (SELECT COUNT(1) FROM phone_number WHERE contact_id = c1_id) = (SELECT COUNT(1) FROM phone_number WHERE contact_id = c2_id) ; ``` We allow a situation when there are no phone numbers (LEFT JOIN) and in outer query we now compare the number of person's phone numbers - it must either be equal to 0, or the number returned from the inner query.
In my question, I created a greatly simplified schema that reflects the real-world problem I'm solving. Przemyslaw's answer is indeed a correct one and did what I was asking both with the sample schema and, when extended, with the real one. But, after doing some experiments with the real schema and a larger (~10k records) dataset, I found that performance was an issue. I don't claim to be an index guru, but I wasn't able to find a better combination of indices than what was already in the schema. So, I came up with an alternate solution which fills the same requirements but executes in a small fraction (< 10%) of the time, at least using SQLite3 - my production engine. In hopes that it may assist someone else, I'll offer it as an alternative answer to my question. ``` DROP TABLE IF EXISTS Contact; DROP TABLE IF EXISTS PhoneNumber; CREATE TABLE Contact ( Id INTEGER PRIMARY KEY, Name TEXT ); CREATE TABLE PhoneNumber ( Id INTEGER PRIMARY KEY, Contact_Id INTEGER REFERENCES Contact (Id) ON UPDATE CASCADE ON DELETE CASCADE, Number TEXT ); INSERT INTO Contact (Id, Name) VALUES (1, 'John Smith'), (2, 'John Smith'), (3, 'John Smith'), (4, 'Jane Smith'), (5, 'Bob Smith'), (6, 'Bob Smith'); INSERT INTO PhoneNumber (Id, Contact_Id, Number) VALUES (1, 1, '555-1212'), (2, 1, '222-1515'), (3, 2, '222-1515'), (4, 2, '555-1212'), (5, 3, '111-2525'), (6, 4, '111-2525'); COMMIT; SELECT * FROM Contact c1 WHERE EXISTS ( SELECT 1 FROM Contact c2 WHERE c2.Id > c1.Id AND c2.Name = c1.Name AND (SELECT COUNT(*) FROM PhoneNumber WHERE Contact_Id = c2.Id) = (SELECT COUNT(*) FROM PhoneNumber WHERE Contact_Id = c1.Id) AND ( SELECT COUNT(*) FROM PhoneNumber p1 WHERE p1.Contact_Id = c2.Id AND EXISTS ( SELECT 1 FROM PhoneNumber p2 WHERE p2.Contact_Id = c1.Id AND p2.Number = p1.Number ) ) = (SELECT COUNT(*) FROM PhoneNumber WHERE Contact_Id = c1.Id) ) ; ``` The results are as expected: ``` Id Name ====== ============= 1 John Smith 5 Bob Smith ``` Other engines are bound to have differing performance which may be quite acceptable. This solution seems to work quite well with SQLite for this schema.
How to detect duplicate records with sub table records
[ "", "sql", "" ]
I'm trying to divide the numeric results from 2 pretty different queries. The end result should be Query 1 DIVIDED BY Query 2 Query 1 = ``` SELECT COUNT(DISTINCT(table1.ID)) AS count_1 FROM table1 INNER JOIN op INNER JOIN Org ON table1.EID = op.id AND Op.OrgID = Org.ID WHERE table1.TitleID = 123 AND op.BrandID = 1 AND op.Start <= NOW() AND op.End >= NOW(); ``` Query 2 = ``` SELECT COUNT(DISTINCT user.id) AS count_2 FROM table1 INNER JOIN user INNER JOIN ur ON table1.EID = user.id AND ur.userID = user.id WHERE user.BrandID = 1 AND table1.TitleID = 123 AND ur.role = 0 AND user.Inactive = 0; ```
Sure! You can use subselects to achieve this, though it will be pretty verbose! ``` SELECT ( SELECT COUNT(DISTINCT(table1.ID)) AS count_1 FROM table1 INNER JOIN op INNER JOIN Org ON table1.EID = op.id AND Op.OrgID = Org.ID WHERE table1.TitleID = 123 AND op.BrandID = 1 AND op.Start <= NOW() AND op.End >= NOW() ) / ( SELECT COUNT(DISTINCT user.id) AS count_2 FROM table1 INNER JOIN user INNER JOIN ur ON table1.EID = user.id AND ur.userID = user.id WHERE user.BrandID = 1 AND table1.TitleID = 123 AND ur.role = 0 AND user.Inactive = 0 ); ``` Format however it feels the least ugly to you.
Use sub queries like this: ``` SELECT Q1.count_1 / Q2.Count_2 FROM ( ... Query1 ...) AS Q1 JOIN ( ... Query2 ...) AS Q2 ON 1=1 ``` Replace Query1 and Query2 as your code.
Can you Divide 2 completely different query results into 1 result
[ "", "mysql", "sql", "inner-join", "division", "" ]
I have to create a view from table which must contain only the newest elements from a set of duplicates. It is an easy task, but the problem is that the table is not normalized. Each record is identified by three string columns: Transponder, Country and System. I cannot introduce any change in the database structure. The newest elements should be returned basing on Time column. So having such data: ``` Transponder | Country | System | Time -------------------------------------------- AAA | BBB | CCC | 2012-01-01 AAA | BBB | CCC | 2010-01-01 AAA | BBB | CCC | 2013-01-01 AAA | BAB | DDD | 2011-01-01 AAA | BAB | DDD | 2010-01-01 ``` I want my view to return only two rows: ``` AAA | BBB | CCC | 2013-01-01 AAA | BAB | DDD | 2011-01-01 ``` How can I achieve this? I use Postgresql 9.2.
``` select distinct on (transponder, country, system) * from some_table order by transponder, country, system, time desc ```
select transponder, country, system max(time) from TABLE group by transponder, country, system; This will give you the distinct record aggregated across the grouped records (transponder, country, system for the aggregated column (in this case time).
SQL View returning filtered data from unnormalized table
[ "", "sql", "postgresql", "" ]
I have two table named country and city that are relational table. Table Structure : ``` Country : CountryID CountryName City : CityID CountryID CityName ``` In which both table contain some data. i want result which country have less then 3 city.
``` select c1.CountryName from country c1 left join city c2 on c1.CountryID=c2.CountryID group by c2.CountryID,c1.CountryName,c1.CountryID having count(*)<3 ```
Try this ``` select c.CountryID from country c left join city ci on c.CountryID=ci.CountryID group by c.CountryID having COUNT(c.CountryID)<3 ```
Query for getting country name which has less then 3 city
[ "", "sql", "sql-server", "t-sql", "" ]
I am storing key value pairs in my database to help with normalization. What I want to do is iterate over all of my matching key/values based on some identifier, such as 'Apple' or 'Banana' and generate a view that would be equivalent to the "full" table version of the key/value pairs. Please see the fiddle below for more information: [fiddle](http://sqlfiddle.com/#!4/fc065/1) I presume I will have to use a cursor of sorts, but am not sure about the "best" approach for this type of problem.
Assuming your IDs uniquely identified a desired row in your view (they don't in your Fiddle example) you could create a view like this: ``` create view details_view as select id, max(case when key='Color' then value end) as color, max(case when key='Location' then value end) as location, max(case when key='Price' then value end) as price, from details group by id; ``` I once wrote a package to help generate queries like this one - see [my blog](http://tonyandrews.blogspot.co.uk/search/label/pivot). (I also ranted against this kind of data model [here](http://tonyandrews.blogspot.co.uk/2004/10/otlt-and-eav-two-big-design-mistakes.html)!)
Your fiddle makes it evident that your data design is flawed. In addition to using the dreaded EAV model (as pointed out by Tony Andrews), you also have a problem that there is no primary key in your table. It is impossible to know the color, location and price of apples. With the current setup it is impossible to know whether the apples in Texas are red or green. The best approach for your fiddle example is to change the schema like this: ``` table fruit_details ( id INTEGER, /* Generate using sequence */ fruit_type varchar2(50), location varchar2(100), price number(8.2) ); ```
Oracle iterate over key value pairs to generate view
[ "", "sql", "oracle", "report", "" ]
I have a Stored Proc which is using for Search Applicants is written as below: ``` /* AUTHOR : CREATION DATE : NOTES : PURPOSE : MODIFIED BY : MODIFICATION DATE : */ ALTER PROCEDURE USP_GET_ApplicantByFilter ( @ApplicantName VARCHAR(100)='Ram', @AgeFrom INT=0, @AgeTo INT=0, @Experience INT=0, @ExperienceMonths INT=0, @City VARCHAR(100)='', @QualificationID INT=0, @PositionID INT=0, @ApplyDateFrom DATETIME='2010-06-29 00:00:00.000', @ApplyDateTo DATETIME=NULL, @SortColumn Varchar(128)='ApplicantID', @SortDirection Varchar(56)='desc', @Page int=1, @RecsPerPage int =10 ) AS DECLARE @SQL VARCHAR(MAX) DECLARE @DSQL VARCHAR(MAX) DECLARE @whereCondition VARCHAR(1024) DECLARE @FirstRec int, @LastRec int SET @FirstRec = (@Page - 1) * @RecsPerPage SET @LastRec = (@Page * @RecsPerPage + 1) Declare @SectionCount int; Set NoCount On Begin SET @SQL='Select ROW_NUMBER() over( order by '+@SortColumn + ' ' +@SortDirection +') rownum, tblApplicants.ApplicantID, tblApplicants.ApplicantName, tblApplicants.FatherName, tblApplicants.DateOfBirth, tblApplicants.QualificationID, tblApplicants.EMailID, tblApplicants.Address, tblApplicants.City, tblApplicants.State, tblApplicants.Phone, tblApplicants.ApplyDate, tblApplicants.PositionID, tblApplicants.isActive, tblPositionMaster.PositionName FROM tblApplicants INNER JOIN tblPositionMaster ON tblApplicants.PositionID = tblPositionMaster.PositionID WHERE 1=1 AND tblApplicants.isActive=1 ' if @ApplicantName!='' begin SET @sql +=' AND tblApplicants.ApplicantName like ''%'+ @ApplicantName +'%''' end if @AgeFrom!=0 begin SET @SQL+=' AND DATEDIFF(YEAR,tblApplicants.DateOfBirth, GETDATE()) >= '+@AgeFrom end if @AgeTo!=0 begin SET @SQL+=' AND DATEDIFF(YEAR,tblApplicants.DateOfBirth, GETDATE()) <= '+@AgeTo end if @ApplyDateFrom IS NOT NULL begin SET @SQL+= ' AND CONVERT(DATETIME,tblApplicants.ApplyDate,101) ='+ CONVERT(DATETIME,@ApplyDateFrom,101) end SET @DSQL ='SELECT * from (' + @SQL +') AS tbl' print @DSQL DECLARE @TEMPResult TABLE(RowNum INT, ApplicantID int, ApplicantName varchar(100), FatherName varchar(200), DateOfBirth DATETIME, QualificationID int, EMailID varchar(200), Address varchar(200), City varchar(200), State varchar(200), Phone varchar(200), ApplyDate DATETIME, PositionID int, isActive int, PositionName varchar(200) ) INSERT INTO @TEMPResult EXEC(@DSQL) SELECT (Select Count(*) from @TEMPResult) as Count, * FROM @TEMPResult WHERE RowNum > @FirstRec AND RowNum < @LastRec RETURN END ``` i want to apply "**=>**" and "**<=**" operators on ApplyDate. every time i got "**\**Conversion failed when converting date and/or time from character string.*** \*" please help me how can apply these operators on ApplDate
``` SET @SQL+= ' AND CONVERT(DATETIME,tblApplicants.ApplyDate,101) ='+ CONVERT(DATETIME, @ApplyDateFrom, 101) ``` change it to: ``` SET @SQL+= ' AND CONVERT(DATETIME, tblApplicants.ApplyDate, 101) = CONVERT(DATETIME, ''' + cast(@ApplyDateFrom as nvarchar) + ''', 101)' ```
Replace this line ``` AND CONVERT(DATETIME,tblApplicants.ApplyDate,101) ='+ CONVERT(DATETIME,@ApplyDateFrom,101) ``` Updated ``` AND DATEDIFF(DD,tblApplicants.ApplyDate, CAST(''' + CAST(@ApplyDateFrom as varchar) + ''' as datetime)) = 0 ``` for more look this query ``` DECLARE @ApplyDateFrom DATETIME='2010-06-29 00:00:00.000' DECLARE @QUERY varchar(max) SET @QUERY = 'SELECT DATEDIFF(DD,GETDATE(), CAST(''' + CAST(@ApplyDateFrom as varchar) + ''' as datetime))' PRINT @QUERY ```
How to use DATETime Column in dynamic query in sql server?
[ "", "sql", "sql-server", "datetime", "stored-procedures", "dynamic", "" ]
I have the following query to basically find all duplicates in my username column: ``` SELECT `username` FROM `instagram_user` GROUP BY `username` HAVING COUNT( * ) >1 ``` How do I remove all the duplicates, such that it will only leave me with one unique username in the table? I don't care which entity it is that is persisted or removed, as long as there's one unique username in the table.
If you don't care what record to choose then just add a unique constraint while using `IGNORE` ``` ALTER IGNORE TABLE instagram_user ADD UNIQUE (username); ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/625ee0/1)** demo and MySQL will do the job for you. You want to have that unique constraint anyway in order to keep your table out of duplicates in the future. or alternatively you can do ``` DELETE t FROM instagram_user t JOIN ( SELECT username, MAX(id) id FROM instagram_user GROUP BY username HAVING COUNT(*) > 1 ) q ON t.username = q.username AND t.id <> q.id ``` This one will leave only a row with max id for rows that have duplicate usernames. Here is **[SQLFiddle](http://sqlfiddle.com/#!2/88149/1)** demo
Not sure this is for SQL server, you can try a similar code in mysql. ``` ;With CteUsers AS( SELECT *,ROW_NUMBER() OVER (PARTITION BY username Order by username) AS ROWID FROM( SELECT PkId, `username` FROM `instagram_user` )tbltemp) SELECT * FROM CteUsers; ``` This will result as follow ``` PkId username RowId 1 xx 1 2 xx 2 .... ``` then delete where RowId > 1 ;With CteUsers AS( ``` SELECT *,ROW_NUMBER() OVER (PARTITION BY username Order by username) AS ROWID FROM( SELECT PkId, `username` FROM `instagram_user` )tbltemp) DELETE instagram_user WHERE PkId iN (SELECT PkId FROM CteUsers WHERE ROWID > 1); ```
remove duplicates in a column
[ "", "mysql", "sql", "" ]
I have a SQL query to check for an overflow in a DateTime value (above '9999-12-31'). But still it throws some overflow exception while running. What am I doing wrong? **Code** ``` SELECT CASE WHEN Dateadd(d,s.ClaimDelay,si.IssueDate) NOT IN (9999-12-31) THEN si.issuedate ELSE Dateadd(d,s.ClaimDelay,si.IssueDate) END FROM SubscriptionIssues si LEFT OUTER JOIN Subscriptions s ON s.Id=si.SubId ``` **Exception** Adding a value to a 'datetime' column caused an overflow.
Your case statement is apparently either adding so many days to `si.IssueDate` that the result is greater than **`Dec 31, 9999`** Edit your code to something like this:- ``` SELECT CASE WHEN datediff(d,si.IssueDate,'9999-12-31')<s.ClaimDelay THEN si.issuedate ELSE Dateadd(d,s.ClaimDelay,si.IssueDate) END FROM SubscriptionIssues si LEFT OUTER JOIN Subscriptions s ON s.Id=si.SubId ```
Instead of checking if adding to the date is an overflow, you can check if subtracting from the max date is smaller than the date you have. ``` declare @date date set @date = '9995-01-01' select case when dateadd(d, -1000, '9999-12-31') < @date then @date else dateadd(d, 1000, @date) end /* returns 9997-09-27 */ set @date = '9999-01-01' select case when dateadd(d, -1000, '9999-12-31') < @date then @date else dateadd(d, 1000, @date) end /* returns 9999-01-01 */ ```
Checking for an overflow datetime value
[ "", "sql", "sql-server", "" ]
I have the following that selects from a log and groups down to minute (excluding seconds and milisec): ``` SELECT DATEPART(YEAR, [Date]) AS YEAR, DATEPART(MONTH, [Date]) AS MONTH, DATEPART(DAY, [Date]) AS DAY, DATEPART(HOUR, [Date]) AS HOUR, DATEPART(MINUTE, [Date]) AS MIN, COUNT(*) AS COUNT FROM [database].[dbo].[errorlog] GROUP BY DATEPART(YEAR, [Date]), DATEPART(MONTH, [Date]), DATEPART(DAY, [Date]), DATEPART(HOUR, [Date]), DATEPART(MINUTE, [Date]) ORDER BY DATEPART(YEAR, [Date]) DESC, DATEPART(MONTH, [Date]) DESC, DATEPART(DAY, [Date]) DESC, DATEPART(HOUR, [Date]) DESC, DATEPART(MINUTE, [Date]) DESC; ``` But as you can see thats a lot of fuzz just for getting a count, so I wonder if there is a better way to group it so I get grouped down to minutes in respect to year, month, day and hour?
This should would work: ``` select CAST([Date] AS smalldatetime) as time_stamp, count(*) as count FROM [database].[dbo].[errorlog] group by CAST([Date] AS smalldatetime) order by CAST([Date] AS smalldatetime) desc; ``` Update after comments on this answer: ``` select dateadd(second,-datepart(ss,[Date]),[Date]) as time_stamp, count(*) as count FROM [database].[dbo].[errorlog] group by dateadd(second,-datepart(ss,[Date]),[Date]) order by dateadd(second,-datepart(ss,[Date]),[Date]) desc ; ``` The first solution rounds up the timestamp to the nearest minute. I realised that this is not exactly what the OP wanted. So, the second solution just substracts the `seconds` part from the timestamp and leaves the timestamp with seconds as `zero`(Assuming [Date] does not have fractional seconds)
``` DATEADD(minute,DATEDIFF(minute,'20010101',[Date]),'20010101') ``` Should round all `Date` column values down to the nearest minute. So: ``` SELECT DATEADD(minute,DATEDIFF(minute,'20010101',[Date]),'20010101'), COUNT(*) AS COUNT FROM [database].[dbo].[errorlog] GROUP BY DATEADD(minute,DATEDIFF(minute,'20010101',[Date]),'20010101') ORDER BY DATEADD(minute,DATEDIFF(minute,'20010101',[Date]),'20010101') DESC; ``` (You could move this expression into a subquery if you want to further reduce the repetition)
Is there a better way to group a log on minutes?
[ "", "sql", "t-sql", "" ]
I'm trying to create a constraint that does not allow dates in future years. I have this: ``` ALTER TABLE PACIENTE ADD CONSTRAINT ck_FechaNacimiento CHECK (FechaNacimiento<=current_date); ``` But i'm getting `error 02436`.
You cannot create a non-deterministic constraint. So you cannot create a constraint that references a function like `current_date` or `sysdate` that returns a different value every time you call it. If you want to enforce this sort of thing, you'd need to create a trigger on the table that throws an error if the business rule is violated, i.e. ``` CREATE OR REPLACE TRIGGER trg_paciente BEFORE INSERT OR UPDATE ON paciente FOR EACH ROW BEGIN IF( :new.FechaNacimiento > current_date ) THEN RAISE_APPLICATION_ERROR( -20001, 'FechaNacimiento<=current_date must be in the past' ); END IF; END; ```
I'd tried again with this and didn't show error, thanks by the way: ALTER TABLE EXAMENPACIENTE ADD CONSTRAINT ExamenPaciente\_FechaExamen\_c1 CHECK (FechaExamen<='30-SEP-2013');
Future Dates SQL Developer
[ "", "sql", "oracle", "" ]
I want to calculate the number of years between two dates. eg :- `Select to_date('30-OCT-2013') - TO_date('30-SEP-2014') FROM DUAL;` This would result to 335 days. I want to show this in years, which will be `.97` years.
Simply do this(divide by **365.242199**): ``` Select (to_date('30-SEPT-2014') - TO_date('30-OCT-2013'))/365.242199 FROM DUAL; ``` **1 YEAR = 365.242199 days** OR Try something like this using [MONTHS\_BETWEEN](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions089.htm):- ``` select floor(months_between(date '2014-10-10', date '2013-10-10') /12) from dual; ``` or you may also try this:- ``` SELECT EXTRACT(YEAR FROM date1) - EXTRACT(YEAR FROM date2) FROM DUAL; ``` **# On a side note:-** > ***335/365.242199 = 0.917199603 and not .97***
I don't know how you figure that's .97 years. Here's what I get: ``` SQL> SELECT ( TO_date('30-SEP-2014') - to_date('30-OCT-2013')) / (ADD_MONTHS(DATE '2013-10-30',12) - DATE '2013-10-30') "Year Fraction" FROM DUAL; Year Fraction ------------- 0.91780821917 ``` You're going to have to pick a date to base your year calculation on. This is one way to do it. I chose to make a year be the number of days between 10/30/2013 and 10/30/2014. You could also make it a year between 9/30/2013 and 9/30/2014. As an aside, if you're only interested in 2 decimal places, 365 is pretty much as good as 366. UPDATE: Used ADD\_MONTHS in calculating the denominator. That way you can use the same date for the entire calculation of the number of days in a year.
Calculate year from date difference in Oracle
[ "", "sql", "oracle", "" ]
I update a Table with multiple fields. Now one of the fields may only be updated if another field has a defined value, e.g.: ``` id | name | image | update -------------------------------------------------- 1 | john | myimage.jpg | 0 2 | ben | yourimage.gif | 1 -------------------------------------------------- ``` Now i walk through all rows and update all fields but the image should only be update if the "update"-flag is set to 1. If its 0 the existing value should not be overwritten. *Now i tried this:* ``` ... `image` = IF(update = 1, VALUES(`image`),`image`) ... ``` but its obviously not working because it overwrites the image in every case.
If you only want to update the image column Ofer's answer is surely the best. If you'd like to pack the image update into a bigger query, the `IF()` works as follows: ``` IF(expression, return this if expression true, return this if expression false) ``` in your case: ``` UPDATE table t1 SET t1.image = IF(t1.update = 1, t1.image, 'new image') ```
``` update table set image = new_value where update = 1 and id = ?// if you want spacific row, if not ignore this line ```
MySQL update a row but a single field
[ "", "mysql", "sql", "cakephp", "" ]
I need help with a query. The query returns a column of all the views in the database. My ultimate goal is to have the whole result set be one column containing all the views in the database, and the other column containing how many records/rows are present in each corresponding table. This: ``` SELECT DISTINCT OWNER, OBJECT_NAME FROM DBA_OBJECTS WHERE OBJECT_TYPE = 'VIEW' AND OWNER = 'ADMIN' ``` returns the first column however I can't seem to find a way to combine it with : ``` select count(*) from view_X ``` to get the second column of the result set. Any help would be appreciated. Thanks
With some XML magic, this can be done with a single statement: ``` select object_name as view_name, to_number(extractvalue(xmltype(dbms_xmlgen.getxml('select count(*) c from "'||owner||'"."'||object_name||'"')),'/ROWSET/ROW/C')) as row_count from dba_objects where object_type = 'VIEW' and owner = 'ADMIN' order by 1; ```
This is a toughie. You can't join to the `select count(*) from view_X` or anything like that using straight SQL, so the best thing I can think of is a function that takes a view name and returns its count: ``` CREATE OR REPLACE FUNCTION ViewRowCount(viewName VARCHAR2) RETURN NUMBER AS rowCount NUMBER := 0; BEGIN EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || viewName INTO rowCount; RETURN rowCount; END; / ``` Once the function is in place you can call it from your query: ``` SELECT DISTINCT OWNER, OBJECT_NAME, ViewRowCount(OBJECT_NAME) FROM DBA_OBJECTS WHERE OBJECT_TYPE = 'VIEW' AND OWNER = 'ADMIN'; ``` BTW, I don't think you need the `DISTINCT` for this query, but I don't have DBA access today so I can't be sure. The record counting will be slow enough as it is, so if there are duplicates before filtering with `DISTINCT` there will be a count for every duplicate row, making it even slower. --- Also take a look at Rachcha's solution, which doesn't need to create a new object (the function) like mine does. If you'll be calling from a front-end you'll need to use something like my answer, but if you'll be calling from SQL\*Plus Rachcha's will work very well.
Oracle - Performing operation on each row of result set
[ "", "sql", "oracle", "ssrs-2008", "" ]
I have a SELECT statement being calculated from a CASE WHEN THEN state (or could use multiple IF statements) aliased as 'Length', and I need to correctly GROUP the results together. The SELECT seems to be working, but the group groups them wrong. Here is my statement: ``` SELECT CASE WHEN DATEDIFF(o.EndDate, o.StartDate) < 30 THEN '<1 Month' WHEN DATEDIFF(o.EndDate, o.StartDate) < 90 THEN '1 - 2 Months' WHEN DATEDIFF(o.EndDate, o.StartDate) < 210 THEN '3 - 4 Months' ELSE '>4 Months' END AS 'Length', COUNT(DISTINCT(person.ID)) AS 'COUNT' FROM person INNER JOIN opportunity AS o INNER JOIN Organization AS org ON person.EntityID = o.id AND O.OrganizationID = Org.ID WHERE person.TitleID = 2 AND o.bID = 1 GROUP BY 'Length' ORDER BY 'Length' ASC; ``` This groups all results into '3 - 4 Months' which isn't right..
You need to use the whole `CASE` statement in the `GROUP BY` clause if you don't wrapped it in a subquery. ``` SELECT CASE WHEN DATEDIFF(o.EndDate, o.StartDate) < 30 THEN '<1 Month' WHEN DATEDIFF(o.EndDate, o.StartDate) < 90 THEN '1 - 2 Months' WHEN DATEDIFF(o.EndDate, o.StartDate) < 210 THEN '3 - 4 Months' ELSE '>4 Months' END AS `Length`, COUNT(DISTINCT(person.ID)) AS `COUNT` FROM person INNER JOIN opportunity AS o ON person.EntityID = o.id INNER JOIN Organization AS org ON o.OrganizationID = Org.ID WHERE person.TitleID = 2 AND o.bID = 1 GROUP BY CASE WHEN DATEDIFF(o.EndDate, o.StartDate) < 30 THEN '<1 Month' WHEN DATEDIFF(o.EndDate, o.StartDate) < 90 THEN '1 - 2 Months' WHEN DATEDIFF(o.EndDate, o.StartDate) < 210 THEN '3 - 4 Months' ELSE '>4 Months' END ORDER BY Length ASC; ``` Remove also the single quotes around the column name in the `ORDER BY` clause.
I was struggling with exactly the same problem and here is the solution I came up with: ``` SELECT CASE WHEN DATEDIFF(o.EndDate, o.StartDate) < 30 THEN '<1 Month' WHEN DATEDIFF(o.EndDate, o.StartDate) < 90 THEN '1 - 2 Months' WHEN DATEDIFF(o.EndDate, o.StartDate) < 210 THEN '3 - 4 Months' ELSE '>4 Months' END AS `Length`, COUNT(DISTINCT(person.ID)) AS `COUNT` FROM person INNER JOIN opportunity AS o INNER JOIN Organization AS org ON person.EntityID = o.id AND O.OrganizationID = Org.ID WHERE person.TitleID = 2 AND o.bID = 1 GROUP BY `Length` ORDER BY `Length` ASC; ```
Can you GROUP BY with a CASE WHEN THEN alias name?
[ "", "mysql", "sql", "group-by", "case", "" ]
I have a statement that sums entries for every quarter of the year. ``` BEGIN SET @mth = DATEPART(MONTH, GETDATE()) IF @mth BETWEEN '1' AND '3' -- SELECT-Statement ELSE IF @mth BETWEEN '4' AND '6' -- SELECT-Statement ELSE IF @mth BETWEEN '7' AND '9' -- SELECT-Statement ELSE IF @mth BETWEEN '10' AND '12' -- SELECT-Statement END ``` Till yesterday the Select-Statement works fine and sums what i want, today it doenst work anymore. I tired the select statement separately and it works and gives me the correct result. I also tried to change "ELSE IF @mth BETWEEN '10' AND '12'" to "ELSE" in the line of '10' AND '12', no help. The debugger doesnt even stop if i set a breakpoint in front of the last select statement. Anyone an idea where the problem is?
I guess that you probably Declared a `@mth` variable as `varchar` that can cause the issue. If you declared it as `int`, you will get your expected result. ``` BEGIN DECLARE @mth int SET @mth = DATEPART(MONTH, GETDATE()) IF @mth BETWEEN '1' AND '3' print '1' -- SELECT-Statement ELSE IF @mth BETWEEN '4' AND '6' print '2' -- SELECT-Statement ELSE IF @mth BETWEEN '7' AND '9' print '3' -- SELECT-Statement ELSE IF @mth BETWEEN '10' AND '12' print '4' -- SELECT-Statement END ```
This is how you could have solved it: ``` DECLARE @from datetime SET @from = dateadd(month, datediff(month, 0, current_timestamp)/3*3, 0) -- SELECT-Statement.... -- WHERE datefield >= @from -- AND datefield < dateadd(month, 3, @from) ```
IF/ELSE-IF depending on quarter doesnt work for the last quarter
[ "", "sql", "sql-server", "t-sql", "" ]
I'm trying to add an IF clause inside an update statement. Thought this was easy, but seems it isn't. This is the way it is. It's inside a stored procedure. ``` FETCH cur1 INTO procId, procType, procVals, procLen, procUpdated, procPrivate, procRegional; IF done THEN LEAVE the_loop; END IF; UPDATE scores t1 JOIN scores t2 ON FIND_IN_SET(t1.id, t2.vals) SET t1.private = t1.private+1, IF procType = 3 THEN // Problem lies here t1.regional = t1.regional+1; ELSE IF procType = 4 THEN t1.otherCol = t1.otherCol+1; END IF; WHERE t2.id = procId; ``` I'm stuck with the `IF` in there. Apart from the first `SET`, I also need to `Update` another column with the `IF`.
``` UPDATE scores t1 JOIN scores t2 ON FIND_IN_SET(t1.id, t2.vals) SET t1.private = t1.private+1 t1.regional = IF (procType = 3, t1.regional + 1, t1.regional) t1.otherCol = IF (procType = 4, t1.otherCol + 1, t1.otherCol) WHERE t2.id = procId ```
It would be simplier to break them into 2 separate update statements, because you are updating to different fields ``` IF procType = 3 THEN UPDATE scores t1 JOIN scores t2 ON FIND_IN_SET(t1.id, t2.vals) SET t1.private = t1.private+1, t1.regional = t1.regional+1; WHERE t2.id = procId; ELSE IF procType = 4 THEN UPDATE scores t1 JOIN scores t2 ON FIND_IN_SET(t1.id, t2.vals) SET t1.private = t1.private+1, t1.otherCol = t1.otherCol+1; WHERE t2.id = procId; ```
Adding an If inside an update statement
[ "", "mysql", "sql", "" ]
I'm trying to build an SQL query using Microsoft SQL server, where countries can be filtered by regions numbered from 0 to 4, 0 meaning Global I have constructed a query which filters the regions 1 to 4 correctly as per example below, but I can not figure out how to get all the countries if 0 (being Global) is selected as a region. ``` declare @region as integer set @region = 1 select * from Country where case LEFT(name,2) when 'US' THEN 1 when 'UK' THEN 2 WHEN 'DE' THEN 3 WHEN 'CZ' THEN 4 WHEN 'ES' THEN 4 WHEN 'FR' THEN 4 WHEN 'IT' THEN 4 WHEN 'NL' THEN 4 WHEN 'PL' THEN 4 WHEN 'RO' THEN 4 WHEN 'SE' THEN 4 END = @region ``` Any ideas? Is there better ways to construct the query? Thanks
To return all countries when `@region` is 0: ``` declare @region as integer set @region = 1 select * from Country where @region = 0 OR case LEFT(name,2) when 'US' THEN 1 when 'UK' THEN 2 WHEN 'DE' THEN 3 WHEN 'CZ' THEN 4 WHEN 'ES' THEN 4 WHEN 'FR' THEN 4 WHEN 'IT' THEN 4 WHEN 'NL' THEN 4 WHEN 'PL' THEN 4 WHEN 'RO' THEN 4 WHEN 'SE' THEN 4 END = @region ``` Also, can you add region as a column to Country table? That would make the whole thing easier?
you can put a field named region in country table then put this condition like below ``` select * from Country where @region = 0 or Country.region =@region ``` you can do in your code like this ``` select * from Country where @region = 0 OR (case LEFT(name,2) when 'US' THEN 1 when 'UK' THEN 2 WHEN 'DE' THEN 3 WHEN 'CZ' THEN 4 WHEN 'ES' THEN 4 WHEN 'FR' THEN 4 WHEN 'IT' THEN 4 WHEN 'NL' THEN 4 WHEN 'PL' THEN 4 WHEN 'RO' THEN 4 WHEN 'SE' THEN 4 END = @region) ```
SQL filter countries by numbered region and also globally
[ "", "sql", "sql-server", "" ]
I want to store images in a database using sql but cant seem to get it to work: ``` qry.SQL.Clear; qry.Sql.Add('update tbl set pic = :blobVal where id = :idVal'); qry.Parameters.ParamByName('idVal')._?:=1; ``` .Parameters has no .asinteger like .Param has but .Param isn't compatible with a TADOquery - to workaround I tried: ``` a_TParameter:=qry.Parameters.CreateParameter('blobval',ftBlob,pdinput,SizeOf(TBlobField),Null); a_TParam.Assign(a_TParameter); a_TParam.asblob:=a_Tblob; qry.ExecSql; ``` This also doesnt work: ``` qry.SQL.Clear; qry.Sql.Add('update tbl set pic = :blobVal where id = 1') qry.Parameters.ParamByName('blobVal').LoadFromStream(img as a_TFileStream,ftGraphic);//ftblob //or qry.Parameters.ParamByName('blobVal').LoadFromFile('c:\sample.jpg',ftgrafic);//ftblob qry.ExecSql; ```
Should be something like: ``` qry.Parameters.Clear; qry.Parameters.AddParameter.Name := 'blobVal'; qry.Parameters.ParamByName('blobVal').LoadFromFile('c:\sample.jpg', ftBlob); // or load from stream: // qry.Parameters.ParamByName('blobVal').LoadFromStream(MyStream, ftBlob); qry.Parameters.AddParameter.Name := 'idVal'; qry.Parameters.ParamByName('idVal').Value := 1; qry.SQL.Text := 'update tbl set pic = :blobVal where id = :idVal'; qry.ExecSQL; ``` To read the BLOB back from the DB: ``` qry.SQL.Text := 'select id, pic from tbl where id = 1'; qry.Open; TBlobField(qry.FieldByName('pic')).SaveToFile('c:\sample_2.jpg'); ```
I'm using Lazarus, not Delphi, but I guess its usually the same syntax. If so, here's a slight improvement on kobiks suggestion: Parameters are added automatically if the SQL.Text is assigned before trying to assign values to the parameters. Like this: ``` qry.Parameters.Clear; qry.SQL.Text := 'update tbl set pic = :blobVal where id = :idVal'; qry.Parameters.ParamByName('blobVal').LoadFromFile('c:\sample.jpg', ftBlob); qry.Parameters.ParamByName('idVal').Value := 1; qry.ExecSQL; ```
Insert/update TBlobfield (aka image) using sql parameters
[ "", "sql", "image", "delphi", "parameters", "blob", "" ]
``` SELECT DISTINCT t1.name as t1_name, MAX(t1.unit) as t1_unit, MAX(t1.id_producer_goods) AS hi_id_producer_goods, t2.name as t2_name FROM Table1 t1 left join Table2 t2 on t1.id_web_site=t2.id_web_site WHERE t1.id='23' GROUP BY t1.name ``` When I run the query, I get the following error: ``` Column 'Table2.name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. ``` How to write this query?
The error is pretty clear, either use an aggregate function with the `t2.name` or add it to the `GROUP BY`, it depends on the desired results you are looking for: ``` SELECT t1.name as t1_name, t2.name as t2_name, MAX(t1.unit) as t1_unit, MAX(t1.id_producer_goods) AS hi_id_producer_goods FROM Table1 hi left join Table2 t2 on t1.id_web_site=t2.id_web_site WHERE t1.id='23' GROUP BY t1.name, t2.name; ``` The error makes sense, because it has to know which value to select from the `t2.name` for each group of `t1.name`? should it select the `max`, `min`, etc. Otherwise `GROUP BY` it. Also, remove the `DISTINCT` there is no need for it with `GROUP BY`.
``` SELECT t1.name as t1_name, MAX(t1.unit) as t1_unit, MAX(t1.id_producer_goods) AS hi_id_producer_goods, t2.name as t2_name FROM Table1 hi left join Table2 t2 on t1.id_web_site=t2.id_web_site WHERE t1.id='23' GROUP BY t1.name,t2.name ``` You need to group by all fields that are not used in AGG functions. etc MAX
How to use group by in sql query?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
SO, **The problem** My question is about - how to join table in MySQL with itself in reverse order? Suppose I have: ``` id name 1 First 2 Second 5 Third 6 Fourth 7 Fifth 8 Sixth 9 Seventh 13 Eight 14 Nine 15 Tenth ``` -and now I want to create a query, which will return joined records in reverse order: ``` left_id name right_id name 1 First 15 Tenth 2 Second 14 Nine 5 Third 13 Eight 6 Fourth 9 Seventh 7 Fifth 8 Sixth 8 Sixth 7 Fifth 9 Seventh 6 Fourth 13 Eight 5 Third 14 Nine 2 Second 15 Tenth 1 First ``` **My approach** I have now this query: ``` SELECT l.id AS left_id, l.name, (SELECT COUNT(1) FROM sequences WHERE id<=left_id) AS left_order, r.id AS right_id, r.name, (SELECT COUNT(1) FROM sequences WHERE id<=right_id) AS right_order FROM sequences AS l LEFT JOIN sequences AS r ON 1 HAVING left_order+right_order=(1+(SELECT COUNT(1) FROM sequences)); ``` -see this [fiddle](http://sqlfiddle.com/#!2/55b80/1) for sample structure & code. *Some background* There's no use case for that. I was doing that in application before. Now it's mostly curiosity if there's a way to do that in SQL - that's why I'm seeking not just 'any solution' (like mine) - but as simple as possible solution. Source table will always be small (<10.000 records) - so performance is not a thing to care, I think. **The question** Can my query be simplified somehow? Also, it's important not to use variables. Order could be included in result (like in my fiddle) - but that's not mandatory.
The only thing i can think to be improved is ``` SELECT l.id AS left_id, l.name ln, (SELECT COUNT(1) FROM sequences WHERE id<=left_id) AS left_order, r.id AS right_id, r.name rn, (SELECT COUNT(1) FROM sequences WHERE id>=right_id) AS right_order FROM sequences AS l LEFT JOIN sequences AS r ON 1 HAVING left_order=right_order; ``` There are 2 changes that should make this a little bit faster: 1) Calculating right order in reverse order in the first place 2) avoid using `SELECT COUNT` in the last line. Edit: I aliased the ln,rn because i couldn't see the columns in fiddle
Without the [SQL standard RANK() OVER(...)](https://stackoverflow.com/q/2376290/132382), you have to compute the ordering yourself as you discovered. The RANK() of a row is simply 1 + the COUNT() of all better-ranked rows. (DENSE\_RANK(), for comparison, is 1 + the COUNT() of all DISTINCT better ranks.) While RANK() can be computed as a scalar subquery in your SELECT projection — as, e.g., you have done with `SELECT (SELECT COUNT(1) ...), ...` — I tend to prefer joins: ``` SELECT lft.id AS "left_id", lft.name AS "left_name", rgt.id AS "right_id", rgt.name AS "right_name" FROM ( SELECT s.id, s.name, COUNT(1) AS "rank" -- Left ranking FROM sequences s LEFT JOIN sequences d ON s.id <= d.id GROUP BY 1, 2) lft INNER JOIN ( SELECT s.id, s.name, COUNT(1) AS "rank" -- Right ranking FROM sequences s LEFT JOIN sequences d ON s.id >= d.id GROUP BY 1, 2) rgt ON lft.rank = rgt.rank ORDER BY lft.id ASC; ```
Reverse join in MySQL
[ "", "mysql", "sql", "join", "sql-order-by", "" ]
I have a table with 3 fields ``` Example_Table ------------- ID (identity) SomeKey SomeValue ``` There are probably 150 values saved into this table. Apparently, all but 6 of those values has changed -\_-' I have an excel document containing the new values, and am not looking forward to trying to do updates on it all. As a programmer, I can't help but feel there's a better method than doing it manually, or worse yet dropping the table and rebuilding it with the new values. Does anyone know of a quick way to do a mass update like that? The new values in the spreadsheet are logically sorted / ordered by the key (desc) that they are paired with.
**If you are using SQL Server 2012 Management Studio** 1. Make sure the columns in Excel are lined up with SomeValue first and SomeKey (or ID) second. 2. Highlight the entire range, click `Ctrl`+`C`, switch to a new query window in Management Studio, and hit `Ctrl`+`V`. 3. Highlight your cursor at the beginning of the first line, hold `Shift`+`Alt` and use the down-arrow to scroll to the last line. 4. Type: ``` UPDATE dbo.Example_Table SET SomeValue = ' ``` 5. Repeat 3. placing your cursor after the value, and type: ``` ' WHERE SomeKey = ``` Now you've got a series of UPDATE statements you can run individually or altogether. **If you are using a previous version** 1. Make sure the columns in Excel are lined up with SomeValue first and SomeKey (or ID) second. 2. Insert a new column before SomeValue. 3. In the first row of the new column, type: ``` UPDATE dbo.Example_Table SET SomeValue = ' ``` 4. In the lower-right corner of that cell, drag with a cross to repeat the value across all applicable rows. 5. Repeat 2. and 3. in between SomeValue and SomeKey, this time typing: ``` ' WHERE SomeKey = ``` 6. Repeat 4. for the new column. 7. Highlight the entire range, click `Ctrl`+`C`, switch to a new query window in Management Studio, and hit `Ctrl`+`V`. 8. You may need to search and replace for Tab characters. Highlight one, hit `Ctrl`+`C`, `Ctrl`+`H`, `Tab` and make sure `Replace with:` is an empty string, then click `Replace All` (unless your data might naturally contain tabs).
Copy the spreadsheet data to a new workbook. Delete everything except the column with the keys and the column with the values. Insert new columns as needed between them, and add UPDATE sql code between them. You'll wind up with something like ``` Column A Column B Column C Col D UPDATE Example_Table SET SomeValue = '| value from column B |' WHERE SomeKey = '| value from column D ' ``` (Sorry about the markup - formatting help would be welcome) With a little cut & paste you'll have 150 update statements. Copy them into SQL Server and execute. You may want to paste them into notepad or equivalent first, to check for tabs etc. It should be a 5 minute job.
Quick way to update the value of column in 100+ rows?
[ "", "sql", "sql-server", "" ]
I have a PostgreSQL table as below: ``` PO Num | Salesman | Phone | -----------+-----------+-----------+ PO13175 | Sarah | 111 | PO13203 | Sarah | 1111 | PO12203 | Tom | 222 | PO8656 | Tom | 222 | ... (n rows) ``` The Salesman and Phone numbers are in pair and how can I identify the discrepancy like Number 2 Row? (Sarah should have a consistent phone number like Tom does) I have tried to use "partition" ``` select *,Row_number() over (partition by Salesman,Phone) as row from table where row>1; ``` But it didn't give me what I wanted. I wanted to get the record as below: ``` PO Num | Salesman | Phone | -----------+-----------+-----------+ PO13175 | Sarah | 111 | PO13203 | Sarah | 1111 | ```
You could query the results with a simple group by / having, too: ``` select salesman from sales group by salesman having count(distinct phone) > 1 ```
You were along the right lines with Windowing function. The following will return Sarah as having inconsistent phone number ``` select a.* from (select salesman ,phone ,count(*) as freq ,count(*)over(partition by salesman order by phone) as distinct_phone_num_id from mydata /*replace this with your actual tablename*/ group by salesman ,phone) as a inner join (select distinct salesman from mydata) as b on a.salesman=b.salesman and a.distinct_phone_num_id>1 ; ``` Explanation: The `a` table will first count all possible `salesman` & `phone` combinations. Then `count(*)over(partition by salesman order by phone)` will serially enumerate each distinct phone number found for each salesman. ``` SALESMAN PHONE FREQ DISTINCT_PHONE_NUM_ID Sarah 111 1 1 Sarah 1111 1 2 Tom 222 2 1 ``` So, above you can see that `DISTINCT_PHONE_NUM_ID` says Sarah has upto 2 phone numbers and Tom has only 1 distinct number. The purpose of the `inner join` is to give you a list of `salesman` with inconsistent phone numbers. Final output: ``` SALESMAN PHONE FREQ DISTINCT_PHONE_NUM_ID Sarah 1111 1 2 ```
PostgreSQL find inconsistency record in the row
[ "", "sql", "postgresql", "" ]
How do I execute a .sql file using mysql and netbeans. I am doing a course that requires me to interact with an .sql file that they have provided but the good guys at the Fitzwilliam institute haven't deemed it fit to explain how to use this file. If can tell me that would be great. I have already set up the basic database using netbeans and mySql. Thank you
Ok, if you have created a connection to your MySql db, then things are pretty forward. Go to 'Services' tab, expand the `Database` options. Right click the connection you have created. Click 'Connect' to connect to the db. And then right click your connection and click 'Execute Command...'. ![enter image description here](https://i.stack.imgur.com/mmZBZ.png) Write the command in the editor, and click the 'Run Sql' (Ctrl+Shift+E) ![enter image description here](https://i.stack.imgur.com/Iu0OY.png)
if the sql file is too large, netbeans cannot open it. In this case you can use the tool ij ([link](http://db.apache.org/derby/papers/DerbyTut/ij_intro.html)) which comes with Derby: Start ij commandline tool and then do something like this: ``` ij> connect 'jdbc:derby:MyDbTest;create=true' user 'peter' password 'parker'; ij> run 'sqlscript.sql'; ```
mySQL ,sql file netbeans
[ "", "mysql", "sql", "netbeans", "" ]
As per what I know, EXISTS returns true when a sub query contains atleast a row whereas NOT EXIST returns true if the subquery returns nothing. So for a given subquery either of the two should return true, right? For eg: 1) This returns as to what kind of store is present in one or more cities? ``` SELECT DISTINCT store_type FROM stores WHERE EXISTS (SELECT * FROM cities_stores WHERE cities_stores.store_type = stores.store_type); ``` 2) And this returns as to what kind of store is present in no cities? ``` SELECT DISTINCT store_type FROM stores WHERE NOT EXISTS (SELECT * FROM cities_stores WHERE cities_stores.store_type = stores.store_type); ``` So how can the same sub query give output for both the queries? as one uses EXIST and the other uses NOT EXIST? I took the examples from <http://dev.mysql.com/doc/refman/5.1/en/exists-and-not-exists-subqueries.html> Also how is 2 NOT EXISTS helping here? Isn't this a kind of OR? 3) This returns what kind of store is present in all cities? ``` SELECT DISTINCT store_type FROM stores s1 WHERE NOT EXISTS ( SELECT * FROM cities WHERE NOT EXISTS ( SELECT * FROM cities_stores WHERE cities_stores.city = cities.city AND cities_stores.store_type = stores.store_type)); ```
Typical usage for `EXISTS()` clauses is to check where some related row for a given row *exists* in a related table: ``` CREATE TABLE clients ( client_id INTEGER NOT NULL PRIMARY KEY , client_name varchar ); CREATE TABLE products ( product_id INTEGER NOT NULL PRIMARY KEY , product_name varchar ); CREATE TABLE orders ( client_id INTEGER NOT NULL REFERENCES clients(client_id) , product_id INTEGER NOT NULL REFERENCES products(product_id) , quantity INTEGER NOT NULL DEFAULT 1 , order_date DATE , PRIMARY KEY (client_id,product_id) ); INSERT INTO clients(client_id, client_name) VALUES (1, 'Alice' ), (2, 'Bob' ), (3, 'Charly' ), (4, 'Diana' ); INSERT INTO products(product_id, product_name) VALUES (1, 'Apple' ), (2, 'Banana' ), (3, 'Citrus' ); INSERT INTO orders(client_id,product_id,order_date) VALUES (1,2, '2013-9-8'),(2,1, '2013-9-11'),(3,2, '2013-10-1'); -- Find clients who ordered something SELECT * FROM clients cl WHERE EXISTS ( SELECT * FROM orders oo WHERE oo.client_id = cl.client_id ) ; -- Find clients who never ordered anything SELECT * FROM clients cl WHERE NOT EXISTS ( SELECT * FROM orders oo WHERE oo.client_id = cl.client_id ) ; -- Find products that were never ordered SELECT * FROM products pr WHERE NOT EXISTS ( SELECT * FROM orders oo WHERE oo.product_id = pr.product_id ) ; ```
To add to this, it would help performance when using `NOT EXISTS` by making sure you're checking the columns in a way that does not negate the indexes. I think this applies to larger datasets but is still good to know. Jayachandran explains it very well here: <http://social.msdn.microsoft.com/Forums/sqlserver/en-US/582544fb-beda-46c0-befd-4b28b5c2cdee/select-not-exists-very-slow>
When to use NOT EXISTS in SQL?
[ "", "mysql", "sql", "" ]
I am running this query in production(Oracle) and it is taking more than 3 minutes . Is there any way out to reduce the execution time ? Both svc\_order and event table contains almost 1million records . ``` select 0 test_section, count(1) count, 'DD' test_section_value from svc_order so, event e where so.svc_order_id = e.svc_order_id and so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY') and e.event_type = 230 and e.event_level = 'O' and e.current_sched_date between to_date( '09/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS') and to_date('09/29/2013 23:59:59', 'MM/DD/YYYY HH24:MI:SS') and (((so.sots_ta = 'N') and (so.action_type = 0)) or ((so.sots_ta is null) and (so.action_type = 0)) or ((so.sots_ta = 'N') and (so.action_type is null))) and so.company_code = 'LL' ```
Looking at the what you said that you cannot create indexes. I hope that the query is making a full table scan on the table. Please try a parallel hint. ``` select /*+ full(so) parallel(so, 4) */ 0 test_section, count(1) count, 'DD' test_section_value from svc_order so, event e where so.svc_order_id = e.svc_order_id and so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY') and e.event_type = 230 and e.event_level = 'O' and e.current_sched_date between to_date( '09/01/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS') and to_date('09/29/2013 23:59:59', 'MM/DD/YYYY HH24:MI:SS') and (((so.sots_ta = 'N') and (so.action_type = 0)) or ((so.sots_ta is null) and (so.action_type = 0)) or ((so.sots_ta = 'N') and (so.action_type is null))) and so.company_code = 'LL' ```
We cannot have additional indexes but tables must have at least meaning full primary key, right so is there one? That should result in at least index, non/clustered, anything. Look at it lets and try to make use of it. In case table is a heap, and we want to deal with it as it is, then we should reduce the number rows in each table individually by applying respective where filters and then combine that result set. In your query only meaning full result column depends on base tables is count(1). Other two columns are constants. Because also JOIN/Cartesian Product etc….. will lead DB engine to look for Indexes so instead use INTERSECT which I feel should better in your case. Some other changes you can do: Avoid using TO\_DATE or any kind of function in Right Side of the WHERE condition column. Prepare data in local Variable and use Local Variable in query. Also you need to check is there any good performance gain using >= than BETWEEN ? I have modified the query and also combined one redundant where condition. Remember that if this changes works for you right now that doesn’t mean it will work always. As son your table start hitting more data that qualifies those WHERE conditions this swill again come back as slow query. so for short term this might work but longer term you have to think about alternate options ``` 1) for example Indexed Views on top of this tables 2) Create same tables with different name and sync data between new and original table using “Insert/Update/Delete Trigger”. SELECT COUNT(1) AS [COUNT], 'DD' test_section_value ,0 test_section FROM ( SELECT so.svc_order_id FROM svc_order so WHERE so.entered_date >= to_date('01/01/2012', 'MM/DD/YYYY') AND so.company_code = 'LL' INTERSECT SELECT e.svc_order_id FROM event e WHERE e.event_type = 230 AND e.event_level = 'O' AND e.current_sched_date BETWEEN to_date('09/01/2010 00:00:00','MM/DD/YYYY HH24:MI:SS') AND to_date('09/29/2013 23:59:59','MM/DD/YYYY HH24:MI:SS') AND ( (( so.sots_ta = 'N' ) AND ( so.action_type IS NULL OR so.action_type = 0)) OR (( so.sots_ta IS NULL ) AND ( so.action_type = 0 )) --or ((so.sots_ta = 'N') and (so.action_type is null)) ) )qry1 ```
How to reduce query execution time for table with huge data
[ "", "sql", "database", "performance", "oracle", "" ]
I have a table with a list of scores for a test, lets say scores out of 30 for example. I would like to present the data by counting the number of people who's score was in certain percentage bracket. For example: ``` Total % Bracket ----------------- 5 95-100% 15 90-94% 20 85-89% 17 80-84% 15 75-79% etc ``` I thought about calculating the score required for each percentage bracket first and then doing a `SUM(CASE ....` somehow, but I seem to have lost my way.
Not got time to test this but something long the lines of.. ``` select t.pcbracket as [% Bracket], count(*) as [NumWithMark] from ( select case when mark between 0 and 9 then ' 0- 9' when mark between 10 and 19 then '10-19' when mark between 20 and 29 then '20-29' when mark between 30 and 39 then '30-39' else '40-100' end as pcbracket from testresults) t group by t.pcbracket ```
You can use CASE WHEN but in this case you will miss intervals with 0 count of records. So you can do it in the following way: ``` select t2.start,t2.finish, count(t.score) from t RIGHT JOIN ( select 0 as start, 4 as finish union all select 5 as start, 9 as finish union all select 10 as start, 14 as finish union all ....... union all select 95 as start, 99 as finish ) as t2 on t.score between t2.start and t2.finish group by t2.start,t2.finish order by t2.start ``` [Here is SQLFiddle demo](http://sqlfiddle.com/#!2/c22d4/8)
SQL Group By Percentage Increments
[ "", "sql", "sql-server-2005", "group-by", "percentage", "" ]
I have the following table in SQL Server: ``` ----------------------------- ID Age Gender 1 30 F 2 35 M 3 32 M 4 18 F 5 21 F ``` What I need to do is to execute a query that will group the records in given ranges and count the occurences. The results need to be displayed later in a histogram chart (bar chart). I tried a query similar to the following: ``` SELECT count(CASE WHEN Age>= 10 AND Age < 20 THEN 1 END) AS '10 - 20', count(CASE WHEN Age>= 21 AND Age < 30 THEN 1 END) AS '21 - 30', count(CASE WHEN Age>= 31 AND Age < 35 THEN 1 END) AS '31 - 35', count(CASE WHEN Age>= 36 AND Age < 40 THEN 1 END) AS '36 - 40', FROM (SELECT Age FROM Attendees) AS AgeGroups ``` For the moment, this does the trick, but does not consider the gender column. It will result a single row which counts the frequency of every age group: ``` 10-20 21-30 31-35 36-40 0 22 21 13 ``` If gender would be considered there should be two records shown, for each gender. I need to see something like: ``` Gender 10-20 21-30 31-35 36-40 M 0 12 9 6 F 0 10 12 7 ``` How should I approach this problem?
Simply add the `Gender` column to your `SELECT` and then do a `GROUP BY`. ``` SELECT Gender, count(CASE WHEN Age>= 10 AND Age < 20 THEN 1 END) AS [10 - 20], count(CASE WHEN Age>= 21 AND Age < 30 THEN 1 END) AS [21 - 30], count(CASE WHEN Age>= 31 AND Age < 35 THEN 1 END) AS [31 - 35], count(CASE WHEN Age>= 36 AND Age < 40 THEN 1 END) AS [36 - 40] FROM Attendees AS AgeGroups GROUP BY Gender ```
I recently came across a similar problem where I needed to look at several variables rather than just one, and my solution was to use a temporary table. ``` CREATE TABLE #bin ( startRange int, endRange int, agelabel varchar(10) ); GO INSERT INTO #bin (startRange, endRange, mylabel) VALUES (10, 20, '10-20') INSERT INTO #bin (startRange, endRange, mylabel) VALUES (21, 30, '21-30') INSERT INTO #bin (startRange, endRange, mylabel) VALUES (31, 35, '31-35') INSERT INTO #bin (startRange, endRange, mylabel) VALUES (36, 40, '36-40') GO SELECT b.agelabel as ageBracket, a.Gender, count(a.Gender) as total FROM Attendees a INNER JOIN #bin b on (a.Age >= b.startRange and a.Age <= b.EndRange) GROUP BY b.agelabel, a.Gender DROP TABLE #bin GO ``` Or Alternatively, and probably the better solution, ``` With table1 as ( SELECT CASE WHEN Age >= 10 and Age <= 20 then '10-20' WHEN Age > 20 and Age <= 30 then '21-30' WHEN Age > 30 and Age <= 35 then '31-35' WHEN Age > 35 and Age <= 40 then '36-40' ELSE 'NA' End as ageBracket, Gender FROM Attendees ) SELECT ageBracket, Gender, Count(Gender), FROM table1 GROUP BY ageBracket, Gender ``` Where the result would be : ``` AgeBracket Gender Total 10-20 M 0 10-20 F 0 21-30 M 12 21-30 F 10 31-35 M 9 31-35 F 12 36-40 M 6 36-40 F 7 ``` You can use the first select statement to gather all the data of your choice, while using the second query to perform any necessary calculations. I think these solutions might be a bit overkill for your problem, but as it was the only question I found concerning binning, hopefully it will be useful to others !
Create range bins from SQL Server table for histograms
[ "", "sql", "grouping", "histogram", "" ]
I have a single table with 2 columns: Actors and movies which looks like this: ``` ACTOR | MOVIE -------+------------- Volta | Pulp Fiction Bruce | Pulp Fiction Rhame | Pulp Fiction Walke | Pulp Fiction Rhame | Bad Movie Bruce | Bad Movie Volta | Decent Movie Brian | Decent Movie Walke | Awesome Movie Brian | Awesome Movie ``` I want to know know which actors, who appeared in Pulp Fiction, never has appeared in another movie with another actor from Pulp Fiction. From this example, the output should be: ``` Volta Walke ``` Because they appeared in Pulp Fiction and in Decent Movie and Awesome Movie respectively without any other actors from Pulp Fiction. I'm using MySQL.
``` SELECT m.actor FROM movies m WHERE m.movie = 'Pulp Fiction' AND NOT EXISTS ( SELECT 1 FROM movies m1 JOIN movies m2 ON m1.movie = m2.movie AND m1.actor <> m2.actor AND m2.movie <> 'Pulp Fiction' AND m2.actor IN (SELECT actor FROM movies WHERE movie = 'Pulp Fiction') WHERE m.actor = m1.actor ) ``` According to [SQLFiddle](http://sqlfiddle.com/#!9/557bd/17) done by ChrisProsser it should give the proper result.
There may be an easier way, but this should work: ``` select m.actor from movies m where m.movie = 'Pulp Fiction' and m.actor not in ( select m2.actor from movies m1, movies m2, movies m3 where m1.actor = m2.actor and m1.movie = 'Pulp Fiction' and m2.movie != m1.movie and m3.movie = m2.movie and m3.actor in (select m4.actor from movies m4 where m4.movie = 'Pulp Fiction') group by m2.actor having count(*) > 1); ``` I have created a [SQL Fiddle](http://sqlfiddle.com/#!9/557bd/1/0) to test this and the output is correct.
SQL - Relationship between actors
[ "", "mysql", "sql", "" ]
I have a problem here: "Show average distance per day driven by cars from Paris" I have also 2 tables referred to this problem 1. `table_cars`: `id`, `brand`, `type`, `license` 2. `table_distances`: `id_car`, `date`, `distance` I have managed to select "the average distance for the cars from Paris" ``` select avg(table_distances.distance) from table_distances INNER JOIN table_cars ON table_distances.id_car = table_cars.id where table_cars.license = 'Paris';' ``` Though, I have still a problem with average distance per day. I looked over related questions on the stackoverflow/google but I got more confused. Can somebody explain how I can improve my query to show average distance per day?
This should get you the distances per car per date. ``` SELECT id_car, date, AVG(table_distances.distance) FROM table_distances INNER JOIN table_cars ON table_distances.id_car = table_cars.id WHERE table_cars.license = 'Paris' GROUP BY id_car, date ORDER BY id_car, date ```
Simply add the date to what you select and group by it so it's averaged per row: ``` SELECT table_distances.date, avg(table_distances.distance) FROM table_distances INNER JOIN table_cars ON table_distances.id_car = table_cars.id WHERE table_cars.license = 'Paris' GROUP BY table_distances.date ```
SQL - average distance per day
[ "", "sql", "" ]
I'm not sure if the question is accurate but I couldn't think of a way to put it. I am going to attempt to explain it using an example. Consider a table of individual hospital visits with attributes for the visitID (VID), patientID (PID), and hospitalID (HID). ``` VID PID HID 1 A x 2 A y 3 A x 4 B z 5 B z ``` What I am looking to do is identify PID-HID pairs where more than 50% of the total VIDs for that PID were at the specified HID. In this case I would want it to return "A & x" since 2/3 of the total VIDs for PID "A" were at HID "x" and "B & Z" since all of the VIDs for "B" were at "z"
This might be nasty, but I think it does the job. It assumes you table is named `visits` ``` ;with infoCte as ( select pid, hid, count(*) as visitcount from visits group by pid, hid ) select * from infocte i where visitcount > ( select count(*) / 2 as midcount from visits v where v.pid = i.pid ) ``` The first part gets all of the visit counts by patient and hospital. The second part limits the results down to only those who have visited that particular hospital more than 50% of the time. If you need exactly 50% and above change the `>` to a `>=`.
Try this: ``` select PID,HID,100*cast(PHtotal as float)/Ptotal as PHperc from (select PID,count(*) as Ptotal from tbl group by PID) t, (select PID,HID,count(*) as PHTotal from tbl group by PID,HID) s where s.PID=t.PID and cast(PHtotal as float)/Ptotal>0.5 ``` **EDIT**: Added missed casts to float
Calculating percentage participation across columns
[ "", "sql", "sql-server", "" ]
I have two tables which I need to use a where clause. Table1 ``` CustomerID Product 1 Car 2 Table 3 Golf 4 Foo 5 Yoo ``` Table2 ``` CustomeID Comment 2 Three items 3 Returned 4 Complaint ``` I have a query which has two filters in the where statement like this ``` Select * from table1 a left join table2 b on a.customerid= b.customerid where b.comment<>'Returned' and b.comment not like 'Three%' ``` When I ran the query I just got one record. I want it to also return the two customerID's which are not in table2(three records)
Try: ``` SELECT * FROM table1 a LEFT JOIN table2 b on a.customerid= b.customerid WHERE (b.comment<>'Returned' and b.comment not like 'Three%') OR (b.customerid is NULL) ```
``` SELECT * FROM Table1 A LEFT JOIN Table2 B ON A.CustomerID=B.CustomeID WHERE ( B.COMMENT<>'RETURNED' AND B.COMMENT NOT LIKE 'THREE%') OR B.CustomeID IS NULL ``` This should work I guess.
SQL Where Null Values
[ "", "sql", "null", "where-clause", "teradata", "" ]
I have a many to many relationship: * A post can have many tags * A tag can have many posts Models: ``` public class Post { public virtual string Title { get; set; } public virtual string Content{ get; set; } public virtual User User { get; set; } public virtual ICollection<Tag> Tags { get; set; } } public class Tag { public virtual string Title { get; set; } public virtual string Description { get; set; } public virtual User User { get; set; } public virtual ICollection<Post> Posts { get; set; } } ``` I want to count all posts that belong to multiple tags but I don't know how to do this in NHibernate. I am not sure if this is the best way to do this but I used this query in MS SQL: ``` SELECT COUNT(*) FROM ( SELECT Posts.Id FROM Posts INNER JOIN Users ON Posts.UserId=Users.Id LEFT JOIN TagsPosts ON Posts.Id=TagsPosts.PostId LEFT JOIN Tags ON TagsPosts.TagId=Tags.Id WHERE Users.Username='mr.nuub' AND (Tags.Title in ('c#', 'asp.net-mvc')) GROUP BY Posts.Id HAVING COUNT(Posts.Id)=2 )t ``` But NHibernate does not allow subqueries in the from clause. It would be great if someone could show me how to do this in HQL.
I found a way of how to get this result without a sub query and this works with nHibernate Linq. It was actually not that easy because of the subset of linq expressions which are supported by nHibernate... but anyways query: ``` var searchTags = new[] { "C#", "C++" }; var result = session.Query<Post>() .Select(p => new { Id = p.Id, Count = p.Tags.Where(t => searchTags.Contains(t.Title)).Count() }) .Where(s => s.Count >= 2) .Count(); ``` It produces the following sql statment: ``` select cast(count(*) as INT) as col_0_0_ from Posts post0_ where ( select cast(count(*) as INT) from PostsToTags tags1_, Tags tag2_ where post0_.Id=tags1_.Post_id and tags1_.Tag_id=tag2_.Id and (tag2_.Title='C#' or tag2_.Title='C++'))>=2 ``` you should be able to build your user restriction into this, I hope. The following is my test setup and random data which got generated ``` public class Post { public Post() { Tags = new List<Tag>(); } public virtual void AddTag(Tag tag) { this.Tags.Add(tag); tag.Posts.Add(this); } public virtual string Title { get; set; } public virtual string Content { get; set; } public virtual ICollection<Tag> Tags { get; set; } public virtual int Id { get; set; } } public class PostMap : ClassMap<Post> { public PostMap() { Table("Posts"); Id(p => p.Id).GeneratedBy.Native(); Map(p => p.Content); Map(p => p.Title); HasManyToMany<Tag>(map => map.Tags).Cascade.All(); } } public class Tag { public Tag() { Posts = new List<Post>(); } public virtual string Title { get; set; } public virtual string Description { get; set; } public virtual ICollection<Post> Posts { get; set; } public virtual int Id { get; set; } } public class TagMap : ClassMap<Tag> { public TagMap() { Table("Tags"); Id(p => p.Id).GeneratedBy.Native(); Map(p => p.Description); Map(p => p.Title); HasManyToMany<Post>(map => map.Posts).LazyLoad().Inverse(); } } ``` test run: ``` var sessionFactory = Fluently.Configure() .Database(FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2012 .ConnectionString(@"Server=.\SQLExpress;Database=TestDB;Trusted_Connection=True;") .ShowSql) .Mappings(m => m.FluentMappings .AddFromAssemblyOf<PostMap>()) .ExposeConfiguration(cfg => new SchemaUpdate(cfg).Execute(false, true)) .BuildSessionFactory(); using (var session = sessionFactory.OpenSession()) { var t1 = new Tag() { Title = "C#", Description = "C#" }; session.Save(t1); var t2 = new Tag() { Title = "C++", Description = "C/C++" }; session.Save(t2); var t3 = new Tag() { Title = ".Net", Description = "Net" }; session.Save(t3); var t4 = new Tag() { Title = "Java", Description = "Java" }; session.Save(t4); var t5 = new Tag() { Title = "lol", Description = "lol" }; session.Save(t5); var t6 = new Tag() { Title = "rofl", Description = "rofl" }; session.Save(t6); var tags = session.Query<Tag>().ToList(); var r = new Random(); for (int i = 0; i < 1000; i++) { var post = new Post() { Title = "Title" + i, Content = "Something awesome" + i, }; var manyTags = r.Next(1, 3); while (post.Tags.Count() < manyTags) { var index = r.Next(0, 6); if (!post.Tags.Contains(tags[index])) { post.AddTag(tags[index]); } } session.Save(post); } session.Flush(); /* query test */ var searchTags = new[] { "C#", "C++" }; var result = session.Query<Post>() .Select(p => new { Id = p.Id, Count = p.Tags.Where(t => searchTags.Contains(t.Title)).Count() }) .Where(s => s.Count >= 2) .Count(); var resultOriginal = session.CreateQuery(@" SELECT COUNT(*) FROM ( SELECT count(Posts.Id)P FROM Posts LEFT JOIN PostsToTags ON Posts.Id=PostsToTags.Post_id LEFT JOIN Tags ON PostsToTags.Tag_id=Tags.Id WHERE Tags.Title in ('c#', 'C++') GROUP BY Posts.Id HAVING COUNT(Posts.Id)>=2 )t ").List()[0]; var isEqual = result == (int)resultOriginal; } ``` As you can see at the end I do test against your original query (without the users) and it is actually the same count.
In HQL: ``` var hql = "select count(p) from Post p where p in " + "(select t.Post from Tag t group by t.Post having count(t.Post) > 1)"; var result = session.Query(hql).UniqueResult<long>(); ``` You can add additional criteria to the subquery if you need to specify tags or other criteria.
How to count all posts belonging to multiple tags in NHibernate?
[ "", "sql", "nhibernate", "" ]
``` id_specific_price id_product ------------------------------- 1 2 2 2 3 2 4 3 5 3 6 3 7 3 ``` Need to delete the duplicates, expected outcome: ``` id_specific_price id_product ------------------------------- 3 2 7 3 ``` --- ``` SELECT * FROM ps_specific_price WHERE id_specific_price NOT IN (SELECT MAX(id_specific_price) FROM ps_specific_price GROUP BY id_product) ``` works but ``` DELETE FROM ps_specific_price WHERE id_specific_price NOT IN (SELECT MAX(id_specific_price) FROM ps_specific_price GROUP BY id_product) ``` does not. There are plenty of examples to get around this but for some reason I am not able to adapt it. I believe it is GROUP BY. For example: ``` DELETE FROM ps_specific_price WHERE id_specific_price NOT IN (SELECT MAX(p.id_specific_price) FROM (SELECT * FROM ps_specific_price ) as p) GROUP BY id_product ``` Where did I go wrong here?
If you're looking for a solution for MySQL then you can use a proper [multi table `DELETE`](http://dev.mysql.com/doc/refman/5.5/en/delete.html) syntax along with a `JOIN` like this ``` DELETE p FROM ps_specific_price p JOIN ( SELECT id_product, MAX(id_specific_price) id_specific_price FROM ps_specific_price GROUP BY id_product ) d ON p.id_product = d.id_product AND p.id_specific_price <> d.id_specific_price; ``` Outcome: ``` | ID_SPECIFIC_PRICE | ID_PRODUCT | |-------------------|------------| | 3 | 2 | | 7 | 3 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/fc75e/1)** demo
Try this: ``` CREATE TABLE ps_specific_price ( id_specific_price NUMBER, id_product NUMBER ); INSERT INTO ps_specific_price (id_specific_price, id_product) VALUES (1, 2); INSERT INTO ps_specific_price (id_specific_price, id_product) VALUES (2, 2); INSERT INTO ps_specific_price (id_specific_price, id_product) VALUES (3, 2); INSERT INTO ps_specific_price (id_specific_price, id_product) VALUES (4, 3); INSERT INTO ps_specific_price (id_specific_price, id_product) VALUES (5, 3); INSERT INTO ps_specific_price (id_specific_price, id_product) VALUES (6, 3); INSERT INTO ps_specific_price (id_specific_price, id_product) VALUES (7, 3); COMMIT; DELETE FROM ps_specific_price ps WHERE ps.id_specific_price NOT IN ( SELECT MAX(id_specific_price) FROM ps_specific_price ps_in WHERE ps_in.id_product = ps.id_product ); SELECT * FROM ps_specific_price; ID_SPECIFIC_PRICE ID_PRODUCT ---------------------- ---------------------- 3 2 7 3 ``` You must connect the table from the inner query with the table from the outer one. I'm using Oracle 11g R2. I checked this on SQLFiddle and my DELETE statement is invalid for MySQL - don't have that one installed and not much experience there, but you didn't say what database you are using.
A query to delete duplicates with GROUP BY
[ "", "mysql", "sql", "duplicates", "" ]
I have the following SQL ``` DECLARE @ColIdx INT; DECLARE @FieldPrefix VARCHAR(50) = N'DX'; DECLARE @PivotColumnHeaders NVARCHAR(MAX); SELECT @MaxColumnCount = 8; WHILE @ColIdx < @MaxColumnCount BEGIN SELECT @PivotColumnHeaders = COALESCE(@PivotColumnHeaders + N', [' + @FieldPrefix + CAST(@ColIdx AS VARCHAR) + N'] AS NVARCHAR(100)', N'[' + @FieldPrefix + CAST(@ColIdx AS VARCHAR) + N'] AS NVARCHAR(100)') SET @ColIdx += 1 END; PRINT @@PivotColumnHeaders; ``` I expect the output to be ``` [DX1] AS NVARCHAR(100), [DX2] AS NVARCHAR(100), ..., [DX8] AS NVARCHAR(100) ``` and this is part of a larger SP that is going to insert these columns into a database using dynamic SQL. Granted I have not used SQL for ages, but this is basic and I have no idea why `@PivotColumnHeader` is ending up blank, or indeed why `PRINT` is not working (probably because the variable is blank! - but why?) - I have attempted the normally sound ``` RAISERROR (@PivotColumnHeaders, 10, 1) WITH NOWAIT; ``` in place of `PRINT` but this is also returning nothing. **What am I doing wrong?** Thanks for your time.
You are not initializing `@ColIdx`: ``` DECLARE @ColIdx INT; DECLARE @FieldPrefix VARCHAR(50) = N'DX'; DECLARE @PivotColumnHeaders NVARCHAR(MAX); SELECT @MaxColumnCount = 8, @ColIdx = 1; ``` By default the value will be `NULL`, and thus `@ColIdx < @MaxColumnCount` will be `NULL`, so the `WHILE` loop will never execute.
D Stanley has already given the correct answer about @ColIdx, not being assigned a value, thus never entering the loop. I just wanted to point out an alternative to using a loop to generate this column list: ``` DECLARE @FieldPrefix VARCHAR(50) = N'DX'; DECLARE @PivotColumnHeaders NVARCHAR(MAX) = ''; DECLARE @MaxColumnCount INT = 8; SELECT @PivotColumnHeaders += ', ' + QUOTENAME(@FieldPrefix + CAST(Number AS VARCHAR)) + N' AS NVARCHAR(100)' FROM Master..spt_values WHERE Type = 'P' AND Number BETWEEN 1 AND @MaxColumnCount; PRINT STUFF(@PivotColumnHeaders, 1, 1, ''); ``` I don't think a simple loop like you have done will ever be consequential to performance, but I just think it is best to avoid loops in SQL where possible, this way it is the last thing that springs to mind whenever we need to solve a problem.
SQL NVARCHAR Variable String Concatenation
[ "", "sql", "sql-server-2008", "variables", "" ]
I am building a SQL database which will have an Access 2010 front-end. I would like some of the fields to be lookups in Access (ie the user clicks on the field in Access and a drop down populates). It is fairly straightforward to make a field a lookup for another table in Access but I can't seem to know how to do it in SQL and then propagate the changes. My SQL knowledge is very basic. Here's an example of how I am creating my SQL tables: ``` CREATE TABLE RequestTypes ( RequestType varchar(50) PRIMARY KEY ); INSERT INTO RequestTypes (RequestType) VALUES ('Val 1'); INSERT INTO RequestTypes (RequestType) VALUES ('Val 2'); INSERT INTO RequestTypes (RequestType) VALUES ('Val 3'); CREATE TABLE Projects ( ID int IDENTITY(1,1) PRIMARY KEY, RequestStatus varchar(50) FOREIGN KEY REFERENCES RequestStatus(RequestStatus), Quantity varchar(50) ); ``` I then connect to the database through the ODBC connection in Access. How can I create my tables in SQL so that the `RequestStatus` field of my Projects table to have the same functionality a lookup table does? For example, being able to click on the RequestStatus attribute of a Project and select "Val 1" or "Val 2" or "Val 3" from a list. The above does require the tables to match but does not provide the "dropdown" lookup functionality.
Create the table in SQL Server, link to it, then use that table as the row source property for the desired combo box / drop down. This is the very basic syntax to create a table in SQL Server: ``` CREATE TABLE LookupTest ( ID INT NOT NULL, LookupValue varchar(255) ); ```
When using datasheet view to SQL server, you cannot use a lookup. However what you can do is create a continues form, and then simply drop in a combo box and use the wizard (or by hand) build the drop down and lookup you require. From the users point of view, they not really be able to discern the difference between editing data in that continues form as opposed to a continues form. So the continues form will be based on the ONE table. The combo box column will be a long Number, and the combo box will have a row source of the table that provides the list of choices (the lookup values). So you can build a screen like this: ![enter image description here](https://i.stack.imgur.com/k3hIr.gif) Note how in above we have a continues form, but also some buttons etc. So a continues form is more flexible then a datasheet, but this “multiple items” forms result in much the same user interface. Note how the first column is a combo box. In fact as above shows, you can drop in near any kind of control into these continues forms. The combo box can/will display a text column of your choice, but “behind” the scenes the combo box will store the PK id of the value you choose from the combo box into a single column in your forms table. The result and look and feel is the same as a lookup column in an Access datasheet.
How to get "Lookup" functionality in Access when linking to a SQL table?
[ "", "sql-server", "sql", "ms-access", "linked-server", "" ]
I am developing an android app, where I want to update a column in a row based on a where clause which comprises of two values. Below is what I have tried. ``` public void setlocationsetcolumn(double lats , double longs, String setvalue) { SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(Loc_set, setvalue); db.update(TABLE_LATLONG, values, " WHERE "+Loc_lati +" = " + lats+" AND "+Loc_longi+" = "+longs, null); } ``` I wan to update the Loc\_set, based on the lats & longs values. but i am getting a force close. Am I doing something wrong here. Please help.Thanks!
below snippet will help you. ``` String[] args = new String[]{lats, longs}; db.update(TABLE_LATLONG, values, "Loc_lati=? AND Loc_longi=?", args); ```
Drop the `WHERE` String . Try this : ``` db.update(TABLE_LATLONG, values, Loc_lati +" = " + lats+" AND "+Loc_longi+" = "+longs, null); ``` However, I Don't know what loc\_lati and loc\_longi are, hopefully columns in your db.
Sqlite update a column based on where clause android
[ "", "android", "sql", "sqlite", "sql-update", "android-sqlite", "" ]
I want to split a datetime column so that the year and the month both have their own column in a select statement output. I also want to have a column by week of the year, as opposed to specific date. Basically, I want separate year, month, and week columns to show up in my select statement output.
Try using the [`DatePart`](http://msdn.microsoft.com/en-us/library/ms174420.aspx) function as shown in the following: ``` select datepart(year,Mydate), datepart(month,Mydate), datepart(week,Mydate) From MyTable ``` Note: If you need to calculate the week number by [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) standards then you'll need to use `datepart(iso_week,Mydate)` You could also look at the [`DateName`](http://msdn.microsoft.com/en-us/library/ms174395.aspx) function ``` select datename(month,Mydate) From MyTable ```
Here is another way. Use SQL Servers YEAR() and MONTH() functions. For week, I use datepart(week,Mydate) as noted by @DMK. ``` SELECT YEAR(MyDate), MONTH(MyDate), DATEPART(WEEK,Mydate) From YourTable ```
splitting a datetime column into year, month and week
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have following table with three columns containing the string values. ``` ID strOriginal strNew strFinal 1 '122,234,23,22,554' '23,22' '122,234,554' 2 '122,23,22,554,998,856,996' '554,998,856,996' '122,23,22' 3 '60,89,65,87,445,54' '87' '60,89,65,445,54' ``` Now in last column i need to compare two columns values and get result values in strFinal column which has all the values of strOriginal column except strNew values. Can anyone suggest me how to do it in SQL?
You can try below query : ``` select *, case when patindex('%'+STRNEW+'%',STRORIGINAL)>=1 then replace(replace (STRORIGINAL,STRNEW,''),',,',',') else STRORIGINAL end final_String from table1 ``` **[SQL FIDDLE](http://sqlfiddle.com/#!3/6682a/13)**
Well below is the link which might be helpful. [Remove second appearence of a substring from string in SQL Server](https://stackoverflow.com/questions/10458473/remove-second-appearence-of-a-substring-from-string-in-sql-server) However, in this link strings are delimited by '/' where you can substitute ',' and also slight modification is required.
Compare two string column and result in third column SQL Server
[ "", "sql", "sql-server", "string", "compare", "" ]
I found that error Visual stutio 2010 when I try to connect with Oracle database <https://i.stack.imgur.com/BtIKu.jpg> <https://i.stack.imgur.com/q6ffE.jpg> Here is TNSNAMES.ORA: ``` TNS_ALIAS= (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =188.11.32.22)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) ) ``` Here is sqlnet.ora ``` # sqlnet.ora Network Configuration File: F:\app\user\product\11.2.0\client_1\network\admin\sqlnet.ora # Generated by Oracle configuration tools. # This file is actually generated by netca. But if customers choose to # install "Software Only", this file wont exist and without the native # authentication, they will not be able to connect to the database on NT. SQLNET.AUTHENTICATION_SERVICES= (NTS) NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) ``` What should I do now??
It is old post here but as I was in same situation and this forum comes up pretty top of google search then I decided to post my solution. I tried to send XML request to Oracle server and got from one instance: ORA-12504: TNS:listener was not given the SERVICE\_NAME in CONNECT\_DATA The problem was in FQDN service\_name. It tried to solve it over EZCONNECT but in Oracle 11g EZCONNECT does not send Service name at all. Solution: 1. In "$ORACLE\_HOME\database\network\admin\sqlnet.ora" use only TNSNAMES in NAMES.DIRECTORY\_PATH like: ``` NAMES.DIRECTORY_PATH= (TNSNAMES) ``` 1. In "$ORACLE\_HOME\database\network\admin\tnsnames.ora" create a additional section with FQDN. Like: EXAMPLE = (DESCRIPTION = (ADDRESS\_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = example.domain.com)(PORT = 1521)) ) (CONNECT\_DATA = (SERVICE\_NAME = x99.domain.com) (SID=X) ) ) EXAMPLE.DOMAIN.COM = (DESCRIPTION = (ADDRESS\_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = example.domain.com)(PORT = 1521)) ) (CONNECT\_DATA = (SERVICE\_NAME = x99.domain.com) (SID=X) ) ) 2. Use tnsping utilite to ping both names: 1) tnsping example; 2) tnsping example.domain.com - both names must answer. NB! Use your own HOST, SERVICE\_NAME AND SID of cource ;) I hope that it helps someone. BR Raul
You need to use the shorthand version when setting the DataSource property of the Connection string. The entries in your TNSNames file will translate to this ``` var conBuiler = new OracleConnectionStringBuilder(); //DataSource = "HOST:PORT/SERVICE_NAME" conBuilder.DataSource = "example.domain.com:1521/x99.domain.com" conBuilder.UserId = "SomeUser"; conBuilder.Password = "Password123"; var orCon = new OracleConnection(conBuilder.ConnectionString); ```
Oracle error: TNS: Listener was not given the SERVICE_NAME in CONNECT_DATA 1
[ "", ".net", "sql", "oracle", "" ]
I would like to know if there is any difference in using the WHERE clause or using the matching in the ON of the inner join. The result in this case is the same. First query: ``` with Catmin as ( select categoryid, MIN(unitprice) as mn from production.Products group by categoryid ) select p.productname, mn from Catmin inner join Production.Products p on p.categoryid = Catmin.categoryid and p.unitprice = Catmin.mn; ``` --- Second query: ``` with Catmin as ( select categoryid, MIN(unitprice) as mn from production.Products group by categoryid ) select p.productname, mn from Catmin inner join Production.Products p on p.categoryid = Catmin.categoryid where p.unitprice = Catmin.mn; // this is changed ``` --- Result both queries: ![result](https://i.stack.imgur.com/B6ek1.png)
My answer may be a bit off-topic, but I would like to highlight a problem that may occur when you turn your INNER JOIN into an OUTER JOIN. In this case, the most important difference between putting predicates (test conditions) on the ON or WHERE clauses is that you can **turn LEFT or RIGHT OUTER JOINS into INNER JOINS without noticing it**, if you put fields of the table to be left out in the WHERE clause. For example, in a LEFT JOIN between tables A and B, if you include a condition that involves fields of B on the WHERE clause, there's a good chance that there will be no null rows returned from B in the result set. Effectively, and implicitly, you turned your LEFT JOIN into an INNER JOIN. On the other hand, if you include the same test in the ON clause, null rows will continue to be returned. For example, take the query below: ``` SELECT * FROM A LEFT JOIN B ON A.ID=B.ID ``` The query will also return rows from A that do not match any of B. Take this second query: ``` SELECT * FROM A LEFT JOIN B WHERE A.ID=B.ID ``` This second query won't return any rows from A that don't match B, even though you think it will because you specified a LEFT JOIN. That's because the test A.ID=B.ID will leave out of the result set any rows with B.ID that are null. That's why I favor putting predicates in the ON clause rather than in the WHERE clause.
The results are exactly same. Using "ON" clause is more suggested due to increasing performance of the query. Instead of requesting the data from tables then filtering, by using on clause, you first filter first data-set and then join the data to other tables. So, lesser data to match and faster result is given.
What's the difference between filtering in the WHERE clause compared to the ON clause?
[ "", "sql", "t-sql", "sql-server-2012", "" ]
Assume I have table foo ``` A B C ============== 1 1 1 1 2 3 1 2 4 1 3 6 2 2 6 ``` I want the set of all C where I have a duplicate AB. Something like: ``` select all(C) from foo group by a, b having count(b) > 1 ``` I want the result to be ``` all(C) === 3 4 ``` Is there an easy way to do this in Oracle SQL?
``` SELECT t1.c FROM foo t1 JOIN foo t2 ON (t1.a = t2.a AND t1.b = t2.b AND t1.rowid != t2.rowid) ``` should give you what you're after. A bit more efficient would likely be to use an analytic function ``` SELECT c FROM (SELECT f.*, count(*) over (partition by a, b) cnt FROM foo f) WHERE cnt > 1 ```
Try this: ``` SELECT C FROM (select C, COUNT(*) OVER(PARTITION BY A, B) AS DUPLICATES from MY_TABLE) AS RESULTS WHERE DUPLICATES > 1 ```
SQL for set of all in group by
[ "", "sql", "oracle", "" ]
Wondering if you can help a little with the syntax here. Trying to set a variable as a month value dependent upon whether it is past the 25th day of the month. If it is, then it uses the current month (e.g. the variable 'month' will be 10 if the date is 28th October, but will be 9 if it's the 24th October). So far I've got the following: ``` select a case when (SELECT DAY(GETDATE()) >= 25 then a = (SELECT MONTH(GETDATE())) else a = (SELECT MONTH(GETDATE()) - 1) end ``` I understand you can't use less than or greater than signs, as case statements are only for evaluations (=)? Can anyone suggest another way of doing this? Thanks.
As far as I understand, you need this kind of usage; ``` select case when DAY (GETDATE()) = 25 then MONTH(GETDATE()) else MONTH(GETDATE()) - 1 end ```
``` select @your_variable = case when DAY(GETDATE()) = 25 then MONTH(GETDATE()) else MONTH(GETDATE()) - 1 end ```
CASE statement with GETDATE()
[ "", "sql", "sql-server-2008", "case", "getdate", "" ]
I am thinking about a possibility in doing inner joins with the below 3 tables. **CategorizedItems** ``` CategoryId | linkid | type ------------------------------ 1 1 a1 1 2 a1 1 1 a2 ``` **A1\_Items** ``` itemid | Name | A1 Fields ----------------------------- 1 N1 2 N2 ``` **A2\_items** ``` itemid | Name | A2 Fields ----------------------------- 1 N3 ``` CategorizedItems.linkId can hold A1\_items.itemid and A2\_items.itemid. Because of that, there are no primary key - foreign key relationship among them. I am in the need to get the names of the items under CategoryId 1. Is there a possibility to do this. I dont have any field representing 'type' in A1\_items and A2\_Items. I guess that I can achieve it by doing 2 queries (or) By doing a union with the results of 2 separate queries. In my real time scenario, I have multiple tables connected with these 3 tables. Redoing the joins for all the other tables, just for the above scenario is costly.. Any help on this would be much appreciated.
Just use left joins and apply a suitable filter to your joins like this: ``` SELECT CategoryId, COALESCE(A1_Items.Name, A2_Items.Name) AS Name FROM CategorizedItems LEFT JOIN A1_Items ON A1_Items.itemid = CategorizedItems.linkid AND CategorizedItems.type = 'a1' LEFT JOIN A2_Items ON A2_Items.itemid = CategorizedItems.linkid AND CategorizedItems.type = 'a2' ``` **`Fiddle Demo`**
Another approach: ``` select CategoryID, Name from CategorizedItems c join (select 'a1' as table, ItemId, Name from A1_Items union select 'a2' as table, ItemId, Name from A2_Items) a on c.linkId = a.ItemId and c.type = a.table ```
SQL Server Inner Join on same column with two tables
[ "", "sql", "sql-server", "database", "join", "" ]
I'm sort of throwing a hail mary here after going crazy trying to figure out how to write this correctly. I've searched near and far and haven't found something that made this "click" for me yet. Here's the situation: There's 3 tables (columns involved in parens): * prods (prod\_id PK, prod\_grp\_id) * prod\_grps (prod\_grp\_id PK, prod\_grp\_mgr\_nm) * prod\_grp\_prods (prod\_id, prod\_grp\_id) A product can belong to more than one prod\_grp. For instance, a baseball can be in the sports prod\_grp, which may have prod\_grp\_mgr\_nm = "AssMGR\_Bill" while also being in the general\_merch group with prod\_grp\_mgr\_nm = "LeadMGR\_Jake". I need to go through every item in the prods table and update the "prod\_grp\_id" value for each item so that it contains the prod\_group\_id managed by someone with "LeadMGR" in their prod\_grp\_mgr\_nm. I want to update prods only when there's just one prod\_grp with prod\_grp\_mgr\_nm containing "LeadMGR". The goal here is to associate each prod with a group managed by a lead manager if one (and only one) exists. So far, I've got this SELECT statement: ``` SELECT prods.prod_id FROM prods p INNER JOIN prod_grp_prods pgp ON prod.prod_id = pgp.prod_id INNER JOIN prod_grps pg ON pgp.prod_grp_id = pg.acct_grp_id WHERE pg.prod_grp_mgr_nm LIKE '%LeadMGR%' GROUP BY p.prod_id HAVING COUNT(p.prod_id) = 1 ORDER BY p.prod_id` ``` This returns all the prod\_ids needing updating.
I tried to replicate your questions with below queries. I used subquery to update prods table. I hope this does replicate your issues. ``` create table #prods (prod_id int PRIMARY KEY, prod_grp_id int) create table #prod_grps (acct_grp_id int PRIMARY KEY, prod_grp_mgr_nm varchar(200)) create table #prod_grp_prods (prod_id int , prod_grp_id int) -- insert testing data insert into #prods values (1, null), (2,null), (3, null) insert into #prod_grps values (999, 'AssMGR_Bill'), (998, 'LeadMGR_Jake'), (995, 'LeadMGR_Jake') insert into #prod_grp_prods values (1, 999), (2, 995), (3, 999), (1,998) -- original query SELECT p.prod_id, pgp.prod_grp_id FROM #prods p INNER JOIN #prod_grp_prods pgp ON p.prod_id = pgp.prod_id INNER JOIN #prod_grps pg ON pgp.prod_grp_id = pg.acct_grp_id WHERE pg.prod_grp_mgr_nm LIKE '%LeadMGR%' GROUP BY p.prod_id , pgp.prod_grp_id HAVING COUNT(p.prod_id) = 1 ORDER BY p.prod_id -- update query update #prods set prod_grp_id = t.prod_grp_id from ( SELECT p.prod_id, pgp.prod_grp_id FROM #prods p INNER JOIN #prod_grp_prods pgp ON p.prod_id = pgp.prod_id INNER JOIN #prod_grps pg ON pgp.prod_grp_id = pg.acct_grp_id WHERE pg.prod_grp_mgr_nm LIKE '%LeadMGR%' GROUP BY p.prod_id , pgp.prod_grp_id HAVING COUNT(p.prod_id) = 1 )t where t.prod_id = #prods.prod_id -- view results select * from #prods ```
This query will solve your fundamental problem. ``` alter table prods drop column prod_grp_id ``` As you described, there is a many to many relationship between products and groups. You also have table prod\_grp\_prods to implement that relationship. Having a group id field in your product table is quite simply, a bad idea. **Edit starts here** Here is a bandaid solution ``` update p set prod_grp_id = pgp.prod_grp_id from prods p join prod_grp_prods pgp on p.prod_id = pgp.prod_id join ( select prod_id, count(*) records from prod_grp_prods group by prod_id having count(*) = 1) temp on temp.prod_id = p.prod_id ``` You'll need another query to nullify the other records.
Converting a SELECT with 2 JOINS and a COUNT into an UPDATE Statement
[ "", "sql", "t-sql", "" ]
I have a SQL query issue that seems easy to fix but I can't figure out how to make it work.. I basically have two tables : Orders, and OrderDetails... Each order have several products, registered in OrderDetails table. I want to be able to find all the orders that have 2 products ; one with a specific reference, and the other with a specific description. Here is the query I wrote: ``` SELECT o.orderNumber FROM `order` AS o JOIN `orderDetail` AS d ON o.id = d.orderID WHERE d.reference = "F40" AND d.description = "Epee" ``` Here is the fiddle: <http://sqlfiddle.com/#!2/bd94e/1> The query is returning 0 reccord, and it should return order number QQ00000QQ Can someone please explain to me how can I make that query work?? Thank you very much!!
If I understand you correctly, you want to find an order that has one orderline satisfying a condition (reference = "F40") and another orderline satisfying another condition (description = "Epee"). Doing a single join will not solve this, as you will be searching for one orderline that satisfies both conditions. You should do something like this instead: ``` SELECT orderNumber FROM `order` WHERE id IN ( SELECT orderid FROM orderDetail od1 INNER JOIN orderDetail od2 USING (orderid) WHERE od1.reference = 'F40' AND od2.description = "Epee" ) ```
I think you must a create an field same. example in : `order` = id\_order `orderDetail` = id\_order Then if you want to find all, you must insert that id\_order same. example : ``` INSERT INTO `order` (id_order , orderNumber) VALUES ('1','QQ00000QQ'), ('2','AA11111AA'), ('3','LO00000OL'), ('4','AA12345BB'); INSERT INTO `orderDetail` (orderID, reference, description,id_order) VALUES (1, 'F40', 'Wire','1'), (1, 'Q25', 'Epee','1'), (1, 'Z99', 'Mask','1'), (2, 'F40', 'Wire','2'), (3, 'Q25', 'Epee','2'), (4, 'F40', 'Wire','4'), (4, 'Z99', 'Mask','3'); SELECT o.orderNumber FROM `order` AS o JOIN `orderDetail` AS d ON o.id = d.orderID WHERE d.reference = '4' GROUP BY o.id ```
Search for orders that have two products, one with specific reference, other with specific description
[ "", "mysql", "sql", "join", "" ]
I'm creating a database in db2 and I want to add a constrain to validate whether the user inserting a valid email address, that contain %@%.% . Without luck...any advice?
You can use LIKE with wildcards. See [here](http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/core/c0011568.htm) for the wildcards on DB2. > The underscore character ( \_ ) represents any single character. > > The percent sign character (%) represents a string of zero or more characters. ``` SELECT email FROM YourTable WHERE email NOT LIKE '%_@__%.__%' ``` This will ignore the following cases (simple version for valid emails): * emails that have at least one character before the @; * emails that have at least two characters between @ and .; * emails that have at least two characters between . and the end. You can see an example in MySql in [sqlfiddle](http://sqlfiddle.com/#!9/026c7/1). To add it as a constraint, you do (as mustaccio said in a comment): ``` alter table your_table add constraint chk_email check (email like '%_@__%.__%') ```
You could create a trigger that checks the given string with a regular expression that describes the structure of an email ``` ^[A-Za-z0-9]+@[A-Za-z0-9]+.[A-Za-z0-9]+$ ``` However, the regular expression of an email is not easy to define: [Using a regular expression to validate an email address](https://stackoverflow.com/questions/201323/using-a-regular-expression-to-validate-an-email-address) ``` DECLARE RET VARCHAR(32); SET RET = XMLCAST ( XMLQUERY ('fn:matches($TEXT,"^[A-Za-z0-9]+@[A-Za-z0-9]+.[A-Za-z0-9]+$")' PASSING TEXT AS "TEXT" ) AS VARCHAR(32)); ``` Once you have the value of RET (true or false) you can do something in the trigger. You can test the regular expression you want to use from command line: ``` db2 "xquery fn:matches(\"johndoe@mail.com\",\"^[A-Za-z0-9]+@[A-Za-z0-9]+.[A-Za-z0-9]+$\")" ```
How to write a query to ensure email contains @
[ "", "sql", "db2", "check-constraints", "" ]
I have a table where every row has an id, and every time I insert a new row, it akes the highest id and adds 1. However, I'd like new rows to take back the number let by rows that were deleted. How can I find the lowest id that does not exist? Thank you
You can use `LEFT JOIN` on the same table, and look if the next id exists. I don't think this solution is really nice with a lot of rows, but it works : ``` SELECT (t.id + 1) FROM `table` as t LEFT JOIN `table` s ON s.id = (t.id + 1) WHERE s.id IS NULL ORDER BY t.id LIMIT 1 ```
**Don't re-use id values that were previously used.** Example scenario: * Say your site has an abusive user who sends provocative emails. Emails make reference to the user's primary key number. You delete the user's account for violations, but the emails are still out there. You reassign the deleted id to a new, nice user, but he keeps getting complaints about the past abusive emails that he was not responsible for. Auto-increment values should be *unique* but there is no need for them to be *consecutive*. A 32-bit signed integer can be up to 231-1, or 2147483647. A 64-bit signed integer can be up to 263-1, or 9223372036854775807. You aren't likely to use up all integers. MySQL does not take the highest value and add 1. It allocates the next value in the sequence. You may have missing values due to inserts failing, rows being deleted, or transactions being rolled back. This is normal. If you try to work around it, your application will become slow and complex. You should just use the auto-increment feature as designed, and accept that there will be some inevitable gaps. Exception: if you restart the MySQL service, InnoDB's auto-increment values are reset to MAX(id)+1.
Find lowest ID that does not exist
[ "", "mysql", "sql", "" ]
I need to synchronize tables between 2 databases daily, the source is MSSQL 2008, the target is MSSQL 2005. If I use UPDATE, INSERT, and DELETE statements (i.e. UPDATE rows that changed, INSERT new rows, DELETE rows no longer present), will there be performance improvements if I perform the DELETE statement first? i.e. so that the UPDATE statement doesn't look at rows that don't need to be updated, because they will be deleted. Here are some other things I need to consider. The tables have 1-3 million+ rows, and because of the amount of transactions and business requirements, the source DB needs to remain online, and the query needs to be as efficient as possible. The job will be run daily in a SQL server agent job on the target DB. On top of that, I am a DB rookie! Thanks StackOverflow community, you are awesome!
I'd say, first you do `delete`, then `update` then `insert`, so you don't have to update rows which will be deleted anyway and you'll not update rows which are just inserted. But actually, have you seen SQL Server [merge](http://msdn.microsoft.com/en-us/library/bb510625.aspx) syntax? It could save you a great amount of code. **update** I have not checked performance of `MERGE` statement against INSERT/UPDATE/DELETE, here's related [link](https://dba.stackexchange.com/questions/44067/can-i-optimize-this-merge-statement/44069#44069) given by Aaron Bertrand for more details.
Rule of Thumb: `DELETE`, then `UPDATE`, then `INSERT`. Performance aside, my main concern is to avoid any potential **Deadlocks** when: 1. Updating something you will immediately Delete. 2. Inserting something you may immediately try to Update. If you only modify what is necessary and use transactions correctly, then you may use any order. P.S. Someone suggested using `MERGE` - I've tried it a few times and my preference is to never use it.
Synchronizing tables - does order of UPDATE INSERT DELETE matter?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2005", "" ]
in mysql `table1` i have column `dateofreport` and there are few records per day e.g. ``` dateofreport 2013-05-31 2013-05-31 2013-05-30 2013-05-30 2013-05-30 2013-05-29 2013-04-31 2013-04-31 2013-04-31 2013-04-02 ``` I want to find out how many distinct days there are for a month, so result should be: 2013-May : 3, 2013-April : 2 i can do distinct days in all table: `SELECT COUNT( DISTINCT dateofreport ) FROM table1` or distinct months but i don't know how to group it by months. ``` SELECT DISTINCT DATE_FORMAT(`dateofreport`,'%Y-%M') as months FROM table1 ```
``` SELECT EXTRACT(YEAR_MONTH FROM dateofreport) AS ym, COUNT(DISTINCT dateofreport) AS count FROM table1 GROUP BY ym ``` The only reason I use EXTRACT() is that it's ANSI standard SQL, IIRC. Both types of function are likely to cause temporary tables.
``` SELECT DATE_FORMAT(dateofreport, '%Y-%M') AS months, COUNT(DISTINCT dateofreport) AS count FROM table1 GROUP BY months ```
mysql count distinct days in a month
[ "", "mysql", "sql", "distinct", "" ]
I need to generate a list of active customers in an MS Access database. All past and present customers are stored in a customers table. However, the criteria for determining active status needs to be derived from two other tables: intake and exit. A customer is considered active if they have an intake date which does not have an exit date after it. However, to confuse things, a former customer who has an exit date can become a customer again by getting a new intake date. Here are the relevant parts of the structure of the three tables that store this information: ``` customers table customerID name intake table intakeID customerID intakeDate exit date exitID customerID exitDate ``` A customer can have multiple intake records and multiple exit records. So the pseudocode for the SQL statement needs to look something like: ``` SELECT customerID, name FROM customers WHERE ((most recent intakeDate)>(most recent exitDate(if any))) ``` What should this look like in real SQL? For an MS Access 2010 database. Obviously, joins are necessary. But what types of joins? And how does it need to look?
``` SELECT ActiveCustomers.*, tblAddress.* FROM ( SELECT customers.name, customers.customerID, ( SELECT COUNT(intakeDate) FROM intake WHERE customers.customerID = intake.customerID AND Len(intakeDate & '') > 0 ) AS IntakeCount, ( SELECT COUNT(exitDate) FROM exit WHERE customers.customerID = exit.customerID AND Len(exitDate & '') > 0 ) AS ExitCount FROM customers ) AS ActiveCustomers INNER JOIN tblAddress ON ActiveCustomers.customerID = tblAddress.customerID WHERE IntakeCount > ExitCount AND tblAddress.CurrentAddress = True ```
I like @sqlgrl's approach of just looking at the most recent intake and exit for each customer, but I have adapted it to use MS Access-specific syntax and also, I think, tightened up the join logic a little bit: ``` select c.* from ([customers table] as c inner join ( select customerID, max(exitDate) as lastOut from [exit date] group by customerID ) as [out] on c.customerID = [out].customerID) inner join ( select customerID, max(intakeDate) as lastIn from [intake table] group by customerID ) as [in] on c.customerID = [in].customerID where [in].lastIn > [out].lastOut ``` The above basically says: * Build a list of each customer's most recent exit date * Build a list of each customer's most recent intake date * Join the two lists with the customers table * If the customer's most recent intake date is after their most recent exit date, include this customer in the final output
filtering records based on two other tables
[ "", "sql", "ms-access", "join", "subquery", "ms-access-2010", "" ]
I'm fixing a bug in a proprietary piece of software, where I have some kind of JDBC Connection (pooled or not, wrapped or not,...). I need to detect if it is a MySQL connection or not. All I can use is an SQL query. What would be an SQL query that succeeds on MySQL each and every time (MySQL 5 and higher is enough) and fails (Syntax error) on every other database?
## The preferred way, using JDBC Metadata... If you have access to a JDBC Connection, you can retrieve the vendor of database server fairly easily without going through an SQL query. Simply check the connection metadata: ``` string dbType = connection.getMetaData().getDatabaseProductName(); ``` This will should give you a string that beings with "MySQL" if the database is in fact MySQL (the string can differ between the community and enterprise edition). **If your bug is caused by the lack of support for one particular type of statement** which so happens that MySQL doesn't support, you really should in fact rely on [the appropriate metadata method](http://docs.oracle.com/javase/7/docs/api/java/sql/DatabaseMetaData.html) to verify support for that particular feature instead of hard coding a workaround specifically for MySQL. There are other MySQL-like databases out there (MariaDB for example). --- If you really must pass through an SQL query, you can retrieve the same string using this query: ``` SELECT @@version_comment as 'DatabaseProductName'; ``` However, the preferred way is by reading the DatabaseMetaData object JDBC provides you with.
Assuming your interesting preconditions (which other answers try to work around): Do something like this: ``` SELECT SQL_NO_CACHE 1; ``` This gives you a single value in MySQL, and fails in other platforms because `SQL_NO_CACHE` is a MySQL instruction, not a column. Alternatively, if your connection has the appropriate privileges: ``` SELECT * FROM mysql.db; ``` This is an information table in a database specific to MySQL, so will fail on other platforms. The other ways are better, but if you really are constrained as you say in your question, this is the way to do it.
Query to detect MySQL
[ "", "mysql", "sql", "jdbc", "" ]
Imagine the following scenario: Employees of a company can give votes to an arbitrary question (integer value). I have a complex request where I want to fetch five information: 1. Name of the company 2. Average vote value per company 3. Number of employees 4. Number of votes 5. Participation (no of votes/no of employees) The SQL query shall only fetch votes of companies, that the current user is employed at. Therefore I am accessing four different tables, following you see an excerpt of the table declarations: ``` User - id Company - id - name Employment - user_id (FK User.id) - company_id (FK Company.id) Vote - company_name - vote_value - timestamp ``` `User` and `Company` are related by an `Employment` (n:m relation, but needs to be extra table). The table `Vote` shall not be connected by PK/FK-relation, but they can be related to a company by their company name (`Company.name = Vote.company_name`). I managed to fetch all information **except for the number of employees** correctly by the following SQL query: ``` SELECT c.name AS company, AVG(v.vote_value) AS value, COUNT(e.user_id) AS employees, COUNT(f.face) AS votes, (COUNT(e.user_id) / COUNT(v.vote_value)) AS participation FROM Company c JOIN Employment e ON e.company_id = c.id JOIN User u ON u.id = e.user_id JOIN Vote v ON v.company_name = c.name AND YEAR(v.timestamp) = :year AND MONTH(v.timestamp) = :month AND DAY(v.timestamp) = :day WHERE u.id = :u_id GROUP BY v.company_name, e.company_id ``` But instead of fetching the correct number of employees, the `employee` field is always equal the number of votes. (And therefore the `participation` value is also wrong.) Is there any way to perform this **in one query without subqueries**1? What do I have to change so that the query fetches the correct number of employees? 1 I am using Doctrine2 and try to avoid subqueries as Doctrine does not support them. I just did not want to pull this into a Doctrine discussion. That's I why I broke this topic down to SQL level.
If you want to fetch the number of employees then the issue is that you are filtering by only 1 employee: ``` WHERE u.id = :u_id ``` Secondly, bear in mind that if you want to count the amount of employees and you have gotten into the vote grouping level, then of course you will have the amount of rows equal to the amount of votes. So you will have to distinct count as @Przem... mentioned: ``` COUNT(DISTINCT e.user_id) AS employees, ``` That way you will uniquely count the employees for the company (getting rid of the repeated employee ids for all the votes the employee has). As you mentioned in a comment: > It returns the 1 as employee count This is because of the where condition forcing to 1 employee with many votes. The distinct will only count the unique 1 employee filtered by the `where` clause and that is why you get only 1. However, that is the correct result (based on your filter condition). Adding subqueries in the `select` clause will also get you to the right result but at the expense of performance.
Try this--it calculates the votes as one subquery and the employees as another subquery. ``` SELECT c.name, ce.employee_count, cv.vote_count, cv.vote_count / ce.employee_count, cv.vote_value FROM (select company, count(*) AS 'employee_count' FROM employment GROUP BY company) ce INNER JOIN company c ON c.id = ce.company INNER JOIN (select company, AVG(vote_value) AS 'vote_value', count(*) as 'vote_count' FROM vote v GROUP BY company) cv ON c.id = cv.company ```
Complex SQL query over four tables does not fetch wanted result
[ "", "mysql", "sql", "database", "join", "" ]
How to get aSQL of a table? I want the SQL. Can anybody tell me the `TABLE_NAME` where SQL is maintained? I used SQL developer but is there any command or script or table name which shows the SQL of a table.
Try to spool the output of the below query, ``` SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name) FROM USER_TABLES u; ``` Like, ``` set pagesize 0 set long 90000 set feedback off set echo off spool schema.sql SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name) FROM USER_TABLES u WHERE TABLE_NAME = '<your_table>'; spool off; ``` Reference: <http://www.dba-oracle.com/oracle_tips_dbms_metadata.htm>
I'm assuming you're using Oracle. You can use `desc TABLE_NAME` in SQL\*Plus. More info in this question: [`show create table` equivalent in oracle sql](https://stackoverflow.com/questions/18264584/show-create-table-equivalent-in-oracle-sql) You should tag your question with only *one* DBMS
How to get a SQL script of a table?
[ "", "sql", "oracle", "" ]