Using SQL Server 2012, I have a classification table that is something like this: ``` prodName class1min class1max class2min class2max class3min class3max --------------------------------------------------------------------------------- prod A 1.5 1.8 1.8 2.1 2.1 2.5 prod B 3.6 3.9 3.9 4.5 4.5 5.6 prod C 2.7 3.2 3.2 3.6 3.6 4.2 etc... ``` Given a product name and a value, I need to find what classification (1, 2 or 3) my value belongs in. ie. I have a product B item with a value of 4.1. I just need to be able to identify that it belongs in class 2. So far I have experimented with creating unpivot tables based on what product is selected, like this: Prod B: ``` class value ------------------ class1min 3.6 class1max 3.9 class2min 3.9 class2max 4.5 class3min 4.5 class3max 5.6 ``` Then by inserting my value and sorting, I'm at least able to visualize the classification. ``` class value ------------------ class1min 3.6 class1max 3.9 class2min 3.9 myvalue 4.1 class2max 4.5 class3min 4.5 class3max 5.6 ``` I'm still having problems isolating it by code though, and I think there might be a better way. *NOTE*: in the case of a tie, I'd like the lower classification.
You can unpivot the data and then perform the comparison. Since you are using SQL Server 2012, then you can easily unpivot the `min`/`max` columns in sets using CROSS APPLY: ``` select prodname, class, [min], [max] from yourtable cross apply ( values ('class1', class1min, class1max), ('class2', class2min, class2max), ('class3', class3min, class3max) ) c(class, [min], [max]) ``` See [Demo](http://sqlfiddle.com/#!3/2d636/3). Once the data has been unpivoted, then you can compare your value to find the class. If you have more classifications, then you can easily add more values in the subquery: ``` DECLARE @Prod VARCHAR(32) = 'Prod B', @val DECIMAL(10,2) = 4.1; select prodname, class, [min], [max] from yourtable cross apply ( values ('class1', class1min, class1max), ('class2', class2min, class2max), ('class3', class3min, class3max) ) c(class, [min], [max]) where prodname = @Prod and @val > [min] and @val <= [max] ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/2d636/6)
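To make the unpivot-and-filter idea concrete, here is a small runnable sketch using Python's `sqlite3` with the sample data from the question. SQLite has no `CROSS APPLY`, so the per-row unpivot is emulated with `UNION ALL`; the `> min AND <= max` range test mirrors the answer above and keeps a boundary value (a tie) in the lower class:

```python
import sqlite3

# In-memory copy of the classification table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE products
    (prodName TEXT, class1min REAL, class1max REAL,
     class2min REAL, class2max REAL, class3min REAL, class3max REAL)""")
conn.executemany("INSERT INTO products VALUES (?,?,?,?,?,?,?)", [
    ("prod A", 1.5, 1.8, 1.8, 2.1, 2.1, 2.5),
    ("prod B", 3.6, 3.9, 3.9, 4.5, 4.5, 5.6),
    ("prod C", 2.7, 3.2, 3.2, 3.6, 3.6, 4.2),
])

# Emulate CROSS APPLY (VALUES ...) with a UNION ALL unpivot, then filter.
row = conn.execute("""
    SELECT class FROM (
        SELECT prodName, 'class1' AS class, class1min AS mn, class1max AS mx FROM products
        UNION ALL
        SELECT prodName, 'class2', class2min, class2max FROM products
        UNION ALL
        SELECT prodName, 'class3', class3min, class3max FROM products
    )
    WHERE prodName = ? AND ? > mn AND ? <= mx
""", ("prod B", 4.1, 4.1)).fetchone()
result = row[0]  # the class that 4.1 falls into for prod B
```

Running this classifies the product-B value 4.1 into `class2`, matching the demo linked above.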
Since there are only three possible classes, you can accomplish this quite simply with a `CASE` expression: ``` DECLARE @Prod VARCHAR(32) = 'Prod B', @val DECIMAL(10,2) = 4.1; SELECT [Class] = CASE WHEN @val <= class1max THEN 'Class1' WHEN @val > class3Min THEN 'Class3' ELSE 'Class2' END FROM dbo.tableName WHERE ProdName = @Prod; ``` Now that we know that the "problem" is that you actually have 9 classifications, not the 3 as stated in the question, this still expands just fine: ``` DECLARE @Prod VARCHAR(32) = 'Prod B', @val DECIMAL(10,2) = 4.1; SELECT [Class] = CASE WHEN @val <= class1max THEN 'Class1' WHEN @val > class2Min AND @val <= class2Max THEN 'Class2' WHEN @val > class3Min AND @val <= class3Max THEN 'Class3' WHEN @val > class4Min AND @val <= class4Max THEN 'Class4' WHEN @val > class5Min AND @val <= class5Max THEN 'Class5' WHEN @val > class6Min AND @val <= class6Max THEN 'Class6' WHEN @val > class7Min AND @val <= class7Max THEN 'Class7' WHEN @val > class8Min AND @val <= class8Max THEN 'Class8' ELSE 'Class9' END FROM dbo.tableName WHERE ProdName = @Prod; ``` If the problem with this is that it's too much code, well, you could consider changing the design.
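The three-way `CASE` logic from this answer can be sanity-checked in plain Python; the boundary handling below (a value exactly on a class boundary lands in the lower class) mirrors the `<=`/`>` comparisons above:

```python
def classify(val, row):
    """CASE-style classification for the three-class layout in the question.

    row is (class1min, class1max, class2min, class2max, class3min, class3max).
    """
    c1min, c1max, c2min, c2max, c3min, c3max = row
    if val <= c1max:       # WHEN @val <= class1max THEN 'Class1'
        return "Class1"
    if val > c3min:        # WHEN @val > class3min THEN 'Class3'
        return "Class3"
    return "Class2"        # ELSE 'Class2'

# prod B's boundaries from the question
cls = classify(4.1, (3.6, 3.9, 3.9, 4.5, 4.5, 5.6))
```

A value of 3.9 (a tie on the class1/class2 boundary) falls into `Class1`, satisfying the asker's tie-breaking requirement.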
Classify Value Within Max/Min Columns in SQL
[ "sql", "sql-server", "t-sql", "sql-server-2012", "unpivot" ]
I am facing a situation where a user has changed his mobile network and subscribed to our service, but his mobile number is the same. So, for his old subscription I have his old mobile network, and for his new subscription I have his new mobile network record. For example ``` 4474123456 subscribed gamesClub using O2 network on 2013-10-20 4474123456 subscribed musicClub using Orange network on 2013-12-05 ``` Therefore, I want to display the mobile numbers that have subscribed to our services using different networks. ``` Database: MySQL Table : MobileUsers field: userunique,mobileno, subscription,networkname, subscribedDate ``` Queries I have tried: ``` SELECT mobileno,networkname,count(*) from MobileUsers group by mobileno,networkname SELECT mobileno,count(*) from MobileUsers group by mobileno,networkname ``` But none of the results are satisfactory. They also include mobile numbers that have multiple subscriptions on the same network. Thank you.
I think you need to use `GROUP BY/HAVING`: ``` SELECT MobileNo FROM MobileUsers GROUP BY MobileNo HAVING COUNT(DISTINCT networkname) > 1 -- MORE THAN ONE NETWORK ``` If you then need to get all records back for these mobile numbers, you would need to put the above in a subquery and join back to it: ``` SELECT m.* FROM MobileUsers m INNER JOIN ( SELECT MobileNo FROM MobileUsers GROUP BY MobileNo HAVING COUNT(DISTINCT networkname) > 1 -- MORE THAN ONE NETWORK ) Dupe ON dupe.MobileNo = m.MobileNo ```
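A quick runnable check of the `HAVING COUNT(DISTINCT ...)` approach, using Python's `sqlite3` as a stand-in for MySQL (the second number below has two subscriptions on the *same* network, so it should be excluded — exactly the case the asker's `GROUP BY` attempts got wrong):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MobileUsers (mobileno TEXT, subscription TEXT, networkname TEXT)")
conn.executemany("INSERT INTO MobileUsers VALUES (?,?,?)", [
    ("4474123456", "gamesClub", "O2"),
    ("4474123456", "musicClub", "Orange"),   # two different networks -> keep
    ("4474999999", "gamesClub", "O2"),
    ("4474999999", "musicClub", "O2"),       # same network twice -> exclude
])

rows = conn.execute("""
    SELECT mobileno FROM MobileUsers
    GROUP BY mobileno
    HAVING COUNT(DISTINCT networkname) > 1
    ORDER BY mobileno
""").fetchall()
multi_network = [r[0] for r in rows]
```

Only the number that appears on more than one distinct network survives the `HAVING` filter.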
You need to use a subquery. ``` Select userunique, mobileno, subscription, networkname, subscribedDate From MobileUsers as t1 Where exists ( SELECT * FROM MobileUsers as t2 WHERE t2.mobileno=t1.mobileno and t1.networkname<>t2.networkname ) ```
SQL Query To Display Mobile number that are registered in Different Mobile Network
[ "mysql", "sql", "database" ]
I've been working on a 'simple' problem all day and it's driving me mad. I'm sure I am missing something obvious; however no amount of searching is helping me. I have a 1 row data table such as; (Table\_1) ``` Person_1 Person_2 Person_3 PersonID ABC DEF GHI ``` However I need to replace the references with values from another table by matching on the ID column, such as; (Table\_2) ``` PersonID Work_Done LMN 298 GHI 187 ABC 872 XYZ 468 DEF 512 ``` Therefore returning; ``` Person_1 Person_2 Person_3 Work_Done 872 512 187 ``` I'm using Microsoft SQL Server Enterprise Edition V8.00.2039 and I have no control over my source tables. Any help (even just ideas on what to Google) would be really appreciated. SQL to generate example tables; ``` -- Table_1 SELECT 'ABC' AS Person_1, 'DEF' AS Person_2, 'GHI' AS Person_3 ; -- Table_2 SELECT 'LMN' AS Person_ID, 298 AS Work_Done UNION ALL SELECT 'GHI' AS Person_ID, 187 AS Work_Done UNION ALL SELECT 'ABC' AS Person_ID, 872 AS Work_Done UNION ALL SELECT 'XYZ' AS Person_ID, 468 AS Work_Done UNION ALL SELECT 'DEF' AS Person_ID, 512 AS Work_Done ; --Returning SELECT '872' AS Person_1, '512' AS Person_2, '187' AS Person_3; ```
Since you can't change the design, here's the code to retrieve what you want: ``` SELECT Person_1 = a.Work_Done , Person_2 = b.Work_Done , Person_3 = c.Work_Done FROM Table1 t JOIN Table2 a ON t.Person_1 = a.Person_ID JOIN Table2 b ON t.Person_2 = b.Person_ID JOIN Table2 c ON t.Person_3 = c.Person_ID ``` Multiple joins to aliases of the same table (Table2)
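Here is the multiple-aliases-of-one-table join from this answer, run end to end against the question's sample data in SQLite via Python's `sqlite3` (any SQL engine behaves the same for this pattern):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Person_1 TEXT, Person_2 TEXT, Person_3 TEXT)")
conn.execute("INSERT INTO Table1 VALUES ('ABC','DEF','GHI')")
conn.execute("CREATE TABLE Table2 (Person_ID TEXT, Work_Done INTEGER)")
conn.executemany("INSERT INTO Table2 VALUES (?,?)", [
    ("LMN", 298), ("GHI", 187), ("ABC", 872), ("XYZ", 468), ("DEF", 512)])

# Join Table2 three times under three aliases, once per Person_N column.
row = conn.execute("""
    SELECT a.Work_Done, b.Work_Done, c.Work_Done
    FROM Table1 t
    JOIN Table2 a ON t.Person_1 = a.Person_ID
    JOIN Table2 b ON t.Person_2 = b.Person_ID
    JOIN Table2 c ON t.Person_3 = c.Person_ID
""").fetchone()
```

The single result row carries the looked-up `Work_Done` value for each person column, matching the expected output in the question.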
``` SELECT * FROM ( SELECT unpvt.Person_Type, b.Work_Done FROM Table_1 a UNPIVOT ( Person_ID FOR Person_Type IN (a.[Person_1], a.[Person_2], a.[Person_3]) ) unpvt JOIN Table_2 b ON unpvt.Person_ID = b.Person_ID ) c PIVOT ( MAX(c.Work_Done) FOR c.Person_Type IN ([Person_1], [Person_2], [Person_3]) ) pvt ``` `SQL Fiddle`
TSQL 'Lookup' Function - Confused
[ "sql", "sql-server", "t-sql", "vlookup" ]
I need to return an array of categories where a particular item, of which it `has_many`, has a `.count >= 2`. I know this syntax is wrong, but I'm trying to figure out the correct way; any tips? `Model.where(model.has_many_relationship_item.count >= 2) ??` Thanks in advance!!
Category.joins(:items).group("categories.id").having("COUNT(items.id) >= 2")
The only way I know how to do this is: ``` Model.joins(:items).group('models.id').having('count(items.id) >= 2') ``` assuming (for example) the class of the related model is `Item`
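The SQL that this `joins`/`group`/`having` chain generates can be verified directly; below is a hypothetical `categories`/`items` schema run through Python's `sqlite3` (the table and column names are illustrative, matching Rails conventions rather than anything in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, category_id INTEGER)")
conn.executemany("INSERT INTO categories VALUES (?,?)", [(1, "a"), (2, "b")])
# Category 1 has two items, category 2 only one.
conn.executemany("INSERT INTO items (category_id) VALUES (?)", [(1,), (1,), (2,)])

# Roughly what Model.joins(:items).group(...).having(...) compiles to.
rows = conn.execute("""
    SELECT categories.id FROM categories
    JOIN items ON items.category_id = categories.id
    GROUP BY categories.id
    HAVING COUNT(items.id) >= 2
""").fetchall()
```

Only the category with at least two associated items comes back, which is what the asker wants the relation to return.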
Rails postgres query - Model.where(model.has_many_relationship_item.count >= 2)?
[ "sql", "ruby-on-rails", "postgresql", "relational-database" ]
My problem is the following. I want to `SELECT` the minimum of a years list and display another row of my table. An example: ``` SELECT MIN(Year) FROM table -> Searching for the lowest year. ``` and then I want it to display the `Winners` of the first year. Is there a way to do this in just one line?
``` select winner from table where year in (select min(year) from table) ```
You need to do a self-JOIN between table and itself (I suppose you want to do it in a single *statement*, not a single *line*): ``` SELECT A.* FROM table AS A JOIN ( SELECT MIN(Year) AS Year FROM table ) AS B ON (A.Year = B.Year); ``` This assumes that there is only one record per minimum-year in every group of interest.
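Both answers (the `IN (SELECT MIN(...))` subquery and the self-join) reduce to the same idea: filter on the minimum year. A minimal check of the subquery form in SQLite via Python, with made-up year/winner data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE winners (Year INTEGER, Winner TEXT)")
conn.executemany("INSERT INTO winners VALUES (?,?)",
                 [(2001, "Alice"), (1998, "Bob"), (2005, "Carol")])

# Single statement: the subquery supplies the lowest year as the filter.
row = conn.execute("""
    SELECT Winner FROM winners
    WHERE Year = (SELECT MIN(Year) FROM winners)
""").fetchone()
```

The winner of the earliest year is returned in one statement, as the asker wanted.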
SELECT a variable as condition and display another one
[ "sql", "sql-server", "t-sql", "select" ]
I have created a Stored Procedure ``` CREATE PROCEDURE GetAllRecords() BEGIN SELECT * FROM my_table; END // ``` Now I want to add a parameter to this Stored Procedure like this : ``` CREATE PROCEDURE GetAllRecords(id1 INT(4)) BEGIN SELECT * FROM my_table WHERE `id` = id1; END // ``` How can I edit my Stored Procedure?
You have to drop the procedure first and then re-create it ``` DROP PROCEDURE IF EXISTS GetAllRecords; CREATE PROCEDURE GetAllRecords(IN _id INT) SELECT * FROM my_table WHERE id = _id; ``` And since you're using only one statement in your procedure, you can ditch the `BEGIN ... END` block and use the usual delimiter. In case you're wondering: no, you cannot use [`ALTER PROCEDURE`](http://dev.mysql.com/doc/refman/5.5/en/alter-procedure.html) in this case > This statement can be used to change the characteristics of a stored > procedure. However, you cannot change the parameters or body of a > stored procedure using this statement; to make such changes, you > **must drop and re-create** the procedure using `DROP PROCEDURE` and `CREATE PROCEDURE`.
Delete the procedure ``` drop procedure GetAllRecords// ``` And recreate it ``` CREATE PROCEDURE GetAllRecords(id1 INT(4)) ... ```
Edit a Stored Procedure using ALTER?
[ "mysql", "sql", "stored-procedures" ]
I have to create a table currencies ``` CREATE TABLE currencies ( from_currency CHAR(30) NOT NULL, ExchRate REAL NOT NULL, to_currency CHAR(30) NOT NULL, PRIMARY KEY (from_currency), FOREIGN KEY (to_currency) REFERENCES currencies(from_currency) ) ``` I have to save exchange rates for 6 currencies (inr, eur, cad, usd, gbp, cfp), but the problem is that I can't store the same value twice in the first column. I can save my currency rates as follows ``` from_curr ER --> to_curr INR --> 60 --> USD USD --> 0 --> USD GBP --> 70 --> USD ``` etc. for all the currencies, but when I try to store another rate for a currency, like ``` INR --> 1 --> USD or GBP --> 70 --> USD ``` I get an error that duplicate values are not allowed. How can I solve this problem?
You should either create a composite primary key, as a pair of 2 currencies uniquely identifies a record for you: ``` CREATE TABLE currencies ( from_currency CHAR(30) NOT NULL, ExchRate REAL NOT NULL, to_currency CHAR(30) NOT NULL, PRIMARY KEY (from_currency, to_currency) ) ``` Or alternatively you can add a new column for the primary key, e.g. as an autoincrement column: ``` CREATE TABLE currencies ( [Id] [int] IDENTITY(1,1) NOT NULL, from_currency CHAR(30) NOT NULL, ExchRate REAL NOT NULL, to_currency CHAR(30) NOT NULL, PRIMARY KEY ([Id]) ) ```
You are not storing a currency, but rather a pair of currencies. ``` CREATE TABLE currency_exchange_rates ( from_currency CHAR(30) NOT NULL, to_currency CHAR(30) NOT NULL, ExchRate REAL NOT NULL, PRIMARY KEY (from_currency,to_currency) ) ``` I would also suggest a separate table holding the six currencies, linked to the table above, to ensure valid currency codes. You might also want to hold a date field (and make it part of the primary key), since rates can change over time...
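The composite-key fix can be demonstrated in a few lines with SQLite through Python's `sqlite3`: two rows with the same `from_currency` but different `to_currency` are accepted, while a duplicate pair is rejected, which is exactly the constraint the asker needs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE currency_exchange_rates (
    from_currency TEXT NOT NULL,
    to_currency   TEXT NOT NULL,
    ExchRate      REAL NOT NULL,
    PRIMARY KEY (from_currency, to_currency))""")

conn.execute("INSERT INTO currency_exchange_rates VALUES ('INR','USD',60)")
conn.execute("INSERT INTO currency_exchange_rates VALUES ('INR','EUR',80)")  # same from_currency: OK now

try:
    # Same (from, to) pair again: the composite key rejects it.
    conn.execute("INSERT INTO currency_exchange_rates VALUES ('INR','USD',61)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The second `INR` row succeeds and only the exact duplicate pair fails, resolving the original "duplicate values are not allowed" error.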
I have been assigned a project for currency exchange rate IN SQL . How to save exchange rates for currencies?
[ "sql" ]
``` select * from tableA order by cast(columnA as int), columnB ``` This is my current query. There is a scenario where column A is null, and the result rows where column A is null are pushed to the end. Is there a way such that if columnA is null, those rows are placed before the rows where columnA is not null? Thanks.
Something along these lines should work if your dbms supports standard SQL. ``` select (case when columnA is null then 0 else 1 end) as sort_order, * from tableA order by sort_order, columnA, columnB; ```
Try the query below; it should help you. ``` SELECT * FROM tableA ORDER BY (CASE WHEN columnA IS NULL THEN 1 ELSE 0 END) DESC, CAST(columnA as int), columnB ``` It displays the NULL results before the NOT NULL results.
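Both answers use the same `CASE`-as-sort-key trick; here it is verified in SQLite through Python's `sqlite3` (sample data is made up — the point is that the NULL rows sort first, then the rest sort numerically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableA (columnA TEXT, columnB TEXT)")
conn.executemany("INSERT INTO tableA VALUES (?,?)",
                 [("10", "x"), (None, "y"), ("2", "z"), (None, "w")])

# First key: 0 for NULL, 1 otherwise, so NULLs come first.
# Second key: numeric cast, so "2" sorts before "10".
rows = conn.execute("""
    SELECT columnA, columnB FROM tableA
    ORDER BY CASE WHEN columnA IS NULL THEN 0 ELSE 1 END,
             CAST(columnA AS INTEGER), columnB
""").fetchall()
```

The NULL rows lead the result set (tie-broken by `columnB`), followed by the numeric values in ascending order.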
sql query where null results get placed first
[ "sql" ]
I want to get the rows sorted as **3,4,5,1,2,6,7,8,9** based on status, or based on its description: ``` Status Description 1 New 2 Hold 3 Counter-Proposed 4 Partial-Counter-Proposed 5 Confirmed 6 Partial response Accept 7 Respone Accept 8 Response Reject 9 Cancelled ``` I tried it with union all by selecting `3,4,5 as one set and 1,2,6,7,8,9 as another set`; is there an easier way than using multiple sets? Please guide me to a solution.
``` select Status, Description from Status order by case when Status in (3,4,5) then 1 else 2 end, Status ```
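The `CASE`-in-`ORDER BY` answer can be exercised directly; below it runs against the question's nine statuses in SQLite via Python's `sqlite3`, producing the requested 3,4,5,1,2,6,7,8,9 ordering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Status (Status INTEGER, Description TEXT)")
descriptions = ["New", "Hold", "Counter-Proposed", "Partial-Counter-Proposed",
                "Confirmed", "Partial response Accept", "Response Accept",
                "Response Reject", "Cancelled"]
conn.executemany("INSERT INTO Status VALUES (?,?)",
                 [(i, d) for i, d in enumerate(descriptions, start=1)])

# Bucket 3,4,5 first, everything else second; sort by Status within buckets.
order = [r[0] for r in conn.execute("""
    SELECT Status FROM Status
    ORDER BY CASE WHEN Status IN (3,4,5) THEN 1 ELSE 2 END, Status
""")]
```

A single `ORDER BY` expression replaces the asker's two `UNION ALL` sets.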
@RedFilter's answer works. However, you could argue that there is a missing column on Status. You should consider adding a column to Status to represent which "set" the status is in. Then you can join from your main table to Status and sort by this new column. This way, you would not have to remember to update any code if you add an additional status.
Oracle SQL Custom sorting
[ "sql", "oracle" ]
I added a new column to an already existing table, I just need to get data into this new column...I assume I can use a variant of the insert command, yet I do not know how to specify the column name and then how to order the values going into the new column.
OK, after some conversation through the comments, let's get to an answer. I suppose your table is something like `id, name, age, dateBirth, etc fields`, but whoever created this table forgot to add the gender for the registries. As you said that the new column is `sex enum('m', 'f')`, you will have to update every registry on this table one by one, like this: ``` update matches set sex = 'm' where id = 1; ``` Pay attention that with this command I just updated the row on the table where id=1, and I'm assuming that id is your primary key. In the where clause you have to put your primary key, otherwise you may update more than one row. If your table has many registries, there is a way to cut down the heavy work (at least a little). In order to update many rows at once, you have to do an UPDATE with a LIKE filter: you can set a filter that identifies many women at a time, then many men at a time, like this: ``` update matches set sex = 'f' where name like '%Jheniffer%' ``` Since Jheniffer is a female name, most likely you will update every registry which has Jheniffer as part of the name, like 'Jheniffer Smith'. Repeat this process for the common names until the work is done for the women, then repeat for the men. Hope this helps you understand.
You have to use the `update` command; `insert` is for adding new rows. ``` update myTABLE set NewColumn = ? ```
inserting data into a new column of an already exsisting table
[ "mysql", "sql" ]
I need to get the counts of records which have an Id of 11 and 12 separately. Then I have to group them by period (which is 1,2,3,4,5) and divide the counts for each ID within the corresponding group, i.e. (count of ID=11 with period 1) / (count of ID=12 with period 1). I have tried this, but it just gives me a count of each. How can I divide them in the same table? Example : ``` SELECT COUNT(pNum), SK_MetricDatePeriod , SK_MetricID FROM ( SELECT SK_PatientID as pNum , SK_MetricDatePeriod ,SK_MetricID FROM [IntegratedCare].[report].[MetricValues] WHERE SK_MetricID = 11 UNION ALL SELECT SK_PatientID ,SK_MetricDatePeriod , SK_MetricID FROM [IntegratedCare].[report].[MetricValues] WHERE SK_MetricID = 12 ) t WHERE pNum IS NOT NULL GROUP BY SK_MetricID ,SK_MetricDatePeriod ORDER BY SK_MetricID,SK_MetricDatePeriod; ``` The result is : ``` Count MetricPeriod MetricID 10199 1 11 10075 2 11 9991 3 11 9891 4 11 8952 5 11 12298 1 12 12130 2 12 12058 3 12 11943 4 12 10860 5 12 ``` How can I count and divide in the above query? The result I am looking for is like: ``` DividedResult MetricPeriod 10199 1 10075 2 9991 3 9891 4 8952 5 ``` DividedResult is obtained by dividing (count value where period = 1 and metric id = 11) / (count value where period = 1 and metric id = 12)
The `HAVING COUNT ...` clause is there to avoid a division by 0 ``` SELECT count(SK_PatientID) as pNum , count(case when SK_MetricID = 11 then 0 end) / count(case when SK_MetricID = 12 then 0 end) , SK_MetricDatePeriod FROM [MetricValues] WHERE SK_MetricID In (11, 12) GROUP BY SK_MetricDatePeriod Having count(case when SK_MetricID = 12 then 0 end) > 0 ```
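The conditional-aggregation trick above (`COUNT(CASE WHEN ... THEN 0 END)` counts only the matching rows) can be checked in SQLite via Python's `sqlite3`. Note one addition of mine: a `* 1.0` factor, because integer division would otherwise truncate the ratio — the answer above may need the same cast depending on the column types:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MetricValues
    (SK_PatientID INTEGER, SK_MetricDatePeriod INTEGER, SK_MetricID INTEGER)""")
rows_in = [(i, 1, 11) for i in range(4)]   # 4 patients with metric 11 in period 1
rows_in += [(i, 1, 12) for i in range(2)]  # 2 patients with metric 12 in period 1
conn.executemany("INSERT INTO MetricValues VALUES (?,?,?)", rows_in)

result = conn.execute("""
    SELECT SK_MetricDatePeriod,
           COUNT(CASE WHEN SK_MetricID = 11 THEN 0 END) * 1.0
         / COUNT(CASE WHEN SK_MetricID = 12 THEN 0 END) AS ratio
    FROM MetricValues
    WHERE SK_MetricID IN (11, 12)
    GROUP BY SK_MetricDatePeriod
    HAVING COUNT(CASE WHEN SK_MetricID = 12 THEN 0 END) > 0
""").fetchall()
```

With 4 metric-11 rows and 2 metric-12 rows in period 1, the single output row is the ratio 2.0 for that period.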
Not a full answer, but it wouldn't let me comment. Wouldn't it be easier to select your original query into a working table and then divide pNum where MetricPeriod = MetricPeriod? If you are just looking for an average across every MetricID, you could just sum the pNum of each occurrence of a period and then divide by the total number of periods. Just throwing you some ideas; hope this helps.
count and divide two columns of the same table in sql
[ "sql", "t-sql" ]
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that > corresponds to your MySQL server version for the right syntax to use near '(sele > ct ifnull(count(tbl\_Product.manufactureId),0) > from tbl\_product where manu' at line 5 ``` create procedure ManufactureCheckForDelete(p_manufactureid varchar(50)) begin declare p_id longtext ; declare p_result longtext; set p_id (select ifnull(count(tbl_Product.manufactureId),0) from tbl_product where manufactureId = p_manufactureId ) ; if p_id <> 0 then set p_result = 'true' ; else set p_result = 'false' ; end if ; select p_result ; end ```
You need to use the `=` operator to assign the result from the query ``` set p_id (select ifnull(count(tbl_Product.manufactureId),0) ^^ here from tbl_product where manufactureId = p_manufactureId ) ; set p_id= (select ifnull(count(tbl_Product.manufactureId),0) ^^ here from tbl_product where manufactureId = p_manufactureId ) ; ```
Your error says you have a space in `select` > ...syntax to use near '**(sele ct** ifnull... And you are missing a `=` before the select. ``` set p_id = (select ... ^-------------here ```
Error 1064 in procedure
[ "mysql", "sql" ]
I just came up with an idea for a piece of code to show all the distinct values for each column, and count how many records for each. I want the code to loop through all columns. Here's what I have so far... I'm new to SQL so bear with the noobness :) Hard code: ``` select [Sales Manager], count(*) from [BT].[dbo].[test] group by [Sales Manager] order by 2 desc ``` Attempt at dynamic SQL: ``` Declare @sql varchar(max), @column as varchar(255) set @column = '[Sales Manager]' set @sql = 'select ' + @column + ',count(*) from [BT].[dbo].[test] group by ' + @column + 'order by 2 desc' exec (@sql) ``` Both of these work fine. How can I make it loop through all columns? I don't mind if I have to hard code the column names and it works its way through subbing in each one for @column. Does this make sense? Thanks all!
You can use dynamic SQL and get all the column names for a table. Then build up the script: ``` Declare @sql varchar(max) = '' declare @tablename as varchar(255) = 'test' select @sql = @sql + 'select [' + c.name + '],count(*) as ''' + c.name + ''' from [' + t.name + '] group by [' + c.name + '] order by 2 desc; ' from sys.columns c inner join sys.tables t on c.object_id = t.object_id where t.name = @tablename EXEC (@sql) ``` Change `@tablename` to the name of your table (without the database or schema name).
This is a bit of an XY answer, but if you don't mind hardcoding the column names, I suggest you do just that, and avoid dynamic SQL - and the loop - entirely. Dynamic SQL is generally considered the last resort, opens you up to security issues (SQL injection attacks) if not careful, and can often be slower if queries and execution plans cannot be cached. If you have a ton of column names you can write a quick piece of code or mail merge in Word to do the substitution for you. --- However, as far as how to get column names, assuming this is SQL Server, you can use the following query: ``` SELECT c.name FROM sys.columns c WHERE c.object_id = OBJECT_ID('dbo.test') ``` Therefore, you can build your dynamic SQL from this query: ``` SELECT 'select ' + QUOTENAME(c.name) + ',count(*) from [BT].[dbo].[test] group by ' + QUOTENAME(c.name) + 'order by 2 desc' FROM sys.columns c WHERE c.object_id = OBJECT_ID('dbo.test') ``` and loop using a cursor. Or compile the whole thing together into one batch and execute. Here we use the `FOR XML PATH('')` trick: ``` DECLARE @sql VARCHAR(MAX) = ( SELECT ' select ' --note the extra space at the beginning + QUOTENAME(c.name) + ',count(*) from [BT].[dbo].[test] group by ' + QUOTENAME(c.name) + 'order by 2 desc' FROM sys.columns c WHERE c.object_id = OBJECT_ID('dbo.test') FOR XML PATH('') ) EXEC(@sql) ``` Note I am using [the built-in `QUOTENAME` function](http://technet.microsoft.com/en-us/library/ms176114.aspx) to escape column names that need escaping.
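The common core of both answers — enumerate the column names from the catalog, then build one `GROUP BY` query per column — can be sketched outside SQL Server too. Below, Python's `sqlite3` stands in, with `PRAGMA table_info` playing the role of `sys.columns` and the table/column names entirely made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (SalesManager TEXT, Region TEXT)")
conn.executemany("INSERT INTO test VALUES (?,?)",
                 [("Ann", "E"), ("Ann", "W"), ("Bob", "E")])

# Column names come from the catalog, not from hardcoding.
cols = [r[1] for r in conn.execute("PRAGMA table_info(test)")]

# Build and run one value-frequency query per column; the double-quoted
# identifier also handles names with spaces like [Sales Manager].
results = {}
for col in cols:
    q = f'SELECT "{col}", COUNT(*) FROM test GROUP BY "{col}" ORDER BY 2 DESC'
    results[col] = conn.execute(q).fetchall()
```

Each entry of `results` holds the distinct values of one column with their counts, sorted by frequency — the per-column report the asker was looping toward.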
Looping through column names with dynamic SQL
[ "sql", "loops", "dynamic" ]
I need to select all distinct combinations of 2 columns, region and make, as well as a sum of the total column for each of these combinations. Let's say I have 4 columns ``` ID Region Make total 1 E blue 2 2 E red 1 3 E blue 1 4 W green 2 5 W blue 2 6 W red 1 7 W red 2 ``` Now I want a SUM(total) for all unique combinations of region and make. So my results should look something like this: ``` Region Make SUM E blue 3 E red 1 W green 2 W blue 2 W red 3 ``` I just need a simple select statement like: ``` select distinct........... , SUM(total) from table1 ```
You can use `GROUP BY` to calculate the sum based on distinct combinations of the columns you specify after the group by clause ``` Select Region,Make,SUM(total) as total from table1 group by Region,Make ```
Try this: ``` select Region, Make, sum(total) from table1 group by Region, Make ```
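Both answers are the same `GROUP BY` query; here it runs against the question's exact sample rows in SQLite through Python's `sqlite3` (an `ORDER BY` is added only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (ID INTEGER, Region TEXT, Make TEXT, total INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?,?,?,?)", [
    (1, "E", "blue", 2), (2, "E", "red", 1), (3, "E", "blue", 1),
    (4, "W", "green", 2), (5, "W", "blue", 2), (6, "W", "red", 1),
    (7, "W", "red", 2)])

# GROUP BY yields one row per distinct (Region, Make) pair with its sum.
rows = conn.execute("""
    SELECT Region, Make, SUM(total) FROM table1
    GROUP BY Region, Make
    ORDER BY Region, Make
""").fetchall()
```

The five grouped rows match the expected result table in the question, e.g. `('E', 'blue', 3)`.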
SQL Select distinct combinations of columns and sum
[ "sql", "sql-server", "sum" ]
In my case there are orders and order positions. Each order position has a quantity. For example: ![enter image description here](https://i.stack.imgur.com/mnVXt.gif) But now I need a single row for each "position element". This is the output I want: ![enter image description here](https://i.stack.imgur.com/rfgHO.gif) My idea is to use `rank()` / `over()` to get the incremental number, but I don't know how to use the quantity as multiplicator. Is there a smart solution to use a single colum as "row multiplicator"? A sql function or a loop is not possible in my case, just plain sql :) Thank you! :) --- With the query from `gvee` I was able to create a solution for my problem: ``` Select BelPosId as OrderPositionId, Artikelnummer as ProductId, Bezeichnung1 as ProductName, Menge as Quantity, NumberTable.number+1 as ElementId FROM KHKVKBelegePositionen INNER JOIN (SELECT (a.number * 256) + b.number As number FROM ( SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255 ) As a CROSS JOIN ( SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255 ) As b) NumberTable ON NumberTable.number < Menge ``` The trick was to `inner join` the column "quantity" to the "number" column from the numbers table with the `less than` operator to simulate a "multiplicator": ![enter image description here](https://i.stack.imgur.com/np2AI.gif)
You need to join to a numbers table! ``` CREATE TABLE dbo.numbers ( number int NOT NULL ) ALTER TABLE dbo.numbers ADD CONSTRAINT pk_numbers PRIMARY KEY CLUSTERED (number) WITH FILLFACTOR = 100 GO INSERT INTO dbo.numbers (number) SELECT (a.number * 256) + b.number As number FROM ( SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255 ) As a CROSS JOIN ( SELECT number FROM master..spt_values WHERE type = 'P' AND number <= 255 ) As b ``` Here's where I keep my latest script: <http://gvee.co.uk/files/sql/dbo.numbers%20&%20dbo.calendar.sql> Once you have this in place you perform a simple join: ``` SELECT KHKVKBelegePositionen.BelPosId As OrderPositionId , KHKVKBelegePositionen.Artikelnummer As ProductId , KHKVKBelegePositionen.Bezeichnung1 As ProductName , KHKVKBelegePositionen.Menge As Quantity , numbers.number As ElementId FROM KHKVKBelegePositionen INNER JOIN dbo.numbers ON numbers.number BETWEEN 1 AND KHKVKBelegePositionen.Menge ```
You can use that script which doesn't require any external tables: ``` SELECT t.quantity, n.num FROM table10 t INNER JOIN (SELECT Row_number() OVER( ORDER BY object_id) num FROM sys.all_objects) n ON t.quantity >= n.num ``` I didn't include other columns but you can just add them to the `select` list
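The row-multiplier join at the heart of both answers works in any engine that can produce a numbers table. Below is a compact check in SQLite via Python's `sqlite3`: there is no `master..spt_values`, so a recursive CTE generates the numbers, and the table is a stripped-down stand-in keeping only the position id and quantity (`Menge`) columns from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (BelPosId INTEGER, Menge INTEGER)")
conn.executemany("INSERT INTO positions VALUES (?,?)", [(1, 3), (2, 1), (3, 2)])

# numbers 1..10 via recursive CTE; joining on n <= Menge repeats each
# position row Menge times, with n serving as the per-element counter.
rows = conn.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM numbers WHERE n < 10
    )
    SELECT BelPosId, n AS ElementId
    FROM positions
    JOIN numbers ON numbers.n <= positions.Menge
    ORDER BY BelPosId, n
""").fetchall()
```

A position with quantity 3 expands into three rows numbered 1..3, which is exactly the "single row per position element" output the asker wanted.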
How to multiply a single row with a number from column in sql
[ "sql", "sql-server" ]
I have the tables shown below. ``` Table1 Field1 Field2 ID111 1,500 ID112 100 ID111 250 ID114 50 ID114 20 Table2 Field1 Field3 ID111 Chris ID112 Mary ID114 John ``` What I'd like to have is the result shown below. ``` ID111 Chris 1,750 ID112 Mary 100 ID114 John 70 ``` I've already achieved this by using 2 sql executions. And thanks to the ones who helped last night on this site, I just got it to work using only 1 sql statement. However, I'm not able to link the first table to the second table for additional information. By using ``` SELECT SUM(ctotal) AS TransactionTotal FROM table1 GROUP BY field1 ``` I was able to achieve ``` ID111 1,750 ID112 100 ID114 70 ``` I'm currently using this sql statement and it pops out an error. ``` SELECT SUM(ctotal) AS TransactionTotal, table2.field3 FROM table1 INNER JOIN table2 ON table1.field1 = table2.field1 GROUP BY table1.field1 ``` It says can't "You tried to execute a query that does not include the specified expression 'Field3' as part of an aggregate function." If anybody knows a fix for this, I'd appreciate it very much.
This should suffice: ``` SELECT t1.field1, t2.field3, sum(t1.CTotal) as cTotal FROM table1 t1 INNER JOIN table2 t2 ON t1.field1 = t2.field1 GROUP BY t1.field1, t2.field3 ``` Whatever columns you are selecting in your `SELECT` statement that are not an aggregate function (i.e., the columns that are not using `COUNT`, `SUM` or other aggregate functions) should also be mentioned in the `GROUP BY` clause. This is not strictly enforced in MySQL (yielding unwanted results sometimes) but it is mandatory in Oracle. Yet, most RDBMSs will yield a similar result when you use the query as above.
As Blorgbear and Matt Welch said, you need to include all columns that are not in an aggregate function in the GROUP BY. The reason for this is that you are telling it to sum one value, but not what to put in the other columns. Think about the data you have in the other two columns: how does it know whether to show 100, 250, or any other value for Field2 if you don't specify an action on it, such as MIN (pull the minimum value)? Here is a list of aggregate functions: <http://www.w3schools.com/sql/sql_functions.asp>
SQL Inner Join with Group By
[ "sql", "group-by", "sum", "inner-join" ]
[SQL FIDDLE](http://sqlfiddle.com/#!2/5715f/2) I have this table definition : ``` CREATE TABLE `table2` ( `id` INT NOT NULL, `varchardata` VARCHAR(45) NULL, `intdata` INT(10) NULL, PRIMARY KEY (`id`)); ``` Data : ``` INSERT INTO `table2` (`id`, `varchardata`, `intdata`) VALUES ('1', '12', '12'); INSERT INTO `table2` (`id`, `varchardata`, `intdata`) VALUES ('2', 'keyur', '7'); INSERT INTO `table2` (`id`, `varchardata`, `intdata`) VALUES ('3', '3', '3'); INSERT INTO `table2` (`id`, `varchardata`, `intdata`) VALUES ('4', '13', '13'); INSERT INTO `table2` (`id`, `varchardata`, `intdata`) VALUES ('5', '6', '6'); ``` When I execute the query : ``` SELECT * FROM table2 order by intdata asc; ``` it gives this result : ![enter image description here](https://i.stack.imgur.com/gJoaM.jpg) Is it possible to sort the data in some order so that it gives output as ``` 3 6 12 13 keyur ``` It's varchar data, so I don't know exactly how to do this, or whether it is actually possible. Any suggestions?
Try this: ``` SELECT * FROM table2 ORDER BY IF(CAST(varchardata AS SIGNED) = 0, 99999, CAST(varchardata AS SIGNED)); ``` Check the [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/5715f/16) **OUTPUT** ``` | ID | VARCHARDATA | INTDATA | |----|-------------|---------| | 8 | 3 | 3 | | 10 | 6 | 6 | | 6 | 12 | 12 | | 9 | 13 | 13 | | 7 | keyur | 7 | ```
Another way of doing it to correctly treat `0` (distinguish it from non-numeric values) ``` SELECT id, varchardata, intdata FROM ( SELECT id, varchardata, intdata, varchardata REGEXP '[0-9]' is_numeric FROM table2 ) q ORDER BY is_numeric DESC, 1 * varchardata ``` Output: ``` | ID | VARCHARDATA | INTDATA | |----|-------------|---------| | 3 | 3 | 3 | | 5 | 6 | 6 | | 1 | 12 | 12 | | 4 | 13 | 13 | | 2 | keyur | 7 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/be1b6d/10)** demo
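The cast-based trick from the accepted answer carries over to other engines too: casting a non-numeric string yields 0, so mapping 0 to a large sentinel pushes the text values to the end. A check in SQLite via Python's `sqlite3` with the question's data (this assumes, as the answer does, that `0` is never a legitimate value):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table2 (id INTEGER PRIMARY KEY, varchardata TEXT)")
conn.executemany("INSERT INTO table2 (varchardata) VALUES (?)",
                 [("12",), ("keyur",), ("3",), ("13",), ("6",)])

# CAST('keyur' AS INTEGER) is 0 in SQLite as well, so the same
# "map zero to a big sentinel" sort key applies.
ordered = [r[0] for r in conn.execute("""
    SELECT varchardata FROM table2
    ORDER BY CASE WHEN CAST(varchardata AS INTEGER) = 0
                  THEN 99999 ELSE CAST(varchardata AS INTEGER) END
""")]
```

The numeric strings come out in true numeric order with the non-numeric value last, matching the desired `3, 6, 12, 13, keyur`.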
Is it possible to sort varchar data in ascending order that have mix value?
[ "mysql", "sql", "sorting", "select", "sql-order-by" ]
I need to determine if I should insert data into theDate or theDate2 column based on data in the processDate column from the table being selected from. I am unable to use a column in a CASE statement however (I am getting an error when trying to run the code below), and I can't find a solution to this. How can I do this? processDate is a column from #tableTwo. ``` INSERT INTO tableOne CASE WHEN processDate <> '' THEN + ',' + theDate ELSE + ',' + theDate2 END SELECT sDate FROM #tableTwo ```
If both theDate and theDate2 can be determined by the flag processDate then do it in your select rather than in the insert. Assuming that the columns are nullable (as your CASE in the INSERT clause seems to indicate they are mutually exclusive, then I'd be inclined to do something like; ``` INSERT INTO tableOne (theDate, theDate2) SELECT CASE WHEN processDate <> '' THEN sDate ELSE NULL END theDate, CASE WHEN processDate = '' THEN sDate ELSE NULL END theDate2, FROM #tableTwo ```
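This move of the `CASE` from the `INSERT` clause into the `SELECT` list can be verified end to end; below is a small run in SQLite via Python's `sqlite3` with made-up rows (one row has a non-empty `processDate`, one an empty one, so each target column gets exactly one value):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableOne (theDate TEXT, theDate2 TEXT)")
conn.execute("CREATE TABLE tableTwo (sDate TEXT, processDate TEXT)")
conn.executemany("INSERT INTO tableTwo VALUES (?,?)",
                 [("2014-01-01", "done"), ("2014-01-02", "")])

# The CASE lives in the SELECT: sDate lands in theDate when processDate
# is non-empty, otherwise in theDate2 (the other column stays NULL).
conn.execute("""
    INSERT INTO tableOne (theDate, theDate2)
    SELECT CASE WHEN processDate <> '' THEN sDate END,
           CASE WHEN processDate = '' THEN sDate END
    FROM tableTwo
""")
rows = conn.execute("SELECT theDate, theDate2 FROM tableOne ORDER BY rowid").fetchall()
```

Each inserted row has the date in exactly one of the two columns, depending on `processDate`, which is the routing the asker was after.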
The proper syntax for an insert is ``` insert into table (field1, field2, etc) select field3, field4, etc from somewhere ``` I have no idea if this will work, but you can try: ``` insert into tableOne (case when processDate <> '' then theDate else Date2 end) select sdate from #table2 ```
Determining column to use in INSERT statement based on data in the table being selected from
[ "sql", "sql-server-2008" ]
I have a row with date column in my table: ``` date1 = 2013-12-11 00:00:00.000 ``` And I want to compare it with date: ``` date2 = 11.12.2013 0:00:00 ``` I do: ``` SELECT * FROM ServiceOrder WHERE CreateDateTime <= '11.12.2013 0:00:00' ``` And I am expecting that as a result I get a row with `date1` but I don't get it. How to compare this dates correctly? **UPDATE** I tried: ``` SELECT * FROM ServiceOrder WHERE CreateDateTime<= CONVERT(VARCHAR(23),'11.12.2013 0:00:00', 121) ``` But same result. **UPDATE2** ``` SELECT * FROM ServiceOrder WHERE CreateDateTime <= REPLACE(CONVERT(VARCHAR(19),'11.12.2013', 111),'/','-') ``` Does not work, either. **UPDATE3** ``` SELECT * FROM ServiceOrder WHERE CreateDateTime <= CONVERT(datetime,'11.12.2013 0:00:00', 104) ``` Does not work, either.
It is a bit tricky to convert your format because it is too specific, and MS SQL doesn't have easy date format functions for it. So here is a way to convert your entire date: ``` SELECT * FROM ServiceOrder WHERE CreateDateTime <= ( select convert(datetime, concat(y,'-',m,'-',d,' ',h), 20) as dtConverted from ( select substring(dt,1,2) d, substring(dt,4,2) m, substring(dt,7,4) y, substring(dt,12,10) h from (select '11.12.2013 13:00:00' dt) as value ) as convDate ) ``` To compare just the date without the time, it would be: ``` SELECT * FROM ServiceOrder WHERE convert(date, CreateDateTime , 104) <= convert(date, substring('11.12.2013 0:00:00',1,10), 104 ) ``` See it here on fiddle: <http://sqlfiddle.com/#!6/3787b/1>
Use this. You need to convert the string into datetime for right comparison. And also use the correct format number here 104 ``` SELECT * FROM ServiceOrder WHERE CreateDateTime<=CONVERT(datetime,'11.12.2013 0:00:00', 104) ``` And the better practice is that to take a variable and use that in query instead of converting them in query. Like this ``` Declare @CompareDate as DateTime SET @CompareDate=CONVERT(datetime,'11.12.2013 0:00:00', 104) SELECT * FROM ServiceOrder WHERE CreateDateTime<=@CompareDate ```
How to compare dates in SQL Server?
[ "", "sql", "sql-server", "" ]
I have one table named studentdetail containing the following columns and data: ``` NAME MARK2 MARK3 SID MARK1 Timmy 85 95 123 25 GSJD 75 95 25 69 FBKSD 88 75 27 96 ``` I was trying to sum the values of mark1, mark2 and mark3 for the specific row where mark1 = 96. SQL> ``` SELECT mark1, mark2, mark3 FROM studentdetail WHERE mark1=96( SELECT sum(mark1 +mark2+mark3) FROM studentdetail; ``` but it's not working. Can you guys suggest a fix?
``` select (mark1+mark2+mark3) as mark_sum from studentdetail where mark1=96 ```
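Since the accepted query is plain per-row arithmetic, it ports directly to SQLite; a quick check with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE studentdetail (name TEXT, mark2 INT, mark3 INT, sid INT, mark1 INT)")
conn.executemany("INSERT INTO studentdetail VALUES (?,?,?,?,?)",
                 [("Timmy", 85, 95, 123, 25),
                  ("GSJD", 75, 95, 25, 69),
                  ("FBKSD", 88, 75, 27, 96)])
row = conn.execute(
    "SELECT (mark1 + mark2 + mark3) AS mark_sum FROM studentdetail WHERE mark1 = 96"
).fetchone()
print(row[0])  # 96 + 88 + 75 = 259
```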
In Mysql you can do like this: ``` select mark1,mark2,mark3,(mark1 +mark2+mark3) as Total from studentdetail where mark1=96; ```
how to sum only a specific row in a table
[ "", "mysql", "sql", "" ]
I tried deciphering other answers with no luck, hence asking. I have one table, and a few more similar to it for other years. Here is the table structure ``` -------------------------------- STATUS | Year | Company Name -------------------------------- Certified | 2010 | Google Cert-Denied | 2010 | Google Denied | 2010 | Google Withdrawn | 2010 | Google Denied | 2010 | Microsoft Withdrawn | 2010 | Microsoft Certified | 2010 | Apple Cert-Denied | 2010 | Apple ---------------------------------- ``` When I write a query to list all the companies in a year GROUP BY STATUS, I want to see all the entries corresponding to a count of the status, including the zero counts. If I search on Google I would get something like this ``` Query : Select status, count(*) from table where company like '%Google%' group by status ``` Result : ``` Certified | 1| Cert-Denied | 1| Denied | 1| Withdrawn | 1| ``` If I run the same for Microsoft, I would get ``` Denied | 1| Withdrawn | 1 | ``` The ones with ZERO count are missing. I want the ones with ZERO count to be displayed as well, like below. ``` Certified | 0| Cert-Denied | 0| Denied | 1| Withdrawn | 1| ``` I have read about joins and stuff, but it is still not clear. Any help would be really appreciated.
The COUNT is only returning the COUNT for the statuses it finds for Microsoft, and those are Denied and Withdrawn. You have to feed the query all the statuses and COUNT the occurrences of each of them. The ones that don't appear in the table will be left with 0: ``` SELECT a.STATUS, SUM(CASE WHEN b.STATUS IS NOT NULL THEN 1 ELSE 0 END) AS StatusCount FROM ( SELECT DISTINCT STATUS FROM tab1 ) a LEFT JOIN tab1 b ON a.STATUS = b.STATUS AND b.CompanyName = 'Microsoft' GROUP BY a.STATUS; ``` What this does is: ``` SELECT DISTINCT STATUS FROM tab1 ``` This finds all the possible statuses. If you have a reference table with all the possible statuses, even better: use it instead of this subquery. Then you do a LEFT JOIN on this table by status and company name. This way, you will only get a match in STATUS if there is a record in the table. If there is, you add 1 to the SUM; otherwise you add 0. `sqlfiddle demo`
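The LEFT JOIN trick is portable, so it can be verified in SQLite with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (STATUS TEXT, Year INT, CompanyName TEXT)")
conn.executemany("INSERT INTO tab1 VALUES (?,?,?)", [
    ("Certified", 2010, "Google"), ("Cert-Denied", 2010, "Google"),
    ("Denied", 2010, "Google"), ("Withdrawn", 2010, "Google"),
    ("Denied", 2010, "Microsoft"), ("Withdrawn", 2010, "Microsoft"),
    ("Certified", 2010, "Apple"), ("Cert-Denied", 2010, "Apple")])
rows = conn.execute("""
    SELECT a.STATUS,
           SUM(CASE WHEN b.STATUS IS NOT NULL THEN 1 ELSE 0 END) AS StatusCount
    FROM (SELECT DISTINCT STATUS FROM tab1) a
    LEFT JOIN tab1 b
      ON a.STATUS = b.STATUS AND b.CompanyName = 'Microsoft'
    GROUP BY a.STATUS
    ORDER BY a.STATUS
""").fetchall()
print(rows)  # [('Cert-Denied', 0), ('Certified', 0), ('Denied', 1), ('Withdrawn', 1)]
```

Unmatched statuses join against NULL, so the CASE adds 0 for them, which is exactly the missing zero-count behavior.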
A fairly straight forward way is to get all distinct statuses and `LEFT JOIN` with the table to get the counts; ``` SELECT a.status, COUNT(b.status) cnt FROM (SELECT DISTINCT status FROM Table1) a LEFT JOIN Table1 b ON a.status = b.status AND b.`Company name`='Microsoft' GROUP BY a.status ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!2/0c4e9/1).
MySQL Group by - Get columns with zero count
[ "", "mysql", "sql", "" ]
I have created two tables; one is customers and the other one is ord ``` select * from customers; ``` Customer table ``` 1 101 jun 23 yyyy 15000 2 102 jas 24 zzzz 10000 3 103 fat 20 kkkk 20000 4 104 jini 40 llll 30000 5 105 michael 30 dddd 25000 6 106 das 25 hhhh 10000 7 107 vijay 26 mmmm 12000 8 108 thanku 31 jjjj 26000 9 109 vishnu 34 gggg 24000 10 110 vas 28 ffff 18000 select * from ord; ``` This is the order table ``` 1 12/11/2013 1:00:00 AM 102 2500 2 202 12/11/2013 4:14:17 AM 102 3000 3 203 12/9/2013 9:18:16 PM 103 2000 4 204 12/8/2013 12:00:00 PM 102 1000 5 205 12/24/2013 107 2000 ``` This is the join command that I have used ``` select c.name,c.salary,o.amount from CUSTOMERS c inner join ord o on c.id=o.customer_id; ``` and the resulting table is ``` 1 jas 10000 1000 2 jas 10000 3000 3 jas 10000 2500 4 fat 20000 2000 5 vijay 12000 2000 ``` I want the resulting table to be like this ``` 1 jas 10000 6500 2 fat 20000 2000 3 vijay 12000 2000 ``` Please help me solve this.
`group by c.name, c.salary` with `sum(o.amount)` is what you want: ``` select c.name, c.salary, sum(o.amount ) from CUSTOMERS c inner join ord o on c.id=o.customer_id group by c.name, c.salary; ```
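A runnable SQLite version of the same GROUP BY + SUM, seeded with the relevant rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INT, name TEXT, salary INT)")
conn.execute("CREATE TABLE ord (customer_id INT, amount INT)")
conn.executemany("INSERT INTO customers VALUES (?,?,?)",
                 [(102, "jas", 10000), (103, "fat", 20000), (107, "vijay", 12000)])
conn.executemany("INSERT INTO ord VALUES (?,?)",
                 [(102, 2500), (102, 3000), (103, 2000), (102, 1000), (107, 2000)])
rows = conn.execute("""
    SELECT c.name, c.salary, SUM(o.amount)
    FROM customers c INNER JOIN ord o ON c.id = o.customer_id
    GROUP BY c.name, c.salary
    ORDER BY c.name
""").fetchall()
print(rows)  # [('fat', 20000, 2000), ('jas', 10000, 6500), ('vijay', 12000, 2000)]
```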
try this if it will work. ``` select c.name,c.salary,sum(o.amount) from CUSTOMERS c inner join ord o on c.id=o.customer_id group by 1,2; ``` Thanks.
Please help me to solve this
[ "", "sql", "oracle", "" ]
This is the result of my query, but it doesn't order correctly. I want to order by the last 2 characters. The result should be: `Fa0/10` below `Fa0/9`. ``` Fa0/1 Fa0/10 Fa0/11 Fa0/12 Fa0/2 Fa0/3 Fa0/4 Fa0/5 Fa0/6 Fa0/7 Fa0/8 Fa0/9 Gi0/1 Gi0/2 Null0 Vlan1 ``` My query: ``` SELECT inft.port FROM interfaces AS intf ORDER BY RIGHT(intf.port + 0, 2) ``` Second: [sqlfiddle](http://sqlfiddle.com/#!2/fde72/5 "sqlFiddle")
Try this: ``` SELECT port FROM interfaces ORDER BY SUBSTRING_INDEX(port, '/', 1), CAST(SUBSTRING_INDEX(port, '/', -1) AS SIGNED) ``` Check the [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/fde72/8) **OUTPUT** ``` | PORT | |--------| | Fa0/1 | | Fa0/2 | | Fa0/3 | | Fa0/4 | | Fa0/5 | | Fa0/6 | | Fa0/7 | | Fa0/8 | | Fa0/9 | | Fa0/10 | | Fa0/11 | | Fa0/12 | | Gi0/1 | | Gi0/2 | | Null0 | | Vlan1 | ```
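`SUBSTRING_INDEX` is MySQL-specific, but the ordering idea (sort the text before the '/' lexically and the number after it numerically) is easy to check in plain Python; the key function below is an illustrative sketch, not SQL:

```python
ports = ["Fa0/1", "Fa0/10", "Fa0/11", "Fa0/12", "Fa0/2", "Fa0/3", "Fa0/4",
         "Fa0/5", "Fa0/6", "Fa0/7", "Fa0/8", "Fa0/9", "Gi0/1", "Gi0/2",
         "Null0", "Vlan1"]

def port_key(p):
    # Same idea as the accepted SQL: the part before '/' sorts as text,
    # the part after '/' sorts as a number (ports without '/' get 0)
    prefix, _, num = p.partition("/")
    return (prefix, int(num) if num.isdigit() else 0)

ports.sort(key=port_key)
print(ports)  # Fa0/1 ... Fa0/9, Fa0/10, Fa0/11, Fa0/12, Gi0/1, Gi0/2, Null0, Vlan1
```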
Why you need `+ 0` ?? Simply remove it and it will work. ``` SELECT port FROM interfaces ORDER BY RIGHT(port, 2) ``` [SQL Fiddle Demo](http://www.sqlfiddle.com/#!3/01d40/7) Output: ``` PORT ------------ Fa0/1 Gi0/1 Gi0/2 Fa0/2 Fa0/3 Fa0/4 Fa0/5 Fa0/6 Fa0/7 Fa0/8 Fa0/9 Fa0/10 Fa0/11 Fa0/12 Null0 Vlan1 ```
Order by last 2 characters string
[ "", "mysql", "sql", "select", "sql-order-by", "" ]
Consider there are three database say `D1, D2, D3`. And two tables `T1, T2` in each database. Table `T1` has two columns `C1, C2`. Table `T2` has three columns `C3, C4, C5`. Now we have six table. The records in each database are different. **Database structure:** ``` D1: ------ T1 T2 D2: ------ T1 T2 D3: ------ T1 T2 ``` **Table strucutre:** ``` T1: ------ C1 C2 T2: ---------- C3 C4 C5 ``` I can run the same query in each database, like below, to get the output and find which database has the record: ``` Select * from T1 where C1 = 'Some_value' ``` **The question I have is, how to find which database has the record that I want, without running the query thrice...??** **Note:** There can be N number of databases with M number of tables in each.
You can run a query like this (SQL Server version below, other dbs are similar): ``` Select 'db1' as db, 'T1' as [table] from db1.dbo.T1 where 'Some_value' in (C1, C2) union all Select 'db1' as db, 'T2' as [table] from db1.dbo.T2 where 'Some_value' in (C3, C4, C5) union all Select 'db2' as db, 'T1' as [table] from db2.dbo.T1 where 'Some_value' in (C1, C2) union all Select 'db2' as db, 'T2' as [table] from db2.dbo.T2 where 'Some_value' in (C3, C4, C5) union all Select 'db3' as db, 'T1' as [table] from db3.dbo.T1 where 'Some_value' in (C1, C2) union all Select 'db3' as db, 'T2' as [table] from db3.dbo.T2 where 'Some_value' in (C3, C4, C5) ```
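The same pattern can be exercised in SQLite, where ATTACH plays the role of multiple databases (the database names and sample rows here are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS d1")
conn.execute("ATTACH ':memory:' AS d2")
for db in ("d1", "d2"):
    conn.execute(f"CREATE TABLE {db}.T1 (C1 TEXT, C2 TEXT)")
conn.execute("INSERT INTO d1.T1 VALUES ('x', 'y')")
conn.execute("INSERT INTO d2.T1 VALUES ('Some_value', 'y')")

# Each branch of the UNION ALL labels which database produced the hit
rows = conn.execute("""
    SELECT 'd1' AS db FROM d1.T1 WHERE 'Some_value' IN (C1, C2)
    UNION ALL
    SELECT 'd2' AS db FROM d2.T1 WHERE 'Some_value' IN (C1, C2)
""").fetchall()
print(rows)  # [('d2',)]
```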
try this query in oracle, you can use `val` as variable : ``` select Tab1.* from (Select 'D1' as DataBase,D1.T1.* from D1.T1 UNION ALL Select 'D2' as DataBase,D2.T1.* from D2.T1 UNION ALL Select 'D3' as DataBase,D3.T1.* from D3.T1) Tab1, (select 'Some_value' as val from dual)Tab2 WHERE Tab1.C1 = Tab2.val; ```
Find the database in which a particular record is available
[ "", "sql", "sql-server", "database", "oracle", "" ]
I need a query to find Inspection/Component that did not get a "Rating 6" record. I'll be inserting new records so that all Inspection/Component have a "Rating 6" record along with what ever other ratings they may have. Consider the following data: 57646, 57652 and 57657 are **not** correct because they are missing their Rating 6 record. In this data example, these three should be the only Inspection/Component returned by the query. ``` InspectionID ComponentID RatingTypeID 138 57646 10 138 57647 6 138 57647 2 138 57648 6 138 57649 6 138 57650 6 138 57651 10 138 57651 6 138 57652 10 138 57653 6 138 57654 6 138 57655 6 138 57656 6 138 57657 10 ```
``` SELECT * FROM YourTable A WHERE NOT EXISTS(SELECT 1 FROM YourTable WHERE InspectionID = A.InspectionID AND ComponentID = A.ComponentID AND RatingTypeID = 6) ```
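A quick SQLite check of the NOT EXISTS approach against the question's data. The table name `ratings` is made up (the question doesn't give one), and DISTINCT is added so each offending component appears once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (InspectionID INT, ComponentID INT, RatingTypeID INT)")
data = [(138, 57646, 10), (138, 57647, 6), (138, 57647, 2), (138, 57648, 6),
        (138, 57649, 6), (138, 57650, 6), (138, 57651, 10), (138, 57651, 6),
        (138, 57652, 10), (138, 57653, 6), (138, 57654, 6), (138, 57655, 6),
        (138, 57656, 6), (138, 57657, 10)]
conn.executemany("INSERT INTO ratings VALUES (?,?,?)", data)
missing = conn.execute("""
    SELECT DISTINCT ComponentID FROM ratings a
    WHERE NOT EXISTS (SELECT 1 FROM ratings
                      WHERE InspectionID = a.InspectionID
                        AND ComponentID = a.ComponentID
                        AND RatingTypeID = 6)
    ORDER BY ComponentID
""").fetchall()
print(missing)  # [(57646,), (57652,), (57657,)]
```

This matches the three components the question identifies as missing their Rating 6 record.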
``` SELECT A.InspectionID FROM MyTable AS A LEFT OUTER JOIN MyTable AS B ON A.InspectionID = B.InspectionID AND B.RatingTypeID = 6 WHERE B.InspectionID IS NULL ```
TSQL: Find required missing records
[ "", "sql", "sql-server-2008", "t-sql", "" ]
How do I get results without grouping? My table ``` id user_id amount currency_id type status 5 2 2.00 1 0 0 6 3 3.00 1 0 0 4 1 1.00 1 0 0 7 4 4.00 1 0 0 8 5 3.00 1 1 0 ``` I do the following select ``` SELECT id, user_id, amount, currency_id, SUM( amount ) FROM market WHERE amount <=3 AND type = 0 AND status = 0 ``` Result: ``` id user_id amount currency_id SUM( amount ) 5 2 2.00 1 6.00 ``` How do I get this result: ``` id user_id amount currency_id SUM( amount ) 5 2 2.00 1 0 6.00 6 3 3.00 1 0 6.00 4 1 1.00 1 0 6.00 ```
If your intent is to return individual records that meet these criteria AND sum them up, and you don't actually need the SUM value as a field on every row (not sure why you would), then I would suggest looking at the `GROUP BY ... WITH ROLLUP` modifier. It works like this: ``` SELECT id, user_id, SUM(amount) AS `amounts`, currency_id FROM market WHERE amount <=3 AND type = 0 AND status = 0 GROUP BY id WITH ROLLUP ``` Here I am grouping by `id` because this will keep the individual records intact, as this value is unique. Your output would look like this: ``` id user_id amounts currency_id 5 2 2.00 1 6 3 3.00 1 4 1 1.00 1 NULL 3 6.00 1 ``` Note the last record provides the rollup to the `SUM()` function. Also note that values for `user_id` and `currency_id` in the rollup row are indeterminate as they were not part of the GROUP BY or aggregation. As such, they have no meaning.
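SQLite has no `WITH ROLLUP`, but the shape this answer produces (detail rows plus one summary row) can be emulated with UNION ALL, which also makes the result easy to verify against the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE market (id INT, user_id INT, amount REAL, type INT, status INT)")
conn.executemany("INSERT INTO market VALUES (?,?,?,?,?)", [
    (5, 2, 2.0, 0, 0), (6, 3, 3.0, 0, 0), (4, 1, 1.0, 0, 0),
    (7, 4, 4.0, 0, 0), (8, 5, 3.0, 1, 0)])
rows = conn.execute("""
    SELECT id, user_id, amount FROM market
    WHERE amount <= 3 AND type = 0 AND status = 0
    UNION ALL
    SELECT NULL, NULL, SUM(amount) FROM market
    WHERE amount <= 3 AND type = 0 AND status = 0
""").fetchall()
print(rows)  # three detail rows, then (None, None, 6.0) as the rollup row
```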
You can do join ``` SELECT id, user_id, amount, currency_id, a.totalAmount FROM market CROSS JOIN ( SELECT SUM(amount) totalAmount FROM market WHERE amount <=3 AND type = 0 AND status = 0 ) a WHERE amount <=3 AND type = 0 AND status = 0 ``` or using inline subquery, ``` SELECT id, user_id, amount, currency_id, ( SELECT SUM(amount) totalAmount FROM market WHERE amount <=3 AND type = 0 AND status = 0 ) totalAmount FROM market WHERE amount <=3 AND type = 0 AND status = 0 ```
SELECT without grouping
[ "", "mysql", "sql", "database", "" ]
I'm fairly new to SQL and I'm trying to get my head around the following problem - I'm attempting to get a list of products broken down so we can see how many of each one has been sold every year I have two tables; Orders (which contains JobNum and Date) and Products (with JobNum, Fig, Fype, Size and Quantity fields). Due to the products being custom made a product is an amalgamation of the fig, type and size fields. For example: To get a listing of all the product variations created I can do: ``` SELECT DISTINCT Fig, Type, Size FROM Product; ``` And to get a list of quantities for all products made in a calendar year this query seems to work: (Date is the format 'dd-mm-yyyy' hence the LIKE '%2002' to filter by year) ``` SELECT product.fig, product.type, product.size, Sum(product.qty) AS Quantity FROM orders.dbf INNER JOIN product.dbf ON orders.jobnum = product.jobnum WHERE orders.date LIKE '%2002' GROUP BY product.fig, product.type, product.size ``` Which gives the data for 1 year outputted like this: ``` Fig Type Size Qty AA B 2 1 ``` My question is how can I pull out the data to get an output like this? ``` Fig Type Size 2001 2002 2003... AA B 2 1 2 4 BB C 4 4 6 7 ``` I can think how produce this in a program but I'd like to see if it's possible to do just in SQL? Many thanks in advance. EDIT- Can I just point out that Product.Type can be blank in some cases and in some years it is possible for zero instances of a product to be sold, so for that year the corresponding quantity amount could be blank or 0.
Although I have not specifically used dBase since about 1986, I am guessing it implements the common IIF() functionality like other languages. I am more used to FoxPro/Visual FoxPro historically, but here is the basis of what you need, and if you are using the Visual FoxPro OleDB provider, you should be fine. What is happening here: as you go through, the GROUP BY will be based on each fig, type, size, no problem. Now, to create your PIVOT, I am just doing a SUM(IIF()) based on the order date. If the year of the order date in question equals the respective year, add that quantity to that column's total; otherwise, sum a zero value. ``` SELECT p.fig, p.type, p.size, SUM( IIF( YEAR( o.date ) = 2002, p.qty, 0 )) as P2002Qty, SUM( IIF( YEAR( o.date ) = 2003, p.qty, 0 )) as P2003Qty, SUM( IIF( YEAR( o.date ) = 2004, p.qty, 0 )) as P2004Qty, SUM( IIF( YEAR( o.date ) = 2005, p.qty, 0 )) as P2005Qty, SUM( IIF( YEAR( o.date ) = 2006, p.qty, 0 )) as P2006Qty, SUM( IIF( YEAR( o.date ) = 2007, p.qty, 0 )) as P2007Qty, SUM( IIF( YEAR( o.date ) = 2008, p.qty, 0 )) as P2008Qty, SUM( IIF( YEAR( o.date ) = 2009, p.qty, 0 )) as P2009Qty, SUM( IIF( YEAR( o.date ) = 2010, p.qty, 0 )) as P2010Qty, SUM( IIF( YEAR( o.date ) = 2011, p.qty, 0 )) as P2011Qty, SUM( IIF( YEAR( o.date ) = 2012, p.qty, 0 )) as P2012Qty, SUM( IIF( YEAR( o.date ) = 2013, p.qty, 0 )) as P2013Qty from orders o INNER JOIN product p ON o.jobnum = p.jobnum group by p.fig, p.type, p.size ``` Now, one note... If the results for whatever strange reason give zeros in most columns, it is because the engine predetermines the column width based on the first "test" record it runs before the query actually executes. If so, there are two options to adjust: change each row to one of the following ``` FROM: SUM( IIF( YEAR( o.date ) = 2002, p.qty, 0 )) as P2002Qty, TO: ( so it FORCES allocated space to 7 digit positions ) SUM( IIF( YEAR( o.date ) = 2002, p.qty, 0000000 )) as P2002Qty, OR: (so it always uses the qty size, but multiplies by 1 or 0 so it still uses basis of qty column) SUM( p.qty * IIF( YEAR( o.date ) = 2002, 1, 0 )) as P2002Qty, ```
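`IIF()` is the FoxPro/dBase flavor; the portable spelling is `SUM(CASE WHEN ...)`, and the pivot is easy to verify in SQLite. The sample orders and products below are invented, with dates stored as ISO strings so `strftime('%Y', ...)` can extract the year:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (jobnum INT, odate TEXT)")
conn.execute("CREATE TABLE product (jobnum INT, fig TEXT, type TEXT, size INT, qty INT)")
conn.executemany("INSERT INTO orders VALUES (?,?)",
                 [(1, "2002-03-01"), (2, "2003-07-15"), (3, "2002-11-30")])
conn.executemany("INSERT INTO product VALUES (?,?,?,?,?)",
                 [(1, "AA", "B", 2, 1), (2, "AA", "B", 2, 2), (3, "BB", "C", 4, 4)])
rows = conn.execute("""
    SELECT p.fig, p.type, p.size,
           SUM(CASE WHEN strftime('%Y', o.odate) = '2002' THEN p.qty ELSE 0 END) AS y2002,
           SUM(CASE WHEN strftime('%Y', o.odate) = '2003' THEN p.qty ELSE 0 END) AS y2003
    FROM orders o INNER JOIN product p ON o.jobnum = p.jobnum
    GROUP BY p.fig, p.type, p.size
    ORDER BY p.fig
""").fetchall()
print(rows)  # [('AA', 'B', 2, 1, 2), ('BB', 'C', 4, 4, 0)]
```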
You may use a subquery as a column. For instance: ``` SELECT t0.*, (SELECT COUNT(*) FROM t1 WHERE YEAR(date_fld)=2002) AS y2002, (SELECT COUNT(*) FROM t1 WHERE YEAR(date_fld)=2003) AS y2003 FROM t0 ```
Getting product quantities broken down annually
[ "", "sql", "dbase", "" ]
I had one single table that had lots of problems. I was saving data separated by commas in some fields, and afterwards I wasn't able to search them. Then, after searching the web and finding a lot of solutions, I decided to split it into separate tables. That one table I had became 5 tables. The first table is called agendamentos_diarios; this is the table where I'm going to store the schedules. The second table is called tecnicos, and I'm storing the technicians' names in it. Two fields: id (primary key) and the name (varchar). The third table is called agendamento_tecnico. This is the link table where I'm going to store the ids of the first and the second tables. That's because some schedules are going to be attended by one or more technicians. The fourth table is called veiculos (vehicles). The id and the name of the vehicle (two fields). The fifth table is the link between the first table and the vehicles table. Same thing: I'm going to store the schedule id and the vehicle id. Here is an image that can explain it better than I can: ![Diagram](https://i.stack.imgur.com/kOmHz.jpg) Am I doing it correctly? Is there a better way of storing data in MySQL?
I agree with @Strawberry about the ids, but normally it is the Hibernate mapping type that does this. If you are not using Hibernate to design your tables, you should take the ID out of agendamento_tecnico and agendamento_veiculos. That way you guarantee uniqueness. If you don't want to do that, create a unique key on the FK fields of those tables. I notice that you separate the vehicles table from your technicians. In your model the same vehicle can be in two different schedules at the same time (which doesn't make sense). It would be better if the vehicle was linked in the agendamento_tecnico table, which would then become agendamento_tecnico_veiculo. Looking at your table I notice (I'm Brazilian) that you have a column called "servico", which means service. Your schedule table is designed for only one service. What if the same schedule has more than one service? To solve this you can create a services table and create an m-n relationship with schedule. It will be easier to create reports and have the services well separated in your database. There is also a nome_cliente field, which means the client for that schedule. It would be better to have a cliente (client) table and link the schedule to it with an FK. As said before, there is no right answer. You have to think about your problem and the possible growth of it. Modeling a database properly will avoid a lot of headaches later.
Better is subjective, there's no right answer. My natural instinct would be to break that schedule table up even more. Looks like data about the technician and the client is duplicated. Then again, you might have made a decision to de-normalise for perfectly valid reasons. Doubt you'll find anyone on here who disagrees with you not having comma-separated fields though. Where you call a halt to the changes is dependent on your circumstances now. Comma-separated fields caused you an issue, you got rid of them. So what bit of where you are is causing you an issue now?
A more efficient way to store data in MySQL using more than one table
[ "", "mysql", "sql", "database", "" ]
I know that this question has been asked many times but could not find what I needed. I have this column "Order" which contains data in the following format. `'xxx,yyy,zzzz'` Now when I do my `select` statement I need to populate 3 columns by splitting this one E.G. ``` Select Name, Surname, FirstCommaColumn=xx.UpToFirstColumn SecondCommaColumn=xx.FromFirstCommaToLastComma, ThirdColumnFromSecondCommaOnwards=FromSecondCommaToEnd from myTable --thought of doing something like CROSS APPLY (SELECT TOP 1 * FROM dbo.SplitFunctionIDontHave(order,',')) AS xx ``` There are some rows which have no commas so I must return blank. I don't mind if I do in a function or within the query itself just not sure how to do this. How can I do this using SQL Server 2008? **This select is part of a view if makes a difference**
I've changed the function name so it won't overlap with what the `Split()` function really does. Here is the code: ``` CREATE FUNCTION dbo.GetColumnValue( @String varchar(8000), @Delimiter char(1), @Column int = 1 ) returns varchar(8000) as begin declare @idx int declare @slice varchar(8000) select @idx = 1 if len(@String)<1 or @String is null return null declare @ColCnt int set @ColCnt = 1 while (@idx != 0) begin set @idx = charindex(@Delimiter,@String) if @idx!=0 begin if (@ColCnt = @Column) return left(@String,@idx - 1) set @ColCnt = @ColCnt + 1 end set @String = right(@String,len(@String) - @idx) if len(@String) = 0 break end return @String end ``` And here is the usage: ``` select dbo.GetColumnValue('Col1,Field2,VAlue3', ',', 3) ```
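A Python transcription of the same logic is handy for checking the edge cases (the n-th piece when it exists, the trailing remainder when the index runs past the last delimiter, and NULL for an empty input):

```python
def get_column_value(s, delimiter, column=1):
    """Python sketch of the T-SQL GetColumnValue: return the 1-based
    column-th piece of s; past the end, return the last piece; None
    for an empty/None input (mirroring the len(@String) < 1 check)."""
    if not s:
        return None
    parts = s.split(delimiter)
    return parts[column - 1] if column <= len(parts) else parts[-1]

print(get_column_value("Col1,Field2,VAlue3", ",", 3))  # VAlue3
```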
``` Declare @str as Varchar(100) = '10|20|30|40|500|55' Declare @delimiter As Varchar(1)='|' Declare @Temp as Table ( item varchar(100)) Declare @i as int=0 Declare @j as int=0 Set @j = (Len(@str) - len(REPLACE(@str,@delimiter,''))) While @i < = @j Begin if @i < @j Begin Insert into @Temp Values(SUBSTRING(@str,1,Charindex(@delimiter,@str,1)-1)) set @str = right(@str,(len(@str)- Charindex(@Delominator,@str,1))) End Else Begin Insert into @Temp Values(@str) End Set @i = @i + 1 End Select * from @Temp ```
Split function by comma in SQL Server 2008
[ "", "sql", "sql-server-2008", "t-sql", "split", "" ]
I want to create a stored procedure to insert a new row in a table 'dbo.Terms' ``` CREATE PROCEDURE dbo.terms @Term_en NVARCHAR(50) = NULL , @Createdate DATETIME = NULL , @Writer NVARCHAR(50) = NULL , @Term_Subdomain NVARCHAR(50) = NULL AS BEGIN SET NOCOUNT ON INSERT INTO dbo.terms ( Term_en , Createdate , Writer , Term_Subdomain ) VALUES ( @Term_en = 'Cat' , @Createdate = '2013-12-12' , @Writer = 'Fadi' , @Term_Subdomain = 'English' ) END GO ``` But is shows me an error here ( @Term\_en = 'Cat') incorrect syntax Any help?
I presume you want to insert the values 'Cat' etc. into the table; to do that you need to use the values from your procedure's variables. I wouldn't call your procedure the same name as your table; it will get all kinds of confusing. You can find some good resources for naming standards (or crib from [Adventureworks](https://stackoverflow.com/questions/3593582/database-naming-conventions-by-microsoft)) ``` CREATE PROCEDURE dbo.terms @Term_en NVARCHAR(50) = NULL , @Createdate DATETIME = NULL , @Writer NVARCHAR(50) = NULL , @Term_Subdomain NVARCHAR(50) = NULL AS BEGIN SET NOCOUNT ON INSERT INTO dbo.terms ( Term_en , Createdate , Writer , Term_Subdomain ) VALUES ( @Term_en, @Createdate, @Writer, @Term_Subdomain ) END GO ``` And to test it ``` exec dbo.terms @Term_en = 'Cat' , @Createdate = '2013-12-12' , @Writer = 'Fadi' , @Term_Subdomain = 'English' ```
Here is how to set your defaults for parameters in your proc: ``` CREATE PROCEDURE dbo.terms @Term_en NVARCHAR(50) = 'Cat', @Createdate DATETIME = '2013-12-12', @Writer NVARCHAR(50) = 'Fadi', @Term_Subdomain NVARCHAR(50) = 'English' AS BEGIN SET NOCOUNT ON INSERT INTO dbo.terms ( Term_en , Createdate , Writer , Term_Subdomain ) VALUES ( @Term_en, @Createdate, @Writer, @Term_Subdomain ) END GO ```
Create a stored procedure to insert new data into a table
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a table like below ![enter image description here](https://i.stack.imgur.com/y6oO9.jpg) From the above table I Need to select adjacent values from val3 column based on Number column I Need output like below : ![enter image description here](https://i.stack.imgur.com/Rlq3S.jpg)
``` WITH T1 AS (SELECT *, DENSE_RANK() OVER (PARTITION BY [Number] ORDER BY [Val3]) - [Val3] AS Grp FROM YourTable), T2 AS (SELECT *, COUNT(*) OVER (PARTITION BY [Number], Grp) AS Cnt FROM T1) SELECT [Number], [Val1], [Val2], [Val3] FROM T2 WHERE Cnt > 1 ```
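Since the question's data lives in images, here is the gaps-and-islands trick from the answer run on invented sample data in SQLite (window functions need SQLite 3.25+, which ships with Python 3.8+). Rows whose Val3 is adjacent to another Val3 within the same Number get the same `DENSE_RANK - Val3` group value, so isolated values fall out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Number INT, Val1 INT, Val2 INT, Val3 INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)",
                 [(1, 10, 10, 1), (1, 11, 11, 2), (1, 12, 12, 5),
                  (2, 20, 20, 7), (2, 21, 21, 8)])
rows = conn.execute("""
    WITH T1 AS (
        SELECT *, DENSE_RANK() OVER (PARTITION BY Number ORDER BY Val3) - Val3 AS Grp
        FROM t),
    T2 AS (
        SELECT *, COUNT(*) OVER (PARTITION BY Number, Grp) AS Cnt FROM T1)
    SELECT Number, Val3 FROM T2 WHERE Cnt > 1 ORDER BY Number, Val3
""").fetchall()
print(rows)  # [(1, 1), (1, 2), (2, 7), (2, 8)] -- Val3 = 5 is isolated and drops out
```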
An ugly solution would be: ``` select d.number, d.val1, d.val2, d.val3 from table d where exists (select * from table t where t.number = d.number --and t.val1 = d.val1 --and t.val2 = d.val2 and (t.val3 = d.val3 - 1 or t.val3 = d.val3 + 1)) ```
How to find adjacent values in a column based on another column value in Sql
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I do not understand the following (it returns the number of comments per article, together with the newest comment date): ``` SELECT `id_comment`,COUNT(*) AS `number`, MAX(`date`) AS `newest` FROM `page_comments` WHERE TO_DAYS( NOW() )-TO_DAYS(`date`) < 90 GROUP BY `id_comment` ORDER BY `count` DESC,`newest` DESC ``` I don't understand how come the MAX function does not return the MAX value of the whole page_comments table, and instead automatically takes only the max for the given group. When using MAX, I would expect it to return the highest value of the column. I don't understand how it works together with grouping.
Let me explain it from the ground up. **MAX()** is an aggregate function; it works over a group of data. If a **GROUP BY** clause is NOT specified, the database implicitly groups the data, treating the entire result set as one group. If it is specified, it groups the data by the grouping columns you specify, and the aggregate is evaluated once per group.
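The implicit-group behavior is easy to demonstrate with a small SQLite sketch (the `date` column is renamed `dt` here to avoid quoting, and the rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_comments (id_comment INT, dt TEXT)")
conn.executemany("INSERT INTO page_comments VALUES (?,?)",
                 [(1, "2013-01-01"), (1, "2013-03-01"), (2, "2013-02-01")])
# Without GROUP BY: one implicit group over the whole table
whole = conn.execute("SELECT MAX(dt) FROM page_comments").fetchone()[0]
# With GROUP BY: MAX is evaluated once per group
per_group = conn.execute(
    "SELECT id_comment, MAX(dt) FROM page_comments GROUP BY id_comment ORDER BY id_comment"
).fetchall()
print(whole)      # 2013-03-01
print(per_group)  # [(1, '2013-03-01'), (2, '2013-02-01')]
```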
You described the behavior yourself quite correctly already: `it automatically takes only the max for the given group`. If you group, you do it (per usual) on every column in the result set, that is not aggregated (not using `COUNT`, `SUM`, `MIN`, `MAX`...) That way you get distinct values for all non aggregated columns and the aggregated ones will yield a result that only takes the 'current' group into account.
How does the aggregation function work with group by
[ "", "sql", "group-by", "" ]
Lets say I have the below data, the last column keyword is nothing but the combination of the other 4 cols. ``` ╔════╦══════╦════════════╦═════════╦════════════╦════════════════════════════════╗ ║ ID ║ Name ║ Add1 ║ Add2 ║ Add3 ║ Keyword ║ ╠════╬══════╬════════════╬═════════╬════════════╬════════════════════════════════╣ ║ 1 ║ John ║ W Brown St ║ Edison ║ Washington ║ JohnW Brown StEdisonWashington ║ ║ 2 ║ Paul ║ E High Rd ║ Peapack ║ New Jersey ║ PaulE High RdPeapackNew Jersey ║ ║ 3 ║ John ║ Greams Rd ║ Peapack ║ Washington ║ JohnGreams RdPeapackWashington ║ ╚════╩══════╩════════════╩═════════╩════════════╩════════════════════════════════╝ ``` Now user can enter any of the text fields Name, Add1, Add2, Add3 and click on search and it should search in all the fields. For this, I am trying this, ``` SELECT *, COUNT(ID) AS FREQUENCY FROM TABA WHERE ID IN ( (SELECT ID FROM TABA WHERE KEYWORD LIKE '%WASHINGTON%') UNION ALL (SELECT ID FROM TABA WHERE KEYWORD LIKE '%JOHN%') UNION ALL (SELECT ID FROM TABA WHERE KEYWORD LIKE '%PEAPACK%') ) GROUP BY ID ORDER BY FREQUENCY ``` Expected Output: ``` ╔════╦══════╦════════════╦═════════╦════════════╦═══════════╗ ║ ID ║ Name ║ Add1 ║ Add2 ║ Add3 ║ Frequency ║ ╠════╬══════╬════════════╬═════════╬════════════╬═══════════╣ ║ 3 ║ John ║ Greams Rd ║ Peapack ║ Washington ║ 3 ║ ║ 1 ║ John ║ W Brown St ║ Edison ║ Washington ║ 2 ║ ║ 2 ║ Paul ║ E High Rd ║ Peapack ║ New Jersey ║ 1 ║ ╚════╩══════╩════════════╩═════════╩════════════╩═══════════╝ ``` But I am getting all the frequency values as 1. What am I doing wrong?
The part `... WHERE ID IN ( ...) ...` will just do a 'yes or no' check, but you want counting. That has to be done with a JOIN. Try this: ``` SELECT TABA.*, COUNT(sub.ID) AS FREQUENCY FROM TABA JOIN ( (SELECT ID FROM TABA WHERE KEYWORD LIKE '%WASHINGTON%') UNION ALL (SELECT ID FROM TABA WHERE KEYWORD LIKE '%JOHN%') UNION ALL (SELECT ID FROM TABA WHERE KEYWORD LIKE '%PEAPACK%') ) sub ON sub.ID=TABA.ID GROUP BY TABA.ID ORDER BY FREQUENCY ```
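A runnable SQLite version of the JOIN-based counting, seeded with the question's keyword strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE taba (id INT, keyword TEXT)")
conn.executemany("INSERT INTO taba VALUES (?,?)", [
    (1, "JohnW Brown StEdisonWashington"),
    (2, "PaulE High RdPeapackNew Jersey"),
    (3, "JohnGreams RdPeapackWashington")])
rows = conn.execute("""
    SELECT t.id, COUNT(sub.id) AS frequency
    FROM taba t
    JOIN ( SELECT id FROM taba WHERE keyword LIKE '%Washington%'
           UNION ALL
           SELECT id FROM taba WHERE keyword LIKE '%John%'
           UNION ALL
           SELECT id FROM taba WHERE keyword LIKE '%Peapack%' ) sub
      ON sub.id = t.id
    GROUP BY t.id
    ORDER BY frequency DESC
""").fetchall()
print(rows)  # [(3, 3), (1, 2), (2, 1)]
```

Each match in the UNION ALL produces one joined row, so COUNT reflects how many search terms hit each record, unlike the IN version, which collapses duplicates.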
Why not to do a simplified version without that subqueries: ``` SELECT ID, Name, Add1, Add2, Add3, COUNT(*) AS FREQUENCY FROM TABA WHERE ID IN (SELECT ID FROM TABA WHERE KEYWORD LIKE '%WASHINGTON%' OR KEYWORD LIKE '%JOHN%' OR KEYWORD LIKE '%PEAPACK%') GROUP BY ID, Name, Add1, Add2, Add3 ORDER BY 6 ``` 'Order by 6' means the sixth element on the select scope which is frequency
Get number of times keyword is present
[ "", "sql", "sql-server", "" ]
I am using a script to do a query in SQL*Plus, but I have a problem when I try to export to a file. I have this in my SQL*Plus script: ``` spool hits.csv SET NEWPAGE NONE SET PAGESIZE 0 SET SPACE 0 SET LINESIZE 100 SET ECHO OFF SET FEEDBACK OFF SET VERIFY OFF SET HEADING OFF SET MARKUP HTML OFF SPOOL OFF SET TERMOUT OFF SET TRIMOUT ON SET TRIMSPOOL ON SET COLSEP | arraysize 15 set wrap off select camp1, camp2, camp3, camp4 from table spool off ``` But when I read the file, it is something like this (it truncates two columns of my query): ``` rows will be truncated rows will be truncated 20131209|name12 20131209|name12 20131209|name12 20131209|name12 20131209|name12 20131209|name12 ``` Then I increased the linesize, but with the larger linesize it shows me something like: ``` 20131209|name12 |1 | 86 20131209|name12 |5 | 1 20131209|name12 |2 | 9 20131209|name12 |3 | 5 20131209|name12 |6 | 1 ``` And I need something like: ``` 20131209|name12|1|86 20131209|name12|5|1 20131209|name12|2|9 20131209|name12|3|5 20131209|name12|6|1 ``` I have read around, but all the pages say the same things about headers; my query rows are split across many lines of the file, and I need each query row on only one line. And when I run SHOW ALL, it shows me: ``` arraysize 150 autocommit OFF autoprint OFF autorecovery OFF autotrace OFF blockterminator "." (hex 2e) btitle OFF and is the first few characters of the next SELECT statement cmdsep OFF colsep "|" compatibility version NATIVE concat "." 
(hex 2e) copycommit 0 COPYTYPECHECK is ON define "&" (hex 26) describe DEPTH 1 LINENUM OFF INDENT ON echo OFF editfile "afiedt.buf" embedded OFF escape OFF escchar OFF feedback OFF flagger OFF flush ON heading OFF headsep "|" (hex 7c) instance "local" linesize 1500 lno 0 loboffset 1 logsource "" long 80 longchunksize 80 markup HTML OFF HEAD "<style type='text/css'> body {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} p {font:10pt Arial,Helvetica,sans-serif; color:black; background:White;} table,tr,td {font:10pt Arial,Helvetica,sans-serif; color:Black; background:#f7f7e7; padding:0px 0px 0px 0px; margin:0px 0px 0px 0px;} th {font:bold 10pt Arial,Helvetica,sans-serif; color:#336699; background:#cccc99; padding:0px 0px 0px 0px;} h1 {font:16pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; border-bottom:1px solid #cccc99; margin-top:0pt; margin-bottom:0pt; padding:0px 0px 0px 0px;- } h2 {font:bold 10pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background- color:White; margin-top:4pt; margin-bottom:0pt;} a {font:9pt Arial,Helvetica,sans-serif; color:#663300; background:#ffffff; margin-top:0pt; margin-bottom:0pt; vertical-align:top;} </style><title>SQL*Plus Report</title>" BODY "" TABLE "border='1' width='90%' align='center' summary='Script output'" SPOOL OFF ENTMAP ON PREFORMAT OFF newpage NONE null "" numformat "" numwidth 10 pagesize 0 PAUSE is OFF pno 0 recsep WRAP recsepchar " " (hex 20) release 1002000500 repfooter OFF and is NULL repheader OFF and is NULL serveroutput OFF shiftinout INVISIBLE showmode OFF spool ON sqlblanklines OFF sqlcase MIXED sqlcode 0 sqlcontinue "> " sqlnumber ON sqlpluscompatibility 10.2.0 sqlprefix "#" (hex 23) sqlprompt "SQL> " sqlterminator ";" (hex 3b) suffix "sql" tab ON termout OFF timing OFF trimout ON trimspool ON ttitle OFF and is the first few characters of the next SELECT statement underline "-" (hex 2d) USER is "DMSADMIN" verify OFF wrap : lines will be truncated ```
SQL\*Plus is clunky in this regard. You can do: ``` select camp1 || '|' || camp2 || '|' || camp3 || '|' || camp4 from table; ``` Which will do what you want, but can be unwieldy for large select lists.
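The `||` operator is standard SQL (Oracle and SQLite both support it), so the effect of this answer is easy to check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Non-text operands are cast to text before concatenation
line = conn.execute("SELECT 47 || '|' || 'name12' || '|' || 1 || '|' || 86").fetchone()[0]
print(line)  # 47|name12|1|86
```

Concatenating the columns into one string guarantees the spooled row is a single line regardless of LINESIZE and COLSEP settings.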
``` set linesize 30000 -- sufficiently large ``` solves the problem.
sqlplus, export the query in 1 column, and disordered
[ "", "sql", "database", "oracle", "sqlplus", "" ]
Say that I have two tables T and T1 T ``` id p o 1 47 1 2 48 2 3 49 25 ``` T1 ``` id p o 1 47 1 2 42 2 3 47 25 ``` I am looking to insert rows from T1 into T if `count(T1.p)>1` T ``` id p o 1 47 1 2 48 2 3 49 25 1 47 1 3 47 25 ``` I tried the following query but it didn't work ``` insert into T(id , p,o)(SELECT T1.id , T1.p1,T1.l FROM T1 where SELECT count(*) FROM t1 GROUP BY t1.p HAVING COUNT(*)>1) ``` For more [details](http://www.sqlfiddle.com/#!2/6d96f) . Any help will be appreciated .
To get those values into T you will have to find out who they are in T1 and JOIN them with T1 again, to get the right number of rows: ``` INSERT INTO T (id, p, o) SELECT TT.* FROM T1 TT INNER JOIN ( SELECT id, p1, l FROM T1 GROUP BY p1 HAVING COUNT(*) > 1 ) a ON a.p1 = TT.p1; ``` `sqlfiddle demo` How this works: ``` SELECT id, p1, l FROM T1 GROUP BY p1 HAVING COUNT(*) > 1 ``` Returns the p1 that appears more than once in the table. This returns p1 = 47. `GROUP BY p1 HAVING COUNT(*) > 1` makes sure that for each p1, we only want the results that appear more than once. Then, we do an inner JOIN with T1 again, to get all rows that have P1 = 47: ``` ID P1 L 1 47 1 3 47 25 ``` Then you just INSERT this result in the destination table.
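A SQLite run of the accepted INSERT ... SELECT against the question's exact data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T  (id INT, p INT, o INT)")
conn.execute("CREATE TABLE T1 (id INT, p INT, o INT)")
conn.executemany("INSERT INTO T  VALUES (?,?,?)", [(1, 47, 1), (2, 48, 2), (3, 49, 25)])
conn.executemany("INSERT INTO T1 VALUES (?,?,?)", [(1, 47, 1), (2, 42, 2), (3, 47, 25)])

# Copy only the T1 rows whose p value occurs more than once in T1
conn.execute("""
    INSERT INTO T (id, p, o)
    SELECT tt.id, tt.p, tt.o
    FROM T1 tt
    JOIN (SELECT p FROM T1 GROUP BY p HAVING COUNT(*) > 1) dup ON dup.p = tt.p
""")
total = conn.execute("SELECT COUNT(*) FROM T").fetchone()[0]
dups = conn.execute("SELECT COUNT(*) FROM T WHERE p = 47").fetchone()[0]
print(total, dups)  # 5 3: the two p = 47 rows from T1 were appended to T's 3 rows
```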
``` insert into T SELECT T1.id , T1.p1,T1.l FROM T1 GROUP BY t1.p1 HAVING COUNT(t1.p1)>1 ``` <http://www.sqlfiddle.com/#!2/75c8e/1> Use dml on the left side
Mysql using count in query
[ "", "mysql", "sql", "" ]
I have ended up with a peculiar problem. I am not able to find out the root cause of the issue. Please help. Please create a table by executing the scripts below: ``` CREATE TABLE Employee_salary (ID INT,emp_name VARCHAR(50),Salary INT) INSERT INTO Employee_salary VALUES(1,'Dolu',15000) INSERT INTO Employee_salary VALUES(2,'Bolu',15000) INSERT INTO Employee_salary VALUES(3,'Kalia',10000) INSERT INTO Employee_salary VALUES(4,'Bheem',50000) INSERT INTO Employee_salary VALUES(5,'Krishna',40000) INSERT INTO Employee_salary VALUES(6,'Chutki',30000) SELECT * FROM Employee_salary ID emp_name Salary 1 Dolu 15000 2 Bolu 15000 3 Kalia 10000 4 Bheem 50000 5 Krishna 40000 6 Chutki 30000 ``` I wanted the highest-salaried employee, so I found the below query: ``` SELECT Top 1 emp_name,MAX(salary) from Employee_salary Group by emp_name ``` I tried to find the lowest-salaried employee by executing the below query, changing 'MAX' to 'MIN'. Surprisingly, the query does not work: it still shows 'Bheem, 50000', which is a wrong answer. Can anybody tell me the cause of this issue? ``` SELECT Top 1 emp_name,MIN(salary) from Employee_salary Group by emp_name ```
Please try the below query for the lowest salary: ``` SELECT Top 1 emp_name, salary from Employee_salary order by salary ``` and ``` SELECT Top 1 emp_name, salary from Employee_salary order by salary desc ``` for the highest salary. If there is more than one employee with the same salary, try: ``` SELECT Top 1 WITH TIES emp_name, salary from Employee_salary order by salary desc ```
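T-SQL's `TOP 1 ... ORDER BY` is the standard order-then-take-one pattern (`LIMIT 1` in most other engines). A small illustrative check with SQLite from Python, using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Employee_salary (ID INTEGER, emp_name TEXT, Salary INTEGER);
INSERT INTO Employee_salary VALUES
 (1,'Dolu',15000),(2,'Bolu',15000),(3,'Kalia',10000),
 (4,'Bheem',50000),(5,'Krishna',40000),(6,'Chutki',30000);
""")

# Lowest-paid employee: sort ascending and take the first row
# (TOP 1 ... ORDER BY salary in T-SQL, LIMIT 1 here).
lowest = cur.execute(
    "SELECT emp_name, Salary FROM Employee_salary ORDER BY Salary ASC LIMIT 1"
).fetchone()

# Highest-paid employee: same query, descending order.
highest = cur.execute(
    "SELECT emp_name, Salary FROM Employee_salary ORDER BY Salary DESC LIMIT 1"
).fetchone()
print(lowest, highest)
```

Sorting before taking one row avoids the trap in the question, where `GROUP BY emp_name` made the aggregate run per employee instead of over the whole table.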
I tried the below queries for the min and max salary records: ``` -- For Max SELECT emp_name, salary from Employee_salary where salary = (SELECT Max(salary) from Employee_salary) -- For Min SELECT emp_name, salary from Employee_salary where salary = (SELECT Min(salary) from Employee_salary) ```
Lowest salaried employee sql server - not working with MIN function
[ "", "sql", "sql-server", "" ]
TBL\_condition ``` ID CONDITION 1 AAA 2 BBB 3 CCC ``` When I hit this query: ``` SELECT CONDITION FROM TBL_condition ``` It says: > ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'condition from tbl\_condition' at line 1
Try using backticks (`) around CONDITION: ``` SELECT `CONDITION` FROM `TBL_condition`; ``` CONDITION is a reserved word in MySQL. Read more on this site: <http://forums.mysql.com/read.php?101,40643,40643>
CONDITION is a reserved word in MySQL: <http://dev.mysql.com/doc/refman/5.0/en/reserved-words.html> Try it like this: ``` SELECT `CONDITION` FROM TBL_condition ```
SELECT Query Not Working when selecting particular column 'condition'
[ "", "mysql", "sql", "select", "" ]
I'm afraid to use user form data to query the database for user login. Since the company has only 20 employees, I'm thinking of this function, but I'm not sure whether this would still be easy code to crack for a not-so-good hacker: ``` Private Function VerifyCredentials(ByVal User As String, ByVal Password As String) As Boolean Dim verification As Boolean = False Dim _conString As String = WebConfigurationManager.ConnectionStrings ("YounnectionString").ConnectionString 'Initialize connection variables Dim cnn As New SqlConnection(_conString) Dim cmd As New SqlCommand cmd.Connection = cnn cnn.Open() 'No data from the form is used on the SQL Server cmd.CommandText = "Select UserName, UserPassword from tblUsers;" Dim cmdReader As SqlDataReader = cmd.ExecuteReader() 'Compare the data from the server with the data from the form; it does not matter what the user sends from the form While cmdReader.Read() If Trim(User) = Trim(cmdReader("UserName")) AndAlso Trim(Password) = Trim(cmdReader("UserPassword")) Then verification = True End If End While 'This method may cause performance problems if your tblUsers is too big; 'after all, it is the entrance, and most companies 'just have several hundred users cmdReader.Close() cmd.CommandText = "" cnn.Close() Return verification End Function ``` Please, someone check this code and give me a better solution. This company was hacked once and the developer was fired. I don't know much about security, but they want a solution while they hire an expert. Thanks.
You are just storing plain text password. Once your database is compromised, you do not have time to notify users. You need to store hashed password with salt. Although, it can still be cracked (it takes times) but you still have sometime to notify users to change the password. For ASP.Net, the easiest way will be to use 1. [ASP.NET Universal Providers](http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx) or 2. [ASP.NET Identity](http://blogs.msdn.com/b/webdev/archive/2013/06/27/introducing-asp-net-identity-membership-system-for-asp-net-applications.aspx)
Let the database filter for you. Change the query to ``` "Select UserName, UserPassword from tblUsers WHERE UserName = " & Trim(User) & " AND UserPassword = " & Trim(Password) ``` And then, if there is some result the authentication is correct, and if there's no result, obviously you have to return false, so simply do ``` Return cmdReader.Read() ```
UserName and UserPassword Verification function
[ "", "asp.net", "sql", "vb.net", "" ]
I've searched for this answer on Stackoverflow and I haven't been able to get any of the posted solutions to work for me. I have two tables: ``` builds: build_id (INT) status_id (INT) created (INT) ``` And ``` statuses: status_id (INT) name (VARCHAR) description (VARCHAR) ``` I'd like to have a query that will return the date, the status of the build and the count for that date and status. If the count is 0 for a particular status I'd like to have it return 0. For example: ``` +------------+-------+----------+ | date | count | name | +------------+-------+----------+ | 2013-12-05 | 1 | Failed | | 2013-12-05 | 2 | Stable | | 2013-12-05 | 1 | Unstable | | 2013-12-06 | 0 | Failed | | 2013-12-06 | 1 | Stable | | 2013-12-06 | 1 | Unstable | +------------+-------+----------+ ``` I have tried various left join and ifnull combinations in my sql query. Here is the query that returns the results without the 0 count: ``` SELECT FROM_UNIXTIME(created, '%Y-%m-%d') as date, COUNT(statuses.status_id) as count, statuses.name FROM builds LEFT JOIN statuses on builds.status_id=statuses.status_id GROUP BY date, name; ``` This query displays: ``` +------------+-------+----------+ | date | count | name | +------------+-------+----------+ | 2013-12-05 | 1 | Failed | | 2013-12-05 | 2 | Stable | | 2013-12-05 | 1 | Unstable | | 2013-12-06 | 2 | Stable | | 2013-12-06 | 1 | Unstable | +------------+-------+----------+ ``` Thank you in advance. For what it's worth the solutions posted here work: [MySQL Group by - Get columns with zero count](https://stackoverflow.com/questions/20528726/mysql-group-by-get-columns-with-zero-count) EXCEPT that they require the company name in the where clause (the date field in my situation)...I want to get a listing of all the dates in one query.
I ended up making this work by joining in a separate calendar table similar to what was suggested in this post: [SQL left joining multiple tables](https://stackoverflow.com/questions/9722801/sql-left-joining-multiple-tables) Here is the full query: ``` "SELECT allRecords.date, allRecords.Name, ( SELECT COUNT(statuses.status_id) FROM statuses, builds, calendar WHERE builds.status_id = statuses.status_id AND FROM_UNIXTIME(builds.created, '%Y-%m-%d') = calendar.date AND calendar.date = allRecords.date AND statuses.name = allRecords.Name ) as Count FROM ( SELECT calendar.date, statuses.status_id, statuses.name FROM calendar, statuses ORDER BY calendar.date, statuses.name ) allRecords LEFT JOIN builds ON builds.status_id = allRecords.status_id LEFT JOIN statuses ON builds.status_id = statuses.status_id WHERE allRecords.date BETWEEN (SELECT MIN(FROM_UNIXTIME(builds.created, '%Y-%m-%d')) FROM builds) AND (SELECT MAX(FROM_UNIXTIME(builds.created, '%Y-%m-%d')) FROM builds) GROUP BY allRecords.date, allRecords.Name ORDER BY allRecords.date, CASE allRecords.Name WHEN 'Stable' THEN 1 WHEN 'Unstable' THEN 2 WHEN 'Failed' THEN 3 ELSE 100 END"; ```
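The core trick in the query above is pairing every date with every status before joining the builds, so missing combinations survive as NULLs and count as 0. A stripped-down sketch of that idea with SQLite from Python (dates stored as plain text instead of the question's unix timestamps; names shortened for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE statuses (status_id INTEGER, name TEXT);
INSERT INTO statuses VALUES (1,'Failed'),(2,'Stable'),(3,'Unstable');
CREATE TABLE builds (build_id INTEGER, status_id INTEGER, day TEXT);
INSERT INTO builds VALUES
 (1,1,'2013-12-05'),(2,2,'2013-12-05'),(3,2,'2013-12-05'),(4,3,'2013-12-05'),
 (5,2,'2013-12-06'),(6,3,'2013-12-06');
CREATE TABLE calendar (day TEXT);
INSERT INTO calendar VALUES ('2013-12-05'),('2013-12-06');
""")

# Cross join every date with every status, then LEFT JOIN the builds:
# date/status pairs with no build keep a NULL build_id, which
# COUNT(b.build_id) reports as 0.
rows = cur.execute("""
SELECT c.day, COUNT(b.build_id) AS cnt, s.name
FROM calendar c
CROSS JOIN statuses s
LEFT JOIN builds b
       ON b.day = c.day AND b.status_id = s.status_id
GROUP BY c.day, s.name
ORDER BY c.day, s.name
""").fetchall()
print(rows)
```

The output includes the `('2013-12-06', 0, 'Failed')` row that a plain join would drop, which is exactly the behavior the question asked for.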
Try: ``` SELECT FROM_UNIXTIME(created, '%Y-%m-%d') as date, COUNT(builds.status_id) as count, statuses.name FROM builds RIGHT JOIN statuses on builds.status_id=statuses.status_id GROUP BY date, name; ``` or maybe: ``` SELECT FROM_UNIXTIME(created, '%Y-%m-%d') as date, SUM(CASE WHEN builds.status_id IS NULL THEN 0 ELSE 1 END ) as count, statuses.name FROM builds RIGHT JOIN statuses on builds.status_id=statuses.status_id GROUP BY date, name; ``` COUNT( column ) should in theory count only not null values, but who knows ....
MySQL count return 0 instead of no records
[ "", "mysql", "sql", "" ]
This is a seemingly simple thing to do but I can't find any reference to it. I want to add a customized field to my select statement if the value of another field is null. In the below I want to create a field named 'IMPACT' that shows a value of 'Y' if the LOCATION\_ACCOUNT\_ID field in the subquery is null. How do I do this? ``` SELECT FIRST_NAME,LAST_NAME,ULTIMATE_PARENT_NAME, IMPACT = IF LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' FROM (SELECT DISTINCT A.FIRST_NAME, A.LAST_NAME, B.LOCATION_ACCOUNT_ID, A.ULTIMATE_PARENT_NAME FROM ACTIVE_ACCOUNTS A, QL_ASSETS B WHERE A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID(+) ```
Use `CASE` instead of `IF`: ``` SELECT FIRST_NAME, LAST_NAME, ULTIMATE_PARENT_NAME, CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT FROM ( SELECT DISTINCT A.FIRST_NAME, A.LAST_NAME, B.LOCATION_ACCOUNT_ID, A.ULTIMATE_PARENT_NAME FROM ACTIVE_ACCOUNTS A, QL_ASSETS B WHERE A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID(+) ``` You should also use LEFT JOIN syntax instead of the old (+) syntax (but that's more of a style choice in this case - it does not change the result): ``` SELECT FIRST_NAME, LAST_NAME, ULTIMATE_PARENT_NAME, CASE WHEN LOCATION_ACCOUNT_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT FROM ( SELECT DISTINCT A.FIRST_NAME, A.LAST_NAME, B.LOCATION_ACCOUNT_ID, A.ULTIMATE_PARENT_NAME FROM ACTIVE_ACCOUNTS A LEFT JOIN QL_ASSETS B ON A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID ) ``` In fact, since you aren't *using* any of the columns from `B` in your result (only checking for *existence*) you can just use `EXISTS`: ``` SELECT FIRST_NAME, LAST_NAME, ULTIMATE_PARENT_NAME, CASE WHEN EXISTS(SELECT NULL FROM QL_ASSETS WHERE LOCATION_ACCOUNT_ID = A.ACCOUNT_ID) THEN 'Y' ELSE '' END AS IMPACT FROM ACTIVE_ACCOUNTS A ```
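A minimal, portable illustration of the `CASE WHEN ... IS NULL` flag after an outer join, using SQLite from Python (two-row toy tables; names shortened from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE active_accounts (account_id INTEGER, first_name TEXT);
CREATE TABLE ql_assets (location_account_id INTEGER);
INSERT INTO active_accounts VALUES (1,'Ann'),(2,'Bob');
INSERT INTO ql_assets VALUES (1);
""")

# CASE flags rows where the outer join found no match (NULL id).
rows = cur.execute("""
SELECT a.first_name,
       CASE WHEN b.location_account_id IS NULL THEN 'Y' ELSE '' END AS impact
FROM active_accounts a
LEFT JOIN ql_assets b ON a.account_id = b.location_account_id
ORDER BY a.account_id
""").fetchall()
print(rows)
```

Bob has no matching asset row, so the left join leaves `location_account_id` NULL and the CASE expression emits 'Y'.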
Use a [case](http://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm) statement: ``` SELECT FIRST_NAME, LAST_NAME, ULTIMATE_PARENT_NAME, CASE WHEN Location_Account_ID IS NULL THEN 'Y' ELSE '' END AS IMPACT FROM ( SELECT DISTINCT A.FIRST_NAME, A.LAST_NAME, B.LOCATION_ACCOUNT_ID, A.ULTIMATE_PARENT_NAME FROM ACTIVE_ACCOUNTS A, QL_ASSETS B WHERE A.ACCOUNT_ID = B.LOCATION_ACCOUNT_ID(+) ) a ``` P.S. I also added an alias for your derived table so you won't get an error for that.
Create custom field in SELECT if other field is null
[ "", "sql", "oracle", "" ]
Let's say I have a table called EmployeeInfo like following: ``` Name Hours StartTime Date John Smith 8 8:00 2013-12-11 John Smith 7 7:00 2013-12-10 John Smith 9 6:00 2013-12-09 Tom Smith 6 9:00 2013-12-11 Tom Smith 8 7:00 2013-12-10 Tom Smith 7 5:00 2013-12-05 Alex Smith 8 8:00 2013-12-10 ``` I want query to return the following table: ``` Name HoursToday HoursWeekly StartTime Date John Smith 8 24 8:00 2013-12-11 Tom Smith 6 14 9:00 2013-12-11 ``` Where all info is taken from today's date except HoursWeekly, which is the sum of Hours from the given date (lets say 2013-12-9) till today. And the info should pop up only if employee has a record as of today (2013-12-11). Any help would be appreciated.
``` DECLARE @t TABLE ( Name VARCHAR(50), Hours INT, StartTime TIME, Date1 DATE ) INSERT INTO @t SELECT 'John Smith', 8, '8:00', '2013-12-11' UNION ALL SELECT 'John Smith', 7, '7:00', '2013-12-10' UNION ALL SELECT 'John SMITH', 9, '6:00', '2013-12-09' UNION ALL SELECT 'Tom Smith', 6, '9:00', '2013-12-11' UNION ALL SELECT 'Tom SMITH', 8, '7:00', '2013-12-10' UNION ALL SELECT 'Tom SMITH', 7, '5:00', '2013-12-05' UNION ALL SELECT 'Alex SMITH', 8, '8:00', '2013-12-10' DECLARE @input DATE= '2013-12-9'; WITH cte1 AS ( SELECT name, hours HoursToday, StartTime, Date1 FROM @t WHERE DATEDIFF(DAY, date1, GETDATE()) = 0 ), CTE AS ( SELECT name, SUM(hours) HoursWeekly FROM @t WHERE date1 BETWEEN @input AND GETDATE() AND name IN (SELECT name FROM cte1) GROUP BY name ) SELECT a.Name, a.HoursToday, b.HoursWeekly, a.StartTime, a.Date1 FROM cte1 A INNER JOIN cte B ON a.Name = b.Name ```
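The same result can be reproduced portably. A sketch with SQLite from Python, using fixed dates in place of `GETDATE()` (column names simplified; the per-name weekly sum is joined back to today's rows, so employees without a row for today — like Alex — drop out):

```python
import sqlite3

TODAY, WEEK_START = "2013-12-11", "2013-12-09"  # fixed dates instead of GETDATE()

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE emp (name TEXT, hours INTEGER, start_time TEXT, day TEXT);
INSERT INTO emp VALUES
 ('John Smith',8,'8:00','2013-12-11'),('John Smith',7,'7:00','2013-12-10'),
 ('John Smith',9,'6:00','2013-12-09'),('Tom Smith',6,'9:00','2013-12-11'),
 ('Tom Smith',8,'7:00','2013-12-10'),('Tom Smith',7,'5:00','2013-12-05'),
 ('Alex Smith',8,'8:00','2013-12-10');
""")

# Today's rows joined to a per-name weekly sum; the inner join keeps
# only employees who actually have a record for today.
rows = cur.execute("""
SELECT t.name, t.hours AS hours_today, w.hours_weekly, t.start_time, t.day
FROM emp t
JOIN (SELECT name, SUM(hours) AS hours_weekly
      FROM emp
      WHERE day BETWEEN ? AND ?
      GROUP BY name) w ON w.name = t.name
WHERE t.day = ?
ORDER BY t.name
""", (WEEK_START, TODAY, TODAY)).fetchall()
print(rows)
```

This reproduces the question's expected table: John 8/24 and Tom 6/14, with Alex excluded because he has no row dated today.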
A cleaner solution than the accepted answer ``` SELECT e1.Name, e1.Hours HoursToday, e2.HoursWeekly, e1.StartTime, e2.Date FROM EmployeeInfo e1 JOIN ( SELECT Name, MAX(Date) Date, SUM(Hours) HoursWeekly FROM EmployeeInfo WHERE Date >= CONVERT(DATE, GETDATE() - 7) GROUP BY Name HAVING MAX(Date) >= CONVERT(DATE, GETDATE()) ) e2 ON e1.name = e2.Name AND e1.Date = e2.Date ```
How to select a specific row from the table with one column as a sum of values of other rows?
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I am using SQL Server 2008 and I have the following query: ``` SELECT [Id] FROM [dbo].[Products] WHERE [dbo].GetNumOnOrder([Id]) = 0 ``` With the following "GetNumOnOrder" Scalar-valued function: ``` CREATE FUNCTION [dbo].[GetNumOnOrder] ( @ProductId INT ) RETURNS INT AS BEGIN DECLARE @NumOnOrder INT SELECT @NumOnOrder = SUM([NumOrdered] - [NumReceived]) FROM [dbo].[PurchaseOrderDetails] INNER JOIN [dbo].[PurchaseOrders] ON [PurchaseOrderDetails].[PurchaseOrderId] = [PurchaseOrders].[Id] WHERE [PurchaseOrders].[StatusId] <> 5 AND [PurchaseOrderDetails].[ProductId] = @ProductId RETURN CASE WHEN @NumOnOrder IS NOT NULL THEN @NumOnOrder ELSE 0 END END ``` However it takes around 6 seconds to execute. Unfortunately I have no control over the initial SQL generated but I can change the function. Is there any way the function can be modified to speed this up? I'd appreciate the help. Thanks
If you have the rights to add indexes to the tables (and depending on the version of SQL Server you are using), I would investigate what performance gain adding the following would have: ``` create index newindex1 on PurchaseOrders (id) include (StatusId); create index newindex2 on PurchaseOrderDetails (PurchaseOrderId) include (ProductId,NumOrdered,NumReceived); ``` You probably already have indexes on these columns - but the indexes above will support just the query in your function in the most efficient way possible (reducing the number of page reads to a minimum). If the performance of this function is important enough, you could also consider adding a calculated column into your table - for `NumOrdered-NumReceived` (and then only include the result column in the index above - and your query). You could also consider doing this in an indexed view rather than the table - but schema binding a view can be tiresome and inconvenient. Obviously, the wider the tables in question are - the greater the improvement in performance will be.
If you still want to use a function and cannot live without it, use an in-line table-valued version. It is a lot faster. Check out these articles from some experts: <http://aboutsqlserver.com/2011/10/23/sunday-t-sql-tip-inline-vs-multi-statement-table-valued-functions/> <http://dataeducation.com/scalar-functions-inlining-and-performance-an-entertaining-title-for-a-boring-post/> I have had a couple of MVP friends say that this is the only kind of function they ever write, since scalar functions are treated as a bunch of stored procedure calls. Re-written using an in-line table-valued function. Check the syntax, since I did not. Use the COALESCE function to convert NULL to zero. ``` -- -- Table value function -- CREATE FUNCTION [dbo].[GetNumOnOrder] ( @ProductId INT ) RETURNS TABLE AS RETURN ( SELECT COALESCE(SUM([NumOrdered] - [NumReceived]), 0) AS Num FROM [dbo].[PurchaseOrderDetails] INNER JOIN [dbo].[PurchaseOrders] ON [PurchaseOrderDetails].[PurchaseOrderId] = [PurchaseOrders].[Id] WHERE [PurchaseOrders].[StatusId] <> 5 AND [PurchaseOrderDetails].[ProductId] = @ProductId ); -- -- Sample call with cross apply -- SELECT [Id] FROM [dbo].[Products] P CROSS APPLY [dbo].[GetNumOnOrder] (P.Id) AS CI WHERE CI.Num = 0; ```
T-SQL Calling Scalar Function in Where Speed
[ "", "sql", "sql-server", "t-sql", "" ]
I need to show the count of job applications for the last 12 months, grouped by the job application dates (month and year). I tried the following, but it generates an error as shown below. ``` SELECT COUNT(MONTH(ApplicationDate) AS VARCHAR(2)) + '-' + CAST(YEAR(ApplicationDate) AS VARCHAR(4)) AS Total, ApplicationDate FROM [Jobs] GROUP BY ApplicationDate, ApplicationDate ORDER BY ApplicationDate ``` But that resulted in an error: > Msg 195, Level 15, State 10, Line 9 > 'COUNT' is not a recognized built-in function name. I need it for SQL Server 2008. Final working solution: ``` SELECT MONTH(ApplicationDate), YEAR(ApplicationDate), COUNT(*) AS Total, (CAST(MONTH(ApplicationDate) AS VARCHAR(2)) + '-' + CAST(YEAR(ApplicationDate) AS VARCHAR(4))) as ApplicationDate FROM [Jobs] GROUP BY YEAR(ApplicationDate), MONTH(ApplicationDate) ORDER BY YEAR(ApplicationDate), MONTH(ApplicationDate) ```
How about this one? ``` SELECT COUNT(*) AS Total, (CAST(MONTH(ApplicationDate) AS VARCHAR(2)) + '-' + CAST(YEAR(ApplicationDate) AS VARCHAR(4))) as ApplicationDate FROM [Jobs] GROUP BY (CAST(MONTH(ApplicationDate) AS VARCHAR(2)) + '-' + CAST(YEAR(ApplicationDate) AS VARCHAR(4))) ORDER BY ApplicationDate ``` the cast function was missing for month. Output will be as follows. ``` Total ApplicationDate 22217 7-2012 17979 8-2012 30341 9-2012 ``` Edited for sorting... If you are not particular about the 'MM-YYYY' format. Convert the month/year combination to number and order by that. Try this. ``` SELECT COUNT(*) AS Total, RIGHT(CONVERT(VARCHAR(10), ApplicationDate, 103), 7) as ApplicationDate, CAST(REPLACE(RIGHT(CONVERT(VARCHAR(10), ApplicationDate, 103), 7), '/', '') as int) as numdate FROM [Jobs] GROUP BY RIGHT(CONVERT(VARCHAR(10), ApplicationDate, 103), 7) ORDER BY numdate,ApplicationDate ```
You need to try something like this to get one row for each month/year that contains the count of applications for that month/year: ``` SELECT MONTH(ApplicationDate), YEAR(ApplicationDate), COUNT(*) FROM dbo.[Jobs] GROUP BY YEAR(ApplicationDate), MONTH(ApplicationDate) ORDER BY YEAR(ApplicationDate), MONTH(ApplicationDate) ``` This counts the applications per month / year and groups by it, too. You should get an output something like this: ![enter image description here](https://i.stack.imgur.com/Lq4iI.png) (*this is taken from the `AdventureWorks` sample database*) If you do a `GROUP BY ApplicationDate`, then you're basically grouping / counting by the actual **date** itself (not it's month/year parts) **Update:** if you **must** have SQL Server provide the formatting in the `MM-YYYY` format (really should be done in your web app and not by SQL Server), then try this: ``` ;WITH GroupedData AS ( SELECT DateMonth = MONTH(ApplicationDate), DateYear = YEAR(ApplicationDate), JobsCount = COUNT(*) FROM dbo.[Jobs] GROUP BY YEAR(ApplicationDate), MONTH(ApplicationDate) ) SELECT RIGHT('00' + CAST(DateMonth AS VARCHAR(2)), 2) + '-' + CAST(DateYear AS VARCHAR(4)), JobsCount FROM GroupedData ORDER BY DateMonth, DateYear ```
SQL query to count based on MM-YYYY part of date
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I need to return a value (0) if nothing is found in an SQL call. Here is what I have (edited/simplified to make more sense out of context); this is being called from the code-behind. ``` sql1 = "INSERT INTO [Xtr_MenuItems]([menu_order]) values(1 + (select max(menu_order) from [Xtr_MenuItems])) ``` So into the database I insert the max number found in [menu\_order] + 1. This works fine, assuming something is found. However, if `(select max(menu_order) from [Xtr_MenuItems]))` fails (nothing found) then I want to return 0 (as 1 + nothing = nothing, and SQL explodes). How can I do this? I have tried 'IF EXISTS' and 'OUTPUT' in various ways but can't get it to work...
Try this: ``` sql1 = "INSERT INTO [Xtr_MenuItems]([menu_order]) values(1 + ISNULL((select max(menu_order) from [Xtr_MenuItems]),0)) ``` I used the ISNULL function so that if the result of the subquery is NULL, it returns 0.
Instead of `values`, you could use `select` straight away: ``` insert Xtr_MenuItems (menu_order) select 1 + isnull(max(menu_order),0) from Xtr_MenuItems ```
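The underlying issue in this question is NULL propagation: `MAX` over an empty table yields NULL, and `1 + NULL` stays NULL. A quick illustrative check with SQLite from Python (`COALESCE` is the portable equivalent of T-SQL's `ISNULL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE menu_items (menu_order INTEGER)")

# Empty table: MAX() yields NULL, and 1 + NULL is still NULL.
bare = cur.execute(
    "SELECT 1 + (SELECT MAX(menu_order) FROM menu_items)"
).fetchone()[0]

# COALESCE turns the NULL into 0, so the arithmetic works.
guarded = cur.execute(
    "SELECT 1 + COALESCE((SELECT MAX(menu_order) FROM menu_items), 0)"
).fetchone()[0]
print(bare, guarded)
```

The bare expression comes back as NULL (`None` in Python), while the guarded one returns 1 — exactly the behavior both answers rely on.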
Return an integer if nothing is found (TSQL ASP.NET VB)
[ "", "asp.net", "sql", "vb.net", "t-sql", "" ]
I have a table that has a column (cat\_name). Some are strings followed by numbers and others are just plain strings. I'd like to arrange it by putting all strings starting with 'Level' first. **Desired output:** * Level 1 Items * Level 2 Items * Level 3 Items * Level 5 Items * Level 10 Items * Level 12 Items * Level 22 Items * Apple * Mango * Others * Special Items I used this query ``` SELECT * FROM category ORDER BY (CASE WHEN cat_name LIKE 'Level%' THEN 0 ELSE 1 END) ASC, cat_name ``` And got * Level 1 Items * Level 10 Items * Level 12 Items * Level 2 Items * Level 22 Items * Level 3 Items * Level 5 Items * Apple * Mango * Others * Special Items And I found this query here on Stack Overflow for natural sorting ``` SELECT * FROM category WHERE cat_name LIKE 'Level%' ORDER BY LEFT(cat_name,LOCATE(' ',cat_name)), CAST(SUBSTRING(cat_name,LOCATE(' ',cat_name)+1) AS SIGNED), cat_name ASC ``` but I don't know how I can integrate it with my first query. The closest I could get is ``` SELECT * FROM category ORDER BY LEFT(cat_name,LOCATE(' ',cat_name)), CAST(SUBSTRING(cat_name,LOCATE(' ',cat_name)+1) AS SIGNED), (CASE WHEN cat_name LIKE 'Level%' THEN 0 ELSE 1 END) ASC, cat_name ASC ``` But the 'Level' strings are off: they are arranged numerically but are not occupying the top positions. * Apple * Mango * Others * Level 1 Items * Level 2 Items * Level 3 Items * Level 5 Items * Level 10 Items * Level 12 Items * Level 22 Items * Special Items I think I am just missing something here. I hope someone can help me. Thanks in advance! sqlfiddle: <http://sqlfiddle.com/#!2/5a3eb/2>
Try with: ``` SELECT * FROM category ORDER BY (CASE WHEN cat_name LIKE 'Level%' THEN 0 ELSE 1 END)ASC, LEFT(cat_name,LOCATE(' ',cat_name)), CAST(SUBSTRING(cat_name,LOCATE(' ',cat_name)+1) AS SIGNED), cat_name ASC ``` It has to do with the order of the expressions within your `ORDER BY`.
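The ordering logic — Level rows first, then the numeric suffix, then the plain name — can be checked portably. An illustrative sketch with SQLite from Python (`instr`/`substr` stand in for MySQL's `LOCATE`/`SUBSTRING`; sample rows abbreviated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE category (cat_name TEXT)")
cur.executemany("INSERT INTO category VALUES (?)", [
    ("Apple",), ("Level 10 Items",), ("Level 2 Items",), ("Mango",),
    ("Level 1 Items",), ("Special Items",),
])

# Sort key 1: Level rows first; key 2: the token after the first space,
# cast to an integer so 'Level 10' sorts after 'Level 2'; key 3: the
# plain name, which orders the non-Level rows alphabetically.
rows = cur.execute("""
SELECT cat_name FROM category
ORDER BY CASE WHEN cat_name LIKE 'Level%' THEN 0 ELSE 1 END,
         CAST(substr(cat_name, instr(cat_name, ' ') + 1) AS INTEGER),
         cat_name
""").fetchall()
names = [r[0] for r in rows]
print(names)
```

The Level rows come first in true numeric order, then the remaining names alphabetically — the desired output from the question.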
You also can use something like this - ``` SELECT * FROM category ORDER BY IF(cat_name REGEXP '^(Level )[0-9]+( Items)$', 0, 1), TRIM(LEADING 'Level ' FROM cat_name)*1, cat_name ```
SQL sort string with numbers certain pattern first
[ "", "mysql", "sql", "sql-order-by", "case", "" ]
After create a Stored Procedure in a Table " dbo.terms" to insert a data in it using this code: ``` CREATE PROCEDURE dbo.terms @Term_en NVARCHAR(50) = NULL , @Createdate DATETIME = NULL , @Writer NVARCHAR(50) = NULL , @Term_Subdomain NVARCHAR(50) = NULL AS BEGIN SET NOCOUNT ON INSERT INTO dbo.terms ( Term_en , Createdate , Writer , Term_Subdomain ) VALUES ( @Term_en = 'Cat' , @Createdate = '2013-12-12' , @Writer = 'Fadi' , @Term_Subdomain = 'English' ) END GO ``` I want to Create a `Trigger` in it to add another rows in a table `dbo.term_prop` I used this code : ``` CREATE TRIGGER triggerdata AFTER INSERT ON dbo.terms FOR EACH ROW BEGIN INSERT INTO dbo.term_prop VALUES ('قطة', term_ar, upper(:new.term_ar) , null , 'chat', term_fr, upper(:new.term_fr) , null ,'Animal', Def_en, upper(:new.Def_en) , null ,'حيوان', Def_ar, upper(:new.Def_ar) , null ,'Animal', Def_fr, upper(:new.Def_fr) , null); END; ``` and it shows me an Error
To add more rows you can use the `inserted` table. This is a special table populated with the rows inserted in your transaction. An example is: ``` INSERT INTO dbo.term_prop SELECT * FROM inserted ``` Also, SQL Server triggers fire once per statement, so you mustn't use FOR EACH ROW. The correct definition of your trigger will be: ``` CREATE TRIGGER triggername ON table AFTER INSERT AS BEGIN END ```
Wow, I am still surprised that triggers are given a **bad rap**! I wrote a dozen articles on them a long time ago ... Like anything in life, the use of triggers depends on the situation. 1 - Triggers are great for tracking DDL changes. Who changed that table? <http://craftydba.com/?p=2015> 2 - Triggers can track DML changes (insert, update, delete). However, on large tables with large transaction numbers, they can slow down processing. <http://craftydba.com/?p=2060> However, with today's hardware, what is slow for me might not be slow for you. 3 - Triggers are great at tracking logins and/or server changes. <http://craftydba.com/?p=1909> So, let's get back to center and talk about your situation. **Why are you trying to make a duplicate entry on just an insert action?** Other options right out of the SQL Server engine to solve this problem are: 1 - Move data from table 1 to table 2 via a custom job. "Insert Into table 1 select \* from table 2 where etl\_flag = 0;". Of course make it transactional and update the flag after the insert is complete. I am just considering inserts w/o deletes or updates. 2 - If you want to track just changes, check out change data capture. It reads from the transaction log. It is not as instant as a trigger, i.e., it does not fire for every record. It just runs as a SQL Agent job in the background to load the cdc tables. 3 - Replicate the data from server1.database1.table1 to server2.database2.table2. Etc ... **I hope my post reminds everyone that the situation determines the solution.** Triggers are good in certain situations; otherwise, they would have been removed from the product a long time ago. And if the situation changes, then the solution might have to change ...
Create a Trigger to insert a rows in another table
[ "", "sql", "sql-server", "datatrigger", "" ]
I have a Table (call it `A_table`) in a database (call it `A_db`) in Microsoft SQL Server Management Studio, and there are 10 rows. I have another database (call it `B_db`), and it has a Table (call it `B_table`), which has the same column settings as `A_table` has. But the `B_table` is empty. What I want: * **Copy** every rows from `A_table` to `B_table`. Is there any option in **Microsoft SQL Server Management Studio 2012**, to create an insert SQL from a table? Or is there any other option to do that?
Quick and Easy way: 1. Right click database 2. Point to `tasks` `In SSMS 2017 you need to ignore step 2 - the generate scripts options is at the top level of the context menu` Thanks to [Daniel](https://stackoverflow.com/users/5740181/daniel-elkington) for the comment to update. 3. Select `generate scripts` 4. Click next 5. Choose tables 6. Click next 7. Click advanced 8. Scroll to `Types of data to script` - Called `types of data to script` in SMSS 2014 Thanks to [Ellesedil](https://stackoverflow.com/users/1861513/ellesedil) for commenting 9. Select `data only` 10. Click on 'Ok' to close the advanced script options window 11. Click next and generate your script I usually in cases like this generate to a new query editor window and then just do any modifications where needed.
I know this is an old question, but victorio also asked if there are any other options to copy data from one table to another. There is a very short and fast way to insert all the records from one table to another (which might or might not have a similar design). If you don't have an identity column in table B\_table: ``` INSERT INTO A_db.dbo.A_table SELECT * FROM B_db.dbo.B_table ``` If you have an identity column in table B\_table, you have to specify the columns to insert. Basically you select all except the identity column, which will be auto-incremented by default. In case you don't have an existing B\_table in B\_db, ``` SELECT * INTO B_db.dbo.B_table FROM A_db.dbo.A_table ``` will create table B\_table in database B\_db with all existing values
How to export all data from table to an insertable sql format?
[ "", "sql", "sql-server", "copy", "export", "ssms", "" ]
I have the following trigger created for when the value of a specific column is detected: ``` ALTER TRIGGER [dbo].[trCompletedDate] ON [dbo].[Record] FOR UPDATE AS BEGIN IF(UPDATE(Completed)) UPDATE Record SET Completed_Date=DATEADD(hh, 7, GETDATE()) FROM Record INNER JOIN Inserted ON Record.[ID]= Inserted.[ID] WHERE Record.Completed = 1 END ``` Which works fine but I would like to add an IF statement to change the value of Completed\_Date to another value when Record.Completed = 0. I am just unsure of how the syntax goes.
``` UPDATE Record SET Completed_Date = CASE WHEN Record.Completed = 1 THEN DATEADD(hh, 7, GETDATE()) ELSE -- Something else END FROM Record INNER JOIN Inserted ON Record.[ID]= Inserted.[ID] ```
You can use a [CASE](http://technet.microsoft.com/en-us/library/ms181765.aspx) statement: ``` UPDATE Record SET Completed_Date = CASE Record.Completed WHEN 1 THEN DATEADD(hh, 7, GETDATE()) WHEN 0 THEN SOMETHINGELSE END FROM Record INNER JOIN Inserted ON Record.[ID]= Inserted.[ID] WHERE Record.Completed IN (0,1); ```
sql trigger IF statement
[ "", "sql", "sql-server-2008", "triggers", "" ]
I have a table with user transactions. I need to select users who made total transactions of more than 100 000 in a single day. Currently what I'm doing is gathering all user ids and executing ``` SELECT sum ( amt ) as amt from users where date = date("Y-m-d") AND user_id=id; ``` for each id, then checking whether the amt > 100k or not. Since it's a large table, it's taking a lot of time to execute. Can someone suggest an optimised query?
This will do: ``` SELECT sum ( amt ) as amt, user_id from users where date = date("Y-m-d") GROUP BY user_id HAVING sum ( amt ) > 100000; ```
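A small illustrative demonstration of the single grouped pass with SQLite from Python (fixed sample data and a literal date instead of the question's `date("Y-m-d")`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE txns (user_id INTEGER, amt INTEGER, day TEXT);
INSERT INTO txns VALUES
 (1,60000,'2013-12-12'),(1,50000,'2013-12-12'),
 (2,40000,'2013-12-12'),(2,30000,'2013-12-12'),
 (1,90000,'2013-12-11');
""")

# One grouped pass instead of one query per user id: filter the day
# first, then keep only the groups over the threshold.
rows = cur.execute("""
SELECT user_id, SUM(amt) AS total
FROM txns
WHERE day = '2013-12-12'
GROUP BY user_id
HAVING SUM(amt) > 100000
""").fetchall()
print(rows)
```

User 1 totals 110 000 for the day and passes the HAVING filter; user 2's 70 000 does not, and the previous day's row is excluded by the WHERE clause.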
What about filtering the records first and then applying the sum, like below: ``` select SUM(amt),user_id from ( SELECT amt,user_id from users where user_id=id and date = date("Y-m-d") )tmp group by user_id having sum(amt)>100000 ```
Suggest an optimised mysql query
[ "", "mysql", "sql", "query-optimization", "" ]
I'm using MSSQL server 2008 and have two tables `forms` and `employees`. `forms` contains all forms completed by employees, e.g. ``` [id][employeeId][formId] ``` There are only ever 6 types of forms each with their own id. How would I construct a query to list each employee with each type of form and whether they have been completed (exist in the `forms` table) e.g. ``` [employeeId][formType1][formType2][formType3]... 1 1 null 1 2 1 1 null ``` **EDIT:** My current query to list all the information that then needs to be 'filtered' down to the schema above: ``` SELECT * FROM forms as f inner join employees as e on e.EmployeeID = f.EmployeeID ```
You can group by the employee ID and "count" the number of instances of each form type: ``` SELECT employeeid, SUM(CASE WHEN f.ID = 1 THEN 1 ELSE 0 END) formType1, SUM(CASE WHEN f.ID = 2 THEN 1 ELSE 0 END) formType2, SUM(CASE WHEN f.ID = 3 THEN 1 ELSE 0 END) formType3, SUM(CASE WHEN f.ID = 4 THEN 1 ELSE 0 END) formType4, SUM(CASE WHEN f.ID = 5 THEN 1 ELSE 0 END) formType5, SUM(CASE WHEN f.ID = 6 THEN 1 ELSE 0 END) formType6 FROM forms as f GROUP BY employeeid ```
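A quick portable check of the conditional-aggregation pivot with SQLite from Python (toy data; note this form yields 0 for a missing form rather than the NULL shown in the question's example output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE forms (employee_id INTEGER, form_id INTEGER);
INSERT INTO forms VALUES (1,1),(1,3),(2,1),(2,2);
""")

# One output column per form type: SUM over a CASE counts the rows
# matching each form id, so a form the employee never completed
# shows up as 0.
rows = cur.execute("""
SELECT employee_id,
       SUM(CASE WHEN form_id = 1 THEN 1 ELSE 0 END) AS form1,
       SUM(CASE WHEN form_id = 2 THEN 1 ELSE 0 END) AS form2,
       SUM(CASE WHEN form_id = 3 THEN 1 ELSE 0 END) AS form3
FROM forms
GROUP BY employee_id
ORDER BY employee_id
""").fetchall()
print(rows)
```

Employee 1 completed forms 1 and 3, employee 2 completed forms 1 and 2, and every missing combination reads 0.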
Maybe this could do the trick: ``` SELECT emp.[employeeName], form1.formId as formType1, form2.formId as formType2, form3.formId as formType3, form4.formId as formType4, form5.formId as formType5, form6.formId as formType6 FROM [database_name].dbo.[employees] as emp RIGHT OUTER JOIN [database_name].dbo.[form] as form1 ON (emp.[EmployeeID] = form1.[EmployeeID] AND form1.id='ID_FOR_THE_FORM1') RIGHT OUTER JOIN [database_name].dbo.[form] as form2 ON (emp.[EmployeeID] = form2.[EmployeeID] AND form2.id='ID_FOR_THE_FORM2') RIGHT OUTER JOIN [database_name].dbo.[form] as form3 ON (emp.[EmployeeID] = form3.[EmployeeID] AND form3.id='ID_FOR_THE_FORM3') RIGHT OUTER JOIN [database_name].dbo.[form] as form4 ON (emp.[EmployeeID] = form4.[EmployeeID] AND form4.id='ID_FOR_THE_FORM4') RIGHT OUTER JOIN [database_name].dbo.[form] as form5 ON (emp.[EmployeeID] = form5.[EmployeeID] AND form5.id='ID_FOR_THE_FORM5') RIGHT OUTER JOIN [database_name].dbo.[form] as form6 ON (emp.[EmployeeID] = form6.[EmployeeID] AND form6.id='ID_FOR_THE_FORM6') ; ``` It's not exactly what you are looking for, but you can evaluate the result: if a form column is NULL, the employee did not complete that form; anything else means they did. I hope this helps.
SQL query to select each type of form per employee
[ "sql", "sql-server", "database" ]
My question is regarding the existence of records in a table. How could I get a result list which really tells me whether the record I'm looking for does or does not exist? I mean, I know I can check for what does exist with a query like ``` SELECT field FROM table WHERE unique_field IN (value1, value2) ``` and that will show me those records which actually exist. But what if I want a result like: ``` +--------------+-------+ | unique_field | exists| +--------------+-------+ | value1 | 1 | | value2 | 0 | +--------------+-------+ ``` Is it possible to do so?
Use `EXISTS()`, it stops searching for more rows when there's a hit, so you won't have much overhead when searching big tables. ``` SELECT 'value1' AS unique_field, EXISTS(SELECT 1 FROM your_table WHERE unique_field = 'value1') AS 'exists' UNION ALL SELECT 'value2' AS unique_field, EXISTS(SELECT 1 FROM your_table WHERE unique_field = 'value2') AS 'exists' ```
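The UNION ALL of per-value `EXISTS` probes can be verified with a small sketch (SQLite behaves like MySQL here in returning 1/0 from `EXISTS`). The table name and values below are made up for illustration:

```python
import sqlite3

# Only 'value1' is present in the table; 'value2' is not.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE your_table (unique_field TEXT)")
con.execute("INSERT INTO your_table VALUES ('value1')")

rows = con.execute("""
    SELECT 'value1' AS unique_field,
           EXISTS(SELECT 1 FROM your_table WHERE unique_field = 'value1')
    UNION ALL
    SELECT 'value2',
           EXISTS(SELECT 1 FROM your_table WHERE unique_field = 'value2')
""").fetchall()
```

The result pairs each probed value with a 1/0 existence flag, exactly the shape the question asks for.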
You can also do this using a "reference" table for the values: ``` select ref.val, (exists (select 1 from table t where t.unique_field = ref.val)) as `exists` from (select 'value1' as val union all select 'value2' as val ) ref; ``` You can also phrase this as a `left outer join`: ``` select ref.val, (t.unique_field is not null) as `exists` from (select 'value1' as val union all select 'value2' as val ) ref left outer join table t on t.unique_field = ref.val; ``` This works because the field in `table` is unique (otherwise, you might get duplicate rows or need aggregation).
Check records existence in MySQL
[ "mysql", "sql" ]
This should be a simple one but I am having trouble with it. I want to create a dynamic separated list from a table that I have. Example: Table: ``` Email Person1@address.com Person2@address.com ``` End result should give me Person1@address.com;person2@address.com I am not sure what the right method would be to get these results. I think that I can do it with FOR XML but it is pretty complex for what seems to be a simple issue. Any advice would be appreciated. I will keep messing with the FOR XML example I found.
This should give back the desired results: ``` SELECT STUFF((SELECT ';'+ Email AS [text()] FROM Person FOR XML PATH('')),1,1,''); ``` <http://sqlfiddle.com/#!3/c3fac/3>
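`FOR XML PATH` is SQL Server-specific; for comparison, here is a sketch of the same idea using SQLite's `group_concat` (MySQL's `GROUP_CONCAT` is analogous). The table and addresses are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Person (Email TEXT)")
con.executemany("INSERT INTO Person VALUES (?)",
                [("Person1@address.com",), ("Person2@address.com",)])

# group_concat joins the column values with the given separator.
csv = con.execute("SELECT group_concat(Email, ';') FROM Person").fetchone()[0]
```

Note that without an ORDER BY the concatenation order is not guaranteed, which usually doesn't matter for a recipient list.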
``` declare @emailstring varchar(max) = '' select @emailstring = @emailstring + email + ';' from tablename set @emailstring = left(@emailstring,len(@emailstring)-1) ```
Creating a CSV list from a sql table
[ "sql", "t-sql" ]
I have two tables: ``` Table A with system values ------------------------ id | val_1 | val_2 | ------------------------ 1 | 11 | 22 | ------------------------ Table B with user values ----------------------- uid | set | val_3 | ----------------------- 21 | 1 | 11 | ----------------------- 68 | 2 | 22 | ----------------------- 83 | 1 | 11 | ----------------------- ``` I'd like to update **val\_3** in Table B with the values of **val\_1** and **val\_2** of Table A, according to the values of **set** in Table B. So if I change: Table A val\_1 => 333, Table A val\_2 => 666, the update query changes Table B to: --- ``` uid | set | val 3 | ----------------------- 21 | 1 | 333 | ----------------------- 68 | 2 | 666 | ----------------------- 83 | 1 | 333 | ----------------------- ``` **Is this possible in one query?** Now I do: ``` $result=$mysql->query('SELECT val1,val2 from TABLE A WHERE id=1'); UPDATE TABLE B set val3=$result[0] WHERE set=1 UPDATE TABLE B set val3=$result[1] WHERE set=2 ``` Maybe I could do a `CASE WHEN` but I don't know how to do it without a corresponding row value for Table A.
Do an update using a JOIN: ``` UPDATE TableB CROSS JOIN TableA SET TableB.val_3 = CASE WHEN TableB.`set` = 1 THEN TableA.val_1 WHEN TableB.`set` = 2 THEN TableA.val_2 END WHERE TableA.id=1 AND TableB.`set` IN (1,2) ```
Try this: ``` UPDATE TableB SET val3 = CASE set WHEN 1 THEN (SELECT val1 FROM TableA WHERE id=1) WHEN 2 THEN (SELECT val2 FROM TableA WHERE id=1) END ```
Update table depending on value without overlapping field names
[ "mysql", "sql", "if-statement", "sql-update", "case" ]
I have a table which has a `primary key` with `varchar` data type. And another table with `foreign key` as `varchar` datatype. I am making a `join` statement using this pair of `varchar` datatype. Though I am dealing with few number of rows (say hunderd rows), it is taking `60ms`. But when the system will finally be deployed, it will have around thousands of rows. I read [`Performance of string comparison vs int join in SQL`](https://stackoverflow.com/questions/12431107/performance-of-string-comparison-vs-int-join-in-sql), and concluded that the performance of `SQL` Query depend upon `DB` and number of rows it is dealing with. But when I am dealing with a very large amount of data, would it matter much? Should I create a new column with a `number` datatype in both the `table` and `join` the `table` to reduce the time taken by the `SQL` Query.?
> Should I create a new column with a number datatype in both the table and join the table to reduce the time taken by the SQL Query.? If you're in a position where you can change the design of the database with ease then yes, your Primary Key should be an integer. Unless there is a really good reason to have an FK as a varchar, then they should be integers as well. If you can't change the PK or FK fields, then make sure they're indexed properly. This will eventually become a bottleneck though.
You should use the correct data type for that data that you are representing -- any dubious theoretical performance gains are secondary to the overhead of having to deal with data conversions. It's really impossible to say what that is based on the question, but most cases are rather obvious. Where they are not obvious are in situations where you have a data element that is represented by a set of digits but which you do not treat as a number -- for example, a phone number. Clues that you are dealing with this situation are: * leading zeroes that must be preserved * no arithmetic operations are carried out on the element. * string operations are carried out: eg. "take the last four characters" If that's the case then you probably want to store your "number" as a varchar.
SQL - performance in varchar vs. int
[ "mysql", "sql", "sql-server", "database", "oracle" ]
One thing struck in my head about mysql/sqlserver i.e **delete/ truncate** Which one is better and faster ? where to use delete? where to use truncate?
**DELETE** 1. DELETE is a DML command. 2. A DELETE statement is executed using a row lock; each row in the table is locked for deletion. 3. We can specify filters in the WHERE clause. 4. It deletes only the specified data if a WHERE condition exists. 5. DELETE activates triggers because the operations are logged individually. 6. Slower than TRUNCATE because it keeps logs. 7. Rollback is possible. **TRUNCATE** 1. TRUNCATE is a DDL command. 2. TRUNCATE TABLE always locks the table and pages, but not each row. 3. Cannot use a WHERE condition. 4. It removes all the data. 5. TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions. 6. Faster performance-wise, because it doesn't keep such logs. 7. Rollback is possible in SQL Server (inside a transaction), but not in MySQL. * DELETE and TRUNCATE both can be rolled back when used within a TRANSACTION (TRUNCATE can be rolled back in SQL Server, but not in MySQL). * If there is a PK with auto increment, TRUNCATE will reset the counter <http://beginner-sql-tutorial.com/sql-delete-statement.htm>
## Difference The most important difference is that DELETE operations are transaction-safe and logged, which means DELETE can be rolled back. In MySQL, TRUNCATE cannot be done inside a transaction and can't be rolled back. Because TRUNCATE is not logged, recovering a mistakenly TRUNCATEd table is a much bigger problem than recovering from a DELETE. DELETE will fail if foreign key constraints are broken; TRUNCATE may not honor foreign key constraints (it does for InnoDB tables). DELETE will fire any ON DELETE triggers; TRUNCATE will not. ## FASTER TRUNCATE operations drop and re-create the table, which is much faster than deleting rows one by one, particularly for large tables. ## Where to USE ### truncate Use TRUNCATE when the table should be emptied completely and the auto-incrementing key reset to 1. It's faster than DELETE because it removes all data at once, while DELETE scans the table to generate a count of the rows that were affected. ### delete Use DELETE when you need to remove rows based on an optional WHERE clause, need logging, or must honor foreign key constraints.
comparison of truncate vs delete in mysql/sqlserver
[ "mysql", "sql", "sql-server" ]
I am running a file upload process to upload files to a db. The web server and the SQL server are different machines. I am attempting to use an SQL OPENROWSET to upload an excel file, but I cannot determine how to get the file onto the other machine. Is there a way to set up a shared drive that the web server can save a file to and the SQL server can access? We have a local network set up with Active Directory. For Example: WebServer - Shared drive on web server under C:/inetpub/webpage/fileImport SQLServer - Will log in with sql auth using USERID and PASSWD. Needs to access webserver shared drive. What user do I share the drive on web server with so that the sql auth user will be able to access it when I run the OPENROWSET? Any help will be much appreciated.
Using OPENROWSET means that SQL Server will access files using its service account. This account must be the one granted access to the shared drive, as stated here: [Using SQL Credential to Open a file with OpenRowSet](https://stackoverflow.com/questions/20613475/using-sql-credential-to-open-a-file-with-openrowset).
I am also trying the same thing by uploading the file via FTP and trying to access it, but I haven't made any progress in the last 2 weeks. I have found some other alternatives, like copying the files to another server and sharing the folder without a username & password; then we can access it as \\folder\filename. If you find any other alternative, please share...
Save and access file from shared drive
[ "sql", "iis", "active-directory", "directory" ]
I am working with SQL Server 2008 If I have a Table as such: ``` Code Value ----------------------- 4 240 4 299 4 210 2 NULL 2 3 6 30 6 80 6 10 4 240 2 30 ``` How can I find the median AND group by the Code column please? To get a resultset like this: ``` Code Median ----------------------- 4 240 2 16.5 6 30 ``` I really like this solution for median, but unfortunately it doesn't include Group By: <https://stackoverflow.com/a/2026609/106227>
The solution using rank works nicely when you have an odd number of members in each group, i.e. the median exists within the sample, where you have an even number of members the rank method will fall down, e.g. ``` 1 2 3 4 ``` The median here is 2.5 (i.e. half the group is smaller, and half the group is larger) but the rank method will return 3. To get around this you essentially need to take the top value from the bottom half of the group, and the bottom value of the top half of the group, and take an average of the two values. ``` WITH CTE AS ( SELECT Code, Value, [half1] = NTILE(2) OVER(PARTITION BY Code ORDER BY Value), [half2] = NTILE(2) OVER(PARTITION BY Code ORDER BY Value DESC) FROM T WHERE Value IS NOT NULL ) SELECT Code, (MAX(CASE WHEN Half1 = 1 THEN Value END) + MIN(CASE WHEN Half2 = 1 THEN Value END)) / 2.0 FROM CTE GROUP BY Code; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/733f4/4)** --- In SQL Server 2012 you can use [PERCENTILE\_CONT](http://technet.microsoft.com/en-us/library/hh231473.aspx) ``` SELECT DISTINCT Code, Median = PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY Value) OVER(PARTITION BY Code) FROM T; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!6/733f4/3)**
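The even/odd handling above can be cross-checked against the question's own sample data with a short sketch in plain Python, excluding NULLs and averaging ties the same way the NTILE query does:

```python
import sqlite3
from statistics import median

# The question's sample data, including the NULL for Code 2.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (Code INTEGER, Value REAL)")
con.executemany("INSERT INTO T VALUES (?, ?)",
                [(4, 240), (4, 299), (4, 210), (2, None), (2, 3),
                 (6, 30), (6, 80), (6, 10), (4, 240), (2, 30)])

# Group the non-NULL values per Code and take each group's median.
groups = {}
for code, value in con.execute("SELECT Code, Value FROM T WHERE Value IS NOT NULL"):
    groups.setdefault(code, []).append(value)
medians = {code: median(vals) for code, vals in groups.items()}
```

The result matches the expected output in the question: 240 for Code 4, 16.5 for Code 2, and 30 for Code 6.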
SQL Server does not have a function to calculate medians, but you could use the ROW\_NUMBER function like this: ``` WITH RankedTable AS ( SELECT Code, Value, ROW_NUMBER() OVER (PARTITION BY Code ORDER BY VALUE) AS Rnk, COUNT(*) OVER (PARTITION BY Code) AS Cnt FROM MyTable ) SELECT Code, Value FROM RankedTable WHERE Rnk = Cnt / 2 + 1 ``` To elaborate a bit on this solution, consider the output of the RankedTable CTE: ``` Code Value Rnk Cnt --------------------------- 4 240 2 3 -- Median 4 299 3 3 4 210 1 3 2 NULL 1 2 2 3 2 2 -- Median 6 30 2 3 -- Median 6 80 3 3 6 10 1 3 ``` Now from this result set, if you only return those rows where Rnk equals Cnt / 2 + 1 (integer division), you get only the rows with the median value for each group.
How to find the SQL medians for a grouping
[ "sql", "sql-server", "t-sql", "group-by", "median" ]
[Sqlite does not support the use of indexes in queries based around a NOT IN clause](http://www.sqlite.org/optoverview.html). Is it possible to logically rewrite a query like the following in such a way that it will only use the operators listed at the link above? The query: ``` Select * From table Where table-column not in ( Select table-column From table2); ``` The operators listed as being able to use an index: * column = expression * column > expression * column >= expression * column < expression * column <= expression * expression = column * expression > column * expression >= column * expression < column * expression <= column * column IN (expression-list) * column IN (subquery) * column IS NULL
``` Select A.* From table a left join table2 b on a.table-column = b.table-column WHERE b.table-column is null ```
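Since the question targets SQLite, the anti-join rewrite is easy to sanity-check directly with Python's built-in `sqlite3`. Table and column names below are made up; note one caveat the rewrite sidesteps: if `table2`'s column ever contains NULL, `NOT IN` returns no rows at all, while the LEFT JOIN version keeps working.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (c INTEGER)")
con.execute("CREATE TABLE t2 (c INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO t2 VALUES (?)", [(2,), (3,)])

# Original NOT IN form.
not_in = con.execute(
    "SELECT c FROM t1 WHERE c NOT IN (SELECT c FROM t2)").fetchall()

# Index-friendly LEFT JOIN anti-join form.
left_join = con.execute("""
    SELECT t1.c FROM t1
    LEFT JOIN t2 ON t1.c = t2.c
    WHERE t2.c IS NULL""").fetchall()
```

Both queries return the rows of `t1` with no match in `t2`.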
Use a [LEFT JOIN as described in section 6](http://www.sqlite.org/optoverview.html). SQLFiddle with sample data [here](http://www.sqlfiddle.com/#!5/0cc76/1). Expand *View Execution Plan* to confirm that the original query does a table scan, while the LEFT JOIN query uses the index.
Is it possible to rewrite a NOT IN query to use indexes?
[ "sql", "sqlite" ]
I have a table with a price column for each product. It already has values, but I want to add $10 to each product. Is there any way to update every row with an addition? I can't figure out how to specify the increment here: ``` update mytable set price= ```
Try this. If your price field is numeric(int,float) type.. ``` update mytable set price = price + 10 ```
Just an update to @a_horse_with_no_name's answer `update mytable set price = NVL(price,0) + 10;` NULL + 10 is NULL (at least in Oracle).
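`NVL` is Oracle-only; `COALESCE` is the portable spelling of the same NULL guard. A minimal sketch on SQLite with made-up data, showing that a NULL price also gets bumped:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (price REAL)")
con.executemany("INSERT INTO mytable VALUES (?)", [(5.0,), (None,)])

# COALESCE(price, 0) treats the NULL price as 0 before adding 10.
con.execute("UPDATE mytable SET price = COALESCE(price, 0) + 10")
prices = sorted(r[0] for r in con.execute("SELECT price FROM mytable"))
```

Without the COALESCE, the NULL row would simply stay NULL after the update.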
updating entire row by adding values in sql
[ "mysql", "sql" ]
Hi, and thanks in advance for your time. I have to solve this specific problem. I have the following table of stamps ``` idStamp idCustomer uptoW uptoD price ----------- ----------- ----------- ----------- ----------- 218 4 2200 690 205 218 4 1700 660 155 ``` I need to find a single row (if it exists) that matches 4 criteria: - idStamp - idCustomer - uptoW - uptoD I have used ``` SELECT Distinct(UptoW) FROM [mytable] WHERE uptoW=(SELECT MIN(uptoW) FROM [mytable] WHERE uptoW >= 1600-1 AND idStamp = 218 AND idCustomer = 4) ``` and it works for 3 criteria - idStamp - idCustomer - uptoW but I can't figure out how to implement the fourth matching criterion. Summarizing: using the following params (idStamp=218, idCustomer=4, Width=1600, Depth=640) I expect to find the row with price 155 ...but... using params (idStamp=218, idCustomer=4, Width=1600, Depth=670) I expect to find the row with price 205. If there's no way to satisfy all criteria, I expect no rows. -- edit 2013.12.13 -- ``` idStamp idCustomer UptoW UptoD price ----------- ----------- ----------- ----------- ----------- 218 4 220 69 155 218 4 170 100 205 218 4 230 100 400 218 4 180 90 345 218 4 180 89 34 218 4 179 90 32 218 4 179 89 2343 DECLARE @p_idStamp INT = 218 ,@p_idCustomer INT = 4 ,@UptoW INT = 160 ,@UptoD INT = 89 ,@UptoWmin int ,@UptoDmin int; SELECT @UptoWmin = MIN(UptoW) FROM mytable WHERE UptoW =(SELECT MIN(UptoW) FROM mytable WHERE UptoW >= @UptoW-1 AND idStamp = 218 AND idCustomer = 4) SELECT @UptoDmin = MIN(UptoD) FROM mytable WHERE UptoD =(SELECT MIN(UptoD) FROM mytable WHERE UptoD >= @UptoD-1 AND idStamp = 218 AND idCustomer = 4) SELECT TOP(1) * FROM mytable WHERE UptoW >= @UptoWmin AND UptoD >= @UptoDmin ORDER BY UptoW, UptoD SELECT TOP(1) * FROM mytable WHERE UptoW >= @UptoWmin AND UptoD >= @UptoDmin ORDER BY UptoD, UptoW ``` I have probably found a solution.
As you can see, I first find the exact min values in the table, then compose a query that satisfies both params. I repeat the query inverting the ORDER BY clause, because the results can differ. Let me know if a better solution can be adopted.
I believe one way of doing what you want is ordering the results by the *exceeding area*, and then taking the first row. Something like that: ``` -- Test schema, based on S Koppenol answer: CREATE TABLE stamps ( [idStamp] INT, [idCustomer] INT, [uptoW] INT, [uptoD] INT, [price] MONEY ) INSERT INTO stamps ( [idStamp], [idCustomer], [uptoW], [uptoD], [price] ) VALUES (218, 4, 2200, 690, 205.0), (218, 4, 1700, 660, 155.0) GO -- Procedure to find the desired row: CREATE PROCEDURE GetRow( @idStamp INT, @idCustomer INT, @uptoW INT, @uptoD INT ) AS BEGIN SELECT TOP 1 T.idStamp, T.idCustomer, T.uptoW, T.uptoD, (T.uptoW * T.uptoD - @uptoW * @uptoD) AS DELTA_AREA FROM stamps T WHERE T.uptoW >= @uptoW + 1 AND T.uptoD >= @uptoD + 1 AND idStamp = @idStamp AND idCustomer = @idCustomer ORDER BY DELTA_AREA END GO -- Testing EXEC GetRow 218, 4, 1600, 640; EXEC GetRow 218, 4, 1700, 640; EXEC GetRow 218, 4, 1600, 660; EXEC GetRow 218, 4, 2200, 660; ``` And the testing results: ``` idStamp idCustomer uptoW uptoD DELTA_AREA ----------- ----------- ----------- ----------- ----------- 218 4 1700 660 98000 (1 row(s) affected) idStamp idCustomer uptoW uptoD DELTA_AREA ----------- ----------- ----------- ----------- ----------- 218 4 2200 690 430000 (1 row(s) affected) idStamp idCustomer uptoW uptoD DELTA_AREA ----------- ----------- ----------- ----------- ----------- 218 4 2200 690 462000 (1 row(s) affected) idStamp idCustomer uptoW uptoD DELTA_AREA ----------- ----------- ----------- ----------- ----------- (0 row(s) affected) ``` That way you'll surely get a record that is not only inside the requested limit, but also the nearest.
You could add `uptoD >= DESIRED_VALUE` to the validation. For example: ``` SELECT Distinct(UptoW) FROM [stamps] WHERE uptoW=(SELECT MIN(uptoW) FROM [stamps] WHERE uptoW >= 1600-1 AND uptoD >= 640 AND idStamp = 218 AND idCustomer = 4); ``` This would give you: ``` UPTOW 1700 ``` But you could simplify further and do: ``` SELECT MIN(uptoW) AS UPTOW FROM [stamps] WHERE uptoW >= 1600-1 AND uptoD >= 640 AND idStamp = 218 AND idCustomer = 4 ``` `sqlfiddle demo`
Retrieve closest upper values
[ "sql", "sql-server", "database" ]
In a MySQL table there are two columns: user and score. How can I get a table with the second maximum scores of each user? And the maximum of these second maximums? In other words, I could have this table: ``` User | Score X 50 Y 74 X 9 X 12 Y 21 ``` I want to get a table with the second maximums: ``` User | Score X | 12 Y | 21 ``` And another one with the maximum of the second maximums: ``` Score 21 ``` Note: Performance is important. I will consider as correct the fastest solution.
``` DROP TABLE IF EXISTS scores; CREATE TABLE scores(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,user CHAR(1) NOT NULL,score INT NOT NULL,INDEX(user,score)); INSERT INTO scores (user,score) VALUES ('X',50), ('Y',74), ('X',9), ('X',12), ('Y',21); SELECT x.* FROM scores x JOIN scores y ON y.user = x.user AND y.score >= x.score GROUP BY x.user , x.score HAVING COUNT(*) = 2 ORDER BY score DESC LIMIT 1; +----+------+-------+ | id | user | score | +----+------+-------+ | 5 | Y | 21 | +----+------+-------+ ``` If there's a chance that the same user could have the same score twice, then you may need to include a DISTINCT in there somewhere - but I'll leave that as an exercise for the reader. If performance is an issue then a solution similar to the following is infinitely more scalable. I say 'similar' because I have a habit of becoming muddled when trying to track multiple variables... ``` SELECT id,user,score FROM ( SELECT id , user , score , @puser := @cuser , @prev := @curr , @cuser := user , @curr := score , @rank := IF(@puser = @cuser,IF(@prev = @curr, @rank, @rank+1),@rank:=1) rank FROM scores JOIN (SELECT @cuser :=null,@puser := null,@curr := null, @prev := null, @rank := 0) sel1 ORDER BY user, score DESC ) x WHERE rank = 2 ORDER BY score DESC LIMIT 1; +----+------+-------+ | id | user | score | +----+------+-------+ | 5 | Y | 21 | +----+------+-------+ ``` A quick test on an indexed table of two users and ca. 10,000 rows. Query 1 completes in 15 seconds, while query 2 completes in 1/100th of a second!
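The first (self-join) query runs unchanged on SQLite, so here is a small sketch replaying it on the answer's sample data; the maximum of the per-user second maximums is then taken in Python rather than with the trailing `ORDER BY ... LIMIT 1`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (user TEXT, score INTEGER)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("X", 50), ("Y", 74), ("X", 9), ("X", 12), ("Y", 21)])

# For each (user, score), count how many scores of that user are >= it;
# a count of exactly 2 marks the second-highest score.
second = con.execute("""
    SELECT x.user, x.score
    FROM scores x
    JOIN scores y ON y.user = x.user AND y.score >= x.score
    GROUP BY x.user, x.score
    HAVING COUNT(*) = 2
    ORDER BY x.user
""").fetchall()
best = max(score for _, score in second)
```

This reproduces the expected second maximums (X: 12, Y: 21) and their overall maximum, 21.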
**All the entries without the `MAX` for each user:** ``` SELECT t1.user, t1.score FROM t t1, (SELECT t2.user, t2.score FROM t t2 WHERE t2.score = (SELECT MAX(t3.score) FROM t t3 WHERE t2.user = t3.user)) t4 WHERE t1.user = t4.user AND t1.score <> t4.score GROUP BY t1.user DESC ORDER BY t1.user, t1.score; ``` **Only `MAX` of second maximums:** ``` SELECT t1.user, MAX(t1.score) FROM t t1, (SELECT t2.user, t2.score FROM t t2 WHERE t2.score = (SELECT MAX(t3.score) FROM t t3 WHERE t2.user = t3.user)) t4 WHERE t1.user = t4.user AND t1.score <> t4.score; ``` ***SQLFiddle:*** <http://sqlfiddle.com/#!2/f2e717/29>
MySQL second maximums of each user
[ "mysql", "sql" ]
I'm using MySQL, and here is an example table I made: > <http://sqlfiddle.com/#!2/0ca23/1> In this table I've got the fields `Country`, `City`, `Resource`, `Volume`, `other`. I need to **SELECT** the records which contain the **MAX** value of the `Volume` field for each `Resource` in each `City` in each `Country`. I've tried this query: ``` SELECT `Country`, `City`, `Resource`, MAX(`Volume`), `other` FROM `temp` GROUP BY `Country`, `City`, `Resource` ``` but the data was messed up (in the `other` field). To be clear, that's what I'm trying to achieve: > <http://sqlfiddle.com/#!2/0ad62a/1> I need the WHOLE record which contains the **MAX** `Volume` value. I've already read [SQL Select only rows with Max Value on a Column](https://stackoverflow.com/questions/7745609/sql-select-only-rows-with-max-value-on-a-column) and know that there is an INNER JOIN way to solve it, but I don't get how to do it with multiple grouping. Thank you for reading.
Check this updated Fiddle of yours: <http://sqlfiddle.com/#!2/0ca23/4> ``` SELECT temp.* FROM temp JOIN (SELECT `Country`, `City`, `Resource`, MAX(`Volume`) AS MaxVol FROM `temp` GROUP BY `Country`, `City`, `Resource`) t ON temp.country = t.country AND temp.city = t.city AND temp.resource = t.resource AND temp.volume = t.MaxVol ``` This query is basically making a INNER JOIN of your main table with the subquery which gets the max(volume) records for each country, city, and resource. The subquery results are aliased as table `t`.
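This join-against-aggregated-subquery pattern can be verified with a runnable sketch; the rows below are made up for illustration, with two volumes for oil so the filter has something to discard:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE temp (Country TEXT, City TEXT, Resource TEXT,"
            " Volume INTEGER, other TEXT)")
con.executemany("INSERT INTO temp VALUES (?, ?, ?, ?, ?)",
                [("US", "NY", "oil", 10, "a"),
                 ("US", "NY", "oil", 20, "b"),
                 ("US", "NY", "gas", 5, "c")])

# Keep only the rows whose Volume equals the group-wise MAX(Volume).
best = con.execute("""
    SELECT temp.*
    FROM temp
    JOIN (SELECT Country, City, Resource, MAX(Volume) AS MaxVol
          FROM temp
          GROUP BY Country, City, Resource) t
      ON temp.Country = t.Country AND temp.City = t.City
     AND temp.Resource = t.Resource AND temp.Volume = t.MaxVol
    ORDER BY temp.Resource
""").fetchall()
```

The whole row survives, including the `other` column that a bare GROUP BY would scramble.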
Since you're not grouping on the `other` column, MySQL will give you a random value from within the group. In fact, other RDBMS such as SQL Server does not even allow you to SELECT a column on which you don't have an aggregate function or a group by. So the solution, in your case, depends on what you want to return for the `other` column. Do you want just the value that belongs to the group that has the maximum volume? In that case, do something like this: ``` SELECT `Country`, `City`, `Resource`, `Volume`, `other` FROM `temp` t1 WHERE `Volume` = (SELECT MAX(`Volume`) FROM `temp` t2 WHERE t1.`Country` = t2.`Country` AND t1.`City` = t2.`City` AND t1.`Resource` = t2.`Resource`) ```
Find records with MAX value in each
[ "mysql", "sql", "group-by", "max", "greatest-n-per-group" ]
**This is my database table tbl_rate** ``` delimiter $$ CREATE TABLE `tbl_rate` ( `Rate_ID` int(11) NOT NULL AUTO_INCREMENT, `Route_ID` int(11) NOT NULL, `From_LocationID` int(11) NOT NULL, `To_LocationID` int(11) NOT NULL, `Normal_Rate` double NOT NULL, `Discounted_Rate` double NOT NULL, PRIMARY KEY (`Rate_ID`) ) ENGINE=InnoDB AUTO_INCREMENT=289 DEFAULT CHARSET=latin1$$ ``` **Executed SQL statement on tbl_rate** ``` SELECT Rate_ID FROM tbl_rate WHERE From_LocationID NOT IN( select a.Route_LocationID from_loc_id from tbl_route a inner join tbl_route b on a.Route_ID = b.Route_ID and a.Route_Seq < b.Route_Seq and a.Route_ID = 3 inner join tbl_location la on la.Location_ID = a.Route_LocationID inner join tbl_location lb on lb.Location_ID = b.Route_LocationID order by a.Route_Seq, b.Route_Seq) AND To_LocationID NOT IN( select b.Route_LocationID to_loc_id from tbl_route a inner join tbl_route b on a.Route_ID = b.Route_ID and a.Route_Seq < b.Route_Seq and a.Route_ID = 3 inner join tbl_location la on la.Location_ID = a.Route_LocationID inner join tbl_location lb on lb.Location_ID = b.Route_LocationID order by a.Route_Seq, b.Route_Seq) AND Route_ID = 3 ``` This is my SQL result from tbl_rate: ``` Rate_ID -------- 253 254 ``` I want to delete these two Rate_IDs from table tbl_rate; how do I turn this into a DELETE SQL statement?
If you want to use a subquery in `DELETE` with the same table you're deleting from all you have to do is to wrap your query in additional outer select ``` DELETE FROM tbl_rate WHERE Rate_ID IN ( SELECT Rate_ID FROM ( SELECT Rate_ID... --- Your original query goes here ) q ); ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/91a97/1)** demo Another option is to use multi-table `DELETE` syntax ``` DELETE t FROM tbl_rate t JOIN ( SELECT Rate_ID... --- Your original query goes here ) q ON t.Rate_ID = q.Rate_ID; ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/d2b21/1)** demo --- Your complete `DELETE` statement with a subquery might look like ``` DELETE FROM tbl_rate WHERE Rate_ID IN ( SELECT Rate_ID FROM ( SELECT Rate_ID FROM tbl_rate WHERE From_LocationID NOT IN ( select a.Route_LocationID from_loc_id from tbl_route a inner join tbl_route b on a.Route_ID = b.Route_ID and a.Route_Seq < b.Route_Seq and a.Route_ID = 3 inner join tbl_location la on la.Location_ID = a.Route_LocationID inner join tbl_location lb on lb.Location_ID = b.Route_LocationID order by a.Route_Seq, b.Route_Seq ) AND To_LocationID NOT IN ( select b.Route_LocationID to_loc_id from tbl_route a inner join tbl_route b on a.Route_ID = b.Route_ID and a.Route_Seq < b.Route_Seq and a.Route_ID = 3 inner join tbl_location la on la.Location_ID = a.Route_LocationID inner join tbl_location lb on lb.Location_ID = b.Route_LocationID order by a.Route_Seq, b.Route_Seq ) AND Route_ID = 3 ) q ); ```
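MySQL rejects a DELETE whose subquery reads the table being deleted from (error 1093), which is what the extra derived-table wrapper works around. SQLite accepts either form, so the wrapped pattern can be demonstrated with a sketch (schema simplified to just the columns the pattern needs, data made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_rate (Rate_ID INTEGER PRIMARY KEY,"
            " Route_ID INTEGER)")
con.executemany("INSERT INTO tbl_rate VALUES (?, ?)",
                [(253, 3), (254, 3), (255, 1)])

# The inner SELECT stands in for the question's big filtering query;
# wrapping it in a derived table `q` is what satisfies MySQL.
con.execute("""
    DELETE FROM tbl_rate
    WHERE Rate_ID IN (
        SELECT Rate_ID FROM (
            SELECT Rate_ID FROM tbl_rate WHERE Route_ID = 3
        ) q
    )""")
remaining = [r[0] for r in con.execute("SELECT Rate_ID FROM tbl_rate")]
```

Rows 253 and 254 are removed; only the row outside the route survives.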
``` DELETE FROM tbl_rate WHERE From_LocationID NOT IN( select a.Route_LocationID from_loc_id from tbl_route a inner join tbl_route b on a.Route_ID = b.Route_ID and a.Route_Seq < b.Route_Seq and a.Route_ID = 3 inner join tbl_location la on la.Location_ID = a.Route_LocationID inner join tbl_location lb on lb.Location_ID = b.Route_LocationID order by a.Route_Seq, b.Route_Seq) AND To_LocationID NOT IN( select b.Route_LocationID to_loc_id from tbl_route a inner join tbl_route b on a.Route_ID = b.Route_ID and a.Route_Seq < b.Route_Seq and a.Route_ID = 3 inner join tbl_location la on la.Location_ID = a.Route_LocationID inner join tbl_location lb on lb.Location_ID = b.Route_LocationID order by a.Route_Seq, b.Route_Seq) AND Route_ID = 3 ```
MySQL Delete From Select
[ "mysql", "sql" ]
So I have two payment tables that I want to compare in an Oracle SQL DB. I want to compare the total payments using the location and invoice. It's more complex than this, but basically it is: ``` select tbl1.location, tbl1.invoice, Sum(tbl1.payments), Sum(tbl2.payments) From tbl1 left outer join tbl2 on tbl1.location = tbl2.location and tbl1.invoice = tbl2.invoice group by (tbl1.location,tbl1.invoice) ``` I want the left outer join because, in addition to comparing payment amounts, I want to check all orders in tbl1 that may not exist in tbl2. The issue is that there are multiple records for each order (location & invoice) in both tables (not necessarily the same number of records, i.e. 2 in tbl1 to 1 in tbl2 or vice versa), but the total payments for each order (location & invoice) should match. So just doing a direct join gives me a Cartesian product. So I am thinking I could do two queries, first aggregating the total payments by location & invoice for each, and then join on those results, because in the aggregate results I would only have one record for each order (location & invoice). But I don't know how to do this. I've tried several subqueries but can't seem to shake the Cartesian product. I'd like to be able to do this in one query as opposed to creating tables and joining on those, as this will be ongoing. Thanks in advance for any help.
You can use the `WITH` statement to create the two queries and join them as you said. I will give just the syntax; if you need more help, just ask. You didn't provide full details on your tables, so I will just guess in my answer. ``` WITH tmpTableA as ( select tbl1.location, tbl1.invoice, Sum(tbl1.payments) totalTblA From tbl1 group by tbl1.location, tbl1.invoice ), tmpTableB as ( select tbl2.location, tbl2.invoice, Sum(tbl2.payments) totalTblB From tbl2 group by tbl2.location, tbl2.invoice ) Select tmpTableA.location, tmpTableA.invoice, tmpTableA.totalTblA, tmpTableB.location, tmpTableB.invoice, tmpTableB.totalTblB from tmpTableA, tmpTableB where tmpTableA.location = tmpTableB.location (+) and tmpTableA.invoice = tmpTableB.invoice (+) ``` The `(+)` operator is the `left join` operator for Oracle Database (of course, you can use LEFT JOIN syntax if you prefer).
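The aggregate-then-join idea can be sketched with ANSI `LEFT JOIN` syntax in place of Oracle's `(+)` operator, which also makes it runnable on SQLite. The data below is made up; the NULL in the second result row shows a tbl1 order that is missing from tbl2:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl1 (location TEXT, invoice INTEGER, payments REAL)")
con.execute("CREATE TABLE tbl2 (location TEXT, invoice INTEGER, payments REAL)")
con.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)",
                [("a", 1, 2), ("a", 1, 3), ("b", 2, 1)])
con.executemany("INSERT INTO tbl2 VALUES (?, ?, ?)", [("a", 1, 4)])

# Aggregate each table to one row per (location, invoice) first,
# then LEFT JOIN so unmatched tbl1 orders still appear.
rows = con.execute("""
    WITH s1 AS (SELECT location, invoice, SUM(payments) AS total
                FROM tbl1 GROUP BY location, invoice),
         s2 AS (SELECT location, invoice, SUM(payments) AS total
                FROM tbl2 GROUP BY location, invoice)
    SELECT s1.location, s1.invoice, s1.total, s2.total
    FROM s1
    LEFT JOIN s2 ON s1.location = s2.location AND s1.invoice = s2.invoice
    ORDER BY s1.location
""").fetchall()
```

Because each side is collapsed to one row per order before the join, no Cartesian blow-up can occur.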
Sorry, my first answer was wrong. Thank you for providing the sqlfiddle, MT0. The point that i missed is that you need to sum up the payments on each table first, so there's only one line left in each, then join them. This is what MT0 does in his statements. If you want a solution that looks more "symmetric", try: ``` select A.location, A.invoice, B.total sum1, C.total sum2 from (select distinct location, invoice from tbl1) A left outer join (select location, invoice, sum(payments) as total from tbl1 group by location, invoice) B on A.location=B.location and A.invoice=B.invoice left outer join (select location, invoice, sum(payments) as total from tbl2 group by location, invoice) C on A.location=C.location and A.invoice=C.invoice ``` which results in ``` LOCATION INVOICE SUM1 SUM2 a 2 3 2 a 1 5 3 b 1 1 5 b 2 1 (null) ```
Left outer join on aggregate queries
[ "sql", "oracle", "aggregate", "cartesian" ]
Let's say I have this table: ``` ID|Username|Mood |color ------------------------- 1 |User1 |Sad |red 2 |User1 |Happy |red 3 |User1 |Mad |red 4 |User1 |Scared |green 5 |User1 |Sad |green 6 |User2 |Sad |purple 7 |User2 |Happy |red 8 |User3 |Mad |blue 9 |User4 |Scared |green 10|User4 |Sad |red ``` In the DB I have a total of 4 users: User1, User2, User3, and User4. Is there an SQL statement that could count the number of unique values in a certain column? Is there an SQL statement that could return these unique values? So it would return: ``` ID|Unique_Users ---------------- 1 |User1 2 |User2 3 |User3 4 |User4 ``` If all of my wishes could come true I would like to take it even further, by querying for the rows where the Username and Mood combination is unique, and get: ``` ID|Username|Mood ----------------- 1 |User1 |Sad 2 |User1 |Happy 3 |User1 |Mad 4 |User1 |Scared 6 |User2 |Sad 7 |User2 |Happy 8 |User3 |Mad 9 |User4 |Scared 10|User4 |Sad ``` (In the original table User1 was sad in rows 1 and 5, but in the new table, since the pair is not unique, it is not repeated.) Notes: 1. It is not a matter of checking if the value already exists when I insert into the table, because there is additional information (that is not duplicated) in every entry 2. I know I can select all the values, and then process the information to find the unique values, and run a new query for each unique value getting the rest of the information, but this requires server usage and I am trying to make it as efficient as possible.
You seem to be looking for a simple group by: ``` SELECT MIN(id), username, mood FROM table GROUP BY username, mood ``` Now, note this won't return `user2` with both ids `5` and `6` but rather `6` and `7`. This query returns all the unique `username, mood` pairs and whenever a duplicated pair is found it is discarded and the `id` with the minimum value from the duplicates is selected (guessed from the `User1, Sad` pair selection).
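The grouping approach above can be demonstrated against SQLite using the question's data; this is a minimal sketch, also showing `COUNT(DISTINCT ...)` for the "how many unique users" part of the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, username TEXT, mood TEXT, color TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1, "User1", "Sad", "red"),    (2, "User1", "Happy", "red"),
    (3, "User1", "Mad", "red"),    (4, "User1", "Scared", "green"),
    (5, "User1", "Sad", "green"),  (6, "User2", "Sad", "purple"),
    (7, "User2", "Happy", "red"),  (8, "User3", "Mad", "blue"),
    (9, "User4", "Scared", "green"), (10, "User4", "Sad", "red"),
])

# Count of unique users:
n_users = con.execute("SELECT COUNT(DISTINCT username) FROM t").fetchone()[0]
print(n_users)  # 4

# One row per unique (username, mood) pair, keeping the smallest id:
pairs = con.execute("""
    SELECT MIN(id) AS mid, username, mood
    FROM t
    GROUP BY username, mood
    ORDER BY mid
""").fetchall()
print(pairs)
```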
It's really unclear what you want; however, one of these should hopefully help. To get rows that are unique, use `DISTINCT`:

```
select distinct * from mytable
```

To get only users that have only one mood:

```
select * from mytable where username in (
  select username from mytable group by username having count(*) = 1)
```
Is it possible to count and/or return unique values from an SQL database?
[ "", "sql", "" ]
I would like to know how to update a table with values from another table, but the trick is that I need to set a limit, because I have thousands of rows to update and phpMyAdmin can't handle that load (I don't have direct access to the server).

My table structure looks like this:

**wp\_postmeta** `meta_id, post_id, meta_key, meta_value`

**wp\_map** `oldmap, newmap`

What I need to do is join the two tables on `wp_postmeta.meta_value` and `wp_map.oldmap`, and update `wp_postmeta.meta_value` with `wp_map.newmap`.

Here is my current query, but I need to add a `LIMIT` of 100 to it, as I'm splitting the query up into smaller chunks so phpMyAdmin can process them:

```
UPDATE wp_postmeta
INNER JOIN wp_map
ON wp_map.oldmap = wp_postmeta.meta_value
SET wp_postmeta.meta_value = wp_map.newmap;
```

I read about creating a subquery, but couldn't find any relevant examples, so if someone could steer me in the right direction or provide a working example it would be greatly appreciated.
You can try it this way ``` UPDATE wp_postmeta t JOIN ( SELECT p.meta_id, m.newmap FROM wp_postmeta p JOIN wp_map m ON p.meta_value = m.oldmap ORDER BY p.meta_id LIMIT 100 ) s ON t.meta_id = s.meta_id SET t.meta_value = s.newmap; ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/47c4e4/1)** demo
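The derived-table trick above is needed because MySQL does not allow `LIMIT` directly inside an `IN` subquery. SQLite does allow it, so the same "update a bounded batch picked by primary key" idea can be sketched more directly with Python's `sqlite3` and hypothetical data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wp_postmeta (meta_id INTEGER PRIMARY KEY, meta_value TEXT);
CREATE TABLE wp_map (oldmap TEXT, newmap TEXT);
INSERT INTO wp_postmeta (meta_value) VALUES ('a'),('b'),('a'),('c'),('b');
INSERT INTO wp_map VALUES ('a','A'),('b','B');
""")

BATCH = 2  # the LIMIT per chunk (100 in the question)
con.execute("""
UPDATE wp_postmeta
SET meta_value = (SELECT m.newmap FROM wp_map m WHERE m.oldmap = meta_value)
WHERE meta_id IN (
    SELECT p.meta_id
    FROM wp_postmeta p JOIN wp_map m ON p.meta_value = m.oldmap
    ORDER BY p.meta_id
    LIMIT ?
)
""", (BATCH,))
after = con.execute(
    "SELECT meta_id, meta_value FROM wp_postmeta ORDER BY meta_id").fetchall()
print(after)  # only the first BATCH matching rows were rewritten
```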
It's rather unconventional, but it should help you in a pinch:

```
SET @tmp_c = 0;

UPDATE table1
INNER JOIN table2 ON table1.id = table2.id
INNER JOIN table3 ON table2.id = table3.id
SET table1.remove_date = NOW() - INTERVAL 5 MONTH
WHERE table1.active = 1
-- only increment the counter if the whole WHERE matches
AND (IF (table1.active = 1, @tmp_c := @tmp_c + 1, 0))
-- start ignoring rows as soon as the counter exceeds the limit amount
AND @tmp_c <= 15
```
MySQL update join with limit
[ "", "mysql", "sql", "join", "phpmyadmin", "" ]
I want to be able to return a list of unique Ids in a child table that do not contain certain rows. My table looks similar to: ``` Id Name 1 X 1 Y 1 Z 2 A 2 B 2 C 3 X 3 B 3 Z ``` I want to write a SQL query like ``` SELECT Id FROM table t WHERE UPPER(t.Name) IN ('X', 'Y', 'Z') OR UPPER(t.Name) IN ('A', 'B', 'C') GROUP BY t.Id HAVING COUNT(DISTINCT UPPER(t.Name)) != 3 ``` But this does not work, as I would expect only Id = 3 to be returned as invalid. Is this possible in a single SQL statement? Also, is it possible to solve this problem if there are an arbitrary number of lists (X,Y,Z; A,B,C; P,Q,Z; ...) or a mixed list lengths (X,Y; A,B,C,D; L,M,N; ...)? > EDIT: To clarify, each id is really referring to a parent table. So these are child records. In first example, the parent record is only valid if it contains at least 3 children. The 3 children must be named (A, B, and C) or (X, Y, and Z). A parent is valid even if it contains all 6 children. But having 4 children named A, B, X, Y would not be valid (adding a C or Z child would make it valid). So far, Gordon Linoff is closest. I need to write some more tests. Of course this is a contrived example, in my implementation different rule sets will require me to use different sets of lists of different sizes (potentially mixed). For example, I may have a rule where the parent is valid only if it has children named (A and B) or (W, X, Y, and Z) or (L, M, N and Z). Thanks,
You want to find all ids that do not contain X, Y, Z or A, B, C. You can do this with aggregation and a `having` clause:

```
select id
from t
group by id
having not ((sum(case when Name = 'X' then 1 else 0 end) > 0 and
             sum(case when Name = 'Y' then 1 else 0 end) > 0 and
             sum(case when Name = 'Z' then 1 else 0 end) > 0
            ) or
            (sum(case when Name = 'A' then 1 else 0 end) > 0 and
             sum(case when Name = 'B' then 1 else 0 end) > 0 and
             sum(case when Name = 'C' then 1 else 0 end) > 0
            )
           );
```

Each condition in the `having` clause counts the number of rows that match a particular name; a name is present when that count is at least one. The combination of `and` and `or` seems to meet your requirement. Note that an `id` that has A, B, C, and D will count as valid. Your question doesn't specify whether this is correct or incorrect.
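A runnable SQLite sketch of the aggregation approach, using the question's data (SQLite lets `SUM(name = 'X')` stand in for the `CASE` expression, since a comparison evaluates to 0 or 1); only id 3 should come back as invalid:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, name TEXT)")
con.executemany("INSERT INTO t VALUES (?,?)", [
    (1, "X"), (1, "Y"), (1, "Z"),
    (2, "A"), (2, "B"), (2, "C"),
    (3, "X"), (3, "B"), (3, "Z")])

invalid = con.execute("""
SELECT id FROM t
GROUP BY id
HAVING NOT ((SUM(name = 'X') > 0 AND SUM(name = 'Y') > 0 AND SUM(name = 'Z') > 0)
         OR (SUM(name = 'A') > 0 AND SUM(name = 'B') > 0 AND SUM(name = 'C') > 0))
""").fetchall()
print(invalid)  # id 1 has X,Y,Z and id 2 has A,B,C, so only id 3 remains
```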
[SQL Fiddle](http://sqlfiddle.com/#!4/e708c/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE tbl ( Id NUMBER(1), Name VARCHAR2(1) ); INSERT INTO tbl VALUES ( 1, 'X' ); INSERT INTO tbl VALUES ( 1, 'Y' ); INSERT INTO tbl VALUES ( 1, 'Z' ); INSERT INTO tbl VALUES ( 2, 'A' ); INSERT INTO tbl VALUES ( 2, 'B' ); INSERT INTO tbl VALUES ( 2, 'C' ); INSERT INTO tbl VALUES ( 3, 'X' ); INSERT INTO tbl VALUES ( 3, 'B' ); INSERT INTO tbl VALUES ( 3, 'Z' ); INSERT INTO tbl VALUES ( 4, 'F' ); INSERT INTO tbl VALUES ( 4, 'G' ); INSERT INTO tbl VALUES ( 4, 'H' ); CREATE TYPE VARCHAR2s_1_Table AS TABLE OF VARCHAR2(1); ``` **Query 1**: ``` WITH groups AS ( SELECT id, CAST( COLLECT( Name ) AS VARCHAR2s_1_Table ) AS grp FROM tbl GROUP BY id ) SELECT id FROM groups WHERE ( grp MULTISET INTERSECT VARCHAR2s_1_Table( 'X', 'Y', 'Z') ) IS NOT EMPTY AND ( grp MULTISET INTERSECT VARCHAR2s_1_Table( 'A', 'B', 'C') ) IS NOT EMPTY ``` **[Results](http://sqlfiddle.com/#!4/e708c/1/0)**: ``` | ID | |----| | 3 | ```
How to determine if certain combinations of rows exist in table
[ "", "sql", "sql-server", "oracle", "" ]
```
DECLARE @sample varchar(1)

insert into tblSample (Column1) values(@sample)
```

Then I'm not passing any value to @sample. I want the insert to happen whenever I call the stored procedure, even though the parameter is not supplied. Thanks
You can simply add default value NULL for parameter of SP. But make sure that your table allow NULL value in that column. If you will not pass any value to parameter then it will take NULL as default like below : ``` CREATE PROCEDURE TestProc (@sample varchar(1) = NULL) AS BEGIN insert into tblSample (Column1) values(@sample) END ```
Why would you want to insert even if no parameter is supplied? What is the exact requirement?
Is it possible to insert into a table even though the parameter is not supplied? (SQL)
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table where the customers of a company are listed each month. If 4 new customers are added and none leaves, four more rows would thus be added to the table than were added the month before. I thought I could identify these 4 new customers with the following code, but it returns 0 rows.

```
SELECT Variable1
FROM Table1
WHERE DateVariable = 201311 AND
      Variable1 NOT IN (SELECT Variable1 FROM Table1 WHERE DateVariable < 201311)
```

What is the problem with my query?
This is not an attempt to answer your question. This is a demonstration of how to make your query run faster. While intuitive and database agnostic, this construct, ``` where somefield not in (subquery goes here) ``` is slow. Here is something that has the same logic as your query, but will run faster. ``` SELECT Variable1 FROM Table1 WHERE DateVariable = 201311 except SELECT Variable1 FROM Table1 WHERE DateVariable <201311 ``` In some databases, Oracle for example, the keyword is "minus" instead of except. There are probably some databases that don't support this construct at all. However, you are using sql server, which does.
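The `EXCEPT` pattern above can be sketched in SQLite with hypothetical customer data (two customers exist in 201310, two more appear in 201311):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Variable1 TEXT, DateVariable INT)")
con.executemany("INSERT INTO t VALUES (?,?)", [
    ("custA", 201310), ("custB", 201310),
    ("custA", 201311), ("custB", 201311),
    ("custC", 201311), ("custD", 201311)])

new_customers = con.execute("""
SELECT Variable1 FROM t WHERE DateVariable = 201311
EXCEPT
SELECT Variable1 FROM t WHERE DateVariable < 201311
ORDER BY Variable1
""").fetchall()
print(new_customers)  # only the customers not seen before 201311
```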
I am not sure if I understand you correctly, since pieces of information are missing. However, this is how you would output all customers who are new this month:

```
SELECT Variable1
FROM Table1
WHERE DateVariable >= '20131201'
```

If you need to check last month's customers, your query would look like this:

```
SELECT Variable1
FROM Table1
WHERE DateVariable < '20131201' AND DateVariable >= '20131101'
```
SQL Server - Identify new entries
[ "", "sql", "sql-server", "" ]
I have the following **SupplierLangs** table: *ID, SupplierId, SrcLngId, TrgLngId*

It contains data like this:

```
1, 1000, 1, 2
2, 1000, 1, 3
3, 1000, 1, 4
4, 1000, 2, 3
5, 1000, 2, 4
6, 1001, 1, 2
7, 1001, 1, 4
8, 1001, 2, 4
9, 1002, 3, 4
```

while the **Languages** table looks like this: *ID, LangName*

```
1, En
2, De
3, Fr
4, Pl
```

It means that supplier 1000 can translate between:

```
En -> De
En -> Fr
En -> Pl
De -> Fr
De -> Pl
```

1001 can translate between:

```
En -> De
En -> Pl
De -> Pl
```

1002 can translate between:

```
Fr -> Pl
```

I need a result showing how many suppliers translate from each given source language (LangId, Count):

```
1 2 (1000, 1001)
2 2 (1000, 1001)
3 1 (1002)
4 0 (no one translates from Polish)
```

What I have achieved so far only counts how many suppliers translate from a hard-coded language:

```
select count(distinct SupplierId)
from SupplierLangs
where SupplierId in (
    select sr.SupplierId
    from SupplierLangs sr
    where sr.SrcLngId = 1
    group by sr.SupplierId
)
```

With the above data it returns 2 (two suppliers translate from English). But how do I extend that to all languages to achieve the results mentioned above? Thanks in advance.
I found a working query (but it uses a temporary table). Is there a solution without it?

```
select distinct sr.SrcLngId, sr.SupplierId
into #myTable99
from SupplierLangs sr

select SrcLngId, count(SrcLngId) as [count]
from #myTable99 temp
group by SrcLngId

drop table #myTable99
```

Thanks all for their replies!
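For the follow-up "is there a solution without the temp table": one equivalent, in standard SQL, is `COUNT(DISTINCT SupplierId)` per source language, with a `LEFT JOIN` from `Languages` so that languages nobody translates from still show a 0. A SQLite sketch with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Languages (ID INT, LangName TEXT);
INSERT INTO Languages VALUES (1,'En'),(2,'De'),(3,'Fr'),(4,'Pl');
CREATE TABLE SupplierLangs (ID INT, SupplierId INT, SrcLngId INT, TrgLngId INT);
INSERT INTO SupplierLangs VALUES
 (1,1000,1,2),(2,1000,1,3),(3,1000,1,4),(4,1000,2,3),(5,1000,2,4),
 (6,1001,1,2),(7,1001,1,4),(8,1001,2,4),(9,1002,3,4);
""")
rows = con.execute("""
SELECT l.ID, COUNT(DISTINCT s.SupplierId) AS cnt
FROM Languages l
LEFT JOIN SupplierLangs s ON s.SrcLngId = l.ID
GROUP BY l.ID
ORDER BY l.ID
""").fetchall()
print(rows)  # Polish (4) gets cnt 0 because COUNT ignores the NULLs
```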
TRY THIS ``` SELECT Count(DISTINCT supplierid) FROM supplierlangs WHERE supplierid IN (SELECT sr.supplierid FROM supplierlangs sr --where sr.SrcLngId = 1 --Remove this GROUP BY sr.supplierid) HAVING Count(DISTINCT supplierid) > 0 --this will remove the record having no translation ```
How to count results from groupped subquery
[ "", "sql", "sql-server", "" ]
I know you can set user [profiles](https://stackoverflow.com/questions/1914286/oracle-set-query-timeout) or set a general timeout for a query. But I wish to set a timeout for a specific query inside a procedure and catch the exception, something like:

```
begin
  update tbl set col = v_val; --Unlimited time
  delete from tbl where id = 20; --Unlimited time
  begin
    delete from tbl; -- I want this to have a limited time to perform
  exception
    when (timeout???) then
      --code;
  end;
end;
```

Is this possible? Are there any timeout exceptions at all that I can catch, per block or per query? I didn't find much info on the topic.
No, you cannot set a timeout in PL/SQL. You could use a host language for this, in which you embed your SQL and PL/SQL.
You could do: ``` select * from tbl for update wait 10; --This example will wait 10 seconds. Replace 10 with number of seconds to wait ``` Then, the select will attempt to lock the specified rows, but if it's unsuccessful after n seconds, it will throw an "ORA-30006: resource busy; acquire with WAIT timeout expired". If lock is achieved, then you can execute your delete. Hope that helps.
How to set timeout for anonoymous block or query in plsql?
[ "", "sql", "oracle", "plsql", "oracle11g", "" ]
Hi, I'm kinda new to MSSQL; I'm used to Oracle. I'm trying to delete a specific row from a subquery, but MSSQL doesn't really like subqueries. Here is the query:

```
DELETE FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY column1 ORDER BY column1) row
             FROM randomtable) a
WHERE a.row = 1
```

Is there a way to get this to work? In Oracle I could've gotten everything in one query because I can use rownum = 1.
You were nearly there ``` DELETE a FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY column1 ORDER BY column1) row FROM randomtable) a WHERE a.row = 1 ``` Though I prefer the CTE syntax ``` WITH a AS (SELECT *, ROW_NUMBER() OVER (PARTITION BY column1 ORDER BY column1) row FROM randomtable) DELETE FROM a WHERE a.row = 1 ```
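SQL Server's deletable derived table / CTE shown above has no direct counterpart in every engine. As a portable illustration of the same "delete row 1 of each partition" effect, here is a SQLite sketch (hypothetical data) that uses `MIN(rowid)` to pick one row per group instead of `ROW_NUMBER`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE randomtable (column1 TEXT, payload INT)")
con.executemany("INSERT INTO randomtable VALUES (?,?)",
                [("a", 1), ("a", 2), ("b", 3), ("b", 4), ("b", 5)])

# Delete one row per column1 group (the analogue of row = 1 above):
con.execute("""
DELETE FROM randomtable
WHERE rowid IN (SELECT MIN(rowid) FROM randomtable GROUP BY column1)
""")
remaining = con.execute(
    "SELECT column1, payload FROM randomtable ORDER BY payload").fetchall()
print(remaining)  # the first row of each group is gone
```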
You didn't specify the table alias in what you wanted to delete from. ``` DELETE a FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY column1 ORDER BY column1) row FROM randomtable) a WHERE a.row = 1 ```
How to get DELETE FROM [QUERY] to work?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have two simple tables: `OrderStatus` and `OrderStage`, which have a single primary key integer column of `OrderStatusID` and `OrderStageID` respectively. Each table returns about 5 rows. What I'm trying to do is make a SELECT statement which counts the number of orders per OrderStatusID and OrderStageID combination. Here's an example:

```
SELECT COUNT(OrderID) FROM Order WHERE OrderStatusID = '1' and OrderStageID = '1'
SELECT COUNT(OrderID) FROM Order WHERE OrderStatusID = '1' and OrderStageID = '2'
......
SELECT COUNT(OrderID) FROM Order WHERE OrderStatusID = '4' and OrderStageID = '5'
```

It's getting tedious for me to have to write out 5x5 SQL statements to count the number of orders in each possible combination. And if someone adds more rows to the `OrderStatus` or `OrderStage` tables, then I'll have to keep revisiting this code to add the new combinations.

The final presentation of this data will be in a 'tree' on a webpage, quite similar to how Outlook displays a count of emails in your Inbox, Sent Items, Deleted Items, etc. within the mail panel. I'm using ColdFusion for my webpage. If I use a GROUP BY statement to get the data out, how would I then separately display each result row in ColdFusion? Would `<cfquery group="">` work?

It should return 0 or NULL where there are no rows in the Order table for a given OrderStatusID and OrderStageID combination. This is why I was using separate SELECT statements for each possible combination.
I am sure there are slicker ways to write it for SQL Server 2012, but essentially do a `CROSS JOIN` to get all possible combinations of status and stage (i.e. 25 rows). *Then* do an outer join back to `orders` to get the counts for each combination:

[SQL Fiddle](http://sqlfiddle.com/#!3/dd0d6/1)

(Generally best to avoid using keywords like `order` for table names.)

```
SELECT osg.OrderStageID,
       ost.OrderStatusID,
       COUNT(o.OrderID) AS TotalOrders
FROM   orderStage osg
       CROSS JOIN orderStatus ost
       LEFT JOIN [order] o
              ON o.OrderStageID = osg.OrderStageID
             AND o.OrderStatusID = ost.OrderStatusID
GROUP BY osg.OrderStageID,
         ost.OrderStatusID
```

NB: Be sure to review how [CROSS JOIN](http://en.wikipedia.org/wiki/Join_%28SQL%29#Cross_join) operates.
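The `CROSS JOIN` + `LEFT JOIN` counting pattern works the same way in SQLite; a minimal sketch with made-up data (two statuses, two stages, three orders), where empty combinations correctly come back as 0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orderStatus (OrderStatusID INT);
CREATE TABLE orderStage  (OrderStageID INT);
CREATE TABLE orders (OrderID INT, OrderStatusID INT, OrderStageID INT);
INSERT INTO orderStatus VALUES (1),(2);
INSERT INTO orderStage  VALUES (1),(2);
INSERT INTO orders VALUES (10,1,1),(11,1,1),(12,2,1);
""")
rows = con.execute("""
SELECT st.OrderStatusID, sg.OrderStageID, COUNT(o.OrderID) AS TotalOrders
FROM orderStatus st
CROSS JOIN orderStage sg
LEFT JOIN orders o
  ON o.OrderStatusID = st.OrderStatusID AND o.OrderStageID = sg.OrderStageID
GROUP BY st.OrderStatusID, sg.OrderStageID
ORDER BY st.OrderStatusID, sg.OrderStageID
""").fetchall()
print(rows)  # every (status, stage) pair appears, even with zero orders
```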
If you want to display `0` where none exist, you can create a driver table that consists of all combinations, and join to that: ``` SELECT driver.*,SUM(CASE WHEN o.OrderStatusID IS NOT NULL THEN 1 ELSE 0 END)AS OrderCount FROM (SELECT * FROM (SELECT DISTINCT OrderStatusID FROM Orders) a ,(SELECT DISTINCT OrderStageID FROM Orders) b ) driver LEFT JOIN Orders o ON driver.OrderStatusID = o.OrderStatusID AND driver.OrderStageID = o.OrderStageID GROUP BY driver.OrderStatusID , driver.OrderStageID ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/5f67d/6/0)
Best way to loop through all possible combinations across multiple tables?
[ "", "sql", "sql-server", "coldfusion", "sql-server-2012", "" ]
I am trying to normalize my database. I have broken out all redundant data and am now joining and inserting the new data. I have been porting 1 million rows at a time, and that has worked well up until now. Now a million rows takes days instead of minutes, and it seems to be stuck on reading many millions of rows and never getting to the inserting part.

I have this query:

```
INSERT IGNORE INTO bbointra_normalized.entry (DATE,keyword,url,POSITION,competition,searchEngine)
SELECT DATE(insDate) AS DATE, k.id AS kid, u.id uid, POSITION, competition, s.id AS sid
FROM oldSingleTabels.tempData
INNER JOIN bbointra_normalized.keyword k ON tempData.keyword = k.keyword
INNER JOIN bbointra_normalized.searchEngine s ON tempData.searchEngine = s.searchEngine
INNER JOIN bbointra_normalized.urlHash u ON tempData.url = u.url
GROUP BY k.id, s.id, u.id
ORDER BY k.id, s.id, u.id
```

EXPLAIN:

```
id  select_type  table     type   possible_keys                                  key           key_len  ref                           rows   Extra
--  -----------  --------  -----  ---------------------------------------------  ------------  -------  ----------------------------  -----  --------------------------------------------
1   SIMPLE       s         index  (NULL)                                         searchEngine  42       (NULL)                        539    Using index; Using temporary; Using filesort
1   SIMPLE       k         index  (NULL)                                         keyword       42       (NULL)                        17652  Using index; Using join buffer
1   SIMPLE       tempData  ref    keyword_url_insDate,keyword,searchEngine,url   keyword       767      func                          433    Using where
1   SIMPLE       u         ref    url                                            url           767      oldSingleTabels.tempData.url  1      Using index
```

SHOW INNODB STATUS:

```
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 4245, id 140024097179392, state: waiting for server activity
Number of rows inserted 26193732, updated 0, deleted 0, read 3383512394
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 39676.56 reads/s
```

SQL for entry:

```
CREATE TABLE `entry` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `insDate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `date` int(11) NOT NULL,
  `project` int(11) NOT NULL,
  `keyword` int(11) NOT NULL,
  `url` int(11) NOT NULL,
  `position` int(11) NOT NULL,
  `competition` int(11) NOT NULL,
  `serachEngine` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `unikt` (`date`,`keyword`,`position`,`serachEngine`)
) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci;
```
As many people pointed out, since this was a read problem, I broke the SELECT query down and tested it minus one join at a time. I was expecting the huge URL table/keys to be the problem, but soon found that the main problem was a corrupt table/index on the keyword table. I don't know how this could happen, but after dropping and recreating that table, things magically started to work again just fine. I later took @abasterfield's advice and added 3 more indexes to the Entry table, which sped up the select even more.
Try removing the GROUP BY and ORDER BY clauses, they are heavy to process and do not seem to add any value. If there are indexes on table bbointra\_normalized.entry, try removing these temporarily since it is a heavy process to update the indexes when inserting many rows.
SELECT reads 50 000 rows/s for days never INSERTS
[ "", "mysql", "sql", "database-design", "database-migration", "" ]
I have a table in a SQL database, [dbo].[ClockInClockOut], with 3 columns: [UserId], [ClockInClockOutTypeId], [CreatedDate]. I need to show the data like:

```
UserName CreatedDate TimeIn   TimeOut
abc      12/31/2013  13:19:51 13:22:37
xyz      1/31/2013   14:19:51 15:19:51
```

I am doing this in SQL:

```
select u.[UserName],
CONVERT(VARCHAR(20), c.[CreatedDate], 101) as [Date],
CASE WHEN c.[ClockInClockOutTypeId]=1
    THEN CONVERT(VARCHAR(20), c.[CreatedDate], 108)
    ELSE null
END as InTime,
CASE WHEN c.[ClockInClockOutTypeId]=2
    THEN CONVERT(VARCHAR(20), c.[CreatedDate], 108)
    ELSE null
END as OutTime
from [ClockInClockOut] as c
inner join [UserProfile] as u on c.UserId = u.[UserId]
```

And I am getting this result:

```
UserName CreatedDate TimeIn   TimeOut
abc      12/31/2013  13:19:51 null
abc      12/31/2013  null     14:19:51
```

In the 1st row, the 3rd column (TimeIn) is OK, but I need the time out in the 4th column of that same row, instead of on the 2nd row.
You need to join `ClockInClockOut` back to itself for the second of the times, not [union the results](http://en.wikipedia.org/wiki/Union_%28SQL%29#UNION_operator). I'd do something similar to:

```
SELECT u.UserName,
       CONVERT(VARCHAR(20), c1.[CreatedDate], 101) as [Date],
       CONVERT(VARCHAR(20), c1.[CreatedDate], 108) as InTime,
       CONVERT(VARCHAR(20), c2.[CreatedDate], 108) as OutTime
FROM [ClockInClockOut] as c1
INNER JOIN [ClockInClockOut] as c2
    ON c1.UserId = c2.UserId
    -- getting the specific date is a little tricky, unquestionably better ways exist
    AND CONVERT(VARCHAR(20), c1.[CreatedDate], 101) = CONVERT(VARCHAR(20), c2.[CreatedDate], 101)
INNER JOIN [UserProfile] as u on c1.UserId = u.[UserId]
WHERE c1.ClockInClockOutTypeId = 1
  AND c2.ClockInClockOutTypeId = 2
```

Or, if you don't like that, you can aggregate your initial query...

```
SELECT a.UserName,
       a.Date,
       MAX(a.InTime) As InTime,
       MAX(a.OutTime) As OutTime
FROM (
    SELECT u.[UserName],
    CONVERT(VARCHAR(20), c.[CreatedDate], 101) as [Date],
    CASE WHEN c.[ClockInClockOutTypeId]=1
        THEN CONVERT(VARCHAR(20), c.[CreatedDate], 108)
        ELSE null
    END as InTime,
    CASE WHEN c.[ClockInClockOutTypeId]=2
        THEN CONVERT(VARCHAR(20), c.[CreatedDate], 108)
        ELSE null
    END as OutTime
    FROM [ClockInClockOut] as c
    INNER JOIN [UserProfile] as u on c.UserId = u.[UserId]
) a
GROUP BY a.UserName, a.Date
```
**Test Data** ``` DECLARE @ClockInClockOut TABLE ([UserId] INT, [ClockInClockOutTypeId] SMALLINT, [CreatedDate] DATETIME) INSERT INTO @ClockInClockOut VALUES (1,1,'2013-12-10 09:42:08.603'), (1,2,'2013-12-10 16:42:08.603'), (1,1,'2013-12-11 16:42:08.603'), (1,2,'2013-12-11 18:42:08.603'), (2,1,'2013-12-12 09:42:08.603'), (2,2,'2013-12-12 16:42:08.603'), (2,1,'2013-12-13 11:42:08.603'), (2,2,'2013-12-13 15:42:08.603') DECLARE @UserProfile TABLE (USERID INT, USERNAME VARCHAR(20)) INSERT INTO @UserProfile VALUES (1, 'Mark'), (2, 'John') ``` **Query** ``` ;With TimeI AS ( SELECT UserID, [CreatedDate], CONVERT(VARCHAR(20), [CreatedDate], 108) AS InTime FROM @ClockInClockOut WHERE [ClockInClockOutTypeId]=1 ), TimeO AS ( SELECT UserID,[CreatedDate], CONVERT(VARCHAR(20), [CreatedDate], 108) AS OutTime FROM @ClockInClockOut WHERE [ClockInClockOutTypeId]=2 ) SELECT DISTINCT USERNAME , CAST(CO.[CreatedDate] AS DATE) AS [CreatedDate], TI.Intime, T.OutTime FROM @UserProfile UP INNER JOIN @ClockInClockOut CO ON UP.USERID = CO.UserId INNER JOIN TimeI TI ON UP.USERID = ti.UserId AND CAST(CO.[CreatedDate] AS DATE) = CAST(TI.[CreatedDate] AS DATE) INNER JOIN TimeO T ON UP.USERID = t.UserId AND CAST(CO.[CreatedDate] AS DATE) = CAST(T.[CreatedDate] AS DATE) ``` **Result Set** ``` USERNAME CreatedDate Intime OutTime John 2013-12-12 09:42:08 16:42:08 John 2013-12-13 11:42:08 15:42:08 Mark 2013-12-10 09:42:08 16:42:08 Mark 2013-12-11 16:42:08 18:42:08 ```
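The self-join pairing of clock-in and clock-out rows can be sketched in SQLite (hypothetical data modeled on the question; SQLite's `date()` and `time()` functions stand in for the `CONVERT` calls):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE clock (UserId INT, TypeId INT, CreatedDate TEXT);
INSERT INTO clock VALUES
 (1, 1, '2013-12-31 13:19:51'), (1, 2, '2013-12-31 13:22:37'),
 (2, 1, '2013-01-31 14:19:51'), (2, 2, '2013-01-31 15:19:51');
""")
rows = con.execute("""
SELECT c1.UserId,
       date(c1.CreatedDate) AS d,
       time(c1.CreatedDate) AS TimeIn,
       time(c2.CreatedDate) AS TimeOut
FROM clock c1
JOIN clock c2
  ON c1.UserId = c2.UserId
 AND date(c1.CreatedDate) = date(c2.CreatedDate)
WHERE c1.TypeId = 1 AND c2.TypeId = 2
ORDER BY c1.UserId
""").fetchall()
print(rows)  # one row per user/day, with TimeIn and TimeOut side by side
```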
How to use union in sql?
[ "", "sql", "join", "union", "" ]
Is there a way to input multiple values in a single parameter of a scalar-valued function in SQL Server 2008 R2, and have it filter data by that parameter using both values? For example, I would like to do the following:

```
SET @Salesperson='BILL' OR 'MOSES'

SELECT Sum(SalesDollars)
FROM Invoices
WHERE Invoices.Salesperson = @Salesperson
```

I attempted to use the following as the WHERE clause, but this didn't work either.

```
SET @Salesperson='BILL','MOSES'

SELECT Sum(SalesDollars)
FROM Invoices
WHERE Invoices.Salesperson IN (@Salesperson)
```

Would it be easier if I were dealing with integers as opposed to varchar values? Any help would be absolutely appreciated!
You need to use table-valued parameters. Look them up on [technet](http://technet.microsoft.com/en-us/library/bb510489.aspx) or [msdn](http://msdn.microsoft.com/en-us/library/bb675163%28v=vs.110%29.aspx).

The best part is that your table-valued parameters can have multiple columns.

*Note however that you have to define the TVP parameter as readonly. So if you want to return a similar set from your function you will need to create another variable inside your function.*

Example:

```
CREATE TYPE Names AS TABLE
( Name VARCHAR(50));
GO

/* Create a procedure to receive data for the table-valued parameter. */
CREATE PROCEDURE dbo.mySP
    @n Names READONLY
AS
    SELECT Sum(SalesDollars)
    FROM Invoices
    WHERE Invoices.Salesperson in (select Name from @n)
GO

CREATE FUNCTION dbo.myFun(@n Names READONLY)
RETURNS int
AS
BEGIN
    RETURN (SELECT Sum(SalesDollars)
            FROM Invoices
            WHERE Invoices.Salesperson in (select Name from @n))
END
GO

/* Declare a variable that references the type. */
DECLARE @names AS Names;

/* Add data to the table variable. */
INSERT INTO @names (Name) VALUES ('BILL'),('MOSES')

-- using the stored procedure with a TVP
EXEC dbo.mySP @names

-- using the function with a TVP
select dbo.myFun(@names)
GO
```
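When the multiple values originate in application code rather than T-SQL, the same multi-value filter is often done by building a parameterized `IN` list. A SQLite sketch with hypothetical invoice data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Invoices (Salesperson TEXT, SalesDollars REAL)")
con.executemany("INSERT INTO Invoices VALUES (?,?)",
                [("BILL", 100.0), ("MOSES", 50.0), ("ANNA", 75.0), ("BILL", 25.0)])

salespeople = ["BILL", "MOSES"]  # the "multiple values in one parameter"
placeholders = ",".join("?" * len(salespeople))
total = con.execute(
    f"SELECT SUM(SalesDollars) FROM Invoices WHERE Salesperson IN ({placeholders})",
    salespeople,
).fetchone()[0]
print(total)  # 100 + 50 + 25, ANNA excluded
```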
This could be done this way: ``` SET @Salesperson='BILL,MOSES' SELECT * FROM YourTable WHERE Invoices.Salesperson IN (SELECT * FROM dbo.split(@Salesperson,',')) ``` [This is how](https://stackoverflow.com/questions/10581772/how-to-split-a-comma-separated-value-to-columns) you split the values.
Multiple values in a single parameter of a scalar function
[ "", "sql", "sql-server", "parameters", "filtering", "user-defined-functions", "" ]
I have the following job table: ``` job_id | is_full_time | is_short_term 1 | 0 | 0 2 | 0 | 1 3 | 1 | 0 4 | 1 | 1 ``` I have the following query and I'm getting results in 4 rows. ``` SELECT is_full_time, is_short_term, COUNT(CASE WHEN is_full_time = 1 AND is_short_term = 0 THEN 1 ELSE NULL END) AS FT_Long, COUNT(CASE WHEN is_full_time = 1 AND is_short_term = 1 THEN 1 ELSE NULL END) AS FT_Short, COUNT(CASE WHEN is_full_time = 0 AND is_short_term = 0 THEN 1 ELSE NULL END) AS PT_Long, COUNT(CASE WHEN is_full_time = 0 AND is_short_term = 1 THEN 1 ELSE NULL END) AS PT_Short FROM job GROUP by is_full_time, is_short_term ``` My results look like this: ``` FT_Long | FT_Short | PT_Long | PT_Short 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ``` How can I combine this result into a single result row? I want to see this: ``` FT_Long | FT_Short | PT_Long | PT_Short 1 1 1 1 ```
In general this template works well for these situations:

```
SELECT SUM(FT_Long) as FT_Long,
       SUM(FT_Short) AS FT_Short,
       SUM(PT_Long) AS PT_Long,
       SUM(PT_Short) AS PT_Short
FROM (
  SELECT is_full_time, is_short_term,
  COUNT(CASE WHEN is_full_time = 1 AND is_short_term = 0 THEN 1 ELSE NULL END) AS FT_Long,
  COUNT(CASE WHEN is_full_time = 1 AND is_short_term = 1 THEN 1 ELSE NULL END) AS FT_Short,
  COUNT(CASE WHEN is_full_time = 0 AND is_short_term = 0 THEN 1 ELSE NULL END) AS PT_Long,
  COUNT(CASE WHEN is_full_time = 0 AND is_short_term = 1 THEN 1 ELSE NULL END) AS PT_Short
  FROM job
  GROUP by is_full_time, is_short_term
) T
```

In this case I believe you will get the same results with the following; whether that holds will depend on what else is in your data that required the GROUP BY in your original query:

```
SELECT
  SUM(CASE WHEN is_full_time = 1 AND is_short_term = 0 THEN 1 ELSE 0 END) AS FT_Long,
  SUM(CASE WHEN is_full_time = 1 AND is_short_term = 1 THEN 1 ELSE 0 END) AS FT_Short,
  SUM(CASE WHEN is_full_time = 0 AND is_short_term = 0 THEN 1 ELSE 0 END) AS PT_Long,
  SUM(CASE WHEN is_full_time = 0 AND is_short_term = 1 THEN 1 ELSE 0 END) AS PT_Short
FROM job
```
You can safely remove the `GROUP BY` and `SUM` the row instead of counting them ``` SELECT SUM(CASE WHEN is_full_time = 1 AND is_short_term = 0 THEN 1 ELSE NULL END) AS FT_Long, SUM(CASE WHEN is_full_time = 1 AND is_short_term = 1 THEN 1 ELSE NULL END) AS FT_Short, SUM(CASE WHEN is_full_time = 0 AND is_short_term = 0 THEN 1 ELSE NULL END) AS PT_Long, SUM(CASE WHEN is_full_time = 0 AND is_short_term = 1 THEN 1 ELSE NULL END) AS PT_Short FROM job ```
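A runnable SQLite sketch of the single-row conditional aggregation, using the question's data (here with `SUM(... ELSE 0 ...)`, which gives the same totals as counting non-NULLs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE job (job_id INT, is_full_time INT, is_short_term INT)")
con.executemany("INSERT INTO job VALUES (?,?,?)",
                [(1, 0, 0), (2, 0, 1), (3, 1, 0), (4, 1, 1)])
row = con.execute("""
SELECT
  SUM(CASE WHEN is_full_time = 1 AND is_short_term = 0 THEN 1 ELSE 0 END) AS FT_Long,
  SUM(CASE WHEN is_full_time = 1 AND is_short_term = 1 THEN 1 ELSE 0 END) AS FT_Short,
  SUM(CASE WHEN is_full_time = 0 AND is_short_term = 0 THEN 1 ELSE 0 END) AS PT_Long,
  SUM(CASE WHEN is_full_time = 0 AND is_short_term = 1 THEN 1 ELSE 0 END) AS PT_Short
FROM job
""").fetchone()
print(row)  # all four counts in one row
```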
SQL Group By - Return results as Single Row instead of Multiple (Flatten results)
[ "", "sql", "group-by", "flatten", "" ]
```
DELETE FROM Table1
INNER JOIN View1 ON Table1.ID = View1.ID
WHERE Table1.ID = View1.ID;
```

The error is "SQL command not ended properly".
How you do this depends on the dialect of SQL. Here is a method that should work in any database: ``` DELETE FROM Table1 WHERE Table1.Id in (select Id from View1); ```
Specify the table where you want to delete the records, ``` DELETE Table1 -- <== this will delete records from Table1 FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.ID WHERE Table1.ID = Table2.ID; ```
How to delete a row from a table using a join SQL
[ "", "sql", "" ]
I have the following database structure:

```
FieldID|Year|Value
a|2011|sugar
a|2012|salt
a|2013|pepper
b|2011|pepper
b|2012|pepper
b|2013|pepper
c|2011|sugar
c|2012|salt
c|2013|salt
```

Now I would like to run a query that counts the number of fields for every item in each particular year, looking something like this:

```
value|2011|2012|2013
sugar|2|0|0
salt |0|2|1
pepper|1|1|2
```

I used multiple tables for every year before. However, the distinct values for 2011, 2012 and 2013 might be different (e.g. sugar would only be present in 2011).

For individual years I used:

```
SELECT `Value`, COUNT( `FieldID` )
FROM `Table`
WHERE `Year`=2011
GROUP BY `Value`
```
A1ex07's answer is fine. However, in MySQL, I prefer this formulation: ``` SELECT Value, sum(`Year` = 2011) AS cnt2011, sum(`Year` = 2012) AS cnt2012, sum(`Year` = 2013) AS cnt2013 FROM t GROUP BY value; ``` The use of `count( . . . )` produces the correct answer, but only because the `else` clause is missing. The default value is `NULL` and that doesn't get counted. To me, this is a construct that is prone to error. If you want the above in standard SQL, I go for: ``` SELECT Value, sum(case when `Year` = 2011 then 1 else 0 end) AS cnt2011, sum(case when `Year` = 2012 then 1 else 0 end) AS cnt2012, sum(case when `Year` = 2013 then 1 else 0 end) AS cnt2013 FROM t GROUP BY value; ```
You can do pivoting : ``` SELECT `Value`, COUNT(CASE WHEN `Year` = 2011 THEN FieldID END) AS cnt2011, COUNT(CASE WHEN `Year` = 2012 THEN FieldID END) AS cnt2012, COUNT(CASE WHEN `Year` = 2013 THEN FieldID END) AS cnt2013 FROM `Table` GROUP BY `Value` ```
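The `SUM(Year = 2011)` shorthand also works in SQLite, because a comparison there likewise evaluates to 0 or 1. A runnable sketch with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (FieldID TEXT, Year INT, Value TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("a", 2011, "sugar"),  ("a", 2012, "salt"),   ("a", 2013, "pepper"),
    ("b", 2011, "pepper"), ("b", 2012, "pepper"), ("b", 2013, "pepper"),
    ("c", 2011, "sugar"),  ("c", 2012, "salt"),   ("c", 2013, "salt")])
rows = con.execute("""
SELECT Value,
       SUM(Year = 2011) AS cnt2011,
       SUM(Year = 2012) AS cnt2012,
       SUM(Year = 2013) AS cnt2013
FROM t
GROUP BY Value
ORDER BY Value
""").fetchall()
print(rows)  # one pivoted row per value, matching the question's table
```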
mysql query split column into m
[ "", "mysql", "sql", "" ]
Postgres 8.4 and greater databases contain common tables in `public` schema and company specific tables in `company` schema. `company` schema names always start with `'company'` and end with the company number. So there may be schemas like: ``` public company1 company2 company3 ... companynn ``` An application always works with a single company. The `search_path` is specified accordingly in odbc or npgsql connection string, like: ``` search_path='company3,public' ``` How would you check if a given table exists in a specified `companyn` schema? eg: ``` select isSpecific('company3','tablenotincompany3schema') ``` should return `false`, and ``` select isSpecific('company3','tableincompany3schema') ``` should return `true`. In any case, the function should check only `companyn` schema passed, not other schemas. If a given table exists in both `public` and the passed schema, the function should return `true`. It should work for Postgres 8.4 or later.
It depends on what you want to test ***exactly***. ### Information schema? To find "whether the table exists" (*no matter who's asking*), querying the information schema (`information_schema.tables`) is **incorrect**, strictly speaking, because ([per documentation](https://www.postgresql.org/docs/current/infoschema-tables.html)): > Only those tables and views are shown that the current user has access > to (by way of being the owner or having some privilege). The query [provided by @kong](https://stackoverflow.com/a/20584058/939860) can return `FALSE`, but the table can still exist. It answers the question: ***How to check whether a table (or view) exists, and the current user has access to it?*** ``` SELECT EXISTS ( SELECT FROM information_schema.tables WHERE table_schema = 'schema_name' AND table_name = 'table_name' ); ``` The information schema is mainly useful to stay portable across major versions and across different RDBMS. But the implementation is slow, because Postgres has to use sophisticated views to comply to the standard (`information_schema.tables` is a rather simple example). And some information (like OIDs) gets lost in translation from the system catalogs - which *actually* carry all information. ### System catalogs Your question was: ***How to check whether a table exists?*** ``` SELECT EXISTS ( SELECT FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE n.nspname = 'schema_name' AND c.relname = 'table_name' AND c.relkind = 'r' -- only tables ); ``` Use the system catalogs `pg_class` and `pg_namespace` directly, which is also considerably faster. However, [per documentation on `pg_class`](https://www.postgresql.org/docs/current/catalog-pg-class.html): > The catalog `pg_class` catalogs tables and most everything else that has > columns or is otherwise similar to a table. 
This includes **indexes** (but > see also `pg_index`), **sequences**, **views**, **materialized views**, **composite > types**, and **TOAST tables**; For this particular question you can also use the [system view **`pg_tables`**](https://www.postgresql.org/docs/current/view-pg-tables.html). A bit simpler and more portable across major Postgres versions (which is hardly of concern for this basic query): ``` SELECT EXISTS ( SELECT FROM pg_tables WHERE schemaname = 'schema_name' AND tablename = 'table_name' ); ``` Identifiers have to be unique among *all* objects mentioned above. If you want to ask: ***How to check whether a name for a table or similar object in a given schema is taken?*** ``` SELECT EXISTS ( SELECT FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE n.nspname = 'schema_name' AND c.relname = 'table_name' ); ``` * [Related answer on dba.SE discussing **"Information schema vs. system catalogs"**](https://dba.stackexchange.com/a/75124/3684) ## Alternative: cast to [**`regclass`**](https://www.postgresql.org/docs/current/datatype-oid.html) ``` SELECT 'schema_name.table_name'::regclass; ``` This *raises an exception* if the (optionally schema-qualified) table (or other object occupying that name) does not exist. If you do not schema-qualify the table name, a cast to `regclass` defaults to the [**`search_path`**](https://stackoverflow.com/a/9067777/939860) and returns the OID for the first table found - or an exception if the table is in none of the listed schemas. Note that the system schemas `pg_catalog` and `pg_temp` (the schema for temporary objects of the current session) are automatically part of the `search_path`. You can use that and catch a possible exception in a function. 
Example: * [Check if sequence exists in Postgres (plpgsql)](https://stackoverflow.com/questions/11905868/Check-if-sequence-exists-in-Postgres-plpgsql/11919600#11919600) A query like the one above avoids possible exceptions and is therefore slightly faster. Note that each component of the name is treated as **identifier** here - as opposed to the queries above where names are given as literal strings. Identifiers are cast to lower case unless double-quoted. If you have forced otherwise illegal identifiers with double-quotes, those need to be included. Like: ``` SELECT '"Dumb_SchName"."FoolishTbl"'::regclass; ``` See: * [Are PostgreSQL column names case-sensitive?](https://stackoverflow.com/questions/20878932/are-postgresql-column-names-case-sensitive/20880247#20880247) ## [`to_regclass(rel_name)`](https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE) in Postgres 9.4+ Much simpler now: ``` SELECT to_regclass('schema_name.table_name'); ``` Same as the cast, [**but** it returns ...](https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE) > ... null rather than throwing an error if the name is not found
Perhaps use [information\_schema](http://www.postgresql.org/docs/8.4/interactive/infoschema-tables.html): ``` SELECT EXISTS( SELECT * FROM information_schema.tables WHERE table_schema = 'company3' AND table_name = 'tableincompany3schema' ); ```
How to check if a table exists in a given schema
[ "sql", "database", "postgresql", "information-schema", "search-path" ]
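The EXISTS-probe shape in the answers above carries over to other engines' catalogs. A rough, runnable sketch using SQLite's `sqlite_master` via Python purely for illustration - the table names are the question's hypothetical ones, and SQLite has no schemas, so this mirrors only the shape of the query, not the Postgres-specific catalog details:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableincompany3schema (id INTEGER)")

# SQLite's catalog is sqlite_master; the probe has the same EXISTS shape
# as the information_schema / pg_class queries above.
def table_exists(con, name):
    row = con.execute(
        "SELECT EXISTS (SELECT 1 FROM sqlite_master "
        "WHERE type = 'table' AND name = ?)",
        (name,)).fetchone()
    return bool(row[0])

found = table_exists(con, "tableincompany3schema")
missing = table_exists(con, "tablenotincompany3schema")
```

Binding the name as a parameter also keeps the probe safe when the table name comes from user input.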
I am trying to write a select statement with a right join (to clients), that will find a specific value in the join table - but ONLY if that is the most recent value for each client (ignoring blanks and nulls). ``` Clients Id Name 0 John Doe 1 Frank Smith 2 Sue Smith 3 John Smith Activity (join table) ClientId Type Date 0 500 2013-01-01 00:00:08 1 900 2013-01-01 00:00:07 2 NULL 2013-01-01 00:00:06 3 2013-01-01 00:00:05 4 500 2013-01-01 00:00:05 0 800 2013-01-01 00:00:04 1 500 2013-01-01 00:00:03 2 500 2013-01-01 00:00:02 3 500 2013-01-01 00:00:01 4 800 2013-01-01 00:00:00 ``` So this query will at least give me only the client records that have an activity type of 500 (in this case I would get back clients 0 and 4): ``` select * from clients right join activity on activity.clientid = clients.id where activity.type = 500 ``` HOWEVER, I need to figure out how to make this return ONLY the first record in the above list of records. The logic there is that Client #0 is the only client that has 500 as its latest activity type. The other 3 clients have NULL, blank, or 900, for example, as their 'latest' activity type. I am thinking some magic with ordering (the date would normally be pretty accurate), a 'top' and/or 'limit', and possibly a union? I just can't quite wrap my head around it.
Please try this ``` SELECT activity.id AS activityid , activity.type , activity.date , clients.id AS clientid , clients.name FROM activity LEFT JOIN activity AS other_activities ON activity.ClientID = other_activities.ClientID AND activity.date < other_activities.date LEFT JOIN clients ON activity.ClientID = clients.id WHERE activity.type = 500 AND other_activities.ClientID IS NULL; ```
This will get you the most recent Activity of type 500 and the client of that activity ``` SELECT * FROM (SELECT * FROM activity WHERE type=500 ORDER BY date DESC LIMIT 1) a LEFT JOIN clients c ON (a.clientid = c.id) ``` or if you only want the result if it's the most recent activity and the type is 500 you can use ``` SELECT * FROM (SELECT * FROM activity ORDER BY date DESC LIMIT 1) a LEFT JOIN clients c ON (a.clientid = c.id) WHERE a.type = 500; ``` [sqlFiddle here](http://sqlfiddle.com/#!2/c183b/4/0) to get clients who have the latest activity of type 500 ``` SELECT a1.ClientID,c.name,a1.Type,a1.Date FROM activity a1 LEFT JOIN clients c ON (c.id = a1.clientid) WHERE NOT EXISTS (SELECT 1 FROM activity a WHERE a.clientid = a1.clientid and a.date > a1.date) AND a1.type = 500; ```
mySQL Query - select join if latest record in join table contains a specific value
[ "mysql", "sql", "join" ]
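The "latest row per client, then require type 500" query above is the portable greatest-per-group pattern. A runnable sketch against a trimmed copy of the question's data, using SQLite via Python purely for illustration (the pattern is the same in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE clients (id INTEGER, name TEXT);
CREATE TABLE activity (clientid INTEGER, type INTEGER, date TEXT);
INSERT INTO clients VALUES (0,'John Doe'),(1,'Frank Smith'),(2,'Sue Smith');
INSERT INTO activity VALUES
 (0, 500,  '2013-01-01 00:00:08'),
 (1, 900,  '2013-01-01 00:00:07'),
 (2, NULL, '2013-01-01 00:00:06'),
 (0, 800,  '2013-01-01 00:00:04'),
 (1, 500,  '2013-01-01 00:00:03'),
 (2, 500,  '2013-01-01 00:00:02');
""")

# Keep an activity row only if no later one exists for the same client,
# then require that latest row to have type 500.
rows = con.execute("""
SELECT c.id, c.name
FROM activity a
JOIN clients c ON c.id = a.clientid
WHERE a.type = 500
  AND NOT EXISTS (SELECT 1 FROM activity x
                  WHERE x.clientid = a.clientid AND x.date > a.date)
""").fetchall()
```

Only John Doe qualifies: his latest activity (00:00:08) has type 500, while Frank Smith's type-500 row is followed by a later type-900 one.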
I'm using SQL Server 2008. I have a problem where I need to get the top 2 records in a group determined by 2 columns `Code` and `Period`, that will then be inserted to another table. Here is a sample table: ``` Code | Period | Dept | DeptWorkPartSum 001 2013-11 D1 53 001 2013-11 D2 33 001 2013-11 D3 12 002 2013-11 D2 30 002 2013-11 D4 28 002 2013-11 D5 15 002 2013-12 D2 100 ``` And what I want to get out of it is: ``` Code | Period | Dept | DeptWorkPartSum 001 2013-11 D1 53 001 2013-11 D2 33 002 2013-11 D2 30 002 2013-11 D4 28 002 2013-12 D2 100 ``` It is probably a very simple solution but I can't figure it out at the moment.
A simple approach is using a `CTE` with `ROW_NUMBER` ranking function: ``` WITH CTE AS ( SELECT Code,Period,Dept,DeptWorkPartSum, RN = ROW_NUMBER() OVER (PARTITION BY Code,Period, Dept ORDER BY DeptWorkPartSum) FROM dbo.TableName ) SELECT Code,Period,Dept,DeptWorkPartSum FROM CTE WHERE RN <= 2 ``` `Demo` However, your desired result seems to be incorrect since `Dept` is different for all three codes. Maybe you don't want to group by `Dept` but order by it, then the result is correct: ``` WITH CTE AS ( SELECT Code,Period,Dept,DeptWorkPartSum, RN = ROW_NUMBER() OVER (PARTITION BY Code,Period ORDER BY Dept, DeptWorkPartSum) FROM dbo.TableName ) SELECT Code,Period,Dept,DeptWorkPartSum FROM CTE WHERE RN <= 2 ``` `Demo`
``` SELECT Code, Period, Dept, DeptWorkPartSum FROM ( SELECT Code, Period, Dept, DeptWorkPartSum ,ROW_NUMBER ( ) OVER ( PARTITION BY Code, Period, Dept ORDER BY DeptWorkPartSum DESC) r FROM table) q WHERE r <= 2 ```
SQL Server : how to select top 2 records of the same group?
[ "sql", "sql-server", "sql-server-2008" ]
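The `ROW_NUMBER() OVER (PARTITION BY ...)` idea from the accepted answer can be checked end to end. A hedged illustration using SQLite (3.25+ for window functions) via Python rather than SQL Server; ordering by `DeptWorkPartSum DESC` is an assumption chosen to reproduce the question's desired output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (Code TEXT, Period TEXT, Dept TEXT, DeptWorkPartSum INTEGER);
INSERT INTO t VALUES
 ('001','2013-11','D1',53),('001','2013-11','D2',33),('001','2013-11','D3',12),
 ('002','2013-11','D2',30),('002','2013-11','D4',28),('002','2013-11','D5',15),
 ('002','2013-12','D2',100);
""")

# Rank rows within each (Code, Period) group by descending sum, keep the top 2.
rows = con.execute("""
WITH ranked AS (
  SELECT *, ROW_NUMBER() OVER (
             PARTITION BY Code, Period
             ORDER BY DeptWorkPartSum DESC) AS rn
  FROM t
)
SELECT Code, Period, Dept, DeptWorkPartSum FROM ranked
WHERE rn <= 2
ORDER BY Code, Period, DeptWorkPartSum DESC
""").fetchall()
```

The result matches the five rows the question asks for, including the lone row of the `('002','2013-12')` group.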
I have these three statements: ``` select count (distinct vid) as SUM_OF_SEC from pdalis where pid in(500,504); select sum (amount*paidperitem) SUM_OF_A from pdalis where pid = 501 ; select sum(amount * paidperitem) SUM_OF_P from pdalis where pid IN (500,504); ``` How do I combine all of them into 3 side-by-side columns?
Try this, using a scalar subquery for each aggregate: ``` select count(distinct vid) as SUM_OF_SEC, (select sum(amount*paidperitem) from pdalis where pid = 501) as SUM_OF_A, (select sum(amount*paidperitem) from pdalis where pid in (500,504)) as SUM_OF_P from pdalis where pid in (500,504); ```
Try like this. ``` SELECT TMP1.SUM_OF_SEC, TMP2.SUM_OF_A, TMP3.SUM_OF_P FROM (select count (distinct vid) as SUM_OF_SEC from pdalis where pid in(500,504)) AS TMP1, (select sum (amount*paidperitem) as SUM_OF_A from pdalis where pid = 501 ) AS TMP2, (select sum(amount * paidperitem) as SUM_OF_P from pdalis where pid IN (500,504)) AS TMP3 ```
How do I combine multiple select statements into separate columns?
[ "sql", "join" ]
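A runnable sketch of the scalar-subquery approach from the answers, with made-up sample data (SQLite via Python purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pdalis (pid INTEGER, vid INTEGER, amount REAL, paidperitem REAL);
INSERT INTO pdalis VALUES
 (500, 1, 2, 10.0), (504, 2, 1, 5.0), (504, 1, 3, 5.0), (501, 3, 4, 2.5);
""")

# Each aggregate runs as an independent scalar subquery, so the three
# results land side by side in a single row.
row = con.execute("""
SELECT
 (SELECT COUNT(DISTINCT vid) FROM pdalis WHERE pid IN (500,504))    AS SUM_OF_SEC,
 (SELECT SUM(amount*paidperitem) FROM pdalis WHERE pid = 501)        AS SUM_OF_A,
 (SELECT SUM(amount*paidperitem) FROM pdalis WHERE pid IN (500,504)) AS SUM_OF_P
""").fetchone()
```

With this data: two distinct `vid`s for pids 500/504, 4*2.5 = 10.0 for pid 501, and 20+5+15 = 40.0 for pids 500/504.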
I have a question that I'm hoping you can help with. I've been trying to think of a way to display data in a manner that I'm looking for, without much success. Basically, the data is laid out as follows: example of data: ``` UID | Name | Action | Parent_UID | Date | Category | Item 1 | John | Delete | (null) | 10-DEC-2013 | Cars | Cars 2 | John | Delete | 1 | 10-DEC-2013 | Cars | Ford 3 | Mark | Add | (null) | 09-DEC-2013 | Cars | Tesla 4 | Mark | Add | 3 | 09-DEC-2013 | Model | Model-S 5 | Mark | Add | 4 | 09-DEC-2013 | Inventory | 5 ``` The table contains parent records, as well as child records recording actions taken. The Parent records have a Parent\_UID of null. There are many records in this table, and they aren't organized in any fashion by default. As you can see, the table can get very deep in terms of the layers of data. What I would like to do is have the parent record first, followed by the child record. I tried the following SQL in our Oracle database: `select * from Table where Date > '09-DEC-2013' and Name = 'John' start with parent_uid is null connect by parent_uid = uid;` but it's taking forever to run and I'm not sure if the syntax is correct. Is there a better way to get what I'm looking for?
There were a few issues: * [`UID`](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions223.htm#SQLRF06153) is a system function which uniquely identifies the session user - if you want to use it as a column name then you have to enclose it in double quotes `""`; * `DATE` is a datatype and is a [reserved word](http://docs.oracle.com/cd/B19306_01/em.102/b40103/app_oracle_reserved_words.htm) - same as above, if you want to call a column `Date` then you need to wrap it in double quotes; and * You need to connect the `parent_UID` to the `PRIOR "UID"`. [SQL Fiddle](http://sqlfiddle.com/#!4/fc98a/9) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE tbl ( "UID", Name, Action, Parent_UID, "Date", Category, Item ) AS SELECT 1, 'John', 'Delete', null, TO_DATE( '10-DEC-2013', 'DD-MON-YYYY' ), 'Cars', 'Cars' FROM DUAL UNION ALL SELECT 2, 'John', 'Delete', 1, TO_DATE( '10-DEC-2013', 'DD-MON-YYYY' ), 'Cars', 'Ford' FROM DUAL UNION ALL SELECT 3, 'Mark', 'Add', null, TO_DATE( '09-DEC-2013', 'DD-MON-YYYY' ), 'Cars', 'Tesla' FROM DUAL UNION ALL SELECT 4, 'Mark', 'Add', 3, TO_DATE( '09-DEC-2013', 'DD-MON-YYYY' ), 'Model', 'Model-S' FROM DUAL UNION ALL SELECT 5, 'Mark', 'Add', 4, TO_DATE( '09-DEC-2013', 'DD-MON-YYYY' ), 'Inventory', '5' FROM DUAL; ``` **Query 1**: ``` SELECT * FROM tbl WHERE "Date" >= TO_DATE( '09-DEC-2013', 'DD-MON-YYYY' ) AND Name = 'John' START WITH parent_UID IS NULL CONNECT BY PRIOR "UID" = parent_uid ORDER SIBLINGS BY "UID" ``` **[Results](http://sqlfiddle.com/#!4/fc98a/9/0)**: ``` | UID | NAME | ACTION | PARENT_UID | DATE | CATEGORY | ITEM | |-----|------|--------|------------|---------------------------------|----------|------| | 1 | John | Delete | (null) | December, 10 2013 00:00:00+0000 | Cars | Cars | | 2 | John | Delete | 1 | December, 10 2013 00:00:00+0000 | Cars | Ford | ```
I think you need to change your connect by to be CONNECT BY UID = PRIOR PARENT_UID. Also check that indexes exist for these fields; one on uid + parent_uid should do the trick. Also, to show hierarchy in your results, you can use LEVEL in your select list, which will show the depth.
Hierarchical SQL Query in Oracle
[ "sql", "oracle" ]
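Outside Oracle there is no `CONNECT BY`, but the same parent-to-child walk can be written as a recursive CTE. A trimmed, runnable sketch (SQLite via Python; columns reduced to the ones the hierarchy needs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl (uid INTEGER, name TEXT, parent_uid INTEGER);
INSERT INTO tbl VALUES
 (1,'John',NULL),(2,'John',1),
 (3,'Mark',NULL),(4,'Mark',3),(5,'Mark',4);
""")

# Start with the roots (parent_uid IS NULL) and repeatedly join children,
# mirroring START WITH ... CONNECT BY PRIOR uid = parent_uid; depth plays
# the role of Oracle's LEVEL.
rows = con.execute("""
WITH RECURSIVE tree(uid, name, parent_uid, depth) AS (
  SELECT uid, name, parent_uid, 1 FROM tbl WHERE parent_uid IS NULL
  UNION ALL
  SELECT t.uid, t.name, t.parent_uid, tree.depth + 1
  FROM tbl t JOIN tree ON t.parent_uid = tree.uid
)
SELECT uid, depth FROM tree ORDER BY uid
""").fetchall()
```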
I have one table; the structure is given below:- ``` Where i_status means i_status 0 = Active i_status 1 = Archived i_status 2 = Deleted and fk_replaced_by_id contains the active record's id that will be considered the parent record. Table structure is id--completion_date--i_status--fk_replaced_by_id 1 --15-Dec-2013 --0 --0 2 --15-Dec-2013 --0 --0 3 --14-Dec-2013 --2 --0 4 --07-Dec-2013 --1 --1 5 --08-Dec-2013 --1 --1 6 --13-Dec-2013 --0 --0 ``` I want to show the listing in this order: `order by completion_date, archived records should be under their parent, and deleted records should be last`:- ``` id--completion_date--i_status--fk_replaced_by_id 1 --15-Dec-2013 --0 --0 4 --07-Dec-2013 --1 --1 5 --08-Dec-2013 --1 --1 2 --15-Dec-2013 --0 --0 6 --13-Dec-2013 --0 --0 3 --14-Dec-2013 --2 --0 ``` I have tried this query ``` SELECT * FROM assessments ORDER BY i_completion_date DESC, i_status ASC ``` But this query returns results ordered only by `completion_date`, and I always need active records on top (greatest `completion_date` first), with archived records under their parent record. Any ideas, please?
First, to force all "DELETED" records at the end (and this is not supporting deleted with parent reference), do a case/when... or with MySQL an IF() in this case if( i\_status = 2, 2, 1 ) Stating if the i\_status = 2 (deleted), I want all of them sorted in SECOND GROUP, everything else goes in FIRST GROUP. Now, to handle parent. You will need to get the parent's completion date too if there is one via a self-join ``` select YT.ID, YT.Completion_Date, YT.I_Status, YT.FK_Replaced_By_ID, YT2.Completion_Date as ParentCompleted from YourTable YT LEFT JOIN YourTable YT2 ON YT.FK_Replaced_By_ID = YT2.ID order by IF( YT.I_Status = 2, 2, 1 ), IF( YT2.ID IS NULL, YT.Completion_Date, YT2.Completion_Date ) DESC, IF( YT2.ID IS NULL, YT.ID, YT2.ID ), YT.Completion_Date DESC ``` Now to clarify the rest of the ORDER BY. First, IF() is for the status as described. We want all DELETED pushed to the bottom via 2, and anything else considered in group 1. Next IF(). If there IS a parent ID (via matching ID in second instance of same table via self-join), then grab the date of the PARENT'S record (alias YT2). If no parent, then it IS its own record and use its own completion date (alias YT). This way, any child records that had older dates will be pull up to the same date grouping as its parent, not broken down to their older date. Now, what if there are many transactions on the same date, especially since there is no timestamp reference. Here too I am saying that within the same date (previous IF() above), put the record grouped by either the parent ID, or the ID itself. In a similar fashion to the completion date, the parent and child records will all be bunched together. So this IF() is using the ID or Parent ID to keep together. Finally since all the respective SUBGROUPS have been established, now sort THOSE by actual completion date of their original (non-parent) record. I guess what I missed on the 4/5 records being out of order, this should wrap that up for you... 
Instead of the last order by "YT.Completion\_Date DESC", change to ``` IF( YT2.ID IS NULL, 1, 2), YT.Completion_Date ``` This will force the Parent ID first, THEN do the others in natural date order.
Try to order by parent, than statuses and date in the last, like this: ``` SELECT * FROM assessments ORDER BY i_status ASC, fk_replaced_by_id ASC, i_completion_date DESC ``` I don't know what is your parent column, you don't tell us.
MySql Query - Need 'ORDER BY' option for conditions based
[ "mysql", "sql" ]
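The `IF()`-driven ORDER BY from the accepted answer (with its parent-first follow-up) can be checked end to end. A sketch using SQLite via Python, with `CASE` standing in for MySQL's `IF()` and ISO dates standing in for the question's `15-Dec-2013` format:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER, completion_date TEXT, i_status INTEGER,
                fk_replaced_by_id INTEGER);
INSERT INTO t VALUES
 (1,'2013-12-15',0,0),(2,'2013-12-15',0,0),(3,'2013-12-14',2,0),
 (4,'2013-12-07',1,1),(5,'2013-12-08',1,1),(6,'2013-12-13',0,0);
""")

# Deleted rows (status 2) sink to the bottom; children sort under their
# parent's date and id, parent first, then children in date order.
rows = con.execute("""
SELECT t.id
FROM t
LEFT JOIN t p ON t.fk_replaced_by_id = p.id AND t.fk_replaced_by_id <> 0
ORDER BY CASE WHEN t.i_status = 2 THEN 2 ELSE 1 END,
         COALESCE(p.completion_date, t.completion_date) DESC,
         COALESCE(p.id, t.id),
         CASE WHEN p.id IS NULL THEN 1 ELSE 2 END,
         t.completion_date
""").fetchall()
```

The ids come back as 1, 4, 5, 2, 6, 3 - exactly the question's expected listing.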
Given a table like this: ``` SESS_CODE YEAR Count ID ---- ---- ---- ---- D 2014 1 51 W 2014 1 51 2014 2 51 O 2014 1 52 W 2014 1 52 2014 2 52 D 2014 2 53 O 2014 1 54 W 2014 1 55 ``` I'm trying to write to a query that returns rows where SESS\_CODE = 'D', unless the ID is in another row where the SESS\_CODE is something other then 'D'. So the results should be: ``` SESS_CODE YEAR Count ID ---- ---- ---- ---- D 2014 2 53 ```
[SQL Fiddle](http://sqlfiddle.com/#!4/e3ade/10) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE Data ( SESS_CODE, YEAR, "Count", ID ) AS SELECT 'D', 2014, 1, 51 FROM DUAL UNION ALL SELECT 'W', 2014, 1, 51 FROM DUAL UNION ALL SELECT NULL, 2014, 2, 51 FROM DUAL UNION ALL SELECT 'O', 2014, 1, 52 FROM DUAL UNION ALL SELECT 'W', 2014, 1, 52 FROM DUAL UNION ALL SELECT NULL, 2014, 2, 52 FROM DUAL UNION ALL SELECT 'D', 2014, 2, 53 FROM DUAL UNION ALL SELECT 'O', 2014, 1, 54 FROM DUAL UNION ALL SELECT 'W', 2014, 1, 55 FROM DUAL; ``` **Query 1**: One analytical function and a filter (no joins): ``` WITH Max_Sess_Codes AS ( SELECT SESS_CODE, YEAR, "Count", ID, MAX( SESS_CODE ) KEEP ( DENSE_RANK FIRST ORDER BY DECODE( SESS_CODE, 'D', 1, 0 ) ) OVER ( PARTITION BY ID ) AS max_sess_code FROM Data d ) SELECT SESS_CODE, YEAR, "Count", ID FROM Max_Sess_Codes WHERE max_sess_code = 'D' ``` **[Results](http://sqlfiddle.com/#!4/e3ade/10/0)**: ``` | SESS_CODE | YEAR | COUNT | ID | |-----------|------|-------|----| | D | 2014 | 2 | 53 | ``` **Query 2**: If you have a composite `UNIQUE` constraint on `SESS_CODE` and `ID` (i.e. if for each `ID` there can only be one row with SESS\_CODE of `'D'`) then you could use: ``` SELECT MAX( SESS_CODE ) AS SESS_CODE, MAX( year ) AS year, MAX( "Count" ) AS "Count", ID FROM Data GROUP BY ID HAVING COUNT( CASE SESS_CODE WHEN 'D' THEN NULL ELSE 1 END ) = 0 ``` **[Results](http://sqlfiddle.com/#!4/e3ade/10/1)**: ``` | SESS_CODE | YEAR | COUNT | ID | |-----------|------|-------|----| | D | 2014 | 2 | 53 | ```
Try this: ``` select * from Table T1 where SESS_CODE = 'D' and not exists (select * from Table T2 where T1.Id = T2.Id and T2.SESS_CODE != 'D') ```
Select rows where a certain value doesn't exist
[ "sql", "oracle", "select" ]
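The `NOT EXISTS` shape from the second answer is portable, provided NULL `SESS_CODE`s are excluded explicitly (a `!= 'D'` comparison is never true for NULL anyway, so the extra predicate mainly documents the intent). A runnable sketch over the question's data (SQLite via Python; `Count` renamed to `cnt` to avoid keyword ambiguity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data (sess_code TEXT, year INTEGER, cnt INTEGER, id INTEGER);
INSERT INTO data VALUES
 ('D',2014,1,51),('W',2014,1,51),(NULL,2014,2,51),
 ('O',2014,1,52),('W',2014,1,52),(NULL,2014,2,52),
 ('D',2014,2,53),('O',2014,1,54),('W',2014,1,55);
""")

# Keep 'D' rows whose id never appears with a different, non-null sess_code.
rows = con.execute("""
SELECT sess_code, year, cnt, id
FROM data d
WHERE sess_code = 'D'
  AND NOT EXISTS (SELECT 1 FROM data x
                  WHERE x.id = d.id
                    AND x.sess_code IS NOT NULL
                    AND x.sess_code <> 'D')
""").fetchall()
```

Only id 53 survives, matching the expected output: id 51 also carries a 'W' row.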
I'm writing a database for some kind of university and there is a table named > Contact_Assign > Its parameters are: ``` Is_Instructor UD_BOOLEAN NOT NULL, Is_TeacherAssistant UD_BOOLEAN NOT NULL, Is_Student UD_BOOLEAN NOT NULL, Registration_ID UD_ID NOT NULL, Contact_ID UD_ID NOT NULL, ``` Now I want to insert dummy data into this table, but I have no idea how I can do this for the boolean parameters. PS. UD_BOOLEAN is ``` CREATE TYPE UD_BOOLEAN FROM BIT ``` Any idea how?
If you are only generating one row, you could use something as simple as: ``` SELECT CAST(ROUND(RAND(),0) AS BIT) ``` However, if you are generating more than one row, `RAND()` will evaluate to the same value for every row, so please see [Martin Smith's answer](https://stackoverflow.com/a/20597328/61305).
You can use ``` CRYPT_GEN_RANDOM(1) % 2 ``` The advantages over `RAND` are that it is stronger cryptographically (you may not care) and that if inserting multiple rows it is re-evaluated for each row. ``` DECLARE @T TABLE( B1 BIT, B2 BIT); INSERT INTO @T SELECT TOP 10 CRYPT_GEN_RANDOM(1)%2, CAST(ROUND(RAND(), 0) AS BIT) FROM master..spt_values SELECT * FROM @T ``` would give the same value in all rows for the second column
How to generate random boolean value in sql server 2008?
[ "sql", "sql-server", "sql-server-2008-r2" ]
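The per-row re-evaluation point is easy to demonstrate outside SQL Server. A sketch using SQLite via Python - the low bit of SQLite's `RANDOM()` plays the role of `CRYPT_GEN_RANDOM(1) % 2` and is re-evaluated for every generated row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Generate 50 rows with a recursive CTE; RANDOM() & 1 yields a fresh
# 0 or 1 for each of them.
vals = [v for (v,) in con.execute("""
WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i + 1 FROM n WHERE i < 50)
SELECT RANDOM() & 1 FROM n
""")]
```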
this is my first time posting here and I am a basic SQL user and need help. I have a `varchar` column that stores data like below: ``` Year.Docid 2007.000000001 2007.000000002 2007.000000003 2007.000000004 2007.000000005 2007.000000006 ``` I need to join this data to another table that does not have all the zeros after the decimal, can someone please show me how to get the data to look like below: ``` Year Docid 2007.1 2007.2 2007.3 2007.4 2007.5 2007.6 ``` I am using MICROSOFT SQL SERVER 2012
If the format is fixed, i.e. `YYYY.NNNNNNNNN`, you could just get the last 9 characters, convert them to `int`, convert the result back to `varchar` and concatenate back to the first 5 characters: ``` LEFT([Year.Docid], 5) + CAST(CAST(RIGHT([Year.Docid], 9) AS int) AS varchar(10)) ``` However, it would make more sense to store Year and Docid as two separate `int` columns, in both tables. It is much easier to assemble them just for the output than do this processing every time *and* join on the results of it.
To turn the long format into the short format: ``` SELECT LEFT('2007.000000001',5) + CAST(CAST(RIGHT('2007.000000001',LEN('2007.000000001')-5) AS int)AS VARCHAR) ``` ... To use that in a join: ``` SELECT ... FROM TABLE_1 T1 INNER JOIN TABLE_2 T2 ON LEFT(T1.pk,5) + CAST(CAST(RIGHT(T1.pk,LEN(T1.pk)-5) AS int)AS VARCHAR) = T2.pk ```
I need to remove leading zeros after a decimal point
[ "sql", "sql-server", "t-sql", "sql-server-2012" ]
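The fixed-format trick above (keep the first 5 characters, round-trip the rest through an integer cast) can be sketched outside SQL Server too. An illustration in SQLite via Python with a made-up value, `substr`/`||` replacing `LEFT`/`RIGHT`/`+`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# For the fixed YYYY.NNNNNNNNN layout: keep the first 5 characters and
# cast the tail to an integer to drop its leading zeros.
short = con.execute("""
SELECT substr(v, 1, 5) || CAST(substr(v, 6) AS INTEGER)
FROM (SELECT '2007.000000042' AS v)
""").fetchone()[0]
```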
I'm creating a website and am struggling with the SQL parts; below are two separate statements I am struggling with. If you know what the problem could be with either of them, I'd appreciate it. This is the statement ``` readsql = "SELECT * FROM Gallery WHERE <%=GalleryRs ("img_USERID")%> = <%=session ("usr_ID")%>" ``` This is the error > Expected end of statement > > /WIP/Gallery.asp, line 20 > > readsql = "SELECT \* FROM Gallery WHERE <%=GalleryRs ("img\_USERID") > ------------------------------------------------------^ This is the second statement ``` INSERT INTO like (lik_POSTID) VALUES('" &_ request("pos_ID") & "') ``` and the error > Microsoft Access Database Engine error '80040e14' > > Syntax error in INSERT INTO statement. > > /student/S0190204/WIP/like.asp, line 56 If you can help with either one that would be grand, thanks
Try ``` readsql = "SELECT * FROM Gallery WHERE "& GalleryRs("img_USERID")&" = "& session("usr_ID") ``` <%= %> makes no sense within an asp code block. It's for when you want to embed an asp variable within your html code - the "=" sign is shorthand for Response.Write - eg ``` <h1><%= pagetitle %></h1> ``` Also note that I've taken out the spaces after "GalleryRs" and "Session" Edited ``` readsql = "SELECT * FROM Gallery WHERE "& GalleryRs("img_USERID")&" = '"& session("usr_ID") & "'" ```
Your SELECT SQL should look like this when it gets to the database: ``` select * from Gallery where img_UserId = something ``` You are sending: ``` select * from Gallery where <%= etc ``` Your field name does not include <%. Look at why you are using a variable there. Unless the field in your where clause is not the same every time, don't use a variable.
SQL with asp classic
[ "sql", "ms-access", "asp-classic" ]
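Beyond fixing the string splicing, binding the value as a parameter sidesteps both the quoting errors above and SQL injection. A minimal sketch of the idea in Python with SQLite (classic ASP/ADO would use `Command` parameters instead; table and data here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE gallery (img_userid INTEGER, title TEXT)")
con.execute("INSERT INTO gallery VALUES (7, 'sunset')")

# The driver substitutes the bound value safely; no quote-balancing in
# the SQL string is needed.
user_id = 7
rows = con.execute(
    "SELECT title FROM gallery WHERE img_userid = ?", (user_id,)).fetchall()
```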
EDIT after @NealB's solution: @NealB's solution is very, very fast compared with [any other one](https://stackoverflow.com/q/18033115#answers), and **makes this new question about "adding a constraint to improve performance" unnecessary**. @NealB's solution needs no improvement: it runs in *O(n)* time and is very simple. --- The problem of "[labeling transitive groups with SQL](https://stackoverflow.com/q/18033115/287948)" has [an elegant solution using recursion and a CTE](https://stackoverflow.com/a/18033635/287948)... But this solution takes exponential time (!). I need to work with 10000 items: 1000 items take 1 second, 2000 take 1 day... *Constraint*: in my case it is possible to break the problem into pieces of ~100 items or less, but only to select one group of ~10 items and discard all the other ~90 labeled items... **Is there a generic algorithm to add and use this kind of "pre-selection", to reduce the quadratic, *O(N^2)*, time?** Perhaps, as shown by comments and @wildplasser, to *O(N log(N))* time; but with "pre-selection" I expect to reduce it to *O(N)* time. --- (EDIT) I tried to use an [alternative algorithm](https://stackoverflow.com/a/20608421/287948), but it needs some improvement to serve as a solution here; or, to really increase performance (to *O(N)* time), it needs to use the "pre-selection". The "pre-selection" (constraint) is based on a "super-set grouping"... Starting from the original ["How to label 'transitive groups' with SQL?" 
question](https://stackoverflow.com/q/18033115/287948) `t1` table, ``` table T1 (original T1 augmented by "super-set grouping label" ssg, and one more row) ID1 | ID2 | ssg 1 | 2 | 1 1 | 5 | 1 4 | 7 | 1 7 | 8 | 1 9 | 1 | 1 10 | 11 | 2 ``` So there are three groups, * `g1`: {1,2,5,9} because "1 *t* 2", "1 *t* 5" and "9 *t* 1" * `g2`: {4,7,8} because "4 *t* 7" and "7 *t* 8" * `g3`: {10,11} because "10 *t* 11" The super-group is only an auxiliary grouping, * `ssg1`: {g1,g2} * `ssg2`: {g3} If we have *M* super-group items and *N* total `T1` items, the average group length will be less than N/M. We can also suppose (for my typical problem) that the `ssg` maximum length is ~N/M. So, **[the "label algorithm"](https://stackoverflow.com/q/18033115/287948#tabs) needs to run only M times with ~N/M items** if it uses the `ssg` constraint.
An SQL-only solution appears to be a bit of a problem here. With the help of some procedural programming on top of SQL, the solution appears to be fairly simple and efficient. Here is a brief outline of a solution as it could be implemented using any procedural language invoking SQL. Declare table `R` with primary key `ID`, where `ID` corresponds to the same domain as `ID1` and `ID2` of table `T1`. Table `R` contains one other non-key column, a `Label` number. Populate table `R` with the range of values found in `T1`. Set `Label` to zero (no label). Using your example data, the initial setup for `R` would look like: ``` Table R ID Label == ===== 1 0 2 0 4 0 5 0 7 0 8 0 9 0 ``` Using a host language cursor plus an auxiliary counter, read each row from `T1`. Look up `ID1` and `ID2` in `R`. You will find one of four cases: ``` Case 1: ID1.Label == 0 and ID2.Label == 0 ``` In this case neither one of these `ID`s has been "seen" before: add 1 to the counter and then update both rows of `R` to the value of the counter: `update R set R.Label = :counter where R.ID in (:ID1, :ID2)` ``` Case 2: ID1.Label == 0 and ID2.Label <> 0 ``` In this case, `ID1` is new but `ID2` has already been assigned a label. `ID1` needs to be assigned to the same label as `ID2`: `update R set R.Label = :ID2.Label where R.ID = :ID1` ``` Case 3: ID1.Label <> 0 and ID2.Label == 0 ``` In this case, `ID2` is new but `ID1` has already been assigned a label. `ID2` needs to be assigned to the same label as `ID1`: `update R set R.Label = :ID1.Label where R.ID = :ID2` ``` Case 4: ID1.Label <> 0 and ID2.Label <> 0 ``` In this case, the row contains redundant information. Both rows of `R` should contain the same Label value. If not, there is some sort of data integrity problem. Ahhh... not quite - see the edit... **EDIT** I just realized that there are situations where both `Label` values here could be non-zero and different. If both are non-zero and different, then two `Label` groups need to be merged at this point.
All you need to do is choose one `Label` and update the others to match with something like: `update R set R.Label = :ID1.Label where R.Label = :ID2.Label`. Now both groups have been merged with the same `Label` value. Upon completion of the cursor, table `R` will contain the Label values needed to update `T2`. ``` Table R ID Label == ===== 1 1 2 1 4 2 5 1 7 2 8 2 9 1 ``` Process table `T2` using something along the lines of: `update T2 set T2.Label = R.Label where T2.ID1 = R.ID`. The end result should be: ``` table T2 ID1 | ID2 | LABEL 1 | 2 | 1 1 | 5 | 1 4 | 7 | 2 7 | 8 | 2 9 | 1 | 1 ``` This process is purely iterative and should scale to fairly large tables without difficulty.
I suggest you check this and use some general-purpose language for solving it. <http://en.wikipedia.org/wiki/Disjoint-set_data_structure> Traverse the graph, maybe run DFS or BFS from each node, then use this disjoint set hint. I think this should work.
How to label a big set of “transitive groups” with a constraint?
[ "sql", "algorithm" ]
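The disjoint-set structure suggested in the second answer is exactly what makes the accepted procedural approach near-linear. A minimal union-find sketch in Python, labeling the question's `T1` pairs in O(n·α(n)) time via path halving:

```python
# Union-find (disjoint-set) with path halving; each connected component
# of the pair graph becomes one transitive group.
def find(parent, x):
    while parent.setdefault(x, x) != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def label_groups(pairs):
    parent = {}
    for a, b in pairs:
        parent[find(parent, a)] = find(parent, b)   # union the two roots
    groups = {}
    for x in parent:
        groups.setdefault(find(parent, x), set()).add(x)
    return sorted(map(sorted, groups.values()))

# The question's T1 pairs, including the (10, 11) super-group row.
t1 = [(1, 2), (1, 5), (4, 7), (7, 8), (9, 1), (10, 11)]
groups = label_groups(t1)
```

This reproduces the three groups from the question: {1,2,5,9}, {4,7,8} and {10,11}.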
Given different machines with sql server. All of them have the same databases. The only difference between these db's is their name. It could be two names, let's say 'DB1' and 'DB2'. I need to check, which name is used on the given machine and create a function over it. The function is pretty big, it has at least 50 spots where the name of the DB is needed. I was trying to do something like ``` DECLARE @dbName VARCHAR(20) SET @dbName = ISNULL(DB_ID('DB1'), 'DB2'); SELECT * FROM @dbName.dbo.testtable; ``` But it does not work. Any help would be appreciated.
No, that won't work. With only two possible database you may be better off with an `if`: ``` DECLARE @dbName VARCHAR(20) SET @dbName = CASE WHEN DB_ID('DB1') IS NULL THEN 'DB2' ELSE 'DB1' END; IF @dbName = 'DB1' SELECT * FROM DB1.dbo.testtable; ELSE SELECT * FROM DB2.dbo.testtable; ``` If you want to run ALL future queries in that scope against that database then dynamically run a `USE` statement instead: ``` IF @dbName = 'DB1' USE DB1; ELSE USE DB2; ```
You can use dynamic SQL: ``` DECLARE @dbName VARCHAR(20) SET @dbName = 'DB2' IF DB_ID('DB1') IS NOT NULL SET @dbName = 'DB1' DECLARE @SQL NVARCHAR(100) SET @SQL = N'SELECT * FROM ' + @dbName + N'.dbo.testtable' EXEC sp_executesql @SQL ```
How to check which database exists and use it in the function?
[ "sql", "sql-server", "database" ]
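The core point - an identifier such as a database name cannot be a bound parameter, so you either branch or build the statement from a validated name - can be sketched outside SQL Server. An illustration with SQLite's `ATTACH` via Python (names `db1`/`db2` mirror the question's `DB1`/`DB2`; only `db1` is attached here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS db1")   # pretend db2 is absent
con.execute("CREATE TABLE db1.testtable (x INTEGER)")
con.execute("INSERT INTO db1.testtable VALUES (42)")

# Discover which database exists, then splice the *validated* name into
# the statement -- the same reason the answers branch or use sp_executesql.
attached = {row[1] for row in con.execute("PRAGMA database_list")}
db = "db1" if "db1" in attached else "db2"
rows = con.execute("SELECT x FROM {0}.testtable".format(db)).fetchall()
```

Splicing is safe here only because `db` comes from a fixed whitelist, never from user input.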
I've been using Toad for more than a year now without problems. All of a sudden the table autocomplete feature has ceased working. No settings have been changed, and I've clean installed a new TOAD version, yet the problem persists. The image below shows autocomplete defaulting into view IN\_INSTRUMENT in schema MCDM. Normal behaviour should result in a table/view list. ![enter image description here](https://i.stack.imgur.com/OU4rd.png) It is notable that the above does not happen with all schemas. For some schemas I will still see a table list. In the beginning this error happened only with a single schema. Now it is slowly progressing to other schemas as well, which is exceptionally frustrtating when you're dealing with dozens of schemas that contain hundreds of tables each. It slows down development when you must open Schema Browser and look for the exact table/view/procedure/package each time instead of letting autocomplete give suggestions. This same issue has been described in [this thread](https://stackoverflow.com/questions/17161110/how-to-turn-on-table-name-autocomplete-feature-in-toad) and [and this thread](https://stackoverflow.com/questions/4535428/toad-for-oracle-autocomplete-dropdown/4556488) with less detail, yet no accepted answer has been given. As can be seen from Code Assist settings, these should be in order. ![enter image description here](https://i.stack.imgur.com/qztBf.png) How to reset autocomplete behaviour into what is its original state?
I've found the solution to this problem. The issue was a corrupt configuration file. For anyone else with the same problem, this is how I fixed mine: 1. Backup your *appdata* folder - you can find its location in *options -> general -> application data*. 2. Create a new set of user files using *Utilities -> Copy User Settings -> Create clean set of user files*. Make sure you are running Toad with administrator rights. Note that the above will delete all your saved connection details (schema names, passwords, connection strings), so take a note of these. Hope this helps someone in the future.
Under View > Toad Options > Editor > Code Assist > Toad Insight Objects, checking Synonyms (in addition to Public Synonyms) worked for me.
Toad: Table Autocomplete Functionality Not Working
[ "sql", "oracle", "autocomplete", "toad", "code-assist" ]
Such a query as in the title would look like this I guess: ``` select * from table t where (t.k1='apple' and t.k2='pie') or (t.k1='strawberry' and t.k2='shortcake') ... --10000 more key pairs here ``` This looks quite verbose to me. Any better alternatives? (Currently using SQLite, might use MYSQL/Oracle.)
You can use, for example, this on Oracle. I assume that if you use a regular CONCAT() instead of Oracle's || on another DB, it would work too (as it is simply a string comparison with the IN list). Note that such a query might have a suboptimal execution plan. ``` SELECT * FROM TABLE t WHERE t.k1||','||t.k2 IN ('apple,pie', 'strawberry,shortcake' ); ``` But if you have your value list stored in another table, Oracle also supports the format below. ``` SELECT * FROM TABLE t WHERE (t.k1,t.k2) IN ( SELECT x.k1, x.k2 FROM x ); ```
Don't be afraid of verbose syntax. Concatenation tricks can easily mess up the selectivity estimates or even prevent the database from using indexes. Here is another syntax that may or may not work in your database. ``` select * from table t where (k1, k2) in( ('apple', 'pie') ,('strawberry', 'shortcake') ,('banana', 'split') ,('raspberry', 'vodka') ,('melon', 'shot') ); ``` A final comment: if you find yourself wanting to submit thousands of values as filters, you should most likely look for a different approach altogether :)
One select for multiple records by composite key
[ "", "sql", "" ]
I have a list of 100 students with their names and total marks. Based on these marks, each student is given a rank. I need to find the names of the students whose rank is between 10th and 20th, i.e. students with ranks 10, 11, 12 ... 20, in ascending order.
Try this: ``` SELECT studentName FROM studentTable ORDER BY totalMarks DESC LIMIT 9, 11; ``` (`LIMIT offset, count` skips `offset` rows first, so ranks 10 through 20 inclusive means skipping 9 rows and taking the next 11.)
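A quick way to sanity-check the offset arithmetic is Python's built-in `sqlite3` module (SQLite accepts the same MySQL-style `LIMIT offset, count` form). The table and column names follow the answer; the 100 students are generated sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE studentTable (studentName TEXT, totalMarks INTEGER)")
# 100 students: student_1 has the highest marks, student_100 the lowest.
conn.executemany(
    "INSERT INTO studentTable VALUES (?, ?)",
    [(f"student_{i}", 1000 - i) for i in range(1, 101)],
)

# Skip the first 9 rows, take the next 11: ranks 10..20 inclusive.
rows = conn.execute(
    "SELECT studentName FROM studentTable ORDER BY totalMarks DESC LIMIT 9, 11"
).fetchall()
names = [name for (name,) in rows]
print(names)
```

If the ranking must also survive ties in `totalMarks`, a deterministic tie-breaker in the `ORDER BY` (e.g. the name) keeps the offset stable between runs.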
try this: ``` SELECT <name> FROM <tablename> WHERE <rankfieldname> BETWEEN 10 AND 20 ```
sql query to get the student name with rank between 10 and 20 out of 100 students?
[ "", "mysql", "sql", "select", "limit", "ranking", "" ]
I'm having trouble joining three SQL tables. For simplicity, think of three tables called A, B and C. Table A contains attendance data such as shift id and date. The other two tables contain data for each shift, such as in time and out time. The data for a particular shift is included in only one of those two tables. table A ``` shift_id | date -------------------- 001 | 2013-12-01 002 | 2013-12-01 003 | 2013-12-01 ``` table B ``` shift_id | in_time | out_time ------------------------------ 001 | 07:10 | 04:10 003 | 07:30 | 05:10 ``` table C // does not contain an out\_time column ``` shift_id | in_time -------------------- 002 | 07:45 ``` Expected output ``` shift_id | in_time | out_time ------------------------------ 001 | 07:10 | 04:10 002 | 07:45 | 00:00 003 | 07:30 | 05:10 ``` Please help me.
I think `UNION` could help you. ``` SELECT 'FROM B', A.shift_id, B.in_time, B.out_time FROM A INNER JOIN B ON A.shift_id = B.shift_id UNION ALL SELECT 'FROM C', A.shift_id, C.in_time, '00:00' FROM A INNER JOIN C ON A.shift_id = C.shift_id ```
Assuming shift\_id is foreign key for table B and C, i.e. you don't need to verify if shift\_id is present in table A ``` SELECT shift_id, in_time, out_time FROM B UNION SELECT shift_id, in_time,'00:00' as out_time FROM C ``` else ``` SELECT shift_id, in_time, out_time FROM B WHERE shift_id IN (SELECT shift_id from A) UNION SELECT shift_id, in_time,'00:00' as out_time FROM C WHERE shift_id IN (SELECT shift_id from A); ``` if you need to verify whether shift\_id is present in table A
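To sanity-check the plain `UNION` version against the question's data, here is a sketch using Python's built-in `sqlite3` module (table names as in the question; table A is omitted since `shift_id` is assumed to be a foreign key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE B (shift_id TEXT, in_time TEXT, out_time TEXT);
    CREATE TABLE C (shift_id TEXT, in_time TEXT);
    INSERT INTO B VALUES ('001', '07:10', '04:10'), ('003', '07:30', '05:10');
    INSERT INTO C VALUES ('002', '07:45');
""")

# C has no out_time column, so a constant '00:00' fills that slot.
rows = conn.execute("""
    SELECT shift_id, in_time, out_time FROM B
    UNION
    SELECT shift_id, in_time, '00:00' AS out_time FROM C
    ORDER BY shift_id
""").fetchall()
print(rows)
```

This reproduces the expected output from the question, with shift 002 getting the placeholder out time.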
SQL joining three tables
[ "", "mysql", "sql", "" ]
![enter image description here](https://i.stack.imgur.com/4Cx3f.png) I'm trying to get this query to work. It seems to work until I join the albums table, at which point all of the data disappears. Can someone please explain? ![enter image description here](https://i.stack.imgur.com/YLQkt.png) ![enter image description here](https://i.stack.imgur.com/Q84Nc.png) album table Orderitems table ![enter image description here](https://i.stack.imgur.com/xy5Nl.png) ![enter image description here](https://i.stack.imgur.com/7bI6l.png) Product table
Your first query uses an (inner) `JOIN`, which is why there are no results. If you want to keep the non-matching rows, you could use a `LEFT JOIN` instead.
The problem is your join to the song table. In your product table the song id is null for the only entry with an album order; since you're doing an inner join, that entry is removed. My suggestion would be to do a left join on both the song and album tables, just to see what you're working with.
Mysql Multiple joins
[ "", "mysql", "sql", "mysql-workbench", "" ]
I'm designing a table 'employees', which contains a primary key that is auto increment and represents the ID of the employee. I want to prefix the ID with a number designating the city: city 1: 1, city 2: 2, etc. So the IDs should look like xyy, where x represents the city and yy the ID of the employee. When I'm adding a new employee I select the city x, and I would like the yy values to auto increment. Is that possible using SQL commands?
You are effectively packing two fields into one and violating the principle of [atomicity](http://en.wikipedia.org/wiki/1NF#Atomicity) and the 1NF in the process. On top of that, your key is not minimal (so to speak). Instead, keep two separate fields: ID and CITY. ID **alone** is the primary key. In your own words, ID is auto-increment, so it alone is unique. You can easily [concatenate](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat) ID and CITY together for display purposes in your query or VIEW or even in the client code. There is no reason to "precook" the concatenated value in the table itself.
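A minimal sketch of the two-column design, using Python's built-in `sqlite3` module (SQLite's `AUTOINCREMENT` stands in for MySQL's `AUTO_INCREMENT` here; column and sample names are made up). The xyy-style display value is computed in the query and never stored:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employees (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- plain atomic surrogate key
        city INTEGER NOT NULL,
        name TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO employees (city, name) VALUES (?, ?)",
    [(1, "Alice"), (2, "Bob"), (1, "Carol")],
)

# Concatenate only for display; the stored key stays a single number.
rows = conn.execute(
    "SELECT city || id AS display_id, name FROM employees ORDER BY id"
).fetchall()
print(rows)
```

Note that this keeps one global counter, so `yy` does not restart per city; a true per-city counter is exactly the packed-key design this answer advises against, and is better handled in application code or a view if it is ever truly needed.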
That is not good database design. You really should have a separate column for the city in your table. If you have many cities, the cities should perhaps be in their own table. What you are trying to do is overly complex and although 'everything is possible', I would not recommend it.
Autoincrement ID with a prefix
[ "", "mysql", "sql", "database-design", "" ]
I have two tables. `T1` has `t1field1` and `t1field2`, and `T2` has `t2field1` and `t2field2`. I want to remove all records from `T2` whose `t2field1` value does not exist in `t1field1`. ``` +----------+----------+ | t1field1 | t1field2 | +----------+----------+ | 1 | x | | 5 | y | | 6 | z | +----------+----------+ +----------+----------+ | t2field1 | t2field2 | +----------+----------+ | 3 | x | | 4 | y | | 5 | z | +----------+----------+ ```
Try this: ``` DELETE t2 FROM t2 LEFT JOIN t1 ON t1.t1field1 = t2.t2field1 WHERE t1.t1field1 IS NULL ``` **OR** ``` DELETE FROM t2 WHERE t2.t2field1 NOT IN (SELECT t1.t1field1 FROM t1) ```
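The multi-table `DELETE ... JOIN` form is MySQL-specific; the `NOT IN` form is portable. A sketch running it on the question's data with Python's built-in `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (t1field1 INTEGER, t1field2 TEXT);
    CREATE TABLE t2 (t2field1 INTEGER, t2field2 TEXT);
    INSERT INTO t1 VALUES (1, 'x'), (5, 'y'), (6, 'z');
    INSERT INTO t2 VALUES (3, 'x'), (4, 'y'), (5, 'z');
""")

# Remove t2 rows whose key has no counterpart in t1.
conn.execute(
    "DELETE FROM t2 WHERE t2field1 NOT IN (SELECT t1field1 FROM t1)"
)

remaining = conn.execute("SELECT * FROM t2").fetchall()
print(remaining)
```

One caveat with `NOT IN`: if `t1field1` can contain NULLs, the predicate matches nothing and the delete becomes a no-op; `NOT EXISTS` (or the `LEFT JOIN ... IS NULL` form) is the safer choice in that case.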
``` delete t2 from t2 left join t1 on t1.t1field1 = t2.t2field1 where t1.t1field1 is null ```
Remove records from table, if its specific value does not appears in another table
[ "", "mysql", "sql", "join", "sql-delete", "notin", "" ]
I am very new to SQL. I have done some basic "select from where ..." queries, but I struggle with my current project. Let's say this is my source table: ``` Project Involved 1 Harald 1 Kerstin 1 Peter 1 Christian 1 Lisa 1 Linda 2 Sören 2 Schmidt 2 Jörg 2 Robert 2 Harald 2 Lisa ``` My question should be fairly simple. The input is the names "Lisa" and "Harald". I want to know: which projects are Lisa and Harald both involved in? If this is super easy and you cannot understand why I'm asking such a simple thing: provide me with a link where this is explained and I'll read through it myself. I'm just not sure what exactly to look for, so I thought this was a faster way to get started :)
You can solve this in many ways, but here is the primary way I would solve it. Start simply with the projects LISA is associated with. This prevents looking at what could be thousands of projects/people; if Lisa is only associated with a few (2 in this case), why query against all of them? ``` Select p1.project from project p1 where p1.involved = 'Lisa' ``` So this lists projects 1 & 2 (obviously from your short sample of data). Now that we know this works, I would just JOIN again for Harald on the same project: ``` Select p1.project from project p1 join project p2 on p1.project = p2.project AND p2.involved = 'Harald' where p1.involved = 'Lisa' ``` Ensure you have an index on (involved, project) to help optimize the query. Additionally, others may propose a group by and having clause based on both parties you are interested in... something like ``` select p1.project from project p1 where p1.involved in ( 'Lisa', 'Harald') group by p1.project having count(*) = 2 ``` This basically says to the engine: give me each project where either Lisa or Harald exists. By applying a group by, each project shows only once, so I don't see duplicates. The HAVING clause tells how many rows you EXPECT per project; since you are asking for 2 names and want both of them, the HAVING COUNT(*) is 2, so you know BOTH are included.
I've named the table person, so this is the example ``` SELECT p1.project FROM person p1, person p2 WHERE p1.name = 'Harald' AND p2.name = 'Lisa' AND p1.project = p2.project ```
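The GROUP BY / HAVING variant can be checked directly against the question's sample data with Python's built-in `sqlite3` module (calling the table `project`, as in the other answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project (project INTEGER, involved TEXT)")
conn.executemany("INSERT INTO project VALUES (?, ?)", [
    (1, 'Harald'), (1, 'Kerstin'), (1, 'Peter'),
    (1, 'Christian'), (1, 'Lisa'), (1, 'Linda'),
    (2, 'Sören'), (2, 'Schmidt'), (2, 'Jörg'),
    (2, 'Robert'), (2, 'Harald'), (2, 'Lisa'),
])

# A project qualifies only when BOTH names appear in it.
rows = conn.execute("""
    SELECT project FROM project
     WHERE involved IN ('Lisa', 'Harald')
     GROUP BY project
    HAVING COUNT(*) = 2
     ORDER BY project
""").fetchall()
print(rows)
```

On this data both projects contain both people, so both project ids come back. Note the `HAVING COUNT(*) = 2` trick assumes a person appears at most once per project; with possible duplicate rows, `COUNT(DISTINCT involved) = 2` is the robust form.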
Beginner Help: Looking for Member Names in diffrent projects to see in which projects they are working together
[ "", "mysql", "sql", "database", "" ]
I saw that some people here have had much the same problem I have now... but nothing has helped me :/ I have a table Test with 2 columns: a date and an integer value. And I need the average of every N records. For instance ``` Date | Info 2013-01-01 | 210 2013-01-02 | 213 2013-01-03 | 214 2013-01-05 | 197 2013-01-06 | 192 ``` .... And let's suppose that N = 3. I need to populate a table with ``` Averages 212.3333 (that's the average of the 3 first Info's) (210+213+214)/3 208.0000 (213+214+197)/3 201.0000 (214+197+192)/3 ``` I have tried this ``` delimiter $$ DROP PROCEDURE IF EXISTS calcAvg$$ CREATE PROCEDURE calcAvg() begin DECLARE v_finished INTEGER DEFAULT 0; declare v_avg DECIMAL default 0.0; DECLARE initial_date date; DECLARE cr CURSOR FOR select avg(i) from ( select t.info i from test t where date >= initial_date limit 3 )as s; -- declare NOT FOUND handler DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_finished = 1; set initial_date = '2013-01-01'; get_avg: LOOP OPEN cr; cr_loop: LOOP FETCH cr INTO v_avg; IF v_finished = 1 THEN LEAVE cr_loop; END IF; insert into result values (v_avg); END LOOP cr_loop; CLOSE cr; #I've to change this line... the table does not have all days set initial_date = date_add(initial_date, interval 1 DAY); #But the problem it's not the line yet... set v_finished = 0; if initial_date > '2013-05-30' then leave get_avg; end if; END LOOP get_avg; end$$ ``` In the code I have fixed N as 3. The problem is that when I call the procedure, it calculates the 1st average of 3 records... and when the cursor runs again, it returns the same value, the first 3 records... and that's not what I need. The cursor is not changing the result :/
[SQLFiddle](http://sqlfiddle.com/#!2/fd2ff/16/0) ``` SELECT date,info,(select avg(info) from (select date,info,@row1:=@row1+1 as row from test,(SELECT @row1:=0)r order by date desc )wrn1 where wrn1.row between wrn2.start and wrn2.end)as avg, start,end FROM (select date,info,@row:=@row+1 as row,(@row - 2) as start,@row as end from test,(SELECT @row:=0)r order by date desc )wrn2 WHERE start > 0 ORDER BY date asc ``` this will return ``` DATE INFO AVG START END January, 01 2013 00:00:00+0000 210 212.3333 3 5 January, 02 2013 00:00:00+0000 213 208 2 4 January, 03 2013 00:00:00+0000 214 201 1 3 ```
You can use a correlated subquery for this. If you want up to `n` days including and after the current date, then this will work: ``` select date, info, (select avg(t2.info) from test t2 where datediff(t.date, t2.date) < n and t2.date >= t.date ) as avginfo from test t; ``` If you want `n` records including and after the current one, then the following idea would work: ``` select date, info, (select avg(info) from (select t2.* from test t2 where t2.date >= t.date order by t2.date limit n ) t3 ) as avginfo from test t; ``` However, this generates an error in MySQL, and I'm not sure how to work around it (the double nested query can no longer recognize the correlation to the outer query). The following version does work as shown [here](http://www.sqlfiddle.com/#!2/e9a0ac/17): ``` select t.date, t.info, avg(t3.info) from (select date, info, (select t2.date from test t2 where t2.date >= t.date order by t2.date limit 1 offset 2 ) as date3 from test t ) t left outer join test t3 on t3.date between t.date and t.date3 group by t.date, t.info; ```
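On databases with window functions (MySQL 8+, SQLite 3.25+; the question's MySQL era predates them, hence the workarounds above), the N-row moving average is a single query. A sketch on the question's data using Python's built-in `sqlite3` module, keeping only full windows of 3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (date TEXT, info INTEGER);
    INSERT INTO test VALUES
        ('2013-01-01', 210), ('2013-01-02', 213), ('2013-01-03', 214),
        ('2013-01-05', 197), ('2013-01-06', 192);
""")

# Average each row with the next 2 rows; COUNT(*) over the same
# window lets us discard the trailing, incomplete windows.
rows = conn.execute("""
    SELECT date, avg3 FROM (
        SELECT date,
               AVG(info) OVER w AS avg3,
               COUNT(*)  OVER w AS n
          FROM test
        WINDOW w AS (ORDER BY date ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING)
    ) WHERE n = 3
    ORDER BY date
""").fetchall()
for date, avg3 in rows:
    print(date, round(avg3, 4))
```

This yields the three averages from the question (212.3333, 208, 201) without any cursor or self-join; changing N only means changing `2 FOLLOWING` and `n = 3`.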
Get the average of each N groups
[ "", "mysql", "sql", "group-by", "average", "" ]