Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
I have an sql query to select randomly 1200 top retweeted tweets at least 50 times retweeted and the tweetDate should be 4 days older from 40 million records. The query I pasted below works but It takes 40 minutes, so is there any faster version of that query? ``` SELECT originalTweetId, Count(*) as total, tweetContent, tweetDate FROM twitter_gokhan2.tweetentities WHERE originalTweetId IS NOT NULL AND originalTweetId <> - 1 AND isRetweet = true AND (tweetDate < DATE_ADD(CURDATE(), INTERVAL - 4 DAY)) GROUP BY originalTweetId HAVING total > 50 ORDER BY RAND() limit 0 , 1200; --------------------------------------------------------------- Table creation sql is like: CREATE TABLE `tweetentities` ( `id` int(11) NOT NULL AUTO_INCREMENT, `tweetId` bigint(20) NOT NULL, `tweetContent` varchar(360) DEFAULT NULL, `tweetDate` datetime DEFAULT NULL, `userId` bigint(20) DEFAULT NULL, `userName` varchar(100) DEFAULT NULL, `retweetCount` int(11) DEFAULT '0', `keyword` varchar(500) DEFAULT NULL, `isRetweet` bit(1) DEFAULT b'0', `isCompleted` bit(1) DEFAULT b'0', `applicationId` int(11) DEFAULT NULL, `latitudeData` double DEFAULT NULL, `longitude` double DEFAULT NULL, `originalTweetId` bigint(20) DEFAULT NULL, PRIMARY KEY (`id`), KEY `index` (`originalTweetId`), KEY `index3` (`applicationId`), KEY `index2` (`tweetId`), KEY `index4` (`userId`), KEY `index5` (`userName`), KEY `index6` (`isRetweet`), KEY `index7` (`tweetDate`), KEY `index8` (`originalTweetId`), KEY `index9` (`isCompleted`), KEY `index10` (`tweetContent`(191)) ) ENGINE=InnoDB AUTO_INCREMENT=41501628 DEFAULT CHARSET=utf8mb4$$ ```
You are, of course, summarizing a *huge* number of records, then randomizing them. This kind of thing is hard to make fast. Going back to the beginning of time makes it worse. Searching on a null condition just trashes it. If you want this to perform reasonably, you *must* get rid of the `IS NOT NULL` selection. Otherwise, it will perform badly. But let us try to find a reasonable solution. First, let's get the `originalTweetId` values we need. ``` SELECT MIN(id) originalId, MIN(tweetDate) tweetDate, originalTweetId, Count(*) as total FROM twitter_gokhan2.tweetentities WHERE originalTweetId <> -1 /*AND originalTweetId IS NOT NULL We have to leave this out for perf reasons */ AND isRetweet = true AND tweetDate < CURDATE() - INTERVAL 4 DAY AND tweetDate > CURDATE() - INTERVAL 30 DAY /*let's add this, if we can*/ GROUP BY originalTweetId HAVING total >= 50 ``` This summary query gives us the lowest id number and date in your database for each subject tweet. To get this to run fast, we need a compound index on (originalTweetId, isRetweet, tweetDate, id). The query will do a range scan of this index on tweetDate, which is about as fast as you can hope for. Debug this query, both for correctness and performance, then move on. Now do the randomization. Let's do this with the minimum amount of data we can, to avoid sorting some enormous amount of stuff. ``` SELECT originalTweetId, tweetDate, total, RAND() AS randomOrder FROM ( SELECT MIN(id) originalId, MIN(tweetDate) tweetDate originalTweetId, Count(*) as total FROM twitter_gokhan2.tweetentities WHERE originalTweetId <> -1 /*AND originalTweetId IS NOT NULL We have to leave this out for perf reasons */ AND isRetweet = true AND tweetDate < CURDATE() - INTERVAL 4 DAY AND tweetDate > CURDATE() - INTERVAL 30 DAY /*let's add this, if we can*/ GROUP BY originalTweetId HAVING total >= 50 ) AS retweets ORDER BY randomOrder LIMIT 1200 ``` Great. Now we have a list of 1200 tweet ids and dates in random order. 
Now let's go get the content. ``` SELECT a.originalTweetId, a.total, b.tweetContent, a.tweetDate FROM ( /* that whole query above */ ) AS a JOIN twitter_gokhan2.tweetentities AS b ON (a.id = b.id) ORDER BY a.randomOrder ``` See how this goes? Use a compound index to do your summary, and do it on the minimum amount of data. Then do the randomizing, then go fetch the extra data you need.
You're selecting a huge number of records by selecting every record older than 4 days old.... Since the query takes a huge amount of time, why not simply prepare the results using an independant script which runs repeatedly in the background.... You might be able to make the assumption that if its a retweet, the originalTweetId cannot be null/-1
query optimization to find random sample
[ "", "mysql", "sql", "random", "" ]
I have a table with a column date but it is stored as text. Now I need to extract data based on date and I'm thinking I need to alter the column type to datetime, but how can I do that without losing data? My text records are in format dd-MM-YYYY hh:mm If I just change the column type the data I lose all data (it is filled with zeros).
``` UPDATE `table` SET `column` = STR_TO_DATE(`column`,'%d-%M-%Y %h:%i') ``` Just change the format to what you have,in case that is not correct. [Formats](http://www.w3schools.com/sql/func_date_format.asp) [**SQL fiddle**](http://sqlfiddle.com/#!2/e8b4f/1)
You can create a new column of type `DATETIME` then you update it with your converted data from your `TEXT` column using the datetime conversion functions from mysql. After update this field youe can change your old field with the data from the new field then drop the field created for store the temp data.
MySQL alter table and convert data from text to datetime
[ "", "mysql", "sql", "date", "datetime", "" ]
I have two tables and I want to update by joining them. I am using DB2 V9.7. **ORDER\_APPROVALS** ``` ORDER_ID CREATED_BY_ID CREATED_BY_NAME PROCESS_DT ------------------------------------------------------- 234 2 admin (null) 307 2 admin (null) 313 2 admin 11-11-2013 ``` **ORDER\_ATTRIBUTE** ``` ORDER_ID ATTRIBUTE_ID VALUE ----------------------------------- 234 123 ? --(ORDER_APPROVALS.CREATED_BY_NAME) 307 123 ? --(ORDER_APPROVALS.CREATED_BY_NAME) ``` I want to update value field against Attribute\_ID 123. So far I tried following query. But that does not work. I have tried similar kind of join in Netezza and that works. Want to know, how to do it in DB2? ``` update ORDER_ATTRIBUTE OT set OT.VALUE = (select CREATED_BY_NAME from ORDER_APPROVALS OA where OA.ORDER_ID = OT.ORDER_ID and OA.PROCESS_DT is NULL) where OT.ATTRIBUTE_ID = 123 and OT.ORDER_ID in (select ORDER_ID from ORDER_APPROVALS where PROCESS_DT is NULL) ```
You are looking for the [`MERGE`](http://pic.dhe.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0010873.html) statement: ``` merge into ORDER_ATTRIBUTE ot using (select ORDER_ID, CREATED_BY_NAME from ORDER_APPROVALS where PROCESS_DT is null) oa on (ot.ORDER_ID = oa.ORDER_ID) when matched and ot.ATTRIBUTE_ID = 123 then update set VALUE = oa.CREATED_BY_NAME; ```
I think you need to use a derived table to accomplish this: ``` update order_attributes set value = ( select created_by_name from( select created_by_name, oa.order_id from order_approvals oa left outer join order_attributes ot on oa.order_id = ot.order_id AND OT.ATTRIBUTE_ID = 123 and OT.ORDER_ID in (select ORDER_ID from ORDER_APPROVALS where PROCESS_DT is NULL) ) ORDERS WHERE orders.order_id = order_attributes.order_Id ) ```
How to update DB2 table with a join?
[ "", "sql", "db2", "sql-update", "" ]
**I have a table like this:** ``` ID_____PostingDate_____PosterID -------------------------------- 1______05/01/2012______450 2______06/30/2012______451 3______02/17/2013______451 4______12/10/2012______451 5______06/14/2012______452 6______06/15/2012______452 7______05/01/2012______453 8______06/04/2012______453 9______04/05/2013______454 10_____05/05/2013______454 ``` I'm trying to get a list of all PosterIDs that have posted in May or June of 2012 and have not posted again since then. **Desired Result from the table above:** ``` PosterID -------- 450 452 453 ``` **I've tried:** ``` WHERE DATE_FORMAT(PostingDate, '%m-%Y') IN ('05-2012', '06-2012') ``` and ``` SELECT UNIQUE(a.PosterID) FROM (SELECT ID, PostingDate, PosterID FROM table WHERE DATE_FORMAT(PostingDate, '%m-%Y') IN ('05-2012', '06-2012') ) a WHERE DATEDIFF(PostingDate, NOW()) > 365 ``` though neither of these are getting close
Try this: ``` SELECT DISTINCT PosterID FROM table1 WHERE PostingDate BETWEEN '2012-05-01' AND '2012-06-30' AND posterID NOT IN (SELECT PosterID FROM table1 WHERE PostingDate > '2012-07-01'); ``` `sqlfiddle demo`
``` SELECT PosterID FROM ( SELECT PosterID, MAX(PostingDate) AS latest_post FROM tbl GROUP BY 1) latest_posts WHERE latest_post BETWEEN '2012-05-01' AND '2012-06-30' ORDER BY PosterID; ```
MySQL - Pulling List Pending on Dates
[ "", "mysql", "sql", "unique", "" ]
I have a student table and a company table. **Student Table** id fname lname company\_id **Company Table** company\_id name type I want to output the student table data and then join the company data so the company id will reference the company name and industry. **Here's the query I'm running** ``` SELECT id, fname, lname, company.company_id, name, type FROM `student` INNER JOIN company ON student.company_id ORDER BY type ```
``` SELECT student_id, student_fname, student_lname, company.company_id, company_name, industry FROM `student` INNER JOIN company ON student.company_id=company.company_id ORDER BY industry ``` Specify columns for both tables when you join them.Without them you do a CROSS JOIN,thats is every row in A is associated with all rows in B.
It should be `ON student.company_id = company.company_id` Other than that it looks fine. ``` SELECT student_id, student_fname, student_lname, company.company_id, company_name, industry FROM `student` INNER JOIN company ON student.company_id = company.company_id ORDER BY industry ```
SQL Query - join help (Getting lots of results)
[ "", "sql", "join", "" ]
``` select MAX (cast (id as numeric)) from table --whatever value it may return select (row_number () over (order by id)) as Number from table --will start at 1 ``` How do I combine the two so whatever value the first query returns, I can use that value and auto increment from there, somehow combining the two (tried nesting them, but unsuccessful) or do I need to... 1. Declare a variable 2. get my max id value 3. make my variable equal to that 4. then place that value/variable in my second statement? Like... ``` select (row_number () over (order by id) + (declared_max_value)) as Number from table ```
Try this: ``` SELECT Seed.ID + ROW_NUMBER() OVER (order by T.ID) as Number, T.ID FROM T CROSS JOIN (SELECT MAX(ID) AS ID FROM T) AS Seed ``` Working sample: <http://www.sqlfiddle.com/#!3/a14ad/3>
Try this: ``` WITH CTE as (SELECT max(Field) FROM Table) SELECT WhatYouWant, cte.m FROM Table2 INNER JOIN CTE ON 0=0 ``` or this: ``` SELECT *, t.maxField FROM Table1 OUTER APPLY (SELECT max(Field) as maxField FROM Table2) t ```
How do I auto increment my own results column from a max value in a table?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I am having trouble finding an answer to this one, so hoping someone may be able to help. I am trying to do the following. I have a table with 4 columns and 22000+ records. Of these 22000+ records, there are 335 distinct server hostnames. Each record represents a peak value of a given metric and the date that corresponds to that. The problem I am having is bringing back the peak value for each server (instead of all records). Example of the source data ![enter image description here](https://i.stack.imgur.com/Airgz.png) What I would like to achieve (using the subset above as an example) is as follows ![enter image description here](https://i.stack.imgur.com/a7Nqx.png) Is this something that can be done easily with a query? Thanks for looking and I look forward to seeing any replies. AF
``` with cte as ( select hostname, metric, peak_d, peak ,row_number() over(partition by hostname order by peak desc) as OrderWithinGroup from Table1 ) select hostname, metric, peak_d, peak from cte where OrderWithinGroup = 1 order by hostname; ```
``` SELECT t1.HOSTNAME, t1.METRIC, t1.PEAK_D, t1.PEAK FROM Table t1 JOIN ( SELECT HOSTNAME, MAX(PEAK) AS MaxPeak FROM Table GROUP BY HOSTNAME ) t2 ON t1.HOSTNAME = t2.HOSTNAME AND t1.PEAK = t2.MaxPeak ```
SQL - Peak value per group from single table
[ "", "sql", "sql-server", "" ]
I made a table of data points - they have a key (type of data), value (value of data), timestamp (time data was recorded). Table definition: ``` CREATE TABLE IF NOT EXISTS datapoints ( point_id int(11) NOT NULL AUTO_INCREMENT, point_user_id int(11) NOT NULL, point_key varchar(32) NOT NULL, point_value longtext NOT NULL, point_timestamp int(11) NOT NULL, PRIMARY KEY (point_id), KEY datapoint_search (point_key,point_timestamp,point_user_id,point_value(64)) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0; ``` Now I make a query that gets all datapoints of a specific type for a date range (1 day in our example) and orders it by the data point value in descending order: ``` EXPLAIN SELECT * FROM datapoints WHERE point_key = 'body_temperature' AND point_timestamp >= UNIX_TIMESTAMP('2013-11-20') AND point_timestamp < UNIX_TIMESTAMP('2013-11-21') AND point_user_id = 1 ORDER BY point_value DESC; ``` Unfortunately, this produces a filesort in the EXPLAIN: ``` id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra 1 | SIMPLE | datapoints | range | datapoint_search | datapoint_search | 106 | NULL | 175642 | Using where; Using filesort ``` Is it possible to avoid this filesort? I have made an index (datapoint\_search), and it is used, yet a filesort is still invoked. PS. The point\_value column has to be text or longtext, or at least handle very large data (up to 8KB) while still being sortable.
The following index satisfies your where clause: ``` datapoints(point_key, point_user_id, point_timestamp); ``` This will probably improve the performance of your query significantly, but it won't remove the file sort. The following could, theoretically: ``` datapoints(point_key, point_user_id, point_value, point_timestamp); ``` However, I don't think that MySQL is smart enough to match part of the `where` clause and the `order by`, with the remaining filtering done after the sort. It is worth a try. The following will not work: ``` datapoints(point_key, point_user_id, point_timestamp, point_value); ``` The data would be retrieved in timestamp order for satisfying the `where` clause. The ordering by `point_value` is secondary to the timestamp. EDIT: If the number of rows found by the `where` is "constant", then the performance should be similar. If you don't have too many matches to `point_key`, `point_user_id`, then the following trick might help: ``` select dp.* from (SELECT * FROM datapoints WHERE point_key = 'body_temperature' AND point_user_id = 1 ORDER BY point_value DESC ) dp where point_timestamp >= UNIX_TIMESTAMP('2013-11-20') AND point_timestamp < UNIX_TIMESTAMP('2013-11-21'); ``` Along with the index `datapoints(point_key, point_user_id, point_value)`. Unfortunately, MySQL does not *guarantee* that the sort in the inner subquery actually keeps the rows in order for the outer query (I think it does in practice, at least usually). This would use the index for the inner query and then a scan of the temporary table for the second `where` clause. Also, if you don't need all the columns, then I would recommend putting the columns you want into the index. This will save the random scans of the full table when there is a match.
`Filesort` will not disappear while you are sorting on point\_value. `point_value` is indexed just 64 bytes. sorting is done by whole it's data. I suggest that store `point_value_64_prefix` for search and sort `point_value` this also has a problem. Sort are done only 64 bytes, sort result is not exactly. but in most case 64 bytes is enough (I guess) ``` CREATE TABLE IF NOT EXISTS datapoints ( point_id int(11) NOT NULL AUTO_INCREMENT, point_user_id int(11) NOT NULL, point_key varchar(32) NOT NULL, point_value longtext NOT NULL, point_value_64_prefix VARCHAR(64) NOT NULL, // <= this column added point_timestamp int(11) NOT NULL, PRIMARY KEY (point_id), KEY datapoint_search (point_key,point_timestamp,point_user_id,point_value_64_preifx) // <= ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0; EXPLAIN SELECT * FROM datapoints WHERE point_key = 'body_temperature' AND point_timestamp >= UNIX_TIMESTAMP('2013-11-20') AND point_timestamp < UNIX_TIMESTAMP('2013-11-21') AND point_user_id = 1 ORDER BY point_value_64_prefix DESC // <= sort by point_value_64_prefix rather than original value. ``` and, if your sorting data is large, `Filesort` can happen in this case you need to increase MySQL temp table size. see <http://dev.mysql.com/doc/refman/5.1/en/internal-temporary-tables.html> manual says: > The maximum size for in-memory temporary tables is the minimum of the tmp\_table\_size and max\_heap\_table\_size values
How to avoid filesort in this simple query? (No joins)
[ "", "mysql", "sql", "" ]
In my SQL query I need to do some arithmetic on alias. ``` SELECT MY_COMPLEX_EXPRESSION_USING_SUM_AND_CASEWHEN AS MYALIASNAME1, MY_SECOND_COMPLEX_EXPRESSION_USING_SUM_AND_CASEWHEN AS MYALIASNAME2, MYALIASNAME1 - MYALIASNAME2 AS MYALIASHNAME3 FROM MYTABLE ``` However, this does not work because it is not treating `MYALIASNAME1` and `MYALIASNAME2` as columns. Any ideas how can I achieve this? I am using H2, specifically h2-1.3.173.jar. I am using it in server mode. Thanks.
try this. ``` SELECT X.MYALIASNAME1 - X.MYALIASNAME2 AS MYALIASNAME3 FROM ( SELECT MY_COMPLEX_EXPRESSION_USING_SUM_AND_CASEWHEN AS MYALIASNAME1, MY_SECOND_COMPLEX_EXPRESSION_USING_SUM_AND_CASEWHEN AS MYALIASNAME2 FROM MYTABLE )X ```
Use *Common Table Expressions*: ``` with cte as ( select MY_COMPLEX_EXPRESSION_USING_SUM_AND_CASEWHEN AS MYALIASNAME1, MY_SECOND_COMPLEX_EXPRESSION_USING_SUM_AND_CASEWHEN AS MYALIASNAME2 from table ) select MYALIASNAME1 - MYALIASNAME2 AS MYALIASHNAME3 from cte ```
Doing arithmetic on alias in SQL
[ "", "sql", "alias", "h2", "" ]
I need to get a list of months that are not located inside the database. ***Example:*** Table **members** ``` ID | Member's code | Member since 1 | 555-12 | 2012-11-22 ``` Table **membership** ``` ID | Code | Paid 1 | 555-12 | 2013-1-1 2 | 555-12 | 2013-3-12 3 | 555-12 | 2013-5-1 ``` Let's say that today is : `2013-11-17` I need to get output like this: ``` Member's code | Debt ( Months ) 555-12 | 11-2012 555-12 | 12-2012 555-12 | 2-2013 555-12 | 4-2013 ``` Is this possible to do with a SQL? Do I need to have a `stored procedure` where I will pass `Member's code`?
My idea is to use a number table, that contains just numbers from 0 to 100 or more: ``` CREATE TABLE numbers ( n INT ); INSERT INTO numbers (n) VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15)...; ``` Then you can use a query like this: ``` SELECT m.ID, m.Code, DATE_FORMAT(m.Member_since + INTERVAL num.n MONTH, '%m-%Y') As Debt_Month_Year FROM members m INNER JOIN numbers num ON TIMESTAMPDIFF(MONTH, m.Member_since, LAST_DAY(CURDATE()))>=num.n LEFT JOIN membership ms ON m.Code = ms.Code AND LAST_DAY(ms.Paid)=LAST_DAY(m.Member_since + INTERVAL num.n MONTH) WHERE ms.id IS NULL -- and if you wish, add the following line: AND m.Code = '555-12' ``` Please see fiddle [here](http://sqlfiddle.com/#!2/e2284/3).
``` select code, left(since,7) as debt from user where code='555-12' union all select code, left(date_add(paid, interval -1 MONTH),7) as debt from paid where code='555-12' ``` **Fiddle** <http://sqlfiddle.com/#!2/3d988/1>
Months calculations inside stored procedure
[ "", "mysql", "sql", "date", "stored-procedures", "" ]
I want to simply execute a SQL query that would return the current id of the row that was added to a table, example: ``` insert into V_Post output inserted._ID values('Please return the id of THIS row :)') ``` That shall returns a number like 155 So the aim of my question is to store that number in some kind of local variable like XVALUE and then use this XVALUE in any other SQL query like: ``` insert into V_Posts values(1, XVALUE, null, GETDATE(), null, null, 1) ``` So i thougt of something like: ``` int XVALUE = insert into V_Post output inserted._ID values('Please return the id of THIS row :)') insert into V_Posts values(1, XVALUE, null, GETDATE(), null, null, 1) ``` OR ``` insert into V_Posts values(1, insert into V_Post output inserted._ID values('Please return the id of THIS row :)'), null, GETDATE(), null, null, 1) ``` But both it didn't work :( I hope i explained my question well and i really thank you in advance for your help. **EDIT** Please notice i have two tables: V\_posts and V\_post
**IDENTITY COLUMN** If it is an identity column and you are only inserting a single row then you can use `SCOPE_IDENTITY()` function to get the last generated Identity value within the scope of the current session. ``` DECLARE @NewValue INT; insert into V_Post values('Please return the id of THIS row :)') SET @NewValue = SCOPE_IDENTITY() ``` **IF NOT IDENTITY Column Or Multiple Identity values** If it is an identity column and you are inserting multiple rows and want to return all the newly inserted Identity values, or it is not an identity column but you need the last inserted value then you can make sure of `OUTPUT` command to get the newly inserted values. ``` DECLARE @tbl TABLE (Col1 DataType) DECLARE @NewValue Datatype; insert into V_Post output inserted._ID INTO @tbl values('Please return the id of THIS row :)') SELECT @NewValue = Col1 FROM @tbl ```
You can use `SCOPE_IDENTITY()`: ``` insert into V_Posts values(1, SCOPE_IDENTITY(), null, GETDATE(), null, null, 1) ``` It returns the last IDENTITY value produced on a connection, regardless of the table that produced the value. You can find [here](http://blog.sqlauthority.com/2007/03/25/sql-server-identity-vs-scope_identity-vs-ident_current-retrieve-last-inserted-identity-of-record/) some detailed information about what this value means
Store the value of output inserted._ID to local variable to reuse it in another query
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table name farmer data and it has attributes like farmer name, father name, pesticides, variety of crop etc (these attributes would be required for query). I have to write a query for: > Total no of people who has taken more than 1 variety of crop in a Season. How can i write this query? I have tried this query but its not giving me a single answer. ``` select Farmer Name, Count(variety Of Crop) from farmer data group by farmer Name having count(Variety Of Crop)>1 ```
Can you try this? ``` select count(*) from (select 1 as y from farmer data group by farmer Name having count(distinct Variety Of Crop)>1) x ```
I believe you are looking for records where Farmers has more the One type of crop. The way you are Counting rows will give you all the farmers who has more than one record regardless of type of Crop using distinct will return list of farmers who has more than one Distinct variety Of Crop. ``` select Farmer Name, Count(DISTINCT [variety Of Crop]) NumOfCrops from farmer data group by farmer Name having Count(DISTINCT [variety Of Crop]) > 1 ``` **Edit** To get the total Number of Farmers with more then one variety Of Crop. Try this.. ``` SELECT COUNT(*) FROM ( select Farmer Name, Count(DISTINCT [variety Of Crop]) NumOfCrops from farmer data group by farmer Name having Count(DISTINCT [variety Of Crop]) > 1 ) q ```
sql query, total number of people who have taken more than 1 variety in a season
[ "", "sql", "sql-server", "" ]
To find out the Nth max sal in oracle i'm using below query ``` SELECT DISTINCE sal FROM emp a WHERE ( SELECT COUNT(DISTINCE sal) FROM emp b WHERE a.sal<=b.sal)=&n; ``` * But According to me by using the above query it will take more time to execute if table size is big. * i'm trying to use the below query ``` SELECT sal FROM ( SELECT DISTINCE sal FROM emp ORDER BY sal DESC ) WHERE rownum=3; ``` * but not getting output.. any suggetions please .. Please share any link on how to optimise queries and decrease the time for a query to execute.
try this ``` select * from ( select sal ,dense_rank() over (order by sal desc) ranking from table ) where ranking = 4 -- Replace 4 with any value of N ```
``` SELECT sal FROM ( SELECT sal, row_number() OVER (order by sal desc) AS rn FROM emp ) WHERE rn = 3 ``` Yes, it will take longer to execute if the table is big. But for "N-th row" queries the only way is to look through all the data and sort it. It will be definitely much faster if you have an index on sal.
Nth max salary in Oracle
[ "", "sql", "oracle", "oracle11g", "" ]
`tbl_LoanSummary` has `Sample_Number` column. I have to check if `Sample_Number` column is not null the return 'Y' otherwise return return 'N' from below select statement. ``` select a.Br_Loan_No ,a.Br_LookupKey, //return IsNull(s.Sample_Number) ='N' or 'Y' from dbo.tbl_Br a left outer join dbo.tbl_LoanSummary s on s.Loan_no = a.Br_Loan_No order by a.Br_Loan_No ``` How to do this?
You can use the `case` expression for this... ``` select a.Br_Loan_No, a.Br_LookupKey, CASE WHEN s.Sample_Number IS NULL THEN 'N' ELSE 'Y' END AS [HasSample] from dbo.tbl_Br a left outer join dbo.tbl_LoanSummary s on s.Loan_no = a.Br_Loan_No order by a.Br_Loan_No ```
You'll want to use a `CASE` expression. It's like an embedded if-statement or switch-statement from traditional programming languages. ``` SELECT a.Br_Loan_No, a.Br_LookupKey CASE WHEN s.Sample_Number IS NULL THEN 'N' ELSE 'Y' END AS sample_number_is_not_null FROM dbo.tbl_Br a LEFT JOIN dbo.tbl_LoanSummary s ON s.Loan_no = a.Br_Loan_No ORDER BY a.Br_Loan_no ``` Note that you are creating a computed column here, rather than selecting the raw value of an existing column. It's generally required that you give this column a name, thus the use of the `AS sample_number_is_not_null`. There are two forms of the `CASE` expression. One lets you compare a column or value against several choices. It is like using an implicit equals: ``` CASE foo WHEN 3 THEN 'foo is 3!' WHEN 4 THEN 'foo is 4!' ELSE 'foo is not 3 or 4' END ``` The other form, in the example at the top, lets you use arbitrary expressions in each `WHEN` clause. It should be noted that each `WHEN` clause is evaluated in order and the first one to match is the one whose `THEN` is used as the result. If none of the `WHEN`s match, then the result in the `ELSE` is used.
Return 'Yes' or No' from select statement?
[ "", "sql", "t-sql", "" ]
I have model `waypoint` with two attributes 1. coordinates 2. geocode\_text And model `route` that is no more than sorted array of waypoints with some additional text info. Some routes can have same waypoints, so I want to separate waypoints from routes. What is the best way to store waypoints inside route? I see several ways: * serialize waypoints ids to `waypoint_ids` attribute of `route`, but in such situation I will not be able to get route and all his waypoints in one SQL request, because waypoint ids will be hidden in serialized string. * create some third model that have such arguments 1. route\_id 2. waypoint\_id 3. position Connect routes and waypoints with many-to-many association and store position of waypoint in route in `position` attribute. But it seems to be over-complicated. What is the best way for me in such situation?
I'd opt for the second option -- use a join table (RouteWaypoints) with something like acts\_as\_list so that you can retrieve the Route with all it's Waypoints correctly sorted. This isn't overkill -- the DB is *really good* at this stuff.
Those several ways are both viable in your situation. So, to help you, I would just give you the pros and cons of serialization. Then, you'l be able to make a choice * First, freshly new grad, I was taught to not break the [1NF](http://en.wikipedia.org/wiki/First_normal_form). If you choose serialization you'll have to forget using SQL aggregation functions on it (MIN, MAX, AVG). * If there is a large amount of associations between the two models, serialization might make read/write operations slower than a many-to-many associations. Also, you have to think about how often you are willing to update those data. Personally, I'd just use serialization in cases where I deal with smaller components in my application (user\_settings etc) and in which you do not want to read/write its info a lot. Here routes and waypoints seem to be quiet the core... Hope it helps..
How to store array in attribute of ActiveRecord model?
[ "", "mysql", "sql", "ruby-on-rails", "activerecord", "many-to-many", "" ]
Thanks you for looking.I am new to tsql and dont know how to proceed. I have a table with 10 different companies and 20 department for each(the departments are same for all the companies). I am trying to calculate percentage of expenses for each department and want an extra column 'Percentage' to be displayed in the result. please note that for every company the first department is totalcompexpenses which is just the total expenses of the company for all the department combined and dont need to calculate that and should be calculated from the next row. Is it possible to do this by using while loop or any other way instead of doing it manually for each one of them? ``` ID |Company_name| Department |Expenses | Percentage 1 |Company1 |TotalComp1Expenses |50000 | - 2 |Company1 |Department1 |4000 | ? 3 |Company1 |Department2 |8000 | ? 4 |Company1 |Department3 |8000 | ? 5 |Company1 |Department4 |7000 | ? 6 |Company1 |Department5 |10000 | ? ... 11 |Company2 |TotalComp2Expenses |100000 | - 12 |Company2 |Department1 |6000 | ? 13 |Company2 |Department2 |5000 | ? 15 |Company2 |Department3 |8000 | ? 15 |Company2 |Department4 |7000 | ? 16 |Company2 |Department5 |10000 | ? ... 21 |Company3 |TotalComp3Expenses |70000 | - 22 |Company3 |Department1 |2000 | ? 23 |Company3 |Department2 |7000 | ? 24 |Company3 |Department3 |9000 | ? 25 |Company3 |Department4 |8000 | ? 26 |Company3 |Department5 |10000 | ? ... ```
I think the clearest way is to use window functions. If you want the percentages based on the `Total%` columns, then you can do it as: ``` select ID, Company_name, Department, Expenses, (100.0* Expenses / max(case when Department like 'Total%Expenses' then Expenses end) over (partition by Company_Name) ) as Percentage from t; ``` You can also do this as a sum of the non-Total expenses: ``` select ID, Company_name, Department, Expenses, (100.0* Expenses / max(case when Department not like 'Total%Expenses' then Expenses end) over (partition by Company_Name) ) as Percentage from t; ``` The window function is like an aggregation function, but without the aggregation. The sum for each group is added as an additional column on each row. The definition of the grouping is based on the `partition by` clause.
Add this column to the query Expenses \* 200.0 / SUM(expenses) over (partition by company\_name) as PercentageExepenses You have to multiply expenses by 200.0 to take into account that you already have the total for the company and therefore double count.
how to find percentage without calculating manually?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "plsql", "" ]
Suddenly, all of sql server requests showing "System.ComponentModel.Win32Exception: The wait operation timed out". What is the quickest way to find the issue? ``` Stack Trace: [Win32Exception (0x80004005): The wait operation timed out] [SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +1767866 System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +5352418 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +244 System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +1691 System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() +61 System.Data.SqlClient.SqlDataReader.get_MetaData() +90 System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +365 System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds) +1406 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite) +177 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +53 System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +134 System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior 
behavior) +41 System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) +10 System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +140 System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +316 System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable) +86 System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) +1481 System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback) +21 ``` I got the SQl that is causing the blocking issue by, <http://www.sqlskills.com/blogs/paul/script-open-transactions-with-text-and-plans/>
Here is how I was able to find the issue. First, check all open transactions in your database: ``` DBCC OPENTRAN ('Database') ``` If there is an open transaction, grab its SPID and put it inside INPUTBUFFER ``` DBCC INPUTBUFFER (58) ``` This will give you the actual SQL. If you want, you can kill this transaction, ``` KILL 58 ``` BTW, in my application it is acceptable to read uncommitted data, so I can use ``` SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ``` Or ``` Select * from Products WITH (NOLOCK) ``` Here is another way to find the SQL quickly, ``` SELECT [s_tst].[session_id], [s_es].[login_name] AS [Login Name], DB_NAME (s_tdt.database_id) AS [Database], [s_tdt].[database_transaction_begin_time] AS [Begin Time], [s_tdt].[database_transaction_log_bytes_used] AS [Log Bytes], [s_tdt].[database_transaction_log_bytes_reserved] AS [Log Rsvd], [s_est].text AS [Last T-SQL Text], [s_eqp].[query_plan] AS [Last Plan] FROM sys.dm_tran_database_transactions [s_tdt] JOIN sys.dm_tran_session_transactions [s_tst] ON [s_tst].[transaction_id] = [s_tdt].[transaction_id] JOIN sys.[dm_exec_sessions] [s_es] ON [s_es].[session_id] = [s_tst].[session_id] JOIN sys.dm_exec_connections [s_ec] ON [s_ec].[session_id] = [s_tst].[session_id] LEFT OUTER JOIN sys.dm_exec_requests [s_er] ON [s_er].[session_id] = [s_tst].[session_id] CROSS APPLY sys.dm_exec_sql_text ([s_ec].[most_recent_sql_handle]) AS [s_est] OUTER APPLY sys.dm_exec_query_plan ([s_er].[plan_handle]) AS [s_eqp] ORDER BY [Begin Time] ASC; GO ``` <http://www.sqlskills.com/blogs/paul/script-open-transactions-with-text-and-plans/>
Try to execute this command: ``` exec sp_updatestats ```
Currently, all my SQL requests are showing "System.ComponentModel.Win32Exception: The wait operation timed out"
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I have a column of type nvarchar(50) in a table in my database; I want to write the string 'Tal' (Tal between 2 apostrophes) to that specific column. When I try to do so, what is recorded in my DB is "Tal" (Tal between 2 quotation marks). My database is a SQL database and so are my scripts. How can this be solved?
The standard way to do what you want is this. ``` insert into mytable ( mycolumn ) values ('''Tal'''); ``` The first and last `'` are the start and end markers for the string. Each `''` within these characters means `'`. Refer to page 89 of the SQL 92 specification at <http://www.andrew.cmu.edu/user/shadow/sql/sql1992.txt>
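As a minimal sketch of the doubled-apostrophe rule described above (the table and column names here are hypothetical), this shows what gets stored:

```sql
-- hypothetical table for illustration
CREATE TABLE mytable (mycolumn nvarchar(50));

-- each '' inside a string literal stands for a single apostrophe
INSERT INTO mytable (mycolumn) VALUES ('''Tal''');

-- the stored value is 'Tal', i.e. Tal wrapped in apostrophes
SELECT mycolumn FROM mytable;
```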
I think escaping is the key to your question. For SQL, apostrophes are special characters, so they have to be escaped as '' (two apostrophes). Have you checked that your scripts do not add the second apostrophe for you? Probably you have to add *Tal* without the apostrophes. Escaping % seems to be DB dependent: Oracle uses \%, others accept [%], and some have an ESCAPE keyword. Read the documentation of your database and look for "escape characters".
How to write a word wrapped in two apostrophes to a database?
[ "", "sql", "database", "" ]
I have a table called tblAccounts whose contents will come from an Excel spreadsheet. I am using MS SQL Server 2008 (x64) on Windows 8.1 (x64). I tried using the SQL Server Import/Export Wizard, but there is no option to choose an existing table, only an option to create a new one. I tried using other methods such as OPENROWSET ``` INSERT INTO tblAccount SELECT * FROM OPENROWSET( 'Microsoft.Jet.OLEDB.4.0', 'Excel 12.0;Database=D:\exceloutp.xls','SELECT * FROM [Sheet1$]') ``` but it gave me an error: > Msg 7308, Level 16, State 1, Line 1 > OLE DB provider 'Microsoft.Jet.OLEDB.4.0' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode. Some research told me that it occurred because of a 64-bit instance of SQL Server. The problem is that this Excel data transfer to a SQL table must be accomplished using the SQL Import/Export Wizard only. **How can I import an Excel spreadsheet to an existing SQL table without creating a new one?** Some links I visited that were not able to help me resolve my problem: * [How do I import an excel spreadsheet into SQL Server?](https://stackoverflow.com/questions/472638/how-do-i-import-an-excel-spreadsheet-into-sql-server) * [Fix OLE DB error](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/94332e6b-a283-4c8c-b6bb-eb1345c99909/ole-db-provider-microsoftjetoledb40-cannot-be-used-for-distributed-queries-because-the)
Saudate, I ran across this looking for a different problem. You most definitely can use the SQL Server Import wizard to import data into a new table. Of course, you do not wish to leave that table in the database, so my suggestion is that you import into a new table, then script the data in query manager to insert into the existing table. You can add a line to drop the temp table created by the import wizard as the last step upon successful completion of the script. I believe your original issue is in fact related to SQL Server 64-bit and is due to your having 32-bit Excel; these drivers don't play well together. I did run into a very similar issue when first using 64-bit Excel.
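The import-then-script approach above can be sketched as follows, assuming the wizard created a staging table named `tblAccount_Import` and treating the column names shown as placeholders for your actual schema:

```sql
-- copy the staged rows into the existing table
INSERT INTO tblAccount (AccountNo, AccountName)   -- hypothetical column list
SELECT AccountNo, AccountName
FROM tblAccount_Import;

-- last step: drop the table created by the Import/Export Wizard
DROP TABLE tblAccount_Import;
```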
You can copy-paste data from an Excel sheet to a SQL table by doing the following: * Select the data in Excel and press Ctrl + C * In SQL Server Management Studio right click the table and choose Edit Top 200 Rows * Scroll to the bottom and select the entire empty row by clicking on the row header * Paste the data by pressing Ctrl + V Note: Often tables have a first column which is an ID column with an auto generated/incremented ID. When you paste your data it will start inserting the leftmost selected column in Excel into the leftmost column in SSMS, thus inserting data into the ID column. To avoid that, keep an empty column at the leftmost part of your selection in order to skip that column in SSMS. That will result in SSMS inserting the default data, which is the auto-generated ID. Furthermore, you can skip other columns by having empty columns at the same ordinal positions in the Excel sheet selection as those columns to be skipped. That will make SSMS insert the default value (or NULL where no default value is specified).
Import Excel Spreadsheet Data to an EXISTING sql table?
[ "", "sql", "sql-server", "excel", "sql-server-2008", "import", "" ]
I am using SQL Server 2008 R2. I have a table with a column that has a NOT NULL constraint. Now, what if I want to check whether a NOT NULL constraint is defined for a specific column? Is there a query to find that out? Thanks in advance.
``` SELECT * FROM INFORMATION_SCHEMA.COLUMNS ``` This query will show all columns from all tables and a whole host of information on them. The column you would want is: `IS_NULLABLE` that can have the value 'YES' or 'NO' [COLUMNS (Transact-SQL)](https://learn.microsoft.com/en-us/sql/relational-databases/system-information-schema-views/columns-transact-sql?view=sql-server-2017)
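To narrow this down to one specific column, a sketch (substitute your own table and column names):

```sql
SELECT TABLE_NAME, COLUMN_NAME, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourTable'      -- hypothetical table name
  AND COLUMN_NAME = 'YourColumn';   -- IS_NULLABLE = 'NO' means NOT NULL is defined
```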
Something like ``` SELECT o.name AS tab, c.name AS col, c.is_nullable FROM sys.objects o INNER JOIN sys.columns c ON c.object_id = o.object_id WHERE o.name like '%yourtable%' and type = 'U' ``` See [sys.columns](http://msdn.microsoft.com/en-us/library/ms176106.aspx) and [sys.objects](http://msdn.microsoft.com/en-us/library/ms190324.aspx)
How to check if a column has not null constraint?
[ "", "sql", "sql-server-2008", "sql-server-2008-r2", "" ]
So I have a small amount of SQL below. ``` SELECT DISTINCT Id, Date, Name_Detail, (SELECT CASE WHEN (Name_Detail IN ('a1','a2') and Name_Detail IN ('d1','d2')) THEN 'AUX' ELSE (CASE WHEN Name_Detail IN ('d1','d2') THEN 'DIN' ELSE (CASE WHEN Name_Detail IN ('a1','a2') THEN 'AUX' ELSE 'NO' END) END) END) AS [Alias] ``` This gives the following results. ``` Id Date Name_Detail Alias 1 01/04/2013 d1 Din 1 01/04/2013 d2 Din 2 02/09/2013 a1 Aux 2 02/09/2013 a2 Aux 2 02/09/2013 d1 Din 2 02/09/2013 d2 Din ``` If I remove the Name\_Detail part it then gives the following. ``` Id Date Alias 1 01/04/2013 Din 2 02/09/2013 Aux 2 02/09/2013 Din ``` This still, however, gives duplicate lines for the AUX and the DIN for ID 2. What I want it to follow is... If the Name\_Detail contain(s) a1 and(or) a2 its Alias is AUX. If the Name\_Detail contain(s) d1 and(or) d2 its Alias is DIN. If the Name\_Detail contain(s) a1 and(or) a2 AND d1 and(or) d2 its Alias is AUX. So the above data should result in 2 lines: id 1 has d1 and d2, so it shows the Alias as DIN, and id 2 has d1,d2,a1,a2, so it shows the Alias as AUX. ``` Id Date Alias 1 01/04/2013 Din 2 02/09/2013 Aux ```
For these exact conditions with fixed values (a1,a2,d1,d2) here is a query: `SQLFiddle demo` ``` select ID, max(DATE), CASE WHEN MIN(Name_Detail)in('a1','a2') and MAX(Name_Detail)in('d1','d2') THEN 'AUX' WHEN MIN(Name_Detail)in('d1','d2') THEN 'DIN' WHEN MAX(Name_Detail)in('a1','a2') THEN 'AUX' ELSE 'NO' END as Alias FROM T GROUP BY ID ``` In general you should use a subquery to check all records with the same ID throughout the table. In your case the first condition is never TRUE, because you check only ONE current record. `SQLFiddle demo` ``` select DISTINCT ID, DATE, CASE WHEN EXISTS(SELECT ID FROM T WHERE ID=T1.ID AND Name_Detail in('a1','a2')) and EXISTS(SELECT ID FROM T WHERE ID=T1.ID AND Name_Detail in('d1','d2')) THEN 'AUX' WHEN EXISTS(SELECT ID FROM T WHERE ID=T1.ID AND Name_Detail in('d1','d2')) THEN 'DIN' WHEN EXISTS(SELECT ID FROM T WHERE ID=T1.ID AND Name_Detail in('a1','a2')) THEN 'AUX' ELSE 'NO' END as Alias FROM T as T1 ```
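Another common pattern for this kind of per-group test is conditional aggregation; a hedged sketch against the same table `T` (it checks only whether each group contains any `a1`/`a2` or `d1`/`d2` row, so AUX wins whenever both kinds are present, matching the stated rules):

```sql
SELECT Id,
       MAX(Date) AS Date,
       CASE WHEN MAX(CASE WHEN Name_Detail IN ('a1','a2') THEN 1 ELSE 0 END) = 1
            THEN 'AUX'
            WHEN MAX(CASE WHEN Name_Detail IN ('d1','d2') THEN 1 ELSE 0 END) = 1
            THEN 'DIN'
            ELSE 'NO'
       END AS Alias
FROM T
GROUP BY Id;
```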
I am not sure what you are trying to do; your third rule contradicts the first and second rules. Here I have the modified query. ``` SELECT DISTINCT Id, Date, Name_Detail, (SELECT CASE WHEN (Name_Detail IN ('a1','a2') and Name_Detail IN ('d1','d2')) THEN 'AUX' ELSE (CASE WHEN Name_Detail IN ('d1','d2') THEN 'DIN' ELSE (CASE WHEN Name_Detail IN ('a1','a2') and Name_Detail IN ('d1','d2') THEN 'AUX' ELSE 'NO' END) END) END) AS [Alias] ```
Selecting a specific value from multiple unique rows when a single value is different
[ "", "sql", "select", "duplicates", "unique", "rows", "" ]
Each time I want to process 5000 records, as below. The first time I want to process rows 1 to 5000, the second time rows 5001 to 10000, the third time rows 10001 to 15000, and so on. I don't want to go for a procedure or PL/SQL; I will change the rnum values in my code to fetch each batch of 5000 records. The given query takes 3 minutes to fetch the records from the 3 joined tables. How can I reduce the time to fetch the records? ``` select * from ( SELECT to_number(AA.MARK_ID) as MARK_ID, AA.SUPP_ID as supplier_id, CC.supp_nm as SUPPLIER_NAME, CC.supp_typ as supplier_type, CC.supp_lock_typ as supplier_lock_type, ROW_NUMBER() OVER (ORDER BY AA.MARK_ID) as rnum from TABLE_A AA, TABLE_B BB, TABLE_C CC WHERE AA.MARK_ID=BB.MARK_ID AND AA.SUPP_ID=CC.location_id AND AA.char_id='160' AND BB.VALUE_KEY=AA.VALUE_KEY AND BB.VALUE_KEY=CC.VALUE_KEY AND AA.VPR_ID IS NOT NULL) where rnum >=10001 and rnum<=15000; ``` I have tried the below scenarios, but no luck. > I have tried the /\*+ USE\_NL(AA BB) \*/ hint. > I used EXISTS in the WHERE conditions. But it's taking the same 3 minutes to fetch the records. Below are the table details. ``` select count(*) from TABLE_B; ----------------- 2275 select count(*) from TABLE_A; ----------------- 2405276 select count(*) from TABLE_C; ----------------- 1269767 ``` The total record count of my inner query is ``` SELECT count(*) from TABLE_A AA, TABLE_B BB, TABLE_C CC WHERE AA.MARK_ID=BB.MARK_ID AND AA.SUPP_ID=CC.location_id AND AA.char_id='160' AND BB.VALUE_KEY=AA.VALUE_KEY AND BB.VALUE_KEY=CC.VALUE_KEY AND AA.VPR_ID IS NOT NULL; ----------------- 2027055 ``` All the columns used in the WHERE conditions are indexed properly. The explain plan for the given query is...
Plan hash value: 3726328503 ``` ------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 2082K| 182M| | 85175 (1)| 00:17:03 | |* 1 | VIEW | | 2082K| 182M| | 85175 (1)| 00:17:03 | |* 2 | WINDOW SORT PUSHED RANK | | 2082K| 166M| 200M| 85175 (1)| 00:17:03 | |* 3 | HASH JOIN | | 2082K| 166M| | 44550 (1)| 00:08:55 | | 4 | TABLE ACCESS FULL | TABLE_C | 1640 | 49200 | | 22 (0)| 00:00:01 | |* 5 | HASH JOIN | | 2082K| 107M| 27M| 44516 (1)| 00:08:55 | |* 6 | VIEW | index$_join$_005 | 1274K| 13M| | 9790 (1)| 00:01:58 | |* 7 | HASH JOIN | | | | | | | | 8 | INLIST ITERATOR | | | | | | | |* 9 | INDEX RANGE SCAN | TABLE_B_IN2 | 1274K| 13M| | 2371 (2)| 00:00:29 | | 10 | INDEX FAST FULL SCAN| TABLE_B_IU1 | 1274K| 13M| | 4801 (1)| 00:00:58 | |* 11 | TABLE ACCESS FULL | TABLE_A | 2356K| 96M| | 27174 (1)| 00:05:27 | ------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter("RNUM">=10001 AND "RNUM"<=15000) 2 - filter(ROW_NUMBER() OVER ( ORDER BY "A"."MARK_ID")<=15000) 3 - access("A"."SUPP_ID"="C"."LOC_ID" AND "A"."VALUE_KEY"="C"."VALUE_KEY") 5 - access("A"."MARK_ID"="A"."MARK_ID" AND "A"."VALUE_KEY"="A"."VALUE_KEY") 6 - filter("A"."MARK_CHN_IND"='C' OR "A"."MARK_CHN_IND"='D') 7 - access(ROWID=ROWID) 9 - access("A"."MARK_CHN_IND"='C' OR "A"."MARK_CHN_IND"='D') 11 - filter("A"."CHNL_ID"=160 AND "A"."VPR_ID" IS NOT NULL) ``` Could anyone please help me tune this query? I have been trying for the last 2 days.
Each query will take a long time because each query has to join and then sort all rows. The `row_number` analytic function can only return a result once the whole set has been read. This is highly inefficient. If the data set is large, you only want to sort and hash-join once. You should fetch the whole set once, using batches of 5k rows. Alternatively, if you want to keep your existing code logic, you could store the result in a temporary table, for instance: ``` CREATE TABLE TMP AS <your above query> CREATE INDEX tmp_rnum_idx ON TMP (rnum) ``` And then replace the query in your code by ``` SELECT * FROM TMP WHERE rnum BETWEEN :x AND :y ``` Obviously if your temp table is being reused periodically, just create it once and delete when done (or use a true [temporary table](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_7002.htm#SQLRF54447)).
How many unique MARK\_ID values have you got in TABLE\_A? I think you may get better performance if you limit the fetched ranges of records by MARK\_ID instead of the artificial row number, because the latter is obviously not sargable. Granted, you may not get exactly 5000 rows in each range, but I have a feeling that's not as important as the query performance.
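A keyset-style sketch of that suggestion (Oracle syntax; `:last_mark_id` starts at 0 and is set from the last row of each batch; note that if MARK_ID is not unique, a batch boundary can split rows sharing the same MARK_ID):

```sql
SELECT *
FROM (SELECT AA.MARK_ID, AA.SUPP_ID, CC.supp_nm, CC.supp_typ
      FROM TABLE_A AA, TABLE_B BB, TABLE_C CC
      WHERE AA.MARK_ID = BB.MARK_ID
        AND AA.SUPP_ID = CC.location_id
        AND AA.char_id = '160'
        AND BB.VALUE_KEY = AA.VALUE_KEY
        AND BB.VALUE_KEY = CC.VALUE_KEY
        AND AA.VPR_ID IS NOT NULL
        AND AA.MARK_ID > :last_mark_id   -- resume after the previous batch
      ORDER BY AA.MARK_ID)
WHERE ROWNUM <= 5000;
```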
Processing 5000 records at a time from a select query is taking a long time in an Oracle database
[ "", "sql", "oracle", "oracle10g", "oracle-sqldeveloper", "database-performance", "" ]
This query is returning more than one row: ``` SELECT T.Title FROM Titles T WHERE T.ArtistID = (SELECT A.ArtistID FROM Artists A WHERE A.Country = "USA"); ``` Is it because it is ambiguous?
No, it's because there is more than one artist in the USA. What do you want? All the titles by artists in the USA? Or all the titles for one specific artist? If you want all the titles by artists in the USA, ``` Select title from Titles Where ArtistId In (Select ArtistID From artists where Country = 'USA') ``` or ``` Select title from Titles t join Artists a On a.ArtistId = t.ArtistId Where a.Country = 'USA' ``` If you want the titles for one specific artist, you need to specify which specific artist you want the titles for...
It means that you have more than one artist with country = 'USA'. You can fix it like this to return just one title from any artist from the USA ``` SELECT T.Title FROM Titles T WHERE T.ArtistID = (SELECT TOP 1 A.ArtistID FROM Artists A WHERE A.Country = 'USA'); ``` If you want all titles by artists from the USA, use a JOIN ``` SELECT T.Title FROM Titles T JOIN Artists A ON T.ArtistID = A.ArtistID WHERE A.Country = 'USA' ``` or ``` SELECT T.Title FROM Titles T WHERE T.ArtistID in (SELECT A.ArtistID FROM Artists A WHERE A.Country = 'USA'); ```
Subquery returns more than 1 row SQL
[ "", "sql", "" ]
In my database in table\_a every row has a date\_created like "2011-04-17" Now some of these dates are in the past, but my question is how can I retrieve the latest date that has not yet passed?
Try this one ``` SELECT * FROM table_a WHERE CURDATE() <= date_created ``` [**CURDATE()**](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_current-date) > Returns the current date as a value in 'YYYY-MM-DD' or YYYYMMDD > format, depending on whether the function is used in a string or > numeric context.
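If only a single row is wanted (for example the nearest upcoming date), adding an explicit ordering makes the choice deterministic; a sketch (swap ASC for DESC to get the furthest-away future date instead):

```sql
SELECT *
FROM table_a
WHERE date_created >= CURDATE()
ORDER BY date_created ASC
LIMIT 1;
```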
If date\_created is a date datatype then you can use ``` SELECT * FROM table_a WHERE date_created >= NOW() LIMIT 1 ```
MYSQL get last date from table that has not yet passed
[ "", "mysql", "sql", "date", "" ]
I am using Postgres 9.3 on MacOSX. I am wondering how I can return multiple values (depending on certain criteria) and use them to populate a column in a list/array-like manner. ``` --DUMMY DATA CREATE TABLE tbl ( id VARCHAR(2) PRIMARY KEY ,name TEXT ,year_born NUMERIC ,nationality TEXT ); INSERT INTO tbl(id, name, year_born, nationality) VALUES ('A1','Bill',2001,'American') ,('B1','Anna',1997,'Swedish') ,('A2','Bill',1991,'American') ,('B2','Anna',2004,'Swedish') ,('B3','Anna',1989,'Swedish') ,('A3','Bill',1995,'American'); SELECT * FROM tbl; id | name | year_born | nationality ---+------+-----------+------------ A1 | Bill | 2001 | American B1 | Anna | 1997 | Swedish A2 | Bill | 1991 | American B2 | Anna | 2004 | Swedish B3 | Anna | 1989 | Swedish A3 | Bill | 1995 | American ``` I collapse the rows over the columns `name, nationality` by using the `SELECT DISTINCT ON` clause, as in the code below ``` CREATE TABLE another_tbl ( name TEXT, nationality TEXT, ids VARCHAR ); CREATE FUNCTION f1() RETURNS SETOF another_tbl AS $$ SELECT DISTINCT ON (name, nationality) name, nationality, id FROM tbl GROUP BY name, nationality, ID; $$ LANGUAGE sql SELECT * FROM f1(); name | nationality | ids ------+-------------+----- Anna | Swedish | B1 Bill | American | A1 ``` So, here is the thing which I do not know how to achieve, but which I reckon is fairly easy. I want the column `ids` to be populated by all the ids corresponding to the names in the `name` column, as seen below. Desired output: ``` SELECT * FROM f1(); name | nationality | ids ------+-------------+----- Anna | Swedish | B1, B2, B3 Bill | American | A1, A2, A3 ``` **Update** I found out about `ARRAY`, which I use together with the type `VARCHAR` for the column `ids` in `another_tbl`. However, I get a mismatch error saying `Final statement returns character varying instead of character varying[] at column 3`.
Use `GROUP BY` and the aggregate function [`string_agg()`](http://www.postgresql.org/docs/current/interactive/functions-aggregate.html) if you want a text column as result. Or [`array_agg()`](http://www.postgresql.org/docs/current/interactive/functions-aggregate.html) to construct an array. But drop the now redundant `DISTINCT ON`. ``` SELECT name, nationality, string_agg(id, ',') AS ids FROM tbl GROUP BY 1, 2 ORDER BY 1, 2; ``` The `RETURNS` clause of your function definition has to match, like @ozczecho suggested: ``` CREATE FUNCTION f1() RETURNS TABLE(name text, nationality text, ids text) AS -- varchar[] for array_agg() $func$ SELECT t.name, t.nationality, string_agg(t.id, ',') AS ids FROM tbl t GROUP BY 1, 2 ORDER BY 1, 2; $func$ LANGUAGE sql; ```
I believe you should change: ``` RETURNS SETOF another_tbl ``` to: ``` RETURNS TABLE(name TEXT, nationality TEXT, ids VARCHAR[]) ```
Return multiple values and populate column in array like manner
[ "", "sql", "list", "postgresql", "distinct", "aggregate-functions", "" ]
I get an error when I execute the following SQL query. ``` SELECT RTRIM(name) AS [Segment Name], growth,groupid AS [Group Id], filename AS [File Name], CAST(size/128.0 AS DECIMAL(10,2)) AS [Size in MB], CAST(FILEPROPERTY(name, 'SpaceUsed')/128.0 AS DECIMAL(10,2)) AS [Space Used], CAST(size/128.0-(FILEPROPERTY(name, 'SpaceUsed')/128.0) AS DECIMAL(10,2)) AS [Available Space], CAST((CAST(FILEPROPERTY(name, 'SpaceUsed')/128.0 AS DECIMAL(10,2))/CAST(size/128.0 AS DECIMAL(10,2)))*100 AS DECIMAL(10,2)) AS [Percent_Used] FROM sysfiles WHERE growth = 0 AND Percent_Used > 60 ORDER BY groupid DESC ``` The error says: > Msg 207, Level 16, State 1, Line 7 > Invalid column name 'Percent\_Used'. Why do I get this error?
In the logical processing order of a query, the WHERE clause is evaluated before SELECT, so an alias defined in the SELECT list does not exist yet when WHERE runs. To make it work you need to use the derived table concept. ``` 1. FROM 2. ON 3. OUTER 4. **WHERE** 5. GROUP BY 6. CUBE | ROLLUP 7. HAVING 8. **SELECT** ``` <http://blog.sqlauthority.com/2009/04/06/sql-server-logical-query-processing-phases-order-of-statement-execution/> ``` Select * from ( SELECT RTRIM(name) AS [Segment Name], growth,groupid AS [Group Id], filename AS [File Name], CAST(size/128.0 AS DECIMAL(10,2)) AS [Size in MB], CAST(FILEPROPERTY(name, 'SpaceUsed')/128.0 AS DECIMAL(10,2)) AS [Space Used], CAST(size/128.0-(FILEPROPERTY(name, 'SpaceUsed')/128.0) AS DECIMAL(10,2)) AS [Available Space], CAST((CAST(FILEPROPERTY(name, 'SpaceUsed')/128.0 AS DECIMAL(10,2))/CAST(size/128.0 AS DECIMAL(10,2)))*100 AS DECIMAL(10,2)) AS [Percent_Used] FROM sysfiles ) DT WHERE growth = 0 AND Percent_Used > 60 ORDER BY groupid DESC ```
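Equivalently, a common table expression can name the computed column before it is filtered; a trimmed sketch keeping only the columns the filter and ordering need:

```sql
WITH file_usage AS (
    SELECT RTRIM(name) AS [Segment Name],
           growth,
           groupid,
           CAST(FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size AS DECIMAL(10,2)) AS Percent_Used
    FROM sysfiles
)
SELECT *
FROM file_usage
WHERE growth = 0 AND Percent_Used > 60
ORDER BY groupid DESC;
```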
You are trying to use a column you have defined to filter in the `WHERE` clause. This column does not exist in the table `sysfiles` If you want you can wrap that `SELECT` in another `SELECT` and do the filtering on the latter `SELECT`. That way the column already "exists".
sql query - Invalid column name
[ "", "sql", "sql-server", "" ]
I'm creating a table with this command: ``` create table if not exists Event ( _id long INTEGER AUTO INCREMENT,repeatEvery long ,repeating int ,startTime long ,title text ,primary key (_id) ); ``` I also tried id without the \_ prefix. Then I fetch all rows with ``` database.query("Event",null , null, null, null, null, null); ``` and then I go over the results, printing the ID column: ``` if (c.moveToFirst()) { do { System.out.println(c.getLong(c.getColumnIndexOrThrow("_id"))); } while (c.moveToNext()); } ``` All of the results are zeros! Any ideas?
SQLITE by default adds an INTEGER AUTOINCREMENT for you. Search for ROWID. A column that replicates this functionality will just be created as an alias to ROWID. I did some more digging around and found out what some of the issues are. First of all, SQLITE is garbage. Here is the correct SQL statement you need to use. ``` CREATE TABLE Event (_id INTEGER PRIMARY KEY AUTOINCREMENT,repeatEvery long ,repeating int ,startTime long ,title text) ``` I issued the one in your code sample, and it had the type of the \_id column as "INTEGER AUTO INCREMENT". The SQLITE parser is very generous and accepted that as a datatype. It had no idea what to do with it, and just treated it as a blob, and none of the constraints worked. So, even though your DDL was incorrect, SQLITE was nice enough to take its best guess at what was wrong and didn't bother to let you know. :[ If you print the schema of your app's Event table, you should see the INTEGER AUTO INCREMENT issue. Another issue to watch out for: if you're writing your insert statements on your own, you need to write them in one of these two ways. ``` insert into Event (repeatEvery, repeating, startTime, title) values (1,2,3, "title"); ``` or ``` insert into Event values (NULL, 1,2,3, "title"); ``` the SQLITE Helper class should handle this for you.
From [the sqlite docs](http://www.sqlite.org/faq.html#q1) it looks like the definition should be changed a little. the definition should probably be: ``` create table if not exists Event ( _id INTEGER PRIMARY KEY, repeatEvery long, repeating int, startTime long, title text ); ```
Android Database SQLite AutoIncrement not incrementing and primary key constraint violated
[ "", "android", "sql", "database", "auto-increment", "" ]
``` (SELECT ID_OF, Col, BNT, SUM(size1) As size1, SUM(size2) As size2, SUM(size3) as size3, SUM(size4) as size4, SUM(size5) as size5, SUM(size6)as size6, SUM(size7) as size7, SUM(size8) as size8, SUM(size9) as size9, SUM(size10) as size10, SUM(Total) as Total,ref FROM tblTailleOFALL GROUP BY ID_OF, Col, BNT, ref) (SELECT ID_OF, Col, BNT, SUM(size1) As size1, SUM(size2) As size2, SUM(size3) as size3, SUM(size4) as size4, SUM(size5) as size5, SUM(size6)as size6, SUM(size7) as size7, SUM(size8) as size8, SUM(size9) as size9, SUM(size10) as size10, SUM(Total) as Total,ref FROM tblTailleALL GROUP BY ID_OF, Col, BNT, ref) ``` I did this query in SQL Server and get this result ``` Id_OF Col BNT size1 size2 size3 size4 size5 size6 size7 size8 size9 size10 Total ref --------- 37623 738 A 60 60 60 30 30 0 0 0 0 0 240 131380 ``` And This : ``` Id_OF Col BNT size1 size2 size3 size4 size5 size6 size7 size8 size9 size10 Total ref --------- 37623 738 A 60 60 60 30 28 0 0 0 0 0 238 131380 ``` How can I subtract these two result in my query! I should get this as result ``` Id_OF Col BNT size1 size2 size3 size4 size5 size6 size7 size8 size9 size10 Total ref --------- 37623 738 A 0 0 0 0 2 0 0 0 0 0 2 131380 ``` Thanks heaps
This looks really familiar... But how about this? ``` SELECT a.ID_OF, a.Col, a.BNT, SUM(a.size1) - SUM(b.size1) As size1, SUM(a.size2) - SUM(b.size2) As size2, SUM(a.size3) - SUM(b.size3) As size3, SUM(a.size4) - SUM(b.size4) As size4, SUM(a.size5) - SUM(b.size5) As size5, SUM(a.size6) - SUM(b.size6) As size6, SUM(a.size7) - SUM(b.size7) As size7, SUM(a.size8) - SUM(b.size8) As size8, SUM(a.size9) - SUM(b.size9) As size9, SUM(a.size10) - SUM(b.size10) As size10, SUM(a.total) - SUM(b.total) As total, a.ref FROM tblTailleOFALL a JOIN tblTailleALL b ON a.ID_OF = b.ID_OF AND a.Col = b.Col AND a.BNT = b.BNT GROUP BY a.ID_OF, a.Col, a.BNT, a.ref ```
A number of methods exist, depending on your actual logic, but for your given question, I'd say simply JOIN them together using either common table expressions, subqueries or temporary tables and then subtract the size column from the corresponding size column. Something like this: ``` ;WITH T1 AS ( SELECT ID_OF, Col, BNT, SUM(size1) As size1, SUM(size2) As size2, SUM(size3) as size3, SUM(size4) as size4, SUM(size5) as size5, SUM(size6)as size6, SUM(size7) as size7, SUM(size8) as size8, SUM(size9) as size9, SUM(size10) as size10, SUM(Total) as Total,ref FROM tblTailleOFALL GROUP BY ID_OF, Col, BNT, ref) , T2 AS (SELECT ID_OF, Col, BNT, SUM(size1) As size1, SUM(size2) As size2, SUM(size3) as size3, SUM(size4) as size4, SUM(size5) as size5, SUM(size6)as size6, SUM(size7) as size7, SUM(size8) as size8, SUM(size9) as size9, SUM(size10) as size10, SUM(Total) as Total,ref FROM tblTailleALL GROUP BY ID_OF, Col, BNT, ref) SELECT T1.size1 - t2.size1 ..... ..... ..... FROM T1 INNER JOIN T2 ON T1.ID_OF = T2.ID_OF ``` Use an outer join if they do not match 1 to 1 between the two queries (example just to illustrate) But as said - multiple ways exists of doing this. You could also make a single query - I suspect - and subtract in that as well before SUM and GROUP BY.
Sum and Difference of two SQL query result
[ "", "sql", "" ]
I'm trying to export a SQL Azure database to a `.bacpac` file using the Azure portal. The administrator **username** on my database contains a `*`. When I use it in the username field I get this error. ``` The login name must meet the following requirements: It must be a SQL Identifier. It cannot be a system name, for example: - admin, administrator, sa, root, dbmanager, loginmanager, etc. - Built-in database user or role like dbo, guest, public, etc. It cannot contain: - White space like spaces, tabs, or returns - Unicode characters - Nonalphabetic characters ("|:*?\/#&;,%=) It cannot begin with: - Digits (0 through 9) - @, $, + ``` So I add a new user to the database using the following tSQL. ``` USE master; CREATE LOGIN gu6t6rdb WITH PASSWORD = 'kjucuejcj753jc8j' USE MyActualDB; CREATE USER gu6t6rdb FOR LOGIN gu6t6rdb ``` The portal export form accepts that username but later errors with the following message. > Error encountered during the service operation. Could not extract > package from specified database. The reverse engineering operation > cannot continue because you do not have View Definition permission on > the 'MyActualDB' database. To fix this I tried the following tSQL ``` GRANT VIEW ANY DEFINITION TO gu6t6rdb ``` which throws the following error > Securable class 'server' not supported in this version of SQL Server How should I use tSQL to provide an additional user on my database and give the user sufficient privileges to export the database through the Azure portal to a `.bacpac` file in an Azure blobstore?
Got it. I can add the user to the `db_owner` role and then the export proceeds without error. ``` EXEC sp_addrolemember 'db_owner', 'gu6t6rdb' ```
This will not work on SQL Azure. You will need to grant view definition at the database level (without the ANY keyword): GRANT VIEW DEFINITION TO gu6t6rdb P.S.: I hit the exact same issue and this seemed to solve my problem. I also had to do a GRANT EXECUTE (but it depends on what your bacpac is applying to the database).
tSQL to set up user with View Definition permission on SQL Azure
[ "", "sql", "sql-server", "t-sql", "azure", "azure-sql-database", "" ]
I have a query like this: ``` SELECT initials, name FROM employee e, projects p WHERE e.country = p.country ``` Until now, both tables used an abbreviation for the country columns. Like `"SWE"` for Sweden and `"ITA"` for Italy. In the future, the employee table will use names for the country columns. Like "Sweden" and "Italy". Is it somehow possible to change my query so it can match abbreviations with names? Like `"SWE" = "Sweden"` and `"ITA" = "Italy"`. Thanks.
It would be better to have a separate country table and have the other tables reference it. ``` country table ------------- id name abbreviation ```
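A sketch of that idea (the names are illustrative; per the question, `employee.country` will hold full names while `projects.country` keeps abbreviations):

```sql
CREATE TABLE country (
    id           INT PRIMARY KEY,
    name         VARCHAR(100),   -- e.g. 'Sweden'
    abbreviation CHAR(3)         -- e.g. 'SWE'
);

SELECT e.initials, e.name
FROM employee e
JOIN country  c ON c.name    = e.country       -- names on the employee side
JOIN projects p ON p.country = c.abbreviation; -- abbreviations on the project side
```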
I'd say the best solution is creating a third table where you match the current abbreviation with the full country name. You can then join both tables on that. ``` CountryTable (countryAbbreviation, countryName) ``` The select would then be something like this: ``` SELECT initials, name FROM employee e JOIN CountryTable c ON c.countryName = e.country JOIN projects p ON p.country = c.countryAbbreviation ```
Modifying a SELECT query
[ "", "sql", "select", "" ]
I have two tables: a table message (holds the creator of a message and the message) **id - creatorId - msg** and a table message\_viewers (tells who can read the message msgId) **msgId - userId** If I create a message as user 1 and send it to user 2 and user 3, the tables will look like this: ``` tbl_message: 1 - 1 - 'message' tbl_message_viewers: 1 - 2 1 - 3 ``` What I want to do is fetch the messages that are between the users x1...xN (any number of users) AND ONLY the messages between them. (For example, if the users are 1, 2, and 3, I want the messages where the creator is 1, 2 or 3, and the viewers are 2,3 for creator = 1, 1 and 3 for creator = 2, and 1, 2 for creator = 3.) **I am not interested** in messages between 1 and 2, or 2 and 3, or 1 and 3, but only in messages **between the 3 people**. I tried different approaches, such as joining the two tables on message id, selecting the messages where creatorId IN (X,Y) and then taking only the rows where userId IN (X, Y) as well. Maybe something about grouping and counting the rows, but I could not figure out a way of doing this that worked. --- **EDIT: SQL Fiddle here** --- **<http://sqlfiddle.com/#!2/963c0/1>**
I think this might do what you want:

```
SELECT m.*
FROM message m
INNER JOIN message_viewers mv ON m.id = mv.msgId
WHERE m.creatorId IN (1, 2, 3)
  AND mv.userId IN (1, 2, 3)
  AND NOT EXISTS (
      SELECT 1
      FROM message_viewers mv2
      WHERE mv2.msgId = mv.msgId
        AND mv2.userId NOT IN (1, 2, 3)
  )
  AND mv.userId != m.creatorId;
```

The `IN` clauses restrict the creator and the viewers to the given users, and `mv.userId != m.creatorId` excludes the creator from the message\_viewers side (as shown in your requirements).

Edit: With the requirement that the message must go to *all* of the other ids among those 3, I came up with the following:

```
SELECT m.id, m.creatorId, m.message
FROM message m
INNER JOIN message_viewers mv ON m.id = mv.msgId
WHERE m.creatorId IN (1, 2, 3)
  AND mv.userId IN (1, 2, 3)
  AND mv.userId != m.creatorId
  AND NOT EXISTS (
      SELECT 1
      FROM message_viewers mv2
      WHERE mv2.msgId = mv.msgId
        AND mv2.userId NOT IN (1, 2, 3)
  )
GROUP BY 1,2,3
HAVING COUNT(*) = 2;
```

`sqlfiddle demo`
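As a sanity check, here is a small SQLite demo of the second query, with explicit column names in `GROUP BY` instead of MySQL-style ordinals. The messages are made up; note that `HAVING COUNT(*) = 2` is hardcoded to the group size minus one (the number of viewers other than the creator):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE message (id INTEGER, creatorId INTEGER, message TEXT);
CREATE TABLE message_viewers (msgId INTEGER, userId INTEGER);
-- msg 1: user 1 -> users 2 and 3 (exactly the group {1,2,3})
-- msg 2: user 1 -> user 2 only (too few viewers)
-- msg 3: user 2 -> users 1, 3 and 4 (includes an outsider)
INSERT INTO message VALUES
  (1, 1, 'to 2 and 3'), (2, 1, 'to 2 only'), (3, 2, 'to 1, 3 and 4');
INSERT INTO message_viewers VALUES (1, 2), (1, 3), (2, 2), (3, 1), (3, 3), (3, 4);
""")
rows = con.execute("""
    SELECT m.id, m.creatorId, m.message
    FROM message m
    JOIN message_viewers mv ON m.id = mv.msgId
    WHERE m.creatorId IN (1, 2, 3)
      AND mv.userId IN (1, 2, 3)
      AND mv.userId != m.creatorId
      AND NOT EXISTS (SELECT 1 FROM message_viewers mv2
                      WHERE mv2.msgId = mv.msgId AND mv2.userId NOT IN (1, 2, 3))
    GROUP BY m.id, m.creatorId, m.message
    HAVING COUNT(*) = 2
""").fetchall()
print(rows)  # [(1, 1, 'to 2 and 3')]
```

Only message 1 survives: message 2 fails the viewer count, and message 3 is killed by the `NOT EXISTS` because of viewer 4.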
Try this with a join and with the IN() clause:

```
SELECT *
FROM tbl_message m
JOIN tbl_message_viewers mv ON (m.id = mv.msgId)
WHERE m.creatorId IN (1,2,3)
  AND mv.userId IN (1,2,3)
```
MYSQL: Find all posts between certain users
[ "mysql", "sql" ]
Can anyone explain the below query for getting the 3 maximum salaries?

```
select distinct sal
from emp a
where 3 >= (select count(distinct sal)
            from emp b
            where a.sal <= b.sal)
order by a.sal desc;
```

Someone suggested the above query to me to get the 3 maximum salaries in a table. I didn't understand what is happening in this part of the query:

```
3 >= (select count(distinct sal) from emp b where a.sal <= b.sal);
```

Can anyone explain it? If there is any other way to get the same result, please advise me with the query.
```
empid   sal
===============
1       300
2       50
3       400
4       200
5       150
================

select distinct sal
from emp a                          --outer query
where 3 >= (select count(distinct sal)
            from emp b              --inner query (correlated)
            where a.sal <= b.sal)
order by a.sal desc;
```

The outer query iterates over the rows of `emp` `a` one by one, passing each salary to the correlated inner query. The inner query counts the distinct salaries in `emp` `b` that are greater than or equal to the current row's salary; a row is kept only when that count is at most 3, i.e. when its salary is one of the 3 highest. Let's walk through it:

1. For the 1st row (`1, 300`) the inner query counts the distinct salaries `>= 300`, which are `{300, 400}`, so it returns `2`. Since `3 >= 2`, `300` is selected.
2. For `2, 50` the salaries `>= 50` are all five distinct values, so the inner query returns `5`. `3 >= 5` is false, so `50` is filtered out.
3. For `400` the inner query returns `1` (only `400` itself), so `400` is selected.
4. For `200` it returns `3` (`{200, 300, 400}`), so `200` is selected.
5. For `150` it returns `4`, so `150` is filtered out.

Hence the result is `400`, `300`, `200`.
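The walkthrough can be verified mechanically; a minimal sketch running the same query against the sample data in SQLite (the query itself is portable SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (empid INTEGER, sal INTEGER);
INSERT INTO emp VALUES (1, 300), (2, 50), (3, 400), (4, 200), (5, 150);
""")
top3 = [r[0] for r in con.execute("""
    SELECT DISTINCT sal
    FROM emp a
    WHERE 3 >= (SELECT COUNT(DISTINCT sal) FROM emp b WHERE a.sal <= b.sal)
    ORDER BY sal DESC
""")]
print(top3)  # [400, 300, 200]
```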
It's a strange way to do this, but it will work. Basically, for each row of table emp, the subquery counts the number of distinct salaries that are greater than or equal to the given one:

```
select count(distinct sal)
from emp b
where a.sal <= b.sal
```

And if the number of such salaries is not bigger than three:

```
3 >= (select count(distinct sal)
      from emp b
      where a.sal <= b.sal)
```

then it's one of the three biggest salaries.

---

Well, the easiest way would be something like this:

```
SELECT RES.SAL
FROM (SELECT DISTINCT SAL
      FROM EMP
      ORDER BY 1 DESC) RES
WHERE ROWNUM <= 3
```
Explanation of the query for getting the 3 maximum salaries
[ "sql", "oracle" ]
I have data in the following format: ![enter image description here](https://i.stack.imgur.com/I5nWR.png) I need to pivot this to get the data as follows. ![enter image description here](https://i.stack.imgur.com/mtvlH.png) Please help!!!
The solution I got is a bit tricky but very dynamic. First unpivot the table and put the data in a temp table; after that, the column names for the pivot are collected into the **@cols** variable. At the end a dynamic SQL string pivots the temp table that contains the data, so even if a new student gets added to the table, his two columns will be generated in the end result.

```
select test, col + ' ' + Student stu_col, value
INTO #temp
from Marks
unpivot(value for col in (english, maths)) unpiv

DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT distinct ',' + QUOTENAME(stu_col)
                      from #temp
                      order by 1
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                     ,1,1,'')

set @query = 'SELECT test, ' + @cols + ' from
             (
                 select test, Value, stu_col
                 from #temp
             ) x
             pivot
             (
                 SUM(Value)
                 for stu_col in (' + @cols + ')
             ) p '

exec(@query)

DROP TABLE #temp
```
Something like this. In this case you need to have a fixed list of names.

```
SELECT
 SUM(CASE WHEN Student='Mike' THEN [English Mark] ELSE 0 END) as [Mike English Mark],
 SUM(CASE WHEN Student='Mike' THEN [Maths Mark] ELSE 0 END) as [Mike Maths Mark],
 SUM(CASE WHEN Student='Fisher' THEN [English Mark] ELSE 0 END) as [Fisher English Mark],
 SUM(CASE WHEN Student='Fisher' THEN [Maths Mark] ELSE 0 END) as [Fisher Maths Mark],
 SUM(CASE WHEN Student='John' THEN [English Mark] ELSE 0 END) as [John English Mark],
 SUM(CASE WHEN Student='John' THEN [Maths Mark] ELSE 0 END) as [John Maths Mark],
 [TestName]
FROM Table1
GROUP BY [TestName]
```
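For a quick feel of this conditional-aggregation approach, here is a reduced SQLite sketch; the table and column names are invented, since the real names are only visible in the screenshots:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE marks (test TEXT, student TEXT, english INTEGER, maths INTEGER);
INSERT INTO marks VALUES ('Test1', 'Mike', 70, 80), ('Test1', 'Fisher', 60, 90);
""")
# Each SUM(CASE ...) picks out one student's column, turning rows into columns.
row = con.execute("""
    SELECT test,
           SUM(CASE WHEN student = 'Mike'   THEN english ELSE 0 END) AS mike_english,
           SUM(CASE WHEN student = 'Mike'   THEN maths   ELSE 0 END) AS mike_maths,
           SUM(CASE WHEN student = 'Fisher' THEN english ELSE 0 END) AS fisher_english,
           SUM(CASE WHEN student = 'Fisher' THEN maths   ELSE 0 END) AS fisher_maths
    FROM marks
    GROUP BY test
""").fetchone()
print(row)  # ('Test1', 70, 80, 60, 90)
```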
Query to pivot in SQL
[ "sql", "sql-server", "pivot" ]
I have a mysql user whom I want to grant all the READ permissions on a db schema. One way is this:

```
GRANT SELECT, SHOW VIEW ON test.* TO 'readuser'@'%';
```

Is there a way to group all read operations in a single grant?
> If there is any single privilege that stands for ALL READ operations on database. It depends on how you define "all read." "Reading" from tables and views is the `SELECT` privilege. If that's what you mean by "all read" then yes: ``` GRANT SELECT ON *.* TO 'username'@'host_or_wildcard' IDENTIFIED BY 'password'; ``` However, it sounds like you mean an ability to "see" everything, to "look but not touch." So, here are the other kinds of reading that come to mind: "Reading" the definition of views is the `SHOW VIEW` privilege. "Reading" the list of currently-executing queries by other users is the `PROCESS` privilege. "Reading" the current replication state is the `REPLICATION CLIENT` privilege. Note that any or all of these might expose more information than you intend to expose, depending on the nature of the user in question. If that's the reading you want to do, you can combine any of those (or any other of [the available privileges](http://dev.mysql.com/doc/refman/5.6/en/privileges-provided.html)) in a single `GRANT` statement. ``` GRANT SELECT, SHOW VIEW, PROCESS, REPLICATION CLIENT ON *.* TO ... ``` However, there is no single privilege that grants some subset of other privileges, which is what it sounds like you are asking. 
If you are doing things manually and looking for an easier way to go about this without needing to remember the exact grant you typically make for a certain class of user, you can look up the statement to regenerate a comparable user's grants, and change it around to create a new user with similar privileges:

```
mysql> SHOW GRANTS FOR 'not_leet'@'localhost';
+------------------------------------------------------------------------------------------------------------------------------------+
| Grants for not_leet@localhost                                                                                                      |
+------------------------------------------------------------------------------------------------------------------------------------+
| GRANT SELECT, REPLICATION CLIENT ON *.* TO 'not_leet'@'localhost' IDENTIFIED BY PASSWORD '*xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' |
+------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
```

Changing 'not\_leet' and 'localhost' to match the new user you want to add, along with the password, will result in a reusable `GRANT` statement to create a new user. Or, if you want a single operation to set up and grant the limited set of privileges to users, and perhaps remove any unmerited privileges, that can be done by creating a stored procedure that encapsulates everything that you want to do. Within the body of the procedure, you'd build the `GRANT` statement with dynamic SQL and/or directly manipulate the grant tables themselves.
In [this recent question on Database Administrators](https://dba.stackexchange.com/questions/51500), the poster wanted the ability for an unprivileged user to modify other users, which of course is not something that can normally be done -- a user that can modify other users is, pretty much by definition, not an unprivileged user -- however -- stored procedures provided a good solution in that case, because they run with the security context of their `DEFINER` user, allowing anybody with `EXECUTE` privilege on the procedure to temporarily assume escalated privileges to allow them to do the specific things the procedure accomplishes.
``` GRANT SELECT ON *.* TO 'user'@'localhost' IDENTIFIED BY 'password'; ``` This will create a user with `SELECT` privilege for all database including Views.
mysql - How to grant read only permissions to a user?
[ "mysql", "sql", "security", "permissions", "readonly" ]
--- I am working on a school project in which I have to upload data onto a database hosted on phpMyAdmin. What's confusing me right now is that I see "localhost" at the top of the page and before I even query the database I see "Run SQL query/queries on server "localhost":" Does this mean that I'm hosting a server on my computer and accessing the database through that? Because then I query for "SHOW VARIABLES WHERE Variable\_name = 'hostname';" and it returns with hostname = webhost330. (It is hosted on webhost330.asu.edu). I'm extremely confused about what this means. Thanks for any possible help. I'm very new to databases so forgive me if I'm missing something simple here. EDIT: To clarify: I'm not at school, I'm at my house. What kind of implications does this have? In other words, what's the difference between this and hosting a database locally? When I host a database on my computer through the MySQL command line client, I can create database information and it's stored on my computer in files. Does this mean that it's initially stored on localhost and then that data is used to create files on my computer? And then the equivalent of that for the webhost330 is that the localhost stores the database initially but then uploads it to the host at phpMyAdmin? That is the primary thing I'm confused about.
This means exactly what you assumed. When connected to `localhost`, you are connected to the local MySQL server on the same machine. Using `webhost330....` you are connected to that remote MySQL server instance, if that is not the name of your own machine. Your own server could itself be named `webhost330.etc.etc`.

**Edit**

If your website is hosted here: `webhost330.asu.edu`, then the MySQL host being either `webhost330.asu.edu` or `localhost` means the same ***local*** MySQL server on that very machine.

**Edit based on your update**

If you are at home, then `localhost` means you are connected to a MySQL server that you have installed on your own computer, and `webhost330.etc.etc` means you are connected to the database you have at your school. Databases do allow remote connectivity, and if you are connected to school from home, that's a remote MySQL connection. To remove your confusion, you should use only `localhost` in your code. `localhost` at home will mean the development server, which is your home computer, and when you take the same code to `webhost330.etc.etc`, then `localhost` for that server will mean its own MySQL installation. So `localhost` will work everywhere, as long as you don't want your code to connect to a remote external MySQL database server.
To see what "localhost" refers to in terms of MySQL server, you have to look at the URL you are using to connect to the web server. For example, if the URL is ``` http://localhost/phpmyadmin ``` and this instance of phpMyAdmin tells me that the MySQL server is on localhost, this means that the MySQL server is on my local workstation. If the URL is <http://example.com/phpmyadmin>, then localhost will mean that the MySQL server is on the same machine (example.com).
How exactly am I accessing the host/server through localhost on phpMyAdmin?
[ "mysql", "sql", "database", "phpmyadmin" ]
I have a query which returns records according to time, like:

```
+-------------+--------------+
| RenewalTime | RenewalCount |
+-------------+--------------+
| 1           | 2345         |
| 2           | 189          |
| 3           | 789          |
| 4           | 7676         |
| 5           | 9876         |
| 6           | 9762         |
+-------------+--------------+
```

but I want to show it like the following, where the data is grouped into 5-minute intervals:

```
+-------------+--------------+
| RenewalTime | RenewalCount |
+-------------+--------------+
| 0-5         | 2345         |
| 5-10        | 189          |
+-------------+--------------+
```

```
Select Round((Cast(i.Modification_Date As Date) - Cast(Refill_Date As Date)) * 24 * 60, 0) RenewalTime,
       count(1) RenewalCount
from refill, subscription_interval i
where Trunc(Refill_Date) > '18-Nov-13'
  And (Cast(i.Modification_Date As Date) - Cast(Refill_Date As Date)) * 24 * 60 > 0
  and (cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 < 1000
group by Round((Cast(i.Modification_Date As Date) - Cast(Refill_Date As Date)) * 24 * 60, 0)
order by RenewalTime;
```
I would convert this RenewalTime to 5-minute intervals like this:

```
(cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5
```

Next, I would use the floor function to distinguish one 5-minute group from another:

```
floor((cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5)
```

Next, I would convert this to the text you want for the RenewalTime column as follows:

```
to_char( 5 * floor((cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5))
|| ' - ' ||
to_char( 5 * (floor((cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5) + 1))
```

The intervals are like this:

0 <= renewal\_time < 5 ==> '0 - 5'
5 <= renewal\_time < 10 ==> '5 - 10'
10 <= renewal\_time < 15 ==> '10 - 15'
15 <= renewal\_time < 20 ==> '15 - 20'

This results in this SQL statement:

```
select to_char( 5 * floor((cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5))
       || ' - ' ||
       to_char( 5 * (floor((cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5) + 1)) RenewalTime,
       count(1) renewal_count
from refill, subscription_interval i
where trunc(refill_date) > to_date('18-nov-13', 'dd-mon-yy')
  and (cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 > 0
  and (cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 < 1000
group by to_char( 5 * floor((cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5))
         || ' - ' ||
         to_char( 5 * (floor((cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 / 5) + 1))
order by 1;
```
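The floor-based bucketing is ordinary arithmetic, so it can be checked outside the database; a small Python sketch of the same half-open intervals that the SQL above applies to minute differences:

```python
import math

def bucket_label(minutes, width=5):
    """Label a duration with its half-open interval, e.g. 7 -> '5 - 10'."""
    lo = width * math.floor(minutes / width)
    return f"{lo} - {lo + width}"

print(bucket_label(3))  # 0 - 5
print(bucket_label(7))  # 5 - 10
print(bucket_label(5))  # 5 - 10  (lo <= t < hi, matching the SQL's floor())
```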
I think this query produces what you want:

```
select to_char((t1.RenewalTime - 1) * 5) || '-' || to_char(t1.RenewalTime * 5) as RenewalTime,
       RenewalCount
from (Select Round((Cast(i.Modification_Date As Date) - Cast(Refill_Date As Date)) * 24 * 60, 0) RenewalTime,
             count(1) RenewalCount
      from refill, subscription_interval i
      where Trunc(Refill_Date) > '18-Nov-13'
        And (Cast(i.Modification_Date As Date) - Cast(Refill_Date As Date)) * 24 * 60 > 0
        and (cast(i.modification_date as date) - cast(refill_date as date)) * 24 * 60 < 1000
      group by Round((Cast(i.Modification_Date As Date) - Cast(Refill_Date As Date)) * 24 * 60, 0)
      order by RenewalTime) t1
where t1.RenewalTime < 3;
```
SQL Query for group by on Time Interval
[ "sql", "oracle", "group-by" ]
Example table: ``` id computer app version build date ---|---------|------|------------|-------|--------- 1 | aaaa1 | app1 | 1.0.0 | 1 | 2013-11-11 09:51:07 2 | aaaa1 | app2 | 2.0.0 | 2 | 2013-11-12 09:51:07 5 | xxxx2 | app1 | 1.0.0 | 1 | 2013-11-13 09:51:07 3 | cccc3 | app2 | 3.1.0 | 1 | 2013-11-14 09:51:07 4 | xxxx2 | app1 | 1.0.0 | 2 | 2013-11-15 09:51:07 5 | cccc3 | app2 | 3.1.1 | 3 | 2013-11-16 09:51:07 6 | xxxx2 | app1 | 1.0.2 | 1 | 2013-11-17 09:51:07 7 | aaaa1 | app1 | 1.0.2 | 3 | 2013-11-18 09:51:07 ``` Desired output (not exact format or listing order), getting latest install for each app on each computer: ``` 7. aaaa1 - app1 - 1.0.2 - 3 - 2013-11-18 09:51:07 2. aaaa1 - app2 - 2.0.0 - 2 - 2013-11-12 09:51:07 6. xxxx2 - app1 - 1.0.2 - 1 - 2013-11-17 09:51:07 5. cccc3 - app2 - 3.1.1 - 3 - 2013-11-16 09:51:07 ``` My SQL statement: ``` SELECT id, computer, app, version, build, MAX(date) AS installed FROM data WHERE placement = 'xxx' GROUP BY app, computer ; ``` This gives me: ``` 1. aaaa1 - app1 - 1.0.0 - 1 - 2013-11-11 09:51:07 ``` and not ``` 7. aaaa1 - app1 - 1.0.2 - 3 - 2013-11-18 09:51:07 ``` as I expected. MAX(date) works if I **ONLY** select MAX(date) and nothing else. But then I don't get any data to work with (just latest date). ``` SELECT MAX(date) AS installed ``` I'm not an SQL ninja so I will soon go bald by scratching my head because of this.
Try like this:

```
SELECT d.id, d.computer, d.app, d.version, d.build, a.installed
FROM data d
INNER JOIN (
    SELECT computer, app, max(date) AS installed
    FROM data
    GROUP BY computer, app
) a ON a.computer = d.computer
   AND a.app = d.app
   AND a.installed = d.date
WHERE placement = 'xxx'
```

The inner query gets the max(date) for each pair of computer and app; joining back on computer, app *and* date keeps only the latest row per pair, together with the rest of its information.
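A minimal SQLite sketch of this join-back-to-the-aggregate pattern, using a few of the question's sample rows (note the join also matches on the date, so only the newest row per (computer, app) group survives):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data (id INTEGER, computer TEXT, app TEXT, version TEXT, date TEXT);
INSERT INTO data VALUES
  (1, 'aaaa1', 'app1', '1.0.0', '2013-11-11 09:51:07'),
  (7, 'aaaa1', 'app1', '1.0.2', '2013-11-18 09:51:07'),
  (5, 'xxxx2', 'app1', '1.0.0', '2013-11-13 09:51:07'),
  (6, 'xxxx2', 'app1', '1.0.2', '2013-11-17 09:51:07');
""")
rows = con.execute("""
    SELECT d.id, d.computer, d.app, d.version, a.installed
    FROM data d
    JOIN (SELECT computer, app, MAX(date) AS installed
          FROM data GROUP BY computer, app) a
      ON a.computer = d.computer AND a.app = d.app AND a.installed = d.date
    ORDER BY d.id
""").fetchall()
print([r[0] for r in rows])  # [6, 7] -- the latest install per computer/app
```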
Try by casting the Datetime field ``` SELECT id, computer, app, version, build, MAX(cast(date as Datetime)) AS installed FROM data WHERE placement = 'xxx' GROUP BY app, computer, id, version, build ; ```
MySQL select MAX(datetime) not returning max value
[ "mysql", "sql", "max" ]
In MySQL, I want to collapse duplicate rows while also retaining a list of user\_id containing those duplicates: ``` user_id message lattitude longitude status img_name ------------------------------------------------------------------------------ 3 hello test multiple 42.0473347 89.4525034 P ced5m 4 hello test multiple 42.0473347 89.4525034 P ced5m 5 hello test multiple 42.0473347 89.4525034 P ced5m 6 hello test new 42.0473347 89.4525034 P ced5WEXm 7 hello test new 42.0473347 89.4525034 P ced5WEXm 8 hello test new 42.0473347 89.4525034 P ced5WEXm ``` I want to collapse rows 3,4,5 and 6,7,8 into this: ``` user_id message lattitude longitude status img_name ------------------------------------------------------------------------------------- 3,4,5 hello test multiple 42.0473347 89.4525034 P ced5m 6,7,8 hello test new 42.0473347 89.4525034 P ced5WEXm ``` What's the best way to do this?
```
SELECT GROUP_CONCAT(`user_id` separator ',') as user_ids,
       message, lattitude, longitude, status, img_name
FROM table_name
GROUP BY img_name, message;
```

Use the above query to get the relevant result.
Try this: ``` SELECT GROUP_CONCAT(user_id separator ','), message, lattitude, longitude, STATUS, img_name FROM YourTable GROUP BY message, lattitude, longitude, STATUS, img_name ``` `SQLFIDDLE DEMO` You may be able to remove some of the GROUP BY columns. See [here](http://dev.mysql.com/doc/refman/5.0/en/group-by-extensions.html) to evaluate if any of those are in the conditions that mysql allows you to remove them for better performance on the query.
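SQLite ships the same aggregate under a slightly different spelling (`group_concat(expr, separator)` instead of MySQL's `SEPARATOR` keyword), which makes the collapse easy to demo; the rows below are a subset of the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE msgs (user_id INTEGER, message TEXT, img_name TEXT);
INSERT INTO msgs VALUES
  (3, 'hello test multiple', 'ced5m'),
  (4, 'hello test multiple', 'ced5m'),
  (5, 'hello test multiple', 'ced5m'),
  (6, 'hello test new', 'ced5WEXm');
""")
rows = con.execute("""
    SELECT group_concat(user_id, ',') AS user_ids, message, img_name
    FROM msgs
    GROUP BY message, img_name
    ORDER BY img_name
""").fetchall()
# One row per (message, img_name) group, with the ids collapsed into a string;
# the order of ids inside the string is not guaranteed by the standard.
print(rows)
```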
SQL Query for duplicate records, retaining a list of the duplicate id's
[ "mysql", "sql" ]
I'm getting compile errors with this code using sqlplus. My errors are:

> Warning: Procedure created with compilation errors.
>
> BEGIN point\_triangle; END;
>
> Error at line 1: ORA-06550: Line 1, column 7:
> PLS-00905: object POINT\_TRIANGLE is invalid
> ORA-06550: line 1, column 7:
> PL/SQL Statement ignored

Whenever I type show errors, it tells me there are no errors. Here is the code.

```
create or replace procedure point_triangle
AS
A VARCHAR2(30);
B VARCHAR2(30);
C INT;
BEGIN
FOR thisteam in
(select P.FIRSTNAME into A
 from PLAYERREGULARSEASON P
 where P.TEAM = 'IND'
 group by P.FIRSTNAME, P.LASTNAME
 order by SUM(P.PTS) DESC)
(select P.LASTNAME into B
 from PLAYERREGULARSEASON P
 where P.TEAM = 'IND'
 group by P.FIRSTNAME, P.LASTNAME
 order by SUM(P.PTS) DESC)
(select SUM(P.PTS) into C
 from PLAYERREGULARSEASON P
 where P.TEAM = 'IND'
 group by P.FIRSTNAME, P.LASTNAME
 order by SUM(P.PTS) DESC);
LOOP
dbms_output.put_line(A|| ' ' || B || ':' || C);
END LOOP;
END;
/
```

It is supposed to put each player's name into A and B, with their career points for that team in C. I know the queries work, just not in the procedure.
```
create or replace procedure point_triangle
AS
BEGIN
  FOR thisteam in (select FIRSTNAME, LASTNAME, SUM(PTS) AS PTS
                   from PLAYERREGULARSEASON
                   where TEAM = 'IND'
                   group by FIRSTNAME, LASTNAME
                   order by SUM(PTS) DESC)
  LOOP
    dbms_output.put_line(thisteam.FIRSTNAME || ' ' || thisteam.LASTNAME || ':' || thisteam.PTS);
  END LOOP;
END;
/
```

(The aggregate needs an alias so it can be referenced as `thisteam.PTS` inside the loop.)
Could you try this one: ``` create or replace procedure point_triangle IS BEGIN FOR thisteam in (select P.FIRSTNAME,P.LASTNAME, SUM(P.PTS) S from PLAYERREGULARSEASON P where P.TEAM = 'IND' group by P.FIRSTNAME, P.LASTNAME order by SUM(P.PTS) DESC) LOOP dbms_output.put_line(thisteam.FIRSTNAME|| ' ' || thisteam.LASTNAME || ':' || thisteam.S); END LOOP; END; ```
Stored Procedure error ORA-06550
[ "sql", "oracle", "stored-procedures", "ora-06550" ]
I want to return all rows that have a certain value in a column, when that value occurs in more than 5 rows. For example, if the value in column M is 1 and there are 5 or more rows in which M is 1, then I want to return all of those rows.

```
select * from tab
where M = 1
group by id --ID is the primary key of the table
having count(M) > 5;
```

EDIT: Here is my table:

```
 id     | M           | price
--------+-------------+-------
 1      |             |   100
 2      | 1           |    50
 3      | 1           |    30
 4      | 2           |    20
 5      | 2           |    10
 6      | 3           |    20
 7      | 1           |     1
 8      | 1           |     1
 9      | 1           |     1
 10     | 1           |     1
 11     | 1           |     1
```

Originally I just wanted to use this inside a trigger, so that if the number of M = 1 rows is greater than 5, I can raise an exception. The query I asked for would be inserted into the trigger. END EDIT.

But my result is always empty. Can anyone help me out? Thanks!
Try this : ``` select * from tab where M in (select M from tab where M = 1 group by M having count(id) > 5); ``` [**SQL Fiddle Demo**](http://sqlfiddle.com/#!3/f05e0/1)
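A quick SQLite check of this subquery-plus-HAVING approach against the question's sample table (M = 1 occurs 7 times, so the `HAVING COUNT(id) > 5` passes and all seven rows come back):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab (id INTEGER PRIMARY KEY, M INTEGER, price INTEGER);
INSERT INTO tab VALUES
  (1, NULL, 100), (2, 1, 50), (3, 1, 30), (4, 2, 20), (5, 2, 10),
  (6, 3, 20), (7, 1, 1), (8, 1, 1), (9, 1, 1), (10, 1, 1), (11, 1, 1);
""")
ids = [r[0] for r in con.execute("""
    SELECT id FROM tab
    WHERE M IN (SELECT M FROM tab WHERE M = 1 GROUP BY M HAVING COUNT(id) > 5)
    ORDER BY id
""")]
print(ids)  # [2, 3, 7, 8, 9, 10, 11]
```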
Please try:

```
select *, count(M)
from table
where M = 1
group by id
having count(M) > 5
```
Return the query when count of a query is greater than a number?
[ "mysql", "sql", "count", "group-by", "having" ]
Table structure for table Warehouse ``` CREATE TABLE Warehouse ( wID NUMBER(25) , Location VARCHAR2(70) , Num_Employees NUMBER(25) , Stock NUMBER(25) , PRIMARY KEY (wID) ); CREATE SEQUENCE WAREHOUSE_SEQ START WITH 1 INCREMENT BY 1 NOMAXVALUE; INSERT INTO Warehouse (wID, Location, Num_Employees, Stock) VALUES (WAREHOUSE_SEQ.nextval , 'Dallas', '3', '13'); INSERT INTO Warehouse (wID, Location, Num_Employees, Stock) VALUES (WAREHOUSE_SEQ.nextval , 'Denver', '3', '07'); INSERT INTO Warehouse (wID, Location, Num_Employees, Stock) VALUES (WAREHOUSE_SEQ.nextval , 'Detroit', '3', '09'); INSERT INTO Warehouse (wID, Location, Num_Employees, Stock) VALUES (WAREHOUSE_SEQ.nextval , 'Phoenix', '3', '14'); INSERT INTO Warehouse (wID, Location, Num_Employees, Stock) VALUES (WAREHOUSE_SEQ.nextval , 'Atlanta', '3', '07'); ``` **Query:** ``` SELECT DISTINCT Orders.wID, person.name, Employee.*, Warehouse.wID FROM person INNER JOIN Orders ON Orders.wID = Warehouse.wID INNER JOIN Warehouse ON Warehouse.LOCATION=Employee.WORK_LOCATION INNER JOIN Employee ON Employees.ETYPE='Manager' WHERE Employee.EID = person.Id; ```
Try this..

```
SELECT DISTINCT Orders.wID, person.name, Employee.*, Warehouse.wID
FROM person
INNER JOIN Employee ON person.Id = Employee.EID
INNER JOIN Warehouse ON Warehouse.LOCATION = Employee.WORK_LOCATION
INNER JOIN Orders ON Orders.wID = Warehouse.wID
WHERE Employee.ETYPE = 'Manager';
```
The problem is your first join: `FROM person INNER JOIN Orders ON Orders.wID = Warehouse.wID`. The table 'Warehouse' is not yet available at that point, but you already reference it in the 'ON' part.
Inner Join ORA-00904: invalid identifier
[ "sql", "oracle" ]
I'm building a database for a baseball team, and one of the tables is for pitching stats. If you know baseball, you know baseball ERA is calculated as (number of earned runs given up / divided by innings pitched) \* 9. In the table, I have separate columns for Innings Pitched, and for Earned Runs, and through an SQL statement, i want to calculate the ERA Currently Innings Pitched is stored as a decimal, like "1.0" or "6.1", or "7.2", something like that, however when calculated, if there is a decimal (like "7.2"), I need the ".2" to actually change to ".66666", or a ".1 to change to ".33333". Here is what I have tried so far in my SQL: ``` SELECT *, CONVERT(DECIMAL(10,2), ((stat_er / NULLIF(stat_ip,0)) * 9)) AS stat_era FROM stats_pitching ORDER BY stat_era ASC ``` This does the "stat\_ip" as ".0", ".1" or ".2", as that is how it is from the database. How could I change anything in the column ending in ".1" or ".2" to ".3333" or ".6666" respectively in the SQL equation? Would I use a "Case" type, and if so, does anyone have some examples?
You can do that by applying simple math, using the `ROUND` function. Let's say `Value` is your column name.

```
ROUND(Value, 0) + (10 * (Value - ROUND(Value, 0)) / 3)
```

Little demo: <http://sqlfiddle.com/#!6/5ad7f/2>
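The same .1 to 1/3 and .2 to 2/3 conversion in plain Python, in case you want to verify the arithmetic outside SQL; the `era` helper and its two-decimal rounding are an assumption about how the final figure should be displayed:

```python
def thirds(ip):
    """Convert baseball innings notation (.1 = 1/3 inning, .2 = 2/3) to a true decimal."""
    whole = int(ip)                  # innings pitched only ever ends in .0, .1 or .2
    outs = round((ip - whole) * 10)  # 0, 1 or 2 thirds of an inning
    return whole + outs / 3

def era(earned_runs, innings):
    # Hypothetical helper: (ER / IP) * 9, rounded to the usual two decimals.
    return round(earned_runs / thirds(innings) * 9, 2)

print(thirds(7.2))  # about 7.667
print(era(3, 7.2))  # 3.52
```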
You can use `CASE` as below: ``` declare @x decimal(38,6) = 7.1 select case @x - cast(FLOOR(@x) as decimal) when 0 THEN @x when 0.1 THEN cast(FLOOR(@x) as decimal) + 0.3333 when 0.2 THEN cast(FLOOR(@x) as decimal) + 0.6666 end ``` That will always turn * x.0 to x.0 * x.1 to x.3333 * x.2 to x.6666 where x is an integer.
Calculating numbers from fields in a database
[ "sql", "sql-server" ]
I am having an issue with the following query returning results a bit too slow and I suspect I am missing something basic. My initial guess is the 'CASE' statement is taking too long to process its result on the underlying data. But it could be something in the derived tables as well. The question is, how can I speed this up? Are there any glaring errors in the way I am pulling the data? Am I running into a sorting or looping issue somewhere? The query runs for about 40 seconds, which seems quite long. C# is my primary expertise, SQL is a work in progress. Note I am not asking "write my code" or "fix my code". Just for a pointer in the right direction, I can't seem to figure out where the slowdown occurs. Each derived table runs very quickly (less than a second) on its own, the joins seem correct and the result set is returning exactly what I need. It's just too slow and I'm sure there are better SQL scripters out there ;) Any tips would be greatly appreciated!

```
SELECT hdr.taker
     , hdr.order_no
     , hdr.po_no as display_po
     , cust.customer_name
     , hdr.customer_id
     , 'INCORRECT-LARGE ORDER' + CASE
         WHEN (ext_price_calc >= 600.01 and ext_price_calc <= 800)
              and fee_price.unit_price <> round(ext_price_calc * -.01, 2)
           THEN '-1%: $' + cast(cast(ext_price_calc * -.01 as decimal(18,2)) as varchar(255))
         WHEN ext_price_calc >= 800.01 and ext_price_calc <= 1000
              and fee_price.unit_price <> round(ext_price_calc * -.02, 2)
           THEN '-2%: $' + cast(cast(ext_price_calc * -.02 as decimal(18,2)) as varchar(255))
         WHEN ext_price_calc > 1000
              and fee_price.unit_price <> round(ext_price_calc * -.03, 2)
           THEN '-3%: $' + cast(cast(ext_price_calc * -.03 as decimal(18,2)) as varchar(255))
         ELSE 'OK'
       END AS Status
FROM (myDb_view_oe_hdr hdr
      LEFT OUTER JOIN myDb_view_customer cust
        ON hdr.customer_id = cust.customer_id)
LEFT OUTER JOIN wpd_view_sales_territory_by_customer territory
  ON cust.customer_id = territory.customer_id
LEFT OUTER JOIN (select order_no, SUM(ext_price_calc) as ext_price_calc
                 from
                 (select hdr.order_no, line.item_id,
                         (line.qty_ordered - isnull(qty_canceled, 0)) * unit_price as ext_price_calc
                  from myDb_view_oe_hdr hdr
                  left outer join myDb_view_oe_line line
                    on hdr.order_no = line.order_no
                  where line.delete_flag = 'N'
                    AND line.cancel_flag = 'N'
                    AND hdr.projected_order = 'N'
                    AND hdr.delete_flag = 'N'
                    AND hdr.cancel_flag = 'N'
                    AND line.item_id not in ('LARGE-ORDER-1%', 'LARGE-ORDER-2%', 'LARGE-ORDER-3%',
                                             'FUEL', 'NET-FUEL', 'CONVENIENCE-FEE')) as line
                 group by order_no) as order_total
  on hdr.order_no = order_total.order_no
LEFT OUTER JOIN (select order_no, count(order_no) as convenience_count
                 from oe_line with (nolock)
                 left outer join inv_mast inv with (nolock)
                   on oe_line.inv_mast_uid = inv.inv_mast_uid
                 where inv.item_id in ('LARGE-ORDER-1%', 'LARGE-ORDER-2%', 'LARGE-ORDER-3%')
                   and oe_line.delete_flag <> 'Y'
                 group by order_no) as fee_count
  on hdr.order_no = fee_count.order_no
INNER JOIN (select order_no, unit_price
            from oe_line line with (nolock)
            where line.inv_mast_uid in (select inv_mast_uid
                                        from inv_mast with (nolock)
                                        where item_id in ('LARGE-ORDER-1%', 'LARGE-ORDER-2%', 'LARGE-ORDER-3%'))) as fee_price
  ON fee_count.order_no = fee_price.order_no
WHERE hdr.projected_order = 'N'
  AND hdr.cancel_flag = 'N'
  AND hdr.delete_flag = 'N'
  AND hdr.completed = 'N'
  AND territory.territory_id = 'CUSTOMERTERRITORY'
  AND ext_price_calc > 600.00
  AND hdr.carrier_id <> '100004'
  AND fee_count.convenience_count is not null
  AND CASE
        WHEN (ext_price_calc >= 600.01 and ext_price_calc <= 800)
             and fee_price.unit_price <> round(ext_price_calc * -.01, 2)
          THEN '-1%: $' + cast(cast(ext_price_calc * -.01 as decimal(18,2)) as varchar(255))
        WHEN ext_price_calc >= 800.01 and ext_price_calc <= 1000
             and fee_price.unit_price <> round(ext_price_calc * -.02, 2)
          THEN '-2%: $' + cast(cast(ext_price_calc * -.02 as decimal(18,2)) as varchar(255))
        WHEN ext_price_calc > 1000
             and fee_price.unit_price <> round(ext_price_calc * -.03, 2)
          THEN '-3%: $' + cast(cast(ext_price_calc * -.03 as decimal(18,2))
as varchar(255)) ELSE 'OK' END <> 'OK' ```
Just as a clue to the right direction for optimization: * When you do an OUTER JOIN to a query with calculated columns, you are guaranteeing not only a full table scan, but that those calculations must be performed against every row in the joined table. It appears that you can actually do your join to oe\_line without the column calculations (i.e. by filtering **ext\_price\_calc** to a specific range). * You don't need to do most of the subqueries that are in your query--the master query can be recrafted to use regular table join syntax. Joins to subqueries containing subqueries presents a challenge to the SQL optimizer that it may not be able to meet. But by using regular joins, the optimizer has a much better chance at identifying more efficient query strategies. * You don't tag which SQL engine you're using. Every database has proprietary extensions that may allow for speedier or more efficient queries. It would be easier to provide useful feedback if you indicated whether you were using MySQL, SQL Server, Oracle, etc. * Regardless of the database you're using, reviewing the query plan is always a good place to start. This will tell you where most of the I/O and time in your query is being spent. * Just on general principle, make sure your statistics are up-to-date.
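To make the "review the query plan" step concrete: every engine exposes its plan (SQL Server via the graphical plan or `SET SHOWPLAN_XML`, Oracle via `EXPLAIN PLAN`). As a tiny, engine-neutral illustration, here is the SQLite flavor against an invented one-table schema; the point is simply to confirm whether a predicate is served by an index (SEARCH) or forces a full scan (SCAN):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE oe_line (order_no INTEGER, unit_price REAL, item_id TEXT);
CREATE INDEX idx_item ON oe_line (item_id);
""")
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT order_no, unit_price FROM oe_line WHERE item_id = 'LARGE-ORDER-1%'
""").fetchall()
for step in plan:
    print(step)  # e.g. (..., 'SEARCH oe_line USING INDEX idx_item (item_id=?)')
```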
It may not be solvable by any of us without the real data to test with. If that's the case and nobody else posts the answer, I can still help. Here is how to troubleshoot it:

(1) Take joins and pieces out one by one.
(2) This will cause errors; remove or fake the references to get rid of them.
(3) See how that works.
(4) Put items back before you try taking something else out.
(5) Keep track...
(6) Also be aware where the removal of something might drastically reduce the result set. You might find you're missing an index or some other smoking gun.
Query taking too long - Optimization
[ "sql", "sql-server-2008", "optimization" ]
I have a table like this:

```
value1 value2 value3 value4 value5 constant
1      2      3      4      5      2
8      2      8      3      5      2
1      5      3      4      5      3
1      2      6      4      5      3
```

Now what I want to do is find the sum across the columns, but only over as many columns as the value given in the constant field. For example, in row 1 the constant value is `2`, so I need to find the sum `value1 + value2`. If the value in constant were `3`, I'd need the sum `value1 + value2 + value3`. Sorry for the bad English. What is a suitable way to do this? I have been googling for a while now but couldn't find a suitable way.
**Intro**

The normal way to resolve this question is: choose the correct structure. If you have 24 fields and you need to loop over them dynamically in SQL, then *something went wrong*. Also, it is bad that your table does not have any primary key (or you've not mentioned one).

**Extremely important note**

Even though the way I'll describe does work, it is still bad practice, because it relies on some MySQL-specific tricks. Use it at your own risk - and, again, reconsider your structure if possible.

**The hack**

Actually, you *can* do some tricks using the MySQL [INFORMATION\_SCHEMA](http://dev.mysql.com/doc/refman/5.0/en/information-schema.html) tables. With this you can build "text" SQL, which can later be used in a [prepared statement](http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-prepared-statements.html).

*My table*

It's called `test`. Here it is:

```
+----------+---------+------+-----+---------+-------+
| Field    | Type    | Null | Key | Default | Extra |
+----------+---------+------+-----+---------+-------+
| value1   | int(11) | YES  |     | NULL    |       |
| value2   | int(11) | YES  |     | NULL    |       |
| value3   | int(11) | YES  |     | NULL    |       |
| value4   | int(11) | YES  |     | NULL    |       |
| constant | int(11) | YES  |     | NULL    |       |
+----------+---------+------+-----+---------+-------+
```

I have `4` "value" fields in it and *no primary key* column (that causes trouble, but I've resolved that).
Now, my data: ``` +--------+--------+--------+--------+----------+ | value1 | value2 | value3 | value4 | constant | +--------+--------+--------+--------+----------+ | 2 | 5 | 6 | 0 | 2 | | 1 | -100 | 0 | 0 | 1 | | 3 | 10 | -10 | 0 | 3 | | 4 | 0 | -1 | 5 | 3 | | -1 | 1 | -1 | 1 | 4 | +--------+--------+--------+--------+----------+ ``` *The trick* It's about selecting data from mentioned service schema in MySQL and working with [GROUP\_CONCAT](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) function: ``` select concat('SELECT CASE(seq) ', group_concat(groupcase separator ''), ' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest') from (select concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase from (select rownum, group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue from (select *, @row:=@row+1 as rownum from test cross join (select @row:=0) as initrow) as tablestruct left join (select COLUMN_NAME, @num:=@num+1 as num from INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init where TABLE_SCHEMA='test' && TABLE_NAME='test' && COLUMN_NAME!='constant') as struct on tablestruct.constant>=struct.num group by rownum) as groupvalues) as groupscase ``` -what will this do? Actually, I recommend to execute it step-by-step (i.e. add more complex layer to that which you've already understood) - I doubt there's short way to describe what's happening. It's not a wizardry, it's about constructing valid text SQL from input conditions. End result will be like: ``` SELECT CASE(seq) WHEN 1 THEN value1+value2 WHEN 2 THEN value1 WHEN 3 THEN value3+value2+value1 WHEN 4 THEN value3+value2+value1 WHEN 5 THEN value2+value1+value4+value3 END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest ``` (I didn't add formatting because that SQL is *generated* string, not the one you'll write by yourself). *Last step* What now? 
Just Allocate it with: ``` mysql> set @s=(select concat('SELECT CASE(seq) ', group_concat(groupcase separator ''), ' END AS result FROM (select *, @j:=@j+1 as seq from test cross join (select @j:=0) as initj) as inittest') from (select concat(' WHEN ', rownum, ' THEN ', groupvalue) as groupcase from (select rownum, group_concat(COLUMN_NAME SEPARATOR '+') as groupvalue from (select *, @row:=@row+1 as rownum from test cross join (select @row:=0) as initrow) as tablestruct left join (select COLUMN_NAME, @num:=@num+1 as num from INFORMATION_SCHEMA.COLUMNS cross join (select @num:=0) as init where TABLE_SCHEMA='test' && TABLE_NAME='test' and COLUMN_NAME!='constant') as struct on tablestruct.constant>=struct.num group by rownum) as groupvalues) as groupscase); Query OK, 0 rows affected (0.00 sec) mysql> prepare stmt from @s; Query OK, 0 rows affected (0.00 sec) Statement prepared ``` -and, finally: ``` mysql> execute stmt; ``` You'll get results as: ``` +--------+ | result | +--------+ | 7 | | 1 | | 3 | | 3 | | 0 | +--------+ ``` **Why is this bad** Because it generates string for whole table. I.e. *for each row*! Imagine if you'll have 1000 rows - that will be nasty. MySQL also has limitation in `GROUP_CONCAT`: [group\_concat\_max\_len](http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_group_concat_max_len) - which will limit this way, obviously. *So why I did that?* Because I was curious if the way without additional DDL and implicit recounting of table's fields exist. I've found it, so leaving it here.
Is the value column count fixed from 1 to 5? If not, you need to generate a dynamic query. Try this:

```
SELECT IF(constant = 1, value1,
       IF(constant = 2, value1 + value2,
       IF(constant = 3, value1 + value2 + value3,
       IF(constant = 4, value1 + value2 + value3 + value4,
          value1 + value2 + value3 + value4 + value5))))
FROM tab;
```

**UPDATED**

If I were you, I'd design it like this:

```
tbl1(id, constant), value_tbl1(tbl1_id, column_seq, value);

SELECT SUM(value)
FROM tbl1 t, value_tbl1 v
WHERE t.id = v.tbl1_id
AND column_seq BETWEEN 1 AND t.constant
```
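Since both answers boil down to "pick how many columns to add based on `constant`", here is a runnable check of that idea using Python's sqlite3 (the table name `t` and the short column names are placeholders, not the asker's schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table t (v1 int, v2 int, v3 int, v4 int, v5 int, k int)")
db.executemany(
    "insert into t values (?, ?, ?, ?, ?, ?)",
    [(1, 2, 3, 4, 5, 2), (8, 2, 8, 3, 5, 2),
     (1, 5, 3, 4, 5, 3), (1, 2, 6, 4, 5, 3)],
)
# CASE picks how many value columns to add, driven by the constant column k
rows = db.execute("""
    select case k
             when 1 then v1
             when 2 then v1 + v2
             when 3 then v1 + v2 + v3
             when 4 then v1 + v2 + v3 + v4
             else v1 + v2 + v3 + v4 + v5
           end as total
    from t
""").fetchall()
print(rows)  # [(3,), (10,), (9,), (9,)]
```

The same CASE expression works verbatim in MySQL; only the nested-IF spelling differs.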
Find sum across varying no of columns
[ "mysql", "sql", "sum" ]
I know you can use COALESCE and ISNULL, but I was just wondering if you could do it with a SELECT CASE. I had this:

```
SELECT (CASE Table.Column WHEN ' ' THEN '1/1/2001' Else Table.Column End), Column2
FROM Table
```

That didn't do anything, so I tried:

```
SELECT (CASE Table.Column WHEN NULL THEN '1/1/2001' Else Table.Column End), Column2
FROM Table
```

Nothing. Just curious. Thanks!
You can certainly use the [`is [not] null`](http://technet.microsoft.com/en-us/library/ms188795.aspx) predicate like so: ``` select case when t.Column1 is null then '1/1/2001' else t.Column1 end ,t.Column2 from Table1 as t ``` However, there are functions built specifically for dealing with `null`: > **1.** [`isnull(check_expression , replacement_value)`](http://technet.microsoft.com/en-us/library/ms184325.aspx) > > Replaces NULL with the specified replacement value. > > ``` > select > isnull(t.Column1, '1/1/2001') > ,t.Column2 > from Table1 as t > ``` > > --- > > **2.** [`coalesce(expression [ ,...n ])`](http://msdn.microsoft.com/en-us/library/ms190349.aspx) > > Evaluates the arguments in order and returns the current value of the first expression that initially does not evaluate to NULL. > > ``` > select > coalesce(t.Column1, '1/1/2001') > ,t.Column2 > from Table1 as t > ```
Use `IS` with `NULL` ``` SELECT CASE WHEN Column IS NULL THEN '1/1/2001' ELSE Column END, Column2 FROM Table ```
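For what it's worth, the reason `WHEN NULL` fails can be demonstrated in any engine: the simple `CASE x WHEN y` form compares with `=`, and `NULL = NULL` is unknown, so the `ELSE` branch always wins. A small sqlite3 sketch (hypothetical two-column table) showing the failing simple CASE next to the working searched CASE and COALESCE:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table t (c1 text, c2 text)")
db.executemany("insert into t values (?, ?)", [(None, "a"), ("x", "b")])

# simple CASE compares with '=', and NULL = NULL is unknown, so NULL stays NULL
broken = db.execute(
    "select case c1 when null then '1/1/2001' else c1 end from t").fetchall()

# searched CASE with IS NULL, and coalesce, both replace the NULL
searched = db.execute(
    "select case when c1 is null then '1/1/2001' else c1 end from t").fetchall()
coalesced = db.execute("select coalesce(c1, '1/1/2001') from t").fetchall()

print(broken)    # [(None,), ('x',)]
print(searched)  # [('1/1/2001',), ('x',)]
```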
Can you change the value of a column that is NULL using a CASE?
[ "sql", "t-sql" ]
I have a query with several left outer joins; for simplicity's sake I will just include two. It looks something like this:

```
SELECT Object.ID, Gloss.name, Gloss.order, Title.name
from Object
LEFT OUTER JOIN Gloss on Gloss.object_id = Object.ID
LEFT OUTER JOIN Title on Title.object_id = Object.ID
```

However, some items have multiple Gloss rows, and I want to return only a single row with either the max or min Gloss.order. A sample output from my query looks like this:

```
|Object.ID | Gloss.name | Gloss.order | Title.name
|4.00      | glossvalue1| 1           | TitleValue
|4.00      | glossvalue2| 2           | TitleValue
|3.00      | gloss3-1   | 11          | OtherTitle
|3.00      | gloss3-2   | 13          | OtherTitle
|3.00      | gloss3-3   | 15          | OtherTitle
```

Ideally, I would like to return something like this:

```
|Object.ID | Gloss.name | Gloss.order | Title.name
|4.00      | glossvalue1| 1           | TitleValue
|3.00      | gloss3-1   | 11          | OtherTitle
```

I think I need some max or min aggregation, but I am having trouble combining that with the other outer join (which does not need max or min). Any help is appreciated; let me know if you need more info.
This should do the job (note `[order]` needs brackets since it is a reserved word):

```
SELECT Object.ID,
       (SELECT name FROM Gloss G
        WHERE G.object_id = X.object_id AND G.[order] = X.ord) AS [GlossName],
       X.ord,
       Title.name
FROM Object
LEFT OUTER JOIN (SELECT object_id, MIN([order]) ord
                 FROM Gloss
                 GROUP BY object_id) X ON X.object_id = Object.ID
LEFT OUTER JOIN Title ON Title.object_id = Object.ID
```

I didn't take the min gloss name directly, since as I understood it your min should be based on gloss order.
``` SELECT Object.ID, min(Gloss.name), min(Gloss.order), Title.name from Object LEFT OUTER JOIN Gloss on Gloss.object_id = Object.ID LEFT OUTER JOIN Title on Title.object_id = Object.ID GROUP BY Object.ID,Title.name ```
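A quick way to convince yourself that the min-per-group join keeps exactly one row per `Object.ID` is to run it against the sample data. This sqlite3 sketch uses only the Gloss table, with `order` renamed to `ord` to sidestep the reserved word:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table Gloss (object_id int, name text, ord int)")
db.executemany("insert into Gloss values (?, ?, ?)", [
    (4, "glossvalue1", 1), (4, "glossvalue2", 2),
    (3, "gloss3-1", 11), (3, "gloss3-2", 13), (3, "gloss3-3", 15),
])
# join each row back to its group's minimum ord; only the min row survives
rows = db.execute("""
    select g.object_id, g.name, g.ord
    from Gloss g
    join (select object_id, min(ord) as ord
          from Gloss group by object_id) x
      on x.object_id = g.object_id and x.ord = g.ord
    order by g.object_id desc
""").fetchall()
print(rows)  # [(4, 'glossvalue1', 1), (3, 'gloss3-1', 11)]
```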
Multiple Outer joins return top result
[ "sql", "sql-server", "outer-join" ]
I have a table called `messages` that stores user messages, and the table structure looks like this:

Messages:

```
id  from_id  to_id  content
1   1        2      ABC
2   2        1      BCC
3   1        2      EFG
1   4        2      GHJ
2   2        4      MNX
3   15       2      LKH
```

Is it possible to run a query to group messages like the following?

Expected Output:

```
from_id  to_id
1        2
4        2
15       2
```

A conversation between two parties should be in one group. So, as we can see from the `messages` table, there are 3 groups.
You can do either ``` SELECT DISTINCT LEAST(from_id, to_id) from_id, GREATEST(from_id, to_id) to_id FROM messages; ``` Output: ``` | FROM_ID | TO_ID | |---------|-------| | 1 | 2 | | 2 | 4 | | 2 | 15 | ``` or ``` SELECT from_id, to_id FROM messages GROUP BY LEAST(from_id, to_id), GREATEST(from_id, to_id); ``` Output: ``` | FROM_ID | TO_ID | |---------|-------| | 1 | 2 | | 4 | 2 | | 15 | 2 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/740c3/3)** demo
I think you're looking for `DISTINCT` ``` SELECT DISTINCT from_id, to_id FROM messages WHERE to_id=2 ```
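The LEAST/GREATEST trick normalizes each pair so that (1, 2) and (2, 1) collapse to the same key. SQLite has no LEAST/GREATEST, but its two-argument scalar `min()`/`max()` behave the same way, so the idea can be sketched like this:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table messages (from_id int, to_id int)")
db.executemany("insert into messages values (?, ?)",
               [(1, 2), (2, 1), (1, 2), (4, 2), (2, 4), (15, 2)])
# two-argument min()/max() are scalar in SQLite, like LEAST/GREATEST
pairs = db.execute("""
    select distinct min(from_id, to_id), max(from_id, to_id)
    from messages
""").fetchall()
print(sorted(pairs))  # [(1, 2), (2, 4), (2, 15)]
```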
group messages - MySQL
[ "mysql", "sql" ]
Given the following table (and sample data):

```
PK | ClientID | SetID | Title
-----------------------------
P1 | C1       | S1    | Title1
P2 | C1       | S1    | Title1
P3 | C2       | S2    | Title1
P4 | C2       | S2    | Title1
P3 | C1       | S3    | Title2
P5 | C1       | S3    | Title2
```

Assuming a `Set` belongs to a `Client`, can I have a unique index that constrains the title to be unique within a client, except among its siblings within the same set? So, for example, I can have `Title1` in two `Clients`, but not twice in one `Client`. Now for `Client1`, I want to allow a second record with `Title1`, but only when it has the same `SetID` as all other records with that `Title`. Just to note, I'm using SQL Azure, but I'm interested more generally (e.g. 2008 R2/2012) too.

**Edit:** Please note that I cannot change the structure of the table. It exists this way already, and has a complex business layer behind it. If I can fix this as-is, then great; if not, then I can leave it broken.
You may try additional indexed view. For example, a table: ``` create table dbo.Test (PK int, ClientID int, SetID int, Title varchar(50), primary key (PK)) insert into dbo.Test values (1, 1, 1, 'Title1') ,(2, 1, 1, 'Title1') ,(3, 2, 2, 'Title1') ,(4, 2, 2, 'Title1') ,(5, 1, 3, 'Title2') ,(6, 1, 3, 'Title2') ``` The view and index: ``` create view dbo.vTest with schemabinding as select ClientID, Title, SetID, cnt=count_big(*) from dbo.Test group by ClientID, Title, SetID GO create unique clustered index UX_vTest on dbo.vTest (ClientID, Title) GO ``` Then: ``` insert into dbo.Test values (7, 1, 1, 'Title1') -- will pass insert into dbo.Test values (8, 1, 1, 'Title1') -- will pass insert into dbo.Test values (9, 1, 2, 'Title1') -- will fail insert into dbo.Test values (10, 2, 2, 'Title1') -- will pass insert into dbo.Test values (11, 1, 3, 'Title1') -- will fail ```
If I got the question right, I would add another table that links `SetID` and `Title` in a `1:1` relation, and add a unique index on `ClientID` and `SetID`:

```
CREATE UNIQUE INDEX [Index_Name] ON [dbo].[MyTable]([ClientID], [SetID])
```
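The indexed view is SQL Server specific. Just to illustrate the rule itself ("a client may reuse a title only inside the same set"), here is a sqlite3 sketch that enforces it with a BEFORE INSERT trigger instead - an illustrative stand-in, not a substitute for the indexed view:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table Test (PK integer primary key, ClientID int, SetID int, Title text);

-- reject a title that the same client already uses in a *different* set
create trigger Test_title_per_set before insert on Test
when exists (select 1 from Test t
             where t.ClientID = new.ClientID
               and t.Title    = new.Title
               and t.SetID   <> new.SetID)
begin
  select raise(abort, 'title already used by this client in another set');
end;
""")
db.executescript("""
insert into Test values (1, 1, 1, 'Title1');
insert into Test values (2, 1, 1, 'Title1');   -- same set: allowed
insert into Test values (3, 2, 2, 'Title1');   -- other client: allowed
""")
try:
    db.execute("insert into Test values (4, 1, 2, 'Title1')")  # same client, new set
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True
```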
SQL Server Conditional Unique Index
[ "sql", "sql-server", "t-sql", "azure-sql-database", "unique-constraint" ]
I'm looking to take the data from the query below (all employees in Seattle) and insert it into a new, blank table with identical columns as my select statement. ``` select e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id, e.salary, e.commission_pct, e.manager_id, e.department_id from employees e join departments d on e.department_id = d.department_id join jobs j on e.job_id = j.job_id join locations l on d.location_id = l.location_id where l.city = 'Seattle'; ``` Any idea how to do this?
If you have an existing table you can do: ``` INSERT INTO ExistingTable (Columns,..) SELECT Columns,... FROM OtherTable ``` From your sql ``` insert into newEmpTable (employee_id, first_name, last_name, email, phone_number, hire_date, job_id, salary, commission_pct, manager_id, department_id) select e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id, e.salary, e.commission_pct, e.manager_id, e.department_id from employees e join departments d on e.department_id = d.department_id join jobs j on e.job_id = j.job_id join locations l on d.location_id = l.location_id where l.city = 'Seattle'; ``` See <http://docs.oracle.com/cd/E17952_01/refman-5.1-en/insert-select.html> If you do not have a table and want to create it, ``` create table new_table as select e.employee_id, e.first_name, e.last_name, e.email, e.phone_number, e.hire_date, e.job_id, e.salary, e.commission_pct, e.manager_id, e.department_id from employees e join departments d on e.department_id = d.department_id join jobs j on e.job_id = j.job_id join locations l on d.location_id = l.location_id where l.city = 'Seattle'; ```
``` CREATE TABLE SeattleEmployees AS (/* your query here */); ```
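`SELECT ... INTO` is SQL Server syntax; Oracle (this question's tag) and SQLite spell it `CREATE TABLE ... AS SELECT`. A sqlite3 sketch with a deliberately cut-down employees table (the real query joins departments, jobs and locations):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table employees (employee_id int, first_name text, city text);
insert into employees values
  (1, 'Ada',  'Seattle'),
  (2, 'Bob',  'Boston'),
  (3, 'Cleo', 'Seattle');

-- CREATE TABLE ... AS SELECT materializes the filtered rows
create table seattle_emps as
  select employee_id, first_name
  from employees
  where city = 'Seattle';
""")
rows = db.execute("select * from seattle_emps order by employee_id").fetchall()
print(rows)  # [(1, 'Ada'), (3, 'Cleo')]
```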
Simple SQL Table Insert
[ "sql", "oracle", "insert" ]
I have SQL\*Plus 12.1 installed on Fedora 19 trying to connect to an Oracle 11g database. I installed the instantclient RPM packages (basic, devel, sqlplus) from [here](http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html). I can successfully connect to other Oracle databases using SQL\*Plus, so I know I have a working installation of the software. However, when I try to connect to this particular database, I get this error: ``` ERROR: ORA-01017: invalid username/password; logon denied ``` Here's my tnsnames.ora file (with the host and port obfuscated out): ``` PSPRODDB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = #HOST-ADDR)(PORT = #PORT-NUM)) ) (CONNECT_DATA = (SERVICE_NAME = PSPRODDB) ) ) ``` My TNS\_ADMIN environment variable is set to the path of my tnsnames.ora file. The command I'm running to connect: ``` sqlplus username/password@PSPRODDB ``` After pressing enter, it hangs on the version and copyright info for about 2-3 seconds before giving me the ORA-01017 error. I know I have the username and password correctly typed in because I copied and pasted it from another application that successfully connects to the database. **Edit** I've looked at the log.xml file (in `C:\oracle\product\11.2.0\diag\tnslsnr\test\listener\alert\log.xml`), and found that there a few entries that show I'm talking to the correct listener. Here's an example of the log entry, but obfuscated for possibly sensitive info: ``` <msg time='2013-11-25T09:54:08.530-07:00' org_id='oracle' comp_id='tnslsnr' type='UNKNOWN' level='16' host_id='PSTEST100-50' host_addr='*my address*'> <txt>25-NOV-2013 09:54:08 * (CONNECT_DATA=(SERVICE_NAME=PSPRODDB)(CID=(PROGRAM=sqlplus)(HOST=*localhost*)(USER=njones))) * (ADDRESS=(PROTOCOL=tcp)(HOST=*addr*)(PORT=38906)) * establish * PSPRODDB * 0 </txt> </msg> ``` I've also since tried changing the \*SERVICE\_NAME\* element in my tnsnames.ora file to *SID*, with no difference. 
Surrounding my password with quotes didn't fix the problem either. Could there be a version problem? I'm using instantclient and sqlplus version 12.1, but the database is version 11.2. **Edit 2** Well, it's official. I'm an idiot, and that's what caused the error. I was typing in the wrong password, and I guess whatever I was copy-pasting was wrong too.
Couple ideas, in order of likelihood of causing this problem:

1) If your password starts with a non-alphabetic character, surround the password with quotes: `user/"password"@service` (note, GUI apps e.g. TOAD and SQLDeveloper don't require quotes).

2) Run

```
> tnsping service
```

and confirm your output matches the tnsnames.ora entry you think you are using.

3) On the server, run (or ask your DBA to run)

```
> lsnrctl status
```

Confirm that the service listed in your tnsnames.ora is being directed to the proper database.

EDIT: Saw Nathan's question, thought, "hmm - weird, I use tnsping all the time to validate client installs, why the heck wouldn't it be included in instantclients???" Asked Google, and lo-and-behold, it turns out TNSPING is pretty much useless. The ONLY thing it checks is that the host is reachable and that a tnslistener is running on the specified port (which you could easily check with telnet). H/T to "BillyVerreynne" on Oracle forums: <https://forums.oracle.com/message/10561771>

Yay, I learned something today! :-) On that note, I'll personally be switching to SQLPlus for deep-dive checking of TNS specifications, and recommend everyone reading this do the same. As Nathan already posted above, issues with SQLPlus connection attempts can be reviewed in $ORACLE\_BASE/diag/tnslsnr/test/listener/alert/log.xml.
ORA-01017 is pretty clear. It means that you got either the username or password wrong, or possibly that you're not *really* connecting to the database that you think you're connecting to. There's really not much more to say. Double check your connect descriptor, and make sure you didn't mistype the username or password.
ORA-01017 error in sqlplus 12.1, can connect with same credentials in other applications
[ "sql", "oracle", "oracle11g", "sqlplus", "ora-01017" ]
I have a table with 40mil records. I need to add a new INT NOT NULL column to that table, with default value = 0 When adding this column using the following: ``` ALTER TABLE myTable ADD NewColumnID int NOT NULL CONSTRAINT DF_Constraint DEFAULT 0 ``` It sets the NewColumnID to 0 for all records. When running this query on our prod table which has 40mil records, will this take a long time? Because I know doing the following takes a VERY LONG TIME: ``` UPDATE myTable SET NewColumnID = 0 ``` **UPDATE: 05 Jan 2020:** It's been a while since I've last logged into my stack-overflow account. I noticed this particular question which I posted back in 2013. I've received some bad rep for this question and I can now see why. I had to read through it several times to understand what on earth I was asking and how the answer was applicable. Seeing that it's been viewed over 6k times, perhaps it's worth (7 years later, sorry) to provide more context. **Allow me to clarify the question:** I was working for a banking software provider. We had various clients around the world and were rolling out a large update to our software which required a new column to be added to an existing table used by our software. This particular table was normally quite large depending on the size of the bank. The requirement was that when the column is first added, that a particular ID be assigned to all existing records, after which all new entries in the table will revert to a value of "0". So...during the testing phase we noticed that having the following in our upgrade script took nearly an hour to process 40m records: ``` ALTER TABLE myTable ADD NewColumnID int NOT NULL CONSTRAINT DF_Constraint DEFAULT 0 UPDATE myTable SET NewColumnID = 50 ``` The example above will add the new column and then update all existing records with NewColumnID = 50. This is what was taking nearly an hour on the hardware which it was running on. 
I appreciate that this will vary drastically depending on client's infrastructure. The reason for the question was to see if there was a faster way to accomplish the above. **Allow me to clarify the answer:** I completely understand why my answer makes no sense, but hopefully the following explanation will help: Instead of adding the column and then running an update query, you assign the value that you want all the existing records to inherit by creating a CONSTRAINT with a default value that is the value you want to update it with. The creation of the column will result in this value being automatically inserted: ``` ALTER TABLE myTable ADD CompanyID int NOT NULL CONSTRAINT DF_Constraint DEFAULT 1 (takes about 1min to complete) ``` It was essentially "killing two birds with one stone". This query completely in roughly 1min as apposed to an hour (executed on the same server). Now that the requirement for adding a new column with a default id = x (different for each client) for all existing records, the **DEFAULT 0** constraint is restored so that all newly inserted records will assume a value of 0 if no value is passed. Hence the quote: > Then just set the default value back to 0. Now the table will have > CompanyID = 1 for all records. BOOM! Apologies...this was 7 years ago and this all seems really stupid now :) but who knows, maybe this could help others with stupid requirements that requires creative hacks :)!
Thanks Aaron for your detailed approach, but I did a quick test and the simple approach would be to do the following: Some background. I'm adding a CompanyID to an existing large table. The ID refers to the company the record belongs to. Default value would be 0. But since this is going into an existing customers prod database, their company ID is 1. We have a generic upgrade script for all our clients, turns out a slight modification to this script for this specific customer yielding significant performance improvements. INSTEAD OF: ``` ALTER TABLE myTable ADD CompanyID int NOT NULL CONSTRAINT DF_Constraint DEFAULT 0 (takes about 1min to complete) UPDATE myTable SET CompanyID = 1 (will take over an hour) ``` I JUST DO THIS: ``` ALTER TABLE myTable ADD CompanyID int NOT NULL CONSTRAINT DF_Constraint DEFAULT 1 (takes about 1min to complete) ``` Then just set the default value back to 0. Now the table will have CompanyID = 1 for all records. BOOM!
The major problem is that this needs to write to every single row, which is heavily logged as one single transaction. One way to minimize the impact on the log (and this works best if you don't have silly 10% autogrow settings on your log file) is to break up the work as much as possible:

1. Add a NULLable column:

   ```
   ALTER TABLE dbo.myTable ADD NewColumnID INT CONSTRAINT DF_Constraint DEFAULT 0;
   ```

2. Update the rows in batches, say 10K rows at a time (this will minimize log impact - see [this blog post for background](http://www.sqlperformance.com/2013/03/io-subsystem/chunk-deletes)):

   ```
   BEGIN TRANSACTION;
   SELECT 1;
   WHILE @@ROWCOUNT > 0
   BEGIN
     COMMIT TRANSACTION;
     BEGIN TRANSACTION;
     UPDATE TOP (10000) dbo.myTable
       SET NewColumnID = 0
       WHERE NewColumnID IS NULL; -- without this filter the loop never terminates
   END
   COMMIT TRANSACTION;
   ```

3. Add a check constraint ([see these answers for more detail](https://dba.stackexchange.com/questions/48872/quickly-change-null-column-to-not-null)):

   ```
   ALTER TABLE dbo.myTable WITH CHECK
     ADD CONSTRAINT NewCol_Not_Null CHECK (NewColumnID IS NOT NULL);
   ```

You can save some time by using `NOCHECK` here, but [as Martin explained in his answer](https://dba.stackexchange.com/a/48936/1186), that is a one-time savings that could cost you plenty of headaches over the longer term. This was addressed in [this previous question](https://stackoverflow.com/questions/287954/how-do-you-add-a-not-null-column-to-a-large-table-in-sql-server), but the accepted answer there uses NOCHECK without any disclaimer about how an untrusted constraint can impact execution plans.
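The batching pattern translates to other engines too: update a bounded slice of still-unset rows in its own small transaction until nothing is left. A sqlite3 sketch (25 toy rows, batches of 10):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table myTable (id integer primary key, NewColumnID int)")
db.executemany("insert into myTable (NewColumnID) values (?)", [(None,)] * 25)

batches = 0
while True:
    cur = db.execute("""
        update myTable set NewColumnID = 0
        where id in (select id from myTable
                     where NewColumnID is null
                     limit 10)
    """)
    db.commit()  # one small transaction per batch keeps the log small
    if cur.rowcount == 0:
        break
    batches += 1

left = db.execute(
    "select count(*) from myTable where NewColumnID is null").fetchone()[0]
print(batches, left)  # 3 0
```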
Adding column with default value to large table
[ "sql", "sql-server", "performance" ]
I need to fill a cell with the first non-empty entry in a set of columns (from left to right) in the same row - similar to coalesce() in SQL. In the following example sheet

```
---------------------------------------
|   |  A  |  B  |  C  |  D  |
---------------------------------------
| 1 |     |  x  |  y  |  z  |
---------------------------------------
| 2 |     |     |  y  |     |
---------------------------------------
| 3 |     |     |     |  z  |
---------------------------------------
```

I want to put a cell function in each cell of column A such that I will get:

```
---------------------------------------
|   |  A  |  B  |  C  |  D  |
---------------------------------------
| 1 |  x  |  x  |  y  |  z  |
---------------------------------------
| 2 |  y  |     |  y  |     |
---------------------------------------
| 3 |  z  |     |     |  z  |
---------------------------------------
```

I know I could do this with a cascade of IF functions, but in my real sheet I have 30 columns to select from, so I would be happy if there were a simpler way.
``` =INDEX(B2:D2,MATCH(FALSE,ISBLANK(B2:D2),FALSE)) ``` This is an Array Formula. After entering the formula, press `CTRL` + `Shift` + `Enter` to have Excel evaluate it as an Array Formula. This returns the first nonblank value of the given range of cells. For your example, the formula is entered in the column with the header "a" ``` A B C D 1 x x y z 2 y y 3 z z ```
I used: ``` =IF(ISBLANK(A1),B1,A1) ``` This tests whether the first field you want to use is blank, and if so uses the other. You can use a nested IF when you have multiple fields.
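Outside Excel, the "first non-blank cell, scanning left to right" rule is a one-liner. A small Python sketch of the same logic, using the question's sample grid:

```python
rows = [["", "x", "y", "z"],
        ["", "", "y", ""],
        ["", "", "", "z"]]

# first non-blank cell per row, scanning left to right (coalesce-style)
firsts = [next((cell for cell in row if cell != ""), "") for row in rows]
print(firsts)  # ['x', 'y', 'z']
```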
Is there a coalesce-like function in Excel?
[ "sql", "excel", "xls", "coalesce" ]
I have these three tables (Soknad, Prognose and Did) in the SQL Server database:

Table Soknad has columns: S\_ID (key), S\_REFNR
Table Prognose has columns: P\_ID (key), P\_S\_ID
Table Did has columns: D\_ID (key), D\_S\_ID, Did\_Something

Prognose.P\_S\_ID is a foreign key to Soknad.S\_ID. Did.D\_S\_ID is a foreign key to Soknad.S\_ID. The tables are like this:

SOKNAD

```
S_ID | S_REFNR |
1    | abc     |
2    | cbc     |
3    | sdf     |
```

PROGNOSE

```
P_ID | P_S_ID |
10   | 1      |
11   | 2      |
```

DID

```
D_ID | D_S_ID | D_Did_Something |
100  | 1      | 1               |
101  | 1      | 1               |
102  | 1      | 0               |
103  | 2      | 1               |
104  | 2      | 1               |
```

I want to join these tables (like a view or select statement). From the Did table a count of the column Did\_Something should be returned, as well as a count of the same column where the value is 1 (one). The result should be:

```
S_ID | S_REFNR | P_ID | Count_D_Did_Something | Count_D_Did_Something_Is_One |
1    | abc     | 10   | 3                     | 2                            |
2    | cbc     | 11   | 2                     | 2                            |
3    | sdf     |      |                       |                              |
```

Any help would be appreciated!
Here you go: ``` select s.s_id, p.p_id, count(d.Did_Something) as Count_D_Did_Something, -- nulls won't be counted sum(CASE WHEN d.Did_Something = 1 THEN 1 ELSE 0 END) as Count_D_Did_Something_is_one from Soknad as s left join Prognose as p on p.P_S_ID = s.s_id left join Did as d on d.D_S_ID = s.s_id ```
I believe what you want to do is join the tables and put the counts of rows matching the left table in separate columns, one per joined table. This would accomplish that:

```
select t1.id, count(t2.id) t2_count, count(t3.id) t3_count
from table1 as t1
left outer join table2 as t2 on t2.table1_id = t1.id
left outer join table3 as t3 on t3.table1_id = t1.id
group by t1.id;
```

To accomplish the counts you want based on criteria from one of the outer joined tables, you can do it this way, using a derived table:

```
select t1.id, count(t2.id) t2_count, max(tt2.mCount) Did_SomethingCount, count(t3.id) t3_count
from table1 as t1
left outer join table2 as t2 on t2.table1_id = t1.id
left outer join (select table1_id, count(*) as mCount
                 from table2
                 where Did_Something = 1
                 group by table1_id) as tt2 on tt2.table1_id = t1.id
left outer join table3 as t3 on t3.table1_id = t1.id
group by t1.id;
```
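The accepted query can be verified against the question's sample data in sqlite3 (the `SUM(CASE ...)` conditional count ports directly):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table Soknad   (S_ID int, S_REFNR text);
create table Prognose (P_ID int, P_S_ID int);
create table Did      (D_ID int, D_S_ID int, Did_Something int);
insert into Soknad   values (1,'abc'), (2,'cbc'), (3,'sdf');
insert into Prognose values (10,1), (11,2);
insert into Did      values (100,1,1), (101,1,1), (102,1,0), (103,2,1), (104,2,1);
""")
rows = db.execute("""
    select s.S_ID, s.S_REFNR, p.P_ID,
           count(d.Did_Something),                               -- NULLs not counted
           sum(case when d.Did_Something = 1 then 1 else 0 end)
    from Soknad s
    left join Prognose p on p.P_S_ID = s.S_ID
    left join Did      d on d.D_S_ID = s.S_ID
    group by s.S_ID, s.S_REFNR, p.P_ID
    order by s.S_ID
""").fetchall()
print(rows)
# [(1, 'abc', 10, 3, 2), (2, 'cbc', 11, 2, 2), (3, 'sdf', None, 0, 0)]
```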
Join three tables with counts
[ "sql" ]
I have this SUMIF: `=SUMIF(WEZ_ARTICLE_PLANT!A:A,$E2,WEZ_ARTICLE_PLANT!AX:AX)` I want to do the same as this SUMIF in SQL. In my query:

```
WEZ_ARTICLE_PLANT!A:A = column_Material
$E2 = column_Material
WEZ_ARTICLE_PLANT!AX:AX = column_StockQty
```

I would be glad for any response!
you will have to start thinking of things in a slightly different way now that you are working with a database. ``` SELECT column_Material, SUM(column_StockQuantity) FROM YouTableName GROUP BY column_Material ``` will give you a total of all quantities for each *column\_Material*
You mean something like this? ``` Select sum(column_StockQty) from table_Name where column_Material = 'Example' ```
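Both answers can be seen side by side in a sqlite3 sketch - GROUP BY for every material's total at once, WHERE for a single SUMIF-style lookup (the table name `plant` and material codes are placeholders):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table plant (column_Material text, column_StockQty int)")
db.executemany("insert into plant values (?, ?)",
               [("M-1", 5), ("M-1", 3), ("M-2", 2)])

# GROUP BY gives every material's total in one pass ...
totals = dict(db.execute(
    "select column_Material, sum(column_StockQty) from plant group by column_Material"))

# ... while WHERE mirrors a single SUMIF lookup
one = db.execute(
    "select sum(column_StockQty) from plant where column_Material = ?",
    ("M-1",)).fetchone()[0]

print(totals, one)  # {'M-1': 8, 'M-2': 2} 8
```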
Interpreting an Excel SUMIF in SQL
[ "sql", "excel", "formulas" ]
I am trying to query a SQL Server table which has values in upper case, and it doesn't return anything. For example, with this table structure:

```
ID  Fruit
--  ----------
1   APPLE ONE
2   ORANGE TWO
3   PEAR THREE
```

the query

```
Select * from Fruits where fruit = 'APPLE ONE'
```

does not return anything. But if I change the value to "apple one" in the database and change the query to

```
Select * from Fruits where fruit = 'apple one'
```

it works. How do you get this to work with upper-case data?
You can use the `UPPER` or `LOWER` function, like this:

```
Select * from Fruits where UPPER(fruit) = 'APPLE ONE'
```

Depending on the column's collation, the `LIKE` operator may also ignore case:

```
Select * from Fruits where fruit like 'APPLE ONE'
```
Make sure your column is set to the correct collation. Or you can specify the collation in your query directly. Collations with CI are Case Insensitive. This will return your 'apple one' record: ``` select 'CI', * from table1 where myfield = 'apple one' collate SQL_Latin1_General_CP1_CI_AS ``` This will not return your 'apple one' record because it is CS (case sensitive): ``` select 'CS', * from table1 where myfield = 'apple one' collate SQL_Latin1_General_CP1_CS_AS ``` If all your queries use the same collation for this column, it's best to set it on the column. If all your queries use the same collation on all columns, it's best to set it on the database itself as a default. Eg. Setting it for the column: ``` CREATE TABLE Table1 ([myfield] varchar(10) collate SQL_Latin1_General_CP1_CI_AS) ; ```
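SQLite makes the collation point easy to see: its default BINARY collation behaves like a CS (case-sensitive) collation, and `COLLATE NOCASE` plays the role of a CI one. A sketch:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table Fruits (ID int, fruit text)")
db.executemany("insert into Fruits values (?, ?)",
               [(1, "APPLE ONE"), (2, "ORANGE TWO"), (3, "PEAR THREE")])

# default BINARY collation: case matters, nothing matches
strict = db.execute(
    "select ID from Fruits where fruit = 'apple one'").fetchall()

# fold the column, or switch the comparison's collation
folded = db.execute(
    "select ID from Fruits where upper(fruit) = 'APPLE ONE'").fetchall()
nocase = db.execute(
    "select ID from Fruits where fruit = 'apple one' collate nocase").fetchall()

print(strict, folded, nocase)  # [] [(1,)] [(1,)]
```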
sql server querying values in upper case
[ "sql", "sql-server" ]
Imagine the following table:

```
prprno     prprdt   pritcd    prqnty popono     poqnty
---------- -------- --------- ------ ---------- ------
2013100017 28-10-13 220010284 2000   2013100017 800
2013100017 28-10-13 220010284 2000   2013100018 500
2013100017 28-10-13 220010284 2000   2013100019 500
2013100017 28-10-13 220010284 2000   2013100020 200
```

I would like a query that returns a running total (prqnty - poqnty):

```
prprno     prprdt   pritcd    prqnty popono     poqnty blnce
---------- -------- --------- ------ ---------- ------ -----
2013100017 28-10-13 220010284 2000   2013100017 800    1200
2013100017 28-10-13 220010284 2000   2013100018 500    700
2013100017 28-10-13 220010284 2000   2013100019 500    200
2013100017 28-10-13 220010284 2000   2013100020 200    0
```

There is one purchase requisition (2013100017) with one item (220010284), and this item is received against different purchase order numbers (popono) with quantities (poqnty). I want a running balance for this item. This is what I have so far:

```
select pr.prcocd, pr.prprno, prprdt, pr.pritcd, pr.pritcc, iname, iunit, prsrno, prqnty
into #tmppr
from fisprq10 pr
inner join fisitem it on pr.prcocd=it.icocd and pr.pritcd=it.icode and pr.pritcc=it.icccd
where pr.prcocd='001'
  and pr.prprno between 2013100017 and 2013100017
  and pr.prprdt between '2013-10-01' and '2013-10-31'
order by pr.prprdt, pr.prprno, pr.prsrno
```

```
select po.pococd, po.popono, po.popodt, po.poprty, po.poptcc, cu.mcdesc,
       po.poqnty, po.poprno, po.poitcd, po.poitcc
into #tmppo
from fispod10 po
inner join fglcust cu on po.pococd=cu.mccocd and po.poprty=cu.mccode and po.poptcc=cu.mccccd
where po.pococd='001' and cu.mccs='S' and po.poopbl<>'Y'
  and po.poprno between 2013100017 and 2013100017
  and po.popodt <= '2013-10-31'
order by po.poprno

select pr.prprno, max(pr.prprdt) as prprdt, pr.pritcd, pr.pritcc,
       max(pr.iname) as iname, max(pr.iunit) as iunit, sum(pr.prqnty) as prqnty,
       isnull(po.popono, 0) as popono, max(isnull(po.poprty, '')) as poprty,
       max(isnull(po.poptcc, '')) as poptcc, max(isnull(po.mcdesc, '')) as mcdesc,
sum(isnull(po.poqnty, 0)) as poqnty from #tmppr pr left outer join #tmppo po on pr.prprno=po.poprno and pr.pritcd=po.poitcd and pr.pritcc=po.poitcc group by pr.prprno, pr.pritcd, pr.pritcc, po.popono order by 1, 2, 3; ```
Please try: ``` ;with T as( select *, ROW_NUMBER() over (order by prprno) RNum From YourTable ) select prprno, prprdt, pritcd, prqnty, popono, poqnty, prqnty-(select SUM(poqnty) from T b where b.RNum<=a.RNum) blnce from T a ``` [Sql Fiddle Demo](http://sqlfiddle.com/#!3/aa70d/2) For different `pritcd`, please check the below query. ``` ;with T as( select *, ROW_NUMBER() over (order by prprno) RNum From YourTable ) select prprno, prprdt, pritcd, prqnty, popono, poqnty, prqnty-(select SUM(poqnty) from T b where b.RNum<=a.RNum and b.pritcd=a.pritcd) blnce from T a ```
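A runnable sketch of the running-balance idea above, using Python's sqlite3 as a stand-in for SQL Server (an assumption: the `ROW_NUMBER` ordering is replaced by ordering on `popono`, which works because `popono` increases with each receipt in the sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE receipts (
    prprno TEXT, pritcd TEXT, prqnty INTEGER,
    popono TEXT, poqnty INTEGER
);
INSERT INTO receipts VALUES
    ('2013100017', '220010284', 2000, '2013100017', 800),
    ('2013100017', '220010284', 2000, '2013100018', 500),
    ('2013100017', '220010284', 2000, '2013100019', 500),
    ('2013100017', '220010284', 2000, '2013100020', 200);
""")

# Running balance: requisition quantity minus the cumulative received quantity,
# restricted to the same item code as in the second query above.
rows = conn.execute("""
    SELECT a.popono, a.poqnty,
           a.prqnty - (SELECT SUM(b.poqnty)
                       FROM receipts b
                       WHERE b.pritcd = a.pritcd
                         AND b.popono <= a.popono) AS blnce
    FROM receipts a
    ORDER BY a.popono
""").fetchall()
for r in rows:
    print(r)
```

The balances come out as 1200, 700, 200, 0, matching the desired output in the question.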
You can write this as a correlated subquery: ``` SELECT prprno ,prprdt ,pritcd ,prqnty , popono ,poqnty , (SELECT SUM(T2.poqnty) FROM table1 AS T2 WHERE T2.prprno = T1.prprno --one purchase requisition AND T2.pritcd = T1.pritcd AND T2.popono > t1.popono) AS blnce FROM table1 AS T1 ORDER BY T1.popono ; ```
Calculate a running total between two fields in SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I want to define the new field in a select into statement as integer. However, the NewField ends up as binary. How can I accomplish this? ``` SELECT ExistingField1, ExistingField2, NewField INTO NewTable FROM ExistingTable ``` I searched a lot but couldn't find a solution yet. Edit 1: I am performing the SELECT INTO statement from within the Microsoft Access application itself. (it is not a table in Access that points to a SQL Server table) Edit 2: NewField is created with the SELECT INTO statement. It does not exist in a pre-existing table.
The reason it ends up as binary is that the field in the existing table most likely is a binary field. You could try doing something like this:

```
SELECT ExistingField1, ExistingField2, CAST(NewField AS int)
INTO NewTable
FROM ExistingTable
```

`CAST` does not work in MS Access. However, this should work:

```
SELECT ExistingField1, ExistingField2, cInt(Field1 + Field2) AS NewField
INTO NewTable
FROM ExistingTable
```
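For a quick experiment outside Access or SQL Server, SQLite's `CREATE TABLE … AS SELECT` plays the role of `SELECT … INTO`; a sketch (table and column names are made up) showing that the `CAST` controls the new column's values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ExistingTable (ExistingField1 TEXT, NewField TEXT);
INSERT INTO ExistingTable VALUES ('a', '42'), ('b', '7');

-- SQLite's analogue of SELECT ... INTO NewTable
CREATE TABLE NewTable AS
    SELECT ExistingField1, CAST(NewField AS INTEGER) AS NewField
    FROM ExistingTable;
""")
rows = conn.execute(
    "SELECT * FROM NewTable ORDER BY ExistingField1").fetchall()
print(rows)  # the casted values come back as Python ints
```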
For Access I would use this, then the parameter dialog does not show. ``` SELECT ExistingField1, cInt(0) AS NewField into NewTable1 FROM Table1 ``` For SQL Server ``` CAST(null AS int) AS NewField ```
How to define the type (int) for a new field in SQL SELECT INTO statement in MS Access
[ "", "sql", "ms-access", "select-into", "" ]
I have the following code:

```
create table test.dbo.Users (
    Id int identity(1,1) primary key,
    Name varchar(36) not null
)

create table test.dbo.Number (
    Id int identity(1,1) primary key,
    Number varchar(10) not null,
    Name varchar(36) not null foreign key references Users.Name
)
```

The foreign key throws an error saying `Foreign key 'FK__Number__Name__1CF15040' references invalid table 'Users.Name'.` What did I do wrong?
Please see this SQLFiddle link: [Link](http://www.sqlfiddle.com/#!2/3a105/2)

```
CREATE TABLE NUMBER(
  ID INT PRIMARY KEY,
  NUMBER VARCHAR(10) NOT NULL,
  NAME VARCHAR(36) NOT NULL REFERENCES USERS(NAME)
);
```
A foreign key must reference a primary key (or unique constraint) in the other table. I would use the following code; I hope it is useful:

```
use test

create table test.dbo.Users (
    Id int identity(1,1) primary key,
    Name varchar(36) not null
)

create table test.dbo.Number (
    Id int identity(1,1) primary key,
    Number varchar(10) not null,
    Users_Id int not null
        constraint fk_Number_Users foreign key (Users_Id) references Users(Id)
        on update no action
        on delete no action
)
```
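A minimal sketch in Python's sqlite3 of the fix both answers point at — reference the parent's primary key, not a non-unique column (names adapted from the question; SQLite needs foreign-key enforcement switched on explicitly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.executescript("""
CREATE TABLE Users (
    Id   INTEGER PRIMARY KEY,
    Name TEXT NOT NULL
);
CREATE TABLE Number (
    Id     INTEGER PRIMARY KEY,
    Number TEXT NOT NULL,
    UserId INTEGER NOT NULL REFERENCES Users(Id)   -- references the PK
);
INSERT INTO Users (Name) VALUES ('alice');
INSERT INTO Number (Number, UserId) VALUES ('555-0100', 1);
""")

# Pointing at a nonexistent Users.Id is rejected.
try:
    conn.execute("INSERT INTO Number (Number, UserId) VALUES ('555-0199', 99)")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True
print(orphan_rejected)
```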
Foreign key references invalid table
[ "", "sql", "" ]
I have to write an SQL SELECT statement which is valid for both MS SQL Server and Oracle. I have to work with old customer databases, so I cannot change anything in the db design... The problem is that the table has a nullable column 'project', a varchar that is in some cases filled with an empty string - which Oracle translates to NULL. So how do I select all rows that have a non-empty 'project'?

```
WHERE project IS NOT NULL                    works for Oracle
WHERE project <> ''                          works for MS
WHERE project IS NOT NULL AND project <> ''  works for MS, but not for Oracle
```

Thanks, Marko
Because the condition `'' = ''` will be true only in SQL-Server (it's equivalent to `'' IS NOT NULL`) and the condition `'' IS NULL` will be true only in Oracle, you can use this: ``` WHERE ( project > '' AND '' = '') -- for SQL-Server OR ( project IS NOT NULL AND '' IS NULL) -- for Oracle ``` Note that if you have values that are only spaces, they will be treated differently between SQL-Server and Oracle. Test **[SQL-Fiddle-1 (Oracle)](http://sqlfiddle.com/#!4/7fd97/1)** and **[SQL-Fiddle-2 (SQL-Server)](http://sqlfiddle.com/#!3/7fd97/2)**.
You can use `NULLIF()`, which is available in both [SQL Server](http://technet.microsoft.com/en-us/library/ms177562.aspx) and [Oracle](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions107.htm#SQLRF00681) (it's part of the ANSI standard). ``` select * from table where nullif(project, '') is not null ``` It works because Oracle evaluates the empty string to NULL. It's worth noting that Oracle does not evaluate `NULLIF()` if the first expression is NULL, but it does work this way round.
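SQLite happens to behave like SQL Server here (it stores `''` and `NULL` as distinct values), so the portable `NULLIF` trick can be tried in a few lines of Python (data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (id INTEGER, project TEXT);
INSERT INTO projects VALUES (1, 'alpha'), (2, ''), (3, NULL), (4, 'beta');
""")
# NULLIF(project, '') folds empty strings into NULL, so one predicate
# filters out both empty and NULL values.
rows = conn.execute("""
    SELECT id FROM projects
    WHERE NULLIF(project, '') IS NOT NULL
    ORDER BY id
""").fetchall()
print(rows)
```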
Oracle empty string/NULL
[ "", "sql", "sql-server", "oracle", "" ]
I have the following **MySql** table:

```
mysql> describe drft1_menu;
+-------------------+---------------------+------+-----+---------------------+----------------+
| Field             | Type                | Null | Key | Default             | Extra          |
+-------------------+---------------------+------+-----+---------------------+----------------+
| id                | int(11)             | NO   | PRI | NULL                | auto_increment |
| menutype          | varchar(24)         | NO   | MUL | NULL                |                |
| title             | varchar(255)        | NO   |     | NULL                |                |
| alias             | varchar(255)        | NO   | MUL | NULL                |                |
| note              | varchar(255)        | NO   |     |                     |                |
| path              | varchar(1024)       | NO   | MUL | NULL                |                |
| link              | varchar(1024)       | NO   |     | NULL                |                |
| type              | varchar(16)         | NO   |     | NULL                |                |
| published         | tinyint(4)          | NO   |     | 0                   |                |
| parent_id         | int(10) unsigned    | NO   |     | 1                   |                |
| level             | int(10) unsigned    | NO   |     | 0                   |                |
| component_id      | int(10) unsigned    | NO   | MUL | 0                   |                |
| checked_out       | int(10) unsigned    | NO   |     | 0                   |                |
| checked_out_time  | timestamp           | NO   |     | 0000-00-00 00:00:00 |                |
| browserNav        | tinyint(4)          | NO   |     | 0                   |                |
| access            | int(10) unsigned    | NO   |     | 0                   |                |
| img               | varchar(255)        | NO   |     | NULL                |                |
| template_style_id | int(10) unsigned    | NO   |     | 0                   |                |
| params            | text                | NO   |     | NULL                |                |
| lft               | int(11)             | NO   | MUL | 0                   |                |
| rgt               | int(11)             | NO   |     | 0                   |                |
| home              | tinyint(3) unsigned | NO   |     | 0                   |                |
| language          | char(7)             | NO   | MUL |                     |                |
| client_id         | tinyint(4)          | NO   | MUL | 0                   |                |
+-------------------+---------------------+------+-----+---------------------+----------------+
24 rows in set (0.00 sec)
```

As you can see there are many fields, and if I try to execute:

```
select * from describe drft1_menu;
```

the obtained output is terrible. What do I have to do to show only some specific fields? (for example, only the id, title and alias fields of all contained records)

Thanks, Andrea
For only one column (or similar-named columns) you can use: ``` SHOW FIELDS FROM `drft1_menu` LIKE 'id' ``` -for `id` column. But that may not answer the question in common case. For that there's `INFORMATION_SCHEMA` database (sample below for columns `id`, `title`): ``` SELECT * FROM INFORMATION_SCHEMA.columns WHERE TABLE_SCHEMA='database_name' AND TABLE_NAME='drft1_menu' AND COLUMN_NAME IN ('id', 'title'); ``` -output will not look like `DESCRIBE`, but will provide same information. The good thing about this, however, is that you can specify only those description fields that you need (for example, only `COLUMN_NAME` and `DATA_TYPE` fields)
`Select id, title, alias from drft1_menu`
How to show only some specific fields in a select * query output in MySQL?
[ "", "mysql", "sql", "database", "" ]
I have two tables, each 1 TB in size, which are supposed to contain the same data. However, when I use them the same way, the results differ. So I would like to compare them record by record, in order to find out what the difference is. --- My current solution is an ugly one: I order them by the same key, dump them to local disk and compare the files with `diff`. Can someone suggest a more elegant method to achieve this?
Is it two copies of the same data? If so can you join the tables and select out the differences? [SQLFiddle](http://sqlfiddle.com/#!6/0f035/5) Quick example: ``` create table atable ( id int, field1 int, field2 varchar(16) ) create table btable ( id int, field1 int, field2 varchar(16) ) select * from atable as a join btable as b on a.id = b.id where a.field1 != b.field1 or a.field2 != b.field2 ```
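A sketch of the join-on-differences idea in Python's sqlite3 (tiny made-up tables stand in for the 1 TB ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE atable (id INTEGER, field1 INTEGER, field2 TEXT);
CREATE TABLE btable (id INTEGER, field1 INTEGER, field2 TEXT);
INSERT INTO atable VALUES (1, 10, 'x'), (2, 20, 'y'), (3, 30, 'z');
INSERT INTO btable VALUES (1, 10, 'x'), (2, 99, 'y'), (3, 30, 'z');
""")
# Rows with the same key whose other columns disagree.
diff = conn.execute("""
    SELECT a.id, a.field1, b.field1
    FROM atable a JOIN btable b ON a.id = b.id
    WHERE a.field1 <> b.field1 OR a.field2 <> b.field2
""").fetchall()
print(diff)
```

Note that `<>` treats two NULLs as not-different, and rows present in only one table are missed by an inner join; running `SELECT … EXCEPT SELECT …` in both directions is a more robust variant when that matters.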
You could try hashing the rows of table 1 using any hash function and then running through table 2 to see if there is any entry which is not already hashed. Theoretically this should be the most efficient solution. You could use MinHash/LSH hash functions to scale up.
Fastest way to compare if two tables have exactly the same contents
[ "", "sql", "hadoop", "hive", "" ]
I have a table like this: ![My table](https://i.stack.imgur.com/wnexw.jpg) I'd like to get something like this: ![My Result Table](https://i.stack.imgur.com/A5Fpu.jpg) The rows starting with 'Property' will go to the Property column. Rows starting with 'Location' will go to the 'Location' column, and rows starting with 'error' will go to the 'ErrorMessage' column. The table here contains parent-child like data. For example, Property X has a location 'abc' with two errors '1234' and '5678'. Based on the comments so far I am adding some more information. Aaron Bertrand asked: Q1. Can you absolutely rely on rowid being sequential? A1. Rowid is sequentially increasing, but not always by steps of 1. Let's look at an example. Property X starts at rowid = 1, Property Y at 8. Everything for Property X will be between rowid 1 to 7. If we go another level, Location abc starts at 2 and 'def' starts at 5. All errors for location 'abc' will be between rowid 3 to 4. Q2. And is there only one set of data for any location / property combination? Or is it possible that row 15 is Property X again, row 16 is property abc again, etc.? A2. There is only one set of data for a Property-Location combination. So if you find Property X - Location abc on row 2, you will not find it again later in the table. More information: The example here has only a limited number of rows. The actual table has many more rows than this. So far I have used a WHILE loop to get my result. I am just wondering if there is any alternative way of doing this without going through it row by row. I am using SQL 2008 R2.
This can be done in 1 query. First split the data into three parts (Property, Location and Error) and determine the Parent ID's. Finally, use regular joins to create the result: ``` with P as ( select ID, ColumnDesc from MyTable P where columnDesc like 'Property %' ), L as ( select ID, ColumnDesc, (Select MAX(P.id) from P where P.ID<L.ID ) as ParentID from MyTable L where columnDesc like 'Location %' ), E as ( select ID, ColumnDesc, (Select MAX(L.id) from L where L.ID<E.ID ) as ParentID from MyTable E where columnDesc like 'error %' ) select P.ColumnDesc as Property, L.ColumnDesc as Location, E.ColumnDesc as Error FROM p JOIN L ON (L.ParentId = P.ID) JOIN E ON (E.ParentID = L.ID) ORDER BY P.ID, L.ID, E.ID ```
Interesting problem in my opinion. Here's a solution in Oracle. I think it's the shortest one posted so far, just using a single query with two sub-queries. ``` WITH data1 AS ( SELECT 1 AS id,'Property X' AS columnDesc FROM DUAL UNION SELECT 2 AS id,'Location abc' FROM DUAL UNION SELECT 3 AS id,'error 1234' FROM DUAL UNION SELECT 4 AS id,'error 3456' FROM DUAL UNION SELECT 5 AS id,'Property Y' FROM DUAL UNION SELECT 6 AS id,'Location abc' FROM DUAL UNION SELECT 7 AS id,'error 1234' FROM DUAL UNION SELECT 8 AS id,'Location def' FROM DUAL UNION SELECT 9 AS id,'error 12'FROM DUAL ) SELECT d2.columnDesc,d3.columnDesc,d.columnDesc FROM data1 d, data1 d2, data1 d3 WHERE d.columnDesc NOT LIKE 'Property%' AND d.columnDesc NOT LIKE 'Location%' AND d2.id < d.id AND d2.id = (SELECT max(id) FROM data1 WHERE columnDesc LIKE 'Property%' AND id < d.id) AND d3.id < d.id AND d3.id = (SELECT max(id) FROM data1 WHERE columnDesc LIKE 'Location%' AND id < d.id); ```
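The "closest preceding parent via MAX(id)" technique used by both answers above can be exercised end to end with Python's sqlite3 (sample rows reconstructed from the answers' test data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (id INTEGER, columnDesc TEXT);
INSERT INTO src VALUES
    (1, 'Property X'),   (2, 'Location abc'), (3, 'error 1234'),
    (4, 'error 3456'),   (5, 'Property Y'),   (6, 'Location abc'),
    (7, 'error 1234'),   (8, 'Location def'), (9, 'error 12');
""")
out = conn.execute("""
    WITH P AS (SELECT id, columnDesc FROM src
               WHERE columnDesc LIKE 'Property %'),
         L AS (SELECT loc.id, loc.columnDesc,
                      (SELECT MAX(P.id) FROM P WHERE P.id < loc.id) AS parentId
               FROM src loc WHERE loc.columnDesc LIKE 'Location %'),
         E AS (SELECT err.id, err.columnDesc,
                      (SELECT MAX(L.id) FROM L WHERE L.id < err.id) AS parentId
               FROM src err WHERE err.columnDesc LIKE 'error %')
    SELECT P.columnDesc, L.columnDesc, E.columnDesc
    FROM P
    JOIN L ON L.parentId = P.id
    JOIN E ON E.parentId = L.id
    ORDER BY P.id, L.id, E.id
""").fetchall()
for row in out:
    print(row)
```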
How can I place these row values into columns without looping?
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I am trying to identify a list of duplicates from a table, and my table looks like this:

Column1-Column2
1. 1-1
2. 1-2
3. 1-3
4. 2-1
5. 2-2
6. 2-3
7. 3-1
8. 3-2
9. 3-4
10. 4-1
11. 4-2
12. 4-3
13. 4-4
14. 5-1
15. 5-2
16. 5-4

* 1 has a group of {1,2,3}
* 2 has a group of {1,2,3}
* And are duplicates
* 3 has a group of {1,2,4}
* 5 has a group of {1,2,4}
* And are duplicates
* 4 has a group of {1,2,3,4}
* And has no friends ;)

Column 2 really is a varchar column, but I made everything numbers for simplicity's sake. I have been playing with CheckSum_Agg, but it has false positives. :( My output would look something like this:

* 1,2
* 3,5

Where I select the min ID for the first column and all of the other values for the second column. Non-duplicates are omitted. Another example might look like:

* 1,2
* 1,6
* 3,5
* 3,7
* 3,8
* (Notice no "4" in the list, I just added other "pairs" to show that 1 and 3 are the lowest. If 4 is in the list like 4,0 or 4,null, I can make that work too.)

I'm using SQL Server 2012. Thanks!
``` --This code produced the results I was looking for in the original post. WITH t AS ( SELECT column1, COUNT(*) c FROM #tbl GROUP BY column1 ), tt AS( SELECT t1.column1 as 'winner', t2.column1 as 'loser' FROM t t1 INNER JOIN t t2 ON ( t1.c = t2.c AND t1.column1 < t2.column1 ) WHERE NOT EXISTS ( SELECT column2 FROM #tbl WHERE column1 = t1.column1 EXCEPT SELECT column2 FROM #tbl WHERE column1 = t2.column1 ) ) SELECT fullList.winner, fullList.loser FROM ( SELECT winner FROM tt tt1 EXCEPT SELECT loser FROM tt tt2 ) winnerList JOIN tt fullList on winnerList.winner = fullList.winner ORDER BY fullList.winner, fullList.loser ```
``` WITH t AS ( SELECT column1, COUNT(*) c FROM MyTable GROUP BY column1 ) SELECT t1.column1, t2.column1 FROM t t1 INNER JOIN t t2 ON ( t1.c = t2.c AND t2.column1 > t1.column1 ) WHERE NOT EXISTS ( SELECT column2 FROM MyTable WHERE column1 = t1.column1 EXCEPT SELECT column2 FROM MyTable WHERE column1 = t2.column1 ) ```
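The COUNT-then-`NOT EXISTS`/`EXCEPT` approach above ports directly to SQLite, so it can be checked against the question's sample groups with Python (expected duplicate pairs: 1-2 and 3-5):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pairs (column1 INTEGER, column2 INTEGER);
INSERT INTO pairs VALUES
    (1,1),(1,2),(1,3),(2,1),(2,2),(2,3),
    (3,1),(3,2),(3,4),(4,1),(4,2),(4,3),(4,4),
    (5,1),(5,2),(5,4);
""")
dupes = conn.execute("""
    WITH t AS (SELECT column1, COUNT(*) AS c FROM pairs GROUP BY column1)
    SELECT t1.column1, t2.column1
    FROM t t1
    JOIN t t2 ON t1.c = t2.c AND t1.column1 < t2.column1
    WHERE NOT EXISTS (
        SELECT column2 FROM pairs WHERE column1 = t1.column1
        EXCEPT
        SELECT column2 FROM pairs WHERE column1 = t2.column1
    )
    ORDER BY t1.column1
""").fetchall()
print(dupes)
```

Because the join already requires the two groups to have the same size, an empty one-way `EXCEPT` is enough to prove the two sets are equal.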
SQL Table with a list of repeating values (duplicates) to find
[ "", "sql", "sql-server", "hash", "aggregate", "checksum", "" ]
I have a table which contains a set of translations and I want to query the original table with the translations inserted (if a translation exists). For example: ``` CREATE TABLE foo ( id TEXT, key TEXT ); CREATE TABLE translations ( key TEXT, value TEXT ); INSERT INTO foo(id, key) VALUES('1', '1'); INSERT INTO foo(id, key) VALUES('2', '2'); INSERT INTO foo(id, key) VALUES('3', '5'); INSERT INTO foo(id, key) VALUES('4', '6'); INSERT INTO translations(key, value) VALUES('5', '7'); INSERT INTO translations(key, value) VALUES('6', '8'); ``` I want a query to return the table: ``` 1 1 2 2 3 7 4 8 ``` SQLfiddle: <http://sqlfiddle.com/#!7/ee8cc> Thanks!
Like this?

```
SELECT f.id, f.key, COALESCE(t.value, f.key) AS TranslatedKey
FROM foo f
LEFT JOIN translations t ON f.key = t.key
```

With a LEFT JOIN, the values from `t` will be NULL if no match is found. `COALESCE` returns the first non-null value from the list of inputs.
Another way, using case: ``` SELECT f.id, CASE WHEN t.key IS NULL THEN f.key ELSE t.value END AS Column2 FROM foo f LEFT JOIN translations t ON f.key=t.key ``` The coalesce way is neater in my opinion though.
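The question's own schema is SQLite, so the `COALESCE`/LEFT JOIN form can be verified directly from Python against the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (id TEXT, key TEXT);
CREATE TABLE translations (key TEXT, value TEXT);
INSERT INTO foo VALUES ('1','1'), ('2','2'), ('3','5'), ('4','6');
INSERT INTO translations VALUES ('5','7'), ('6','8');
""")
# Keys with a translation are replaced; the rest fall back to the key itself.
rows = conn.execute("""
    SELECT f.id, COALESCE(t.value, f.key) AS translated
    FROM foo f LEFT JOIN translations t ON f.key = t.key
    ORDER BY f.id
""").fetchall()
print(rows)
```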
How can you replace a value via a LEFT JOIN in SQL?
[ "", "sql", "sqlite", "" ]
I have a large table of businesses. There is a field with the sales of each company, and I want to remove every record with sales over 2000000, but the values are VARCHAR and have commas in them, like 2,000,000. Will something like this work?

```
DELETE FROM `tablename` WHERE `sales` > 2,000,000
```
Remove the commas and force a number conversion:

```
DELETE FROM `tablename` WHERE replace(sales, ',', '') * 1 > 2000000
```

By the way, it would be better to change the data type of the `sales` column to a numeric one.
``` delete from table1 where replace( replace(sales, ',', ''), '"', '' ) >=200000 ``` Here is [fiddle](http://www.sqlfiddle.com/#!2/b2b65c/1)
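A sketch of the strip-commas-then-compare idea in Python's sqlite3 (SQLite needs an explicit CAST where MySQL's `* 1` forces an implicit numeric conversion; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE businesses (name TEXT, sales TEXT);
INSERT INTO businesses VALUES
    ('small', '1,500,000'), ('big', '2,500,000'), ('edge', '2,000,000');
""")
conn.execute("""
    DELETE FROM businesses
    WHERE CAST(REPLACE(sales, ',', '') AS INTEGER) > 2000000
""")
rows = conn.execute("SELECT name FROM businesses ORDER BY name").fetchall()
print(rows)  # 'big' is gone; the 2,000,000 row stays (not strictly greater)
```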
Removing MySQL Records From Table by Sales Amount
[ "", "mysql", "sql", "" ]
I'm sure I must be doing something stupid, but as is often the case I can't figure out what it is. I'm trying to run this query:

```
SELECT `f`.`FrenchWord`, `f`.`Pronunciation`, `e`.`EnglishWord`
FROM (`FrenchWords` f)
INNER JOIN `FrenchEnglishMappings` m ON `m`.`FrenchForeignKey`=`f`.`id`
INNER JOIN `EnglishWords` e ON `e`.`id`=`m`.`EnglishForeignKey`
WHERE `f`.`Pronunciation` = '[whatever]';
```

When I run it, what happens seems quite weird to me. I get the results of the query fine, 2 rows in about 0.002 seconds. However, I also get a huge spike in CPU, and `SHOW PROCESSLIST` shows two identical processes for that query with state 'Copying to tmp table on disk'. These seem to keep running endlessly until I kill them or the system freezes. None of the tables involved is big - between 100k and 600k rows each. `tmp_table_size` and `max_heap_table_size` are both 16777216. Edit: `EXPLAIN` on the statement gives (after a further edit that reduced the key length of Pronunciation to 112):

```
+----+-------------+-------+--------+-------------------------------------------------------------+-----------------+---------+----------------------------+------+----------------------------------------------+
| id | select_type | table | type   | possible_keys                                               | key             | key_len | ref                        | rows | Extra                                        |
+----+-------------+-------+--------+-------------------------------------------------------------+-----------------+---------+----------------------------+------+----------------------------------------------+
| 1  | SIMPLE      | f     | ref    | PRIMARY,Pronunciation                                       | Pronunciation   | 112     | const                      | 2    | Using where; Using temporary; Using filesort |
| 1  | SIMPLE      | m     | ref    | tmpindex,CombinedIndex,FrenchForeignKey,EnglishForeignKey   | tmpindex        | 4       | dict.f.id                  | 1    | Using index                                  |
| 1  | SIMPLE      | e     | eq_ref | PRIMARY,id                                                  | PRIMARY         | 4       | dict.m.EnglishForeignKey   | 1    |                                              |
+----+-------------+-------+--------+-------------------------------------------------------------+-----------------+---------+----------------------------+------+----------------------------------------------+ ``` I'd be grateful if someone could point out what might be causing this. **What I really don't understand is what MySQL is doing - surely if the query is complete then it doesn't need to do anything else?** # UPDATE Thanks for all the responses. I learnt something from all of them. This query was made massively faster after following the advice of nrathaus. I added a PronunciationHash binary(16) column to FrenchWords that contains unhex( md5 ( Pronunciation ) ). That is indexed with a keylen of 16 (vs 600+ for the varchar index on Pronunciation), and queries are much faster now.
As the EXPLAIN shows, your key size is HUGE: 602. This requires MySQL to write the data to disk. You need to reduce the key length greatly; I believe the recommendation is below 128. I suggest you create a column called MD5_FrenchWord which will contain the MD5 value of FrenchWord. Then use this column for the GROUP BY. (This assumes that when you group, you only need values to match exactly; you are not grouping on the actual long value itself.)
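The hash-column idea (which the question's update implemented as `unhex(md5(Pronunciation))`) can be sketched with Python's hashlib and sqlite3 — a fixed 16-byte digest stands in for an index key that would otherwise be hundreds of bytes wide (sample words and pronunciations are invented):

```python
import hashlib
import sqlite3

def pron_hash(pron):
    # Fixed-width 16-byte key, regardless of how long the pronunciation is.
    return hashlib.md5(pron.encode("utf-8")).digest()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT, pron TEXT, pron_hash BLOB)")
conn.execute("CREATE INDEX ix_words_hash ON words(pron_hash)")
for word, pron in [("chat", "Sa"), ("chats", "Sa"), ("chien", "SjE~")]:
    conn.execute("INSERT INTO words VALUES (?, ?, ?)",
                 (word, pron, pron_hash(pron)))

# Equality lookups go through the narrow hash column.
rows = conn.execute(
    "SELECT word FROM words WHERE pron_hash = ? ORDER BY word",
    (pron_hash("Sa"),),
).fetchall()
print(rows)
```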
You are misusing `GROUP BY`. This clause is entirely pointless unless you also have a summary function such as `MAX(something)` or `COUNT(*)` in your `SELECT` clause. Try removing `GROUP BY` and see if it helps. It's not clear what you're trying to do with `GROUP BY`. But you might try `SELECT DISTINCT` if you're trying to dedup your result set.
MySQL query with 2 joins, large keylen leads to 'Copying to tmp table on disk' process hanging forever
[ "", "mysql", "sql", "codeigniter", "join", "sql-order-by", "" ]
I have, in SQL Server, a parameter such as:

```
@DoBEmp = '24/6/1990'
```

passed to SQL. When I try to insert it into a table, this error is raised:

```
Error converting data type varchar to date.
```

How can I insert it properly?
Have you tried [setting the date format](http://msdn.microsoft.com/fr-fr/library/ms189491.aspx)? ``` SET DATEFORMAT dmy; ```
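The same principle applies on the application side: parse the string with an explicit day-first format and hand the database an unambiguous value. A Python sketch:

```python
from datetime import datetime

raw = "24/6/1990"                    # day/month/year, as in the question
parsed = datetime.strptime(raw, "%d/%m/%Y")
iso = parsed.strftime("%Y-%m-%d")    # unambiguous form for any SQL dialect
print(iso)
```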
``` @DoBEmp = CONVERT(date,'24/6/1990',103) ``` dd/mm/yyyy is style 103 [CAST and CONVERT (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms187928.aspx)
Error converting data type varchar to date
[ "", "sql", "sql-server-2008", "" ]
Working with a legacy system, many tables have no primary keys. The DB is MS SQL Server 2008; it was migrated from FoxPro years ago. I would like to add PKs to the tables without them, but am getting push back "because it may break something." Try as I might, I can think of no realistic scenario where that can occur. The people pushing back cannot give me an example either; they just want to be "safe". Being safe is leading to inaction, which is causing unnecessary work. Anyway, can anyone give me a realistic scenario where this could cause an issue in existing .NET code? Or confirm that it will not break existing code. The reasoning for adding PKs is to be able to use Entity Framework to generate a DAL. There are issues with some tables where there is no suitable PK. Perhaps I should have asked: what are the risks of adding a new column to an existing table? However, this is done very frequently to some of these tables with no problems.
Much as I hate to help you use Entity Framework, I have to say that if you have checked the column to make sure there are no nulls and no duplicates, you should be fine to declare it as a PK. Adding a new column is riskier, but making an existing column a PK should be fairly straightforward, and frankly, if anything breaks as a result (i.e., something tried to insert a duplicate when you didn't want any), then that is good: if the column is supposed to be unique, it is a bug if something tries to make it not unique. What your company needs, though, is a clear understanding of how to successfully refactor databases, so that they don't feel it is impossible. I highly suggest you read Refactoring Databases by Ambler and Sadalage. ([http://www.amazon.com/Refactoring-Databases-Evolutionary-Addison-Wesley-Signature-ebook/dp/B001QAP36E/ref=sr\_1\_1?s=digital-text&ie=UTF8&qid=1385070572&sr=1-1&keywords=refactoring+databases](https://rads.stackoverflow.com/amzn/click/com/B001QAP36E)). There are methods to reduce risk when refactoring databases.
There are no guarantees. One area where I can imagine breakage is SELECT \* statements and code that expects a certain number of columns in a certain order coming back. Adding a column, PK or not, could break that. But adding a PK for the sake of a PK doesn't really add value. A PK helps CRUD (especially the UD parts :) ) so you'd have to change code to take advantage of that.
Can adding a new column and making it a PK on a table break existing code?
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I have a stored procedure that takes an input parameter `@CategoryKeys varchar`, and parses its contents into a temp table, `#CategoryKeys`. ``` -- create the needed temp table. CREATE TABLE #CategoryKeys ( CategoryKey SMALLINT ); -- fill the temp table if necessary IF Len(rtrim(ltrim(@CategoryKeys))) > 0 BEGIN INSERT INTO #CategoryKeys (CategoryKey) SELECT value FROM dbo.String_To_SmallInt_Table(@CategoryKeys, ','); END ``` If the temp table has rows, I would like to pass the table into a separate stored procedure. How would I go about creating a parameter in the separate procedure to hold the temp table?
When you create a #TEMP table, the "scope" is bigger than just the procedure it is created in. Below is a sample:

```
IF EXISTS ( SELECT * FROM INFORMATION_SCHEMA.ROUTINES
            WHERE ROUTINE_TYPE = N'PROCEDURE'
              and ROUTINE_SCHEMA = N'dbo'
              and ROUTINE_NAME = N'uspProc002' )
BEGIN
    DROP PROCEDURE [dbo].[uspProc002]
END
GO

CREATE Procedure dbo.uspProc002
AS
BEGIN
    /* Uncomment this code if you want to be more explicit about bad "wiring" */
    /*
    IF OBJECT_ID('tempdb..#TableOne') IS NULL
    begin
        THROW 51000, 'The procedure expects a temp table named #TableOne to already exist.', 1;
    end
    */

    /* Note, I did not Create #TableOne in this procedure.
       It "pre-existed". An if check will ensure that it is there. */
    IF OBJECT_ID('tempdb..#TableOne') IS NOT NULL
    begin
        Insert into #TableOne ( SurrogateKey , NameOf )
        select 2001, 'Hello-From-uspProc002'
    end
END
GO

IF EXISTS ( SELECT * FROM INFORMATION_SCHEMA.ROUTINES
            WHERE ROUTINE_TYPE = N'PROCEDURE'
              and ROUTINE_SCHEMA = N'dbo'
              and ROUTINE_NAME = N'uspProc001' )
BEGIN
    DROP PROCEDURE [dbo].[uspProc001]
END
GO

CREATE Procedure dbo.uspProc001 ( @Param1 int )
AS
BEGIN
    IF OBJECT_ID('tempdb..#TableOne') IS NOT NULL
    begin
        drop table #TableOne
    end

    CREATE TABLE #TableOne ( SurrogateKey int , NameOf varchar(24) )

    Insert into #TableOne ( SurrogateKey , NameOf )
    select 1001, 'Hello-From-uspProc001'

    Select 'before-nested-call' as MyStatus1, * from #TableOne

    EXEC dbo.uspProc002

    Select 'after-nested-call' as MyStatus1, * from #TableOne

    IF OBJECT_ID('tempdb..#TableOne') IS NOT NULL
    begin
        drop table #TableOne
    end
END
GO

exec dbo.uspProc001 0
```

**HAVING SAID THAT, PLEASE DO NOT CODE UP A LOT OF THESE. IT'S THE SQL EQUIVALENT OF A GLOBAL VARIABLE, AND IT IS DIFFICULT TO MAINTAIN AND BUG PRONE.**
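The scope point translates to other engines too. In SQLite, a temp table is visible to everything running on the same connection, so two separate functions can share one much like the two procedures above (a rough analogy only — it does not exactly match SQL Server's call-stack scoping):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def create_and_fill(conn):
    conn.execute("CREATE TEMP TABLE tableone (k INTEGER, name TEXT)")
    conn.execute("INSERT INTO tableone VALUES (1001, 'from-caller')")

def nested(conn):
    # A different function, same connection: the temp table is still there.
    conn.execute("INSERT INTO tableone VALUES (2001, 'from-nested')")

create_and_fill(conn)
nested(conn)
rows = conn.execute("SELECT k, name FROM tableone ORDER BY k").fetchall()
print(rows)
```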
While understanding scoping addresses the direct need, thought it might be useful to add a few more options to the mix to elaborate on the suggestions from the comments. 1. Pass XML into the stored procedure 2. Pass a table-valued parameter into the stored procedure **1. Pass XML into the stored procedure** With XML passed into a parameter, you can use the XML directly in your SQL queries and join/apply to other tables: ``` CREATE PROC sp_PassXml @Xml XML AS BEGIN SET NOCOUNT ON SELECT T.Node.value('.', 'int') AS [Key] FROM @Xml.nodes('/keys/key') T (Node) END GO ``` Then a call to the stored procedure for testing: ``` DECLARE @Text XML = '<keys><key>1</key><key>2</key></keys>' EXEC sp_PassXml @Text ``` Sample output of a simple query. ``` Key ----------- 1 2 ``` **2. Pass a table-valued parameter into the stored procedure** First, you have to define the user defined type for the table variable to be used by the stored procedure. ``` CREATE TYPE KeyTable AS TABLE ([Key] INT) ``` Then, you can use that type as a parameter for the stored proc (the `READONLY` is required since only `IN` is supported and the table cannot be changed) ``` CREATE PROC sp_PassTable @Keys KeyTable READONLY AS BEGIN SET NOCOUNT ON SELECT * FROM @Keys END GO ``` The stored proc can then be called with a table variable directly from SQL. ``` DECLARE @Keys KeyTable INSERT @Keys VALUES (1), (2) EXEC sp_PassTable @Keys ``` Note: If you are using .NET, then you can pass the SQL parameter from a DataTable type matching the user defined type. Sample output from the query: ``` Key ----------- 1 2 ```
How to pass a temp table as a parameter into a separate stored procedure
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have 2 tables as listed below. I need to list doctor details with the patients each doctor has seen in the last 6 months and the number of patients seen.

```
- Patient
PatientNo | Name | Address | DrNo (FK) | Datevisit

- Doctor
DrNo | Name | Contact
```

My final output should be as below:

```
DrNo | Name | Contact | PatientSeen
```

My code is definitely wrong; I would appreciate some help, as I am totally new to SQL.

```
select *, count(select * from patient where drno is not null)
from doctor, patient
where doctor.drno = patient.drno
and trunc(patient.datevisit,'MM') >= trunc(add_months(sysdate,-6), 'MM')
```
Try this: ``` SELECT DrNo, Name, Contact, (SELECT COUNT(*) FROM Patient WHERE Patient.DrNo = Doctor.DrNo AND MONTHS_BETWEEN(sysdate,Patient.datevisit) <= 6) as PatientSeen FROM Doctor ```
```
select d.DrNo, d.Name, d.Contact, count(p.PatientNo)
from Doctor d
left join Patient p
  on p.DrNo = d.DrNo
 and MONTHS_BETWEEN(sysdate, p.Datevisit) <= 6
group by d.DrNo, d.Name, d.Contact
```
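The per-doctor `COUNT` can be tried out in Python's sqlite3 (SQLite has no `MONTHS_BETWEEN`, so a date-modifier comparison stands in; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE doctor (DrNo INTEGER, Name TEXT, Contact TEXT);
CREATE TABLE patient (PatientNo INTEGER, DrNo INTEGER, Datevisit TEXT);
INSERT INTO doctor VALUES (1, 'Dr A', '111'), (2, 'Dr B', '222');
INSERT INTO patient VALUES
    (10, 1, date('now', '-1 months')),
    (11, 1, date('now', '-2 months')),
    (12, 2, date('now', '-12 months'));
""")
rows = conn.execute("""
    SELECT d.DrNo, d.Name, d.Contact,
           (SELECT COUNT(*) FROM patient p
            WHERE p.DrNo = d.DrNo
              AND p.Datevisit >= date('now', '-6 months')) AS PatientSeen
    FROM doctor d
    ORDER BY d.DrNo
""").fetchall()
print(rows)  # Dr B's only visit is a year old, so PatientSeen is 0
```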
Oracle SQL count with date requirement
[ "", "sql", "oracle", "" ]
``` SELECT EmailOfConsumer, COUNT(EmailOfConsumer) as 'NumberOfOrders', SUM(CAST(Total as money)) as 'TotalValue', (SUM(CAST(Total as money))/COUNT(EmailOfConsumer)) as 'AverageValue' FROM webshop GROUP BY EmailOfConsumer ORDER BY TotalValue DESC ``` This brings back: ``` EmailOfConsumer NumberOfOrders TotalValue AverageValue test 1 2000000000.10 2000000000.10 ``` I would like to add a search on `WHERE NumberOfOrders = '1'` I have tried adding `WHERE COUNT(EmailOfConsumer) = '1'` but I get this error: ``` An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference. ```
Use

```
HAVING COUNT(EmailOfConsumer) = 1
```

The HAVING clause places restrictions on aggregates, whereas the WHERE clause can only put restrictions on individual rows.
Use GROUP BY and then a HAVING clause. Refer to [this](http://www.w3schools.com/sql/sql_having.asp).

```
SELECT EmailOfConsumer, COUNT(EmailOfConsumer) as 'NumberOfOrders',
       SUM(CAST(Total as money)) as 'TotalValue',
       (SUM(CAST(Total as money))/COUNT(EmailOfConsumer)) as 'AverageValue'
FROM webshop
GROUP BY EmailOfConsumer
HAVING COUNT(EmailOfConsumer) = 1
ORDER BY TotalValue DESC
```
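A quick demonstration of the WHERE-vs-HAVING distinction in Python's sqlite3 (made-up rows; `Total` is kept as text to mirror the CAST in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE webshop (EmailOfConsumer TEXT, Total TEXT);
INSERT INTO webshop VALUES
    ('one@example.com', '10.00'),
    ('two@example.com', '5.00'),
    ('two@example.com', '7.50');
""")
# HAVING filters the groups after aggregation; WHERE could not see COUNT(*).
rows = conn.execute("""
    SELECT EmailOfConsumer, COUNT(*) AS NumberOfOrders,
           SUM(CAST(Total AS REAL)) AS TotalValue
    FROM webshop
    GROUP BY EmailOfConsumer
    HAVING COUNT(*) = 1
""").fetchall()
print(rows)
```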
Query that Counts records with a WHERE clause
[ "", "sql", "sql-server", "count", "where-clause", "" ]
When I try to alter a data type of my table, I get this horrible message from SQL Server Management Studio: "Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created". I already tried to make the modification with T-SQL and it worked, but why can't I just do this in design mode? I'm using SQL Server 2008 R2.
I would strongly suggest that you use T-SQL to make changes, or at the very least, preview the scripts that the Designers generate before committing them. However, if you want to do this in the designer, you can turn off that lock by going to Tools...Options...Designers..Table and Database Designers.. and unclick the "prevent saving changes that require table re-creation". That lock is on by default for a reason; it keeps you from committing some change that is obfuscated by the designer. EDIT: As noted in the comment below, you can't preview the changes unless you disable the lock. My point is that if you want to use the table-designer to work on a table with this feature disabled, you should be sure to always preview the changes before committing them. In short, options are: * BEST PROCESS: Use T-SQL * NOT GREAT: Disable the lock, use Table Designer, and ALWAYS preview changes * CRAZY TALK: Click some buttons.
To change the Prevent saving changes that require the table re-creation option, follow these steps: Open SQL Server Management Studio (SSMS). On the Tools menu, click Options. In the navigation pane of the Options window, click Designers. Select or clear the Prevent saving changes that require the table re-creation check box, and then click OK.
Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created
[ "", "sql", "sql-server-2008", "" ]
I am trying to retrieve some data (`coursename`) from one of my tables but the following error is coming all the time > ORA-00918: column ambiguously defined the command I am typing is: ``` select bookno,courno,coursename from booking, course,coursename where bookno = 6200 and booking.courno = course.courno and coursename.coursenameno = course.coursenameno ``` I have some tables as described : ``` CREATE TABLE BOOKING (BOOKNO NUMBER (4) NOT NULL, COURNO NUMBER (4) NOT NULL, BOOKDATE DATE, BOOKCUSTPAYMENT VARCHAR (20), CONSTRAINT PK_BOOK PRIMARY KEY (BOOKNO,COURNO), CONSTRAINT FK_BOOK FOREIGN KEY (COURNO) REFERENCES COURSE(COURNO) ON DELETE CASCADE); CREATE TABLE CUSTOMER (CUSTNO NUMBER (4) NOT NULL, -- creation of primary-key PROFNO NUMBER (4) NOT NULL, CUSTFNAME VARCHAR (15), CUSTLNAME VARCHAR (15), CUSTDOB DATE, CUSTPHONEDAY NUMBER (15), CUSTPHONEEVE NUMBER (15), CONSTRAINT PK_CUST PRIMARY KEY (CUSTNO), CONSTRAINT FK_PROF FOREIGN KEY (PROFNO) REFERENCES PROFICIENCY(PROFNO) ON DELETE CASCADE); CREATE TABLE COURSENAME ( COURSENAMENO NUMBER (4) NOT NULL, COURSENAME VARCHAR (20), COURSEDESC VARCHAR (120), COURSEDAYCOST NUMBER (7,2), CONSTRAINT PK_COURSENAME PRIMARY KEY (COURSENAMENO)); CREATE TABLE COURSE (COURNO NUMBER (4) NOT NULL, -- creation of primary-key COURSTART DATE, COUREND DATE, COURSENAMENO NUMBER (4) NOT NULL, ACCDAYNO NUMBER (4) NOT NULL, FOODNO NUMBER (4) NOT NULL, TRANSNO NUMBER (4) NOT NULL, CONSTRAINT PK_COURSE PRIMARY KEY (COURNO), CONSTRAINT FK_COURSENAME FOREIGN KEY (COURSENAMENO) REFERENCES COURSENAME(COURSENAMENO)); ``` I am researching but I cannot figure out what is happening !!!
When the same column appears in several tables you need to specify which table's column should be used. As a general rule, it's always a good idea to prefix the column with the table (or an alias), as it improves readability and speeds up parsing. So, for your query, try (changes in upper case) ``` select BOOKING.bookno,BOOKING.courno,COURSENAME.coursename from booking, course,coursename where BOOKING.bookno = 6200 and booking.courno = course.courno and coursename.coursenameno = course.coursenameno ```
You need to specify from which table the columns in `SELECT` and `WHERE` statements should be retrieved: ``` select booking.bookno, booking.courno, course.coursename from booking, course, coursename where booking.bookno = 6200 and booking.courno = course.courno and coursename.coursenameno = course.coursenameno ``` Also, consider using ANSI SQL-92+ JOIN syntax like so: ``` select booking.bookno, booking.courno, course.coursename from booking inner join course on booking.courno = course.courno inner join coursename on coursename.coursenameno = course.coursenameno where booking.bookno = 6200 ``` See [Bad habits to kick : using old-style JOINs][1] for some reasoning about it. [1]: https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins
ORA-00918: column ambiguously defined
[ "", "sql", "oracle", "" ]
I have an Access Database with a table [tblManipulate] with the following four fields populated with data: ``` [tblManipulate].[Name] [tblManipulate].[Description] [tblManipulate].[Price] [tblManipulate].[Account code] ``` I also have an 150 entry table of descriptions called [tblDescLookup] that needs to be utilized like a lookup table in order to manipulate account codes. Example entries follow: ``` [tblDescLookup].[Description Lookup] [tblDescLookup].[Account Code Result] *demonstration* 10000 *coding* 12000 *e-mail* 13000 ``` --- What is the best way to take every record in [tblManipulate] and check the [tblManipulate].[Description] field against [tblDescLookup].[Description Lookup], assigning the account code result into the original table if a 'like' match is found? This seems to me like one of those instances where Access is not the best tool for the job, but it is what I have been instructed to use. I would appreciate any help or insight (or alternatives!). Thank you!
Something like this should do it for you. The lookup descriptions already contain wildcards, so `Like` can compare them directly. ``` Dim Description As String Dim lookupDescription As String Dim rs As DAO.Recordset Set rs = CurrentDb.OpenRecordset("SELECT * FROM tblManipulate") If Not (rs.EOF And rs.BOF) Then rs.MoveFirst 'good habit Do Until rs.EOF = True Description = rs("Description") Dim rsLookUp As DAO.Recordset Set rsLookUp = CurrentDb.OpenRecordset("SELECT * FROM tblDescLookup") If Not (rsLookUp.EOF And rsLookUp.BOF) Then rsLookUp.MoveFirst 'good habit Do Until rsLookUp.EOF = True lookupDescription = rsLookUp("Description Lookup") If Description Like lookupDescription Then rs.Edit rs("Account code") = rsLookUp("Account Code Result") rs.Update End If rsLookUp.MoveNext Loop Else MsgBox "No records in the recordset." End If rs.MoveNext Loop Else MsgBox "No records in the recordset." End If ```
Oy. You're going to need a loop here. I would open up tblDescLookup in a recordset: ``` Set rec = CurrentDB.OpenRecordset ("Select * from tblDescLookup") ``` Then loop through each record and run the query that way: ``` Do While rec.EOF = False Set rec2 = CurrentDB.OpenRecordset ("Select * from rec where Description like '" & rec("Description Lookup") & "'") rec.MoveNext Loop ``` Or maybe you need to make that an Update statement instead? I can't write that off the top of my head, but you get the idea.
Using a 'lookup' table in MS-ACCESS for an update query
[ "", "sql", "ms-access", "vba", "ms-access-2010", "" ]
I want to show the value only when the sum is higher than 2 Strange output ``` select 1+3 > 2; ?column? ---------- t (1 register) ``` --- ``` ERROR: column "val" does not exist line 1: select 1+3 as val where val > 2; ^ ``` --- ``` ERROR: syntax error at or near "CASE" line 1: select 1+3 as val CASE val > 2; ^ ``` What is the correct way? None of these seems to work.
You cannot use a derived column in the `where` clause; there are many discussions on SO about this. One way to do it is to use a subquery or a CTE: ``` select val from (select 1+3 as val) as v where val > 2 ``` or ``` with cte as ( select 1+3 as val ) select val from cte where val > 2 ```
You need a subselect, because the columns defined in the select clause aren't available for use in where clauses: ``` select val from (select 1+3 as val) as vals where val > 2 ``` A CTE also works: ``` with vals as ( select 1+3 as val ) select val from vals where val > 2 ```
sum with sql and direct condition
[ "", "sql", "postgresql", "" ]
I am trying to find the oldest person in my Members' birthday table from Vermont and New York. My Members resembles the following: ``` Members ------- MemberID, Firstname, Lastname, Birthday, Region ``` I formulated the following subquery: ``` SELECT lastname FROM members WHERE region = 'VT' AND year(birthday) > (SELECT year(birthday) FROM members WHERE region = 'NY') ``` SQL query system tells me that it returns more than one row. What am I missing in it and is it logically correct? Again, am I asking how I can find every member in Vermont who is older than all the members from NY.
You need to reduce the NY birthdays to a single value, so that there is only one value on the right side of the comparison. Being older means being born earlier, so a VT member who is older than every NY member must have a birthday before the earliest NY birthday. You may try this: ``` select lastname from members where region = 'VT' and birthday < (select min(birthday) from members where region = 'NY') ``` The comparison operator `<` requires that there be only one value on the right side, which is why the subquery uses `MIN`.
``` select top 1 * from table where region='vermont' or region ='newyork' order by year(birthday) desc ```
The oldest person in a birthday table. SQL subquery
[ "", "mysql", "sql", "" ]
Previously for my PHP app, I used a cron job that increments the health of a user in SQL every 10 minutes and the cron job script incremented the health of all users. For my next app, I tried using MySQL events to increment the health every minutes for each individual user and ran into some problems with them not working after awhile ([MySQL events stop working after awhile](https://stackoverflow.com/questions/11730104/mysql-events-stop-working-after-awhile)) What's the best way to do this if I were to create a new app in Ruby on Rails? I'm open to using MySQL or PostgreSQL. This is for a game where users will fight each other and lose health. edit: Sometimes the user will encounter another user, and I need to select that user based on their health among other things. So I need the actual health stored in the database.
Instead of updating *every* record in the database every 10 minutes, store a last-modified timestamp in the same row as the health. Every time you read the player_health from the database, add (current_time - last_modified) / (10 min) to the value. Every time you write player_health to the database, update the last_modified.
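A minimal sketch of that lazy-regeneration idea, assuming a 10-minute regeneration interval; the function and field names are illustrative, not tied to any particular framework or schema:

```python
import time

REGEN_INTERVAL = 600  # seconds per +1 health, i.e. every 10 minutes

def effective_health(stored_health, last_modified, now=None, max_health=100):
    """Compute current health without any periodic job.

    Health regenerates by 1 point per REGEN_INTERVAL seconds since the
    row was last written; the stored value only changes on reads/writes.
    """
    now = time.time() if now is None else now
    regenerated = int((now - last_modified) // REGEN_INTERVAL)
    return min(max_health, stored_health + regenerated)

# Example: stored 40 health, last written 25 minutes ago -> +2 regenerated
print(effective_health(40, last_modified=0, now=1500))  # -> 42
```

On a write (e.g. after a fight), you would store `effective_health(...)` back along with the new timestamp, so other users selecting opponents by health still see an up-to-date value.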
I would create a rake task that increases all users' health by 10, and call it using the awesome [whenever gem](https://github.com/javan/whenever) every 10 minutes. **UPDATE** However, as Dan said in his comment, it might be inefficient to do such a huge DB update every 10 minutes (especially if you have a huge number of users) if you can just update every user's health when he requests it. But that's subject to how your game actually works.
Best way to increment health of a user in a game every minute
[ "", "sql", "ruby-on-rails", "" ]
I have a great problem with joining two MSSQL databases together (on the same server) using LEFT JOIN. I run this SQL from the database OLAB\_DB and get the error: ``` The multi-part identifier "OLAP_DB.dbo.OLAP_invoice.UserID" could not be bound. ``` There seem to be a problem with the DB not being able to find itself, and I have no idea of how to solve this. I have double and triple checked the spelling and rewrote the SQL several times, but now I just have to give up and ask for help :( **This doesn't work:** ``` SELECT TOP 200 COALESCE(LTRIM(RTRIM(contact_db.dbo.ContactTable.EmailAdr)),LTRIM(RTRIM(contact_db.dbo.CustomerTable.EmailAdr))) AS EMAIL, OLAP_invoice.OrdreLinjeID AS ORDERNO OLAP_invoice.SalgsPris AS PRICE, OLAP_invoice.UserID AS CONTACTID FROM OLAP_invoice,contact_db.dbo.CustomerTable LEFT JOIN contact_db.dbo.ContactTable ON OLAP_DB.dbo.OLAP_invoice.UserID = contact_db.dbo.ContactTable.UserID WHERE contact_db.dbo.CustomerTable.ClientID = OLAP_invoice.ClientID ``` But skipping the left join and just getting data from the two different databases works just fine. **This works just fine:** ``` SELECT TOP 200 LTRIM(RTRIM(contact_db.dbo.CustomerTable.EmailAdr)) AS EMAIL, LTRIM(RTRIM(contact_db.dbo.ContactTable.UserID)) AS EMAIL2, OLAP_invoice.OrdreLinjeID AS ORDERNO OLAP_invoice.SalgsPris AS PRICE, OLAP_invoice.UserID AS CONTACTID FROM OLAP_invoice,contact_db.dbo.CustomerTable WHERE contact_db.dbo.CustomerTable.ClientID = OLAP_invoice.ClientID AND contact_db.dbo.ContactTable.UserID = OLAP_invoice.UserID ``` The reason I need the LEFT JOIN, is because some orders are not registered with a UserID (only ClientID). I have checked the access rights and there is no problem to access the fields individually, the problem occurs when I have to compare the values in the LEFT JOIN and specifies the absolute path. Do you have any idea of what can be wrong?
You can try this: ``` SELECT TOP 200 COALESCE(LTRIM(RTRIM(contact_db.dbo.ContactTable.EmailAdr)),LTRIM(RTRIM(contact_db.dbo.CustomerTable.EmailAdr))) AS EMAIL, OLAP_invoice.OrdreLinjeID AS ORDERNO, OLAP_invoice.SalgsPris AS PRICE, OLAP_invoice.UserID AS CONTACTID FROM OLAP_invoice LEFT JOIN contact_db.dbo.ContactTable ON OLAP_invoice.UserID = contact_db.dbo.ContactTable.UserID, contact_db.dbo.CustomerTable WHERE contact_db.dbo.CustomerTable.ClientID = OLAP_invoice.ClientID ``` (Note the comma after `ORDERNO`, and that the join condition qualifies the table the same way as the `FROM` clause does.)
You are selecting from `OLAP_invoice`, but joining to `OLAP_DB.dbo.OLAP_invoice`. You need to qualify the table exactly the same way every time, or use an alias. So either: ``` FROM OLAP_DB.dbo.OLAP_invoice left join contact_db.dbo.ContactTable on OLAP_DB.dbo.OLAP_invoice... ``` OR ``` FROM OLAP_invoice LEFT JOIN contact_db.dbo.ContactTable ON OLAP_invoice... ``` Or you could use an alias as well, less typing that way. ``` FROM OLAP_invoice OI LEFT JOIN contact_db.dbo.ContactTable CT ON OI... ```
Error when left joining two databases (SQL Server)
[ "", "sql", "sql-server", "database", "join", "left-join", "" ]
I wrote a query which splits a string and show me as a value I want using SUBSTR ``` SELECT SUBSTR ('imagelocation/r1.jpg', 15) AS image_location FROM dual ``` I am getting the output as r1.jpg but I only want the value to come as r1. Please help
Try this: ``` SELECT SUBSTR('imagelocation/r1.jpg',15,2) AS image_location FROM dual ```
``` select SUBSTR ( 'imagelocation/r1.jpg', INSTR('imagelocation/r1.jpg', '/')+1, INSTR('imagelocation/r1.jpg', '.') - INSTR('imagelocation/r1.jpg', '/') - 1 ) AS image_location FROM dual ``` This takes the characters between the `/` and the `.`, so it still works when the file name or the extension has a different length. **[SUBSTR Function in Oracle](http://docs.oracle.com/javadb/10.6.1.0/ref/rrefsqlj93082.html)** **[SQL Fiddle](http://www.sqlfiddle.com/#!4/d41d8/21012)**
Splitting of a String using query
[ "", "sql", "oracle", "oracle10g", "" ]
I have used this SP before. Now, I am trying to permission a user to 50 odd databases that start with the same letters, using the code below. It looks like it does not like "GO" in the code. Why is that ? and what is the work around? Thanks for your time.. :) RM ``` exec sp_MSForEachDB ' IF ''?'' LIKE ''MYDBNames%'' BEGIN Use [?] Go CREATE USER [MYDOMAIN\Analysts] FOR LOGIN [MYDOMAIN\Analysts] GO EXEC sp_addrolemember N''db_owner'', N''MYDOMAIN\Analysts'' GO END ``` '
I just explained this in another question yesterday ([here](https://stackoverflow.com/questions/20102165/sql-server-use-database-precedence-issue/20102814#20102814)). The essence is this: `GO` isn't a SQL statement, it's an SSMS/SQLCMD command that is used to separate batches (groups of SQL statements that are compiled together). So you cannot use it in things like stored procedures or Dynamic SQL. Also, very few statement contexts can cross over a `GO` boundary (transactions and session-level temp tables are about it). However, because both stored procedures and Dynamic SQL establish their own separate batches/execution contexts, you can use these to get around the normal need for `GO`, like so: ``` exec sp_MSForEachDB ' IF ''?'' LIKE ''MYDBNames%'' BEGIN Use [?] EXEC('' CREATE USER [MYDOMAIN\Analysts] FOR LOGIN [MYDOMAIN\Analysts] '') EXEC('' EXEC sp_addrolemember N''''db_owner'''', N''''MYDOMAIN\Analysts'''' '') END ' ```
The word `GO` is a batch separator and is not a SQL keyword. In SSMS you can go to options and change it to anything - `COME` for example. Try this: ``` exec sp_MSForEachDB ' IF ''?'' LIKE ''MYDBNames%'' BEGIN; Use [?]; CREATE USER [MYDOMAIN\Analysts] FOR LOGIN [MYDOMAIN\Analysts]; EXEC sp_addrolemember N''db_owner'', N''MYDOMAIN\Analysts''; END;' ```
sp_MSForEachDB doesn't seem to like GO
[ "", "sql", "sql-server", "t-sql", "" ]
My table has entries like ``` 01-JAN-92 12.00.00.000000000 AM -04:00 01-JAN-86 12.00.00.000000000 AM -04:00 03-JAN-01 12.00.00.000000000 AM +00:00 03-JAN-01 12.00.00.000000000 AM -04:00 ``` And I want to be able to count all the entries that I have for a specific day, instead of taking into consideration the time as well (the last two entries). So far this is what I have ``` SELECT ACTIONDATE, count(ACTIONDATE) AS count FROM mytable GROUP BY ACTIONDATE ``` How do I make it so that it groups it by just the day instead of the whole entry?
You can use the `TRUNC` to get rid of the time portion: ``` SELECT TRUNC(ACTIONDATE, 'DD'), count(1) AS count FROM mytable GROUP BY TRUNC(ACTIONDATE, 'DD'); ``` `TRUNC` truncates the part of the date up to the specified part of it, in this case, the day. For example, using TRUNC with 'DD' on `SYSDATE` returns: ``` SELECT TRUNC(SYSDATE, 'DD') FROM dual; ``` ``` TRUNC(SYSDATE,'DD') ------------------- 13/11/21 00:00 ``` [TRUNC function in documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions201.htm)
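The truncate-before-grouping idea carries over to other engines. Here is a sketch using Python's sqlite3 purely as a convenient test bed, where SQLite's `date()` plays the role of Oracle's `TRUNC(..., 'DD')`; the sample data is made up to mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (actiondate TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [
    ("2001-01-03 00:00:00",),
    ("2001-01-03 14:30:00",),
    ("1992-01-01 00:00:00",),
])
# Truncate each timestamp to its day before grouping, so entries that
# differ only in time-of-day fall into the same group.
rows = conn.execute(
    "SELECT date(actiondate) AS day, COUNT(*) "
    "FROM mytable GROUP BY date(actiondate) ORDER BY day"
).fetchall()
print(rows)  # -> [('1992-01-01', 1), ('2001-01-03', 2)]
```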
Use the DATE() function: ``` SELECT DATE(ACTIONDATE), COUNT(ACTIONDATE) AS count FROM mytable GROUP BY DATE(ACTIONDATE) ``` EDIT: Sorry, this is in fact MySQL syntax. For Oracle, the accepted answer is more appropriate.
SQL group by portion of the string
[ "", "sql", "oracle", "" ]
Here is a basic example of what I am trying to achieve: ``` create table #testing ( tab varchar(max), a int, b int, c int ) insert into #testing VALUES ('x',1, 2, 3) insert into #testing VALUES ('y',1, 2, 3) insert into #testing VALUES ('x', 4, 5, 6) select * from #testing ``` Which will Produce the table: ``` tab a b c ----------------------- x 1 2 3 y 1 2 3 x 4 5 6 ``` I then want to compare rows on 'tab' based on the values of a,b,c: ``` select a,b,c from #testing where tab = 'x' except select a,b,c from #testing where tab= 'y' ``` Which gives me the answer I was expecting: ``` a b c ------------ 4 5 6 ``` However I want to also include the Tab column in my resultset, so I want somthing like this: ``` Select tab,a,b,c from #testing where ???? (select a,b,c from #testing where tab = 'x' except select a,b,c from #testing where tab= 'y') ``` How would I achieve this?
Use `not exists`: ``` select a.* from #testing a where a.tab = 'x' and not exists ( select * from #testing t where t.a = a.a and t.b = a.b and t.c = a.c and t.tab = 'y' ) ``` **And here you get SQL Fiddle demo: [DEMO](http://sqlfiddle.com/#!6/335b2/2)**
Although the answer from @gzaxx does produce a correct result for this test data, the more generalized version is below, where I left 'x' and 'y' out of the statements. ``` select a.* from #testing a where not exists ( select * from #testing t where t.a = a.a and t.b = a.b and t.c = a.c and t.tab <> a.tab ) ```
SQL: EXCEPT Query
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "" ]
I have a table `Person` with a column `id` that references a column `id` in table `Worker`. What is the difference between these two queries? They yield the same results. ``` SELECT * FROM Person JOIN Worker ON Person.id = Worker.id; ``` and ``` SELECT * FROM Person, Worker WHERE Person.id = Worker.id; ```
**There is no difference at all**. First representation makes query more readable and makes it look very clear as to which join corresponds to which condition.
The queries are logically equivalent. The comma operator is equivalent to an `[INNER] JOIN` operator. The comma is the older style join operator. The JOIN keyword was added later, and is favored because it also allows for OUTER join operations. It also allows for the join predicates (conditions) to be separated from the `WHERE` clause into an `ON` clause. That improves (human) readability. --- **FOLLOWUP** This answer says that the two queries in the question are equivalent. We shouldn't mix old-school comma syntax for join operation with the newer `JOIN` keyword syntax in the same query. If we do mix them, we need to be aware of a difference in the order of precedence. excerpt from MySQL Reference Manual <https://dev.mysql.com/doc/refman/5.6/en/join.html> > `INNER JOIN` and `,` (comma) are semantically equivalent in the absence of a join condition: both produce a Cartesian product between the specified tables (that is, each and every row in the first table is joined to each and every row in the second table). > > However, the precedence of the comma operator is less than that of `INNER JOIN`, `CROSS JOIN`, `LEFT JOIN`, and so on. If you mix comma joins with the other join types when there is a join condition, an error of the form `Unknown column 'col_name' in 'on clause'` may occur. Information about dealing with this problem is given later in this section.
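The equivalence claimed above is easy to check with a quick sketch; this uses Python's sqlite3 as a convenient engine (SQLite also treats the comma as an inner join), with made-up sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Person (id INTEGER, name TEXT);
    CREATE TABLE Worker (id INTEGER, wage INTEGER);
    INSERT INTO Person VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO Worker VALUES (2, 50);
""")
# Old-style comma join with the predicate in WHERE:
old_style = conn.execute(
    "SELECT * FROM Person, Worker WHERE Person.id = Worker.id").fetchall()
# JOIN ... ON syntax:
new_style = conn.execute(
    "SELECT * FROM Person JOIN Worker ON Person.id = Worker.id").fetchall()
print(old_style == new_style)  # -> True
print(new_style)               # -> [(2, 'Bob', 2, 50)]
```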
What's the difference between comma separated joins and join on syntax in MySQL?
[ "", "mysql", "sql", "join", "syntax", "" ]
I have a table in Sql Sever 2005 : ``` id eid name datetime -- |----|------- |------------------------ 1 | 1 | john | 2013-11-18 15:30:00.000 2 | 1 | john | 2013-11-18 14:10:00.000 3 | 1 | john | 2013-11-18 13:30:00.000 4 | 1 | john | 2013-11-18 16:00:00.000 5 | 1 | john | 2013-11-18 17:00:00.000 6 | 2 | Richard| 2013-11-18 13:40:00.000 7 | 2 | Richard| 2013-11-18 16:20:00.000 8 | 3 | Mandy | 2013-11-18 20:22:00.000 9 | 3 | Mandy | 2013-11-18 20:20:00.000 10| 4 | Micheal| 2013-11-18 13:00:00.000 ``` Input will be a date such as - 2013-11-18 15:50:00.000 Expected Output : Need Minimum and Maximum datetime adjacent(closest) to input date... Grouping by eid is also required. ``` id eid name AdjacentMinimumDateTime AdjacentMaximumDateTime -- |----|------- |---------------------------|------------------------ 1 | 1 | john | 2013-11-18 15:30:00.000 | 2013-11-18 16:00:00.000 6 | 2 | Richard| 2013-11-18 13:40:00.000 | 2013-11-18 16:20:00.000 8 | 3 | Mandy | NULL | 2013-11-18 20:20:00.000 9 | 4 | Micheal| 2013-11-18 13:00:00.000 | NULL ```
Give this a try: ``` WITH BEFORE AS ( SELECT eid, max(datetime) date FROM t WHERE datetime <= '2013-11-18 15:50:00.000' GROUP BY eid ), AFTER AS ( SELECT eid, min(datetime) date FROM t WHERE datetime >= '2013-11-18 15:50:00.000' GROUP BY eid ) SELECT t.eid, t.name, max(b.date) beforeDate, min(a.date) afterDate FROM t LEFT JOIN BEFORE b ON t.eid = b.eid LEFT JOIN AFTER a ON t.eid = a.eid GROUP BY t.eid, t.name ORDER BY t.eid ``` Or the non-CTE version: ``` SELECT t.eid, t.name, max(b.date) beforeDate, min(a.date) afterDate FROM t LEFT JOIN ( SELECT eid, max(datetime) date FROM t WHERE datetime <= '2013-11-18 15:50:00.000' GROUP BY eid ) b ON t.eid = b.eid LEFT JOIN ( SELECT eid, min(datetime) date FROM t WHERE datetime >= '2013-11-18 15:50:00.000' GROUP BY eid ) a ON t.eid = a.eid GROUP BY t.eid, t.name ORDER BY t.eid ``` I've added repeated dates to test it works with them too. Output: ``` | EID | NAME | BEFOREDATE | AFTERDATE | |-----|---------|----------------------------|----------------------------| | 1 | john | November, 18 2013 15:30:00 | November, 18 2013 16:00:00 | | 2 | Richard | November, 18 2013 13:40:00 | November, 18 2013 16:20:00 | | 3 | Mandy | (null) | November, 18 2013 20:20:00 | | 4 | Michael | November, 18 2013 13:00:00 | (null) | | 5 | Mosty | November, 18 2013 15:00:00 | November, 18 2013 16:00:00 | ``` Fiddle [here](http://sqlfiddle.com/#!6/86b01/1).
Try This... ``` SELECT MIN(id) [id], eid, name, (SELECT MAX(datetime) FROM table t1 WHERE t1.datetime < inputdate AND t1.eid = t.eid) [AdjacentMinimumDateTime], (SELECT MIN(datetime) FROM table t2 WHERE t2.datetime > inputdate AND t2.eid = t.eid) [AdjacentMaximumDateTime] FROM table t GROUP BY t.eid, t.Name ```
SQL Server - Need Minimum and Maximum date time adjacent(closest) to input date
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I keep getting this error `Missing semicolon (;) at end of SQL statement.` when I run this code: ``` Dim cmd3 As OleDbCommand = New OleDbCommand("INSERT INTO Tickers (Quarter_Balance_Sheets) VALUES (Scrapalbe) WHERE Ticker = A;", con) cmd3.ExecuteNonQuery() ``` What am I doing wrong?
Edited, sorry. After reading again: you cannot use WHERE on an INSERT statement; lose the WHERE clause or make it an UPDATE statement. INSERT INTO ... WHERE is not a valid query. If the insert you've posted has correct values and columns, the update should be: ``` Dim cmd3 As OleDbCommand = New OleDbCommand("UPDATE Tickers SET Quarter_Balance_Sheets = 'Scrapalbe' WHERE Ticker = 'A';", con) cmd3.ExecuteNonQuery() ```
Missing quotes `WHERE Ticker = 'A'` and in `VALUES ('Scrapalbe')` edit: @Engerlost is right, `where` is not applicable to `insert`; you still need quotes but drop `where...` completely
Missing semicolon (;) at end of SQL statement. Semicolon already exists
[ "", "sql", "vb.net", "where-clause", "insert-into", "oledbcommand", "" ]
My question is very simple but I cannot get my head around my statement: ``` SELECT * FROM table 1 WHERE (row1 = 0 AND row2 > 0) OR (row3 is NULL AND row4 > 0) ``` This gives me a selection of rows that is correct for one option. What I want is making a selection of all rows where the where clause is not true in total ``` SELECT * FROM table 1 WHERE NOT((row1 = 0 AND row2 > 0) OR (row3 is NULL AND row4 > 0)) ``` what is wrong is ``` SELECT * FROM table 1 WHERE (row1 > 0 AND row2 = 0) OR (row3 is NOT NULL AND row4 = 0) ```
I think this is what you are looking for? I've assumed you meant fields rather than rows though... ``` SELECT * FROM table1 WHERE not exists (select * from table1 where ((field1 = 0 and field2 > 0) OR (field3 IS NULL AND field4 > 0))) ``` Maybe see this for further information on NOT IN AND NOT EXISTS to exclude results based on your subquery specification: [NOT IN vs NOT EXISTS](https://stackoverflow.com/questions/173041/not-in-vs-not-exists)
I suspect your problem is to do with `NULL` values in columns, because any predicate with NULL evaluates to NULL, so is neither true or false: Imagine a simple table `T` with one column (Col): ``` Col ----- 1 2 3 NULL ``` So if we run: ``` SELECT Col FROM T WHERE Col = 1; ``` We get one row, where Col = 1, therefore logically if you run: ``` SELECT Col FROM T WHERE NOT (Col = 1); ``` Or ``` SELECT Col FROM T WHERE Col <> 1 ``` You would expect the remaining 3 rows, but in both cases you will only get: ``` Col ---- 2 3 ``` This is because `NOT(NULL = 1)` and `NULL <> 1` are both `NULL`, so not true, so the 4th row isn't returned. The way to get the result you are after is to use `EXCEPT` ``` SELECT Col FROM T EXCEPT SELECT Col FROM T WHERE Col = 1 ``` **[Examples on SQL Fiddle](http://sqlfiddle.com/#!3/d225f4/2)**
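The three-valued logic described above can be reproduced with a small sketch; this uses Python's sqlite3, which follows the same NULL semantics (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,), (None,)])

# NULL <> 1 evaluates to NULL (not true), so the NULL row is filtered out:
not_equal = conn.execute(
    "SELECT col FROM t WHERE col <> 1 ORDER BY col").fetchall()
print(not_equal)  # -> [(2,), (3,)]

# EXCEPT treats two NULLs as non-distinct, so the NULL row survives:
except_rows = conn.execute(
    "SELECT col FROM t EXCEPT SELECT col FROM t WHERE col = 1 ORDER BY col"
).fetchall()
print(except_rows)  # -> [(None,), (2,), (3,)]
```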
sql multiple conditions together not true
[ "", "sql", "" ]
How can I add characters and chain two db queries in the puts statement of Ruby? I'm using sqlite 3 The desired output I want is `Sam - 32` I imagine the code would look something like this: `puts $db.execute(SELECT first_name FROM info) + " - " + $db.execute(SELECT age FROM info)` I know that there is an issue with converting the string to an array. Any help would be appreciated!
At least with sqlite3, this is what gives the desired output (the SQL has to be passed as a string): ``` puts $db.execute("SELECT first_name || ' - ' || age FROM info") ```
Is this what you are looking for? ``` $db.execute("SELECT CONCAT(first_name, ' - ', age) as name_and_age FROM info") ```
Formatting SQL output in Ruby
[ "", "sql", "ruby", "" ]
I have a table that have 5 columns, and instead of update, I've done insert of all rows(stupid mistake). How to get rid of duplicated records. They are identical except of the id. I can't remove all records, but I want do delete half of them. ex. table: ``` +-----+-------+--------+-------+ | id | name | name2 | user | +-----+-------+--------+-------+ | 1 | nameA | name2A | u1 | | 12 | nameA | name2A | u1 | | 2 | nameB | name2B | u2 | | 192 | nameB | name2B | u2 | +-----+-------+--------+-------+ ``` How to do this? I'm using Microsoft Sql Server.
Please try: ``` with c as ( select *, row_number() over(partition by name, name2, [user] order by id) as n from YourTable ) delete from c where n > 1; ```
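The ROW_NUMBER() approach can be sketched end to end. This sketch uses Python's sqlite3 (window functions need SQLite 3.25+); since SQLite cannot DELETE through a CTE the way SQL Server can, the duplicate ids are collected first and then deleted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t (id INTEGER, name TEXT, name2 TEXT, "user" TEXT)')
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, "nameA", "name2A", "u1"),
    (12, "nameA", "name2A", "u1"),
    (2, "nameB", "name2B", "u2"),
    (192, "nameB", "name2B", "u2"),
])
# Number the rows within each (name, name2, user) group by id; every row
# with n > 1 is a duplicate of the lowest-id row in its group.
dupes = [r[0] for r in conn.execute("""
    SELECT id FROM (
        SELECT id, ROW_NUMBER() OVER (
            PARTITION BY name, name2, "user" ORDER BY id) AS n
        FROM t)
    WHERE n > 1""")]
conn.executemany("DELETE FROM t WHERE id = ?", [(i,) for i in dupes])
remaining = conn.execute("SELECT id FROM t ORDER BY id").fetchall()
print(remaining)  # -> [(1,), (2,)]
```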
Try the following. ``` DELETE FROM MyTable WHERE ID NOT IN ( SELECT MAX(ID) FROM MyTable GROUP BY Name, Name2, User) ``` That is untested so may need adapting. The following video will provide you with some more information about this query. [Video](http://www.youtube.com/watch?v=ioDJ0xVOHDY)
SQL delete almost identical rows
[ "", "sql", "sql-server", "duplicates", "sql-delete", "" ]
I have 3 tables containing similar rows of data. I need to select 100 rows from all of the three tables with the following conditions: No more than 25 rows can be selected from Table A --> (name it count\_a) No more than 40 rows can be selected from Table B --> (count\_b) Any number of rows can be selected from Table C (count\_c) but the number should be count\_c = 100 - (count\_a + count\_b) ![Combine 3 tables in one collection of rows](https://i.stack.imgur.com/92Yp7.png) Here is what I tried: ``` SELECT * FROM ( SELECT * FROM TABLE_A WHERE ROWNUM <= 25 UNION ALL SELECT * FROM TABLE_B WHERE ROWNUM <= 40 UNION ALL SELECT * FROM TABLE_C ) WHERE ROWNUM <=100 ``` But the query is too slow and does not always give me 100 rows.
Try to add `WHERE ROWNUM <= 100` to the last select: ``` SELECT * FROM ( SELECT TABLE_A.*, 1 as OrdRow FROM TABLE_A WHERE ROWNUM <= 25 UNION ALL SELECT TABLE_B.*, 2 as OrdRow FROM TABLE_B WHERE ROWNUM <= 40 UNION ALL SELECT TABLE_C.*, 3 as OrdRow FROM TABLE_C WHERE ROWNUM <= 100 ) WHERE ROWNUM <=100 ORDER BY OrdRow; ``` Also you can try: ``` SELECT * FROM TABLE_A WHERE ROWNUM <= 25 UNION ALL SELECT * FROM TABLE_B WHERE ROWNUM <= 40 UNION ALL SELECT * FROM TABLE_C WHERE ROWNUM <= 100 - (select count(*) FROM TABLE_A WHERE ROWNUM <= 25) - (select count(*) FROM TABLE_B WHERE ROWNUM <= 40) ```
Technically, you'd have to do something like this in order to guarantee that you'll always get rows from TABLE\_A and TABLE\_B if they exist: ``` SELECT * FROM ( SELECT * FROM ( SELECT 'A' t, TABLE_A.* FROM TABLE_A WHERE ROWNUM <= 25 UNION ALL SELECT 'B' t, TABLE_B.* FROM TABLE_B WHERE ROWNUM <= 40 UNION ALL SELECT 'C' t, TABLE_C.* FROM TABLE_C ) ORDER BY t ) WHERE ROWNUM <= 100; ``` This is because the optimizer is *allowed* to run the subqueries in any order it likes - e.g. in parallel. With regard to performance, I suspect that the sort op will not add too much time to the execution time because it's only sorting a maximum of 100 rows anyway.
How can I collectively select 100 rows from 3 different tables?
[ "", "sql", "oracle", "" ]
I'm trying to write an aggregate query in SQL which returns the count of all records joined to a given record in a table; If no records were joined to the given record, then the result for that record should be `0`: ## Data My database looks like this (I'm not able to change the structure, unfortunately): ``` MESSAGE ---------------------------------------------- MESSAGEID SENDER SUBJECT ---------------------------------------------- 1 Tim Rabbit of Caerbannog 2 Bridgekeeper Bridge of Death MESSAGEPART ---------------------------------------------- MESSAGEID PARTNO CONTENT ---------------------------------------------- 1 0 (BLOB) 1 1 (BLOB) 3 0 (BLOB) ``` (`MESSAGEPART` has a composite `PRIMARY KEY("MESSAGEID", "PARTNO")`) ## Desired output Given the data above I should get something like this: ``` MESSAGEID COUNT(*) ----------------------------------------------- 1 2 2 0 ``` It seems obvious that I need to do a left join on the `MESSAGE` table, but how do I return a count of `0` for rows where the joined columns from `MESSAGEPART` are `NULL`? I've tried the following: ## Logic I've tried ``` SELECT m.MESSAGEID, COUNT(*) FROM MESSAGE m LEFT JOIN MESSAGEPART mp ON mp.MESSAGEID = m.MESSAGEID GROUP BY m.MESSAGEID; ``` However, this returns ``` MESSAGEID COUNT(*) ----------------------------------------------- 1 2 2 1 ``` I've also tried ``` SELECT mp.MESSAGEID, COUNT(*) FROM MESSAGE m LEFT JOIN MESSAGEPART mp ON mp.MESSAGEID = m.MESSAGEID GROUP BY mp.MESSAGEID; ``` but this returns ``` MESSAGEID COUNT(*) ----------------------------------------------- 1 2 1 ``` What am I doing wrong here?
How about something like this: ``` SELECT m.MESSAGEID, sum((case when mp.messageid is not null then 1 else 0 end)) FROM MESSAGE m LEFT JOIN MESSAGEPART mp ON mp.MESSAGEID = m.MESSAGEID GROUP BY m.MESSAGEID; ``` The COUNT() function will count every row, even if it has null. Using SUM() and CASE, you can count only non-null values. EDIT: A simpler version taken from the top comment: ``` SELECT m.MESSAGEID, COUNT(mp.MESSAGEID) FROM MESSAGE m LEFT JOIN MESSAGEPART mp ON mp.MESSAGEID = m.MESSAGEID GROUP BY m.MESSAGEID; ```
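A quick way to confirm the `COUNT(mp.MESSAGEID)` behavior is to rebuild the question's tables in an in-memory database; this sketch uses Python's sqlite3, which follows the same NULL-skipping rule for COUNT(column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE message (messageid INTEGER, sender TEXT);
    CREATE TABLE messagepart (messageid INTEGER, partno INTEGER);
    INSERT INTO message VALUES (1, 'Tim'), (2, 'Bridgekeeper');
    INSERT INTO messagepart VALUES (1, 0), (1, 1), (3, 0);
""")
# COUNT(mp.messageid) ignores the NULLs produced by unmatched outer-join
# rows, so message 2 comes out as 0 rather than 1.
rows = conn.execute("""
    SELECT m.messageid, COUNT(mp.messageid)
    FROM message m
    LEFT JOIN messagepart mp ON mp.messageid = m.messageid
    GROUP BY m.messageid
    ORDER BY m.messageid""").fetchall()
print(rows)  # -> [(1, 2), (2, 0)]
```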
You *first* want to count in your MESSAGEPART table before joining, I think. Try this: ``` select m.MessageId , COALESCE(c, 0) as myCount FROM MESSAGE m LEFT JOIN (SELECT MESSAGEID , count(*) c FROM MESSAGEPART GROUP BY MESSAGEID) mp ON mp.MESSAGEID = m.MESSAGEID ```
Counting number of joined rows in left join
[ "", "sql", "oracle", "join", "count", "left-join", "" ]
I have this table in my Database and i hava a history with entrys: Tabelname: Activity ``` ActivityID | DateBegin | DateEnd 1 2013-01-01 2013-01-15 1 2013-01-15 9999-12-31 2 2013-01-20 2013-03-15 ``` Now i want to write a Query which will return the id with the DateEnd = 9999-12-31. In this example i want the return only id 2 I wrote this query, but it doesn't function: (I get a return statement of NULL) ``` SELECT ActivityID FROM [dbo].[Activity] where NOT EXISTS (SELECT ActivityID from Activity where DateEnd ='9999-12-31') ``` Can somebody help me ? Thanks a lot
``` SELECT ActivityID FROM dbo.Activity As x WHERE NOT EXISTS ( SELECT ActivityID FROM dbo.Activity WHERE DateEnd ='9999-12-31' AND ActivityID = x.ActivityID ) ```
If you want to return all the IDs that don't have any row with date 9999-12-31, you should first check which IDs you don't want to be returned, then avoid them in your query. So the query becomes: ``` SELECT ActivityID FROM [dbo].[Activity] WHERE ActivityID NOT IN (SELECT ActivityID from [dbo].[Activity] WHERE DateEnd ='9999-12-31') ```
sql - Select Statement where not exists a condition for each id ?
[ "", "sql", "sql-server", "" ]
I am using Lucene to perform queries on a subset of SQL data which returns me a scored list of RecordIDs, e.g. 11,4,5,25,30 . I want to use this list to retrieve a set of results from the full SQL Table by RecordIDs. So `SELECT * FROM MyFullRecord where RecordID in (11,5,3,25,30)` I would like the retrieved list to maintain the scored order. I can do it by using an Order by like so; `ORDER BY (CASE WHEN RecordID = 11 THEN 0 WHEN RecordID = 5 THEN 1 WHEN RecordID = 3 THEN 2 WHEN RecordID = 25 THEN 3 WHEN RecordID = 30 THEN 4 END)` I am concerned with the loading of the server loading especially if I am passing long lists of RecordIDs. Does anyone have experience of this or how can I determine an optimum list length. Are there any other ways to achieve this functionality in MSSQL? Roger
You can record your list into a table or table variable with sorting priorities, and then join your table with this sorting one. ``` DECLARE @tSortOrder TABLE (RecordID INT, SortOrder INT) INSERT INTO @tSortOrder (RecordID, SortOrder) SELECT 11, 1 UNION ALL SELECT 5, 2 UNION ALL SELECT 3, 3 UNION ALL SELECT 25, 4 UNION ALL SELECT 30, 5 SELECT * FROM yourTable T LEFT JOIN @tSortOrder S ON T.RecordID = S.RecordID ORDER BY S.SortOrder ```
Instead of creating a searched order by statement, you could create an in memory table to join. It's easier on the eyes and definitely scales better. **SQL Statement** ``` SELECT mfr.* FROM MyFullRecord mfr INNER JOIN ( SELECT * FROM (VALUES (1, 11), (2, 5), (3, 3), (4, 25), (5, 30) ) q(ID, RecordID) ) q ON q.RecordID = mfr.RecordID ORDER BY q.ID ``` Look [here](http://sqlfiddle.com/#!6/75b68/2) for a fiddle
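The join-on-a-values-list idea can be sketched outside SQL Server too. Below is an illustrative Python `sqlite3` version (table name and rows invented for the demo) that joins the record table against an inline `VALUES` list carrying the scored order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyFullRecord (RecordID INT, Name TEXT)")
conn.executemany(
    "INSERT INTO MyFullRecord VALUES (?, ?)",
    [(3, "c"), (5, "e"), (11, "k"), (25, "y"), (30, "z")],
)

# Join against an inline (position, RecordID) list and sort by position,
# so the rows come back in Lucene's scored order, not ID order.
ordered = [row[0] for row in conn.execute(
    """
    WITH q(ID, RecordID) AS (VALUES (1, 11), (2, 5), (3, 3), (4, 25), (5, 30))
    SELECT mfr.RecordID
    FROM MyFullRecord mfr
    JOIN q ON q.RecordID = mfr.RecordID
    ORDER BY q.ID
    """
)]
```

In application code you would typically insert the (position, RecordID) pairs with a parameterized `executemany` into a temp table rather than splicing a long list into the SQL string.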
MSSQL ORDER BY Passed List
[ "", "sql", "sql-server", "" ]
I need to optimize SELECT query in order to improve the performance. I am using ORACLE 10g. Below is my table: ``` CREATE TABLE TRNSCTN ( TRNSCTN_ID VARCHAR2(32) NOT NULL, TRNSCTN_DOC VARCHAR2(60) NOT NULL, TRNSCTN_TYPE VARCHAR2(60) NOT NULL, STATUS NUMBER NOT NULL, TRNSCTN_CREATEDDATE DATE NOT NULL, TRNSCTN_CREATEDBY VARCHAR2(60) NOT NULL, TRNSCTN_CHANGEDDATE DATE NOT NULL, TRNSCTN_CHANGEDBY VARCHAR2(60) NOT NULL, PARENT_LINK VARCHAR2(32) NULL, PT_NAME VARCHAR2(255) NULL, APP_ID VARCHAR2(255) NULL, DIRECTION NUMBER NULL, CONSTRAINT PK_TRNSCTN_ID PRIMARY KEY (TRNSCTN_ID) ); ``` Below are some records: ``` Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B91','DOC1','TYPE1',5501,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B92','PT_SEMO','APP1',2 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B92','DOC2','TYPE1',5502,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B92','PT_SEMO','APP1',1 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B93','DOC3','TYPE2',5503,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B93','PT_SEMO','APP3',2 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B94','DOC1','TYPE2',5504,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B91','PT_SEMO','APP1',2 ); Insert into TRNSCTN 
(TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B95','DOC2','TYPE1',5505,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B93','PT_SEMO','APP1',1 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B96','DOC1','TYPE1',5506,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B92','PT_SEMO','APP3',2 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B97','DOC2','TYPE1',5507,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B99','PT_SEMO','APP1',1 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B98','DOC3','TYPE2',5508,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B93','PT_SEMO','APP1',1 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B99','DOC1','TYPE2',5509,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B91','PT_SEMO','APP1',1 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values 
('E840496554B910','DOC2','TYPE1',5510,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B93','PT_SEMO','APP3',1 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B911','DOC1','TYPE1',5511,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B92','PT_SEMO','APP1',2 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B912','DOC2','TYPE1',5512,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B913','PT_SEMO','APP1',1 ); Insert into TRNSCTN (TRNSCTN_ID,TRNSCTN_DOC,TRNSCTN_TYPE,STATUS,TRNSCTN_CREATEDDATE,TRNSCTN_CREATEDBY,TRNSCTN_CHANGEDDATE,TRNSCTN_CHANGEDBY,PARENT_LINK,PT_NAME,APP_ID,DIRECTION) values ('E840496554B913','DOC3','TYPE2',5513,to_date('01-MAY-13','DD-MON-RR'),'usr1',to_date('01-MAY-13','DD-MON-RR'),'usr1','E840496554B911','PT_SEMO','APP3',2 ); ``` Now, here is my SELECT Query ``` SELECT TRNSCTN_ID, TRNSCTN_DOC, TRNSCTN_TYPE, STATUS, TRNSCTN_CREATEDDATE, TRNSCTN_CREATEDBY, TRNSCTN_CHANGEDDATE, TRNSCTN_CHANGEDBY, PARENT_LINK, CASE WHEN (SELECT COUNT(*) FROM TRNSCTN sub WHERE TRNSCTN_ID=sub.PARENT_LINK) = 0 THEN 'false' ELSE 'true' END AS CHILDCNT, PT_NAME, APP_ID, DIRECTION FROM TRNSCTN ``` As I am using the subquery to determine if the record has parent link, the query gets too much slower against millions of records. Sorry that SQLFIDDLE is down so couldn't use it. I was wondering, if there is another way to rewrite the query to optimize the performance. Please let me know the optimized SELECT Query. Thank you.
You can use a LEFT JOIN as I wrote below, and you can also consider giving a parallel hint (e.g. `/*+ parallel */`): ``` SELECT sub.TRNSCTN_ID, sub.TRNSCTN_DOC, sub.TRNSCTN_TYPE, sub.STATUS, sub.TRNSCTN_CREATEDDATE, sub.TRNSCTN_CREATEDBY, sub.TRNSCTN_CHANGEDDATE, sub.TRNSCTN_CHANGEDBY, sub.PARENT_LINK, DECODE(prnt.trnsctn_id, null, 'false', 'true') AS CHILDCNT, sub.PT_NAME, sub.APP_ID, sub.DIRECTION from TRNSCTN sub left join TRNSCTN prnt on sub.parent_link = prnt.trnsctn_id; ```
Replace the `COUNT(*)` subquery with `NOT EXISTS`, which lets Oracle stop probing at the first matching child instead of counting all of them: ``` SELECT TRNSCTN_ID, TRNSCTN_DOC, TRNSCTN_TYPE, STATUS, TRNSCTN_CREATEDDATE, TRNSCTN_CREATEDBY, TRNSCTN_CHANGEDDATE, TRNSCTN_CHANGEDBY, PARENT_LINK, CASE WHEN NOT EXISTS (SELECT 1 FROM TRNSCTN sub WHERE t.TRNSCTN_ID=sub.PARENT_LINK) THEN 'false' ELSE 'true' END AS CHILDCNT, PT_NAME, APP_ID, DIRECTION FROM TRNSCTN t; ``` [SQLFIDDLE](http://sqlfiddle.com/#!4/56b17/3/0)
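To see why the `EXISTS` form is enough, here is a minimal sketch using Python's `sqlite3` (a stand-in for Oracle, with a cut-down two-column version of the table and made-up ids): each row is flagged 'true' exactly when some other row points at it through `PARENT_LINK`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TRNSCTN (TRNSCTN_ID TEXT, PARENT_LINK TEXT)")
# A is a parent of B and C; D's parent link points nowhere.
conn.executemany(
    "INSERT INTO TRNSCTN VALUES (?, ?)",
    [("A", None), ("B", "A"), ("C", "A"), ("D", "X")],
)

# Flag each row by whether any other row references it as a parent.
flags = dict(conn.execute(
    """
    SELECT t.TRNSCTN_ID,
           CASE WHEN EXISTS (SELECT 1 FROM TRNSCTN sub
                             WHERE sub.PARENT_LINK = t.TRNSCTN_ID)
                THEN 'true' ELSE 'false' END
    FROM TRNSCTN t
    """
))
```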
Oracle SQL SELECT subquery Optimization
[ "", "sql", "oracle", "select", "subquery", "query-optimization", "" ]
I verified the SQL syntax with <http://sqlformat.appspot.com/> and it's still not working.... ``` $sql = "CREATE TABLE test (prime_key AUTO-INCREMENT PRIMARY KEY, train_num INT, train_type CHAR(50))"; ``` It's giving me "near 'AUTO-INCREMENT PRIMARY...'" in the error message. What am I missing?
You need to use an underscore `_`, not a dash (`-`). You're also missing a data type for your `prime_key` column: ``` CREATE TABLE test ( prime_key INT AUTO_INCREMENT PRIMARY KEY, train_num INT, train_type CHAR(50) ) ```
It is [**`AUTO_INCREMENT`**](http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html) not `AUTO-INCREMENT`
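Note that the spelling of the auto-increment keyword varies by engine — MySQL uses `AUTO_INCREMENT`, SQL Server uses `IDENTITY(1,1)`, and SQLite simply auto-assigns ids to an `INTEGER PRIMARY KEY` column. A quick sketch with Python's built-in `sqlite3` showing the key being assigned automatically when the column is omitted from the insert:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite's spelling: an INTEGER PRIMARY KEY column auto-assigns ids.
conn.execute("""
    CREATE TABLE test (
        prime_key INTEGER PRIMARY KEY,
        train_num INT,
        train_type TEXT
    )
""")
conn.execute("INSERT INTO test (train_num, train_type) VALUES (101, 'express')")
conn.execute("INSERT INTO test (train_num, train_type) VALUES (102, 'local')")

# The engine filled in prime_key = 1, 2 without us supplying it.
keys = [row[0] for row in conn.execute("SELECT prime_key FROM test ORDER BY prime_key")]
```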
Create Table SQL Syntax Error - What am I doing wrong?
[ "", "mysql", "sql", "" ]
In this [sqlfiddle](http://sqlfiddle.com/#!3/b0665/36/0) I am trying to replace CompanyName with something else if it is null, but apparently I cannot. I tried a CASE statement and other techniques but they did not work. Is there a way to replace CompanyName with, let's say, 'Not Given' if the company is null? I must use OUTER APPLY here.
Please try: ``` select p.*, isnull(czip.companyname, 'Not Given') companyname from Person p outer apply ( select companyname from Company c where p.companyid = c.companyId ) Czip ```
``` SELECT Coalesce(companyname, 'Not Given') As companyname ... ```
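`COALESCE` returns its first non-null argument, so it works the same whether the NULL comes from the table or from an unmatched outer join. A small illustration with Python's `sqlite3` (made-up rows; SQLite has no `OUTER APPLY`, so a `LEFT JOIN` stands in for it here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (personid INT, companyid INT)")
conn.execute("CREATE TABLE Company (companyId INT, companyname TEXT)")
# Person 2 references company 99, which does not exist -> NULL after the join.
conn.executemany("INSERT INTO Person VALUES (?, ?)", [(1, 10), (2, 99)])
conn.execute("INSERT INTO Company VALUES (10, 'Acme')")

names = [row[1] for row in conn.execute(
    """
    SELECT p.personid, COALESCE(c.companyname, 'Not Given')
    FROM Person p
    LEFT JOIN Company c ON c.companyId = p.companyid
    ORDER BY p.personid
    """
)]
```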
How to replace NULL in SQL OUTER APPLY
[ "", "sql", "sql-server", "join", "replace", "null", "" ]
Hi, I have a query as follows ``` INSERT INTO CPQ (BackLog_ID, Priority, Category, Type, Country, Region, TN, [Date Entered], Source, Brand, [Remote #], [Target #], Device, Status, [Capture Type], Comment, Processed) SELECT BackLog_ID, Priority, CASE WHEN Type = 'I' THEN 'Category-1' WHEN TYPE = 'P' THEN 'Category-1' WHEN TYPE = 'B' THEN 'Category-2' END AS Category, Type, Country, Region, TN, [Date Entered], Source, Brand, [Remote #], [Target #], Device, Status, [Capture Type], Comment, Processed FROM BackLog WHERE ( Processed = 0 ) AND ( Type <> 'Z' ) AND ( Region = 'Asia' ) AND ( Country = 'China' OR Country = 'Japan' ) ``` In the above query, the table CPQ has PK CPQ\_ID and it is not an auto number. The query is giving the following error: > Cannot insert the value NULL into column 'CPQ\_ID', table > 'Capture\_Manager.dbo.CPQ'; column does not allow nulls. INSERT fails. Please help me.
1) You have a PK column, which is not auto-increment 2) You try to insert something to that table without assigning a value to your PK 3) you get an error that says you cannot do that I am not really sure what more you need for a solution, but since 3) is the result of 1) and 2), you will have to change 1) or 2). So either: 1) Make your PK column auto-increment (or otherwise fill it automatically on DB level) or 2) Insert a (unique!) value for you PK when executing your insert statement. You do either of the two, your error will disappear.
The problem is you are trying to insert a null value into the CPQ\_ID column. The best way is to set the CPQ\_ID column to auto increment. Sample table create code with auto increment enabled: ``` CREATE TABLE [dbo].[tablename]( [CPQ_ID] [int] IDENTITY(1,1) NOT NULL ) ON [PRIMARY] ``` The "`IDENTITY(1,1) NOT NULL`" clause always inserts an auto-incremented number into the column whenever you insert a record. However, if you don't want an auto-increment value, then you have to pass a value for the CPQ\_ID column yourself, or make it nullable — which means you can't use it as a primary key.
Insert query with Auto number
[ "", "sql", "sql-server", "" ]
I have the following code in a code-first migration: ``` public partial class RunSQLInit : DbMigration { public override void Up() { this.Sql(SqlFiles.Create_Job_ClearExpiredData); } [...] } ``` where `SqlFiles.Create_Job_ClearExpiredData` is a resx-file returning an SQL-file as a string. The SQL creates and schedules a Server Agent job in SQL Server. Now, the SQL have two "variables" I need the ability to change before running the migration on a database: the name of the database the job should run on, and the name for the actual job. That would be fine if I had one environment only, but I deploy this code-first project to a number of environments, who differ in database names and also in what I should call the Job. I see three possibilities: 1. Get hold of the connection string currently being used inside the Up()-method, and use that to parse out the database name and do a replace for it to the SQL-string before running it. But how would I go about getting the "connection string currently being used"? 2. Somehow pass a variable to the migrations when running `Update-Database` in the Package Manager Console, which could be accessible inside the migrations. How would I go about that? 3. Use another solution for creating the Job. We rely heavily on the ease of deploying from the code-first models to all environments, so it should be easily automated somehow without additional labour when updating databases, and producing deployment scripts (with `Update-Database -script`) A small cut out from the SQL-script in question is this: ``` EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Run SP', @step_id=1, @subsystem=N'TSQL', @command=N'exec sp_ClearExpiredData;', @database_name=@databaseName, --This is where I need the varying database name! @flags=0 ```
The answer to my problem was to simply use this inside the SQL-file: ``` set @databaseName = db_name() ``` Since the current context is already the database I will use for the job, it's already there. Too easy.
I had to declare my @DBName variable in order to set it. Error otherwise: > Must declare the scalar variable "@DBName". Complete code: ``` Sql("DECLARE @DBName nvarchar(50), @SQLString nvarchar(200)\r\n" + "SET @DBName = db_name();\r\n" + "SET @SQLString = \'ALTER DATABASE [\' + @DBName + \'] COLLATE Latin1_General_100_CI_AS\'\r\n" + "EXEC( @SQLString )", suppressTransaction: true); ```
Access the database name inside Up in DbMigration
[ "", ".net", "sql", "sql-server", "entity-framework", "ef-code-first", "" ]
I am confused about an SQL statement and I kind of need your help. Below are the contents of my table (tblfood): ``` ID Food_Name Main_Ingredient Drink 1 Ham n Coke Ham Coke 2 Cheese n Tea Cheese Ice Tea 3 Ham n Coke Ham Coke 4 Ham n Coke Ham Coke 5 Chicken Sandwich Chicken None 6 Chicken Sandwich Chicken None 7 Cheese n Tea Cheese Ice Tea 8 Chicken n Coke Chicken Coke 9 Chicken Sandwich Chicken None 10 Cheese n Tea Cheese Ice Tea ``` I want to get the `ID` of the last entered `Food_Name` with `Chicken Sandwich`, which is 9. And if I get the last entered `Food_Name` with `Chicken n Coke`, it would display 8. My first try goes like this: ``` Select id from tblfood where Food_Name='Chicken Sandwich'; ``` But with this code, I believe it will display all the IDs where the Food\_Name is equal to Chicken Sandwich. How should I display only the last entered ID?
How about this: select the max ID, assuming you want a single ID. If you want multiple rows back, you need to rely on the ORDER BY solution instead. ``` SELECT MAX(id) FROM tblfood WHERE Food_Name='Chicken Sandwich'; ```
``` SELECT ID FROM tblfood WHERE Food_Name = 'Chicken Sandwich' ORDER BY ID DESC LIMIT 1 ```
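Both answers return the same row on this data — `MAX(id)` and `ORDER BY id DESC LIMIT 1` are interchangeable when you only need the single newest ID. A quick check with Python's `sqlite3` using a trimmed-down version of the sample table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblfood (ID INT, Food_Name TEXT)")
conn.executemany(
    "INSERT INTO tblfood VALUES (?, ?)",
    [(5, "Chicken Sandwich"), (6, "Chicken Sandwich"),
     (8, "Chicken n Coke"), (9, "Chicken Sandwich"), (10, "Cheese n Tea")],
)

# Aggregate form: the largest ID for the food name.
max_id = conn.execute(
    "SELECT MAX(ID) FROM tblfood WHERE Food_Name = 'Chicken Sandwich'"
).fetchone()[0]

# Sort-and-limit form: the first row when sorted newest-first.
last_id = conn.execute(
    "SELECT ID FROM tblfood WHERE Food_Name = 'Chicken Sandwich' "
    "ORDER BY ID DESC LIMIT 1"
).fetchone()[0]
```

The `ORDER BY ... LIMIT 1` form becomes the better choice when you need other columns from the winning row, not just its ID.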
Fetch last ID where column is equal to a string
[ "", "mysql", "sql", "" ]
I have installed an Oracle 12c database on my system. I have an application which needs to access the database. Previously in Oracle 11g, I used the following commands to create a user. ``` create user name identified by name; grant connect,create session,resource,create view to name; ``` Can anyone tell me how to create a user in Oracle 12c with the above requirements? I used the following statements, but my installation is showing a fatal error saying ``` FATAL ERROR - java.sql.SQLException: ORA-01950: no privileges on tablespace 'USERS' ``` Following were the statements used: ``` create user c##test1 identified by test1 container = ALL; grant connect,create session,resource,create view to test1; ```
Best practice is to create a tablespace and assign it to the user. To make it easier to follow, use the same name for the username and the tablespace: `CREATE BIGFILE TABLESPACE C##1 DATAFILE '/path/to/datafile/C##1.dbf' SIZE 10M AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO NOLOGGING;` Then create the user with a quota on that tablespace: `CREATE USER C##1 IDENTIFIED BY password DEFAULT TABLESPACE C##1 QUOTA UNLIMITED ON C##1;`
You should also give the user a quota on his default tablespace: ``` CREATE USER name IDENTIFIED BY name DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA 50M /* or any other number that makes sense */ ON users GRANT CONNECT, CREATE SESSION, RESOURCE, CREATE VIEW TO name; ```
Regarding Users in Oracle 12c
[ "", "sql", "oracle", "oracle12c", "" ]
For a school assignment, I have to create a database and run reports. I've created code and a classmate has also created code and it runs the same thing, but his is in a format I've not seen and don't quite understand. Here is mine: ``` SELECT Course.Name AS 'Course Name', Program.Name AS 'Program Name' FROM Course, Program, ProgramCourse WHERE ProgramCourse.CourseID = Course.ID AND ProgramCourse.ProgramID = Program.ID GO ``` And here's his: ``` CREATE VIEW NumberOfCoursePerProgram AS SELECT p.name AS ProgramName, c.name AS CourseName FROM Program p JOIN ProgramCourse pc ON pc.ProgramID = p.ID JOIN Course c ON c.ID = pc.CourseID GO ``` I ran both queries using the data in the tables I've created. They return practically the same results, just in a slightly different order but it fulfills the assignment question. Anyway, if I delete the `p` from `Program p` from his code, it returns an error > The multi-part identifier "p.name" could not be bound. So how is SQL Server able to accept `p.name` and `p.ID`, etc. when I haven't ever established these variables? I don't quite understand how the code is working on his. Mine seems simple and straightforward, and I definitely understand what's going on there. So can someone explain his? Thanks
There's a few differences. First off, he's creating a `VIEW` rather than *just* a select statement: ``` CREATE VIEW NumberOfCoursePerProgram AS ``` Once the view is created, you can query the view just as you would a table: ``` SELECT * FROM NumberOfCoursePerProgram; ``` Second, he's using an ANSI JOIN rather than an implicit JOIN. His method is more modern and most likely considered more *correct* by today's standards: ``` JOIN ProgramCourse pc ON pc.ProgramID = p.ID JOIN Course c ON c.ID= pc.CourseID ``` Rather than: ``` FROM Course, Program, ProgramCourse ``` Also, note he's assigning [table aliases](http://technet.microsoft.com/en-us/library/ms187455%28v=SQL.105%29.aspx) when he refers to a table: ``` FROM Program p ``` The `p` at the end allows you to substitute `p` rather than specify the entire table name of `Program` elsewhere in the query. For example, you can now say `WHERE p.Foo > 5` rather than `WHERE Program.Foo > 5`. In this case, it's just a shortcut and saves a few characters. However, suppose you were referring to the same table twice (for example, JOINing in two different rows on the same table). In that case, you might *have* to provide aliases for each table to disambiguate which one is which.
These are called [aliases](http://technet.microsoft.com/en-us/library/ms187455%28v=SQL.105%29.aspx) in SQL. An alias is used to improve readability and make code easier to write. > The readability of a SELECT statement can be improved by giving a > table an alias, also known as a correlation name or range variable. A > table alias can be assigned either with or without the AS keyword: > > * table\_name AS table\_alias > * table\_name table\_alias So in your query `p` is an alias for `Program`, which means you can now refer to the table `Program` by the name `p` instead of writing the whole name `Program` everywhere. Similarly you can access the columns of your table Program by writing `p`, a dot, and then the column name — something like `p.column`. This technique is very useful when you are using **JOINS** and some of your tables have columns with the same names. **EDIT:-** Although most of the points are covered in the other answers, I am just adding that you should avoid the habit of joining tables the way you are doing it right now. You may check [Bad habits to kick : using old-style JOINs](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins) by Aaron Bertrand for reference.
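Aliases stop being optional the moment the same table appears twice in one query — without two distinct names there is no way to say which copy a column belongs to. A short sketch with Python's `sqlite3` (hypothetical Employee table) showing a self-join that only works because of the `e` and `m` aliases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (ID INT, Name TEXT, ManagerID INT)")
conn.executemany(
    "INSERT INTO Employee VALUES (?, ?, ?)",
    [(1, "Ann", None), (2, "Bob", 1), (3, "Cat", 1)],
)

# Two aliases (e = the employee, m = their manager) disambiguate the
# two uses of the same table; a bare "Employee.Name" would be ambiguous.
pairs = list(conn.execute(
    """
    SELECT e.Name, m.Name
    FROM Employee e
    JOIN Employee m ON m.ID = e.ManagerID
    ORDER BY e.ID
    """
))
```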
Difference between SQL Server codes?
[ "", "sql", "sql-server", "" ]