I am migrating my database to MS SQL Server 2008 R2. When I try to save a long string, I get a maximum-length error, although I set the datatype of that column to "text" and afterwards replaced it with "varchar(max)", with no luck. Kindly provide a solution for this issue. I am executing the following query: ``` update hotel set hotel_policy = "Overview of Park Central New York - New York This hotel is making improvements. The property is undergoing renovations. The following areas are affected: Bar/lounge Business center Select guestrooms Every effort will be made to minimize noise and disturbance. Occupying a Beaux Arts building dating to 1927, Park Central New York Hotel is within a block of famed concert venue Carnegie Hall and within a 5-minute walk of Manhattan’s world-renowned Broadway theater district. Prefer the great outdoors to the Great White Way? Central Park is just 3 blocks from the hotel. There, you can rent a rowboat at the lake, play a game of tennis, or visit the Central Park Zoo. The international boutiques and flagship department stores of Fifth Avenue start within a 10-minute walk of the hotel. For travel to sights farther afield, there are 7 subway lines located within 3 blocks of the Park Central. The hotel has a snack bar for guests' convenience, and coffee and tea in the lobby. Retreat to your guestroom and sink into a bed with a pillowtop mattress and down comforter and pillows. Need to check email or finish up some work? You’ll find a desk with an ergonomic chair and wireless high-speed Internet access (surcharge). Unwind with a video game (surcharge) on the flat-panel HDTV." where hotel_id = 1 ``` I searched a lot, but the solutions I found were not useful to me. Thanks!
According to the ANSI SQL standard, double quotes are used (if *necessary*) for object identifiers (e.g. `UPDATE "hotel" ...`) and not as string delimiters (`"Overview of Park Central ..."`). SQL Server has this behavior when `QUOTED_IDENTIFIER` is `ON`. **Edit 1:** The usage of single and double quotation marks as delimiters for object identifiers (including column aliases) and string literals is described below:
```
SET QUOTED_IDENTIFIER   Object ID delimiter   Alias ID delimiter   String delimiter
ON                      " or []               " or ' or []         '
OFF                     []                    " or ' or []         " or '
```
* `ON`: double quotes can be used as delimiters for object identifiers (including column aliases), and single quotes are used as delimiters for string literals and/or for column aliases (`SELECT Column1 AS 'Alias1' ...`). * `OFF`: double quotes can be used as delimiters for column aliases (`SELECT Column1 AS "Alias1" ...`) and as delimiters for string literals (`SELECT "String1" AS Alias1 ...`). Single quotation marks can be used as string delimiters and as delimiters for column aliases (`SELECT Column1 AS 'Alias1' ...`). Use single quotes instead: ``` update hotel set hotel_policy = 'Overview of Park Central ...' where hotel_id = 1 ```
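A quick way to see the single-quote rule in action outside SQL Server: SQLite, driven here through Python's `sqlite3` module, follows the same ANSI convention that single quotes delimit string literals. The table and the shortened policy text below are stand-ins for illustration, not the asker's real data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hotel (hotel_id INTEGER PRIMARY KEY, hotel_policy TEXT)")
conn.execute("INSERT INTO hotel (hotel_id, hotel_policy) VALUES (1, 'old policy')")

# Single quotes mark a string literal; double quotes are reserved for
# identifiers under ANSI rules (QUOTED_IDENTIFIER ON in SQL Server).
conn.execute(
    "UPDATE hotel SET hotel_policy = 'Overview of Park Central ...' WHERE hotel_id = 1"
)

policy = conn.execute(
    "SELECT hotel_policy FROM hotel WHERE hotel_id = 1"
).fetchone()[0]
```

The same statement with double quotes around the literal would be rejected (or misread as an identifier) by an ANSI-conforming engine.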
If you don't want to change the double quotes to single quotes, add the following two lines at the beginning of the script: ``` SET QUOTED_IDENTIFIER OFF SET ANSI_NULLS ON ```
The identifier that starts with ...... is too long. Maximum length is 128
[ "sql", "sql-server" ]
I want to add multiple items/rows in SQLite using a single insert query: ``` insert into suppliers (supoliers_no,supply_name,city,phone_no) values (1,'Ali','amman',111111), (2,'tariq','amman',777777), (3,'mohmmed','taiz',null); ``` Is this possible in SQLite?
It should be: ``` insert into suppliers (supoliers_no,supply_name,city,phone_no) values (1,'Ali','amman',111111); insert into suppliers (supoliers_no,supply_name,city,phone_no) values (2,'tariq','amman',777777); insert into suppliers (supoliers_no,supply_name,city,phone_no) values (3,'mohmmed','taiz',null); ``` Note that splitting into separate statements is only needed on older SQLite versions (before 3.7.11), which do not support the multi-row `VALUES` form. Also, `null` is a valid keyword in SQL; but if your column disallows NULLs, you could try an empty string instead: `insert into suppliers (supoliers_no,supply_name,city,phone_no) values (3,'mohmmed','taiz','');`
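For reference, the multi-row form from the question works as-is on any reasonably recent SQLite (3.7.11 or later), and `NULL` is accepted as a value. A minimal sketch via Python's bundled `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE suppliers "
    "(supoliers_no INTEGER, supply_name TEXT, city TEXT, phone_no INTEGER)"
)

# One statement, three rows; the last row carries a NULL phone number.
conn.execute("""
    INSERT INTO suppliers (supoliers_no, supply_name, city, phone_no) VALUES
    (1, 'Ali', 'amman', 111111),
    (2, 'tariq', 'amman', 777777),
    (3, 'mohmmed', 'taiz', NULL)
""")

row_count = conn.execute("SELECT COUNT(*) FROM suppliers").fetchone()[0]
null_phones = conn.execute(
    "SELECT COUNT(*) FROM suppliers WHERE phone_no IS NULL"
).fetchone()[0]
```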
The multi-row insert syntax is `INSERT INTO TableName ( Column1, Column2, Column3 ) VALUES ( Value1, Value2, Value3 ), ( Value1, Value2, Value3 );` In your case the code is correct, but for `(3,'mohmmed','taiz',null);` try it with some value. You can also take a reference from here: [Inserting multiple rows in a single SQL query?](https://stackoverflow.com/questions/452859/inserting-multiple-rows-in-a-single-sql-query)
Insert multiple set of values using Sqlite
[ "sql", "sqlite" ]
Below is the SQL I would like to execute. I'd like to avoid doing multiple requests for that, and I'm pretty sure it's possible… ``` First table : products_categories (category_id, category_infos…) Second table : products_categories_relations (product_id, category_id) Third table : products (product_id, published, products_infos…) ``` I want to find all empty categories (no products in them) AND categories with only unpublished products. The second part is where I'm stuck. ``` SELECT pc.`category_id` FROM `#__products_categories` AS pc LEFT JOIN `#__products_categories_relations` pcr ON pc.`category_id` = pcr.`category_id` WHERE pcr.`category_id` IS NULL ``` This query gives me categories which have no products, but I can't see how to insert a condition to say: > **"for each category with products, return the ones which contain ONLY products where published=0"** My analysis is: if A is the result of the query "categories with no products" and B is the result of "categories with only unpublished products", I need categories that are in A OR B.
``` SELECT pc.`category_id` FROM `#__products_categories` AS pc LEFT JOIN `#__products_categories_relations` pcr ON pc.`category_id` = pcr.`category_id` LEFT JOIN (SELECT product_id, MAX(published) published FROM products GROUP BY product_id HAVING published = 0) p ON pcr.product_id = p.product_id WHERE pcr.`category_id` IS NULL OR p.product_id IS NOT NULL ``` The sub-query finds all unpublished products. We join with that, and then the `WHERE` clause can match either the nonexistent products with the first test, or the unpublished products with the second one.
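The "A OR B" logic from the question can also be checked end to end on a toy dataset. This sketch uses SQLite via Python's `sqlite3` and a `HAVING MAX(...)` formulation, which is a variant of (not identical to) the answer's query; all table contents are invented. Empty categories yield `MAX(published) IS NULL` after the `LEFT JOIN`, and all-unpublished categories yield `MAX(published) = 0`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products_categories (category_id INTEGER PRIMARY KEY);
    CREATE TABLE products (product_id INTEGER PRIMARY KEY, published INTEGER);
    CREATE TABLE products_categories_relations (product_id INTEGER, category_id INTEGER);
    -- Category 1: mixed, 2: empty, 3: unpublished only.
    INSERT INTO products_categories VALUES (1), (2), (3);
    INSERT INTO products VALUES (10, 1), (11, 0), (12, 0);
    INSERT INTO products_categories_relations VALUES (10, 1), (11, 1), (12, 3);
""")

rows = conn.execute("""
    SELECT pc.category_id
    FROM products_categories pc
    LEFT JOIN products_categories_relations pcr ON pc.category_id = pcr.category_id
    LEFT JOIN products p ON pcr.product_id = p.product_id
    GROUP BY pc.category_id
    HAVING MAX(p.published) IS NULL   -- A: no products at all
        OR MAX(p.published) = 0      -- B: every product unpublished
    ORDER BY pc.category_id
""").fetchall()
empty_or_unpublished = [r[0] for r in rows]
```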
``` SELECT pc.* FROM products_categories pc INNER JOIN products_categories_relations pcr ON (pc.category_id = pcr.category_id) INNER JOIN products p ON (pcr.product_id = p.product_id AND published = 0) EXCEPT SELECT pc.* FROM products_categories pc INNER JOIN products_categories_relations pcr ON (pc.category_id = pcr.category_id) INNER JOIN products p ON (pcr.product_id = p.product_id AND published != 0) ``` Essentially you query for the categories that have products with published = 0 and remove from the set all that have products with published other than 0. It will result in exactly what you need. Note that a category with products where published = 0 is not the same as an empty category. If you need empty categories as well, just make a `union all`. EDIT: As I was just reminded, MySQL **still** doesn't have some of the basic SQL operations, so here is a workaround: ``` SELECT pc.* FROM products_categories pc INNER JOIN products_categories_relations pcr ON (pc.category_id = pcr.category_id) INNER JOIN products p ON (pcr.product_id = p.product_id AND published = 0) WHERE pc.product_category_id NOT IN ( SELECT distinct pc.product_category_id FROM products_categories pc INNER JOIN products_categories_relations pcr ON (pc.category_id = pcr.category_id) INNER JOIN products p ON (pcr.product_id = p.product_id AND published != 0) ) ``` or you can also make it: ``` SELECT * FROM ( SELECT distinct pc.* FROM products_categories pc INNER JOIN products_categories_relations pcr ON (pc.category_id = pcr.category_id) INNER JOIN products p ON (pcr.product_id = p.product_id AND published = 0) ) subQ1 LEFT OUTER JOIN ( SELECT distinct pc.* FROM products_categories pc INNER JOIN products_categories_relations pcr ON (pc.category_id = pcr.category_id) INNER JOIN products p ON (pcr.product_id = p.product_id AND published != 0) ) subQ2 ON subQ1.product_category_id = subQ2.product_category_id WHERE subQ2.product_category_id IS NULL ``` I suspect the first should be faster, but you can give both a try.
Hard SQL query - Rows with no matching in another table OR with items not published in another one
[ "mysql", "sql" ]
I have say 3 tables, which have a key that links through all three like so... ``` select a.call_id, b.name, c.year from table1 a, table2 b, table3 c where a.call_id = b.call_id and b.call_id = c.call_id and a.call_id = 123 ``` The problem is table 3 doesn't have a record that matches the id of 123, thus the entire query returns 0 rows, even though there is a match in tables 1 & 2. Is there a way I can have oracle just return say null for the c.year field and still get results for a.call\_id and b.name?
Yes, you have to use `LEFT` join: ``` select a.call_id, b.name, c.year from table1 a JOIN table2 b ON (a.call_id = b.call_id) LEFT JOIN table3 c ON (b.call_id = c.call_id) WHERE a.call_id = 123; ``` Read more about joins here: [Visual Explanation of Joins by Jeff Atwood](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
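Here is a self-contained way to confirm the behavior the answer describes: with a `LEFT JOIN`, the missing `table3` row comes back as `NULL` instead of eliminating the result. Sample data is made up; run with SQLite through Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (call_id INTEGER);
    CREATE TABLE table2 (call_id INTEGER, name TEXT);
    CREATE TABLE table3 (call_id INTEGER, year INTEGER);
    INSERT INTO table1 VALUES (123);
    INSERT INTO table2 VALUES (123, 'Alice');
    -- table3 deliberately has no row for call_id 123.
""")

row = conn.execute("""
    SELECT a.call_id, b.name, c.year
    FROM table1 a
    JOIN table2 b ON a.call_id = b.call_id
    LEFT JOIN table3 c ON b.call_id = c.call_id
    WHERE a.call_id = 123
""").fetchone()
```

With an inner join on `table3`, `row` would be `None` instead; the `LEFT JOIN` preserves the match from tables 1 and 2 and fills `year` with `NULL`.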
I guess you just need to translate this to a JOIN: ``` SELECT a.call_id, b.name, c.year from table1 a JOIN table2 b ON a.call_id = b.call_id LEFT JOIN table3 c ON b.call_id = c.call_id WHERE a.call_id = 123 ```
Oracle SQL: joining with an empty table
[ "sql", "oracle", "join" ]
I wrote the below SQL query with a `LIKE` condition: ``` SELECT * FROM Manager WHERE managerid LIKE '_%' AND managername LIKE '%_%' ``` In the `LIKE` I want to search for any underscores **`%_%`**, but I know that my columns' data has no underscore characters. * Why does the query give me all the records from the table? Sample data: ``` create table Manager( id int ,managerid varchar(3) ,managername varchar(50) ); insert into Manager(id,managerid,managername)values(1,'A1','Mangesh'); insert into Manager(id,managerid,managername)values(2,'A2','Sagar'); insert into Manager(id,managerid,managername)values(3,'C3','Ahmad'); insert into Manager(id,managerid,managername)values(4,'A4','Mango'); insert into Manager(id,managerid,managername)values(5,'B5','Sandesh'); ``` `Sql-Fiddle`
Modify your `WHERE` condition like this: ``` WHERE mycolumn LIKE '%\_%' ESCAPE '\' ``` This is one of the ways in which Oracle supports escape characters. Here you define the escape character with the `escape` keyword. For details see [this link on Oracle Docs](https://docs.oracle.com/cd/B10501_01/text.920/a96518/cqspcl.htm). The `'_'` and `'%'` are wildcards in a `LIKE` operated statement in SQL. The `_` character looks for the presence of (any) one single character. If you search by `columnName LIKE '_abc'`, it will give you results with rows having `'aabc'`, `'xabc'`, `'1abc'`, `'#abc'` but NOT `'abc'`, `'abcc'`, `'xabcd'` and so on. The `'%'` character is used for matching 0 or more characters. That means, if you search by `columnName LIKE '%abc'`, it will give you results having `'abc'`, `'aabc'`, `'xyzabc'` and so on, but not `'xyzabcd'`, `'xabcdd'`, or any other string that does not end with `'abc'`. In your case you have searched by `'%_%'`. This will give all the rows with that column having one or more characters, that means any characters, as its value. This is why you are getting all the rows even though there is no `_` in your column values.
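The difference between the raw and escaped patterns is easy to demonstrate on a toy table. This sketch uses SQLite via Python's `sqlite3` (SQLite supports the same `ESCAPE` clause), with invented names including one that really contains an underscore:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Manager (id INTEGER, managername TEXT)")
conn.executemany("INSERT INTO Manager VALUES (?, ?)",
                 [(1, 'Mangesh'), (2, 'Sagar'), (3, 'A_hmad')])

# Unescaped: '_' matches any single character, so every non-empty name matches.
all_rows = conn.execute(
    "SELECT COUNT(*) FROM Manager WHERE managername LIKE '%_%'"
).fetchone()[0]

# Escaped: only names containing a literal underscore match.
underscored = conn.execute(
    r"SELECT managername FROM Manager WHERE managername LIKE '%\_%' ESCAPE '\'"
).fetchall()
```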
The underscore is the [wildcard in a `LIKE`](http://technet.microsoft.com/en-us/library/ms174424.aspx) query for one arbitrary character. Hence `LIKE '%_%'` means "give me all records with at least one arbitrary character in this column". You have to escape the wildcard character, in SQL Server by wrapping it in `[]`: ``` SELECT m.* FROM Manager m WHERE m.managerid LIKE '[_]%' AND m.managername LIKE '%[_]%' ``` See: [LIKE (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms179859.aspx) `Demo`
Why does using an Underscore character in a LIKE filter give me all the results?
[ "sql", "sql-server" ]
From a **performance perspective**, is this the best way to write the following query, particularly concerning the nested queries? --- ``` SELECT a.meg,a.currency FROM alt6sal a WHERE a.meg_code IN (1,2) AND a.sal_year = (SELECT MAX(ia.sal_year) FROM alt6sal ia WHERE a.emp_num = ia.emp_num ) AND a.sal_mon = (SELECT MAX(ia.sal_mon) FROM alt6sal ia WHERE a.emp_num = ia.emp_num AND a.sal_year = ia.sal_year) ```
The more you can avoid correlated subqueries, the better the performance. Example with non-correlated subqueries: ``` SELECT a.meg,a.currency FROM alt6sal a join ( select ia.emp_num, max(ia.sal_year) as sal_year_max from alt6sal ia group by ia.emp_num ) the_year_max on a.emp_num = the_year_max.emp_num and a.sal_year = the_year_max.sal_year_max join ( select ia.emp_num, ia.sal_year, max(ia.sal_mon) as sal_mon_max from alt6sal ia group by ia.emp_num, ia.sal_year ) the_month_max on a.emp_num = the_month_max.emp_num and a.sal_year = the_month_max.sal_year and a.sal_mon = the_month_max.sal_mon_max WHERE a.meg_code IN (1,2) ``` For OR instead of AND, use the analogous non-correlated LEFT JOINs, then filter in the non-null rows: ``` SELECT a.meg,a.currency FROM alt6sal a left join ( select ia.emp_num, max(ia.sal_year) as sal_year_max from alt6sal ia group by ia.emp_num ) the_year_max on a.emp_num = the_year_max.emp_num and a.sal_year = the_year_max.sal_year_max left join ( select ia.emp_num, ia.sal_year, max(ia.sal_mon) as sal_mon_max from alt6sal ia group by ia.emp_num, ia.sal_year ) the_month_max on a.emp_num = the_month_max.emp_num and a.sal_year = the_month_max.sal_year and a.sal_mon = the_month_max.sal_mon_max WHERE a.meg_code IN (1,2) and (the_year_max.emp_num is not null or the_month_max.emp_num is not null) ```
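The derived-table (non-correlated) approach can be exercised on invented data to confirm it returns the row for the latest year, and latest month within that year, per employee. A sketch using SQLite through Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE alt6sal (emp_num INTEGER, sal_year INTEGER, sal_mon INTEGER,
                          meg TEXT, currency TEXT, meg_code INTEGER);
    INSERT INTO alt6sal VALUES
        (1, 2012, 11, 'old',    'USD', 1),
        (1, 2013,  3, 'older',  'USD', 1),
        (1, 2013,  9, 'latest', 'USD', 1),
        (2, 2013,  5, 'latest', 'EUR', 2);
""")

# Join on grouped subqueries: max year per employee, then max month
# within each (employee, year) pair.
rows = conn.execute("""
    SELECT a.meg, a.currency
    FROM alt6sal a
    JOIN (SELECT emp_num, MAX(sal_year) AS y FROM alt6sal GROUP BY emp_num) ym
         ON a.emp_num = ym.emp_num AND a.sal_year = ym.y
    JOIN (SELECT emp_num, sal_year, MAX(sal_mon) AS m FROM alt6sal
          GROUP BY emp_num, sal_year) mm
         ON a.emp_num = mm.emp_num AND a.sal_year = mm.sal_year AND a.sal_mon = mm.m
    WHERE a.meg_code IN (1, 2)
    ORDER BY a.emp_num
""").fetchall()
```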
You can try this - ``` SELECT meg, currency FROM ( SELECT a.meg,a.currency, dense_rank() over (PARTITION BY a.emp_num ORDER BY a.sal_year desc) year_rank, dense_rank() over (PARTITION BY a.emp_num ORDER BY a.sal_mon desc) mon_rank FROM alt6sal a WHERE a.meg_code IN (1,2) ) WHERE year_rank = 1 AND mon_rank = 1; ```
nested query performance alternatives
[ "sql", "performance", "informix", "subquery" ]
Having a bit of trouble with an SQL query I am trying to create. The table format is as follows:
```
ID | Data Identifier | Date Added | Data Column
1  | 1001            | 15400      | Newest Value
1  | 1001            | 15000      | Oldest Value
1  | 1001            | 15200      | Older Value
1  | 1002            | 16000      | Newest Value
2  | 1001            | 16000      | Newest Value
```
What I am trying to do is, for each ID in a list (1, 2), and for each Data Identifier in (1001, 1002), return just the row with the matching IDs and the date nearest to but below 16001. So the results would be:
```
1 | 1001 | 15400 | Newest Value
1 | 1002 | 16000 | Newest Value
2 | 1001 | 16000 | Newest Value
```
I have tried several manners of joins but I keep returning duplicate records. Any advice or help would be appreciated.
It seems as if you want a GROUP BY and a self join onto the table. I have the following code for you:
```
-- Prepare a test table
CREATE TABLE #tmpTable (ID int, Identifier int, DateAdded int, DataColumn varchar(50))

INSERT INTO #tmpTable(ID, Identifier, DateAdded, DataColumn)
SELECT 1, 1001, 15400, 'Newest Value' UNION
SELECT 1, 1001, 15000, 'Oldest Value' UNION
SELECT 1, 1001, 15200, 'Older Value' UNION
SELECT 1, 1002, 16000, 'Newest Value' UNION
SELECT 2, 1001, 16000, 'Newest Value'

-- Actual select
SELECT b.ID, b.Identifier, b.DateAdded, DataColumn
FROM (SELECT ID, Identifier, MAX(DateAdded) AS DateAdded
      FROM #tmpTable
      WHERE DateAdded < 16001
      GROUP BY ID, Identifier) a
INNER JOIN #tmpTable b
    ON a.DateAdded = b.DateAdded AND a.ID = b.ID AND a.Identifier = b.Identifier
```
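The grouped-subquery-plus-self-join pattern from the answer can be verified on the question's sample rows. This sketch uses SQLite via Python's `sqlite3` (so the T-SQL temp-table syntax is replaced by an ordinary table, but the join logic is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tmp (ID INTEGER, Identifier INTEGER, DateAdded INTEGER, DataColumn TEXT);
    INSERT INTO tmp VALUES
        (1, 1001, 15400, 'Newest Value'),
        (1, 1001, 15000, 'Oldest Value'),
        (1, 1001, 15200, 'Older Value'),
        (1, 1002, 16000, 'Newest Value'),
        (2, 1001, 16000, 'Newest Value');
""")

# Inner query: newest DateAdded below 16001 per (ID, Identifier);
# self join pulls back the full matching row.
rows = conn.execute("""
    SELECT b.ID, b.Identifier, b.DateAdded, b.DataColumn
    FROM (SELECT ID, Identifier, MAX(DateAdded) AS DateAdded
          FROM tmp
          WHERE DateAdded < 16001
          GROUP BY ID, Identifier) a
    INNER JOIN tmp b
        ON a.DateAdded = b.DateAdded AND a.ID = b.ID AND a.Identifier = b.Identifier
    ORDER BY b.ID, b.Identifier
""").fetchall()
```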
You need to create a primary key column on your table that will not be used as an aggregate. Then you can create a [CTE](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx) to select the rows required and then use it to select the data. The aggregate function `MIN(ABS(15500 - DateAdded))` will return the closest value to 15500. ``` WITH g AS ( SELECT MAX(UniqueKey) AS UniqueKey, ID, DataIdentifier, MIN(ABS(15500 - DateAdded)) AS "DateTest" FROM test GROUP BY ID, DataIdentifier ) SELECT test.ID, test.DataIdentifier, test.DateAdded, test.DataColumn FROM g INNER JOIN test ON g.UniqueKey = test.UniqueKey ``` **EDIT:** Screenshot of working example: ![enter image description here](https://i.stack.imgur.com/VoUlU.png)
SQL inner join using multiple in statements on single table
[ "sql", "join", "inner-join" ]
There are date values in the format below in one of the SQL Server 2000 tables:
```
10/1/2013 10:39:14 PM
10/1/2013 6:39:04 PM
10/1/2013 8:19:31 AM
10/1/2013 3:35:40 AM
```
How can I convert the above data values into a 24-hour date format, as shown below?
```
10/1/2013 22:39
10/1/2013 18:39
10/1/2013 8:19
10/1/2013 3:35
```
Try this: first convert the `varchar` date to `datetime`, and then you can manipulate it in the way you want:
```
-- Convert to DATETIME to get 24-hour format
SELECT CONVERT(DATETIME, '10/1/2013 6:39:04 PM', 0)

-- Concatenate in the required format
SELECT CONVERT(VARCHAR(10), CONVERT(DATETIME, '10/1/2013 6:39:04 PM', 0), 101)
     + ' ' + CONVERT(VARCHAR(5), CONVERT(DATETIME, '10/1/2013 6:39:04 PM', 0), 108)
```
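If the conversion can happen outside the database, the same 12-to-24-hour reformatting is straightforward with Python's `datetime`. The format strings below assume the `M/D/YYYY h:mm:ss AM/PM` shape shown in the question:

```python
from datetime import datetime

def to_24h(value):
    """Parse a 12-hour AM/PM timestamp and re-emit it in 24-hour form."""
    parsed = datetime.strptime(value, "%m/%d/%Y %I:%M:%S %p")
    # Zero-padding of month/day varies by platform with strftime,
    # so format the pieces manually for a portable, unpadded result.
    return f"{parsed.month}/{parsed.day}/{parsed.year} {parsed.hour}:{parsed.minute:02d}"

converted = [to_24h(s) for s in ["10/1/2013 10:39:14 PM", "10/1/2013 6:39:04 PM"]]
```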
In SQL Server 2012, we can use the Format function to get a suitable date-time format. Use capital 'HH:mm:ss' for the 24-hour format. Example - **Query (24 hour format):** ``` Select Format(cast('2016-03-03 23:59:59' as datetime),'dd-MMM-yyyy HH:mm:ss','en-us') ``` ('HH:mm:ss' in capital letters) **Result** ``` 03-Mar-2016 23:59:59 ``` **Query (12 hour format):** ``` Select Format(cast('2016-03-03 23:59:59' as datetime),'dd-MMM-yyyy hh:mm:ss','en-us') ``` ('hh:mm:ss' in lowercase letters) **Result** ``` 03-Mar-2016 11:59:59 ```
Convert a 12 hour format to 24 hour format in sql server
[ "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2008-r2" ]
I know this is bad, I just don't know how to re-write it... any help appreciated! ``` DECLARE @i INT, @var VARCHAR(max) SET @i = 0 WHILE 1 = 1 BEGIN INSERT INTO mrs.dbo.nav_worldcheck_results ("counter", "entity no", "entity first name", "entity last name", "entity full name", "worldcheck uid", "worldcheck first name", "worldcheck last name", "worldcheck full name", "percentage match") SELECT TOP(1) [Counter] = @i, EN.[entity no_], EN.[name 2], EN.[name], EN.[name 2] + ' ' + EN.[name] AS EntityName, SM.uid AS WorldCheckID, SM.[first name], SM.[last name], SM.[first name] + ' ' + SM.[last name] AS WorldCheckName, dbo.Fn_calculatejarowinkler(EN.[name 2] + ' ' + EN.[name], SM.[first name] + ' ' + SM.[last name]) FROM [NAV_LIVE].dbo.[entitytable$entity] AS EN CROSS JOIN [NAV_LIVE].dbo.[worldcheck master] AS SM WHERE ( EN.inactive = 0 ) AND dbo.Fn_calculatejarowinkler(EN.[name 2] + ' ' + EN.[name], SM.[first name] + ' ' + SM.[last name]) >= .75 AND NOT EXISTS(SELECT * FROM mrs.dbo.nav_worldcheck_results AS WCR WHERE EN.[entity no_] = WCR.[entity no] AND WCR.[worldcheck uid] = SM.uid) SET @i = @i + 1 END ``` I want to avoid the repeated function call in the where statement and I've tried assigning all as variables and then processing but I fall into the trap of you can't mix sets with data retrieval and my poor little brain is in Friday mode! :o/
It doesn't look like you can optimize out your function call. Potentially you could consider pre-computing this value if the query needs to be run regularly and the data doesn't change. I recommend that you read a little more about joins and determine if a Cartesian (cross) join is really necessary in your case (most of the time you actually want an inner join). <http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html> You should really also consult with your DBAs to ensure you are operating on indexed columns, or see if they can add indexes to speed things up. I would also look into learning about your database's (I assume MSSQL) query-tuning capabilities. Notably, the explain plan can tell you in more depth what your query is actually doing and where you could optimize. Although it's impossible to know how the query is being optimized beforehand, in general NOT EXISTS and EXISTS filters can be horribly inefficient, especially if the filter on the not-exists query returns more than one row or if it operates on non-indexed columns. Either way, I recommend `select 1` instead of `select *`, since reducing the number of columns selected is an easy optimization.
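One concrete way to "pre-compute" the expensive function, as suggested above, is to materialize the scores once and then filter the materialized result. This sketch registers a stand-in Python function with SQLite (via `create_function`) and counts invocations to show that each pair is scored exactly once; the similarity function and data are invented, not the asker's Jaro-Winkler UDF:

```python
import sqlite3

calls = 0
def similarity(a, b):
    """Stand-in for an expensive matcher such as a Jaro-Winkler UDF."""
    global calls
    calls += 1
    return 1.0 if a == b else 0.5

conn = sqlite3.connect(":memory:")
conn.create_function("similarity", 2, similarity)
conn.executescript("""
    CREATE TABLE en (name TEXT);
    CREATE TABLE sm (name TEXT);
    INSERT INTO en VALUES ('alice'), ('bob');
    INSERT INTO sm VALUES ('alice'), ('carol');
""")

# Materialize the score once per pair, then filter the stored result,
# instead of calling the function in both SELECT and WHERE.
conn.execute("""
    CREATE TEMP TABLE scores AS
    SELECT en.name AS en_name, sm.name AS sm_name,
           similarity(en.name, sm.name) AS score
    FROM en CROSS JOIN sm
""")
matches = conn.execute(
    "SELECT en_name, sm_name, score FROM scores WHERE score >= 0.75"
).fetchall()
```

Four pairs means exactly four function calls, no matter how many times the filtered result is queried afterwards.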
Thanks all for looking and taking the time to reply. I've actually done as Deadron suggested and used a mix of inner join using soundex as the relationship rather than the cross join. I've also added in the use of "difference" into the where clause rather than my Jaro function and performance has improved dramatically and it's returning hits that would be close enough to worry about. Thanks again!
Avoid Repeated Function call in WHERE Clause
[ "sql", "t-sql", "function" ]
I have the following query. ``` SELECT parent.id AS 'id', parent.CA_NAME as 'name', node.level AS 'level', midpoint.level AS 'midpointlevel', SUM(ad.id IS NOT NULL) AS 'count' FROM category AS parent, category AS midpoint, category AS node LEFT JOIN ad ON ad.id=node.id AND ad.status='A' WHERE (node.`LEFT` BETWEEN parent.`LEFT` AND parent.`RIGHT`) AND (node.`LEFT` BETWEEN midpoint.`LEFT` AND midpoint.`RIGHT`) AND midpoint.id='1' GROUP BY parent.id HAVING IF(midpoint.level=0, node.level < 2, node.level > 0) ORDER BY parent.id ``` I don't want to select the `midpoint.level AS 'midpointlevel'` part but if I remove it the having clause gives an error mentioning : Error Code: 1054. Unknown column 'midpoint.level' in 'having clause'. Can anyone help me to remove the unwanted select section.
The easiest way is to use a simple subquery:
```
SELECT id,
       name,
       -- midpointlevel,  <-- commented out, since we don't need this column
       level,
       count
FROM (
    SELECT parent.id AS 'id', parent.CA_NAME as 'name', node.level AS 'level', midpoint.level AS 'midpointlevel', SUM(ad.id IS NOT NULL) AS 'count' FROM category AS parent, category AS midpoint, category AS node LEFT JOIN ad ON ad.id=node.id AND ad.status='A' WHERE (node.`LEFT` BETWEEN parent.`LEFT` AND parent.`RIGHT`) AND (node.`LEFT` BETWEEN midpoint.`LEFT` AND midpoint.`RIGHT`) AND midpoint.id='1' GROUP BY parent.id HAVING IF(midpoint.level=0, node.level < 2, node.level > 0) ORDER BY parent.id
) subquery
```
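The wrap-in-a-subquery trick generalizes: select the column in the inner query so `HAVING` can see it, then project it away outside. A small illustration with invented data, using SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER, flag INTEGER);
    INSERT INTO sales VALUES ('east', 10, 0), ('east', 20, 0), ('west', 5, 1);
""")

# The inner query selects max_flag so HAVING can reference it;
# the outer query then drops it from the final column list.
rows = conn.execute("""
    SELECT region, total
    FROM (SELECT region, SUM(amount) AS total, MAX(flag) AS max_flag
          FROM sales
          GROUP BY region
          HAVING max_flag = 0)
""").fetchall()
```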
The general syntax is: ``` SELECT column_name, aggregate_function(column_name) FROM table_name WHERE column_name operator value GROUP BY column_name HAVING aggregate_function(column_name) operator value; ``` Refer to the syntax above.
Ignore Field used in HAVING clause on SELECT
[ "mysql", "sql", "having-clause" ]
The checksum is returning **null** in the program, and when I try executing only the query, I get: ``` Msg 245, Level 16, State 1, Line 1 Conversion failed when converting the varchar value 'Audit C recorded' to data type tinyint. ``` Can you please help me with this? ``` SELECT CAST(ABS(CHECKSUM(Indicator)) % 450 AS TINYINT) AS Indicator, CAST(CIndicator AS VARCHAR(100)) AS CIndicator, CAST(SK_IndicatorL2 AS TINYINT) AS SK_IndicatorL2, CAST(ABS(CHECKSUM(IndicatorL2)) % 450 AS TINYINT) AS IndicatorL2 FROM ( VALUES ('Alcohol', 'Alcohol', 'Audit C recorded', 'Audit C recorded (excluding screen in 3y prior to start of quarter)'), ('Alcohol', 'Alcohol', 'Community Detox and TH CAT', 'Community Detox and TH CAT'), ('Alcohol', 'Alcohol', 'Follow Up appointment', 'Follow Up appointment'), ('Healthy Lifestyles', 'Healthy Lifestyles', 'HealthyLifestyle-Aged 19-39', 'HealthyLifestyle-Aged 19-39'), ('Healthy Lifestyles', 'Healthy Lifestyles', 'Aged 19-39 - BMI recorded', 'Aged 19-39 - BMI recorded') ) AS Nis (Indicator, CIndicator, SK_IndicatorL2, IndicatorL2) ``` I tried doing this: `SELECT CAST(ABS(CHECKSUM('Audit C recorded')) % 250 as TinyInt)` and I get a proper integer value.
``` SELECT CAST(ABS(CHECKSUM(Indicator)) % 220 AS TINYINT) AS Indicator, CAST(CIndicator AS VARCHAR(100)) AS CIndicator, CAST(ABS(CHECKSUM(SK_IndicatorL2)) % 220 AS TINYINT) AS SK_IndicatorL2, CAST(IndicatorL2 AS varchar(100)) AS IndicatorL2 FROM ( VALUES ('Alcohol', 'Alcohol', 'Audit C recorded', 'Audit C recorded (excluding screen in 3y prior to start of quarter)'), ('Alcohol', 'Alcohol', 'Community Detox and TH CAT', 'Community Detox and TH CAT'), ('Alcohol', 'Alcohol', 'Follow Up appointment', 'Follow Up appointment'), ('Healthy Lifestyles', 'Healthy Lifestyles', 'HealthyLifestyle-Aged 19-39', 'HealthyLifestyle-Aged 19-39'), ('Healthy Lifestyles', 'Healthy Lifestyles', 'Aged 19-39 - BMI recorded', 'Aged 19-39 - BMI recorded') ) AS Nis (Indicator, CIndicator, SK_IndicatorL2, IndicatorL2) ```
I think she is trying to get the id and the name metric from this query. I am also getting the data type error: `CAST(ABS(CHECKSUM(Indicator)) % 450 as TinyInt)` is meant as an id number, if I am not wrong.
Abs and checksum in tsql
[ "sql", "t-sql" ]
I want to find rows (records) which have a specific value (S) in a column (Type), and insert multiple rows (e.g. 2) based on each such row into the same table. For example, in table t1 below, I want for each row of type 'S' that 2 rows be inserted with the same ID and Price, a new Counter value (no specific requirement for this field, however the Counter for records with the same ID must be different), and Type 'B'. It means that when inserting 2 rows based on the first record in the table below (1,1200,S,200), the Counter values of the new records must be different from the Counter values of the records with ID=1 already in the table (1200 and 1201). So, in the initial table there were three records with Type 'S', and in the final table, for each of those records, 2 new records with Type 'B' and a new Counter value are inserted:
```
ID  Counter  Type  Price
------------------------
1   1200     S     200
1   1201     T     400
2   1200     T     500
3   1546     S     100
3   1547     S     70
4   2607     M     250
```
The output table t1 will be:
```
ID  Counter  Type  Price
------------------------
1   1200     S     200
1   1202     B     200
1   1203     B     200
1   1201     T     400
2   1200     T     500
3   1546     S     100
3   1548     B     100
3   1549     B     100
3   1547     S     70
3   1550     B     70
3   1552     B     70
4   2607     M     250
```
You just have to run this command **twice**:
```
insert into t1
with w(max) as (
  select max(t.counter) from t1 t  -- useful to get the max counter value
)
select t.id, w.max + rownum, 'B', t.price  -- max + rownum then gives new values
from t1 t, w
where t.type != 'B';  -- avoid duplicating rows added the first time
```
This gives:
```
1   1   1200   S   200
2   1   2617   B   200
3   1   2611   B   200
4   1   1201   T   400
5   1   2618   B   400
6   1   2612   B   400
7   2   1200   T   500
8   2   2613   B   500
9   2   2619   B   500
10  3   1547   S   70
11  3   2609   B   70
12  3   2615   B   70
13  3   1546   S   100
14  3   2614   B   100
15  3   2608   B   100
16  4   2607   M   250
17  4   2610   B   250
18  4   2616   B   250
```
You need an insert select statement: ``` insert into t1 (ID, Counter, Type, Price) select ID, Counter+1, 'B', Price from t1 where Type = 'S' union all select ID, Counter+2, 'B', Price from t1 where Type = 'S'; ``` EDIT: Here is a statement that matches your criteria mentioned in your remark below. It gets the maximum Counter per ID and adds the count # of the added entry to the ID (1, 2, 3, ...) to it. ``` insert into t1 (ID, Counter, Type, Price) select ID, (select max(Counter) from t1 where ID = src.ID) + row_number() over (partition by ID order by Price) as new_counter, 'B' as Type, Price from ( select ID, Price from t1 join (select * from dual union all select * from dual) twice where t1.Type = 'S' ) src; ```
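The insert-select idea (duplicating each 'S' row with fresh counters) can be sketched portably by cross-joining against a two-row helper, similar in spirit to the `dual`-based trick above. This runs on SQLite via Python's `sqlite3`, with a cut-down version of the question's table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (ID INTEGER, Counter INTEGER, Type TEXT, Price INTEGER);
    INSERT INTO t1 VALUES (1, 1200, 'S', 200), (1, 1201, 'T', 400),
                          (3, 1546, 'S', 100);
""")

# Snapshot the current max counter per ID, then duplicate every 'S' row
# twice by cross-joining a two-row helper (k = 1, 2).
conn.executescript("""
    CREATE TEMP TABLE mx AS SELECT ID, MAX(Counter) AS mc FROM t1 GROUP BY ID;
    INSERT INTO t1 (ID, Counter, Type, Price)
    SELECT s.ID, mx.mc + n.k, 'B', s.Price
    FROM t1 s
    JOIN mx ON mx.ID = s.ID
    CROSS JOIN (SELECT 1 AS k UNION ALL SELECT 2 AS k) n
    WHERE s.Type = 'S';
""")

b_rows = conn.execute(
    "SELECT ID, Counter FROM t1 WHERE Type = 'B' ORDER BY ID, Counter"
).fetchall()
```

Snapshotting the per-ID maximum first keeps the new counters unique even though the insert reads and writes the same table.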
How to insert multiple new records based on record(s) already in the table (in Oracle)?
[ "sql", "oracle", "insert", "multiple-records" ]
Suppose a table 'domains' contains a column 'dname'. It consists of 'india.com', 'russia.net', 'brazil.com', 'canada.biz', etc. I need to update the table 'domains' for rows where the column 'dname' ends with '.com'. Please help.
``` UPDATE domains SET col_name = 'column_value' WHERE dname LIKE '%.com'; ``` Syntax for this one is ``` UPDATE table_name SET col_name = 'column_value' WHERE column_name LIKE '%___'; ```
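A runnable sanity check of the `LIKE '%.com'` filter (SQLite via Python's `sqlite3`; the `tld` column and its value are invented purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE domains (dname TEXT, tld TEXT)")
conn.executemany("INSERT INTO domains VALUES (?, NULL)",
                 [('india.com',), ('russia.net',), ('brazil.com',), ('canada.biz',)])

# '%.com' anchors the match at the end of the string,
# so only the two .com rows are updated.
cur = conn.execute("UPDATE domains SET tld = 'commercial' WHERE dname LIKE '%.com'")
updated = cur.rowcount
```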
Try this: ``` update domain set column='your value' where `name` like '%.com' ```
How update columns that ends with particular text?
[ "mysql", "sql", "conditional-statements", "where-clause" ]
I intend to make an app which is a sort of dictionary for a slang/a dialect. There are not many words (fewer than 1000). To store the data, I have 2 solutions: * storing the data in a clean SQLite database (which is what I usually do in my apps) * storing the data in 26 XML files According to you, which solution would be the most flexible and the cleanest to use? Thank you
1. For better performance you should use SQLite instead of XML. 2. XML is more complex, and you have to parse the data by reading tags, which decreases performance and increases complexity.
Using an SQLite database gives more flexibility than storing in XMLs: 1. you can manage the data easily; other operations like searching, adding more, or deleting (and related operations) will be easier 2. you won't need to handle the store operations manually; if you use XMLs then you need to store those separately, which will be more complex. (To be honest, I don't find any reason to use 26 XMLs in your case.)
xml or sql for small android databases?
[ "android", "sql", "xml", "sqlite" ]
I am trying to display data from 3 separate tables. **1.** product_master contains data regarding products like image, price, quantity, etc. **2.** attribute_master contains only attribute_id and attribute_name. **3.** product_attrib_master contains data like prod_id (foreign key), attribute_id (foreign key), and product_attribute_value. Now I have fired the following query: ``` SELECT pm.prod_name, am.attribute_name, pa.product_attribute_value FROM product_attrib_master pa LEFT JOIN attribute_master am ON pa.attribute_id = am.attribute_id LEFT JOIN product_master pm ON pa.prod_id=pm.prod_id ORDER BY pa.prod_id; ``` This query shows the following result: ![enter image description here](https://i.stack.imgur.com/TYGZu.jpg) Now I want to display data of only a specific item. Suppose I want to display data of the item `Nokia Lumia 925` with `prod_id = 12`. What will I have to do? Can anyone suggest how?
Hello, you need to add a **WHERE** clause, which filters the resulting data. The **AND** in the **WHERE** clause tells the SQL engine to apply both conditions: ``` SELECT pm.prod_name, am.attribute_name, pa.product_attribute_value FROM product_attrib_master pa LEFT JOIN attribute_master am ON pa.attribute_id = am.attribute_id LEFT JOIN product_master pm ON pa.prod_id = pm.prod_id WHERE pm.prod_name = 'Nokia Lumia 925' AND pm.prod_id = 12 ORDER BY pa.prod_id; ```
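To see the filter in action end to end, here is the same three-table join with a `WHERE` clause over invented sample rows (SQLite via Python's `sqlite3`; the attribute names and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product_master (prod_id INTEGER, prod_name TEXT);
    CREATE TABLE attribute_master (attribute_id INTEGER, attribute_name TEXT);
    CREATE TABLE product_attrib_master (prod_id INTEGER, attribute_id INTEGER,
                                        product_attribute_value TEXT);
    INSERT INTO product_master VALUES (12, 'Nokia Lumia 925'), (13, 'Other Phone');
    INSERT INTO attribute_master VALUES (1, 'Color'), (2, 'Storage');
    INSERT INTO product_attrib_master VALUES (12, 1, 'Black'), (12, 2, '32GB'),
                                             (13, 1, 'White');
""")

# The WHERE clause restricts the joined result to a single product.
rows = conn.execute("""
    SELECT pm.prod_name, am.attribute_name, pa.product_attribute_value
    FROM product_attrib_master pa
    LEFT JOIN attribute_master am ON pa.attribute_id = am.attribute_id
    LEFT JOIN product_master pm ON pa.prod_id = pm.prod_id
    WHERE pm.prod_id = 12
    ORDER BY am.attribute_id
""").fetchall()
```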
You need to add a where clause to your query. Try this: ``` SELECT pm.prod_name, am.attribute_name, pa.product_attribute_value FROM product_attrib_master pa LEFT JOIN attribute_master am ON pa.attribute_id = am.attribute_id LEFT JOIN product_master pm ON pa.prod_id=pm.prod_id WHERE pm.prod_name ='Nokia Lumia 925' and pm.prod_id = 12 ORDER BY pa.prod_id; ```
How to display record or data of single item from 3 joins
[ "mysql", "sql", "join", "left-join" ]
I have a Table_A and Table_B as below; how can I create a UNION to generate a result set like Table_C, so that the original content of the columns in Table_A and Table_B is kept? Table_A:
```
ID   High_Level_Text
-------------------------
01   High Level Text One
02   High Level Text Two
03   High Level Text Thr
```
Table_B:
```
ID   Key   Low_Level_Text
----------------------------------
01   001   Low Level Text 01/001
01   002   Low Level Text 01/002
01   003   Low Level Text 01/003
02   001   Low Level Text 02/001
03   002   Low Level Text 03/002
```
Table_C:
```
ID   Key   High_Level_Text       Low_Level_Text
-------------------------------------------------------
01         High Level Text One
01   001                         Low Level Text 01/001
01   002                         Low Level Text 01/002
01   003                         Low Level Text 01/003
02         High Level Text Two
02   001                         Low Level Text 02/001
03         High Level Text Thr
03   002                         Low Level Text 03/002
```
In Table_C, records of the High_Level_Text column are left empty for those records where there's no original data from Table_A, and likewise for the Low_Level_Text and Key columns from Table_B.
You can do: ``` select a.id as "ID", null as "KEY", a.High_level_text as "High_level_text", null as "Low_Level_Text" from table_a a union select b.id, b.key, null, b.Low_Level_Text from table_b b order by 1, 2 ``` `sqlfiddle demo` This sets the columns you want in your result in the first `select` (the column names are defined in the first select of an `UNION`). Then you `order by` at the end, which will affect the complete resultset.
Try this: ``` SELECT ID, NULL as Key, High_Level_Text, NULL as Low_Level_Text FROM Table_A UNION SELECT ID, Key, NULL as High_Level_Text, Low_Level_Text FROM Table_B ORDER BY ID ``` Check this [SQL FIDDLE DEMO](http://sqlfiddle.com/#!6/4fc7b/1)
Combine two tables without ruining the content of the original columns
[ "sql", "database", "union" ]
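The NULL-padding `UNION` from the answers above can be sketched with `sqlite3`; the sample rows are a trimmed, hypothetical subset of Table_A and Table_B:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_a (id TEXT, high_level_text TEXT);
CREATE TABLE table_b (id TEXT, key TEXT, low_level_text TEXT);
INSERT INTO table_a VALUES ('01', 'High Level Text One'), ('02', 'High Level Text Two');
INSERT INTO table_b VALUES ('01', '001', 'Low Level Text 01/001'),
                           ('02', '001', 'Low Level Text 02/001');
""")

# Pad each side with NULLs so both SELECTs have the same column list;
# the ORDER BY uses the column names of the first SELECT.
rows = conn.execute("""
    SELECT id, NULL AS key, high_level_text, NULL AS low_level_text FROM table_a
    UNION
    SELECT id, key, NULL, low_level_text FROM table_b
    ORDER BY id, key
""").fetchall()
```

Because NULLs sort first in SQLite's ascending order, each header row from `table_a` lands directly above its detail rows, as in Table_C.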
EDIT: Please see the edit below for the full code, the first one works. This might be a different problem that what was is in the title. I am trying to put a cursor's count result into an integer, but the integer always stays at 0. ``` DECLARE v_count int := 0; CURSOR logins IS SELECT count(*) FROM big_table; BEGIN OPEN logins; FETCH logins into v_count; IF v_count > 10 THEN DBMS_OUTPUT.PUT_LINE ('Hi mom'); END IF; CLOSE logins; END; ``` This is only a sample of the code in which I want to do this, this fetch is actually in a LOOP. I tried putting a "SELECT count(\*) INTO v\_count..." before the IF statement. It works, but it is awfully slow. Thank you! EDIT: As pointed out by A.B.Cade, this sample works. So my problem might be elsewhere in my code, you'll find the whole thing below: ``` DECLARE v_hour int := 0; v_maxConn int; v_count int; --set the time parameters --v_day = the first day you loop in v_day int := 16; v_month varchar2 (2) := 10; v_year varchar2 (4) := 2013; v_lastDay int := 16; CURSOR logins IS SELECT count(*) FROM ( SELECT (SELECT user_function_name FROM apps.fnd_form_functions_vl fffv WHERE (fffv.function_id = a.function_id)) "Current Function", first_connect, last_connect, user_name, session_id, apps.fnd_profile.value_specific ('ICX_SESSION_TIMEOUT', a.user_id, a.responsibility_id, a.responsibility_application_id, a.org_id, NULL ) TIMEOUT, counter "How many hits a User has made", a.limit_connects "No of hits allowed in session" FROM icx.icx_sessions a, fnd_user b WHERE a.user_id = b.user_id AND last_connect > SYSDATE - 30 AND b.user_name NOT LIKE 'GUEST' ) WHERE to_date(v_year || '-' || v_month || '-' || v_day || ' ' || v_hour || ':00:00','YYYY-MM-DD HH24:MI:SS') between first_connect and last_connect ; out_menu varchar2(500); out_path varchar2(50) := '/usr/tmp/QA'; file_out utl_file.file_type ; BEGIN file_out := utl_file.fopen(out_path,'debug_rapport.txt','W'); OPEN logins; LOOP EXIT WHEN v_day > v_lastDay; v_maxConn := 0; LOOP EXIT WHEN v_hour 
> 23; FETCH logins into v_count; IF v_count > v_maxConn THEN v_maxConn := v_count; END IF; out_menu := 'Debug: ' || to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') || ' -> dateLoop: ' || to_date(v_year || '-' || v_month || '-' || v_day || ' ' || v_hour || ':00:00','YYYY-MM-DD HH24:MI:SS') || ' -> v_maxConn: ' || v_maxConn || ' -> hour:' || v_hour; utl_file.put_line(file_out,out_menu); v_hour := v_hour + 1; END LOOP; DBMS_OUTPUT.PUT_LINE (v_year || '-' || v_month || '-' || v_day || ';' || v_maxConn); v_hour := 0; v_day := v_day + 1; END LOOP; CLOSE logins; END; ``` The code is to see concurrent connections within each hour of a given day stretch. It works if I replace the cursor code in the BEGIN statement with the SELECT statement that is found within the cursor. Here is the output from my "Debug" file when I use the cursor: ``` Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:0 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:1 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:2 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:3 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:4 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:5 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:6 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:7 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:8 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:9 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:10 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:11 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:12 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:13 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:14 Debug: 2013-10-24 10:19:32 -> 
dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:15 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:16 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:17 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:18 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:19 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:20 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:21 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:22 Debug: 2013-10-24 10:19:32 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:23 ``` And here is the output when I replace the cursor with "SELECT count(\*) INTO v\_count [...]" : ``` Debug: 2013-10-24 10:00:40 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:0 Debug: 2013-10-24 10:00:54 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:1 Debug: 2013-10-24 10:01:09 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:2 Debug: 2013-10-24 10:01:23 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:3 Debug: 2013-10-24 10:01:37 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:4 Debug: 2013-10-24 10:01:50 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:5 Debug: 2013-10-24 10:02:05 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:6 Debug: 2013-10-24 10:02:20 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:7 Debug: 2013-10-24 10:02:33 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:8 Debug: 2013-10-24 10:02:47 -> dateLoop: 13-10-16 -> v_maxConn: 0 -> hour:9 Debug: 2013-10-24 10:03:00 -> dateLoop: 13-10-16 -> v_maxConn: 1 -> hour:10 Debug: 2013-10-24 10:03:15 -> dateLoop: 13-10-16 -> v_maxConn: 1 -> hour:11 Debug: 2013-10-24 10:03:28 -> dateLoop: 13-10-16 -> v_maxConn: 1 -> hour:12 Debug: 2013-10-24 10:03:41 -> dateLoop: 13-10-16 -> v_maxConn: 1 -> hour:13 Debug: 2013-10-24 10:03:54 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:14 Debug: 2013-10-24 10:04:08 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:15 Debug: 2013-10-24 10:04:22 -> 
dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:16 Debug: 2013-10-24 10:04:35 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:17 Debug: 2013-10-24 10:04:47 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:18 Debug: 2013-10-24 10:05:00 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:19 Debug: 2013-10-24 10:05:13 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:20 Debug: 2013-10-24 10:05:25 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:21 Debug: 2013-10-24 10:05:38 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:22 Debug: 2013-10-24 10:05:51 -> dateLoop: 13-10-16 -> v_maxConn: 2 -> hour:23 ``` You see v\_maxConn doesn't stay at zero, therefore v\_count is being written into. It takes 5 minutes for only one day, though. Your help is greatly appreciated!
The problem was that I didn't fully understand how cursors worked. I replaced the cursor with a temporary table in which I put my static query : ``` CREATE TABLE apps.audit_custom as select [...] ``` Then I query this table in my LOOP, as such: ``` SELECT count(*) into v_count FROM apps.audit_custom WHERE to_date(v_year || '-' || v_month || '-' || v_day || ' ' || v_hour || ':00:00','YYYY-MM-DD HH24:MI:SS') between first_connect and last_connect; ``` Finally, I drop the table when I'm done. It's fastest way I found to make my idea work. Thanks for helping guys!
I modified your code a little; this works: ``` DECLARE v_count int := 0; CURSOR logins IS SELECT count(*) FROM big_table; BEGIN OPEN logins; loop FETCH logins into v_count; IF v_count >= 10 THEN DBMS_OUTPUT.PUT_LINE ('Hi mom'); exit; END IF; end loop; CLOSE logins; END; ``` You forgot the loop, end loop, and a statement to exit; otherwise there would be an infinite loop. Hope this helps. Note: if the count is less than 10, this version still loops forever.
PL/SQL cursor: Fetch count into an integer, bad syntax?
[ "sql", "oracle", "plsql", "cursor" ]
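The heart of the asker's final solution is counting, for each hour of the day, how many session intervals cover that timestamp. A plain-Python sketch of that counting logic, with made-up session intervals:

```python
from datetime import datetime

# Made-up session intervals (first_connect, last_connect).
sessions = [
    (datetime(2013, 10, 16, 9, 30), datetime(2013, 10, 16, 14, 10)),
    (datetime(2013, 10, 16, 13, 0), datetime(2013, 10, 16, 15, 45)),
    (datetime(2013, 10, 16, 20, 5), datetime(2013, 10, 16, 20, 50)),
]

def concurrent_at(ts, intervals):
    """Number of sessions whose [first, last] interval covers ts."""
    return sum(1 for first, last in intervals if first <= ts <= last)

# One probe per hour, like the inner loop of the PL/SQL block.
per_hour = [concurrent_at(datetime(2013, 10, 16, h), sessions) for h in range(24)]
max_conn = max(per_hour)
```

The third session never overlaps an exact on-the-hour probe, which mirrors how the hourly sampling in the question can undercount short sessions.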
I'm listing questions with this ``` SELECT q.qTitle, q.qDescription, q.qCreatedOn, u.uCode, u.uFullname, qcat.qcatTitle, q.qId, q.qStatus FROM tblQuestion AS q INNER JOIN tblUser AS u ON q.uId = u.uId INNER JOIN tblQuestionCategory AS qcat ON q.qcatId = qcat.qcatId WHERE (q.qStatus = 1) ORDER BY q.qCreatedOn DESC OFFSET @page*10 ROWS FETCH NEXT 10 ROWS ONLY ``` But there is a problem on my server: ``` Incorrect syntax near 'OFFSET'. Invalid usage of the option NEXT in the FETCH statement. ``` How can I modify my query for SQL Server 2008? One more question: how can I write a stored procedure for listing pages? Here is my full code <http://codepaste.net/gq5n6c> Answer: <http://codepaste.net/jjrkqr>
As found out in the comments, the reason for the error is that SQL Server 2008 does not support `OFFSET ... FETCH`; that syntax was introduced in SQL Server 2012. You can rewrite the query using `ROW_NUMBER()` instead. Something like this: ``` SELECT column1 FROM ( SELECT column1, ROW_NUMBER() OVER (ORDER BY column_id) AS x FROM mytable ) AS tbl WHERE tbl.x BETWEEN 20 AND 30 ``` In your code: ``` SELECT * FROM (SELECT q.qTitle, q.qDescription, q.qCreatedOn, u.uCode, u.uFullname, qcat.qcatTitle, q.qId, q.qStatus, ROW_NUMBER() OVER(ORDER BY q.qCreatedOn DESC) AS rownumber FROM tblQuestion AS q INNER JOIN tblUser AS u ON q.uId = u.uId INNER JOIN tblQuestionCategory AS qcat ON q.qcatId = qcat.qcatId WHERE q.qStatus = 1 ) as somex WHERE somex.rownumber BETWEEN 11 AND 20 ``` --- If you stay on SQL Server 2012 and keep the `OFFSET` syntax, make sure `@page` is declared and set, for example: ``` declare @page int set @page = 5 -- you may set any value here SELECT q.qTitle, q.qDescription, q.qCreatedOn, u.uCode, u.uFullname, qcat.qcatTitle, q.qId, q.qStatus FROM tblQuestion AS q INNER JOIN tblUser AS u ON q.uId = u.uId INNER JOIN tblQuestionCategory AS qcat ON q.qcatId = qcat.qcatId WHERE (q.qStatus = 1) ORDER BY q.qCreatedOn DESC OFFSET (@page*10) ROWS FETCH NEXT 10 ROWS ONLY ```
For people using Entity Framework, particulary database first, this error can occur if you develop with SQL 2012 but deploy to an earlier version. The problem will occur if you use Take...Skip functionality, as SQL 2012 has a new syntax for this. See: <http://erikej.blogspot.co.uk/2014/12/a-breaking-change-in-entity-framework.html> The fix is to edit your .edmx file and change the ProviderManifestToken value from 2012 to your database version, e.g. 2008.
"Incorrect syntax near 'OFFSET'": modify SQL command from 2012 to 2008
[ "sql", "sql-server", "sql-server-2008", "paging" ]
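A runnable sketch of the `ROW_NUMBER()` pagination pattern from the accepted answer, using `sqlite3` (window functions need SQLite 3.25 or newer, which recent Python builds bundle; the table, rows and page size below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE q (qId INTEGER PRIMARY KEY, qTitle TEXT)")
conn.executemany("INSERT INTO q VALUES (?, ?)",
                 [(i, "question %d" % i) for i in range(1, 31)])

page = 1  # zero-based page number, 10 rows per page

# Number every row once in a derived table, then filter on that number,
# exactly as the SQL Server 2008 rewrite does.
rows = conn.execute("""
    SELECT qId, qTitle FROM (
        SELECT qId, qTitle, ROW_NUMBER() OVER (ORDER BY qId) AS rn
        FROM q
    )
    WHERE rn BETWEEN ? * 10 + 1 AND (? + 1) * 10
""", (page, page)).fetchall()
```

Page 1 (the second page) returns rows 11 through 20.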
How can I cast a decimal value to float without getting the result in scientific notation? For example, if my value is `0.000050` as a decimal, when I cast it to float I get `5E-05` I would like to see `0.00005`
This has nothing to do with converting to float. It has to do with converting to text. You need to look at the [`str()`](http://msdn.microsoft.com/en-us/library/ms189527.aspx) function: ``` str( float_expression , total-width , number-of-decimal-places ) ``` where * *float-expression* means what you think it means, * *total-width* is the total field width desired, including sign, decimal place, etc. * *number-of-decimal-places* is the number of decimal places displayed (0-16). If more than 16 is specified, the formatted value is truncated (not rounded) at 16 decimal places. In your case, something like: ``` declare @value float = 0.000050 select str(@value,12,6) ``` should do you. Edited to note: the `str()` function will not display anything in scientific notation. If the problem is that you want to trim trailing zeroes from the decimal value, you can do two things: * Use the `format()` function (SQL Server 2012 only): ``` declare @x decimal(18,6) = 123.010000 select @x as x1 , format(@x,'#,##0.######') as x2 , -- all trailing zeroes trimmed format(@x,'#,##0.000###') as x3 -- min of 3, max of 6 decimal places shown ``` * use `replace()` and `trim()`. Works for any version of SQL Server. ``` declare @x decimal(18,6) = 123.010000 select @x as x1 , replace( rtrim(replace(convert(varchar(32),@x),'0',' ')) , ' ' , '0' ) ```
``` ALTER FUNCTION [dbo].[fnExpStringToDecimal] ( @Number AS varchar(50) ) Returns Decimal(18,7) BEGIN RETURN (SELECT IIF(CHARINDEX ('E', @Number)> 0,CAST(SUBSTRING(@Number, 1, CHARINDEX ('E', @Number)-1)AS DECIMAL(18,7)) * POWER( 10.0000000,CAST(SUBSTRING(@Number, CHARINDEX ('E', @Number)+1, LEN(@Number)) AS DECIMAL(18,7))), @Number)) END ```
Sql cast to float without scientific notation
[ "sql", "sql-server" ]
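Python exhibits the same behaviour: converting a small float to a string falls back to scientific notation, while fixed-point formatting (the rough analogue of T-SQL's `str()`) keeps the plain decimal form. A small sketch:

```python
value = 0.000050

# Plain str() on a small float falls back to scientific notation:
sci = str(value)

# Fixed-point formatting keeps the plain decimal form, like
# str(float_expression, width, decimals) in T-SQL:
fixed = format(value, ".6f")

# Trimming trailing zeros mirrors the RTRIM/REPLACE trick in the answer:
trimmed = fixed.rstrip("0").rstrip(".")
```

As in the answer, the point is that this is a display problem, not a conversion problem: the stored value is the same either way.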
Is there a way to simplify this script so that the CASE statements are not duplicated? It can look acceptable in this shortened example, but in reality the CASE statement is much longer, as I have cases for "every 2 weeks", "every 4 weeks", "monthly", etc. I am using SQL Server and a WHILE statement for performance reasons. Would a CTE or MERGE help? ``` DECLARE @theStartDate DATE DECLARE @Interval INT DECLARE @today DATE DECLARE @eventCharges TABLE ( [EventDate] [datetime], Person_Id int ) SET @today = GETDATE() SET @Interval = 0 -- delete event charges from previous user DELETE FROM @eventCharges -- Insert the calculated transactions WHILE @Interval < 100 BEGIN SET @Interval = @Interval + 1 INSERT INTO @eventCharges SELECT CASE WHEN pcc.Recurrence = 'Daily' THEN DATEADD(DAY, @Interval, @theStartDate) WHEN pcc.Recurrence = 'Weekly' THEN DATEADD(WEEK, @Interval, @theStartDate) END AS EventDate ,pcc.Person_Id FROM @personChargeCurrent pcc WHERE CASE WHEN pcc.Recurrence = 'Daily' THEN DATEADD(DAY, @Interval, @theStartDate) WHEN pcc.Recurrence = 'Weekly' THEN DATEADD(WEEK, @Interval, @theStartDate) END <= @today AND NOT EXISTS(SELECT 1 FROM dbo.PersonChargeTransaction pct WHERE pct.Person_Id = pcc.Person_Id AND pct.PersonCharge_Id = pcc.Id AND pct.TransactionDate = CASE WHEN pcc.Recurrence = 'Daily' THEN DATEADD(DAY, @Interval, @theStartDate) WHEN pcc.Recurrence = 'Weekly' THEN DATEADD(WEEK, @Interval, @theStartDate) END) ORDER BY StartDate END ```
Yes, a CTE should help. Try changing your INSERT statement to: ``` WITH cte as (SELECT CASE WHEN Recurrence = 'Daily' THEN DATEADD(DAY, @Interval, @theStartDate) WHEN Recurrence = 'Weekly' THEN DATEADD(WEEK, @Interval, @theStartDate) END AS EventDate, p.* FROM @personChargeCurrent p) INSERT INTO @eventCharges SELECT cte.EventDate, cte.Person_Id FROM cte WHERE cte.EventDate <= @today AND NOT EXISTS (SELECT 1 FROM dbo.PersonChargeTransaction pct WHERE pct.Person_Id = cte.Person_Id AND pct.PersonCharge_Id = cte.Id AND pct.TransactionDate = cte.EventDate) ORDER BY StartDate ```
You can wrap this in a function: ``` Create Function dbo.IntervalEnd( @recurrence varchar(10), @interval int, @startDate date -- or whatever data type you're using for dates ) returns date as begin return case when @recurrence = 'Daily' then dateadd(day, @interval, @startDate) when @recurrence = 'Weekly' then dateadd(week, @interval, @startDate) end end ``` Then ``` Insert Into @eventCharges Select dbo.IntervalEnd(pcc.Recurrence, @Interval, @theStartDate) as EventDate, pcc.Person_Id From @personChargeCurrent pcc Where dbo.IntervalEnd(pcc.Recurrence, @Interval, @theStartDate) <= @today And Not Exists ( Select 1 From dbo.PersonChargeTransaction pct Where pct.Person_Id = pcc.Person_Id And pct.PersonCharge_Id = pcc.Id And pct.TransactionDate = dbo.IntervalEnd(pcc.Recurrence, @Interval, @theStartDate) ) ``` There is overhead to calling a scalar function. You'll have to decide if the slightly reduced performance is worth the tradeoff for increased readability.
How to avoid repetitive CASE statements in SQL WHILE loop doing INSERT
[ "sql", "sql-server", "t-sql" ]
``` SELECT produkte_eintraege.id, produkte_eintraege.hersteller, produkte_eintraege.titel, produkte_eintraege.img_url FROM produkte_eintraege JOIN produkte_eigenschaften ON produkte_eintraege.id = produkte_eigenschaften.produkte_eintraege_id WHERE (produkte_eigenschaften.eigenschaften_merkmale_id = 1 OR produkte_eigenschaften.eigenschaften_merkmale_id = 2) AND (produkte_eigenschaften.eigenschaften_merkmale_id = 3) ORDER by hits DESC ``` Result = empty ``` SELECT produkte_eintraege.id, produkte_eintraege.hersteller, produkte_eintraege.titel, produkte_eintraege.img_url FROM produkte_eintraege JOIN produkte_eigenschaften ON produkte_eintraege.id = produkte_eigenschaften.produkte_eintraege_id WHERE (produkte_eigenschaften.eigenschaften_merkmale_id = 1 OR produkte_eigenschaften.eigenschaften_merkmale_id = 2) ORDER by hits DESC ``` Result = works Whats wrong in this part?: ``` AND (produkte_eigenschaften.eigenschaften_merkmale_id = 3) ``` Here is the table: ``` INSERT INTO `produkte_eigenschaften` (`produkte_eintraege_id`, `eigenschaften_merkmale_id`) VALUES (1, 2), (7, 1), (1, 3); ``` That means that one product has multiple entries in this table.
Taking into account your comments to Walter's answer, which you should've put in your question in the first place, to achieve your goal you have to `GROUP BY` product id and put your conditions in `HAVING` clause ``` SELECT produkte_eintraege_id FROM produkte_eigenschaften -- WHERE eigenschaften_merkmale_id IN(1, 2, 3) GROUP BY produkte_eintraege_id HAVING MAX(eigenschaften_merkmale_id IN (1, 2)) = 1 AND MAX(eigenschaften_merkmale_id = 3) = 1 ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/e3f37/2)** demo Then you can `JOIN` back to `produkte_eintraege` ``` SELECT e.id, e.hersteller, e.titel, e.img_url FROM ( SELECT produkte_eintraege_id FROM produkte_eigenschaften -- WHERE eigenschaften_merkmale_id IN(1, 2, 3) GROUP BY produkte_eintraege_id HAVING MAX(eigenschaften_merkmale_id IN (1, 2)) = 1 AND MAX(eigenschaften_merkmale_id = 3) = 1 ) q JOIN produkte_eintraege e ON q.produkte_eintraege_id = e.id ORDER BY hits DESC ```
In your query you have ``` WHERE (produkte_eigenschaften.eigenschaften_merkmale_id = 1 OR produkte_eigenschaften.eigenschaften_merkmale_id = 2) AND (produkte_eigenschaften.eigenschaften_merkmale_id = 3) ORDER by hits DESC ``` To paraphrase the query you are saying ``` SELECT * FROM a join b WHERE (a.id=1 OR a.id=2) AND (a.ID=3) ``` A value cannot equal both 2 and three. **SOLUTIONS**:There are two solutions that may solve your problem: ``` SELECT pe.id, pe.hersteller, pe.titel, pe.img_url FROM produkte_eintraege pe JOIN produkte_eigenschaften pe2 ON pe.id = pe2.produkte_eintraege_id WHERE (pe2.eigenschaften_merkmale_id = 1 OR pe2.eigenschaften_merkmale_id = 2 OR pe2.eigenschaften_merkmale_id = 3) ORDER by hits DESC ``` Or you can just use `IN` ``` SELECT pe.id, pe.hersteller, pe.titel, pe.img_url FROM produkte_eintraege pe JOIN produkte_eigenschaften pe2 ON pe.id = pe2.produkte_eintraege_id WHERE pe2.eigenschaften_merkmale_id IN (1,2,3) ORDER by hits DESC ``` I hope that helps.
Inner join multiple entries in table comparison
[ "mysql", "sql", "join" ]
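The core point of both answers (one column value can never satisfy `= 1 OR = 2` and `= 3` in the same row), plus the `GROUP BY ... HAVING` rewrite from the accepted answer, can be checked with `sqlite3` on the question's three sample rows (column names shortened here for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE props (prod_id INTEGER, merkmal_id INTEGER)")
conn.executemany("INSERT INTO props VALUES (?, ?)", [(1, 2), (7, 1), (1, 3)])

# A single row's value cannot be 1-or-2 AND 3 at the same time:
impossible = conn.execute("""
    SELECT COUNT(*) FROM props
    WHERE (merkmal_id = 1 OR merkmal_id = 2) AND merkmal_id = 3
""").fetchone()[0]

# "Has some attribute in (1, 2) AND has attribute 3" is a per-product
# condition, so it belongs in HAVING after grouping by product:
matching = [r[0] for r in conn.execute("""
    SELECT prod_id FROM props
    GROUP BY prod_id
    HAVING MAX(merkmal_id IN (1, 2)) = 1 AND MAX(merkmal_id = 3) = 1
""")]
```

Product 1 has attributes 2 and 3, so it matches; product 7 only has attribute 1.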
I'm working on a select. I've got a table called vriend and a table called gebruiker. Now I have 2 selects that give me the results I want, but I want to merge them together: ``` SELECT g.naam,g.gebruikerID FROM gebruiker g INNER JOIN vriend v on g.gebruikerID = v.gebruikerID_Jezelf WHERE g.gebruikerID IN(SELECT gebruikerID_Jezelf FROM vriend); SELECT g.naam,g.gebruikerID FROM gebruiker g INNER JOIN vriend v on g.gebruikerID = v.gebruikerID_Persoon WHERE g.gebruikerID IN(SELECT gebruikerID_Persoon FROM vriend); ``` What I want is to merge those 2 select statements together, **NOT WITH A UNION**, but for example: select 1 gives: ``` NAAM | ID ----------------- Henk | 1 Karel | 2 ``` Select 2 gives: ``` NAAM | ID ----------------- Andrew | 4 Piet | 5 ``` The merge would then be ``` NAAM | ID | NAAM | ID ------------------------------------------ Henk | 1 | Andrew | 4 Karel | 2 | Piet | 5 ```
``` SELECT g.naam,g.gebruikerID ,g2.naam,g2.gebruikerID FROM gebruiker g , gebruiker g2, vriend v WHERE g.gebruikerID = v.gebruikerID_Jezelf OR g2.gebruikerID = v.gebruikerID_Persoon AND g.gebruikerID IN(SELECT gebruikerID_Jezelf FROM vriend) OR g2.gebruikerID IN(SELECT gebruikerID_Persoon FROM vriend); ```
``` SELECT g.naam, g.gebruikerID , g2.naam, g2.gebruikerID FROM vriend v JOIN gebruiker g1 on g1.gebruikerID = v.gebruikerID_Jezelf JOIN gebruiker g2 ON g2.gebruikerID = v.gebruikerID_Persoon ; ```
merge 2 selects into 1
[ "sql", "database", "oracle", "select" ]
I have a database and a lot of tables inside it. I wrote a definition into each table's and each column's definition property. Now, using a query, I want to see all table and column definitions. Can you please help me?
If you're using SQL Server: ``` SELECT obj.name, cols.name as columnname from dbname.sys.objects as obj inner join dbname.sys.columns as cols on obj.object_id = cols.object_id where obj.type='U' ```
Try this to get the definition of a table in SQL: > sp\_help 'tableName' Example: `sp_help 'MyTable'`. We can also do the same thing as ``` exec sp_help 'MyTable' ``` because **sp\_help** is a predefined *stored procedure*, and we can execute a stored procedure using the "**exec**" keyword (or the full "**execute**" keyword).
Getting the definition of the tables and columns via a SQL statement
[ "sql", "sql-server-2008" ]
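For comparison, SQLite exposes the same kind of metadata through `sqlite_master` and `PRAGMA table_info`, playing the roles of `sys.objects` and `sys.columns`. A small sketch (the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id INTEGER, name TEXT)")

# sqlite_master lists the objects; PRAGMA table_info lists each
# table's columns (cid, name, type, notnull, dflt_value, pk).
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
columns = [r[1] for r in conn.execute("PRAGMA table_info(demo)")]
```

The same iterate-tables-then-columns shape works against `sys.objects`/`sys.columns` on SQL Server.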
I've got a query in which I am returning multiple rows for a given event, for example: ``` ID | DateTime | String 1017436 | 2013-09-13 05:19:20.000 | Hello 1017436 | 2013-09-13 11:49:00.000 | World ``` I want the result to contain only the *earliest* occurrences of the row for any given ID, but am running into trouble. I originally, thought a query like this would be the answer: ``` ; WITH cte AS ( SELECT *, rn = ROW_NUMBER() OVER (PARTITION BY ixBug ORDER BY dt) FROM dbo.BugEvent ) SELECT ixBug, dt, s FROM cte WHERE ixBug IN (SELECT Bug.ixBug FROM dbo.Bug JOIN dbo.Mailbox ON Mailbox.ixMailbox = Bug.ixMailbox WHERE ixBug = '1017436' AND YEAR(dtOpened) >= '2013' AND MONTH(dtOpened) = '09' AND sOriginalTitle NOT LIKE '\[web\]%' ESCAPE '\' AND dbo.Bug.ixProject = (SELECT ixProject FROM dbo.Project WHERE sProject = 'Support') AND dbo.Bug.ixCategory = (SELECT ixCategory FROM dbo.Category WHERE sCategory = '.inquiry') AND Bug.ixBug NOT IN(SELECT Bug.ixBug FROM dbo.Bug JOIN dbo.Mailbox ON Mailbox.ixMailbox = Bug.ixMailbox WHERE YEAR(dtOpened) >= '2013' AND MONTH(dtOpened) <= '09' AND sOriginalTitle LIKE '\[web\]%' ESCAPE '\' AND dbo.Bug.ixProject = (SELECT ixProject FROM dbo.Project WHERE sProject = 'Support') AND dbo.Bug.ixCategory = (SELECT ixCategory FROM dbo.Category WHERE sCategory = '.inquiry'))) AND sVerb = 'Incoming Email'; ``` But, for some reason the result keeps both rows.
You can use window functions for this, either `ROW_NUMBER()` or `MIN()`. The idea is to partition the rows by the ID - `OVER (PARTITION BY id)` - and then either assign row numbers (ordered by the datetime) or find the minimum datetime per ID. Solution with `ROW_NUMBER()`: ``` ; WITH cte AS ( SELECT id, datetime, string, rn = ROW_NUMBER() OVER (PARTITION BY id ORDER BY datetime) FROM tableX ) SELECT id, datetime, string FROM cte WHERE rn = 1 ; ``` and with `MIN()`: ``` ; WITH cte AS ( SELECT id, datetime, string, min_datetime = MIN(datetime) OVER (PARTITION BY id) FROM tableX ) SELECT id, datetime, string FROM cte WHERE datetime = min_datetime ; ``` The second version has slightly different behaviour: if there are two (or more) rows with exactly the same datetime for an ID, they will both be in the result.
You can use `ROW_NUMBER()` which generates sequential number that you can filter with. ``` SELECT ID, DateTime, String FROM ( SELECT ID, DateTime, String, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY DateTime) RN FROM tableName ) a WHERE RN = 1 ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!6/e59d9/1)
Select row with the earliest date
[ "sql", "sql-server" ]
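The same "earliest row per ID" result can also be obtained without window functions, by joining back to a `MIN()` per group; this is handy on engines that lack `OVER()`. A `sqlite3` sketch using the question's two rows plus one made-up extra ID:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, dt TEXT, s TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1017436, '2013-09-13 05:19:20', 'Hello'),
    (1017436, '2013-09-13 11:49:00', 'World'),
    (999,     '2013-09-14 08:00:00', 'Other'),
])

# MIN(dt) per id, joined back to keep the rest of the row -- the same
# idea as MIN() OVER (PARTITION BY id), without window functions.
rows = conn.execute("""
    SELECT e.id, e.dt, e.s
    FROM events e
    JOIN (SELECT id, MIN(dt) AS min_dt FROM events GROUP BY id) m
      ON e.id = m.id AND e.dt = m.min_dt
    ORDER BY e.id
""").fetchall()
```

Only the 05:19:20 'Hello' row survives for ID 1017436; like the `MIN()` window variant, ties on the datetime would keep both rows.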
Just wondering if the way I put COMMIT in the code block is appropriate or not. Should I put them after the loop finishes, after each insert statement, or after the if-else statement? ``` FOR VAL1 IN (SELECT A.* FROM TABLE_A A) LOOP IF VAL1.QTY >= 0 THEN INSERT INTO TEMP_TABLE VALUES('MORE OR EQUAL THAN 0'); COMMIT; /*<-- Should I put this here?*/ INSERT INTO AUDIT_TABLE VALUES('DATA INSERTED >= 0'); COMMIT; /*<-- Should I put this here too?*/ ELSE INSERT INTO TEMP_TABLE VALUES ('0'); COMMIT; /*<-- Should I put this here too?*/ INSERT INTO AUDIT_TABLE VALUES('DATA INSERTED IS 0'); COMMIT; /*<-- Should I put this here too?*/ END IF; /*Or put commit here?*/ END LOOP; /*Or here??*/ ```
Generally, committing in a loop is not a good idea, especially after every DML statement in that loop. Doing so forces Oracle (LGWR) to write data to the redo log files, and you may find yourself in a situation where other sessions hang because of the `log file sync` wait event. Or you may face `ORA-1555` because undo segments will be cleared more often. Divide your DML into logical units of work (transactions) and commit when that unit of work is done: not before, not too late, and not in the middle of a transaction. This will allow you to keep your database in a consistent state. If, for example, two `insert` statements form one unit of work (one transaction), it makes sense to commit or roll them back together, not separately. So, generally, you should commit as rarely as possible. If you **have to** commit in a loop, introduce a threshold. For instance, issue a commit after, let's say, 150 rows: ``` declare l_commit_rows number := 0; begin for i in (select * from some_table) loop l_commit_rows := l_commit_rows + 1; insert into some_table(..) values(...); if mod(l_commit_rows, 150) = 0 then commit; end if; end loop; -- commit the rest commit; end; ```
It is rarely appropriate; say your insert into TEMP\_TABLE succeeds but your insert into AUDIT\_TABLE fails. You then don't know where you are at all. Additionally, commits will increase the amount of time it takes to perform an operation. It would be more normal to do everything within a single transaction; that is remove the LOOP and perform your inserts in a single statement. This can be done by using a [multi-table insert](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_9014.htm#SQLRF55088) and would look something like this: ``` insert when ( a.qty >= 0 ) then into temp_table values ('MORE OR EQUAL THAN 0') into audit_table values ('DATA INSERTED >= 0') else into temp_table values ('0') into audit_table values ('DATA INSERTED IS 0') select qty from table_a ``` A simple rule is to not commit in the middle of an action; you need to be able to tell exactly where you were if you have to restart an operation. This normally means, go back to the beginning but doesn't have to. For instance, if you were to place your COMMIT inside your loop but outside the IF statement then you know that that has completed. You'd have to write back somewhere to tell you that this operation has been completed though or use your SQL statement to determine whether you need to re-evaluate that row.
PL/SQL Oracle Stored Procedure loop structure
[ "sql", "oracle", "plsql" ]
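The "commit every N rows" threshold from the accepted answer translates directly to client code. A `sqlite3` sketch (the batch size and row count are arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (n INTEGER)")

BATCH = 150          # arbitrary threshold, like the answer's 150 rows
commits = 0
pending = 0
for n in range(400):                     # stand-in for the cursor loop
    conn.execute("INSERT INTO target VALUES (?)", (n,))
    pending += 1
    if pending % BATCH == 0:
        conn.commit()                    # one commit per BATCH rows, not per row
        commits += 1
conn.commit()                            # commit the remainder
commits += 1
```

400 inserts cost only 3 commits instead of 400; the trailing commit outside the loop matters, since the last partial batch is otherwise left uncommitted.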
Using MS SQL 2012 I want to do something like ``` select a, b, c, a+b+c d ``` However a, b, c are complex computed columns, lets take a simple example ``` select case when x > 4 then 4 else x end a, ( select count(*) somethingElse) b, a + b c order by c ``` I hope that makes sense
I would probably do this: ``` SELECT sub.a, sub.b, (sub.a + sub.b) as c FROM ( select case when x > 4 then 4 else x end a, (select count(*) somethingElse) b FROM MyTable ) sub ORDER BY c ```
You can use a nested query or a common table expression (CTE) for that. The CTE syntax is slightly cleaner - here it is: ``` WITH CTE (a, b) AS ( select case when x > 4 then 4 else x end a, count(*) somethingElse b from my_table ) SELECT a, b, (a+b) as c FROM CTE ORDER BY c ```
Can I add multiple columns to Totals
[ "sql", "sql-server" ]
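The pattern both answers use (compute the expressions once in a derived table or CTE, then reuse them one level up) can be shown with `sqlite3`; the expressions here are simplified stand-ins for the CASE and subquery columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x REAL, y REAL)")
conn.execute("INSERT INTO t VALUES (0.5, 0.25)")

# a and b are computed once in the derived table, then reused for c;
# referencing a and b directly in the same SELECT list would fail.
row = conn.execute("""
    SELECT a, b, a + b AS c
    FROM (SELECT x * 2 AS a, y * 4 AS b FROM t) AS sub
""").fetchone()
```

The outer query sees `a` and `b` as plain columns, so `a + b` needs no duplication of the underlying expressions.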
Can we optimize this query any further: I need to get: ``` SELECT * FROM table1 WHERE (column1, column2) in (SELECT c1, c2 FROM table2); ``` Since the above query is not supported: I have the following query: ``` SELECT * FROM table1 join (SELECT c1, c2 from table2) as table3 ON table1.column1 = c1 and table1.column2 = c2 ``` --- Edit: I added table2 for simplicity. But it is actually ``` select c1, min(c2) from table2 group by c1; ```
The answer is NO. You've already got the best query. A JOIN is usually the fastest option. The missing column alias would result in a syntax error, though. Fixed and simplified with aliases and the simpler `USING` clause: ``` SELECT * FROM table1 t1 JOIN ( SELECT c1 AS column1, min(c2) AS column2 FROM table2 GROUP BY 1 ) t2 USING (column1, column2) ``` Key to performance are two [multi-column indices](http://www.postgresql.org/docs/current/interactive/indexes-multicolumn.html) (unique or primary key constraints work, too): ``` CREATE INDEX t1_mult_idx ON table1 (column1, column2); CREATE INDEX t2_mult_idx ON table2 (c1, c2); ``` You may be interested in [this related question on dba.SE](https://dba.stackexchange.com/questions/51895/efficient-query-to-get-greatest-value-per-group-from-big-table/52015) about difficulties using an index in such a query.
``` SELECT * from table1 t1 WHERE EXISTS (select * from table2 ex WHERE t1.column1 = ex.c1 and t1.column2 = ex.c2 ); ``` UPDATE: and here for the MIN(c2) case: ``` SELECT * from table1 t1 WHERE EXISTS (select * from table2 ex WHERE ex.c1 = t1.column1 and ex.c2 = t1.column2 ) AND NOT EXISTS (select * from table2 nx WHERE nx.c1 = t1.column1 and nx.c2 < t1.column2 ); ```
Select for multiple fields with "IN" expression
[ "sql", "postgresql", "join", "greatest-n-per-group" ]
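For what it's worth, row-value `IN` of the form the asker wanted is accepted by PostgreSQL, and also by SQLite 3.15+, so the min-per-group filter can be sketched directly (all names and rows below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (a INTEGER, b INTEGER, payload TEXT);
CREATE TABLE t2 (c1 INTEGER, c2 INTEGER);
INSERT INTO t1 VALUES (1, 10, 'keep'), (1, 20, 'drop'), (2, 5, 'keep');
INSERT INTO t2 VALUES (1, 10), (1, 99), (2, 5);
""")

# Row-value IN against the min-per-group subquery:
rows = conn.execute("""
    SELECT payload FROM t1
    WHERE (a, b) IN (SELECT c1, MIN(c2) FROM t2 GROUP BY c1)
    ORDER BY a
""").fetchall()
```

As the accepted answer notes, a plain `JOIN` to the grouped subquery is usually the fastest equivalent, with multi-column indexes on both sides.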
How to count the change in a `sql server` column like I have `Ignition` value Ignition 1 1 0 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 I want to count change only it is from 0 to 1 to make occurrence 1. It can also be from 1 to 0 for the occurrence to be 1.
Simplest and shortest way for SQL server 2008 I know is: ``` with cte as ( select row_number() over(partition by Ignition order by Id) as rn1, row_number() over(order by Id) as rn2 from Table1 ) select count(distinct rn2 - rn1) - 1 from cte ``` Or, as @MartinSmith pointed out: ``` with cte as ( select row_number() over(order by Ignition, Id) as rn1, row_number() over(order by Id) as rn2 from Table1 ), cte2 as ( select distinct Ignition, rn2 - rn1 from cte ) select count(*) - 1 from cte2 ``` for SQL Server 2012 you can use [lag()](http://technet.microsoft.com/en-us/library/hh231256.aspx) (or [lead()](http://technet.microsoft.com/en-us/library/hh213125.aspx)) function: ``` ;with cte as ( select lag(Ignition) over(order by Id) as prev, Ignition as cur from Table1 ) select count(case when cur <> prev then 1 end) from cte; ``` **`sql fiddle demo`**
**Step 1:** use the `Row_Number()` function to provide a complete (un-broken) sequence of numbers, according to our order ``` SELECT ignition , id , Row_Number() OVER (ORDER BY id ASC) As row_num FROM your_table ``` **Step 2:** Make this a Common-Table Expression (CTE) so we can refer to the derived `row_num` column ``` ; WITH cte AS ( SELECT ignition , id , Row_Number() OVER (ORDER BY id ASC) As row_num FROM your_table ) SELECT ignition , id , row_num FROM cte ``` **Step 3:** join this table back to itself matching on the next/previous row ``` ; WITH cte AS ( SELECT ignition , id , Row_Number() OVER (ORDER BY id ASC) As row_num FROM your_table ) SELECT c1.ignition As c1_ignition , c2.ignition As c2_ignition FROM cte As c1 LEFT JOIN cte As c2 ON c2.row_num = c1.row_num + 1 ``` **Step 4:** Filter the results to show those where the values aren't the same ``` ; WITH cte AS ( SELECT ignition , id , Row_Number() OVER (ORDER BY id ASC) As row_num FROM your_table ) SELECT c1.ignition As c1_ignition , c2.ignition As c2_ignition FROM cte As c1 LEFT JOIN cte As c2 ON c2.row_num = c1.row_num - 1 WHERE c1.ignition <> c2.ignition ``` **Step 5:** ... **Step 6:** profit!
Count number of occurrences in a bit column in sql
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2012", "" ]
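As a sanity check on the answers above, the transition count they compute can be reproduced outside the database in plain Python; the `(Id, Ignition)` rows below are made up for illustration:

```python
# Hypothetical Table1 contents as (Id, Ignition) pairs.
rows = [(1, 0), (2, 0), (3, 1), (4, 1), (5, 0), (6, 1), (7, 1)]

# The lag()-based query counts rows whose Ignition value differs from
# the previous row's value (ordered by Id), i.e. the number of changes.
values = [ignition for _, ignition in sorted(rows)]
transitions = sum(1 for prev, cur in zip(values, values[1:]) if cur != prev)
print(transitions)  # 3
```

Any of the SQL variants above should agree with this count for the same data.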
This is my query of dynamically calculated fields: ``` SELECT 5 * (`Mean (all)` +0.01735568 )/(0.07268901) AS rank1, 5 * (`CVaR 95` +0.51612 )/(0.53212) AS rank2, 5 * (`Hurst` - 0.2 )/(0.7717653) AS rank3, 5 * (`MaxDD` +6.200000762939453 )/(16.200000762939) AS rank4, 5 * (`Positive % 12` - 0.3 )/(1) AS rank5 FROM `quant1` ``` It works fine, but I need to add another dynamic field which is equal to rank1+rank2...+rank 5. So, I am adding another field like ``` SELECT 5 * (`Mean (all)` +0.01735568 )/(0.07268901) AS rank1, 5 * (`CVaR 95` +0.51612 )/(0.53212) AS rank2, 5 * (`Hurst` - 0.2 )/(0.7717653) AS rank3, 5 * (`MaxDD` +6.200000762939453 )/(16.200000762939) AS rank4, 5 * (`Positive % 12` - 0.3 )/(1) AS rank5, rank1+rank2+rank3+rank4+rank5 AS rank FROM `quant1` ``` It generates an error "unknown column rank1". Interesting that exactly the same query works fine with MS Access. How can I fix this? Thanks!
In MySQL, you cannot reference other aliases, but you can define variables in the select, so try this instead: ``` SELECT @rank1 := 5 * (`Mean (all)` +0.01735568 )/(0.07268901) AS rank1, @rank2 := 5 * (`CVaR 95` +0.51612 )/(0.53212) AS rank2, @rank3 := 5 * (`Hurst` - 0.2 )/(0.7717653) AS rank3, @rank4 := 5 * (`MaxDD` +6.200000762939453 )/(16.200000762939) AS rank4, @rank5 := 5 * (`Positive % 12` - 0.3 )/(1) AS rank5, @rank1 + @rank2 + @rank3 + @rank4 + @rank5 AS rank FROM `quant1` ``` It is ugly, but it works.
You can't use an alias in the same `SELECT` statement, put it inside a subquery instead, like this: ``` SELECT rank1, rank2, rank3, rank4, rank5, rank1 + rank2 + rank3 + rank4 + rank5 AS rank FROM ( SELECT 5 * (`Mean (all)` +0.01735568 )/(0.07268901) AS rank1, 5 * (`CVaR 95` +0.51612 )/(0.53212) AS rank2, 5 * (`Hurst` - 0.2 )/(0.7717653) AS rank3, 5 * (`MaxDD` +6.200000762939453 )/(16.200000762939) AS rank4, 5 * (`Positive % 12` - 0.3 )/(1) AS rank5 FROM `quant1` ) AS t ```
Sum of dynamically generated fields in MySQL query
[ "", "mysql", "sql", "" ]
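The derived-table workaround generalizes well beyond MySQL. Here is a minimal sketch using Python's built-in sqlite3 module; the table name and formulas are made up and much simpler than the question's, but the point is the same: aliases defined in the inner SELECT become real columns the outer SELECT can reference.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quant1 (x REAL)")
con.execute("INSERT INTO quant1 VALUES (2.0), (3.0)")

# rank1/rank2 are computed once in the inner query; the outer query can
# then reference them by alias, which a flat SELECT cannot do.
result = con.execute("""
    SELECT rank1, rank2, rank1 + rank2 AS rank
    FROM (SELECT 5 * x AS rank1, x + 1 AS rank2 FROM quant1) AS t
    ORDER BY rank1
""").fetchall()
print(result)  # [(10.0, 3.0, 13.0), (15.0, 4.0, 19.0)]
```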
Can somebody tell me how to extract my table definitions using SQL? I want to extract the datatypes of all my tables and other information from my Oracle schema. I have about 100 tables, and I need complete documentation of my Oracle schema. My schema name is "cco". Can I do this with SQL? I am using Toad for Data Analyst 3.3. Please let me know if this tool helps.
You can try this - ``` select * from all_tab_cols where owner = 'CCO'; ```
To get the DDL for all tables of the current user, you can use this: ``` select dbms_metadata.get_ddl('TABLE', table_name) from user_tables; ``` You will need to adjust your SQL client to be able to properly display the content of a `CLOB` column. More details (e.g. about how to get the DDL for other objects) can be found in the manual: <http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_metada.htm>
How to extract table definitions using SQL or Toad
[ "", "sql", "oracle", "toad", "" ]
I have two tables: ``` A: id code 1 A1 2 A1 3 B1 4 B1 5 C1 6 C1 ``` ===================== ``` B: id Aid 1 1 2 4 ``` (`B` doesn't contain the `Aid` values that link to `code C1`) Let me explain the overall flow: I want to make each row in `table A` have a different code (by deleting duplicates), and I want to retain the `Aid` values that I can find in `table B`. If an `Aid` is not saved in `table B`, I retain the bigger id. So I cannot just do something like this: ``` DELETE FROM A WHERE id NOT IN (SELECT MAX(id) FROM A GROUP BY code) ``` I can get each duplicate\_code\_group with the below SQL statement: ``` SELECT code FROM A GROUP BY code HAVING COUNT(*) > 1 ``` Is there some code in SQL like ``` for (var ids in duplicate_code_groups){ for (var id in ids) { if (id in B){ return id } } return max(ids) } ``` that puts the returned ids into an `idtable`? I just don't know how to write such code in SQL. Then I could do ``` DELETE FROM A WHERE id NOT IN idtable ```
Using [`ROW_NUMBER()`](http://technet.microsoft.com/en-us/library/ms186734.aspx) inside CTE (or sub-query) you can assign numbers for each `Code` based on your ordering and then just join the result-set with your table `A` to make a delete. ``` WITH CTE AS ( SELECT A.*, ROW_NUMBER() OVER (PARTITION BY A.Code ORDER BY COALESCE(B.ID,0) DESC, A.ID desc) RN FROM A LEFT JOIN B ON A.ID = B.Aid ) DELETE A FROM A INNER JOIN CTE C ON A.ID = C.ID WHERE RN > 1; SELECT * FROM A; ``` **[SQLFiddle DEMO](http://sqlfiddle.com/#!6/591a8/1)**
The first select gives you all A.id that are in B - you don't want to delete them. The second select takes A, selects all codes without an id that appears in B, and from this subset takes the maximum id. These two sets of ids are the ones you want to keep, so the delete deletes the ones not in the sets. ``` DELETE from A where A.id not in ( select aid from B union select MAX(A.id) from A left outer join B on B.Aid=A.id group by code having COUNT(B.id)=0 ) ``` Actual Execution Plan on MS SQL Server 2008 R2 reveals that this solution performs quite well, it's 5-6 times faster than Nenad's solution :).
How to retain a row which is foreign key in another table and remove other duplicate rows?
[ "", "sql", "sql-server", "t-sql", "sql-delete", "duplicate-data", "" ]
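The keep-rule in the question can be checked end to end on a small engine. The sketch below uses Python's sqlite3 with the question's sample data; since SQLite has no `DELETE ... JOIN`, it uses the `NOT IN` form from the second answer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A (id INTEGER, code TEXT)")
con.execute("CREATE TABLE B (id INTEGER, Aid INTEGER)")
con.executemany("INSERT INTO A VALUES (?, ?)",
                [(1, "A1"), (2, "A1"), (3, "B1"), (4, "B1"), (5, "C1"), (6, "C1")])
con.executemany("INSERT INTO B VALUES (?, ?)", [(1, 1), (2, 4)])

# Keep: every A.id referenced from B, plus the MAX id of each code
# that has no reference in B at all. Delete everything else.
con.execute("""
    DELETE FROM A WHERE id NOT IN (
        SELECT Aid FROM B
        UNION
        SELECT MAX(A.id) FROM A LEFT OUTER JOIN B ON B.Aid = A.id
        GROUP BY code HAVING COUNT(B.id) = 0
    )
""")
survivors = [r[0] for r in con.execute("SELECT id FROM A ORDER BY id")]
print(survivors)  # [1, 4, 6]
```

Ids 1 and 4 survive because B references them; 6 survives because no C1 row is referenced, so the bigger id wins.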
Could anyone please tell me how I can return all the rows which have the maximum primary key id for each and every foreign key id? e.g. suppose I have an Applicant and a Notes table. The Applicant table has ApplicantId as primary key and the Notes table has NotesId as primary key. ApplicantId is the foreign key in the Notes table, with a one-to-many relationship. Here I want to get the maximum NotesId result for each unique ApplicantId from the Notes table. Anyone's help will really be appreciated.
Try this: ``` SELECT ApplicantId, MAX(NotesId) FROM Notes GROUP BY ApplicantId; ``` To get the rest of the record you can do: ``` SELECT n.* FROM notes n INNER JOIN ( SELECT max(notesId) AS maxid FROM notes GROUP BY applicantId ) n2 ON n.NotesId = n2.maxid ``` `sqlfiddle demo`
``` SELECT applicantId, max(notesId) from Notes group by applicantId ```
how to get the maximum Primary key id result on the basis of each foreign key id in sql
[ "", "sql", "t-sql", "" ]
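The join-back pattern from the fuller answer can be demonstrated with Python's sqlite3 and some made-up Notes rows (two applicants, whose latest note is the one with the highest NotesId):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Notes (NotesId INTEGER, ApplicantId INTEGER)")
con.executemany("INSERT INTO Notes VALUES (?, ?)",
                [(1, 10), (2, 10), (3, 11), (4, 11), (5, 11)])

# Aggregate first, then join back on the max id to fetch the whole row.
rows = con.execute("""
    SELECT n.ApplicantId, n.NotesId
    FROM Notes n
    JOIN (SELECT ApplicantId, MAX(NotesId) AS maxid
          FROM Notes GROUP BY ApplicantId) m
      ON n.NotesId = m.maxid
    ORDER BY n.ApplicantId
""").fetchall()
print(rows)  # [(10, 2), (11, 5)]
```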
I have a table which looks like this ``` id datetime tick_info(string) 0 16-10-2013 3:33:01 "33300" 1 17-10-2013 5:04:01 "003023" 2 17-10-2013 6:12:04 "3244" 3 19-10-2013 5:32:12 "3333332" 4 20-10-2013 8:14:44 "33321" 5 20-10-2013 9:12:11 "5821" 6 22-10-2013 10:32:11 "33111" ``` The data can span 20 days and I want to select all rows from the last 5 days. Note that "the last 5 days" means the 5 latest distinct dates in the table. Ex: 17-19-20-22-23 in the case where there is no tick\_info for the 21 and the 18. The number of tick\_info rows is variable, so there might be 100 tick\_info for the 20 and 5 for the 23. I'm using MySQL. *Update*: I should have mentioned I'm using MySQL 5.5.32
Try this: ``` SELECT * FROM `myTable` WHERE DATE(`datetime`) >= ( SELECT DATE(`datetime`) FROM `myTable` GROUP BY DATE(`datetime`) ORDER BY DATE(`datetime`) DESC LIMIT 4,1 ) ```
`SELECT * FROM yourTable WHERE tick_info>=DATE_SUB(tick_info, INTERVAL 5 DAY) group by tick_info order by tick_info DESC`
MySQL select * from last 5 days from a datetime field
[ "", "mysql", "sql", "datetime", "" ]
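A portable way to express the "5 latest distinct dates" filter is an `IN` over the distinct dates with a `LIMIT`. Below is a sketch in Python's sqlite3 with invented timestamps; SQLite's `date()` plays the role of MySQL's `DATE()` here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ticks (dt TEXT)")
con.executemany("INSERT INTO ticks VALUES (?)", [
    ("2013-10-16 03:33:01",), ("2013-10-17 05:04:01",), ("2013-10-17 06:12:04",),
    ("2013-10-19 05:32:12",), ("2013-10-20 08:14:44",), ("2013-10-20 09:12:11",),
    ("2013-10-22 10:32:11",), ("2013-10-14 01:00:00",),
])

# Keep only rows whose calendar date is among the 5 latest distinct dates.
rows = con.execute("""
    SELECT dt FROM ticks
    WHERE date(dt) IN (SELECT DISTINCT date(dt) FROM ticks
                       ORDER BY date(dt) DESC LIMIT 5)
    ORDER BY dt
""").fetchall()
print(len(rows))  # 7 -- only the lone 2013-10-14 row is filtered out
```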
I'm struggling to understand why grouping by DATE on a datetime column in SQLite in Rails is returning nil. I want to return a count of users that have a value in the startdate field, grouped by date. Can anyone help? Removing the DATE aspect of the below query returns the results as I would expect. See the different queries and results below: `User.where("startdate IS NOT NULL").group("DATE(startdate)").count` Gives ``` SELECT COUNT(*) AS count_all, DATE(startdate) AS date_startdate FROM "users" WHERE (startdate IS NOT NULL) GROUP BY DATE(startdate) ``` => `{nil=>10}` What I would expect is an array of dates (e.g. {"2013-06-04"=>3, "2013-06-10"=>4, "2013-06-11"=>3}. In the query below which doesn't have the DATE operator, the results are more as I would expect, but obviously I don't want to group by time too (in my actual DB there are different times on the same dates). ``` User.where("startdate IS NOT NULL").group("(startdate)").count ``` Gives ``` SELECT COUNT(*) AS count_all, (startdate) AS startdate FROM "users" WHERE (startdate IS NOT NULL) GROUP BY (startdate) ``` => `{"2013-06-04 17:27:30.553802+0000"=>3, "2013-06-10 09:24:06.207327+0000"=>4, "2013-06-11 12:20:37.745819+0000"=>3}`
The closest [date-formatted string](http://www.sqlite.org/lang_datefunc.html) SQLite expects for your data is `YYYY-MM-DD HH:MM:SS.SSS`. If all your dates are in the same timezone, `+0000`, you may skip the `DATE` function altogether and compare substrings: ``` User.where("startdate IS NOT NULL").group("SUBSTR(startdate,1,10)").count ``` Anyway, SQLite doesn't perform real date comparisons; in fact, the `DATE` function always returns `TEXT`. If you deal with different timezones (better to try to deal with them in your application), additional computation may be done using [Date/time function modifiers](http://www.sqlite.org/lang_datefunc.html). EDIT: As [@chuck.p.lee correctly pointed out](https://stackoverflow.com/questions/19615898/grouping-by-date-in-sql-returning-nil/19617172?noredirect=1#comment29145840_19617172), the closest format is actually `YYYY-MM-DD HH:MM:SS.SSS+HH:MM`. So, converting your dates to this is pretty easy using some string manipulation (truncating the sub-millisecond part): ``` SUBSTR(startdate, 1, 23) || SUBSTR(startdate, 27, 3) || ':' || SUBSTR(startdate, 30,2) ``` Now your query may be correctly written as: ``` User.where("startdate IS NOT NULL").group("DATE(SUBSTR(startdate, 1, 23) || SUBSTR(startdate, 27, 3) || ':' || SUBSTR(startdate, 30,2))").count ```
[date()](http://www.sqlite.org/lang_datefunc.html) function expects time zone to match format "[-+]HH:MM" or "Z": ``` 2013-10-07 08:23:19.120 2013-10-07T08:23:19.120Z 2013-10-07 08:23:19.120-04:00 ```
Grouping by DATE in SQL returning nil
[ "", "sql", "ruby-on-rails", "sqlite", "date", "" ]
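The nil grouping is easy to reproduce directly in SQLite from Python: `date()` cannot parse the `+0000` suffix Rails wrote, so every row collapses into a single NULL group, while the substring workaround groups as expected. Sample rows below mirror the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (startdate TEXT)")
con.executemany("INSERT INTO users VALUES (?)", [
    ("2013-06-04 17:27:30.553802+0000",),
    ("2013-06-04 18:00:00.000000+0000",),
    ("2013-06-10 09:24:06.207327+0000",),
])

# date() rejects "+0000" (it wants "+HH:MM" or "Z"), so it returns NULL.
bad = con.execute("SELECT date(startdate), COUNT(*) FROM users "
                  "GROUP BY date(startdate) ORDER BY 1").fetchall()
# Taking the first 10 characters sidesteps the parsing problem.
good = con.execute("SELECT substr(startdate, 1, 10), COUNT(*) FROM users "
                   "GROUP BY substr(startdate, 1, 10) ORDER BY 1").fetchall()
print(bad)   # [(None, 3)]
print(good)  # [('2013-06-04', 2), ('2013-06-10', 1)]
```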
``` SELECT DISTINCT YEAR(convert(varchar(max),OrderCreatedDate)) from webshop ``` The above sql query is producing this error: ``` Conversion failed when converting date and/or time from character string. ``` The value in the database is NVARCHAR and in the following format: DD/MM/YYYY 00:00:00 AM (/PM) I would like to select the unique values for the year only! thanks for any help
To avoid this sort of issue, dates should be saved in a DateTime-type field, not in a string field. The `Year()` function's parameter is a `DateTime` type, *not* a `String`, so you should make sure that the string you are passing is convertible to a DateTime type. In this case you can trim out the time part with the `Left()` function and use *style* `103` as below. **[Fiddle demo:](http://sqlfiddle.com/#!6/d41d8/10286)** ``` --Example declare @val nvarchar(50) = '28/10/2013 11:25:45 AM (/PM)' select year(convert(date,left(@val,10),103)) myYear --Applying to your query SELECT DISTINCT Year(Convert(Date,Left(OrderCreatedDate,10),103)) FROM Webshop ``` **UPDATE:** If you are getting errors, it could be due to your date format, i.e. the format of the strings you have saved may not be as you described (`DD/MM/YYYY 00:00:00 AM (/PM)`) in the question. Please check with the `ISDATE()` function before converting to a date, identify which records are causing the problem, and correct them. Try the following query to find all of the invalid strings; it will return 0 for values with invalid formatting. ``` SELECT DISTINCT CASE WHEN IsDate(Left(OrderCreatedDate,10))=1 THEN Year(Convert(Date,Left(OrderCreatedDate,10),103)) ELSE 0 END as myDate, OrderCreatedDate FROM Webshop ``` Or you could get only the records which are causing the problem as: ``` SELECT OrderCreatedDate FROM Webshop WHERE IsDate(Left(OrderCreatedDate,10)) = 0 ```
``` select distinct year(CONVERT(NVARCHAR(255),CONVERT(SMALLDATETIME, columnName,105))) ```
Convert NVARCHAR to DATETIME and select distinct Year
[ "", "sql", "sql-server", "" ]
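Since the stored value is a fixed-layout `DD/MM/YYYY ...` string, the year always sits at characters 7-10, which is the idea behind the `Left()`/substring answers. A plain-Python sketch with made-up values:

```python
# Hypothetical OrderCreatedDate strings in the question's DD/MM/YYYY layout.
values = ["28/10/2013 11:25:45 AM", "05/01/2012 08:00:00 PM", "17/10/2013 05:04:01 AM"]

# Characters at zero-based positions 6..9 are the 4-digit year;
# a set gives the DISTINCT behaviour of the original query.
years = sorted({int(v[6:10]) for v in values})
print(years)  # [2012, 2013]
```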
I have an application where users' activity will be rewarded with points. There will be a points chart that will assign a certain amount of points for a certain action. My question is which approach is better and most importantly why? **Approach 1**: - create a userPoints table in mysql and insert points there at each user action. When the user goes to their profile, query the database for the number of points and display them. *example:* user buys an item query the database: insert 5 points in userPoints for user 2 user follows a place query the database: insert 7 points in userPoints for user 2 user sells their item query the database insert -5 points in userPoints for user 2 **Approach 2**: - do not store the points in a table, but count the number of actions the user has completed (get their count and type from the db) and then multiply by the certain amount of points for each type of action, all at runtime, whenever the number of points is required (for example when they visit their profile) *example*: user visits their profile to see how many points they have query the database and count items possessed, places followed, friendships and then multiply items count x5, followings count x7 and friendships count x10 and display the number **EDIT for Steven:** let me get this straight, is this the chronology of actions I am supposed to take? How can I tighten this up and reduce the number of queries? * buying an item has an actionId of 1 and points rewarded are 5 (this is specified in the actions table in the db) * Kylie (user ID: 1) buys an item. * Query DB: Add the item to userItems table for user ID 1 * Query DB: Register an action in the userActions table - userId: 1, actionId: 1 * Query DB SET userPointsTotal = userPointsTotal + points rewarded (nest two queries in one like this: set userPointsTotal = userPointsTotal + (select points from actions where actionid = 1)) * Query DB get userPointsTotal and display value at Kylie's profile * In the future, refer to the userActions table so the user can get a status title, depending on a certain type of activity (social status or possessions status)
You'll want to store: 1. **Actions** - A table listing actions and the points defined for each. This will act as your domain table. 2. **User Actions** - A composite-key table (user\_id, action\_id) listing the actions the user completed AND the points they were awarded (you'll want to store this here in case an action's point value changes after the user completed it. At any time you can do SUM(user\_actions.points) to get an accurate total of the points a user has earned. 3. **User Total** - A value enumerating the total of the points a user has earned. You can make this a column on the Users table that you update when recording a user's action, or this can be a sum of user\_actions.points that's stored in a user index or cache somewhere that gets updated periodically (e.g. Solr, Memcached). This will allow you to display large numbers of aggregate figures without pounding your database with expensive aggregate functions every time you display a user list. This gives you the best of all worlds. You have a quick way of saying how many points a user earned, a reliable method for recalculating totals if they "stray", and a domain table for referential integrity / additional information.
I recommend a hybrid approach. If you strictly store points in the table, it will be very easy to lose track of who has which points for what reason. Alternatively, recalculating them constantly will bring down your server in no time if your user base grows. If you instead, recalculate points every 5 minutes, 1 hour, 12 hours, or whatever interval works for you, and store the results in the userPoints table, you can have fast access speed since you are just looking up a table. Also, if you happen to miscalculate points or mess up in the logic you use to calculate points, you can easily fix your code and re-calculate.
Storing stats in the DB or computing them at runtime?
[ "", "mysql", "sql", "" ]
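The snapshot-the-points design from the first answer can be sketched with Python's sqlite3 (table and column names are illustrative): each user_actions row stores the points that were in effect at the time, so the SUM stays correct even if an action's value changes later.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE actions (id INTEGER PRIMARY KEY, points INTEGER)")
con.execute("CREATE TABLE user_actions "
            "(user_id INTEGER, action_id INTEGER, points INTEGER)")
con.executemany("INSERT INTO actions VALUES (?, ?)", [(1, 5), (2, 7)])

# User 2 buys an item (action 1, 5 pts) and follows a place (action 2, 7 pts);
# the current point value is snapshotted into the user_actions row.
for action_id in (1, 2):
    pts = con.execute("SELECT points FROM actions WHERE id = ?",
                      (action_id,)).fetchone()[0]
    con.execute("INSERT INTO user_actions VALUES (2, ?, ?)", (action_id, pts))

total = con.execute("SELECT SUM(points) FROM user_actions WHERE user_id = 2"
                    ).fetchone()[0]
print(total)  # 12
```

In production you would cache this SUM per user (the "User Total" in the answer) rather than recompute it on every profile view.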
I'm having trouble getting my head around this. I'm using MS SQL 2008 and I have a table, for example called Activity, with 3 fields: customer (a customer's identity), visitDate (the date that they visited) and customerType (what type of customer they are). Here's 3 days' worth of data: ``` customer visitDate customerType customer1 2013-10-01 07:00:00.000 A customer1 2013-10-01 09:00:00.000 A customer2 2013-10-01 10:00:00.000 B customer1 2013-10-02 09:00:00.000 A customer2 2013-10-02 09:00:00.000 B customer3 2013-10-02 09:00:00.000 B customer1 2013-10-02 09:00:00.000 A customer1 2013-10-03 07:00:00.000 A ``` What I'm trying to achieve is to write a query that groups the data by day and also counts the user types for each day, so that the result looks like this: ``` visitDate TypeA TypeB Total 2013-10-01 1 1 2 2013-10-02 1 2 3 2013-10-03 1 0 1 ``` Note that if someone visits more than once on the same day they are only counted as one visit that day. I know it's got something to do with grouping but I haven't got a clue where to start.
The slightly tricky bit is to only count a customer once for a given day and type, even if they have multiple records for that day: ``` select visitDate, sum(case when customerType = 'A' then 1 else 0 end) as TypeA, sum(case when customerType = 'B' then 1 else 0 end) as TypeB, count(*) as Total from ( select distinct customer, cast(visitdate as date) as visitdate, customertype from activity ) x group by visitdate ``` **`Example SQLFiddle`**
``` select visitDate, sum(case when customerType = 'A' then 1 else 0 end) as TypeA, sum(case when customerType = 'B' then 1 else 0 end) as TypeB, count(*) as Total from activity group by visitDate ```
Grouping by day then count result for each day
[ "", "sql", "sql-server-2008", "group-by", "" ]
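The dedupe-then-pivot query translates almost verbatim to SQLite, so it can be verified from Python against the question's data (SQLite's `date()` stands in for the `cast(... as date)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Activity (customer TEXT, visitDate TEXT, customerType TEXT)")
con.executemany("INSERT INTO Activity VALUES (?, ?, ?)", [
    ("customer1", "2013-10-01 07:00:00", "A"),
    ("customer1", "2013-10-01 09:00:00", "A"),
    ("customer2", "2013-10-01 10:00:00", "B"),
    ("customer1", "2013-10-02 09:00:00", "A"),
    ("customer2", "2013-10-02 09:00:00", "B"),
    ("customer3", "2013-10-02 09:00:00", "B"),
    ("customer1", "2013-10-03 07:00:00", "A"),
])

# Inner DISTINCT collapses repeat visits per (customer, day, type);
# outer CASE expressions pivot the type counts into columns.
rows = con.execute("""
    SELECT visitDate,
           SUM(CASE WHEN customerType = 'A' THEN 1 ELSE 0 END) AS TypeA,
           SUM(CASE WHEN customerType = 'B' THEN 1 ELSE 0 END) AS TypeB,
           COUNT(*) AS Total
    FROM (SELECT DISTINCT customer, date(visitDate) AS visitDate, customerType
          FROM Activity)
    GROUP BY visitDate ORDER BY visitDate
""").fetchall()
print(rows)
```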
This is my table with sample data. ``` id | path | category (1-6) | secter_id | date ---------------------------------------------- 1 | ddd | 5 | a | 10-01 2 | ddgg | 6 | a | 10-03 3 | fff | 5 | a | 10-02 ``` I want to filter the latest category 5 and category 6 rows for each sector id. Expected result: ``` id path | category| secter_id | date -------------------------------------- 2 | ddgg | 6 | a | 10-03 3 | fff | 5 | a | 10-02 ``` Is this possible using only SQL?
This query should do it for you ``` SELECT A.ID, A.PATH, A.CATEGORY, A.SECTOR_ID, A.dDATE FROM yourTable A INNER JOIN (SELECT CATEGORY, MAX(dDate) AS dDate FROM yourTable GROUP BY CATEGORY) B ON A.CATEGORY = B.CATEGORY AND A.dDate = B.dDate ``` Here is a [SQLFiddle with the query](http://sqlfiddle.com/#!4/68275/11)
You can try with this code, is not elegant but it should work. ``` Select id,path,category,secter_id,date FROM myTable a INNER JOIN (SELECT category, MAX(date) date FROM myTable GROUP BY Category) b ON a.category = b.Category AND a.date = b.Date WHERE A.Category IN (5,6) ```
make a select query with group by
[ "", "sql", "oracle", "select", "" ]
I'm trying to figure out a query which shows the **number** (count) of employees who worked in more than one department. Here are the table names and fields: * `Employee`(id\_employee, employee\_name, salary) * `Department`(id\_dept, dept\_name, budget) * `Department_Employee`(id\_employee, id\_dept, workhours\_percentage) Suppose the content of the Department\_Employee table is ``` id_employee id_dept workhours_percentage ----------- ------- -------------------- 0001 03 100 0001 04 50 0001 05 60 0002 05 60 0002 09 90 0003 08 80 0004 07 80 0005 06 60 0006 05 70 0007 04 75 0008 10 95 0009 02 25 0010 01 40 ``` With the right query, the result should be 2 (employees), because there are 2 employees who work in more than one department: * Employee 0001 works in 3 departments * Employee 0002 works in 2 departments I tried the following query ``` SELECT COUNT(DISTINCT id_employee) FROM Department_Employee GROUP BY id_employee HAVING COUNT(id_employee)>1 ``` But the result isn't right. Please help me out. Thanks.
``` SELECT COUNT(*) FROM ( SELECT id_employee, COUNT(*) AS CNT FROM Department_Employee GROUP BY id_employee ) AS T WHERE CNT > 1 ```
To get all employees that work in more than one department: ``` SELECT id_employee, COUNT(*) FROM Department_Employee GROUP BY id_employee HAVING COUNT(*)>1 ``` To count them: ``` ; WITH cte As ( SELECT id_employee FROM Department_Employee GROUP BY id_employee HAVING COUNT(*)>1 ) SELECT COUNT(*) FROM cte ```
Get number of employees who worked in more than one department with SQL query
[ "", "sql", "sql-server", "t-sql", "" ]
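Both answers wrap the `HAVING` query in an outer `COUNT(*)`. Running that shape in Python's sqlite3 against the question's Department_Employee data confirms the expected result of 2:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Department_Employee (id_employee TEXT, id_dept TEXT)")
con.executemany("INSERT INTO Department_Employee VALUES (?, ?)", [
    ("0001", "03"), ("0001", "04"), ("0001", "05"),
    ("0002", "05"), ("0002", "09"),
    ("0003", "08"), ("0004", "07"), ("0005", "06"),
    ("0006", "05"), ("0007", "04"), ("0008", "10"),
    ("0009", "02"), ("0010", "01"),
])

# Inner query: one row per employee with more than one department.
# Outer query: count those rows.
n = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT id_employee FROM Department_Employee
        GROUP BY id_employee HAVING COUNT(*) > 1
    )
""").fetchone()[0]
print(n)  # 2
```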
I have the following tables - simplified quite a bit ``` Table - Tests Test A B C D E F G H Table - TestHistory Test Result Version A Pass 1 A Fail 2 B Pass 2 C Fail 1 C Pass 2 D Fail 1 D Fail 2 E Fail 1 ``` I want to get the list of tests that failed (or any status) the *last time they ran*. But, also the version that it was found in. So, in the above example, I want this returned: ``` A Fail 2 D Fail 2 E Fail 1 ``` I've tried a couple methods ``` select Test, LastResult = IsNull((Select Top 1 Result From TestHistory Where Test = Tests.Test order by Version desc), 'NOT_RUN') from Tests ``` What this does, is gives me a list of all tests and then I have to go through and kick out the rows I don't want (i.e. isn't Fail). This also doesn't give me the Version it ran in. I also tried this: ``` select Version, TH.Test, Result from TestHistory as TH inner join Tests as T on TH.Test = T.Test where Result = 'Fail' ``` But, then I get rows such as: ``` Test Result Version C Fail 1 ``` I don't want those because it's not the *Last Result*. How can I restrict this to give me exactly what I need without a lot of data manipulation (or worse, more DB reads) after? Any help would be appreciated. Thanks!
I can't syntax check this, but it should be close: ``` SELECT th.Test, th.Result, th.Version FROM TestHistory th INNER JOIN ( SELECT MAX(Version) as MaxVersion, Test FROM TestHistory GROUP BY Test ) sub ON sub.MaxVersion = th.Version AND sub.Test = th.Test WHERE th.result = 'Fail' ``` Explanation: First, in the subquery, you get the maximum version for the test. Then use a join to restrict the outer query to only return the results that match the test/version of the subquery. Edit: forgot the WHERE clause--seems you only want rows where the most recent result is failure. **Edit based on the question in your comment:** This should give you the most recent failure, plus tests that have never run. Note that this will filter out tests that *have* run but have never failed (your data does not have any of these). I based this on my original query in the interest of time, but I would guess there is a more elegant way: ``` SELECT t.Test, outerSub.Result, outerSub.Version FROM Test t LEFT JOIN (SELECT th.Test, th.Result, th.Version FROM TestHistory th INNER JOIN ( SELECT MAX(Version) as MaxVersion, Test FROM TestHistory GROUP BY Test ) sub ON sub.MaxVersion = th.Version AND sub.Test = th.Test ) outerSub on outerSub.Test = t.Test WHERE outerSub.result = 'Fail' OR outerSub.Test IS NULL ```
A small correction can be added to the above solution. In case you need to receive the results in Test order, the query can be transformed as below: ``` SELECT src.Test, src.Result, src.Version FROM ( SELECT th.Version, th.Test, th.Result, ROW_NUMBER() over(partition by th.Test order by th.Version desc) as RowNum FROM dbo.TestHistory as th ) src WHERE src.RowNum = 1 and src.Result = 'Fail' order by src.Test; ``` With this, the query will also return the set in the needed column order.
SQL query - how do I restrict to the rows I'm interested in
[ "", "sql", "t-sql", "inner-join", "" ]
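The max-version join from the first answer can be exercised with Python's sqlite3 and the question's TestHistory rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TestHistory (Test TEXT, Result TEXT, Version INTEGER)")
con.executemany("INSERT INTO TestHistory VALUES (?, ?, ?)", [
    ("A", "Pass", 1), ("A", "Fail", 2), ("B", "Pass", 2),
    ("C", "Fail", 1), ("C", "Pass", 2), ("D", "Fail", 1),
    ("D", "Fail", 2), ("E", "Fail", 1),
])

# Restrict each test to its highest version, then keep only failures.
rows = con.execute("""
    SELECT th.Test, th.Result, th.Version
    FROM TestHistory th
    JOIN (SELECT Test, MAX(Version) AS MaxVersion
          FROM TestHistory GROUP BY Test) sub
      ON sub.Test = th.Test AND sub.MaxVersion = th.Version
    WHERE th.Result = 'Fail'
    ORDER BY th.Test
""").fetchall()
print(rows)  # [('A', 'Fail', 2), ('D', 'Fail', 2), ('E', 'Fail', 1)]
```

Note that C drops out because its latest run (version 2) passed, exactly as the question requires.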
I have a table that looks like this caller 'makerar' | cname | wmname | avg | | --- | --- | --- | | canada | zoro | 2.0000000000000000 | | spain | luffy | 1.00000000000000000000 | | spain | usopp | 5.0000000000000000 | And I want to select the maximum avg for each cname. ``` SELECT cname, wmname, MAX(avg) FROM makerar GROUP BY cname; ``` but I will get an error, ``` ERROR: column "makerar.wmname" must appear in the GROUP BY clause or be used in an aggregate function LINE 1: SELECT cname, wmname, MAX(avg) FROM makerar GROUP BY cname; ``` so i do this ``` SELECT cname, wmname, MAX(avg) FROM makerar GROUP BY cname, wmname; ``` however this will not give the intented results, and the incorrect output below is shown. | cname | wmname | max | | --- | --- | --- | | canada | zoro | 2.0000000000000000 | | spain | luffy | 1.00000000000000000000 | | spain | usopp | 5.0000000000000000 | Actual Results should be | cname | wmname | max | | --- | --- | --- | | canada | zoro | 2.0000000000000000 | | spain | usopp | 5.0000000000000000 | How can I go about fixing this issue? Note: This table is a VIEW created from a previous operation.
Yes, this is a common aggregation problem. Before [SQL3 (1999)](http://web.cecs.pdx.edu/~len/sql1999.pdf), the selected fields had to appear in the `GROUP BY` clause[\*]. To work around this issue, you must calculate the aggregate in a sub-query and then join it with itself to get the additional columns you'd need to show: ``` SELECT m.cname, m.wmname, t.mx FROM ( SELECT cname, MAX(avg) AS mx FROM makerar GROUP BY cname ) t JOIN makerar m ON m.cname = t.cname AND t.mx = m.avg ; cname | wmname | mx --------+--------+------------------------ canada | zoro | 2.0000000000000000 spain | usopp | 5.0000000000000000 ``` --- But you may also use window functions, which look simpler: ``` SELECT cname, wmname, MAX(avg) OVER (PARTITION BY cname) AS mx FROM makerar ; ``` The only thing with this method is that it will show all records (window functions do not group). But it will show the correct (i.e. maxed at `cname` level) `MAX` for the country in each row, so it's up to you: ``` cname | wmname | mx --------+--------+------------------------ canada | zoro | 2.0000000000000000 spain | luffy | 5.0000000000000000 spain | usopp | 5.0000000000000000 ``` The arguably less elegant solution, to show only the `(cname, wmname)` tuples matching the max value, is: ``` SELECT DISTINCT /* distinct here matters, because maybe there are various tuples for the same max value */ m.cname, m.wmname, t.avg AS mx FROM ( SELECT cname, wmname, avg, ROW_NUMBER() OVER (PARTITION BY cname ORDER BY avg DESC) AS rn FROM makerar ) t JOIN makerar m ON m.cname = t.cname AND m.wmname = t.wmname AND t.rn = 1 ; cname | wmname | mx --------+--------+------------------------ canada | zoro | 2.0000000000000000 spain | usopp | 5.0000000000000000 ``` --- [\*]: Interestingly enough, even though the spec sort of allows selecting non-grouped fields, major engines seem to not really like it. Oracle and SQL Server just don't allow this at all. MySQL used to allow it by default, but since 5.7 the administrator needs to enable this option (`ONLY_FULL_GROUP_BY`) manually in the server configuration for this feature to be supported...
In Postgres, you can also use the special **[`DISTINCT ON (expression)`](http://www.postgresql.org/docs/9.3/static/sql-select.html#SQL-DISTINCT)** syntax: ``` SELECT DISTINCT ON (cname) cname, wmname, avg FROM makerar ORDER BY cname, avg DESC ; ```
must appear in the GROUP BY clause or be used in an aggregate function
[ "", "sql", "group-by", "aggregate-functions", "postgresql-9.1", "" ]
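The aggregate-in-a-subquery-then-join approach works on SQLite too, so the makerar example can be checked from Python (the window-function variants are left out here since they need a newer SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE makerar (cname TEXT, wmname TEXT, avg REAL)")
con.executemany("INSERT INTO makerar VALUES (?, ?, ?)", [
    ("canada", "zoro", 2.0), ("spain", "luffy", 1.0), ("spain", "usopp", 5.0),
])

# Aggregate per cname in the subquery, then join back to recover wmname.
rows = con.execute("""
    SELECT m.cname, m.wmname, t.mx
    FROM (SELECT cname, MAX(avg) AS mx FROM makerar GROUP BY cname) t
    JOIN makerar m ON m.cname = t.cname AND m.avg = t.mx
    ORDER BY m.cname
""").fetchall()
print(rows)  # [('canada', 'zoro', 2.0), ('spain', 'usopp', 5.0)]
```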
I have 4 different tables which store different transactions for users and I would like to union all of the transactions for some specific users. But the problem is, all these 4 tables have huge amount of data, therefore when I try to UNION all of them and JOIN with the users, it takes hours to compute. What I'm doing is: ``` SELECT blablabla FROM transactions1 t1 JOIN users ON (users.id = t1.user_id AND user.state = 1 AND user.friends > 1 AND user.loc = 'Turkey') UNION SELECT blablabla FROM transactions2 t2 JOIN users ON (users.id = t2.user_id AND user.state = 1 AND user.friends > 1 AND user.loc = 'Turkey') UNION SELECT blablabla FROM transactions3 t3 JOIN users ON (users.id = t3.user_id AND user.state = 1 AND user.friends > 1 AND user.loc = 'Turkey') UNION SELECT blablabla FROM transactions4 t4 JOIN users ON (users.id = t4.user_id AND user.state = 1 AND user.friends > 1 AND user.loc = 'Turkey') ORDER BY date DESC ``` The problem is, I'm running `JOIN users ON (users.id = t1.user_id AND user.state = 1 AND user.friends > 1 AND user.loc = 'Turkey')` this filter 4 times by joining each transaction table with users. In order to have best practice and increase the efficiency of my query, how should I improve my query ? Thanks !
Either select the users matching "*user.state = 1 AND user.friends > 1 AND user.loc = 'Turkey'*" using a [Common Table Expression](http://technet.microsoft.com/en-us/library/ms190766%28v=SQL.105%29.aspx) or put them into a temporary table. Then use that result in your Union.
You might try something like this: ``` SELECT blablabla FROM transactions1 t1, transactions2 t2,transactions3 t3,transactions4 t4 JOIN users ON (users.id = t1.user_id OR users.id = t2.user_id OR users.id = t3.user_id OR users.id = t4.user_id AND user.state = 1 AND user.friends > 1 AND user.loc = 'Turkey'); ```
SQL - Join one subquery for multiple times within union
[ "", "sql", "performance", "subquery", "union", "" ]
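The CTE suggestion, filter the users once and reuse the result in every branch of the UNION, can be sketched with Python's sqlite3; all table contents and names below are invented, and only two transaction tables are used for brevity:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, state INTEGER, friends INTEGER, loc TEXT)")
con.execute("CREATE TABLE transactions1 (user_id INTEGER, amount INTEGER)")
con.execute("CREATE TABLE transactions2 (user_id INTEGER, amount INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?, ?, ?)",
                [(1, 1, 5, "Turkey"), (2, 0, 9, "Turkey"), (3, 1, 2, "Spain")])
con.executemany("INSERT INTO transactions1 VALUES (?, ?)", [(1, 10), (2, 20)])
con.executemany("INSERT INTO transactions2 VALUES (?, ?)", [(1, 30), (3, 40)])

# The user filter is written once in the CTE and referenced by each branch.
rows = con.execute("""
    WITH turkish AS (
        SELECT id FROM users
        WHERE state = 1 AND friends > 1 AND loc = 'Turkey'
    )
    SELECT t1.user_id, t1.amount FROM transactions1 t1
      JOIN turkish u ON u.id = t1.user_id
    UNION
    SELECT t2.user_id, t2.amount FROM transactions2 t2
      JOIN turkish u ON u.id = t2.user_id
    ORDER BY amount
""").fetchall()
print(rows)  # [(1, 10), (1, 30)]
```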
I'm using this query to delete unique records from one table. ``` DELETE FROM TABLE1 WHERE ID NOT IN (SELECT ID FROM TABLE2) ``` But the problem is that both the tables have millions of records, and using a subquery will be very slow. Can anyone suggest an alternative?
``` Delete t1 from table_1 t1 left join table_2 t2 on t1.id = t2.id where t2.id is null ```
Subqueries are really slow; in fact, that's why [joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html) exist! ``` DELETE table1 FROM table1 LEFT JOIN table2 ON table1.id = table2.id WHERE table2.id is null ```
Substitute for subquery to delete records from table
[ "", "mysql", "sql", "database", "oracle", "" ]
I have a SQL query which used to cause a > Divide By Zero exception I've wrapped it in a `CASE` statement to stop this from happening. Is there a simpler way of doing this? **Here's my code:** ``` Percentage = CASE WHEN AttTotal <> 0 THEN (ClubTotal/AttTotal) * 100 ELSE 0 END ```
A nicer way of doing this is to use [NULLIF](http://technet.microsoft.com/en-us/library/ms177562.aspx) like this: ``` Percentage = 100 * ClubTotal / NULLIF(AttTotal, 0) ```
I'm using `NULLIF` a bit differently, because in some cases I have to return some value. Usually I need to return 0 when there is a divide-by-zero error. In that case I wrap the whole expression in `ISNULL`. So it would be: ``` Percentage = ISNULL(100 * ClubTotal / NULLIF(AttTotal, 0), 0) ``` The inner part evaluates to `NULL` and then `ISNULL` replaces it with 0.
Simple way to prevent a Divide By Zero error in SQL
[ "", "sql", "sql-server-2008", "t-sql", "sql-server-2005", "" ]
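Both the `NULLIF` and `NULLIF`-plus-`ISNULL` patterns can be tried in SQLite from Python, where `IFNULL` plays the role of T-SQL's `ISNULL` (the two sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# NULLIF(AttTotal, 0) turns a zero denominator into NULL, so the division
# yields NULL instead of an error; IFNULL then maps that NULL back to 0.
rows = con.execute("""
    SELECT ClubTotal, AttTotal,
           IFNULL(100.0 * ClubTotal / NULLIF(AttTotal, 0), 0) AS Percentage
    FROM (SELECT 5 AS ClubTotal, 20 AS AttTotal
          UNION ALL SELECT 3, 0)
""").fetchall()
print(rows)  # [(5, 20, 25.0), (3, 0, 0)]
```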
I am fiddling with MS SQL and I am getting different results after running these two queries (...maybe it is just my amateurism in MS SQL): /\* If you spot any syntax errors it doesn't matter, I am writing this out of my head \*/ --- ``` SELECT AVG(X.AvgDailyExpense) AS AverageDailyExpense FROM (SELECT AVG(we.DailyExpense) AS AvgDailyExpense FROM WorkerExpense we LEFT JOIN Worker w ON w.Id = we.WorkerId GROUP BY w.Id) X; ``` --- ``` SELECT AVG(we.DailyExpense) AS AverageDailyExpense FROM WorkerExpense we LEFT JOIN Worker w ON w.Id = we.WorkerId; ``` --- There is a foreign key between WorkerExpense and Worker, and it is impossible for the WorkerExpense table to have a reference to a worker row that doesn't exist. Also, DailyExpense is a money data type (maybe that matters?). Now, the lower query is returning what should be the correct result (I calculated the result manually on a small number of rows) and the upper query is always returning a value greater than it should be. Could anyone please explain in a bit more detail why this is happening? Is it because of number rounding or...?
These queries will return different results because they are answering different questions. The second query answers "what is the average daily expense across all daily expense reports". The first query first finds the average daily expense by individual and then asks "what is the average of the individual-based average daily expense". They are answering very different questions. Another way to think about it is that the the second query is giving greater weight to individuals that have a larger number of expense reports. The first query normalizes the data by individual.
Suppose that you have workers with the following expenses: ``` A: 1, 2, 3, 4, 5, 6, 7, 8, 9 (average 45 / 9 = 5) B: 12, 13, 14 (average 39 / 3 = 13) ``` The straight average of all the expenses is (45 + 39) / 12 = 7, but the average of the averages of the two workers is (5 + 13) / 2 = 9.
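Both aggregations can be reproduced with a quick sketch in Python's sqlite3, using the same sample numbers (workers and amounts are the hypothetical ones from the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expense (worker TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO expense VALUES (?, ?)",
    [("A", a) for a in range(1, 10)] + [("B", a) for a in (12, 13, 14)],
)

# Straight average over all 12 expense rows.
overall, = conn.execute("SELECT AVG(amount) FROM expense").fetchone()

# Average of each worker's average (the nested-query form).
of_averages, = conn.execute(
    "SELECT AVG(a) FROM (SELECT AVG(amount) AS a FROM expense GROUP BY worker)"
).fetchone()

print(overall, of_averages)  # 7.0 9.0
```

The gap (7 vs. 9) is entirely due to worker A's nine reports outweighing worker B's three in the flat average.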
MS SQL - AVG() returns different result than AVG(AVG() with group)
[ "", "sql", "sql-server", "aggregate-functions", "average", "" ]
I have a list like this thing1,thing2,thing3. And I want to insert them into a look-up table with the same foreign key. So ideally it would look like this: ``` <cfquery datasource="#ds#" result="insert_things"> INSERT INTO lkp_things (foreign_key, thing) VALUES (1, thing1), (1, thing2), (1, thing3) </cfquery> ``` **It seems like the only way to accomplish this is to turn the list into a query, but I'm wondering, is there is a simpler way?** Here's what I've tried: ``` <cfquery datasource="#ds#" result="insert_things"> INSERT INTO lkp_things (foreign_key, thing) VALUES <cfloop list="#things#" index="thing"> (#id#,#thing#)<cfif ?????? NEQ len(#things#)>,</cfif> </cfloop> </cfquery> ``` I've heard that you can't do a cfloop inside a cfquery, but I'm not even sure if that's true because I can't have a trailing comma in the VALUES, and I'm not sure how to say "The current iteration number" inside a cfloop. If I turned the list into a query, then I could do currentRow, but again, I'd like to know if there's a simpler way to accomplish this before I go through all that. Also, I'm using CF 8 and sql server '08 EDIT: Sorry, I'm actually using 2000.
**Update:** Ultimately the real problem here was that the feature of inserting multiple sets of values with a single `VALUES` clause is only supported in SQL Server 2008+ and the OP is using 2000. So they went with the [select / union all approach](https://stackoverflow.com/a/19596550/104223) instead. --- (Expanded from the comments) Sure you can loop inside a `cfquery`. All cfml code is processed on the CF server first. Then the resulting SQL string is sent to the database for execution. As long as your CF code results in a valid SQL statement, you can do just about anything you want :) Whether you should is a different question, but this kind of looping is perfectly fine. Getting back to your question, just switch to a `from/to` loop instead and use list functions like `getToken(list, index)` to get the individual elements (see [Matt's example](https://stackoverflow.com/a/19592020/104223)) or use an array instead. Obviously you should also validate the list is not empty first. My personal preference is arrays. Not tested, but something like this: ``` <cfset thingArray = listToArray(things, ",")> <cfquery datasource="#ds#" result="insert_things"> INSERT INTO lkp_things (foreign_key, thing) VALUES <cfloop from="1" to="#arrayLen(thingArray)#" index="x"> <cfif x gt 1>,</cfif> ( <!--- Note: Replace cfsqltype="..." with correct type ---> <cfqueryparam value="#id#" cfsqltype="..."> , <cfqueryparam value="#thingArray[x]#" cfsqltype="..."> ) </cfloop> </cfquery> ``` Having said that, what is the source of your `#thing#` list? If those values exist in a database table, you could insert them directly with a `SELECT` statement, instead of a loop: ``` INSERT INTO lkp_things (foreign_key, thing) SELECT <cfqueryparam value="#id#" cfsqltype="...">, thing FROM ThingTable WHERE thing IN ( <cfqueryparam value="#thingList#" list="true" cfsqltype="..."> ) ```
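The same looped, parameterized multi-row insert translates to other stacks too. As an illustration outside ColdFusion (Python's sqlite3, with the table and list from the question), `executemany` plays the role of the `cfloop` over the list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lkp_things (foreign_key INTEGER, thing TEXT)")

things = "thing1,thing2,thing3".split(",")  # the comma-delimited list
fk = 1                                       # the shared foreign key

# One parameterized statement per list element, all with the same key --
# the driver-level analogue of building the VALUES list in a loop.
conn.executemany(
    "INSERT INTO lkp_things (foreign_key, thing) VALUES (?, ?)",
    [(fk, t) for t in things],
)

inserted = conn.execute(
    "SELECT COUNT(*) FROM lkp_things WHERE foreign_key = 1"
).fetchone()[0]
print(inserted)  # 3
```

Like `cfqueryparam`, the `?` placeholders keep the values parameterized instead of concatenated into the SQL string.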
I would add a counter and increment it inside the loop. Also you need to use `listLen()` rather than `len` on your list to get the number of items. ``` <cfquery datasource="#ds#" result="insert_things"> INSERT INTO lkp_things (foreign_key, thing) VALUES <cfset count = 1> <cfloop list="#things#" index="thing"> ( <cfqueryparam cf_sql_type="cf_sql_integer" value="#id#">, <cfqueryparam cf_sql_type="cf_sql_varchar" value="#thing#"> )<cfif count NEQ listLen(things)>,</cfif> <cfset count++> </cfloop> </cfquery> ``` You should use `cfqueryparam` on all your values. I've guessed on what type the column is.
ColdFusion: How to insert a list with a static foreign key in one insert?
[ "", "sql", "sql-server", "coldfusion", "" ]
I have a query I am trying to write in SQL Server 2008 to produce a row like this: ``` quoteID | dateEntered | insuredName | agentName | quoteType | status | noteDate | userType ``` I currently have: ``` SELECT t1.quoteID, t1.dateEntered, t1.insuredFirstName + ' ' + t1.insuredLastName as insuredName, t2.FirstName + ' ' + t2.LastName as agentName, t1.quoteType, t1.status, t3.noteDate FROM quote_genericInformation t1 INNER JOIN tbl_agents t2 ON t1.createUserID = t2.AgentNumber INNER JOIN (SELECT quoteID, MAX(dateEntered) as noteDate FROM quote_notes GROUP BY quoteID) t3 ON t1.quoteid = t3.quoteid ORDER BY t1.quoteID ``` This produces a result like: ``` quoteID | dateEntered | insuredName | agentName | quoteType | status | noteDate 54 | 01/01/2000 | First Last | First Last | apptype | open | 01/01/2000 ``` 1. I need to add the usertype as another column, but when I add it, I start to get duplicates. 2. I need this to always show a row regardless if a quote\_note exists for that quoteID 3. I need this to show the most recent usertype in the notes (based off the notedate) Thank you!
``` WITH note AS ( SELECT quoteID , dateEntered as noteDate , usertype , ROW_NUMBER() OVER (PARTITION BY quoteID ORDER BY dateEntered DESC) as row_num FROM quote_notes ) SELECT t1.quoteID, t1.dateEntered, t1.insuredFirstName + ' ' + t1.insuredLastName as insuredName, t2.FirstName + ' ' + t2.LastName as agentName, t1.quoteType, t1.status, t3.noteDate, t3.usertype FROM quote_genericInformation t1 INNER JOIN tbl_agents t2 ON t1.createUserID = t2.AgentNumber LEFT JOIN note t3 ON t1.quoteid = t3.quoteid AND t3.row_num = 1 ORDER BY t1.quoteID ```
Answer to question 2: if you always want a row returned, you should use a LEFT OUTER JOIN rather than an INNER JOIN. See Wikipedia for an explanation of join types: <http://en.wikipedia.org/wiki/Join_(SQL)> As for the questions on usertype: we'll need more information about the database schema. BTW, I would advise using consistent naming for your aliases (like employees e and persons p, rather than t1, t2, etc.). This makes the query much more readable IMHO.
How to combine 4 tables with joins and max
[ "", "sql", "sql-server", "t-sql", "join", "" ]
I just started learning PL/SQL and I'm not sure how to create a procedure. The logic seems about right, but I think there's a syntax mistake in the first line. Here's my code: ``` CREATE OR REPLACE PROCEDURE ReverseOf(input IN varchar2(50)) IS DECLARE reverse varchar2(50); BEGIN FOR i in reverse 1..length(input) LOOP reverse := reverse||''||substr(input, i, 1); END LOOP; dbms_output.put_line(reverse); END; / ```
Two things - you shouldn't specify the datatype size in procedure's/function's parameter list and you do not need the `DECLARE` keyword. Try this: ``` CREATE OR REPLACE PROCEDURE ReverseOf(input IN varchar2) IS rev varchar2(50):=''; BEGIN FOR i in reverse 1..length(input) LOOP rev := rev||substr(input, i, 1); END LOOP; dbms_output.put_line(rev); END; ```
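As a sanity check of the loop's logic in another language, here is the same reverse-indexed walk sketched in Python (the function name is just for the demo):

```python
# Mirrors FOR i IN REVERSE 1..LENGTH(input): walk the indexes backwards
# and append one character at a time.
def reverse_of(text: str) -> str:
    rev = ""
    for i in range(len(text), 0, -1):  # i counts len(text) .. 1
        rev += text[i - 1]             # substr(input, i, 1) is 1-based
    return rev

result = reverse_of("hello")
print(result)  # olleh
```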
Try it without PL/SQL! ``` WITH params AS (SELECT 'supercalifragilisticexpialidocious' phrase FROM dual), WordReverse (inpt, outpt) AS (SELECT phrase inpt, CAST(NULL AS varchar2(4000)) outpt FROM params UNION ALL SELECT substr(inpt,2,LENGTH(inpt)-1), substr(inpt,1,1) || outpt FROM wordReverse WHERE LENGTH(inpt) > 0 ) SELECT phrase,outpt AS reversed FROM wordReverse, params WHERE LENGTH(outpt) = LENGTH(phrase) ; PHRASE REVERSED ---------------------------------- ----------------------------------- supercalifragilisticexpialidocious suoicodilaipxecitsiligarfilacrepus ``` Citation: <http://rdbms-insight.com/wp/?p=94>
A procedure to Reverse a String in PL/SQL
[ "", "sql", "oracle", "plsql", "syntax-error", "" ]
I have a Salesman table and a Sales table. I need to get a count of salesmen whose revenue was $1,000,000 or more in the previous quarter. The problem I am having is this: I can do a select of the Sales table which gets every salesman, then an inner select statement where I take each salesman and find all of his sales. I need to see if the sum of his sales is >= $1,000,000, and I don't know how/if I can do arithmetic inside the select statements to sum the sales and see if they are >= $1m. Here is my code: ``` Select count(SalesID) from Salesman SM where SM.SalesID in ( Select cost from Sales where salesDate >= beginQtr AND salesDate <= endQtr //some code to add them all up and if >= $1m, count that Salesman ); ```
There is a function sum(some\_column) in sql. Try something like this: ``` sum(select cost from sales where salesman_ID = @id) >= 1000000 ```
I think you can try SUM() function ``` SELECT SUM(column_name) FROM table_name; ``` this can get you the total sum of a numeric column. I think this will help
How to do calculations inside sql query
[ "", "sql", "pervasive", "pervasive-sql", "" ]
I have a MySQL query like so: ``` SELECT `contact_last_name` FROM `customer` WHERE `contact_last_name` IS NOT NULL ``` Basically I want to return any rows where the column `contact_last_name` has something entered in it. If it makes any difference, this column is a VARCHAR column. But instead, it is returning every row in the database, regardless of whether that column is empty or not! I'm not sure why this query is not working?
`NULL` requires the column to actually be `NULL`. Not an empty string. Are you trying to check if it's empty? ``` SELECT `contact_last_name` FROM `customer` WHERE `contact_last_name` != '' ``` Do you have white space issues? ``` SELECT `contact_last_name` FROM `customer` WHERE TRIM(`contact_last_name`) != '' ``` Can it both be `NULL` and empty? ``` SELECT `contact_last_name` FROM `customer` WHERE `contact_last_name` != '' AND `contact_last_name` IS NOT NULL ```
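The distinction is easy to reproduce with Python's sqlite3 and a few made-up rows — `IS NOT NULL` keeps the empty string, while the combined check does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (contact_last_name TEXT)")
conn.executemany(
    "INSERT INTO customer VALUES (?)",
    [("Smith",), ("",), (None,)],  # a real name, an empty string, a NULL
)

not_null = conn.execute(
    "SELECT COUNT(*) FROM customer WHERE contact_last_name IS NOT NULL"
).fetchone()[0]

non_empty = conn.execute(
    "SELECT COUNT(*) FROM customer "
    "WHERE contact_last_name IS NOT NULL AND contact_last_name != ''"
).fetchone()[0]

print(not_null, non_empty)  # 2 1 -- the empty string passes IS NOT NULL
```

If the table in the question "returns every row", the likely culprit is exactly this: the column holds empty strings, not NULLs.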
There's a difference between a varchar being an empty string and it being explicitly NULL. You could try this where clause: ``` LENGTH(contact_last_name) > 0; ```
Returning results that are not null in MySQL
[ "", "mysql", "sql", "select", "" ]
I have the following models ``` class User attr_accessible :first_name, :phone_number has_one :user_extension end class UserExtension attr_accessible :company, :user_id belongs_to :user end ``` I have a table which contains all users, and I need to sort this table by first_name, phone_number, company. With first_name and phone_number I don't have any problems; order works fine, for example ``` @users = User.order("first_name desc") ``` , but I also need to sort by company and don't know how to do it. I can get the company name this way ``` @user.user_extension.company ``` So I have trouble with the SQL that will give me all users ordered by company. DB: PostgreSQL. Thanks. **Edit:** I should provide more information about these models. ``` create_table "user_extensions", :force => true do |t| t.integer "user_id" t.string "company" end create_table "users", :force => true do |t| t.string "first_name" t.string "phone_number" end ``` Also, I tried using a join ``` User.joins(:user_extension).order("user_extension.company desc") ``` and what I get ``` User Load (1.6ms) SELECT "users".* FROM "users" INNER JOIN "user_extensions" ON "user_extensions"."user_id" = "users"."id" ORDER BY user_extension.company desc PG::Error: ERROR: relation "user_extensions" does not exist ``` on ``` User.includes(:user_extension).order("user_extension.company desc") ``` I also get ``` PG::Error: ERROR: relation "user_extensions" does not exist ``` **Resolved** I had problems with my DB; all joins work fine.
Try this: ``` @users = User.includes(:user_extension).order("user_extensions.company desc") ``` I think in the `order` clause you need `user_extensions` (the plural table name), not `user_extension`.
Merge can make the query smaller/saner-looking, and it benchmarked faster for me in Rails 4.x: ``` @users = User.joins(:user_extension).merge(UserExtension.order(company: :desc)) ```
Rails order by association field
[ "", "sql", "ruby-on-rails", "postgresql", "" ]
I'm very new to SQL and don't know what is wrong with my query; please guide me. I have 2 tables, product and product_consumer, in a 1-to-m relation. I want to get all products and the count of consumers of each product. This is what I want: ``` id: pCount item1, 2 item2, 1 ``` I have tried the following query but it returns only 1 row. ``` SELECT prd.*, COUNT(pc.id) pCount FROM `product` AS prd JOIN product_consumer pc ON pc.id = prd.id ```
Try this: ``` SELECT pc.id, COUNT(pc.id) pCount FROM product AS prd JOIN product_consumer pc ON pc.id = prd.id GROUP BY pc.id ``` Let's say your schema is this: ``` CREATE TABLE product( id int, product varchar(1) ) CREATE TABLE product_consumer( id int, consumer varchar(1), pid int ) ``` If you want the details of all `products` with the `count`, you need to include all the attributes of prd in the `GROUP BY` clause, like this: ``` SELECT prd.id, prd.product, count(pc.pid) as pCount FROM product AS prd JOIN product_consumer pc ON pc.pid = prd.id GROUP BY prd.id, prd.product, pc.pid; ``` Refer to this [SQLFiddle](http://sqlfiddle.com/#!3/efd21/1)
Use the [GROUP BY](http://www.w3schools.com/sql/sql_groupby.asp) Clause ``` SELECT prd.id, count(pc.id) pCount FROM product AS prd JOIN product_consumer pc ON pc.id = prd.id GROUP BY prd.id ```
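A quick check with Python's sqlite3 and two hypothetical products shows the `GROUP BY` producing one count per product rather than a single row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id INTEGER, name TEXT);
CREATE TABLE product_consumer (id INTEGER, pid INTEGER);
INSERT INTO product VALUES (1, 'item1'), (2, 'item2');
INSERT INTO product_consumer VALUES (1, 1), (2, 1), (3, 2);
""")

# Grouping by the product gives one aggregated row per product.
counts = conn.execute(
    "SELECT prd.name, COUNT(pc.id) AS pCount "
    "FROM product prd JOIN product_consumer pc ON pc.pid = prd.id "
    "GROUP BY prd.id, prd.name ORDER BY prd.id"
).fetchall()
print(counts)  # [('item1', 2), ('item2', 1)]
```

Without the `GROUP BY`, the aggregate collapses the whole join into a single row, which is exactly the "only 1 row" symptom in the question.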
Need help in making SQL query
[ "", "sql", "" ]
``` article_vote table +------------+------+ | article_id | vote | +------------+------+ | 1 | 3 | | 1 | 1 | | 1 | -1 | | 1 | -2 | +------------+------+ ``` In MySQL 5.5 I've been trying to return two results from the above table. 1. The combined value of all negative votes. 2. The combined value of all positive votes. So positive should be **4** and negative should be **-3**. The vote field has no NULL values. This SQL does not work as I intend: ``` SELECT vote, SUM(vote > 0) AS positive, SUM(vote < 1) AS negative FROM article_vote WHERE vote_article_id = 50 GROUP BY vote ``` when I use it on the above dataset it returns: ``` Array ( [vote] => -22 [positive] => 0 [negative] => 1 ) ``` Where am I going wrong?
Try something like this: ``` select article_id, sum(case when vote > 0 then vote else 0 end) as positive, sum(case when vote < 0 then vote else 0 end) as negative FROM article_vote GROUP BY article_id ``` I'm unclear about your WHERE condition, sorry.
Try this out: ``` SELECT SUM(if(vote > 0, vote, 0)) positive, SUM(if(vote < 0, vote, 0)) negative FROM article_vote WHERE article_id = 1 ``` Result: ``` | POSITIVE | NEGATIVE | |----------|----------| | 4 | -3 | ``` Fiddle [here](http://sqlfiddle.com/#!2/62df8/1).
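The conditional-sum idea is easy to verify with Python's sqlite3 and the four votes from the question (SQLite lacks MySQL's `IF()`, so the portable `CASE` form is used here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article_vote (article_id INTEGER, vote INTEGER)")
conn.executemany(
    "INSERT INTO article_vote VALUES (1, ?)", [(3,), (1,), (-1,), (-2,)]
)

# CASE routes each vote into exactly one of the two running sums.
positive, negative = conn.execute(
    "SELECT SUM(CASE WHEN vote > 0 THEN vote ELSE 0 END), "
    "       SUM(CASE WHEN vote < 0 THEN vote ELSE 0 END) "
    "FROM article_vote WHERE article_id = 1"
).fetchone()
print(positive, negative)  # 4 -3
```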
MySQL: Multiple SUMS Counting Combined Values WHERE Condition
[ "", "mysql", "sql", "count", "" ]
I have two tables: Artist and Work. Artist is a relation of painters with ArtistID, FirstName and LastName. Work is a relation of their paintings with columns WorkID, Title and ArtistID. Now, my Artist relation has data and I am inserting values into the Work table. WorkID is a counter, so no problems there. I want to use something that does ``` INSERT INTO Work (Title, ArtistID) VALUES('Toledo', SELECT ArtistID FROM Artist WHERE FirstName='Joan'); ``` But it doesn't seem to work; the SELECT statement isn't accepted as part of the INSERT. What should I do?
Use a [DLookup Function](http://office.microsoft.com/en-us/access-help/dlookup-function-HA001228825.aspx) to retrieve the `ArtistID` value you want to insert. Notice the similarity between this `DLookup` expression and your `SELECT` statement ... ``` INSERT INTO Work (Title, ArtistID) VALUES('Toledo', DLookup("ArtistID", "Artist", "FirstName='Joan'")); ```
Try this ``` INSERT INTO Work (Title, ArtistID) SELECT 'Toledo', ArtistID FROM Artist WHERE FirstName='Joan' ```
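Access's `DLookup` aside, the `INSERT ... SELECT` form works in most engines. A small sqlite3 sketch (table names from the question, the artist row and ID are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Artist (ArtistID INTEGER, FirstName TEXT, LastName TEXT);
CREATE TABLE Work (Title TEXT, ArtistID INTEGER);
INSERT INTO Artist VALUES (7, 'Joan', 'Miro');  -- hypothetical row
""")

# INSERT ... SELECT: the subquery supplies both the literal title
# and the looked-up foreign key in one statement.
conn.execute(
    "INSERT INTO Work (Title, ArtistID) "
    "SELECT 'Toledo', ArtistID FROM Artist WHERE FirstName = 'Joan'"
)

works = conn.execute("SELECT Title, ArtistID FROM Work").fetchall()
print(works)  # [('Toledo', 7)]
```

The key point is that a bare `SELECT` cannot sit inside a `VALUES(...)` list, but the whole row source can be a `SELECT` instead of `VALUES`.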
SQL MsAccess embedding select into INSERT statement for some specific values
[ "", "sql", "ms-access", "ms-access-2007", "sql-insert", "" ]
I am using a SQL Server database. This query: ``` SELECT * FROM table1 WHERE table1.name = 'something' AND CASE WHEN table1.date IS NULL THEN 1 = 1 ELSE table1.date >= '2013-09-24' END; ``` gives me an error: > [Error Code: 102, SQL State: S0001] Incorrect syntax near '='. Any help is appreciated. Thanks in advance, mismas
Try this code: ``` select * from table1 where table1.name='something' and (table1.date is null Or table1.date >= '2013-09-24'); ```
I think this is what you want. ``` select * from table1 where table1.name = 'something' and ( table1.date is null or table1.date >= '2013-09-24' ); ``` SQL Server doesn't really have a boolean type that you can use as a result.
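The `OR ... IS NULL` rewrite can be checked with Python's sqlite3 on a few invented rows (ISO-8601 date strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (name TEXT, date TEXT)")
conn.executemany(
    "INSERT INTO table1 VALUES ('something', ?)",
    [(None,), ("2013-09-30",), ("2013-01-01",)],  # made-up rows
)

# The parenthesized OR replaces the CASE: NULL dates pass, and so do
# dates on or after the cutoff; only the January row is filtered out.
matched = conn.execute(
    "SELECT COUNT(*) FROM table1 WHERE name = 'something' "
    "AND (date IS NULL OR date >= '2013-09-24')"
).fetchone()[0]
print(matched)  # 2
```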
CASE when is null in SQL Server
[ "", "sql", "sql-server", "null", "case", "" ]
I have 3 tables to join and need some help to make it work; this is my schema: donations: ``` +--------------------+------------+ | uid | amount | date | +---------+----------+------------+ | 1 | 20 | 2013-10-10 | | 2 | 5 | 2013-10-03 | | 2 | 50 | 2013-09-25 | | 2 | 5 | 2013-10-01 | +---------+----------+------------+ ``` users: ``` +----+------------+ | id | username | +----+------------+ | 1 | rob | | 2 | mike | +----+------------+ ``` causes: ``` +--------------------+------------+ | id | uid | cause | <missing cid (cause id) +---------+----------+------------+ | 1 | 1 | stop war | | 2 | 2 | love | | 3 | 2 | hate | | 4 | 2 | love | +---------+----------+------------+ ``` Result I want (data cropped for reading purposes) ``` +---------+-------------+---------+-------------+ | id | username | amount | cause | +---------+-------------+---------+-------------+ | 1 | rob | 20 | stop war | | 2 | mike | 5 | love | +---------+-------------+-----------------------+ ``` etc... This is my current query, but it returns duplicate data: ``` SELECT i.*, t.cause as tag_name FROM users i INNER JOIN donations tti ON (tti.uid = i.id) INNER JOIN causes t ON (t.uid = tti.uid) ``` EDIT: fixed SQL schema on fiddle <http://sqlfiddle.com/#!2/0e06c/1> (schema and data). How can I do this?
It seems your data model is not right: there should be a relation between Causes and Donations. Without it, when you do your joins you will get duplicated rows. For instance, your model could look like this: Donations ``` +--------------------+------------+ | uid | amount | date | causeId +---------+----------+------------+ | 1 | 20 | 2013-10-10 | 1 | 2 | 5 | 2013-10-03 | 2 | 2 | 50 | 2013-09-25 | 3 | 2 | 5 | 2013-10-01 | 2 +---------+----------+------------+ ``` causes: ``` +----------------------+ | id | cause | +---------+------------+ | 1 | stop war | | 2 | love | | 3 | hate | +---------+------------+ ``` And the right query should then be this: ``` SELECT i.*, t.cause as tag_name FROM users i INNER JOIN donations tti ON (tti.uid = i.id) INNER JOIN causes t ON (t.id = tti.causeId) ```
Try this ``` SELECT CONCAT(i.username ,' ',i.first_name) `name`, SUM(tti.amount), t.cause AS tag_name FROM users i LEFT JOIN donations tti ON (tti.uid = i.id) INNER JOIN causes t ON (t.uid = tti.uid) GROUP BY i.id ``` ## [Fiddle](http://sqlfiddle.com/#!2/0e06c/20)
mysql join 3 tables by id
[ "", "mysql", "sql", "" ]
I need some help to build a query; this is my current schema: ``` users: +----+------------+ | id | username | +----+------------+ | 1 | rob | | 2 | john | | 3 | jane | <--- jane has never donated | 4 | mike | +----+------------+ donations: +--------------------+------------+ | uid | amount | date | +---------+----------+------------+ | 1 | 20 | 2013-10-10 | | 2 | 5 | 2013-10-03 | | 2 | 50 | 2013-09-25 | | 2 | 5 | 2013-10-01 | | 4 | 100 | 2012-10-01 | <-- past year +---------+----------+------------+ Result I want: +---------+-------------+---------+-------------+---------------+----------+ | id | username | amount | monthly | totalamount | total | +---------+-------------+---------+-------------+ --------------+----------+ | 1 | rob | 20 | 1 | 20 | 1 | | 2 | john | 60 | 3 | 60 | 3 | | 3 | jane | 0 | 0 | 0 | 0 | | 4 | mike | 0 | 0 | 100 | 1 | +---------+-------------+-----------------------+---------------+----------+ ``` This is my query: ``` SELECT u.*, COALESCE(sum(d.amount), 0) amount, COUNT(d.uid) monthly, COUNT(d.amount) as Total, <-- need to get the sum of all-time donations and the number of times donated FROM users u LEFT JOIN donations d ON u.id = d.uid AND (month(d.date), year(d.date)) = (month(CURDATE()), year(CURDATE())) GROUP BY u.id ORDER BY u.id ASC ``` So I need to add 2 different sums from the same data. EDIT: <http://sqlfiddle.com/#!2/20a974/9> (schema and data). How can I do this?
For this we need to filter the data on the select and not on the join. Remove this condition: ``` AND (month(d.date), year(d.date)) = (month(CURDATE()), year(CURDATE())) ``` and add this to the select: ``` SUM (CASE WHEN (month(d.date), year(d.date)) = (month(CURDATE()), year(CURDATE())) THEN 1 ELSE 0 END) as monthly ``` Edit: whole query: ``` SELECT users.id, users.username, COALESCE(sum(CASE WHEN (month(donations.date), year(donations.date)) = (month(CURDATE()), year(CURDATE())) THEN donations.amount ELSE 0 END), 0) monthly_sum, COALESCE(sum(CASE WHEN (month(donations.date), year(donations.date)) = (month(CURDATE()), year(CURDATE())) THEN 1 ELSE 0 END), 0) monthly_amount, COALESCE(sum(donations.amount), 0) total_sum, count(*) total_amount from users left join donations on donations.uid = users.id group by users.id, users.username ``` <http://sqlfiddle.com/#!2/20a974/20/0>
For me the easiest way to think about the separately grouped information is to put it into separate queries and then just join the results back together. This is not likely to be the most efficient, but it helps to get something working. ``` select auo.id, auo.username, coalesce(monthly_count, 0), coalesce(monthly_total, 0), coalesce(total, 0), coalesce(total_amount, 0) from aaa_users auo left join ( select au.id as id, count(adm.amount) as monthly_count, SUM(adm.amount) as monthly_total from aaa_users au join aaa_donations adm on au.id = adm.uid and adm.donate_date > GETDATE()-30 group by au.id ) as monthly on monthly.id = auo.id left join ( select au.id as id, count(ady.amount) total, SUM(ady.amount) as total_amount from aaa_users au join aaa_donations ady on au.id = ady.uid and ady.donate_date > getDate()-450 group by au.id ) as yearly on yearly.id = auo.id ``` As @CompuChip said, it's cleaner to just join to the donations table twice, but I have something wrong in my join logic as the values for john are getting duplicated. I think there would need to be a donations.id column to prevent the monthly and total donations from being combined. Anyway, here's an example even though it isn't working correctly ``` select au.id, au.username, count(adm.amount), SUM(adm.amount) as monthly_total, count(ady.amount), SUM(ady.amount) as total_amount from aaa_users au left outer join aaa_donations adm on au.id = adm.uid and adm.donate_date > GETDATE()-60 left outer join aaa_donations ady on au.id = ady.uid and ady.donate_date > getDate()-450 group by au.id, au.username order by au.id, au.username ```
sum two rows and order by date / total
[ "", "mysql", "sql", "gaps-and-islands", "" ]
How do I use SQL to get the full month name in SQL Server? I didn't find a way using `DATEPART(mm, mydate)` or `CONVERT(VARCHAR(12), CreatedFor, 107)`. Basically I need it in the format: *April 1 2009*.
``` SELECT DATENAME(MONTH, GETDATE()) + RIGHT(CONVERT(VARCHAR(12), GETDATE(), 107), 9) AS [Month DD, YYYY] ``` Or, for a date without a comma between the day and the year, you can use the following: ``` SELECT DATENAME(MONTH, GETDATE()) + ' ' + CAST(DAY(GETDATE()) AS VARCHAR(2)) + ' ' + CAST(YEAR(GETDATE()) AS VARCHAR(4)) AS [Month DD YYYY] ```
If you are using SQL Server 2012 or later, you can use: ``` SELECT FORMAT(MyDate, 'MMMM dd yyyy') ``` You can [view the documentation](https://msdn.microsoft.com/en-gb/library/hh213505(v=sql.110).aspx) for more information on the format.
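For comparison, the same "April 1 2009" shape in Python rather than T-SQL (note `%B` is locale-dependent; the default C locale gives English month names):

```python
from datetime import date

d = date(2009, 4, 1)
# %B gives the full month name; day and year are appended without
# zero-padding, matching the "April 1 2009" target format.
formatted = f"{d.strftime('%B')} {d.day} {d.year}"
print(formatted)  # April 1 2009
```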
sql server Get the FULL month name from a date
[ "", "sql", "sql-server", "date-conversion", "" ]
I want to know how SQL Server data types are stored in memory. How is the `money` data type stored? I know that `money` is stored in 8 bytes and `smallmoney` in 4 bytes, but I don't know how. For example, when you have 123400.93 as the money value, how is it stored in 8 bytes? I have the same question about the `decimal` and `DATE` data types. Especially for `DATE`: the format is YYYY-MM-DD, but how is it stored in 3 bytes? Is it stored as described here: <http://dev.mysql.com/doc/internals/en/date-and-time-data-type-representation.html> or is the number of days from a specific day stored?
Just adding a bit here... A single `byte` is made up of 8 `bits`. A `bit` can hold 2 values (0 or 1). So 4 bytes is 32 bits (4 x 8). Which means the numeric range of what can be stored is from 0 to 2^32 which gives a total range of 4,294,967,296 values. `smallmoney` is signed so we drop one of the bits to be used for the sign, which leaves 2^31, or 2,147,483,648 possible values and the +/- sign. Now we take into account that the last 4 digits of a money type are always after the decimal point and we end up with a range of -214,748.3648 to 214,748.3647 Technically, money and smallmoney values are stored by flipping bits in a word or byte just like everything else. If you need more info, read <http://computer.howstuffworks.com/bytes.htm> Or you might see this for the possible value range of money and smallmoney: <http://technet.microsoft.com/en-us/library/ms179882.aspx> **update** For the `DATETIME` data type, it's the same concept with a little twist. In MS SQL a `DATETIME` is stored using 2 numbers. The first is number of days since 1/1/1900 and the second is number of ticks since midnight: See <http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/how-are-dates-stored-in-sql-server> and [When storing a datetime in sql server (datetime type), what format does it store it in?](https://stackoverflow.com/questions/6656658/when-storing-a-datetime-in-sql-server-datetime-type-what-format-does-it-store)
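The range arithmetic above is easy to verify with a back-of-the-envelope check (plain Python, nothing SQL Server specific — just a 32-bit signed integer with an implied scale of four decimal places):

```python
# smallmoney: 32 bits, signed, last four decimal digits after the point.
bits = 32
signed_min = -(2 ** (bits - 1))      # -2147483648
signed_max = 2 ** (bits - 1) - 1     # 2147483647
scale = 10 ** 4                      # implied fixed-point scale

low = signed_min / scale
high = signed_max / scale
print(low, high)  # -214748.3648 214748.3647
```

The same scheme scaled up to 64 bits gives the `money` range of roughly +/- 922,337,203,685,477.58.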
> How are the "money" [...] data types in SQL Server stored in > memory? If you want to see how is stored a MONEY (8 bytes) value then you could execute following script step/step: ``` CREATE DATABASE TestMoneyDT; GO USE TestMoneyDT; GO CREATE TABLE dbo.MyMoney -- :-) ( Col1 CHAR(5) NOT NULL, Col2 MONEY NOT NULL, Col3 CHAR(5) NOT NULL ); GO INSERT dbo.MyMoney (Col1, Col2, Col3) VALUES ('AAAAA',12345678.0009,'BBBBB'); GO -- Install http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-sp_allocationmetadata-putting-undocumented-system-catalog-views-to-work/ EXEC sp_AllocationMetadata 'dbo.MyMoney' GO /* Stored procedure output: Object Name Index ID Alloc Unit ID Alloc Unit Type First Page Root Page First IAM Page ----------- -------- ----------------- --------------- ---------- --------- -------------- MyMoney 0 72057594039697408 IN_ROW_DATA (1:147) (0:0) (1:150) */ SELECT DB_ID() AS DBID GO /* DBID ---- 13 */ -- Reading data from page (1:147) (file id 1, page number 147) DBCC TRACEON(3604); -- http://technet.microsoft.com/en-us/library/ms187329.aspx DBCC PAGE(13, 1, 147, 3); -- http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/10/625659.aspx DBCC TRACEOFF(3604); -- http://technet.microsoft.com/en-us/library/ms174401.aspx GO -- See [Memory dump @0x0000000014AEA060] of DBCC PAGE output /* Memory Dump @0x000000000E76A060 0000000000000000: 10001600 41414141 41e9f698 be1c0000 †....AAAAAéö.¾... 0000000000000010: 00424242 42420300 00†††††††††††††††††.BBBBB... 
41414141 41 = AAAAA <- Col1 CHAR(5) e9f698 be1c0000 <- Col2 MONEY take this string and run following script (look at SumOverAll values) 424242 4242 = BBBBB <- Col3 CHAR(5) */ GO DECLARE @HexString VARBINARY(8) = 0xE9F698BE1C; -- One MONEY value consumes 8 bytes WITH N10 AS ( SELECT * FROM (VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10)) x(Num) ) SELECT src.*, SUM(src.IntValueMultipliedByte) OVER() AS SumOverAll FROM ( SELECT n.Num, SUBSTRING(@HexString, n.Num, 2) AS HexValue, CONVERT(INT, SUBSTRING(@HexString, n.Num, 1)) AS IntValue, POWER(CONVERT(NUMERIC(38,0), 256), n.Num-1) AS Byte, CONVERT(INT, SUBSTRING(@HexString, n.Num, 1)) * POWER(CONVERT(NUMERIC(38,0), 256), n.Num-1) AS IntValueMultipliedByte FROM N10 n WHERE n.Num <= LEN(@HexString) ) src; GO /* NumHexValue IntValue Byte IntValueMultipliedByte SumOverAll ----------- ----------- ---------- ---------------------- ------------ 1 0xE9F6 233 1 233 123456780009 2 0xF698 246 256 62976 123456780009 3 0x98BE 152 65536 9961472 123456780009 4 0xBE1C 190 16777216 3187671040 123456780009 5 0x1C 28 4294967296 120259084288 123456780009 */ ``` Note: I used SQL2008R2.
How are the "money" and "decimal" data types in SQL Server stored in memory?
[ "", "sql", "sql-server", "memory", "" ]
Maybe I'm missing something stupid, but... I have three tables in an m-to-m relation: ``` CREATE TABLE tbl_users ( usr_id INT NOT NULL AUTO_INCREMENT , usr_name VARCHAR( 64 ) NOT NULL DEFAULT '' , usr_surname VARCHAR( 64 ) NOT NULL DEFAULT '' , usr_pwd VARCHAR( 64 ) NOT NULL , usr_level INT( 1 ) NOT NULL DEFAULT 0, PRIMARY KEY ( usr_id ) ) ENGINE = InnoDB; CREATE TABLE tbl_houses ( house_id INT NOT NULL AUTO_INCREMENT , city VARCHAR( 100 ) DEFAULT '' , address VARCHAR( 100 ) DEFAULT '' , PRIMARY KEY ( house_id ) ) ENGINE = InnoDB; CREATE TABLE tbl_users_houses ( user_id INT NOT NULL , house_id INT NOT NULL , INDEX user_key (user_id), FOREIGN KEY (user_id) REFERENCES tbl_users(usr_id) ON DELETE CASCADE ON UPDATE CASCADE, INDEX house_key (house_id) , FOREIGN KEY (house_id) REFERENCES tbl_houses(house_id) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE = InnoDB; ``` In the link table I have two records: ``` user_id house_id 1 1 1 2 ``` Now, I'm trying to select all houses with: ``` select * from tbl_houses AS H left join tbl_users_houses AS UH on H.house_id = UH.house_id where UH.user_id = 2; ``` Why do I get no data instead of all houses?
Because of this line: ``` where UH.user_id = 2; ``` This is only true if UH.user_id is non-null, so it effectively excludes any case where you have a house without a matching row in UH, which defeats the point of using a LEFT JOIN. If you want all houses, and UH data where there is a match, use this: ``` select * from tbl_houses AS H left join tbl_users_houses AS UH on H.house_id = UH.house_id and UH.user_id = 2; ```
Your WHERE clause is specifying that ``` UH.user_id = 2 ``` What happens if you change it to `H.user_id = 2` ? To give this (all houses for user\_id = 2): ``` select * from tbl_houses AS H left join tbl_users_houses AS UH on H.house_id = UH.house_id where H.user_id = 2; ``` Or if you want all houses regardless and data for user\_id = 2 where it exists in tbl\_User\_houses try this: ``` select * from tbl_houses AS H left join tbl_users_houses AS UH on H.house_id = UH.house_id and UH.user_id = 2; ```
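The ON-versus-WHERE difference is reproducible with Python's sqlite3, using the question's data (only user 1 has houses, so filtering on user 2 in the WHERE clause discards everything):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_houses (house_id INTEGER);
CREATE TABLE tbl_users_houses (user_id INTEGER, house_id INTEGER);
INSERT INTO tbl_houses VALUES (1), (2);
INSERT INTO tbl_users_houses VALUES (1, 1), (1, 2);  -- only user 1
""")

# Filtering in WHERE drops the NULL-extended rows -> nothing for user 2.
in_where = conn.execute(
    "SELECT COUNT(*) FROM tbl_houses h "
    "LEFT JOIN tbl_users_houses uh ON h.house_id = uh.house_id "
    "WHERE uh.user_id = 2"
).fetchone()[0]

# Filtering in ON keeps every house; unmatched ones just carry NULLs.
in_on = conn.execute(
    "SELECT COUNT(*) FROM tbl_houses h "
    "LEFT JOIN tbl_users_houses uh "
    "ON h.house_id = uh.house_id AND uh.user_id = 2"
).fetchone()[0]

print(in_where, in_on)  # 0 2
```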
MySQL LEFT JOIN returns empty resultset
[ "", "mysql", "sql", "" ]
The statement below is to get all non-duplicate IDs from the products table. Is there a way I can get the total count of rows output by this SQL statement? ``` select min(product_id) from products where market_code = 'germany' group by product_code ``` ## Sample table data: ``` product_id market_code product_code 1 uk AAA 1 uk AAA 1 uk AAA 2 germany BAA 2 germany BAA 3 uk BAA ``` Thanks
You can simply do this: ``` SELECT COUNT(*) FROM ( select min(product_id) from products where market_code = 'germany' group by product_code ) AS t; ```
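The derived-table count can be tried directly with Python's sqlite3 and the sample rows from the question — `COUNT(*)` over the grouped subquery counts the groups, not the underlying rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (product_id INTEGER, market_code TEXT, product_code TEXT)"
)
conn.executemany(
    "INSERT INTO products VALUES (?, ?, ?)",
    [
        (1, "uk", "AAA"), (1, "uk", "AAA"), (1, "uk", "AAA"),
        (2, "germany", "BAA"), (2, "germany", "BAA"), (3, "uk", "BAA"),
    ],
)

# Only product_code 'BAA' exists for germany -> one group -> count 1.
group_count = conn.execute(
    "SELECT COUNT(*) FROM ("
    "  SELECT MIN(product_id) FROM products "
    "  WHERE market_code = 'germany' GROUP BY product_code"
    ") AS t"
).fetchone()[0]
print(group_count)  # 1
```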
Actually your query is not what you said ("The statement below is to get all unique/non-duplicate IDs from the products table"). It will select the lowest product_id per product_code, not unique IDs. For example, if there are product_id 2 and product_id 3 for product_code A, it will only return product_id 2; that is a minimum value, not a unique one. If you want unique values, you could do it this way: ``` SELECT product_code, product_id, count(*) FROM products GROUP BY product_code, product_id ``` If you want just the unique/non-duplicate IDs and their count, you can select with this query: ``` SELECT product_id, count(*) FROM products GROUP BY product_id ```
Is there anyway to perform a Count() when using Min()?
[ "", "mysql", "sql", "" ]
I have a very simple question: does Oracle allow multiple "WITH AS" clauses in a single SQL statement? Example: ``` WITH abc AS( select ......) WITH XYZ AS(select ....) /*This one uses "abc" multiple times*/ Select .... /*using XYZ multiple times*/ ``` I can make the query work by repeating the same query multiple times, but I do not want to do that; I want to leverage "WITH AS". It seems like a simple requirement but Oracle does not allow it: > ORA-00928: missing SELECT keyword
You can do this as: ``` WITH abc AS( select FROM ...) , XYZ AS(select From abc ....) /*This one uses "abc" multiple times*/ Select From XYZ.... /*using abc, XYZ multiple times*/ ```
the correct syntax is - ``` with t1 as (select * from tab1 where conditions... ), t2 as (select * from tab2 where conditions... (you can access columns of t1 here as well) ) select * from t1, t2 where t1.col1=t2.col2; ```
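The same single-`WITH`, comma-separated syntax works in most engines with CTE support, and a later CTE may reference an earlier one. A quick sqlite3 sketch with a hypothetical table `tab1`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab1 (n INTEGER)")
con.executemany("INSERT INTO tab1 VALUES (?)", [(1,), (2,), (3,), (4,)])

# One WITH keyword, comma-separated CTEs; t2 selects from t1.
rows = con.execute("""
    WITH t1 AS (SELECT n FROM tab1 WHERE n > 1),
         t2 AS (SELECT n * 10 AS m FROM t1)
    SELECT m FROM t2 ORDER BY m
""").fetchall()
```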
Can we have multiple "WITH AS" in single sql - Oracle SQL
[ "", "sql", "oracle", "" ]
I am getting the error `subquery must return only one column` when I try to run the following query: ``` SELECT mat.mat as mat1, sum(stx.total) as sumtotal1, ( SELECT mat.mat as mat, sum(stx.total) as sumtotal FROM stx LEFT JOIN mat ON stx.matid = mat.matid LEFT JOIN sale ON stx.saleid = sale.id WHERE stx.date BETWEEN '2013-05-01' AND '2013-08-31' AND sale.userid LIKE 'A%' GROUP BY mat.mat ) AS MyField FROM stx LEFT JOIN mat ON stx.matid = mat.matid LEFT JOIN sale ON stx.saleid = sale.id WHERE stx.date BETWEEN '2013-05-01' AND '2013-08-31' AND sale.userid LIKE 'B%' GROUP BY mat.mat ``` What is causing this error?
Put a subquery that returns multiple columns in the `FROM` list and select from it. A correlated subquery would be a bad idea to begin with. However, your query is not even correlated (no link to outer query) and seems to return multiple rows. This leads to a (possibly very expensive and nonsensical) cross join producing a Cartesian product. Looks like you really want something like this: ``` SELECT m1.mat AS mat1, m1.sumtotal AS sumtotal1 , m2.mat AS mat2, m2.sumtotal AS sumtotal2 FROM ( SELECT mat.mat, sum(stx.total) AS sumtotal FROM stx LEFT JOIN mat ON mat.matid = stx.matid LEFT JOIN sale ON stx.saleid = sale.id WHERE stx.date BETWEEN '2013-05-01' AND '2013-08-31' AND sale.userid LIKE 'A%' GROUP BY mat.mat ) m1 JOIN ( SELECT mat.mat, sum(stx.total) AS sumtotal FROM stx LEFT JOIN mat ON mat.matid = stx.matid LEFT JOIN sale ON sale.id = stx.saleid WHERE stx.date BETWEEN '2013-05-01' AND '2013-08-31' AND sale.userid LIKE 'B%' GROUP BY mat.mat ) m2 USING (mat); ``` Both `LEFT JOIN`s are also pointless. The one on `sale` is forced to an `INNER JOIN` by the `WHERE` condition. The one on `mat` seems pointless, since you `GROUP BY mat.mat` - except if you are interested in `mat IS NULL`? (I doubt it.) The whole query can be simplified further to: ``` SELECT m.mat , sum(x.total) FILTER (WHERE s.userid LIKE 'A%') AS total_a , sum(x.total) FILTER (WHERE s.userid LIKE 'B%') AS total_b FROM sale s JOIN stx x ON x.saleid = s.id JOIN mat m ON m.matid = x.matid WHERE s.userid LIKE ANY ('{A%,B%}') AND x.date BETWEEN '2013-05-01' AND '2013-08-31' GROUP BY 1; ``` Using the aggregate `FILTER` clause (Postgres 9.4+). 
See: * [Aggregate columns with additional (distinct) filters](https://stackoverflow.com/questions/27136251/aggregate-columns-with-additional-distinct-filters/27141193#27141193) In Postgres 11 or later, this can be optimized further with the ["starts with" operator **`^@`**](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) for prefix matching. See: * [PostgreSQL LIKE query performance variations](https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528) ``` SELECT m.mat , sum(x.total) FILTER (WHERE s.userid ^@ 'A') AS total_a , sum(x.total) FILTER (WHERE s.userid ^@ 'B') AS total_b FROM sale s JOIN stx x ON x.saleid = s.id JOIN mat m ON m.matid = x.matid WHERE s.userid ^@ ANY ('{A,B}') AND x.date BETWEEN '2013-05-01' AND '2013-08-31' GROUP BY 1; ``` The `WHERE` condition might get even simpler, depending on your secret data types and indices. Here is an overview over pattern matching operators: * [Pattern matching with LIKE, SIMILAR TO or regular expressions](https://dba.stackexchange.com/a/10696/3684)
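For engines without the aggregate `FILTER` clause, the same per-group conditional sums can be written with `CASE` expressions. A sqlite3 sketch with a simplified, made-up schema (`sale` and `stx` with a `mat` column folded in for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sale (id INTEGER, userid TEXT);
CREATE TABLE stx (saleid INTEGER, mat TEXT, total REAL);
INSERT INTO sale VALUES (1, 'A1'), (2, 'B1');
INSERT INTO stx VALUES (1, 'wood', 5.0), (2, 'wood', 7.0), (1, 'iron', 2.0);
""")

# CASE inside SUM is the portable equivalent of SUM(...) FILTER (WHERE ...).
rows = con.execute("""
    SELECT x.mat,
           SUM(CASE WHEN s.userid LIKE 'A%' THEN x.total ELSE 0 END) AS total_a,
           SUM(CASE WHEN s.userid LIKE 'B%' THEN x.total ELSE 0 END) AS total_b
    FROM sale s JOIN stx x ON x.saleid = s.id
    GROUP BY x.mat ORDER BY x.mat
""").fetchall()
```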
Instead of subquery select statement ``` SELECT mat.mat as mat, sum(stx.total) as sumtotal ``` Try this statement ``` SELECT sum(stx.total) as sumtotal ```
error : subquery must return only one column
[ "", "sql", "database", "postgresql", "left-join", "cross-join", "" ]
I have this problem with SQL and I can't figure it out. Imagine that I have 3 tables as follows ``` Names Nameid name 1 Starbucks Coffee 2 Johns Restaurant 3 Davids Restaurant user_likes userid Nameid 1 1 2 1 2 3 user_visited userid Nameid 1 2 ``` I want to find the places with the greatest number of (likes + visits). I also want to select all places, not just those that have been liked or visited. I do: ``` SELECT n.nameid, n.name, COUNT(user_likes.nameid) AS freq FROM names AS n LEFT JOIN user_likes ON n.nameid = user_likes.nameid LEFT JOIN user_visited ON n.nameid = user_visited.nameid GROUP BY n.nameid, n.name ORDER BY freq DESC ``` But it doesn't give me the total frequency. The problem is, if a place is both visited and liked, it is counted only once, while I want it to be counted twice. Any suggestions?
I've made a quick test and although I prefer Serge's solution, this one seemed to perform faster as the amount of items to join will be less: ``` SELECT n.nameId, n.name, coalesce(sum(likesCount), 0) totalCount FROM NAMES n LEFT JOIN ( SELECT nameId, count(*) likesCount FROM user_likes GROUP BY nameId UNION ALL SELECT nameId, count(*) visitsCount FROM user_visited GROUP BY nameId ) s ON n.nameId = s.nameId GROUP BY n.nameId ORDER BY totalCount DESC ``` I'm assuming the following indexes: ``` alter table names add index(nameid); alter table user_likes add index(nameid); alter table user_visited add index(nameid); ``` Probably the OP can compare the efficiency of both queries with actual data and provide feedback.
``` SELECT n.name, t.nameid, COUNT(t.nameid) AS freq FROM Names n JOIN ( SELECT nameid FROM user_likes UNION ALL SELECT nameid FROM user_visited ) t ON n.nameid = t.nameid GROUP BY t.nameid ORDER BY freq DESC ```
Find the frequency of rows from multiple joint tables
[ "", "mysql", "sql", "" ]
I have the following sql statement: ``` select a.desc, sum(a.amount) from t_main c left outer join t_direct bds on (bds.repid = c.id) left outer join tm_defination_def a on (a.id = bds.sId) where c.repId = 1000000134 group by a.desc; ``` When I run it I get the following result: ``` desc amount NW 12.00 SE 10 ``` When I try to add another left outer join to get another set of values: ``` select a.desc, sum(a.amount), sum(i.amt) from t_main c left outer join t_direct bds on (bds.repid = c.id) left outer join tm_defination_def a on (a.id = bds.sId) left outer join t_ind i on (i.id = c.id) where c.repId = 1000000134 group by a.desc; ``` It basically doubles the amount field like: ``` desc amount amt NW 24.00 234.00 SE 20.00 234.00 ``` While the result should be: ``` desc amount amt NW 12.00 234.00 SE 10.00 NULL ``` How do I fix this?
Your new left outer join is forcing some rows to be returned in the result set a few times due to multiple relations most likely. Remove your SUM and just review the returned rows and work out exactly which ones you require (maybe restrict it to on certain type of t\_ind record if that is applicable??), then adjust your query accordingly.
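The row multiplication described above (a join fan-out inflating SUM) is easy to reproduce; a minimal sqlite3 example with hypothetical `orders`/`notes` tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, amount REAL);
CREATE TABLE notes  (order_id INTEGER);           -- two notes for order 1
INSERT INTO orders VALUES (1, 10.0);
INSERT INTO notes  VALUES (1), (1);
""")

clean = con.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

# The extra join repeats the order row once per note, inflating the sum.
inflated = con.execute("""
    SELECT SUM(o.amount) FROM orders o LEFT JOIN notes n ON n.order_id = o.id
""").fetchone()[0]
```

This is why pre-aggregating each child table in its own subquery (as in the other answer) fixes the doubling.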
If you really need to receive the data as you mentioned, you can use sub-queries to perform the needed calculations. In this case your code may look like the following: ``` select x.[desc], x.amount, y.amt from ( select c.[desc] , sum (bdd.amount) as amount , c.id from t_main c left outer join t_direct bds on (bds.repid=c.id) left outer join tm_defination_def bdd on (bdd.id = bds.sId) where c.repId=1000000134 group by c.id, c.[desc] ) x left join ( select t.id, sum (t.amt) as amt from t_ind t inner join t_main c on t.id = c.id where c.repID = 1000000134 group by t.id ) y on x.id = y.id ``` In the first sub-select you will receive the aggregated data for the first two columns, `desc` and `amount`, grouped as you need. The second select will return the needed `amt` value for each `id` of the first set. A left join between those results gives the needed result. The addition of the `t_main` table to the second select was done for performance reasons. Another solution can be the following: ``` select c.[desc] , sum (bdd.amount) as amount , amt = (select sum (amt) from t_ind where id = c.id) from t_main c left outer join t_direct bds on (bds.repid=c.id) left outer join tm_defination_def bdd on (bdd.id = bds.sId) where c.repId = 1000000134 group by c.id, c.[desc] ``` The result will be the same. Basically, instead of using nested selects, the `amt` sum is calculated inline for each row of the joined result. In case of large tables the performance of the second solution will be worse than the first one.
Left outer join on multiple tables
[ "", "sql", "sql-server", "t-sql", "" ]
How do I convert the current time (datetime) in my timezone into another timezone, say France? I looked around on SO, but did not find any posts which could help me. I am using SQL Server 2008.
``` select CONVERT(datetime,SWITCHOFFSET(CONVERT(datetimeoffset,GetUTCDate()),'+05:30')) Date_India, CONVERT(datetime,SWITCHOFFSET(CONVERT(datetimeoffset,GetUTCDate()),'+03:30')) Date_Iran ``` [check here](http://mssqlfun.com/2012/07/26/how-to-insert-datetime-with-different-timezones/)
Please replace getdate() with GetUTCDate() to get an accurate result; otherwise the offset (5:30 in this case) will be added to the current time (which already has some offset) and will produce an incorrect result. For instance, if you are in Pakistan, GetDate() will give you the time as GMT+5:00, and adding the 5:30 offset to this time will result in GMT+10:30, which is totally wrong.
sql server convert datetime into another timezone?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table A which has a structure like this: ``` ID Name City zip 1 xxx wer 134 2 yyy qwe 234 3 zzz ert 256 4 www qwe 567 ``` Now I would like to update multiple rows with one SQL query. For instance, this query: ``` UPDATE A SET zip= '355' WHERE id= '2'; ``` will update one row. What if I also want to update the rows with IDs 1 and 4 with just one query, where the ZIP values are different?
If you mean that you want to update multiple rows to the same zip with one query, you can use the following: ``` UPDATE A SET zip= '355' WHERE id in ('1','4'); ``` However, if you mean that you want to update multiple rows to different zip values, this cannot be done without some logic behind which id should get which zip. For example, if you wanted to just do a few different values you could use something like: ``` UPDATE A SET zip= decode(id,'1','100','4','400') WHERE id in ('1','4'); ``` This sets the zip of any rows where the id is '1' to '100', any rows where id is '4' to '400' and so on. You can add as many arguments as you need to. If you want a default value for any not listed just add it to the end, e.g. `decode(id,'1','100','4','400','999')` would set any rows that are not id '1' or '4' (but were not excluded in the where clause) to '999'. If you have a lot of different values, then I would suggest creating a reference data table and doing a select from this table as a subquery within your update statement.
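`decode` is Oracle-specific; the portable spelling of the same idea is a `CASE` expression. A sqlite3 sketch of the one-statement, many-values update, using the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id TEXT, zip TEXT)")
con.executemany("INSERT INTO a VALUES (?, ?)",
                [('1', '134'), ('2', '234'), ('4', '567')])

# One statement assigns a different zip per id.
con.execute("""
    UPDATE a
    SET zip = CASE id WHEN '1' THEN '100' WHEN '4' THEN '400' END
    WHERE id IN ('1', '4')
""")
zips = dict(con.execute("SELECT id, zip FROM a").fetchall())
```

Row '2' is untouched because the WHERE clause excludes it.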
Please try case condition to update with different values: ``` UPDATE A SET zip=case when ID='1' then 'Value1' else 'Value2' end WHERE ID IN ('1', '4'); ```
Updating a number of row with a single SQL query in Oracle DBMS
[ "", "sql", "oracle", "sql-update", "" ]
I have a PostgreSQL database in which one table rapidly grows very large (several million rows every month or so) so I'd like to periodically archive the contents of that table into a separate table. I'm intending to use a cron job to execute a .sql file nightly to archive all rows that are older than one month into the other table. I have the query working fine, but I need to know how to dynamically create a timestamp of one month prior. The `time` column is stored in the format `2013-10-27 06:53:12` and I need to know what to use in an SQL query to build a timestamp of exactly one month prior. For example, if today is October 27, 2013, I want the query to match all rows where time < `2013-09-27 00:00:00`
Question was answered by a friend in IRC: `'now'::timestamp - '1 month'::interval` Having the timestamp return 00:00:00 wasn't terribly important, so this works for my purposes.
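Other engines have close analogues to this interval arithmetic; for example, sqlite's date modifiers give the same "one month back, at start of day" cutoff (a sketch, not Postgres syntax; an explicit date replaces 'now' to keep the result deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# '-1 month' steps back a calendar month; 'start of day' zeroes the time part.
cutoff = con.execute(
    "SELECT datetime('2013-10-27 06:53:12', '-1 month', 'start of day')"
).fetchone()[0]
```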
``` select date_trunc('day', NOW() - interval '1 month') ``` This query will return the date one month ago from now, with the time rounded down to 00:00:00.
Get timestamp of one month ago in PostgreSQL
[ "", "sql", "postgresql", "" ]
I have two tables with data. Both tables have a `CUSTOMER_ID` column (which is numeric). I am trying to get a list of all the unique values for `CUSTOMER_ID` *and* know whether the `CUSTOMER_ID` exists in both tables or just one (and which one). I can easily get a list of the unique `CUSTOMER_ID` values: ``` SELECT tblOne.CUSTOMER_ID FROM tblOne UNION SELECT tblTwo.CUSTOMER_ID FROM tblTwo ``` I can't just add an identifier column to the `SELECT` statement (like: `SELECT tblOne.CUSTOMER_ID, "Table1" AS DataSource`) because then the records wouldn't be unique and it will get both sets of data. I feel I need to add it somewhere else in this query but am not sure how. ### Edit for clarity: For the union query output I need an additional column that can tell me if the unique value I am seeing exists in: (1) *both* tables, (2) table one, or (3) table two.
If the CUSTOMER\_ID appears in both tables then we'll have to arbitrarily pick which table to call the source. The following query uses "tblOne" as the [SourceTable] in that case: ``` SELECT CUSTOMER_ID, MIN(Source) AS SourceTable, COUNT(*) AS TableCount FROM ( SELECT DISTINCT CUSTOMER_ID, "tblOne" AS Source FROM tblOne UNION ALL SELECT DISTINCT CUSTOMER_ID, "tblTwo" AS Source FROM tblTwo ) GROUP BY CUSTOMER_ID ```
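The pattern translates almost verbatim outside Access (single-quoted string literals, and a required alias on the derived table); a sqlite3 run:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tblOne (customer_id INTEGER);
CREATE TABLE tblTwo (customer_id INTEGER);
INSERT INTO tblOne VALUES (1), (2);
INSERT INTO tblTwo VALUES (2), (3);
""")

# MIN(src) arbitrarily reports tblOne when the id exists in both tables;
# table_count = 2 flags those ids.
rows = con.execute("""
    SELECT customer_id, MIN(src) AS source_table, COUNT(*) AS table_count
    FROM (
        SELECT DISTINCT customer_id, 'tblOne' AS src FROM tblOne
        UNION ALL
        SELECT DISTINCT customer_id, 'tblTwo' AS src FROM tblTwo
    ) AS u
    GROUP BY customer_id
    ORDER BY customer_id
""").fetchall()
```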
Gord Thompson's answer is correct. But, it is not necessary to do a distinct in the subqueries. And, you can return a single column with the information you are looking for: ``` select customer_id, iif(min(which) = max(which), min(which), "both") as DataSource from (select customer_id, "tblone" as which from tblOne UNION ALL select customer_id, "tbltwo" as which from tblTwo ) t group by customer_id ```
Create a UNION query that identifies which table the unique data came from
[ "", "sql", "ms-access", "unique", "union", "" ]
Does anyone know an SQL query or WordPress plugin which may help me remove duplicate comments? While I was importing posts and comments into WordPress, I got some timeouts and repeated the process, so some comments were posted twice.
Taking a look at some of the images of the WordPress schema, you should be able to identify the records you want to delete with a query such as ``` SELECT wp_comments.* FROM wp_comments LEFT JOIN ( SELECT MIN(comment_id) comment_id FROM wp_comments GROUP BY comment_post_id, comment_author, comment_content ) to_keep ON wp_comments.comment_id = to_keep.comment_id WHERE to_keep.comment_id IS NULL ``` You should run the query above and make sure it is returning the correct records (the ones that will be deleted). Once you are satisfied that the query is working, simply change it from a `SELECT` to a `DELETE` ``` DELETE wp_comments FROM wp_comments LEFT JOIN ( SELECT MIN(comment_id) comment_id FROM wp_comments GROUP BY comment_post_id, comment_author, comment_content ) to_keep ON wp_comments.comment_id = to_keep.comment_id WHERE to_keep.comment_id IS NULL ```
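The keep-the-lowest-id idea is portable even to engines without `DELETE ... JOIN`, using `NOT IN` instead. A sqlite3 sketch with a tiny wp_comments table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE wp_comments (
    comment_id INTEGER, comment_post_id INTEGER,
    comment_author TEXT, comment_content TEXT)""")
con.executemany("INSERT INTO wp_comments VALUES (?, ?, ?, ?)", [
    (1, 10, 'ann', 'nice post'),
    (2, 10, 'ann', 'nice post'),   # duplicate of comment 1
    (3, 10, 'bob', 'thanks'),
])

# Keep the MIN(comment_id) of each (post, author, content) group.
con.execute("""
    DELETE FROM wp_comments WHERE comment_id NOT IN (
        SELECT MIN(comment_id) FROM wp_comments
        GROUP BY comment_post_id, comment_author, comment_content)
""")
remaining = [r[0] for r in con.execute(
    "SELECT comment_id FROM wp_comments ORDER BY comment_id")]
```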
Wow, this worked like a charm. A more aggressive form, which I eventually used to eliminate all duplicate comments regardless of author or post ID, is: ``` DELETE wp_comments FROM wp_comments LEFT JOIN ( SELECT MIN(comment_id) comment_id FROM wp_comments GROUP BY comment_content ) to_keep ON wp_comments.comment_id = to_keep.comment_id WHERE to_keep.comment_id IS NULL ``` Note that this will also delete short template-like comments such as "Thanks" or "Great", keeping only one copy of each.
Remove duplicated comments in Wordpress?
[ "", "sql", "wordpress", "comments", "" ]
I want to use the id for naming the image file inside a row. Is there a way to use the id of the (current) row inside the query itself? (I know about lastInsertId(), but it generally works after the query and not inside it.) ``` INSERT INTO table (id, intro, detail, image) VALUES (null, introtext, detailtext, get_the_id_of_this_query_itself.jpg ) ``` I know that I could just save the extension of this image (like .jpg) and then afterwards use it together with the id of the article. But I wondered if this is possible. Added afterwards: could we perhaps use a subquery inside the insert query that increments max(id) by one and then appends the .jpg string for the image name?
``` INSERT INTO table (id, intro, detail, image) SELECT id, 'introtext', 'detailtext', CONCAT('id_', id, '.jpg') FROM ( SELECT AUTO_INCREMENT AS id FROM `information_schema`.`TABLES` WHERE `TABLE_SCHEMA` LIKE 'database_name' AND `TABLE_NAME` LIKE 'table') AS h ``` * **[INSERT ... SELECT](http://dev.mysql.com/doc/refman/5.0/en/insert-select.html)** * **[AUTO\_INCREMENT](http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html)** * **[get privileges for information\_schema](https://stackoverflow.com/questions/13588002/get-privileges-on-information-schema-mysql)** * **[from clause subqueries](http://dev.mysql.com/doc/refman/5.0/en/from-clause-subqueries.html)** Replace `table` and `database_name` accordingly.
It is not possible to know what the ID will be before insertion. It's also not recommended to use this data as a real value. This column is supposed to be a transparent identifier of the row. This is useful for `JOIN`s. The fact that it can be convenient for lookups is incidental and should not be relied upon. This ID should be treated as if it could change at any time. If you need another identifier such as an ordinal number, it should be stored in a separate field and handled by the application. Imagine if this script is being accessed at high speed. Many of these `INSERT` queries may be running simultaneously (and even on different servers) so the ID cannot be known until the query completes. Even after the query completes, the ID may be changed after replication syncs and the `image` column's data may be inconsistent.
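The two-step approach this answer recommends (insert, read the generated id, then fill in the derived column) looks like this with sqlite3's `lastrowid`; the `article` table name is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE article (
    id INTEGER PRIMARY KEY AUTOINCREMENT, intro TEXT, image TEXT)""")

cur = con.execute("INSERT INTO article (intro) VALUES ('introtext')")
new_id = cur.lastrowid                      # known only after the INSERT
con.execute("UPDATE article SET image = ? WHERE id = ?",
            (f"{new_id}.jpg", new_id))
image = con.execute("SELECT image FROM article WHERE id = ?",
                    (new_id,)).fetchone()[0]
```

Wrapping the INSERT and UPDATE in one transaction keeps the pair atomic under concurrent writers.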
Is there a way to use the current row's (that is about to be inserted) id number inside query itself?
[ "", "mysql", "sql", "" ]
Taking a sample table: ``` WITH t(val) AS ( SELECT 'my Name' FROM dual UNION SELECT 'my name' FROM dual UNION SELECT 'my naim' FROM dual UNION SELECT 'MY kat' FROM dual UNION select 'my katt' from dual ) SELECT * FROM t; ``` I need an output by preference: The query should search for the exact match in the table first, If not found then search by `lowering the case`, and if not found then only search for `soundex`,. So the final output for something like: ``` WHERE val = 'my Name' OR lower(val) = lower('my Name') OR soundex(lower(val)) = soundex(lower('my Name')) ``` should be: ``` output ----- my Name ``` Thanks in advance.
Just filter as you describe, then sort by that order, then grab the first record: ``` WITH t AS ( SELECT 'my Name' as val FROM dual UNION SELECT 'my name' FROM dual UNION SELECT 'my naim' FROM dual UNION SELECT 'MY kat' FROM dual UNION select 'my katt' from dual ) SELECT * FROM ( SELECT * FROM t WHERE val = 'my Name' OR lower(val) = lower('my Name') OR soundex(lower(val)) = soundex(lower('my Name')) order by case when val = 'my Name' then 1 when lower(val) = lower('my Name') then 2 when soundex(lower(val)) = soundex(lower('my Name')) then 3 end ) WHERE ROWNUM = 1; ```
For large data sets you'd probably want to avoid the unnecessary tests if any of the earlier ones had found a match. ``` with t as ( SELECT 'my Name' as val FROM dual UNION SELECT 'my name' FROM dual UNION SELECT 'my naim' FROM dual UNION SELECT 'MY kat' FROM dual UNION select 'my katt' from dual ), exact_match as ( select * from t where val = 'my Name'), lower_case_match as ( select * from t where lower(val) = lower('my Name') and not exists (select null from exact_match)), soundex_match as ( select * from t where soundex(val) = soundex('my Name') and not exists (select null from lower_case_match) and not exists (select null from exact_match)) select * from exact_match union all select * from lower_case_match union all select * from soundex_match; ``` Oracle would most likely materialise the result set of the first two search common table expressions in order to make it more efficient for the subsequent expressions to test whether they returned a result. If the first "exact\_match" search returns a result then subsequent searches will not be required to execute.
oracle sql search by preference
[ "", "sql", "oracle", "" ]
I want to find top 2 customers with maximum orders. The table looks like: ``` CustomerId OrderId ProductId 101 1 A 101 3 B 101 4 C 102 9 D 102 9 E 103 11 E 103 22 F ``` This is the output that I need from SELECT query: ``` CustomerId OrderId 101 1 101 3 101 4 103 11 103 22 ``` The solution is just not clicking to my mind...I have kind of reached half-way using following query - ``` SELECT CustomerId, OrderId FROM dbo.CustomerOrder GROUP BY CustomerId, OrderId ``` which just gives me distinct pairs of CustomerId, OrderId. Can anyone please help.
Here is the [**SQL Fiddle**](http://sqlfiddle.com/#!6/dcd42/5/0) example that shows the below code working: ``` SELECT DISTINCT CO.CustomerId, CO.OrderID FROM ( SELECT TOP(2) COS.CustomerId, COUNT(DISTINCT COS.orderId) as NoOfOrders FROM custorders AS COS GROUP BY COS.CustomerId ORDER BY COUNT(DISTINCT COS.orderId) DESC, CustomerId DESC ) AS COM INNER JOIN custorders AS CO ON COM.CustomerId = CO.CustomerId ```
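`TOP(2)` corresponds to `LIMIT` in other engines; the same aggregate-then-join-back shape in sqlite3, using the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE custorders (customer_id INTEGER, order_id INTEGER)")
con.executemany("INSERT INTO custorders VALUES (?, ?)", [
    (101, 1), (101, 3), (101, 4), (102, 9), (102, 9), (103, 11), (103, 22)])

# Inner query ranks customers by distinct order count; outer join fetches rows.
rows = con.execute("""
    SELECT DISTINCT co.customer_id, co.order_id
    FROM (SELECT customer_id
          FROM custorders
          GROUP BY customer_id
          ORDER BY COUNT(DISTINCT order_id) DESC
          LIMIT 2) top2
    JOIN custorders co ON co.customer_id = top2.customer_id
    ORDER BY co.customer_id, co.order_id
""").fetchall()
```

Customer 102 is excluded because its two rows share one order_id, so its distinct count is 1.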
Try this: ``` SELECT c.CustomerId, c.OrderId FROM CustomerOrder c INNER JOIN (SELECT TOP 2 WITH TIES CustomerId, COUNT(distinct OrderId) as [Count] FROM CustomerOrder GROUP BY CustomerId ORDER BY [Count] DESC) b ON c.CustomerId = b.CustomerId ``` You could use WITH TIES: for example, if you have 3 customers with the same maximum number of orders, WITH TIES will retrieve all three; without it you would leave one out, and that may be wrong. Check this [SQL FIDDLE DEMO](http://sqlfiddle.com/#!6/fc088/4)
SQL Server - Find Top n customers with maximum orders
[ "", "sql", "sql-server", "join", "subquery", "inner-join", "" ]
My table structure: **Quran Master** ![enter image description here](https://i.stack.imgur.com/nl9QF.png) QuranPrayed: ![enter image description here](https://i.stack.imgur.com/BLmGx.png) I want to get results according to my timestamp, i.e. the count of each sipara within a timestamp range. For that I made the following query: ``` SELECT qm.sipara, Prayed = COUNT(qp.Sipara) FROM QuranMaster qm LEFT JOIN QuranPrayed qp ON qp.sipara=qm.sipara where qp.TimeStamp between '2013-10-27 19:59:00.000' and '2013-10-28 20:00:00.000' GROUP BY qm.sipara,qm.OrderID order by qm.OrderID ``` But it's not returning results for siparas prayed 0 times. It gives the following: ![enter image description here](https://i.stack.imgur.com/XP903.png) It should also show sipara4 as 0, sipara5 as 0, etc. Please help me.
Try this ``` SELECT qm.sipara, Prayed = COUNT(qp.Sipara) FROM QuranMaster qm LEFT JOIN QuranPrayed qp ON (qp.sipara = qm.sipara AND qp.TimeStamp >= '2013-10-27 19:59:00.000' AND qp.TimeStamp <= '2013-10-28 20:00:00.000') GROUP BY qm.sipara,qm.OrderID ORDER BY qm.OrderID ``` Basically, your WHERE clause was filtering out the rows. Moving the timestamp condition from the WHERE clause into the LEFT JOIN clause means that you include all rows from QuranMaster and only count those records in QuranPrayed that meet the timestamp condition; records that don't meet it are simply not joined and hence not included in the count.
The problem is that your BETWEEN filter on the timestamp is dropping the records you want to sum. Do the count first (with the timestamp filter inside the subquery) and then do a left join from the master. ``` SELECT qm.sipara, vsum.Prayed FROM QuranMaster qm LEFT JOIN ( SELECT qp.sipara, Prayed = COUNT(qp.Sipara) FROM QuranPrayed qp WHERE qp.TimeStamp BETWEEN '2013-10-27 19:59:00.000' AND '2013-10-28 20:00:00.000' GROUP BY qp.sipara ) vsum ON vsum.sipara = qm.sipara ```
Not ordering all column values
[ "", "sql", "database", "sql-server-2008-r2", "" ]
I need to show the names and IDs of all artists who have not recorded titles but do have web addresses listed in my database. This involves two tables: ``` Artists ------- ArtistID, ArtistName, City, Region, WebAddress Titles ------ TitleID, ArtistID, Title, StudioID, Genre ``` My query is as follows: ``` select ar.* from artists ar inner join titles t on ar.artistid = t.artistid where ar.webaddress != NULL; ``` which returns an empty set.
In MySql, null is not a value, so `where ar.webaddress != NULL;` will not work. There is special syntax for checking nulls, `is null` and `is not null` for example. Also, the inner join will only give you artists that have titles. To get artists without titles, try outer join and check for null in the joined table i.e. ``` select ar.* from artists ar left join titles t on ar.artistid = t.artistid where ar.webaddress Is not NULL and t.ArtistID is null; ```
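This NULL behaviour is standard three-valued logic, not MySQL-specific; a quick sqlite3 check:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE artists (name TEXT, webaddress TEXT)")
con.executemany("INSERT INTO artists VALUES (?, ?)",
                [('a', 'http://a.example'), ('b', None)])

# webaddress != NULL evaluates to NULL (not true) for every row,
# so this matches nothing at all.
wrong = con.execute(
    "SELECT name FROM artists WHERE webaddress != NULL").fetchall()
# IS NOT NULL is the correct test.
right = con.execute(
    "SELECT name FROM artists WHERE webaddress IS NOT NULL").fetchall()
```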
Note that NULL is not a value, so you can't test it with the equal or not-equal operators. Try this: ``` select * from artists where webaddress is not null and artistid not in (select distinct artistid from titles) ```
JOIN returns empty set
[ "", "mysql", "sql", "" ]
I have a log table with records containing a user id and the date of a specific activity. I want to get the names of users having activity in every month. I am using the following query ``` select distinct(employeeid) from transactions where eventdate between '01-OCT-13' AND '23-OCT-13' and eventdate between '01-SEP-13' AND '01-OCT-13' and eventdate between '01-AUG-13' AND '01-SEP-13' and eventdate between '01-JUL-13' AND '01-AUG-13'; ``` But this doesn't work. Can someone please suggest any improvement? **Edit:** Since my question seems to be a little confusing, here is an example ``` EmployeeID | Timestamp a | 01-Jul-13 b | 01-Jul-13 a | 01-Aug-13 c | 01-Aug-13 a | 01-Sep-13 d | 01-Sep-13 a | 01-Oct-13 a | 01-Oct-13 ``` In the above table, we can see that employee "a" has activity in all the months from July till October. So I want to find a list of all such employees.
You can use COUNT as an analytic function to get the number of months for each employee and the total number of months. Then select only those employees where both counts match. ``` select distinct employeeid from ( select employeeid, count(distinct trunc(eventdate,'month')) /* count of months for each employee */ over (partition by employeeid) as emp_months, count(distinct trunc(eventdate,'month')) /* count of all months */ over () as all_months from transactions ) where emp_months = all_months; ```
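The month-count comparison does not strictly need analytic functions; a plain GROUP BY / HAVING version, shown here in sqlite3 with the question's sample data (strftime stands in for Oracle's TRUNC):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transactions (employeeid TEXT, eventdate TEXT)")
con.executemany("INSERT INTO transactions VALUES (?, ?)", [
    ('a', '2013-07-01'), ('b', '2013-07-01'),
    ('a', '2013-08-01'), ('c', '2013-08-01'),
    ('a', '2013-09-01'), ('d', '2013-09-01'),
    ('a', '2013-10-01'), ('a', '2013-10-01')])

# An employee qualifies when their distinct-month count equals
# the distinct-month count of the whole table.
rows = con.execute("""
    SELECT employeeid
    FROM transactions
    GROUP BY employeeid
    HAVING COUNT(DISTINCT strftime('%Y-%m', eventdate)) =
           (SELECT COUNT(DISTINCT strftime('%Y-%m', eventdate))
            FROM transactions)
""").fetchall()
```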
By *very inefficient*, I think you mean *it doesn't work*. The same value can't be both in September, in October, etc. Anyway, using @LaggKing's suggestion, you could try this query: ``` SELECT employeeid FROM ( SELECT DISTINCT employeeid, MONTH(eventdate) AS m FROM transactions ) t GROUP BY employeeid HAVING COUNT(*) = MONTH(NOW()) ``` **EDIT:** You need to take the year into account. ``` SELECT employeeid FROM ( SELECT DISTINCT employeeid, MONTH(eventdate) AS m FROM transactions WHERE YEAR(eventdate) = YEAR(NOW()) ) t GROUP BY employeeid HAVING COUNT(*) = MONTH(NOW()) ```
Get name of person having activity in every month - Oracle SQL
[ "", "sql", "oracle", "" ]
I have the following script - ``` select sum(duration) as duration from opr where op in ('GRC','GCQ') and timestamp between '20130101000000' and '20130930235959' ``` I receive the value - **34459298** seconds. I would like to include these restrictions - The duration which is * <= '18000000' (seconds) should be multiplied to 0.14 * The duration between > '18000000' and <= '27000000' should be multiplied to 0.11 * and the duration > '27000000' should be multiplied to 0.09 I have tried with this case statement - ``` case when duration <= '18000000' then (duration)*0.14 when duration > '18000000' and duration <= '27000000' then (duration)*0.11 when duration > '27000000' then (duration)*0.09 else 0 end as bal ``` However, I receive this value **34459298** multiplied to 0.09, because it is bigger then '27000000', which is not the idea. The idea is all the seconds which these two op ('GRC','GCQ') had made to be multiplied with the above values. Could you please help me to do this?
Is `duration` a number or a string? Would be better to compare with numbers, I guess: ``` SELECT SUM( CASE WHEN duration <= 18000000 THEN duration * 0.14 WHEN duration > 18000000 AND duration <= 27000000 THEN duration * 0.11 WHEN duration > 27000000 THEN duration * 0.09 ELSE 0 END ) AS duration FROM opr WHERE op IN ('GRC','GCQ') AND timestamp BETWEEN '20130101000000' AND '20130930235959' ; ```
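The per-row tiering inside SUM can be checked on a toy table; a sqlite3 sketch with made-up durations, one per tier:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE opr (op TEXT, duration INTEGER)")
con.executemany("INSERT INTO opr VALUES (?, ?)", [
    ('GRC', 10000000),    # <= 18000000          -> * 0.14
    ('GCQ', 20000000),    # 18000001..27000000   -> * 0.11
    ('GRC', 30000000),    # > 27000000           -> * 0.09
    ('XXX', 99999999)])   # filtered out by the op list

bal = con.execute("""
    SELECT SUM(CASE WHEN duration <= 18000000 THEN duration * 0.14
                    WHEN duration <= 27000000 THEN duration * 0.11
                    ELSE duration * 0.09 END)
    FROM opr WHERE op IN ('GRC', 'GCQ')
""").fetchone()[0]
```

Each row is priced by its own tier before summing, which is exactly what the CASE-inside-SUM form guarantees (1,400,000 + 2,200,000 + 2,700,000 here).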
Use a subquery to select the specific durations, multiply each duration by your values, and then use SUM over that subquery.
Multiple aggregate function
[ "", "sql", "database", "plsql", "" ]
I was trying to pull the `TOTAL_PRICE` from the cart table using the following query. Currently it will fetch the `TOTAL_PRICE` from the last 30 days. But I need to modify the query to fetch the `TOTAL_PRICE` of the current month, i.e., `PURCHASED_DATE` should be between the 1st and the 30th/31st of the current month. ``` SELECT TOTAL_PRICE FROM CUST_CART_TABLE WHERE USER_ID = '"+userId+"' AND (PURCHASED_DATE BETWEEN TO_DATE(SYSDATE-30, 'DD-MM-YY') AND TO_DATE(SYSDATE,'DD-MM-YY')) ```
You can use TRUNC and LAST\_DAY: ``` SELECT TOTAL_PRICE FROM CUST_CART_TABLE WHERE USER_ID = '"+userId+"' AND (TRUNC(PURCHASED_DATE) BETWEEN TRUNC(SYSDATE, 'MM') AND TRUNC(LAST_DAY(SYSDATE))); ```
You can try this one - ``` SELECT TOTAL_PRICE FROM CUST_CART_TABLE WHERE USER_ID = '"+userId+"' AND TRUNC(PURCHASED_DATE,'MONTH') = TRUNC(SYSDATE,'MONTH'); ``` It will return records from the current month.
Fetch the records from current month
[ "", "sql", "oracle", "" ]
I have a table users with columns userPoints and userStatus; each user begins as a newbie. If the userPoints reach 100, the user status should change to something else. There is another table userStatuses with columns statusId, statusName, minPoints and maxPoints. Can I create a trigger that will fire when userPoints reach 100 and change the userStatus according to the userStatuses table? **table statuses** ``` id statusName minimumPoints maximumPoints 1 lvl1 0 100 2 lvl2 101 1000 3 lvl3 1001 5000 4 lvl4 5001 20000 5 lvl5 20000 100000 ```
Try this: ``` delimiter // CREATE TRIGGER changeUserStatus BEFORE UPDATE ON users FOR EACH ROW BEGIN IF NEW.userPoints > 100 THEN SET NEW.userStatus = 'lvl2'; END IF; END;// delimiter ; ``` `delimiter` allows the use of semicolons within the trigger definition. Of course, this assumes that you hard code the status. If you instead want to reference a table, I would use a different approach. **EDIT:** If you want to reference your table `statuses` (which you provided in your edited question), then your approach depends on how many records you are updating. If you are updating one at a time, ``` delimiter // CREATE TRIGGER changeUserStatus BEFORE UPDATE ON users FOR EACH ROW BEGIN SET @status = ( SELECT statusName FROM statuses s WHERE NEW.userPoints BETWEEN s.minimumPoints AND s.maximumPoints ); IF @status <> NEW.userStatus THEN SET NEW.userStatus = @status; END IF; END;// delimiter ; ``` However, if you are updating many records at a time, it is likely more performant to run one set-based query at the end, since a row-level trigger still fires once per row. So, I would create a stored procedure ``` CREATE PROCEDURE refreshUserStatuses() UPDATE users u JOIN statuses s ON u.userPoints BETWEEN s.minimumPoints AND s.maximumPoints SET u.userStatus = s.statusName; ``` You will have to run `CALL refreshUserStatuses();` after updating users.
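The range lookup can also be expressed portably with a correlated subquery; a sqlite3 sketch over the question's statuses table (user names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE statuses (statusName TEXT, minimumPoints INTEGER, maximumPoints INTEGER);
INSERT INTO statuses VALUES ('lvl1', 0, 100), ('lvl2', 101, 1000), ('lvl3', 1001, 5000);
CREATE TABLE users (name TEXT, userPoints INTEGER, userStatus TEXT);
INSERT INTO users VALUES ('ann', 50, 'lvl1'), ('bob', 250, 'lvl1');
""")

# Each user's status is looked up by the BETWEEN range their points fall in.
con.execute("""
    UPDATE users
    SET userStatus = (SELECT statusName FROM statuses
                      WHERE users.userPoints BETWEEN minimumPoints AND maximumPoints)
""")
statuses = dict(con.execute("SELECT name, userStatus FROM users").fetchall())
```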
You can do: ``` CREATE TRIGGER update_status BEFORE UPDATE ON users FOR EACH ROW BEGIN SET @NEWSTATUS = (SELECT statusId FROM userStatuses WHERE NEW.userPoints BETWEEN minPoints AND maxPoints); IF @NEWSTATUS != OLD.userStatus THEN SET NEW.userStatus = @NEWSTATUS; END IF; END; ``` This will get the status for the user's points and validate if he was already on that level or not. If not, the user will change status. `sqlfiddle demo`
mysql trigger to change user status after a certain amount of points has been reached
[ "", "mysql", "sql", "triggers", "" ]
I have a problem with trigger below. Thanks for any help. ``` CREATE TRIGGER upper_case ON osoba AFTER INSERT AS BEGIN DECLARE @imie VARCHAR(25) DECLARE @new_imie VARCHAR(25) DECLARE my_cursor CURSOR FAST_FORWARD FOR SELECT imie FROM INSERTED OPEN my_cursor FETCH NEXT FROM my_cursor INTO @imie WHILE @@FETCH_STATUS = 0 BEGIN @new_imie = UPPER(SUBSTRING(@imie, 1, 1)) + SUBSTRING(@imie, 2,LEN(@imie)) END UPDATE osoba SET imie = @new_imie WHERE imie = @imie END ``` The error message: > Msg 102, Level 15, State 1, Procedure upper\_case, Line 13 > Incorrect syntax near '@new\_imie'. > Msg 102, Level 15, State 1, Procedure upper\_case, Line 17 > Incorrect syntax near 'END'.
Don't use a cursor for this. You can use an `UPDATE` statement that affects all rows at once instead of processing them one by one. ``` CREATE TRIGGER upper_case ON osoba AFTER INSERT AS BEGIN UPDATE O SET imie = UPPER(SUBSTRING(I.imie, 1, 1)) + SUBSTRING(I.imie, 2, LEN(I.imie)) FROM INSERTED I JOIN osoba O ON I.imie = O.imie END ```
You should use `SET` when you assign `@new_imie` ([msdn](http://msdn.microsoft.com/en-us/library/ms189484.aspx)): ``` SET @new_imie = UPPER(SUBSTRING(@imie, 1, 1)) + SUBSTRING(@imie, 2,LEN(@imie)); ```
SQL Server : Insert trigger
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "triggers", "" ]
I'm a newbie with SQL. I have this table with these records: table `mark` ``` ---------------------------------------- student_id | subject_id | comp_id | mark ---------------------------------------- 1 | 1 | 28 | 5.5 1 | 1 | 29 | 7.5 1 | 1 | 30 | 9.0 1 | 2 | 36 | 9.6 1 | 2 | 37 | 6.0 1 | 3 | 42 | 4.0 ``` I've tried this query: `SELECT subject_id FROM mark WHERE student_id =1` The results: ``` subject_id ---------- 1 1 1 2 2 3 ``` How can I select the data so that each subject appears only once? I'm expecting something like this: ``` subject_id ---------- 1 2 3 ``` Thanks in advance.
Use `distinct` ``` SELECT distinct subject_id FROM mark WHERE student_id = 1 ``` or group the data ``` SELECT subject_id FROM mark WHERE student_id = 1 group by subject_id ```
Just use `DISTINCT`: ``` SELECT DISTINCT subject_id FROM mark WHERE student_id =1 ```
MySQL - return only 1 record for each certain id
[ "", "mysql", "sql", "" ]
I have a very strange problem. I have a query like this which I run on SQL Server 2008 R2: ``` SELECT TOP (20) * FROM MyTable WHERE a = 0 AND b = 0 AND c = 0 ``` that uses my index on MyTable, but this one doesn't: ``` SELECT TOP (20) * FROM MyTable WHERE a = 0 OR b = 0 OR c = 0 ``` and I want to execute the second query in my program. How do I alter the second query to use the index? The index is: ``` CREATE NONCLUSTERED INDEX [MyIndex] ON [dbo].[MyTable] ( [a] ASC, [b] ASC, [c] ASC ) WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON ) ON [PRIMARY] ```
You cannot use that one index for a query which has `or` between the conditions. Just think about it: any one of the values can match, so a single index seek can't guarantee that there are no other records matching the criteria. It is really like running separate searches and then merging the results together.
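One common workaround — sketched here against the table and index from the question, not a guaranteed fix for every plan — is to split the `OR` into separate index-friendly branches and combine them with `UNION` (which also removes rows that match more than one condition):

```sql
SELECT TOP (20) *
FROM (
    SELECT * FROM MyTable WHERE a = 0
    UNION
    SELECT * FROM MyTable WHERE b = 0
    UNION
    SELECT * FROM MyTable WHERE c = 0
) AS t;
```

Each branch can only seek on an index that leads with its column: the composite index `(a, b, c)` from the question helps the `a = 0` branch, but the `b` and `c` branches would need their own indexes to avoid scans.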
In order to use an index, SQL Server checks a lot of things: **1)** whether the filter is a SARG (searchable argument). **2)** The index's selectivity (number of unique records : total number of records). **3)** The number of records to fetch (with few records the index may be used; otherwise a table scan, because SQL Server finds it cheaper to scan the whole table than to locate and use the index when the number of records is huge). **4)** In your query, because you are using `OR`, the number of matching records increases, in which case SQL Server finds it cheaper to scan the whole table than to locate/use the index. If it were `AND`, the likelihood of index use would increase.
why my query is not sargable
[ "", "sql", "sql-server", "performance", "sql-server-2008-r2", "" ]
I have a stored procedure in a package in an Oracle database that has 2 input parameters + 1 output CLOB parameter. How do I view the output in Toad? (Preferably with the user only having execute/select permissions) **Solution:** ``` DECLARE my_output_parameter CLOB; BEGIN my_package.my_stored_proc(1, 2, my_output_parameter); DBMS_OUTPUT.PUT_LINE(my_output_parameter); END; ``` Don't forget to execute as script, rather than just execute statement, and results appear in the DBMS Output window, not the datagrid.
I believe DBMS\_OUTPUT.PUT\_LINE used to have an internal line-length limit of 255 characters; however, that limit was removed from 10g Release 2 onwards. You can try inserting the column data into a table and viewing it later by querying that table. Please refer to - <http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:146412348066>
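If the line-length limit is the issue, a workaround (a sketch, reusing the anonymous block from the question) is to print the CLOB in fixed-size chunks with `DBMS_LOB.SUBSTR` instead of a single `PUT_LINE` call:

```sql
DECLARE
  my_output_parameter CLOB;
  v_offset PLS_INTEGER := 1;
  v_amount PLS_INTEGER := 255;  -- chunk size per output line
  v_len    PLS_INTEGER;
BEGIN
  my_package.my_stored_proc(1, 2, my_output_parameter);
  v_len := DBMS_LOB.GETLENGTH(my_output_parameter);
  WHILE v_offset <= v_len LOOP
    DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(my_output_parameter, v_amount, v_offset));
    v_offset := v_offset + v_amount;
  END LOOP;
END;
/
```

This splits on byte count rather than on line breaks, so long lines are wrapped; run it as a script with DBMS Output enabled, as in the question's solution.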
Would you consider printing the `CLOB` as a result set? You could then use a `PIPELINED` function (more about them here: [PIPELINED functions by Tim Hall](http://www.oracle-base.com/articles/misc/pipelined-table-functions.php)) which would return the `CLOB` line by line — take a look at the example below: ``` CREATE TABLE my_clob_tab ( id NUMBER, clob_col CLOB ) / INSERT INTO my_clob_tab VALUES (1, to_clob('first line' || chr(10) || 'second line, a longer one' || chr(10) || 'third')) / CREATE OR REPLACE TYPE t_my_line_str AS TABLE OF VARCHAR2(2000) / CREATE OR REPLACE FUNCTION print_clob_func(p_id IN NUMBER) RETURN t_my_line_str PIPELINED AS v_buffer VARCHAR2(32767); v_clob CLOB; v_len NUMBER; v_offset NUMBER := 1; v_line_break_pos NUMBER; v_amount NUMBER; BEGIN SELECT clob_col INTO v_clob FROM my_clob_tab WHERE id = p_id; IF v_clob IS NOT NULL THEN v_len := dbms_lob.getlength(v_clob); WHILE v_offset < v_len LOOP v_line_break_pos := instr(v_clob, chr(10), v_offset); IF v_line_break_pos = 0 THEN v_amount := v_len - v_offset + 1; ELSE v_amount := v_line_break_pos - v_offset; END IF; dbms_lob.read(v_clob, v_amount, v_offset, v_buffer); v_offset := v_offset + v_amount + 1; PIPE ROW (v_buffer); END LOOP; END IF; END; / ``` (the function can be changed so that it takes as a parameter the `CLOB` you get from your procedure) The code reads the content of the `CLOB` line by line (I assumed that the line separator is `CHR(10)` - if you are on Windows, you can change it to `CHR(13) || CHR(10)`) and `PIPE`s each line to the `SELECT` statement. The function that reads the clob could also print the output to the standard output via `dbms_output.put_line`, but it would be trickier, because you'd have to take into account that standard output's maximal line length is limited to, correct me if I'm wrong, 2000 characters, but it is doable (can't try that solution right now, unfortunately). 
In the meanwhile, please check above proposal and give me some feedback if that would work for you. Back to the solution, now we can issue this `SELECT` statement: ``` SELECT COLUMN_VALUE AS clob_line_by_line FROM TABLE(print_clob_func(1)); ``` Which will give us the following output: ``` CLOB_LINE_BY_LINE ------------------------- first line second line, a longer one third ``` Check it at SQLFiddle: [SQLFiddle example](http://sqlfiddle.com/#!4/161e6/2)
How do I view a CLOB output parameter in TOAD from an Oracle Stored Procedure?
[ "", "sql", "oracle", "stored-procedures", "toad", "clob", "" ]
Is there a function in apache PIG that's similar to Lead/Lag function in SQL? Or any pig function that can look back to previous row of record?
Here is an alternative: ``` import java.io.IOException; import java.util.ArrayList; import java.util.List; import org.apache.pig.EvalFunc; import org.apache.pig.data.DataType; import org.apache.pig.data.Tuple; import org.apache.pig.data.TupleFactory; import org.apache.pig.impl.logicalLayer.FrontendException; import org.apache.pig.impl.logicalLayer.schema.Schema; import org.apache.pig.impl.logicalLayer.schema.Schema.FieldSchema; public class GenericLag2 extends EvalFunc<Tuple>{ private List<String> lagObjects = null; @Override public Tuple exec(Tuple input) throws IOException { if (lagObjects == null) { lagObjects = new ArrayList<String>(); return null; } try { Tuple output = TupleFactory.getInstance().newTuple(lagObjects.size()); for (int i = 0; i < lagObjects.size(); i++) { output.set(i, lagObjects.get(i)); } lagObjects.clear(); for (int i = 0; i < input.size(); i++) { lagObjects.add(input.get(i).toString()); } return output; } catch (Exception e) { e.printStackTrace(); return null; } } @Override public Schema outputSchema(Schema input) { Schema tupleSchema = new Schema(); try { for (int i = 0; i < input.size(); i++) { tupleSchema.add(new FieldSchema("lag_" + i, DataType.CHARARRAY)); } return new Schema(new FieldSchema(getSchemaName(this.getClass().getName().toLowerCase(), input), tupleSchema, DataType.TUPLE)); } catch (FrontendException e) { e.printStackTrace(); return null; } } } ``` I assume this would be faster, but I'm not sure, as you would have to do the following: ``` ... C = ORDER A BY important_order_by_field, second_important_order_by_field D = FOREACH B GENERATE important_order_by_field ,second_important_order_by_field ,... ,FLATTEN(LAG( string_field_to_lag ,int_field_to_lag ,date_field_to_lag )) ; E = FOREACH D GENERATE important_order_by_field ,second_important_order_by_field ,... ,string_field_to_lag ,(int) int_field_to_lag ,(date_field_to_lag IS NULL ? 
null : ToDate(SUBSTRING(REPLACE(date_field_to_lag, 'T', ' '), 0, 19), 'yyyy-MM-dd HH:mm:ss')) as date_field_to_lag ; DUMP E; ```
Yes, there is pre-defined functionality. See the [Over()](http://pig.apache.org/docs/r0.12.0/api/org/apache/pig/piggybank/evaluation/Over.html) and [Stitch()](http://pig.apache.org/docs/r0.12.0/api/org/apache/pig/piggybank/evaluation/Stitch.html) methods in Piggybank. Over() has examples listed in the documentation.
lead/lag function in apache PIG
[ "", "sql", "apache-pig", "" ]
I have a problem where when getting the results of a query on the product\_view table I would also like to get the total number of product months (from the product\_month column) for each month related in the query results. ## For example If I did a query like this: ``` SELECT * FROM product_view WHERE product_price BETWEEN 945 AND 980; ``` and the product\_view table had the data like this: ``` INSERT INTO product_view (`product_id`, `product_month`, `product_price`) VALUES (4, 'DEC', 999), (4, 'FEB', 905), (4, 'JAN', 905), (8, 'DEC', 1049), (8, 'FEB', 955), (8, 'JAN', 955), (8, 'DEC', 1049), (8, 'FEB', 955), (8, 'JAN', 955), (22, 'APR', 925), (22, 'AUG', 969), (22, 'JUL', 969), (22, 'JUN', 945), (22, 'MAR', 925), (22, 'MAY', 925), (22, 'OCT', 915), (22, 'SEP', 945), (27, 'APR', 1435), (27, 'JUN', 1459), (27, 'MAY', 1435), (27, 'OCT', 1459), (27, 'SEP', 1459), (31, 'APR', 3029), (31, 'AUG', 3029), (31, 'FEB', 3029); ``` I would not only want the statement to give me the results of the query, but also the total count() for each month in the results returned. i.e. using the query above - FEB = 2, JUN = 1 etc... **End result - with duplicates count() results** ``` PRODUCT_ID PRODUCT_MONTH PRODUCT_PRICE Month_count 8 FEB 955 2 8 JAN 955 2 8 FEB 955 2 8 JAN 955 2 22 AUG 969 1 22 JUL 969 1 22 JUN 945 1 22 SEP 945 1 ``` Please could someone advise me on the best way to get the count() from the results and also is there a better way of showing these mulitple counts **without the results of it being duplicated on each row in the query's result set**. i.e. variables, or temporary tables (though with temp tables there are many users executing this query simultaneously, so would there be problems with getting accurate results per user as the product\_price BETWEEN values of the above query can be changes by the user) Thanks
You can use GROUP BY product\_month: ``` SELECT product_month, COUNT(*) FROM product_view WHERE product_price BETWEEN 945 AND 980 GROUP BY product_month; ``` `sqlfiddle demo` This will give you: ``` PRODUCT_MONTH COUNT(*) AUG 1 FEB 2 JAN 2 JUL 1 JUN 1 SEP 1 ```
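If you also need the month count repeated on each detail row, as in the asker's expected output, one way is to join the grouped counts back to the filtered rows — a sketch against the same table:

```sql
SELECT pv.product_id,
       pv.product_month,
       pv.product_price,
       mc.month_count
FROM product_view pv
JOIN (
    -- count of matching rows per month, same filter as the outer query
    SELECT product_month, COUNT(*) AS month_count
    FROM product_view
    WHERE product_price BETWEEN 945 AND 980
    GROUP BY product_month
) mc ON pv.product_month = mc.product_month
WHERE pv.product_price BETWEEN 945 AND 980;
```

This keeps the counts accurate per request even with many concurrent users, since no temporary table is shared between sessions.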
I think you should use query below: ``` SELECT PRODUCT_ID, PRODUCT_MONTH, PRODUCT_PRICE, COUNT(*) FROM product_view WHERE product_price BETWEEN 945 AND 980 GROUP BY PRODUCT_ID, PRODUCT_MONTH, PRODUCT_PRICE; ```
Getting the count() of a column from a resulting query?
[ "", "mysql", "sql", "" ]
I need to update a table using a join and a limit. I constructed this query: ``` UPDATE table1 JOIN table2 AS b ON table1.id = b.id SET table1.username = b.post_username WHERE b.post_username != '' AND table1.username = '' LIMIT 10 ``` Unfortunately I receive this error: ``` Error Code: 1221 Incorrect usage of UPDATE and LIMIT ``` How can I solve this?
I'm afraid you can't do that: If you read the [documentation](http://dev.mysql.com/doc/refman/5.0/en/update.html) it says: > With no WHERE clause, all rows are updated. If the ORDER BY clause is > specified, the rows are updated in the order that is specified. The > LIMIT clause places a limit on the number of rows that can be updated. > > **For the multiple-table syntax, UPDATE updates rows in each table named in table\_references that satisfy the conditions. In this case, > ORDER BY and LIMIT cannot be used.** So you can't do that in your query.
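A common workaround — a sketch based on the question's tables, not the only option — is to apply the `LIMIT` inside a derived table that selects the ids to update, then join that back in the multi-table `UPDATE`:

```sql
UPDATE table1
JOIN (
    -- pick the 10 ids to touch; without an ORDER BY, *which* 10 is undefined
    SELECT t1.id
    FROM table1 t1
    JOIN table2 b ON t1.id = b.id
    WHERE b.post_username != '' AND t1.username = ''
    LIMIT 10
) ids ON table1.id = ids.id
JOIN table2 b ON table1.id = b.id
SET table1.username = b.post_username;
```

MySQL materializes the derived table, which is why referencing `table1` both as the update target and inside the subquery is allowed here.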
Hey, just delete `LIMIT 10` from your code.
MySQL UPDATE with JOIN and LIMIT
[ "", "mysql", "sql", "join", "" ]
I have a table in MySQL as follows. ``` studentId studentName status ``` Here studentId is the ID of the student, name is his/her name and status is pass/fail. I need a MySQL query to return the top N rows based on the condition that the student has passed. For instance, if the table contains ``` 0001 Sam Pass 1190 Mary Fail 2320 John Pass 2536 Micheal Pass 3010 Rose Pass ``` and let's say n = 3, the query should return: ``` 0001 Sam Pass 2320 John Pass 2536 Micheal Pass ``` Is there a way to do it in MySQL?
You can do: ``` SELECT * FROM yourTable WHERE `status` = "Pass" LIMIT 3; ``` From the [docs](http://dev.mysql.com/doc/refman/5.1/en/select.html): > The LIMIT clause can be used to constrain the number of rows returned > by the SELECT statement. LIMIT takes one or two numeric arguments, > which must both be nonnegative integer constants (except when using > prepared statements). > > With two arguments, the first argument specifies the offset of the > first row to return, and the second specifies the maximum number of > rows to return. The offset of the initial row is 0 (not 1) In your case, since you just want the top rows, you don't need two arguments.
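One caveat: without an `ORDER BY`, the three rows returned are not guaranteed to be any particular three. To make "top N" deterministic, order explicitly — for example by the ID column from the question's table:

```sql
SELECT *
FROM yourTable
WHERE `status` = 'Pass'
ORDER BY studentId   -- defines which rows count as the "top" 3
LIMIT 3;
```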
Use `limit` :: **n** is the number of rows you want ``` Select * from student limit n ```
Need a query to get top N rows from MySQL database based on a condition
[ "", "mysql", "sql", "" ]
I am using an Oracle database and I have two tables. Table A: primary key = productid. Table B: references productid of table A; primary key = imageid. Flow: each product should have 4 images stored in table B (mandatory). Problem: there are some products that have only 1, 2 or 3 images, despite the fact that the 4-images rule is mandatory at the code level. Question: how do I count the unique number of products that have images in table B? Because if I do `select count(*) from tableA join tableB on tableA.productid = tableB.productid` the result is inflated, because it's one-to-many: one product has many images. So let's say `productID = 12345` has `4 images` in `table B`; once I run my query, the result is 4, when I want to get only 1... so how?
``` SELECT Count(DISTINCT TableA.productid) FROM TableA JOIN TableB ON TableA.productid = TableB.productid; ```
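As a follow-up, since the 4-images rule is mandatory, a related query (a sketch using the same tables) lists the products that violate it, i.e. have fewer than 4 images:

```sql
SELECT a.productid, COUNT(b.imageid) AS image_count
FROM TableA a
LEFT JOIN TableB b ON a.productid = b.productid
GROUP BY a.productid
HAVING COUNT(b.imageid) < 4;  -- includes products with 0 images, thanks to the LEFT JOIN
```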
Do a sub query with where exists ``` select count(*) from tableA where exists (select 1 from tableB where tableA.productid = tableB.productid) ```
how to select only 1 unique data when joining two tables, if the table structure is one to many?
[ "", "sql", "oracle", "" ]
We have 3 different tables, i.e. users, daily\_status, status\_types. Fields in table user: ``` id, name, department_id ``` data for user table: ``` u1 jai 1 u2 singh 1 u3 test 2 ``` daily\_status: ``` id, user_id, status_id, date ``` data for daily\_status table: ``` 1 u1 2 '23/10/2013' 2 u1 3 '24/10/2013' 3 u2 3 '23/10/2013' 4 u3 2 '23/10/2013' 5 u3 2 '23/10/2013' ``` status\_types: ``` id, status_name, ``` data for status\_types table ``` 1 present 2 leave 3 sick ``` I need to make a query like this ``` SELECT user.id, user.name, status.status_name FROM user LEFT JOIN daily_status as ds ON (user.id=ds.user_id) LEFT JOIN status_types as status ON (ds.status_id=status.id) WHERE user.department_id=1 AND ds.date='24/10/2013' ``` current results: ``` u1 jai sick ``` but I want ``` u1 jai sick u2 singh - ``` The results are correct as per the query, but I need to reconfigure it so that it gives me details of all users of that department, including those who don't have a status on that date in the daily\_status table.
Move the date validation to the ON: ``` SELECT u.id, u.name, status.status_name FROM user u LEFT JOIN daily_status as ds ON u.id=ds.user_id AND ds.date='24/10/2013' LEFT JOIN status_types as status ON ds.status_id=status.id WHERE u.department_id=1 ``` `sqlfiddle demo`
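If you want the literal `-` shown for users without a status, as in the expected output, wrap the joined column in `COALESCE` — a minimal variation of the same query:

```sql
SELECT u.id,
       u.name,
       COALESCE(status.status_name, '-') AS status_name
FROM user u
LEFT JOIN daily_status AS ds
       ON u.id = ds.user_id AND ds.date = '24/10/2013'
LEFT JOIN status_types AS status
       ON ds.status_id = status.id
WHERE u.department_id = 1;
```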
Maybe I'm oversimplifying this, but you only have one record with the given date 24/10/2013, so only one can show. If you have more than one listing for a person on a given date you could do something like this ``` SELECT a.id, a.[name], status_types.status_name FROM (SELECT id, [user].[name], department_id, ROW_NUMBER() OVER (PARTITION BY [user].[name] ORDER BY id DESC) rn FROM [user]) a LEFT JOIN daily_status AS ds ON (a.id = ds.user_id) LEFT JOIN status_types ON (ds.status_id = status_types.id) WHERE a.department_id = 1 AND rn = 1 AND ds.date = '2013-10-23' ```
how to make custom query from multiple tables
[ "", "mysql", "sql", "left-join", "" ]
I had a problem come through concerning missing data on Oracle 12c. I took a look at the code and found a query that works on mysql, mssql, oracle 11g, but has different behaviour in oracle 12c. I have generalized the table structure and query somewhat and reproduced the issue. ``` create table thing (thing_id number, display_name varchar2(500)); create table thing_related (related_id number, thing_id number, thing_type varchar2(500)); create table type_a_status (related_id number, status varchar2(500)); create table type_b_status (related_id number, status varchar2(500)); insert into thing values (1, 'first'); insert into thing values (2, 'second'); insert into thing values (3, 'third'); insert into thing values (4, 'fourth'); insert into thing values (5, 'fifth'); insert into thing_related values (101, 1, 'TypeA'); insert into thing_related values (102, 2, 'TypeB'); insert into thing_related values (103, 3, 'TypeB'); insert into thing_related (related_id, thing_id) values (104, 4); insert into type_a_status values (101, 'OK'); insert into type_b_status values (102, 'OK'); insert into type_b_status values (103, 'NOT OK'); ``` Running the query: ``` SELECT t.thing_id AS id, t.display_name as name, tas.status as type_a_status, tbs.status as type_b_status FROM thing t LEFT JOIN thing_related tr ON t.thing_id = tr.thing_id LEFT JOIN type_a_status tas ON (tr.related_id IS NOT NULL AND tr.thing_type = 'TypeA' AND tr.related_id = tas.related_id) LEFT JOIN type_b_status tbs ON (tr.related_id IS NOT NULL AND tr.thing_type = 'TypeB' AND tr.related_id = tbs.related_id) ``` on Oracle 11g gives (here's a [SQL Fiddle](http://sqlfiddle.com/#!4/4b9ed/3)): ``` ID | NAME | TYPE_A_STATUS | TYPE_B_STATUS 1 | first | OK | (null) 2 | second | (null) | OK 3 | third | (null) | NOT OK 4 | fourth | (null) | (null) 5 | fifth | (null) | (null) ``` Yet the same schema, data, and query on Oracle 12c: ``` ID | NAME | TYPE_A_STATUS | TYPE_B_STATUS 1 | first | OK | (null) 2 | second | (null) | OK 
3 | third | (null) | NOT OK 4 | fourth | (null) | (null) ``` It seems that the second two outer joins are failing to bring back anything because there is no row in 'thing\_related' to join by. However I don't understand why the outer join does not return nulls in this case as it does in Oracle 11g, Mysql, etc.. I've been researching and found documentation the Oracle 12c had a number of enhancements for outer joins, but nothing that highlighted a change that would affect this. Does anyone know why this is happening only for Oracle 12c, and how best would I rewrite this to work in 12c and maintain compatibility with 11g, mysql, etc.? EDIT: Attached plans. Oracle 11g: ![enter image description here](https://i.stack.imgur.com/RyP7s.png) Oracle 12c: ![enter image description here](https://i.stack.imgur.com/WGUyl.png)
**UPDATE:** This is fixed in 12.1.0.2. --- This definitely looks like a bug in 12.1.0.1. I would encourage you to create a service request through Oracle support. They might be able to find a fix or a better work around. And hopefully Oracle can fix it in a future version for everyone. Normally the worst part about working with support is reproducing the issue. But since you already have a very good test case this issue may be easy to resolve. There are probably many ways to work around this bug. But it's difficult to tell which method will always work. Your query re-write may work now, but if optimizer statistics change perhaps the plan will change back in the future. Another option that works for me on 12.1.0.1.0 is: ``` ALTER SESSION SET optimizer_features_enable='11.2.0.3'; ``` But you'd need to remember to always change this setting before the query is run, and then change it back to '12.1.0.1' after. There are ways to embed that within a query hint, such as `/*+ OPT_PARAM('optimizer_features_enable' '11.2.0.3') */`. But for some reason that does not work here. Or perhaps you can temporarily set that for the entire system and change it back after a fix or better work around is available. Whichever solution you use, remember to document it. If a query looks odd the next developer may try to "fix" it and hit the same problem.
Refer to: ANSI Outer Join Query Returns Wrong Results After Upgrade to 12.1.0.1 (Doc ID 1957943.1) Unpublished bug 16726638 Fixed in 12.1.0.2 (I have tested this) Workaround (I tested in 12.1.0.1): ``` alter session set "_optimizer_ansi_rearchitecture"=false; ``` Note 1957943.1 recommends as an alternative: ``` optimizer_features_enable = '11.2.0.4'; ``` but that does *not* work.
Query featuring outer joins behaves differently in Oracle 12c
[ "", "sql", "oracle", "oracle12c", "" ]
I have googled and haven't found any that matches my case. ``` declare @from_ datetime, @to_ datetime SET @from_ = '2013-01-01' SET @to_ = '2013-10-01' DECLARE @query NVARCHAR(4000) SET @query = N' DECLARE @from datetime, @to datetime SET @from = CAST('''+@from_+''' AS datetime) SET @to = CAST('''+@to_+''' AS datetime) ' ``` Gives error: ``` Msg 241, Level 16, State 1, Line 34 Conversion failed when converting date and/or time from character string. ``` The `SET @query = N'` is Line 34 I have also tried with `CONVERT(datetime, '+@from_+', 102)` with and without `, 102)` and `'''` and with just ``` SET @from = CAST('+@from_+' AS datetime) ``` Same result Solution is part of a PIVOT, so I have the declare, select, pivot and execute @query. The rest of the solution works nicely, until I bring the datetime into play.
Your problem is that you are adding a datetime with a varchar. Try this instead: ``` declare @from_ char(8), @to_ char(8) SET @from_ = '20130101' SET @to_ = '20131001' DECLARE @query NVARCHAR(4000) SET @query = N' DECLARE @from datetime, @to datetime SET @from = CAST('''+@from_+''' AS datetime) SET @to = CAST('''+@to_+''' AS datetime) ' ```
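An alternative worth considering: avoid string concatenation (and the conversion problem) entirely by passing the dates as real parameters with `sp_executesql`. This is a sketch with a placeholder query body — `dbo.SomeTable` and `SomeDate` are hypothetical names standing in for the question's PIVOT query:

```sql
DECLARE @from_ datetime = '2013-01-01',
        @to_   datetime = '2013-10-01';

DECLARE @query NVARCHAR(4000) = N'
    SELECT *
    FROM dbo.SomeTable            -- hypothetical table
    WHERE SomeDate BETWEEN @from AND @to';

EXEC sp_executesql @query,
     N'@from datetime, @to datetime',   -- parameter definitions
     @from = @from_, @to = @to_;
```

The datetime values never round-trip through a string, so no conversion can fail, and the plan can be reused.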
I am not sure, but I guess the problem is the apostrophes (') you have appended around `@from_` and `@to_`: ``` SET @from = CAST('''+@from_+''' AS datetime) SET @to = CAST('''+@to_+''' AS datetime) ``` Also, why are you not directly setting the values like this? ``` SET @from = @from_ SET @to = @to_ ```
Conversion failed when converting date
[ "", "sql", "sql-server-2008", "" ]
Using SQL Server 2012. I want to insert unique strings into a table. I always want to return the row ID of the unique string. Now, this can be accomplished in two ways. **Which solution is the best?** This is the table in question: ``` CREATE TABLE [dbo].[Comment]( [CommentID] [int] IDENTITY(1,1) NOT NULL, [Comment] [nvarchar](256) NOT NULL CONSTRAINT [PK_Comment] PRIMARY KEY CLUSTERED([CommentID] ASC) ) CREATE UNIQUE NONCLUSTERED INDEX [IX_Comment_Comment] ON [dbo].[Comment] ( [Comment] ASC ) ``` **Solution 1**: `SELECT` first to check if the string exists. If it does, return its `ID`. Otherwise, `INSERT` a new row and return the newly created `ID`. ``` CREATE PROCEDURE [dbo].[add_comment] @Comment [nvarchar](256) AS BEGIN SET NOCOUNT ON DECLARE @CommentID [int] DECLARE @TransactionCount [int] BEGIN TRY SET @TransactionCount = @@TRANCOUNT IF @TransactionCount = 0 BEGIN TRANSACTION SELECT @CommentID = [CommentID] FROM [dbo].[Comment] WHERE [Comment] = @Comment IF @@ROWCOUNT = 0 BEGIN INSERT INTO [dbo].[Comment]([Comment]) VALUES (@Comment) SET @CommentID = SCOPE_IDENTITY() END IF @TransactionCount = 0 COMMIT TRANSACTION END TRY BEGIN CATCH IF XACT_STATE() <> 0 AND @TransactionCount = 0 ROLLBACK TRANSACTION ; THROW END CATCH RETURN @CommentID END ``` **Solution 2**: `INSERT` first. If the insert violates the `UNIQUE INDEX`, a `SELECT` is issued. ``` CREATE PROCEDURE [dbo].[add_comment2] @Comment [nvarchar](256) AS BEGIN SET NOCOUNT ON DECLARE @CommentID [int] BEGIN TRY INSERT INTO [dbo].[Comment]([Comment]) VALUES (@Comment) SET @CommentID = SCOPE_IDENTITY() END TRY BEGIN CATCH IF @@ERROR = 2601 -- Duplicate SELECT @CommentID = [CommentID] FROM [dbo].[Comment] WHERE [Comment] = @Comment ELSE THROW END CATCH RETURN @CommentID END GO ``` **Solution 3**: Ideas? :)
In my testing I have found that it is much more efficient to check for the violation first instead of letting SQL Server try and fail, especially when the failure rate is expected to be significant (and at best they perform about the same overall, when the failure rate is low). Details [here](http://www.mssqltips.com/sqlservertip/2632/checking-for-potential-constraint-violations-before-entering-sql-server-try-and-catch-logic/?utm_source=AaronBertrand) and [here](http://www.sqlperformance.com/2012/08/t-sql-queries/error-handling). In addition to performance, another reason to not rely on the constraint to raise an error is that tomorrow someone could change or drop it.
Personally I do it like this ``` INSERT INTO [dbo].[Comment]([Comment]) SELECT T.Comment FROM (SELECT @Comment) T (Comment) LEFT JOIN dbo.Comment C ON C.Comment = T.Comment WHERE C.Comment IS NULL SELECT @CommentID = CommentID FROM dbo.Comment C WHERE C.Comment = @Comment ```
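One caveat with any check-first approach under concurrency: two sessions can both see "not found" and both try to insert, and one will still hit the unique index. A common pattern, sketched here against the question's procedure variables, is to take a key-range lock on the lookup so the second session waits:

```sql
BEGIN TRANSACTION;

-- UPDLOCK + HOLDLOCK serializes concurrent callers on this key range
SELECT @CommentID = CommentID
FROM dbo.Comment WITH (UPDLOCK, HOLDLOCK)
WHERE Comment = @Comment;

IF @CommentID IS NULL
BEGIN
    INSERT INTO dbo.Comment(Comment) VALUES (@Comment);
    SET @CommentID = SCOPE_IDENTITY();
END

COMMIT TRANSACTION;
```

The trade-off is reduced concurrency on hot keys, so it fits best when duplicate submissions of the same string are actually likely.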
Duplicate check using SELECT or rely on unique index?
[ "", "sql", "sql-server", "" ]
Let's say I have a screen where the user selects different options to filter data in a grid shown below; the data is fetched asynchronously through jQuery from the database. When the user selects status 1, they need to view `user` data where `status=1`: ``` SELECT * FROM user WHERE status=1; ``` And the same for 2, 3, 4 — a variable can be passed. But when the user doesn't select anything, it should return all the `users`. So is there an expression or keyword that I can pass in the query that will get `ALL` the records, something like: ``` SELECT * FROM user WHERE status=ANYTHING; -- EVERYTHING, ALL, ANY ``` I don't want to write a different query for each case — that would not be efficient, and the number of cases is high. The above is just a hypothetical example to explain my situation; the real case is a bit more complex.
``` SELECT * FROM user WHERE @status IS NULL OR status = @status ```
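One caveat with this catch-all pattern is parameter sniffing: a plan compiled for `@status IS NULL` (return everything) may be reused for a selective value, or vice versa. A common mitigation, if the query is not run extremely frequently, is to ask for a fresh plan each time:

```sql
SELECT *
FROM [user]
WHERE @status IS NULL OR status = @status
OPTION (RECOMPILE);  -- plan is built for the actual parameter value
```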
Use ISNULL() or COALESCE() function. Pass NULL value in parameter for all records and write query like ``` SELECT * FROM user WHERE status=ISNULL(@status,status) ``` **UPDATE :** Above query will not returns null records, so if your table have null records then you need to write query as Kaf mentioned like ``` SELECT * FROM user WHERE @status is null Or status = @status ```
A keyword to specify in WHERE clause to get all the data without filtering the field
[ "", "sql", "sql-server", "t-sql", "" ]
I need, for each character of the alphabet, the number of words in my database that start with that character. I tried the following query: ``` SELECT COUNT(id) FROM(SELECT id FROM words WHERE word LIKE 'A%') as A, COUNT(id) FROM(SELECT id FROM words WHERE word LIKE 'B%') AS B, ... COUNT(id) FROM(SELECT id FROM words WHERE word LIKE 'Z%') AS Z ``` When I run the query, it gives me this error: ``` Incorrect syntax near 'COUNT'. Incorrect syntax near the keyword 'AS'. ``` The funny thing is that the query works fine if I only ask for the words that start with 'A'. What am I doing wrong? **EDIT** Just to clarify for future readers: the reason I want one check per letter of the alphabet is that the alphabet can differ according to the language of the user, and it is provided each time the query is generated.
Use `CASE WHEN` as follows: ``` SELECT COUNT(CASE WHEN word LIKE 'A%' THEN 1 END) AS A ,COUNT(CASE WHEN word LIKE 'B%' THEN 1 END) AS B ......... ,COUNT(CASE WHEN word LIKE 'Z%' THEN 1 END) AS Z FROM WORDS ```
I'd suggest sub-stringing out the first character: ``` select substring(word,1,1) as firstChar, count(id) from... group by substring(word,1,1) ``` That seems easier than 26 individual checks.
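Since the question's edit says the alphabet depends on the user's language, a sketch that takes the letters as input (here a table variable `@Alphabet` — a hypothetical name you would fill from the generated letter list) also reports letters with zero words:

```sql
DECLARE @Alphabet TABLE (Letter nchar(1) PRIMARY KEY);
INSERT INTO @Alphabet VALUES (N'A'), (N'B'), (N'C'); -- ... one row per letter of the user's alphabet

SELECT a.Letter, COUNT(w.id) AS word_count
FROM @Alphabet a
LEFT JOIN words w ON w.word LIKE a.Letter + '%'
GROUP BY a.Letter;
```

This way the SQL stays fixed while the alphabet varies per user, instead of generating one `CASE` or one subquery per letter.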
Count results with condition in the select part of the query
[ "", "sql", "sql-server", "" ]
I have a query optimization problem. Let's say there is a table which has all the invoices. Using a TVP (Table Valued Parameter) I'd like to select few records by providing 1..n ids, or return all records by providing a single id with the value of -1. ``` DECLARE @InvoiceIdSet AS dbo.TBIGINT; INSERT INTO @InvoiceIdSet VALUES (1),(2),(3),(4) --INSERT INTO @InvoiceIdSet VALUES (-1) SELECT TOP 100 I.Id , Number , DueDate , IssuedDate , Amount , Test3 FROM dbo.Invoices I --WHERE EXISTS ( SELECT NULL -- FROM @InvoiceIdSet -- WHERE I.Id = ID -- OR ID = -1 ) --CROSS APPLY @InvoiceIdSet s WHERE i.Id = s.ID OR s.ID = -1 JOIN @InvoiceIdSet S ON S.ID = I.Id OR S.ID=-1 ``` Regardless of which method of selection I use, the query performs quite efficiently, until I start using the OR operator, at which point it starts taking a very long time to return few records, but all records are being returned real fast. Any pointers and suggestions will be highly appreciated. ![Without OR](https://i.stack.imgur.com/icclZ.png) ![With OR](https://i.stack.imgur.com/Nk4Uo.png) The first plan is without OR, the second is with OR. **UPDATE:** After fiddling with different options, I have arrived at this solution as the fastest performing, regardless of number of parameters. First change the UserDefinedTableType to include a primary key index: ``` CREATE TYPE [dbo].[TBIGINT] AS TABLE( [ID] [bigint] NOT NULL PRIMARY KEY CLUSTERED ) ``` The select statement now looks like this: ``` SELECT TOP 100 I.Id , Number , DueDate , IssuedDate , Amount , Test3 FROM dbo.Invoices I WHERE I.ID IN ( SELECT S.ID FROM @InvoiceIdSet S WHERE S.ID <> -1 UNION ALL SELECT S.ID FROM dbo.Invoices S WHERE EXISTS ( SELECT NULL FROM @InvoiceIdSet WHERE ID = -1 ) ) ``` The plans got much bigger, but performance is almost constant, between few (first plan) and all (second plan) records. 
![Few Records](https://i.stack.imgur.com/I4oDi.png) ![All Records](https://i.stack.imgur.com/YbUlj.png) As you can see the plans are now identical and return the required records in less than a second from 1M rows. I'd love to hear what the community thinks about this solution. Thanks everyone for your help.
I am going to accept my own answer here:

```
DECLARE @InvoiceIdSet AS TBIGINT

--INSERT INTO @InvoiceIdSet
--VALUES ( 1 ),
--       ( 2 ),
--       ( 3 ),
--       ( 4 )

INSERT INTO @InvoiceIdSet VALUES ( -1 )

SELECT TOP 100 I.Id , Number , DueDate , IssuedDate , Amount , Test3
FROM dbo.Invoices I
WHERE I.ID IN ( SELECT S.ID
                FROM @InvoiceIdSet S
                WHERE NOT EXISTS ( SELECT NULL
                                   FROM @InvoiceIdSet
                                   WHERE ID = -1 )
                UNION ALL
                SELECT S.ID
                FROM dbo.Invoices S
                WHERE EXISTS ( SELECT NULL
                               FROM @InvoiceIdSet
                               WHERE ID = -1 ) )
```

It performs well for ALL and SOME scenarios.
If `OR S.ID = -1` is added, SQL Server has to build a plan that is correct even when the condition matches every row, so the plan falls back to a scan, as in your second screenshot. As Martin Smith says in the comment, SQL Server isn't clever enough to branch on this by itself.

You can either have 2 queries (one for when -1 is passed, another for when only some rows are selected); that way, SQL Server produces 2 plans and both of them will be optimal for the scenario they cover. You could also go with recompilation, but `OPTION (RECOMPILE)` fires on every execution, which is, in general, a waste of resources. Or you can construct the query dynamically: you would generate only 2 distinct query texts, both of which will be cached, so there is no recompilation cost. Just take care how the dynamic SQL is written so it isn't vulnerable to SQL injection.
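As a minimal sketch of the 2-query idea above, here is an application-side branch using Python's built-in `sqlite3` (SQLite stands in for SQL Server purely for illustration; the table/column names and the `-1` sentinel are carried over from the question, the sample data is made up). Each branch hands the engine a simple predicate it can plan well, and the IN-list branch stays parameterized so it is not open to SQL injection:

```python
import sqlite3

# Tiny stand-in schema; SQLite is used here only to illustrate the branching.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Invoices (Id INTEGER PRIMARY KEY, Number TEXT);
INSERT INTO Invoices (Id, Number) VALUES (1,'A'),(2,'B'),(3,'C'),(4,'D');
""")

def fetch_invoices(ids):
    # Branch in the application instead of writing "OR ID = -1" in SQL:
    # each branch produces a distinct, cacheable query text.
    if ids == [-1]:
        return conn.execute(
            "SELECT Id, Number FROM Invoices ORDER BY Id").fetchall()
    placeholders = ",".join("?" * len(ids))  # parameterized: no injection risk
    return conn.execute(
        f"SELECT Id, Number FROM Invoices WHERE Id IN ({placeholders})"
        " ORDER BY Id", ids).fetchall()

print(fetch_invoices([1, 3]))   # [(1, 'A'), (3, 'C')]
print(fetch_invoices([-1]))     # all four rows
```

Only two query shapes ever reach the engine (the IN-list length varies, but each length is cached separately), which mirrors the "two plans, each optimal for its case" advice.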
Efficient way to select few or all records from a table
[ "sql", "sql-server" ]
```
describe table_name
```

doesn't show the constraints. Is there a way to do that?
Use the DBMS\_METADATA package, as [explained here](http://asktom.oracle.com/pls/asktom/f?p=100:11:0%3a%3a%3a%3aP11_QUESTION_ID:1794096300346327738).
You can use the data dictionary views, e.g. the `user_*` or `all_*` families such as `all_tab_columns` and `all_cons_columns`. Try something like this:

```
SELECT tc.column_id,
       tc.table_name,
       tc.column_name,
       tc.data_type,
       cc.constraint_name
FROM user_tab_columns tc, user_cons_columns cc
WHERE tc.table_name = cc.table_name(+)
  AND tc.column_name = cc.column_name(+)
  AND tc.table_name = 'YOUR TABLE NAME HERE'
ORDER BY tc.column_id
```
How to show the schema of a table include the constraints in Oracle?
[ "sql", "oracle" ]
I need some help using the SQL `MAX` and `SUM` aggregate functions. I want to display the first and last names of the employees with the highest total combined project hours.

I have two tables:

```
employee: FNAME--LNAME--SSN--BDATE--ADDRESS--SALARY--SUPERSSN--DNO
works_on: ESSN--PROJECT_NUM--HOURS
```

This is what I have so far, but there is a syntax error in the code:

```
select fname, lname, max(sum(hours)) "Total Hours", essn
from employee, works_on
where essn = ssn
order by lname;
```

I know that `max(sum(hours))` does not work; what will give me the right result?
You need to use a `GROUP BY` if you are going to use the aggregate function `SUM`. Something like this:

```
SELECT s.fname, s.lname
FROM (SELECT fname, lname, SUM(w.hours) SumHours, w.project_num
      FROM employee e
      JOIN works_on w ON w.essn = e.ssn
      GROUP BY e.fname, e.lname, w.project_num) s
WHERE s.SumHours = (SELECT MAX(w1.hours) MaxSum
                    FROM works_on w1
                    WHERE w1.project_num = s.project_num)
```

Note that correlated subqueries embedded in the `WHERE` clause can incur a significant performance penalty on large tables.
```
SELECT s.fname, s.lname
FROM (SELECT fname, lname, SUM(w.hours) SumHours
      FROM employee e
      JOIN works_on w ON w.essn = e.ssn
      GROUP BY e.fname, e.lname) s
WHERE s.SumHours = (SELECT MAX(hours) MaxSum
                    FROM works_on w1);
```

This code worked for me; thanks to user PM 77-1 for putting me on the right track.
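To see the grouped-sum logic end to end, here is a small runnable sketch using Python's built-in `sqlite3` (the table and column names follow the question; the sample data is made up). It sums hours per employee and keeps only the employee(s) whose total equals the maximum total, which is the effect `max(sum(hours))` was trying to express:

```python
import sqlite3

# Made-up sample data in the question's schema (employee / works_on).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (ssn INTEGER PRIMARY KEY, fname TEXT, lname TEXT);
CREATE TABLE works_on (essn INTEGER, project_num INTEGER, hours REAL);
INSERT INTO employee VALUES (1,'Ann','Lee'),(2,'Bob','Ray');
INSERT INTO works_on VALUES (1,10,5),(1,20,7),(2,10,3);
""")

# Sum hours per employee, then keep only the employee(s) whose total
# equals the maximum of all those totals.
rows = conn.execute("""
SELECT e.fname, e.lname, SUM(w.hours) AS total_hours
FROM employee e
JOIN works_on w ON w.essn = e.ssn
GROUP BY e.ssn, e.fname, e.lname
HAVING SUM(w.hours) = (SELECT MAX(t)
                       FROM (SELECT SUM(hours) AS t
                             FROM works_on GROUP BY essn))
""").fetchall()
print(rows)  # [('Ann', 'Lee', 12.0)]
```

The `HAVING` clause compares each employee's total against the maximum per-employee total, so ties at the top would all be returned.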
How to use SQL MAX(SUM()) function
[ "sql", "sum", "max", "sqlplus" ]
I am trying to construct an SQL query that would take a set of rows sharing a session ID and renumber a field on all of them, starting at 1. E.g. before the change:

```
SessionID | LineNumber | LineContents
----------------------------------------------
74666     | 1          | example content
74666     | 2          | some other content
74666     | 3          | another line
74666     | 4          | final line
```

after the change (the user has deleted line 2, so the 'LineNumber' values have been updated to reflect the new numbering, i.e. line '3' has now become line '2', etc.):

```
SessionID | LineNumber | LineContents
----------------------------------------------
74666     | 1          | example content
74666     | 2          | another line
74666     | 3          | final line
```

So expressing this in non-proper (pseudo) syntax would be something along these lines:

```
i = 0;
UPDATE tbl_name
SET LineNumber = i++
WHERE SessionID = 74666;
```

Searched a lot for this with no luck, any help is great :)
Using the `Row_Number()` function and a `CTE`:

```
;WITH CTE AS
(
   SELECT SessionID, LineNumber, LineContents,
          Row_Number() OVER(PARTITION BY SessionID ORDER BY LineNumber) Rn
   FROM Table1
)
UPDATE CTE
SET LineNumber = Rn
WHERE SessionID = 74666;
```

**[Fiddle Demo](http://sqlfiddle.com/#!3/3c3466/1)**
You can use `ROW_NUMBER` (SQL Server) or `ROWNUM` (Oracle) or similar inside your UPDATE statement.

[Check this](https://stackoverflow.com/questions/13648898/sql-update-with-row-number)

[Or this](https://stackoverflow.com/questions/12390612/how-to-assign-row-number-to-a-column)
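If window functions are not available (or just to check the logic of the renumbering), the same effect can be sketched in application code. Here is a minimal Python/SQLite version using the question's sample data (table name `tbl_name` as in the pseudo-code): it fetches the surviving rows in order and rewrites `LineNumber` as 1..n:

```python
import sqlite3

# Sample data from the question, after line 2 has been deleted.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_name (SessionID INTEGER, LineNumber INTEGER, LineContents TEXT);
INSERT INTO tbl_name VALUES
 (74666, 1, 'example content'),
 (74666, 3, 'another line'),
 (74666, 4, 'final line');
""")

# Fetch the surviving rows in order, then renumber them 1..n. This mirrors
# what ROW_NUMBER() OVER (PARTITION BY SessionID ORDER BY LineNumber)
# computes, without requiring window-function support in the engine.
rows = conn.execute(
    "SELECT rowid FROM tbl_name WHERE SessionID = ? ORDER BY LineNumber",
    (74666,),
).fetchall()
conn.executemany(
    "UPDATE tbl_name SET LineNumber = ? WHERE rowid = ?",
    [(i, rid) for i, (rid,) in enumerate(rows, start=1)],
)

result = conn.execute(
    "SELECT LineNumber, LineContents FROM tbl_name ORDER BY LineNumber"
).fetchall()
print(result)  # [(1, 'example content'), (2, 'another line'), (3, 'final line')]
```

The two-step fetch-then-update avoids relying on the order in which the engine applies a correlated `UPDATE`, at the cost of a round trip per session.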
Updating fields in sql incrementing by 1 each time
[ "sql", "sql-server", "t-sql", "sql-server-2012" ]
In SQL Server 2008 R2, suppose I have a table layout like this...

```
+----------+---------+-------------+
| UniqueID | GroupID | Title       |
+----------+---------+-------------+
| 1        | 1       | TEST 1      |
| 2        | 1       | TEST 2      |
| 3        | 3       | TEST 3      |
| 4        | 3       | TEST 4      |
| 5        | 5       | TEST 5      |
| 6        | 6       | TEST 6      |
| 7        | 6       | TEST 7      |
| 8        | 6       | TEST 8      |
+----------+---------+-------------+
```

Is it possible to select, for each GroupID, the row with the highest UniqueID number? So according to the table above, if I ran the query I would expect this...

```
+----------+---------+-------------+
| UniqueID | GroupID | Title       |
+----------+---------+-------------+
| 2        | 1       | TEST 2      |
| 4        | 3       | TEST 4      |
| 5        | 5       | TEST 5      |
| 8        | 6       | TEST 8      |
+----------+---------+-------------+
```

Been chomping on this for a while, but can't seem to crack it.

Many thanks,
With SQL Server as RDBMS you can use a ranking function like `ROW_NUMBER`:

```
WITH CTE AS
(
   SELECT UniqueID, GroupID, Title,
          RN = ROW_NUMBER() OVER (PARTITION BY GroupID ORDER BY UniqueID DESC)
   FROM dbo.TableName
)
SELECT UniqueID, GroupID, Title
FROM CTE
WHERE RN = 1
```

This returns exactly one record for each `GroupID`, even if there are multiple rows with the highest `UniqueID` (although the column name suggests that should not happen). If you want to return all tied rows, use `DENSE_RANK` instead of `ROW_NUMBER`.

Here you can see all ranking functions and how they work: <http://technet.microsoft.com/en-us/library/ms189798.aspx>
```
SELECT *
FROM   (SELECT uniqueid,
               groupid,
               title,
               Row_number() OVER ( partition BY groupid ORDER BY uniqueid DESC) AS rn
        FROM   table) a
WHERE  a.rn = 1
```
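For a quick way to verify the expected output, here is the classic correlated-subquery formulation of the same greatest-n-per-group problem, runnable with Python's built-in `sqlite3` (data copied from the question; the table name `t` is made up for the demo). It keeps each row whose `UniqueID` is the maximum within its own `GroupID`:

```python
import sqlite3

# The question's sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (UniqueID INTEGER, GroupID INTEGER, Title TEXT);
INSERT INTO t VALUES (1,1,'TEST 1'),(2,1,'TEST 2'),(3,3,'TEST 3'),
 (4,3,'TEST 4'),(5,5,'TEST 5'),(6,6,'TEST 6'),(7,6,'TEST 7'),(8,6,'TEST 8');
""")

# Keep each row whose UniqueID is the maximum within its own GroupID.
rows = conn.execute("""
SELECT UniqueID, GroupID, Title
FROM t
WHERE UniqueID = (SELECT MAX(UniqueID) FROM t t2 WHERE t2.GroupID = t.GroupID)
ORDER BY GroupID
""").fetchall()
print(rows)
# [(2, 1, 'TEST 2'), (4, 3, 'TEST 4'), (5, 5, 'TEST 5'), (8, 6, 'TEST 8')]
```

Unlike `ROW_NUMBER`, this form returns every tied row if two rows in a group were ever to share the maximum `UniqueID`.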
Selecting row with highest ID based on another column
[ "sql", "sql-server-2008", "greatest-n-per-group" ]
On occasion, I'm interested in getting a list of columns in one of the tables or views in my SQL Server 2008 R2 database. It's useful, for example, if you're building database documentation without using an expensive off-the-shelf product. What's an easy way to get this information?
In SQL Server 2008 R2 (among other versions), there are system views provided automatically with every database. As long as you are connected to the database where your table resides, you can run a query like this:

```
DECLARE @TableViewName NVARCHAR(128)
SET @TableViewName=N'MyTableName'

SELECT b.name AS ColumnName,
       c.name AS DataType,
       b.max_length AS Length,
       c.Precision,
       c.Scale,
       d.value AS Description
FROM sys.all_objects a
INNER JOIN sys.all_columns b ON a.object_id=b.object_id
INNER JOIN sys.types c ON b.user_type_id=c.user_type_id
LEFT JOIN sys.extended_properties d ON a.object_id=d.major_id
     AND b.column_id=d.minor_id
     AND d.name='MS_Description'
WHERE a.Name=@TableViewName
  AND a.type IN ('U','V')
```

Of course, this is just a starting point. There are many other system views and columns available in every database. You can find them through SQL Server Management Studio under `Views > System Views`.
Another way is querying the INFORMATION\_SCHEMA.columns view as detailed here: [Information\_Schema - COLUMNS](http://technet.microsoft.com/en-us/library/ms188348.aspx)

This will give you information for all the columns in the current database (and which table/view they belong to), including their data types, precision, collation, and whether they allow nulls, etc.

Usefully, these views are maintained in multiple DBMS programs too, so you could potentially use the same or a similar query to get the same information about a MySQL database as you can for a SQL Server DB, which could be useful if you are developing on multiple platforms.
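The catalog views above are SQL Server specific; as a point of comparison, SQLite exposes the same kind of column metadata through `PRAGMA table_info`. A minimal runnable sketch with Python's built-in `sqlite3` (the `hotel` table is just an example):

```python
import sqlite3

# Example table; SQLite's PRAGMA table_info plays the role that the sys.*
# catalog views (or INFORMATION_SCHEMA.columns) play in SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hotel (hotel_id INTEGER PRIMARY KEY, hotel_policy TEXT)"
)

# Each returned row is (cid, name, type, notnull, dflt_value, pk).
cols = conn.execute("PRAGMA table_info(hotel)").fetchall()
names = [c[1] for c in cols]
types = [c[2] for c in cols]
print(names)  # ['hotel_id', 'hotel_policy']
print(types)  # ['INTEGER', 'TEXT']
```

Looping such a query over the rows of the engine's table catalog is the core of the "cheap database documentation" idea the question describes.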
How do I get a list of columns in a table or view?
[ "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2" ]