Consider a table `table2`; I would like to add a `trigger` on this table's `update`, like so:
```
select Num into num from table1 where ID=new.id;
BEGIN
DECLARE done int default false;
DECLARE cur1 CURSOR FOR select EmailId from users where Num=num;
DECLARE continue handler for not found set done = true;
OPEN cur1;
my_loop: loop
fetch cur1 into email_id;
if done then
leave my_loop;
end if;
-- send mail to all email IDs here
end loop my_loop;
close cur1;
END;
```
Is there any simple way to fill in the commented part, to send an email to every email ID retrieved from table `users`?
I am using MySQL in XAMPP (phpMyAdmin).
|
I'm against that solution. Putting the load of sending e-mails on your database server could end very badly.
In my opinion, what you should do is create a table that queues the emails you want to send, and have a separate process that checks for queued emails and sends them.
|
As everyone said, making your database deal with e-mail is a bad idea. Making your database write an external file is another bad idea, since I/O operations are time-consuming compared to database operations.
There are several ways to deal with your issue; one that I like: have a table to queue up your notifications/emails, and a client program to dispatch them.
Before you continue, make sure you have an e-mail account or service that you can use to send mail. It could be SMTP or any other service.
**If you are on .NET you can copy this; if not, the same idea can be rewritten for your platform.**
Create a table in your DB
```
CREATE TABLE `tbl_system_mail_pickup` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`mFrom` varchar(128) NOT NULL DEFAULT 'noreplyccms@YourDomain.co.uk',
`mTo` varchar(128) NOT NULL,
`mCC` varchar(128) DEFAULT NULL,
`mBCC` varchar(128) DEFAULT NULL,
`mSubject` varchar(254) DEFAULT NULL,
`mBody` longtext,
`added_by` varchar(36) DEFAULT NULL,
`updated_by` varchar(36) DEFAULT NULL,
`added_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`email_sent` bit(1) DEFAULT b'0',
`send_tried` int(11) DEFAULT '0',
`send_result` text,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
This table will hold all your notifications and emails. Your table trigger or event, instead of trying to send an email itself, adds a record to the table above. The client then picks up the queued emails and dispatches them at an interval you specify.
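The trigger-fills-the-queue part can be sketched end to end. Below is a minimal, runnable illustration using SQLite in Python (the answer targets MySQL; the table and column names here are simplified stand-ins, not the schema above):

```python
import sqlite3

# Minimal sketch: an UPDATE trigger queues one mail row per matching user
# instead of sending anything itself. Table/column names are simplified
# stand-ins for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, num INTEGER);
CREATE TABLE mail_pickup (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    m_to TEXT NOT NULL,
    m_subject TEXT,
    email_sent INTEGER DEFAULT 0
);
CREATE TRIGGER queue_mail AFTER UPDATE ON users
BEGIN
    INSERT INTO mail_pickup (m_to, m_subject)
    SELECT email, 'Record ' || NEW.id || ' was updated'
    FROM users WHERE num = NEW.num;
END;
""")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 7), (2, 'b@example.com', 7)")
conn.execute("UPDATE users SET num = 7 WHERE id = 1")  # fires the trigger
queued = conn.execute("SELECT m_to FROM mail_pickup ORDER BY m_to").fetchall()
print(queued)
```

A separate dispatcher process then polls the pickup table and sends whatever has `email_sent = 0`.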
Here is a sample console app in .NET, just to give you an idea of what the client might look like. You can modify it to your needs. (I use Entity Framework as it's much easier and simpler.)
1. Create new Console Project in Visual Studio.
2. In the Package Manager Console, install the following packages (`Install-Package EntityFramework`, `Install-Package MySql.Data.Entity`)
3. Add a reference to System.Configuration
4. Now open the "App.config" file and add the connection string for your MySQL database.
5. In the same App.config file, add your SMTP email settings.
**Check your port and SSL options.**
```
<system.net>
<mailSettings>
<smtp from="Noreplyccms@youractualEmailAccount.co.uk">
<network host="smtp.office365.com(your host)" port="587" enableSsl="true" password="dingDong" userName="YourUsername@YourDomain" />
</smtp>
</mailSettings>
</system.net>
```
6. Create new Folder called Models in your project tree.
7. Add new item -> ADO.NET Entity Data Model -> EF Designer from database
8. Select your connection string -> tick "Save connection settings in App.Config" -> give it a name (this becomes your context name).
9. Next - > Select only the above table from your database and complete the wizard.
Now you have a connection to your database and your email configuration in place. It's time to read the emails and dispatch them.
Here is a full program to illustrate the concept; you can adapt the idea to a Windows or web application as well.
This console application checks every minute for new email records and dispatches them. You can extend the table above and this script to support attachments.
```
using System;
using System.Linq;
using System.Timers;
using System.Data;
using System.Net.Mail;
using ccms_email_sender_console.Models;
namespace ccms_email_sender_console
{
class Program
{
static Timer aTimer = new Timer();
static void Main(string[] args)
{
aTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);// add event to timer
aTimer.Interval = 60000; // timer interval
aTimer.Enabled = true;
Console.WriteLine("Press \'q\' to exit");
while (Console.Read() != 'q');
}
private static void OnTimedEvent(object source, ElapsedEventArgs e)
{
var db_ccms_common = new DB_CCMS_COMMON(); // the database name you entered when creating ADO.NET Entity model
Console.WriteLine(DateTime.Now+"> Checking server for emails:");
aTimer.Stop(); // stop the timer until we process the current queue
var query = from T1 in db_ccms_common.tbl_system_mail_pickup where T1.email_sent == false select T1; // select everything that hasn't been sent or marked as sent.
try
{
foreach (var mRow in query)
{
MailMessage mail = new MailMessage();
mail.To.Add(mRow.mTo.ToString());
if (mRow.mCC != null && !mRow.mCC.ToString().Equals(""))
{
mail.CC.Add(mRow.mCC.ToString());
}
if (mRow.mBCC != null && !mRow.mBCC.ToString().Equals(""))
{
mail.Bcc.Add(mRow.mBCC.ToString());
}
mail.From = new MailAddress(mRow.mFrom.ToString());
mail.Subject = mRow.mSubject.ToString();
mail.Body = mRow.mBody.ToString();
mail.IsBodyHtml = true;
SmtpClient smtp = new SmtpClient();
smtp.Send(mail);
mRow.email_sent = true;
Console.WriteLine(DateTime.Now + "> email sent to: " + mRow.mTo.ToString());
}
db_ccms_common.SaveChanges(); // mark all sent emails as sent. or you can also delete the record.
aTimer.Start(); // start the timer for next batch
}
catch (Exception ex)
{
Console.WriteLine(DateTime.Now + "> Error:" + ex.Message.ToString());
// you can restart the timer here if you want
}
}
}
}
```
To test: run the above code and insert a record into your `tbl_system_mail_pickup` table. Within about a minute you will receive an email at the recipient address you specified.
Hope this helps someone.
|
Send email from MySQL trigger when a table updated
|
[
"",
"mysql",
"sql",
"email",
"triggers",
"sendmail",
""
] |
I have a table `tags`, and I want to count, from table `questions`, all rows whose `tags` column contains (`LIKE`) the tag name from the `tags` table.
I mean, combine these two queries:
```
SELECT * FROM tags WHERE 1 SELECT COUNT(*) FROM `questions` WHERE `tags`
LIKE '%(the tag column from tags table)%'
```
|
```
SELECT t.name, COUNT(*)
FROM tags t
JOIN questions q ON t.tag_id = q.tag_id
WHERE t.name LIKE '%(the tag column from tags table)%'
GROUP BY t.name
```
If you are looking for `Batches of SQL Statements`,
then you can do something like this by placing a `;` at the end of the first query:
```
SELECT * FROM `tags` WHERE 1 ;
SELECT COUNT(*) FROM `questions` WHERE `tags`
LIKE '%(the tag column from tags table)%'
```
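Since the question matches tags by substring, here is a runnable sketch of the join-plus-count idea, using SQLite in Python purely for illustration (the sample rows and the `LIKE`-based join condition are assumptions based on the question, not a real schema):

```python
import sqlite3

# Hedged sketch: count, per tag, the questions whose free-text tags column
# contains the tag name. Schema and rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tags (name TEXT);
CREATE TABLE questions (title TEXT, tags TEXT);
INSERT INTO tags VALUES ('mysql'), ('sql');
INSERT INTO questions VALUES ('q1', 'mysql,sql'), ('q2', 'sql'), ('q3', 'php');
""")
rows = conn.execute("""
    SELECT t.name, COUNT(q.title)
    FROM tags t LEFT JOIN questions q ON q.tags LIKE '%' || t.name || '%'
    GROUP BY t.name
    ORDER BY t.name
""").fetchall()
print(rows)
```

In a real schema, a proper `tag_id` foreign key (as in the first query above) is far cheaper than a `LIKE` join.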
|
You must have at least one column in common; use it to relate the tables:
```
SELECT t.name, COUNT(*) FROM questions q, tags t
WHERE q.tags LIKE t.name AND q.tag = t.name;
```
|
How to query two tables in one query
|
[
"",
"sql",
""
] |
I am currently making use of the code below:
```
ALTER PROCEDURE [dbo].[sp_TidyBasket]
@ReferenceNumber VARCHAR
AS
BEGIN
BEGIN TRANSACTION
DECLARE
@GUID VARCHAR,
@ErrorCode INT
--Get the First Reference Number of a basket item being duplicated
SET @GUID = (SELECT TOP 1 MIN(idx6)
FROM iwfAccountOpening
WHERE Idx29 = @ReferenceNumber
GROUP BY Idx37
HAVING COUNT(*) > 1)
--Executes a while loop whilst there is duplicates to be removed
WHILE (@GUID IS NOT NULL)
BEGIN
DELETE FROM iwfAccountOpening WHERE Idx6 = @GUID;
END
--Rollbacks transactions when any errors occur
SELECT @ErrorCode = @@ERROR
IF (@ErrorCode <> 0) GOTO PROBLEM
COMMIT TRANSACTION
PROBLEM:
IF (@ErrorCode <> 0) BEGIN
ROLLBACK TRANSACTION
END
END
```
I want it to loop through and delete duplicates (whilst retaining one row of the duplicated data) according to a defined value passed to it.
Currently this shows no changes on my database table. I understand it is probably due to the `@GUID` value I am setting (it is never re-assigned inside the loop), but I have no idea how else to approach this.
This is not a duplicate of the previously mentioned question, as that one focuses on deleting duplicates created by user-placed constraints.
|
Try with this:
```
ALTER PROCEDURE [dbo].[sp_TidyBasket]
@ReferenceNumber VARCHAR
AS
BEGIN TRY
BEGIN TRANSACTION;
WITH cte
AS ( SELECT idx6 ,
ROW_NUMBER() OVER ( PARTITION BY Idx37 ORDER BY ( SELECT ( 1 ) ) ) AS RN
FROM iwfAccountOpening
WHERE Idx29 = @ReferenceNumber
)
DELETE i
FROM cte c
JOIN iwfAccountOpening i ON c.idx6 = i.idx6
WHERE Idx29 = @ReferenceNumber AND RN > 1
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
END CATCH
GO
```
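The same keep-one-row deletion can be sketched in runnable form. Below is an illustration with SQLite in Python (SQLite also has `ROW_NUMBER()`, though its DELETE syntax differs from T-SQL; the table and rows are invented stand-ins for `iwfAccountOpening`):

```python
import sqlite3

# Sketch of keep-one-row duplicate removal: for one reference number (idx29),
# delete every row whose ROW_NUMBER within its idx37 group is > 1.
# Table name and data are simplified assumptions, not the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE basket (idx6 INTEGER PRIMARY KEY, idx29 TEXT, idx37 TEXT);
INSERT INTO basket VALUES
    (1, 'REF1', 'A'), (2, 'REF1', 'A'), (3, 'REF1', 'A'),
    (4, 'REF1', 'B'), (5, 'REF2', 'A');
""")
conn.execute("""
    DELETE FROM basket WHERE idx6 IN (
        SELECT idx6 FROM (
            SELECT idx6,
                   ROW_NUMBER() OVER (PARTITION BY idx37 ORDER BY idx6) AS rn
            FROM basket WHERE idx29 = 'REF1'
        ) WHERE rn > 1
    )
""")
remaining = conn.execute("SELECT idx6 FROM basket ORDER BY idx6").fetchall()
print(remaining)  # one row per (idx29='REF1', idx37) group survives; REF2 untouched
```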
|
Try this. No loop required.
```
DELETE FROM iwfAccountOpening WHERE Idx6 in (SELECT MIN(idx6)
FROM iwfAccountOpening
WHERE Idx29 = @ReferenceNumber
GROUP BY Idx37
HAVING COUNT(*) > 1) and Idx29 = @ReferenceNumber
```
|
User Defined Value for Deleting Duplicates in Stored Procedure
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
Is there a way to GROUP BY part of a string?
I wanted to create an SQLFiddle, but they seem to be having server problems, so I have to show everything here.
This is the data:
```
CREATE TABLE tblArticle
(
id int auto_increment primary key,
art_id varchar(20),
art_name varchar(30),
art_color varchar(30),
art_type varchar(5)
);
INSERT INTO tblArticle
(art_id, art_name, art_color, art_type)
VALUES
('12345-1','Textile', 'Black','MAT'),
('12345-2','Textile', 'Red','MAT'),
('12345-3','Textile', 'Green','MAT'),
('12345-4','Textile', 'Blue','MAT'),
('54321-1','Textile', 'Black','MAT'),
('54321-2','Textile', 'Red','MAT'),
('54321-3','Textile', 'Green','MAT'),
('54321-4','Textile', 'Blue','MAT');
```
So I get something like:
```
| id | art_id | art_name | art_color | art_type |
----------------------------------------------------------
| 1 | 12345-1 | Textile | Black | MAT |
| 2 | 12345-2 | Textile | Red | MAT |
| 3 | 12345-3 | Textile | Green | MAT |
| 4 | 12345-4 | Textile | Blue | MAT |
| 5 | 54321-1 | Textile | Black | MAT |
| 6 | 54321-2 | Textile | Red | MAT |
| 7 | 54321-3 | Textile | Green | MAT |
| 8 | 54321-4 | Textile | Blue | MAT |
| 9 | 9876543 | Textile | White | MAT |
----------------------------------------------------------
```
My select looks like this:
```
Select art_id, art_name FROM tblArticle WHERE art_type = 'MAT' GROUP BY art_name
```
What I need is the `art_id` (it doesn't matter whether it's the one with -1 or -2 and so on) and the `art_name`, for further queries.
As you can see, I have two different `art_id` groups, and I want to group by them.
So I would get two groups, 12345 and 54321, but I don't even know how to start.
Expected result:
```
12345, Textile
54321, Textile
9876543, Textile
```
I tried adding `art_id` to my GROUP BY, but that has the same effect as not using GROUP BY at all.
What can I do to achieve this?
**SOLVED like:**
```
SELECT DISTINCT @IF( @SCAN( art_id, '-' ) +1,
@LEFT( art_id, @SCAN( art_id, '-')),
art_id) AS art_id, art_name
FROM tblArticle
WHERE art_type LIKE '%MAT%';
```
In this case the `DISTINCT` has the same effect as a `GROUP BY`.
The `+1` is used to get 0 if the scan could not find anything; it actually returns -1 when there is no match, but IF needs 0 (false) or 1+ (true), and there will never be a `-` in first position in my case.
*Could not use GROUP BY directly because it only accepts an integer or a column.*
|
In order to group, you must specify which part of the string to group by. In this case it's the first 5 characters, so you would use `LEFT(art_id,5)`. This must be matched in the `SELECT` statement, so you will need to modify it to read the same.
I have aliased the column as `art_id`, as it would otherwise be unnamed:
```
SELECT LEFT(art_id,5) AS art_id, art_name
FROM tblArticle
WHERE art_type = 'MAT'
GROUP BY LEFT(art_id,5), art_name
```
The only thing that could be a problem is if you begin to have IDs longer than 5 characters. In that case you will need `@FIND` to look for the dash and take everything to the left of it. This would fail when there is no dash, as `@FIND` returns -1 if no match is found, so we have to use an `@IF` statement to compensate.
In which case I would write:
```
SELECT @IF(art_id LIKE '%-%'
,@LEFT(art_id, @FIND('-', art_id, 0) - 1)
,art_id
) AS art_id, art_name
FROM tblArticle
WHERE art_type = 'MAT'
GROUP BY @IF(art_id LIKE '%-%'
,@LEFT(art_id, @FIND('-', art_id, 0) - 1)
,art_id
), art_name
```
The @ symbols are necessary (at least I think they are; try without them if it doesn't work). I've not used SQLBase before, so I'm using the following official documentation as a guide:
[GUPTA SQLBase - SQL Language Reference](http://support.guptatechnologies.com/Docs/SQLBase/Books/sqllang/index.htm#sqllang_sql_elements.htm)
|
You could try this; it will work out where the dash is and group properly:
```
Select case when Instr(art_id, '-') = 0
then art_id
else Left(art_id, Instr(art_id, '-') - 1) end, art_name
FROM tblArticle
WHERE art_type = 'MAT'
GROUP BY case when Instr(art_id, '-') = 0
then art_id
else Left(art_id, Instr(art_id, '-') - 1) end
,art_name
```
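For a runnable check of this dash-splitting approach, here is a sketch using SQLite's `instr()`/`substr()` in Python (SQLite is only a stand-in, since SQLBase's `@SCAN`/`@LEFT` are not available outside SQLBase; the sample rows mirror the question's data):

```python
import sqlite3

# Group by the prefix before the dash, falling back to the whole id when
# there is no dash. SQLite stands in for SQLBase here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblArticle (art_id TEXT, art_name TEXT, art_type TEXT);
INSERT INTO tblArticle VALUES
    ('12345-1', 'Textile', 'MAT'), ('12345-2', 'Textile', 'MAT'),
    ('54321-1', 'Textile', 'MAT'), ('9876543', 'Textile', 'MAT');
""")
rows = conn.execute("""
    SELECT CASE WHEN instr(art_id, '-') = 0 THEN art_id
                ELSE substr(art_id, 1, instr(art_id, '-') - 1) END AS art_id,
           art_name
    FROM tblArticle
    WHERE art_type = 'MAT'
    GROUP BY CASE WHEN instr(art_id, '-') = 0 THEN art_id
                  ELSE substr(art_id, 1, instr(art_id, '-') - 1) END,
             art_name
    ORDER BY 1
""").fetchall()
print(rows)
```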
|
SQL Select GROUP BY part of field if has a special char
|
[
"",
"sql",
"sqlbase",
""
] |
I have this table in my database:
```
╔════╦═══════╦═══════╗
║ id ║ Name  ║ Price ║
╠════╬═══════╬═══════╣
║ 1  ║ ciao  ║ 123   ║
║ 2  ║ ciao  ║ 55    ║
║ 3  ║ bye   ║ 43    ║
║ 4  ║ hello ║ 12    ║
║ 5  ║ ciao  ║ 1     ║
║ 6  ║ ciao  ║ 77    ║
╚════╩═══════╩═══════╝
```
...and I would like to create a new view displaying two columns:
1) "ciao" or "not ciao"
2) how many rows in the previous table are "ciao" or "not ciao", something like this:
```
╔════╦══════════╦═══════╗
║ id ║ Name     ║ Count ║
╠════╬══════════╬═══════╣
║ 1  ║ ciao     ║ 4     ║
║ 2  ║ not ciao ║ 2     ║
╚════╩══════════╩═══════╝
```
I'm trying to find a solution, but I'm not able to group the "not ciao" values:
```
SELECT Name, COUNT(*)
FROM mytable
WHERE Name = "Ciao"
GROUP BY Name
```
|
This might work (not tested):
```
SELECT CASE
WHEN name = 'ciao' THEN 'ciao'
ELSE 'not ciao'
END as NAME,
COUNT(*)
FROM table
GROUP BY CASE
WHEN name = 'ciao' THEN 'ciao'
ELSE 'not ciao'
END
```
Or, in MySQL:
```
SELECT CASE
WHEN name = 'ciao' THEN 'ciao'
ELSE 'not ciao'
END as NAME1,
COUNT(*)
FROM table
GROUP BY NAME1
```
|
You need to map everything except `'ciao'` to `'not ciao'`, like this:
```
SELECT CASE WHEN Name!='ciao' THEN 'not ciao' ELSE Name END as Name, COUNT(*)
FROM mytable
GROUP BY CASE WHEN Name!='ciao' THEN 'not ciao' ELSE Name END
```
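A quick runnable check of this CASE-based grouping, using SQLite in Python with the question's sample data:

```python
import sqlite3

# Map everything except 'ciao' to 'not ciao', then group and count.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO mytable (name) VALUES
    ('ciao'), ('ciao'), ('bye'), ('hello'), ('ciao'), ('ciao');
""")
rows = conn.execute("""
    SELECT CASE WHEN name != 'ciao' THEN 'not ciao' ELSE name END AS name,
           COUNT(*)
    FROM mytable
    GROUP BY CASE WHEN name != 'ciao' THEN 'not ciao' ELSE name END
    ORDER BY 2 DESC
""").fetchall()
print(rows)  # [('ciao', 4), ('not ciao', 2)]
```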
|
SQL Query using GROUP BY Statment
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I have tables that look somewhat like this:
```
Table name: products
ID PRODUCT_ID TYPE PRICE ...
1 111 computer xxx
2 222 book xx
3 333 computer xxxx
----------------------------------------------------------------
Table name: products_computer
ID PRODUCT_ID CPU RAM ....
1 111 amd 16
2 333 intel 8
----------------------------------------------------------------
Table name: products_book
ID PRODUCT_ID AUTHOR YEAR_PUBLISHED ....
1 222 Stephen King xxxx
```
As you can see, table `products` lists every product with the columns they have in common (like the price), while the other tables contain information specific to each product type.
Now let's say we want to list every product, independent of its type, ordered by price, and also include the specific information from the `products_*` tables in the query, so we don't have to make a second query just to retrieve that information.
Since I hardly ever use databases and SQL, my inexperience ran into various pitfalls: trying to concatenate strings to build the `products_*` table name for a `JOIN`, or making a "middle table" that maps the `products.TYPE` value to the actual table name, before finding in the documentation that you simply cannot bind a column/row value as a table name.
Is there any "trick" to make this work (having everything in one query), or do I really have to make a second query manually in my code?
|
If I understood you correctly, you need all columns from the `products` table and their corresponding information (if any) from the `products_computer` and `products_book` tables.
If that is the scenario, then I think all you need here is multiple `LEFT JOIN` statements.
**SQL Statements**
```
SELECT products.product_id, type , price, cpu, ram, author, year_published
FROM products
LEFT JOIN products_computer ON products.product_id = products_computer.product_id
LEFT JOIN products_book ON products.product_id = products_book.product_id
ORDER BY price
```
**Output** *(screenshot omitted)*
|
This is possible with a [`UNION`](https://www.sqlite.org/lang_select.html#compound) statement.
```
SELECT
products.ID AS ID, products.PRODUCT_ID AS PRODUCT_ID, products.PRICE AS PRICE, products_computer.CPU AS CPU, products_computer.RAM AS RAM,
null as AUTHOR, null as YEAR_PUBLISHED
FROM products_computer
JOIN products ON products_computer.PRODUCT_ID = products.PRODUCT_ID
UNION
SELECT
products.ID AS ID, products.PRODUCT_ID AS PRODUCT_ID, products.PRICE AS PRICE, null as CPU, null as RAM,
products_book.AUTHOR AS AUTHOR, products_book.YEAR_PUBLISHED AS YEAR_PUBLISHED
FROM products_book
JOIN products ON products_book.PRODUCT_ID = products.PRODUCT_ID
ORDER BY PRICE
```
The two separate queries are joined into one larger query. For this to work, the columns selected in each of the `SELECT` statements need to be the same. Note how I've `SELECT`ed null values for the columns in the *other* table.
Each of the individual `SELECT`s also joins back to the `products` based on the common PRODUCT\_ID column. Price is included, and there's an [ORDER BY](https://www.sqlite.org/lang_select.html#orderby) statement at the end to sort by PRICE.
This is the output from the query: *(screenshot omitted)*
As per @Zero's comment, it's possible to store a query as a view. As a one-off operation, execute the query as a view definition:
```
CREATE VIEW vw_products_all AS
SELECT products.ID AS ID,
products.PRODUCT_ID AS PRODUCT_ID,
products.PRICE AS PRICE,
products_computer.CPU AS CPU,
products_computer.RAM AS RAM,
NULL AS AUTHOR,
NULL AS YEAR_PUBLISHED
FROM products_computer
JOIN products
ON products_computer.PRODUCT_ID = products.PRODUCT_ID
UNION
SELECT products.ID AS ID,
products.PRODUCT_ID AS PRODUCT_ID,
products.PRICE AS PRICE,
NULL AS CPU,
NULL AS RAM,
products_book.AUTHOR AS AUTHOR,
products_book.YEAR_PUBLISHED AS YEAR_PUBLISHED
FROM products_book
JOIN products
ON products_book.PRODUCT_ID = products.PRODUCT_ID
ORDER BY PRICE;
```
... after which the data may be accessed via:
```
SELECT * FROM vw_products_all
```
... or, more explicitly (good practice):
```
SELECT ID,
PRODUCT_ID,
PRICE,
CPU,
RAM,
AUTHOR,
YEAR_PUBLISHED
FROM vw_products_all
```
The output is the same as the original query's output.
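A runnable sketch of the whole UNION approach (SQLite, matching the question's tag; the rows here are a small invented stand-in for the tables above):

```python
import sqlite3

# Each branch joins a type-specific table back to products and pads the
# other type's columns with NULL; the compound result is sorted by price.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER, product_id INTEGER, type TEXT, price INTEGER);
CREATE TABLE products_computer (id INTEGER, product_id INTEGER, cpu TEXT, ram INTEGER);
CREATE TABLE products_book (id INTEGER, product_id INTEGER, author TEXT);
INSERT INTO products VALUES
    (1, 111, 'computer', 900), (2, 222, 'book', 20), (3, 333, 'computer', 1400);
INSERT INTO products_computer VALUES (1, 111, 'amd', 16), (2, 333, 'intel', 8);
INSERT INTO products_book VALUES (1, 222, 'Stephen King');
""")
rows = conn.execute("""
    SELECT p.product_id, p.price, c.cpu, c.ram, NULL AS author
    FROM products_computer c JOIN products p ON c.product_id = p.product_id
    UNION
    SELECT p.product_id, p.price, NULL, NULL, b.author
    FROM products_book b JOIN products p ON b.product_id = p.product_id
    ORDER BY 2
""").fetchall()
print(rows)
```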
|
Compute table name to join from row value
|
[
"",
"sql",
"sqlite",
""
] |
I am pulling my hair out trying to figure out what I am doing wrong.
The table is very simple:
```
CREATE TABLE `icd_index` (
`icd` char(5) NOT NULL,
`core_id` int(11) NOT NULL,
`dx_order` tinyint(4) NOT NULL,
PRIMARY KEY (`icd`,`dx_order`,`core_id`),
KEY `core` (`core_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```
As you can see, I've created a covering index that spans all three columns of the table, plus an additional index on the `core_id` for potential joins. This is a one-to-many link table with each `core_id` mapped to one or more `icd`. The table contains 65 million rows.
So, here's the issue. Let's say I want to know how many people have an icd code of '25000'. [That's diabetes, in case you were wondering]. I write a query that looks like this:
```
SELECT COUNT(core_id) FROM icd_index WHERE icd='25000'
```
This takes over 60 seconds to execute. I had thought that since the icd column was first in the covered index, counting it would be fast.
More confusing: once I've run the query, it then runs very quickly. I assumed that was because the query was cached, but even if I `RESET QUERY CACHE`, it still runs in a fraction of a second. Yet if I wait long enough, it seems to slow down again, and I can't figure out why.
I must be missing something obvious. Do I need an index on `icd` alone? Is this the best performance I can get with 65M rows? Why does running the query and then resetting the cache affect speed? Are the results being stored in the index?
EDIT: I'm running MySQL 5.6 (in case that matters).
Here's the `EXPLAIN` of the query:
```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE icd_index ref PRIMARY PRIMARY 15 const 910104 Using where; Using index
```
|
Here's what is going on. The query
```
SELECT COUNT(core_id) FROM icd_index WHERE icd = '25000'
```
will use the index, which is a BTree separate from the data. But it scans it in this way:
1. Find the first entry having icd='25000'. This is almost instantaneous.
2. Scan forward until it finds a change in `icd`. This scans the index only, not touching the data. According to the EXPLAIN, there will be about 910,104 index entries to scan over.
Now let's look at the BTree for that index. Based on the fields in the index, each row will be exactly 22 bytes, plus some overhead (estimate 40%). A MyISAM index block is 1KB (cf. InnoDB's 16KB), so I would estimate 33 rows per block. 910,104 / 33 says about 27K blocks need to be read to do the COUNT. (Note that `COUNT(core_id)` needs to check `core_id` for being NULL, while `COUNT(*)` does not; this is a minor difference.) Reading 27K blocks on a plain hard drive takes about 270 seconds. You were lucky to get it done in 60.
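The block-count arithmetic can be re-checked in a few lines (the 40% BTree overhead and the ~100 random reads per second of a spinning disk are this answer's own ballpark assumptions):

```python
# Re-check of the index-scan arithmetic; the 40% BTree overhead and ~100
# random reads/second are ballpark assumptions from the text above.
row_bytes = 22                            # 15 (utf8 CHAR(5)) + 4 (INT) + 1 (TINYINT) + ~2
effective = row_bytes * 1.4               # ~40% BTree overhead
rows_per_block = int(1024 // effective)   # 1KB MyISAM index block
blocks = 910_104 // rows_per_block        # index entries, from the EXPLAIN
seconds = blocks / 100                    # ~100 random reads/sec on a spinning disk
print(rows_per_block, blocks, round(seconds))
```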
The second run found all those blocks in the key_buffer (assuming `key_buffer_size` is at least 27MB), so it did not have to wait for the disk; hence it was much faster. (This ignores the query cache, which you had the wisdom to flush or to bypass with `SQL_NO_CACHE`.)
MySQL 5.6 happens to be irrelevant (but thanks for mentioning it), since this process has not changed since 4.0 or before (except that utf8 did not exist back then; more on that below).
Switching to InnoDB would help in a couple of ways. The PRIMARY KEY would be 'clustered' with the data, not stored as a separate BTree. Hence, once the data or the PK is cached, the other is immediately available. The number of blocks would be more like 5K, but they would be 16KB blocks. These might be loadable faster if the cache is cold.
You ask "Do I need an index on `icd` alone?" Well, that would shrink the MyISAM BTree to about 21 bytes per row, so the BTree would be about 21/27ths of the size, not much of an improvement (at least for the cold-cache situation).
Another thought: *if* `icd` is always numeric, use `MEDIUMINT UNSIGNED`, and tack on `ZEROFILL` if it can have leading zeros.
Oops, I failed to notice the CHARACTER SET. (I have fixed the numbers above, but let me elaborate.)
* CHAR(5) allows for 5 *characters*.
* ascii takes 1 *byte* per *character*.
* utf8 takes up to 3 *bytes* per *characters*.
* So, CHAR(5) CHARACTER SET utf8 takes 15 *bytes* *always*.
Changing the column to `CHAR(5) CHARACTER SET ascii` would shrink it to 5 bytes.
Changing it to MEDIUMINT UNSIGNED ZEROFILL would shrink it to 3 bytes.
Shrinking the data would speed up I/O by a roughly proportional amount (after allowing another 6 bytes for the other two fields).
|
Thanks to all above for your help. Given the above advice, I totally rebuilt the database like so:
1. I convinced the server admin to increase my RAM to 6G.
2. I switched all tables to InnoDB with an ASCII character set.
3. When I moved the data from MyISAM to InnoDB, I sorted all of the data in the order of the covering index before inserting it into the new table, so the new table is completely sorted correctly. No idea if this really helps, but it seemed like it couldn't hurt.
4. I tinkered with the DB settings, specifically the InnoDB Buffer Pool size and increased it to 256M.
Holy mother of God, it's really fast now. The simple count query above now runs in less than 2 seconds. I'm not sure which of the above was the most effective (though the query was already fast before the buffer pool size increase).
|
MySQL MyISAM slow count() query despite covering index
|
[
"",
"mysql",
"sql",
""
] |
**EDIT**: After looking at some of the answers here and hours of research, my team came to the conclusion that there was most likely no way to optimize this further than the 4.5 seconds we were able to achieve (except maybe by partitioning `offers_clicks`, but that would have some ugly side-effects). Eventually, after lots of brainstorming, we decided to split the two queries, create two sets of user ids (one from the `users` table and one from `offers_clicks`), and compare them with `set` in Python. The set of ids from the `users` table is still pulled from SQL, but we moved `offers_clicks` to Lucene and added some caching on top of it, so that's where the other set of ids now comes from. The end result is that it's down to about half a second with cache and 0.9s without.
Start of original post: I am having trouble getting a query optimized. The first version of the query is fine, but the moment `offers_clicks` is joined in the second query, it becomes rather slow. The `users` table contains 10 million rows; `offers_clicks` contains 53 million.
Acceptable performance:
```
SELECT count(distinct(users.id)) AS count_1
FROM users USE index (country_2)
WHERE users.country = 'US'
AND users.last_active > '2015-02-26';
1 row in set (0.35 sec)
```
Bad:
```
SELECT count(distinct(users.id)) AS count_1
FROM offers_clicks USE index (user_id_3), users USE index (country_2)
WHERE users.country = 'US'
AND users.last_active > '2015-02-26'
AND offers_clicks.user_id = users.id
AND offers_clicks.date > '2015-02-14'
AND offers_clicks.ranking_score < 3.49
AND offers_clicks.ranking_score > 0.24;
1 row in set (7.39 sec)
```
Here's how it looks without specifying any indexes (even worse):
```
SELECT count(distinct(users.id)) AS count_1
FROM offers_clicks, users
WHERE users.country IN ('US')
AND users.last_active > '2015-02-26'
AND offers_clicks.user_id = users.id
AND offers_clicks.date > '2015-02-14'
AND offers_clicks.ranking_score < 3.49
AND offers_clicks.ranking_score > 0.24;
1 row in set (17.72 sec)
```
Explain:
```
explain SELECT count(distinct(users.id)) AS count_1 FROM offers_clicks USE index (user_id_3), users USE index (country_2) WHERE users.country IN ('US') AND users.last_active > '2015-02-26' AND offers_clicks.user_id = users.id AND offers_clicks.date > '2015-02-14' AND offers_clicks.ranking_score < 3.49 AND offers_clicks.ranking_score > 0.24;
+----+-------------+---------------+-------+---------------+-----------+---------+------------------------------+--------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------+---------------+-----------+---------+------------------------------+--------+--------------------------+
| 1 | SIMPLE | users | range | country_2 | country_2 | 14 | NULL | 245014 | Using where; Using index |
| 1 | SIMPLE | offers_clicks | ref | user_id_3 | user_id_3 | 4 | dejong_pointstoshop.users.id | 270153 | Using where; Using index |
+----+-------------+---------------+-------+---------------+-----------+---------+------------------------------+--------+--------------------------+
```
Explain without specifying any indexes:
```
mysql> explain SELECT count(distinct(users.id)) AS count_1 FROM offers_clicks, users WHERE users.country IN ('US') AND users.last_active > '2015-02-26' AND offers_clicks.user_id = users.id AND offers_clicks.date > '2015-02-14' AND offers_clicks.ranking_score < 3.49 AND offers_clicks.ranking_score > 0.24;
+----+-------------+---------------+-------+------------------------------------------------------------------------+-----------+---------+------------------------------+--------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------+------------------------------------------------------------------------+-----------+---------+------------------------------+--------+--------------------------+
| 1 | SIMPLE | users | range | PRIMARY,last_active,country,last_active_2,country_2 | country_2 | 14 | NULL | 221606 | Using where; Using index |
| 1 | SIMPLE | offers_clicks | ref | user_id,user_id_2,date,date_2,date_3,ranking_score,user_id_3,user_id_4 | user_id_2 | 4 | dejong_pointstoshop.users.id | 3 | Using where |
+----+-------------+---------------+-------+------------------------------------------------------------------------+-----------+---------+------------------------------+--------+--------------------------+
```
Here's a whole bunch of indexes I tried with not too much success:
```
+---------------+------------+-----------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+---------------+------------+-----------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| offers_clicks | 1 | user_id_3 | 1 | user_id | A | 198 | NULL | NULL | | BTREE | | |
| offers_clicks | 1 | user_id_3 | 2 | ranking_score | A | 198 | NULL | NULL | | BTREE | | |
| offers_clicks | 1 | user_id_3 | 3 | date | A | 198 | NULL | NULL | | BTREE | | |
| offers_clicks | 1 | user_id_2 | 1 | user_id | A | 17838712 | NULL | NULL | | BTREE | | |
| offers_clicks | 1 | user_id_2 | 2 | date | A | 53516137 | NULL | NULL | | BTREE | | |
| offers_clicks | 1 | user_id_4 | 1 | user_id | A | 198 | NULL | NULL | | BTREE | | |
| offers_clicks | 1 | user_id_4 | 2 | date | A | 198 | NULL | NULL | | BTREE | | |
| offers_clicks | 1 | user_id_4 | 3 | ranking_score | A | 198 | NULL | NULL | | BTREE | | |
| users | 1 | country_2 | 1 | country | A | 14 | NULL | NULL | | BTREE | | |
| users | 1 | country_2 | 2 | last_active | A | 8048529 | NULL | NULL | | BTREE | | |
```
Simplified users schema:
```
+---------------------------------+---------------+------+-----+---------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------------------------+---------------+------+-----+---------------------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| country | char(2) | NO | MUL | | |
| last_active | datetime | NO | MUL | 2000-01-01 00:00:00 | |
```
Simplified offers clicks schema:
```
+-----------------+------------------+------+-----+---------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+------------------+------+-----+---------------------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| user_id | int(11) | NO | MUL | 0 | |
| offer_id | int(11) unsigned | NO | MUL | NULL | |
| date | datetime | NO | MUL | 0000-00-00 00:00:00 | |
| ranking_score | decimal(5,2) | NO | MUL | 0.00 | |
```
|
This is your query:
```
SELECT count(distinct u.id) AS count_1
FROM offers_clicks oc JOIN
users u
ON oc.user_id = u.id
WHERE u.country IN ('US') AND u.last_active > '2015-02-26' AND
oc.date > '2015-02-14' AND
oc.ranking_score > 0.24 AND oc.ranking_score < 3.49;
```
First, instead of `COUNT(DISTINCT ...)`, you might consider writing the query as:
```
SELECT count(*) AS count_1
FROM users u
WHERE u.country IN ('US') AND u.last_active > '2015-02-26' AND
EXISTS (SELECT 1
FROM offers_clicks oc
WHERE oc.user_id = u.id AND
oc.date > '2015-02-14' AND
oc.ranking_score > 0.24 AND oc.ranking_score < 3.49
)
```
Then, the best indexes for this query are: `users(country, last_active, id)` and either `offers_clicks(user_id, date, ranking_score)` or `offers_clicks(user_id, ranking_score, date)`.
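As a sanity check, the `EXISTS` rewrite can be run against a tiny in-memory dataset. A sketch using Python's built-in `sqlite3` (only the table and column names come from the question; the rows are invented):

```python
import sqlite3

# Sketch of the EXISTS rewrite on a tiny in-memory dataset.
# Table/column names follow the question; the rows themselves are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, country TEXT, last_active TEXT);
CREATE TABLE offers_clicks (user_id INT, date TEXT, ranking_score REAL);
INSERT INTO users VALUES (1,'US','2015-03-01'),(2,'US','2015-01-01'),(3,'DE','2015-03-01');
INSERT INTO offers_clicks VALUES (1,'2015-02-20',1.0),(1,'2015-02-21',2.0),(2,'2015-02-20',1.0);
""")
count, = con.execute("""
SELECT count(*) FROM users u
WHERE u.country IN ('US') AND u.last_active > '2015-02-26'
  AND EXISTS (SELECT 1 FROM offers_clicks oc
              WHERE oc.user_id = u.id
                AND oc.date > '2015-02-14'
                AND oc.ranking_score > 0.24 AND oc.ranking_score < 3.49)
""").fetchone()
print(count)  # user 1 qualifies; user 2 is inactive, user 3 is not in 'US'
```

The `EXISTS` form lets the engine stop probing a user's clicks at the first qualifying row, instead of materializing the whole join just to run `count(distinct)` over it.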
|
```
SELECT count(distinct u.id) AS count_1
FROM users u
STRAIGHT_JOIN offers_clicks oc
ON oc.user_id = u.id
WHERE
u.country IN ('US')
AND u.last_active > '2015-02-26'
AND oc.date > '2015-02-14'
AND oc.ranking_score > 0.24
AND oc.ranking_score < 3.49;
```
Make sure you have index on users - (`id`,`last_active`,`country`) columns
and offers\_clicks - (`user_id`,`date`,`ranking_score`)
Or you can reverse the order
```
SELECT count(distinct u.id) AS count_1
FROM offers_clicks oc
STRAIGHT_JOIN users u
ON oc.user_id = u.id
WHERE
u.country IN ('US')
AND u.last_active > '2015-02-26'
AND oc.date > '2015-02-14'
AND oc.ranking_score > 0.24
AND oc.ranking_score < 3.49;
```
Make sure you have index on offers\_clicks - (`user_id`) column
and users - (`id`,`last_active`,`country`)
|
Optimizing slow MySQL select query
|
[
"",
"mysql",
"sql",
"query-optimization",
""
] |
I need to select all entries that do not start with a number between 1-9.
**Example Entries**:
* 6300 Dog Lane
* Kitty Drive
* 500 Bird Chrest
* 800 Tire Road
* Johnson Ave
* Park Ave
**So if I ran a query on the above, I would expect**:
* Kitty Drive
* Johnson Ave
* Park Ave
The table is called `objects` and the column is called `location`.
**Something I tried**:
```
SELECT DISTINCT name, location FROM object WHERE location NOT LIKE '1%' OR '2%' OR '3%' OR '4%' OR '5%' OR '6%' OR '7%' OR '8%' OR '9%';
```
Unfortunately, that is unsuccessful. Is this possible? If no, I will resort to modifying the data with Perl.
|
Try this:
```
SELECT DISTINCT name, location FROM object
WHERE substring(location, 1, 1)
NOT IN ('1','2','3','4','5','6','7','8','9');
```
or you have to repeat `NOT LIKE` for every number, combined with `AND` (using `OR` here would match every row, since any value fails at least one of the patterns):
```
SELECT DISTINCT name, location FROM object
WHERE location NOT LIKE '1%'
AND location NOT LIKE '2%'
...
```
|
You can use the following syntax:
```
SELECT column FROM TABLE WHERE column NOT REGEXP '^[1-9]';

SELECT DISTINCT name, location FROM object
WHERE location NOT REGEXP '^[1-9]';
```
Note that `'^[0-9]+$'` would only exclude values made up entirely of digits; `'^[1-9]'` excludes anything that starts with a digit 1-9, which is what the question asks for.
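The leading-character filter can also be sketched in SQLite (via Python's `sqlite3`), which has `substr()` but no built-in `REGEXP` by default; sample data from the question:

```python
import sqlite3

# Filter out rows whose location starts with a digit 1-9, using substr().
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE object (location TEXT)")
con.executemany("INSERT INTO object VALUES (?)",
                [("6300 Dog Lane",), ("Kitty Drive",), ("500 Bird Chrest",),
                 ("800 Tire Road",), ("Johnson Ave",), ("Park Ave",)])
rows = [r[0] for r in con.execute(
    "SELECT location FROM object "
    "WHERE substr(location, 1, 1) NOT IN ('1','2','3','4','5','6','7','8','9')")]
print(rows)  # ['Kitty Drive', 'Johnson Ave', 'Park Ave']
```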
|
Select query or select entries that don't start with a number - MySQL
|
[
"",
"mysql",
"sql",
"select",
""
] |
I created a table called employee
```
CREATE TABLE employee(
id INT,
name VARCHAR(50),
credit_card_number VARCHAR(20),
expr_date CHAR(6),
PRIMARY KEY(id)
)
```
And then I have a table that stores the credit cards information
```
CREATE TABLE credit_card (
credit_card_number VARCHAR(20),
expr_date CHAR(6),
o_datetime DATETIME
)
```
I want to write an insert statement that inserts the currently stored credit card information for every employee into my new credit card table. I also want to fill in my `o_datetime` with the current date and time.
This is how I approached it initially, but I'm not exactly sure if I'm doing this right.
```
INSERT INTO credit_card(credit_card_number, expr_date, CURRENT_TIMESTAMP AS o_datetime)
SELECT credit_card_number, expr_date
FROM employees;
```
But I get an error when I run this. I'm really new to SQL so I might be missing a simple step, but I can't seem to figure it out.
|
It should look like this:
```
INSERT INTO credit_card(credit_card_number, expr_date, o_datetime)
SELECT credit_card_number, expr_date, CURRENT_TIMESTAMP
FROM employee;
```
You need to define the 3rd column of your insert, in this case, `CURRENT_TIMESTAMP` for `o_datetime`.
* [SQL Fiddle Demo](http://www.sqlfiddle.com/#!9/87f69/1)
|
First, you should never store unencrypted credit card numbers in a database. This is an invitation for someone to "borrow" the numbers. You can hash them or store them in some other fashion to prevent unauthorized access.
The problem with your statement is the `o_datetime` component. The default value can go in the `select` statement:
```
INSERT INTO credit_card(credit_card_number, expr_date, o_datetime)
SELECT credit_card_number, expr_date, CURRENT_TIMESTAMP
FROM employees;
```
However, if you always want this to be the date that the data was inserted, you can just make it the default value:
```
CREATE TABLE credit_card (
credit_card_number VARCHAR(20),
expr_date CHAR(6),
o_datetime DATETIME default CURRENT_TIMESTAMP
)
```
Then you can do:
```
INSERT INTO credit_card(credit_card_number, expr_date)
SELECT credit_card_number, expr_date
FROM employees;
```
Note that in older versions of MySQL, `o_datetime` would need to be a `timestamp`.
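The `DEFAULT CURRENT_TIMESTAMP` approach can be sketched with SQLite via Python's `sqlite3` (SQLite accepts the same default clause; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                       credit_card_number TEXT, expr_date TEXT);
CREATE TABLE credit_card (credit_card_number TEXT, expr_date TEXT,
                          o_datetime TEXT DEFAULT CURRENT_TIMESTAMP);
INSERT INTO employee VALUES (1,'Ann','4111','012026'),(2,'Bob','4222','032027');
-- omit o_datetime in the column list so the DEFAULT fills it in
INSERT INTO credit_card (credit_card_number, expr_date)
    SELECT credit_card_number, expr_date FROM employee;
""")
rows = con.execute(
    "SELECT credit_card_number, o_datetime IS NOT NULL FROM credit_card").fetchall()
print(rows)  # both rows copied, o_datetime populated by the default
```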
|
MySQL: Is there a way to insert values from one table to another?
|
[
"",
"mysql",
"sql",
"insert-select",
""
] |
Given a table like:
```
Employee (emp_no, emp_fname, emp_lname, emp_salary, job_title)
```
How to write a query that for each employee calculates the difference
between his/her salary to the average salary of his/her job group?
To get the avg. Salary for each group I use:
```
Select job_title, avg(emp_salary) as avg_salary
from employee group by job_title;
```
But I'm struggling to find the difference between salary and avg. salary for each job title.
|
Depending on which database you are using, you may be able to arrive at this more efficiently using either window functions or CTEs. But a SQL solution that should work on pretty much any SQL-based database you could use would look like:
```
SELECT a.emp_no,
a.emp_fname,
a.emp_lname,
a.job_title,
(a.emp_salary - b.emp_salary) as salary_difference
FROM employee AS a
INNER JOIN (
SELECT job_title, avg(emp_salary)as emp_salary from employee group by
job_title) as b
ON a.job_title = b.job_title
```
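A runnable sketch of the join-against-aggregated-subquery pattern, using SQLite through Python's `sqlite3` with invented sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (emp_no INT, emp_fname TEXT, emp_lname TEXT,"
            " emp_salary REAL, job_title TEXT)")
con.executemany("INSERT INTO employee VALUES (?,?,?,?,?)", [
    (1, 'A', 'X', 100.0, 'dev'),
    (2, 'B', 'Y', 200.0, 'dev'),
    (3, 'C', 'Z', 300.0, 'qa')])
# Join each employee to the average salary of their job group.
rows = con.execute("""
SELECT a.emp_no, a.emp_salary - b.avg_salary AS salary_difference
FROM employee a
JOIN (SELECT job_title, AVG(emp_salary) AS avg_salary
      FROM employee GROUP BY job_title) b
  ON a.job_title = b.job_title
ORDER BY a.emp_no""").fetchall()
print(rows)  # [(1, -50.0), (2, 50.0), (3, 0.0)]
```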
|
Alternatively, you can use a correlated subquery to get the answer.
Step 1: Create a query to get the average salary for the employee's job group.
Step 2: Use the query created in step 1 as a subquery and subtract it from the main query's
salary column.
Code :
`SELECT emp_fname, emp_lname, emp_salary, emp_salary - (SELECT AVG(e2.emp_salary) FROM employee e2 WHERE e2.job_title = e.job_title) AS SalDiff FROM employee e;`
Correlating on `job_title` is what makes this a per-group average; without the `WHERE` clause the subquery would return the average over the whole table.
|
Calculates the difference between employee salary and the average salary of job group
|
[
"",
"sql",
""
] |
I am trying to count the number of records for each user:
```
SELECT count(userID) as countOfRecords FROM (select distinct userID from `table`)
```
I get the error: *Every derived table must have its own alias* What am I doing wrong?
|
You can use `GROUP BY`
```
select userID, count(*) as countOfRecords
from table
group by userID
```
|
You don't need a derived table, you can just do count(distinct):
```
SELECT count(distinct userID) as countOfRecords
FROM `table`
```
**Edit:** When I see Philip Fourie's answer I realize I perhaps didn't give the complete answer... See his `GROUP BY` solution as well.
|
Counting records SQL
|
[
"",
"sql",
""
] |
I have 2 fields in my SQL table. One has 5 digits (ex: 12345) and the 2nd field has 2 digits (ex: 99) and I need to know if there is a way to take the LAST digit from the first field and push it to the beginning of the 2nd field. So the first field would be 1234 and the 2nd field would be 599 ???
|
```
UPDATE YourTable
SET Column2 = RIGHT(Column1, 1) + COALESCE(Column2, ''),
Column1 = LEFT(Column1, LEN(Column1)-1)
WHERE COALESCE(LEN(Column1), 0) > 1;
```
|
Here is a "typical" way to do this.
```
update t
set col1 = left(col1, length(col1) - 1),
col2 = concat(right(col1, 1), col2);
```
The exact details depend on the database.
Notes:
* This sort of assumes the values are stored as strings, although the logic will convert back and forth from numbers.
* The `left()` and `right()` functions may differ by database, although most databases do support them.
* `concat()` may be replaced by an appropriate operator, depending on the database ('||', '+', '&', come to mind).
* `length()` is sometimes called `len()`.
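A sketch of the same two-column update in SQLite (via Python's `sqlite3`), where `left`/`right`/`len` become `substr()`/`length()` and concatenation is `||`. All right-hand-side expressions are evaluated against the row's old values, so the order of the `SET` clauses doesn't matter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")
con.execute("INSERT INTO t VALUES ('12345', '99')")
# Move the last character of col1 to the front of col2.
con.execute("""
UPDATE t SET col2 = substr(col1, length(col1), 1) || col2,
             col1 = substr(col1, 1, length(col1) - 1)""")
row = con.execute("SELECT col1, col2 FROM t").fetchone()
print(row)  # ('1234', '599')
```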
|
Update 2 fields - push last digit from 1 field to the beginning of another field
|
[
"",
"sql",
""
] |
I am trying to copy one table to another in oracle changing the value of one of the fields.
Let's say the tables are Original and Temp; I would normally use the SQL below:
```
Insert into Original Select * from Temp where site = 1
```
The problem is that now my Original table already has the site 1 so I need to copy the content of site one to the original table changing site 1 for site 2. I normally would do
```
Insert into Original (site, field1, field2, field3) Select '2', field1, field2, field3 from Temp where site = 1
```
However, I have a few hundred tables to do this for, and I am looking for a solution where I don't need to enter the names of the fields in each statement.
Any Ideas?
|
I wrote a simple PL/SQL procedure, code below. It generates insert strings to output;
if these suit you, you can copy/paste them and run them manually, or you can uncomment the
line near the end and the inserts will be done automatically. Tested on sample data, worked fine.
The procedure operates on columns found in `user_tab_columns`. First you have to define the list of tables that interest you in the declare section.
You may want to change some code according to your needs.
Output generated:
```
insert into original (site, F1, F2, F3) select 1, F1, F2, F3
from TEMP1 where site = 1
insert into original (site, F2, F1, F3) select 2, F2, F1, F3
from TEMP2 where site = 1
insert into original (site, F3, F2, F1) select 3, F3, F2, F1
from TEMP3 where site = 1
```
Procedure code:
```
create or replace procedure justdoit is
type t_var is table of varchar2(30);
-- table names in uppercase
tabs t_var := t_var('TEMP1', 'TEMP2', 'TEMP3');
v_sql1 varchar2(4000);
v_sql2 varchar2(4000);
begin
for i in 1..tabs.count
loop
v_sql1 := 'insert into original (site, ';
v_sql2 := 'select '||i||', ';
for o in (
select * from user_tab_columns
where table_name = tabs(i)
order by column_id)
loop
if o.column_name <> 'SITE' then
v_sql1 := v_sql1 || o.column_name||', ';
v_sql2 := v_sql2 || o.column_name||', ';
end if;
end loop;
v_sql1 := rtrim(v_sql1, ', ')||') '||rtrim(v_sql2, ', ');
v_sql1 := v_sql1||' from '||tabs(i)||' where site = 1';
dbms_output.put_line(v_sql1);
-- execute immediate v_sql1; -- <- uncomment this line
end loop;
end justdoit;
```
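The same catalog-driven code generation can be sketched outside Oracle: SQLite exposes column metadata through `PRAGMA table_info` instead of `user_tab_columns`. A minimal sketch via Python's `sqlite3` (the table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE original (site INT, f1 TEXT, f2 TEXT);
CREATE TABLE temp1 (site INT, f1 TEXT, f2 TEXT);
INSERT INTO temp1 VALUES (1, 'a', 'b');
""")
# Column names come from the catalog, excluding the 'site' column.
cols = [r[1] for r in con.execute("PRAGMA table_info(temp1)") if r[1] != 'site']
sql = ("INSERT INTO original (site, %s) SELECT 2, %s FROM temp1 WHERE site = 1"
       % (", ".join(cols), ", ".join(cols)))
print(sql)
con.execute(sql)
copied = con.execute("SELECT * FROM original").fetchall()
print(copied)  # [(2, 'a', 'b')]
```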
|
You can try the queries below, which avoid listing the column names:
```
CREATE TABLE temp_copy AS SELECT * FROM Temp WHERE site = 1;
UPDATE temp_copy SET site = 2;
INSERT INTO Original SELECT * FROM temp_copy;
DROP TABLE temp_copy;
```
Explanation :
I) The rows for site 1 are copied into a scratch table and their site value is changed to 2.
II) All column values are then inserted into the Original table without having to name the fields.
|
Insert a table inside another with one field different
|
[
"",
"sql",
"oracle",
"insert",
""
] |
OK, so this one has had me ripping my hair out for hours. I feel like there is something obvious I am overlooking.
I have 2 tables, **service** and **brand**
```
service
-------
id
brand
brand
-----
id
brandName
```
So service.brand can be any of these:
```
Blank
"Other"
Integer (matches brand.id)
String (matches brand.brandName)
String (Not Blank, Not Other, Not brand.brandName)
```
I'm trying to write a query that will pull up the correct brandname from the brand table, and if the value of service.brand is not in brand.id or brand.brandName, then display whatever is there.
So far I got everything working except it wouldn't pull up the record if service.brand was Not Blank, Not Other, Not in brand.id, Not in brand.brandName. (which I'm calling OtherThanOther from here on out).
Now my latest attempt pulls up mostly correct, but the OtherThanOther field gets pulled up many times, like if total records is 40, OtherThanOther is the same record almost 20 times. HELP!
My latest attempt..
```
select
s.*, b.brandName as bname
from
service s, brand b
where
s.brand = b.brandName
or
s.brand = b.id
or
s.brand = 'Other'
or
s.brand = ''
or
(
s.brand not in (select brandName from brand)
and
s.brand not in (select id from brand)
and
s.brand != 'Other'
and
s.brand != ''
)
```
Sample Table Data
```
service
-------
1 5
2 Dell
3 SomeRandom
4
5 Other
brand
-----
1 HP
2 Gateway
3 Dell
4 Compaq
5 Toshiba
```
my query results..
```
(service.id, service.brand, brand.id, brand.brandName, bname)
-------------------------------------------------------------
1 5 5 Toshiba Toshiba
2 Dell 3 Dell Dell
3 SomeRandom, brand.id, brand.brandName, brand.brandName
3 SomeRandom, brand.id, brand.brandName, brand.brandName
3 SomeRandom, brand.id, brand.brandName, brand.brandName
3 SomeRandom, brand.id, brand.brandName, brand.brandName
3 SomeRandom, brand.id, brand.brandName, brand.brandName
4 '', null, null, null
5 Other, null, null, null
```
I need it to just pull SomeRandom once, group by won't work because there may be multiple fields with the same value as SomeRandom.
Any help is much appreciated.
Thanks!
|
```
select
s.*,
CASE
WHEN b_id.brandName IS NOT NULL THEN b_id.brandName
WHEN b.brandName IS NOT NULL THEN b.brandName
ELSE s.brand
END as bname
from
service s
LEFT JOIN brand b_id
ON CAST(s.brand AS UNSIGNED) = b_id.id
LEFT JOIN brand b
ON s.brand = b.brandName
```
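A runnable sketch of the double `LEFT JOIN` with fallback, using SQLite via Python's `sqlite3` and the sample data from the question (here the `CASE` chain is collapsed into `COALESCE`, which is equivalent for this fallback order):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE service (id INT, brand TEXT);
CREATE TABLE brand (id INT, brandName TEXT);
INSERT INTO service VALUES (1,'5'),(2,'Dell'),(3,'SomeRandom'),(4,''),(5,'Other');
INSERT INTO brand VALUES (1,'HP'),(2,'Gateway'),(3,'Dell'),(4,'Compaq'),(5,'Toshiba');
""")
# Try brand-by-id, then brand-by-name, then fall back to the raw value.
# CAST of non-numeric text yields 0, which matches no brand id.
rows = con.execute("""
SELECT s.id,
       COALESCE(b_id.brandName, b_name.brandName, s.brand) AS bname
FROM service s
LEFT JOIN brand b_id   ON CAST(s.brand AS INTEGER) = b_id.id
LEFT JOIN brand b_name ON s.brand = b_name.brandName
ORDER BY s.id""").fetchall()
print(rows)  # each service appears once, with the resolved brand name
```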
|
You are using an old style `inner join`:
```
from service s, brand b
where s.brand = b.brandName or ...
```
More clearly written like:
```
from service s
inner join brand b on s.brand = b.brandName or ...
```
In the newer form, the problem immediately stands out. An `inner join` filters out unmatched rows. Use a `left join` instead:
```
from service s
left join brand b on s.brand = b.brandName or ...
```
Now you'll get all services, even if no matching row was found in the `brand` table.
|
MySQL SQL Select Query with Multiple Sub-Selects
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
I need some help as I am learning SQL so I am at a beginner level. I am using SQL Server 2008 R2.
My question is: I want to (add) sum up a column that have mix values of `varchar` and `decimal`. I want to ignore the `varchar` values and sum only `decimal`. My column is like this:
```
Column1
-------
0.1
Z
0.4
2.1
2.1
Z
```
And I need the sum that is in this case it should be: `4.7`
I tried using `CASE` but I failed to solve it
```
SELECT
SUM(CASE
WHEN ISNUMERIC(Column1) = 1
THEN .(if true how i add them?).
ELSE .(and if false how i ignore them?).
END) AS Total
FROM
TABLE_Name
```
I am not good at explaining things, but I hope you can understand what I am trying to do.
If this question has already been answered, please point me to that question, and my apologies for the duplicate. Many thanks in advance :)
|
Simply use `WHERE` clause with `CAST`:
```
SELECT SUM(Cast(Column1 as numeric(10,2))) as Total
FROM TABLE_Name
Where IsNumeric(Column1) = 1
```
Result:
```
TOTAL
4.7
```
Sample result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/ca2eb/2).
**EDIT:**
As @funkwurm pointed out, the column has a chance to have the value '.' (dot):
```
SELECT SUM(Cast(Column1 as numeric(10,2))) as Total
FROM TABLE_Name
Where IsNumeric(Column1 + 'e0') = 1
```
[**Fiddle**](http://www.sqlfiddle.com/#!3/01bad/2).
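The filter-before-`SUM` idea can be sketched in SQLite via Python's `sqlite3`. SQLite has no `ISNUMERIC()`, so a `GLOB` pattern on the leading character stands in for the numeric test here (an assumption for the demo, not part of the original answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (column1 TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [('0.1',), ('Z',), ('0.4',), ('2.1',), ('2.1',), ('Z',)])
# Keep only rows that start with a digit, then cast and sum them.
total, = con.execute("""
SELECT SUM(CAST(column1 AS REAL)) FROM t
WHERE column1 GLOB '[0-9]*'""").fetchone()
print(round(total, 2))  # 4.7
```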
|
Why not use TRY\_CONVERT:
```
SELECT SUM(TRY_CONVERT(float, Column1)) AS Total
FROM TABLE_Name
```
Shortest query and easy to read! TRY\_CONVERT returns NULL if the conversion fails (e.g. if there are text values in Column1), and SUM ignores NULLs.
This function has existed since SQL Server 2012 (I think).
**@Jim Chen:**
ISNUMERIC is a bit tricky at this point! In your solution there would be a problem if the value '-' or '+' is inside a column:
```
SELECT ISNUMERIC('+') --returns TRUE!
SELECT ISNUMERIC('-') --returns TRUE!
```
**==> SUM will FAIL!**
|
SQL Server : SUM () for only numeric values, ignoring Varchar values
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I am new to SQL Server and I don't know how to word this question. I have a feeling this might be a duplicate. If you know the original, please flag it as a duplicate. I will explain with data what I am trying to achieve.
Table data - `sometable`
```
ID TKID Status DateTimeStamp RunMin
-----------------------------------------------------
215 6 Start 2009-10-29 09:48:14.243 NULL
261 6 Stop 2009-10-30 10:05:16.460 1457
356 6 Start 2009-11-03 14:11:05.097 NULL
357 6 Stop 2009-11-03 15:20:05.133 1526
358 6 Start 2009-11-03 16:14:45.863 NULL
574 19 Start 2009-11-12 13:12:11.827 NULL
575 19 Stop 2009-11-12 13:47:23.077 35
543 259 Start 2009-11-12 09:01:24.013 NULL
620 259 Stop 2009-11-14 11:25:30.177 NULL
623 259 Start 2009-11-14 16:47:32.913 NULL
720 352 Start 2009-11-18 17:47:38.637 NULL
730 352 Stop 2009-11-19 08:22:28.317 874
773 352 Start 2009-11-20 10:00:11.320 NULL
778 352 Stop 2009-11-20 11:51:59.853 985
812 352 Start 2009-11-20 17:51:35.640 NULL
813 352 Stop 2009-11-20 17:53:52.373 987
822 352 Start 2009-11-23 08:13:23.030 NULL
823 352 Stop 2009-11-23 08:17:33.063 991
901 352 Start 2009-12-01 10:50:16.547 NULL
910 352 Stop 2009-12-01 10:50:54.200 991
```
Expected output:
```
ID TKID Status DateTimeStamp RunMin
-----------------------------------------------------
358 6 Start 2009-11-03 16:14:45.863 NULL
623 259 Start 2009-11-14 16:47:32.913 NULL
```
So basically I want to get the record which has the `start` status but doesn't have the `stop` status.
Now, what I tried:
I tried to use the `ROW_NUMBER` function as below:
```
;with cte as
(
select
*,
ROW_NUMBER() OVER (PARTITION BY tkid
ORDER BY tkid, DateTimeStamp) AS rn
from Prog_Timer
)
SELECT *
FROM
(SELECT *
FROM cte
WHERE TkID IN (SELECT TkID
FROM cte
GROUP BY TkID
HAVING COUNT(*)% 2 = 1)
) as d
```
I then tried to get the tkid with the odd count and, for that tkid, fetched the data. It gives me this result:
```
ID TKID Status DateTimeStamp RunMin
-----------------------------------------------------
215 6 Start 2009-10-29 09:48:14.243 NULL
261 6 Stop 2009-10-30 10:05:16.460 1457
356 6 Start 2009-11-03 14:11:05.097 NULL
357 6 Stop 2009-11-03 15:20:05.133 1526
358 6 Start 2009-11-03 16:14:45.863 NULL
543 259 Start 2009-11-12 09:01:24.013 NULL
620 259 Stop 2009-11-14 11:25:30.177 NULL
623 259 Start 2009-11-14 16:47:32.913 NULL
```
I don't know how to get only the last row from this output: for each `tkid`, select only the last start row. I think my approach is pretty complicated. There has to be a simpler way to get what I want. If you have a new approach, feel free to post it. If you have anything to add to my existing query, feel free to post that too. In case of confusion, feel free to comment.
|
If you modify your `ROW_NUMBER() PARTITION` to `order by DateTimeStamp desc`, then the latest row for each `tkid` will be given `rn=1`, thus giving you the latest status for each `tkid`. You then simply have to `SELECT` rows where the `rn = 1` and the `Status = 'Start'` to get your desired output:
```
select * from
(select
*,
ROW_NUMBER() OVER (PARTITION BY tkid order by DateTimeStamp desc) as rn
from Prog_Timer
)
T
where
T.rn = 1 -- the latest status for each tkid
and T.Status = 'Start' -- returns only started and not stopped timers
-- if timer is stopped, t.Status will be 'Stop' in the latest row
```
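A runnable sketch of the `ROW_NUMBER()`-with-descending-order approach, using SQLite (3.25+ is needed for window functions) via Python's `sqlite3`, with a trimmed version of the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prog_timer (id INT, tkid INT, status TEXT, ts TEXT)")
con.executemany("INSERT INTO prog_timer VALUES (?,?,?,?)", [
    (215, 6,   'Start', '2009-10-29 09:48'),
    (261, 6,   'Stop',  '2009-10-30 10:05'),
    (358, 6,   'Start', '2009-11-03 16:14'),
    (574, 19,  'Start', '2009-11-12 13:12'),
    (575, 19,  'Stop',  '2009-11-12 13:47'),
    (623, 259, 'Start', '2009-11-14 16:47')])
# rn = 1 marks the latest row per tkid; keep it only when it is a 'Start'.
rows = con.execute("""
SELECT id, tkid FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY tkid ORDER BY ts DESC) AS rn
    FROM prog_timer) t
WHERE t.rn = 1 AND t.status = 'Start'
ORDER BY tkid""").fetchall()
print(rows)  # [(358, 6), (623, 259)]
```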
|
Lot of ways to do this, one of them is below:
```
SELECT T.*
FROM SOMETABLE T
JOIN (SELECT MAX(DATETIMESTAMP) MAXTIME, TKID
FROM SOMETABLE
GROUP BY TKID) SRC
ON SRC.MAXTIME = T.DATETIMESTAMP AND SRC.TKID = T.TKID
WHERE T.STATUS = 'Start'
```
Depending on the actual data, you might want to get MAX(ID) instead, in case it's sometimes possible for a process to stop and start at the same time. But of course, in case of updates it could be possible for older ID values to have newer timestamps.
|
How to get matching data using rownumber in SQL Server?
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have a query which I want to return the highest id value in `idevent` that is within a given range of sensor IDs (`sensorID`); however, the query is not returning the highest value.
The results when I run the query minus the `max()` statement:
```
mysql> SELECT * FROM events WHERE timestamp BETWEEN "2015-03-09 10:45:35" - INTERVAL 4000 SECOND AND "2015-03-09 10:45:35" AND (sensorID = 34035434 OR sensorID = 34035492 OR sensorID = 34035426 OR sensorID = 34035482 OR sensorID = 34035125 OR sensorID = 34035498 OR sensorID = 34035508 OR sensorID = 34035444 OR sensorID = 34035418 OR sensorID = 34035466 OR sensorID = 34035128 OR sensorID = 34035119 OR sensorID = 34035448 OR sensorID = 34037294 OR sensorID = 34035549);
+---------+---------------------+----------+-------+
| idevent | timestamp | sensorID | event |
+---------+---------------------+----------+-------+
| 117794 | 2015-03-09 10:14:58 | 34035434 | 9 |
| 117795 | 2015-03-09 10:15:03 | 34035508 | 9 |
| 117796 | 2015-03-09 10:15:17 | 34035508 | 1 |
+---------+---------------------+----------+-------+
3 rows in set (0.00 sec)
```
The results given when the `max()` statement is added as a `HAVING` clause:
```
mysql> SELECT * FROM events WHERE timestamp BETWEEN "2015-03-09 10:45:35" - INTERVAL 4000 SECOND AND "2015-03-09 10:45:35" AND (sensorID = 34035434 OR sensorID = 34035492 OR sensorID = 34035426 OR sensorID = 34035482 OR sensorID = 34035125 OR sensorID = 34035498 OR sensorID = 34035508 OR sensorID = 34035444 OR sensorID = 34035418 OR sensorID = 34035466 OR sensorID = 34035128 OR sensorID = 34035119 OR sensorID = 34035448 OR sensorID = 34037294 OR sensorID = 34035549) HAVING max(idevent);
+---------+---------------------+----------+-------+
| idevent | timestamp | sensorID | event |
+---------+---------------------+----------+-------+
| 117794 | 2015-03-09 10:14:58 | 34035434 | 9 |
+---------+---------------------+----------+-------+
1 row in set (0.00 sec)
```
Why is this not returning the latest value?
|
This could be done with `order by ... limit`; all you need is to add the following at the end of the given first query:
```
order by idevent desc limit 1
```
|
Alternatively, use [order by](http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html):
```
SELECT * FROM events WHERE timestamp BETWEEN "2015-03-09 10:45:35" - INTERVAL 4000 SECOND AND "2015-03-09 10:45:35" AND (sensorID = 34035434 OR sensorID = 34035492 OR sensorID = 34035426 OR sensorID = 34035482 OR sensorID = 34035125 OR sensorID = 34035498 OR sensorID = 34035508 OR sensorID = 34035444 OR sensorID = 34035418 OR sensorID = 34035466 OR sensorID = 34035128 OR sensorID = 34035119 OR sensorID = 34035448 OR sensorID = 34037294 OR sensorID = 34035549)
order by idevent desc
```
|
MySQL max() - not returning max() value
|
[
"",
"mysql",
"sql",
"database",
"max",
""
] |
I have a dataset like this:
```
+---------------+-------+
| SAMPLE_NUMBER | SCORE |
+---------------+-------+
| 1 | 100 |
| 2 | 97 |
| 3 | 124 |
| 4 | 762 |
| 5 | 999 |
| 6 | 1200 |
| 7 | NULL |
| 8 | NULL |
| 9 | NULL |
| 10 | NULL |
+---------------+-------+
```
I want to be able to summarise the NULL rows instead of displaying them all. So ideally, I would want the above to look like this:
```
+---------------+-------+
| SAMPLE_NUMBER | SCORE |
+---------------+-------+
| 1 | 100 |
| 2 | 97 |
| 3 | 124 |
| 4 | 762 |
| 5 | 999 |
| 6 | 1200 |
| 7-10 | NULL |
+---------------+-------+
```
Is there any way in Oracle to do this? Or is it something I will have to do post-query?
|
My guess is that this should be part of your presentation layer, since you will have to cast sample\_number to a string (assuming it is a numeric type). An alternative to your requirements is to return the min and max consecutive sample\_number:
```
with t (SAMPLE_NUMBER, SCORE) as (
values (1, 100)
, (2, 97)
, (3, 124)
, (4, 762)
, (5, 999)
, (6, 1200)
, (7, NULL)
, (8, NULL)
, (9, NULL)
, (10, NULL)
)
select min(sample_number), max(sample_number), grp, score
from (
select SAMPLE_NUMBER, SCORE
, row_number() over (order by SAMPLE_NUMBER)
- row_number() over (partition by SCORE
order by SAMPLE_NUMBER) as grp
from t
) group by grp, score
order by grp;
1 2 GRP SCORE
----------- ----------- -------------------- -----------
1 1 0 100
2 2 1 97
3 3 2 124
4 4 3 762
5 5 4 999
6 6 5 1200
7 10 6 -
```
Tried against db2, so you may have to adjust it slightly.
Edit: treat rows as individuals when score is not null
```
with t (SAMPLE_NUMBER, SCORE) as (
values (1, 100)
, (2, 97)
, (3, 97)
, (4, 762)
, (5, 999)
, (6, 1200)
, (7, NULL)
, (8, NULL)
, (9, NULL)
, (10, NULL)
)
select min(sample_number), max(sample_number), grp, score
from (
select SAMPLE_NUMBER, SCORE
, row_number() over (order by SAMPLE_NUMBER)
- row_number() over (partition by SCORE
order by SAMPLE_NUMBER) as grp
from t
) group by grp, score
, case when score is not null then sample_number end
order by grp;
1 2 GRP SCORE
----------- ----------- -------------------- -----------
1 1 0 100
2 2 1 97
3 3 1 97
4 4 3 762
5 5 4 999
6 6 5 1200
7 10 6 -
```
You may want to map max to null in case it is the same as min:
```
[...]
select min(sample_number)
, nullif(max(sample_number), min(sample_number))
, grp
, score
from ...
1 2 GRP SCORE
----------- ----------- -------------------- -----------
1 - 0 100
2 - 1 97
3 - 1 97
4 - 3 762
5 - 4 999
6 - 5 1200
7 10 6 -
```
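The row_number-difference ("gaps and islands") trick above also runs on SQLite 3.25+; a sketch via Python's `sqlite3` using the question's data and the string-range output format:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (sample_number INT, score INT)")
con.executemany("INSERT INTO t VALUES (?,?)",
                [(1, 100), (2, 97), (3, 124), (4, 762), (5, 999), (6, 1200),
                 (7, None), (8, None), (9, None), (10, None)])
# The difference of the two row numbers is constant within each run of
# equal scores, so consecutive NULLs collapse into one group.
rows = con.execute("""
SELECT CASE WHEN score IS NULL
            THEN min(sample_number) || '-' || max(sample_number)
            ELSE CAST(min(sample_number) AS TEXT) END AS sample_number,
       score
FROM (SELECT sample_number, score,
             ROW_NUMBER() OVER (ORDER BY sample_number)
           - ROW_NUMBER() OVER (PARTITION BY score ORDER BY sample_number) AS grp
      FROM t) AS x
GROUP BY grp, score,
         CASE WHEN score IS NOT NULL THEN sample_number END
ORDER BY min(sample_number)""").fetchall()
print(rows)  # ends with ('7-10', None)
```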
|
Yes. For your sample data:
```
select (case when score is null then min(sample_number) || '-' || max(sample_number)
else min(sample_number)
end) as sample_number,
score
from table t
group by score
order by min(sample_number)
```
In other words, `group by score` and then fiddle with the sample number. Note: this assumes that you do not have duplicate scores. If you do, you can do so with a more complicated version:
```
select (case when score is null then min(sample_number) || '-' || max(sample_number)
else min(sample_number)
end) as sample_number,
score
from (select t.*,
row_number() over (partition by score order by sample_number) as seqnum
from table t
) t
group by score, (case when score is not null then seqnum end);
```
|
Summarise null rows in Oracle
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I'm running what I think is a fairly simple query, and I'm getting absolutely atrocious return times on it. When running the query, a sub query used as a filter in an `IN` predicate performs an Index Scan and an Index Spool, but runs an Index Seek when used outside of the `IN` predicate. I have no idea why, but the query is taking nearly 30 seconds to return zero records...
Here's the query:
```
SELECT DISTINCT
C.County
, S.State
, C.County_ID
FROM
Leads L
INNER JOIN Inventory I ON L.Deleted = 0 AND L.Inv_ID = I.Inv_ID
INNER JOIN County C ON C.County_ID = I.County_ID
INNER JOIN State S ON C.State_ID = S.State_ID AND S.Active = 1
INNER JOIN Contacts ON L.Contact_ID = Contacts.ID AND Contacts.Deleted = 0
WHERE
L.Acct_ID = 204940
OR L.Acct_ID IN (
SELECT Accounts.Acct_ID FROM Accounts (NOLOCK) WHERE Accounts.Parent_Acct_ID = 204940
)
ORDER BY
S.State
, C.County
```
Here's the relevant part of the execution plan of that query. It's running a scan on the Accounts table in the `WHERE` clause, with an estimated number of rows to be returned of over 260,000. The subselect actually returns 0 records.

When I run that simple select outside the above query, I get a straight index seek, against the exact same Index. The subselect returns 0 records. If I run the query without the `OR` statement I get ms response times, if I run the query with the `OR` statement, the query takes nearly 30 seconds to complete.
Here's the index that's being used for both the Scan and the Seek on the Accounts table (I know, it's a dta index, not my original database...)
```
CREATE NONCLUSTERED INDEX [_dta_index_Accounts_5_144719568__K32_50] ON [dbo].[Accounts]
(
[parent_Acct_ID] ASC
)
INCLUDE ( [CompanyState]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
```
How do I get the subselect to run an Index Seek instead of a Scan, and return in any sort of reasonable amount of time?
|
I upvoted all of you who gave me a hand, but this ended up being the solution:
```
SELECT DISTINCT
C.County
, S.State
, C.County_ID
FROM
Leads L
INNER JOIN Contacts ON L.Contact_ID = Contacts.ID AND Contacts.Deleted = 0
INNER JOIN Inventory I ON L.Deleted = 0 AND L.Inv_ID = I.Inv_ID
INNER JOIN Accounts ON L.Acct_ID = Accounts.Acct_ID
AND ( Accounts.Acct_ID = 204940 OR Accounts.parent_Acct_ID = 204940 )
INNER JOIN County C ON C.County_ID = I.County_ID
INNER JOIN State S ON C.State_ID = S.State_ID AND S.Active = 1
ORDER BY
S.State
, C.County
```
Moving to a JOIN instead of a sub-select caused the query to run in 8ms instead of over 30 seconds. I now have straight index seeks in a nice cascade instead of a bunch of backwards filters and sorts to figure out who belongs in the query.
|
Try this one:
```
SELECT DISTINCT
C.County
, S.State
, C.County_ID
FROM
Leads L
    INNER JOIN Inventory I ON L.Deleted = 0 AND L.Inv_ID = I.Inv_ID
        AND (L.Acct_ID = 204940
        OR L.Acct_ID IN (
        SELECT Accounts.Acct_ID FROM Accounts (NOLOCK) WHERE Accounts.Parent_Acct_ID = 204940
    ))
INNER JOIN County C ON C.County_ID = I.County_ID
INNER JOIN State S ON C.State_ID = S.State_ID AND S.Active = 1
INNER JOIN Contacts ON L.Contact_ID = Contacts.ID AND Contacts.Deleted = 0
ORDER BY
S.State
, C.County
```
|
Table Scan and Eager Spools on IN Predicate
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
we have this table and some data


in some scenarios we need to get the **Entities** that have a property with a special **Value**; the property itself is not important.
The problem is we want to get entities that have a property whose value is equal to 1 and also have a property with a value of 2.
In a simple way, we need something like this:
> SELECT Entity\_Id FROM table WHERE Value = 1 AND Value = 2
This query returns nothing because the **Value column** only has one value per row ^\_^
actually we need something like this
> SELECT Entity\_Id FROM table
>
> GROUP BY Entity\_Id
>
> HAVING Value = 1 AND Value = 2
MS SQL does not support this query; you have to use an aggregate function.
Because in this scenario all filters are set on the **Value** column,
you can rewrite the first query as something like this:
```
SELECT
DISTINCT Entity_Id,
(SELECT COUNT(*) FROM table WHERE Value = 1 and Order_Id = outerTable.Order_Id)
*
(SELECT COUNT(*) FROM table WHERE Value = 2 and Order_Id = outerTable.Order_Id) as xxx
FROM table AS outerTable
```
Currently what I do is take the predicates of the first query and parse them into the third form of the query.
**EDIT :**
The filters set on the first query are optional and are sent by the user without knowing the background structure; the user thinks that all of the properties are saved as columns of this entity.
That's why I parse AND into
(subQry(filter1) \* subQry(filter2) \* .... \* subQry(filter N))
OR to
(subQry(filter1) + subQry(filter2) + ..... + subQry(filter N))
looking for a better solution ^\_^"
thanks in advance.
|
Here are two other ways, in addition to what the other answers already mention:
```
select entity_id from table where value = 1
intersect
select entity_id from table where value = 2
select t1.entity_id from table t1
join table t2 on t1.entity_id = t2.entity_id
where t1.value = 1 and t2.value = 2
```
|
Here is one method:
```
SELECT Entity_Id
FROM table
WHERE Value in (1, 2)
GROUP BY Entity_Id
HAVING COUNT(DISTINCT value) = 2;
```
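A runnable sketch of the `HAVING COUNT(DISTINCT ...)` relational-division pattern, using SQLite via Python's `sqlite3` with invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (entity_id INT, value INT)")
con.executemany("INSERT INTO t VALUES (?,?)",
                [(1, 1), (1, 2), (2, 1), (3, 2), (3, 3)])
# An entity qualifies only if it has BOTH distinct values 1 and 2.
rows = con.execute("""
SELECT entity_id FROM t
WHERE value IN (1, 2)
GROUP BY entity_id
HAVING COUNT(DISTINCT value) = 2""").fetchall()
print(rows)  # [(1,)] -- only entity 1 has both values
```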
|
querying on kinda EVA like model
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Table Author:
```
id | name_author
---------
1 | david
2 | kate
3 | tom
4 | mark
```
Table books:
```
id | name_book
------------
1 | book1
2 | book2
3 | book3
```
table relationships authors and books
```
id_book | id_author
-------------------
1 | 2
1 | 3
1 | 4
2 | 2
1 | 1
3 | 4
```
As a result, I have to get the book "book1" because it has 4 authors (david, kate, tom, mark).
How can I write such a query in MySQL?
|
```
SELECT
b.name_book, GROUP_CONCAT(a.name_author) authors
FROM relationships r
JOIN books b ON r.id_book = b.id
JOIN author a ON r.id_author = a.id
GROUP BY r.id_book
HAVING COUNT(r.id_book) = 4
;
```
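SQLite also supports `GROUP_CONCAT`, so the query above can be sketched via Python's `sqlite3` (the relationship table is named `rel` here, an invented name since the question doesn't give one):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE author (id INT, name_author TEXT);
CREATE TABLE books (id INT, name_book TEXT);
CREATE TABLE rel (id_book INT, id_author INT);
INSERT INTO author VALUES (1,'david'),(2,'kate'),(3,'tom'),(4,'mark');
INSERT INTO books VALUES (1,'book1'),(2,'book2'),(3,'book3');
INSERT INTO rel VALUES (1,2),(1,3),(1,4),(2,2),(1,1),(3,4);
""")
# Only books with exactly 4 author links survive the HAVING clause.
rows = con.execute("""
SELECT b.name_book, GROUP_CONCAT(a.name_author) AS authors
FROM rel r
JOIN books b ON r.id_book = b.id
JOIN author a ON r.id_author = a.id
GROUP BY r.id_book
HAVING COUNT(r.id_book) = 4""").fetchall()
print(rows)  # only book1, with all four author names concatenated
```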
|
You could write something like this (not tested):
```
select name_book
from books as b
, link_book_author as l
where b.id = l.id_book
group by name_book
having count(id_author) = 4
```
|
As select from table only those books which have exactly 4 of the author?
|
[
"",
"mysql",
"sql",
""
] |
I have a result table like this (after a query has been run):
```
id | time | region
12x-4nm-334 | 16:00 | Utah
12x-4nm-334 | 17:00 | California
12x-4nm-334 | 19:00 | Missouri
12x-4nm-334 | 22:00 | California
983-n2n-aq2 | 8:00 | New York
983-n2n-aq2 | 9:00 | New York
```
There are a few other columns in this table, but the important thing is that I want to remove the ids that are only registered to one region from the result. So ids like "983-n2n-aq2" which only show up in a single region (regardless of time) should not be in the resulting table.
Hope this question is clear enough.
|
Try this:
```
SELECT id, count(DISTINCT region) as RegionCount
FROM table
GROUP BY
id
HAVING count(DISTINCT region) > 1
```
If your DBMS doesn't support `count(distinct )` then this should do instead:
```
SELECT id, count(DISTINCT region) as RegionCount
FROM (
SELECT id, region FROM table GROUP BY id, region
) as table
GROUP BY
id
HAVING count(DISTINCT region) > 1
```
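As a quick check, the `COUNT(DISTINCT region)` filter behaves as expected on the question's sample rows (reproduced here in SQLite; Vertica's syntax for this particular query is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (id TEXT, time TEXT, region TEXT)")
conn.executemany("INSERT INTO visits VALUES (?,?,?)", [
    ("12x-4nm-334", "16:00", "Utah"),
    ("12x-4nm-334", "17:00", "California"),
    ("12x-4nm-334", "19:00", "Missouri"),
    ("12x-4nm-334", "22:00", "California"),
    ("983-n2n-aq2", "8:00",  "New York"),
    ("983-n2n-aq2", "9:00",  "New York"),
])

rows = conn.execute("""
    SELECT id, COUNT(DISTINCT region) AS RegionCount
    FROM visits
    GROUP BY id
    HAVING COUNT(DISTINCT region) > 1
""").fetchall()
print(rows)  # the single-region id is filtered out
```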
|
If you use MySQL:
```
DELETE FROM table
WHERE id IN ( SELECT x.id
FROM ( select *
FROM table t
GROUP BY id
HAVING COUNT(DISTINCT region) = 1
) as x
)
```
I don't know about Vertica. Hope it helps.
|
SQL remove certain duplicate values
|
[
"",
"sql",
"vertica",
""
] |
I'm trying to compare the columns of a table and fetch the unmatched rows. Can someone help me write a query to fetch the rows where Email\_ID is not in the format Firstname.Lastname@abc.com in the scenario below?
TABLE
```
S.no Firstname Lastname Email_ID
701 Sean Paul Sean.Paul@abc.com
702 Mike Tyson Mike.Samuel@abc.com
703 Richard Bernard Jim.Anderson@abc.com
704 Simon Sharma Rita.sharma@abc.com
```
|
```
SELECT *
FROM YourTable
WHERE CHARINDEX(FirstName + '.' + LastName + '@', Email) = 0
```
|
You mean something like:
```
select -- some columns
from table
where Email_ID <> (FirstName + '.' + Lastname + '@abc.com')
```
?
There is nothing in SQL to prevent comparing one column with another, and, with a little more syntax, even across rows.
|
Compare columns of same table in sql server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Interactive exercise 9 [Difficult Questions That Utilize Techniques Not Covered In Prior Sections](https://sqlzoo.net/wiki/SELECT_within_SELECT_Tutorial#Difficult_Questions_That_Utilize_Techniques_Not_Covered_In_Prior_Sections) at `https://sqlzoo.net`:
> Find the continents where all countries have a population <= 25000000. Then find the names of the countries associated with these continents. Show name, continent and population.
What I have done:
```
SELECT name, continent, population
FROM world x
WHERE population <= ALL(SELECT population
FROM world y
WHERE y.continent = x.continent
AND population > 25000000)
```
What am I doing wrong?
|
```
SELECT name, continent, population
FROM world w
WHERE NOT EXISTS ( -- there are no countries
SELECT *
FROM world nx
WHERE nx.continent = w.continent -- on the same continent
AND nx.population > 25000000 -- with more than 25M population
);
```
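The `NOT EXISTS` logic can be illustrated with a tiny made-up dataset (SQLite here; the query text is unchanged): one large country disqualifies all of Europe, while every country in Oceania stays under the threshold.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE world (name TEXT, continent TEXT, population INTEGER)")
conn.executemany("INSERT INTO world VALUES (?,?,?)", [
    ("Iceland", "Europe",  330000),
    ("Germany", "Europe",  80000000),  # > 25M, disqualifies Europe
    ("Fiji",    "Oceania", 900000),
    ("Tonga",   "Oceania", 100000),
])

rows = conn.execute("""
    SELECT name FROM world w
    WHERE NOT EXISTS (
        SELECT 1 FROM world nx
        WHERE nx.continent = w.continent
          AND nx.population > 25000000)
    ORDER BY name
""").fetchall()
print(rows)  # only the Oceania countries qualify
```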
|
```
SELECT name, continent, population FROM world x
WHERE 25000000>=ALL (
SELECT population FROM world y
WHERE x.continent=y.continent
AND population>0)
```
|
SQL combined SELECT statement
|
[
"",
"sql",
"subquery",
""
] |
How do I check if a particular id is greater than any of a set of values directly?
I thought this would work, but it doesn't. Let `table` be some table with a column `id`:
```
select id from table where id > any (2,3,4)
```
I get the following error
> You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '2,3,4)'
|
Try this:
```
SELECT id FROM table WHERE id > GREATEST(2,3,4)
```
UPD: Of course, LEAST, not GREATEST, sorry (thanks @brad).
|
```
select id from table where id > 2
```
You can just filter out the minimum value in your group and test against that.
|
How to check if id greater than a set of values in sql
|
[
"",
"sql",
""
] |
Let's suppose I have a table T1 with people IDs and other stuff IDs, as the following
```
Table: T1
personID | stuffID
1 | 1
1 | 2
1 | 3
1 | 4
2 | 1
2 | 4
3 | 1
3 | 2
```
And another table T2 with just one column of stuffIDs
```
Table: T2
stuffID
1
2
3
```
The result that I would get, by a `SELECT`, is a table of peopleIDs who are connected with ALL the stuffIDs of T2.
Following the example, the result would be only the id 1 (the personID 3 should not appear even though all the stuffIDs it is associated with are included in T2.stuffID).
|
If I understand correctly, you want to retrieve all the personID's from T1 that have all associated stuffID's found in T2.
You can break this up as follows:
First of all, find all the T1 entries that match with a nested query
```
SELECT personID
FROM T1 WHERE stuffID IN (SELECT stuffID FROM t2)
```
Now you need to check which of the entries in this set contains ALL the stuffID's you want
```
GROUP BY personID
HAVING COUNT(DISTINCT stuffID) = (SELECT COUNT(stuffID) FROM t2)
```
and put it all together:
```
SELECT personID
FROM T1 WHERE stuffID IN (SELECT stuffID FROM t2)
GROUP BY personID
HAVING COUNT(DISTINCT stuffID) = (SELECT COUNT(stuffID) FROM t2)
```
HTH.
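Running the combined query over the question's sample data (via SQLite) confirms that only person 1 is returned:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (personID INTEGER, stuffID INTEGER);
    CREATE TABLE t2 (stuffID INTEGER);
    INSERT INTO t1 VALUES (1,1),(1,2),(1,3),(1,4),(2,1),(2,4),(3,1),(3,2);
    INSERT INTO t2 VALUES (1),(2),(3);
""")

rows = conn.execute("""
    SELECT personID
    FROM t1 WHERE stuffID IN (SELECT stuffID FROM t2)
    GROUP BY personID
    HAVING COUNT(DISTINCT stuffID) = (SELECT COUNT(stuffID) FROM t2)
""").fetchall()
print(rows)  # person 1 is associated with every stuffID in T2
```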
|
```
select personID
from T1
where stuffID in (select stuffID from t2)
group by personID
having count(distinct stuffID) = (select count(*) from t2)
```
I.e. pick a person's stuffIDs which are in T2, count them (distinct only), and verify it is the same number as in T2.
|
check if a column contains ALL the values of another column - Mysql
|
[
"",
"mysql",
"sql",
"relational-division",
""
] |
I have two tables `products` and `product_edits` which hold product information on the pricelist. My app works in a way that if user changes any product info in `products` table it inserts it into `product_edits` table...
PRODUCTS table
```
pk|code|name |description|price|....
-----------------------------------
1 |QW1X|Product 1|...
2 |LW1X|Product 2|...
3 |DE1X|Product 3|...
```
PRODUCT\_EDITS table
```
pk|product_id|code|name |description|price|....
-----------------------------------
1 | 2|LW1X|Product 2 new name|...
```
In the above case I would like a SQL query that returns records from both tables, but if a product is found in the `product_edits` table it is selected only from `product_edits` and not also from the `products` table.
I tried using a standard union, but it selects all records from both tables:
```
select code, name, description from products
union
select code, name, description from product_edits
```
|
It's better to use `EXISTS` instead of `IN` in this case.
You want the search to stop once a match is found, not go over all of the results from `product_edits`.
So do it like this:
```
SELECT
code, name, description
FROM products p
WHERE NOT EXISTS (SELECT 1 FROM product_edits e WHERE e.code = p.code)
UNION
SELECT
code, name, description
FROM product_edits
```
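For reference, the query gives the expected result on the question's sample rows (SQLite shown; only the columns from the question are modelled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products      (code TEXT, name TEXT, description TEXT);
    CREATE TABLE product_edits (code TEXT, name TEXT, description TEXT);
    INSERT INTO products VALUES
        ('QW1X','Product 1',''),('LW1X','Product 2',''),('DE1X','Product 3','');
    INSERT INTO product_edits VALUES ('LW1X','Product 2 new name','');
""")

rows = conn.execute("""
    SELECT code, name, description FROM products p
    WHERE NOT EXISTS (SELECT 1 FROM product_edits e WHERE e.code = p.code)
    UNION
    SELECT code, name, description FROM product_edits
    ORDER BY code
""").fetchall()
print(rows)  # LW1X comes only from product_edits
```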
|
```
select code, name, description from products
where code not in(select code from product_edits)
union
select code, name, description from product_edits
```
|
MYSQL: UNION results between two tables where omitting records from first table if PK found in second table
|
[
"",
"mysql",
"sql",
"union",
""
] |
I have an Oracle DB table `tab` with two key fields: `f1` and `f2`. Now I need to select all those entries from that table, where `f1` is the same for at least two entries (so only `f2` differs).
```
f1 |f2
----+----
a |123
b |123
c |123
d |123
b |456
e |123
c |789
```
So in the example above, the SELECT should return all entries where `f1` = `b` or `c`. I tried it with the following SELECT, but that doesn't work:
```
SELECT f1, f2
FROM tab
GROUP BY f1, f2
HAVING count( f1 ) > 1
```
Any ideas how to achieve this?
|
You can use `count` in analytic version:
```
select f1, f2
from (
select tab.*, count(1) over (partition by f1) cnt from tab
)
where cnt>1
```
Results:
```
F1 F2
----- ----------
b 123
b 456
c 123
c 789
```
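The same analytic-count approach works in any database with window functions; here it is reproduced on the question's data (SQLite 3.25+ shown, with the Oracle query essentially unchanged apart from an added subquery alias):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (f1 TEXT, f2 INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?,?)", [
    ("a", 123), ("b", 123), ("c", 123), ("d", 123),
    ("b", 456), ("e", 123), ("c", 789)])

rows = conn.execute("""
    SELECT f1, f2 FROM (
        SELECT f1, f2, COUNT(*) OVER (PARTITION BY f1) AS cnt
        FROM tab) t
    WHERE cnt > 1
    ORDER BY f1, f2
""").fetchall()
print(rows)  # only the f1 values that occur more than once
```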
|
This should work and look familiar to you:
```
SELECT f1, f2
FROM tab t1
WHERE f1 IN (SELECT f1
FROM tab t2
WHERE t1.f2 <> t2.f2)
```
Basically we check to see if `f1` is found again in itself where the `f2` does not match.
|
Select DB records where some key values are the same
|
[
"",
"sql",
"oracle",
""
] |
When I run in SQL Developer this code,
```
SELECT EMPLOYEE_ID AS NON_MANAGER_EMPLOYEES
FROM EMPLOYEES
MINUS
SELECT MANAGER_ID
FROM EMPLOYEES;
```
Shows:
```
NON_MANAGER_EMPLOYEES
104
107
141
142
143
144
174
176
178
200
202
206
```
Which is what I want. Now I just want to add a column to that so it shows their last name with it. Throwing LAST\_NAME onto the select statements:
```
SELECT EMPLOYEE_ID AS NON_MANAGER_EMPLOYEES, LAST_NAME
FROM EMPLOYEES
MINUS
SELECT MANAGER_ID, LAST_NAME
FROM EMPLOYEES;
```
doesn't work; it breaks the MINUS. How do I add that information without ruining the MINUS? In other words:
```
NON_MANAGER_EMPLOYEES LAST_NAME
104 Ernst
107 Lorentz
141 Rajs
... ...
```
Thanks in advance!
|
One option would be to `join` the results back to the employee table and use a subquery:
```
SELECT E.EMPLOYEE_ID, E.LAST_NAME
FROM EMPLOYEES E
JOIN (
SELECT EMPLOYEE_ID
FROM EMPLOYEES
MINUS
SELECT MANAGER_ID
FROM EMPLOYEES
) E2 ON E.EMPLOYEE_ID = E2.EMPLOYEE_ID
```
---
If I understand your expected results, here's an alternative approach using `NOT IN`:
```
select employee_id, last_name
from employees e
where employee_id not in (select manager_id from employees)
```
|
Minus is really the wrong operation here. I would leverage `EXISTS` instead. For example,
```
SELECT employee_id AS non_manager_employees, last_name
FROM employees e
WHERE NOT EXISTS
(SELECT NULL FROM employees m WHERE e.employee_id = m.manager_id)
```
This will filter out any rows where the given employee is also a manager.
Alternatively, you could use this syntax (it does the same thing however).
```
SELECT e.employee_id AS non_manager_employees, e.last_name
FROM employees e
LEFT JOIN employees m ON m.manager_id = e.employee_id
WHERE m.employee_id IS NULL;
```
|
Add a column to a query result in SQL Developer
|
[
"",
"sql",
"oracle",
""
] |
I get this table :

I use this where :
```
where FoodID=1
```
when i Change My where with this :
```
where FoodID=1 and DayId=1 or DayId is null
```
The DayId field's value changes to null for all rows.

**Note:**
> My where statement is generated by a special framework; I write this code
> in the framework:
```
@@wheresql or dayid is null
```
I don't know why the value changes to null when I use `is null` in the where clause.
|
You have to use `()` arround the or operator:
```
where FoodID=1 and (DayId=1 or DayId is null)
```
Otherwise the `and` operator is evaluated first, and then `or`.
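The precedence difference is easy to demonstrate with bare boolean expressions (SQLite shown; MySQL and SQL Server evaluate these the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# AND binds tighter than OR, so "a AND b OR c" means "(a AND b) OR c".
without_parens = conn.execute("SELECT 0 AND 1 OR 1").fetchone()[0]
with_parens    = conn.execute("SELECT 0 AND (1 OR 1)").fetchone()[0]
print(without_parens, with_parens)  # 1 0
```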
|
```
where FoodID=1 and (DayId=1 or DayId is null)
```
|
Value becomes null when using IS NULL in the WHERE clause
|
[
"",
"sql",
"select",
""
] |
```
a.geosid a.Latitude a.Longitude b.Latitude b.Longitude b.geosid
9589565 -36.013472 -71.426018 -36.0135 -71.426 9589565
9586071 -36.015 -71.498 -36.1104 -71.4416 9586071
9589565 -36.013473 -71.426017 -36.0135 -71.426 9589565
```
The data above is formed by running a query in sql something like
```
SELECT *
FROM [ChileCSVimports].[dbo].[tLocation] a
JOIN AIRGeography..tGeography b
ON a.GeographySID = b.GeographySID
```
I need to select data such that if the latitude and longitude from the two tables differ by 0.0002 or more (each), the row is excluded.
|
Sounds fairly straightforward, since they're both numbers. Since you're using an inner join, you can either put the criteria in the join:
```
SELECT *
FROM [ChileCSVimports].[dbo].[tLocation] a
JOIN AIRGeography..tGeography b
ON a.GeographySID = b.GeographySID
AND ABS(a.Latitude - b.Latitude) < 0.0002
AND ABS(a.Longitude - b.Longitude) < 0.0002
```
... or in a separate WHERE clause:
```
SELECT *
FROM [ChileCSVimports].[dbo].[tLocation] a
JOIN AIRGeography..tGeography b
ON a.GeographySID = b.GeographySID
WHERE ABS(a.Latitude - b.Latitude) < 0.0002
AND ABS(a.Longitude - b.Longitude) < 0.0002
```
Either way, simply let the DB server do the math while retrieving your result set. This ABS() function is available on most DBMSes with identical syntax; PostgreSQL uses the "@" operator as a shorthand (`@ -5.4` evaluates to `5.4`).
|
You can add to your `JOIN` criteria or use `WHERE` criteria to compare the values:
```
SELECT *
FROM [ChileCSVimports].[dbo].[tLocation] a
JOIN AIRGeography..tGeography b
ON a.GeographySID = b.GeographySID
AND ABS(a.Latitude - b.Latitude) < 0.0002
AND ABS(a.Longitude - b.Longitude) < 0.0002
```
Use `ABS()` to return the absolute value so you don't have to worry about which value is bigger than the other.
|
Join statement with condition in sql
|
[
"",
"sql",
"join",
"conditional-statements",
""
] |
I have a table looking like this:
```
ID USER INVENTORY [...]
1 A X
2 A Y
3 A Z
4 B X
5 B Y
6 C X
7 C Z
```
I would like to select all columns in rows of USERs who have both X and Y in their inventory. Like so:
```
ID USER INVENTORY [...]
1 A X
2 A Y
4 B X
5 B Y
```
What would be the most efficient way of going about it? I have about 300 million records in my database, and I will need to get a couple of different combinations out of it; it is not a one-time job. I'm still new to SQL, so please keep that in mind.
I was thinking of 2 possible ways:
1) Some sort of nested select statements (but I don't know how to go about writing it and from what I've read they can be very inefficient)
2) To create a new table with a set of inventory items for each user, and intersect them to get USERs. But the problem here could be that sets can only contain up to 64 elements, and many users have more elements in inventory.
If you have any other ideas please let me know.
|
Here's one way using `exists` with conditional aggregation:
```
select id, user, inventory
from yourtable y
where inventory in ('X','Y') and exists (
select 1
from yourtable y2
where y.user = y2.user
group by user
having sum(inventory = 'X') > 0 and sum(inventory = 'Y') > 0
)
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!9/1418f/1)
---
Or if you'd rather use a `join` instead:
```
select y.id, y.user, y.inventory
from yourtable y
join (
select user
from yourtable
group by user
having sum(inventory = 'X') > 0 and sum(inventory = 'Y') > 0
) y2 on y.user = y2.user
where y.inventory in ('X','Y')
```
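Here is the join variant run end-to-end on the question's sample table (SQLite; in both SQLite and MySQL a comparison like `inventory = 'X'` evaluates to 0/1, which is what makes `SUM(...)` work as a conditional count):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inv (id INTEGER, user TEXT, inventory TEXT)")
conn.executemany("INSERT INTO inv VALUES (?,?,?)", [
    (1, 'A', 'X'), (2, 'A', 'Y'), (3, 'A', 'Z'),
    (4, 'B', 'X'), (5, 'B', 'Y'),
    (6, 'C', 'X'), (7, 'C', 'Z')])

rows = conn.execute("""
    SELECT y.id, y.user, y.inventory
    FROM inv y
    JOIN (SELECT user FROM inv
          GROUP BY user
          HAVING SUM(inventory = 'X') > 0 AND SUM(inventory = 'Y') > 0) y2
      ON y.user = y2.user
    WHERE y.inventory IN ('X', 'Y')
    ORDER BY y.id
""").fetchall()
print(rows)  # users A and B have both X and Y
```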
|
The most efficient way is probably to use two `exists` statements:
```
select t.*
from table t
where t.inventory in ('X', 'Y') and
exists (select 1 from table t2 where t2.user = t.user and t2.inventory = 'X') and
exists (select 1 from table t2 where t2.user = t.user and t2.inventory = 'Y');
```
You want an index on `table(user, inventory)` *and* on `table(inventory)` for performance.
|
Select intersecting users
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have a stored proc in Oracle, and we're limiting the number of records with ROWNUM based on a parameter. However, we also have a requirement to know whether the search result count exceeded the arbitrary limit (even though we're only passing data up to the limit; searches can return a lot of data, and if the limit is exceeded a user may want to refine their query.)
The limit's working well, but I'm attempting to pass an OUT value as a flag to signal when the maximum results were exceeded. My idea for this was to get the count of the inner table and compare it to the count of the outer select query (with ROWNUM) but I'm not sure how I can get that into a variable. Has anyone done this before? Is there any way that I can do this without selecting everything twice?
Thank you.
EDIT: For the moment, I am actually doing two identical selects - one for the count only, selected into my variable, and one for the actual records. I then pass back the comparison of the base result count to my max limit parameter. This means two selects, which isn't ideal. Still looking for an answer here.
|
You could use a nested subquery:
```
select id, case when max_count > 3 then 'Exceeded' else 'OK' end as flag
from (
select id, rn, max(rn) over () as max_count
from (
select id, rownum as rn
from t
)
where rownum <= 4
)
where rownum <= 3;
```
The inner level is your actual query (which probably has filters and an order-by clause in reality). The middle layer restricts to your actual limit + 1, which still allows Oracle to optimise using a stop key, and uses an analytic count over that inner result set to see if you got a fourth record (without requiring a scan of all matching records). And the outer layer restricts to your original limit.
With a sample table with 10 rows, this gets:
```
ID FLAG
---------- --------
1 Exceeded
2 Exceeded
3 Exceeded
```
If the inner query had a filter that returned fewer rows, say:
```
select id, rownum as rn
from t
where id < 4
```
it would get:
```
ID FLAG
---------- --------
1 OK
2 OK
3 OK
```
Of course for this demo I haven't done any ordering so you would get indeterminate results. And from your description you would use your variable instead of 3, and (your variable + 1) instead of 4.
|
You can add a column to the query:
```
select * from (
select . . . , count(*) over () as numrows
from . . .
where . . .
) where rownum <= 1000;
```
And then report `numrows` as the size of the final result set.
|
Find out if query exceeds arbitrary limit using ROWNUM?
|
[
"",
"sql",
"oracle",
"plsql",
"rownum",
""
] |
I had an issue with a MySQL statement that was returning null all the time, and although I was able to figure it out, the cause left me a bit puzzled.
This is a simplified version of the problematic query:
```
SELECT id FROM users WHERE id = id = 2;
```
The error happened because `id =` is repeated, removing one of the `id =` solved the problem (as there is a user with the id 2). But what has me a bit confused is how it "fails silently", returning 0 rows, instead of giving an error.
I tested a similar query on SQL Server, and got the error message `Incorrect syntax near '='` (something similar to what I would have expected from MySQL).
Originally, I thought that MySQL was making a **comparison between the first two fields** and the last one was the result of the comparison (with 0 for false, and 1 for true). Something like this:
```
SELECT id FROM users WHERE (id = id) = 2;
```
But then I ran some queries that (kind of) contradicted that. Let me elaborate a bit more.
Imagine this table (called "users"):
```
id | username
----|----------------
0 | anonymous
1 | root
2 | john
3 | doe
```
If I do `SELECT * FROM users WHERE id = id = 1`, I get all 4 users. And with `SELECT * FROM users WHERE id = id = 0`, I don't get any. That seems to confirm the comparison theory. But, things get confusing if I do something like this:
```
SELECT * FROM users WHERE id = username = 1;
SELECT * FROM users WHERE id = username = 0;
```
None of the records have the same id as username (they are not even the same type: `int(11)` and `varchar(25)` respectively), but with the first one, I get one result back: "anonymous". And with the second one, I get all the users but "anonymous". Why does that happen? I see that it has something to do with the `id` being 0, because if I change "anonymous"'s id from 0 to 4, then I don't get it anymore with the first query (and it shows up in the second).
I guess it has to do with **MySQL turning strings/varchars into 0 when comparing with numbers**. But why does it permit chained comparisons in the same clause? What order does it follows when comparing them?
Things get *funny* when queries like this are valid, and actually return (*unexpected?*) values. For example:
```
SELECT * FROM users WHERE id = id = id = username = username = id = username = 1;
```
returns the records with id 2 and 3, but no the ones with id 0 and 1. Why is that?
---
tl;dr version: Why do queries with chained comparison operation work? What is the order followed in the comparisons? Is this a bug or the expected behavior?
* A SQLFiddle with the database and the last query: <http://sqlfiddle.com/#!9/4f2ab/4>
|
# Prerequisites to understand how chained comparisons in MySQL work
## Understanding SELECT-WHERE query
This is pretty obvious, but I'm making this answer for every programmer, even if you don't really know how SQL works. A `SELECT-WHERE` query, basically:
1. Loops every row affected by the query and
2. evaluates the `where_condition` with the values of that specific row.
3. If the `where_condition` results in `TRUE`, it will be listed.
A pseudo-code could be:
```
for every row:
current values = row values # for example: username = 'anonymous'
if (where_condition = TRUE)
selected_rows.append(this row)
```
## Almost every string equals 0 (or FALSE)
Unless the string is a number (`'1'`, `'423'`, `'-42'`) or the string starts with a number, every other string equals `0` (or `FALSE`). The "numeric strings" equals its equivalent number and the "starting numeric string" equals its initial number. Providing some examples:
```
mysql> SELECT 'a' = 0;
+---------+
| 'a' = 0 |
+---------+
| 1 |
+---------+
1 row in set, 1 warning (0.00 sec)
```
```
mysql> SELECT 'john' = 0;
+------------+
| 'john' = 0 |
+------------+
| 1 |
+------------+
1 row in set, 1 warning (0.00 sec)
```
```
mysql> SELECT '123' = 123;
+-------------+
| '123' = 123 |
+-------------+
| 1 |
+-------------+
1 row in set (0.00 sec)
```
```
mysql> SELECT '12a5' = 12;
+-------------+
| '12a5' = 12 |
+-------------+
| 1 |
+-------------+
1 row in set, 1 warning (0.00 sec)
```
## WHERE\_condition is resolved like Mathematical Operations
Chained comparisons are resolved one by one, from left to right (with parentheses taking precedence), until a single `TRUE` or `FALSE` remains.
So, for example `1 = 1 = 0 = 0` will be traced as follows:
```
1 = 1 = 0 = 0
[1] = 0 = 0
[0] = 0
[1]
Final result: 1 -> TRUE
```
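This left-to-right reduction is not unique to MySQL; SQLite also parses `=` as a left-associative operator, so the same trace can be reproduced directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# ((1 = 1) = 0) = 0  ->  (1 = 0) = 0  ->  0 = 0  ->  1
value = conn.execute("SELECT 1 = 1 = 0 = 0").fetchone()[0]
print(value)  # 1
```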
# How it works
I will trace the last query, which I consider the most complicated and yet, the most beautiful to explain:
```
SELECT * FROM users WHERE id = id = id = username = username = id = username = 1;
```
First of all, I will show every `where_condition` with every row variables:
`id = id = id = username = username = id = username = 1`
```
0 = 0 = 0 = 'anonymous' = 'anonymous' = 0 = 'anonymous' = 1
1 = 1 = 1 = 'root' = 'root' = 1 = 'root' = 1
2 = 2 = 2 = 'john' = 'john' = 2 = 'john' = 1
3 = 3 = 3 = 'doe' = 'doe' = 3 = 'doe' = 1
```
And now I will trace every row:
```
0 = 0 = 0 = 'anonymous' = 'anonymous' = 0 = 'anonymous' = 1
[1] = 0 = 'anonymous' = 'anonymous' = 0 = 'anonymous' = 1
[0] = 'anonymous' = 'anonymous' = 0 = 'anonymous' = 1
[1] = 'anonymous' = 0 = 'anonymous' = 1
[0] = 0 = 'anonymous' = 1
[1] = 'anonymous' = 1
[0] = 1
[0] -> no match
1 = 1 = 1 = 'root' = 'root' = 1 = 'root' = 1
[1] = 1 = 'root' = 'root' = 1 = 'root' = 1
[1] = 'root' = 'root' = 1 = 'root' = 1
[0] = 'root' = 1 = 'root' = 1
[1] = 1 = 'root' = 1
[1] = 'root' = 1
[0] = 1
[0] -> no match
2 = 2 = 2 = 'john' = 'john' = 2 = 'john' = 1
[1] = 2 = 'john' = 'john' = 2 = 'john' = 1
[0] = 'john' = 'john' = 2 = 'john' = 1
[1] = 'john' = 2 = 'john' = 1
[0] = 2 = 'john' = 1
[0] = 'john' = 1
[1] = 1
[1] -> match
3 = 3 = 3 = 'doe' = 'doe' = 3 = 'doe' = 1
[1] = 3 = 'doe' = 'doe' = 3 = 'doe' = 1
[0] = 'doe' = 'doe' = 3 = 'doe' = 1
[1] = 'doe' = 3 = 'doe' = 1
[0] = 3 = 'doe' = 1
[0] = 'doe' = 1
[1] = 1
[1] -> match
```
The query shows the rows with `id = 2` and `id = 3` since they match the `where_condition`.
|
I do not know MySQL at all, but I suspect that you are encountering a side effect of the fact that each predicate in a where clause must evaluate to true. E.g.
```
Select *
from table
where 1 = 1
```
is in effect saying
```
Select *
from table
where is_true(1 = 1)
```
I suggest you try other values than 1 and 0 and see what result you get.
Obviously `(id = id)` is true, so with 0 and 1 standing for false/true this is equal to 1.
After some more experimenting, Alvaro Montoro added that `(int == int)` is true, but for int > 1: `(int == true)` is false, and `(int == false)` is false.
|
Chained comparisons in MySQL get unexpected result
|
[
"",
"mysql",
"sql",
"database",
""
] |
Is this the best way to select default value from the table if selecting value is `0` or `null`?
```
DECLARE @value INT = 15
DECLARE @defaultValue INT = 12
SELECT IIF(ISNULL(@value,0) = 0, @defaultValue, @value)
```
|
Specify "best". Since `IIF` works only in SQL Server, I'd use `CASE`, which is ANSI SQL standard and works in (almost) every RDBMS:
```
SELECT CASE WHEN ISNULL(@value,0) = 0 THEN @defaultValue ELSE @value END
```
Actually [`IIF`](https://msdn.microsoft.com/en-us/library/hh213574.aspx) is even translated to `CASE`:
> IIF is a shorthand way for writing a CASE expression ...
> The fact that IIF is translated into CASE also has an impact on other
> aspects of the behavior of this function....
But the same is true for `ISNULL`, which is also a SQL Server function and could be replaced by `COALESCE`.
By the way, if you use `ISNULL` or `COALESCE` on a column in a `WHERE` clause, it prevents the query optimizer from using an index. So then you should prefer something like:
```
SELECT ...
FROM dbo.TableName
WHERE ColumnName = @value OR (@value IS NULL AND ColumnName IS NULL)
```
However, I prefer `ISNULL` over `COALESCE`, since the latter has an issue if it contains a sub-query: the sub-query is executed twice, whereas with `ISNULL` it executes once. Actually `COALESCE` is also translated into `CASE`. You can read about that issue [here](https://connect.microsoft.com/SQLServer/feedback/details/546437/coalesce-subquery-1-may-return-null).
|
You can use the [COALESCE](https://msdn.microsoft.com/en-us/library/ms190349.aspx). It evaluates the arguments in order and returns the current value of the first expression that initially does not evaluate to `NULL`. It is used for this purpose to get the first `not null` value.
```
SELECT COALESCE(@value,@defaultValue)
```
But keep in mind,
> If all arguments are NULL, COALESCE returns NULL. At least one of the
> null values must be a typed NULL.
You can also use the `ISNULL` but there is difference between both of them that is as listed below,
**Comparing COALESCE and ISNULL**
> 1) The `ISNULL` function and the `COALESCE` expression have a similar
> purpose but can behave differently. Because `ISNULL` is a function, it
> is evaluated only once. As described above, the input values for the
> `COALESCE` expression can be evaluated multiple times.
> 2) Data type
> determination of the resulting expression is different. `ISNULL` uses
> the data type of the first parameter, `COALESCE` follows the `CASE`
> expression rules and returns the data type of value with the highest
> precedence.
> 3) The `NULLability` of the result expression is
> different for `ISNULL` and `COALESCE`. The `ISNULL` return value is
> always considered `NOT NULLable (assuming the return value is a non-nullable one)` whereas `COALESCE` with non-null parameters is
> considered to be `NULL`. So the expressions `ISNULL(NULL, 1)` and
> `COALESCE(NULL, 1)` although equivalent have different `nullability`
> values. This makes a difference if you are using these expressions in
> computed columns, creating key constraints or making the return value
> of a scalar UDF deterministic so that it can be indexed as shown in
> the following example.
If you are planning to use sub-queries in the expression to check for `NULL`, then you are better off using `ISNULL`, as `COALESCE` will evaluate the same query multiple times.
|
Should i use IIF and ISNULL to select a default-value?
|
[
"",
"sql",
"sql-server",
""
] |
I would like a union query where the second query is used only if the first does not return any results.
a simple example...
table
```
create table tbl (
name varchar(3),
lang varchar(3)
);
insert into tbl (name, lang) values ('foo', 'en');
insert into tbl (name, lang) values ('foo', 'es');
insert into tbl (name, lang) values ('bar', 'en');
insert into tbl (name, lang) values ('baz', 'en');
```
query
```
SELECT name, lang
FROM tbl
WHERE lang = 'es'
UNION ALL
SELECT name, lang
FROM tbl
WHERE lang = 'en'
```
output
```
name lang
-----------
foo es
foo en
bar en
baz en
```
expected
```
name lang
-----------
foo es
bar en
baz en
```
|
Use this:
```
SELECT name, lang
FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY name ORDER BY CASE WHEN lang = 'es' THEN 1 ELSE 2 END) AS rn
FROM tbl ) t
WHERE t.rn = 1
```
You can adjust the `CASE` clause within `OVER` to accommodate any other language.
|
Use the `Row_Number()` window function.
```
select name, lang
from
(
SELECT name, lang, row_number() over (partition by name order by order_column) RN
FROM tbl
) a
where RN=1
```
Keep whatever column helps you find the `first` value per `name` in the `order by` of `row_number`.
|
SQL Server (TSQL) Select first value if exists, otherwise select other
|
[
"",
"sql",
"sql-server",
""
] |
I have a column named phone\_number and I want to query this column to get the string to the right of the last occurrence of '.' for all kinds of numbers in one single SQL query.
example:
515.123.1277
011.44.1345.629268
I need to get 1277 and 629268 respectively.
I have this so far:
```
select phone_number,
       case when length(phone_number) <= 12
            then substr(phone_number, -4)
            else substr(phone_number, -6)
       end
from employees;
```
This works for this example, but I want it for all kinds of formats.
|
It should be as easy as this regex:
```
SELECT phone_number, REGEXP_SUBSTR(phone_number, '[^.]*$')
FROM employees;
```
With the end anchor `$` it should get everything that is not a `.` character after the final `.`. If the last character is `.` then it will return `NULL`.
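The pattern `[^.]*$` is plain regex, so you can see what it captures outside the database as well, e.g. with Python's `re` module:

```python
import re

def after_last_dot(phone):
    # [^.]*$ matches the run of non-dot characters anchored at the end.
    return re.search(r'[^.]*$', phone).group()

print(after_last_dot('515.123.1277'))        # 1277
print(after_last_dot('011.44.1345.629268'))  # 629268
```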
|
Search for a pattern including the period, `[.]` with digits, `\d`, followed by the end of the string, `$`.
Associate the digits with a character group by placing the pattern, `\d+`, in parentheses (see below). This is referenced with the subexpr parameter, `1` (the last parameter).
Here is the solution:
```
SCOTT@dev> list
1 WITH t AS
2 ( SELECT '414.352.3100' p_number FROM dual
3 UNION ALL
4 SELECT '515.123.1277' FROM dual
5 UNION ALL
6 SELECT '011.44.1345.629268' FROM dual
7 )
8* SELECT regexp_substr(t.p_number, '[.](\d+)$', 1, 1, NULL, 1) end_num FROM t
SCOTT@dev> /
END_NUM
========================================================================
3100
1277
629268
```
|
substring, after last occurrence of character?
|
[
"",
"sql",
"oracle",
"substring",
""
] |
So given a table like the one below, I would like to grab rows where `id` has at least three consecutive years.
```
+---------+--------+
| id | year |
+------------------+
| 2 | 2003 |
| 2 | 2004 |
| 1 | 2005 |
| 2 | 2005 |
| 1 | 2007 |
| 1 | 2008 |
+---------+--------+
```
The result over here would be of course:
```
+---------+
| id |
+---------+
| 2 |
+---------+
```
Any input at all as to how I could go about structuring a query to do this would be great.
|
This one works and can be fast when you have at least an index on the id-field:
```
WITH t1 AS (
SELECT *
FROM (VALUES
(2,2003),
(2,2004),
(1,2005),
(2,2005),
(1,2007),
(1,2008)
) v(id, year)
)
SELECT DISTINCT t1.id
FROM t1 -- your tablename
JOIN t1 AS t2 ON t1.id = t2.id AND t1.year + 1 = t2.year
JOIN t1 AS t3 ON t1.id = t3.id AND t1.year + 2 = t3.year;
```
|
You can use `JOIN` approach (*self-join*):
```
SELECT t1.id
FROM tbl t1
JOIN tbl t2 ON t2.year = t1.year + 1
AND t1.id = t2.id
JOIN tbl t3 ON t3.year = t1.year + 2
AND t1.id = t3.id
```
[**SQLFiddle**](http://sqlfiddle.com/#!15/ddf2a8/3)
|
How do I group a collection of elements by whether or not they have consecutive values?
|
[
"",
"sql",
"postgresql",
"gaps-and-islands",
""
] |
Let's say I have two columns: **Date** and **Indicator**
**Usually the indicator goes from 0 to 1** (when the data is sorted by date) and **I want to be able to identify if it goes from 1 to 0 instead**. Is there an easy way to do this with SQL?
I am already aggregating other fields in the same table. If I can add this as another aggregation (e.g. without using a separate "where" statement or passing over the data a second time) it would be pretty awesome.
This is the phenomena I want to catch:
```
Date Indicator
1/5/01 0
1/4/01 0
1/3/01 1
1/2/01 1
1/1/01 0
```
|
This isn't a teradata-specific answer, but this *can* be done in normal SQL.
Assuming that the sequence is already 'complete' and xn+1 can be derived from xn, such as when the dates are sequential and all present:
```
SELECT date -- the 0 on the day following the 1
FROM r curr
JOIN r prev
-- join each day with the previous day
ON curr.date = dateadd(d, 1, prev.date)
WHERE curr.indicator = 0
AND prev.indicator = 1
```
YMMV on the ability of such a query to use indexes efficiently.
* If the sequence is not complete the same can be applied after making a delegate sequence which is well ordered and similarly 'complete'.
* This can also be done using [*correlated subqueries*](http://en.wikipedia.org/wiki/Correlated_subquery), each selecting the indicator of the 'previous max', but.. uhg.
|
Joining the table against itself is quite generic, but most SQL dialects now support analytic functions. Ideally you could use `LAG()`, but Teradata seems to support the absolute minimum of these, so they point you to `SUM()` combined with `rows preceding`.
In any case, this method avoids a potentially costly join and deals effectively with gaps in the data, while making maximum use of indexes.
```
SELECT
*
FROM
yourTable t
QUALIFY
t.indicator
<
SUM(t.indicator) OVER (PARTITION BY t.somecolumn /* optional */
ORDER BY t.Date
ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
)
```
`QUALIFY` is a bit TeraData specific, but slightly tidier than the alternative...
```
SELECT
*
FROM
(
SELECT
*,
SUM(t.indicator) OVER (PARTITION BY t.somecolumn /* optional */
ORDER BY t.Date
ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
)
AS previous_indicator
FROM
yourTable t
)
lagged
WHERE
lagged.indicator < lagged.previous_indicator
```
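Where `LAG()` is available (PostgreSQL, SQL Server 2012+, recent Teradata, SQLite 3.25+), the same "current < previous" comparison can be written directly. A small sketch using Python's bundled SQLite (table and column names are placeholders):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
con.executescript("""
CREATE TABLE r (d TEXT, indicator INTEGER);
INSERT INTO r VALUES
 ('2001-01-01',0),('2001-01-02',1),('2001-01-03',1),
 ('2001-01-04',0),('2001-01-05',0);
""")

# Flag rows where the indicator drops from 1 (previous row) to 0 (current row).
rows = con.execute("""
    SELECT d FROM (
        SELECT d, indicator,
               LAG(indicator) OVER (ORDER BY d) AS prev_indicator
        FROM r
    )
    WHERE indicator = 0 AND prev_indicator = 1
""").fetchall()
print(rows)  # [('2001-01-04',)] -- the day of the 1 -> 0 transition
```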
|
SQL query to identify 0 AFTER a 1
|
[
"",
"sql",
"teradata",
""
] |
I need to drop all User-Defined Types, User-Defined Data Types and User-Defined Tables (all in Types folder). Is it possible to do this using a T-SQL script, or must I use SSMS?
|
```
select 'drop type ' + quotename(schema_name(schema_id)) + '.' + quotename(name)
from sys.types
where is_user_defined = 1
```
You can try getting all your user-defined type objects and generating a script with this query, or use Tasks > Generate Scripts by right-clicking your database.
<http://blog.falafel.com/t-sql-drop-all-objects-in-a-sql-server-database/>
|
For the link posted above,
[T-SQL: Drop All Objects in a SQL Server Database](http://blog.falafel.com/t-sql-drop-all-objects-in-a-sql-server-database/)
I added the following code, based on [Drop Constraint by Stefan Steiger](https://stackoverflow.com/questions/12721823/how-to-drop-all-foreign-key-constraints-in-all-tables), to drop all constraints first. I chose the code below because it uses the same approach.
```
DECLARE @sql nvarchar(MAX)
SET @sql = ''
SELECT @sql = @sql + 'ALTER TABLE ' + QUOTENAME(RC.CONSTRAINT_SCHEMA)
+ '.' + QUOTENAME(KCU1.TABLE_NAME)
+ ' DROP CONSTRAINT ' + QUOTENAME(rc.CONSTRAINT_NAME) + '; '
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS RC
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS KCU1
ON KCU1.CONSTRAINT_CATALOG = RC.CONSTRAINT_CATALOG
AND KCU1.CONSTRAINT_SCHEMA = RC.CONSTRAINT_SCHEMA
AND KCU1.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
```
|
Drop All User-Defined Types from SQL Server
|
[
"",
"sql",
"sql-server-2008",
""
] |
I'm trying to implement a trigger on a SQL Server database, which will insert a new row in `table2` whenever a row is updated in `table1`. I want to get the row data (`table1`) before update.
For example, I have the following rows:
```
table1: table2:
________________________ ________________________
| id | name | | id | name |
________________________ ________________________
| 1 | John |
------------------------
```
Currently I'm using the following trigger-
```
CREATE TRIGGER triggername
ON table1
FOR UPDATE
AS
Begin
INSERT INTO table2(name)
SELECT i.name
FROM Inserted i
End
```
This trigger is doing the following -
```
table1: table2:
________________________ ________________________
| id | name | | id | name |
________________________ ________________________
| 1 | Alex | | 1 | Alex |
------------------------ ------------------------
```
But I want 'John' in table2. For example, table1 and table2 now should be as follows-
```
table1: table2:
________________________ ________________________
| id | name | | id | name |
________________________ ________________________
| 1 | Alex | | 1 | John |
------------------------ ------------------------
```
How can I do that?
|
You need to use the `Deleted` pseudo table in your trigger, which contains the **old** values (before the `UPDATE`):
```
CREATE TRIGGER triggername
ON table1
FOR UPDATE
AS
Begin
INSERT INTO table2(name)
SELECT d.name
FROM Deleted d
End
```
The `Inserted` pseudo table contains the **new** values - after the `UPDATE`.
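For comparison, SQLite expresses the same idea with `OLD` and `NEW` row references instead of the `Deleted`/`Inserted` pseudo tables; a runnable sketch of the trigger from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER, name TEXT);
CREATE TABLE table2 (id INTEGER, name TEXT);
INSERT INTO table1 VALUES (1, 'John');

-- OLD here plays the role of SQL Server's Deleted pseudo table.
CREATE TRIGGER triggername AFTER UPDATE ON table1
BEGIN
    INSERT INTO table2(id, name) VALUES (OLD.id, OLD.name);
END;
""")

con.execute("UPDATE table1 SET name = 'Alex' WHERE id = 1")
print(con.execute("SELECT * FROM table1").fetchall())  # [(1, 'Alex')]
print(con.execute("SELECT * FROM table2").fetchall())  # [(1, 'John')]
```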
|
This will only return the new value:
```
INSERT INTO table2(name)
SELECT i.name
FROM Inserted i
```
This however will return the value before the update
```
INSERT INTO table2(name)
select d.name
from inserted i
join deleted d
on (i.id = d.id)
```
|
Create a trigger that inserts old values into other table when a row is updated
|
[
"",
"sql",
"sql-server",
"triggers",
""
] |
Is there any way to load results from a query that takes more than a minute to return results?
Let me elaborate on the situation. I have a SQL query which is very convoluted: it has multiple joins across 5 different tables because of the level of normalization involved.
So my query returns around 6k records; however, it takes more than a minute to fetch them.
This does not look good on a website, where a user has to wait more than a minute for a page to load. So is there any way I can pass the results along in real time as the query is being executed, rather than waiting for the entire query to complete? I hope everybody understands the point I'm trying to make here...
|
While you've not specified what DBMS or web platform you're using, in general this will not be possible if you are using an ORDER BY in your query, as this will require the whole resultset in order to work.
It sounds like you should instead focus on optimising your query, or getting the results cached so that you don't need to run it every time the page is loaded.
|
I don't know your system, but I will try to give you some advice.
You can load the page without records, if possible, and then load them with AJAX, inserting rows dynamically.
Another possibility is to run this query as the first step in your backend, asynchronously, and collect the result at the last step. This would probably reduce the wait, because the query would execute in parallel with the other steps.
Obviously you should also try to optimize the query and apply some caching method, as @richard said.
|
How to pass results from slow sql query in runtime
|
[
"",
"sql",
"dynamic",
""
] |
Is there an easy way to select every combination of records from two tables? It's kind of hard to explain, but say I have two tables, Client and Product:
```
Client
A
B
C
Product
1
2
3
```
What query would get a result like this:
```
RESULT
A1
A2
A3
B1
B2
B3
C1
C2
C3
```
|
That's called a `cross join` (or a cartesian product):
```
select c.field, p.field
from client c
cross join product p
```
It's fairly straight-forward to combine the columns together at this point.
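A runnable sketch of the cross join, using Python's `sqlite3` purely as a harness (single-column tables stand in for the question's Client and Product):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE client (name TEXT);
CREATE TABLE product (num TEXT);
INSERT INTO client VALUES ('A'),('B'),('C');
INSERT INTO product VALUES ('1'),('2'),('3');
""")

# Cartesian product: every client paired with every product, 3 x 3 = 9 rows.
rows = con.execute("""
    SELECT c.name || p.num
    FROM client c
    CROSS JOIN product p
    ORDER BY 1
""").fetchall()
print([r[0] for r in rows])  # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']
```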
|
Use `CROSS JOIN`
```
SELECT C.client_column
+ CONVERT(VARCHAR(50), P.Product_column)
FROM client C
CROSS JOIN product P
```
|
Select every record in SQL
|
[
"",
"sql",
"sql-server",
"database",
"resultset",
""
] |
I have a table which has a column `CreatedDate` that is of type `varchar(100)`. The rows for that columns are:
```
CreatedDate
--------------
March 6, 2015
March 6, 2015
March 6, 2015
March 6, 2015
March 6, 2015
March 6, 2015
March 7, 2015
March 10, 2015
```
If I run the following query, the dates aren't ordered correctly:
```
SELECT *
FROM [Db].[dbo].[Table1]
ORDER BY [CD] ASC
```
I see the following:
```
CreatedDate
---------------
March 10, 2015
March 6, 2015
March 6, 2015
March 6, 2015
March 6, 2015
March 6, 2015
March 6, 2015
March 7, 2015
```
How can I cast the column so the ordering statement works correctly?
|
Assuming it can all be safely converted to date or datetime:
```
CAST(CD AS DATE)
```
If not you'd have to split the string up and make it a date manually.
|
Change the `ORDER BY` clause to:
```
ORDER BY TRY_PARSE(CreatedDate AS DATE) ASC;
```
If the `CreatedDate` column contains values that aren't valid dates, [TRY\_PARSE](https://msdn.microsoft.com/en-us/library/hh213126.aspx) returns NULL for sort evaluation instead of throwing an error message.
If you have control over the table definition, change the column data type to [DATE](https://msdn.microsoft.com/en-us/library/bb630352.aspx). You won't have to jump through hoops for sorting, invalid date ranges will be prevented at insert, and less space is required.
[SQLFiddle](http://www.sqlfiddle.com/#!6/4889e/2)
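The root cause is that `varchar` values compare lexicographically, character by character, which is why 'March 10' lands before 'March 6' ('1' < '6'). A plain-Python illustration of the two orderings (not database code, just the comparison rule):

```python
from datetime import datetime

dates = ["March 6, 2015", "March 7, 2015", "March 10, 2015"]

# String sort: '1' < '6', so "March 10" comes first -- the bug in the question.
print(sorted(dates))
# ['March 10, 2015', 'March 6, 2015', 'March 7, 2015']

# Date sort: parse each string into a real date before comparing.
print(sorted(dates, key=lambda s: datetime.strptime(s, "%B %d, %Y")))
# ['March 6, 2015', 'March 7, 2015', 'March 10, 2015']
```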
|
How to convert varchar to date for ordering in SQL Server
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have a simple table:
```
column 1 column2 column3
AA AAAA TT
AA BB EE
AB C AABBCC
ABC XXX XYZ
AABB YYY A
```
Now I tried to find all columns that match e.g. 'AA%', resulting in:
```
column 1 column2 column3
**AA** **AAAA** null
**AA** null null
null null *AABBCC*
null null null
**AABB** null null
```
Is that possible with one simple query? My idea was to start with unions or temporary tables, but I can't get it to work. Sorry for this simple beginner question, and thanks in advance!
```
select d1.column1, d1.column2, d1.column3 from sample d1 where d1.column1 like 'AA%'
union
select d2.column1, d2.column2, d2.column3 from sample d2 where d2.column2 like 'AA%'
union
select d3.column1, d3.column2, d3.column3 from sample d3 where d3.column3 like 'AA%'
```
|
You could just use OR key-word as this:
```
SELECT column1, column2, column3
FROM sample
WHERE column1 like 'AA%'
OR column2 like 'AA%'
OR column3 like 'AA%'
```
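With the sample data from the question, the single-pass `OR` query can be verified in SQLite (Python's `sqlite3` is only the harness here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sample (column1 TEXT, column2 TEXT, column3 TEXT);
INSERT INTO sample VALUES
 ('AA','AAAA','TT'), ('AA','BB','EE'), ('AB','C','AABBCC'),
 ('ABC','XXX','XYZ'), ('AABB','YYY','A');
""")

# One pass over the table; a row qualifies if ANY of its columns matches.
rows = con.execute("""
    SELECT column1, column2, column3
    FROM sample
    WHERE column1 LIKE 'AA%' OR column2 LIKE 'AA%' OR column3 LIKE 'AA%'
""").fetchall()
print(len(rows))  # 4 -- only the ('ABC','XXX','XYZ') row has no 'AA%' value
```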
|
Your union query is also correct, but for simplicity I agree with the `OR` query.
|
PostgreSQL: finding multiple columns that match a like clause
|
[
"",
"sql",
"postgresql",
"sql-like",
""
] |
So I'm trying to get a part of a value from a column and insert that part into another column - new column. BOTH columns are in the same table. So what i want should look something like this:
```
id newColumn oldColumn
1 12 123 some text
2 24 246 some text
....
```
I know how to get 12 and 24 using SUBSTR, but how do I enter the data for each row in the table? Should I be using a self-join or something else?
|
First you have to add the new column using the following command:
```
ALTER TABLE TAB_NAME
ADD COLUMN COL_NAME VARCHAR(10);
```
After that, execute this command:
```
UPDATE TAB_NAME
SET COL_NAME = SUBSTRING(OLDCOLUMN, 1, 2);
```
I think this might help you.
|
No need to join, it's just a plain `UPDATE`:
```
update tablename set newColumn = substring(oldColumn from 1 for 2)
```
`substring` is ANSI SQL, some dbms have substr and other versions.
The question is why you are doing this? What do you expect to find in newColumn if someone later updates oldColumn to another value? Maybe you should have a view instead, where newColumn always has up to date values?
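The two-step recipe (add the column, then backfill it) can be sketched end to end in SQLite; `substr(x, 1, 2)` is the SQLite spelling of the standard substring:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER, oldColumn TEXT);
INSERT INTO t VALUES (1, '123 some text'), (2, '246 some text');

-- Step 1: add the new column.  Step 2: fill it with one plain UPDATE, no join.
ALTER TABLE t ADD COLUMN newColumn TEXT;
UPDATE t SET newColumn = substr(oldColumn, 1, 2);
""")

print(con.execute("SELECT id, newColumn FROM t ORDER BY id").fetchall())
# [(1, '12'), (2, '24')]
```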
|
How to store part of a field value in another column in SQL
|
[
"",
"sql",
"substr",
"self-join",
""
] |
I have the SQL code to generate the date list between two dates, but I want to generate only the week days (business days) between the two given dates:
```
DECLARE @MinDate DATE = '20140101', @MaxDate DATE = '20140106';
SELECT TOP (DATEDIFF(DAY, @MinDate, @MaxDate) + 1)
Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.object_id) - 1,@MinDate)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;
```
This is my code; can anyone please suggest how to get the list of weekdays alone?
Online sources have code to find the number of days, not to list all the dates; there lies my confusion.
|
Try this:
```
DECLARE @MinDate DATE = '20140101',
@MaxDate DATE = '20140106'
;WITH N1 (N) AS (SELECT 1 FROM (VALUES (1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) n (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
N3 (N) AS (SELECT 1 FROM N2 AS N1 CROSS JOIN N2 AS N2),
N4 (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N3 AS N1 CROSS JOIN N3 AS N2)
SELECT Date = DATEADD(DAY, N - 1, @MinDate)
FROM N4
WHERE
N < DATEDIFF(DAY, @MinDate, @MaxDate) + 2 AND
DATEDIFF(DAY, 1 - N, @MinDate) % 7 NOT IN (5,6)
```
Result:
```
Date
2014-01-01
2014-01-02
2014-01-03
2014-01-06
```
|
Make the original query a `sub-select` which generates all the dates between the two given dates, then do the filtering in the `outer query` (note that the derived table needs an alias):
```
SET DATEFIRST 1
select [Date] from
(
SELECT TOP (DATEDIFF(DAY, @MinDate, @MaxDate) + 1)
Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.object_id) - 1,@MinDate)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
) d
where datename(dw,[Date]) not in ('Saturday','Sunday')
```
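The same generate-then-filter idea ports to other engines; in SQLite a recursive CTE produces the date range and `strftime('%w', ...)` ('0' = Sunday, '6' = Saturday) drops the weekend days:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # recursive CTEs need SQLite >= 3.8.3

# Generate every date in the range, then drop Saturdays and Sundays.
rows = con.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT date('2014-01-01')
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < date('2014-01-06')
    )
    SELECT d FROM days WHERE strftime('%w', d) NOT IN ('0', '6')
""").fetchall()
print([r[0] for r in rows])
# ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-06']
```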
|
List all working dates between two dates in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012-express",
""
] |
I have a row of the database like this:
```
1|abc|10|30|12
```
The biggest value is 30 and the second highest is 12. How can I get the second value for each row in my table?
|
This will work with any number of columns, just make sure you add them to the concatenated list where labelled `merge_col` in my query below:
```
select col1, col2, col3, col4, col5, second_highest
from (select x.*,
regexp_substr(merge_col, '[^|]+', 1, levels.column_value) as second_highest,
row_number() over(partition by x.col1 order by to_number(regexp_substr(merge_col, '[^|]+', 1, levels.column_value)) desc) as rn
from (select t.*, col3 || '|' || col4 || '|' || col5 as merge_col
from tbl t) x,
table(cast(multiset
(select level
from dual
connect by level <=
length(regexp_replace(merge_col,
'[^|]+')) + 1) as
sys.OdciNumberList)) levels)
where rn = 2
```
**Fiddle test:** <http://sqlfiddle.com/#!4/b446f/2/0>
In other words, for additional columns, change:
```
col3 || '|' || col4 || '|' || col5 as merge_col
```
to:
```
col3 || '|' || col4 || '|' || col5 || '|' || col6 ......... as merge_col
```
with however many columns there are in place of the `......`
|
There are different ways to achieve that; try this one, for instance:
```
SELECT MAX(column_name) FROM table_name
WHERE column_name NOT IN (SELECT MAX(column_name) FROM table_name)
```
You basically exclude the highest number from your query and then select the highest out of the rest. (Note that this finds the second-highest value in a *column*; the second-highest within a single row needs a per-row approach like the one above.)
|
How to find the 2nd highest value in a row with Oracle SQL?
|
[
"",
"sql",
"oracle",
""
] |
Example
if two products
```
id name
1 product A
2 product B
```
And for each products I've attributes
```
id product_id value
1 1 1
2 1 2
3 2 3
3 2 4
```
And I need to select products by value of attributes.
I need products which have attributes with 1 AND 2 values.
This query doesn't work:
```
SELECT *
FROM product
LEFT JOIN attribute ON product.id = attribute.product_id
WHERE attribute.value = 1 AND attribute.value = 2;
```
|
Do a group by to find product id's with both 1 and 2 attributes. Select from products where product id found by that group by:
```
SELECT *
FROM product_table
WHERE id IN (select product_id
from attribute_table
where value in (1,2)
group by product_id
having count(distinct value) = 2)
```
Alternative solution, double join:
```
SELECT *
FROM product_table
JOIN attribute_table a1 ON product_table.id = a1.product_id
AND a1.value = 1
JOIN attribute_table a2 ON product_table.id = a2.product_id
AND a2.value = 2
```
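A runnable check of the `HAVING COUNT(DISTINCT ...)` variant with the question's sample data (the duplicated attribute id in the question is assumed to be a typo and written as 4 here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (id INTEGER, name TEXT);
CREATE TABLE attribute (id INTEGER, product_id INTEGER, value INTEGER);
INSERT INTO product VALUES (1,'product A'),(2,'product B');
INSERT INTO attribute VALUES (1,1,1),(2,1,2),(3,2,3),(4,2,4);
""")

# HAVING COUNT(DISTINCT value) = 2 keeps only products with BOTH values 1 and 2.
rows = con.execute("""
    SELECT * FROM product
    WHERE id IN (SELECT product_id FROM attribute
                 WHERE value IN (1,2)
                 GROUP BY product_id
                 HAVING COUNT(DISTINCT value) = 2)
""").fetchall()
print(rows)  # [(1, 'product A')]
```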
|
To rephrase your question, you really need those products, which has both `1` and `2` *within* the values of their attributes:
```
SELECT product.*
-- , array_agg(attribute.value) attribute_values
-- uncomment the line above, if needed
FROM product
LEFT JOIN attribute ON product.id = attribute.product_id
GROUP BY product.id
HAVING array_agg(attribute.value) @> ARRAY[1, 2];
```
|
SELECT one entry with two left joins. SQL
|
[
"",
"sql",
"postgresql",
""
] |
I am creating a testing system where users are allowed to re-test until they have passed. I would like to get a list, for a given UserID, of tests which are assigned to them which they have scored less than passing (100% for this example) on.
I have the following tables:
> (Everything here has been adapted for simplicity, but it should all be valid still)
```
Users
(Generic "users" table with UserID and Name, etc...)
Tests
+--------+----------+------------------+
| TestID | TestName | OtherTestColumns |
+--------+----------+------------------+
| 1 | Test 1 | Blah.... |
| 2 | Test 2 | Blah.... |
| 3 | Test 3 | Blah.... |
| 4 | Test 4 | Blah.... |
+--------+----------+------------------+
Users_Have_Tests
Users are assigned tests they must take with this table
+--------+--------+
| UserID | TestID |
+--------+--------+
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 2 | 1 |
| 2 | 2 |
| 2 | 3 |
+--------+--------+
TestResults
+--------+--------+------------+
| TestID | UserID | Percentage |
+--------+--------+------------+
| 1 | 1 | 75 |
| 1 | 1 | 100 |
| 2 | 1 | 80 |
| 2 | 1 | 100 |
| 1 | 2 | 100 |
| 2 | 2 | 75 |
+--------+--------+------------+
```
The following query works for me to figure out ALL tests which are assigned to a user, but I want to remove tests they have passed from this list
```
SELECT DISTINCT
TestID,
TestName
FROM
`Users` AS u
RIGHT JOIN
`Users_have_Tests` AS ut
ON
u.UserID = ut.UserID
LEFT JOIN
`Tests` AS t
ON
ut.TestID = t.TestID
WHERE UserID = 1
```
I feel like I need a subquery here. I came up with the following query to find the max scores for each test a user has taken, but I'm not quite sure how to tie them together. (Subqueries are very new to me and I'm still trying to understand them.)
```
SELECT DISTINCT
TestID,
MAX(Percentage)
FROM
`TestResults`
WHERE UserID = 1
GROUP BY TestID
```
The results I am looking for are:
```
UserID = 1
+--------+----------+
| TestID | TestName |
+--------+----------+
| 3 | Test 3 |
+--------+----------+
UserID = 2
+--------+----------+
| TestID | TestName |
+--------+----------+
| 2 | Test 2 |
| 3 | Test 3 |
+--------+----------+
```
I have tried several manners of joins and conditions, but I can't quite wrap my head around what exactly I need here. Google and StackOverflow have failed me; Likely because I'm lacking the knowledge of what to search for.
**EDIT**
The following is what worked for me:
```
SELECT DISTINCT t.TestID, t.TestName
FROM 'Users' AS u
RIGHT JOIN 'Users_have_Tests' AS ut
ON u.UserID = ut.UserID
LEFT JOIN 'Tests' AS t
ON ut.TestID = t.TestID
WHERE t.TestID NOT IN (
SELECT tr.TestID
FROM TestResults AS tr
WHERE tr.Percentage >= 100
AND tr.UserID = 2
)
AND u.UserID = 2
```
|
I think I understand what you are after now, give this a go:
```
SELECT DISTINCT
TestID,
TestName
FROM
`Users` AS u
RIGHT JOIN
`Users_have_Tests` AS ut
ON
u.UserID = ut.UserID
LEFT JOIN
`Tests` AS t
ON
ut.TestID = t.TestID
WHERE TestID NOT IN (SELECT TestID
FROM TestResults
Where Percentage = 100)
GROUP BY UserID
```
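Following the corrected query in the question's edit (the subquery must also filter by the user, or one user's pass would hide the test for everyone), a runnable sketch with the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Tests (TestID INTEGER, TestName TEXT);
CREATE TABLE Users_have_Tests (UserID INTEGER, TestID INTEGER);
CREATE TABLE TestResults (TestID INTEGER, UserID INTEGER, Percentage INTEGER);
INSERT INTO Tests VALUES (1,'Test 1'),(2,'Test 2'),(3,'Test 3'),(4,'Test 4');
INSERT INTO Users_have_Tests VALUES (1,1),(1,2),(1,3),(2,1),(2,2),(2,3);
INSERT INTO TestResults VALUES
 (1,1,75),(1,1,100),(2,1,80),(2,1,100),(1,2,100),(2,2,75);
""")

def unpassed(user_id):
    # Assigned tests minus the ones this user already passed with 100%.
    return con.execute("""
        SELECT t.TestID, t.TestName
        FROM Users_have_Tests ut
        JOIN Tests t ON t.TestID = ut.TestID
        WHERE ut.UserID = ?
          AND t.TestID NOT IN (SELECT TestID FROM TestResults
                               WHERE UserID = ? AND Percentage >= 100)
        ORDER BY t.TestID
    """, (user_id, user_id)).fetchall()

print(unpassed(1))  # [(3, 'Test 3')]
print(unpassed(2))  # [(2, 'Test 2'), (3, 'Test 3')]
```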
|
Have you tried the following?
Edit I noticed you want to see the highest score, so I created a subquery
Edit I forgot a grouping.
Also, Test needs to be joined on both the TestID and UserID.
```
SELECT
TestID,
TestName,
HighestScore
FROM
`Users` AS u
RIGHT JOIN
`Users_have_Tests` AS ut
ON
u.UserID = ut.UserID
LEFT JOIN
( SELECT TestID, UserID, MAX(Percentage) AS HighestScore
FROM `Tests`
GROUP BY TestID, UserID
)AS t
ON
ut.TestID = t.TestID
AND ut.UserID = t.UserID
WHERE HighestScore < 100
GROUP BY TestID, TestName, HighestScore
```
|
Retrieving only rows for which a join has a dependency
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
I am trying to obtain all posts created by all users a given user follows.
Tables: posts, users, followers
## Users
```
id int(255)
```
Which is associated with:
## Followers
```
id int(255)
user_id int(255) //The id of person that's is following
following int(255) //The id of person being followed
```
Which is also associated with:
## Posts
```
id int(255)
user_id(255) //The id of person who created the post
```
I don't know what is needed here to achieve this in a single query. Normally, I would obtain all users the user follows and then loop through all users obtaining the posts of each. Using this approach, my results are first sorted by user, then sorted by date. I want to sort all posts by date.
Any suggestions?
|
A simple join should suffice. The `?` is where the user id of the 'follower' goes.
```
SELECT * FROM Posts p
INNER JOIN Followers f ON (f.following = p.user_id)
WHERE f.user_id = ?;
```
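A runnable check of the single-join approach with minimal sample data (only the columns that matter are modeled; ids are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE followers (id INTEGER, user_id INTEGER, following INTEGER);
CREATE TABLE posts (id INTEGER, user_id INTEGER, created TEXT);
-- user 1 follows users 2 and 3
INSERT INTO followers VALUES (1,1,2),(2,1,3);
INSERT INTO posts VALUES
 (10,2,'2015-01-02'),(11,3,'2015-01-01'),(12,2,'2015-01-03'),(13,4,'2015-01-04');
""")

# One join fetches every post whose author is followed by user 1,
# sorted globally by date rather than per-author.
rows = con.execute("""
    SELECT p.id
    FROM posts p
    JOIN followers f ON f.following = p.user_id
    WHERE f.user_id = 1
    ORDER BY p.created DESC
""").fetchall()
print([r[0] for r in rows])  # [12, 10, 11] -- user 4's post is excluded
```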
|
To do this query you just have to join multiple tables:
```
SELECT P.*
FROM users U
INNER JOIN followers F ON (F.user_id = U.id)
INNER JOIN users FU ON (FU.id = F.following)
INNER JOIN posts P ON (P.user_id = FU.id)
WHERE U.id = :id
```
|
Select all posts created by all users a given user follows
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to get a count of how many times BNxxxx has been mentioned in the comments cell. So far, I can make each cell be counted once, but there may be multiple comments in a cell containing BNxxxx.
For example, this:
```
-------
BN0012
-------
BN0012
-------
BN0012
BN0123
-------
```
should show an output of BN0012 3 times and BN0123 once. Instead, I get BN0012 3 times only.
Here's my code:
```
select COMMENTS, count(*) as TOTAL
from NOTE
Where COMMENTS like '%BN%' AND CREATE_DATE between '01/1/2015' AND '11/03/2015'
group by COMMENTS
order by Total desc;
```
Any ideas?
Edit
My code now looks like:
```
select BRIDGE_NO, count(*)
from IACD_ASSET b join
IACD_NOTE c
on c.COMMENTS like concat(concat('BN',b.BRIDGE_NO),'%')
Where c.CREATE_DATE between '01/1/2015' AND '11/03/2015' AND length(b.BRIDGE_NO) > 1
group by b.BRIDGE_NO
order by count(*);
```
The problem with this is that BN44 matches BN4455 as well. I have tried `concat(concat('BN',b.BRIDGE_NO),'_')` but it comes back with nothing. Any ideas how I can get exact matches?
|
I'm going to assume that your `COMMENTS` table has a primary key column (such as `comment_id`) or at least that `comments` isn't a `CLOB`. If it is a `CLOB` then you're not going to be able to use `GROUP BY` on that column.
You can accomplish this as follows without even a lookup table of `BN....` values. No guarantees as to the performance:
```
WITH d1 AS (
SELECT 1 AS comment_id, 'BN0123 is a terrible thing BN0121 also BN0000' AS comments
, date'2015-01-03' AS create_date
FROM dual
UNION ALL
SELECT 2 AS comment_id, 'BN0125 is a terrible thing BN0120 also BN1000' AS comments
, date'2015-02-03' AS create_date
FROM dual
)
SELECT comment_id, comments, COUNT(*) AS total FROM (
SELECT comment_id, comments, TRIM(REGEXP_SUBSTR(comments, '(^|\s)BN\d+(\s|$)', 1, LEVEL, 'i')) AS bn
FROM d1
WHERE create_date >= date'2015-01-01'
AND create_date < date'2015-11-04'
CONNECT BY REGEXP_SUBSTR(comments, '(^|\s)BN\d+(\s|$)', 1, LEVEL, 'i') IS NOT NULL
AND PRIOR comment_id = comment_id
AND PRIOR DBMS_RANDOM.VALUE IS NOT NULL
) GROUP BY comment_id, comments;
```
Note that I corrected your filter:
```
CREATE_DATE between '01/1/2015' AND '11/03/2015'
```
First, you should be using ANSI date literals (e.g., `date'2015-01-01'`); second, using `BETWEEN` for dates is often a bad idea as Oracle `DATE` values contain a time portion. So this should be rewritten as:
```
create_date >= date'2015-01-01'
AND create_date < date'2015-11-04'
```
Note that the later date is November *4*, to make sure we capture all possible comments that were made on November *3*.
If you want to see the matched comments without aggregating the counts, then do the following (taking out the outer query, basically):
```
WITH d1 AS (
SELECT 1 AS comment_id, 'BN0123 is a terrible thing BN0121 also BN0000' AS comments
, date'2015-01-03' AS create_date
FROM dual
UNION ALL
SELECT 2 AS comment_id, 'BN0125 is a terrible thing BN0120 also BN1000' AS comments
, date'2015-02-03' AS create_date
FROM dual
)
SELECT comment_id, comments, TRIM(REGEXP_SUBSTR(comments, '(^|\s)BN\d+(\s|$)', 1, LEVEL, 'i')) AS bn
FROM d1
WHERE create_date >= date'2015-01-01'
AND create_date < date'2015-11-04'
CONNECT BY REGEXP_SUBSTR(comments, '(^|\s)BN\d+(\s|$)', 1, LEVEL, 'i') IS NOT NULL
AND PRIOR comment_id = comment_id
AND PRIOR DBMS_RANDOM.VALUE IS NOT NULL;
```
---
Given the edits to your question, I think you want something like the following:
```
SELECT b.bridge_no, COUNT(*) AS comment_cnt
FROM iacd_asset b INNER JOIN iacd_note c
ON REGEXP_LIKE(c.comments, '(^|\W)BN' || b.bridge_no || '(\W|$)', 'i')
WHERE c.create_dt >= date'2015-01-01'
AND c.create_dt < date'2015-03-12' -- It just struck me that your dates are dd/mm/yyyy
AND length(b.bridge_no) > 1
GROUP BY b.bridge_no
ORDER BY comment_cnt;
```
Note that I am using `\W` in the regex above instead of `\s` as I did earlier to make sure that it captures things like `BN1234/BN6547`.
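The boundary handling is the crux: without it, BN44 also matches inside BN4455, which is exactly the problem raised in the question's edit. The same word-boundary idea in plain Python, with made-up comment strings:

```python
import re
from collections import Counter

comments = [
    "BN0012 repaired",
    "BN0012 checked",
    "BN0012 and BN0123 inspected",
    "BN4455 resurfaced, not BN44",
]

# \b word boundaries keep BN44 from matching inside BN4455.
counts = Counter(m for c in comments for m in re.findall(r"\bBN\d+\b", c))
print(counts)  # BN0012 -> 3, BN0123 -> 1, BN4455 -> 1, BN44 -> 1
```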
|
You have a problem. Let me assume that you have a table of all known BN values that you care about. Then you can do something like:
```
select bn.fullbn, count(*)
from tableBN bn join
comments c
on c.comment like ('%' || bn.fullbn || '%')
group by bn.fullbn;
```
The performance of this might be quite slow.
If you happen to be storing lists of things in the comment field, then this is a very bad idea. You should not store lists in strings; you should use a junction table.
|
SQL using Count, with same "Like" multiple times in same cell
|
[
"",
"sql",
"oracle",
"count",
"cell",
"sql-like",
""
] |
I am trying to run this query on pervasive database
```
select cust_no,cust_name,sum(bvtotal) as Amount
from sales_history_header
where cust_no is not null and number is not null and bvtotal > 1000 and in_date < 20140101
group by cust_no,cust_name
order by sum(bvtotal) desc;
```
How do I exclude those grouped results which had in\_date > 20140101 in their sub-results?
The query I have is also fetching results which had in\_date > 20140101.
Am I doing something wrong?
The sample output I am getting is in this format
```
cust_no cust_name amount
A a1 500
B b1 500
C c1 1000
```
I want to exclude this record with cust\_no 'A' because it had a transaction with in\_date in 2014.
consider in my raw data I have records like
```
cust_no cust_name amount in_date
A a1 100 20130203
A a1 400 20130101
A a1 1000 20140503
```
|
You need to exclude all records based on a set of ids. Typically, you do this with a subquery:
```
SELECT cust_no,
cust_name,
sum(bvtotal) AS Amount
FROM sales_history_header
WHERE cust_no IS NOT NULL
AND number IS NOT NULL
AND bvtotal > 1000
AND cust_no NOT IN (
SELECT cust_no
FROM sales_history_header
WHERE in_date >= 20140101
AND cust_no IS NOT NULL
)
GROUP BY cust_no,
cust_name
ORDER BY sum(bvtotal) DESC;
```
The `AND cust_no IS NOT NULL` portion of the subquery is to avoid problems with `NOT IN` and `NULL` values. You may have better performance if you rewrite this as a `NOT EXISTS` correlated subquery, but in my experience MySQL is pretty bad at those.
Another alternative is the more explicit self-anti-join approach (LEFT JOIN and filter where right table is null) but that is kind of... sketchy feeling?... because you appear to allow `cust_no` to be `NULL` and because it's a query that's aggregating so it feels like you have to worry about multiplying rows:
```
SELECT s1.cust_no,
s1.cust_name,
sum(s1.bvtotal) AS Amount
FROM sales_history_header s1
LEFT JOIN (
SELECT cust_no
FROM sales_history_header
WHERE cust_no IS NOT NULL
AND number IS NOT NULL
AND bvtotal > 1000
AND in_date >= 20140101) s2
ON s2.cust_no = s1.cust_no
WHERE s1.cust_no IS NOT NULL
AND s1.number IS NOT NULL
AND s1.bvtotal > 1000
AND s2.cust_no IS NULL
GROUP BY cust_no,
cust_name
ORDER BY sum(bvtotal) DESC;
```
The `LEFT JOIN` combined with `WHERE [...] s2.cust_no IS NULL` is the part that eliminates the records you don't want.
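With the question's raw rows, the `NOT IN` exclusion can be verified directly (SQLite in-memory; only the columns that matter are modeled, and rows for B and C are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales_history_header
 (cust_no TEXT, cust_name TEXT, bvtotal INTEGER, in_date INTEGER);
INSERT INTO sales_history_header VALUES
 ('A','a1',100,20130203), ('A','a1',400,20130101), ('A','a1',1000,20140503),
 ('B','b1',500,20131001), ('C','c1',1000,20121212);
""")

# NOT IN removes any customer with even one transaction on/after the cutoff.
rows = con.execute("""
    SELECT cust_no, SUM(bvtotal)
    FROM sales_history_header
    WHERE in_date < 20140101
      AND cust_no NOT IN (SELECT cust_no FROM sales_history_header
                          WHERE in_date >= 20140101)
    GROUP BY cust_no
    ORDER BY cust_no
""").fetchall()
print(rows)  # [('B', 500), ('C', 1000)] -- customer A is gone
```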
|
I think you need to specify your date constant as a date, unless you really want to be dealing with integers.
```
select cust_no,cust_name,sum(bvtotal) as Amount
from sales_history_header
where cust_no is not null and number is not null and bvtotal > 1000 and in_date < '2014-01-01'
group by cust_no,cust_name
order by sum(bvtotal) desc;
```
|
pervasive query with group by issue
|
[
"",
"mysql",
"sql",
"pervasive-sql",
""
] |
I have field REPORTDATE (DATETIME).
In SQL Developer I can see its value in this format:
`29.10.2013 17:08:08`
I found that in order to select just the DATE I need to execute this:
```
SELECT TO_DATE (REPORTDATE, 'DD.MON.YYYY') AS my_date
FROM TABLE1
```
but it returns `ORA-01843: not a valid month`
I want result to return only 29.10.2013
|
> TO\_DATE (REPORTDATE, 'DD.MON.YYYY')
This makes no sense. You are converting a date into a date again. You use **TO\_DATE** to convert a string literal into **DATE**.
> I want result to return only 29.10.2013
You could use **TRUNC** to truncate the time element. If you want to use this value for DATE calculations, you could use it directly.
For example,
```
SQL> select TRUNC(SYSDATE) dt FROM DUAL;
DT
---------
12-MAR-15
```
To display in a particular format, you could use **TO\_CHAR** and proper **FORMAT MASK**.
```
SQL> SELECT to_char(SYSDATE, 'DD.MM.YYYY') dt from dual;
DT
----------
12.03.2015
SQL>
```
|
Use this:
```
SELECT trunc(REPORTDATE, 'DD') AS my_date
FROM TABLE1
```
This will not change the type of the returning object, only truncates everything below "day" level.
If you are ok with returning a String, then you can just do:
```
SELECT TO_CHAR(REPORTDATE, 'DD.MM.YYYY') AS my_date
FROM TABLE1
```
|
Oracle SQL "SELECT DATE from DATETIME field "
|
[
"",
"sql",
"oracle",
"date",
"select",
""
] |
I have a table named `Click` in sql database. This table stores clicks of everything. This table has a field named `EntityTypeId` that says this row is for which entity. I want to fetch a special type of clicks and store them in another table. So I should select them from `Click` table and insert them in `PriceClick` table.
Here is the `Click` table records:
```
ClickId | EntityTypeId
------------------------
1 1
2 2
3 2
4 3
```
Here are the required rows for the `Click` table and the `PriceClick` table:
`Click`:
```
ClickId | EntityTypeId
------------------------
1 1
4 3
```
`PriceClick`:
```
ClickId | EntityTypeId
------------------------
2 2
3 2
```
How to do this?
|
Use `INSERT INTO`:
```
INSERT INTO PriceClick(ClickId, EntityTypeId)
SELECT ClickId, EntityTypeId
FROM Click
WHERE EntityTypeId = 2
```
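If the rows should also disappear from the source table (as the question's "required rows" suggest), pair the `INSERT INTO ... SELECT` with a `DELETE` using the same predicate; a runnable sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Click (ClickId INTEGER, EntityTypeId INTEGER);
CREATE TABLE PriceClick (ClickId INTEGER, EntityTypeId INTEGER);
INSERT INTO Click VALUES (1,1),(2,2),(3,2),(4,3);

-- Copy the matching clicks over, then remove them from the source table.
INSERT INTO PriceClick(ClickId, EntityTypeId)
    SELECT ClickId, EntityTypeId FROM Click WHERE EntityTypeId = 2;
DELETE FROM Click WHERE EntityTypeId = 2;
""")

print(con.execute("SELECT * FROM Click").fetchall())       # [(1, 1), (4, 3)]
print(con.execute("SELECT * FROM PriceClick").fetchall())  # [(2, 2), (3, 2)]
```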
|
Use `SELECT INTO` if you don't have the destination table yet, or use `INSERT INTO`, as the other answer (@wewesthemenace) mentioned, if you have already created your destination table:
```
SELECT *
INTO DestinationTable
FROM SourceTable
WHERE ....
```
|
How to fetch a list of rows and foreach in it in sql?
|
[
"",
"sql",
"sql-server",
"sqlcommand",
""
] |
I get an error when I try to insert, and it finally inserts after a couple of retries without any error.
```
SQL Error: ORA-00001: unique constraint (ALERTS2.PK_UP_MODULE_MASTER) violated.
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
```
**INSERT statement:**
```
INSERT INTO up_module_master
(
mr_id,
mr_name,
mr_desc,
mr_parent,
created_by,
created_date,
modified_date,
module_type,
module_link,
link_text,
help_text,
shortcut_link,
shortcut_name,
shortcut_flag,
module_display_name
)
SELECT max(u1.mr_id)+1,
'Notification Engine',
'Notification Engine Module',
0,1,
SYSDATE,
'',
'',
'../../jalsweb/Logout.jsp',
'HTTP',
'',
'',
'',
0,
'Notification Engine'
FROM up_module_master u1;
```
**Below is the table definition:**
```
CREATE TABLE "up_module_master"
(
"mr_id" NUMBER (10, 0) NOT NULL ENABLE,
"mr_name" VARCHAR2 (200 byte) NOT NULL ENABLE,
"mr_desc" VARCHAR2 (250 byte),
"mr_parent" NUMBER,
"created_by" NUMBER,
"created_date" TIMESTAMP (6),
"modified_date" TIMESTAMP (6),
"module_type" VARCHAR2 (100 byte),
"module_link" VARCHAR2 (200 byte),
"link_text" VARCHAR2 (250 byte),
"help_text" VARCHAR2 (250 byte),
"shortcut_link" VARCHAR2 (400 byte),
"shortcut_name" VARCHAR2 (100 byte),
"shortcut_flag" NUMBER,
"module_display_name" VARCHAR2 (100 byte),
"audit_type" VARCHAR2 (100 byte),
"service_id" NUMBER,
"module_regis_type" NUMBER DEFAULT 1,
CONSTRAINT "PK_UP_MODULE_MASTER" PRIMARY KEY ("mr_id"),
CONSTRAINT "UP_MODULE_MASTER_UP_MODUL_FK1" FOREIGN KEY ("mr_parent")
REFERENCES "up_module_master" ("mr_id") ENABLE
)
```
|
Looks like `MR_ID` is not an identity column. If another process attempts to insert a record at the same time as you, both will try to insert the same ID and you will get this error. I suggest that you change the column to be auto-incrementing: create a sequence plus a trigger on versions before Oracle 12c, or use an identity column on later versions:
[Oracle: how to create an identity column?](https://stackoverflow.com/questions/2053313/oracle-how-to-create-an-identity-column)
The link above has all the details for doing this.
|
The problem is that someone else inserted and committed a row with the same key in the meantime.
Consider using a sequence instead of `MAX(mr_id)+1` (it looks like a sequence may already be defined):
*SELECT seq\_name.nextval, ... FROM dual*
|
How do I fix SQL Error: ORA-00001: unique constraint (ALERTS2.PK_UP_MODULE_MASTER) violated
|
[
"",
"sql",
"oracle",
""
] |
I have a function that needs to perform a long update on multiple large tables. During the update 2-3 tables at a time need to be locked in EXCLUSIVE mode.
Since not all the tables need to be locked at the same time, ideally I'd want to LOCK only those tables I'm updating at the time, and then remove the lock once I'm done.
Eg.
```
-- Lock first pair of tables
LOCK TABLE tbl1_a IN EXCLUSIVE MODE;
LOCK TABLE tbl1_b IN EXCLUSIVE MODE;
-- Perform the update on tbl1_a and tbl1_b
-- Release the locks on tbl1_a and tbl1_b
-- HOW???
-- Proceed to the next pair of tables
LOCK TABLE tbl2_a IN EXCLUSIVE MODE;
LOCK TABLE tbl2_b IN EXCLUSIVE MODE;
```
Unfortunately, there is no equivalent of an UNLOCK statement in plpgsql. The normal way to release a LOCK is to COMMIT the transaction, but that is not possible inside a function.
Is there any solution for this? Some way to explicitly release the lock before function is done? Or run some kind of sub-transaction (perhaps by running each update in a separate function)?
UPDATE
I accepted that there is no solution. I'll write each update into a separate function and coordinate from outside the db. Thanks everyone.
|
In **Postgres 11** or later, consider a **`PROCEDURE`** which allows transaction control. See:
* [Do stored procedures run in database transaction in Postgres?](https://stackoverflow.com/questions/28250376/do-stored-procedures-run-in-database-transaction-in-postgres/28251922#28251922)
---
With **functions**, there is **no way**. Functions in Postgres are atomic (always inside a transaction) and locks are released at the end of a transaction.
You might be able to work around this with [**advisory locks**](https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS). But those are not the same thing. All competing transactions have to play along. Concurrent access that is not aware of advisory locks will spoil the party.
Code example on dba.SE:
* [Postgres UPDATE ... LIMIT 1](https://dba.stackexchange.com/a/69497/3684)
Or you might get somewhere with "cheating" autonomous transactions with dblink:
* [How do I do large non-blocking updates in PostgreSQL?](https://stackoverflow.com/questions/1113277/how-do-i-do-large-non-blocking-updates-in-postgresql/22163648#22163648)
* [Does Postgres support nested or autonomous transactions?](https://stackoverflow.com/questions/25425944/postgres-supports-nested-transaction-or-autonomus-transactions-or-not/25428060#25428060)
Or you re-assess your problem and split it up into a couple of separate transactions.
|
In pg11 you now have `PROCEDURE`s which let you release locks via `COMMIT`. I just converted a bunch of parallel executed functions running `ALTER TABLE ... ADD FOREIGN KEY ...` with lots of deadlock problems and it worked nicely.
<https://www.postgresql.org/docs/current/sql-createprocedure.html>
|
PostgreSQL obtain and release LOCK inside stored function
|
[
"",
"sql",
"postgresql",
"plpgsql",
""
] |
I have this:
```
SELECT ROW_NUMBER() OVER (ORDER BY vwmain.ch) as RowNumber,
vwmain.vehicleref,vwmain.capid,
vwmain.manufacturer,vwmain.model,vwmain.derivative,
vwmain.isspecial,
vwmain.created,vwmain.updated,vwmain.stocklevel,
vwmain.[type],
vwmain.ch,vwmain.co2,vwmain.mpg,vwmain.term,vwmain.milespa
FROM vwMain_LATEST vwmain
INNER JOIN HomepageFeatured
on vwMain.vehicleref = homepageFeatured.vehicleref
WHERE homepagefeatured.siteskinid = 1
AND homepagefeatured.Rotator = 1
AND RowNumber = 1
ORDER BY homepagefeatured.orderby
```
It fails on "Invalid column name RowNumber"
Not sure how to prefix it to access it?
Thanks
|
You can't reference the field like that. You can however use a subquery or a common-table-expression:
Here's a subquery:
```
SELECT *
FROM (
SELECT ROW_NUMBER() OVER (ORDER BY vwmain.ch) as RowNumber,
vwmain.vehicleref,vwmain.capid,
vwmain.manufacturer,vwmain.model,vwmain.derivative,
vwmain.isspecial,
vwmain.created,vwmain.updated,vwmain.stocklevel,
vwmain.[type],
vwmain.ch,vwmain.co2,vwmain.mpg,vwmain.term,vwmain.milespa,
homepagefeatured.orderby
FROM vwMain_LATEST vwmain
INNER JOIN HomepageFeatured on vwMain.vehicleref = homepageFeatured.vehicleref
WHERE homepagefeatured.siteskinid = 1
AND homepagefeatured.Rotator = 1
) T
WHERE RowNumber = 1
ORDER BY orderby
```
---
Rereading your query, since you aren't partitioning by any fields, the `order by` at the end is useless (it contradicts the order of the ranking function). You're probably better off using `top 1`...
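As a quick sanity check of the subquery pattern, here is a minimal sketch in Python's sqlite3 (made-up table and data; SQLite needs 3.25+ for window functions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE vehicles (vehicleref TEXT, ch INTEGER);
INSERT INTO vehicles VALUES ('A', 30), ('B', 10), ('C', 20);
""")

# RowNumber is an alias defined in the SELECT list, so it cannot be
# referenced in the same query's WHERE clause; wrap it in a subquery first.
rows = con.execute("""
SELECT vehicleref, ch
FROM (
    SELECT vehicleref, ch,
           ROW_NUMBER() OVER (ORDER BY ch) AS RowNumber
    FROM vehicles
) t
WHERE RowNumber = 1
""").fetchall()
```

Only the row numbered 1 (the lowest `ch`) survives the outer filter.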
|
Using top:
```
SELECT top 1 vwmain.vehicleref,vwmain.capid,
vwmain.manufacturer,vwmain.model,vwmain.derivative,
vwmain.isspecial,
vwmain.created,vwmain.updated,vwmain.stocklevel,
vwmain.[type],
vwmain.ch,vwmain.co2,vwmain.mpg,vwmain.term,vwmain.milespa,
homepagefeatured.orderby
FROM vwMain_LATEST vwmain
INNER JOIN HomepageFeatured on vwMain.vehicleref = homepageFeatured.vehicleref
WHERE homepagefeatured.siteskinid = 1
AND homepagefeatured.Rotator = 1
ORDER BY homepagefeatured.orderby
```
|
SQL ROW_NUMBER OVER syntax
|
[
"",
"sql",
"sql-server",
"window-functions",
""
] |
The answers for [this question](https://stackoverflow.com/questions/149784/how-do-you-copy-a-record-in-a-sql-table-but-swap-out-the-unique-id-of-the-new-ro) almost answer mine, but not quite. How do I turn this:
```
col0 col1 col2
data0 a foo
data1 b foo
data2 c fee
data3 d fee
```
into this? (duplicating the `foo` rows only)
```
col0 col1 col2
data0 a foo
data1 b foo
data2 c fee
data3 d fee
data0 a bar
data1 b bar
```
Where `bar` is from the statement, not the data, and the original table has 2 new rows.
|
```
insert into T (col0, col1, col2)
select col0, col1, 'bar'
from T
where col2 = 'foo'
```
If by "copy" you mean a `select` then a union would work as in other answers or you could try this:
```
select col0, col1, case when num = 0 then col2 else 'bar' end as col2
from T, (select 0 as num union all select 1) as dup
```
|
One option, `union all`:
```
select col0, col1, col2
from yourtable
union all
select col0, col1, 'bar'
from yourtable
where col2 = 'foo'
```
|
How do you copy a row in a SQL table and alter one of the column values?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a simple (I suppose) problem, which is baffling me (I'm not a SQL expert as you will see soon... :-).
I have a table `person` with fields `id` and `name`, and a table `comment` with field `id`, `id_person` and `text`:
```
---------------
table: person
---------------
id | name
---------------
---------------------------
table: comment
---------------------------
id | id_person | text
---------------------------
```
The comments are linked to persons on `person.id` => `comment.id_person`.
Each person can have many comments.
I would prefer no to save the number of comments in any table.
The question is: **How to select all persons with at least N comments?**
This is what I am trying right now, but it's of course wrong...:
```
SELECT * FROM person WHERE (SELECT COUNT(*) FROM comment WHERE id_person = 2) >= N
```
*P.S.: I'm currently working with sqlite, but a standard SQL answer is fine...*
|
I'm not sure I see what's wrong with what you've tried. You hardcode the ID, but if you just swap that out, it should be fine.
```
SELECT *
FROM person
WHERE (SELECT COUNT(*)
FROM comment
WHERE id_person = person.id) >= N
```
Another solution, whether better or not is hard to know (you should try each if performance is a concern, or at least check the query plans), would be to utilize a `GROUP BY` clause:
```
SELECT *
FROM person
WHERE id IN (SELECT id_person
FROM comment
GROUP BY id_person
HAVING COUNT(*) >= N)
```
I like using the `IN` clause here, as compared to an `INNER JOIN`, because it lets you do `SELECT *`, as you had in your example. Doing that can, of course, [sometimes not be ideal](https://stackoverflow.com/a/25093454/1270504), but it sounds like it's appropriate for your case.
In each of these, of course, you'll have to swap out `N` with either a parameter or literal value.
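A minimal sqlite3 sketch of the `GROUP BY ... HAVING` variant, with throwaway data and N = 2:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person  (id INTEGER, name TEXT);
CREATE TABLE comment (id INTEGER, id_person INTEGER, text TEXT);
INSERT INTO person VALUES (1,'Ann'), (2,'Bob'), (3,'Cleo');
INSERT INTO comment VALUES (1,1,'x'), (2,1,'y'), (3,2,'z');
""")

# Persons with at least N = 2 comments
rows = con.execute("""
SELECT *
FROM person
WHERE id IN (SELECT id_person
             FROM comment
             GROUP BY id_person
             HAVING COUNT(*) >= 2)
""").fetchall()
```

Only Ann (two comments) qualifies; Bob has one and Cleo has none.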
|
Simplest would be to use a `group by` and a `having` clause like this
```
select p.name
from person p
inner join comment c on c.id_person = p.id
group by
p.name
having count(*) = 2
```
As a sidenote: I would rename your columns to
```
---------------------------
table: person
---------------
id_person | name
---------------
---------------------------
table: comment
---------------------------
id_comment | id_person | text
---------------------------
```
as to make it clear what columns are related to each other. Someday you will encounter a database schema that's new to you where the names don't match and you have to resort to looking up foreign key relations to work things out. Trust me, it's not fun.
|
SQL: How to select persons with at least N comments?
|
[
"",
"sql",
"sqlite",
""
] |
I have a mysql query:
```
SELECT zip AS z FROM zip WHERE zip = 90210;
```
When the row matching 90210 is found, it returns:
```
+-------+
| z |
+-------+
| 90210 |
+-------+
```
When the row is not found, an empty set is returned.
```
Empty set (0.01 sec)
```
What I am trying to figure out is how in the case of the empty set, I can get a response like this (note that 'false' is not important, it can be an integer, string, or whatever I need to define):
```
+-------+
| z |
+-------+
|'false'|
+-------+
```
I tried using SELECT EXISTS but the value is either 0/1 rather than the value/'false'.
```
SELECT EXISTS(SELECT zip AS z FROM zip WHERE zip = 90210);
```
|
To guarantee that a query returns one row, you can use aggregation. Here is one method:
```
SELECT COALESCE(MAX(zip), 'false') AS z
FROM zip
WHERE zip = '90210';
```
This assumes that `zip` is a string, so the types are compatible. If not, you might want to convert it to a string, because you are clearly expecting a string back.
As for your method, you would need to use a `case` statement, which in turn, would require another subquery to get the zip. Hence, I prefer the above method.
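The asker is on MySQL, but `COALESCE`/`MAX` behave the same way in SQLite for this case, so the trick can be checked with a quick sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE zip (zip TEXT)")
con.execute("INSERT INTO zip VALUES ('90210')")

# MAX() over zero rows yields NULL, so the query always returns one row,
# and COALESCE substitutes the fallback value on a miss.
hit = con.execute(
    "SELECT COALESCE(MAX(zip), 'false') FROM zip WHERE zip = '90210'").fetchone()[0]
miss = con.execute(
    "SELECT COALESCE(MAX(zip), 'false') FROM zip WHERE zip = '00000'").fetchone()[0]
```

`hit` is the matched zip, `miss` is the `'false'` placeholder instead of an empty set.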
|
This query will do what you want. Selecting from `Dual` means selecting from no table. The idea is to select all the zip codes, and union it with "false" string (you can replace it with whatever you want) if there's no entry for that zip code.
```
SELECT zip AS z FROM zip WHERE zip = 90210
UNION ALL
SELECT "false" FROM dual WHERE NOT EXISTS(SELECT zip AS z FROM zip WHERE zip = 90210);
```
EDIT: thanks to Paul Griffin, here's a sql fiddle: <http://sqlfiddle.com/#!9/9b169/8>
|
How to replace 'Empty set' in a mysql query?
|
[
"",
"mysql",
"sql",
""
] |
I have a table similar to this dummy data, with multiple records per individual. I would like to change that to one record per individual with multiple columns. What complicates this for me the most is that I want the 3 most recent Products based on DatePurchased.
Existing:
```
NameFirst NameLast MbrKey Product DatePurchased
John Doe 123456 ProductA 1/1/2015
John Doe 123456 ProductA 2/1/2015
John Doe 123456 ProductB 3/1/2015
John Doe 123456 ProductB 12/1/2015
Joe Smith 987654 ProductA 3/1/2015
Jane Jones 555555 ProductA 1/1/2015
Jane Jones 555555 ProductB 1/1/2015
```
This is what I have so far:
```
select MbrKey, NameLast, NameFirst,
Case when rn = 1 then Product else null end as Product1,
case when rn = 2 then Product else null end as Product2,
case when rn = 3 then Product else null end as Product3
from
(select t2.*
from(
select t.*, ROW_NUMBER () over (partition by t.MbrKey
order by t.MbrKey, t.DatePurchased desc) as RN
from testing t) as t2
where t2.RN between 1 and 3) as t3
```
I think this got me closer as the results are as follows:
```
NameFirst NameLast MbrKey Product1 Product2 Product3
Doe John 123456 ProductB NULL NULL
Doe John 123456 NULL ProductA NULL
Doe John 123456 NULL NULL ProductA
Jones Jane 555555 ProductA NULL NULL
Jones Jane 555555 NULL ProductB NULL
Smith Joe 987654 ProductA NULL NULL
```
Future State: Below is what I am hoping for.
```
NameFirst NameLast MbrKey Product1 Product2 Product3
Doe John 123456 ProductB ProductB ProductA
Jones Jane 555555 ProductA ProductB Null
Smith Joe 987654 ProductA Null Null
```
Any help would be greatly appreciated!
|
Try `PIVOT`:
```
DECLARE @t TABLE
(
Name NVARCHAR(MAX) ,
Product NVARCHAR(MAX) ,
Date DATE
)
INSERT INTO @t
VALUES ( 'John', 'ProductA', '20150101' ),
( 'John', 'ProductA', '20150102' ),
( 'John', 'ProductB', '20150103' ),
( 'John', 'ProductB', '20150112' ),
( 'Joe', 'ProductA', '20150103' ),
( 'Jane', 'ProductA', '20150101' ),
( 'Jane', 'ProductB', '20150101' );
WITH cte
AS ( SELECT Name ,
Product ,
ROW_NUMBER() OVER ( PARTITION BY Name ORDER BY Date DESC ) AS RN
FROM @t
)
SELECT Name ,
[1] AS Product1 ,
[2] AS Product2 ,
[3] AS Product3
FROM cte PIVOT( MAX(Product) FOR rn IN ( [1], [2], [3] ) ) a
ORDER BY Name
```
Output:
```
Name Product1 Product2 Product3
Jane ProductA ProductB NULL
Joe ProductA NULL NULL
John ProductB ProductB ProductA
```
Of course you should partition here by `MbrKey`. I leave it for you.
|
Use the `max()` aggregate function with the case statements and a `group by` clause. You can also skip a level of subqueries:
```
select
MbrKey, NameLast, NameFirst,
max(Case when rn = 1 then Product else null end) as Product1,
max(case when rn = 2 then Product else null end) as Product2,
max(case when rn = 3 then Product else null end) as Product3
from (
select
t.*,
rn = ROW_NUMBER () over (partition by t.MbrKey order by t.MbrKey, t.DatePurchased desc)
from testing t
) as t1
where t1.RN between 1 and 3
group by MbrKey, NameLast, NameFirst
```
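The conditional-aggregation form is the more portable of the two answers (`PIVOT` is T-SQL only); here is the same pattern checked in sqlite3 with cut-down sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE testing (MbrKey INTEGER, Product TEXT, DatePurchased TEXT);
INSERT INTO testing VALUES
  (123456,'ProductA','2015-01-01'), (123456,'ProductA','2015-02-01'),
  (123456,'ProductB','2015-03-01'), (123456,'ProductB','2015-12-01'),
  (987654,'ProductA','2015-03-01');
""")

# Number the purchases per member (newest first), then fold the top 3
# onto one row per member with MAX(CASE ...).
rows = con.execute("""
SELECT MbrKey,
       MAX(CASE WHEN rn = 1 THEN Product END) AS Product1,
       MAX(CASE WHEN rn = 2 THEN Product END) AS Product2,
       MAX(CASE WHEN rn = 3 THEN Product END) AS Product3
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY MbrKey
                                ORDER BY DatePurchased DESC) AS rn
      FROM testing t) x
WHERE rn <= 3
GROUP BY MbrKey
ORDER BY MbrKey
""").fetchall()
```

Missing slots come back as NULL (Python `None`), just like the desired future state.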
|
SQL Multiple rows/records into one row with 3 columns
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
"pivot",
""
] |
I have two tables:
```
create table creations (id int)
create table images (creation_id int, path varchar)
```
In the `images.path`, there is a limited number of possible values, like 'my documents', 'desktop', etc (it's a datamining DB).
I want to find the number of different paths for a creation, and how many of those creations have this number of paths.
With this data:
```
insert into creations (id) values (1), (2), (3), (4)
insert into images (creation_id, path) values
(1, 'a'), (1, 'a'), (1, 'a'), --creation 1: 1 path
(2, 'a'), (2, 'b'), (2, 'a'), --creation 2: 2 paths
(3, 'a'), (3, 'b'), (3, 'c'), --creation 3: 3 paths
(4, 'a'), (4, 'a'), (4, 'b') --creation 4: 2 paths
```
The desired result would look like this:
```
nb_paths | nb_creations
-----------------------
1 | 1
2 | 2
3 | 1
```
|
I'm not certain I understand the question, nor why you have supplied a query as an answer.
However all I would suggest is moving the correlated query into an APPLY. Below I have used OUTER APPLY which would allow all creation records even if not found in paths. A CROSS APPLY would limit results to only creations found in paths.
```
SELECT
nb_path
, COUNT(*) AS nb_creations
FROM (
SELECT
c.id
, COALESCE(oa.nb_path, 0) nb_path
FROM creations AS c
OUTER APPLY (
SELECT
COUNT(DISTINCT [path])
FROM images AS i
WHERE i.creation_id = c.id
) OA (nb_path)
) AS g
GROUP BY
nb_path
;
```
COALESCE() (or ISNULL()) is used in case a creation exists without a path. It wouldn't be required with CROSS APPLY.
---
**NB:** The APPLY operator is unique to MS SQL Server (at the time of writing).
Similar functionality is (or will be) available through
CROSS JOIN LATERAL,
which is present in Oracle 12c.
|
It works if I first count the number of `path`s for each creation, then group on this, by using the results as a table:
```
select nb_path, COUNT(*) as nb_creations
from (
select c.id,
(
select count(distinct [path])
from images as i
where i.creation_id = c.id
) as nb_path
from creations as c
) as g
group by nb_path
```
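The nested-count approach can be verified against the question's sample data in sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE creations (id INTEGER);
CREATE TABLE images (creation_id INTEGER, path TEXT);
INSERT INTO creations VALUES (1),(2),(3),(4);
INSERT INTO images VALUES
  (1,'a'),(1,'a'),(1,'a'),
  (2,'a'),(2,'b'),(2,'a'),
  (3,'a'),(3,'b'),(3,'c'),
  (4,'a'),(4,'a'),(4,'b');
""")

# Count distinct paths per creation, then group creations by that count.
rows = con.execute("""
SELECT nb_path, COUNT(*) AS nb_creations
FROM (SELECT c.id,
             (SELECT COUNT(DISTINCT path)
              FROM images i
              WHERE i.creation_id = c.id) AS nb_path
      FROM creations c) g
GROUP BY nb_path
ORDER BY nb_path
""").fetchall()
```

The result matches the desired output: one creation with 1 path, two with 2, one with 3.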
|
Double count on two tables
|
[
"",
"sql",
"sql-server",
""
] |
I am having a difficult time explaining this in words.
What I am trying to do:
I have the following table
```
employerID, userID
45 1
48 1
53 1
45 2
55 2
```
I want to build a query to return the rows
```
employerID, userID
45 1
48 1
53 1
45 2
and omit
55 2
```
The employer id of 55 is not in the rows that contain userID 1.
I want to find all rows where userID = 2 whose employerID is in the group of employerIDs from the rows where userID = 1.
I want to specify userIDs 1 and 2.
|
So you want to find all rows where either `userID = 1` or where at least one record exists with the same `employerID` and `userID = 1`?
Then you can use this sql which does exactly that:
```
SELECT employerID, userID
FROM dbo.TableName t
WHERE t.userID = 1 -- either userID = 1
OR -- or userID = 2 but with another record with userID = 1 and the same employerID
(
t.userID = 2 AND EXISTS
(
SELECT 1 FROM dbo.TableName t1
WHERE t1.userID = 1
                AND t1.employerID = t.employerID
)
)
```
|
One more version:
```
DECLARE @t TABLE ( EmpID INT, UserID INT )
INSERT INTO @t
VALUES ( 45, 1 ),
( 48, 1 ),
( 53, 1 ),
( 45, 2 ),
( 55, 2 )
SELECT * FROM @t
WHERE EmpID NOT IN ( SELECT EmpID FROM @t WHERE UserID = 2
EXCEPT
SELECT EmpID FROM @t WHERE UserID = 1 )
```
Output:
```
EmpID UserID
45 1
48 1
53 1
45 2
```
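Both answers' logic can be checked in sqlite3; the sketch below uses an equivalent `IN` form of the `EXISTS` test from the first answer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (employerID INTEGER, userID INTEGER);
INSERT INTO t VALUES (45,1),(48,1),(53,1),(45,2),(55,2);
""")

# Keep userID = 1 rows, plus userID = 2 rows whose employerID also
# appears among the userID = 1 rows.
rows = con.execute("""
SELECT employerID, userID FROM t
WHERE userID = 1
   OR (userID = 2 AND employerID IN
        (SELECT employerID FROM t WHERE userID = 1))
ORDER BY userID, employerID
""").fetchall()
```

The (55, 2) row is omitted because employer 55 never appears under userID 1.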
|
Find all of the rows in that are in a subgroup of a table
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I have a MySQL table with `2 columns (col1, col2)`. I want to write a query to get the count of cells where `value = 4`.
I use `Select COUNT(*) from tbl where col1=4 OR col2=4`
But it gives me 1, as they are in the same row; it should be 2, because 4 is found 2 times:
```
col1 col2
------------- -----------
5 1
7 3
4 4
10 8
8 21
2 22
```
|
You can add the results of two SELECT statements. Try this one:
```
SELECT
(SELECT COUNT(*) FROM tbl WHERE col1 = 4 )+
(SELECT COUNT(*) from tbl WHERE col2 = 4)
AS SumCount
```
> result would be: 2
|
I think this will count the cells properly, including counting both col1 and col2 being 4 in the same row.
```
SELECT COUNT(IF(col1=4,1,NULL)) + COUNT(IF(col2=4,1,NULL)) AS cellcnt FROM tbl
```
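A portable variant of the same idea (SUM over CASE, since IF() is MySQL-only), checked in sqlite3 with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl (col1 INTEGER, col2 INTEGER);
INSERT INTO tbl VALUES (5,1),(7,3),(4,4),(10,8),(8,21),(2,22);
""")

# Count 4s in col1 and col2 separately, then add;
# the (4,4) row is counted twice, as the asker wants.
total = con.execute("""
SELECT SUM(CASE WHEN col1 = 4 THEN 1 ELSE 0 END)
     + SUM(CASE WHEN col2 = 4 THEN 1 ELSE 0 END)
FROM tbl
""").fetchone()[0]
```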
|
Counting all occurances of a certain value in both of two columns
|
[
"",
"mysql",
"sql",
""
] |
I have a dynamic query which might programmatically generate more than a thousand WHERE conditions, each of them like `(column a = '1' and b = '2')` with small variations. Will so many conditions in one SQL query cause a performance issue? Column a is varchar(max) with no index created. The table being queried is large and partitioned by another column c. Thanks very much. It is SQL Server 2012.
|
Yes. A thousand `where` clauses can impact performance. If you could, I would recommend creating a temporary table with the values and using a `join` for the logic. You should create an index on the temporary table.
Note: you could use a trick on the main table, by adding a computed column that concatenates the column values together. Then use `in`:
```
alter table t add ab as (a + ':' + b);
create index idx_t_ab on t(ab);
```
Then modify the query to be:
```
where ab in ('1:2', . . . )
```
This would not work well for all applications, but it might fit your needs for performance.
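The temporary-table suggestion from the first paragraph can be sketched like this in sqlite3 (the pair values and table names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (a TEXT, b TEXT, payload TEXT);
INSERT INTO t VALUES ('1','2','x'), ('3','4','y'), ('5','6','z');

-- Load the generated (a, b) pairs into an indexed temp table
CREATE TEMP TABLE pairs (a TEXT, b TEXT);
CREATE INDEX idx_pairs ON pairs (a, b);
INSERT INTO pairs VALUES ('1','2'), ('5','6');
""")

# One join replaces a thousand OR'd predicates.
rows = con.execute("""
SELECT t.payload
FROM t JOIN pairs p ON p.a = t.a AND p.b = t.b
ORDER BY t.payload
""").fetchall()
```

In a real application you would batch-insert the pairs with a parameterized `executemany` rather than a literal INSERT.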
|
This will be inefficient, but there are also limits on query size and number of parameters that could cause errors in your application if you build queries this way. These limits are not always clear, depending on a number of factors, so it could lead to subtle errors that are hard to eliminate.
See [this question](https://stackoverflow.com/questions/1869753/maximum-size-for-a-sql-server-query-in-clause-is-there-a-better-approach) for a discussion.
|
Will many where clauses impact query performance
|
[
"",
"sql",
"sql-server",
"performance",
"query-optimization",
""
] |
I want to perform following oracle SQL query
```
SELECT t1.Technology, count(t1.trax_id) as "Current number of items", to_char(to_date(max(round((SYSDATE - t1.time_event) * 24 * 60 * 60)),'sssss'),'hh24:mi:ss') as "max_ages"
from dm_procmon t1
group by t1.Technology;
```
The problem is the date subtraction formula.
> I subtract 2 dates from each other. This gives me a decimal value
> (like 0.00855605). I want the value to be a date value again. So I
> converted this first to a number (decimal > number), then to a
> char (number > char), and finally from char to date (char > date).
But when I perform the action I receive
Error report -
> SQL Error: ORA-01830: Datumnotatieafbeelding eindigt voordat de gehele
> invoerstring is geconverteerd.
> 01830. 00000 - "date format picture ends before converting entire input string"
What do I do wrong?
|
You try to convert to\_date(%number of seconds%, 'sssss'); that is the problem. Just use `TO_CHAR(MAX(TO_DATE('20000101','yyyymmdd')+(SYSDATE - t1.time_event)),'hh24:mi:ss')`; this will work correctly for intervals < 1 day.
|
The following will give a `DAY TO SECOND INTERVAL` for all dates within about 68 years of one another:
```
SELECT t1.technology, COUNT(t1.trax_id) AS "Current number of items"
     , NUMTODSINTERVAL(ROUND((SYSDATE - MIN(t1.time_event)) * 24 * 60 * 60), 'SECOND') AS "max_ages"
FROM dm_procmon t1
GROUP BY t1.technology;
```
Note that rather that using `MAX(SYSDATE - time)`, I used `SYSDATE-MIN(time)` (logically the same). The advantage of using this instead of `TO_CHAR()` is that you can then use the value returned in `DATE`/`INTERVAL` arithmetic.
|
Oracle sql: ORA-01830 when date calculation happens
|
[
"",
"sql",
"oracle",
"compiler-errors",
""
] |
I discovered some behavior I didn't know about before. Why does this line of code not work?
```
SELECT REPLACE('','','0') ==> returns ''
```
I can't even use `''` in a WHERE condition. It just doesn't work. The data comes from imported Excel where some cells contain no values, and I'm not able to remove them unless I use the `LEN('') = 0` function.
|
There is nothing to replace in an empty string. [`REPLACE`](https://msdn.microsoft.com/en-us/library/ms186862.aspx) replaces a **sequence** of characters in a string with another set of characters.
You could use `NULLIF` to treat it as `NULL` + `COALESCE` (or `ISNULL`):
```
declare @value varchar(10);
set @value = '';
SELECT COALESCE(NULLIF(@value,''), '0')
```
This returns `'0'`.
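`NULLIF` + `COALESCE` is standard SQL, so the behaviour can be verified even outside SQL Server; a sqlite3 sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# NULLIF('', '') yields NULL, which COALESCE then replaces with '0';
# a non-empty string passes through untouched.
empty = con.execute("SELECT COALESCE(NULLIF('', ''), '0')").fetchone()[0]
kept = con.execute("SELECT COALESCE(NULLIF('abc', ''), '0')").fetchone()[0]
```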
|
You can use CASE for this.
```
(CASE WHEN *YOURTHING* = '' THEN '0' ELSE *YOURTHING* END)
AS *YOURTHING*
```
|
REPLACE empty string
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
In the query below I am trying to filter out status\_id = 7 records for team\_id > 1,
but I want to include records with status\_id = 7 for team\_id = 1. How can we write the query?
```
SELECT MAX(created_date) AS maxtime,
TEAM_ID,
ticket_id
FROM ri_ticket_status_history
WHERE status_id<>7
GROUP BY TEAM_ID,ticket_id
```
|
A combination of `and` and `or` logical operators should do it:
```
SELECT MAX(created_date) AS maxtime ,team_id, ticket_id
FROM ri_ticket_status_history
WHERE (status_id <> 7 AND team_id > 1) OR team_id = 1
GROUP BY team_id, ticket_id
```
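A quick sqlite3 check of the predicate with made-up history rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ri_ticket_status_history
  (team_id INTEGER, status_id INTEGER, ticket_id INTEGER, created_date TEXT);
INSERT INTO ri_ticket_status_history VALUES
  (1, 7, 10, '2020-01-01'),   -- status 7, team 1: keep
  (2, 7, 11, '2020-01-02'),   -- status 7, team 2: drop
  (2, 3, 12, '2020-01-03');   -- other status, team 2: keep
""")

rows = con.execute("""
SELECT MAX(created_date) AS maxtime, team_id, ticket_id
FROM ri_ticket_status_history
WHERE (status_id <> 7 AND team_id > 1) OR team_id = 1
GROUP BY team_id, ticket_id
ORDER BY team_id, ticket_id
""").fetchall()
```

Team 2's status-7 ticket is filtered out; team 1's status-7 ticket survives.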
|
> Parenthesis placement can change the result set.
```
select max(created_date) as maxtime ,TEAM_ID,ticket_id
from ri_ticket_status_history
where ( status_id<>7 or (status_id=7 and team_id=1))
group by TEAM_ID,ticket_id
```
|
SQL query where clause filter based on another column value
|
[
"",
"sql",
"sql-server",
"oracle",
"select",
""
] |
Every time, I'm confused about choosing between the varchar(max) and varchar(fixed) datatypes. Suppose I have a data column that will hold around 5000 characters, and the column is NOT NULL.
Should I declare it varchar(max) NOT NULL or varchar(5000) NOT NULL?
The same question applies to a nullable data type.
```
CREATE TABLE [dbo].[tblCmsPages](
[CmsPagesID] [int] IDENTITY(1,1) NOT NULL,
[PageName] [varchar](250) NOT NULL,
[PageContent] [varchar](max) NOT NULL,
[Sorting] [int] NOT NULL,
[IsActive] [bit] NOT NULL)
```
//or
```
CREATE TABLE [dbo].[tblCmsPages](
[CmsPagesID] [int] IDENTITY(1,1) NOT NULL,
[PageName] [varchar](250) NOT NULL,
[PageContent] [varchar](5000) NOT NULL,
[Sorting] [int] NOT NULL,
[IsActive] [bit] NOT NULL)
```
// [PageContent] may be 5000 characters, a single character, or null; which should I choose?
One more thing I want to know: what is the main difference between **null** and **not null**? Is it only a validation check, and what is the effect on performance?
|
[MSDN](https://msdn.microsoft.com/en-IN/library/ms176089.aspx)
* Use varchar when the sizes of the column data entries vary
considerably.
* Use varchar(max) when the sizes of the column data entries vary
considerably, and the size might exceed 8,000 bytes.
When a length is specified when declaring a `VARCHAR` variable or column, the maximum length allowed is **8000**. If the length is greater than **8000**, you have to use the `MAX` specifier as the length. If a length greater than **8000** is specified, the following error will be encountered (assuming that the length specified is **10000**):
> The size (10000) given to the type 'varchar' exceeds the maximum allowed for any data type (8000).
UPDATE :-
I found a link which I would like to share:-
[Here](http://sqlhints.com/2013/03/10/difference-between-sql-server-varchar-and-varcharmax-data-type/)
There is not much performance difference between `Varchar[(n)]` and `Varchar(Max)`, but `Varchar[(n)]` gives better results. If we know that the data to be stored in the column or variable is less than or equal to 8000 characters, then the `Varchar[(n)]` data type provides better performance than `Varchar(Max)`. Example: when I ran the below script with the variable `@FirstName` declared as `Varchar(Max)`, 1 million assignments consistently took about double the time compared to declaring `@FirstName` as `Varchar(50)`.
```
DECLARE @FirstName VARCHAR(50), @COUNT INT=0, @StartTime DATETIME = GETDATE()
WHILE(@COUNT < 1000000)
BEGIN
SELECT @FirstName = 'Suraj', @COUNT = @COUNT +1
END
SELECT DATEDIFF(ms,@StartTime,GETDATE()) 'Time Taken in ms'
GO
```
|
Since you have a clear idea about the data and it will not exceed 5000 characters, I would prefer varchar(n) (varchar(5000)).
If you need to choose between varchar(n) and varchar(max), keep the points below in mind:
1. Where appropriate, use VARCHAR(n) over VARCHAR(MAX)
a. for reasons of good design if not performance benefits, and
b. because VARCHAR(MAX) data does not compress
2. Storing large strings takes longer than storing small strings.
3. Updating an in-row VARCHAR(MAX) value from below 8,000 to over 8,000 will be relatively slow, but the difference for a single transaction will likely not be measurable.
4. Updating an in-row VARCHAR(MAX) value from over 8,000 to below 8,000 will be faster than if the table is set to store data out-of-row.
5. Using the out-of-row option for VARCHAR(MAX) will cause slower writes until the strings are very long.
|
sql varchar(max) vs varchar(fix)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need to select top 1 most valid discount for a given FriendId.
I have the following tables:
**DiscountTable** - describes different discount types
```
DiscountId, Percent, Type, Rank
1 , 20 , Friend, 2
2 , 10 , Overwrite, 1
```
Then I have another two tables (both list FriendIds)
**Friends**
```
101
102
103
```
**Overwrites**
```
101
105
```
I have to select top 1 most valid discount for a given FriendId. So for the above data this would be sample output
```
Id = 101 => gets "Overwrite" discount (higher rank)
Id = 102 => gets "Friend" discount (only in friends table)
Id = 103 => gets "Friend" discount (only in friends table)
Id = 105 => gets "Overwrite" discount
Id = 106 => gets NO discount as it does not exist in neither Friend and overwrite tables
INPUT => SINGLE friendId (int).
OUTPUT => Single DISCOUNT Record (DiscountId, Percent, Type)
```
The Overwrites and Friends tables have the same structure. They each only hold a list of Ids (a single column).
|
Having multiple tables of identical structure is usually bad practice, a single table with `ID` and `Type` would suffice, you could then use it in a `JOIN` to your `DiscountTable`:
```
;WITH cte AS (SELECT ID,[Type] = 'Friend'
FROM Friends
UNION ALL
SELECT ID,[Type] = 'Overwrite'
FROM Overwrites
)
SELECT TOP 1 a.[Type]
FROM cte a
JOIN DiscountTable DT
ON a.[Type] = DT.[Type]
WHERE ID = '105'
ORDER BY [Rank]
```
Note, non-existent `ID` values will not return.
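The same UNION ALL + rank-ordered lookup can be exercised in sqlite3 (`LIMIT 1` standing in for `TOP 1`; data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DiscountTable (DiscountId INTEGER, Percent INTEGER, Type TEXT, Rank INTEGER);
INSERT INTO DiscountTable VALUES (1,20,'Friend',2), (2,10,'Overwrite',1);
CREATE TABLE Friends (id INTEGER);
INSERT INTO Friends VALUES (101),(102),(103);
CREATE TABLE Overwrites (id INTEGER);
INSERT INTO Overwrites VALUES (101),(105);
""")

def best_discount(friend_id):
    # Tag each membership row with its discount type, join to the
    # discount table, and take the best (lowest) rank.
    return con.execute("""
        SELECT dt.DiscountId, dt.Percent, dt.Type
        FROM (SELECT id, 'Friend'    AS Type FROM Friends
              UNION ALL
              SELECT id, 'Overwrite' AS Type FROM Overwrites) m
        JOIN DiscountTable dt ON dt.Type = m.Type
        WHERE m.id = ?
        ORDER BY dt.Rank
        LIMIT 1
    """, (friend_id,)).fetchone()
```

Id 101 gets the higher-ranked Overwrite discount, 102 gets Friend, and an unknown id returns no row.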
|
This will get you all the FriendIds and the associate discount of the highest rank. It's an older hack that doesn't require using top or row numbering.
```
select
elig.FriendId,
    min(Rank * 10000 + DiscountId) % 10000 as DiscountId,
    min(Rank * 10000 + Percent) % 10000 as Percent
from
DiscountTable as dt
inner join (
select FriendId, 'Friend' as Type from Friends union all
select FriendId, 'Overwrite' from Overwrites
) as elig /* for eligible? */
on elig.Type = dt.Type
group by
elig.FriendId
```
|
SQL Select with Priority
|
[
"",
"sql",
"sql-server",
"select",
""
] |
Suppose, we have query like this:
```
SELECT
1
FROM DUAL WHERE
(SELECT id FROM table_1 t1 WHERE /*conditions*/)
IN
(SELECT id FROM table_1 t2 WHERE /*conditions*/)
```
I want to check if the first query
`SELECT id FROM table_1 t1 WHERE /*conditions*/`
returns the same ids as the second query.
Of course this query (`IN` statement) doesn't work.
|
This will return everything from the first set that is not also contained in the second set, or an empty result if the second set contains every element of the first set:
```
SELECT id FROM table_1 t1 WHERE /*conditions*/
EXCEPT
SELECT id FROM table_1 t2 WHERE /*conditions*/
```
If the second set contains additional elements not in the first set, they are not part of the result set.
On a Venn diagram, `EXCEPT` returns only what is contained in the first circle that isn't overlapped by the second circle, and doesn't include anything at all from the second circle.
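A minimal sqlite3 demonstration of this `EXCEPT` behaviour (the `grp` column is a stand-in for the two condition sets):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_1 (id INTEGER, grp TEXT);
INSERT INTO table_1 VALUES (1,'a'),(2,'a'),(2,'b'),(3,'b');
""")

# ids matching the first condition that are not in the second set.
diff = con.execute("""
SELECT id FROM table_1 WHERE grp = 'a'
EXCEPT
SELECT id FROM table_1 WHERE grp = 'b'
""").fetchall()
```

An empty `diff` would mean the first set is fully contained in the second; to test full equality, run the `EXCEPT` in both directions.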
|
Try:
```
SELECT id FROM table_1 t1 WHERE /*conditions1*/ and id not in (SELECT id FROM table_1 t2 WHERE /*conditions2*/)
union
SELECT id FROM table_1 t1 WHERE /*conditions2*/ and id not in (SELECT id FROM table_1 t2 WHERE /*conditions1*/)
```
If both queries give you the same ids, the result should be empty.
|
How to check if two selects return the same ids
|
[
"",
"sql",
""
] |
I'm trying to make an SQL query. I'm still new to SQL, maybe someone could give me some direction, or let me know if this is even possible.
Please note I have read up on IN, ALL, INNER JOIN etc and I still cannot get the proper query. I have tried to find the solution for hours.
These are my table columns:
```
+-----------+---------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+---------------+------+-----+---------+-------+
| route | varchar(100) | NO | PRI | NULL | |
| stopID | varchar(5) | NO | PRI | NULL | |
| stopName | varchar(100) | NO | | NULL | |
| Latitude | float(10,6) | YES | | NULL | |
| Longitude | float(10,6) | YES | | NULL | |
| timeList | varchar(1000) | NO | | NULL | |
| day | varchar(5) | NO | PRI | NULL | |
+-----------+---------------+------+-----+---------+-------+
```
I need the stopIDs and stopNames where the stopName matches another stopName with a different stopID AND day='Week' (otherwise a total of 3 results will come back, since there are Week, Sat, and Sun days).
```
| 16 | 1639 | MYUNIQUESTOPNAME | 43.514000 | -80.199600 | (a really long list of times) | Week |
| 5 | 0530 | MYUNIQUESTOPNAME | 43.514000 | -80.199600 | (a really long list of times) | Week |
SELECT stopID,stopName FROM Routes WHERE stopName='MYUNIQUESTOPNAME' AND day='Week';
+--------+---------------------------+
| stopID | stopName |
+--------+---------------------------+
| 1612 | MYUNIQUESTOPNAME |
| 1639 | MYUNIQUESTOPNAME |
| 0530 | MYUNIQUESTOPNAME |
+--------+---------------------------+
```
This gives me what I want, but that's only for one stop, and doing it this way I would have to do it manually. There are a lot of stops.
Secondly, I would then want all the id's and names that have NO matches (are unique to the bus route).
```
SELECT stopID,stopName FROM Routes WHERE stopName IN (SELECT stopName FROM Routes WHERE day='Week') AND day='Week';
```
The above DOES NOT GIVE ME WHAT I WANT. All my searching implies that it is though... What am I doing incorrectly...?
mysql version is 5.5.41-0+wheezy1
|
Do a self join on the table (routes) on the column (stopName) and then get all the results where day=week and stopIds are not same. This would give you the answer to the first query
```
Select unq.stopId, unq.stopName from Routes unq join Routes other on other.stopName=unq.stopName where unq.stopId!= other.stopId and unq.day = 'Week'
```
For the second one where you want only those which are unique, do a group by on stop name and pick the records where grouping results in only one record
```
Select unq.stopId, unq.stopName from Routes unq where unq.day = 'Week' group by unq.stopName having count(*) = 1
```
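Both queries can be exercised on a few sample rows; a sketch using Python's sqlite3 (stop names invented, and the unique-stops query here adds a `day = 'Week'` filter so weekend rows don't inflate the counts):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Routes (stopID TEXT, stopName TEXT, day TEXT);
INSERT INTO Routes VALUES
  ('1639', 'SHARED', 'Week'), ('0530', 'SHARED', 'Week'),
  ('0999', 'LONELY', 'Week');
""")

# Names served by more than one stopID (self join, different ids).
shared = con.execute("""
    SELECT DISTINCT unq.stopID, unq.stopName
    FROM Routes unq
    JOIN Routes other ON other.stopName = unq.stopName
    WHERE unq.stopID != other.stopID AND unq.day = 'Week'
    ORDER BY unq.stopID
""").fetchall()

# Names unique to a route: grouping leaves exactly one Week row.
unique = con.execute("""
    SELECT stopID, stopName FROM Routes
    WHERE day = 'Week'
    GROUP BY stopName
    HAVING COUNT(*) = 1
""").fetchall()
```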
|
With the query you are running you will return all stops as the nested query will always contain the stop from the outer query.
Use the following to stop this from happening.
```
SELECT r1.stopID, r1.stopName
FROM Routes r1
WHERE r1.day='Week'
  AND EXISTS (SELECT 1
              FROM Routes r2
              WHERE r2.day='Week'
                AND r2.stopName = r1.stopName
                AND r2.stopID <> r1.stopID);
```
|
SQL matching column with another in same table, IN statement not working
|
[
"",
"mysql",
"sql",
""
] |
I have two tables
```
T1:
ID, Truck, Trailer
1 null null
2 null null
T2:
ID, Type, ResourceID
1 R 111
1 F 222
1 D 333
2 R 444
2 F 555
```
I need a result where
```
ID, Truck, Trailer
1 111 222
2 444 555
```
How can I update `T1.Truck = T2.ResourceID` when `T2.Type = R` and `T1.Trailer = T2.ResourceID` when `T2.Type = F` where `T1.ID = T2.ID`.
This is what I have so far
```
UPDATE T1
SET T1.Truck = CASE
WHEN T2.Type = 'R' THEN T2.ResourceId
ELSE T1.Truck
END,
T1.Trailer = CASE
WHEN T2.Type = 'F' THEN T2.ResourceId
ELSE T1.Trailer
END
FROM T1 INNER JOIN (SELECT Id, Type, ResourceId
FROM T2) T2
ON T1.Id = T2.Id
```
This will only update Truck, but not Trailer.
What am I missing?
|
The problem with your current update query is that the update will update one column to null as it can't match both conditions at the same time.
If you did a `select *` instead of the update the result would look like:
```
ID Truck Trailer Id Type ResourceId
1 NULL NULL 1 R 111 -- this will set R = 111 and F = null
1 NULL NULL 1 F 222 -- this will set F = 222 and R = null
1 NULL NULL 1 D 333 -- this will set R = null and F = null
2 NULL NULL 2 R 444 -- this will set R = 444 and F = null
2 NULL NULL 2 F 555 -- this will set R = null and F = 555
```
Here you can see that when `Type` matches `R` the update for `F` will update to `null` et cetera.
One solution is to join the `T2` table twice:
```
UPDATE T1
SET
T1.Truck = T2.ResourceId
,
T1.Trailer = T3.ResourceId
FROM T1
INNER JOIN (SELECT Id, ResourceId FROM T2 WHERE Type = 'R') T2 ON T1.Id = T2.Id
INNER JOIN (SELECT Id, ResourceId FROM T2 WHERE Type = 'F') T3 ON T1.Id = T3.Id
```
If there might not always be both types (`R`,`F`) then use `left join` instead of `inner join` and check for `null` values.
Edit: thinking a bit more gives this query:
```
UPDATE T1
SET
T1.Truck = ISNULL(T.rValue, T1.Truck),
T1.Trailer = ISNULL(T.fValue, T1.Trailer)
FROM T1
INNER JOIN (
SELECT
Id,
rValue = MAX(CASE WHEN Type = 'R' THEN ResourceId END),
fValue = MAX(CASE WHEN Type = 'F' THEN ResourceId END)
FROM T2 GROUP BY id
) T ON T1.Id = T.Id
```
On a side note: using an alias for a derived table that is also a table name can be pretty confusing and should be avoided.
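The `MAX(CASE ...)` collapse in the edited query is easy to verify on its own; a small sketch using Python's sqlite3 (sample data copied from the question, `T1` omitted since only the derived table is being checked):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE T2 (ID INTEGER, Type TEXT, ResourceID INTEGER);
INSERT INTO T2 VALUES (1,'R',111), (1,'F',222), (1,'D',333),
                      (2,'R',444), (2,'F',555);
""")

# Collapsing T2 to one row per ID means the later update joins
# against exactly one row, so the two columns can no longer conflict.
pivoted = con.execute("""
    SELECT ID,
           MAX(CASE WHEN Type = 'R' THEN ResourceID END) AS rValue,
           MAX(CASE WHEN Type = 'F' THEN ResourceID END) AS fValue
    FROM T2
    GROUP BY ID
    ORDER BY ID
""").fetchall()
```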
|
you can `PIVOT` the table `t2` before updating the `t1` table
Use the following query to pivot the `t2` table.
```
SELECT ID, R, F
FROM t2
PIVOT (Max(resourceID)
FOR type IN ([R],[F]))pv
```
Now the result will in the format of `t1` table you can easily update using the following query
```
UPDATE A
SET Trailer = F,
Truck = R
FROM t1 A
INNER JOIN (SELECT ID, R, F
FROM t2
PIVOT (Max(resourceID)
FOR type IN ([R],
[F]))pv) B
ON A.ID = b.ID
```
[**SQLFIDDLE DEMO**](http://sqlfiddle.com/#!6/289b2/3)
|
Update data where ID match and use case to choose what data
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-update",
""
] |
I have been trying to generate a series of dates (`YYYY-MM-DD HH`) from the first until the last date in a timestamp field. I've got the `generate_series()` I need; however, I'm running into an issue when trying to grab the start and end dates from a table. The following gives a rough idea:
```
with date1 as
(
SELECT start_timestamp as first_date
FROM header_table
ORDER BY start_timestamp DESC
LIMIT 1
),
date2 as
(
SELECT start_timestamp as first_date
FROM header_table
ORDER BY start_timestamp ASC
LIMIT 1
)
select generate_series(date1.first_date, date2.first_date
, '1 hour'::interval)::timestamp as date_hour
from
( select * from date1
union
select * from date2) as foo
```
**Postgres 9.3**
|
You don't need a CTE for this, that would be more expensive than necessary.
And you don't need to cast to `timestamp`, the result already *is* of data type `timestamp` when you feed `timestamp` types to [`generate_series()`](https://www.postgresql.org/docs/current/functions-srf.html). Details here:
* [Generating time series between two dates in PostgreSQL](https://stackoverflow.com/questions/14113469/generating-time-series-between-two-dates-in-postgresql/46499873#46499873)
In Postgres **9.3** or later you can use a `LATERAL` join:
```
SELECT to_char(ts, 'YYYY-MM-DD HH24') AS formatted_ts
FROM (
SELECT min(start_timestamp) as first_date
, max(start_timestamp) as last_date
FROM header_table
) h
, generate_series(h.first_date, h.last_date, interval '1 hour') g(ts);
```
Optionally with [`to_char()`](https://www.postgresql.org/docs/current/functions-formatting.html) to get the result as text in the format you mentioned.
This works in ***any*** Postgres version:
```
SELECT generate_series(min(start_timestamp)
, max(start_timestamp)
, interval '1 hour') AS ts
FROM header_table;
```
Typically a bit faster.
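The same min-to-max hourly walk can be sketched outside the database, for instance in plain Python (the two bounds stand in for `min(start_timestamp)` and `max(start_timestamp)`):

```python
from datetime import datetime, timedelta

# Stand-ins for min(start_timestamp) and max(start_timestamp).
first_date = datetime(2015, 3, 1, 10)
last_date = datetime(2015, 3, 1, 14)

# Step one hour at a time, inclusive of both ends, like generate_series.
series = []
ts = first_date
while ts <= last_date:
    series.append(ts.strftime("%Y-%m-%d %H"))  # like to_char(ts, 'YYYY-MM-DD HH24')
    ts += timedelta(hours=1)
```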
Calling set-returning functions in the `SELECT` list is a non-standard-SQL feature and frowned upon by some. Also, there were behavioral oddities (though not for this simple case) that were eventually fixed in Postgres 10. See:
* [What is the expected behaviour for multiple set-returning functions in SELECT clause?](https://stackoverflow.com/questions/39863505/what-is-the-expected-behaviour-for-multiple-set-returning-functions-in-select-cl/39864815#39864815)
**Note** a subtle difference in **NULL** handling:
The equivalent of
```
max(start_timestamp)
```
is obtained with
```
ORDER BY start_timestamp DESC NULLS LAST
LIMIT 1
```
Without `NULLS LAST` NULL values come *first* in descending order (if there *can* be NULL values in `start_timestamp`). You would get NULL for `last_date` and your query would come up empty.
Details:
* [Why do NULL values come first when ordering DESC in a PostgreSQL query?](https://stackoverflow.com/questions/20958679/why-do-null-values-come-first-when-ordering-desc-in-a-postgresql-query/20959470#20959470)
|
How about using aggregation functions instead?
```
with dates as (
SELECT min(start_timestamp) as first_date, max(start_timestamp) as last_date
FROM header_table
)
select generate_series(first_date, last_date, '1 hour'::interval)::timestamp as date_hour
from dates;
```
Or even:
```
select generate_series(min(start_timestamp),
max(start_timestamp),
'1 hour'::interval
)::timestamp as date_hour
from header_table;
```
|
Generate_series in Postgres from start and end date in a table
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"generate-series",
""
] |
I need two columns A and B. A has repeated values and B has single unique values. I have to fetch only those values of A which have the max(C) value, where C is another column.
|
Use `Row_Number` Analytic function to do this
```
select A,B
from
(
select row_number() over(partition by A order by C desc)rn,A,B,C
from yourtable
)
where RN=1
```
|
You can use [ROW\_NUMBER](https://msdn.microsoft.com/en-us/library/ms186734.aspx "Row Number").
> **ROW\_NUMBER**
>
> Returns the sequential number of a row within a partition of a result
> set, starting at 1 for the first row in each partition.
>
> ---
>
> **PARTITION BY value\_expression**
>
> Divides the result set produced by the
> FROM clause into partitions to which the ROW\_NUMBER function is
> applied. value\_expression specifies the column by which the result set
> is partitioned. If PARTITION BY is not specified, the function treats
> all rows of the query result set as a single group.
>
> ---
>
> **ORDER BY**
>
> The ORDER BY clause determines the sequence in which the rows are assigned their unique ROW\_NUMBER within a specified
> partition. It is required.
---
Sample of `ROW_NUMBER` in your case:
```
SELECT A, B
FROM
(
SELECT ROW_NUMBER() OVER(PARTITION BY A ORDER BY C DESC) AS RowNum, A, B, C
FROM TableName
)
WHERE RowNum = 1
```
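A quick way to see the ranking behaviour, sketched with Python's sqlite3 (which also supports `ROW_NUMBER`; sample values invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (A TEXT, B TEXT, C INTEGER);
INSERT INTO t VALUES ('x', 'b1', 10), ('x', 'b2', 30), ('y', 'b3', 5);
""")

# Rank rows inside each A-partition by C descending; keep the top row.
rows = con.execute("""
    SELECT A, B FROM (
        SELECT A, B,
               ROW_NUMBER() OVER (PARTITION BY A ORDER BY C DESC) AS rn
        FROM t
    ) ranked
    WHERE rn = 1
    ORDER BY A
""").fetchall()
```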
|
Remove multiple entries for a column
|
[
"",
"sql",
"oracle",
""
] |
I have one table and in this table is PostDate and OrderNo and a bunch of other fields. I would like to get all the latest PostDates and then with those PostDates get the largest OrderNo. I would then like to show one row that shows the PostDate and that OrderNo.
So far I have the query below. This shows me all the rows with postdates from yesterday, and now I want to show only the largest OrderNo in those PostDates. I tried using a nested select but I could not wrap my head around it. Any clever ideas?
```
SELECT OrderNo, PostDate
FROM table
WHERE PostDate = (SELECT MAX(PostDate) FROM table)
```
|
If you just want the order number, you can use MAX again since you know the post date will also be the MAX due to your subselect:
```
SELECT MAX(OrderNo), MAX(PostDate)
FROM table
WHERE PostDate = (SELECT MAX(PostDate) FROM table)
```
|
The simplest way to solve this problem is to drop the `WHERE` with the `MAX` altogether, order the rows in such a way that the row that you want is at the top, and limit the output to a single row.
The last step (limiting the output to one row) is dependent on SQL version. Here is how to do it in SQL Server:
```
SELECT TOP 1
OrderNo, PostDate
FROM My_Table
ORDER BY PostDate DESC, OrderNo Desc
```
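Either approach is easy to test; a small sqlite3 sketch of the ORDER BY variant, with `LIMIT 1` playing the role of SQL Server's `TOP 1` (sample rows made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (OrderNo INTEGER, PostDate TEXT);
INSERT INTO orders VALUES (7, '2015-03-01'), (9, '2015-03-02'), (3, '2015-03-02');
""")

# Latest PostDate first, then largest OrderNo within that date.
row = con.execute("""
    SELECT OrderNo, PostDate FROM orders
    ORDER BY PostDate DESC, OrderNo DESC
    LIMIT 1
""").fetchone()
```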
|
SQL - Nested Select Statements with Max?
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I have two columns **created\_at** and **updated\_at** in same table. Both the columns have data type as **datetime**. Now when I am trying to modify the column data types as follows -
```
alter table nodestatics modify column updated_at datetime default current_timestamp;
alter table nodestatics modify column created_at datetime default current_timestamp;
```
It is showing the following error
> Error Code : 1067 Invalid default value for 'updated\_at' (0 ms taken)
>
> Error Code : 1067 Invalid default value for 'created\_at' (0 ms taken)
My mysql version is 5.5.41-0ubuntu0.14.04.1 (Ubuntu)
|
It is hard to reference [documentation](http://dev.mysql.com/doc/refman/5.6/en/timestamp-initialization.html) in a comment:
> As of MySQL 5.6.5, `TIMESTAMP` and `DATETIME` columns can be automatically
> initializated and updated to the current date and time (that is, the
> current timestamp). Before 5.6.5, this is true only for `TIMESTAMP`, and
> for at most one `TIMESTAMP` column per table.
Either upgrade to 5.6.5. Or use `TIMESTAMP` instead of `DATETIME`.
|
Use `TIMESTAMP` instead of `DATETIME`
---
Should be something like this:
`ALTER TABLE nodestatics MODIFY COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;`
`ALTER TABLE nodestatics MODIFY COLUMN created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;`
---
Or you can use `TRIGGERS`.
```
CREATE TRIGGER myTable_OnInsert BEFORE INSERT ON `tblMyTable`
FOR EACH ROW SET NEW.dateAdded = NOW();
```
You can look at [similar topic](https://stackoverflow.com/questions/168736/how-do-you-set-a-default-value-for-a-mysql-datetime-column). Happy coding!
|
Column name is not taking timestamp as default value
|
[
"",
"mysql",
"sql",
"mysql-error-1067",
""
] |
I have a data that is`101.32650000` and I want to get `101.3265` only ... I tried using right function, but the disadvantage is that digits before decimal can vary.
|
How about using `cast` with `decimal` and scale of 4. Something like:
```
select cast(101.32650000 as decimal(10,4));
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!6/507e0/1)
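If the trimming happens in application code instead, the same scale-of-4 idea maps onto Python's `decimal` module (a sketch, not part of the SQL answer):

```python
from decimal import Decimal

value = Decimal("101.32650000")
# Quantize to a scale of 4, the same effect as CAST(... AS decimal(10,4)).
trimmed = value.quantize(Decimal("0.0001"))
```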
|
```
cast(col_name as decimal(28,4)) as col_name
```
|
How to get 4 digit numbers after decimal including before decimal also in SQL
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I am using `MySQL 5.6.17`.
I have a `table` that contains the master entries of titles let us say it `Master_titles`. It contains columns `id int primary key auto-increment`, `title varchar(255)`. The data is as shown below :
```
id title
1 Item one
2 Item two
3 Item three
4 Item five
6 Item six
7 Item seven
8 Item eight
9 Item nine
10 Item ten
```
There is another table named `routines` that is `self-referencing table` contains the `records with parent-child relationship`. The columns are `id int primary key auto-increment`, `master_title_id int foreign key references id of Master_titles`, `type varchar(2)`, `parent int foreign key referencing id of routines` itself. The data is as below :
```
id master_title_id type parent
1 1 G null
2 2 Q 1
3 3 A 2
4 4 A 2
5 5 Q 1
6 6 A 5
7 8 A 5
8 2 G null
9 5 Q 8
10 7 A 9
11 8 A 9
12 9 A 9
```
And another table, let us say `data`, that contains columns `id int primary key auto-increment`, `routine_id int foreign key references id of routines`, `group_id int`, `question_id int`, `answer_id int`. Here, all of the values in the columns `group_id`, `question_id`, `answer_id` will be `id` values of the `routines` table. The data is as below :
```
id routine_id group_id question_id answer_id
1 2 1 2 4
2 5 1 5 6
3 9 8 9 11
4 5 1 5 7
5 2 1 2 7
3 9 8 9 10
```
Now, I want a `query` that fetches the `titles` for the columns `group_id`, `question_id` and `answer_id` from the `master_titles` table.
I can do it using a `sub-query`, but what is the most efficient way to fetch the records without overhead?
The query that works, but that I would prefer not to use, is below :
```
SELECT data.created_at,
(SELECT Master_titles.title FROM `Master_titles` LEFT JOIN routines ON Master_titles.id = routines.master_title_id LEFT JOIN data ON routines.id = data.group_id WHERE data.id = data.`id`) AS Group,
(SELECT Master_titles.title FROM `Master_titles` LEFT JOIN routines ON Master_titles.id = routines.master_title_id LEFT JOIN data ON routines.id = data.question_id WHERE data.id = data.`id`) AS Question,
(SELECT Master_titles.title FROM `Master_titles` LEFT JOIN routines ON Master_titles.id = routines.master_title_id LEFT JOIN data ON routines.id = data.answer_id WHERE data.id = data.`id`) AS Answer
FROM data;
```
|
According to your comment stating that all the id columns in table `data` are actually `ids` of the `routines` table, you need to perform joins to get the title for every column by going through the path `routines > master_titles`.
```
SELECT
    d.id,
    d.routine_id,
    mgroup.title AS group_title,
    mquestion.title AS question_title,
    manswer.title AS answer_title
FROM
    data d
    LEFT JOIN routines r1 ON ( d.group_id = r1.id )
    LEFT JOIN master_titles mgroup ON ( r1.master_title_id = mgroup.id )
    LEFT JOIN routines r2 ON ( d.question_id = r2.id )
    LEFT JOIN master_titles mquestion ON ( r2.master_title_id = mquestion.id )
    LEFT JOIN routines r3 ON ( d.answer_id = r3.id )
    LEFT JOIN master_titles manswer ON ( r3.master_title_id = manswer.id )
```
Don't worry about the number of joins (never judge the cost of a query by its size in lines).
I suggest the following index to speed things up: `routines(master_title_id)`.
Also consider (I can't assess your environment) indexes on the `group_id`, `question_id` and `answer_id` columns.
You know best whether or not this will help performance without hurting your writes/space unprofitably at the same time.
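The six-join shape is easy to verify on a toy dataset; a sqlite3 sketch with a trimmed version of the question's tables, where each id column walks its own `routines`/`master_titles` alias pair:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE master_titles (id INTEGER, title TEXT);
INSERT INTO master_titles VALUES (1,'Item one'), (2,'Item two'), (4,'Item five');
CREATE TABLE routines (id INTEGER, master_title_id INTEGER);
INSERT INTO routines VALUES (1,1), (2,2), (4,4);
CREATE TABLE data (id INTEGER, group_id INTEGER, question_id INTEGER, answer_id INTEGER);
INSERT INTO data VALUES (1, 1, 2, 4);
""")

# Each id column joins through its own pair of aliases.
row = con.execute("""
    SELECT mg.title, mq.title, ma.title
    FROM data d
    LEFT JOIN routines r1      ON d.group_id = r1.id
    LEFT JOIN master_titles mg ON r1.master_title_id = mg.id
    LEFT JOIN routines r2      ON d.question_id = r2.id
    LEFT JOIN master_titles mq ON r2.master_title_id = mq.id
    LEFT JOIN routines r3      ON d.answer_id = r3.id
    LEFT JOIN master_titles ma ON r3.master_title_id = ma.id
""").fetchone()
```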
|
Just a small change from "Consider Me" answer.
```
SELECT
    d.id,
    d.routine_id,
    mgroup.title AS group_title,
    mquestion.title AS question_title,
    manswer.title AS answer_title
FROM
    data d
    JOIN routines r1 ON ( d.group_id = r1.id )
    JOIN master_titles mgroup ON ( r1.master_title_id = mgroup.id )
    JOIN routines r2 ON ( d.question_id = r2.id )
    JOIN master_titles mquestion ON ( r2.master_title_id = mquestion.id )
    JOIN routines r3 ON ( d.answer_id = r3.id )
    JOIN master_titles manswer ON ( r3.master_title_id = manswer.id )
```
Inner join should be better since `group_id`, `question_id` and `answer_id` are not defined as nullable in `data` table. But since there is no foreign key relationship, there can be instances of missing rows if there are values which are not present in routines table.
|
MySQL Multiple column joins with more than one table references
|
[
"",
"mysql",
"sql",
"join",
""
] |
I'm displaying records of a table in which I `COUNT` a particular field.
Sometimes the table might be empty, and instead of displaying a `NULL` value I want it to return zero.
```
SELECT count(comment) as total
FROM dbo.omment
WHERE resp = MMColParam2
AND com_stat = 'No' having count(comment)> 0
```
|
Taking a look at the documentation, you can use [ISNULL](https://msdn.microsoft.com/en-us//library/ms184325.aspx):
> ISNULL ( check\_expression , replacement\_value )
```
SELECT ISNULL(COUNT(comment), 0) AS total
FROM dbo.omment
WHERE resp = MMColParam2
```
|
You can use [COALESCE](https://msdn.microsoft.com/en-us/library/ms190349.aspx), which "Evaluates the arguments in order and returns the current value of the first expression that initially does not evaluate to NULL". You pass to it **count(comment)** as the first argument, and **zero** as the second:
```
SELECT COALESCE(count(comment), 0) as total
FROM dbo.omment
WHERE resp = MMColParam2
```
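Worth noting: an ungrouped `COUNT` over zero rows already returns 0, not NULL; it is the `HAVING` clause in the question that discards the zero row. A sqlite3 sketch (the `resp` value `'x'` is made up, and the `HAVING` is emulated by a filter on a derived table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE comment (comment TEXT, resp TEXT, com_stat TEXT)")

# Ungrouped COUNT over zero matching rows returns a single 0 row, not NULL.
without_having = con.execute(
    "SELECT COUNT(comment) AS total FROM comment "
    "WHERE resp = 'x' AND com_stat = 'No'"
).fetchone()

# The question's HAVING COUNT(comment) > 0 acts like this outer filter
# and throws that zero row away, which is what made the result disappear.
with_having = con.execute(
    "SELECT total FROM (SELECT COUNT(comment) AS total FROM comment "
    "WHERE resp = 'x' AND com_stat = 'No') WHERE total > 0"
).fetchone()
```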
|
SQL-Query return zero count if no records are found
|
[
"",
"sql",
"sql-server",
""
] |
I have a table named `etablissement_amenities` with columns
```
id
etablissement_id
amenities_id
```
I need to delete my duplicate row.
For example I have duplicate row like this.
```
id etablissement_id amenities_id
1 1 1
2 1 1
3 2 2
4 6 3
5 6 3
```
I have made a duplicate of my table and I have tried this query
```
DELETE FROM doublon
WHERE id < ANY (SELECT etablissement_amenities.id
FROM doublon, etablissement_amenities
WHERE doublon.id = etablissement_amenities.id
AND doublon.amenities_id = etablissement_amenities.amenities_id
AND doublon.etablissement_id = etablissement_amenities.etablissement_id);
```
But I have an error
```
You can't specify target table 'doublon' for update in FROM clause
```
Can anybody help me?
Thank you.
|
This query will hopefully delete all duplicated rows from `doublon`
```
DELETE FROM doublon WHERE id in
( Select id from `doublon` where `id` not in
( Select id
From `doublon`
group by etablissement_id,amenities_id
having Count(etablissement_id) > 1 and Count(amenities_id) > 1
union
Select id
From `doublon`
group by etablissement_id,amenities_id
having Count(etablissement_id) = 1 and Count(amenities_id) = 1
)
)
```
Here is for duplicate records **[sqlfiddle](http://sqlfiddle.com/#!9/be63b/52)**
|
There is no information about the table `doublon` you referred to in the above query.
To delete from or update a table with a subquery referencing the same table, try enclosing the subquery in a derived table as follows.
```
DELETE FROM doublon
WHERE id < ANY (Select * from (SELECT etablissement_amenities.id
FROM doublon, etablissement_amenities
WHERE doublon.id = etablissement_amenities.id
AND doublon.amenities_id = etablissement_amenities.amenities_id
AND doublon.etablissement_id = etablissement_amenities.etablissement_id) as Tmp);
```
`P.S. This answer fixes the error in the above query. It is not about deleting duplicate values.`
|
delete duplicate row on table
|
[
"",
"mysql",
"sql",
""
] |
I have an Oracle DB and therein is a table, let's say "aura".

Now in this table there are different records with duplicates. I have to remove these duplicates.
As you can see in the image above **Dsvsd**, **Cvra**, **Single Entry** & **Double Entry**
The logic for the removal of the duplicates is: keep the row with the highest (biggest) `Review_ID`, and keep rows that have no duplicates (like **Single Entry**) as they are. I cannot do this task manually because the record count of the table is 84,000 and there are 18,000 distinct records. **How to do it with a query?** OR **What is the best approach for this task?**
This is the expected result

|
Here's one approach using `rowid` and the `row_number` ranking function to delete duplicates:
```
delete from aura
where rowid in
( select rwid
from ( select rowid rwid,
row_number() over ( partition by name
order by review_id desc) rn
         from aura
) t
where rn>1
)
```
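sqlite has a `rowid` and window functions too, so the pattern can be tried out directly (sample names borrowed from the question, `rowid` standing in for Oracle's `ROWID`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE aura (review_id INTEGER, name TEXT);
INSERT INTO aura VALUES (1,'Dsvsd'), (5,'Dsvsd'), (3,'Cvra'), (2,'Single Entry');
""")

# Delete everything ranked behind the highest review_id within each name.
con.execute("""
    DELETE FROM aura WHERE rowid IN (
        SELECT rwid FROM (
            SELECT rowid AS rwid,
                   ROW_NUMBER() OVER (PARTITION BY name
                                      ORDER BY review_id DESC) AS rn
            FROM aura
        )
        WHERE rn > 1
    )
""")
survivors = sorted(con.execute("SELECT review_id, name FROM aura").fetchall())
```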
|
```
delete from aura where id not in (
select min(id) keep (dense_rank last order by review_id nulls first)
from aura
group by name
)
```
|
Delete Query to remove duplicates and keep the latest record
|
[
"",
"sql",
"oracle",
""
] |
Thanks to NoDisplayName ([SQL: Query to set value based on group parameter](https://stackoverflow.com/questions/29020509/sql-query-to-set-value-based-on-group-parameter/29020724#29020724)) for getting my table to have Primary tags. But now I need help with a query to find errors in my table.
Sample Input:
```
Column1 | Column2 |
ID1 Primary
ID1 Primary
ID1
ID2 Primary
ID2
ID3 Primary
ID3
```
Specifically what would a query to find if there is more than 1 Primary in Column2 associated with the same value in Column1?
I just need the output to be something actionable so I can then remove the duplicated Primary tags.
Thank you!
|
```
select
    column1,
    column2,
    count(column1)
from tablename
group by column1, column2
having count(column1) > 1
```
|
I think the following is the query you want:
```
select column1, count(column1)
from tablename
where column2 = 'Primary'
group by column1
having count(*) > 1;
```
This will pick up only duplicate values of `'Primary'`. It will not pick up duplicate blank values.
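Both queries can be exercised quickly with sqlite3; a sketch of the second query against the sample input (blanks stored as empty strings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tags (Column1 TEXT, Column2 TEXT);
INSERT INTO tags VALUES ('ID1','Primary'), ('ID1','Primary'), ('ID1',''),
                        ('ID2','Primary'), ('ID2',''),
                        ('ID3','Primary'), ('ID3','');
""")

# Only 'Primary' rows are counted, so blanks can never trip the HAVING.
dupes = con.execute("""
    SELECT Column1, COUNT(Column1)
    FROM tags
    WHERE Column2 = 'Primary'
    GROUP BY Column1
    HAVING COUNT(*) > 1
""").fetchall()
```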
|
SQL: Find duplicated values associated with one ID
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
According to the first normal form, every row and column intersection contains exactly one value from the applicable domain.
If name is divided into three parts (First Name, Middle Name, Last Name) and for a few rows I give the Middle Name as NULL, does that violate the 1NF rule?
```
````````````````````````````````````````````````````````````````````````````
Client_ID | First Name | Middle_Name | Last_Name
----------------------------------------------------------------------------
3256 | Jamie | NULL | Zacharias
---------------------------------------------------------------------------
8524 | Madonna | Doremus | NULL
```
|
This does not violate the 1st normal form. A violation of the first normal form would have more than one value for a column within the same row. Using your example, if someone changed their last name and you simply added a new distinct value to the same row, or you tossed in nicknames to the first name column like:
```
Client_ID | First Name   | Middle_Name | Last_Name
----------------------------------------------------------------------------
3256      | Jamie, James | NULL        | Zacharias Huffington
----------------------------------------------------------------------------
8524      | Madonna      | Doremus     | NULL
```
For more information, see this Wikipedia article: <http://en.wikipedia.org/wiki/First_normal_form>
|
E.F. Codd's definition of first normal form (1NF) was very simple:
> A relation [table] is in first normal form if it has the property that none of its domains [sets of possible values] has elements [values] which are themselves sets.
Under this definition it is obvious that first normal form does not preclude having `NULL` values since a `NULL` isn't a set. In addition, Codd's Rule 3 requires that an RDBMS treat `NULL` values systematically.
Now, C.J. Date defines first normal form differently and excludes `NULL`s. That is, any table with one or more nullable columns does not meet Date's definition.
Personally, even though `NULL`s can be annoying at times, they do have a place in RDBMSs and I'll go with Codd's definition over Date's.
|
First Normal Form - Exact one value rule
|
[
"",
"sql",
"oracle",
""
] |
```
Begin Tran
Declare @Ssql As Varchar(100)
Set @Ssql = 'India is mycountry. I love my country and I am proud of its rich and varied heritage'
Select charindex('country', lower(@Ssql))
RollBack
```
This query returns 12 and not 31.
Okay, now I want the char index of the word `country` [later] and not of the word `country` in `mycountry` [former].
How do I do that ?
Please help.
|
Try this
```
Begin Tran
Declare @Ssql As Varchar(100)
Set @Ssql = 'India is mycountry. I love my country and'
Select charindex(' country',lower(@Ssql))+1
RollBack
```
or try using the index start position, as in:
```
SELECT CHARINDEX('country', @Ssql, 14);
```
|
The following code will give you the last occurrence of the search string. Note that the search word must be reversed as well for `CHARINDEX` to line up, and that using the [REVERSE](https://msdn.microsoft.com/en-us/library/ms180040.aspx) function many times can be detrimental to performance.
```
DECLARE @Ssql VARCHAR(500) = 'India is my country. I love my country and I am proud of its rich and varied heritage'
DECLARE @FindChar VARCHAR(7) = 'country'
-- Shows the position of the last occurrence
SELECT LEN(@Ssql) - CHARINDEX(REVERSE(@FindChar), REVERSE(@Ssql)) - LEN(@FindChar) + 2 AS LastOccuredAt
-- Shows text before last occurrence
SELECT LEFT(@Ssql, LEN(@Ssql) - CHARINDEX(REVERSE(@FindChar), REVERSE(@Ssql)) - LEN(@FindChar) + 1) AS Before
-- Shows text after last occurrence
SELECT RIGHT(@Ssql, CHARINDEX(REVERSE(@FindChar), REVERSE(@Ssql)) - 1) AS After
```
If your search string occurs multiple times and you want the `2nd` occurrence, then use:
```
-- Shows the index of the second occurrence.
SELECT CHARINDEX(@FindChar ,@Ssql,CHARINDEX(@FindChar ,LOWER(@Ssql))+1)
```
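The "last occurrence" and "second occurrence" arithmetic is easy to cross-check outside SQL Server; a Python sketch using `str.find`/`str.rfind`, with `+ 1` converting 0-based indexes to the 1-based positions `CHARINDEX` reports:

```python
# Sample string from the answer above; 'country' occurs at 1-based
# positions 13 and 32 (the second is also the last).
s = ('India is my country. I love my country '
     'and I am proud of its rich and varied heritage')
word = 'country'

first_pos = s.find(word) + 1                     # like CHARINDEX(word, s)
second_pos = s.find(word, s.find(word) + 1) + 1  # search again past the first hit
last_pos = s.rfind(word) + 1                     # last occurrence
```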
|
How do I get the char index of an exact word in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I'm almost a newbie, coming from the "Java World" to the "PL/SQL World", working with "legacy" stored procedures, and I have a question.
Best practices for Java naming includes tips such as "the names of variables, methods, classes, etc should be meaningful and auto-documented" (I read on the Clean Coder book).
By default, Oracle's identifiers are limited to 30 characters in length, but I've found abbreviated names that are not always easy to decipher, and I don't know if that was done with app performance in mind or if it's "just a bad practice".
Suppose I've found something like this:
```
PROCEDURE PROCCALCTAXES(vVarNamTax VARCHAR2) IS
vVarNamCouCalTax VARCHAR2(20);
nVarCouId NUMBER;
vVarNamTax VARCHAR2(20);
vVarValTax VARCHAR2(10);
nVarCouTaxTimPaid NUMBER;
vVarExa30Cha VARCHAR2(10);
BEGIN
--business logic
END PROCCALCTAXES;
```
Is there something to take care of if I refactor that code like this?
```
PROCEDURE P_CALCULATE_TAXES(vVarNamTax VARCHAR2) IS
vCountryName VARCHAR2(20);
nCountryId NUMBER;
vTaxName VARCHAR2(20);
vTaxValue VARCHAR2(10);
nCountOfTimesPaid NUMBER;
vAnExampleWith30CharactersLong VARCHAR2(10);
BEGIN
--business logic
END P_CALCULATE_TAXES;
```
Is there a decrease on the performance of the app? What if all variables would be like the last one, with 30 characters? When does a name of a function/procedure/trigger/etc affect the performance? Is there a standard for this?
Thanks!
|
You should definitely make the names more readable, meaningful and auto-documenting. It would be hard to come across any language where a more descriptive name would be disadvantageous (maybe in some codegolf specific languages due to length constraints), at least in terms of performance. Sql and PL/Sql are no different. The name does not affect performance.
There is a 30 character limit though and if you are in an organization with multiple developers, you should definitely think about adopting a standard for parameter names, variable names, globals, any kind of prefixes/suffixes for views, triggers, procedures, packages etc.
You could visit <http://www.toadworld.com/platforms/oracle/w/wiki/8245.plsql-standards.aspx> to see some examples of standards that different people have adopted for them. There is no hard rule that says if something is correct or something is wrong. But there are certain standards that definitely help you improve the readability, maintainability of your code and possibly help write future code faster.
As for your example, I would totally encourage changing the name from `PROCCALCTAXES` to something more readable like `CALCULATE_TAXES` or if you adopted a standard to prefix procedures by `P_` or `PRC_`, that is up to you, but redundant in my opinion and takes up valuable real-estate in the 30 character space.
Another thing that I've found useful is to come up with a standard list of abbreviations applicable to our company. Like ORG for organization, ADDR for address, CUST for customer etc. Helps to fit words in the 30 char limit.
|
Variable names have no impact on performance.
Regardless of the language, variable names should be clear and self-documenting. Given that PL/SQL variable names are limited to 30 characters, however, making things clear and self-documenting may involve some sort of consistent abbreviation. If I know, for example, that I'm going to have a number of different variables
```
order_id
order_line_id
order_line_amount
order_total_amount
order_sales_tax_rate
order_sales_tax_amount
order_sales_tax_taxing_entity_name
order_sales_tax_taxing_entity_fips_code
```
where some of the variables would exceed 30 characters, it often makes sense to abbreviate all of them consistently rather than only abbreviating those variables that would exceed 30 characters. In this case, I may want to consistently abbreviate "order" as "ordr", "sales\_tax" as "stax", and "taxing\_entity" as "txety" in order to ensure that all my variables are less than 30 characters. Forcing developers to know those abbreviations (and to stick to them consistently) is going to decrease the readability of your code but the hit is probably less than forcing developers to figure out whether you abbreviated "sales\_tax" in one variable but not another.
|
Does variables/functions/procedures/etc name affect performance in PL/SQL?
|
[
"",
"oracle",
"plsql",
"naming-conventions",
"sql",
""
] |
I am trying to pass a date as a WHERE condition in a SELECT query, but in the database the field is in datetime format. I don't know how to write the condition.
```
SELECT * FROM (`scl_students`) WHERE `std_added_on` = '2015-03-03'
```
|
I got it.
```
SELECT * FROM (`scl_students`) WHERE DATE(std_added_on) = '2015-03-06'
```
|
I guess the problem is that `std_added_on` contains a time portion which could be non-zero. In order to select all rows for a given date you would write:
```
SELECT *
FROM `scl_students`
WHERE `std_added_on` >= '2015-03-03'
AND `std_added_on` < '2015-03-03' + INTERVAL 1 DAY
```
This is better than `` DATE(`std_added_on`) = ... `` performance wise.
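A quick sqlite3 sketch of the half-open range (string comparison stands in for MySQL's datetime comparison; times are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE scl_students (std_added_on TEXT);
INSERT INTO scl_students VALUES
  ('2015-03-03 00:00:00'), ('2015-03-03 14:30:00'), ('2015-03-04 09:00:00');
""")

# Half-open range: the column stays bare, so an index on it remains usable.
rows = con.execute("""
    SELECT std_added_on FROM scl_students
    WHERE std_added_on >= '2015-03-03'
      AND std_added_on < '2015-03-04'
""").fetchall()
```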
|
sql condition as date and db date format is in datetime
|
[
"",
"mysql",
"sql",
"date",
"datetime",
""
] |
I'm trying to calculate a percentage in SQL. I want to find the percentage of rows whose output starts with `'RT @'`.
```
CREATE TABLE temp
([tweet] varchar(130)) ;
INSERT INTO temp
([tweet])
VALUES
('WillyWonka'),
('Barbara'),
('JoanOfArcJoanArcadia'),
('LisaDLisa Diddle'),
('LisaDSkip Padington'),
('WillyNillySiil Stoinson'),
('RT @Barbara'),
('JoanOfArcJoanArcadia'),
('LisaDLisa Diddle'),
('RT @LisaDSkip Padington')
;
```
My code returns the lines I want to see correctly, but how do I calculate the percentage of those lines out of the total output? I know it's something with selecting the two parts and dividing them, but I can't get that right.
```
select count(*)
from temp
where tweet like 'RT @%'
```
or
```
select *
from temp
where tweet like 'RT @%'
```
I've read a few of the other posts and am no closer to getting the output.
Thank you.
|
You need a `conditional aggregate` to find the percentage of rows that start with `'RT @'`:
```
SELECT ( Count(CASE WHEN tweet LIKE 'RT @%' THEN 1 END)
/ Cast(Count(*) AS NUMERIC(10, 2)) ) * 100
FROM temp
```
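sqlite3 makes the conditional-aggregate idea easy to test; in sqlite a boolean expression evaluates to 0/1, so `SUM` can count the matches directly (a sketch over a subset of the question's rows, with the table renamed because `TEMP` is a keyword in sqlite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tweets (tweet TEXT);
INSERT INTO tweets VALUES
  ('WillyWonka'), ('Barbara'), ('RT @Barbara'), ('RT @LisaDSkip Padington');
""")

# SUM counts the 0/1 matches; 100.0 forces floating-point division,
# playing the same role as the CAST in the answer above.
pct = con.execute(
    "SELECT 100.0 * SUM(tweet LIKE 'RT @%') / COUNT(*) FROM tweets"
).fetchone()[0]
```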
|
You'll want a subquery to find the divisor. Something like this (`* 100.0` both scales the result and avoids integer division):
```
select count(*) * 100.0 / (select count(*) from temp) as percentage
from temp t
where tweet like 'RT @%'
```
|
Calculating a percentage in SQL
|
[
"",
"sql",
"sql-server",
"count",
"division",
"percentage",
""
] |
I am trying to work out the `SQL` to average a series of numbers, but only to include those greater than 0.
For example:
```
Field
4
5
0
6
SELECT Avg(Field) FROM Table;
```
gives 3.75. But I want it to ignore the 0 so the average is 5.
```
SELECT Avg(Field) FROM Table where Field > 0;
```
works but what if I have more than 1 field:
```
Field1 Field2
4 2
5 0
0 3
6 4
SELECT Avg(Field1), Avg(Field2) FROM Table;
```
Using
```
SELECT Avg(Field1), Avg(Field2) FROM Table where Field1 >0 and Field2>0;
```
will not work, because it discards an entire row whenever either field is 0.
How do I do that? Note the numbers can be anything from 0 to 100
|
You can use a conditional expression:
```
SELECT
Avg(case when Field1 > 0 then Field1 end),
Avg(case when Field2 > 0 then Field2 end)
FROM table
```
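This is easy to check; a minimal sketch with Python's `sqlite3` on the question's sample values (the `CASE` turns zeros into NULL, and `AVG()` skips NULLs per column, so one field's zero no longer discards the other field's value):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field1 INTEGER, field2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(4, 2), (5, 0), (0, 3), (6, 4)])

# Per-column conditional averages: each AVG ignores only its own zeros
a1, a2 = con.execute(
    "SELECT AVG(CASE WHEN field1 > 0 THEN field1 END), "
    "       AVG(CASE WHEN field2 > 0 THEN field2 END) FROM t"
).fetchone()
print(a1, a2)  # 5.0 3.0
```

`AVG(NULLIF(field1, 0))` is an equivalent, slightly terser spelling of the same idea.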
|
Just filter out 0 using the where clause.
```
SELECT Avg(Field) FROM Table Where Field > 0
```
|
Using SQL to work out the average of some, but not all, records
|
[
"",
"sql",
""
] |
It is Microsoft SQL Server.
In this column `PostalCode` from the AdventureWorks 2012 `Person.Address` table, there are numeric and string values.
I want to get table with rows `WHERE PostalCode < 7000`
This does not work as expected:
```
USE [AdventureWorks2012]
SELECT *
FROM Person.Address
WHERE ISNUMERIC(PostalCode) = 1
AND PostalCode < 7000
```
because I get this error:
> Conversion failed when converting the nvarchar value 'K4B 1T7' to data type int.
I can make it, by creating temporally table like this:
```
/* creating of temp table */
USE AdventureWorks2012
SELECT *
INTO temp2
FROM Person.Address
WHERE ISNUMERIC(PostalCode) = 1
/* get data from temp table */
SELECT *
FROM temp2
WHERE PostalCode < 7000
```
But this is a bad approach, because of poor performance and the needless temp table.
What is the better way to get table with rows `WHERE PostalCode < 7000` but data has not only numeric values?
|
The error is being returned because the conditions being evaluated are not short-circuiting - the condition `PostalCode<7000` is being evaluated even where the postal code is non-numeric.
Instead, try:
```
SELECT *
from Person.Address
WHERE CASE WHEN PostalCode NOT LIKE '%[^0-9]%'
THEN CAST(PostalCode AS NUMERIC)
ELSE CAST(NULL AS NUMERIC)
END <7000
```
(Updated following comments)
|
If you're on SQL Server 2012 or newer you should use `try_convert` instead of `isnumeric`. `ISNUMERIC` has some funny issues: it returns 1 even for strings that can't be converted into a number. So something like this should work:
```
SELECT *
FROM Person.Address
WHERE try_convert(int, PostalCode) < 7000
```
If the string can't be converted, `try_convert` returns null.
MSDN: <https://msdn.microsoft.com/en-us/library/hh230993.aspx>
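SQLite has no `try_convert`, but the same idea — never compare a value that might not be numeric — can be sketched there with a `GLOB` guard inside a `CASE` (table and data are invented for the demo, run with Python's `sqlite3`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE address (postal_code TEXT)")
con.executemany("INSERT INTO address VALUES (?)",
                [("K4B 1T7",), ("6500",), ("98052",), ("1234",)])

# The CASE yields NULL for codes containing any non-digit; NULL < 7000
# evaluates to NULL, so those rows are filtered out instead of raising
# a conversion error
rows = con.execute(
    "SELECT postal_code FROM address "
    "WHERE CASE WHEN postal_code NOT GLOB '*[^0-9]*' "
    "           THEN CAST(postal_code AS INTEGER) END < 7000 "
    "ORDER BY postal_code"
).fetchall()
print(rows)  # [('1234',), ('6500',)]
```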
|
Conversion failed. SELECT * from Person.Address WHERE ISNUMERIC(PostalCode) =1 AND PostalCode<7000
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a voting system and the tables look like this:
```
answers
-----------
int id
text content
results
-----------
int id
int answers_id
```
I want a select so I can see each answer and its number of responses, even if there are no votes (results).
I tried something like this:
```
SELECT
count(results.answers_id), answers.content
from answers
left outer join results
on answers.id = results.answers_id
group by results.answers_id
```
But it doesn't get the answers with 0 votes(results).
|
This should do the job:
```
SELECT count(results.answer_id), answers.content
from answers left outer join results on answers.id = results.answer_id
group by answers.content;
```
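As a quick sanity check of this pattern, here is a sketch with Python's `sqlite3`, seeding one answer that has no votes. `COUNT(results.answers_id)` counts only non-NULL join matches, so the unmatched answer shows up with 0 instead of disappearing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE answers (id INTEGER, content TEXT);
    CREATE TABLE results (id INTEGER, answers_id INTEGER);
    INSERT INTO answers VALUES (1, 'yes'), (2, 'no'), (3, 'maybe');
    INSERT INTO results VALUES (1, 1), (2, 1), (3, 2);  -- 'maybe' has no votes
""")

# LEFT JOIN keeps answers with no matching results row
rows = con.execute("""
    SELECT answers.content, COUNT(results.answers_id)
    FROM answers LEFT JOIN results ON answers.id = results.answers_id
    GROUP BY answers.content
    ORDER BY answers.content
""").fetchall()
print(rows)  # [('maybe', 0), ('no', 1), ('yes', 2)]
```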
|
Try
```
SELECT
    count(results.answers_id) as acount, answers.content
from answers
left join results
    on answers.id = results.answers_id
group by answers.content
```
|
mySQL voting system SELECT
|
[
"",
"mysql",
"sql",
""
] |
I am trying to get the last element of an ordered set, stored in a database table. The ordering is defined by one of the columns in the table. Also the table contains multiple sets, so I want the last one for each of the sets.
As an example consider the following table:
```
benchmarks=# select id,sorter from aggtest ;
id | sorter
----+--------
1 | 1
3 | 1
5 | 1
2 | 2
7 | 2
4 | 1
6 | 2
(7 rows)
```
Sorter values 1 and 2 define the two sets; the sets are ordered by the id column. To get the last element of each set, I defined an aggregate function:
```
CREATE FUNCTION public.last_agg ( anyelement, anyelement )
RETURNS anyelement LANGUAGE sql IMMUTABLE STRICT AS $$
SELECT $2;
$$;
CREATE AGGREGATE public.last (
sfunc = public.last_agg,
basetype = anyelement,
stype = anyelement
);
```
As explained [here](https://wiki.postgresql.org/wiki/First/last_%28aggregate%29).
However when I use this I get:
```
benchmarks=# select last(id),sorter from aggtest group by sorter order by sorter;
last | sorter
------+--------
4 | 1
6 | 2
(2 rows)
```
However, I want to get `(5,1)` and `(7,2)`, as these are the last ids (numerically) in each set. Looking at how the aggregate mechanism works, I can see quite well why the result is not what I want. The items are returned in the order I added them, and then aggregated so that the last one I added is returned.
I tried sorting by ids, so that each group is sorted independently, however that does not work:
```
benchmarks=# select last(id),sorter from aggtest group by sorter order by sorter,id;
ERROR: column "aggtest.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: ...(id),sorter from aggtest group by sorter order by sorter,id;
```
If I wrap the sorting criteria in another aggregate, I get wrong data again:
```
benchmarks=# select last(id),sorter from aggtest group by sorter order by sorter,last(id);
last | sorter
------+--------
4 | 1
6 | 2
(2 rows)
```
Also grouping by id in addition to sorter does not work obviously.
Of course there is an easier way to get the last (highest) id for each group, by using the `max` aggregate. However, I am not so much interested in the id itself as in the data associated with it (i.e. in the same row). Hence I want to sort by id and then aggregate so that the row with the highest id is returned for each group.
What is the best way to accomplish this?
**EDIT: Why does `max(id)` grouped by sorter not work**
Assume the following complete table (unsorter represents the additional data I have in the table):
```
benchmarks=# select * from aggtest ;
id | sorter | unsorter
----+--------+----------
1 | 1 | 1
3 | 1 | 2
5 | 1 | 3
2 | 2 | 4
7 | 2 | 5
4 | 1 | 6
6 | 2 | 7
(7 rows)
```
I would like to retrieve the lines:
```
id | sorter | unsorter
----+--------+----------
5 | 1 | 3
7 | 2 | 5
```
However with `max(id)` and grouping by sorter I get:
```
benchmarks=# select max(id),sorter,unsorter from aggtest group by sorter;
ERROR: column "aggtest.unsorter" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: select max(id),sorter,unsorter from aggtest group by sorter;
```
Using a `max(unsorter)` obviously does not work either:
```
benchmarks=# select max(id),sorter,max(unsorter) from aggtest group by sorter;
max | sorter | max
-----+--------+-----
5 | 1 | 6
7 | 2 | 7
(2 rows)
```
However using distinct (the accepted answer) I get:
```
benchmarks=# select distinct on (sorter) id,sorter,unsorter from aggtest order by sorter, id desc;
id | sorter | unsorter
----+--------+----------
5 | 1 | 3
7 | 2 | 5
(2 rows)
```
Which has the correct additional data. The join approach also seems to work, but is slightly slower on the test data.
|
Why not use a window function:
```
select id, sorter
from (
select id, sorter,
row_number() over (partition by sorter order by id desc) as rn
from aggtest
) t
where rn = 1;
```
Or using Postgres `distinct on` operator which is usually faster:
```
select distinct on (sorter) id, sorter
from aggtest
order by sorter, id desc
```
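`DISTINCT ON` is Postgres-specific, but the window-function variant runs on any engine with window support; here is a sketch against the question's data using Python's `sqlite3` (needs SQLite ≥ 3.25):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE aggtest (id INTEGER, sorter INTEGER, unsorter INTEGER)")
con.executemany("INSERT INTO aggtest VALUES (?, ?, ?)",
                [(1, 1, 1), (3, 1, 2), (5, 1, 3), (2, 2, 4),
                 (7, 2, 5), (4, 1, 6), (6, 2, 7)])

# Rank rows inside each sorter group by descending id and keep rank 1:
# the whole row survives, including the associated unsorter column
rows = con.execute("""
    SELECT id, sorter, unsorter FROM (
        SELECT id, sorter, unsorter,
               ROW_NUMBER() OVER (PARTITION BY sorter ORDER BY id DESC) AS rn
        FROM aggtest
    ) WHERE rn = 1 ORDER BY sorter
""").fetchall()
print(rows)  # [(5, 1, 3), (7, 2, 5)]
```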
|
You write:
> Of course there is an easier way, to get the last (highest) id for
> each group by using the max aggregate. However, I am not so much
> interested in the id but as in data associated with it (i.e. in the
> same row).
This query will give you the data associated with the highest id of each sorter group.
```
select a.* from aggtest a
join (
select max(id) max_id, sorter
from aggtest
group by sorter
) b on a.id = b.max_id and a.sorter = b.sorter
```
|
Get last element of an ordered set in postgresql
|
[
"",
"sql",
"postgresql",
"aggregate-functions",
"greatest-n-per-group",
""
] |
I have two tables to join, and I want each row to be joined only once. Here is sample data:
```
CREATE TABLE A (id smallint, val varchar(1) );
CREATE TABLE B (id smallint, val varchar(1) );
INSERT INTO A VALUES (1, 'a'), (2, 'b'), (3, 'c'), (3, 'd');
INSERT INTO B VALUES (2, 'x'), (3, 'y'), (4, 'z'), (3, 'k');
```
When we join on id we obtain:
```
mysql> SELECT * FROM A JOIN B ON A.id = B.id;
+------+------+------+------+
| id | val | id | val |
+------+------+------+------+
| 2 | b | 2 | x |
| 3 | c | 3 | y |
| 3 | d | 3 | y |
| 3 | c | 3 | k |
| 3 | d | 3 | k |
+------+------+------+------+
```
What I want is either:
```
+------+------+------+------+ +------+------+------+------+
| id | val | id | val | | id | val | id | val |
+------+------+------+------+ or +------+------+------+------+
| 2 | b | 2 | x | | 2 | b | 2 | x |
| 3 | c | 3 | y | | 3 | d | 3 | y |
| 3 | d | 3 | k | | 3 | c | 3 | k |
+------+------+------+------+ +------+------+------+------+
```
The order and arrangement don't matter.
Is it possible? How?
According to [this answer](https://stackoverflow.com/a/19549626/3628160) I need to specify how to select the matching row. In that case I guess I would need to check in a subquery whether the row of the joined table has already been used, or use a kind of counter related to the id; but I do not know how to write this.
**Edit:**
To clarify I want each row with id 3 to be mapped with another one in the joined table, such as each row is mapped only once (I am also interested to know what happens when the number of rows with same id is different in the two tables):
```
(3, c) -> (3, y) [join only with the first row such as B.id = 3]
(3, d) -> (3, k) [the first row has been used, so map with (and only with) the second row such as B.id = 3]
```
But as I said mapping may be in any other order (e.g. mapping rows in reverse order).
|
I could make it work thanks to this [blog post](http://www.xaprb.com/blog/2005/09/27/simulating-the-sql-row_number-function/):
```
SELECT A2.id, A2.val, B2.val FROM (
SELECT l.id, l.val, COUNT(*) AS n1 FROM A AS l JOIN A AS r ON l.id = r.id AND l.val >= r.val GROUP BY l.id, l.val
) AS A2 JOIN (
SELECT l.id, l.val, COUNT(*) AS n2 FROM B AS l JOIN B AS r ON l.id = r.id AND l.val >= r.val GROUP BY l.id, l.val
) AS B2 ON
A2.id = B2.id AND n1 = n2;
```
The result is:
```
+------+------+------+
| id | val | val |
+------+------+------+
| 2 | b | x |
| 3 | c | k |
| 3 | d | y |
+------+------+------+
```
|
[SQL Fiddle](http://sqlfiddle.com/#!9/166bc/12)
**MySQL 5.6 Schema Setup**:
```
CREATE TABLE A (id smallint, val varchar(1) );
CREATE TABLE B (id smallint, val varchar(1) );
INSERT INTO A VALUES (1, 'a'), (2, 'b'), (3, 'c'), (3, 'd');
INSERT INTO B VALUES (2, 'x'), (3, 'y'), (4, 'z'), (3, 'k');
```
**Query 1**:
```
select
aa.id as aid
, aa.val as aval
, bb.id as bid
, bb.val as bval
from (
select
@row_num :=IF(@prev_value=a.id,@row_num+1,1)AS RowInGroup
, a.id
, a.val
, @prev_value := a.id
from (
SELECT id, val
FROM A
group by id, val
/* order by ?? */
) a
CROSS JOIN (
SELECT @row_num :=1, @prev_value :=''
) vars
) aa
INNER JOIN (
select
@row_num :=IF(@prev_value=b.id,@row_num+1,1)AS RowInGroup
, b.id
, b.val
, @prev_value := b.id
from (
SELECT id, val
FROM B
group by id, val
/* order by ?? */
) b
CROSS JOIN (
SELECT @row_num :=1, @prev_value :=''
) vars
) bb on aa.id = bb.id and aa.RowInGroup = bb.RowInGroup
order by
aa.id
, aa.val
```
**[Results](http://sqlfiddle.com/#!9/166bc/12/0)**:
```
| id | val | id | val |
|----|-----|----|-----|
| 2 | b | 2 | x |
| 3 | c | 3 | k |
| 3 | d | 3 | y |
```
**nb**: you could influence the final result by introducing `order by` in the subqueries that `group by id, val` where the sequence `RowInGroup` is calculated.
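On engines with window functions, the whole user-variable machinery collapses into two `ROW_NUMBER()` calls; a sketch with Python's `sqlite3` (SQLite ≥ 3.25) on the question's tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (id INTEGER, val TEXT);
    CREATE TABLE B (id INTEGER, val TEXT);
    INSERT INTO A VALUES (1,'a'), (2,'b'), (3,'c'), (3,'d');
    INSERT INTO B VALUES (2,'x'), (3,'y'), (4,'z'), (3,'k');
""")

# Number the rows within each id on both sides, then join on (id, rank):
# each row pairs with exactly one partner; unmatched ranks drop out
rows = con.execute("""
    SELECT a.id, a.val, b.val FROM
      (SELECT id, val, ROW_NUMBER() OVER (PARTITION BY id ORDER BY val) rn
       FROM A) a
      JOIN
      (SELECT id, val, ROW_NUMBER() OVER (PARTITION BY id ORDER BY val) rn
       FROM B) b
      ON a.id = b.id AND a.rn = b.rn
    ORDER BY a.id, a.val
""").fetchall()
print(rows)  # [(2, 'b', 'x'), (3, 'c', 'k'), (3, 'd', 'y')]
```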
|
JOIN each row only once β arrange (distribute) rows 1 to 1
|
[
"",
"mysql",
"sql",
"join",
"row-number",
""
] |
Let's say we have quite a few tables (T1, T2, ... T50), and we would like to have n-to-n relations between all of them.
What would be a proper way of implementing that?
Having a relations table for each pair Tx and Ty would not be practical if the number of tables goes up to 100 or more.
The current solution I have is
```
relationships_table
id_x, table_name_x, id_y, table_name_y
```
for storing all the relationships. This way adding new tables is trivial, but what are the disadvantages?
**1)** What is a better way of supporting such a use case, if we're limited to sql?
**2)** How to efficiently solve this if we're not limited to sql?
|
The solution you proposed is the most reasonable solution to the stated problem. But the problem seems somewhat unreasonable.
If you need a graph, then you only need two tables, one for the nodes and another one for the edges.
If some nodes are of specific types then you can have extra specialization tables for them.
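The nodes-plus-edges layout needs only two tables regardless of how many entity types exist. A minimal sketch with Python's `sqlite3` (names and data invented), including one hop of traversal:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, kind TEXT, name TEXT);
    CREATE TABLE edges (src INTEGER REFERENCES nodes(id),
                        dst INTEGER REFERENCES nodes(id));
    INSERT INTO nodes VALUES (1,'company','Acme'), (2,'person','Ada'),
                             (3,'person','Bob');
    INSERT INTO edges VALUES (1, 2), (1, 3);  -- Acme -> Ada, Acme -> Bob
""")

# One self-join over edges answers "what does Acme link to?" for any node kind
rows = con.execute("""
    SELECT n2.name FROM nodes n1
    JOIN edges e ON e.src = n1.id
    JOIN nodes n2 ON n2.id = e.dst
    WHERE n1.name = 'Acme' ORDER BY n2.name
""").fetchall()
print(rows)  # [('Ada',), ('Bob',)]
```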
|
Add only the essential Relation tables. tblA relates to tblB, and tblB relates to tblC. So, *usually* that implies that you can get from A to C via
```
FROM tblA
JOIN tblB ON ...
JOIN tblC ON ...
```
Won't this do? And need not much more than 50 extra tables? And be a lot cleaner?
|
How to store a graph of sql tables
|
[
"",
"mysql",
"sql",
"database",
"performance",
"graph",
""
] |
I created a procedure to insert data from another table as follows:
```
DROP PROCEDURE IF EXISTS STP_addCompanySection;
DELIMITER //
CREATE PROCEDURE STP_addCompanySection ()
BEGIN
insert into companySection(categoryName,description)
select categoryName,description from BusinessCat where deleted='0';
END \g
DELIMITER ;
```
Can I add another column on insert like this:
```
DROP PROCEDURE IF EXISTS STP_addCompanySection;
DELIMITER //
CREATE PROCEDURE STP_addCompanySection (
IN inCompanyId int
)
BEGIN
insert into companySection(categoryName,description,comId)
(select categoryName,description from BusinessCat where deleted='0' ,inCompanyId);
select LAST_INSERT_ID() id;
END \g
DELIMITER ;
```
thanks
|
Try something like:
```
insert into companySection(categoryName,description,comId)
(select categoryName,description, inCompanyId
from BusinessCat
where deleted='0');
```
|
You can do it this way:
```
insert into companySection(categoryName,description,comId)
select categoryName,description,inCompanyId from BusinessCat where deleted='0';
select LAST_INSERT_ID() id;
```
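Both answers splice the parameter into the select list as a constant, repeated once per selected row. A quick sketch of the same shape with Python's `sqlite3` (the schema is a stripped-down stand-in for the question's tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE BusinessCat (categoryName TEXT, description TEXT, deleted TEXT);
    CREATE TABLE companySection (categoryName TEXT, description TEXT, comId INTEGER);
    INSERT INTO BusinessCat VALUES ('IT','tech','0'), ('HR','people','0'),
                                   ('Old','gone','1');
""")

company_id = 42  # plays the role of the IN inCompanyId parameter

# The constant rides along in the SELECT list, once per qualifying row
con.execute("""
    INSERT INTO companySection (categoryName, description, comId)
    SELECT categoryName, description, ? FROM BusinessCat WHERE deleted = '0'
""", (company_id,))

rows = con.execute(
    "SELECT categoryName, comId FROM companySection ORDER BY categoryName"
).fetchall()
print(rows)  # [('HR', 42), ('IT', 42)]
```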
|
insert into with select statement sql server
|
[
"",
"sql",
""
] |
I have many records with duplicate taskids assigned to multiple people, but I want to show distinct records, meaning only one row per taskid in the output.
Below is my query, which is not working; please give me a solution.
```
SELECT DISTINCT
taskid, taskname, person, userid, dept, date, status, monitor,
comments, monitor_comments, respondtime, assignedby,
reassigncomment, priority,date_complete, followup_date, re_status
FROM
task
WHERE
(status IS NULL)
```
|
I have added a column `priority` in which the value is 1 for one row of each duplicated taskid and 0 for the others, so I can filter on it:
```
SELECT DISTINCT
    taskid, taskname, person, userid, dept, date, status, monitor,
    comments, monitor_comments, respondtime, assignedby,
    reassigncomment, priority, date_complete, followup_date, re_status
FROM
    task
WHERE
    (status IS NULL) AND (priority = '1')
```
|
In your case the result is distinct, but not what you want, because `DISTINCT` applies to the whole row. If you need only distinct task ids, you should use this:
```
SELECT DISTINCT taskid
FROM task
WHERE (status IS NULL)
```
then result would be distinct task ids.
|
Distinct Clause not working in SQL Server 2005
|
[
"",
"sql",
"sql-server-2005",
""
] |
Shouldn't `first_value()` ordered descending give the same results as `last_value()` ordered ascending? I ran the following query and got the results attached below the query.
```
select random_date, trunc(random_date,'MM') random_month, random_value,
first_value(random_value) over (partition by trunc(random_date,'MM') order by random_date desc) first_,
last_value(random_value) over (partition by trunc(random_date,'MM') order by random_date) last_
from
(select to_date(round (dbms_random.value (1, 28))
|| '-'
|| round (dbms_random.value (02, 03))
|| '-'
|| round (dbms_random.value (2014, 2014)),
'DD-MM-YYYY') + level - 1 random_date,
round(100*(dbms_random.value)) random_value
from dual
connect by level <= 10) order by 2, 1;
```
(Screenshot of the query output: the `first_` and `last_` columns differ per month instead of matching.)
|
You forgot about the moving window (frame) that analytic functions use by default. See the difference when the frame is widened to `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING`:
```
SQL> select random_date, trunc(random_date,'MM') random_month, random_value,
2 first_value(random_value) over (partition by trunc(random_date,'MM') order by random_date desc rows between unbounded preceding and unbounded following) first_,
3 last_value(random_value) over (partition by trunc(random_date,'MM') order by random_date rows between unbounded preceding and unbounded following) last_
4 from
5 (select to_date(round (dbms_random.value (1, 28))
6 || '-'
7 || round (dbms_random.value (02, 03))
8 || '-'
9 || round (dbms_random.value (2014, 2014)),
10 'DD-MM-YYYY') + level - 1 random_date,
11 round(100*(dbms_random.value)) random_value
12 from dual
13 connect by level <= 10) order by 2, 1;
RANDOM_DATE RANDOM_MONTH RANDOM_VALUE FIRST_ LAST_
----------- ------------ ------------ ---------- ----------
02.02.2014 01.02.2014 93 75 75
09.02.2014 01.02.2014 78 75 75
11.02.2014 01.02.2014 69 75 75
12.02.2014 01.02.2014 13 75 75
21.02.2014 01.02.2014 91 75 75
25.02.2014 01.02.2014 75 75 75
01.03.2014 01.03.2014 54 80 80
15.03.2014 01.03.2014 37 80 80
16.03.2014 01.03.2014 92 80 80
17.03.2014 01.03.2014 80 80 80
10 rows selected
```
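The frame behaviour is not Oracle-specific; SQLite's window functions (≥ 3.25) use the same default frame, so the effect can be reproduced with Python's `sqlite3` on a tiny invented table, contrasting the default frame with the full-partition frame for `LAST_VALUE()`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (grp INTEGER, d INTEGER, v INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 1, 10), (1, 2, 20), (1, 3, 30)])

rows = con.execute("""
    SELECT d,
           -- default frame ends at the current row: "last value so far"
           LAST_VALUE(v) OVER (PARTITION BY grp ORDER BY d) AS dflt,
           -- full frame: the true last value of the whole partition
           LAST_VALUE(v) OVER (PARTITION BY grp ORDER BY d
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS full_
    FROM t ORDER BY d
""").fetchall()
print(rows)  # [(1, 10, 30), (2, 20, 30), (3, 30, 30)]
```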
|
As I see it, you are using `first_value` on values compared as strings, but you need numeric ordering, so change it to `min` and `max`.
As a string, '10' sorts before '2', but as a number 2 comes first.
|
Should first_value() over (order by something asc) be the same as last_value () over (order by something desc)?
|
[
"",
"sql",
"oracle",
"sorting",
"analytic-functions",
""
] |
I have this simple SQL query :
```
UPDATE [mydb].[dbo].[EXPLANATIONS]
SET [EXPLANATION] = " This is the new explanation."
WHERE [RECORDNUMBER] = 123456
AND [EXPLANATIONNUMBER] = 7;
```
and I get this error :
> Msg 207, Level 16, State 1, Line 1
> Invalid column name ' This is the new explanation.' .
How can I get rid of this error and update the row correctly? Thanks.
|
Double quotes are usually used for object names (e.g. column names); that is part of the SQL-92 standard.
In ANSI SQL, double quotes quote object names (e.g. tables), which allows them to contain characters not otherwise permitted, or to be the same as reserved words (avoid this, really).
Use single quotes for the string literal:
```
UPDATE [mydb].[dbo].[EXPLANATIONS]
SET [EXPLANATION] = 'This is the new explanation.'
WHERE [RECORDNUMBER] = 123456 AND [EXPLANATIONNUMBER] = 7;
```
|
Instead of using double quotes, use single quotes like:
```
SET [EXPLANATION] = '<i>new explanation</i>'
```
|
Error in SQL UPDATE CLAUSE
|
[
"",
"sql",
"sql-server",
"sql-update",
""
] |