Hi, I am facing one critical issue; please suggest a solution. I have records in my SQL table like the below: ``` Table Name (tbl_rawdata) ID Price DATE 1 20 20/8/2014 2 20 20/8/2013 ``` Since we don't have the actual data, we need to create sample data for testing. For example, we need to insert 60 records like those shown in the table, but with different dates: ``` ID Price DATE 1 20 20/8/2014 1 20 21/8/2014 1 20 22/8/2014 ----------------------- 1 20 25/8/2014 ------------------------ 1 20 26/8/2014 1 20 27/8/2014 1 20 28/8/2014 ``` That means we need to get the next date (excluding Saturdays and Sundays) and keep inserting like that for 60 days. In the same way, we have different ID values (around 100) in tbl\_rawdata, and we need to repeat the same for all of them. Please help with this case. Thanks in advance.
You can remove the Dayname column from the results if it is not desired: ``` DECLARE @FirstDate DATETIME -- You can change @FirstDate to any start date you desire SELECT @FirstDate = '20140820' -- Recursive CTE to generate the dates ;WITH cte AS ( SELECT 1 AS ID, @FirstDate AS FromDate, DATENAME(dw, @FirstDate) AS Dayname UNION ALL SELECT CASE WHEN DayName NOT IN ('Saturday','Sunday') THEN cte.ID + 1 ELSE cte.ID END AS ID, DATEADD(d, 1 ,cte.FromDate), DATENAME(dw, DATEADD(d, 1 ,cte.FromDate)) AS Dayname FROM cte WHERE ID < 60 ) SELECT ID, 20 AS Price, FromDate AS Date, Dayname FROM CTE WHERE DayName NOT IN ('Saturday','Sunday') ```
Try this ``` select id, price,dateadd(day,number,date) from tbl_rawdata as t1, master..spt_values as t2 where type='p' and number<60 and datename(weekday,dateadd(day,number,date)) not in ('saturday','sunday') ```
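Both answers build the same sequence: step forward one calendar day at a time and skip weekends. A minimal Python sketch of that logic (the start date and count are taken from the question; the function name is illustrative):

```python
from datetime import date, timedelta

def business_days(start, count):
    """Return the first `count` dates from `start` onward that fall on weekdays."""
    days = []
    d = start
    while len(days) < count:
        if d.weekday() < 5:  # Mon=0 .. Fri=4; 5 and 6 are Sat/Sun
            days.append(d)
        d += timedelta(days=1)
    return days

# 60 business days starting from the question's 20/8/2014
sample = business_days(date(2014, 8, 20), 60)
```

As in the question's expected output, the weekend of 23-24 August is skipped, so the fourth row lands on Monday the 25th.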
Critical datetime function issue for stock market sample
[ "", "sql", "sql-server-2008", "" ]
I have this error: ``` Use of uninitialized value $index in concatenation (.) or string at getdesc.pl line 43, <OctetsIn> line 2. ``` Part of my code is as follows: ``` my $select_sth = $dbh->prepare("SELECT Hid,Hostname,IP FROM Devices") or die "$dbh->errstr"; $select_sth->execute() or die "$dbh->errstr"; while ( my $row_ref = $select_sth->fetchrow_hashref ) { my $hostname = $row_ref->{'Hostname'}; if ( $hostname ne 'null' ) { my $hid = $row_ref->{'Hid'}; my $ip = $row_ref->{'IP'}; my $desc = "null"; my $index = 0; open( OctetsIn, "snmpwalk -v2c -c public $ip 1.3.6.1.2.1.18 |" ) or die "can't exec: $!"; while (<OctetsIn>) { chomp; print <OctetsIn> . "\n"; /IF-MIB::ifAlias.(\S+) = STRING: (\S+)/; $index = $1; $desc = $2; $dbh->do( "INSERT INTO Description (Hid,index,desc) Values ($hid,$index,'$desc')" ) or die "$dbh->errstr"; } } } close(OctetsIn); ``` What is the error in my code, and does anyone know how to fix it? The error is reported for this line: ``` $dbh->do("INSERT INTO Description (Hid,index,desc) Values ($hid,$index,'$desc')") or die "$dbh->errstr"; ```
There are three issues regarding your innermost `while` loop: 1. You're reading from the filehandle twice when trying to just print the current line: ``` while (<OctetsIn>) { chomp; print <OctetsIn> . "\n"; # Should be: print "$_\n"; ``` 2. Always verify that your regular expression matched before using capture variables. ``` /IF-MIB::ifAlias.(\S+) = STRING: (\S+)/; $index = $1; # Will be undefined if regex doesn't match $desc = $2; ``` 3. Use [placeholders and bind values](https://metacpan.org/pod/DBI#Placeholders-and-Bind-Values) instead of manually including values in a SQL statement. You should never interpolate values directly into a SQL statement like the one below: ``` "INSERT INTO Description (Hid,index,desc) Values ($hid,$index,'$desc')" ``` To clean up these three issues, I'd transform your inner while loop to something like the following. ``` while (<OctetsIn>) { chomp; print "$_\n"; if (my ($index, $desc) = /IF-MIB::ifAlias.(\S+) = STRING: (\S+)/) { $dbh->do( "INSERT INTO Description (Hid,index,desc) Values (?,?,?)", undef, $hid, $index, $desc ) or die $dbh->errstr; } } ```
You should test if regex was successful prior to assigning `$1` to `$index`, ie. ``` # skip to next line if current did not match, as $1 and $2 are undefined /IF-MIB::ifAlias.(\S+) = STRING: (\S+)/ or next; ```
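The underlying bug — using `$1`/`$2` after a regex that may not have matched — exists in most languages. A Python sketch of the guarded pattern, using the SNMP line format from the question (the function name is mine):

```python
import re

IFALIAS_RE = re.compile(r'IF-MIB::ifAlias\.(\S+) = STRING: (\S+)')

def parse_ifalias(line):
    """Return (index, desc) on a match, or None instead of stale captures."""
    m = IFALIAS_RE.search(line)
    return (m.group(1), m.group(2)) if m else None
```

Returning `None` on non-matching lines forces the caller to handle them, which is exactly what the Perl `... or next;` idiom achieves.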
Use of uninitialized value in concatenation (.) or string at
[ "", "sql", "perl", "" ]
I've got a SQL Server query like this: ``` Select * ,Cast(Column1 as float) as Column1Convert ,Cast(Column2 as float) as Column2Convert ,Column1Convert/Colun2Convert AS [Result] From MyTable Order By Points DESC, Result ASC ``` But I'm receiving the error "Invalid column name 'Column1Convert'." and a second error "Invalid column name 'Column2Convert'." Could you tell me what I'm doing wrong?
This is because you are referencing column aliases at the same level of the query where they are defined, which is not allowed. Try: ``` Select * ,Cast(Column1 as float) as Column1Convert ,Cast(Column2 as float) as Column2Convert ,Cast(Column1 as float)/Cast(Column2 as float) AS [Result] From MyTable Order By Points DESC, Result ASC ```
**Try this:** ``` SELECT Points, Column1Convert, Column2Convert, Column1Convert/Column2Convert AS [Result] FROM ( Select * ,Cast(Column1 as float) as Column1Convert ,Cast(Column2 as float) as Column2Convert From MyTable ) t Order By Points DESC, (Column1Convert/Column2Convert) ASC ```
Ordering query by divide two numbers in Sql Server
[ "", "sql", "sql-server", "" ]
I have a stored procedure that performs calculations and stores a large amount of data in new tables, in another database. If anything goes wrong, I just drop the tables. I've already set the recovery model for this database to simple, as the data is available elsewhere. Is there anything else I can do in the stored procedure to limit writing to the transaction log, or remove transactions entirely, to speed up the process?
It is impossible to completely eliminate the transaction log from the equation in SQL Server. You may try the bulk-logged recovery model in conjunction with `bulk insert`, but if your calculations are complex and cannot be expressed within a single `select` statement, it could be worth trying SSIS.
I suggest using an SSIS package to transfer the data from one database to the other. In SSIS you can control the converted data and can use bulk insert. In bulk-insert mode you limit how much transaction logging your database does.
Stored procedure without transaction
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
A few questions I have here * I always see some SQL written like below (not sure if I've got it right) ``` SELECT a.column_1, a.column_2 FROM table_name WHERE b.column_a = 'some value' ``` I don't quite understand SQL written in such a way. Is it similar to using objects in programming, where you can define an object and variables within the object? If it is, where is the definition of a and b in the SQL above (assuming I got the query right)? * I want to make comparisons between 3 columns (say C1, C2, C3) in 3 different tables, say T1, T2 and T3. The condition is to get the values from C1 in T1 that exist in C2 in T2, but do not exist in C3 in T3. The columns are practically the same, just that some might have different or fewer records than the columns in the other 2 tables, and I want to know what the differences are. Is the query below the right way to do it? ``` select distinct C1 from T1 and (C1) not in (select C2 from T2) and (C1) in (select C3 from T3) order by C1; ``` And is it possible to extend the condition if I want to include more tables in the comparison using the query above? * If I were to customize the query above into something similar to the first question, is the query below the right way to do it? ``` select a.C1 from T1 a and (a.C1) not in (select b.C2 from T2 b) and (a.C1) in (select c.C3 from T3 c) order by a.C1; ``` * What are the advantages of writing a query in the object-like way (as above), compared to writing it in the traditional way? I feel like even if you define a table name as a variable, the variable can only be used within the query where it is defined, and cannot be extended to other queries. Thanks
On the first point: a and b are "table aliases" (shortcut references to the table(s) involved in THAT query), e.g. ``` SELECT a.column_1, a.column_2 FROM table_name_a a ------------------------------- table alias a defined here INNER JOIN table_name_b b -------------------------- table alias b defined here ON a.id = b.id WHERE b.column_a = 'some value' ``` Your second query has a syntax issue: you need `WHERE` as shown in uppercase. It also has performance implications. DISTINCT adds effort to a query, and IN() is really a syntax shortcut for a series of ORs (it might not scale well), but the syntax itself is valid. ``` select distinct C1 from T1 WHERE (C1) not in (select C2 from T2) and (C1) in (select C3 from T3) order by C1; ``` Yes (with performance reservations) you could add more tables into that comparison. You introduce table aliases, done correctly, in your third query - but there is no real advantage in that query structure. Aside from just making code more convenient, aliases serve to distinguish between items that would otherwise be ambiguous. In my first query above, `ON a.id = b.id` shows possible ambiguity in that the 2 tables both have a field of the same name. Prefixing the field name with a table name or table alias resolves that ambiguity.
For your first point. > I always see some SQL written like below (not sure if i get it right) ``` SELECT a.column_1, a.column_2 FROM table_name WHERE b.column_a = 'some value' ``` This query is wrong. It should be like this - ``` SELECT a.column_1, a.column_2 FROM table_name a INNER JOIN --(There might be another join also like left join etc..) table_name b ON a.id = b.id WHERE b.column_a = 'some value' ``` So, as noted, in the above query a and b are just table aliases. There are some cases where you must use them, like when you need to join the same table twice in one query. For the second point, you can also do it like this ``` SELECT DISTINCT C1 FROM T1 t1 WHERE NOT EXISTS ( SELECT C2 FROM T2 t2 where t2.C2 = t1.C1) AND EXISTS ( SELECT C3 FROM T3 t3 where t3.C3 = t1.C1) ORDER BY C1; ``` Personally I prefer aliases, and unless I have a lot of tables they tend to be single-letter ones.
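The `EXISTS`/`NOT EXISTS` form is easy to try outside SQL Server; a sqlite3 sketch with throwaway data (the table contents here are made up purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE T1 (C1 INT);
    CREATE TABLE T2 (C2 INT);
    CREATE TABLE T3 (C3 INT);
    INSERT INTO T1 VALUES (1), (2), (3), (4);
    INSERT INTO T2 VALUES (2);
    INSERT INTO T3 VALUES (1), (2), (3);
""")

# C1 values absent from T2 but present in T3, mirroring the question's query
rows = conn.execute("""
    SELECT DISTINCT C1 FROM T1
    WHERE NOT EXISTS (SELECT 1 FROM T2 WHERE T2.C2 = T1.C1)
      AND EXISTS     (SELECT 1 FROM T3 WHERE T3.C3 = T1.C1)
    ORDER BY C1
""").fetchall()
```

Here 2 is excluded (it exists in T2) and 4 is excluded (it is missing from T3), leaving 1 and 3.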
Few doubts on writing SQL queries
[ "", "sql", "" ]
I'm trying to combine two count queries ``` SELECT COUNT(*) AS total FROM clients WHERE addedby = 1 UNION SELECT COUNT(*) AS converts FROM clients WHERE addedby = 1 AND status = '6' ``` What this returns is ``` total 4 0 ``` This is the correct data; what I was expecting was this ``` total converts 4 0 ```
You don't need a `UNION` query to do this. [`SELECT A UNION SELECT B`](http://dev.mysql.com/doc/refman/5.1/en/union.html) returns the rows of `A` followed by the rows of `B` (deduplicated; if you want *all* rows from both datasets, use `UNION ALL`). What you want is something like this: ``` select (select count(*) from clients where addedby=1) as total, (select count(*) from clients where addedby=1 and status='6') as converts ``` Another way to do this is using a `case ... end` expression that returns `1` if `status='6'`: ``` select count(*) as total, sum(case when status='6' then 1 else 0 end) as converts from clients where addedby=1 ```
No `UNION` needed, do it in one pass. ``` SELECT COUNT(*) as total, SUM(CASE status WHEN '6' THEN 1 ELSE 0 END) as converts FROM clients; ```
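Conditional aggregation like this runs anywhere; a sqlite3 sketch reproducing the question's expected `total`/`converts` pair (the sample rows are invented to match the counts, and an `addedby` filter is added to mirror the question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE clients (addedby INT, status TEXT)")
# four clients added by user 1, none with status '6'
conn.executemany("INSERT INTO clients VALUES (?, ?)",
                 [(1, '1'), (1, '2'), (1, '3'), (1, '5')])

total, converts = conn.execute("""
    SELECT COUNT(*),
           SUM(CASE status WHEN '6' THEN 1 ELSE 0 END)
    FROM clients
    WHERE addedby = 1
""").fetchone()
```

One pass over the table yields both counts as two columns of a single row, which is the shape the asker wanted.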
SQL Union query error
[ "", "mysql", "sql", "" ]
I have a 260-column table in SQL Server. When we run "Select count(\*) from table" it takes almost 5-6 to get the count. The table contains close to 90-100 million records with 260 columns, where more than 50% of the columns contain NULL. Apart from that, users can also build dynamic SQL queries on the table from the UI, so searching 90-100 million records takes time to return results. Is there a way to improve find functionality on a SQL table where the filter criteria can be anything? Can anyone suggest the fastest way to get aggregate data on 25GB of data? The UI should not hang or time out.
Investigate [horizontal partitioning](http://technet.microsoft.com/en-us/library/ms188232%28v=sql.105%29.aspx). This will really only help query performance if you can force users to put the partitioning key into the predicates. Try vertical partitioning, where you split one 260-column table into several tables with fewer columns. Put all the values which are commonly required together into one table. The queries will only reference the table(s) which contain columns required. This will give you more rows per page i.e. fewer pages per query. You have a high fraction of NULLs. [Sparse columns](http://msdn.microsoft.com/en-us/library/cc280604.aspx) may help, but calculate your percentages as they can hurt if inappropriate. There's an [SO](https://stackoverflow.com/questions/1398453/why-when-should-i-use-sparse-column-sql-server-2008) question on this. Filtered indexes and filtered statistics may be useful if the DB often runs similar queries.
Changing my comment into an answer... You are moving from a transaction world where these 90-100 million records are recorded into a data warehousing scenario where you are now trying to slice, dice, and analyze the information you have. There's no easy solution, but odds are you're hitting the limits of what your current system can scale to. In a past job, I had several (6) data fields belonging to each record that were pretty much free text and randomly populated depending on where the data was generated (they were search queries, and people were entering what they would basically enter in Google). With 6 fields like this... I created a dim\_text table that took each entry in any of these 6 fields and replaced it with an integer. This left me a table with two columns, text\_ID and text. Any time a user was searching for a specific entry in any of these 6 columns, I would search my dim\_text table, which was optimized (indexed) for this sort of query, to return the integer matching the query I wanted... I would then search for all occurrences of that integer across the 6 fields instead. Searching 1 table highly optimized for this type of free-text search and then querying the main table for instances of the integer is far quicker than searching 6 free-text fields directly. I'd also create aggregate tables (reporting tables if you prefer the term) for your common aggregates. There are quite a few options here that your business setup will determine... for example, if each row is an item on a sales invoice and you need to show sales by date... it may be better to aggregate total sales by invoice and save that to a table; then, when a user wants totals by day, an aggregate is run on the aggregate of the invoices to determine the totals by day (so you've 'partially' aggregated the data in advance). Hope that makes sense.
Performance Improve on SQL Large table
[ "", "sql", "sql-server", "" ]
I have two tables that I want to join. This table is "program\_participants" ![program_participants](https://i.stack.imgur.com/0TvZO.png) This table is "logsesion" ![enter image description here](https://i.stack.imgur.com/cVhEH.png) My query ``` SELECT a.`id_participant` FROM `program_participants` a INNER JOIN `logsesion` b on a.`id_participant` != b.`user_id` GROUP BY a.`id_participant` ``` After running the query above I get a.`id_participant` values 1 to 9, meaning all of them from the participants table. But I want all of them except 1 and 2, as those are present in the logsesion table. Can you please tell me what I am doing wrong? I have spent so much time on this and it seems like it should be straightforward. I have also tried the <> symbol as well.
You want a `left join` and then a comparison to filter out the records that match. The ones that remain have no match: ``` SELECT pa.`id_participant` FROM `program_participants` pa LEFT JOIN `logsesion` ls ON pa.`id_participant` = ls.`user_id` WHERE ls.user_id is null; ```
If you really want to join these tables you can try this; it takes the cartesian product of the tables while filtering out the user ids already in the logsesion table. If you want different results, comment below: ``` SELECT pa.`id_participant` FROM `program_participants` pa JOIN `logsesion` ls WHERE pa.`id_participant` NOT IN ( SELECT user_id from `logsesion` ); ```
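The accepted `LEFT JOIN ... IS NULL` anti-join can be verified with the ids implied by the question (participants 1-9, log entries for users 1 and 2); a sqlite3 sketch:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE program_participants (id_participant INT)")
conn.execute("CREATE TABLE logsesion (user_id INT)")
conn.executemany("INSERT INTO program_participants VALUES (?)",
                 [(i,) for i in range(1, 10)])
conn.executemany("INSERT INTO logsesion VALUES (?)", [(1,), (2,)])

# participants with no matching log row survive the LEFT JOIN with a NULL user_id
missing = [r[0] for r in conn.execute("""
    SELECT pa.id_participant
    FROM program_participants pa
    LEFT JOIN logsesion ls ON pa.id_participant = ls.user_id
    WHERE ls.user_id IS NULL
    ORDER BY pa.id_participant
""")]
```

Only ids 3 through 9 come back, which is exactly the "all except 1 and 2" result the asker wanted.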
mysql join not returning the expected ids
[ "", "mysql", "sql", "" ]
I need to get every Sunday in a date range. For example, if my start date is 07/27/2014 and end date is 08/10/2014, then I need a table that has 07/27/2014, 08/03/2014, 08/10/2014. ``` select '2014/7/27' union all select dateadd(day, 7,'2014/7/27') where '2014/7/27' <= '2014/8/10' ``` only gives me 07/27/2014 and 08/03/2014. Please help.
If you're trying to do this as a recursive query, the format is ``` WITH cteSundays as ( select dateadd(day, 0, '2014/7/27') as Sunday union all select dateadd(day, 7,Sunday) FROM cteSundays where Sunday <= dateadd(day, -7, '2014/8/10') ) SELECT * FROM cteSundays ``` but keep in mind that recursive CTEs are limited by the maximum recursion depth allowed. I believe the default in SQL Server 2012 is 100 (adjustable with OPTION (MAXRECURSION n)), so you should check that it can handle your date range. EDIT: Oops, the original went an extra week; you need to subtract 7 days in the end condition.
Something like below will work ``` declare @startdate datetime, @enddate datetime set @startdate='20140727' set @enddate='20140810' select dateadd(week,number,@startdate) from master..spt_values where type='p' and dateadd(week,number,@startdate) <=@enddate ```
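Outside the database the same series is a short loop; a Python sketch using the question's date range (the function name is mine):

```python
from datetime import date, timedelta

def sundays_between(start, end):
    """All Sundays in [start, end], inclusive of both endpoints."""
    # advance to the first Sunday on or after `start` (Sunday has weekday() == 6)
    d = start + timedelta(days=(6 - start.weekday()) % 7)
    out = []
    while d <= end:
        out.append(d)
        d += timedelta(days=7)
    return out

result = sundays_between(date(2014, 7, 27), date(2014, 8, 10))
```

Since 2014-07-27 is itself a Sunday, the range yields exactly the three dates the question expects.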
get every Sunday between a date frame in sql 2012
[ "", "sql", "sql-server", "" ]
I'm trying to create a stored procedure that runs a select query to pull an id (variable) and then runs an update query for that id. Any help would be appreciated. This is what I have: ``` CREATE PROCEDURE dbo.Lead_usp_getLead @LeadId int output AS SELECT TOP 1 Leadid, LeadInitials, LeadFirstName, LeadSurname, LeadHomeTelephoneNumber, LeadWorkTelephoneNumber, LeadCellularNumber, LeadEMailAddress, IsLocked, uploadedDate FROM dbo.Lead WHERE IsLocked = 'False' ORDER BY uploadedDate; UPDATE dbo.Lead SET IsLocked = 'TRUE' WHERE LeadId = @LeadId DECLARE @leadid int EXEC dbo.Lead_usp_getLead @leadId; ```
You can just combine them. No need to do two queries: ``` with toupdate as ( SELECT TOP 1 l.* FROM dbo.Lead l WHERE l.IsLocked = 'False' ORDER BY l.uploadedDate ) Update toupdate SET IsLocked = 'TRUE'; ```
I don't understand your final result, but note that `TOP 1` without an `ORDER BY` picks an arbitrary row, so: ``` UPDATE dbo.Lead SET IsLocked = 'TRUE' WHERE Leadid = (SELECT TOP 1 Leadid FROM dbo.Lead WHERE IsLocked = 'False' ORDER BY uploadedDate) ```
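The claim-one-row pattern is easy to test in sqlite3, which has no `TOP`, so `LIMIT 1` stands in (the sample rows are invented; the `ORDER BY` makes the chosen row deterministic):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE lead (leadid INT, islocked TEXT, uploadeddate TEXT)")
conn.executemany("INSERT INTO lead VALUES (?, ?, ?)",
                 [(1, 'False', '2014-08-01'),
                  (2, 'False', '2014-07-01'),
                  (3, 'True',  '2014-06-01')])

# lock the oldest unlocked lead in a single statement
conn.execute("""
    UPDATE lead SET islocked = 'TRUE'
    WHERE leadid = (SELECT leadid FROM lead
                    WHERE islocked = 'False'
                    ORDER BY uploadeddate LIMIT 1)
""")
locked = conn.execute("SELECT leadid FROM lead WHERE islocked = 'TRUE'").fetchone()[0]
```

Lead 2 has the oldest upload date among the unlocked rows, so it is the one that gets locked.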
How to create a select, and then an update stored procedure in SQL Server 2012
[ "", "sql", "stored-procedures", "sql-server-2012", "" ]
In SQL Server I can get the day name with the following query ``` SELECT DATENAME(dw,'10/24/2013') as theDayName ``` which returns 'Thursday'. Is there an equivalent function in Vertica?
The easiest way without using a custom UDF is using [`TO_CHAR`](http://my.vertica.com/docs/7.1.x/HTML/index.htm#Authoring/SQLReferenceManual/Functions/Formatting/TO_CHAR.htm) formatting: ``` SELECT TO_CHAR(TIMESTAMP '2014-08-21 14:34:06', 'DAY'); ``` This returns the full uppercase day name. `Day` gives the mixed-case day name, and `day` gives the lowercase day name. You can find more template patterns [here](http://my.vertica.com/docs/7.1.x/HTML/index.htm#Authoring/SQLReferenceManual/Functions/Formatting/TemplatePatternsForDateTimeFormatting.htm).
You can try installing a custom UDF ([`weekday_name`](https://github.com/sKwa/vertica/blob/master/UDFSQL/sql_funcs.sql)). Once installed, you can use: ``` SELECT weekday_name(dayofweek(TO_DATE('10/24/2013','MM/DD/YYYY'))) ```
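As a cross-check outside the database, the same lookup in Python:

```python
from datetime import datetime

# parse the question's MM/DD/YYYY date and format its full weekday name
day_name = datetime.strptime('10/24/2013', '%m/%d/%Y').strftime('%A')
```

This confirms the expected result from the question: 2013-10-24 was a Thursday.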
Is there a function in Vertica is equivalent to DATENAME in SQL
[ "", "sql", "vertica", "" ]
[![enter image description here](https://i.stack.imgur.com/FI38J.png)](https://i.stack.imgur.com/FI38J.png) (source: [joescott.me](http://share.joescott.me/29bd.png)) I'm currently grappling with the following; as the title suggests, I'm looking to reverse this table design into a valid CREATE query for the table 'Book'. Am I right in thinking: * The title should be used as a primary key? * A foreign key reference publisherName on **Publisher(name)** * And another authorName on **Author(name)** Which becomes: ``` CREATE TABLE Book ( Title varchar(30), ISBN INT UNIQUE, Cost dec(8,2), authorName varchar(30) REFERENCES Author(name), publisherName varchar(30) REFERENCES Publisher(name) ); ``` After the mentioned alterations: ``` CREATE TABLE Book ( Title varchar(30) NOT NULL, ISBN INT UNIQUE PRIMARY KEY, Cost dec(8,2) NOT NULL, authorName varchar(30) REFERENCES Author(name), publisherName varchar(30) REFERENCES Publisher(name) ); ```
If you are trying to make the best out of this bad design, you should go for your 2nd option: * Table [Publisher] with PK 'Name'. * Table [Author] with PK 'Name'. * Table [Book] with PK 'ISBN' and FK [Publisher].Name and another FK [Author].Name. (PK should be standard UNIQUE and NOT NULL) `CREATE TABLE Book ( Title varchar(30) NOT NULL, ISBN INT PRIMARY KEY, Cost DECIMAL(8,2) NOT NULL, authorName varchar(30) REFERENCES Author(name), publisherName varchar(30) REFERENCES Publisher(name) );` Also with this dataset, your char lengths are fine. But in reality, INT will be too small to store 13 digit numbers for ISBN, and names can go up to 40+ chars easily, especially publishers.
> The title should be used as a primary key? No. A primary key should be unique, and unchanging. There is no way to guarantee that there aren't two books with the same title. I believe ISBN is guaranteed unique and unchanging, although books exist without ISBNs (books that are not yet finished, books published before ISBNs became popular). > A foreign key reference publisherName on Publisher(name) Again - you want the primary key for "publisher" to be unique, and unchanging. There's no guarantee that publisher names are unique, or unchanging. Typically, we create "publisherID" as primary keys, with either a GUID or incrementing integer. > And another authorName on Author(name) As above Also, I wouldn't include "numberOfTitles" in the publisher table - normalization suggests that we need to calculate this value, rather than store it.
Translate from table design to SQL Create query?
[ "", "sql", "oracle", "" ]
I have a query which basically groups the total sum by day ``` SELECT CountDate, SUM(Max_Count) as MaximumCount, SUM(Min_Count) as MinimumCount FROM countTable WHERE countId IN ('48', '34', '65', '63', '31', '64', '86') AND CountDate BETWEEN '2014-08-14' AND '2014-08-16' GROUP BY CountDate ORDER BY CountDate ``` The output result will be ``` Date | Maximum | Minimum ------------|-----------|---------------------- 2014-08-14 | 3018234 | 3014212 2014-08-15 | 3023049 | 3018510 2014-08-16 | 3026813 | 3023244 ``` I want the query to get the difference between the MaximumCount of the last day and the MinimumCount of the first day. The result should be the maximum of the last day (2014-08-16: 3026813) minus the minimum of the first day (2014-08-14: 3014212), i.e. 3026813 - 3014212. Any help on how I could achieve this would be much appreciated.
With reference to Jithin Shaji answer, I've got the result by this query ``` DECLARE @STARTDATE DATE = '2014-08-14' DECLARE @ENDDATE DATE = '2014-08-16' DECLARE @NOOFDAYS INT = datediff(day, @STARTDATE, @ENDDATE) SELECT A.CountDate, A.MaximumCount - B.MinimumCount AS CountSum FROM ( SELECT CountDate, SUM(Max_Count) AS MaximumCount, SUM(Min_Count) AS MinimumCount FROM countTable WHERE countId IN ('48','34','65','63','31','64','86') AND CountDate BETWEEN @STARTDATE AND @ENDDATE GROUP BY CountDate) A LEFT JOIN ( SELECT DATEADD(DAY, @NOOFDAYS, CountDate) AS CountDate, SUM(Max_Count) AS MaximumCount, SUM(Min_Count) AS MinimumCount FROM countTable WHERE countId IN ('48','34','65','63','31','64','86') AND CountDate BETWEEN @STARTDATE AND @ENDDATE GROUP BY DATEADD(DAY, @NOOFDAYS, CountDate)) B ON A.CountDate = B.CountDate ```
``` SELECT (SELECT [Maximum] FROM TABLE WHERE Date = (SELECT MAX(Date) FROM TABLE)) - (SELECT [Minimum] FROM TABLE WHERE Date = (SELECT MIN(Date) FROM TABLE)) ```
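The two-scalar-subquery idea can be run against the already-grouped rows from the question; a sqlite3 sketch (table and column names are mine):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE daily (countdate TEXT, maximumcount INT, minimumcount INT)")
conn.executemany("INSERT INTO daily VALUES (?, ?, ?)",
                 [('2014-08-14', 3018234, 3014212),
                  ('2014-08-15', 3023049, 3018510),
                  ('2014-08-16', 3026813, 3023244)])

# last day's maximum minus first day's minimum, as one scalar result
diff = conn.execute("""
    SELECT (SELECT maximumcount FROM daily
            WHERE countdate = (SELECT MAX(countdate) FROM daily))
         - (SELECT minimumcount FROM daily
            WHERE countdate = (SELECT MIN(countdate) FROM daily))
""").fetchone()[0]
```

With the question's figures this yields 3026813 - 3014212 = 12601.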
SQL statement to get the difference between two dates
[ "", "sql", "sql-server", "group-by", "difference", "" ]
My curiosity always gets the best of me and I've searched online for an explanation to this and came up with nothing (could be because I didn't use the right terms.) Can someone please explain why SQL Server returns a value of zero (0) when the following is executed, instead of an empty string (''). ``` DECLARE @I AS INT SET @I = NULL SELECT ISNULL(@I, '') -- 0 ```
As declared [here](http://msdn.microsoft.com/en-us/library/ms184325.aspx), the second argument to `ISNULL` is the replacement\_value, which "must be of a type that is implicitly convertible to the type of check\_expression." Implicitly converting `''` to `INT` results in 0.
Because @I is declared as an `INT`, the empty string is implicitly CAST as an integer resulting in a ZERO.
SQL Server NULL Integer to Empty String using ISNULL
[ "", "sql", "sql-server", "t-sql", "string", "isnull", "" ]
I want to create a rule to restrict special characters from being entered into a column. I have tried the following, but it didn't work. ``` CREATE RULE rule_spchar AS @make LIKE '%[^[^*|\":<>[]{}`\( );@&$]+$]%' ``` I don't know what I am doing wrong here. Any help would be appreciated.
You can create a `Check Constraint` on this column and only allow `Numbers` and `Alphabets` to be inserted into it; see below: ## Check Constraint to only Allow Numbers & Alphabets ``` ALTER TABLE Table_Name ADD CONSTRAINT ck_No_Special_Characters CHECK (Column_Name NOT LIKE '%[^A-Z0-9]%') ``` ## Check Constraint to only Allow Numbers ``` ALTER TABLE Table_Name ADD CONSTRAINT ck_Only_Numbers CHECK (Column_Name NOT LIKE '%[^0-9]%') ``` ## Check Constraint to only Allow Alphabets ``` ALTER TABLE Table_Name ADD CONSTRAINT ck_Only_Alphabets CHECK (Column_Name NOT LIKE '%[^A-Z]%') ```
It's important to remember Microsoft's plans for the features you're using or intending to use. [`CREATE RULE`](http://msdn.microsoft.com/en-us/library/ms188064(v=sql.105).aspx) is a deprecated feature that won't be around for long. Consider using `CHECK CONSTRAINT` instead. Also, since the character exclusion class doesn't actually operate like a RegEx, trying to exclude brackets `[]` is impossible this way without multiple calls to `LIKE`. So collating to an accent-insensitive collation and using an alphanumeric inclusive filter will be more successful. More work required for non-latin alphabets. M.Ali's `NOT LIKE '%[^A-Z0-9 ]%'` Should serve well.
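The `CHECK`-constraint approach can be exercised in sqlite3, whose `GLOB` operator supports character classes much like the T-SQL `LIKE` ranges used above (table and column names here are made up, and `GLOB` is case-sensitive, hence both letter ranges):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# reject any value containing a character outside A-Z, a-z, 0-9
conn.execute("""
    CREATE TABLE makes (
        make TEXT CHECK (make NOT GLOB '*[^A-Za-z0-9]*')
    )
""")
conn.execute("INSERT INTO makes VALUES ('Toyota86')")   # alphanumeric: allowed
try:
    conn.execute("INSERT INTO makes VALUES ('Toy*ta')") # special char: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The alphanumeric row is accepted and the row containing `*` raises an `IntegrityError`, which is the behavior the deprecated `CREATE RULE` was meant to provide.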
Create rule to restrict special characters in table in sql server
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "" ]
Funny one, this. I've got a table, "addresses", with a list of address details, some with missing fields. I want to identify these rows and replace them with the previous address row; however, these must only be addresses that are NOT the most recent address on the account - they must be previous addresses. Each address has a sequence number (1,2,3,4 etc), so I can easily identify the MAX address and make sure I'm not touching the most recent address on the account. But how do I then scan for what is effectively "MAX - 1", or "one less than max"? Any help would be hugely appreciated.
Try this: ``` SELECT MAX(field) FROM table WHERE field < (SELECT MAX(field) FROM table) ``` By the way: Here is a good article, which describes how to [achieve nth row](http://www.programmerinterview.com/index.php/database-sql/find-nth-highest-salary-sql/).
``` SELECT TOP 1 field FROM( SELECT DISTINCT TOP 2 field FROM table ORDER BY field DESC )tbl ORDER BY field; ```
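The accepted `MAX(field) WHERE field < MAX(field)` pattern, run in sqlite3 (in the real table you would also filter to one account; the sequence values here are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE addresses (seq INT)")
conn.executemany("INSERT INTO addresses VALUES (?)", [(1,), (2,), (3,), (4,)])

# "one less than max": the largest seq strictly below the overall maximum
second_max = conn.execute("""
    SELECT MAX(seq) FROM addresses
    WHERE seq < (SELECT MAX(seq) FROM addresses)
""").fetchone()[0]
```

With sequences 1-4 the overall max is 4, so the query returns 3, the "next max" record the asker is after.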
SQL - How do I select a "Next Max" record
[ "", "mysql", "sql", "sql-server", "sybase", "" ]
I am trying to write a query to select a row from the table (attached screenshot). This is something peculiar, where `*` means any value. I need to select a row where `Amount` is between Start Amount and End Amount and Department is IT. The condition for `Country` and `Sub Department` is a bit tricky: if the selected country is not in the `Country` column, then the query should return the record with `*`, and the same is the case with sub department. ![enter image description here](https://i.stack.imgur.com/HhLK9.png) I tried the approach of selecting rows based on Department and amount like this ``` Select * from table_name where Department = 'IT' and 1000 BETWEEN Start Amount AND End Amount ``` But after this I am not sure how to get the result with the below condition: if the country is not India, then I should get all the `*` results.
I believe you want something like: ``` SELECT * FROM table_name WHERE Department = 'IT' AND 1000 BETWEEN `Start Amount` AND `End Amount` AND country IN ('India','*') AND `Sub Department` IN ('SD2','*') ORDER BY country = 'India' DESC, `Sub Department` = 'SD2' DESC LIMIT 1 ```
Use a union all to assign a group number in order of preference to every permitted combination of country/sub\_department i.e. `(India,SD1) (India,*) (*,*)` then only select the rows with the lowest group number. ``` select t1.* from ( Select t1.* , if(@minGroup > groupNumber, @minGroup := groupNumber, @minGroup) minGroupNumber from ( Select t1.*, 1 groupNumber from table_name t1 where Department = 'IT' and 1000 BETWEEN `Start Amount` AND `End Amount` and country = 'India' and sub_department = 'SD1' union all Select t1.*, 2 groupNumber from table_name t1 where Department = 'IT' and 1000 BETWEEN `Start Amount` AND `End Amount` and country = 'India' and sub_department = '*' union all Select t1.*, 3 groupNumber from table_name t1 where Department = 'IT' and 1000 BETWEEN `Start Amount` AND `End Amount` and country = '*' and sub_department = '*' ) t1 cross join (select @minGroup := 3) t2 ) t1 where groupNumber = @minGroup ```
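The "most specific row wins, `*` is a fallback" logic from the accepted answer can be packaged as a parameterized query; a sqlite3 sketch with invented rows and a made-up `rate` column:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE rules (country TEXT, sub_department TEXT, rate INT)")
conn.executemany("INSERT INTO rules VALUES (?, ?, ?)",
                 [('India', 'SD2', 10), ('India', '*', 20), ('*', '*', 30)])

def best_rate(country, sub_dept):
    """Return the rate of the most specific matching row; '*' rows are fallbacks."""
    row = conn.execute("""
        SELECT rate FROM rules
        WHERE country IN (?, '*') AND sub_department IN (?, '*')
        ORDER BY (country = ?) DESC, (sub_department = ?) DESC
        LIMIT 1
    """, (country, sub_dept, country, sub_dept)).fetchone()
    return row[0]
```

Exact matches sort first because the boolean comparisons evaluate to 1, so `('India','SD2')` hits the specific row, an unknown sub-department falls back to `('India','*')`, and an unknown country falls all the way back to `('*','*')`.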
MySQL Conditioning Query
[ "", "mysql", "sql", "" ]
How can I get a table where some columns are queried so that they behave as rows? **Source table** ``` ID | Name | Funct | Phone1 | Phone2 | Phone3 1 | John | boss | 112233 | 114455 | 117788 2 | Jane | manager | NULL | NULL | 221111 3 | Tony | merchant | 441100 | 442222 | NULL ``` **Wanted result** ``` ID | Name | Funct | Phone | Ord 1 | John | boss | 112233 | 1 1 | John | boss | 114455 | 2 1 | John | boss | 117788 | 3 2 | Jane | manager | 221111 | 3 3 | Tony | merchant | 441100 | 1 3 | Tony | merchant | 442222 | 2 ``` `Ord` is a column holding the order number (`Phone1...Phone3`) of the original column. **EDITED:** OK, `UNION` would be fine when the phone numbers are in separate columns, but what if the source is the following (all numbers in one column)?: ``` ID | Name | Funct | Phones 1 | John | boss | 112233,114455,117788 2 | Jane | manager | 221111 3 | Tony | merchant | 441100,442222 ``` Here I understand that the column `Ord` makes no sense (so ignore it in this case), but how do I split the numbers into separate rows?
The easiest way is to use `union all`: ``` select id, name, funct, phone1 as phone, 1 as ord from source where phone1 is not null union all select id, name, funct, phone2 as phone, 2 as ord from source where phone2 is not null union all select id, name, funct, phone3 as phone, 3 as ord from source where phone3 is not null; ``` You can write this with a `cross apply` as: ``` select so.* from source s cross apply (select s.id, s.name, s.funct, s.phone1 as phone, 1 as ord union all select s.id, s.name, s.funct, s.phone2 as phone, 2 as ord union all select s.id, s.name, s.funct, s.phone3 as phone, 3 as ord ) so where phone is not null; ``` There are also methods using `unpivot` and `cross join`/`case`.
Please see the answer below, ``` Declare @table table (ID int, Name varchar(100),Funct varchar(100),Phones varchar(400)) Insert into @table Values (1,'John','boss','112233,114455,117788'), (2,'Jane','manager','221111' ), (3,'Tony','merchant','441100,442222') Select * from @table ``` Result: ![enter image description here](https://i.stack.imgur.com/Jxbt3.jpg) Code: ``` Declare @tableDest table ([ID] int, [name] varchar(100),[Phones] varchar(400)) Declare @max_len int, @count int = 1 Set @max_len = (Select max(Len(Phones) - len(Replace(Phones,',','')) + 1) From @table) While @count <= @max_len begin Insert into @tableDest Select id,Name, SUBSTRING(Phones,1,charindex(',',Phones)-1) from @table Where charindex(',',Phones) > 0 union Select id,Name,Phones from @table Where charindex(',',Phones) = 0 Delete from @table Where charindex(',',Phones) = 0 Update @table Set Phones = SUBSTRING(Phones,charindex(',',Phones)+1,len(Phones)) Where charindex(',',Phones) > 0 Set @count = @count + 1 End ------------------------------------------ Select * from @tableDest Order By ID ------------------------------------------ ``` Final Result: ![enter image description here](https://i.stack.imgur.com/BPb1V.jpg)
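For the edited, comma-separated variant, the split is trivial outside SQL; a Python sketch producing the question's wanted shape, including the `Ord` column:

```python
def split_phones(rows):
    """Expand (id, name, funct, 'p1,p2,...') rows into one row per phone, with Ord."""
    out = []
    for rec_id, name, funct, phones in rows:
        for ord_no, phone in enumerate(phones.split(','), start=1):
            out.append((rec_id, name, funct, phone, ord_no))
    return out

source = [
    (1, 'John', 'boss', '112233,114455,117788'),
    (2, 'Jane', 'manager', '221111'),
    (3, 'Tony', 'merchant', '441100,442222'),
]
result = split_phones(source)
```

This yields six rows: three for John, one for Jane, two for Tony, each numbered within its record.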
SQL Server columns as rows in result of query
[ "", "sql", "sql-server", "querying", "" ]
So I have a SELECT query, and the result is like this: ``` SELECT .... ORDER BY SCORE, STUDENT_NUMBER STUDENT_NAME STUDENT_NUMBER SCORE ---------------------------------------- Adam 9 69 Bob 20 76 Chris 10 77 Dave 14 77 Steve 5 80 Mike 12 80 ``` But I want to order by STUDENT\_NUMBER, but I want them to be grouped by the same score: ``` STUDENT_NAME STUDENT_NUMBER SCORE ---------------------------------------- Steve 5 80 Mike 12 80 Adam 9 69 Chris 10 77 Dave 14 77 Bob 20 76 ``` So now the data is ordered by STUDENT\_NUMBER, but if there is the same SCORE, they are grouped (like it is shown in the next row). Is it possible to do this with the ORDER BY clause?
It seems that the ordering can also be described as ordering by the minimum student number for each score. You would do this using window functions. Here is an example: ``` select <whatever> from (select t.*, min(student_number) over (partition by score) as minsn from <whatever> t ) t order by minsn, score, student_number asc; ``` You do ask if this can be done with the `order by`. I think the answer is "yes", using a subquery. It would look something like this: ``` select <whatever> from <whatever> t order by (select min(t2.student_number) from <whatever> t2 where t2.score = t.score ), score, student_number; ```
You could order by the minimum student number with that score, then by student number: ``` SELECT STUDENT_NAME, STUDENT_NUMBER, SCORE FROM Scores s ORDER BY (SELECT(MIN(STUDENT_NUMBER) FROM Scores WHERE SCORE = s.SCORE) , STUDENT_NUMBER ```
SQL Order By clause with group
[ "", "sql", "sql-server", "sql-order-by", "" ]
Quick question regarding SQL syntax If I have 3 tables (hereafter referred to as 1, 2, 3) and want to select everything from tables `2, 3` depending on whether an id is present in table 1, how do I do that, i.e. "select nothing" from table 1? As of now I select everything from `table 1`. ```
SELECT * FROM [Content] pc, [test] Dc, [Swg] Swg where pc.Id=Dc.Id and pc.Id=Swg.Id order by pc.Id 
```
If I understand you correctly, you want to select everything from tables test and SWG, even if there is no match in table Content? If so, a RIGHT join would do the trick: ``` SELECT * FROM Content RIGHT JOIN test ON content.ID = test.ID RIGHT JOIN SWG ON Content.ID = SWG.ID ``` If you're looking for everything from test and SWG but only if there is a matching ID in CONTENT, then this should work: ``` SELECT test.* , SWG.* FROM Content JOIN test ON content.ID = test.ID JOIN SWG ON Content.ID = SWG.ID ```
So you just don't want to see the columns from table 1? ``` SELECT Dc.*, Swg.* FROM [Content] pc, [test] Dc, [Swg] Swg where pc.Id=Dc.Id and pc.Id=Swg.Id order by pc.Id ``` That assumed pc = table 1, dc = table 2 and swg = table 3
select nothing from first table
[ "", "sql", "sql-server", "" ]
I want to select all records from a MySQL data base that are older than the year 2013. I've tried this... `SELECT * FROM messages WHERE DATEPART(yyyy,date_sent) < 2013` but it didn't work. The `DATEPART` idea came from here: <http://www.w3schools.com/sql/func_datepart.asp> Any ideas on how I can do this?
Use [`YEAR()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_year) ``` SELECT * FROM messages WHERE YEAR(date_sent) < 2013 ``` Opinion: You should not use or link to [w3schools](http://www.w3fools.com). It's not a reliable source of information and we don't want to encourage its use.
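For anyone testing outside MySQL: SQLite has no `YEAR()` function, so the sketch below uses `strftime('%Y', ...)` as the equivalent to check the same filter (the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER, date_sent TEXT)")
conn.executemany("INSERT INTO messages VALUES (?, ?)",
                 [(1, "2010-07-30 22:58:59"),
                  (2, "2013-02-01 10:00:00"),
                  (3, "2012-12-31 23:59:59")])

# strftime('%Y', ...) returns the year as text, so cast before comparing
rows = conn.execute(
    "SELECT id FROM messages "
    "WHERE CAST(strftime('%Y', date_sent) AS INTEGER) < 2013 "
    "ORDER BY id").fetchall()
print(rows)
```

Rows 1 and 3 predate 2013 and are returned; row 2 is filtered out.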
I'm assuming `date_sent` is a date field, you can use [`YEAR`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_year) ``` SELECT * FROM messages WHERE YEAR(date_sent) < 2013 ```
MySQL -- Select year from a format like "2010-07-30 22:58:59"
[ "", "mysql", "sql", "" ]
I have tables A, B and C and I want to get matching values for from all tables (tables have different columns). ``` Table A (primary key = id) +------+-------+ | id | name | +------+-------+ | 1 | Ruby | | 2 | Java | | 3 | JRuby | +------+-------+ Table B (pid is reference to A(id) - No primary key) +------+------------+ | pid | name | +------+------------+ | 1 | Table B | +------+------------+ Table C (primary key = id, pid is reference to A(id)) +------+------+------------+ | id | pid | name | +------+------+------------+ | 1 | 2 | Table C | +------+------+------------+ ``` So my below query returned nothing. Whats wrong here? Is it treated as AND when multiple inner joins present? ``` Select A.* from A inner join B ON a.id = b.pid inner join C ON a.id = c.pid; ```
When you inner-join like this, a single row from `A` needs to exist such that `a.id = b.pid` AND `a.id = c.pid` are true at the same time. If you examine the rows in your examples, you would find that there is a row in `A` for each individual condition, but no rows satisfy both conditions at once. That is why you get nothing back: the row that satisfies `a.id = b.pid` does not satisfy `a.id = c.pid`, and vice versa. You could use an outer join to produce two results: ``` select * from A left outer join B ON a.id = b.pid left outer join C ON a.id = c.pid; a.id a.name b.pid b.name c.id c.pid c.name 1 | Ruby | 1 | Table B | NULL | NULL | NULL 2 | Java | NULL | NULL | 1 | 2 | Table C ```
As you first join ``` 1 | Ruby | Table B ``` and then try to join `Table C`, there is no match for pid `2` in the aforementioned result, the result is therefore empty.
How do I inner join multiple tables?
[ "", "mysql", "sql", "" ]
Suppose I have following table RIGHTS with data: ``` ID NAME OWNER_ID ACL_ID ACL_NAME -------------------------------------------------- 100 Entity_1 1 1 g1 100 Entity_1 2 2 g2 100 Entity_1 3 3 g3 200 Entity_2 1 1 g1 200 Entity_2 2 2 g2 300 Entity_3 1 1 g1 300 Entity_3 2 2 g2 300 Entity_3 4 NULL NULL 400 Entity_4 1 1 g1 400 Entity_4 2 2 g2 400 Entity_4 3 3 g3 400 Entity_4 4 NULL NULL 500 Entity_5 4 NULL NULL 500 Entity_5 5 NULL NULL 500 Entity_5 6 NULL NULL 600 Entity_6 NULL NULL NULL ``` How to select all (ID, NAME) records for which there is no even single ACL\_ID=NULL row except those rows with OWNER\_ID=NULL. In this particular example I want to select 3 rows: * (100, Entity\_1) - because all 3 rows with ACL\_ID != NULL (1, 2, 3) * (200, Entity\_2) - because all 2 rows with ACL\_ID != NULL (1, 2) * (600, Entity\_6) - because OWNER\_ID=NULL For now I use SQL Server, but I want it works on Oracle as well if it possible. **UPDATE** I apologize I had to mention that this table data is just a result of a query with joins, so it has to be taken into account: ``` SELECT DISTINCT EMPLOYEE.ID ,EMPLOYEE.NAME , OWNERS.OWNER_ID as OWNER_ID , GROUPS.GROUP_ID as ACL_ID , GROUPS.NAME as ACL_NAME from EMPLOYEE inner join ENTITIES on ENTITIES.ENTITY_ID = ID left outer join OWNERS on (OWNERS.ENTITY_ID = ID and OWNERS.OWNER_ID != 123) left outer join GROUPS on OWNERS.OWNER_ID = GROUPS.GROUP_ID where ENTITIES.STATUS != 'D' ```
Try this: ``` select s.id, s.name from (select id,name,max(coalesce(owner_id,-1)) owner_id, min(coalesce(acl_id,-1)) acl_id from yourtable group by id,name) as s where s.owner_id = -1 or (s.owner_id > -1 and s.acl_id > -1) ``` We use `COALESCE` to default null values to -1 (assuming the columns are integers), and then get the minimum values of `owner_id` and `acl_id` per unique `id-name` combination. If the maximum value of `owner_id` is -1, then the owner column is null. Likewise, if minimum value of `acl_id` is -1, then at least one null valued row exists. Based on these 2 conditions, we filter the list to get the required `id-name` pairs. Note that in this case, I simply chose -1 as the default value because I assume you don't use negative numbers as IDs. If you do, you can choose a suitable, "impossible" value as the default for the `COALESCE` function. This should work on SQL Server and Oracle.
Here's my solution on Oracle. ``` SELECT DISTINCT EMPLOYEE.ID ,EMPLOYEE.NAME , OWNERS.OWNER_ID as OWNER_ID , GROUPS.GROUP_ID as ACL_ID , GROUPS.NAME as ACL_NAME from EMPLOYEE inner join ENTITIES on ENTITIES.ENTITY_ID = ID left outer join OWNERS on (OWNERS.ENTITY_ID = ID and OWNERS.OWNER_ID != 123) left outer join GROUPS on OWNERS.OWNER_ID = GROUPS.GROUP_ID where ENTITIES.STATUS != 'D' and EMPLOYEE.ID not in (select id from EMPLOYEE where GROUPS.GROUP_ID is null and OWNERS.OWNER_ID is not null); ``` You simply need to append the inner subquery from my earlier answer and you will get your solution.
SQL query to select rows with all ACL that user has
[ "", "sql", "sql-server", "oracle", "" ]
I have a table with a few million rows. Currently, I'm working my way through them 10,000 at a time by doing this: ``` for (my $ival = 0; $ival < $c_count; $ival += 10000) { my %record; my $qry = $dbh->prepare ( "select * from big_table where address not like '%-XX%' limit $ival, 10000"); $qry->execute(); $qry->bind_columns( \(@record{ @{$qry->{NAME_lc} } } ) ); while (my $record = $qry->fetch){ this_is_where_the_magic_happens($record) } } ``` I did some benchmarking and I found that the prepare/execute part, while initially fast, slows down considerably after multiple 10,000 row batch. Is this a boneheaded way to write this? I just know if I try to select everything in one go, this query takes forever. Here's some snippets from the log: ``` (Thu Aug 21 12:51:59 2014) Processing records 0 to 10000 SQL Select => 1 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU) (Thu Aug 21 12:52:13 2014) Processing records 10000 to 20000 SQL Select => 1 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) (Thu Aug 21 12:52:25 2014) Processing records 20000 to 30000 SQL Select => 2 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) (Thu Aug 21 12:52:40 2014) Processing records 30000 to 40000 SQL Select => 5 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) (Thu Aug 21 12:52:57 2014) Processing records 40000 to 50000 SQL Select => 13 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU) ... (Thu Aug 21 14:33:19 2014) Processing records 650000 to 660000 SQL Select => 134 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU) (Thu Aug 21 14:35:50 2014) Processing records 660000 to 670000 SQL Select => 138 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) (Thu Aug 21 14:38:27 2014) Processing records 670000 to 680000 SQL Select => 137 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) (Thu Aug 21 14:41:00 2014) Processing records 680000 to 690000 SQL Select => 134 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) ``` Would it be faster to do some other way? 
Should I remove the 'where' clause and just throw out results I don't want in the loop? Thanks for the help.
The problem is you're running multiple queries. Your dataset may also change between queries - you may miss rows or see duplicate rows since you're running multiple queries; inserts or deletions on the items you're searching will affect this. The reason the first ones go fast is because the DB is truncating the query when it hits 10,000 items. It's not getting all the rows matching your query, and thus running faster. It's not 'getting slower', just doing more of the work, over and over and over - getting the first 10,000 rows, getting the first 20,000 rows, the first 30,000 rows. You've written a Schlemiel the painter's database query. (<http://www.joelonsoftware.com/articles/fog0000000319.html>) You should run the query without a limit and iterate over the resultset. This will ensure data integrity. You may also want to look into using where clauses that can take advantage of database indices to get a faster response to your query.
Others have made useful suggestions. I'll just add a few thoughts that come to mind... * Firstly, see my old but still very relevant [Advanced DBI Tutorial](http://www.cpan.org/modules/by-module/Apache/TIMB/DBI_AdvancedTalk_200708.pdf). Specifically page 80 which addresses paging through a large result set, which is similar to your situation. It also covers profiling and `fetchrow_hashref` vs `bind_columns`. * Consider creating a temporary table with an auto increment field, loading it with the data you want via an `INSERT ... SELECT ...` statement, then building/enabling an index on the auto increment field (which will be faster than loading the data with the index already enabled), then select ranges of rows from that temporary table using the key value. That will be *very* fast for fetching but there's an up-front cost to build the temporary table. * Consider enabling [mysql\_use\_result](http://dev.mysql.com/doc/refman/5.0/en/mysql-use-result.html) in [DBD::mysql](https://metacpan.org/pod/DBD::mysql). Rather than load all the rows into memory within the driver, the driver will start to return rows to the application as they stream in from the server. This reduces latency and memory use but comes at the cost of holding a lock on the table. * You could combine using mysql\_use\_result with my previous suggestion, but it might be simpler to combine it with using `SELECT SQL_BUFFER_RESULT ...`. Both would avoid the lock problem (which might not be a problem for you anyway). Per [the docs](http://dev.mysql.com/doc/refman/5.6/en/select.html), SQL\_BUFFER\_RESULT "forces the result to be put into a temporary table". (Trivia: I think I suggested SQL\_BUFFER\_RESULT to Monty many moons ago.)
What's the most efficient way to work through a large result in Perl DBI?
[ "", "mysql", "sql", "perl", "dbi", "" ]
I have a bunch of stored procedures (more than 200) in my database. I have to change the schema of those now. They have the schema `ABC`. I have to change it to `XYZ`. I know that I can use this query ``` ALTER SCHEMA XYZ TRANSFER ABC.STOREDPROCEDURE ``` to achieve this. But the number of stored procedures is huge. I cannot do it one by one. Is there any other way to do this task? Can I use while loop for it? Thank you everyone.
I would use Sql Server to generate the code for me. ``` SELECT 'ALTER SCHEMA XYZ TRANSFER ABC.' + name FROM sys.Procedures ``` Then copy and paste the results into a Sql Window and hit the Execute button...
Run this script to generate the statements you need ``` SELECT 'ALTER SCHEMA NewSchemaName TRANSFER ' + SysSchemas.Name + '.' + DbObjects.Name + ';' FROM sys.Objects DbObjects INNER JOIN sys.Schemas SysSchemas ON DbObjects.schema_id = SysSchemas.schema_id WHERE SysSchemas.Name = 'OldSchemaName' AND (DbObjects.Type IN ('P')) ```
Alter Schema of all stored procedures
[ "", "sql", "sql-server", "stored-procedures", "" ]
When creating a table in SQL SERVER, I want to restrict that the length of an INTEGER column can only be equal 10. eg: the PhoneNumber is an INTEGER, and it must be a 10 digit number. How can I do this when I creating a table?
If you want to limit the range of a numeric column you can use a check constraint. Note that a plain `integer` only goes up to 2,147,483,647, so a 10-digit range needs `bigint`: ```
create table some_table
(
  phone_number bigint not null check (phone_number between 0 and 9999999999)
);
``` But as R.T. and huMpty duMpty have pointed out: a phone number is usually better stored in a `varchar` column.
If I understand correctly, you want to make sure the entries are exactly 10 digits in length. If you insist on an Integer Data Type, I would recommend Bigint because of the range limitation of Int(-2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647)) ``` CREATE TABLE dbo.Table_Name( Phone_Number BIGINT CONSTRAINT TenDigits CHECK (Phone_Number BETWEEN 1000000000 and 9999999999) ); ``` Another option would be to have a Varchar Field of length 10, then you should check only numbers are being entered and the length is not less than 10.
How to restrict the length of INTEGER when creating a table in SQL Server?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "" ]
I have a large table with a column containing phone numbers that are formatted inconsistently. i.e 01234567890 or possibly 01234 567 890. I'm looking for a select statement that will return the record as long as the user search contains the numbers in the correct order regardless of spacing of the record in the database. So if the user search using 0123456789 it would return the record containing 01234 567 890 or vice versa. Currently using like but not working as I'd like. Any ideas? ``` SELECT * FROM contacts WHERE telephone LIKE '%01234567890% ```
Replace() should work for you. Note that the literal should be quoted — an unquoted `01234567890` is treated as a number and the leading zero is lost: ```
WHERE REPLACE(telephone,' ','') = '01234567890'
```
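A minimal sketch of the same idea in Python against SQLite, whose `REPLACE()` behaves the same way (the sample contacts are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, telephone TEXT)")
conn.executemany("INSERT INTO contacts VALUES (?, ?)",
                 [("Alice", "01234567890"),
                  ("Bob", "01234 567 890"),
                  ("Carol", "09999 999 999")])

# Strip spaces before comparing, so both stored formats match the search
rows = conn.execute(
    "SELECT name FROM contacts "
    "WHERE REPLACE(telephone, ' ', '') = '01234567890' "
    "ORDER BY name").fetchall()
```

Both spacings of the number match; the unrelated number does not.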
I would suggest removing spaces and other characters before doing the comparison: ``` SELECT * FROM contacts WHERE replace(replace(replace(replace(telephone, ' ', ''), '(', ''), ')', ''), '-') LIKE '%01234567890%'; ``` This gets rid of spaces, parentheses and hyphens. You could also do this by fixing the pattern: ``` where telephone like '%0%1%2%3%4%5%6%7%8%9%0%' ``` The wildcard `%` can match zero or more characters, so it would find the numbers in the right order.
Sql Select when string might contains spaces
[ "", "sql", "sql-server", "" ]
I have a business requirement where I need to alter the SSRS report based on some additional filtering. I have a field name as ProductShortName where they don't want records where Product name is 'BLOC', 'Small Business Visa', Product name starting with 'WOW' and Product name ending with 'Review'. This is the original where condition: ``` WHERE ( A.AppDetailSavePointID = 0) AND (B.QueueID = 1) AND (A.DecisionStatusName <> N'Cancelled') AND (A.DecisionStatusName <> N'Withdrawn') OR (A.AppDetailSavePointID = 0) AND ((B.QueueID = - 25) OR (B.QueueID = - 80)) AND (A.DecisionStatusName <> N'Cancelled') AND (A.DecisionStatusName <> N'Withdrawn') OR (A.AppDetailSavePointID = 0) AND (A.DecisionStatusName <> N'Cancelled') AND (A.DecisionStatusName <> N'Withdrawn') AND (LEFT(C.QueueName, 2) = 'LC') ``` I added additional filtering to meet the criteria: ``` WHERE (A.AppDetailSavePointID = 0) AND ((A.ProductShortName <> 'BLOC') AND (A.ProductShortName <> 'Small Business Visa') AND NOT (A.ProductShortName LIKE 'WOW%') AND NOT (A.ProductShortName LIKE '%Review')) AND (B.QueueID = 1) AND (A.DecisionStatusName <> N'Cancelled') AND (A.DecisionStatusName <> N'Withdrawn') OR (A.AppDetailSavePointID = 0) AND ((B.QueueID = - 25) OR (B.QueueID = - 80)) AND (A.DecisionStatusName <> N'Cancelled') AND (A.DecisionStatusName <> N'Withdrawn') AND ((A.ProductShortName <> 'BLOC') AND (A.ProductShortName <> 'Small Business Visa') AND NOT (A.ProductShortName LIKE 'WOW%') AND NOT (A.ProductShortName LIKE '%Review')) AND (A.AppDetailSavePointID = 0) AND (A.DecisionStatusName <> N'Cancelled') AND (A.DecisionStatusName <> N'Withdrawn') AND (LEFT(C.QueueName, 2) = 'LC') AND ((A.ProductShortName <> 'BLOC') AND (A.ProductShortName <> 'Small Business Visa') AND NOT (A.ProductShortName LIKE 'WOW%') AND NOT (A.ProductShortName LIKE '%Review')) ``` While this removes the products but it additionally removes few more products. I don't understand how? Can anyone please suggest an appropriate where condition?
It may be easier to read if you factor out the predicates that are common to every branch: `(X and Y) or (X and Z) == X and (Y or Z)` This yields (note the `'BLOC'` exclusion from the requirement is included too): ```
 WHERE (A.ProductShortName <> 'BLOC')
   AND (A.ProductShortName NOT LIKE 'WOW%')
   AND (A.ProductShortName NOT LIKE '%Review')
   AND (A.ProductShortName <> 'Small Business Visa')
   AND (A.DecisionStatusName <> N'Cancelled')
   AND (A.DecisionStatusName <> N'Withdrawn')
   AND (A.AppDetailSavePointID = 0)
   AND ( QueueID = 1
      OR QueueID = -25
      OR QueueID = -80
      OR LEFT(C.QueueName, 2) = 'LC'
       )
```
You should avoid mixing AND and OR conditions without bracketing them properly. If you are mixing ANDs and ORs then put brackets to resolve the confusion. If you don't do that, the results will be unexpected. For example, in your query, if AppDetailSavePointID = 0 then all other conditions become irrelevant. I'm sure this is not what you want. ```
WHERE (AppDetailSavePointID = 0) AND (QueueID = 1) AND (DecisionStatusName <> N'Cancelled') AND (DecisionStatusName <> N'Withdrawn') OR (AppDetailSavePointID = 0) AND ((QueueID = - 25) OR (QueueID = - 80)) AND (DecisionStatusName <> N'Cancelled') AND (DecisionStatusName <> N'Withdrawn') OR (AppDetailSavePointID = 0) AND (DecisionStatusName <> N'Cancelled') AND (DecisionStatusName <> N'Withdrawn') AND (LEFT(QueueName, 2) = 'LC') 
``` **EDIT** You should take either AND or OR as the major part, but not a mixture of AND and OR (without brackets). You can use additional brackets to specify the other. e.g. Assuming a,b,c,d,e,f... are conditions of type `Field op value` (e.g. AppDetailSavePointID = 0, DecisionStatusName <> N'Cancelled' etc.). You should not do this: ```
-- don't do this. WHERE a AND b OR c AND d OR e AND f OR g 
``` You can do either of these two things: ```
-- this is ok. WHERE a AND b AND c AND (d OR e) AND (f OR g) 
``` Or, ```
-- this is ok. WHERE a OR b OR c OR (d AND e) OR (f AND g) 
```
<> not equal to function doesn't work appropriately to filter records in SQL Server
[ "", "sql", "sql-server", "reporting-services", "" ]
I have table with these columns: id, status, text. my sql query: `SELECT * FROM table ORDER BY id AND status DESC` I need to get all rows from table and sort it by id and by status descending. result is: `id | status 1 | 1 2 | 0 3 | 0` Result should be like this: `id | status 1 | 1 3 | 0 2 | 0` Thanks in advance.
You do not use `and` (usually) in the `order by`. To get the results that you want, you want to order by `status` first, and then the `id`: ``` SELECT * FROM table ORDER BY status DESC, id DESC; ``` Note that `desc` is needed twice, because it applies to only one sort key.
You have to use DESC for both columns you are trying to sort by: ```
SELECT * FROM table ORDER BY id DESC, status DESC 
```
How to get latest records from database sorted by id and status
[ "", "mysql", "sql", "" ]
Have 3 tables **Table A** ``` id | value ----------- | ``` **Table B** ``` id|value|A_id(fk to A) -------------- | | ``` **Table C** ``` id|value|B_id(FK to B)|timestamp -------------------------------- | | | ``` I have written a query to find out all latest distinct C values using the following query ``` select A.id, B.id, C.timestamp, C.value from A,B,C where A.id = B.A_id and B.id = C.B_id where C.value in (select distinct value from C c2 where c2.value = c.value and c2.value is not null) and c.timestamp = (select max(timestamp) from C c3 where c3.value = c.value); ``` except IDs none of the other columns are having indexes. Right now this query takes about 2 hrs or more to run, because the number of distinct C values are 221000 records. Is there an efficient way to do this?
``` SELECT distinct A.id, B.id, c.timestamp, c.value FROM ( SELECT c.value, MAX(c.timestamp) AS max_timestamp FROM c WHERE NOT c.value IS NULL GROUP BY c.value) c1 INNER JOIN c ON c1.value = c.value AND c1.max_timestamp = c.timestamp inner join b ON B.id = C.B_id inner join a ON A.id = B.A_id ```
A sub-query inside a query will be run for each row of the main query. With large data in the main query, that is a performance anti-pattern (you have 2 sub-queries). You need a group-wise maximum, which can be achieved with a self left join. ```
SELECT A.id a_id, B.id b_id, C1.timestamp, C1.value 
From C C1 
INNER JOIN B on B.id = C1.b_id 
INNER JOIN A on A.id = B.A_id 
LEFT JOIN C C2 on C1.value = C2.value and C1.timestamp < C2.timestamp 
WHERE C1.value IS NOT NULL and C2.id IS NULL 
```
Need an efficient query in the following case
[ "", "sql", "oracle", "oracle10g", "" ]
I have the following problem: I have a table that looks something like this: ```
ArticleID|Group|Price 
1|a|10 
2|b|2 
3|a|3 
4|b|5 
5|c|5 
6|f|7 
7|c|8 
8|x|3 
``` Now I'm trying to get a result like this: ```
PriceA|PriceRest 
13|30 
``` Meaning I want to sum all prices from group a in one column and the sum of everything else in another column. Something like this doesn't work. ```
select sum(Price) as PriceGroupA sum(Price) as PriceRest from Table where Group='a' Group<>'a' 
``` Is there a way to achieve this functionality?
``` SELECT sum(case when [Group] = 'a' then Price else 0 end) as PriceA, sum(case when [Group] <> 'a' then Price else 0 end) as PriceRest from Table ```
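The conditional aggregation is easy to verify in SQLite from Python; the `Group` column is renamed `grp` below because `GROUP` is a reserved word (SQL Server sidesteps this with `[Group]`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (article_id INTEGER, grp TEXT, price REAL)")
conn.executemany("INSERT INTO articles VALUES (?, ?, ?)",
                 [(1, "a", 10), (2, "b", 2), (3, "a", 3), (4, "b", 5),
                  (5, "c", 5), (6, "f", 7), (7, "c", 8), (8, "x", 3)])

# Each SUM only counts rows where its CASE branch applies
price_a, price_rest = conn.execute("""
    SELECT SUM(CASE WHEN grp = 'a' THEN price ELSE 0 END),
           SUM(CASE WHEN grp <> 'a' THEN price ELSE 0 END)
    FROM articles""").fetchone()
```

With the question's data this yields 13 for group a and 30 for the rest.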
Please try: ``` select sum(case when [Group]='A' then Price end) PriceA, sum(case when [Group]<>'A' then Price end) PriceRest from Table ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!6/94c7c/15)
Specific where for multiple selects
[ "", "sql", "select", "" ]
I have a SQL Server table with an XML column, and it contains data something like this: ``` <Query> <QueryGroup> <QueryRule> <Attribute>Integration</Attribute> <RuleOperator>8</RuleOperator> <Value /> <Grouping>OrOperator</Grouping> </QueryRule> <QueryRule> <Attribute>Integration</Attribute> <RuleOperator>5</RuleOperator> <Value>None</Value> <Grouping>AndOperator</Grouping> </QueryRule> </QueryGroup> </Query> ``` Each QueryRule will only have one Attribute, but each QueryGroup can have many QueryRules. Each Query can also have many QueryGroups. I need to be able to pull all records that have one or more `QueryRule` with a certain attribute and value. ``` SELECT * FROM QueryBuilderQueries WHERE [the xml contains any value=X where the attribute is either Y or Z] ``` I've worked out how to check a specific QueryRule, but not "any". ``` SELECT Query FROM QueryBuilderQueries WHERE Query.value('(/Query/QueryGroup/QueryRule/Value)[1]', 'varchar(max)') like 'UserToFind' AND Query.value('(/Query/QueryGroup/QueryRule/Attribute)[1]', 'varchar(max)') in ('FirstName', 'LastName') ```
You can use two `exist()`. One to check the value and one to check Attribute. ``` select Q.Query from dbo.QueryBuilderQueries as Q where Q.Query.exist('/Query/QueryGroup/QueryRule/Value/text()[. = "UserToFind"]') = 1 and Q.Query.exist('/Query/QueryGroup/QueryRule/Attribute/text()[. = ("FirstName", "LastName")]') = 1 ``` If you really want the `like` equivalence when you search for a Value you can use `contains()`. ``` select Q.Query from dbo.QueryBuilderQueries as Q where Q.Query.exist('/Query/QueryGroup/QueryRule/Value/text()[contains(., "UserToFind")]') = 1 and Q.Query.exist('/Query/QueryGroup/QueryRule/Attribute/text()[. = ("FirstName", "LastName")]') = 1 ```
It's a pity that the SQL Server (I'm using 2008) does not support some XQuery functions related to string such as `fn:matches`, ... If it supported such functions, we could query right inside XQuery expression to determine if there is ***any***. However we still have another approach. That is by turning all the possible values into the corresponding SQL row to use the `WHERE` and `LIKE` features of SQL for searching/filtering. After some experiementing with the `nodes()` method (used on an XML data), I think it's the best choice to go: ``` select * from QueryBuilderQueries where exists( select * from Query.nodes('//QueryRule') as v(x) where LOWER(v.x.value('(Attribute)[1]','varchar(max)')) in ('firstname','lastname') and v.x.value('(Value)[1]','varchar(max)') like 'UserToFind') ```
Querying XML colum for values
[ "", "sql", "sql-server", "" ]
I have the following 2 tables: tblEventCustomers ``` EventCustomerId EventId CustomerId InvoiceLineId 1002 100 5 21 1003 100 6 21 1004 100 7 22 1005 101 9 23 ``` tblInvoiceLines ``` InvoiceLineId Quantity Price 21 2 25 22 1 12.5 23 1 34 ``` I want to return the number of customers on an event and the total of the invoice lines for that event: ``` EventId No. Delegates Total 100 3 37.5 101 1 34 ``` I have tried the following function: ``` CREATE FUNCTION dbo.udfInvoiceLineTotal ( @eventId AS INT ) RETURNS MONEY BEGIN DECLARE @returnAmount AS MONEY; SET @returnAmount = ( select sum(Price) from tblInvoiceLines as IL where il.InvoiceLineId in ( SELECT InvoiceLineId FROM tblEventCustomers where EventId = @eventId ) ) RETURN @returnAmount; END ``` And used it as below: ``` select ed.EventId, COUNT(1), dbo.udfInvoiceLineTotal(ed.EventId) from tblEventCustomers as ED inner join tblInvoiceLines as IL on IL.InvoiceLineId = ED.InvoiceLineId group by ed.EventId ``` This returns me the results I want, however I wanted to see if I was missing any way to do it in a more ANSI SQL way, i.e. using Sub-Queries rather than a database function? The approach below returns me duplicates in the sum: ``` select ed.EventId, SUM(il.Price), COUNT(1) from tblEventCustomers as ed inner join tblInvoiceLines as il on ed.InvoiceLineId = il.InvoiceLineId group by (ed.EventId) ``` ## Edit Apologies for all those who answered, I realise I had a typo in my original data set which meant some of the suggested approaches worked for the dataset I presented at first but don't work for the full data set.
If you modify your last approach like this, it should give you the desired results: ``` select ed.EventId, SUM(il.Price/il.quantity) from tblEventCustomers as ed inner join tblInvoiceLines as il on ed.InvoiceLineId = il.InvoiceLineId group by (ed.EventId) having COUNT(ed.EventCustomerId) > 5 ``` And to include the number of delegates, just add `SUM(il.quantity)`
Would this work for you? It gets the results you are looking for ``` SELECT c.EventId, SUM(Quantity), SUM(i.price) FROM tblInvoiceLines i JOIN (SELECT DISTINCT EventId, CustomerId, InvoiceLineId FROM tblEventCustomers) c ON i.InvoiceLineId = c.InvoiceLineId GROUP BY c.EventId ```
Return Sum from another table in join with duplicates
[ "", "sql", "sql-server", "ansi-sql", "" ]
Not sure if the title explains this scenario in full, so I will be as descriptive as I can. I'm using a SQL Server database and have the following 4 tables: **CUSTOMERS**: ``` CustomerID CustomerName -------------------------- 100001 Mr J Bloggs 100002 Mr J Smith ``` **POLICIES**: ``` PolicyID PolicyTypeID CustomerID ----------------------------------- 100001 100001 100001 100002 100002 100001 100003 100003 100001 100004 100001 100002 100005 100002 100002 ``` **POLICYTYPES**: ``` PolicyTypeID PolTypeName ProviderID ----------------------------------------- 100001 ISA 100001 100002 Pension 100001 100003 ISA 100002 ``` **PROVIDERS**: ``` ProviderID ProviderName -------------------------- 100001 ABC Ltd 100002 Bloggs Plc ``` This is obviously a stripped down version and the actual database contains a lot more records. What I am looking to do is return a list of clients who ONLY have products from a certain provider. So in the example above, if I want to return customers who have policies with ABC Ltd with this SQL: ``` SELECT C.CustomerName, P.PolicyID, PT.PolTypeName, Providers.ProviderName FROM Customers C LEFT JOIN Policies P ON C.CustomerID = P.CustomerID LEFT JOIN PolicyTypes PT ON P.PolicyTypeID = PT.PolicyTypeID LEFT JOIN Providers PR ON PR.ProviderID = PT.ProviderID WHERE PR.ProviderID = 100001 ``` It will currently return both customers in the Customers table. But the customer Mr J Bloggs actually holds policies provided by Bloggs Plc as well. I don't want this. I only want to return the customers who hold ONLY policies from ABC Ltd, so the SQL I need should only return Mr J Smith. Hope I've been clear, if not please let me know. Many thanks in advance Steve
Dirty but readable (note the select list uses the `PR` alias, since `Providers` is aliased): ```
SELECT C.CustomerName, P.PolicyID, PT.PolTypeName, PR.ProviderName 
FROM Customers C 
LEFT JOIN Policies P ON C.CustomerID = P.CustomerID 
LEFT JOIN PolicyTypes PT ON P.PolicyTypeID = PT.PolicyTypeID 
LEFT JOIN Providers PR ON PR.ProviderID = PT.ProviderID 
WHERE PR.ProviderID = 100001 
AND C.CustomerName NOT IN 
( 
SELECT C.CustomerName 
FROM Customers C 
LEFT JOIN Policies P ON C.CustomerID = P.CustomerID 
LEFT JOIN PolicyTypes PT ON P.PolicyTypeID = PT.PolicyTypeID 
LEFT JOIN Providers PR ON PR.ProviderID = PT.ProviderID 
WHERE PR.ProviderID <> 100001 
) 
```
try this one... ```
SELECT C.CustomerName, P.PolicyID, PT.PolTypeName, PR.ProviderName 
from Customers C 
inner join POLICIES P ON C.CustomerID = P.CustomerID 
inner join POLICYTYPES PT ON P.PolicyTypeID = PT.PolicyTypeID 
inner join Providers PR ON PR.ProviderID = PT.ProviderID 
where PR.ProviderID = 100001 and c.CustomerID not in 
(SELECT C.CustomerID 
from Customers C 
inner join POLICIES P ON C.CustomerID = P.CustomerID 
inner join POLICYTYPES PT ON P.PolicyTypeID = PT.PolicyTypeID 
inner join Providers PR ON PR.ProviderID = PT.ProviderID 
where PR.ProviderID <> 100001) 
```
Select records that are only associated with a record in another table
[ "sql", "sql-server", "unique" ]
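The accepted approach above (filter to the provider, then exclude anyone who also matches a different provider) can be exercised end-to-end. Below is a hedged, runnable sketch using Python's bundled sqlite3 rather than SQL Server, loaded with the question's sample data; it swaps the `NOT IN` subquery for an equivalent `EXISTS` / `NOT EXISTS` pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers  (CustomerID INTEGER, CustomerName TEXT);
CREATE TABLE Policies   (PolicyID INTEGER, PolicyTypeID INTEGER, CustomerID INTEGER);
CREATE TABLE PolicyTypes(PolicyTypeID INTEGER, PolTypeName TEXT, ProviderID INTEGER);
INSERT INTO Customers VALUES (100001, 'Mr J Bloggs'), (100002, 'Mr J Smith');
INSERT INTO Policies VALUES
  (100001, 100001, 100001), (100002, 100002, 100001), (100003, 100003, 100001),
  (100004, 100001, 100002), (100005, 100002, 100002);
INSERT INTO PolicyTypes VALUES
  (100001, 'ISA', 100001), (100002, 'Pension', 100001), (100003, 'ISA', 100002);
""")

# Customers with at least one ABC Ltd (100001) policy and no policy
# from any other provider.
only_abc = conn.execute("""
SELECT C.CustomerName
FROM Customers C
WHERE EXISTS (SELECT 1 FROM Policies P
              JOIN PolicyTypes PT ON PT.PolicyTypeID = P.PolicyTypeID
              WHERE P.CustomerID = C.CustomerID AND PT.ProviderID = 100001)
  AND NOT EXISTS (SELECT 1 FROM Policies P
                  JOIN PolicyTypes PT ON PT.PolicyTypeID = P.PolicyTypeID
                  WHERE P.CustomerID = C.CustomerID AND PT.ProviderID <> 100001)
""").fetchall()
print(only_abc)  # [('Mr J Smith',)]
```

Mr J Bloggs drops out because his third policy resolves to Bloggs Plc, which is exactly the behaviour the question asks for.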
I have this table in SQL Server 2012: ``` Id INT DomainName NVARCHAR(150) ``` And the table has these DomainName values 1. google.com 2. microsoft.com 3. othersite.com And this value: ``` mail.othersite.com ``` and I need to select the rows where the string ends with the column value; for this value I need to get row no. 3, othersite.com. It's something like this: ``` DomainName Like '%value' ``` but in reverse ... ``` 'value' Like %DomainName ```
You can use such query: ``` SELECT * FROM TABLE1 WHERE 'value' LIKE '%' + DomainName ```
It works on MySQL server. `LIKE '%' || value` does not work well because when the value starts with numbers that LIKE does not return true. ``` SELECT * FROM TABLE1 WHERE DomainName like CONCAT('%', value) ```
SQL SELECT WHERE string ends with Column
[ "sql", "sql-server", "select", "sql-like" ]
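The reversed-LIKE idea in the answers can be demonstrated outside SQL Server as well. Here is a hedged sqlite3 sketch (SQLite concatenates strings with `||` where T-SQL uses `+`; the table and data follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Domains (Id INTEGER, DomainName TEXT)")
conn.executemany("INSERT INTO Domains VALUES (?, ?)",
                 [(1, "google.com"), (2, "microsoft.com"), (3, "othersite.com")])

value = "mail.othersite.com"
# Reverse LIKE: match the *value* against a pattern built from the column.
rows = conn.execute(
    "SELECT Id, DomainName FROM Domains WHERE ? LIKE '%' || DomainName",
    (value,),
).fetchall()
print(rows)  # [(3, 'othersite.com')]
```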
I have a table with the following structure ``` ID FirstName LastName CollectedNumbers 1 A B 10,11,15,55 2 C D 101,132,111 ``` I want a boolean value based on the CollectedNumbers range, e.g. if all CollectedNumbers are between 1 and 100 then True; if any is over 100 then False. Can anyone suggest what would be the best way to accomplish this? The collected numbers won't always be sorted.
It so happens that you have a pretty simple way to see if values are 100 or over in the list. If such a value exists, then there are at least *three* characters between the commas. If the numbers are never more than 999, you could do: ``` select (case when ','+CollectedNumbers+',' not like '%,[0-9][0-9][0-9]%' then 1 else 0 end) as booleanflag ``` This happens to work for the break point of 100. It is obviously not a general solution. The best solution would be to use a junction table with one row per `id` and `CollectedNumber`.
Just make a function, which will return true/False, in the database which will convert the string values(10,11,15,55) into a table and call that function in the Selection of the Query like this ``` Select ID, FirstName, LastName, dbo.fncCollectedNumbersResult(stringvalue) as Result from yourTableName ```
Checking Range in Comma Separated Values [SQL Server 2008]
[ "sql", "sql-server-2008" ]
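The accepted answer's trick — a value of 100 or more shows up as three digits in a row after a comma — can be reproduced in SQLite, where GLOB supplies the `[0-9]` character classes that SQLite's LIKE lacks. The same caveats apply as in the answer: it assumes the numbers never exceed 999, and a value of exactly 100 would be misclassified. A hedged sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE People (ID INTEGER, CollectedNumbers TEXT)")
conn.executemany("INSERT INTO People VALUES (?, ?)",
                 [(1, "10,11,15,55"), (2, "101,132,111")])

# Wrap the list in commas, then look for ",ddd" (a 3-digit entry).
rows = conn.execute("""
SELECT ID,
       CASE WHEN ',' || CollectedNumbers || ',' GLOB '*,[0-9][0-9][0-9]*'
            THEN 0 ELSE 1 END AS all_in_range
FROM People ORDER BY ID
""").fetchall()
print(rows)  # [(1, 1), (2, 0)]
```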
I have a table with 4 columns, and I need to check to see if a Column Pair exists before inserting a row into the database: ``` INSERT INTO dbo.tblCallReport_Detail (fkCallReport, fkProductCategory, Discussion, Action) VALUES (?, ?, ?, ?) ``` The pair in question is `fkCallReport` and `fkProductCategory`. For example if the row trying to be inserted has `fkCallReport = 3` and `fkProductCategory = 5`, and the database already has both of those values together, it should display an error and ask if they would like to combine the Disuccsion and Action with the current record. Keep in mind I'm doing this in VBA Access 2010 and am still very new.
Just set them both as the primary keys (compound key I believe is the correct term). Then you'll need a unique combination to add to the table.
Two options I can think of: First is to make a compound primary key in the database itself. Second is a conditional insert. Basically use `select count(*) where fkCallReport=var1 and fkProductCategory=var2` with the conditional operators of your database. MSSqL has `if` Oracle has `when` not sure about Access If you are allowed to set up your primary keys PLEASE go with the compound key. Better practice and keeps you out of sticky situations
Check for duplicate rows in 2 columns before update
[ "sql", "ms-access", "vba" ]
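The compound-key advice in both answers is easy to verify: once the pair is the primary key, the engine itself rejects the duplicate and the caller can trap the error. A hedged sqlite3 sketch (SQLite here, not Access/Jet, so the exact error surfaced in VBA would differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE tblCallReport_Detail (
    fkCallReport      INTEGER,
    fkProductCategory INTEGER,
    Discussion        TEXT,
    "Action"          TEXT,
    PRIMARY KEY (fkCallReport, fkProductCategory)
)""")
conn.execute("INSERT INTO tblCallReport_Detail VALUES (3, 5, 'd1', 'a1')")
try:
    # Same (fkCallReport, fkProductCategory) pair: the engine rejects it,
    # and the caller can trap the error and offer to merge instead.
    conn.execute("INSERT INTO tblCallReport_Detail VALUES (3, 5, 'd2', 'a2')")
    duplicate_blocked = False
except sqlite3.IntegrityError:
    duplicate_blocked = True
print(duplicate_blocked)  # True
```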
I have seen similar questions asked but never seen an answer that works for me. I have the following table and trigger definitions... ``` DROP TRIGGER IF EXISTS c_consumption.newRateHistory; DROP TABLE IF EXISTS c_consumption.myrate; DROP TABLE IF EXISTS c_consumption.myratehistory; USE c_consumption; CREATE TABLE `myrate` ( `consumerId` varchar(255) DEFAULT NULL, `durationType` varchar(50) NOT NULL DEFAULT 'DAY', `id` bigint(20) NOT NULL AUTO_INCREMENT, `itemId` varchar(50) NOT NULL, `quantity` double NOT NULL DEFAULT 1.0, `quantityType` varchar(100) NOT NULL DEFAULT 'GALLON', `timePeriod` double NOT NULL DEFAULT 1.0, PRIMARY KEY (`id`), UNIQUE INDEX `UNIQUE_RATE` (`itemId` ASC, `consumerId` ASC) ) ENGINE=InnoDB AUTO_INCREMENT=314 DEFAULT CHARSET=utf8; CREATE TABLE `myratehistory` ( `consumerId` varchar(255) DEFAULT NULL, `durationType` varchar(50) DEFAULT NULL, `itemId` varchar(50) NOT NULL, `quantity` double DEFAULT NULL, `quantityType` varchar(100) DEFAULT NULL, `status` varchar(20) NOT NULL DEFAULT 'CREATED', `timePeriod` double DEFAULT NULL, `timestamp` DATETIME NULL, PRIMARY KEY (`itemId`, `consumerId`, `timestamp`) ) ENGINE=InnoDB AUTO_INCREMENT=314 DEFAULT CHARSET=utf8; CREATE TRIGGER `newRateToHistory` AFTER INSERT ON myrate FOR EACH ROW INSERT INTO myratehistory ( consumerId, durationType, itemId, quantity, quantityType, status, timePeriod, timestamp ) VALUES( new.consumerId, new.durationType, new.itemId, new.quantity, new.quantityType, 'CREATED', new.timePeriod, now()); ``` Note that consumerId CAN be null. Next I run this SQL statement: ``` INSERT INTO c_consumption.myrate ( consumerId, durationType, itemId, quantity, quantityType, timePeriod) VALUES( null, 'DAY', 'MyItem', 1.0, 'GALLON', 1.0); ``` I get the following message: ``` Error Code: 1048 Column 'consumerId' cannot be null ``` Obviously I am doing something wrong but I do not know what it is. Any help would be greatly appreciated. Thanks!
ConsumerId is part of the primary key. No part of the primary key can be null.
Just a guess, why mysql allows a null-able column as part of the primary key I don't know ``` CREATE TABLE `myratehistory` ( [...] PRIMARY KEY (`itemId`, `consumerId`, `timestamp`) ```
mySQL trigger? causing Error Code: 1048 Column cannot be null
[ "mysql", "sql", "triggers" ]
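The accepted answer's point — no part of a primary key may be NULL — can be reproduced directly. A hedged sqlite3 sketch (SQLite, unlike MySQL, historically tolerates NULL in PRIMARY KEY columns, so NOT NULL is spelled out here to mimic MySQL's rule for key parts):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE myratehistory (
    itemId     TEXT NOT NULL,
    consumerId TEXT NOT NULL,
    ts         TEXT NOT NULL,
    PRIMARY KEY (itemId, consumerId, ts)
)""")
try:
    conn.execute("INSERT INTO myratehistory VALUES ('MyItem', NULL, '2014-08-20')")
    null_rejected = False
except sqlite3.IntegrityError:
    # Mirrors MySQL's "Column 'consumerId' cannot be null" (error 1048).
    null_rejected = True
print(null_rejected)  # True
```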
I want to join 6 tables, which all have different variables, to one table, which has same columns as all 6 other tables. Can i somehow do it without looking at these tables and watching which columns these tables have? I have got macro variable, an array, with column names, but I cannot think of any good way how to join these tables using this array. Array is created by this macro: ``` %macro getvars(dsn); %global vlist; proc sql noprint; select name into :vlist separated by ' ' from dictionary.columns where memname=upcase("&dsn"); quit; %mend getvars; ``` And i want to just join tables like this: ``` proc sql; create table new_table as select * from table1 as l left join table2 as r on l.age=r.age and l.type=r.type; quit; ``` but not so manually :) For example, table1 has columns name, age, coef1 and sex, table 2 has columns name, region and coef2. The third table, where I want to join them has name, age, sex, region, coef and many other columns. I want to write a program, that doesn't know which table has which columns, but joins so that third table still has all the same columns plus coef1 and coef2.
This isn't an answer I'd normally recommend as it can lead to unwanted results if you're not careful, however it could work for you in this instance. I'm proposing using a natural join, which automatically joins on to all matching variables so you don't need to specify an ON clause. Here's example code. ``` proc sql; create table want as select * from a natural left join b natural left join c ; quit; ``` As I say, be very careful about checking the results
Here's one method... Firstly, use `DICTIONARY.COLUMNS` to find all of the common variables in each table based on the 'master' table. Then dynamically generate the join criteria for tables with common variables, and finally join them all together based on those criteria. ``` %MACRO COMMONJOIN(DSN,DSNLIST) ; %LET DSNC = %SYSFUNC(countw(&DSNLIST,%STR( ))) ; /* # of additional tables */ /* Create a list of variables from primary DSN, with flags where variable exists in DSNLIST datasets */ proc sql ; create table commonvars as select a.name %DO I = 1 %TO &DSNC ; %LET D = %SYSFUNC(scan(&DSNLIST,&I,%STR( ))) ; , d&I..V&I label="&D" %END ; from dictionary.columns a %DO I = 1 %TO &DSNC ; /* Iterate over list of dataset names */ %LET D = %SYSFUNC(scan(&DSNLIST,&I,%STR( ))) ; left join (select name, 1 as V&I from dictionary.columns where libname = scan(upcase("&D"),1,'.') and memname = scan(upcase("&D"),2,'.')) as d&I on a.name = d&I..name %END ; where libname = scan(upcase("&DSN"),1,'.') and memname = scan(upcase("&DSN"),2,'.') ; quit ; /* Create join criteria between master & each secondary table */ %DO I = 1 %TO &DSNC ; %LET JOIN&I = ; proc sql ; select catx(' = ',cats('a.',name),cats("V&I..",name)) into :JOIN&I separated by ' and ' from commonvars where V&I = 1 ; quit ; %END ; /* Join */ proc sql ; create table masterjoin as select a.* %DO I = 1 %TO &DSNC ; %IF "&&JOIN&I" ne "" %THEN %DO ; , V&I..* %END ; %END ; from &DSN as a %DO I = 1 %TO &DSNC ; %IF "&&JOIN&I" ne "" %THEN %DO ; %LET D = %SYSFUNC(scan(&DSNLIST,&I,%STR( ))) ; left join &D as V&I on &&JOIN&I %END ; %END ; ; quit ; %MEND ; %COMMONJOIN(work.master,work.table1 work.table2 work.table3) ; ```
SAS Dynamic SQL join
[ "sql", "join", "sas" ]
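The NATURAL JOIN suggested in the accepted answer also exists in SQLite, which makes both the behaviour and the risk easy to see: rows pair up on *every* column name the tables share, with no ON clause. A hedged sketch with invented columns matching the question's example (age, type, coef1, coef2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (age INTEGER, type TEXT, coef1 REAL);
CREATE TABLE b (age INTEGER, type TEXT, coef2 REAL);
INSERT INTO a VALUES (30, 'x', 1.5), (40, 'y', 2.5);
INSERT INTO b VALUES (30, 'x', 9.0);
""")

# NATURAL LEFT JOIN matches on the shared columns (age, type); the
# duplicated right-side columns are dropped from SELECT *.
rows = conn.execute("SELECT * FROM a NATURAL LEFT JOIN b ORDER BY age").fetchall()
print(rows)  # [(30, 'x', 1.5, 9.0), (40, 'y', 2.5, None)]
```

As the answer warns, an accidental extra shared column name silently changes the join condition, so the results need checking.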
I have a `created_at` timestamp field. I need to update and subtract 4 hours from every record. This is about **updating the records**, not changing the results of a query. So `2014-08-20 18:00:00` would become `2014-08-20 14:00:00`. And `2014-08-21 03:00:00` would become `2014-08-20 23:00:00`.
You can use a simple UPDATE statement: ``` UPDATE yourtable SET created_at = created_at - INTERVAL 4 HOUR; ``` This will update all rows accordingly to your needs: From the documentation to [DATE\_ADD](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_date-add) > Date arithmetic also can be performed using INTERVAL together with the > + or - operator: > > date + INTERVAL expr unit > date - INTERVAL expr unit
You can use `INTERVAL` to specify the time that needs to be subtracted: ``` UPDATE myTable SET created_at = created_at - INTERVAL 4 HOUR ```
Update timestamp values by subtracting 4 hours from each record
[ "mysql", "sql" ]
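The same update can be checked in SQLite, which spells MySQL's `- INTERVAL 4 HOUR` as a datetime modifier. A hedged, runnable sketch with the question's two example timestamps:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (created_at TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("2014-08-20 18:00:00",), ("2014-08-21 03:00:00",)])

# SQLite's modifier syntax stands in for MySQL's `created_at - INTERVAL 4 HOUR`.
conn.execute("UPDATE t SET created_at = datetime(created_at, '-4 hours')")
rows = [r[0] for r in conn.execute("SELECT created_at FROM t ORDER BY created_at")]
print(rows)  # ['2014-08-20 14:00:00', '2014-08-20 23:00:00']
```

Note the second row rolls back across midnight to the previous day, as the question expects.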
## Table:**City** ``` City_Name (PK) State_Name (PK) Population -------------- --------------- ---------- Austin MA Austin TX Burlington VT Fresno CA Houston TX Independence KS Independence MO Independence PA Kansas CA Kansas MO Paris MO Paris OR Paris TX Paris VA Paso TX ``` The above table has a composite primary key (City\_Name and State\_Name). I want to **find the city which occur in at least two distinct States**. In this case, the result will be ``` City_Name --------- Austin Independence Kansas Paris ``` Other cities will not qualify as they don't occur in at-least two states. This solution is required for practicing relational calculus problems and thus (unfortunately) aggregate function such as COUNT() cannot be used. I went through a database book where I saw a solution to similar problem which looks like this, but not working. ``` SELECT c1.State_Name FROM City AS c1 JOIN City AS c2 on c1.City_Name = c2.City_Name AND c1.State_Name = c2.State_Name JOIN City AS c3 on c1.City_Name = c3.City_Name AND c1.State_Name = c3.State_Name WHERE (c2.City_Name<>c3.City_Name) AND (c2.State_Name<>c3.State_Name); ``` Some help at this point will be highly welcoming.
If you can't use any kind of aggregate functions, then the following would return the cities that appear at least twice: ``` SELECT distinct c1.city_name FROM `city` c1 JOIN city c2 ON c1.city_name=c2.city_name AND c1.state_name!=c2.state_name ``` I assumed that not only COUNT but also any kind of aggregate was unavailable.
You can use COUNT(DISTINCT) and restrict your result with the HAVING clause: ``` SELECT City_Name, COUNT(DISTINCT State_Name) FROM City GROUP BY City_NAME HAVING COUNT(DISTINCT State_Name) > 1 ``` see documentation of [COUNT(DISTINCT)](https://dev.mysql.com/doc/refman/5.6/en/group-by-functions.html#function_count-distinct) > Returns a count of the number of rows with different non-NULL expr values. I overlooked the restriction not be able to use COUNT(). That leads indeed to a self join: ``` SELECT DISTINCT c1.City_Name FROM City c1 INNER JOIN City c2 ON c1.City_Name = c2.City_Name AND c1.State_Name <> c2.State_Name ``` [Demo](http://sqlfiddle.com/#!2/581ad/3) for both solutions.
MySql - Find values that occur AT LEAST two times without using any aggregate function
[ "mysql", "sql", "database" ]
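The aggregate-free self-join from the accepted answer runs unchanged in SQLite: a city qualifies as soon as it pairs with itself under a different state. A hedged sketch loaded with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE City (City_Name TEXT, State_Name TEXT)")
conn.executemany("INSERT INTO City VALUES (?, ?)", [
    ("Austin", "MA"), ("Austin", "TX"), ("Burlington", "VT"),
    ("Fresno", "CA"), ("Houston", "TX"), ("Independence", "KS"),
    ("Independence", "MO"), ("Independence", "PA"), ("Kansas", "CA"),
    ("Kansas", "MO"), ("Paris", "MO"), ("Paris", "OR"),
    ("Paris", "TX"), ("Paris", "VA"), ("Paso", "TX"),
])

# No aggregates: a city qualifies if it joins to itself in another state.
rows = [r[0] for r in conn.execute("""
SELECT DISTINCT c1.City_Name
FROM City c1
JOIN City c2 ON c1.City_Name = c2.City_Name
            AND c1.State_Name <> c2.State_Name
ORDER BY c1.City_Name
""")]
print(rows)  # ['Austin', 'Independence', 'Kansas', 'Paris']
```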
I have two tables in my database, 'users' and 'accounts'. Each have a column named 'email' What I would like to do in the end is query 'users' and return all the rows which do NOT have the same email as one of the rows in 'accounts', and then DROP those entire rows. I started by trying to craft a query which returned all the rows in 'users' that DO have an email that exists in one of the rows in 'accounts' ``` SELECT * FROM `users` WHERE users.email = accounts.email ``` This gave me an error using the syntax "users.email". Any help would be greatly appreciated! **EDIT:** To clarify, my problem was that I had devices(users) which were being registered to my database by people who had not created an account (uniquely identified by their email address). These devices were still being registered with email addresses, just not those belonging to a legitimate account, so email will never be null. I wanted to form a query that returns all the rows of 'users' whose email column does not match any email in the 'accounts' table, so that I could then drop those users. I solved this problem in a more simple way just using PHP, but for the sake of answering my original question: What I was trying to is essentially the opposite of this: ``` SELECT email FROM users JOIN accounts ON users.email=accounts.email; ``` That query would return all of the rows of users that do have an email that also exists in the email column of the accounts table, meaning the device is registered to a legitimate account. I wanted to do the opposite, a query which would return the rows of all users whose email field in 'user' does not match with any email field in 'accounts', so that I could then drop those devices.
If you want a list of users that do not have corresponding email address in accounts, you can use a left join where accounts email is null. Putting that in a subselect for a delete will allow you to delete all the rows that are in users that are not in accounts. (i've commented out the delete for safety) ``` select * --delete from `users` where `users`.email in ( select u.email from `users` u left join `accounts` a on u.email = a.email where a.email is null) ``` You can see this working in a `fiddle`
Sorry for wall of text. Let me start from here: > I wanted to form a query that returns all the rows of 'users' whose email column does not match any email in the 'accounts' table, so that I could then drop those users. I think you need to solve the problem first, and make sure they are sent a verification email with a link they must click on during the sign-up process. You won't get this problem anymore. **Now, The query you're looking for** > That query would return all of the rows of users that do have an email that also exists in the email column of the accounts table ... It is an invalid email in the 'email' field of the 'users' table if no such email exists in the 'email' column in the 'accounts' table. If I understand you correctly, You'll want to do a subquery of the records(email addresses) that exist in both tables. Then, from that query you will select only the ones that do not have an email in the accounts table. What would really help us all understand is a mock-up on [SQL Fiddle](http://sqlfiddle.com/). I made a mockup below, I make a query that returns a result like this: (let me explain the variables first) ... exist\_in\_both.aemail #=> email exist in both, found in account table exist\_in\_both.uemail #=> email exists in both, found in user table email #=> email that does NOT exist in both ... Results of the query from here: <http://sqlfiddle.com/#!2/67050/26> looks like: ``` exist_in_both.aemail, exist_in_both.uemail, email ``` Now, I make a query against THOSE results, and only select the ones that have the column for existing in a single table, but not a full column for existing in both tables. That looks like this: <http://sqlfiddle.com/#!2/67050/38> Your needed query, returns emails that exist only in the accounts table, but not emails that exist in both users table and accounts table. A, b, and c are in accounts, a and b are in users, this will select c.
:) ``` SELECT derived_table.email from (select * from ( SELECT u.email as uemail, a.email as aemail from users u join accounts a WHERE u.email = a.email) exist_in_both RIGHT JOIN accounts on accounts.email = exist_in_both.aemail) derived_table WHERE derived_table.uemail IS NULL ``` Working from the inside out: You select the emails that exist in both, then you do a right join to the emails that exist in just one table. Then, from that result set you query the ones that are emails that didn't show up in the results for "exists in both". In the unfortunate situation that you have emails that exist in accounts that don't exist in users, AND you have emails that exist in users that don't exist in accounts, here's a SQL fiddle where that situation is going on, and the query to solve that problem. (just with a UNION) <http://sqlfiddle.com/#!2/ed2814/1>
Query for grouping columns with the same text in different tables
[ "mysql", "sql" ]
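The anti-join at the heart of this question can be sketched end-to-end: select the orphaned users, then delete them. A hedged sqlite3 version with invented emails; as in the question, email is assumed never NULL (with NULLs present, `NOT IN` would silently match nothing and `NOT EXISTS` is the safer form):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (email TEXT);
CREATE TABLE accounts (email TEXT);
INSERT INTO users VALUES ('a@x.com'), ('b@x.com'), ('rogue@x.com');
INSERT INTO accounts VALUES ('a@x.com'), ('b@x.com');
""")

# Devices registered with an email that matches no real account:
orphans = [r[0] for r in conn.execute("""
SELECT u.email FROM users u
WHERE NOT EXISTS (SELECT 1 FROM accounts a WHERE a.email = u.email)
""")]

# ...and dropping them, as the question set out to do:
conn.execute("DELETE FROM users WHERE email NOT IN (SELECT email FROM accounts)")
remaining = [r[0] for r in conn.execute("SELECT email FROM users ORDER BY email")]
print(orphans, remaining)  # ['rogue@x.com'] ['a@x.com', 'b@x.com']
```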
I've trying to convert this same into a "Gross Profit" type report and am running into an issue. ``` select CONVERT(VARCHAR(12), ih.invoice_date,110) as invoice_date, oh.order_no, bosr.salesrep_name, bosr.salesrep_id, oh.location_id, oh.taker, oh.customer_id, Replace(oh.ship2_name, ',', ' ') as Ship_to_name, bosr.supplier_id, Replace(bosr.supplier_name, ',', ' ') as Supplier_name, Cast((dc.dealer_commission_amt_due) as DECIMAL (19,2)) as "Gross Profit" from oe_hdr oh inner join anspin_view_booked_orders_ship_to_rep bosr on oh.order_no = bosr.order_no inner join oe_line ol on oh.order_no = ol.order_no inner join invoice_hdr ih on oh.order_no = ih.order_no inner join dealer_commission dc on ih.invoice_no = dc.invoice_no where ih.invoice_date >= '2014-07-01' and ih.invoice_date < '2014-08-01' and ol.qty_ordered > '0' and bosr.source_code_no <> '706' and bosr.source_code_no <> '709' group by CONVERT(VARCHAR(12), ih.invoice_date, 110), oh.order_no, bosr.salesrep_name, bosr.salesrep_id, oh.location_id, oh.customer_id, oh.taker, oh.ship2_name, bosr.supplier_id, bosr.supplier_name, dc.dealer_commission_amt_due order by invoice_date; ``` This worked great and the "Gross Profit" column is showing the correct values in the date range I want...now if I were to remove the "dc.dealer\_commission\_amt\_due" from the group by, and then make this change: ``` Cast(sum(dc.dealer_commission_amt_due) as DECIMAL (19,2)) as "Gross Profit" ``` I get amounts in SOME (Not all! Which is weird since some are correct) of the invoice numbers as 2-4+ times their original value. Example Between the two: ``` invoice_date order_no salesrep_name salesrep_id location_id taker customer_id Ship_to_name supplier_id Supplier_name Gross Profit 07-10-2014 X NAME ID 60 NAME X Customer INC 123452 supplier INC. 
4800.00 ``` Non-Sum: ``` invoice_date order_no salesrep_name salesrep_id location_id taker customer_id Ship_to_name supplier_id Supplier_name Gross Profit 07-10-2014 X NAME ID 60 NAME X Customer INC 123452 supplier INC. 750.00 invoice_date order_no salesrep_name salesrep_id location_id taker customer_id Ship_to_name supplier_id Supplier_name Gross Profit 07-10-2014 X NAME ID 60 NAME X Customer INC 123452 supplier INC. 450.00 ``` From what I've read this has to do with the joins...is this correct?
Ended up having to pay for the software company to write a query since they have the "encyclopedia" on linking the different tables together. I was able to get 50% there by getting rid of the multiplication on my own, but only 50% of the order types were showing up...I gave up and now have a working query that I can compare against. Thanks for all your help and suggestions!
These two queries are not the same: ``` SELECT Cast(( dc.dealer_commission_amt_due ) AS DECIMAL (19, 2)) AS "Gross Profit" FROM dealer_commission dc GROUP BY dc.dealer_commission_amt_due SELECT Cast(( SUM(dc.dealer_commission_amt_due) ) AS DECIMAL (19, 2)) AS "Gross Profit" FROM dealer_commission dc ``` Adding additional columns in your GROUP BY clause will return more rows in the result, but it should not effect the sum. Removing a column in the GROUP BY will return less rows and again should not effect the sum. The only part of the query that can effect the sum is which rows are matched. Also, keep in mind the order of operations of a query: ``` FROM WHERE GROUP BY HAVING SELECT ORDER BY ```
Sql Sum Multiplying Results
[ "sql", "sql-server", "sql-server-2008", "sum", "inner-join" ]
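The "sum multiplies" symptom above is classic join fan-out: joining commission rows to order *lines* repeats each commission once per line, so SUM counts it several times. A minimal hedged reproduction in sqlite3 with an invented two-table schema (not the real tables from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE oe_line (order_no INTEGER, qty_ordered INTEGER);
CREATE TABLE dealer_commission (order_no INTEGER, amt REAL);
INSERT INTO oe_line VALUES (1, 5), (1, 3);               -- two lines on order 1
INSERT INTO dealer_commission VALUES (1, 750.0), (1, 450.0);
""")

# Each commission row meets BOTH order lines, so the join yields 4 rows
# and SUM double-counts every amount.
inflated = conn.execute("""
SELECT SUM(dc.amt)
FROM oe_line ol
JOIN dealer_commission dc ON dc.order_no = ol.order_no
""").fetchone()[0]

correct = conn.execute("SELECT SUM(amt) FROM dealer_commission").fetchone()[0]
print(inflated, correct)  # 2400.0 1200.0
```

Orders with one line sum correctly, which matches the observation that only *some* invoices were inflated.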
Good afternoon almighty Stackoverflow! I'm not overly familiar with SQL in Oracle, but have a need to take a date/time value and convert it to a string that matches a specific format for another application. I found a lot of scenarios that were similar, but those mixed with some Oracle documentation has not gotten me to what I need yet. The input format is as follows: 8/6/2014 3:05:21 PM The format that I need to be input into the other application is as follows: YYYYMMDDhhmmssuu uu is microseconds (fractional seconds in Oracle I guess). What I thought would work would be: ``` to_date(VP_ACTUAL_RPT_DETAILS.ETLLOADER_OUT,'YYYYMMDDHH24MISSFF') ``` I think that only works if the input format matches the output format. If you can provide assistance, I would greatly appreciate it!
If You convert from DATE type to output format use `TO_CHAR` function: ``` TO_CHAR(VP_ACTUAL_RPT_DETAILS.ETLLOADER_OUT,'YYYYMMDDHH24MISSFF') ``` If You convert from VARCHAR2 type, then use both functions: ``` TO_CHAR(TO_DATE(VP_ACTUAL_RPT_DETAILS.ETLLOADER_OUT, 'MM/DD/YYYY HH:MI:SS'),'YYYYMMDDHH24MISSFF') ``` `TO_DATE` - converts from VARCHAR2 type (input format) to DATE type; `TO_CHAR` - from DATE type to VARCHAR2 type (output format)
The Oracle function you need is TO\_CHAR. to\_char(VP\_ACTUAL\_RPT\_DETAILS.ETLLOADER\_OUT,'YYYYMMDDHH24MISSFF')
Oracle: Convert Date Time to Specific format
[ "sql", "oracle", "type-conversion", "reformatting" ]
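For a sanity check of the target layout (YYYYMMDDhhmmssuu), the same conversion can be sketched in plain Python. This assumes the source values carry no fractional seconds — an Oracle DATE does not store any, and TO_CHAR's FF element applies to TIMESTAMP values — so the trailing two positions are zero-filled as a hedged assumption:

```python
from datetime import datetime

raw = "8/6/2014 3:05:21 PM"
dt = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S %p")
out = dt.strftime("%Y%m%d%H%M%S") + "00"  # zero-fill the fractional positions
print(out)  # 2014080615052100
```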
I have a table like this: ![](https://i.stack.imgur.com/pTHNT.png) I would like to aggregate the table like this: ![enter image description here](https://i.stack.imgur.com/jPAou.png) Explanation: For Yes to increment: need just 1 yes across grouped response by user for each item. In above example, At least one result for Item\_1 for user 1 and 2 is Yes. So "Yes" is 2(1+1). One for each user. For No to increment: need all no’s across grouped response by user for each item. In above example, All the result for Item\_2 is "No" for User 1.So "No" is 1. For N/A to increment: need all N/A’s across grouped response by user for each item. In above example, all the result for Item\_2 is "N/A" for User 2. So "N/A" is again 1. Notes: There are just 2 items Item\_1 and Item\_2 Result is either Yes, No or N/A Any suggestion is appreciated. Thanks in advance.
I think I have tweaked the solution by GarethD to now correctly account for cases where there is no 'Yes' and not all 'No' or 'N/A': ``` SELECT Item, Yes = COUNT(CASE WHEN Result = 'Yes' THEN 1 END), [No] = COUNT(CASE WHEN Result = 'No' THEN 1 END), [N/A] = COUNT(CASE WHEN Result = 'N/A' THEN 1 END), Unique_User_Count = COUNT(DISTINCT UserID) FROM ( SELECT UserID, Item, Result = MAX(Result) FROM UserResult GROUP BY UserID, Item HAVING MAX(Result) = MIN(Result) OR MAX(Result) = 'Yes' ) AS T GROUP BY Item; ``` This is my original solution, which I knew could be shorter: ``` WITH yes AS ( SELECT Item_ID, User_ID FROM table1 WHERE Result = 'Yes' GROUP BY User_ID,Item_ID ), no AS ( SELECT Item_ID, User_ID FROM table1 t0 WHERE NOT EXIST ( SELECT 1 FROM table1 WHERE Result != 'No' AND Item_ID = t0.Item_ID AND User_ID = t0.User_ID) ) GROUP BY Item_ID, User_ID ), na AS ( SELECT Item_ID, User_ID FROM table1 t0 WHERE NOT EXIST ( SELECT 1 FROM table1 WHERE Result != 'N/A' AND Item_ID = t0.Item_ID AND User_ID = t0.User_ID) ) GROUP BY Item_ID, User_ID ) SELECT t1.Item_ID, (SELECT COUNT(*) FROM yes GROUP BY Item_ID WHERE Item_ID = t1.Item_ID ) AS yes, (SELECT COUNT(*) FROM no GROUP BY Item_ID WHERE Item_ID = t1.Item_ID ) AS no, (SELECT COUNT(*) FROM na GROUP BY Item_ID WHERE Item_ID = t1.Item_ID ) AS na, (SELECT COUNT(DISTINCT User_ID) FROM table1 WHERE Item_ID = t1.Item_ID ) AS Unique_User_Count FROM table1 t1 GROUP BY t1.Item_ID ```
To get down to one response per user you can use: ``` SELECT UserID, Item_Name, Result = MAX(Result) FROM T GROUP BY UserID, Item_Name ``` This simply takes advantage of the fact that in descending order the available values are Yes, No, N/A, so using `MAX` will mean that if a user has a result of yes this will be picked, if not and the result of no exists, this will be used, otherwise it will be N/A Then you can use a conditional aggregate: ``` SELECT Item_Name, Yes = COUNT(CASE WHEN Result = 'Yes' THEN 1 END), [No] = COUNT(CASE WHEN Result = 'No' THEN 1 END), [N/A] = COUNT(CASE WHEN Result = 'N/A' THEN 1 END), Unique_User_Count = COUNT(DISTINCT UserID) FROM ( SELECT UserID, Item_Name, Result = MAX(Result) FROM T GROUP BY UserID, Item_Name ) AS T GROUP BY Item_Name; ```
Aggregate Rows in SQL Server
[ "sql", "sql-server", "aggregate-functions" ]
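Both answers hinge on the same two steps: collapse to one result per user/item (MAX works because 'Yes' > 'No' > 'N/A' in binary string order), then pivot with conditional COUNTs. A hedged sqlite3 sketch with invented rows that reproduce the question's expected totals; note the accepted answer additionally uses a HAVING clause to handle users with mixed No/N-A results, an edge case this sample data avoids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE UserResult (UserID INTEGER, Item TEXT, Result TEXT)")
conn.executemany("INSERT INTO UserResult VALUES (?, ?, ?)", [
    (1, "Item_1", "Yes"), (1, "Item_1", "No"),
    (2, "Item_1", "Yes"), (2, "Item_1", "N/A"),
    (1, "Item_2", "No"),  (1, "Item_2", "No"),
    (2, "Item_2", "N/A"), (2, "Item_2", "N/A"),
])

# Step 1 (inner query): one row per user/item; MAX picks 'Yes' if present.
# Step 2 (outer query): pivot the per-user results into counts.
rows = conn.execute("""
SELECT Item,
       COUNT(CASE WHEN Result = 'Yes' THEN 1 END) AS Yes,
       COUNT(CASE WHEN Result = 'No'  THEN 1 END) AS No,
       COUNT(CASE WHEN Result = 'N/A' THEN 1 END) AS NA,
       COUNT(DISTINCT UserID)                     AS Unique_User_Count
FROM (SELECT UserID, Item, MAX(Result) AS Result
      FROM UserResult GROUP BY UserID, Item) t
GROUP BY Item ORDER BY Item
""").fetchall()
print(rows)  # [('Item_1', 2, 0, 0, 2), ('Item_2', 0, 1, 1, 2)]
```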
I Have a table dbo.ArtikelAlternatief created like this: ``` CREATE TABLE [dbo].[ArtikelAlternatief]( [Barcode] [varchar](50) NOT NULL, [BarcodeAlternatief] [varchar](50) NOT NULL, CONSTRAINT [PK_ArtikelAlternatief] PRIMARY KEY CLUSTERED ( [Barcode] ASC, [BarcodeAlternatief] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ``` Now I want the following results combined: ``` select BarcodeAlternatief AS 'Barcode' from dbo.ArtikelAlternatief where Barcode like '7630015711115' select Barcode AS 'Barcode' from dbo.ArtikelAlternatief where BarcodeAlternatief like '7630015711115' ``` How is it possible to combine those 2 query's in one result column?
You can do it in 3 methods. ## [SQLFIDDLE](http://sqlfiddle.com/#!6/e7574a/5) **Method 1:** Using `CASE` Statement: ``` select (case when Barcode = '7630015711115' then BarcodeAlternatief else Barcode END) as 'Barcode' from ArtikelAlternatief where Barcode = '7630015711115' or BarcodeAlternatief = '7630015711115'; ``` --- **Method 2:** You can try using `DECODE` statement (Of oracle), ``` SELECT DECODE (BarcodeAlternatief , '7630015711115', Barcode , BarcodeAlternatief ) AS Barcode FROM dbo.ArtikelAlternatief where Barcode = '7630015711115' OR BarcodeAlternatief = '7630015711115' ``` --- **Method 3:** Try below query using `UNION ALL`: ``` select BarcodeAlternatief AS 'Barcode' from dbo.ArtikelAlternatief where Barcode = '7630015711115' UNION ALL select Barcode AS 'Barcode' from dbo.ArtikelAlternatief where BarcodeAlternatief = '7630015711115' ``` 1. If you wish to allow duplicates, then use `UNION ALL`. If you do not wish to allow duplicates, then use `UNION`. 2. In your case, you can use `=` operator instead of `LIKE` in where condition because you are not doing any pattern matching.
Use the [UNION](http://msdn.microsoft.com/en-us/library/ms180026.aspx) operator: ``` query1 UNION ALL query2 ``` The `ALL` keyword is optional, and used if you want duplicate rows.
select query with many-on-many table
[ "sql", "select" ]
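Of the three methods above, the CASE expression avoids scanning the table twice by folding both lookups into one pass. A hedged sqlite3 sketch with invented barcodes (the real data isn't shown in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ArtikelAlternatief (Barcode TEXT, BarcodeAlternatief TEXT)")
conn.executemany("INSERT INTO ArtikelAlternatief VALUES (?, ?)", [
    ("7630015711115", "1111111111111"),
    ("2222222222222", "7630015711115"),
    ("3333333333333", "4444444444444"),
])

target = "7630015711115"
# One pass: emit the *other* column of whichever side matched.
rows = [r[0] for r in conn.execute("""
SELECT CASE WHEN Barcode = ? THEN BarcodeAlternatief ELSE Barcode END AS Barcode
FROM ArtikelAlternatief
WHERE Barcode = ? OR BarcodeAlternatief = ?
ORDER BY 1
""", (target, target, target))]
print(rows)  # ['1111111111111', '2222222222222']
```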
I am working on an Oracle query and I badly need to make it go faster. I would greatly appreciate any advice. * The database is Oracle, running on an ExaData cluster. * Oracle version: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production I have two tables. 1) Transactions: a purchase at a store - TransactionID 2) TransactionItems: each purchase has 1..many items - TransactionID, ItemID In each table, there are two flags: * FlagA: Y/N * FlagB: Y/N The query needs to: 1. Set the value of FlagA and FlagB for every record in TransactionItem. 2. Set the value of FlagA and FlagB for each row in Transaction, based on the values of the Flags in TransactionItem I have broken my query into 4 steps. 1. Set value of Flag A for TransactionItem 2. Set value of Flag B for TransactionItem 3. Set value of Flag A for Transaction 4. Set value of Flag B for Transaction The query runs smoothly. However, this is the catch. There are billions of Transaction records, and each Transaction has about 7 Transaction Items. Here is how fast it goes now: * Total time: 616 seconds / 10.27 minutes * Processes 1,218 Transactions per second / 73,000 transactions’ per minute I tracked the process time for each step: 1. Set value of Flag A for TransactionItem * 4 minutes 52 seconds 2. Set value of Flag B for TransactionItem * 3 minutes 26 seconds 3. Set value of Flag A for Transaction * 1 minute 6 seconds 4. Set value of Flag B for Transaction * 0 minutes 51 seconds Below is my full query. Here are the other tables used Product * Each TransactionItem has a ProductId Each product has a ProductCode. * One product code has many Products FlagAproductCodes 1. A single column with a list of ProductCodes that are categorized as FlagA FlagBproductCodes 1. A single column with a list of ProductCodes that are categorized as FlagB TransactionPayment 1. This is a fact table containing payment details for each transaction Payment\_Dim 1. Links to TransactionPayment on PaymentID 2. 
This is needed because FlagB is set based on Payment\_Dim.PaymentName I have these indexes: Transactions 1. TransactionID TransactionItems 1. TransactionID 2. ProductID Product 1. ProductID 2. ProductCode FlagAproductCodes 1. ProductCode FlagBproductCodes 1. ProductCode Payment 1. PaymentID 2. PaymentCode 3. Payment\_Name I really appreciate the help, thanks ``` -- 1. Set value of FlagA for TransactionItem Update TransactionItems Item Set FlagA = ( Select Case When Item.FlagA_Qty = 0 Then 'N' -- this is the quantity of items purchased that fall into the FlagA category When FlagA.ProductCode Is Null Then 'N' Else 'Y' End From Product Prod Left Join FlagAproductCodes FlagA On Product.ProductCode = FlagA.ProductCode Where Product.Prod_Id = Item.Prod_Id ) ; -- 2. Set value of FlagB for TransactionItem Update TransactionItems Set FlagB = 'Y' Where ItemID In ( Select Trans_Items.ItemID From TransactionItems Trans_Items Inner Join Product Prod On Trans_Items.Prod_Id = Product.Prod_Id Inner Join FlagBproductCodes FlagB On Product.ProductCode = FlagB.ProductCode Where ( Trans_Items.Gov_FlagA_Qty < Trans_Items.Item_Qty ) AND ( Exists (Select Transaction_Payment_Fid From TransactionPayment Trans_Pay Inner Join Warehouse.Payment_Dim Pay_Dim On Trans_Pay.Payment_Id = Pay_Dim.Payment_Id Where Transaction_Fid = Trans_Items.Transaction_Fid And Upper(Pay_Dim.Payment_Name) Like '%ABC%' ) ) ) ; Update TransactionItems Set FlagB = 'N' Where FlagB Is Null; -- 3: Set FlagA for Transactions Update Transactions Set Gov_FlagA_Flag = Case When Exists (Select ItemID From TransactionItems Item Where Item.Transaction_Fid = Transactions.Transaction_Fid and gov_FlagA_flag = 'Y') Then 'Y' Else 'N' End ; -- 4: Set FlagB for Transactions Update Transactions Set FlagB = Case When Exists (Select ItemID From TransactionItems Item Where Item.Transaction_Fid = Transactions.Transaction_Fid And FlagB = 'Y') Then 'Y' Else 'N' End ; ```
You need to look into parallel execution, which is probably too broad a topic to fully explore here. (And I'm not qualified to say much about it). In the meantime you might get some benefit by only updating each table once, and reducing the number of incidental look-ups. This is untested but I think covers the same logic as your three updates against TransactionItems, for example: ``` merge into TransactionItems TI using ( select P.Prod_ID, case when FAPC.ProductCode is null then 'N' else 'Y' end as FlagA, case when FBPC.ProductCode is null then 'N' else 'Y' end as FlagB from Product P left join FlagAproductCodes FAPC on FAPC.ProductCode = P.ProductCode left join FlagAproductCodes FBPC on FBPC.ProductCode = P.ProductCode ) temp on (temp.Prod_id = TI.Prod_ID) when matched then update set TI.FlagA = case when temp.FlagA = 'Y' and TI.FlagA_Qty != 0 then 'Y' else 'N' end, TI.FlagB = case when TI.FlagA_Qty < TI.Item_Qty and exists ( select Transaction_Payment_Fid from TransactionPayment TP join Payment_Dim PD on TP.Payment_Id = PD.Payment_Id where TP.Transaction_Fid = TI.Transaction_Fid and upper(PD.Payment_Name) Like '%ABC%' ) then 'Y' else 'N' end / ``` You might prefer to create an updatable view. But on that volume of data it's still going to take a long time. [This might also be useful](http://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel007.htm#i1009319).
Interesting challenge. My immediate reaction is to divide & conquer - write the PLSQL to operate on sectors/id ranges, and commit often. Then fire off parallel jobs to operate on different ranges, then tune to find the optimum settings. If by luck the tables are partitioned, all the better. Also, although I hail from an era when everything was done set-based, before PLSQL had even been dreamt of, you might want to consider re-designing this on a transactional basis, rather than your current set-based approach, whereby you fetch for update the master rows as bulk collect and then do use the bulk collection to drive the detail table updates. I have found that this can be quicker, and it certainly gives you much more control in batching this operation up. This will also afford you the option to restart should the should fail e.g. snapshot too old, archive logs filled, etc. You wouldn't want to start over if this went bang.
Increase speed of Oracle query running on ExaData
[ "", "sql", "database", "oracle", "bigdata", "exadata", "" ]
I have the following two tables

Table: **Customer**

```
Cust_ID    FName
-------    -----
1          X
2          Y
3          Z
```

Table: **Account**

```
Acct_Number    Cust_ID    Acct_Type
-----------    -------    ---------
1001           1          savings
1002           1          capital
1003           2          savings
1004           2          capital
1005           2          vip
1006           3          capital
1007           3          vip
```

There are three different types of account in the Account table (savings, capital and vip). I want to **find the customers who have an account of every type listed in the Account relation** (without using any aggregate operator). That is, in this case, Y will qualify, as he is the only one who has all types of account.

I got a suggestion to try the following, which is not working:

```
SELECT c.FName, c.Cust_ID
FROM Customer AS c
JOIN Account AS a1 ON c.Cust_ID = a1.Cust_ID
JOIN Account AS a2 ON c.Cust_ID = a2.Cust_ID
WHERE a1.Acct_Type <> a2.Acct_Type;
```

The above query gives the customers who have accounts of two distinct types, not all. Help is highly appreciated.
here's a query using `not exists`

```
select c.*
from customer c
where not exists (
    select acct_type
    from account a2
    where acct_type not in (
        select distinct acct_type
        from account a3
        where a3.Cust_ID = c.Cust_ID
    )
)
```
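Not part of the original answer: a self-contained check of this double-negation pattern ("no account type exists that this customer does not hold") using Python's built-in `sqlite3` module and the sample data from the question.

```python
import sqlite3

# Build the question's sample data in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (Cust_ID INTEGER, FName TEXT);
CREATE TABLE Account (Acct_Number INTEGER, Cust_ID INTEGER, Acct_Type TEXT);
INSERT INTO Customer VALUES (1, 'X'), (2, 'Y'), (3, 'Z');
INSERT INTO Account VALUES
    (1001, 1, 'savings'), (1002, 1, 'capital'),
    (1003, 2, 'savings'), (1004, 2, 'capital'), (1005, 2, 'vip'),
    (1006, 3, 'capital'), (1007, 3, 'vip');
""")

# Keep a customer only if no account type falls outside the set they hold.
rows = conn.execute("""
    SELECT c.FName, c.Cust_ID
    FROM Customer c
    WHERE NOT EXISTS (
        SELECT a2.Acct_Type
        FROM Account a2
        WHERE a2.Acct_Type NOT IN (
            SELECT a3.Acct_Type FROM Account a3
            WHERE a3.Cust_ID = c.Cust_ID
        )
    )
""").fetchall()
print(rows)  # only Y holds every account type
```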
What you want is relational division: ``` forall x:p(x) ``` but this is not possible to express in sql so you have to rewrite it to: ``` not exists x : not p(x) ``` in other words, for which customers does it not exists an accounttype such that the account does not have it. Something like: ``` SELECT c.FName, c.Cust_ID FROM Customer AS c WHERE NOT EXISTS ( select distinct Acct_Type from Account t where not exists ( select 1 from Account as a where a.cust_id = c.cust_id and a.Acct_Type = t. Acct_Type ) ); ``` Edit: did not notice that aggregates was disallowed
Comparing with EVERY distinct value listed in another table in MySql
[ "", "mysql", "sql", "database", "" ]
I have two tables, and want to get all the Products.ProductID if it doesn't exist in Images.ProductID. I'm not too sure how I would write this.. Any help would be great.
You can translate your English sentence into SQL almost directly: ``` SELECT * FROM Products p WHERE NOT EXISTS (SELECT * FROM Images i WHERE i.ProductId=p.ProductId) ```
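As a quick sanity check (not from the original answer): the same anti-join run with Python's `sqlite3` on made-up sample data, where products 2 and 4 have no image rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (ProductID INTEGER);
CREATE TABLE Images (ImageID INTEGER, ProductID INTEGER);
INSERT INTO Products VALUES (1), (2), (3), (4);
INSERT INTO Images VALUES (10, 1), (11, 3);
""")

# Products with no matching row in Images.
missing = [r[0] for r in conn.execute("""
    SELECT p.ProductID
    FROM Products p
    WHERE NOT EXISTS (SELECT * FROM Images i
                      WHERE i.ProductID = p.ProductID)
    ORDER BY p.ProductID
""")]
print(missing)
```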
``` select ProductID from Products where ProductID not in ( select distinct ProductID from images where ProductID is not null ) ``` or ``` select p.ProductID from Products p left join images i on i.ProductID = p.ProductID where i.ProductID is null ```
SQL, select field if field doesn't exist in another table
[ "", "sql", "" ]
I've been given this statement: ``` Select Format(( Select Max([Date]) from BusinessDaysCalendar where [date] in ( Select top 1 [date] from BusinessDaysCalendar where date > CURRENT_TIMESTAMP ) ),'MM/dd/yyyy', 'en-US') [retval] ``` which returns the date in this format `08/22/2014` I want to use `Select CONVERT` instead to get the date formatted as `Aug 22, 2014`. I know how to use this statement to get what I need ``` SELECT CONVERT(VARCHAR(12), GETDATE(), 107) ``` I'm just having a hard time integrating it with the first statement. Any help would be greatly appreciated. Thanks.
I reformatted your query into something which is more readable--by which, I mean that it's easier to determine what's part of a given statement, such as where an argument is for the Convert function, and which lines are part of each subquery. I also included a 2nd version which I think is accomplishing what you want, but is a little simpler because it skips 1 subquery level. My understanding is that you are trying to return a single row for the lowest date following the current date. The problem with your most inner subquery is that without an Order By, there is no guaranty that the row returned will be the lowest value. The Top 1 simply tells it to return the 1st row it comes across, and it's possible this isn't the lowest of the values meeting your criteria. ``` Select convert(varchar(12) ,( --this is the 2nd argument for the 'convert' function select ( Select Max([Date]) --determine the Max (Date) **It looks like you're trying to get the first value after today, and ensure you're only returning 1 row from BusinessDaysCalendar where [date] in (--select the 1st date ** note: since this subquery is only returning a single row, you can use "=" instead of "in" Select top 1 [date] from BusinessDaysCalendar where date > CURRENT_TIMESTAMP) ) ) ,107 --the 3rd argument for 'convert' ) [retval] Select convert(varchar(12) ,( --this is the 2nd argument for the 'convert' function select ( Select min([Date]) --determine the Max (Date) **It looks like you're trying to get the first value after today, and ensure you're only returning 1 row from BusinessDaysCalendar where date > CURRENT_TIMESTAMP ) ) ,107 --the 3rd argument for 'convert' ) [retval] ```
I think you just need to replace "GETDATE()" with the first statement. So: ``` SELECT CONVERT(VARCHAR(12), (Select Max([Date]) from BusinessDaysCalendar where [date] in (Select top 1 [date] from BusinessDaysCalendar where date > CURRENT_TIMESTAMP) ),107) ```
SQL Server : use SELECT convert instead of SELECT format
[ "", "sql", "sql-server", "date", "" ]
This is my orders table. I want to report all orders where all details are Ready For Shipment.

```
orderNo    detailNo    statusId    status
10001      1           40          Ready For Shipment
10002      1           40          Ready For Shipment
10002      2           20          Canceled
10002      3           30          Pending
10003      1           40          Ready For Shipment
10003      2           40          Ready For Shipment
10004      1           10          New Order
10004      2           20          Canceled
10004      3           40          Ready For Shipment
10004      4           40          Ready For Shipment
```

Expected results are:

```
Orders Ready For Shipment
10001
10003
```

Is there any effective method to get the list of ready orders without using subqueries?
Group by the `orderno` and use a `having` to get only those groups having no other status

```
select orderno
from your_table
group by orderno
having sum(case when status <> 'Ready For Shipment' then 1 else 0 end) = 0
```

or with the `statusId`

```
select orderno
from your_table
group by orderno
having sum(case when statusid <> 40 then 1 else 0 end) = 0
```

The `else 0` matters: without it, an order whose rows are all Ready For Shipment sums over nothing but `NULL`s, the `sum` itself becomes `NULL`, and `NULL = 0` filters the group out.
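Not part of the original answer: the conditional-count pattern exercised against the question's rows with Python's `sqlite3`, written with an explicit `ELSE 0` so a group whose rows are all ready sums to zero rather than `NULL`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (orderNo INTEGER, detailNo INTEGER,
                     statusId INTEGER, status TEXT);
INSERT INTO orders VALUES
    (10001, 1, 40, 'Ready For Shipment'),
    (10002, 1, 40, 'Ready For Shipment'),
    (10002, 2, 20, 'Canceled'),
    (10002, 3, 30, 'Pending'),
    (10003, 1, 40, 'Ready For Shipment'),
    (10003, 2, 40, 'Ready For Shipment'),
    (10004, 1, 10, 'New Order'),
    (10004, 2, 20, 'Canceled'),
    (10004, 3, 40, 'Ready For Shipment'),
    (10004, 4, 40, 'Ready For Shipment');
""")

# An order qualifies when it has zero rows in any other status.
ready = [r[0] for r in conn.execute("""
    SELECT orderNo
    FROM orders
    GROUP BY orderNo
    HAVING SUM(CASE WHEN status <> 'Ready For Shipment'
                    THEN 1 ELSE 0 END) = 0
    ORDER BY orderNo
""")]
print(ready)
```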
``` select Distinct a.orderId from ordersTable a inner join ( select orderNo, Avg(statusId) from ordersTable group by orderNo having Avg(statusId) = 40) b on a.orderNo = b.orderNo ```
List orders which all rows are ready
[ "", "sql", "sql-server", "" ]
Consider the following table structure:

```
id    speed
1     100
2     200
3     300
4     400
5     500
```

Consider the following query: `"SELECT * FROM records WHERE speed >= 300"` - this will return the rows #3, 4, 5.

Is there a way to modify this query so that with the same `300` speed parameter it would also return the first row that does not fit the condition i.e. #2. So that the end results would be rows #2, 3, 4, 5?

UPD: Note that all the values and the records count here are arbitrary and for example only. The database is SQLite.
Try this: ``` SELECT * FROM Test WHERE Speed >= 300 UNION SELECT * FROM ( SELECT * FROM Test WHERE Speed < 300 ORDER BY Speed DESC LIMIT 1 ) AS XXX ORDER BY Speed ``` See [DEMO](http://rextester.com/STRV47088) **Note**: changed for [SQLite syntax](http://sqlfiddle.com/#!7/eb321/1/0).
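Not from the original answer: the same UNION-plus-LIMIT idea checked against the question's data with Python's `sqlite3`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE records (id INTEGER, speed INTEGER);
INSERT INTO records VALUES (1, 100), (2, 200), (3, 300), (4, 400), (5, 500);
""")

# All rows meeting the condition, plus the single closest row below it.
rows = conn.execute("""
    SELECT * FROM records WHERE speed >= 300
    UNION
    SELECT * FROM (SELECT * FROM records
                   WHERE speed < 300
                   ORDER BY speed DESC
                   LIMIT 1)
    ORDER BY id
""").fetchall()
print(rows)  # row #2 is the nearest row below the threshold
```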
Try this simple query, which selects the row with the maximum id and speed < 300 as well as rows with speed >= 300. ``` SELECT * FROM records WHERE speed >= 300 OR id = (SELECT MAX(id) FROM records WHERE speed < 300) ORDER BY id; ```
SQL query with additional 'nearest' row
[ "", "sql", "sqlite", "select-query", "" ]
I have a table with a date column set as datetime. I am currently selecting and converting the date as follows which returns the following format: `dd mmm yyyy` **Current (example):** 23 Aug 2014 ``` CONVERT(VARCHAR(11), C.modTime, 106) AS modTime ``` Can someone tell me how I need to change this to get the following format instead: `dddd, dd mmmm yyyy` **Required (example):** Saturday, 23 August 2014
Use this: ``` DATENAME(dw, C.modTime) + ', ' + CONVERT(VARCHAR(11), C.modTime, 106) AS modTime ``` which when run with the current value of getdate() yields: Saturday, 23 Aug 2014 Or, constructing the entire string manually: ``` DATENAME(dw, C.modTime) + ', ' + cast(datepart(dd, C.modTime) as char(2)) + ' ' + datename(mm, C.modTime) + ' ' + cast(datepart(yyyy,C.modTime) as char(4)) ``` to get: Saturday, 23 August 2014
SQL Server 2012 or later ``` declare @d as datetime = '20140823' select format(@d,'dddd, dd MMMM yyyy', 'en-US') ``` Result = Saturday, 23 August 2014
How to select datetime date as dddd, dd mmmm, yyyy with SQL Server?
[ "", "sql", "sql-server", "date", "sql-server-2008", "date-format", "" ]
I am creating a dynamic pivot query using SQL Server 2008 but got stuck in `STUFF`. **Example**: ``` Declare @col as nvarchar(max) Declare @tablename as nvarchar(max) SET @col = 'STUFF((SELECT '','' + QUOTENAME(cola) from ' + @tablename + ' group by cola order by cola FOR XML PATH(''''), TYPE ).value(''.'', ''NVARCHAR(MAX)'') ,1,1,'''')' execute(@col) print @col; ``` **Error** ``` Incorrect syntax near the keyword 'order'. ```
```
Declare @col as nvarchar(max)
Declare @tablename as nvarchar(max) = N'Table1'

set @col = N'SELECT STUFF((
        SELECT '','' + QUOTENAME(cola)
        FROM ' + @tablename + N'
        GROUP BY cola
        ORDER BY cola
        FOR xml PATH (''''), TYPE
        ).value(''.'', ''NVARCHAR(MAX)''), 1, 1, '''');'

execute(@col);
```

Use `execute(@col)`, not `exec @col`; see: `this SQLFiddle demo`
Meem, I have modified your query and it should look like this. ``` Declare @col as nvarchar(max) Declare @tablename as nvarchar(max) set @col = 'REF_REFM_CODE' set @tablename = 'tblKeywords' SET @col = 'Select STUFF((SELECT ' + ''','' + ' + @col + ' from ' + @tablename + ' group by ' + @col + ' order by ' + @col + ' FOR XML PATH(''''), TYPE ).value(''.'', ''NVARCHAR(MAX)'') ,1,1,'''')' execute(@col) print @col; ``` See this [Demo in SQL Fiddle.](http://sqlfiddle.com/#!3/15508/1/0)
Dynamic Pivot query using SQL Server
[ "", "sql", "sql-server", "pivot-table", "" ]
table leave has following data:- ``` EMPNO NAME DATEFROM DATETO 111 xxx 2014-08-03 00:00:00.000 2014-09-05 00:00:00.000 ``` now i am fetching the data from leave table: ``` SELECT [NAME], sum(datediff(day, DATEFROM, case when dateto > '2014-08-31' then '2014-08-31' else dateto end)+1) as holiday FROM [leave] where DATEFROM >= '2014-08-01' and DATEFROM <= '2014-08-31' and userid = 1 group by name ``` it gives me below answer which is perfect:- ``` NAME holiday xxx 29 ``` but i want to to exclude the weekends(friday and saturday) from the holiday days...but it must exclude from the date 2014-08-03 (it is in leave table and datefrom column) how can i perform this?
You can use the following query to calculate the number of weekend days between two dates:

```
SELECT (DATEDIFF(WEEK, StartDate, EndDate) - 1) * 2 +
       CASE DATEPART(dw, StartDate)
         WHEN 4 THEN 2
         WHEN 5 THEN 1
         ELSE 0
       END
```
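Not from the original answer: whichever SQL formula you use, it helps to validate the expected count independently. This hypothetical Python helper counts days in an inclusive range while skipping the question's weekend days (Friday and Saturday).

```python
from datetime import date, timedelta

def days_excluding_fri_sat(start: date, end: date) -> int:
    """Count days in [start, end] inclusive, skipping Friday/Saturday."""
    count = 0
    d = start
    while d <= end:
        # date.weekday(): Monday=0 ... Friday=4, Saturday=5, Sunday=6
        if d.weekday() not in (4, 5):
            count += 1
        d += timedelta(days=1)
    return count

# The leave from the question, capped at the end of August 2014:
print(days_excluding_fri_sat(date(2014, 8, 3), date(2014, 8, 31)))
```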
Try this `DATENAME()` function:

```
select [your_date_column]
from table
where DATENAME(WEEKDAY, [date_created]) <> 'Friday'
  and DATENAME(WEEKDAY, [date_created]) <> 'Saturday'
  and DATENAME(WEEKDAY, [date_created]) <> 'Sunday'
```

Hope this may help you!
Exclude weekends days from the holidays
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have an SQLite database (v. 3.8.1) with somewhat unusual schema that can't be changed. For the purposes of this questions, there are 5 tables (t1 through t5), and I need to create a summary report using data from t1 and t5, but the data I need to reference in t5 can only be gleaned based on relationships to records in t1 through t4. To help clarify - imagine that t1 holds data regarding a document. The document can subsequently go through 1 to 4 more iterations (with different fields available in each iteration, hence the 5 different tables rather than just a flag in 1 table to signify what iteration it is at). I'm interested in whether or not an initial record/document (held in t1) has reached it's final iteration or not (a ParentGUID exists in t5 that when followed up the chain of tables, eventually reaches t1, or not). t1 has a GUID (text) field, and t2 through t5 have GUID and ParentGUID fields (also text). The ParentGUID field in t2 through t5 don't have to be populated (documentation iterations can be skipped in some cases), but when ParentGUID has a value it will always be a GUID from a previous table (for example, if t5 has a ParentGuid value, it will be a GUID from t1, t2, t3 OR t4). This means that I want all of the distinct records from t1, and then for each a value (or values) from t5 if present, or null if not. If a ParentGuid field value in a t5 record is the GUID of a record in t4, and the ParentGuid field value in that t4 record is the GUID of a record in t1, then that particular t1 record is considered to have reached its final iteration. 
Similarly, ParentGUID > GUID links that will be considered t1 > t5, initial > final iterations include: ``` t1 > t2 > t3 > t4 > t5 t1 > t2 > t3 > t5 t1 > t2 > t4 > t5 t1 > t2 > t5 t1 > t3 > t4 > t5 t1 > t3 > t5 t1 > t4 > t5 t1 > t5 ``` Or represented graphically: ![Possible relationship paths from T1 to T5](https://i.stack.imgur.com/vpoYJ.png) Consider the following test schema: ``` CREATE TABLE Table1 ("GUID" TEXT, "Name" TEXT) ; CREATE TABLE Table2 ("GUID" TEXT, "ParentGUID" TEXT) ; CREATE TABLE Table3 ("GUID" TEXT, "ParentGUID" TEXT) ; CREATE TABLE Table4 ("GUID" TEXT, "ParentGUID" TEXT) ; CREATE TABLE Table5 ("GUID" TEXT, "Name" TEXT, "Amount" REAL, "ParentGUID" TEXT) ; INSERT INTO Table1 ("GUID", "Name") VALUES ('ABC', 'A1') ; INSERT INTO Table1 ("GUID", "Name") VALUES ('DEF', 'A2') ; INSERT INTO Table1 ("GUID", "Name") VALUES ('GHI', 'A3') ; INSERT INTO Table2 ("GUID", "ParentGUID") VALUES ('JKL', 'GHI') ; INSERT INTO Table2 ("GUID", "ParentGUID") VALUES ('MNO', '') ; INSERT INTO Table2 ("GUID", "ParentGUID") VALUES ('PQR', 'GHI') ; INSERT INTO Table3 ("GUID", "ParentGUID") VALUES ('STU', 'MNO') ; INSERT INTO Table3 ("GUID", "ParentGUID") VALUES ('STU', 'GHI') ; INSERT INTO Table3 ("GUID", "ParentGUID") VALUES ('VWX', 'PQR') ; INSERT INTO Table4 ("GUID", "ParentGUID") VALUES ('YZA', 'VWX') ; INSERT INTO Table4 ("GUID", "ParentGUID") VALUES ('BCD', '') ; INSERT INTO Table4 ("GUID", "ParentGUID") VALUES ('EFG', 'GHI') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('HIJ', 'EFG', -500, 'E3') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('KLM', 'YZA', -702, 'E2') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('NOP', '', 220, 'E8') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('QRS', 'GHI', 601, 'E4') ; ``` What I'd like to do is get all records in t1, and then show the total of all related Amount fields from t5 (related in any of the ways listed above), and the 
group\_concat of all the related Name fields from t5. Using the above sample schema, it would look something like: ``` t1.Name total(t5.Amount) group_concat(t5.Name) -------------------------------------------------- A1 0.00 A2 0.00 A3 -601.00 E2,E3,E4 ``` I've tried a bunch of different joins but nothing has worked...either I get too many items in my Total/Group\_Concat cells (too high a total due to adding items multiple times, and multiple repeat names such as "E4,E4,E4,E4,E2,E3,E3,E4,E4..."), or I can only get the one directly linked item to t1 from t5 (601.00, E4). For example, the query just gives me the E4/601.00 result for t1 record GHI: ``` SELECT DISTINCT t1.guid "OriginalGuid", t1.name "OriginalName", TOTAL(t5."Amount") as "TotalAmount", group_concat(t5.Name) AS "FinalNames" FROM Table1 t1 LEFT JOIN Table5 t5 ON (t1.GUID=t5.ParentGUID) LEFT JOIN Table4 t4 ON (t1.GUID=t4.ParentGuid AND t5.ParentGuid=t4.Guid) LEFT JOIN Table3 t3 ON (t1.GUID=t3.ParentGuid AND (t4.ParentGuid=t3.Guid OR t5.ParentGuid=t3.Guid)) LEFT JOIN Table2 t2 ON (t1.GUID=t2.ParentGuid AND (t3.ParentGuid=t2.Guid AND ((t4.ParentGuid=t3.Guid And t5.ParentGuid=t4.guid) or (t5.ParentGuid=t3.Guid)) OR (t4.ParentGuid=t2.Guid and t5.ParentGuid=t4.Guid) OR (t5.ParentGuid=t2.Guid))) GROUP BY t1.GUID; ``` Sadly, I've been working on this for a good chunk of the weekend, and I haven't been able to figure out something that works and is reasonably performant (I had something that appeared to work okay for small datasets but took *minutes* against my full dataset, which is just too long - I've since lost the SQL for that though unfortunately). I'm continuing to work on a solution now, and if I find it I will post the answer here, but I'd greatly appreciate any assistance/ideas if they're out there! Here's my SQL Fiddle: <http://sqlfiddle.com/#!5/1a2ac/55> Thanks in advance for any help.
cha's answer was ok, but could be optimized by adding a temp table to store all the relations from table2 to table5.

```
CREATE TABLE TableRel ("GUID" TEXT, "ParentGUID" TEXT, "TB" TEXT);

insert into TableRel
select GUID, ParentGUID, 'TABLE2' FROM TABLE2 UNION ALL
select GUID, ParentGUID, 'TABLE3' FROM TABLE3 UNION ALL
select GUID, ParentGUID, 'TABLE4' FROM TABLE4 UNION ALL
select GUID, ParentGUID, 'TABLE5' FROM TABLE5;
```

**UPDATE**

Then you could use a recursive query to get all descendants from table1.

```
WITH RECURSIVE Table1Descendants(GUID, DescendantGUID, generation) as (
    select t1.GUID, Rel.GUID, 1
    from Table1 t1
    inner join TableRel rel on t1.GUID = Rel.ParentGUID
    UNION ALL
    select td.GUID, Rel.GUID, td.generation + 1
    from TableRel Rel
    inner join Table1Descendants td on td.DescendantGUID = Rel.ParentGUID
)
select t1.guid, t1.name, coalesce(sum(t5.Amount), 0)
from Table1 as t1
left join Table1Descendants on t1.GUID = Table1Descendants.GUID
left join Table5 as t5 on t5.GUID = Table1Descendants.DescendantGUID
group by t1.guid, t1.name
order by t1.name;
```

Or you could get all ancestors from table5.

```
WITH RECURSIVE Table1Ancestors(GUID, AncestorGUID) as (
    select t5.GUID, Rel.ParentGUID
    from Table5 t5
    inner join TableRel rel on t5.GUID = Rel.GUID
    UNION ALL
    select ta.GUID, Rel.ParentGUID
    from TableRel Rel
    inner join Table1Ancestors ta on ta.AncestorGUID = Rel.GUID
)
select t1.guid, t1.name, coalesce(sum(t5.Amount), 0)
from Table1 as t1
left join Table1Ancestors on t1.GUID = Table1Ancestors.AncestorGUID
left join Table5 as t5 on t5.GUID = Table1Ancestors.GUID
group by t1.guid, t1.name
order by t1.name;
```

But SQLite has supported recursive CTEs only since 3.8.3, and I don't have that version of SQLite, so here is the [SQLFiddle](http://sqlfiddle.com/#!15/13d28/15) tested with PostgreSQL. It has similar syntax for [recursive queries](http://sqlite.org/lang_with.html), but no `total` and `group_concat` functions.
And here is a non-recursive query ([SQLFiddle](http://sqlfiddle.com/#!5/63c70/4)) in case you don't have SQLite 3.8.3 or a later version:

```
select t1.guid "OriginalGuid", t1.name "OriginalName",
       TOTAL(t5."Amount") as "TotalAmount",
       group_concat(t5.Name) AS "FinalNames"
from Table1 as t1
left join (
    select t1.GUID, Rel.GUID as DescendantGUID, 1
    from Table1 t1
    inner join TableRel rel on t1.GUID = Rel.ParentGUID
    UNION ALL
    select t1.GUID, Rel2.GUID, 2
    from Table1 t1
    inner join TableRel rel1 on t1.GUID = Rel1.ParentGUID
    inner join TableRel rel2 on Rel1.GUID = Rel2.ParentGUID
    UNION ALL
    select t1.GUID, Rel3.GUID, 3
    from Table1 t1
    inner join TableRel rel1 on t1.GUID = Rel1.ParentGUID
    inner join TableRel rel2 on Rel1.GUID = Rel2.ParentGUID
    inner join TableRel rel3 on Rel2.GUID = Rel3.ParentGUID
    UNION ALL
    select t1.GUID, Rel4.GUID, 4
    from Table1 t1
    inner join TableRel rel1 on t1.GUID = Rel1.ParentGUID
    inner join TableRel rel2 on Rel1.GUID = Rel2.ParentGUID
    inner join TableRel rel3 on Rel2.GUID = Rel3.ParentGUID
    inner join TableRel rel4 on Rel3.GUID = Rel4.ParentGUID
) as Table1Descendants on t1.GUID = Table1Descendants.GUID
left join Table5 as t5 on t5.GUID = Table1Descendants.DescendantGUID
group by t1.guid, t1.name
```

Result:

```
OriginalGuid  OriginalName  TotalAmount  FinalNames
ABC           A1            0.0
DEF           A2            0.0
GHI           A3            -601.0       E3,E2,E4
```
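Not part of the original answer: the SQLite bundled with Python has supported `WITH RECURSIVE` for years, so the recursive variant can be checked end-to-end from Python. The `TableRel` rows below are the child-to-parent links the answer unions together from Table2 through Table5 of the question's sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (GUID TEXT, Name TEXT);
CREATE TABLE TableRel (GUID TEXT, ParentGUID TEXT);
CREATE TABLE Table5 (GUID TEXT, Name TEXT, Amount REAL, ParentGUID TEXT);
INSERT INTO Table1 VALUES ('ABC', 'A1'), ('DEF', 'A2'), ('GHI', 'A3');
-- child -> parent links from Table2..Table5 of the question's sample data
INSERT INTO TableRel VALUES
    ('JKL', 'GHI'), ('MNO', ''), ('PQR', 'GHI'),
    ('STU', 'MNO'), ('STU', 'GHI'), ('VWX', 'PQR'),
    ('YZA', 'VWX'), ('BCD', ''), ('EFG', 'GHI'),
    ('HIJ', 'EFG'), ('KLM', 'YZA'), ('NOP', ''), ('QRS', 'GHI');
INSERT INTO Table5 VALUES
    ('HIJ', 'E3', -500, 'EFG'), ('KLM', 'E2', -702, 'YZA'),
    ('NOP', 'E8', 220, ''), ('QRS', 'E4', 601, 'GHI');
""")

rows = conn.execute("""
    WITH RECURSIVE Descendants(RootGUID, GUID) AS (
        SELECT t1.GUID, r.GUID
        FROM Table1 t1 JOIN TableRel r ON r.ParentGUID = t1.GUID
        UNION ALL
        SELECT d.RootGUID, r.GUID
        FROM Descendants d JOIN TableRel r ON r.ParentGUID = d.GUID
    )
    SELECT t1.Name, TOTAL(t5.Amount), GROUP_CONCAT(t5.Name)
    FROM Table1 t1
    LEFT JOIN Descendants d ON d.RootGUID = t1.GUID
    LEFT JOIN Table5 t5 ON t5.GUID = d.GUID
    GROUP BY t1.GUID, t1.Name
    ORDER BY t1.Name
""").fetchall()
print(rows)  # A3 picks up -500, -702 and 601 through its descendants
```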
This query will do this. Basically, you need to UNION ALL all combinations (likely you have a limited number of possible combinations) and then just LEFT JOIN them to T1 and group\_concat the names: [SQL Fiddle](http://sqlfiddle.com/#!5/1a2ac/69) **SQLite (SQL.js) Schema Setup**: ``` CREATE TABLE Table1 ("GUID" TEXT, "Name" TEXT) ; CREATE TABLE Table2 ("GUID" TEXT, "ParentGUID" TEXT) ; CREATE TABLE Table3 ("GUID" TEXT, "ParentGUID" TEXT) ; CREATE TABLE Table4 ("GUID" TEXT, "ParentGUID" TEXT) ; CREATE TABLE Table5 ("GUID" TEXT, "Name" TEXT, "Amount" REAL, "ParentGUID" TEXT) ; INSERT INTO Table1 ("GUID", "Name") VALUES ('ABC', 'A1') ; INSERT INTO Table1 ("GUID", "Name") VALUES ('DEF', 'A2') ; INSERT INTO Table1 ("GUID", "Name") VALUES ('GHI', 'A3') ; INSERT INTO Table2 ("GUID", "ParentGUID") VALUES ('JKL', 'GHI') ; INSERT INTO Table2 ("GUID", "ParentGUID") VALUES ('MNO', '') ; INSERT INTO Table2 ("GUID", "ParentGUID") VALUES ('PQR', 'GHI') ; INSERT INTO Table3 ("GUID", "ParentGUID") VALUES ('STU', 'MNO') ; INSERT INTO Table3 ("GUID", "ParentGUID") VALUES ('STU', 'GHI') ; INSERT INTO Table3 ("GUID", "ParentGUID") VALUES ('VWX', 'PQR') ; INSERT INTO Table4 ("GUID", "ParentGUID") VALUES ('YZA', 'VWX') ; INSERT INTO Table4 ("GUID", "ParentGUID") VALUES ('BCD', '') ; INSERT INTO Table4 ("GUID", "ParentGUID") VALUES ('EFG', 'GHI') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('HIJ', 'EFG', -500, 'E3') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('KLM', 'YZA', -702, 'E2') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('NOP', '', 220, 'E8') ; INSERT INTO Table5 ("GUID", "ParentGUID", "Amount", "Name" ) VALUES ('QRS', 'GHI', 601, 'E4') ; ``` **Query 1**: ``` SELECT t1.GUID, group_concat(o.Name), COALESCE(SUM(o.Amount), 0.0) TotalAmount FROM Table1 t1 LEFT JOIN ( SELECT t1.GUID, t5.Name, t5.Amount FROM Table1 t1 INNER JOIN Table5 t5 ON (t1.GUID=t5.ParentGUID) UNION ALL SELECT t1.GUID, t5.Name, t5.Amount 
FROM Table1 t1 INNER JOIN Table4 t4 ON (t1.GUID=t4.ParentGuid) INNER JOIN Table5 t5 ON (t5.ParentGuid=t4.Guid) UNION ALL SELECT t1.GUID, t5.Name, t5.Amount FROM Table1 t1 INNER JOIN Table3 t3 ON (t1.GUID=t3.ParentGuid) INNER JOIN Table5 t5 ON (t5.ParentGuid=t3.Guid) UNION ALL SELECT t1.GUID, t5.Name, t5.Amount FROM Table1 t1 INNER JOIN Table3 t3 ON (t1.GUID=t3.ParentGuid) INNER JOIN Table4 t4 ON (t4.ParentGuid=t3.Guid) INNER JOIN Table5 t5 ON (t5.ParentGuid=t4.Guid) UNION ALL SELECT t1.GUID, t5.Name, t5.Amount FROM Table1 t1 INNER JOIN Table2 t2 ON (t1.GUID=t2.ParentGuid) INNER JOIN Table5 t5 ON (t5.ParentGuid=t2.Guid) UNION ALL SELECT t1.GUID, t5.Name, t5.Amount FROM Table1 t1 INNER JOIN Table2 t2 ON (t1.GUID=t2.ParentGuid) INNER JOIN Table4 t4 ON (t4.ParentGuid=t2.Guid) INNER JOIN Table5 t5 ON (t5.ParentGuid=t4.Guid) UNION ALL SELECT t1.GUID, t5.Name, t5.Amount FROM Table1 t1 INNER JOIN Table2 t2 ON (t1.GUID=t2.ParentGuid) INNER JOIN Table3 t3 ON (t3.ParentGuid=t2.Guid) INNER JOIN Table5 t5 ON (t5.ParentGuid=t3.Guid) UNION ALL SELECT t1.GUID, t5.Name, t5.Amount FROM Table1 t1 INNER JOIN Table2 t2 ON (t1.GUID=t2.ParentGuid) INNER JOIN Table3 t3 ON (t3.ParentGuid=t2.Guid) INNER JOIN Table4 t4 ON (t4.ParentGuid=t3.Guid) INNER JOIN Table5 t5 ON (t5.ParentGuid=t4.Guid) ) o ON t1.GUID = o.GUID GROUP BY t1.GUID ``` **[Results](http://sqlfiddle.com/#!5/1a2ac/69/0)**: ``` | GUID | group_concat(o.Name) | TotalAmount | |------|----------------------|-------------| | ABC | | 0.0 | | DEF | | 0.0 | | GHI | E2,E3,E4 | -601.0 | ```
SQLite - Aggregating related data (tree-like) between 2 tables with multiple potential intermediate relationships
[ "", "sql", "sqlite", "join", "aggregate-functions", "multiple-tables", "" ]
I have a stored procedure that selects all messages for a specific user, except for ones in a table: ``` ALTER PROCEDURE [dbo].[sp_LA_SelectAllUnreadMessagesForUser] @UserID INT = 1 AS BEGIN SELECT * FROM Message m WHERE m.ID NOT IN (SELECT mt.MessageID FROM MessageTracking mt WHERE mt.SubscriberID = @UserID ) AND m.DateExpires < GETDATE() /* With the other check */ ORDER BY m.DateCreated DESC END ``` I need to extend this to not include 'Expired messages'. There is a `DateExpires` column on the `Message` table, however - for everlasting messages, the `DateExpires` is set to `1900-01-01 00:00:00.000` Is there any way to check if the date is after a certain date, except for when its `1900-01-01 00:00:00.000` ?
Try this:

```
SELECT *
FROM Message m 
WHERE m.ID NOT IN (SELECT mt.MessageID
                   FROM MessageTracking mt 
                   WHERE mt.SubscriberID = @UserID) 
  AND (m.DateExpires > GETDATE()
       OR m.DateExpires = '1900-01-01 00:00:00.000')
ORDER BY m.DateCreated DESC
```

You want to check if the expiry date is after the current date, or if the particular date is equal to your special value. Note the parentheses around the `OR`: without them the `OR` would bypass the `NOT IN` filter entirely.
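Not from the original answer: the grouping of the `OR` is the easy thing to get wrong here, so here is a small `sqlite3` check of the sentinel-date idea (ISO-formatted text dates compare correctly, and `datetime('now')` stands in for `GETDATE()`).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Message (ID INTEGER, DateExpires TEXT);
INSERT INTO Message VALUES
    (1, '2099-12-31 00:00:00'),   -- expires in the future: keep
    (2, '2000-01-01 00:00:00'),   -- already expired: drop
    (3, '1900-01-01 00:00:00');   -- sentinel "never expires": keep
""")

unexpired = [r[0] for r in conn.execute("""
    SELECT ID FROM Message
    WHERE (DateExpires > datetime('now')
           OR DateExpires = '1900-01-01 00:00:00')
    ORDER BY ID
""")]
print(unexpired)
```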
edit: ``` select * where ( DateExpires = '19000101' OR DateExpires > getdate() ) ``` which wil list any that have a "never expire" condition plus any that expire after the current date/time
TSQL Check for Date difference except for Min Value
[ "", "sql", "sql-server", "" ]
I have been trying to understand what is wrong with the following view, and unfortunately I was not able to find my answer anywhere, other than using triggers, which I would like to avoid. Given the following view, when I try to insert into it I get the error above, however if I remove the inner join to the Company table everything seems to work just fine: ``` CREATE VIEW [dbo].[vwCheckBookingToCheck] WITH SCHEMABINDING AS SELECT [checkUser].[CheckID] , [checkUser].[CheckToTypeID] , [checkUser].[CheckNumber] , [checkUser].[CheckDate] , [checkUser].[CheckAmount] , [checkUser].[CheckStatusID] , [checkUser].[CheckAcceptedBy] , [checkUser].[CreatedBy] , [checkUser].[CreatedDateTime] , [checkUser].[CheckToUserID] [ToID], [checkUser].[CheckFromCompanyID] [FromID], [companyFrom].[CompanyName] FROM [dbo].[CheckUser] [checkUser] INNER JOIN [dbo].[Company] [companyFrom] ON [companyFrom].[CompanyID] = [checkUser].[CheckFromCompanyID] UNION ALL SELECT [checkCompany].[CheckID] , [checkCompany].[CheckToTypeID] , [checkCompany].[CheckNumber] , [checkCompany].[CheckDate] , [checkCompany].[CheckAmount] , [checkCompany].[CheckStatusID] , [checkCompany].[CheckAcceptedBy] , [checkCompany].[CreatedBy] , [checkCompany].[CreatedDateTime] , [checkCompany].[CheckToCompanyID] [ToID], [checkCompany].[CheckFromCompanyID] [FromID] , [companyFrom].[CompanyName] FROM [dbo].[CheckCompany] [checkCompany] INNER JOIN [dbo].[Company] [companyFrom] ON [companyFrom].[CompanyID] = [checkCompany].[CheckFromCompanyID] GO ``` Here is my insert, I am only inserting in [CheckUser] or [CheckCompany]: ``` INSERT INTO [dbo].[vwCheckBookingToCheck] ( [CheckToTypeID] , [CheckNumber] , [CheckDate] , [CheckAmount] , [CheckStatusID] , [CheckAcceptedBy] , [CreatedBy] , [CreatedDateTime] , [ToID] , [FromID] ) SELECT 2, 'Test' , -- CheckNumber - varchar(255) '2014-08-23 20:07:42' , -- CheckDate - date 1233 , -- CheckAmount - money 0 , -- CheckStatusID - int 1 , -- CheckAcceptedBy - int 1 , -- CreatedBy - int 
'2014-08-23 20:07:42' , -- CreatedDateTime - datetime 1, -- ToID - int 1 -- FromID - int ``` CheckToTypeID is my check constraint, is there any way to make this view work with inner joins? Again, if I remove the inner joins I am able to get it to work, but I would like to keep them if possible. I am using SQL Server 2012, any help is appreciated. Thanks, Paul
This is a bit long for a comment. I cannot readily find the 2012 documentation on this subject, but the [SQL Server 2008 documentation](http://technet.microsoft.com/en-us/library/ms187067(v=sql.105).aspx) says: > A view is considered an updatable partitioned view when the view is a > set of SELECT statements whose individual result sets are combined > into one using the UNION ALL statement. **Each SELECT statement > references one SQL Server base table.** You have two tables in the `from` clause, so it is not updatable. It is a read-only view. I am not aware that this was changed in 2012.
You can work around this by adding an ["instead of" trigger](https://technet.microsoft.com/en-us/library/ms175089(v=sql.105).aspx) to the view and update the underlying tables instead.
Update or insert of view or function failed because it contains a derived or constant field
[ "", "sql", "sql-server-2012", "" ]
I have a Oracle table with 1M rows in it. I have a subset of oracle table in SAS with 3000 rows in it. I want to delete these 3000 rows from the oracle table. ``` Oracle Table columns are Col1 Col2 Col3 timestamp SAS Table columns are: Col1 Col2 Col3 ``` The only additional column that Oracle table has is a timestamp. This is the code that I using currently, but it's taking a lot of time. ``` libname ora oracle user='xxx' password='ppp' path = abcd; PROC SQL; DELETE from ora.oracle_table a where exists (select * from sas_table b where a.col1=B.col1 AND a.col2=B.col2 AND A.col3=B.col3 ); QUIT; ``` Please advise as to how to make it faster and more efficient. Thank You !
One option is to push your SAS table up to Oracle, then use oracle-side commands to perform the delete. I'm not sure exactly how SAS will translate the above code to DBMS-specific code, but it might be pushing a lot of data over the network depending on how it's able to optimize the query; in particular, if it has to perform the join locally instead of on the database, that's going to be very expensive. Further, Oracle can probably do the delete faster using entirely native operations. IE: ``` libname ora ... ; data ora.gtt_tableb; *or create a temporary or GT table in Oracle and insert into it via proc sql; set sas_tableb; run; proc sql; connect to oracle (... ); execute ( delete from ... ) by connection to oracle; quit; ``` That may offer significant performance improvements over using the LIBNAME connection. Further improvements may be possible if you take full advantage of an index on your PKs, if you don't already have that.
@Joe has a good answer. Another way would be to do something like this. This MIGHT allow the libname engine to pass all the work to Oracle instead of retrieving rows back to SAS (which is where your time is going). Created some test data to show ``` data test1 test2; do i=1 to 10; do j=1 to 10; do k=1 to 10; output; end; end; end; run; data todel; do i=1 to 3; do j=1 to 3; do k=1 to 3; output; end; end; end; run; proc sql noprint; delete from test1 as a where a.i in (select distinct i from todel) and a.j in (select distinct j from todel) and a.k in (select distinct k from todel); quit; proc sql noprint; delete from test2 as a where exists (select * from todel as b where a.i=b.i and a.j=b.j and a.k=b.k); quit; ```
Delete specific rows in Oracle Database using SAS table
[ "", "sql", "oracle", "sas", "" ]
I try to Write the SQL code to create the table named ‘EMP\_1’. This table is a subset of the EMPLOYEE table, and the structure of the table is summarized as shown below. This is the information: ``` Attribute Name Data Type Remarks EMP_NUM CHAR(3) PK EMP_LNAME VARCHAR(15) Not Null EMP_FNAME VARCHAR(15) Not Null EMP_INITIAL CHAR(1) EMP_HIREDATE DATE JOB_CODE CHAR(3) FK (from JOB table) ``` My code: ``` CREATE TABLE EMP_1 ( EMP_NUM CHAR(3) PRIMARY KEY, EMP_LNAME VARCHAR(15) Not Null, EMP_FNAME VARCHAR(15) Not Null, EMP_INITIAL CHAR(1) , EMP_HIREDATE DATETIME, JOB_CODE CHAR(3) FOREIGN KEY (JOB_CODE) REFERENCES JOB(JOB_CODE) ); ``` I keep getting CONSTRAINT error
I think you might be missing a comma before the constraint. This worked when I tried it: ``` CREATE TABLE EMP_1 ( EMP_NUM CHAR(3) PRIMARY KEY, EMP_LNAME VARCHAR(15) Not Null, EMP_FNAME VARCHAR(15) Not Null, EMP_INITIAL CHAR(1), EMP_HIREDATE DATETIME, JOB_CODE CHAR(3), CONSTRAINT FK_JOBS FOREIGN KEY (JOB_CODE) REFERENCES JOB(JOB_CODE) ); ```
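The two usual fixes here (a comma before the table-level constraint, and matching key types) can be checked on any engine that enforces foreign keys. A small sqlite3 sketch, with invented sample values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when asked
con.executescript("""
CREATE TABLE JOB (JOB_CODE TEXT PRIMARY KEY);
CREATE TABLE EMP_1 (
  EMP_NUM   TEXT PRIMARY KEY,
  EMP_LNAME TEXT NOT NULL,
  JOB_CODE  TEXT,
  -- note the comma before CONSTRAINT: it is a separate item in the list
  CONSTRAINT FK_JOBS FOREIGN KEY (JOB_CODE) REFERENCES JOB(JOB_CODE)
);
INSERT INTO JOB VALUES ('500');
INSERT INTO EMP_1 VALUES ('101', 'Smith', '500');
""")
rows_ok = con.execute("SELECT COUNT(*) FROM EMP_1").fetchone()[0]
# A JOB_CODE with no parent row should now be rejected.
try:
    con.execute("INSERT INTO EMP_1 VALUES ('102', 'Jones', '999')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
```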
Make sure your PRIMARY KEY and FOREIGN KEY have the same DATA TYPE.
I cannot create a simple table
[ "", "sql", "ms-access-2007", "create-table", "" ]
I have 3 tables: A, B and C. They all share a common column, "name". I need to check if name 7, for example, is in any of them. ``` A name 5 name 6 name 7 B name 8 name 9 name 10 C name 6 name 7 name 8 name 9 ``` would yield: ``` name 7 name 7 ``` Because table A has 7 and C also has 7, so it sums them up. What SQL would make that combination?
Just use *union all* to stack up the results from A, B and C: ``` select name, value from A where value = 7 union all select name, value from B where value = 7 union all select name, value from C where value = 7 ```
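A quick way to sanity-check this is with an in-memory database. The sketch below uses Python's sqlite3; the letter values in the `name` column are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (name TEXT, value INT);
CREATE TABLE B (name TEXT, value INT);
CREATE TABLE C (name TEXT, value INT);
INSERT INTO A VALUES ('A',5),('A',6),('A',7);
INSERT INTO B VALUES ('B',8),('B',9),('B',10);
INSERT INTO C VALUES ('C',6),('C',7),('C',8),('C',9);
""")
# UNION ALL keeps duplicates, so 7 appears once per table that holds it.
rows = con.execute("""
    SELECT name, value FROM A WHERE value = 7
    UNION ALL
    SELECT name, value FROM B WHERE value = 7
    UNION ALL
    SELECT name, value FROM C WHERE value = 7
""").fetchall()
```

Two rows come back, one from A and one from C, which is exactly the "name 7 / name 7" result in the question.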
You can use a `union` to merge the tables: ``` select * from ( select name , value from A union all select name , value from B union all select name , value from C ) SubQueryAlias where value = 7 ```
how to combine results from 3 tables into one?
[ "", "mysql", "sql", "" ]
Currently I am trying to output the top row for 2 conditions. One is max and one is min. Current code ``` Select * from (MY SELECT STATEMENT order by A desc) where ROWNUM <= 1 ``` **UPDATE** I am now able to do both conditions, but I need A to be the highest; if A is the same, then check for the lowest B. E.g. let's say there are 2 rows, A is 100 for both, and B is 50 for one and 60 for the other. In this case 100:50 should be chosen, because A is the same and its B is lowest. E.g. let's say there are 2 rows, A is 100 for one and 90 for the other; since one is higher there is no need to check B. I tried using max and min but this method seems to work better, any suggestions?
Well, after your clarification, you are looking for one record. With Max A. And the smallest B, in case there is more than one record with MAX A. This is simply: ``` Select * from (MY SELECT STATEMENT order by A desc, B) where ROWNUM = 1; ``` This sorts by A descending first, so you get all maximal A records first. Then it sorts by B, so inside each A group you get the least B first. This gives you the desired A record first, no matter if the found A is unique or not.
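Outside Oracle the same "sort by A descending, then B ascending, keep one row" idea is usually written with `LIMIT`. A minimal sqlite3 sketch with invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (A INT, B INT)")
con.executemany("INSERT INTO t VALUES (?,?)", [(100, 60), (100, 50), (90, 10)])
# Highest A first; ties on A are broken by the lowest B. This is the
# same ordering the ROWNUM = 1 query relies on, with LIMIT as the
# sqlite equivalent of ROWNUM.
top = con.execute("SELECT A, B FROM t ORDER BY A DESC, B LIMIT 1").fetchone()
```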
or avoid the vagaries of rownum and go for row\_number() instead: ``` SELECT * FROM ( SELECT * , ROW_NUMBER() OVER (ORDER BY A DESC) adesc , ROW_NUMBER() OVER (ORDER BY B ASC) basc FROM SomeQuery ) WHERE adesc = 1 OR basc = 1 ``` --- footnote: `select *` is a convenience only, please replace with the actual columns required along with table names etc.
How to use ROWNUM for a maximum and another minimum ordering in ORACLE?
[ "", "sql", "oracle", "plsql", "" ]
Ok, I typed this into Google and found no answer, so here comes my question. Assume that inside a stored procedure I have the following selects. Would it make any difference putting `;` at the end of the select statements or not? I mean, if I do not put `;` at the end of a statement, what happens? And if I put it, what happens? ``` begin declare @param_1 int; declare @param_2 int; select @param_1 = col1 from mytable select @param_2 = col2 from mytable end ``` Case 1 above, no ; at the end ``` begin declare @param_1 int; declare @param_2 int; select @param_1 = col1 from mytable; select @param_2 = col2 from mytable; end ``` Case 2 above, ; at the end **I am not asking about this particular case. It will obviously have no effect on this example, but at run time what difference does it make? I mean, does it complete one statement and then move to the next, or what?** Thanks for the answers
The `;` terminates a statement. In most cases it's optional. Whether you terminate a statement implicitly (without a semicolon) or explicitly (with a semicolon) makes no difference. [MSDN says:](http://msdn.microsoft.com/en-us/library/ms177563.aspx) > Transact-SQL statement terminator. Although the semicolon is not > required for most statements in this version of SQL Server, it will be > required in a future version. Here's an example where `;` is not currently optional. This will fail: ``` select 6 with t1 as (select 7 as col1) select * from t1 ``` While this will work: ``` select 6 ; with t1 as (select 7 as col1) select * from t1 ``` A `with` or `merge` statement requires the preceding statement to be explicitly terminated.
As I understand it, it means that the statement ending with the ';' must complete/finish/terminate before the next line will begin executing.
Does ; mean anything inside a SQL Server stored procedure
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I have an `emp` table like this ``` EMPNO | JOB | DEPTNO _____________________ 1 | A | 10 2 | B | 20 3 | C | 10 4 | A | 20 ``` I want to write a query to List the jobs common to department 10 and 20. So the output as per the data provided should be ``` A ``` which is common to both deptno 10 and 20 I am trying this but it doesn't give me any output. ``` select job, deptNo from emp group by job having deptno = 20 and deptno = 10; /*6*/ ```
**Try this** ``` select job from emp e1 where e1.deptno = 10 and exists ( select 1 from emp e2 where e2.deptno = 20 and e1.job = e2.job ) group by job; ```
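Using the sample rows from the question, the query can be exercised against an in-memory database (sqlite3 here, purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (empno INT, job TEXT, deptno INT)")
con.executemany("INSERT INTO emp VALUES (?,?,?)",
                [(1, 'A', 10), (2, 'B', 20), (3, 'C', 10), (4, 'A', 20)])
# Jobs present in dept 10 that also exist in dept 20.
jobs = con.execute("""
    SELECT job FROM emp e1
    WHERE e1.deptno = 10
      AND EXISTS (SELECT 1 FROM emp e2
                  WHERE e2.deptno = 20 AND e1.job = e2.job)
    GROUP BY job
""").fetchall()
```

Only job `A` qualifies, matching the expected output.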
You can use an aggregate function in the `HAVING` clause with the condition you need: ``` SELECT job FROM emp GROUP BY job HAVING COUNT(CASE WHEN deptNo IN (10, 20) THEN 1 END) = 2 ```
SQL select records common to a column
[ "", "sql", "oracle", "" ]
I have the following first three columns of data in a select statement, I am trying to add the "total" column: ``` Customer ReportingCategory SumOfProfit Total ABC 1 10 60 ABC 2 25 60 ABC 4 25 60 ``` So right now, I am basically selecting Customer, ReportingCategory, and SumOfProfit and grouping by Customer, ReportingCategory, and summing SumOfProfit (this is selecting from a sub query). I want to add the total column to look just as it does above. So it sums the entire sum of profit for the customer but still keeps the reporting categories and their individual sum of profit. Is this possible?
Try using a [windowing function](http://msdn.microsoft.com/en-us/library/ms189461(v=sql.100).aspx). ``` select your_original_columns, sum(SumOfProfit) OVER(PARTITION BY Customer) AS 'Total' ... ``` Now, instead of SumOfProfit you likely will need your sub query, but the idea of a windowing function is to return an aggregate over a different range than your group by, which is what you want.
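Here is a runnable sketch of the same `SUM(...) OVER (PARTITION BY ...)` idea, using Python's sqlite3 (window functions need SQLite 3.25 or newer) and the sample numbers from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE p (Customer TEXT, Category INT, Profit INT)")
con.executemany("INSERT INTO p VALUES (?,?,?)",
                [("ABC", 1, 10), ("ABC", 2, 25), ("ABC", 4, 25)])
# Each row keeps its own Profit, while Total repeats the per-customer sum.
rows = con.execute("""
    SELECT Customer, Category, Profit,
           SUM(Profit) OVER (PARTITION BY Customer) AS Total
    FROM p ORDER BY Category
""").fetchall()
```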
You can likely just add the `Total` field to your existing query using `OVER()`: `SUM(Profit) OVER() AS Total` If this needs to be a total per Customer or some other set of fields, you'll add `PARTITION BY`: `SUM(Profit) OVER(PARTITION BY Customer) AS Total` You could also use a subquery and another aggregation.
Getting the sum of a column based on another column
[ "", "sql", "sql-server-2008", "" ]
I have two tables, `products` and `meta`. They are in relation 1:N where each product row has at least one meta row via foreign key. (viz. SQLfiddle: <http://sqlfiddle.com/#!15/c8f34/1>) I need to join these two tables but i need to filter only unique products. When I try this query, everything is ok (4 rows returned): ``` SELECT DISTINCT(product_id) FROM meta JOIN products ON products.id = meta.product_id ``` but when I try to select all columns the DISTINCT rule no longer applies to results, as 8 rows instead of 4 is returned. ``` SELECT DISTINCT(product_id), * FROM meta JOIN products ON products.id = meta.product_id ``` I have tried many approaches like trying to `DISTINCT` or `GROUP BY` on sub-query but always with same result.
While retrieving all or most rows from a table, the fastest way for this type of query typically is to aggregate / disambiguate *first* and join *later*: ``` SELECT * FROM products p JOIN ( SELECT DISTINCT ON (product_id) * FROM meta ORDER BY product_id, id DESC ) m ON m.product_id = p.id; ``` The more rows in `meta` per row in `products`, the bigger the impact on performance. Of course, you'll want to add an `ORDER BY` clause in the subquery to define *which* row to pick from each set in the subquery. @Craig and @Clodoaldo already told you about that. I am returning the `meta` row with the highest `id`. [SQL Fiddle.](http://sqlfiddle.com/#!15/c8f34/39) Details for `DISTINCT ON`: * [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) ### Optimize performance Still, this is not always the fastest solution. Depending on data distribution there are various other query styles. For this simple case involving another join, this one ran considerably faster in a test with big tables: ``` SELECT p.*, sub.meta_id, m.product_id, m.price, m.flag FROM ( SELECT product_id, max(id) AS meta_id FROM meta GROUP BY 1 ) sub JOIN meta m ON m.id = sub.meta_id JOIN products p ON p.id = sub.product_id; ``` If you wouldn't use the non-descriptive `id` as column names, we would not run into naming collisions and could simply write `SELECT p.*, m.*`. (I *never* use `id` as column name.) If performance is your paramount requirement, consider more options: * a [`MATERIALIZED VIEW`](http://www.postgresql.org/docs/current/interactive/sql-creatematerializedview.html) with pre-aggregated data from `meta`, if your data does not change (much). * a recursive CTE emulating a [**loose index scan**](https://wiki.postgresql.org/wiki/Loose_indexscan) for a *big* `meta` table with *many* rows per product (relatively few distinct `product_id`); this is the only way I know to use an index for a `DISTINCT` query over the whole table.
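The second query style above (aggregate to `max(id)` first, join later) is portable beyond Postgres, since it avoids `DISTINCT ON`. A small sqlite3 sketch with invented product/meta rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE meta (id INTEGER PRIMARY KEY, product_id INT, price INT);
INSERT INTO products VALUES (1,'widget'),(2,'gadget');
INSERT INTO meta VALUES (1,1,10),(2,1,12),(3,2,99);
""")
# Pick the meta row with the highest id per product, then join back.
rows = con.execute("""
    SELECT p.id, p.name, m.price
    FROM (SELECT product_id, MAX(id) AS meta_id
          FROM meta GROUP BY product_id) sub
    JOIN meta m ON m.id = sub.meta_id
    JOIN products p ON p.id = sub.product_id
    ORDER BY p.id
""").fetchall()
```

Product 1 gets the price from its latest meta row (id 2), not the earlier one.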
I think you might be looking for [`DISTINCT ON`, a PostgreSQL extension feature](http://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT): ``` SELECT DISTINCT ON(product_id) * FROM meta INNER JOIN products ON products.id = meta.product_id; ``` <http://sqlfiddle.com/#!15/c8f34/18> However, note that without an `ORDER BY` the results are not guaranteed to be consistent; the database can pick any row it wants from the matching rows.
GROUP or DISTINCT after JOIN returns duplicates
[ "", "sql", "postgresql", "join", "group-by", "distinct", "" ]
I'm trying to show ages according to a specific range of ages. Here is the [demo](http://sqlfiddle.com/#!2/e0fe4/3): ``` CREATE TABLE clients (date_birth date, date_anniversary date); INSERT INTO clients (date_birth, date_anniversary) VALUES ('1991-01-04',NULL ), ('1992-01-05',NULL ), ('1993-01-06',NULL ), ('1994-01-07',NULL ), ('1995-01-08',NULL ), ('1996-01-09',NULL ), ('1997-01-10',NULL ), ('1998-01-11',NULL ), ('1999-08-12',NULL ) ; ``` Here is the query; it shows all ages converted. ``` SET @start:='0'; SET @end:='22'; SELECT YEAR(CURDATE())- year(date_birth) AS ages FROM clients ``` I'm trying to show ages between 0 AND 22. I tried this [demo](http://sqlfiddle.com/#!2/e0fe4/6): ``` SET @start:='0'; SET @end:='22'; SELECT YEAR(CURDATE())- year(date_birth) AS ages FROM clients WHERE year(date_birth) >= @start AND year(date_birth) <= @end ``` Can somebody please help or advise me? Thanks in advance.
Your query should be ``` SELECT YEAR(CURDATE())- year(date_birth) AS ages FROM clients WHERE date_birth <= (curdate() - interval @start year) and date_birth >= (curdate() - interval @end year) ``` This will make use of your index on date\_birth as well (if any).
Change your query to be this: ``` SET @start:='0'; SET @end:='22'; SELECT YEAR(CURDATE())- year(date_birth) AS ages FROM clients WHERE YEAR(CURDATE())- year(date_birth) >= @start AND YEAR(CURDATE())- year(date_birth) <= @end ```
How can I show ages according to specific years?
[ "", "mysql", "sql", "" ]
I have a table with some dates. I need a query which will return the max (last) date from this table and last date of quarter this max date belongs to. So for data i table ``` ID| EDATE --+---------- 1|2014-03-06 2|2014-10-12 ``` this query should return 2014-10-12 and 2014-12-31.
I found the simplest answer: ``` SELECT MAKEDATE(YEAR(edate),1) + INTERVAL QUARTER(edate) QUARTER - INTERVAL 1 DAY ``` This query takes the first day of year, adds quarters to it and subtracts 1 day to get the last day in wanted quarter. So the required query should look like: ``` SELECT MAX(edate), MAKEDATE(YEAR(MAX(edate)),1) + INTERVAL QUARTER(MAX(edate)) QUARTER - INTERVAL 1 DAY FROM table ```
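The same "first day of year + N quarters - 1 day" arithmetic can be mirrored in application code. A small Python sketch (the helper name `quarter_end` is mine):

```python
from datetime import date, timedelta

def quarter_end(d: date) -> date:
    # QUARTER(d): 1..4
    q = (d.month - 1) // 3 + 1
    # First day of the following quarter, minus one day -- the same idea
    # as MAKEDATE(YEAR(d),1) + INTERVAL q QUARTER - INTERVAL 1 DAY.
    if q == 4:
        first_of_next = date(d.year + 1, 1, 1)
    else:
        first_of_next = date(d.year, 3 * q + 1, 1)
    return first_of_next - timedelta(days=1)
```

For the sample data, `2014-10-12` falls in Q4, whose last day is `2014-12-31`.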
As I understand you want the last day of the quarter, so 31 March, 30 June, 30 Sept, 31 Dec? So you can use the answer from Gordon Linoff and adjust it to do that. You only need a case statement on month(date) and concat that with the year. <http://dev.mysql.com/doc/refman/5.1/de/control-flow-functions.html> ``` str_to_date( concat( year(edate), (case when month(edate) in (1, 2, 3) then '-03-31' when month(edate) in (4, 5, 6) then '-06-30' when month(edate) in (7, 8, 9) then '-09-30' else '-12-31' end) ), '%Y-%m-%d' ) ```
Last date in quarter MySQL
[ "", "mysql", "sql", "" ]
## Table:**City** ``` City_Name (PK) State_Name (PK) Population -------------- --------------- ---------- Austin MA Austin TX Burlington VT Fresno CA Houston TX Independence KS Independence MO Independence PA Kansas CA Kansas MO Paris MO Paris OR Paris TX Paris VA Paso TX ``` The above table has a composite primary key (City\_Name and State\_Name). I want to **find the cities which occur in exactly two distinct States** (no more, no less). In this case, the result will be ``` City_Name --------- Austin Kansas ``` Other cities will not qualify, as they occur in only one state or in more than two states. This solution is required for practicing relational calculus problems and thus (unfortunately) **any aggregate function such as COUNT() cannot be used.** Looking forward to hearing from some kind-hearted people. NB - Not finding any clue where to start from, thus no effort could be shown :(
``` select distinct c1.city_name from city c1 join city c2 on c2.city_name = c1.city_name and c1.state_name <> c2.state_name where not exists (select 1 from city c3 where c3.city_name = c1.city_name and c3.state_name not in (c1.state_name, c2.state_name)); ``` SQLFiddle: <http://sqlfiddle.com/#!2/7d0901/8>
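The self-join plus `NOT EXISTS` query runs unchanged on other engines too. Here it is against a subset of the sample data, using Python's sqlite3 for a quick check:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE city (city_name TEXT, state_name TEXT)")
data = [("Austin", "MA"), ("Austin", "TX"), ("Burlington", "VT"),
        ("Independence", "KS"), ("Independence", "MO"), ("Independence", "PA"),
        ("Kansas", "CA"), ("Kansas", "MO")]
con.executemany("INSERT INTO city VALUES (?,?)", data)
# The join proves "at least two states"; NOT EXISTS rules out a third.
rows = con.execute("""
    SELECT DISTINCT c1.city_name
    FROM city c1
    JOIN city c2 ON c2.city_name = c1.city_name
                AND c1.state_name <> c2.state_name
    WHERE NOT EXISTS (SELECT 1 FROM city c3
                      WHERE c3.city_name = c1.city_name
                        AND c3.state_name NOT IN (c1.state_name, c2.state_name))
    ORDER BY c1.city_name
""").fetchall()
```

Burlington (one state) and Independence (three states) drop out, leaving Austin and Kansas, with no aggregate function anywhere.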
It appears as if mysql does not support minus/except: ``` -- two or more states select c1.city_name from city c1 join city c2 on c1.city_name = c2.city_name and c1.state_name<>c2.state_name minus -- three or more states select c1.city_name from city c1 join city c2 on c1.city_name = c2.city_name and c1.state_name<>c2.state_name join city c3 on c1.city_name = c3.city_name and c1.state_name <> c3.state_name and c2.state_name <> c3.state_name; ``` rewriting using not in: ``` -- two or more states select distinct c1.city_name from city c1 join city c2 on c1.city_name = c2.city_name and c1.state_name<>c2.state_name where c1.city_name not in ( -- three or more states select c1.city_name from city c1 join city c2 on c1.city_name = c2.city_name and c1.state_name<>c2.state_name join city c3 on c1.city_name = c3.city_name and c1.state_name <> c3.state_name and c2.state_name <> c3.state_name ); ```
MySql - Find values that occur EXACTLY two times without using aggregate function
[ "", "mysql", "sql", "database", "" ]
I'm working with an external Access database (.accdb) in VB.NET and I'm trying to limit the amount of times that my [Winforms] program pings the data since it slows it down pretty considerably. I figured I could do this by querying my in-memory dataset instead of continuously going back to the database itself. Though, unfortunately, I cannot seem to figure out how to query my in-memory dataset. I tried looking up LINQ but couldn't find any instructions on how to set it up. Am I missing something? Is there a better way to do this? Thanks so much! My starting code below... ``` locsql = "SELECT * FROM TABLE1" Dim objAdapter As New OleDb.OleDbDataAdapter(locsql, objconn) objAdapter.Fill(locdata, "BASE") ``` So I can easily do some basic things I need with `locdata("BASE").rows.item("Item")` but I have to do some stuff like `SELECT thing FROM TABLE1 WHERE this = this and that = that` and I would just rather not keep going back to the database if possible. Is it possible?
What I ended up doing was breaking down my data filters and creating new tables based on a series of loops and if/else statements. All of these responses were very helpful; they just didn't seem to work for what I needed. Thanks everyone!
You need to use the `Select` method on each datatable you want to query. The syntax is very similar to the part after the WHERE clause in you normal SQL query. IE ``` locdata.tables(0).Select("ID = 47") ``` This returns an array of datarows.
Querying datasets vb.net
[ "", "sql", "vb.net", "" ]
I have a site where users can share content. I have a table called "Posts" with two columns; `"Type"` and `"Media"`. The `"Type"` column identifies what kind of post it is. i.e. if it contains any photos or if it's just a plain post. If it contains a photo the `"Type"` column will have the value `"b"` and if it's a blank post with no photo it will have the value `"a"`. If the value of `"Type"` is equal to `"b"` the `"Media"` column will have an ID integer as the value identifies the photo attached to the post. In a separate table, `"Photos"` each photo that is uploaded is given a unique ID. This unique ID is put into the value of `"Media`" in the `"Posts"` table. I want to do an `INNER JOIN` in my `SQL` syntax **ONLY** if the value of `"Type"` is equal to `"b"`. Can anyone help point me in the correct direction?
Sounds like you really want to do a `Left Join`. A `Left Join` will give you the data if it exists, but otherwise, will show null data for photos. Maybe something like this: ``` Select * From Posts Left Join Photos on Posts.MEDIA = Photos.MEDIA and Photos.Type = 'b' ``` If this was just text, it'd look like: ``` Type Media Photo a w/e b w/e cats.jpg ```
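The effect of putting the extra condition in the `ON` clause (rather than `WHERE`) can be seen in a tiny sqlite3 sketch; column names like `MediaID` and `FileName` are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Posts (Type TEXT, Media INT);
CREATE TABLE Photos (MediaID INTEGER PRIMARY KEY, FileName TEXT);
INSERT INTO Photos VALUES (7, 'cats.jpg');
INSERT INTO Posts VALUES ('a', NULL), ('b', 7);
""")
# Type 'a' rows fail the ON condition, so they keep a NULL FileName
# instead of being filtered out -- that is the LEFT JOIN behaviour.
rows = con.execute("""
    SELECT p.Type, ph.FileName
    FROM Posts p
    LEFT JOIN Photos ph ON ph.MediaID = p.Media AND p.Type = 'b'
    ORDER BY p.Type
""").fetchall()
```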
The best way, as suggested by Philip, is to use an Outer Join, but if you really want to avoid an outer Join, you need to use a subquery... ``` SELECT Type, (Select case when p.type = 'p' then photo [or whatever column you want] else null end from Photos where id = p.Media) photo FROM posts p ```
SQL only join if conditions are met
[ "", "mysql", "sql", "join", "" ]
``` DELETE `table1`, `table2` FROM `table1` JOIN `table2` ON table2.col = table1.id WHERE table1.id = 1 ``` If table2 has no row with table2.col = table1.id then this SQL query doesn't work. If table2 has a row with col = table1.id then it works perfectly. How can I fix it? I need to delete the row from table1 even if no row with table2.col = table1.id exists in table2. Thanks in advance.
Use a `LEFT JOIN`. ``` DELETE `table1`, `table2` FROM `table1` LEFT JOIN `table2` ON table2.col = table1.id WHERE table1.id = 1 ``` The general rule is that a `DELETE` query will delete the same rows that would be returned if you did a `SELECT` query with the same parameters. Since you would use a `LEFT JOIN` in a `SELECT` to get rows from `table1` that have no match in `table2`, you have to do the same thing with `DELETE`. This general rule is also helpful if you want to test a `DELETE` safely. Perform the corresponding `SELECT`, and make sure it returns only the rows you want to delete.
You should use a LEFT JOIN in order to achieve this: ``` DELETE `table1`, `table2` FROM `table1` LEFT JOIN `table2` ON table2.col = table1.id WHERE table1.id = 1 ``` Take a look here for further documentation: <http://www.w3schools.com/sql/sql_join_left.asp> Hope it helps.
DELETE FROM 2 tables with JOIN (SQL)
[ "", "mysql", "sql", "" ]
I am trying to convert the HTML names like `&amp; &quot;` etc to their equivalent `CHAR` values using the SQL below. I was testing this in SQL Server 2012. Test 1 (This works fine): ``` GO DECLARE @inputString VARCHAR(MAX)= '&amp;testString&amp;' DECLARE @codePos INT, @codeEncoded VARCHAR(7), @startIndex INT, @resultString varchar(max) SET @resultString = LTRIM(RTRIM(@inputString)) SELECT @startIndex = PATINDEX('%&amp;%', @resultString) WHILE @startIndex > 0 BEGIN SELECT @resultString = REPLACE(@resultString, '&amp;', '&'), @startIndex=PATINDEX('%&amp;%', @resultString) END PRINT @resultString Go ``` Output: ``` &testString& ``` Test 2 (this isn't worked): Since the above worked, I have tried to extend this to deal with more characters as following: ``` DECLARE @htmlNames TABLE (ID INT IDENTITY(1,1), asciiDecimal INT, htmlName varchar(50)) INSERT INTO @htmlNames VALUES (34,'&quot;'),(38,'&amp;'),(60,'&lt;'),(62,'&gt;'),(160,'&nbsp;'),(161,'&iexcl;'),(162,'&cent;') -- I would load the full list of HTML names into this TABLE varaible, but removed for testing purposes DECLARE @inputString VARCHAR(MAX)= '&amp;testString&amp;' DECLARE @count INT = 0 DECLARE @id INT = 1 DECLARE @charCode INT, @htmlName VARCHAR(30) DECLARE @codePos INT, @codeEncoded VARCHAR(7), @startIndex INT , @resultString varchar(max) SELECT @count=COUNT(*) FROM @htmlNames WHILE @id <=@count BEGIN SELECT @charCode = asciiDecimal, @htmlname = htmlName FROM @htmlNames WHERE ID = @id SET @resultString = LTRIM(RTRIM(@inputString)) SELECT @startIndex = PATINDEX('%' + @htmlName + '%', @resultString) While @startIndex > 0 BEGIN --PRINT @resultString + '|' + @htmlName + '|' + NCHAR(@charCode) SELECT @resultString = REPLACE(@resultString, @htmlName, NCHAR(@charCode)) SET @startIndex=PATINDEX('%' + @htmlName + '%', @resultString) END SET @id=@id + 1 END PRINT @resultString GO ``` Output: ``` &amp;testString&amp; ``` I cannot figure out where I'm going wrong? Any help would be much appreciated. 
I am not interested in loading the string values into the application layer, applying `HTMLDecode`, and saving back to the database. EDIT: The line `SET @resultString = LTRIM(RTRIM(@inputString))` was inside the `WHILE`, so I was overwriting the result with `@inputString`. Thank you, YanireRomero. I like @RichardDeeming's solution too, but it didn't suit my needs in this case.
Here's a simpler solution that doesn't need a loop: ``` DECLARE @htmlNames TABLE ( ID INT IDENTITY(1,1), asciiDecimal INT, htmlName varchar(50) ); INSERT INTO @htmlNames VALUES (34,'&quot;'), (38,'&amp;'), (60,'&lt;'), (62,'&gt;'), (160,'&nbsp;'), (161,'&iexcl;'), (162,'&cent;') ; DECLARE @inputString varchar(max)= '&amp;test&amp;quot;&lt;String&gt;&quot;&amp;'; DECLARE @resultString varchar(max) = @inputString; -- Simple HTML-decode: SELECT @resultString = Replace(@resultString COLLATE Latin1_General_CS_AS, htmlName, NCHAR(asciiDecimal)) FROM @htmlNames ; SELECT @resultString; -- Output: &test&quot;<String>"& -- Multiple HTML-decode: SET @resultString = @inputString; DECLARE @temp varchar(max) = ''; WHILE @resultString != @temp BEGIN SET @temp = @resultString; SELECT @resultString = Replace(@resultString COLLATE Latin1_General_CS_AS, htmlName, NCHAR(asciiDecimal)) FROM @htmlNames ; END; SELECT @resultString; -- Output: &test"<String>"& ``` --- **EDIT:** Changed to `NCHAR`, as suggested by @tomasofen, and added a case-sensitive collation to the `REPLACE` function, as suggested by @TechyGypo.
For the sake of performance, this isn't something you should write as T-SQL statements, or as a SQL scalar-valued function. The .NET libraries provide excellent, fast, and, above all, **reliable** HTML decoding. In my opinion, you should implement this as a SQL CLR function, like this: ``` using Microsoft.SqlServer.Server; using System.Data.SqlTypes; using System.Net; public partial class UserDefinedFunctions { [Microsoft.SqlServer.Server.SqlFunction( IsDeterministic = true, IsPrecise = true, DataAccess = DataAccessKind.None, SystemDataAccess = SystemDataAccessKind.None)] [return: SqlFacet(MaxSize = 4000)] public static SqlString cfnHtmlDecode([SqlFacet(MaxSize = 4000)] SqlString input) { if (input.IsNull) return null; return System.Net.WebUtility.HtmlDecode(input.Value); } } ``` Then in your T-SQL, call it like this: ``` SELECT clr_schema.cfnHtmlDecode(column_name) FROM table_schema.table_name ```
SQL Server: HTML Decode based on the HTML names in a String input
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a WordPress site that has some meta\_key's assigned to my posts. I want to delete all posts in MySQL that have the `meta_key` value `acest_meta`. With this SQL command: ``` SELECT `post_id` FROM `wp_postmeta` WHERE `meta_key` LIKE 'acest_meta' ORDER BY `wp_postmeta`.`post_id` ASC ``` I can see all the IDs that contain that given meta\_key, but I don't have a delete check box near them. I see a message on top that says "Current selection does not contain a unique column. Grid edit, checkbox, Edit, Copy and Delete features are not available." How can I delete all those posts?
In MySQL you can use a join in a DELETE query: ``` DELETE p.*,w.* FROM `wp_postmeta` w JOIN wp_posts p ON(p.ID = w.`post_id`) WHERE w.`meta_key` LIKE 'acest_meta' ``` The above query will delete all posts, and their meta data, that contain a meta\_key of acest\_meta
you can delete with a select using an IN() statement and put the select in an IN like so ``` DELETE FROM `wp_postmeta` WHERE `post_id` IN ( SELECT `post_id` FROM `wp_postmeta` WHERE `meta_key` LIKE 'acest_meta' ) ```
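One caveat: MySQL itself rejects a subquery that reads the same table being deleted from (error 1093) unless it is wrapped in a derived table, but the `DELETE ... WHERE ... IN (SELECT ...)` pattern works as written on other engines. A sqlite3 sketch with made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE wp_postmeta (post_id INT, meta_key TEXT)")
con.executemany("INSERT INTO wp_postmeta VALUES (?,?)",
                [(1, 'acest_meta'), (1, 'other'), (2, 'other')])
# Remove every meta row for any post that has an 'acest_meta' key.
con.execute("""
    DELETE FROM wp_postmeta
    WHERE post_id IN (SELECT post_id FROM wp_postmeta
                      WHERE meta_key LIKE 'acest_meta')
""")
remaining = con.execute("SELECT post_id, meta_key FROM wp_postmeta").fetchall()
```

Post 1 loses both of its rows (not just the `acest_meta` one), because the filter is on `post_id`.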
Mysql delete all posts that have a given meta_key
[ "", "mysql", "sql", "wordpress", "" ]
Given below is my *select query* ``` select case when mobile||', '||phoneoff <>', ' then mobile||', '||phoneoff when phoneoff='' and mobile<>'' then mobile when phoneoff<>'' and mobile='' then phoneoff else 'NIL' end as "Concat_result",phoneoff,mobile from gtab12 ``` *Result* ``` Concat_result| phoneoff | mobile -------------------+------------+---------- 9544070335, 2812111|2812111 |9544070335 NIL| | , 0479-2436762|0479-2436762| 9495758057, | |9495758057 ``` --- what I'm trying to achieve is, if both `phoneoff` and `mobile` have value then it should concatenate with a `,` see `Concat_result` *col* in the first *row* in the *result* If `phoneoff` and `mobile` are `= ''` then `Concat_result` should print `NIL` If `phoneoff<>''` and `mobile=''` then `Concat_result` should print `phoneoff` only (see *row* 3 in my result its showing `phoneoff` along with `,`) and same for `mobile<>''` --- *Expecting Result* ``` Concat_result| phoneoff | mobile -------------------+------------+---------- 9544070335, 2812111|2812111 |9544070335 NIL| | 0479-2436762 |0479-2436762| 9495758057 | |9495758057 ```
A simple one-liner, which uses [`NULLIF()`](http://www.postgresql.org/docs/current/static/functions-conditional.html) & [`COALESCE()`](http://www.postgresql.org/docs/current/static/functions-conditional.html) (therefore can handle `NULL` values too, not just empty strings): ``` select coalesce(nullif(concat_ws(', ', nullif(phoneoff, ''), nullif(mobile, '')), ''), 'NIL') "Concat_result", phoneoff, mobile from gtab12 ```
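The same keep-non-empty-parts-or-`NIL` logic, written as a tiny Python helper for comparison (mobile first, matching the expected output in the question):

```python
def concat_result(phoneoff: str, mobile: str) -> str:
    # Drop empty values (like NULLIF(col, '') in the query), join the
    # rest with ', ' (like concat_ws), and fall back to 'NIL' when both
    # are empty (like the outer COALESCE).
    parts = [p for p in (mobile, phoneoff) if p]
    return ", ".join(parts) if parts else "NIL"
```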
Postgres has `concat_ws`, which can help: ``` select concat_ws(', ', phoneoff, mobile) as Concat_result, phoneoff, mobile from gtab12 ; ``` This doesn't quite do what you want, because you care about special values and `'NIL'`. So, let's put that logic in: ``` select (case when phoneoff = '' and mobile = '' then 'NIL' else concat_ws(', ', (case when phoneoff <> '' then phoneoff end), (case when mobile <> '' then mobile end) ) end) as Concat_result, phoneoff, mobile from gtab12 ; ```
PostgreSQL:string concatanation in select query
[ "", "sql", "postgresql", "" ]
I'm looking for some help trying to figure out why I'm seeing a SQL error on an update command within vb.net, noting that my SQL experience is very limited. I'm in the midst of building a profile system that will be used in several other tools within our company, as part of the profile system, the user's information is queried against our LDAP directory upon page load and the data stored in variables displayed in labels on the page. The user then has the option to create a new profile if one does not exist, or update an existing if there is already one in place. The code determines the proper action based on a query to the table using the users employee id. My insert command works ok but the update does not, below is my current code and the error. This is the insert statement to create a new entry, this works properly. ``` Protected Sub btnInsert_Click(sender As Object, e As EventArgs) Handles btnInsert.Click Try Dim con As New SqlConnection Dim cmd As New SqlCommand con.ConnectionString = "Data Source="server" Catalog=usertable;Integrated Security=True" con.Open() cmd.Connection = con cmd.CommandText = "INSERT INTO Profile (FirstName, LastName, Email, Telephone, EmployeeID, NTLogin) VALUES ('" & lblFirstName.Text & "','" & lblLastName.Text & "','" & lblEmail.Text & "','" & lblPhone.Text & "','" & lblEID.Text & "','" & lblNT.Text & "')" cmd.ExecuteNonQuery() btnInsert.Enabled = False lblValid.Text = "Record Inserted Successfully" con.Close() Catch ex As System.Exception lblValid.Text = (ex.Message) End Try ``` The following is the update code that gives an error: ``` Protected Sub btnUpdate_Click(sender As Object, e As EventArgs) Handles btnUpdate.Click Try Dim con As New SqlConnection Dim cmd As New SqlCommand con.ConnectionString = "server;Initial Catalog=usertable;Integrated Security=True" con.Open() cmd.Connection = con cmd.CommandText = "UPDATE Profile (FirstName, LastName, Email, Telephone, EmployeeID, NTLogin) VALUES ('" & lblFirstName.Text & "','" & 
lblLastName.Text & "','" & lblEmail.Text & "','" & lblPhone.Text & "','" & lblEID.Text & "','" & lblNT.Text & "') Where [EmployeeID] = '" & lblEID.text & "'" cmd.ExecuteNonQuery() btnUpdate.Enabled = False lblValid.Text = "Record Updated Successfully" con.Close() Catch ex As System.Exception lblValid.Text = (ex.Message) End Try End Sub ``` The exception error returned is ``` [SqlException (0x80131904): Incorrect syntax near '('.] System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +392 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +815 System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +4515 System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout, Boolean asyncWrite) +1390 System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite) +538 System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +290 newprofile.btnUpdate_Click(Object sender, EventArgs e) +759 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +155 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3804 ``` I'm at a loss and appreciate any help that can be given.
Update syntax is completely invalid as stated by @JBKing. Update statements are of form: ``` UPDATE myTableName SET field=new_value WHERE condition=some_value ``` Also, to avoid possible injections, do not pass field values directly into SQL command. Use parameters [How to use parameters "@" in an SQL command in VB](https://stackoverflow.com/questions/11139791/how-to-use-parameters-in-an-sql-command-in-vb)
``` cmd.CommandText = "UPDATE Profile (FirstName, LastName, Email, Telephone, EmployeeID, NTLogin) VALUES ('" & lblFirstName.Text & "','" & lblLastName.Text & "','" & lblEmail.Text & "','" & lblPhone.Text & "','" & lblEID.Text & "','" & lblNT.Text & "') Where [EmployeeID] = '" & lblEID.text & "'" ``` Should be like this: ``` cmd.CommandText = "UPDATE Profile SET FirstName='"&lblFirstName.Text&"', LastName='"&lblLastName.Text&"', Email='"&lblEmail.Text&"', Telephone='"&lblPhone.Text&"', EmployeeID='"&lblEID.Text&"', NTLogin='"&lblNT.Text&"' Where [EmployeeID] = '" & lblEID.text & "'" ``` I would be careful about last names like "O'Connor" that may break with your SQL as there are reasons stored procedures or parameters are preferred in most cases to prevent SQL injection attacks.
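To avoid both the quoting problem and SQL injection entirely, use parameters instead of string concatenation. A sqlite3 sketch of the idea (ADO.NET's `SqlParameter` works the same way, with `@name` placeholders):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Profile (EmployeeID TEXT PRIMARY KEY, LastName TEXT)")
con.execute("INSERT INTO Profile VALUES ('E1', 'Smith')")
# Placeholders let the driver quote the values, so a name like O'Connor
# cannot break the statement or inject SQL.
con.execute("UPDATE Profile SET LastName = ? WHERE EmployeeID = ?",
            ("O'Connor", "E1"))
name = con.execute(
    "SELECT LastName FROM Profile WHERE EmployeeID = 'E1'").fetchone()[0]
```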
vb.net problem with sql update command
[ "", "asp.net", "sql", "vb.net", "" ]
Trying to run:

```
ALTER TABLE [dbo].[Table1]
ADD CONSTRAINT [FK_Table1_ScenarioResult]
FOREIGN KEY ([ScenarioResultID])
REFERENCES [dbo].[ScenarioResult] ([ScenarioResultID]) ON DELETE CASCADE
```

Getting this error:

**Msg 547, Level 16, State 0, Line 1
The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_Table1_ScenarioResult". The conflict occurred in database "8362", table "dbo.ScenarioResult", column 'ScenarioResultID'.**

I have checked:

* Constraint does not already exist, and no other exists on same column
* The values in the column match in both tables
* Types of columns are the same
* Tried a different name, also fails

On `SQL Server 2008 R2`. Any ideas what I could try?
In theory this might work:

```
ALTER TABLE [dbo].[Table1] WITH NOCHECK
ADD CONSTRAINT [FK_Table1_ScenarioResult]
FOREIGN KEY ([ScenarioResultID])
REFERENCES [dbo].[ScenarioResult] ([ScenarioResultID]) ON DELETE CASCADE
```

Not sure how you checked for integrity of existing values, but it should be:

```
SELECT COUNT(*) as Orphans
FROM [dbo].[Table1] t
WHERE NOT EXISTS (SELECT * FROM [dbo].[ScenarioResult]
                  WHERE ScenarioResultID = t.ScenarioResultID)
```

If "Orphans" is greater than zero you need to clean the data before adding a constraint.
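The orphan check is the key step before adding the constraint. Here is a runnable sketch of it using Python's built-in `sqlite3` module (the table names mirror the question; the sample data, including the deliberate orphan row, is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ScenarioResult (ScenarioResultID INTEGER PRIMARY KEY);
CREATE TABLE Table1 (ID INTEGER PRIMARY KEY, ScenarioResultID INTEGER);
INSERT INTO ScenarioResult VALUES (1), (2);
-- 99 has no parent row, so it would block the foreign key
INSERT INTO Table1 VALUES (10, 1), (11, 2), (12, 99);
""")
(orphans,) = conn.execute("""
    SELECT COUNT(*) FROM Table1 t
    WHERE NOT EXISTS (SELECT * FROM ScenarioResult
                      WHERE ScenarioResultID = t.ScenarioResultID)
""").fetchone()
print(orphans)  # 1 -> the constraint cannot be added until this row is fixed
```

Deleting or re-pointing the rows counted here is what makes the `ALTER TABLE` succeed.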
This one was baffling me as well, and then the penny dropped.

I was trying to create a foreign key using "Database Diagrams" in SQL Server 2012, but it refused to let me, as it claimed to clash with the foreign key I was trying to create. Huh?

But I had accepted the defaults to "**Enforce foreign key constraint**". And I already had data in the two tables I was attempting to create a foreign key for, and it was breaking the foreign key rule I was trying to make, so SQL Server was rejecting the new key.

The solution (in this particular case) was to change "**Enforce foreign key constraint**" to "**No**", at least until I had cleaned up my data.

Hope this helps.
Unable to add foreign key constraint due to obscure conflict
[ "", "sql", "sql-server", "foreign-keys", "constraints", "conflict", "" ]
I have a table like this:

```
[challenge_log]
User_id | challenge | Try | Points
==============================================
1         1           1     5
1         1           2     8
1         1           3     10
1         2           1     5
1         2           2     8
2         1           1     5
2         2           1     8
2         2           2     10
```

I want the overall average points. To do so, I believe I need 3 steps:

Step 1 - Get the MAX value (of points) of each user in each challenge:

```
User_id | challenge | Points
===================================
1         1           10
1         2           8
2         1           5
2         2           10
```

Step 2 - SUM all the MAX values of one user

```
User_id | Points
===================
1         18
2         15
```

Step 3 - The average

```
AVG = SUM (Points from step 2) / number of users = 16.5
```

Can you help me find a query for this?
You can get the overall average by dividing the total number of points by the number of distinct users. However, you need the maximum per challenge, so the sum is a bit more complicated. One way is with a subquery:

```
select sum(Points) / count(distinct userid)
from (select userid, challenge, max(Points) as Points
      from challenge_log
      group by userid, challenge
     ) cl;
```

You can also do this with one level of aggregation, by finding the maximum in the `where` clause:

```
select sum(Points) / count(distinct userid)
from challenge_log cl
where not exists (select 1
                  from challenge_log cl2
                  where cl2.userid = cl.userid and
                        cl2.challenge = cl.challenge and
                        cl2.points > cl.points
                 );
```
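A runnable check of the subquery approach, using Python's built-in `sqlite3` with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE challenge_log (user_id INT, challenge INT, try INT, points INT);
INSERT INTO challenge_log VALUES
 (1,1,1,5),(1,1,2,8),(1,1,3,10),(1,2,1,5),(1,2,2,8),
 (2,1,1,5),(2,2,1,8),(2,2,2,10);
""")
# Sum the per-(user, challenge) maxima, then divide by the number of users.
# The "* 1.0" forces floating-point division.
(result,) = conn.execute("""
    SELECT SUM(points) * 1.0 / COUNT(DISTINCT user_id)
    FROM (SELECT user_id, challenge, MAX(points) AS points
          FROM challenge_log
          GROUP BY user_id, challenge)
""").fetchone()
print(result)  # 16.5
```

This reproduces the hand-computed value from step 3 of the question (33 points over 2 users).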
Try these on for size. * **Overall Mean** ``` select avg( Points ) as mean_score from challenge_log ``` * **Per-Challenge Mean** ``` select challenge , avg( Points ) as mean_score from challenge_log group by challenge ``` If you want to compute the mean of each users highest score per challenge, you're not exactly raising the level of complexity very much: * **Overall Mean** ``` select avg( high_score ) from ( select user_id , challenge , max( Points ) as high_score from challenge_log ) t ``` * **Per-Challenge Mean** ``` select challenge , avg( high_score ) from ( select user_id , challenge , max( Points ) as high_score from challenge_log ) t group by challenge ```
SQL - Overall average Points
[ "", "sql", "sum", "distinct", "average", "" ]
Problem: Current input is numeric#;varchar, e.g.:

```
404#;a purchase order
1#; b purchase order
1046#;x y x purchase order from this company
```

I need to have the numbers at the beginning put in a column and the name of the field after #; in a different column. Parse will not work because there is a possibility of more than 4 words in the title.

As you can see, #; is a common feature in all the inputs. What I would like to see happen is:

```
ID     Name
404    a purchase order
1      b purchase order
1046   xyz purchase order from this company
```

Any ideas? I tried [How do I split a string so I can access item x?](https://stackoverflow.com/questions/2647/how-do-i-split-a-string-so-i-can-access-item-x) but that wouldn't work for me.
```
declare @a varchar(20)
set @a = '123#;gunrin gnre'

SELECT SUBSTRING(@a, 0, CHARINDEX('#', @a)) AS ID,
       SUBSTRING(@a, CHARINDEX('#', @a) + 2, LEN(@a) - CHARINDEX('#', @a)) AS Name
```
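The same split can be sanity-checked with SQLite's `instr`/`substr`, which play the roles of `CHARINDEX`/`SUBSTRING`. A sketch using Python's `sqlite3` (note SQLite strings are 1-indexed, like T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute(
    "SELECT substr(:s, 1, instr(:s, '#;') - 1) AS id,"
    "       substr(:s, instr(:s, '#;') + 2) AS name",
    {"s": "1046#;x y x purchase order from this company"},
).fetchone()
print(row)  # ('1046', 'x y x purchase order from this company')
```

Everything before the `#;` marker becomes the ID column and everything after it becomes the Name column, regardless of how many words the title contains.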
``` Declare @Sample NVARCHAR(100) SET @Sample = '1046#;x y x purchase order from this company' SELECT SUBSTRING(@Sample, 1, CHARINDEX('#;', @Sample)-1), SUBSTRING(@Sample, CHARINDEX('#;', @Sample)+2,LEN(@Sample) ) ```
Return specific part of string
[ "", "sql", "string", "t-sql", "" ]
Table A name is source

```
ID | date    | valueS   | commonID
1    26.8.14   Svalue01   11
2    21.8.14   Svalue02   11
3    25.8.14   Svalue03   11
```

Table B name is destination

```
ID | date    | valueD   | commonID
1    26.8.14   Dvalue01   11
2    21.8.14   Dvalue03   11
3    24.8.14   Dvalue03   11
```

So currently I'm using

```
SELECT a.*, b.*
FROM (SELECT * FROM Source WHERE commonID = '11') a
JOIN destination b ON a.commonID = b.commonID
```

But this doesn't get me the desired result. I want something sorted by date, and if there is no record for both on the date, one is zero. Example of how it should look:

```
ID | date    | valueD   | commonID | ID | date    | valueS   | commonID
1    26.8.14   Dvalue01   11         1    26.8.14   Svalue01   11
                                     3    25.8.14   Svalue03   11
3    24.8.14   Dvalue03   11
2    21.8.14   Dvalue03   11         2    21.8.14   Svalue02   11
```

Is this possible, and how? Additional info:

- Using MySQL 5.5.37 (MariaDB)
- ID is primary on both
- date fields are "timestamp"
- value fields are INT
- ID fields are INT
- Engine is InnoDB

I hope I provided enough information and made a well-explained question. Thank you for your help.
you want to join on the date as that is the determining column, so something like this:

```
SELECT COALESCE(s.id, "") as s_id, COALESCE(s.date, "") as s_date,
       COALESCE(s.valueS, "") as 'valueS', COALESCE(s.commonID, "") as s_commonID,
       COALESCE(d.id, "") as d_id, COALESCE(d.date, "") as d_date,
       COALESCE(d.valueD, "") as 'valueD', COALESCE(d.commonID, "") as d_commonID
FROM source s
LEFT JOIN destination d on d.date = s.date AND d.commonID = s.commonID
WHERE d.commonID = 11
UNION
SELECT COALESCE(s1.id, "") as s_id, COALESCE(s1.date, "") as s_date,
       COALESCE(s1.valueS, "") as 'valueS', COALESCE(s1.commonID, "") as s_commonID,
       COALESCE(d1.id, "") as d_id, COALESCE(d1.date, "") as d_date,
       COALESCE(d1.valueD, "") as 'valueD', COALESCE(d1.commonID, "") as d_commonID
FROM source s1
RIGHT JOIN destination d1 on d1.date = s1.date AND d1.commonID = s1.commonID
WHERE d1.commonID = 11
ORDER BY s_date DESC, d_date DESC
```

[DEMO](http://sqlfiddle.com/#!2/2db17c/2)
You need a Full outer Join ``` SELECT s.id, s.date, s.valueS, d.valueD, d.commonID FROM source s LEFT JOIN destination d ON (s.id = d.id) UNION SELECT s.id, s.date, s.valueS, d.valueD, d.commonID FROM source s RIGHT JOIN destination d ON (s.id = d.id); ```
MySQL merge table, with zero vaules
[ "", "mysql", "sql", "join", "" ]
I would like to type a query in Microsoft SQL Server Management Studio but I'm running into some problems. In my query, I would like to create a temporary column that gives the age of the person. So I typed this, with [DANAID] being the birthdate of the person (yyyy/mm/dd hh:mm:ss.sss):

```
SELECT TOP 1000 [All my columns]
    , YEAR([DANAID])-YEAR(GETDATE()) AS AGE
FROM [table].[PERSONNE]
WHERE AGE <> NULL AND (AGE >= 25 AND AGE <= 50)
```

After this, I get an error in MSSQL and I can't execute the request. I get three errors, one on each AGE in the WHERE clause. After doing some research, I discovered the function DATEDIFF, so I edited my code like this:

```
SELECT TOP 1000 [All my columns]
    , DATEDIFF(year,[DANAID],GETDATE()) AS AGE /* No error */
FROM [table].[PERSONNE]
WHERE AGE <> NULL AND (AGE >= 25 AND AGE <= 50) /* Three errors */
```

But I still obviously get the same error message that says "Column name not valid "AGE"". So after that, I tried to name it [AGE]; then, after doing some searching on Stack Overflow, I called it 'AGE' everywhere so it looks like this:

```
SELECT TOP 1000 [All my columns]
    , DATEDIFF(year,[DANAID],GETDATE()) AS 'AGE' /* No error */
FROM [table].[PERSONNE]
WHERE 'AGE' <> NULL AND ('AGE' >= 25 AND 'AGE' <= 50) /* No error */
```

But when I execute my request, I don't get any result despite having some people with an age between 25 and 50. I first replaced <> with !=, then tried removing 'AGE' <> NULL, but it doesn't show anything, whereas replacing the WHERE with an ORDER BY 'AGE' displays everybody with the age at the end, and I can find people between those ages.

Can someone help me with this request please? Thank you very much in advance.
```
SELECT *
FROM (
    SELECT TOP 1000 [All my columns]
        , DATEDIFF(year,[DANAID],GETDATE()) AS [age] /* No error */
    FROM [table].[PERSONNE]
) AS T
WHERE NOT [age] IS NULL AND ([age] >= 25 AND [age] <= 50) /* No error */
```

Column aliases in the SELECT clause are evaluated after all other parts of the query!

OK, and this works without nesting:

```
SELECT TOP 1000 [All my columns]
    , DATEDIFF(year,[DANAID],GETDATE()) AS [age] /* No error */
FROM [table].[PERSONNE]
WHERE NOT DATEDIFF(year,[DANAID],GETDATE()) IS NULL
  AND DATEDIFF(year,[DANAID],GETDATE()) BETWEEN 25 AND 50
```
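The derived-table trick (compute the alias in an inner query, filter on it in the outer `WHERE`) can be demonstrated end to end with Python's `sqlite3`. Table and column names follow the question; a fixed "as of" date replaces `GETDATE()` so the output is deterministic, and the birth dates are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE personne (id INT, danaid TEXT);
INSERT INTO personne VALUES (1,'1990-05-01'),(2,'1980-03-12'),(3,'1960-01-20');
""")
# The inner SELECT defines the "age" alias; the outer WHERE may then use it.
rows = conn.execute("""
    SELECT id, age
    FROM (SELECT id,
                 CAST(strftime('%Y', :asof) AS INTEGER)
                 - CAST(strftime('%Y', danaid) AS INTEGER) AS age
          FROM personne) AS t
    WHERE age BETWEEN 25 AND 50
""", {"asof": "2014-08-26"}).fetchall()
print(rows)  # [(2, 34)] -> ages 24 and 54 are filtered out
```

Filtering on `age` directly in the inner query's own `WHERE` would fail for the same reason as in the question: the alias does not exist yet at that stage of evaluation.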
EDIT: reverse your calculation to YEAR(GETDATE()) - YEAR([DANAID]) There are 2 methods to access the `column alias` AGE in your query. Firstly be "nesting" the original: ``` SELECT TOP 1000 * , AGE FROM ( SELECT * , YEAR(GETDATE()) - YEAR([DANAID]) AS AGE FROM [table].[PERSONNE] ) AS derived WHERE AGE BETWEEN 25 AND 50 ORDER BY "some fields here" ; ``` or, using cross apply ``` SELECT TOP 1000 * , CA.AGE FROM [table].[PERSONNE] CROSS APPLY ( SELECT YEAR(GETDATE()) - YEAR([DANAID]) ) CA (AGE) WHERE CA.AGE BETWEEN 25 AND 50 ORDER BY "some fields here" ; ``` For your where clause you may use `BETWEEN` and if AGE is between 25 and 50 it cannot also be NULL so it isn't necessary to explicitly exclude those. EDIT: but it would be possible to include `WHERE [PERSONNE].[DANAID] IS NOT NULL` into the query which would exclude any records that would produce a NULL AGE --- BUT there is a problem not resolved. That is not an accurate method of calculating age. If the month of a date is after the month of getdate() the age is 1 year less than the current result. e.g. if born in December, you do not become 1 year older in January of each year. **More accurate method of age calculation:** ``` SELECT TOP 1000 ID , CA.* FROM [PERSONNE] CROSS APPLY ( SELECT YEAR(GETDATE()) - YEAR([DANAID]) , YEAR(getdate()) - YEAR(DANAID) - ( CASE WHEN MONTH(DANAID) > MONTH(getdate()) OR ( MONTH(DANAID) = MONTH(getdate()) AND DAY(DANAID) > DAY(getdate()) ) THEN 1 ELSE 0 END ) ) CA (AGEsimple, AGEaccurate) WHERE CA.AGEaccurate BETWEEN 25 AND 50 ORDER BY ID ; ``` `See this SQLfiddle Demo`
Writing a query with AS on MSSQL
[ "", "sql", "sql-server", "" ]
My Stored Procedure is like this ``` alter PROCEDURE ph_GetAllStaffAddressByCamp @CampCode VARCHAR(20) AS IF NULLIF(@CampCode, '') IS NULL DECLARE @Year VARCHAR(2) EXEC cs_GetCurrentYear @Year OUTPUT SELECT DISTINCT [VolID], [CampID] FROM [vStaffJobAndCamp] WHERE IF @CampCode IS NULL CampCode like @Year +% ELSE CampCode = @CampCode ``` Everything looks okay to me, But it throws compile error like this ``` Msg 156, Level 15, State 1, Procedure ph_GetAllStaffAddressByCamp, Line 14 Incorrect syntax near the keyword 'IF'. Msg 102, Level 15, State 1, Procedure ph_GetAllStaffAddressByCamp, Line 15 Incorrect syntax near 'CampCode'. ``` Does any one know what I am doing wrong?
You forgot BEGIN...END in the IF clause, and the WHERE conditions were also wrong:

```
alter PROCEDURE ph_GetAllStaffAddressByCamp
    @CampCode VARCHAR(20)
AS
IF NULLIF(@CampCode, '') IS NULL
BEGIN
    DECLARE @Year VARCHAR(2)
    EXEC cs_GetCurrentYear @Year OUTPUT

    SELECT DISTINCT [VolID], [CampID]
    FROM [vStaffJobAndCamp]
    WHERE (@CampCode IS NULL and CampCode like @Year + '%')
       OR (@CampCode IS NOT NULL and CampCode = @CampCode)
END
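The `(@p IS NULL AND ...) OR (@p IS NOT NULL AND ...)` pattern is easy to sanity-check with Python's `sqlite3` (`:camp` and `:year` stand in for the stored-procedure parameters; the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reg (vol_id INT, camp_code TEXT);
INSERT INTO reg VALUES (1,'14A'),(2,'14B'),(3,'13A');
""")
SQL = """
    SELECT vol_id FROM reg
    WHERE (:camp IS NULL AND camp_code LIKE :year || '%')
       OR (:camp IS NOT NULL AND camp_code = :camp)
    ORDER BY vol_id
"""
no_camp = conn.execute(SQL, {"camp": None, "year": "14"}).fetchall()
with_camp = conn.execute(SQL, {"camp": "13A", "year": "14"}).fetchall()
print(no_camp)    # [(1,), (2,)] -> falls back to the year prefix
print(with_camp)  # [(3,)]       -> exact camp code wins
```

One static `WHERE` clause covers both branches, which is why no procedural `IF` is needed inside the `SELECT`.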
`IF` is not a valid SQL keyword. Therefore, you cannot use it inside a `SELECT` statement. You could express those conditions in valid SQL like so: ``` WHERE (@CampCode IS NULL AND CampCode like @Year + '%') OR (@CampCode is NOT NULL AND CampCode = @CampCode) ```
If else inside SQL Where clause throws error
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I'm trying to figure out why my query is not behaving the way I want it to. The following statement is trying to get the newest car in stock for each Ford model: ``` Car.where("make = 'Ford'").group(:model_id).having('year = MAX(year)') ``` But it seems to be grouping by `model_id` and then filtering by the year on all the data (not just for each group). This is the SQL it's generating: ``` SELECT `cars`.* FROM `cars` WHERE `cars`.`make` = 'Ford' GROUP BY model_id HAVING year = MAX(year) ``` What am I doing wrong? Is there a different query I can use? **Update** Here is a sample of the data: ``` | make | model_id | year | | Ford | 1 | 2000 | | Ford | 1 | 2002 | | Ford | 1 | 2010 | | Ford | 2 | 2012 | | BMW | 3 | 2012 | | BMW | 4 | 2014 | ``` And it should return the following after the query: ``` | Ford | 1 | 2010 | | Ford | 2 | 2012 | ```
Thanks to @p11y for posting a link that helped me solve the issue with the following query. Note: It may not be the most efficient. Please provide a more efficient query if you'd like.

```
subquery = "select max(year) from cars as c where c.model_id = cars.model_id"
Car.where("make = 'Ford' AND year = (#{subquery})")
```

**Update**: Adding subquery to make it easier to read.
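The SQL that Rails ends up running here is a classic greatest-n-per-group correlated subquery. A runnable sketch of just the SQL, with Python's `sqlite3` and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cars (make TEXT, model_id INT, year INT);
INSERT INTO cars VALUES ('Ford',1,2000),('Ford',1,2002),('Ford',1,2010),
                        ('Ford',2,2012),('BMW',3,2012),('BMW',4,2014);
""")
# Keep a row only if its year is the maximum within its own model_id group.
rows = conn.execute("""
    SELECT make, model_id, year
    FROM cars
    WHERE make = 'Ford'
      AND year = (SELECT MAX(year) FROM cars c2
                  WHERE c2.model_id = cars.model_id)
    ORDER BY model_id
""").fetchall()
print(rows)  # [('Ford', 1, 2010), ('Ford', 2, 2012)]
```

This matches the expected output table in the question: the newest car per Ford model.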
You can also use aggregate functions as part of the SELECT clause, which should yield the desired result: ``` cars = Car.select('MAX(year)').group(:model_id) cars.to_a.last.attributes #=> {name: 'BMW', model_id: 4, ..., max: 2014} ``` you can use aliasing to select multiple aggregates at once: ``` cars = Car.select('MAX(year) as max_year, MAX(hp) as max_hp').group(:model_id) cars.to_a.last.attributes #=> {name: 'BMW', model_id: 4, ..., max_year: 2014, max_hp: 120} ``` In Postgresql, [you can use window functions](http://postgresguide.com/tips/window.html) to find extremes in the groups. It's not very pretty, but the result is a proper ActiveRecord relation that can be chained with other scopes: ``` Car.from('(SELECT *, rank() OVER (PARTITION BY model_id ORDER BY year DESC) FROM cars) AS cars').where('rank = 1') ``` This can be refactored a bit for readability: ``` partition = 'PARTITION BY model_id ORDER BY year DESC' subquery = Car.arel_table.project("*, rank() OVER (#{partition})") Car.from("(#{subquery}) AS cars").where('rank = 1') ``` Now you can even do things like "get the two newest cars for each group": ``` Car.from("(#{subquery}) AS cars").where('rank <= 2') ```
AR - Filter Group Query
[ "", "mysql", "sql", "ruby-on-rails", "ruby", "activerecord", "" ]
I have this query in my Access database: ``` SELECT t_Campioni_CAMPIONE, t_Campioni.[DATA ARRIVO], t_Campioni.PRODUTTORE, t_Campioni.CodF, t_Fornitori.[Nome Fornitore] FROM t_Campioni INNER JOIN t_Fornitori ON t_Campioni.CodF = t_Fornitori.CodF WHERE (((t_Campioni.CAMPIONE)=[Forms]![m_Campioni_modifica]![CAMPIONE])) ORDER BY t_Campioni.[DATA ARRIVO] DESC; ``` It works but I need it to extract only the first record (with the last date). How can I do it?
Just replace your initial `SELECT` with `SELECT TOP 1`:

```
SELECT TOP 1 t_Campioni_CAMPIONE, t_Campioni.[DATA ARRIVO], t_Campioni.PRODUTTORE,
       t_Campioni.CodF, t_Fornitori.[Nome Fornitore]
FROM t_Campioni
INNER JOIN t_Fornitori ON t_Campioni.CodF = t_Fornitori.CodF
WHERE (((t_Campioni.CAMPIONE)=[Forms]![m_Campioni_modifica]![CAMPIONE]))
ORDER BY t_Campioni.[DATA ARRIVO] DESC;
```
``` SELECT t_Campioni_CAMPIONE, t_Campioni.[DATA ARRIVO], t_Campioni.PRODUTTORE, t_Campioni.CodF, t_Fornitori.[Nome Fornitore] FROM t_Campioni INNER JOIN t_Fornitori ON t_Campioni.CodF = t_Fornitori.CodF WHERE (((t_Campioni.CAMPIONE)=[Forms]![m_Campioni_modifica]![CAMPIONE])) ORDER BY t_Campioni.[DATA ARRIVO] DESC LIMIT 1; ```
How can I limit the results extracted by a query in Acces?
[ "", "sql", "ms-access-2010", "" ]
I have a simple MySQL query like this:

`INSERT INTO ftpt VALUES (0,ftp://ftp.somewhere.com);`

As you can see, there is "//" in my code, and I don't know how to make MySQL accept it. Although it may seem silly to you, I couldn't find anything anywhere. Thank you for your help.
String literals must be enclosed in single quotes ([as documented in the manual](https://dev.mysql.com/doc/refman/5.5/en/string-literals.html)).

(Note that using double quotes is non-standard SQL and will not work on other standard-compliant databases. So it's best to always use single quotes.)

```
INSERT INTO ftpt VALUES (0, 'ftp://ftp.somewhere.com');
```

It's also good coding style to explicitly state the columns of the table:

```
INSERT INTO ftpt (id, url) VALUES (0, 'ftp://ftp.somewhere.com');
```
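Beyond quoting the literal correctly, the usual way to sidestep quoting questions entirely from application code is a parameterized statement, where the driver handles quoting and escaping. A sketch with Python's `sqlite3` (the `id`/`url` column names are an assumption, since the question never shows the table definition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ftpt (id INTEGER, url TEXT)")
# The "?" placeholders mean the "://" in the value needs no special treatment.
conn.execute("INSERT INTO ftpt (id, url) VALUES (?, ?)",
             (0, "ftp://ftp.somewhere.com"))
row = conn.execute("SELECT id, url FROM ftpt").fetchone()
print(row)  # (0, 'ftp://ftp.somewhere.com')
```

The same pattern (placeholders plus a value tuple) exists in every MySQL client library.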
Your query is almost right; you're just missing `single quotes`. You should try it like this:

> INSERT INTO ftpt VALUES (0,'<ftp://ftp.somewhere.com>');
Ignoring "//" in mysql query
[ "", "mysql", "sql", "" ]
I have a table named Products that looks like:

```
maker   model   type
------  ------  -------
A       1232    PC
A       1233    PC
A       1276    Printer
A       1401    Printer
A       1408    Printer
A       1298    Laptop
A       1752    Laptop
B       1121    PC
B       1750    Laptop
C       1321    Laptop
D       1433    Printer
D       1288    Printer
E       1260    PC
E       1434    Printer
E       2112    PC
E       2113    PC
```

And I need to get the maker that produces more than 1 model, but those models should be the same type... Here it should be `maker = D and Type = Printer`.

I spent the whole day using count(model)>1 and count(type)=1 etc. Nothing works.
```
SELECT maker, MIN(type) AS type
FROM Products
GROUP BY maker
HAVING COUNT(DISTINCT type) = 1
   AND COUNT(DISTINCT model) > 1
```
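A runnable check of this query against the question's data, using Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (maker TEXT, model INT, type TEXT);
INSERT INTO products VALUES
 ('A',1232,'PC'),('A',1233,'PC'),('A',1276,'Printer'),('A',1401,'Printer'),
 ('A',1408,'Printer'),('A',1298,'Laptop'),('A',1752,'Laptop'),
 ('B',1121,'PC'),('B',1750,'Laptop'),('C',1321,'Laptop'),
 ('D',1433,'Printer'),('D',1288,'Printer'),
 ('E',1260,'PC'),('E',1434,'Printer'),('E',2112,'PC'),('E',2113,'PC');
""")
rows = conn.execute("""
    SELECT maker, MIN(type) AS type
    FROM products
    GROUP BY maker
    HAVING COUNT(DISTINCT type) = 1 AND COUNT(DISTINCT model) > 1
""").fetchall()
print(rows)  # [('D', 'Printer')] -> more than one model, all of a single type
```

The `MIN(type)` is just a way to surface the (single) type for the group without breaking the `GROUP BY` rules.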
If you want to determine the `maker` that has more than one `model` of the same `type`, then you can use `GROUP BY` and `HAVING` to get the result: ``` select maker from products group by maker having count(distinct model) > 1 -- more than one model and count(distinct type) = 1 -- same type ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/34ce2/3) If you want to return everything for each `maker`, then use can use ``` select p.maker, p.model, p.type from products p where maker in (select maker from products t group by maker having count(distinct model) > 1 and count(distinct type) = 1); ``` See [Demo](http://sqlfiddle.com/#!3/34ce2/6)
trouble using count() and group by
[ "", "sql", "count", "group-by", "" ]
I am writing a function which has a parameter @terminationMonthYear with datatype nvarchar; I need to convert that parameter into datetime.

E.g., if I pass (January,2013), I need it converted to the first day of that particular month, '2013-01-01', with datetime datatype in SQL Server.

Thanks in advance.
Try this (in this example, @Date represents @terminationMonthYear):

```
DECLARE @Date NVARCHAR(50) = 'February ,2013'

DECLARE @Month NVARCHAR(50) = LTRIM(RTRIM(LEFT(@Date, PATINDEX('%,%',@Date)-1)))
DECLARE @Year NVARCHAR(50) = LTRIM(RTRIM(RIGHT(@Date, LEN(@Date) - PATINDEX('%,%',@Date))))

SELECT CAST(@Month + ' 01, ' + @Year AS DATE)
```

OR if your input parameter includes the (), then try this:

```
DECLARE @Date NVARCHAR(50) = '(February ,2013)'
SET @Date = REPLACE(REPLACE(@Date,'(',''),')','')

DECLARE @Month NVARCHAR(50) = LTRIM(RTRIM(LEFT(@Date, PATINDEX('%,%',@Date)-1)))
DECLARE @Year NVARCHAR(50) = LTRIM(RTRIM(RIGHT(@Date, LEN(@Date) - PATINDEX('%,%',@Date))))

SELECT CAST(@Month + ' 01, ' + @Year AS DATE)
```

This will work whether you pass in the full month name or the 3-letter abbreviation (e.g. Mar for March). Also, you mentioned you wanted to convert it into DATETIME format, but 2013-01-01 is a DATE (no time component). If you want a time component, you can just change the CAST in the last line to "... AS DATETIME" and it will add a time component (though it will be all 0's).
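For comparison, if the same "Month,Year to first-of-month" conversion is ever needed outside SQL Server, Python's `strptime` does it directly: `%B` matches the full English month name (`%b` would take the 3-letter abbreviation) and the day defaults to 1.

```python
from datetime import datetime

def first_of_month(text):
    # Tolerate optional parentheses and stray spaces, e.g. "(January ,2013)".
    cleaned = text.strip("() ").replace(" ", "")
    return datetime.strptime(cleaned, "%B,%Y").date()

print(first_of_month("January,2013"))      # 2013-01-01
print(first_of_month("(February ,2013)"))  # 2013-02-01
```

The cleanup line mirrors the `REPLACE`/`LTRIM`/`RTRIM` calls in the T-SQL answer above.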
if you pass "Jan,2013" it will work ``` @MonthName Varchar(50) Declare @FromDate datetime, @ToDate datetime Select @FromDate = Convert(Datetime,@MonthName), @ToDate = DateAdd(d,-1,DateAdd(m,1,Convert(Datetime,@MonthName))) ```
How to Convert (Monthname,year) in to DateTime Format
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table called `PropertyFeatures` which has three columns. The first one - `PropertyFeatureId` is an auto incremented column, the second one - `Propertyid` is a foreign key of the `Property` table, and the last one - `Featureid` is a foreign key of the `Feature` table. ``` +------------------------------------------+ | PropertyFeatureId PropertyId FeatureId | +------------------------------------------+ | 1 1 1 | | 2 1 2 | | 3 2 2 | | 4 2 3 | | 5 2 4 | +------------------------------------------+ SELECT propertyFeatures.PropertyId FROM PropertyFeatures propertyFeatures INNER JOIN Feature feature ON feature.id = propertyFeatures.FeatureId WHERE propertyFeatures.[FeatureId] IN (1,2) GROUP BY propertyFeatures.PropertyId ``` The above query gives the following result: ``` PropertyId 1 2 ``` But **I want to get the following** result: ``` PropertyId 1 ``` because `Featureid` 1 and 2 are available only for `PropertyId` 1. If I change the query to the following: ``` SELECT propertyFeatures.PropertyId FROM PropertyFeatures propertyFeatures INNER JOIN Feature feature ON feature.id = propertyFeatures.FeatureId WHERE propertyFeatures .[FeatureId] IN (2,3,4) GROUP BY propertyFeatures.PropertyId ``` It will only display: ``` PropertyId 2 ``` because `Featureid` 2, 3 and 4 are available only for `PropertyId` 2. How can I achieve the desired result?
Keep in mind that `WHERE a IN (1, 2)` is actually a shortcut for `WHERE a = 1 OR a = 2`. This will not give you the intersection of the two sets, only the combination of the set where the first filter is valid with the set where the second filter is valid.

In your case, a check with `EXISTS` might be what you are searching for - or `INTERSECT`.

Example with `INTERSECT`:

```
SELECT propertyFeatures.PropertyId
FROM PropertyFeatures AS propertyFeatures
INNER JOIN Feature AS feature ON feature.id = propertyFeatures.FeatureId
WHERE propertyFeatures.[FeatureId] = 1
GROUP BY propertyFeatures.PropertyId
INTERSECT
SELECT propertyFeatures.PropertyId
FROM PropertyFeatures AS propertyFeatures
INNER JOIN Feature AS feature ON feature.id = propertyFeatures.FeatureId
WHERE propertyFeatures.[FeatureId] = 2
GROUP BY propertyFeatures.PropertyId;
```

Example with `EXISTS`:

```
SELECT propertyFeatures.PropertyId
FROM PropertyFeatures AS propertyFeatures
INNER JOIN Feature AS feature ON feature.id = propertyFeatures.FeatureId
WHERE EXISTS (SELECT TOP 1 * FROM PropertyFeatures AS P
              INNER JOIN Feature AS F ON F.id = P.FeatureId
              WHERE P.FeatureId = 1)
  AND EXISTS (SELECT TOP 1 * FROM PropertyFeatures AS P
              INNER JOIN Feature AS F ON F.id = P.FeatureId
              WHERE P.FeatureId = 2);
```

(Code not tested, so please check for yourself if it works!)
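A third equivalent formulation of this "has all of these features" test is `GROUP BY` with a `HAVING` count over the wanted set, which also scales to any number of features. A runnable sketch with Python's `sqlite3`, using the question's data (snake_case column names assumed for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE property_features (pf_id INT, property_id INT, feature_id INT);
INSERT INTO property_features VALUES (1,1,1),(2,1,2),(3,2,2),(4,2,3),(5,2,4);
""")

def props_with_all(conn, feature_ids):
    # Keep only the wanted features, then demand each property has all of them.
    marks = ",".join("?" * len(feature_ids))
    sql = f"""SELECT property_id FROM property_features
              WHERE feature_id IN ({marks})
              GROUP BY property_id
              HAVING COUNT(DISTINCT feature_id) = {len(feature_ids)}"""
    return [r[0] for r in conn.execute(sql, feature_ids)]

print(props_with_all(conn, [1, 2]))     # [1]
print(props_with_all(conn, [2, 3, 4]))  # [2]
```

This reproduces both expected results from the question: features (1,2) match only property 1, and features (2,3,4) match only property 2.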
You could use an `EXISTS` clause instead of an `IN` caluse: ## [Demo Sql Fiddle](http://sqlfiddle.com/#!3/29304/5) **Create Script:** ``` CREATE TABLE PropertyFeatures ([PropertyFeatureId] int, [PropertyId] int, [FeatureId] int) ; INSERT INTO PropertyFeatures ([PropertyFeatureId], [PropertyId], [FeatureId]) VALUES (1, 1, 1), (2, 1, 2), (3, 2, 2), (4, 2, 3), (5, 2, 4) ; ``` **SQL Statement:** ``` select distinct PropertyId from PropertyFeatures pf1 where exists (select PropertyId from PropertyFeatures pf2 where FeatureID = 1 and pf1.PropertyId = pf2.PropertyId) and exists (select PropertyId from PropertyFeatures pf2 where FeatureID = 2 and pf1.PropertyId = pf2.PropertyId) ``` So each subquery within the `exists` statement returns a `PropertyId` and you can `and` them both together. The issue I can see with this is that you would have to add another `EXISTS` clause each time you wanted to query more `FeatureId` values.
How to use the IN keyword?
[ "", "sql", "sql-server", "" ]
This is a simplified version of a problem I have.

Say I've got three variables, all of the same type, in three columns of table1, plus an id field. They are all codes. Mostly they map to variables (group identifiers, say) contained in a lookup in table2.

I want to write a query that does the following: for each of my records I want to return the variable in table2 that matches the code in the first of the three columns in table1. However, if this column contains a value that does not have a match in table2, I want to try for a match using column2. If that one does not match, use the one in column3.

I want the query result to contain the ID from table1 and the match from table2. If there is no match at all, then I want the query to contain a row with the id and n/a.

In this example there are just two values that match in my lookup. I'm actually mapping across 12 columns with a few hundred unique code values and several million rows of data.

Table1

```
id  col1  col2  col3
1   V21   G22   T21
2   E30   W21   S34
3   Y11   U29   Q66
```

Table2

```
cat_code  class_group
V21       group1
W21       group2
```

Query result

```
id  class_group
1   group1
2   group2
3   n/a
```

So here in the desired result, record id 1 matches on the very first column and returns the corresponding variable; the second record can't get a match on the first column but finds one on the second; and the third record can't match any value in any of the three columns, so it returns n/a.

I'm fairly new to SQL - I'm not sure whether this can be achieved in a simple query or whether it needs a function.
```
select t1.id,
       coalesce(t21.class_group, t22.class_group, t23.class_group) class_group
from Table1 t1
left join Table2 t21 on t21.cat_code = t1.col1
left join Table2 t22 on t22.cat_code = t1.col2
left join Table2 t23 on t23.cat_code = t1.col3
```
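A runnable check of this repeated-LEFT-JOIN-plus-COALESCE approach with Python's `sqlite3`, using the sample data from the question ('n/a' supplied as the final COALESCE fallback):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INT, col1 TEXT, col2 TEXT, col3 TEXT);
CREATE TABLE t2 (cat_code TEXT, class_group TEXT);
INSERT INTO t1 VALUES (1,'V21','G22','T21'),(2,'E30','W21','S34'),
                      (3,'Y11','U29','Q66');
INSERT INTO t2 VALUES ('V21','group1'),('W21','group2');
""")
# Each LEFT JOIN tries one column; COALESCE picks the first that matched.
rows = conn.execute("""
    SELECT t1.id,
           COALESCE(m1.class_group, m2.class_group, m3.class_group, 'n/a')
    FROM t1
    LEFT JOIN t2 m1 ON m1.cat_code = t1.col1
    LEFT JOIN t2 m2 ON m2.cat_code = t1.col2
    LEFT JOIN t2 m3 ON m3.cat_code = t1.col3
    ORDER BY t1.id
""").fetchall()
print(rows)  # [(1, 'group1'), (2, 'group2'), (3, 'n/a')]
```

Extending to the real 12-column case is mechanical: one more `LEFT JOIN` and one more `COALESCE` argument per column, in priority order.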
Just like Joel wrote... but he's quicker than I am :) ``` SELECT [Id], COALESCE(C1.[class_group], C2.[class_group], C2.[class_group], 'N/A') FROM Table1 AS T1 LEFT JOIN Table2 AS C1 ON C1.[cat_code] = T1.[col1] LEFT JOIN Table2 AS C2 ON C2.[cat_code] = T1.[col2] LEFT JOIN Table2 AS C3 ON C3.[cat_code] = T1.[col3] ``` <http://sqlfiddle.com/#!6/ffb01/3>
how to make a simple query that matches column values to a look-up then matches further columns if no match is found
[ "", "sql", "sql-server", "" ]
I want to join 2 MySQL tables, but the join information is in a separate table. Let's say I have 3 tables named student, course and reg; reg contains the id of a student and the course he takes.

```
student table
s_id | name
1    | miki
2    | foly
3    | oski

course table
c_id | name
101  | c++
102  | java
103  | ruby

reg table
s_id | c_id
1    | 101
1    | 102
2    | 101
```

Now I want to get all the courses someone takes. I wrote an SQL query for that without using a join query, but I want to do the same thing using a join query. This is my query:

```
SELECT c.name
FROM student as s, course as c, reg as r
where r.s_id=s.s_id and r.c_id=c.c_id and s.name='miki';
```
The statement

```
SELECT c.name
FROM student as s, course as c, reg as r
where r.s_id=s.s_id and r.c_id=c.c_id and s.name='miki'
```

is a join too; the `,` between table names is a shortcut for `cross join`, so you are already using joins (actually you have some conditions in `where`, so the RDBMS will optimize it to an `inner join`). But, of course, you can rewrite it in a different syntax:

```
SELECT c.name
FROM course as c
inner join reg as r
  on (c.c_id = r.c_id
      and r.s_id=(select s_id from student where name='miki'));
```

another syntax:

```
SELECT c.name
FROM course as c
inner join reg as r on (c.c_id = r.c_id)
inner join student as s on (r.s_id=s.s_id and s.name='miki');
```

and another one:

```
SELECT c.name
FROM course as c
inner join reg as r on (c.c_id = r.c_id)
inner join student as s on (r.s_id=s.s_id)
where s.name='miki';
```

Depending on a bunch of conditions, the performance of these 4 queries can differ, but the results will be the same.
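A runnable check of the explicit three-table join with Python's `sqlite3`, using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (s_id INT, name TEXT);
CREATE TABLE course  (c_id INT, name TEXT);
CREATE TABLE reg     (s_id INT, c_id INT);
INSERT INTO student VALUES (1,'miki'),(2,'foly'),(3,'oski');
INSERT INTO course VALUES (101,'c++'),(102,'java'),(103,'ruby');
INSERT INTO reg VALUES (1,101),(1,102),(2,101);
""")
# reg acts as the bridge: course -> reg -> student.
rows = conn.execute("""
    SELECT c.name
    FROM course c
    JOIN reg r     ON r.c_id = c.c_id
    JOIN student s ON s.s_id = r.s_id
    WHERE s.name = 'miki'
    ORDER BY c.c_id
""").fetchall()
print(rows)  # [('c++',), ('java',)]
```

Student 'miki' is registered for courses 101 and 102 only, so 'ruby' never appears.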
Just join all 3 tables to get the result ``` select c.name from course c join reg r on r.c_id = c.c_id join student s on s.s_id = r.s_id where s.name = 'miki' ```
mysql join two tables with registry table using join query
[ "", "mysql", "sql", "join", "" ]
I'm attempting to connect to an instance of SQL Server 2008 (Developer Edition) in SQL Server Management Studio but am receiving the following error:

![Login Window](https://i.stack.imgur.com/ZRXe4.png)
![SQL Server Management Studio Error Message](https://i.stack.imgur.com/OFdyD.png)

In SQL Server Configuration Manager, all services are running and logged in as LocalSystem.

![SQL Server Services](https://i.stack.imgur.com/tJ26F.png)

I have set the SQL Native Client 10.0 configuration to enable Shared Memory, TCP/IP, and Named Pipes. I have also set SQL Server Network Configuration to enable Shared Memory.

![SQL Native Client 10.0 Configuration(32 bit)](https://i.stack.imgur.com/UYx1I.png)
![SQL Native Client 10.0 Configuration](https://i.stack.imgur.com/F5hXz.png)
![SQL Server Network Configuration](https://i.stack.imgur.com/JkiI9.png)
The default (unnamed) instance uses `MSSQLSERVER` as its service name - so it seems your instance is actually called `MSSQLSERVER08`. Therefore, you need to use

```
.\MSSQLSERVER08
```

or

```
(local)\MSSQLSERVER08
```

as your server name for connecting...

**Note:** all these things refer to the **local machine** in the context of connecting to a SQL Server installed on that machine:

* `.`
* `(local)`
* `localhost`
* or your actual **machine name** (whatever it might be)
For SQL Express use `.\MSSQLSERVER08` as your server name. Try to enable TCP/IP, Shared Memory and Named Pipes from all programs Also check if the service is started: ![enter image description here](https://i.stack.imgur.com/ExJpD.png)
Cannot connect to local instance of SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a small table "`ImgViews`" that only contains two columns, an ID column called "`imgID`" + a count column called "`viewed`", both set up as int. The idea is to use this table only as a counter so that I can track how often an image with a certain ID is viewed / clicked. The table has no primary or foreign keys and no relationships. However, when I enter some data for testing and try entering the same imgID multiple times it always appears greyed out and with a red error icon. Usually this makes sense as you don't want duplicate records but as the purpose is different here it does make sense for me. Can someone tell me how I can achieve this or work around it ? What would be a common way to do this ? Many thanks in advance, Tim.
To address your requirement to store non-unique values, simply remove primary keys, unique constraints, and unique indexes.

I expect you may still want a non-unique clustered index on ImgID to improve performance of aggregate queries that would otherwise require scanning the entire table and sorting.

I suggest you store an insert timestamp, not to provide uniqueness, but to facilitate purging data by date, should the need arise in the future.
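Whichever schema is chosen, the counter update itself is a plain upsert. A minimal sketch with Python's `sqlite3`, using a portable update-then-insert (in SQL Server a `MERGE`, or SQLite's `INSERT ... ON CONFLICT`, does the same atomically; this two-step form is not race-safe without a transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ImgViews (imgID INTEGER PRIMARY KEY, viewed INTEGER NOT NULL)")

def record_view(conn, img_id):
    # Try to bump an existing counter; insert the first hit otherwise.
    cur = conn.execute(
        "UPDATE ImgViews SET viewed = viewed + 1 WHERE imgID = ?", (img_id,))
    if cur.rowcount == 0:
        conn.execute(
            "INSERT INTO ImgViews (imgID, viewed) VALUES (?, 1)", (img_id,))

for img in (7, 7, 7, 9):
    record_view(conn, img)
counts = conn.execute(
    "SELECT imgID, viewed FROM ImgViews ORDER BY imgID").fetchall()
print(counts)  # [(7, 3), (9, 1)]
```

With one row per image and a counter column, no duplicate rows are needed at all, which sidesteps the keyless-table problem entirely.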
You must have some unique index on that table. Make sure there is no unique index and no unique or primary key constraint. Or, SSMS simply doesn't know how to identify the row that was just inserted because it has no key. It is generally not best practice to have a table without a (logical) primary key. In your case, I'd make the image id the primary key and increment the counter. The `MERGE` statement is well-suited for performing and insert or update at the same time. Alternatives exist. If you don't like that, create a surrogate primary key (an identity column set as the primary key). At the moment you have no way of addressing a specific row. That makes the table a little unwieldy.
SQL Server: How to allow duplicate records on small table
[ "", "sql", "sql-server", "database-design", "duplicates", "" ]
I'm using C# to write to a SQL Compact Edition 3.5 database. I got a table containing e-mail addresses and names for each address. ``` MailRecipientAddressID int primary key identity(1,1) not null, Address nvarchar(4000), Name nvarchar(4000) ``` In this table I want every address-name combination to be unique. In some cases it's possible that either `Address` or `Name` is `NULL` in a row. Before inserting new rows into this table, I'm using a `SELECT` query to check if there is an existing row matching the one I want to insert. When using this query ``` SELECT MailRecipientAddressID FROM MailRecipientAddress WHERE Address = @Address AND Name = @Name ``` I will not find existing rows with `NULL` values in one column (see [here](https://stackoverflow.com/questions/2749044/what-is-null-and-is-null)). Now I got this query, which works and kind of solves my problem ``` SELECT MailRecipientAddressID FROM MailRecipientAddress WHERE ISNULL(Address, '') = ISNULL(@Address, '') AND ISNULL(Name, '') = ISNULL(@Name, '') ``` but even though it is no problem in my case that `NULL` and empty string values are handled equally, I do not like this solution. I think it's kind of hackish. Is there a better approach * to apply a filter on a SELECT statement with parameters which can contain NULL * which works on SQL CE **Edit** I do not understand why, but my query works with SQL Management Studio but it does not in my application (see [here](https://stackoverflow.com/questions/5890970/sql-server-compact-edition-isnullsth-returns-a-boolean-value)). To correct my own approach I would need to use [COALESCE](http://msdn.microsoft.com/en-us/library/ms174075(v=sql.100).aspx). I don't like the option to replace my `NULL` values with empty strings because I think it would be kind of inconsequent to set a value at a place where I got no value or is my understanding of this design question wrong?
The best solution is a constraint on the table that *prevents* duplicates from going into the table. You can put one in with a unique index:

```
create unique index idx_MailRecipientAddress_address_name on MailRecipientAddress(Address, Name);
```

This will generate an error on the `insert`, which you would then need to catch. However, this is only a partial solution, because `NULL` values do not count as duplicates. You might solve your overall problem by not allowing `NULL` values in the field at all. Instead, represent no data using empty strings. Note: I wouldn't normally recommend this. In SQL, `NULL` means "unknown" and by the definition of the language, two "unknown" values are not equal. However, you seem to want them to be equal. As for SQL, yours is okay, but it equates `NULL` and the empty string. An explicit check is more accurate (parentheses added for clarity, since `AND` binds tighter than `OR`):

```
WHERE (Address = @Address or (Address is null and @Address is null)) and
      (Name = @Name or (Name is null and @Name is null))
```
@George, walking through `(Address = @Address OR Address IS NULL)`:

* If the parameter value is NULL and the column value is not NULL, it returns false.
* If the parameter value is NULL and the column value is NULL, it returns true.
* If the parameter value is not NULL and the column value is NULL, it returns true.
* If the parameter value is not NULL and the column value is not NULL, it returns true when they match, otherwise false.
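The NULL-comparison behaviour discussed above can be seen directly in SQLite, whose `IS` operator is a null-safe equality test (MySQL's `<=>` is similar; SQL Server has no direct equivalent, which is why the `OR ... IS NULL` expansion is needed there). A small sketch with data mirroring the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MailRecipientAddress (id INTEGER PRIMARY KEY, Address TEXT, Name TEXT)")
conn.execute("INSERT INTO MailRecipientAddress (Address, Name) VALUES (?, ?)", (None, "Alice"))

# '=' never matches NULL, so this finds nothing:
eq_match = conn.execute(
    "SELECT id FROM MailRecipientAddress WHERE Address = ? AND Name = ?", (None, "Alice")
).fetchone()

# 'IS' treats two NULLs as equal (and works as plain equality otherwise):
is_match = conn.execute(
    "SELECT id FROM MailRecipientAddress WHERE Address IS ? AND Name IS ?", (None, "Alice")
).fetchone()

print(eq_match, is_match)  # None (1,)
```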
How to filter using WHERE with a parameter possibly being null
[ "", "sql", "t-sql", "sql-server-ce", "" ]
I don't know what's wrong with this statement, but whenever I run it I always get an error. Here is my SQL:

```
DELETE FROM tbl_usersinfo 
WHERE users_lname IN (SELECT users_lname 
FROM tbl_usersinfo 
WHERE users_lname = 'asd') 
```

Here is my error:

#1093 - You can't specify target table 'tbl\_usersinfo' for update in FROM clause
try ``` DELETE FROM tbl_usersinfo WHERE users_lname IN (select * from (SELECT users_lname FROM tbl_usersinfo WHERE users_lname = 'asd') as t) ```
Note that `(SELECT users_lname FROM tbl_usersinfo WHERE users_lname = 'asd')` is equivalent to `users_lname = 'asd'`. So the SQL could simply be `DELETE FROM tbl_usersinfo WHERE users_lname = 'asd'`.
subquery delete statement not working
[ "", "mysql", "sql", "" ]
My table has (SubId, QId, Question, AnswerOptions, Ans). SubId is a foreign key and QId is the primary key. I need to select the latest inserted record and display it in a text box. I have already tried these queries:

```
SELECT * FROM tblQuestions

SELECT SCOPE_IDENTITY ()

SELECT MAX(QId) FROM tblQuestions

SELECT TOP 1 QId FROM tblQuestions ORDER BY QId DESC 
```

but I get the first record that was inserted; even when I tried `MAX()` it shows the value 1, and when I tried `MIN()` it also shows 1. How do I get the latest value?

Note: the latest value is the maximum value of QId.
Try this; it will work:

```
;WITH x AS 
(
  SELECT *, r = RANK() OVER (ORDER BY QId DESC)
  FROM tblQuestions
)
SELECT * FROM x WHERE r = 1;
```
Since QId is a varchar type, this should work:

SQL Server:

```
SELECT TOP 1 * 
FROM tblQuestions 
ORDER BY QId ASC 
```

MySQL:

```
SELECT QId 
FROM tblQuestions 
ORDER BY QId ASC 
LIMIT 1 
```

MS Access:

```
SELECT LAST(QId) 
FROM tblQuestions 
```
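The race-free way to fetch the id you just generated is a session-scoped identity function rather than `MAX()`, which can pick up other sessions' inserts. A sketch in Python with SQLite, where `cursor.lastrowid` plays the role of SQL Server's `SCOPE_IDENTITY()` (table name mirrors the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblQuestions (QId INTEGER PRIMARY KEY AUTOINCREMENT, Question TEXT)")

cur = conn.cursor()
cur.execute("INSERT INTO tblQuestions (Question) VALUES (?)", ("first",))
cur.execute("INSERT INTO tblQuestions (Question) VALUES (?)", ("second",))

last_id = cur.lastrowid  # id of the most recent insert on this cursor
max_id = conn.execute("SELECT MAX(QId) FROM tblQuestions").fetchone()[0]
print(last_id, max_id)  # 2 2
```

Here the two agree only because nothing else is inserting; under concurrent writers, `lastrowid` (or `SCOPE_IDENTITY()`) is the one to trust.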
Select latest inserted record in SQL Server
[ "", "sql", "sql-server", "" ]
I am attempting to read a `BLOB` message and display it as a variable in one of my procedures, but am getting the error below: > Error - ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW > conversion (actual: 9923, maximum: 2000) I have googled and found a suggestion to trim the `BLOB` message as below, but I would rather display the complete `BLOB` message as a string. ``` UTL_RAW.CAST_TO_VARCHAR2(dbms_lob.substr(variable_name,2000,1)) ``` How can I display the complete message? Is there a setting on either the database or procedure level that I can change?
I got this working by using the option described in <http://www.dba-oracle.com/t_convert_blob_varchar_datatype.htm>
You can get around the buffer issue by reading the `BLOB` value in chunks. In order to do that, you can define a user defined function (UDF) [as suggested here](http://www.dba-oracle.com/t_convert_blob_varchar_datatype.htm). The following UDF reads the `BLOB` field one chunk at a time, converts that to `VARCHAR2` and appends the results consecutively to return the result as a `CLOB`: ``` CREATE OR REPLACE FUNCTION VC2CLOB_FROM_BLOB(B BLOB) RETURN CLOB IS c CLOB; n NUMBER; BEGIN IF (b IS NULL) THEN RETURN NULL; END IF; IF (LENGTH(b) = 0) THEN RETURN EMPTY_CLOB(); END IF; DBMS_LOB.CREATETEMPORARY(c, TRUE); n := 1; WHILE (n + 32767 <= LENGTH(b)) LOOP DBMS_LOB.WRITEAPPEND(c, 32767, UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(b, 32767, n))); n := n + 32767; END LOOP; DBMS_LOB.WRITEAPPEND(c, LENGTH(b) - n + 1, UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(b, LENGTH(b) - n + 1, n))); RETURN c; END; / ``` After having defined it, you can simply call it like so: ``` SELECT VC2CLOB_FROM_BLOB(variable_name); ``` Worked like a charm for my problem.
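The core trick in the UDF above is reading the LOB in fixed-size pieces instead of one oversized conversion. The same loop shape, sketched in Python against an in-memory byte stream (the 32767 chunk size mirrors the Oracle `VARCHAR2` cap; decoding is deferred until all chunks are joined so multi-byte characters are never split):

```python
import io

CHUNK = 32767  # mirrors the VARCHAR2 size cap in the Oracle UDF

def blob_to_text(stream, encoding="utf-8"):
    """Read a binary stream piece by piece and decode, instead of one big read."""
    parts = []
    while True:
        piece = stream.read(CHUNK)
        if not piece:
            break
        parts.append(piece)
    return b"".join(parts).decode(encoding)

blob = io.BytesIO(b"x" * 100000)  # stand-in for a ~100 KB BLOB
text = blob_to_text(blob)
print(len(text))  # 100000
```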
Error- ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion
[ "", "sql", "oracle", "stored-procedures", "blob", "clob", "" ]
I have 3 tables:

**Doc\_group**

* id
* name

**doc\_type**

* id
* doc\_group\_id
* name

**Doc**

* id
* doc\_type\_id
* name
* date

I would like to retrieve only the rows from Doc\_group where all of its docs have dates less than 90 days old.

Example:

**Doc\_group**

```
+------------------+
| id | name        |
+------------------+
| 1  | doc_group_1 |
| 2  | doc_group_2 |
| 3  | doc_group_3 |
+------------------+
```

**Doc\_type**

```
+---------------------------------+
| id | name       | doc_group_id  |
+---------------------------------+
| 1  | doc_type_1 | 1             |
| 2  | doc_type_2 | 1             |
| 3  | doc_type_2 | 2             |
+---------------------------------+
```

**Doc**:

```
+----------------------------------------+
| id | name  | doc_type_id | date        |
+----------------------------------------+
| 1  | doc_1 | 1           | 01/10/2012  |
| 2  | doc_2 | 2           | 01/9/2012   |
| 3  | doc_3 | 3           | 01/10/2012  |
| 4  | doc_4 | 3           | 26/07/2014  |
+----------------------------------------+
```

Result: only doc\_group\_1 should be returned, as all of its docs are less than 90 days. doc\_group\_2 does not qualify because doc\_4 is not less than 90 days.

**Doc\_group**

```
+------------------+
| id | name        |
+------------------+
| 1  | doc_group_1 |
+------------------+
```

I tried GROUP BY, but I can't get the result I want. Thanks
Here is the query that you're looking for:

```
SELECT DG.*
FROM Doc_group DG
WHERE NOT EXISTS (SELECT D.id
                  FROM Doc D
                  INNER JOIN Doc_type DT ON DT.id = D.doc_type_id
                  WHERE DT.doc_group_id = DG.id
                  AND D.date < DATEADD(DAY, -90, GETUTCDATE()))
```

Note that `Doc` only has a `doc_type_id` column, so the subquery has to join through `Doc_type` to reach the group. That's a solution in T-SQL, I'm not sure about the MySQL version. Hope that this will help you.

After a quick search, here is the MySQL version of the query:

```
SELECT DG.*
FROM Doc_group DG
WHERE NOT EXISTS (SELECT D.id
                  FROM Doc D
                  INNER JOIN Doc_type DT ON DT.id = D.doc_type_id
                  WHERE DT.doc_group_id = DG.id
                  AND D.date < DATE_SUB(CURDATE(), INTERVAL 90 DAY))
```

I don't have any MySQL database to test this query but it should work ;-)
```
select dg.id, dg.name
from Doc_group dg
where dg.id not in (
    select dt.doc_group_id
    from Doc d
    inner join Doc_type dt on dt.id = d.doc_type_id
    where d.date not between date_sub(now(), interval 90 day) and now()
)
```
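The "keep a parent only if no child fails the filter" shape can be tested end to end. A sketch in Python with SQLite, loaded with the question's sample data; note that `Doc` only carries `doc_type_id`, so reaching the group requires a join through `Doc_type`, and the cutoff date is hard-coded here (a stand-in for "90 days before today") so the result is deterministic and matches the question's expected output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Doc_group (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Doc_type  (id INTEGER PRIMARY KEY, doc_group_id INTEGER, name TEXT);
CREATE TABLE Doc       (id INTEGER PRIMARY KEY, doc_type_id INTEGER, name TEXT, date TEXT);
INSERT INTO Doc_group VALUES (1,'doc_group_1'),(2,'doc_group_2');
INSERT INTO Doc_type  VALUES (1,1,'doc_type_1'),(2,1,'doc_type_2'),(3,2,'doc_type_3');
INSERT INTO Doc VALUES (1,1,'doc_1','2012-10-01'),(2,2,'doc_2','2012-09-01'),
                       (3,3,'doc_3','2012-10-01'),(4,3,'doc_4','2014-07-26');
""")

cutoff = "2014-05-01"  # stand-in for "90 days before today"
rows = conn.execute("""
    SELECT dg.name
    FROM Doc_group dg
    WHERE NOT EXISTS (
        SELECT 1
        FROM Doc d
        JOIN Doc_type dt ON dt.id = d.doc_type_id
        WHERE dt.doc_group_id = dg.id
          AND d.date >= ?      -- any doc newer than the cutoff disqualifies the group
    )
""", (cutoff,)).fetchall()
print(rows)  # [('doc_group_1',)]
```

Which side of the cutoff disqualifies a group depends on how "less than 90 days" is read; flip the comparison in the subquery for the opposite reading.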
Sql return all rows from parent table where criteria match from another child of a child table
[ "", "mysql", "sql", "" ]
# Update 4

## Updated the whole question to reflect my changes. Still Not Working.

This has been annoying me for two days now. I'm updating an old ordering interface system that our customers use, written in Classic ASP (VBScript). It connects to a SQL database on Windows Server 2003.

### Stored Procedure

I have a stored procedure that returns a list of pallet codes, filtered by customer ID and searchable by pallet code:

```
CREATE PROCEDURE dbo.sp_PalletSearch
    @CustomerRef Int,
    @SearchQuery VarChar(15) = '%'
AS
SET NoCount On

SET @SearchQuery = '%' + COALESCE(@SearchQuery, '%') + '%'

SELECT p.PalletID, p.PalletCode
FROM dbo.v_PalletSearch p
WHERE p.CustomerRef = @CustomerRef
AND p.PalletCode LIKE @SearchQuery
ORDER BY p.PalletCode ASC

SET NoCount Off
GO
```

This seems to work fine in SQL Query Analyzer with and without a search term: `exec sp_PalletSearch 100, ''` and `exec sp_PalletSearch 100, 'PalletCode'`

### ASP Web Page

So onto the web page itself... This is the ADO Command I use to get the recordset, and this is where my problem starts. It simply will not return anything:

```
Dim strSearchQuery
strSearchQuery = "PalletCode"

Dim objCmd
Set objCmd = Server.CreateObject("ADODB.Command")
objCmd.ActiveConnection = cConn
objCmd.CommandType = adCmdStoredProc
objCmd.CommandText = "sp_PalletSearch"
objCmd.Parameters.Append objCmd.CreateParameter("@CustomerRef", adInteger, adParamInput)
objCmd.Parameters.Append objCmd.CreateParameter("@SearchQuery", adVarChar, adParamInput, 15)
objCmd.Parameters("@CustomerRef").Value = CustomerID
objCmd.Parameters("@SearchQuery").Value = strSearchQuery

Dim objRS
Set objRS = objCmd.Execute
Set objCmd = Nothing

Do While Not objRS.EOF
    Response.Write(objRS("PalletID").Name & ": " & objRS("PalletID").Value & " | " & objRS("PalletCode").Name & ": " & objRS("PalletCode").Value & "<br>")
    objRS.MoveNext
Loop

objRS.Close
Set objRS = Nothing
```

---

### I Have Tried...
If I edit this line in my ADO Command:

```
objCmd.CommandText = "sp_PalletSearch"
```

And change it to:

```
objCmd.CommandText = "{call sp_PalletSearch(?, '" & strSearchQuery & "')}"
```

And remove:

```
objCmd.CommandType = adCmdStoredProc
```

All searching works fine. This is what I will stick to if a *real* solution isn't found.

---

If I edit the stored procedure to get the pallet code that *equals* the search term instead of *LIKE*, and comment out

```
--SET @SearchQuery = '%' + COALESCE(@SearchQuery, '%') + '%'
```

then I will get the exact match. This would tell me that the ADO Command is passing the parameters OK. **But** then why won't the stored procedure get results *LIKE* the `@SearchQuery`?

---

Another thing to note is that replacing the ADO Command with the following works fine with pallet code *LIKE*. I don't see this snippet as a secure option; please tell me if I'm wrong. I would rather use the parametrised command:

```
strSQL = "EXECUTE sp_PalletSearch " & CustomerID & ", '" & strSearchQuery & "' "

Set objRS = Server.CreateObject("ADODB.Recordset")
Set objConn = Server.CreateObject("ADODB.Connection")
objConn.Open cConn
objRS.Open strSQL, objConn
```

---

It's a big ask, but I like to do things efficiently and correctly, and I love to learn. I hope you guys can help me with this puzzle.
# Solved Thank you to Bond and especially Lankymart for your help. Lankymart, your suggestion to use SQL Profiler helped. My server has the older version I guess - Profiler. I found this when looking in the Profiler Trace: `@SearchQuery = 'bww100052 '` So I decided to force a Trim inside the stored procedure: `LTRIM(RTRIM(@SearchQuery))` ## Stored Procedure ``` CREATE PROCEDURE dbo.sp_PalletSearch @CustomerRef Int, @SearchQuery VarChar(15) = '%' AS SET NoCount On SET @SearchQuery = '%' + COALESCE(LTRIM(RTRIM(@SearchQuery)), '%') + '%' SELECT p.PalletID, p.PalletCode FROM dbo.v_PalletSearch p WHERE p.CustomerRef = @CustomerRef AND p.PalletCode LIKE @SearchQuery ORDER BY p.PalletCode ASC SET NoCount Off GO ``` ## ADO Command ``` Dim objCmd Set objCmd = Server.CreateObject("ADODB.Command") objCmd.ActiveConnection = cConn objCmd.CommandType = adCmdStoredProc objCmd.CommandText = "sp_PalletSearch" objCmd.Parameters.Append objCmd.CreateParameter("@CustomerRef", adInteger, adParamInput) objCmd.Parameters.Append objCmd.CreateParameter("@SearchQuery", adVarChar, adParamInput, 15) objCmd.Parameters("@CustomerRef").Value = CustomerID objCmd.Parameters("@SearchQuery").Value = Trim(strSearchQuery) Dim objRS Set objRS = objCmd.Execute Set objCmd = Nothing ``` ### Finally I thought I would never solve this one, it was just making no sense at all! I'll throw a few more tests at it, but it looks like trimming the variable was needed. I don't know why the extra space was added though.
I think you're causing yourself more issues by trying anything and everything. With each attempt you make slight mistakes in your syntax (like quotes in the wrong place, not specifying a `CommandType`, etc.). If it helps, this is how I would code for that stored procedure:

```
Dim cmd, rs, sql
Dim data, rows, row

Set cmd = Server.CreateObject("ADODB.Command")
'Name of your stored procedure
sql = "dbo.sp_PalletSearch"
With cmd
    .ActiveConnection = cConn 'Assuming cConn is a connection string variable
    .CommandType = adCmdStoredProc
    .CommandText = sql
    'Define Stored Procedure parameters
    Call .Parameters.Append(.CreateParameter("@CustomerRef", adInteger, adParamInput, 4))
    Call .Parameters.Append(.CreateParameter("@SearchQuery", adVarChar, adParamInput, 15))
    'First parameter is optional so only pass if we have a value, will default to NULL.
    If Len(CustomerId) > 0 Then .Parameters("@CustomerRef").Value = CustomerID
    .Parameters("@SearchQuery").Value = strSearchQuery
    Set rs = .Execute()
    'Populate 2D-Array with data from Recordset
    If Not rs.EOF Then data = rs.GetRows()
    'Close and Release Recordset from memory
    Call rs.Close()
    Set rs = Nothing
End With
Set cmd = Nothing

If IsArray(data) Then
    rows = UBound(data, 2)
    For row = 0 To rows
        Call Response.Write("Pallet Id: " & data(0, row) & " | Pallet Code: " & data(1, row) & "<br />")
    Next
End If
```
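The root cause found in the accepted answer was a stray trailing space reaching the `LIKE` pattern. Trimming can also be done on the application side before the parameter is bound; a sketch in Python with SQLite standing in for SQL Server (the table and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Pallet (PalletCode TEXT)")
conn.execute("INSERT INTO Pallet VALUES ('BWW100052')")

def search(conn, term):
    # Trim first: 'bww100052 ' would otherwise become '%bww100052 %' and match nothing.
    pattern = "%" + term.strip() + "%"
    return conn.execute(
        "SELECT PalletCode FROM Pallet WHERE PalletCode LIKE ?", (pattern,)
    ).fetchall()

hits = search(conn, "bww100052 ")  # note the trailing space in the input
print(hits)  # [('BWW100052',)]
```

SQLite's `LIKE` is case-insensitive for ASCII by default, which is why the lower-case term still matches here; SQL Server's behaviour depends on the column collation.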
ADO Command Parameter Not Passing to Stored Procedure or Stored Procedure 'Ignoring' Parameter
[ "", "sql", "sql-server", "vbscript", "asp-classic", "ado", "" ]
I have a database with 2 tables: CurrentTickets & ClosedTickets. When a user creates a ticket via the web application, a new row is created. When the user closes a ticket, the row from CurrentTickets is inserted into ClosedTickets and then deleted from CurrentTickets. If a user reopens a ticket, the same thing happens, only in reverse. The catch is that one of the columns being copied back to CurrentTickets is the PK column (TicketID), which has IDENTITY set to ON. I know I can set IDENTITY\_INSERT to ON, but as I understand it, this is generally frowned upon. I'm assuming that my database is a bit poorly designed. Is there a way for me to accomplish what I need without using IDENTITY\_INSERT? How would I keep the TicketID column autoincremented without making it an identity column? I figure I could add another column RowID and make that the PK, but I still want the TicketID column to autoincrement if possible while not being considered an identity column.
The simple answer is: DO NOT make it an identity column if you want influence over the next Id generated in that column.

I also think you have a really poor schema. Rather than having two tables, just add another column in your `CurrentTickets` table, something like `Open BIT`, set its value to `1` by default, and change the value to `0` when the client closes the ticket. You can turn it on/off as many times as the client changes his mind, without having to go through all the trouble of identity inserts and managing a whole separate table.

## Update

Since you have mentioned it is SQL Server 2014, you have access to something called a `Sequence Object`. You define the object once, and then every time you want a sequential number from it you just select the next value from it; it is a kind of hybrid between an identity column and a simple INT column.
This just seems like bad design with 2 tables. Why not just have a single tickets table that stores all tickets. Then add a column called `IsClosed`, which is false by default. Once a ticket is closed you simply update the value to true and you don't have to do any copying to and from other tables. All of your code around this part of your application will be much simpler and easier to maintain with a single table for tickets.
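The single-table design both answers point at — toggle a status flag instead of moving rows between tables — can be sketched in Python with SQLite (names are illustrative; the T-SQL equivalent would be a `BIT` column flipped with an `UPDATE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Tickets (
    TicketID INTEGER PRIMARY KEY AUTOINCREMENT,
    Title    TEXT NOT NULL,
    IsClosed INTEGER NOT NULL DEFAULT 0)""")

cur = conn.execute("INSERT INTO Tickets (Title) VALUES (?)", ("printer on fire",))
ticket_id = cur.lastrowid

# Closing and reopening are plain updates; TicketID never moves between tables,
# so identity insert is never needed.
conn.execute("UPDATE Tickets SET IsClosed = 1 WHERE TicketID = ?", (ticket_id,))
conn.execute("UPDATE Tickets SET IsClosed = 0 WHERE TicketID = ?", (ticket_id,))

state = conn.execute("SELECT IsClosed FROM Tickets WHERE TicketID = ?", (ticket_id,)).fetchone()[0]
print(ticket_id, state)  # 1 0
```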
Insert & Delete from SQL best practice
[ "", "sql", "sql-server", "t-sql", "" ]
Hi all, please help me write a SQL query for this scenario.

Table

```
+----+---------+--------+------------+--------------+
| ID | country | region | restaurant | locationcode |
+----+---------+--------+------------+--------------+
| 1  | IND     | DL     | xyz        | 100          |
| 2  | IND     | DL     | yzc        | 111          |
+----+---------+--------+------------+--------------+
```

SELECT all items
---

```
if the condition matches on country, region, restaurant, locationcode;
if not, then all items that match on country, region, restaurant;
if not, then all items that match on country, region
```

Can I do that with a SQL query, or do I have to handle it using application logic?
This solution is kind of verbose, but it should work. Use a `CASE` to choose from the options, and use the existence of those rows as the condition for choosing.

```
SELECT id FROM mytable
WHERE CASE
    WHEN EXISTS(
        SELECT 1 FROM mytable
        WHERE region='DL' AND country='IND' AND restaurant='xyz' AND locationcode='100')
    THEN region='DL' AND country='IND' AND restaurant='xyz' AND locationcode='100'
    WHEN EXISTS(
        SELECT 1 FROM mytable
        WHERE region='DL' AND country='IND')
    THEN region='DL' AND country='IND'
    WHEN EXISTS(
        SELECT 1 FROM mytable
        WHERE region='DL')
    THEN region='DL'
END;
```
It could be achieved like this using `IF` ``` SELECT * FROM mytable mt WHERE IF( mt.region='DL' AND mt.country='IND' AND mt.restaurant='xyz' AND mt.locationcode='100', 1, IF( mt.region='DL' AND mt.country='IND' AND mt.restaurant='xyz', 1, IF( mt.region='DL' AND mt.country='IND' ,1, 'Nothing Found') ) ) ```
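Besides nested `EXISTS`/`IF` branches, another way to express tiered matching is to score each row by how many criteria it satisfies and keep only the best-scoring rows. A sketch in Python with SQLite, using the question's sample rows (the search values for restaurant and locationcode are made up so that the match falls back to the country + region tier):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE restaurants
    (id INTEGER, country TEXT, region TEXT, restaurant TEXT, locationcode TEXT)""")
conn.executemany("INSERT INTO restaurants VALUES (?,?,?,?,?)", [
    (1, "IND", "DL", "xyz", "100"),
    (2, "IND", "DL", "yzc", "111"),
])

params = {"country": "IND", "region": "DL", "restaurant": "abc", "locationcode": "999"}
rows = conn.execute("""
    SELECT id,
           (country = :country) + (region = :region)
         + (restaurant = :restaurant) + (locationcode = :locationcode) AS score
    FROM restaurants
    WHERE country = :country AND region = :region   -- the minimum acceptable tier
""", params).fetchall()

best = max(s for _, s in rows)          # assumes at least the base tier matched
matches = [i for i, s in rows if s == best]
print(matches)  # [1, 2]
```

In SQLite a comparison evaluates to 0 or 1, so the scores can simply be summed; the same trick works in MySQL, while T-SQL would need `CASE WHEN ... THEN 1 ELSE 0 END` per criterion.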
Need a sql query for this scenario
[ "", "mysql", "sql", "sql-server", "" ]