I need to check whether the `mytable` table contains a `mycolumn` column. Here is my query: ``` SELECT CASE WHEN EXISTS (SHOW COLUMNS FROM mytable LIKE mycolumn) THEN 1 ELSE 0 END; ``` But it doesn't work and throws this error message: > **#1064** - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use > near 'SHOW COLUMNS FROM mytable LIKE mycolumn) THEN 1 ELSE 0 END at line 1 What's wrong and how can I fix it?
You can use the following as an `IF`: ``` IF EXISTS( select * from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME ='SOMETABLE' AND COLUMN_NAME = 'SOMECOLUMN') BEGIN -- do stuff END GO ``` Alternatively, as a `CASE`: ``` SELECT CASE WHEN EXISTS( select * from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME ='TABLE_NAME' AND COLUMN_NAME = 'COLUMN_NAME') Then 1 Else 0 End; ```
Try this instead ``` SELECT CASE WHEN EXISTS ( SELECT * FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'db_name' AND TABLE_NAME = 'table_name' AND COLUMN_NAME = 'column_name') then 1 else 0 end; ```
How do I check whether column exists in the table?
[ "", "mysql", "sql", "" ]
I am new to SQL and I'm having difficulties writing the following query. **Scenario** A user has two addresses, home address (`App\User`) and listing address (`App\Listing`). When a visitor searches for listings for a Suburb or postcode or state, if the user's listing address does not match - but if home address does match - they will be in the search result too. For example: if a visitor searches for `Melbourne`, I want to include listings from `Melbourne` and also the listings for the users who have an address in `Melbourne`. **Expected output:** ``` user_id first_name email suburb postcode state 1 Mathew mathew.afsd@gmail.com Melbourne 3000 VIC 2 Zammy Zamm@xyz.com Melbourne 3000 VIC ``` **Tables** users: ``` id first_name email 1 Mathew mathew.afsd@gmail.com 2 Zammy Zamm@xyz.com 3 Tammy tammy@unknown.com 4 Foo foo@hotmail.com 5 Bar bar@jhondoe.com.au ``` listings: ``` id user_id hourly_rate description 1 1 30 ABC 2 2 40 CBD 3 3 50 XYZ 4 4 49 EFG 5 5 10 Efd ``` addresses: ``` id addressable_id addressable_type post_code suburb state latitude longitude 3584 1 App\\User 2155 Rouse Hill NSW -33.6918372 150.9007221 3585 2 App\\User 3000 Melbourne VIC -33.6918372 150.9007221 3586 3 App\\User 2000 Sydney NSW -33.883123 151.245969 3587 4 App\\User 2008 Chippendale NSW -33.8876392 151.2011224 3588 5 App\\User 2205 Wolli Creek NSW -33.935259 151.156301 3591 1 App\\Listing 3000 Melbourne VIC -37.773923 145.12385 3592 2 App\\Listing 2030 Vaucluse NSW -33.858935 151.2784079 3597 3 App\\Listing 4000 Brisbane QLD -27.4709331 153.0235024 3599 4 App\\Listing 2000 Sydney NSW -33.91741 151.231307 3608 5 App\\Listing 2155 Rouse Hill NSW -33.863464 151.271504 ```
Try this. You can check it [here](http://sqlfiddle.com/#!9/54a0376/6). ``` SELECT l.* FROM listings l LEFT JOIN addresses a_l ON a_l.addressable_id = l.id AND a_l.addressable_type = "App\\Listing" AND a_l.suburb = "Melbourne" LEFT JOIN addresses a_u ON a_u.addressable_id = l.user_id AND a_u.addressable_type = "App\\User" AND a_u.suburb = "Melbourne" WHERE a_l.id IS NOT NULL OR a_u.id IS NOT NULL ```
As per my understanding of your question, for any suburb supplied by a visitor, you want to include all the listings where either the user's address or the listing's address matches the supplied suburb. Assuming the addressable\_id column relates to the id of either the users or the listings table, depending on the value in the addressable\_type column, you can use the following query to join and get the desired result: ``` Select l.* From Listings l inner join Addresses a on ((a.addressable_id = l.user_Id and a.addressable_type = 'App\\User') or (a.addressable_id = l.Id and a.addressable_type = 'App\\Listing')) inner join Addresses a1 On a1.addressable_id = a.addressable_id and a1.Suburb = 'Melbourne' ```
Select locations from two tables
[ "", "mysql", "sql", "" ]
I have a relatively large table (currently 3 million records). It has these columns: ``` [id] INT IDENTITY(1,1) NOT NULL, [runId] INT NOT NULL, [request] VARCHAR(MAX) NULL, [response] VARCHAR(MAX) NULL ``` And an index as: `CONSTRAINT [Id_Indexed] PRIMARY KEY CLUSTERED` I have a view on this table. When I run queries like: ``` Query 1 on table -- SELECT COUNT(*) FROM API (nolock) WHERE runId = 22 Query 2 on view -- SELECT COUNT(*) FROM API_View WHERE runId = 22 ``` both return a count of around 1 million, but query 1 takes 16 minutes while query 2 takes 18 minutes. Is it possible to improve this?
As people have already mentioned, add an index on the `runId` column. Depending on how the table is used, you can also think about using the `WITH (NOLOCK)` hint. In some cases it can improve performance a lot. Read here for further information: <https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/> Another piece of advice (not regarding your performance issue): check whether you really need varchar(max); often varchar(255) fits better, and varchar(max) uses more space on your disk.
OK, create the index: ``` CREATE NONCLUSTERED INDEX [IX_Table_RunId] ON [Table]([RunId]); ``` and run the queries again.
How to improve performance of relatively large table
[ "", "sql", "sql-server", "sql-server-2014", "" ]
Is there a relationship between the `JOIN` clauses used in a `SELECT` statement and how the two tables are related to one another, i.e. one to many, many to one, one to one? If not, can/should those types of table relationships be defined in SQL code?
Defining these relationships is done by using foreign keys. These keys ensure referential integrity by enforcing constraints on either side. Example: ``` table-one: id, name table-many: id, name, table-one_id ``` Here, `table-one_id` is a foreign key (referencing the id of table-one), ensuring you can only enter valid ids (present in table-one). Defining FKs is not mandatory, but it provides referential integrity. JOINs in SELECT statements are often done using these foreign keys, but that is not technically required.
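To make the constraint behaviour concrete, here is a small sketch using Python's built-in `sqlite3` (the table names mirror the example above and are purely illustrative; note that SQLite only enforces foreign keys when the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE table_one (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE table_many (
    id INTEGER PRIMARY KEY,
    name TEXT,
    table_one_id INTEGER REFERENCES table_one(id))""")
conn.execute("INSERT INTO table_one VALUES (1, 'parent')")
conn.execute("INSERT INTO table_many VALUES (1, 'child', 1)")  # valid reference

rejected = False
try:
    conn.execute("INSERT INTO table_many VALUES (2, 'orphan', 99)")  # no parent 99
except sqlite3.IntegrityError:
    rejected = True  # the FK constraint blocks the invalid id
print(rejected)
```

The orphan row is rejected, while the valid child row goes through, which is exactly the referential integrity described above.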
There are a couple of questions here: > Is there a relationship between the JOIN clauses used in in a SELECT statement and how the two tables are related to one another? In the vast majority of cases, yes, a `JOIN` clause will illustrate one of the ways two tables are related to each other. But this is not always the case. Consider the two following examples: 1) ``` Select * From TableA A Join TableB B On A.B_Id = B.Id ``` 2) ``` Select * From TableA A Join @CodeList B On A.Code = B.Code ``` In the first example `JOIN`, there is a defined relationship in the table between `TableA` and `TableB`. However, in the second example, it is more likely that `@CodeList` is acting more as a *filter* for `TableA`. The `JOIN` in this situation is not over a defined relationship between the two tables, but rather a means to filter the data to a defined set. So to answer your first question: a `JOIN` will usually indicate some kind of relationship between two tables, but its presence, alone, doesn't always mean that. > Can/should those type of table relationships be defined in SQL code? Not necessarily. Even discounting the above example where there was no intended relationship between the tables for the `JOIN` condition, a defined `FOREIGN KEY` relationship is not always desirable. One thing to keep in mind with `FOREIGN KEY CONSTRAINTS` is that they are *`CONSTRAINTS`*. Whether or not you wish to physically constrain your data to not allow values that violate the constraint is completely situational, based on your needs. *Can* they? Yes, they certainly can. *Should* they? Not always - it depends on your intention.
Are JOINs and one to many type relationships related?
[ "", "sql", "" ]
If I have the table [![enter image description here](https://i.stack.imgur.com/nwx03.png)](https://i.stack.imgur.com/nwx03.png) ``` SELECT (Firstname || '-' || Middlename || '-' || Surname) AS example_column FROM example_table ``` This will display Firstname-Middlename-Surname, e.g. ``` John--Smith Jane-Anne-Smith ``` The second one (Jane’s) displays correctly; however, since John doesn’t have a middle name, I want it to ignore the second dash. How could I put in a sort of IF Middlename = NULL statement so that it would just display John-Smith?
Here would be my suggestions. For PostgreSQL and other SQL databases where `'a' || NULL IS NULL`, use [COALESCE](https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL): ``` SELECT firstname || COALESCE('-' || middlename, '') || '-' || surname ... ``` For Oracle and other SQL databases where `'a' || NULL = 'a'`: ``` SELECT firstname || DECODE(middlename, NULL, '', '-' || middlename) || '-' || surname... ``` I like to go for conciseness. Here it is not very interesting to any maintenance programmer whether the middle name is empty or not. CASE expressions are perfectly fine, but they are bulky, and I'd like to avoid repeating the same column name (`middlename`) where possible. As @Prdp noted, the answer is RDBMS-specific. What is specific is whether the server treats a zero-length string as equivalent to `NULL`, which determines whether concatenating a `NULL` yields a `NULL` or not. Generally `COALESCE` is most concise for the PostgreSQL-style behaviour, and `DECODE(value, NULL, '', ...)` for the Oracle-style behaviour.
If you use Postgres, `concat_ws()` is what you are looking for: ``` SELECT concat_ws('-', Firstname, Middlename, Surname) AS example_column FROM example_table ``` SQLFiddle: <http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/8812> To treat empty strings or strings that only contain spaces like `NULL` use `nullif()`: ``` SELECT concat_ws('-', Firstname, nullif(trim(Middlename), ''), Surname) AS example_column FROM example_table ```
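The `COALESCE` approach can be exercised quickly with Python's `sqlite3`, since SQLite follows the PostgreSQL-style rule that `'a' || NULL` is NULL (a minimal sketch; column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example_table (Firstname TEXT, Middlename TEXT, Surname TEXT)")
conn.executemany("INSERT INTO example_table VALUES (?, ?, ?)",
                 [("John", None, "Smith"), ("Jane", "Anne", "Smith")])

# '-' || NULL evaluates to NULL, so COALESCE drops the dash for missing middle names
rows = conn.execute("""
    SELECT Firstname || COALESCE('-' || Middlename, '') || '-' || Surname
    FROM example_table
""").fetchall()
print(rows)  # [('John-Smith',), ('Jane-Anne-Smith',)]
```

John's row comes out without the doubled dash, which is the behaviour the question asks for.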
SQL using If Not Null on a Concatenation
[ "", "sql", "null", "concatenation", "" ]
I have the following two tables T1 and T2. Table T1 ``` Id Value1 1 2 2 1 3 2 ``` Table T2 ``` Id Value2 1 3 2 1 4 1 ``` I need a SQL SERVER query to return the following ``` Id Value1 Value2 1 2 3 2 1 1 3 2 0 4 0 1 ``` Thanks in advance!!
You can achieve this with a **FULL OUTER JOIN** and **ISNULL**. Execution with the given sample data: ``` DECLARE @Table1 TABLE (Id INT, Value1 INT) INSERT INTO @Table1 VALUES (1, 2), (2, 1), (3, 2) DECLARE @Table2 TABLE (Id INT, Value2 INT) INSERT INTO @Table2 VALUES (1, 3), (2, 1), (4, 1) SELECT ISNULL(T1.Id, T2.Id) AS Id, ISNULL(T1.Value1, 0) AS Value1, ISNULL(T2.Value2, 0) AS Value2 FROM @Table1 T1 FULL OUTER JOIN @Table2 T2 ON T2.Id = T1.Id ``` Result: ``` Id Value1 Value2 1 2 3 2 1 1 3 2 0 4 0 1 ```
FYI - `Merge` means something different in SQL Server. I would suggest if you have a table which contains a list of all possible Id values, I would select everything from that and have two left outer joins to T1 and T2. Assuming there isn't one, with only what is provided, it sounds like you want a full outer join. Something like this should work: ``` SELECT Id = COALESCE(T1.Id, T2.Id), Value1 = COALESCE(T1.Value1, 0), Value2 = COALESCE(T2.Value2, 0) FROM T1 FULL OUTER JOIN T2 ON T1.ID = T2.ID ```
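On engines without `FULL OUTER JOIN` (MySQL, for example), the same result can be emulated with two `LEFT JOIN`s glued together by `UNION`; here is a sketch using Python's `sqlite3` with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (Id INTEGER, Value1 INTEGER)")
conn.execute("CREATE TABLE T2 (Id INTEGER, Value2 INTEGER)")
conn.executemany("INSERT INTO T1 VALUES (?, ?)", [(1, 2), (2, 1), (3, 2)])
conn.executemany("INSERT INTO T2 VALUES (?, ?)", [(1, 3), (2, 1), (4, 1)])

# LEFT JOIN both ways and UNION the results to emulate a FULL OUTER JOIN;
# UNION (not UNION ALL) removes the rows matched in both directions
rows = conn.execute("""
    SELECT COALESCE(T1.Id, T2.Id) AS Id,
           COALESCE(T1.Value1, 0) AS Value1,
           COALESCE(T2.Value2, 0) AS Value2
    FROM T1 LEFT JOIN T2 ON T2.Id = T1.Id
    UNION
    SELECT COALESCE(T1.Id, T2.Id), COALESCE(T1.Value1, 0), COALESCE(T2.Value2, 0)
    FROM T2 LEFT JOIN T1 ON T1.Id = T2.Id
    ORDER BY Id
""").fetchall()
print(rows)  # [(1, 2, 3), (2, 1, 1), (3, 2, 0), (4, 0, 1)]
```

The output matches the expected result in the question, with `COALESCE` standing in for T-SQL's `ISNULL`.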
How to merge two tables in SQL SERVER?
[ "", "sql", "sql-server", "" ]
[SQL FIDDLE DEMO HERE](http://sqlfiddle.com/#!6/2a7c5/1) I have this table structure for the SheduleWorkers table: ``` CREATE TABLE SheduleWorkers ( [Name] varchar(250), [IdWorker] varchar(250), [IdDepartment] int, [IdDay] int, [Day] varchar(250) ); INSERT INTO SheduleWorkers ([Name], [IdWorker], [IdDepartment], [IdDay], [Day]) values ('Sam', '001', 5, 1, 'Monday'), ('Lucas', '002', 5, 2, 'Tuesday'), ('Maria', '003', 5, 1, 'Monday'), ('José', '004', 5, 3, 'Wednesday'), ('Julianne', '005', 5, 3, 'Wednesday'), ('Elisa', '006', 18, 1, 'Monday'), ('Gabriel', '007', 23, 5, 'Friday'); ``` I need to display, for each weekday, the names of the workers in department 5 who work on that day, like this: ``` MONDAY TUESDAY WEDNESDAY THURSDAY FRIDAY SATURDAY ------ ------- --------- -------- ------ ------- Sam Lucas Jose Maria Julianne ``` How can I get this result? I accept suggestions. Thanks.
You can use `PIVOT` for this, together with a ranking function partitioned by `[Day]`: ``` SELECT [Monday] , [Tuesday] , [Wednesday] , [Thursday] , [Friday], [SATURDAY] FROM (SELECT [Day],[Name],RANK() OVER (PARTITION BY [Day] ORDER BY [Day],[Name]) as rnk FROM SheduleWorkers) p PIVOT( Min([Name]) FOR [Day] IN ( [Monday] , [Tuesday] , [Wednesday] , [Thursday] , [Friday], [SATURDAY] ) ) AS pvt ```
``` DECLARE @SheduleWorkers TABLE ( [Name] VARCHAR(250) , [IdWorker] VARCHAR(250) , [IdDepartment] INT , [IdDay] INT , [Day] VARCHAR(250) ); INSERT INTO @SheduleWorkers ( [Name], [IdWorker], [IdDepartment], [IdDay], [Day] ) VALUES ( 'Sam', '001', 5, 1, 'Monday' ), ( 'Lucas', '002', 5, 2, 'Tuesday' ), ( 'Maria', '003', 5, 1, 'Monday' ), ( 'José', '004', 5, 3, 'Wednesday' ), ( 'Julianne', '005', 5, 3, 'Wednesday' ), ( 'Elisa', '006', 18, 1, 'Monday' ), ( 'Gabriel', '007', 23, 5, 'Friday' ); ; WITH cte AS ( SELECT Name , Day , ROW_NUMBER() OVER ( PARTITION BY Day ORDER BY [IdWorker] ) AS rn FROM @SheduleWorkers ) SELECT [MONDAY] , [TUESDAY] , [WEDNESDAY] , [THURSDAY] , [FRIDAY] , [SATURDAY] FROM cte PIVOT( MAX(Name) FOR day IN ( [MONDAY], [TUESDAY], [WEDNESDAY], [THURSDAY], [FRIDAY], [SATURDAY] ) ) p ``` Output: ``` MONDAY TUESDAY WEDNESDAY THURSDAY FRIDAY SATURDAY Sam Lucas José NULL Gabriel NULL Maria NULL Julianne NULL NULL NULL Elisa NULL NULL NULL NULL NULL ``` The main idea is `row_number` window function in the common table expression, which will give you as many rows as there are maximum duplicates across a day.
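`PIVOT` is T-SQL-specific. On engines that lack it, the same layout can be built with `ROW_NUMBER` plus conditional aggregation; the sketch below uses Python's `sqlite3` (window functions need SQLite 3.25+) and also applies the department-5 filter the question asks for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SheduleWorkers (Name TEXT, IdWorker TEXT, IdDepartment INT, IdDay INT, Day TEXT)")
conn.executemany("INSERT INTO SheduleWorkers VALUES (?, ?, ?, ?, ?)", [
    ("Sam", "001", 5, 1, "Monday"), ("Lucas", "002", 5, 2, "Tuesday"),
    ("Maria", "003", 5, 1, "Monday"), ("José", "004", 5, 3, "Wednesday"),
    ("Julianne", "005", 5, 3, "Wednesday"), ("Elisa", "006", 18, 1, "Monday"),
    ("Gabriel", "007", 23, 5, "Friday"),
])

# ROW_NUMBER assigns a slot per day; conditional MAX spreads the names into columns
rows = conn.execute("""
    WITH cte AS (
        SELECT Name, Day,
               ROW_NUMBER() OVER (PARTITION BY Day ORDER BY IdWorker) AS rn
        FROM SheduleWorkers
        WHERE IdDepartment = 5        -- the question asks for department 5 only
    )
    SELECT MAX(CASE WHEN Day = 'Monday'    THEN Name END) AS Monday,
           MAX(CASE WHEN Day = 'Tuesday'   THEN Name END) AS Tuesday,
           MAX(CASE WHEN Day = 'Wednesday' THEN Name END) AS Wednesday,
           MAX(CASE WHEN Day = 'Friday'    THEN Name END) AS Friday
    FROM cte
    GROUP BY rn
    ORDER BY rn
""").fetchall()
print(rows)  # [('Sam', 'Lucas', 'José', None), ('Maria', None, 'Julianne', None)]
```

Each result row corresponds to one `rn` slot, which reproduces the "as many rows as the busiest day" behaviour of the answers above.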
How can I display rows values in columns sql server?
[ "", "sql", "sql-server", "group-by", "union-all", "weekday", "" ]
I'm supposed to write a query for this statement: > List the names of customers, and album titles, for cases where the customer has bought the entire album (i.e. all tracks in the album) I know that I should use division. Here is my answer but I get some weird syntax errors that I can't resolve. ``` SELECT R1.FirstName ,R1.LastName ,R1.Title FROM (Customer C, Invoice I, InvoiceLine IL, Track T, Album Al) AS R1 WHERE C.CustomerId=I.CustomerId AND I.InvoiceId=IL.InvoiceId AND T.TrackId=IL.TrackId AND Al.AlbumId=T.AlbumId AND NOT EXISTS ( SELECT R2.Title FROM (Album Al, Track T) AS R2 WHERE T.AlbumId=Al.AlbumId AND R2.Title NOT IN ( SELECT R3.Title FROM (Album Al, Track T) AS R3 WHERE COUNT(R1.TrackId)=COUNT(R3.TrackId) ) ); ``` ERROR: `misuse of aggregate function COUNT()` You can find the schema for the database [here](https://chinookdatabase.codeplex.com/wikipage?title=Chinook_Schema&referringTitle=Documentation)
You cannot alias a table list such as `(Album Al, Track T)` which is an out-dated syntax for `(Album Al CROSS JOIN Track T)`. You can either alias a table, e.g. `Album Al` or a subquery, e.g. `(SELECT * FROM Album CROSS JOIN Track) AS R2`. So first of all you should get your joins straight. I don't assume that you are being taught those old comma-separated joins, but got them from some old book or Website? Use proper explicit joins instead. Then you cannot use `WHERE COUNT(R1.TrackId) = COUNT(R3.TrackId)`. `COUNT` is an aggregate function and aggregation is done after `WHERE`. As to the query: It's a good idea to compare track counts. So let's do that step by step. Query to get the track count per album: ``` select albumid, count(*) from track group by albumid; ``` Query to get the track count per customer and album: ``` select i.customerid, t.albumid, count(distinct t.trackid) from track t join invoiceline il on il.trackid = t.trackid join invoice i on i.invoiceid = il.invoiceid group by i.customerid, t.albumid; ``` Complete query: ``` select c.firstname, c.lastname, a.title from ( select i.customerid, t.albumid, count(distinct t.trackid) as cnt from track t join invoiceline il on il.trackid = t.trackid join invoice i on i.invoiceid = il.invoiceid group by i.customerid, t.albumid ) bought join ( select albumid, count(*) as cnt from track group by albumid ) complete on complete.albumid = bought.albumid and complete.cnt = bought.cnt join customer c on c.customerid = bought.customerid join album a on a.albumid = bought.albumid; ```
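The count-comparison idea is easy to test on a toy schema. The sketch below uses Python's `sqlite3` with a deliberately simplified two-table stand-in for the Chinook schema (the table and column names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE track (trackid INT, albumid INT);
    CREATE TABLE purchase (customer TEXT, trackid INT);
    -- album 1 has 2 tracks, album 2 has 1 track
    INSERT INTO track VALUES (10, 1), (11, 1), (20, 2);
    -- Ann bought all of album 1; Bob bought only part of album 1, all of album 2
    INSERT INTO purchase VALUES ('Ann', 10), ('Ann', 11), ('Bob', 10), ('Bob', 20);
""")

# a customer "owns" an album when their distinct-track count equals the album's total
rows = conn.execute("""
    SELECT bought.customer, bought.albumid
    FROM (SELECT p.customer, t.albumid, COUNT(DISTINCT t.trackid) AS cnt
          FROM purchase p JOIN track t ON t.trackid = p.trackid
          GROUP BY p.customer, t.albumid) bought
    JOIN (SELECT albumid, COUNT(*) AS cnt FROM track GROUP BY albumid) complete
      ON complete.albumid = bought.albumid AND complete.cnt = bought.cnt
    ORDER BY bought.customer, bought.albumid
""").fetchall()
print(rows)  # [('Ann', 1), ('Bob', 2)]
```

Bob's partial purchase of album 1 is correctly excluded, which is the relational-division behaviour the answer builds up to.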
It seems you are using `COUNT` in the wrong place; use `HAVING` for aggregate functions: ``` SELECT R3.Title FROM (Album Al, Track T) AS R3 HAVING COUNT(R1.TrackId)=COUNT(R3.TrackId) ``` But be careful with the aliases, because in some databases an outer alias is not available in a subquery.
Relational division
[ "", "mysql", "sql", "sql-server", "sqlite", "relational-division", "" ]
I've been tasked to develop a query that behaves essentially like the following one: ``` SELECT * FROM tblTestData WHERE *.TestConditions LIKE '*textToSearch*' ``` The *textToSearch* is a string which contains information about the conditions in which a given device is tested (voltage, current, frequency, etc.) in the following format, as an example: ``` [V:127][PF:1][F:50][I:65] ``` The objective is to recover a list of any and all tests performed at a voltage of 127 volts, so the SQL developed would look like the following: ``` SELECT * FROM tblTestData WHERE *.TestConditions LIKE '*V:127*' ``` This works as intended, but there is a problem: due to improperly entered data, there are cases in which the *textToSearch* string looks like the following examples: ``` [V.127][PF:1][F:50][I:65] [V.230][PF:1][F:50][I:65] ``` As you can see, my previous SQL transaction does not work, as these rows do not meet the condition. If I try the following transaction with the objective of ignoring the improper data format: ``` SELECT * FROM tblTestData WHERE *.TestConditions LIKE '*V*127*' ``` the transaction is not successful and returns an error. What am I doing wrong for this transaction not to work? Am I approaching this problem wrong? I also see a potential problem with this transaction: if there were a group of test conditions like the following: ``` [V.127][PF:1][F:50][I:127] [V.230][PF:1][F:50][I:127] ``` would it return both rows, given that both meet the condition of the transaction stated above? In conclusion, my questions are: 1. What is wrong with the LIKE '\*V\*127\*' condition for it not to work? 2. What implications does working with this condition have? Can it return more information than desired if I am not careful? I hope it is clear what I am asking for; if it isn't, please point out what is not clear and I will try to clarify it.
One choice is to look for *any* character between the "V" and the "127": ``` WHERE TestConditions LIKE '%V_127%' ``` Note that `%` is the wildcard for a string of any length and `_` is the wildcard for a single character. You can also use regular expressions: ``` WHERE regexp_like(TestConditions, 'V[.:]127') ``` Note that regular expressions match anywhere in the string, so wildcards at the beginning and end are not needed.
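The single-character `_` wildcard behaves the same way in most engines, so the pattern above is easy to verify; a quick sketch with Python's `sqlite3` (sample strings taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblTestData (TestConditions TEXT)")
conn.executemany("INSERT INTO tblTestData VALUES (?)", [
    ("[V:127][PF:1][F:50][I:65]",),   # well-formed row
    ("[V.127][PF:1][F:50][I:65]",),   # mistyped separator
    ("[V:230][PF:1][F:50][I:127]",),  # 127 appears, but not as the voltage
])

# '_' matches exactly one character, so both 'V:127' and 'V.127' qualify,
# while a stray 'I:127' on its own does not
rows = conn.execute(
    "SELECT TestConditions FROM tblTestData WHERE TestConditions LIKE '%V_127%'"
).fetchall()
print(len(rows))  # 2
```

Only the two voltage-127 rows match; the row where 127 is the current value is left out, which addresses the question's worry about over-matching.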
You could check for both cases (although this will decrease performance) ``` SELECT * FROM tblTestData WHERE (TestConditions LIKE '%V:127%' OR TestConditions LIKE '%V.127%') ``` It is better to clean the data in your database if only old records have this problem.
SQL Like condition fails to run
[ "", "sql", "oracle", "" ]
How can I list the names of all countries whose surface area is greater than that of all other countries in the same region, from the country table below: ``` +----------------------------------------------+---------------------------+-------------+ | name | region | surfacearea | +----------------------------------------------+---------------------------+-------------+ | Aruba | Caribbean | 193.00 | | Afghanistan | Southern and Central Asia | 652090.00 | | Angola | Central Africa | 1246700.00 | | Anguilla | Caribbean | 96.00 | | Albania | Southern Europe | 28748.00 | | Andorra | Southern Europe | 468.00 | | Netherlands Antilles | Caribbean | 800.00 | ``` So far I have come up with this query, but it does not list the country names. Is it correct? ``` select region, max(surfacearea) as maxArea from country group by region; ```
You could use an inner join with a derived table: ``` select name from country as a inner join ( select region, max(surfacearea) as maxarea from country group by region ) as t on a.region = t.region where a.surfacearea = t.maxarea; ```
It looks like the query you have identifies the "largest" value of surfacearea for each region. To get the country, you can join the result from your query back to the country table again, to get the country that matches on region and surfacearea. ``` SELECT c.* FROM ( -- largest surfacearea for each region SELECT n.region , MAX(n.surfacearea) AS max_area FROM country n GROUP BY n.region ) m JOIN country c ON c.region = m.region AND c.surfacearea = m.max_area ```
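The join-back-to-the-maxima pattern can be sanity-checked with Python's `sqlite3` on a subset of the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE country (name TEXT, region TEXT, surfacearea REAL)")
conn.executemany("INSERT INTO country VALUES (?, ?, ?)", [
    ("Aruba", "Caribbean", 193.0), ("Anguilla", "Caribbean", 96.0),
    ("Netherlands Antilles", "Caribbean", 800.0),
    ("Albania", "Southern Europe", 28748.0), ("Andorra", "Southern Europe", 468.0),
])

# join the per-region maxima back to the table to recover the country names
rows = conn.execute("""
    SELECT c.name, c.region
    FROM (SELECT region, MAX(surfacearea) AS max_area
          FROM country GROUP BY region) m
    JOIN country c ON c.region = m.region AND c.surfacearea = m.max_area
    ORDER BY c.region
""").fetchall()
print(rows)  # [('Netherlands Antilles', 'Caribbean'), ('Albania', 'Southern Europe')]
```

One caveat worth knowing: if two countries in a region tie on the maximum area, this join returns both of them.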
Listing based on a particular SQL Group
[ "", "mysql", "sql", "" ]
I have a query (`SQL Server`) that returns a decimal. I only need 2 decimals without rounding: [![enter image description here](https://i.stack.imgur.com/TEdzA.png)](https://i.stack.imgur.com/TEdzA.png) In the example above I would need to get: **3381.57** Any clue?
You could accomplish this via the [`ROUND()`](https://msdn.microsoft.com/en-us/library/ms175003.aspx?) function, using the length and function parameters to truncate your value instead of actually rounding it: ``` SELECT ROUND(3381.5786, 2, 1) ``` The second parameter of `2` indicates that the value will be kept to two decimal places, and the third (function) parameter indicates whether actual rounding or truncation is performed (non-zero values truncate instead of round). **Example** [![enter image description here](https://i.stack.imgur.com/D9Lm7.png)](https://i.stack.imgur.com/D9Lm7.png) You can [see an interactive example of this in action here](http://sqlfiddle.com/#!6/9eecb7/7758).
Another possibility is to use `TRUNCATE`: ``` SELECT 3381.5786, {fn TRUNCATE(3381.5786,2)}; ``` `LiveDemo`
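For engines that have neither the three-argument `ROUND` nor `TRUNCATE`, truncation to two decimals can be emulated by scaling and casting; a portable sketch in Python's `sqlite3` (beware of binary floating-point inputs, where the scaled value can land just below the expected integer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# scale up, chop the fraction with an integer CAST, then scale back down
value = conn.execute(
    "SELECT CAST(3381.5786 * 100 AS INTEGER) / 100.0"
).fetchone()[0]
print(value)  # 3381.57
```

`CAST(... AS INTEGER)` truncates toward zero, so `3381.5786` becomes `3381.57` rather than the rounded `3381.58`.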
SQL get decimal with only 2 places with no round
[ "", "sql", "sql-server", "t-sql", "" ]
I've been leveraging information gleaned from other threads and have gotten really close, but I am missing something. Here is my code as I have it right now in a SQL query window: ``` WITH n AS ( SELECT sub_idx AS current_id, ROW_NUMBER() OVER (PARTITION BY EID ORDER BY alt_sub_idx) AS new_id FROM GETT_Documents ) UPDATE GETT_Documents SET sub_idx = n.new_id FROM n WHERE EID = 'AC-1.1.i'; ``` This seemed like it should work, but instead of numbering the sub\_idx column from 1 to 11 it put all 1's in that column. [![View of relevant rows](https://i.stack.imgur.com/d0Xnz.jpg)](https://i.stack.imgur.com/d0Xnz.jpg) Can someone with sharp eyes point out the error of my ways? Then perhaps suggest how I might change this to increment by 10s instead of single digits, because I would like to turn around and do the same thing to the alt\_sub\_idx column, but in increments of 10. Regards, Ken...
``` DECLARE @GETT_DOCUMENTS TABLE (DID INT, EID VARCHAR(1), SUB_IDX INT, ALT_SUB_IDX INT) INSERT INTO @GETT_DOCUMENTS VALUES (1,'A',0,10), (2,'A',0,20), (3,'A',0,30), (4,'A',0,40), (5,'A',0,50), (6,'A',0,60), (7,'A',0,70), (8,'A',0,80), (9,'A',0,90), (10,'A',0,100), (11,'A',0,110), (12,'A',0,120) ;WITH n AS ( SELECT DID AS DID, sub_idx AS current_id, ROW_NUMBER() OVER (PARTITION BY EID ORDER BY alt_sub_idx) AS new_id FROM @GETT_Documents ) --SELECT * FROM N UPDATE @GETT_Documents SET sub_idx = n.new_id FROM @GETT_Documents G JOIN n ON N.DID = G.DID WHERE EID = 'A'; SELECT * FROM @GETT_Documents ```
Your UPDATE isn't correlated, so it is just grabbing the first row from the cte every time. It needs to be like this: ``` ... UPDATE d SET sub_idx = n.new_id FROM n INNER JOIN GETT_Documents d ON d.sub_idx = n.current_id WHERE d.EID = 'AC-1.1.i'; ```
ROW_NUMBER() OVER with sub set
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a query with multiple joins for which the `DOC_TYPE` column comes back NULL even though it has values. The query is below: ``` SELECT a.mkey, c.type_desc DOC_TYPE, a.doc_no INWARD_NO, CONVERT(VARCHAR, a.doc_date, 103) date, a.to_user, a.No_of_pages, Ref_No, c.type_desc DEPT_RECEIVED, c.type_desc EMP_RECEIVED, b.first_name + ' ' + b.last_name NAME, b.email FROM inward_doc_tracking_hdr a LEFT JOIN user_mst b ON a.to_user = b.mkey LEFT JOIN type_mst_a c ON a.doc_type = c.master_mkey AND a.dept_received = c.Master_mkey AND a.emp_received = c.Master_mkey WHERE a.to_user = '1279' ``` The `doc_type` value is `428`, whose description comes from ``` select type_desc from type_mst_a where master_mkey = 428 ``` as `Drawing`, but when I run the join query I get it as NULL. Why? I am using SQL Server 2005.
Following the discussion, the current version is ``` SELECT a.mkey, c.type_desc DOC_TYPE, a.doc_no INWARD_NO, convert(varchar, a.doc_date,103) date, a.to_user, a.No_of_pages, Ref_No, d.type_desc DEPT_RECEIVED, b.first_name + ' ' + b.last_name SENDER, b.first_name + ' ' + b.last_name NAME, b.email FROM inward_doc_tracking_hdr a -- LEFT ? JOIN user_mst b ON a.to_user = b.mkey JOIN type_mst_a c ON a.doc_type = c.master_mkey JOIN type_mst_a d ON a.dept_received = d.Master_mkey WHERE a.to_user = '1279' ``` LEFT JOIN is needed if `inward_doc_tracking_hdr` rows with NULLs, or rows having no matches, must still be present in the result. Hope we are now on the right track.
**Instead of Left join you have to use inner join in order to get records having doc\_type. This query will help you :** ``` SELECT a.mkey, c.type_desc DOC_TYPE, a.doc_no INWARD_NO, CONVERT(VARCHAR, a.doc_date, 103)date, a.to_user, a.No_of_pages, Ref_No, c.type_desc DEPT_RECEIVED, c.type_desc EMP_RECEIVED, b.first_name + ' ' + b.last_name NAME, b.email FROM inward_doc_tracking_hdr a INNER JOIN user_mst b ON a.to_user = b.mkey INNER JOIN type_mst_a c ON a.doc_type = c.master_mkey AND a.dept_received = c.Master_mkey AND a.emp_received = c.Master_mkey WHERE a.to_user = '1279' ```
SQL query not returning correct result
[ "", "sql", "sql-server-2005", "" ]
I know of a couple of different ways to find all primary keys in the db, but is it possible to filter the results so that they only show primary keys that have system-generated names? None of the attributes returned by these queries seem relevant, so I am guessing I'll have to join another table or call a function, but I can't find anything relevant. ``` SELECT * FROM sys.all_objects WHERE type_desc = 'PRIMARY_KEY_CONSTRAINT' SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' ``` The reason I want to show these results is so that I can find and rename these constraints in a large database. Edit: By system-generated names I mean primary keys that have been created by just adding `PRIMARY KEY` behind the column name in the table definition, so that the key gets a name like PK\_\_Countrie\_\_5D9B0D2D28F35AE2
The cleanest way would be this: ``` SELECT * FROM sys.key_constraints WHERE type = 'PK' AND is_system_named = 1 ``` Just check the `is_system_named` property in the `sys.key_constraints` view
***An auto-generated PK seems to contain 16 hexadecimal digits in its name.*** So I would use this query and then still manually check the results from it. Why check them manually? Because the observation above may be undocumented behaviour and may not apply in future versions of SQL Server. ``` SELECT * FROM sys.all_objects WHERE type_desc = 'PRIMARY_KEY_CONSTRAINT' and name like '%[A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9][A-F0-9]%' ```
Is it possible to find all primary keys that have system generated names in a database?
[ "", "sql", "sql-server", "" ]
I have rather simple question: is exception handling possible at the package level? And if yes, how to implement it? My package has procedures and functions in it, and in case of, let's say, a `NO_DATA_FOUND` exception I want to do the same thing in all of my procedures and functions. So my question is: can I write ``` WHEN NO_DATA_FOUND THEN ``` just once and use that same lines for `NO_DATA_FOUND` exceptions in all my procedures/functions, or do I have to write that exception handler in every procedure/function.
No, it's not possible. I expect that that's not in the language because it's not consistent with proper and intended use of exception handlers. The general rule of thumb that I apply is: "if you don't have something specific and helpful to do in response to an exception, don't catch it". If `NO_DATA_FOUND` is expected and OK in a given situation and you can ignore it and/or assume a default value for the data, then you'd want to catch and handle that (and a package-level handler wouldn't help, because your handling would be situation-dependent). In all other cases, you don't want to catch the `NO_DATA_FOUND` -- it represents a true exception: something that shouldn't have happened, something outside your design assumptions. Let those propagate up to the top-level, who can log them and/or report them to the client. But maybe you'd get better answers if you explained what it is you'd want the package-level exception handler to do.
No, you can't handle an exception globally across all procedure/functions in a package. [The exception handler documentation](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/exception_handler.htm#LNPLS01316) says: > An exception handler processes a raised exception. Exception handlers appear in the exception-handling parts of anonymous blocks, subprograms, triggers, and packages. Which makes it sound like you can; but the 'packages' reference there is referring to the initialisation section of the [`create package body` statement](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/create_package_body.htm#BABFJIFI): [![enter image description here](https://i.stack.imgur.com/EcuEi.gif)](https://i.stack.imgur.com/EcuEi.gif) But that section "Initializes variables and does any other one-time setup steps", and is run once per session, when a function or procedure in the package is first invoked. Its exception handler doesn't do anything else. If you really want similar behaviour then you can put that into its own (probably private) procedure and call that from the exception handler on each procedure/function. Which might save a bit of typing but is likely to mask what is really happening, if you're trying to log the errors, say. It's probably going to be simpler and better to have specific exception handling, even if that causes some repetition.
PL/SQL Package level exception handling
[ "", "sql", "oracle", "plsql", "" ]
I have the following Spark SQL and I want to pass a variable to it. How can I do that? I tried the following way. ``` sqlContext.sql("SELECT count from mytable WHERE id=$id") ```
You are almost there, you just missed the `s` prefix for Scala string interpolation :) ``` sqlContext.sql(s"SELECT count from mytable WHERE id=$id") ```
In Python, you can format the variable into the SQL statement like below ``` id = "1" query = "SELECT count from mytable WHERE id='{}'".format(id) sqlContext.sql(query) ```
Spark SQL passing a variable
[ "", "sql", "select", "" ]
I need to get only the `Room_Id`s where the `Status` was reported vacant and then occupied at a later date. This is a simplified table I am using as an example: ``` **Room_Id Status Inspection_Date** 1 vacant 5/15/2015 2 occupied 5/21/2015 2 vacant 1/19/2016 1 occupied 12/16/2015 4 vacant 3/25/2016 3 vacant 8/27/2015 1 vacant 4/17/2016 3 vacant 12/12/2015 3 occupied 3/22/2016 4 vacant 2/2/2015 4 vacant 3/24/2015 ``` My result should look like this: ``` **Room_Id Status Inspection_Date** 1 vacant 5/15/2015 1 occupied 12/16/2015 1 vacant 4/17/2016 3 vacant 8/27/2015 3 vacant 12/12/2015 3 occupied 3/22/2016 ```
Here's one option using `exists` with a `correlated subquery`: ``` select * from yourtable t where exists ( select 1 from yourtable c where c.room_id = t.room_id group by c.room_id having min(case when status = 'vacant' then inspection_date end) < max(case when status = 'occupied' then inspection_date end) ) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/d4c64/5)
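To watch the `EXISTS`/`HAVING` pattern run end-to-end, here is a small sketch using Python's `sqlite3` (a stand-in engine; the dates are rewritten in ISO format so that plain string comparison follows chronological order):

```python
import sqlite3

# The question's sample rows, with M/D/Y dates converted to ISO format.
rows = [
    (1, 'vacant',   '2015-05-15'), (2, 'occupied', '2015-05-21'),
    (2, 'vacant',   '2016-01-19'), (1, 'occupied', '2015-12-16'),
    (4, 'vacant',   '2016-03-25'), (3, 'vacant',   '2015-08-27'),
    (1, 'vacant',   '2016-04-17'), (3, 'vacant',   '2015-12-12'),
    (3, 'occupied', '2016-03-22'), (4, 'vacant',   '2015-02-02'),
    (4, 'vacant',   '2015-03-24'),
]
con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE rooms (room_id INT, status TEXT, inspection_date TEXT)")
con.executemany("INSERT INTO rooms VALUES (?, ?, ?)", rows)

# Rooms whose earliest 'vacant' inspection precedes a later 'occupied' one.
matching = [r[0] for r in con.execute("""
    SELECT DISTINCT t.room_id
    FROM rooms t
    WHERE EXISTS (
        SELECT 1
        FROM rooms c
        WHERE c.room_id = t.room_id
        GROUP BY c.room_id
        HAVING MIN(CASE WHEN status = 'vacant' THEN inspection_date END)
             < MAX(CASE WHEN status = 'occupied' THEN inspection_date END)
    )
    ORDER BY t.room_id
""")]
print(matching)  # [1, 3]
```

Room 4 drops out because it has no 'occupied' row at all: the `MAX(...)` side is NULL, the comparison yields NULL, and the `HAVING` filter removes the group.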
Try this

```
;WITH cte
     AS (SELECT *,
                Row_number()OVER(partition BY [room_id] ORDER BY [inspection_date]) rn
         FROM   Yourtable)
SELECT room_id,
       status,
       [inspection_date]
FROM   cte a
WHERE  EXISTS (SELECT 1
               FROM   cte b
               WHERE  a.room_id = b.room_id
                      AND b.rn = 1
                      AND b.status = 'vacant')
       AND EXISTS(SELECT 1
                  FROM   cte c
                  WHERE  a.room_id = c.room_id
                         AND c.status = 'occupied')
```

* [SQL FIDDLE DEMO](http://www.sqlfiddle.com/#!3/444224/1)
TSQL: conditional group by query
[ "", "sql", "sql-server", "t-sql", "group-by", "" ]
I have a table structure like: ``` Table = contact Name Emailaddress ID Bill bill@abc.com 1 James james@abc.com 2 Gill gill@abc.com 3 Table = contactrole ContactID Role 1 11 1 12 1 13 2 11 2 12 3 12 ``` I want to select the Name and Email address from the first table where the person has Role 12 but not 11 or 13. In this example it should return only Gill. I believe I need a nested SELECT but having difficulty in doing this. I did the below but obviously it isn't working and returning everything. ``` SELECT c.Name, c.Emailaddress FROM contact c WHERE (SELECT count(*) FROM contactrole cr c.ID = cr.ContactID AND cr.Role NOT IN (11, 13) AND cr.Role IN (12)) > 0 ```
Use conditional aggregation in the `Having` clause to filter the records. Try this:

```
SELECT c.NAME, c.emailaddress 
FROM contact c 
WHERE id IN (SELECT contactid 
             FROM contactrole 
             GROUP BY contactid 
             HAVING Count(CASE WHEN role = 12 THEN 1 END) > 0 
                    AND Count(CASE WHEN role in (11,13) THEN 1 END) = 0) 
```

If `role` can only contain `11,12,13`, then you can use this:

```
SELECT c.NAME, c.emailaddress 
FROM contact c 
WHERE id IN (SELECT contactid 
             FROM contactrole 
             GROUP BY contactid 
             HAVING Count(CASE WHEN role = 12 THEN 1 END) = count(*)) 
```
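A runnable check of the conditional-aggregation idea against the question's data, sketched here with Python's `sqlite3` as a stand-in engine:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE contact (id INT, name TEXT, emailaddress TEXT);
    INSERT INTO contact VALUES (1, 'Bill', 'bill@abc.com'),
                               (2, 'James', 'james@abc.com'),
                               (3, 'Gill', 'gill@abc.com');
    CREATE TABLE contactrole (contactid INT, role INT);
    INSERT INTO contactrole VALUES (1, 11), (1, 12), (1, 13),
                                   (2, 11), (2, 12), (3, 12);
""")
# Keep contacts that have at least one role-12 row and no 11/13 rows.
names = [n for (n,) in con.execute("""
    SELECT c.name
    FROM contact c
    WHERE c.id IN (SELECT contactid
                   FROM contactrole
                   GROUP BY contactid
                   HAVING COUNT(CASE WHEN role = 12 THEN 1 END) > 0
                      AND COUNT(CASE WHEN role IN (11, 13) THEN 1 END) = 0)
""")]
print(names)  # ['Gill']
```

`COUNT(CASE WHEN ... THEN 1 END)` counts only the rows where the condition holds, because the `CASE` returns NULL otherwise and `COUNT` skips NULLs.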
You can use a combination of `EXISTS` and `NOT EXISTS` ``` SELECT * FROM contact c WHERE EXISTS(SELECT 1 FROM contactrole cr WHERE cr.ContactID = c.ID AND cr.Role = 12) AND NOT EXISTS(SELECT 1 FROM contactrole cr WHERE cr.ContactID = c.ID AND cr.Role IN(11, 13)) ``` --- Another option is to use `GROUP BY` and `HAVING`: ``` SELECT c.* FROM contact c INNER JOIN contactrole cr ON cr.ContactID = c.ID GROUP BY c.ID, c.Name, c.Emailaddress HAVING SUM(CASE WHEN cr.Role = 12 THEN 1 ELSE 0 END) > 0 AND SUM(CASE WHEN cr.Role IN(11, 13) THEN 1 ELSE 0 END) = 0 ```
Nested Oracle SQL - Multiple Values
[ "", "sql", "oracle", "select", "nested", "" ]
I'm creating a database that has a `users` table, but there are three types of users (admin, teachers and students) and some types has its own properties. Here are my solutions: 1 - Three different tables: ``` table_admin id name email password table_teachers id name email password teacher_only_a teacher_only_b table_students id name email password student_only_a student_only_b ``` 2 - One table, allowing NULL values: ``` table_users id name email password teacher_only_a (null on admin and student) teacher_only_b (null on admin and student) student_only_a (null on admin and teacher) student_only_b (null on admin and teacher) ``` 3 - Related tables: ``` table_users id name email password teacher_id (null on admin and student) student_id (null on admin and teacher) table_teachers id teacher_only_a teacher_only_b table_students id student_only_a student_only_b ``` Which are the best design option? Is there another solution?
Why not ``` table_users id name email password is_admin table_teachers user_id teacher_only_a teacher_only_b table_students user_id student_only_a student_only_b ``` That would abstract out the user information so there isn't any redundancy.
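A minimal SQLite sketch of this layout (the extra columns and sample values are hypothetical; SQLite's `INTEGER PRIMARY KEY` stands in for an auto-increment id):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT,
                        password TEXT, is_admin INT DEFAULT 0);
    CREATE TABLE teachers (user_id INT REFERENCES users(id),
                           teacher_only_a TEXT, teacher_only_b TEXT);
    CREATE TABLE students (user_id INT REFERENCES users(id),
                           student_only_a TEXT, student_only_b TEXT);
    INSERT INTO users (name, email, password)
           VALUES ('Tina', 't@example.com', 'x'),
                  ('Sam',  's@example.com', 'x');
    INSERT INTO teachers VALUES (1, 'subject: math', 'room 4');
    INSERT INTO students VALUES (2, 'grade 7', 'bus 12');
""")
# Role-specific data lives in the satellite table; join back to users.
teachers = list(con.execute("""
    SELECT u.name, t.teacher_only_a
    FROM users u JOIN teachers t ON t.user_id = u.id
"""))
print(teachers)  # [('Tina', 'subject: math')]
```

The shared columns are stored once, and adding a new role later only means adding a new satellite table.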
I would opt for having two tables, one called `user` which will store user name, role, and other metadata, and a second table called `user_relation` which will store relationships between users. **user** ``` id name email password role (admin, teacher, or student) ``` **user\_relation** ``` id1 id2 ``` I am making two assumptions here in my design. The first is that a user's role will only be `admin`, `teacher`, or `student`. If a user can be more than one role then you will need to create a new table `user_role` which stores this information. The second assumption is that it will be clear what the nature of a relationship is merely by the types of users. For example, if a record from `user_relation` contains a student and a teacher, then it will be implicitly assumed that the student belongs to that teacher's class. Similarly, if a teacher and an admin have an entry, it will be assumed that the latter manages the former. However, you could easily add a column for relation type if you wanted to make it more flexible and, for example, allow an admin to also be a student.
Issues on designing SQL users table
[ "", "mysql", "sql", "" ]
I wrote a script in Oracle, but it does not give me the result that I want. Imagine I have two tables, an order table and a book table. My order table is like this

ORDER\_TABLE Table

```
ID TYPE_ID VALUE_ID 
1 11 null 
2 11 null 
3 11 null 
4 12 null 
5 11 null 
```

Book Table

```
ID ORDER_TYPE DELETED 
1 1 F 
2 null F 
3 5 F 
4 5 F 
5 4 F 
6 4 F 
7 3 T 
```

My script is like this

```
Select * 
From 
    ( 
    Select Newtable.Counter As Value_id, o.Id As Id, o.Type_id As Type_id 
    From 
        ( 
        Select Count(B.Order_Type) As Counter, B.Order_Type As Id 
        From Book B 
        Where B.Deleted = 'F' 
        Group By B.Order_Type 
        Order By Count(B.Order_Type) Desc 
        ) newtable, order_table o 
    where o.id = newtable.id 
    and o.type_id = 11 
    ) 
order by id asc; 
```

Result is like this.

```
Value_ID TYPE_ID ID 
2 11 5 
2 11 4 
1 11 1 
```

It is not showing that the second and third ids have a 0 count. How can I show the 0 counts too? The result should be like this.

```
Value_ID TYPE_ID ID 
2 11 5 
2 11 4 
1 11 1 
0 11 2 
0 11 3 
```
First, do not use implicit `JOIN` syntax (comma separated); that's one of the reasons these mistakes are hard to catch! Use the proper `JOIN` syntax. Second, your problem is that you need a `left join`, not an `inner join`, so try this:

```
Select * 
From (Select coalesce(Newtable.Counter,0) As Value_id, 
             o.Id As Id, 
             o.Type_id As Type_id 
      From order_table o 
      LEFT JOIN (Select Count(B.Order_Type) As Counter, 
                        B.Order_Type As Id 
                 From Book B 
                 Where B.Deleted = 'F' 
                 Group By B.Order_Type 
                 Order By Count(B.Order_Type) Desc) newtable 
       ON(o.id = newtable.id) 
      WHERE o.type_id = 11) 
order by id asc; 
```
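Here is the left-join-plus-`coalesce` pattern reproduced against the question's sample data, using Python's `sqlite3` as a stand-in engine:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE order_table (id INT, type_id INT);
    INSERT INTO order_table VALUES (1, 11), (2, 11), (3, 11), (4, 12), (5, 11);
    CREATE TABLE book (id INT, order_type INT, deleted TEXT);
    INSERT INTO book VALUES (1, 1, 'F'), (2, NULL, 'F'), (3, 5, 'F'),
                            (4, 5, 'F'), (5, 4, 'F'), (6, 4, 'F'), (7, 3, 'T');
""")
# LEFT JOIN keeps the unmatched orders; COALESCE turns their NULL count into 0.
result = list(con.execute("""
    SELECT COALESCE(n.counter, 0) AS value_id, o.type_id, o.id
    FROM order_table o
    LEFT JOIN (SELECT COUNT(order_type) AS counter, order_type AS id
               FROM book
               WHERE deleted = 'F'
               GROUP BY order_type) n ON o.id = n.id
    WHERE o.type_id = 11
    ORDER BY value_id DESC, o.id DESC
"""))
print(result)  # ids 2 and 3 now show up with a 0 count
```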
You could also do this with a scalar subquery, which may or may not be more performant than the left join versions described in the other answers. (Quite possibly, the optimizer may rewrite it to be a left join anyway!): ``` with order_table ( id, type_id, value_id ) as (select 1, 11, cast( null as int ) from dual union all select 2, 11, cast( null as int ) from dual union all select 3, 11, cast( null as int ) from dual union all select 4, 12, cast( null as int ) from dual union all select 5, 11, cast( null as int ) from dual), book ( id, order_type, deleted ) as (select 1, 1, 'F' from dual union all select 2, null, 'F' from dual union all select 3, 5, 'F' from dual union all select 4, 5, 'F' from dual union all select 5, 4, 'F' from dual union all select 6, 4, 'F' from dual union all select 7, 3, 'T' from dual) -- end of mimicking your tables; you wouldn't need the above subqueries as you already have the tables. -- See SQL below: select (select count(*) from book bk where bk.deleted = 'F' and bk.order_type = ot.id) value_id, ot.type_id, ot.id from order_table ot order by value_id desc, id desc; VALUE_ID TYPE_ID ID ---------- ---------- ---------- 2 11 5 2 12 4 1 11 1 0 11 3 0 11 2 ```
Counting one field of table in other table
[ "", "sql", "oracle", "" ]
I have a table `tab`, which contains columns `a,b,c,d`. But the following query will not work, since `c` is not in the group by clause or in an aggregate function.

```
SELECT a, b, c FROM tab GROUP BY a, b; 
```

What I want is to select `c` based on the maximum value of `d`. How can I write this query in PostgreSQL?

```
| a | b | c | d | 
| 1 | 2 | 3 | 100 | 
| 1 | 2 | 4 | 110 | 
| 1 | 2 | 5 | 90 | 
```

As the output I need the result in row 2, because the value in d is the highest.
Classic `top-n-per-group`. One way to do it using `ROW_NUMBER`:

```
WITH CTE 
AS 
( 
    SELECT 
        a, b, c 
        ,ROW_NUMBER() OVER(PARTITION BY a, b ORDER by d DESC) AS rn 
    FROM tab 
) 
SELECT a, b, c 
FROM CTE 
WHERE rn = 1; 
```

Index on `(a, b, d, c)` should help.

The approach with `ROW_NUMBER` works well when a table has few rows per group and the server has to read most of the table. For example, if a table has 1 million rows and 800K distinct groups of `(a, b)`, you'd have to read most rows anyway. If the table has 1 million rows and only 20 distinct groups of `(a, b)`, it would be better to do 20 seeks of an appropriate index instead of reading all rows.
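The `ROW_NUMBER` approach runs unchanged on any engine with window functions; here it is against the question's sample rows (SQLite 3.25+ via Python, used as a stand-in):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE tab (a INT, b INT, c INT, d INT);
    INSERT INTO tab VALUES (1, 2, 3, 100), (1, 2, 4, 110), (1, 2, 5, 90);
""")
# Rank each (a, b) group by d descending and keep only the top row.
top = list(con.execute("""
    SELECT a, b, c
    FROM (SELECT a, b, c,
                 ROW_NUMBER() OVER (PARTITION BY a, b ORDER BY d DESC) AS rn
          FROM tab) AS s
    WHERE rn = 1
"""))
print(top)  # [(1, 2, 4)] -- the row with the largest d
```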
In Postgres, you can use `distinct on`: ``` SELECT DISTINCT ON (a, b) a, b, c FROM tab ORDER BY a, b, d DESC; ``` This syntax is specific to Postgres. It is often the most efficient way to do this type of operation.
How to add aggregation function to non grouped column which is not in select
[ "", "sql", "postgresql", "group-by", "greatest-n-per-group", "" ]
I would like to select some rows multiple-times, depending on the column's value. **Source table** ``` Article | Count =============== A | 1 B | 4 C | 2 ``` **Wanted result** ``` Article =============== A B B B B C C ``` Any hints or samples, please?
You could also use a recursive CTE which works with numbers > 10 (here up to 1000): ``` With NumberSequence( Number ) as ( Select 0 as Number union all Select Number + 1 from NumberSequence where Number BETWEEN 0 AND 1000 ) SELECT Article FROM ArticleCounts CROSS APPLY NumberSequence WHERE Number BETWEEN 1 AND [Count] ORDER BY Article Option (MaxRecursion 0) ``` `Demo` A number-table will certainly be the best option. <http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2>
You could use:

```
SELECT m.Article 
FROM mytable m 
CROSS APPLY (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS s(n) 
WHERE s.n <= m.[Count]; 
```

`LiveDemo`

Note: `CROSS APPLY` works with any tally table. Here, values up to 10.

Related: [What is the best way to create and populate a numbers table?](https://stackoverflow.com/questions/1393951/what-is-the-best-way-to-create-and-populate-a-numbers-table)
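Both answers boil down to joining the source table against a number sequence on `n <= Count`. A sketch with Python's `sqlite3` (recursive CTE in place of `CROSS APPLY`, and the column renamed `cnt` to avoid the reserved-looking `Count`):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE articles (article TEXT, cnt INT);
    INSERT INTO articles VALUES ('A', 1), ('B', 4), ('C', 2);
""")
# Generate 1..10, then emit one row per number up to each article's count.
rows = [a for (a,) in con.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM numbers WHERE n < 10
    )
    SELECT a.article
    FROM articles a
    JOIN numbers s ON s.n <= a.cnt
    ORDER BY a.article, s.n
""")]
print(rows)  # ['A', 'B', 'B', 'B', 'B', 'C', 'C']
```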
SQL multiplying rows in select
[ "", "sql", "sql-server", "t-sql", "" ]
Is it possible to use a WHERE statement to find the oldest or the newest date ? I mean something like ``` SELECT * FROM employees WHERE birth_date = MIN(birth_date); ``` I know this doesn't work, but I am asking if there is a syntax error or the whole idea is wrong.
This is possible (`ANSI SQL`):

```
SELECT * 
FROM employees 
WHERE birth_date = (select MIN(birth_date) from employees) 
```

Or you can use `TOP 1 WITH TIES` (`SQL Server`):

```
Select TOP 1 with TIES * 
FROM employees 
Order by birth_date ASC 
```
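A quick check of the subquery form, run here against hypothetical rows with Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE employees (name TEXT, birth_date TEXT);
    INSERT INTO employees VALUES ('Ann', '1960-01-02'),
                                 ('Bob', '1955-07-30'),
                                 ('Cid', '1971-11-11');
""")
# The scalar subquery finds the minimum date; the outer query matches it.
oldest = list(con.execute("""
    SELECT name FROM employees
    WHERE birth_date = (SELECT MIN(birth_date) FROM employees)
"""))
print(oldest)  # [('Bob',)]
```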
You can use a simple subselect to get the value you need:

```
SELECT * 
FROM employees 
WHERE birth_date = (select MIN(birth_date) from employees) 
```
how to select the oldest or newest date using where?
[ "", "mysql", "sql", "sql-server", "" ]
I have a `stored procedure` which accepts one parameter as `@ReportDate`. but when I execute it with parameter it gives me error as > Error converting data type varchar to datetime. Here is the SP. ``` ALTER PROCEDURE [dbo].[GET_EMP_REPORT] @ReportDate Datetime AS BEGIN DECLARE @Count INT DECLARE @Count_closed INT DECLARE @Count_pending INT DECLARE @Count_wip INT DECLARE @Count_transferred INT DECLARE @Count_prevpending INT SELECT * INTO #temp FROM ( select distinct a.CUser_id,a.CUser_id User_Id, b.first_name + ' ' + b.last_name NAME, 0 RECEIVED, 0 CLOSED, 0 PENDING, 0 WIP, 0 TRANSFERRED, 0 PREV_PENDING from inward_doc_tracking_trl a, user_mst b where a.CUser_id = b.mkey ) AS x DECLARE Cur_1 CURSOR FOR SELECT CUser_id, User_Id FROM #temp OPEN Cur_1 DECLARE @CUser_id INT DECLARE @User_Id INT FETCH NEXT FROM Cur_1 INTO @CUser_id, @User_Id WHILE (@@FETCH_STATUS = 0) BEGIN /***** received *******/ SELECT @Count = COUNT(*) FROM inward_doc_tracking_trl WHERE CUser_id = @CUser_id AND NStatus_flag = 4 AND CStatus_flag = 1 AND U_datetime BETWEEN @ReportDate AND GETDATE() /***** closed *******/ SELECT @Count_closed = COUNT(*) FROM inward_doc_tracking_trl WHERE CUser_id = @CUser_id AND NStatus_flag = 5 AND U_datetime BETWEEN @ReportDate AND GETDATE() /***** pending *******/ SELECT @Count_pending = COUNT(*) FROM inward_doc_tracking_trl trl INNER JOIN inward_doc_tracking_hdr hdr ON hdr.mkey = trl.ref_mkey WHERE trl.N_UserMkey = @CUser_id AND trl.NStatus_flag = 4 AND trl.CStatus_flag = 1 AND hdr.Status_flag = 4 AND trl.U_datetime BETWEEN @ReportDate AND GETDATE() /***** wip *******/ SELECT @Count_wip = COUNT(*) FROM inward_doc_tracking_trl trl INNER JOIN inward_doc_tracking_hdr hdr ON hdr.mkey = trl.ref_mkey INNER JOIN (select max(mkey) mkey,ref_mkey from inward_doc_tracking_trl where NStatus_flag = 2 group by ref_mkey ) trl2 ON trl2.mkey = trl.mkey and trl2.ref_mkey = trl.ref_mkey WHERE trl.N_UserMkey = @CUser_id AND trl.NStatus_flag = 2 AND hdr.Status_flag = 2 AND 
trl.U_datetime BETWEEN @ReportDate AND GETDATE()

    /***** transferred *******/
    SELECT @Count_transferred = COUNT(*)
    FROM inward_doc_tracking_trl
    WHERE CUser_id = @CUser_id
      AND NStatus_flag = 4
      AND CSTATUS_flag <> 1
      AND U_datetime BETWEEN @ReportDate AND GETDATE()

    /******** Previous pending **********/
    SELECT @Count_prevpending = COUNT(*)
    FROM inward_doc_tracking_trl trl
    INNER JOIN inward_doc_tracking_hdr hdr ON hdr.mkey = trl.ref_mkey
    INNER JOIN (select max(mkey) mkey, ref_mkey
                from inward_doc_tracking_trl
                where NStatus_flag = 2
                group by ref_mkey) trl2
        ON trl2.mkey = trl.mkey and trl2.ref_mkey = trl.ref_mkey
    WHERE trl.N_UserMkey = @CUser_id
      AND trl.NStatus_flag = 2
      AND hdr.Status_flag = 2
      AND trl.U_datetime < @ReportDate

    UPDATE #temp
    SET RECEIVED = @Count,
        CLOSED = @Count_closed,
        PENDING = @Count_pending,
        WIP = @Count_wip,
        TRANSFERRED = @Count_transferred,
        PREV_PENDING = @Count_prevpending
    WHERE CUser_id = @CUser_id
      AND User_Id = @User_Id

    FETCH NEXT FROM Cur_1 INTO @CUser_id, @User_Id
END

CLOSE Cur_1
DEALLOCATE Cur_1

SELECT * FROM #temp
END
```

I am executing it like this: `EXEC GET_EMP_REPORT '16/05/2016'`

The date I enter is in `DD/MM/YYYY` format, which gives me the error. Executing it as `MM/DD/YYYY` works, but I would prefer to pass `DD/MM/YYYY`, and that is what produces the error.

I am using `SQL Server 2005`.
``` EXEC GET_EMP_REPORT '20160516' ``` Pass date in generic format 'yyyyMMdd'
```
DECLARE @ReportDate DATETIME 
SET @ReportDate = '31/12/2016' -- You cannot assign the DD/MM/YYYY format directly; it will give the error below 
```

The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

If you really need to pass the date in DD/MM/YYYY format, declare @ReportDate as varchar and convert it with style 103. Please refer to the code below.

```
DECLARE @ReportDate VARCHAR(10) 
SET @ReportDate = '31/12/2016' 

SELECT * 
FROM MyTable 
WHERE MyColumn BETWEEN CONVERT(DATETIME, @ReportDate, 103) AND GETDATE() 
```
Adding datetime as a parameter is giving Error converting data type varchar to datetime (Error) in stored procedure
[ "", "sql", "datetime", "stored-procedures", "sql-server-2005", "" ]
My query is: ``` SELECT Pics.ID, Pics.ProfileID, Pics.Position, Rate.ID as RateID, Rate.Rating, Rate.ProfileID, Gender FROM Pics INNER JOIN Profiles ON Pics.ProfileID = Profiles.ID LEFT JOIN Rate ON Pics.ID = Rate.PicID WHERE Gender = 'female' ORDER BY Pics.ID ``` And results are: ``` ID ProfileID Position RateID Rating ProfileID Gender 23 24 1 59 9 42 female 24 24 2 33 8 32 female 23 24 1 53 3 40 female 26 24 4 31 8 32 female 30 25 4 30 8 32 female 24 24 2 58 4 42 female ``` Now I want to do another query which would be: If Rate.ProfileID = 32, remove any rows that contain that same Pics.ID so left with: ``` ID ProfileID Position RateID Rating ProfileID Gender 23 24 1 59 9 42 female 23 24 1 53 3 40 female ``` and also remove any duplicate Pics.ID so just one of the above as they are both = 23 so left with : 23 24 1 59 9 42 female or 23 24 1 53 3 40 female
You should probably get rid of "magical numbers", like 32. That said, I think that this will give you what you need. ``` SELECT P.ID, P.ProfileID, P.Position, R.ID as RateID, R.Rating, R.ProfileID, PR.Gender FROM Pics P INNER JOIN Profiles PR ON PR.ID = P.ProfileID LEFT JOIN Rate R ON R.PicID = P.ID WHERE PR.Gender = 'female' AND NOT EXISTS ( SELECT * FROM Pics P2 INNER JOIN Profiles PR2 ON PR2.ID = P2.ProfileID INNER JOIN Rate R2 ON R2.PicID = P2.ID AND R2.ProfileID = 32 WHERE P2.ID = P.ID ) ORDER BY P.ID ```
> @Shadow Because the 2nd row contains the Rate.ProfileID = 32, and that > Pic.ID = 24, therefore it must remove ALL Pic.ID = 24, which removes > the bottom row also. ``` SELECT Pics.ID, Pics.ProfileID, Pics.Position, Rate.ID as RateID, Rate.Rating, Rate.ProfileID, Gender FROM Pics INNER JOIN Profiles ON Pics.ProfileID = Profiles.ID LEFT JOIN Rate ON Pics.ID = Rate.PicID WHERE Gender = 'female' AND Pics.ID NOT IN ( SELECT Pics.ID FROM Pics INNER JOIN Profiles ON Pics.ProfileID = Profiles.ID LEFT JOIN Rate ON Pics.ID = Rate.PicID WHERE Gender = 'female' AND Rate.ProfileID = 32) ORDER BY Pics.ID ```
SQL Help Inner JOIN LEFT JOIN
[ "", "mysql", "sql", "join", "left-join", "inner-join", "" ]
I have two tables t1 containg 3million records and t2 containing 11000 records. I execute the query ``` Select Count(*) FROM t1 LEFT JOIN t2 ON t1.id = t2.id ``` I execute this query on a sql workbench, it returns 3million which is correct because it is a left join. But when I upload this data to Hive and run the same query it returns 9Million. can anyone explain why this is happening? Do joins behave differently in Hive as compared to normal SQL?
The data is not the same. I would suggest a "histogram of histogram" query to figure out what the issue is: ``` select cnt, count(*), min(id), max(id) from (select id, count(t2.id) as cnt from t1 left join t2 on t1.id = t2.id group by id ) t group by cnt order by cnt; ``` This will give an idea of how many non-matches there are; how many onesies, twosies, and so on.
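A quick way to convince yourself how duplicate join keys inflate a `LEFT JOIN` count, and what the histogram query reveals, is a toy reproduction (Python's `sqlite3`, hypothetical data):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE t1 (id INT);
    INSERT INTO t1 VALUES (1), (2), (3);
    CREATE TABLE t2 (id INT);
    INSERT INTO t2 VALUES (1), (1), (1), (2);  -- id 1 appears three times
""")
# Each duplicate match multiplies the row, so 3 left rows become 5 result rows.
(total,) = con.execute(
    "SELECT COUNT(*) FROM t1 LEFT JOIN t2 ON t1.id = t2.id").fetchone()
print(total)  # 5

# Histogram of per-key match counts: how many left rows matched 0, 1, 3 times.
histogram = list(con.execute("""
    SELECT cnt, COUNT(*)
    FROM (SELECT t1.id, COUNT(t2.id) AS cnt
          FROM t1 LEFT JOIN t2 ON t1.id = t2.id
          GROUP BY t1.id) t
    GROUP BY cnt
    ORDER BY cnt
"""))
print(histogram)  # [(0, 1), (1, 1), (3, 1)]
```

If Hive returns 9 million rows where another engine returns 3 million, the histogram will show keys with large match counts, meaning the two copies of the data are not actually the same.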
In Hive, run

```
select count(*) from tb1 
```

and check that the table really contains 3 million rows. If the join returns more, the `id` values in tb2 are one-to-many (duplicated), so each tb1 row matches several tb2 rows. Check the data that was inserted into Hive.
Do joins in Hive behave differently?
[ "", "sql", "hadoop", "join", "hive", "left-join", "" ]
So with the case statement is there a way to get last entry which satisfies the When condition. Consider this ``` Case when hireDate > getDate() THEN hireDate END; ID HireDate 101 '07-28-2016' 101 '08-02-2016' 101 '08-04-2016' ``` Now with the above made up data, sql server will always output 07-28-2016, because its the first entry which satisfies the condition. Is there a way to get latest hire date with this case stmt, like 08-04-2016.
Here's one option using `conditional aggregation`: ``` alter view XYZ as select id, max(case when hireDate > getDate() then hireDate end) maxdate from abc group by id ```
> So with the case statement is there a way to get last entry which satisfies the When condition.

You can do something like getting the max per group and querying against it:

```
;with cte 
as 
( 
select id,max(hireDate) as hiredate from yourtable 
group by id 
) 
select id, 
case when hiredate>getdate() then hiredate else null end as 'hiredate' 
from cte 
```
ordered result with case when multiple enteries satisfies condition
[ "", "sql", "sql-server", "sqlite", "case", "" ]
I am new to Postgresql, and I am trying to change the data type of a column from `Integer to Varchar(20)`, but I get strange error: ``` ERROR: operator does not exist: character varying <> integer : No operator matches the given name and argument type(s). You might need to add explicit type casts.********** Error ********** ``` The script I wrote to create the table is: ``` CREATE TABLE LOGIN( USERNAME INTEGER NOT NULL CHECK(USERNAME != NULL), PASSWORD VARCHAR(10) NOT NULL CHECK(PASSWORD <>'' AND USERNAME != NULL) ); ``` This is the script I used to modify the column from Integer to Varchar: ``` ALTER TABLE LOGIN ALTER COLUMN USERNAME TYPE varchar(20); ``` I appreciate any help. Thanks.
The cause of the error is the useless additional check constraint (`<> null`) that you have: > operator does not exist: character varying <> integer : refers to the condition `USERNAME != NULL` in both of your check constraints. (the "not equals" operator in SQL is `<>` and `!=` gets re-written into that) So you first need to get rid of those check constraints. The default generated name for that check would be `login_username_check`, so the following will most probably work: ``` alter table login drop constraint login_username_check; ``` The other check is most probably `login_check`: ``` alter table login drop constraint login_check; ``` Once those check constraints are dropped you can alter the data type: ``` ALTER TABLE LOGIN ALTER COLUMN USERNAME set data TYPE varchar(20); ``` Now you need to re-add the constraint for the password: ``` alter table login add constraint check_password check (password <> ''); ``` If for some reason the generated constraint names are different then the ones I assumes, you can find the names using: ``` select c.conname, c.consrc from pg_constraint c join pg_class t on c.conrelid = t.oid join pg_namespace n on t.relnamespace = n.oid where t.relname = 'login' and n.nspname = 'public'; --<< change here for the correct schema name ``` --- As jarlh has already commented, defining a column as `NOT NULL` is enough. There is no need to add another "not null" check. Plus: the check is wrong anyway. You can't compare a value against `null` using `=` or `<>`. To test for a not null value you need to use `IS NOT NULL`. The correct way to write an explicit check constraint would be ``` username check (username is not null) ```
Use `USING expression`. It allows you to define value conversion: ``` ALTER TABLE LOGIN ALTER COLUMN USERNAME TYPE varchar(20) USING ...expression...; ``` From [PostgreSQL documentation](http://www.postgresql.org/docs/current/static/sql-altertable.html): > The optional USING clause specifies how to compute the new column > value from the old; if omitted, the default conversion is the same as > an assignment cast from old data type to new. A USING clause must be > provided if there is no implicit or assignment cast from old to new > type.
Postgresql unable to change the data type of a column
[ "", "sql", "database", "postgresql", "" ]
I want to get records with duplicated values and their count from them as shown below. I am trying following query but it shows wrong count. Please suggest me. The query that I used: ``` SELECT msisdn, waiver_reason, COUNT(msisdn) AS cnt FROM ECONSOLE_NEW WHERE msisdn IN ( SELECT [CUSTOMER CELL NUMBER] FROM SOFTCLOSURE INTERSECT SELECT msisdn FROM ECONSOLE_NEW GROUP BY msisdn HAVING COUNT(msisdn) > 1 ) GROUP BY msisdn, waiver_reason ORDER BY msisdn ``` Result I get: ``` msisdn waiver_reason cnt ------------------------ 111 DD 1 111 VD 1 222 LP 1 222 VD 1 333 DDW 1 333 GG 1 333 GQ 1 ``` **Result I want** ==> ``` msisdn waiver_reason cnt --------------------------- 111 DD 2 111 VD 2 222 LP 2 222 VD 2 333 DDW 3 333 GG 3 333 GQ 3 ```
There seems to be exactly 1 record for each `msisdn, waiver_reason` pair. You seem to want the count per `msisdn` and also, at the same time, return all `msisdn, waiver_reason` pairs. If this is the case, then you can use window version of [**`COUNT`**](https://msdn.microsoft.com/en-us/library/ms175997.aspx) to get the expected result: ``` select msisdn, waiver_reason, count(msisdn) over (partition by msisdn) as cnt from ECONSOLE_NEW where msisdn in ( select [CUSTOMER CELL NUMBER] from SOFTCLOSURE intersect select msisdn from ECONSOLE_NEW GROUP BY msisdn having COUNT(msisdn)>1 ) order by msisdn ```
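The window-function form can be tried directly in SQLite (3.25 or later; the build bundled with recent Python is new enough), using the sample values from the question:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE econsole_new (msisdn TEXT, waiver_reason TEXT);
    INSERT INTO econsole_new VALUES
        ('111', 'DD'), ('111', 'VD'),
        ('222', 'LP'), ('222', 'VD'),
        ('333', 'DDW'), ('333', 'GG'), ('333', 'GQ');
""")
# COUNT(*) OVER (PARTITION BY msisdn) repeats the group total on every row,
# without collapsing the rows the way GROUP BY would.
rows = list(con.execute("""
    SELECT msisdn, waiver_reason,
           COUNT(*) OVER (PARTITION BY msisdn) AS cnt
    FROM econsole_new
    ORDER BY msisdn, waiver_reason
"""))
for r in rows:
    print(r)
```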
The `cnt` value is related to the `msisdn` column only, so compute it with a correlated subquery. Try the following query:

```
SELECT msisdn, 
       waiver_reason, 
       ( 
           SELECT COUNT(*) 
           FROM ECONSOLE_NEW 
           WHERE msisdn = e.msisdn 
       ) AS cnt 
FROM ECONSOLE_NEW AS e 
WHERE msisdn IN 
( 
    SELECT [CUSTOMER CELL NUMBER] 
    FROM SOFTCLOSURE 
    INTERSECT 
    SELECT msisdn 
    FROM ECONSOLE_NEW 
    GROUP BY msisdn 
    HAVING COUNT(msisdn) > 1 
) 
GROUP BY msisdn, waiver_reason 
ORDER BY msisdn 
```
Want records with count and duplicate values in SQL Server
[ "", "sql", "sql-server", "" ]
I have list of datetime value. How to select previous year just for december only. For example: ``` Current month = May 2016 Previous year of december = Dec 2015 (it will display data from dec 2015 to may 2016) if Current month = May 2017 Previous year of december = Dec 2016 and so on. (it will display data from dec 2015 to may 2016) ``` Any idea ? Thank you very much
``` SELECT * FROM TableName WHERE TableName.Date BETWEEN CONVERT(DATE,CONVERT(VARCHAR,DATEPART(YYYY,GETDATE())-1)+'-12-'+'01') AND GETDATE() ```
Below query will give the required output :- ``` declare @val as date='2016-05-19' select concat(datename(MM,DATEADD(yy, DATEDIFF(yy,0,@val), -1)),' ',datepart(YYYY,DATEADD(yy, DATEDIFF(yy,0,@val), -1))) ``` output : december 2015
how to get previous year (month december) and month showing current month in sql
[ "", "sql", "sql-server", "" ]
I have two tables. ``` tblEmployee tblExtraOrMissingInfo id nvarchar(10) id nvarchar(10) Name nvarchar(50) Name nvarchar(50) PreferredName nvarchar(50) UsePreferredName bit ``` The data (brief example) ``` tblEmployee tblExtraOrMissingInfo id Name id Name PreferredName UsePreferredName AB12 John PN01 Peter Tom 1 LM22 Lisa YH76 Andrew Andy 0 PN01 Peter LM22 Lisa Liz 0 LK655 Sarah ``` I want a query to produce the following result ``` id Name AB12 John LM22 Lisa PN01 Tom YH76 Andrew LK655 Sarah ``` So what I want is all the records from tblEmployee returned and any records in tblExtraOrMissingInfo that are not already in tblEmployee. If there is a record in both tables with the same id I would like is if the UsePreferredName field in tblExtraOrMissingInfo is 1 for the PreferredName to be used rather than the Name field in the tblEmployee, please see the record PN01 in the example above.
It is slightly faster to use a left join and coalesce than to use the case statement (most servers are optimized for coalesce). Like this: ``` SELECT E.ID, COALESCE(P.PreferredName,E.Name,'Unknown') as Name FROM tblemployee E LEFT JOIN tblExtraOrMissingInfo P ON E.ID = P.ID AND P.UsePreferredName = 1 ``` > The `,'Unknown'` is not needed to answer your question, but I added > here to show that you can enhance this query to handle cases where the > name is not available in both tables and you don't want nulls in your result
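A runnable sketch of the `LEFT JOIN` plus `COALESCE` idea, using Python's `sqlite3` with the question's data (table and column names shortened):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE emp (id TEXT, name TEXT);
    INSERT INTO emp VALUES ('AB12', 'John'), ('LM22', 'Lisa'),
                           ('PN01', 'Peter'), ('YH76', 'Andrew'),
                           ('LK655', 'Sarah');
    CREATE TABLE extra (id TEXT, name TEXT, preferred TEXT, use_preferred INT);
    INSERT INTO extra VALUES ('PN01', 'Peter', 'Tom', 1),
                             ('YH76', 'Andrew', 'Andy', 0),
                             ('LM22', 'Lisa', 'Liz', 0);
""")
# The use_preferred = 1 filter lives in the ON clause, so rows that should
# keep their original name simply fail to match and COALESCE falls back.
rows = list(con.execute("""
    SELECT e.id, COALESCE(x.preferred, e.name) AS name
    FROM emp e
    LEFT JOIN extra x ON x.id = e.id AND x.use_preferred = 1
    ORDER BY e.id
"""))
print(rows)  # only PN01 comes back as 'Tom'
```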
`left join` on the employee table and use a `case` expression for name. ``` select e.id ,case when i.UsePreferredName = 1 then i.PreferredName else e.name end as name from tblemployee e left join tblExtraOrMissingInfo i on i.id=e.id ```
select results from two tables but choose one field over another
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a request which returns something like this: ``` -------------------------- Tool | Week | Value -------------------------- Test | 20 | 3 Sense | 20 | 2 Test | 19 | 2 ``` And I want my input to look like this: ``` ------------------------- Tool | W20 | W19 ------------------------- Test | 3 | 2 Sense | 2 | null ``` Basically, for every week I need to have a new column. The number of week and of tools is dynamic. I have tried many things but nothing worked. Anybody have a solution ?
Try this ``` CREATE table #tst ( Tool varchar(50), [Week] int, Value int ) insert #tst values ('Test', 20, 3), ('Sense', 20,2), ('Test', 19, 2) ``` Here is the Dynamic Query: ``` DECLARE @col nvarchar(max), @query NVARCHAR(MAX) SELECT @col = STUFF((SELECT DISTINCT ',' + QUOTENAME('W' + CAST([Week] as VARCHAR)) from #tst FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') SET @query = ' SELECT * FROM ( SELECT Tool, Value, ''W'' + CAST([Week] as VARCHAR) AS WeekNo FROM #tst ) t PIVOT ( MAX(t.Value) FOR WeekNo IN (' + @col + ') ) pv ORDER by Tool' EXEC (@query) ``` **Result** ``` Tool W20 W19 ================= Sense 2 NULL Test 3 2 ```
``` IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp CREATE TABLE #temp ( Tool varchar(5), Week int, Value int) ; INSERT INTO #temp ( Tool , Week , Value ) VALUES ('Test', 20, 3), ('Sense', 20, 2), ('Test', 19, 2) ; DECLARE @statement NVARCHAR(max) ,@columns NVARCHAR(max), @col NVARCHAR(max) SELECT @columns = ISNULL(@columns + ', ', '') + N'[' +'w'+ tbl.[Week] + ']' FROM ( SELECT DISTINCT CAST([Week] AS VARCHAR)[Week] FROM #temp ) AS tbl SELECT @statement = 'SELECT * FROM ( SELECT Tool , ''w''+ CAST(Week AS VARCHAR) week , Value FROM #Temp ) src PIVOT(MAX(Value)for Week in (' + @columns + ')) as pvt ' EXEC sp_executesql @statement = @statement ```
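The same dynamic-pivot idea, discover the distinct weeks first and then build the column list, can be sketched outside T-SQL; here with Python and `sqlite3`. Interpolating the week numbers into the SQL string is safe here only because they come from an integer column; use parameters for anything user-supplied:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE tst (tool TEXT, week INT, value INT);
    INSERT INTO tst VALUES ('Test', 20, 3), ('Sense', 20, 2), ('Test', 19, 2);
""")
# Step 1: find the columns (the distinct weeks).
weeks = [w for (w,) in con.execute(
    "SELECT DISTINCT week FROM tst ORDER BY week DESC")]
# Step 2: build one conditional-aggregation column per week.
cols = ", ".join(
    f"MAX(CASE WHEN week = {w} THEN value END) AS W{w}" for w in weeks)
pivot_sql = f"SELECT tool, {cols} FROM tst GROUP BY tool ORDER BY tool"
rows = list(con.execute(pivot_sql))
print(rows)  # [('Sense', 2, None), ('Test', 3, 2)]
```

Conditional aggregation is what `PIVOT` compiles down to conceptually: one `MAX(CASE ...)` per generated column.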
How to pivot rows into colums dynamically SQL Server
[ "", "sql", "sql-server", "" ]
Scenario: I have table with single column (string). I want to retrieve data which are stored in particular order. table Tbl\_EmployeeName having only one column 'Name' I inserted records through this below query ``` Insert Into Tbl_EmployeeName select 'Z' union select 'y' union select 'x' union select 'w' union select 'v' union select 'u' union select 't' union select 's' ``` Now I want these records in the same order in which it is inserted. when i run the query: ``` select * from Tbl_EmployeeName ``` it will arrange the records in default order by Name asc. i.e. result output ``` s t u v w x y Z ``` I want all records in this way expected output ``` z y x w v u t s ``` Could you please suggest me is there any way to get this in proper sequence in which records are entered, and also without adding any new column to a table.
You can sort it in your own order with **FIND\_IN\_SET**:

```
SELECT * FROM Tbl_EmployeeName ORDER BY FIND_IN_SET(Name,'Z,y,x,w,v,u,t,s'); 
```

If you don't know the insert order in advance, use a table with an `AUTO_INCREMENT` field and order by it:

```
CREATE TABLE `Tbl_EmployeeName` ( 
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT, 
  `Name` varchar(64) DEFAULT NULL, 
  PRIMARY KEY (`id`) 
) ENGINE=InnoDB DEFAULT CHARSET=utf8; 

SELECT * FROM Tbl_EmployeeName ORDER by id; 
```
> I inserted records through this below query > > `...query using union...` > > Now I want these records in the same order in which it is inserted. Surprisingly, you *are* retrieving the records in the order in which they were inserted. Using `UNION` between each of the `SELECT` statements on your `INSERT` is causing the records to be sorted *before* being inserted. `UNION` does an inherent `DISTINCT` over all of the results. Switching this to `UNION ALL` will eliminate the inherent ordering. HOWEVER... > Could you please suggest me is there any way to get this in proper sequence in which records are entered, **and also without adding any new column to a table.** Unfortunately, this is *not* possible. SQL tables represent *unordered* sets. It has no native concept over either the order of the records or the order of which they were inserted. > When I run the query `select * from Tbl_EmployeeName` > it will arrange the records in default order by Name asc. This is false. As mentioned above, there is *no* default order that is returned. Any result that you may have gotten when executing that query is merely coincidental. Without specifying an `ORDER BY` clause, the order is *not* guaranteed. > Could you please suggest me is there any way to get this in proper sequence in which records are entered Contrary to your question, you *can* do this by adding a new column to your table. By setting up the table as follows: ``` Create Table Tbl_EmployeeName ( Id Int Identity(1,1) Not Null, Name Varchar (10) -- Or whatever your size is ); ``` Then doing your inserts: ``` Insert Tbl_EmployeeName (Name) Values ('Z'), ('y'), ('x'), ('w'), ('v'), ('u'), ('t'), ('s') ``` And querying: ``` Select Name From Tbl_EmployeeName Order By Id Asc ```
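The fix both answers point at, an auto-increment id that records insertion order, can be demonstrated with SQLite's `INTEGER PRIMARY KEY` (a stand-in for SQL Server's `IDENTITY`):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE tbl_employeename (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO tbl_employeename (name) VALUES
        ('Z'), ('y'), ('x'), ('w'), ('v'), ('u'), ('t'), ('s');
""")
# ORDER BY id reproduces insertion order; without an ORDER BY, no order
# is guaranteed at all.
names = [n for (n,) in con.execute(
    "SELECT name FROM tbl_employeename ORDER BY id")]
print(names)  # ['Z', 'y', 'x', 'w', 'v', 'u', 't', 's']
```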
Sorting of string as per its insertion
[ "", "mysql", "sql", "sql-server", "" ]
Need some help structuring my query. I think I need a subquery, but I am not quite sure how to use them in my context. I have the following tables and data, ``` people ID, Name 1, David 2, Victoria 3, Brooklyn 4, Tom 5, Katie 6, Suri 7, Kim 8, North 9, Kanye 10,James 11,Grace relationship peopleID, Relationship, relatedID 3,Father,1 3,Mother,2 6,Father,4 6,Mother, 5 8,Mother,7 8,Mother,9 11,Father,10 ``` I have the following query ``` SELECT DISTINCT p.ID, p.name, f.ID, f.name, m.ID, m.name FROM people AS p LEFT JOIN relationship AS fr ON p.ID = fr.peopleID LEFT JOIN people AS f ON fr.relatedID = f.ID LEFT JOIN relationship AS mr ON p.ID = mr.peopleID LEFT JOIN people AS m ON mr.relatedID = m.ID WHERE p.ID IN(3,6,8,11) AND ( mr.Relationship IN('Mother','Stepmother') OR fr.Relationship IN('Father','Stepfather') ) ``` The query above outputs the following data ``` 3,Brooklyn,1,David,1,David 3,Brooklyn,1,David,2,Victoria 3,Brooklyn,2,Victoria,2,Victoria 6,Suri,4,Tom,4,Tom 6,Suri,4,Tom,5,Katie 6,Suri,5,Katie,5,Katie 8,North,7,Kim,7,Kim 8,North,9,Kanye,7,Kim 8,North,9,Kanye,9,Kanye 11,Grace,10,James,10,James ``` I kind of understand what is going on, hence the reason I am thinking I probably need a subquery or possibly a union to get the parents first and then build on those results. I am trying to output the following, can anyone help please? ``` 3,Brooklyn,1,David,2,Victoria 6,Suri,4,Tom,5,Katie 8,North,9,Kanye,7,Kim 11,Grace,10,James,, <-should display no mother details (same for the father if father was not in the data) ```
Sorry, I have no way to check the query right now. Does this work?

```
SELECT DISTINCT p.ID, p.name, f.ID, f.name, m.ID, m.name
FROM people AS p
LEFT JOIN relationship AS fr ON p.ID = fr.peopleID
     AND fr.relationship IN ('Father','Stepfather')
LEFT JOIN people AS f ON fr.relatedID = f.ID
LEFT JOIN relationship AS mr ON p.ID = mr.peopleID
     AND mr.relationship IN('Mother','Stepmother')
LEFT JOIN people AS m ON mr.relatedID = m.ID
WHERE p.ID IN(3,6,8,11)
```

The point is to get rid of using (WHERE A OR B) together with LEFT JOIN. It brings too much uncertainty into the result's logic.
Even though you've already accepted an answer, I still want to provide mine:

```
WITH familly AS
(
    SELECT child.ID AS childID
          ,child.Name AS childName
          ,Relationship AS relationship
          ,parent.ID AS parentID
          ,parent.Name AS parentName
    FROM relationship
    LEFT JOIN people AS child ON child.ID = peopleID
    LEFT JOIN people AS parent ON parent.ID = relatedID
)
SELECT t.childID
      ,t.childName
      ,STUFF(ISNULL((
          SELECT ', ' + CAST(x.parentID AS NVARCHAR(10)) + ', ' + x.parentName
          FROM familly x
          WHERE x.childID = t.childID
          GROUP BY x.parentID, x.parentName
          FOR XML PATH (''), TYPE
       ).value('.','VARCHAR(max)'), ''), 1, 2, '') [Parents]
FROM familly t
WHERE t.childID IN(3,6,8,11)
GROUP BY t.childID, t.childName
```

There are fewer `LEFT JOIN`s and it is more readable. You should start the join from the `relationship` table, so:

* on one side, you can join `people` as children
* on the other side, you can join `people` as parents.

Then, I've used the `WITH` statement to provide better readability. At the end, the operation [STUFF (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms188043.aspx) concatenates multiple strings (parents) into one row.

References :

* [Stack Overflow: Can I Comma Delimit Multiple Rows Into One Column?](https://stackoverflow.com/questions/2046037/can-i-comma-delimit-multiple-rows-into-one-column)
* [MSDN: STUFF (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms188043.aspx)
Output from my query not quite right, possibly subquery for this?
[ "", "sql", "" ]
I have an `accountNo` column, varchar(50). Sample data:

```
000qw33356
034453534u
a56465470h
00000000a1
```

I need output like:

```
qw33356
34453534u
a56465470h
a1
```

I have a limitation that I cannot use a WHILE loop inside the UDF, as this is creating a performance issue.
If your data doesn't contain spaces you can use:

```
select replace(ltrim(replace(data, '0', ' ')),' ', '0')
```

If there are spaces, you could first replace them with some character that doesn't occur in the data, and swap it back at the end.
Here's a somewhat cute variant that doesn't need a replacement character (although that's the one I'd usually use): ``` declare @t table (Val varchar(20)) insert into @t(Val) values ('000qw33356'), ('034453534u'), ('a56465470h'), ('00000000a1') select SUBSTRING(Val,PATINDEX('%[^0]%',Val),9000) from @t ``` Results: ``` -------------------- qw33356 34453534u a56465470h a1 ``` Basically, we just take a substring from the first non-`0` character to the end (assuming 9000 is larger than your input string length)
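Both answers above are easy to sanity-check outside SQL Server. Here is a sketch of the replace/ltrim/replace trick, run against SQLite through Python (both engines support `REPLACE` and `LTRIM`, so the idea carries over; the table name is made up):

```python
import sqlite3

# Load the sample account numbers from the question into an in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (accountNo TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?)",
                 [("000qw33356",), ("034453534u",), ("a56465470h",), ("00000000a1",)])

# Zeros become spaces, leading spaces are trimmed, remaining spaces become zeros
# again -- so only the *leading* zeros disappear.
rows = conn.execute(
    "SELECT REPLACE(LTRIM(REPLACE(accountNo, '0', ' ')), ' ', '0') FROM accounts"
).fetchall()
print([r[0] for r in rows])  # ['qw33356', '34453534u', 'a56465470h', 'a1']
```

Note that the internal zero in `a56465470h` survives, which is exactly why the caveat about spaces in the data matters.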
remove leading zero from column with out any while loop in sql
[ "", "sql", "sql-server", "database", "" ]
Sample table below. I recently added the column 'is\_last\_child' and want to update it to have a value 1 if the row is the last child (or is not a parent). I had a query of ``` update node set is_last_child=1 where id not in (select parent_id from node); ``` I get the following error when I run it. "You can't specify target table 'node' for update in FROM clause". I've tried using a join, but I'm sure how exactly I can update only the rows that are not a parent. Any one have any ideas or have run into a similar situation? ``` id | parent_id | is_last_child 1 | 1 | 0 2 | 1 | 0 3 | 1 | 0 4 | 2 | 0 5 | 4 | 0 6 | 1 | 0 ``` Essentially I want to select ids 3, 5, and 6 and set the column is\_last\_child equal to 1. This isn't my schema and there are thousands of rows, but the table above is just to simplify things.
You want `UPDATE FROM`: ``` UPDATE N1 SET N1.is_last_child = 1 FROM Node N1 LEFT OUTER JOIN Node N2 ON N1.ID = N2.Parent_ID WHERE N2.ID IS NULL ``` The left outer join is conceptually the same as using `NOT IN` only it's easier to read and you don't need a bunch of nested queries.
While you can't update a table you are selecting from, I think you can update a table joined to itself: ``` UPDATE `node` AS n1 LEFT JOIN `node` AS n2 ON n1.id = n2.parent_id SET n1.is_last_child = 1 WHERE n2.id IS NULL ; ```
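SQLite doesn't share MySQL's restriction on selecting from the table being updated, so the original `NOT IN` form can be used there to check the intended end state (ids 3, 5 and 6 become `is_last_child = 1`). A sketch:

```python
import sqlite3

# Sample tree from the question: (id, parent_id), is_last_child starts at 0.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER, parent_id INTEGER, is_last_child INTEGER)")
conn.executemany("INSERT INTO node VALUES (?, ?, 0)",
                 [(1, 1), (2, 1), (3, 1), (4, 2), (5, 4), (6, 1)])

# A node is a leaf when no other row names it as a parent.
conn.execute("UPDATE node SET is_last_child = 1 "
             "WHERE id NOT IN (SELECT parent_id FROM node)")

leaves = [r[0] for r in conn.execute(
    "SELECT id FROM node WHERE is_last_child = 1 ORDER BY id")]
print(leaves)  # [3, 5, 6]
```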
Update a column using a select subquery to the same table
[ "", "mysql", "sql", "sql-update", "" ]
I'm not really good when it comes to database... I'm wondering if it is possible to get the weeks of a certain month and year.. For example: `1 (January)` = `month` and `2016` = `year` Desired result will be: ``` week 1 week 2 week 3 week 4 week 5 ``` This is what I have tried so far... ``` declare @date datetime = '01/01/2016' select datepart(day, datediff(day, 0, @date) / 7 * 7) / 7 + 1 ``` This only returns the total of the weeks which is 5.
``` declare @MonthStart datetime -- Find first day of current month set @MonthStart = dateadd(mm,datediff(mm,0,getdate()),0) select Week, WeekStart = dateadd(dd,(Week-1)*7,@MonthStart) from ( -- Week numbers select Week = 1 union all select 2 union all select 3 union all select 4 union all select 5 ) a where -- Necessary to limit to 4 weeks for Feb in non-leap year datepart(mm,dateadd(dd,(Week-1)*7,@MonthStart)) = datepart(mm,@MonthStart) ``` Got the answer in the link: <http://www.sqlservercentral.com/Forums/Topic1328013-391-1.aspx>
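The same "first day of the month, plus multiples of 7 days while still inside the month" idea from the answer above is easy to prototype outside SQL; a Python sketch:

```python
from datetime import date, timedelta

def week_starts(year, month):
    """First day of the month, then every 7 days while still inside the month."""
    start = date(year, month, 1)
    weeks = []
    while start.month == month:
        weeks.append(start)
        start += timedelta(days=7)
    return weeks

jan = week_starts(2016, 1)
print(len(jan), jan)             # 5 week starts: Jan 1, 8, 15, 22, 29
feb_2015 = week_starts(2015, 2)  # non-leap February: only 4
```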
Here is one way to approach this: a month has at least 29 days and at most 31, with the single exception of February in a non-leap year, which has 28. That means there are almost always 5 (possibly partial) weeks in a month, and only 4 in that one exceptional case. You can refer to this to find out which years are "leap": [Check for leap year](https://stackoverflow.com/questions/6534788/check-for-leap-year)

Hope this helps!
SQL Server : is there a way for me to get weeks per month using Month and Year?
[ "", "sql", "sql-server", "" ]
I'm trying to compare assembly versions using SQL Server, however there can be more than one version returned and I need it be in a six digit format. For example, the assembly version `2.00.0001` and I need that to be returned as `2.0.1`. There could be versions like `1.01.0031` that I would need to be `1.1.31`. This works but is there a better way of doing it? ``` select left(left([output],9),1)+'.'+substring(left([output],9),3,1)+'.'+substring(right(left([output],9),1),1,1) ```
Using `ParseName` function, you can achieve this. Try this - ``` DECLARE @val VARCHAR(100) = '01.10.0031' SELECT CONVERT(VARCHAR, CONVERT(INT, PARSENAME(@val, 3))) + '.' + CONVERT(VARCHAR, CONVERT(INT, PARSENAME(@val, 2))) + '.' + CONVERT(VARCHAR, CONVERT(INT, PARSENAME(@val, 1))) ``` **Result** ``` 1.10.31 ```
For limited numbers of zeros, you can replace `.0` with `.`: ``` select replace(replace(replace([output], '.00', '.'), '.0', '.'), '..', '.0.') ``` This is a bit of a hack, but it is relatively simple.
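Since each dot-separated segment is just an integer with leading zeros, the normalization is easy to prototype outside SQL; a Python sketch of the same idea as the `PARSENAME` answer:

```python
def normalize_version(v):
    """Strip leading zeros from each dot-separated segment: '2.00.0001' -> '2.0.1'."""
    return '.'.join(str(int(part)) for part in v.split('.'))

print(normalize_version('2.00.0001'))  # 2.0.1
print(normalize_version('1.01.0031'))  # 1.1.31
```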
Trim first zero after dot twice sql query
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
**table** ``` id | term ---+----- 1 | 2015 2 | 2015 3 | 2016 ``` I have this table and want to select all 2015 results: ``` select * from table where term = 2015 ``` HOWEVER, I only want it to to return 2015 results IF 2015 is the only term that shows up in the table. If it has anything else, it should not return any rows. For example, the current table should return nothing since `id=3` has a term of `2016`. If `id=3` had a term of `2015`, I'd want it to show all `2015` results. How can I accomplish this.
You can use `NOT EXISTS` for this: ``` SELECT * FROM table WHERE term = 2015 AND NOT EXISTS (SELECT 1 FROM table WHERE term <> 2015 ) ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!9/40627/4/0) `NOT EXISTS` evaluates to true if any records are returned by the subquery. This is a simple example, and the functionality can be expanded by correlating the subquery to the outer query.
I think what you are saying is: IF there are at least two different values in the term column, return nothing. If there is exactly one value, then return the entire table. You didn't say if term is nullable (if it may contain nulls). Assuming it isn't, you can get your result very quickly this way: ``` select * from your_table where (select count (distinct term) from your_table) = 1; ``` (I do hope you don't have a table named literally "table"!) If term is nullable, you need to state how you want null in the term column to be treated.
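A quick way to see the `NOT EXISTS` behaviour from the accepted answer is to run it against an in-memory SQLite database (a sketch; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, term INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 2015), (2, 2015), (3, 2016)])

q = ("SELECT id FROM t WHERE term = 2015 "
     "AND NOT EXISTS (SELECT 1 FROM t WHERE term <> 2015) ORDER BY id")

before = conn.execute(q).fetchall()   # [] -- the 2016 row blocks everything
conn.execute("UPDATE t SET term = 2015 WHERE id = 3")
after = conn.execute(q).fetchall()    # [(1,), (2,), (3,)] -- 2015 is now the only term
print(before, after)
```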
Select results if table doesn't contain criteria
[ "", "sql", "oracle", "" ]
I wonder how I can check whether two users (**user\_id**) are in the same thread (**thread\_id**) without passing the (**thread\_id**), using SQL, given the structure of the table below.

[![enter image description here](https://i.stack.imgur.com/zXE6m.png)](https://i.stack.imgur.com/zXE6m.png)
To get all threads that both of those 2 users are in, you can do

```
select thread_id
from your_table
where user_id in (1,2)
group by thread_id
having count(distinct user_id) = 2
```

To get all threads that have more than one user, do

```
select thread_id
from your_table
group by thread_id
having count(distinct user_id) >= 2
```
I didn't understand very well what you wanted, but as I read it, I assume this is what you're after.

```
select thread_id, COUNT(thread_id) AS dup_count
from participant_thread as participant
GROUP BY thread_id
HAVING (COUNT(thread_id) > 1)
```

[result screenshot](https://i.stack.imgur.com/towu3.png)
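The `HAVING COUNT(DISTINCT user_id) = 2` approach from the first answer can be checked quickly in SQLite via Python (a sketch with made-up thread data, since only the schema is known):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE participant_thread (thread_id INTEGER, user_id INTEGER)")
conn.executemany("INSERT INTO participant_thread VALUES (?, ?)",
                 [(10, 1), (10, 2), (11, 1), (12, 2), (12, 3)])

# Only thread 10 contains BOTH user 1 and user 2; thread 11 has just one of
# them and thread 12 has user 2 plus an unrelated user.
shared = conn.execute(
    "SELECT thread_id FROM participant_thread "
    "WHERE user_id IN (1, 2) "
    "GROUP BY thread_id "
    "HAVING COUNT(DISTINCT user_id) = 2").fetchall()
print(shared)  # [(10,)]
```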
SQL - Check if IDs contains same ID's value
[ "", "mysql", "sql", "" ]
I'm trying to select distinct data into a temporary table, and tried to simplify it with `create table #tabletemp`, but it still goes wrong: it says "unrecognized data type" near DISTINCT, and "comma or closing bracket was expected" near PONumber.

Here's the code:

```
CREATE TEMPORARY TABLE t1(
    SELECT DISTINCT
        PONumber varchar(10),
        POdate varchar(10),
        customername varchar(35),
        description varchar(22)
    FROM tb_po
);

SELECT
    p.PONumber, p.podate, p.customername, p.description,
    (
        SELECT SUM(q.deliveryqty)
        FROM tb_spb q
        WHERE p.PONumber = q.PONumber AND p.description = q.description
    ) AS Total
FROM t1 p
```
You don't need to create a temporary table to get the result that you want. Here is my revised query based on your query: ``` SELECT DISTINCT p.PONumber, p.POdate, p.customername, p.[description], SUM(q.deliveryqty) FROM tb_po p INNER JOIN tb_spb q ON p.PONumber = q.PONumber AND p.description = q.description GROUP BY p.PONumber,p.POdate,p.customername,p.[description] ```
If you really need it to be in a temporary table, another approach is using "SELECT INTO", wherein you won't need to declare the creation of a temporary table. (Although creating a table and then inserting records is the preferred method: <https://stackoverflow.com/a/6948850/6344844>)

```
SELECT DISTINCT p.PONumber, p.POdate, p.customername, p.[description], SUM(q.deliveryqty)
INTO #TEMPTABLE_NAME
FROM tb_po p
INNER JOIN tb_spb q
    ON p.PONumber = q.PONumber
       AND p.description = q.description
GROUP BY p.PONumber,p.POdate,p.customername,p.[description]

SELECT * FROM #TEMPTABLE_NAME

DROP TABLE #TEMPTABLE_NAME
```
Select distinct in temp table
[ "", "mysql", "sql", "" ]
I'm trying to make a prepared SQL statement using Doctrine. If I hard code the value in the SQL query it's working: ``` public function search() { $sql = 'SELECT `name` FROM `files` WHERE `name` = "myfile.exe"'; $query = $this->conn->prepare($sql); $status = $query->execute(); $searchResult = $query->fetchAll(); $searchResult = array( 'service' => $searchResult, 'status' => $status ); $searchResult = $this->conn->fetchAll($sql); return $searchResult; } ``` But if I try to pass the value with a parameter it doesn't and give my the **PDOException: SQLSTATE[42000]** ``` public function search() { $sql = 'SELECT `name` FROM `files` WHERE `name` = :service'; $query = $this->conn->prepare($sql); $query->bindValue(":service", "myfile.exe"); $status = $query->execute(); $searchResult = $query->fetchAll(); $searchResult = array( 'service' => $searchResult, 'status' => $status ); $searchResult = $this->conn->fetchAll($sql); return $searchResult; } ``` I can't figured out what I am doing wrong.
After being stuck on this problem for 2 days, I finally found the solution. I just removed the line:

```
$searchResult = $this->conn->fetchAll($sql);
```

So the code below is working fine:

```
public function search($service) {
    $sql = 'SELECT `name` FROM `files` WHERE `name` = ":service"';
    $query = $this->conn->prepare($sql);
    $query->bindValue(":service", $service);
    $status = $query->execute();
    $searchResult = $query->fetchAll();
    $searchResult = array(
        'service' => $searchResult,
        'status' => $status
    );
    return $searchResult;
}
```

Thank you all for your help.
Looking at the docs <http://doctrine-orm.readthedocs.io/projects/doctrine-dbal/en/latest/reference/data-retrieval-and-manipulation.html> It should be: ``` public function search(){ $sql = 'SELECT `name` FROM `files` WHERE `name` = ?'; $query = $this->conn->prepare($sql); $query->bindValue(1, "myfile.exe"); $status = $query->execute(); $searchResult = $query->fetchAll(); $searchResult = array( 'service' => $searchResult, 'status' => $status ); $searchResult = $this->conn->fetchAll($sql); return $searchResult; } ```
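The same named-placeholder mechanism exists in Python's DB-API, which makes the failure mode easy to demonstrate: a placeholder wrapped in quotes is no longer a placeholder, just a literal string. A sketch (SQLite stands in for MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT)")
conn.execute("INSERT INTO files VALUES ('myfile.exe')")

# Bare placeholder: the driver substitutes the bound value.
good = conn.execute("SELECT name FROM files WHERE name = :service",
                    {"service": "myfile.exe"}).fetchall()
# Quoted ':service' is just the literal string ':service' -- nothing matches.
bad = conn.execute("SELECT name FROM files WHERE name = ':service'").fetchall()

print(good)  # [('myfile.exe',)]
print(bad)   # []
```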
Why do I get a PDOException: SQLSTATE[42000]
[ "", "sql", "doctrine", "" ]
* Table 1 has Id with random date and the corresponding value. * Table 2 has id with sequence date (it’s not necessarily to be sequence). Matching the Table2.Id and Table2.SequenceDate with Table1.Id and Table.RandomDate and then need to apply theTable1. value to the Table2 till the next Random Date occurs. You can see in the below expected result ``` Table 1 RandomDate value ID 2/12/2016 A 1 2/15/2016 B 1 2/18/2016 C 1 2/12/2016 A 2 Table 2 SequenceDate ID 2/12/2016 1 2/13/2016 1 2/14/2016 1 2/15/2016 1 2/16/2016 1 2/17/2016 1 2/18/2016 1 2/19/2016 1 2/20/2016 1 2/12/2016 2 Expected Result from table and table 2 SequenceDate ID value 2/12/2016 1 A 2/13/2016 1 A 2/14/2016 1 A 2/15/2016 1 B 2/16/2016 1 B 2/17/2016 1 B 2/18/2016 1 C 2/19/2016 1 C 2/20/2016 1 C 2/12/2016 2 A ```
Finally I got that logic, thanks everyone for all your inputs. ``` SELECT t2.Id, t2.SequenceDate, t1.RandomDate, t1.Value, ISNULL(LEAD(RandomDate,1,NULL) OVER (PARTITION BY Id ORDER BY Id,RandomDate),DATEADD(YEAR,99,RandomDate)) AS nextRandomDte, ISNULL(LAG(RandomDate,1,NULL) OVER (PARTITION BY Id ORDER BY Id,RandomDate),RandomDate) AS prevRandomDte FROM dbo.table1 t1 RIGHT JOIN dbo.table2 t2 ON t1.Id = t2.Id AND ( t2.SequenceDate >= t2.RandomDate OR t2.RandomDate = prevRandomDte ) AND t2.SequenceDate < nextRandomDte ```
I would use a subquery to get the expected value. ``` select Table2.SequenceDate, Table2.ID, ( select top 1 Table1.Value from Table1 where Table1.RandomDate <= Table2.SequenceDate and Table1.ID = Table2.ID order by Table1.RandomDate desc ) as Value from Table2 ```
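The correlated-subquery idea ("latest `RandomDate` at or before each `SequenceDate`, per id") can be prototyped outside the database; a plain-Python sketch using the sample data from the question:

```python
from datetime import date

# (RandomDate, value, ID) rows from Table 1.
table1 = [(date(2016, 2, 12), 'A', 1), (date(2016, 2, 15), 'B', 1),
          (date(2016, 2, 18), 'C', 1), (date(2016, 2, 12), 'A', 2)]
# (SequenceDate, ID) rows from Table 2.
table2 = [(date(2016, 2, d), 1) for d in range(12, 21)] + [(date(2016, 2, 12), 2)]

def value_for(seq_date, emp_id):
    """Latest table1 value whose RandomDate is on or before seq_date (same id)."""
    candidates = [(rd, v) for rd, v, i in table1 if i == emp_id and rd <= seq_date]
    return max(candidates)[1] if candidates else None

result = [(d, i, value_for(d, i)) for d, i in table2]
print(result[3])  # (datetime.date(2016, 2, 15), 1, 'B')
```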
SQL Logic to merge two tables based on the given scenario
[ "", "sql", "sql-server", "" ]
Suppose I have two `tables` with parent-child relationship in `sql server` as below, parent table: ``` Parentid value 1 demo 2 demo2 ``` child table: ``` childid parchildid subvalue 1 1 demo1 2 1 demo2 ``` here `parchildid` from `child table` is a `foreign key` referring `parentid` of the `parent table`. I needed to retrieve child table data for a particular parentid. So, I used below query ``` select *from child where parchildid in (select parchildid from parent) ``` It gave the below output. ( all the rows for `child table`) ``` childid parchildid subvalue 1 1 demo1 2 1 demo2 ``` But as you see, I have given a `invalid` column (`parchildid`) in the sub-query ( `parchildid` belongs to `child table` not the `parent table` ). I wonder why `sql server` didn't throw any error. running `select parchildid from parent` query alone thows `invalid` column error. could anyone explains why there is no error thrown in the sub-query? hows the logic works there? Thanks
It is equivalent to writing: ``` select * from child c where c.parchildid in ( select c.parchildid from parent p ) ``` If you notice, `child` has an alias of `c` which is accessible inside the subquery. It is also like writing: ``` select * from child c where Exists ( select * from parent p where c.parchildid = c.parchildid ) ```
From [MSDN](https://technet.microsoft.com/en-us/library/ms178050(v=sql.105).aspx): > If a column does not exist in the table referenced in the FROM clause of a subquery, it is implicitly qualified by the table referenced in the FROM clause of the outer query. In your case, since `parchildid` is a column from the table in the outer query, there is no error. On it's own however, the query cannot find such a column, and so it fails.
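SQLite applies the same implicit-qualification rule, so the behaviour is easy to reproduce from Python (a sketch with a one-row parent table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (parentid INTEGER)")
conn.execute("CREATE TABLE child (childid INTEGER, parchildid INTEGER)")
conn.execute("INSERT INTO parent VALUES (1)")
conn.executemany("INSERT INTO child VALUES (?, ?)", [(1, 1), (2, 1)])

# parent has no parchildid column, so the subquery silently resolves it
# against the outer child table -- every child row matches itself.
rows = conn.execute(
    "SELECT childid FROM child "
    "WHERE parchildid IN (SELECT parchildid FROM parent) "
    "ORDER BY childid").fetchall()
print(rows)  # [(1,), (2,)] -- all child rows come back, no error
```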
Issue with Parent-Child Relationship in Sql-Server
[ "", "sql", "sql-server", "t-sql", "parent-child", "" ]
I Have following table. [![enter image description here](https://i.stack.imgur.com/ZAgay.jpg)](https://i.stack.imgur.com/ZAgay.jpg) Simply i want to make order by `meta_key` where value is `LoginTS`. its a bit distracting. `ORDER BY meta_value( where meta_key is LoginTS ) DESC` . Im sorry if its not clear enough.. > Expected Result : > [![enter image description here](https://i.stack.imgur.com/tDiQu.jpg)](https://i.stack.imgur.com/tDiQu.jpg)
``` SELECT * FROM mytable WHERE `meta_key`= 'LoginTS' ORDER BY `meta_value` DESC ``` **[SQL FIDDLE DEMO 1](http://sqlfiddle.com/#!9/4c7a5/5)** If you want to get back all the Table but to ORDER BY specific column try this: ``` SELECT * FROM myTable ORDER BY CASE WHEN `meta_key`='LoginTS' THEN 0 ELSE 1 END ``` [**SQL FIDDLE DEMO 2**](http://sqlfiddle.com/#!9/4c7a5/7)
WHERE should be placed before ORDER BY; try this:

```
WHERE meta_key = 'LoginTS'
ORDER BY meta_value DESC
```
SQL Ordering by field value
[ "", "mysql", "sql", "" ]
I would like to make a report that will show the average grade for different tasks. I am having trouble with how to get the averages. I need to figure out how to convert the grades to floats so that I can take the averages. The grades sometimes have non-numeric or null values, although most values look like "2.0" or "3.5". I can exclude anything that is non-numeric. This is what I have so far: ``` Select GradingScores.task As task, Avg(Cast((SELECT GradingScores.score WHERE GradingScores.score LIKE '%[^0-9]%')As float)) As averages From GradingScores ``` I am using FlySpeed SQL Query.
You could try using [IsNumeric](https://msdn.microsoft.com/en-us/library/ms186272.aspx) ``` Select GradingScores.task As task, Avg(Cast(GradingScores.score as float) As averages From GradingScores where IsNumeric(GradingScores.score) = 1 ```
Simply convert() and use the isnumeric() function in T-SQL:

```
select avg(convert(float,score))
from GradingScores
where isnumeric(score)=1
```
SQL: select average of varchar
[ "", "sql", "sql-server", "casting", "average", "varchar", "" ]
I executed the following query and it worked. I want to understand how it works. ``` select 50+2 from employees ``` This works only when the 'employees' table exists. If I mention a non existent table, then it throws an error. How can such expressions be evaluated for user-defined tables?
What MySQL does first is it uses its parser to read the SQL statement and separate it into it's logical parts. A parser (used not only by MySQL but very often in programming) is just a block of code that retrieves raw information, and turns it into something that the program can use. In the case of MySQL, it will separate ``` SELECT 50+2 FROM employees ``` Into `SELECT 50+2` and `FROM employees` So lets analyse what each of these two do. The `SELECT` reserved word is used to identify what a user wishes to obtain from the operation. It usually contains a column name. However if you include a string called "*hello*", MySQL interprets that you want to select "*hello*". In this case you want to select a number that is the result of `50+2`. Now what `FROM employees` does is that it informs MySQL which table you wish to select this information from. It doesn't matter that `50+2` isn't a column or even anything located in the table. MySQL isn't an AI system, and won't ask you questions on why you decided to do that, it just executes commands that are designed following the rules established. Now MySQL will look through the whole table, and return the selected columns of each row that is consistent with the condition that exists in the `WHERE ...` section of a query. If a query does not have the `WHERE ...` section, it is assumed you want to return all the rows. The result of your query will then be a column full of the value `52` with the same amount of rows as your table has. If your table has 5 rows, the result will be: ``` 52 52 52 52 52 ```
If you just want to evaluate an expression, select from `DUAL`:

```
select 50+2 from dual
```

This will also return `52`.

Why does `select 50+2 from employees` work? Because `50+2` is just a constant value, which will be displayed for every row of your table. For example, the query below will also return the constant value 'A' for all the rows:

```
Select 'A',col1,col2 from yourtable
```
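The per-row evaluation described in the first answer is easy to observe with an in-memory SQLite database (a sketch; SQLite needs no DUAL table, and the names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)", [("Ann",), ("Bob",), ("Cy",)])

# The constant expression is evaluated once for every row the FROM clause yields.
rows = conn.execute("SELECT 50 + 2 FROM employees").fetchall()
print(rows)  # [(52,), (52,), (52,)]
```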
How SQL select statements work
[ "", "sql", "select", "oracle11g", "" ]
In the following query I get syntax error: ``` SELECT <property1>, <property2> FROM <table1> ORDER BY <condition> LIMIT 1 UNION ALL SELECT <property1>, <property2> FROM <table2> WHERE <condition> ORDER BY <condition> LIMIT 1; ``` > syntax error at or near "UNION" > LINE 4: UNION ALL Each of the `SELECT` stand alone executes fine. My guess is about the `ORDER BY... LIMIT 1` maybe?
Wrap each query with `()`: ``` (SELECT <property1>, <property2> FROM <table1> ORDER BY <condition> LIMIT 1) UNION ALL (SELECT <property1>, <property2> FROM <table2> WHERE <condition> ORDER BY <condition> LIMIT 1); ``` `SqlFiddleDemo` You could also order final query: ``` (SELECT 'a' AS col ORDER BY col LIMIT 1) UNION ALL (SELECT 'b' AS col ORDER BY col LIMIT 1) ORDER BY col DESC ```
The first answer by @lad2025 is correct, but the generalization just under it is not, because each branch must carry the whole ordering condition, DESC clause included. This is the correct code:

```
(SELECT 'a' AS col ORDER BY col DESC LIMIT 1)
UNION ALL
(SELECT 'b' AS col ORDER BY col DESC LIMIT 1)
ORDER BY col DESC LIMIT 1
```

Otherwise you select only the higher of the two *lowest* cols of select 1 and select 2 (if any), not the highest of all the cols, and you must not forget the `LIMIT 1` at the end too.
SQL Union All with order by and limit (Postgresql)
[ "", "sql", "postgresql", "union", "union-all", "" ]
I have a table in a db in SQL Server. Example of the data ``` ID StartDate EndDate Notes 0 2016-01-24 02:50:23 2016-01-25 08:00:05 somethingoranother 2 2016-01-30 22:00:00 2016-02-05 08:00:05 somethingoranother ``` On the front end (vb code) I am taking each row of the table and counting the time in hours, minutes and seconds. Example: ``` something = something + table.Row(i)("Enddate") - table.Row(i)("startdate") ``` However, when I pull the report for the date, `01/01/2016` to `31/01/2016`, it is including the whole time of row ID 2, up until `5/02/2016`. How can I get only that time of `30/01/2016` to `31/01/2016 23:59:59` for the month of January?
As it seems, you only want the records from a specific date until a given date. The part of the code that does the addition in your frontend would have to clamp each row's interval to the reporting window before counting it towards your sum:

```
' Reporting window: Jan 1 2016 (inclusive) to Feb 1 2016 (exclusive)
Dim fromthis As Date = #1/1/2016#
Dim tothis As Date = #2/1/2016#

For i As Integer = 0 To table.Rows.Count - 1
    Dim startD As Date = table.Rows(i)("startdate")
    Dim endD As Date = table.Rows(i)("Enddate")
    ' Skip rows entirely outside the window, clamp the rest
    If startD < tothis AndAlso endD > fromthis Then
        If startD < fromthis Then startD = fromthis
        If endD > tothis Then endD = tothis
        something = something + (endD - startD)
    End If
Next
```

Or the query that returns the rows for your table could restrict the results to the given margins. This approach is cleaner, as you don't load unwanted data. Something like:

```
SELECT ID
, CASE WHEN StartDate < @fromDate THEN @fromDate ELSE StartDate END as StartDate
, CASE WHEN EndDate > @toDate THEN @toDate ELSE EndDate END as EndDate
, Notes
FROM yourtable
WHERE StartDate < @toDate
  AND EndDate > @fromDate
```
Assuming row 0 should be treated similarly to row 2, I think this does what you want: ``` declare @t table (ID int not null,StartDate datetime not null,EndDate datetime not null, Notes varchar(217) not null) insert into @t(ID,StartDate,EndDate,Notes) values (0,'2016-01-24T02:50:23','2016-02-05T08:00:05','somethingoranother'), (2,'2016-01-30T22:00:00','2016-02-05T08:00:05','somethingoranother') declare @Start datetime declare @End datetime select @Start = '20160101',@End='20160201' ;With Clamped as ( select ID, CASE WHEN StartDate > @Start THEN StartDate ELSE @Start END as StartDate, CASE WHEN EndDate < @End THEN EndDate ELSE @End END as EndDate, Notes from @t where StartDate < @End and @Start < EndDate ) select *, DATEDIFF(second,StartDate,EndDate) as ElapsedTime from Clamped ``` (`@Start` and `@End` could, of course, be parameters - this is just for the sample code). The result is: ``` ID StartDate EndDate Notes ElapsedTime ----------- ----------------------- ----------------------- ------------------- ----------- 0 2016-01-24 02:50:23.000 2016-02-01 00:00:00.000 somethingoranother 680977 2 2016-01-30 22:00:00.000 2016-02-01 00:00:00.000 somethingoranother 93600 ``` So, row 0 is contributing 189 hours, 9 minutes and 37 minutes and row 2 is contributing 26 hours. Those values seem to be correct, if I've understood your problem. It's up to you whether you wish to aggregate this data here within the final query or to do that within your VB. I leave the translation of the times in seconds into units of hours, minutes and seconds as something trivially done in VB also, using the `TimeSpan` class. (It's not as easily done within SQL Server, since it doesn't have a corresponding data type that represents time spans rather than times of day)
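The clamping logic in the CASE expressions above can be prototyped in plain Python to double-check the totals; a sketch using the dates from the sample rows:

```python
from datetime import datetime

def clamped_seconds(start, end, win_start, win_end):
    """Seconds of [start, end] that fall inside the reporting window."""
    s, e = max(start, win_start), min(end, win_end)
    return max(0, int((e - s).total_seconds()))

# Window: Jan 1 2016 to Feb 1 2016.
win = (datetime(2016, 1, 1), datetime(2016, 2, 1))
row0 = clamped_seconds(datetime(2016, 1, 24, 2, 50, 23),
                       datetime(2016, 2, 5, 8, 0, 5), *win)
row2 = clamped_seconds(datetime(2016, 1, 30, 22, 0, 0),
                       datetime(2016, 2, 5, 8, 0, 5), *win)
print(row0, row2)  # 680977 93600 -- matching the ElapsedTime values above
```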
How to get specific month end in VB.NET and SQL Server
[ "", "sql", "sql-server", "vb.net", "" ]
``` select user_id, max(perception_score) as max, min(perception_score) as min from temp_user_notes group by user_id as t1; ``` I am trying to convert this sql query in rails active record but having a hard time to create aliases
You can alias the table to another name in the `from` method of ActiveRecord. For example, part of your query could be: ``` TempUserNote. select("t1.user_id, (t1.max - t1.min) as std_deviation"). from( TempUserNote. select("user_id, max(perception_score) as max, min(perception_score) as min"). group(:user_id), :t1 ) ```
Just use the SQL alias feature inside a `select` method call: ``` TempUserNote.select('user_id, max(perception_score) as max, min(perception_score) as min').group(:user_id) ```
how to create alias of table name in Rails ActiveRecords
[ "", "sql", "ruby-on-rails", "ruby", "activerecord", "" ]
I have a table like ``` ID | Name | ProdID | Model | StudID ----------------------------------- 1 | A | 3 | hey | 6 2 | B | 4 | he | 7 2 | C | 5 | hi | 8 ``` I need to make just `Model` and `StudID` values to `N/A` when `ProdID` is 4 and 5 ``` ID | Name | ProdID | Model | StudID ----------------------------------- 1 | A | 3 | hey | 6 2 | B | 4 | N/A | N/A 2 | C | 5 | N/A | N/A ``` Here's a small sample of what I've done so far ``` SELECT ID, Name, CASE WHEN CONVERT(VARCHAR, ProdID) = 4 THEN CONVERT(VARCHAR, Model) = 'N/A' .. ```
Try this ``` SELECT ID, Name, ProdID, CASE WHEN ProdID IN( 4,5) THEN 'N/A' ELSE CONVERT(VARCHAR, Model) END AS 'Model', CASE WHEN ProdID IN( 4,5) THEN 'N/A' ELSE CONVERT(VARCHAR, StudID) END AS 'StudID' ```
First, you need to be quite careful about types. * Don't convert values to strings unless you have to. * When using `convert()` or `cast()` *always* include a length for `varchar()`. So: ``` SELECT ID, Name, ProdId, (CASE WHEN ProdId IN (4, 5) THEN Model ELSE 'N/A' END) as Model, (CASE WHEN ProdId IN (4, 5) THEN CONVERT(VARCHAR(255), StudId) ELSE 'N/A' END) as StudId ... ``` Note that you need conversion for `StudId`, assuming the value is an integer.
Convert column to a different value based on another columns value
[ "", "sql", "sql-server", "t-sql", "" ]
How I can skip the duplicate record when list sorted, for example, I have table: ``` EmpID Date Dept OtherField 1 2017.02.03 11 1 1 2016.02.03 11 2 1 2015.02.03 13 7 1 2014.02.03 21 6 1 2013.02.03 21 12 1 2012.02.03 13 333 ``` I need get: ``` 1 2016.02.03 11 1 2015.02.03 13 1 2013.02.03 21 1 2012.02.03 13 ```
Thanks for the clarification. [Tabibitosan](http://rwijk.blogspot.co.uk/2014/01/tabibitosan.html) would suit your needs, I believe: ``` with sample_data as (select 1 empid, to_date('03/02/2017', 'dd/mm/yyyy') dt, 11 dept, 1 otherfield from dual union all select 1 empid, to_date('03/02/2016', 'dd/mm/yyyy') dt, 11 dept, 2 otherfield from dual union all select 1 empid, to_date('03/02/2015', 'dd/mm/yyyy') dt, 13 dept, 7 otherfield from dual union all select 1 empid, to_date('03/02/2014', 'dd/mm/yyyy') dt, 21 dept, 6 otherfield from dual union all select 1 empid, to_date('03/02/2013', 'dd/mm/yyyy') dt, 21 dept, 12 otherfield from dual union all select 1 empid, to_date('03/02/2012', 'dd/mm/yyyy') dt, 13 dept, 333 otherfield from dual) select empid, min(dt) dt, dept from (select empid, dt, dept, row_number() over (partition by empid order by dt) - row_number() over (partition by empid, dept order by dt) grp from sample_data) group by empid, dept, grp order by empid, dt desc; EMPID DT DEPT ---------- ---------- ---------- 1 2016.02.03 11 1 2015.02.03 13 1 2013.02.03 21 1 2012.02.03 13 ```
Idea: `Partition by` duplicated fields to get first row using `row_number()` Implementation: ``` select EmpID, Date, Dept, OtherField from ( select EmpID, Date, Dept, OtherField, row_number() over (partition by empid, dept order by date asc) rwn from table_name) t where rwn = 1; ``` According to your `expected result`, I assume that you need partition by `empid, dept`, and get min of `date`. If not, feel free to change the query for your purpose.
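The Tabibitosan trick in the first answer is really just "group consecutive rows with the same dept into islands and keep each island's earliest date"; a plain-Python sketch of the same grouping, using the sample data:

```python
from itertools import groupby

# (date, dept) rows for empid 1, sorted ascending by date.
rows = [('2012.02.03', 13), ('2013.02.03', 21), ('2014.02.03', 21),
        ('2015.02.03', 13), ('2016.02.03', 11), ('2017.02.03', 11)]

# Each run of consecutive identical depts is one island; keep the earliest
# date of each island, then list islands latest-first.
islands = [(min(d for d, _ in grp), dept)
           for dept, grp in groupby(rows, key=lambda r: r[1])]
islands.sort(reverse=True)
print(islands)
# [('2016.02.03', 11), ('2015.02.03', 13), ('2013.02.03', 21), ('2012.02.03', 13)]
```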
Oracle, skip duplicate rows on specific key by sort
[ "", "sql", "oracle", "greatest-n-per-group", "" ]
A coworker and I stumbled upon a newly acquired database schema with multiple tables that seem to have only one column. One table seems to be some sort of type, and another some sort of frequency, etc. We are only dealing with a schema here, so there's no actual data to go by. We have been thinking it over, and we can't really wrap our heads around the need for, or the type of information that should, "*in best practice*", use only a single column. Through my education, we were taught there should always be some sort of Note, Time Stamp, Description or type of information associated with each primary key. Through a couple of google searches I find that a lot of sites mention only how this behavior can affect primary key practices, but not information in general. So to reiterate my question: **Is it considered a bad practice to design tables with only a single column? And how can it actually be beneficial to have only a single column? Are there any industry examples you can think of?**
Almost every table that I create has the following columns:

* Primary key (generally a number, named after the table with `Id` appended).
* CreatedAt
* CreatedBy
* CreatedOn (the server where the row was created)

One use for a single-column table is to effectively implement a check constraint where the code can dynamically validate values. I would typically implement this using a reference table with proper foreign key relationships and the above columns. Another instance would be a number table, which just stores integer values. In general, I would say that it isn't a good idea. There may be specific cases such as a number table where it is fine.
As I see it, that depends on the use of these tables. If you have a lot of them, it's probably wrong. I can think of a few uses of a single-column table: they can be used as `derived tables` to generate IDs, sequences, dates (which would probably be more useful with more than one column to specify month, year, etc.), so I believe they do have a purpose. In general, it's always better to have more than one column, with some sort of key column such as an ID or a date, so the table will mean something. Is it bad practice? I believe so; it's always better to have more information in a table, unless it's a specific table used for a specific purpose.
Is having a single column table in SQL Server considered a bad practice?
[ "", "sql", "sql-server", "database", "database-design", "" ]
I want to select all the planes that don't belong to a certain company. I have three tables in this case: `Planes`, `Companies`, and `CompanyPlanes`. Here is my query:

```
SELECT *
FROM planes p, companyplanes cp, companies c
WHERE c.id = ?
  AND cp.idCompany != c.id
  AND (cp.idPlane = p.id OR p.id NOT IN (SELECT idPlane FROM companyplanes))
ORDER BY name ASC
```

But this query returned nothing! What is wrong here? Example:

```
| Plane |
---------
id | name
---------
1  | p1
2  | p2
3  | p3

|Company|
---------
id | name
---------
1  | c1
2  | c2

| companyPlanes |
------------------------
id | idCompany | idPlane
------------------------
1  | 1         | 1
2  | 1         | 2
3  | 2         | 2
```

If I want to get the planes that don't belong to the company `c2`, the result should be: p1, p3.
**Update Answer** We can get the result in the following way:

1. Get all planes of the excluded company: `SELECT idplane from CompanyPlanes WHERE idCompany = ?`
2. Get all planes except those planes of the excluded company: `SELECT * FROM Planes WHERE id NOT IN ( SELECT idplane from CompanyPlanes WHERE idCompany = ? )`

You don't need to `join` with the `Company` table, as you already get `idCompany` from the `CompanyPlanes` table.
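You can check this pattern quickly with an in-memory SQLite database (SQLite stands in for MySQL here; table and column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Planes (id INTEGER, name TEXT);
CREATE TABLE CompanyPlanes (id INTEGER, idCompany INTEGER, idPlane INTEGER);
INSERT INTO Planes VALUES (1,'p1'),(2,'p2'),(3,'p3');
INSERT INTO CompanyPlanes VALUES (1,1,1),(2,1,2),(3,2,2);
""")

# Planes NOT owned by company 2: exclude every idPlane that company 2 has.
rows = con.execute("""
    SELECT name FROM Planes
    WHERE id NOT IN (SELECT idPlane FROM CompanyPlanes WHERE idCompany = ?)
    ORDER BY name
""", (2,)).fetchall()
print(rows)
```

On the sample data this returns p1 and p3, matching the expected result for company `c2`.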
The inner join requires that the query return rows from planes which have a corresponding row in companyplanes, but the subselect excludes any rows which have corresponding records in companyplanes. Assuming that you want the records from planes which don't have a record in companyplanes, why are you also selecting from companies?

```
Select p.*
From planes p
Left join Companyplanes cp
On p.id=cp.idplane
Where cp.idplane is null;
```
Select from a table where id is not in another table
[ "", "mysql", "sql", "" ]
I have a table with a locale field like this:

```
Id, locale
1, "en-US"
2, "en-BR"
3, "en-SK"
4, "fr-FR"
5, "fr-FS"
```

I want to do a select on this table and group based on "en" or "fr" (part of the locale field string). What should I write to accomplish this?
Alternatively, you may also use the LEFT() function ``` SELECT left(locale, 2), count(id) FROM table group by left(locale, 2) ``` At the end of the day, the important part is to make use of a `group by` statement which contains a function. Note that this works in MySQL very well. Other DBMS might not like this.
You can use a combination of `substring` and `instr`. ``` select substring(locale,1,instr(locale,'-')-1), count(*) from tablename group by substring(locale,1,instr(locale,'-')-1) ```
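The same `substr`/`instr` combination runs unchanged in SQLite, so it is easy to verify against the sample data (SQLite spells the functions `substr()`/`instr()` rather than `LEFT()`, but the grouping logic is identical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, locale TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "en-US"), (2, "en-BR"), (3, "en-SK"), (4, "fr-FR"), (5, "fr-FS")])

# Group by the language prefix, i.e. everything before the '-'.
rows = con.execute("""
    SELECT substr(locale, 1, instr(locale, '-') - 1) AS lang, COUNT(*)
    FROM t
    GROUP BY lang
    ORDER BY lang
""").fetchall()
print(rows)
```

This yields one row per language prefix with its count, which is the shape the question asks for.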
GROUP BY based on String value
[ "", "mysql", "sql", "" ]
I have to create a dynamic SQL query in an SP. The dynamic query is 5000 chars, and I used NVARCHAR(MAX), VARCHAR(MAX), NVARCHAR(6000), VARCHAR(6000), but they all truncate to 4000 chars:

```
DECLARE @SCRIPT VARCHAR(8000)

set @SCRIPT ='
ASDADASD
ASDA DSADAD AD AS D......' +@VAR1+ ' AWDAWd' -- 6000 CHARS

PRINT LEN(@SCRIPT) -- THIS PRINTS "4000"
PRINT @SCRIPT -- THIS PRINTS ONLY 4000 CHARS
```

How can I increase the size of the string? I have read many posts but I couldn't get it done. Please help.
Use the `nvarchar(max)` data type instead. Be sure to prefix string literals with the N designator (`N' ASDADASD...'`). Cast explicitly so the default limitation will not apply:

```
set @script=convert(nvarchar(max),N'very-long-literal')
```

(reference: [The weird length of varchar and nvarchar in T-SQL](https://stackoverflow.com/questions/8115972/the-weird-length-of-varchar-and-nvarchar-in-t-sql?rq=1))

EDIT: full working example:

```
DECLARE @SCRIPT nvarchar(max)
SET @SCRIPT=convert(nvarchar(max),N' ASDADASD ASDA DSADAD AD AS D......')+replicate(convert(nvarchar(max),N'A'),6000)
PRINT LEN(@SCRIPT) -- this prints "6036"
PRINT @SCRIPT -- this prints all 6036 chars of the string
```
NVARCHAR(n) is limited to 4000, VARCHAR + NVARCHAR = NVARCHAR. Look at ``` DECLARE @SCRIPT VARCHAR(8000) set @SCRIPT = replicate('A',6000)+'A' select len(@SCRIPT) set @SCRIPT = replicate('A',6000)+N'A' select len(@SCRIPT), 'mind NVARCHAR' ``` Cast everything NVARCHAR(MAX) to be sure. ``` DECLARE @SCRIPT NVARCHAR(MAX) =replicate('A',7000) set @SCRIPT = @SCRIPT +'A' select len(@SCRIPT) set @SCRIPT = @SCRIPT + cast(replicate('A',6000) as NVARCHAR(MAX)) select len(@SCRIPT) ```
Cannot store 5000 chars in NVARCHAR and VARCHAR
[ "", "sql", "sql-server", "stored-procedures", "varchar", "nvarchar", "" ]
I have a problem displaying several columns with counts. This is my table "Empo":

```
idEmp DeptA DeptB
---- ---- ----
1 23 7
2 42 23
3 23 11
4 23 17
```

And I want to count the number of idEmp, and the number of times '23' appears in each Dept, to get something like this:

```
count(id) count(DeptA) count(DeptB)
---- ---- ----
4 3 1
```

I also have another table "Rapport":

```
idRap DeptA bonnus
---- ---- ----
1 23 200
2 42 23
3 23 346
4 77 44
```

and I also want to get the sum of the bonnus for DeptA. How do I do this in MySQL? Thank you
The method I have used in the past is to use a combination of Count and Sum.

```
select count(idEmp),
sum(Case when DeptA = 23 Then 1 else 0 End),
sum(Case when DeptB = 23 Then 1 else 0 End)
from tableX
```

Edit for question. I would use a subselect for the new case to prevent duplicates being added to the original counts. See below.

```
select count(idEmp) as RecordCount,
sum(Case when DeptA = 23 Then 1 else 0 End) as DeptA23Count,
(select sum(bonnus) from Rapport where DeptA = 23) as BonnusForDeptA23,
sum(Case when DeptB = 23 Then 1 else 0 End) as DeptB23Count
from tableX
```

Or something like that, depending on the where criteria.
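This conditional-aggregation pattern (`SUM(CASE ...)`) is easy to sanity-check with an in-memory SQLite database standing in for MySQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Empo (idEmp INTEGER, DeptA INTEGER, DeptB INTEGER)")
con.executemany("INSERT INTO Empo VALUES (?,?,?)",
                [(1, 23, 7), (2, 42, 23), (3, 23, 11), (4, 23, 17)])

# One pass over the table: total rows, plus two conditional counts.
row = con.execute("""
    SELECT COUNT(idEmp),
           SUM(CASE WHEN DeptA = 23 THEN 1 ELSE 0 END),
           SUM(CASE WHEN DeptB = 23 THEN 1 ELSE 0 END)
    FROM Empo
""").fetchone()
print(row)
```

On the sample data this gives the (4, 3, 1) result the question expects.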
Try below query :- ``` select (select count(id) from test) as countID, (select count(DeptA) from test where DeptA=23) as CountDeptA, (select count(DeptB) from test where DeptB=23) as CountDeptB ```
SQL select multiple columns, each one counting something
[ "", "mysql", "sql", "count", "" ]
## select value = Hemraj and value = pal, where field_id = 3 and where field_id = 4;

How do I solve this query? Table structure:

```
field_id = 3 value = Hemraj
field_id = 4 value = Pal
field_id = 3 value = Subhankar
field_id = 4 value = Chaole
field_id = 3 value = Suman
field_id = 4 value = Pal
field_id = 3 value = Akash
field_id = 4 value = Dutta
field_id = 3 value = Hemraj
field_id = 4 value = Pal
```
What about this? ``` SELECT value FROM engine4_user_fields_values where field_id=3 or field_id=4 or field_id=5 or field_id=6 .... ```
You can also use IN operator ``` SELECT value FROM engine4_user_fields_values where field_id IN (3,4,5,6) ```
How to solve these multiple where conditions (field_ids and values)?
[ "", "mysql", "sql", "" ]
I've been trying to use code which finds the count of elements in a table and stores it in a local variable. I basically just want to check the existence of a record, so let me know if there is an easier way to do this. Here is an example I found of storing the result of a query in a variable ([link](https://www.ibm.com/support/knowledgecenter/SSGU8G_11.70.0/com.ibm.sqlt.doc/ids_sqt_460.htm?lang=en)):

```
CREATE FUNCTION checklist( d SMALLINT )
RETURNING VARCHAR(30), VARCHAR(12), INTEGER;
DEFINE name VARCHAR(30);
DEFINE dept VARCHAR(12);
DEFINE num INTEGER;
SELECT mgr_name, department, CARDINALITY(direct_reports)
FROM manager INTO name, dept, num
WHERE dept_no = d;
IF num > 20 THEN
EXECUTE FUNCTION add_mgr(dept);
ELIF num = 0 THEN
EXECUTE FUNCTION del_mgr(dept);
ELSE RETURN name, dept, num;
END IF;
END FUNCTION;
```

But when I try to create my own version of this, I get a syntax error. I have no idea what the problem is.

```
CREATE FUNCTION test ()
RETURNING INTEGER AS num1;
DEFINE l_count INTEGER;
CREATE TEMP TABLE t_queued_calls
(
session_id DEC(18,0) PRIMARY KEY,
calling_number NVARCHAR(50)
) WITH NO LOG;
INSERT INTO t_queued_calls VALUES (123456, '5555555555');
SELECT COUNT(*) FROM t_queued_calls INTO l_count WHERE session_id = 123456;
DROP TABLE t_queued_calls;
END FUNCTION;
```
The position of the `INTO` clause is wrong in both functions. The INTO clause goes after the *select-list* (the list of expressions after the keyword SELECT) and before the FROM clause (see the Informix "Guide to SQL: Syntax" manual on the [SELECT](https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_0981.htm) statement), as in this code: ``` CREATE PROCEDURE test() RETURNING INTEGER AS num1; DEFINE l_count INTEGER; CREATE TEMP TABLE t_queued_calls ( session_id DEC(18,0) PRIMARY KEY, calling_number NVARCHAR(50) ) WITH NO LOG; INSERT INTO t_queued_calls VALUES (123456, '5555555555'); SELECT COUNT(*) INTO l_count FROM t_queued_calls WHERE session_id = 123456; DROP TABLE t_queued_calls; RETURN l_count; END PROCEDURE; ``` Also, the first function as shown in the question has the same problem with the ordering of the clauses. Also, it does not always RETURN a value, and the original version of the second function never returns a value (although it says it will).
The problem could be related to the fact that the insert doesn't list the column names. Adapt your_column1, your_column2 to your table schema:

```
INSERT INTO t_queued_calls (your_column1, your_column2) VALUES (123456, '5555555555');
SELECT COUNT(*) FROM t_queued_calls INTO l_count WHERE session_id = 123456;
```

And/or the number of columns in the select may not match the number and types in the insert... you select one field only but insert two fields. Also, SELECT ... INTO is a strange select format; normally it is INSERT INTO, and a plain select doesn't use an INTO clause.
Informix SELECT INTO syntax error
[ "", "sql", "syntax", "informix", "" ]
I have two tables with a many-to-many relationship. I need to join them and get the matched records.

```
Table 1
Column1 | column 2| column 3|
1|p1|1.0
1|p1|1.1
1|p1|1.2

Table 2
Column1 | column 2| column 3|
1|p1|2.0
1|p1|2.1
1|p1|2.2
```

Now I want the result as

```
1|p1|1.0|2.0
1|p1|1.1|2.1
1|p1|1.2|2.2
```

I mean column1 and column2 matching, and showing values from both tables for column3.

Edit 1: I have one issue after trying MT0's query. I am very much satisfied with his answer but still need some changes to be done:

```
Table 1
Column1 | column 2| column 3|
1|p1|1.0
1|p1|1.1
1|p1|1.2

Table 2
Column1 | column 2| column 3|
1|p1|1.0
1|p1|1.2
```

Now I want the result as

```
1|p1|1.0|1.0
1|p1|1.1|NULL
1|p1|1.2|1.2
```

But I am getting

```
1|p1|1.0|1.0
1|p1|1.1|1.2
1|p1|1.2|NULL
```

Please help with this.
If you have unequal numbers of rows for each partition then you can do: **Oracle Setup**: ``` CREATE TABLE table1 ( col1, col2, col3 ) AS SELECT 1, 'P1', '1.0' FROM DUAL UNION ALL SELECT 1, 'P1', '1.1' FROM DUAL UNION ALL SELECT 1, 'P1', '1.2' FROM DUAL UNION ALL SELECT 1, 'P2', '1.0' FROM DUAL UNION ALL SELECT 1, 'P2', '1.2' FROM DUAL UNION ALL SELECT 2, 'P1', '1.0' FROM DUAL; CREATE TABLE table2 ( col1, col2, col3 ) AS SELECT 1, 'P1', '2.0' FROM DUAL UNION ALL SELECT 1, 'P1', '2.1' FROM DUAL UNION ALL SELECT 1, 'P1', '2.2' FROM DUAL UNION ALL SELECT 1, 'P2', '2.1' FROM DUAL UNION ALL SELECT 2, 'P1', '2.0' FROM DUAL UNION ALL SELECT 2, 'P1', '2.1' FROM DUAL; ``` **Query**: ``` SELECT COALESCE( t1.col1, t2.col1 ) AS col1, COALESCE( t1.col2, t2.col2 ) AS col2, t1.col3 AS t1col3, t2.col3 AS t2col3 FROM ( SELECT t.*, ROW_NUMBER() OVER ( PARTITION BY col1, col2 ORDER BY col3 ) AS rn FROM table1 t ) t1 FULL OUTER JOIN ( SELECT t.*, ROW_NUMBER() OVER ( PARTITION BY col1, col2 ORDER BY col3 ) AS rn FROM table2 t ) t2 ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.RN = t2.rn ) ORDER BY col1, col2, t1col3 NULLS LAST, t2col3 NULLS LAST; ``` **Output**: ``` COL1 COL2 T1COL3 T2COL3 ---------- ---- ------ ------ 1 P1 1.0 2.0 1 P1 1.1 2.1 1 P1 1.2 2.2 1 P2 1.0 2.1 1 P2 1.2 2 P1 1.0 2.0 2 P1 2.1 ```
Please try below. ``` create table tab1(Col1 int,col2 varchar(10), col3 varchar(10)) insert into tab1 values(1,'p1','1.0'), (1,'p1','1.1'), (1,'p1','1.2') create table tab2(Col1 int,col2 varchar(10), col3 varchar(10)) insert into tab2 values(1,'p1','2.0'), (1,'p1','2.1'), (1,'p1','2.2') SELECT a.col1,a.col2,a.col3,b.col3 FROM ( select *,ROW_NUMBER() over(order by col1) as rownum from tab1 )a inner join ( select *,ROW_NUMBER() over(order by col1) as rownum from tab2 )b ON a.rownum = b.rownum and a.Col1 = b.Col1 ```
Joining two tables with a many-to-many relationship in SQL
[ "", "sql", "sql-server", "oracle", "" ]
I answered that tables had only one primary key but could have many unique constraints. But what else?
A primary key column can never be null; a unique column can be.
Some differences I could think of: 1. Primary Key can't be null whereas unique will allow one null value. 2. You can have multiple unique keys on a table but only one Primary Key. Some taken from WikiPedia - [Unique key - Differences from primary key constraints](https://en.wikipedia.org/wiki/Unique_key#Differences_from_primary_key_constraints): **Primary Key constraint** 1. A Primary Key cannot allow null (a primary key cannot be defined on columns that allow nulls). 2. Each table cannot have more than one primary key. 3. On some RDBMS a primary key generates a clustered index by default. **Unique constraint** 1. A unique constraint can be defined on columns that allow nulls. 2. Each table can have multiple unique keys. 3. On some RDBMS a unique key generates a nonclustered index by default.
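Two of these differences are easy to demonstrate with a quick in-memory SQLite session. One caveat: how many NULLs a unique constraint tolerates varies by engine (SQL Server allows only one per unique index, while SQLite, Oracle and PostgreSQL allow many), so this sketch shows SQLite's behaviour:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE person (
        id    INTEGER PRIMARY KEY,  -- exactly one primary key per table...
        email TEXT UNIQUE,          -- ...but as many unique constraints
        ssn   TEXT UNIQUE           -- as you need
    )
""")
con.execute("INSERT INTO person VALUES (1, 'a@x.com', '111')")
con.execute("INSERT INTO person VALUES (2, NULL, NULL)")   # NULLs pass the unique check
con.execute("INSERT INTO person VALUES (3, NULL, '222')")  # even a second NULL email

try:
    con.execute("INSERT INTO person VALUES (4, 'a@x.com', '333')")  # duplicate non-NULL
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

n = con.execute("SELECT COUNT(*) FROM person").fetchone()[0]
print(duplicate_rejected, n)
```

The duplicate non-NULL email is rejected, while repeated NULLs are accepted, and the table happily carries two unique keys alongside its single primary key.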
What are the main differences between a primary key and a unique constraint?
[ "", "sql", "primary-key", "unique-constraint", "" ]
I have a table with 2 columns (id, name) with following values: ``` id name --- --- 1 John 2 John 3 Mary 4 Mary ``` For values that are repeated in 'name', I only want to select those rows which have maximum value in 'id'. So my desired output is: ``` id name --- --- 2 John 4 Mary ``` I tried following instructions from this link: [Fetch the row which has the Max value for a column](https://stackoverflow.com/questions/121387/fetch-the-row-which-has-the-max-value-for-a-column) but couldn't get it to work. This is the query I'm using: ``` select name, id from (select name, max(id) over (partition by name) max_ID from sometable) where id = max_ID ``` But I'm getting this error: > Incorrect syntax near the keyword 'where'. Any ideas what I'm doing wrong?
If you alias your subquery you will avoid the syntax error... try putting "AS MyTable" after your closing parenthesis ``` select name, id FROM ( select name, max(id) over (partition by name) max_ID from sometable ) AS MyTable where id = max_ID ``` This might be simpler though: ``` SELECT name, MAX(id) FROM tablename GROUP BY name ```
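The simpler `GROUP BY` form can be checked directly against the sample data; here is a quick in-memory SQLite run (the SQL is portable, so SQLite stands in for SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sometable (id INTEGER, name TEXT)")
con.executemany("INSERT INTO sometable VALUES (?,?)",
                [(1, "John"), (2, "John"), (3, "Mary"), (4, "Mary")])

# One row per name, carrying the highest id for that name.
rows = con.execute(
    "SELECT name, MAX(id) FROM sometable GROUP BY name ORDER BY name"
).fetchall()
print(rows)
```

On the question's data this returns John with id 2 and Mary with id 4, which is the desired output.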
Your subquery has no mandatory alias ``` .. FROM ( select name, max(id) over (partition by name) max_ID from sometable ) t -- alias missing .. ```
TSQL: Get a row which has a max value for a column
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table with step id and resident id. I want to write a query to find the second largest step id for a particular resident. The query using max in the google examples doesn't have the where clause. I tried this:

```
SELECT DISTINCT(step_action_id) FROM step_table where resident_id =219
ORDER BY step_action_id DESC limit 2,1;
```

Table:

```
Step_Action_id Resident_id
800 218
801 218
911 218
857 218
856 219
858 219
```

The result I am expecting is 856, but I am getting the error below:

```
Error: ILLEGAL USE OF KEYWORD LIMIT. TOKEN SKIP OPTIMIZE FOR FETCH , EXCEPT MINUS UNION <END-OF-STATEMENT> WAS EXPECTED. SQLCODE=-199, SQLSTATE=42601, DRIVER=3.59.81
SQLState: 42601
ErrorCode: -199
Error: THE CURSOR SQL_CURLH200C1 IS NOT IN A PREPARED STATE. SQLCODE=-514, SQLSTATE=26501, DRIVER=3.59.81
SQLState: 26501
ErrorCode: -
```
If the step ids are distinct, you can just use `OFFSET` and `FETCH FIRST 1 ROW ONLY`:

```
SELECT step_action_id
FROM step_table
WHERE resident_id = 219
ORDER BY step_action_id DESC
OFFSET 1 ROWS
FETCH FIRST 1 ROW ONLY;
```

If they are not distinct, just add a `GROUP BY` or `SELECT DISTINCT`:

```
SELECT step_action_id
FROM step_table
WHERE resident_id = 219
GROUP BY step_action_id
ORDER BY step_action_id DESC
OFFSET 1 ROWS
FETCH FIRST 1 ROW ONLY;
```
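The same "skip one row, take one row" idea can be verified in SQLite, which spells the paging clause `LIMIT 1 OFFSET 1` instead of DB2's `OFFSET ... FETCH FIRST`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE step_table (step_action_id INTEGER, resident_id INTEGER)")
con.executemany("INSERT INTO step_table VALUES (?,?)",
                [(800, 218), (801, 218), (911, 218),
                 (857, 218), (856, 219), (858, 219)])

# Second largest step id for resident 219: sort descending, skip one, take one.
second = con.execute("""
    SELECT DISTINCT step_action_id
    FROM step_table
    WHERE resident_id = 219
    ORDER BY step_action_id DESC
    LIMIT 1 OFFSET 1
""").fetchone()[0]
print(second)
```

For resident 219 the ids are 858 and 856, so skipping the largest leaves 856, matching the expected result.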
You could use `ROW_NUMBER()`: ``` SELECT step_action_id FROM ( SELECT step_action_id, ROW_NUMBER() OVER(ORDER BY step_action_id DESC) AS rn FROM (SELECT DISTINCT step_action_id FROM step_table WHERE resident_id = 219 ) AS s ) AS s2 WHERE rn = 2; ``` `LiveDemo`
SQL query for second largest value with a where condition
[ "", "sql", "db2", "greatest-n-per-group", "" ]
I wrote a query to compare 2 columns in different tables (`TRELAY` VS `TUSERDEF8`). The query works great, except that it retrieves the top record in the `TUSERDEF8` table which has a many to one relationship to the `TRELAY` table. The tables are linked by `TRELAY.ID = TUSERDEF8.N01`. I would like to retrieve the latest record from `TUSERDEF8` and compare that record with the `TRELAY` record. I plan to use the max value of the index column (`TUSERDEF8.ID`) to determine the latest record. I am using SQL Server. My code is below, but I'm not sure how to change the query to retrieve the last `TUSERDEF8` record. Any help is appreciated. ``` SELECT TRELAY.ID, TRELAY.S15, TUSERDEF8.S04, TUSERDEF8.N01, TUSERDEF8.S06 FROM TRELAY INNER JOIN TUSERDEF8 ON TRELAY.ID = TUSERDEF8.N01 WHERE LEFT(TRELAY.S15, 1) <> LEFT(TUSERDEF8.S04, 1) AND NOT (TRELAY.S15 LIKE '%MEDIUM%' AND TUSERDEF8.S04 LIKE '%N/A%' AND TUSERDEF8.S06 LIKE '%EACMS%') ```
Using an ID column to determine which row is "last" is a bad idea Using cryptic table names like "TUSERDEF8" (how is it different from TUSERDEF7) is a very bad idea, along with completely cryptic column names like "S04". Using prefixes like "T" for table is a bad idea - it should already be clear that it's a table. Now that all of that is out of the way: ``` SELECT R.ID, R.S15, U.S04, U.N01, U.S06 FROM TRELAY R INNER JOIN TUSERDEF8 U ON U.N01 = R.ID LEFT OUTER JOIN TUSERDEF8 U2 ON U2.N01 = R.ID AND U2.ID > U.ID WHERE U2.ID IS NULL AND -- This will only happen if the LEFT OUTER JOIN above found no match, meaning that the row in U has the highest ID value of all matches LEFT(R.S15, 1) <> LEFT(U.S04, 1) AND NOT ( R.S15 LIKE '%MEDIUM%' AND U.S04 LIKE '%N/A%' AND U.S06 LIKE '%EACMS%' ) ```
I believe that your expected output is still a little ambiguous. It sounds to me like you want only the record from the output where TUSERDEF8.ID is at its max. If that's correct, then try this:

```
SELECT TRELAY.ID, TRELAY.S15, TUSERDEF8.S04, TUSERDEF8.N01, TUSERDEF8.S06
FROM TRELAY
INNER JOIN TUSERDEF8 ON TRELAY.ID = TUSERDEF8.N01
WHERE LEFT(TRELAY.S15, 1) <> LEFT(TUSERDEF8.S04, 1)
AND NOT (TRELAY.S15 LIKE '%MEDIUM%' AND TUSERDEF8.S04 LIKE '%N/A%' AND TUSERDEF8.S06 LIKE '%EACMS%')
AND TUSERDEF8.ID IN (SELECT MAX(TUSERDEF8.ID) FROM TUSERDEF8)
```

EDIT: After reviewing your recent comments, it would seem something like this would be more suitable:

```
SELECT
C.ID
, C.S15
, D.S04
, D.N01
, D.S06
FROM
(
SELECT A.ID, A.S15, MAX(B.ID) AS MaxID
FROM TRELAY AS A
INNER JOIN TUSERDEF8 AS B ON A.ID = B.N01
WHERE LEFT(A.S15, 1) <> LEFT(B.S04, 1)
AND NOT (A.S15 LIKE '%MEDIUM%' AND B.S04 LIKE '%N/A%' AND B.S06 LIKE '%EACMS%')
GROUP BY A.ID, A.S15
) AS C
INNER JOIN TUSERDEF8 AS D ON C.ID = D.N01 AND C.MaxID = D.ID
```
SQL query to retrieve last record from a linked table
[ "", "sql", "sql-server", "greatest-n-per-group", "" ]
I have a stored procedure in SQL Server which takes 3 parameters and returns a list of Orders like this: ``` @fieldToFilter VARCHAR(100), --Will only be 1 of these values 'Order Date', 'Delivery Date' or 'Dispatch Date' @StartDate DATE, @EndDate DATE SELECT o.Number, o.Customer FROM Order o WHERE (o.OrderDate > @StartDate) AND (o.OrderDate < @EndDate) AND (SoftDeleted = 0) ``` The problem with the query above is that I have hard-coded the column `OrderDate` in the `WHERE` clause. What I want to do is use the value in `@fieldToFilter` to determine what column should be filtered on. e.g. * If `@fieldToFilter` is `Order Date` then filter on the `OrderDate` column * if `@fieldToFilter` is `Delivery Date` then filter on the `DeliveryDate` column * if `@fieldToFilter` is `Dispatch Date` then filter on the `DispatchDate` column What would be the best way to implement something like this?
I'm not sure if I got the syntax right but something like this should work ``` SELECT o.Number, o.Customer FROM Order o WHERE (@StartDate < case when @fieldToFilter = 'Order Date' then o.OrderDate when @fieldToFilter = 'Delivery Date' then o.DeliveryDate when @fieldToFilter = 'Dispatch Date' then o.DispatchDate else o.OrderDate end) and (@EndDate > case when @fieldToFilter = 'Order Date' then o.OrderDate when @fieldToFilter = 'Delivery Date' then o.DeliveryDate when @fieldToFilter = 'Dispatch Date' then o.DispatchDate else o.OrderDate end) and (SoftDeleted=0) ```
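The CASE-in-WHERE idea is easy to test in miniature. Here is a SQLite sketch (hypothetical `Orders` table and ISO-string dates are my own simplification of the question's schema) where the filtered column is picked by the `@fieldToFilter`-style parameter rather than hard-coded:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Orders (Number INTEGER, OrderDate TEXT,
                                    DeliveryDate TEXT, DispatchDate TEXT)""")
con.executemany("INSERT INTO Orders VALUES (?,?,?,?)", [
    (1, "2024-01-05", "2024-01-20", "2024-01-10"),
    (2, "2024-02-05", "2024-02-20", "2024-02-10"),
])

def orders_between(field, start, end):
    # The CASE expression selects which date column the range filter applies to.
    return con.execute("""
        SELECT Number FROM Orders
        WHERE (CASE :f WHEN 'Order Date'    THEN OrderDate
                       WHEN 'Delivery Date' THEN DeliveryDate
                       ELSE DispatchDate END) BETWEEN :s AND :e
    """, {"f": field, "s": start, "e": end}).fetchall()

print(orders_between("Delivery Date", "2024-02-01", "2024-02-28"))
```

One design note: because the CASE is evaluated per row, the optimizer generally cannot use an index on any of the three date columns with this form, whereas the IF/ELSE branching in the other answer keeps each branch index-friendly.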
This should get you started. Just replace the other two SELECT statements with the appropriate filters. ``` IF @fieldToFilter = 'Order Date' BEGIN SELECT o.Number, o.Customer FROM Order o WHERE (o.OrderDate > @StartDate) and (o.OrderDate < @EndDate) and SoftDeleted=0) END ELSE IF @fieldToFilter = 'Delivery Date' BEGIN SELECT '2' END ELSE IF @fieldToFilter = 'Dispatch Date' BEGIN SELECT '3' END ```
SQL Query on Field Name passed into stored procedure
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I just started in a new project, in a new company. I was given a big and complex SQL query, with about 1000 lines and MANY subqueries, joins, sums, group by, etc. This SQL is used for report generation (it has no inserts or updates). The SQL has some flaws, and my first job in the company is to identify and correct these flaws so that the report shows the correct values (I know the correct values by accessing a legacy system written in Cobol...) How can I make it easier for me to understand the query, so I can identify the flaws? As an experienced Java programmer, I know how to refactor complex, badly written monolithic Java code into easier-to-understand code with small pieces of code. But I have no clue on how to do that with SQL. The SQL looks like this:

```
SELECT columns
FROM (SELECT columns
FROM (SELECT DISTINCT columns
FROM table000 alias000
INNER JOIN table000 alias000
ON column000 = table000.column000
LEFT JOIN (SELECT columns
FROM ( SELECT DISTINCT columns
FROM columns
WHERE conditions) AS alias000
GROUP BY columns ) alias000
ON conditions
WHERE conditions ) AS alias000
LEFT JOIN (SELECT columns
FROM many_tables
WHERE many_conditions ) ) ) AS alias000
ON condition
LEFT JOIN ( SELECT columns
FROM (SELECT columns
FROM many_tables
WHERE many_conditions ) ) ) AS alias001
, (SELECT many_columns
FROM many_tables
WHERE many_conditions) AS alias001 ) AS alias001
ON condition
LEFT JOIN (SELECT many_columns
FROM many_tables
WHERE many_conditions ) AS alias001
ON condition
, (SELECT DISTINCT columns
FROM table001 alias001
INNER JOIN table001 alias001
ON condition
LEFT JOIN (SELECT columns
FROM ( SELECT DISTINCT columns
FROM tables
WHERE conditions ) AS alias001
GROUP BY columns ) alias001
ON condition
WHERE conditions ) AS alias001
LEFT JOIN (SELECT columns
FROM tables
WHERE conditions ) AS alias001
ON condition
LEFT JOIN ( SELECT columns
FROM (SELECT columns
FROM tables
WHERE conditions ) AS alias001
, (SELECT columns
FROM tables
WHERE conditions ) AS alias001 ) AS
alias001 ON condition LEFT JOIN (SELECT columns FROM tables WHERE conditions ) AS alias001 ON condition WHERE condition ) AS alias001 order by column001 ``` How can I make it easier for me to understand the query, so I can identify the flaws?
The solution was to simplify the query using COMMON TABLE EXPRESSIONS. This allowed me to break the big and complex SQL query into many small and easy to understand queries. **COMMON TABLE EXPRESSIONS:** * Can be used to break up complex queries, especially complex joins and sub-queries * Is a way of encapsulating a query definition. * Persist only until the next query is run. * Correct use can lead to improvements in both code quality/maintainability and speed. * Can be used to reference the resulting table multiple times in the same statement (eliminate duplication in SQL). * Can be a substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata. ### Example: ``` WITH cte (Column1, Column2, Column3) AS ( SELECT Column1, Column2, Column3 FROM SomeTable ) SELECT * FROM cte ``` My new SQL looks like this: ``` ------------------------------------------ --COMMON TABLE EXPRESSION 001-- ------------------------------------------ WITH alias001 (column001, column002) AS ( SELECT column005, column006 FROM table001 WHERE condition001 GROUP by column008 ) -------------------------------------------- --COMMON TABLE EXPRESSION 002 -- -------------------------------------------- , alias002 (column009) as ( select distinct column009 from table002 ) -------------------------------------------- --COMMON TABLE EXPRESSION 003 -- -------------------------------------------- , alias003 (column1, column2, column3) as ( SELECT '1' AS column1, '1' as column2, 'name001' AS column3 FROM SYSIBM.SYSDUMMY1 UNION ALL SELECT '1' AS column1, '1.1' as column2, 'name002' AS column3 FROM SYSIBM.SYSDUMMY1 UNION ALL SELECT '1' AS column1, '1.2' as column2, 'name003' AS column3 FROM SYSIBM.SYSDUMMY1 UNION ALL SELECT '2' AS column1, '2' as column2, 'name004' AS column3 FROM SYSIBM.SYSDUMMY1 UNION ALL SELECT '2' AS column1, '2.1' as column2, 'name005' AS column3 FROM SYSIBM.SYSDUMMY1 UNION ALL SELECT '2' AS column1, '2.2' as column2, 
'name006' AS column3 FROM SYSIBM.SYSDUMMY1 UNION ALL SELECT '3' AS column1, '3' as column2, 'name007' AS column3 FROM SYSIBM.SYSDUMMY1 UNION ALL SELECT '3' AS column1, '3.1' as column2, 'name008' AS column3 FROM SYSIBM.SYSDUMMY1 ) -------------------------------------------- --COMMON TABLE EXPRESSION 004 -- -------------------------------------------- , alias004 (column1) as ( select distinct column1 from table003 ) ------------------------------------------------------ --COMMON TABLE EXPRESSION 005 -- ------------------------------------------------------ , alias005 (column1, column2) as ( select column1, column2 from alias002, alias004 ) ------------------------------------------------------ --COMMON TABLE EXPRESSION 006 -- ------------------------------------------------------ , alias006 (column1, column2, column3, column4) as ( SELECT column1, column2, column3, sum(column0) as column4 FROM table004 LEFT JOIN table005 ON column01 = column02 group by column1, column2, column3 ) ------------------------------------------------------ --COMMON TABLE EXPRESSION 007 -- ------------------------------------------------------ , alias007 (column1, column2, column3, column4) as ( SELECT column1, column2, column3, sum(column0) as column4 FROM table006 LEFT JOIN table007 ON column01 = column02 group by column1, column2, column3 ) ------------------------------------------------------ --COMMON TABLE EXPRESSION 008 -- ------------------------------------------------------ , alias008 (column1, column2, column3, column4) as ( select column1, column2, column3, column4 from alias007 where column5 = 123 ) ---------------------------------------------------------- --COMMON TABLE EXPRESSION 009 -- ---------------------------------------------------------- , alias009 (column1, column2, column3, column4) as ( select column1, column2, CASE WHEN column3 IS NOT NULL THEN column3 ELSE 0 END as column3, CASE WHEN column4 IS NOT NULL THEN column4 ELSE 0 END as column4 from table007 ) 
---------------------------------------------------------- --COMMON TABLE EXPRESSION 010 -- ---------------------------------------------------------- , alias010 (column1, column2, column3) as ( select column1, sum(column4), sum(column5) from alias009 where column6 < 2005 group by column1 ) -------------------------------------------- -- MAIN QUERY -- -------------------------------------------- select j.column1, n.column2, column3, column4, column5, column6, column3 + column5 AS column7, column4 + column6 AS column8 from alias010 j left join alias006 m ON (m.column1 = j.column1) left join alias008 n ON (n.column1 = j.column1) ```
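To see the `WITH ... AS (...)` shape in isolation, here is a minimal runnable CTE. SQLite is used here purely for convenience; the syntax is the same idea as the DB2 query above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SomeTable (Column1 INTEGER, Column2 TEXT)")
con.executemany("INSERT INTO SomeTable VALUES (?,?)", [(1, "a"), (2, "b"), (3, "c")])

# The CTE encapsulates one small, nameable step; the main query reads from it
# as if it were a table.
rows = con.execute("""
    WITH cte AS (
        SELECT Column1, Column2 FROM SomeTable WHERE Column1 > 1
    )
    SELECT * FROM cte ORDER BY Column1
""").fetchall()
print(rows)
```

Each CTE in the big query above plays exactly this role: a named, independently testable intermediate result.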
I deal with code like this every day as we do a lot of reporting and exporting of complex data here. **Step one is to understand the meaning of what you are doing.** If you don't understand the meaning, you can't evaluate if you got the correct results. So understand exactly what you are trying to accomplish and see if you can see the results you should see for one record in the user interface. It really helps to have something to compare to so that you can see as you go through the query how adding in new things changes the results. If your query has used single letters or something else meaningless for the derived table aliases, then as you figure out the meaning of what that derived table is supposed to be doing, replace the alias with something more meaningful like Employees instead of A. This will make it easier for the next person who works on it to decode it later. Then what you do is start at the innermost derived table (or subquery if you prefer, but when it is being used as a table, the term derived table is more accurate). First figure out what it is supposed to be doing. For instance maybe it is getting all the employees who have less than satisfactory performance evaluations. Run that and check the results to see if they look correct based on the meaning of what you are doing. For instance, if you are looking at unsatisfactory evaluations and you have 10,000 employees, would 5617 seem like a reasonable result set for that chunk of data? Look for repeated records. If the same person is in there three times, then likely you have a problem where you are joining one to many and getting the many back when you only want one. This can be fixed either through using aggregate functions and group by or putting another derived table in to replace the problem join.
Once you have the innermost part clear, then start checking the results of the other derived tables, adding the code back in and checking the results until you find where records dropped out that should not have (hey, I had 137 employees at this stage and now I only have 116: what caused that?). Remember that is only a clue to look at why it happened. There will be times as you build a complex query when the basic results will change and times when they should not, which is why understanding the meaning of the data is critical.

Some things in general to look out for:

* How null values are handled can affect results.
* Mixing implicit and explicit joins can cause incorrect results in some databases.
* In any case, you should replace all implicit joins with explicit ones. That makes the code clearer and less likely to have errors.
* If you have implicit joins, look for accidental cross joins. They are very easy to introduce even in short queries; in complex ones they are much more likely, which is why implicit joins should never be used.
* If you have left joins, look out for places where they get accidentally converted to inner joins by putting a condition on the left-joined table in the `WHERE` clause (other than checking whether its id is null). So this structure is a problem:

```
FROM table1 t1
LEFT JOIN Table2 t2 ON t1.t1id = T2.t1id
WHERE t2.somefield = 'test'
```

and should be

```
FROM table1 t1
LEFT JOIN Table2 t2 ON t1.t1id = T2.t1id
   AND t2.somefield = 'test'
```
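The first bullet about null handling deserves its own illustration. The classic trap is `NOT IN` against a subquery that can return `NULL`: the comparison evaluates to unknown and the whole result comes back empty, while `NOT EXISTS` is unaffected. A small sketch using Python's `sqlite3` (tables `a` and `b` are hypothetical, chosen only for this demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1), (NULL);
""")

# `2 NOT IN (1, NULL)` evaluates to NULL, not TRUE,
# so NOT IN returns no rows at all here.
not_in = conn.execute(
    "SELECT id FROM a WHERE id NOT IN (SELECT id FROM b)"
).fetchall()

# NOT EXISTS compares row by row and ignores the stray NULL.
not_exists = conn.execute("""
    SELECT id FROM a
    WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.id = a.id)
""").fetchall()

print(not_in, not_exists)  # [] [(2,)]
```

When a subquery in a big report silently empties a result set, this is one of the first things worth checking.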
Best way to understand big and complex SQL queries with many subqueries
[ "", "sql", "db2", "subquery", "" ]
Using SQL Server, I have a table that looks something like the following:

```
id | time                | measurement
---+---------------------+-------------
1  | 2014-01-01T05:00:00 | 1.0
1  | 2014-01-01T06:45:00 | 2.0
1  | 2014-01-01T09:30:00 | 3.0
1  | 2014-01-01T11:00:00 | NULL
1  | 2014-02-05T03:00:00 | 1.0
1  | 2014-02-05T05:00:00 | NULL
```

The measurements being stored are presumed to be accurate until a new value is provided for the same id; the last measurement for a given id is the end of the sequence. I'm interested in creating a query or view that synthesizes new data points on each hour defined by these spans if they don't exist (and the previous point was neither 0 nor NULL), thus:

```
id | time                | measurement
---+---------------------+-------------
1  | 2014-01-01T05:00:00 | 1.0
1  | 2014-01-01T06:00:00 | 1.0
1  | 2014-01-01T06:45:00 | 2.0
1  | 2014-01-01T07:00:00 | 2.0
1  | 2014-01-01T08:00:00 | 2.0
1  | 2014-01-01T09:00:00 | 2.0
1  | 2014-01-01T09:30:00 | 3.0
1  | 2014-01-01T10:00:00 | 3.0
1  | 2014-02-05T03:00:00 | 1.0
1  | 2014-02-05T04:00:00 | 1.0
```

Is this feasible? Would it be more feasible if each input row had a "duration", specifying the amount of time for which its measurement is valid? (In this case, we would effectively be unpacking a run-length encoding in SQL.)

[My target is SQL Server 2012, which has `LEAD` and `LAG` functions, allowing such queries to be easily constructed.]

---

To provide that data in a format consumable by SQL Server:

```
select id, cast(stime as datetime) as [time], measurement
from (values
    (1, '2014-01-01T05:00:00', 1.0),
    (1, '2014-01-01T05:00:00', 1.0),
    (1, '2014-01-01T06:45:00', 2.0),
    (1, '2014-01-01T09:30:00', 3.0),
    (1, '2014-01-01T11:00:00', NULL),
    (1, '2014-02-05T03:00:00', 1.0),
    (1, '2014-02-05T05:00:00', NULL)
) t(id, stime, measurement)
```
It's complex, but working (for the dataset you provided):

```
;WITH cte AS
(
    SELECT *
    FROM (VALUES
        (1, '2014-01-01T05:00:00', '1.0'),(1, '2014-01-01T06:45:00', '2.0'),
        (1, '2014-01-01T09:30:00', '3.0'),(1, '2014-01-01T11:00:00', NULL),
        (1, '2014-02-05T03:00:00', '1.0'),(1, '2014-02-05T05:00:00', NULL)
    ) as t (id, [time], measurement)
)
--Get intervals for every date
, dates AS
(
    SELECT MIN([time]) [min],
           DATEADD(hour,-1,MAX([time])) [max]
    FROM cte
    GROUP BY CAST([time] as date)
)
--Create table with gap datetimes
, add_dates AS
(
    SELECT CAST([min] as datetime) as date_
    FROM dates
    UNION ALL
    SELECT DATEADD(hour,1,a.date_)
    FROM add_dates a
    INNER JOIN dates d ON a.date_ between d.[min] and d.[max]
    WHERE a.date_ < d.[max]
)
--Get intervals of datetimes with ids and measurements
, res AS
(
    SELECT id,
           [time],
           LEAD([time],1,NULL) OVER (ORDER BY [time]) as [time1],
           measurement
    FROM cte
)
--Final select
SELECT DISTINCT *
FROM (
    SELECT r.id, a.date_, r.measurement
    FROM add_dates a
    LEFT JOIN res r ON a.date_ between r.time and r.time1
    WHERE measurement IS NOT NULL
    UNION ALL
    SELECT *
    FROM cte
    WHERE measurement IS NOT NULL
) as t
ORDER BY t.date_
```

Output:

```
id  date_                    measurement
1   2014-01-01 05:00:00.000  1.0
1   2014-01-01 06:00:00.000  1.0
1   2014-01-01 06:45:00.000  2.0
1   2014-01-01 07:00:00.000  2.0
1   2014-01-01 08:00:00.000  2.0
1   2014-01-01 09:00:00.000  2.0
1   2014-01-01 09:30:00.000  3.0
1   2014-01-01 10:00:00.000  3.0
1   2014-02-05 03:00:00.000  1.0
1   2014-02-05 04:00:00.000  1.0
```

**EDIT**

*First part*

If you change the `dates` CTE to this:

```
, dates AS
(
    SELECT DATEADD(hour,DATEPART(hour,MIN([time])),CAST(CAST(MIN([time]) as date) as datetime)) [min],
           DATEADD(hour,-1,MAX([time])) [max]
    FROM cte
    GROUP BY CAST([time] as date)
)
```

> This truncates minute and second values from the dates.
*Second part*

> And adding `partition by id` in the `LEAD` statement keeps different
> data items from being munged together

```
, res AS
(
    SELECT id,
           [time],
           LEAD([time],1,NULL) OVER (PARTITION BY id ORDER BY [time]) as [time1],
           measurement
    FROM cte
)
```

For the original dataset the output will be the same.
```
DECLARE @t TABLE
    (
      id INT ,
      t DATETIME ,
      m MONEY
    )
INSERT INTO @t
VALUES  ( 1, '2014-01-01T05:00:00', 1.0 ),
        ( 1, '2014-01-01T06:45:00', 2.0 ),
        ( 1, '2014-01-01T09:30:00', 3.0 ),
        ( 1, '2014-01-01T11:00:00', NULL ),
        ( 1, '2014-02-05T03:00:00', 1.0 ),
        ( 1, '2014-02-05T05:00:00', NULL );

WITH tal AS(SELECT -1 + ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS n
            FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) a(i)
            CROSS JOIN (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) b(i)
            CROSS JOIN (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) c(i)),
rnk AS(SELECT *, ROW_NUMBER() OVER(PARTITION BY id ORDER BY t) AS rn
       FROM @t),
itr AS(SELECT lr.id, rr.t, DATEADD(mi, 60 - DATEPART(mi, lr.t), lr.t) AS wt, lr.m
       FROM rnk lr
       LEFT JOIN rnk rr ON lr.id = rr.id AND lr.rn = rr.rn - 1
       WHERE lr.m IS NOT NULL AND lr.m <> 0)
SELECT * FROM @t WHERE m IS NOT NULL AND m <> 0
UNION ALL
SELECT i.id, DATEADD(hh, t.n, i.wt), i.m
FROM itr i
JOIN tal t ON DATEADD(hh, t.n, i.wt) < i.t
ORDER BY id, t
```

**Breakdown:**

**1:**

```
tal AS(SELECT -1 + ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS n
       FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) a(i)
       CROSS JOIN (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) b(i)
       CROSS JOIN (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) c(i))
```

This will return the numbers `0, 1, 2, 3, 4, 5 ..... 999`, which is approximately 41 days of consecutive 1-hour intervals. If bigger intervals are needed, just add more cross joins.
**2:**

```
rnk AS(SELECT *, ROW_NUMBER() OVER(PARTITION BY id ORDER BY t) AS rn
       FROM @t)
```

This will rank your rows within `id` and will return:

```
id  t                        m     rn
1   2014-01-01 05:00:00.000  1.00  1
1   2014-01-01 06:45:00.000  2.00  2
1   2014-01-01 09:30:00.000  3.00  3
1   2014-01-01 11:00:00.000  NULL  4
1   2014-02-05 03:00:00.000  1.00  5
1   2014-02-05 05:00:00.000  NULL  6
```

**3:**

```
itr AS(SELECT lr.id, rr.t, DATEADD(mi, 60 - DATEPART(mi, lr.t), lr.t) AS wt, lr.m
       FROM rnk lr
       LEFT JOIN rnk rr ON lr.id = rr.id AND lr.rn = rr.rn - 1
       WHERE lr.m IS NOT NULL AND lr.m <> 0)
```

This is the main part. It produces the intervals: `wt` holds the starting hour and `t` holds the end of the interval:

```
id  t                        wt                       m
1   2014-01-01 06:45:00.000  2014-01-01 06:00:00.000  1.00
1   2014-01-01 09:30:00.000  2014-01-01 07:00:00.000  2.00
1   2014-01-01 11:00:00.000  2014-01-01 10:00:00.000  3.00
1   2014-02-05 05:00:00.000  2014-02-05 04:00:00.000  1.00
```

**4:**

The last part takes all rows from the input table, filtering out `NULL` and `0` values, and unions another set that you get by joining the previous intervals to the tally table to produce all the hours in each interval.
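The `itr` step pairs each row with its successor via a self-join on `rn`; on SQL Server 2012+ (which the question targets) `LEAD` produces the same pairing in one pass, as the other answer does. A minimal sketch of that pairing, run with Python's `sqlite3` (which also supports `LEAD`) on the question's sample data; the table name `readings` is made up to avoid the column/table name clash:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (id INTEGER, t TEXT, m REAL);
    INSERT INTO readings VALUES
        (1, '2014-01-01T05:00:00', 1.0),
        (1, '2014-01-01T06:45:00', 2.0),
        (1, '2014-01-01T09:30:00', 3.0),
        (1, '2014-01-01T11:00:00', NULL);
""")

# Each row gets the next row's timestamp as the end of its interval;
# the last row per id has no successor, so end_t is NULL.
rows = conn.execute("""
    SELECT id, t AS start_t,
           LEAD(t) OVER (PARTITION BY id ORDER BY t) AS end_t,
           m
    FROM readings
    ORDER BY t
""").fetchall()
for r in rows:
    print(r)
```

Either formulation yields the (start, end, measurement) spans that the tally/numbers table is then joined against to emit one row per hour.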
Synthesizing SQL rows within a range
[ "", "sql", "sql-server", "" ]