I have the following simple DB: ``` Table Types: - ID int - TypeName nvarchar Table Users: - ID int - UserName nvarchar - TypeID int Table BusyTime - ID int - UserID int - BTime time(0) ``` But there is one restriction: records in BusyTime should exist only for users which have TypeID = 3. Users with TypeID = 1 and TypeID = 2 can't have records in BusyTime (it contradicts the business logic). How can I describe this at the MS SQL level? Or should I redesign the DB?
I'm assuming your primary keys in each table are just on `ID`. What you need to change is, add a `UNIQUE KEY` constraint on *both* `ID` and `TypeID` in `Users`: ``` ALTER TABLE Users ADD CONSTRAINT UQ_User_Types_XRef UNIQUE (ID,TypeID) ``` And create the `BusyTime` table as: ``` CREATE TABLE BusyTime ( ID int not null, UserID int not null, BTime time(0) not null, _Type_XRef as 3 persisted, constraint PK_BusyTime PRIMARY KEY (ID), constraint FK_BusyTime_Users FOREIGN KEY (UserID) references Users (ID), constraint FK_BusyTime_Users_XRef FOREIGN KEY (UserID,_Type_XRef) references Users (ID,TypeID) ) ``` Where I've assumed `PK_BusyTime` and `FK_BusyTime_Users` were your existing constraints. It's a matter of taste whether you drop `FK_BusyTime_Users` (which is the "real" foreign key constraint) now that `FK_BusyTime_Users_XRef` exists.
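The composite-key trick above is not SQL Server specific. Here is a minimal sketch with SQLite and Python (an assumption-laden adaptation: older SQLite versions lack persisted computed columns, so a `CHECK`-constrained default column plays the role of `_Type_XRef`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only after this opt-in
conn.executescript("""
CREATE TABLE Users (
    ID       INTEGER PRIMARY KEY,
    UserName TEXT    NOT NULL,
    TypeID   INTEGER NOT NULL,
    UNIQUE (ID, TypeID)               -- the extra "XRef" key the answer adds
);
CREATE TABLE BusyTime (
    ID        INTEGER PRIMARY KEY,
    UserID    INTEGER NOT NULL,
    BTime     TEXT    NOT NULL,
    Type_XRef INTEGER NOT NULL DEFAULT 3 CHECK (Type_XRef = 3),
    FOREIGN KEY (UserID, Type_XRef) REFERENCES Users (ID, TypeID)
);
""")
conn.execute("INSERT INTO Users VALUES (1, 'alice', 3), (2, 'bob', 2)")

# A TypeID = 3 user may have busy time...
conn.execute("INSERT INTO BusyTime (ID, UserID, BTime) VALUES (1, 1, '09:00')")

# ...but a TypeID = 2 user is rejected by the composite foreign key.
try:
    conn.execute("INSERT INTO BusyTime (ID, UserID, BTime) VALUES (2, 2, '10:00')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

The second insert fails because no `(ID, TypeID) = (2, 3)` row exists in `Users`, which is exactly the business rule.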
You can use check constraints to disallow invalid types: <https://technet.microsoft.com/en-us/library/ms188258%28v=sql.105%29.aspx>
SQL Rules depend on selected value
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I currently have the following situation: I am trying to build a filter with which I can select every row that has a set of given properties. I will explain this with an example: ``` SELECT table1.name FROM table1 LEFT JOIN table2 ON table1.ID = table2.ID WHERE table2.propertyID = "First PropertyID" AND table2.propertyID = "Second PropertyID" ``` Obviously, this returns no rows every time. But I cannot figure out how to do this. Table 1 has two columns: ID and Name. Table 2 looks like this: ``` | ID | PropertyID | | 1 | 3 | | 1 | 5 | | 1 | 7 | | 2 | 6 | | 2 | 1 | ``` I would like to be able to select the rows from Table 1 which are connected to e.g. PropertyID 3 and 5. I have found a lot of examples, but these all use OR or IN, but I need AND in this situation. Is this even possible with MySQL or do I need to create some workaround? Any help is greatly appreciated. **Edit 1:** Table 1 looks like this: ``` | ID | Name | | 1 | test | | 2 | hello | ``` In my case, I need to do more LEFT JOINs to reach table2, but I thought this example would be sufficient to get the correct method.
I recommend you take one step back and consider this problem without the join. You want to know which rows are related to both properties 3 and 5. You can do that without the join like this: ``` SELECT tab1ID FROM table2 WHERE propertyID = 3 OR propertyID = 5 GROUP BY tab1ID HAVING COUNT(*) = 2; ``` To get the information from table 1, you can do an inner join so that only id 1 will be returned: ``` SELECT t1.name FROM table1 t1 JOIN table2 t2 ON t2.tab1ID = t1.id WHERE t2.propertyid = 3 OR t2.propertyid = 5 GROUP BY t1.id HAVING COUNT(*) = 2; ``` Here is an [SQL Fiddle](http://sqlfiddle.com/#!2/2a736/5) example.
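The technique above (relational division via `HAVING COUNT`) can be checked end to end with SQLite and the question's sample data; this sketch assumes `(ID, PropertyID)` pairs are unique, as they are in the sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (ID INTEGER, Name TEXT);
CREATE TABLE table2 (ID INTEGER, PropertyID INTEGER);
INSERT INTO table1 VALUES (1, 'test'), (2, 'hello');
INSERT INTO table2 VALUES (1, 3), (1, 5), (1, 7), (2, 6), (2, 1);
""")
rows = conn.execute("""
    SELECT t1.Name
    FROM table1 t1
    JOIN table2 t2 ON t2.ID = t1.ID
    WHERE t2.PropertyID IN (3, 5)   -- IN is shorthand for the ORed equalities
    GROUP BY t1.ID
    HAVING COUNT(*) = 2             -- one match per required property
""").fetchall()
print(rows)  # [('test',)]
```

Only ID 1 carries both properties, so only `test` survives the `HAVING` filter.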
With an outer join, move the conditions on the outer table into the `ON` clause; otherwise the outer join behaves just like an inner join: ``` SELECT table1.name FROM table1 LEFT JOIN table2 ON table1.ID = table2.ID AND table2.propertyID = "First PropertyID" AND table2.propertyID = "Second PropertyID" ```
Select row from left join table where multiple conditions are true
[ "", "mysql", "sql", "join", "" ]
Assume this very simple SQL query: ``` SELECT * FROM a WHERE time < '2010-01-01' ``` Now, how can I assemble a query where the time part is actually an 'array'? Something along the lines of: ``` SELECT * FROM a WHERE time < ['2010-01-01', '2012-01-01'] ``` The Select should be executed two times but result in a single result set. Note, that this sample array contains only two items, but it may contain many more, as the results are actually coming from a sub-query. Postgresql 9.3
[ANY](http://www.postgresql.org/docs/8.2/static/functions-subquery.html#AEN13976) works for this, e.g: ``` SELECT * FROM a WHERE time < ANY (SELECT '2010-01-01' UNION ALL SELECT '2012-01-01') ```
This should do the trick - use `ANY` as in: ``` select * from a where time < any(array['2010-01-01'::timestamp, '2012-01-01'::timestamp]) ```
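`x < ANY (list)` is equivalent to comparing `x` against the greatest value in the list. SQLite has neither arrays nor `ANY`, so this hedged sketch uses that equivalence instead (dates are ISO-8601 strings, so text order matches date order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (t TEXT);
INSERT INTO a VALUES ('2009-06-01'), ('2011-06-01'), ('2013-06-01');
""")
# x < ANY (v1, v2, ...) is the same as x < (the greatest of the values)
rows = conn.execute("""
    SELECT t FROM a
    WHERE t < (SELECT MAX(column1)
               FROM (VALUES ('2010-01-01'), ('2012-01-01')))
    ORDER BY t
""").fetchall()
print(rows)  # [('2009-06-01',), ('2011-06-01',)]
```

Both rows below the larger bound come back; only the 2013 row is excluded, matching what `< ANY` would return in Postgres.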
Repeat a SQL query
[ "", "sql", "postgresql", "" ]
Well, I have 2 tables like this: Table1 ``` ID | USER_ID 1 0 2 2 3 15 4 16 ``` Table2 ``` ID | FROM | TO 9 0 2 9 2 16 9 16 15 9 15 0 10 15 2 ``` What I want is really simple but driving me crazy, considering that `ID` , `FROM` and `TO` represents users in table 2. I want to get someone in `FROM` (which is Table1.user\_id) with an `ID` in table2 such as it also `exists` in `TO` (which is the same Table1.user\_id) with the same `ID` of table2 For example, record 16 is eligible. Because it appears in From with `ID` of 9 and as `TO` with the same `ID` of 9 in table 2 (both TO and FROM correspond to a user\_id of 15 in table1) What I have done was: ``` select * from `Table1` where exists (select ID from `Table2` as p1 where FROM = 16) and exists (select ID from `Table2` as p2 where ID = 16) and p1.ID = p2.ID ```
This may work; ``` select * from table1 a where a.USER_ID in (select b.`FROM` from table2 b where exists (select c.id from table2 c where b.id = c.id and b.`FROM` = c.`TO`) ) ``` Note that `FROM` and `TO` are reserved words in MySQL, so they have to be quoted with backticks.
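The query above can be exercised against the question's sample data with SQLite; `FROM` and `TO` are reserved there too, hence the double-quoted identifiers. With this particular sample every user happens to qualify, including 16 from the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript('''
CREATE TABLE table1 (ID INTEGER, USER_ID INTEGER);
CREATE TABLE table2 (ID INTEGER, "FROM" INTEGER, "TO" INTEGER);
INSERT INTO table1 VALUES (1, 0), (2, 2), (3, 15), (4, 16);
INSERT INTO table2 VALUES (9, 0, 2), (9, 2, 16), (9, 16, 15),
                          (9, 15, 0), (10, 15, 2);
''')
rows = conn.execute('''
    SELECT a.USER_ID
    FROM table1 a
    WHERE a.USER_ID IN (SELECT b."FROM"
                        FROM table2 b
                        WHERE EXISTS (SELECT 1 FROM table2 c
                                      WHERE b.ID = c.ID AND b."FROM" = c."TO"))
    ORDER BY a.USER_ID
''').fetchall()
print(rows)  # [(0,), (2,), (15,), (16,)]
```

For instance 16 qualifies because `(9, 16, 15)` has `FROM = 16` and `(9, 2, 16)` has `TO = 16` under the same ID 9.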
You could try using a [self join](https://stackoverflow.com/questions/1284441/how-does-a-mysql-self-join-work) to find records with the same ID and then compare the values. ``` select a.`FROM` from table2 a inner join table2 b on a.id = b.id where a.`FROM` = b.`TO` ```
SQL where exists while preserving values between them
[ "", "mysql", "sql", "" ]
I'm trying to write an INSERT INTO that does a some DISTINCT/GROUP BY work. The query runs perfectly fine as a select statement, but will not work if it's wrapped into an INSERT INTO. ``` INSERT INTO MasterRecords (BatchRecordRecordID, SourceID, BatchID) SELECT RecordID, SourceID, BatchID FROM ( SELECT RecordID, BatchID, 101 AS SourceID FROM BatchRecords WHERE BatchID = 150 GROUP BY RecordID, BatchID ) BR ``` This earns me: > SQL Error: ORA-00979: not a GROUP BY expression But if I remove just the INSERT INTO code, it runs perfectly fine: ``` SELECT RecordID, SourceID, BatchID FROM ( SELECT RecordID, BatchID, 101 AS SourceID FROM BatchRecords WHERE BatchID = 150 GROUP BY RecordID, BatchID ) BR ``` Results: ``` 3 101 150 5 101 150 6 101 150 2 101 150 4 101 150 8 101 150 7 101 150 1 101 150 ``` My assumption is that GROUP BY's are not allowed inside INSERT INTO select statements but I can find almost no documentation confirming this.
Am I thinking wrong, or isn't the SQL below equal to what you want to achieve? ``` INSERT INTO MasterRecords(BatchRecordRecordID, SourceID, BatchID) SELECT DISTINCT RecordID, 101, 150 FROM BatchRecords WHERE BatchID = 150 ; ```
I arrived here trying to solve a similar situation, so it seems this kind of problem still appears. In my case, avoiding any optimizer transformation did the trick. I applied a NO\_QUERY\_TRANSFORMATION hint to the *"intoed"* SELECT statement and the error disappeared. In the case of this question, I would rewrite it as: ``` INSERT INTO MasterRecords (BatchRecordRecordID, SourceID, BatchID) SELECT /*+NO_QUERY_TRANSFORMATION*/ RecordID, SourceID, BatchID FROM ( SELECT RecordID, BatchID, 101 AS SourceID FROM BatchRecords WHERE BatchID = 150 GROUP BY RecordID, BatchID ) BR ```
Valid GROUP BY query doesn't work when combined with INSERT INTO on Oracle
[ "", "sql", "oracle", "oracle12c", "" ]
``` SELECT * FROM table1 WHERE col1 = ANY ( SELECT DISTINCT col2 FROM table2 WHERE first = 'blah' OR second = 'blahblah' OR third = 'blahblahblah' ); ``` The subquery checks three columns (`first`, `second`, `third`). In this example I'm using `ANY`, but instead I want it to return like this: ``` If found `first` return `first` else if found `second` return `second` else if found `third` return `third` ``` Know what I'm saying? I can only select 1 row (the best) in priority `first`, `second`, `third`
I think you want this: ``` SELECT * FROM table1 WHERE col1 = ( SELECT col2 FROM table2 WHERE first = 'blah' OR second = 'blahblah' OR third = 'blahblahblah' ORDER BY first = 'blah' DESC, second = 'blahblah' DESC, third = 'blahblahblah' DESC LIMIT 1 ); ```
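The ordering-by-boolean-expressions trick above works the same way in SQLite, so it can be sanity-checked there (hypothetical sample data: only the second-priority column matches for id 10, the first-priority column for id 20):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (col1 INTEGER, label TEXT);
CREATE TABLE table2 (col2 INTEGER, first TEXT, second TEXT, third TEXT);
INSERT INTO table1 VALUES (10, 'second-priority match'),
                          (20, 'first-priority match');
INSERT INTO table2 VALUES (10, 'x',    'blahblah', 'z'),
                          (20, 'blah', 'y',        'z');
""")
rows = conn.execute("""
    SELECT * FROM table1
    WHERE col1 = (
        SELECT col2 FROM table2
        WHERE first = 'blah' OR second = 'blahblah' OR third = 'blahblahblah'
        ORDER BY first  = 'blah'         DESC,  -- booleans sort as 1/0
                 second = 'blahblah'     DESC,
                 third  = 'blahblahblah' DESC
        LIMIT 1
    )
""").fetchall()
print(rows)  # [(20, 'first-priority match')]
```

Both `table2` rows satisfy the `WHERE`, but the `ORDER BY ... LIMIT 1` keeps only the best-priority one.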
Does this do what you want? ``` SELECT t.*, 'first' FROM table1 t WHERE col1 IN (SELECT col2 FROM table2 WHERE first = 'blah') UNION ALL SELECT t.*, 'second' FROM table1 t WHERE col1 IN (SELECT col2 FROM table2 WHERE first <> 'blah' AND second = 'blahblah') UNION ALL SELECT t.*, 'third' FROM table1 t WHERE col1 IN (SELECT col2 FROM table2 WHERE first <> 'blah' AND second <> 'blahblah' AND third = 'blahblahblah' ); ``` For each match (based on `col1`) it returns the first of the three conditions.
Select "the best result" from subquery?
[ "", "mysql", "sql", "" ]
I have a table with data similar to the following. I've been using a large table with numerous rows with varying flags, and keys. I've managed to group them down so that I have the lowest where the flag is true, and the lowest where the flag is false. ``` ╔══════════════════╦══════╦═══════╗ β•‘ Email β•‘ Flag β•‘ Key β•‘ ╠══════════════════╬══════╬═══════╣ β•‘ email1@one.com β•‘ 1 β•‘ 77731 β•‘ β•‘ email1@one.com β•‘ 0 β•‘ 67980 β•‘ β•‘ email2@two.com β•‘ 1 β•‘ 64417 β•‘ β•‘ email2@two.com β•‘ 0 β•‘ 71733 β•‘ β•‘ email3@three.com β•‘ 1 β•‘ 95655 β•‘ β•‘ email4@four.com β•‘ 0 β•‘ 91016 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β• ``` Now, for each distinct email, if there is a true AND false flag, I want to return the *true* Key value. Otherwise, I want to return the lowest value. So the output would ideally look like this: ``` ╔══════════════════╦══════╦═══════╗ β•‘ Email β•‘ Flag β•‘ Key β•‘ ╠══════════════════╬══════╬═══════╣ β•‘ email1@one.com β•‘ 1 β•‘ 77731 β•‘ β•‘ email2@two.com β•‘ 1 β•‘ 64417 β•‘ β•‘ email3@three.com β•‘ 1 β•‘ 95655 β•‘ β•‘ email4@four.com β•‘ 0 β•‘ 91016 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β• ``` I've been trying all kinds of grouping, having clauses, using case statements in the previous two, but can't see how to do so. I only really need the email and the Key, if that helps.
There's lots of ways, here are two. **Method 1** You could do it with a CTE like this: ``` WITH data_cte AS ( SELECT Email, MAX(CAST(Flag AS INT)) AS Flag FROM Data GROUP BY Email) SELECT Data.* FROM data_cte JOIN Data ON Data.Email = data_cte.Email AND Data.Flag = data_cte.Flag ``` To de-construct it, the CTE part just gets the `MAX` value of flag for each email (need to `CAST` to `INT` as you can't `MAX` on a `BIT` column) and the rest of the query joins the CTE back to the table to get the relevant data rows. **Method 2** Using a `UNION`: ``` SELECT * FROM Data WHERE Flag = 1 UNION SELECT * FROM Data WHERE Flag = 0 AND NOT EXISTS(SELECT * FROM Data AS InnerData WHERE InnerData.Flag = 1 AND InnerData.Email = Data.Email) ```
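Method 1 translates directly to SQLite (which has no `BIT` type, so the `CAST` is unnecessary; `Key` is double-quoted because it is a keyword). With the question's sample data it reproduces the desired output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Data (Email TEXT, Flag INTEGER, "Key" INTEGER);
INSERT INTO Data VALUES
  ('email1@one.com', 1, 77731), ('email1@one.com', 0, 67980),
  ('email2@two.com', 1, 64417), ('email2@two.com', 0, 71733),
  ('email3@three.com', 1, 95655), ('email4@four.com', 0, 91016);
""")
rows = conn.execute("""
    WITH data_cte AS (
        SELECT Email, MAX(Flag) AS Flag   -- best flag per email
        FROM Data GROUP BY Email)
    SELECT Data.*
    FROM data_cte
    JOIN Data ON Data.Email = data_cte.Email AND Data.Flag = data_cte.Flag
    ORDER BY Data.Email
""").fetchall()
for r in rows:
    print(r)
```

This assumes, as in the sample, at most one row per `(Email, Flag)` pair; with duplicates the join would return all of them.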
``` SELECT A.Email ,CASE WHEN A.Cf > A.Sf THEN A.Mk ELSE A.[Key] END AS ChosenKey FROM ( SELECT Email ,[Key] ,COUNT(Flag) OVER(PARTITION BY Email) AS Cf ,MIN([Key]) OVER(PARTITION BY Email) AS Mk ,SUM(CAST(Flag AS INT)) OVER(PARTITION BY Email) AS Sf FROM [Table] ) AS A ``` Maybe work from something like that? And then remove your duplicates etc. (Note that `Key` and `Table` are bracketed because they are reserved words, and each window function needs an `OVER` clause.)
Get column value where boolean is true, otherwise get the column value where boolean false
[ "", "sql", "sql-server", "" ]
**Problem Statement:** I'm trying to convert one of my SQL queries to a LINQ query, but I'm unable to get the desired output. Can anyone suggest what I should do? **SQL Query:** ``` SELECT AssetTagging.AssetID, AssetTagging.AssetDescription, [Return].RequestStatus FROM AssetTagging LEFT OUTER JOIN [Return] ON AssetTagging.AssetID = [Return].AssetID LEFT OUTER JOIN Issue ON AssetTagging.AssetID = Issue.AssetID WHERE (Issue.AssetID IS NULL) OR ([Return].RequestStatus = 'Approved') ``` **LINQ Query I'm using:** ``` var result = (from at in db.AssetTagging.AsEnumerable() join r in db.Return on at.AssetID equals r.AssetID orderby at.AssetID where !db.Issue.Any(issue=>issue.AssetID==at.AssetID) || r.RequestStatus=="Approved" select new globalTestModel { model1=at }).ToList(); // I know that in the LINQ query I'm using an inner join instead of a left join, but I get an error if I use a left join instead of an inner join ``` What am I doing wrong? *Any suggestion to get the desired query like the SQL in LINQ?* **Asset Tag table:** ![enter image description here](https://i.stack.imgur.com/vHqSD.png) **Issue table:** ![enter image description here](https://i.stack.imgur.com/F2TrQ.png) **Return table:** ![enter image description here](https://i.stack.imgur.com/fW8h0.png) **Desired Output :** ![enter image description here](https://i.stack.imgur.com/OeQFt.png)
You need to do it like this: ``` var result = (from at in db.AssetTagging join r in db.Returns on at.AssetID equals r.AssetID into a from returns in a.DefaultIfEmpty() join i in db.Issues on at.AssetID equals i.AssetID into b from issues in b.DefaultIfEmpty() where issues == null || returns.RequestStatus == "Approved" select new { AssetID = at.AssetID, AssetDescription = at.AssetDescription, Status = returns != null ? returns.RequestStatus : null }).ToList(); ```
You need to remove `.AsEnumerable()`, because you want your query to be translated to `sql`. Right now it would be using `linq-to-objects`, and if you are using a left join with `linq-to-objects` you need to check for null reference exceptions. `rt` could be null, so `rt.RequestStatus` would throw an exception. \*I believe `rt` should be `r` in your example You can't project to an existing entity, so you need to change your select to: ``` select new PocoClass { model1=at } //New class definition public class PocoClass { public AssetTagging model1 { get; set; } } ```
Unable to convert SQL Query to LINQ Query for Left Outer Join
[ "", "sql", "asp.net-mvc", "linq", "" ]
I have the need for 2 different types of joins on the same tables (lets say `ADDRESS` and `USER`). I can either make 2 tables (`BILLING_ADDRESS` and `SHIPPING_ADDRESS`) that both have 3 columns (`ID`, `USER_ID`, `ADDRESS_ID`), or I can make a single table(`CUSTOMER_ADDRESS`) with a type column (`ID`, `USER_ID`, `ADDRESS_ID`, `ADDRESS_TYPE`). For DRY coding practices I'm thinking just the single table, but that means when I compile the 2 lists I would have to do full table scans twice. ``` select address.* from customer_addresses, address where user_id = 1 and address_type = 'Billing' ``` and ``` select address.* from customer_addresses, address where user_id = 1 and address_type = 'Shipping' ``` Both rely on full table scans of the customer\_addresses table. If we have 1000 customer addresses that means 2000 records have been scanned to find all the addresses for that customer. If I do the 2 different tables, then only 1000 customer addresses are scanned, because the shipping\_addresses table only holds 800 address/customer records, and the billing\_addresses table hold the other 200. So for performance I would have to say the 2 different tables. For DRY I would have to go with the single table. What are the industry thoughts on this?
A shipping address and a billing address might be different things. For instance, a billing address might be a PO Box, but a shipping address often cannot be. Similarly, a shipping address might include other information, such as a contact name, contact phone, and drop-off instructions. I just mention this because you need to decide whether the differences are material enough to create a separate entity, or just to have a few separate fields in an address table. This is just to let you know that there might be other fields. I think this is the query you suggest (with the `join` syntax fixed): ``` select a.* from customer_addresses ca join address a on ca.address_id = a.address_id where ca.user_id = 1 and ca.address_type = 'Billing'; ``` This does not require a full table scan with an intelligent data design. As Barmar points out in a comment, you should have a proper index on these tables. In this case, the indexes you want are `customer_address(user_id, address_type)` and `address(address_id)`. If a database only did full table scans for `SELECT` queries, SQL would be a much less useful language and probably not used anywhere.
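The claim that the query needs no full table scan once the suggested indexes exist can be checked with SQLite's query planner (a sketch; the column lists are trimmed to what the query touches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE address (address_id INTEGER PRIMARY KEY, line1 TEXT);
CREATE TABLE customer_addresses (
    user_id INTEGER, address_type TEXT, address_id INTEGER);
CREATE INDEX ix_ca ON customer_addresses (user_id, address_type);
""")
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT a.*
    FROM customer_addresses ca
    JOIN address a ON ca.address_id = a.address_id
    WHERE ca.user_id = 1 AND ca.address_type = 'Billing'
""").fetchall()
details = " | ".join(row[-1] for row in plan)
print(details)
uses_index = "USING INDEX ix_ca" in details
print(uses_index)  # True
```

The plan reports a `SEARCH` on `customer_addresses` via `ix_ca` rather than a `SCAN`, confirming the index is used for the equality predicates.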
A single table allows for more flexibility. For instance, in the future you might decide to allow a customer to store alternate shipping addresses, and choose one when placing an order. You could then add `address_type = 'Alternate Shipping Address'`, you wouldn't have to add another whole table. There should be little performance impact of this design. An index on the `user_id` will narrow down the query to just a few rows that need to be scanned for the desired address type.
should I have 2 identical tables
[ "", "sql", "database", "" ]
I have created a table: ``` CREATE TABLE AIRLINE ( airline_code NUMBER(4) PRIMARY KEY NOT NULL, airline_name VARCHAR(29) NOT NULL, airline_address1 VARCHAR(29) NOT NULL, airline_address2 VARCHAR(29), airline_postcode VARCHAR(29), airline_city VARCHAR(29) NOT NULL, airline_country VARCHAR(29) NOT NULL ); ``` And when I insert this Insert statement: ``` INSERT INTO AIRLINE (airline_code, airline_name, airline_address1, airline_address2, airline_postcode, airline_city, airline_country) VALUES ("BA07", "British Airways PLC", "Waterside", "PO Box 365, Harmondsworth", "UB7 0GB", "London", "United Kingdom"); ``` I get an error pointing to United Kingdom saying 'column not allowed here', as far as I'm aware there's the same number of columns as there is data being inserted into the table.
Double quotes are usually used for object names (e.g. a column named "First name"). That is part of the SQL-92 standard. In ANSI SQL, double quotes quote object names (e.g. tables), which allows them to contain characters not otherwise permitted, or to be the same as reserved words (avoid this, really). Single quotes are for strings. ``` INSERT INTO AIRLINE (airline_code, airline_name, airline_address1, airline_address2, airline_postcode, airline_city, airline_country) VALUES ('BA07', 'British Airways PLC', 'Waterside', 'PO Box 365, Harmondsworth', 'UB7 0GB', 'London', 'United Kingdom'); ```
Use single quotes around the strings, not double quotes.
Oracle SQL: Column not allowed
[ "", "sql", "oracle", "" ]
I have this data: ``` nov_id 2.1.1 2.1.10 2.1.11 2.1.12 2.1.13 2.1.14 2.1.2 2.1.3 2.1.4 2.1.5 2.1.6 2.1.7 2.1.8 2.1.9 2.2 2.3 2.4 2.5 2.6 ``` I need to order my results so my result expected is this: ``` nov_id 2.1.1 2.1.2 2.1.3 2.1.4 2.1.5 2.1.6 2.1.7 2.1.8 2.1.9 2.1.10 2.1.11 2.1.12 2.1.13 2.1.14 2.2 2.3 2.4 2.5 2.6 ``` This is one of my tries: ``` Select nov_id From dbo.NS_tbl_sc_novedad Order by Convert(int,Left(Ltrim(Rtrim(replace(nov_id,'.','')))+'0000',4)); ``` I tried to paste some zero's and order by that but, obviously I don't get it yet.
This should work with any string that has 2 or 3 parts with any number of digits in the number, e.g 1546.345.245 and 999.34 ``` select nov_id from data cross apply ( select charindex('.', nov_id) as pos ) as c1 cross apply ( select charindex('.', nov_id, c1.pos+1) as pos ) as c2 order by convert(int, left(nov_id, c1.pos-1)), convert(int, substring(nov_id, c1.pos+1, isnull(nullif(c2.pos, 0), 100)-c1.pos-1)), convert(int, case c2.pos when 0 then 0 else substring(nov_id, c2.pos+1, 100) end) ``` Looks a little messy, though :)
For your particular data, this will work: ``` order by left(nov_id, 3), len(nov_id), nov_id ``` The idea is to order by the length, because the smaller numbers at the end have a shorter length -- given how the values are stored. This can be revised to be more general, depending on what your data really looks like.
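Both answers emulate a numeric sort on each dot-separated part. When the ordering can be done in application code rather than in SQL, the same idea is a one-liner (a sketch, independent of either answer):

```python
# The question's values, in their broken lexicographic order.
ids = ["2.1.1", "2.1.10", "2.1.11", "2.1.12", "2.1.13", "2.1.14",
       "2.1.2", "2.1.3", "2.1.4", "2.1.5", "2.1.6", "2.1.7",
       "2.1.8", "2.1.9", "2.2", "2.3", "2.4", "2.5", "2.6"]

# Split on dots and compare the parts as integers, not as text.
ids.sort(key=lambda s: tuple(int(part) for part in s.split(".")))

print(ids[:3])   # ['2.1.1', '2.1.2', '2.1.3']
print(ids[13])   # '2.1.14' -- 2.1.10..2.1.14 now follow 2.1.9
print(ids[-5:])  # ['2.2', '2.3', '2.4', '2.5', '2.6']
```

Tuples compare element-wise, so `(2, 1, 9) < (2, 1, 10) < (2, 2)`, which is exactly the expected ordering.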
How to order query result by multipart X.Y[.Z] "version" numbers?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I need to allow users to enter SQL select statements in my web application; these select statements will be used to generate the options in a customized dropdownlist. So I have a field on the UI where the user enters a select; how can I prohibit the user from entering an insert/update/delete? I could check that the first statement word is `select`, however they could enter multiple statements on the UI separated by semicolons.
I would do the following: * In your database, create a user that has only been granted SELECT privileges on the tables that you want to be accessible to the user. * In your server, use a separate data source with the read-only user from above for the queries issued from the client.
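The same least-privilege idea can be sketched with SQLite, which supports read-only connections via URI mode; a server database would instead use a restricted login as described above:

```python
import os
import sqlite3
import tempfile

# A writable connection sets up some data...
path = os.path.join(tempfile.mkdtemp(), "app.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE options (label TEXT)")
rw.execute("INSERT INTO options VALUES ('red'), ('green')")
rw.commit()

# ...while user-supplied SELECTs run over a read-only connection.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
labels = ro.execute("SELECT label FROM options ORDER BY label").fetchall()
print(labels)  # [('green',), ('red',)]

try:
    ro.execute("DELETE FROM options")   # any write attempt fails
    blocked = False
except sqlite3.OperationalError:
    blocked = True
print(blocked)  # True
```

Even if a malicious statement slips past string-level checks, the connection itself refuses to write.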
Trust me, a user with malicious intent will find ways to bypass your checks and inject SQL. (Especially if you are using MS-SQL Server.) So, **do not** do this. Write a proper user interface, no matter how complex it needs to be, and make sure that absolutely no string entered by the user ends up un-quoted and un-escaped in an SQL query.
Allow user to enter SQL select statement in web application disabling updates
[ "", "sql", "web", "sql-injection", "" ]
Apologies if that is difficult to understand, I work for a survey company and am relatively new to SQL. I have the following two tables: **targetReadings** ``` id Epoch PointNumber Easting Northing ``` **targetShift** ``` ID Epoch PointNumber ShiftEasting ShiftNorthing ``` We have data automatically going in to **targetReadings**. We then plot this on graphs to view on our website. Sometimes we need to apply shifts to our data. I need a query that will run through the data and apply the correct shift value. For example, if I have a point called `R101` that has coordinates `East 100, North 500`. This may get knocked and the latest reading maybe `East 101, North 501`. I would then put an entry in **targetShift**: ``` 2015-01-01, R101, -1, -1 ``` Then a week later it may get knocked again and give me readings of `East 105, North 105`. I would then put another entry in **targetShift**: ``` 2015-01-08, R101, -4, -4 ``` I need a query that will run through the data in **targetReadings** and apply the correct shift based on the data from **targetShift**. My current query is ``` SELECT CASE WHEN a.Epoch > r.Epoch THEN a.Easting + r.ShiftEasting END as ShiftEast , a.PointNumber , a.Epoch , r.Epoch FROM TargetReadings a inner join targetShift r on r.PointNumber = a.PointNumber ``` This works but it brings back a result for every entry in **targetShift**, as this is a large project there could be many shifts for each point. I only want 1 result for each entry in **targetReadings**. Thank you for your help. 
Here is some example data, I have the data for both tables and a table below of the results I want from the query: **TargetShift** ``` PointNumber Epoch ShiftEasting ShiftNorthing CB501 01/01/2014 00:00 0 0 CB501 01/01/2015 00:00 100000 100000 CB501 02/01/2015 00:00 200000 200000 ``` **TargetReadings** ``` PointNumber Epoch Easting Northing CB501 31/12/2014 00:01 528196.679 178392.171 CB501 31/12/2014 01:01 528196.679 178392.170 CB501 01/01/2015 00:01 528196.675 178392.165 CB501 01/01/2015 01:01 528196.676 178392.166 CB501 02/01/2015 00:01 528196.679 178392.167 CB501 02/01/2015 05:01 528196.679 178392.167 ``` **Expected Query Result** ``` PointNumber Epoch ShiftEasting ShiftNorthing CB501 31/12/2014 00:01 528196.679 178392.171 CB501 31/12/2014 01:01 528196.679 178392.170 CB501 01/01/2015 00:01 628196.675 278392.165 CB501 01/01/2015 01:01 628196.676 278392.166 CB501 02/01/2015 00:01 728196.679 378392.167 CB501 02/01/2015 05:01 728196.679 378392.167 ```
After examining the sample data and your clarifications in the comments, it looks like you don't want to sum **all** shifts, but you want to add only the most recent shift, if any. In this case `OUTER APPLY` seems the best choice. **sample data:** ``` DECLARE @TargetShift TABLE (PointNumber char(5), Epoch datetime, ShiftEasting float, ShiftNorthing float); INSERT INTO @TargetShift (PointNumber, Epoch, ShiftEasting, ShiftNorthing) VALUES ('CB501', '2014-01-01T00:00:00', 0 , 0 ), ('CB501', '2015-01-01T00:00:00', 100000, 100000), ('CB501', '2015-01-02T00:00:00', 200000, 200000); DECLARE @TargetReadings TABLE (PointNumber char(5), Epoch datetime, Easting float, Northing float); INSERT INTO @TargetReadings (PointNumber, Epoch, Easting, Northing) VALUES ('CB501', '2014-12-31T00:01:00', 528196.679, 178392.171), ('CB501', '2014-12-31T01:01:00', 528196.679, 178392.170), ('CB501', '2015-01-01T00:01:00', 528196.675, 178392.165), ('CB501', '2015-01-01T01:01:00', 528196.676, 178392.166), ('CB501', '2015-01-02T00:01:00', 528196.679, 178392.167), ('CB501', '2015-01-02T05:01:00', 528196.679, 178392.167); ``` **query** ``` SELECT R.PointNumber , R.Epoch , R.Easting + ISNULL(OA_Shift.ShiftEasting, 0) as ShiftEast , R.Northing + ISNULL(OA_Shift.ShiftNorthing, 0) as ShiftNorth FROM @TargetReadings AS R OUTER APPLY ( SELECT TOP(1) S.ShiftEasting , S.ShiftNorthing FROM @TargetShift AS S WHERE S.PointNumber = R.PointNumber AND S.Epoch < R.Epoch ORDER BY S.Epoch DESC ) OA_Shift ORDER BY R.PointNumber , R.Epoch ; ``` **result** ``` PointNumber Epoch ShiftEast ShiftNorth CB501 2014-12-31 00:01:00.000 528196.679 178392.171 CB501 2014-12-31 01:01:00.000 528196.679 178392.17 CB501 2015-01-01 00:01:00.000 628196.675 278392.165 CB501 2015-01-01 01:01:00.000 628196.676 278392.166 CB501 2015-01-02 00:01:00.000 728196.679 378392.167 CB501 2015-01-02 05:01:00.000 728196.679 378392.167 ``` For each row in `TargetReadings`, `OUTER APPLY` finds 1 row from `TargetShift` with the same `PointNumber` and dated prior to the row from `TargetReadings`. If you add an index to `TargetShift` on `(PointNumber, Epoch DESC)` the query should be efficient.
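SQLite has no `OUTER APPLY`, but the same "latest shift strictly before the reading" lookup can be written as a correlated scalar subquery. A sketch against a trimmed version of the sample data (eastings only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TargetShift (PointNumber TEXT, Epoch TEXT, ShiftEasting REAL);
CREATE TABLE TargetReadings (PointNumber TEXT, Epoch TEXT, Easting REAL);
INSERT INTO TargetShift VALUES
  ('CB501', '2014-01-01 00:00', 0),
  ('CB501', '2015-01-01 00:00', 100000),
  ('CB501', '2015-01-02 00:00', 200000);
INSERT INTO TargetReadings VALUES
  ('CB501', '2014-12-31 00:01', 528196.679),
  ('CB501', '2015-01-01 00:01', 528196.675),
  ('CB501', '2015-01-02 00:01', 528196.679);
""")
rows = conn.execute("""
    SELECT R.PointNumber, R.Epoch,
           R.Easting + IFNULL(
               (SELECT S.ShiftEasting
                FROM TargetShift S
                WHERE S.PointNumber = R.PointNumber
                  AND S.Epoch < R.Epoch      -- only shifts before this reading
                ORDER BY S.Epoch DESC        -- ...and take the latest of them
                LIMIT 1), 0) AS ShiftEast
    FROM TargetReadings R
    ORDER BY R.Epoch
""").fetchall()
for r in rows:
    print(r)
```

Each reading picks up exactly one shift (0, 100000, 200000 respectively), matching the accepted answer's result column.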
Try like this; it keeps only the most recent shift per point: ``` SELECT CASE WHEN a.Epoch > r.Epoch THEN a.Easting + r.ShiftEasting END AS ShiftEast ,a.PointNumber ,a.Epoch ,r.Epoch FROM TargetReadings a INNER JOIN ( SELECT * ,ROW_NUMBER() OVER ( PARTITION BY PointNumber ORDER BY [datecolumn] DESC ) rn FROM targetshift ) r ON r.PointNumber = a.PointNumber WHERE r.rn = 1 ```
Applying shifts in SQL
[ "", "sql", "" ]
I'm trying to select the amount of times a player has won and lost in the following table for each player name that exists in the table. This table holds games played. ``` β”Œβ”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚idβ”‚winner β”‚loser β”‚ β”œβ”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚1 β”‚player1 β”‚ player2β”‚ β”œβ”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚2 β”‚player1 β”‚ player2β”‚ β”œβ”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚3 β”‚player2 β”‚ player1β”‚ β”œβ”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚4 β”‚player3 β”‚ player2β”‚ β””β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ ``` Example: In the above table, player1 would have two wins and one loss. I'm trying to select the amount of wins and losses each player has. I currently have the following SQL ``` SELECT winner, count(winner) FROM multiplayergamestbl GROUP BY winner ``` This will return each player along with their wins, but the count of losses is not what I'm expecting, count of losses will return the same value as the count of wins. I'm clearly a bit new to SQL, can someone help me?
One way to do this is to do two separate queries (one for winner count and one for loser count) and union them together and use the result as a derived table, like this: ``` select player, max(count_winner), max(count_loser) from ( SELECT winner as player, count(winner) as count_winner, null as count_loser FROM multiplayergamestbl GROUP BY winner union all SELECT loser as player, null as count_winner, count(loser) as count_loser FROM multiplayergamestbl GROUP BY loser ) t group by player; ``` The outer query uses the max aggregate function to flatten the rows, without it you would get five rows instead of three (see [this example](http://www.sqlfiddle.com/#!2/c720f0/4)). [Sample SQL Fiddle](http://www.sqlfiddle.com/#!2/c720f0/3)
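The approach above runs as-is on SQLite with the question's sample data; `IFNULL` is added in this sketch so a player with no losses (player3) shows 0 rather than NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE multiplayergamestbl (id INTEGER, winner TEXT, loser TEXT);
INSERT INTO multiplayergamestbl VALUES
  (1, 'player1', 'player2'), (2, 'player1', 'player2'),
  (3, 'player2', 'player1'), (4, 'player3', 'player2');
""")
rows = conn.execute("""
    SELECT player,
           IFNULL(MAX(count_winner), 0) AS wins,
           IFNULL(MAX(count_loser), 0)  AS losses
    FROM (
        SELECT winner AS player, COUNT(*) AS count_winner, NULL AS count_loser
        FROM multiplayergamestbl GROUP BY winner
        UNION ALL
        SELECT loser, NULL, COUNT(*)
        FROM multiplayergamestbl GROUP BY loser
    ) t
    GROUP BY player
    ORDER BY player
""").fetchall()
print(rows)  # [('player1', 2, 1), ('player2', 1, 3), ('player3', 1, 0)]
```

As stated in the question, player1 ends up with two wins and one loss.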
First turn your table into something closer to the format it should be in using a subquery, then do a simple sum over that: ``` select player, sum(won) won, sum(lost) lost from ( select winner player, 1 won, 0 lost from multiplayergamestbl union all select loser, 0, 1 from multiplayergamestbl) x group by player ``` Note the `union all`: a plain `union` would remove duplicate rows and undercount repeated identical results.
Select count of two columns while grouped by
[ "", "mysql", "sql", "" ]
I know it's difficult to answer without knowing the model, but I have the following heavy query that takes around 10 secs to complete in my MySQL database. I guess it can be optimized, but I'm not that skilled. ``` SELECT DISTINCT b . * FROM boats b, states s, boat_people bp, countries c, provinces pr, cities ct1, cities ct2, ports p, addresses a, translations t, element_types et WHERE s.name = 'Confirmed' AND bp.id = '2' AND b.state_id = s.id AND b.id NOT IN (SELECT bc.boat_id FROM boat_calendars bc WHERE (date(bc.since) <= '2015-02-09 09:23:00 +0100' AND date(bc.until) >= '2015-02-09 09:23:00 +0100') OR (date(bc.since) <= '2015-02-10 09:23:00 +0100' AND date(bc.until) >= '2015-02-10 09:23:00 +0100')) AND b.people_capacity_id >= bp.id AND c.id = (SELECT DISTINCT t.element_id FROM translations t, element_types et WHERE t.element_translation = 'Spain' AND et.name = 'Country' AND t.element_type_id = et.id) AND pr.country_id = c.id AND pr.id = (SELECT DISTINCT t.element_id FROM translations t, element_types et WHERE t.element_translation = 'Mallorca' AND et.name = 'Province' AND t.element_type_id = et.id) AND ((ct1.province_id = pr.id AND p.city_id = ct1.id AND b.port_id = p.id) OR (ct2.province_id = pr.id AND a.city_id = ct2.id AND b.address_id = a.id)); ``` Basically, it tries to get all the boats that are not already booked in Confirmed state and that are in a province and a country, i.e. Mallorca, Spain. Please, let me know if you need some more details about the purpose of the query or the model.
> Remove \* from the select clause; give column names in the select clause instead. It will increase performance somewhat. It's one way to optimize.
Instead of having a subquery, use a **`LEFT JOIN ... IS NULL`** anti-join (just google for it) and it will help a lot.
SQL optimization (MySQL)
[ "", "mysql", "sql", "" ]
How to join table1 and table2 (containing dimensions and their values) to get the following results: ![enter image description here](https://i.stack.imgur.com/RPjJj.png) **Update** Thank you for proposing the way with PIVOT. Just for curiosity - is it possible to solve it by several left joins? **Update 2** I would like to share with you my idea of solving the problem with several left joins. It is very simple and much faster than Pivot way. Below you may find my answer.
My experience has worked out this code which I want to share with you. The key is to assign new alias to dimension table every time we make a left join. ``` SELECT T1.ID ,S.DimensionValue AS Sex ,E.DimensionValue AS Eyes ,WT.DimensionValue AS [Worker Type] FROM #Table1 AS T1 LEFT JOIN #Table2 AS S -- new alias here for dimension table ON T1.ID=S.ID AND S.Dimension='sex' LEFT JOIN #Table2 AS E -- new alias here for dimension table ON T1.ID=E.ID AND E.Dimension='eyes' LEFT JOIN #Table2 AS WT -- new alias here for dimension table ON T1.ID=WT.ID AND WT.Dimension='worker type' ```
``` declare @table1 table ( id int, salary int ) insert into @table1 values (1, 1000), (2, 2000) declare @table2 table ( id int, dimension varchar(1000), dimensionValue varchar(1000) ) insert into @table2 values (1, 'eyes','blue'), (1, 'sex','male'), (1, 'worker type','marvelous'), (2, 'eyes','brown'), (2, 'sex', 'female'), (2, 'worker type','spectacular') ``` query with pivot ``` SELECT t2.*, t1.salary FROM @table1 t1 join (SELECT * FROM @table2) AS SourceTable PIVOT ( max(dimensionValue) FOR dimension IN ([eyes],[worker type],[sex]) ) AS T2 on T2.id = t1.id ```
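Both approaches can be verified quickly; here is the left-join-per-dimension variant, which SQLite (having no `PIVOT`) supports directly, using the sample data above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, salary INTEGER);
CREATE TABLE table2 (id INTEGER, dimension TEXT, dimensionValue TEXT);
INSERT INTO table1 VALUES (1, 1000), (2, 2000);
INSERT INTO table2 VALUES
  (1, 'eyes', 'blue'),  (1, 'sex', 'male'),   (1, 'worker type', 'marvelous'),
  (2, 'eyes', 'brown'), (2, 'sex', 'female'), (2, 'worker type', 'spectacular');
""")
rows = conn.execute("""
    SELECT t1.id,
           s.dimensionValue  AS sex,
           e.dimensionValue  AS eyes,
           wt.dimensionValue AS worker_type,
           t1.salary
    FROM table1 t1
    LEFT JOIN table2 s  ON t1.id = s.id  AND s.dimension  = 'sex'
    LEFT JOIN table2 e  ON t1.id = e.id  AND e.dimension  = 'eyes'
    LEFT JOIN table2 wt ON t1.id = wt.id AND wt.dimension = 'worker type'
    ORDER BY t1.id
""").fetchall()
print(rows)
# [(1, 'male', 'blue', 'marvelous', 1000), (2, 'female', 'brown', 'spectacular', 2000)]
```

Each dimension gets its own alias of the dimension table, so a missing dimension simply yields NULL instead of dropping the row.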
SQL joins with tables that contain dimensions
[ "", "sql", "join", "sql-server-2012", "" ]
I have write a series of SQL statements in an ORACLE database. In my database, I have 2 tables (book, publisher). Below is the table structure ``` BOOK ---------- bk_id | title | pub_id PUBLISHER ---------- pub_id | pub_name ``` If I insert into the publisher table first ``` INSERT INTO PUBLISHER (pub_name) VALUE ('ABC Publisher'); ``` How do I retrieve the id of the publisher and enter it into the book table? I usually do this with a stored procedure (SQL Server) or do it in the application. How can I do it in ORACLE in SQL?
From PL/SQL, you might want to use the [`RETURNING INTO`](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/returninginto_clause.htm) clause to get back the newly inserted id: ``` DECLARE my_id int; BEGIN INSERT INTO PUBLISHER (pub_name) VALUE ('ABC Publisher') RETURNING id INTO my_id; ... END; ``` where `my_id` is a PL/SQL variable declared accordingly to your column type.
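For comparison, here is the same insert-then-reference flow sketched in SQLite through Python's `sqlite3`, where `lastrowid` plays the role that Oracle's `RETURNING ... INTO` plays above (the schema is the simplified one from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE publisher (pub_id INTEGER PRIMARY KEY, pub_name TEXT);
CREATE TABLE book (bk_id INTEGER PRIMARY KEY, title TEXT,
                   pub_id INTEGER REFERENCES publisher);
""")

cur = conn.execute("INSERT INTO publisher (pub_name) VALUES ('ABC Publisher')")
new_pub_id = cur.lastrowid   # the generated key, as RETURNING ... INTO would give
conn.execute("INSERT INTO book (title, pub_id) VALUES ('First Book', ?)",
             (new_pub_id,))

stored = conn.execute(
    "SELECT pub_id FROM book WHERE title = 'First Book'"
).fetchone()[0]
```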
I think the easiest way would be to create a `Trigger` that inserts into the other `table` after each insert. ``` create or replace trigger tr_ai_publisher after insert on publisher for each row begin --Here you can access the new publisher id using :new.pub_id end; ``` That way, you would not have to handle the call yourself. However, if you really want to, you can also use a stored procedure in `ORACLE`; the general syntax is ``` CREATE [OR REPLACE] PROCEDURE proc_name [list of parameters] IS Declaration section BEGIN Execution section EXCEPTION Exception section END; ```
ORACLE SQL: Inserting another ID from another row
[ "", "sql", "oracle", "" ]
I have the following SQL query: ``` SELECT column FROM table WHERE column IN ('item1','item2','item3') ``` Its result contains item1 and item2. How can I get the non-existent argument (**item3**)? Is it possible? **EDIT:** I have an array of items. Every minute some new items are added to the array. So I should get the items that are new and don't exist in the table. After that I can do my processing and insert the new items into the table. Thanks
The following query will return the items that don't exist in MyTable: ``` WITH B AS ( SELECT 'Item1' AS col UNION ALL SELECT 'Item2' UNION ALL SELECT 'Item3' ) SELECT B.col FROM B WHERE NOT EXISTS ( SELECT * FROM MyTable T WHERE T.col = B.col ) ``` EDIT: Because building such a select statement on the client side can be tedious and dangerous (you have to take care of SQL injection and formatting issues), I suggest you use a table-valued function like the following: ``` CREATE TABLE Items ( item nvarchar(128) PRIMARY KEY ) GO CREATE FUNCTION GetNonExistingItems( @Items xml ) RETURNS TABLE AS RETURN WITH B AS ( SELECT c.value('.', 'nvarchar(128)') As item FROM @items.nodes('items/item') T(c) ) SELECT B.item FROM B WHERE NOT EXISTS ( SELECT * FROM Items I WHERE B.item = i.item ) GO DECLARE @items XML = N' <items> <item>Item1</item> <item>Item2</item> <item>Item5</item> </items>' SELECT * FROM GetNonExistingItems (@Items) ```
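The `NOT EXISTS` against a derived table of candidate values can be sketched like this in SQLite via Python's `sqlite3` (a `VALUES` CTE stands in for the `UNION ALL` list; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (item TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO items VALUES (?)", [("item1",), ("item2",)])

# The candidate list becomes a derived table; NOT EXISTS keeps only missing values.
missing = conn.execute("""
    WITH b(item) AS (VALUES ('item1'), ('item2'), ('item3'))
    SELECT b.item
    FROM b
    WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.item = b.item)
""").fetchall()
```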
You can do it if you translate your array to table like this: ``` Select * From ( select 'item1' As Column union select 'item2' As Column union select 'item3' As Column ) l left join Table t on l.Column = t.Column Where t.Column is NULL ``` Version without union: ``` Select * From ( VALUES ('Item1'), ('Item2'), ('Item3') ) As l(Column) left join Table t on l.Column = t.Column Where t.Column is NULL ```
Get non existence arguments of SQL IN operator
[ "", "sql", "sql-server", "in-operator", "" ]
Is it possible to join tables, when one or more tables does not even exist? Consider this use case: You are using some system that has a certain DB scheme out of the box, but allows you to create your own custom tables as well. It is possible to run a query of some kind that includes custom tables, but will also run without errors for someone who does not have these custom tables set up? Or what is the most elegant way to achieve this without having to maintain different versions of your queries? edit: especially for Sybase ASE, but I am also interested in other dbms.
You could do something like this: ``` IF EXISTS (Select * from sysobjects where name = 'tblname') Begin Select * from tbl End Else Begin --Do something else End ``` Basically check the table exists and run the query if it does, if it doesn't then do something else.
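The same check-then-query idea can be sketched against SQLite's catalog table via Python's `sqlite3` (in Sybase/SQL Server the catalog is `sysobjects`; here it is `sqlite_master`; the helper name below is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE present (id INTEGER)")

def rows_if_table_exists(conn, table):
    # sqlite_master is SQLite's catalog; sysobjects plays this role elsewhere.
    found = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?", (table,)
    ).fetchone()
    if found is None:
        return None  # the "do something else" branch
    # Interpolation is acceptable only because the name was verified in the catalog.
    return conn.execute(f"SELECT * FROM {table}").fetchall()

have = rows_if_table_exists(conn, "present")
miss = rows_if_table_exists(conn, "absent")
```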
In most RDBMS, the query execution looks something like this: * Parser + Name resolution (this checks the referenced objects, binds aliases, etc) + Type derivation (Determines the final types in the resultset) + Aggregate binding (Determines if any aggregation is required) + Group binding (Binds the aggregation to the select list) * Algebrizer * Optimizer * Execution Because of the steps of query execution, the RDBMS always checks the referenced objects (during the name resolution step); the actual query processing (which executes any user-defined checks - e.g. an IF statement) is executed after that. (This stands for Data Manipulation, not for Data Definition). **To make it short:** In most RDBMS you can not do this, except with dynamic queries. In my opinion, dynamic queries are sometimes helpful, but it is not an easy task to maintain and debug them, so be careful, especially if the query generation is based on complex logic. Please keep in mind that most client applications accept only a predefined resultset (this includes the columns and their types), so working with dynamic queries could be a problem in the client application too. For Sybase, you can read more about query processing in its online documentation: [Performance and Tuning Series: Query Processing and Abstract Plans](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00743.1570/html/queryprocessing/CEGGFDBH.htm)
Join on table that does not exist
[ "", "sql", "database", "join", "sybase", "sap-ase", "" ]
I have Table A ``` ============ | id | val | =====+====== | 1 | abc | | 1 | def | | 2 | ghi | | 2 | jkl | ============ ``` I have Table B ``` ============ | id | val2 | =====+====== | 1 | rty | | 1 | vbn | | 2 | uio | | 2 | zxc | ============ ``` I want to display the two tables like this.. ``` =================== | id | val | val2 | =====+=====+======= | 1 | abc | rty | | 1 | def | vbn | | 2 | ghi | uio | | 2 | jkl | zxc | =================== ``` my problem is i am having redundancy..
I like Gordon's approach, since it doesn't assume the same number of rows in each table, but here's a `JOIN` version: ``` SELECT a.id,a.val,b.val2 FROM (SELECT @row_number:=@row_number+1 AS row_number ,id,val FROM Table1 a cross join (select @row_number := 0) b )a JOIN (SELECT @row_number2:=@row_number2+1 AS row_number2 ,id,val2 FROM Table2 a cross join (select @row_number2 := 0) b )b ON a.Row_Number = b.Row_Number2 AND a.id = b.id ``` Here's a working version of his `UNION` version: ``` SELECT Row_Number,ID,MAX(Val) AS Val,MAX(Val2) AS Val2 FROM (SELECT @row_number:=@row_number+1 AS row_number ,id,val,NULL as Val2 FROM Table1 a cross join (select @row_number := 0) b UNION ALL SELECT @row_number2:=@row_number2+1 AS row_number ,id,NULL,val2 FROM Table2 a cross join (select @row_number2 := 0) b )sub GROUP BY Row_Number,ID ``` Demo of both: [SQL Fiddle](http://www.sqlfiddle.com/#!2/223aa/9/0)
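Both answers emulate `ROW_NUMBER` with user variables because older MySQL lacked window functions. On engines that have them (MySQL 8+, SQLite 3.25+), the pairing can be written directly; below is a sketch using SQLite via Python's `sqlite3`, with an explicit `ORDER BY val` standing in for the insertion order that the variable-based version relied on implicitly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER, val  TEXT);
CREATE TABLE b (id INTEGER, val2 TEXT);
INSERT INTO a VALUES (1,'abc'), (1,'def'), (2,'ghi'), (2,'jkl');
INSERT INTO b VALUES (1,'rty'), (1,'vbn'), (2,'uio'), (2,'zxc');
""")

# Number the rows per id in each table, then join on (id, row number).
rows = conn.execute("""
    SELECT a.id, a.val, b.val2
    FROM (SELECT id, val,
                 ROW_NUMBER() OVER (PARTITION BY id ORDER BY val)  AS rn FROM a) a
    JOIN (SELECT id, val2,
                 ROW_NUMBER() OVER (PARTITION BY id ORDER BY val2) AS rn FROM b) b
      ON a.id = b.id AND a.rn = b.rn
    ORDER BY a.id, a.rn
""").fetchall()
```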
Yes, you have a problem because you don't have a proper `join` key. You can do this by using variables to create one. Something like this will work for the data you provide: ``` select min(id), max(aval), max(bval) from ((select id, val as aval, NULL as bval, @rna := @rna + 1 as seqnum from tablea a cross join (select @rna := 0) ) union all (select id, NULL val, @rnb := @rnb + 1 as seqnum from tableb b cross join (select @rnb := 0) ) ) ab group by seqnum; ```
display two tables into one using select sql
[ "", "mysql", "sql", "join", "" ]
I am trying to dynamically delete the table by using the parameter. I am writing the below code, the code is running succesfully but its not deleting the table. Can someone please help me on this. Insights is the database name here. ``` DECLARE @DQ VARCHAR( MAX ) Declare @DB varchar(256) SET @db = @Insights SELECT @DQ=' IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'''+ @db + '[tablename]'',N''U'')) DROP TABLE ' + @db + '[tablename]' EXEC(@DQ) SELECT @db ``` Regards, Ratan
You can do it like: ``` DECLARE @dq VARCHAR(MAX) DECLARE @db VARCHAR(256) = 'Databasename' DECLARE @schema VARCHAR(256) = 'dbo' DECLARE @tb VARCHAR(256) = 'TableName' SELECT @dq = ' IF EXISTS (SELECT * FROM ' + @db + '.sys.objects WHERE object_id = OBJECT_ID(N''' + @db + '.' + @schema + '.' + @tb + ''',N''U'')) DROP TABLE ' + @db + '.' + @schema + '.' + @tb PRINT @dq EXEC(@dq) ``` Look at how I am checking for existance of object `IF EXISTS (SELECT * FROM ' + @db + '.sys.objects`. Also in drop you should specify schema name or just double dot if that table is in default schema. `DROP TABLE ' + @db + '..[TableName]'`
``` DECLARE @TableName SYSNAME; DECLARE @DBname SYSNAME; DECLARE @Schema SYSNAME; --<-- I would add this too DECLARE @Sql NVARCHAR(MAX); SET @DBname = N'Test_DB'; SET @TableName = N'Test_Table'; SET @Schema = N'dbo'; SET @Sql = N'Use [master]' + N'IF OBJECT_ID('''+ QUOTENAME(@DBname)+ '.'+ QUOTENAME(@Schema) +'.' + QUOTENAME(@TableName)+ ''') IS NOT NULL ' + N'DROP TABLE ' + QUOTENAME(@DBname)+ '.'+ QUOTENAME(@Schema) +'.' + QUOTENAME(@TableName) PRINT @Sql -- exec sp_executesql @Sql ```
Dynamically delete the table by using parameter
[ "", "sql", "sql-server", "" ]
I have this table (say TABLE1): ``` ID1 | ID2 | NAME ``` where (ID1, ID2) is the composite PK. And this another table (say TABLE2): ``` ID | COD1 | COD2 | DATA | INDEX ``` where ID is the PK. I need to join this tables on `((TABLE1.ID1 = TABLE2.COD1) AND (TABLE1.ID2 = TABLE2.COD2))` My problem is that, for each ID of TABLE2, I have many tuples with different INDEX. I only want join the tuple that its INDEX is the MAX of its group (COD1, COD2). For instance, if I have: ``` ID1|ID2|NAME 10 10 JOSH ID|COD1|COD2|DATA|INDEX 1 10 10 YES 0 2 10 10 NO 1 3 11 10 OH 0 ``` I want to get: ``` ID1|ID2|NAME|DATA 10 10 JOSH NO ``` I have tried this but it doesn't work: ``` SELECT ID1, ID2, NAME, DATA FROM TABLE1 T1 JOIN TABLE2 T2 ON T1.ID1 = T2.COD1 AND T1.ID2 = T2.COD2 GROUP BY ID1, ID2, NAME, DATA HAVING INDEX = MAX(INDEX) ``` Thanks.
I have solved it this way: ``` SELECT ... FROM TABLE1 JOIN (SELECT ID1, ID2, NAME, DATA FROM TABLE1 T1 JOIN TABLE2 T2 ON T1.ID1 = T2.COD1 AND T1.ID2 = T2.COD2 GROUP BY ID1, ID2, NAME, DATA HAVING INDEX = (SELECT MAX(INDEX) FROM TABLE2 WHERE T1.ID1 = TABLE2.COD1 AND T1.ID2 = TABLE2.COD2)) X ON ... ``` Thanks!
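The correlated-`MAX` version of this idea can be sketched on the question's sample data using SQLite via Python's `sqlite3` (the `INDEX` column is renamed `idx` here because `INDEX` is a reserved word in most dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id1 INTEGER, id2 INTEGER, name TEXT);
CREATE TABLE t2 (id INTEGER, cod1 INTEGER, cod2 INTEGER, data TEXT, idx INTEGER);
INSERT INTO t1 VALUES (10, 10, 'JOSH');
INSERT INTO t2 VALUES (1, 10, 10, 'YES', 0), (2, 10, 10, 'NO', 1),
                      (3, 11, 10, 'OH', 0);
""")

# Keep only the t2 row that holds the MAX(idx) of its (cod1, cod2) group.
rows = conn.execute("""
    SELECT t1.id1, t1.id2, t1.name, t2.data
    FROM t1
    JOIN t2 ON t1.id1 = t2.cod1 AND t1.id2 = t2.cod2
    WHERE t2.idx = (SELECT MAX(x.idx) FROM t2 x
                    WHERE x.cod1 = t2.cod1 AND x.cod2 = t2.cod2)
""").fetchall()
```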
This is the generic construct. ``` select field1,field2, etc from yourtables join (select field1, max(something) themax from table1 where whatever group by field1) temp on table1.something = themax and table1.field1 = temp.field1 where whatever ``` The two "where whatevers" should be the same. You should be able to take it from here.
Oracle SQL: GROUP BY and HAVING clause
[ "", "sql", "oracle", "join", "" ]
I want to find most frequent product each customer has purchased. my data set is like this : ``` CustomerID ProdID FavouriteProduct 1 A ? 1 A ? 1 A ? 1 B ? 1 A ? 1 A ? 1 A ? 1 B ? 2 A ? 2 AN ? 2 G ? 2 C ? 2 C ? 2 F ? 2 D ? 2 C ? ``` There are so many products,So i cannot put them in a pivot table. Answer would look like this : ``` CustomerID ProdID FavouriteProduct 1 A A 1 A A 1 A A 1 B A 1 A A 1 A A 1 A A 1 B A 2 A C 2 AN C 2 G C 2 C C 2 C C 2 F C 2 D C 2 C C ``` The query may look like this: ``` Update table set FavouriteProduct = (Select CustomerID, Product, Max(Count(Product)) From Table group by CustomerID, Product) FP ```
Thanks to Nick, I found a way to find the most frequent value. I'll share how it works: ``` Select CustomerID,ProductID,Count(*) as Number from table A group by CustomerID,ProductID having Count(*)>= (Select Max(Number) from (Select CustomerID,ProductID,Count(*) as Number from table B where B.CustomerID= A.CustomerID group by CustomerID,ProductID)C) ```
Another way to get the most frequent product is to use `row_number()`: ``` select customerid, productid, max(case when seqnum = 1 then productid end) over (partition by customerid) as favoriteproductid from (select customerid, productid, count(*) as cnt, row_number() over (partition by customerid order by count(*) desc) as seqnum from customer c group by customerid, productid ) cp; ```
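The count-then-rank idea behind both answers can be sketched on the sample data with SQLite via Python's `sqlite3` (requires SQLite 3.25+ for window functions; the tie-break on `prod_id` is an added assumption to make the result deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (customer_id INTEGER, prod_id TEXT)")
conn.executemany("INSERT INTO purchases VALUES (?, ?)",
                 [(1, 'A')] * 6 + [(1, 'B')] * 2 +
                 [(2, 'A'), (2, 'AN'), (2, 'G'), (2, 'C'),
                  (2, 'C'), (2, 'F'), (2, 'D'), (2, 'C')])

# Count purchases per (customer, product), then rank the counts per customer.
fav = conn.execute("""
    WITH counts AS (
        SELECT customer_id, prod_id, COUNT(*) AS n
        FROM purchases
        GROUP BY customer_id, prod_id
    )
    SELECT customer_id, prod_id
    FROM (SELECT customer_id, prod_id,
                 ROW_NUMBER() OVER (PARTITION BY customer_id
                                    ORDER BY n DESC, prod_id) AS rn
          FROM counts)
    WHERE rn = 1
    ORDER BY customer_id
""").fetchall()
```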
Finding the most frequent value in sql server 2012
[ "", "sql", "sql-server", "sql-server-2012", "pivot", "" ]
I'm new to database development and I've a stored procedure where I need to do some INNER JOINs to tables only when a condition is met. Below is a shorter version of my complex query. ``` IF (testFunction(@UserId)) = 0 select C.AwardId, C.ProgramName, Count(ClientId) as Intakes from Client C where C.Id = @ClientId group by C.AwardId, C.ProgramName... ELSE select C.AwardId, C.ProgramName, Count(ClientId) as Intakes from Client C INNER JOIN UserRoleEntity URE ON C.AwardId = URE.EntityId INNER JOIN UserRole UR ON URE.UserRoleId = UR.Id AND UR.CDPUserId = @UserId where C.Id = @ClientId group by C.AwardId, C.ProgramName... END ``` Basically, its the same SELECT query in both "IF" and "ELSE" except that I need to do an inner join on 2 tables when the IF condition is not met. Instead of repeating the query in both "IF" and "ELSE", is there a way to make it a single query without going for a Dynamic query? Thank You!
You can use additional `WHERE` criteria and `LEFT JOIN` to filter conditionally: ``` SELECT C.AwardId, C.ProgramName, Count(ClientId) as Intakes FROM Client C LEFT JOIN UserRoleEntity URE ON C.AwardId = URE.EntityId LEFT JOIN UserRole UR ON URE.UserRoleId = UR.Id AND UR.CDPUserId = @UserId WHERE C.Id = @ClientId AND ( testFunction(@UserId) = 0 OR (testFunction(@UserId) <> 0 AND URE.AwardID IS NOT NULL AND UR.ID IS NOT NULL)) GROUP BY C.AwardId, C.ProgramName ```
If I got your query right, you want to display all field even `CDPUserID` is not exist or `0` right? You just need `FULL JOIN` in second join : ``` SELECT C.AwardId, C.ProgramName, Count(ClientId) as Intakes FROM Client C INNER JOIN UserRoleEntity URE ON C.AwardId = URE.EntityId FULL JOIN UserRole UR ON URE.UserRoleId = UR.Id AND UR.CDPUserId = @UserId WHERE C.Id = @ClientId GROUP BY C.AwardId, C.ProgramName... ```
SQL Server - Add conditional inner join
[ "", "sql", "sql-server", "inner-join", "" ]
I need to use Oracle but DATEDIFF function doesn't work in Oracle DB. How to write the following code in Oracle? I saw some examples using INTERVAL or TRUNC. ``` SELECT DATEDIFF ('2000-01-01','2000-01-02') AS DateDiff; ```
In Oracle, you can simply subtract two dates and get the difference in **days**. Also note that unlike SQL Server or MySQL, in Oracle you cannot perform a `select` statement without a `from` clause. One way around this is to use the builtin dummy table, `dual`: ``` SELECT TO_DATE('2000-01-02', 'YYYY-MM-DD') - TO_DATE('2000-01-01', 'YYYY-MM-DD') AS DateDiff FROM dual ```
Just subtract the two dates: ``` select date '2000-01-02' - date '2000-01-01' as dateDiff from dual; ``` The result will be the difference in days. More details are in the manual: <https://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements001.htm#i48042>
DATEDIFF function in Oracle
[ "", "sql", "oracle", "select", "datediff", "" ]
This is my SQL code in MySQL: ``` create table Table1 ( id varchar(2), name varchar(2), PRIMARY KEY (id) ); Create table Table1_Addr ( addid varchar(2), Address varchar(2), PRIMARY KEY (addid) ); Create table Table1_sal ( salid varchar(2), addid varchar(2), id varchar(2), PRIMARY KEY (salid), index(addid), index(id), FOREIGN KEY (addid) REFERENCES Table1_Addr(addid), FOREIGN KEY (id) REFERENCES Table1(id) ) ``` Table1 and Table1\_Addr are two tables that Table1\_sal has two foreign keys to these two tables. My problem is that I can't change id field of Table1 and addid field of Table1\_Addr The error is: ``` #1451 - Cannot delete or update a parent row: a foreign key constraint fails (`avl`.`table1_sal`, CONSTRAINT `table1_sal_ibfk_2` FOREIGN KEY (`id`) REFERENCES `table1` (`id`)) ``` Edited-The answer is: ``` create table Table1 ( id varchar(2), name varchar(2), PRIMARY KEY (id) ); Create table Table1_Addr ( addid varchar(2), Address varchar(2), PRIMARY KEY (addid) ); Create table Table1_sal ( salid varchar(2), addid varchar(2), id varchar(2), PRIMARY KEY (salid), index(addid), index(id), FOREIGN KEY (addid) REFERENCES Table1_Addr(addid)ON UPDATE CASCADE ON DELETE CASCADE, FOREIGN KEY (id) REFERENCES Table1(id)ON UPDATE CASCADE ON DELETE CASCADE ) ```
Add `ON UPDATE CASCADE` after your foreign key definitions ``` FOREIGN KEY (addid) REFERENCES Table1_Addr(addid) ON UPDATE CASCADE ON DELETE CASCADE, FOREIGN KEY (id) REFERENCES Table1(id) ON UPDATE CASCADE ON DELETE CASCADE ```
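The effect of `ON UPDATE CASCADE` can be sketched in SQLite via Python's `sqlite3` (SQLite needs `PRAGMA foreign_keys = ON` per connection; the two-table schema is a stripped-down stand-in for the question's tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is on
conn.executescript("""
CREATE TABLE parent (id TEXT PRIMARY KEY);
CREATE TABLE child  (id TEXT REFERENCES parent(id)
                     ON UPDATE CASCADE ON DELETE CASCADE);
INSERT INTO parent VALUES ('1');
INSERT INTO child  VALUES ('1');
""")

# Changing the parent key no longer fails; the child row is updated along with it.
conn.execute("UPDATE parent SET id = '9' WHERE id = '1'")
child_id = conn.execute("SELECT id FROM child").fetchone()[0]
```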
You have to set the foreign key to On Update cascade.Then all child rows will also be updated ``` Create table Table1_sal ( salid varchar(2), addid varchar(2), id varchar(2), PRIMARY KEY (salid), index(addid), index(id), FOREIGN KEY (addid) REFERENCES Table1_Addr(addid) On delete cascade On Update cascade , FOREIGN KEY (id) REFERENCES Table1(id) On delete cascade On Update cascade ) ```
I can't edit table that another table has a foreign key of it in mysql
[ "", "mysql", "sql", "foreign-keys", "" ]
I'm working with SQL Server Database, and I have one problem like this. To prevent the users from entering any Null or Zero value into the fields, unfortunately, I have to manage this validation in SQL. So from the table below, I have ALINUT\_Value (last column - always 10 records), so my question is how to check whether this column contains JUST NULLs and Zeros (not any other value) in a SQL Select? ``` ----------------------------------------------------- ALINUT\_NUT\_Id, ALINUT\_Id, ALINUT\_ALI\_Id, ALINUT\_Value ----------------------------------------------------- 1 200463 18822 0.0000 2 200464 18822 0.0000 3 200466 18822 NULL 4 200465 18822 0.0000 5 200467 18822 NULL 6 200468 18822 NULL 7 200469 18822 NULL 8 200462 18822 0.0000 9 200461 18822 0.0000 10 200470 18822 NULL ``` Another new point for me in SQL: I have a list of products; each product contains 10 lines of ALINUT\_value (last column). The result that I wish to have is all the products that: * Products with only null for ALINUT\_Value column * Products with only zero for ALINUT\_Value column * Products with both null and zero for ALINUT\_Value column * Ignore products that contain values other than just zero and null This is my table: ``` PRD\_ID, ALI\_Id, ALI\_ALISRC\_Id, ALINUT\_NUT\_Id, ALINUT\_ALI\_Id, ALINUT\_Value 263 14177 2 1 14177 30.0000 263 14177 2 2 14177 40.0000 263 14177 2 3 14177 60.0000 263 14177 2 4 14177 50.0000 263 14177 2 5 14177 47.0000 263 14177 2 6 14177 80.0000 263 14177 2 7 14177 90.0000 263 14177 2 8 14177 20.0000 263 14177 2 9 14177 10.0000 263 14177 2 10 14177 NULL 304 16880 2 1 16880 NULL 304 16880 2 2 16880 NULL 304 16880 2 3 16880 NULL 304 16880 2 4 16880 NULL 304 16880 2 5 16880 NULL 304 16880 2 6 16880 NULL 304 16880 2 7 16880 NULL 304 16880 2 8 16880 NULL 304 16880 2 9 16880 NULL 304 16880 2 10 16880 NULL 305 16880 2 1 16881 NULL 305 16880 2 2 16881 0 305 16880 2 3 16881 NULL 305 16880 2 4 16881 NULL 305 16880 2 5 16881 0 305 16880 2 6 16881 NULL 305 16880 2 7 16881 NULL 305 16880 2 8 16881 0 305 16880 2 9 16881 NULL 305 16880 2 10 16881 NULL ``` What I have tried so far: ``` SELECT COUNT(Product.PRD_ID) AS COUNTCOLUMNPRD ,PRD_ID,ALISRC_Name,ALINUT_Value FROM Product INNER JOIN Aliment ON ALI_Id = PRD_ALI_Id INNER JOIN AlimentNutrient on ALI_Id = ALINUT_ALI_Id LEFT OUTER JOIN AlimentSource ON ALISRC_Id = ALI_ALISRC_Id WHERE ALISRC_ALISRCT_Id = 2 and ALINUT_Value = 0 OR ALINUT_Value IS NULL GROUP BY PRD_ID,ALISRC_Name,ALINUT_Value Having count(Product.PRD_ID) = 10 ``` But it doesn't get the products which have both null and values in alinut_value column Thanks,
``` if ( not exists (select 1 from MyTable where isnull(ALINUT_Value,0) <> 0) ) begin print 'ok' end ```
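The grouping condition the question is really after ("keep products whose values are all NULL and/or zero") can be sketched like this in SQLite via Python's `sqlite3`, with two rows per product instead of ten and made-up column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vals (prd_id INTEGER, alinut_value REAL)")
conn.executemany("INSERT INTO vals VALUES (?, ?)", [
    (263, 30.0), (263, None),   # has a real value -> excluded
    (304, None), (304, None),   # all NULL         -> wanted
    (305, 0.0),  (305, None),   # mix of 0 and NULL -> wanted
])

# A product qualifies when no row carries anything but NULL or zero.
only_null_or_zero = conn.execute("""
    SELECT prd_id
    FROM vals
    GROUP BY prd_id
    HAVING SUM(CASE WHEN alinut_value IS NOT NULL
                     AND alinut_value <> 0 THEN 1 ELSE 0 END) = 0
    ORDER BY prd_id
""").fetchall()
```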
Answer to initial question: ``` Select * from yourTable Where ALINUT_Value Is Null or ALINUT_Value = 0 ```
Get two types of value in the same column SQL
[ "", "sql", "sql-server", "select", "" ]
How to merge these two queries? i) ``` SELECT runId, Runs.prodId, prodDate, prodName, buildNumber, totalCount as TotalTestCases, (passCount*100)/(passCount+failCount) as PassPercent, passCount, failCount, runOwner FROM Runs,Product WHERE Runs.prodId=Product.prodId ``` ii) ``` SELECT (CAST(counts.Count as decimal(10,4)) / CAST(failCount as decimal(10,4))) as PercentAnalysed FROM Runs LEFT JOIN (SELECT COUNT(*) AS 'Count', runId FROM Results WHERE Analysed = 'True' GROUP BY runId )counts on counts.runId = Runs.runId ``` I tried this : ``` SELECT Runs.runId, Runs.prodId, prodDate,prodName, buildNumber, totalCount as TotalTestCases, (passCount*100)/(passCount+failCount) as PassPercent, passCount, failCount, runOwner, counts.runId, (cast(counts.Count as decimal(10,4)) / cast(failCount as decimal(10,4))) as PercentAnalysed FROM Runs,Product LEFT JOIN (SELECT COUNT(*) AS 'Count', runId FROM Results WHERE Analysed = 'True' GROUP BY runId ) counts on counts.runId = Runs.runId WHERE Runs.prodId=Product.prodId ``` but it gives an error. Individually, both the queries run fine. Also,the number of rows returned by both of the queries are the same, so that isn't the issue. The error is: > "Msg 4104, Level 16, State 1, Line 13 The multi-part identifier > "Runs.runId" could not be bound."
Use `Inner Join` to join Runs and Products table. ``` select Runs.runId, Runs.prodId, prodDate, prodName, buildNumber, totalCount as TotalTestCases, (passCount*100)/(passCount+failCount) as PassPercent, passCount, failCount, runOwner, counts.runId, (cast(counts.Count as decimal(10,4)) / cast(failCount as decimal(10,4))) as PercentAnalysed from Runs AS Runs Inner Join Product AS Product On Runs.prodId=Product.prodId left join ( SELECT COUNT(*) AS 'Count', runId FROM Results WHERE Analysed = 'True' GROUP BY runId ) counts on counts.runId = Runs.runId ```
if your two first queries are working properly then you can use both of them in a query like below and it should word properly (you just need to have a join condition) ``` select * from --select columns you want -- Query i ( select runId,Runs.prodId,prodDate,prodName,buildNumber,totalCount as TotalTestCases,(passCount*100)/(passCount+failCount) as PassPercent, passCount,failCount,runOwner from Runs,Product where Runs.prodId=Product.prodId ) qi --you need to have a join column inside it i.e runId join -- Query ii ( select runId , (cast(counts.Count as decimal(10,4)) / cast(failCount as decimal(10,4))) as PercentAnalysed from Runs left join ( SELECT COUNT(*) AS 'Count', runId FROM Results WHERE Analysed = 'True' GROUP BY runId ) counts on counts.runId = Runs.runId ) qii --you need to have a join column inside it. i.e runId on qi.runId=qii.runId ```
Merge two SQL Server Queries
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have written query but I'm not getting latest updated data. I mean I have psn update column , I want current positiondata of latest updated date , according to "latestpsnupdate" I need currentposition. ``` SELECT k.KeyWord, pa.UserName, pa.ProjId, k.KeywordID, kp.currentposition, kp.PsnUpdateDate FROM Tbl_ProjAssignment pa INNER JOIN Tbl_keywords k on k.KeywordID=pa.KeywordID INNER JOIN seo.Tbl_KeywordPosition kp ON kp.keywordid = pa.keywordid WHERE k.ProjId='105' ORDER BY PsnUpdateDate DESC ``` o/p ``` keyword UserName ProjId keywordid currentposition PsnUpdateDa ventis123 Tester 105 1453 6 2015-03-11 ventis123 krishna 105 1453 6 2015-03-11 annuity Tester 105 1449 8 2015-01-30 annuity Tester 105 1449 7 2015-01-27 ventissitnev Tester 105 1452 6 2015-01-16 annuity Tester 105 1449 55 2015-01-15 annuity Tester 105 1449 6 2015-01-08 annuity Tester 105 1449 10 2014-12-13 ``` o/p needed desired o/p ``` keyword UserName ProjId keywordid currentposition PsnUpdateDa ventis123 Tester 105 1453 6 2015-03-11 annuity Tester 105 1449 8 2015-01-30 ventissitnev Tester 105 1452 6 2015-01-16 annuity Tester 105 1449 55 2015-01-15 ```
Use as follows: ``` ;WITH CTE AS ( SELECT k.KeyWord , pa.UserName , pa.ProjId , k.KeywordID , kp.currentposition , kp.PsnUpdateDate , ROW_NUMBER() OVER ( PARTITION BY k.keyword ORDER BY kp.PsnUpdateDate DESC ) AS positiondata FROM Tbl_ProjAssignment pa INNER JOIN Tbl_keywords k ON k.KeywordID = pa.KeywordID INNER JOIN seo.Tbl_KeywordPosition kp ON kp.keywordid = pa.keywordid ) SELECT * FROM CTE WHERE ProjId = '105' AND positiondata = 1 ```
Apart from Row\_number we can approach like this also ..just an sample example ``` declare @t table (Keyword varchar(10), UserName varchar(10),ProjId int,keywordid int,perdate date ) insert into @t(Keyword,UserName,ProjId,keywordid,perdate)values ('ventis123','Tester',105,1453,'2015-03-11') insert into @t(Keyword,UserName,ProjId,keywordid,perdate)values ('ventis123','Tester',105,1453,'2015-03-11') insert into @t(Keyword,UserName,ProjId,keywordid,perdate)values ('annuity','Tester',105,1449,'2015-01-30') insert into @t(Keyword,UserName,ProjId,keywordid,perdate)values ('annuity','Tester',105,1449,'2015-01-27') select DISTINCT tt.Keyword, tt.UserName, tt.ProjId, t.Keyword, t.perdate from @t tt INNER JOIN (SELECT MAX(keywordid)Keyword, MAX(perdate)perdate from @t GROUP BY Keyword,UserName,ProjId )t ON t.Keyword = tt.keywordid AND t.perdate = tt.perdate ORDER BY 1 desc ```
How to Get Latest Updated Record in SQL?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
My objective is to perform two calculations on two selected fields. The `formActionDate` is an integer like so `YYYMMDD` and `reminderFrequency` is also an integer (a whole number representing a number of days). I am wanting to divide the integer date by the number of days. Using mod (I think this is the best approach) I will decide if an email needs to go out. EG if there is a left over then dont send email or if there is no left over an email goes out. Using **SQL 2008 R2** I have this in my select: ``` SELECT CAST(wfi.formActionDate/wf.reminderFrequency AS DECIMAL(18,1)) AS divCalc, formActionDate%reminderFrequency AS modCalc FROM (webFormsInstances AS wfi ``` > **Note** - The query is running no problem but the calculations are both > out. Have I used them correctly? The first one: ``` CAST(wfi.formActionDate/wf.reminderFrequency AS DECIMAL(18,3)) AS divCalc ``` Is doing this: As a result of : `20150123 / 14` which should be `1439294.5` but its `1439294.0` and `20150115 / 14` which should be `1439293.9` nut its `1439293.0` and then the second one:: ``` formActionDate%reminderFrequency AS modCalc ``` To use the same examples above: As a result of : `20150123%14` which should be `5` but its `13` and `20150115%14` which should be `9` but its `7` What have I done wrong? Here is my full query if it helps: ``` SELECT wfi.WebFormsInstanceID, wfi.WebFormsIndexID, wfi.FormStage, wfi.FormAction, wfi.FormActionDate, wf.WebFormsIndexID, wf.reminderFrequency, CAST(wfi.formActionDate/wf.reminderFrequency AS DECIMAL(18,1)) AS divCalc, formActionDate%reminderFrequency AS modCalc FROM (webFormsInstances AS wfi LEFT OUTER JOIN WebFormsIndex AS wf ON wfi.WebFormsIndexID = wf.WebFormsIndexID) WHERE (wfi.formStage <> 'Complete' AND wfi.FormStage <> 'Terminated') AND wfi.formActionDate < CONVERT(int, CONVERT(varchar(8), dateAdd(day,-14, getdate()), 112)) ORDER BY wfi.WebFormsInstanceID DESC; ``` Thanking You... Edited. 
Thanks to Rigel1121 and Dave G The two solutions which are now in my select are: ``` DATEDIFF(day, CAST(CAST(wfi.formActionDate AS VARCHAR(8)) AS DATE),CAST(GETDATE() AS DATE)) AS theDateDiffCalc, ``` and: ``` DATEDIFF(day, CAST(CAST(wfi.formActionDate AS VARCHAR(8)) AS DATE),CAST(GETDATE() AS DATE)) % wf.reminderFrequency AS theModCalc ```
By checking your query there is an unnecessary parenthesis. So I removed it. I've also modified how you get the value of `divCalc`. I `CAST` both the numerator and the denominator and I put it in a `CASE` statement to prevent the SQL error when the value of `wf.reminderFrequency` is zero. Also I used the `DATEDIFF` function to get the `modCalc`. See below: ``` SELECT wfi.WebFormsInstanceID, wfi.WebFormsIndexID, wfi.FormStage, wfi.FormAction, wfi.FormActionDate, wf.WebFormsIndexID, wf.reminderFrequency, CASE WHEN CAST(wf.reminderFrequency AS DECIMAL(18,1))=0.0 THEN 0.0 ELSE CAST(wfi.formActionDate AS DECIMAL(18,1)) / CAST(wf.reminderFrequency AS DECIMAL(18,1)) END as divCalc, DATEDIFF(DAY, '2000-01-01', CAST(CAST(wfi.formActionDate AS VARCHAR(8)) AS DATE)) % wf.reminderFrequency as modCalc FROM webFormsInstances as wfi LEFT OUTER JOIN WebFormsIndex as wf ON wfi.WebFormsIndexID = wf.WebFormsIndexID WHERE (wfi.formStage <> 'Complete' AND wfi.FormStage <> 'Terminated') AND wfi.formActionDate < CONVERT(int, CONVERT(varchar(8), dateAdd(day,-14, getdate()), 112)) ORDER BY wfi.WebFormsInstanceID DESC; ```
When you divide an `int` with another `int`, the result will also be an `int`. You need to cast either one or both of the parameters to `decimal` first. For example: ``` SELECT CAST(wfi.formActionDate AS DECIMAL(18,1)) / CAST(wf.reminderFrequency AS DECIMAL(18,1)) as divCalc ...snip... ``` And regarding the `mod` calculation, your maths is not going to work with dates in that format. You need to convert that `int` to a real `datetime` value, get the difference of that from a base date and then use `mod` on that value. For example, to get that value as a date, you can convert it to a `varchar` first, then cast that as `date`: ``` DECLARE @datevalue int = 20150209 SELECT DATEDIFF(DAY, '2000-01-01', CAST(CAST(@datevalue AS VARCHAR(8)) AS DATE))%14 ```
Using Cast as Decimal and Mod in Select
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I have three kind of artifacts. I store these artifacts in database with following columns: ``` artifact_type varchar2(20) not null, artifact_version varchar2(40) not null, artifact_blob blob default empty_blob() ``` The versions are store in following format : `3.0.0.0.0` There is a query, where I have to return the latest version for a artifact. The max() will not return correct result for varchar. So, is there a way to find a max for this version format, or should I store version in some other way or should I create one more column which will be like latest flag.
This will give you the highest version for each `artifact_type` providing that you have only numbers and dots (i.e. not 3.0.2.1.1b or something). This is for Oracle 12c ``` SELECT a.artifact_type, a.artifact_version, a.artifact_blob FROM artifacts a WHERE a.artifact_version = ( SELECT b.artifact_version FROM artifacts b WHERE b.artifact_type = a.artifact_type ORDER BY CAST(REGEXP_SUBSTR(b.artifact_version,'[^.]+',1, 1) AS NUMBER) DESC, CAST(REGEXP_SUBSTR(b.artifact_version,'[^.]+',1, 2) AS NUMBER) DESC, CAST(REGEXP_SUBSTR(b.artifact_version,'[^.]+',1, 3) AS NUMBER) DESC, CAST(REGEXP_SUBSTR(b.artifact_version,'[^.]+',1, 4) AS NUMBER) DESC, CAST(REGEXP_SUBSTR(b.artifact_version,'[^.]+',1, 5) AS NUMBER) DESC FETCH FIRST 1 ROWS ONLY ) / ``` For 11g you'll need to use the rownum trick to limit the rowset in the sub-select to the first row only.
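The same component-wise numeric comparison that the `REGEXP_SUBSTR ... CAST` ordering performs can also be done application-side. A plain-Python sketch, shown only to illustrate why string comparison is wrong for version numbers:

```python
versions = ["3.0.0.0.0", "3.0.10.0.0", "3.0.9.0.0", "12.0.0.0.0"]

# Split on dots and compare tuples of ints, mirroring the SQL's per-component CASTs.
latest = max(versions, key=lambda v: tuple(map(int, v.split("."))))

# Plain string ordering would pick the wrong one ('3...' sorts after '12...').
string_max = max(versions)
```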
As a general solution, you can use a [user-defined aggregate function](http://docs.oracle.com/cd/B28359_01/appdev.111/b28425/aggr_functions.htm). If 'last version' equals 'last row inserted', you can add an 'insertion date' column and use it for ordering. Sometimes other lightweight approaches can be used. For example, if you can format the version as '003.000.000.000', a varchar comparison would be quite enough.
How to fetch latest version of a artifact from oracle database
[ "", "sql", "oracle", "" ]
Given the following table: ``` subscriber(id_sub, name_sub) 1 'Helen S.' 2 'Marie S.' ``` The **name\_sub** column records have to be updated, in order, with values taken randomly from the table: ``` targeted_subscriber(id, target_name) 1 'Damien B' ``` My intention was to use a JOIN clause in the UPDATE statement. Also, get random **id** values. Is there an elegant, straightforward solution for this kind of scenario?
Use a `Cursor` to update each row with random data. Not sure whether this is the best way of doing it.

```
DECLARE @id_sub INT, @name_sub VARCHAR(100)

DECLARE cur CURSOR FOR
    SELECT id_sub, name_sub FROM subscriber

OPEN cur
FETCH NEXT FROM cur INTO @id_sub, @name_sub

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE subscriber
    SET name_sub = (SELECT TOP 1 target_name
                    FROM targeted_subscriber
                    ORDER BY Newid())
    WHERE id_sub = @id_sub

    FETCH NEXT FROM cur INTO @id_sub, @name_sub
END

CLOSE cur;
DEALLOCATE cur
```
I will leave it to you to adapt this to an update ``` select getRand.sID, getRand.value from ( select docSVsys.sID, docEnum1.value, ROW_NUMBER() over (partition by docSVsys.sID order by docEnum1.eRand) as rownum from ( select newid() sysRand, docSVsys.sID from docSVsys where docSVsys.sID < 100 ) docSVsys join ( select newid() eRand, value from docEnum1 where value is not null and value <> 'null' ) docEnum1 on docSVsys.sysRand <= docEnum1.eRand ) getRand where getRand.rownum = 1 ```
Update varchar column records with random values from a joined table?
[ "", "sql", "sql-server", "t-sql", "" ]
I have a database table (Employee) with the following fields: EmpID, EmpName. I have a second table (EmployeeVersion) with the following fields: PKID(auto), EmpID, EmpName, Month, Year. What would be the quickest way to copy data from the Employee table to the EmployeeVersion table? The Month, Year would either contain the month and the year when the data was copied, or values sent to a stored procedure (@Month, @Year) from C# code. Please provide a solution for both scenarios. As far as I know, I can't use the following statement because the number of columns doesn't match in both tables:

```
Insert Into EmployeeVersion (EmpID, EmpName, Month, Year)
select * from Employee
```

Please advise. Thanks.
You should specify columns in the select statement and escape reserved words used as column names `Month, Year` with `[]`:

```
Insert Into EmployeeVersion (EmpID, EmpName, [Month], [Year])
select EmpId, EmpName, MONTH(GETDATE()), YEAR(GETDATE())
from Employee
```

And stored procedure:

```
CREATE PROCEDURE [dbo].[CopyEmployee]
    @Month INT,
    @Year INT
AS
BEGIN
    SET NOCOUNT ON;
    Insert Into EmployeeVersion (EmpID, EmpName, [Month], [Year])
    select EmpId, EmpName, @Month AS [Month], @Year AS [Year]
    from Employee;
END
GO
```
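The same INSERT...SELECT pattern can be tried end-to-end on an in-memory SQLite database (a hedged sketch with invented sample rows; SQLite syntax stands in for T-SQL, and the `?` placeholders play the role of `@Month`/`@Year`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (EmpID INTEGER, EmpName TEXT);
CREATE TABLE EmployeeVersion (
    PKID INTEGER PRIMARY KEY AUTOINCREMENT,
    EmpID INTEGER, EmpName TEXT, Month INTEGER, Year INTEGER);
INSERT INTO Employee VALUES (1, 'Alice'), (2, 'Bob');
""")
# Copy every employee, stamping the caller-supplied month/year
# (the @Month/@Year stored-procedure case).
con.execute(
    "INSERT INTO EmployeeVersion (EmpID, EmpName, Month, Year) "
    "SELECT EmpID, EmpName, ?, ? FROM Employee", (2, 2015))
copied = con.execute(
    "SELECT EmpID, EmpName, Month, Year FROM EmployeeVersion ORDER BY PKID").fetchall()
print(copied)  # [(1, 'Alice', 2, 2015), (2, 'Bob', 2, 2015)]
```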
Why are you storing month and year in separate columns? Why not just put the date in when the data is inserted? ``` Insert Into EmployeeVersion (EmpID, EmpName, Month, Year) select EmpID, EmpName, month(getdate()), year(getdate()) from Employee; ``` Note that you can have a column called `CreatedAt` that is assigned automatically. You would define it as: ``` create table . . . CreatedAt datetime not null default getdate() ``` Then the `insert` would look like: ``` Insert Into EmployeeVersion (EmpID, EmpName) select EmpID, EmpName from Employee; ```
Quickest way to copy records from one table to another sql server
[ "", "sql", "sql-server", "sql-server-2008", "sqlbulkcopy", "" ]
Background: I need to write a function in T-SQL on SQL Server 2008 10.0.5869. Here's the table I'm working on (for the sake of simplicity - I only put in 3 columns here - but I have 10 columns for the actual work): ``` ID | Column1 | Column2 | Column3 1 | 2014-05 | 2015-02 | 2013-04 2 | 2012-09 | 2011-02 | 2013-03 ``` ID is varchar and Column(x) are all datetime. My end goal is to design a function fn\_CompareDate to do something like this: ``` select fn_CompareDate(ID) from table where ID = 1 ``` **The query above should return the latest date from Column(x)s which should be 2015-02.** I used CASE WHEN but it would be almost impossible to use it for 10 columns. Is there another way to achieve the same result?
I think the below function serves the requirement better:

```
CREATE FUNCTION fn_CompareDate(@ID VARCHAR(10))
RETURNS DATETIME
AS
BEGIN
    DECLARE @maxDate DATETIME;

    SELECT @maxDate = (SELECT Max(v)
                       FROM (VALUES (COLUMN1), (COLUMN2), (COLUMN3)) AS value(v))
    FROM table
    WHERE ID = @ID

    RETURN @maxDate;
END;
```

Now run the below query:

```
select dbo.fn_CompareDate(ID) from table where ID = 1
```

Hope you got it.
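The greatest-across-columns idea can be checked quickly in SQLite, which lacks the T-SQL `VALUES` row constructor used above, so a `UNION ALL` unpivot stands in for it (sample data copied from the question; the `yyyy-MM` strings happen to compare correctly as text here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ID TEXT, Column1 TEXT, Column2 TEXT, Column3 TEXT);
INSERT INTO t VALUES ('1', '2014-05', '2015-02', '2013-04'),
                     ('2', '2012-09', '2011-02', '2013-03');
""")
# Unpivot the three columns for ID = 1 and take the maximum.
row = con.execute("""
    SELECT MAX(v) FROM (
        SELECT Column1 AS v FROM t WHERE ID = '1'
        UNION ALL SELECT Column2 FROM t WHERE ID = '1'
        UNION ALL SELECT Column3 FROM t WHERE ID = '1')
""").fetchone()
print(row[0])  # 2015-02
```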
One approach is to use `apply`:

```
select d.maxd
from table t cross apply
     (select max(d) as maxd
      from (values (id, column1), (id, column2), (id, column3)) as val(id, d)
      where val.id = t.id
     ) d
where t.id = 1;
```

EDIT: You can do this without `values()`:

```
select d.maxd
from table t cross apply
     (select max(d) as maxd
      from (select id, column1 as d union all
            select id, column2 union all
            select id, column3 union all
            select id, column4
           ) val
      where t.id = val.id
     ) d
where t.id = 1;
```
How to compare columns and return only 1 of them in SQL
[ "", "sql", "sql-server", "t-sql", "" ]
I want to update the values of a column with the values that I have acquired through a select statement. But by performing the select I have the whole set of the results, which I want each one of them to be replaced by the czi\_first\_name\_en (kind of a foreach loop) This is what is have so far: ``` UPDATE citizen_info t SET t.czi_first_name_en= ( SELECT per_username FROM person INNER JOIN enrollment_office ON person.per_id=enrollment_office.eof_manager_id INNER JOIN card_request ON enrollment_office.eof_id=card_request.crq_enroll_office_id INNER JOIN citizen ON card_request.crq_citizen_id=citizen.ctz_id INNER JOIN citizen_info ON citizen.ctz_id=citizen_info.czi_id WHERE person.per_dep_id=card_request.crq_enroll_office_id) AS person_username ``` How must I proceed?
You could also use **MERGE** statement with only an **UPDATE** clause - ``` MERGE INTO citizen_info t USING (SELECT person.per_username per_username, citizen.ctz_id ctz_id FROM person inner join enrollment_office ON person.per_id = enrollment_office.eof_manager_id inner join card_request ON enrollment_office.eof_id = card_request.crq_enroll_office_id inner join citizen ON card_request.crq_citizen_id = citizen.ctz_id WHERE person.per_dep_id = card_request.crq_enroll_office_id) s ON(s.ctz_id = t.czi_id) WHEN matched THEN UPDATE SET t.czi_first_name_en = s.per_username / ```
1) You just need an UPDATE with JOIN, I think (Oracle-style). ``` UPDATE ( select per_username as NEW, czi_first_name_en as OLD from person inner join enrollment_office on person.per_id=enrollment_office.eof_manager_id inner join card_request on enrollment_office.eof_id=card_request.crq_enroll_office_id inner join citizen on card_request.crq_citizen_id=citizen.ctz_id inner join citizen_info on citizen.ctz_id=citizen_info.czi_id where person.per_dep_id=card_request.crq_enroll_office_id ) t SET t.OLD = t.NEW ``` See also: [Update statement with inner join on Oracle](https://stackoverflow.com/questions/2446764/oracle-update-statement-with-inner-join) 2) You can also do it your way, I think. ``` UPDATE citizen_info t SET t.czi_first_name_en= ( SELECT per_username FROM person inner join enrollment_office ON person.per_id=enrollment_office.eof_manager_id inner join card_request ON enrollment_office.eof_id=card_request.crq_enroll_office_id inner join citizen ON card_request.crq_citizen_id=citizen.ctz_id inner join citizen_info x ON citizen.ctz_id=x.czi_id WHERE person.per_dep_id=card_request.crq_enroll_office_id and x.czi_id=t.czi_id ) as person_username ``` Note the only difference here: `x.czi_id=t.czi_id`. This links the `x record` to the `t record` which you want to update.
How to update each row of a column by the results of a select?
[ "", "sql", "oracle", "select", "sql-update", "" ]
I came across this example and I don't understand what it means. ``` (SELECT drinker FROM Frequents) EXCEPT ALL (SELECT drinker FROM Likes); ``` relations: Frequents(drinker, bar), Likes(drinker, beer) What does the ALL do in this case? How is the result different from the query below? ``` (SELECT drinker FROM Frequents) EXCEPT (SELECT drinker FROM Likes); ```
The SQL EXCEPT operator takes the distinct rows of one query and returns the rows that do not appear in a second result set. The EXCEPT ALL operator does not remove duplicates: it returns all records from the first table that are not present in the second table, leaving duplicates as they are. For purposes of row elimination and duplicate removal, the EXCEPT operator does not distinguish between NULLs. Unfortunately, SQL Server does not support the EXCEPT ALL operator.
The except operator returns the first table minus any overlap with the second table. Set A = (10,11,12,10,10) Set B = (10,10) A **except** B --> (11,12) A **except all** B --> (10,11,12) **except** removes all occurrences of duplicate data from set A, whereas **except all** only removes one occurrence of duplicate data from set A for every occurrence in set B.
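The A/B arithmetic above can be reproduced with Python's `collections.Counter`, whose multiset subtraction behaves exactly like `EXCEPT ALL` (one removal per occurrence), while plain `set` difference behaves like `EXCEPT`:

```python
from collections import Counter

A = [10, 11, 12, 10, 10]
B = [10, 10]

# EXCEPT: distinct values of A that never appear in B.
except_plain = sorted(set(A) - set(B))

# EXCEPT ALL: keep duplicates, removing one occurrence per match in B.
except_all = sorted((Counter(A) - Counter(B)).elements())

print(except_plain)  # [11, 12]
print(except_all)    # [10, 11, 12]
```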
Trying to understand "except all" in sql query
[ "", "sql", "intersect", "set-intersection", "set-difference", "set-operations", "" ]
In my CRM system I have a table with leads. I would like to make a chart to see how many leads were added in the last 7 days. For that purpose I need to have separate sums for every day from the last week. How to do that in MySQL? My table is called `tab_leads`; it has `lead_id` (integer) and `lead_create_date` (timestamp, format: 0000-00-00 00:00:00). So I need something like:

* Day 1 - 10
* Day 2 - 0
* Day 3 - 5
* Day 4 - 1
* Day 5 - 9
* Day 6 - 15
* Day 7 (today) - 2
Just use a GROUP BY query: ``` SELECT DATE(lead_create_date) AS `Date`, COUNT(*) AS `Leads` FROM tab_leads WHERE lead_create_date >= CURRENT_DATE - INTERVAL 6 DAY GROUP BY DATE(lead_create_date) ``` The above query assumes that there are no future records and current day is counted as the 7th day.
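The same shape of query can be tried on SQLite (a sketch with invented timestamps; `DATE(..., '-6 days')` replaces MySQL's `CURRENT_DATE - INTERVAL 6 DAY`, and a fixed "today" is used so the result is reproducible). Note that days with zero leads simply do not appear in the result and would have to be filled in by the application:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab_leads (lead_id INTEGER, lead_create_date TEXT);
INSERT INTO tab_leads VALUES
    (1, '2015-02-01 09:00:00'),
    (2, '2015-02-01 10:30:00'),
    (3, '2015-02-03 08:15:00'),
    (4, '2015-01-20 12:00:00');   -- older than the 7-day window
""")
rows = con.execute("""
    SELECT DATE(lead_create_date) AS day, COUNT(*) AS leads
    FROM tab_leads
    WHERE DATE(lead_create_date) >= DATE('2015-02-03', '-6 days')
    GROUP BY DATE(lead_create_date)
    ORDER BY day
""").fetchall()
print(rows)  # [('2015-02-01', 2), ('2015-02-03', 1)]
```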
> Try this Mysql Query ``` SELECT * FROM tab_leads WHERE DATE(lead_create_date) = DATE_SUB(DATE(NOW()), INTERVAL 7 DAY) GROUP BY DATE(lead_create_date); ```
How to retrieve sum of data from last 7 days as separate sums
[ "", "sql", "mysql", "date", "aggregate-functions", "" ]
I have a temp table `@table` in my stored procedure that looks like this: ``` AgtID | Bonus ------------- 5063 | 0 1104 | 0 ``` And a table `bonus` that looks like this: ``` AgtID | Contest | Points | Event -------------------------------- 5063 | 21 | 1000 | 1 5063 | 21 | 3000 | 3 1104 | 21 | 1000 | 1 1104 | 21 | 500 | 2 ``` And an update query (currently) like thus: ``` UPDATE tt SET Bonus = b.Points FROM @table tt INNER JOIN tblContestData_Bonus b ON tt.AgtID = b.AgtID where ContestID = 21; ``` Currently, when I run this query, it'll overwrite whatever data is in `@table.Bonus` with the data in `bonus.Points`. This is fine for a single record, but as we start getting more bonus point events, I need to have it add to my value. I know some languages have a `i += value`... does SQL have something similar?
I figured it out right after posting. Funny how that works. What I did:

```
UPDATE tt
SET Bonus = coalesce(
        (select SUM(Points) as Points
         from bonus b
         where b.AgtID = tt.AgtID and ContestID = 21
         group by AgtID), 0)
FROM @table tt;
```

Using `coalesce()` to account for NULL values means the math in the following steps works fine even if there are no records.
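The correlated-subquery-plus-COALESCE shape can be verified on SQLite with the sample rows from the question (a sketch; older SQLite lacks `UPDATE ... FROM`, so the correlation uses the table name directly, and agent 9999 is added to show the COALESCE fallback):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tt (AgtID INTEGER, Bonus INTEGER);
CREATE TABLE bonus (AgtID INTEGER, ContestID INTEGER, Points INTEGER);
INSERT INTO tt VALUES (5063, 0), (1104, 0), (9999, 0);
INSERT INTO bonus VALUES (5063, 21, 1000), (5063, 21, 3000),
                         (1104, 21, 1000), (1104, 21, 500);
""")
con.execute("""
    UPDATE tt SET Bonus = COALESCE(
        (SELECT SUM(Points) FROM bonus b
         WHERE b.AgtID = tt.AgtID AND b.ContestID = 21), 0)
""")
result = con.execute("SELECT AgtID, Bonus FROM tt ORDER BY AgtID").fetchall()
print(result)  # [(1104, 1500), (5063, 4000), (9999, 0)]
```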
I might be missing what you're after here, but you can add the existing value to the new value in an `UPDATE`: ``` UPDATE tt SET Bonus = b.Points + Bonus FROM @table tt INNER JOIN tblContestData_Bonus b ON tt.AgtID = b.AgtID where ContestID = 21; ```
SQL - Add Data to Existing Data From Another Table
[ "", "sql", "sql-server-2005", "" ]
I am trying to show a list of accounts where 2 identical products have been ordered for the same account within the same calendar month. Field names:

```
A/c number, Order id, Cust name, Product, Purchase date
```

I have used GROUP BY and HAVING, but I am concerned about the volume of records returned.
I would suggest doing an inner join on the same table twice. Something like: ``` SELECT o1.* FROM orders o1 INNER JOIN orders o2 ON o1.account_num = o2.account_num AND o1.product_id = o2.product_id AND MONTH(o1.purchase_date) = MONTH(o2.purchase_date) AND YEAR(o1.purchase_date) = YEAR(o2.purchase_date) ``` I just want to point out that you have to match BOTH the months AND the years to avoid matching something purchased on 1/2014 with 1/2015
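Here is that self-join on a tiny SQLite table (a sketch with invented orders; `strftime('%Y-%m', ...)` compares month and year in one go, and an `o1.order_id <> o2.order_id` predicate is added so a row does not trivially match itself, something the query above would also need in practice):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (order_id INTEGER, account_num INTEGER,
                     product_id INTEGER, purchase_date TEXT);
INSERT INTO orders VALUES
    (1, 100, 7, '2014-01-05'),
    (2, 100, 7, '2014-01-20'),
    (3, 100, 7, '2015-01-03');
""")
# Orders 1 and 2 share product, account, month AND year; order 3 only
# shares the month number, so it must not match.
rows = con.execute("""
    SELECT DISTINCT o1.order_id
    FROM orders o1
    JOIN orders o2
      ON o1.account_num = o2.account_num
     AND o1.product_id = o2.product_id
     AND o1.order_id <> o2.order_id
     AND strftime('%Y-%m', o1.purchase_date) = strftime('%Y-%m', o2.purchase_date)
    ORDER BY o1.order_id
""").fetchall()
print(rows)  # [(1,), (2,)]
```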
```
SELECT *
FROM TABLE as table2
INNER JOIN table as table1
    on table1.product = table2.product
    AND MONTH(table1.purchase_date) = MONTH(table2.purchase_date)
    AND YEAR(table1.purchase_date) = YEAR(table2.purchase_date)
```

This should do the trick.
SQL - Retrieve data based upon multiple conditions
[ "", "sql", "" ]
I'm sure I've done this before, but seem to have forgotten how.. I'm trying to filter a recordset so that I get just the 1 record, so for example, if this is my table called **TableA**: ``` | ID | User | Type | Date | ------------------------------------ | 1 | Matt | Opened | 1/8/2014 | | 2 | Matt | Opened | 2/8/2014 | | 3 | Matt | Created| 5/8/2014 | | 4 | John | Opened | 1/8/2014 | | 5 | John | Created| 2/8/2014 | ``` I'd want to filter it so I get the `MIN` of Date where the User is "Matt" and the Type is "Opened". The result set needs to include the ID field and return just the 1 record, so it would look like this: ``` | ID | User | Type | Date | ------------------------------------ | 1 | Matt | Opened | 1/8/2014 | ``` I'm struggling with getting past the GROUPBY requirement when selecting the ID field... this seems to ignore MIN of Date and return more than 1 record.
Use `TOP` and `ORDER BY`: ``` select top 1 * from table where user = "Matt" and type = "Opened" order by date asc; ``` **Edit**: changed order by from `desc` to `asc` as this achieves the `MIN` effect I'm after.
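Outside Access, the same idea is `LIMIT 1` instead of `TOP 1`; a quick SQLite run with the question's data (dates rewritten as ISO strings so they sort correctly as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableA (ID INTEGER, User TEXT, Type TEXT, Date TEXT);
INSERT INTO TableA VALUES
    (1, 'Matt', 'Opened',  '2014-08-01'),
    (2, 'Matt', 'Opened',  '2014-08-02'),
    (3, 'Matt', 'Created', '2014-08-05'),
    (4, 'John', 'Opened',  '2014-08-01'),
    (5, 'John', 'Created', '2014-08-02');
""")
row = con.execute("""
    SELECT ID, User, Type, Date FROM TableA
    WHERE User = 'Matt' AND Type = 'Opened'
    ORDER BY Date ASC
    LIMIT 1
""").fetchone()
print(row)  # (1, 'Matt', 'Opened', '2014-08-01')
```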
Another way is by finding the `min` or `max` date per `user` and `type` then join the result back to the main table ``` SELECT A.ID, A.USER, A.Type, A.Date FROM yourtable A INNER JOIN (SELECT USER, Type, Min(Date) Date FROM yourtable WHERE USER = "Matt" AND type = "Opened" GROUP BY USER, Type) B ON A.USER = B.USER AND A.Type = B.Type AND A.date = B.Date ```
Keep all columns in MIN / MAX query, but return 1 result
[ "", "sql", "ms-access", "" ]
So I started building a query that joins 5 separate tables together to get values to multiply and sum. However, I seem to be facing a rather odd issue. Whenever I attempt to join a certain table to my query, suddenly all of my values are multiplied by 15, the counts, the sums, etc. I'm trying to figure out what is causing all these extra runs. Any ideas? Full Query ``` USE Facilities_Database DECLARE @minimumDate DATE DECLARE @maximumDate DATE SET @minimumDate = '2014/12/11' SET @maximumDate = '2014/12/15' SELECT tab4.TypeName AS 'Labor Type' ,tab1.Building ,CAST(@minimumDate AS nvarchar(255)) + ' - ' + CAST(@maximumDate AS nvarchar(255)) AS 'Date Range' ,Count(tab1.CHSRNumber) AS 'Number of CHSRs' ,ISNULL(SUM(tab5.[Item Cost] * tab3.[Amount Used]),0) AS 'Total Material Cost' ,ISNULL(SUM(tab2.[Hour Worked] * tab2.[Hourly CHSR Labor Rate]),0) AS 'Total Labor Cost' FROM [Facilities].[HardwareSupportRequest] tab1 JOIN Facilities.tblCHSRLaborPerCHSR tab2 ON tab1.CHSRNumber = tab2.[CHSR #] JOIN Facilities.tblMaterialUsed tab3 ON tab1.CHSRNumber = tab3.[CHSR #] JOIN Facilities.LaborTypes tab4 ON tab2.LaborTypeId = tab4.Id JOIN Facilities.tblMaterial tab5 ON tab3.MaterialId = tab5.Id WHERE tab1.ActualCompleteDate BETWEEN @minimumDate AND @maximumDate AND tab4.TypeName IS NOT NULL GROUP BY tab4.TypeName,Building ORDER BY Building,tab4.TypeName ``` Working Query: ``` USE Facilities_Database SELECT tab1.Building ,COUNT(*) AS 'CHSR Count' ,SUM(tab2.[Hour Worked] * 40) AS 'Labor Cost' FROM [Facilities].[HardwareSupportRequest] tab1 INNER JOIN Facilities.tblCHSRLaborPerCHSR tab2 ON tab1.CHSRNumber = tab2.[CHSR #] INNER JOIN Facilities.LaborTypes tab3 ON tab2.LaborTypeId = tab3.Id --INNER JOIN Facilities.tblMaterialUsed tab4 ON --tab4.[CHSR #] = tab1.CHSRNumber --INNER JOIN Facilities.tblMaterial tab5 ON -- tab4.MaterialId = tab5.Id WHERE ActualCompleteDate BETWEEN '2014/12/11' AND '2014/12/15' GROUP BY tab1.Building,tab3.TypeName ``` The table that causes the problems is 
`tblMaterialUsed`. Thanks for any assistance.
If adding a JOIN multiplies the results of GROUP BY operations, then the JOIN is returning more than 1 row per JOIN criteria. Either narrow it down (maybe you are missing 1 or more JOIN fields?) or add more fields to the GROUP BY to change the granularity of what is being aggregated. The issue is that you have a list of Materials User per each `[CHSR #]`. Those rows cause a "Cartesian Product" such that the other rows are duplicated per each row in `tblMaterialUsed`. Hence the Total `Material Cost` field was probably correct while the `Number of CHSRs` and `Total Labor Cost` were multiplied. Essentially, you need to group data at the same level of granularity, which means 1-to-1 across `CHSRNumber` / `[CHSR #]` The following should solve this issue. If it doesn't, that would be due to more than 1 row per `CHSRNumber` / `[CHSR #]` in the main query (which is getting JOINed with the 1 row per `CHSRNumber` / `[CHSR #]` of the `material` CTE). In this case, you would apply the same theory to the main query by creating a second CTE for that aggregation and then just JOIN both of those results in the new main query. 
(and I have updated the query below to incorporate that change as it is doubtful that it wouldn't be needed) ``` ;WITH material AS ( SELECT mu.[CHSR #], ISNULL(SUM(mtrl.[Item Cost] * mu.[Amount Used]),0) AS [MaterialCost] FROM Facilities.tblMaterialUsed mu INNER JOIN Facilities.tblMaterial mtrl ON mu.MaterialId = mtrl.Id GROUP BY mu.[CHSR #] ), labour AS ( SELECT tab1.CHSRNumber, tab4.TypeName, tab1.Building, ISNULL(SUM(tab2.[Hour Worked] * tab2.[Hourly CHSR Labor Rate]),0) AS [LaborCost] FROM [Facilities].[HardwareSupportRequest] tab1 JOIN Facilities.tblCHSRLaborPerCHSR tab2 ON tab1.CHSRNumber = tab2.[CHSR #] JOIN Facilities.LaborTypes tab4 ON tab2.LaborTypeId = tab4.Id WHERE tab1.ActualCompleteDate BETWEEN @minimumDate AND @maximumDate AND tab4.TypeName IS NOT NULL GROUP BY tab1.CHSRNumber, tab4.TypeName, tab1.Building ) SELECT labour.TypeName AS [Labor Type], labour.Building, CAST(@minimumDate AS NVARCHAR(255)) + ' - ' + CAST(@maximumDate AS NVARCHAR(255)) AS [Date Range], COUNT(labour.[CHSRNumber]) AS [Number of CHSRs], SUM(material.[MaterialCost]) AS [Total Material Cost], SUM(labour.[LaborCost]) AS [Total Labor Cost] FROM labour INNER JOIN material ON labour.CHSRNumber = material.[CHSR #] GROUP BY labour.TypeName, labour.Building ORDER BY labour.Building, labour.TypeName; ``` If you want this in a View, instead use an Inline Table-Valued Functions by adding the following to the beginning: ``` CREATE FUNCTION GetCosts (@minimumDate DATE, @maximumDate DATE) RETURNS TABLE AS RETURN ``` And * remove the `;` before the `;WITH` * remove the `ORDER BY` --- Also, it would be a huge benefit if you used acronyms for table aliases instead of `tab1`, `tab2`, etc as it would make the query *much* easier to read, especially given that the same table in both queries isn't even the same `tab#`.
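The fan-out that caused the inflated figures, and the pre-aggregation fix, can be reduced to a toy SQLite example (invented numbers: one labor row joined to three material rows for the same CHSR):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE labor    (chsr INTEGER, hours INTEGER);
CREATE TABLE material (chsr INTEGER, cost  INTEGER);
INSERT INTO labor    VALUES (1, 10);
INSERT INTO material VALUES (1, 5), (1, 5), (1, 5);
""")
# Joining before aggregating triples the labor hours (the multiplication effect):
bad = con.execute(
    "SELECT SUM(l.hours) FROM labor l JOIN material m ON m.chsr = l.chsr"
).fetchone()[0]
# Pre-aggregating material to one row per chsr keeps the sum correct:
good = con.execute("""
    SELECT SUM(l.hours)
    FROM labor l
    JOIN (SELECT chsr, SUM(cost) AS cost FROM material GROUP BY chsr) m
      ON m.chsr = l.chsr
""").fetchone()[0]
print(bad, good)  # 30 10
```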
You've got insufficient `JOIN` criteria resulting in one row joining to multiple rows. If the following query results in two different numbers, then you know that `JOIN` is to blame: ``` SELECT COUNT(*),COUNT(DISTINCT [CHSR #]) FROM Facilities.tblMaterialUsed ``` You need to then determine how to exclude the extra records by adding to your `JOIN` criteria or perhaps aggregating in a subquery first. **Update**: To aggregate first you can use a `cte` or subquery to aggregate by `CHSR #` then join to that cte/subquery: ``` ;WITH Materials AS (SELECT mat.[CHSR #] ,ISNULL(SUM(tab5.[Item Cost] * tab3.[Amount Used]),0) AS Total_Material_Cost FROM Facilities.tblMaterialUsed tab3 JOIN Facilities.tblMaterial tab5 ON tab3.MaterialId = tab5.Id GROUP BY mat.[CHSR #] ) SELECT tab4.TypeName AS 'Labor Type' ,tab1.Building ,CAST(@minimumDate AS nvarchar(255)) + ' - ' + CAST(@maximumDate AS nvarchar(255)) AS 'Date Range' ,Count(tab1.CHSRNumber) AS 'Number of CHSRs' ,SUM(mat.Total_Material_Cost) AS 'Total Material Cost' ,ISNULL(SUM(tab2.[Hour Worked] * tab2.[Hourly CHSR Labor Rate]),0) AS 'Total Labor Cost' FROM [Facilities].[HardwareSupportRequest] tab1 JOIN Facilities.tblCHSRLaborPerCHSR tab2 ON tab1.CHSRNumber = tab2.[CHSR #] JOIN Facilities.LaborTypes tab4 ON tab2.LaborTypeId = tab4.Id JOIN Materials mat ON tab1.CHSRNumber = mat.[CHSR #] WHERE tab1.ActualCompleteDate BETWEEN @minimumDate AND @maximumDate AND tab4.TypeName IS NOT NULL GROUP BY tab4.TypeName,Building ORDER BY Building,tab4.TypeName ```
Values Inexplicably multiplying by a power of 15 when performing a join?
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to get the results of two queries within a single one. The two following queries are functional and each of them results in a table with two columns:

```
SELECT patron.last_name, COUNT(*) AS pret
FROM circ_transaction_log
INNER JOIN patron ON circ_transaction_log.patron_id=patron.patron_id
AND circ_transaction_log.transaction_type<5
AND patron.college_or_school = 'High School'
GROUP BY patron.last_name;
```

```
last_name     | pret
--------------------
steven grelle | 552
michelle vins | 122
...
```

**OR**

```
SELECT patron.last_name, COUNT(*) AS resa
FROM circ_transaction_log
INNER JOIN patron ON circ_transaction_log.patron_id=patron.patron_id
AND circ_transaction_log.transaction_type BETWEEN 5 AND 10
AND patron.college_or_school = 'High School'
GROUP BY patron.last_name;
```

```
last_name     | resa
--------------------
steven grelle | 12
michelle vins | 8
...
```

The result I would like to get is kind of like this:

```
last_name     | pret | resa
---------------------------
steven grelle | 552  | 12
michelle vins | 122  | 8
...
```

But I think the difficulty is that I am querying the same table (CIRC_TRANSACTION_LOG) twice with COUNT, and whatever I tried either errored or did not work.

Thanks in advance for your reply.

Regards, Nickk
You're after something like this, then: ``` SELECT p.last_name, COUNT(case when ctl.transaction_type < 5 then 1 end) AS pret, count(case when ctl.transaction_type between 5 and 10 then 1 end) as resa FROM circ_transaction_log ctl INNER JOIN patron p ON (ctl.patron_id = p.patron_id) AND ctl.transaction_type <= 10 -- possibly not required if transaction_type is always <= 10 AND p.college_or_school = 'High School' GROUP BY p.last_name; ``` NB. untested, since you didn't give any sample data for your tables.
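The conditional-aggregation trick (COUNT over a CASE that yields NULL for non-matching rows) can be seen on a toy SQLite table with invented transaction types:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE log (last_name TEXT, transaction_type INTEGER);
INSERT INTO log VALUES
    ('steven grelle', 1), ('steven grelle', 3), ('steven grelle', 6),
    ('michelle vins', 2), ('michelle vins', 5), ('michelle vins', 9);
""")
rows = con.execute("""
    SELECT last_name,
           COUNT(CASE WHEN transaction_type < 5 THEN 1 END) AS pret,
           COUNT(CASE WHEN transaction_type BETWEEN 5 AND 10 THEN 1 END) AS resa
    FROM log
    GROUP BY last_name
    ORDER BY last_name
""").fetchall()
print(rows)  # [('michelle vins', 1, 2), ('steven grelle', 2, 1)]
```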
You can combine your queries with a **full outer join** of your result tables **using(last_name)**. This should produce the expected result. Tried to create the query...

```
Select * from
(SELECT patron.last_name, COUNT(*) AS pret
 FROM circ_transaction_log
 INNER JOIN patron ON circ_transaction_log.patron_id=patron.patron_id
 AND circ_transaction_log.transaction_type<5
 AND patron.college_or_school = 'High School'
 GROUP BY patron.last_name)
FULL OUTER JOIN
(SELECT patron.last_name, COUNT(*) AS resa
 FROM circ_transaction_log
 INNER JOIN patron ON circ_transaction_log.patron_id=patron.patron_id
 AND circ_transaction_log.transaction_type BETWEEN 5 AND 10
 AND patron.college_or_school = 'High School'
 GROUP BY patron.last_name)
USING (last_name);
```
Substitute 2 SQL/ORACLE requests by only one request
[ "", "mysql", "sql", "sql-server", "oracle", "" ]
I have a `VARCHAR(MAX)` field which is being interfaced to an external system in `XML` format. The following errors were thrown by the interface: ``` mywebsite.com-2015-0202.xml:413005: parser error : xmlParseCharRef: invalid xmlChar value 29 ne and Luke's family in Santa Fe. You know you have a standing invitation,&#x1D; ^ mywebsite.com-2015-0202.xml:455971: parser error : xmlParseCharRef: invalid xmlChar value 25 The apprentice nodded, because frankly, who hadn&#x19;t? That diseases like chol ^ mywebsite.com.com-2015-0202.xml:456077: parser error : xmlParseCharRef: invalid xmlChar value 28 bon mot; a sentimental love of nature and animals; the proverbial British &#x1C; ^ mywebsite.com-2015-0202.xml:472073: parser error : xmlParseCharRef: invalid xmlChar value 20 "And&#x14;you want that?" ^ mywebsite.com-2015-0202.xml:492912: parser error : xmlParseCharRef: invalid xmlChar value 25 She couldn&#x19;t live like this anymore. ``` We found that the following list of characters are invalid: ``` &#x0; &#x1; &#x2; &#x3; &#x4; &#x5; &#x6; &#x7; &#x8; &#x9; &#xa; &#xb; &#xc; &#xd; &#xe; &#xf; &#x10; &#x11; &#x12; &#x13; &#x14; &#x15; &#x16; &#x17; &#x18; &#x19; &#x1a; &#x1b; &#x1c; &#x1d; &#x1e; &#x1f; &#x7f; ``` I am trying to clean this data, and I found a SQL function to clean these characters [here](http://blogs.technet.com/b/wardpond/archive/2005/07/06/a-solution-for-stripping-invalid-xml-characters-from-varchar-text-data-structures.aspx). However, the function was taking `NVARCHAR(4000)` as input parameter, so I have changed the function to use `VARCHAR(MAX)` instead. Could anyone please advise if changing the `NVARCHAR(4000)` to `VARCHAR(MAX)` would produce wrong results? Sorry, I wouldn't be able to test this interface locally so thought to seek opinion/advise. 
Original Function: ``` CREATE FUNCTION fnStripLowAscii (@InputString nvarchar(4000)) RETURNS nvarchar(4000) AS BEGIN IF @InputString IS NOT NULL BEGIN DECLARE @Counter int, @TestString nvarchar(40) SET @TestString = '%[' + NCHAR(0) + NCHAR(1) + NCHAR(2) + NCHAR(3) + NCHAR(4) + NCHAR(5) + NCHAR(6) + NCHAR(7) + NCHAR(8) + NCHAR(11) + NCHAR(12) + NCHAR(14) + NCHAR(15) + NCHAR(16) + NCHAR(17) + NCHAR(18) + NCHAR(19) + NCHAR(20) + NCHAR(21) + NCHAR(22) + NCHAR(23) + NCHAR(24) + NCHAR(25) + NCHAR(26) + NCHAR(27) + NCHAR(28) + NCHAR(29) + NCHAR(30) + NCHAR(31) + ']%' SELECT @Counter = PATINDEX (@TestString, @InputString COLLATE Latin1_General_BIN) WHILE @Counter <> 0 BEGIN SELECT @InputString = STUFF(@InputString, @Counter, 1, NCHAR(164)) SELECT @Counter = PATINDEX (@TestString, @InputString COLLATE Latin1_General_BIN) END END RETURN(@InputString) END ``` Modified Version: ``` CREATE FUNCTION [dbo].RemoveInvalidXMLCharacters (@InputString VARCHAR(MAX)) RETURNS VARCHAR(MAX) AS BEGIN IF @InputString IS NOT NULL BEGIN DECLARE @Counter INT, @TestString NVARCHAR(40) SET @TestString = '%[' + NCHAR(0) + NCHAR(1) + NCHAR(2) + NCHAR(3) + NCHAR(4) + NCHAR(5) + NCHAR(6) + NCHAR(7) + NCHAR(8) + NCHAR(11) + NCHAR(12) + NCHAR(14) + NCHAR(15) + NCHAR(16) + NCHAR(17) + NCHAR(18) + NCHAR(19) + NCHAR(20) + NCHAR(21) + NCHAR(22) + NCHAR(23) + NCHAR(24) + NCHAR(25) + NCHAR(26) + NCHAR(27) + NCHAR(28) + NCHAR(29) + NCHAR(30) + NCHAR(31) + ']%' SELECT @Counter = PATINDEX (@TestString, @InputString COLLATE Latin1_General_BIN) WHILE @Counter <> 0 BEGIN SELECT @InputString = STUFF(@InputString, @Counter, 1, ' ') SELECT @Counter = PATINDEX (@TestString, @InputString COLLATE Latin1_General_BIN) END END RETURN(@InputString) END ```
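For comparison, the character class the function targets (0x00–0x08, 0x0B, 0x0C, 0x0E–0x1F; tab, LF and CR are deliberately kept, since they are valid in XML) can be expressed as a one-line regex. Including 0x7F here is an assumption taken from the invalid-character list above, since the T-SQL function does not strip it:

```python
import re

# Control characters invalid in XML 1.0; tab (0x09), LF (0x0A) and
# CR (0x0D) are valid and therefore kept. 0x7F added per the list above.
INVALID_XML_CHARS = re.compile(r'[\x00-\x08\x0B\x0C\x0E-\x1F\x7F]')

def remove_invalid_xml_chars(text):
    return INVALID_XML_CHARS.sub(' ', text)

sample = "who hadn\x19t?"
cleaned = remove_invalid_xml_chars(sample)
print(cleaned)  # who hadn t?
```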
It is safe to use `VARCHAR(MAX)` as my data column is a `VARCHAR(MAX)` field. Also, there will be an overhead of converting `VARCHAR(MAX)` to `NVARCHAR(MAX)` if I pass a `VARCHAR(MAX)` field to the SQL function which accepts the `NVARCHAR(MAX)` param. Thank you very much @RhysJones, @Damien\_The\_Unbeliever for your comments.
There is a trick using the implicit conversion of `VARBINARY` to base64 and back: Here your **list of evil** ``` DECLARE @evilChars VARCHAR(MAX)= CHAR(0x0) + CHAR(0x1) + CHAR(0x2) + CHAR(0x3) + CHAR(0x4) + CHAR(0x5) + CHAR(0x6) + CHAR(0x7) + CHAR(0x8) + CHAR(0x9) + CHAR(0xa) + CHAR(0xb) + CHAR(0xc) + CHAR(0xd) + CHAR(0xe) + CHAR(0xf) + CHAR(0x10) + CHAR(0x11) + CHAR(0x12) + CHAR(0x13) + CHAR(0x14) + CHAR(0x15) + CHAR(0x16) + CHAR(0x17) + CHAR(0x18) + CHAR(0x19) + CHAR(0x1a) + CHAR(0x1b) + CHAR(0x1c) + CHAR(0x1d) + CHAR(0x1e) + CHAR(0x1f) + CHAR(0x7f); ``` This works ``` DECLARE @XmlAsString NVARCHAR(MAX)= ( SELECT @evilChars FOR XML PATH('test') ); SELECT @XmlAsString; ``` The result (some are "printed") ``` <test>&#x00;&#x01;&#x02;&#x03;&#x04;&#x05;&#x06;&#x07;&#x08; &#x0B;&#x0C;&#x0D;&#x0E;&#x0F;&#x10;&#x11;&#x12;&#x13;&#x14;&#x15;&#x16;&#x17;&#x18;&#x19;&#x1A;&#x1B;&#x1C;&#x1D;&#x1E;&#x1F;</test> ``` The following is forbidden ``` SELECT CAST(@XmlAsString AS XML) ``` But you can use the implicit conversion of VARBINARY to base64 ``` DECLARE @base64 NVARCHAR(MAX)= ( SELECT CAST(@evilChars AS VARBINARY(MAX)) FOR XML PATH('test') ); SELECT @base64; ``` The result ``` <test>AAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh9/</test> ``` Now you've got your **real XML including the special characters**! ``` SELECT CAST(CAST(@base64 AS XML).value('/test[1]','varbinary(max)') AS VARCHAR(MAX)) FOR XML PATH('reconverted') ``` The result ``` <reconverted>&#x0;&#x1;&#x2;&#x3;&#x4;&#x5;&#x6;&#x7;&#x8; &#xB;&#xC; &#xE;&#xF;&#x10;&#x11;&#x12;&#x13;&#x14;&#x15;&#x16;&#x17;&#x18;&#x19;&#x1A;&#x1B;&#x1C;&#x1D;&#x1E;&#x1F;</reconverted> ```
SQL Server: Replace invalid XML characters from a VARCHAR(MAX) field
[ "", "sql", "sql-server", "xml", "t-sql", "sql-server-2012", "" ]
How to combine multiple rows into one row:

```
Example:

ID  NAME    NUMBER
24  infill  20
25  test1   10
26  test2   30
27  test5   35
28  test3   40
```

```
SELECT name, number
FROM table1
WHERE table1.id IN (24,26,28)
```

They will have the result:

```
NAME    NUMBER
infill  20
test2   30
test3   40
```

How can I modify the SQL statement above so that I get one column, named Service, as I really want:

```
infill 20,test2 30,test3 40
```

I did:

```
select name + " " + number as Service from table1 where table1.id in (24,26,28)
```

The result is NULL. Thank you all for replying to my question.
Try like this ``` DECLARE @MyTable TABLE ( ID INT, NAME VARCHAR(100), NUMBER INT ) INSERT INTO @MyTable VALUES (24, 'infill', '20'), (25, 'test1', '10'), (26, 'test2', '30'), (27, 'test5', '35'), (28, 'test3', '40') SELECT stuff((SELECT ',' + NAME + ' ' + CONVERT(VARCHAR(100), NUMBER) FROM @MyTable WHERE id IN ( 24, 26, 28 ) FOR XML PATH('')), 1, 1, '') AS SERVICE ```
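Other engines have a dedicated aggregate for this (MySQL's `GROUP_CONCAT`, SQL Server 2017+'s `STRING_AGG`); here is a quick SQLite check of the same idea. Note that `GROUP_CONCAT`'s output order is not guaranteed without extra care, so the expected output is only indicative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ID INTEGER, NAME TEXT, NUMBER INTEGER);
INSERT INTO t VALUES (24, 'infill', 20), (25, 'test1', 10), (26, 'test2', 30),
                     (27, 'test5', 35), (28, 'test3', 40);
""")
service = con.execute("""
    SELECT GROUP_CONCAT(NAME || ' ' || NUMBER, ',')
    FROM t WHERE ID IN (24, 26, 28)
""").fetchone()[0]
print(service)  # e.g. infill 20,test2 30,test3 40
```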
You are close; the reason the concatenation is not working is that you are trying to concatenate a varchar with an int. In order to concatenate, both columns have to be of the same datatype. Try it like this:

```
SELECT name + ' ' + Convert(nvarchar(max), number) AS Service
FROM table1
WHERE table1.id IN (24,26,28)
```
Combine Multiple Rows into One Row (One Column) when Using SQL
[ "", "sql", "sql-server", "return", "" ]
I have the below table, and now I need to delete the rows which have duplicate "refIDs", keeping at least one row per refID, i.e. I need to remove rows 4 and 5. Please help me with this.

```
+----+-------+--------+--+
| ID | refID | data | |
+----+-------+--------+--+
| 1 | 1023 | aaaaaa | |
| 2 | 1024 | bbbbbb | |
| 3 | 1025 | cccccc | |
| 4 | 1023 | ffffff | |
| 5 | 1023 | gggggg | |
| 6 | 1022 | rrrrrr | |
+----+-------+--------+--+
```
This is similar to Gordon Linoff's query, but without the subquery: ``` DELETE t1 FROM table t1 JOIN table t2 ON t2.refID = t1.refID AND t2.ID < t1.ID ``` This uses an inner join to only delete rows where there is another row with the same refID but lower ID. The benefit of avoiding a subquery is being able to utilize an index for the search. This query should perform well with a multi-column index on refID + ID.
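SQLite has no multi-table DELETE, so a cross-check of the same logic there uses the `NOT IN` form: keep the minimum ID per refID and delete the rest. The question's rows 4 and 5 are the ones that go:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ID INTEGER, refID INTEGER, data TEXT);
INSERT INTO t VALUES
    (1, 1023, 'aaaaaa'), (2, 1024, 'bbbbbb'), (3, 1025, 'cccccc'),
    (4, 1023, 'ffffff'), (5, 1023, 'gggggg'), (6, 1022, 'rrrrrr');
""")
# Keep the minimum ID per refID; delete every other row.
con.execute("DELETE FROM t WHERE ID NOT IN (SELECT MIN(ID) FROM t GROUP BY refID)")
remaining = con.execute("SELECT ID, refID FROM t ORDER BY ID").fetchall()
print(remaining)  # [(1, 1023), (2, 1024), (3, 1025), (6, 1022)]
```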
I would do:

```
delete from t
where ID not in (select min(ID) from table t group by refID having count(*) > 1)
and refID in (select refID from table t group by refID having count(*) > 1)
```

The criteria are: refID is among the duplicates, and ID is different from the min(ID) of the duplicates. It would work better if refID is indexed. Otherwise, provided you can issue the following query multiple times, run it until it no longer deletes anything:

```
delete from t
where ID in (select max(ID) from table t group by refID having count(*) > 1)
```
remove duplicate rows based on one column value
[ "", "mysql", "sql", "" ]
Assume I have the following SQL `select` clause:

```
SELECT *
FROM some_table_1 t1
join some_table t2 on t2.some_id = t1.id
and t2.school_id = 56
and t1.is_valid = 1
and t2.status in (15, 16, 17, 18);
```

But I also have to check one more thing: if `t2.status = 18`, then I additionally have to check that `t2.date < 01.01.2015`. How can I add this condition to this `select`?
It sounds like you only want to join on rows that have a status of 18 when the date (which is a bad name for a column, since it's an Oracle reserved word; I'm also going to assume it's of the DATE datatype) is less than 1st Jan 2015. If so, then the following should do the trick: ``` SELECT * FROM some_table_1 t1 join some_table t2 on t2.some_id = t1.id and t2.school_id = 56 and t1.is_valid = 1 and (t2.status in (15, 16, 17) or (t2.status = 18 and t2.date < to_date('01.01.2015', 'dd.mm.yyyy'))); ```
Just add an `or` checking for 'not 18', or matching your second condition: ``` SELECT * FROM some_table_1 t1 join some_table t2 on t2.some_id = t1.id and t2.school_id = 56 and t1.is_valid = 1 and t2.status in (15, 16, 17, 18) /* added part */ where ( t2.status != 18 or t2.date < to_date('01.01.2015', 'dd.MM.yyyy') ) ```
If inside select clause
[ "", "sql", "oracle", "" ]
I need to calculate the percentage of total in SQL. I've got: > customerID, quantity What I have tried is: ``` SELECT customer_id, sum(quantity) / (quantity * 100) FROM MyTable Group by customer_id ```
You need the total to calculate the percentage, so use a sub query. To avoid calculating the total over again for every user, cross join to the total calculated once: ``` select customer_id, quantity * 100 / total from MyTable cross join ( select sum(quantity) total from MyTable) x ```
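To see the cross-join approach in action, here is a quick sketch using SQLite via Python's `sqlite3`, with made-up quantities:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (customer_id INTEGER, quantity INTEGER)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)", [(1, 25), (2, 75)])

# Compute the grand total once, then cross join it onto every row.
rows = conn.execute("""
    SELECT customer_id, quantity * 100.0 / total
    FROM MyTable
    CROSS JOIN (SELECT SUM(quantity) AS total FROM MyTable)
""").fetchall()
print(rows)  # [(1, 25.0), (2, 75.0)]
```

Note the `100.0`: with integer columns, `quantity * 100 / total` truncates on some engines (SQL Server included), so forcing a float or decimal is a safe habit.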
You need to get total quantity first to calculate the customer quantity percentage. ``` DECLARE @TotalQty int set @TotalQty = (select sum(Amount) FROM tbltemp) SELECT id, Amount * 100 / @TotalQty FROM tbltemp ```
Percentage of the same column in SQL
[ "", "sql", "sql-server", "" ]
I have a dataset like this: ``` col1 col2 John 1 John 1 Emily 1 Emily 2 ``` A simple `select distinct col1 from table where col2 = 1` returns John and Emily. I want a query that results in only John, because Emily has at least one other row where col2 does not equal 1. Any ideas?
Try with a having clause: ``` select col1 from table group by col1 having count(distinct col2) = 1 and min(col2) = 1 ``` **Edit** Salman's answer looks nice too: ``` select col1 from table group by col1 having count(*) = count(case when col2 = 1 then 1 else null end) ```
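Here is a runnable check of the `HAVING` version, using SQLite via Python's `sqlite3` and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("John", 1), ("John", 1), ("Emily", 1), ("Emily", 2)])

# One distinct col2 value per name, and that value must be 1.
rows = conn.execute("""
    SELECT col1 FROM t
    GROUP BY col1
    HAVING COUNT(DISTINCT col2) = 1 AND MIN(col2) = 1
""").fetchall()
print(rows)  # [('John',)]
```

Emily is excluded because her group has two distinct col2 values.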
i currently don't have a mySQL data engine, but this works on MS-SQL engine. hope you could find some hint. ``` select name, avg(cast(num as decimal(4,2) )) from test1 group by name having avg(cast(num as decimal(4,2) ))=1 ```
MySQL return a single row only when all rows are the value of the WHERE
[ "", "mysql", "sql", "group-by", "" ]
I'm trying to create a query that returns values in a table that begin with the letters A-Z as their first character and subsequently are followed by only numbers. Example: `[table].Code`: ``` 0056 A0089 X0023 J0F5 09AG A91234671 A945353B ``` Query would return the following results: ``` A0089 X0023 A91234671 ``` Any help would be much appreciated, thank you.
You can use `[A-Z]` to check the first character and `ISNUMERIC` to check all characters except from the first one for being a number: ``` SELECT Code FROM mytable WHERE LEFT(Code,1) LIKE '[A-Z]' AND ISNUMERIC(RIGHT(Code, LEN(Code)-1)) = 1 ``` [SQL Fiddle Demo here](http://sqlfiddle.com/#!3/ede2b/1) **EDIT:** As stated in comment by @Dan `ISNUMERIC` might fail to give you the result you want, in case `Code` field contains characters like `+`, `-`. You can try this instead: ``` SELECT Code FROM mytable WHERE LEFT(Code,1) LIKE '[A-Z]' AND RIGHT(Code, LEN(Code)-1) NOT LIKE '%[^0-9]%' ``` [SQL Fiddle Demo here](http://sqlfiddle.com/#!3/fc61c/4)
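The `[A-Z]` class inside `LIKE` is a T-SQL extension, so if you want to sanity-check the logic outside SQL Server, the same rule can be expressed as a regular expression. A quick Python sketch with the sample values from the question:

```python
import re

codes = ["0056", "A0089", "X0023", "J0F5", "09AG", "A91234671", "A945353B"]

# One letter followed by one or more digits and nothing else --
# the same rule the T-SQL LEFT / NOT LIKE pair expresses above.
pattern = re.compile(r"[A-Z][0-9]+")
matches = [c for c in codes if pattern.fullmatch(c)]
print(matches)  # ['A0089', 'X0023', 'A91234671']
```

`fullmatch` anchors the pattern to the whole string, which rejects `J0F5` (letter in the middle) and `A945353B` (trailing letter).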
You can use `like` with pattern matching. The `[A-Z]` will match any value between `A` `Z` for first character of code. And if you want to match the small caps also then you can include `[a-z]` in the query ``` SELECT Code FROM tablename WHERE LEFT(Code,1) LIKE '[A-Z]%' ``` [Here is more info on LIKE](https://msdn.microsoft.com/en-us/library/ms179859.aspx).
SQL-Server: Query to return codes only with numbers beginning with letters
[ "", "sql", "sql-server", "" ]
Dearest professionals, I have a query built to get the first and last day of the current month, but I'm having an issue with the time stamp for the First Day of the month. ``` declare @FirstDOM datetime, @LastDOM datetime set @FirstDOM = (select dateadd(dd,-(day(getdate())-1),getdate()) ) set @LastDOM = (select dateadd(s,-1,dateadd(mm,datediff(m,0,getdate())+1,0))) ``` Since it's February of 2015, I would like to get results of: ``` @FirstDOM = 2015-02-01 00:00:00.000 @LastDOM = 2015-02-28 23:59:59.000 ``` @LastDOM is correct, but I'm not getting the zeroes for the time stamp portion of @FirstDOM: I'm getting the correct date, but the time portion is the time at which I run the script. Say it's 8:50 a.m., I get: ``` 2015-02-01 08:50:49.160 ``` What is the best way to fix this little snafu? Regards, Nick
Convert @FirstDOM to `DATE` as below: ``` declare @FirstDOM datetime, @LastDOM datetime set @FirstDOM = (select CONVERT(DATE,dateadd(dd,-(day(getdate())-1),getdate())) ) set @LastDOM = (select dateadd(s,-1,dateadd(mm,datediff(m,0,getdate())+1,0))) SELECT @FirstDOM,@LastDOM ``` I hope this will help! Thanks, Swapnil
``` declare @FirstDOM datetime, @LastDOM datetime set @FirstDOM = (select dateadd(d,-1,dateadd(mm,datediff(m,0,getdate()),1 ))) set @LastDOM = (select dateadd(s,-1,dateadd(mm,datediff(m,0,getdate())+1,0))) Print @FirstDOM Print @LastDOM ```
SQL SELECT First Day and Last Day of the Month.
[ "", "sql", "sql-server", "t-sql", "datetime", "" ]
I have 2 tables, `tblBasicInfo` and `tblPayment`. Relationship is 1 to many, where `tblBasicInfo` is on the 1 side, and `tblPayment` is on the `many` side. *Relationship is optional and that is the problem.* I need to subtract value of certain field from parent table with sum of certain fields from child table *that match certain criteria.* If there are no records in child table that fulfill the criteria then this should be represented with zero ( `data from parent table - 0` ). *I apologize if this is not crystal clear, English is not my native and I am not experienced enough to know how to properly describe the problem.* It would be best to demonstrate what I mean with a small example: We shall start from table schema: ``` tblBasicInfo: #ID, TotalPrice (double) tblPayment: #P_ID, $ID, Amount (double), IsPaid (bool) ``` Here is the content for parent table `tblBasicInfo`: ``` ID | TotalPrice 1 | 100 2 | 150 3 | 200 4 | 250 ``` Here is the content for child table `tblPayment`: ``` P_ID | ID | IsPaid | Amount 1 | 1 | true | 50 2 | 1 | false | 25 3 | 2 | false | 100 4 | 2 | false | 25 5 | 3 | true | 200 ``` This is what I have accomplished on my own: ``` SELECT tblBasicInfo.ID, ( tblBasicInfo.TotalPrice - sum(tblPayment.Amount) ) AS [Difference] FROM tblBasicInfo, tblPayment WHERE ( tblBasicInfo.ID = tblPayment.ID ) GROUP BY tblBasicInfo.TotalPrice, tblPayment.IsPaid HAVING ( tblPayment.IsPaid = TRUE ) --this is the criteria I talked above ORDER BY tblBasicInfo.ID; ``` This is what I get from the above query: ``` ID | Difference 1 | 50 3 | 0 . . . ``` I need to get the following result: ``` ID | Difference 1 | 50 2 | 150 -- does not meet the criteria ( IsPayed = false ) 3 | 0 4 | 250 -- no records in child table . . . ``` *I apologize for imperfect title of the question, but I really did not know how to describe this problem.*
I tried this on SQL Server, but you can achieve the same in other RDBMSs. You can probably achieve this in more than one way; here I present two solutions. I found that the first solution performs better than the second. ``` SELECT ti.id,MAX(totalprice) - ISNULL(SUM(CASE WHEN is_payed = ((0)) THEN 0 ELSE amount END),0) amount FROM tblbasicinfo ti LEFT OUTER JOIN tblpayment tp ON ti.id = tp.p_id GROUP BY ti.id --OR SELECT id,totalprice-ISNULL((SELECT SUM(amount) FROM tblpayment tp WHERE ti.id = tp.p_id AND is_payed = ((1)) GROUP BY id),0) AS reconsile FROM tblbasicinfo ti ``` ![enter image description here](https://i.stack.imgur.com/Tmmnb.jpg) ``` CREATE TABLE tblBasicInfo (id INT IDENTITY(1,1),totalprice MONEY) CREATE TABLE tblPayment (id INT IDENTITY(1,1), P_ID INT ,is_payed BIT,amount MONEY) INSERT INTO tblbasicinfo VALUES(100),(150),(200),(250) INSERT INTO tblpayment(p_id,is_payed,amount) VALUES(1,((1)),50),(1,((0)),25),(2,((0)),100),(2,((0)),25),(3,((1)),200) ```
try this ``` select a.Id,(a.TotalPrice-payment.paid) as Difference from tblBasicInfo a left join ( select sum(Amount) as paid,Id from tblPayment group by Id where IsPaid =1)payment on a.Id=payment.Id ```
Subtracting value from parent table with SUM(value from child table)
[ "", "sql", "ms-access-2007", "" ]
I have been reading up on PATINDEX, attempting to understand the what and why. I understand that when using the wildcards it will return an INT indicating where that character (or characters) appears/starts. So: ``` SELECT PATINDEX('%b%', '123b') -- returns 4 ``` However, I am looking to see if someone can explain, in a simple(ish) way, the reason why you would use this. I have read some other forums but it just is not sinking in, to be honest.
Are you asking for realistic use-cases? I can think of two, real-life use-cases that I've had at work where `PATINDEX()` was my best option. I had to import a text-file and parse it for `INSERT INTO` later on. But these files sometimes had numbers in this format: `00000-59`. If you try `CAST('00000-59' AS INT)` you'll get an error. So I needed code that would parse `00000-59` to `-59` but also `00000159` to `159` etc. The `-` could be anywhere, or it could simply not be there at all. This is what I did: ``` DECLARE @my_var VARCHAR(255) = '00000-59', @my_int INT SET @my_var = STUFF(@my_var, 1, PATINDEX('%[^0]%', @my_var)-1, '') SET @my_int = CAST(@my_var AS INT) ``` `[^0]` in this case means *"any character that **isn't** a `0`"*. So `PATINDEX()` tells me when the 0's end, regardless of whether that's because of a `-` or a number. The second use-case I've had was checking whether an [IBAN](http://en.wikipedia.org/wiki/International_Bank_Account_Number) number was correct. In order to do that, any letters in the IBAN need to be changed to a corresponding number (A=10, B=11, etc...). I did something like this (incomplete but you get the idea): ``` SET @i = PATINDEX('%[^0-9]%', @IBAN) WHILE @i <> 0 BEGIN SET @num = UNICODE(SUBSTRING(@IBAN, @i, 1))-55 SET @IBAN = STUFF(@IBAN, @i, 1, CAST(@num AS VARCHAR(2)) SET @i = PATINDEX('%[^0-9]%', @IBAN) END ``` So again, I'm not concerned with finding exactly the letter `A` or `B` etc. I'm just finding anything that isn't a number and converting it.
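The first use-case translates almost one-to-one into other languages, which is a handy way to verify the logic. A Python sketch of the same leading-zero strip (the regex `^0+` plays the role of `PATINDEX('%[^0]%', ...)` plus `STUFF`):

```python
import re

def strip_leading_zeros(s):
    # Same effect as STUFF(s, 1, PATINDEX('%[^0]%', s) - 1, ''):
    # drop everything up to the first character that is not '0'.
    # Note: an all-zero string would come back empty and needs special-casing,
    # just as the T-SQL version does.
    return re.sub(r"^0+", "", s)

print(int(strip_leading_zeros("00000-59")))   # -59
print(int(strip_leading_zeros("00000159")))   # 159
```

Either way, a stray `-` anywhere after the leading zeros survives the strip, so the subsequent `CAST`/`int()` sees the sign.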
Quoted from [**PATINDEX (Transact-SQL)**](https://msdn.microsoft.com/en-us/library/ms188395.aspx) > The following example uses `%` and `_` wildcards to find the position at > which the pattern `'en'`, followed by any one character and `'ure'` starts > in the specified string (index starts at 1): ``` SELECT PATINDEX('%en_ure%', 'please ensure the door is locked'); ``` > Here is the result set. ``` 8 ``` --- You'd use the `PATINDEX` function when you want to know at which character position a pattern begins in an expression of a valid text or character data type.
Simple Explanation for PATINDEX
[ "", "sql", "sql-server", "t-sql", "syntax", "" ]
I need a little help with a SQL query (SQL Server 2012). I have written a script which calculates the SHA256 of a set of files stored across a number of servers and stores it in a database table with columns: * Server * FileName * FileHash Some example rows from the table: ``` SRVDB001, Address.xsd, CF4658A...F674 SRVDB002, Address.xsd, CF4658A...F674 SRVDB001, Company.xsd, BE6E5C3...76E3 ``` I would like to write a SQL query which returns a list of files where the hash is not the same across all servers - i.e. one of the files is different on one of the servers. The only thing I can think of is a `OUTER JOIN` against a subquery `(SELECT DISTINCT [FileName]...)` but I don't really know where to start. Any help is much appreciated. Thanks,
Here you have the query you are looking for: ``` DECLARE @FilesOnServers TABLE ( [ServerName] varchar(128) NOT NULL, [FileName] nvarchar(256) NOT NULL, [FileHash] varbinary(512) NOT NULL ) INSERT INTO @FilesOnServers(ServerName, [FileName], [FileHash]) VALUES ('SRVDB001', 'Address.xsd', 0xCF4658AF674), ('SRVDB002', 'Address.xsd', 0xCF4658AF675), ('SRVDB001', 'Company.xsd', 0xBE6E5C376E3); SELECT DISTINCT F.FileName FROM @FilesOnServers F WHERE EXISTS ( SELECT * FROM @FilesOnServers F2 WHERE F.[FileName] = F2.[FileName] AND F.FileHash <> F2.FileHash AND F.ServerName <> F2.ServerName ) ```
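For anyone who wants to try it without a SQL Server instance, here is the same `EXISTS` query against SQLite via Python's `sqlite3`, with invented hashes (Address.xsd deliberately differs between the two servers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f (ServerName TEXT, FileName TEXT, FileHash TEXT)")
conn.executemany("INSERT INTO f VALUES (?, ?, ?)", [
    ("SRVDB001", "Address.xsd", "CF4658A...F674"),
    ("SRVDB002", "Address.xsd", "DEADBEE...0000"),   # differs on this server
    ("SRVDB001", "Company.xsd", "BE6E5C3...76E3"),
    ("SRVDB002", "Company.xsd", "BE6E5C3...76E3"),
])

# A file qualifies if some other server holds the same file with a different hash.
rows = conn.execute("""
    SELECT DISTINCT f1.FileName FROM f f1
    WHERE EXISTS (SELECT 1 FROM f f2
                  WHERE f1.FileName = f2.FileName
                    AND f1.FileHash <> f2.FileHash
                    AND f1.ServerName <> f2.ServerName)
""").fetchall()
print(rows)  # [('Address.xsd',)]
```

Company.xsd has identical hashes on both servers, so it is filtered out.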
I think this should do the trick: ``` select file_name from your_table group by file_name, file_hash having count(*) > 1 ``` It groups by file\_name and file\_hash and only returns names where there is more than one record in the result. So if the same file\_name and file\_hash appeared in all records, that would produce a single resultant row and therefore be filtered out by the HAVING clause.
SQL Query - select where things are different
[ "", "sql", "sql-server", "" ]
I have the following table **order\_item** ``` order_id| item_id -------------------------------- 213 | 3 215 | 3 244 | 2 299 | 3 400 | 3 ``` **User input:** `item_id` and `order_id`, e.g. `3 & 299` respectively in the above example. **Requirement:** I need to find the immediate last order that sold the item with item\_id `3` (order\_id 215 in this case). Since the input (order\_id, item\_id) is (299, 3), the most recent past order in the table that sold item 3 is `order_id` 215, so I need to return the (215, 3) row to the user. I tried the following, and I know it is wrong, as I do not know how to get the above information. ``` select (select oi.item_id from order_item oi where oi.order_id < order_id and rownum =1) from order_item where order_id = 299 and item_id = 3 ```
This should do: ``` select max(order_id) from order_item where order_id <299 and item_id =3 ```
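A quick runnable confirmation using SQLite via Python's `sqlite3` and the data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_item (order_id INTEGER, item_id INTEGER)")
conn.executemany("INSERT INTO order_item VALUES (?, ?)",
                 [(213, 3), (215, 3), (244, 2), (299, 3), (400, 3)])

# Largest order_id below the given one, restricted to the given item.
prev = conn.execute("""
    SELECT MAX(order_id) FROM order_item
    WHERE order_id < ? AND item_id = ?
""", (299, 3)).fetchone()[0]
print(prev)  # 215
```

The `MAX` aggregate picks 215 out of the earlier matching orders 213 and 215, which is exactly the row the question asks for.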
try this Oracle ``` select order_id,item_id from order_item where order_id <299 and item_id = 3 and rownum = 1 order by order_id desc ``` SQL Server ``` select top 1 * from order_item where order_id <299 and item_id = 3 order by order_id desc ```
Sql - get previous order which uses the item
[ "", "sql", "oracle10g", "" ]
How Can I query an RDD with complex types such as maps/arrays? for example, when I was writing this test code: ``` case class Test(name: String, map: Map[String, String]) val map = Map("hello" -> "world", "hey" -> "there") val map2 = Map("hello" -> "people", "hey" -> "you") val rdd = sc.parallelize(Array(Test("first", map), Test("second", map2))) ``` I thought the syntax would be something like: ``` sqlContext.sql("SELECT * FROM rdd WHERE map.hello = world") ``` or ``` sqlContext.sql("SELECT * FROM rdd WHERE map[hello] = world") ``` but I get > Can't access nested field in type MapType(StringType,StringType,true) and > org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes respectively.
It depends on the type of the column. Let's start with some dummy data: ``` import org.apache.spark.sql.functions.{udf, lit} import scala.util.Try case class SubRecord(x: Int) case class ArrayElement(foo: String, bar: Int, vals: Array[Double]) case class Record( an_array: Array[Int], a_map: Map[String, String], a_struct: SubRecord, an_array_of_structs: Array[ArrayElement]) val df = sc.parallelize(Seq( Record(Array(1, 2, 3), Map("foo" -> "bar"), SubRecord(1), Array( ArrayElement("foo", 1, Array(1.0, 2.0, 2.0)), ArrayElement("bar", 2, Array(3.0, 4.0, 5.0)))), Record(Array(4, 5, 6), Map("foz" -> "baz"), SubRecord(2), Array(ArrayElement("foz", 3, Array(5.0, 6.0)), ArrayElement("baz", 4, Array(7.0, 8.0)))) )).toDF ``` ``` df.registerTempTable("df") df.printSchema // root // |-- an_array: array (nullable = true) // | |-- element: integer (containsNull = false) // |-- a_map: map (nullable = true) // | |-- key: string // | |-- value: string (valueContainsNull = true) // |-- a_struct: struct (nullable = true) // | |-- x: integer (nullable = false) // |-- an_array_of_structs: array (nullable = true) // | |-- element: struct (containsNull = true) // | | |-- foo: string (nullable = true) // | | |-- bar: integer (nullable = false) // | | |-- vals: array (nullable = true) // | | | |-- element: double (containsNull = false) ``` * array (`ArrayType`) columns: + `Column.getItem` method ``` df.select($"an_array".getItem(1)).show // +-----------+ // |an_array[1]| // +-----------+ // | 2| // | 5| // +-----------+ ``` + Hive brackets syntax: ``` sqlContext.sql("SELECT an_array[1] FROM df").show // +---+ // |_c0| // +---+ // | 2| // | 5| // +---+ ``` + a UDF ``` val get_ith = udf((xs: Seq[Int], i: Int) => Try(xs(i)).toOption) df.select(get_ith($"an_array", lit(1))).show // +---------------+ // |UDF(an_array,1)| // +---------------+ // | 2| // | 5| // +---------------+ ``` + In addition to the methods listed above, Spark supports a growing list of built-in functions operating on complex
types. Notable examples include higher order functions like `transform` (SQL 2.4+, Scala 3.0+, PySpark / SparkR 3.1+): ``` df.selectExpr("transform(an_array, x -> x + 1) an_array_inc").show // +------------+ // |an_array_inc| // +------------+ // | [2, 3, 4]| // | [5, 6, 7]| // +------------+ import org.apache.spark.sql.functions.transform df.select(transform($"an_array", x => x + 1) as "an_array_inc").show // +------------+ // |an_array_inc| // +------------+ // | [2, 3, 4]| // | [5, 6, 7]| // +------------+ ``` + `filter` (SQL 2.4+, Scala 3.0+, Python / SparkR 3.1+) ``` df.selectExpr("filter(an_array, x -> x % 2 == 0) an_array_even").show // +-------------+ // |an_array_even| // +-------------+ // | [2]| // | [4, 6]| // +-------------+ import org.apache.spark.sql.functions.filter df.select(filter($"an_array", x => x % 2 === 0) as "an_array_even").show // +-------------+ // |an_array_even| // +-------------+ // | [2]| // | [4, 6]| // +-------------+ ``` + `aggregate` (SQL 2.4+, Scala 3.0+, PySpark / SparkR 3.1+): ``` df.selectExpr("aggregate(an_array, 0, (acc, x) -> acc + x, acc -> acc) an_array_sum").show // +------------+ // |an_array_sum| // +------------+ // | 6| // | 15| // +------------+ import org.apache.spark.sql.functions.aggregate df.select(aggregate($"an_array", lit(0), (x, y) => x + y) as "an_array_sum").show // +------------+ // |an_array_sum| // +------------+ // | 6| // | 15| // +------------+ ``` + array processing functions (`array_*`) like `array_distinct` (2.4+): ``` import org.apache.spark.sql.functions.array_distinct df.select(array_distinct($"an_array_of_structs.vals"(0))).show // +-------------------------------------------+ // |array_distinct(an_array_of_structs.vals[0])| // +-------------------------------------------+ // | [1.0, 2.0]| // | [5.0, 6.0]| // +-------------------------------------------+ ``` + `array_max` (`array_min`, 2.4+): ``` import org.apache.spark.sql.functions.array_max df.select(array_max($"an_array")).show // 
+-------------------+ // |array_max(an_array)| // +-------------------+ // | 3| // | 6| // +-------------------+ ``` + `flatten` (2.4+) ``` import org.apache.spark.sql.functions.flatten df.select(flatten($"an_array_of_structs.vals")).show // +---------------------------------+ // |flatten(an_array_of_structs.vals)| // +---------------------------------+ // | [1.0, 2.0, 2.0, 3...| // | [5.0, 6.0, 7.0, 8.0]| // +---------------------------------+ ``` + `arrays_zip` (2.4+): ``` import org.apache.spark.sql.functions.arrays_zip df.select(arrays_zip($"an_array_of_structs.vals"(0), $"an_array_of_structs.vals"(1))).show(false) // +--------------------------------------------------------------------+ // |arrays_zip(an_array_of_structs.vals[0], an_array_of_structs.vals[1])| // +--------------------------------------------------------------------+ // |[[1.0, 3.0], [2.0, 4.0], [2.0, 5.0]] | // |[[5.0, 7.0], [6.0, 8.0]] | // +--------------------------------------------------------------------+ ``` + `array_union` (2.4+): ``` import org.apache.spark.sql.functions.array_union df.select(array_union($"an_array_of_structs.vals"(0), $"an_array_of_structs.vals"(1))).show // +---------------------------------------------------------------------+ // |array_union(an_array_of_structs.vals[0], an_array_of_structs.vals[1])| // +---------------------------------------------------------------------+ // | [1.0, 2.0, 3.0, 4...| // | [5.0, 6.0, 7.0, 8.0]| // +---------------------------------------------------------------------+ ``` + `slice` (2.4+): ``` import org.apache.spark.sql.functions.slice df.select(slice($"an_array", 2, 2)).show // +---------------------+ // |slice(an_array, 2, 2)| // +---------------------+ // | [2, 3]| // | [5, 6]| // +---------------------+ ``` * map (`MapType`) columns + using `Column.getField` method: ``` df.select($"a_map".getField("foo")).show // +----------+ // |a_map[foo]| // +----------+ // | bar| // | null| // +----------+ ``` + using Hive brackets syntax: 
``` sqlContext.sql("SELECT a_map['foz'] FROM df").show // +----+ // | _c0| // +----+ // |null| // | baz| // +----+ ``` + using a full path with dot syntax: ``` df.select($"a_map.foo").show // +----+ // | foo| // +----+ // | bar| // |null| // +----+ ``` + using an UDF ``` val get_field = udf((kvs: Map[String, String], k: String) => kvs.get(k)) df.select(get_field($"a_map", lit("foo"))).show // +--------------+ // |UDF(a_map,foo)| // +--------------+ // | bar| // | null| // +--------------+ ``` + Growing number of `map_*` functions like `map_keys` (2.3+) ``` import org.apache.spark.sql.functions.map_keys df.select(map_keys($"a_map")).show // +---------------+ // |map_keys(a_map)| // +---------------+ // | [foo]| // | [foz]| // +---------------+ ``` + or `map_values` (2.3+) ``` import org.apache.spark.sql.functions.map_values df.select(map_values($"a_map")).show // +-----------------+ // |map_values(a_map)| // +-----------------+ // | [bar]| // | [baz]| // +-----------------+ ``` Please check [SPARK-23899](https://issues.apache.org/jira/browse/SPARK-23899) for a detailed list. 
* struct (`StructType`) columns using full path with dot syntax: + with DataFrame API ``` df.select($"a_struct.x").show // +---+ // | x| // +---+ // | 1| // | 2| // +---+ ``` + with raw SQL ``` sqlContext.sql("SELECT a_struct.x FROM df").show // +---+ // | x| // +---+ // | 1| // | 2| // +---+ ``` * fields inside an array of `structs` can be accessed using dot-syntax, names and standard `Column` methods: ``` df.select($"an_array_of_structs.foo").show // +----------+ // | foo| // +----------+ // |[foo, bar]| // |[foz, baz]| // +----------+ sqlContext.sql("SELECT an_array_of_structs[0].foo FROM df").show // +---+ // |_c0| // +---+ // |foo| // |foz| // +---+ df.select($"an_array_of_structs.vals".getItem(1).getItem(1)).show // +------------------------------+ // |an_array_of_structs.vals[1][1]| // +------------------------------+ // | 4.0| // | 8.0| // +------------------------------+ ``` * user-defined type (UDT) fields can be accessed using UDFs. See [Spark SQL referencing attributes of UDT](https://stackoverflow.com/q/33747851/1560062) for details. **Notes**: * depending on the Spark version, some of these methods can be available only with `HiveContext`. UDFs should work independent of version with both the standard `SQLContext` and `HiveContext`. * generally speaking, nested values are second-class citizens. Not all typical operations are supported on nested fields.
Depending on the context, it could be better to flatten the schema and/or explode collections ``` df.select(explode($"an_array_of_structs")).show // +--------------------+ // | col| // +--------------------+ // |[foo,1,WrappedArr...| // |[bar,2,WrappedArr...| // |[foz,3,WrappedArr...| // |[baz,4,WrappedArr...| // +--------------------+ ``` * Dot syntax can be combined with the wildcard character (`*`) to select (possibly multiple) fields without specifying names explicitly: ``` df.select($"a_struct.*").show // +---+ // | x| // +---+ // | 1| // | 2| // +---+ ``` * JSON columns can be queried using the `get_json_object` and `from_json` functions. See [How to query JSON data column using Spark DataFrames?](https://stackoverflow.com/q/34069282/) for details.
Once you convert it to a DataFrame, you can simply fetch the data as: ``` val rddRow= rdd.map(kv=>{ val k = kv._1 val v = kv._2 Row(k, v) }) val myFld1 = StructField("name", org.apache.spark.sql.types.StringType, true) val myFld2 = StructField("map", org.apache.spark.sql.types.MapType(StringType, StringType), true) val arr = Array( myFld1, myFld2) val schema = StructType( arr ) val rowrddDF = sqc.createDataFrame(rddRow, schema) rowrddDF.registerTempTable("rowtbl") val rowrddDFFinal = rowrddDF.select(rowrddDF("map.one")) or val rowrddDFFinal = rowrddDF.select("map.one") ```
Querying Spark SQL DataFrame with complex types
[ "", "sql", "scala", "apache-spark", "dataframe", "apache-spark-sql", "" ]
Trying to learn more on sub-query. I am looking for a way to subtract and compare two tables. 1. Inventory 2. Sales My data records are as follows: **Inventory:** ``` mysql> select store_id, product_id, sum(quantity) as inventory from inventories where store_id = 1 group by product_id; +----------+------------+-----------+ | store_id | product_id | inventory | +----------+------------+-----------+ | 1 | 8 | 24 | | 1 | 10 | 4 | | 1 | 14 | 24 | +----------+------------+-----------+ 3 rows in set (0.00 sec) ``` **Sales** ``` mysql> select store_id, product_id, sum(quantity) as sales from sales where store_id = 1 group by product_id; +----------+------------+-------+ | store_id | product_id | sales | +----------+------------+-------+ | 1 | 8 | 12 | | 1 | 14 | 2 | | 1 | 8 | 1 | +----------+------------+-------+ 2 rows in set (0.00 sec) ``` What is the proper sub-query to have the following result? ``` +----------+------------+-----------+-------+-----------+ | store_id | product_id | inventory | sales | remaining | +----------+------------+-----------+-------+-----------+ | 1 | 8 | 24 | 12 | 12 | | 1 | 14 | 24 | 2 | 22 | | 1 | 8 | 12 | 1 | 11 | +----------+------------+-----------+-------+-----------+ ```
To achieve the desired output you need to calculate running totals of product sales. To get meaningful data, the data in `sales` table must be ordered chronologically. So you need at least one more field to sort data - it doesn't matter if it's a timestamp, or `id` field. Let's assume there is an `id` field in sales table. This is a query to get what you described: ``` SELECT sales.id, sales.store_id, sales.product_id, inventories.quantity-IFNULL(SUM(sales_2.quantity), 0) as inventory, sales.quantity as sales, inventories.quantity-IFNULL(SUM(sales_2.quantity), 0) - sales.quantity as remaining FROM sales INNER JOIN inventories ON inventories.store_id = sales.store_id AND inventories.product_id = sales.product_id LEFT JOIN sales AS sales_2 ON sales_2.store_id = sales.store_id AND sales_2.product_id = sales.product_id AND sales_2.id < sales.id GROUP BY sales.id , sales.store_id , sales.product_id ORDER BY sales.id ``` The second instance of `sales` table called `sales_2` is used to calculate the sum of earlier sales (`sales_2.id<sales.id`) You can exclude `sales.id` from the `select` clause, but you need to keep it in `group by` and `order by`.
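Here is a runnable sketch of that running-total self-join using SQLite via Python's `sqlite3`. The ids and quantities are made up to mirror the question, and only the `remaining` column is shown to keep it short (SQLite accepts the bare, per-group-constant columns in the `GROUP BY` query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventories (store_id INT, product_id INT, quantity INT);
    CREATE TABLE sales (id INTEGER PRIMARY KEY, store_id INT,
                        product_id INT, quantity INT);
    INSERT INTO inventories VALUES (1, 8, 24), (1, 14, 24);
    INSERT INTO sales (store_id, product_id, quantity)
    VALUES (1, 8, 12), (1, 8, 1), (1, 14, 2);
""")

# s2 accumulates earlier sales of the same product (s2.id < s.id),
# so each row sees the inventory left before and after its own sale.
rows = conn.execute("""
    SELECT s.id, s.product_id,
           i.quantity - IFNULL(SUM(s2.quantity), 0) - s.quantity AS remaining
    FROM sales s
    JOIN inventories i
      ON i.store_id = s.store_id AND i.product_id = s.product_id
    LEFT JOIN sales s2
      ON s2.store_id = s.store_id AND s2.product_id = s.product_id
     AND s2.id < s.id
    GROUP BY s.id, s.product_id
    ORDER BY s.id
""").fetchall()
print(rows)  # [(1, 8, 12), (2, 8, 11), (3, 14, 22)]
```

The second sale of product 8 correctly starts from 12 remaining, not from the full 24.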
You can use results from your queries and join them to calculate the remaining quantity for each product ``` SELECT a.store_id, a.product_id, a.inventory, b.sales, a.inventory - b.sales AS remaining FROM ( SELECT store_id, product_id, COALESCE(SUM(quantity),0) AS inventory FROM inventories WHERE store_id = 1 GROUP BY product_id) a LEFT JOIN ( SELECT store_id, product_id, COALESCE(SUM(quantity),0) AS sales FROM sales WHERE store_id = 1 GROUP BY product_id ) b USING(store_id, product_id) ```
How to subtract inventory and sale using mysql subquery?
[ "", "mysql", "sql", "database", "subquery", "" ]
I just made a query to get the second-last row from a database. Here is the code: ``` SELECT TOP 1 * FROM (select Top 2 * from Categories ORDER BY CategoryID DESC)x ORDER BY CategoryID ``` But this code gives me a syntax error in the new version; in the older version this code worked fine. What is the problem? I have 10 rows in the table, and when I submit this query, it gives me row 9. But when row 9 is deleted from the table, so that I now have 8 and 10, this code gives me 10.
``` SELECT * FROM Categories ORDER BY CategoryID DESC LIMIT 1,1 ```
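To verify: a small sketch using SQLite via Python's `sqlite3` (`LIMIT 1 OFFSET 1` is the portable spelling of MySQL's `LIMIT 1,1`), reproducing the scenario from the question where row 9 has been deleted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Categories (CategoryID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO Categories VALUES (?)",
                 [(i,) for i in range(1, 11) if i != 9])  # row 9 was deleted

# Skip the last row, take the next one: the second-last by CategoryID.
row = conn.execute("""
    SELECT CategoryID FROM Categories
    ORDER BY CategoryID DESC
    LIMIT 1 OFFSET 1
""").fetchone()
print(row[0])  # 8
```

With IDs 1-8 and 10 present, the query returns 8, which is the behavior the question expected.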
``` SELECT * From (select * from Categories ORDER BY CategoryID DESC LIMIT 2) AS x ORDER BY CategoryID LIMIT 1 ```
How to get second last row from a mysql database?
[ "", "mysql", "sql", "" ]
I want to sum prices from a table linked to another. I tried this query: ``` SELECT COUNT(cmd.mt14), SUM(cmd.mt14) FROM g_piece cmd INNER JOIN g_piece lot ON lot.GPIREFNOTAIRE = cmd.refpiece AND lot.gpirole = 'B2029005' ``` And the results are: ``` count(cmd.mt14) sum(cmd.mt14) 100 170704,188 ``` It's not good because it sums all duplicated lines because of the left join. If I try with "unique": ``` SELECT count(unique cmd.mt14), sum(unique cmd.mt14) FROM g_piece cmd INNER JOIN g_piece lot ON lot.GPIREFNOTAIRE = cmd.refpiece AND lot.gpirole = 'B2029005' ``` Results: ``` count(unique cmd.mt14) sum(unique cmd.mt14) 54 57724,554 ``` The problem is that's wrong too, because the sum function is not going to sum ALL prices, just unique prices. How can I sum all prices based on a different column? Thanks a lot --- If I try to get the rows and SUM all lines in Excel, I have: ``` count sum 58 58492.503 ```
I see two problems: 1. Your where clause makes your left join an inner, so move the limit to the join, or change the left join to an inner. 2. When aggregating data, you must get the sums before the joins occur if you want to avoid including duplicates, hence the subquery. ``` SELECT count_mt14, sum_mt14 FROM (SELECT count(cmd.mt14) count_mt14, sum(mt14) sum_mt14, Liebelle_20_1 FROM g_Piece group by liebelle_20_1) cmd LEFT JOIN g_individu ind ON ind.refindividu = cmd.LIBELLE_20_1 LEFT JOIN g_piece lot ON lot.GPIREFNOTAIRE = cmd.refpiece AND lot.gpirole = 'B2029005' ```
Your second "LEFT JOIN" must always be satisfied since you are using one of its columns in your WHERE clause. And you are not using the IND table at all. So why not just: ``` SELECT count(cmd.mt14), sum(cmd.mt14) FROM g_piece cmd INNER JOIN g_piece lot ON lot.GPIREFNOTAIRE = cmd.refpiece WHERE lot.gpirole = 'B2029005' ``` If that isn't what you are looking for, then perhaps you are just trying to verify that your CMD exists for a specific lot: ``` SELECT count(cmd.mt14), sum(cmd.mt14) FROM g_piece cmd WHERE EXISTS (select 1 from g_piece lot where lot.GPIREFNOTAIRE = cmd.refpiece AND lot.gpirole = 'B2029005') ```
SQL - SUM different rows
[ "", "sql", "oracle", "join", "sum", "" ]
I use the query below: ``` select TOP 1 Number,ltrim(rtrim(first_field)) + ' ' + ltrim(rtrim( second_field)) as FirstName, from ViewName _details with(nolock) where _details.ID=2912 ``` Here `second_field` is null, hence the `FirstName` column is returned as null.
I think this is what you're looking for: ``` SET CONCAT_NULL_YIELDS_NULL OFF; ``` Read more here: <https://msdn.microsoft.com/en-us/library/ms176056.aspx>
Use `COALESCE` to avoid the null values as set default value if null. [you can find more about it here](https://msdn.microsoft.com/en-us/library/ms190349.aspx).You can use coalesce to get first not null value. So if your any other column contains the not null value you might want to use in case of `null` then you can use that column name as your second column and so on. ``` select TOP 1 Number, ltrim(rtrim(COALESCE(first_field,anyothercolumnwithvalue,'yourdefaultvalue'))) + ' ' + ltrim(rtrim(COALESCE(second_field,anyothercolumnwithvalue,'yourdefaultvalue'))) as FirstName, from ViewName _details with(nolock) where _details.ID=2912 ```
concatenation fails to add null field
[ "", "sql", "sql-server", "" ]
For some reason, I am unable to get this query to return anything. ``` SELECT film.title, film.length FROM moviedb.film, moviedb.`language` WHERE film.length=(SELECT MIN(length) FROM moviedb.film) AND film.language_id=`language`.language_id AND `language`.`name`='French' AND film.rating='R'; ``` So, I am wanting to return two columns with 1 row...the shortest french movie with an 'R' rating. Am I missing something here? I've been scratching my head for an hour.
You need to specify language and rating in the sub-query too, to get the shortest film of those (note the outer table is aliased `f1`, so its columns must be referenced through that alias): ``` SELECT f1.title, f1.length FROM moviedb.film f1 JOIN moviedb.`language` ON f1.language_id = `language`.language_id WHERE f1.length = (SELECT MIN(length) FROM moviedb.film f2 WHERE f1.language_id = f2.language_id AND f1.rating = f2.rating) AND `language`.`name`='French' AND f1.rating='R'; ```
Refrain from using old-style/implicit `JOIN` syntax. See this **[article](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins)** to understand why. So, I converted your query into an explicit `JOIN`. Try this query: ``` SELECT A.title, A.length FROM film A INNER JOIN language B ON A.language_id=B.language_id WHERE A.length=(SELECT MIN(length) FROM film) AND B.`name`='French' AND A.rating='R'; ```
Beginner: Can't Order my WHERE Statements Properly with Subselect MIN?
[ "", "mysql", "sql", "mysql-workbench", "" ]
``` Select Count(*),* from TourBooking Where MemberID = 6 ``` Giving an error > "Column 'TourBooking.ID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause."
`count()` is an aggregate function and thus requires a `group by`. If you want to count the total number of rows in your result, you can use a window function to add such a column: ``` select count(*) over () as total_count, TourBooking.* from TourBooking where MemberID = 6; ``` If you want the total count in the table `TourBooking` *regardless* of the value in the column `MemberId` you need a scalar sub-query in order to retrieve the count: ``` Select (select Count(*) from TourBooking) as total_count, TourBooking.* from TourBooking where MemberID = 6 ```
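The window-function approach above can be exercised end-to-end; this is a minimal sketch using Python's built-in `sqlite3` (SQLite also supports `COUNT(*) OVER ()` since version 3.25 — the table shape and sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TourBooking (ID INTEGER, MemberID INTEGER)")
conn.executemany("INSERT INTO TourBooking VALUES (?, ?)",
                 [(1, 6), (2, 6), (3, 7)])

# COUNT(*) OVER () attaches the total row count of the filtered result
# to every row, without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT COUNT(*) OVER () AS total_count, ID, MemberID
    FROM TourBooking
    WHERE MemberID = 6
    ORDER BY ID
""").fetchall()
print(rows)  # [(2, 1, 6), (2, 2, 6)]
```

Both returned rows carry the same `total_count` of 2 — the size of the filtered result — which is exactly what the window version of the query above produces.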
You need to use a subselect: ``` Select (select Count(*) from TourBooking Where MemberID = 6), * from TourBooking Where MemberID = 6 ```
How to get count with *
[ "", "sql", "sql-server", "" ]
My query ``` SELECT DATEDIFF(day,SD.STARTDATE,SD.ENDDATE) AS TotalDays ,amount FROM staydetails ``` For example, the amount column may have a value of `100`, I need to multiply the value of `totaldays` and `amount` and add the result to another column.
Just add the computed column to your select list, like so: ``` select DATEDIFF(day,SD.STARTDATE,SD.ENDDATE) AS TotalDays , amount, (DATEDIFF(day,SD.STARTDATE,SD.ENDDATE) * amount) AS CalculatedAmount from staydetails ``` Note that you will need to repeat the `DATEDIFF` function - you cannot use the alias TotalDays.
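To illustrate that the difference expression must be repeated in the computed column (the `TotalDays` alias cannot be reused at the same `SELECT` level), here is a hedged sketch in Python's `sqlite3` — SQLite has no `DATEDIFF`, so `julianday` stands in for it, and the sample stay is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staydetails (startdate TEXT, enddate TEXT, amount INTEGER)")
conn.execute("INSERT INTO staydetails VALUES ('2015-01-01', '2015-01-04', 100)")

# The day-difference expression is written out twice: once for the
# TotalDays column and once inside the multiplication.
row = conn.execute("""
    SELECT CAST(julianday(enddate) - julianday(startdate) AS INTEGER) AS TotalDays,
           amount,
           CAST(julianday(enddate) - julianday(startdate) AS INTEGER) * amount AS CalculatedAmount
    FROM staydetails
""").fetchone()
print(row)  # (3, 100, 300)
```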
Pretty straight forward: ``` select DATEDIFF(day,SD.STARTDATE,SD.ENDDATE) AS TotalDays ,amount, (DATEDIFF(day,SD.STARTDATE,SD.ENDDATE) * amount) as Result from staydetails ```
Multiply the values between two columns in sql
[ "", "sql", "sql-server", "" ]
I have one table named **products** with the following columns: type code gestion situation subfamily And I want to select *type, code, gestion, situation* where *subfamily* is one of the values *994, 948, 931*. What I have done: ``` SELECT type, code, gestion, situation FROM products WHERE subfamily = 994 AND subfamily = 948 AND subfamily = 931 ``` How can I do it without repeating *subfamily* all the time? Thank you very much!
You cannot use `AND` here. Because `subfamily` will never have the values 994,948 and 931 at the same time. You should have used `OR` instead. Or simply using `IN` (much better way for a list of values): ``` SELECT type, code, gestion, situation FROM products WHERE subfamily IN (994,948,931) ```
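A quick runnable check of the `IN` version, using Python's `sqlite3` with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (type TEXT, code TEXT, gestion TEXT,"
             " situation TEXT, subfamily INTEGER)")
conn.executemany("INSERT INTO products VALUES (?,?,?,?,?)", [
    ("a", "c1", "g1", "s1", 994),
    ("b", "c2", "g2", "s2", 948),
    ("c", "c3", "g3", "s3", 100),
])

# IN matches any value in the list; AND-ing three equalities on the
# same column can never be true for a single row.
rows = conn.execute(
    "SELECT type, code FROM products WHERE subfamily IN (994, 948, 931) ORDER BY code"
).fetchall()
print(rows)  # [('a', 'c1'), ('b', 'c2')]
```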
Use the In statement: ``` SELECT type, code, gestion, situation FROM products WHERE subfamily in(994,948,931) ```
Basic SQL Statement (Where)
[ "", "sql", "where-clause", "" ]
In this <http://sqlfiddle.com/#!6/aa0e17/4> as you can see, id is an auto-incremented primary key and the value column is of int type. To retrieve a count based on value I am doing 4 different queries ``` select count(id) from status where value=1 select count(id) from status where value=2 select count(id) from status where value=3 select count(id) from status where value=4 ``` My requirement is to get all those counts in a single query. Why do I want this? The above table is just a demo table and has only 4 queries, but in my real scenario I have 35 queries and so would have to write 35 methods in Java. Expected output: 4,4,4,4 (1st query result, 2nd query result, 3rd query result, 4th query result)
Here are the 2 normal ways of solving it: Example 1 PIVOT: ``` SELECT [1] count1,[2] count2,[3] count3,[4] count4 FROM ( SELECT id, value FROM status ) p PIVOT (COUNT(id) FOR [value] IN ([1], [2], [3], [4])) AS pvt ``` Use [CONCAT](https://learn.microsoft.com/en-us/sql/t-sql/functions/concat-transact-sql) if you want to combine the columns into one. To do this, replace the first row of the first example with: ``` SELECT CONCAT([1],',',[2],',',[3],',',[4]) ``` Example 2 CASE: ``` SELECT COUNT(CASE WHEN value = 1 THEN 1 END) count1, COUNT(CASE WHEN value = 2 THEN 1 END) count2, COUNT(CASE WHEN value = 3 THEN 1 END) count3, COUNT(CASE WHEN value = 4 THEN 1 END) count4 FROM status ```
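Example 2 ports to virtually every engine (`PIVOT` is SQL Server-specific); a sketch of the conditional-aggregation idea in Python's `sqlite3`, with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE status (id INTEGER PRIMARY KEY, value INTEGER)")
conn.executemany("INSERT INTO status (value) VALUES (?)",
                 [(v,) for v in [1, 1, 2, 3, 3, 3, 4]])

# One pass over the table; each COUNT only sees rows where its CASE
# expression yields a non-NULL value.
row = conn.execute("""
    SELECT COUNT(CASE WHEN value = 1 THEN 1 END) AS count1,
           COUNT(CASE WHEN value = 2 THEN 1 END) AS count2,
           COUNT(CASE WHEN value = 3 THEN 1 END) AS count3,
           COUNT(CASE WHEN value = 4 THEN 1 END) AS count4
    FROM status
""").fetchone()
print(row)  # (2, 1, 3, 1)
```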
``` select value, count(id) from status group by value ```
How to get join multiple queries
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I'm trying to perform what I believe a very simple case-update SQL to two different column based on a select: `PROD_TB`: ``` Product_Code Reg_Price Sale_Price A 1000 2000 ``` `PRICE_TB`: ``` Product_Code Type Price A REG 3000 A SALE 4000 ``` Desired update result: `PROD_TB`: ``` Product_Code Reg_Price Sale_Price A 3000 4000 ``` What I attempted: ``` UPDATE PROD_TB SET Reg_Price = CASE WHEN PRICE_TB.Type = 'REG' THEN PRICE_TB.Price ELSE Reg_Price END, Sale_Price = CASE WHEN PRICE_TB.Type = 'SALE' THEN PRICE_TB.Price ELSE Sale_Price END FROM PROD_TB JOIN PRICE_TB ON PROD_TB.PRODUCT_CODE = PRICE_TB.PRODUCT_CODE ``` Running the above SQL only updates regular price, not the sale price. Does SQL not support these types of update query? Or did I make an elementary mistake?
Something like this? Basically, just join the set from the PRICE\_TB on the condition of which column you want to get from it. But this is assuming you will only ever have one PRICE\_TB.TYPE per updated PROD\_TB column name, and that each PROD\_TB column always contains a value in PRICE\_TB, otherwise it'll be NULL and that row won't be returned. So make sure you know the variations of data that can exist here. ``` UPDATE PROD SET Reg_Price = REG.Price, Sale_Price = SALE.Price FROM PROD_TB PROD JOIN PRICE_TB REG ON REG.Product_Code = PROD.Product_Code AND REG.Type = 'REG' JOIN PRICE_TB SALE ON SALE.Product_Code = PROD.Product_Code AND SALE.Type = 'SALE' ```
You need to get the data into one row first; here's an example that always fetches the biggest price, in case there's more than one in PRICE\_TB — otherwise it should work the same way as @Kahn's sql. ``` UPDATE PROD SET PROD.Reg_Price = PRICE.Reg_Price, PROD.Sale_Price = PRICE.Sale_Price FROM PROD_TB PROD cross apply ( select max(CASE WHEN Type = 'REG' THEN Price ELSE 0 end) as Reg_Price, max(CASE WHEN Type = 'SALE' THEN Price ELSE 0 end) as Sale_Price from PRICE_TB PRICE where PRICE.Product_Code = PROD.Product_Code ) PRICE ```
SQL Case Update to two different column based on select
[ "", "sql", "sql-server", "" ]
I've got three tables: Clients, Services, and Client Locations. I'm running a query that needs to return the locations of clients that received a certain service. So using the second table in the SELECT and the third table in the WHERE. I'm using two LEFT JOINs and getting my results repeated in an undesirable way. Here are simplified versions of the three tables... **Clients (clients)** ``` id_client | clientName ---------------------- 1 | Abby 2 | Betty 3 | Cathy ``` **Client Services (services)** Used only in the WHERE statement ``` id_client | date | serviceType ----------------------------------- 1 | 1/5/2015 | Counseling 1 | 1/12/2015 | Counseling 1 | 1/19/2015 | Counseling 2 | 1/21/2015 | Sup. Group ``` **Client Locations (locations)** Used only in the SELECT statement ``` id_client | city ---------------------- 1 | Boston, MA 3 | Providence, RI ``` **Here's The Query** ``` SELECT clients.clientName,locations.city FROM clients LEFT JOIN locations ON clients.id_client=locations.id_client LEFT JOIN services ON clients.id_client=services.id_client WHERE services.serviceType='Counseling' ``` **The Results** ``` clientName | city ----------------------- Abby | Boston, MA Abby | Boston, MA Abby | Boston, MA ``` So it's giving me Abby living in Boston three times instead of the desired one. Now, I know exactly why this is happening. The LEFT JOIN used for the services table is being used for the results and Abby's three counseling sessions are causing the city to be repeated three times. Is there another way to do this JOIN so that the services table doesn't cause repetition like this? I've tried INNER JOIN and get the same thing.
Use `exists`: ``` SELECT c.clientName, l.city FROM clients c JOIN locations l ON c.id_client = l.id_client WHERE EXISTS (SELECT 1 FROM services s WHERE c.id_client = s.id_client AND s.serviceType = 'Counseling' ); ``` Although you can use `group by` or `distinct`, this method should perform better. There is no need to generate the duplicated results just to remove them in another step.
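The `EXISTS` rewrite above can be verified with a small reproduction of the three tables — a sketch in Python's `sqlite3` using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (id_client INTEGER, clientName TEXT);
    CREATE TABLE locations (id_client INTEGER, city TEXT);
    CREATE TABLE services (id_client INTEGER, serviceType TEXT);
    INSERT INTO clients VALUES (1, 'Abby'), (2, 'Betty');
    INSERT INTO locations VALUES (1, 'Boston, MA');
    INSERT INTO services VALUES (1, 'Counseling'), (1, 'Counseling'),
                                (1, 'Counseling'), (2, 'Sup. Group');
""")

# EXISTS only asks whether at least one matching service row exists,
# so the three counseling sessions do not multiply the output.
rows = conn.execute("""
    SELECT c.clientName, l.city
    FROM clients c
    JOIN locations l ON c.id_client = l.id_client
    WHERE EXISTS (SELECT 1 FROM services s
                  WHERE s.id_client = c.id_client
                    AND s.serviceType = 'Counseling')
""").fetchall()
print(rows)  # [('Abby', 'Boston, MA')]
```

Abby comes back exactly once, despite having three matching `services` rows — no `DISTINCT` or `GROUP BY` needed.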
Either use `distinct` ``` SELECT DISTINCT clients.clientName,locations.city FROM clients LEFT JOIN locations ON clients.id_client=locations.id_client LEFT JOIN services ON clients.id_client=services.id_client WHERE services.serviceType='Counseling' ``` Or a `group by` ``` SELECT clients.clientName,locations.city FROM clients LEFT JOIN locations ON clients.id_client=locations.id_client LEFT JOIN services ON clients.id_client=services.id_client WHERE services.serviceType='Counseling' GROUP BY clients.clientName,locations.city ``` Or a subquery ``` SELECT clients.clientName,locations.city FROM clients LEFT JOIN locations ON clients.id_client=locations.id_client LEFT JOIN ( SELECT id_client, serviceType FROM services GROUP BY id_client, serviceType ) services ON clients.id_client=services.id_client WHERE services.serviceType='Counseling' GROUP BY clients.clientName,locations.city ``` [Sample SQL Fiddle](http://www.sqlfiddle.com/#!2/fda56c/1)
Three Tables, Two Joins, Only One Table Needed For Results
[ "", "mysql", "sql", "join", "" ]
How to convert sql query to linq using datatable. ``` select DISTINCT targeturl from internal where TargetURL NOT in (select DISTINCT url from Internal) ``` using this query, but still not getting the correct result. ``` var List1 = dt.AsEnumerable().Select(c => c.Field<string>("URL")).Distinct().ToList(); var List2 = dt.AsEnumerable().Select(c => c.Field<string>("TargetURL")).Distinct().ToList(); var Targetname = List2.Except(List1.ToList()).ToList(); ```
You can also try populating an object and then using the object to populate the `DataTable`. ``` var data = dt.AsEnumerable().Select(c => c.Field<string>("TargetURL")).Distinct().ToList(); DataTable dtMyTable = new DataTable(); dtMyTable.Columns.Add("col1",typeof(string)); dtMyTable.Columns.Add("col2",typeof(string)); dtMyTable.Columns.Add("col3",typeof(string)); ``` then populate the table ``` foreach (var item in data) { dtMyTable.Rows.Add(item.col1, item.col2, item.col3); } ```
I prefer to separate it. First: ``` dim query = (from u in Internal select u.url).distinct ``` Second: ``` dim tmp = (from t in Internal where not query.contains(t.TargetURL) select t.TargetURL).ToList ``` It's in VB.net but you can translate it easily. And you can also do the distinct with a group by clause.
sql "not in" in Linq using datatable
[ "", "sql", "linq", "linq-to-entities", "" ]
I have a table with a userID, a startDate and an endDate. I would like to count hour by hour the number of userID concerned. For example, the user '4242' with startDate = '21/05/2014 01:15:00' and with endDate = '21/05/2014 05:22:00' should be counted once from 01 to 02, once from 02 to 03, once from 03 to 04, ... It would give a result like that: ``` DATE AND TIME COUNT ------------------------------------- 20140930 18-19 198 20140930 19-20 220 20140930 20-21 236 20140930 21-22 257 20140930 22-23 257 20140930 23-00 257 20141001 00-01 259 20141001 01-02 259 20141001 02-03 258 20141001 03-04 259 20141001 04-05 258 20141001 05-06 258 ``` How would you do that ? Well, I tried a lot of things. Here's my latest attempt. If the code is too messy, don't even bother reading it, just tell me how you would handle this problem ;) Thanks ! ``` WITH timespan AS ( SELECT lpad(rownum - 1,2,'00') ||'-'|| lpad(mod(rownum,24),2,'00') AS hours FROM dual connect BY level <= 24 ), UserID_min_max AS ( SELECT USERS.UserID, min(USERS.date_startUT) AS min_date, max(USERS.date_end) AS max_date, code_etat FROM USERS WHERE ( (USERS.date_startUT >= to_date('01/10/2014 00:00:00','dd/MM/YYYY HH24:mi:ss') AND USERS.date_end <= to_date('08/10/2014 23:59:00','dd/MM/YYYY HH24:mi:ss')) OR ( USERS.date_startUT <= to_date('01/10/2014 00:00:00','dd/MM/YYYY HH24:mi:ss') AND USERS.date_end >= to_date('01/10/2014 00:00:00','dd/MM/YYYY HH24:mi:ss') AND USERS.date_end <= to_date('08/10/2014 23:59:00','dd/MM/YYYY HH24:mi:ss')) OR (USERS.date_startUT BETWEEN to_date('01/10/2014 00:00:00','dd/MM/YYYY HH24:mi:ss') AND to_date('08/10/2014 23:59:00','dd/MM/YYYY HH24:mi:ss'))) GROUP BY USERS.UserID, code_etat ), hours_list AS ( SELECT UserID, min_date, max_date, code_etat , to_char(min_date + row_number() over (partition BY UserID ORDER BY 1)-1,'yyyymmdd') AS days , to_char(min_date,'yyyymmdd') AS date_start , to_char(min_date, 'hh24') || '-' || lpad(to_number(to_char(min_date, 'hh24')) + 1, 2, '00') AS 
timespan_date_start , to_char(max_date,'yyyymmdd') AS date_end , to_char(max_date, 'hh24') || '-' || lpad(to_number(to_char(max_date, 'hh24')) + 1, 2, '00') AS timespan_date_end FROM UserID_min_max cmm connect BY level <= trunc(max_date) - trunc(min_date)+1 AND PRIOR UserID = UserID AND prior sys_guid() IS NOT NULL ), all_timespan_hours_list AS ( SELECT lj.*, t.*, lj.days ||' '|| t.hours AS days_hours FROM hours_list lj JOIN timespan t ON lj.days || t.hours >= lj.date_start || lj.timespan_date_start AND lj.days || t.hours <= lj.date_end || lj.timespan_date_end ) SELECT DISTINCT days_hours, COUNT(*) FROM ( SELECT * FROM all_timespan_hours_list ttlj WHERE CODE_ETAT IN ('SOH','SOL') ) GROUP BY days_hours ORDER BY days_hours; ```
Here's how I would do something similar: ``` with dt_tab as (select trunc(:p_start_date, 'hh') + (level - 1)/24 hr from dual connect by level <= (trunc(:p_end_date, 'hh') - trunc(:p_start_date, 'hh'))*24 + 1), sample_data as (select 4242 usr, to_date('21/05/2015 01:15:00', 'dd/mm/yyyy hh24:mi:ss') start_date, to_date('21/05/2015 05:22:00', 'dd/mm/yyyy hh24:mi:ss') end_date from dual union all select 4243 usr, to_date('20/05/2015 18:32:42', 'dd/mm/yyyy hh24:mi:ss') start_date, to_date('21/05/2015 01:36:56', 'dd/mm/yyyy hh24:mi:ss') end_date from dual union all select 4244 usr, to_date('21/05/2015 07:00:00', 'dd/mm/yyyy hh24:mi:ss') start_date, null end_date from dual) select to_char(dt.hr, 'dd/mm/yyyy hh24-')||to_char(dt.hr + 1/24, 'hh24') date_and_time, count(sd.usr) cnt from dt_tab dt left outer join sample_data sd on (dt.hr < nvl(sd.end_date, :p_end_date) and dt.hr >= sd.start_date) group by to_char(dt.hr, 'dd/mm/yyyy hh24-')||to_char(dt.hr + 1/24, 'hh24') order by date_and_time; :p_start_date := 20/05/2015 08:00:00 :p_end_date := 21/05/2015 08:00:00 DATE_AND_TIME CNT ---------------- --- 20/05/2015 08-09 0 20/05/2015 09-10 0 20/05/2015 10-11 0 20/05/2015 11-12 0 20/05/2015 12-13 0 20/05/2015 13-14 0 20/05/2015 14-15 0 20/05/2015 15-16 0 20/05/2015 16-17 0 20/05/2015 17-18 0 20/05/2015 18-19 0 20/05/2015 19-20 1 20/05/2015 20-21 1 20/05/2015 21-22 1 20/05/2015 22-23 1 20/05/2015 23-00 1 21/05/2015 00-01 1 21/05/2015 01-02 1 21/05/2015 02-03 1 21/05/2015 03-04 1 21/05/2015 04-05 1 21/05/2015 05-06 1 21/05/2015 06-07 0 21/05/2015 07-08 1 21/05/2015 08-09 0 ``` (depending on how your time period start and end dates are configured, you might want to change from using bind variables - eg. use the min/max dates in your table, etc) --- The above works when I run it in Toad. For something that works in SQL\*Plus, or when you run it as a script (e.g. 
in Toad), the below should work: ``` variable p_start_date varchar2(20) variable p_end_date varchar2(20) exec :p_start_date := '20/05/2015 08:00:00'; exec :p_end_date := '21/05/2015 08:00:00'; with dt_tab as (select trunc(to_date(:p_start_date, 'dd/mm/yyyy hh24:mi:ss'), 'hh') + (level - 1)/24 hr from dual connect by level <= (trunc(to_date(:p_end_date, 'dd/mm/yyyy hh24:mi:ss'), 'hh') - trunc(to_date(:p_start_date, 'dd/mm/yyyy hh24:mi:ss'), 'hh'))*24 + 1), sample_data as (select 4242 usr, to_date('21/05/2015 01:15:00', 'dd/mm/yyyy hh24:mi:ss') start_date, to_date('21/05/2015 05:22:00', 'dd/mm/yyyy hh24:mi:ss') end_date from dual union all select 4243 usr, to_date('20/05/2015 18:32:42', 'dd/mm/yyyy hh24:mi:ss') start_date, to_date('21/05/2015 01:36:56', 'dd/mm/yyyy hh24:mi:ss') end_date from dual union all select 4244 usr, to_date('21/05/2015 07:00:00', 'dd/mm/yyyy hh24:mi:ss') start_date, null end_date from dual) select to_char(dt.hr, 'dd/mm/yyyy hh24-')||to_char(dt.hr + 1/24, 'hh24') date_and_time, count(sd.usr) cnt from dt_tab dt left outer join sample_data sd on (dt.hr < nvl(sd.end_date, to_date(:p_end_date, 'dd/mm/yyyy hh24:mi:ss')) and dt.hr >= sd.start_date) group by to_char(dt.hr, 'dd/mm/yyyy hh24-')||to_char(dt.hr + 1/24, 'hh24') order by date_and_time; ```
Try to use the function [TRUNC(date,[fmt])](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions201.htm) like this: ``` select trunc(some_date, 'HH24') from some_table group by trunc(some_date, 'HH24'); ```
Count records per hour within a time span
[ "", "sql", "database", "oracle", "plsql", "" ]
I can't for the life of me get this to work. I've tried just about every example I've seen on here and all over the internet. I'm trying to query based on a month's worth of data. The data in the view I'm querying is in this format: ``` 2012-03-20 00:00:00.000 ``` Using the following criteria it's bringing back all of the dates (or nearly all of them). Any ideas? ``` AND cast(convert(varchar(10), lag.postingdate, 112) as datetime) between '2015-01-01' and '2015-01-31' ``` Original Query: ``` SELECT prov.pgrp_id AS PERFORMING_PROV_ID , prov.pgrp_prov_name AS PERFORMING_PROV_NAME , lag.chgno AS CHARGE_NUM , lag/countcharges AS LAGTIME , lag.chgamt , lag.postingdate FROM dbo.Providers prov RIGHT JOIN dbo.LagTime_Charges_Calcs lag ON prov.pgrp_prov_cd = lag.provcode AND prov.pgrp_practice = lag.px LEFT JOIN dbo.PlaceofService_union_v pos ON lag.px = pos.px AND lag.poscode = pos.poscode WHERE pos.posid = '1' OR pos.posid = '2' AND prov.Laginclude = 'y' AND MONTH(lag.postingdate) = 1 AND YEAR(lag.postingdate) = 2015 --and lag.postingdate between '2015-01-01 00:00:00.000' and '2015-01-31 23:59:59.000' --AND cast(convert(varchar(10),lag.postingdate,112) as datetime) between '2015-01-01' and '2015-01-31' ```
Try this: ``` declare @period_start datetime , @period_end datetime set @period_start = '2015-01-01 00:00:00.000' set @period_end = dateadd(month,1,@period_start) select * from foo where foo.myDateTimeColumn >= @period_start and foo.myDateTimeColumn < @period_end ```
good way (will use indexes on the postingdate column) ``` where lag.postingdate >= '20150101' and lag.postingdate < '20150201' ``` easy but inefficient way (won't use any indexes, even if there are any on this datetime column) ``` where MONTH(lag.postingdate) = 1 and YEAR(lag.postingdate) = 2015 ```
SQL Server : Query criteria - need to extract date from datetime
[ "", "sql", "sql-server-2005", "" ]
I have a requirement to query the latest record for a list of data. Here's my sample table (primary key omitted) ``` col1| createtime d1 | 2013-01-31 22:04:15 d1 | 2014-01-31 22:04:15 d1 | 2015-01-31 22:04:15 d2 | 2013-01-31 22:04:15 d2 | 2014-01-31 22:04:15 d2 | 2014-02-31 22:04:15 d2 | 2015-01-31 22:04:15 d3 | 2013-01-31 22:04:15 d3 | 2014-01-31 22:04:15 d3 | 2014-01-31 22:04:15 d3 | 2015-01-31 22:04:15 d4 | 2013-01-31 22:04:15 d4 | 2014-01-31 22:04:15 d4 | 2015-01-31 22:04:15 ``` A list of data of col1 are given. For example, the list of data given is [d3, d4]. The result of my query should be the rows ``` [(d3 2015-01-31 22:04:15), (d4 2015-01-31 22:04:15)] ``` because the latest record for d3 is `2015-01-31 22:04:15` and the latest record for d4 is `2015-01-31 22:04:15` Is this possible without using sql procedure?
If you have only two columns, just use `group by`: ``` select t.col1, max(t.createtime) from table t where t.col1 in ('d3', 'd4') group by t.col1; ``` If you have more than two columns, I think the following will work: ``` select t.* from table t where t.col1 in ('d3', 'd4') and not exists (select 1 from table t2 where t2.col1 = t.col1 and t2.createtime > t.createtime ); ```
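A runnable sketch of the `GROUP BY` variant in Python's `sqlite3` (ISO `yyyy-mm-dd hh:mm:ss` strings compare correctly even as text; the sample rows are trimmed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, createtime TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("d3", "2013-01-31 22:04:15"), ("d3", "2015-01-31 22:04:15"),
    ("d4", "2014-01-31 22:04:15"), ("d4", "2015-01-31 22:04:15"),
])

# One group per requested key, each carrying its latest timestamp.
rows = conn.execute("""
    SELECT col1, MAX(createtime)
    FROM t
    WHERE col1 IN ('d3', 'd4')
    GROUP BY col1
    ORDER BY col1
""").fetchall()
print(rows)  # [('d3', '2015-01-31 22:04:15'), ('d4', '2015-01-31 22:04:15')]
```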
You can also use a table expression ``` ;WITH C AS( SELECT RANK() OVER (PARTITION BY col1 ORDER BY createtime DESC) AS Rnk ,col1 ,createtime FROM tableName ) SELECT col1, createtime FROM C WHERE Rnk = 1 ```
Query latest record from a list of given data
[ "", "sql", "hibernate", "" ]
I have a table that has process engines 1,2,3,4,5,6 with a running status. When one of the engines is down the record gets deleted from the table. Using a case statement I can display the first engine that is down but how do I go about displaying the engines if 2 or more engines are down. For e.g. how do I make this query display PE 2 IS DOWN and PE 4 is DOWN if both the engines are down. Right now it displays only the first engine in the list that is down . ``` SELECT CASE WHEN (SELECT COUNT(PE_ID) FROM CWVMINFO WHERE PE_ID = 1) = 0 THEN 'PE 1 IS DOWN' WHEN (SELECT COUNT(PE_ID) FROM CWVMINFO WHERE PE_ID = 2) = 0 THEN 'PE 2 IS DOWN' WHEN (SELECT COUNT(PE_ID) FROM CWVMINFO WHERE PE_ID = 3) = 0 THEN 'PE 3 IS DOWN' WHEN (SELECT COUNT(PE_ID) FROM CWVMINFO WHERE PE_ID = 4) = 0 THEN 'PE 4 IS DOWN' WHEN (SELECT COUNT(PE_ID) FROM CWVMINFO WHERE PE_ID = 5) = 0 THEN 'PE 5 IS DOWN' WHEN (SELECT COUNT(PE_ID) FROM CWVMINFO WHERE PE_ID = 6) = 0 THEN 'PE 6 IS DOWN' ELSE 'ALL PROCESS ENGINES ARE UP AND RUNNING' END "STATUS" from dual; ```
Instead of case, using `union all` for the two different cases, all good versus some not running. Sub query factoring to reduce repeated code. ``` with engines as (select level as engine_number from dual connect by level <= 6) , down_engines as (select engine_number from engines where engine_number not in (select pe_id from cwvminfo)) select to_char(engine_number) || ' IS DOWN' from down_engines union all select 'all engines are running' from dual where not exists (select null from down_engines) ```
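The same generate-then-anti-join idea can be sketched portably with a recursive CTE (the common analogue of Oracle's `CONNECT BY LEVEL`) — here in Python's `sqlite3`, with engines 2 and 4 deliberately missing from the invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cwvminfo (pe_id INTEGER)")
conn.executemany("INSERT INTO cwvminfo VALUES (?)", [(1,), (3,), (5,), (6,)])

# Generate engine numbers 1..6, then report the ones with no row.
rows = conn.execute("""
    WITH RECURSIVE engines(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM engines WHERE n < 6
    )
    SELECT 'PE ' || n || ' IS DOWN'
    FROM engines
    WHERE n NOT IN (SELECT pe_id FROM cwvminfo)
    ORDER BY n
""").fetchall()
print(rows)  # [('PE 2 IS DOWN',), ('PE 4 IS DOWN',)]
```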
I would rewrite this using a single query and conditional aggregation: ``` select coalesce((case when sum(case when pe_id = 1 then 1 else 0 end) = 0 then 'PE 1 IS DOWN; ' end) || (case when sum(case when pe_id = 2 then 1 else 0 end) = 0 then 'PE 2 IS DOWN; ' end) || (case when sum(case when pe_id = 3 then 1 else 0 end) = 0 then 'PE 3 IS DOWN; ' end) || (case when sum(case when pe_id = 4 then 1 else 0 end) = 0 then 'PE 4 IS DOWN; ' end) || (case when sum(case when pe_id = 5 then 1 else 0 end) = 0 then 'PE 5 IS DOWN; ' end) || (case when sum(case when pe_id = 6 then 1 else 0 end) = 0 then 'PE 6 IS DOWN; ' end), 'ALL PROCESS ENGINES ARE UP AND RUNNING') as status from CWVMINFO; ``` Or, alternatively, if you don't have to have a single row, you might do: ``` select ids.id, (case when count(c.pe_id) = 0 then 'DOWN' else 'UP' end) as status from (select 1 as id from dual union all select 2 from dual union all select 3 from dual union all select 4 from dual union all select 5 from dual union all select 6 from dual ) ids left join CWVMINFO c on c.pe_id = ids.id group by ids.id order by ids.id; ``` This would make it more obvious that everything is being checked.
SQL CASE Statement for no data
[ "", "sql", "oracle", "oracle11g", "case", "" ]
One of my `WHERE` clauses is the following: ``` AND (DateCreated BETWEEN @DateFrom and @DateTo OR (@DateFrom IS NULL OR @DateTo IS NULL)) ``` `@DateFrom` and `@DateTo` are input parameters that may be `NULL`. If they are both null, then I need to basically ignore the `BETWEEN` and return all records. If `@DateFrom` is `NULL`, but `@DateTo` is `NOT NULL`, then I need to return all records with DateCreated being no greater than `@DateTo` (inclusive). If `@DateFrom` is `NOT NULL`, but `@DateTo` is `NULL`, then I need to return all records with DateCreated being no earlier than `@DateFrom` (inclusive) up to today's date. The DateCreated field is usually not null, but sometimes it is null. So far my `WHERE` clause is not working exactly like I want.
Just need some extra criteria to handle when one or the other is `NULL`: ``` AND ( (DateCreated >= @DateFrom and DateCreated < DATEADD(day,1,@DateTo)) OR (@DateFrom IS NULL AND @DateTo IS NULL) OR (@DateFrom IS NULL AND DateCreated < DATEADD(day,1,@DateTo)) OR (@DateTo IS NULL AND DateCreated >= @DateFrom) ) ``` Edit: Giorgi's approach was simpler, here it is adapted for use with `DATETIME`: ``` AND ( (DateCreated >= @DateFrom OR @DateFrom IS NULL) AND (DateCreated < DATEADD(day,1,@DateTo) OR @DateTo IS NULL) ) ``` The issue with `BETWEEN` or `<=` when using a `DATE` variable against a `DATETIME` field, is that any time after midnight on the last day will be excluded. `'2015-02-11 13:07:56.017'` is greater than `'2015-02-11'` Rather than casting your field as `DATE` for comparison, it's better for performance to add a day to your variable and change from `<=` to `<`.
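Giorgi's simplified shape with the half-open upper bound can be demonstrated like this — a sketch in Python's `sqlite3`, where `date(:t, '+1 day')` plays the role of `DATEADD(day,1,@DateTo)` and the table and rows are invented:

```python
import sqlite3

def fetch(created_from, created_to):
    # Each bound is enforced only when its parameter is non-NULL;
    # the upper bound is half-open so times after midnight still match.
    return conn.execute("""
        SELECT id FROM orders
        WHERE (DateCreated >= :f OR :f IS NULL)
          AND (DateCreated < date(:t, '+1 day') OR :t IS NULL)
        ORDER BY id
    """, {"f": created_from, "t": created_to}).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, DateCreated TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    (1, "2015-01-10 09:00:00"),
    (2, "2015-02-11 13:07:56"),
    (3, "2015-03-01 08:30:00"),
])
print(fetch(None, None))          # all three rows
print(fetch("2015-02-01", None))  # rows 2 and 3
print(fetch(None, "2015-02-11"))  # rows 1 and 2 -- the afternoon of the 11th is included
```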
Try this: ``` WHERE ((DateCreated >= @DateFrom OR @DateFrom IS NULL) AND (DateCreated <= @DateTo OR @DateTo IS NULL)) ```
Using WHERE clause with BETWEEN and null date parameters
[ "", "sql", "sql-server", "" ]
I know my query below is just horrible and it takes 2 min to get 10 records (listing table has over 1M records though) but I am not sure whats the better way to write this I simply just wanna get all the countries that have listings the table that connects countries to listings is province.. ``` ALTER VIEW [dbo].[CountriesWithListings] AS SELECT distinct cn.CountryID, cn.Code as CountryCode, cn.Name as CountryName FROM dbo.Countries AS cn INNER JOIN dbo.Provinces AS p ON p.CountryID = cn.CountryID INNER JOIN dbo.Cities c on c.ProvinceID = p.ProvinceID INNER JOIN dbo.Listings AS l ON l.CityID = c.CityID WHERE l.IsActive = 1 AND l.IsApproved = 1 ```
Assuming you have the appropriate indices in place, using `distinct` is expensive. You should be able to get better performance using `exists`: ``` SELECT cn.CountryID, cn.Code as CountryCode, cn.Name as CountryName FROM dbo.Countries AS cn WHERE EXISTS ( SELECT 1 FROM dbo.Provinces AS p INNER JOIN dbo.Cities c on c.ProvinceID = p.ProvinceID INNER JOIN dbo.Listings AS l ON l.CityID = c.CityID WHERE p.CountryID = cn.CountryID AND l.IsActive = 1 AND l.IsApproved = 1 ) ```
Is the listing table indexed? If it has like 1M entries, it would be good to index it first and check the performance after that. Your query is not that complex
Inner join with distinct slow
[ "", "sql", "performance", "" ]
What is the equivalent of the Oracle "Dual" table in MS SqlServer? This is my `Select`: ``` SELECT pCliente, 'xxx.x.xxx.xx' AS Servidor, xxxx AS Extension, xxxx AS Grupo, xxxx AS Puerto FROM DUAL; ```
In `sql-server`, there is no `dual` you can simply do ``` SELECT pCliente, 'xxx.x.xxx.xx' AS Servidor, xxxx AS Extension, xxxx AS Grupo, xxxx AS Puerto ``` However, if your problem is because you transfered some code from `Oracle` which reference to `dual` you can re-create the table : ``` CREATE TABLE DUAL ( DUMMY VARCHAR(1) ) GO INSERT INTO DUAL (DUMMY) VALUES ('X') GO ```
You don't need *DUAL* in MS SQL Server. **In Oracle** ``` select 'sample' from dual ``` **is equal to** ``` SELECT 'sample' ``` **in SQL Server**
What is the equivalent of the Oracle "Dual" table in MS SqlServer?
[ "", "sql", "sql-server", "oracle", "t-sql", "dual-table", "" ]
I have table A with two columns `id(int)` and `f_value(float)`. Now I'd like to select all rows where `f_value` starts from '123'. So for the following table: ``` id | f_value ------------ 1 | 12 2 | 123 3 | 1234 ``` I'd like to get the second and third row. I tried to use LEFT with cast but that was a disaster. For the following query: ``` select f_value, str(f_value) as_string, LEFT(str(f_value), 2) left_2, LEFT(floor(f_value), 5) flor_5, LEFT('abcdef', 5) test from A ``` I got: ``` f_value | as_string | left_2 | flor_5 | test ------------------------------------------------ 40456510 | 40456510 | | 4.045 | abcde 40454010 | 40454010 | | 4.045 | abcde 404020 | 404020 | | 40402 | abcde 40452080 | 40452080 | | 4.045 | abcde 101020 | 101020 | | 10102 | abcde 404020 | 404020 | | 40402 | abcde ``` The question is why left works fine for 'test' but for other returns such weird results? **EDIT:** I made another test I now I'm even more confused. For query: ``` Declare @f as float set @f = 40456510. select LEFT(cast(@f as float), LEN(4045.)), LEFT(404565., LEN(4045.)) ``` I got: ``` | ------------ 4.04 | 4045 ``` Is there a default cast which causes this? [Fiddle SQL](http://sqlfiddle.com/#!3/185b8/1)
I found the solution. The problem was that SQL Server uses the exponential representation of floats. To resolve it you need to first convert float to BigInt and then use Left on it. Example: ``` Select * from A where Left(Cast(float_value as BigInt), 4) = xxxx ```
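SQLite doesn't share SQL Server's exponential `str()` rendering of floats, but the cast-then-take-prefix idea itself can be sketched with Python's `sqlite3`, using the question's sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (id INTEGER, f_value REAL)")
conn.executemany("INSERT INTO A VALUES (?, ?)",
                 [(1, 12.0), (2, 123.0), (3, 1234.0)])

# Cast the float to an integer first, then compare its leading digits
# as text -- this avoids any scientific-notation string form.
rows = conn.execute("""
    SELECT id FROM A
    WHERE substr(CAST(f_value AS INTEGER), 1, 3) = '123'
    ORDER BY id
""").fetchall()
print(rows)  # [(2,), (3,)]
```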
``` /* returns significant digits from @f (a float) as an integer negative sign is stripped off */ declare @num_digits int = 3; /* needs to be positive; accuracy diminishes with larger values */ with samples(num, f) as ( select 1, cast(123.45 as float) union select 2, 123456700 union select 3, -1.234567 union select 4, 0.0000001234 ) select num, f, case when f = 0 or @num_digits < 1 then 0 else floor( case sign(log10(abs(f))) when -1 then abs(f) * power(10e0, -floor(log10(abs(f))) + @num_digits - 1) when 1 then abs(f) / power(10e0, ceiling(log10(abs(f))) - @num_digits) end ) end as significant_digits from samples order by num; ``` [sqlfiddle](http://sqlfiddle.com/#!3/d41d8/43228)
How to get first n numbers from float
[ "", "sql", "sql-server", "" ]
I'm attempting to send in a number and identifiers and update rows in a while loop. For instance if @Number = 1 and ``` MAX(Number) FROM QuestionnaireQuestions WHERE Questionnaire_ID = @Questionnaire_ID = 4 ``` The results are values 1, 1, 1 when I would expect 3,2,1. ``` CREATE PROCEDURE [dbo].[DeleteQuestion] (@QuestionnaireQuestions_ID BIGINT, @Questionnaire_ID BIGINT, @Number SMALLINT) AS DECLARE @i AS SMALLINT SELECT @i = MAX(Number) FROM QuestionnaireQuestions WHERE Questionnaire_ID = @Questionnaire_ID WHILE ( @i > @Number ) BEGIN UPDATE QuestionnaireQuestions SET Number = ( @i - 1 ) WHERE Number = @i AND Questionnaire_ID = @Questionnaire_ID SET @i = @i - 1 END DELETE QuestionnaireQuestions WHERE QuestionnaireQuestions_ID = @QuestionnaireQuestions_ID ```
If the question number is not really more than an ordering in the questionnaire, why do you not simply leave the number initially assigned to the question, and then renumber them when they are shown, or otherwise processed. This way, changing data only has local impact to the affected row.
I would rework it to only take a single id. That way you can never pass it values that don't belong together. ``` CREATE PROCEDURE [dbo].[DeleteQuestion] (@QuestionnaireQuestions_ID BIGINT) AS DECLARE @Questionnaire_ID BIGINT, @Number SMALLINT SELECT @Questionnaire_ID = Questionnaire_ID, @Number = Number FROM QuestionnaireQuestions WHERE QuestionnaireQuestions_ID = @QuestionnaireQuestions_ID DELETE QuestionnaireQuestions WHERE QuestionnaireQuestions_ID = @QuestionnaireQuestions_ID UPDATE QuestionnaireQuestions SET Number = Number - 1 WHERE Questionnaire_ID = @Questionnaire_ID AND Number > @Number END ```
SQL Update in While Loop
[ "sql", "sql-server" ]
I want to get the difference between two sequential values from my table.

```
| id | count |
| 1  | 1     |
| 2  | 7     |
| 3  | 9     |
| 4  | 3     |
| 5  | 7     |
| 6  | 9     |
```

For example, the difference *id2-id1 = 6*, *id3-id2 = 2*, ... How can I do it? `SELECT SUM(id(x+1) - id(x)) FROM table1`
You can use a subquery to find `count` for the preceding `id`. In case there are no gaps in the `ID` column: ``` SELECT CONCAT(t.`id` ,' - ', t.`id` - 1) AS `IDs` , t.`count` - (SELECT `count` FROM `tbl` WHERE `id` = t.`id` - 1) AS `Difference` FROM `tbl` t WHERE t.`id` > 1 ``` [**SQLFiddle**](http://sqlfiddle.com/#!2/627d4/6) In case there are gaps in the `ID`column. *First solution*, using `ORDER BY <...> DESC` with `LIMIT 1`: ``` SELECT CONCAT(t.id ,' - ', (SELECT `id` FROM tbl WHERE t.id > id ORDER BY id DESC LIMIT 1)) AS IDs , t.`count` - (SELECT `count` FROM tbl WHERE t.id > id ORDER BY id DESC LIMIT 1) AS difference FROM tbl t WHERE t.id > 1; ``` [**SQLFiddle**](http://sqlfiddle.com/#!2/754ead/6) *Second solution*, using another subquery to find `count` with the `MAX(id)` less than current `id`: ``` SELECT CONCAT(t.id ,' - ', (SELECT MAX(`id`) FROM tbl WHERE id < t.id)) AS IDs , t.`count` - (SELECT `count` FROM tbl WHERE `id` = (SELECT MAX(`id`) FROM tbl WHERE id < t.id) ) AS difference FROM tbl t WHERE t.id > 1; ``` [**SQLFiddle**](http://sqlfiddle.com/#!2/754ead/5) *P.S. :* First column, `IDs`, is just for readability, you can omit it or change completely, if it is necessary.
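As a quick sanity check of the correlated-subquery idea (my addition, not part of the original answer): the sketch below runs essentially the same query against SQLite through Python's `sqlite3`, purely because that is easy to reproduce; the column is renamed `cnt` only to avoid quoting the reserved word `count`.

```python
import sqlite3

# Hypothetical reproduction of the question's table; `cnt` stands in for
# the original `count` column to avoid quoting a reserved word.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, cnt INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)",
                 [(1, 1), (2, 7), (3, 9), (4, 3), (5, 7), (6, 9)])

# For each row after the first, subtract the value of the preceding id.
rows = conn.execute("""
    SELECT t.id,
           t.cnt - (SELECT cnt FROM tbl WHERE id = t.id - 1) AS difference
    FROM tbl t
    WHERE t.id > 1
    ORDER BY t.id
""").fetchall()
print(rows)
```

With the sample data this prints `[(2, 6), (3, 2), (4, -6), (5, 4), (6, 2)]`. Like the first solution above, this variant assumes gap-free ids; use one of the `MAX(id)`-based variants when ids can have gaps.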
If you *know* that the ids have no gaps, then just use a `join`: ``` select t.*, (tnext.count - t.count) as diff from table t join table tnext on t.id = tnext.id - 1; ``` If you just want the sum of the differences, then that is the same as the last value minus the first value (all the intermediate values cancel out in the summation). You can do this with `limit`: ``` select last.count - first.count from (select t.* from table order by id limit 1) as first cross join (select t.* from table order by id desc limit 1) as last; ```
Mysql - Get the difference between two sequential values
[ "mysql", "sql", "sequential" ]
I have a table (source) with payments for a person - called 'Item' in the example below. This table will have payments for each person, added to it over a period. I then generate invoices, which basically take all the payments for a particular person and sum them up into a single row. This must be stored in an invoice table, for auditing reasons. I do this in the example below. What I am missing, though, is that each payment, once assigned to the Invoice table, needs to have the Invoice ID that it was assigned to stored in the Items table. So, see the example below:

```
CREATE TABLE Items
(
ID INT NOT NULL IDENTITY(1,1),
PersonID INT NOT NULL,
PaymentValue DECIMAL(16,2) NOT NULL,
AssignedToInvoiceID INT NULL
)

CREATE TABLE Invoice
(
ID INT NOT NULL IDENTITY(1,1),
PersonID INT NOT NULL,
Value DECIMAL(16,2)
)

INSERT INTO Items (PersonID, PaymentValue) VALUES (1, 100)
INSERT INTO Items (PersonID, PaymentValue) VALUES (2, 132)
INSERT INTO Items (PersonID, PaymentValue) VALUES (2, 65)
INSERT INTO Items (PersonID, PaymentValue) VALUES (1, 25)
INSERT INTO Items (PersonID, PaymentValue) VALUES (3, 69)

SELECT * FROM Items

INSERT INTO Invoice (PersonID, Value)
SELECT PersonID, SUM(PaymentValue)
FROM Items
WHERE AssignedToInvoiceID IS NULL
GROUP BY PersonID

SELECT * FROM Invoice

DROP TABLE Items
DROP TABLE Invoice
```

What I need to do is then update the Items table, to say that the first row has been assigned to Invoice.ID 1, row two was assigned to Invoice ID 2. Row 3 was assigned to Invoice ID 2 as well. Note, there are many other columns in the table. This is a basic example. Simply, I need to record which invoice each source row was assigned to.
The key thing here to ensure payments are correctly linked to invoices is to ensure that: A: No updates are made to Items between reading the unassigned items and updating AssignedToInvoiceID. B: No new invoices are created with the Items being processed before updating AssignedToInvoiceID. As you are updating two tables it will have to be a two step process. To ensure A it will need to be run in a transaction with at least REPEATABLE READ isolation. To ensure B requires a transaction with SERIALIZABLE isolation. See [SET TRANSACTION ISOLATION LEVEL](https://msdn.microsoft.com/en-us/library/ms173763.aspx). It can be done like this:

```
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

DECLARE @newInvoices TABLE (PersonID INT, InvoiceID INT)

INSERT INTO Invoice (PersonID, Value)
OUTPUT inserted.ID, inserted.PersonID INTO @newInvoices(InvoiceID, PersonID)
SELECT PersonID, SUM(PaymentValue)
FROM Items
WHERE AssignedToInvoiceID IS NULL
GROUP BY PersonID

UPDATE Items
SET AssignedToInvoiceID = InvoiceID
FROM Items
INNER JOIN @newInvoices newInvoice ON newInvoice.PersonID = Items.PersonID
WHERE AssignedToInvoiceID IS NULL

COMMIT
```

An alternative if you are using SQL Server 2012 or later is to use the [SEQUENCE](https://msdn.microsoft.com/en-us/library/ff878058.aspx) object; this will allow the Items to be assigned new invoice IDs before the Invoices are created, reducing the locking required. It works like this:

```
-- Run once with your table setup.
CREATE SEQUENCE InvoiceIDs AS INT START WITH 1 INCREMENT BY 1

CREATE TABLE Items
(
ID INT NOT NULL IDENTITY(1,1),
PersonID INT NOT NULL,
PaymentValue DECIMAL(16,2) NOT NULL,
AssignedToInvoiceID INT NULL
)

CREATE TABLE Invoice
(
-- No longer an IDENTITY column
ID INT NOT NULL,
PersonID INT NOT NULL,
Value DECIMAL(16,2)
)

BEGIN TRAN

DECLARE @newInvoiceLines TABLE (PersonID INT, InvoiceID INT, PaymentValue DECIMAL(16,2))

-- Reading and updating AssignedToInvoiceID happens in one query so is thread safe.
UPDATE Items SET AssignedToInvoiceID = newInvoices.InvoiceID OUTPUT inserted.PersonID, inserted.AssignedToInvoiceID, inserted.PaymentValue INTO @newInvoiceLines(PersonID, InvoiceID, PaymentValue) FROM Items INNER JOIN ( SELECT PersonID, NEXT VALUE FOR InvoiceIDs AS InvoiceID FROM Items GROUP BY PersonID ) AS newInvoices ON newInvoices.PersonID = Items.PersonID WHERE Items.AssignedToInvoiceID IS NULL INSERT INTO Invoice (ID, PersonID, Value) SELECT InvoiceID, PersonID, SUM(PaymentValue) FROM @newInvoiceLines GROUP BY PersonID, InvoiceID COMMIT ``` You will still want to use a transaction to ensure the Invoice gets created.
1) Get `MAX(ID)` from `Invoice` table before inserting new rows from `Items` table. Store this value into a variable: `@MaxInvoiceID` 2) After inserting records into `Invoice` table, update `AssignedToInvoiceID` in `Items` table with `Invoice.ID>@MaxInvoiceID` Refer below code: ``` CREATE TABLE #Items ( ID INT NOT NULL IDENTITY(1,1), PersonID INT NOT NULL, PaymentValue DECIMAL(16,2) NOT NULL, AssignedToInvoiceID INT NULL ) CREATE TABLE #Invoice ( ID INT NOT NULL IDENTITY(1,1), PersonID INT NOT NULL, Value DECIMAL(16,2) ) DECLARE @MaxInvoiceID INT; SELECT @MaxInvoiceID=ISNULL(MAX(ID),0) FROM #Invoice SELECT @MaxInvoiceID INSERT INTO #Items (PersonID, PaymentValue) VALUES (1, 100) INSERT INTO #Items (PersonID, PaymentValue) VALUES (2, 132) INSERT INTO #Items (PersonID, PaymentValue) VALUES (2, 65) INSERT INTO #Items (PersonID, PaymentValue) VALUES (1, 25) INSERT INTO #Items (PersonID, PaymentValue) VALUES (3, 69) SELECT * FROM #Items INSERT INTO #Invoice (PersonID, Value) SELECT PersonID, SUM(PaymentValue) FROM #Items WHERE AssignedToInvoiceID IS NULL GROUP BY PersonID SELECT * FROM #Invoice UPDATE Itm SET Itm.AssignedToInvoiceID=Inv.ID FROM #Items Itm JOIN #Invoice Inv ON Itm.PersonID=Inv.PersonID AND Itm.AssignedToInvoiceID IS NULL AND Inv.ID>@MaxInvoiceID SELECT * FROM #Items DROP TABLE #Items DROP TABLE #Invoice ```
UPDATE source table, AFTER Grouping?
[ "sql", "sql-server" ]
I have three tables against which I want to run an unmatched query. Table 1 is my main table; Table 2 and Table 3 are tables that records get added to every day. The result needs to show me what I do not yet have in Table 1 (an exception report). I have written a script that only queries 2 of the tables, but it's not working correctly, because Table 1 contains 8277436 records and when I execute the script the result is 8620530?????? I went wrong somewhere. Script below:

```
Select distinct ID_NUMBER, CLIENT_CODE
from [KAMLS].[dbo].[Retail]
Left Join [22AE5D15].[dbo].[Documents1]
  on [KAMLS].[dbo].[Retail].ID_NUMBER NOT LIKE '%' + [22AE5D15].[dbo].[Documents1].B61DDE99 + '%'
```

The tables:

```
Table 1: [KAMLS].Retail (ID_Number, Client_Code)
Table 2: [22AE5D15].Documents1 (B61DDE99 = ID Number)
Table 3: [22AE5D16].Documents 2 (ID_Number)
```

The result I'm looking for is all documents in Table 1 [KAMLS].Retail that do not appear in either Table 2 or Table 3, and before I forget: why is my script not giving the correct result? I need to learn from my mistakes... Thank you
Based on the comments to my previous answer, this should give you the desired results: ``` DECLARE @Table1 TABLE (ID_NUMBER INT) DECLARE @Table2 TABLE (ID_NUMBER INT) DECLARE @Table3 TABLE (ID_NUMBER INT) INSERT INTO @Table1 VALUES (1),(2),(3),(4),(5),(6) INSERT INTO @Table2 VALUES (1),(2),(3) INSERT INTO @Table3 VALUES (1),(4),(5) ; WITH NotInTable2OrTable3 AS ( SELECT ID_NUMBER FROM @Table1 EXCEPT ( SELECT ID_NUMBER FROM @Table2 UNION ALL SELECT ID_NUMBER FROM @Table3 ) ) SELECT * FROM NotInTable2OrTable3 ```
The reason your script isn't working for you is that you are comparing each row in Table1 to each row in Table2 and joining every time they are not like each other. This isn't very valuable information for your scenario, because if you have 'abc' in table1, that's not like '123' or '456' or '789', so you would join 3 records to 'abc' and it doesn't tell you whether 'abc' was in table2 or not. You can do what you wish in several ways. The CTE posted will work, but I would just use a left join to both tables and take the result where the joins are null.

```
select ID_NUMBER
from Table1 t1
left join Table2 t2 on t1.ID_NUMBER = t2.ID_NUMBER
left join Table3 t3 on t1.ID_NUMBER = t3.ID_NUMBER
where t2.ID_NUMBER is null
and t3.ID_NUMBER is null
```
Not Matching SQL query over 3 tables
[ "sql", "sql-server" ]
I have a table with hierarchical data such as this: ``` LEVEL id_value parent_id_value description 0 1 505 None Top Hierarchy 1 2 1000 505 Sub Hierarchy 2 2 300 505 Other Sub Hierarchy 3 3 0040 300 Rookie hierarchy 4 3 0042 300 Bottom level ``` What I need is a query that will give me this: ``` 0 id_value 3 2 1 1 40 Rookie hierarchy Other Sub Hierarchy Top Hierarchy 2 42 Bottom level Other Sub Hierarchy Top Hierarchy 3 1000 NULL Sub Hierarchy Top Hierarchy ``` It looks like it should be simple but I'm missing something...
I have translated your sample data requirements to an SQL query. Notice that: * the trick is to **join the table to itself again and again**. * Using a table alias on each join is mandatory. * You can tune this query to match your general requirements. * To match your data sample, the second join is a **left join**. Here it is:

```
select coalesce( l3.id_value,l2.id_value) as id_value ,
       l3.description as "3",
       l2.description as "2",
       l1.description as "1"
from t l1
inner join t l2 on l2."LEVEL"=2 and l1.id_value = l2.parent_id_value
left outer join t l3 on l3."LEVEL"=3 and l2.id_value = l3.parent_id_value
where l1.LEVEL = 1
```

[Check it on sqlFiddle](http://sqlfiddle.com/#!4/7742b/5/0)
This query gives all needed informations: ``` select id_value, --parent_id_value piv, description, level tlvl, sys_connect_by_path(description, '/') tpath from hd where connect_by_isleaf = 1 start with parent_id_value not in (select id_value from hd) connect by parent_id_value = prior id_value ``` Result ``` id_value tpath -------- --------------------------------------------------------------- 40 /Top hierarchy/Other sub hierarchy/Rookie hierarchy 42 /Top hierarchy/Other sub hierarchy/Bottom level 1000 /Top hierarchy/Sub hierarchy ``` Now if we assume that maximal hierarchy depth is 3 then this query puts subhierarchies in separate columns. ``` with leaves as ( select id_value, parent_id_value piv, description, level tlvl, sys_connect_by_path(rpad(description, 20), '/') tpath from hd where connect_by_isleaf = 1 start with parent_id_value not in (select id_value from hd) connect by parent_id_value = prior id_value ) select id_value, substr(tpath, 2*20 + 4, 20) l3, substr(tpath, 1*20 + 3, 20) l2, substr(tpath, 0*20 + 2, 20) l1 from leaves ===================================================================== id_value L3 L2 L1 40 Rookie hierarchy Other sub hierarchy Top hierarchy 42 Bottom level Other sub hierarchy Top hierarchy 1000 Sub hierarchy Top hierarchy ``` If description length > 20 change this value to field column length. This can also be easily done in PL/SQL dynamically, e.g. by first counting depth, creating table with proper number of columns through `execute immediate` and putting hierarchies into right columns.
unstack hierarchical data
[ "sql", "oracle" ]
I have a small table which contains group memberships, for which I am struggling to write a query.

```
uid    groupid    userid
1      2          5
2      2          6
3      1          2
4      3          8
5      4          7
```

I was wondering if it is possible to return TRUE if two given user IDs were in the same group?
The following gets all groups that have two given members: ``` select groupid from table t where userid in ($userid1, $userid2) group by groupid having count(distinct userid) = 2; ``` You can turn this into a boolean if you like: ``` select (case when count(*) > 0 then true else false end) from (select groupid from table t where userid in ($userid1, $userid2) group by groupid having count(distinct userid) = 2 ) g; ```
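To make the boolean check concrete, here is a small self-contained sketch of my own (not part of the original answer), run against SQLite via Python's `sqlite3`; `share_group` is a hypothetical helper name standing in for `$userid1`/`$userid2` parameter binding.

```python
import sqlite3

# Hypothetical reproduction of the question's memberships table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memberships (uid INTEGER, groupid INTEGER, userid INTEGER)")
conn.executemany("INSERT INTO memberships VALUES (?, ?, ?)",
                 [(1, 2, 5), (2, 2, 6), (3, 1, 2), (4, 3, 8), (5, 4, 7)])

def share_group(a, b):
    # True if at least one group contains both user ids.
    (n,) = conn.execute("""
        SELECT COUNT(*) FROM (
            SELECT groupid FROM memberships
            WHERE userid IN (?, ?)
            GROUP BY groupid
            HAVING COUNT(DISTINCT userid) = 2
        )
    """, (a, b)).fetchone()
    return n > 0

print(share_group(5, 6), share_group(5, 2))
```

With the sample data this prints `True False`: users 5 and 6 share group 2, while users 5 and 2 sit in different groups.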
``` SELECT groupid, CASE WHEN COUNT(distinct userid) > 1 THEN "TRUE" ELSE "FALSE" END FROM my_table WHERE userid IN ('x', 'y') GROUP BY groupid ``` *Note the x and y should be replaced with the given userids* **TRUE:** ``` SELECT groupid, CASE WHEN COUNT(distinct userid) > 1 THEN 'TRUE' ELSE 'FALSE' END FROM my_table WHERE userid IN (5,6) GROUP BY groupid ``` **FALSE:** ``` SELECT groupid, CASE WHEN COUNT(distinct userid) > 1 THEN 'TRUE' ELSE 'FALSE' END FROM my_table WHERE userid IN (5,2) GROUP BY groupid ``` <http://sqlfiddle.com/#!15/3f156/1>
Compare one field between two rows in the same table
[ "sql", "postgresql", "self-join" ]
I have a query that has to filter results from a text field based on certain keywords used in the text. Currently the SQL statement looks like the one below. `and (name like '%Abc%') or (name like '%XYZ%') or (name like '%CSV%')...` Is there a way to avoid multiple OR conditions and achieve the same results?
A slightly more shorthand way of doing this if you have a large amount of different patterns is to use `EXISTS` and a [table value constructor](https://msdn.microsoft.com/en-GB/library/dd776382.aspx): ``` SELECT * FROM T WHERE EXISTS ( SELECT 1 FROM (VALUES ('abc'), ('xyz'), ('csv')) m (match) WHERE T.Name LIKE '%' + m.Match + '%' ); ``` A similar approach can be applied with [table valued parameters](https://msdn.microsoft.com/en-us/library/bb510489.aspx). Since this is usually a requirement where people want to pass a variable number of search terms for a match it can be quite a useful approach: ``` CREATE TYPE dbo.ListOfString TABLE (value VARCHAR(MAX)); ``` Then a procedure can take this type: ``` CREATE PROCEDURE dbo.GetMatches @List dbo.ListOfString READONLY AS BEGIN SELECT * FROM T WHERE EXISTS ( SELECT 1 FROM @List AS l WHERE T.Name LIKE '%' + l.value + '%' ); END ``` Then you can call this procedure: ``` DECLARE @T dbo.ListOfString; INSERT @T VALUES ('abc'), ('xyz'), ('csv'); EXECUTE dbo.GetMatches @T; ```
You could put your filter keywords into a table or temp table and query them like this: ``` select a.* from table_you_are_searching a inner join temp_filter_table b on charindex(b.filtercolumn,a.searchcolumn) <> 0 ```
Is there a possibility to Avoid multiple "OR" statement in Microsoft SQL?
[ "sql", "sql-server", "sql-server-2008" ]
I am using SQL Server 2008 R2 and I have data in a format `AA-BB-CCCCCCCC-DDDDDDDD-EEEE` stored in one column. I need to separate this into 5 separate columns with T-SQL inline (*I don't want to create a function for this purpose, though I may look into it if there is a serious performance gain, there are permissions issues here that I would have to deal with*). I am creating a view to imitate the layout of a similar table on another server. So, I'd like my output to look like this: ``` +------+------+----------+----------+------+ | Col1 | Col2 | Col3 | Col4 | Col5 | +------+------+----------+----------+------+ | AA | BB | CCCCCCCC | DDDDDDDD | EEEE | +------+------+----------+----------+------+ ``` Now, **I have something that works**, but it seems entirely inefficient to me and executing queries against this view is quite time consuming in my testing. I use a CTE and XML to break out the columns, but this requires things like escaping the ampersand, etc. So, what I currently have is this: ``` WITH cte (ColA, ColB, Colc, etc.) AS ( SELECT CONVERT(XML,'<Account><Attribute>' + REPLACE(REPLACE(MY_COLUMN,'&','&amp;'),'-', '</Attribute><Attribute>') + '</Attribute></Account>') as ACCOUNT_VALUE ) SELECT ACCOUNT_VALUE.value('/Account[1]/Attribute[1]','varchar(2)') as Col1, ACCOUNT_VALUE.value('/Account[1]/Attribute[2]','varchar(2)') as Col2, ACCOUNT_VALUE.value('/Account[1]/Attribute[3]','varchar(8)') as Col3, ACCOUNT_VALUE.value('/Account[1]/Attribute[4]','varchar(8)') as Col4, ACCOUNT_VALUE.value('/Account[1]/Attribute[5]','varchar(4)') as Col5 FROM cte ``` And this returns the data fine, but takes exceptionally long. So, **is there a better way to separate hyphenated data into columns within a T-SQL statement (preferably no functions, etc.)**? This table has millions of rows that need to be broken out into separate columns. I got this far via a few hours of Googling yesterday and haven't really found another option I've been able to make work.
Try this: ``` CREATE TABLE Test ( LongText VARCHAR(400) ) INSERT INTO Test (LongText) VALUES('AA-BB-CCCCCCCC-DDDDDDDD-EEEE'), ('BB-CC-DDDDDDDD-EEEEEEEE-FFFF') ;WITH CTE AS ( --initial part SELECT LongText, 1 AS ColNo, LEFT(LongText, CHARINDEX('-', LongText)-1) AS Part, RIGHT(LongText, LEN(LongText) - CHARINDEX('-', LongText)) AS Remainder FROM Test WHERE CHARINDEX('-', LongText)>0 --recursive part, gets 'Part' till the last '-' UNION ALL SELECT LongText, ColNo + 1 AS ColNo,LEFT(Remainder, CHARINDEX('-', Remainder)-1) AS Part, RIGHT(Remainder, LEN(Remainder) - CHARINDEX('-', Remainder)) AS Remainder FROM CTE WHERE CHARINDEX('-', Remainder)>0 --recursive part, gets the last 'Part' (there is no '-') UNION ALL SELECT LongText, ColNo + 1 AS ColNo,Remainder AS Part,NULL AS Remainder FROM CTE WHERE CHARINDEX('-', Remainder)=0 ) SELECT [1],[2],[3],[4],[5] FROM ( SELECT LongText, ColNo, Part FROM CTE ) AS DT PIVOT(MAX(Part) FOR ColNo IN ([1],[2],[3],[4],[5])) AS PT ``` [SQL Fiddle](http://sqlfiddle.com/#!6/3c13a7/4)
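The recursive-CTE split idea generalises beyond SQL Server. As an illustrative aside of mine (not from the answer), here is the same peel-off-the-first-part loop in SQLite via Python's `sqlite3`, with `instr`/`substr` standing in for `CHARINDEX`/`LEFT`/`RIGHT`:

```python
import sqlite3

# A trailing '-' is appended so every part, including the last, ends in a
# delimiter; each recursive step peels off the text before the first '-'.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE split(part, rest, n) AS (
        SELECT NULL, 'AA-BB-CCCCCCCC-DDDDDDDD-EEEE' || '-', 0
        UNION ALL
        SELECT substr(rest, 1, instr(rest, '-') - 1),
               substr(rest, instr(rest, '-') + 1),
               n + 1
        FROM split
        WHERE rest <> ''
    )
    SELECT part FROM split WHERE part IS NOT NULL ORDER BY n
""").fetchall()
parts = [r[0] for r in rows]
print(parts)
```

This prints `['AA', 'BB', 'CCCCCCCC', 'DDDDDDDD', 'EEEE']`; the T-SQL answer above then pivots these numbered parts into columns, which SQLite would need conditional aggregation to imitate.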
If you always have 5 parts, this kind of approach might be faster than XML handling: ``` select left(MY_COLUMN, P1.P1-1) as PART1, substring(MY_COLUMN, P1.P1+1,P2.P2-P1.P1-1) as PART2, substring(MY_COLUMN, P2.P2+1,P3.P3-P2.P2-1) as PART3, substring(MY_COLUMN, P3.P3+1,P4.P4-P3.P3-1) as PART4, substring(MY_COLUMN, P4.P4+1,8000) as PART5 from MY_TABLE cross apply (select charindex('-', MY_COLUMN) as P1) P1 cross apply (select charindex('-', MY_COLUMN, P1.P1+1) as P2) P2 cross apply (select charindex('-', MY_COLUMN, P2.P2+1) as P3) P3 cross apply (select charindex('-', MY_COLUMN, P3.P3+1) as P4) P4 cross apply (select charindex('-', MY_COLUMN, P4.P4+1) as P5) P5 ```
What is the best way to separate hyphen-delimited data into several columns in SQL Server?
[ "sql", "sql-server", "t-sql" ]
In the table there are several columns; one is a varchar column called "Country", and another is a boolean column called "SomeFlag". I want to filter out every country that has records with SomeFlag values of both 0 and 1. For example,

```
Record1: country is US, SomeFlag is 0 and some other values.
Record2: country is US, SomeFlag is 1 and some other values.
Record3: country is CA, SomeFlag is 1 and some other values.
```

So US is what I want to filter out. How should I construct this SQL query? Thanks in advance.
``` select Country from T group by Country having min(cast(SomeFlag as int)) = max(cast(SomeFlag as int)) ``` or ``` select Country from T group by Country having count(distinct SomeFlag) = 1 ``` or ``` select Country from T where not exists ( select 1 from T where SomeFlag = 0 ) or not exists ( select 1 from T where SomeFlag = 1 ) ``` or ``` select Country from T where Country not in ( select Country from T group by Country having min(cast(SomeFlag as int)) = 0 and max(cast(SomeFlag as int)) = 1 ) ``` Do nulls in SomeFlag come into play?
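As an illustrative check (my addition, with SQLite through Python's `sqlite3` standing in for the server), the `COUNT(DISTINCT ...)` variant applied to the question's example data:

```python
import sqlite3

# Hypothetical reproduction of the question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Country TEXT, SomeFlag INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("US", 0), ("US", 1), ("CA", 1)])

# Keep only countries whose rows carry a single flag value;
# US (which has both 0 and 1) drops out.
rows = conn.execute("""
    SELECT Country
    FROM t
    GROUP BY Country
    HAVING COUNT(DISTINCT SomeFlag) = 1
""").fetchall()
print(rows)
```

This prints `[('CA',)]`, matching the "filter out US" requirement.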
``` "select * from table where (some_flag == '1' or some_flag == '0') and (country == 'US');" ```
The loop and if in MySQL
[ "mysql", "sql" ]
I'm trying to set up a simple set of tables for displaying the results of a tournament - I have the following structure: ``` CREATE TABLE players( id SERIAL PRIMARY KEY, name TEXT); CREATE TABLE matches( id SERIAL PRIMARY KEY, player_one_id INTEGER REFERENCES players, player_two_id INTEGER REFERENCES players, winner_id INTEGER REFERENCES players); ``` And I've inputted some test data, as follows: ``` INSERT INTO players (name) VALUES ('Mike Jones'); INSERT INTO players (name) VALUES ('Albert Awesome'); INSERT INTO players (name) VALUES ('Sad Sally'); INSERT INTO players (name) VALUES ('Lonely Lenny'); INSERT INTO matches (player_one_id, player_two_id, winner_id) VALUES (1,2,1); INSERT INTO matches (player_one_id, player_two_id, winner_id) VALUES (3,4,4); ``` I'm trying to perform a query which gives me the following results for each player: id, name, matched\_won, matches\_played. I have the following query thus far: ``` SELECT players.id, players.name, count(matches.winner_id) as matches_won , count(matches.id) as matches_played FROM players left join matches ON players.id = matches.winner_id GROUP BY players.id ORDER BY matches_won DESC ``` And, unfortunately, I'm getting the incorrect output as follows (there should be 1 matches\_played for each player): ``` id | name | matches_won | matches_played ----+----------------+-------------+---------------- 4 | Lonely Lenny | 1 | 1 1 | Mike Jones | 1 | 1 2 | Albert Awesome | 0 | 0 3 | Sad Sally | 0 | 0 (4 rows) ``` Now, I know the reason for this incorrect output is because of joining on players.id = matches.winner\_id, but, my question is: Is it possible to get these results with just *one* left join query? If so, how? I'd like to avoid doing multiple queries if possible.
Yes. First, you need to understand that `count()` simply counts the number of rows with non-NULL values, so your two counts should be the same. To get the winner, use conditional aggregation: ``` SELECT p.id, p.name, sum(case when m.winner_id = p.id then 1 else 0 end) as matches_won, count(m.id) as matches_played FROM players p left join matches m ON p.id in (m.player_one_id, m.player_two_id) GROUP BY p.id ORDER BY matches_won DESC; ``` You also need to fix the `join` condition. You cannot just join on the winner and expect to get the count of all the matches.
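To see the conditional aggregation work end to end, here is a self-contained sketch of mine (not the answerer's) using SQLite via Python's `sqlite3` with the question's sample data; a secondary `ORDER BY p.id` is added only to make the output deterministic.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE matches (id INTEGER PRIMARY KEY,
                          player_one_id INTEGER, player_two_id INTEGER,
                          winner_id INTEGER);
    INSERT INTO players (name) VALUES ('Mike Jones'), ('Albert Awesome'),
                                      ('Sad Sally'), ('Lonely Lenny');
    INSERT INTO matches (player_one_id, player_two_id, winner_id)
    VALUES (1, 2, 1), (3, 4, 4);
""")

# Join on participation (either slot), then count wins conditionally.
rows = conn.execute("""
    SELECT p.id, p.name,
           SUM(CASE WHEN m.winner_id = p.id THEN 1 ELSE 0 END) AS matches_won,
           COUNT(m.id) AS matches_played
    FROM players p
    LEFT JOIN matches m ON p.id IN (m.player_one_id, m.player_two_id)
    GROUP BY p.id
    ORDER BY matches_won DESC, p.id
""").fetchall()
print(rows)
```

Every player now shows `matches_played = 1`, which is exactly what the question's winner-only join got wrong.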
Sub-select solution: ``` SELECT players.id, players.name, (select count(*) from matches where matches.winner_id = players.id) as matches_won, (select count(*) from matches where players.id in (player_one_id, player_two_id)) as matches_played FROM players ORDER BY matches_won DESC ```
PostgreSQL - Left Join with Bad Count Output
[ "sql", "postgresql", "aggregate-filter" ]
Is there any way to specify the last value to order Mysql results by. Eg if I have the following table ``` id | Colour 1 | Blue 2 | Red 3 | Yellow 4 | Green ``` Could I have the results of my query display 'Red' last ``` SELECT * FROM colours ORDER By colours ASC [but show Red last] ```
You'll need to use some conditional logic in your `ORDER BY`. This will sort the data in the specific order that you want, `Red` always being last: ``` SELECT id, colour FROM colours ORDER BY CASE WHEN colour <> 'Red' THEN 1 ELSE 2 END, colour; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/b5f21/3). This uses a `CASE` expression to assign a value to each row that is used for the ordering. `Red` is assigned a higher value, so it will appear a the end of the list. This could also be written testing for the Colour being equal to `Red` first: ``` SELECT id, colour FROM colours ORDER BY CASE WHEN colour = 'Red' THEN 2 ELSE 1 END, colour; ``` See [Demo](http://sqlfiddle.com/#!2/b5f21/2). Both versions will return: ``` | ID | COLOUR | |----|--------| | 1 | Blue | | 4 | Green | | 5 | Orange | | 6 | Teal | | 3 | Yellow | | 2 | Red | ```
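For a quick, portable check (my sketch; SQLite via Python's `sqlite3` stands in for MySQL, using the question's four-row table rather than the answer's six-row demo data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE colours (id INTEGER PRIMARY KEY, colour TEXT)")
conn.executemany("INSERT INTO colours VALUES (?, ?)",
                 [(1, "Blue"), (2, "Red"), (3, "Yellow"), (4, "Green")])

# The CASE expression sorts 'Red' into a second bucket; everything else
# sorts alphabetically in the first bucket.
rows = conn.execute("""
    SELECT colour FROM colours
    ORDER BY CASE WHEN colour = 'Red' THEN 2 ELSE 1 END, colour
""").fetchall()
print([r[0] for r in rows])
```

This prints `['Blue', 'Green', 'Yellow', 'Red']`, i.e. alphabetical order with `Red` forced last.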
There might be cases where you'd need to do this with a single-column sort. And some queries/platforms require the sorting column to be part of the output. ``` SELECT id, colour, case when colour = 'Red' then 'zzzzz' else '' end + colour as colour_sort FROM colours ORDER By colour_sort ```
Specify value to appear last in ordered results
[ "mysql", "sql" ]
## My problem Let's say I have a query that returns the following data:

```
id date
-- ------
1 2015-01-12
1 ... // here I might have X more rows
1 2015-06-30
2 2015-01-12
2 ... // here I might have Y more rows
2 2015-05-20
...
```

Given that `X, Y >= 120 and X != Y` and the order of the query is `id, date ASC`, I want a way to retrieve record number 120 for id 1 and record number 120 for id 2 (and so on for each different ID), something like:

```
id date
-- --------
1 2015-03-24 // this is the record 120 for id = 1
2 2015-04-26 // this is the record 120 for id = 2
...
```

Notice that the dates don't follow a sequential order (you may have a gap between one row and the next one). **Is there a direct SQL solution for my problem?** (I know I can use VBA to achieve my goal, but I would rather stay with SQL.) As a clarification note, see this example. Given the following resultset:

```
id date
-- ------
1 2015-01-12 // this is record 1 for id = 1
1 2015-01-13 // this is record 2 for id = 1
1 2015-01-20 // this is record 3 for id = 1
1 2015-01-21 // this is record 4 for id = 1
...
1 2015-03-22 // this is record 118 for id = 1
1 2015-03-23 // this is record 119 for id = 1
1 2015-03-24 // this is record 120 for id = 1
1 2015-03-25 // this is record 121 for id = 1
...
1 2015-06-30 // this is the last row for id = 1
2 2015-01-12 // this is record 1 for id = 2
2 2015-01-13 // this is record 2 for id = 2
...
2 2015-04-25 // this is record 120 for id = 2
...
2 2015-05-20 // this is the last record for id = 2
```

The result should be:

```
id date
-- --------
1 2015-03-24
2 2015-04-26
```

Remember, I have at least 120 records for each ID; this is a fact (I have a query that gives only the IDs with more than 119 records). 
## Attempted solution I've tried to play with the `SELECT TOP` directive, but I fail to achieve the results I want, as I cannot apply it directly: I don't want the top 120 and then get the last row, as I want the last one of the TOP 120 for each ID. 
## Edited for (a second) clarification My goal would be to have something like:

```
SELECT id, 120thRow(date)
FROM table
GROUP BY id;
```

unfortunately I don't know how to implement the *120thRow* function in Access.
In the end, I've managed a way to put a row counter for each day and ID, like so:

```
select id, date,
       (
        select count(date)
        from table As t1
        where t1.id = t.id
          and t1.date <= t.date
       ) As rowNum
from table As t
```

From here on it's just a matter of selecting from this resultset the rows which have rowNum = 120, and game over.
Does this work in Access? ``` select t.* from table as t where t.date = (select top 1 date from (select top 120 date from table t2 where t2.id = t.id order by date ) as tt order by date desc ); ``` EDIT: I guess MS Access doesn't allow nesting in the correlation clause. You can do this more painfully as: ``` select t.* from table as t join (select t.id, max(t.date) as maxdate from table as t where t.date = (select top 120 date from table as t2 where t2.id = t.id order by date ) ) tt on t.id = tt.id and t.date = tt.maxdate; ```
Access SQL return nth row for each given field
[ "sql", "ms-access", "ms-access-2003" ]
I have a table which has the transactions. Each transaction is represented by a row. The row has a field TranCode indicating the type of transaction and also the date of transaction is also recorded. Following is the table, and corresponding data. ``` create table t ( id int identity(1,1), TranDate datetime, TranCode int, BatchNo int ) GO insert into t (TranDate, TranCode) VALUES(GETDATE(), 1), (DATEADD(MINUTE, 1, GETDATE()), 1), (DATEADD(MINUTE, 2, GETDATE()), 1), (DATEADD(MINUTE, 3, GETDATE()), 1), (DATEADD(MINUTE, 4, GETDATE()), 2), (DATEADD(MINUTE, 5, GETDATE()), 2), (DATEADD(MINUTE, 6, GETDATE()), 2), (DATEADD(MINUTE, 7, GETDATE()), 2), (DATEADD(MINUTE, 8, GETDATE()), 2), (DATEADD(MINUTE, 9, GETDATE()), 1), (DATEADD(MINUTE, 10, GETDATE()), 1), (DATEADD(MINUTE, 11, GETDATE()), 1), (DATEADD(MINUTE, 12, GETDATE()), 2), (DATEADD(MINUTE, 13, GETDATE()), 2), (DATEADD(MINUTE, 14, GETDATE()), 1), (DATEADD(MINUTE, 15, GETDATE()), 1), (DATEADD(MINUTE, 16, GETDATE()), 1), (DATEADD(MINUTE, 17, GETDATE()), 2), (DATEADD(MINUTE, 18, GETDATE()), 2), (DATEADD(MINUTE, 19, GETDATE()), 1), (DATEADD(MINUTE, 20, GETDATE()), 1), (DATEADD(MINUTE, 21, GETDATE()), 1), (DATEADD(MINUTE, 21, GETDATE()), 1) ``` After the above code, the table contains the following data, well values in the tranDate field will be different for you, but that is fine. 
``` id TranDate TranCode BatchNo ----------- ----------------------- ----------- ----------- 1 2015-02-12 20:40:47.547 1 NULL 2 2015-02-12 20:41:47.547 1 NULL 3 2015-02-12 20:42:47.547 1 NULL 4 2015-02-12 20:43:47.547 1 NULL 5 2015-02-12 20:44:47.547 2 NULL 6 2015-02-12 20:45:47.547 2 NULL 7 2015-02-12 20:46:47.547 2 NULL 8 2015-02-12 20:47:47.547 2 NULL 9 2015-02-12 20:48:47.547 2 NULL 10 2015-02-12 20:49:47.547 1 NULL 11 2015-02-12 20:50:47.547 1 NULL 12 2015-02-12 20:51:47.547 1 NULL 13 2015-02-12 20:52:47.547 2 NULL 14 2015-02-12 20:53:47.547 2 NULL 15 2015-02-12 20:54:47.547 1 NULL 16 2015-02-12 20:55:47.547 1 NULL 17 2015-02-12 20:56:47.547 1 NULL 18 2015-02-12 20:57:47.547 2 NULL 19 2015-02-12 20:58:47.547 2 NULL 20 2015-02-12 20:59:47.547 1 NULL 21 2015-02-12 21:00:47.547 1 NULL 22 2015-02-12 21:01:47.547 1 NULL 23 2015-02-12 21:01:47.547 1 NULL ``` I want a set based solution and not a cursor or row based solution to update the batch number for the rows. For example, the first 4 records should get a batchNo of 1 as they have TranCode as 1, and the next 5 (having tranCode of 2 and are closer to each other in time) should have batchNo as 2, and the next 4 should have 3 and so on. Following is the expected output. 
```
id          TranDate                TranCode    BatchNo
----------- ----------------------- ----------- -----------
1           2015-02-12 20:43:59.123 1           1
2           2015-02-12 20:44:59.123 1           1
3           2015-02-12 20:45:59.123 1           1
4           2015-02-12 20:46:59.123 1           1
5           2015-02-12 20:47:59.123 2           2
6           2015-02-12 20:48:59.123 2           2
7           2015-02-12 20:49:59.123 2           2
8           2015-02-12 20:50:59.123 2           2
9           2015-02-12 20:51:59.123 2           2
10          2015-02-12 20:52:59.123 1           3
11          2015-02-12 20:53:59.123 1           3
12          2015-02-12 20:54:59.123 1           3
13          2015-02-12 20:55:59.123 2           4
14          2015-02-12 20:56:59.123 2           4
15          2015-02-12 20:57:59.123 1           5
16          2015-02-12 20:58:59.123 1           5
17          2015-02-12 20:59:59.123 1           5
18          2015-02-12 21:00:59.123 2           6
19          2015-02-12 21:01:59.123 2           6
20          2015-02-12 21:02:59.123 1           7
21          2015-02-12 21:03:59.123 1           7
22          2015-02-12 21:04:59.123 1           7
23          2015-02-12 21:04:59.123 1           7
```

I have tried very hard with row_number, rank and dense_rank, and none of them came to my rescue. I am looking for a set-based solution as I want really good performance. Your help is very much appreciated.
You could do this using a `Recursive CTE`. I also used the `lead` function to check the next row and determine whether the transcode changed. Query:

```
WITH A AS (
    SELECT id
        ,trancode
        ,trandate
        ,lead(trancode) OVER (ORDER BY id, trancode) leadcode
    FROM #t
)
,cte AS (
    SELECT id
        ,trandate
        ,trancode
        ,lead(trancode) OVER (ORDER BY id, trancode) leadcode
        ,1 batchnum
        ,1 nextbatchnum
        ,id + 1 nxtId
    FROM #t
    WHERE id = 1

    UNION ALL

    SELECT A.id
        ,A.trandate
        ,A.trancode
        ,A.leadcode
        ,nextbatchnum
        ,CASE WHEN A.trancode <> A.leadcode THEN nextbatchnum + 1 ELSE nextbatchnum END nextbatchnum
        ,A.id + 1 nxtid
    FROM A
    INNER JOIN cte B ON A.id = B.nxtId
)
SELECT id
    ,trandate
    ,trancode
    ,batchnum
FROM cte
OPTION (MAXRECURSION 100)
```

Result:

```
id  trandate                trancode  batchnum
1   2015-02-12 10:19:06.717 1         1
2   2015-02-12 10:20:06.717 1         1
3   2015-02-12 10:21:06.717 1         1
4   2015-02-12 10:22:06.717 1         1
5   2015-02-12 10:23:06.717 2         2
6   2015-02-12 10:24:06.717 2         2
7   2015-02-12 10:25:06.717 2         2
8   2015-02-12 10:26:06.717 2         2
9   2015-02-12 10:27:06.717 2         2
10  2015-02-12 10:28:06.717 1         3
11  2015-02-12 10:29:06.717 1         3
12  2015-02-12 10:30:06.717 1         3
13  2015-02-12 10:31:06.717 2         4
14  2015-02-12 10:32:06.717 2         4
15  2015-02-12 10:33:06.717 1         5
16  2015-02-12 10:34:06.717 1         5
17  2015-02-12 10:35:06.717 1         5
18  2015-02-12 10:36:06.717 2         6
19  2015-02-12 10:37:06.717 2         6
20  2015-02-12 10:38:06.717 1         7
21  2015-02-12 10:39:06.717 1         7
22  2015-02-12 10:40:06.717 1         7
23  2015-02-12 10:40:06.717 1         7
```
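The same "increment the batch whenever TranCode changes" idea can also be written without recursion, as a single pass of window functions (LAG to flag the change points, then a running SUM of the flags). A minimal sketch, run here against SQLite (needs 3.25+ for window functions; the table name `tran_log` is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tran_log (id INTEGER PRIMARY KEY, trancode INTEGER)")
# Same trancode sequence as the sample data above.
codes = [1,1,1,1, 2,2,2,2,2, 1,1,1, 2,2, 1,1,1, 2,2, 1,1,1,1]
conn.executemany("INSERT INTO tran_log (id, trancode) VALUES (?, ?)",
                 list(enumerate(codes, start=1)))

# Flag each row whose trancode differs from the previous row's (the first row
# compares against NULL, so it is flagged too); the running SUM of the flags
# is the batch number.
rows = conn.execute("""
    WITH flagged AS (
        SELECT id, trancode,
               CASE WHEN trancode = LAG(trancode) OVER (ORDER BY id)
                    THEN 0 ELSE 1 END AS chg
        FROM tran_log
    )
    SELECT id, trancode, SUM(chg) OVER (ORDER BY id) AS batchno
    FROM flagged
    ORDER BY id
""").fetchall()
batches = [r[2] for r in rows]
```

Unlike the recursive CTE, this needs no `MAXRECURSION` cap and scales linearly with the row count.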
I've managed to get your desired output using a recursive CTE, although it's not optimised, but thought it might be useful to post what I've done to give you something to work with. The issue I have with this is the `GROUP BY` and `MAX` I'm using on the result set to get the correct values. I'm sure it can be done in a better way.

```
;WITH cte AS (
    SELECT ID
        , TranDate
        , TranCode
        , 1 AS BatchNumber
    FROM t

    UNION ALL

    SELECT t.ID
        , t.TranDate
        , t.TranCode
        , CASE WHEN t.TranCode != cte.TranCode THEN cte.BatchNumber + 1
               ELSE cte.BatchNumber END AS BatchNumber
    FROM t
    INNER JOIN cte ON t.id = cte.Id + 1
)
SELECT id
    , trandate
    , trancode
    , MAX(cte.BatchNumber) AS BatchNumber
FROM cte
GROUP BY id, tranDate, trancode
```
Set based solution to generate batch number based on proximity and type of record in SQL server
[ "", "sql", "sql-server", "" ]
I'm trying to get the number of records created each hour but I'm running into trouble getting the results to group correctly. The idea is similar to: [How to count number of records per day?](https://stackoverflow.com/questions/9229862/how-to-count-number-of-records-per-day) However, the field I'm using to group by is a datetime field that records down to the second. This seems to be causing trouble with the GROUP BY statement, as when the query returns there is one row for each second in the specified time period, which is way too much data and will make the work I want to do with the results more difficult than it needs to be (if for no other reason than that it's too many rows to fit on one Excel sheet). My current code is:

```
SELECT ASD, Count(ASD) Num_CR
From DB_Name.Table_Name fcr
Where trunc(fcr.ASD) > to_Char('31-DEC-2014')
And trunc(fcr.ASD) < to_Char('31-JAN-2015')
And fcr.Status_Code = 'C'
Group By ASD
Order By ASD;
```

I've tried changing the GROUP BY to be `trunc(ASD)`, but that results in Toad throwing this error: ORA-00979: not a GROUP BY expression. Thanks in advance!
When you use aggregation, anything in the `select` and `order by` clauses must match what's in the `group by` clause:

```
SELECT trunc(ASD,'hh'), Count(ASD) Num_CR
From DB_Name.Table_Name fcr
Where trunc(fcr.ASD) > to_date('31-DEC-2014')
And trunc(fcr.ASD) < to_date('31-JAN-2015')
And fcr.Status_Code = 'C'
Group By trunc(ASD,'hh')
Order By trunc(ASD,'hh');
```

---

When applied to a date, `trunc` will truncate to the day. To truncate to a different level, specify the format of the element you'd like to truncate to as the second argument (e.g. `'hh'` will truncate to the hour; `'mm'` will truncate to the month).
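For illustration, a rough analogue of `trunc(ASD, 'hh')` in other engines is to format the timestamp down to the hour and group on that. A small sketch in SQLite (via Python), with an invented `events` table standing in for the real one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (asd TEXT)")  # hypothetical table/column
conn.executemany("INSERT INTO events VALUES (?)", [
    ("2015-01-05 09:12:33",),
    ("2015-01-05 09:48:01",),
    ("2015-01-05 10:03:59",),
])

# strftime truncates each timestamp to its hour bucket; grouping on the
# bucket gives one row per hour, like trunc(date, 'hh') in Oracle.
rows = conn.execute("""
    SELECT strftime('%Y-%m-%d %H:00', asd) AS hour_bucket, COUNT(*) AS num_cr
    FROM events
    GROUP BY hour_bucket
    ORDER BY hour_bucket
""").fetchall()
```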
```
SELECT to_char(ASD,'DD-MM-YYYY HH'), Count(ASD) Num_CR
From DB_Name.Table_Name fcr
Where trunc(fcr.ASD) > to_Char('31-DEC-2014')
And trunc(fcr.ASD) < to_Char('31-JAN-2015')
And fcr.Status_Code = 'C'
Group By to_char(ASD,'DD-MM-YYYY HH')
Order By to_char(ASD,'DD-MM-YYYY HH');
```

Quick and dirty :)
Oracle Count number of records each hour
[ "", "sql", "oracle", "group-by", "" ]
I'm trying to put a check constraint on an existing column. Is there any way to achieve this in PostgreSQL?
Use `alter table` to add a new constraint:

```
alter table foo
    add constraint check_positive check (the_column > 0);
```

More details and examples are in the manual: <http://www.postgresql.org/docs/current/static/sql-altertable.html#AEN70043>

**Edit** Checking for specific values is done in the same way, by using an `IN` operator:

```
alter table foo
    add constraint check_positive check (some_code in ('A','B'));
```
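A quick sketch of how such CHECK constraints behave, using SQLite from Python (note SQLite only accepts CHECK constraints at `CREATE TABLE` time — it has no `ALTER TABLE ... ADD CONSTRAINT` — so the table here is created with them; the table and columns are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE foo (
        the_column INTEGER,
        some_code  TEXT,
        CONSTRAINT check_positive CHECK (the_column > 0),
        CONSTRAINT check_code     CHECK (some_code IN ('A', 'B'))
    )
""")

conn.execute("INSERT INTO foo VALUES (5, 'A')")   # satisfies both checks
try:
    conn.execute("INSERT INTO foo VALUES (-1, 'A')")  # violates check_positive
    violated = False
except sqlite3.IntegrityError:
    violated = True

count = conn.execute("SELECT COUNT(*) FROM foo").fetchone()[0]
```

The failing insert raises an integrity error and leaves the table untouched, which is the same behaviour PostgreSQL gives once the constraint is in place.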
If you are okay with (or want) Postgres to generate a constraint name, you can use the following shorthand syntax:

```
ALTER TABLE foo ADD CHECK (column_1 > 2);
```
Check constraint on existing column with PostgreSQL
[ "", "sql", "database", "postgresql", "" ]
I have built a concatenation using SQL (Oracle), but I only want the concatenation to output when the value in the field is not null. I'm effectively building a website URL in the field, but in some cases the link is not yet available, and the concatenation still outputs the prefix (`http://www.`). If the value is null, then it should output null. At the moment I have:

```
SELECT 'http://www.'||LINK AS "URL"
FROM TABLE
```
If selecting only rows from `TABLE` where `LINK IS NOT NULL` isn't an option, you can use `NVL2()` for this. It accepts three arguments: a string, the value to return if the string is not null, and the value to return if the string is null.

```
SELECT NVL2(LINK, 'http://www.'||LINK, NULL) AS "URL"
FROM TABLE;
```
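`NVL2` is Oracle-specific; a portable `CASE` expression gives the same null-preserving behaviour. A sketch in SQLite (via Python), with an invented `sites` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sites (link TEXT)")  # hypothetical table
conn.executemany("INSERT INTO sites VALUES (?)", [("example.com",), (None,)])

# CASE with no ELSE returns NULL, so NULL links stay NULL and only
# non-null links get the prefix, matching NVL2(link, 'http://www.'||link, NULL).
rows = conn.execute("""
    SELECT CASE WHEN link IS NOT NULL THEN 'http://www.' || link END AS url
    FROM sites
    ORDER BY rowid
""").fetchall()
```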
I would go even further. You have Oracle, so you have regular expressions at your disposal (or you do if you have 9i or greater), so you can check whether your link already starts with `http://`:

```
SELECT CASE
         WHEN REGEXP_LIKE(link, '^https?:\/\/') THEN link
         WHEN link IS NOT NULL THEN 'http://www.' || link
       END AS url
FROM mytable;
```

The `CASE` statement will return `NULL` if there is no `ELSE` clause, so you need not add an explicit case for `link IS NULL`. Personally, I would go so far as to make sure that `link` didn't start with `www.` as well, or decide whether it even should.
Only want a Concatenation to appear when the value is not null
[ "", "sql", "oracle", "concatenation", "" ]
I want to retrieve the second 10 UNIQUE rows in my SQL database. For selecting the second 10 rows, I use the next script:

```
SELECT col1, col2
FROM (
    SELECT col1, col2, ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
    FROM MyTable
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 10 AND 19
```

But now I want the UNIQUE rows in that set. When I use `DISTINCT` like in the next example, the `DISTINCT` also goes over `RowNum`, which makes every row a distinct row... how can I exclude the `RowNum` from my `DISTINCT`?

```
SELECT col1, col2
FROM (
    SELECT DISTINCT col1, col2, ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
    FROM MyTable
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 10 AND 19
```
You need to use `group by` instead of `distinct`, as they are applied at different moments of query execution. Of course, since `col1` and `col2` are in an N:1 relation with `ID`, you need to tell SQL which exact `ID` to use when numbering rows. This can probably be `MAX(ID)`. So:

```
SELECT col1, col2
FROM (
    SELECT col1, col2, ROW_NUMBER() OVER (ORDER BY MAX(ID)) AS RowNum
    FROM MyTable
    GROUP BY col1, col2
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 10 AND 19
```
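The GROUP BY + ROW_NUMBER paging pattern can be sketched on a toy table in SQLite (via Python; needs SQLite 3.25+ for window functions). The grouping is split into its own subquery here for portability, since not every engine allows an aggregate directly inside the window's ORDER BY; the data and page bounds are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)", [
    (1, "a", "x"), (2, "a", "x"),        # duplicate pair, MAX(id) = 2
    (3, "b", "y"), (4, "c", "z"),
    (5, "b", "y"),                        # ('b','y') again, MAX(id) = 5
])

# Collapse duplicates with GROUP BY, number the distinct pairs by MAX(id),
# then take one "page" of distinct rows (rownum 2..3 here).
rows = conn.execute("""
    SELECT col1, col2 FROM (
        SELECT col1, col2,
               ROW_NUMBER() OVER (ORDER BY maxid) AS rownum
        FROM (SELECT col1, col2, MAX(id) AS maxid
              FROM mytable GROUP BY col1, col2)
    )
    WHERE rownum BETWEEN 2 AND 3
    ORDER BY rownum
""").fetchall()
```

Each distinct `(col1, col2)` pair now appears once with a single stable row number, so paging never shows the same pair twice.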
If you want to distinguish those 10 rows in the result, you can do it in the outside `SELECT` clause:

```
SELECT DISTINCT col1, col2
FROM (
    SELECT col1, col2, ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
    FROM MyTable
) AS MyDerivedTable
WHERE MyDerivedTable.RowNum BETWEEN 10 AND 19
```

Also, if you use SQL Server 2012, you can use the new OFFSET ... FETCH clause:

```
SELECT DISTINCT col1, col2
FROM (
    SELECT col1, col2
    FROM MyTable
    ORDER BY ID
    OFFSET 10 ROWS FETCH FIRST 10 ROWS ONLY
) AS MyDerivedTable
```
retrieve second 10 UNIQUE rows in SQL
[ "", "sql", "sql-server", "distinct", "between", "rownum", "" ]
I am trying to make one automated statement out of the following queries. All I know is the proposal ID from my UI. I currently run the queries manually like below:

```
select * from tblAssessment where ProposalID like '%0081/2013/0587B%'
select * from tblAssessmentVersion where AssessmentID=587
select * from tblIdentifiedThreatenedSpecies where AssessmentVersionID=731
delete from tblIdentifiedThreatenedSpecies where PercentGain=0 and AssessmentVersionID=731
```

Once I know the proposal ID, I query tblAssessment to get the assessment ID, then use the assessment ID to query tblAssessmentVersion to find the assessment version ID, then use the assessment version ID in tblIdentifiedThreatenedSpecies to delete the records where the PercentGain column is 0. I appreciate all your help, thanks in advance.
You can use `DELETE FROM .. JOIN`:

```
DELETE s
FROM tblIdentifiedThreatenedSpecies s
INNER JOIN tblAssessmentVersion v ON v.AssessmentVersionID = s.AssessmentVersionID
INNER JOIN tblAssessment a ON a.AssessmentID = v.AssessmentID
WHERE a.ProposalID LIKE '%0081/2013/0587B%'
  AND PercentGain = 0
```

---

To check for rows to be deleted, `SELECT` first:

```
SELECT s.*
FROM tblIdentifiedThreatenedSpecies s
INNER JOIN tblAssessmentVersion v ON v.AssessmentVersionID = s.AssessmentVersionID
INNER JOIN tblAssessment a ON a.AssessmentID = v.AssessmentID
WHERE a.ProposalID LIKE '%0081/2013/0587B%'
  AND PercentGain = 0
```
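Not every engine supports `DELETE ... JOIN`; the portable variant pushes the joins into an `IN` subquery. A sketch in SQLite (via Python), with the schema trimmed to just the three linked tables and made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblAssessment (AssessmentID INTEGER PRIMARY KEY, ProposalID TEXT);
    CREATE TABLE tblAssessmentVersion (AssessmentVersionID INTEGER PRIMARY KEY,
                                       AssessmentID INTEGER);
    CREATE TABLE tblIdentifiedThreatenedSpecies (ID INTEGER PRIMARY KEY,
                                                 AssessmentVersionID INTEGER,
                                                 PercentGain REAL);
    INSERT INTO tblAssessment VALUES (587, '0081/2013/0587B'), (588, '0099/2013/0600A');
    INSERT INTO tblAssessmentVersion VALUES (731, 587), (732, 588);
    INSERT INTO tblIdentifiedThreatenedSpecies VALUES
        (1, 731, 0), (2, 731, 2.5), (3, 732, 0);
""")

# Only the zero-gain row belonging to the matching proposal (id 1) is deleted;
# the zero-gain row of the other proposal (id 3) is untouched.
conn.execute("""
    DELETE FROM tblIdentifiedThreatenedSpecies
    WHERE PercentGain = 0
      AND AssessmentVersionID IN (
          SELECT v.AssessmentVersionID
          FROM tblAssessmentVersion v
          JOIN tblAssessment a ON a.AssessmentID = v.AssessmentID
          WHERE a.ProposalID LIKE '%0081/2013/0587B%')
""")
remaining = [r[0] for r in conn.execute(
    "SELECT ID FROM tblIdentifiedThreatenedSpecies ORDER BY ID")]
```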
Join all your SELECTs into one subquery:

```
delete from tblIdentifiedThreatenedSpecies
where PercentGain = 0
  and AssessmentVersionID in (
      select av.Id
      from tblAssessmentVersion av
      join tblAssessment a on a.Id = av.AssessmentID
      where a.ProposalID like '%0081/2013/0587B%'
  )
```
Link all query (select and delete) in one
[ "", "sql", "sql-server", "" ]
I want to sort every group of size N entries out of K entries, where N < K. E.g. let's say we have the table

```
Student (
    StudentID int,
    StudentName varchar(50),
    DepartmentName varchar(50),
    Marks float
)
```

and it has 100 entries. I want to sort every 10 entries with a specific criterion, let's say by Marks ascending. E.g. consider the following data:

```
StudentID StudentName Departmentname Marks
1         A           CS             98.7
2         B           IT             96.78
3         C           Civil          95.67
4         D           Electronics    93.25
```

I have a grid and it has a paging mechanism. If I set pageSize = 2, then on page 1 the data will be

```
StudentID StudentName Departmentname Marks
1         A           CS             98.7
2         B           IT             96.78
```

On page 2 the data will be

```
StudentID StudentName Departmentname Marks
3         C           Civil          95.67
4         D           Electronics    93.25
```

If I sort my grid by StudentName DESC then the whole data set gets sorted, and for page 1 my data will be

```
StudentID StudentName Departmentname Marks
4         D           Electronics    93.25
3         C           Civil          95.67
```

and on page 2 the data will be

```
StudentID StudentName Departmentname Marks
2         B           IT             96.78
1         A           CS             98.7
```

Instead, the expected output on page 1 is

```
StudentID StudentName Departmentname Marks
2         B           IT             96.78
1         A           CS             98.7
```

and on page 2:

```
StudentID StudentName Departmentname Marks
4         D           Electronics    93.25
3         C           Civil          95.67
```

That's why I want to sort data of size 2 here specifically, instead of sorting the whole data set. In general, say we have 100 entries in total and want to sort every 10 entries; then after applying the sort, row no. 13's new row number should still lie between row no. 11 and row no. 20.
Try this:

**Sample data ordered by Marks DESC**

```
StudentID   StudentName DepartmentName  Marks
----------- ----------- --------------- ----------------------
1           A           CS              98.7
2           B           IT              96.78
3           C           Civil           95.67
4           D           Electronics     93.25
```

**SOLUTION**

```
DECLARE @pagSize INT = 2
DECLARE @pageNumber INT = 1 --Page you want to sort

;WITH SortByMarks AS(
    --Sorted first by Marks DESC
    SELECT *,
        rn = ROW_NUMBER() OVER(ORDER BY Marks DESC),
        pageNumber = (ROW_NUMBER() OVER(ORDER BY Marks DESC) - 1)/@pagSize + 1
    FROM Student
)
SELECT StudentId, StudentName, DepartmentName, Marks
FROM SortByMarks
WHERE pageNumber = @pageNumber
ORDER BY StudentName DESC
```

**RESULT**

```
StudentId   StudentName DepartmentName  Marks
----------- ----------- --------------- ----------------------
2           B           IT              96.78
1           A           CS              98.7
```
If I understand the question correctly... in Oracle this is how it happens:

```
select STUDENT_ID from
  (select * from STUDENTS order by STUDENT_MARKS asc)
where ROWNUM <= 10
minus
select STUDENT_ID from
  (select * from STUDENTS order by STUDENT_MARKS asc)
where ROWNUM <= 1
```

Here we are ordering marks in ascending order, then fetching the first ten rows minus the first row (nine rows). If that's not what you are looking for, then explain the question a little more.
sort every group of size n
[ "", "sql", "sql-server", "" ]
I have a table like this. ![enter image description here](https://i.stack.imgur.com/pMTWg.png) How can I remove the records "Jimmy" and "Kenneth" which have the greater Month values? Thank you.
```
delete from tablename t1
where exists (select 1 from tablename t2
              where t1.name = t2.name and t1.month > t2.month)
```

But why doesn't it take year into consideration? Why not use a date datatype? What if two Jimmys are from the same month? Why no unique constraint if no duplicates are allowed?
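The correlated-EXISTS delete runs essentially as-is in SQLite too; a quick sketch via Python, with toy data invented to mirror the screenshot's Name/Month columns. Every row that can "see" a smaller month for the same name deletes itself, so only each name's minimum month survives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, month INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)", [
    ("Jimmy", 2), ("Jimmy", 5),
    ("Kenneth", 1), ("Kenneth", 3),
    ("Ann", 4),
])

# Delete any row for which a same-name row with a smaller month exists.
conn.execute("""
    DELETE FROM people
    WHERE EXISTS (SELECT 1 FROM people t2
                  WHERE t2.name = people.name AND people.month > t2.month)
""")
remaining = sorted(conn.execute("SELECT name, month FROM people"))
```

(MySQL, by contrast, would reject a self-referencing subquery in a DELETE like this; there you would need a derived-table workaround.)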
Use a `CTE` to delete the duplicate records:

```
;with cte as
(
    select Rn = row_number() over (partition by name, post order by month ASC), *
    from yourtable
)
delete from cte where rn > 1
```
How to delete duplicate record which has one unique column
[ "", "sql", "sql-server", "duplicates", "" ]
I am using SQL Server 2008 R2. I'd like to have a WHERE clause on a column that is derived. I have something like this:

```
SELECT ActualColumn1,
    CASE WHEN condition THEN value1 ELSE value2 END AS DerivedColumn1
FROM ...
WHERE DerivedColumn1 <> 'Foo' -- I'd like to filter on the derived column
```

Running this query reports the error "Invalid column name DerivedColumn1".
You cannot refer to an alias from the `WHERE` directly (you *could* from the `ORDER BY`), so you have to use a sub-query or CTE (or repeat the `CASE WHEN` in the `WHERE`):

```
WITH CTE AS
(
    SELECT ActualColumn1,
        CASE WHEN condition THEN value1 ELSE value2 END AS DerivedColumn1
    FROM ...
)
SELECT ActualColumn1, DerivedColumn1
FROM CTE
WHERE DerivedColumn1 <> 'Foo'
```

Related: [Reference alias (calculated in SELECT) in WHERE clause](https://stackoverflow.com/questions/11182339/reference-alias-calculated-in-select-in-where-clause)
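The same CTE pattern works in SQLite: the alias defined inside the CTE's SELECT is a real column name in the outer query, so it can be filtered on. A minimal sketch via Python, with an invented one-column table and placeholder `'Foo'`/`'Bar'` values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (actual INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (15,)])

# The derived column exists only inside the CTE, but the outer WHERE can
# reference it by name since the CTE materialises it as a regular column.
rows = conn.execute("""
    WITH cte AS (
        SELECT actual,
               CASE WHEN actual > 10 THEN 'Foo' ELSE 'Bar' END AS derivedcolumn1
        FROM t
    )
    SELECT actual, derivedcolumn1 FROM cte
    WHERE derivedcolumn1 <> 'Foo'
""").fetchall()
```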
You can't, but you can put case in a subquery or in a common table expression. Check also this question for more information: [Referencing a calculated column in the where clause SQL](https://stackoverflow.com/questions/9720790/referencing-a-calculated-column-in-the-where-clause-sql)
How to filter data in a WHERE clause based on a derived column name?
[ "", "sql", "sql-server-2008-r2", "" ]
I am working in SQL Server 2008. I am trying to test whether a string (varchar) has only digit characters (0-9). I know that the ISNUMERIC function can give spurious results. (My data can possibly have $ signs, which should not pass the test.) So, I'm avoiding that function. I already have a test to see if a string has any non-digit characters, i.e.,

```
some_column LIKE '%[^0123456789]%'
```

I would think that the only-digits test would be something similar, but I'm drawing a blank. Any ideas?
Use `NOT LIKE`:

```
where some_column NOT LIKE '%[^0-9]%'
```

## Demo

```
declare @str varchar(50)='50' --'asdarew345'
select 1 where @str NOT LIKE '%[^0-9]%'
```
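The `[^0-9]` character class is a T-SQL extension to LIKE; in SQLite the analogous operator is `GLOB`, which also supports bracket classes. A sketch via Python (made-up sample values; note the empty string contains no non-digit to match, so it needs an explicit guard):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vals (s TEXT)")
conn.executemany("INSERT INTO vals VALUES (?)",
                 [("50",), ("$50",), ("12a",), ("007",), ("",)])

# NOT GLOB '*[^0-9]*' keeps rows with no non-digit character anywhere;
# s <> '' excludes the empty string, which would otherwise slip through.
digits_only = [r[0] for r in conn.execute("""
    SELECT s FROM vals
    WHERE s NOT GLOB '*[^0-9]*' AND s <> ''
    ORDER BY rowid
""")]
```

`$50` and `12a` are rejected, which is exactly the case where `ISNUMERIC` would mislead.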
There is a system function called [ISNUMERIC](https://msdn.microsoft.com/en-us/library/ms186272.aspx) for SQL 2008 and up. An example:

```
SELECT myCol
FROM mTable
WHERE ISNUMERIC(myCol) <> 1;
```

I did a couple of quick tests and also looked further into the docs: ISNUMERIC returns 1 when the input expression evaluates to a valid numeric data type; otherwise it returns 0. This means it is fairly predictable; for example `-9879210433` would pass but `987921-0433` does not. `$9879210433` would pass but `9879210$433` does not. So using this information you can weed out values based on the [list of valid currency symbols](https://msdn.microsoft.com/en-us/library/ms179882.aspx) and the `+` & `-` characters.
SQL Server : How to test if a string has only digit characters
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm trying to run a SQL statement that joins 2 tables, then does a DISTINCT on the ID column, and runs a WHERE clause to filter the data down to one hour of the day in question. The result will show me an hour out of the day with the duplicates removed, so that it gives me unique records. I have seen a number of posts, some helpful and some confusing. This is what I have so far:

```
select DISTINCT FinalID, finaltime, finalos
from dbo.FinalList
join dbo.Users on dbo.FinalList.FinalID = dbo.Users.usersid
WHERE FinalDate >= '2014-07-01' and finaldate <= '2014-07-01'
  and finaltime >= '00:00:00' and FinalTime <= '00:59:59'
order by FinalDate asc, FinalTime asc
```

If I take away finaltime and finalos and leave `select DISTINCT FinalID`, I get the correct numbers. But I wanted to see the info from FinalTime and FinalOS, and as soon as I add these it tries to do a distinct on those columns also and I get dodgy results. Do I need a subquery?
Well, you have to decide which values you want. Perhaps using `group by` will provide what you want:

```
select FinalID, min(finaltime), max(finaltime), min(finalos), max(finalos)
from dbo.FinalList
join dbo.Users on dbo.FinalList.FinalID = dbo.Users.usersid
where FinalDate >= '2014-07-01' and finaldate <= '2014-07-01'
  and finaltime >= '00:00:00' and FinalTime <= '00:59:59'
group by FinalId;
```
You don't say how you will decide among various records with the same ID but different values for `finaltime` and `finalos`. I am going to assume that your final time is your latest time (the largest) and that you want the `finalos` value that goes along with that final time.

```
; WITH A as (
    select FinalID, Finaltime, Finalos
    from dbo.FinalList
    join dbo.Users on dbo.FinalList.FinalID = dbo.Users.usersid
    WHERE FinalDate >= '2014-07-01' and finaldate <= '2014-07-01'
      and finaltime >= '00:00:00' and FinalTime <= '00:59:59'
)
, B as (
    SELECT A.FinalID, A.Finaltime, A.FinalOS,
        RowNumber = ROW_NUMBER() OVER (PARTITION BY A.FinalID
                                       ORDER BY FinalTime DESC)
    FROM A
)
SELECT FinalID, FinalTime, FinalOS
FROM B
WHERE B.RowNumber = 1
ORDER BY FinalTime, FinalOS
```

ETA: Fixed syntax.
SQL Unique Distinct Column 1 and not the other columns
[ "", "sql", "sql-server-2008", "unique", "distinct", "" ]
I have a problem with a stored procedure in SQL Server, there is a table with two datetime columns, a start time and an end time that I am filtering on based on location and entity. In a location report, when the filtering conditions for Start time and End time are set, only location records where the start time is at least as great as the filtering Start time and the end time is no later than the filtering End time are considered in building the report. For example if in reality John Doe was in Room A from 8:30 to 9:30 and then in Room B from 9:30 until 10:30, a report of locations from 9:00 to 10:00 will include no record of John Doe's whereabouts. The desired behavior is that location intervals (as opposed to individual location records) that begin before the End time and end after the Start time should be included. In terms of presentation, in the case of John Doe above, the output report should show John Doe in Room A from 9:00 to 9:30 and in Room B from 9:30 to 10:00, imposing the filtering limits when the limits of the location interval are not within the filtering constraints. Is this at all possible? 
If there is any other information needed please let me know. At the moment I'm using a basic `AND locationchangehistory.starttime >= @Starttime AND locationchangehistory.endtime <= @Endtime`. This is the full stored procedure the company is using; I hope the formatting comes out correctly:

```
@Asset Varchar (MAX) = NULL OUTPUT,
@Location Varchar (MAX) = NULL OUTPUT,
@Ward Varchar (MAX) = NULL OUTPUT,
@Zone Varchar (MAX) = NULL OUTPUT,
@Floor Varchar (MAX) = NULL OUTPUT,
@Starttime datetime OUTPUT,
@Endtime datetime OUTPUT,
@Top int,
@FacilityID int
AS
SELECT DISTINCT TOP (@Top)
    location.name AS 'Location',
    monitoredentity.name AS 'Asset',
    zone.name AS 'Zone',
    floor.name AS 'Floor',
    ward.name AS 'Area',
    locationchangehistory.starttime AS 'Starttime',
    locationchangehistory.endtime AS 'Endtime',
    CONVERT(varchar(max), DATEDIFF(SECOND, locationchangehistory.starttime, locationchangehistory.endtime) / 3600)
      + ':' + RIGHT('0' + CONVERT(varchar(2), DATEDIFF(SECOND, locationchangehistory.starttime, locationchangehistory.endtime) % 3600 / 60), 2)
      + ':' + RIGHT('0' + CONVERT(varchar(2), DATEDIFF(SECOND, locationchangehistory.starttime, locationchangehistory.endtime) % 60), 2) AS 'TimeInPlace'
FROM floor
INNER JOIN zone ON zone.floor = floor.id
INNER JOIN ward ON zone.id = ward.zone
INNER JOIN location ON ward.id = location.ward
INNER JOIN locationchangehistory ON location.id = locationchangehistory.location
INNER JOIN monitoredentity ON monitoredentity.id = locationchangehistory.entity
WHERE (monitoredentity.type = 4
    AND floor.facilityid = @FacilityID
    AND zone.facilityid = @FacilityID
    AND ward.facilityid = @FacilityID
    AND Location.facilityid = @FacilityID
    AND locationchangehistory.facility = @FacilityID
    AND monitoredentity.facilityid = @FacilityID
    AND charindex(',' + cast(monitoredentity.id AS VARCHAR(MAX)) + ',', ',' + @Asset + ',') > 0
    AND locationchangehistory.starttime >= @Starttime
    AND locationchangehistory.endtime <= @Endtime)
    AND ((charindex(',' + cast(Location.id AS VARCHAR(MAX)) + ',', ',' + @location + ',') > 0
        OR charindex(',' + cast(Ward.id AS VARCHAR(MAX)) + ',', ',' + @Ward + ',') > 0
        OR charindex(',' + cast(zone.id AS VARCHAR(MAX)) + ',', ',' + @Zone + ',') > 0)
        OR charindex(',' + cast(floor.id AS VARCHAR(MAX)) + ',', ',' + @Floor + ',') > 0)
ORDER by locationchangehistory.starttime DESC
```
I just changed your SELECT and WHERE; try this. The filter now keeps any interval that overlaps the window (starts before `@Endtime` and ends after `@Starttime`), and the CASE expressions clamp the reported start and end to the window edges:

```
SELECT DISTINCT TOP (@Top)
    location.name AS 'Location',
    monitoredentity.name AS 'Asset',
    zone.name AS 'Zone',
    floor.name AS 'Floor',
    ward.name AS 'Area',
    CASE WHEN locationchangehistory.starttime > @Starttime
         THEN locationchangehistory.starttime
         ELSE @Starttime END 'Starttime',
    CASE WHEN locationchangehistory.endtime < @Endtime
         THEN locationchangehistory.endtime
         ELSE @Endtime END 'Endtime',
    CONVERT(varchar(max), DATEDIFF(SECOND, locationchangehistory.starttime, locationchangehistory.endtime) / 3600)
      + ':' + RIGHT('0' + CONVERT(varchar(2), DATEDIFF(SECOND, locationchangehistory.starttime, locationchangehistory.endtime) % 3600 / 60), 2)
      + ':' + RIGHT('0' + CONVERT(varchar(2), DATEDIFF(SECOND, locationchangehistory.starttime, locationchangehistory.endtime) % 60), 2) AS 'TimeInPlace'
FROM floor
INNER JOIN zone ON zone.floor = floor.id
INNER JOIN ward ON zone.id = ward.zone
INNER JOIN location ON ward.id = location.ward
INNER JOIN locationchangehistory ON location.id = locationchangehistory.location
INNER JOIN monitoredentity ON monitoredentity.id = locationchangehistory.entity
WHERE locationchangehistory.starttime < @Endtime
  AND locationchangehistory.endtime > @Starttime
  AND (monitoredentity.type = 4
    AND floor.facilityid = @FacilityID
    AND zone.facilityid = @FacilityID
    AND ward.facilityid = @FacilityID
    AND Location.facilityid = @FacilityID
    AND locationchangehistory.facility = @FacilityID
    AND monitoredentity.facilityid = @FacilityID
    AND charindex(',' + cast(monitoredentity.id AS VARCHAR(MAX)) + ',', ',' + @Asset + ',') > 0
    --AND locationchangehistory.starttime >= @Starttime
    --AND locationchangehistory.endtime <= @Endtime
    AND ((charindex(',' + cast(Location.id AS VARCHAR(MAX)) + ',', ',' + @location + ',') > 0
        OR charindex(',' + cast(Ward.id AS VARCHAR(MAX)) + ',', ',' + @Ward + ',') > 0
        OR charindex(',' + cast(zone.id AS VARCHAR(MAX)) + ',', ',' + @Zone + ',') > 0)
        OR charindex(',' + cast(floor.id AS VARCHAR(MAX)) + ',', ',' + @Floor + ',') > 0))
```
If I've gotten you right, you just need an OR operator. With an AND operator you require BOTH boolean expressions to be true. E.g. `A = 1 AND B = 1` means: check whether A is equal to 1, and only if that holds is it useful to check whether B is equal to 1 too. If the first condition is not true, the row is rejected regardless of the second condition, even if that one was true. An OR operation will check whether either of the two conditions is true, and return the row if there was a hit for one of them. I like the following explanation somehow:

```
- compare AND to a multiplication (*)
- compare OR to an add operation (+)
- 1 and 0 are our boolean values
- so 0 * 1 = 0 and vice versa, but 1 * 1 = 1
- 1 + 0 = 1, 0 + 1 = 1 and 1 + 1 = 1
```
SQL Stored Procedure datetime filter by location
[ "", "sql", "sql-server", "datetime", "filter", "" ]
I have string date in `yymmdd` format, for example `150202` I want to convert this string into a valid date in format `yyyymmdd`, e.g. `20150202`. Thanks in advance.
Convert your string to datetime and then do what you want with it:

```
declare @dt varchar(6) = '150213'
select CONVERT(datetime, @dt, 112)
```

---

Do another `CONVERT` to transform it to `yyyymmdd` format:

```
SELECT CONVERT(VARCHAR(10), CONVERT(DATETIME, @dt, 112), 112)
```
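When the reformatting happens client-side instead, the same round trip (parse `yymmdd`, re-emit `yyyymmdd`) is a two-step conversion in Python; like `CONVERT(..., 112)`, parsing first also rejects impossible dates. A small sketch:

```python
from datetime import datetime

def yymmdd_to_yyyymmdd(s: str) -> str:
    # %y maps two-digit years to 1969-2068, so '15' becomes 2015.
    return datetime.strptime(s, "%y%m%d").strftime("%Y%m%d")

converted = yymmdd_to_yyyymmdd("150202")

try:
    yymmdd_to_yyyymmdd("150230")   # Feb 30 does not exist
    rejected = False
except ValueError:
    rejected = True
```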
This may work:

```
select CONVERT(datetime, '150202', 112)
```

For all date conversions: <http://www.sqlusa.com/bestpractices/datetimeconversion/>
How to convert string date into valid date format in SQL Server 2008?
[ "", "sql", "sql-server", "sql-server-2008", "" ]