I want to use `IFNULL()` in such a way that I can select the record containing NULL or, if a value is present, then select the record matching a particular value. My query is: ``` SELECT (@v:=2) AS Row, CL.LeaveTypeId, CL.NumberOfLeave FROM `CompanyLeave` CL WHERE(IFNULL(CL.EmploymentType,0)=3); ``` The column `EmploymentType` can contain either an `Integer` or `NULL`. I want to select the record matching the specified value, or, if none matches, then the record containing `NULL`.
I am interpreting the question as a prioritization. If a record with `3` exists, choose that. Otherwise, choose the one that is `NULL`, if it exists. If so, this might do what you want: ``` SELECT (@v:=2) AS Row, CL.LeaveTypeId, CL.NumberOfLeave FROM `CompanyLeave` CL WHERE CL.EmploymentType = 3 or CL.EmploymentType IS NULL ORDER BY (CL.EmploymentType = 3) DESC LIMIT 1; ``` This will return the row with `3`, if present. Otherwise, it will return a row with `NULL`, if one exists.
The expression `IFNULL(CL.EmploymentType, 3)` basically means: if `CL.EmploymentType IS NULL` then use `3` instead. The original value of `CL.EmploymentType` is used if it is not `NULL`. If I understand your question correctly, you need to select the rows having `NULL` or `3` in the column `CL.EmploymentType`. The query is: ``` SELECT (@v:=2) AS Row, CL.LeaveTypeId, CL.NumberOfLeave FROM `CompanyLeave` CL WHERE IFNULL(CL.EmploymentType, 3) = 3; ``` **Update:** If only one row must be returned (the one having `3` being preferred over those having `NULL`) then the rows must be sorted using a criterion that puts the `NOT NULL` value in front, and a `LIMIT 1` clause must be added. MySQL [documentation about `NULL`](http://dev.mysql.com/doc/refman/5.7/en/working-with-null.html) says: > When doing an `ORDER BY`, `NULL` values are presented first if you do `ORDER BY ... ASC` and last if you do `ORDER BY ... DESC`. The updated query is: ``` SELECT (@v:=2) AS Row, CL.LeaveTypeId, CL.NumberOfLeave FROM `CompanyLeave` CL WHERE IFNULL(CL.EmploymentType, 3) = 3 ORDER BY CL.EmploymentType DESC LIMIT 1 ```
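For readers who want to poke at the prioritization trick, here is a minimal sketch using Python's built-in `sqlite3` (SQLite's `IFNULL` and `NULL` sort order behave like MySQL's here); the table contents are made up:

```python
import sqlite3

# Sketch of the "prefer 3, fall back to NULL" query on a made-up table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CompanyLeave (LeaveTypeId INT, NumberOfLeave INT, EmploymentType INT)")
conn.executemany("INSERT INTO CompanyLeave VALUES (?, ?, ?)",
                 [(1, 10, None), (2, 20, 3), (3, 30, 5)])

query = """
    SELECT LeaveTypeId, NumberOfLeave
    FROM CompanyLeave
    WHERE IFNULL(EmploymentType, 3) = 3   -- match 3 or NULL
    ORDER BY EmploymentType DESC          -- NULL sorts last in DESC
    LIMIT 1
"""
row = conn.execute(query).fetchone()       # the row with EmploymentType = 3

conn.execute("DELETE FROM CompanyLeave WHERE EmploymentType = 3")
fallback = conn.execute(query).fetchone()  # now the NULL row wins
```

Deleting the matching row and re-running shows the fallback behavior in action.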
How can I use IFNULL in a WHERE clause
[ "mysql", "sql", "ifnull" ]
I have a table of doctor names and states. ``` f_name | l_name | state MICHAEL | CRANE | HAL | CRANE | MD THOMAS | ROMINA | DE ``` And so on. What I want is to get all doctors that are NOT in MD. However, if I write this expression I'm missing those with NULL values for state. ``` SELECT * FROM doctors WHERE state NOT IN ('MD') ``` I don't understand the issue. I was able to fix it by adding ``` OR state IS NULL ``` Obviously it has something to do with NOT IN (or IN) not handling NULL. Can anyone explain this for me? Is there an alternative for what I was trying to do? Thanks
Yes, there is an alternative - you would use the `NVL()` function (or `COALESCE()` if you want to stick to the ANSI standard): ``` SELECT * FROM doctors WHERE NVL(state, '@@') NOT IN ('MD') ``` However you don't really need to use `NOT IN` here - it's only necessary when you have multiple values, e.g.: ``` SELECT * FROM doctors WHERE NVL(state, '@@') NOT IN ('MD','PA') ``` With one value you can just use `=` (or in this case, `!=` or `<>`): ``` SELECT * FROM doctors WHERE NVL(state, '@@') != 'MD' ``` In Oracle SQL, `NULL` can't be compared to other values (not even other `NULL`s). So `WHERE NULL = NULL`, for example, will return zero rows. You do `NULL` comparisons with `IS NULL` and `IS NOT NULL`.
As noted already, you don't know that Michael Crane's state isn't Maryland. It's NULL, which can be read as representing "don't know". It might be Maryland, or it might not be. `NOT IN ('MD')` only finds those values *known* not to be `'MD'`. If you have a filter `WHERE x`, you can use `MINUS` to find exactly those records where `x` is not true (where `x` is either false or unknown). ``` select * from doctors minus select * from doctors where state in ('MD'); ``` This has one big advantage over anything involving `IS NULL` or `NVL`: it's immediately obvious exactly which records you *don't* want to see. You don't have to worry about accidentally missing one case where `NULL` isn't covered in your condition, and you don't have to worry about records that happen to match whatever dummy value you use with `NVL`. It's generally not good for performance on Oracle, accessing the table twice, but for one-off queries, depending on the table size, the time saved writing the query can be more than the added execution time.
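The three-valued-logic behavior and the set-difference workaround are easy to reproduce with Python's built-in `sqlite3` (SQLite spells Oracle's `MINUS` as `EXCEPT`); the data below is the question's sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doctors (f_name TEXT, l_name TEXT, state TEXT)")
conn.executemany("INSERT INTO doctors VALUES (?, ?, ?)",
                 [("MICHAEL", "CRANE", None),
                  ("HAL", "CRANE", "MD"),
                  ("THOMAS", "ROMINA", "DE")])

# NULL NOT IN ('MD') evaluates to unknown, so Michael silently disappears.
missing = conn.execute(
    "SELECT f_name FROM doctors WHERE state NOT IN ('MD')").fetchall()

# Handling NULL explicitly brings him back.
fixed = conn.execute(
    "SELECT f_name FROM doctors WHERE state NOT IN ('MD') OR state IS NULL"
).fetchall()

# The set-difference form needs no NULL special-casing at all.
diff = conn.execute("""
    SELECT f_name FROM doctors
    EXCEPT
    SELECT f_name FROM doctors WHERE state IN ('MD')
    ORDER BY f_name
""").fetchall()
```

The `EXCEPT` result contains both Michael and Thomas without ever mentioning `NULL`.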
Missing records when adding "NOT IN"
[ "sql", "oracle" ]
Suppose I had a table with the following format: ``` | dbo.ROUTES | ---------------------------------------- | ID | ROUTE | LOWER_LIMIT | UPPER_LIMIT | ---------------------------------------- | 0 | A | 0 | 10 | | 1 | B | 11 | 500 | | 2 | C | 600 | 1000 | ``` How could I find any number ranges that aren't covered by a route entry? i.e. For the example above, I would need to be able to see that there is no entry that covers 501 - 599. We're currently using this layout, albeit with 4 or 5 other columns with various criteria, and we've found that (as you'd expect) as the table has both grown, and had the lower and upper thresholds updated, we're starting to see voids and overlaps. I know this really falls down to poor design, but until we have the resources to improve it, we could do with something in the interim that could at least help us tidy the tables manually. Thanks,
Since you cannot use lead/lag functions, I have used an alternate way to achieve this. Edit the boundary conditions in the output columns (missing\_val, overlapping) as per your need by adding/subtracting 1. Input: ``` ID LOWER_LIMIT UPPER_LIMIT 0 0 10 1 11 500 2 600 1000 3 980 1100 ``` Output: ``` ID LOWER_LIMIT UPPER_LIMIT MISSING_VAL OVERLAPPING 0 0 10 0 0 1 11 500 500-600 0 2 600 1000 0 980-1000 3 980 1100 0 0 ``` Query: ``` SELECT ID, LOWER_LIMIT, UPPER_LIMIT, CASE WHEN UPPER_LIMIT+1=NEXT_LOWER_VAL THEN '0' WHEN UPPER_LIMIT+1< NEXT_LOWER_VAL THEN UPPER_LIMIT||'-'||NEXT_LOWER_VAL ELSE '0' END AS MISSING_VAL, CASE WHEN UPPER_LIMIT+1= NEXT_LOWER_VAL THEN '0' WHEN UPPER_LIMIT+1> NEXT_LOWER_VAL THEN NEXT_LOWER_VAL||'-'||UPPER_LIMIT ELSE '0' END AS OVERLAPPING FROM ( SELECT T1.*, (SELECT MIN(LOWER_LIMIT) FROM TEST_T T WHERE T.ID<> T1.ID AND T.LOWER_LIMIT> T1.LOWER_LIMIT) AS NEXT_LOWER_VAL FROM TEST_T T1) SUB ```
This should show rows that have gaps on either side of it: ``` SELECT * FROM ROUTES WHERE NOT Exists(SELECT ID FROM ROUTES as sub WHERE sub.Lower_Limit = ROUTES.Upper_Limit + 1) OR NOT Exists(SELECT ID FROM ROUTES as sub1 WHERE sub1.Upper_Limit = ROUTES.Lower_Limit - 1) ```
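As a quick sanity check, here is the same `NOT EXISTS` idea in Python's built-in `sqlite3`, extended as a sketch (with hypothetical column aliases `gap_start`/`gap_end`) to report the bounds of each gap in the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ROUTES (ID INT, ROUTE TEXT, LOWER_LIMIT INT, UPPER_LIMIT INT)")
conn.executemany("INSERT INTO ROUTES VALUES (?, ?, ?, ?)",
                 [(0, "A", 0, 10), (1, "B", 11, 500), (2, "C", 600, 1000)])

# For every row whose upper edge has no immediate successor (and which is not
# the overall maximum), compute where the uncovered range starts and ends.
gaps = conn.execute("""
    SELECT r.ROUTE,
           r.UPPER_LIMIT + 1 AS gap_start,
           (SELECT MIN(s.LOWER_LIMIT) - 1 FROM ROUTES s
             WHERE s.LOWER_LIMIT > r.UPPER_LIMIT) AS gap_end
    FROM ROUTES r
    WHERE NOT EXISTS (SELECT 1 FROM ROUTES s
                       WHERE s.LOWER_LIMIT = r.UPPER_LIMIT + 1)
      AND r.UPPER_LIMIT < (SELECT MAX(UPPER_LIMIT) FROM ROUTES)
""").fetchall()
```

For the sample data this reports route B followed by the uncovered range 501-599.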
Finding gaps or overlaps in ranges
[ "sql", "sql-server" ]
I have a report in Reporting Services; it works on my local PC. But if I try to run it in Internet Explorer, I get this error message (translated from Italian): ``` Error while processing the report. (rsProcessingAborted) Query execution failed for dataset 'chartArea_monthly'. (rsErrorExecutingCommand) For more information about this error, navigate to the report server on the local server or enable remote errors. ``` I have seen that the error is in the query that populates my chart. The query is this: ``` --Modified by Somosree Banerjee (IBM) for SRQ00409010 on 11-15-2011 -- NOTE: it is strongly recommended to use MS SQL Server Management studio (copy + paste) to edit this query --CREATED ON Michele Castriotta use iMELReporting DECLARE @TemporaryTable TABLE ( Mese NVARCHAR(2), Year NVARCHAR(100), Value DECIMAL(12,5), Target1 DECIMAL(12,5), Target2 DECIMAL(12,5), Target3 DECIMAL(12,5), Target4 DECIMAL(12,5), Target DECIMAL(12,5) ) -- Get PO start and end time and the linkupID DECLARE @DATA_START AS DATETIME DECLARE @DATA_END AS DATETIME DECLARE @Date as DATETIME SET @Date = '2015-2-1' DECLARE @ProductionLine AS NVARCHAR(100) SET @ProductionLine = 'COMBINER001' --THE USER SELECTS A DATE; I SET THE DATE TO THE FIRST DAY OF 3 MONTHS BEFORE THIS DATE --THE CURRENT MONTH IS NOT CALCULATED SET @DATA_START = DATEADD(MONTH,DATEDIFF(MONTH,0,DATEADD(MM,-3,@Date)),0) --THE USER SELECTS A DATE; I SET THE DATE TO THE FIRST DAY OF 1 MONTH BEFORE THIS DATE --THE CURRENT MONTH IS NOT CALCULATED SET @DATA_END = DATEADD(MONTH, 1, DATEADD(MONTH,DATEDIFF(MONTH,0,DATEADD(MM,-1,@Date)),0) - DAY(DATEADD(MONTH,DATEDIFF(MONTH,0,DATEADD(MM,-1,@Date)),0)) + 1) -1 SET @DATA_END =DATEADD(second,86399,@DATA_END) --THE USER SELECTS A DATE; I SET THE DATE TO THE LAST DAY OF THE WEEK --SELECT @DATA_START,@DATA_END INSERT INTO @TemporaryTable (Mese,Year,Value,Target1,Target2,Target3,Target4,Target) SELECT --DATENAME(MONTH,MONTH(k.Data)), MONTH(k.Data), YEAR(k.Data), AVG(k.Value) * 100, t.Red, t.Yellow, 
t.Lime, t.CornflowerBlue, target FROM KPI_Value k LEFT JOIN [iMELReporting].[dbo].[Target] t ON (k.Machine = t.Machine AND k.KPI = t.KPI) WHERE k.KPI = 3 AND k.Machine = @ProductionLine AND k.Data BETWEEN @DATA_START AND @DATA_END AND t.DataStart <= k.Data AND t.DataEnd >= k.Data GROUP BY --DATENAME(mONTH,MONTH(k.Data)), MONTH(k.Data), YEAR(k.Data), t.Red, t.Yellow, t.Lime, t.CornflowerBlue, t.target ORDER BY YEAR(k.Data), MONTH(k.Data) SELECT [dbo].[AF_GetNameOfMonth] (Mese),Year,Value,Target1,Target2,Target3,Target4,Target FROM @TemporaryTable ``` The problem is in the last SELECT. If I delete this command ``` SELECT [dbo].[AF_GetNameOfMonth] (Mese),Year,Value,Target1,Target2,Target3,Target4,Target FROM @TemporaryTable ``` and insert this instead: ``` SELECT Mese,Year,Value,Target1,Target2,Target3,Target4,Target FROM @TemporaryTable ``` I don't have any problem. So this is the code of my function: ``` USE [iMELReporting] GO /****** Object: UserDefinedFunction [dbo].[AF_GetNameOfMonth] Script Date: 01/16/2015 09:32:31 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER FUNCTION [dbo].[AF_GetNameOfMonth](@Month [INT]) RETURNS NVARCHAR(100) AS BEGIN DECLARE @monthName as NVARCHAR(100); SELECT @monthName = CASE @Month WHEN 1 then 'January' WHEN 2 then 'February' WHEN 3 then 'March' WHEN 4 then 'April' WHEN 5 then 'May' WHEN 6 then 'June' WHEN 7 then 'July' WHEN 8 then 'August' WHEN 9 then 'September' WHEN 10 then 'October' WHEN 11 then 'November' WHEN 12 then 'December' END RETURN @monthName END ``` If I run the original report on my local PC it works, but if I run it online I get that error, and if I remove the function from my stored procedure the report also works online. Where is my error, please?
You need to check whether the online report server has access to that function; this error is generated when the database is unable to populate the result set. Check the database that the online report hits, verify that the function exists there, and make sure the user under which the `SSRS` tool is running has access to that function.
This is a very commonly observed problem that people run into. I hit it many times when I used to work on SSRS reports. When you run a report on your local machine, it uses the database installed on your machine (of course, you select the database while configuring the report). But when you deploy the report to the LIVE server, you may forget to change the database configuration. When the report server then calls the report, the report tries to connect to the database that was configured at design time. Since that database is not available on the LIVE server, your report fails to execute. The LIVE server may have a database with the same name, but the following two things vary between the local machine and the LIVE machine: 1. User DB credentials 2. Database IP So, to avoid this problem, make sure you change these settings in your report before you GO LIVE.
Reporting Services error when I try to run a report online
[ "sql", "reporting-services" ]
I am attempting to do the following: 1. Link two tables via a join on the same database. 2. Take a column, FK\_APPLICATIONID, that exists in both tables (with a slight difference: one is +1 of the other, i.e. column 1 = 1375 and column 2 = 1376). 3. One of the tables contains a reference number (QREF1234) and the other contains 11 phone numbers. 4. I want to be able to enter the reference number and have it return all 11 phone numbers as a single declarable value. 5. Use `Select * from TableD where phonenum in (@Declared variable)`. Here is what I have so far: ``` Use Database 1 DECLARE @Result INT; SELECT @Result = D.PhoneNum1,phonenum2,phonenum3,etc FROM Table1 JOIN TABLE2 D on D.FK_ApplicationID= D.FK_ApplicationID where TABLE1.FK_ApplicationID = D.FK_ApplicationID + 1 and QREF = 'Q045569/2' Use Database2 Select * from Table3 where PhoneNum = '@result' ``` The names of things like TABLE1 are not their true names. Thanks
I think you are after something like this. You are trying to "normalize" un-normalized columns and search for all those values in another table. You need to union the results together into a temp table, then search for the values. ``` Use Database 1 Create Table #tmp(PhoneNums varchar(50)) INSERT INTO #tmp SELECT D.PhoneNum1 FROM Table1 JOIN TABLE2 D on D.FK_ApplicationID= D.FK_ApplicationID where TABLE1.FK_ApplicationID = D.FK_ApplicationID + 1 and QREF = 'Q045569/2' union SELECT D.PhoneNum2 FROM Table1 JOIN TABLE2 D on D.FK_ApplicationID= D.FK_ApplicationID where TABLE1.FK_ApplicationID = D.FK_ApplicationID + 1 and QREF = 'Q045569/2' union SELECT D.PhoneNum3 FROM Table1 JOIN TABLE2 D on D.FK_ApplicationID= D.FK_ApplicationID where TABLE1.FK_ApplicationID = D.FK_ApplicationID + 1 and QREF = 'Q045569/2' --Use Database2 --you don't need to switch databases if you use a fully qualified name like shown below. Select * from Database2..Table3 where PhoneNum in ( Select PhoneNums from #tmp ) ```
A variable can only hold a single value. Instead of using the `in` operator, you could accompish this same thing with a join... ``` Use Database1 SELECT distinct T3.* FROM Table1 JOIN TABLE2 D on D.FK_ApplicationID= D.FK_ApplicationID JOIN Database2.dbo.Table3 T3 on T3.PhoneNum = D.PhoneNum1 or T3.PhoneNum = D.PhoneNum2 or T3.PhoneNum = D.PhoneNum3 where TABLE1.FK_ApplicationID = D.FK_ApplicationID + 1 and QREF = 'Q045569/2' ```
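A runnable miniature of the union/unpivot idea, using Python's built-in `sqlite3` with made-up table names and only three phone columns:

```python
import sqlite3

# Sketch of "normalize the wide phone columns, then use them as an IN-list".
# Table and column names are hypothetical stand-ins.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applications (app_id INT, phone1 TEXT, phone2 TEXT, phone3 TEXT)")
conn.execute("INSERT INTO applications VALUES (1, '111', '222', '333')")
conn.execute("CREATE TABLE directory (phone TEXT, owner TEXT)")
conn.executemany("INSERT INTO directory VALUES (?, ?)",
                 [("222", "B"), ("999", "Z")])

# UNION stacks the wide columns into a single column (and de-duplicates),
# which then feeds the IN predicate -- no variable needed.
rows = conn.execute("""
    SELECT * FROM directory
    WHERE phone IN (SELECT phone1 FROM applications
                    UNION SELECT phone2 FROM applications
                    UNION SELECT phone3 FROM applications)
""").fetchall()
```

Only the directory row whose phone appears in one of the wide columns comes back.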
SQL: How to make multiple values use the same declared variable?
[ "sql", "sql-server", "declare" ]
I have a unique circumstance here. I have a concatenated field I call address, and as part of this address I have the phone number of that address on the last line. Currently the phone number is formatted like: ``` 8885551212 ``` I want it to be formatted like: ``` (888) 555-1212 ``` Below is the INSERT into my temp table that I do to get everything correct. Any guidance on how to get the phone portion working right would be a great help. I did find one article talking about using a function, but I am unsure how to write it up properly for my circumstance. Thank you. ``` INSERT into #Referral_Temp (status, physician, timeFrame, date, address) select o.actStatus, o.actTextDisplay, apptTimeframe, apptDate, (select (p.address_line_1 + CHAR(13)+CHAR(10) + p.address_line_2 + CHAR(13)+CHAR(10) + p.city + ', ' + p.state + ' ' + p.zip + CHAR(13)+CHAR(10) + p.phone) AS address) from order_ o join provider_mstr p on o.refer_to_prov_id = p.provider_id where o.encounterID = @enc_id ```
You can use the SUBSTRING function: change ``` + p.phone ``` to ``` + '(' + SUBSTRING(p.phone,1,3) + ') ' + SUBSTRING(p.phone,4,3) + '-' + SUBSTRING(p.phone,7,4) ```
Use the **[`Stuff`](http://msdn.microsoft.com/en-IN/library/ms188043.aspx)** string function to insert the required special characters into the string. Try this. ``` SELECT '('+ Stuff(Stuff('8885551212', 4, 0, ') '), 9, 0, '-') ```
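Both answers boil down to 1-based string slicing. The same offsets can be checked in Python's built-in `sqlite3`, whose `substr` is 1-based like SQL Server's `SUBSTRING`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# substr() is 1-based, so the offsets (1,3), (4,3), (7,4) carve a
# 10-digit number into area code, prefix, and line number.
formatted, = conn.execute("""
    SELECT '(' || substr(phone, 1, 3) || ') '
               || substr(phone, 4, 3) || '-'
               || substr(phone, 7, 4)
    FROM (SELECT '8885551212' AS phone)
""").fetchone()
```

This yields the `(888) 555-1212` shape the question asks for.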
Format Phone Number inside of concat field SQL
[ "sql", "sql-server-2008-r2" ]
I have a table called `team_members` with this structure and contents: ``` +---------+---------+ | team_id | user_id | +---------+---------+ | 1 | 18 | +---------+---------+ | 1 | 7 | +---------+---------+ | 3 | 18 | +---------+---------+ ``` What I am trying to do is find a team that contains only 2 users, and these users are supplied by me (in this case, users with ids 7 and 18). Unfortunately, I have no idea how to build this query properly. I have tried something like ``` SELECT a.team_uid FROM team_members a INNER JOIN ( SELECT team_uid, user_id, COUNT(*) cnt_team FROM team_members GROUP BY team_uid HAVING COUNT(*) = 2 ) b ON a.user_id = b.user_id ```
Use `Case statement` in `Having` clause and `Count` only the required user\_id's. Try this. ``` select teamid from yourtable group by teamid having count(case when userid=7 then 1 end)=1 and count(case when userid=18 then 1 end)=1 and count(1)=2 ```
Something to think about (and assuming a PK on team\_id,user\_id)... ``` SELECT x.*, COUNT(*),SUM(user_id IN(7,18)) FROM my_table x GROUP BY team_id; ```
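Both answers rely on conditional aggregation; here is the first one end-to-end in Python's built-in `sqlite3`, on the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE team_members (team_id INT, user_id INT)")
conn.executemany("INSERT INTO team_members VALUES (?, ?)",
                 [(1, 18), (1, 7), (3, 18)])

# Exactly one row for user 7, exactly one for user 18, and only 2 rows total.
teams = conn.execute("""
    SELECT team_id
    FROM team_members
    GROUP BY team_id
    HAVING COUNT(CASE WHEN user_id = 7  THEN 1 END) = 1
       AND COUNT(CASE WHEN user_id = 18 THEN 1 END) = 1
       AND COUNT(*) = 2
""").fetchall()
```

Team 1 qualifies; team 3 (user 18 alone) does not.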
SQL find team that only contains specified 2 users
[ "mysql", "sql" ]
I have two tables named `cars` and `booking`. What I want to find is the cars available for booking, which simply means selecting the car details from `cars` whose id is not in the `booking` table for a specific date and time. The booking table has the columns `pick_date(date), drop_date(date), pick_time(time), drop_time(time), car_id(int), booking_id(int)`, blah blah. Now I am stuck on getting the available cars. Will you please guide me on how to do it in MySQL? Below are 4 of the columns of the booking table. ``` Pick_Date Drop_Date Pick_Time Drop_Time ---------- ----------- ----------- -------------- 2015-01-15 2015-01-15 09:00:00.000000 10:00:00.000000 ```
Try this: ``` SELECT c.car_id FROM cars c LEFT JOIN ( SELECT car_id FROM bookings WHERE '2015-01-15 11:00:00' BETWEEN CAST(CONCAT(Pick_Date, ' ', Pick_Time) AS DATETIME) AND CAST(CONCAT(Drop_Date, ' ', Drop_Time) AS DATETIME) ) AS A ON c.car_id = A.car_id WHERE A.car_id IS NULL ```
You can do this with a `left join` or `not in` or `not exists`. The `left join` looks like ``` select c.* from cars c left join bookings b on c.car_id = b.car_id and @pickdt <= addtime(drop_date, drop_time) and @dropdt >= addtime(pick_date, pick_time) where b.car_id is null; ``` The variables `@pickdt` and `@dropdt` are the pick up and drop off times you are looking for. This checks for any overlap between that period and a booking period. It chooses cars that have no overlap at all. Note: you should store datetime values in a single column, not two separate values with the date and time.
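The overlap test in the second answer (two intervals overlap iff each starts before the other ends) can be sketched with Python's built-in `sqlite3`, storing each endpoint as a single datetime string as the answer recommends; table and column names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (car_id INT)")
conn.executemany("INSERT INTO cars VALUES (?)", [(1,), (2,)])
conn.execute("CREATE TABLE bookings (car_id INT, pick_dt TEXT, drop_dt TEXT)")
conn.execute("INSERT INTO bookings VALUES (1, '2015-01-15 09:00:00', '2015-01-15 10:00:00')")

# Requested window: 09:30-10:30. Intervals overlap iff
# requested_pick <= drop_dt AND requested_drop >= pick_dt.
# ISO-8601 strings compare lexicographically == chronologically.
free = conn.execute("""
    SELECT c.car_id
    FROM cars c
    LEFT JOIN bookings b
           ON c.car_id = b.car_id
          AND '2015-01-15 09:30:00' <= b.drop_dt
          AND '2015-01-15 10:30:00' >= b.pick_dt
    WHERE b.car_id IS NULL
""").fetchall()
```

Car 1 is booked during the requested window, so only car 2 survives the anti-join.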
MySQL select id which is not in specific date and time
[ "mysql", "sql", "date", "datetime", "select" ]
I have three tables: `#temp`, `tbProduct` and `tbVendor` ``` tbVendor(States, AgentId, Products) tbProduct(ProductId, ProductName) #temp(ProductName) ``` **SAMPLE DATA** ``` tbVendor -------------------------------- ----------- -------------------- States AgentId Products -------------------------------- ----------- -------------------- Alabama,New York 1 T.V, Desktops New Jersy, Florida 2 Cellphones, Laptops New York, San Fransico 1 Cellphones, Desktop New Jersy, San Fransico 1 A.C, Heaters tbProduct ----------- ------------- ProductId ProductName ----------- ------------- 1 T.V 2 Laptops 3 Desktop 4 Cellphones 5 A.C 6 Heaters #temp --------------- ProductName --------------- T.V Laptops Desktop Cellphones A.C Heaters ``` Now I have to perform a join on them in order to fetch the `ProductId` and `ProductName` assigned in `tbVendor` on the basis of the `AgentId` and `States` fields. For example, if I fetch all the `Products` with their `Names` and `Ids` in `New York` with `AgentId = 1`, the output should be like ``` ProductId ProductName ----------- ------------- 1 T.V 3 Desktop 4 Cellphones ``` Note that `#temp` contains all the products assigned to `AgentId = 1`. Somehow I managed to fetch all the products assigned to `AgentId = 1` and put them in the `#temp` table. Now I am not able to fetch only the products assigned to `New York`. The expected output is like below ``` ProductId ProductName ----------- ------------- 1 T.V 3 Desktop 4 Cellphones ``` I have tried the below lines of code but it does not work ``` Select * from tbProduct L JOIN #Temp TL ON L.ProductName collate SQL_Latin1_General_CP1_CI_AS = TL.ProductName JOIN tbVendor Li ON Li.AgentId = @AgentId and Li.States like @States; ``` Note that `@States` contains `New York` and `@AgentId` contains `1`.
Write as: ``` ;WITH cte AS ( SELECT AgentId, CAST('<r>' + REPLACE(States, ',', '</r><r>') + '</r>' AS XML) AS States, CAST('<r>' + REPLACE(Products, ',', '</r><r>') + '</r>' AS XML) AS Products FROM @tbVendor ) ,FinalList AS ( SELECT AgentId, RTRIM(LTRIM (sTable.sColumn.value('.', 'VARCHAR(MAX)'))) AS States, RTRIM(LTRIM (PTable.PColumn.value('.', 'VARCHAR(MAX)'))) AS Products FROM cte CROSS APPLY States.nodes('//r') AS sTable(sColumn) CROSS APPLY Products.nodes('//r') AS PTable(PColumn) ) SELECT DISTINCT F.Products AS ProductName ,T.ProductId AS ProductId FROM FinalList F CROSS APPLY (SELECT ProductId FROM @tbProduct TP WHERE TP.ProductName = F.Products) AS T WHERE F.States = 'New York' AND F.AgentId = 1 ORDER BY T.ProductId ASC ``` UPDATE: To deal with special character LIKE `&` REPLACE it WITH `&amp;` as: ``` ;WITH cte AS ( SELECT AgentId, CAST('<r>' + REPLACE(States, ',', '</r><r>') + '</r>' AS XML) AS States, CAST('<r>' + REPLACE(REPLACE(Products,'&','&amp;'), ',', '</r><r>') + '</r>' AS XML) AS Products FROM @tbVendor ) ,FinalList AS ( SELECT AgentId, RTRIM(LTRIM (sTable.sColumn.value('.', 'VARCHAR(MAX)'))) AS States, RTRIM(LTRIM (PTable.PColumn.value('.', 'VARCHAR(MAX)'))) AS Products FROM cte CROSS APPLY States.nodes('//r') AS sTable(sColumn) CROSS APPLY Products.nodes('//r') AS PTable(PColumn) ) SELECT DISTINCT F.Products AS ProductName ,T.ProductId AS ProductId FROM FinalList F CROSS APPLY (SELECT ProductId FROM @tbProduct TP WHERE TP.ProductName = F.Products) AS T WHERE F.States = 'New York' AND F.AgentId = 1 ORDER BY T.ProductId ASC ``` [DEMO](http://rextester.com/SVXKFH57654)
TRY THIS: ``` DECLARE @state varchar(25) = 'New York', @agentId int = 1 SELECT STUFF((SELECT ', ' + Products FROM tbVendor tv WHERE tv.AgentId = @agentId AND tv.States Like '%' +@state + '%' FOR XML PATH('')), 1, 2, '') AS products INTO #tmp ;with tmp(product, products) as ( select LTRIM(LEFT(products, CHARINDEX(', ',products+', ')-1)), STUFF(products, 1, CHARINDEX(', ',products+', '), '') from #tmp union all select LTRIM(LEFT(products, CHARINDEX(', ',products+', ')-1)), STUFF(products, 1, CHARINDEX(', ',products+', '), '') from tmp where products > '' ) SELECT tp.* FROM tbProduct tp INNER JOIN tmp t ON t.product = tp.ProductName ```
How to join more than two tables on the basis of two parameters?
[ "sql", "sql-server", "sql-server-2008", "join" ]
I have a SQL table which looks like: ``` Date temp dewpoint humidity 10/12/15 5.1 2.4 57 11/12/15 4.1 5.8 69 12/12/15 20.6 3.6 56 13/12/15 13.0 2.5 21 14/12/15 5.6 13.6 15 15/12/15 5.1 2.4 56 ``` I would like to know if it's possible to print this out with only one SQL query. That is, for each data column, get the min and max value with the occurrence date. (As an example, I would like to know that the min 4.1 was on 11/12/15 and the max was 20.6 on 12/12/15.) Would it be possible to do this with one SQL query? Specifically, I want the output in this format: ``` Date dataName dataValue 11/12/15 temp-min 4.1 12/12/15 temp-max 20.6 14/12/15 humidity-min 15 11/12/15 humidity-max 69 ```
In MySQL, the easiest way is probably to use the `substring_index()`/`group_concat()` trick: ``` select substring_index(group_concat(date order by temp asc), ',', 1) as minTempDate, substring_index(group_concat(date order by temp desc), ',', 1) as maxTempDate, substring_index(group_concat(date order by dewpoint asc), ',', 1) as minDPDate, substring_index(group_concat(date order by dewpoint desc), ',', 1) as maxDPDate, substring_index(group_concat(date order by humidity asc), ',', 1) as minHumidityDate, substring_index(group_concat(date order by humidity desc), ',', 1) as maxHumidityDate from table t; ``` An alternative is to use `union all` like this: ``` (select date, 'temp-min', temp from table t order by temp asc limit 1) union all (select date, 'temp-max', temp from table t order by temp desc limit 1) union all (select date, 'humidity-min', humidity from table t order by humidity asc limit 1) union all (select date, 'humidity-max', humidity from table t order by humidity desc limit 1) ```
It is exactly what you want to receive, but it looks terrible. ``` SELECT date, 'temp-min' dataName, temp dataValue FROM numbers WHERE temp = (SELECT min(temp) FROM numbers) UNION SELECT date, 'temp-max' dataName, temp dataValue FROM numbers WHERE temp = (SELECT max(temp) FROM numbers) UNION SELECT date, 'humidity-min' dataName, humidity dataValue FROM numbers WHERE humidity = (SELECT min(humidity) FROM numbers) UNION SELECT date, 'humidity-max' dataName, humidity dataValue FROM numbers WHERE humidity = (SELECT max(humidity) FROM numbers) ; ```
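The `union all` alternative from the first answer can be verified against the question's data with Python's built-in `sqlite3`; since SQLite will not take `ORDER BY`/`LIMIT` on individual UNION members, each of MySQL's parenthesized sub-selects becomes a derived table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (date TEXT, temp REAL, humidity INT)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [("10/12/15", 5.1, 57), ("11/12/15", 4.1, 69),
                  ("12/12/15", 20.6, 56), ("13/12/15", 13.0, 21),
                  ("14/12/15", 5.6, 15), ("15/12/15", 5.1, 56)])

# One ORDER BY ... LIMIT 1 branch per extreme, glued together with UNION ALL.
rows = conn.execute("""
    SELECT * FROM (SELECT date, 'temp-min' AS dataName, temp AS dataValue
                     FROM readings ORDER BY temp ASC  LIMIT 1)
    UNION ALL
    SELECT * FROM (SELECT date, 'temp-max', temp
                     FROM readings ORDER BY temp DESC LIMIT 1)
    UNION ALL
    SELECT * FROM (SELECT date, 'humidity-min', humidity
                     FROM readings ORDER BY humidity ASC  LIMIT 1)
    UNION ALL
    SELECT * FROM (SELECT date, 'humidity-max', humidity
                     FROM readings ORDER BY humidity DESC LIMIT 1)
""").fetchall()
```

The four rows match the output format the question asks for.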
Request multi column min/max with date in SQL
[ "mysql", "sql", "database", "minmax" ]
This query returns a set of dates from tblValue whose FieldValue is type nvarchar(4000) ``` SELECT t1.FieldValue FROM (SELECT FieldValue FROM tblValue WHERE FieldID = 4) t1 WHERE DateAdd(day, -90, t1.FieldValue) <= GETDATE() ``` This works, but instead of hard-coding the FieldID of 4, I'd like to get all FieldValues for those which have the type "Expiration". This query returns 4. ``` SELECT FieldID FROM tblField WHERE FieldType = 'Expiration' ``` So, I expect this query's innermost subquery to return 4, and then to have the DateAdd applied only to those Expiration values which are yielded from t1 in the outermost subquery, which is what happens in the working first example. ``` SELECT t1.FieldValue FROM (SELECT FieldValue FROM tblValue WHERE FieldID = (SELECT FieldID FROM tblField WHERE FieldType = 'Expiration')) t1 WHERE DateAdd(day, -90, t1.FieldValue) <= GETDATE() ``` But I get the error > "Conversion failed when converting date and/or time from character string." which to me suggests that the DateAdd is being applied to all values of tblValue, not only to those which are yielded by the subquery which returns t1. There is probably a technical reason for it, but it doesn't seem right to me. For some reason ``` WHERE FieldID = 4) t1 ``` is not equivalent to ``` WHERE FieldID = (SELECT FieldID FROM tblField WHERE FieldType = 'Expiration')) t1 ``` It just so happens that if I leave off the final WHERE clause of the erroring query I get the same set of dates as in the working query. So t1 should not be presenting any values which the DateAdd should have a problem with. But there it is. I'm puzzled as to why.
This happens because of the particular execution plan that the optimizer produces. Depending on how it chooses to combine the comparison and filtering operations of the various clauses, it can do either one or the other first. In this case, it's trying to perform the date conversion and comparison before applying the FieldType filter. It's a well-known issue but inherent to the behavior of the SQL optimizer -- this is a similar issue with a different datatype: <https://connect.microsoft.com/SQLServer/feedback/details/333312/error-8114-converting-data-type-varchar-to-numeric> There are ways around this, but they are not always straightforward and usually require you to force specific order of execution. The below works for me, although I understand that the CASE technique is not always 100% effective. From [this fiddle](http://www.sqlfiddle.com/#!3/da7e1/3/0): ``` SELECT t1.FieldValue FROM (SELECT FieldValue FROM tblValue WHERE FieldID = (SELECT FieldID FROM tblField WHERE FieldType = 'Expiration')) t1 WHERE CASE WHEN ISDATE(t1.FieldValue) = 1 THEN DateAdd(day, -90, t1.FieldValue) ELSE '1/1/2900' END <= GETDATE() ```
I guess you want this? ``` SELECT * FROM tblValue v JOIN tblField f ON v.FieldID = f.FieldID WHERE f.FieldType = 'Expiration' AND DateAdd(day, -90, v.FieldValue) <= GETDATE() ```
Main T-SQL WHERE function seems to be wrongly applied to a subquery
[ "sql", "t-sql" ]
I am trying to select records that appear more than once and are part of a specific department plus other departments. So far the query that I have is this: ``` SELECT employeeCode, employeeName FROM Employees WHERE Department <> 'Technology' AND employeeCode IN (SELECT employeeCode FROM Employees GROUP BY employeeCode HAVING COUNT(*) > 1) ``` The problem is that I want to select employees who are part of the Technology department but also participate in other departments. So, they must be in the Technology department, but they could also be in the Household department. In the database it could look like: ``` 1 | A1 | Alex | Technology 2 | A2 | Thor | Household 3 | A3 | John | Cars 4 | A3 | John | Technology 5 | A4 | Kim | Technology 6 | A4 | Kim | Video Games ``` So basically the query should return: ``` A3 | John | A4 | Kim | ``` I think it's a small part that I am missing. Any ideas on how to filter it so that it always requires Technology plus the other departments? By the way, I tried searching but I couldn't find a problem like mine.
If you want employees that could be in the technology department *and* another department: ``` select e.employeeCode, e.employeeName from employees e group by e.employeeCode, e.employeeName having sum(case when e.department = 'Technology' then 1 else 0 end) > 0 and count(*) > 1; ``` This assumes no duplicates in the table. If it can have duplicates, then use `count(distinct department) > 1` rather than `count(*) > 1`.
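The conditional-aggregation approach runs unchanged in Python's built-in `sqlite3` against the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (id INT, employeeCode TEXT, employeeName TEXT, Department TEXT)")
conn.executemany("INSERT INTO Employees VALUES (?, ?, ?, ?)", [
    (1, "A1", "Alex", "Technology"), (2, "A2", "Thor", "Household"),
    (3, "A3", "John", "Cars"),       (4, "A3", "John", "Technology"),
    (5, "A4", "Kim", "Technology"),  (6, "A4", "Kim", "Video Games")])

# At least one Technology row per employee, and more than one row overall.
multi = conn.execute("""
    SELECT employeeCode, employeeName
    FROM Employees
    GROUP BY employeeCode, employeeName
    HAVING SUM(CASE WHEN Department = 'Technology' THEN 1 ELSE 0 END) > 0
       AND COUNT(*) > 1
    ORDER BY employeeCode
""").fetchall()
```

Alex (Technology only) and Thor (no Technology) drop out, leaving John and Kim as expected.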
Try this: ``` SELECT E.employeeCode, E.employeeName FROM Employees E INNER JOIN (SELECT DISTINCT E1.employeeCode, E1.employeeName FROM Employees E1 WHERE E1.Department = 'Technology' ) AS A ON E.employeeCode = A.employeeCode AND E.employeeName = A.employeeName GROUP BY E.employeeCode, E.employeeName HAVING COUNT(*) > 1; ```
Select records that appear more than once
[ "sql", "sql-server", "sql-server-2008" ]
I'm trying to get my EntryDate column in the format 'YYYY\_m', for example '2013\_04'. This code has been unsuccessful: ``` DATENAME (YYYY, EntryDate) + '_' + DATEPART (M, EntryDate) ``` Attempts using DATEFORMAT have also been unsuccessful, stating there was a syntax error at the ',' after the M. What code would work instead? Thank you.
How about `date_format()`? ``` select date_format(EntryDate, '%Y_%m') ``` This is the MySQL way. Your code looks like an attempt to do this in SQL Server. EDIT: The following should work in SQL Server: ``` select DATENAME(year, EntryDate) + '_' + RIGHT('00' + CAST(DATEPART(month, EntryDate) AS varchar(2)), 2) ``` Personally, I might use `convert()`: ``` select replace(convert(varchar(7), EntryDate, 121), '-', '_') ```
``` select DATENAME (YYYY, EntryDate) + '_' + right('0' + convert(varchar(2),datepart (MM, EntryDate)), 2) ``` You have to convert the result of `DATEPART()` to a character string in order for the `+` to perform an append. FYI - in the future "unsuccessful" doesn't mean anything. Next time post the actual error you are receiving.
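For comparison, SQLite (via Python's built-in `sqlite3`) spells the whole thing as a single `strftime` call, and the `RIGHT('0' + month, 2)` zero-padding trick from the answer above translates to a negative-offset `substr`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One-call formatting (strftime already zero-pads the month).
ym, = conn.execute("SELECT strftime('%Y_%m', '2013-04-17')").fetchone()

# The manual zero-pad trick: prepend '0', then keep the last two characters.
pads = [conn.execute("SELECT substr('0' || ?, -2)", (m,)).fetchone()[0]
        for m in (4, 11)]
```

Month 4 becomes '04' and month 11 stays '11', which is exactly what the `RIGHT` trick guarantees in T-SQL.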
DATENAME and DATEPART in SQL
[ "sql", "sql-server", "datepart" ]
Below is my `MDX` query to generate aging report. I want to use a named `SET` inside a calculated measure. I am getting this error: > Query (3, 1) The function expects a tuple set expression for the 1 > argument. A string or numeric expression was used. Can this be resolved? ``` WITH SET [Cnt] AS {'FILTER( [Cheque Detail Fact Keys].[Cheque Master ID].[Cheque Master ID] ,[Measures].[Paid Amt]<>0 )' } SET [x] AS { ClosingPeriod ( [Cal Date].[Month].[Month] ,[Cal Date].[Month].[All] ) } MEMBER [Measures].[0-30] AS Sum ( [x].Item(0).Lag(1) : [x].Item(0).Lag(0) ,Count(Cnt) //[Measures].[Master Count] ) MEMBER [Measures].[31-60] AS Sum ( [x].Item(0).Lag(2) : [x].Item(0).Lag(1) ,Count(Cnt) //[Measures].[Master Count] ) MEMBER [Measures].[>60] AS Sum ( NULL : [x].Item(0).Lag(4) ,Count(Cnt) //[Measures].[Master Count] ) SELECT { [Measures].[0-30] ,[Measures].[31-60] ,[Measures].[>60] } ON 0 ,{[Customer].[Name].[Name].ALLMEMBERS} ON 1 FROM [My Cube]; ```
I think it is just the first section of your script. Why are you using a string? Try deleting the apostrophes and adding `.MEMBERS` for clarity:

```
WITH
SET [Cnt] AS
    {FILTER(
      [Cheque Detail Fact Keys].[Cheque Master ID].[Cheque Master ID].MEMBERS
     ,[Measures].[Paid Amt]<>0
    )
    }
...
...
```

The error message seems a bit mysterious, but I believe that the curly braces are a function themselves - they convert any members inside them into a set. So you have effectively written `{<some string>}`, but the curly brackets are expecting something that can be a set. I get the same error message when running the following AdvWrks script:

```
WITH
SET [Cnt] AS
    {'FILTER(
      [Date].[Date].[Date].MEMBERS
     ,[Measures].[Internet Sales Amount] > 10000
    )'
    }
SELECT
  [Measures].[Internet Sales Amount] ON 0
 ,[Cnt] ON 1
FROM [Adventure Works];
```

To use a count of `Cnt` in the following measures you can create a measure, but I don't think this will be context aware, as the set is evaluated before anything else:

```
WITH
SET [Cnt] AS
    {FILTER(
      [Cheque Detail Fact Keys].[Cheque Master ID].[Cheque Master ID]
     ,[Measures].[Paid Amt]<>0
    )
    }
MEMBER [Measures].[Master Count] AS
    [Cnt].count
SET [x] AS
    {
      ClosingPeriod
      (
        [Cal Date].[Month].[Month]
       ,[Cal Date].[Month].[All]
      )
    }
MEMBER [Measures].[0-30] AS
    Sum
    (
        [x].Item(0).Lag(1)
      :
        [x].Item(0).Lag(0)
     ,[Measures].[Master Count]
    )
MEMBER [Measures].[31-60] AS
    Sum
    (
        [x].Item(0).Lag(2)
      :
        [x].Item(0).Lag(1)
     ,[Measures].[Master Count]
    )
MEMBER [Measures].[>60] AS
    Sum
    (
        NULL : [x].Item(0).Lag(4)
     ,[Measures].[Master Count]
    )
SELECT
  {
    [Measures].[0-30]
   ,[Measures].[31-60]
   ,[Measures].[>60]
  } ON 0
 ,{[Customer].[Name].[Name].ALLMEMBERS} ON 1
FROM [My Cube];
```
The [Cnt] set looks expensive...is that based on a fact table? If so, what's the granularity? Perhaps you can add a calculated field in the DSV for the base table of the measure group (e.g. IIF(Paid Amt <> 0, 1, 0) ) and then create a SUM-based measure on the field. Then your query becomes... ``` WITH SET [x] AS { ClosingPeriod ( [Cal Date].[Month].[Month] ,[Cal Date].[Month].[All] ) } MEMBER [Measures].[0-30] AS Sum ( [x].Item(0).Lag(1) : [x].Item(0).Lag(0) ,[Measures].[NonZero Check Count] ) MEMBER [Measures].[31-60] AS Sum ( [x].Item(0).Lag(2) : [x].Item(0).Lag(1) ,[Measures].[NonZero Check Count] ) MEMBER [Measures].[>60] AS Sum ( NULL : [x].Item(0).Lag(4) ,[Measures].[NonZero Check Count] ) SELECT { [Measures].[0-30] ,[Measures].[31-60] ,[Measures].[>60] } ON 0 ,{[Customer].[Name].[Name].ALLMEMBERS} ON 1 FROM [My Cube]; ```
Aging Report MDX query issue?
[ "sql", "sql-server", "reporting-services", "ssas", "mdx" ]
I am wanting to insert a record into a database table (website) by using a procedure with parameters. The SQL code has been tested in mysql workbench and works properly to insert new data. However, with delphi i am getting an 'SQL Syntax error near [insert whole line of code here]'. I was wondering if one of you could tell me where I'm going wrong. Thanks again. ``` procedure TNewWebsite.InsertData(WID, D, T, Wh, Dr, Od, Rd, Rc, Pm, OStat, Cstat, Rstat, N, U1, P1, P2, PStat, CID : string); begin WebsiteTable.WebsiteQuery.SQL.Add('INSERT INTO website VALUES ( '+WID+', '''+D+''', '''+T+''', '''+Wh+''', '''+D+''', '''+Od+''', '''+Rd+''', '+Rc+', '''+Pm+''', '+Ostat+', '+Cstat+', '''+Rstat+''', '''+N+''', '''+U1+''', '''+P1+''', '''+P2+''', '+Pstat+', '+CID+';)'); WebsiteTable.WebsiteQuery.Open; end; ```
You have quite a few problems in your code. A) Don't go overboard with function parameters; if you have a lot of variables, assemble them in a record or class, depending on your needs. B) Your SQL code is vulnerable to SQL injection. If you have never heard of SQL injection, please Google it or read this [really good answer](https://stackoverflow.com/a/6001373/800214). The solution against SQL injection is to use parameters (see my code example). An added bonus is that your SQL statement will be human readable and less error prone. C) The `Open` function is only used with queries that return a result set, like `SELECT` statements. For `INSERT`, `DELETE` and `UPDATE` statements, you need to use the `ExecSQL` function. Sanitized code:

```
interface

type
  TMyDataRecord = record
    WID : String;
    D : String;
    T : String;
    Wh : String;
    Dr : String;
    Od : String;
    Rd : String;
    Rc : String;
    Pm : String;
    OStat : String;
    Cstat : String;
    Rstat : String;
    N : String;
    U1 : String;
    P1 : String;
    P2 : String;
    PStat : String;
    CID : String;
  end;

...

implementation

procedure TNewWebsite.InsertData(Data : TMyDataRecord);
var
  SQL : String;
begin
  SQL := 'INSERT INTO website VALUES (:WID, :D1, :T, :Wh, :D2, :Od, :Rd, :Rc, ' +
         ':Pm, :Ostat, :Cstat, :Rstat, :N, :U1, :P1, :P2, :Pstat, :CID)';
  WebsiteTable.WebsiteQuery.ParamCheck := True;
  WebsiteTable.WebsiteQuery.SQL.Text := SQL;
  WebsiteTable.WebsiteQuery.Params.ParamByName('WID').AsString := Data.WID;
  WebsiteTable.WebsiteQuery.Params.ParamByName('D1').AsString := Data.D;
  ...// rest of parameters
  WebsiteTable.WebsiteQuery.Params.ParamByName('CID').AsString := Data.CID;
  WebsiteTable.WebsiteQuery.ExecSQL;
end;
```
Please replace `'+CID+';)');` with `'+CID+');');` in the end of your query line. The **`;`** was in wrong place.
Insert into SQL syntax error delphi
[ "mysql", "sql", "database", "delphi" ]
Is there a way to update only parts of a database record? I have a table that lists a bunch of different items followed by their cost, but I would like to remove the cost from the item. For example, the current table looks like this:

```
---Item-------------
Apple- 1.35
Orange - 1.24
Grape - 2.00
ETC..
---------------------
```

I would like to update the table with the same records, but without the hyphenated price at the end. There are hundreds of different records in this table, so I can't just update by a specific record. I've tried using wildcards, but I wasn't able to get the results I'm looking for. Is there a way of doing this?
I think I would go for Mid: ``` SELECT t.ExistingField, Trim(Mid([existingfield],1,InStr([existingfield],"-")-1)) AS NewData FROM Table t ``` So: ``` UPDATE Table SET Table.UpdatedField = Trim(Mid([existingfield],1,InStr([existingfield],"-")-1)) WHERE Table.ExistingField Like "*-*" ```
If the part you want to remove always begins with a hyphen, and there won't be any hyphens as part of the item name then this code should do what you want: ``` update YourTable set item = left(item, instr(item,"-")-1) ``` Before you run the update you might want to try it as a select: ``` select left(item, instr(item,"-")-1) as newitem from YourTable ``` If your item name can contain hyphens maybe searching for a hyphen followed by a space would work: `"- "` Also, a where clause should probably be used to avoid trying to update rows without the price part. ``` SELECT Left(item,InStr(item,"- ")-1) AS newitem FROM YourTable WHERE InStr(item,"- ") > 0; ```
Updating parts of a record in Access
[ "sql", "database", "ms-access-2010" ]
I have two tables:

```
create table books (
   id int
  ,bookname text
);
```

---

```
create table users(
   id int
  ,name text
  ,book_1 int
  ,book_2 int
  ,book_3 int
);
```

Now, 'book\_1', 'book\_2', 'book\_3' contain ids from the table 'books'. I am trying to create a single query using a join to get all three book names along with the user name. I am able to get one book name, but how can I get all three book names?

```
SELECT user.name
      ,books.name
FROM user
LEFT JOIN books ON books.id=user.book_1;
```

(This is giving me one book's details.) Using PostgreSQL. I want the result in one row, like

```
username, book_1_name, book_2_name, book_3_name
```

I don't want multiple rows.
You can use *sub-selects* to get *bookname* of each *username* in a single row (**if *id* in table *books* is unique**) ``` select name username ,(select bookname from books where id=book_1) book1_name ,(select bookname from books where id=book_2) book2_name ,(select bookname from books where id=book_3) book3_name from users ``` [**> SQLFIDDLE DEMO**](http://sqlfiddle.com/#!15/c6d64/1/0)
``` SELECT user.name ,books.name FROM user LEFT JOIN books ON books.id=user.book_1 OR books.id=user.book_2 OR books.id=user.book_3; ``` join on all `id's`
How to get data by using different column values referencing the same table
[ "sql", "postgresql" ]
I have done much searching on StackOverflow and have been unable to adapt other SQL Join "Top 1" answers to my own case, so I am hoping someone else can point out what I am missing. ### TableA ``` UniqueTableID TaskID ReferenceID 1 Task_1 Group_1 2 Task_2 Group_2 3 Task_2 Group_3 4 Task_3 Group_4 5 Task_3 Group_5 6 Task_4 Group_6 ``` ### TableB ``` GroupID GroupName Group_1 Group_AAA Group_2 Group_BBB Group_3 Group_CCC Group_4 Group_DDD Group_5 Group_EEE Group_6 Group_FFF ``` What I want is to return a GroupName for each TaskID (TableA.ReferenceID = TableB.GroupID). As for the TaskID's with multiple GroupName references, I don't care which one is returned, so I attempt to use TOP 1. This query (and many variations thereof): ``` SELECT A.TaskID, B.GroupName FROM [TableA] A JOIN [TableB] B ON B.GroupID = ( SELECT TOP 1 [GroupID] FROM [TableB] WHERE [GroupID] = A.ReferenceID ) ``` Gives me this table: ### Result ``` TaskID GroupName Task_1 Group_AAA Task_2 Group_BBB Task_2 Group_CCC Task_3 Group_DDD Task_3 Group_EEE Task_4 Group_FFF ``` How can I get this instead: ### Wanted Result ``` TaskID GroupName Task_1 Group_AAA Task_2 Group_BBB Task_3 Group_DDD Task_4 Group_FFF ``` Thank you for your time and help!
If you want the first GroupName alphabetically for each task, you can use `min`: ``` SELECT A.TaskID, min(B.GroupName) as GroupName FROM [TableA] A JOIN [TableB] B ON B.GroupID = A.ReferenceID GROUP BY A.TaskID ```
I think the best way to do this is with row number. While you can do a group by on the main table (like Ron Smith's answer), you have to group by all columns which are not GroupName. Joining to a subquery that uses row number, you don't. Here is an example of how to do it for your data:

```
SELECT A.TaskID, s.GroupName
FROM [TableA] A
JOIN (SELECT [GroupID], [GroupName],
             ROW_NUMBER() OVER (PARTITION BY GroupID ORDER BY GroupName) AS RN
      FROM [TableB]
     ) AS s
  ON A.ReferenceID = s.GroupID AND s.RN = 1
```

To explain: You are using row number to give a unique number to each item in the sub-query. You can pick what to partition by (in this case GroupID) and what to order by (in this case GroupName). Because you only select the items with RN = 1 you will always get the lowest GroupName with this query. But you can change the order by if you want. As I've set it up you will get the same results as your example, but I think it is clear how to change it.
SQL Join Tables from Top 1
[ "sql", "sql-server", "join" ]
I have two tables, one called facebook\_posts and the other called facebook\_post\_metrics. facebook\_posts looks like ``` NAME id a 1 b 1 c 4 d 4 ``` facebook\_post\_metrics looks like ``` number FBID Date_Executed User_Executed 1 1 2012-09-18 16:10:44.917 admin 2 1 2012-09-25 11:39:01.000 jeff 3 4 2012-09-25 13:20:09.930 steve 4 4 2012-09-25 13:05:09.953 marsha ``` So the common column that would be used for the inner join is id from the facebook\_posts table and FBID from the facebook\_post\_metrics. So after the inner Join, the table should look like: ``` name number FBID Date_Executed User_Executed a 1 1 2012-09-18 16:10:44.917 admin b 2 1 2012-09-25 11:39:01.000 jeff c 3 4 2012-09-25 13:20:09.930 steve d 4 4 2012-09-25 13:05:09.953 marsha ``` However, I want to include another condition while doing this inner join. Basically, I just want to have the most updated entry for the joined table above. I know I would use max(date\_executed) and then group it by FBID. But I'm not sure which part of the SQL Query that would go into when using INNER JOIN. Please help me out. Bottom line...I'd like to end up with a table looking like this: ``` name number FBID Date_Executed User_Executed b 2 1 2012-09-25 11:39:01.000 jeff c 3 4 2012-09-25 13:20:09.930 steve ```
With a problem like this, I recommend breaking it down into pieces and putting it back together. Finding the date of the most recent facebook\_post\_metrics row is easy, like this:

```
SELECT fbid, MAX(date_executed) AS latestDate
FROM facebook_post_metrics
GROUP BY fbid;
```

So, to get the entire row, you want to join the original table with those results:

```
SELECT fpm.*
FROM facebook_post_metrics fpm
JOIN(
    SELECT fbid, MAX(date_executed) AS latestDate
    FROM facebook_post_metrics
    GROUP BY fbid) t ON t.fbid = fpm.fbid AND t.latestDate = fpm.date_executed;
```

Last, all you have to do is join that with the facebook\_posts table to get the name:

```
SELECT fp.name, fpm.number, fpm.fbid, fpm.date_executed, fpm.user_executed
FROM facebook_posts fp
JOIN(
    SELECT fpm.*
    FROM facebook_post_metrics fpm
    JOIN(
        SELECT fbid, MAX(date_executed) AS latestDate
        FROM facebook_post_metrics
        GROUP BY fbid) t ON t.fbid = fpm.fbid AND t.latestDate = fpm.date_executed
    ) fpm ON fpm.fbid = fp.id AND fpm.date_executed = fp.updated_at;
```

Here is an [SQL Fiddle](http://sqlfiddle.com/#!2/1485d/1) example.

**EDIT**

Based on your comments and looking over your design, I believe you can do something like this. First, get the latest `facebook_post_metrics`, which I have described above. Then, get the latest `facebook_post` using a similar method. This searches for the most recent `updated_at` value of the facebook post. If you want to use a different date column, just change that:

```
SELECT fp.*
FROM facebook_posts fp
JOIN(
    SELECT id, MAX(updated_at) AS latestUpdate
    FROM facebook_posts
    GROUP BY id) t ON t.id = fp.id AND t.latestUpdate = fp.updated_at;
```

Last, you can join that query with the one for facebook\_post\_metrics on the condition that the `id` and `fbid` columns match:

```
SELECT fp.name, fpm.number, fpm.fbid, fpm.date_executed, fpm.user_executed
FROM(
    SELECT fp.*
    FROM facebook_posts fp
    JOIN(
        SELECT id, MAX(updated_at) AS latestUpdate
        FROM facebook_posts
        GROUP BY id) t ON t.id = fp.id AND t.latestUpdate = fp.updated_at) fp
JOIN(
    SELECT fpm.*
    FROM facebook_post_metrics fpm
    JOIN(
        SELECT fbid, MAX(date_executed) AS latestDate
        FROM facebook_post_metrics
        GROUP BY fbid) t ON t.fbid = fpm.fbid AND t.latestDate = fpm.date_executed) fpm
ON fp.id = fpm.fbid;
```

Here is an updated [SQL Fiddle](http://sqlfiddle.com/#!2/1485d/10) example.
you need a `subquery` that calculated the `max` using `group by` and then join again with same tables to get all the details As per the latest edit, the update\_at column is no longer there in posts as there are two entries, you can get only one by doing a group by if you want all the entries then remove the aggregation and `group by` in the outer query. ``` select max(fp.name), fpm.number, fpm.FBID, fpm.date_executed, fpm.user_executed from facebook_posts fp join ( select max(Date_executed) dexecuted, FBID from facebook_post_metrics group by FBID ) t on fp.id = t.fbid join facebook_post_metrics fpm on fpm.fbid = t.fbid and fpm.date_executed = t.dexecuted group by fpm.number, fpm.FBID, fpm.date_executed, fpm.user_executed ```
Using MAX within INNER JOIN - SQL
[ "sql" ]
I have these two tables: ``` actions action_data ``` `action_data` belongs to actions and has the columns: `action_id`, `name`, `value` The contents may look like this: `Actions`: ``` id | ----- 178| 179| ``` `action_data`: ``` action_id | name | value ------------------------------------- 178 | planet | earth 178 | object | spaceship_a 179 | planet | earth 179 | object | building ``` Now I want to select the action, which has `planet = earth and object = spaceship_a` in action\_data. How can I achieve this with SQL? If you had only one condition it would work like this: ``` SELECT DISTINCT actions.* FROM actions INNER JOIN action_data ON actions.id = action_data.action_id WHERE (action_data.name = 'planet' AND action_data.value = 'earth'); ``` But I need two or more conditions from `action_data`. Any ideas?
Since you don't know the number of meta data entries to search for, I wouldn't recommend an unknown/unlimited number of `joins`. Instead, use `group concatenation`:

```
select *
from actions
join (
    select action_id,
        group_concat(name,'=',value order by name separator ',') as csv -- MySQL
        -- string_agg(name || '=' || value, ',' order by name) as csv -- PostgreSQL
    from action_data
    where name in ('planet', 'object')
    group by action_id
) meta on actions.id = meta.action_id
where csv = 'object=spaceship_a,planet=earth'
```

*I'd be happy to hear from SQL pros about performance, which, I suppose, would matter more with 3+ values to find.*
If you **don't** want a DBMS-specific syntax, you could use an auto-join. I would do it like this: ``` SELECT DISTINCT action_id FROM action_data a1 JOIN action_data a2 USING(action_id) WHERE a1.name = 'planet' AND a1.value = 'earth' AND a2.name = 'object' AND a2.value = 'spaceship_a'; ``` This works for **2 conditions**, but can be extended to 3 or more with more replicas of the data table in the `FROM` clause and the corresponding comparision conditions. In this case, the `a1` replica is used for the first condition (planet - earth) and the `a2` replica is used for the second condition (object - spaceship\_a). The `JOIN` allows us to search for the match in all the possible combinations (N rows gives N^2 combinations). This is probably not the best and most efficient way of doing, but is reliable and is not platform-dependent. Demo follows: ``` mysql> select * from action_data; +-----------+--------+-------------+ | action_id | name | value | +-----------+--------+-------------+ | 178 | planet | earth | | 178 | object | spaceship_a | | 179 | planet | earth | | 179 | object | building | +-----------+--------+-------------+ 4 rows in set (0.02 sec) mysql> SELECT DISTINCT action_id -> FROM action_data a1 JOIN action_data a2 USING (action_id) -> WHERE -> a1.name = 'planet' AND a1.value = 'earth' AND -> a2.name = 'object' AND a2.value = 'spaceship_a'; +-----------+ | action_id | +-----------+ | 178 | +-----------+ 1 row in set (0.00 sec) ```
SQL - Select by condition based on multiple rows
[ "sql" ]
I'm currently working on a small project that uses USA county data. I have no problems ordering the data in a `Seq.orderBy`, but as there is a `sortBy` in the query expression I would expect the results to be sorted. This is not the case. ``` type SysData = SqlDataConnection<"Data Source=ROME\SQLEXPRESS;Initial Catalog=SysData;Integrated Security=True"> type County = { State : string; SSA : string } let counties = let db = SysData.GetDataContext() query { for c in db.CountyMatrix do sortBy c.Countyssa select { State = c.State; SSA = c.Countyssa } distinct } ``` Now, the above is what I'm executing, but my results end up looking like so: ``` ... {State = "OR"; SSA = "38030";} {State = "GA"; SSA = "11630";} {State = "WA"; SSA = "50130";} {State = "MN"; SSA = "24740";} {State = "KY"; SSA = "18030";} {State = "MO"; SSA = "26970";} {State = "DC"; SSA = "09000";} ... ``` And the query sent to my local SQL Server instance is displayed in IntelliTrace as so: ``` USE [SysData]; GO SELECT DISTINCT [t0].[state] AS [Item1], [t0].[countyssa] AS [Item2] FROM [dbo].[CountyMatrix] AS [t0] ``` Note the lack of an `ORDER BY`, which I was expecting to be there because of the query expression's `sortBy c.Countyssa`. Any ideas as to why I'm not getting sorted data out of `counties`? I'm aiming to make this as clean as possible to show as a small example to my employer. Thanks in advance!
The order of the keywords is important. If you read the descriptions carefully, `sortBy` sorts the elements that have been selected so far, and `distinct` removes duplicates from the results so far (but there is nothing in there about keeping order), etc. Each of these keywords acts on the result of the one before it, but keep in mind that a later keyword may also undo something that was done before (like the sorting). See the [description of each of these keywords here](http://msdn.microsoft.com/en-us/library/hh225374.aspx).

```
for c in db.CountyMatrix do
let c = { State = c.State; SSA = c.Countyssa }
distinct
sortBy c.SSA
select c
```

You may need to use a temporary variable to capture the projected record so that `distinct` and `sortBy` operate on it.
Came across this today. Here's an alternate solution using a subquery: ``` let counties = let db = SysData.GetDataContext() query { for result in (query { for c in db.CountyMatrix do select { State = c.State; SSA = c.Countyssa } distinct }) do sortBy (result.SSA) select result } ```
Why does orderBy in F# not result in an ORDER BY in SQL?
[ "sql", "linq", "f#", "sql-order-by" ]
Using MySQL, I'm looking to write a single insert query that will take a single string and insert it multiple times, each time removing the last character. So the query would be something like ``` INSERT INTO table (str) VALUES ("string") ..... ``` and would result in the following values being inserted ``` string strin stri str st s ``` I could do this PHP, but I'm wondering if there is an SQL solution first.
The simplest way is to write a stored procedure and then just call it to do the inserts.

```
DELIMITER #

CREATE PROCEDURE insert_string_rows (IN str TEXT)
BEGIN
    DECLARE len INT UNSIGNED;
    SET len = CHAR_LENGTH(str);
    WHILE len > 0 DO
        INSERT INTO `table` VALUES (str);
        SET str = SUBSTR(str, 1, len - 1);
        SET len = CHAR_LENGTH(str);
    END WHILE;
END#

DELIMITER ;
```

Then just simply

```
CALL insert_string_rows ("string")
```

and out pops all the rows into 'table'

```
string
strin
stri
str
st
s
```
If you have a table of numbers, you could do: ``` insert into table(str) select left(@str, n.n) from (select 1 as n union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 ) n where length(@str) >= n.n order by n.n desc; ```
Insert string into mysql multiple times, removing last character each time, with one query
[ "mysql", "sql", "insert" ]
I am doing an advanced search functionality for a recipes website where the users can query; * Recipes **with** the presence of certain ingredients (up to 3 ingredients) * Recipes **without** the presence of certain ingredients (up to 3 ingredients) * Under specific cooking time The above-mentioned points can be combined together in the user query. I'm already half way through where I can query the recipes that include specified ingredients and cooked under certain amount of time. Here's my database structure; Table: **Recipes** Recipe\_ID, Recipe\_Name, Cooking\_time Table: **Ingredients** Ingredient\_ID, Ingredient\_Name Table: **Recipe\_Ingredients** Recipe\_Ingredient\_ID, Ingredient\_ID, Recipe\_ID Here's my SQL query so far; ``` SELECT count(*) as rowcount, r.Recipe_name FROM Recipes AS r INNER JOIN Recipe_Ingredients AS ri ON r.Recipe_ID = ri.Recipe_ID INNER JOIN Ingredients AS i ON ri.Ingredient_ID = i.Ingredient_ID AND i.Ingredient_Name IN ('penne','onion') AND r.Cooking_time < 60 GROUP BY r.Recipe_name HAVING rowcount = 2; ``` The will get recipes that contain 'penne' and 'onion' and cooked in less than 60 minutes. What I can't figure out is how to query recipes as per below; * contain 'penne' and 'onion' * **does not contain 'butter'** * cooked in 'less than 60 minutes' I have tried the below code but it does not work; ``` SELECT count(*) as rowcount, r.Recipe_name FROM Recipes AS r INNER JOIN Recipe_Ingredients AS ri ON r.Recipe_ID = ri.Recipe_ID INNER JOIN Ingredients AS i ON ri.Ingredient_ID = i.Ingredient_ID AND i.Ingredient_Name IN ('penne','onion') AND i.Ingredient_Name NOT IN ('butter') AND r.Cooking_time < 60 GROUP BY r.Recipe_name HAVING rowcount = 2; ``` Any help is much appreciated! Thanks.
You can use ``` SELECT r.Recipe_name, r.Recipe_ID FROM Recipes AS r INNER JOIN Recipe_Ingredients AS ri ON r.Recipe_ID = ri.Recipe_ID INNER JOIN Ingredients AS i ON ri.Ingredient_ID = i.Ingredient_ID WHERE i.Ingredient_Name IN ( 'penne', 'onion', 'butter' ) AND r.Cooking_time < 60 GROUP BY r.Recipe_ID, /*<-- In case two recipes with same name*/ r.Recipe_name HAVING /*Must contain both these*/ COUNT(DISTINCT CASE WHEN i.Ingredient_Name IN ( 'penne', 'onion' ) THEN i.Ingredient_Name END) = 2 AND /*Can't contain these*/ MAX(CASE WHEN i.Ingredient_Name IN ( 'butter' ) THEN 1 ELSE 0 END) = 0 ```
Use NOT EXIST clause with a subquery: ``` NOT EXISTS( SELECT 1 FROM Ingredients i WHERE ri.Ingredient_ID = i.Ingredient_ID AND i.Ingredient_Name IN ('butter') ) ``` A full query: ``` SELECT count(*) as rowcount, r.Recipe_name FROM Recipes AS r INNER JOIN Recipe_Ingredients AS ri ON r.Recipe_ID = ri.Recipe_ID INNER JOIN Ingredients AS i ON ri.Ingredient_ID = i.Ingredient_ID AND i.Ingredient_Name IN ('penne','onion') AND NOT EXISTS( SELECT 1 FROM Ingredients i WHERE ri.Ingredient_ID = i.Ingredient_ID AND i.Ingredient_Name IN ('butter') ) AND r.Cooking_time < 60 GROUP BY r.Recipe_name HAVING rowcount = 2; ```
SQL query recipes with these ingredient(s) but NOT these ingredient(s)
[ "mysql", "sql", "relational-division" ]
```
StudentID | SubCode | SubName
-------------------------------
1           1         Math
1           2         Science
1           3         English
2           1         Math
2           2         Science
3           2         Science
4           1         Math
4           3         English
```

This is my subject table. How can I find students who have registered as follows:

1. Students who have registered in only Maths
2. Students who have registered in Maths and English
3. Students who have registered in Science and Maths and English

in a single SQL query? I tried it this way:

```
SELECT DISTINCT `stud_id`
FROM `subj_assign`
WHERE `subj_id` = '1,2'
  AND STATUS = '1'
ORDER BY `subj_assign`.`stud_id` ASC
```
Try these two queries; both are similar but show the data in different ways:

```
SELECT StudentID,
       CASE
         WHEN Sum(CASE WHEN SubCode IN( 1, 2, 3 ) THEN 1 ELSE 0 END) = 3 THEN 'All'
         WHEN Sum(CASE WHEN SubCode IN( 1, 3 ) THEN 1 ELSE 0 END) = 2 THEN 'MathsEnglish'
         WHEN Sum(CASE WHEN SubCode IN( 1 ) THEN 1 ELSE 0 END) = 1 THEN 'Maths'
       END AS subjects
FROM   yourtable
GROUP  BY StudentID
HAVING subjects IS NOT NULL;

SELECT StudentID,
       CASE WHEN Sum(CASE WHEN SubCode IN( 1, 2, 3 ) THEN 1 ELSE 0 END) = 3 THEN 'YES' ELSE 'NO' END AS `all`,
       CASE WHEN Sum(CASE WHEN SubCode IN( 1, 3 ) THEN 1 ELSE 0 END) = 2 THEN 'YES' ELSE 'NO' END AS `MathsEnglish`,
       CASE WHEN Sum(CASE WHEN SubCode IN( 1 ) THEN 1 ELSE 0 END) = 1 THEN 'YES' ELSE 'NO' END AS `Maths`
FROM   yourtable
GROUP  BY StudentID
```

[SQLFIDDLE](http://sqlfiddle.com/#!2/db27c/4)
You need to filter the group using `Having Clause`. I don't know why you need all the results in a single query. Try this. ``` SELECT StudentID, 'Only Maths' as Subjects FROM #testt GROUP BY StudentID HAVING Count(CASE WHEN SubCode = '1' THEN 1 END) = 1 AND Count(*) = 1 UNION ALL SELECT StudentID, 'Maths and English' FROM #testt GROUP BY StudentID HAVING Count(CASE WHEN SubCode = '1' THEN 1 END) = 1 AND Count(CASE WHEN SubCode = '3' THEN 1 END) = 1 UNION ALL SELECT StudentID, 'Maths,Sceince and English' FROM #testt GROUP BY StudentID HAVING Count(CASE WHEN SubCode = '1' THEN 1 END) = 1 AND Count(CASE WHEN SubCode = '3' THEN 1 END) = 1 AND Count(CASE WHEN SubCode = '2' THEN 1 END) = 1 ```
Find students who have registered in multiple subjects by subject name
[ "mysql", "sql", "select", "join" ]
I currently have two separate queries in SQL Server that count the number of times a table contains a unique ID in a week. I would like to display these using one query, not two. This data is held in two separate views, hence my writing two queries. These are `ActivityPointer` and `Asp_dealercallreport`. Query #1: ``` SELECT OwnerIDName, COUNT(Distinct ActivityID) AS CalendarEvents FROM ActivityPointer WHERE /*Specify Activity code for Calendar Events*/ ActivityTypeCode = '4201' /*Specify Calendar Events from this week only*/ AND ScheduledStart >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) / 7 * 7, 0) AND ScheduledStart <= DATEADD(DAY, DATEDIFF(DAY, -1, GETDATE()), 0) /*Specify users to be reported on by Name*/ AND OwnerIdName IN ('John Doe', 'Jane Doe') GROUP BY OwnerIDName ``` Query #2: ``` SELECT OwnerIDName, COUNT(Distinct Asp_dealercallreportId) AS DealerVisits FROM Asp_dealercallreport /*Specify Calendar Events from this week only*/ WHERE asp_callreportdate >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) / 7 * 7, 0) AND asp_callreportdate <= DATEADD(DAY, DATEDIFF(DAY, -1, GETDATE()), 0) /*Specify to be reported on by Name*/ AND OwnerIdName IN ('John Doe', 'Jane Doe') GROUP BY OwnerIDName ``` Thanks
Maybe you can simply use the INNER JOIN operator? Like this:

```
SELECT ap.OwnerIDName,
COUNT(Distinct ap.ActivityID) AS CalendarEvents,
COUNT(Distinct a_dcr.Asp_dealercallreportId) AS DealerVisits
FROM ActivityPointer ap
INNER JOIN Asp_dealercallreport a_dcr
ON ap.OwnerIDName=a_dcr.OwnerIDName
WHERE
/*Specify Activity code for Calendar Events*/
ap.ActivityTypeCode = '4201'
/*Specify Calendar Events from this week only*/
AND ap.ScheduledStart >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) / 7 * 7, 0)
AND ap.ScheduledStart <= DATEADD(DAY, DATEDIFF(DAY, -1, GETDATE()), 0)
/*Specify dealer call reports from this week only*/
AND a_dcr.asp_callreportdate >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) / 7 * 7, 0)
AND a_dcr.asp_callreportdate <= DATEADD(DAY, DATEDIFF(DAY, -1, GETDATE()), 0)
/*Specify users to be reported on by Name*/
AND ap.OwnerIdName IN ('John Doe', 'Jane Doe')
GROUP BY ap.OwnerIDName
```
Use conditional aggregation: ``` SELECT OwnerIDName, COUNT(Distinct case when ScheduledStart >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) / 7 * 7, 0) AND ScheduledStart <= DATEADD(DAY, DATEDIFF(DAY, -1, GETDATE()), 0) then ActivityID end) AS CalendarEvents_Scheduled, COUNT(Distinct case when asp_callreportdate >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) / 7 * 7, 0) and asp_callreportdate <= DATEADD(DAY, DATEDIFF(DAY, -1, GETDATE()), 0) then ActivityID end) AS CalendarEvents_asp FROM ActivityPointer WHERE ActivityTypeCode = '4201' AND OwnerIdName IN ('John Doe', 'Jane Doe') GROUP BY OwnerIDName; ```
SQL Count between two tables
[ "sql", "sql-server", "join", "count" ]
I am new to databases and I needed help with something. I have one table in my database which contains information about all the events. So for example, there are rows, each one describing a different event. Now, I also need to keep track of users attending events. So how would I design this? I was thinking that if there was some way, to have a new table specifically for that one event created every time a new row is added to the events table. Is that possible to do? Or would it be a good idea to just add a column in the main events table, with names of people attending delimited by a comma or a semicolon or something? Or is it better to make a table for each user, with each row being an event being attended? Thanks in advance.
A better way of doing this would be to have something like the following:

```
Create table events(
  id integer,
  <OTHER EVENT DETAILS HERE (e.g. start time, location, duration etc.)>
  primary key (id)
);

Create table users(
  id integer,
  <OTHER USER DETAILS HERE (e.g. name, email, phone etc.)>
  primary key (id)
);

CREATE TABLE event_users(
  event_id integer,
  user_id integer,
  <OTHER USER/EVENT DETAILS HERE (e.g. ticket price paid etc.)>
  Primary Key (event_id,user_id),
  Foreign Key (event_id) REFERENCES events(id),
  Foreign Key (user_id) REFERENCES users(id)
);
```

This way events can have 0 or many users attending and users can attend 0 or many events, and you don't need to create more tables. The way you would then get the data would be something like:

```
SELECT U.id
FROM users U, event_users UE
WHERE U.id = UE.user_id
AND UE.event_id = <event id you want to search for>;
```
> ... would it be a good idea to just add a column in the main events table, with names of people attending delimited by a comma or a semicolon or something?

Hmm, you may want to read up on [joins](http://www.tizag.com/mysqlTutorial/mysqljoins.php) first; this will help with relations between tables. You would create an events table and a users table, with a [relational table](https://stackoverflow.com/questions/6861376/creating-relational-tables-in-mysql) to keep track of which users are on which event. This way you don't have to keep a comma-delimited list of users (which does not perform well at all). I would suggest you have one event table that has a type in a different table; I'm betting that each event will contain the same information, so it just makes sense. Creating tables on the fly is just going to cause confusion and complexity that I don't think you can really justify here.
Having SQL Automatically Create a Table
[ "mysql", "sql", "database", "database-design" ]
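The accepted junction-table design above can be exercised end-to-end. Below is a minimal runnable sketch using SQLite through Python's `sqlite3` module; the event and user names are invented for illustration, and the table and column names follow the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE event_users (
    event_id INTEGER,
    user_id  INTEGER,
    PRIMARY KEY (event_id, user_id),
    FOREIGN KEY (event_id) REFERENCES events(id),
    FOREIGN KEY (user_id)  REFERENCES users(id)
);
INSERT INTO events VALUES (1, 'Concert'), (2, 'Meetup');
INSERT INTO users  VALUES (10, 'Alice'), (20, 'Bob');
-- Alice attends both events, Bob only the meetup
INSERT INTO event_users VALUES (1, 10), (2, 10), (2, 20);
""")

# Who is attending event 2?
attendees = [row[0] for row in conn.execute(
    "SELECT u.name FROM users u "
    "JOIN event_users ue ON u.id = ue.user_id "
    "WHERE ue.event_id = 2 ORDER BY u.name")]
print(attendees)
```

The `event_users` rows are all that is needed to answer "who attends event X" (or "which events does user Y attend") in either direction, with no per-event tables and no comma-separated lists.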
I would like to count students by their current age for all students registered since 2010. i.e. 16 - 2 17 - 5 19 - 5 In the current "student" table, I have the student's DOB and registration\_date. I am using Management Studio. So far I have: ``` SELECT COUNT (*) FROM db.student WHERE DATEDIFF(year, DOB, CURRENT_TIMESTAMP) AND registration_date >= '2010-01-01' ``` but am not sure where to go from here. Thank you in advance.
try this. ``` select DATEDIFF(Year,DOB,CURRENT_TIMESTAMP) age,Count(ID) users from dbo.student where DatePart(year,registration_date)>=2010 GROUP BY DATEDIFF(Year,DOB,CURRENT_TIMESTAMP) ```
You might need a GROUP BY. Probably something like this? ``` SELECT DATEDIFF(year, DOB, CURRENT_TIMESTAMP), COUNT (*) FROM db.student WHERE registration_date >= '2010-01-01' GROUP BY DATEDIFF(year, DOB, CURRENT_TIMESTAMP) ```
DATEDIFF and COUNT
[ "sql", "datediff" ]
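The grouping approach from both answers above can be checked with a small self-contained example. `DATEDIFF` is SQL Server-specific, so this sketch substitutes a plain year difference in SQLite via Python's `sqlite3`, with a fixed reference date so the result is deterministic; the sample students are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (id INTEGER PRIMARY KEY, DOB TEXT, registration_date TEXT);
INSERT INTO student (DOB, registration_date) VALUES
    ('2004-03-01', '2011-09-01'),
    ('2004-11-30', '2012-09-01'),
    ('2001-06-15', '2010-09-01'),
    ('2001-01-01', '2009-09-01');   -- registered before 2010: excluded
""")

# A fixed "today" keeps the example deterministic; with SQL Server you
# would use DATEDIFF(year, DOB, CURRENT_TIMESTAMP) instead.
counts = conn.execute("""
    SELECT CAST(strftime('%Y', '2020-06-01') AS INTEGER)
           - CAST(strftime('%Y', DOB) AS INTEGER) AS age,
           COUNT(*) AS n
    FROM student
    WHERE registration_date >= '2010-01-01'
    GROUP BY age
    ORDER BY age
""").fetchall()
print(counts)
```

Note that a bare year difference, like `DATEDIFF(year, ...)`, counts calendar-year boundaries rather than completed birthdays, so it can overstate an age by one for students whose birthday has not yet occurred this year.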
For the past two days I have been trying to figure out how to perform a count for the following task: for each module, list the module title and the number of activities scheduled for the module. The two tables needed for this are: TblActivity - ID, Name, Type, ModuleID, Day, Time, RoomID. TblModule - ID, Title. I'm sure GROUP BY needs to be used as well, but I don't know how to implement it. I'm using SQL Server Management Studio 2008. Thank you.
Join the tables on the module ID (`TblActivity.ModuleID = TblModule.ID`), group by the `Title` column, and then select the `Title` column and the number of matched activities (`COUNT(TblActivity.ID)`). ``` SELECT Title, COUNT(TblActivity.ID) FROM TblModule JOIN TblActivity ON TblActivity.ModuleID = TblModule.ID GROUP BY Title ``` Hope this helps!
``` SELECT M.Title, COUNT (A.ModuleID) AS NUMBER_OF_ACTIVITIES FROM TblModule M LEFT JOIN TblActivity A ON M.ID = A.ModuleID GROUP BY Title ```
SQL Count using two people
[ "sql", "sql-server", "count" ]
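One detail worth making explicit about the two answers above: with a `LEFT JOIN` and a `COUNT` of a column from the activity table, modules that have no activities still appear with a count of 0, because `COUNT()` ignores the NULLs produced by the outer join, while a plain inner join drops them entirely. A runnable sketch, using SQLite via Python's `sqlite3` with invented sample modules:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TblModule  (ID INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE TblActivity(ID INTEGER PRIMARY KEY, Name TEXT, ModuleID INTEGER);
INSERT INTO TblModule VALUES (1, 'Databases'), (2, 'Networks'), (3, 'Ethics');
INSERT INTO TblActivity VALUES
    (1, 'Lecture', 1), (2, 'Lab', 1), (3, 'Lecture', 2);
""")

# COUNT(A.ModuleID) counts only matched activity rows, so the module
# with no activities ('Ethics') is reported with 0 rather than dropped.
result = conn.execute("""
    SELECT M.Title, COUNT(A.ModuleID) AS NumActivities
    FROM TblModule M
    LEFT JOIN TblActivity A ON M.ID = A.ModuleID
    GROUP BY M.Title
    ORDER BY M.Title
""").fetchall()
print(result)
```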
My Table: ``` ID|Col1|Col2| 1 |abc |1 | 2 |abc |0 | 3 |xyz |0 | 4 |xyz |0 | 5 |jkl |1 | ``` **Q1.** I want to return a list of records grouped by Col1 where all of the records in that group have Col2 = 0. I don't mind if it returns all of them (record 3 and 4) or just unique one(e.g. just 'xyz'). **Q2.** I also would like to get a list of records grouped by Col1 where any of the records in that group have Col2 = 1 (in this case 'abc' and 'jkl')
``` select col1 from your_table group by col1 having sum(case when col2 = 1 then 1 else 0 end) = 0 ``` and ``` select col1 from your_table group by col1 having sum(case when col2 = 1 then 1 else 0 end) > 0 ```
Another way, using EXISTS and sub-queries. Note that the outer table needs an alias here; otherwise the unqualified `col1` inside the sub-query resolves to `t2.col1` and the correlation is lost: ``` select * from your_table t1 where not exists (select * from your_table t2 where t1.col1 = t2.col1 and t2.col2 <> 0) ``` And ``` select * from your_table t1 where exists (select * from your_table t2 where t1.col1 = t2.col1 and t2.col2 = 1) ```
Select from table group by id where all records in that group have a flag = false
[ "sql", "sql-server" ]
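The `HAVING SUM(CASE ...)` technique from the accepted answer can be verified directly against the sample data in the question; a runnable version using SQLite through Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ID INTEGER PRIMARY KEY, Col1 TEXT, Col2 INTEGER);
INSERT INTO t VALUES (1,'abc',1),(2,'abc',0),(3,'xyz',0),(4,'xyz',0),(5,'jkl',1);
""")

# Q1: groups where *all* rows have Col2 = 0 (no row contributes a 1)
all_zero = [r[0] for r in conn.execute(
    "SELECT Col1 FROM t GROUP BY Col1 "
    "HAVING SUM(CASE WHEN Col2 = 1 THEN 1 ELSE 0 END) = 0")]

# Q2: groups where *any* row has Col2 = 1 (at least one row contributes a 1)
any_one = sorted(r[0] for r in conn.execute(
    "SELECT Col1 FROM t GROUP BY Col1 "
    "HAVING SUM(CASE WHEN Col2 = 1 THEN 1 ELSE 0 END) > 0"))

print(all_zero, any_one)
```

This matches the expected results from the question: 'xyz' for the all-zero groups, and 'abc' plus 'jkl' for the groups containing a 1.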
Oracle database allows the following queries: ``` select col1 from table order by col2; select count(col1) from table group by col2; ``` It appears that the columns in the order / group by clause don't need to be in the select clause. However, [tutorialspoint: SQL - ORDER BY Clause](http://www.tutorialspoint.com/sql/sql-order-by.htm) tells me otherwise. Is the above only Oracle specific then?
The tutorial is just plain wrong. An `order by` can include any columns in the queried objects whether they are in the select list or not. A `group by` goes the other way. Any non-aggregated columns in the select list must be included in the `group by`.
I don't think it's the case that columns mentioned in the `GROUP BY` must be in the `SELECT`. However, if there is a `GROUP BY` then columns in the `SELECT`, that aren't aggregated, must be in the `GROUP BY`! Anyway, please see [SQL Fiddle demo here](http://sqlfiddle.com/#!2/2469f6/1) for MySQL. So you can see that it is not Oracle-specific.
Are the columns used in the order / group by clause(s) mandatory in the select clause?
[ "mysql", "sql", "oracle" ]
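The behaviour described in the answers above is easy to confirm in most engines; here is a sketch using SQLite via Python's `sqlite3` (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 TEXT, col2 INTEGER);
INSERT INTO t VALUES ('c', 1), ('a', 3), ('b', 2);
""")

# ORDER BY may reference a column that is not in the select list...
ordered = [r[0] for r in conn.execute(
    "SELECT col1 FROM t ORDER BY col2")]

# ...and so may GROUP BY, although then each group can hide several
# col1 values, so selecting non-aggregated columns would be ambiguous.
grouped = conn.execute(
    "SELECT COUNT(col1) FROM t GROUP BY col2 ORDER BY col2").fetchall()

print(ordered, grouped)
```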
I have a field endTime with the DateTime data type and a status field (int). I want to update my status field from 1 to 0 if the hour and minute from endTime are equal to the hour and minute of the current time. The date part is always the same, so the day doesn't matter. How do I do this?
You can use `datepart()`: ``` select (case when datepart(hour, EndTime) = datepart(hour, getdate()) and datepart(minute, EndTime) = datepart(minute, getdate()) then 1 else 0 end) as status ``` You can also put this into an update, if that is what you really want. EDIT: Since the question asks to flip status from 1 to 0 on a match, the update would be: ``` update table set status = (case when datepart(hour, EndTime) = datepart(hour, getdate()) and datepart(minute, EndTime) = datepart(minute, getdate()) then 0 else status end); ```
``` UPDATE <table> SET status = 0 WHERE datepart(hour, endTime) = datepart(hour,getdate()) AND datepart(minute, endTime) = datepart(minute,getdate()); ``` This is how I do it on SQL Server 2008.
How to compare Hour and Minute only from DateTime data type
[ "sql", "sql-server-2008", "datetime-format" ]
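The hour-and-minute comparison above can be demonstrated end-to-end. `DATEPART` and `GETDATE()` are SQL Server-specific, so this sketch uses SQLite's `strftime('%H:%M', ...)` via Python's `sqlite3`, with a fixed "current time" so the result is deterministic; the table and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (id INTEGER PRIMARY KEY, endTime TEXT, status INTEGER);
INSERT INTO jobs VALUES (1, '2024-01-05 09:30:00', 1),
                        (2, '2024-03-20 09:31:00', 1);
""")

# A fixed "now" keeps the example deterministic; with SQL Server you
# would compare DATEPART(hour/minute, ...) against GETDATE() instead.
now = '2024-06-01 09:30:45'
conn.execute("""
    UPDATE jobs SET status = 0
    WHERE strftime('%H:%M', endTime) = strftime('%H:%M', ?)
""", (now,))

statuses = conn.execute("SELECT id, status FROM jobs ORDER BY id").fetchall()
print(statuses)
```

Only row 1 (endTime 09:30, matching 09:30 of the reference time) is flipped to 0; the date parts are ignored entirely.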
I am calling a stored procedure called `SearchProcedure`. I get an error at the line where I am calling it. I have not made any changes to the parameters passed to the procedure, and it was called just fine with the same calling statement earlier. ``` Exec SearchProcedure @firstname = 'Simran', @middlename = 'kaur', @lastname = 'Khurana', @City = 'Delhi' ``` What's wrong with the syntax that it gives the error that says: > Incorrect syntax near '=' Edit: the statement where I did this: ``` set @sql = 'declare ' + '@Temp'+ @colVar + ' int' exec(@sql) select @sql as 'SQLFORDECLARATIONS' ``` outputs `declare @TempMiddleName int`, yet when I try to set a value in the variable it gives an error that it should be declared first. The set statement results in: ``` select @TempMiddleName=dbo.[MatchMiddleName](MiddleNameFromUser,MiddleNameFromTable,0) ``` which is what it should be, yet it is not able to see the declared variable. The stored procedure is as follows: ``` create procedure SearchProcedure ( @firstname nvarchar(20), @middlename nvarchar(20) = null, @lastname nvarchar(20), @DOB Date = null, @SSN nvarchar(30)= null, @ZIP nvarchar(10)= null, @StateOfResidence nvarchar(2)= null, @City nvarchar(20)= null, @StreetName nvarchar(20)= null, @StreetType nvarchar(20)= null, @BuildingNumber int= null, @Aptnumber nvarchar(10)= null ) As DECLARE @sSQL NVARCHAR(2000), @Where NVARCHAR(1000) = ' ' declare @Percent int, @FN nvarchar(20), @MN nvarchar(20) = null, @LN nvarchar(20), @DateOfB Date = null, @SSNumber nvarchar(30)= null, @ZIPCode nvarchar(10)= null, @StateOfRes nvarchar(2)= null, @CityOfRes nvarchar(20)= null, @StreetNameRes nvarchar(20)= null, @StreetTypeRes nvarchar(20)= null, @BuildingNumberRes int= null, @AptnumberRes nvarchar(10)= null set @Percent = 0 create table #results ( firstname nvarchar(20) not null, middlename nvarchar(20), lastname nvarchar(20)not null, PercentageMatch int not null, DOB Date, SSN nvarchar(30), ZIP nvarchar(10), [State] nvarchar(2), City nvarchar(20), StreetName
nvarchar(20), StreetType nvarchar(20), BuildingNumber int, Aptnumber nvarchar(10) ) declare c Cursor local static Read_only for SELECT * from dbo.Patients where firstname = @firstname open c fetch next from c into @FN, @MN, @LN, @DateOfB, @SSNumber, @ZIPCode, @StateOfRes, @CityOfRes, @StreetNameRes, @StreetTypeRes, @BuildingNumberRes, @AptnumberRes while @@FETCH_STATUS = 0 BEGIN /*set @Percent = dbo.[MatchLastName](@lastname, @LN, @Percent) set @Percent = dbo.[MatchMiddleName](@middlename, @MN, @Percent) set @Percent = dbo.[MatchCity](@City, @CityOfRes, @Percent)*/ Exec [dbo].[OutputProcedure] @lastname, @LN, @middlename, @MN,@City, @CityOfRes, @Percent output Insert into #results values (@FN,@MN,@LN,@Percent, @DateOfB,@SSNumber, @ZIPCode,@StateOfRes,@CityOfRes,@StreetNameRes,@StreetTypeRes,@BuildingNumberRes,@AptnumberRes) fetch next from c into @FN, @MN, @LN, @DateOfB, @SSNumber, @ZIPCode, @StateOfRes, @CityOfRes, @StreetNameRes, @StreetTypeRes, @BuildingNumberRes, @AptnumberRes end select * from #results order by PercentageMatch desc IF OBJECT_ID('tempdb..#results') IS NOT NULL DROP TABLE #results go ``` `OutputProcedure` code is as follows: ``` CREATE Procedure OutputProcedure ( @LastNameFromUser nvarchar(20) = null, @LastNameFromTable nvarchar(20), @MiddleNameFromUser nvarchar(20) = null, @MiddleNameFromTable nvarchar(20) = null, @CityFromUser nvarchar(20) = null, @CityFromTable nvarchar(20) = null, @Percentage int out ) AS BEGIN select 'OUTPUTPROCEDURECALLED' declare @maxvalue int DECLARE @variableTable TABLE ( idx int identity(1,1), matchvalue nvarchar(15)) INSERT INTO @variableTable(matchvalue) values ('MiddleName') INSERT INTO @variableTable(matchvalue) values ('LastName') INSERT INTO @variableTable(matchvalue) values ('City') SELECT * FROM @variableTable DECLARE @counter int declare @sql nvarchar(100) declare @sql2 nvarchar(25), @finalResult nvarchar(100) declare @sql3 nvarchar(300), @sql4 nvarchar(15), @tempresultStore nvarchar(20), @temp int, @temp2 
int, @average int SET @counter = 1 SELECT @maxvalue = (SELECT MAX(idx) FROM @variableTable) select @maxvalue as 'MAXVALUE' WHILE(@counter <= @maxvalue) BEGIN DECLARE @colVar nvarchar(15) SELECT @colVar = matchvalue FROM @variableTable WHERE idx = @counter set @sql = 'declare ' + '@Temp'+ @colVar + ' int' exec(@sql) select @sql as 'SQLFORDECLARATIONS' /*set @temp = CHARINDEX(' ',@sql) select @temp as 'resultofcharindex' set @temp2 = LEN(@sql) - (@temp) SELECT @temp2 AS 'AFTERADDING1' set @tempresultStore = right(@sql, @temp2)*/ set @tempresultStore = 'Temp'+@colVar SELECT @tempresultStore AS 'FINALCUTPART' set @sql3 = 'set ' + ' ' + @tempresultStore + '=' + 'dbo.[Match' + @colVar + '](' + @colVar + 'FromUser' + ',' + @colVar + 'FromTable' + ',' + '0)' EXEC(@sql3) select @sql3 as 'check sql query formed' set @finalResult = @finalResult + @tempresultStore select @finalResult as 'SUM' SET @counter = @counter + 1 select @counter as 'COUNTERVALUE' END set @Percentage = @finalResult/@maxvalue SELECT @Percentage AS 'FINALRESULT' RETURN END ```
``` --Setting variable in Dynamic SQL DECLARE @sql NVARCHAR(MAX), @Name NVARCHAR(100) SET @Name = '#B2E0EB1A' SET @sql = ' DECLARE @MyVar NVARCHAR(100) SELECT TOP 1 @MyVar = name FROM sys.objects WHERE name LIKE ''%a''--Escaping single quote with double quote PRINT @MyVar ' EXEC(@sql) SET @sql = ' DECLARE @MyVar NVARCHAR(100) SET @MyVar = '''+(SELECT TOP 1 name FROM sys.objects WHERE name LIKE '%a')+'''--Escaping single quote with double quote PRINT @MyVar ' EXEC(@sql) SET @sql = ' DECLARE @MyVar NVARCHAR(100) SET @MyVar = '''+@Name+'''--Escaping single quote with double quote PRINT @MyVar ' EXEC(@sql) SET @sql = ' DECLARE @MyVar NVARCHAR(100) SET @MyVar = ''#B2E0EB1A''--Escaping single quote with double quote PRINT @MyVar ' EXEC(@sql) ``` If you want to get output variable from your dynamic query you have to use sp\_executesql procedure instead of EXEC() Study this code ``` DECLARE @DynamicSQLOutput NVARCHAR(100) DECLARE @SQL nvarchar(500); DECLARE @ParmeterDefinition nvarchar(500); --in this variabe you write the variables which you want to be declared in the dynamic sql without using the declare SET @ParmeterDefinition = N'@FinalOutputResultInDynamicSQL NVARCHAR(100) OUTPUT'; --here you write your dynamic code SELECT @SQL = N'SET @FinalOutputResultInDynamicSQL = ''test'' ' EXEC sp_executesql @SQL, --Execute code @ParmeterDefinition, -- Define Parameters @FinalOutputResultInDynamicSQL = @DynamicSQLOutput OUTPUT --Get output --Note that @FinalOutputResultInDynamicSQL is only defined in @ParmeterDefinition but not outside of the dynamic sql PRINT @DynamicSQLOutput; ```
I can't find any error and can run the exec without an error message. Edit: After I created the (now available) output procedure and uncommented its call, I'm getting the error too... Still investigating. Edit2: In the Output procedure you have to use `set @sql = 'declare ' + '@Temp'+ @colVar + ' int'` (@ before Temp) and `set @sql3 = 'set ' + ' @' + @tempresultStore` (additional @ after the 'set'). Btw: you can exec more than one statement in dynamic SQL, so you could use e.g. `exec (@sql + '; ' + @sql3);`
Incorrect syntax near '=' while trying to call a stored procedure in T-sql
[ "sql", "sql-server", "t-sql", "ssms" ]
I'm new to SQL queries and I'm trying to join two tables. I need to get all data of the followers of userID = 2. Here's the error I get: `Syntax error: Encountered "INNER" at line 1, column 39.` and here's the SQL query I ran: ``` SELECT * FROM FOLLOWER WHERE userID = "2" INNER JOIN USERS ON FOLLOWER.Follower_userID = USERS.userID ORDER BY USERS.follower_count ASC ``` **The tables in my DB are:** FOLLOWER --- * ID * userID * Follower\_userID USERS --- * userID * username * password * Nickname P.S. I'm using Apache Derby. Thank you so much guys.
The position of the WHERE clause was incorrect. The structure of a SELECT query is ``` SELECT fields FROM tables WHERE conditions ORDER BY fields ``` so your query should be ``` SELECT * FROM FOLLOWER INNER JOIN USERS ON FOLLOWER.Follower_userID = USERS.userID WHERE FOLLOWER.userID = '2' ORDER BY USERS.follower_count ASC ``` Note that once the tables are joined, `userID` must be qualified, since both tables have a column with that name, and Derby uses single quotes for string literals.
Try this statement: ``` SELECT * FROM FOLLOWER Fl WHERE userID="2" INNER JOIN USERS Us ON Us.userID = Fl.Follower_userID ORDER BY USERS.follower_count ASC ``` Let me know if it works
SQL INNER JOIN exception
[ "sql", "exception", "syntax-error", "derby" ]
Essentially, I want to partition my table by person and add a row number (EVENT) for each record within a n month window of the most recent date in that window. For the example below n = 3. Sample Data: ``` PERSON DATE (yyyy-mm-dd) A 2014-05-02 A 2014-01-09 A 2014-01-08 A 2014-01-07 A 2014-01-02 B 2014-07-11 B 2014-06-12 B 2014-01-10 C 2014-11-11 ``` Results: ``` PERSON DATE (yyyy-mm-dd) EVENT A 2014-05-02 1 A 2014-01-09 2 A 2014-01-08 2 A 2014-01-07 2 A 2014-01-02 2 B 2014-07-11 1 B 2014-06-12 1 B 2014-01-10 2 C 2014-11-11 1 ``` How would I go about getting those results? I've been trying to solve this with a recursive CTE but the recursive step is throwing me off: ``` WITH testCTE (PERSON, DATE, EVENT) AS ( SELECT A.PERSON, A.DATE, 1 AS EVENT FROM [dbo].[Records] A JOIN (SELECT MAX(PERSON) AS PERSON, MAX(DATE) AS DATE FROM [dbo].[Records] GROUP BY PERSON) B ON A.PERSON = B.PERSON AND A.DATE >= DATEADD(MONTH, -3, B.DATE) UNION ALL -- Not sure what to put here. This gives an error: -- Recursive references are not allowed on the right hand side of an EXCEPT operator in the recursive part of recursive CTEs. ( SELECT PERSON, DATE, EVENT+1 AS EVENT FROM [dbo].[Records] EXCEPT SELECT A.PERSON, A.DATE, EVENT FROM [dbo].[Records] A JOIN testCTE B ON A.PERSON = B.PERSON AND A.DATE = B.DATE ) ) SELECT * FROM testCTE ``` I'm currently using SQL sever 2008 but this will ultimately be implemented in Oracle 10g.
Another example with a ROW\_NUMBER(): ``` with a AS(SELECT *, ROW_NUMBER() OVER (PARTITION BY PERSON ORDER BY DT DESC) AS rn FROM #Records), c AS( SELECT PERSON AS PERSON, DT, DT AS eventStart, 1 AS EVENT, rn FROM a WHERE rn=1 UNION ALL SELECT c.PERSON, r.DT, CASE WHEN r.DT < DATEADD(MONTH, -3, c.eventStart) THEN r.DT ELSE c.eventStart END, CASE WHEN r.DT < DATEADD(MONTH, -3, c.eventStart) THEN c.EVENT + 1 ELSE c.EVENT END, r.rn FROM c INNER JOIN a r ON c.PERSON = r.PERSON and c.rn=r.rn-1 ) SELECT PERSON, DT, EVENT FROM c order by 1, 2 desc; ```
This might work for you - an Oracle solution will be very similar: ``` WITH x1 AS ( SELECT person, MAX(dt) AS max_dt FROM person_event GROUP BY person ) SELECT p1.person, p1.dt, FLOOR(DATEDIFF(month, p1.dt, x1.max_dt)/3) + 1 FROM person_event p1 INNER JOIN x1 ON p1.person = x1.person ``` [Please see SQL Fiddle Demo here](http://sqlfiddle.com/#!3/9483f2/5). In Oracle you might do the following: ``` WITH x1 AS ( SELECT person, MAX(dt) AS max_dt FROM person_event GROUP BY person ) SELECT p1.person, p1.dt, TRUNC(MONTHS_BETWEEN(p1.dt, x1.max_dt)/3) + 1 FROM person_event p1 INNER JOIN x1 ON p1.person = x1.person ```
Row number over a partition and sliding window
[ "sql", "sql-server" ]
I want a query to count duplicate values in a column. Is there a single query for this, or do I have to do it like this: ``` total records=SELECT COUNT(column_name) FROM table_name; distinct records=SELECT COUNT(DISTINCT column_name) FROM table_name; duplicate count=total records-distinct records; ```
Looking at your SQL, it can be simplified as below: ``` SELECT Count(Column) - Count(DISTINCT Column) FROM yourTable ```
``` select column_name, count(*) from table_name group by column_name having count(*) > 1 ```
count duplicate values in a column sql
[ "sql", "oracle" ]
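The two answers above actually address complementary readings of the question, and both can be shown on one invented dataset (using SQLite via Python's `sqlite3`): the accepted expression counts how many surplus duplicate values exist overall, while the `GROUP BY ... HAVING` query lists which values are duplicated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (v TEXT);
INSERT INTO t VALUES ('a'), ('a'), ('a'), ('b'), ('c'), ('c');
""")

# Total non-NULL values minus distinct values = number of "extra" copies
extra = conn.execute(
    "SELECT COUNT(v) - COUNT(DISTINCT v) FROM t").fetchone()[0]

# Per-value breakdown of which values occur more than once
per_value = conn.execute(
    "SELECT v, COUNT(*) FROM t GROUP BY v HAVING COUNT(*) > 1 ORDER BY v"
).fetchall()

print(extra, per_value)
```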
I have resources each represented by a guid and they have attribute name-value pairs. I would like to query for resources which have the given attribute name-value pairs. So, suppose the table looks like: ``` GUID ATTR_SUBTYPE ATTR_VAL 63707829116544a38c5a508fcde031a4 location US 63707829116544a38c5a508fcde031a4 owner himanshu 44d5bf579d9f4b9a8c41429d08fc51de password welcome1 44d5bf579d9f4b9a8c41429d08fc51de host retailHost c67d8f5d1a9b41428f029d55b79263e1 key random c67d8f5d1a9b41428f029d55b79263e1 role admin ``` and I want all the resources with location 'US' and owner 'himanshu'. One possible query would be: `select guid from table where attr_subtype = 'location' and attr_value = 'US' INTERSECT select guid from table where attr_subtype = 'owner' and attr_value = 'himanshu';` There can be any number of attribute name-value pairs in the query, so an additional intersection per pair in the query. I was wondering if we can construct a better query, as intersection is expensive.
Assuming you don't have *duplicate* attributes per GUID you can achieve the desired result without a `JOIN`: ``` SELECT "GUID" FROM T WHERE ( "ATTR_SUBTYPE" = 'location' AND "ATTR_VAL" = 'US' ) OR ( "ATTR_SUBTYPE" = 'owner' AND "ATTR_VAL" = 'himanshu' ) GROUP BY "GUID" HAVING COUNT(*) = 2 -- <-- keep only GUID have *both* attributes ``` See <http://sqlfiddle.com/#!4/80900/2>
Insert your targets into a temp table then join to it. ``` select t.guid from table as t join temp on t.attr_subtype = temp.attr_subtype and t.attr_value = temp.attr_value ```
sql query for multi valued attributes
[ "sql", "oracle" ]
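The accepted `GROUP BY ... HAVING COUNT(*) = n` technique can be checked against data shaped like the question's (GUIDs shortened and invented here; SQLite via Python's `sqlite3`). As the answer notes, it assumes each attribute name appears at most once per GUID:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attrs (guid TEXT, attr_subtype TEXT, attr_val TEXT);
INSERT INTO attrs VALUES
    ('g1', 'location', 'US'),       ('g1', 'owner', 'himanshu'),
    ('g2', 'password', 'welcome1'), ('g2', 'host',  'retailHost'),
    ('g3', 'location', 'US'),       ('g3', 'owner', 'someone_else');
""")

# Keep only GUIDs matching *both* pairs: each OR branch matches at most
# one row per GUID, so a count of 2 means both conditions were met.
guids = [r[0] for r in conn.execute("""
    SELECT guid FROM attrs
    WHERE (attr_subtype = 'location' AND attr_val = 'US')
       OR (attr_subtype = 'owner'    AND attr_val = 'himanshu')
    GROUP BY guid
    HAVING COUNT(*) = 2
""")]
print(guids)
```

'g3' matches only the location pair (count 1), so it is filtered out; adding more pairs just means more OR branches and a higher `HAVING` count, with a single pass over the table instead of one INTERSECT per pair.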
I have a table that has records for multiple session events. Each row is an event in a session, and a session can have multiples of the same event. Those are basically game sessions and each event is round start or round end. My data looks something like ``` Session_id | Event_type | Event_Time 1 | round_start | 12:01:00 1 | round_end | 12:02:00 1 | round_start| 12:05:00 1 | round_end | 12:7:00 2 | round_start | 14:11:00 2 | round_end | 14:12:00 3 | round_start| 15:09:00 3 | round_end | 15:13:00 ``` I am trying to find the average round duration. I tried the following SQL ``` select RS.session_id, RS.Event_Time as StartTime, RE.EndTime, TIMESTAMPDIFF(MINUTE,RE.EndTime,RS.Event_Time) as duration from amp_event_mamlaka as RS left join ( select session_id, min(event_time) as EndTimd from amp_event_mamlaka where Event_Type = "Round End" and session_id = RS.session_id and event_time>RS.Event_Time ) RE on RE.session_id = RS.session_id ``` The issue is that I can't reference RS.session\_id and RS.event\_time in the joined table. I am using MySQL. Any suggestions on how to accomplish this? Thanks
A subquery, as opposed to a nested query, should only return one value. Your requirement is an example where you want data from pairs of rows. The subquery is only used to connect the pair, not supply data. `Fiddle` ``` select e1.SessionID, e1.EventType, e1.EventTime, e2.EventType, e2.EventTime, TimeStampDiff( minute, e1.EventTime, e2.EventTime ) Duration from Events e1 join Events e2 on e2.SessionID = e1.SessionID and e2.EventType = 'end' and e2.EventTime =( select Min( EventTime ) from Events where SessionID = e1.SessionID and EventType = 'end' and EventTime > e1.EventTime ) where e1.EventType = 'start'; ```
I would suggest that you approach this with a correlated subquery: ``` select RS.session_id, RS.Event_Time as StartTime, (select min(event_time) from amp_event_mamlaka em where em.session_id = RS.session_id and em.Event_Type = 'Round End' and em.event_time > RS.Event_Time ) as EndTime from amp_event_mamlaka RS; ``` You can do the timestamp difference using a subquery: ``` select RS.*, TIMESTAMPDIFF(MINUTE, EndTime, Event_Time) as duration from (select RS.session_id, RS.Event_Time as StartTime, (select min(event_time) from amp_event_mamlaka em where em.session_id = RS.session_id and em.Event_Type = 'Round End' and em.event_time > RS.Event_Time ) as EndTime from amp_event_mamlaka RS ) RS ```
Referencing a column in a sub-query's where clause
[ "mysql", "sql", "left-join", "where-clause" ]
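The start/end pairing idea shared by both answers above (find the earliest matching end after each start with a correlated subquery) can be run end-to-end. `TIMESTAMPDIFF` is MySQL-specific, so this sketch computes minutes with SQLite's `julianday()` via Python's `sqlite3`, on data modeled after the question's sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (session_id INTEGER, event_type TEXT, event_time TEXT);
INSERT INTO events VALUES
    (1, 'round_start', '12:01:00'), (1, 'round_end', '12:02:00'),
    (1, 'round_start', '12:05:00'), (1, 'round_end', '12:07:00'),
    (2, 'round_start', '14:11:00'), (2, 'round_end', '14:12:00');
""")

# Pair each start with the earliest end that follows it in the same
# session; julianday() differences are in days, so scale to minutes.
rounds = conn.execute("""
    SELECT s.session_id, s.event_time,
           CAST(ROUND((julianday(e.event_time)
                       - julianday(s.event_time)) * 24 * 60) AS INTEGER)
    FROM events s
    JOIN events e ON e.session_id = s.session_id
                 AND e.event_type = 'round_end'
    WHERE s.event_type = 'round_start'
      AND e.event_time = (SELECT MIN(event_time) FROM events
                          WHERE session_id = s.session_id
                            AND event_type = 'round_end'
                            AND event_time > s.event_time)
    ORDER BY s.session_id, s.event_time
""").fetchall()
print(rounds)
```

Each round in session 1 is matched to its own end (1 and 2 minutes), which is exactly the pairing the question's LEFT JOIN attempt could not express, since a derived table cannot reference the outer alias.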
I am new to SQL. Where I work I got a new kind of job: automating a full SQL backup. I've almost got it; it is working from SQL Server Management Studio as a query (by the way, I have to do it on the server). So I am trying to execute this query like this: "sqlcmd -E -S EUTAX-WS\REMEK -i D:\Programs\FastGlacier\backupAllDB.sql" and it's working on my PC, but on the server it doesn't. Here is my batch file ``` sqlcmd -E -S EUTAX-WS\REMEK -i D:\Programs\FastGlacier\backupAllDB.sql ``` And here is my SQL query ``` DECLARE @name VARCHAR(50) -- database name DECLARE @path VARCHAR(256) -- path for backup files DECLARE @fileName VARCHAR(256) -- filename for backup DECLARE @fileDate VARCHAR(20) -- used for file name -- please change the set @path = 'change to your backup location'. for example, -- SET @path = 'C:\backup\' -- or SET @path = 'O:\sqlbackup\' if you are using remote drives -- note that remote-drive setup is an extra step you have to perform in sql server in order to back up your dbs to a remote drive -- you have to change your sql server account to a network account and add that user to have full access to the network drive you are backing up to SET @path = 'd:\Backup\Remek\' SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112) DECLARE db_cursor CURSOR FOR SELECT name FROM master.dbo.sysdatabases WHERE name NOT IN ('master','model','msdb','tempdb') OPEN db_cursor FETCH NEXT FROM db_cursor INTO @name WHILE @@FETCH_STATUS = 0 BEGIN SET @fileName = @path + @name + '_' + @fileDate + '.BAK' BACKUP DATABASE @name TO DISK = @fileName FETCH NEXT FROM db_cursor INTO @name END CLOSE db_cursor DEALLOCATE db_cursor ``` And when I try to execute it, I get the following error: *The server principal "" is not able to access the database "" under the current security context.* Somehow I should add permission, but I cannot figure out how to do it!
``` sqlcmd -U "username" -P "password" -i D:\Programs\FastGlacier\backupAllDB.sql ```
**Please try giving permission to the folder containing the script, i.e. d:\backup** In Windows Explorer, right-click the file or folder you want to work with. From the pop-up menu, select Properties, and then in the Properties dialog box click the Security tab. In the Name list box, select the user, contact, computer, or group whose permissions you want to view. If the permissions are dimmed, it means the permissions are inherited from a parent object.
Can not execute a Full SQL backup
[ "sql", "batch-file", "backup", "database-backups" ]
Does anyone know why sql server chooses to query the table 'building' twice? Is there any explanation? Can it be done with only one table seek? This is the code sample: ``` DECLARE @id1stBuild INT = 1 ,@number1stBuild INT = 2 ,@idLastBuild INT = 5 ,@numberLastBuild INT = 1; DECLARE @nr TABLE (nr INT); INSERT @nr VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10); CREATE TABLE building ( id INT PRIMARY KEY identity(1, 1) ,number INT NOT NULL ,idStreet INT NOT NULL ,surface INT NOT NULL ) INSERT INTO building (number,idStreet,surface) SELECT bl.b ,n.nr ,abs(convert(BIGINT, convert(VARBINARY, NEWID()))) % 500 FROM ( SELECT ROW_NUMBER() OVER (ORDER BY n1.nr) b FROM @nr n1 CROSS JOIN @nr n2 CROSS JOIN @nr n3 ) bl CROSS JOIN @nr n --***** execution plan for the select below SELECT * FROM building b WHERE b.id = @id1stBuild AND b.number = @number1stBuild OR b.id = @idLastBuild AND b.number = @numberLastBuild DROP TABLE building ``` The execution plan for this is always the same: Two Clustered Index Seek unified through Merge Join (Concatenation). The rest is less important. Here is the execution plan: ![enter image description here](https://i.stack.imgur.com/kscBM.png)
You can try the following, which gives only one seek and a slight performance improvement. As @Martin_Smith says, what you have coded is the equivalent of a `UNION`. ``` SELECT * FROM building b WHERE b.id IN (@id1stBuild , @idLastBuild) AND ( (b.id = @id1stBuild AND b.number = @number1stBuild) OR (b.id = @idLastBuild AND b.number = @numberLastBuild) ) ```
It's not scanning twice. It is seeking twice. Your query is semantically the same as the below. ``` SELECT * FROM building b WHERE b.id = @id1stBuild AND b.number = @number1stBuild UNION SELECT * FROM building b WHERE b.id = @idLastBuild AND b.number = @numberLastBuild ``` And the execution plan performs two seeks and unions the result.
SQL Server - why is scanning done twice for the same table?
[ "sql", "sql-server", "sql-execution-plan" ]
I have to find the time difference in minutes for an order's lifetime, i.e. the time from when the order was received (Activity ID 1) to keyed (2) to printed (3) to delivered (4), for each order. I am completely lost at which approach I should take: a CASE or an IF/THEN statement? Something like a loop through each record? What would be the most efficient way to do it? I know that once I get the dates into the correct variables I can use DATEDIFF. ``` declare @received as Datetime, @keyed as DateTime, @printed as Datetime, @Delivered as Datetime, @TurnTime1 as int Select IF (tblOrderActivity.ActivityID = 1) SET @received = tblOrderActivity.ActivityDate --- ---- from tblOrderActivity where OrderID = 1 ``` It should show me @TurnTime1 = 48 mins, as order 1 took 48 mins from received (activity id 1) to keyed (activity id 2), and @TurnTime2 = 29 mins, as it took 29 mins for order 1 from keyed (activity id 2) to printed (activity id 3), and so on and so forth for each order.
You can do this easily by `pivoting` the data.It can be done in two ways. 1.Use `Conditional Aggregate` to pivot the data. After `pivoting` you can find `datediff` between different stages. Try this. ``` SELECT orderid,Received,Keyed,Printed,Delivered, Datediff(minute, Received, Keyed) TurnTime1, Datediff(minute, Keyed, Printed) TurnTime2, Datediff(minute, Printed, Delivered) TurnTime3 FROM (SELECT OrderID, Max(CASE WHEN ActivityID = 1 THEN ActivityDate END) Received, Max(CASE WHEN ActivityID = 2 THEN ActivityDate END) Keyed, Max(CASE WHEN ActivityID = 3 THEN ActivityDate END) Printed, Max(CASE WHEN ActivityID = 4 THEN ActivityDate END) Delivered FROM Yourtable GROUP BY OrderID)A ``` 2.use `Pivot` to transpose the data ``` SELECT orderid, [1] AS Received, [2] AS Keyed, [3] AS Printed, [4] AS Delivered, Datediff(minute, [1], [2]) TurnTime1, Datediff(minute, [2], [3]) TurnTime2, Datediff(minute, [3], [4]) TurnTime3 FROM Yourtable PIVOT (Max(ActivityDate) FOR ActivityID IN([1],[2],[3],[4]))piv ```
At first I make a list of all orders (`CTE_Orders`). For each order I get four dates, one for each ActivityID using `OUTER APPLY`. I assume that some activities could be missing (not completed yet), so `OUTER APPLY` would return `NULL` there. When I calculate durations I assume that if activity is not in the database, it hasn't happened yet and I calculate duration till the current time. You can handle this case differently if you have other requirements. I assume that each order can have at most one row for each `Activity ID`. If you can have two or more rows with the same `Order ID` and `Activity ID`, then you need to decide which one to pick by adding `ORDER BY` to the `SELECT` inside the `OUTER APPLY`. ``` DECLARE @TOrders TABLE (OrderID int, ActivityID int, ActivityDate datetime); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (1, 1, '2007-04-16T08:34:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (1, 1, '2007-04-16T08:34:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (1, 2, '2007-04-16T09:22:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (1, 3, '2007-04-16T09:51:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (1, 4, '2007-04-16T16:14:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (2, 1, '2007-04-16T08:34:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (3, 1, '2007-04-16T08:34:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (3, 2, '2007-04-16T09:22:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (3, 3, '2007-04-16T09:51:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (3, 4, '2007-04-16T16:14:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (4, 1, '2007-04-16T08:34:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) VALUES (4, 2, '2007-04-16T09:22:00'); INSERT INTO @TOrders (OrderID, ActivityID, ActivityDate) 
VALUES (4, 3, '2007-04-16T09:51:00'); WITH CTE_Orders AS ( SELECT DISTINCT Orders.OrderID FROM @TOrders AS Orders ) SELECT CTE_Orders.OrderID ,Date1_Received ,Date2_Keyed ,Date3_Printed ,Date4_Delivered ,DATEDIFF(minute, ISNULL(Date1_Received, GETDATE()), ISNULL(Date2_Keyed, GETDATE())) AS Time12 ,DATEDIFF(minute, ISNULL(Date2_Keyed, GETDATE()), ISNULL(Date3_Printed, GETDATE())) AS Time23 ,DATEDIFF(minute, ISNULL(Date3_Printed, GETDATE()), ISNULL(Date4_Delivered, GETDATE())) AS Time34 FROM CTE_Orders OUTER APPLY ( SELECT TOP(1) Orders.ActivityDate AS Date1_Received FROM @TOrders AS Orders WHERE Orders.OrderID = CTE_Orders.OrderID AND Orders.ActivityID = 1 ) AS OA1_Received OUTER APPLY ( SELECT TOP(1) Orders.ActivityDate AS Date2_Keyed FROM @TOrders AS Orders WHERE Orders.OrderID = CTE_Orders.OrderID AND Orders.ActivityID = 2 ) AS OA2_Keyed OUTER APPLY ( SELECT TOP(1) Orders.ActivityDate AS Date3_Printed FROM @TOrders AS Orders WHERE Orders.OrderID = CTE_Orders.OrderID AND Orders.ActivityID = 3 ) AS OA3_Printed OUTER APPLY ( SELECT TOP(1) Orders.ActivityDate AS Date4_Delivered FROM @TOrders AS Orders WHERE Orders.OrderID = CTE_Orders.OrderID AND Orders.ActivityID = 4 ) AS OA4_Delivered ORDER BY OrderID; ``` This the result set: ``` OrderID Date1_Received Date2_Keyed Date3_Printed Date4_Delivered Time12 Time23 Time34 1 2007-04-16 08:34:00.000 2007-04-16 09:22:00.000 2007-04-16 09:51:00.000 2007-04-16 16:14:00.000 48 29 383 2 2007-04-16 08:34:00.000 NULL NULL NULL 4082575 0 0 3 2007-04-16 08:34:00.000 2007-04-16 09:22:00.000 2007-04-16 09:51:00.000 2007-04-16 16:14:00.000 48 29 383 4 2007-04-16 08:34:00.000 2007-04-16 09:22:00.000 2007-04-16 09:51:00.000 NULL 48 29 4082498 ``` You can easily calculate other durations, like the total time for the order (time 4 - time1). Once you have several different queries that produce the same correct result that you need you should measure their performance with your real data on your system to decide which is more efficient.
how to loop thru a table to find data set?
[ "sql", "sql-server", "sql-server-2008", "t-sql" ]
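The conditional-aggregation pivot from the accepted answer can be verified against the timings in the question (48 and 29 minutes). `DATEDIFF` is SQL Server-specific, so this runnable sketch uses SQLite's `julianday()` via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblOrderActivity (OrderID INTEGER, ActivityID INTEGER, ActivityDate TEXT);
INSERT INTO tblOrderActivity VALUES
    (1, 1, '2007-04-16 08:34:00'),  -- received
    (1, 2, '2007-04-16 09:22:00'),  -- keyed
    (1, 3, '2007-04-16 09:51:00');  -- printed
""")

# Step 1: pivot to one row per order with conditional aggregation.
# Step 2: diff the stage timestamps; julianday() returns days, so
# multiply by 1440 to get minutes (DATEDIFF(minute, ...) in T-SQL).
row = conn.execute("""
    SELECT OrderID,
           CAST(ROUND((julianday(Keyed)   - julianday(Received)) * 1440) AS INTEGER),
           CAST(ROUND((julianday(Printed) - julianday(Keyed))    * 1440) AS INTEGER)
    FROM (SELECT OrderID,
                 MAX(CASE WHEN ActivityID = 1 THEN ActivityDate END) AS Received,
                 MAX(CASE WHEN ActivityID = 2 THEN ActivityDate END) AS Keyed,
                 MAX(CASE WHEN ActivityID = 3 THEN ActivityDate END) AS Printed
          FROM tblOrderActivity
          GROUP BY OrderID)
""").fetchone()
print(row)   # (1, 48, 29)
```

No cursor or loop is needed: one grouped pass pivots the events, and the differences fall out per order, matching the 48 and 29 minute turn times expected in the question.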
``` ALTER PROCEDURE [dbo].[bulk_import_csv_from_dir] -- Add the parameters for the stored procedure here @ext varchar(10), @likefilename varchar(max), @tablename varchar(max), @tabeltemplate varchar(max), @directory varchar(8000) AS BEGIN --DECLARE @daily varchar(20); DECLARE @filename varchar(255); DECLARE @directory_table table ( id int IDENTITY(1,1) ,subdirectory nvarchar(512) ,depth int ,isfile bit); SET NOCOUNT ON; BEGIN TRY ----------------------------------------------------------------- --fill temp table with listed files INSERT into @directory_table (subdirectory,depth,isfile) EXECUTE master.sys.xp_dirtree @directory,1,1; --create cursor file execute ('IF OBJECT_ID(''cur_files'') IS NOT NULL DEALLOCATE cur_files'); SELECT subdirectory as fname FROM @directory_table WHERE isfile = 1 AND RIGHT(subdirectory,4) = @ext AND subdirectory like @likefilename ORDER BY id; open cur_files; FETCH NEXT FROM cur_files INTO @filename WHILE @@FETCH_STATUS = 0 BEGIN execute ('IF OBJECT_ID('''+@tablename+''') IS NULL SELECT * INTO '+@tablename+' FROM ' + @tabeltemplate); declare @sql varchar(max); SET @sql ='BULK INSERT '+@tablename+' FROM '''+@directory+@filename+''' WITH (FIELDTERMINATOR = ''\n'', ROWTERMINATOR = '''+CHAR(10)+''', FIRSTROW = 3)'; execute(@sql); FETCH NEXT FROM cur_files INTO @filename END close cur_files; DEALLOCATE cur_files; ----------------------------------------------------------------- END TRY BEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber ,ERROR_SEVERITY() AS ErrorSeverity ,ERROR_STATE() AS ErrorState ,ERROR_PROCEDURE() AS ErrorProcedure ,ERROR_LINE() AS ErrorLine ,ERROR_MESSAGE() AS ErrorMessage; IF @@TRANCOUNT > 0 begin ROLLBACK TRANSACTION; execute ('IF OBJECT_ID(''cur_files'') IS NOT NULL DEALLOCATE cur_files'); end END CATCH; IF @@TRANCOUNT > 0 COMMIT TRANSACTION; END ``` This is my stored procedure. Why does it always show "A cursor with the name 'cur_files' does not exist." when I execute it? Did I forget something in this procedure? Thanks.
At the beginning you are deallocating the **cur\_files** `Cursor`, but you never declare the **cur\_files** `Cursor` after that. Replace ``` execute ('IF OBJECT_ID(''cur_files'') IS NOT NULL DEALLOCATE cur_files'); SELECT subdirectory as fname FROM @directory_table WHERE isfile = 1 AND RIGHT(subdirectory,4) = @ext AND subdirectory like @likefilename ORDER BY id; ``` with ``` DECLARE cur_files CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR SELECT subdirectory as fname FROM @directory_table WHERE isfile = 1 AND RIGHT(subdirectory,4) = @ext AND subdirectory like @likefilename ORDER BY id; ```
You never declare your cursor. I suspect you want something like ``` --fill temp table with listed files INSERT into @directory_table (subdirectory,depth,isfile) EXECUTE master.sys.xp_dirtree @directory,1,1; DECLARE cur_files CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR SELECT subdirectory AS fname FROM @directory_table WHERE isfile = 1 AND RIGHT(subdirectory,4) = @ext AND subdirectory LIKE @likefilename ORDER BY id; OPEN cur_files; ``` I have removed the check to deallocate the cursor if it exists, since I have added the `LOCAL` option in the declaration it will only exist in the current scope, and since you have not already declared it, it cannot already exist, so this is a redundant check. The other options are just to improve memory management, and keep your cursor as simple as possible.
SQL Cursor (A cursor with the name 'cur_files' does not exist.)
[ "", "sql", "sql-server", "" ]
I have a table named `trades` for holding currency trading data with the following schema: ``` id - uuid timestamp - timestamp without time zone price - numeric ``` I would like to be able to query in a way that I can build a candle chart. For this I need the *first price*, the *last price*, the *max price* and the *min price*, grouped by time intervals. So far I have this: ``` CREATE FUNCTION ts_round( timestamptz, INT4 ) RETURNS TIMESTAMPTZ AS $$ SELECT 'epoch'::timestamptz + '1 second'::INTERVAL * ( $2 * ( extract( epoch FROM $1 )::INT4 / $2 ) ); $$ LANGUAGE SQL; SELECT ts_round( timestamp, 300 ) AS interval_timestamp , max(price) AS max, min(price) AS min FROM trades GROUP BY interval_timestamp ORDER BY interval_timestamp DESC ``` How do I get the *first price* and *last price* within these intervals?
I think this is the query you want: ``` SELECT ts_round( timestamp, 300 ) AS interval_timestamp, max(firstprice) as firstprice, max(lastprice) as lastprice, max(price) AS maxprice, min(price) AS minprice FROM (SELECT t.*, first_value(price) over (partition by ts_round(timestamp, 300) order by timestamp) as firstprice, first_value(price) over (partition by ts_round(timestamp, 300) order by timestamp desc) as lastprice FROM trades t ) t GROUP BY interval_timestamp ORDER BY interval_timestamp DESC; ```
This uses a *single* window for all window functions and no subquery. Should be faster than the currently accepted answer. ``` SELECT DISTINCT ON (1) ts_round(timestamp, 300) AS interval_timestamp , min(price) OVER w AS min_price , max(price) OVER w AS max_price , first_value(price) OVER w AS first_price , last_value(price) OVER w AS last_price FROM trades WINDOW w AS (PARTITION BY ts_round(timestamp, 300) ORDER BY timestamp ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) ORDER BY 1 DESC; ``` To define "first" and "last" per `timestamp`, this column needs to be unique, or the query is ambiguous and you get an arbitrary pick from equal peers. Similar answer with explanation for the custom window frame: * [Given time/interval to calculate open/high/low/close value in each grouped data](https://stackoverflow.com/questions/27399054/given-time-interval-to-calculate-open-high-low-close-value-in-each-grouped-data/27399571#27399571) Explanation for the reference by ordinal numbers: * [When can we use an identifier number instead of its name in PostgreSQL?](https://stackoverflow.com/questions/24596290/when-can-we-use-an-identifier-number-instead-of-its-name-in-postgresql/24596349#24596349) Aside: don't use "timestamp" as an identifier. It's a basic type name, which is error-prone.
Selecting first and last row within a time interval
[ "", "sql", "postgresql", "aggregate-functions", "greatest-n-per-group", "window-functions", "" ]
**Table**: table\_name ``` create table table_name ( given_dates timestamp, set_name varchar ); ``` **Insertion of records**: ``` insert into table_name values('2001-01-01'),('2001-01-05'),('2001-01-10'), ('2001-01-15'),('2001-01-20'),('2001-01-25'), ('2001-02-01'),('2001-02-05'),('2001-02-10'), ('2001-02-15'); ``` Now I want to update set\_name for some dates. **For example**: I want to update the table like this: ``` given_dates set_name ---------------------- 2001-01-01 s1 2001-01-05 s1 2001-01-10 s2 2001-01-15 s2 2001-01-20 2001-01-25 2001-02-01 2001-02-05 2001-02-10 2001-02-15 ``` **Note**: The `given_dates` and `set_name` are passed as parameters because they are dynamic. I may pass 2 sets as shown above (`s1,s2`) or may pass 4 sets according to the requirement. So I need a dynamic case statement to update the `set_name`. **Given two parameters**: ``` declare p_dates varchar := '2001-01-01to2001-01-05,2001-01-10to2001-01-15'; declare p_sets varchar := 's1,s2'; ``` Well, I can do this using the following static script: **Static Update statement**: ``` update table_name SET set_name = CASE when given_dates between '2001-01-01' and '2001-01-05' then 's1' when given_dates between '2001-01-10' and '2001-01-15' then 's2' else '' end; ``` The above update statement gets the job done, but statically. I want to prepare only the case statement, dynamically, so that it changes as the parameters (`p_dates`, `p_sets`) change. **Questions**: 1. How do I split the given dates, that is `p_dates`? (I have the `to` keyword in between two dates.) 2. How do I split the given sets, that is `p_sets`? (I have a ',' comma in between two set\_names.) 3. How do I prepare the dynamic case statement after splitting `p_dates` and `p_sets`?
--- This question relates to [Dynamic case statement using SQL Server 2008 R2](https://stackoverflow.com/questions/27428544/dynamic-case-statement-using-sql-server-2008-r2/27428811#27428811), which is the same thing but for Microsoft SQL Server.
Clean setup: ``` CREATE TABLE tbl ( given_date date , set_name varchar ); ``` Use a singular term as column name for a *single* value. The data type is obviously [`date` and not a `timestamp`](https://www.postgresql.org/docs/current/datatype-datetime.html). To transform your text parameters into a useful table: ``` SELECT unnest(string_to_array('2001-01-01to2001-01-05,2001-01-10to2001-01-15', ',')) AS date_range , unnest(string_to_array('s1,s2', ',')) AS set_name; ``` "Parallel unnest" is handy but has its caveats. Postgres **9.4** adds a clean solution, Postgres **10** eventually sanitized the behavior of this. See below. ## Dynamic execution ### Prepared statement Prepared statements are only visible to the creating session and die with it. [Per documentation:](https://www.postgresql.org/docs/current/sql-prepare.html) > Prepared statements only last for the duration of the current database session. [`PREPARE`](https://www.postgresql.org/docs/current/sql-prepare.html) *once per session*: ``` PREPARE upd_tbl AS UPDATE tbl t SET set_name = s.set_name FROM ( SELECT unnest(string_to_array($1, ',')) AS date_range , unnest(string_to_array($2, ',')) AS set_name ) s WHERE t.given_date BETWEEN split_part(date_range, 'to', 1)::date AND split_part(date_range, 'to', 2)::date; ``` Or use tools provided by your client to prepare the statement. Execute n times with arbitrary parameters: ``` EXECUTE upd_tbl('2001-01-01to2001-01-05,2001-01-10to2001-01-15', 's1,s4'); ``` ### Server-side function Functions are persisted and visible to *all* sessions. 
[`CREATE FUNCTION`](https://www.postgresql.org/docs/current/sql-createfunction.html) *once*: ``` CREATE OR REPLACE FUNCTION f_upd_tbl(_date_ranges text, _names text) RETURNS void LANGUAGE sql AS $func$ UPDATE tbl t SET set_name = s.set_name FROM ( SELECT unnest(string_to_array($1, ',')) AS date_range , unnest(string_to_array($2, ',')) AS set_name ) s WHERE t.given_date BETWEEN split_part(date_range, 'to', 1)::date AND split_part(date_range, 'to', 2)::date $func$; ``` Call n times: ``` SELECT f_upd_tbl('2001-01-01to2001-01-05,2001-01-20to2001-01-25', 's2,s5'); ``` Old [sqlfiddle](http://sqlfiddle.com/#!15/ce7c3/1) ## Superior design Use array parameters (can still be provided as string literals), a [`daterange`](https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-BUILTIN) type (both pg 9.3) and the [new parallel `unnest()`](https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-TABLEFUNCTIONS) (pg **9.4**). ``` CREATE OR REPLACE FUNCTION f_upd_tbl(_dr daterange[], _n text[]) RETURNS void LANGUAGE sql AS $func$ UPDATE tbl t SET set_name = s.set_name FROM unnest($1, $2) s(date_range, set_name) WHERE t.given_date <@ s.date_range $func$; ``` [`<@` being the "element is contained by" operator.](https://www.postgresql.org/docs/current/functions-range.html#RANGE-OPERATORS-TABLE) Call: ``` SELECT f_upd_tbl('{"[2001-01-01,2001-01-05]" ,"[2001-01-20,2001-01-25]"}', '{s2,s5}'); ``` Details: * [Unnest multiple arrays in parallel](https://stackoverflow.com/questions/27836674/passing-arrays-to-stored-procedures-in-postgres/27854382#27854382)
**String\_to\_array** ``` declare p_dates varchar[] := string_to_array('2001-01-01,2001-01-05, 2001-01-10,2001-01-15*2001-01-01,2001-01-05,2001-01-10,2001-01-15','*'); declare p_sets varchar[] := string_to_array('s1,s2',','); declare p_length integer=0; declare p_str varchar[]; declare i integer; select array_length(p_dates ,1) into p_count; for i in 1..p_count loop p_str := string_to_array( p_dates[i],',') execute 'update table_name SET set_name = CASE when given_dates between'''|| p_str [1] ||''' and '''|| p_str [2] ||''' then ''' || p_sets[1] ||''' when given_dates between '''|| p_str [3] ||''' and ''' || p_str [4] ||''' then ''' || p_sets[2] ||''' else '''' end'; end loop; ```
Split given string and prepare case statement
[ "", "sql", "postgresql", "postgresql-9.3", "set-returning-functions", "unnest", "" ]
I have two tables, **PERSON** and **FRIENDS**. FRIENDS has the fields NAME and SURNAME. A person has N friends. **I want to retrieve all the PERSONs that have at least two FRIENDs, one with name ="mark", and the other with name="rocco" and surname ="siffredi".** Example: if I have a person that has 5 friends, one of them is called mark and no one is called rocco siffredi, no rows are returned. I was thinking about: ``` SELECT * FROM person p JOIN friends AS f ON p.ID=f.personID WHERE f.name ="mark" AND f IN ( SELECT * from FRIENDS WHERE name="rocco" and surname="siffredi") ``` or ``` SELECT * FROM person p JOIN friends AS f1 ON p.ID=f1.personID JOIN friends AS f2 ON p.ID=f2.personID WHERE f1.name="mark" AND f2.name="rocco" AND f2.surname="siffredi" ``` What is the best way? I mean the fastest way to execute it. I don't care about readability. Is there any other way to execute this query? Thanks. EDIT: added the join on the ID...
I had to guess your column names and make up a table: Use EXISTS: ``` CREATE table FRIENDS(person_id INT, friend_id INT) go SELECT * FROM person WHERE EXISTS (SELECT * FROM friends f JOIN person per ON f.friend_id = per.id WHERE per.name ='mark' AND person.id = f.person_id) AND EXISTS (SELECT * FROM friends f JOIN person per ON f.friend_id = per.id WHERE per.name = 'rocco' AND per.surname='siffredi' AND person.id = f.person_id) ```
Your schema design isn't very good for what you are trying to do... I would have a Person table as you have, which would also contain a unique identifier called PersonId. I would then have a Friends table which took two fields - Person1Id and Person2Id. This gives you a couple of important advantages - first of all your system is able to handle more than one bloke called John Smith (because we join on Ids rather than Names...). Secondly, a person's details are only ever recorded in the Person table. One definition of truth...
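One possible sketch of that design, using the `Person1Id`/`Person2Id` columns mentioned above (all other names and types are illustrative, not taken from the question):

```sql
-- Hypothetical redesign: people get surrogate keys,
-- and a friendship is a link between two person ids.
CREATE TABLE Person (
    PersonId INT IDENTITY(1,1) PRIMARY KEY,
    Name     VARCHAR(50) NOT NULL,
    Surname  VARCHAR(50) NOT NULL
);

CREATE TABLE Friends (
    Person1Id INT NOT NULL REFERENCES Person(PersonId),
    Person2Id INT NOT NULL REFERENCES Person(PersonId),
    PRIMARY KEY (Person1Id, Person2Id)
);
```

Because all joins are on ids, two different people who happen to share a name no longer collide, and each person's details live in exactly one row.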
SQL Query JOIN or IN operator?
[ "", "sql", "sql-server", "logical-operators", "" ]
I am having a heck of a time getting this query to work. Basically I need to do a CASE statement to get the person's NickName if they have one, otherwise use their FirstName. Then in the WHERE statement, do a LIKE statement on that above CASE statement. I've been able to get it to work if I **only** do the FirstName/NickName but when I add in other columns it stops working. Here is what I've got ``` SELECT LastName , Company , Status , CASE WHEN NickName = '' THEN FirstName WHEN NickName IS NULL THEN FirstName ELSE NickName END AS FName FROM database WHERE Status = 'Active' AND Company = '@Company' AND FName + ' ' + LastName LIKE '%@search%' OR LastName + ', ' + FName LIKE '%@search%' ``` Obviously the above doesn't work because I'm trying to use an alias in my WHERE clause. I get ``` Msg 207, Level 16, State 1, Line 14 Invalid column name 'FName'. Msg 207, Level 16, State 1, Line 15 Invalid column name 'FName'. ``` Any help is greatly appreciated. Thanks!
You can use a Common Table Expression or subquery to create the FName as a kind of virtual column of a new resultset. You can then use the resultset as a kind of virtual table and filter on it in the WHERE clause. ``` SELECT * FROM ( SELECT LastName , Company , Status , CASE WHEN NickName IS NULL OR NickName = '' THEN FirstName ELSE NickName END AS FName FROM database WHERE Status = 'Active' AND Company = '@Company' ) FNames WHERE FName + ' ' + LastName LIKE '%@search%' OR LastName + ', ' + FName LIKE '%@search%' ```
Use `CROSS APPLY` to create an alias ``` SELECT LastName , Company , Status , FName FROM database CROSS APPLY ( SELECT CASE WHEN NickName = '' THEN FirstName WHEN NickName IS NULL THEN FirstName ELSE NickName END AS FName ) AS CA1 WHERE Status = 'Active' AND Company = '@Company' AND FName + ' ' + LastName LIKE '%@search%' OR LastName + ', ' + FName LIKE '%@search%' ```
SQL - using SELECT Alias in WHERE
[ "", "sql", "sql-server", "alias", "" ]
How can I remove any whitespace from a substring of a string? For example, I have this number: '+370 650 12345'. I need all numbers to have the format `country_code rest_of_the_number`, or in this example: `+370 65012345`. How could you achieve that with PostgreSQL? I could use the `trim()` function, but then it would remove all whitespace.
Assuming the column is named `phone_number`: ``` left(phone_number, strpos(phone_number, ' ')) ||regexp_replace(substr(phone_number, strpos(phone_number, ' ') + 1), ' ', '', 'g') ``` It first takes everything up to the first space and then concatenates it with the result of replacing all spaces from the rest of the string. If you also need to deal with other whitespace than just a space, you could use `'\s'` for the search value in `regexp_replace()`
If you are able to assume that a country code will always be present, you could try using a regular expression to capture the parts of interest. Assuming that your phone numbers are stored in a column named `content` in a table named `numbers`, you could try something like the following: ``` SELECT parts[1] || ' ' || parts[2] || parts[3] FROM ( SELECT regexp_matches(content, E'^\\s*(\\+\\d+)\\s+(\\d+)\\s+(\\d+)\\s*$') AS parts FROM numbers ) t; ```
Postgresql - remove any whitespace from sub string
[ "", "sql", "regex", "postgresql", "" ]
Having the following structure: ``` Table Auction (Id_Auction (Pk), DateTime_Auction) Table Auction_Item (Id_Auction_Item (Pk), Id_Auction (Fk), Id_Winning_Bid (Fk), Item_Description) Table Bid (Id_Bid (Pk), Id_Auction_Item (Fk), Id_Bidder (Fk), Lowest_Value, Highest_Value) Table Bidder (Id_Bidder (Pk), Name) ``` Indexes for Auction are not relevant. Indexes for Auction\_Item: ``` Clustered Index PK_Auction_Item (Id_Auction_Item) NonClustered Index IX_Auction_Item_IdWinningBid (Id_Winning_Bid) ``` Indexes for Bid: ``` Clustered Index PK_Bid (Id_Bid) NonClustered Index IX_Bid_IdBidder (Id_Bidder) NonClustered Index IX_Bid_IdBid_IdBidder (Id_Bid, Id_Bidder) Unique Included (Id_Auction_Item, Lowest_Value, Highest_Value) ``` Indexes for Bidder are not relevant. I'll ask you to bear with me a little... This structure is only so you can recognize the relationships between the tables/data and is not intended to follow best practices. The actual database is really more complex (the "Bid" table has about 54 million rows). Oh, yes: each Auction\_Item will have only one "Bid per Bidder" with his highest and lowest bid. So, when I execute the following query: ``` Select Auc.Id_Auction, Itm.Id_Auction_Item, Itm.Item_Description, B.Id_Bid, B.Lowest_Value, B.Highest_Value From Auction Auc Inner Join Auction_Item Itm on Itm.Id_Auction = Auc.Id_Auction Inner Join Bid B on B.Id_Bid = Itm.Id_Winning_Bid And B.Id_Bidder = 27 Where Auc.DateTime_Auction > '2014-01-01'; ``` Why does SQL Server prefer NOT to use "IX\_Bid\_IdBid\_IdBidder", using this execution plan for **Bid** instead: ![Preferred execution plan ](https://i.stack.imgur.com/t2Fhj.gif) If I disable IX\_Bid\_IdBidder and force it to use "IX\_Bid\_IdBid\_IdBidder", everything messes up: ![enter image description here](https://i.stack.imgur.com/G66Rz.gif) I can't understand why MSSQL prefers to use 2 indexes instead of only one that completely covers the query.
My only guess is that it's faster to use the clustered index, but I can't believe that it's faster than just using the unique composite key of the other nonclustered index. Why? **Update:** As proposed by @Arvo, I changed the order of the key columns of "IX\_Bid\_IdBid\_IdBidder", making Id\_Bidder first and Id\_Bid second. Then it became the preferred index. So, once again, why is MSSQL using the less selective "Index Key" instead of the most selective key? The Id\_Bid is explicitly related in the inner join... **Old update:** I updated the query, making it even more selective. Also, I updated the index "IX\_Bid\_IdBid\_IdBidder" to include Id\_Auction\_Item. **Apologies:** The index IX\_Bid\_IdAuctionItem\_IdBidder is in fact IX\_Bid\_IdBid\_IdBidder, which INCLUDES Id\_Bid IN THE INDEX UNIQUE KEY!
Ok, I think that after a lot of research (and learning a bit more about how joins really work behind the scenes) I figured it out. For now, I'll post it only as a theory, until some SQL master says it's wrong and shows me the light, or until I'm really sure I'm right. The point is that MSSQL chooses what is fastest for the whole query, not only for the Bid table. So the analyzer has to choose whether to start from the Auction table or the Bid table (because of the conditions I specified: DateTime_Auction and Id_Bidder). In my (frivolous) mind, I thought the best execution plan would start from the Auction table: Get Auctions that match the specified date >> Get Auctions_Items matching the inner join with Auctions >> Get the Bids matching the inner join with Auction_Item AND that have Id_Bidder matching the specified id This would select a lot of rows in each "level"/nested loop, and only at the end use the specified index to exclude 90% of the data. Instead, MSSQL wants to start with the minimal data set possible. In this case, that is only the Bids of the specified bidder, since there are a lot of Auction Items that the bidder simply didn't participate in. Doing this, each nested loop has its outer table shrunk compared with "my plan": Get Bids of the specified bidder >> inner join with Auction_Item >> exclude Auctions not matching the date. If you pay attention to the rightmost nested loop, which I presume is the first nested loop, the outer table of the loop is the preselected list of Bids of a Bidder, using the appropriate index (IX_Bid_IdBidder); it then executes a scan on the clustered index, etc... To make it even better, I included the columns that were in "IX_Bid_IdBid_IdBidder" in "IX_Bid_IdBidder", so MSSQL doesn't need to execute a Key Lookup on PK_Bid.
There are a lot of Auction Items for each Auction, but only one Bid from the specified Bidder for each Auction Item, so the first nested loop selects the minimum set of valid Auction Items we will need, which also limits the Auctions we have to consider when matching the date. Thus, since we are starting from Bids, there is no "list" of Id_Bids to limit on, so MSSQL cannot use the index "IX_Bid_IdBid_IdBidder" EVEN though it covers all the fields of the query. Thinking about it now, it seems a little obvious. Anyway, thanks to everybody that helped me! My research: <http://sqlmag.com/database-performance-tuning/advanced-join-techniques> (a little outdated...) <https://technet.microsoft.com/en-us/library/ms191426%28v=sql.105%29.aspx> <https://technet.microsoft.com/en-us/library/ms191318%28v=sql.105%29.aspx> <http://blogs.msdn.com/b/craigfr/archive/2006/07/26/679319.aspx> <http://blogs.msdn.com/b/craigfr/archive/2009/03/18/optimized-nested-loops-joins.aspx>
A covering, correctly-sorted index is rarely not used by SQL Server. Only pathological cases come to mind, such as extremely low page fullness or huge unneeded additional columns. Your index is simply not covering. Look at the columns that are output. You'll discover one that you have not indexed. That column is `Id_Auction_Item`.
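If the listed index really does not cover that column, one way to make it covering is to add it as an included column. A hedged sketch, reusing the index and column names from the question (adjust to the actual definition before running):

```sql
-- Sketch: rebuild the nonclustered index so it also covers Id_Auction_Item.
-- DROP_EXISTING = ON replaces the old definition in place.
CREATE UNIQUE NONCLUSTERED INDEX IX_Bid_IdBid_IdBidder
    ON Bid (Id_Bid, Id_Bidder)
    INCLUDE (Id_Auction_Item, Lowest_Value, Highest_Value)
    WITH (DROP_EXISTING = ON);
```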
MSSQL Server Not Using NonClustered Composite Key Index (PK + FK) on InnerJoin
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm working on a Point of Sale system, developing in Java, and for the backend I am using a SQL Server database. I'm new to databases and not sure whether what I'm doing in the database is right. For every sale I'm making an invoice; the table design is like this: 1. `Invoice` table has columns `InvoiceID`, `InvoiceDate`, `CutomerID`, `MethodOfPayment`, `Comments` 2. `Line Item` table has columns `InvoiceID`, `LineitemNo`, `ProductID`, `QTY`, `Price` 3. `Transaction` table has columns `TransactionID`, `InvoiceID`, `Debit`, `Credit`, `Balance`, `InvoiceStatus` Walk-in customers are always cash customers, so I have no problem with them. The `InvoiceStatus` flag will be closed for them because no payment is left from them. Other customers also take material on credit. I need to save their Credit, Debit and Balance history in a very effective way. My problem is: for every customer, do I need a separate table to record their credit, debit and balance history, or should it all go in one table? Because in the future the table size will grow very big if I use only one table. Or, if someone has a better database design than the one I'm using now, that would be very helpful.
It's recommended to make a separate table for the pending debits. The pending debits table will be dynamic, which means that whenever an invoice is collected completely, you delete its record. Your invoices table stays as it is.
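A minimal sketch of that idea, assuming a separate pending-debits table keyed by invoice (the table and column names here are illustrative, not taken from the question):

```sql
-- Hypothetical pending-debits table: one row per invoice still owing.
CREATE TABLE PendingDebit (
    InvoiceID  INT PRIMARY KEY REFERENCES Invoice(InvoiceID),
    AmountOwed DECIMAL(18,2) NOT NULL
);

-- When an invoice is fully collected, remove its pending record;
-- the Invoice table itself is never touched.
DELETE FROM PendingDebit
WHERE InvoiceID = @InvoiceID;
```

This keeps the pending-debits table small (only open invoices), while the full invoice history stays in the invoices table.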
Some sample data might help us understand the problem a bit better. If I try to read between the lines, you are asking whether it is better to have one record for each invoice, which you update when the customer makes a payment, or a separate record for each payment associated with an invoice, which you can then sum or manipulate as you wish. If that is correct, then the answer is "it depends" on how granular your transactions need to be. In my experience accounting packages should be as auditable as possible. I would suggest that your transactions table have a separate record for each "transaction", where an invoice is a transaction, each payment is a transaction, etc. You could have a column for "transaction_type". You also talk about having a "table for each customer"; the answer to that is no.
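To make the suggestion concrete, here is a hedged sketch of such an auditable transactions table. Only `transaction_type` comes from the text above; every other name and column is an assumption for illustration:

```sql
-- Hypothetical: every financial event is its own row and is never updated,
-- which keeps the full history auditable.
CREATE TABLE CustomerTransaction (
    TransactionID    INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID       INT NOT NULL,
    InvoiceID        INT NULL,                 -- NULL for events not tied to an invoice
    Transaction_Type VARCHAR(10) NOT NULL,     -- e.g. 'INVOICE', 'PAYMENT'
    Amount           DECIMAL(18,2) NOT NULL,   -- positive = debit, negative = credit
    TransactionDate  DATETIME NOT NULL DEFAULT GETDATE()
);

-- The balance is then derived rather than stored:
SELECT CustomerID, SUM(Amount) AS Balance
FROM CustomerTransaction
GROUP BY CustomerID;
```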
Credit debit and balance of a customer in a SQL Server database
[ "", "sql", "sql-server", "accounting", "" ]
I'm trying to alter the course\_id column in the table 'course' but I keep getting this error: ``` CREATE TABLE course ( course_id varchar(10) PRIMARY KEY, title varchar(30), dep_name varchar(10), credits numeric(2,2) CHECK (credits>0) ); ALTER TABLE takes ALTER COLUMN course_id varchar(10) REFERENCES course(course_id); ``` > ERROR: syntax error at or near "varchar" LINE 1: ALTER TABLE takes > ALTER COLUMN course\_id varchar(10) REFEREN...
Altering a column's type and adding a foreign key on it are two different statements: ``` ALTER TABLE takes ALTER COLUMN course_id TYPE VARCHAR(10); ALTER TABLE takes ADD CONSTRAINT takes_course_fk FOREIGN KEY (course_id) REFERENCES course(course_id); ```
Your syntax is wrong. It must be: ``` ALTER TABLE takes ALTER COLUMN course_id TYPE varchar(10) ; ```
PostgreSQL ERROR: syntax error at or near "varchar"
[ "", "sql", "postgresql", "foreign-keys", "ddl", "" ]
I have a view in one of my databases that is retrieving the previous and current case officers(think person) from a few tables and views. The issue is these records are only linked by the end date(saoh.Date\_TO) being the same as another case officers start date (saoh.Date\_FROM). To create a join between these records I am currently doing an outer join with an inner join inside it. (This can be seen in the script below). The issue is the view has ~3 million records. This means to then query this view is taking an extremely long time (2-3 hours). Does anyone have any suggestions on how to improve the fundamental design of the SQL below. **Further Information** **Environment:** MSSQL server 2008 **Other Info:** Snapshot\_Period is a tool for reporting and not linked to the case officer dates. ``` ALTER VIEW [dbo].[vw_Stage_Estate_Case_Officer_Source] AS SELECT sp.SNAPSHOT_PERIOD_START_DATETIME, sp.SNAPSHOT_PERIOD_END_DATETIME, aes.APPLICATION_RID, aes.ESTATE_RID, aes.TRUSTEE_NUMBER, aes.TRUSTEE_TYPE, aes.TEAM_CODE, saoh.POSITION, saoh.DATE_FROM, saoh.DATE_TO, saoh.ERROR_CONDITION, saoch.USER_ID as PRIOR_CASE_OFFICER FROM Stage_App_Estate_Statuses aes /*Standard snapshot period new join for MonthlyITS and yearlyTIS*/ INNER JOIN Stage_Snapshot_Period_New sp ON change_date < sp.SNAPSHOT_PERIOD_END_DATETIME and sp.SNAPSHOT_PERIOD_IS_FINALISED_INDICATOR = 'No'and (sp.SNAPSHOT_PERIOD_TYPE_NAME = 'MonthlyITS' or sp.SNAPSHOT_PERIOD_TYPE_NAME = 'YearlyITS') /*This should be inner joining to the staging table that links Case officers to team codes by region*/ INNER JOIN [DEV_STAGING].[dbo].[STAGE_STAF_ACTION_OFFICERS] saoh ON aes.TEAM_CODE = saoh.POSITION AND LEFT(aes.ESTATE_RID,3) = LEFT(saoh.ACTION_OFFICER_RID,3) /*This should be inner joining to App_Estate_Statues again to get the previous UserID*/ LEFT OUTER JOIN (SELECT staf.USER_ID, staf.DATE_FROM, staf.DATE_TO, a.TEAM_CODE, a.ESTATE_RID FROM [DEV_STAGING].[dbo].[STAGE_STAF_ACTION_OFFICERS] staf INNER JOIN 
Stage_App_Estate_Statuses a ON a.TEAM_CODE = staf.POSITION) saoch ON saoh.DATE_TO = saoch.DATE_FROM AND saoch.ESTATE_RID = aes.ESTATE_RID GO ``` **Table Definitions** **Snapshot Period:** Irrelevant / Must stay as is. **Stage App Estate Statuses:** ``` CREATE TABLE [dbo].[Stage_App_Estate_Statuses]( [ESTATE_STATUS_RID] [nvarchar](20) NOT NULL, [ESTATE_RID] [nvarchar](30) NULL, [CATEGORY] [decimal](1, 0) NULL, [CHANGE_DATE] [datetime2](0) NULL, [CHANGE_TYPE] [nvarchar](3) NULL, [STATUS] [nvarchar](1) NULL, [TEAM_CODE] [nvarchar](4) NULL, [TRUSTEE_NUMBER] [decimal](22, 0) NULL, [TRUSTEE_TYPE] [nvarchar](10) NULL, [APPLICATION_RID] [nvarchar](30) NULL, [DML_TYPE] [nvarchar](1) NOT NULL, [AUDIT_KEY] [int] NOT NULL ) ON [PRIMARY] ``` **Stage\_STAF\_ACTION\_OFFICERS** ``` CREATE TABLE [dbo].[Stage_STAF_ACTION_OFFICERS]( [ACTION_OFFICER_RID] [nvarchar](15) NOT NULL, [POSITION] [nvarchar](13) NULL, [DATE_FROM] [date] NULL, [DATE_TO] [date] NULL, [USER_ID] [nvarchar](32) NULL, [ERROR_CONDITION] [nvarchar](100) NULL, [EXTRACTED_DATE] [date] NULL, [DML_TYPE] [nvarchar](1) NULL, [AUDIT_KEY] [int] NOT NULL ) ON [PRIMARY] ```
It's hard not having anything to test against, so just treat this as a starting point ``` SELECT sp.SNAPSHOT_PERIOD_START_DATETIME, sp.SNAPSHOT_PERIOD_END_DATETIME, aes.APPLICATION_RID, aes.ESTATE_RID, aes.TRUSTEE_NUMBER, aes.TRUSTEE_TYPE, aes.TEAM_CODE, saoh.POSITION, saoh.DATE_FROM, saoh.DATE_TO, saoh.ERROR_CONDITION, (select USER_ID from [DEV_STAGING].[dbo].[STAGE_STAF_ACTION_OFFICERS] sao where sao.DATE_FROM = saoh.DATE_TO and (select ESTATE_RID from Stage_App_Estate_Statuses where TEAM_CODE = sao.POSITION) = aes.ESTATE_RID )as PRIOR_CASE_OFFICER --Even if this approach doesn't help, keep the original join as outer because there might not be a prior_case_officer FROM Stage_App_Estate_Statuses aes /*This should be inner joining to the staging table that links Case officers to team codes by region*/ INNER JOIN [DEV_STAGING].[dbo].[STAGE_STAF_ACTION_OFFICERS] saoh ON aes.TEAM_CODE = saoh.POSITION AND LEFT(aes.ESTATE_RID,3) = LEFT(saoh.ACTION_OFFICER_RID,3) /*Standard snapshot period new join for MonthlyITS and yearlyTIS*/ INNER JOIN (select SNAPSHOT_PERIOD_START_DATETIME, SNAPSHOT_PERIOD_END_DATETIME from Stage_Snapshot_Period_New where SNAPSHOT_PERIOD_IS_FINALISED_INDICATOR = 'No' and SNAPSHOT_PERIOD_TYPE_NAME in ('MonthlyITS','YearlyITS') ) sp ON change_date < sp.SNAPSHOT_PERIOD_END_DATETIME ```
In this case you may be able to turn the "inner join in the outer join" into multiple left joins, though your where clause will then need to handle the situations that the inner join in the subquery was dealing with, for example: ``` ... query same up to the left join ... /*This should be inner joining to App_Estate_Statues again to get the previous UserID*/ LEFT OUTER JOIN [dbo].[STAGE_STAF_ACTION_OFFICERS] saoch ON saoh.DATE_TO = saoch.DATE_FROM AND saoch.POSITION = aes.TEAM_CODE LEFT OUTER JOIN Stage_App_Estate_Statuses a ON a.TEAM_CODE = saoch.POSITION AND a.ESTATE_RID = aes.ESTATE_RID WHERE (saoch.ACTION_OFFICER_RID IS NULL) OR (saoch.ACTION_OFFICER_RID IS NOT NULL AND a.ESTATE_STATUS_RID IS NOT NULL) ``` I simplified the tables to remove the irrelevant bits and added some dummy data; with the dummy data I used this query returns the same results as the original but a different query plan -- meaning you'll have to test it to determine if it performs better or not. Here's [the SQLFiddle I used](http://sqlfiddle.com/#!3/3971be/10/0).
An Inner Join in an Outer Join leading to performance shortfalls, what is a different approach?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "dimensional-modeling", "" ]
I have a visitor system. When a visitor checks in, it sets a `date_in`, and when the visitor checks out, it sets a `date_out`. But if a visitor forgets to check out, he has no `date_out`, so I'm trying to figure out how to set the `date_out` when the visitor didn't check out. An example: Check in: 2015-01-19 12:00:00. The visitor forgets to check out that day. So when the time reaches 2015-01-19 23:59:59, I want that to be set automatically as the `date_out`, because with another query I show all the visitors without a `date_out`, so I can see who is in the building that day. Is there any way to do this automatically? Table structure ![Table structure](https://i.stack.imgur.com/NUHOU.png) date\_in is set when the visitor checks in with his name.
My suggestion - rather than editing the data (and subsequently not being able to identify genuine check-outs at 23:59), just update the query: ``` SELECT * /* TODO - actual columns */ FROM visits WHERE date_out IS NULL AND date_in >= DATEADD(day,DATEDIFF(day,0,CURRENT_TIMESTAMP),0) ``` Where the expression `DATEADD(day,DATEDIFF(day,0,CURRENT_TIMESTAMP),0)` is just a way of saying "midnight at the start of today". The above query should give you the same results as what you're asking for, but leave the actual recorded data intact. That is, only people who've entered today but not left are reported. The reason I'd recommend not changing the recorded data is in case you decide to change the rules later - i.e. using a different interval than one day to consider a visitor to have left.
You could create a job that runs at 23:59:59 every day and fills the `date_out` with data.
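The statement such a job could run might look like this sketch. The table name `visits` is an assumption (the real name is only visible in the question's screenshot), and it assumes open visits are closed at the end of their check-in day:

```sql
-- Sketch: close any visit from a previous day that was never checked out,
-- setting date_out to 23:59:59 of the check-in day.
UPDATE visits  -- hypothetical table name
SET date_out = DATEADD(second, -1,
               DATEADD(day, 1, CAST(CAST(date_in AS date) AS datetime)))
WHERE date_out IS NULL
  AND date_in < CAST(CAST(GETDATE() AS date) AS datetime);
```

Scheduled daily (for example as a SQL Server Agent job), this leaves today's genuinely open visits untouched.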
SQL set date_out automatically if today ends
[ "", "sql", "sql-server-2008", "" ]
Is there any way to simply add a not null unique column to an existing table? Something like default = 1++ ? Or simply add a unique column? I tried to add the column and then put a unique constraint on it, but MS SQL says: The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name (...) The duplicate key value is ( < NULL > ). Is there any way to simply add a column with a unique constraint to an existing, working table? Should MS SQL really think that NULL IS a value?
1. Add a not null column with some default. 2. Update the column to sequential integers (see the row\_number() function) 3. Add a UNIQUE constraint or UNIQUE index over the new column You can also add an IDENTITY column to the table (but from the question it is not clear whether you need one or not).
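As a sketch of those three steps in T-SQL (the table and column names below are placeholders, not taken from the question):

```
-- 1. Add the NOT NULL column with a throwaway default
ALTER TABLE MyTable ADD NewCol INT NOT NULL
    CONSTRAINT DF_MyTable_NewCol DEFAULT 0;

-- 2. Backfill it with sequential integers so every value is distinct
WITH numbered AS (
    SELECT NewCol, ROW_NUMBER() OVER (ORDER BY ID) AS rn
    FROM MyTable
)
UPDATE numbered SET NewCol = rn;

-- 3. Only now can the UNIQUE constraint be created
ALTER TABLE MyTable ADD CONSTRAINT UQ_MyTable_NewCol UNIQUE (NewCol);
```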
[IDENTITY](http://blog.sqlauthority.com/2013/05/30/sql-server-add-identity-column-to-table-based-on-order-of-another-column/) is all you need: ``` ALTER TABLE TestTable ADD ID INT IDENTITY(1, 1) ```
How to add not null unique column to existing table
[ "", "sql", "sql-server", "unique", "" ]
``` IF OBJECT_ID('Tempdb..#TempTable') IS NOT NULL DROP TABLE #TempTable CREATE TABLE #TempTable ( [ID] INT NOT NULL , [Value] VARCHAR(50) NULL , [Date] DATE NULL , [Time] TIME(7) NULL , [Duration] INT NULL , [srcFile] VARCHAR(50) NULL, ) INSERT #TempTable ( [ID], [Value], [Date], [Time], [Duration], [srcFile] ) VALUES ( 1, N'One', CAST(N'2014-07-29' AS DATE), CAST(N'23:34:00' AS TIME), 1710, N'sF1' ), ( 2, N'One', CAST(N'2014-07-30' AS DATE), CAST(N'00:00:10' AS TIME), 1710, N'sF1' ), ( 3, N'One', CAST(N'2014-07-30' AS DATE), CAST(N'01:30:00' AS TIME), 1710, N'sF1' ), ( 4, N'One', CAST(N'2014-07-30' AS DATE), CAST(N'01:54:00' AS TIME), 1710, N'sF1' ), ( 5, N'One', CAST(N'2014-07-30' AS DATE), CAST(N'13:30:00' AS TIME), 1710, N'sF1' ), ( 6, N'One', CAST(N'2014-07-30' AS DATE), CAST(N'13:57:00' AS TIME), 1710, N'sF2' ), ( 7, N'One', CAST(N'2014-07-30' AS DATE), CAST(N'23:34:00' AS TIME), 1710, N'sF1' ), ( 8, N'One', CAST(N'2014-07-31' AS DATE), CAST(N'00:00:10' AS TIME), 1710, N'sF2' ), ( 9, N'One', CAST(N'2014-07-31' AS DATE), CAST(N'00:10:10' AS TIME), 1710, N'sF3' ), ( 10, N'One', CAST(N'2014-08-01' AS DATE), CAST(N'00:00:00' AS TIME), 1710, N'sF2' ), ( 11, N'One', CAST(N'2014-08-01' AS DATE), CAST(N'00:00:00' AS TIME), 1710, N'sF1' ), ( 12, N'One', CAST(N'2014-08-01' AS DATE), CAST(N'01:00:00' AS TIME), 1710, N'sF3' ), ( 13, N'One', CAST(N'2014-08-01' AS DATE), CAST(N'01:00:00' AS TIME), 1710, N'sF4' ), ( 14, N'Two', CAST(N'2014-08-01' AS DATE), CAST(N'00:01:00' AS TIME), 1710, N'sF2' ) SELECT * FROM #TempTable ``` Base Table ``` ID Value Date Time Duration srcFile 1 One 7/29/2014 23:34:00 1710 sF1 2 One 7/30/2014 0:00:10 1710 sF1 3 One 7/30/2014 1:30:00 1710 sF1 4 One 7/30/2014 1:54:00 1710 sF1 5 One 7/30/2014 13:30:00 1710 sF1 6 One 7/30/2014 13:57:00 1710 sF2 7 One 7/30/2014 23:34:00 1710 sF1 8 One 7/31/2014 0:00:10 1710 sF2 9 One 8/1/2014 0:00:00 1710 sF2 10 Two 8/1/2014 0:01:00 1710 sF2 11 One 8/1/2014 0:00:00 1710 sF1 ``` Requirement: When [Value] + [Date] 
+ [Time] match then Dup Output: Mark isDup flag with 1 and dupFIle with the srcFile for two or more records where dup condition matches. When [Value] match and [Date] + [Time] of any two or more records fall within [Date] + [Time] PLUS (+) [Duration] then Overlap (note: when ALL matching records are DUP...they can't also be overlap..but overlap can have at least one unique record and multiple dups that fall within the duration time frame). Output: Mark isOverlap flag with 1 and overlapFile with the srcFile for two or more records where overlap condition matches. This is what I tried ``` ;WITH dupCTE AS ( SELECT ID, Value, [Date], [Time], Duration, srcFile ,CASE WHEN COUNT(*) OVER (PARTITION BY Value, [Date], [Time]) > 1 THEN 1 ELSE 0 END AS isDup ,CASE WHEN COUNT(*) OVER (PARTITION BY Value, [Date], [Time]) > 1 THEN STUFF((SELECT ' - ' + srcFile FROM #TempTable T WHERE T.Value = TT.Value AND T.[Date] = TT.[Date] AND T.[Time] = TT.[Time] FOR XML PATH('')), 1, 3, '') ELSE NULL END AS dupFIle FROM #TempTable TT ) , overlapCTE AS ( SELECT A. 
ID, A.Value, A.[Date], A.[Time], A.Duration, A.srcFile, A.isDup, A.dupFIle ,CASE WHEN B.ID IS NOT NULL THEN 1 ELSE 0 END AS 'isOverlap' ,CASE WHEN b.ID IS NOT NULL THEN STUFF((SELECT ' - ' + srcFile FROM #TempTable T WHERE T.Value = A.Value AND ((CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2) > CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2) AND CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2) < DATEADD(SECOND, A.Duration, CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2))) OR (CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2) > CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2) AND CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2) < DATEADD(SECOND, B.Duration, CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2)))) FOR XML PATH('')), 1, 3, '') ELSE NULL END AS 'overlapFiles' FROM dupCTE A LEFT JOIN dupCTE B ON A.Value = B.Value AND ((CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2) > CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2) AND CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2) < DATEADD(SECOND, A.Duration, CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2))) OR (CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2) > CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2) AND CAST(CAST(A.[Date] AS VARCHAR(10)) + ' ' + CAST(A.[Time] AS VARCHAR(16)) AS DateTime2) < DATEADD(SECOND, B.Duration, CAST(CAST(B.[Date] AS VARCHAR(10)) + ' ' + CAST(B.[Time] AS VARCHAR(16)) AS DateTime2)))) WHERE A.isDup = 1 OR B.ID IS NOT NULL ) SELECT * FROM overlapCTE DROP TABLE 
#TempTable ``` Current Output ``` ID Value Date Time Duration srcFile isDup dupFIle isOverlap overlapFiles 1 One 2014-07-29 23:34:00 1710 sF1 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 2 One 2014-07-30 00:00:10 1710 sF1 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 3 One 2014-07-30 01:30:00 1710 sF1 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 4 One 2014-07-30 01:54:00 1710 sF1 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 5 One 2014-07-30 13:30:00 1710 sF1 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 6 One 2014-07-30 13:57:00 1710 sF2 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 7 One 2014-07-30 23:34:00 1710 sF1 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 8 One 2014-07-31 00:00:10 1710 sF2 0 NULL 1 sF1 - sF1 - sF1 - sF1 - sF1 - sF2 - sF1 - sF2 - sF2 - sF1 9 One 2014-08-01 00:00:00 1710 sF2 1 sF2 - sF1 0 NULL 11 One 2014-08-01 00:00:00 1710 sF1 1 sF2 - sF1 0 NULL ``` Desired Output ``` ID Value Date Time Duration srcFile isDup dupFIle isOverLap overlapFile 1 One 2014-07-29 24:34:00 1710 sF1 0 NULL 1 sF1 - sF1 2 One 2014-07-30 00:00:10 1710 sF1 0 NULL 1 sF1 - sF1 3 One 2014-07-30 01:30:00 1710 sF1 0 NULL 1 sF1 - sF1 4 One 2014-07-30 01:54:00 1710 sF1 0 NULL 1 sF1 - sF1 5 One 2014-07-30 13:30:00 1710 sF1 0 NULL 1 sF1 - sF2 6 One 2014-07-30 13:57:00 1710 sF2 0 NULL 1 sF2 - sF1 7 One 2014-07-30 24:34:00 1710 sF1 0 NULL 1 sF1 - sF2 8 One 2014-07-31 00:00:10 1710 sF2 0 NULL 1 sF2 - sF1 9 One 2014-08-01 00:00:00 1710 sF2 1 sF2 - sF1 0 NULL 10 Two 2014-08-01 00:01:00 1710 sF2 0 NULL 0 NULL 11 One 2014-08-01 00:00:00 1710 sF1 1 sF1 - sF2 0 NULL ``` I am not meeting the requirement. Any help would be appreciated. Thank you Update: Added Current Output Update2: Found a mistake in Dup CTE (ID was used instead of Value). The desired output is still left to be desired. Update3: Progress folks, we are very close. 
Now the overlap logic is "working." The one major issue is the overlapFiles. It should only list the files for the records that actually overlap each other (right now, it's listing all the files from the overlapCTE output instead of specifically listing only those that meet the WHERE within the STUFF query). Also, is there a way to get that unique record listed? Update4: Added more records to see if the duplicate and overlap queries can accommodate more than just two records.
This should give you exactly what you're after: ``` With CTE as (Select T.ID ID1, T.srcFile + ' - ' + c.srcFile over1, '1' as isDup from #TempTable T INNER JOIN #TempTable c on T.Value = c.Value and c.ID <> T.ID and (Cast(C.Date as datetime) + Cast(C.Time as datetime)) = (Cast(T.Date as datetime) + Cast(T.Time as datetime))), CTE2 as (Select T.ID ID1, c.ID ID2, T.srcFile + ' - ' + c.srcFile over1, c.srcFile + ' - ' + T.srcFile over2, '1' as isOverLap from #TempTable T INNER JOIN #TempTable c on T.Value = c.Value and c.ID <> T.ID Where DateAdd(second, c.Duration, Cast(C.Date as datetime) + Cast(C.Time as datetime)) > (Cast(T.Date as datetime) + Cast(T.Time as datetime)) and (Cast(C.Date as datetime) + Cast(C.Time as datetime)) < (Cast (T.Date as datetime) + Cast(T.Time as datetime))) Select T.*, ISNULL((Select top 1 c.isDup from CTE c where c.ID1 = T.ID) ,0) isDup ,(Select substring((select ',' + c1.over1 as [text()] from CTE c1 where c1.ID1 = T.ID for xml path ('')),2,1000)) dupFile ,ISNULL((select Top 1 case isOverLap when 1 then 1 else 0 end from CTE2 c where c.ID1 = T.ID or C.ID2 = T.ID),0) isOverLap ,(Select substring((select case when T.ID = C.ID1 then ',' + c.over1 else ',' + c.over2 end as [text()] from CTE2 c where c.ID1 = T.ID or C.ID2 = T.ID for xml path('')),2,1000)) OverlapFile from #TempTable T ```
The code below follows your requirement (hopefully). I tested it by adding more overlaps and duplicates; it works with more than just 2 duplicate or overlapping files (e.g. srcFile='sF3'), with these observations: 1. dupFile - the list is always ordered by filename 2. overlapfile - if there is only one file, there is no pair "sF1 - sF1", just "sF1" - I'm not sure if this is necessary for production purposes, but it could be tweaked (not in this case yet) --- ``` with rows as ( select [ID],[Value], [Date], [Time], [Duration], [srcFile], cast(cast([date] as varchar(10))+' ' +cast(time as varchar(8)) as datetime) as datetime, dateadd(ss,-duration,cast(cast([date] as varchar(10))+' ' +cast(time as varchar(8)) as datetime)) as date_from, dateadd(ss,duration,cast(cast([date] as varchar(10))+' ' +cast(time as varchar(8)) as datetime)) as date_to from #TempTable ) , dups as ( SELECT [value], [Date], [Time] FROM rows group by [value], [Date], [Time] having count([ID])>1 ) , dups_files as ( select r.* , STUFF((select ' - '+d.srcFile FROM rows as d WHERE (r.[value]=d.value and r.time=d.time and r.date=d.date) order by d.srcFile FOR xml path('')),1,3,'') as dupFile FROM dups r ) , duplicities as ( select a.id, d.dupFile from rows a join dups_files d on ( a.value=d.value and a.date=d.date and a.time=d.time) ) , overlaps_pairs as ( select f.id as id_a, d.id as id_b, f.srcfile as srcfile_a, d.srcfile as srcfile_b from rows f JOIN rows d ON (f.id<>d.id AND f.value=d.value and f.datetime between d.date_from and d.date_to) where not exists (select 1 FROM duplicities du where f.id=du.id) ) , overlaps as ( select DISTINCT op.id_a as id, 1 as isoverlap , STUFF((select distinct ' - ' +aa.srcFile_b from overlaps_pairs aa where aa.id_a = op.id_a or op.id_b=aa.id_a for xml path ('')),1,3,'') as overlapfiles from overlaps_pairs op ) select a.id, a.value, a.date, a.time, a.duration, a.srcFile, case when d.id is not null then 1 else 0 end as isDup, d.dupFile, o.isoverlap, o.overlapfiles from rows a LEFT OUTER
JOIN overlaps o on (a.id=o.id) LEFT OUTER JOIN duplicities d on (a.id=d.id); ```
T/SQL - Time related calculations
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I am getting an error message when I'm trying to give an alias to a sub-query and then reference it in a WHERE clause: *Unknown column 'point\_status' in 'where clause'* ``` SELECT goals.id, goals.status, ifnull((SELECT point_status FROM pts WHERE point_id = ?), goals.status) as point_status FROM goals WHERE goals.name = ? AND point_status > 1 ``` I do not understand why it isn't allowing me to use this alias. Is it possible to use that alias somehow in the where clause without modifying the functionality of the query?
You cannot use a column alias in the `WHERE` clause of the same `SELECT` statement. You can make it a sub-select and use the alias in the outer query to filter the records. ``` select * from (SELECT goals.id, goals.status, ifnull((SELECT point_status FROM pts WHERE point_id = ?), goals.status) as point_status FROM goals WHERE goals.name = ?)a where point_status>1 ```
> It is not allowable to refer to a column alias in a WHERE clause, because the column value might not yet be determined when the WHERE clause is executed. That means that you can't use aliases in a `where` clause. Try replacing `where` with `having`.
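To sketch that suggestion: MySQL permits `HAVING` to reference a select-list alias (a non-standard extension, and depending on sql_mode it works even without a `GROUP BY`), so the query could be written as:

```
SELECT goals.id, goals.status,
       ifnull((SELECT point_status FROM pts WHERE point_id = ?),
              goals.status) AS point_status
FROM goals
WHERE goals.name = ?
HAVING point_status > 1;
```

Standard SQL reserves `HAVING` for grouped queries, so the sub-select form is the more portable option.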
Why am I getting Unknown column in 'where clause' using a sub-query alias with MySQL?
[ "", "mysql", "sql", "" ]
Say I have two tables: Product ``` product_id (other fields are of no concern) ``` Sku ``` product_id sku_id color_id color_name (other fields such as size but unimportant) 001 11 5 green 001 12 1 black 001 13 3 red 002 21 1 black 002 22 2 yellow 002 23 8 magenta 002 24 9 turquoise ``` I need to rewrite a query that gets a list of product ids with comma delimited lists for all colors/color ids associated with that product. The color ids/names must have the same order in both lists. Desired output: ``` product_id colorIds colorNames 001 1,3,5 black,red,green 002 1,2,8,9 black,yellow,magenta,turquoise ``` Note that the concat list of color ids' order maps to the color names order. Current output: ``` product_id colorIds colorNames 001 1,3,5 green,black,red -- out of order sometimes 002 1,2,8,9 black,yellow,magenta,turquoise ``` Currently used query: ``` select distinct(p.product_id) as product_id, (select decode(dbms_lob.SubStr(wm_concat(DISTINCT color_name)),'NO COLOR','','No Color','','no color','',null,'',dbms_lob.SubStr(wm_concat(DISTINCT color_name))) as color_name from sku where product_id = p.product_id) as colorName, (select decode(dbms_lob.SubStr(wm_concat(DISTINCT color_code)),'000','',dbms_lob.SubStr(wm_concat(DISTINCT color_code))) from sku where product_id = p.product_id) as colorCode from product p; ``` I was thinking of just adding order by clauses in those sub selects, but the query just errors out, saying missing right parenthesis - oddly there seemed to be no mismatched parens. Any suggestions are welcome. Edit \* The above query is highly simplified. In reality it joins with over a dozen other tables to get other data columns related to the product, most of which are non-aggregate pieces of data. The solution should have no group by clause in the main query or suggest a reasonable way to accommodate this requirement.
This might work for you: ``` SELECT p.product_id , LISTAGG(s.color_id, ',') WITHIN GROUP ( ORDER BY s.color_id ) AS colorIds , LISTAGG(s.color_name, ',') WITHIN GROUP ( ORDER BY s.color_id ) AS colorNames FROM product p LEFT JOIN ( SELECT DISTINCT product_id, color_id, color_name FROM sku ) s ON p.product_id = s.product_id GROUP BY p.product_id ORDER BY product_id ``` `LISTAGG()` can be sorted while `WM_CONCAT()` can't (and it's undocumented, etc.). **UPDATE per OP's comment about non-aggregate data:** ``` WITH product_colors AS ( SELECT p.product_id , LISTAGG(s.color_id, ',') WITHIN GROUP ( ORDER BY s.color_id ) AS colorIds , LISTAGG(s.color_name, ',') WITHIN GROUP ( ORDER BY s.color_id ) AS colorNames FROM product p LEFT JOIN ( SELECT DISTINCT product_id, color_id, color_name FROM sku ) s ON p.product_id = s.product_id GROUP BY p.product_id ) SELECT t1.other_column, t2.other_column, etc. FROM table1 t1 JOIN table2 t2 ON ... JOIN product_colors pc ON ... ```
This will achieve the `distinct` effect (you cannot use `distinct` with `listagg`): ``` select product_id, listagg(color_id, ',') within group(order by color_id) as colorids, listagg(color_name, ',') within group(order by color_id) as colornames from (select distinct product_id, color_id, color_name from sku) group by product_id ``` If you want to show columns from the `product` table and/or you want to show products on the `product` table not on the `sku` table you can use: ``` select p.product_id, listagg(s.color_id, ',') within group(order by s.color_id) as colorids, listagg(s.color_name, ',') within group(order by s.color_id) as colornames from product p left join (select distinct product_id, color_id, color_name from sku) s on p.product_id = s.product_id group by p.product_id ```
Order By in subselect using concat and decode
[ "", "sql", "oracle", "select", "oracle11g", "" ]
I currently have an inelegant solution that requires iterating through thousands of rows one at a time and I would like to know if it's possible to do this with a single SQL statement. I have a database table called `history` that holds a record of all transactions on another table called `inventory`. Both tables share a column called `pKey` (the foreign key). My inventory table: ``` | ID | pKey | model | room | ip | active | ... | ``` My history table: **The pKey is the foreign key** ``` | ID | pKey | fieldName | oldValue | newValue | ``` In the history table there can be more than 20 transactions for a single pKey. I would like to find all rows in the history that have: 1. fieldName of A AND oldValue of B 2. fieldName of C AND oldValue of D 3. the same pKey. I.e. say we find that we have a row in the history table with fieldName of A and oldValue of B, the result will only be valid if the pKey associated with that row also turns up in the search for rows with a fieldName of C and oldValue of D. After doing some research it seems that SELF JOIN would be a good bet, but I am getting errors since I'm trying to do a SELF JOIN on the same column. Here is my statement: ``` SELECT pKey FROM `history` INNER JOIN history ON history.pKey=history.pKey WHERE `fieldName` = 'ipAddress' AND `oldValue` LIKE '129.97%' ``` **EDIT:** I made a mistake writing my statement here; I only mean to select the pKey results.
When you need to perform SELF-JOIN, you have to give an alias to the copy of your table being joined. Please find the query below - here both sides were aliased: ``` SELECT history_AB.pKey FROM history AS history_AB INNER JOIN history AS history_CD ON history_AB.pKey = history_CD.pKey WHERE (history_AB.fieldName = 'A' AND history_AB.oldValue LIKE 'B%') AND (history_CD.fieldName = 'C' AND history_CD.oldValue LIKE 'D%') ``` In the `WHERE` clause there are conditions you mentioned in your question: > (...) say we find that we have a row in the history table with > fieldName of A and old Value of B, the result will only be valid if > the pKey associated with that row also turns up in the search for rows > with a fieldName of C and oldValue of D I hope I understood the problem well and it might help you some way.
You have to identify the two instances of history as separate elements. Using your "ABCD" example instead of your statement: ``` SELECT * FROM history h1 INNER JOIN history h2 ON h1.pKey=h2.pKey WHERE h1.fieldName = 'A' AND h1.oldValue = 'B' AND h2.fieldName = 'C' AND h2.oldValue = 'D' ```
Is there a way to do a SELF JOIN on on the same column?
[ "", "mysql", "sql", "inner-join", "one-to-many", "foreign-key-relationship", "" ]
I have an `ActiveRecord` model `Post` that contains the fields `created_at` and `score`. I want to order the `Posts` by score among the last 50 created. So basically the SQL would be ``` SELECT * FROM (SELECT * FROM Posts ORDER BY created_at DESC LIMIT 50) Sorted_Posts ORDER BY score DESC ``` Unfortunately I'm not sure how to do this in Rails. I prefer not to use raw SQL to accomplish this since there are many filters and other things in the query (not shown) that would really complicate the raw SQL. I looked into perhaps using `to_sql`, stripping the `SELECT * FROM POSTS` prefix out, then putting it in a `where` or `find_by_sql` call, but I feel like this could be hacky and potentially lead to SQL injection. Thanks.
Maybe something like this: `Post.where(id: Post.order(:created_at => :desc).limit(50).ids).order(:score => :desc)`
Try this: ``` Post.order('created_at DESC').limit(50).order('score DESC') ```
Greatest-n-per-group rails ActiveRecord SQL
[ "", "mysql", "sql", "ruby-on-rails", "activerecord", "greatest-n-per-group", "" ]
How can query **table1** with a list of values for **colA** and get back the the list of the values that were not found? i.e. **all\_values** = ["a","b","c","d"] Table1 already has rows with **colA** for "b","c" I want to create a query that can I send in the entire **all\_values** list and get back **new\_values** = ["a","d"] Currently I have to do this by first getting all **existing\_values** = `SELECT colA FROM table1`, then do **new\_values** = **all\_values** - **existing\_values** at the application layer (ruby). But this can be an expensive routine if **all\_values** grows very large. I want to leverage the db power here to tell me which values in my list don't exist already without needing to return first such a large list to the application layer. [**Solution**] from David Faber's checked answer below, here is my solution from the application layer perspective (ruby on rails) in case anyone is interested. I've written the method so it can be included in any model you wish to use it on and accepts the column and list of values as arguments: [**Update**] parameterized the temp table name and setting datatype appropriately ``` def self.unfound_values_by_column values, column quoted_values = values.collect {|value| "('#{value}')"}.join(",") conn = self.connection # get this model's connection if different than base temp_table_name = "temp_values_for_unfound_lookup_table" datatype = self.columns_hash[column].sql_type begin conn.execute( "DROP TEMPORARY TABLE IF EXISTS #{temp_table_name}" ) conn.execute( "CREATE TEMPORARY TABLE #{temp_table_name} (`#{column}` #{datatype})" ) conn.execute( "INSERT INTO #{temp_table_name} (#{column}) VALUES #{quoted_values}" ) unfound = conn.select_values( "SELECT temp.`#{column}` FROM `#{temp_table_name}` temp WHERE NOT EXISTS ( SELECT 1 FROM `#{self.table_name}` my WHERE my.`#{column}` = temp.`#{column}` )" ) ensure conn.execute( "DROP TEMPORARY TABLE IF EXISTS #{temp_table_name}" ) end unfound end ```
Your best option is to create a temporary table (which will be visible only to the current session and dropped when that session ends), inserting all the values in `all_values`, then comparing that to the existing table. For example, ``` CREATE TEMPORARY TABLE `all_values` ( `myvalue` VARCHAR(30) ); -- insert values SELECT av.`myvalue` FROM `all_values` av WHERE NOT EXISTS ( SELECT 1 FROM `table1` t WHERE t.`ColA` = av.`myvalue` ); ``` I personally find `WHERE EXISTS` syntax easier than a `LEFT JOIN` with `key = NULL`, but YMMV.
You can use a derived table: ``` SELECT * FROM ( SELECT "a" AS colA UNION ALL SELECT "b" UNION ALL SELECT "c" UNION ALL SELECT "d" ) all_values WHERE all_values.colA COLLATE utf8_general_ci NOT IN (SELECT colA FROM table1); ``` I use COLLATE to avoid a collation conflict between table1 and values from all\_values.
How to return a list of unfound values in MySQL
[ "", "mysql", "sql", "ruby", "" ]
I have the following mysql statement based on [Mysql intersection based on single field](https://stackoverflow.com/questions/27997211/mysql-intersection-based-on-single-field): ``` SELECT m1.* FROM mytable m1 INNER JOIN mytable2 m2 ON m1.history = m2.history WHERE `insert_date` >= DATE_SUB(CURDATE(), INTERVAL 3 DAY) ``` Does the where clause apply to the first table? How do you apply the where clause to the second table, or to the first? edit: what I am actually trying to do is execute: ``` SELECT * FROM `mytable1` WHERE `insert_date` >= DATE_SUB(CURDATE(), INTERVAL 3 DAY) ``` and then get the inner-joined records of mytable2 (m2): ``` ON m1.history = m2.history ``` My original query may be correct but I'm not sure. This made me start thinking: what if I wanted to apply conditions to the join of either or both tables? I'm not sure how to build that query.
MySQL will handle that for you; you can put columns from either table in the where clause. Think of the join as producing a new table with columns from both tables in it; the where clause then applies to that result set. If you have the same column in both tables, you just need to prefix it with the table name (or alias, in your case) to identify the column. In fact, your query won't compile in that case unless you do so: ``` SELECT m1.* FROM mytable m1 INNER JOIN mytable2 m2 ON m1.history = m2.history WHERE m2.`insert_date` >= DATE_SUB(CURDATE(), INTERVAL 3 DAY) ```
I believe you can add the condition to either the ON clause or the WHERE clause. However, per [the documentation](http://dev.mysql.com/doc/refman/5.0/en/join.html) the ON conditions (ideally) are used to describe how the tables should be joined while the WHERE conditions are for everything else. I'm still fairly new to MySQL, though, so I could be missing something. e.g., ``` SELECT m1.* FROM mytable m1 INNER JOIN mytable2 m2 ON m1.history = m2.history WHERE `insert_date` >= DATE_SUB(CURDATE(), INTERVAL 3 DAY) AND m2.id > 1 ```
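One case where the placement does matter: with an outer join, a condition on the right-hand table behaves differently in `ON` than in `WHERE`. A sketch using the same tables (assuming `insert_date` lives on `mytable2`):

```
-- Condition in ON: unmatched m1 rows are kept, with NULLs for m2 columns
SELECT m1.*
FROM mytable m1
LEFT JOIN mytable2 m2
    ON m1.history = m2.history
    AND m2.insert_date >= DATE_SUB(CURDATE(), INTERVAL 3 DAY);

-- Same condition in WHERE: unmatched rows are filtered out afterwards,
-- which effectively turns the LEFT JOIN back into an INNER JOIN
SELECT m1.*
FROM mytable m1
LEFT JOIN mytable2 m2 ON m1.history = m2.history
WHERE m2.insert_date >= DATE_SUB(CURDATE(), INTERVAL 3 DAY);
```

For the INNER JOIN in the question, the two placements return the same rows.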
How to apply where cause in mysql join query to different tables
[ "", "mysql", "sql", "" ]
My database is listing Jobs. I want a user to be able to query jobs from a particular industry but choose as many work-types from the `type` column as they want, eg: ``` SELECT * FROM `rec_jobs` WHERE `industry` LIKE '%Security%' AND (`type` LIKE '%Full-Time%' OR 'type' LIKE '%Part-Time%' OR 'type' LIKE '%Casual%' OR 'type' LIKE '%Contract%'); ``` This should return something like: > **ID - Industry - Type** > > --- > > 1 - Security - Part-Time > > 2 - Security - Full-Time > > 3 - Security - Casual > > 4 - Security - Full-Time etc. but it is not working as expected - I don't get any SQL errors or any results (though I know rows exist). Does anyone know a better way of achieving this (or the correct terminology to search in Google)?
In most of your conditions you wrapped `type` in single quotes, which makes `'type'` a string literal rather than a column reference - that comparison is always false, so no rows match and no error is raised. Use the same backticks around the column name everywhere: ``` SELECT * FROM `rec_jobs` WHERE `industry` LIKE '%Security%' AND ( `type` LIKE '%Full-Time%' OR `type` LIKE '%Part-Time%' OR `type` LIKE '%Casual%' OR `type` LIKE '%Contract%'); ```
If you know the exact strings you are looking for then use `IN()` ``` SELECT * FROM `rec_jobs` WHERE `industry` = 'Security' AND `type` IN ('Full-Time', 'Part-Time', 'Casual', 'Contract') ```
Querying using multiple LIKEs from same column?
[ "", "mysql", "sql", "" ]
``` Select FirstName + ' ' + LastName from Members ``` Gives error: > Implicit conversion of varchar value to varchar cannot be performed because the collation of the value is unresolved due to a collation conflict. But it works fine when I take `FirstName` and `LastName` in separate columns like ``` Select FirstName, LastName From Members ``` I want to pick First and Last name in one column
Seems you have different collation within the same table - very unusual Try this: ``` SELECT FirstName COLLATE DATABASE_DEFAULT + ' ' + LastName COLLATE DATABASE_DEFAULT AS FullName FROM Members ``` If this works, I suggest you change collation to be the same for FirstName and LastName rather than correcting your query. This is an example of how to change collation: ``` ALTER TABLE Members ALTER COLUMN LastName varchar(20) COLLATE DATABASE_DEFAULT NOT NULL ```
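Before altering anything, it may help to confirm which columns actually disagree; the per-column collations can be inspected like this:

```
SELECT COLUMN_NAME, COLLATION_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Members'
  AND COLUMN_NAME IN ('FirstName', 'LastName');
```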
Try this ``` select CAST(FirstName AS VARCHAR) + ' ' + CAST(LastName AS VARCHAR)from Members or select CONVERT(VARCHAR(100),FirstName) + ' ' + CONVERT(VARCHAR(100),LastName )from Members ```
Select FirstName + ' ' + LastName from Members gives error: Implicit conversion of varchar value to varchar cannot be performed
[ "", "sql", "sql-server", "sql-server-2008", "correlated-subquery", "" ]
I want to write a SQL query to get the NULL count in a particular column, like ``` SELECT COUNT (column) AS count FROM table WHERE column = null ; ``` This returns 0, but I want the number of NULL values present in that column - just as ``` SELECT COUNT (column) AS count FROM table WHERE column = 'some value'; ``` returns the count of the matching records.
`NULL` value is special in that you cannot use `=` with it; you must use `IS NULL` instead: ``` SELECT COUNT (*) AS count FROM table where column IS null ; ``` This is because `NULL` in SQL does not evaluate as equal to anything, including other `NULL` values. Also note the use of `*` as the argument of `COUNT`.
You can use a conditional `sum()` ``` SELECT sum(case when column is null then 1 else 0 end) AS count FROM table ```
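Since COUNT(column) skips NULLs while COUNT(*) counts every row, all three figures can also be taken in one pass (keeping the question's placeholder names `table` and `column`):

```
SELECT COUNT(*)                 AS total_rows,
       COUNT(column)            AS non_null_count,
       COUNT(*) - COUNT(column) AS null_count
FROM table;
```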
SQL query to get null count from a column
[ "", "sql", "oracle", "count", "" ]
I am trying to get records from the last 24 hours, grouped by hour with counts in SQL Server? I have sample data like: ``` ID Dat 1 2015-01-19 10:29:00.000 2 2015-01-19 11:29:00.000 3 2015-01-19 11:29:00.000 4 2015-01-19 11:29:00.000 5 2015-01-19 12:29:00.000 6 2015-01-19 12:29:00.000 7 2015-01-19 12:29:00.000 8 2015-01-19 12:29:00.000 9 2015-01-17 13:29:00.000 10 2015-01-17 13:29:00.000 11 2015-01-17 13:29:00.000 12 2015-01-17 13:29:00.000 13 2015-01-17 13:29:00.000 14 2015-01-17 13:29:00.000 15 2015-01-17 14:29:00.000 17 2015-01-17 15:29:00.000 18 2015-01-17 15:29:00.000 19 2015-01-17 16:29:00.000 20 2015-01-17 16:29:00.000 21 2015-01-15 16:29:00.000 22 2015-01-15 17:29:00.000 23 2015-01-15 18:29:00.000 24 2015-01-15 18:29:00.000 25 2015-01-15 18:29:00.000 26 2015-01-15 18:29:00.000 27 2015-01-15 18:29:00.000 28 2015-01-15 18:29:00.000 29 2015-01-15 19:29:00.000 30 2015-01-10 20:29:00.000 ``` Now suppose current date time is `2015-01-19 12:30:00.000`, my desired output would be: ``` Date Count 2015-01-19 12:00:00.000 4 2015-01-19 11:00:00.000 3 2015-01-19 10:00:00.000 1 2015-01-19 09:00:00.000 0 2015-01-19 08:00:00.000 0 2015-01-19 07:00:00.000 0 2015-01-19 06:00:00.000 0 2015-01-19 05:00:00.000 4 and so on... ``` So the count is based on number of records that fall in to each hour.
Try this, it will also count the hours without data: ``` DECLARE @t table(ID int, Date datetime) INSERT @t values (1,'2015-01-19 10:29:00.000'), (2,'2015-01-19 11:29:00.000'), (3,'2015-01-19 11:29:00.000'), (4,'2015-01-19 11:29:00.000'), (5,'2015-01-19 12:29:00.000'), (6,'2015-01-19 12:29:00.000'), (7,'2015-01-19 12:29:00.000'), (8,'2015-01-19 12:29:00.000'), (9,'2015-01-17 13:29:00.000'), (10,'2015-01-17 13:29:00.000'), (11,'2015-01-17 13:29:00.000'),(12,'2015-01-17 13:29:00.000'), (13,'2015-01-17 13:29:00.000'),(14,'2015-01-17 13:29:00.000'), (15,'2015-01-17 14:29:00.000'),(17,'2015-01-17 15:29:00.000'), (18,'2015-01-17 15:29:00.000'),(19,'2015-01-17 16:29:00.000'), (20,'2015-01-17 16:29:00.000'),(21,'2015-01-15 16:29:00.000'), (22,'2015-01-15 17:29:00.000'),(23,'2015-01-15 18:29:00.000'), (24,'2015-01-15 18:29:00.000'),(25,'2015-01-15 18:29:00.000'), (26,'2015-01-15 18:29:00.000'),(27,'2015-01-15 18:29:00.000'), (28,'2015-01-15 18:29:00.000'),(29,'2015-01-15 19:29:00.000'), (30,'2015-01-10 20:29:00.000') DECLARE @yourdate datetime = '2015-01-19T12:30:00.000' ;WITH CTE AS ( SELECT dateadd(hh, datediff(hh, 0, @yourdate), 0) Date UNION ALL SELECT dateadd(hh, -1, Date) FROM CTE WHERE Date + 1 > @yourdate ) SELECT CTE.Date, count(t.id) count FROM CTE LEFT JOIN @t t ON CTE.Date <= t.Date and dateadd(hh, 1, CTE.Date) > t.Date GROUP BY CTE.Date ORDER BY CTE.Date DESC ``` Result: ``` Date Count 2015-01-19 12:00:00.000 4 2015-01-19 11:00:00.000 3 2015-01-19 10:00:00.000 1 2015-01-19 09:00:00.000 0 2015-01-19 08:00:00.000 0 ..... ```
You can [round your values to the nearest hour](https://stackoverflow.com/questions/6666866/t-sql-datetime-rounded-to-nearest-minute-and-nearest-hours-with-using-functions) and then simply GROUP and COUNT: ## [SQL Fiddle Demo](http://sqlfiddle.com/#!3/baa4f/2) **MS SQL Server Schema Setup**: ``` CREATE TABLE DateTable ([ID] int, [Date] datetime) ; INSERT INTO DateTable ([ID], [Date]) VALUES (1, '2015-01-19 10:29:00'), (2, '2015-01-19 11:29:00'), (3, '2015-01-19 11:29:00'), (4, '2015-01-19 11:29:00'), (5, '2015-01-19 12:29:00'), (6, '2015-01-19 12:29:00'), (7, '2015-01-19 12:29:00'), (8, '2015-01-19 12:29:00'), (9, '2015-01-17 13:29:00'), (10, '2015-01-17 13:29:00'), (11, '2015-01-17 13:29:00'), (12, '2015-01-17 13:29:00'), (13, '2015-01-17 13:29:00'), (14, '2015-01-17 13:29:00'), (15, '2015-01-17 14:29:00'), (17, '2015-01-17 15:29:00'), (18, '2015-01-17 15:29:00'), (19, '2015-01-17 16:29:00'), (20, '2015-01-17 16:29:00'), (21, '2015-01-15 16:29:00'), (22, '2015-01-15 17:29:00'), (23, '2015-01-15 18:29:00'), (24, '2015-01-15 18:29:00'), (25, '2015-01-15 18:29:00'), (26, '2015-01-15 18:29:00'), (27, '2015-01-15 18:29:00'), (28, '2015-01-15 18:29:00'), (29, '2015-01-15 19:29:00'), (30, '2015-01-10 20:29:00') ; ``` **Query to return aggregated data**: ``` SELECT DATEADD(HOUR, DATEDIFF(HOUR, 0, [DATE]), 0) As [DateValue], COUNT(1) AS [COUNT] FROM DateTable WHERE [DATE] >= DATEADD(day, -1, GETDATE()) GROUP BY DATEADD(HOUR, DATEDIFF(HOUR, 0, [DATE]), 0) ORDER BY 1 ``` **[Results](http://sqlfiddle.com/#!3/baa4f/2/0)**: ``` | DATEVALUE | COUNT | |--------------------------------|-------| | January, 19 2015 10:00:00+0000 | 1 | | January, 19 2015 11:00:00+0000 | 3 | | January, 19 2015 12:00:00+0000 | 4 | ``` This is using `GETDATE()` to return the current date time value and taking the last 24 hours from the point. 
The query above uses the value returned by the following expression for its `WHERE` clause: ``` SELECT DATEADD(day, -1, GETDATE()) ``` You can replace the filter value in the `WHERE` clause with a variable if required.
Return records with counts for the last 24 hours
[ "", "sql", "sql-server", "database", "sql-server-2012", "" ]
I have two tables, namely "CProduct" and "DProduct". Below are the examples: CProduct : ``` EffectiveDate CFund 2014-01-03 0.06 2014-01-03 0.12 2014-01-06 0.11 ``` DProduct : ``` EffectiveDate DFund 2014-01-03 0.06 2014-01-06 0.12 2014-01-08 0.09 ``` I want to get a result like below : ``` EffectiveDate CFund DFund 2014-01-03 0.18 0.06 2014-01-06 0.11 0.12 2014-01-08 NULL 0.09 ``` My query is : ``` SELECT a.EffectiveDate,a.CFund,a.DFund FROM ( SELECT t1.EffectiveDate,Sum(t1.CFund) as CFund ,SUM(t2.DFund) as DFund FROM CProduct t1 LEFT JOIN DProduct t2 ON t1.EffectiveDate = t2.EffectiveDate Group By t1.EffectiveDate UNION SELECT t1.EffectiveDate,SUM(t2.CFund) as CFund ,Sum(t1.DFund) as DFund FROM DProduct t1 LEFT JOIN CProduct t2 ON t1.EffectiveDate = t2.EffectiveDate Group By t1.EffectiveDate ) a ``` But I am not getting the desired result.
This gets your desired results - not quite sure why the other answerers think joins and `COALESCE` are so crucial: ``` SELECT a.EffectiveDate, SUM(a.CFund) AS CFund, SUM(a.DFund) AS DFund FROM ( SELECT c.EffectiveDate, c.CFund, NULL AS DFund FROM CProduct c UNION ALL SELECT d.EffectiveDate, NULL AS CFund, d.DFund FROM DProduct d ) a GROUP BY a.EffectiveDate ORDER BY a.EffectiveDate ``` In SQL Fiddle, against SQLite (I haven't checked, but should be fine with Access too): <http://sqlfiddle.com/#!7/80158/1>
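The UNION ALL approach above is plain standard SQL, so it runs unchanged on most engines. A quick check against the question's sample data in SQLite (via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CProduct (EffectiveDate TEXT, CFund REAL);
CREATE TABLE DProduct (EffectiveDate TEXT, DFund REAL);
INSERT INTO CProduct VALUES ('2014-01-03', 0.06), ('2014-01-03', 0.12), ('2014-01-06', 0.11);
INSERT INTO DProduct VALUES ('2014-01-03', 0.06), ('2014-01-06', 0.12), ('2014-01-08', 0.09);
""")

# Stack both tables into one shape, padding the missing column with NULL,
# then aggregate per date; SUM ignores the NULL padding.
rows = conn.execute("""
SELECT a.EffectiveDate, SUM(a.CFund) AS CFund, SUM(a.DFund) AS DFund
FROM (
    SELECT EffectiveDate, CFund, NULL AS DFund FROM CProduct
    UNION ALL
    SELECT EffectiveDate, NULL, DFund FROM DProduct
) a
GROUP BY a.EffectiveDate
ORDER BY a.EffectiveDate
""").fetchall()
print(rows)
```

The `2014-01-08` row correctly comes back with a NULL CFund, because `SUM` over nothing but NULLs is NULL.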
Should just be some subqueries and a full outer join. Not sure why you think a `UNION` is required (especially because that eliminates duplicate rows): ``` SELECT COALESCE(t1.EffectiveDate,t2.EffectiveDate) as EffectiveDate, t1.Total, t2.Total FROM (select EffectiveDate,SUM(CFund) from CProduct group by EffectiveDate) as t1(EffectiveDate,Total) full outer join (select EffectiveDate,SUM(DFund) from DProduct group by EffectiveDate) as t2(EffectiveDate,Total) on t1.EffectiveDate = t2.EffectiveDate ```
Using Left Join, Union and Right Join to get a desired result
[ "", "sql", "ms-access", "ms-access-2007", "" ]
Consider the following table. ``` ----------------------------------------------------- | game_id | blue_player_id | red_player_id | turn_of | ----------------------------------------------------- | 1 | 4 | 5 | "blue" | ----------------------------------------------------- | 2 | 1 | 2 | "red" | ----------------------------------------------------- | 3 | 3 | 4 | "blue" | ----------------------------------------------------- ``` I want to write a query that selects either the blue\_player\_id or the red\_player\_id depending on the turn\_of column for a given game id. Thus for game\_id=1 it should return 4, for game\_id=2 it should return 2, and for game\_id=3 it should return 3. I know I could do this in two query by first selecting who's turn it is and then selecting the right column in the second query. However, is there a one query method which does not use database specific SQL?
You can use a CASE expression in SQL. ``` SELECT CASE <variable> WHEN <value> THEN <returnvalue> WHEN <othervalue> THEN <returnthis> ELSE <returndefaultcase> END as something FROM <table> ``` You need something like this: ``` SELECT CASE WHEN turn_of='blue' then blue_player_id else red_player_id end as player_id FROM Your_TABLE WHERE game_id = 0 ```
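Since CASE is standard SQL, the expression above is exactly the "not database specific" method the question asks for. Checked against the question's sample rows in SQLite (via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE game (game_id INTEGER, blue_player_id INTEGER,
                   red_player_id INTEGER, turn_of TEXT);
INSERT INTO game VALUES (1, 4, 5, 'blue'), (2, 1, 2, 'red'), (3, 3, 4, 'blue');
""")

rows = conn.execute("""
SELECT game_id,
       CASE WHEN turn_of = 'blue' THEN blue_player_id
            ELSE red_player_id END AS player_id
FROM game
ORDER BY game_id
""").fetchall()
print(rows)  # → [(1, 4), (2, 2), (3, 3)]
```

This matches the expected results in the question: 4 for game 1, 2 for game 2, and 3 for game 3.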
``` select gameId, case when turn_of = 'blue' then blue_player_id else red_player_id end as PlayerID from MYTable ```
SQL selecting column depending on the value of another column
[ "", "sql", "select", "" ]
When I select from the table like this: `select count(*) from products` it returns 12900 results. I have a relational query that joins multiple tables like this: ``` SELECT category.name, manifacturer.name, supplier.name, product.name FROM products as product, suppliers as supplier, manifacturers as manifacturer, categories as category WHERE product.sid=supplier.id AND product.manid = manifacturer.id AND product.catid = category.id ``` This query returns 12873 results, so I cannot find which rows are not matched. How can I find this lost data? I tried a NOT query but it did not return any results.
You can find the unmatched records with a FULL OUTER JOIN like this: ``` SELECT prod.id, rel.id FROM ( SELECT product.id as id, category.name, manifacturer.name, supplier.name, product.name FROM products as product, suppliers as supplier, manifacturers as manifacturer, categories as category WHERE product.sid=supplier.id AND product.manid = manifacturer.id AND product.catid = category.id ) as rel FULL OUTER JOIN products as prod ON rel.id = prod.id ``` So you can see the null and non-null ids in the list.
First, you should learn to use proper, explicit `join` syntax: ``` SELECT category.name, manifacturer.name, supplier.name, product.name FROM products as product join suppliers as supplier on product.sid = supplier.id join manifacturers as manifacturer on product.manid = manifacturer.id join categories as category on product.catid = category.id; ``` Then if you want non-matches, switch to `left join` and look for non-matches in the `where` clause: ``` SELECT category.name, manifacturer.name, supplier.name, product.name FROM products as product left join suppliers as supplier on product.sid = supplier.id left join manifacturers as manifacturer on product.manid = manifacturer.id left join categories as category on product.catid = category.id WHERE supplier.id IS NULL OR manifacturer.id IS NULL or category.id IS NULL; ```
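The LEFT JOIN / `IS NULL` pattern above (an anti-join) is the portable way to surface the missing rows. A toy sketch in SQLite via Python, with invented table names, showing a product whose supplier id matches nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER, name TEXT, sid INTEGER);
CREATE TABLE suppliers (id INTEGER, name TEXT);
INSERT INTO suppliers VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO products VALUES (10, 'widget', 1), (11, 'gadget', 2), (12, 'orphan', 99);
""")

# LEFT JOIN keeps every product; rows with no supplier match get NULL
# on the supplier side, which the WHERE clause then isolates.
orphans = conn.execute("""
SELECT p.id, p.name
FROM products p
LEFT JOIN suppliers s ON p.sid = s.id
WHERE s.id IS NULL
""").fetchall()
print(orphans)  # → [(12, 'orphan')]
```

The comma-join in the question behaves like an inner join, which is exactly why the 27 unmatched rows silently disappear.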
How to use NOT in relational SQL query
[ "", "sql", "sql-server", "" ]
For the below given data set I want to remove the row which has the later timestamp. ``` **37C1Z2990E5E0 (TRXID) should be UNIQUE** in the below dataSet JKLAMMSDF123 20141112 20141117 5000.0 P 1.22 RT101018 *2014-11-12 10:10:26* 37C1Z2990E5E0 101018 JKLAMMSDF123 20141110 20141114 5000.0 P 1.22 RT161002 *2014-11-12 10:11:33* 37C1Z2990E5E0 161002 -- More rows ```
Try this: ``` ;WITH DATA AS ( SELECT TRXID, MAX(YourTimestampColumn) AS TS FROM YourTable GROUP BY TRXID HAVING COUNT(*) > 1 ) DELETE T FROM YourTable AS T INNER JOIN DATA AS D ON T.TRXID = D.TRXID AND T.YourTimestampColumn = D.TS; ```
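The joined-DELETE syntax above is SQL Server specific, but the idea (keep the row with the MIN timestamp per TRXID, delete the rest) can be written with a plain correlated subquery that works on most engines. A SQLite sketch via Python, with illustrative table/column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE loans (trxid TEXT, ts TEXT);
INSERT INTO loans VALUES
  ('37C1Z2990E5E0', '2014-11-12 10:10:26'),
  ('37C1Z2990E5E0', '2014-11-12 10:11:33'),
  ('OTHER1', '2014-11-12 09:00:00');
""")

# Delete every row whose timestamp is later than the earliest
# timestamp seen for the same TRXID.
conn.execute("""
DELETE FROM loans
WHERE ts > (SELECT MIN(ts) FROM loans l2 WHERE l2.trxid = loans.trxid)
""")

rows = conn.execute("SELECT trxid, ts FROM loans ORDER BY trxid").fetchall()
print(rows)
```

After the delete, each TRXID keeps only its earliest row; singletons like `OTHER1` are untouched.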
Select the min of the timestamp column and group by all of the other columns. ``` SELECT MIN(TIMESTAMP), C1, C2, C3... FROM YOUR_TABLE GROUP BY C1, C2, C3.. ```
Remove duplicates based on a condition
[ "", "sql", "sql-server", "database", "" ]
I have three columns in a SQL table, and I want to show the sum of two of the columns in the third column. How can I do that using a SQL query? This is my table structure ``` Id int Unchecked Col_A int Unchecked Col_B int Unchecked Total int Checked ```
You can use a computed column for this: ``` CREATE TABLE [dbo].[Test]( [a] [INT] NULL, [b] [INT] NULL, [c] AS ([a]+[b]) ) ON [PRIMARY] GO INSERT INTO dbo.Test ( a, b ) VALUES ( 1, -- a - int 2 -- b - int ) SELECT * FROM dbo.Test Results: a b c 1 2 3 ```
There is no need to store the total in the table, you can simply calculate it as part of your SQL query as follows: ``` SELECT Id, Col_A, Col_B, Col_A + Col_B AS Total FROM tablename ```
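Computing the total at query time, as above, is fully portable. One detail worth noting (a SQLite sketch with made-up rows): if either operand is NULL — and the question's `Total` column is nullable — the sum is NULL, which you can paper over with `COALESCE` if 0 is preferred:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, col_a INTEGER, col_b INTEGER);
INSERT INTO t VALUES (1, 1, 2), (2, 10, NULL);
""")

rows = conn.execute(
    "SELECT id, col_a + col_b AS total FROM t ORDER BY id").fetchall()
print(rows)  # → [(1, 3), (2, None)] — NULL propagates through +

safe = conn.execute(
    "SELECT id, COALESCE(col_a, 0) + COALESCE(col_b, 0) AS total "
    "FROM t ORDER BY id").fetchall()
print(safe)  # → [(1, 3), (2, 10)]
```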
Total Sum of two columns and show in the third Column
[ "", "sql", "" ]
I have two tables with records as below: ``` User --------------------- ID UserId --------------------- 1 User1 2 User2 Department_User ------------------------------ ID DEPT_ID USER_ID ------------------------------ 1 1 1 2 2 1 3 1 2 ``` Now I want an Oracle query which will return only those users who are serving in both departments (1 & 2); in this example it will be 1
This is an example of a set-within-sets query. I like to solve these using `group by` and `having`. Here is one method: ``` select user_id from department_user where dept_id in (1, 2) group by user_id having count(distinct dept_id) = 2; ```
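The GROUP BY / HAVING approach is standard SQL, so it behaves identically outside Oracle. Here it is run against the question's exact sample data in SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department_user (id INTEGER, dept_id INTEGER, user_id INTEGER);
INSERT INTO department_user VALUES (1, 1, 1), (2, 2, 1), (3, 1, 2);
""")

# A user qualifies only if both departments 1 and 2 appear among their rows.
rows = conn.execute("""
SELECT user_id
FROM department_user
WHERE dept_id IN (1, 2)
GROUP BY user_id
HAVING COUNT(DISTINCT dept_id) = 2
""").fetchall()
print(rows)  # → [(1,)] — only user 1 is in both departments
```

`COUNT(DISTINCT dept_id)` matters if a user can appear in the same department twice; with plain `COUNT(*)` duplicates would fake a match.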
``` select du1.user_id from department_user du1 where exists ( select * from department_user du2 where dept_id= 2 on du1.user_id = du2.user_id); and du1.dept_id = 1 ``` or ``` select user_id from department_user where dept_id = 1 and user_id in ( select user_id from department_user where dept_id= 2); ``` or ``` select du1.user_id from department_user du1 join department_user du2 on du1.user_id = du2.user_id where du1.dept_id=1 and du2.dept_id = 2 ``` I don't know about Oracle but the where exists version is likely the fastest one in SQL Server. The join version is the one you need if you need fields from the record where dept\_id = 2 as well as the records where dept\_id = 1
Need help in SQL join
[ "", "sql", "oracle", "" ]
I'm having trouble with a query which displays the Employee Number, First Name, Last Name and the number of orders of each employee. Make sure that the Number of orders column name should be “OrderCount”. Order By the OrderCount then by employee ID. Null is allowed. employee\_data containing columns: `Emp_ID`, `F_Name`, `L_Name` order\_data containing columns: `O_ID`,`Emp_ID`, `OrderNumber` Here's my query: ``` Select order_data.Emp_ID, F_Name, L_Name, COUNT(employee_data.Emp_ID) as OrderCount FROM order_data LEFT JOIN employee_data ON employee_data.Emp_ID = order_data.Emp_ID GROUP BY order_data.Emp_ID ORDER BY OrderCount ``` These are my tables: employee\_data ![enter image description here](https://i.stack.imgur.com/JFAbG.jpg) order\_data ![enter image description here](https://i.stack.imgur.com/fWB1K.jpg) And the output should be: ![enter image description here](https://i.stack.imgur.com/WcsPQ.jpg) But it is giving me this wrong output. ![enter image description here](https://i.stack.imgur.com/my1fy.jpg)
Try this: ``` SELECT od.Emp_ID, ed.F_Name, ed.L_Name, COUNT(DISTINCT od.O_ID) AS OrderCount FROM order_data od LEFT JOIN employee_data ed ON ed.Emp_ID = od.Emp_ID GROUP BY od.Emp_ID ORDER BY OrderCount ```
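One subtlety with the query above: driving the LEFT JOIN from `order_data` can never produce a 0 count for an employee who has no orders. To include those employees, drive the join from the employee table and count the order-side key instead. A sketch in SQLite via Python, with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee_data (Emp_ID INTEGER, F_Name TEXT, L_Name TEXT);
CREATE TABLE order_data (O_ID INTEGER, Emp_ID INTEGER, OrderNumber TEXT);
INSERT INTO employee_data VALUES (1, 'Ann', 'Lee'), (2, 'Bob', 'Ray'), (3, 'Cat', 'Kim');
INSERT INTO order_data VALUES (10, 1, 'A1'), (11, 1, 'A2'), (12, 2, 'B1');
""")

# LEFT JOIN from employees keeps zero-order employees;
# COUNT(o.O_ID) counts only matched orders, so it yields 0 for them.
rows = conn.execute("""
SELECT e.Emp_ID, e.F_Name, e.L_Name, COUNT(o.O_ID) AS OrderCount
FROM employee_data e
LEFT JOIN order_data o ON o.Emp_ID = e.Emp_ID
GROUP BY e.Emp_ID, e.F_Name, e.L_Name
ORDER BY OrderCount, e.Emp_ID
""").fetchall()
print(rows)
```

Employee 3 comes back with an OrderCount of 0 rather than vanishing from the result.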
Aggregate functions cannot be used along with other fields in a query. You can use it in a sub-query, as follows: ``` Select order_data.Emp_ID, F_Name, L_Name, (select COUNT(employee_data.Emp_ID) from employee_data) as OrderCount FROM order_data LEFT JOIN employee_data ON employee_data.Emp_ID = order_data.Emp_ID GROUP BY order_data.Emp_ID ORDER BY OrderCount ``` Hope it helps .. :)
COUNT() and Left Join not working
[ "", "mysql", "sql", "select", "group-by", "left-join", "" ]
I'm looking to create a new TABLE in my database but ran into a naming conundrum. I initially wanted to create the table as **people**. But then I got to thinking... should I name the table **people** or **persons**? ``` TABLE CREATE persons(name string, age integer) or TABLE CREATE people(name string, age integer) ```
The words people and persons can both be used as the plural of person. People is by far the more common of the two words and is used in most ordinary contexts: "a group of people"; "there were only about ten people". If you are going to have a particular person's details (attributes of individuals) such as `name`, `age`..., then I think you should go with `persons`: ``` TABLE CREATE persons(name string, age integer) ``` If you would have fields in the table like `People_count`, `Avg_Age`, `Totalvote` etc., then you should choose the table name `People`.
This is debated and often considered a matter of opinion, however I recommend always using singular. Your table name should be 'Person'. * A table should be named the 'object' of what it contains. Plural is 'redundant', the table may always have 0, 1 or more items in it, using a singular reference standardises this across the board. * You can think of a table as an object - a 'container' of something. For example if you have a Box of Mushrooms, then it is a 'MushroomBox', it doesn't make sense to call it a 'MushroomBoxes'. Whether it has 0, 1, or many mushrooms in it, it's always a 'MushroomBox'. * Plural names can have more irregularities and complexities such as "ies", "es" vs "s" or words such as "Status" and "Mice" vs "Mouse" etc. Singular is simpler, easier and more consistent. * If you have related tables, a singular naming convention carries over better. Say you have one table 'Student' and another Table 'Room'. You could potentially have a 'StudentRoom' table. But the name 'StudentsRoom' or 'StudentsRooms' is confusing. * When you refer to a column within a table, singular makes sense. Take the following example: SELECT \* FROM Person WHERE Person.FirstName = 'John' makes more sense than SELECT \* FROM Persons where Persons.Firstname = 'John' because you are talking about "a row", or "a person". * Singular uses less characters :)
SQL TABLE naming. persons vs. people
[ "", "sql", "sqlite", "" ]
I have a table with a column of DateTime datatype with a default value of NULL. I am using the following code to update the datetime to a proper datetime, but it is not working. My code is below: ``` declare @SomeDate datetime = GETDATE() update Testing set SomeDate = @SomeDate where SomeDate <> @SomeDate ```
Try the code below. `SomeDate <> @SomeDate` is never true when `SomeDate` is NULL (the comparison evaluates to UNKNOWN), so the NULL rows must be matched explicitly: ``` declare @SomeDate datetime = GETDATE() update Testing set SomeDate = @SomeDate where SomeDate is null or SomeDate <> @SomeDate ```
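The `IS NULL OR <>` condition can be verified quickly outside SQL Server. A SQLite sketch via Python (dates stored as ISO strings for simplicity): the NULL row and the stale row are updated, while the already-current row is left alone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Testing (id INTEGER, SomeDate TEXT)")
conn.executemany("INSERT INTO Testing VALUES (?, ?)",
                 [(1, None), (2, '2000-01-01 00:00:00'), (3, '2024-06-01 12:00:00')])

now = '2024-06-01 12:00:00'
# NULL never compares equal or unequal, so it needs its own predicate.
cur = conn.execute(
    "UPDATE Testing SET SomeDate = ? WHERE SomeDate IS NULL OR SomeDate <> ?",
    (now, now))
print(cur.rowcount)  # → 2: the NULL row and the differing row; row 3 untouched
```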
Try this: ``` UPDATE Testingset SET SomeDate = @SomeDate WHERE SomeDate IS NULL OR SomeDate != @SomeDate ```
Update Datetime null value to not null
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Here is a little mind-breaker: Platform is MS SQL 2008, but the problem is general. I have a table table with 3 columns: CLIENT,DATE,DESTINATION\_PREFERENCE ``` TABLE1 ------------------------------------------------------- CLIENT |DATE |DESTINATION_PREFERENCE ------------------------------------------------------- Akme |2014-01 |1 Akme |2014-02 |6 Akme |2014-02 |3 Akme |2014-03 |5 Yutani |2014-01 |5 Yutani |2014-02 |8 Yutani |2014-03 |3 Yutani |2014-03 |5 ``` What I have to do is actually 2 things at once: **1st is very simple, and a classic problem:** Selecting the rows with the one minimum DESTINATION\_PREFERENCE from each group of CLIENT, and DATE. In other words, we GROUP BY on CLIENT,DATE, then we choose the rows with the lowest DESTINATION\_PREFERENCE. Note: I use only YEAR and MONTH from the DATE. This can be easily solved by RANK: ``` SELECT CLIENT,DATE,DESTINATION_PREFERENCE FROM ( SELECT CLIENT,DATE,DESTINATION_PREFERENCE, RANK() OVER (PARTITION BY CLIENT, DATE ORDER BY DESTINATION_PREFERENCE ASC) AS RANKING FROM #table1 ) sq WHERE RANKING = 1 ``` Good result is, where we choose rows only with the lowest DESTINATION\_PREFERENCE from each group of CLIENT and DATE(YEAR,MONTH): ``` CLIENT DATE DESTINATION_PREFERENCE Akma 2014-01 1 Akma 2014-02 3 Akma 2014-03 5 Yutani 2014-01 5 Yutani 2014-02 8 Yutani 2014-03 3 ``` **2nd - Now comes the hard part. Which I cannot solve, and need some advice:** If the DESTINATION\_PREFERENCE is 3 I should still include rows where DESTINATION\_PREFERENCE equals to 6. So the result table value will have one extra row (2nd row): ``` CLIENT DATE DESTINATION_PREFERENCE Akma 2014-01 1 Akme 2014-02 6 Akma 2014-02 3 Akma 2014-03 5 Yutani 2014-01 5 Yutani 2014-02 8 Yutani 2014-03 3 ``` How could I extend RANK() to such include arbitrary rules like that? Example rules to implement: If the top lowest value of DESTINATION\_PREFERENCE is 3 in the group, then include rows with values of 6 as well from the same group. 
If the top lowest value of DESTINATION\_PREFERENCE is 9 in the group, then include rows with values of 2 as well from the same group. If the top lowest value of DESTINATION\_PREFERENCE is 128 in the group then include rows with values of 312 as well from the same group. etc ... There are many rules. Thank you for the tips in advance!
You can use CTE with adding additional column, where you simply replace 6 with 3, 9 with 2 etc. ``` DECLARE @t TABLE ( client NVARCHAR(MAX) , date DATETIME , dest INT ) INSERT INTO @t VALUES ( 'Akme', '20140101', 1 ), ( 'Akme', '20140102', 3 ), ( 'Akme', '20140102', 6 ), ( 'Akme', '20140103', 5 ), ( 'Yutani', '20140104', 2 ), ( 'Yutani', '20140104', 7 ), ( 'Yutani', '20140104', 9 ), ( 'Yutani', '20140105', 7 ); WITH cte AS ( SELECT client , date , dest , CASE dest WHEN 6 THEN 3 WHEN 9 THEN 2 ELSE dest END AS rndest FROM @t ) SELECT CLIENT , DATE , dest FROM ( SELECT CLIENT , DATE , dest , RANK() OVER ( PARTITION BY CLIENT, DATE ORDER BY rndest ASC ) AS RANKING FROM cte ) sq WHERE RANKING = 1 ``` Output: ``` CLIENT DATE dest Akme 2014-01-01 00:00:00.000 1 Akme 2014-01-02 00:00:00.000 3 Akme 2014-01-02 00:00:00.000 6 Akme 2014-01-03 00:00:00.000 5 Yutani 2014-01-04 00:00:00.000 2 Yutani 2014-01-04 00:00:00.000 9 Yutani 2014-01-05 00:00:00.000 7 ```
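The remap-then-rank idea above ports to any engine with window functions. A SQLite (3.25+) sketch of the Akme example via Python, where 6 is remapped to 3 so both rows tie for rank 1 and survive the filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prefs (client TEXT, d TEXT, dest INTEGER);
INSERT INTO prefs VALUES
  ('Akme', '2014-01', 1),
  ('Akme', '2014-02', 6), ('Akme', '2014-02', 3),
  ('Akme', '2014-03', 5);
""")

rows = conn.execute("""
WITH cte AS (
    SELECT client, d, dest,
           CASE dest WHEN 6 THEN 3 WHEN 9 THEN 2 ELSE dest END AS rndest
    FROM prefs
)
SELECT client, d, dest
FROM (SELECT client, d, dest,
             RANK() OVER (PARTITION BY client, d ORDER BY rndest) AS rnk
      FROM cte) AS ranked
WHERE rnk = 1
ORDER BY d, dest
""").fetchall()
print(rows)
```

For 2014-02 both the 3 and the 6 come back, because ranking runs over the remapped value while the original `dest` is still selected.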
You could just put a case statement in your order by in the rank: ``` SELECT CLIENT,DATE,DESTINATION_PREFERENCE FROM ( SELECT CLIENT,DATE,DESTINATION_PREFERENCE, RANK() OVER (PARTITION BY CLIENT, DATE ORDER BY Case DESTINATION_PREFERENCE when 3 then 6 when 9 then 2 when 128 then 312 else DESTINATION_PREFERENCE END ASC) AS RANKING FROM #table1 ) sq WHERE RANKING = 1 ```
SQL - Complex RANK
[ "", "sql", "sql-server", "ranking", "" ]
I have this table: ``` cart_item_id (primary key) user_id (int) shop_item_id (int) quantity (int) is_checked_out (tinyint) ``` From this table I want to get the best-selling item, which is the item that has the maximum checked-out quantity. I developed this query, but it returned errors: ``` SELECT `shop_item_id` FROM `cart` WHERE `is_checked_out` = 1 group by `shop_item_id` having sum(`quantity`) > max( select sum(`quantity`) from `cart` where `is_checked_out` = 1 group by `shop_item_id` ) ``` the error is: ``` #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'select sum(`quantity`) from `cart` where `is_checked_out` = 1 group by `shop_ite' at line 1 ``` so what is wrong in my query?
Order by the quantity and take the first record only ``` SELECT `shop_item_id`, sum(`quantity`) FROM `cart` WHERE `is_checked_out` = 1 group by `shop_item_id` order by sum(`quantity`) desc limit 1 ```
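The ORDER BY ... LIMIT 1 pattern above, checked on a tiny made-up cart in SQLite via Python; note the row that is not checked out is excluded from the sums:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cart (shop_item_id INTEGER, quantity INTEGER, is_checked_out INTEGER);
INSERT INTO cart VALUES (1, 2, 1), (1, 5, 1), (2, 4, 1), (2, 1, 0);
""")

row = conn.execute("""
SELECT shop_item_id, SUM(quantity) AS total
FROM cart
WHERE is_checked_out = 1
GROUP BY shop_item_id
ORDER BY total DESC
LIMIT 1
""").fetchone()
print(row)  # → (1, 7): item 1 has 7 checked-out units vs item 2's 4
```

This sidesteps the original error entirely: no subquery inside `MAX()`, just sort the per-item sums and take the top one.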
The following code will do fine for you : ``` SELECT `shop_item_id` FROM `cart` WHERE `is_checked_out` = 1 and sum(`quantity`)>=max ( select sum(`quantity`) from `cart` where `is_checked_out` = 1 ) limit 1 ; ```
how to perform query to get max sum in mysql?
[ "", "mysql", "sql", "max", "having", "" ]
Suppose I have the following table with words from a dictionary: ``` Word --- cat dog giraffe zebra ``` I would like to find a word, and if it doesn't exist, the closest before it in [dictionary](http://en.wikipedia.org/wiki/Lexicographical_order) order, e.g. `aardvark` would return nothing, `cat` would return `cat`, `cow` would return `cat`, `horse` would return `giraffe`. This should be relatively straightforward to search for using a BTREE index but I haven't figured out a way to do it. I'm using sqlite for this, but other engines are also acceptable. I'm only interested in the dictionary order, i.e. the query should work exactly with the above examples. Other similarity metrics are of course nice, but are irrelevant to this question.
Assuming that you have declared the column with the correct [collation](http://www.sqlite.org/datatype3.html#collation) for dictionary order (which might be the default, or `COLLATE NOCASE`, or a user-defined collation), getting an exact match is trivial: ``` SELECT Word FROM Dictionary WHERE Word = ? ``` and getting the closest before is easy: ``` SELECT MAX(Word) FROM Dictionary WHERE Word < ? ``` To get only the first result of these two queries, combine them with [UNION ALL](http://www.sqlite.org/lang_select.html#compound), and use `LIMIT 1` so that the second query is ignored if the first one succeeds: ``` SELECT Word FROM Dictionary WHERE Word = ? UNION ALL SELECT MAX(Word) FROM Dictionary WHERE Word < ? LIMIT 1 ```
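Since the question targets SQLite, the combined query above can be checked directly for all four lookups from the question (the `aardvark` case returns a single NULL row, because the `MAX` branch is an aggregate and always yields one row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Dictionary (Word TEXT);
INSERT INTO Dictionary VALUES ('cat'), ('dog'), ('giraffe'), ('zebra');
""")

def closest(word):
    # Exact match first; the LIMIT 1 discards the fallback when it hits.
    row = conn.execute("""
        SELECT Word FROM Dictionary WHERE Word = ?
        UNION ALL
        SELECT MAX(Word) FROM Dictionary WHERE Word < ?
        LIMIT 1
    """, (word, word)).fetchone()
    return row[0] if row else None

print([closest(w) for w in ('aardvark', 'cat', 'cow', 'horse')])
```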
this may helps you ``` select a.word,b.word from dictionary a, words b where difference (a.word,'DOG') in(3,4) ``` or you can use `soundex` function
Select closest match in dictionary order
[ "", "sql", "sqlite", "" ]
Sorry for the rubbish title. I couldn't quite articulate my problem in a few words. I have an SQL query that gives a list of ids separated by a pipe (|). I want to pass these ids into another SQL query as a where clause. I can use replace to convert the values from pipe-separated into comma-separated. As an example the list of IDs might be ``` 1|2|3|4 ``` and using replace I get ``` 1,2,3,4 select replace(value, '|', ',') from my_table; ``` If I try and pass this into another SQL query where I want to look up these IDs I get an error ``` ORA-01722: invalid number select * from my_table2 where id in ( select replace(value, '|', ',') from my_table); ``` Now I presume I need to cast the output to a number, but I don't want to cast the entire string to a number, just the numeric values within it. How can I do this easily? Thanks
There may be two cases: good and bad. Bad case is your pipe-separated string is stored somewhere in the database and you cannot change this design to something meaningful. If so, you'll need to use like operator, something like this: ``` select t2.* from my_table2 t2, my_table t where '|' || t1.value || '|' like '%|' || t2.id || '|%' ``` Good case is this pipelining isn't persistent and made by first SQL. If so, you should just remove garbage. Remove pipelining, remove listing into one row. Make inner SQL return resultset of IDs required, one per row, and use something like ``` select t2.* from my_table2 t2 where t2.id in (select id from ...) ``` Additional case is if this list is a parameter value transferred from client. Some developers use this approach to make filters etc. If so, you should change client for transferring something better, say, table of numbers. SQL would be like ``` select t2.* from my_table2 t2 where t2.id in (select column_value from table(cast :param as NumberTable)) ```
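The wrap-in-delimiters LIKE trick from the "bad case" above is portable. A SQLite sketch via Python showing why the extra `|` on both sides matters: without it, id 1 or 2 would falsely match inside the token `12`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lists (value TEXT);
CREATE TABLE items (id INTEGER);
INSERT INTO lists VALUES ('12|3|4');
INSERT INTO items VALUES (1), (2), (3), (12);
""")

# Wrap the stored list as '|12|3|4|' so each id is matched as a whole token.
rows = conn.execute("""
SELECT i.id
FROM items i, lists l
WHERE '|' || l.value || '|' LIKE '%|' || i.id || '|%'
ORDER BY i.id
""").fetchall()
print(rows)  # → [(3,), (12,)] — 1 and 2 correctly excluded
```

As the answer says, this is a workaround for a bad design; a junction table or a proper table-valued parameter avoids the string matching altogether.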
This is a complicated expression, but you can do it with `like` and `exists`: ``` select * from my_table2 where exists (select 1 from my_table t1 where '|' || value || '|' like '%|' || id || '|%' ); ``` However, you have a fundamental problem with the data structure in `my_table`. You should not be storing lists of anything -- and especially *integer* ids -- in a string. The proper SQL approach is to use a junction table, with one row per id. Oracle has other data structures such as nested tables, which can help with this.
output of replace SQL as a where clause in another SQL
[ "", "sql", "oracle", "casting", "" ]
I want a trigger that triggers whenever the loan table is updated (i.e. a book is returned). It should take values from the rows in the loan table only where the loan is overdue and insert them into a new table. --- the 'loan' table: ``` CREATE TABLE loan ( book_code INT NOT NULL, student_num INT NOT NULL, out_date DATE NOT NULL, due_date DATE NOT NULL, return_date DATE, CONSTRAINT pk_loan PRIMARY KEY (book_code, student_num, out_date), CONSTRAINT fk_book_code FOREIGN KEY (book_code) REFERENCES copy(book_code), CONSTRAINT fk_num FOREIGN KEY (student_num) REFERENCES student(student_num) ); ``` and the 'overdue' table ``` CREATE TABLE overdue ( overdue_id INT NOT NULL AUTO_INCREMENT, student_num INT NOT NULL, out_date DATE NOT NULL, due_date DATE NOT NULL, return_date DATE, CONSTRAINT pk_overdue PRIMARY KEY (overdue_id), CONSTRAINT fk_num FOREIGN KEY (student_num) REFERENCES student(student_num) ); ``` --- What I've got so far: ``` DELIMITER $$ CREATE TRIGGER trg_overdue_loans AFTER UPDATE ON loan FOR EACH ROW BEGIN IF (NEW.return_date > OLD.due_date) THEN INSERT INTO overdue (student_num, out_date, due_date, return_date) VALUES (OLD.student_num, OLD.out_date, OLD.due_date, NEW.return_date) END IF; END$$ DELIMITER ; ``` --- I'm getting "an error in (my) SQL syntax" on the `END IF` and I have no clue why. Any help will be much appreciated!
Try this; you are missing a semicolon after the INSERT statement inside the trigger body, and the delimiter handling also needs adjusting: ``` DROP TRIGGER IF EXISTS trg_overdue_loans; DELIMITER $$ CREATE TRIGGER `trg_overdue_loans` AFTER UPDATE ON loan FOR EACH ROW BEGIN IF NEW.return_date > OLD.due_date THEN INSERT INTO overdue (student_num, out_date, due_date, return_date) VALUES (OLD.student_num, OLD.out_date, OLD.due_date, NEW.return_date); END IF; END;$$ DELIMITER ; ```
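The same trigger logic translated to SQLite (which has no `IF ... END IF` in triggers, so the condition moves into a `WHEN` clause) shows it firing only for the late return. A minimal sketch via Python with simplified column types:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE loan (book_code INTEGER, student_num INTEGER,
                   out_date TEXT, due_date TEXT, return_date TEXT);
CREATE TABLE overdue (student_num INTEGER, out_date TEXT,
                      due_date TEXT, return_date TEXT);

CREATE TRIGGER trg_overdue_loans
AFTER UPDATE ON loan
FOR EACH ROW WHEN NEW.return_date > OLD.due_date
BEGIN
    INSERT INTO overdue VALUES
        (OLD.student_num, OLD.out_date, OLD.due_date, NEW.return_date);
END;

INSERT INTO loan VALUES (1, 7, '2024-01-01', '2024-01-15', NULL);
INSERT INTO loan VALUES (2, 8, '2024-01-01', '2024-01-15', NULL);
""")

conn.execute("UPDATE loan SET return_date = '2024-01-20' WHERE book_code = 1")  # late
conn.execute("UPDATE loan SET return_date = '2024-01-10' WHERE book_code = 2")  # on time

rows = conn.execute("SELECT * FROM overdue").fetchall()
print(rows)  # → [(7, '2024-01-01', '2024-01-15', '2024-01-20')]
```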
## Storing the old and new row state in JSON The best way to store the old and new row state is to use JSON columns. SO, for each table that you want to enable audit logging, you can create an audit log table, like this one: ``` CREATE TABLE book_audit_log ( book_id BIGINT NOT NULL, old_row_data JSON, new_row_data JSON, dml_type ENUM('INSERT', 'UPDATE', 'DELETE') NOT NULL, dml_timestamp TIMESTAMP NOT NULL, dml_created_by VARCHAR(255) NOT NULL, PRIMARY KEY (book_id, dml_type, dml_timestamp) ) ``` * The `book_id` column stores the identifier of the `book` row that has been either created, updated, or deleted. * The `old_row_data` is a JSON column that will capture the state of the `book` record prior to executing an INSERT, UPDATE, or DELETE statement. * The `new_row_data` is a JSON column that will capture the state of the `book` record after executing an INSERT, UPDATE, or DELETE statement. * The `dml_type` is an enumeration column that stores the DML statement type that created, updated, or deleted a given `book` record. * The `dml_timestamp` stores the DML statement execution timestamp. * The `dml_created_by` stores the application user who issued the INSERT, UPDATE, or DELETE DML statement. 
## Intercepting INSERT, UPDATE, and DELETE DML statements using triggers Now, to feed the audit log tables, you need to create the following 3 triggers: ``` CREATE TRIGGER book_insert_audit_trigger AFTER INSERT ON book FOR EACH ROW BEGIN INSERT INTO book_audit_log ( book_id, old_row_data, new_row_data, dml_type, dml_timestamp, dml_created_by ) VALUES( NEW.id, null, JSON_OBJECT( "title", NEW.title, "author", NEW.author, "price_in_cents", NEW.price_in_cents, "publisher", NEW.publisher ), 'INSERT', CURRENT_TIMESTAMP, @logged_user ); END CREATE TRIGGER book_update_audit_trigger AFTER UPDATE ON book FOR EACH ROW BEGIN INSERT INTO book_audit_log ( book_id, old_row_data, new_row_data, dml_type, dml_timestamp, dml_created_by ) VALUES( NEW.id, JSON_OBJECT( "title", OLD.title, "author", OLD.author, "price_in_cents", OLD.price_in_cents, "publisher", OLD.publisher ), JSON_OBJECT( "title", NEW.title, "author", NEW.author, "price_in_cents", NEW.price_in_cents, "publisher", NEW.publisher ), 'UPDATE', CURRENT_TIMESTAMP, @logged_user ); END CREATE TRIGGER book_delete_audit_trigger AFTER DELETE ON book FOR EACH ROW BEGIN INSERT INTO book_audit_log ( book_id, old_row_data, new_row_data, dml_type, dml_timestamp, dml_created_by ) VALUES( OLD.id, JSON_OBJECT( "title", OLD.title, "author", OLD.author, "price_in_cents", OLD.price_in_cents, "publisher", OLD.publisher ), null, 'DELETE', CURRENT_TIMESTAMP, @logged_user ); END ``` > The `JSON_OBJECT` MySQL function allows us to create a JSON object that takes the provided key-value pairs. The `dml_type` column is set to the value of `INSERT`, `UPDATE` or `DELETE` and the `dml_timestamp` value is set to the `CURRENT_TIMESTAMP`. 
The `dml_created_by` column is set to the value of the `@logged_user` MySQL session variable, which was previously set by the application with the currently logged user: ``` Session session = entityManager.unwrap(Session.class); Dialect dialect = session.getSessionFactory() .unwrap(SessionFactoryImplementor.class) .getJdbcServices() .getDialect(); session.doWork(connection -> { update( connection, String.format( "SET @logged_user = '%s'", ReflectionUtils.invokeMethod( dialect, "escapeLiteral", LoggedUser.get() ) ) ); }); ``` ## Testing time When executing an INSERT statement on the `book` table: ``` INSERT INTO book ( id, author, price_in_cents, publisher, title ) VALUES ( 1, 'Vlad Mihalcea', 3990, 'Amazon', 'High-Performance Java Persistence 1st edition' ) ``` We can see that a record is inserted in the `book_audit_log` that captures the INSERT statement that was just executed on the `book` table: ``` | book_id | old_row_data | new_row_data | dml_type | dml_timestamp | dml_created_by | |---------|--------------|--------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------|----------------| | 1 | | {"title": "High-Performance Java Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 3990} | INSERT | 2020-07-29 13:40:15 | Vlad Mihalcea | ``` When updating the `book` table row: ``` UPDATE book SET price_in_cents = 4499 WHERE id = 1 ``` We can see that a new record is going to be added to the `book_audit_log` by the AFTER UPDATE trigger on the `book` table: ``` | book_id | old_row_data | new_row_data | dml_type | dml_timestamp | dml_created_by | 
|---------|--------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------|----------------| | 1 | | {"title": "High-Performance Java Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 3990} | INSERT | 2020-07-29 13:40:15 | Vlad Mihalcea | | 1 | {"title": "High-Performance Java Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 3990} | {"title": "High-Performance Java Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 4499} | UPDATE | 2020-07-29 13:50:48 | Vlad Mihalcea | ``` When deleting the `book` table row: ``` DELETE FROM book WHERE id = 1 ``` A new record is added to the `book_audit_log` by the AFTER DELETE trigger on the `book` table: ``` | book_id | old_row_data | new_row_data | dml_type | dml_timestamp | dml_created_by | |---------|--------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------|----------------| | 1 | | {"title": "High-Performance Java Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 3990} | INSERT | 2020-07-29 13:40:15 | Vlad Mihalcea | | 1 | {"title": "High-Performance Java Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 3990} | {"title": "High-Performance Java Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 4499} | UPDATE | 2020-07-29 13:50:48 | Vlad Mihalcea | | 1 | {"title": "High-Performance Java 
Persistence 1st edition", "author": "Vlad Mihalcea", "publisher": "Amazon", "price_in_cents": 4499} | | DELETE | 2020-07-29 14:05:33 | Vlad Mihalcea | ``` That's it!
How to create audit trail or logging tables with triggers in MySQL
[ "", "mysql", "sql", "t-sql", "triggers", "audit-trail", "" ]
We have a DACPAC that has a create user script. The user has a login that needs to be set to a Windows domain user. When we do a build for Test/Staging/Release we need to be able to apply a different domain and user for the user's login. I thought we might be able to use SQLCMD variables, but I just get a SQL71501 error when trying to use this. That script looks something like this: ``` CREATE USER [Username] For Login [$(SQLLoginDomain)] WITH DEFAULT_SCHEMA = [SCHEMANAME] GO ```
We ended up solving this by using a post-deployment script in the DACPAC. As long as you check whether the user already exists before trying to create them, it all works; the post-deployment script is the only place you can use the variables, apart from the pre-deployment script.
My advice based on bitter experience is to keep anything that is environment specific out of your SQL Server Database Project. Rather, apply anything that is environment specific (permissions etc.) as a separate T-SQL script after the DACPAC has been deployed. If you are doing this with Release Management I have a soup-to-nuts blog series that includes publishing DACPACs and separate permissions scripts [here](http://pleasereleaseme.net/continuous-delivery-with-tfs/). The post about applying a tokenised permissions script is [here](http://pleasereleaseme.net/continuous-delivery-tfs-building-deployment-pipeline-using-agent-based-release-template/).
Create user script with variable domain user login
[ "", "sql", "sql-server", "release", "release-management", "dacpac", "" ]
I have a table containing some random natural numbers, for example: ``` number ------- 2 3 1 ``` I'd like to retrieve a set from this table where each row would be included as many times as its value indicates; for the example above it would be: ``` number ------- 2 2 3 3 3 1 ``` I was thinking about a query like this: ``` select t.number from table t, (select level from dual connect by level <= t.number) ``` but apparently it doesn't work. How to solve it?
And this is query for Oracle database: ``` SELECT t1.number FROM t t1 JOIN (SELECT LEVEL FROM dual CONNECT BY LEVEL<=(SELECT MAX(number) FROM t)) t2 ON t2.level<= t1.number; ```
try this way: ``` select t.id from tab t join (SELECT ROWNUM as id FROM ALL_OBJECTS WHERE ROWNUM<=(select max(id) from tab)) t2 on t2.id <= t.id; ``` I'm not sure what about performance of this query but the result is as expected. **[Sql fiddle Demo](http://sqlfiddle.com/#!4/35921/7/0)**
How to pass number of lines in "select level from dual connect by level < "?
[ "", "sql", "database", "oracle", "" ]
I have two tables: ``` product table product_id product_name 1 sample1 2 sample2 3 sample3 product_child table pc_id product_id product_size 1 1 5 2 1 6 3 1 7 4 2 5 5 2 8 6 2 7 7 3 8 8 3 6 9 3 9 ``` My question is: I want to get all products where product\_size (5 or 9) and 6. It means I want result like: ``` product_id product_name 1 sample1 3 sample3 ``` I need query like : ``` SELECT p.* FROM products p LEFT JOIN product_child pc ON pc.product_id = p.product_id WHERE ( pc.product_size = 5 OR pc.product_size = 9) AND pc.product_size = 6 ``` but it's not working.
Try this query: ``` SELECT p.* FROM products p JOIN product_child pc1 ON pc1.product_id = p.product_id and pc1.product_size in (5,9) JOIN product_child pc2 ON pc2.product_id = p.product_id and pc2.product_size = 6 ```
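The double-join approach is easy to verify with an in-memory SQLite database from Python. This is a sketch using the sample data from the question; `DISTINCT` is added in case a product matches more than one of the (5, 9) sizes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product(product_id INTEGER, product_name TEXT);
INSERT INTO product VALUES (1,'sample1'),(2,'sample2'),(3,'sample3');
CREATE TABLE product_child(pc_id INTEGER, product_id INTEGER, product_size INTEGER);
INSERT INTO product_child VALUES
 (1,1,5),(2,1,6),(3,1,7),(4,2,5),(5,2,8),(6,2,7),(7,3,8),(8,3,6),(9,3,9);
""")

# pc1 requires a size-5-or-9 row, pc2 requires a size-6 row for the same product
rows = conn.execute("""
SELECT DISTINCT p.product_id, p.product_name
FROM product p
JOIN product_child pc1 ON pc1.product_id = p.product_id
                      AND pc1.product_size IN (5, 9)
JOIN product_child pc2 ON pc2.product_id = p.product_id
                      AND pc2.product_size = 6
ORDER BY p.product_id
""").fetchall()
print(rows)  # [(1, 'sample1'), (3, 'sample3')]
```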
If I understand correctly you need all products which have both sizes. So you have different products and every product appears in different sizes. And you want only these products which appear in both (5 or 9) and 6 sizes at the same time. I think you need something like this: ``` SELECT * from products where product_id IN ( SELECT product_id from product_child where product_size = 5 or product_size = 9 INTERSECT SELECT product_id from product_child where product_size = 6 ) ``` In above query I did intersection. I selected one set (product of size 5 or 9) and second set (product of size 6). Intersection means finding products which appear in both sets.
MySQL query left join issue
[ "", "mysql", "sql", "join", "relational-database", "" ]
I have a list of parameters; the first are a date range @start\_dt and @end\_dt, and one of my fields is mode. If mode = 1, the query is run one way; if mode = 2, another, etc. I've got 7 possible versions. When mode = 6, what I want to say is: pull all the columns between the date range listed below and where one column, pmt\_rcvd\_amt, doesn't equal the sum of sli\_paid\_amt. Now I want the sli\_paid\_amt grouped by customer\_no `Where a.create_dt between @start_dt and @end_dt` `and a.pmt_rcvd_amt <> sum(a.sli_paid_amt)` For example: ``` customer_no sli_due_amt pmt_rcvd_amt` 85244305 200.00 200.00 74500386 50.00 219.00 74500386 219.00 219.00 74500386 10.00 219.00 86119821 NULL NULL ``` I want it, for customer\_no 74500386, to group them (sum all the sli\_due\_amt), so 219 + 50 + 10, thus 279, and then compare it to the 219 in the pmt\_rcvd\_amt column. (Those values are going to be the same.) My code looks like this ... ``` select distinct a.id_key , a.customer_no , a.customer_prefix , a.customer_lname ... a.current_child_price , a.current_other_price from LV_CHC_TOURS_RSV_DATA a left outer join LT_CHC_TOURS_RSV_LANG b on a.language = b.id left outer join LV_CHC_TOURS_RSV_CS c on a.add_text = c.source_no Where a.create_dt between @start_dt and @end_dt and a.pmt_rcvd_amt <> sum(a.sli_paid_amt) Error Message: An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference. 
``` I then changed it to try and use having in my code: ``` from LV_CHC_TOURS_RSV_DATA a left outer join LT_CHC_TOURS_RSV_LANG b on a.language = b.id left outer join LV_CHC_TOURS_RSV_CS c on a.add_text = c.source_no Where a.create_dt between @start_dt and @end_dt group by a.customer_no having sum(a.sli_paid_amt) <> a.pmt_rcvd_amt ``` and now I get the following error message: ``` Column 'LV_CHC_TOURS_RSV_DATA.pmt_rcvd_amt' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause. ``` I know its a bit odd - but I want for a given customer\_no to sum all the sli\_due\_amts and then see if they equal the distinct value of pmt\_rcvd\_amt. because those distinct values will be the same in each line. what my desired output is: ``` customer_no sli_due_amt pmt_rcvd_amt` 74500386 50.00 219.00 74500386 219.00 219.00 74500386 10.00 219.00 ``` because: 50 + 219 + 10 doesn't equal to 219 (grouped by customer 74500386)
``` SELECT a.customer_no, SUM(a.sli_paid_amt), MAX(a.pmt_rcvd_amt) AS [Payment Amount], MAX(some_other_field), ... from LV_CHC_TOURS_RSV_DATA a left outer join LT_CHC_TOURS_RSV_LANG b on a.language = b.id left outer join LV_CHC_TOURS_RSV_CS c on a.add_text = c.source_no Where a.create_dt between @start_dt and @end_dt group by a.customer_no having sum(a.sli_paid_amt) <> [Payment Amount] ``` If the field is not in the group by, it must be an aggregate function such as SUM/MAX etc... If you group by customer id the above query should work. Then just wrap the fields in your select statement with MAX(-), because MAX can be used with char fields also.
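The key idea here, comparing the grouped SUM against a single representative `pmt_rcvd_amt` via MAX, can be sanity-checked in Python with SQLite (table and column names follow the question's sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pay(customer_no INTEGER, sli_due_amt REAL, pmt_rcvd_amt REAL);
INSERT INTO pay VALUES
 (85244305, 200.00, 200.00),
 (74500386,  50.00, 219.00),
 (74500386, 219.00, 219.00),
 (74500386,  10.00, 219.00);
""")

# customers whose summed due amounts do not match the (constant) received amount
rows = conn.execute("""
SELECT customer_no
FROM pay
GROUP BY customer_no
HAVING SUM(sli_due_amt) <> MAX(pmt_rcvd_amt)
""").fetchall()
print(rows)  # [(74500386,)]  -- 50 + 219 + 10 = 279, which is not 219
```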
If I understand you correctly, first you need to sum up sli\_due\_amt for each customer, and then you want to return the individual rows, excluding customers who have a pmt\_rcvd\_amt = the summed sli\_due\_amt. So: ``` WITH Summed as (select customer_no, sum(sli_due_amt) as sli_sum from LV_CHC_TOURS_RSV_DATA... GROUP BY cust_no ) select * from LV_CHC_TOURS_RSV_DATA t1 inner join summed on t1.customer_no = summed.customer_no and t1.pmt_rcvd_amt <> sli_sum ``` I grossly over-simplified your query, of course.
Error grouping by the sum of one column to the general value of the other
[ "", "mysql", "sql", "join", "having", "" ]
so i had to adjust a query i used because the limit at the end of the table would cause the entire first table to be read before limiting. this resulted in a timeout from my mysql server. now i created the query like i read in another post on stack overflow and came up with this: ``` SELECT a.title , a.lat , a.lon , a.a_content_id , a.date_added , r.countRep , i.countInt , content.img , c.marker , c.subcatvan FROM ( SELECT title , lat , lon , alert_content_id , date_added , cat FROM alerts LIMIT 10 ) a LEFT JOIN ( SELECT COUNT(DISTINCT id) countRep FROM reply WHERE alert_id = alerts.alerts ) r LEFT JOIN ( SELECT COUNT(DISTINCT id) countInt FROM interactions WHERE alert_id = alerts.alerts ) i LEFT JOIN ( SELECT img FROM alerts_content WHERE alert_id = alerts.alerts ) content LEFT JOIN ( SELECT marker , subcatvan FROM categories WHERE a.cat = id ) c; #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 45 ``` This was my original query which resulted in a timeout: ``` SELECT a.title, a.lat, a.lon, a.alert_content_id, a.date_added, count(DISTINCT r.id) as countRep ,count(DISTINCT i.id) AS countInt, ac.img, c.marker, c.subcatvan FROM `alerts` a LEFT JOIN `reply` r ON r.alert_id = a.alerts LEFT JOIN `interactions` i ON i.alert_id = a.alerts LEFT JOIN `alerts_content` ac ON ac.alert_id = a.alerts LEFT JOIN `categories` c ON a.cat = c.id GROUP BY a.title, a.lat, a.lon, a.alert_content_id, a.date_added LIMIT 0,10 ``` Does anyone know what is causing the error? Or someone that knows how to correct my original query?
Fixed it thanks to @Ravinder's comment about the missing ON clauses. Updated SQL: ``` SELECT a.alerts, a.title, a.lat, a.lon, a.alert_content_id, a.date_added, r.countRep, i.countInt, ac.img, c.marker, c.subcatvan FROM (SELECT alerts, title, lat, lon, alert_content_id, date_added, cat FROM `alerts` LIMIT 10) a LEFT JOIN (SELECT alert_id,count(DISTINCT id) as countRep FROM `reply`) r ON r.alert_id = a.alerts LEFT JOIN (SELECT alert_id,count(DISTINCT id) AS countInt FROM `interactions`) i ON i.alert_id = a.alerts LEFT JOIN (SELECT alert_id, img FROM `alerts_content`) ac ON ac.alert_id = a.alerts LEFT JOIN (SELECT id, marker, subcatvan FROM `categories`) c ON a.cat = c.id GROUP BY a.title, a.lat, a.lon, a.alert_content_id, a.date_added ```
You need to fix the subqueries. They need to use `group by` rather than be correlated: ``` SELECT a.title, a.lat, a.lon, a.a_content_id, a.date_added, r.countRep, i.countInt, content.img, c.marker, c.subcatvan FROM (SELECT title, lat, lon, alert_content_id, date_added, cat FROM `alerts` LIMIT 10 ) a LEFT JOIN (SELECT alert_id, count(DISTINCT id) as countRep FROM `reply` GROUP BY alert_id ) r ON r.alert_id = a.alerts r LEFT JOIN (SELECT alert_id, count(DISTINCT id) AS countInt FROM `interactions` GROUP BY alert_id ) i ON i alert_id = a.alerts LEFT JOIN (SELECT alert_id, img FROM `alerts_content` GROUP BY alert_id ) content ON alert_id = a.alerts LEFT JOIN `categories` c ON a.cat = c.id; ```
Error in MySQL Query with multiple joins and limit on first table
[ "", "mysql", "sql", "left-join", "limit", "" ]
I tried to execute an `SQL` `INSERT` with `Toad for oracle`: ``` INSERT INTO GRAT_ACTIVITY (UUID, IP_ADRESS, SEND_MAIL, DATE_CREA, DATE_UPD, CREATOR, CENTER, ETAT, REQUEST) VALUES('555-vgd9-pllkd-5513', '172.12.23.130', 'N', SYSDATE, SYSDATE, '1554', 'M18', 'I', 8842); --COMMIT; ``` the `GRAT_ACTIVITY` table structure is as below: ``` CREATE TABLE CASH.GRAT_ACTIVITY ( UUID VARCHAR2(64 BYTE) NOT NULL, IP_ADRESS VARCHAR2(15 BYTE), SEND_MAIL VARCHAR2(1 BYTE), DATE_CREA DATE, DATE_UPD DATE, CREATOR VARCHAR2(4 BYTE), CENTER VARCHAR2(4 BYTE), ETAT VARCHAR2(1 BYTE), REQUEST NUMBER ) ``` the error message: > ORA-00911: invalid character > > Cause: identifiers may not start with any ASCII character other than letters and numbers. $#\_ are also allowed after the first > character. Identifiers enclosed by doublequotes may contain any > character other than a doublequote. Alternative quotes (q'#...#') > cannot use spaces, tabs, or carriage returns as delimiters. For all > other contexts, consult the SQL Language Reference Manual. > > Action: None How can I solve it?
The statement you're executing is valid. The error seems to mean that Toad is including the trailing semicolon as part of the command, which does cause an `ORA-00911` when it's included as part of a statement - since it is a statement separator in the client, not part of the statement itself. It may be the following commented-out line that is confusing Toad ([as described here](https://support.software.dell.com/toad-for-oracle/kb/47096)); or it might be because you're trying to run everything as a single statement, in which case you can try to use the run script command (`F9`) instead of run statement (`F5`). Just removing the commented-out line makes the problem go away, but if you also saw this with an actual commit then it's likely to be that you're using the wrong method to run the statements. There is a bit more information about how Toad parses the semicolons in a comment [on this related question](https://stackoverflow.com/a/23033815/266304), but I'm not familiar enough with Toad to go into more detail.
Remove the semicolon (;), backtick (``) etc. from inside a query
How can I solve ORA-00911: invalid character error?
[ "", "sql", "oracle", "toad", "" ]
If I have a table like this: ``` ------------------------------ |id |duration |date | |1 |20 |1392479451 |<-- |2 |20 |1392479451 | |3 |10 |1392478620 |<-- |4 |30 |1392479457 |<-- |5 |30 |1392479457 | |6 |30 |1392479457 | ------------------------------ ``` I only want to `SUM` the first of each unique `date`: ``` ---------------- |SUM(duration) | |60 | ---------------- ``` How do I do this, please?
This should do it: ``` SELECT SUM(duration) FROM ( SELECT duration FROM tbl GROUP BY date ) t; ``` The `GROUP BY` in the inner query will select only one row per distinct date. Which row will be selected is not defined, but it doesn't matter as the durations should be identical. Then we just sum the durations. **EDIT**: Just to be clear, this will not select the *first* duration, but *any* single duration for a given date. If the durations can vary within a date and you specifically need the first one, this will not solve your problem.
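A quick way to convince yourself: the inner `GROUP BY date` collapses each date to a single row before the outer sum runs. A Python/SQLite sketch with the question's data (SQLite, like MySQL, permits the bare `duration` column in a `GROUP BY` query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl(id INTEGER, duration INTEGER, date INTEGER);
INSERT INTO tbl VALUES
 (1,20,1392479451),(2,20,1392479451),(3,10,1392478620),
 (4,30,1392479457),(5,30,1392479457),(6,30,1392479457);
""")

# one duration per distinct date, then sum: 20 + 10 + 30
total = conn.execute(
    "SELECT SUM(duration) FROM (SELECT duration FROM tbl GROUP BY date) t"
).fetchone()[0]
print(total)  # 60
```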
You can try something like this: ``` SELECT SUM(IF(`date` = ( SELECT MIN(`date`) FROM table WHERE `date` = t.`date`), `duration`, 0)) FROM `table` t ``` Or ``` SELECT SUM(t1.`duration`) FROM `table` t1 JOIN ( SELECT MIN(`id`) AS `id` FROM `table` GROUP BY `date` ) t2 ON t1.`id` = t2.`id` ```
SQL sum one value for each unique value from another column
[ "", "mysql", "sql", "" ]
I have a number 000005500 and I want it to be in the format 0000055.00 or 55.00 using an Oracle select query. I used this query: ``` select to_char(to_number(000005500),'99.99') from dual ``` but it's displaying `#####` How can I display it in the format I need?
As written your `to_number()` call is just doing an implicit conversion to a string and then an explicit conversion back to a number, which seems pointless, but I assume you're actually dealing with a value from a `varchar2` column. In which case you see: ``` select to_char(to_number('000005500'),'99.99') from dual TO_CHA ------ ###### ``` You're seeing the hashes because you can't fit your four-digit number, 5500, into a 99.99 format - you have four digits before the decimal point and the format mask only allows for two. The bit you seem to be missing is dividing by 100 to get the decimal: ``` select to_char(to_number('000005500') / 100,'99.99') from dual; TO_CHA ------ 55.00 ``` Another approach, if you want to keep it as a string with the same number of leading zeros as the original value, is to leave it as a string, chop it up with `substr()`, and concatenate the parts back together. Using a CTE as a demo: ``` with t as (select '000005500' as val from dual) select val, substr(val, 1, length(val) - 2) || '.' || substr(val, length(val) - 1, 2) as adj_val from t; VAL ADJ_VAL --------- --------------------------------------------- 000005500 0000055.00 ```
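Both transformations are plain string/number manipulation, so they are easy to mirror outside the database. A small Python sketch of the two approaches (divide-by-100 versus slicing the string):

```python
raw = "000005500"

# numeric route: treat the last two digits as decimals
numeric = f"{int(raw) / 100:.2f}"
print(numeric)      # 55.00

# string route: keep the leading zeros, insert a point before the last two digits
with_zeros = raw[:-2] + "." + raw[-2:]
print(with_zeros)   # 0000055.00
```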
Firstly, `000005500` is not a number. A number doesn't start with zero. You are dealing with a string. > I want it to be in format 0000055.00 You can only expect it to be a string as an output, and not a number. Anyway, to get the output as `55.00` as NUMBER, you could do the following - ``` SQL> WITH DATA AS( 2 SELECT '000005500' num FROM DUAL 3 ) 4 SELECT to_char(to_number(replace(num,'0','')),'99D99') FROM DATA 5 / TO_CHA ------ 55.00 SQL> ``` Or, ``` SQL> WITH DATA AS( 2 SELECT '000005500' num FROM DUAL 3 ) 4 SELECT to_char(to_number(rtrim(ltrim(num,'0'),'0')),'99D99') FROM DATA 5 / TO_CHA ------ 55.00 SQL> ``` **Edit** Alex's method is also nice, it uses simple mathematics to convert it to DECIMAL. I would prefer his way for the first part.
Change number format by adding decimal to it in oracle
[ "", "sql", "oracle", "" ]
Apologies if this question has already been asked, but i'm pulling my hair out! I have two tables, abbreviated to KI and UG. KI contains a list of people and their photos, UG contains another list of people. What I want to do is match the tables up and return a query that shows me a list of names where we have a match between KI & UG. Now i'm halfway there, i've got my query and it works fine (almost) - the problem is that there are loads of duplicates in the list. The people that originally managed the KI table had input different images for the same person, leaving there to be multiple rows for "John Smith" for example. This is my code: ``` SELECT ki.name, ug.name, ki.image_file FROM kantechimages AS ki INNER JOIN user_group as UG ON ki.name like ug.name WHERE ki.image_file is not null GROUP BY ki.name, ug.name, ki.image_file ``` So my question is, how can I remove the duplicate names from the list and only return one row where we have a match instead of all of them? Many thanks!
This will give you one row per person, but only one image file too; ``` SELECT ki.name, ug.name, max(ki.image_file) as image_file FROM kantechimages AS ki INNER JOIN user_group as UG ON ki.name like ug.name WHERE ki.image_file is not null GROUP BY ki.name, ug.name ``` Rhys
Your question seems to suggest that you are only interested in getting the names of people who have images tagged to them in the other table. If that is the case, you can just retrieve the distinct names from the join which satisfy your filter condition, like so: ``` SELECT DISTINCT ki.name FROM kantechimages AS ki INNER JOIN user_group as UG ON ki.name like ug.name WHERE ki.image_file is not null ``` If you do need to return the other fields as well, then you can try the below: ``` ;with cte as ( SELECT ki.name kiname, ug.name ugname, ki.image_file ki_image_file, row_number() over (partition by ki.name order by ug.name) rn FROM kantechimages AS ki INNER JOIN user_group as UG ON ki.name like ug.name WHERE ki.image_file is not null ) select kiname, ugname, ki_image_file from cte where rn = 1 ``` [Demo](http://rextester.com/RNWSY46610)
How to remove duplicate entries? SQL Server 2008
[ "", "sql", "sql-server", "database", "sql-server-2008", "duplicates", "" ]
``` CREATION_DATE REJECTED_REASON PART_NAME REJECTED_QTY 03-03-2014 Metal chips in port face PEGEOUT 1.8 CYLINDER HEAD CASTING H29 15 03-03-2014 Angular hole Shrinkage PEGEOUT 1.8 CYLINDER HEAD CASTING H29 7 01-05-2014 5th cap side dross CYL.HEAD VM MOTORI-4 CYL TESTA CILINDRI LAVORATA 23 01-05-2014 Casting broken CYL.HEAD VM MOTORI-4 CYL TESTA CILINDRI LAVORATA 3 01-05-2014 Bend in dand CYL . HEAD VM MOTORI-4 CYL TESTA CILINDRI LAVORATA 11 01-05-2014 Bend in casting CYL . HEAD VM MOTORI-4 CYL TESTA CILINDRI LAVORATA 17 07-05-2014 Angular hole Shrinkage EATON REAR HOUSING H-99 10 08-05-2014 Unclean CASTING CYLINDER HEAD 01 OF KOHLER H-185 9 08-05-2014 Angular hole Shrinkage CASTING CYLINDER HEAD 01 OF KOHLER H-185 1 08-05-2014 Bend CASTING CYLINDER HEAD 01 OF KOHLER H-185 20 12-11-2014 Shrinkage on top face GEAR BOX HOUSING ITL CLUTCH - ITL 15 12-11-2014 Casting damage GEAR BOX HOUSING ITL CLUTCH - ITL 5 13-11-2014 1st Exhaust Port Core Shift PIAGGIO 3 VALVE CYLINDER HEAD CASTING 22 06-01-2015 Shrinkage in spark plug G-30 CYLINDER HEAD REAR CASTING 14 08-01-2015 1st cap side dross G-30 CYLINDER HEAD REAR CASTING 6 08-01-2015 Unclean G-30 CYLINDER HEAD REAR CASTING 2 08-01-2015 Shrinkage in spark plug G-30 CYLINDER HEAD REAR CASTING 12 13-01-2015 1st cap side dross G-30 CYLINDER HEAD REAR CASTING 1 13-01-2015 1st Exhaust Port Core Shift G-30 CYLINDER HEAD REAR CASTING 9 13-01-2015 Shrinkage in spark plug G-30 CYLINDER HEAD REAR CASTING 10 ``` Suppose I have above values in a particular table.Now I need to sum up the rejection quantity and display rejection reasons as columns and rejection quantity as values monthwise as well as partwise Suppose I need the data for May 2014.The report output for the above list of values should be:- ``` PART NAME 5th cap side dross Casting Broken Bend in Casting Bend in Dand Angular Hole Shrinkage Unclean Bend CYL . 
HEAD VM MOTORI-4 CYL TESTA CILINDRI LAVORATA 23 3 17 11 0 0 0 CASTING CYLINDER HEAD 01 OF KOHLER H-185 0 0 0 0 1 9 20 EATON REAR HOUSING H-99 0 0 0 0 10 0 0 ``` Please help me and guide me in implementing this task in oracle through sql or pl sql.
The best would be to use dynamic pivot as Aramillo said. Oracle has something like that, it's **pivot xml** clause, but I have some query for you which do the job dynamically without xml operations. As you said you need only display report so after some formatting the output of below query will be appropriate for your case: ``` select part_name , rtrim(xmlagg(xmlelement(e,rejected_reason,',').extract('//text()')),',') || chr(10) /*new line char */ || rtrim(xmlagg(xmlelement(e,cnt,',').extract('//text()')),',') from ( select part_name, rejected_reason, row_number() over (partition by part_name, rejected_reason order by rejected_reason) as rn, count(*) over (partition by part_name, rejected_reason order by rejected_reason) as cnt from your_table order by part_name, rejected_reason ) where data_column = 'MAY 2014' and rn = 1 -- to avoid duplicates in groups group by part_name; ```
Try this: ``` SELECT * FROM your_table_name PIVOT (COUNT(REJECTED_REASON) FOR REJECTED_REASON IN('reason1', 'reason2', 'reason3' ....)) ; ```
Count values of a particular column and display it monthwise and partwise
[ "", "sql", "oracle", "plsql", "oracle11g", "" ]
I have `Crea_Date` column which is a `DateTime` Column. I want to subtract the value in the `IST` column from `crea_date` Column and return a new column with the `DateTime` Value in it. My sample data is like this: ``` States crea_date IST AB 2014-12-30 15:01:00.000 12:30:00.0000000 AK 2014-12-29 16:32:00.000 10:30:00.0000000 AZ 2014-12-18 16:07:00.000 11:30:00.0000000 ``` Thanks in Advance
If `IST` is an integer number of seconds: ``` SELECT DATEADD(s, -IST, crea_date) FROM yourTable ``` If `IST` is of the `TIME` type: ``` SELECT DATEADD(ms, DATEDIFF(ms, IST, '00:00:00'), crea_date) FROM yourTable ```
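The same arithmetic can be expressed with Python's `datetime`/`timedelta` as a cross-check of what the `DATEADD` call computes (sample values taken from the question's first row):

```python
from datetime import datetime, timedelta

crea_date = datetime(2014, 12, 30, 15, 1, 0)   # 2014-12-30 15:01:00
ist = timedelta(hours=12, minutes=30)          # 12:30:00

result = crea_date - ist
print(result)  # 2014-12-30 02:31:00
```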
As strange as it might seem, you *can* add/subtract `datetime` values and it seems it's "normal" behavior. Internally, datetime values are stored as the offset from `1/1/1900`. If I add `22/1/2015` and `1/1/2015` I get `22/1/2130` because the second value is actually `115` years after `1900`. When you cast a `time` value to `datetime` only the time component is copied and the date component is set to `1/1/1900`. In effect, you have an interval equal to your original time value. This way I can subtract `10:30` hours from a specific datetime: ``` declare @d datetime='2014-11-04 12:51:00', @t time='10:30:00' select @d -cast(@t as datetime) //----------------------- //2014-11-04 02:21:00.000 ``` This behavior isn't an implementation quirk - it is explicitly permitted only for the `datetime` type. All other datetime types (eg datetime2, datetimeoffset) return the error `Operand data type datetimeoffset is invalid for subtract operator`.
How to subtract Time Column from DateTime Column in SQL?
[ "", "sql", "sql-server", "datetime", "time", "subtraction", "" ]
Suppose I have the following data ``` id date another_info 1 2014-02-01 kjkj 1 2014-03-11 ajskj 1 2014-05-13 kgfd 2 2014-02-01 SADA 3 2014-02-01 sfdg 3 2014-06-12 fdsA ``` For each id I want to extract the most recent row: ``` id date another_info 1 2014-05-13 kgfd 2 2014-02-01 SADA 3 2014-06-12 fdsA ``` How could I manage that?
The most efficient way is to use Postgres' `distinct on` operator ``` select distinct on (id) id, date, another_info from the_table order by id, date desc; ``` If you want a solution that works across databases (but is less efficient) you can use a window function: ``` select id, date, another_info from ( select id, date, another_info, row_number() over (partition by id order by date desc) as rn from the_table ) t where rn = 1 order by id; ``` The solution with a window function is in most cases faster than using a sub-query.
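The portable window-function variant can be checked from Python with SQLite (window functions need SQLite 3.25 or newer, which recent Python builds bundle); data is copied from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE the_table(id INTEGER, date TEXT, another_info TEXT);
INSERT INTO the_table VALUES
 (1,'2014-02-01','kjkj'), (1,'2014-03-11','ajskj'), (1,'2014-05-13','kgfd'),
 (2,'2014-02-01','SADA'), (3,'2014-02-01','sfdg'), (3,'2014-06-12','fdsA');
""")

# rank rows per id by date descending, keep the first of each partition
rows = conn.execute("""
SELECT id, date, another_info
FROM (
  SELECT id, date, another_info,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY date DESC) AS rn
  FROM the_table
) t
WHERE rn = 1
ORDER BY id
""").fetchall()
print(rows)
# [(1, '2014-05-13', 'kgfd'), (2, '2014-02-01', 'SADA'), (3, '2014-06-12', 'fdsA')]
```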
``` select * from bar where (id,date) in (select id,max(date) from bar group by id) ``` *Tested in PostgreSQL,MySQL*
Postgresql extract last row for each id
[ "", "sql", "postgresql", "greatest-n-per-group", "" ]
I know there are several examples of recursion with CTE and so on, but how can this be accomplished just by using window functions in SQL Server 2012: ``` CREATE TABLE #temp ( ID INT PRIMARY KEY IDENTITY(1,1) NOT NULL, Percentage INT NOT NULL ) DECLARE @Calculated MONEY = 1000 INSERT INTO #temp ( Percentage ) VALUES ( 100 ) INSERT INTO #temp ( Percentage ) VALUES ( 90) INSERT INTO #temp ( Percentage ) VALUES ( 60) INSERT INTO #temp ( Percentage ) VALUES ( 50) INSERT INTO #temp ( Percentage ) VALUES ( 100) ``` And the result would be a running percentage like so (we are starting with $1000) ``` id percentage calculated -- -------- --------- 1 100 1000 2 50 500 3 90 450 4 80 360 5 100 360 ``` So the value for the next row is the percentage multiplied by the calculated value above that row. Can LAG be used on a computed alias? Thanks,
You need a running product of the percentages instead of always comparing 2 consecutive rows, which is why LEAD and LAG won't work here. You can use a windowed sum to keep a running product of the percentages against your variable to get your desired calculation: ``` SELECT ID, Expected, EXP(SUM(LOG(CONVERT(FLOAT, Percentage) / 100)) OVER (ORDER BY ID)) * @Calculated AS Actual FROM #Temp ``` Adding this to your sample code (with a column I added for your expected output): ``` CREATE TABLE #temp ( ID INT PRIMARY KEY IDENTITY(1,1) NOT NULL, Percentage INT NOT NULL, Expected MONEY NOT NULL ) DECLARE @Calculated MONEY = 1000 INSERT INTO #temp ( Percentage, Expected ) VALUES ( 100 , 1000) INSERT INTO #temp ( Percentage, Expected ) VALUES ( 50, 500) INSERT INTO #temp ( Percentage, Expected ) VALUES ( 90, 450) INSERT INTO #temp ( Percentage, Expected ) VALUES ( 80, 360) INSERT INTO #temp ( Percentage, Expected ) VALUES ( 100, 360) SELECT ID, Expected, EXP(SUM(LOG(CONVERT(FLOAT, Percentage) / 100)) OVER (ORDER BY ID)) * @Calculated AS Actual FROM #Temp ``` This will yield your expected output: ``` ID Expected Actual ----------- --------------------- ---------------------- 1 1000.00 1000 2 500.00 500 3 450.00 450 4 360.00 360 5 360.00 360 ```
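The EXP/SUM/LOG identity (a running sum of logarithms is the logarithm of a running product) is easy to verify outside the database; a pure-Python sketch with the corrected sample data:

```python
import math

percentages = [100, 50, 90, 80, 100]
calculated = 1000.0

running = 0.0
results = []
for p in percentages:
    running += math.log(p / 100)   # windowed SUM(LOG(...)) up to this row
    results.append(round(math.exp(running) * calculated, 2))

print(results)  # [1000.0, 500.0, 450.0, 360.0, 360.0]
```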
you can use `recursive cte` to get the desired result ``` with cte as ( select id, percentage, 1000 as calculated from #temp where id =1 union all select t.id, t.percentage, t.percentage*cte.calculated/100 as calculated from #temp t join cte on t.id = cte.id+1 ) select * from cte ```
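The recursive-CTE alternative can also be sanity-checked with SQLite from Python; note SQLite requires the explicit `WITH RECURSIVE` keyword:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp_t(id INTEGER, percentage INTEGER);
INSERT INTO temp_t VALUES (1,100),(2,50),(3,90),(4,80),(5,100);
""")

# anchor row starts at 1000, each step multiplies by the next percentage
rows = conn.execute("""
WITH RECURSIVE cte(id, percentage, calculated) AS (
  SELECT id, percentage, 1000.0 FROM temp_t WHERE id = 1
  UNION ALL
  SELECT t.id, t.percentage, t.percentage * cte.calculated / 100
  FROM temp_t t JOIN cte ON t.id = cte.id + 1
)
SELECT id, calculated FROM cte
""").fetchall()
print(rows)  # [(1, 1000.0), (2, 500.0), (3, 450.0), (4, 360.0), (5, 360.0)]
```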
SQL Server window function for running percentage
[ "", "sql", "sql-server", "" ]
I'm logging queries which have been sent to my API like this: ``` id | timestamp ----+--------------------- 1 | 2015-01-19 18:01:47 2 | 2015-01-19 20:41:37 3 | 2015-01-20 14:15:06 4 | 2015-01-21 13:02:51 5 | 2015-01-23 05:02:12 ``` Now, I need to know how many queries have been made in the last 60 minutes, 24 hours and 30 days, and group them like this: ``` HOURLY | DAILY | MONTHLY -------+-------+-------- 0 | 1 | 5 ``` Is this possible to do in one query? **Edit:** I've used the suggestions from here and played a bit with the logic until it worked the way I want it to. ``` SELECT SUM(CASE WHEN DATE_SUB(NOW(), interval 1 HOUR) <= `TIMESTAMP` THEN 1 ELSE 0 END) HOURLY, SUM(CASE WHEN DATE_SUB(NOW(), interval 1 DAY) <= `TIMESTAMP` THEN 1 ELSE 0 END) DAILY, SUM(CASE WHEN DATE_SUB(NOW(), interval 1 MONTH) <= `TIMESTAMP` THEN 1 ELSE 0 END) MONTHLY FROM `REQUESTS`; ``` Thanks a lot for your help guys!
``` select sum(case when timestamp between now() - interval 1 hour and now() then 1 else 0) hourly, sum(case when timestamp between now() - interval 1 day and now() then 1 else 0) daily, sum(case when timestamp between now() - interval 1 month and now() then 1 else 0) monthly from your_table ``` edited...
``` select sum(timestamp >= now() - interval 1 hour) as hour, sum(timestamp >= now() - interval 1 day) as day, sum(timestamp >= now() - interval 1 month) as month from your_table ``` ### [SQLFiddle demo](http://sqlfiddle.com/#!2/cb328a/2)
How to get data from last hour, last day and last month with one query?
[ "", "mysql", "sql", "database", "pdo", "" ]
I'm trying to write an 'idiot proof' SQL script that can be run by non-SQL literate users. My idea is to have a couple of variables defined at the top of the script, then run particular queries based on those variable. I'm testing on mySQL, but it will eventually run on SQL-Server. In pseudo-code this is what I'm trying to do: ``` # Set matchThis to the value to match SET @matchThis = "matchMe"; # Uncomment (remove the #) one of the two lines below to update or just view #SET @update = "YES"; SET @update = "NO"; IF @update = "YES" { UPDATE myTable SET myColumn = "changed" WHERE matchVal = @matchThis; } ELSE { SELECT * FROM myTable WHERE matchVal = @matchThis; } ``` I want to do this entirely in SQL if there is a way. I've seen guides on using `SELECT IF` etc but can't figure out how to achieve the above.
This is for MSSQL. I think that you got everything down but the syntax. I hope this helps/works. ``` DECLARE @matchthis AS VARCHAR(MAX) DECLARE @update AS VARCHAR(1) SET @matchthis = 'matchme' --@update can be Y or N. User changes this here. SET @update = 'Y' IF @update = 'Y' UPDATE mytable SET myColumn = 'changed' WHERE matchval = @matchthis ELSE IF @update = 'N' SELECT * FROM myTable WHERE matchval = @matchthis ``` I didn't know whether to make the changed a variable, but if you wanted to make it a variable, follow the same syntax as for @matchthis (declare and set). If you want to make this really idiot proof, I'd say the best thing to do is make a stored procedure so that users don't see the code, they just have the input box.
There are two questions here. One is why the `IF` statement doesn't work - because T-SQL doesn't have braces. The syntax is shown in [the documentation](https://msdn.microsoft.com/en-us/library/ms182717.aspx). The important question though is how to pass parameters to the script without having the users modify the script itself. This is done using [Script Variables](https://msdn.microsoft.com/en-us/library/ms188714.aspx). When a script is executed by using the `sqlcmd` command, any text of the form `$(SomeName)` is replaced with command-line parameters or environment variables with the same name. For example, if you have the following script ``` USE AdventureWorks2012; SELECT x.$(ColumnName) FROM Person.Person x WHERE c.BusinessEntityID < 5; ``` This command will run it with `FirstName` as the column name ``` sqlcmd -v ColumnName ="FirstName" -i c:\testscript.sql ```
Only run SQL query if condition met
[ "", "mysql", "sql", "sql-server", "" ]
I've got this: ``` SELECT COUNT(*) FROM TABLE1, TABLE2, TABLE3; ``` Say TABLE1 has 5 entries, TABLE2 has 5 entries, and TABLE3 has one, COUNT(\*) essentially becomes 5\*5\*1 = 25. Is there a reason for this? Ideally, I want the total rowcount from all 3 tables WITHOUT having to use multiple from statements.
```
SELECT COUNT(*) FROM TABLE1, TABLE2, TABLE3;
```

does a `CROSS JOIN`. It matches each record in `TABLE1` with each record in `TABLE2`, then matches each result with each record in `TABLE3`, so for example if `TABLE1` contained 2 records, `TABLE2` contained 3 records and `TABLE3` contained 4 records, you'd get `2 X 3 X 4 = 24` as your result.

To get the counts from each table you'll need to use subqueries:

```
SELECT
  (SELECT COUNT(*) FROM TABLE1),
  (SELECT COUNT(*) FROM TABLE2),
  (SELECT COUNT(*) FROM TABLE3);
```

OR

```
SELECT COUNT(*) FROM TABLE1
UNION ALL
SELECT COUNT(*) FROM TABLE2
UNION ALL
SELECT COUNT(*) FROM TABLE3
```

to get the result as records instead of columns
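A quick way to convince yourself of the multiplication effect, and of the subquery fix, is to run both against a throwaway database. A small Python + SQLite sketch (the single-column tables are my own toy data; the harness is an illustration, not part of the answer):

```python
# Cross join multiplies row counts; per-table subqueries count each table once.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (x); CREATE TABLE t2 (x); CREATE TABLE t3 (x);
INSERT INTO t1 VALUES (1),(2),(3),(4),(5);
INSERT INTO t2 VALUES (1),(2),(3),(4),(5);
INSERT INTO t3 VALUES (1);
""")

cross = conn.execute("SELECT COUNT(*) FROM t1, t2, t3").fetchone()[0]
per_table = conn.execute(
    "SELECT (SELECT COUNT(*) FROM t1), (SELECT COUNT(*) FROM t2), "
    "(SELECT COUNT(*) FROM t3)").fetchone()
total = sum(per_table)
print(cross, per_table, total)  # 25 (5, 5, 1) 11
```

The cross join gives 5 × 5 × 1 = 25, while the subquery form yields the individual counts, which can be summed for the total the question asks for.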
The reason is that you are doing a cross join, which (as in set theory) produces the Cartesian product of the tables. If you want to count three different tables and then sum those counts together, you can do this:

```
SELECT SUM(COUNT)
FROM (SELECT COUNT(*) AS COUNT FROM TABLE1
      UNION ALL
      SELECT COUNT(*) AS COUNT FROM TABLE2
      UNION ALL
      SELECT COUNT(*) AS COUNT FROM TABLE3
     ) TB
```
Why does COUNT(*) multiply the results when there are several tables?
[ "sql" ]
I have a list of file paths stored in a column of a table. Now I need to extract everything up to and including the last '\' in each file path (i.e. like the result set below).

Example:

```
column_A
--------------
G:\REPORTS\DDMS\PCP0.txt
G:\REPORTS\DPS\DEFAU.pdf
```

Result

```
G:\REPORTS\DDMS\
G:\REPORTS\DPS\
```
Try this. ``` DECLARE @str VARCHAR(500)='G:\REPORTS\DDMS\PCP0.txt' SELECT Reverse(Substring(Reverse(@str), Charindex('\', Reverse(@str)), Len(@str))) ```
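For intuition, the REVERSE/CHARINDEX trick finds the *last* `\` by locating the *first* `\` in the reversed string. The same last-separator index arithmetic written out in plain Python (the helper is hypothetical, just to illustrate the logic):

```python
# Find the position just past the last backslash and keep everything up to it,
# mirroring REVERSE + CHARINDEX + SUBSTRING from the T-SQL answer.
paths = [r"G:\REPORTS\DDMS\PCP0.txt", r"G:\REPORTS\DPS\DEFAU.pdf"]

def dir_part(p):
    # index of first '\' in the reversed string == distance of the last '\'
    # from the end of the original string
    i = len(p) - p[::-1].index("\\")
    return p[:i]  # slice includes the trailing '\'

result = [dir_part(p) for p in paths]
print(result)  # ['G:\\REPORTS\\DDMS\\', 'G:\\REPORTS\\DPS\\']
```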
Try this

```
DECLARE @str VARCHAR(500)='G:\REPORTS\DDMS\PCP0.txt'

SELECT LEFT(@str, LEN(@str) - CHARINDEX('\', REVERSE(@str)) + 1)
```

(The `+ 1` keeps the trailing `\` shown in the desired result; without it you get `G:\REPORTS\DDMS`.)
Find Path of the file using substring
[ "sql", "sql-server", "t-sql" ]
I'm attacking a problem where I have a value for a range of dates. I would like to consolidate the rows in my table by averaging them and reassigning the date column to be relative to the last 7 days. My SQL experience is lacking and I could use some help. Thanks for giving this a look!!

E.g. 7 rows with dates and values.

```
UniqueId Date Value
........ .... .....
a 2014-03-20 2
a 2014-03-21 2
a 2014-03-22 3
a 2014-03-23 5
a 2014-03-24 1
a 2014-03-25 0
a 2014-03-26 1
```

Resulting row

```
UniqueId Date AvgValue
........ .... ........
a 2014-03-26 2
```

First off, I am not even sure this is possible. I'm trying to attack a problem with this data at hand. I thought maybe using a framing window with a partition to roll the dates into one date with the averaged result, but am not exactly sure how to say that in SQL.
I am taking the following as sample data:

```
CREATE TABLE some_data1 (unique_id text, date date, value integer);

INSERT INTO some_data1 (unique_id, date, value)
VALUES
( 'a', '2014-03-20', 2),
( 'a', '2014-03-21', 2),
( 'a', '2014-03-22', 3),
( 'a', '2014-03-23', 5),
( 'a', '2014-03-24', 1),
( 'a', '2014-03-25', 0),
( 'a', '2014-03-26', 1),
( 'b', '2014-03-01', 1),
( 'b', '2014-03-02', 1),
( 'b', '2014-03-03', 1),
( 'b', '2014-03-04', 1),
( 'b', '2014-03-05', 1),
( 'b', '2014-03-06', 1),
( 'b', '2014-03-07', 1)
```

**OPTION A: Using a CTE (`WITH`) — supported by PostgreSQL but not by older MySQL**

```
with cte as
(
   select unique_id
         ,max(date) date
   from some_data1
   group by unique_id
)
select max(sd.unique_id), max(sd.date), avg(sd.value)
from some_data1 sd
inner join cte using(unique_id)
where sd.date <= cte.date
group by cte.unique_id
limit 7
```

[**> SQLFIDDLE DEMO**](http://sqlfiddle.com/#!15/80f51/1/0)

---

**OPTION B: Works in both PostgreSQL and MySQL (derived table instead of a CTE)**

```
select max(sd.unique_id)
      ,max(sd.date)
      ,avg(sd.value)
from
(
   select unique_id
         ,max(date) date
   from some_data1
   group by unique_id
) cte
inner join some_data1 sd using(unique_id)
where sd.date <= cte.date
group by cte.unique_id
limit 7
```

[**> SQLFIDDLE DEMO**](http://sqlfiddle.com/#!2/80f51d/1/0)
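Option B can also be checked against SQLite. The sketch below is my own test harness (with `ON` instead of `USING` and an `ORDER BY` added for deterministic output); note that in this sample every `unique_id` has exactly seven rows, so the average here covers the full last 7 days per id:

```python
# Run the derived-table variant on the sample data from the answer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE some_data1 (unique_id TEXT, date TEXT, value INTEGER)")
rows = [("a", "2014-03-%02d" % d, v)
        for d, v in zip(range(20, 27), [2, 2, 3, 5, 1, 0, 1])]
rows += [("b", "2014-03-%02d" % d, 1) for d in range(1, 8)]
conn.executemany("INSERT INTO some_data1 VALUES (?,?,?)", rows)

result = conn.execute("""
    SELECT sd.unique_id, max(sd.date), avg(sd.value)
    FROM (SELECT unique_id, max(date) AS maxdate
          FROM some_data1 GROUP BY unique_id) cte
    JOIN some_data1 sd ON sd.unique_id = cte.unique_id
    WHERE sd.date <= cte.maxdate
    GROUP BY sd.unique_id
    ORDER BY sd.unique_id
""").fetchall()
print(result)  # [('a', '2014-03-26', 2.0), ('b', '2014-03-07', 1.0)]
```

ISO-formatted date strings (`YYYY-MM-DD`) compare correctly as text, which is what makes `sd.date <= cte.maxdate` work in SQLite here.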
For PostgreSQL a window function might be what you want: ``` DROP TABLE IF EXISTS some_data; CREATE TABLE some_data (unique_id text, date date, value integer); INSERT INTO some_data (unique_id, date, value) VALUES ( 'a', '2014-03-20', 2), ( 'a', '2014-03-21', 2), ( 'a', '2014-03-22', 3), ( 'a', '2014-03-23', 5), ( 'a', '2014-03-24', 1), ( 'a', '2014-03-25', 0), ( 'a', '2014-03-26', 1), ( 'a', '2014-03-27', 3); WITH avgs AS ( SELECT unique_id, date, avg(value) OVER w AS week_avg, count(value) OVER w AS num_days FROM some_data WINDOW w AS ( PARTITION BY unique_id ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW)) SELECT unique_id, date, week_avg FROM avgs WHERE num_days=7 ``` Result: ``` unique_id | date | week_avg -----------+------------+-------------------- a | 2014-03-26 | 2.0000000000000000 a | 2014-03-27 | 2.1428571428571429 ``` Questions include: 1. What happens if a day from the preceding six days is missing? Do we want to add it and count it as zero? 2. What happens if you add a day? Is the result of the code above what you want (a rolling 7-day average)?
Get average of last 7 days
[ "sql", "postgresql", "amazon-redshift" ]
I need to find only one entry from the database which is less than the other rows. I have written this query; is there another method?

```
SELECT * FROM users WHERE birthdate < 1420239600
```

It will return all users whose birthdate is less than 1420239600, but I need to show only one row, like 1420239599, not everything from 1 to 1420239599.

**edit**

```
SELECT * FROM USERS WHERE birthdate < 13 ORDER BY birthdate DESC LIMIT 1;
```

it returns 1 as it is the least among all, but the requirement is that it should return 12
**MYSQL** ``` SELECT * FROM users WHERE birthdate < 1420239600 ORDER BY birthdate DESC LIMIT 1 ``` **MSSQL** ``` SELECT TOP 1 * FROM users WHERE birthdate < 1420239600 ORDER BY birthdate DESC ``` This will select the first entry only
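The MySQL form is easy to verify against the numbers from the question's edit using SQLite, whose `ORDER BY ... LIMIT 1` behaves the same way (the harness and sample values are my own illustration):

```python
# Largest value below a threshold: sort descending and take one row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (birthdate INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [(i,) for i in (1, 5, 12, 13, 20)])

row = conn.execute(
    "SELECT birthdate FROM users WHERE birthdate < 13 "
    "ORDER BY birthdate DESC LIMIT 1").fetchone()
print(row)  # (12,) — the closest value below 13, as the question requires
```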
I think the safest option would be like this:

```
SELECT *
FROM USERS
WHERE birthdate < 1320239600
ORDER BY birthdate DESC
LIMIT 1;
```

Sorting descending and limiting to one row picks the single largest value below the threshold. Arijit's `TOP 1` form is only valid for MS SQL Server or MS Access; this statement using ORDER BY and LIMIT is the equivalent in MySQL.
find only one entry from database with less than condition
[ "mysql", "sql", "select" ]
I have this table called myTable

```
Posting Date|Item No_|Entry Type|
2015-01-13|1234|1
2015-01-13|1234|1
2015-01-12|1234|1
2015-01-12|5678|1
2015-02-12|4567|1
```

What I want is to only return results where an [Item No\_] appears in the table exactly one time. So in this example, I only want to return [Item No\_] 5678 and 4567, because there is only one record for each, and ignore [Item No\_] 1234.

This is the SQL I have tried, but something is wrong. Can anyone help me?

```
SELECT [Item No_], [Posting Date], COUNT([Item No_]) AS Antal
FROM myTable
GROUP BY [Entry Type], [Posting Date], [Item No_]
HAVING ([Entry Type] = 1) AND (COUNT([Item No_]) = 1)
ORDER BY [Posting Date] DESC
```
Remove `Posting Date` from `group by`

```
SELECT [Item No_], [Entry Type], COUNT([Item No_]) AS Antal
FROM myTable
GROUP BY [Entry Type], [Item No_]
HAVING COUNT([Item No_]) = 1
```

or if you want other details use a `subquery`

```
SELECT [Item No_], [Entry Type], [Posting Date]
FROM myTable a
WHERE EXISTS (SELECT 1
              FROM myTable b
              WHERE a.[Item No_] = b.[Item No_]
              GROUP BY [Entry Type], [Item No_]
              HAVING COUNT(1) = 1)
ORDER BY [Posting Date] DESC
```

or a `window function`

```
;WITH cte
     AS (SELECT [Item No_], [Posting Date], [Entry Type],
                ROW_NUMBER() OVER (PARTITION BY [Entry Type], [Item No_]
                                   ORDER BY [Item No_]) RN
         FROM myTable)
SELECT *
FROM cte a
WHERE NOT EXISTS (SELECT 1
                  FROM cte b
                  WHERE a.[Item No_] = b.[Item No_]
                    AND rn > 1)
ORDER BY [Posting Date] DESC
```
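The `HAVING COUNT(...) = 1` variant translates directly to SQLite, which uses double-quoted identifiers instead of brackets. A small self-check on the question's data (the harness is my own illustration):

```python
# Keep only Item No_ values that occur exactly once per Entry Type.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE myTable '
             '("Posting Date" TEXT, "Item No_" TEXT, "Entry Type" INTEGER)')
conn.executemany("INSERT INTO myTable VALUES (?,?,1)", [
    ("2015-01-13", "1234"), ("2015-01-13", "1234"), ("2015-01-12", "1234"),
    ("2015-01-12", "5678"), ("2015-02-12", "4567")])

result = conn.execute(
    'SELECT "Item No_", COUNT(*) FROM myTable '
    'GROUP BY "Entry Type", "Item No_" HAVING COUNT(*) = 1 '
    'ORDER BY "Item No_"').fetchall()
print(result)  # [('4567', 1), ('5678', 1)] — 1234 (count 3) is excluded
```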
``` select [Item No_] from myTable group by [Item No_] having count(*)=1 ```
only return records count() = 1
[ "sql", "sql-server" ]
I am trying the following select statement including columns from 4 tables. But the results return each row 4 times; I'm sure this is because I have multiple left joins, but I have tried other joins and cannot get the desired result.

```
select table1.empid, table2.name, table2.datefrom,
       table2.UserDefNumber1, table3.UserDefNumber1, table4.UserDefChar6
from table1
inner join table2 on table2.empid=table1.empid
inner join table3 on table3.empid=table1.empid
inner join table4 on table4.empid=table1.empid
where MONTH(table2.datefrom) = Month(Getdate())
```

I need this to return the data without any duplicates, so only 1 row for each entry. I would also like the "where Month" clause at the end to look at the previous month, not the current month, but I'm struggling with that also. I am a bit new to this, so I hope it makes sense. Thanks
If the duplicate rows are identical on each column you can use the `DISTINCT` keyword to eliminate those duplicates. But I think you should reconsider your `JOIN` or `WHERE` clause, because there has to be a reason for those duplicates:

1. The WHERE clause hits several rows in table2 having the same month for a single empid
2. There are several rows with the same empid in one of the other tables
3. both of the above are true

You may want to rule those duplicate rows out with conditions in WHERE/JOIN instead of the `DISTINCT` keyword, as there may be unexpected behaviour when some data changes in a single row of the original resultset. Then you start having duplicate empids again.

You can check whether a date falls in the previous month with the following half-open range (inclusive lower bound, exclusive upper bound, so a value exactly at midnight on the 1st of the current month is not picked up):

```
date >= dateadd(mm, -1, datefromparts(year(getdate()), month(getdate()), 1))
AND date < datefromparts(year(getdate()), month(getdate()), 1)
```

This uses `DATEFROMPARTS` (available from SQL Server 2012) to build the first day of the current month, subtracts a month from one copy using `DATEADD` (giving the first day of the previous month), and checks that `date` lies between those two boundaries.
If your query is returning duplicates, then one or more of the tables have duplicate `empid` values. This is a *data* problem. You can find them with queries like this:

```
select empid, count(*)
from table1
group by empid
having count(*) > 1;
```

You should really fix the data and query so it returns what you want. You can do a bandage solution with `select distinct`, but I would not usually recommend that. Something is causing the duplicates, and if you do not understand why, then the query may not be returning the results you expect.

As for your `where` clause. Given your logic, the proper way to express this would include the year:

```
where year(table2.datefrom) = year(getdate()) and
      month(table2.datefrom) = month(getdate())
```

Although there are other ways to express this logic that are more compatible with indexes, you can continue down this course and get the previous month with:

```
where year(table2.datefrom) * 12 + month(table2.datefrom) =
      year(getdate()) * 12 + month(getdate()) - 1
```

That is, convert the months to a number of months since time zero and then use month arithmetic. If you care about indexes, the previous-month condition would look like:

```
where table2.datefrom >= dateadd(day, 1 - day(dateadd(month, -1, getdate())), cast(dateadd(month, -1, getdate()) as date))
  and table2.datefrom < dateadd(day, 1 - day(getdate()), cast(getdate() as date))
```
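The previous-month condition the question ultimately asks for boils down to a half-open range: [first day of previous month, first day of current month). A quick Python sanity check of those boundaries (the helper name is my own, not from either answer):

```python
# Compute [first of previous month, first of current month) for a given date.
from datetime import date

def prev_month_range(today):
    first_of_this = today.replace(day=1)
    if first_of_this.month == 1:  # wrap January back to December
        first_of_prev = first_of_this.replace(year=first_of_this.year - 1,
                                              month=12)
    else:
        first_of_prev = first_of_this.replace(month=first_of_this.month - 1)
    return first_of_prev, first_of_this  # lower inclusive, upper exclusive

lo, hi = prev_month_range(date(2015, 3, 17))
print(lo, hi)  # 2015-02-01 2015-03-01
print(lo <= date(2015, 2, 28) < hi)  # True: Feb 28 is in the previous month
```

The half-open comparison (`>=` lower bound, `<` upper bound) is also what makes the SQL version index-friendly: the column appears bare on one side of each comparison.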
Joining multiple tables returning duplicates
[ "sql", "sql-server", "select", "join", "inner-join" ]
``` select * INTO [dbo].[aTable] from dbo.bt btg left join dbo.btt bta on btg.specialty = bta.specialty order by 1, 6 ``` I am getting the following error: > Column names in each table must be unique. Column name 'Specialty' in table 'aTable' is specified more than once. Columns for `bt` table: ``` Location Specialty Provider ``` Column for `btt` table: ``` Specialty Topic ``` I am trying to get Location, Specialty, Provider, and (join all topics for the specialty).
The issue here is the `SELECT *`, which selects every column from both tables, so `Specialty` appears twice in the result set. Specify the columns you want instead, qualifying the ambiguous one (and note there are now only four output columns, so the second sort key must be adjusted). For instance:

```
SELECT btg.Location, btg.Specialty, btg.Provider, bta.Topic
INTO [dbo].[aTable]
FROM dbo.bt btg
LEFT JOIN dbo.btt bta ON btg.specialty = bta.specialty
ORDER BY 1, 4
```
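SQLite has no `SELECT ... INTO`, but its `CREATE TABLE ... AS SELECT` analogue shows the same fix at work: once the duplicated column is qualified, the new table is created without complaint. The schema matches the question; the sample values are my own toy data:

```python
# Qualify the ambiguous Specialty column so CREATE TABLE AS SELECT succeeds.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bt  (Location TEXT, Specialty TEXT, Provider TEXT);
CREATE TABLE btt (Specialty TEXT, Topic TEXT);
INSERT INTO bt  VALUES ('NY', 'Cardio', 'Dr. A');
INSERT INTO btt VALUES ('Cardio', 'Stents');
CREATE TABLE aTable AS
  SELECT btg.Location, btg.Specialty, btg.Provider, bta.Topic
  FROM bt btg LEFT JOIN btt bta ON btg.Specialty = bta.Specialty;
""")

result = conn.execute("SELECT * FROM aTable").fetchall()
print(result)  # [('NY', 'Cardio', 'Dr. A', 'Stents')]
```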
**Alias names are all you need** to avoid the conflict when you join two tables with the same column name. Qualify each column with its table's alias (`Topic` lives in `btt`, so it is `bta.Topic`):

```
SELECT btg.Location, btg.Specialty, btg.Provider, bta.Topic
INTO [dbo].[aTable]
FROM dbo.bt btg
LEFT JOIN dbo.btt bta ON btg.specialty = bta.specialty
ORDER BY 1, 4
```
How to merge two tables having the same column name
[ "sql", "sql-server", "t-sql", "sql-server-2012" ]
I am looking for the simplest solution that will resolve the following problem. I have a table:

```
Column1 Column2
------- -------
A 11
A NULL
B 12
B 14
B NULL
C NULL
```

I would like to query (select from) this table so that a NULL value is returned only when it is the only value for a distinct value of column1. When there is at least one non-NULL value for a given column1, the NULL rows for that value should be bypassed. Desired outcome:

```
Column1 Column2
------- -------
A 11
B 12
B 14
C NULL
```

I was trying with COALESCE, NULLIF, etc. and had no results. I would like to achieve this with the simplest solution. I am joining Column2 with a column in another table, so for the NULL case I hope a left join could be appropriate. I am very grateful for any help
Try this:

```
SELECT column1, column2
FROM dummy t1
WHERE column2 IS NOT NULL
OR NOT EXISTS (SELECT ''
               FROM dummy t2
               WHERE t1.column1 = t2.column1
               AND column2 IS NOT NULL)
```

This keeps all rows whose column2 is not NULL, plus the NULL rows of the groups that contain nothing but NULLs.
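The correlated `NOT EXISTS` query can be checked directly against the sample table; here it is run through SQLite from Python (the harness is my own illustration, with an `ORDER BY` added for stable output):

```python
# Keep non-NULL rows, plus NULL rows whose group has no non-NULL value at all.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dummy (column1 TEXT, column2 INTEGER)")
conn.executemany("INSERT INTO dummy VALUES (?,?)", [
    ("A", 11), ("A", None), ("B", 12), ("B", 14), ("B", None), ("C", None)])

result = conn.execute("""
    SELECT column1, column2
    FROM dummy t1
    WHERE column2 IS NOT NULL
       OR NOT EXISTS (SELECT 1 FROM dummy t2
                      WHERE t1.column1 = t2.column1
                        AND t2.column2 IS NOT NULL)
    ORDER BY column1, column2
""").fetchall()
print(result)  # [('A', 11), ('B', 12), ('B', 14), ('C', None)]
```

The A and B NULL rows are dropped because their groups contain non-NULL values, while C's lone NULL row survives, matching the desired outcome in the question.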
It should work like this ( there might be a more performant way though )

```
select column1, column2 from table where column2 is not null
union
(
  select column1, column2 from table where column2 is null
  and column1 not in ( select column1 from table where column2 is not null group by column1)
)
```

The first select above the union gets all rows without `null` values, then the second one simply adds all rows with `null` values in column2, but not the ones you already had in the first one. The `group by` is not necessary for the result, but might clarify the logic.
Oracle function to distinct NULL values based on other column
[ "sql", "oracle", "function", "select", "null" ]