I have the table below. ``` code type value =================== 100 R 3300 100 B 7900 101 R 6800 100 D 2100 100 C 2300 101 C 1200 ``` I want the select statement to return the result below when I select for code=100. ``` code Rvalue Bvalue Dvalue Cvalue ================================== 100 3300 7900 2100 2300 ``` I was able to achieve this using inline queries, but I want to know if there is a better way to do it. Thanks in advance.
Not sure if this is the best way, but it works. Use max() and CASE to create multiple columns from one. Max will always choose the real value instead of the NULL. You could substitute 0 or another default if you want. ``` SELECT code ,max(CASE WHEN type = 'R' THEN value ELSE NULL END) RValue ,max(CASE WHEN type = 'B' THEN value ELSE NULL END) BValue ,max(CASE WHEN type = 'D' THEN value ELSE NULL END) DValue ,max(CASE WHEN type = 'C' THEN value ELSE NULL END) CValue FROM mytable GROUP BY code ```
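If you want to sanity-check the conditional-aggregation idea outside SQL Server, here is a small sketch using SQLite through Python. The table name and data mirror the question; the technique is the same `MAX(CASE ...)` trick.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (code INT, type TEXT, value INT);
INSERT INTO mytable VALUES
  (100,'R',3300),(100,'B',7900),(101,'R',6800),
  (100,'D',2100),(100,'C',2300),(101,'C',1200);
""")
# MAX() skips NULLs, so each CASE picks out exactly one type per group.
row = conn.execute("""
    SELECT code,
           MAX(CASE WHEN type = 'R' THEN value END) AS RValue,
           MAX(CASE WHEN type = 'B' THEN value END) AS BValue,
           MAX(CASE WHEN type = 'D' THEN value END) AS DValue,
           MAX(CASE WHEN type = 'C' THEN value END) AS CValue
    FROM mytable
    WHERE code = 100
    GROUP BY code
""").fetchone()
print(row)  # (100, 3300, 7900, 2100, 2300)
```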
You could use the PIVOT operator. I've never actually used it myself, but I think it will do what you're trying to do. Not really sure it's that much better though. <https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx>
Select multiple rows in single result using SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I want to write a SQL query to bucket the time into increments of 2 hours: 0-2, 2-4, 4-6, …, 18-20, 20-22, 22-24. ``` Time I want it to be 6/8/2015 20:49 20-22 6/5/2015 12:47 12-14 6/9/2015 16:46 16-18 ``` Thanks,
You can use a case expression and some simple arithmetic to group the time values into buckets: ``` select time, case when datepart(hour, time) % 2 = 0 then -- n % 2 = 0 determines if hour is even cast(datepart(hour, time) as varchar(2)) + '-' + cast(datepart(hour, time) + 2 as varchar(2)) else -- hour is odd cast(datepart(hour, time) - 1 as varchar(2)) + '-' + cast(datepart(hour, time) + 1 as varchar(2)) end as bucket from t ``` Note that I made the assumption that the odd hours should be bucketed into the even numbered buckets, and that there should not be any odd buckets (like 1-3, 3-5 etc). [Sample SQL Fiddle](http://www.sqlfiddle.com/#!3/1eec6/2) Sample output: ``` | time | bucket | |------------------------|--------| | June, 08 2015 00:49:00 | 0-2 | | June, 08 2015 23:49:00 | 22-24 | | June, 08 2015 20:49:00 | 20-22 | | June, 05 2015 12:47:00 | 12-14 | | June, 05 2015 13:47:00 | 12-14 | | June, 09 2015 16:46:00 | 16-18 | ```
Simple integer division can drop everything in the right buckets for you. For example, 1/2 = 0, 2/2 = 1, 3/2 = 1, etc. After that it is just a matter of formatting the output: ``` select time, cast((datepart(hour, time)/2)*2 as varchar(2))+'-'+ cast((datepart(hour, time)/2)*2+2 as varchar(2)) as bucket from t ``` Note that the division of two integers defaults to integer division, which is why this works.
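The integer-division trick can be checked quickly with SQLite from Python. Note this is a SQLite-flavored sketch: `strftime('%H', ...)` stands in for `datepart(hour, ...)` and `||` for T-SQL's `+` string concatenation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (time TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [
    ("2015-06-08 20:49:00",),
    ("2015-06-05 12:47:00",),
    ("2015-06-09 16:46:00",),
])
# (hour / 2) * 2 floors the hour to the nearest even number,
# giving the lower bound of the bucket.
buckets = [r[1] for r in conn.execute("""
    SELECT time,
           ((CAST(strftime('%H', time) AS INT) / 2) * 2) || '-' ||
           ((CAST(strftime('%H', time) AS INT) / 2) * 2 + 2) AS bucket
    FROM t
""")]
print(buckets)  # ['20-22', '12-14', '16-18']
```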
SQL bucket timestamp into increments of 2 hours
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Input ``` +----+----+----+ | in | tn | 20 | +----+----+----+ | in | ka | 18 | +----+----+----+ | in | ap | 30 | +----+----+----+ | us | la | 12 | +----+----+----+ | us | ca | 20 | +----+----+----+ | us | ny | 5 | +----+----+----+ ``` Desired Output ``` +----+----+----+ | in | ka | 18 | +----+----+----+ | us | ny | 5 | +----+----+----+ ``` As in the above output, I need each country and its state with the least value. ``` select a.country, a.state, a.value from table1 a where a.value in (select min(value) from table1); ``` With the above query, I am getting output based on the least value of the entire table. ``` +----+----+---+ | us | ny | 5 | +----+----+---+ ``` I need the least value under each country and its respective state.
Using your approach, you need a correlated subquery: ``` select a.country, a.state, a.value from table1 a where a.value in (select min(value) from table1 b where a.country = b.country); ```
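The correlated subquery can be verified with SQLite from Python, using the sample data from the question (here with `=` rather than `IN`, since the subquery returns a single value; either works):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (country TEXT, state TEXT, value INT);
INSERT INTO table1 VALUES
  ('in','tn',20),('in','ka',18),('in','ap',30),
  ('us','la',12),('us','ca',20),('us','ny',5);
""")
# The subquery is re-evaluated per outer row, restricted to that row's country.
rows = conn.execute("""
    SELECT a.country, a.state, a.value
    FROM table1 a
    WHERE a.value = (SELECT MIN(value) FROM table1 b
                     WHERE b.country = a.country)
    ORDER BY a.country
""").fetchall()
print(rows)  # [('in', 'ka', 18), ('us', 'ny', 5)]
```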
You can also try: ``` SELECT b.* FROM mytable b JOIN (SELECT country,MIN(val) AS val FROM mytable GROUP BY country) AS a ON b.val=a.val AND a.country=b.country ```
SQL Query to fetch multiple least values
[ "", "mysql", "sql", "" ]
I have a table, Supplies, that lists the prices for items from different suppliers: ``` | ID | Item | Price | Supplier | ``` *ID* is the primary key (just an auto-generated integer). *Item* is the name of the product. *Price* is the price of the product. *Supplier* is a foreign key (integer). I want to list the cheapest price and supplier for each item. I'm relatively new to databases and so far I've got this: ``` SELECT Item, MIN(Price), Supplier FROM Supplies GROUP BY Item ``` This of course gives me the error that > *Supplier* is not in an aggregate function or group by clause. I have done some Google searching and read quite a few articles and answers on the subject but they all seem to be for much more complicated situations than mine and confuse me (as said, I don't have much experience with databases and SQL) or they just talk about the error and not how to get around it. Any help would be appreciated. Thanks in advance.
Depending on your RDBMS, you may be able to use `ROW_NUMBER()` to assign a ranking to each record and pick the ones that rank first. This is faster than using additional joins or correlated sub-queries, but isn't, for example, supported in MySQL at present. ``` WITH sorted_supplies AS ( SELECT supplies.*, ROW_NUMBER() OVER (PARTITION BY name ORDER BY price) AS price_ordinal FROM supplies ) SELECT * FROM sorted_supplies WHERE price_ordinal = 1 ; ``` Without support for `ROW_NUMBER()`, you're pretty much steered down the road of additional aggregations and joins... ``` SELECT supplies.* FROM supplies INNER JOIN ( SELECT name, MIN(price) AS min_price FROM supplies GROUP BY name ) AS min_prices ON min_prices.name = supplies.name AND min_prices.min_price = supplies.price ``` Do note that this query will return all suppliers with the same price if they're all tied for the lowest price. The first query can be forced to do that by using `RANK()` instead of `ROW_NUMBER()`.
You could rank the results, later selecting the lowest/highest ranking item (based on sort order). Assuming you're using SQL Server 2008 or higher: ``` SELECT Item, Price, Supplier FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY Item ORDER BY Price ASC) PriceRank , Item , Price , Supplier FROM Supplies ) supplies_ranked WHERE PriceRank = 1 ```
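The `ROW_NUMBER()` approach can be tried out with SQLite (3.25 or later, which supports window functions) via Python. The sample rows below are made up for illustration; only the query shape comes from the answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Supplies (ID INTEGER PRIMARY KEY, Item TEXT, Price REAL, Supplier INT);
INSERT INTO Supplies (Item, Price, Supplier) VALUES
  ('widget', 9.5, 1), ('widget', 7.0, 2),
  ('gadget', 3.0, 1), ('gadget', 4.5, 3);
""")
# Rank rows per Item by Price, then keep only the cheapest (rank 1).
rows = conn.execute("""
    SELECT Item, Price, Supplier
    FROM (SELECT Item, Price, Supplier,
                 ROW_NUMBER() OVER (PARTITION BY Item ORDER BY Price) AS PriceRank
          FROM Supplies) supplies_ranked
    WHERE PriceRank = 1
    ORDER BY Item
""").fetchall()
print(rows)  # [('gadget', 3.0, 1), ('widget', 7.0, 2)]
```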
Exclude Column From Group By
[ "", "sql", "group-by", "min", "" ]
In SQL Server, I have 4 fields in a table and I want to check whether more than one is >0 and, if so, set the value of a new field to 1. For example, ``` RowID Football Cricket Tennis Athletics 1 0 2 0 1 2 1 0 0 0 ``` Row one would evaluate to 1, row two wouldn't. How do I construct a case statement to evaluate the multiple fields? Thanks
Try the following (note the derived table needs an alias): ``` Select RowId, Case when NewField > 1 then 1 else 0 end as 'Status' from ( Select *, case when Football > 0 then 1 else 0 end + case when Cricket > 0 then 1 else 0 end + case when Tennis > 0 then 1 else 0 end + case when Athletics > 0 then 1 else 0 end as 'NewField' from TableName ) t ```
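The summed-`CASE` idea can be sanity-checked with SQLite via Python, using the two rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableName (RowID INT, Football INT, Cricket INT, Tennis INT, Athletics INT);
INSERT INTO TableName VALUES (1,0,2,0,1),(2,1,0,0,0);
""")
# Each CASE contributes 1 if that sport's count is positive;
# the outer CASE flags rows where more than one sport is > 0.
rows = conn.execute("""
    SELECT RowID,
           CASE WHEN NewField > 1 THEN 1 ELSE 0 END AS Status
    FROM (SELECT *,
                 CASE WHEN Football  > 0 THEN 1 ELSE 0 END +
                 CASE WHEN Cricket   > 0 THEN 1 ELSE 0 END +
                 CASE WHEN Tennis    > 0 THEN 1 ELSE 0 END +
                 CASE WHEN Athletics > 0 THEN 1 ELSE 0 END AS NewField
          FROM TableName) t
""").fetchall()
print(rows)  # [(1, 1), (2, 0)]
```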
Purely in the spirit of what's possible (so I'm not saying it's better or worse), you can also unpivot your results first to normalize the data. For example, this would display the sports and their count underneath each other: ``` select RowID, Sport, SportCount from YourTableName unpivot ( SportCount for Sport in (Football, Cricket, Tennis, Athletics )) up ``` From there it's relatively simple to get the number of sports per id: ``` select rowid, SUM(case when SportCount > 0 then 1 else 0 end) NumberOfSports from( select RowID, SportCount from YourTableName unpivot ( SportCount for Sport in (Football, Cricket, Tennis, Athletics )) up ) q group by rowid ``` Or to use it in the update query for number of sports > 1 (other approaches are possible, of course): ``` update YourTableName set SomeField = 1 where RowID in ( select rowid from( select RowID, SportCount from YourTableName unpivot ( SportCount for Sport in (Football, Cricket, Tennis, Athletics )) up ) q group by rowid having SUM(case when SportCount > 0 then 1 else 0 end) > 1) ``` The advantage of going through the trouble of unpivoting is that the logic of the criterion is in one place. If in the future you want, say, all rows with more than two sports, there's only one place where it has to be updated. Performance-wise I'd probably stick to using the `case Football + case Cricket + ..` approach.
SQL check if more than one field in range is >0
[ "", "sql", "sql-server", "" ]
I'm trying to start using SQL Trace and I am running into "Access Denied" errors when using the following: ``` exec @rc = sp_trace_create @TraceID output, 0, N'C:\Users\USER$\Desktop\SQLTrace', @maxfilesize, NULL if (@rc != 0) goto error ``` I have tried a UNC path and a local one, and I am using an admin account to log in to Management Studio. What component am I missing? I have tried saving locally and to a remote client.
It appears that, for whatever reason, I had to point the location here for it to work: `C:\Users\Public\Desktop`, regardless of which account I used for SSMS. It is definitely permission related, but as I am an admin for both Windows and SQL Server, I am not sure where the root cause lies.
It's not about what permissions your account has; it's whether or not the account used to start the SQL Server service has write permission to that folder. Please check that first. See the [sp\_trace\_create](https://msdn.microsoft.com/en-us/library/ms190362.aspx) doc: > For sp\_trace\_create, the SQL Server service account must have Write permission on the trace file folder. If the SQL Server service account is not an administrator on the computer where the trace file is located, you must explicitly grant Write permission to the SQL Server service account.
SQL Server: Trace won't create file
[ "", "sql", "sql-server", "trace", "" ]
In SQL I have an `EanTbl (EAN varchar(13) (PK), ProductID int)` with millions of EAN barcode numbers stored as strings. Many of them are in reality the same but written differently, i.e. with **one or more leading zeros**. One ProductID can be assigned to many EAN barcode numbers. *(historically grown; I would have created the EAN column as Long from the beginning)* From now on, when an EAN barcode is inserted, I want to avoid the creation of additional duplicates by checking if an already existing EAN matches the new one. Example EanTbl: ``` EAN | ProductID ========================== 123456789| 987 \ 0123456789| 987 - In reality the same EAN 00123456789| 987 / ``` Now, when a user input asks me to insert a new value of, for instance, `000123456789`, I want the most efficient query possible in T-SQL to discover that there are already matching EAN barcodes assigned to this ProductID. I already tried the different flavours of CONVERT(), but as you can imagine the performance is horrible. This check needs to happen several hundred times a minute on a table with 50+ million rows. Thanks in advance for the suggestions.
Thanks for the suggestions. In the long term I will reorganize the table as most of you suggested, but for the moment I found the fastest way to do what I want is something like this: ``` SELECT EanCode, ProductID FROM EanCodes WHERE EanCode = @ean OR EanCode = '0'+@ean OR EanCode = '00'+@ean OR EanCode = '000'+@ean OR EanCode = '0000'+@ean OR EanCode = '00000'+@ean OR EanCode = '000000'+@ean ``` with `@ean` being the string parameter, of course. It seemed counter-intuitive at first, but it is indeed the fastest way.
You should take the time to fix the table. One method is to change the EANs to the appropriate form. If you want a string of a fixed length, then do something like: ``` select distinct right(replicate('0', 13) + ean, 13) as ean, productid into #temptable from eantbl; truncate table eantbl; insert into eantbl(ean, productid) select ean, productid from #temptable; ``` That may not be feasible, if you need the improper EAN values for some reason. An alternative is to put a canonical form into the table. Something like: ``` alter table eantbl add CanonicalEAN char(13); update eantbl set CanonicalEAN = right(replicate('0', 13) + ean, 13); create index idx_eantbl_canonicalean on eantbl(CanonicalEAN); ``` Then you can do the comparison using the appropriate column and take advantage of an index.
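The canonicalization logic (left-pad with zeros to 13 characters) can be sketched in SQLite via Python; here `substr(x, -13)` plays the role of T-SQL's `right(x, 13)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE EanTbl (EAN TEXT PRIMARY KEY, ProductID INT);
INSERT INTO EanTbl VALUES
  ('123456789', 987), ('0123456789', 987), ('00123456789', 987);
""")
# Prepend 13 zeros, then keep the last 13 characters:
# all leading-zero variants collapse to one canonical spelling.
rows = conn.execute("""
    SELECT DISTINCT substr('0000000000000' || EAN, -13) AS CanonicalEAN, ProductID
    FROM EanTbl
""").fetchall()
print(rows)  # [('0000123456789', 987)]
```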
What is the fastest way of finding duplicate EAN numbers in a SQL table?
[ "", "sql", "sql-server", "performance", "" ]
I am trying to average the values of some specific rows but sum the volumes of all of them. I have 2 tables, one called `exchanges` and another one called `valid_country`. **exchanges** ``` +-----------+----------+--------------+----------+--------+ | id | ref | country | value | volume | +-----------+----------+--------------+----------+--------+ | 1 | 1029 | DE | 1 000 | 100 | +-----------+----------+--------------+----------+--------+ | 2 | 1029 | US | 2 000 | 250 | +-----------+----------+--------------+----------+--------+ | 3 | 1029 | FR | 3 500 | 300 | +-----------+----------+--------------+----------+--------+ | 4 | 1053 | UK | 1 200 | 110 | +-----------+----------+--------------+----------+--------+ | 5 | 1029 | RU | 900 | 70 | +-----------+----------+--------------+----------+--------+ ``` This table contains many references (ref) which have different countries, themselves with different values and volumes. **valid\_country** ``` +--------------+--------------+ | ref | country | +--------------+--------------+ | 1029 | US | +--------------+--------------+ | 1029 | RU | +--------------+--------------+ | 1053 | UK | +--------------+--------------+ ``` This table lists all the 'good' countries for which values can be averaged. ### What I would like as a result is: ``` +----------+------------+-------------+ | ref | AVG(value) | SUM(volume) | +----------+------------+-------------+ | 1029 | 1 450 | 720 | +----------+------------+-------------+ | 1053 | 1 200 | 110 | +----------+------------+-------------+ ``` Firstly, rows are grouped by `ref`. Ref 1029 should `AVERAGE` the values of only US and RU (because of table valid\_country) but `SUM` the volumes of all countries. Same for ref 1053, but since there's only one row it is easy. Here is a little [Fiddle](http://sqlfiddle.com/#!9/e16969/4). The SQL query there is wrong, since it averages over all countries and not only the good ones.
I think the comparison to `valid_country` needs to use both `ref` and `country`: ``` SELECT e.ref, AVG(CASE WHEN vc.country IS NOT NULL THEN e.value END) AS average, SUM(e.volume) AS volume FROM exchanges e LEFT JOIN valid_country vc ON vc.country = e.country AND vc.ref = e.ref GROUP BY e.ref; ``` This doesn't matter for your sample data but it might be important for the larger problem.
You can use a [`LEFT JOIN`](https://dev.mysql.com/doc/refman/5.0/en/join.html) and the [`CASE`](https://dev.mysql.com/doc/refman/5.0/en/case.html) expression to ignore some values in the `AVG` ([SQLFiddle](http://sqlfiddle.com/#!9/e16969/14)): ``` SELECT e.ref, AVG(CASE WHEN vc.country IS NOT NULL THEN e.value END) AS average, SUM(e.volume) AS volume FROM exchanges e LEFT JOIN valid_country vc ON ( vc.country = e.country ) GROUP BY e.ref ``` `CASE` returns `NULL` if not matched, and `AVG` ignores those values: ``` | ref | average | volume | |------|---------|--------| | 1029 | 1450 | 720 | | 1053 | 1200 | 110 | ```
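The LEFT JOIN plus `AVG(CASE ...)` behavior can be reproduced with SQLite via Python, using the tables from the question (this version also joins on `ref`, as the other answer suggests):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE exchanges (id INT, ref INT, country TEXT, value INT, volume INT);
INSERT INTO exchanges VALUES
  (1,1029,'DE',1000,100),(2,1029,'US',2000,250),(3,1029,'FR',3500,300),
  (4,1053,'UK',1200,110),(5,1029,'RU',900,70);
CREATE TABLE valid_country (ref INT, country TEXT);
INSERT INTO valid_country VALUES (1029,'US'),(1029,'RU'),(1053,'UK');
""")
# AVG skips the NULLs produced by the CASE for non-valid countries,
# while SUM still sees every row's volume.
rows = conn.execute("""
    SELECT e.ref,
           AVG(CASE WHEN vc.country IS NOT NULL THEN e.value END) AS average,
           SUM(e.volume) AS volume
    FROM exchanges e
    LEFT JOIN valid_country vc ON vc.country = e.country AND vc.ref = e.ref
    GROUP BY e.ref
    ORDER BY e.ref
""").fetchall()
print(rows)  # [(1029, 1450.0, 720), (1053, 1200.0, 110)]
```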
MySQL - SELECT AVG on some rows and SUM on all
[ "", "mysql", "sql", "group-by", "sum", "average", "" ]
I have the following table structure: `AuditUserMethods`: ``` +---------------+---------------+----------+ | ColumnName | DataType | Nullable | +---------------+---------------+----------+ | Id | INT | NOT NULL | | CreatedDate | DATETIME | NOT NULL | | ApiMethodName | NVARCHAR(MAX) | NOT NULL | | Request | NVARCHAR(MAX) | NOT NULL | | Result | NVARCHAR(MAX) | NOT NULL | | Method_Id | INT | NOT NULL | | User_Id | INT | NULL | +---------------+---------------+----------+ ``` `AuditUserMethodErrorCodes`: ``` +--------------------+----------+----------+ | ColumnName | DataType | Nullable | +--------------------+----------+----------+ | Id | INT | NOT NULL | | AuditUserMethod_Id | INT | NOT NULL | | ErrorCode | INT | NOT NULL | +--------------------+----------+----------+ ``` The `ID` is the PK in both tables. There is a one-to-many relationship: an `AuditUserMethod` can have many `AuditUserMethodErrorCodes`, hence the FK `AuditUserMethod_Id`. There are two nonclustered indexes, on `AuditUserMethod_Id` and on `CreatedDate` in the `AuditUserMethods` table. The purpose of the procedure is to return a paginated result set based on filters. The `@PageSize` determines how many rows to return and `@PageIndex` determines which page to return. All other variables are for filtering. Three result sets are returned. 1. Contains the `AuditUserMethods` detail 2. Contains the `AuditUserMethodErrorCodes` detail 3. Contains the total rows found (i.e. if the page size was 1000 and there were 5000 rows that matched all the criteria, this would return 5000).
Stored procedure: ``` CREATE PROCEDURE [api].[Audit_V1_GetAuditDetails] ( @Users XML = NULL, @Methods XML = NULL, @ErrorCodes XML = NULL, @FromDate DATETIME = NULL, @ToDate DATETIME = NULL, @PageSize INT = 5, @PageIndex INT = 0 ) AS BEGIN DECLARE @UserIds TABLE (Id INT) DECLARE @MethodNames TABLE (Name NVARCHAR(256)) DECLARE @ErrorCodeIds TABLE (Id INT) DECLARE @FilterUsers BIT = 0 DECLARE @FilterMethods BIT = 0 DECLARE @FilterErrorCodes BIT = 0 INSERT @UserIds SELECT x.y.value('.', 'int') FROM @Users.nodes('Ids/x/@i') AS x (y) INSERT @MethodNames SELECT x.y.value('.', 'NVARCHAR(256)') FROM @Methods.nodes('ArrayOfString/string') AS x (y) INSERT @ErrorCodeIds SELECT x.y.value('.', 'int') FROM @ErrorCodes.nodes('Ids/x/@i') AS x (y) IF EXISTS (SELECT TOP 1 0 FROM @UserIds) SET @FilterUsers = 1 IF EXISTS (SELECT TOP 1 0 FROM @MethodNames) SET @FilterMethods = 1 IF EXISTS (SELECT TOP 1 0 FROM @ErrorCodeIds) SET @FilterErrorCodes = 1 DECLARE @StartRow INT = @PageIndex * @Pagesize DECLARE @PageDataResults TABLE (Id INT, CreatedDate DATETIME, ApiMethodName NVARCHAR(256), Request NVARCHAR(MAX), Result NVARCHAR(MAX), MethodId INT, UserId INT, TotalRows INT); WITH PageData AS ( SELECT id AS id , createddate AS createddate , apimethodname AS apimethodname , request AS request , result AS result , method_id AS method_id , user_id AS user_id , ROW_NUMBER() OVER (ORDER BY createddate DESC, id DESC) AS row_number , COUNT(*) OVER() as TotalRows FROM dbo.AuditUserMethods AS aum WHERE (@FromDate IS NULL OR (@FromDate IS NOT NULL AND aum.createddate > @FromDate)) AND (@ToDate IS NULL OR (@ToDate IS NOT NULL AND aum.createddate < @ToDate)) AND (@FilterUsers = 0 OR (@FilterUsers = 1 AND aum.user_id IN (SELECT Id FROM @UserIds))) AND (@FilterMethods = 0 OR (@FilterMethods = 1 AND aum.ApiMethodName IN (SELECT Name FROM @MethodNames))) AND (@FilterErrorCodes = 0 OR (@FilterErrorCodes = 1 AND EXISTS (SELECT 1 FROM AuditUserMethodErrorCodes e WHERE e.AuditUserMethod_Id = aum.Id AND
e.ErrorCode IN (SELECT Id FROM @ErrorCodeIds) ) ) ) ) INSERT @PageDataResults SELECT TOP (@Pagesize) PageData.id AS id , PageData.createddate AS createddate , PageData.apimethodname AS apimethodname , PageData.request AS request , PageData.result AS result , PageData.method_id AS method_id , PageData.user_id AS user_id , PageData.TotalRows AS totalrows FROM PageData WHERE PageData.row_number > @StartRow ORDER BY PageData.createddate DESC SELECT Id, CreatedDate, ApiMethodName, Request, Result, MethodId, UserId FROM @PageDataResults SELECT aumec.AuditUserMethod_Id, aumec.ErrorCode FROM @PageDataResults ps INNER JOIN AuditUserMethodErrorCodes aumec ON ps.Id = aumec.AuditUserMethod_Id SELECT TOP 1 TotalRows AS NumberOfReturnedAuditEntries FROM @PageDataResults END ``` The `AuditUserMethods` table contains 500,000 rows and the `AuditUserMethodErrorCodes` contains 67,843 rows. I am executing the procedure with the following parameters: ``` EXEC [api].[Audit_V1_GetAuditDetails] @Users = N'<Ids><x i="1" /></Ids>' ,@Methods = NULL ,@ErrorCodes = N'<Ids />' ,@FromDate = '2015-02-15 07:18:59.613' ,@ToDate = '2015-07-02 08:18:59.613' ,@Pagesize = 5000 ,@PageIndex = 0 ``` The stored procedure takes just over 2 seconds to execute and return 5000 rows. I need this stored procedure to run much faster and I'm not sure how to improve it. According to the actual execution plan, it is the CTE that is taking up 99% of the cost relative to the batch. Within the CTE, it is the Sort that is taking up 95% of the cost: ![Actual Execution Plan](https://i.stack.imgur.com/BdoHX.jpg)
I'd start by declaring a couple of table parameter types. ``` CREATE TYPE [api].[IdSet] AS TABLE ( [Id] INT NOT NULL ); ``` and, ``` CREATE TYPE [api].[StringSet] AS TABLE ( [Value] NVARCHAR(256) NOT NULL ); ``` Then I'd change the signature of the stored procedure to use them. **Note** I'd also return the total count as an output parameter rather than as a separate result set. ``` CREATE PROCEDURE [api].[Audit_V2_GetAuditDetails] ( @userIds [api].[IdSet] READONLY, @methodNames [api].[StringSet] READONLY, @errorCodeIds [api].[IdSet] READONLY, @fromDate DATETIME = NULL, @toDate DATETIME = NULL, @pageSize INT = 5, @pageIndex INT = 0, @totalCount BIGINT OUTPUT ) ``` I know you may still need to do the XML extraction, but it will help the query planner if you do it outside the SP. Now, in the SP, I would not use the `@PageDataResults` table; I'd get just the ids for the page. I wouldn't use the CTE either; it is not helping in this scenario. I'd simplify the query and run it once to aggregate the total count, then, if that is greater than 0, run the same query again to return just the page of ids. The main body of the query will have been cached internally by the server.
Additionally, I'd do the paging with the `OFFSET` and `FETCH` extensions to [`ORDER BY`](https://msdn.microsoft.com/en-us/library/ms188385.aspx). There are a number of logical simplifications that I outline below, ``` CREATE PROCEDURE [api].[Audit_V2_GetAuditDetails] ( @userIds [api].[IdSet] READONLY, @methodNames [api].[StringSet] READONLY, @errorCodeIds [api].[IdSet] READONLY, @fromDate DATETIME = NULL, @toDate DATETIME = NULL, @pageSize INT = 5, @pageIndex INT = 0, @totalCount BIGINT OUTPUT ) AS DECLARE @offset INT = @pageSize * @pageIndex; DECLARE @filterUsers BIT = 0; DECLARE @filterMethods BIT = 0; DECLARE @filterErrorCodes BIT = 0; IF EXISTS (SELECT 0 FROM @userIds) SET @filterUsers = 1; IF EXISTS (SELECT 0 FROM @methodNames) SET @filterMethods = 1; IF EXISTS (SELECT 0 FROM @errorCodeIds) SET @filterErrorCodes = 1; SELECT @totalCount = COUNT_BIG(*) FROM [dbo].[AuditUserMethods] [aum] LEFT JOIN @userIds [U] ON [U].[Id] = [aum].[user_id] LEFT JOIN @methodNames [M] ON [M].[Value] = [aum].[ApiMethodName] WHERE ( @fromDate IS NULL OR [aum].[createddate] > @fromDate ) AND ( @toDate IS NULL OR [aum].[createddate] < @toDate ) AND ( @filterUsers = 0 OR [U].[Id] IS NOT NULL ) AND ( @filterMethods = 0 OR [M].[Value] IS NOT NULL ) AND ( @filterErrorCodes = 0 OR EXISTS( SELECT 1 FROM [dbo].[AuditUserMethodErrorCodes] [e] JOIN @errorCodeIds [ec] ON [ec].[Id] = [e].[ErrorCode] WHERE [e].[AuditUserMethod_Id] = [aum].[Id]) ); DECLARE @pageIds [api].[IdSet]; IF @totalCount > 0 INSERT @pageIds SELECT [aum].[id] FROM [dbo].[AuditUserMethods] [aum] LEFT JOIN @userIds [U] ON [U].[Id] = [aum].[user_id] LEFT JOIN @methodNames [M] ON [M].[Value] = [aum].[ApiMethodName] WHERE ( @fromDate IS NULL OR [aum].[createddate] > @fromDate ) AND ( @toDate IS NULL OR [aum].[createddate] < @toDate ) AND ( @filterUsers = 0 OR [U].[Id] IS NOT NULL ) AND ( @filterMethods = 0 OR [M].[Value] IS NOT NULL ) AND ( @filterErrorCodes = 0 OR EXISTS( SELECT 1 FROM [dbo].[AuditUserMethodErrorCodes] [e] JOIN @errorCodeIds [ec] ON [ec].[Id] = [e].[ErrorCode] WHERE [e].[AuditUserMethod_Id] = [aum].[Id]) ) ORDER BY [aum].[createddate] DESC, [aum].[id] DESC OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY; SELECT [aum].[Id], [aum].[CreatedDate], [aum].[ApiMethodName], [aum].[Request], [aum].[Result], [aum].[Method_Id], [aum].[User_Id] FROM [dbo].[AuditUserMethods] [aum] JOIN @pageIds [i] ON [i].[Id] = [aum].[id] ORDER BY [aum].[createddate] DESC, [aum].[id] DESC; SELECT [aumec].[AuditUserMethod_Id], [aumec].[ErrorCode] FROM [dbo].[AuditUserMethodErrorCodes] [aumec] JOIN @pageIds [i] ON [i].[Id] = [aumec].[AuditUserMethod_Id]; /* The total count is an output parameter */ RETURN 0; ``` If this doesn't improve things enough, you'll need to look at the query plan and consider what indices would be optimal. ***Caveat*** All the code is written off the cuff, so, while the ideas are right, the syntax may not be perfect.
``` (@FromDate IS NULL OR (@FromDate IS NOT NULL AND aum.createddate > @FromDate)) ``` is the same as ``` (@FromDate IS NULL OR aum.createddate > @FromDate) ``` Try something like this (note the `SET` on the date defaults and the aliases on the table variables, which are required to reference their columns): ``` CREATE PROCEDURE [api].[Audit_V1_GetAuditDetails] ( @Users XML = NULL, @Methods XML = NULL, @ErrorCodes XML = NULL, @FromDate DATETIME = NULL, @ToDate DATETIME = NULL, @PageSize INT = 5, @PageIndex INT = 0 ) AS BEGIN DECLARE @UserIds TABLE (Id INT) DECLARE @MethodNames TABLE (Name NVARCHAR(256)) DECLARE @ErrorCodeIds TABLE (Id INT) INSERT @UserIds SELECT x.y.value('.', 'int') FROM @Users.nodes('Ids/x/@i') AS x (y) INSERT @MethodNames SELECT x.y.value('.', 'NVARCHAR(256)') FROM @Methods.nodes('ArrayOfString/string') AS x (y) INSERT @ErrorCodeIds SELECT x.y.value('.', 'int') FROM @ErrorCodes.nodes('Ids/x/@i') AS x (y) IF NOT EXISTS (SELECT TOP 1 0 FROM @UserIds) INSERT INTO @UserIds values (-1) IF NOT EXISTS (SELECT TOP 1 0 FROM @MethodNames) INSERT INTO @MethodNames values ('empty') IF NOT EXISTS (SELECT TOP 1 0 FROM @ErrorCodeIds) INSERT INTO @ErrorCodeIds values (-1) IF @FromDate IS NULL SET @FromDate = '1/1/1900' IF @ToDate IS NULL SET @ToDate = '1/1/2079' DECLARE @StartRow INT = @PageIndex * @Pagesize DECLARE @PageDataResults TABLE (Id INT, CreatedDate DATETIME, ApiMethodName NVARCHAR(256), Request NVARCHAR(MAX), Result NVARCHAR(MAX), MethodId INT, UserId INT, TotalRows INT); WITH PageData AS ( SELECT id AS id , createddate AS createddate , apimethodname AS apimethodname , request AS request , result AS result , method_id AS method_id , user_id AS user_id , ROW_NUMBER() OVER (ORDER BY createddate DESC, id DESC) AS row_number , COUNT(*) OVER() as TotalRows FROM dbo.AuditUserMethods AS aum JOIN @UserIds u ON (aum.user_id = u.Id OR u.Id = -1) AND aum.createddate > @FromDate AND aum.createddate < @ToDate JOIN @MethodNames m ON aum.ApiMethodName = m.Name OR m.Name = 'empty' JOIN AuditUserMethodErrorCodes e on e.AuditUserMethod_Id = aum.Id JOIN @ErrorCodeIds ec ON e.ErrorCode = ec.Id OR ec.Id = -1 ) ```
SQL Server 2014: slow stored procedure execution time
[ "", "sql", "t-sql", "stored-procedures", "sql-server-2014", "sqlperformance", "" ]
Here is an example table with data (the rn column is ROW\_NUMBER() for each UELN). ``` UELN OwnerID Date rn 191001180010389 017581 1989-06-30 00:00:00.000 1 191001180010389 017747 2011-06-02 00:00:00.000 2 191001180010389 017992 2014-03-25 00:00:00.000 3 191001180010389 117030 2015-02-03 00:00:00.000 4 191001250009303 018148 2004-06-30 00:00:00.000 1 191001250009303 018418 2013-10-16 00:00:00.000 2 ``` I need to combine those rows to get a result set like this: ``` UELN OwnerID DateFrom DateTo 191001180010389 017581 1989-06-30 00:00:00.000 2011-06-02 00:00:00.000 191001180010389 017747 2011-06-02 00:00:00.000 2014-03-25 00:00:00.000 191001180010389 017992 2014-03-25 00:00:00.000 2015-02-03 00:00:00.000 191001180010389 117030 2015-02-03 00:00:00.000 NULL 191001250009303 018148 2004-06-30 00:00:00.000 2013-10-16 00:00:00.000 191001250009303 018418 2013-10-16 00:00:00.000 NULL ``` NULL in the DateTo column means that the row is still valid. Can anyone help me with the query?
``` select u1.*, u2.date as [date to] from tabl u1 left join tabl u2 on u1.UELN = u2.UELN and u2.rn = u1.rn + 1 ``` You just need a left self-join. The LEFT JOIN is what produces the NULL date when there is no matching next row.
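The `rn + 1` self-join can be checked with SQLite via Python, using the rows from the question (dates shortened to the date part for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE owners (UELN TEXT, OwnerID TEXT, Date TEXT, rn INT);
INSERT INTO owners VALUES
 ('191001180010389','017581','1989-06-30',1),
 ('191001180010389','017747','2011-06-02',2),
 ('191001180010389','017992','2014-03-25',3),
 ('191001180010389','117030','2015-02-03',4),
 ('191001250009303','018148','2004-06-30',1),
 ('191001250009303','018418','2013-10-16',2);
""")
# Each row pairs with the next row (rn + 1) of the same UELN;
# the last row per UELN finds no match, so DateTo is NULL.
rows = conn.execute("""
    SELECT u1.UELN, u1.OwnerID, u1.Date AS DateFrom, u2.Date AS DateTo
    FROM owners u1
    LEFT JOIN owners u2 ON u1.UELN = u2.UELN AND u2.rn = u1.rn + 1
    ORDER BY u1.UELN, u1.rn
""").fetchall()
print(rows[3])  # ('191001180010389', '117030', '2015-02-03', None)
```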
Using `OUTER APPLY`: ``` SELECT t.UELN, t.OwnerID, DateFrom = t.[Date], DateTo = x.DateTo FROM tbl t OUTER APPLY( SELECT DateTo = [Date] FROM tbl WHERE UELN = t.UELN AND rn = t.rn + 1 )x ```
T-SQL combine periods
[ "", "sql", "sql-server", "database", "t-sql", "" ]
In an eCommerce app, **Scenario 1** Assume the server is in NY, USA and the client is in Tokyo, Japan. The client makes an order and it needs to be delivered in 10 days. In this scenario there are two time zones (NY, USA and Tokyo, Japan) and there is the 10-day promise. 1. When the client makes the order, how many time zone details are entered into the database? 2. I was told by a colleague that I have to consider UTC, but how does that fit into this? 3. When I calculate 10 days, based on whose time zone do the 10 days need to be calculated? 4. Could anyone give me a GOOD link that shows how this is handled?
To get any reasonably sane implementation, you should store the dates as UTC. That is a linear time that is independent of time zones and daylight savings time. When you read the time from the database and display it to the user, you should convert it to their local time zone. In .NET you can use a `TimeZoneInfo` object to convert a date to a specific time zone. > When client makes the order how many time zone details are entered to > database? You only need to store the UTC time in the database. That is an exact point in time, that you can later convert to any local time. > When I calculate 10 days, based on whose time zone the 10 days needs > to be calculated? That depends on how those 10 days are defined. You can just add 10 days to the UTC time, and you get a point in time that is exactly 240 hours later. That means that you may in practice have only nine days to make the delivery, depending on when the order is placed, and what times of day you can make a delivery. Those ten days could also be defined as calendar days, then the entire tenth day would be included, e.g. if a user places an order at 2015-07-02 03:26, it should arrive before 2015-07-12 23:59 in his time zone.
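A minimal Python sketch of that store-in-UTC, convert-for-display rule (the order time here is made up; `zoneinfo` plays the same role as .NET's `TimeZoneInfo`):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Store the order time in UTC; convert only when displaying.
placed_utc = datetime(2015, 7, 2, 3, 26, tzinfo=timezone.utc)

# "10 days" interpreted as exactly 240 hours, computed in UTC:
deadline_utc = placed_utc + timedelta(days=10)

# Display in the client's local time zone (Tokyo, UTC+9):
placed_tokyo = placed_utc.astimezone(ZoneInfo("Asia/Tokyo"))

print(placed_tokyo.isoformat())  # 2015-07-02T12:26:00+09:00
print(deadline_utc.isoformat())  # 2015-07-12T03:26:00+00:00
```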
You can store the time in UTC or in a common format (your server's time zone). When you display the time, it will be converted to Japan time (as in your scenario). Your backend has the time in your server's time zone, so you can use it for the calculation. To add 10 days: convert the time to server time and add 10 days. Do all calculations this way and it will work. I have used this same method previously in a codebase and it works fine.
How to manage orders and time-zone-related issues?
[ "", "sql", "asp.net", "entity-framework", "asp.net-mvc-4", "sql-server-2008-r2", "" ]
How can I write a query to give the results of three tables such that there's only one result per "line"? The tables are: ``` T1 (ID, name, IP) T2 (ID, date_joined) T3 (ID, address, date_modified) ``` The relations are: `T1-T2 1:1`, `T1-T3 1:M` - there can be many address rows per ID in T3. What I want is a listing of all users with the fields above, but *IF* they have an address, I only want to record ONE (a bonus would be if it is the latest one based on T3.date\_modified). So I should end up with exactly the number of records in T1 (which happens to equal T2 in this case) and no more. I tried: ``` select t.ID, t.name, t.IP, tt.ID, tt.date_joined, ttt.ID, ttt.address from T1 t JOIN T2 tt ON (t.ID = tt.ID) JOIN T3 ttt ON (t.ID = ttt.ID) ``` And every sensible combination of LEFT, RIGHT, INNER, etc. joins I could think of! I keep getting duplicates because of T3.
This query should work: ``` select t1.ID, t1.name, t1.IP, t2.date_joined, t3x.address from t1 join t2 on t1.ID = t2.id left join ( select t3.* from t3 join ( select id, max(date_modified) max_date from t3 group by id ) max_t3 on t3.id = max_t3.id and t3.date_modified = max_t3.max_date ) t3x on t1.ID = t3x.id ``` First you do the normal join between t1 and t2 and then you left join with a derived table (t3x) that is the set of t3 rows having the latest date.
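The derived-table approach can be verified with SQLite via Python. The sample rows below are invented for illustration: one user with two addresses (only the latest should survive) and one user with none.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (ID INT, name TEXT, IP TEXT);
CREATE TABLE t2 (ID INT, date_joined TEXT);
CREATE TABLE t3 (ID INT, address TEXT, date_modified TEXT);
INSERT INTO t1 VALUES (1,'alice','10.0.0.1'),(2,'bob','10.0.0.2');
INSERT INTO t2 VALUES (1,'2014-01-01'),(2,'2014-02-01');
INSERT INTO t3 VALUES (1,'old street','2014-01-05'),(1,'new street','2015-03-01');
""")
# The derived table keeps only each ID's latest t3 row,
# so the outer LEFT JOIN yields at most one address per user.
rows = conn.execute("""
    SELECT t1.ID, t1.name, t2.date_joined, t3x.address
    FROM t1
    JOIN t2 ON t1.ID = t2.ID
    LEFT JOIN (SELECT t3.* FROM t3
               JOIN (SELECT ID, MAX(date_modified) AS max_date
                     FROM t3 GROUP BY ID) max_t3
                 ON t3.ID = max_t3.ID AND t3.date_modified = max_t3.max_date) t3x
      ON t1.ID = t3x.ID
    ORDER BY t1.ID
""").fetchall()
print(rows)  # [(1, 'alice', '2014-01-01', 'new street'), (2, 'bob', '2014-02-01', None)]
```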
So T2 is actually not relevant here. You just need a way to join from T1 to T3 in a way that gets you at most one T3 row per T1 row. One way of doing this would be: ``` select T1.*, (select address from T3 where T3.ID=T1.ID order by date_modified desc limit 1) from T1; ``` This won't likely be very efficient, being a correlated subquery, but you may not care depending on the size of your dataset. It's also only good for getting one column from T3, so if you had Address, City, and State, you'd have to figure out something else.
Joining three tables such that extra matches are discarded?
[ "", "mysql", "sql", "join", "" ]
This is admittedly atrocious code. I'm looking to not only make it work, but make it work well. I would like to get the `max date` for each `Claim Adjustment Type Code`. Current code:

```
SELECT a.[Current ClaimID], a.[Claim Adjustment Type Code], a.[Claim Effective Date]
FROM #tmp_hic_dupes_list_final_not10 a
join (select a.[Current ClaimID], a.[Claim Adjustment Type Code], a.[Claim Effective Date]
      from #tmp_hic_dupes_list_final_not10 as a
      where a.[Claim Adjustment Type Code] in
            (SELECT a.[Claim Adjustment Type Code]
             FROM #tmp_hic_dupes_list_final_not10 b
             where a.[Current HIC #] = b.[Current HIC #]
               and a.[Claim Type Code] = b.[Claim Type Code]
               and a.[Provider Oscar #] = b.[Provider Oscar #]
               and a.[Claim From Date] = b.[Claim From Date]
               and a.[Claim Thru Date] = b.[Claim Thru Date]
               and a.[Claim Adjustment Type Code] = b.[Claim Adjustment Type Code]
             HAVING COUNT(*) > 1)) b
  on a.[Current ClaimID] = b.[Current ClaimID]
--WHERE a.[Claim Effective Date] < b.[Claim Effective Date]
group by a.[Current ClaimID], a.[Claim Adjustment Type Code], a.[Claim Effective Date]
having a.[Claim Effective Date] = max(a.[Claim Effective Date])
```

Current results:

```
[Current ClaimID]  [Claim Adjustment Type Code]  [Claim Effective Date]
37274993770        1                             2014-02-07 00:00:00.000
37274993771        2                             2014-02-07 00:00:00.000
37509451954        1                             2014-02-21 00:00:00.000
37509451955        2                             2014-02-21 00:00:00.000
38168035124        1                             2014-04-04 00:00:00.000
38168035125        2                             2014-04-04 00:00:00.000
```

Expected results:

```
[Current ClaimID]  [Claim Adjustment Type Code]  [Claim Effective Date]
38168035124        1                             2014-04-04 00:00:00.000
38168035125        2                             2014-04-04 00:00:00.000
```
You could add a RANK() function to the existing code, RANK() each group of type codes by descending date, and then in outer query, pick off just the records with rank = 1 ``` SELECT [Current ClaimID], [Claim Adjustment Type Code], [Claim Effective Date] FROM ( SELECT a.[Current ClaimID], a.[Claim Adjustment Type Code], a.[Claim Effective Date], RANK() OVER (PARTITION BY a.[Claim Adjustment Type Code] ORDER BY a.[Claim Effective Date] DESC) AS RankClaimTypeByDate FROM #tmp_hic_dupes_list_final_not10 a join (select a.[Current ClaimID], a.[Claim Adjustment Type Code], a.[Claim Effective Date] from #tmp_hic_dupes_list_final_not10 as a where a.[Claim Adjustment Type Code] in (SELECT a.[Claim Adjustment Type Code] FROM #tmp_hic_dupes_list_final_not10 b where a.[Current HIC #] = b.[Current HIC #] and a.[Claim Type Code] = b.[Claim Type Code] and a.[Provider Oscar #] = b.[Provider Oscar #] and a.[Claim From Date] = b.[Claim From Date] and a.[Claim Thru Date] = b.[Claim Thru Date] and a.[Claim Adjustment Type Code] = b.[Claim Adjustment Type Code] HAVING COUNT(*) > 1)) b on a.[Current ClaimID] = b.[Current ClaimID] --WHERE a.[Claim Effective Date] < b.[Claim Effective Date] group by a.[Current ClaimID], a.[Claim Adjustment Type Code], a.[Claim Effective Date] having a.[Claim Effective Date] = max(a.[Claim Effective Date]) ) d WHERE RankClaimTypeByDate = 1 -- Select the 1st ranked record within each [Claim Adjustment Type Code] ```
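The rank-then-filter pattern is easy to verify on a cut-down version of the data. A Python/SQLite sketch (window functions need SQLite 3.25 or newer; the table is simplified to the three columns shown in the results):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
CREATE TABLE claims (claim_id INTEGER, type_code INTEGER, eff_date TEXT);
INSERT INTO claims VALUES
  (37274993770, 1, '2014-02-07'), (37274993771, 2, '2014-02-07'),
  (37509451954, 1, '2014-02-21'), (37509451955, 2, '2014-02-21'),
  (38168035124, 1, '2014-04-04'), (38168035125, 2, '2014-04-04');
""")

rows = conn.execute("""
SELECT claim_id, type_code, eff_date FROM (
    SELECT claim_id, type_code, eff_date,
           RANK() OVER (PARTITION BY type_code
                        ORDER BY eff_date DESC) AS rk
    FROM claims
) ranked
WHERE rk = 1
ORDER BY claim_id
""").fetchall()
print(rows)  # the latest effective date within each type code
```

Only the rank-1 row of each partition survives, which is exactly the expected two-row result.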
A possibility is to join on a subset of the same data you're selecting from that grabs just the `MAX([Claim Effective Date])` for each `[Claim Adjustment Type Code]`. This is more of an additional step on the results of the existing query rather than being a part of it, though (in case that's an option for you): ``` SELECT a.[Current ClaimID] ,a.[Claim Adjustment Type Code] ,a.[Claim Effective Date] FROM #tmp_hic_dupes_list_final_not10 a INNER JOIN (SELECT [ClaimAdjustment Type Code], MAX([Claim Effective Date]) AS MostRecentEffectiveDate FROM #tmp_hic_dupes_list_final_not10 GROUP BY [ClaimAdjustment Type Code]) AS XYZ ON a.[Claim Effective Date] = XYZ.MostRecentEffectiveDate ORDER BY a.[Current ClaimID], a.[ClaimAdjustment Type Code] ```
Find max Date Value for a subqueried list
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a query as follows

```
Select comp.*,emp.emp_name, emp.emp_dept from Company_tbl comp, Employee emp
where comp.comp_id=emp.comp_id
and comp_id = 1234;
```

This produces the following result

```
Comp_ID  Comp_Name  emp_name  Emp_dept
1234     Comp1234   ABCD      Admin
1234     Comp1234   EFGH      HR
1234     Comp1234   IJKL      Admin
1234     Comp1234   MNOP      Admin
```

From this result, I can get all the departments in the EMP\_dept column of a particular company (1234). Now I want all those companies in which there is no HR department. I tried using group by comp\_id and emp\_dept not in 'HR' but it didn't work.
Add this: ``` AND comp.comp_id NOT IN ( SELECT comp_id FROM Employee WHERE emp_dept = 'HR' ) ``` The new select gets you all comp\_ids of your employees that are in a department called "HR". The "NOT IN" excludes these comp\_ids from the result.
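A quick way to convince yourself the `NOT IN` subquery behaves as intended is to run it on toy data. A Python/SQLite sketch (the second company and its employees are invented for contrast):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company_tbl (comp_id INTEGER, comp_name TEXT);
CREATE TABLE employee (comp_id INTEGER, emp_name TEXT, emp_dept TEXT);
INSERT INTO company_tbl VALUES (1234, 'Comp1234'), (5678, 'Comp5678');
INSERT INTO employee VALUES
  (1234, 'ABCD', 'Admin'), (1234, 'EFGH', 'HR'),
  (5678, 'WXYZ', 'Admin'), (5678, 'QRST', 'Sales');
""")

rows = conn.execute("""
SELECT c.comp_id, c.comp_name, e.emp_name, e.emp_dept
FROM company_tbl c
JOIN employee e ON c.comp_id = e.comp_id
WHERE c.comp_id NOT IN (SELECT comp_id FROM employee
                        WHERE emp_dept = 'HR')
""").fetchall()
print(rows)  # only rows for companies that have no HR department
```

Company 1234 disappears entirely because one of its employees is in HR; company 5678 keeps all its rows.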
One of solutions is to use `NOT EXISTS`: [SQLFiddle demo](http://sqlfiddle.com/#!4/12bce/1) ``` select c.*, e.emp_name, e.emp_dept from company_tbl c join employee e on e.comp_id = c.comp_id where not exists ( select 1 from employee where comp_id = c.comp_id and emp_dept='HR' ) ```
Using group by to get 'Not In' data not working
[ "", "sql", "oracle", "group-by", "" ]
I am using SQL Server 2012. I have the following sample data

```
Date        Type    Symbol      Price
6/30/1995   gaus    313586U72   109.25
6/30/1995   gbus    313586U72   108.94
6/30/1995   csus    NES         34.5
6/30/1995   lcus    NES         34.5
6/30/1995   lcus    NYN         40.25
6/30/1995   uaus    NYN         40.25
6/30/1995   agus    SRR         10.25
6/30/1995   lcus    SRR         0.45
7/1/1995    gaus    313586U72   109.25
7/1/1995    gbus    313586U72   108.94
```

I want to filter out rows where symbol and price match. It's ok if type doesn't match. Thus with the above data I would expect to only see

```
Date        Type    Symbol      Price
6/30/1995   gaus    313586U72   109.25
6/30/1995   gbus    313586U72   108.94
6/30/1995   agus    SRR         10.25
6/30/1995   lcus    SRR         0.45
7/1/1995    gaus    313586U72   109.25
7/1/1995    gbus    313586U72   108.94
```

NES and NYN have been filtered out because their symbol and price match. I was thinking of using Partition and row number, but I am not sure how to pair and filter rows using that or another function.

**UPDATE:** I will be testing the replies. I should have mentioned I just want to see duplicates for symbol and price that occur on the same date. Also the table is called duppri.
One way is to use the `exists` predicate with a correlated subquery that checks whether the specific symbol has more than one price:

```
select * from table1 t
where exists (
  select 1 from table1
  where symbol = t.symbol
    and price <> t.price);
```

[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/d6d40e/4)

This would return:

```
| Date                   | Type |    Symbol |  Price |
|------------------------|------|-----------|--------|
| June, 30 1995 02:00:00 | gaus | 313586U72 | 109.25 |
| June, 30 1995 02:00:00 | gbus | 313586U72 | 108.94 |
| June, 30 1995 02:00:00 | agus | SRR       |  10.25 |
| June, 30 1995 02:00:00 | lcus | SRR       |   0.45 |
| July, 01 1995 02:00:00 | gaus | 313586U72 | 109.25 |
| July, 01 1995 02:00:00 | gbus | 313586U72 | 108.94 |
```

Edit: inspired by Gordon Linoff's clever answer, another option could be to use `avg()` as a windowed function:

```
select Date, Type, Symbol, Price from (
  select Date, Type, Symbol, Price,
         avg = avg(price) over (partition by symbol)
  from table1) a
where avg <> price;
```

Edit: with a check to ensure only duplicates on the same date are returned: <http://www.sqlfiddle.com/#!6/29d67/1>
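The `EXISTS` variant, including the same-date check the asker added in their update, can be exercised directly against the question's sample rows. A Python/SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (d TEXT, type TEXT, symbol TEXT, price REAL);
INSERT INTO prices VALUES
  ('1995-06-30','gaus','313586U72',109.25),
  ('1995-06-30','gbus','313586U72',108.94),
  ('1995-06-30','csus','NES',34.5),  ('1995-06-30','lcus','NES',34.5),
  ('1995-06-30','lcus','NYN',40.25), ('1995-06-30','uaus','NYN',40.25),
  ('1995-06-30','agus','SRR',10.25), ('1995-06-30','lcus','SRR',0.45),
  ('1995-07-01','gaus','313586U72',109.25),
  ('1995-07-01','gbus','313586U72',108.94);
""")

rows = conn.execute("""
SELECT * FROM prices t
WHERE EXISTS (SELECT 1 FROM prices
              WHERE d = t.d AND symbol = t.symbol
                AND price <> t.price)
""").fetchall()
print(rows)  # NES and NYN drop out: their two prices match exactly
```

Six rows survive: the pairs whose prices disagree on the same date.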
I would approach this using window functions: ``` select s.* from (select s.*, min(price) over (partition by symbol) as minprice, max(price) over (partition by symbol) as maxprice from sample s ) s where minprice <> maxprice; ```
SQL to check when pairs don't match
[ "", "sql", "sql-server", "sql-server-2012", "data-partitioning", "" ]
I have a table Oracle SQL Developer, there was a mistake somewhere in some code, and two values got flipped when the records were created. So, what I need is something to flip all the 5's and 6's. ``` ID Name Type 0 Joe 5 1 Chris 6 2 Jane 5 3 Tyler 6 ``` Needs to be ``` ID Name Type 0 Joe 6 1 Chris 5 2 Jane 6 3 Tyler 5 ```
``` update table set Type = 11 - Type where Type in (5,6) ```
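The `11 - Type` trick works because 5 and 6 sum to 11, so subtracting each from 11 maps it to the other. A runnable Python/SQLite sketch on the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, type INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(0, 'Joe', 5), (1, 'Chris', 6),
                  (2, 'Jane', 5), (3, 'Tyler', 6)])

# 11 - 5 = 6 and 11 - 6 = 5, so one expression flips both values
# in a single pass -- no temporary value or double-update needed.
conn.execute("UPDATE people SET type = 11 - type WHERE type IN (5, 6)")

rows = conn.execute("SELECT id, name, type FROM people ORDER BY id").fetchall()
print(rows)
```

A single-statement swap avoids the classic two-step pitfall where the first UPDATE turns all 5s into 6s and the second then flips everything back.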
This is the more general approach. I really love tricks like `Type = 11 - Type` but sadly we work in a world where a lot of our coworkers wouldn't understand that if it's not just a one-off update.

```
update table
set Type = case when Type = 5 then 6
                when Type = 6 then 5
           end
where Type in (5,6)
```
Oracle SQL update swap numbers
[ "", "sql", "oracle", "sql-update", "oracle-sqldeveloper", "" ]
I have one table which I select with the code: **Code A** ``` Select TDS, TL, IK From (Select Sheet1.TOOLING_DATA_SHEET As TDS, Sheet1.CUTTING_TOOL As TL, ENT_ITEM_MASTER.ITEM_KEY As IK From Sheet1 Inner Join ENT_ITEM_MASTER On ENT_ITEM_MASTER.ITEM_CODE=Sheet1.CUTTING_TOOL And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As A ``` **Output:** ``` TDS TL IK TDS-1980D-10+OP10+S7 TL-000032 1 TDS-1980D-10+OP10+S7 TL-000019 34 TDS-2258-01+OP10+S4 TL-000016 53 TDS-2325PU+OP10+S1 TL-000036 7 TDS-1234-56-78 TL-000123 45 ``` and another table which I select with the code: **Code B** ``` Select ENT_LINK_OBJECTS.OBJ_NAME, ENT_ITEM_MASTER.ITEM_CODE, ENT_ITEM_MASTER.ITEM_KEY From ENT_LINK_OBJECTS Inner Join ENT_ITEM_MASTER On ENT_ITEM_MASTER.ITEM_KEY=ENT_LINK_OBJECTS.ENTITY_KEY And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As B ``` **Output:** ``` OBJ_NAME ITEM_CODE ITEM_KEY TDS-1980D-10+OP10+S7 TL-000032 1 TDS-1980D-10+OP10+S7 TL-000019 34 TDS-2258-01+OP10+S4 TL-000032 28 TDS-2258-01+OP10+S4 TL-000016 53 TDS-2325PU+OP10+S1 TL-000036 7 TDS-2325PU+OP10+S1 TL-000009 9 ``` I have Left Joined the tables in working code which gives me everything that is in Table A that is not in Table B. I now am trying to Right Join the tables which would give me everything that is in Table B that is not in Table A. Right now the output is nothing. 
--- Full code for **Right Join:** ``` Select TDS, TL, IK From (Select Sheet1.TOOLING_DATA_SHEET, Sheet1.CUTTING_TOOL, ENT_ITEM_MASTER.ITEM_KEY From Sheet1 Inner Join ENT_ITEM_MASTER On ENT_ITEM_MASTER.ITEM_CODE=Sheet1.CUTTING_TOOL And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As A Right Join (Select ENT_LINK_OBJECTS.OBJ_NAME As TDS, ENT_ITEM_MASTER.ITEM_CODE As TL, ENT_ITEM_MASTER.ITEM_KEY As IK From ENT_LINK_OBJECTS Inner Join ENT_ITEM_MASTER On ENT_ITEM_MASTER.ITEM_KEY=ENT_LINK_OBJECTS.ENTITY_KEY And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As B On A.TOOLING_DATA_SHEET=B.TDS Where A.TOOLING_DATA_SHEET is Null ``` Current output: ``` TDS TL IK ``` Desired output: (everything in B that is not in A) ``` TDS TL IK TDS-2258-01+OP10+S4 TL-000032 28 TDS-2325PU+OP10+S1 TL-000009 9 ``` I can add my working code for my Left Join if that is helpful. I have also tried switching the Select areas, doing a Left Join, and changing the Null statement to attempt a Left Join, which provides no output as well. **EDIT:** How my code works with a Left Join: ``` Select TDS, TL, IK From (Select Sheet1.TOOLING_DATA_SHEET As TDS, Sheet1.CUTTING_TOOL As TL, ENT_ITEM_MASTER.ITEM_KEY As IK From Sheet1 Inner Join ENT_ITEM_MASTER On ENT_ITEM_MASTER.ITEM_CODE=Sheet1.CUTTING_TOOL And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As A Left Join (Select ENT_LINK_OBJECTS.OBJ_NAME, ENT_ITEM_MASTER.ITEM_CODE, ENT_ITEM_MASTER.ITEM_KEY From ENT_LINK_OBJECTS Inner Join ENT_ITEM_MASTER On ENT_ITEM_MASTER.ITEM_KEY=ENT_LINK_OBJECTS.ENTITY_KEY And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As B On A.TDS=B.OBJ_NAME Where B.OBJ_NAME is Null ``` **OUTPUT:** (everything in A that is not in B) ``` TDS TL IK TDS-1234-56-78 TL-000123 45 ```
You have two problems:

1. Since you want all records from B that are not in A, you need to display table B's columns.
2. Your join criteria don't identify unique records. 28 and 9 are being removed because TDS-2258-01+OP10+S4 and TDS-2325PU+OP10+S1 do exist in table A. The issue is that TDS-2258-01+OP10+S4 TL-000032 doesn't exist in A, nor does TDS-2325PU+OP10+S1 TL-000009.

The criteria you're using to JOIN on are incorrect. To know the CORRECT values ***you*** need to specify the relationship between the tables, or simply (based on the displayed data) use `On A.TDS=B.TDS and A.TL = B.TL and A.IK = B.IK`. Meaning the final result would be:

```
Select B.TDS, B.TL, B.IK
From (Select Sheet1.TOOLING_DATA_SHEET As TDS,
Sheet1.CUTTING_TOOL As TL,
ENT_ITEM_MASTER.ITEM_KEY As IK
From Sheet1
Inner Join ENT_ITEM_MASTER
On ENT_ITEM_MASTER.ITEM_CODE=Sheet1.CUTTING_TOOL
And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As A
Right Join (Select ENT_LINK_OBJECTS.OBJ_NAME As TDS,
ENT_ITEM_MASTER.ITEM_CODE As TL,
ENT_ITEM_MASTER.ITEM_KEY As IK
From ENT_LINK_OBJECTS
Inner Join ENT_ITEM_MASTER
On ENT_ITEM_MASTER.ITEM_KEY=ENT_LINK_OBJECTS.ENTITY_KEY
And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null) As B
On A.TDS=B.TDS and A.TL = B.TL and A.IK = B.IK
Where A.TDS is Null
```

If your RDBMS supports MINUS (EXCEPT in SQL Server), this would also work:

```
SELECT ENT_LINK_OBJECTS.OBJ_NAME As TDS,
       ENT_ITEM_MASTER.ITEM_CODE As TL,
       ENT_ITEM_MASTER.ITEM_KEY As IK
FROM ENT_LINK_OBJECTS
INNER JOIN ENT_ITEM_MASTER
  ON ENT_ITEM_MASTER.ITEM_KEY=ENT_LINK_OBJECTS.ENTITY_KEY
 AND ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null
EXCEPT
SELECT Sheet1.TOOLING_DATA_SHEET,
       Sheet1.CUTTING_TOOL,
       ENT_ITEM_MASTER.ITEM_KEY
FROM Sheet1
INNER JOIN ENT_ITEM_MASTER
  ON ENT_ITEM_MASTER.ITEM_CODE=Sheet1.CUTTING_TOOL
 AND ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null
```

It basically says: take result set B and subtract result set A from it, which leaves you with the two records you're after.
This only works if all columns match. Outer joins or exists/not exists provide greater flexibility. [Visual Aid](http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/) on Joins to help better understand
From what you are describing in comments you are looking for a Not Exists rather than a Right Join. A right join will give you everything in table B plus everything that matches in table A. Not exists will find everything in Table B that is not in Table A. You need something like the below: ``` Select ENT_LINK_OBJECTS.OBJ_NAME, e1.ITEM_CODE, e1.ITEM_KEY From ENT_LINK_OBJECTS Inner Join ENT_ITEM_MASTER e1 On e1.ITEM_KEY=ENT_LINK_OBJECTS.ENTITY_KEY And e1.USER_LAST_MODIFIED Is Not Null Where not exists (Select * From Sheet1 Inner Join ENT_ITEM_MASTER On ENT_ITEM_MASTER.ITEM_CODE=Sheet1.CUTTING_TOOL And ENT_ITEM_MASTER.USER_LAST_MODIFIED Is Not Null where ENT_ITEM_MASTER.ITEM_KEY = e1.ITEM_KEY) ```
SQL - no output from Right Join
[ "", "sql", "left-join", "ssms", "right-join", "" ]
![enter image description here](https://i.stack.imgur.com/rqn0u.png) Hello guys, Our aim is to get a script that will insert the missing pairs of product - TaxCategory in the intermediate table (ProductTaxCategory) The following script is correctly working but we are trying to find a way to optimize it: ``` INSERT ProductTaxCategory (ProductTaxCategory_TaxCategoryId,ProductTaxCategory_ProductId) SELECT TaxCategoryId ,ProductId FROM Product pr CROSS JOIN TaxCategory tx WHERE pr.ProductId NOT IN ( SELECT ProductTaxCategory_ProductId FROM ProductTaxCategory ) OR pr.ProductId IN ( SELECT ProductTaxCategory_ProductId FROM ProductTaxCategory ) AND tx.TaxCategoryId NOT IN ( SELECT ProductTaxCategory_TaxCategoryId FROM ProductTaxCategory WHERE ProductTaxCategory_ProductId = pr.ProductId ) ``` How can we optimize this query ?
Try something like (full statement now): ``` INSERT INTO ProductTaxCategory (ProductTaxCategory_TaxCategoryId,ProductTaxCategory_ProductId) SELECT TaxCategoryId, ProductId FROM Product pr CROSS JOIN TaxCategory tx WHERE NOT EXISTS (SELECT 1 FROM ProductTaxCategory WHERE ProductTaxCategory_ProductId = pr.ProductId AND ProductTaxCategory_TaxCategoryId = tx.TaxCategoryId) ``` `EXISTS` with `(SELECT 1 ... WHERE ID=...)` is often a better alternative to `IN (SELECT ID FROM ... )` constructs.
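The `NOT EXISTS` rewrite can be demonstrated on a tiny data set. A Python/SQLite sketch (column names shortened from the `ProductTaxCategory_*` originals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (ProductId INTEGER);
CREATE TABLE taxcategory (TaxCategoryId INTEGER);
CREATE TABLE producttaxcategory (TaxCategoryId INTEGER, ProductId INTEGER);
INSERT INTO product VALUES (1), (2);
INSERT INTO taxcategory VALUES (10), (20);
INSERT INTO producttaxcategory VALUES (10, 1);  -- one pair already exists
""")

# Insert only the (product, tax category) pairs that are missing.
conn.execute("""
INSERT INTO producttaxcategory (TaxCategoryId, ProductId)
SELECT tx.TaxCategoryId, pr.ProductId
FROM product pr CROSS JOIN taxcategory tx
WHERE NOT EXISTS (SELECT 1 FROM producttaxcategory p
                  WHERE p.ProductId = pr.ProductId
                    AND p.TaxCategoryId = tx.TaxCategoryId)
""")

rows = conn.execute(
    "SELECT TaxCategoryId, ProductId FROM producttaxcategory "
    "ORDER BY ProductId, TaxCategoryId").fetchall()
print(rows)  # all 4 pairs; the pre-existing pair is not duplicated
```

Running the statement a second time would insert nothing, which is the idempotence the `NOT EXISTS` guard buys you.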
You can do a `LEFT JOIN` with `ProductTaxCategory`and check for `NULLs`. Something like this. ``` INSERT ProductTaxCategory ( ProductTaxCategory_TaxCategoryId, ProductTaxCategory_ProductId ) SELECT p.TaxCategoryId, p.ProductId FROM ( SELECT TaxCategoryId, ProductId FROM Product pr CROSS JOIN TaxCategory tx ) p LEFT JOIN ProductTaxCategory ptx ON P.TaxCategoryId = ptx.ProductTaxCategory_TaxCategoryId AND P.ProductId = ptx.ProductTaxCategory_ProductId WHERE ptx.ProductTaxCategory_ProductId IS NULL ```
Changing SQL NOT IN to JOINS
[ "", "sql", "join", "sql-server-2012", "notin", "" ]
Given the query below

```
Select COLUMN_ID,
       (Select CASE COLUMN_ID
                 WHEN 4 THEN 'WEIGHT'
                 WHEN 6 THEN 'CARGO_LENGTH'
                 WHEN 7 THEN 'WIDTH'
                 WHEN 8 THEN 'HEIGHT'
               END GROOVE
        FROM ALL_TAB_COLS
        where TABLE_NAME = 'TBL_CARGO')
FROM ALL_TAB_COLS
where COLUMN_ID IN(4,6,7,8)
```

I'd like to get only the non-blank columns. I'm expecting the labels for COLUMN\_IDs 4, 6, 7 and 8, one per row. How do I do that?
All that sub-query and in-line view work can be done in a single query.

Using a **CASE** expression (verbose and easy to understand):

```
SELECT COLUMN_ID,
       CASE COLUMN_ID
         WHEN 4 THEN 'WEIGHT'
         WHEN 6 THEN 'CARGO_LENGTH'
         WHEN 7 THEN 'WIDTH'
         WHEN 8 THEN 'HEIGHT'
       END GROOVE
  FROM ALL_TAB_COLS
 WHERE TABLE_NAME = 'TBL_CARGO'
   AND COLUMN_ID IN(4,6,7,8);
```

Using **DECODE** (shorter):

```
SELECT COLUMN_ID,
       DECODE(COLUMN_ID, 4, 'WEIGHT',
                         6, 'CARGO_LENGTH',
                         7, 'WIDTH',
                         8, 'HEIGHT')
  FROM ALL_TAB_COLS
 WHERE TABLE_NAME = 'TBL_CARGO'
   AND COLUMN_ID IN(4,6,7,8);
```
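`DECODE` is Oracle-specific, but the `CASE` form runs almost anywhere. A Python/SQLite sketch with a stand-in table (since `ALL_TAB_COLS` exists only in Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cols (column_id INTEGER)")
conn.executemany("INSERT INTO cols VALUES (?)",
                 [(4,), (6,), (7,), (8,), (9,)])  # 9 should be filtered out

rows = conn.execute("""
SELECT column_id,
       CASE column_id WHEN 4 THEN 'WEIGHT'
                      WHEN 6 THEN 'CARGO_LENGTH'
                      WHEN 7 THEN 'WIDTH'
                      WHEN 8 THEN 'HEIGHT' END AS groove
FROM cols
WHERE column_id IN (4, 6, 7, 8)
""").fetchall()
print(rows)  # each id paired with its label, one per row
```

Because the `CASE` sits in the select list of the same query (not a subquery), there is no "single-row subquery returns more than one row" error to trigger.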
No need for a sub-select, just add the `CASE` expression. Something like this perhaps? ``` Select COLUMN_ID, CASE COLUMN_ID WHEN 4 THEN 'WEIGHT' WHEN 6 THEN 'CARGO_LENGTH' WHEN 7 THEN 'WIDTH' WHEN 8 THEN 'HEIGHT' END GROOVE FROM ALL_TAB_COLS where COLUMN_ID IN(4,6,7,8) ```
Solving "single-row subquery returns more than one row" error in Oracle SQL
[ "", "sql", "oracle", "subquery", "" ]
I have a table with just a few columns: ``` BUS_DATE VALUE EXP_DATE 6/29/2015 60 6/29/2015 6/30/2015 100 6/30/2015 6/30/2015 50 6/30/2015 6/30/2015 25 7/1/2015 7/1/2015 75 7/1/2015 ``` I'm just looking how to loop through each [BUS\_DATE] in the table and SUM the [VALUE] with some [EXP\_DATE] logic ``` FOR EACH @BUS_DATE INSERT BUS_DATE, SUM(VALUE) INTO #tmp FROM TABLE WHERE ( BUS_DATE = @BUS_DATE OR (@BUS_DATE > BUS_DATE AND @BUS_DATE <= EXP_DATE) ) NEXT ``` Ultimately, I'd like the output to look like this: ``` BUS_DATE VALUE 6/29/2015 60 6/30/2015 175 7/1/2015 100 ``` Thank you so much in advance!
If I understand your problem correctly: you calculate a list of distinct `BUS_DATE` values (`SELECT DISTINCT BUS_DATE FROM TABLE`), so each existing `BUS_DATE` appears once. Then you join your original table with that list according to your `EXP_DATE` logic:

```
SELECT lst.BUS_DATE
     , val = SUM(VALUE)
INTO #tmp
FROM ( SELECT DISTINCT BUS_DATE FROM TABLE ) lst
JOIN TABLE dat
  ON ( lst.BUS_DATE = dat.BUS_DATE )
  OR ( lst.BUS_DATE > dat.BUS_DATE AND lst.BUS_DATE <= dat.EXP_DATE )
GROUP BY lst.BUS_DATE
```
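That join logic can be verified against the question's sample data. A Python/SQLite sketch (ISO date strings stand in for the original date values so they compare correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (bus_date TEXT, value INTEGER, exp_date TEXT);
INSERT INTO orders VALUES
  ('2015-06-29', 60,  '2015-06-29'),
  ('2015-06-30', 100, '2015-06-30'),
  ('2015-06-30', 50,  '2015-06-30'),
  ('2015-06-30', 25,  '2015-07-01'),
  ('2015-07-01', 75,  '2015-07-01');
""")

rows = conn.execute("""
SELECT lst.bus_date, SUM(dat.value)
FROM (SELECT DISTINCT bus_date FROM orders) lst
JOIN orders dat
  ON lst.bus_date = dat.bus_date
  OR (lst.bus_date > dat.bus_date AND lst.bus_date <= dat.exp_date)
GROUP BY lst.bus_date
ORDER BY lst.bus_date
""").fetchall()
print(rows)  # the 25 row counts toward both 6/30 and 7/1
```

The row with value 25 is counted under 2015-06-30 (its own business date) and again under 2015-07-01 (because the expiry window reaches it), reproducing the 60 / 175 / 100 totals from the question.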
This is a simple `GROUP BY` query: ``` SELECT BUS_DATE, SUM(VALUE) AS VALUE FROM MyTable GROUP BY BUS_DATE ``` If you would like to store the results in `#tmp` table, use [insert from select](https://stackoverflow.com/q/25969/335858) syntax.
SQL Loop through table by date and sum a value
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Actually I have 2 questions, both in SQL Server. 1. There's a query like "Create Table" but for Create columns? If it exist, please, how it is? 2. I need to use a "Create Table" but using parameters, but I don't know if that is possible. For example: > ``` > @table_name string, > @column_name1 int, > @column_name2 int, > @column_name3 int > > CREATE TABLE @table_name > ( > @column_name1, > @column_name2, > @column_name3 > .... > ); > ``` Obviously that's only what I have in mind and doesn't work. There is a right way to do it? I appreciate your help!
For #1, use the ALTER TABLE command.

For #2, how are you executing the query? You can build a string variable containing the complete command (with the table name spliced in) and then execute it. Something like:

```
declare @CustomerID int
set @CustomerID = 3262833

declare @sql nvarchar(1000)
set @sql = N'SELECT * FROM [dbo].[Customers] AS [tbCustomers] WITH (NOLOCK) 
            WHERE tbCustomers.ID = ' + cast(@CustomerID as nvarchar(10))

exec (@sql)
```
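The same constraint holds in every client API: parameter markers bind values, never table or column names, so dynamic DDL has to be assembled as a string. A Python/SQLite sketch of that pattern (the identifier whitelist is my own safeguard, not from the answer):

```python
import re
import sqlite3

IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def create_int_table(conn, table, columns):
    # Identifiers cannot be bound with ? placeholders, so validate
    # them before splicing them into the statement text.
    for name in (table, *columns):
        if not IDENT.match(name):
            raise ValueError(f"bad identifier: {name!r}")
    cols = ", ".join(f'"{c}" INTEGER' for c in columns)
    conn.execute(f'CREATE TABLE "{table}" ({cols})')

conn = sqlite3.connect(":memory:")
create_int_table(conn, "measurements", ["c1", "c2", "c3"])
conn.execute("INSERT INTO measurements VALUES (1, 2, 3)")
print(conn.execute("SELECT * FROM measurements").fetchone())  # (1, 2, 3)
```

The whitelist plus quoting plays the role QUOTENAME plays in T-SQL: it keeps a hostile "table name" like `x; DROP TABLE y` from being executed.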
1. By creating a column you probably mean adding a column to an existing table, right? Use **ALTER TABLE** for that:

   ALTER TABLE TableName ADD newColumn datatype

   More info <https://msdn.microsoft.com/en-us/library/ms190273.aspx>
2. For dynamically creating a table, note that table and column names cannot be passed as **sp\_executesql** parameters — parameter markers only stand in for values — so build the statement as a string and then execute it:

```
DECLARE @sql nvarchar(max) =
    N'CREATE TABLE ' + QUOTENAME(@table_name) + N' ('
    + QUOTENAME(@column_name1) + N' int, '
    + QUOTENAME(@column_name2) + N' int, '
    + QUOTENAME(@column_name3) + N' int)';

EXEC sp_executesql @sql;
```

More info here <https://msdn.microsoft.com/en-us/library/ms188001.aspx>
Can I use parameters in a CREATE query?
[ "", "sql", "sql-server", "parameters", "create-table", "" ]
When I execute the following query I get one result:

```
SELECT Datediff(year, 0, Getdate()); 
```

**The result was 115**

When I use this, I get another result:

```
SELECT Datediff(year, 1900, Getdate()); 
```

**The result was 110**

I know that in SQL Server the count starts from `1900-01-01`, but why do these show different values?
Try this to explain the logic: ``` select cast(0 as datetime) select cast(1 as datetime) ``` An integer is interpreted as the number of Days since 1900-01-01 whereas a string value such as '1900' will be interpreted as a date format. 1900 Days from Jan 1st 1900 is 1905-03-16, which is five years from 1900 and 110 years from now (2015).
This is because if you cast 0 as datetime, it returns 1900 as the year part, whereas 1900 cast as datetime returns 1905 as the year part. [Demo](http://rextester.com/ICGY32404) From [MSDN](https://technet.microsoft.com/en-us/library/aa258277(v=sql.80).aspx): > Values with the datetime data type are stored internally by Microsoft SQL Server as two 4-byte integers. The first 4 bytes store the number of days before or after the base date, January 1, 1900. The base date is the system reference date. That means, casting the literal 0 to `datetime` is equivalent to getting the datetime value for 0 days after 1/1/1900, which is 1/1/1900. Similarly for 1900. Therefore, as @MartinSmith points out in the comments, your calculation is equivalent to `SELECT Datediff(year,dateadd(d,0,'1/1/1900'), Getdate())` which returns 115 as expected. Possibly worth noting that the MSDN page on Cast and Convert does not specifically cover this scenario i.e. `int` to `datetime`.
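The day arithmetic behind both answers can be reproduced with plain Python dates: an integer cast to `datetime` means that many days after the 1900-01-01 base, and `DATEDIFF(year, ...)` counts calendar-year boundaries between the two values (2015 is assumed as "now", matching the question):

```python
from datetime import date, timedelta

base = date(1900, 1, 1)  # SQL Server's datetime base date

# CAST(0 AS datetime) is 0 days after the base date: 1900-01-01.
as_of_0 = base + timedelta(days=0)

# CAST(1900 AS datetime) is 1900 days after the base date.
as_of_1900 = base + timedelta(days=1900)
print(as_of_1900)  # 1905-03-16

# DATEDIFF(year, x, GETDATE()) in 2015 counts year boundaries crossed:
print(2015 - as_of_0.year)     # 115
print(2015 - as_of_1900.year)  # 110
```

So the two queries differ simply because 0 and 1900 land in different years once interpreted as day offsets.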
Why does the Datediff function show different values?
[ "", "sql", "sql-server", "" ]
I have one table called `Device`: ``` Device Table id deviceName genId description overview 1 PV motor 5 motor capacity 5 motor load is good 2 puf pannel 6 puf capacity 2 puf is for small load 3 gd motor 5 motor capacity 7 motor load is very good 4 rg motor 5 capacity is 3 low capacity ``` Now suppose this table has thousands of records , i need to add searching of rows like , `genId=5 and description Like = '%motor%' OR Overview Like='%motor%'` Search result will be ``` 1 PV motor 5 motor capacity 5 motor load is good 3 gd motor 5 motor capacity 7 motor load is very good ``` I need to construct query which **first it search for genId 5** from the table , **and after that it search for the description and overview having text like motor**. Since if my table had 100 records and only 5 of them have their genId set to 5 then my text search query will be executed on those 5 rows instead of 100 rows . My Search query : ``` Select * From Device where (genId=5) And (description Like '%motor%' Or overview Like '%motor%') ``` Can any one help me to create optimized query?
Your query already is as optimized as possible. LIKE itself is not slow; what is slow is loading all table rows from disk, and LIKE usually needs to do that because it cannot optimize its lookups with an index. However, when there is no index on the `genId` column, all rows need to be loaded *anyway* to check those values, so inserting an extra step would be needless effort:

```
> EXPLAIN QUERY PLAN Select * From Device where (genId=5) And (description Like '%motor%' Or overview Like '%motor%');
0|0|0|SCAN TABLE Device
```

If, on the other hand, there *is* an index on the `genId` column, then you don't need to do any manual optimizations, because the database will automatically look up the matching `genId` rows in the index, and then check for LIKE matches only on those:

```
> CREATE INDEX genId_index ON Device(genId);
> EXPLAIN QUERY PLAN Select * From Device where (genId=5) And (description Like '%motor%' Or overview Like '%motor%');
0|0|0|SEARCH TABLE Device USING INDEX genId_index (genId=?)
```
In this case you can go for a sub-query:

```
Select * From
    (Select * From Device where (genId=5))
where description Like '%motor%'
    Or overview Like '%motor%'
```

Here, the subquery is executed first and then the WHERE condition is applied to its result. That's what I know.
Need filtered result optimised query
[ "", "sql", "sqlite", "query-optimization", "" ]
I am trying to convert an MS Access database to MS SQL 2012 using Microsoft SQL Server Migration Assistant for Access version 6.0, but each time I try to convert it an error pops up. Any ideas how I can solve the problem? I reinstalled Microsoft SQL Server Migration Assistant for Access, and it worked just once; after that I get the same error again. Any help will be most appreciated.

> Access Object Collector error: Database
> Retrieving the COM class factory for component with CLSID {CD7791B9-43FD-42C5-AE42-8DD2811F0419} failed due to the following
> error: 80040154 Class not registered (Exception from HRESULT:
> 0x80040154 (REGDB\_E\_CLASSNOTREG)). This error may be a result of
> running SSMA as 64-bit application while having only 32-bit
> connectivity components installed or vice versa. You can run 32-bit
> SSMA application if you have 32-bit connectivity components or 64-bit
> SSMA application if you have 64-bit connectivity components, shortcut
> to both 32-bit and 64-bit SSMA can be found under the Programs menu.
> You can also consider updating your connectivity components from
> <http://go.microsoft.com/fwlink/?LinkId=197502>.
> An error occurred while loading database content.
I remember this error. I had to find the EXE for 32-bit SSMA even though I was running 64-bit windows. The default installed location was: C:\Microsoft SQL Server Migration Assistant for Access\bin And the filename was: SSMAforAccess32.exe
Just to complete the other side of the story... If you have a 64-bit setup, you might have to go the other direction and install the 64-bit MSAccess 2010 engine. I'm running Access 2013, and I believe it is 64-bit. But the SMAA tool threw the 80040154 error. * I think SMAA can't or doesn't use the 64-bit connectivity drivers that come with 64-bit Access 2013. So using the 32-bit SMAA didn't help. * Installing the 64-bit 2010 engine did. I got it as a free standalone download here: * <https://www.microsoft.com/en-us/download/details.aspx?id=13255> After the quick install of this, I simply started SMAA 64-bit again, and the wizard was successful the first time. --- thanks to : <https://social.technet.microsoft.com/Forums/itmanagement/en-US/0249eebf-14bd-45f6-9bca-3b42395a3d13/ssma-60-error-retrieving-the-com-class-factory-for-component-failed-due-to-the-following-error?forum=sqlservermigration>
Converting Access Database to MSSQL Database Using Microsoft SQL Server Migration Assistant for Access
[ "", "sql", "sql-server", "database", "ms-access", "ms-access-2010", "" ]
I have a table for marks of students as follows: ``` SrNo Class Name Marks 1 1A Student1 67 2 1A Student2 62 3 1A Student3 65 4 1A Student4 78 5 1A Student5 28 6 1B Student6 57 7 1B Student7 65 8 1B Student8 85 9 1B Student9 18 10 1B Student10 8 ``` I want the results as 3 rows from each class with highest, lowest and average marks. The result would ideally be: ``` SrNo Class Student Marks 4 1A Student4 78 5 1A Student5 28 2 1A Student2 62 8 1B Student8 85 10 1B Student10 8 6 1B Student6 57 ```
You can try something like this to achieve it: ``` -- Create demo data CREATE TABLE #temp(SrNo int, Class nvarchar(5), Name nvarchar(50), Marks int) INSERT INTO #temp(SrNo, Class, Name, Marks) VALUES (1,'1A','Student1',67), (2,'1A','Student2',62), (3,'1A','Student3',65), (4,'1A','Student4',78), (5,'1A','Student5',28), (6,'1B','Student6',57), (7,'1B','Student7',65), (8,'1B','Student8',85), (9,'1B','Student9',18), (10,'1B','Student10',8) -- your part SELECT t.* FROM ( SELECT Class, MIN(Marks) as min_marks, AVG(Marks) as avg_marks, MAX(Marks) as max_marks FROM #temp GROUP BY class ) as data OUTER APPLY ( SELECT TOP 1 t.marks as nearest_avg FROM #temp as t WHERE t.class = data.Class ORDER BY CASE WHEN data.avg_marks-marks >= 0 THEN data.avg_marks-Marks ELSE Marks-data.avg_marks END ) as avg_data INNER JOIN #temp as t ON t.Class = data.Class AND( t.Marks = data.min_marks OR t.marks = avg_data.nearest_avg OR t.marks = data.max_marks ) -- Cleanup DROP TABLE #temp ```
You can use a combination of `ROW_NUMBER` and Aggregate functions with `OVER` like this. `ROW_NUMBER() OVER(PARTITION BY Class ORDER BY ABS(Marks - AvgMarks))` gets the a Student with the marks closest to the average of the class. [SQL Fiddle](http://sqlfiddle.com/#!3/1c121/6) **Query** ``` ;WITH CTE AS ( SELECT MAX(Marks)OVER(PARTITION BY Class) MaxMarks, MIN(Marks)OVER(PARTITION BY Class) MinMarks, AVG(Marks)OVER(PARTITION BY Class) AvgMarks, [SrNo], [Class], [Name], [Marks] FROM Class ), CTEAvg as ( SELECT [SrNo], [Class], [Name], [Marks],MaxMarks,MinMarks, ROW_NUMBER() OVER(PARTITION BY Class ORDER BY ABS(Marks - AvgMarks)) ClosestAvg FROM CTE ) SELECT [SrNo], [Class], [Name], [Marks] FROM CTEAvg WHERE [Marks] = MaxMarks OR [Marks] = MinMarks OR ClosestAvg = 1; ``` **Output** ``` | SrNo | Class | Name | Marks | |------|-------|-----------|-------| | 2 | 1A | Student2 | 62 | | 4 | 1A | Student4 | 78 | | 5 | 1A | Student5 | 28 | | 6 | 1B | Student6 | 57 | | 10 | 1B | Student10 | 8 | | 8 | 1B | Student8 | 85 | ```
SQL Query for calculating max, min and mode of a sample
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
When you have a table which has relationships with 2 others through FKs, is it possible to query the main table while only counting how many times its ID exists in the other 2? For example: ``` id title product_id contact_id ------ ------- ---------- ---------- 11 Mug 110 33 11 Mug 110 297 11 Mug 378 33 11 Mug 378 297 12 Monster 299 75 12 Monster 299 291 12 Monster 299 296 ``` My query which results in above: ``` SELECT t1.id, t1.title, t2.product_id, t3.contact_id FROM t1 JOIN t2 ON t1.id = t2.rfq_id JOIN t3 ON t1.id = t3.rfq_id ``` I'm only interested in the `COUNT()`: ``` id title product_count contact_count ------ ------- ------------- ------------- 11 Mug 2 2 12 Monster 1 3 ```
You can try with `group by` and `count` only `distinct` values: ``` select t1.id , t1.title , count(distinct t2.product_id) as product_count , count(distinct t3.contact_id) as contact_count from t1 join t2 on t1.id = t2.rfq_id join t3 on t1.id = t3.rfq_id group by t1.id , t1.title ```
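The `COUNT(DISTINCT ...)` approach can be confirmed on the question's sample data; the join fan-out (2×2 rows for id 11) is exactly what the DISTINCT counts undo. A Python/SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, title TEXT);
CREATE TABLE t2 (rfq_id INTEGER, product_id INTEGER);
CREATE TABLE t3 (rfq_id INTEGER, contact_id INTEGER);
INSERT INTO t1 VALUES (11, 'Mug'), (12, 'Monster');
INSERT INTO t2 VALUES (11, 110), (11, 378), (12, 299);
INSERT INTO t3 VALUES (11, 33), (11, 297), (12, 75), (12, 291), (12, 296);
""")

rows = conn.execute("""
SELECT t1.id, t1.title,
       COUNT(DISTINCT t2.product_id) AS product_count,
       COUNT(DISTINCT t3.contact_id) AS contact_count
FROM t1
JOIN t2 ON t1.id = t2.rfq_id
JOIN t3 ON t1.id = t3.rfq_id
GROUP BY t1.id, t1.title
ORDER BY t1.id
""").fetchall()
print(rows)  # [(11, 'Mug', 2, 2), (12, 'Monster', 1, 3)]
```

Without DISTINCT, id 11 would report 4 for both counts because the two product rows and two contact rows cross-multiply in the join.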
Try this:

```
SELECT t1.id,
       t1.title,
       count(distinct t2.product_id) as product_count,
       count(distinct t3.contact_id) as contact_count
FROM t1
JOIN t2 ON t1.id = t2.rfq_id
JOIN t3 ON t1.id = t3.rfq_id
group by t1.id, t1.title
```

Thank you.
MySQL join while counting on joined tables
[ "", "mysql", "sql", "" ]
I am trying to get distinct result of following table ``` id | name | created_on 1 | xyz | 2015-07-04 09:45:14 1 | xyz | 2015-07-04 10:40:59 2 | abc | 2015-07-05 10:40:59 ``` I want distinct **id** with latest **created\_on** means following result ``` 1 | xyz | 2015-07-04 10:40:59 2 | abc | 2015-07-05 10:40:59 ``` How to get above result by sql query?
Try this:

```
Select id, name, max(created_on) as created_on
from `table`
group by id
```

(Note that `table` is a reserved word in MySQL, so it has to be quoted with backticks — or, better, replaced with your real table name.)
Try: ``` select id,max(name), max(created_on) from table_name group by id ``` Additional Note: As it appears, your table is not **normalized**. That is, you store the `name` along with `id` in this table. So you may have these two rows simultaneously: ``` id | name | created_on 1 | a | 12-12-12 1 | b | 11-11-11 ``` If that state is not logically possible in your model, you should redesign your database by splitting this table into two separate tables; one for holding `id-name` relationship, and another to hold `id-created_on` relationship: ``` table_1 (id,name) table_2 (id,created_on) ``` Now, to get last created\_on for each id: ``` select id,max(created_on) from table_2 ``` And if you want to hold `name` in the query: ``` select t1.id, t1.name, t2.created_on from table_1 as t1 inner join (select id, max(created_on) as created_on from table_2) as t2 on t1.id=t2.id ```
how to get distinct result in sql?
[ "", "mysql", "sql", "" ]
* I have a list of orders which come in two types (A and B). * Every account has at least one type A order. * Multiple orders of each type can exist on an account. * For each account, I need to know the minimum Order\_Date of all type A orders on that account. Should I do this in SQL, powerquery, or powerpivot? I would prefer to calculate it in powerquery or powerpivot. Any ideas?
Just going to make some basic assumptions about your table structure. Something like this should work:

```
SELECT Account.Name, MIN(Order_date) FirstOrderDate
FROM Account
LEFT OUTER JOIN Orders on Account.AccountId = Orders.AccountId AND Orders.Type='A'
GROUP BY Account.Name
```
``` SELECT * FROM accounts,order WHERE accounts.id = order.account_id and order.id in (SELECT TOP 1 id FROM order WHERE type = "A" ORDER BY date); ```
How to return the minimum date per account for a specific order type
[ "", "sql", "powerpivot", "dax", "powerquery", "" ]
I have a stored procedure that is supposed to pull all characters after a / character. However, sometimes the field will not have the character then which it should return nothing and sometimes the field would be null, again it should return nothing. The SQL that I have works for if the / character exists but not if it doesn't exist, I get the error in the title. Here is the SQL: ``` COALESCE(NULLIF(SUBSTRING(s.billing_dept,1, CASE WHEN CHARINDEX('/', s.billing_dept) = 0 THEN 0 ELSE (LEN(s.billing_dept)-1) - CHARINDEX('/', s.billing_dept) --END, --CASE WHEN CHARINDEX('/', s.billing_dept) = 0 -- THEN 1 -- ELSE ((LEN(s.billing_dept)-1) - CHARINDEX('/', s.billing_dept)) END),''),'') ```
This would fail if the string ends in `'/'` If you want all characters after the `'/'`, you should use this: ``` CASE CHARINDEX('/', s.billing_dept) WHEN 0 THEN '' ELSE SUBSTRING(s.billing_dept, CHARINDEX('/', s.billing_dept) + 1, LEN(s.billing_dept) - CHARINDEX('/', s.billing_dept)) END ```
You can try this: ``` CREATE TABLE #teststrings(test nvarchar(20)) INSERT INTO #teststrings(test) VALUES (null), (N'Hiho!'), (N'Cool/String'), (N'Cooler / String'), (N'Stupid String /') SELECT s.test, SUBSTRING(s.test,1, CASE WHEN CHARINDEX(N'/',s.test) = 0 THEN LEN(s.test) ELSE CHARINDEX(N'/',s.test)-1 END ) as string_before, SUBSTRING(s.test,CHARINDEX(N'/',s.test)+1,LEN(s.test)) as string_after FROM #teststrings as s DROP TABLE #teststrings ``` I provided both. A version which will get everything in front of the `/` which is called `string_before` and one version which will give you everything after the `/` called `string_after`. Additionally I provided five different teststrings for the test. These are the results: ``` test string_before string_after -------------------- -------------------- -------------------- NULL NULL NULL Hiho! Hiho! Hiho! Cool/String Cool String Cooler / String Cooler String Stupid String / Stupid String ``` You can still `ltrim/rtrim` the results if needed and wished. This solution will work on SQL Server 2005 up to current versions.
'Invalid length parameter passed to the LEFT or SUBSTRING function' when / doesn't exist
[ "", "sql", "sql-server", "substring", "" ]
How to get the TeamCount and TeamLead Name correctly

```
Teams | TeamCount | TeamLead Name 
-------------------------------------
| Team1 |     2     | NULL
| Team2 |     2     | NULL
| Team1 |     1     | Prashanth
```

Sometimes a team may or may not have a team lead. In that case we just have to show the TeamLead name as null, if no team lead is found for the team. I need some help to get the output as below

```
Teams | TeamCount | TeamLead Name
---------------------------------
Team1 |     3     | Prashanth
Team2 |     2     | NULL
```
Here you are:

MySQL:

```
select team as teamId, sum(teamcount) as teamcount, 
(select teamleadname from teams where teamId = teams.team and teamleadname is not null limit 1) as teamleadname 
from teams 
group by team
```

<http://sqlfiddle.com/#!9/5889d/39>

SQL Server:

```
declare @teams TABLE(Team varchar(20), TeamCount int, TeamLeadName varchar(20));

INSERT INTO @teams VALUES
('Team1', 2, null),
('Team2', 2, null),
('Team1', 1, 'Prashanth')

select l.Team, l.TeamCount, r.TeamLeadName from
(select Team , sum(TeamCount) as TeamCount from @teams group by Team) as l
left outer join 
(select Team, TeamLeadName from
    (select Team, TeamLeadName, row_number() over(partition by Team order by TeamLeadName desc) as roworder 
    from @teams) as o
where o.roworder = 1
) as r
on l.Team = r.Team
```

Hope this helps.
...and here you go! ``` create table test2 as select distinct c.teams,sum(c.TeamCount) as SUM,c.TeamName from (select a.Teams,a.TeamCount,b.TeamName from team as a left outer join (select * from team where TeamName ne "") as b on a.Teams = B.Teams) as c group by c.Teams; ``` You should try to develop these solutions by working from the inside out and build the query in steps
SQL select query using group by
[ "", "mysql", "sql", "oracle", "group-by", "" ]
**My table:** ``` // mytable +-----+----------+--------------+ | id | number | name | +-----+----------+--------------+ | 1 | 1 | red | | 2 | 1 | | | 3 | 1 | | | 4 | 2 | blue | | 5 | 2 | | | 6 | 3 | | | 7 | 3 | | | 8 | 3 | | | 9 | 4 | | | 10 | 4 | | +-----+----------+--------------+ ``` Now I want this output: `id=6` and `id=9`. *(distinct number and name is empty)* something like this: ``` // mynewtable +-----+----------+--------------+ | id | number | name | +-----+----------+--------------+ | 6 | 3 | | | 9 | 4 | | +-----+----------+--------------+ ``` **My query:** ``` SELECT * FROM mytable group by number where name='' order by id ASC; ``` **My error:** > **Warning:** mysql\_fetch\_assoc(): supplied argument is not a valid MySQL > result resource in {addres} on line {x} how can I fix it ?
This will work well, self tested! ``` SELECT id, number FROM `table` WHERE name='' and number not in(select number from `table` where name !='') GROUP BY number ORDER BY id ``` **point:** if the number of the rows is too much (in reality), you should use *index* for them.
As * you commented the **id** column not to exist, and * from the data perspective any out of the rows (6, 7, 8) and (9, 10) respectively should be good, one the following might work for you (depending on whether you actually have to check against empty strings or **NULL**): ``` SELECT DISTINCT number , MAX(name) AS name FROM mytable GROUP BY number HAVING MAX(name) IS NULL ; SELECT DISTINCT number , MAX(name) AS name FROM mytable GROUP BY number HAVING MAX(name) = '' ; ``` See it in action: [SQL Fiddle](http://sqlfiddle.com/#!9/42688/1) As it requires no sub-selects, it might provide a performance gain - in particular with an index combining the two columns checked (depending on the actual data distribution). - You have (according to the [chat](https://chat.stackoverflow.com/rooms/82324/discussion-between-kamal-pal-and-stack)) the +100,000 of rows of real data to check against... ;-) Please comment if and as adjustment / further detail is required.
how to select the rows where distinct col1 and empty col2
[ "", "mysql", "sql", "" ]
I am working on a query and part of the query must return values based on when the transaction occurred. A simpler version of the table would be as follows ``` TransDate | TransAmt -------------------- 6/1/2012 | 10 7/5/2012 | 15 6/1/2013 | 15 7/1/2013 | 15 ``` The restriction is that any Transactions that occurred before(TransDate) 6/30/2012 would return the full TransAmt. If the transaction happened after 6/30/2012, the value for TransAmt must be returned as 0. I believe I would need to use the CAST function as well as the CASE function but I am not experienced with either functions. Any help or guidance would be much appreciated. I am using SQL Server 2008 as well.
You can use `case`: ``` select (case when TransDate < '2012-06-30' then TransAmt else 0 end) . . . ``` I would strongly encourage you to use ISO standard date formats (YYYY-MM-DD).
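Put into a complete statement — with `Transactions` as a placeholder table name, since the real one isn't given in the question — the query might look like:

```sql
SELECT TransDate,
       (CASE WHEN TransDate < '2012-06-30' THEN TransAmt ELSE 0 END) AS TransAmt
FROM Transactions;
```

Against the sample rows this keeps the 10 from 6/1/2012 and returns 0 for the three later transactions.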
You can try with `case`: ``` select case when TransDate < '6/30/2012' then TransAmt else 0 end from tbl ``` [**SQLFiddle**](http://sqlfiddle.com/#!3/e49432/1)
SQL IF statement?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have some SQL query output for low disk space. It is displayed in MB, such as 1342 MB I would like to convert this to GB and if possible in a decimal format such as 1.3 GB. Any suggestions? Edit: I'm using MS SQL 2008. And I should have given way more info in my original post. Sorry. This is report in Lansweeper. Below is the current report query. ``` Select Top 1000000 tblAssets.AssetID, tblAssets.AssetUnique, tsysOS.OSname As [OS Name], tblAssets.Description, tblDiskdrives.Caption As [Partition], Cast(Cast(tblDiskdrives.Freespace As bigint) / 1024 / 1024 As numeric) As [Free (in MB)], Cast(Cast(tblDiskdrives.Size As bigint) / 1024 / 1024 As numeric) As [Total Size (in MB)], tblDiskdrives.Lastchanged As [last changed], tsysOS.Image As icon From tblAssets Inner Join tblDiskdrives On tblAssets.AssetID = tblDiskdrives.AssetID Inner Join tblOperatingsystem On tblAssets.AssetID = tblOperatingsystem.AssetID Inner Join tblComputersystem On tblAssets.AssetID = tblComputersystem.AssetID Inner Join tblAssetCustom On tblAssets.AssetID = tblAssetCustom.AssetID Inner Join tsysOS On tblAssets.OScode = tsysOS.OScode Where Cast(Cast(tblDiskdrives.Freespace As bigint) / 1024 / 1024 As numeric) < 5120 And Cast(Cast(tblDiskdrives.Size As bigint) / 1024 / 1024 As numeric) <> 0 And tblComputersystem.Domainrole > 1 And tblDiskdrives.DriveType = 3 And tblAssetCustom.State = 1 ```
``` select cast(1342/1024.0 as numeric(10,1)) ``` Note: * the `.0` part at the end of the `1024.0` expression is very important. Without it, you'll get the wrong result. * Casting as `numeric(10, 1)` takes care of the rounding properly.
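Applied to the Lansweeper report from the question, the same idea converts the free-space column straight from bytes to GB with one decimal place — only this expression changes, and the column names are the ones already used in the posted query:

```sql
Cast(Cast(tblDiskdrives.Freespace As bigint) / 1024.0 / 1024 / 1024 As numeric(10,1)) As [Free (in GB)]
```

As above, the single `1024.0` divisor in the chain is what keeps the division from truncating to an integer.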
``` create function dbo.MbToGbString (@mbValue int) returns varchar (10) as begin return left (round (@mbValue / 1024.0, 1), 3) + ' GB' end go select dbo.MbToGbString (1342) ```
Converting numbers in SQL
[ "", "sql", "sql-server-2008", "" ]
I have a very simple table like that: ``` CREATE TABLE IF NOT EXISTS LuxLog ( Sensor TINYINT, Lux INT, PRIMARY KEY(Sensor) ) ``` It contains thousands of logs from different sensors. I would like to have Q1 and Q3 for all sensors. I can do one query for every data, but it would be better for me to have one query for all sensors (getting Q1 and Q3 back from one query) I though it would be a fairly simple operation, as quartiles are broadly used and one of the main statistical variables in frequency calculation. The truth is that I found loads of overcomplicated solutions, while I was hoping to find something neat and simple. Anyone can give me a hint? Edit: This is a piece of code that I found online, but it is not working for me: ``` SELECT SUBSTRING_INDEX( SUBSTRING_INDEX( GROUP_CONCAT( -- 1) make a sorted list of values Lux ORDER BY Lux SEPARATOR ',' ) , ',' -- 2) cut at the comma , 75/100 * COUNT(*) -- at the position beyond the 90% portion ) , ',' -- 3) cut at the comma , -1 -- right after the desired list entry ) AS `75th Percentile` FROM LuxLog WHERE Sensor=12 AND Lux<>0 ``` I am getting 1 as return value, while it should be a number that can be divided by 10 (10,20,30.....1000)
See SqlFiddle : <http://sqlfiddle.com/#!9/accca6/2/6>

Note : for the sqlfiddle I've generated 100 rows; each integer between 1 and 100 has a row, but in a random order (done in Excel).

Here is the code :

```
SET @number_of_rows := (SELECT COUNT(*) FROM LuxLog);
SET @quartile := (ROUND(@number_of_rows*0.25));
SET @sql_q1 := (CONCAT('(SELECT "Q1" AS quartile_name , Lux, Sensor FROM LuxLog ORDER BY Lux DESC LIMIT 1 OFFSET ', @quartile,')'));
SET @sql_q3 := (CONCAT('( SELECT "Q3" AS quartile_name , Lux, Sensor FROM LuxLog ORDER BY Lux ASC LIMIT 1 OFFSET ', @quartile,');'));
SET @sql := (CONCAT(@sql_q1,' UNION ',@sql_q3));
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
```

**EDIT :**

```
SET @current_sensor := 101;
SET @quartile := (ROUND((SELECT COUNT(*) FROM LuxLog WHERE Sensor = @current_sensor)*0.25));
SET @sql_q1 := (CONCAT('(SELECT "Q1" AS quartile_name , Lux, Sensor FROM LuxLog WHERE Sensor=', @current_sensor,' ORDER BY Lux DESC LIMIT 1 OFFSET ', @quartile,')'));
SET @sql_q3 := (CONCAT('( SELECT "Q3" AS quartile_name , Lux, Sensor FROM LuxLog WHERE Sensor=', @current_sensor,' ORDER BY Lux ASC LIMIT 1 OFFSET ', @quartile,');'));
SET @sql := (CONCAT(@sql_q1,' UNION ',@sql_q3));
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
```

Underlying reasoning is as follows :

For quartile 1 we want to get 25% from the top, so first we need to know how many rows there are; that's :

```
SET @number_of_rows := (SELECT COUNT(*) FROM LuxLog);
```

Now that we know the number of rows, we want to know what 25% of that is; that's this line :

```
SET @quartile := (ROUND(@number_of_rows*0.25));
```

Then, to find a quartile, we order the LuxLog table by Lux and fetch the row at position @quartile: we set OFFSET to @quartile to start the select from that row, and LIMIT 1 to retrieve only one row.
That's :

```
SET @sql_q1 := (CONCAT('(SELECT "Q1" AS quartile_name , Lux, Sensor FROM LuxLog ORDER BY Lux DESC LIMIT 1 OFFSET ', @quartile,')'));
```

We do (almost) the same for the other quartile, but rather than starting from the top (from higher values to lower) we start from the bottom (which explains the ASC). At this point we just have strings stored in the variables @sql_q1 and @sql_q3, so we concatenate them, union the results of the two queries, and then prepare and execute the statement.
Well to use NTILE is very simple but it is a Postgres Function. You basically just do something like this: ``` SELECT value_you_are_NTILING, NTILE(4) OVER (ORDER BY value_you_are_NTILING DESC) AS tiles FROM (SELECT math_that_gives_you_the_value_you_are_NTILING_here AS value_you_are_NTILING FROM tablename); ``` Here is a simple example I made for you on SQLFiddle: <http://sqlfiddle.com/#!15/7f05a/1> In MySQL you would use RANK... Here is the SQLFiddle for that: <http://www.sqlfiddle.com/#!2/d5587/1> (this comes from the Question linked below) This use of MySQL RANK() comes from the Stackoverflow answered here: [Rank function in MySQL](https://stackoverflow.com/questions/3333665/rank-function-in-mysql) Look for the answer by Salman A.
Quartiles in SQL query
[ "", "mysql", "sql", "quantile", "percentile", "" ]
I would like to have a SQL formula to match all the words in a query in any order without using multiple 'AND like' statements. For example, the query 'cat dog' should match the following statements: 'cat and dog in a park' 'dog and cat are playing' I found a solution in Regex: ``` WHERE query REGEXP concat('\'(?=.*',replace('cat dog',' ',')(?=.*'),')\'') ``` Note: the part after REGEXP transforms into '(?=.\*cat)(?=.\*dog)' However, I get the error 'repetition-operator operand invalid from regexp'. Could you please help to find another way to get this to work? The query is a free field (search box) so there can be many words to match. This is why I'm not using: ``` WHERE query like '%cat%' AND query like '%dog%' ``` Thanks a lot in advance! Kevin
Well, up to and including simple plural (formed by adding just an 's') could be catered for by: ``` SELECT query FROM T WHERE CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', REPLACE('cat dog', ' ', 's?,|,'), 's?,') ; ``` See it in action: [SQL Fiddle](http://sqlfiddle.com/#!9/5948a/1). This, however, does just check, whether at least one of the search terms is found. If every single search term (still including simple plural) needs to be found at least once, one could try along ``` SELECT query FROM T WHERE CASE (LENGTH(@search) - LENGTH(REPLACE(@search, ' ', ''))) WHEN 0 THEN (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', @search, 's?,')) WHEN 1 THEN (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(@search, ' ', 1), 's?,')) AND (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(@search, ' ', -1), 's?,')) WHEN 2 THEN (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(@search, ' ', 1), 's?,')) AND (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(SUBSTRING_INDEX(@search, ' ', 2), ' ', -1), 's?,')) AND (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(@search, ' ', -1), 's?,')) WHEN 3 THEN (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(@search, ' ', 1), 's?,')) AND (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(SUBSTRING_INDEX(@search, ' ', 2), ' ', -1), 's?,')) AND (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(SUBSTRING_INDEX(@search, ' ', 3), ' ', -1), 's?,')) AND (CONCAT(',', REPLACE(query, ' ', ','), ',') REGEXP CONCAT(',', SUBSTRING_INDEX(@search, ' ', -1), 's?,')) ELSE FALSE END ; ``` [SQL Fiddle](http://sqlfiddle.com/#!9/21f62/1) This might be feasible for a limited number of search terms. 
Not sure it is really advisable / preferable compared to generating the equivalent **LIKE**, **INSTR** or even **REGEXP**. Please comment, if and as adjustment / further detail is required.
I think this is you want. ``` Hitesh> select * from test; +-------------------------+ | name | +-------------------------+ | i am the boss | | You will get soon | | Happy birthday bro | | the beautiful girl | | oyee its sunday | | cat and dog in a park | | dog and cat are playing | | cat | | dog | +-------------------------+ 9 rows in set (0.00 sec) Hitesh> set @a='cat'; Query OK, 0 rows affected (0.00 sec) Hitesh> set @b='dog'; Query OK, 0 rows affected (0.00 sec) Hitesh> set @var=concat(concat('((^(',@a,') .*)|(.* (',@a,') .*))'),'.*',concat('(.* (',@b,') .*)|(','.* (',@b,')$)')); Query OK, 0 rows affected (0.00 sec) Hitesh> set @var2=concat(concat('((^(',@b,') .*)|(.* (',@b,') .*))'),'.*',concat('((.* (',@a,') .*)|(','.* (',@a,')$))')); Query OK, 0 rows affected (0.00 sec) Hitesh> select @a, @b, @var, @var2; +------+------+--------------------------------------------------------+----------------------------------------------------------+ | @a | @b | @var | @var2 | +------+------+--------------------------------------------------------+----------------------------------------------------------+ | cat | dog | ((^(cat) .*)|(.* (cat) .*)).*(.* (dog) .*)|(.* (dog)$) | ((^(dog) .*)|(.* (dog) .*)).*((.* (cat) .*)|(.* (cat)$)) | +------+------+--------------------------------------------------------+----------------------------------------------------------+ 1 row in set (0.00 sec) Hitesh> select * from test where (name REGEXP @var) or (name REGEXP @var2); +-------------------------+ | name | +-------------------------+ | cat and dog in a park | | dog and cat are playing | +-------------------------+ 2 rows in set (0.00 sec) ```
SQL formula to match all words in any order in just one statement
[ "", "mysql", "sql", "regex", "" ]
I need to create a SQL query (MySql) that returns the last row from the "in" clause based on the order of the values in the IN(a,b,c). To simplify the problem, here is a sample table, and the results I need from the query. ``` For the Table PEOPLE, with values LAST_NAME FIRST_NAME ..... Smith Mike Smith Betty Smith Jane Jones Mike Jones Sally .... ``` I need a query with like ``` SELECT * FROM PEOPLE WHERE FIRST_NAME = 'Mike' AND LAST_NAME IN ('Smith','Jones') ``` to return Mike Jones, and a query like ``` SELECT * FROM PEOPLE WHERE FIRST_NAME = 'Mike' AND LAST_NAME IN ('Jones','Smith') ``` to return Mike Smith Basically the `WHERE LAST_NAME IN (....)` serves an "over-ride" function where the members of the list can over-ride values in the result set based on the order from the members of the `IN (...)` clause.
You can use [`FIELD`](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_field) in `ORDER BY` clause: ``` SELECT * FROM PEOPLE WHERE FIRST_NAME = 'Mike' AND LAST_NAME IN ('Smith','Jones') ORDER BY FIELD(LAST_NAME, 'Smith', 'Jones') DESC LIMIT 1; ``` Just repeat string literal values of `IN`, using the same order, inside `FIELD`.
giogios' solution is good. Without using LIMIT this should work too: if you have multiple Mike Joneses or Mike Smiths, you get all of those rows. That is unlikely, but not impossible, if your database doesn't use any of these fields as primary keys.

```
select *
From emp_master
where FIRST_NAME = 'Mike'
And (LAST_NAME = case LAST_NAME
                     when 'jones' then 'jones'
                     when 'Smith' then 'Smith'
                 end)
```
How to select the last row that matches a select where COL in (a,b,c)
[ "", "mysql", "sql", "" ]
I have just taken over the database and found that there are many areas where the data stored is not standardized. Records of operators' names are not stored in a standardized way. I am working towards standardization so that it is easier to analyze data. The following are the 3 tables that I need help with. I need to update the information on the table called TimeCards.

tblEmployees

```
ID     FirstName    LastName     Num   
234    Saijimon     Joseph306    306          
235    Pasquale     Partipilo    299
```

The main problem with this table is that there are numbers inside the last name as shown on ID 234, but some others are perfectly normal as shown on ID 235. I have made a new table below to rectify the changes.

tblEmployeeMain

```
ID     FirstName    LastName     Num   
234    Saijimon     Joseph       306          
235    Pasquale     Partipilo    299
```

Now to the main issue. I have a table below which gets its information from a form. And the form uses information from tblEmployees

tblTimeCards

```
TimeCard#    Employee     Hours
27742        Joseph306    35
27743        Partipilo    36
```

Is there a way to update all the existing entries in tblTimecards such that the information stored is shown as below?

```
TimeCard#    Employee                Hours
27742        Joseph Saijimon 306     35
27743        Partipilo Pasquade 299  36
```

The following is the query which I try to use, but since there are no joins, I am stuck with what to do.

```
UPDATE tblTimeCards 
SET tblTimeCards.Employee = tblEmployeeMain.[Last Name]+" "+tblEmployeeMain.[First Name]+" "+tblEmployeeMain.[no] 
WHERE tblTimeCards.Employee = "Joseph%" AND tblEmployeeMain.[Last Name] = "Joseph"
;
```

I am not familiar with the update query.
You can add a column called **ID\_Employee** to the **TimeCard** table. This column plays the role of a foreign key which can be used to create a relationship between table **tblEmployees** and **TimeCard**.

```
CREATE TABLE TimeCard
(
    Timecard INT PRIMARY KEY,
    Employee VARCHAR(50),
    Hours INT,
    ID_Employee INT
)
```

And then use the following code to update the specific data in the table TimeCard

```
UPDATE TimeCard
SET Employee = tblEmployees.LastName + ' ' + tblEmployees.FirstName + ' ' + CAST(tblEmployees.Num AS VARCHAR)
FROM tblEmployees
INNER JOIN TimeCard ON tblEmployees.ID = TimeCard.ID_Employee
```

With the tables in their current form, there is no column you can use to join the data between **tblEmployees** and **TimeCard** — that is why the **ID\_Employee** column is needed.
You are trying to update tblTimeCards from tblEmployeeMain; in that case you are supposed to use UPDATE with a JOIN:

```
UPDATE tblTimeCards
SET tblTimeCards.Employee = tblEmployeeMain.[Last Name]+" "+tblEmployeeMain.[First Name]+" "+tblEmployeeMain.[no]
from (select timcard#,First name, Last name, no from tblemployeemain)
WHERE tblTimeCards.timecard# = tblEmployeeMain.timcard#;
```
Update Table from another table without similar values
[ "", "sql", "ms-access", "ms-access-2007", "" ]
Can SQL Server triggers prevent updating records and allow inserting new records? Thanks, Przemek
To prevent updates on a table, you can use an `INSTEAD OF UPDATE` trigger. You can check this note: "<https://msdn.microsoft.com/en-us/library/aa258254%28SQL.80%29.aspx?f=255&MSPPError=-2147217396>"
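As an illustrative sketch (the table and trigger names here are invented — adapt them to your own schema), such a trigger on SQL Server could look like:

```sql
CREATE TRIGGER trg_MyTable_BlockUpdates
ON dbo.MyTable            -- placeholder table name
INSTEAD OF UPDATE
AS
BEGIN
    -- Runs in place of the UPDATE, so the base table is never modified
    RAISERROR ('Updates are not allowed on this table.', 16, 1);
END;
```

Since no `INSTEAD OF INSERT` trigger is defined, `INSERT` statements continue to work normally.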
Couldn't this be accomplished by creating a SQL Server user that has INSERT access on the destination tables but not UPDATE access? Create a new user with no permissions, then: ``` GRANT INSERT ON [blah] TO [user] ``` I admit, my knowledge of SQL Server triggers is limited, but this seems like it would work.
Trigger to prevent updates but allow inserts
[ "", "sql", "sql-server", "triggers", "" ]
I am using `Oracle` and using in my **toad** to check what the result is. But I am getting error as > ORA-01740: missing double quote in identifier Here is my query ``` SELECT T1.Project_Id, PROPERTY_NAME Project_name, T1.Vehicle_No, T1.Creation_date, T1.Time_In, T1.Time_Out FROM XXCUS.XX_SUPINV T1 INNER JOIN XXACL_PN_PROJBUILD_V T2 ON T1.Project_Id = T2.Project_id WHERE t1.Project_Id = '" + ddlProjectName.SelectedValue + "' AND Creation_date BETWEEN to_date fnd_conc_date.string_to_date('"TxtIndate.Text"') AND to_date fnd_conc_date.string_to_date('"txtOutDate.Text"')" ``` Please suggest where I am mistaking
This is the actual Query (with data) you're trying to execute.

```
SELECT T1.Project_Id, PROPERTY_NAME Project_name, T1.Vehicle_No,
       T1.Creation_date, T1.Time_In, T1.Time_Out
FROM XXCUS.XX_SUPINV T1
INNER JOIN XXACL_PN_PROJBUILD_V T2 ON T1.Project_Id = T2.Project_id
WHERE t1.Project_Id = '409'
AND Creation_date BETWEEN to_date('01-jan-2015','DD-mon-yyyy') AND to_date('01-jan-2012','DD-mon-yyyy')
```

Your programmatic version could be (just derived from your base version)

```
sl = "SELECT T1.Project_Id, PROPERTY_NAME Project_name, T1.Vehicle_No, T1.Creation_date, T1.Time_In, T1.Time_Out FROM XXCUS.XX_SUPINV T1 INNER JOIN XXACL_PN_PROJBUILD_V T2 ON T1.Project_Id = T2.Project_id WHERE t1.Project_Id = '" + ddlProjectName.SelectedValue + "' AND Creation_date BETWEEN fnd_conc_date.string_to_date('" + TxtIndate.Text+ "') AND fnd_conc_date.string_to_date('"+ txtOutDate.Text +"')"
```

> To improve readability and avoid [`SQL*Injection`](https://msdn.microsoft.com/en-us/library/ff648339.aspx), you should [try using bind variables](https://stackoverflow.com/questions/3790424/usage-of-oracle-binding-variables-with-like-in-c-sharp) (I am not so thorough with the .NET syntax)
Seems that you have additional `"` in the end of query: ``` to_date fnd_conc_date.string_to_date('"txtOutDate.Text"')" -- here ``` should be like: ``` to_date fnd_conc_date.string_to_date('"txtOutDate.Text"') ``` > An initial double quote (") was found without a closing quote. If an > identifier contains a blank or special characters other than $, #, or > \_, it must be enclosed in double quotes. [Documentation](http://ora-01740.dbquanti.eu/)
Error with the query in Oracle
[ "", "sql", "asp.net", "oracle", "oracle10g", "" ]
I search about the error, but I only find different answers that didn't work for me. I have this query, in SQL Server: ``` DECLARE @column_name varchar (25), @data_column int, @table_name varchar (25) DECLARE @mySql nvarchar (MAX) SET NOCOUNT ON; SET @column_name = 'Excellent' SET @table_name = 'CSAT' SET @data_column = 10 SET @mySql = 'INSERT INTO '+@table_name+'('+@column_name+') VALUES('+@data_column+')' EXEC (@mySql) ``` When I execute it, it shows me this error: > Conversion failed when converting the varchar value 'INSERT INTO CSAT(Excellent) VALUES(' to data type int. All the columns are Int and allow nulls. I have to make a conversion or something? I appreciate your help!
@data\_column is an int, so you need to convert it to varchar because you are building a string.

```
SET @mySql = 'INSERT INTO '+@table_name+'('+@column_name+') VALUES('+ Convert(Varchar(10), @data_column)+')' 
```

When SQL Server encounters an expression that mixes strings and int, it attempts to convert the string to an int (instead of the other way around). This is documented here: [SQL Server Data Type Precedence](https://msdn.microsoft.com/en-us/library/ms190309.aspx "Sql Server Data Type Precedence")
add some space like this ``` SET @mySql = 'INSERT INTO '+@table_name+' ('+convert(varchar(50),@column_name)+') VALUES('+@data_column+')' ```
SQL query error: Conversion failed when converting the varchar value
[ "", "sql", "sql-server", "parameters", "data-conversion", "" ]
I have a query that is used for generating names on each node in a tree. This method now has one input parameter (p\_optional\_id\_node NUMBER). My query has a condition that looks like this: `(p_optional_id_node IS NULL OR p_optional_id_node=n.id)` This procedure takes one minute when i use it with: `p_optional_id_node=NULL` and ~2 seconds if: `p_optional_id_node=123.` If i change the condition from: `(p_optional_id_node IS NULL OR p_optional_id_node=n.id)` to: `p_optional_id_node=n.id` it takes ~25ms. How can i optimize a query like this to use ~25ms if i set p\_optional\_id to a number and ~1 minute if i set it to null? I don't want to duplicate the logic in this method to have one procedure for each case.
You can do it like this: ``` WHERE NVL(p_optional_id_node, n.id) = n.id ```
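For illustration — the table name below is a placeholder, since the question doesn't show the actual query — the condition slots into the tree query like this:

```sql
SELECT n.*
FROM   tree_nodes n   -- placeholder for the real tree table
WHERE  NVL(p_optional_id_node, n.id) = n.id;
```

When the parameter is NULL, `NVL` substitutes `n.id`, so the predicate is trivially true for every row; otherwise only the matching node is returned. Note this assumes `n.id` itself is never NULL.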
If you want to return all rows in mytable when p\_optional\_id\_node is null, and all records with id = p\_optional\_id\_node otherwise, then you can try the following: To avoid duplicating the logic in this method, you can create a table-valued function or view for the main logic (select \* from mytable or something else) and then, using an [IF](http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/controlstructures.htm#CJAJGBEE) statement in the method:

1. In the case when p\_optional\_id\_node is null, return a select from the function/view without a where clause
2. In the case when p\_optional\_id\_node is not null, return a select from the function/view with where it = p\_optional\_id\_node
How to optimize query for optional input id (either one specific row or whole table)
[ "", "sql", "oracle", "plsql", "" ]
I have a `CREATE TABLE` document and I need to get a listing of all the table names. This is what I have: ``` CREATE TABLE `mturk_reviewqueue` ( `id` int(11) NOT NULL AUTO_INCREMENT, `catalog_id` int(11) DEFAULT NULL, `tv_series_id` int(11) DEFAULT NULL, `added_on` datetime NOT NULL, `correct_url` varchar(100) DEFAULT NULL, `notes` varchar(400) DEFAULT NULL, `completed_on` datetime DEFAULT NULL, `completed_by_id` int(11) DEFAULT NULL, `top_url` varchar(100) DEFAULT NULL, `diff_score` decimal(5,2) DEFAULT NULL, `ip_checkout` varchar(24) DEFAULT NULL, `incorrect_original_url` varchar(100) DEFAULT NULL, `fix_notes` varchar(1000) DEFAULT NULL, `is_promotional_content` tinyint(1) DEFAULT NULL, `contains_multiple_titles` tinyint(1) DEFAULT NULL, `is_garbage_series` tinyint(1) DEFAULT NULL, PRIMARY KEY (`id`), KEY `completed_by` (`completed_by_id`), KEY `catalog_id` (`catalog_id`), KEY `tv_series_id` (`tv_series_id`), CONSTRAINT `mturk_reviewqueue_ibfk_1` FOREIGN KEY (`completed_by_id`) REFERENCES `auth_user` (`id`), CONSTRAINT `mturk_reviewqueue_ibfk_2` FOREIGN KEY (`catalog_id`) REFERENCES `main_catalog` (`id`), CONSTRAINT `mturk_reviewqueue_ibfk_3` FOREIGN KEY (`tv_series_id`) REFERENCES `main_tvseriesinstance` (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=196089 DEFAULT CHARSET=utf8; -- Create syntax for TABLE 'mturk_worker' CREATE TABLE `mturk_worker` ( `worker_id` varchar(22) NOT NULL DEFAULT '', `notes` varchar(100) NOT NULL, ``` ...etc... And this is what I need: ``` mturk_reviewqueue mturk_worker etc... ``` So far I have: ``` r'CREATE\sTABLE\s`(.+)`\s\(' ``` This is getting me the table name, but I can't seem to get rid of all the junk after the table name but before the next one. What would be the best regex to do this?
``` r'CREATE\sTABLE\s`(.+?)`' ``` This should do it for you. The `+?` makes the quantifier non-greedy, so the match stops at the first closing backtick (right after the table name) instead of running on to the last backtick in the line.
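As an illustrative sketch of the pattern in use (assuming Python's `re` module, which the raw-string syntax suggests; the DDL here is abbreviated from the question):

```python
import re

ddl = """
CREATE TABLE `mturk_reviewqueue` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Create syntax for TABLE 'mturk_worker'
CREATE TABLE `mturk_worker` (
  `worker_id` varchar(22) NOT NULL DEFAULT ''
);
"""

# The non-greedy `.+?` stops at the first closing backtick,
# capturing only the table name.
tables = re.findall(r'CREATE\sTABLE\s`(.+?)`', ddl)
```

Running this yields one capture per `CREATE TABLE` statement, with none of the column-definition "junk" after the name.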
If you'd like a command line option. Say your sql file is named "foo": ``` cat foo | grep 'CREATE TABLE' | cut -d\` -f2 ``` This should work on MacOS.
Regex to get names of all SQL tables
[ "", "mysql", "sql", "regex", "" ]
I have a condition in which I need to use multiple parameters in the `where` clause using a conditional operator. I have written the following query, and in this condition I can't use the `in` clause. Kindly guide me how I can return multiple parameters from the case clause. ``` select * from cardimport where STATUS = CASE WHEN STATUS = '' THEN 'F' ELSE STATUS END ``` When Status is `null` I want to return `'F'` and `'V'`. Right now it's returning only `'F'` **EDITED** ``` select * from CARDIMPORT where STATUS = CASE WHEN $P{status} = '' THEN 'E' ELSE $P{status} END ``` When the STATUS is null, I want to show the records from all statuses, that is `'E'`, `'I'`, `'A'`
You should try the following: ``` Declare @status nvarchar(50) = 'XXXX' select * from cardimport where 1 = case when isnull(@status,'') = '' then 1 else case when status = @status then 1 else 0 end end ``` It will give you all the rows when status is null; when status is not null it will give you only the matching data.
I think you don't need CASE, rather you need **NVL** and **NVL2**. If I understand your requirement correctly, you need a particular row when the status value is NULL, else when not null then return all the rows. For example, in the below example from `EMP` table. If the parameter value is `NULL`, I expect only that row where `comm` = `300` else when `not null` then return all rows. **NULL** ``` SQL> VAR a NUMBER; SQL> EXEC :a := NULL PL/SQL procedure successfully completed. SQL> SELECT empno, comm FROM emp WHERE NVL(comm, 9999) = NVL2(:a, NVL(comm, 9999), 300); EMPNO COMM ---------- ---------- 7499 300 ``` **NOT NULL** ``` SQL> EXEC :a :=1400 PL/SQL procedure successfully completed. SQL> SELECT empno, comm FROM emp WHERE NVL(comm, 9999) = NVL2(:a, NVL(comm, 9999), 300); EMPNO COMM ---------- ---------- 7369 7499 300 7521 500 7566 7654 1400 7698 7782 7788 7839 7844 7876 7900 7902 7934 14 rows selected. SQL> ``` The above uses the following logic: * If, the value passed is NULL, then return row where `COMM = 300`. * Else if, value passed is NOT NULL, then return all the rows.
How to Return Multiple Values from CASE clause in Where Condition
[ "", "sql", "oracle", "case", "" ]
For example, if I did this: ``` SELECT il.itm, il.price, il.strt_dt, il.exp_dt, pl.itm_cd FROM itm_list il, prc_list pl WHERE il.prc_cd = pl.prc_cd GROUP BY il.itm, il.price, il.strt_dt, il.exp_dt, pl.itm_cd ``` Would the select only group rows where those fields are *exactly* the same across all 5 columns? As in, it will remove all duplicate rows but keep every unique combination of those columns? So if I had the select return: ``` il.itm ----- il.price ----- il.strt_dt ----- il.exp_dt ----- pl.itm_cd APPLE $1 1/1/2011 1/1/2012 191 BANANA $1 1/1/2011 1/1/2013 177 APPLE $1 1/1/2011 1/1/2012 191 APPLE $2 1/1/2012 1/1/2013 191 ``` Would the results return Result A... (every row with some kind of uniqueness between all 5 columns) ``` il.itm ----- il.price ----- il.strt_dt ----- il.exp_dt ----- pl.itm_cd APPLE $1 1/1/2011 1/1/2012 191 BANANA $1 1/1/2011 1/1/2013 177 APPLE $2 1/1/2012 1/1/2013 191 ``` OR Result B?... (if they group on one column, then there will only be one entry) ``` il.itm ----- il.price ----- il.strt_dt ----- il.exp_dt ----- pl.itm_cd APPLE $1 1/1/2011 1/1/2012 191 BANANA $1 1/1/2011 1/1/2013 177 ```
It would give Result A. ``` Group By X, Y means put all those with the same values for both X and Y in the one group. ``` So in your case, each unique combination of values across all five columns forms its own group, and every distinct combination of `il.itm ----- il.price ----- il.strt_dt ----- il.exp_dt ----- pl.itm_cd` will appear exactly once in the result. For example, for the record ``` APPLE $1 1/1/2011 1/1/2012 191 ``` each record having exactly the same values for all five columns would be grouped along with it. If we change any column value, say `price = $10`, then the record differs in the five fields taken together and is therefore treated as another unique group. Hence the answer: **Result A**
You can use DISTINCT function like this ``` SELECT DISTINCT il.itm, il.price, il.strt_dt, il.exp_dt, pl.itm_cd FROM itm_list il, prc_list pl WHERE il.prc_cd = pl.prc_cd ```
If I do a GROUP BY function with multiple fields, will the select only show rows where all fields in the GROUP BY exactly match?
[ "", "sql", "oracle", "group-by", "" ]
I would like to find the difference between two tables, because there is a column in table 2 which was linked to the table 1 ID column, but sadly a site manager deleted items from table 1 and now there are a lot of unlinked rows in table 2, which causes problems on the site. For example here are the table structures ``` Table 1 table 2 ID | name ID | value (this is the ID from table 1) 1 | one 1 | 1 2 | two 2 | 2 3 | three 3 | 4 6 | six 4 | 4 7 | seven 5 | 5 6 | 5 7 | 6 8 | 7 9 | 1 10 | 1 ``` As you see in table 2 some IDs from table 1 are in multiple rows, and I would like to get all of them which aren't present in table 1 as the return of the query. Just for clarification this is what I would like to get from the query Result: ``` ID (from table 2) | value 3 | 4 4 | 4 5 | 5 6 | 5 ``` I know I could use for example NOT IN, but then I would have to put in about a 1000 IDs, and table 1 contains many more items than what are linked in table 2 How can I do a query which will return a result like above?
Use `NOT EXISTS` ``` select * from table2 A Where Not exists (select 1 from table1 B Where A.value = B.ID) ``` Or `LEFT OUTER JOIN` ``` select * from table2 A LEFT OUTER JOIN table1 B on A.value = B.ID Where B.ID IS NULL ``` Note that it is table2's `value` column that references table1's `ID`, so that is the pair to compare.
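A runnable sketch of the `LEFT OUTER JOIN` variant against SQLite, using the sample data from the question and joining table2's `value` to table1's `ID`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, name TEXT);
    CREATE TABLE table2 (id INTEGER, value INTEGER);
    INSERT INTO table1 VALUES (1,'one'),(2,'two'),(3,'three'),(6,'six'),(7,'seven');
    INSERT INTO table2 VALUES (1,1),(2,2),(3,4),(4,4),(5,5),
                              (6,5),(7,6),(8,7),(9,1),(10,1);
""")

# Rows of table2 whose value has no matching id in table1
orphans = conn.execute("""
    SELECT A.id, A.value
    FROM table2 A
    LEFT OUTER JOIN table1 B ON A.value = B.id
    WHERE B.id IS NULL
    ORDER BY A.id
""").fetchall()
```

This returns exactly the four unlinked rows shown in the question's desired result.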
General query : ``` (select * from table1 a where not exists (select * from table2 b where a.id=b.id)) union select * from table2 a where not exists (select * from table1 b where a.id=b.id); ``` NOTE: This query gives those rows of table 1 which doesn't exists in table 2 and rows of table 2 which doesn't exists in table 1
Mysql compare two tables and display only the difference
[ "", "mysql", "sql", "" ]
Forgive my SQL knowledge, but I have a `Person` table with following data - ``` Id Name ---- ------ 1 a 2 b 3 b 4 c ``` and I want the following result - ``` Name Total ------ ------ b 2 ``` If I use the `GROUP BY` query - ``` SELECT Name, Total=COUNT(*) FROM Person GROUP BY Name ``` It gives me - ``` Name Total ------ ------ a 1 b 2 c 1 ``` But I want only the one with maximum count. How do I get that?
If you want ties ``` SELECT top (1) with ties Name, COUNT(*) AS [count] FROM Person GROUP BY Name ORDER BY count(*) DESC ```
The easiest way to do this in SQL Server would be to use the `top` syntax: ``` SELECT TOP 1 Name, COUNT(*) AS Total FROM Person GROUP BY Name ORDER BY 2 DESC ```
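For engines without `TOP` (e.g. MySQL or SQLite), an equivalent sketch uses `LIMIT` instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "b"), (4, "c")])

# Order the groups by their count, descending, and keep only the first
top = conn.execute("""
    SELECT name, COUNT(*) AS total
    FROM person
    GROUP BY name
    ORDER BY total DESC
    LIMIT 1
""").fetchone()
```

With the question's sample data this picks out the single most frequent name.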
Selecting the maximum count from a GROUP BY operation
[ "", "sql", "select", "sql-server-2008-r2", "group-by", "max", "" ]
I am using MS SQL Server and I have a table named `Logs` that looks like this: ![table logs(dateTime, errorDescription)](https://i.stack.imgur.com/7hPad.jpg) Records are added frequently so the table becomes quite big after few days/weeks. I need to perform a little cleanup periodically: I need query that would delete older rows and keep only the most recent 100 rows in the table. I understand it would have been better to have it *delete records older than some date*... but I am asked to do it as described above.
You can use one of the following: ``` -- offset clause WITH goners AS ( SELECT * FROM Logs ORDER BY DateTime DESC OFFSET 100 ROWS ) DELETE FROM goners -- numbered rows WITH goners AS ( SELECT ROW_NUMBER() OVER(ORDER BY DateTime DESC) AS rn, Logs.* FROM Logs ) DELETE FROM goners WHERE rn > 100 -- nth value -- note that this "keeps" all rows that tie for last place DELETE FROM Logs WHERE DateTime < ( SELECT MIN(DateTime) FROM ( SELECT TOP 100 DateTime FROM Logs ORDER BY DateTime DESC ) AS x ) ```
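A sketch of the same idea in SQLite, which can keep the newest N rows by deleting everything outside a `LIMIT` subquery (this assumes the timestamps are unique):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (dt TEXT)")
conn.executemany("INSERT INTO logs VALUES (?)",
                 [(f"2015-06-{day:02d}",) for day in range(1, 31)])

keep = 10  # keep only the 10 most recent rows (100 in the question)
conn.execute("""
    DELETE FROM logs WHERE dt NOT IN (
        SELECT dt FROM logs ORDER BY dt DESC LIMIT ?
    )
""", (keep,))

remaining = [row[0] for row in
             conn.execute("SELECT dt FROM logs ORDER BY dt").fetchall()]
```

After the delete, only the most recent `keep` timestamps survive.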
While I agree with others that this is probably not the way to go, here's a way to do it anyway (note the `DELETE a FROM ... a` form, which T-SQL requires when deleting through an alias): ``` ;WITH keepers AS ( SELECT TOP 100 [DateTime] FROM dbo.Logs ORDER BY [DateTime] DESC ) DELETE a FROM dbo.Logs a WHERE NOT EXISTS ( SELECT 1 FROM keepers b WHERE b.[DateTime] = a.[DateTime] ) ```
Delete all rows except 100 most recent ones
[ "", "sql", "sql-server", "t-sql", "datetime", "sql-delete", "" ]
I need to select only that data for which US data is not null and for other country it is null. ``` field1 field2 field3 A US A A DE null A IN null A JP null B US null B DE B B IN C B JP null ```
``` select field1 from ( select field1, field2, count(*) over (partition by field1) not_null_count from tablename where field3 is not null ) where field2 = 'US' and not_null_count = 1 ```
The query should be remarkably close to the requirements: ``` select * from TableName where (field2='US' and field3 is not null) or (field2<>'US' and field3 is null) ```
select data from a table for only null
[ "", "sql", "oracle", "" ]
I try to get the first (lowest) start date of a employee: ``` SELECT a.StartDate, a.EmpID, b.fullname FROM Employees a LEFT JOIN dbo.info b ON a.EmpID = b.EmpID WHERE type = 800 GROUP By EmpId ORDER BY fullname ``` The result is similar to this: ``` StartDate | EmpId 1992-12-01 00:00:00.000 | 7 2014-01-01 00:00:00.000 | 7 2015-01-01 00:00:00.000 | 7 1992-12-01 00:00:00.000 | 10 2014-01-01 00:00:00.000 | 10 2015-01-01 00:00:00.000 | 10 1992-01-01 00:00:00.000 | 16 2014-01-01 00:00:00.000 | 16 2015-01-01 00:00:00.000 | 16 ``` To get the lowest date of each employee I tried this, but it resulted in an error. ``` SELECT MIN(a.StartDate), a.EmpID, b.fullname FROM Employees a LEFT JOIN dbo.info b ON a.EmpID = b.EmpID WHERE type = 800 GROUP By EmpId ORDER BY fullname ``` What is the correct query to get the result as below: ``` StartDate | EmpId 1992-12-01 00:00:00.000 | 7 1992-12-01 00:00:00.000 | 10 1992-01-01 00:00:00.000 | 16 ```
``` SELECT a.StartDate, a.EmpID, b.fullname FROM Employees a LEFT JOIN dbo.info b ON a.EmpID = b.EmpID WHERE type = 800 AND NOT EXISTS ( SELECT 'a' FROM Employees a2 WHERE a.EmpID = a2.EmpID AND a2.StartDate < a.StartDate ) ```
You want a result row per `EmpID`, so that's the column you should group by: ``` SELECT MIN(StartDate), EmpID FROM Employees WHERE type = 800 AND EmpID BETWEEN 1 AND 500 GROUP BY EmpID ```
group results by min datetime
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I get the below error: > 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '('text','text','text',1,2,3,4,5,6,7,' at line 4 when trying to run this SQL command: ``` insert into `table` VALUES (`col1`,`col2`,`col3`,`col4`,`col5`,`col6`,`col7`,`col8`,`col9`,`col10`,`col11`,`col12`,`col13`,`col14`) ('text','text','text',1,2,3,4,5,6,7,8,'text','text','text'), ('text','text','text',11,12,13,14,15,16,17,18,'text','text','text') ``` **UPDATE**: I'm a complete moron, added col names before `VALUES`...
``` insert into `table` (`col1`,`col2`,`col3`,`col4`,`col5`,`col6`,`col7`,`col8`,`col9`,`col10`,`col11`,`col12`,`col13`,`col14`) VALUES ('text','text','text',1,2,3,4,5,6,7,8,'text','text','text'), ('text','text','text',11,12,13,14,15,16,17,18,'text','text','text') ```
``` insert into `table`(`col1`,`col2`,`col3`,`col4`,`col5`,`col6`,`col7`,`col8`,`col9`,`col10`,`col11`,`col12`,`col13`,`col14`) VALUES('text','text','text',1,2,3,4,5,6,7,8,'text','text','text'), ('text','text','text',11,12,13,14,15,16,17,18,'text','text','text') ```
MySQL - Syntax error when trying to insert strings/numbers
[ "", "mysql", "sql", "syntax-error", "" ]
I have a SQL query as follows: ``` select dateName(month, DateAccessed) "Month" , count(1) totalVisits , count(distinct l.userName) UsersVisit from and where clause goes here group by dateName(month, DateAccessed) order by Month ``` The output I get is ``` Month totalVisits UsersVisit April 100 25 February 200 35 July 300 45 March 400 55 May 500 65 ``` But the output I want is in the order of: ``` February 200 35 March 400 55 April 100 25 May 500 65 July 300 45 ``` How can I get this?
Use either `month(DateAccessed)` or `datepart(month, DateAccessed)` to extract the month number and use that in the `order by` clause. You will however have to add it to the `group by` clause too: ``` SELECT DATENAME(month, DateAccessed) "Month", COUNT(1) totalVisits, COUNT(DISTINCT l.userName) UsersVisit FROM and where clause goes here GROUP BY MONTH(dateaccessed), DATENAME(month, DateAccessed) ORDER BY MONTH(dateaccessed); ``` If your data holds data for more than one year you should include the year in the group- and order by clauses (and select statement) if you don't already make sure you only get data from one year in the where clause.
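The same principle sketched in SQLite (which has no `DATENAME`; `strftime('%m', ...)` stands in for the month number here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (DateAccessed TEXT)")
conn.executemany("INSERT INTO visits VALUES (?)",
                 [("2015-04-01",), ("2015-02-10",),
                  ("2015-02-20",), ("2015-03-05",)])

# Group and order by the month *number*, not the month name,
# so February sorts before April
rows = conn.execute("""
    SELECT strftime('%m', DateAccessed) AS mnum, COUNT(*)
    FROM visits
    GROUP BY mnum
    ORDER BY mnum
""").fetchall()
```

Ordering by the name instead would put April first alphabetically; ordering by the number gives calendar order.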
You just need to change the `ORDER BY` clause to order by the month number of the `DateAccessed`: ``` select dateName(month, DateAccessed) "Month" , count(1) totalVisits , count(distinct l.userName) UsersVisit from and where clause goes here group by dateName(month, DateAccessed) order by Month(DateAccessed) ```
Sort Sql Query by Month
[ "", "sql", "sql-server", "sql-date-functions", "" ]
I'm calling a stored procedure and passing in 2 dates as parameters from my windows application. Its returning all rows rather than 2 rows that I'm expecting. The stored procedure is: ``` ALTER procedure [dbo].[Get_Entries] @Start_Date datetime=null, @End_Date datetime=null as begin SELECT * FROM MyTable WHERE (MyTable.Date BETWEEN @Start_Date AND @End_Date OR (@Start_Date IS NULL AND @End_Date IS NULL)) ORDER BY MyTable.Date desc end ``` The following `sp_executesql` query returns all rows: ``` exec sp_executesql N'Get_Entries', N'@Start_Date datetime, @End_Date datetime', @Start_Date='2015-06-06 11:35:06.437', @End_Date='2015-07-06 11:35:06.437' ``` However if I run the stored procedure manually from Management Studio I get the expected 2 rows: ``` USE [MyDatabase] GO DECLARE @return_value int EXEC @return_value = [dbo].[Get_Entries] @Start_Date = N'2015-06-06 11:35:06.437', @End_Date = N'2015-07-06 11:35:06.437' SELECT 'Return Value' = @return_value GO ``` Any ideas why `sp_executesql` isn't returning the filtered list? Its returning all rows.
The way to solve this is to use the (slightly adapted) **ISO-8601 date format** that is supported by SQL Server - this format works **always** - regardless of your SQL Server language and dateformat settings. The [ISO-8601 format](http://msdn.microsoft.com/en-us/library/ms180878.aspx) is supported by SQL Server comes in two flavors: * `YYYYMMDD` for just dates (no time portion); note here: **no dashes!**, that's very important! `YYYY-MM-DD` is **NOT** independent of the dateformat settings in your SQL Server and will **NOT** work in all situations! or: * `YYYY-MM-DDTHH:MM:SS` for dates and times - note here: this format *has* dashes (but they *can* be omitted), and a fixed `T` as delimiter between the date and time portion of your `DATETIME`. This is valid for SQL Server 2000 and newer. If you use SQL Server 2008 or newer and the `DATE` datatype (only `DATE` - **not** `DATETIME`!), then you can indeed also use the `YYYY-MM-DD` format and that will work, too, with any settings in your SQL Server. Don't ask me why this whole topic is so tricky and somewhat confusing - that's just the way it is. But with the `YYYYMMDD` format, you should be fine for any version of SQL Server and for any language and dateformat setting in your SQL Server. My recommendation for SQL Server 2008 and newer is to use `DATE` if you only need the date portion, and `DATETIME2(n)` when you need both date and time. You should try to start phasing out the `DATETIME` datatype if ever possible
Let's take a look at your [`sp_executesql`](https://msdn.microsoft.com/en-us/library/ms188001.aspx) statement: ``` exec sp_executesql N'Get_Entries', N'@Start_Date datetime, @End_Date datetime', @Start_Date='2015-06-06 11:35:06.437', @End_Date='2015-07-06 11:35:06.437' ``` This query tells SQL Server to execute the following query: ``` 'Get_Entries' ``` The way you are invoking `sp_executesql` says the query uses the following parameters: ``` '@Start_Date datetime,@End_Date datetime' ``` However, the query text string 'Get\_Entries' does not use these parameters. Therefore, SQL Server will not put the parameters into the query. The result query is equivalent to the following code: ``` exec Get_Entries ``` Without specifying any parameters, your stored procedure will return all rows. To use the parameters, you need to place them in your dynamic SQL query like below. I renamed the dynamic SQL parameters to make it clearer where they are used in the query: ``` exec sp_executesql N'Get_Entries @Start_Date = @StartDateParm, @End_Date = @EndDateParm', N'@StartDateParm datetime, @EndDateParm datetime', @StartDateParm='2015-06-06 11:35:06.437', @EndDateParm='2015-07-06 11:35:06.437' ``` Note that you don't need to put a stored procedure call in a call to `sp_executesql`. It is more efficient to call the procedure directly.
sp_executesql vs manually executing gives different results
[ "", "sql", "sql-server", "winforms", "" ]
I'm very new to SQL and was wondering if someone could direct me on how to obtain specific rows of data using the WHERE clause of an SQL file. This is my current WHERE clause: ``` CREATE or REPLACE view V_Status1 AS SELECT A.C0401_aid, B.C0401_description DESCR, A.c0432_date DATEREQUIRED, A.C0432_sampl_value_r FROM vhis_data A, T0401_accounts B WHERE A.C0401_aid = B.C0401_aid AND (B.C0401_aid = 5486 OR B.C0401_aid = 5489 OR B.C0401_aid = 5490); ``` 5486, 5489 and 5490 represent the accounts that I need data from, but my current clause does not make sense for what I want. I need the data from **all three accounts** not just one. Please let me know if you need any clarification.
You can use the `in` operator example: ``` SELECT column_name(s) FROM table_name WHERE column_name IN (value1,value2,...); ``` Your case will looks like : ``` WHERE A.C0401_aid = B.C0401_aid AND B.C0401_aid in(5486,5489,5490); ```
Cleaned up the syntax, but this should perform the same operation your current query performs. ``` CREATE or REPLACE view V_GTAAStatus1 AS SELECT A.C0401_aid, B.C0401_description DESCR, A.c0432_date DATEREQUIRED, A.C0432_sampl_value_r [column_name] FROM vhis_data A LEFT JOIN T0401_accounts B on A.C0401_aid = B.C0401_aid WHERE B.C0401_aid IN (5486,5489,5490); ```
How to obtain specific data in a SQL WHERE clause
[ "", "sql", "oracle", "" ]
I am reading the MySql tutorial in the [docs](https://dev.mysql.com/doc/refman/5.0/en/multiple-tables.html) and have the following tables and SQL statements: Event table: ``` +----------+------------+----------+------------------------------+ | name | date | type | remark | +----------+------------+----------+------------------------------+ | Fluffy | 1995-05-15 | litter | 4 kittens, 3 females, 1 male | | Buffy | 1993-06-23 | litter | 5 puppies, 2 female, 3 male | | Buffy | 1994-06-19 | litter | 3 puppies, 3 female | | Chirpy | 1999-03-21 | vet | needed beak streightened | | Slim | 1997-08-03 | vet | broken rib | | Bowser | 1991-10-12 | kennel | NULL | | Fang | 1991-10-12 | kennel | NULL | | Fang | 1998-08-28 | birthday | Gave him new chew toy | | Claws | 1998-03-17 | birthday | Gave him a flea collar | | Whistler | 1998-12-09 | birthday | First birthday | +----------+------------+----------+------------------------------+ ``` Pet table: ``` +----------+--------+---------+------+------------+------------+ | name | owner | species | sex | birth | death | +----------+--------+---------+------+------------+------------+ | Fluffy | Harold | cat | f | 1993-02-04 | NULL | | Claws | Gwen | cat | m | 1994-03-17 | NULL | | Buffy | Harold | dog | f | 1989-05-13 | NULL | | Fang | Benny | dog | m | 1990-08-27 | NULL | | Bowser | Diane | dog | m | 1989-03-31 | 1995-07-29 | | Chirpy | Gwen | bird | f | 1998-09-11 | NULL | | Whistler | Gwen | bird | NULL | 1997-12-09 | NULL | | Slim | Benny | snake | m | 1996-04-29 | NULL | | Puffball | Diane | hamster | f | 1999-03-30 | NULL | | Jenny | Robert | dog | f | 2004-01-01 | 2014-05-04 | +----------+--------+---------+------+------------+------------+ ``` SQL: ``` select pet.name, ( YEAR(date) - YEAR(birth) ) - ( RIGHT(date,5) < RIGHT(birth,5) ) AS age, remark from pet inner join event on pet.name = event.name where event.type = 'litter'; ``` I understand the SQL statement except for this one: `( YEAR(date) - YEAR(birth) ) - ( RIGHT(date,5) < RIGHT(birth,5) )` A step by step explanation would greatly help. I know that the `YEAR()` function is used to extract the year from a date.
As you've mentioned, `YEAR()` gets the year from the date. We will use Fluffy as an example, with `birth = 1993-02-04` and `date = 1995-05-15`. Step 1: Subtract the extracted year from both dates: `( YEAR(date) - YEAR(birth) )`. You now have 1995 - 1993, which is equal to 2. Step 2: `RIGHT(date,5)` reads the last 5 characters of the date string, so `RIGHT(date,5)` gives `'05-15'` and `RIGHT(birth,5)` gives `'02-04'`: the month-and-day part of each date. Step 3: Now we get on to the `<` operator, which returns 1 or 0 depending on whether the condition is satisfied. Since `'05-15'` is not less than `'02-04'`, the comparison is false, so it returns 0. The whole expression actually checks whether the month-day part of your `date` is earlier than the month-day part of your `birth` date, which determines whether there is a sort of remaining part of the year before the birthday comes around again. If there is, the comparison returns 1, which is subtracted from the year difference you computed earlier. In our case, since `<` returns 0, we can definitely say that Fluffy's age is 2 - 0, which is 2. If, however, Fluffy's `birth` were, say, 1993-12-04, the `<` comparison would yield 1, meaning the last year is not yet complete, which brings a result of 2 - 1, which is 1. Sorry if it's a bit messy.
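As a sketch of the same arithmetic in Python, where a boolean likewise coerces to 0/1 when subtracted, and a `(month, day)` tuple comparison stands in for comparing the `MM-DD` substrings:

```python
from datetime import date

def age(birth: date, on: date) -> int:
    # YEAR(date) - YEAR(birth), minus 1 when the month-day part of
    # `on` is still before the month-day part of `birth`
    return (on.year - birth.year) - ((on.month, on.day) < (birth.month, birth.day))

fluffy = age(date(1993, 2, 4), date(1995, 5, 15))        # birthday already passed
late_birthday = age(date(1993, 12, 4), date(1995, 5, 15))  # birthday not yet reached
```

The first call gives a full 2 years; the second subtracts 1 because the December birthday has not yet occurred by mid-May.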
It's a clever(?) way of checking if the month and day of the date of birth happened before or after the date from the event (litter) in order to properly calculate the number of years between the events (as the year part in itself isn't enough). In this part: ``` ( YEAR(date) - YEAR(birth) ) - ( RIGHT(date,5) < RIGHT(birth,5) ) ``` the `RIGHT(date,5)` returns the month and day part like (`03-30`) and does a boolean less than comparison which returns either 0 or 1 depending on the result. This is then subtracted from the `YEAR(date) - YEAR(birth)` calculation so that the years between the events get adjusted correctly.
MySQL complex date calculation in inner join
[ "", "mysql", "sql", "" ]
I have the below data. ``` Base table Id DateTime 201 2015-05-03 08:01 301 2015-05-03 08:20 401 2015-05-03 08:40 Extract Table Id DateTime Location 201 2015-05-03 07:50 City A 201 2015-05-03 08:01 City B 201 2015-05-03 08:50 City C 301 2015-05-03 07:15 City E 301 2015-05-03 08:01 City F 301 2015-05-03 08:20 City G 401 2015-05-03 08:40 City X 401 2015-05-03 08:55 City Y Desired Result: Location Id DateTime City A 201 2015-05-03 07:50 City E 301 2015-05-03 07:15 City X 401 2015-05-03 08:40 ``` To clear the picture, I am trying to get the first location for each customer where they checked in. Since I will be filtering some Locations I will need to have ``` WHERE Location in ('City A','City B','City C','City E','City F','City G','City X','City Y') ``` Thanks.
I still think a LEFT JOIN is the way to go: ``` SELECT extract.id, extract.location FROM [base] INNER JOIN [extract] ON [base].id = [extract].id LEFT JOIN extract [earlierExtract] ON [extract].id = [earlierExtract].id AND [earlierExtract].DATETIME < [extract].DATETIME AND [earlierExtract].Location IN ( 'City A' ,'City B' ,'City C' ,'City E' ,'City F' ,'City G' ,'City X' ,'City Y' ) WHERE extract.Location IN ( 'City A' ,'City B' ,'City C' ,'City E' ,'City F' ,'City G' ,'City X' ,'City Y' ) AND earlierextract.id IS NULL ```
Perhaps this is what you want? ``` select b.id, (CASE WHEN b.ADateTime = e.RDateTime THEN e.Location ELSE (SELECT TOP 1 e.Location FROM Extract e2 WHERE e2.id = e.id ORDER BY RdateTime ASC ) END) AS [Test], a.Location from Base b inner join Extract e on b.id = e.id where b.id = 301; ``` This gets the location associated with the minimum matching `id`. There are definitely other ways to expression this logic, assuming this is what you are really looking for.
Need first record based on the earliest date - Sql Server 2012
[ "", "sql", "sql-server-2012", "" ]
I have a MySQL Table as follows: ``` +----+-------+-------+-------+-------+-------+ | ID | MON | TUE | WED | THU | FRI | +----+-------+-------+-------+-------+-------+ | 0 | Bike | Bike | Walk | Bike | Car | | 1 | Car | Car | Car | Bus | Car | | 2 | Bus | Train | Bus | Bus | Train | | 3 | Car | Car | Car | Walk | Car | +----+-------+-------+-------+-------+-------+ ``` How would I group by and count all days, to get the total modes of each transport over the week. For example: ``` +--------+-------+ | MODE | COUNT | +--------+-------+ | Bike | 3 | | Bus | 4 | | Car | 9 | | Train | 2 | | Walk | 2 | +--------+-------+ ``` I have tried using: ``` SELECT COUNT(*), Mon FROM transport GROUP BY Mon, Tue, Wed, Thu, Fri ``` But this creates a new group for each unique value in each day.
One way to do this is to produce a subquery that selects the transport mode in one column using the `union all` operator, and then counting the occurrences: ``` SELECT mode, COUNT(*) FROM (SELECT mon AS mode FROM transport UNION ALL SELECT tue AS mode FROM transport UNION ALL SELECT wed AS mode FROM transport UNION ALL SELECT thu AS mode FROM transport UNION ALL SELECT fri AS mode FROM transport) t GROUP BY mode ```
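A runnable sanity check of the `UNION ALL` unpivot, using SQLite and the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transport (id INTEGER, mon TEXT, tue TEXT,
                            wed TEXT, thu TEXT, fri TEXT);
    INSERT INTO transport VALUES
        (0,'Bike','Bike','Walk','Bike','Car'),
        (1,'Car','Car','Car','Bus','Car'),
        (2,'Bus','Train','Bus','Bus','Train'),
        (3,'Car','Car','Car','Walk','Car');
""")

# Stack the five day columns into one column, then count occurrences
counts = dict(conn.execute("""
    SELECT mode, COUNT(*) FROM (
        SELECT mon AS mode FROM transport
        UNION ALL SELECT tue FROM transport
        UNION ALL SELECT wed FROM transport
        UNION ALL SELECT thu FROM transport
        UNION ALL SELECT fri FROM transport
    ) t GROUP BY mode
""").fetchall())
```

The result matches the desired totals in the question.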
If you have a separate table of modes, you can also do: ``` select m.mode, count(*) from modes m join transport t on m.mode in (t.mon, t.tue, t.wed, t.thu, t.fri) group by m.mode; ```
MySQL GROUP BY and COUNT over Multiple Columns
[ "", "mysql", "sql", "select", "count", "" ]
How do I structure an SQL statement that selects a group of very similar objects? Essentially, the problem statement can boil down to `select x number of objects` whose values vary by v within a given range. For example, I have data of some substrates that have a thickness associated with it as below: ``` ╔════════════════════╗ ║ SubstID Thickness ║ ╠════════════════════╣ ║ Subst01 690 ║ ║ Subst02 710 ║ ║ Subst03 695 ║ ║ Subst04 692 ║ ║ Subst05 715 ║ ║ Subst06 699 ║ ║ Subst07 700 ║ ║ Subst08 702 ║ ║ Subst09 690 ║ ║ Subst10 690 ║ ║ Subst11 701 ║ ╚════════════════════╝ ``` I would like to query the data to get sets of `x number of substrates` that are within a certain range of thickness, e.g. 690 and 710 and also vary by no more than v, e.g. v=4. Using the numbers above, this should yield the following possibilities, if x is 3: ``` Subst01(690),Subst04(692),Subst09(690) Subst01(690),Subst04(692),Subst10(690) Subst01(690),Subst09(690),Subst10(690) Subst04(692),Subst09(690),Subst10(690) Subst06(699),Subst07(700),Subst08(702) ... Subst06(699),Subst07(700),Subst11(701) Subst06(699),Subst08(702),Subst11(701) Subst07(700),Subst08(702),Subst11(701) ``` There are tens of thousands of substrates so comparison time is an issue. I can do this in a programming language, but would prefer if there is an easy solution in SQL.
Try this: ``` select * from table t1 join table t2 on t1.substid < t2.substid join table t3 on t2.substid < t3.substid where abs(t1.thickness - t2.thickness) <= 3 and abs(t1.thickness - t3.thickness) <= 3 and abs(t2.thickness - t3.thickness) <= 3 ```
As @j\_random\_hacker mentioned, SQL doesn't easily do *x*-number of self-joins, but you can use some XML Path trickery to get a concatenated list of objects within the variance. Unfortunately, this doesn't always force the resultset of SimilarObjects to be 3. ``` Declare @var int=4 Declare @x int=3 ;with VarSubstrates as ( Select SubstID,Thickness,Thickness+@var as ThicknessMin,Thickness-@var as ThicknessMax From Substrates ) Select vo.SubstID,vo.Thickness,stuff(subs.SimilarObjects,1,1,'') as SimilarObjects From VarSubstrates vo Outer Apply ( Select Top (@x) ','+vi.SubstID+'('+cast(vi.Thickness as varchar)+')' From VarSubstrates vi Where vi.Thickness Between vo.ThicknessMax and vo.ThicknessMin For XML Path('') ) subs(SimilarObjects) ``` Output: ``` SubstID Thickness SimilarObjects Subst01 690 Subst01(690),Subst04(692),Subst09(690) Subst02 710 Subst02(710) Subst03 695 Subst03(695),Subst04(692),Subst06(699) Subst04 692 Subst01(690),Subst03(695),Subst04(692) ... ```
SQL to retrieve thickness values with small variance
[ "", "sql", "sql-server", "" ]
I'm working with an existing table that stores a start date, end date, and an integer used for ordering. For a given start & end date, I need to be able to determine the smallest available integer for entries with an overlapping date range. So for example, my table might store these records: *July 8th -> July 9th with an ordering index of 0.* *July 9th -> July 10th with an ordering index of 1.* *July 9th -> July 11th with an ordering index of 2.* Then, given the date range *July 10th -> July 11th*, I would want to set the ordering index to *0*. It needs to work where there could be no other entries within the entry date range (so it could default to 0). Date ranges aren't always two dates apart, and the ordering index doesn't have a limit. Here is what I have that only returns one above the maximum order index: ``` SELECT ISNULL(MAX(order_index),-1) + 1 FROM table WHERE start_date <= @end AND end_date >= @start) ``` I tried working with [this answer](https://stackoverflow.com/questions/684106/find-the-smallest-unused-number-in-sql-server), but couldn't get the desired result.
How about something like this? ``` declare @SampleData table ([BeginDate] date, [EndDate] date, [Order] int); insert @SampleData values ('2015-07-08', '2015-07-09', 0), ('2015-07-09', '2015-07-10', 1), ('2015-07-09', '2015-07-11', 2); declare @Start date = '2015-07-10'; declare @End date = '2015-07-11'; with [OrderingCTE] as ( select [Order], [Ideal Order] = row_number() over (order by [Order]) - 1 from @SampleData where [BeginDate] <= @End and [EndDate] >= @Start ) select coalesce ( min(case [Order] when [Ideal Order] then null else [Ideal Order] end), max([Order]) + 1 ) from [OrderingCTE]; ``` The CTE produces two orderings for each record in the source table: `[Order]` is the actual value stored in the record, and `[Ideal Order]` is what that value *would* be if all possible orderings (starting with zero) were in use within the given date range. If at any point the `[Ideal Order]` differs from the `[Order]`, you can infer that the current `[Ideal Order]` value has not been used and is therefore the minimum available value. If this is not true at any point, then the minimum available value is one greater than the largest value that has been used thus far; that's the second half of the `COALESCE` at the bottom of the script. As a final note: the question you linked has [another answer](https://stackoverflow.com/a/684176/3628863) raises concerns about a possible race condition that can arise, depending on how you're trying to use the data that you query in this way. I'd strongly recommend taking a look at it if you haven't already done so.
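The CTE's actual-versus-ideal ordering idea can be sketched as plain Python over the same sample rows (ISO date strings compare correctly, so the overlap test works on strings):

```python
def smallest_free_index(rows, start, end):
    # rows: (begin_date, end_date, order_index) tuples, ISO date strings
    used = sorted(o for b, e, o in rows if b <= end and e >= start)
    # the first place where the stored index diverges from 0,1,2,... is free
    for ideal, actual in enumerate(used):
        if actual != ideal:
            return ideal
    return len(used)  # no gap: one past the largest used index

sample = [("2015-07-08", "2015-07-09", 0),
          ("2015-07-09", "2015-07-10", 1),
          ("2015-07-09", "2015-07-11", 2)]

free = smallest_free_index(sample, "2015-07-10", "2015-07-11")
```

For the July 10th to 11th range, only the records with ordering indexes 1 and 2 overlap, so 0 is the smallest available value; a range with no overlaps at all falls back to 0 as well.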
I think you can do this by enumerating the values. If I assume that the order indexes are not duplicated, then you can use `row_number()` and some arithmetic to find the "holes". Additional logic is needed to handle the edge cases. ``` with t as ( select t.*, row_number() over (order by order_index) as seqnum, min(order_index) over () as minoi, max(order_index) over () as maxoi from table t where start_date <= @end and end_date >= @start ) select (case when min(minoi) > 0 then 0 when min(minoi) is null then min(maxoi + 1) else min(minoi + seqnum - 1) end) from t where order_index <> minoi + seqnum - 1 or order_index = maxoi ```
Smallest available integer in subset of table - SQL
[ "", "sql", "sql-server", "" ]
``` p_id book_num conf_num arrival_dt departure_dt create-dt room_num 353 21807 3328568 19-JUN-15 21-JUN-15 27-JUN-15 2408 353 21807 3328562 18-JUN-15 20-JUN-15 27-JUN-15 2408 ``` In the above example, arrival\_dt and departure\_dt overlap for 2 different confirmation numbers for the same room number 2408. I also want to exclude the below set of records, where arrival\_dt and departure\_dt are the same: ``` p_id book_num conf_num arrival_dt departure_dt create-dt room_num 353 21802 3328508 18-JUN-15 21-JUN-15 27-JUN-15 1909 353 21802 3328555 18-JUN-15 21-JUN-15 27-JUN-15 1909 ``` Can you please help me with SQL logic to find these kinds of records in the table?
[SQL Fiddle](http://sqlfiddle.com/#!4/35d28/2) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE TEST ( p_id, book_num, conf_num, arrival_dt, departure_dt, create_dt, room_num ) AS SELECT 353, 21807, 3328568, DATE '2015-06-19', DATE '2015-06-21', DATE '2015-06-27', 2408 FROM DUAL UNION ALL SELECT 353, 21807, 3328562, DATE '2015-06-18', DATE '2015-06-20', DATE '2015-06-27', 2408 FROM DUAL UNION ALL SELECT 353, 21802, 3328508, DATE '2015-06-18', DATE '2015-06-21', DATE '2015-06-27', 1909 FROM DUAL UNION ALL SELECT 353, 21802, 3328555, DATE '2015-06-18', DATE '2015-06-21', DATE '2015-06-27', 1909 FROM DUAL UNION ALL SELECT 353, 21801, 3328444, DATE '2015-06-17', DATE '2015-06-21', DATE '2015-06-27', 2000 FROM DUAL UNION ALL SELECT 353, 21801, 3328445, DATE '2015-06-18', DATE '2015-06-20', DATE '2015-06-27', 2000 FROM DUAL UNION ALL SELECT 353, 21803, 3328446, DATE '2015-06-19', DATE '2015-06-20', DATE '2015-06-27', 2001 FROM DUAL UNION ALL SELECT 353, 21804, 3328447, DATE '2015-06-20', DATE '2015-06-21', DATE '2015-06-27', 2001 FROM DUAL; ``` **Query 1**: ``` SELECT * FROM TEST t WHERE EXISTS ( SELECT 'X' FROM TEST x WHERE x.room_num = t.room_num AND x.arrival_dt < t.departure_dt AND x.departure_dt > t.arrival_dt AND NOT ( x.arrival_dt = t.arrival_dt AND x.departure_dt = t.departure_dt ) ) ``` **[Results](http://sqlfiddle.com/#!4/35d28/2/0)**: ``` | P_ID | BOOK_NUM | CONF_NUM | ARRIVAL_DT | DEPARTURE_DT | CREATE_DT | ROOM_NUM | |------|----------|----------|------------------------|------------------------|------------------------|----------| | 353 | 21807 | 3328568 | June, 19 2015 00:00:00 | June, 21 2015 00:00:00 | June, 27 2015 00:00:00 | 2408 | | 353 | 21807 | 3328562 | June, 18 2015 00:00:00 | June, 20 2015 00:00:00 | June, 27 2015 00:00:00 | 2408 | | 353 | 21801 | 3328444 | June, 17 2015 00:00:00 | June, 21 2015 00:00:00 | June, 27 2015 00:00:00 | 2000 | | 353 | 21801 | 3328445 | June, 18 2015 00:00:00 | June, 20 2015 00:00:00 | June, 27 2015 00:00:00 | 2000 | ```
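The overlap predicate (`x.arrival_dt < t.departure_dt AND x.departure_dt > t.arrival_dt`, excluding identical ranges) is portable beyond Oracle. Here is a hedged sanity check using Python's built-in `sqlite3` with a reduced version of the sample data (ISO-formatted date strings compare correctly as text):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE test (room_num INT, arrival_dt TEXT, departure_dt TEXT);
INSERT INTO test VALUES
 (2408, '2015-06-19', '2015-06-21'),
 (2408, '2015-06-18', '2015-06-20'),
 (1909, '2015-06-18', '2015-06-21'),
 (1909, '2015-06-18', '2015-06-21'),
 (2001, '2015-06-19', '2015-06-20'),
 (2001, '2015-06-20', '2015-06-21');
""")
overlapping = con.execute("""
    SELECT room_num FROM test t
    WHERE EXISTS (
        SELECT 1 FROM test x
        WHERE x.room_num = t.room_num
          AND x.arrival_dt   < t.departure_dt
          AND x.departure_dt > t.arrival_dt
          AND NOT (x.arrival_dt = t.arrival_dt
                   AND x.departure_dt = t.departure_dt)
    )
""").fetchall()
# Room 1909 (identical ranges) and room 2001 (back-to-back stays) are excluded.
print(sorted(r[0] for r in overlapping))  # [2408, 2408]
```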
The correct logic is that one departs after the other arrives and the first arrives before the other departs. You can do this with a self-join or a `where` clause. If you just want the records: ``` select r.* from records r where exists (select 1 from records r2 where r2.pid = r.pid and r2.arrival_dt >= r.departure_dt and r2.departure_dt <= r.arrival_dt ); ```
Find overlapping date range from a data set
[ "", "sql", "oracle", "date", "range", "" ]
I have a table `test` that looks like ``` Month|CA |CATTC | CA |CATTC ------------------------------------ 1 |100 |20 | 250 |120 5 |100 |30 | 202 |140 12 |130 |260 | 255 |130 ``` My goal is to get a table `test 2` like ``` Month|CA |CATTC -------------------- 1 |100 |20 5 |100 |30 12 |130 |260 1 |250 |120 5 |202 |140 12 |255 |130 ``` Is it possible within SQL Server?
Try this, ``` CREATE TABLE #TEMP ( [Month] INT, CA INT, CASTTC INT, CA1 INT, CATTC1 INT ) INSERT INTO #TEMP VALUES (1 ,100 ,20 , 250 ,120), (5 ,100 ,30 , 202 ,140), (12 ,130 ,260 , 255 ,130) SELECT [Month],CrossApplied.CA,CrossApplied.CASTTC FROM #TEMP CROSS APPLY (VALUES (CA,CASTTC),(CA1,CATTC1)) CrossApplied(CA,CASTTC) ``` (OR) ``` SELECT [Month], CrossApplied.CA, CrossApplied.CASTTC FROM #TEMP CROSS APPLY (SELECT CA, CASTTC UNION ALL SELECT CA1, CATTC1) CrossApplied(CA, CASTTC) ```
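`CROSS APPLY (VALUES ...)` is SQL Server-specific; the same unpivot can be expressed with a plain `UNION ALL` on engines that lack it. A small runnable check of that portable form in SQLite, using made-up column names `ca2`/`cattc2` for the second pair since the original table repeats its headers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (month INT, ca INT, cattc INT, ca2 INT, cattc2 INT);
INSERT INTO t VALUES (1, 100, 20, 250, 120),
                     (5, 100, 30, 202, 140),
                     (12, 130, 260, 255, 130);
""")
# Stack the two (CA, CATTC) column pairs into rows.
unpivoted = con.execute("""
    SELECT month, ca, cattc FROM t
    UNION ALL
    SELECT month, ca2, cattc2 FROM t
""").fetchall()
print(len(unpivoted))       # 6 rows: 3 from each column pair
print((1, 250, 120) in unpivoted)
```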
Change columns names, then do a `UNION ALL`: ``` select Month, CA1 as CA, CATTC1 as CATTC from tablename UNION ALL select Month, CA2, CATTC2 from tablename ```
Transform a table within SQL Server
[ "", "sql", "sql-server", "" ]
I have the following Rails ActiveRecord association, where a lesson has\_many books, and books belong\_to a lesson. I want to run a SQL command that will tell me how many lessons have multiple books that belong to that lesson. Ex: Lesson 1 has books 1 and 2, and I want to see how many times that happens.
[#having](http://api.rubyonrails.org/classes/ActiveRecord/QueryMethods.html#method-i-having) will let you do a [#group](http://api.rubyonrails.org/classes/ActiveRecord/QueryMethods.html#method-i-group) that filters out Lessons that only have one Book. ``` Book.group(:lesson_id).having("count(lesson_id) > 1").count ```
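Under the hood, `Book.group(:lesson_id).having("count(lesson_id) > 1").count` generates a `GROUP BY` / `HAVING` query. A rough SQLite sketch of that SQL (table name and sample data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE books (id INTEGER PRIMARY KEY, lesson_id INT);
INSERT INTO books (lesson_id) VALUES (1), (1), (2), (3), (3), (3);
""")
# Keep only lessons with more than one book, mapped lesson_id -> book count,
# mirroring the hash ActiveRecord's .count returns.
counts = dict(con.execute("""
    SELECT lesson_id, COUNT(lesson_id)
    FROM books
    GROUP BY lesson_id
    HAVING COUNT(lesson_id) > 1
""").fetchall())
print(counts)  # {1: 2, 3: 3}
```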
`Book.group(:lesson_id).count` will return a hash with the `lesson_id` as key and the number of books as value. You could also implement a `counter_cache` as described [here](http://guides.rubyonrails.org/association_basics.html#belongs-to-association-reference) under section 4.1.2.3
List number of duplicate values
[ "", "sql", "ruby-on-rails", "ruby", "postgresql", "ruby-on-rails-4", "" ]
I faced a problem with a data loss, caused by a wrong query. Data restored, but now I'd like to understand the problem. I encountered the problem on SQL Server 2014, but I replicated it on SQL Server 2000 and PostgreSQL. Specifically, there was a DELETE. In the following scenario I use a SELECT. The table creation script for SQL Server 2014: ``` CREATE TABLE [dbo].[tmp_color]( [color_id] [int] NOT NULL, [color_name] [nvarchar](50) NOT NULL, [color_cat] [int] NOT NULL, CONSTRAINT [PK_tmp_color] PRIMARY KEY CLUSTERED ( [color_id] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF , ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[tmp_color_cat]( [catid] [int] NOT NULL, [catname] [nvarchar](50) NOT NULL, CONSTRAINT [PK_tmp_color_cat] PRIMARY KEY CLUSTERED ( [catid] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF , ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ``` And the Postgres version: ``` CREATE TABLE tmp_color ( color_id integer NOT NULL, color_name text, color_cat integer, CONSTRAINT tmp_color_pkey PRIMARY KEY (color_id) ); CREATE TABLE tmp_color_cat ( catid integer NOT NULL, catname text, CONSTRAINT tmp_color_cat_pkey PRIMARY KEY (catid) ); ``` Data population (works on both RDBMSs): ``` INSERT INTO tmp_color_cat (catid, catname) VALUES (1, 'magic color'); INSERT INTO tmp_color_cat (catid, catname) VALUES (2, 'normal color'); INSERT INTO tmp_color (color_id, color_name, color_cat) VALUES (1, 'red', 1); INSERT INTO tmp_color (color_id, color_name, color_cat) VALUES (2, 'green', 2); INSERT INTO tmp_color (color_id, color_name, color_cat) VALUES (3, 'black', 1); ``` The following SELECT ***is wrong***: ``` SELECT color_cat FROM tmp_color_cat; ``` Because `color_cat` does not exist in `tmp_color_cat`. 
BUT, the moment you take this in a subquery: ``` SELECT * FROM tmp_color WHERE color_cat IN( SELECT color_cat FROM tmp_color_cat WHERE catname = 'magic color' ); ``` It **returns every single record** from `tmp_color`. The logical error in the script is obvious: the developer wrote the wrong column to identify the category. If you are deleting records instead of selecting them, you will delete the entire table. Not good. Is this desired behavior? Or is it a consequence of subquery design? By observing the execution plan of SQL Server, the logical operation is a Left Semi Join. I found a couple of posts, one [for PostgreSQL](http://pgsql-sql.postgresql.narkive.com/w8JRWMdP/misnamed-field-in-subquery-does-not-cause-error-when-field-name-exists-in-parent-query) and one [for SQL Server](https://connect.microsoft.com/SQLServer/feedback/details/302281/non-existent-column-causes-subquery-to-be-ignored-rather-than-errored-out). Is there any good documentation I could send to the developer group explaining why this is not an error? How can I avoid this kind of problem? My first idea is to use an alias. Aliases are good.
### Authoritative quote for Postgres The scope of a subquery includes all visible columns of the outer query. Unqualified names are resolved to the inner query first, with the search then extending outward. Assign table aliases and use these aliases to table-qualify column names to remove any ambiguity - as you hinted yourself already. Here is an [example in the Postgres manual with a definitive statement explaining the scope](https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-WHERE): > ``` > SELECT ... FROM fdt WHERE c1 IN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10) > ``` > > [...] > > Qualifying `c1` as fdt.c1 is only necessary if `c1` is also the name of a > column in the derived input table of the subquery. But qualifying the > column name adds clarity even when it is not needed. This example > shows how the **column naming scope of an outer query extends into its inner queries.** Bold emphasis mine. There is also an example with an `EXISTS` semi-join in the list of examples in the same chapter of the manual. That's typically the **superior alternative** to `WHERE x IN (subquery)`. But in this particular case you don't need either. See below. *One* example: * [sql query to extract new records](https://stackoverflow.com/questions/26598764/sql-query-to-extract-new-records/26599026#26599026) ### DB design, naming convention This disaster happened because of confusion about column names. A **clear and consistent naming convention** in your table definitions would go a long way to make that a lot less likely to happen. This is true for ***any*** RDBMS. Make them as long as necessary to be *clear*, but as short as possible otherwise. Whatever your policy, be consistent. 
For Postgres I would suggest: ``` CREATE TABLE colorcat ( colorcat_id integer NOT NULL PRIMARY KEY, colorcat text UNIQUE NOT NULL ); CREATE TABLE color ( color_id integer NOT NULL PRIMARY KEY, color text NOT NULL, colorcat_id integer REFERENCES colorcat -- assuming an FK ); ``` * You already had legal, lower-case, unquoted identifiers. That's *good*. * Use a **consistent** policy. An inconsistent policy is worse than a bad policy. Not `color_name` (with underscore) vs. `catname`. * I rarely use 'name' in identifiers. It doesn't add information, just makes them longer. All identifiers are *names*. You chose `cat_name`, leaving away `color`, which actually carries information, and added `name`, which doesn't. If you have other "categories" in your DB, which is common, you'll have multiple `cat_name` which easily collide in bigger queries. I'd rather use `colorcat` (just like the table name). * Make the name indicate what's in the column. For the ID of a color category, `colorcat_id` is a good choice. `id` is not descriptive, `colorcat` would be misleading. * The FK column `colorcat_id` can have the same name as the referenced column. Both have *exactly* the same content. Also allows short syntax with `USING` in joins. Related answer with more details: * [How to implement a many-to-many relationship in PostgreSQL?](https://stackoverflow.com/questions/9789736/how-to-implement-a-many-to-many-relationship-in-postgresql/9790225#9790225) ### Better query Building on my supposed design: ``` SELECT c.* FROM colorcat cc JOIN color c USING (colorcat_id) WHERE cc.colorcat = 'magic color'; ``` This is assuming a 1:n relationship between `colorcat` and `color` (which you did not specify, but seems likely). 
Less publicly known (since the syntax is *different in other RDBMS like SQL Server*), you can [join in additional tables in a `DELETE`](https://www.postgresql.org/docs/current/sql-delete.html) as well: ``` DELETE FROM color c USING colorcat cc WHERE cc.colorcat = 'magic color' AND cc.colorcat_id = c.colorcat_id; ```
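The scoping trap itself is easy to reproduce outside Postgres and SQL Server; SQLite resolves unqualified subquery columns the same way. A minimal demonstration using the question's own schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tmp_color (color_id INT, color_name TEXT, color_cat INT);
CREATE TABLE tmp_color_cat (catid INT, catname TEXT);
INSERT INTO tmp_color VALUES (1, 'red', 1), (2, 'green', 2), (3, 'black', 1);
INSERT INTO tmp_color_cat VALUES (1, 'magic color'), (2, 'normal color');
""")

# Standalone, the misnamed column is an error...
try:
    con.execute("SELECT color_cat FROM tmp_color_cat")
    standalone_ok = True
except sqlite3.OperationalError:
    standalone_ok = False

# ...but inside IN (...) it silently correlates to the outer table, so the
# test compares each row's color_cat to itself: always true, every row back.
trap_rows = con.execute("""
    SELECT * FROM tmp_color
    WHERE color_cat IN (SELECT color_cat
                        FROM tmp_color_cat
                        WHERE catname = 'magic color')
""").fetchall()
print(standalone_ok, len(trap_rows))  # False 3
```

With a `DELETE` instead of a `SELECT`, the same silent correlation would empty the table, which is exactly the incident described in the question.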
This is known behaviour with SQL Server. Using Aliases will prevent this ``` SELECT * FROM tmp_color WHERE color_cat IN( SELECT A.color_cat FROM tmp_color_cat As A WHERE A.catname = 'magic color' ); ``` The above query will throw an error ``` Msg 207, Level 16, State 1, Line 3 Invalid column name 'color_cat'. ```
Misnamed field in subquery leads to join
[ "", "sql", "sql-server", "postgresql", "" ]
Using MySQL, I have a table with a name field. I want to check that name field against a set of "keywords" that indicate that the name is not a person, but a business. In words: Get the entire row if the name contains any of the keywords. My attempt: `SELECT * FROM leads WHERE CONTAINS(leads.name, (SELECT word FROM keywords));` (Returns "Subquery returns more than 1 row")
It does not work like that. You can use a join instead: ``` SELECT l.* FROM leads l JOIN keywords k on instr(l.name, k.word) > 0 ```
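To sanity-check the `instr`-based join pattern, here is a small SQLite run (SQLite also ships an `instr()` function; the lead names and keywords are made-up sample data, and `DISTINCT` collapses leads that match more than one keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE leads (name TEXT);
CREATE TABLE keywords (word TEXT);
INSERT INTO leads VALUES ('Acme Holdings Ltd'), ('John Smith'),
                         ('Smith Consulting Inc');
INSERT INTO keywords VALUES ('Ltd'), ('Inc'), ('Holdings');
""")
# A lead is a "business" if its name contains any keyword.
businesses = [r[0] for r in con.execute("""
    SELECT DISTINCT l.name
    FROM leads l
    JOIN keywords k ON instr(l.name, k.word) > 0
""")]
print(sorted(businesses))  # ['Acme Holdings Ltd', 'Smith Consulting Inc']
```

Note this match is case-sensitive in SQLite; for case-insensitive matching you would compare lowercased values.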
Here you are: ``` SELECT * FROM leads WHERE leads.name IN (SELECT word FROM keywords); ``` Hope this helps.
SQL select rows where field contains word from another table's fields
[ "", "mysql", "sql", "select", "contains", "" ]
This is kind of a weird one. This particular piece of bad database design has caught me out so many times and I've always had to make ridiculous workarounds, and this is no exception. To summarize: I have 3 tables. The first one is a lookup table of questions, the second is a lookup table of answers, and the third stores a question and answer id to show which questions have been answered. So far straightforward. However, the answer can be 1 of 3 types: free text, multiple choice or multiple selection, and these are all stored in the same column (Answer). Free text can be anything, like 'Hello' or a datetime '2015-07-03 00:00:00'. Multiple choice gets stored as integers 1 or 49 etc., and multiple selection gets stored as a delimited string '1,4,7,8' (I know this is very bad design; a column shouldn't store more than 1 value. However, it predates me and is written into our aspx web application; as I work on my own I simply do not have the resources or time to change it). Here comes the problem; take a look at this query: ``` Select * FROM AnswersTable JOIN LK_Questions ON AnswersTable.QuestionID = LK_Questions.QuestionID JOIN LK_Answers ON AnswersTable.Answer = LK_Answers.AnswerID Where LK_Questions.QuestionTypeID = 1 ``` The where clause should ensure that the only questions that are returned are multiple choice (so I am not joining a free text answer to an integer), and in fact when I run this query it runs ok, but when I try to select individual columns it errors out with this error message: > Conversion failed when converting the varchar value ',879' to data type smallint. It's almost like it's doing the join before it does the where, although I know the query optimizer doesn't work that way. The problem is I need to select column names as this is going into a table, so I need to define the column names. Is there anything I can do? I've tried for ages but with no results. I should mention that I am running SQL Server 2005. 
Many thanks in advance EDIT: This is the query that causes an error: ``` Select LK_Answers.Answer FROM AnswersTable JOIN LK_Questions ON AnswersTable.QuestionID = LK_Questions.QuestionID JOIN LK_Answers ON AnswersTable.Answer = LK_Answers.AnswerID Where LK_Questions.QuestionTypeID = 1 ```
The issue in your query is that you have a datatype mismatch on the columns you're using to join your tables together. One of the easiest ways to correct this is to explicitly `CAST` both sides of the join to `VARCHAR` so that there is no problem matching datatypes when joining. This isn't ideal, but if you're not able to change the table schema then you have to work around it. [SQL Fiddle Demo](http://sqlfiddle.com/#!3/883ce/2) ``` CREATE TABLE LeftTable ( id INT ); CREATE TABLE RightTable ( id VARCHAR(30) ); INSERT INTO LeftTable (id) VALUES (1), (2), (3), (4), (5); INSERT INTO RightTable (id) VALUES ('1'), ('2'), ('3'), ('4'), ('5'), (',879'); SELECT l.*, r.* FROM LeftTable l JOIN RightTable r ON CAST(l.id AS VARCHAR(30)) = CAST(r.id AS VARCHAR(30)) WHERE l.id = '1' ```
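The same CAST-both-sides pattern in a runnable form: SQLite does not raise conversion errors the way SQL Server does (it just fails to match), but the query shape is identical, and the malformed `',879'` row is simply left unmatched:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE left_t (id INTEGER);
CREATE TABLE right_t (id TEXT);
INSERT INTO left_t VALUES (1), (2), (3), (4), (5);
INSERT INTO right_t VALUES ('1'), ('2'), ('3'), ('4'), ('5'), (',879');
""")
# Cast both join keys to text so there is never a numeric conversion.
matched = con.execute("""
    SELECT l.id, r.id
    FROM left_t l
    JOIN right_t r ON CAST(l.id AS TEXT) = CAST(r.id AS TEXT)
""").fetchall()
print(len(matched))  # 5 -> ',879' matches nothing instead of erroring
```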
You can use a subselect for this. ``` Select * FROM AnswersTable JOIN ( SELECT * FROM LK_Questions Where QuestionTypeID = 1) as LK_Questions ON AnswersTable.QuestionID = LK_Questions.QuestionID JOIN LK_Answers ON AnswersTable.Answer = LK_Answers.AnswerID ```
SQL joins before where clause when selecting columns?
[ "", "sql", "sql-server", "join", "" ]
Recently an issue came up in my DB; the same DB is being shared by multiple applications. My first application uses one table to insert rows, which has the columns `ProductionDate` of **DataType** `DateTime` `DateCreated` of **DataType** `DateTime` `Default GetDate()` As we all know, `DateCreated` will hold the time when the row was inserted into the table, when no value is passed in the `Insert` statement. But one of my colleagues used the same column `DateCreated` from his application to insert some other value (a date) for the product instead of using `ProductionDate` (he was misled by the name). I found this issue when my reports were misleading (the select was based on DateCreated). How can I force my column to reject anything except the default `GetDate()` value, i.e. ``` INSERT INTO MyTableName(.....,DateCreated,......) VALUES (.....,'2015-07-15 14:06:42.250',......) ``` should throw an exception/error! I am using SQL Server 2012. **Note**: I already have an `update/insert` trigger to fill my `DateModified` column :(
I would rename the existing table (say with an underscore prefix) and replace it with a view using the original name and performing a trivial computation on the `DateCreated` column so that it becomes computed and therefore readonly: ``` create table dbo._T ( ID int not null, DateCreated datetime not null constraint DF_Created DEFAULT (CURRENT_TIMESTAMP)) go create view dbo.T with schemabinding as select ID,COALESCE(DateCreated,DateCreated) as DateCreated from dbo._T go insert into dbo.T (ID) values (1) go insert into dbo.T(ID,DateCreated) values (1,'20150101') ``` Produces: ``` (1 row(s) affected) Msg 4406, Level 16, State 1, Line 1 Update or insert of view or function 'dbo.T' failed because it contains a derived or constant field. ``` And we can see the second insert failed: ``` select * from dbo.T ID DateCreated ----------- ----------------------- 1 2015-07-07 14:22:48.840 ``` And now only give the other user/application permission to talk to this view, not the base table.
You can add a trigger to set the date created. Code below taken from [here](http://snipplr.com/view/2595/) ``` CREATE TRIGGER tr[TableName]CreateDate ON [TableName] FOR INSERT AS UPDATE [TableName] SET [TableName].Created=getdate() FROM [TableName] INNER JOIN Inserted ON [TableName].[UniqueID]= Inserted.[UniqueID] ```
How to add Constraint to Table's Column,so value cannot be other than default
[ "", "sql", "sql-server", "constraints", "" ]
I have two tables, `Table1` & `Table2` Table1 ``` s.no|uniqueNumber | assigned_to 1. | S123 | Tom 2. | S234 | Harry 3. | S345 | Tom ``` Table2 ``` s.no|uniqueNumber | status 1. | S123 | approve 2. | S234 | approve 3. | S345 | reject ``` I want to fetch the `uniqueNumber` whose `status` is `approve` & which is assigned to `Tom`. I am trying to use `UNION` with a `where` clause. But I think `UNION` doesn't work here. How can I achieve this?
You must use a join: ``` select table1.uniqueNumber, status, assigned_to from table1 inner join table2 on table1.uniqueNumber = table2.uniqueNumber where status = 'approve' and assigned_to ='Tom' ```
Something like ``` SELECT t1.uniquenumber FROM table1 AS t1 JOIN table2 AS t2 ON t1.uniquenumber = t2.uniquenumber WHERE t2.status = 'approve' AND t1.assigned_to = 'Tom' ```
How to fetch data from two table?
[ "", "sql", "" ]
I am working with three tables: `RGN (region)`, `FAC (facility)` and `RGN_FAC`. In simplest form, facilities can be associated with more than one region. These associations are stored in the `RGN_FAC` table. Each region has a column called PrimaryFlag. I am attempting to create a list of the PrimaryFlag values of each region associated with a given facility. I was able to do this using the following sub query: ``` SELECT [dbo].[RGN].PRIMARY_FLAG FROM [dbo].[RGN] WHERE [dbo].[RGN].ID in (SELECT [dbo].[RGN_FAC].RGN_ID FROM [dbo].[RGN_FAC] WHERE [dbo].[RGN_FAC].FAC_ID = 'my fac id') ``` I was told that I could do this more efficiently using a join instead of a sub query. However, I am not able to wrap my head around how I can accomplish this with a join.
Actually, SQL Server has a pretty good optimizer. But, the best approach is normally `EXISTS`: ``` SELECT r.PRIMARY_FLAG FROM [dbo].[RGN] r WHERE EXISTS (SELECT 1 FROM [dbo].[RGN_FAC] f WHERE f.FAC_ID = 'my fac id' AND r.id = f.RGN_ID ); ``` The strict equivalent `JOIN` would be: ``` SELECT r.PRIMARY_FLAG FROM [dbo].[RGN] r JOIN (SELECT DISTINCT f.RGN_ID FROM [dbo].[RGN_FAC] f WHERE f.FAC_ID = 'my fac id' ) f ON f.RGN_ID = r.id ``` However, the `DISTINCT` can be a performance issue. If you know the values are never duplicated in the subquery, you can remove it: ``` SELECT r.PRIMARY_FLAG FROM [dbo].[RGN] r JOIN (SELECT f.RGN_ID FROM [dbo].[RGN_FAC] f WHERE f.FAC_ID = 'my fac id' ) f ON f.RGN_ID = r.id; ``` Of course, if you want performance, then typically an index will help. For the first query: `RGN_FAC(RGN_ID, FAC_ID)` is the optimal index. For the versions using `JOIN` or `IN`, then you want `RGN_FAC(FAC_ID, RGN_ID)`.
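A quick runnable check of the `EXISTS` semi-join form, with invented sample data (SQLite here, but the query text is the same as the SQL Server version apart from the bracketed names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rgn (id INT, primary_flag TEXT);
CREATE TABLE rgn_fac (rgn_id INT, fac_id TEXT);
INSERT INTO rgn VALUES (1, 'Y'), (2, 'N'), (3, 'Y');
INSERT INTO rgn_fac VALUES (1, 'F1'), (2, 'F1'), (3, 'F2');
""")
# Flags of every region associated with facility 'F1'.
flags = [r[0] for r in con.execute("""
    SELECT r.primary_flag
    FROM rgn r
    WHERE EXISTS (SELECT 1 FROM rgn_fac f
                  WHERE f.fac_id = 'F1' AND f.rgn_id = r.id)
""")]
print(sorted(flags))  # ['N', 'Y'] -> regions 1 and 2 only
```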
``` SELECT [dbo].[RGN].PRIMARY_FLAG FROM [dbo].[RGN] JOIN (SELECT DISTINCT [dbo].[RGN_FAC].RGN_ID FROM [dbo].[RGN_FAC] WHERE [dbo].[RGN_FAC].FAC_ID = 'my fac id') ON [dbo].[RGN].ID = [dbo].[RGN_FAC].RGN_ID ``` The `DISTINCT` can perhaps be removed, depending on if duplicates may occur or not.
Achieving same results with join as with subquery
[ "", "sql", "sql-server", "" ]
I'm new to relational databases and MySQL. I am trying to develop a database for employees that logs all the times its employees access the system (shown by recording the timestamp of every time they access the system). So when the employee accesses the system, the current timestamp is recorded, and the next time they access it that current timestamp is also recorded. The idea is that I can go back and query how many times in a day (or week, and so on) an employee accessed the system, for any employee. So far I have: ``` EMP_ID | F_Name | L_Name | TimeStamp ------------------------------------- 1222 | joe | blogs | 12.03.22 1222 | joe | blogs | 12.44.34 1352 | carl | mansy | 19.33.22 ``` and so on. I would like to know if there is a way to have just one emp\_id show up with all the timestamps below, or do I need another table? Or can I just have the database like this? Obviously this will grow in size a lot, so would it be better to have a table for every emp\_id? Thanks in advance Jonny
Assuming all data is stored in a table named `my_log_table`, to see all timestamps for EMP\_ID 12222, query it like ``` select TimeStamp from my_log_table where EMP_ID = 12222 ``` Having all logs in the same table would be scalable and easier to use. Issues with having multiple log tables: * For each new user, you have to manually create a new table * Privileges have to be granted to this script/user to query the new table created for a new user * If EMP\_ID changes, then you have to track and change table names Moreover, you are not saving any space other than the EMP\_ID column of the combined log table
You should have 2 tables; the first one is the employee table ``` employee : EMP_ID | F_Name | L_Name ``` and the second one is the log table ``` employee_log : EMP_ID | TimeStamp ``` The first table will store the data of the employee; the second will store just the logs of this employee, and if you want to retrieve the logs you just need to join between these tables ``` select * from employee left join employee_log on employee.EMP_ID = employee_log.EMP_ID ```
mySQL table with EMP_ID and lots of timestamps
[ "", "mysql", "sql", "database", "" ]
I have created/am creating 3 tables: **Slaves**, **Citizens** and **Incidents**. How should I go about an **Incident** involving multiple **Citizens** and **Slaves**? Right now I'm thinking about making two fields in **Incidents** containing lists of CitizenIDs and SlaveIDs (SlaveID1, SlaveID2, ..., SlaveIDn), but it seems plain dumb.
Actually your idea doesn't sound dumb at all. You can design your `Incidents` table like this: ``` +------------+-----------+---------+ | IncidentID | CitizenID | SlaveID | +------------+-----------+---------+ | 1 | A | A | <-- incident #1 involved 2 citizens and 1 slave | 1 | B | A | | 2 | A | A | <-- incident #2 involved 2 citizens and 2 slaves | 2 | B | A | | 2 | A | B | | 2 | A | B | +------------+-----------+---------+ ``` Now when you query for a certain incident ID you can obtain a list of all citizens and slaves involved in the incident. This is a many-to-many relationship in your database schema.
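A runnable sketch of querying such a junction table (SQLite; `incident_party` is a hypothetical name for the table above, and the letter IDs are placeholders):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE incident_party (incident_id INT, citizen_id TEXT, slave_id TEXT);
INSERT INTO incident_party VALUES
  (1, 'A', 'A'), (1, 'B', 'A'),
  (2, 'A', 'A'), (2, 'B', 'A'), (2, 'A', 'B');
""")
# Everyone involved in incident #2, deduplicated per role.
citizens = [r[0] for r in con.execute(
    "SELECT DISTINCT citizen_id FROM incident_party "
    "WHERE incident_id = 2 ORDER BY citizen_id")]
slaves = [r[0] for r in con.execute(
    "SELECT DISTINCT slave_id FROM incident_party "
    "WHERE incident_id = 2 ORDER BY slave_id")]
print(citizens, slaves)  # ['A', 'B'] ['A', 'B']
```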
You can do it by making 2 bridge tables along with one master table ``` Master table _______________ Incidents Bridge tables ___________ incident_slave(pk of incidents table , slave information field(s) or pk of slave table) incident_citizen(pk of incidents table , citizen information field(s) or pk of citizen table) ```
Field involving multiple rows from another table
[ "", "sql", "database", "field", "" ]
We use the copy command to copy the data of one table to a file outside the database. Is it possible to copy the data of one table to another table using a command? If yes, can anyone please share the query? Or is there any better approach, like using pg\_dump or something like that?
You cannot easily do that, but there's also no need to do so. ``` CREATE TABLE mycopy AS SELECT * FROM mytable; ``` or ``` CREATE TABLE mycopy (LIKE mytable INCLUDING ALL); INSERT INTO mycopy SELECT * FROM mytable; ``` If you need to select only some columns or reorder them, you can do this: ``` INSERT INTO mycopy(colA, colB) SELECT col1, col2 FROM mytable; ``` You can also do a selective pg\_dump and restore of just the target table.
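`CREATE TABLE ... AS SELECT` works the same way in SQLite, which makes for an easy runnable check (keep in mind that in both engines this form copies data and column definitions, not constraints or indexes):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (id INT, val TEXT);
INSERT INTO mytable VALUES (1, 'a'), (2, 'b');
""")
# Copy structure-and-data in one statement.
con.execute("CREATE TABLE mycopy AS SELECT * FROM mytable")
n_src = con.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
n_copy = con.execute("SELECT COUNT(*) FROM mycopy").fetchone()[0]
print(n_src, n_copy)  # 2 2
```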
If the columns are the same (names and datatypes) in both tables then you can use the following ``` INSERT INTO receivingtable (SELECT * FROM sourcetable WHERE column1='parameter' AND column2='anotherparameter'); ```
How do I copy data from one table to another in postgres using copy command
[ "", "sql", "postgresql", "copy", "" ]
I have following sample data from a table ``` Id Dsc 500001 INSURED 500001 THIRD PARTY 500001 THIRD PARTY 500001 THIRD PARTY 500002 INSURED 500002 THIRD PARTY 500003 INSURED 500004 BROKER 500005 CLAIMANT ``` I wish to extract those Ids for which Dsc is neither 'Insured' nor 'Broker'. Since both columns have repetitive data, I have devised following query for this.. ``` Select Id from table1 where Dsc not in ('Insured', 'Broker') Except Select Id from table1 where Dsc in ('Insured', 'Broker') ``` Is there any alternate way to do this?
``` SELECT id FROM table1 GROUP BY id HAVING SUM(CASE WHEN Dsc='Insured' THEN 1 ELSE 0 END)=0 AND SUM(CASE WHEN Dsc='Broker' THEN 1 ELSE 0 END)=0 ```
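The `HAVING SUM(CASE ...)` trick is portable; running it against the question's sample data in SQLite confirms that only the Id with neither 'INSURED' nor 'BROKER' survives:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INT, dsc TEXT);
INSERT INTO t VALUES
 (500001, 'INSURED'), (500001, 'THIRD PARTY'),
 (500001, 'THIRD PARTY'), (500001, 'THIRD PARTY'),
 (500002, 'INSURED'), (500002, 'THIRD PARTY'),
 (500003, 'INSURED'), (500004, 'BROKER'), (500005, 'CLAIMANT');
""")
# An id qualifies only if it has zero 'INSURED' rows and zero 'BROKER' rows.
ids = [r[0] for r in con.execute("""
    SELECT id FROM t
    GROUP BY id
    HAVING SUM(CASE WHEN dsc = 'INSURED' THEN 1 ELSE 0 END) = 0
       AND SUM(CASE WHEN dsc = 'BROKER'  THEN 1 ELSE 0 END) = 0
""")]
print(ids)  # [500005]
```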
You can write a query as: ``` SELECT Id FROM ( SELECT Id , SUM(CASE WHEN Dsc IN ('INSURED','BROKER') THEN 1 ELSE 0 END ) AS Condition FROM @Test GROUP BY Id ) T WHERE Condition = 0 ``` `DEMO`
Filter a group in mysql
[ "", "sql", "sql-server", "sql-except", "" ]
I have a table `review_store` with two columns: `review_id` and `store_id` If I had to replace 1 with 2, I would do this: ``` UPDATE review_store SET store_id = '2' WHERE store_id = '1' ``` How do I copy/duplicate every row in which store\_ID = 1 to Store\_ID = 2 ?
I assume you mean create new records in the same table. Use [INSERT SELECT](https://dev.mysql.com/doc/refman/5.1/en/insert-select.html) ``` INSERT INTO review_store SELECT review_id, '2' as Store_id FROM review_store WHERE Store_id = '1' ```
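A quick end-to-end check of the `INSERT ... SELECT` duplication in SQLite (the review ids 10–12 are made-up sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE review_store (review_id INT, store_id INT);
INSERT INTO review_store VALUES (10, 1), (11, 1), (12, 3);
""")
# Duplicate every store-1 row as a store-2 row.
con.execute("""
    INSERT INTO review_store (review_id, store_id)
    SELECT review_id, 2 FROM review_store WHERE store_id = 1
""")
copied = con.execute(
    "SELECT review_id FROM review_store "
    "WHERE store_id = 2 ORDER BY review_id").fetchall()
print(copied)  # [(10,), (11,)]
```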
``` INSERT INTO review_store (review_id, store_id) SELECT review_id, 2 FROM review_store WHERE store_id = 1 ```
SQL query: Copy all rows with Store_ID = 1
[ "", "mysql", "sql", "magento", "review", "" ]
I am using a Firebird database. I have the below SQL, which concatenates the ShortCode column data, but without ordering as per the ORDER\_NUMBER column of the ABC table in the "WITH" clause. ``` With TBL_SHORT_CODE (SHORT_CODE, FK_KEY) As ( SELECT Distinct(XYZ.SHORT_CODE) As SHORT_CODE, ABC.FK_KEY From ABC Join XYZ On ABC.PK_KEY = XYZ.FK_KEY where XYZ.FK_KEY = '{009DA0F8-51EE-4207-86A6-7E18F96B983A}' And ABC.STATUS_CODE = 1 Order By ABC.ORDER_NUMBER ) SELECT LIST(Distinct(TBL_SHORT_CODE.SHORT_CODE), '' ), ABC.FK_BOM From ABC Join XYZ ON ABC.FK_KEY = XYZ.PK_KEY Join TBL_SHORT_CODE On TBL_SHORT_CODE.FK_KEY = ABC.FK_KEY where ABC.FK_BOM = '{009DA0F8-51EE-4207-86A6-7E18F96B983A}' And ABC.STATUS_CODE = 1 Group By ABC.FK_BOM ``` Thanks in advance. With best regards. Vishal
I got the problem solved in the Firebird Yahoo group. SQL: ``` EXECUTE BLOCK RETURNS (SHORT_CODES VARCHAR(2000), FK_BOM INTEGER) AS DECLARE VARIABLE SHORT_CODE1 VARCHAR(2000); DECLARE VARIABLE FK_BOM2 INTEGER; DECLARE VARIABLE DUMMY INTEGER; BEGIN FK_BOM = NULL; FOR With TBL_SHORT_CODE (SHORT_CODE, FK_KEY, ORDER_NUMBER) As (SELECT XYZ.SHORT_CODE, ABC.FK_KEY, min(ABC.ORDER_NUMBER) From ABC Join XYZ On ABC.PK_KEY = XYZ.FK_KEY where XYZ.FK_KEY = '{009DA0F8-51EE-4207-86A6-7E18F96B983A}' And ABC.STATUS_CODE = 1 group by 1, 2) SELECT ABC.FK_BOM, tsc.SHORT_CODE, min(tsc.ORDER_NUMBER) From ABC Join XYZ ON ABC.FK_KEY = XYZ.PK_KEY Join TBL_SHORT_CODE tsc On tsc.FK_KEY = ABC.FK_KEY where ABC.FK_BOM = '{009DA0F8-51EE-4207-86A6-7E18F96B983A}' And ABC.STATUS_CODE = 1 Group By 1, 2 ORDER BY 1, 3 into :FK_BOM2, :SHORT_CODE1, :DUMMY do begin if (FK_BOM2 > FK_BOM) then suspend; if (FK_BOM2 is distinct from FK_BOM) then begin FK_BOM = FK_BOM2; SHORT_CODES = ''; end SHORT_CODES = SHORT_CODES || SHORT_CODE1; end suspend; end ```
ORDER BY does not function in a common table expression, and your join to TBL\_SHORT\_CODE would gain no benefit from it anyway. If your ordering is simply to order the output, then alter your script to order in your final query: ``` With TBL_SHORT_CODE (SHORT_CODE, FK_KEY) As ( SELECT XYZ.SHORT_CODE As SHORT_CODE, ABC.FK_KEY From ABC Join XYZ On ABC.PK_KEY = XYZ.FK_KEY where XYZ.FK_KEY = '{009DA0F8-51EE-4207-86A6-7E18F96B983A}' And ABC.STATUS_CODE = 1 GROUP BY XYZ.SHORT_CODE, ABC.FK_KEY ) SELECT LIST(Distinct(TBL_SHORT_CODE.SHORT_CODE), '' ), ABC.FK_BOM From ABC Join XYZ ON ABC.FK_KEY = XYZ.PK_KEY Join TBL_SHORT_CODE On TBL_SHORT_CODE.FK_KEY = ABC.FK_KEY where ABC.FK_BOM = '{009DA0F8-51EE-4207-86A6-7E18F96B983A}' And ABC.STATUS_CODE = 1 Group By ABC.FK_BOM ```
Order By Not Working Using "WITH" Clause
[ "", "sql", "firebird", "" ]
I have a table in my database in Access 2013. ``` Table : city ID_city city 1 Tetuan 5 Rabat 9 Marrakech 10 Agadir 15 Laayoun ``` I wish to add the Rowid number beside them: ``` Rowid ID_city city 1 1 Tetuan 2 5 Rabat 3 9 Marrakech 4 10 Agadir 5 15 Laayoun ```
One way to do this is to use the `count` function in a subquery. Not sure it scales well though and there are probably better ways... ``` select (select count(*) from city where ID_city <= t1.ID_city) as row_number, * from city t1 ```
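Checking the correlated-count subquery against the question's sample data (SQLite here, but the same SQL runs in Access's Jet dialect apart from the bracketing):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE city (id_city INT, city TEXT);
INSERT INTO city VALUES (1, 'Tetuan'), (5, 'Rabat'), (9, 'Marrakech'),
                        (10, 'Agadir'), (15, 'Laayoun');
""")
# Row number = how many rows have an id less than or equal to this one.
numbered = con.execute("""
    SELECT (SELECT COUNT(*) FROM city c2
            WHERE c2.id_city <= c1.id_city) AS row_number,
           c1.id_city, c1.city
    FROM city c1
    ORDER BY c1.id_city
""").fetchall()
print(numbered[0], numbered[-1])  # (1, 1, 'Tetuan') (5, 15, 'Laayoun')
```

As the answer notes, this is O(n²) and relies on `id_city` being unique, so it does not scale to large tables.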
The best way to do it is by using a self-join... ``` SELECT COUNT(*) AS Rowid, C.ID_City, C.city FROM City C INNER JOIN City C1 ON C.ID_City >= C1.ID_City GROUP By C.ID_City, C.city ```
Adding a Row Number in Query
[ "", "sql", "ms-access", "vba", "" ]
There is H2 table: ``` CREATE TABLE IF NOT EXISTS sometable (ondate DATE NOT NULL); ``` With data ``` INSERT INTO sometable VALUES ('2015-07-07'); INSERT INTO sometable VALUES ('2014-07-07'); INSERT INTO sometable VALUES ('2013-07-07'); ``` I want to limit the selected data amount but **the following select doesn't work**. Why? ``` SELECT YEAR(CONVERT(ondate, TIMESTAMP)) AS yr FROM sometable WHERE yr = 2015 ``` *The error message is SELECT YEAR(CONVERT(ondate, TIMESTAMP)) AS yr FROM sometable WHERE yr = 2015; Column "YR" not found; SQL statement: SELECT YEAR(CONVERT(ondate, TIMESTAMP)) AS yr FROM sometable WHERE yr = 2015 [42122-176] 42S22/42122*
This isn't H2 specific; it will happen on many RDBMSs. The `yr` alias isn't really available to the WHERE clause — it only names the result of the CONVERT expression in the SELECT list. If you want to filter on it, wrap the entire statement in a derived table and query that, or repeat in the WHERE clause the same expression you use in the SELECT. For example: ``` SELECT * FROM (SELECT YEAR(CONVERT(ondate, TIMESTAMP)) AS yr FROM sometable) a WHERE a.yr = 2015 ``` OR ``` SELECT YEAR(CONVERT(ondate, TIMESTAMP)) AS yr FROM sometable WHERE YEAR(CONVERT(ondate, TIMESTAMP)) = 2015 ```
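The derived-table workaround can be sketched end to end. SQLite stands in for H2 below, and since SQLite has no `YEAR()`/`CONVERT()`, `strftime('%Y', ...)` is used as an assumed equivalent — the point is only that the alias becomes filterable once it comes from a subquery:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (ondate DATE NOT NULL)")
conn.executemany("INSERT INTO sometable VALUES (?)",
                 [("2015-07-07",), ("2014-07-07",), ("2013-07-07",)])

# The alias yr is computed in the inner query, so the outer WHERE can use it.
rows = conn.execute("""
    SELECT yr FROM (
        SELECT CAST(strftime('%Y', ondate) AS INTEGER) AS yr
        FROM sometable
    ) a
    WHERE a.yr = 2015
""").fetchall()

print(rows)  # [(2015,)]
```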
You can't refer to a column alias in the where clause. ``` SELECT YEAR(CONVERT(ondate, TIMESTAMP)) AS yr FROM sometable WHERE YEAR(CONVERT(ondate, TIMESTAMP)) = 2015 ```
H2 select expression column alias
[ "", "sql", "select", "h2", "column-alias", "" ]
I have table tbl0 and table tbl1. tbl0 has a primary key made from field "Ticker" that I created like this: ``` Sub CreatPrimaryKey() Dim db As Database Set db = CurrentDb db.Execute "CREATE INDEX TickerID ON tbl0 (Ticker) WITH PRIMARY;" db.Close End Sub ``` ...which worked fine. I confirm that I have the primary key by using this: ``` Sub GetPrimaryKeyField() Call PrimKey("tbl0") End Sub Public Sub PrimKey(tblName As String) 'get primary key of table 'how to use: Call PrimKey("tbl_DatedModel_2015_0702_0") 'http://bytes.com/topic/access/answers/679509-finding-primary-key-using-vba '******************************************* 'Purpose: Programmatically determine a ' table's primary key 'Coded by: raskew 'Inputs: from Northwind's debug window: ' Call PrimKey("Products") 'Output: "ProductID" '******************************************* Dim db As Database Dim td As TableDef Dim idxLoop As Index Set db = CurrentDb Set td = db.TableDefs(tblName) For Each idxLoop In td.Indexes If idxLoop.Primary = True Then Debug.Print Mid(idxLoop.Fields, 2) Exit For End If Next idxLoop db.Close Set db = Nothing End Sub ``` The immediate window prints "Ticker". I'm not sure what happened to "TickerID", but whatever. I get a PK. I then try to create a foreign key relationship between tbl0 and tbl1 by doing this: ``` Sub CreateForeignKey() Dim db As Database Set db = CurrentDb db.Execute "ALTER TABLE tbl1 " _ & "ADD CONSTRAINT fk_tbl1_tbl0 " _ & "FOREIGN KEY (Ticker) REFERENCES tbl0 (Ticker);" db.Close End Sub ``` When I run the above sub I get the error: "Invalid field definition 'Ticker' in definition of index or relationship" UPDATE: What makes this question different is that part of the issue I was having was that I needed to have the same field in both tables when I ALTER TABLE.
Your initial SQL statement creates an Index named "TickerID" on the field "Ticker." This is why the debug statement returns "Ticker" rather than "TickerID." Your Foreign Key SQL should be: ``` ALTER TABLE tbl1 ADD CONSTRAINT fk_tbl1_tbl0 FOREIGN KEY (Ticker) REFERENCES tbl0 (Ticker); ``` This assumes you have a field in tbl1 named "Ticker" that is the same type as tbl0.Ticker. The second line in this means the Foreign Key field you are creating references the related key field in the other table. Read it like this: The Foreign Key "Ticker" in the table I am altering (tbl1) references the Primary Key "Ticker" in the related table "tbl0". I use this routine, which you may find helpful. It does make some assumptions: 1) The Primary Key is always named {table\_name} + "Id" and 2) The Foreign Key is usually named the same thing. (both of these are common practice and advisable in my opinion). ``` Public Function CreateForeignKey( _ db As DAO.Database, _ ByVal sTable As String, _ ByVal sPrimaryTable As String, _ Optional ByVal sField As String) As Boolean Dim sSQL As String Dim sSuffix As String On Error GoTo EH If sField = "" Then sField = sPrimaryTable & "Id" Else sSuffix = "_" & sField End If sSQL = "ALTER TABLE [" & sTable & "]" _ & " ADD CONSTRAINT FK_" & sTable & "_" & sPrimaryTable & sSuffix _ & " FOREIGN KEY([" & sField & "])" _ & " REFERENCES [" & sPrimaryTable & "] ([" & sPrimaryTable & "Id]);" db.Execute sSQL, dbFailOnError CreateForeignKey = True Exit Function EH: MsgBox "Error " & Err.Number & vbCrLf _ & " (" & Err.Source & vbCrLf _ & " (" & Err.Description _ & ") in procedure CreateForeignKey of Module Database" End Function ``` References: * [CONSTRAINT Clause (Microsoft Access SQL)](https://msdn.microsoft.com/en-us/library/bb177889(v=office.12).aspx) * [INDEX Clause (Microsoft Access SQL)](https://msdn.microsoft.com/en-us/library/bb177891(v=office.12).aspx)
I think your sql should instead be this: ``` ALTER TABLE tbl1 ADD CONSTRAINT fk_tbl1_tbl0 FOREIGN KEY (TickerId) REFERENCES tbl0 (Ticker); ``` The parentId is tickerID and the child is ticker.
Create foreign key in MS Access VBA
[ "", "sql", "vba", "ms-access", "ms-access-2013", "" ]
We have a table which contains a column with datatype `nvarchar(254)`. This column stores date and time in the following format: ``` 12-MAR-15 04.07.36.000000000 PM ``` We need to convert this into `datetime` format so that we can run time-dependent queries to extract the required results. This data is not stored in any of the formats mentioned on the [CAST and CONVERT documentation page](https://msdn.microsoft.com/en-us/library/ms187928.aspx). I tried with `130` and `131` but no luck. The format here seems to be `dd-mon-yy hh.mm.ss.nnnnnnnnn PM` (a nine-digit fractional second). Does anyone know how we should go about converting this into 'datetime' format? Any help would be much appreciated.
First of all, it is a bad idea to store dates as strings. [This is one way](http://sqlfiddle.com/#!3/9eecb7/1550) to convert them back to datetime using the `STUFF` and `REPLACE` functions. I assume all your dates are after the year 2000. ``` declare @d nvarchar(256) = '12-MAR-15 04.07.36.000000000 PM' SELECT CONVERT(DATETIME, STUFF( REPLACE(REPLACE( STUFF(@d, 23, 7, ''), '-',' '),'.',':'), 8,0,'20') ) ```
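As a quick sanity check (not part of the original answer), the same string parses in Python once the nine-digit fraction is trimmed — `strptime`'s `%f` accepts at most six digits:

```python
from datetime import datetime

s = "12-MAR-15 04.07.36.000000000 PM"

# Split out the pieces and trim the nine-digit fraction down to six digits,
# since %f in strptime accepts at most six.
date_part, time_part, meridiem = s.split(" ")
hh, mm, ss, frac = time_part.split(".")
trimmed = f"{date_part} {hh}.{mm}.{ss}.{frac[:6]} {meridiem}"

parsed = datetime.strptime(trimmed, "%d-%b-%y %I.%M.%S.%f %p")
print(parsed)  # 2015-03-12 16:07:36
```

This confirms the target value the T-SQL string surgery should produce: 2015-03-12 16:07:36.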
Basically you need to convert your date string into one of those styles supported by SQL Server. In the following example, I target date style 109 (`mon dd yyyy hh:mi:ss:mmmAM`): ``` -- character position 0 1 2 -- 1234567890123456789012345678 DECLARE @DATESTR AS NVARCHAR(254) = '12-MAR-15 04.07.36.000000000 PM' SELECT CONVERT(DATETIME, SUBSTRING(@DATESTR, 4, 3) + ' ' + -- mon SUBSTRING(@DATESTR, 1, 2) + ' ' + -- dd SUBSTRING(@DATESTR, 8, 2) + ' ' + -- yy SUBSTRING(@DATESTR, 11, 2) + ':' + -- hh SUBSTRING(@DATESTR, 14, 2) + ':' + -- mi SUBSTRING(@DATESTR, 17, 2) + ':' + -- ss SUBSTRING(@DATESTR, 20, 3) + -- mmm RIGHT(@DATESTR, 2) -- am/pm , 109) AS Converted -- 2015-03-12 16:07:36.000 ```
converting nvarchar to datetime
[ "", "sql", "sql-server", "date", "datetime", "sql-server-2012", "" ]
I want to update a column for every row of a table with a value from a given set of values. So for example: ``` id name code --------------------- 1 n1 2 n2 3 n3 ``` And I have this array of values ['code-1','code-2','code-3'] that I want to assign row by row, so for row one the value of the code column will be 'code-1' from the array, for row two it will be 'code-2', and for row three it will be 'code-3'. So the final table will look like this: ``` id name code --------------------- 1 n1 code-1 2 n2 code-2 3 n3 code-3 ``` How can I do this in one SQL query?
You could do something like this: ``` update table t set code = concat('code-', id) where id in (1, 2, 3); ``` If the codes aren't really tied to the ids, you can use a `case`: ``` update table t set code = (case when id = 1 then 'code-1' when id = 2 then 'code-2' when id = 3 then 'code-3' end) where id in (1, 2, 3); ```
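A runnable sketch of the `CASE` variant, using SQLite in place of MySQL (table and values from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, code TEXT)")
conn.executemany("INSERT INTO t (id, name) VALUES (?, ?)",
                 [(1, "n1"), (2, "n2"), (3, "n3")])

# One UPDATE statement maps each id to its value from the array.
conn.execute("""
    UPDATE t
    SET code = CASE WHEN id = 1 THEN 'code-1'
                    WHEN id = 2 THEN 'code-2'
                    WHEN id = 3 THEN 'code-3'
               END
    WHERE id IN (1, 2, 3)
""")

rows = conn.execute("SELECT id, name, code FROM t ORDER BY id").fetchall()
print(rows)
```

In application code you would normally generate the `WHEN` arms from the array programmatically rather than typing them out.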
Create a new table (temp) to hold the array, with a schema like this: ``` id code ------------- 1 code-1 2 code-2 3 code-3 ``` and update your original table with this update statement (using MySQL's multi-table UPDATE syntax): ``` UPDATE original INNER JOIN temp ON original.id = temp.id SET original.code = temp.code ```
mysql how to update a column of every row with a given set of values
[ "", "mysql", "sql", "arrays", "sql-update", "" ]
I am using AngularJS to display my result set in a table that is filterable, with totals etc. I was wondering if it would be possible to display a value based on another value if it is `=` to something. Here is an example: ``` <tr ng-repeat="c in data"> <td>{{c.type}}</td> //expects either 'fixed' or 'hourly' <td>{{c.fixed_rate}}</td> <td>{{c.hourly_rate}}</td> </tr> ``` Therefore, are you able to only display the fixed value if the type is fixed, and the hourly value if the type is hourly, without using any jQuery to hide elements? My mind is kind of stumped on this as I am just a few months in with Angular. Data is pulled from a database, so if there is an SQL option I am all for it.
You could do something like this: ``` <tr ng-repeat="c in data | filter: filterHourlyOrFixed"> <td>{{c.type}}</td> <td ng-if="c.type == 'fixed'">{{c.fixed_rate}}</td> <td ng-if="c.type == 'hourly'">{{c.hourly_rate}}</td> </tr> ``` If you want to filter by only those two values, add this function to your controller: ``` $scope.filterHourlyOrFixed = function (item) { return item.type === 'fixed' || item.type === 'hourly'; }; ``` If you do not want to filter by the value, remove `| filter: filterHourlyOrFixed` from the ng-repeat. Also, when you have some time, do a little reading through the docs for [ngIf](https://docs.angularjs.org/api/ng/directive/ngIf), [ngShow](https://docs.angularjs.org/api/ng/directive/ngShow), [ngHide](https://docs.angularjs.org/api/ng/directive/ngHide), and [ngFilter](https://docs.angularjs.org/api/ng/filter/filter). You'll probably be using these repeatedly. Also, pay attention to the differences in how `ng-if`, `ng-show`, and `ng-hide` manipulate the DOM to achieve similar results. The differences are subtle, but important.
Use `ng-if` to remove or recreate parts of the DOM based on an expression. for example: ``` <tr ng-repeat="c in data"> <td>{{c.type}}</td> //expects either 'fixed' or 'hourly' <td ng-if="c.type=='fixed'">{{c.fixed_rate}}</td> <td ng-if="c.type=='hourly'">{{c.hourly_rate}}</td> </tr> ``` You could also use a filter, if you are rendering identical DOM but want to exclude certain elements.
Angular-repeat display column IF equal to a value
[ "", "sql", "angularjs", "" ]
``` class Product < ActiveRecord::Base belongs_to :category has_many :order_items, dependent: :destroy end class OrderItem < ActiveRecord::Base belongs_to :order belongs_to :product end ``` I need to list all products with their sum of quantity from order_items and the sum of their total_price ``` Product id name 1 product_1 OrderItem product_id order_id quantity total_price 1 1 10 200 1 2 10 200 for example, the expected output should be name quantity total_price product_1 20 400 ```
``` select p.name, sum(o.quantity) as quantity, sum(o.total_price) as total_price from Product p join OrderItem o on p.id = o.product_id group by p.name ```
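The join-plus-aggregate can be verified against the question's sample data; SQLite stands in for the Rails database here, and the Rails-style plural table names are an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("""CREATE TABLE order_items
                (product_id INTEGER, order_id INTEGER,
                 quantity INTEGER, total_price INTEGER)""")
conn.execute("INSERT INTO products VALUES (1, 'product_1')")
conn.executemany("INSERT INTO order_items VALUES (?, ?, ?, ?)",
                 [(1, 1, 10, 200), (1, 2, 10, 200)])

# Sum quantity and total_price per product name.
rows = conn.execute("""
    SELECT p.name,
           SUM(o.quantity)    AS quantity,
           SUM(o.total_price) AS total_price
    FROM products p
    JOIN order_items o ON p.id = o.product_id
    GROUP BY p.name
""").fetchall()

print(rows)  # [('product_1', 20, 400)]
```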
Try this Active Record query. Just verify your column/table names and associations. You can use it like: ``` OrderItem.joins(:product).select("products.name as name, sum(total_price) as total_price, sum(quantity) as total_quantity").group("order_items.product_id").as_json ```
has_many with sum active record
[ "", "mysql", "sql", "ruby-on-rails", "activerecord", "" ]
I have a SQL query where I am getting data from tables User, UserDetails and UserData. I also have to check the table ConditionCheck for any entries for this particular user. This table can have multiple condition checks for each user. If the table ConditionCheck contains even one entry of 2 or 3, I don't return any user data. I wrote the query as follows: ``` select A.Column1, A.Column2, B.Column1, isnull(D.Column1, '') from User A WITH (NOLOCK) inner join UserDetails B WITH (NOLOCK) on(B.id = A.id) left join UserData C WITH (NOLOCK) on (C.uid = B.uid) left join ConditionCheck CC WITH (NOLOCK) on(CC.S_id = B.S_id) left outer join MoreData D WITH (NOLOCK) on (D.id = A.id) where A.Column1 = 'ABC' and CC.T_id not in(2, 3) ``` If a user has rows with entries 1,2,4,5 in CC, I don't want to return the user details, because of condition 2 existing. But this query returns user details if the user has rows other than 2 or 3 existing.
You will want to add a `NOT EXISTS` clause to your query, and remove the `LEFT JOIN` to your ConditionalCheck table, since you're not actually doing anything with the data there: ``` Select A.Column1, A.Column2, B.Column1, IsNull(D.Column1, '') From User A With (NoLock) Inner Join UserDetails B With (NoLock) On (B.id = A.id) Left Join UserData C With (NoLock) On (C.uid = B.uid) Left Join MoreData D With (NoLock) On (D.id = A.id) Where A.Column1 = 'ABC' And Not Exists ( Select * From ConditionalCheck CC Where CC.S_id = B.S_id And CC.T_id In (2,3) ) ``` As a side note, for the context of the query provided, the `LEFT JOIN` to the *UserData* table is also unnecessary.
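Here is a trimmed-down, runnable illustration of why the `NOT EXISTS` excludes the whole user; the schema is cut to the two tables that matter, with hypothetical names and data, and SQLite standing in for SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE UserDetails (S_id INTEGER, name TEXT)")
conn.execute("CREATE TABLE ConditionCheck (S_id INTEGER, T_id INTEGER)")
conn.executemany("INSERT INTO UserDetails VALUES (?, ?)",
                 [(1, "blocked_user"), (2, "ok_user")])
# User 1 has condition checks 1, 2, 4, 5 -- the 2 must exclude them entirely.
conn.executemany("INSERT INTO ConditionCheck VALUES (?, ?)",
                 [(1, 1), (1, 2), (1, 4), (1, 5), (2, 1)])

rows = conn.execute("""
    SELECT B.name
    FROM UserDetails B
    WHERE NOT EXISTS (
        SELECT 1 FROM ConditionCheck CC
        WHERE CC.S_id = B.S_id AND CC.T_id IN (2, 3)
    )
""").fetchall()

print(rows)  # [('ok_user',)]
```

A plain `CC.T_id NOT IN (2, 3)` filter would instead keep user 1's rows 1, 4 and 5, which is exactly the behaviour the question describes.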
As you speculated in your title, it's probably better to do this with `EXISTS` than to have a proper join to the `ConditionCheck` table. Even if your `WHERE` clause were doing what you want, you'd still have the problem that a user with multiple records in the `ConditionCheck` table would appear multiple times in your result set. Try something like this: ``` select A.Column1, A.Column2, B.Column1, isnull(D.Column1, '') from User A WITH (NOLOCK) inner join UserDetails B WITH (NOLOCK) on(B.id = A.id) left join UserData C WITH (NOLOCK) on (C.uid = B.uid) left outer join MoreData D WITH (NOLOCK) on (D.id = A.id) where A.Column1 = 'ABC' and not exists ( select 1 from ConditionCheck CC with (nolock) where CC.S_id = B.S_id and CC.T_id in (2, 3) ); ```
SQL query with multiple joins (And Exists?)
[ "", "sql", "sql-server", "join", "" ]
I've tried more or less all combinations of `count` and `distinct` (except the correct one :) ) in order to get the example below. Input: table t1 ``` NAME | FOOD Mary | Apple Mary | Banana Mary | Apple Mary | Strawberry John | Cherries ``` Expected output: ``` NAME | FOOD Mary | 3 John | 1 ``` N.B. Mary has Apple in two rows but she has 3 as we have 3 different values in the column. I only managed to get 4 in FOOD Column for her, but I need 3 :(
``` select a.NAME, count(a.FOOD) as FOOD from (SELECT distinct NAME, FOOD from t1) a group by a.NAME ```
Start with a query which gives you unique combinations of *NAME* and *FOOD*: ``` SELECT DISTINCT t1.NAME, t1.FOOD FROM t1 ``` Then you can use that as a subquery in another where you can `GROUP BY` and `Count`: ``` SELECT sub.NAME, Count(*) AS [FOOD] FROM ( SELECT DISTINCT t1.NAME, t1.FOOD FROM t1 ) AS sub GROUP BY sub.NAME; ```
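The subquery approach (needed because Access notably lacks `COUNT(DISTINCT ...)`) can be checked with the question's data; SQLite stands in for Access here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (NAME TEXT, FOOD TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [("Mary", "Apple"), ("Mary", "Banana"), ("Mary", "Apple"),
                  ("Mary", "Strawberry"), ("John", "Cherries")])

# Deduplicate (NAME, FOOD) pairs first, then count rows per NAME.
rows = conn.execute("""
    SELECT sub.NAME, COUNT(*) AS FOOD
    FROM (SELECT DISTINCT NAME, FOOD FROM t1) AS sub
    GROUP BY sub.NAME
    ORDER BY FOOD DESC
""").fetchall()

print(rows)  # [('Mary', 3), ('John', 1)]
```

Mary's duplicate Apple row is collapsed by the inner `DISTINCT`, which is why she counts as 3 rather than 4.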
Get number of different values in a column in Access
[ "", "sql", "ms-access", "count", "distinct", "" ]
I have this example table, with example data: ``` +-------------------+-----------------+------------+------------+ | OriginalBeginDate | OriginalEndDate | Start Date | End Date | +-------------------+-----------------+------------+------------+ | 2015-06-01 | 2015-06-30 | 2015-08-01 | 2015-08-31 | | 2015-06-01 | 2015-06-30 | 2015-09-01 | 2015-09-30 | | 2015-06-01 | 2015-06-30 | 2015-10-01 | 2015-10-31 | | 2015-06-01 | 2015-06-30 | 2015-11-01 | 2015-11-30 | | 2015-06-01 | 2015-06-30 | 2015-12-01 | 2015-12-31 | | 2015-07-01 | 2015-12-31 | 2015-08-01 | 2015-08-31 | | 2015-07-01 | 2015-12-31 | 2015-09-01 | 2015-09-30 | | 2015-07-01 | 2015-12-31 | 2015-10-01 | 2015-10-31 | | 2015-07-01 | 2015-12-31 | 2015-11-01 | 2015-11-30 | | 2015-07-01 | 2015-12-31 | 2015-12-01 | 2015-12-31 | +-------------------+-----------------+------------+------------+ ``` If you notice, there is a set of 5 'Start Date's and 'End Date's repeated because the 'OriginalBeginDate' and 'OriginalEndDate' has two 'sets'. I need to have only one set of the last two columns (distinct values of them). This is what the output needs to look like: ``` | OriginalBeginDate | OriginalEndDate | Start Date | End Date | +-------------------+-----------------+------------+------------+ | 2015-06-01 | 2015-06-30 | 2015-08-01 | 2015-08-31 | | 2015-06-01 | 2015-06-30 | 2015-09-01 | 2015-09-30 | | 2015-06-01 | 2015-06-30 | 2015-10-01 | 2015-10-31 | | 2015-06-01 | 2015-06-30 | 2015-11-01 | 2015-11-30 | | 2015-06-01 | 2015-06-30 | 2015-12-01 | 2015-12-31 | +-------------------+-----------------+------------+------------+ ``` If I run ``` SELECT DISTINCT * FROM myTable ``` I get the whole table. If I run ``` SELECT DISTINCT [START DATE], [END DATE] FROM myTable ``` I don't get all 4 columns, which, I need. Any help would be appreciated!
Just add some grouping with `MIN` to take the lowest values of the other columns, which seems like what you're after: ``` SELECT MIN(OriginalBeginDate) as OriginalBeginDate, MIN(OriginalEndDate) as OriginalEndDate, [START DATE], [END DATE] FROM myTable GROUP BY [START DATE], [END DATE] ```
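A small runnable check of the `MIN` + `GROUP BY` de-duplication, using SQLite and a subset of the question's rows (ISO date strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE myTable
                (OriginalBeginDate TEXT, OriginalEndDate TEXT,
                 StartDate TEXT, EndDate TEXT)""")
conn.executemany("INSERT INTO myTable VALUES (?, ?, ?, ?)", [
    ("2015-06-01", "2015-06-30", "2015-08-01", "2015-08-31"),
    ("2015-07-01", "2015-12-31", "2015-08-01", "2015-08-31"),
    ("2015-06-01", "2015-06-30", "2015-09-01", "2015-09-30"),
    ("2015-07-01", "2015-12-31", "2015-09-01", "2015-09-30"),
])

# One row per (StartDate, EndDate) pair, keeping the lowest original dates.
rows = conn.execute("""
    SELECT MIN(OriginalBeginDate), MIN(OriginalEndDate), StartDate, EndDate
    FROM myTable
    GROUP BY StartDate, EndDate
    ORDER BY StartDate
""").fetchall()

print(rows)
```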
You can try below by grouping data based on start date and end date ``` DECLARE @mytable TABLE ( OriginalBeginDate DATE , OriginalEndDate DATE NOT NULL , Start_Date DATE NOT NULL , End_Date DATE NOT NULL ); INSERT INTO @mytable(OriginalBeginDate,OriginalEndDate,Start_Date,End_Date) VALUES ('2015-06-01','2015-06-30','2015-08-01','2015-08-31') , ('2015-06-01','2015-06-30','2015-09-01','2015-09-30') , ('2015-06-01','2015-06-30','2015-10-01','2015-10-31') , ('2015-06-01','2015-06-30','2015-11-01','2015-11-30') , ('2015-06-01','2015-06-30','2015-12-01','2015-12-31') , ('2015-07-01','2015-12-31','2015-08-01','2015-08-31') , ('2015-07-01','2015-12-31','2015-09-01','2015-09-30') , ('2015-07-01','2015-12-31','2015-10-01','2015-10-31') , ('2015-07-01','2015-12-31','2015-11-01','2015-11-30') , ('2015-07-01','2015-12-31','2015-12-01','2015-12-31') SELECT MIN(OriginalBeginDate), MIN(OriginalEndDate), Start_Date, End_Date FROM @mytable GROUP BY Start_Date, End_Date ```
Selecting a whole row with specific distinct columns
[ "", "sql", "sql-server", "t-sql", "" ]
I have a database table with three columns: `WeekNumber`, `ProductName`, `SalesCount`. Sample data is shown in the table below. I want the top 10 gainers (by %) for week 26 over the previous week, i.e. week 25. The only condition is that the product should have a sales count greater than 0 in both weeks. In the sample data B, C, D are the common products and C has the highest % gain. Similarly, I will need the top 10 losers also. What I have tried till now is to make an inner join and get the common products between the two weeks. However, I am not able to get the top gainers logic. ![enter image description here](https://i.stack.imgur.com/YZwBH.png) The output should be like ``` Product PercentGain C 400% D 12.5% B 10% ```
This will give you a generic answer, not just for one particular week: ``` select top 10 product , gain [gain%] from ( SELECT curr.product, (curr.salescount - prev.salescount) * 100.0 / prev.salescount gain from (select weeknumber, product, salescount from tbl) prev JOIN (select weeknumber, product, salescount from tbl) curr on prev.weeknumber = curr.weeknumber - 1 AND prev.product = curr.product where prev.salescount > 0 and curr.salescount > 0 )A order by gain desc ``` Note the `100.0`: it forces decimal arithmetic, so an integer `salescount` column does not truncate the percentage to 0. If you are interested in weeks 25 and 26, then just add the condition below in the `WHERE` clause: ``` and prev.weeknumber = 25 ```
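A runnable sketch of the week-over-week comparison, using SQLite and hypothetical sales numbers chosen to reproduce the gains in the question's expected output (the real figures are in the image, so these are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (WeekNumber INTEGER, ProductName TEXT, SalesCount INTEGER)")
# Hypothetical numbers picked to reproduce the question's expected gains.
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)", [
    (25, "B", 100), (26, "B", 110),
    (25, "C", 100), (26, "C", 500),
    (25, "D", 800), (26, "D", 900),
    (25, "A", 0),   (26, "A", 50),   # excluded: zero sales in week 25
])

rows = conn.execute("""
    SELECT curr.ProductName,
           (curr.SalesCount - prev.SalesCount) * 100.0 / prev.SalesCount AS gain
    FROM tbl prev
    JOIN tbl curr
      ON prev.WeekNumber = curr.WeekNumber - 1
     AND prev.ProductName = curr.ProductName
    WHERE prev.SalesCount > 0 AND curr.SalesCount > 0
      AND curr.WeekNumber = 26
    ORDER BY gain DESC
    LIMIT 10
""").fetchall()

print(rows)  # [('C', 400.0), ('D', 12.5), ('B', 10.0)]
```

For the top 10 losers, flip the sort to `ORDER BY gain ASC`.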
This is the query: ``` select top 10 product , gain [gain%] from ( SELECT curr.Product, ((curr.Sales - prev.Sales) * 100.0) / prev.Sales gain from (select weeknumber, product, sales from ProductInfo where weeknumber = 25 ) prev JOIN (select weeknumber, product, sales from ProductInfo where weeknumber = 26 ) curr on prev.product = curr.product where prev.Sales > 0 and curr.Sales > 0 )A order by gain desc ```
Query to get top product gainers by sales over previous week
[ "", "sql", "sql-server", "select", "top-n", "" ]
I have an address column in a table where I want to replace a certain word. But my query also replaces partial matches: ``` select replace('the bthe the them', 'the', 'abc') ``` for above example, desired output should be: ``` abc bthe abc them ``` but the output is: ``` abc babc abc abcm ``` How to solve this?
``` SELECT REPLACE(' ' + column + ' ', ' the ', ' abc ') FROM table ``` output: ``` abc bthe abc them ``` I was assuming that you would be selecting from a table when replacing the values in the strings. So 'column' is the column in your table you are selecting from that contains the value you are wanting to use the replace function on.
Use ``` SELECT regexp_replace('the bthe the them', '\ythe\y', 'abc', 'g') ``` `\y` matches a word boundary; the `g` flag replaces all occurrences.
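The same word-boundary idea carries over to most regex engines; in Python's `re`, `\b` plays the role of Postgres's `\y`:

```python
import re

s = "the bthe the them"

# \b is a word boundary, so 'bthe' and 'them' are left untouched.
result = re.sub(r"\bthe\b", "abc", s)
print(result)  # abc bthe abc them
```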
Replace multiple instances of exact word in a string
[ "", "sql", "postgresql", "pattern-matching", "" ]
The table is this: ``` Id_P Id_Utente PasswordOld_ DateOld_ 1 134 E0476F2E85A84FB4E68AA26A841A86FA 8/01/2015 10:30:00 PM 2 134 9C454981DE1702C7AAD3B435B51404EE 8/02/2015 10:30:00 PM 3 134 BA0D9BE25565C34CAAD3B435B51404EE 8/03/2015 10:30:00 PM 4 134 9C6C9E34FB63DC9DE68AA26A841A86FA 8/04/2015 10:30:00 PM 5 134 14BEE187F918F8817584248B8D2C9F9E 8/04/2015 10:30:00 PM 6 135 9A70F4507624037CAAD3B435B51404EE 15/01/2015 10:30:00 PM 7 135 C3DB775AE3B3BD29E68AA26A841A86FA 15/02/2015 10:30:00 PM 8 135 0BB76B9CA33D8E31AAD3B435B51404EE 15/03/2015 10:30:00 PM 9 136 9A70F4507624037CAAD3B435B51404EE 26/01/2015 10:30:00 PM 10 136 C3DB775AE3B3BD29E68AA26A841A86FA 26/02/2015 10:30:00 PM 11 136 0BB76B9CA33D8E31AAD3B435B51404EE 26/03/2015 10:30:00 PM ```
As your `DateOld_` field is of type `ntext`, you cannot do an `order by` on that. To sort the results based on that column, you need to convert the values in to `datetime` format. You can execute the following query to fetch 5 rows with id\_utente = 134 and sorted in descending order as per the time: ``` select top 5 * from table1 where id_utente = 134 order by convert(datetime, convert(varchar, dateold_)) desc; ``` [SQLFiddle Example](http://sqlfiddle.com/#!3/138a7f/4)
You can use limit or top. Try this: ``` SELECT TOP 5 * FROM yourTablename ORDER BY DateOld_ DESC, Id_P DESC; ``` **SELECT LIMIT is not supported in all SQL databases**. But you can use the query above for SQL Server or MSAccess and the below query for mysql specifically. ``` SELECT * FROM yourTablename ORDER BY DateOld_ DESC LIMIT 5; ``` For more info, you can visit [this link](http://www.w3schools.com/sql/sql_top.asp).
How to select top (5) rows with specified Id and latest date?
[ "", "sql", "sql-server", "datetime", "" ]
I came across this today and accident and was wondering a few things. The basic code setup is ``` Begin Transaction Update Table set column to value Begin transaction Update Table set column to value ``` I have played with it a little and found you can not do a commit after a doing a rollback, but you can do a commit before a rollback, however the rollback negates the commit. I guess my question is, is there any purpose/use for this? I see not other then making my DBA slap me for bad code lol
The short answer is that the intent behind the design of nested transactions is to allow you to code reusable procedures (or blocks of code) where the 2 following situations can be handled automatically without having to write the code differently for both cases: * You can start and end a transaction if none has been started yet. * Or, if a transaction is already in progress, then you just participate in the on-going transaction. So let's say you like to code all your reusable procedures in a transactional manner, like this (pseudo code): ``` create procedure Foo begin transaction perform DML 1 perform DML 2 perform DML 3 -- other stuff commit transaction end procedure create procedure Blah begin transaction perform DML 1 perform DML 2 perform DML 3 -- other stuff commit transaction end procedure ``` But now, let's say that you now need the `Blah` procedure to incorporate what `Foo` does. Obviously, you wouldn't want to copy-paste the contents of `Foo` in `Blah`. A simple call to `Foo` makes more sense for reusability's sake, like this: ``` create procedure Blah begin transaction perform DML 1 perform DML 2 -- include a call to Foo here Foo(); perform DML 3 -- other stuff commit transaction end procedure ``` In the above case, without any changes to `Foo`'s code, the call to `Blah` will still behave as one big transaction, which is probably what you want. It's exactly for cases like these that ***inner*** `commit`s don't actually do anything. They really only serve the purpose of flagging that everything was ok up until that point. But the real commit only happens when the outer transaction commits everything. Imagine if every commit actually committed the transaction, then, to ensure that you don't corrupt the outer transaction, you would have to add extra conditions at the beginning of every procedure to check if a transaction is already started, and only start one if none is found. 
So, every procedure would have to be coded something like this to ensure it's safe for calling within other procedures: ``` create procedure Foo didIStartATransaction = false if @@trancount = 0 then begin transaction didIStartATransaction = true end if perform DML 1 perform DML 2 perform DML 3 -- other stuff if didIStartATransaction then commit transaction end if end procedure create procedure Blah didIStartATransaction = false if @@trancount = 0 then begin transaction didIStartATransaction = true end if perform DML 1 perform DML 2 perform DML 3 -- other stuff if didIStartATransaction then commit transaction end if end procedure ``` That said, nested transactions can still be dangerous if one of the procedures forgets to symmetrically start and commit a transaction. And personally, I prefer to not have any transaction control statements in any of my procedures, and just have the calling code manage the transaction. I feel a lot safer that way.
Please take a look at `SAVEPOINT` syntax. This allows you to set points in a transaction you can rollback to. <https://msdn.microsoft.com/en-us/library/ms188378.aspx> Within there lies your answer :)
Multiple Begin Transactions
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have the following data ``` PROD_NO PROD_CAT PROD_DESCRIPTION X23 PENS N/A X23 PENCIL in warehouse X23 INK " X30 BOOKS " X30 DRAWINGS not in warehouse X30 ERASERS " ``` What I would like to achieve is if PROD\_DESCRIPTION is having N/A or ", then I would like those rows to be filled with comments exist for that prod\_no. e.g. for `X23` **in warehouse** to be filled for all rows. For `X30` **not in warehouse** to be filled for all rows. ``` PROD_NO PROD_CAT PROD_DESCRIPTION X23 PENS in warehouse X23 PENCIL in warehouse X23 INK in warehouse X30 BOOKS not in warehouse X30 DRAWINGS not in warehouse X30 ERASERS not in warehouse ``` How can I do this? There are many prod\_no and prod\_description varies for each prod\_no
You should make a JOIN table with MAXIMUM PROD\_DESCRIPTION for each PROD\_NO and then use DECODE to output this value for NULL,'' or 'N/A': ``` SELECT T.PROD_NO, T.PROD_CAT, DECODE (T.PROD_DESCRIPTION, NULL,TMAX.PROD_DESC_MAX, '',TMAX.PROD_DESC_MAX, 'N/A',TMAX.PROD_DESC_MAX, T.PROD_DESCRIPTION) PROD_DESCRIPTION FROM T LEFT JOIN (SELECT PROD_NO, MAX(PROD_DESCRIPTION) PROD_DESC_MAX FROM T GROUP BY PROD_NO) TMAX ON T.PROD_NO = TMAX.PROD_NO ``` `SQLFiddle demo`
Try this and let me know ``` UPDATE YOUR_TABLE SET PROD_DESCRIPTION=decode(PROD_NO,'X23','in warehouse','X30','not in warehouse') WHERE PROD_DESCRIPTION='N/A' or PROD_DESCRIPTION='' ``` **OR** This should also work. ``` UPDATE YOUR_TABLE SET PROD_DESCRIPTION= CASE WHEN PROD_NO='X23' THEN 'in warehouse' WHEN PROD_NO='X30' THEN 'not in warehouse' END WHERE PROD_DESCRIPTION IN('N/A','') ``` **Edited** Assuming there are only two types of description for all production no. Pass all prod\_no in `IN()` accordingly. ``` UPDATE YOUR_TABLE SET PROD_DESCRIPTION= CASE WHEN PROD_NO in('X23','X25',... so on) THEN 'in warehouse' WHEN PROD_NO in('X30','X31',.... so on) THEN 'not in warehouse' END WHERE PROD_DESCRIPTION IN('N/A','') ```
Column to be filled based on condition
[ "", "sql", "oracle", "" ]
I'm expiriencing some strange behaviour in a SQL statement which is using a `CASE`. It is reproducable with the following example: ``` SELECT CASE WHEN 1=1 THEN 'foo' WHEN 1=2 THEN (DATEADD(s, 1435586700, '01/01/1970 00:00:00')) WHEN 1=3 THEN (DATEADD(s, 1435586700, '01/01/1970 00:00:00')) ELSE 'bar' END AS result ``` If you execute this on an SQL Server 2008 it returns no result. As soon as you change it for the second or third case to succeed, it returns a value. If you remove those cases completely, it works just fine. At first I thought it was a problem with `DATEADD` being executed in any case and failing because of passed parameter values. But in my example they are constant and should work in any case. What is going on here?
`case` is an expression whose type is determined at compile-time. However, the values are not actually evaluated until run-time. By the rules of type-conversion, I think that the expression will return a `datetime`. That results in an error for the `'foo'` and `'bar'` values -- when the `when` conditions are met.
You have different types in the various options. This is simply wrong - the case *expression* must have a single type. In practice, this probably tries to convert the result to `datetime`. And it can't convert `'foo'` to `datetime`. As for your note about the `dateadd` failing, that wouldn't be a problem - in fact, `case` is one of the few ways to *avoid* evaluating all options. Case expressions are a bit brittle this way; but this should have already ran your alarm bells - *why* would you ever even consider returning two different data types in the same column? This is bound to cause you trouble sooner or later (in this case, sooner; but it could cause you more trouble in the future yet). Either split the two different possibilities in two columns (e.g. error code + datetime or something), or keep the formatting on the application rather than stuffing it in the database.
Strange behaviour in CASE expression using dateadd
[ "", "sql", "sql-server", "" ]
I have a table `a` with a column `sdate` which is a `TIMESTAMP`. I wanted to count the number of rows where `sdate` is empty. When I run ``` SELECT sdate FROM a WHERE sdate IS NULL ``` I get `13` rows. So what I did after that was: ``` SELECT count(sdate) FROM a WHERE sdate IS NULL ``` I expected to get `13` but I got `0`, and I don't understand why. When I changed it to: ``` SELECT count(*) FROM a WHERE sdate IS NULL ``` I got `13`, which is the correct result. Can someone explain why the `SELECT count(sdate)` approach returns the wrong result?
You get 0 when counting with the query below: ``` SELECT count(sdate) FROM a WHERE sdate IS NULL ``` because `COUNT(sdate)` does not take NULL values into consideration; it only counts non-NULL values. As opposed to when using: ``` SELECT count(*) FROM a WHERE sdate IS NULL ``` which counts every row that matches the `WHERE` clause, NULL or not (because you're using `*`). Here is a [**SQLFiddle**](http://sqlfiddle.com/#!15/60cd1/4) with the results returned by both `COUNT(*)` and `COUNT(1)`.
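The difference is easy to reproduce; SQLite behaves the same way as Postgres here (the sample data below is invented, with two NULL rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (sdate TIMESTAMP)")
conn.executemany("INSERT INTO a VALUES (?)",
                 [(None,), (None,), ("2015-07-01 10:00:00",)])

# COUNT(sdate) skips NULLs, so over the NULL-only rows it returns 0;
# COUNT(*) counts the matching rows themselves.
count_col = conn.execute(
    "SELECT COUNT(sdate) FROM a WHERE sdate IS NULL").fetchone()[0]
count_star = conn.execute(
    "SELECT COUNT(*) FROM a WHERE sdate IS NULL").fetchone()[0]

print(count_col, count_star)  # 0 2
```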
The [COUNT aggregate](http://www.postgresql.org/docs/current/static/functions-aggregate.html) takes an expression and it counts only rows for which such is *not* NULL. For example; `COUNT(*)` - or `COUNT(1)`, eg. - counts every row. `COUNT(col)` counts only the rows in which `col` is *not* NULL. `COUNT(NULL)` - always zero. In this case since the sdate columns for the selected rows (all) have null values those rows are *not* counted in `COUNT(sdate)` which results in a *correct* count of 0.
count function isn't working as expected in SQL
[ "", "sql", "postgresql", "" ]
For example, in SQL Server (10.0.5520, SQL Server 2008 SP3), we have a table with 3 columns of type `DateTime`. Our goal is to get only the data which has one of the 3 columns > `SpecifiedDate`, but we don't want to list out the columns one by one. Please view an example below: ``` TABLE A (ID INT, Name VARCHAR, CreateDate DATETIME, ModifyDate DATETIME, VoidDate DATETIME) ``` Normally it will be: ``` SELECT * FROM A WHERE CreateDate > 'X' OR ModifyDate > 'X' OR VoidDate > 'X' ``` It should turn into: ``` SELECT * FROM A WHERE (IF Column Is DateTime AND Column.Data > 'X') ``` Can we query in that way? If yes, how can we do that?
You can use `sys.columns` to get the columns for a given table that is `datetime`. ``` select C.name from sys.columns as C where C.object_id = object_id('A') and C.system_type_id = type_id('datetime') ``` Then you can use that to build and execute your query using `sp_executesql`. ``` declare @SQL nvarchar(max) set @SQL = ' select * from dbo.A where '+stuff(( select 'or '+quotename(C.name) + ' > @Value ' from sys.columns as C where C.object_id = object_id('A') and C.system_type_id = type_id('datetime') for xml path(''), type ).value('text()[1]', 'nvarchar(max)'), 1, 3, '') --print @SQL exec sp_executesql @SQL, N'@Value datetime', '2015-07-01' ``` --- ``` select * from dbo.A where [CreateDate] > @Value or [ModifyDate] > @Value or [VoidDate] > @Value ```
Table A ``` ID Name CreateDate ModifyDate VoidDate --- ----------- ----------------------- ----------------------- ----------------------- 1 Name1 2015-01-01 00:00:00.000 2015-02-02 00:00:00.000 2015-03-03 00:00:00.000 2 Name2 2015-01-01 00:00:00.000 NULL NULL 3 Name3 NULL NULL 2015-03-03 00:00:00.000 4 Name4 NULL NULL NULL ``` T-SQL Code ``` DECLARE @ColNames AS TABLE (RowNumber INT, ColumnName VARCHAR(MAX)) INSERT INTO @ColNames SELECT RANK() OVER (ORDER BY c.COLUMN_NAME) AS RowNumber, c.COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS c WHERE TABLE_NAME = 'A' AND DATA_TYPE = 'datetime' ORDER BY RowNumber DECLARE @SpecifiedDate DATETIME SET @SpecifiedDate = '2015-01-01' DECLARE @i INT = 1 DECLARE @sqlString NVARCHAR(MAX) = N'SELECT * FROM A WHERE ' DECLARE @count INT = (SELECT COUNT(*) FROM @ColNames) WHILE @i <= @count BEGIN DECLARE @colName VARCHAR(MAX) = (SELECT ColumnName FROM @ColNames WHERE RowNumber = @i) SET @sqlString = @sqlString + @colName + ' > ''' + CONVERT(VARCHAR, @SpecifiedDate,104) + '''' SET @sqlString = @sqlString + CASE WHEN @i < @count THEN ' OR ' ELSE '' END SET @i = @i + 1; END EXEC sp_executesql @sqlString, N'@SpecifiedDate datetime', '2015-01-01' ``` Output ``` ID Name CreateDate ModifyDate VoidDate --- ------ ----------------------- ----------------------- ----------------------- 1 Name1 2015-01-01 00:00:00.000 2015-02-02 00:00:00.000 2015-03-03 00:00:00.000 3 Name3 NULL NULL 2015-03-03 00:00:00.000 ```
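The idea in both answers (read the column list from the catalog, then build and run the filter dynamically) can be sketched outside SQL Server too. A hedged illustration in Python with `sqlite3`, where `PRAGMA table_info` stands in for `sys.columns`/`INFORMATION_SCHEMA.COLUMNS`; the rows and cutoff date are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE A (ID INTEGER, Name TEXT,
                CreateDate DATETIME, ModifyDate DATETIME, VoidDate DATETIME)""")
conn.execute("INSERT INTO A VALUES (1, 'n1', '2015-02-01', NULL, NULL)")
conn.execute("INSERT INTO A VALUES (2, 'n2', NULL, NULL, '2014-01-01')")

# discover the DATETIME columns from the catalog instead of hard-coding them
cols = [row[1] for row in conn.execute("PRAGMA table_info(A)")
        if row[2].upper() == "DATETIME"]

# build "CreateDate > ? OR ModifyDate > ? OR VoidDate > ?" dynamically,
# keeping the date itself as a bound parameter
predicate = " OR ".join(f'"{c}" > ?' for c in cols)
rows = conn.execute(f"SELECT ID FROM A WHERE {predicate}",
                    ["2015-01-01"] * len(cols)).fetchall()
print(rows)  # [(1,)]
```

As in the T-SQL answers, only the column names are spliced into the statement; the comparison value travels as a parameter, which is the same role `sp_executesql`'s parameter list plays.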
SQL query database on column data type with a condition
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have this data: ![](https://i.stack.imgur.com/GsIkm.png) I need to update the "IsConsideredNewHire" column so that if there is a ZERO in the control group related to it, all other rows for that control group should be ZERO. after the update, you'd get something like this: ![](https://i.stack.imgur.com/BNM0q.png) I'm having a hard time figuring out how to get this accomplished. Anyone want to give me a hand?
You can simply do this: ``` UPDATE MyTable SET IsConsideredNewHire = 0 WHERE ControlGroupID IN (SELECT DISTINCT ControlGroupID FROM MyTable WHERE IsConsideredNewHire=0) ```
I would do this using window functions: ``` with toupdate as ( select d.*, min(IsConsideredNewHire) over (partition by ControlGroupId) as minIsConsideredNewHire from data d ) update toupdate set IsConsideredNewHire = 0 where minIsConsideredNewHire = 0; ```
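The accepted subquery approach can be checked with a small script. A sketch with Python's `sqlite3`, using the column names visible in the screenshots and invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable
                (id INTEGER, ControlGroupID INTEGER, IsConsideredNewHire INTEGER)""")
conn.executemany("INSERT INTO MyTable VALUES (?,?,?)",
                 [(1, 100, 1), (2, 100, 0), (3, 100, 1),
                  (4, 200, 1), (5, 200, 1)])

# same shape as the accepted answer: zero out every row whose
# control group contains at least one zero
conn.execute("""UPDATE MyTable SET IsConsideredNewHire = 0
                WHERE ControlGroupID IN (SELECT DISTINCT ControlGroupID
                                         FROM MyTable
                                         WHERE IsConsideredNewHire = 0)""")

after = conn.execute("""SELECT ControlGroupID, SUM(IsConsideredNewHire)
                        FROM MyTable GROUP BY ControlGroupID
                        ORDER BY ControlGroupID""").fetchall()
print(after)  # [(100, 0), (200, 2)]
```

Group 100 (which contained a zero) ends up all zeroes, while group 200 is untouched.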
how to update all rows of a specified data group in SQL server based on a condition?
[ "", "sql", "sql-server", "" ]
I have a data set which is based on a timestamp. ``` Date Value 07-Jul-15 12:05:00 1 07-Jul-15 12:10:00 1 07-Jul-15 12:15:00 1 07-Jul-15 12:20:00 0 07-Jul-15 12:25:00 0 07-Jul-15 12:30:00 0 07-Jul-15 12:35:00 1 07-Jul-15 12:40:00 1 07-Jul-15 12:45:00 1 07-Jul-15 12:50:00 1 07-Jul-15 12:55:00 0 07-Jul-15 13:00:00 0 07-Jul-15 13:05:00 1 07-Jul-15 13:10:00 1 07-Jul-15 13:15:00 1 07-Jul-15 13:20:00 0 07-Jul-15 13:25:00 0 ``` I would like to query and return > 1. Number of shutdowns: the number of shutdowns in this case is 3, based on 0 being ON and 1 being OFF. > 2. Period between every shutdown > > Example: > > 1. From: 07-Jul-15 12:05:00 To: 07-Jul-15 12:15:00 Duration : 15 Mins > 2. From: 07-Jul-15 12:35:00 To: 07-Jul-15 12:50:00 Duration : 20 Mins I am using Oracle
Using the LEAD and LAG functions in Oracle you can build these queries: 1. Number of shutdowns: ``` WITH IntTable AS ( SELECT * FROM ( SELECT dt b_date,value,LEAD(dt) OVER (ORDER BY dt) e_date FROM ( select "Date" dt,"Value" value, LAG("Value") OVER (ORDER BY "Date") pvalue, LEAD("Value") OVER (ORDER BY "Date") nvalue from T ) T1 WHERE pvalue is NULL or value<>pvalue or nvalue is NULL ) WHERE E_DATE is NOT NULL ) SELECT COUNT(*) FROM IntTable where value = 0 ``` `SQLFiddle demo` 2. Period between every shutdown: ``` WITH IntTable AS ( SELECT * FROM ( SELECT dt b_date,value,LEAD(dt) OVER (ORDER BY dt) e_date FROM ( select "Date" dt,"Value" value, LAG("Value") OVER (ORDER BY "Date") pvalue, LEAD("Value") OVER (ORDER BY "Date") nvalue from T ) T1 WHERE pvalue is NULL or value<>pvalue or nvalue is NULL ) WHERE E_DATE is NOT NULL ) SELECT b_date,e_date, (e_date-b_date) * 60 * 24 FROM IntTable where value = 1 ``` `SQLFiddle demo`
You can test my answer on sqlfiddle: <http://www.sqlfiddle.com/#!4/9c6a69/16> > > Test Data ``` create table test (dttm date, onoff number); insert into test values (to_date('07-Jul-15 12:05:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 12:10:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 12:15:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 12:20:00', 'DD-MM-YY HH24:MI:SS'), 0 ); insert into test values (to_date('07-Jul-15 12:25:00', 'DD-MM-YY HH24:MI:SS'), 0 ); insert into test values (to_date('07-Jul-15 12:30:00', 'DD-MM-YY HH24:MI:SS'), 0 ); insert into test values (to_date('07-Jul-15 12:35:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 12:40:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 12:45:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 12:50:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 12:55:00', 'DD-MM-YY HH24:MI:SS'), 0 ); insert into test values (to_date('07-Jul-15 13:00:00', 'DD-MM-YY HH24:MI:SS'), 0 ); insert into test values (to_date('07-Jul-15 13:05:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 13:10:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 13:15:00', 'DD-MM-YY HH24:MI:SS'), 1 ); insert into test values (to_date('07-Jul-15 13:20:00', 'DD-MM-YY HH24:MI:SS'), 0 ); insert into test values (to_date('07-Jul-15 13:25:00', 'DD-MM-YY HH24:MI:SS'), 0 ); ``` First of all, remove all unnecessary columns and keep only the on/off columns: ``` select t.dttm, t.onoff from test t where not exists (select 'X' from test tt where tt.dttm = (select max(ttt.dttm) from test ttt where ttt.dttm < t.dttm) and tt.onoff = t.onoff) ``` > > number of shutdowns: ``` with data as ( select t.dttm, t.onoff from test t where not exists (select 'X' from test tt where tt.dttm = (select max(ttt.dttm)
from test ttt where ttt.dttm < t.dttm) and tt.onoff = t.onoff) ) select count(*) from data d where d.onoff=0; ``` > > ontime: ``` with data as ( select t.dttm, t.onoff from test t where not exists (select 'X' from test tt where tt.dttm = (select max(ttt.dttm) from test ttt where ttt.dttm < t.dttm) and tt.onoff = t.onoff) ) select d1.dttm as ontime, d0.dttm as offtime, (d0.dttm - d1.dttm) * 24 * 60 as duration from data d0, data d1 where d1.onoff=1 and d0.dttm = (select min(dd0.dttm) from data dd0 where dd0.dttm > d1.dttm); ```
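Both answers implement the same gaps-and-islands idea: collapse consecutive rows with equal values into runs, then count or measure the runs. Stripped of the Oracle syntax, the run-collapsing step can be sketched engine-independently in plain Python with `itertools.groupby` (the 0/1 sequence below is the question's sample data in timestamp order):

```python
from itertools import groupby

values = [1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0]

# collapse consecutive equal values into (value, run_length) pairs
runs = [(v, len(list(g))) for v, g in groupby(values)]
print(runs)  # [(1, 3), (0, 3), (1, 4), (0, 2), (1, 3), (0, 2)]

# the accepted answer counts the runs where the value is 0
shutdown_runs = sum(1 for v, _ in runs if v == 0)
print(shutdown_runs)  # 3
```

In SQL the same collapsing is done by keeping only the rows where the value differs from its LAG/LEAD neighbour; the Python version just makes the run structure visible.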
SQL Oracle Counting Clusters
[ "", "sql", "oracle", "" ]
I have two tables like this: ``` table1 table2 id COL1 COL2 id COL1 COL2 1 1 2 1 1 2 1 3 4 1 3 4 1 1 5 6 1 2 7 8 2 1 2 2 1 2 2 3 4 2 3 4 2 5 6 2 5 6 2 7 8 2 7 8 ``` I want to find the ids whose rows all match the rows for the same id in the second table. When I query in HANA I am getting two ids. As only one id, i.e. 2, matches all rows with the second table, I am expecting only id 2. I tried all the joins. Please help me.
I did it using string aggregation grouped by id. After that I compared the aggregated columns between the two tables, using the id as the join condition.
``` SELECT * FROM Table1 t1 WHERE NOT EXISTS (SELECT * FROM Table1 t11 WHERE t1.id = t11.id EXCEPT SELECT * FROM Table2 t2 WHERE t2.id = t1.id AND t2.COL1=t1.COL1 AND t2.COL2=t1.COL2) ``` The trick here is that you find all records in `table1` and subtract (using `EXCEPT`) them from `table2`. Now the `id` that has all fields match with `table2` will return a `NULL` and therefore the `NOT EXISTS` will return this record for you.
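The `NOT EXISTS ... EXCEPT` pattern in the answer above checks, per id, that every row of the first table also appears in the second. It can be exercised with a small `sqlite3` session (sqlite is only a stand-in for HANA here, and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, c1 INTEGER, c2 INTEGER);
CREATE TABLE t2 (id INTEGER, c1 INTEGER, c2 INTEGER);
INSERT INTO t1 VALUES (1, 1, 2), (1, 9, 9), (2, 1, 2), (2, 3, 4);
INSERT INTO t2 VALUES (1, 1, 2), (1, 3, 4), (2, 1, 2), (2, 3, 4);
""")

# an id qualifies when (its t1 rows) EXCEPT (its t2 rows) is empty,
# i.e. every t1 row for that id also exists in t2
res = conn.execute("""
    SELECT DISTINCT id FROM t1 a
    WHERE NOT EXISTS (SELECT c1, c2 FROM t1 WHERE id = a.id
                      EXCEPT
                      SELECT c1, c2 FROM t2 WHERE id = a.id)
""").fetchall()
print(res)  # id 1 fails because its (9, 9) row is missing from t2
```

If you need strict set equality (each side matches all of the other), you would additionally check the `EXCEPT` in the opposite direction.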
Where clause for all rows matches between tables
[ "", "sql", "sql-server", "database", "join", "rdbms", "" ]
I keep track of who is logging onto our local server, as shown in the database below. My DB structure is: ``` user logon reconnect mike 2015-07-09 mike 2015-07-09 mike 2015-07-09 mike 2015-07-09 john 2015-07-09 john 2015-07-09 john 2015-07-09 pete 2015-07-09 pete 2015-07-09 pete 2015-07-09 matt 2015-07-09 sara 2015-07-09 ``` I am trying to build a query that gets how many DISTINCT users log on or reconnect during one day. The data should look like this: ``` date totalcount 2015-07-09 4 ``` Basically it counted 4 users on 2015-07-09. Here's my query that shows the last 14 days of `logons`, but it does not include `reconnects`: ``` SELECT DATE(logon) AS `date`, COUNT(DISTINCT `user`) AS totalcount FROM user_logons GROUP BY DATE(logon) ORDER BY DATE(logon) DESC LIMIT 14 ``` Shows: ``` date totalcount 2015-07-09 399 2015-07-08 513 2015-07-07 524 2015-07-06 456 2015-07-05 213 2015-07-04 300 2015-07-03 484 2015-07-02 525 2015-07-01 539 2015-06-30 536 2015-06-29 481 2015-06-28 289 2015-06-27 423 2015-06-26 509 ``` I want to total both columns, combining them and then grouping by date. I'm having trouble writing the query. Please help.
Use [`COALESCE()`](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_coalesce) function to "*combine*" the two dates: ``` SELECT DATE(COALESCE(logon, reconnect)) AS `date`, COUNT(DISTINCT `user`) AS totalcount FROM user_logons GROUP BY DATE(COALESCE(logon, reconnect)) ORDER BY DATE(COALESCE(logon, reconnect)) DESC LIMIT 14 ```
My approach was to create two temporary tables. The first one contains `user` and a `date` column for all logon events. The second one contains a `user` and a `date` column for all reconnect events. After `UNION`ing these two tables together, you can group by the `date` and then get the count of distinct users for each date. ``` SELECT t.date AS `date`, COUNT(DISTINCT t.user) AS totalcount FROM ( SELECT `user`, DATE(logon) AS `date` FROM user_logons WHERE DATE(reconnect) ISNULL UNION ALL SELECT `user`, DATE(reconnect) AS `date` FROM user_logons WHERE DATE(logon) ISNULL ) t GROUP BY t.date ORDER BY t.date DESC ``` Let me add that this approach would be more attractive if you had values other than `NULL`, in which case you could not use `COALESCE()` so gracefully.
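The `COALESCE` approach in the accepted answer is easy to sanity-check. A sketch with Python's `sqlite3` (a stand-in for MySQL; the rows are made up, with each row carrying either a `logon` or a `reconnect` date):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_logons (user TEXT, logon TEXT, reconnect TEXT)")
conn.executemany("INSERT INTO user_logons VALUES (?,?,?)", [
    ("mike", "2015-07-09", None), ("mike", None, "2015-07-09"),
    ("john", "2015-07-09", None),
    ("pete", None, "2015-07-09"),
    ("sara", "2015-07-10", None),
])

# whichever of the two columns is set supplies the day for that row
res = conn.execute("""
    SELECT DATE(COALESCE(logon, reconnect)) AS d,
           COUNT(DISTINCT user) AS totalcount
    FROM user_logons
    GROUP BY d
    ORDER BY d DESC
""").fetchall()
print(res)  # [('2015-07-10', 1), ('2015-07-09', 3)]
```

Note that mike is counted once on 2015-07-09 even though he both logged on and reconnected, which is exactly what `COUNT(DISTINCT user)` is there for.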
MYSQL Multiple column COUNT DISTINCT
[ "", "mysql", "sql", "" ]
I've got a table like this: ``` SKU ITEM VALUE 1503796 1851920 0,9770637 1503796 1636691 0,9747891 1503796 1503781 0,9741025 1503796 3205763 0,9741025 1503801 1999745 0,9776622 1503801 1999723 0,9718825 1503801 3651241 0,9348839 1503801 1773569 0,9331309 1503811 1439825 0,97053134 1503811 1636684 0,96297866 1503811 1636671 0,96003973 1503811 1600553 0,9535771 1503818 1636708 0,9440251 1503818 1636709 0,9440251 1503818 1779789 0,9423958 1503818 3322310 0,9369579 ``` I need to get output like this (one row per SKU, with the max value): ``` SKU ITEM VALUE 1503796 1851920 0,9770637 1503801 1999745 0,9776622 1503811 1439825 0,97053134 1503818 1636708 0,9440251 ``` I tried to use something like this: ``` select SKU, ITEM, VALUE from import where value=(select max(value) from import ) ``` But it selects only the one row with the overall max value. How should I rewrite the query?
Rank the records with ROW\_NUMBER, so that the max value for an SKU gets #1. Then keep only those records ranked #1. ``` select sku, item, value from ( select mytable.*, row_number() over (partition by sku order by value desc) as rn from mytable ) where rn = 1; ``` For SKU 1503818 you will get either of these two: ``` 1503818 1636708 0,9440251 1503818 1636709 0,9440251 ``` If you want a particular one (e.g. the one with the higher item number) then add that criterion to Row\_Number's ORDER BY clause. As to the query you tried yourself: you should be looking for sku-value pairs instead: ``` select SKU, ITEM, VALUE from import where (sku,value) in (select sku, max(value) from import group by sku); ``` In case of a tie, as with SKU 1503818, this query will get you both records, however.
Use a common table expression ``` WITH CTE AS (SELECT SKU,ITEM,VALUE, ROW_NUMBER() OVER (PARTITION BY SKU ORDER BY value DESC)as maxvalue FROM import) SELECT SKU,ITEM,VALUE FROM CTE WHERE maxvalue=1 ```
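Besides `ROW_NUMBER` and the pair-wise `IN`, the same greatest-n-per-group result can be had portably by joining each row against its group's maximum; like the `IN` variant, this keeps both rows on a tie. A sketch with Python's `sqlite3` on a subset of the question's data (decimal commas written as dots):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE import (sku INTEGER, item INTEGER, value REAL)")
conn.executemany("INSERT INTO import VALUES (?,?,?)", [
    (1503796, 1851920, 0.9770637), (1503796, 1636691, 0.9747891),
    (1503801, 1999745, 0.9776622), (1503801, 1999723, 0.9718825),
])

# join each row against its group's maximum; only rows equal to the max survive
res = conn.execute("""
    SELECT i.sku, i.item, i.value
    FROM import i
    JOIN (SELECT sku, MAX(value) AS mx FROM import GROUP BY sku) m
      ON i.sku = m.sku AND i.value = m.mx
    ORDER BY i.sku
""").fetchall()
print(res)
# [(1503796, 1851920, 0.9770637), (1503801, 1999745, 0.9776622)]
```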
Using Max() function to select group values
[ "", "sql", "oracle", "function", "max", "greatest-n-per-group", "" ]
The developers on my team are really used to the power of Laravel migrations; they work great on local machines and on our dev servers. But the customer's database admin will not accept Laravel migrations. He asks for raw SQL scripts for each new version of our application. **Is there any tool or programming technique to capture the output from Laravel migrations as up/down SQL scripts?** It would be perfect if we could integrate SQL script generation into our CI system (TeamCity) when creating production builds. By the way, we will be using Laravel 5 and PostgreSQL for this project.
# Update 2023-05-24 / Laravel 10 Lately I've been using this one-liner to get a list of all migrations as queries: `php artisan tinker --no-ansi --execute 'echo implode(PHP_EOL, array_reduce(glob("database/migrations/*.php"), fn($c, $i) => [...$c, ...array_column(app("db")->pretend(fn() => (include $i)->up()), "query")], []))'` --- # Use the migrate command You can add the `--pretend` flag when you run `php artisan migrate` to output the queries to the terminal: ``` php artisan migrate --pretend ``` This will look something like this: ``` Migration table created successfully. CreateUsersTable: create table "users" ("id" integer not null primary key autoincrement, "name" varchar not null, "email" varchar not null, "password" varchar not null, "remember_token" varchar null, "created_at" datetime not null, "updated_at" datetime not null) CreateUsersTable: create unique index users_email_unique on "users" ("email") CreatePasswordResetsTable: create table "password_resets" ("email" varchar not null, "token" varchar not null, "created_at" datetime not null) CreatePasswordResetsTable: create index password_resets_email_index on "password_resets" ("email") CreatePasswordResetsTable: create index password_resets_token_index on "password_resets" ("token") ``` To save this to a file, just redirect the output **without ansi**: ``` php artisan migrate --pretend --no-ansi > migrate.sql ``` > This command only include the migrations that haven't been migrated yet. --- # Hack the migrate command To further customize how to get the queries, consider hacking the source and make your own custom command or something like that. To get you started, here is some quick code to get all the migrations. 
### Example code ``` $migrator = app('migrator'); $db = $migrator->resolveConnection(null); $migrations = $migrator->getMigrationFiles('database/migrations'); $queries = []; foreach($migrations as $migration) { $migration_name = $migration; $migration = $migrator->resolve($migration); $queries[] = [ 'name' => $migration_name, 'queries' => array_column($db->pretend(function() use ($migration) { $migration->up(); }), 'query'), ]; } dd($queries); ``` ### Example output ``` array:2 [ 0 => array:2 [ "name" => "2014_10_12_000000_create_users_table" "queries" => array:2 [ 0 => "create table "users" ("id" integer not null primary key autoincrement, "name" varchar not null, "email" varchar not null, "password" varchar not null, "remember_token" varchar null, "created_at" datetime not null, "updated_at" datetime not null)" 1 => "create unique index users_email_unique on "users" ("email")" ] ] 1 => array:2 [ "name" => "2014_10_12_100000_create_password_resets_table" "queries" => array:3 [ 0 => "create table "password_resets" ("email" varchar not null, "token" varchar not null, "created_at" datetime not null)" 1 => "create index password_resets_email_index on "password_resets" ("email")" 2 => "create index password_resets_token_index on "password_resets" ("token")" ] ] ] ``` > This code will include **all** the migrations. To see how to only get what isn't already migrated take a look at the `run()` method in `vendor/laravel/framework/src/Illuminate/Database/Migrations/Migrator.php`.
Just in case you are facing the same problem as I did: `php artisan migrate --pretend` did not output anything, yet ran the SQL without adding the record to migrations. In other words, * it does the SQL job, which was not intended * it returned nothing, which was the reason I made the call and * it did not add the entry to migrations, which sort of destroys the situation, as I was not able to re-run the migration without manually removing tables The reason for it was my setup with several databases, which are addressed with `Schema::connection('master')->create('...` More on that issue you may find here: <https://github.com/laravel/framework/issues/13431> Sadly, a Laravel developer closed the issue, quote "*Closing since the issue seems to be a rare edge case that can be solved with a workaround.*", so there is not much hope it will be fixed anytime soon. For my maybe-rare case, I'll use a third-party SQL diff checker. Cheers
How to convert Laravel migrations to raw SQL scripts?
[ "", "sql", "laravel", "laravel-migrations", "" ]
I have an extra piece of information to fetch about my users: the associated sitecode. To do that, I use a request with a `WHERE IN` construction. But I sometimes have the same value in the `IN` clause, and I need to show the data for each line (my request returns the result only once). Here is my request: ``` SELECT [SiteCode] FROM [Configuration].[Site] INNER JOIN [Utilisateur].[Utilisateur] ON [Utilisateur].[Utilisateur].[SiteDefaultId] = [Configuration].[Site].[SiteId] WHERE [Login] IN ('XXXXXXXX', 'YYYYYY', 'YYYYYY', 'YYYYYY') ``` This request returns only two results, and I need it to return 4 results (3 will be the same). Here is the result: ``` SiteCode 0002153 0005963 ``` And here is what I need for my example: ``` SiteCode 0002153 0005963 0005963 0005963 ```
You can generate the product by joining the criterias ``` SELECT [SiteCode] FROM [Configuration].[Site] INNER JOIN [Utilisateur].[Utilisateur] ON [Utilisateur].[Utilisateur].[SiteDefaultId] = [Configuration].[Site].[SiteId] INNER JOIN (VALUES('XXXXXXXX'), ('YYYYYY'), ('YYYYYY'), ('YYYYYY')) cri(Login) ON [Login] = cri.Login ```
Remove `DISTINCT` to keep duplicates... ``` SELECT [SiteCode], [Login] FROM [Configuration].[Site] INNER JOIN [Utilisateur].[Utilisateur] ON [Utilisateur].[Utilisateur].[SiteDefaultId] = [Configuration].[Site].[SiteId] WHERE [Login] IN ('XXXXXXXX', 'YYYYYY', 'YYYYYY', 'YYYYYY') ``` **EDIT:** To return the same row several times, you can't do a simple `JOIN` with an `IN`; you need to do a `UNION ALL` with one `SELECT` for each value: ``` SELECT [SiteCode], [Login] FROM [Configuration].[Site] INNER JOIN [Utilisateur].[Utilisateur] ON [Utilisateur].[Utilisateur].[SiteDefaultId] = [Configuration].[Site].[SiteId] WHERE [Login] = 'XXXXXXXX' UNION ALL SELECT [SiteCode], [Login] FROM [Configuration].[Site] INNER JOIN [Utilisateur].[Utilisateur] ON [Utilisateur].[Utilisateur].[SiteDefaultId] = [Configuration].[Site].[SiteId] WHERE [Login] = 'YYYYYY' UNION ALL SELECT [SiteCode], [Login] FROM [Configuration].[Site] INNER JOIN [Utilisateur].[Utilisateur] ON [Utilisateur].[Utilisateur].[SiteDefaultId] = [Configuration].[Site].[SiteId] WHERE [Login] = 'YYYYYY' UNION ALL SELECT [SiteCode], [Login] FROM [Configuration].[Site] INNER JOIN [Utilisateur].[Utilisateur] ON [Utilisateur].[Utilisateur].[SiteDefaultId] = [Configuration].[Site].[SiteId] WHERE [Login] = 'YYYYYY' ```
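The accepted answer's trick is to turn the `IN` list into a derived table and join it, so duplicated criteria produce duplicated result rows. A sketch with Python's `sqlite3`, using `UNION ALL` selects in place of the SQL Server `VALUES (...)` row constructor (table and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE site_users (login TEXT, sitecode TEXT)")
conn.executemany("INSERT INTO site_users VALUES (?,?)",
                 [("XXXXXXXX", "0002153"), ("YYYYYY", "0005963")])

# the criteria list, duplicates and all, as a derived table instead of IN (...)
criteria = """SELECT 'XXXXXXXX' AS login
              UNION ALL SELECT 'YYYYYY'
              UNION ALL SELECT 'YYYYYY'
              UNION ALL SELECT 'YYYYYY'"""
res = conn.execute(f"""
    SELECT u.sitecode
    FROM site_users u
    JOIN ({criteria}) c ON u.login = c.login
""").fetchall()
print(sorted(res))
# [('0002153',), ('0005963',), ('0005963',), ('0005963',)]
```

A `WHERE ... IN` can only decide membership once per row; the join multiplies rows per matching criterion, which is exactly the behaviour the question asks for.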
Select Where In - for all line
[ "", "sql", "sql-server", "" ]
All: I am learning SQL now, but I am stuck at **#7** of > <http://sqlzoo.net/wiki/SELECT_names> ``` Bahamas has three a - who else? Find the countries that have three or more a in the name ``` Thanks
Try using the `LIKE` operator: ``` SELECT name FROM world WHERE name LIKE '%a%a%a%' ``` If you want a case-insensitive search matching either `a` *or* `A`, you can use the `LOWER()` function: ``` SELECT name FROM world WHERE LOWER(name) LIKE '%a%a%a%' ``` **Edit:** We could also use `REGEXP` here: ``` SELECT name FROM world WHERE name REGEXP '(.*[a]){3,}'; ``` However, for this particular example, I would go with `LIKE`, because it would probably perform better and carry less overhead than `REGEXP`.
Try this query: ``` SELECT name FROM world WHERE LEN(name) - LEN(REPLACE(name,'a', '')) > 2 ```
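Both approaches can be tried side by side. A sketch with Python's `sqlite3` (note `LENGTH` here where the second answer's SQL Server syntax uses `LEN`; also, sqlite's `LIKE` is case-insensitive for ASCII by default, so no `LOWER()` is needed for this sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE world (name TEXT)")
conn.executemany("INSERT INTO world VALUES (?)",
                 [("Bahamas",), ("Canada",), ("France",), ("Chad",)])

# pattern approach: three a's anywhere, in order
like_hits = conn.execute(
    "SELECT name FROM world WHERE name LIKE '%a%a%a%'").fetchall()

# counting trick: removing every 'a' shortens the string once per occurrence
count_hits = conn.execute(
    "SELECT name FROM world "
    "WHERE LENGTH(name) - LENGTH(REPLACE(name, 'a', '')) > 2").fetchall()

print(sorted(like_hits), sorted(count_hits))
# [('Bahamas',), ('Canada',)] [('Bahamas',), ('Canada',)]
```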
How to solve # 7 in SQLZOO select names section
[ "", "sql", "" ]
So I have my database set up as a single table. In that table I have collected source URL and description (I am scraping the product description from a number of pages). Unfortunately I have ended up with multiple rows in the database for a URL/source page if there is more than one paragraph. What I would like to do is, if there are multiple rows with the same URL, combine the description from each row and then delete the duplicate rows for that URL. My table is literally structured like so: ``` table +----+----------------------------+-------------+ | id | url | description | +----+----------------------------+-------------+ | 1 | http://example.com/page-a | paragraph 1 | | 2 | http://example.com/page-a | paragraph 2 | | 3 | http://example.com/page-a | paragraph 3 | | 4 | http://example.com/page-b | paragraph 1 | | 5 | http://example.com/page-b | paragraph 2 | +----+----------------------------+-------------+ ``` How I want it is like: ``` table +----+----------------------------+-------------------------------------+ | id | url | description | +----+----------------------------+-------------------------------------+ | 1 | http://example.com/page-a | paragraph 1 paragraph 2 paragraph 3 | | 2 | http://example.com/page-b | paragraph 1 paragraph 2 | +----+----------------------------+-------------------------------------+ ``` I'm not so bothered about the IDs being updated to be correct, I just want to be able to combine the rows where the paragraphs should be in the same field as they are the same URLs, and then delete the duplicates. Any help would be greatly appreciated!
It's easy to produce the combined rows; just insert the result of this query into a new table: ``` SELECT url, GROUP_CONCAT(description ORDER BY description SEPARATOR ' ') AS description FROM `table` GROUP BY url ```
Create a new temporary table, truncate the original, and re-insert the data: ``` create temporary table tempt as select (@rn := @rn + 1) as id, url, group_concat(description order by id separator ' ') as description from t cross join (select @rn := 0) params group by url order by min(id); -- Do lots of testing and checking here to be sure you have the data you want. truncate table t; insert into t(id, url, description) select id, url, description from tempt; ``` If `id` is already auto-incremented in the table, then you do not need to provide a value for it.
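The `GROUP_CONCAT` step can be sanity-checked with Python's `sqlite3` on the question's data. One caveat: sqlite's `group_concat` takes the separator as a second argument and does not guarantee concatenation order, unlike MySQL's `ORDER BY ... SEPARATOR` syntax used in the answers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, url TEXT, description TEXT)")
conn.executemany("INSERT INTO pages VALUES (?,?,?)", [
    (1, "http://example.com/page-a", "paragraph 1"),
    (2, "http://example.com/page-a", "paragraph 2"),
    (3, "http://example.com/page-a", "paragraph 3"),
    (4, "http://example.com/page-b", "paragraph 1"),
    (5, "http://example.com/page-b", "paragraph 2"),
])

# one combined description per url; order within a group is not guaranteed here
res = conn.execute("""
    SELECT url, GROUP_CONCAT(description, ' ')
    FROM pages GROUP BY url ORDER BY url
""").fetchall()
print(res)
```

Writing the combined result into a fresh table and swapping it in (as the second answer does with a temporary table) avoids updating the original rows in place.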
MYSQL - Combine rows with multiple duplicate values and delete duplicates afterwards
[ "", "mysql", "sql", "concatenation", "group-concat", "" ]