| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have created a foreign key for a column in a table without giving it a name. Now I want to remove that column. First I tried removing the foreign key constraint, but there was an error.
I used the following SQL command:
```
ALTER TABLE passenger
DROP FOREIGN KEY bookedBy
```
The error message
> #1091 - Can't DROP 'bookedBy'; check that column/key exists
I've ensured that the column exists.
I have not named the foreign key constraint. Is it possible to remove the foreign key constraint without naming it? Is there any default naming given to foreign keys?
|
Run the statement **`SHOW CREATE TABLE passenger`**.
The output from that will show the foreign key constraints, as well as the columns in the table. You should be able to figure out the name of the foreign key constraint you want to drop from that.
Or, you can query the tables in the `information_schema` database; the constraint will show up in there as well.
---
**Followup**
One possible query of `information_schema` to find the names of the foreign key constraints for a given table:
```
SELECT kcu.constraint_schema
, kcu.constraint_name
-- , kcu.*
FROM information_schema.key_column_usage kcu
WHERE kcu.referenced_table_name IS NOT NULL
AND kcu.constraint_schema = 'mydatabase'
AND kcu.table_name = 'mytablename'
```
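Once you know the constraint name, dropping it and then the column is straightforward. As a sketch (when you don't name a constraint, InnoDB auto-generates one of the form `tablename_ibfk_N`, but the exact name below is an assumption; confirm it with `SHOW CREATE TABLE` or the query above):
```
ALTER TABLE passenger
DROP FOREIGN KEY passenger_ibfk_1;  -- substitute the actual constraint name
ALTER TABLE passenger
DROP COLUMN bookedBy;
```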
|
```
CREATE TABLE Orders
(
O_Id int NOT NULL,
OrderNo int NOT NULL,
P_Id int,
PRIMARY KEY (O_Id),
FOREIGN KEY (P_Id) REFERENCES Persons(P_Id)
)
```
You can also add a foreign key like this:
```
ALTER TABLE Orders
ADD FOREIGN KEY (P_Id)
REFERENCES Persons(P_Id)
```
and drop it by name:
```
ALTER TABLE Orders
DROP FOREIGN KEY fk_PerOrders
```
To find the foreign keys in your table:
```
SELECT
TABLE_NAME,COLUMN_NAME,CONSTRAINT_NAME,
REFERENCED_TABLE_NAME,REFERENCED_COLUMN_NAME
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE
REFERENCED_TABLE_NAME = '<table>';
```
|
Drop a foreign key from a table in mysql
|
[
"",
"mysql",
"sql",
"database",
"foreign-keys",
""
] |
I'm trying to implement a SQL query to "UPDATE if exists else INSERT"
My table(`Allowance`) is as below:
```
EmployeeID int(8) PK
Year year(4) PK
Month int(2) PK
OverTime decimal(10,2)
Medical decimal(10,2)
Lunch decimal(10,2)
Bonus decimal(10,2)
Allowance decimal(10,2)
```
Below is the SQL query I tried:
```
IF EXISTS (SELECT * FROM Allowance WHERE EmployeeID =10000001 and Year = 2014 and Month = 4)
UPDATE Allowance
SET OverTime = 10.00, Medical = 10.00, Lunch = 10.45, Bonus =10.10, Allowance = 40.55
WHERE EmployeeID =10000001 and Year = 2014 and Month = 4
ELSE
INSERT into Allowance values (10000001,2014,4,10.00,10.00,10.45,10.10,40.55)
```
I keep getting this error message:
> "#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF EXISTS (SELECT \* FROM Allowance WHERE EmployeeID =10000001 and Year = 2014 an' at line 1 "
Can somebody please help?
|
The below query will fulfill your requirement.
```
INSERT INTO `ALLOWANCE` (`EmployeeID`, `Year`, `Month`, `OverTime`,`Medical`,
`Lunch`, `Bonus`, `Allowance`) values (10000001, 2014, 4, 10.00, 10.00,
10.45, 10.10, 40.55) ON DUPLICATE KEY UPDATE `EmployeeID` = 10000001
```
|
Further to Gunaseelan's answer *(answering questions I had myself, that I needed to look up)*:
```
INSERT INTO `ALLOWANCE` (`EmployeeID`, `Year`, `Month`, `OverTime`,`Medical`,
`Lunch`, `Bonus`, `Allowance`) values (10000001, 2014, 4, 10.00, 10.00,
10.45, 10.10, 40.55) ON DUPLICATE KEY UPDATE `OverTime`=10.00, `Medical`=10.00, `Lunch`=10.45, `Bonus`=10.10, `Allowance`=40.55;
```
1. Don't forget the trailing semicolon *(like I did)*;
2. A `WHERE` clause is not required - MySQL knows which record is the duplicate;
3. There is also no need to mention the table name again - it was provided in the first half.
Here is a page that explains the syntax in more detail:
<https://chartio.com/resources/tutorials/how-to-insert-if-row-does-not-exist-upsert-in-mysql/#using-insert--on-duplicate-key-update>
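A related MySQL alternative, mentioned here as a sketch rather than as part of the answer above, is `REPLACE INTO`. Be aware it is not a true in-place update: MySQL deletes the conflicting row and inserts a fresh one, so any omitted columns revert to their defaults and `ON DELETE` cascades fire:
```
REPLACE INTO `ALLOWANCE` (`EmployeeID`, `Year`, `Month`, `OverTime`, `Medical`,
    `Lunch`, `Bonus`, `Allowance`)
VALUES (10000001, 2014, 4, 10.00, 10.00, 10.45, 10.10, 40.55);
```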
|
UPDATE if exists else INSERT in SQL
|
[
"",
"mysql",
"sql",
""
] |
I have created a backup for my country table.
```
create table country_bkp as select * from country;
```
What SQL should I use to restore the `country` table to its original state?
I can do
```
insert into country select * from country_bkp;
```
but it will just create duplicate entries, and will probably fail because the primary keys would be the same.
> Is there an SQL command to merge data back?
Last alternative would be
```
DROP TABLE country;
create table country as select * from country_bkp;
```
but I want to avoid this as all the `grants/permissions` would get lost by this.
Another, cleaner way would be
```
delete from country ;
insert into country select * from country_bkp;
```
But I am looking for more of a merge approach without having to clear data from original table.
|
Instead of dropping the table, which, as you noted, would lose all the permission definitions, you could `truncate` it to just remove all the data, and then insert-select the old data:
```
TRUNCATE TABLE country;
INSERT INTO country SELECT * FROM country_bkp;
```
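Since the question asks for a merge approach and carries the `oracle` tag, a `MERGE` sketch may also apply; the `id` and `name` columns below are assumptions, so substitute the real primary key and data columns of `country`:
```
MERGE INTO country c
USING country_bkp b
    ON (c.id = b.id)            -- assumed primary key column
WHEN MATCHED THEN
    UPDATE SET c.name = b.name  -- assumed non-key column(s)
WHEN NOT MATCHED THEN
    INSERT (id, name)
    VALUES (b.id, b.name);
```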
|
In my case, `INSERT INTO country SELECT * FROM country_bkp;` didn't work because:
* It wouldn't let me insert into the primary key column, due to
`identity_insert` being off by default.
* My table had `TimeStamp` columns.
In that case:
* allow `identity_insert` on the `OriginalTable`;
* run an insert query in which you list all the columns of `OriginalTable` (excluding `TimeStamp` columns) and select all the corresponding columns from `BackupTable` (excluding `TimeStamp` columns);
* restrict `identity_insert` on the `OriginalTable` at the end.
EXAMPLE:
```
Set Identity_insert OriginalTable ON
insert into OriginalTable (a,b,c,d,e, ....) --[Exclude TimeStamp Columns here]
Select a,b,c,d,e, .... from BackupTable --[Exclude TimeStamp Columns here]
Set Identity_insert OriginalTable Off
```
|
Revert backup table data to original table SQL
|
[
"",
"sql",
"database",
"oracle",
"plsql",
"backup",
""
] |
I'm sorry to flat out ask for someone to write my code for me but I've been tearing my hair out for nearly an hour trying to get the pivot operator to work in SQL.
I have the following results set:
```
SCCY AccountedPremiumCurrent AccountedPremiumPrevious
---- ----------------------- ------------------------
CAD 99111.0000 NULL
EUR 467874.0000 128504.0000
GBP 431618.3847 195065.8751
USD 1072301.1193 1171412.1193
```
And I need to pivot it around to this:
```
GBP USD CAD EUR
----------- ------------ ---------- -----------
431618.3847 1072301.1193 99111.0000 467874.0000
195065.8751 1171412.1193 NULL 128504.0000
```
I have a workaround which uses a union across two ugly "`select max(case when...`" queries but I'd love to get this working with the pivot operator.
I feel like my brain cannot process the necessary logic to perform this operation, hence I'm asking for someone to help. Once I get this I'll be able to hopefully re-apply this like a pro...
|
You can do it with the following query:
```
SELECT col, [CAD], [EUR], [GBP], [USD]
FROM (
SELECT SCCY, col, val
FROM mytable
CROSS APPLY (SELECT 'current', AccountedPremiumCurrent UNION ALL
SELECT 'previous', AccountedPremiumPrevious) x(col, val) ) src
PIVOT (
MAX(val) FOR SCCY IN ([CAD], [EUR], [GBP], [USD])
) pvt
```
`PIVOT` on multiple columns, like `AccountedPremiumCurrent`, `AccountedPremiumPrevious` in your case is not possible in SQL Server. Hence, the `CROSS APPLY` trick is used in order to *unpivot* those two columns, before `PIVOT` is applied.
In the output produced by the above query, `col` is equal to `current` for values coming from `AccountedPremiumCurrent` and equal to `previous` for values coming from `AccountedPremiumPrevious`.
[**Demo here**](http://sqlfiddle.com/#!6/ca807/8)
|
If you need a dynamic pivot (where the column values are not known in advance), you can use the following queries.
First of all, declare a variable to collect the column values dynamically:
```
DECLARE @cols NVARCHAR (MAX)
SELECT @cols = COALESCE (@cols + ',[' + SCCY + ']', '[' + SCCY + ']')
FROM (SELECT DISTINCT SCCY FROM #TEMP) PV
ORDER BY SCCY
```
Now use the query below to pivot. I have used `CROSS APPLY` to bring the two column values into one column.
```
DECLARE @query NVARCHAR(MAX)
SET @query = 'SELECT ' + @cols + ' FROM
(
SELECT SCCY,AccountedPremium,
ROW_NUMBER() OVER(PARTITION BY SCCY ORDER BY (SELECT 0)) RNO
FROM #TEMP
CROSS APPLY(VALUES (AccountedPremiumCurrent),(AccountedPremiumPrevious))
AS COLUMNNAMES(AccountedPremium)
) x
PIVOT
(
MAX(AccountedPremium)
FOR SCCY IN (' + @cols + ')
) p
'
EXEC SP_EXECUTESQL @query
```
* **[Click here](https://data.stackexchange.com/stackoverflow/query/321796) to view result**
|
Need help to transpose rows to columns
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"pivot",
""
] |
I have a small problem with grouped records. Chart code:
```
<%= pie_chart CountChatLine.group(:channel).count %>
```
The problem is that I have more than a few `:channels` in the database, and the chart looks like this:

Can I somehow take only the top N `:channels` and sum the rest as `others` or something? Or add to `others` every `:channel` that has less than N%?
|
After some heavy thinking and @Agazoom's help, I've done this.
Code:
```
@top_count = 5
@size = CountChatLine.group(:channel).count.count-@top_count
@data_top = CountChatLine.group(:channel).order("count_all").reverse_order.limit(@top_count).count
@data_other = CountChatLine.group(:channel).order("count_all").limit(@size).count
@data_other_final = {"others" => 0}
@data_other.each { |name, count| @data_other_final = {"others" => @data_other_final["others"]+count }}
@sum_all_data = @data_top.reverse_merge!(@data_other_final)
```
I don't know if there is a better way. If there is, please post it. But for now, it works :)
|
The chartkick gem (which I just started using) will chart whatever data you provide it. It's up to you what that data looks like.
Yes, you can absolutely reduce the number of slices in your pie by aggregating them, however, you need to do that yourself.
You can write a method in your model to summarize this and call it as such:
```
<%= pie_chart CountChatLine.summarized_channel_info %>
```
**method in CountChatLine model:**
```
def self.summarized_channel_info
{code to get info and convert it into format you really want}
end
```
Hope that helps. That's what I did.
|
Chartkick gem, limit group in pie chart and multiple series
|
[
"",
"sql",
"ruby-on-rails",
"activerecord",
"charts",
"highcharts",
""
] |
Below is some SQL Server code I have been working on. I know now that using a cursor is a bad idea in general, but I cannot figure out how else I can make this work. The performance is terrible with the cursor. I'm really just using some simple IF statement logic with a loop, but can't translate it to SQL. I'm using SQL Server 2012.
```
IF [Last Employee] = [Employee] AND [Action] = '1-HR'
SET [Employee Record] = @counter + 1
ELSE IF [Last Employee] != [Employee] OR [Last Employee] IS NULL
SET [Employee Record] = 1
ELSE
SET [Employee Record] = @counter
```
Basically, how can I keep this `@counter` going without a cursor? I feel like the solution is simple, but I've lost myself. Thanks for looking.
```
declare curr cursor for
select WORKER, SEQUENCE, ACTION
FROM [DB].[Transactional History]
order by WORKER ,SEQUENCE asc
declare @EmployeeID as nvarchar(max);
declare @SequenceNum as nvarchar(max);
declare @LastEEID as nvarchar(max);
declare @action as nvarchar(max);
declare @currentEmpRecord int
declare @counter int;
open curr
fetch next from curr into @EmployeeID, @SequenceNum, @action;
while @@FETCH_STATUS=0
begin
if @LastEEID=@EmployeeID and @action='1-HR'
begin
set @sql = concat('update [DB].[Transactional History]
set EMPRECORD=',+ @currentEmpRecord, '+1
where WORKER=', @EmployeeID, ' and SEQUENCE=', @SequenceNum)
EXECUTE sp_executesql @sql
set @counter=@counter+1;
set @LastEEID=@EmployeeID;
set @currentEmpRecord=@currentEmpRecord+1;
end
else if @LastEEID is null or @LastEEID<>@EmployeeID
begin
set @sql = concat('update [DB].[Transactional History]
set EMPRECORD=1
where WORKER=', @EmployeeID, ' and SEQUENCE=', @SequenceNum)
EXECUTE sp_executesql @sql
set @counter=@counter+1;
set @LastEEID=@EmployeeID;
set @currentEmpRecord=1
end
else
begin
set @sql = concat('update [DB].[Transactional History]
set EMPRECORD=', @currentEmpRecord, '
where WORKER=', @EmployeeID, ' and SEQUENCE=', @SequenceNum)
EXECUTE sp_executesql @sql
set @counter=@counter+1;
end
fetch next from curr into @EmployeeID, @SequenceNum, @action;
end
close curr;
deallocate curr;
```
Below is code to build a sample table. I want to increase EMPRECORD every time a record is '1-HR', but reset it for each new WORKER. Before this code is executed, EMPRECORD is null for all records. This table shows the target output.
```
CREATE TABLE [DB].[Transactional History-test](
[WORKER] [nvarchar](255) NULL,
[SOURCE] [nvarchar](50) NULL,
[TAB] [nvarchar](25) NULL,
[EFFECTIVE_DATE] [date] NULL,
[ACTION] [nvarchar](5) NULL,
[SEQUENCE] [numeric](26, 0) NULL,
[EMPRECORD] [numeric](26, 0) NULL,
[MANAGER] [nvarchar](255) NULL,
[PAYRATE] [nvarchar](20) NULL,
[SALARY_PLAN] [nvarchar](1) NULL,
[HOURLY_PLAN] [nvarchar](1) NULL,
[LAST_MANAGER] [nvarchar](255) NULL
) ON [PRIMARY]
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'EMP-Position Mgt', CAST(N'2004-01-01' AS Date), N'1-HR', CAST(1 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), N'3', N'Hourly', NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'Change Job', CAST(N'2004-05-01' AS Date), N'5-JC', CAST(2 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), N'4', NULL, NULL, NULL, N'3')
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'EMP-Terminations', CAST(N'2005-01-01' AS Date), N'6-TR', CAST(3 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), N'4', NULL, NULL, NULL, N'4')
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'Change Job', CAST(N'2010-05-01' AS Date), N'5-JC', CAST(4 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), N'3', NULL, NULL, NULL, N'4')
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'EMP-Position Mgt', CAST(N'2011-05-01' AS Date), N'1-HR', CAST(5 AS Numeric(26, 0)), CAST(2 AS Numeric(26, 0)), N'3', N'Hourly', NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'CWR-Position Mgt', CAST(N'2012-01-01' AS Date), N'1-HR', CAST(6 AS Numeric(26, 0)), CAST(3 AS Numeric(26, 0)), NULL, NULL, NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'Organizations', CAST(N'2015-01-01' AS Date), N'3-ORG', CAST(7 AS Numeric(26, 0)), CAST(3 AS Numeric(26, 0)), NULL, NULL, NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'1', NULL, N'Organizations', CAST(N'2015-01-01' AS Date), N'3-ORG', CAST(8 AS Numeric(26, 0)), CAST(3 AS Numeric(26, 0)), NULL, NULL, NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'2', NULL, N'EMP-Terminations', CAST(N'2001-01-01' AS Date), N'6-TR', CAST(9 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), NULL, NULL, NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'2', NULL, N'EMP-Terminations', CAST(N'2001-05-01' AS Date), N'6-TR', CAST(10 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), NULL, NULL, NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'2', NULL, N'Change Job', CAST(N'2004-01-01' AS Date), N'5-JC', CAST(11 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), N'3', NULL, NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'2', NULL, N'Change Job', CAST(N'2004-01-01' AS Date), N'5-JC', CAST(12 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), N'3', NULL, NULL, NULL, N'3')
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'2', NULL, N'EMP-Position Mgt', CAST(N'2014-01-01' AS Date), N'1-HR', CAST(13 AS Numeric(26, 0)), CAST(2 AS Numeric(26, 0)), N'4', N'Salary', NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'3', NULL, N'EMP-Terminations', CAST(N'2012-01-01' AS Date), N'6-TR', CAST(14 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), NULL, NULL, NULL, NULL, NULL)
GO
INSERT [DB].[Transactional History-test] ([WORKER], [SOURCE], [TAB], [EFFECTIVE_DATE], [ACTION], [SEQUENCE], [EMPRECORD], [MANAGER], [PAYRATE], [SALARY_PLAN], [HOURLY_PLAN], [LAST_MANAGER]) VALUES (N'4', NULL, N'EMP-Position Mgt', CAST(N'2012-01-01' AS Date), N'1-HR', CAST(15 AS Numeric(26, 0)), CAST(1 AS Numeric(26, 0)), NULL, NULL, NULL, NULL, NULL)
GO
select * from DB.[Transactional History-test]
```
|
This should reproduce the logic of the cursor in a more efficient way
```
WITH T
AS (SELECT *,
IIF(FIRST_VALUE([ACTION]) OVER (PARTITION BY WORKER
ORDER BY [SEQUENCE]
ROWS UNBOUNDED PRECEDING) = '1-HR', 0, 1) +
COUNT(CASE
WHEN [ACTION] = '1-HR'
THEN 1
END) OVER (PARTITION BY WORKER
ORDER BY [SEQUENCE]
ROWS UNBOUNDED PRECEDING) AS _EMPRECORD
FROM DB.[Transactional History-test])
UPDATE T
SET EMPRECORD = _EMPRECORD;
```
|
I think what you need is a window function with a `CASE` statement. This is simpler and should perform *significantly* better than your cursor, especially if you have good indexes.
```
WITH CTE
AS
(
SELECT *,
CASE WHEN [action] = '1-HR' OR [Sequence] = MIN([sequence]) OVER (PARTITION BY worker)
THEN 1 --cnter increases by 1 whether the action is 1-HR OR the sequence is the first for that worker
ELSE 0 END cnter
FROM [Transactional History-test]
)
SELECT empRecord, --can add any columns you want here
       SUM(cnter) OVER (PARTITION BY worker ORDER BY [SEQUENCE]) AS new_EMPRECORD --just a cumulative sum of cnter per worker
FROM CTE
```
Results (mine match yours):
```
empRecord new_EMPRECORD
--------------------------------------- -------------
1 1
1 1
1 1
1 1
2 2
3 3
3 3
3 3
1 1
1 1
1 1
1 1
2 2
1 1
1 1
```
|
Loop in SQL Server without a Cursor
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
So I have a table with a row for every item in my sales order, containing either the item ID or just a description if it's not a real item. The table looks like this:
SalesOrderLine:
```
|ID | SOI_IMA_RecordID | SOI_LineNbrTypeCode | SOI_MiscLineDescription |
|1 | 2 | Item | XYZ |
|2 | NULL | GL Acct | Description |
|3 | NULL | GL Acct | Descrip |
|4 | 20 | Item | ABC |
```
What I want to do is: if it's not a real item (`SOI_IMA_RecordID` is NULL and/or `SOI_LineNbrTypeCode` = 'GL Acct'), do not inner join with my Item table, and get `SOI_MiscLineDescription` instead of the item name (`IMA_ItemName` in my query).
My query is the following:
```
SELECT SOH_ModifiedDate,
SOM_SalesOrderID,
IMA_ItemName,
SOH_SOD_RequiredDate,
SOD_RequiredDate,
SOH_SOD_DockDate,
SOD_DockDate,
SOH_SOD_PromiseDate,
SOD_PromiseDate,
SOH_SOD_RequiredQty,
SOD_RequiredQty,
SOH_SOD_UnitPrice,
SOD_UnitPrice
FROM WSI_SOH SOH
INNER JOIN SalesOrderDelivery SOD ON SOH.SOH_SOD_RecordID = SOD.SOD_RecordID
INNER JOIN SalesOrder SO ON SO.SOM_RecordID = SOD.SOD_SOM_RecordID
INNER JOIN SalesOrderLine SOI ON SOI.SOI_RecordID = SOD.SOD_SOI_RecordID
INNER JOIN Item ITE ON ITE.IMA_RecordID = SOI.SOI_IMA_RecordID
```
How can I do it?
|
You can use the `CASE` statement in your `SELECT`:
```
SELECT CASE WHEN SOI_IMA_RecordID IS NULL OR
SOI_LineNbrTypeCode = 'GL Acct'
THEN SOI_MiscLineDescription
ELSE IMA_ItemName
END ItemName,
-- ...
FROM WSI_SOH SOH
INNER JOIN SalesOrderDelivery SOD ON SOH.SOH_SOD_RecordID = SOD.SOD_RecordID
INNER JOIN SalesOrder SO ON SO.SOM_RecordID = SOD.SOD_SOM_RecordID
INNER JOIN SalesOrderLine SOI ON SOI.SOI_RecordID = SOD.SOD_SOI_RecordID
LEFT JOIN Item ITE ON ITE.IMA_RecordID = SOI.SOI_IMA_RecordID AND
SOI_IMA_RecordID IS NOT NULL AND
SOI_LineNbrTypeCode != 'GL Acct'
```
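A shorter variant of the same idea is possible, on the assumption that the conditional `LEFT JOIN` leaves `IMA_ItemName` as `NULL` for every non-item line; `COALESCE` then falls back to the description automatically:
```
SELECT COALESCE(ITE.IMA_ItemName, SOI.SOI_MiscLineDescription) AS ItemName
FROM WSI_SOH SOH
INNER JOIN SalesOrderDelivery SOD ON SOH.SOH_SOD_RecordID = SOD.SOD_RecordID
INNER JOIN SalesOrder SO ON SO.SOM_RecordID = SOD.SOD_SOM_RecordID
INNER JOIN SalesOrderLine SOI ON SOI.SOI_RecordID = SOD.SOD_SOI_RecordID
LEFT JOIN Item ITE ON ITE.IMA_RecordID = SOI.SOI_IMA_RecordID AND
                      SOI.SOI_LineNbrTypeCode != 'GL Acct'
```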
|
A `left join` will allow you to join when the `on` condition is matched, and still return the row, with the right side's columns as `NULL`, when it isn't - in your case, when `SOI_IMA_RecordID` is `null`:
```
SELECT SOH_ModifiedDate,
SOM_SalesOrderID,
IMA_ItemName,
SOH_SOD_RequiredDate,
SOD_RequiredDate,
SOH_SOD_DockDate,
SOD_DockDate,
SOH_SOD_PromiseDate,
SOD_PromiseDate,
SOH_SOD_RequiredQty,
SOD_RequiredQty,
SOH_SOD_UnitPrice,
SOD_UnitPrice
FROM WSI_SOH SOH
INNER JOIN SalesOrderDelivery SOD ON SOH.SOH_SOD_RecordID = SOD.SOD_RecordID
INNER JOIN SalesOrder SO ON SO.SOM_RecordID = SOD.SOD_SOM_RecordID
INNER JOIN SalesOrderLine SOI ON SOI.SOI_RecordID = SOD.SOD_SOI_RecordID
LEFT JOIN Item ITE ON ITE.IMA_RecordID = SOI.SOI_IMA_RecordID
```
|
SQL query with condition
|
[
"",
"sql",
"sql-server",
"conditional-statements",
""
] |
I'm not able to use a `WHERE` condition on the row number within the same `SELECT` statement, and the results are not consistent if I use a separate `SELECT` statement to apply the condition on the row number...
```
SELECT TOP (@lastrow - 1) c.totalRows
,c.ae_effective_enrollment_id
,c.[user_id]
,c.login_name
,c.first_name
,c.last_name
,cm.courseware_title
,cm.courseware_code
,@courseware_id assetId
,c.enrollment_status_id
,CASE
WHEN c.enrollment_status_id = 2
AND c.is_self_enrolled = 0
THEN 'Admin-' + s.description
WHEN c.enrollment_status_id = 2
AND c.is_self_enrolled = 1
THEN 'Self-' + s.description
ELSE s.description
END AS enrollmentStatus
,c.is_group
,CASE
WHEN c.is_self_enrolled = 0
THEN 1
ELSE 0
END is_admin
,CASE
WHEN c.auma_is_assigned = 1
THEN 'Admin-assigned'
WHEN c.auma_is_assigned = 0
THEN 'Self-assigned'
ELSE 'No-My-Plan'
END AS myplanStatus
, master_assignment_id
,ROW_NUMBER() over(partition by cm.courseware_id,c.user_id order by c.is_self_enrolled)as check_row
FROM enrollmentCTE c
INNER JOIN dbo.courseware_master cm ON cm.courseware_id = @courseware_id
LEFT JOIN @statuscodes s ON s.id = c.enrollment_status_id
WHERE check_row=1 and
enrollment_status_id<>4 and
rownumber > @firstrow
AND rownumber < @lastrow
ORDER BY rownumber
```
`check_row` here is not recognised. Please help
|
SQL order of execution.
> 1. `FROM` clause
> 2. `WHERE` clause
> 3. `GROUP` BY clause
> 4. `HAVING` clause
> 5. `SELECT` clause
> 6. `ORDER BY` clause
The `check_row` alias was made in the *`select`* part, so it doesn't exist yet in the `where` context.
**EDIT**
I've done some testing and can't seem to get it right. As a temporary solution, you could attempt to put the
```
ROW_NUMBER() over(...
```
expression in the `where` clause as well.
**EDIT:**
another option from the [MSDN website](https://msdn.microsoft.com/en-us/library/ms186734.aspx) is
> Returning a subset of rows
>
> The following example calculates row numbers for all rows in the SalesOrderHeader table in the order of the OrderDate and returns only rows 50 to 60 inclusive.
```
USE AdventureWorks2012;
GO
WITH OrderedOrders AS
(
SELECT SalesOrderID, OrderDate,
ROW_NUMBER() OVER (ORDER BY OrderDate) AS RowNumber
FROM Sales.SalesOrderHeader
)
SELECT SalesOrderID, OrderDate, RowNumber
FROM OrderedOrders
WHERE RowNumber BETWEEN 50 AND 60;
```
|
```
SELECT totalRows, ae_effective_enrollment_id, user_id, login_name, first_name, last_name, check_row FROM
(SELECT TOP (@lastrow - 1) c.totalRows as totalRows
,c.ae_effective_enrollment_id as ae_effective_enrollment_id
,c.[user_id] as user_id
,c.login_name as login_name
,c.first_name as first_name
,c.last_name as last_name
,cm.courseware_title as courseware_title
,cm.courseware_code as courseware_code
,@courseware_id as assetId
,c.enrollment_status_id as enrollment_status_id
,CASE
WHEN c.enrollment_status_id = 2
AND c.is_self_enrolled = 0
THEN 'Admin-' + s.description
WHEN c.enrollment_status_id = 2
AND c.is_self_enrolled = 1
THEN 'Self-' + s.description
ELSE s.description
END AS enrollmentStatus
,c.is_group
,CASE
WHEN c.is_self_enrolled = 0
THEN 1
ELSE 0
END is_admin
,CASE
WHEN c.auma_is_assigned = 1
THEN 'Admin-assigned'
WHEN c.auma_is_assigned = 0
THEN 'Self-assigned'
ELSE 'No-My-Plan'
END AS myplanStatus
, master_assignment_id
,ROW_NUMBER() over(partition by cm.courseware_id,c.user_id order by c.is_self_enrolled)as check_row
FROM enrollmentCTE c
INNER JOIN dbo.courseware_master cm ON cm.courseware_id = @courseware_id
LEFT JOIN @statuscodes s ON s.id = c.enrollment_status_id
WHERE enrollment_status_id<>4 and
rownumber > @firstrow
AND rownumber < @lastrow
ORDER BY rownumber ) t where check_row = 1
```
Note: add all the column names to the first (outer) `SELECT` statement.
|
How to have where clause on row_number within the same select statement?
|
[
"",
"sql",
"sql-server",
"t-sql",
"row-number",
""
] |
I have this SQL query that deletes a user's preferences from the USERPREF table if they have not logged in for 30 days (the last login date is in the MOMUSER table); however, it does not verify that the user still exists in MOMUSER. How can I change this so that if USERPREF.CUSER does not exist in MOMUSER.CODE, the USERPREF row is also deleted, since such users will not have a last login date?
```
DELETE USERPREF FROM USERPREF
INNER JOIN MOMUSER ON MOMUSER.CODE = USERPREF.CUSER
WHERE MOMUSER.LOG_START < GETDATE()-30
```
|
Change to an *outer* join, reverse the condition (so you match users you want to *keep*) and move it into the join, then use `IS NULL` to delete rows *without* joins:
```
DELETE USERPREF
FROM USERPREF
LEFT JOIN MOMUSER ON MOMUSER.CODE = USERPREF.CUSER
AND MOMUSER.LOG_START >= GETDATE()-30
WHERE MOMUSER.LOG_START IS NULL
```
Recall that an outer join returns all nulls when the join misses. By moving the date condition into the join, you get to exercise it but *not* require a joined row. The where clause filters out all rows that have the kind of data you want to keep - leaving only those you want to delete.
|
I'm not 100% sure I understand your question, but I think you might be looking for a `left join`, checking whether `MOMUSER.LOG_START IS NULL` (it should be null if the row didn't actually join):
```
DELETE USERPREF FROM USERPREF
LEFT JOIN MOMUSER ON MOMUSER.CODE = USERPREF.CUSER
WHERE MOMUSER.LOG_START < GETDATE()-30
OR MOMUSER.LOG_START IS NULL
```
|
delete row from table where column does not exist in another table
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I want to get the top 3 rows from the table, but the query should also handle the tie situation.
I have a table which looks like this
```
+--------+---------+
| NAME | COST |
+--------+---------+
| EA | 500 |
| SPSD | 475 |
| IA | 450 |
| NST | 450 |
| AAD | 350 |
| ECOM | 325 |
+--------+---------+
```
And my desired result should look like this
```
+--------+---------+
| NAME | COST |
+--------+---------+
| EA | 500 |
| SPSD | 475 |
| IA | 450 |
| NST | 450 |
+--------+---------+
```
My Query:
```
SELECT *
FROM TABLE
ORDER BY COST DESC
LIMIT 0,3
```
But it returns only the first 3 rows and doesn't handle the case where the 4th row is equal to the 3rd.
How can I handle this situation in the query?
|
Use a sub-query to get the third highest cost:
```
select * from table
where cost >= (SELECT cost FROM table
               ORDER BY cost DESC
               LIMIT 2,1)
ORDER BY cost DESC
```
(I'm not fully sure about the `LIMIT 2,1` part, since I'm not a MySQL guy; it should skip the two highest costs and return the third. Test and see!)
|
Here is a way using a ranking mechanism with user variables.
Consider the following:
```
mysql> select * from test1 ;
+------+------+
| name | cost |
+------+------+
| EA | 500 |
| SPSD | 475 |
| IA | 450 |
| NST | 450 |
| AAD | 350 |
| ECOM | 325 |
+------+------+
select
name,
cost from(
select
t1.*,
@rn:= if(@prev_cost = cost,@rn,@rn+1) as rn,
@prev_cost:= cost
from test1 t1,(select @rn:=0,@prev_cost:=0)x
order by cost desc
)x
where x.rn <=3 ;
+------+------+
| name | cost |
+------+------+
| EA | 500 |
| SPSD | 475 |
| IA | 450 |
| NST | 450 |
+------+------+
```
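On MySQL 8.0 or later (a version assumption; the user-variable trick above is the pre-8.0 approach), the same dense ranking can be written with a window function:
```
SELECT name, cost
FROM (
    SELECT name, cost,
           DENSE_RANK() OVER (ORDER BY cost DESC) AS rnk
    FROM test1
) ranked
WHERE rnk <= 3;
```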
|
Select Top N rows and also handle tie situation
|
[
"",
"mysql",
"sql",
"database",
""
] |
I need to get the maximum value of a field, so I used this SQL in a stored procedure:
```
Select max(field_1) from mytable into :v_max1
```
However, I also want to get another field's value from the row with that maximum value. I could write another SQL statement like this:
```
Select field_2 from mytable where field_1 = :v_max1 into v_field2
```
But I want to ask: is it possible to get the `field_2` value with the first statement, so that I use only a single statement?
|
This query will return all records whose `field_1` equals to `MAX(field_1)`
```
SELECT field_2 FROM mytable WHERE field_1 = (
SELECT MAX(field_1) FROM mytable)
```
|
you can do a query like this
```
SELECT FIRST 1 field_1, field_2
FROM yourtable
ORDER BY field_1 DESC;
```
If I remember well, you should index `field_1` in descending order for this to perform well.
Note that your second query may return multiple rows if `max(field_1)` is not unique; this query will return only one row.
|
Get another value with max
|
[
"",
"sql",
"firebird",
"greatest-n-per-group",
""
] |
I have 2 tables:
Table `annonce` (real estate ads):
```
idAnnonce | reference
-----------------------
1 | dupond
2 | toto
```
Table `freeDays` (free days for all ads):
```
idAnnonce | date
-----------------------
1 | 2015-06-06
1 | 2015-06-07
1 | 2015-06-09
1 | 2015-06-10
2 | 2015-06-06
2 | 2015-06-07
2 | 2015-06-12
2 | 2015-06-13
```
I want to select all available ads that have free days for every day between a start date and an end date, so I have to check each day between those dates.
The request :
```
SELECT DISTINCT
`annonce`.`idAnnonce`, `annonce`.`reference`
FROM
`annonce`, `freeDays`
WHERE
`annonce`.`idAnnonce` = `freeDays`.`idAnnonce`
AND
`freeDays`.`date` = '2015-06-06'
AND
`freeDays`.`date` = '2015-06-07'
```
This returns no result. Where is my error?
|
A single row's `date` can't be equal to both dates at once, which is why your query returns nothing. Use a range instead:
```
SELECT DISTINCT a.idAnnonce, a.reference
FROM annonce a
INNER JOIN freeDays f ON a.idAnnonce = f.idAnnonce
WHERE f.date BETWEEN '2015-06-06' AND '2015-06-07'
```
|
What Matt says is correct. You can also do this as an alternative:
```
SELECT DISTINCT a.idAnnonce, a.reference
FROM annonce a
INNER JOIN freeDays f ON a.idAnnonce = f.idAnnonce
WHERE f.date IN('2015-06-06','2015-06-07')
```
Or like this:
```
SELECT DISTINCT a.idAnnonce, a.reference
FROM annonce a
INNER JOIN freeDays f ON a.idAnnonce = f.idAnnonce
WHERE f.date ='2015-06-06' OR f.date ='2015-06-07'
```
This will give you the same result as the `BETWEEN`.
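A quick check, sketched with SQLite and a trimmed-down copy of the sample data, that the three forms really are interchangeable for this filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE annonce (idAnnonce INTEGER, reference TEXT);
    CREATE TABLE freeDays (idAnnonce INTEGER, date TEXT);
    INSERT INTO annonce VALUES (1, 'dupond'), (2, 'toto');
    INSERT INTO freeDays VALUES (1, '2015-06-06'), (1, '2015-06-07'),
                                (2, '2015-06-06'), (2, '2015-06-12');
""")

# Same query with three equivalent WHERE variants.
base = """SELECT DISTINCT a.idAnnonce, a.reference
          FROM annonce a INNER JOIN freeDays f ON a.idAnnonce = f.idAnnonce
          WHERE {} ORDER BY a.idAnnonce"""
variants = ["f.date BETWEEN '2015-06-06' AND '2015-06-07'",
            "f.date IN ('2015-06-06', '2015-06-07')",
            "f.date = '2015-06-06' OR f.date = '2015-06-07'"]
results = [conn.execute(base.format(v)).fetchall() for v in variants]
print(results[0])  # all three variants return the same rows
```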
|
Select on 2 tables return no result?
|
[
"",
"mysql",
"sql",
""
] |
I have a table with the following sample data:
```
Tag Loc Time1
A 10 6/2/15 8:00 AM
A 10 6/2/15 7:50 AM
A 10 6/2/15 7:30 AM
A 20 6/2/15 7:20 AM
A 20 6/2/15 7:15 AM
B 10 6/2/15 7:12 AM
B 10 6/2/15 7:11 AM
A 10 6/2/15 7:10 AM
A 10 6/2/15 7:00 AM
```
I need SQL to select the first (earliest) row in a sequence until location changes, then select the earliest row again until location changes. In other words I need the following output from above:
```
Tag Loc Time1
A 10 6/2/15 7:30 AM
A 20 6/2/15 7:15 AM
A 10 6/2/15 7:00 AM
B 10 6/2/15 7:11 AM
```
I tried this from Giorgos - but some lines from the select were duplicated:
```
declare @temptbl table (rowid int primary key identity, tag nvarchar(1), loc int, time1 datetime)
declare @tag as nvarchar(1), @loc as int, @time1 as datetime
insert into @temptbl (tag, loc, time1) values (1,20,'6/5/2015 7:15 AM')
insert into @temptbl (tag, loc, time1) values (1,20,'6/5/2015 7:20 AM')
insert into @temptbl (tag, loc, time1) values (1,20,'6/5/2015 7:25 AM')
insert into @temptbl (tag, loc, time1) values (4,20,'6/5/2015 7:20 AM')
insert into @temptbl (tag, loc, time1) values (4,20,'6/5/2015 7:25 AM')
insert into @temptbl (tag, loc, time1) values (4,20,'6/5/2015 7:30 AM')
insert into @temptbl (tag, loc, time1) values (4,20,'6/5/2015 7:35 AM')
insert into @temptbl (tag, loc, time1) values (4,20,'6/5/2015 7:40 AM')
select * from @temptbl
SELECT Tag, Loc, MIN(Time1) as time2
FROM (
SELECT Tag, Loc, Time1,
ROW_NUMBER() OVER (ORDER BY Time1) -
ROW_NUMBER() OVER (PARTITION BY Tag, Loc
ORDER BY Time1) AS grp
FROM @temptbl ) t
GROUP BY Tag, Loc, grp
```
Here are the results (there should be only one line for each tag):
```
Tag Loc time2
1 20 2015-06-05 07:15:00.000
1 20 2015-06-05 07:25:00.000
4 20 2015-06-05 07:20:00.000
4 20 2015-06-05 07:30:00.000
```
|
Can you try this? Replace `YourTable` with the table name you want:
```
declare @temptbl table (rowid int primary key identity, tag nvarchar(1), loc int, time1 datetime)
declare @tag as nvarchar(1), @loc as int, @time1 as datetime
declare tempcur cursor for
select tag, loc, time1
from YourTable
-- order here by time or whatever columns you want to
open tempcur
fetch next from tempcur
into @tag, @loc, @time1
while (@@fetch_status = 0)
begin
if not exists (select top 1 * from @temptbl where tag = @tag and loc = @loc and rowid = (select max(rowid) from @temptbl))
begin
print 'insert'
print @tag
print @loc
print @time1
insert into @temptbl (tag, loc, time1) values (@tag, @loc, @time1)
end
else
begin
print 'update'
print @tag
print @loc
print @time1
update @temptbl
set tag = @tag,
loc = @loc,
time1 = @time1
where tag = @tag and loc = @loc and rowid = (select max(rowid) from @temptbl)
end
fetch next from tempcur
into @tag, @loc, @time1
end
deallocate tempcur
select * from @temptbl
```
|
Assuming you're using MS SQL Server 2012 or newer, the [`lag`](https://msdn.microsoft.com/en-us/library/hh231256.aspx) window function will allow you to compare a row to the previous one:
```
SELECT tag, loc, time1
FROM (SELECT tag, loc, time1,
LAG (loc) OVER (PARTITION BY tag ORDER BY time1) AS lagloc
FROM my_table) t
WHERE loc != lagloc OR lagloc IS NULL
```
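The `lag` approach can be sketched end-to-end with the question's own sample data. This uses SQLite's window functions (3.25+; the build bundled with recent Python is fine) rather than SQL Server, purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (tag TEXT, loc INTEGER, time1 TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?, ?)", [
    ("A", 10, "08:00"), ("A", 10, "07:50"), ("A", 10, "07:30"),
    ("A", 20, "07:20"), ("A", 20, "07:15"), ("B", 10, "07:12"),
    ("B", 10, "07:11"), ("A", 10, "07:10"), ("A", 10, "07:00"),
])

# Keep a row only when its location differs from the previous row's location
# (per tag, in time order) -- i.e. the first row of each location run.
rows = conn.execute("""
    SELECT tag, loc, time1
    FROM (SELECT tag, loc, time1,
                 LAG(loc) OVER (PARTITION BY tag ORDER BY time1) AS lagloc
          FROM my_table) t
    WHERE loc != lagloc OR lagloc IS NULL
    ORDER BY tag, time1
""").fetchall()
print(rows)  # the four rows the question asks for
```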
|
SQL Server how to select first row in sequence
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I have below SPROC in which i am passing column name(value) along with other parameters(Place,Scenario).
```
ALTER PROCEDURE [dbo].[up_GetValue]
@Value varchar(20), @Place varchar(10),@Scenario varchar(20), @Number varchar(10)
AS BEGIN
SET NOCOUNT ON;
DECLARE @SQLquery AS NVARCHAR(MAX)
set @SQLquery = 'SELECT ' + @Value + ' from PDetail where Place = ' + @Place + ' and Scenario = ' + @Scenario + ' and Number = ' + @Number
exec sp_executesql @SQLquery
END
GO
```
When executing: `exec [dbo].[up_GetValue] 'Service', 'HOME', 'Agent', '123697'`
I am getting the below error message:
Invalid column name 'HOME'.
Invalid column name 'Agent'.
Do I need to add anything in the sproc?
|
First: You tagged your question as mysql but I think your code is MSSQL.
Anyway, your problem is that you need to add quotes around each string-valued parameter.
Like this:
```
alter PROCEDURE [dbo].[up_GetValue]
@Value varchar(20), @Place varchar(10),@Scenario varchar(20), @Number varchar(10)
AS BEGIN
SET NOCOUNT ON;
DECLARE @SQLquery AS NVARCHAR(MAX)
set @SQLquery = 'SELECT ' + QUOTENAME(@Value) + ' from PDetail where Place = ''' + @Place + ''' and Scenario = ''' + @Scenario + ''' and Number = ''' + @Number +''''
print @SQLquery
exec sp_executesql @SQLquery
END
GO
```
Update:
Use `QUOTENAME` to make sure it works.
QUOTENAME:
> Returns a Unicode string with the delimiters added to make the input string a valid SQL Server delimited identifier.
|
You need to quote column names with ` (backtick) and string values with ".
```
set @SQLquery = 'SELECT `' + @Value + '` from PDetail where Place = "' + @Place + '" and Scenario = "' + @Scenario + '" and Number = ' + @Number
```
|
How to pass column name along with other parameters
|
[
"",
"mysql",
"sql",
""
] |
Designing a database, there's a relationship between two tables, `Job` and `Document`. One `Job` can have multiple `Documents`, but one (and only one) of these `Documents` needs to be flagged as `IsCurrent`. This is not always the most recent `Document` associated with that `Job`.
Structurally, I can see two ways of doing this.
The first is to add a `DocumentId` column to `Job`, and a `JobId` column to `Document`. This will work, but creates a circular reference: when imported into Entity Framework you end up with the peculiar situation that a `Job` has both a `Document` and a `Documents` collection. Likewise that `Document` has both a `Job` and a `Jobs` collection.
The second is to add an `IsCurrent` bit flag to the `Document` table. This will work, but leaves it logically possible for a Job to have multiple `IsCurrent` `Documents`, which is not allowed.
Questions:
1) Am I right in thinking there's no "third way" out of this dilemma?
2) Presuming not, which is better, and why? I favour the second solution as it seems much cleaner and we can enforce the single `IsCurrent` through the business logic. My colleague favours the former solution because it results in simpler C# code and object references - if we rename the foreign keys, it should avoid the confusion created by `Job/Jobs`.
|
If your back-end is SQL Server, you can create a [filtered index](https://msdn.microsoft.com/en-us/library/cc280372.aspx) to ensure that each `job` has at most one current document:
```
CREATE UNIQUE INDEX IX_Documents_Current
ON Documents (JobId) where IsCurrent=1
```
That way, it's not *just* enforced at the business level but is also enforced inside the database.
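The same guarantee can be sketched outside SQL Server: SQLite calls this a "partial index", but the effect matches the filtered index above (schema names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Documents (
    Id INTEGER PRIMARY KEY, JobId INTEGER, IsCurrent INTEGER)""")
# Unique only among rows WHERE IsCurrent = 1: at most one current doc per job.
conn.execute("""CREATE UNIQUE INDEX IX_Documents_Current
                ON Documents (JobId) WHERE IsCurrent = 1""")

conn.execute("INSERT INTO Documents (JobId, IsCurrent) VALUES (1, 1)")  # ok
conn.execute("INSERT INTO Documents (JobId, IsCurrent) VALUES (1, 0)")  # ok, not current
rejected = False
try:
    # A second current document for job 1 violates the index.
    conn.execute("INSERT INTO Documents (JobId, IsCurrent) VALUES (1, 1)")
except sqlite3.IntegrityError:
    rejected = True
print("second current document rejected:", rejected)
```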
|
Just for a third way (and for fun): consider using not a bit, but an int equal to max + 1 among the documents of the job,
then create a unique index on {job FK, said int}.
You can:
* change the current document by updating the int,
* get the current one by searching for the max,
* prevent having more than one current document thanks to the unique index,
* create a new non-current document by using min - 1 for said int.
This is not the simplest to implement.
|
How to implement a one-to-many relationship with an "Is Current" requirement
|
[
"",
"sql",
"database",
"entity-framework",
"database-design",
"relational-database",
""
] |
I have a SQL table showing charge amounts per person and an associated date. I'm looking to create a printout of each person's charges per month. I have the following code which will show me everyone's data for ONE month but I'd like to put this in one report without having to rerun this for every month's date range. Is there a way to pull this data all at once? Essentially I'd like the columns to show:
Last Name January February March Etc
name amt amt amt
Here is my code to pull this data for April as an example. You'll see the dates are coded as YYYYMMDD. This code works perfectly for one month at a time.
```
select pm.last_name, SUM(amt) as Total_Charge from charges c
inner join provider_mstr pm ON c.rendering_id = pm.provider_id
where begin_date_of_service >= '20150401' and begin_date_of_service <= '20150431'
group by pm.last_name
```
|
A generic solution uses SUM over CASEs:
```
select pm.last_name,
SUM(case when begin_date_of_service >= '20150101' and begin_date_of_service < '20150201' then amt else 0 end) as Total_Charge_Jan,
SUM(case when begin_date_of_service >= '20150201' and begin_date_of_service < '20150301' then amt else 0 end) as Total_Charge_Feb,
SUM(case when begin_date_of_service >= '20150301' and begin_date_of_service < '20150401' then amt else 0 end) as Total_Charge_Mar,
...
from charges c
inner join provider_mstr pm ON c.rendering_id = pm.provider_id
where begin_date_of_service >= '20150101' and begin_date_of_service < '20160101'
group by pm.last_name
```
Depending on your DBMS you might have a PIVOT function or similar...
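The SUM-over-CASE pattern can be sketched with SQLite and made-up data (the schema is pared down: the join to the provider table is omitted, and only two months are shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE charges (
    last_name TEXT, begin_date_of_service TEXT, amt REAL)""")
conn.executemany("INSERT INTO charges VALUES (?, ?, ?)", [
    ("Smith", "20150115", 10.0), ("Smith", "20150120", 5.0),
    ("Smith", "20150210", 7.0), ("Jones", "20150105", 3.0),
])

# One SUM(CASE ...) column per month; each row contributes to exactly one sum.
rows = conn.execute("""
    SELECT last_name,
           SUM(CASE WHEN begin_date_of_service >= '20150101'
                     AND begin_date_of_service < '20150201'
                    THEN amt ELSE 0 END) AS jan,
           SUM(CASE WHEN begin_date_of_service >= '20150201'
                     AND begin_date_of_service < '20150301'
                    THEN amt ELSE 0 END) AS feb
    FROM charges
    GROUP BY last_name
    ORDER BY last_name
""").fetchall()
print(rows)
```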
|
Change SUM(amt) as Total\_Charge to the following for each period
```
SUM(CASE WHEN begin_date_of_service BETWEEN '20150401' AND '20150430' THEN amt ELSE 0 END) AS April_Amt,
SUM(CASE WHEN begin_date_of_service BETWEEN '20150301' AND '20150331' THEN amt ELSE 0 END) AS March_Amt
```
And update WHERE clause to include all date ranges for pull.
|
SQL: Group SUM values by date ranges and show in multiple columns by date
|
[
"",
"sql",
""
] |
I've been advised in this self page to not use the wildcard `*` in my SQL queries.
Wrong query
```
SELECT * FROM table
```
instead of
```
SELECT field_a, field_b, field_c FROM table
```
I understand only one reason: if you need only 3 fields from that query, there is no point in forcing the SQL engine to work with all the fields and send them back to your program when you just want to use a few.
But this makes me doubt whether it is correct to use it when you need all (or almost all) the field data retrieved, or whether even in those cases it's better to specify all your fields.
Is there any other reason to avoid wildcards than reducing the amount of data sent from the DB engine to the program?
|
The reason you have understood is very much valid and is perhaps the strongest reason why this is said.
In many applications where the table contains many columns (let's say 20) and the table is also huge, containing millions of records, if you want to retrieve only specific columns there is no point in using the wildcard `*`, as the MySQL engine then has to unnecessarily read all the columns of all the rows.
But to make a point, **it is not that `*` is discouraged outright**; in fact it can be a boon in the same situation, when you have 20 columns and you want to retrieve the values from all of them.
To add more to it, `*` could be slower for the following reasons:
* Your table doesn't have indexes on all of its columns, so the query uses a full table scan, which makes it slow.
* When you return trailing fields from a table that contains variable-length columns, there can be a slight searching overhead.
|
Using `*` means you're querying **all** the table's fields. If that's what your application is supposed to do, it makes sense to use it.
If that's not what your application is **supposed** to do, it's a potential recipe for trouble. If the table is ever modified in the future, the best case scenario is that your application will be querying columns it doesn't need, which may harm performance. The worst case scenario is that it will just break.
|
Why is the use of wildcard * in select statements discouraged?
|
[
"",
"mysql",
"sql",
"select",
""
] |
I am trying to set a column to the current date (in the form dd/mm/yyyy) in a stored procedure, however the column simply sets to 0. The code is as below:
```
USE [DBDataOne]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
declare @dateone varchar(max)
declare @dodate varchar(max)
select @dateone=convert(varchar,GETDATE(),103 )
select @dodate='Update [dbo].[tabfget] set DATEIN='+@dateone
exec (@dodate)
```
|
If `datein` is stored properly (as a date/time), then you don't need to worry about conversion.
Also, you don't need dynamic SQL for this. Just:
```
Update [dbo].[tabfget]
set DATEIN = cast(getdate() as date);
```
If -- horror of horrors -- you are storing dates as *strings* instead of the proper format, then you should fix the database. If that is not possible, you can do:
```
Update [dbo].[tabfget]
set DATEIN = convert(varchar(10), GETDATE(), 103);
```
However, if dates *have* to be stored as strings, then you should *always* use an ISO-standard format, such as YYYY-MM-DD.
|
Why can't you directly do something like:
```
Update [dbo].[tabfget] set DATEIN = @dateone
```
(OR)
```
Update [dbo].[tabfget] set DATEIN = convert(varchar,GETDATE(),103 )
```
Yes, the format/style `103` should get you `/` instead of `-` like
```
select GETDATE()
```
will result in `2015-05-29 20:43:38.547`
```
select CONVERT(varchar(15), GETDATE(), 103)
```
Will result in `29/05/2015`
|
Column Set to 0 Instead of Date Stored Procedure
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
""
] |
So I have an unordered table `movement` that has columns `timestamp`,`x`,and `y`. I want to order by `timestamp`, and change and save the table to have the all rows ordered by timestamp.
I wanted to use `UPDATE TABLE` but I'm unsure on how the query would look... for example, I have this:
```
UPDATE movement
SET ?????
ORDER BY timestamp;
```
I don't want to set anything, I just want to change the order of the table. I need to do this for another query that I'm going to write, but the fact is that this table is very large and I need to break up things in steps so that the performance isn't terrible later. So how would I do this? I found [this](https://stackoverflow.com/questions/10079750/how-to-sort-values-in-columns-and-update-table) SO post that addressed a similar question, but I was wondering if there was another way to do it rather than use another table, because as I said, this table is very large(millions of rows) and recopying the table would take a long time.
|
Tables don't inherently have an order; you don't have to update them into any particular order.
What you do is choose the order of what you SELECT from the table. Then it can be in any order you choose!
Example:
```
SELECT * FROM movement
ORDER BY timestamp;
```
But then somewhere else maybe you want to:
```
SELECT * FROM movement
ORDER BY timestamp DESCENDING;
```
|
You can't use `ORDER BY` in an `UPDATE` statement this way (in MySQL it is only meaningful combined with `LIMIT`). More to the point, there is no need to have the records stored in a particular order, because you can just display them in a particular order with a `SELECT` statement like:
```
select * from movement
order by timestamp
```
|
SQL - UPDATE TABLE and ORDER BY?
|
[
"",
"mysql",
"sql",
"phpmyadmin",
"sql-update",
"sql-order-by",
""
] |
I have a table - "logs".
```
+-------+--------+-----------+---------------+
| logId | spName | startTime | executionTime |
+-------+--------+-----------+---------------+
| 1 | mail | 11:54:49 | 300 |
| 2 | calc | 13:12:11 | 500 |
| .... | | | |
| 100 | log | 15:30:30 | 400 |
+-------+--------+-----------+---------------+
```
I want to update it like:
```
UPDATE logs
SET executionTime = 600
WHERE logId = ( SELECT max(logId) FROM logs WHERE spName = 'calc' )
```
But I receive an error:
> Error Code: 1093. You can't specify target table 'logs' for update in FROM clause
I don't know how to fix it.
|
You can do this with single query by using ORDER By and LIMIT
```
UPDATE logs
SET executionTime = 600
WHERE spName = 'calc'
ORDER By logId DESC
LIMIT 1
```
This will update only one row with largest logId
|
Try this:
```
UPDATE logs a,
  (SELECT max(logId) AS max_logs FROM logs WHERE spName = 'calc') AS b
SET executionTime = 600
WHERE a.logId = b.max_logs
```
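Error 1093 is a MySQL-specific restriction; other engines run the original self-referencing subquery as-is, which is a handy way to check what the workarounds above should produce. A sketch with SQLite and made-up log rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE logs (
    logId INTEGER PRIMARY KEY, spName TEXT, executionTime INTEGER)""")
conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", [
    (1, "mail", 300), (2, "calc", 500), (3, "calc", 450), (4, "log", 400),
])

# SQLite accepts the question's original statement directly.
conn.execute("""
    UPDATE logs SET executionTime = 600
    WHERE logId = (SELECT MAX(logId) FROM logs WHERE spName = 'calc')
""")
rows = conn.execute(
    "SELECT logId, executionTime FROM logs ORDER BY logId").fetchall()
print(rows)  # only the row with the largest 'calc' logId changed
```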
|
How to update some row where ID = SELECT (...) - SQL
|
[
"",
"mysql",
"sql",
""
] |
Can someone help me build a SQL query where if column1 is null/blank I should get value from column2, if column2 is also blank/null I should get value from column3.
Below is the table that I am using
```
Price1 Price2 Price3
120
140
160
```
The output that I am looking for is
```
Price
120
140
160
```
I have already tried
```
select Price1 as Price
from A
WHERE PRICE1 IS NOT NULL
UNION
SELECT PRICE2 as Price
from A
where PRICE1 is null
UNION
select PRICE3 as id
from A
where PRICE2 is null
select COALESCE (PRICE1,PRICE2,PRICE3) from A
select ISNULL(PRICE1,ISNULL(PRICE2,PRICE3)) from A
select
case when PRICE1 IS not null then PRICE1 when PRICE1 IS null then PRICE2 WHEN PRICE2 IS NULL then PRICE3 end PRICE id from A
```
None of the above syntax gets the data I'm looking for. Please help
|
If you think you have empty strings you can try :
```
select
case when coalesce(PRICE1, '') <> '' then PRICE1
when coalesce(PRICE2, '') <> '' then PRICE2
when coalesce(PRICE3, '') <> '' then PRICE3
end AS PRICE
FROM A
```
|
Use `COALESCE` like so:
```
SELECT COALESCE(Price1, Price2, Price3) FROM A;
```
However, this won't work if the entries are blank as opposed to NULL.
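One common workaround when blanks can appear alongside NULLs is to map empty strings to NULL with `NULLIF` first, so `COALESCE` skips them too (`NULLIF` exists in SQL Server as well; the sketch below uses SQLite purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (Price1 TEXT, Price2 TEXT, Price3 TEXT)")
conn.executemany("INSERT INTO A VALUES (?, ?, ?)", [
    ("120", None, None),   # later columns are NULL
    ("", "140", None),     # empty string, not NULL
    (None, "", "160"),
])

# NULLIF(x, '') maps empty strings to NULL, so COALESCE skips blanks too.
rows = conn.execute("""
    SELECT COALESCE(NULLIF(Price1, ''), NULLIF(Price2, ''), Price3) AS Price
    FROM A
""").fetchall()
print(rows)  # [('120',), ('140',), ('160',)]
```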
|
How to select value from second column if first column is blank/null in SQL (MS SQL)?
|
[
"",
"sql",
"sql-server",
""
] |
This might be a really elementary question, but I've never created a table with `TIMESTAMP()` before, and I'm confused about what to put as the parameters. For example, I just randomly put `TIMESTAMP(20)`, but what does the `20` as a parameter signify here? What should be put in here?
I googled the question, but didn't really come up with anything so... Anyway I'm new to sql, so any help would be greatly appreciated, thank you!!
|
*EDIT*
As of MySQL 5.6.4, datatype `TIMESTAMP(n)` specifies `n` (0 up to 6) decimal digits of precision for fractional seconds.
Before MySQL 5.6, MySQL did not support fractional seconds stored as part of a `TIMESTAMP` datatype.
Reference: <https://dev.mysql.com/doc/refman/5.6/en/fractional-seconds.html>
---
We don't need to specify a length modifier on a `TIMESTAMP`. We can just specify `TIMESTAMP` by itself.
But be aware that the first `TIMESTAMP` column defined in the table is subject to automatic initialization and update. For example:
```
create table foo (id int, ts timestamp, val varchar(2));
show create table foo;
CREATE TABLE `foo` (
`id` INT(11) DEFAULT NULL,
`ts` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`val` VARCHAR(2) DEFAULT NULL
)
```
---
What goes in parens following a datatype depends on what the datatype is, but for some datatypes, it's a length modifier.
For some datatypes, the length modifier affects the maximum length of values that can be stored. For example, `VARCHAR(20)` allows up to 20 characters to be stored. And `DECIMAL(10,6)` allows for numeric values with four digits before the decimal point and six after, and effective range of -9999.999999 to 9999.999999.
For other types, the length modifier doesn't affect the range of values that can be stored. For example, `INT(4)` and `INT(10)` are both integers, and both can store the full range of values allowed for the integer datatype.
What that length modifier does in that case is just informational. It essentially specifies a recommended display width. A client can make use of that to determine how much space to reserve on a row for displaying values from the column. A client doesn't have to do that, but that information is available.
*EDIT*
A length modifier is no longer accepted for the `TIMESTAMP` datatype. (If you are running a really old version of MySQL and it's accepted, it will be ignored.)
|
That's the precision, my friend. If you put, for example, (2) as a parameter, you will get a date with a precision like 2015-12-29 00:00:00.**00**; by the way, the maximum value is 6.
|
Setting a column as timestamp in MySql workbench?
|
[
"",
"mysql",
"sql",
"timestamp",
"mysql-workbench",
""
] |
I am having issues inserting Id fields from two tables into a single record in a third table.
```
mysql> describe ing_titles;
+----------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------------+------+-----+---------+----------------+
| ID_Title | int(10) unsigned | NO | PRI | NULL | auto_increment |
| title | varchar(64) | NO | UNI | NULL | |
+----------+------------------+------+-----+---------+----------------+
2 rows in set (0.22 sec)
mysql> describe ing_categories;
+-------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------------+------+-----+---------+----------------+
| ID_Category | int(10) unsigned | NO | PRI | NULL | auto_increment |
| category | varchar(64) | NO | UNI | NULL | |
+-------------+------------------+------+-----+---------+----------------+
2 rows in set (0.02 sec)
mysql> describe ing_title_categories;
+-------------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+------------------+------+-----+---------+----------------+
| ID_Title_Category | int(10) unsigned | NO | PRI | NULL | auto_increment |
| ID_Title | int(10) unsigned | NO | MUL | NULL | |
| ID_Category | int(10) unsigned | NO | MUL | NULL | |
+-------------------+------------------+------+-----+---------+----------------+
3 rows in set (0.04 sec)
```
Let's say the data from the tables is:
```
mysql> select * from ing_titles;
+----------+-------------------+
| ID_Title | title |
+----------+-------------------+
| 3 | Chicken |
| 2 | corn |
| 1 | Fettucini Alfredo |
+----------+-------------------+
3 rows in set (0.00 sec)
mysql> select * from ing_categories;
+-------------+----------+
| ID_Category | category |
+-------------+----------+
| 1 | Dinner |
| 3 | Meat |
| 2 | Veggie |
+-------------+----------+
3 rows in set (0.00 sec)
```
I want to insert into `ing_title_categories` the record "corn, Veggie" or where ID\_Title = 2 and ID\_Category = 2.
Here's what I tried:
```
INSERT INTO ing_title_categories (ID_Title, ID_Category)
SELECT ing_titles.ID_Title, ing_categories.ID_Category
FROM ing_title_categories
LEFT JOIN ing_titles ON ing_title_categories.ID_Title=ing_titles.ID_Title
LEFT JOIN ing_categories ON ing_title_categories.ID_Category=ing_categories.ID_Category
WHERE (ing_titles.ID_Title=2) AND (ing_categories.ID_Category = 2);
```
There is no data inserted into the table `ing_title_categories`, and here is the reply from MySQL:
```
Query OK, 0 rows affected (0.00 sec)
Records: 0 Duplicates: 0 Warnings: 0
```
What is the correct syntax for inserting the `ing_titles.ID_Title` and `ing_categories.ID_Category` into the table `ing_titles_categories`?
*Please, no PHP or Python examples. Use SQL that I can copy and paste into the MySQL prompt. I will be adding this to a C++ program, not PHP, JavaScript or Python.*
**Edit 1:**
The `ing_title_categories.ID_Title` and `ing_title_categories.ID_Category` are foreign keys into the other tables.
|
After taking advice from @DrewPierce and @KaiserM11, here is the MySQL sequence:
```
mysql> INSERT INTO ing_title_categories (ID_Title, ID_Category)
-> SELECT
-> ing_titles.ID_Title,
-> ing_categories.ID_Category
-> FROM ing_titles, ing_categories
-> where (ing_titles.ID_Title = 2) AND (ing_categories.ID_Category = 2)
-> ;
Query OK, 1 row affected (0.07 sec)
Records: 1 Duplicates: 0 Warnings: 0
mysql> select * from ing_title_categories;
+-------------------+----------+-------------+
| ID_Title_Category | ID_Title | ID_Category |
+-------------------+----------+-------------+
| 17 | 2 | 2 |
+-------------------+----------+-------------+
1 row in set (0.00 sec)
```
|
In this case, only possible way I see is using a `UNION` query like
```
INSERT INTO ing_title_categories (ID_Title, ID_Category)
SELECT ID_Title, NULL
FROM ing_titles WHERE ID_Title = 2
UNION
SELECT NULL, ID_Category
FROM ing_categories
```
(OR)
You can change your table design and use an `AFTER INSERT` trigger to perform the same in one go.
**EDIT:**
If you can change your table design to something like below (No need of that extra chaining table)
```
ing_titles(ID_Title int not null auto_increment PK, title varchar(64) not null);
ing_categories( ID_Category int not null auto_increment PK,
category varchar(64) not null,
ing_titles_ID_Title int not null,
FOREIGN KEY (ing_titles_ID_Title)
REFERENCES ing_titles(ID_Title));
```
Then you can use an `AFTER INSERT` trigger and do the insertion like:
```
DELIMITER //
CREATE TRIGGER ing_titles_after_insert
AFTER INSERT
ON ing_titles FOR EACH ROW
BEGIN
-- Insert record into ing_categories table
INSERT INTO ing_categories
( category,
ing_titles_ID_Title)
VALUES
   ('Meat', NEW.ID_Title);
END; //
DELIMITER ;
```
|
MySQL Insert from 2 source tables to one destination table
|
[
"",
"mysql",
"sql",
"insert",
"foreign-keys",
"left-join",
""
] |
I have 3 tables sc\_user, sc\_cube, sc\_cube\_sent
I want to join to a user query (sc\_user) one distinct random message/cube (from sc\_cube) that has not been sent to that user before (sc\_cube\_sent), so that each row in the result set has a distinct user id and a random cubeid from sc\_cube that is not paired with that user id in sc\_cube\_sent.
I am facing the problem that I seem unable to use a correlation identifier here, because I need the u.id of the outer query in the inner ON clause. I would need the commented section to make it work.
```
# get one random idcube per user not already sent to that user
SELECT u.id, sub.idcube
FROM sc_user as u
LEFT JOIN (
SELECT c.idcube, sent.idreceiver FROM sc_cube c
LEFT JOIN sc_cube_sent sent ON ( c.idcube = sent.idcube /* AND sent.idreceiver = u.id <-- "unknown column u.id in on clause" */ )
WHERE sent.idcube IS NULL
ORDER BY RAND()
LIMIT 1
) as sub
ON 1
```
I added a fiddle with some data : <http://sqlfiddle.com/#!9/7b0bc/1>
new cubeids ( sc\_cube ) that should show for user 1 are the following : 2150, 2151, 2152, 2153
Edit>>
I could do it with another subquery instead of a join, but that has a huge performance impact and is not feasible ( 30 secs+ on couple of thousand rows on each table with reasonably implemented keys ), so I am still looking for a way to use the solution with JOIN.
```
SELECT
u.id,
(SELECT sc_cube.idcube
FROM sc_cube
WHERE NOT EXISTS(
SELECT sc_cube.idcube FROM sc_cube_sent WHERE sc_cube_sent.idcube = sc_cube.idcube AND sc_cube_sent.idreceiver = u.id
)
ORDER BY RAND() LIMIT 0,1
) as idcube
FROM sc_user u
```
|
Got it working with a NOT IN subselect in the ON clause. While the correlation link u.id is not available within the LEFT JOIN subquery, it is available in the scope of the ON clause. Here is how it works:
```
SELECT u.id, sub.idcube
FROM sc_user as u
LEFT JOIN (
SELECT idcube FROM sc_cube c ORDER BY RAND()
) sub ON (
sub.idcube NOT IN (
SELECT s.idcube FROM sc_cube_sent s WHERE s.idreceiver = u.id
)
)
GROUP BY u.id
```
Fiddle : <http://sqlfiddle.com/#!9/7b0bc/48>
|
Without being able to test this, I would say you need to include sc\_user in the subquery because you have lost the scope:
```
LEFT JOIN
( SELECT c.idcube, sent.idreceiver
FROM sc_user u
JOIN sc_cube c ON c.whatever_your_join_column_is = u.whatever_your_join_column_is
LEFT JOIN sc_cube_sent sent ON ( c.idcube = sent.idcube AND sent.idreceiver = u.id )
WHERE sent.idcube IS NULL
ORDER BY RAND()
LIMIT 1
) sub
```
|
Nested Join in Subquery and failing correlation
|
[
"",
"mysql",
"sql",
"join",
"subquery",
""
] |
I am successfully able to connect to the database, but the problem is when it updates the table field: it updates every record in the table instead of searching for the ID number and only updating that specific Time\_out field. Here is the code below; I must be missing something, hopefully something simple I have overlooked.
```
Sub UpdateAccessDatabase()
Dim accApp As Object
Dim SQL As String
Dim id
id = frm2.lb.List(txt)
SQL = "UPDATE [Table3] SET [Table3].Time_out = " & "Now()" & " WHERE "
SQL = SQL & "((([Table3].ID)=id));"
Set accApp = CreateObject("Access.Application")
With accApp
.OpenCurrentDatabase "C:\Signin-Database\DATABASE\Visitor_Info.accdb"
.DoCmd.RunSQL SQL
.Quit
End With
Set accApp = Nothing
End Sub
```
|
In case the `id` is an integer/long, you should modify the query as follows:
```
SQL = "UPDATE [Table3] SET [Table3].Time_out=#" & Now() & "# WHERE [Table3].ID=" & id
```
In case the `id` is text, you should modify the query as follows:
```
SQL = "UPDATE [Table3] SET [Table3].Time_out=#" & Now() & "# WHERE [Table3].ID='" & id & "'"
```
Hope this may help.
|
Here is the working Code below:
```
Sub UpdateAccessDatabase()
Dim accApp As Object
Dim SQL As String
Dim id
id = frm2.lb.List(txt)
'*** modified lines below ***
SQL = "UPDATE [Table3] SET [Table3].Time_out = " & "Now()" & " WHERE "
SQL = SQL & "((([Table3].ID)=" & id & "));"
Set accApp = CreateObject("Access.Application")
With accApp
.OpenCurrentDatabase "C:\Signin-Database\DATABASE\Visitor_Info.accdb"
.DoCmd.RunSQL SQL
.Quit
End With
Set accApp = Nothing
End Sub
```
|
Update Microsoft Access Table Field from Excel App Via VBA/SQL
|
[
"",
"sql",
"excel",
"vba",
"ms-access",
"sql-update",
""
] |
I have a table with a bunch of customer IDs. A customer table also contains these IDs, but each ID can be on multiple records for the same customer. I want to select the most recently used record, which I can get by doing `order by <my_field> desc`.
Say I have 100 customer IDs in this table, and in the customers table there are 120 records with these IDs (some are duplicates). How can I apply my `order by` condition to only get the most recent matching records?
The DBMS is SQL Server 2000.
The table is basically like this:
loc\_nbr and cust\_nbr are primary keys.
A customer shops at location 1. They get assigned loc\_nbr = 1 and cust\_nbr = 1,
then a customer\_id of 1.
They shop again, but this time at location 2, so they get assigned loc\_nbr = 2 and cust\_nbr = 1, then the same customer\_id of 1 based on their other attributes like name and address.
Because they shopped at location 2 AFTER location 1, it will have a more recent rec\_alt\_ts value, which is the record I would want to retrieve.
|
You want to use the [ROW\_NUMBER()](https://msdn.microsoft.com/en-us/library/ms186734.aspx) function with a Common Table Expression (CTE).
Here's a basic example. You should be able to use a similar query with your data.
```
;WITH TheLatest AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY group-by-fields ORDER BY sorting-fields) AS ItemCount
FROM TheTable
)
SELECT *
FROM TheLatest
WHERE ItemCount = 1
```
**UPDATE:** I just noticed that this was tagged with sql-server-2000. This will only work on SQL Server 2005 and later.
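For engines that do support window functions, the pattern can be sketched with SQLite (3.25+) and made-up rows based on the question's column descriptions (the column names are assumptions, not the asker's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customers (
    cust_nbr INTEGER, loc_nbr INTEGER, rec_alt_ts TEXT)""")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (1, 1, "2015-01-01"),
    (1, 2, "2015-03-01"),  # customer 1 shopped at location 2 later
    (2, 1, "2015-02-01"),
])

# Row number 1 within each customer, ordered by timestamp descending,
# is that customer's most recent record.
rows = conn.execute("""
    WITH TheLatest AS (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY cust_nbr
                                     ORDER BY rec_alt_ts DESC) AS ItemCount
        FROM customers)
    SELECT cust_nbr, loc_nbr FROM TheLatest
    WHERE ItemCount = 1
    ORDER BY cust_nbr
""").fetchall()
print(rows)  # [(1, 2), (2, 1)]
```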
|
Use an aggregate function in the query to group by customer IDs:
```
SELECT cust_Nbr, MAX(rec_alt_ts) AS most_recent_transaction, other_fields
FROM tableName
GROUP BY cust_Nbr, other_fields
ORDER BY cust_Nbr DESC;
```
This assumes that `rec_alt_ts` increases every time, thus the max entry for that `cust_Nbr` would be the most recent entry.
|
select multiple records based on order by
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
"greatest-n-per-group",
""
] |
To start, take this snippet as an example:
```
SELECT *
FROM StatsVehicle
WHERE ((ReferenceMakeId = @referenceMakeId)
OR @referenceMakeId IS NULL)
```
This will fetch and filter the records if the variable @referenceMakeId is not null, and if it is null, will fetch all the records. In other words, it is taking the first one into consideration if @referenceMakeId is not null.
I would like to add a further restriction to this; how can I achieve it?
For instance
```
(ReferenceModelId = @referenceModeleId) OR
(
(ReferenceMakeId = @referenceMakeId) OR
(@referenceMakeId IS NULL)
)
```
If @referenceModelId is not null, it will only need to filter by ReferenceModelId, and ignore the other statements inside it. If I actually do this as such, it returns all the records. Is there anything that can be done to achieve such a thing?
|
Maybe something like this?
```
SELECT * FROM StatsVehicle WHERE
(
-- Removed the following, as it's not clear if this is beneficial
-- (@referenceModeleId IS NOT NULL) AND
(ReferenceModelId = @referenceModeleId)
) OR
(@referenceModeleId IS NULL AND
(
(ReferenceMakeId = @referenceMakeId) OR
(@referenceMakeId IS NULL)
)
)
```
|
This should do the trick.
```
SELECT * FROM StatsVehicle
WHERE ReferenceModelId = @referenceModeleId OR
(
@referenceModeleId IS NULL AND
(
@referenceMakeId IS NULL OR
ReferenceMakeId = @referenceMakeId
)
)
```
However, you should note that these types of queries (known as catch-all queries) tend to be less efficient than writing a separate query for each case.
This is because SQL Server caches the first query plan, which may not be optimal for other parameter values.
You might want to consider using the `OPTION (RECOMPILE)` [query hint](https://msdn.microsoft.com/en-us/library/ms181714.aspx), or breaking the stored procedure into pieces that each handle a specific condition (i.e. one select for null variables, one select for non-null).
For more information, read [this article](http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/).
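The optional-parameter pattern can be checked in miniature; here is a hedged sketch using Python's `sqlite3` with an invented `StatsVehicle` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE StatsVehicle (ReferenceModelId INT, ReferenceMakeId INT);
    INSERT INTO StatsVehicle VALUES (10, 1), (20, 1), (30, 2);
""")

def find_vehicles(model_id, make_id):
    # Model filter wins when given; otherwise fall back to the
    # (optional) make filter -- the catch-all pattern from the answer.
    return conn.execute("""
        SELECT ReferenceModelId, ReferenceMakeId FROM StatsVehicle
        WHERE ReferenceModelId = :model
           OR (:model IS NULL AND (:make IS NULL OR ReferenceMakeId = :make))
        ORDER BY ReferenceModelId
    """, {"model": model_id, "make": make_id}).fetchall()

print(find_vehicles(10, None))    # [(10, 1)] -- model filter only
print(find_vehicles(None, 1))     # [(10, 1), (20, 1)] -- make filter
print(find_vehicles(None, None))  # all three rows
```

Passing a non-null model ignores the make entirely; only when the model is null does the make condition come into play.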
|
Ignore other results if a resultset has been found
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
Here is a example table with 2 columns.
```
id | name
------------
1 | hello
2 | hello
3 | hello
4 | hello
5 | world
6 | world
7 | sam
8 | sam
9 | sam
10 | ball
11 | ball
12 | bat
13 | bat
14 | bat
15 | bat
16 | bat
```
In the above table here is the occurrence count
```
hello - 4
world - 2
sam - 3
ball - 2
bat - 5
```
How to write a query in psql, such that the output will be sorted from max occurrence of a particular name to min ? i.e like this
```
bat
bat
bat
bat
bat
hello
hello
hello
hello
sam
sam
sam
ball
ball
world
world
```
|
You can use a derived table (subquery) to get counts for all the names, and then `JOIN` it to the original table for sorting:
```
SELECT yt.id, yt.name
FROM your_table yt INNER JOIN
(
    SELECT COUNT(*) AS the_count, name
    FROM your_table
    GROUP BY name
) t
    ON yt.name = t.name
ORDER BY t.the_count DESC, yt.name DESC
```
|
Alternative solution using window function:
```
select name from table_name order by count(1) over (partition by name) desc, name;
```
This avoids scanning `table_name` twice, as Tim's solution does, and may perform better when `table_name` is large.
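Here is a runnable miniature of the window-function approach using Python's `sqlite3` (SQLite 3.25+), with a shortened version of the sample data:

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE names (id INT, name TEXT);
    INSERT INTO names VALUES
        (1,'hello'),(2,'hello'),(3,'hello'),(4,'hello'),
        (5,'world'),(6,'world'),
        (7,'sam'),(8,'sam'),(9,'sam');
""")
# Sort every row by how often its name occurs, most frequent first.
rows = [r[0] for r in conn.execute("""
    SELECT name FROM names
    ORDER BY COUNT(*) OVER (PARTITION BY name) DESC, name
""")]
print(rows)
# ['hello', 'hello', 'hello', 'hello', 'sam', 'sam', 'sam', 'world', 'world']
```

All four `hello` rows come first (count 4), then `sam` (count 3), then `world` (count 2), matching the desired output shape in the question.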
|
Table sorting based on the number of occurrences of a particular item
|
[
"",
"sql",
"postgresql",
"multiple-columns",
""
] |
How can I insert a date like this with a query? `2015-06-02T11:18:25.000`
I have tried this:
```
INSERT INTO TABLE (FIELD) VALUES (convert(datetime,'2015-06-02T11:18:25.000'))
```
But it returned:
```
Conversion failed when converting date and/or time from character string.
```
I tried also:
```
CONVERT(DATETIME, '2015-06-02T11:18:25.000', 126)
```
but it is not working:
```
Msg 241, Level 16, State 1, Line 1
Conversion failed when converting date and/or time from character string.
```
The entire query is:
```
INSERT INTO BOLLE_TEST_POPPER (QIDDIADE,QNUMBOLLA,QSELEZIONALE,QDATA,QORA,QPRIMAPESATA,QSECONDAPESATA,QIMP1,QIMP2,QIDCAUSALE,QIDCLIENTE,QIDDESTINAZIONE,QIDVETTORE,QIDSUBVETTORE,QIDCAMION,QORATRASITO,QNUMBOLLAINGRESSO,QDATABOLLAINGRESSO,QCOMMITTENTIDELTRASPORTO,QANNOTAZIONI,QANNOTAZIONIINBOLLA,QIDARTICOLO,QQANTITA,QIDAUTISTA,QNUMTESSERA,QNUMGETTONE,VALORETAB1,VALORETAB2,VALORETAB3,VALORETAB4,VALORETAB5,VALORETAB6,VALORETAB7,VALORETAB8,VALORETAB9,VALORETAB10,VALORETESTO1,VALORETESTO2,VALORETESTO3,VALORETESTO4,VALORETESTO5,VALORETESTO6,VALORETESTO7,VALORETESTO8,VALORETESTO9,VALORETESTO10) VALUES ('4','5234','-',
convert(datetime,'2015-06-02'),convert(datetime,'2015-06-02T11:18:25.000',126),'30020','20230','null','null','4','1','391','50','50','50','500',convert(datetime,'2015-06-02T11:14:06+02:00',126),'-','false','-','-','19','9790.00','1','BK994P','-','-','null','null','null','null','null','null','null','null','null','-','-','-','-','-','-','-','-','-','-');
```
What is wrong?
|
Try this:
```
INSERT INTO TABLE (FIELD) VALUES (CONVERT(DATETIME, '2015-06-02T11:18:25.000', 126))
```
126 relates to ISO8601, which is the format `yyyy-mm-ddThh:mi:ss.mmm`.
This is the same format as the string `'2015-06-02T11:18:25.000'`.
For more information, see [here](https://msdn.microsoft.com/en-us/library/ms187928.aspx).
For dates with a datetimeoffset (for example `'2015-06-02T11:14:06+02:00'` - note the `+02:00` at the end), you will have to do this:
```
CONVERT(DATETIME, CONVERT(DATETIMEOFFSET,'2015-06-02T11:14:06+02:00'), 127)
```
The fully fixed query should be:
```
INSERT INTO BOLLE_TEST_POPPER (QIDDIADE,QNUMBOLLA,QSELEZIONALE,QDATA,QORA,QPRIMAPESATA,QSECONDAPESATA,QIMP1,QIMP2,QIDCAUSALE,QIDCLIENTE,QIDDESTINAZIONE,QIDVETTORE,QIDSUBVETTORE,QIDCAMION,QORATRASITO,QNUMBOLLAINGRESSO,QDATABOLLAINGRESSO,QCOMMITTENTIDELTRASPORTO,QANNOTAZIONI,QANNOTAZIONIINBOLLA,QIDARTICOLO,QQANTITA,QIDAUTISTA,QNUMTESSERA,QNUMGETTONE,VALORETAB1,VALORETAB2,VALORETAB3,VALORETAB4,VALORETAB5,VALORETAB6,VALORETAB7,VALORETAB8,VALORETAB9,VALORETAB10,VALORETESTO1,VALORETESTO2,VALORETESTO3,VALORETESTO4,VALORETESTO5,VALORETESTO6,VALORETESTO7,VALORETESTO8,VALORETESTO9,VALORETESTO10) VALUES ('4','5234','-',
convert(datetime,'2015-06-02'),convert(datetime,'2015-06-02T11:18:25.000',126),'30020','20230','null','null','4','1','391','50','50','50','500',CONVERT(DATETIME, CONVERT(DATETIMEOFFSET,'2015-06-02T11:14:06+02:00'), 127),'-','false','-','-','19','9790.00','1','BK994P','-','-','null','null','null','null','null','null','null','null','null','-','-','-','-','-','-','-','-','-','-');
```
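As a side check outside SQL entirely, Python's standard-library `datetime.fromisoformat` accepts both forms of these strings, which makes it easy to confirm what the offset part means; this is just an illustration, not part of the T-SQL fix:

```python
from datetime import datetime

# Plain ISO 8601 with milliseconds, like the 126-style string.
plain = datetime.fromisoformat("2015-06-02T11:18:25.000")
print(plain)  # 2015-06-02 11:18:25

# Strings with a UTC offset parse to an "aware" datetime, mirroring
# the DATETIMEOFFSET intermediate step used in the T-SQL above.
aware = datetime.fromisoformat("2015-06-02T11:14:06+02:00")
print(aware.utcoffset())  # 2:00:00
```

The second string carries the `+02:00` offset as real timezone information, which is why it needs the `DATETIMEOFFSET` intermediate type rather than a direct `DATETIME` conversion.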
|
You need a format. In this case, 126:
```
INSERT INTO TABLE (FIELD)
VALUES (convert(datetime,'2015-06-02T11:18:25.000', 126))
```
The list is [here](https://msdn.microsoft.com/en-us/library/ms187928.aspx).
For strings with a time zone offset (the `+02:00` part), plain `DATETIME` cannot parse the offset directly, so convert through `DATETIMEOFFSET` first and fix your `values` clause:
```
('4','5234','-',
convert(datetime,'2015-06-02'),convert(datetime,'2015-06-02T11:18:25.000',126),'30020','20230','null','null','4','1','391','50','50','50','500',convert(datetime, convert(datetimeoffset,'2015-06-02T11:14:06+02:00')),'-','false','-','-','19','9790.00','1','BK994P','-','-','null','null','null','null','null','null','null','null','null','-','-','-','-','-','-','-','-','-','-');
```
|
Save datetime in sql table
|
[
"",
"sql",
"sql-server",
"datetime",
"datetime-format",
"sqldatetime",
""
] |
How would you create a function that returns a random number between two numbers?
example syntax
RandBetween(3,300)
|
How about:
* Use `RAND()`, which returns a value in the range [0, 1), so 1 itself is excluded.
* Multiply by 298, since the inclusive range 3 to 300 contains (300 - 3) + 1 = 298 values.
* Add 3 to offset the range.
* Cast to INT (truncating the fraction).
i.e.
```
SELECT CAST(RAND() * 298 + 3 AS INT)
```
[Fiddle](http://sqlfiddle.com/#!6/3d6954dd72e53b90/1)
(**Edit** Also see @ivo's answer for how to turn this into a user defined function)
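The scaling can be sanity-checked outside SQL; a small Python sketch of the same arithmetic (the `+ 1` mirrors the 298 = (300 - 3) + 1 in the expression above, and `int()` truncates like `CAST ... AS INT`):

```python
import random

def rand_between(lo, hi, r=None):
    """Map a uniform float in [0, 1) onto the inclusive range [lo, hi],
    the same scaling used in the T-SQL expression above."""
    if r is None:
        r = random.random()
    return int(r * (hi - lo + 1)) + lo

# Boundary checks: r == 0.0 hits the bottom, r just below 1.0 hits the top.
print(rand_between(3, 300, r=0.0))       # 3
print(rand_between(3, 300, r=0.999999))  # 300
assert all(3 <= rand_between(3, 300) <= 300 for _ in range(10_000))
```

Because the random value never reaches 1.0, the truncated result never exceeds `hi`, and a value of exactly 0.0 yields `lo`, so both endpoints are reachable.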
|
Based on StuartLC's solution, you could also use a stored procedure if you want to reuse the code more often:
```
CREATE PROCEDURE [dbo].[RANDBETWEEN]
@LowerBound int = 0 ,
@UpperBound int = 1 ,
@ret int OUT
AS
BEGIN
SET NOCOUNT ON;
    SELECT @ret = (CAST((RAND() * (@UpperBound - @LowerBound + 1)) + @LowerBound AS INT)); -- + 1 makes @UpperBound inclusive
RETURN ;
END;
```
And Call it like this :
```
DECLARE @tmp INT;
EXECUTE [dbo].[RANDBETWEEN] 0,10, @ret=@tmp OUT ;
SELECT @tmp
```
To create a function, you must first create a view, because user-defined functions are not allowed to call `RAND()` directly:
```
CREATE VIEW Get_RAND
AS
SELECT RAND() AS MyRAND
GO
```
Then you can create a function like this (accessing the view with the SELECT MyRand ... ) :
```
CREATE FUNCTION RANDBETWEEN(@LowerBound INT, @UpperBound INT)
RETURNS INT
AS
BEGIN
DECLARE @TMP FLOAT;
SELECT @TMP = (SELECT MyRAND FROM Get_RAND);
  RETURN CAST(@TMP * (@UpperBound - @LowerBound + 1) + @LowerBound AS INT); -- + 1 makes @UpperBound inclusive
END
GO
```
Finally, you can call your function like this:
```
SELECT [dbo].[RANDBETWEEN](1,10)
```
|
RANDBETWEEN for SQL Server 2012
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables:
```
locations
---------------------
| id INT, PK, AI |
| x INT |
| y INT |
---------------------
signals
---------------------
| id INT, PK, AI |
| location_id INT |
| mac INT |
| strength INT |
---------------------
```
One location can (and will) have a maximum of 4 signals, where a location is an X,Y point on an image and a signal is an access point in range of X,Y together with its signal strength.
Anyhow, I have a method to which I provide a `List` of up to 4 MAC addresses, and I need to find all `locations` that contain those 4 addresses.
Right now I'm doing this programmatically by:
1. Take top signal strength at `X,Y` and it's mac addresses
2. `SELECT * FROM signals WHERE mac = ScanResult.BSSID`
3. Create array of ID's from returned `signals.location_id`
4. In bulk select all locations if their ID is in the array along with all signals related to those locations
5. Complex loop in loop creates an array containing all locations that have in their relationship all the 4 mac addresses that I provided in the `List` and delete others.
This is incredibly messy and has redundant queries but since I'm not very good with SQL it was a patch that sort of worked.
Now I was wondering if I can do this using SQL alone and return `locations` that contain those 4 mac addresses.
I have:
```
SELECT locations.*, signals.* FROM locations INNER JOIN signals ON locations.id = signals.location_id;
```
I would be less confused if the relation were 1:1, but here each location has up to 4 signals. Is there a way to provide an "array" to the query and, from that `JOIN`, remove all `locations` that do not contain exactly these mac addresses?
|
Andomar's solution is correct based on the sample query. However, the data structure differs from the query, and the query doesn't really make sense. I think this is all that is necessary:
```
SELECT s.location_id
FROM signals s
WHERE s.MAC in (1, 2, 3, 4)
GROUP BY s.location_id
HAVING COUNT(DISTINCT s.MAC) = 4;
```
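This "return groups that contain all N values" pattern (relational division) is easy to verify; a hedged sketch using Python's `sqlite3` with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE signals (location_id INT, mac INT);
    INSERT INTO signals VALUES
        (1, 1), (1, 2), (1, 3), (1, 4),   -- location 1 has all four MACs
        (2, 1), (2, 2),                   -- location 2 has only two
        (3, 1), (3, 2), (3, 3), (3, 4);
""")
rows = conn.execute("""
    SELECT location_id FROM signals
    WHERE mac IN (1, 2, 3, 4)
    GROUP BY location_id
    HAVING COUNT(DISTINCT mac) = 4
    ORDER BY location_id
""").fetchall()
print(rows)  # [(1,), (3,)] -- only locations seeing all four MACs
```

Location 2 is filtered out by the `HAVING` clause because it only matches two of the four requested MAC addresses.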
|
You can use a `having` clause to ensure a location has all four MAC addresses:
```
SELECT l.id
, l.x
, l.y
FROM locations l
JOIN signals s
    ON s.location_id = l.id
WHERE s.MAC in (1, 2, 3, 4)
GROUP BY
l.id
, l.x
, l.y
HAVING COUNT(DISTINCT s.MAC) = 4
```
|
MySQL Exclude results from JOIN
|
[
"",
"mysql",
"sql",
""
] |
Forgive the length, this is quite hard to explain!
I've got two tables:
```
t_people
----------------------------------------
| ID | surname | forenames |
----------------------------------------
| 1 | Baggins | Frodo |
----------------------------------------
| 2 | Took | Peregrin |
----------------------------------------
| 3 | Baggins | Bilbo |
----------------------------------------
t_courseResults
--------------------------
| personID | result |
--------------------------
| 1 | 98.0% |
--------------------------
| 2 | 14.0% |
--------------------------
| 3 | 56.0% |
--------------------------
```
I then execute the following statement:
```
SELECT result FROM t_courseResults WHERE personID IN (SELECT ID FROM t_people WHERE surname like 'B%')
```
...returning all course results for all people with a surname beginning with 'B' (2 rows).
Now let's say I've got a third table which stores a static list of people (e.g. a list of people in a particular department)
```
t_staticList
-------------
| personID |
-------------
| 2 |
-------------
| 3 |
-------------
```
I would then want to execute something like this:
```
SELECT result FROM t_courseResults WHERE (SELECT personID FROM t_staticList) IN (SELECT ID FROM t_people WHERE surname like 'B%')
```
Which is obviously garbage, but is something like this possible or am I being stupid? The idea would be that this would return just 1 row - Bilbo Baggins - as his name begins with 'B' and he is in the t\_staticList table. Thanks!
|
You mean like this?
```
SELECT result FROM t_courseResults WHERE PersonId IN (
SELECT p.ID FROM t_staticList AS s
INNER JOIN t_people as p ON p.ID = s.PersonId
    WHERE p.surname like 'B%')
```
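A runnable miniature of this query, using Python's `sqlite3` and the sample data from the question (with `'B%'` for surnames beginning with B):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t_people (ID INT, surname TEXT, forenames TEXT);
    CREATE TABLE t_courseResults (personID INT, result TEXT);
    CREATE TABLE t_staticList (personID INT);
    INSERT INTO t_people VALUES
        (1,'Baggins','Frodo'), (2,'Took','Peregrin'), (3,'Baggins','Bilbo');
    INSERT INTO t_courseResults VALUES (1,'98.0%'), (2,'14.0%'), (3,'56.0%');
    INSERT INTO t_staticList VALUES (2), (3);
""")
rows = conn.execute("""
    SELECT result FROM t_courseResults WHERE personID IN (
        SELECT p.ID FROM t_staticList s
        JOIN t_people p ON p.ID = s.personID
        WHERE p.surname LIKE 'B%')
""").fetchall()
print(rows)  # [('56.0%',)] -- only Bilbo: surname starts with B AND on the list
```

Frodo is excluded because he is not on the static list, and Peregrin because his surname does not start with B, leaving exactly the one row the question asks for.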
|
Use an `INNER JOIN` between all three tables (the static list must also be linked to the course results):
```
SELECT A.result
FROM t_courseResults AS A
INNER JOIN t_staticList AS B ON B.personID = A.personID
INNER JOIN t_people AS C ON C.id = B.personID
WHERE C.surname like 'B%'
```
|
SQL SELECT WHERE subquery (multiple rows) = subquery (multiple rows)
|
[
"",
"sql",
"sql-server",
""
] |
How can I get the max by column 1, then by column 2, in SQL? And is there an aggregate function like first or last?
I tried 2 ways:
1 - This gets me the max date, but the max price is not associated with the max date:
```
SELECT doc_kits_t.DateOper, kits_t.DocId, kits_t.GoodId,MAX(price_list_t._Date), MAX (price_list_t.Price)
FROM [AcKits] kits_t
INNER JOIN [DocAcKits] doc_kits_t
ON kits_t.DocId = doc_kits_t.Id
LEFT OUTER JOIN [PriceList] price_list_t
ON price_list_t._Date <= doc_kits_t.DateOper
AND kits_t.GoodId = price_list_t.AcGoodsId
GROUP BY kits_t.DocId,kits_t.GoodId,doc_kits_t.DateOper
```
2 - I have not found an aggregate function to get the first value:
```
SELECT doc_kits_t.DateOper, kits_t.DocId, kits_t.GoodId, /*top or first*/(price_list_t._Date),/*top or first*/ (price_list_t.Price)
FROM [AcKits] kits_t
INNER JOIN [DocAcKits] doc_kits_t
ON kits_t.DocId = doc_kits_t.Id
LEFT OUTER JOIN [PriceList] price_list_t
ON price_list_t._Date <= doc_kits_t.DateOper
AND kits_t.GoodId = price_list_t.AcGoodsId
GROUP BY kits_t.DocId,kits_t.GoodId,doc_kits_t.DateOper
ORDER BY price_list_t._Date, price_list_t.Price
```
The full explanation:
I have 3 table:
```
Table 1:
PriceList (Id, AcProviderId, AcGoodsId, Price, _Date)
Table 2:
DocAcKits (Id, Number, DateOper)
Table 3:
AcKits (Id, DocId, GoodId, Count)
```
at the result i should get table:
AS
```
(DateOpr , DocId, Number, TotalPrice )
```
The TotalPrice is the sum of the prices of the goods that have the same DocId,
where GoodId in AcKits links to AcGoodsId in PriceList. It must fulfill the condition `PriceList._Date <= DocAcKits.DateOper`, where the date is the max and, if more than one row has the max date, we take the max price.
Thanks for the help, the final solution is:
```
;WITH CTE AS
(
SELECT ROW_NUMBER()OVER(PARTITION BY kits_t.GoodId,doc_kits_t.DateOper ORDER BY price_list_t._Date DESC, price_list_t.Price DESC) rn,
doc_kits_t.DateOper as DateOper, kits_t.DocId as DocId,
ISNULL (price_list_t.Price,0) as Price,
kits_t.Count as Count,
doc_kits_t.Number as Number
FROM [AcKits] kits_t
INNER JOIN [DocAcKits] doc_kits_t
ON kits_t.DocId = doc_kits_t.Id
LEFT OUTER JOIN [PriceList] price_list_t
ON price_list_t._Date <= doc_kits_t.DateOper
AND kits_t.GoodId = price_list_t.AcGoodsId
WHERE doc_kits_t.DateOper BETWEEN @StartDate AND @EndDate
)
INSERT @ret
SELECT DateOper, Number, SUM(Price * Count)as TotalPrice FROM CTE t1
WHERE rn = 1
GROUP BY DocId,DateOper,Number
```
|
You can use `ROW_NUMBER()` with `PARTITION BY` to achieve this
```
;WITH CTE AS
(
SELECT ROW_NUMBER()OVER(PARTITION BY kits_t.DocId,kits_t.GoodId,doc_kits_t.DateOper ORDER BY price_list_t._Date DESC) rn,
doc_kits_t.DateOper, kits_t.DocId, kits_t.GoodId
FROM [AcKits] kits_t
INNER JOIN [DocAcKits] doc_kits_t
ON kits_t.DocId = doc_kits_t.Id
LEFT OUTER JOIN [PriceList] price_list_t
ON price_list_t._Date <= doc_kits_t.DateOper
AND kits_t.GoodId = price_list_t.AcGoodsId
)
SELECT * FROM CTE
WHERE rn = 1
```
***Note:***
If you need `MAX` record for `price_list_t._Date` use `ORDER BY price_list_t._Date DESC`
If you need `MIN` record for `price_list_t._Date` use `ORDER BY price_list_t._Date ASC`
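The tie-breaking behaviour (max date first, then max price among rows sharing that date) is worth verifying in miniature; a runnable sketch using Python's `sqlite3` (3.25+ for window functions) with invented rows:

```python
import sqlite3  # window functions require SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prices (good_id INT, price_date TEXT, price REAL);
    INSERT INTO prices VALUES
        (1, '2015-01-01', 5.0),
        (1, '2015-02-01', 7.0),   -- two rows tie on the max date...
        (1, '2015-02-01', 9.0);   -- ...so the higher price must win
""")
rows = conn.execute("""
    WITH ranked AS (
        SELECT good_id, price_date, price,
               ROW_NUMBER() OVER (
                   PARTITION BY good_id
                   ORDER BY price_date DESC, price DESC
               ) AS rn
        FROM prices
    )
    SELECT good_id, price_date, price FROM ranked WHERE rn = 1
""").fetchall()
print(rows)  # [(1, '2015-02-01', 9.0)]
```

Adding the second sort key (`price DESC`) inside the `OVER` clause is what resolves the date tie, exactly as in the accepted solution's `ORDER BY ... _Date DESC, ... Price DESC`.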
|
If you want to get all the records for max(price) do:
```
SELECT doc_kits_t.DateOper, kits_t.DocId, kits_t.GoodId,price_list_t._Date, price_list_t.Price
FROM [AcKits] kits_t
INNER JOIN [DocAcKits] doc_kits_t
ON kits_t.DocId = doc_kits_t.Id
LEFT OUTER JOIN [PriceList] price_list_t
ON price_list_t._Date <= doc_kits_t.DateOper
AND kits_t.GoodId = price_list_t.AcGoodsId
WHERE price_list_t.price = (
SELECT Max(price)
FROM price_list_t)
GROUP BY kits_t.DocId,kits_t.GoodId,doc_kits_t.DateOper
```
OR if you want to get by max date:
```
SELECT doc_kits_t.DateOper, kits_t.DocId, kits_t.GoodId,price_list_t._Date, price_list_t.Price
FROM [AcKits] kits_t
INNER JOIN [DocAcKits] doc_kits_t
ON kits_t.DocId = doc_kits_t.Id
LEFT OUTER JOIN [PriceList] price_list_t
ON price_list_t._Date <= doc_kits_t.DateOper
AND kits_t.GoodId = price_list_t.AcGoodsId
WHERE price_list_t.date= (
SELECT Max(date)
FROM price_list_t)
GROUP BY kits_t.DocId,kits_t.GoodId,doc_kits_t.DateOper
```
|
SQL max by 2 column, or first and last aggregate function
|
[
"",
"sql",
"sql-server",
""
] |
I have been trying to solve this problem now for days.
I have table called Stat with the following simplified structure and sample data:
```
Customer BankID AccNumb Type Date Amount AccType
Customer 1 Boa 5 Account Statement 2015-01-01 10000,00 Eur
Customer 1 CS 10 Account Statement 2015-04-04 22000,00 Eur
Customer 2 Sa 15 Account Statement 2015-03-13 3000,00 Eur
Customer 2 Sa 40 Account Statement 2015-04-24 1000,00 Eur
Customer 2 Sa 15 Sale Advice 2015-04-16 400,00 Eur
Customer 2 Sa 15 Account Statement 2015-12-24 50,00 Usd
Customer 2 Boa 20 Sale Advice 2015-05-15 6000,00 Eur
Customer 3 Cu 25 Account Statement 2015-11-27 81000,00 Eur
Customer 3 Cu 30 Sale Advice 2015-11-27 3000,00 Usd
Customer 3 Pop 30 Account Statement 2015-11-27 12000,00 Eur
```
What I'm trying to do is select the account number (AccNumb) with the latest date. A customer can also have different account numbers at various banks, so it should also be grouped by BankID and Customer.
I have come this far:
```
SELECT AccNumb, Customer, BankID,
(SELECT TOP 1 Amount FROM Stat
WHERE AccNumb = y.AccNumb AND Customer = y.Customer AND
BankID = y.BankID AND Type = 'Account Statement' AND
Date = MAX(y.Date) GROUP BY Amount) Amount
FROM Stat y
GROUP BY AccNumb, Customer, BankID
ORDER BY Customer, AccNumb
```
And it works fine; the problem is that I should also add the columns AccType and Date.
I managed to do this with 2 more subselects (the query takes long, but it works).
But now I have the problem that there are also NULL values in the Customer (or Date) column. The account numbers of these 'NULL' customers should still be displayed if they have the latest date. I also tried to do the same by joining the table to itself, but it didn't work out.
```
SELECT x.AccNumber, x.Customer, x.BankID, x.Date, y.Amount, y.AccType
FROM Stat y RIGHT JOIN
(SELECT AccNumber, Customer, BankID, MAX(Date) Date FROM Stat
GROUP BY AccNumber, Customer, BankID) x
ON x.AccNumber = y.AccNumber AND
x.Customer = y.Customer AND
x.BankID = y.BankID AND
x.Date = y.Date
ORDER BY y.Customer, y.AccNumber
```
But now the 'NULL' Customers only have NULL values in the Amount, Date and AccType Columns, which is not correct.
The output should be something like this
```
AccNumb Customer BankID Amount Date AccType
111111111 a Boa 1234.40 31.06.2014 Eur
222222222 NULL Boa 5678.40 31.04.2014 Eur
333333333 b Boa 0.00 25.02.2014 Eur
444444444 NULL Boa 9101.40 23.04.2015 Eur
555555555 NULL Boa 1213.40 31.02.2014 Usd
A66666666 c Sa NULL 31.02.2014 Eur
777777777 c Sa 1415.00 31.12.2014 Eur
888888888 c Boa 1617.40 31.12.2014 Usd
999999999 f Pop 5678.64 31.10.2014 Eur
```
Thanks in advance.
|
Just use `row_number()`, if I understand correctly:
```
select s.*
from (select s.*,
row_number() over (partition by customer, bankId order by date desc) as seqnum
from stat s
) s
where seqnum = 1;
```
|
Find the rows with the latest dates, i.e. return a row if no other row with the same AccNumb, BankID and Customer but a later Date exists:
```
select *
from stat s1
where not exists (select 1 from stat s2
                  where s1.AccNumb = s2.AccNumb
                    and s1.BankID = s2.BankID
                    and s1.Customer = s2.Customer
                    and s1.Date < s2.Date)
```
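One caveat: `s1.Customer = s2.Customer` never matches when `Customer` is NULL, so every NULL-customer row passes the `NOT EXISTS` test regardless of date. If NULL customers on the same account should be collapsed to their latest row, a NULL-safe comparison is needed; here is a hedged sketch using SQLite's `IS` operator (T-SQL would need `EXISTS` with `INTERSECT`, or an `ISNULL` wrapper) with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stat (AccNumb TEXT, Customer TEXT, BankID TEXT, Date TEXT);
    INSERT INTO stat VALUES
        ('222222222', NULL, 'Boa', '2014-04-30'),
        ('222222222', NULL, 'Boa', '2014-01-15');  -- older row, same NULL customer
""")
rows = conn.execute("""
    SELECT AccNumb, Date FROM stat s1
    WHERE NOT EXISTS (
        SELECT 1 FROM stat s2
        WHERE s1.AccNumb = s2.AccNumb
          AND s1.BankID = s2.BankID
          AND s1.Customer IS s2.Customer   -- NULL-safe, unlike '='
          AND s1.Date < s2.Date)
""").fetchall()
print(rows)  # [('222222222', '2014-04-30')] -- only the latest NULL-customer row
```

With a plain `=` comparison, both rows would have been returned, because `NULL = NULL` is unknown and the subquery would never find a "later" match.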
|
Selecting Account Numbers with latest date
|
[
"",
"sql",
"select",
"join",
"subquery",
""
] |
I have the following tables:
```
users
[id | name | type]
types
[id | title]
```
I need to setup a `has_one` association ('users'.type => 'types'.id). But, as I understand, when I write something like:
**user.rb**
```
has_one :type
```
it seems to create a relation between 'users'.id and 'types'.id. How do I specify exactly 'users'.type => 'types'.id has\_one association?
|
Finally found a solution:
**user.rb**
```
belongs_to :type
```
**type.rb**
```
has_many :users, foreign_key: "type_id"
```
|
Rename the `type` column of the `users` table to `type_id`.
user.rb
```
belongs_to :of_type, class_name: 'Type', foreign_key: 'type_id'
```
type.rb
```
has_many :users
```
In the Rails console, try something like:
```
u = User.last
u.of_type
```
The last line will give you the type record.
If you don't want to rename the column, then in user.rb you can write it as:
```
belongs_to :of_type, class_name: 'Type', foreign_key: 'type'
```
|
RAILS: has_one association based on non PK column
|
[
"",
"sql",
"ruby-on-rails",
"associations",
"models",
"has-one",
""
] |
Consider the table below and its records:
```
create table dbo.test
(
id numeric(4),
vals nvarchar(1000)
);
insert into dbo.test values (1,'1,2,3,4,5');
insert into dbo.test values (2,'6,7,8,9,0');
insert into dbo.test values (3,'11,54,76,23');
```
I am going to use the function below to split CSVs; you can use any method that helps with the `select` syntax.
```
CREATE FUNCTION [aml].[Split](@String varchar(8000), @Delimiter char(1))
returns @temptable TABLE (items varchar(8000))
as
begin
declare @idx int
declare @slice varchar(8000)
select @idx = 1
if len(@String)<1 or @String is null return
while @idx!= 0
begin
set @idx = charindex(@Delimiter,@String)
if @idx!=0
set @slice = left(@String,@idx - 1)
else
set @slice = @String
if(len(@slice)>0)
insert into @temptable(Items) values(@slice)
set @String = right(@String,len(@String) - @idx)
if len(@String) = 0 break
end
return
end
```
I want to select `id` along with the `max` and `min` values from `vals` for each record.
**Update**: Though I am writing the query on SQL Server 2008, I need to support SQL Server 2005 and above.
|
You can `CROSS APPLY` to the table projected by the function and then apply normal aggregation functions on each group of `id` :
```
SELECT t.id, MIN(CAST(x.items AS INT)) AS MinItem, MAX(CAST(x.items AS INT)) AS MaxItem
FROM dbo.test t
CROSS APPLY dbo.Split(t.vals, ',') x
GROUP BY t.id;
```
(**Edit** - since these appear to be integers, you'll want to cast before applying the `MIN / MAX` aggregates otherwise you'll get an alphanumeric sort)
[SqlFiddle example here](http://sqlfiddle.com/#!3/cfa8c/9)
Another option is to persist the comma separated list in a normalized table structure before applying queries over them - it isn't useful storing non-normalized data in an RDBMS :)
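The cast pitfall mentioned above is easy to see with plain Python string splitting (made-up rows; row 4 is added just to expose the alphabetic-vs-numeric difference):

```python
# Without a cast, MIN/MAX compare strings alphabetically, so '9' beats '10'.
# Splitting and casting in Python shows the same trap.
rows = [(1, '1,2,3,4,5'), (2, '6,7,8,9,0'), (3, '11,54,76,23'), (4, '9,10')]

for row_id, vals in rows:
    as_text = vals.split(',')
    as_ints = [int(v) for v in as_text]
    print(row_id, min(as_ints), max(as_ints))

# Row 4 demonstrates the pitfall: max('9', '10') == '9' alphabetically,
# but max(9, 10) == 10 numerically.
assert max('9', '10') == '9'
assert max(9, 10) == 10
```

This is exactly why the SQL answer wraps `x.items` in `CAST(... AS INT)` before applying `MIN` / `MAX`.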
|
Without function, just plain `sql`:
```
SELECT t.id,
Max(Split.a.value('.', 'VARCHAR(100)')) AS MaxVal,
Min(Split.a.value('.', 'VARCHAR(100)')) AS MinVal
FROM
(
SELECT id,
CAST ('<M>' + REPLACE(vals, ',', '</M><M>') + '</M>' AS XML) AS Data
FROM test
) AS t CROSS APPLY Data.nodes ('/M') AS Split(a)
group by t.id
```
Fiddle <http://sqlfiddle.com/#!3/22321/6>
|
Selecting Min/Max from Comma Separated Values against each record
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I have a stored procedure which takes around 2 minutes to execute.
I have used a few temp tables and a while loop in it, and I am not able to figure out the best way to increase the speed of the stored procedure.
My stored procedure is as follows:
```
ALter procedure _sp_Get_PatentAssignment_Mail_Test
(
@CompName nvarchar(max)='Canadian Spirit,Connexus Corporation'
)
As
Begin
Set NOCOUNT ON
Create table #temp
(
ID int identity(1,1),
AssigneeName nvarchar(100)
)
Create Table #tmpMainResult
(
Title nvarchar(1000),
Pat_PubNum varchar(30),
AssigneeName nvarchar(100)
)
IF(@CompName is NOT NULL AND @CompName<>'')
BEGIN
INSERT INTO #temp
SELECT * FROM dbo.uf_Split(@CompName,',')
END
DECLARE @MaxRownum INT
SET @MaxRownum = (SELECT MAX(ID) FROM #temp)
DECLARE @Iter INT
SET @Iter = (SELECT MIN(ID) FROM #temp)
WHILE @Iter <= @MaxRownum
BEGIN
Declare @AssigneeName nvarchar(100)
Set @AssigneeName= (SELECT AssigneeName
FROM #temp
WHERE ID = @Iter)
print @AssigneeName
Insert into #tmpMainResult
Select p.Title,
Case when p.PatentNum is null Or p.PatentNum=''
then
p.PublicationNum
else
p.PatentNum
end as 'Pat_PubName',
pa.AssigneeName
from Patent p
inner join PatentProperty pp on p.PatentId=pp.PatentId
inner join PatentAssignee pa on pp.PatentAssignmentID=pa.PatentAssignmentID
WHERE pa.AssigneeName like '%' +@AssigneeName+ '%'
SET @Iter = @Iter + 1
END
Select * from #tmpMainResult
drop table #temp
drop table #tmpMainResult
End
```
Please give some suggestions to reduce execution time.
Function used above is as follows:
```
ALTER FUNCTION [dbo].[uf_Split](@String varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (items varchar(MAX))
as
begin
declare @idx int
declare @slice varchar(8000)
select @idx = 1
if len(@String)<1 or @String is null return
while @idx!= 0
begin
set @idx = charindex(@Delimiter,@String)
if @idx!=0
set @slice = left(@String,@idx - 1)
else
set @slice = @String
if(len(@slice)>0)
insert into @temptable(Items) values(@slice)
set @String = right(@String,len(@String) - @idx)
if len(@String) = 0 break
end
return
end;
```
|
Since we don't have access to your execution plan, the mileage of recommendations may vary.
If you are using a split function, it's easy to get it wrong with a `WHILE` loop. Post the code for your split function, or alternatively use [DelimitedSplit8K](http://www.sqlservercentral.com/articles/Tally+Table/72993/) by Jeff Moden.
Once you have the split string in a temp table, you don't need a `WHILE` loop.
What you are doing is basically a `cross join` with a filter `pa.AssigneeName like '%' +@AssigneeName+ '%'`.
Change your Insert to something like this.
```
Insert into #tmpMainResult
Select p.Title,
Case when p.PatentNum is null Or p.PatentNum=''
then
p.PublicationNum
else
p.PatentNum
end as 'Pat_PubName',
pa.AssigneeName
from Patent p
inner join PatentProperty pp on p.PatentId=pp.PatentId
inner join PatentAssignee pa on pp.PatentAssignmentID=pa.PatentAssignmentID
CROSS JOIN #temp t
WHERE pa.AssigneeName like '%' + t.AssigneeName + '%'
```
Since you are filtering using `'%' + t.AssigneeName + '%'`, an index on `AssigneeName` might not help (the leading wildcard prevents an index seek).
Also check that you have appropriate indexes on `PatentId` and `PatentAssignmentID` on both tables.
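The set-based rewrite can be tried in miniature; a hedged sketch using Python's `sqlite3` (which concatenates with `||` where T-SQL uses `+`) and invented sample names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE assignees (AssigneeName TEXT);
    CREATE TABLE terms (term TEXT);
    INSERT INTO assignees VALUES
        ('Canadian Spirit Inc'), ('Connexus Corporation'), ('Other Co');
    INSERT INTO terms VALUES ('Canadian Spirit'), ('Connexus Corporation');
""")
# One cross join + LIKE filter replaces the whole WHILE loop:
rows = conn.execute("""
    SELECT a.AssigneeName FROM assignees a
    CROSS JOIN terms t
    WHERE a.AssigneeName LIKE '%' || t.term || '%'
    ORDER BY a.AssigneeName
""").fetchall()
print(rows)  # [('Canadian Spirit Inc',), ('Connexus Corporation',)]
```

Every assignee is checked against every search term in a single statement, which is the same work the loop was doing one term at a time.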
**Edit**
The split function by Jeff Moden `[dbo].[DelimitedSplit8K]`
```
CREATE FUNCTION [dbo].[DelimitedSplit8K]
/**********************************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Notes:
1. Leading a trailing delimiters are treated as if an empty string element were present.
2. Consecutive delimiters are treated as if an empty string element were present between them.
3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Statistics on this function may be found at the following URL:
http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx
CROSS APPLY Usage Examples and Tests:
--=====================================================================================================================
-- TEST 1:
-- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
-- laid out in the comments
--=====================================================================================================================
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- In the following comments, "b" is a blank and "E" is an element in the left to right order.
-- Double Quotes are used to encapsulate the output of "Item" so that you can see that all blanks
-- are preserved no matter where they may appear.
SELECT *
INTO #JBMTest
FROM ( --# & type of Return Row(s)
SELECT 0, NULL UNION ALL --1 NULL
SELECT 1, SPACE(0) UNION ALL --1 b (Empty String)
SELECT 2, SPACE(1) UNION ALL --1 b (1 space)
SELECT 3, SPACE(5) UNION ALL --1 b (5 spaces)
SELECT 4, ',' UNION ALL --2 b b (both are empty strings)
SELECT 5, '55555' UNION ALL --1 E
SELECT 6, ',55555' UNION ALL --2 b E
SELECT 7, ',55555,' UNION ALL --3 b E b
SELECT 8, '55555,' UNION ALL --2 b B
SELECT 9, '55555,1' UNION ALL --2 E E
SELECT 10, '1,55555' UNION ALL --2 E E
SELECT 11, '55555,4444,333,22,1' UNION ALL --5 E E E E E
SELECT 12, '55555,4444,,333,22,1' UNION ALL --6 E E b E E E
SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
SELECT 15, ' 4444,55555 ' UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
SELECT 16, 'This,is,a,test.' --E E E E
) d (SomeID, SomeValue)
;
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM #JBMTest test
CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
;
--=====================================================================================================================
-- TEST 2:
-- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
-- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
-- they are "control" characters. More specifically, this test will show you what happens to various non-accented
-- letters for your given collation depending on the delimiter you chose.
--=====================================================================================================================
WITH
cteBuildAllCharacters (String,Delimiter) AS
(
SELECT TOP 256
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
FROM master.sys.all_columns
)
SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM cteBuildAllCharacters c
CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
ORDER BY ASCII_Value, split.ItemNumber
;
-----------------------------------------------------------------------------------------------------------------------
Other Notes:
1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolvedexternally from this
function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use NVARCHAR(MAX) will cause it to run twice as slow. It's just the nature of
VARCHAR(MAX) whether it fits in-row or not.
7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows that the UNPIVOT method
is quite machine dependent and can slow things down quite a bit.
-----------------------------------------------------------------------------------------------------------------------
Credits:
This code is the product of many people's efforts including but not limited to the following:
cteTally concept originally by Iztek Ben Gan and "decimalized" by Lynn Pettis (and others) for a bit of extra speed
and finally redacted by Jeff Moden for a different slant on readability and compactness. Hat's off to Paul White for
his simple explanations of CROSS APPLY and for his detailed testing efforts. Last but not least, thanks to
Ron "BitBucket" McCullough and Wayne Sheffield for their extreme performance testing across multiple machines and
versions of SQL Server. The latest improvement brought an additional 15-20% improvement over Rev 05. Special thanks
to "Nadrek" and "peter-757102" (aka Peter de Heer) for bringing such improvements to light. Nadrek's original
improvement brought about a 10% performance gain and Peter followed that up with the content of Rev 07.
I also thank whoever wrote the first article I ever saw on "numbers tables" which is located at the following URL
and to Adam Machanic for leading me to it many years ago.
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
-----------------------------------------------------------------------------------------------------------------------
Revision History:
Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)
Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
bit of extra speed.
Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
documentation.
Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
type of function.
Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
function is used in an UPDATE statement even though the function makes no external references.
Rev 05 - 02 Apr 2011 - Jeff Moden
- Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
instance of one add and one instance of a subtract. The length calculation for the final element (not
followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
combination to determine when the CHARINDEX returned a 0 which indicates there are no more delimiters to be
had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
single CPU box than the original code especially near the 8K boundary.
- Modified comments to include more sanity checks on the usage example, etc.
- Removed "other" notes 8 and 9 as they were no longer applicable.
Rev 06 - 12 Apr 2011 - Jeff Moden
- Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.
Rev 07 - 06 May 2011 - Peter de Heer, a further 15-20% performance enhancement has been discovered and incorporated
into this code which also eliminated the need for a "zero" position in the cteTally table.
**********************************************************************************************************************/
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000...
     -- enough to cover VARCHAR(8000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
;
GO
```
|
Some advice:
1) You can avoid `DROP`ping your temporary tables; they are dropped automatically when the stored procedure finishes.
2) Define a `PRIMARY KEY` on your temporary tables.
|
How can I increase the performance of a Stored Procedure
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I'm trying to run a query on my music database.
The relevant part of my database looks like this:
```
CREATE TABLE ARTIST (
artistid integer(3) auto_increment not null,
name varchar(30) not null,
-- other columns
CONSTRAINT ALBUM_pk PRIMARY KEY (artistid)
);
CREATE TABLE SONG (
songid integer(5) auto_increment not null,
lineupid integer(5) not null,
title varchar(50) not null,
-- other columns
CONSTRAINT SONG_pk PRIMARY KEY (songid),
CONSTRAINT SONG_fk_ARTIST FOREIGN KEY (lineupid)
REFERENCES ARTISTLINEUP (artistlineupid),
CONSTRAINT SONG_fk_GENRE FOREIGN KEY (genreid)
REFERENCES GENRE (genreid)
);
CREATE TABLE ARTISTLINEUP (
artistlineupid integer(3) auto_increment not null,
lineupid integer(5) not null,
artistid integer(3) not null,
CONSTRAINT ARTISTLINEUP_pk PRIMARY KEY (artistlineupid),
CONSTRAINT ARTISTLINEUP_fk_ARTIST FOREIGN KEY (artistid)
REFERENCES ARTIST (artistid),
CONSTRAINT ARTISTLINEUP_fk_LINEUP FOREIGN KEY (lineupid)
REFERENCES LINEUP (lineupid)
);
```
I'm trying to get all the songs written by one artist, which has multiple lineups.
```
SELECT title
FROM song
WHERE lineupid = (
SELECT artistlineupid
FROM artistlineup
WHERE artistid = (
SELECT artistid
FROM artist
WHERE name = "Black Sabbath"));
```
However, it gives me the following error:
```
#1242 - Subquery returns more than 1 row
```
This is correct, because this artist has multiple lineups and therefore returns two numbers. Why can't it return the songs with both lineupid's? How can I get this to work?
Thanks
|
Although you can phrase the query in different way, if you want to compare multiple values, use `IN` not `=`:
```
SELECT title
FROM song
WHERE lineupid IN (
SELECT artistlineupid
FROM artistlineup
WHERE artistid IN (
SELECT artistid
FROM artist
WHERE name = "Black Sabbath"));
```
|
If artistid is unique with respect to name you can also use limit
```
SELECT title
FROM song
WHERE lineupid = (
SELECT artistlineupid
FROM artistlineup
WHERE artistid = (
SELECT artistid
FROM artist
WHERE name = "Black Sabbath" LIMIT 1));
```
|
Subquery returns more than 1 row, but I want it to
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
```
SELECT u.USER_NAME AS [User]
,COUNT(ad.ACTIVITY_ID) AS [Total Activities Late]
FROM USERS u
INNER JOIN ACTIVITY_DATA ad ON u.USER_ID = ad.USER_ID
WHERE ad.DAYS_LATE > 0
AND ad.COMPLETED_DATE IS NULL
GROUP BY u.USER_NAME
SELECT u.USER_NAME AS [User]
,COUNT(ad.ACTIVITY_ID) AS [Upto One Week Late]
FROM USERS u
INNER JOIN ACTIVITY_DATA ad ON u.USER_ID = ad.USER_ID
WHERE ad.DAYS_LATE BETWEEN 1 AND 7
AND ad.COMPLETED_DATE IS NULL
GROUP BY u.USER_NAME
```
Both of these selects work fine, however I would like to return the total number of activities late and the total number of activities upto one week late inside the same SELECT statement. Is this possible?
I am using SQL Server 2012.
|
You can, you just need to use a SUM and a CASE statement. Something like this should do it:
```
SELECT u.USER_NAME As [User]
, COUNT(ad.ACTIVITY_ID) As [Total Activities Late]
, SUM(CASE WHEN ad.DAYS_LATE BETWEEN 1 and 7 THEN 1 ELSE 0 END) As [Upto One Week Late]
FROM USERS u
JOIN ACTIVITY_DATA ad
ON u.USER_ID = ad.USER_ID
WHERE ad.DAYS_LATE > 0 AND ad.COMPLETED_DATE is NULL
GROUP BY u.USER_NAME
```
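As a quick check of the conditional-aggregation trick, here is the same pattern run against sqlite3 (table and column names are simplified, invented stand-ins for USERS/ACTIVITY\_DATA):

```python
import sqlite3

# Minimal sketch of COUNT(*) plus SUM(CASE ...) in one GROUP BY.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE activity (user_name TEXT, days_late INTEGER);
    INSERT INTO activity VALUES ('alice', 3), ('alice', 10), ('bob', 2);
""")
rows = con.execute("""
    SELECT user_name,
           COUNT(*) AS total_late,
           SUM(CASE WHEN days_late BETWEEN 1 AND 7 THEN 1 ELSE 0 END) AS week_late
    FROM activity
    WHERE days_late > 0
    GROUP BY user_name
    ORDER BY user_name
""").fetchall()
print(rows)   # [('alice', 2, 1), ('bob', 1, 1)]
```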
|
Try like this instead using `SUM()` function and specifying the condition using a `CASE` statement
```
SELECT u.USER_NAME As [User],
SUM(CASE WHEN ad.DAYS_LATE > 0 THEN 1 ELSE 0 END) As [Total Activities Late],
SUM(CASE WHEN ad.DAYS_LATE BETWEEN 1 and 7 THEN 1 ELSE 0 END) As [Upto One Week Late]
FROM USERS u
JOIN ACTIVITY_DATA ad ON u.USER_ID = ad.USER_ID
WHERE ad.COMPLETED_DATE is NULL
GROUP BY u.USER_NAME
```
|
SQL - specifying a condition in COUNT / returning a value from the same field more than once in one SELECT
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
If I have a table that includes:
```
user_id | event_time
```
How can I calculate the average days between events? To get something like:
```
days_diff | count
1 | 100
2 | 90
3 | 20
```
A user may have 1 day between two events, but may also have 3 days between two subsequent events. How can I count them in both buckets?
Sample data (note in this case the DAY DIFF is 0/1 but this is just a small subset of data)
```
user_id | event_time
82770 2015-05-04 02:34:53
1 2015-05-04 08:45:53
82770 2015-05-04 20:38:24
82770 2015-05-04 20:38:24
82770 2015-05-04 20:38:24
1 2015-05-05 09:31:42
82770 2015-05-05 13:33:36
82770 2015-05-05 13:33:53
1 2015-05-06 09:53:59
1 2015-05-06 23:31:18
1 2015-05-06 23:31:35
1 2015-05-07 12:31:41
82770 2015-05-07 16:01:16
```
|
Here's a solution without using a temporary table:
```
select daybetweenevents as days_diff,
count(daybetweenevents) as count
from (select t1.user_id,
t1.event_time,
                   datediff(min(t2.event_time), t1.event_time) as daybetweenevents
from yourtable t1
inner join yourtable t2
on t1.user_id = t2.user_id
and t1.event_time < t2.event_time
group by t1.user_id, t1.event_time) temp
group by daybetweenevents
```
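A sqlite3 sketch of the self-join idea: pair each event with the next event of the same user, truncate the gap to whole days, then bucket the gaps. (SQL Server's `DATEDIFF(day, ...)` counts midnight boundaries; truncation via `julianday` is a close stand-in here, and the data is a small invented subset.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event_time TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?)", [
    (1, '2015-05-04 08:45:53'),
    (1, '2015-05-05 09:31:42'),
    (1, '2015-05-07 12:31:41'),
])
gaps = con.execute("""
    SELECT days_diff, COUNT(*) AS cnt FROM (
        SELECT CAST(julianday(MIN(t2.event_time))
                    - julianday(t1.event_time) AS INTEGER) AS days_diff
        FROM events t1
        JOIN events t2
          ON t1.user_id = t2.user_id AND t1.event_time < t2.event_time
        GROUP BY t1.user_id, t1.event_time
    ) AS g
    GROUP BY days_diff
    ORDER BY days_diff
""").fetchall()
print(gaps)   # [(1, 1), (2, 1)]
```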
|
Use `DATEDIFF` and a correlated sub query to get previous date.
```
SELECT user_id, event_time,
       DATEDIFF(event_time,
                (SELECT MAX(event_time)
                 FROM yourtable
                 WHERE user_id = a.user_id
                   AND event_time < a.event_time)) AS days_diff
FROM yourtable AS a
```
|
Calculate average days between events?
|
[
"",
"mysql",
"sql",
""
] |
So... I want to be able to insert the results of a query in a table.
```
SELECT top 30 dt ,
count(*) AS NumberOfOrders
FROM shoptable
WHERE (name LIKE '%shop%')
GROUP by dt ORDER by dt
```
This query successfully returns the number of orders per each day from previous days.
Now I want to insert this into a table called shop\_stats and this is my query:
```
INSERT INTO shop_stats
(dt, NumberOfOrders)
(SELECT top 30 dt ,
count(*) AS NumberOfOrders
FROM shoptable
WHERE (name LIKE '%shop%')
GROUP by dt ORDER by dt)
```
I get the following error:
> Msg 156, Level 15, State 1, Line 1
> Incorrect syntax near the keyword 'order'.
If I remove the order by clause .... I will have 30 random days with orders.. not the last 30 as I need.
Does anybody know how to solve this in a relatively easy way? (i.e. no additional scripts that would remove the unnecessary data etc. - I just need a single SQL query).
|
There's no intrinsic ordering of rows within the table - any specific ordering is supposed to be done when you're getting the data out, not when you're putting them in.
So, just remove the `order by` clause, insert them, and then order them again when you select them.
That, said, if you absolutely, positively, need to insert them in a specified order (as in your case), you can use a subquery with `top` clause
```
insert into shop_stats (dt, NumberOfOrders)
select *
from (select top 30 dt, count(*) as NumberOfOrders
from shoptable
where (name like '%shop%')
group by dt
      order by dt) t
```
|
Although there is no implicit ordering of rows in a table, there is an unusual extension for `order by` in insert. If you do:
```
INSERT INTO shop_stats(dt, NumberOfOrders)
SELECT top 30 dt, count(*) AS NumberOfOrders
FROM shoptable
WHERE name LIKE '%shop%'
GROUP by dt
ORDER by dt;
```
And `shoptable` has an `identity`, `primary key` column, then the identity values assigned to the row are guaranteed to be in the order specified. The rows may not be inserted in the table in that order, but the identity column respects the ordering.
This is hard to find in the documentation. Unfortunately, almost all references in the documentation explain how this does not work with `select into`. Here is one [reference](http://blogs.msdn.com/b/sqltips/archive/2005/07/20/441053.aspx).
|
Incorrect syntax near 'order' in insert query
|
[
"",
"sql",
"sql-server",
""
] |
What if you just have a time field that is stored as a `char` and displays in military time, example (1015), and you just want to format it to standard time using sql code. Is there a way to do that?
**Update:**
To clarify, I am defining "standard time" as meaning 6:30 PM or 4:30AM.
|
The first step is to insert a colon into your military time:
```
DECLARE @Time CHAR(4) = '1015';
SELECT STUFF(@Time, 3, 0, ':');
```
Which gives `10:15`
Then you can convert it to the Time data type:
```
DECLARE @Time CHAR(4) = '1015';
SELECT CONVERT(TIME, STUFF(@Time, 3, 0, ':'));
```
Which gives `10:15:00.0000000`
Finally you can convert back to varchar, with the style 100 to get it in a 12 hour format:
```
DECLARE @Time CHAR(4) = '1015';
SELECT CONVERT(VARCHAR(7), CONVERT(TIME, STUFF(@Time, 3, 0, ':')), 100);
```
Which gives `10:15AM`
Or
```
DECLARE @Time CHAR(4) = '2157';
SELECT CONVERT(VARCHAR(7), CONVERT(TIME, STUFF(@Time, 3, 0, ':')), 100);
```
Which gives `9:57PM`
The standard way to deal with this though would be to store the time using the `TIME` datatype, and do all formatting in your presentation layer.
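If the formatting ends up in application code instead, the same conversion is a one-liner in most languages; a Python sketch (the zero-stripping mirrors style 100's single-digit hours):

```python
from datetime import datetime

def military_to_standard(hhmm: str) -> str:
    # strptime/strftime do the 24h -> 12h work that STUFF + CONVERT do above.
    t = datetime.strptime(hhmm, "%H%M")
    # Strip the leading zero so '2157' renders as '9:57PM', like style 100.
    return t.strftime("%I:%M%p").lstrip("0")

print(military_to_standard("1015"))   # 10:15AM
print(military_to_standard("2157"))   # 9:57PM
```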
|
Converting Military DateTime to Standard DateTIme
```
Select Convert(VarChar, Convert(Datetime,'01/12/2024 13:01:7',101),100)
```
Result:
```
Jan 12 2024 1:01PM
```
|
Convert time from Military to standard AM/PM Microsoft SQL
|
[
"",
"sql",
"sql-server",
"time-format",
""
] |
If you have something like below
```
SELECT column1
FROM table1
WHERE id IN (SELECT column2 FROM table2 WHERE id2 = 'x')
```
I have a scenario where the IN clause's subquery may return no data, in which case I should get all the rows back, as though the IN clause had never been written. I cannot do joins either.
|
Have you tried:
```
select column1 from table1
where id in (select column2 from table2 where id2 = 'x')
OR IF NOT EXISTS (select column2 from table2 where id2 = 'x')
```
This will use the ID in clause if rows returned, but ignore it if 0 rows.
Caveat: I don't know the performance implications of larger data sets and how the query optimizer would execute this.
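A quick sqlite3 sketch confirms the behavior of the `IN ... OR NOT EXISTS` pattern (table names from the question, sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (id INTEGER, column1 TEXT);
    CREATE TABLE table2 (column2 INTEGER, id2 TEXT);
    INSERT INTO table1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO table2 VALUES (1, 'x');
""")
q = """
    SELECT column1 FROM table1
    WHERE id IN (SELECT column2 FROM table2 WHERE id2 = ?)
       OR NOT EXISTS (SELECT 1 FROM table2 WHERE id2 = ?)
    ORDER BY id
"""
matched = con.execute(q, ('x', 'x')).fetchall()   # subquery has rows -> filter applies
everything = con.execute(q, ('z', 'z')).fetchall()  # subquery empty -> all rows
print(matched, everything)   # [('a',)] [('a',), ('b',)]
```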
|
Try something like this, using a CTE (common table expression)
```
with filter as
(
select distinct
t2.column2 as id
from table2 t2
join table1 t1 on t1.id = t2.column2
and t2.id2 = 'x'
)
select t1.*
from table1 t1
left join filter f on f.id = t1.id
where f.id is not null -- matched the filter
OR 0 = ( select count(*) from filter ) -- filter is empty
```
|
Optional IN Clause
|
[
"",
"sql",
"t-sql",
""
] |
I want to know how can I get results between 101-150 rows from specific table, based on a condition, something like this:
```
SELECT * FROM Students
WHERE Student_Status = 'Cancelled';
```
There can be multiple student statuses so I want only the results between 101 - 150 for cancelled students.
|
Use `row_number` window function with appropriate ordering to rank rows:
```
select * from
(select *, row_number() over(order by StudentID) as rn
from Students where Student_Status = 'Cancelled') t
where rn between 101 and 150
```
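The same `row_number` windowing runs on any engine with window-function support; a sqlite3 sketch (needs sqlite 3.25+), where a 10-row table and the 4-6 window stand in for rows 101-150:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (student_id INTEGER, status TEXT)")
con.executemany("INSERT INTO students VALUES (?, 'Cancelled')",
                [(i,) for i in range(1, 11)])
rows = con.execute("""
    SELECT student_id FROM (
        SELECT student_id,
               ROW_NUMBER() OVER (ORDER BY student_id) AS rn
        FROM students
        WHERE status = 'Cancelled'
    ) t
    WHERE rn BETWEEN 4 AND 6
""").fetchall()
print(rows)   # [(4,), (5,), (6,)]
```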
|
Same answer as Giorgi really, but a different way to write it using a CTE; which I find easier to read and work with.
```
;WITH t1 AS
(
SELECT ROW_NUMBER() OVER(ORDER BY StudentID) AS RID, *
FROM Students
WHERE Student_Status='Cancelled'
)
SELECT *
FROM t1
WHERE RID BETWEEN 101 AND 150
```
|
sql query to get me the 3rd set of results, if each set holds 50 results
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm struggling with defining the SQL to find a list of values that are statistically close to each other. For example, let's say we have a table of prices, and I want to get all the prices that vary within $0.25 of each other.
**Prices:
1.00
1.25
2.00
4.00
4.50
4.75
5.00**
For the example above, this should return 1.00, 1.25, 4.50, 4.75, and 5.00 as they are within 0.25 of another value in the list.
I don't want to get a raw list and then process it in the code. It would be much more efficient for SQL server to do the work. Is this possible?
|
I might use a correlated subquery:
```
DECLARE @tbl TABLE (val DECIMAL (9,2))
INSERT INTO @tbl VALUES (1),(1.25),(2),(4),(4.5),(4.75),(5)
SELECT *
FROM @tbl a
WHERE EXISTS(SELECT 1
FROM @tbl b
WHERE b.val <> a.val
AND b.val BETWEEN a.val-.25 AND a.val+.25)
```
You could also work an ABS into this which might be more succinct but probably doesn't impact performance:
```
SELECT *
FROM @tbl a
WHERE EXISTS(SELECT 1
FROM @tbl b
WHERE b.val <> a.val
AND ABS(b.val - a.val) <= .25)
```
EDIT: I switched from FLOAT to DECIMAL, because that's a better-suited type for exact values like prices in SQL Server.
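The ABS/EXISTS variant can be checked against the question's sample prices in sqlite3; all values here are exactly representable in binary, so the 0.25 comparison is safe even with REAL columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prices (val REAL)")
con.executemany("INSERT INTO prices VALUES (?)",
                [(1.0,), (1.25,), (2.0,), (4.0,), (4.5,), (4.75,), (5.0,)])
rows = con.execute("""
    SELECT val FROM prices a
    WHERE EXISTS (SELECT 1 FROM prices b
                  WHERE b.val <> a.val
                    AND ABS(b.val - a.val) <= 0.25)
    ORDER BY val
""").fetchall()
print([r[0] for r in rows])   # [1.0, 1.25, 4.5, 4.75, 5.0]
```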
|
Try a join of the table with itself:
```
declare @Values table (value float)
insert into @Values values (1),(1.25),(2),(4),(4.5),(4.75),(5)
select distinct A.Value
from @values A
inner join @Values B
on abs(A.value - B.Value) <= 0.25
and A.Value <> B.Value
```
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/479)
OR in another approach, using `CROSS APPLY`
```
SELECT DISTINCT CASE WHEN N.n=1 THEN A.Value ELSE ant END
FROM @Values A
cross apply (select max(value) from @Values where Value < A.Value) B(ant)
CROSS APPLY(SELECT 1 UNION SELECT 2)N(n)
where abs(A.value - ant) <= 0.25
```
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/480)
And, if you are using SQL Server 2012+, you can use [LEAD](https://msdn.microsoft.com/pt-br/library/hh213125.aspx) function:
```
SELECT DISTINCT CASE WHEN N.n=1 THEN A.Value ELSE ant END
FROM (
SELECT Value,
LEAD(Value, 1,0) OVER (ORDER BY Value) AS Ant
FROM @Values
) A
CROSS APPLY(SELECT 1 UNION SELECT 2)N(n)
where abs(Ant - Value) <= 0.25
```
[SQL Fiddle](http://sqlfiddle.com/#!6/9eecb7/117)
|
SQL statement to find values close to each other
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a `table` named `Actor`, with only a column for `City` (CityId). I want to return the number of actors in a particular `State` (StateId). The catch however is that I have separate tables for `City`, `County`, and finally `State` (City has CountyId, County has StateId). How do I this in a T-SQL query?
I have a solution that involves nested Select statements, something like:
```
SELECT COUNT(1)
FROM Actor a
WHERE a.CityId IN
(SELECT CityId FROM City WHERE CountyId IN...)
```
...but is there a more efficient way to do this? Thanks
|
You can use this query to get your output:

```
SELECT COUNT(a.ActorId)
FROM Actor a
INNER JOIN City c ON a.cityId = c.cityId
INNER JOIN County cty ON c.countyId = cty.countyId
INNER JOIN State s ON cty.stateId = s.stateId
GROUP BY s.stateId
```
|
Use JOINS to query your data.
I am using INNER JOIN here.
Assuming that you have CountyId in your City table, you can do it the following way.
To restrict the count to one state, you apply one more INNER JOIN (or subquery) through the County table.
```
SELECT COUNT(1) FROM Actor a INNER JOIN
City b ON a.CityId = b.CityId
WHERE b.CountyId IN (...)
```
|
Nested 'Where'?
|
[
"",
"sql",
""
] |
I am currently trying to make a simple database, but every time I run the script it creates the database without creating any of the tables. Any ideas?
```
use master
IF EXISTS(select * from sys.databases where name = 'MyWebDB')
DROP DATABASE MyWebDB
CREATE DATABASE MyWebDB
DROP TABLE Users
CREATE TABLE Users
(
UserID int IDENTITY PRIMARY KEY,
EmailAddress varchar(100) NOT NULL,
FirstName varchar(50) NOT NULL,
LastName varchar(50) NOT NULL,
);
CREATE TABLE Products(
ProductID int IDENTITY PRIMARY KEY,
ProductName varchar(100) NOT NULL
);
DROP TABLE Downloads
CREATE TABLE Downloads(
DownloadID int IDENTITY PRIMARY KEY,
UserID int FOREIGN KEY REFERENCES Users(UserID),
DownloadDate datetime NOT NULL,
FileName varchar(100) NOT NULL,
ProductID int FOREIGN KEY REFERENCES Products(ProductID)
);
```
|
Get rid of your drop tables and make sure to change your database context by using the USE keyword and the database name. If you want to check for the existence of the table like you are doing with the database, query if exists from sys.tables.
```
USE master
GO
IF EXISTS(select * from sys.databases where name = 'MyWebDB')
DROP DATABASE MyWebDB
CREATE DATABASE MyWebDB
USE MyWebDB
GO
CREATE TABLE Users
(
UserID int IDENTITY PRIMARY KEY,
EmailAddress varchar(100) NOT NULL,
FirstName varchar(50) NOT NULL,
LastName varchar(50) NOT NULL,
);
CREATE TABLE Products(
ProductID int IDENTITY PRIMARY KEY,
ProductName varchar(100) NOT NULL
);
CREATE TABLE Downloads(
DownloadID int IDENTITY PRIMARY KEY,
UserID int FOREIGN KEY REFERENCES Users(UserID),
DownloadDate datetime NOT NULL,
FileName varchar(100) NOT NULL,
ProductID int FOREIGN KEY REFERENCES Products(ProductID)
);
```
|
1. You have `use master` so your tables are probably going there.
2. You're dropping the Users table before you even create it. (Since you dropped the db all tables are implicitly dropped).
3. You're missing some semicolons at the end of some lines.
|
SQL script not creating the tables
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I am trying to find the rows where `PilotID` has used the `shipmentNumber` more than once.
I have this so far.
```
select f_Shipment_ID
,f_date
,f_Pilot_ID
,f_Shipname
,f_SailedFrom
,f_SailedTo
,f_d_m
,f_Shipmentnumber
,f_NumberOfPilots
from t_shipment
where f_Pilot_ID < 10000
and f_NumberOfPilots=1
and f_Shipmentnumber in(select f_Shipmentnumber
from t_shipment
group by f_Shipmentnumber
Having count(*) >1)
```
|
Try something like this:
```
-- The CTE determines the f_Pilot_ID/f_Shipmentnumber combinations that appear more than once.
with DuplicateShipmentNumberCTE as
(
select
f_Pilot_ID,
f_Shipmentnumber
from
t_shipment
where
f_Pilot_ID < 10000 and
f_NumberOfPilots = 1
group by
f_Pilot_ID,
f_Shipmentnumber
having
count(1) > 1
)
select
Shipment.f_Shipment_ID,
Shipment.f_date,
Shipment.f_Pilot_ID,
Shipment.f_Shipname,
Shipment.f_SailedFrom,
Shipment.f_SailedTo,
Shipment.f_d_m,
Shipment.f_Shipmentnumber,
Shipment.f_NumberOfPilots
from
-- The join is used to restrict the result set to the shipments identified by the CTE.
t_shipment Shipment
inner join DuplicateShipmentNumberCTE CTE on
Shipment.f_Pilot_ID = CTE.f_Pilot_ID and
Shipment.f_Shipmentnumber = CTE.f_Shipmentnumber
where
f_NumberOfPilots = 1;
```
You can also do this with a subquery if you want to—or if you're using an old version of SQL Server that doesn't support [CTEs](https://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396)—but I find the CTE syntax to be more natural, if only because it enables you to read and understand the query from the top down, rather than from the inside out.
|
How about this
```
select f_Shipment_ID
,f_date
,f_Pilot_ID
,f_Shipname
,f_SailedFrom
,f_SailedTo
,f_d_m
,f_Shipmentnumber
,f_NumberOfPilots
from t_shipment
where f_Pilot_ID < 10000
and f_NumberOfPilots=1
and f_Pilot_ID IN (select f_Pilot_ID
from t_shipment
group by f_Pilot_ID, f_Shipmentnumber
Having count(*) >1)
```
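The essential point in both answers is grouping on the (pilot, shipment number) pair rather than on the shipment number alone, as the original query did. A sqlite3 sketch with simplified, invented column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shipment (pilot_id INTEGER, shipment_no INTEGER)")
con.executemany("INSERT INTO shipment VALUES (?, ?)",
                [(1, 100), (1, 100), (1, 200), (2, 100)])
rows = con.execute("""
    SELECT s.pilot_id, s.shipment_no
    FROM shipment s
    JOIN (SELECT pilot_id, shipment_no
          FROM shipment
          GROUP BY pilot_id, shipment_no
          HAVING COUNT(*) > 1) d
      ON s.pilot_id = d.pilot_id AND s.shipment_no = d.shipment_no
""").fetchall()
print(rows)   # only pilot 1's duplicated shipment 100 comes back, twice
```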
|
Looking for duplicates based on a few other columns
|
[
"",
"sql",
"sql-server",
""
] |
I cannot control these circumstances at the moment so please bear with me.
I pull email addresses from a field called *EMAIL\_O*, and sometimes they are completely valid (*somename@domain.com*) and other times they have a 12-character phone number appended at the front (*123-456-7890somename@domain.com*).
How can I, in MS Access, detect which type of field I am seeing and remove the phone number appropriately when pulling in this data? I cannot just take the `mid()` from the 13th character because if the email is valid, I'd be removing good characters.
So somehow I need to detect the presence of a number and then apply the `mid()`, or just take the full field if no number is present.
|
Use pattern matching to check whether *EMAIL\_O* starts with a phone number.
```
EMAIL_O = "123-456-7890somename@domain.com"
? EMAIL_O Like "###-###-####*"
True
EMAIL_O = "somename@domain.com"
? EMAIL_O Like "###-###-####*"
False
```
So you can use that strategy in an `IIf` expression. Apply `Mid` when *EMAIL\_O* matches the pattern. Otherwise, just return *EMAIL\_O* unaltered.
```
? IIf(EMAIL_O Like "###-###-####*", Mid(EMAIL_O, 13), EMAIL_O)
somename@domain.com
```
Those were examples copied from the Immediate window. If you want to use the same approach in a query ...
```
SELECT IIf(EMAIL_O Like "###-###-####*", Mid(EMAIL_O, 13), EMAIL_O) AS email
FROM YourTable;
```
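The same idea as the `Like "###-###-####*"` test, sketched in Python with a regular expression: strip a leading 12-character phone number only when one is actually present.

```python
import re

# Anchored pattern: three digits, dash, three digits, dash, four digits,
# matched only at the start of the string.
PHONE_PREFIX = re.compile(r"^\d{3}-\d{3}-\d{4}")

def clean_email(raw: str) -> str:
    return PHONE_PREFIX.sub("", raw)

print(clean_email("123-456-7890somename@domain.com"))  # somename@domain.com
print(clean_email("somename@domain.com"))              # somename@domain.com
```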
|
Using the Left and IsNumeric functions you can just check to see if the first character is numeric, something like that is below. (Untested code).
```
Public Function CheckEmail(strEmail As String)
If IsNumeric(Left(strEmail, 1)) Then
        strEmail = Right(strEmail, Len(strEmail) - 12)
CheckEmail = strEmail
Else
CheckEmail = strEmail
End If
End Function
```
|
How to detect a number and split in MS Access?
|
[
"",
"sql",
"string",
"ms-access",
""
] |
I need to get a list of all distinct users and count only those who are Active,
to get something like
```
ID | Count | Active
1 | 0 | False
2 | 1 | True
3 | 6 | True
```
But when I do
```
SELECT ID,COUNT(*)
FROM Users
WHERE Active='True'
GROUP BY ID
```
I get only those distinct IDs which are Active:
```
ID | Count | Active
2 | 1 | True
3 | 6 | True
```
How to write query to get what i want?
|
You need to move the condition into the aggregate:
```
SELECT ID, SUM(CASE WHEN Active='True' THEN 1 ELSE 0 END)
FROM Users
GROUP BY ID
```
|
You want to remove the `where` clause. If `active` only takes on the values `True` and `False` (and NULL), then this will work:
```
SELECT ID, COUNT(*) as cnt, MAX(Active)
FROM Users
GROUP BY ID;
```
If `Active` can take on other values, then you might want:
```
SELECT ID, COUNT(*) as cnt,
(CASE WHEN SUM(CASE WHEN Active = 'True' THEN 1 ELSE 0 END) > 0 THEN 'True' ELSE 'False' END) as Active
FROM Users
GROUP BY ID;
```
|
Get all distinct (by ID) users and count only those who are Active
|
[
"",
"sql",
"sql-server",
""
] |
I have data in tbl1:
```
item date amount
a 1 10
a 2 20
a 3 30
a 4 40
b 1 20
b 2 30
b 3 40
b 4 50
c 1 30
c 2 40
c 3 50
c 4 60
```
But i need like below
```
item 1 2 3 4
a 10 20 30 40
b 20 30 40 50
c 30 40 50 60
```
|
You can use `GROUP BY` with conditional aggregation using `CASE` like this.
**Sample Data**
```
DECLARE @tbl1 TABLE (item CHAR(1),date int, amount int)
INSERT INTO @tbl1 VALUES
('a', 1, 10),
('a', 2, 20),
('a', 3, 30),
('a', 4, 40),
('b', 1, 20),
('b', 2, 30),
('b', 3, 40),
('b', 4, 50),
('c', 1, 30),
('c', 2, 40),
('c', 3, 50),
('c', 4, 60);
```
**Query**
```
SELECT
item,
MAX(CASE WHEN date = 1 then amount END) as d1,
MAX(CASE WHEN date = 2 then amount END) as d2,
MAX(CASE WHEN date = 3 then amount END) as d3,
MAX(CASE WHEN date = 4 then amount END) as d4
FROM @tbl1
GROUP BY item
```
**Output**
```
item d1 d2 d3 d4
a 10 20 30 40
b 20 30 40 50
c 30 40 50 60
```
**Edit**
If you have an unknown number of dates, then use a `dynamic pivot` like this (note the sample data must be loaded into a temp table `#tbl1` rather than the table variable, because the dynamic SQL runs in its own scope).
```
DECLARE @s NVARCHAR(MAX)
SELECT @s = STUFF((SELECT DISTINCT ',' + quotename(CONVERT(VARCHAR(10),date),'[')
FROM #tbl1 for xml path('')),1,1,'')
SET @s = N'SELECT item,' + @s + ' FROM #tbl1 PIVOT(MAX(amount) FOR date in(' + @s + ')) as pvt'
print @s
EXEC sp_executeSQL @s
```
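The MAX(CASE ...) form shown first is also the most portable variant; it runs on engines with no PIVOT keyword at all. A sqlite3 sketch against the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl1 (item TEXT, date INTEGER, amount INTEGER)")
con.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)", [
    ('a', 1, 10), ('a', 2, 20), ('a', 3, 30), ('a', 4, 40),
    ('b', 1, 20), ('b', 2, 30), ('b', 3, 40), ('b', 4, 50),
])
rows = con.execute("""
    SELECT item,
           MAX(CASE WHEN date = 1 THEN amount END) AS d1,
           MAX(CASE WHEN date = 2 THEN amount END) AS d2,
           MAX(CASE WHEN date = 3 THEN amount END) AS d3,
           MAX(CASE WHEN date = 4 THEN amount END) AS d4
    FROM tbl1
    GROUP BY item
    ORDER BY item
""").fetchall()
print(rows)   # [('a', 10, 20, 30, 40), ('b', 20, 30, 40, 50)]
```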
|
Use `PIVOT`
```
SELECT item, [1], [2], [3], [4]
FROM tbl1 t
PIVOT (SUM(amount) FOR date IN ([1], [2], [3], [4])) p
SELECT item, [10], [20], [30], [40], [50], [60]
FROM tbl1 t
PIVOT (MAX(date) FOR amount IN ([10], [20], [30], [40], [50], [60])) p
```
OUTPUT:
```
item 1 2 3 4
a 10 20 30 40
b 20 30 40 50
c 30 40 50 60
```
SQL Fiddle: <http://sqlfiddle.com/#!3/f097d/12/0>
|
Query with rows into columns
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"pivot",
""
] |
I'd like to have AVG exclude 0s from a list of values that can contain them, except when 0 is the only value available (or at least there's just one row and its value is exactly 0).
At the moment I use
```
SELECT AVG (NULLIF(feature, 0))
from cluster_density_feat
where cluster_id=1;
```
but it returns NULL if the cluster\_id has only one row whose feature value is 0, or if every row for this cluster\_id has a zero feature value.
How can I achieve this?
|
`AVG()` uses all the values from the group, it cannot exclude the `0`s. You should exclude them using a `WHERE` clause.
```
SELECT IFNULL(AVG(feature), 0)
FROM cluster_density_feat
WHERE cluster_id = 1
AND feature <> 0
```
If all the values in the `feature` column are `0`, the `feature <> 0` condition produces an empty set of rows and `AVG()` returns `NULL`. The function `IFNULL()` returns its second argument when the first one is `NULL`.
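sqlite3 happens to share MySQL's `IFNULL()`, so the approach can be sketched and verified directly (table name from the question, sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cluster_density_feat (cluster_id INTEGER, feature REAL)")
con.executemany("INSERT INTO cluster_density_feat VALUES (?, ?)",
                [(1, 0.0), (1, 4.0), (1, 8.0), (2, 0.0)])
q = """SELECT IFNULL(AVG(feature), 0)
       FROM cluster_density_feat
       WHERE cluster_id = ? AND feature <> 0"""
avg_mixed = con.execute(q, (1,)).fetchone()[0]   # zeros excluded from the average
avg_zeros = con.execute(q, (2,)).fetchone()[0]   # only zeros -> IFNULL fallback
print(avg_mixed, avg_zeros)   # 6.0 0
```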
|
You can use a WHERE clause to exclude the 0 values, and if you want to get 0 if there are no other values than 0 use coalesce to get 0, COALESCE is a function that will either get the first value between the brackets if it's not NULL and will return the second value if the first one is NULL
```
SELECT COALESCE(AVG(feature),0)
from cluster_density_feat
where cluster_id=1
AND NOT feature = 0
```
|
AVG of Values except 0 (except 0 is only value) in SQL
|
[
"",
"mysql",
"sql",
""
] |
```
SELECT * FROM User
INNER JOIN BlockUser ON BlockUser.userId='1' and User.userId != BlockUser.blockID
```
I have two tables, a **Users** table and a **BlockUser** table.
I need to select all users from the User table except those which a particular user has blocked.
User table
```
userID userName nickName
1 abc abc
2 adc adc
3 dc dc
4 xyz xyz
5 qwe qwe
```
BlockUser
```
_Id userId blockID
1 1 2
2 1 3
```
Result
```
userID userName nickName
4 xyz xyz
5 qwe qwe
```
please help me correct my Sql query.
|
Try this:
```
DECLARE @blocker int = 1
SELECT * FROM [User]
WHERE [User].userID NOT IN (SELECT BlockUser.blockID FROM BlockUser WHERE BlockUser.blockID = [User].userID AND BlockUser.userID = @blocker)
AND userID <> @blocker
```
<http://sqlfiddle.com/#!3/a7012/14>
The error was also the SQL keyword which is the name of the table, change the name of the table user or use the brackets as shown.
There are also other solutions (from other users but deleted):
```
select u.*
from [User] u
left join BlockUser bu on bu.blockID = u.userID
and bu.userId = 1
where bu.userID is null
and u.userID <> 1;
select * from [user]
where userID <> 1
and userID not in
(select blockID from BlockUser
where userID = 1);
```
**EDIT 1:** fixed query
**EDIT 2:** nailed it - not :)
**EDIT 3:** accounting for removing blocker from result, using variable
**EDIT 4:** added other solutions
|
Your query will be as below (also excluding the blocking user himself, to match the expected result):
```
SELECT userID, userName, nickName
FROM Users
WHERE userID <> 1
  AND userID NOT IN (SELECT blockID FROM BlockUser WHERE userId = 1);
```
|
SQL JOIN- user selection which are not blocked by current user
|
[
"",
"sql",
"inner-join",
""
] |
I want to execute the following statement. Is it possible to select which column to update using the CASE ?
```
UPDATE TABVAR CASE
WHEN M BETWEEN 0 AND 6 THEN SET M0_TO_6 = M
WHEN M BETWEEN 7 AND 18 THEN SET M7_TO_18 = M
WHEN M BETWEEN 19 AND 54 THEN SET M19_TO_54 = M
WHEN M > 54 THEN SET MABOVE54 = M
END
```
|
You can't use a `case` expression like that, only to return an (l-)value. You could, however, emulate such a behavior with a `case` expression for each column:
```
UPDATE tabvar
SET
m0_to_6 = CASE WHEN m BETWEEN 0 AND 6 THEN m ELSE m0_to_6 END,
m7_to_18 = CASE WHEN m BETWEEN 7 AND 18 THEN m ELSE m7_to_18 END,
m19_to_54 = CASE WHEN m BETWEEN 19 AND 54 THEN m ELSE m19_to_54 END,
mabove54 = CASE WHEN m > 54 THEN m ELSE mabove54 END
```
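As a quick check of the per-column `CASE` technique, here is a minimal sketch using Python's `sqlite3` with hypothetical data (table and column names mirror the question):

```python
import sqlite3

# Each row's m value lands in exactly one bucket column; the others keep
# their existing (here NULL) value via the ELSE branch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tabvar(m INTEGER, m0_to_6 INTEGER, m7_to_18 INTEGER,
                    m19_to_54 INTEGER, mabove54 INTEGER);
INSERT INTO tabvar VALUES (4, NULL, NULL, NULL, NULL),
                          (25, NULL, NULL, NULL, NULL),
                          (60, NULL, NULL, NULL, NULL);
""")
conn.execute("""
    UPDATE tabvar SET
      m0_to_6   = CASE WHEN m BETWEEN 0 AND 6   THEN m ELSE m0_to_6   END,
      m7_to_18  = CASE WHEN m BETWEEN 7 AND 18  THEN m ELSE m7_to_18  END,
      m19_to_54 = CASE WHEN m BETWEEN 19 AND 54 THEN m ELSE m19_to_54 END,
      mabove54  = CASE WHEN m > 54              THEN m ELSE mabove54  END
""")
rows = conn.execute("SELECT * FROM tabvar ORDER BY m").fetchall()
print(rows)
# [(4, 4, None, None, None), (25, None, None, 25, None), (60, None, None, None, 60)]
```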
|
Not that way, but you could do basically the same thing like this:
```
UPDATE TABVAR
set
M0_TO_6 = CASE WHEN M BETWEEN 0 AND 6 THEN M else M0_TO_6 end,
M7_TO_18 = CASE WHEN M BETWEEN 7 AND 18 THEN M else M7_TO_18 END,
...
```
This way you're updating either the value M to the column, or the value that already exists in there.
|
using CASE to select column for SET in UPDATE statement IN SQL SERVER
|
[
"",
"sql",
"sql-server",
"sql-update",
"case",
""
] |
I have a table with a composite primary key (`ID`, `Date`) like below.
```
+------+------------+-------+
| ID | Date | Value |
+------+------------+-------+
| 1 | 1433419200 | 15 |
| 1 | 1433332800 | 23 |
| 1 | 1433246400 | 41 |
| 1 | 1433160000 | 55 |
| 1 | 1432900800 | 24 |
| 2 | 1433419200 | 52 |
| 2 | 1433332800 | 23 |
| 2 | 1433246400 | 39 |
| 2 | 1433160000 | 22 |
| 3 | 1433419200 | 11 |
| 3 | 1433246400 | 58 |
| ... | ... | ... |
+------+------------+-------+
```
There is also a separate index on the `Date` column. The table is of moderate size, currently ~600k rows and growing by ~2k every day.
I want to do a single SELECT query that returns the latest 3 records (ordered by `Date` timestamp) for each `ID`. For each given `ID`, the `Date` values are always unique, so no need to worry about ties for `Date` here.
I've tried a self-join approach, inspired by [this answer](https://stackoverflow.com/a/12114175/548240), but it took quite a few seconds to run and returned nothing:
```
SELECT p1.ID, p1.Date, p1.Value FROM MyTable AS p1
LEFT JOIN MyTable AS p2
ON p1.ID=p2.ID AND p1.Date<=p2.Date
GROUP BY p1.ID
HAVING COUNT(*)<=5
ORDER BY p1.ID, p1.Date DESC;
```
What would be a fast solution here?
|
You could look up the three most recent dates for each ID:
```
SELECT ID, Date, Value
FROM MyTable
WHERE Date IN (SELECT Date
FROM MyTable AS T2
WHERE T2.ID = MyTable.ID
ORDER BY Date DESC
LIMIT 3)
```
Alternatively, look up the third most recent date for each ID, and use it as a limit:
```
SELECT ID, Date, Value
FROM MyTable
WHERE Date >= IFNULL((SELECT Date
FROM MyTable AS T2
WHERE T2.ID = MyTable.ID
ORDER BY Date DESC
LIMIT 1 OFFSET 2),
0)
```
Both queries should get good performance from the primary key's index.
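Both lookups work unchanged in SQLite, which supports correlated subqueries with `ORDER BY ... LIMIT`. Here is a minimal sketch of the first query using Python's `sqlite3`, with small hypothetical timestamps standing in for the question's epoch values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable(ID INTEGER, Date INTEGER, Value INTEGER,
                PRIMARY KEY(ID, Date))""")
data = [(1, d, v) for d, v in [(10, 15), (20, 23), (30, 41), (40, 55), (50, 24)]] + \
       [(2, d, v) for d, v in [(10, 52), (20, 23), (30, 39), (40, 22)]]
conn.executemany("INSERT INTO MyTable VALUES (?,?,?)", data)

# Keep a row only if its Date is among the 3 most recent for the same ID.
rows = conn.execute("""
    SELECT ID, Date, Value FROM MyTable
    WHERE Date IN (SELECT Date FROM MyTable AS T2
                   WHERE T2.ID = MyTable.ID
                   ORDER BY Date DESC LIMIT 3)
    ORDER BY ID, Date DESC
""").fetchall()
print(rows)
# [(1, 50, 24), (1, 40, 55), (1, 30, 41), (2, 40, 22), (2, 30, 39), (2, 20, 23)]
```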
|
First, here is the correct query for the inequality method:
```
SELECT p1.ID, p1.Date, p1.Value
FROM MyTable p1 LEFT JOIN
MyTable AS p2
ON p1.ID = p2.ID AND p2.Date <= p1.Date
--------------------------^ fixed this condition
GROUP BY p1.ID, p1.Date, p1.Value
HAVING COUNT(*) <= 5
ORDER BY p1.ID, p1.Date DESC;
```
I'm not sure if there is a fast way to do this in SQLite. In most other databases, you can use the ANSI standard `row_number()` function. In MySQL, you can use variables. Both of these are difficult in SQLite. Your best solution may be to use a cursor.
The above can benefit from an index on `MyTable(Id, Date)`.
|
Select the latest 3 records for each ID in a table
|
[
"",
"sql",
"sqlite",
"group-by",
"sql-order-by",
"greatest-n-per-group",
""
] |
I'm trying to combine two tables, each has three columns and two of those are common. The `income` table has `Location`, `date` and `Amt_recd`, the `expense` table has `Location`, `date` and `Amt_spent`. I want to combine these two tables so there is one table with four columns, and any common dates are combined to one row. I've tried this:
```
select location, date, amt_recd, null as amt_spent
from income
union
select location, date, null as amt_recd, amt_spent
from expense
```
And it gets me just one step away, since it does not combine like dates into one row, it has two rows where `amt_recd` is `null` in one and `amt_spent` is `null` in the other. What is a better way to construct this so I get a result that is more condensed? I've tried various joins instead of union, and haven't been able to get the result I'm looking for.
|
I am basing this answer on this line from your original post:
> I want to combine these two tables so there is one table with four
> columns, and any common dates are combined to one row.
The second half of that suggests that you need not only a JOIN, but also to aggregate.
```
SELECT
COALESCE(i.location, e.location) AS Location
, COALESCE(i.[date], e.[date]) AS [date]
, SUM(i.amt_recd) AS amt_recd
, SUM(e.amt_spent) AS amt_spent
FROM income I
FULL OUTER JOIN expense e
on i.location = e.location
and i.date = e.date
GROUP BY
COALESCE(i.location, e.location)
, COALESCE(i.[date], e.[date])
```
This will give you one row per unique combination of location & date.
If you really need to further reduce the results to one row per date, you will need to specify rules for which location to show when there is more than one location on a given date.
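Note that some engines (e.g. MySQL, or SQLite before 3.39) lack `FULL OUTER JOIN`; the same result can be emulated with a `LEFT JOIN` plus a `UNION ALL` anti-join. A minimal sketch in Python's `sqlite3` with hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE income(location TEXT, date TEXT, amt_recd REAL);
CREATE TABLE expense(location TEXT, date TEXT, amt_spent REAL);
INSERT INTO income  VALUES ('A','2015-01-01',100),('A','2015-01-02',50);
INSERT INTO expense VALUES ('A','2015-01-01',30),('A','2015-01-03',20);
""")
# First branch: every income row, with its matching expense (if any).
# Second branch: expense rows that had no matching income row.
rows = conn.execute("""
    SELECT i.location, i.date, i.amt_recd, e.amt_spent
    FROM income i LEFT JOIN expense e
      ON i.location = e.location AND i.date = e.date
    UNION ALL
    SELECT e.location, e.date, NULL, e.amt_spent
    FROM expense e LEFT JOIN income i
      ON i.location = e.location AND i.date = e.date
    WHERE i.location IS NULL
    ORDER BY 2
""").fetchall()
print(rows)
# [('A', '2015-01-01', 100.0, 30.0), ('A', '2015-01-02', 50.0, None),
#  ('A', '2015-01-03', None, 20.0)]
```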
|
Assuming the combination of (location, date) is unique in each table, you are correct in stating that `union` is the wrong tool for the job, and looking into `join`s instead. Since you may have incomes without expenses or expenses without incomes, a `full outer join` is in order:
```
SELECT i.location, i.date, i.amt_recd, e.amt_spent
FROM income i
FULL OUTER JOIN expense e ON i.location = e.location AND i.date = e.date
```
|
Three columns in two tables to four columns in one table
|
[
"",
"sql",
"sql-server",
"select",
"join",
"union",
""
] |
I'm trying to analyze a funnel using event data in Redshift and have difficulties finding an efficient query to extract that data.
For example, in Redshift I have:
```
timestamp action user id
--------- ------ -------
2015-05-05 12:00 homepage 1
2015-05-05 12:01 product page 1
2015-05-05 12:02 homepage 2
2015-05-05 12:03 checkout 1
```
I would like to extract the funnel statistics. For example:
```
homepage_count product_page_count checkout_count
-------------- ------------------ --------------
100 50 25
```
Where `homepage_count` represents the distinct number of users who visited the homepage, `product_page_count` represents the distinct number of users who visited the product page **after** visiting the homepage, and `checkout_count` represents the number of users who checked out after visiting the homepage and the product page.
What would be the best query to achieve that with Amazon Redshift? Is it possible to do with a single query?
|
I think the best method might be to add flags to the data for the first visit of each type for each user and then use these for aggregation logic:
```
select sum(case when ts_homepage is not null then 1 else 0 end) as homepage_count,
sum(case when ts_productpage > ts_homepage then 1 else 0 end) as productpage_count,
       sum(case when ts_checkout > ts_productpage and ts_productpage > ts_homepage then 1 else 0 end) as checkout_count
from (select userid,
min(case when action = 'homepage' then timestamp end) as ts_homepage,
min(case when action = 'product page' then timestamp end) as ts_productpage,
min(case when action = 'checkout' then timestamp end) as ts_checkout
from table t
group by userid
) t
```
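The conditional-aggregation pattern is portable across engines; here is a minimal sketch in Python's `sqlite3` with the question's four sample events (timestamps simplified to clock strings for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events(ts TEXT, action TEXT, userid INTEGER);
INSERT INTO events VALUES
 ('12:00','homepage',1),('12:01','product page',1),
 ('12:02','homepage',2),('12:03','checkout',1);
""")
# Inner query: one row per user with the first time they hit each step.
# Outer query: count users whose step timestamps are in funnel order.
row = conn.execute("""
    SELECT SUM(ts_home IS NOT NULL),
           SUM(ts_prod > ts_home),
           SUM(ts_co > ts_prod AND ts_prod > ts_home)
    FROM (SELECT userid,
                 MIN(CASE WHEN action = 'homepage'     THEN ts END) AS ts_home,
                 MIN(CASE WHEN action = 'product page' THEN ts END) AS ts_prod,
                 MIN(CASE WHEN action = 'checkout'     THEN ts END) AS ts_co
          FROM events GROUP BY userid)
""").fetchone()
print(row)  # (2, 1, 1): 2 homepage users, 1 reached product page, 1 checked out
```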
|
The above answer is very much correct. I have modified it for people using AWS Mobile Analytics with Redshift.
```
select sum(case when ts_homepage is not null then 1 else 0 end) as homepage_count,
sum(case when ts_productpage > ts_homepage then 1 else 0 end) as productpage_count,
sum(case when ts_checkout > ts_productpage and ts_productpage > ts_homepage then 1 else 0 end) as checkout_count
from (select client_id,
min(case when event_type = 'App Launch' then event_timestamp end) as ts_homepage,
min(case when event_type = 'SignUp Success' then event_timestamp end) as ts_productpage,
min(case when event_type = 'Start Quiz' then event_timestamp end) as ts_checkout
from awsma.v_event
group by client_id
) ts;
```
|
Funnel query with Amazon Redshift / PostgreSQL
|
[
"",
"sql",
"analytics",
"amazon-redshift",
""
] |
I have a field that lists all language descriptions that a product has and the field can contain data like:
EN;FR;DE
It will always be a two letter language code followed by a semi colon.
I then have a stored procedure that looks for all products with a particular language code. Simply done by:
```
WHERE
ext.languages LIKE '%' + @language + '%'
```
The `@language` variable might just contain the letters EN, for example. When I want to find a product that has both French and English, I need to pass in 'FR,EN' for the language variable. I have a custom function in SQL that splits the language variable into rows, so I effectively have:
Row 1 - EN
Row 2 - FR
Now I need to check my `ext.languages` field to see whether both those values exist.
I have attempted to do:
```
INNER JOIN MyFunctionsDatabase.dbo.listSplit(@language) as var1
ON ext.language LIKE '%'+var1.StringLiteral+'%'
```
This only brings back products that contain either French or English; I need it to bring back products that contain both English and French.
Any advice would be greatly appreciated.
|
Try the script below. I wrote it for three languages, but it can be made generic:
```
Declare @Product AS Table(ProductID INT, [Language] Varchar(500))
Insert Into @Product Values(1,'EN;FR;DE'),(2,'EN'),(3,'EN;DE'),(4,'EN;FR')
SELECT * FROM
(
Select P.ProductID,L.Value
From @Product P
CROSS APPLY dbo.[udfSplit]([Language],';') L
) Product
PIVOT
(
Count(Value)
For Value in (EN,FR,DE)
)
AS PV
Where EN=1 AND FR=1
```
|
I would make your parameter a multi-select and have each individual language be a selection. You could even feed the parameter with values from the database so it would automatically update if there is a new language. I'm going to call this parameter `@LangMultiSelect`
Since you only want items that match all of the selections, you need to pass in a second parameter with the number of languages that have been selected. In the properties of your dataset you can add another parameter that is set by an expression. Name it `@LangCount` and use the expression:
```
=Parameters!LangMultiSelect.Count
```
Then use a SQL query similar to this:
```
SELECT Name
FROM (
SELECT Name,
COUNT(*) OVER(PARTITION BY pt.id) AS lCount
FROM ProductTable AS pt
INNER JOIN MyFunctionsDatabase.dbo.listSplit(@language) AS var1 ON var1.id=pt.id
WHERE pt.language IN (@LangMultiSelect)
) AS t
WHERE t.lCount = @LangCount
```
That query uses the `COUNT()` aggregate as a [window function](https://msdn.microsoft.com/en-us/library/ms189461.aspx) to determine the number of matches the item has and then only returns results that match all of the selections in the multi-select parameter.
It works because I am splitting the count by a field that is the same for all of the item names that are the same item but in a different language. If you don't have a field like that this won't work.
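If you just need the "product matches all selected languages" logic over a normalized language table, a `GROUP BY ... HAVING COUNT(DISTINCT ...)` relational-division query is another option. A minimal sketch in Python's `sqlite3` with a hypothetical `product_lang` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product_lang(product_id INTEGER, lang TEXT);
INSERT INTO product_lang VALUES (1,'EN'),(1,'FR'),(1,'DE'),
                                (2,'EN'),(3,'EN'),(3,'DE'),(4,'EN'),(4,'FR');
""")
wanted = ['EN', 'FR']
placeholders = ','.join('?' * len(wanted))
# A product qualifies only if it has ALL wanted languages, i.e. the count of
# distinct matched languages equals the number requested.
rows = conn.execute(f"""
    SELECT product_id FROM product_lang
    WHERE lang IN ({placeholders})
    GROUP BY product_id
    HAVING COUNT(DISTINCT lang) = ?
    ORDER BY product_id
""", (*wanted, len(wanted))).fetchall()
print(rows)  # [(1,), (4,)] -- only products 1 and 4 have both EN and FR
```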
|
SQL inner join list split for an SSRS report
|
[
"",
"sql",
"t-sql",
"reporting-services",
"inner-join",
""
] |
I'm struggling with a tricky SQL query that I'm trying to write. Have a look at the following table:
```
+---+---+
| A | B |
+---+---+
| 1 | 2 |
| 1 | 3 |
| 2 | 2 |
| 2 | 3 |
| 2 | 4 |
| 3 | 2 |
| 3 | 3 |
| 4 | 2 |
| 4 | 3 |
| 4 | 4 |
+---+---+
```
Now, from this table, I essentially want a list of all As which have the exact same set of Bs and give each set an incrementing ID.
Hence, the output set for the above would be:
```
+---+----+
| A | ID |
+---+----+
| 1 | 1 |
| 3 | 1 |
| 2 | 2 |
| 4 | 2 |
+---+----+
```
Thanks.
Edit: If it helps, I have a list of all distinct values of B that are possible in another table.
Edit: Thank you so much for all the innovative answers. Was able to learn a lot indeed.
|
Here is a mathematical trick to solve your tricky select:
```
with pow as(select *, b * power(10, row_number()
over(partition by a order by b)) as rn from t)
select a, dense_rank() over( order by sum(rn)) as rn
from pow
group by a
order by rn, a
```
Fiddle <http://sqlfiddle.com/#!3/6b98d/11>
This of course will work only for a limited number of distinct values, as you will eventually get overflow. Here is a more general solution with strings:
```
select a,
dense_rank() over(order by (select '.' + cast(b as varchar(max))
from t t2 where t1.a = t2.a
order by b
for xml path(''))) rn
from t t1
group by a
order by rn, a
```
Fiddle <http://sqlfiddle.com/#!3/6b98d/29>
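The underlying idea — build a canonical signature of each A's set of Bs, then number the distinct signatures — can also be done client-side. A minimal sketch in Python using the question's sample data (the ranking loop below plays the role of `dense_rank()`):

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?)",
                 [(1, 2), (1, 3), (2, 2), (2, 3), (2, 4), (3, 2), (3, 3),
                  (4, 2), (4, 3), (4, 4)])

# Collect each A's set of Bs.
sets = defaultdict(set)
for a, b in conn.execute("SELECT a, b FROM t"):
    sets[a].add(b)

# Assign an incrementing ID per distinct (sorted) set of Bs.
ids, result = {}, []
for a in sorted(sets, key=lambda a: (sorted(sets[a]), a)):
    key = tuple(sorted(sets[a]))
    ids.setdefault(key, len(ids) + 1)
    result.append((a, ids[key]))
print(result)  # [(1, 1), (3, 1), (2, 2), (4, 2)]
```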
|
Something like this:
```
select a, dense_rank() over (order by g) as id_b
from (
select a,
(select b from MyTable s where s.a=a.a order by b FOR XML PATH('')) g
from MyTable a
group by a
) a
order by id_b,a
```
Or maybe using a CTE (I avoid them when possible)
[Sql Fiddle](http://sqlfiddle.com/#!6/31c14/6)
As a side note, this is the output of the inner query using the sample data in the question:
```
a g
1 <b>2</b><b>3</b>
2 <b>2</b><b>3</b><b>4</b>
3 <b>2</b><b>3</b>
4 <b>2</b><b>3</b><b>4</b>
```
|
T-SQL - Get a list of all As which have the same set of Bs
|
[
"",
"sql",
"t-sql",
""
] |
I have this query.
The INNER `SELECT` brings back multiple records. The outer does a SUM & MAX so I only have 1 record:
```
SELECT z.EmployeeId,
SUM(z.PayrollGap) AS PayrollGap,
MAX(z.PayrollGap) AS PayrollGapMax
FROM (SELECT DISTINCT
a.EmployeeId,
a.PayPeriodStart,
a.PayPeriodEnd,
b.PayPeriodStart AS NextStartDate,
CASE WHEN DATEDIFF(d, a.PayPeriodEnd, b.PayPeriodStart) - 1 < 0 THEN 0
ELSE DATEDIFF(d, a.PayPeriodEnd, b.PayPeriodStart) - 1
END AS PayrollGap
FROM EmployeePayroll a
LEFT JOIN EmployeePayroll b
ON b.EmployeeId = a.EmployeeId
AND b.rn = a.rn + 1
WHERE b.PayPeriodStart IS NOT NULL) z
GROUP BY z.EmployeeId
```
Along with the `MAX(z.PayrollGap)`, I need to grab the PayPeriodStart as well.
The problem is that if I add the column PayPeriodStart to the query, it'll bring back more than 1 record and I need to do a `MAX(z.PayrollGap)`.
How do I go about running this query but at the same time bringing back the PayPeriodStart RELATED TO `MAX(z.PayrollGap)`?
|
Try to split query:
```
;with cte as
(
SELECT DISTINCT
a.EmployeeId,
a.PayPeriodStart,
a.PayPeriodEnd,
b.PayPeriodStart AS NextStartDate,
CASE WHEN DATEDIFF(d, a.PayPeriodEnd, b.PayPeriodStart) - 1 < 0 THEN 0
ELSE DATEDIFF(d, a.PayPeriodEnd, b.PayPeriodStart) - 1
END AS PayrollGap
FROM EmployeePayroll a
LEFT JOIN EmployeePayroll b
ON b.EmployeeId = a.EmployeeId
AND b.rn = a.rn + 1
WHERE b.PayPeriodStart IS NOT NULL
),
res as
(
SELECT z.EmployeeId,
SUM(z.PayrollGap) AS PayrollGap,
MAX(z.PayrollGap) AS PayrollGapMax
FROM cte z
GROUP BY z.EmployeeId
)
select r.EmployeeId, r.PayrollGap, r.PayrollGapMax, c.PayPeriodStart
from res r
join cte c on c.EmployeeId = r.EmployeeId
and c.PayrollGap = r.PayrollGapMax
```
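Here is a minimal runnable sketch of the "aggregate, then join back on the max" pattern using Python's `sqlite3`; the `gaps` table is a hypothetical stand-in for the output of the inner `DISTINCT` query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gaps(EmployeeId INTEGER, PayPeriodStart TEXT, PayrollGap INTEGER);
INSERT INTO gaps VALUES (1,'2015-01-01',2),(1,'2015-02-01',7),(1,'2015-03-01',0),
                        (2,'2015-01-01',1),(2,'2015-02-01',3);
""")
# Aggregate per employee, then join back to the detail rows to recover the
# PayPeriodStart belonging to the maximum gap.
rows = conn.execute("""
    WITH agg AS (
        SELECT EmployeeId, SUM(PayrollGap) AS PayrollGap,
               MAX(PayrollGap) AS PayrollGapMax
        FROM gaps GROUP BY EmployeeId)
    SELECT a.EmployeeId, a.PayrollGap, a.PayrollGapMax, g.PayPeriodStart
    FROM agg a JOIN gaps g
      ON g.EmployeeId = a.EmployeeId AND g.PayrollGap = a.PayrollGapMax
    ORDER BY a.EmployeeId
""").fetchall()
print(rows)  # [(1, 9, 7, '2015-02-01'), (2, 4, 3, '2015-02-01')]
```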
|
If I understand the question correctly you need to join your result set back to EmployeePayroll to add in PayPeriodStart.
Something like:
```
WITH cte AS (
SELECT DISTINCT
a.EmployeeId,
a.PayPeriodStart,
a.PayPeriodEnd,
b.PayPeriodStart AS NextStartDate,
CASE WHEN DATEDIFF(d, a.PayPeriodEnd, b.PayPeriodStart) - 1 < 0 THEN 0
ELSE DATEDIFF(d, a.PayPeriodEnd, b.PayPeriodStart) - 1
END AS PayrollGap
FROM EmployeePayroll a
LEFT JOIN EmployeePayroll b
ON b.EmployeeId = a.EmployeeId
AND b.rn = a.rn + 1
WHERE b.PayPeriodStart IS NOT NULL
)
SELECT EmployeeId
,PayrollGap
,PayrollGapMax
,PayPeriodStart
FROM (SELECT z.EmployeeId,
SUM(z.PayrollGap) AS PayrollGap,
MAX(z.PayrollGap) AS PayrollGapMax
FROM cte z
GROUP BY z.EmployeeId) x
INNER JOIN
cte ON cte.EmployeeId = x.EmployeeId
AND cte.PayrollGap = x.PayrollGapMax
```
Which isn't optimized.
Or tested, since no sample data.
Or formatted particularly nicely.
|
How to get the MAX() while NOT grouping all columns in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have an MS SQL database where sensor data is being streamed from a low level device, into say column 1.
In columns 2 and 3 I have some alarm levels.
What I would like to do is for "some script" to test the data as it arrives in column 1 against the levels in columns 2 and 3 and place a boolean decision in column 4.
My question is can I have some SQL script to do this? Where would it reside or does my low level device need knowledge of the column 2 and 3 trip points and when it posts the data to column 1 it also posts the decision to column 4?
UPDATE - First Test
I have implemented this and tested in sqlfiddle (but I am unable to login for some reason) so here is my schema:
```
CREATE TABLE SensorData(
data1 float NULL,
data1alarm bool NOT NULL DEFAULT FALSE
);
//
CREATE TABLE SensorConfig(
data1max float NULL,
data1min float NULL
);
//
CREATE TRIGGER Alarm
AFTER INSERT
ON SensorData FOR EACH ROW
BEGIN
IF (new.data1 > 5.0) AND (new.data1 < 60.0) THEN
INSERT INTO SensorData(data1alarm)
VALUES(true);
END IF;
END
//
INSERT INTO SensorConfig(data1max, data1min) VALUES (200.0, 100.0);
INSERT INTO SensorData(data1) VALUES (10.0);
```
When `INSERT INTO SensorData(data1) VALUES (10.0);` I get the error:
`Can't update table 'sensordata' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.`
but when `INSERT INTO SensorData(data1) VALUES (1.0);` then the schema builds.
I cannot figure out the problem.
Also, what I really want is a condition that writes true when `data1` is greater than `data1max` or less than `data1min`.
But at the moment I am not sure why the schema does not build.
Furthermore, when I have tried getting the data1min and data1max into the conditional part of the trigger I have encountered errors:
`IF (new.data1 > SensorConfig.data1min) AND (new.data1 < SensorConfig.data1max) THEN`
Any help would be much appreciated.
|
I would recommend that you don't insert things into your table, but rather create a view that contains the logic for the alarm codes. So you might do
```
create view alarming_data as
select
  data1.theValue as theValue,
  true as theAlarm
from
data1 where data1.theValue > 5 and data1.theValue < 60
```
That keeps your raw data "pure" and your processing separate. In fact, you don't really need the second field, I'm just putting it there to keep my answer as close to your example as possible.
You can query this view as you would a table, and if your definition of alarming changes, it's just a matter of updating the definition of the view.
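A minimal sketch of the view idea using Python's `sqlite3` (table name, column, and the 5–60 range are hypothetical, following the answer's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sensor_data(reading REAL);
INSERT INTO sensor_data VALUES (1.0), (10.0), (75.0);
-- The alarm logic lives in the view, not in the raw data.
CREATE VIEW alarming_data AS
    SELECT reading FROM sensor_data WHERE reading > 5.0 AND reading < 60.0;
""")
rows = conn.execute("SELECT reading FROM alarming_data").fetchall()
print(rows)  # [(10.0,)] -- only the in-range reading trips the alarm view
```

Changing the alarm definition later is then just a `DROP VIEW`/`CREATE VIEW`, with no rewrite of stored rows.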
|
If you only want to display the offending lines, you don't need to persist `column4`, you can query them dynamically:
```
SELECT *
FROM mytable
WHERE column1 > column2 OR column1 > column3
```
|
SQL data processing
|
[
"",
"sql",
"sql-server",
""
] |
I am currently studying for a final and have a question about one of the practice problems.
The problem states:
Consider the following two SQL queries over the relation `R(A,B)` where attribute `A` is the primary key.
Query 1: `SELECT * FROM R WHERE B > 50 OR B <= 50`
Query 2: `SELECT * FROM R WHERE A > 50 OR A <= 50`
Which is correct?
a. Queries 1 and 2 return all the records in `R`
b. Query 2 returns all the records in `R`, but Query 1 may not
The official answer says that b is correct, but I wanted to know why. I understand that a primary key must be unique, meaning Query 2 should return every record. But wouldn't query 1 also return every record even if there were duplicate `B` values?
|
The correct answer is the second, because A is a primary key and a primary key cannot be null, so
```
A > 50 OR A <= 50
```
will always be true, while the following:
```
B > 50 OR B <= 50
```
might be NULL if B is NULL
```
NULL > 50 OR NULL <= 50 ---> NULL
```
so rows where B is null won't be returned.
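This behavior is easy to demonstrate; a minimal sketch in Python's `sqlite3` with a hypothetical `R(A, B)` containing one NULL in `B`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE R(A INTEGER PRIMARY KEY, B INTEGER);
INSERT INTO R VALUES (10, 40), (60, NULL), (70, 80);
""")
# Query 1 filters on nullable B; Query 2 filters on the non-null primary key A.
q1 = conn.execute("SELECT COUNT(*) FROM R WHERE B > 50 OR B <= 50").fetchone()[0]
q2 = conn.execute("SELECT COUNT(*) FROM R WHERE A > 50 OR A <= 50").fetchone()[0]
print(q1, q2)  # 2 3 -- the row with B = NULL is dropped by query 1
```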
|
The only reason I can see why query 1 would not return some of the records is if column (or attribute, as you call it) B is nullable.
Comparing null to any value in any RDBMS I know of never yields true, since `null` is *by definition* an unknown value.
For example, if you add `and null = null` to your where clause, you will get no results at all.
So any record where B is null would not be returned from the first query.
This cannot be the case for column A since it's the primary key, and a primary key can not be nullable *by definition*.
|
Calling SELECT * on a primary key versus an attribute in SQL
|
[
"",
"mysql",
"sql",
"select",
"primary-key",
""
] |
I've this block in sql developer:
```
begin
delete from temp;
insert into temp values (1);
dbms_output.put_line('Done');
end;
```
If I hit F5 the script runs fine. The table is updated with the value 1 and the "Script output" tab says 'Done'.
However I would like to automatically visualize the table that I have just modified in the "result" tab after I execute the block. Is that possible? How?
Any help is appreciated. Thank you.
|
You can use CURSOR:
```
declare
a temp.id%type; --name column in your table
cursor c1 is
select id from temp;
begin
delete from temp;
insert into temp values (1);
open c1;
loop
fetch c1 into a;
dbms_output.put_line (a);
exit when c1%notfound;
end loop;
CLOSE C1;
end;
```
|
No need of any anonymous **PL/SQL** block. Just run the **SQL** statements as a script,
```
delete from temp;
insert into temp values (1);
select * from temp;
```
Put the above three statements in **SQL Developer** worksheet and then press **F5** to run as a script, see the output in the Script output tab. You must finally **COMMIT** to make the table changes permanent.
You cannot do `select * from table` in PL/SQL as it is a pure SQL statement. PL/SQL expects an INTO clause. Never do in PL/SQL when you could do the same in pure SQL.
But, if you really want to do it in a `BEGIN-END` block, then put the SELECT statement outside the PL/SQL block. Do not merge PL/SQL and SQL.
```
begin
delete from temp;
insert into temp values (1);
dbms_output.put_line('Done');
end;
/
select * from temp;
```
|
How to see the table just modified in a script with Sql Developer
|
[
"",
"sql",
"oracle",
"select",
"plsql",
"oracle-sqldeveloper",
""
] |
I have 4 tables that I want to be joined.
1. `Customers`
2. `Traffic`
3. `Average`
4. `Live`
I want to insert joined data of these tables to "Details" table.
The relationship between the tables is here:
each of Traffic, Average and Live tables have a "cid" that is the primary key of "Customers" table:
* `Traffic.cid = Customers.id`
* `Average.cid = Customers.id`
* `Live.cid = Customers.id`
The query that I wrote is here:
```
INSERT INTO Details
(
cid, Customer_Name, Router_Name,
Traffic_Received,
Average_Received,
Live_Received,
date
)
(
SELECT Customers.id AS cid, Customers.name AS Customer_Name, Traffic.Router_Name,
Traffic.Received,
Average.Received,
Live.Received,
Traffic.date
FROM Customers
INNER JOIN Traffic ON Customers.id=Traffic.cid
INNER JOIN Average ON Customers.id=Average.cid
INNER JOIN Live ON Customers.id=Live.cid
WHERE Traffic.date='2015-06-08'
)
```
But the result has duplicated rows. I changed the `JOIN` to `LEFT JOIN` and to `RIGHT JOIN`, but the result did not change.
What should I do to not have duplicated rows in `Details` table?
|
With the `LEFT JOIN`, you will be joining to the table (e.g. `Traffic`) even when there is not a record that corresponds to the `Customers.id`, in which case, you will get the `null` value for the columns from this table where there is no matching record.
With the `RIGHT JOIN`, you will get every record from the joined table, even when there is not a corresponding record in `Customers`.
However, the type of `JOIN` is not the problem here. If you are getting duplicate records in your results, then there is more than one matching record in the tables you are joining to. For example, there may be more than one record in `Traffic` with the same `cid`. Use `SELECT DISTINCT` to remove duplicates, or if you are interested in an aggregate of those duplicates, use an aggregate function, such as `count()` or `sum()`, with a `GROUP BY` clause, e.g. `GROUP BY Traffic.cid`.
If you still have duplicates, then check to make sure that they really are duplicates - I'd suggest that one or more columns is actually different.
|
Can you please try this
```
INSERT INTO Details
(
cid, Customer_Name, Router_Name,
Traffic_Received,
Average_Received,
Live_Received,
date
)
(
SELECT Customers.id AS cid,
Customers.name AS Customer_Name,
Traffic.Router_Name,
Traffic.Received,
Average.Received,
Live.Received,
Traffic.date
FROM Customers
INNER JOIN Traffic ON Customers.id=Traffic.cid
INNER JOIN Average ON Customers.id=Average.cid
INNER JOIN Live ON Customers.id=Live.cid
WHERE Traffic.date='2015-06-08'
GROUP BY
cid,
Customer_Name,
Traffic.Router_Name,
Traffic.Received,
Average.Received,
Live.Received,
Traffic.date
)
```
|
Which JOIN type in multiple joins
|
[
"",
"mysql",
"sql",
"join",
"left-join",
"right-join",
""
] |
I'm having trouble with a sql query.
I'm using MariaDB. I have a table with several fields and I want to select the rows when they change values.
For example:
```
+------------+----------+----------+
| time_stamp | G1.state | G2.state |
+------------+----------+----------+
| 1433717821 | 3 | 1 |
+------------+----------+----------+
| 1433717881 | 3 | 1 |
+------------+----------+----------+
| 1433717942 | 5 | 1 |
+------------+----------+----------+
```
Here I need to get the time\_stamp when 3 changes to 5. I've tried the following query:
```
SELECT a.`time_stamp`, a.`G1.state`, `G2.state`
FROM `regs` AS `a`
WHERE ((a.`G1.state` <>
(SELECT b.`G1.state`
FROM `regs` AS `b`
WHERE a.time_stamp > b.time_stamp
ORDER BY b.time_stamp DESC
LIMIT 1)) OR
(a.`G2.state` <>
(SELECT b.`G2.state`
FROM `regs` AS `b`
WHERE a.time_stamp > b.time_stamp
ORDER BY b.time_stamp DESC
LIMIT 1))) AND
(a.time_stamp >= 1433159549) AND (a.time_stamp < 1433677949)
ORDER BY a.time_stamp ASC
```
However it just takes too long to run, it takes 50.96 sec and I need it a lot faster (hopefully less than a second). It analyzes around 10k values. Can anyone give me a hand here?
|
You can use variables to track changes in `G1.state` or `G2.state` columns:
```
SELECT time_stamp, `G1.state`, `G2.state`
FROM (
SELECT time_stamp, `G1.state`, `G2.state`,
IF (@g1 IS NOT NULL AND @g1 <> `G1.state`,
IF(@g1:= `G1.state`, 1, 1),
IF(@g1:= `G1.state`, 0, 0)) AS g1Changed,
IF (@g2 IS NOT NULL AND @g2 <> `G2.state`,
IF(@g2:= `G2.state`, 1, 1),
IF(@g2:= `G2.state`, 0, 0)) AS g2Changed
FROM mytable, (SELECT @g1:= NULL, @g2:= NULL) AS vars
ORDER BY time_stamp ASC ) t
WHERE g1Changed = 1 OR g2Changed = 1
```
This query might perform better since no correlated subqueries are used.
**Note:** according to [**MariaDB documentation**](https://mariadb.com/kb/en/mariadb/user-defined-variables/):
> It is unsafe to read a user-defined variable and set its value in the same statement (unless the command is SET), because the order of these actions is undefined.
To overcome this problem nested conditionals are used: `@g1` or `@g2` are *first* compared against `G1.state`, `G2.state` values respectively, and *then* set in order to be compared again when the next record is fetched.
[**Demo here**](http://sqlfiddle.com/#!9/32e60/9)
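If you prefer to avoid engine-specific user variables altogether, the same change detection can be done in a single ordered scan on the client. A minimal sketch in Python's `sqlite3` with the question's sample rows (columns renamed `g1`/`g2` for simplicity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE regs(time_stamp INTEGER, g1 INTEGER, g2 INTEGER);
INSERT INTO regs VALUES (1433717821,3,1),(1433717881,3,1),(1433717942,5,1);
""")

# One ordered scan; keep rows where either state differs from the previous row.
changes, prev = [], None
for row in conn.execute("SELECT time_stamp, g1, g2 FROM regs ORDER BY time_stamp"):
    if prev is not None and row[1:] != prev[1:]:
        changes.append(row)
    prev = row
print(changes)  # [(1433717942, 5, 1)] -- the row where state 3 changed to 5
```

This is O(n) with a single index scan on `time_stamp`, instead of a correlated subquery per row.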
|
```
Select a.* from regs a
Where (Select b.`G1.state` from regs b
       Where b.time_stamp =
             (Select Max(time_stamp) from regs
              Where time_stamp < a.time_stamp)) <> a.`G1.state`
order by a.time_stamp
```
|
SELECT sql row when value changes
|
[
"",
"mysql",
"sql",
""
] |
I've read a lot about this issue.. (also read this: [Tables with no Primary Key](https://stackoverflow.com/questions/5600/tables-with-no-primary-key))
It seems like there is no reason to use tables with no PK, so why does PostgreSQL allow it? Can you give an example of when it's a good idea not to declare a PK?
|
I think the answer to your question lies in trying to understand what are the drawbacks of having a Primary-Key (PK) in the first place.
One obvious 'drawback' (depending on how you see it) of maintaining a PK is that it adds overhead on every `INSERT`. So, to increase `INSERT` performance (assuming, for example, the table is a logging table where querying is done offline), I would remove all constraints/PKs where possible, which would definitely improve write performance. You may argue that pure logging should be done outside the DB (in a NoSQL store such as Cassandra), but then again, at least it is possible in PostgreSQL.
|
A primary key is a special form of a unique constraint. A unique constraint is always backed up by an index. And the disadvantage of an index is that it takes time to update. Tables with an index have lower `update`, `delete` and `insert` performance.
So if you have a table that has a lot of modifications, and few queries, you can improve performance by omitting the primary key.
|
In PostgreSQL what tables with no primary key used for
|
[
"",
"sql",
"postgresql",
""
] |
I need to update a column in table using following:
```
update Table1 set name = (select productName from Table2 where
@rid=$parent.$current.productid)
```
The query works fine, but instead of the plain name it stores the value in "[productname]" format.
I have read the OrientDB documentation; I guess the select query returns its result as a collection, so I have already tried the following functions:
* get(1)
* first()
* [0] etc (my desperate attempt :)
Thanks in advance.
|
I tried searching but did not find a clean answer; making the following change worked for me and got the job done :)
```
update Table1 set name=(select productname from Table2 where
@rid=$parent.$current.productid),
name= name.replace("\[","").replace("\]","")
```
Hope this saves time for someone.
|
You observe this behavior because the subquery (the select query) always returns a collection. The LET block would help you here. Here is how to use the LET block in your query:
```
update Table1 set name = $name[0].productname
LET $name = (select productname from Table2 where @rid=$parent.$current.productId)
```
The LET block is useful for sub queries, projections and holding results that will be used multiple times.
You can find more information [here](http://orientdb.com/docs/2.0/orientdb.wiki/SQL-Query.html#projections).
Hope this helps.
|
OrientDB: How to update column using select query
|
[
"",
"sql",
"orientdb",
""
] |
I have a column like "AP.1.12345.ABCD.20150523\_0523.20150524\_0223".
Can I divide the column into multiple columns on the basis of "."?
For the example column above, the output would be:
```
AP 1 12345 ABCD 20150523_0523 20150524_0223.
```
Thus a single column will be converted into six columns.
|
Thanks all for your suggestions and solutions.
After a little experimenting, and assuming the column is always in the format mentioned, I used **SUBSTRING** and **CHARINDEX** to get the desired result.
Though the query looks a bit big, it worked.
I would rather have a simple query than use a function.
I still need to check the query's performance, though.
|
You can also write a function that will do something like this:
```
DECLARE @Col VARCHAR(64),
@DynSQL VARCHAR(MAX)
SET @Col = 'AP.1.12345.ABCD.20150523_0523.20150524_0223'
SET @DynSQL = 'SELECT '''+REPLACE(@Col,'.',''',''')+''''
exec(@DynSQL)
```
This can handle any number of delimiters/columns.
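For comparison, the split itself is trivial client-side; a minimal Python sketch with the question's sample value:

```python
# One delimited column split into six values on '.'.
col = "AP.1.12345.ABCD.20150523_0523.20150524_0223"
parts = col.split(".")
print(parts)
# ['AP', '1', '12345', 'ABCD', '20150523_0523', '20150524_0223']
```

If the value always has exactly six segments, each `parts[i]` maps directly to one output column.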
|
Sql code to get multiple columns out of single column
|
[
"",
"sql",
"sql-server",
"ssis",
"multiple-columns",
""
] |
I have absolutely no clue why MySQL is having an issue with the second `CREATE TABLE` statement.
```
CREATE TABLE User(
uid INTEGER,
url CHAR(100),
firstname CHAR(40),
lastname CHAR(40),
PRIMARY KEY(uid)
);
```
The below is the one that causes problems:
```
CREATE TABLE Follows(
uid INTEGER,
url CHAR(100),
PRIMARY KEY(uid,url),
FOREIGN KEY(uid) REFERENCES User(uid), ON DELETE CASCADE,
FOREIGN KEY(url) REFERENCES User(url), ON DELETE CASCADE
);
```
Error I get is:
> #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ON DELETE CASCADE, FOREIGN KEY(url) REFERENCES User(url), ON DELETE CASCADE)' at line 1
|
There are a few issues here:
First the `on delete cascade` is part of the foreign key definition, so the comma (`,`) before it should be removed.
Second, the second foreign key references `url`, which is not a unique key, and therefore is not allowed. So either remove this constraints:
```
CREATE TABLE Follows (
uid INTEGER,
url CHAR(100),
PRIMARY KEY(uid,url),
FOREIGN KEY(uid) REFERENCES User(uid) ON DELETE CASCADE
);
```
Or define another unique key on `url`:
```
CREATE TABLE User(
uid INTEGER,
url CHAR(100),
firstname CHAR(40),
lastname CHAR(40),
PRIMARY KEY(uid),
UNIQUE (url)
);
CREATE TABLE Follows (
uid INTEGER,
url CHAR(100),
PRIMARY KEY(uid,url),
FOREIGN KEY(uid) REFERENCES User(uid) ON DELETE CASCADE,
FOREIGN KEY(url) REFERENCES User(url) ON DELETE CASCADE
);
```
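To see the corrected syntax and the cascade behavior end-to-end, here is a minimal sketch using Python's `sqlite3` (SQLite rather than the question's MySQL, so details may differ; note that SQLite only enforces foreign keys when `PRAGMA foreign_keys = ON` is set):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('PRAGMA foreign_keys = ON')  # SQLite needs this to enforce FKs
con.execute('CREATE TABLE User(uid INTEGER PRIMARY KEY, url TEXT UNIQUE)')
# ON DELETE CASCADE is part of the FOREIGN KEY clause, with no comma before it.
con.execute('''CREATE TABLE Follows(
    uid INTEGER,
    url TEXT,
    PRIMARY KEY(uid, url),
    FOREIGN KEY(uid) REFERENCES User(uid) ON DELETE CASCADE,
    FOREIGN KEY(url) REFERENCES User(url) ON DELETE CASCADE
)''')
con.execute("INSERT INTO User VALUES (1, 'a.example'), (2, 'b.example')")
con.execute("INSERT INTO Follows VALUES (1, 'b.example')")
con.execute('DELETE FROM User WHERE uid = 1')  # cascades into Follows
print(con.execute('SELECT COUNT(*) FROM Follows').fetchone()[0])  # 0
```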
|
Try removing the "," after REFERENCES User(uid) and after REFERENCES User(url).
|
MySQL FOREIGN KEY error, ON DELETE CASCADE
|
[
"",
"mysql",
"sql",
"foreign-keys",
"ddl",
""
] |
There are two tables.
One table contains:
```
Name value
A 1
B 2
C 3
D 4
```
another table contains
```
City value
aa 1
bb 2,3
cc 3
dd 1,2,4
```
I want an output which contains:
```
City value Name
aa 1 A
bb 2,3 B,C
cc 3 C
dd 1,2,4 A,B,D
```
How can I do this using a cursor?
|
Thanks. Your question really made me appreciate normal forms.
Anyhow, I am going to go out on a limb and assume you asked for a cursor-based solution because you assumed the non-normalized data could not be handled.
Once you have the function to materialize the rows into a value list, you can solve this with a simple query.
Given:
```
CREATE TABLE dbo.NV (Name CHAR(1), Value INT)
CREATE TABLE dbo.CV (City varchar(88), ValueList VARCHAR(88))
```
loaded with the data you indicated.
And this SQL script:
```
GO
CREATE FUNCTION dbo.f_NVList(@VList VARCHAR(MAX)) RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @VAL VARCHAR(928)='',
@FIDescr VARCHAR(55)
SELECT @VAL = COALESCE(@VAL + LTRIM(map.name),'') + ','
FROM dbo.nv Map
WHERE CHARINDEX(','+LTRIM(STR(map.value)) + ',', ','+@VList + ',' ) > 0
SET @VAL = SUBSTRING(@VAL,1,len(@VAL)-1)
RETURN(@VAL)
END
GO -- end of function
-- this generates the output, using the function to materialize the name-values
SELECT cv.* , dbo.f_NVList(cv.ValueList ) as NameList FROM dbo.CV cv;
```
producing your output:

PLEASE DON'T - but if you really need the cursor for some reason, instead of
```
SELECT cv.* , dbo.f_NVList(cv.ValueList ) as NameList FROM dbo.CV cv;
```
use this
```
OPEN BadIdea;
FETCH NEXT FROM BadIdea INTO @C, @VList
WHILE @@FETCH_STATUS = 0
BEGIN
SET @NameList = dbo.f_NVList(@Vlist)
INSERT INTO @OUT VALUES( @C, @VLIST , @NameList )
FETCH NEXT FROM BadIdea INTO @C, @VList
END
CLOSE BadIdea
DEALLOCATE BadIdea
select * from @OUT ;
```
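The mapping the function performs can be sketched in plain Python (a hypothetical dictionary standing in for the name/value table):

```python
# nv mirrors the Name/Value table from the question.
nv = {1: 'A', 2: 'B', 3: 'C', 4: 'D'}

def name_list(value_list):
    """Map each value in a comma-separated list to its name, like f_NVList."""
    return ','.join(nv[int(v)] for v in value_list.split(','))

print(name_list('1,2,4'))  # A,B,D
```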
|
Using CROSS APPLY we will initially split the delimited values, and then we can achieve the result using XML PATH() and CTEs.
```
DECLARE @Name table (name varchar(5),value int)
INSERT INTO @Name (name,value)values ('A',1),('B',2),('C',3),('D',4)
DECLARE @City table (city varchar(10),value varchar(10))
INSERT INTO @City (city,value)values ('aa','1'),('bb','2,3'),('cc','3'),('dd','1,2,4')
```
Code :
```
;with CTE AS (
SELECT A.city,
Split.a.value('.', 'VARCHAR(100)') AS Data
FROM
(
SELECT city,
CAST ('<M>' + REPLACE(value, ',', '</M><M>') + '</M>' AS XML) AS Data
FROM @City
) AS A CROSS APPLY Data.nodes ('/M') AS Split(a)
),CTE2 AS (
Select c.city,t.value,STUFF((SELECT ', ' + CAST(name AS VARCHAR(10)) [text()]
FROM @Name
WHERE value = c.Data
FOR XML PATH(''), TYPE)
.value('.','NVARCHAR(MAX)'),1,2,' ') List_Output
from CTE C
INNER JOIN @Name t
ON c.Data = t.value
)
select DISTINCT c.city,STUFF((SELECT ', ' + CAST(value AS VARCHAR(10)) [text()]
FROM CTE2
WHERE city = C.city
FOR XML PATH(''), TYPE)
.value('.','NVARCHAR(MAX)'),1,2,' ') As Value ,STUFF((SELECT ', ' + CAST(List_Output AS VARCHAR(10)) [text()]
FROM CTE2
WHERE city = C.city
FOR XML PATH(''), TYPE)
.value('.','NVARCHAR(MAX)'),1,2,' ')As Name from CTE2 C
```
|
How to read value inside a cursor?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
So I am new to the whole SQL query business but I need some help with two issues. My goal is to have anything in the column "EnvironmentName" displayed in the query results where the column "NodeName" has the word "Database". I did this with
```
FROM [Backbone_ASPIDER].[dbo].[vw_CFGsvr_Con]
WHERE NodeName = 'Database'
ORDER BY EnvironmentName asc
WHERE NodePath
```
Results of Query:

I am able to get my query results but would like to remove the rows with **NULL**. I have tried to use "IS NOT NULL" but SQL Server Management Studio labels this as "**incorrect syntax**."
What I have tried:
```
FROM [Backbone_ASPIDER].[dbo].[vw_CFGsvr_Con]
WHERE NodeName = 'Database'
ORDER BY EnvironmentName asc IS NOT NULL
WHERE NodePath
```
**Thank you in advance!**
|
Your query is pretty close..
1: You have to specify a specific column to not be null while using IS NOT NULL.
So modify your query to:
```
FROM [Backbone_ASPIDER].[dbo].[vw_CFGsvr_Con]
WHERE NodeName = 'Database' AND EnvironmentName IS NOT NULL
ORDER BY EnvironmentName asc
```
2: Check out this article about trimming parts of strings from query results
<http://basitaalishan.com/2014/02/23/removing-part-of-string-before-and-after-specific-character-using-transact-sql-string-functions/>
|
EDIT: I just noticed you removed this from your OP, so feel free to disregard if you took care of that.
I don't think anyone addressed the substring problem yet. There's several ways you could get at this depending on how complex the strings are you have to slice up, but here's how I'd do it
```
-- Populating some fake data, representative of what you've got
if object_id('tempdb.dbo.#t') is not null drop table #t
create table #t
(
nPath varchar(1000)
)
insert into #t
select '/Database/Mappings/Silver/Birthday' union all
select '/Database/Connections/Blue/Happy'
-- First, get the character index of the first '/' after as many characters the word '/database/' takes up.
-- You could have hard coded this value too. Add 1 to it so that it moves PAST the slash.
;with a as
(
select
ixs = charindex('/', nPath, len('/Database/') + 1),
-- Get everything to the right of what you just determined with all the charindex() stuff
ss = right(nPath, len(nPath) - charindex('/', nPath, len('/Database/') + 1)),
nPath
from #t
)
-- Now just take the left of the now-cleaned-up string from start to the first pipe
select
ixs,
ss,
color = left(ss, charindex('/', ss) -1),
nPath
from a
```
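For comparison, the same slicing is a single index into the split path in Python (a sketch using the dummy rows above; the assumption is that the wanted segment is always the third path component):

```python
# '/Database/Mappings/Silver/Birthday'.split('/') ->
# ['', 'Database', 'Mappings', 'Silver', 'Birthday'], so index 3 is the "color".
paths = ['/Database/Mappings/Silver/Birthday', '/Database/Connections/Blue/Happy']
for p in paths:
    print(p.split('/')[3])  # Silver, then Blue
```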
|
SQL Query Remove Part of Path/Null
|
[
"",
"sql",
"sql-server",
""
] |
I have a SAS Master Dataset with 10 Million Rows and 1800 columns. I need to update 10 columns using a transaction dataset with 15 million records, only for records with matching key.
I tried running a proc sql update statement using the following code.
```
proc sql;
UPDATE lib1.master1 a
SET col1 = (SELECT col1 FROM lib1.changes_1 b WHERE a.key=b.key),
col2 = (SELECT col2 FROM lib1.changes_1 b WHERE a.key=b.key),
col3 = (SELECT col3 FROM lib1.changes_1 b WHERE a.key=b.key),
col4 = (SELECT col4 FROM lib1.changes_1 b WHERE a.key=b.key),
col5 = (SELECT col5 FROM lib1.changes_1 b WHERE a.key=b.key),
col6 = (SELECT col6 FROM lib1.changes_1 b WHERE a.key=b.key),
col7 = (SELECT col7 FROM lib1.changes_1 b WHERE a.key=b.key),
col8 = (SELECT col8 FROM lib1.changes_1 b WHERE a.key=b.key),
col9 = (SELECT col9 FROM lib1.changes_1 b WHERE a.key=b.key)
WHERE EXISTS ( SELECT 1 FROM lib1.changes_1 b WHERE A.key = B.key);
quit;
```
I tried it for just col1 for testing purposes, and it's been running for more than 4 hours.
I can think of a data merge: dropping the 10 columns and then doing a left join, but that would change the column ordering, and reordering 1800 columns would again be a tedious task.
Is there a faster/more efficient technique?
|
To replace one column, a format (roughly similar to Bendy's approach) is easiest.
To replace ten columns, always coming from the same row, I recommend a hash table. Around the same speed as a single format, typically. (Formats actually can be a bit slow at the 10MM rows mark, so this might be purely faster even than one.)
This took ~30 seconds on my laptop (CPU time, and real time; I have an SSD, so those are similar. On a HDD this is probably 30 seconds CPU time and a few minutes real time.)
```
*make some dummy data;
data maindata;
array col(10);
do _i = 1 to dim(col);
col[_i] = _i;
end;
do key = 1 to 1e7;
output;
end;
run;
data updatedata;
array col(10);
do _i = 1 to dim(col);
col[_i] = 100+_i;
end;
do key = 1 to 3e7 by 2;
output;
end;
run;
*using MODIFY here which is identical in function to SQL UPDATE - does not make a new dataset;
data maindata;
if _n_=1 then do;
declare hash ud(dataset:'updatedata');
ud.defineKey('key');
ud.defineData('col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8', 'col9', 'col10');
ud.defineDone();
end;
modify maindata;
rcUpdate = ud.find();
if rcUpdate=0 then replace;
run;
```
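A rough Python analogue of the hash lookup-and-replace above, with hypothetical dictionaries standing in for the datasets (tiny row counts so it runs instantly):

```python
# master: keys 1..10; updates: odd keys get new col1 values (mirrors the dummy data).
master = {k: {'col1': k} for k in range(1, 11)}
updates = {k: {'col1': 100 + k} for k in range(1, 31, 2)}

for key, row in master.items():
    hit = updates.get(key)   # ud.find(): keyed lookup in the hash
    if hit is not None:
        row.update(hit)      # replace: overwrite the data columns in place

print(master[3]['col1'], master[4]['col1'])  # 103 4
```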
|
# Summary
1. Rollup `lib1.changes_1` into a temporary table (say, `lib1.changes_1_rolledup`) performing full scan. (`lib1.changes_1_rolledup`'s physical attributes should be carefully set up.)
2. Update `lib1.master1` with the `lib1.changes_1_rolledup`'s data performing full scan of one of the two and index scan of the other. (Which one to full scan depends on the actual data.)
# Explained
First, in order to achieve better performance you're most likely bound to go down to the underlying DBMS level and utilize its capabilities.
Then, the optimization technique really depends on the nature of your data.
I suppose [almost] all of the `lib1.changes_1(key)` values match one of the `lib1.master1(key)` values (`lib1.changes_1` may be even a detail table for `lib1.master1`). Also, we need to apply all the changes from `lib1.changes_1`. This means we have to read all the records from `lib1.changes_1` anyway. If so, the most effective way of doing this would be a `lib1.changes_1` table *full scan* but executed *only once*. This full scan would sum up all the changes from `lib1.changes_1` into a (possible temporary) table of such definition:
```
-- pseudo code:
create [temporary] table lib1.changes_1_rolledup
<set physical attributes depending on your data nature - see below>
as select key, col1, col2, col3, col4, col5, col6, col7, col8, col9
from lib1.changes_1
where 1 = 2
```
This rolled-up changes table would likely contain no more than 10M records and, depending on the colX sizes, may be relatively small in terms of storage space requirements. What is more important, it would only have one record per `key` value, which may be a great benefit.
Then we need to assess how many of `lib1.master1`'s records are represented in `lib1.changes_1_rolledup`, which in case of a master-detail relationship is simply a comparison of record counts.
If `lib1.changes_1_rolledup` is only two to three times shorter than `lib1.master1` (in terms of record count) or less (the exact rate depends on the DBMS), the most effective way would be to perform a full scan of `lib1.master1`, updating every record of `lib1.master1` (all 9 `colX` values at once) with the corresponding values from `lib1.changes_1_rolledup`. (The update with full scan should be implemented in a single update query whenever possible, of course.) In this case the physical attributes of the `lib1.changes_1_rolledup` table have to be tuned for key lookup. I'd suggest a technique similar to Oracle's index-organized tables if available.
If `lib1.changes_1_rolledup` is several times shorter than `lib1.master1`, then it is more efficient to full scan `lib1.changes_1_rolledup` while updating the respective records of `lib1.master1`. The physical storage of `lib1.changes_1_rolledup` in this case should rather be tuned for full scans, and possibly be in the order of `lib1.master1(key)` appearance (which may be the case for auto-incremented surrogate keys, for instance).
P.S.
In order to simplify the description, I omitted the cases where `lib1.changes_1` contains only a subset of column updates for a certain key. This can be dealt with by adding flag fields in `lib1.changes_1_rolledup` and tweaking the update as follows:
```
-- pseudo code:
update lib1.master1 m
set m.col1 = (
select case when c.col1_flag = 1 then c.col1 else m.col1 end
from lib1.changes_1_rolledup c
where c.key = m.key
)
..............
```
|
PROC SQL Update Efficiency for Large Datasets
|
[
"",
"sql",
"performance",
"sas",
"large-data",
""
] |
I have table like:
```
name marks
raja 88
ravi 88
karthik 99
praveen 99
vijay 70
```
For that table, I expect the ranks to be:
```
name marks rank
karthirk 99 1
praveen 99 2
raja 88 3
ravi 88 4
vijay 77 5
```
I get that by using the `dense_rank()` function:
```
select name,marks,dense_rank() over( order by name ) as ranks
from std_D_D order by marks desc
```
But what I need from the above table is the second lowest rank.
I mean the output I want is:
```
ravi 88 4
```
I tried to get the second lowest rank but I can't get all the columns. I tried this query:
```
select max(a.ranks) as b from (
select name,marks,dense_rank() over( order by name ) as ranks
from std_D_D ) as a where a.ranks not in
(
select max(a.ranks) as b from (
select name,marks,dense_rank() over( order by name ) as ranks
from std_D_D ) as a )
```
|
Answer to my question above:
```
SELECT * FROM
(SELECT NAME,MARKS,dense_RANK() OVER(ORDER BY RANK DESC,name) AS TEMP_ORDER FROM
(SELECT *,DENSE_RANK() OVER(ORDER BY MARKS DESC,NAME) AS RANK FROM std_D_D) A
) B WHERE TEMP_ORDER=2;
```
|
You could reverse the ranking order and select the second one:
```
SELECT name, marks
FROM (SELECT name, marks, DENSE_RANK() OVER (ORDER BY name DESC) AS ranks
FROM std_D_D) t
WHERE ranks = 2
```
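The logic of the accepted answer can be checked in plain Python (a sketch using the question's rows; ties are broken by name, as in the `ORDER BY MARKS DESC, NAME` ranking):

```python
# Rank rows by marks DESC (ties broken by name), then take the second from the bottom.
rows = [('karthik', 99), ('praveen', 99), ('raja', 88), ('ravi', 88), ('vijay', 70)]
ranked = sorted(rows, key=lambda r: (-r[1], r[0]))
second_lowest = ranked[-2]
print(second_lowest)  # ('ravi', 88)
```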
|
Second maximum value by using dense rank function
|
[
"",
"sql",
"sql-server-2005",
""
] |
I am using below CREATE TABLE statement
```
CREATE TABLE IF NOT EXISTS users (
`uuid` varchar(36) NOT NULL,
`json` varchar(MAX) NOT NULL,
PRIMARY KEY (`uuid`)
);
```
However I keep getting this error.
> ERROR 1064 (42000): You have an error in your SQL syntax; check the
> manual that corresponds to your MySQL server version for the right
> syntax to use near
> 'MAX) NOT NULL,
> PRIMARY KEY (`uuid`)
> )' at line 3
Makes no sense to me.
|
MAX is not supported for this use; it is reserved for the MAX function. Use the equivalent number instead, check this out: [Equivalent of varchar(max) in MySQL?](https://stackoverflow.com/questions/332798/equivalent-of-varcharmax-in-mysql)
|
This will work for you. MAX is a reserved keyword in MySQL, so specify an exact varchar length instead of MAX. (varchar(MAX) does work in SQL Server 2005+.)
```
CREATE TABLE IF NOT EXISTS users (
uuid varchar(36) NOT NULL,
json varchar(21808) NOT NULL,
PRIMARY KEY (uuid)
);
```
[FIDDLE](http://sqlfiddle.com/#!9/23d6d)
|
Unknown syntax error in MySQL statement
|
[
"",
"mysql",
"sql",
"mysql-error-1064",
""
] |
I am having a problem with my SQL statement. I have two tables (ogrenci and taksit). I am trying to count the number of taksits where they are zero and group them by KategoriID. KategoriID is in both the ogrenci and taksit tables, while odendi is only in the taksit table. I could not find where the error is.
My SQL is as follows:
```
$sql= mysql_query("SELECT KategoriID, ad, odendi, COUNT(odendi) FROM ogrenci INNER JOIN taksit ON ogrenci.KategoriID = taksit.KategoriID and WHERE odendi=0 GROUP BY KategoriID");
while($listele = mysql_fetch_array($sql))
{
$KategoriID = $listele['KategoriID'];
$ad = $listele["ad"];
$odendi = $listele["odendi"];
}
```
|
Simple rule: never use commas in the `from` clause. Always use explicit `join`s:
```
SELECT t.KategoriID, ad, odendi, COUNT(odendi)
FROM taksit t INNER JOIN
ogrenci o
ON o.KategoriID = t.KategoriID
WHERE odendi = 0
GROUP BY t.KategoriID;
```
You should also use table aliases and qualify all column names.
|
The JOIN was incorrect, plus the WHERE clause was incorrect. Try this:
```
$sql= mysql_query("SELECT KategoriID, ad, odendi, COUNT(odendi) FROM ogrenci INNER JOIN taksit ON ogrenci.KategoriID = taksit.KategoriID WHERE odendi=0 GROUP BY KategoriID");
```
Here is the formatted SQL for easier reading:
```
SELECT t.KategoriID,
ad,
t.odendi,
COUNT(t.odendi)
FROM ogrenci o
INNER JOIN taksit t ON o.KategoriID = t.KategoriID
WHERE t.odendi = 0
GROUP BY t.KategoriID
```
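The corrected join and aggregate can be exercised against toy data with Python's `sqlite3` (a sketch; table and column names are taken from the question, the row values are made up):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE ogrenci(KategoriID INT, ad TEXT)')
con.execute('CREATE TABLE taksit(KategoriID INT, odendi INT)')
con.execute("INSERT INTO ogrenci VALUES (1, 'x'), (2, 'y')")
con.execute('INSERT INTO taksit VALUES (1, 0), (1, 0), (2, 0), (2, 1)')
rows = con.execute('''SELECT t.KategoriID, COUNT(t.odendi)
                      FROM ogrenci o
                      INNER JOIN taksit t ON o.KategoriID = t.KategoriID
                      WHERE t.odendi = 0
                      GROUP BY t.KategoriID
                      ORDER BY t.KategoriID''').fetchall()
print(rows)  # [(1, 2), (2, 1)]
```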
|
SQL SELECT COUNT GROUP BY - error
|
[
"",
"mysql",
"sql",
"count",
""
] |
Here is the column containing my row entries for `skills`
```
C,C++
P,H,D
ASP,.net,C,C#,C++,R+
C++
```
I need to find all entries that contain `C`. So I formatted a query by using `Skills LIKE ('%'+@Skill+'%')` and this gives me all entries including `C++` when I just want to get the result of `C` alone.
Searching from the above example, I must only get `C,C++` and `ASP, .net, C, C#, C++, R+` rows. I must not get `C++` - last row in the resultset.
My requirement is that I need to get only `C` when searching for `C` and not `C++`. How do I format such a query?
I am using stored procedures to execute all the queries.
|
You can filter based on these conditions
1. if search skill is first skill in column `Skills LIKE @Skill +',%'`
2. if search skill is any where in the middle `Skills LIKE '%,'+ @Skill+',%'`
3. if search skill is at the end `Skills LIKE '%,' + @Skill`
4. if search skill is the only skill `Skills = @Skill`
**Query**
```
SELECT ...
WHERE Skills LIKE '%,'+ @Skill+',%'
OR Skills LIKE @Skill +',%'
OR Skills LIKE '%,' + @Skill
OR Skills = @Skill
```
**EDIT**
Another shorter query can be
```
SELECT ...
WHERE ',' + Skills + ',' LIKE '%,'+ @Skill+',%'
```
***Note:** You may face performance issues with such a design and query. If possible look into creating a skills table to hold all skills for a user. Please see Zohar Peled's answer on how to improve your design.*
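The shorter delimiter-wrapping query can be demonstrated with SQLite via Python's `sqlite3` (a sketch, not SQL Server; SQLite concatenates with `||` rather than `+`):

```python
import sqlite3

# Wrap both sides in commas so searching 'C' cannot match inside 'C++'.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t(Skills TEXT)')
con.executemany('INSERT INTO t VALUES (?)',
                [('C,C++',), ('P,H,D',), ('ASP,.net,C,C#,C++,R+',), ('C++',)])
rows = con.execute("SELECT Skills FROM t WHERE ',' || Skills || ',' LIKE ?",
                   ('%,C,%',)).fetchall()
print([r[0] for r in rows])  # ['C,C++', 'ASP,.net,C,C#,C++,R+']
```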
|
As long as it is stored as a delimited string you will have to use workarounds like the answers you already got.
After a quick glance, most of them will answer your question, meaning that you will be able to search for a specific skill. However, none of them provides a solution to the problem, only a workaround. It's like using a band-aid to plug a hole in a boat.
What you actually should do is normalize your database, meaning that instead of keeping the skills as a comma delimited string, you should create a skills table, with only one skill per record, and a personToSkill table that will hold a unique combination of personId and skillId. This is the correct way of handling many-to-many relationships in a relational database. Of course, you will need a unique constraint on the skill, as well as foreign keys between each related table.
|
Query to find all matching rows of a substring
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I need some help with a `sql` query which I can't get to work. I need to order these values by number then by letter.
Any suggestion how to do it?
I am working on `Sql Server 2014`, but I think it's irrelevant.
```
Cod_Turma Turma
1 11-A
2 11-F
3 10-F
4 11-G
5 11-I
6 10-E
7 12-L
8 10-J
9 7-B
10 9-B
11 7-E
12 7-D
13 12-H
```
Output should be:
```
Cod_Turma Turma
9 7-B
12 7-D
11 7-E
10 9-B
...
```
|
Possible solution:
```
SELECT * FROM TableName
ORDER BY CAST(LEFT(Turma, CHARINDEX('-', Turma) - 1) AS INT), --left part
SUBSTRING(Turma, CHARINDEX('-', turma), LEN(turma)) --right part
```
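The same two-part sort key (numeric prefix first, letter suffix second) can be sketched in Python for comparison:

```python
# Sort by the integer before '-' first, then by the letter after it.
turmas = ['11-A', '11-F', '10-F', '11-G', '11-I', '10-E', '12-L', '10-J',
          '7-B', '9-B', '7-E', '7-D', '12-H']
turmas.sort(key=lambda t: (int(t.split('-')[0]), t.split('-')[1]))
print(turmas[:4])  # ['7-B', '7-D', '7-E', '9-B']
```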
|
```
DECLARE @t table (cod_turma int, turma varchar(10));
INSERT @t values
(1,'11-A')
,(2,'11-F')
,(3,'10-F')
,(4,'11-G')
,(5,'11-I')
,(6,'10-E')
,(7,'12-L')
,(8,'10-J')
,(9,'7-B' )
,(10,'9-B')
,(11,'7-E')
,(12,'7-D')
,(13,'12-H')
SELECT * FROM @t
ORDER BY CAST(LEFT(Turma, CHARINDEX('-', Turma)-1) AS INT), SUBSTRING(turma, CHARINDEX('-', Turma), 1000)
```
Explanation: Parse Turma out into two separate values (the int and the character); cast the int part to int (so that you don't get an ordering of `1, 10, 2, 20, 3, 31`) and order by that, then order by the letter part.
I've edited this to reflect improvements suggested by @Giorgi Nakeuri's post as well as a comment by @TimSchemlter.
(Note: my initial post did not work, I tried to fire from the hip and used some incorrect syntax.)
|
SQL Query - Order by String (which contains number and chars)
|
[
"",
"sql",
"sql-server",
""
] |
I have the following structure in my Oracle database:
```
Date Allocation id
2015-01-01 Same 200
2015-01-02 Good 200
2015-01-03 Same 200
2015-01-04 Same 200
2015-01-05 Same 200
2015-01-06 Good 200
```
I would like to have a query that has to check for only the previous consecutive days and get the count where Allocation is `"Same"`.
I want to select by a date, for example `2015-01-05`.
Example output: for the date `2015-01-05` the count is `3`.
New problem: with the query from Lukas Eder the count is always `1` or `2`, but the expected result is `3`. Why?
```
Date Allocation id
2015-01-01 Same 400
2015-01-02 Good 400
2015-01-03 Same 400
2015-01-04 Same 400
2015-01-05 Same 400
2015-01-06 Good 400
```
Code from Lukas Eder
```
SELECT c
FROM (
SELECT allocation, d, count(*) OVER (PARTITION BY allocation, part ORDER BY d) AS c
FROM (
SELECT allocation, d,
d - row_number() OVER (PARTITION BY allocation ORDER BY d) AS part
FROM t
)
)
WHERE d = DATE '2015-01-05';
```
The expected output is something like this (First_Day and Last_Day are not strictly necessary):
```
id count first_day Last_Day
200 3 2015-01-03 2015-01-05
400 3 2015-01-03 2015-01-05
```
|
This query will yield the counts for each row:
```
SELECT allocation, d, count(*) OVER (PARTITION BY allocation, part ORDER BY d) AS c
FROM (
SELECT allocation, d,
d - row_number() OVER (PARTITION BY allocation ORDER BY d) AS part
FROM t
)
ORDER BY d;
```
You can then filter on it to find the counts for a given row:
```
SELECT c
FROM (
SELECT allocation, d, count(*) OVER (PARTITION BY allocation, part ORDER BY d) AS c
FROM (
SELECT allocation, d,
d - row_number() OVER (PARTITION BY allocation ORDER BY d) AS part
FROM t
)
)
WHERE d = DATE '2015-01-05';
```
### Explanation:
The derived table is used to calculate different "partitions" `part` for each date and allocation:
```
SELECT allocation, d,
d - row_number() OVER (PARTITION BY allocation ORDER BY d) AS part
FROM t
```
The result is:
```
allocation d part
--------------------------------
Same 01.01.15 31.12.14
Good 02.01.15 01.01.15
Same 03.01.15 01.01.15
Same 04.01.15 01.01.15
Same 05.01.15 01.01.15
Good 06.01.15 04.01.15
```
The concrete date produced by `part` is irrelevant. It's just some date that will be the same for each "group" of dates within an allocation. You can then count the number of identical values of `(allocation, part)` using the `count(*) over(...)` window function:
```
SELECT allocation, d, count(*) OVER (PARTITION BY allocation, part ORDER BY d) AS c
FROM (...)
ORDER BY d;
```
to produce your wanted result.
### Data
I've used the following table for the example:
```
CREATE TABLE t AS (
SELECT DATE '2015-01-01' AS d, 'Same' AS allocation FROM dual UNION ALL
SELECT DATE '2015-01-02' AS d, 'Good' AS allocation FROM dual UNION ALL
SELECT DATE '2015-01-03' AS d, 'Same' AS allocation FROM dual UNION ALL
SELECT DATE '2015-01-04' AS d, 'Same' AS allocation FROM dual UNION ALL
SELECT DATE '2015-01-05' AS d, 'Same' AS allocation FROM dual UNION ALL
SELECT DATE '2015-01-06' AS d, 'Good' AS allocation FROM dual
);
```
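The `d - row_number()` trick (a classic gaps-and-islands technique) can be illustrated in Python, with day numbers standing in for dates (a sketch of the idea, not the Oracle query itself):

```python
from itertools import groupby

# Within one allocation, consecutive days share the same (day - row_number) value,
# so they fall into the same group.
days = [(1, 'Same'), (2, 'Good'), (3, 'Same'), (4, 'Same'), (5, 'Same'), (6, 'Good')]
same = [d for d, a in days if a == 'Same']  # [1, 3, 4, 5]
groups = [list(g) for _, g in groupby(enumerate(same), key=lambda p: p[1] - p[0])]
run_with_5 = next(len(g) for g in groups if any(d == 5 for _, d in g))
print(run_with_5)  # 3, i.e. the run 3,4,5 containing 2015-01-05
```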
|
Consider the following query to solve your problem:
```
SELECT COUNT(*) AS `count` FROM test t
WHERE `date` < '2015-01-05' AND allocation = 'Same';
```
Let's assume that the given date is '2015-01-05'. The idea here is to select all dates that are less than '2015-01-05', which means its previous days. Since allocation must be 'Same', that condition is included in the WHERE clause as well.
|
Get count of consecutive days meeting a given criteria
|
[
"",
"sql",
"oracle",
""
] |
I am trying to select the friends of the current user, using a query that returns a list of friends from a table that stores 1 row per friendship.
I have a User and a Friends Table:
```
User(UserID, Username)
Friends(IdFirst, IdSecond)
```
Assuming I have the following users: `(1, 'Alex'), (2, 'Ana'), (3, 'Daniel')` and the following friendships: `(1, 2), (1,3), (2,3)`
Up until now I have been using this query:
```
SELECT * FROM User U
LEFT JOIN Friends F
ON U.IdUser = F.IdSecond
WHERE F.IdFirst = *LOGGED USER ID*
```
And it only works if I have mirrored friendships, for example: `(1, 2) (2, 1) (1, 3) (3,1)`, and I only want to have a single pair for each friendship. If I use the above query I only get the list of friends for IdFirst.
I hope I make sense, thank you!
|
How about a union? <http://sqlfiddle.com/#!9/da447/7>
```
SELECT * FROM users U
LEFT JOIN friends F
ON U.userid = F.idsecond
WHERE F.idfirst = *LOGGED USER ID*
UNION
SELECT * FROM users U
LEFT JOIN friends F
ON U.userid = F.idfirst
WHERE F.idsecond = *LOGGED USER ID*
```
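The union approach can be exercised with toy data via Python's `sqlite3` (a sketch; user id 1 plays the role of the logged-in user, and only the Friends table is used for brevity):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE Friends(IdFirst INT, IdSecond INT)')
con.executemany('INSERT INTO Friends VALUES (?, ?)', [(1, 2), (1, 3), (2, 3)])
# Friend ids appear on either side of the pair, so union both directions.
rows = con.execute('''SELECT IdSecond FROM Friends WHERE IdFirst = ?
                      UNION
                      SELECT IdFirst FROM Friends WHERE IdSecond = ?''',
                   (1, 1)).fetchall()
print(sorted(r[0] for r in rows))  # [2, 3]
```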
|
Why not simply this? (Unless you need fields from Users, which you have not indicated.)
```
SELECT idFirst
,idSecond
FROM Friends
WHERE IdFirst = *LOGGED USER ID*
OR IdSecond =*LOGGED USER ID*
```
This means you don't have to have mirrored friendships--and in fact you should not.
EDIT: if you do want the user for the friends you can do it without a union as:
```
SELECT *
FROM users U
WHERE UserID <> *LOGGED USER ID*
AND EXISTS(
SELECT 1
FROM Friends
WHERE (IdFirst = *LOGGED USER ID* AND IdSecond = UserID)
OR (IdSecond =*LOGGED USER ID* AND IdFirst = UserID)
)
```
I'm not sure it's better than @BarbaraLaird's, though the execution plan looks simpler here: <http://sqlfiddle.com/#!9/da447/13>
|
SQL Select friends query
|
[
"",
"sql",
"social-networking",
""
] |
I have a column in my table that contains a text record of some large logging data. These fields will have a 9 digit number (0-9) that starts with "3".
I simply want to **select** *only the pattern from this field*, not the whole field. And to make things complicated, this pattern can be in a field *more than once*.
I think the `REGEX` I need is `3{8}[0-9]`. Is this right?
Is there a MySQL-only way to do this? I'd rather not have to write a PHP script to extract this data.
**EDIT: It seems this is not possible with REGEX - can it be done with any of the other MySQL String functions?**
|
I don't think MySQL or SQL in general is the right tool for text mining when dealing with non-normalized data.
Just
```
$ mysqldump mydb mytable > dump.sql
```
your data into a file and then search for your pattern using
```
$ grep -o '3[0-9]\{8\}' dump.sql > numbers.txt
```
* `-o` tells grep to only display matched data.
* `3` and `[0-9]` are patterns to match `3` and `any number between 0-9`
* `\{8\}` is the escaped form of `{8}` telling grep that the previous pattern should match exactly 8 times
Final command from the discussion that also expects a non-numeric value after the 9 digits:
```
$ grep -Po '3[0-9]{8}(?=[^0-9])' dump.sql > numbers.txt
```
* uses perl regexp so no escaping is needed
* (?=...) is a lookahead that matches, but is not included in the result
|
> These fields will have a 9 digit number (0-9) that starts with "3".
Here's 3 cases to show the regexp and that it fulfills that specification:
```
+----------------------------------+----------------------------------+---------------------------------------+
| '123456789' REGEXP '^3[0-9]{8}$' | '323456789' REGEXP '^3[0-9]{8}$' | 'junk-323456789' REGEXP '^3[0-9]{8}$' |
+----------------------------------+----------------------------------+---------------------------------------+
| 0 | 1 | 0 |
+----------------------------------+----------------------------------+---------------------------------------+
```
|
Select a numbers only pattern from a text field in MySQL
|
[
"",
"mysql",
"sql",
"regex",
""
] |
Let's say I have a table with 1 column like this:
```
Col A
1
2
3
4
```
If I `SUM` it, then I will get this:
```
Col A
10
```
My question is: how do I multiply Col A so I get the following?
```
Col A
24
```
|
Using a combination of `ROUND`, `EXP`, `SUM` and `LOG`
```
SELECT ROUND(EXP(SUM(LOG([Col A]))),1)
FROM yourtable
```
SQL Fiddle: <http://sqlfiddle.com/#!3/d43c8/2/0>
Explanation
`LOG` returns the natural logarithm of Col A, e.g. `LOG([Col A])`, which returns
```
0
0.6931471805599453
1.0986122886681098
1.3862943611198906
```
Then you use `SUM` to add them all together, `SUM(LOG([Col A]))`, which returns
```
3.1780538303479453
```
Then the exponential of that sum is calculated, `EXP(3.1780538303479453)`, which returns
```
23.999999999999993
```
Then this is finally rounded with `ROUND`, `ROUND(23.999999999999993, 1)`, to get `24`
---
## Extra Answers
Simple resolution to:
> An invalid floating point operation occurred.
When you have a `0` in your data
```
SELECT ROUND(EXP(SUM(LOG([Col A]))),1)
FROM yourtable
WHERE [Col A] != 0
```
If you only have `0` Then the above would give a result of `NULL`.
> When you have negative numbers in your data set.
```
SELECT (ROUND(exp(SUM(log(CASE WHEN[Col A]<0 THEN [Col A]*-1 ELSE [Col A] END))),1)) *
(CASE (SUM(CASE WHEN [Col A] < 0 THEN 1 ELSE 0 END) %2) WHEN 1 THEN -1 WHEN 0 THEN 1 END) AS [Col A Multi]
FROM yourtable
```
Example Input:
```
1
2
3
-4
```
Output:
```
Col A Multi
-24
```
SQL Fiddle: <http://sqlfiddle.com/#!3/01ddc/3/0>
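The `EXP(SUM(LOG(x)))` identity that both answers rely on is easy to verify in plain Python (a sketch; `math.prod` would give the same product directly):

```python
import math

# exp(ln(1) + ln(2) + ln(3) + ln(4)) == 1 * 2 * 3 * 4
values = [1, 2, 3, 4]
product = math.exp(sum(math.log(v) for v in values))
print(round(product, 1))  # 24.0
```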
|
This is a complicated matter. If you want to take signs and handle zero, the expression is a bit complicated:
```
select (case when sum(case when a = 0 then 1 else 0 end) > 0
then 0
else exp(sum(log(abs(a)))) *
(case when sum(case when a < 0 then 1 else 0 end) % 2 = 1 then -1 else 1 end)
end) as ProductA
from table t;
```
Note: you do not specify a database. In some databases you would use `LN()` rather than `LOG()`. Also the function for the modulo operator (to handle negative values) also differs by database.
|
How to Multiply all values within a column with SQL like SUM()
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sum",
""
] |
I would like to UNION 2 queries but am facing an error in Oracle.
```
select count(*) as faultCount,
COMP_IDENTIFIER
from CORDYS_NCB_LOG
where AUDIT_CONTEXT='FAULT'
union
select count(*) as responseCount,
COMP_IDENTIFIER
from CORDYS_NCB_LOG
where AUDIT_CONTEXT='RESPONSE'
group by COMP_IDENTIFIER
order by responseCount;
```
The two queries run perfectly individually, but when using UNION, it says ORA-00904: "RESPONSECOUNT": invalid identifier
|
### The error you've run into
In Oracle, it's best to always name each column in each `UNION` subquery the same way. In your case, the following should work:
```
select count(*) as theCount,
COMP_IDENTIFIER
from CORDYS_NCB_LOG
where AUDIT_CONTEXT='FAULT'
group by COMP_IDENTIFIER -- don't forget this
union
select count(*) as theCount,
COMP_IDENTIFIER
from CORDYS_NCB_LOG
where AUDIT_CONTEXT='RESPONSE'
group by COMP_IDENTIFIER
order by theCount;
```
See also:
[Curious issue with Oracle UNION and ORDER BY](https://stackoverflow.com/q/25387951/521799)
A good workaround is, of course, to use indexed column references as suggested by [a\_horse\_with\_no\_name](https://stackoverflow.com/a/30712380/521799)
### The query you really wanted
From your comments, however, I suspect you wanted to write an entirely different query, namely:
```
select count(case AUDIT_CONTEXT when 'FAULT' then 1 end) as faultCount,
count(case AUDIT_CONTEXT when 'RESPONSE' then 1 end) as responseCount,
COMP_IDENTIFIER
from CORDYS_NCB_LOG
where AUDIT_CONTEXT in ('FAULT', 'RESPONSE')
group by COMP_IDENTIFIER
order by responseCount;
```
|
The column names of a union are determined by the **first** query. So your first column is actually named `FAULTCOUNT`.
But the easiest way to sort the result of a union is to use the column index:
```
select ...
union
select ...
order by 1;
```
You most probably also want to use `UNION ALL` which avoids removing duplicates between the two queries and is faster than a plain `UNION`
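The leftmost-SELECT naming rule can be seen directly with SQLite via Python's `sqlite3` (a sketch; SQLite follows the same convention as Oracle here, taking compound-query column names from the first SELECT):

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.execute('SELECT 1 AS faultCount UNION SELECT 2 AS responseCount')
# The result set has one column, named after the first query's alias.
print([d[0] for d in cur.description])  # ['faultCount']
```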
|
group by and union in oracle
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
I have the table
```
+---------------------+
| ID | Code | Amount -|
+---------------------+
| 1 | 101 | 1.2 |
| 1 | 102 | 1.3 |
| 1 | 103 | 1.3 |
| 1 | 104 | 1.4 |
| 1 | 105 | 1.2 |
| 2 | 101 | 1.5 |
| 2 | 102 | 1.4 |
| 2 | 103 | 1.3 |
| 2 | 104 | 1.1 |
| 2 | 105 | 1.0 |
+---------------------+
```
What I am trying to do is change the amount column for each ID which is not the code 101 to the value in amount from code 101
So my output should be like this.
```
+---------------------+
| ID | Code | Amount -|
+---------------------+
| 1 | 101 | 1.2 |
| 1 | 102 | 1.2 |
| 1 | 103 | 1.2 |
| 1 | 104 | 1.2 |
| 1 | 105 | 1.2 |
| 2 | 101 | 1.5 |
| 2 | 102 | 1.5 |
| 2 | 103 | 1.5 |
| 2 | 104 | 1.5 |
| 2 | 105 | 1.5 |
+---------------------+
```
This is clearly a simplified table to show what I need as the row count today is over 100,000 but will change everyday.
I have tried to use a cursor but it is very slow. Is there anyway to do this?
Thanks
|
```
update t
set t.amount = t2.amount
from your_table t
join
(
select id, min(amount) as amount
from your_table
where code = 101
group by id
) t2 on t.id = t2.id
where t.code <> 101
```
|
This would do the trick:
```
DECLARE @t table(ID int, Code int, Amount decimal(6,1))
INSERT @t values
(1,101,1.2),(1,102,1.3),
(1,103,1.3),(1,104,1.4),
(1,105,1.2),(2,101,1.5),
(2,102,1.4),(2,103,1.3),
(2,104,1.1),(2,105,1.0)
;WITH CTE AS
(
SELECT
min(CASE WHEN Code = 101 THEN amount end)
over (partition by ID) newAmount,
Code,
Amount
FROM @t
)
UPDATE CTE
SET Amount = newAmount
WHERE
code <> 101
AND newAmount is not NULL
SELECT * FROM @t
```
Result:
```
ID Code Amount
1 101 1.2
1 102 1.2
1 103 1.2
1 104 1.2
1 105 1.2
2 101 1.5
2 102 1.5
2 103 1.5
2 104 1.5
2 105 1.5
```
|
Change column values based on another column in same table
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
First of all, sorry if my English is bad; it's not my native language.
I have two tables (A and B) with the following columns:
**A:
PRENUMERO (ID), DATA, ARMAZEM, TIPO**
and
**B:
Autoreg (ID), PRENUMERO, PRODUTO**
I want a result like:
*CountA, CountB, CountC, DATE*
CountA is when PRODUTO is equal to 1.
CountB is when PRODUTO is > than 1.
CountC is when PRODUTO has both 1 and > 1 in the same PRENUMERO.
I joined the tables like this:
> ```
> SELECT DISTINCT
> b.[Autoreg],
> a.[PRENUMERO],
> b.[PRENUMERO],
> b.[PRODUTO]
> FROM
> b.[VendasPOS_Linhas]
> JOIN a
> ON b.[PRENUMERO]=a.[PRENUMERO]
> WHERE
> a.[DATA] > '2015-06-01 00:00:00.000' AND
> a.[Armazem] = '111'
> ```
And this is the result:

So if I do [PRENUMERO] = '1' (for CountA) the result should be, for example, 1000; if I do [PRENUMERO] > '1' (for CountB) the result should be, for example, 1100; and CountC should be the rest (667). But I do not know how to do it.
As you can see in lines 11 and 12, they have the same PRENUMERO, but one PRODUTO is 1 and the other is 10900... so this is for CountC. If the PRENUMERO is the same (several rows with exactly the same PRENUMERO) but the PRODUTO is 1 (CountA, like lines 8 and 9) or >1 (CountB), this counts only as 1 (DISTINCT here?)
(IF PRODUTO = 1 THEN CountA; IF PRODUTO >1 THEN CountB)
EDIT:
So I did this:
> ```
> SELECT
> (SELECT
> COUNT(DISTINCT [B].[PRENUMERO])
> FROM
> [B]
> JOIN [A]
> ON [B].[PRENUMERO] = [A].[PRENUMERO]
> WHERE
> [A].[DATA] > '2015-06-01 00:00:00.000' AND
> [A].[Armazem] = '454' AND
> [B].[PRODUTO] > '1') AS X,
> (SELECT
> COUNT(DISTINCT [B].[PRENUMERO])
> FROM
> [B]
> JOIN [A]
> ON [B].[PRENUMERO] = [A].[PRENUMERO]
> WHERE
> [A].[DATA] > '2015-06-01 00:00:00.000' AND
> [A].[Armazem] = '454' AND
> [B].[PRODUTO] = '1') AS Y,
> (SELECT COUNT([A].[PRENUMERO]) from A where DATA> '2015-06-01 00:00:00.000' and Armazem = '454') AS Total
> ```
And this is the result:

But I think this is wrong because it's counting the CountC as well.
For example:
If the same PRENUMERO has both 1 and >1, it belongs to CountC. What can I do to not count those? (How do I check that a PRENUMERO doesn't have both 1 and >1 in different rows?)
|
Solution
```
> SELECT COUNT(DISTINCT CASE WHEN MinProduto = 1 AND MaxProduto = 1
> THEN PRENUMERO END) AS QtdCombustivel
> ,COUNT(DISTINCT CASE WHEN MinProduto <> 1 AND MaxProduto <> 1 THEN PRENUMERO END) AS QtdLoja
> ,COUNT(DISTINCT CASE WHEN MinProduto = 1 and MaxProduto <> 1 THEN PRENUMERO END) AS QtdMisto
> ,COUNT(DISTINCT PRENUMERO) AS Total FROM (
> SELECT [VendasPOS_Linhas].[PRENUMERO]
> ,MIN([VendasPOS_Linhas].[PRODUTO]) AS MinProduto
> ,MAX([VendasPOS_Linhas].[PRODUTO]) AS MaxProduto
> FROM [VendasPOS_Linhas]
> INNER JOIN [VendasPOS_Cabecalhos]
> ON [VendasPOS_Linhas].[PRENUMERO] = [VendasPOS_Cabecalhos].[PRENUMERO]
> WHERE UPPER([VendasPOS_Cabecalhos].[FACT_VD]) IN ('T','F','C')
> AND [VendasPOS_Cabecalhos].[DATA] > '2015-06-11 00:00:00.000'
> AND [VendasPOS_Cabecalhos].[Armazem] = '404'
> GROUP BY [VendasPOS_Linhas].[PRENUMERO] , [VendasPOS_Cabecalhos].[DATA] )Res
```
|
Well, if I understand, you are trying to do the following:

* COUNT A -> *only products = 1*
* COUNT B -> *only products > 1*
* COUNT C -> *products = 1 AND PRENUMERO > 1*

```
SELECT SUM(IF(producto = 1, 1, 0)) AS COUNT_A
      ,SUM(IF(producto > 1, 1, 0)) AS COUNT_B
      ,SUM(IF(producto = 1 AND PRENUMERO > 1, 1, 0)) AS COUNT_C
      ,a.DATA
FROM b
JOIN a USING(PRENUMERO)
WHERE a.DATA > '2015-06-01 00:00:00.000'
  AND a.Armazem = '111' AND b.producto > 0
GROUP BY a.DATA;
```

You are sharing information between count A and count C; to discard this you could use the alias from each table to validate, for example:

```
a.prenumero > 1 AND b.prenumero = 1
```

but if this is your inner join field, it will always have the same value for both tables.

Regards.
|
SQL Several Count
|
[
"",
"mysql",
"sql",
"if-statement",
"count",
"conditional-statements",
""
] |
I am stumped by what seems like a simple problem.
We have the following Table.
```
ID--- ---Income--- ---Years Offset--- ---Income By Offset---
1 1000 1 NULL
2 500 1 NULL
3 400 1 NULL
4 0 1 NULL
5 2000 2 NULL
6 0 2 NULL
7 400 2 NULL
```
What I would love to figure out is how to sum the Income column by the "Years Offset" column and place the result in the first row of the "Income By Offset" column. What would be awesome is if the Income By Offset column had values of 1900 in row 1 and 2400 in row 5, with the rest of the rows untouched.
I know that this sounds like a simple problem. I have tried window functions, `ROW_NUMBER()`, and self-joining tables; a piece of it is solved with each, but I am having trouble putting it all together.
Thanks in advance,
George
|
## My Version of Your Table
```
DECLARE @yourTable TABLE (ID INT,Income INT,[Years Offset] INT,[Income By Offset] INT NULL);
INSERT INTO @yourTable
VALUES (1,1000,1,NULL),
(2,500,1,NULL),
(3,400,1,NULL),
(4,0,1,NULL),
(5,2000,2,NULL),
(6,0,2,NULL),
(7,400,2,NULL);
```
## Actual Query
```
SELECT ID,
Income,
[Years Offset],
CASE
WHEN ID = MIN(ID) OVER (PARTITION BY [Years Offset])
THEN SUM(Income) OVER (PARTITION BY [Years Offset])
ELSE [Income By Offset]
END AS [Income By Offset]
FROM @yourTable
```
## Results
```
ID Income Years Offset Income By Offset
----------- ----------- ------------ ----------------
1 1000 1 1900
2 500 1 NULL
3 400 1 NULL
4 0 1 NULL
5 2000 2 2400
6 0 2 NULL
7 400 2 NULL
```
|
This should return the required result set:
```
SELECT ID, Income, [Years Offset],
CASE WHEN ROW_NUMBER() OVER (PARTITION By [Years Offset]
ORDER BY ID) = 1
THEN SUM(Income) OVER (PARTITION BY [Years Offset])
ELSE NULL
END AS [Income By Offset]
FROM mytable
```
Windowed version of `SUM` calculates the `Income` per `[Years Offset]`. `ROW_NUMBER()` is used to return this value only for the first row of each `[Years Offset]` group.
[**Demo here**](http://sqlfiddle.com/#!6/5e1b4/1)
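The same pattern runs unchanged on other engines that support window functions; a minimal sketch in Python with SQLite (3.25 or newer), using shortened column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, income INTEGER, yrs INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 1000, 1), (2, 500, 1), (3, 400, 1), (4, 0, 1),
                 (5, 2000, 2), (6, 0, 2), (7, 400, 2)])

# Windowed SUM gives the per-group total; ROW_NUMBER limits it to the
# first row of each [Years Offset] group, all other rows stay NULL.
rows = con.execute("""
    SELECT id,
           CASE WHEN ROW_NUMBER() OVER (PARTITION BY yrs ORDER BY id) = 1
                THEN SUM(income) OVER (PARTITION BY yrs)
           END AS income_by_offset
    FROM t
    ORDER BY id
""").fetchall()
print(rows)
```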
|
SQL Server Multiple Groupings
|
[
"",
"sql",
"sql-server",
"grouping",
"window-functions",
""
] |
I want to work out an annual figure as a proportion of the year based on the date - so if 400 is the annual figure, I want to divide this by 365 and then multiply the result by however many days correspond to today's date (e.g. (400/365)\*160).
Is there a way of doing this with a SQL server select statement to avoid manually entering the 160 figure?
The 400 figure is coming from a standard field named HES.
Thanks.
|
You can use `datepart(dayofyear, getdate())` - will return a number representing today's day of the year. See [MSDN DatePart](https://msdn.microsoft.com/en-us/library/ms174420.aspx)
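For illustration, the same proration arithmetic sketched in Python (the date is hardcoded to reproduce the 160-day example from the question; live you would use `date.today()`):

```python
from datetime import date

annual = 400
d = date(2015, 6, 9)                 # 160th day of a non-leap year
day_of_year = d.timetuple().tm_yday  # equivalent of datepart(dayofyear, ...)
prorated = annual / 365 * day_of_year
print(day_of_year, round(prorated, 2))
```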
|
Since this is sql server and the other answer is using mysql I will post the sql server version.
```
select DATEPART(dayofyear, getdate())
```
|
SQL find out how many days into year date is
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I have the following data in a database table column named timeSchedule
```
00100110
00010100
00110000
00110011
```
Boolean addition would result in
```
00110111
```
Is there a way to do this in sql? Something like select sumboolean(timeSchedule) from myTable
Someone asked for DDL+DML.. here is an example:
```
CREATE TABLE [dbo].[myTable](
[musPracticeID] [int] IDENTITY(1,1) NOT NULL,
[chosenDate] [datetime] NULL,
[timeSchedule] [nvarchar](50) NULL CONSTRAINT [DF_myTable_schedule] DEFAULT (N'0000000000000000')
)
INSERT INTO myTable (chosenDate, timeSchedule)
VALUES ('06/07/2015', '01000100');
```
|
OK, now that we have the DDL (unfortunately without the DML, only one row), we can provide a solution :-)
Firstly! I highly recommend NOT using the solution above; **THERE IS NO NEED for loops**. Even if you do not use a fixed data length, we know the max length (50).
Secondly! If you are going to parse text, then in most cases (this one included) you should use SQLCLR rather than looping and parsing with T-SQL.
Third :-) here is a simple example of a simple solution. I only used the first 9 characters... you can continue to 50... you can use a dynamic query to build this if you don't want to write it manually. (There are other solutions as well; I recommend checking the execution plan and IO used in order to select the best solution for you):
```
CREATE TABLE [dbo].[myTable](
[musPracticeID] [int] IDENTITY(1,1) NOT NULL,
[chosenDate] [datetime] NULL,
[timeSchedule] [nvarchar](50) NULL CONSTRAINT [DF_myTable_schedule] DEFAULT (N'0000000000000000')
)
GO
truncate table [myTable]
INSERT INTO myTable (chosenDate, timeSchedule)
VALUES
('06/07/2015', '00100110'),
('06/07/2015', '00010100'),
('06/07/2015', '00110000'),
('06/07/2015', '00110011');
GO
select * from myTable
GO
;With MyCTE as (
select
SUBSTRING([timeSchedule],1,1) as c1,
SUBSTRING([timeSchedule],2,1) as c2,
SUBSTRING([timeSchedule],3,1) as c3,
SUBSTRING([timeSchedule],4,1) as c4,
SUBSTRING([timeSchedule],5,1) as c5,
SUBSTRING([timeSchedule],6,1) as c6,
SUBSTRING([timeSchedule],7,1) as c7,
SUBSTRING([timeSchedule],8,1) as c8,
SUBSTRING([timeSchedule],9,1) as c9
from myTable
)
select
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c1)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c2)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c3)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c4)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c5)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c6)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c7)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c8)) > 0 THEN 1 ELSE 0 END)+
CONVERT( NVARCHAR(50),CASE WHEN SUM(CONVERT(INT,c9)) > 0 THEN 1 ELSE 0 END)
from MyCTE
```
|
The first thing you need is a way to take the string and convert it into a number. So, you need to create a new scalar function (borrowed from [here](http://improve.dk/converting-between-base-2-10-and-16-in-t-sql/)).
```
CREATE FUNCTION [dbo].[BinaryToDecimal]
(
@Input varchar(255)
)
RETURNS bigint
AS
BEGIN
DECLARE @Cnt tinyint = 1
DECLARE @Len tinyint = LEN(@Input)
DECLARE @Output bigint = CAST(SUBSTRING(@Input, @Len, 1) AS bigint)
WHILE(@Cnt < @Len) BEGIN
SET @Output = @Output + POWER(CAST(SUBSTRING(@Input, @Len - @Cnt, 1) * 2 AS bigint), @Cnt)
SET @Cnt = @Cnt + 1
END
RETURN @Output
END
```
Then you can simply use:
```
SUM([dbo].[BinaryToDecimal](timeSchedule))
```
Then wrap that in another function to convert it back to a string representation. [This](https://stackoverflow.com/questions/127116/sql-server-convert-integer-to-binary-string) is a good example.
By the way, storing binary as a string is almost always the wrong approach.
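For reference, the "boolean addition" the question asks about is simply a bitwise OR fold over the bit strings; a minimal Python sketch reproducing the question's expected result:

```python
rows = ["00100110", "00010100", "00110000", "00110011"]

width = len(rows[0])
acc = 0
for s in rows:
    acc |= int(s, 2)   # parse base-2, then OR into the accumulator

result = format(acc, "0{}b".format(width))  # back to a zero-padded bit string
print(result)
```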
|
How to perform boolean addition on an SQL select
|
[
"",
"sql",
"sql-server",
"select",
"sum",
"boolean",
""
] |
I have a employee table
```
empid empname status
1 raj active
2 ravi active
3 ramu active
4 dan active
5 sam inactive
```
I have another table called facilities
```
empid timestamp
1 2014-12-28
1 2015-05-05
1 2015-06-05
2 2015-05-03
2 2015-06-04
3 2015-02-01
```
I want my result like
```
empid empname status lastusedts
1 raj active 2015-06-05
2 ravi active 2015-06-04
3 ramu active 2015-02-01
4 dan active null
```
So i have to join my employee table and facilities table and find when the employee has last used the facilities by getting the max of time stamp and for employees who did not use it the timestamp value should be null and only active employees are to be fetched.
Please help me in writing this query in DB2.
|
Do a `LEFT JOIN` with a `GROUP BY` to find the `MAX`(timestamp):
```
select e.empid, e.empname, e.status, max(timestamp) as lastusedts
from employee e
left join facilities f on e.empid = f.empid
where e.status = 'active'
group by e.empid, e.empname, e.status
```
Alternatively, a correlated sub-select for the max-timestamp:
```
select e.empid, e.empname, e.status, (select max(timestamp) from facilities f
where e.empid = f.empid) as lastusedts
from employee e
where e.status = 'active'
```
|
```
SELECT e.empid, e.empname, e.status, MAX(f.timestamp) AS lastusedts
FROM employee e LEFT OUTER JOIN facilities f ON e.empid = f.empid
WHERE e.status = 'active' GROUP BY e.empid, e.empname, e.status
```
|
join 2 tables and get some columns from 1st table and max timestamp value from second table
|
[
"",
"sql",
"join",
"db2",
"max",
"left-join",
""
] |
Can anyone give me a solution for my problem?
I have a string array of max length. I want to compare all of my string array elements with a single SQL query. How can I do that?
```
string[] new = searchtext;
select Qid from questions where qdescriptions like string[0],string[1],string[2]
```
The string array length is not fixed, it's dynamic.
Ex: my search string is "admin login error"
Then I split that into
```
admin
login
error
```
as three parts. My expected result should contain all three of these strings in the database,
Like this
```
Admin post this;
password change for login;
the error database;
```
Hope you understand. The results should contain all my search strings in a single search query.
C# code:
```
public void searchdetails(string[] searchwords) {
SqlConnection con = new SqlConnection();
con.ConnectionString = connection; con.Open();
string[] soldesc = searchwords;
int i = 0;
if (soldesc.Length == 1) {
string query1 = "select Qid from Questions where Qdescription like '% " + soldesc[i] + " %'";
}
SqlCommand cmds = new SqlCommand(query1, con); cmds.ExecuteNonQuery();
```
|
You can do this by dynamically creating your sql query:
```
string[] new = searchtext;
String query = "select Qid from questions";
```
Write a for loop in your application that loops through your search array:
Pseudo code incoming:
```
For(String searchstring in new){
if(new.indexof(searchstring) === 0){
query += " where qdescriptions like " + searchstring;
}
else{
//depending on what you want to do here use OR or AND
query += " or qdescriptions like " + searchstring;
}
}
result = query.execute();
```
Note: this is pseudocode; since you didn't say what programming language etc. you are using, I can't tell you what the actual syntax for the loop will look like, or how to protect your query from SQL injection.
Your C# code should look something like this:
```
public void searchdetails(string[] searchwords) {
    SqlConnection con = new SqlConnection();
    con.ConnectionString = connection;
    con.Open();
    string[] soldesc = searchwords;
    string query1 = "select Qid from Questions";
    for (int i = 0; i < soldesc.Length; i++) {
        if (i == 0) {
            query1 += " where Qdescription like '%" + soldesc[i] + "%'";
        }
        else {
            query1 += " AND Qdescription like '%" + soldesc[i] + "%'";
        }
    }
    SqlCommand cmds = new SqlCommand(query1, con);
    cmds.ExecuteNonQuery();
}
```
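As the note above says, concatenating user input into the query text is open to SQL injection. The same dynamic-WHERE idea can be built with placeholders instead; a hedged sketch in Python with SQLite (table and column names follow the question, the helper name is made up):

```python
import sqlite3

def search_questions(con, words):
    # One LIKE predicate per word, ANDed together; the values travel
    # as parameters rather than inside the SQL string.
    sql = "SELECT qid FROM questions"
    if words:
        sql += " WHERE " + " AND ".join(["qdescription LIKE ?"] * len(words))
    return con.execute(sql, ["%{}%".format(w) for w in words]).fetchall()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE questions (qid INT, qdescription TEXT)")
con.executemany("INSERT INTO questions VALUES (?, ?)",
                [(1, "admin login error"), (2, "admin post"), (3, "login error")])
rows = search_questions(con, ["admin", "login", "error"])
print(rows)
```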
|
Try this
```
declare @searchtext nvarchar(max) = 'abc,def,pqr'
```
create a function
```
CREATE FUNCTION [dbo].[fn_Split](@text varchar(8000), @delimiter varchar(20))
RETURNS @Strings TABLE
(
position int IDENTITY PRIMARY KEY,
value varchar(8000)
)
AS
BEGIN
DECLARE @index int
SET @index = -1
WHILE (LEN(@text) > 0)
BEGIN
SET @index = CHARINDEX(@delimiter , @text)
IF (@index = 0) AND (LEN(@text) > 0)
BEGIN
INSERT INTO @Strings VALUES (@text)
BREAK
END
IF (@index > 1)
BEGIN
INSERT INTO @Strings VALUES (LEFT(@text, @index - 1))
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
ELSE
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
RETURN
END
```
Query
```
select * from yourtable y inner join (select value from
dbo.fn_Split(@searchtext,',')) as split on y.qdescriptions like '%' + split.value + '%'
```
|
Dynamically add like operators in where clause
|
[
"",
"sql",
"sql-server",
"where-clause",
"sql-like",
""
] |
I just tried to execute the query below. It's giving me this
error:
```
Msg 208, Level 16, State 3, Line 1
Invalid object name 'dbo.f_getPeopleTabRowCounts'.
SELECT *
FROM dbo.f_getPeopleTabRowCounts(7424,'YYYYYYYYYYYYY','abcd','Y');
```
Can anyone help me, please?
> [How do I test a Table-Valued Function in SQL Server Management Studio?](https://stackoverflow.com/questions/1572078/how-do-i-test-a-table-valued-function-in-sql-server-management-studio)
Thanks
Sitansu
Here is the function:
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[f_getPeopleTabRowCounts] (@PeopleRSN INT, @TabMask VARCHAR(10), @UserId VARCHAR (128), @enableRLS VARCHAR (1))
RETURNS VARCHAR(2000)
AS
BEGIN
```
|
Your function isn't returning a table, so it cannot be a table-valued function; it looks much more like a scalar-valued function.
[Technet - Table Valued Functions](https://technet.microsoft.com/en-us/library/ms191165%28v=sql.105%29.aspx)
To test a Scalar Valued Function you should use
```
SELECT dbo.f_getPeopleTabRowCounts(7424,'YYYYYYYYYYYYY','abcd','Y');
```
|
You can also declare the variables at the top and then use `SET` to assign them values.
Once you're done testing, comment out the DECLARE-through-SET part of the SQL script. It's always good to leave those in for testing purposes.
OR
as the comment before me said: hardcode the values for each param, test, and then replace them back with the params.
|
How do i test the table value function?
|
[
"",
"sql",
"database",
"ssms",
"mssql-jdbc",
""
] |
I'm trying to make a function that checks if SYSDATE is between two dates on the HH24:MI part. If this is true it needs to return 1; if not, return 0.
I tried this one, but without success: [Check if current date is between two dates Oracle SQL](https://stackoverflow.com/questions/23398632/check-if-current-date-is-between-two-dates-oracle-sql)
Here is the code I used:
```
create or replace FUNCTION WinkelOpen(winkelID Number)
RETURN NUMBER
IS
CURSOR c_tijden_t(v_temp_winkelID IN NUMBER, v_temp_dag IN VARCHAR2) IS
SELECT * FROM Openingstijd
WHERE winkel_id = v_temp_winkelID
AND dag = v_temp_dag;
TYPE a_array_days IS VARRAY(7) OF VARCHAR2(2);
v_dagen a_array_days := a_array_days('ma', 'di', 'wo', 'do', 'vr', 'za', 'zo');
v_temp_suc NUMBER;
v_isClosed Number(1,0) := 0;
BEGIN
FOR i IN c_tijden_t(winkelID, v_dagen(to_char (SYSDATE, 'D')-1)) LOOP
select * INTO v_temp_suc from dual
WHERE trunc(sysdate)
BETWEEN TO_DATE(i.open, 'HH24:mi')
AND TO_DATE(i.gesloten, 'HH24:mi');
--if v_temp_suc is 0 then keep looping
--else v_isClosed = 1 break loop
END LOOP;
RETURN v_isClosed;
END WinkelOpen;
```
|
Using TRUNC on a date sets it to 00:00 on that day. I assume what you're after here is "Check if the time, right now, is between X and Y". Like Mick said, the following should be good:
```
SELECT count(*) INTO v_temp_suc FROM dual
WHERE to_char(sysdate, 'HH24:MI')
BETWEEN i.open AND i.gesloten;
```
But you also need to beware that you are potentially looping through multiple results in the cursor. You may get any number of hits, but it'll only return the result of the last one. You've commented out that pseudocode part of your cursor loop, but don't forget to add
```
IF v_temp_suc != 0 THEN
EXIT;
END IF;
```
Also - depending on what type of input you're using, such as user input, you might find that they sometimes enter the later time first and the earlier time second. So heads-up on that if you're experiencing weird results.
Meaning: IF(2 between 1 and 3) gives TRUE, but IF(2 between 3 and 1) gives FALSE.
|
Assuming the data in `i.open` and `i.gesloten` is in format `HH24:MI` (and hours before `10` are zero-padded), you could use this query within your procedure:
```
SELECT count(*) INTO v_temp_suc FROM dual
WHERE to_char(sysdate, 'HH24:MI')
BETWEEN i.open AND i.gesloten;
```
The query will either return `0` or `1`, depending on whether the current time (of the database) is within the interval.
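The reason this works is that zero-padded `HH24:MI` strings sort lexicographically in time order; a small Python illustration of the idea (the function name is made up):

```python
def is_open(now_hhmm, open_hhmm, closed_hhmm):
    # Zero-padded "HH:MM" strings compare correctly as plain strings,
    # which is what TO_CHAR(sysdate, 'HH24:MI') BETWEEN ... relies on.
    return open_hhmm <= now_hhmm <= closed_hhmm

print(is_open("09:30", "08:00", "17:00"))  # inside opening hours
print(is_open("07:59", "08:00", "17:00"))  # just before opening
```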
|
PL/SQL Check if SYSDATE is between two DATETIMES "HH24:mi"
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
In the following two `SELECT` statements, I'd like to compare the outputs from these two to get the `IDs` of the amounts don't match.
The `GROUP BY` and `SUM`s in the first statement seem to be complicating things:
```
SELECT CreditAHID,
       SUM(Debit) as CCDebit,
       SUM(Credit) as CCCredit
FROM FS_2015_June.dbo.CCs
GROUP BY CreditAHID

SELECT ID as CreditAHID,
       JuneDebit as AHDebit,
       JuneCredit as AHCredit
FROM FS_2015_2016.dbo.AHs
```
Is there any way to combine them to something like this?
```
SELECT ID ... WHERE (CCDebit != AHDebit) OR (CCCredit != AHCredit)
```
|
Try this query below:
```
SELECT
ResultSet1.*
, ResultSet2.*
FROM (
SELECT CreditAHID
,SUM(Debit) AS CCDebit
,SUM(Credit) AS CCCredit
FROM FS_2015_June.dbo.CCs
GROUP BY CreditAHID
) ResultSet1
INNER JOIN (
SELECT ID AS CreditAHID
,JuneDebit AS AHDebit
,JuneCredit AS AHCredit
FROM FS_2015_2016.dbo.AHs
) ResultSet2 ON ResultSet1.CreditAHID = ResultSet2.CreditAHID
WHERE ResultSet1.CCDebit <> ResultSet2.AHDebit
AND ResultSet1.CCCredit <> ResultSet2.AHCredit
```
This query will return the combination of both results where a CreditAHID matches in both results and where the `CCDebit` and `CCCredit` is not equal to `AHDebit` and `AHCredit` respectively.
|
You can try it with CTE:
```
WITH cte as (
SELECT CreditAHID,
SUM(Debit) as CCDebit,
SUM(Credit) as CCCredit
FROM FS_2015_June.dbo.CCs
GROUP BY CreditAHID
), cte2 as (
SELECT ID as CreditAHID,
JuneDebit as AHDebit,
JuneCredit as AHCredit
FROM FS_2015_2016.dbo.AHs)
SELECT * FROM cte
UNION
SELECT * FROM cte2
WHERE CCDebit <> AHDebit and CCCredit <> AHCredit
```
|
Comparing values from two SQL SELECTs
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am using a query as follows.
```
SELECT * FROM TableName
WHERE ([Material Description] Not Like "*LICENSE*" Or
[Material Description] Not Like "*LICENCE*");
```
However, this fetches me results having records with LICENCE or LICENSE in the Material Description field. Please advise as to what it is that I am missing, so that my query yields a result omitting the records containing these words in the Material Description field.
|
What you actually want is `AND` in between:
```
SELECT * FROM TableName
WHERE ([Material Description] NOT LIKE '*LICENSE*' AND
[Material Description] NOT LIKE '*LICENCE*');
```
You will currently select the record "LICENSE" because it does **NOT** contain "LICENCE" and the record "LICENCE" because it does **NOT** contain "LICENSE".
The records you currently really exclude are the ones which contain "LICENCE" **AND** "LICENSE", probably not many ;). That little confusion arises from the usage of Or in combination with Not.
Another way to achieve the same goal would be to move the `NOT` in front of the `OR`ed condition:
```
SELECT * FROM TableName
WHERE NOT ([Material Description] LIKE '*LICENSE*' OR
[Material Description] LIKE '*LICENCE*');
```
That way it is a little bit clearer what you actually want to achieve.
This will read `NOT (A OR B)` as opposed to `(NOT A) OR (NOT B)` and has very different truth table:
```
A B | ( ~ A ) & ( ~ B )
----------------------------------
0 0 | 1 0 1 1 0
0 1 | 1 0 0 0 1
1 0 | 0 1 0 1 0
1 1 | 0 1 0 0 1
A B | ~ ( A | B )
-------------------------
0 0 | 1 0 0 0
0 1 | 0 0 1 1
1 0 | 0 1 1 0
1 1 | 0 1 1 1
```
Your truth table would look like
```
A B | ( ~ A ) | ( ~ B )
----------------------------------
0 0 | 1 0 1 1 0
0 1 | 1 0 1 0 1
1 0 | 0 1 1 1 0
1 1 | 0 1 0 0 1
```
|
You should check your conditions.

* The first condition of the statement gets all records where [Material Description] doesn't contain "LICENSE". So it also gives you the records which contain "LICENCE".
* The second condition gives you all records where [Material Description] doesn't contain "LICENCE". So you'll get records with "LICENSE" too.
* Because you use an OR operation, the records from both conditions are combined.

The result is that you get all the records from your table.
|
Query in Access using NOT LIKE
|
[
"",
"sql",
"ms-access",
"sql-like",
""
] |
I'm trying to select the 5 rows with the highest `count` value
This is my query:
```
string sql = "SELECT top 5 count FROM Likes ORDER BY COUNT(*) DESC";
```
It's just throwing an error code that
> Column 'Likes.count' is invalid in the select list because it is not
> contained in either an aggregate function or the GROUP BY clause.
It's for a project I've got to present tomorrow... 
|
On SQL Server, simply do this:
```
SELECT TOP 5 * FROM Likes ORDER BY [Count] DESC
```
This assumes that your `Likes`-table already contains a column named `[Count]` meaning that you don't need to count the records yourself (which is what `COUNT(*)` does).
|
You should not use `COUNT(*)` here for `order by`.
```
SELECT top 5 [count] FROM Likes ORDER BY [Count] DESC
```
|
sql select 5 higest values
|
[
"",
"sql",
"sql-server",
"count",
"select-query",
""
] |
This is SQL code I'm running on SSMS 2008 R2. It's taking over 10 minutes to run (it runs against 90,000 records).
I'm trying to update all unique records in #tmp\_hic\_final where [Claim Adjustment Type Code] is 0 and [Claim Type Code] is not 10. I'm also doing the update based on the select subquery which checks to make sure there isn't another record that has a [Claim Adjustment Type Code] of 1 in the table.
Although I don't know much on analyzing it, here's the execution plan: <http://snag.gy/TLRsZ.jpg>
Is there a better way to optimize it?
```
update PAHT
set [Marked Final] = 'Y'
from #tmp_hic_final PAHT
join
(
select [HIC #],
[Claim Type Code] ,
[Provider Oscar #],
[Claim From Date] ,
[Claim Thru Date]
from #tmp_hic_final
where [Claim Adjustment Type Code] = 0
and [Claim Type Code] <> 10
group by [HIC #],
[Claim Type Code] ,
[Provider Oscar #],
[Claim From Date] ,
[Claim Thru Date]
--,[Claim Adjustment Type Code]
having count(*) = 1
) as PAHT_2
on PAHT.[HIC #] = PAHT_2.[HIC #] and
PAHT.[Claim Type Code] = PAHT_2.[Claim Type Code] and
PAHT.[Provider Oscar #] = PAHT_2.[Provider Oscar #] and
PAHT.[Claim From Date] = PAHT_2.[Claim From Date] and
PAHT.[Claim Thru Date] = PAHT_2.[Claim Thru Date]
where PAHT.[Claim Adjustment Type Code] = 0
and PAHT.[Claim Type Code] <> 10
and NOT EXISTS (select
[Claim Adjustment Type Code]
from [ACO].[dbo].[PA_Header_Temp]
where
[HIC #] = PAHT.[HIC #]
and [Provider Oscar #] = PAHT.[Provider Oscar #]
and [Claim Type Code] = PAHT.[Claim Type Code]
and [Claim From Date] = PAHT.[Claim From Date]
and [Claim Thru Date] = PAHT.[Claim Thru Date]
and [Claim Adjustment Type Code] = 1)
```
Table definition and index on PA\_Header\_Temp:
```
/****** Object: Table [dbo].['PA_Header'] Script Date: 06/02/2015 2:32:33 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[PA_Header_Temp](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Current ClaimID] [float] NULL,
[Provider OSCAR #] [nvarchar](255) NULL,
[HIC #] [nvarchar](255) NULL,
[Claim Type Code] [float] NULL,
[Claim From Date] [datetime] NULL,
[Claim Thru Date] [datetime] NULL,
[Claim Bill Facility Type Code] [float] NULL,
[Claim Bill Classification Code] [float] NULL,
[Principal Diagnosis Code] [nvarchar](255) NULL,
[Admitting Diagnosis Code] [nvarchar](255) NULL,
[Claim Medicare Non Payment Reason Code] [nvarchar](255) NULL,
[Claim Payment Amount] [float] NULL,
[Claim NCH Primary Payer Code] [nvarchar](255) NULL,
[FIPS state Code] [float] NULL,
[Bene Patient Status Code] [float] NULL,
[Diagnosis Related Group Code] [float] NULL,
[Claim Outpatient Service Type Code] [nvarchar](255) NULL,
[Facility Provider NPI #] [float] NULL,
[Operating Provider NPI #] [nvarchar](255) NULL,
[Attending provider NPI #] [float] NULL,
[Other Provider NPI #] [nvarchar](255) NULL,
[Claim Adjustment Type Code] [float] NULL,
[Claim Effective Date] [datetime] NULL,
[Claim IDR Load Date] [datetime] NULL,
[Bene Equitable BIC HICN #] [nvarchar](255) NULL,
[Claim Admission Type Code] [nvarchar](255) NULL,
[Claim Admission Source Code] [nvarchar](255) NULL,
[Claim Bill Frequency Code] [nvarchar](255) NULL,
[Claim Query Code] [float] NULL,
[Marked Final] [nvarchar](255) NULL,
[Load Date] [datetime] NULL,
PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
```
|
I suggest this approach from both a readability and performance standpoint.
```
update yourTable
set field2 = someValue
where whatever
and field1 in
(select field1
from yourTable
where whatever
except
select field1
from yourTable
where whatever
and somethingElse)
```
`where whatever` should be the same every time.
|
First thing I would check is indexes in the tables (normal and temp.). Especially the clustered index scan of PA\_Header\_Temp in the nested loop looks really bad. Depending on the columns and data (data types, selectivity, row count) you should probably create index with either some or all of the columns, either as normal or included fields.
It might be a good idea to create clustered indexes for the temp. tables too, probably on the columns used for joining; for #tmp\_hic\_final you should also consider the fields used as WHERE criteria in the update.
Edit: Have you tried populating PAHT\_2 into a separate temp. table before running the update (+ indexing it) -- that might help too.
|
Update statement taking too long (7 minutes)
|
[
"",
"sql",
"sql-server",
"sql-update",
"execution-time",
"sql-execution-plan",
""
] |
I have the following query, I am trying to find out how many employees have a department ID assigned and how many of them have no department assigned (DepartmentID is not a Foreign Key)
```
WITH
archive AS
(SELECT CASE COALESCE(tbl_Department.DepartmentID, -33) WHEN -33 THEN 'Department Not Found' ELSE 'Department Found' END AS DepartmentStatus
FROM dbo.tbl_Employee
WITH (NOLOCK)
LEFT OUTER JOIN dbo.tbl_Department
WITH (NOLOCK)
ON tbl_Employee.DepartmentID = tbl_Department.DepartmentID
)
SELECT DepartmentStatus, COUNT(DepartmentStatus)
FROM archive
GROUP BY DepartmentStatus
```
The above query works good but it takes too long to execute. I have about a couple hundred thousand employee records and about 4000 department records.
|
You could start by avoiding the `COALESCE` function if it's not needed. I assume that `-33` is just a placeholder for `NULL`. The following is clearer and also more efficient, since the optimizer can use indexes.
I would write it in this way by using `EXISTS` and no join at all:
```
WITH archive AS
(
SELECT CASE WHEN EXISTS(SELECT 1 FROM dbo.tbl_Department d
WHERE d.DepartmentID = e.DepartmentID)
THEN 'Department Found'
ELSE 'Department Not Found' END AS DepartmentStatus
FROM dbo.tbl_Employee e WITH (NOLOCK)
)
SELECT DepartmentStatus, Count(*) As Cnt
FROM archive
GROUP BY DepartmentStatus
```
|
```
SELECT SUM(CASE WHEN DepartmentID IS NULL THEN 1 ELSE 0 END) AS EmployeesWithNoDepartment
FROM tbl_Employee
```
|
Whats the better way to write this query?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two tables which track items in different systems, for simplicity, lets call them Alpha and Beta systems. I am attempting to merge the two tables into a single table that can be correctly track the location of items.
Items tracked start in Alpha, and can move into Beta. However since the systems are not connected, the start time entered into Beta are not always matched to the end time in Alpha.
An example Item tracked correctly is as follows
```
System---ID----Item------- Start---------------- End
Alpha - 987 - 123 - 20/5/2015 07:00:00 - 20/5/2015 08:00:00
Alpha - 374 - 123 - 20/5/2015 08:00:00 - 20/5/2015 09:00:00
Beta  - 184 - 123 - 20/5/2015 09:00:00 - 20/5/2015 10:00:00
Beta  - 798 - 123 - 20/5/2015 10:00:00 - 20/5/2015 12:00:00
```
My issue is because the systems are not linked, I have cases where items appear in both Alpha and Beta at the same time. As follows
```
System---ID----Item------- Start---------------- End
Alpha - 987 - 123 - 20/5/2015 07:00:00 - 20/5/2015 08:00:00
Beta  - 184 - 123 - 20/5/2015 07:30:00 - 20/5/2015 10:00:00
Alpha - 374 - 123 - 20/5/2015 08:00:00 - 20/5/2015 09:00:00
Beta  - 798 - 123 - 20/5/2015 10:00:00 - 20/5/2015 12:00:00
```
Alpha is the better system, and should always be trusted.
So my desired outcome in the above situation would be to modify record 184 and change it's start time to the end time of record 374.
There is another case which should also be accounted for: where an Alpha record begins and ends before the last Beta record starts.
Make sense? I hope so, it's been driving me nuts for the last few days.
Thank you for any help you can give.
|
If you use SQL Server 2012 or later (thanks for the tip Karl), you can use LAG and LEAD, as shown here:
[SQL Fiddle](http://sqlfiddle.com/#!6/2afad4/3)
**Query 1**:
```
select [System], [ID], [Item],
[Start],
CASE WHEN LAG([End]) OVER(ORDER BY [Start]) > [Start] AND
LAG([System]) OVER(ORDER BY [Start]) = 'Alpha' AND
[System] = 'Beta'
THEN LAG([End]) OVER(ORDER BY [Start]) ELSE [Start] END As [CorrectStart],
[End],
CASE WHEN LEAD([Start]) OVER(ORDER BY [Start]) < [End] AND
LEAD([System]) OVER(ORDER BY [Start]) = 'Alpha' AND
[System] = 'Beta'
THEN LEAD([Start]) OVER(ORDER BY [Start]) ELSE [End] END As [CorrectEnd]
FROM Table1
order by [Start]
```
**[Results](http://sqlfiddle.com/#!6/2afad4/3/0)**:
```
| System | ID | Item | Start | CorrectStart | End | CorrectEnd |
|--------|-----|------|-----------------------|-----------------------|-----------------------|-----------------------|
| Alpha | 987 | 123 | May, 20 2015 07:00:00 | May, 20 2015 07:00:00 | May, 20 2015 08:00:00 | May, 20 2015 08:00:00 |
| Beta | 374 | 123 | May, 20 2015 07:30:00 | May, 20 2015 08:00:00 | May, 20 2015 10:00:00 | May, 20 2015 09:00:00 |
| Alpha | 184 | 123 | May, 20 2015 09:00:00 | May, 20 2015 09:00:00 | May, 20 2015 10:00:00 | May, 20 2015 10:00:00 |
| Beta | 798 | 123 | May, 20 2015 10:00:00 | May, 20 2015 10:00:00 | May, 20 2015 12:00:00 | May, 20 2015 12:00:00 |
```
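Since you want to actually modify the Beta records rather than just display the corrected values, the same window-function logic can be pushed back into the table through an updatable CTE. This is a sketch under the same assumed table name (`Table1`) and columns as the query above:

```sql
-- Sketch: apply the corrected start times in place.
-- SQL Server allows UPDATE through a CTE as long as the
-- updated column ([Start]) maps to a single base table.
;WITH corrected AS
(
    SELECT [Start],
           CASE WHEN LAG([End]) OVER(ORDER BY [Start]) > [Start] AND
                     LAG([System]) OVER(ORDER BY [Start]) = 'Alpha' AND
                     [System] = 'Beta'
                THEN LAG([End]) OVER(ORDER BY [Start]) ELSE [Start] END AS CorrectStart
    FROM Table1
)
UPDATE corrected
SET [Start] = CorrectStart
WHERE [Start] <> CorrectStart
```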
|
I think you want to find the last Alpha and the first Beta for each item, and update the first Beta's start time with the end time of the last Alpha.
This could be simplified and certainly optimized for performance; I left it this way because it is very explicit.
Note: LAG and LEAD were introduced in SQL Server 2012, so cha's solution certainly works if you have that version or later.
```
--create the sample data
DECLARE @Tracking TABLE(Name VARCHAR(10),ID INT,Item INT,StartTime DATETIME2,EndTime DATETIME2)
INSERT INTO @Tracking
SELECT * FROM (VALUES
('Alpha' , 987 , 123 , '2015-05-20 07:00:00' , '2015-05-20 08:00:00')
,('Beta' , 184 , 123 , '2015-05-20 07:30:00' , '2015-05-20 10:00:00')
,('Alpha' , 374 , 123 , '2015-05-20 08:00:00' , '2015-05-20 09:00:00')
,('Beta' , 798 , 123 , '2015-05-20 10:00:00' , '2015-05-20 12:00:00')
) AS tbl(Name,ID,Item,StartTime,EndTime)
--get row number for the sample data over system name and item
--use a cte for clarity
;WITH
cte AS (
SELECT Name,ID,Item,StartTime,EndTime
,rn = ROW_NUMBER() OVER (PARTITION BY Item,Name ORDER BY StartTime)
,rn_reverse = ROW_NUMBER() OVER (PARTITION BY Item,Name ORDER BY StartTime DESC)
FROM @Tracking
),
--get only the last alpha
LastAlphas AS (
SELECT * FROM cte WHERE Name = 'Alpha' AND rn_reverse = 1
),
--and the first beta
FirstBetas AS (
SELECT * FROM cte WHERE Name = 'Beta' AND rn = 1
)
--join them all and do the update
UPDATE @Tracking
SET StartTime = a.EndTime
FROM @Tracking t
JOIN FirstBetas b ON t.id = b.id
JOIN LastAlphas a ON t.item = a.item
SELECT * FROM @Tracking ORDER BY Name, StartTime
--Alpha 987 123 2015-05-20 07:00:00.0000000 2015-05-20 08:00:00.0000000
--Alpha 374 123 2015-05-20 08:00:00.0000000 2015-05-20 09:00:00.0000000
--Beta 184 123 2015-05-20 09:00:00.0000000 2015-05-20 10:00:00.0000000
--Beta 798 123 2015-05-20 10:00:00.0000000 2015-05-20 12:00:00.0000000
```
|
Modify records to account for correct priority and order between two systems
|
[
"",
"sql",
"t-sql",
"datetime",
""
] |
I have a table with the following type:
```
Id Parent_id Code Name market
1 NULL 1ex name 1 3
2 1 2ex name 2 3
3 1 3ex name 3 3
4 Null 4ex name 4 1
5 null 5ex name 5 3
6 4 6ex name 6 3
```
I wanted to select `code` and `name` from the above table such that it is ordered in the following way:
1. based on the market where market id=3
2. Parent id
3. related child
4. others
i.e. id 1 (Parent\_id) should be displayed first, followed by ids 2 and 3 (child ids). The values in 'parent\_id' come from the column 'id'.
I have built the following query so far, and I am finding it difficult to order the parent codes together with their related child codes.
```
select code,name from tbl_codes A
order by CASE WHEN(A.[Market] = 3) THEN 0 ELSE 1 END
```
Can someone please help me out.
|
A recursive CTE is the best way to construct a parent/child hierarchy, as follows:
```
-- Set up test data
CREATE TABLE tbl_codes (id INT , Parent_id INT, Code VARCHAR(3), NAME VARCHAR(12), Market INT)
INSERT tbl_codes
SELECT 1, NULL, '1ex', 'name 1', 3 UNION ALL
SELECT 2, 1 , '2ex', 'name 2', 3 UNION ALL
SELECT 3, 1 , '3ex', 'name 3', 3 UNION ALL
SELECT 4, NULL , '4ex', 'name 4', 1 UNION ALL
SELECT 5, NULL , '5ex', 'name 5', 3 UNION ALL
SELECT 6, 4 , '6ex', 'name 6', 3
CREATE VIEW [dbo].[View_ParentChild]
AS
-- Use a recursive CTE to build a parent/child hierarchy
WITH
RecursiveCTE AS
(
SELECT
id,
name,
parent_id,
Code,
market,
sort = id
FROM
tbl_codes
WHERE
parent_id IS NULL
UNION ALL
SELECT
tbl_codes.id,
tbl_codes.name,
tbl_codes.parent_id,
tbl_codes.Code,
tbl_codes.market,
sort = tbl_codes.parent_id
FROM
tbl_codes
INNER JOIN RecursiveCTE
ON tbl_codes.parent_id = RecursiveCTE.id
WHERE
tbl_codes.parent_id IS NOT NULL
)
SELECT
Code,
NAME,
Market,
Sort
FROM
RecursiveCTE
GO
```
As per your request I have refactored the query as a VIEW.
To use the view:
```
SELECT
*
FROM
dbo.View_ParentChild AS vpc
ORDER BY
CASE WHEN ( Market = 3 ) THEN 0
ELSE 1
END,
sort
```
It gives the following result:
```
Code NAME Market Sort
---- ------ ------ ----
1ex name 1 3 1
2ex name 2 3 1
3ex name 3 3 1
6ex name 6 3 4
5ex name 5 3 5
4ex name 4 1 4
```
To learn more about recursive CTEs click [here](http://blog.sqlauthority.com/2012/04/24/sql-server-introduction-to-hierarchical-query-using-a-recursive-cte-a-primer/)
And, as requested, here is a new version of the view that does not use a recursive CTE:
```
CREATE VIEW [dbo].[View_ParentChild_v2]
AS
SELECT
id,
Code,
market,
sort
FROM
(
SELECT
id,
name,
parent_id,
Code,
market,
sort = id
FROM
tbl_codes
WHERE
parent_id IS NULL
UNION ALL
SELECT
tbl_codes.id,
tbl_codes.name,
tbl_codes.parent_id,
tbl_codes.Code,
tbl_codes.market,
sort = tbl_codes.parent_id
FROM
tbl_codes
WHERE
tbl_codes.parent_id IS NOT NULL
) AS T
GO
```
Used as follows:
```
SELECT
*
FROM
View_ParentChild_v2
ORDER BY
CASE WHEN ( Market = 3 ) THEN 0
ELSE 1
END,
sort
```
nb: The first version, using a recursive CTE, can handle virtually unlimited levels of parent/child nesting, while version 2 only handles one level.
|
Try this
```
SELECT code ,
name
FROM tbl_codes A
ORDER BY CASE WHEN ( A.[Market] = 3 ) THEN 0
ELSE 1
END ,
CASE WHEN ( ISNULL(parent_id,0) = 1 ) THEN 0
ELSE 1
END
```
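Note that `ISNULL(parent_id, 0) = 1` hardcodes parent id 1, so children of other parents won't be grouped. A more general sketch (assuming the `Id` and `Parent_id` columns from the question) sorts each child directly under its own parent:

```sql
-- Sort key COALESCE(Parent_id, Id) places a parent (key = its own Id)
-- and its children (key = that same Id) together; the trailing Id
-- keeps the parent row first within each group
SELECT code, name
FROM tbl_codes A
ORDER BY CASE WHEN A.[Market] = 3 THEN 0 ELSE 1 END,
         COALESCE(A.Parent_id, A.Id),
         A.Id
```

This handles any number of parents, but like version 2 of the view above, only one level of nesting.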
|
How to order the result based on the column values in sql server
|
[
"",
"sql",
"sql-server",
""
] |
Which clause performs first in a `SELECT` statement?
I am unsure about the order in which a `SELECT` query is evaluated.
consider the below example
```
SELECT *
FROM #temp A
INNER JOIN #temp B ON A.id = B.id
INNER JOIN #temp C ON B.id = C.id
WHERE A.Name = 'Acb' AND B.Name = C.Name
```
1. Does it first evaluate the `WHERE` clause and then perform the `INNER JOIN`s?
2. Or does it `JOIN` first and then check the conditions?
If it performs the `JOIN`s first and the `WHERE` condition afterwards, how can it apply separate conditions to the different `JOIN`s?
|
The conceptual order of query processing is:
```
1. FROM
2. WHERE
3. GROUP BY
4. HAVING
5. SELECT
6. ORDER BY
```
But this is just a conceptual order. In fact the engine may decide to rearrange clauses. Here is proof. Let's make 2 tables with 1000000 rows each:
```
CREATE TABLE test1 (id INT IDENTITY(1, 1), name VARCHAR(10))
CREATE TABLE test2 (id INT IDENTITY(1, 1), name VARCHAR(10))
;WITH cte AS(SELECT -1 + ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) d FROM
(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t1(n) CROSS JOIN
(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t2(n) CROSS JOIN
(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t3(n) CROSS JOIN
(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t4(n) CROSS JOIN
(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t5(n) CROSS JOIN
(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t6(n))
INSERT INTO test1(name) SELECT 'a' FROM cte
INSERT INTO test2(name) SELECT name FROM test1
```
Now run 2 queries:
```
SELECT * FROM dbo.test1 t1
JOIN dbo.test2 t2 ON t2.id = t1.id AND t2.id = 100
WHERE t1.id > 1
SELECT * FROM dbo.test1 t1
JOIN dbo.test2 t2 ON t2.id = t1.id
WHERE t1.id = 1
```
Notice that the first query will filter most rows out in the `join` condition, but the second query filters in the `where` condition. Look at the produced plans:
> 1 TableScan - Predicate:[Test].[dbo].[test2].[id] as [t2].[id]=(100)
>
> 2 TableScan - Predicate:[Test].[dbo].[test2].[id] as [t2].[id]=(1)
This means that in the first query, the engine decided to evaluate the `join` condition first to filter out rows. In the second query, it evaluated the `where` clause first.
|
Logical order of query processing phases is:
1. `FROM` - Including `JOIN`s
2. `WHERE`
3. `GROUP BY`
4. `HAVING`
5. `SELECT`
6. `ORDER BY`
---
You can have as many conditions as you like in your `JOIN`s or `WHERE` clauses. Like:
```
Select * from #temp A
INNER JOIN #temp B ON A.id = B.id AND .... AND ...
INNER JOIN #temp C ON B.id = C.id AND .... AND ...
Where A.Name = 'Acb'
AND B.Name = C.Name
AND ....
```
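One caveat worth adding: with `INNER JOIN`s, placing a condition in `ON` or in `WHERE` gives the same result, but with `OUTER JOIN`s the placement changes the result. A sketch using the question's `#temp` table:

```sql
-- Condition in ON: rows from A with no matching B are still returned,
-- with NULLs for the B columns
SELECT *
FROM #temp A
LEFT JOIN #temp B ON A.id = B.id AND B.Name = 'Acb'

-- Same condition in WHERE: it runs after the join, so the NULL
-- B.Name of unmatched rows fails the test and those A rows disappear
SELECT *
FROM #temp A
LEFT JOIN #temp B ON A.id = B.id
WHERE B.Name = 'Acb'
```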
|
Which performs first WHERE clause or JOIN clause
|
[
"",
"sql",
"sql-server",
"select",
"join",
"where-clause",
""
] |
Reviewing some legacy code, I found a query which seems to be, at best, poorly written. Is there some theoretical (performance?) advantage to writing the `ORDER BY` in that manner? After all, SQL Server defaults to not caring about case.
```
SELECT PreferredName
FROM NameList
ORDER BY CAST(LOWER(PreferredName) AS BINARY)
```
It likely doesn't matter for this question, but the `PreferredName` column is defined as NVarchar(1000).
|
Nope.
Would agree that's badly written.
The `LOWER` and `CAST` functions prevent any index being used to avoid a sort.
Additionally, the cast to binary means many characters will not sort in their correct place under the usual Unicode string sorting rules.
Also the cast to binary without any length specified will truncate names longer than 15 characters (30 bytes).
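The truncation is easy to demonstrate. A sketch (the `VARBINARY(2000)` length below is my own choice, sized to cover the `NVarchar(1000)` column):

```sql
-- CAST(... AS BINARY) with no length defaults to 30 bytes;
-- for NVARCHAR (2 bytes per character) that is only 15 characters,
-- so these two sort identically even though the strings differ
SELECT CAST(N'abcdefghijklmno-SUFFIX-A' AS BINARY) AS k  -- bytes past char 15 dropped
SELECT CAST(N'abcdefghijklmno-SUFFIX-B' AS BINARY) AS k  -- same 30-byte key

-- If a binary sort is ever genuinely wanted, give the cast an explicit length:
SELECT PreferredName
FROM NameList
ORDER BY CAST(LOWER(PreferredName) AS VARBINARY(2000))
```

A better option still is a binary or case-insensitive `COLLATE` clause on the column, which keeps the `ORDER BY` index-friendly.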
|
No, I don't see how there could possibly be an advantage.
The issue here is that the ORDER BY can't use an indexed column. If you add an index on PreferredName and get rid of the CAST and LOWER, I bet you'd see large improvement on large datasets.
|
CAST AS BINARY in ORDER BY clause
|
[
"",
"sql",
"sql-server",
""
] |