| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a one-to-many relationship table.
I want to get the lowest other\_id that is shared between multiple ids.
```
id other_id
5 5
5 6
5 7
6 6
6 7
7 7
```
I can do this by building an SQL statement dynamically with parts added for each additional id I want to query on. For example:
```
select * from (
select other_id from SomeTable where id = 5
) as a
inner join (
select other_id from SomeTable where id = 6
) as b
inner join (
select other_id from SomeTable where id = 7
) as c
on a.other_id = b.other_id
and a.other_id = c.other_id
```
Is there a better way to do this? More specifically, is there a way to do this that doesn't require a variable number of joins? I feel like this problem probably already has a name and better solutions.
My query gives me the number 7, which is what I want.
|
the lowest other\_id shared between all ids
```
select other_id
from SomeTable
group by other_id
having count(distinct id) = 3
order by other_id limit 1
```
or dynamically
```
select other_id
from SomeTable
group by other_id
having count(distinct id) = (select count(distinct id) from SomeTable)
order by other_id limit 1
```
or if you want to look for the lowest other\_id shared between specific ids
```
select other_id
from SomeTable
where id in (5,6,7)
group by other_id
having count(distinct id) = 3
order by other_id limit 1
```
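As a quick sanity check, the accepted approach can be exercised with Python's built-in `sqlite3` module. The table and sample rows below just mirror the question's data, and SQLite's `GROUP BY`/`HAVING`/`LIMIT` behave the same as MySQL's here:

```python
import sqlite3

# Hypothetical in-memory copy of the question's sample table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SomeTable (id INTEGER, other_id INTEGER)")
conn.executemany(
    "INSERT INTO SomeTable VALUES (?, ?)",
    [(5, 5), (5, 6), (5, 7), (6, 6), (6, 7), (7, 7)],
)

# Lowest other_id shared by the specific ids 5, 6 and 7:
# only an other_id seen under all three ids survives the HAVING clause.
row = conn.execute(
    """
    SELECT other_id
    FROM SomeTable
    WHERE id IN (5, 6, 7)
    GROUP BY other_id
    HAVING COUNT(DISTINCT id) = 3
    ORDER BY other_id
    LIMIT 1
    """
).fetchone()
print(row[0])  # 7
```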
|
I don't have a MySQL server to test it out at the moment, but try a `GROUP BY` statement:
```
select
other_id
from
SomeTable
group by
other_id having count(*) > 1
order by
other_id asc
limit 1
```
|
Get lowest shared value in a one to many relationship table
|
[
"",
"mysql",
"sql",
""
] |
I'm using SQL Server and I need a query that will change a table I'm working with:
From this
```
| Band Name | Guitar1 | Guitar2 | Drums | Bass | Vocals |
---------------------------------------------------------------------------------
| LedZep | JimmyPage | NULL | NULL | NULL | NULL |
| LedZep | NULL | NULL | JonBonham | NULL | NULL |
| LedZep | NULL | NULL | NULL | JohnPaulJones | NULL |
| LedZep | NULL | NULL | NULL | NULL | RobertPlant |
```
"MAGIC SQL QUERY"
to this:
```
Band Name | Guitar1 | Guitar2 | Drums | Bass | Vocals |
---------------------------------------------------------------------------
LedZep | Jimmy Page | NULL | JonBonham | JonPaulJones | RobertPlant |
```
|
It may depend on what backend server software you are using, but the basic idea would be:
```
SELECT
BandName,
MAX(Guitar1) Guitar1,
MAX(Guitar2) Guitar2,
  MAX(Drums) Drums,
  MAX(Bass) Bass,
  MAX(Vocals) Vocals
FROM Bands
GROUP BY BandName
```
However, if a band has two records with a value for `Vocals` (or any column), what would you expect in the results?
|
Pretty sparse on details here, but something like this should work.
```
select BandName
, MAX(Guitar1) as Guitar1
, MAX(Guitar2) as Guitar2
, MAX(Drums) as Drums
, MAX(Bass) as Bass
     , MAX(Vocals) as Vocals
from SomeTable
group by BandName
```
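As a sketch of why `MAX()` collapses the rows: aggregate functions ignore NULLs, so with at most one non-NULL value per column per band, `MAX()` picks exactly that value. A hypothetical demonstration with Python's `sqlite3` (column set trimmed for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Bands (BandName TEXT, Guitar1 TEXT, Drums TEXT, Bass TEXT, Vocals TEXT)"
)
conn.executemany(
    "INSERT INTO Bands VALUES (?, ?, ?, ?, ?)",
    [
        ("LedZep", "JimmyPage", None, None, None),
        ("LedZep", None, "JonBonham", None, None),
        ("LedZep", None, None, "JohnPaulJones", None),
        ("LedZep", None, None, None, "RobertPlant"),
    ],
)

# MAX() skips NULLs, so each column collapses to its single non-NULL value.
row = conn.execute(
    """
    SELECT BandName, MAX(Guitar1), MAX(Drums), MAX(Bass), MAX(Vocals)
    FROM Bands
    GROUP BY BandName
    """
).fetchone()
print(row)  # one flattened row per band
```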
|
Possible to Combine columns on id in sql?
|
[
"",
"sql",
"sql-server",
""
] |
I currently have the following code
```
select *
FROM list
WHERE name LIKE '___'
ORDER BY name;
```
I am trying to get it so that it only shows names with three or more words, but it only displays names with three characters.
I cannot seem to work out the correct syntax for this.
Any help is appreciated. Thank you.
|
If you assume that there are no double spaces, you can do:
```
WHERE name like '% % %'
```
To get names with three or more words.
If you can have double spaces (or other punctuation), then you are likely to want a regular expression. Something like:
```
WHERE name REGEXP '^[^ ]+[ ]+[^ ]+[ ]+[^ ]+'
```
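For the `'% % %'` variant, here is a small hypothetical demonstration using Python's `sqlite3` (SQLite's `LIKE` treats `%` the same way; the names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE list (name TEXT)")
conn.executemany(
    "INSERT INTO list VALUES (?)",
    [("Ann",), ("John Smith",), ("Mary Jane Watson",), ("A B C D",)],
)

# '% % %' requires at least two spaces, i.e. at least three words
# (assuming single spaces between words).
rows = conn.execute(
    "SELECT name FROM list WHERE name LIKE '% % %' ORDER BY name"
).fetchall()
names = [r[0] for r in rows]
print(names)  # only the 3+ word names remain
```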
|
You can count the number of words and then select those that have three or more words.
```
SELECT * FROM list
HAVING LENGTH(name) - LENGTH(REPLACE(name, ' ', ''))+1 >= 3
ORDER BY name
```
[**DEMO HERE**](http://www.sqlfiddle.com/#!2/ef6c7/3)
\*Even if you have multiple spaces it will not be affected here; check [this](http://www.sqlfiddle.com/#!2/fa940/1)
|
MySQL SELECT values with more than three words
|
[
"",
"mysql",
"sql",
"select",
"where-clause",
"sql-like",
""
] |
I've looked around and cannot find an answer to this. As far as I'm aware I'm using this correctly but I'm obviously missing something as it keeps coming back with 'Incorrect syntax near the keyword 'CASE''
I'm trying to take two values, and depending on what 'word' they are return a value of 1-5. These then get multiplied together to give me a 'rating'
```
DECLARE @PR int
DECLARE @IR int
DECLARE @R int
DECLARE @ProbabilityRating varchar(max)
DECLARE @ImpactRating varchar(max)
SET @ProbabilityRating = 'High'
SET @ImpactRating = 'Medium'
CASE @ProbabilityRating
WHEN 'Very Low' THEN @PR = 1
WHEN 'Low' THEN @PR = 2
WHEN 'Medium' THEN @PR = 3
WHEN 'High' THEN @PR = 4
WHEN 'Very High' THEN @PR = 5
END
CASE @ImpactRating
WHEN 'Very Low' THEN @IR = 1
WHEN 'Low' THEN @IR = 2
WHEN 'Medium' THEN @IR = 3
WHEN 'High' THEN @IR = 4
WHEN 'Very High' THEN @IR = 5
END
SET @R = @IR * @PR
```
Where is this going wrong?!
|
Since you're reusing that `case`, maybe it makes sense to have those values in a table ([fiddle](http://sqlfiddle.com/#!6/5ed90/2)):
```
create table Ratings
(
Value int
,Description varchar(20)
)
insert Ratings values
(1,'Very Low')
,(2,'Low')
,(3,'Medium')
,(4,'High')
,(5,'Very High')
```
And then assign the variables from a select...
```
select
@IR = r.Value
from Ratings as r
where r.[Description] = @ImpactRating
select
@PR = r.Value
from Ratings as r
where r.[Description] = @ProbabilityRating
```
Alternatively, you could just create a temp table ([fiddle](http://sqlfiddle.com/#!6/d41d8/21448)):
```
select
d.*
into #Ratings
from (values
('Very Low',1)
,('Low',2)
,('Medium',3)
,('High',4)
,('Very High',5)
) d([Description], Value)
select
@IR = r.Value
from #Ratings as r
where r.[Description] = @ImpactRating
select
@PR = r.Value
from #Ratings as r
where r.[Description] = @ProbabilityRating
```
As for your syntax issue, it seems like you're confusing a sql `case` with a `switch` (from other languages) where you'd branch to execute different code. They look pretty similar, so that's understandable. They behave differently, though. According to [the documentation](http://msdn.microsoft.com/en-us/library/ms181765.aspx) (emphasis mine):
> **`CASE`**
>
>
> Evaluates a list of conditions **and returns one** of multiple possible result expressions.
That's to say, a case statement resolves to a value. Simply assign that value to your variable, like so:
```
set @PR = case @ProbabilityRating
when 'Very Low' then 1
when 'Low' then 2
when 'Medium' then 3
when 'High' then 4
when 'Very High' then 5
end
```
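To illustrate that a `CASE` expression resolves to a value you then assign, here is a minimal sketch using Python's `sqlite3` as a stand-in for T-SQL (the `rating` helper is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def rating(word):
    # CASE is an expression: it evaluates to one value,
    # which the caller assigns, rather than branching code.
    return conn.execute(
        """
        SELECT CASE ?
                 WHEN 'Very Low' THEN 1
                 WHEN 'Low' THEN 2
                 WHEN 'Medium' THEN 3
                 WHEN 'High' THEN 4
                 WHEN 'Very High' THEN 5
               END
        """,
        (word,),
    ).fetchone()[0]

pr = rating("High")    # 4
ir = rating("Medium")  # 3
print(pr * ir)         # 12
```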
|
```
DECLARE @PR int
DECLARE @IR int
DECLARE @R int
DECLARE @ProbabilityRating varchar(max)
DECLARE @ImpactRating varchar(max)
SET @ProbabilityRating = 'High'
SET @ImpactRating = 'Medium'
set @PR=(CASE @ProbabilityRating
WHEN 'Very Low' THEN 1
WHEN 'Low' THEN 2
WHEN 'Medium' THEN 3
WHEN 'High' THEN 4
WHEN 'Very High' THEN 5
END)
set @IR=(
CASE @ImpactRating
WHEN 'Very Low' THEN 1
WHEN 'Low' THEN 2
WHEN 'Medium' THEN 3
WHEN 'High' THEN 4
WHEN 'Very High' THEN 5
END)
SET @R = @IR * @PR
print @r
```
Your query should look like this.
Ref : <https://stackoverflow.com/a/14631123/2630817>
|
SQL CASE statement for if
|
[
"",
"sql",
"sql-server",
"t-sql",
"case",
""
] |
I have TableA and TableB which contains identical columns but they have different records.
How do I find out which "UniqueID" is not in TableA but in TableB?
I have been doing
```
select tc.uniqueid, td.uniqueid
from tab1c as tc
left join tab2c as td
where tc.uniqueid != td.uniqueid;
```
but it doesn't seem to be correct.
|
Use left join:
```
select tc.uniqueid, td.uniqueid
from tab1c as tc
left join tab2c as td
on tc.uniqueid = td.uniqueid
where td.uniqueid IS NULL; --Will get all uid in tab1c and not in tab2c
```
An equally efficient and more readable way is `NOT EXISTS`:
```
select tc.uniqueid
from tab1c as tc
WHERE NOT EXISTS (SELECT * FROM tab2c as td
WHERE tc.uniqueid = td.uniqueid)
```
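Both forms of the anti-join can be demonstrated with Python's `sqlite3` (the question is tagged sqlite, where both queries run unchanged; the ids below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1c (uniqueid INTEGER)")
conn.execute("CREATE TABLE tab2c (uniqueid INTEGER)")
conn.executemany("INSERT INTO tab1c VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO tab2c VALUES (?)", [(2,), (3,), (4,)])

# LEFT JOIN ... IS NULL keeps tab1c rows with no match in tab2c.
left_join = conn.execute(
    """
    SELECT tc.uniqueid
    FROM tab1c AS tc
    LEFT JOIN tab2c AS td ON tc.uniqueid = td.uniqueid
    WHERE td.uniqueid IS NULL
    """
).fetchall()

# NOT EXISTS expresses the same anti-join.
not_exists = conn.execute(
    """
    SELECT tc.uniqueid
    FROM tab1c AS tc
    WHERE NOT EXISTS (SELECT 1 FROM tab2c AS td WHERE tc.uniqueid = td.uniqueid)
    """
).fetchall()
print(left_join, not_exists)  # both return only uniqueid 1
```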
|
Maybe not the most efficient way but you can use [EXCEPT](http://msdn.microsoft.com/en-US/library/ms188055.aspx)
```
SELECT UNIQUEID
FROM tab1c
EXCEPT
SELECT UNIQUEID
FROM tab2c
```
|
SQL Query to compare columns in different table
|
[
"",
"sql",
"sqlite",
""
] |
I'm trying to check if a record exists and then update it if it does.
Here is what I currently have (which obviously does not work):
```
CREATE PROCEDURE dbo.update_customer_m
@customer_id INT ,
@firstname VARCHAR(30) ,
@surname VARCHAR(30) ,
@gender VARCHAR(6) ,
@age INT ,
@address_1 VARCHAR(50) ,
@address_2 VARCHAR(50) ,
@city VARCHAR(50) ,
@phone VARCHAR(10) ,
@mobile VARCHAR(11) ,
@email VARCHAR(30) ,
AS
IF EXISTS
(
SELECT *
FROM dbo.Customer
WHERE CustID = @customer_id
)
BEGIN
UPDATE dbo.Customer
SET Firstname = @firstname, Surname = @surname, Age = @age, Gender = @gender, Address1 = @address_1, Address2 = @address_2, City = @city, Phone = @phone, Mobile = @mobile, Email = @email
WHERE CustID = @customer_id
END
```
Is there a better way of doing this that works?
|
Why bother checking first? The update will update no rows if the row doesn't exist:
```
UPDATE dbo.Customer
SET Firstname = @firstname, Surname = @surname, Age = @age, Gender = @gender,
Address1 = @address_1, Address2 = @address_2, City = @city,
Phone = @phone, Mobile = @mobile, Email = @email
WHERE CustID = @customer_id;
```
The `if` is not needed.
|
The `if` block is not needed: if the select finds no row, the update block does nothing and no rows are affected.
Just write your update in this procedure.
But maybe you want an ELSE branch for this procedure; if so, you can use IF and ELSE: put the update in the IF block, and in the ELSE block do whatever else you want.
|
Stored procedure if record exists then update
|
[
"",
"sql",
"asp.net",
"sql-server",
"stored-procedures",
""
] |
I have a `Table1` like this:
```
ApplicableTo IdApplicable
---------------------------
Dept 1
Grade 3
section 1
Designation 2
```
There other tables like:
`tblDept`:
```
ID Name
1 dept1
2 baking
3 other
```
`tblGrade`:
```
ID Name
1 Grd1
2 Manager
3 gr3
```
`tblSection`:
```
id Name
1 Sec1
2 sec2
3 sec3
```
`tblDesignation`:
```
id Name
1 Executive
2 Developer
3 desig3
```
What I need is a query for `table1` in such a way that gives me
```
ApplicableTo (table1)
Name (from the relevant table based on the value in `ApplicableTo` column)
```
Is this possible?
Desired Result:
```
eg: ApplicableTo IdApplicable Name
Dept 1 dept1
grade 3 gr3
Section 1 sec1
Designation 2 Developer.
```
This is the result I desire.
|
You can use `CASE` here:
```
SELECT ApplicableTo,
IdApplicable,
CASE
WHEN ApplicableTo = 'Dept' THEN (SELECT Name FROM tblDept WHERE tblDept.ID = IdApplicable)
WHEN ApplicableTo = 'Grade' THEN (SELECT Name FROM tblGrade WHERE tblGrade.ID = IdApplicable)
WHEN ApplicableTo = 'Section' THEN (SELECT Name FROM tblSection WHERE tblSection.ID = IdApplicable)
WHEN ApplicableTo = 'Designation' THEN (SELECT Name FROM tblDesignation WHERE tblDesignation.ID = IdApplicable)
END AS 'Name'
FROM Table1
```
|
You could do something like the following so the applicable to becomes part of the JOIN predicate:
```
SELECT t1.ApplicableTo, t1.IdApplicable, n.Name
FROM Table1 AS t1
INNER JOIN
( SELECT ID, Name, 'Dept' AS ApplicableTo
FROM tblDept
UNION ALL
SELECT ID, Name, 'Grade' AS ApplicableTo
FROM tblGrade
UNION ALL
SELECT ID, Name, 'section' AS ApplicableTo
FROM tblSection
UNION ALL
SELECT ID, Name, 'Designation' AS ApplicableTo
FROM tblDesignation
) AS n
ON n.ID = t1.IdApplicable
AND n.ApplicableTo = t1.ApplicableTo
```
I would generally advise against this approach; although it may seem more concise, you would be better off having 4 separate nullable columns in your table:
```
ApplicableTo | IdDept | IdGrade | IdSection | IdDesignation
-------------+--------+---------+-----------+---------------
Dept | 1 | NULL | NULL | NULL
Grade | NULL | 3 | NULL | NULL
section | NULL | NULL | 1 | NULL
Designation | NULL | NULL | NULL | 2
```
This allows you to use foreign keys to manage your referential integrity properly.
|
Sql select query based on a column value
|
[
"",
"sql",
"select",
"join",
""
] |
I'm struggling with the following thing: I have a table called custom\_fields. Within it there is a field with values like product\_id, money\_spent. When I do an AND query to get data I get 0 results (even though the conditions are met). This is the query:
```
SELECT DISTINCT `users` . *
FROM `users`
LEFT JOIN `emails` ON `users`.`id` = `emails`.`user_id`
LEFT JOIN `phones` ON `users`.`id` = `phones`.`user_id`
LEFT JOIN `trackers` ON `users`.`tracker_id` = `trackers`.`id`
LEFT JOIN `custom_fields` ON `users`.`id` = `custom_fields`.`user_id`
WHERE (
`custom_fields`.`field` = "product_id"
AND `custom_fields`.`value` IS NOT NULL
AND `custom_fields`.`value` != ""
)
AND (
`custom_fields`.`field` = "payment_value"
)
AND (
`custom_fields`.`value` <50
AND `custom_fields`.`value` IS NOT NULL
AND `custom_fields`.`value` != ""
)
AND (
`users`.`tracker_id` =186
)
```
How to solve this problem? I tried to use UNION but it gives different results. Maybe some aliases? I can't do the transposition on this table (meaning: convert each row to a separate field).
|
Supposing that `custom_fields` is a [EAV-style table](http://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model) (i.e. it has multiple entries per one entity like user, in form of a set [entry type, entry label, entry value]), you need to do multiple `JOIN`s with this table to achieve the desired result.
Try this query instead. It filters users having both: entries in `custom_fields` of type `product_id` and entries of type `payment_value` with value lower than 50:
```
SELECT DISTINCT `users` . *
FROM `users`
LEFT JOIN `emails` ON `users`.`id` = `emails`.`user_id`
LEFT JOIN `phones` ON `users`.`id` = `phones`.`user_id`
LEFT JOIN `trackers` ON `users`.`tracker_id` = `trackers`.`id`
LEFT JOIN `custom_fields` as cf1 ON `users`.`id` = cf1.`user_id`
LEFT JOIN `custom_fields` as cf2 ON `users`.`id` = cf2.`user_id`
WHERE (
cf1.`field` = "product_id"
AND cf1.`value` IS NOT NULL
AND cf1.`value` != ""
)
AND (
cf2.`field` = "payment_value"
AND cf2.`value` <50
AND cf2.`value` IS NOT NULL
AND cf2.`value` != ""
)
AND (
`users`.`tracker_id` =186
)
```
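A minimal sketch of the double self-join against an EAV table, using Python's `sqlite3` with hypothetical users and values (the extra joins from the original query are omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, tracker_id INTEGER)")
conn.execute("CREATE TABLE custom_fields (user_id INTEGER, field TEXT, value TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, 186), (2, 186)])
conn.executemany(
    "INSERT INTO custom_fields VALUES (?, ?, ?)",
    [
        (1, "product_id", "42"),
        (1, "payment_value", "30"),
        (2, "product_id", "43"),
        (2, "payment_value", "90"),
    ],
)

# Each alias of custom_fields matches one attribute row of the same user,
# so both conditions can hold at once.
rows = conn.execute(
    """
    SELECT DISTINCT u.id
    FROM users u
    JOIN custom_fields cf1 ON u.id = cf1.user_id
    JOIN custom_fields cf2 ON u.id = cf2.user_id
    WHERE cf1.field = 'product_id' AND cf1.value != ''
      AND cf2.field = 'payment_value' AND CAST(cf2.value AS INTEGER) < 50
      AND u.tracker_id = 186
    """
).fetchall()
print(rows)  # only user 1 has a product_id and a payment_value below 50
```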
|
All you seem to want to do is select users for which a product id exists and also a payment value less than 50 exists. So simply use the word EXISTS (the one you use to formulate the task) in your query as well, and there are no longer issues with duplicate results. Plus, the query is much easier to read, because you use SQL straightforwardly.
```
select *
from users
where tracker_id = 186
and exists
(
select *
from custom_fields
where field = 'product_id' and value is not null
)
and exists
(
select *
from custom_fields
where field = 'payment_value' and value < 50 -- and value > 0 maybe?
);
```
As you see, quite often you can just put the task in words that can be easily translated into SQL.
Please also read in your request's comment section about problems with your query.
|
AND statement on same table on values from same field returns 0 results
|
[
"",
"mysql",
"sql",
""
] |
Are these 2 queries equivalent in performance?
```
select a.*
from a
inner join b
on a.bid = b.id
inner join c
on b.cid = c.id
where c.id = 'x'
```
and
```
select a.*
from c
inner join b
on b.cid = c.id
join a
on a.bid = b.id
where c.id = 'x'
```
Does it join all the tables first and then filter on the condition, or is the condition applied first to reduce the join?
(I am using SQL Server.)
|
The Query Optimizer will almost always filter table `c` first before joining `c` to the other two tables. You can verify this by looking into the execution plan and see how many rows are being taken by SQL Server from table `c` to participate in the join.
**About join order**: the Query Optimizer will pick a join order that it thinks will work best for your query. It could be `a JOIN b JOIN (filtered c)` or `(filtered c) JOIN a JOIN b`.
If you want to force a certain order, include a hint:
```
SELECT *
FROM a
INNER JOIN b ON ...
INNER JOIN c ON ...
WHERE c.id = 'x'
OPTION (FORCE ORDER)
```
This will force SQL Server to do `a join b join (filtered c)`. **Standard warning**: unless you see massive performance gain, most times it's better to leave the join order to the Query Optimizer.
|
Read about SQL query order of operations: <http://www.bennadel.com/blog/70-sql-query-order-of-operations.htm>
The logical execution order is FROM then WHERE; in this case, as in other cases, I don't think the WHERE clause is executed before the joins.
|
Does inner join order and where as an impact on performance
|
[
"",
"sql",
"sql-server",
""
] |
I have a table
`process` with the fields `id`, `fk_object` and `status`.
For example:
```
id| fk_object | status
----------------------
1 | 3 | true
2 | 3 | true
3 | 9 | false
4 | 9 | true
5 | 9 | true
6 | 8 | false
7 | 8 | false
```
I want to find the `id`s of all rows where different `status` values exist, grouped by `fk_object`.
In this example it should return the `id`s `3, 4, 5`, because for `fk_object` `9` there exist `status` values of both `true` and `false`, while the others only have one of them.
|
The stock response is as follows...
```
SELECT ... FROM ... WHERE ... IN ('true','false')... GROUP BY ... HAVING COUNT(DISTINCT status) = 2;
```
where '2' is equal to the number of arguments in IN()
|
This gets the `fk_object` values with that property:
```
select fk_object
from process
group by fk_object
having min(status) <> max(status);
```
You can get the corresponding rows by using a `join`:
```
select p.*
from process p join
(select fk_object
from process
group by fk_object
having min(status) <> max(status)
) pmax
on p.fk_object = pmax.fk_object;
```
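A quick demonstration of the `min(status) <> max(status)` trick with Python's `sqlite3`, storing the booleans as the strings from the example (`'false'` sorts before `'true'`, so min and max differ exactly when both values occur):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process (id INTEGER, fk_object INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO process VALUES (?, ?, ?)",
    [
        (1, 3, "true"), (2, 3, "true"),
        (3, 9, "false"), (4, 9, "true"), (5, 9, "true"),
        (6, 8, "false"), (7, 8, "false"),
    ],
)

# Groups where min <> max contain both statuses; join back to get their rows.
rows = conn.execute(
    """
    SELECT p.id
    FROM process p
    JOIN (SELECT fk_object
          FROM process
          GROUP BY fk_object
          HAVING MIN(status) <> MAX(status)) pmax
      ON p.fk_object = pmax.fk_object
    ORDER BY p.id
    """
).fetchall()
ids = [r[0] for r in rows]
print(ids)  # ids belonging to fk_object 9, the only mixed group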
|
Find IDs of differing values grouped by foreign key in MySQL
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I have designed a cursor to run some stats against 6500 inspectors, but it is taking too long. There are many other select queries in the cursor that run okay, but the following select runs very, very slowly. Without the cursor, the select query runs perfectly fine.
**Requirements:**
Number of visits for each inspector where the visit has an uploaded document (DocType 1, 2 or 13)
**Tables:**
* `Inspectors: InspectorID`
* `InspectionScope: ScopeID, InspectorID (FK)`
* `Visits: VisitID, VisitDate ScopeID (FK)`
* `VisitsDoc: DocID, DocType, VisitID (FK)`
Cursor code:
```
DECLARE
@curInspID int,
@DateFrom date, @DateTo date;
SELECT @DateTo = CAST(GETDATE() AS DATE)
,@DateFrom = CAST(GETDATE() - 90 AS DATE)
DECLARE
@InspectorID int,
@TotalVisits int;
DECLARE @Report TABLE (
InspectorID int,
TotalVisits int)
DECLARE curList CURSOR FOR
SELECT InspectorID FROM Inspectors ;
OPEN curList
FETCH NEXT FROM curList INTO @curInspID;
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT
@curInspID = s.InspectorID
,@TotalVisits = COUNT(distinct v.visitID)
from Visits v
inner join InspectionScope s on s.ScopeID = v.ScopeID
inner join VisitDocs vd on vd.VisitID = v.VisitID
where s.InspectorID = @curInspID and vd.DocType IN (1,2,13)
and v.VisitDate BETWEEN @DateFrom and @DateTo
group by s.InspectorID
INSERT INTO @Report VALUES(@curInspID,@TotalVisits);
FETCH NEXT FROM curList INTO @curInspID;
END
CLOSE curList
DEALLOCATE curList
SELECT * FROM @Report
```
The following queries run okay inside the same cursor:
```
,@TotalVisitsWithReportScore = (select COUNT(v.visitid) from visits v
inner join InspectionScope s on s.ScopeID = v.ScopeID
where v.ReportStandard not in (0,9) and v.VisitType = 1
and v.VisitDate BETWEEN @DateFrom and @DateTo
and s.InspectorID = @curInspID
)
,@TotalVisitsWith_ReportScore_RejectionFeedBack = (select COUNT(v.visitid) from visits v
inner join InspectionScope s on s.ScopeID = v.ScopeID
where v.ReportStandard not in (0,9) and v.VisitType = 1
and v.DiscrepancyType IN (2,5,6,7,8)
and v.VisitDate BETWEEN @DateFrom and @DateTo
and s.InspectorID = @curInspID
)
```
|
No need for a cursor here -- you can use `INSERT INTO` with `SELECT`, joining on the `Inspector` table.
```
INSERT INTO @Report
SELECT
s.InspectorID
, COUNT(distinct v.visitID)
from Visits v
inner join InspectionScope s on s.ScopeID = v.ScopeID
inner join VisitDocs vd on vd.VisitID = v.VisitID
inner join Inspector i on s.InspectorID = i.InspectorId
where vd.DocType IN (1,2,13)
and v.VisitDate BETWEEN @DateFrom and @DateTo
group by s.InspectorID
```
---
Please note, you may need to use an `OUTER JOIN` with the `Inspector` table if there are results in that table that do not exist in the other tables. Depends on your data and desired results.
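A reduced sketch of the set-based rewrite with Python's `sqlite3` (hypothetical sample rows, and the date filter dropped for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE InspectionScope (ScopeID INTEGER, InspectorID INTEGER);
    CREATE TABLE Visits (VisitID INTEGER, ScopeID INTEGER);
    CREATE TABLE VisitDocs (DocID INTEGER, DocType INTEGER, VisitID INTEGER);
    INSERT INTO InspectionScope VALUES (10, 1), (20, 2);
    INSERT INTO Visits VALUES (100, 10), (101, 10), (102, 20);
    INSERT INTO VisitDocs VALUES (1, 1, 100), (2, 13, 100), (3, 2, 101), (4, 5, 102);
    """
)

# One grouped query replaces the per-inspector cursor loop.
rows = conn.execute(
    """
    SELECT s.InspectorID, COUNT(DISTINCT v.VisitID)
    FROM Visits v
    JOIN InspectionScope s ON s.ScopeID = v.ScopeID
    JOIN VisitDocs vd ON vd.VisitID = v.VisitID
    WHERE vd.DocType IN (1, 2, 13)
    GROUP BY s.InspectorID
    """
).fetchall()
print(rows)  # inspector 1 has two qualifying visits; inspector 2 has none
```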
|
Cursors are not recommended any more. It is better to insert the data into a temporary table and add a primary key to it.
Then you can use a while loop that iterates over that table with a WHERE clause on the id in the temporary table.
That is much, much faster.
|
select query in Cursor taking too long
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I do not understand why the following Oracle 10g SQL query is not working although each sub-query is working fine and fast on its own:
```
SELECT ref.EWO_ISSUE_ID, ref.EWO_REF_ID, ref.MSN_ID, ref.MSN, ref.HOV, ref.RANK,
ref.EWO_WP_ID, ref.EWO_WP, ref.EPAC_TDU, ref.MOD, ref.MP, ref.EWO_REF_DESCRIPTION,
ewo.EWO1, ewo.EWO2, ewo.EWO3, ref.TRS_ISSUE, ref.TRS_UPDATE_ON_SHEET_1,
ref.TRS_UPDATE_ON_SHEET_2, ref.TRS_UPDATE_ON_SHEET_3,
ref.TRS_UPDATE_ON_SHEET_4, ref.TRS_UPDATE_ON_SHEET_5, ref.TRS_UPDATE_ON_SHEET_6,
ref.TRS_TECHDOM_INFO
FROM V_EWO_ACTUAL_REFERENCE ref
LEFT JOIN EWO_REFERENCE ewo
ON ref.EWO_REF_ID = ewo.EWO_REF_ID
WHERE ref.EWO_REF_ID IS NOT NULL
AND ref.TRS_TECHDOM_INFO IS NOT NULL
MINUS
SELECT EWO_ISSUE_ID, EWO_REF_ID, MSN_ID, MSN, HOV, RANK, EWO_WP_ID, EWO_WP,
EPAC_TDU, MOD, MP, EWO_REF_DESCRIPTION, EWO1, EWO2, EWO3, TRS_ISSUE,
TRS_UPDATE_ON_SHEET_1, TRS_UPDATE_ON_SHEET_2, TRS_UPDATE_ON_SHEET_3,
TRS_UPDATE_ON_SHEET_4, TRS_UPDATE_ON_SHEET_5, TRS_UPDATE_ON_SHEET_6,
TRS_TECHDOM_INFO
FROM EWO_REF_TRS_HISTORY;
```
I only get a timeout error because it takes very long. Does anyone have an idea what could be wrong?
|
This may not solve your problem, but it will probably force a different execution plan:
```
with x as ( SELECT /*+ materialize */ ref.EWO_ISSUE_ID, ref.EWO_REF_ID, ref.MSN_ID, ref.MSN, ref.HOV, ref.RANK,
ref.EWO_WP_ID, ref.EWO_WP, ref.EPAC_TDU, ref.MOD, ref.MP, ref.EWO_REF_DESCRIPTION,
ewo.EWO1, ewo.EWO2, ewo.EWO3, ref.TRS_ISSUE, ref.TRS_UPDATE_ON_SHEET_1,
ref.TRS_UPDATE_ON_SHEET_2, ref.TRS_UPDATE_ON_SHEET_3,
ref.TRS_UPDATE_ON_SHEET_4, ref.TRS_UPDATE_ON_SHEET_5, ref.TRS_UPDATE_ON_SHEET_6,
ref.TRS_TECHDOM_INFO
FROM V_EWO_ACTUAL_REFERENCE ref
LEFT JOIN EWO_REFERENCE ewo
ON ref.EWO_REF_ID = ewo.EWO_REF_ID
WHERE ref.EWO_REF_ID IS NOT NULL
AND ref.TRS_TECHDOM_INFO IS NOT NULL ),
y AS ( SELECT /*+ materialize */ EWO_ISSUE_ID, EWO_REF_ID, MSN_ID, MSN, HOV, RANK, EWO_WP_ID, EWO_WP,
EPAC_TDU, MOD, MP, EWO_REF_DESCRIPTION, EWO1, EWO2, EWO3, TRS_ISSUE,
TRS_UPDATE_ON_SHEET_1, TRS_UPDATE_ON_SHEET_2, TRS_UPDATE_ON_SHEET_3,
TRS_UPDATE_ON_SHEET_4, TRS_UPDATE_ON_SHEET_5, TRS_UPDATE_ON_SHEET_6,
TRS_TECHDOM_INFO
FROM EWO_REF_TRS_HISTORY )
select *
from x
minus
select *
from y;
```
|
You can use NOT EXISTS or NOT IN and check something like this
```
SELECT ref.EWO_ISSUE_ID, ref.EWO_REF_ID, ref.MSN_ID, ref.MSN, ref.HOV, ref.RANK,
ref.EWO_WP_ID, ref.EWO_WP, ref.EPAC_TDU, ref.MOD, ref.MP, ref.EWO_REF_DESCRIPTION,
ewo.EWO1, ewo.EWO2, ewo.EWO3, ref.TRS_ISSUE, ref.TRS_UPDATE_ON_SHEET_1,
ref.TRS_UPDATE_ON_SHEET_2, ref.TRS_UPDATE_ON_SHEET_3,
ref.TRS_UPDATE_ON_SHEET_4, ref.TRS_UPDATE_ON_SHEET_5, ref.TRS_UPDATE_ON_SHEET_6,
ref.TRS_TECHDOM_INFO
FROM V_EWO_ACTUAL_REFERENCE ref
LEFT JOIN EWO_REFERENCE ewo
ON ref.EWO_REF_ID = ewo.EWO_REF_ID
WHERE ref.EWO_REF_ID IS NOT NULL
AND ref.TRS_TECHDOM_INFO IS NOT NULL
AND NOT EXISTS
(SELECT EWO_ISSUE_ID, EWO_REF_ID, MSN_ID, MSN, HOV, RANK, EWO_WP_ID, EWO_WP,
EPAC_TDU, MOD, MP, EWO_REF_DESCRIPTION, EWO1, EWO2, EWO3, TRS_ISSUE,
TRS_UPDATE_ON_SHEET_1, TRS_UPDATE_ON_SHEET_2, TRS_UPDATE_ON_SHEET_3,
TRS_UPDATE_ON_SHEET_4, TRS_UPDATE_ON_SHEET_5, TRS_UPDATE_ON_SHEET_6,
  TRS_TECHDOM_INFO FROM EWO_REF_TRS_HISTORY);
```
or try with `NOT IN`. Hope it helps.
|
Oracle SQL query SELECT MINUS SELECT not working or too slow
|
[
"",
"sql",
"oracle",
""
] |
I know you can change the default value of an existing column like [this](https://stackoverflow.com/questions/6791675/how-to-set-a-default-value-for-an-existing-column):
```
ALTER TABLE Employee ADD CONSTRAINT DF_SomeName DEFAULT N'SANDNES' FOR CityBorn;
```
But according to [this](http://msdn.microsoft.com/en-us/library/ms187742%28v=sql.110%29.aspx) my query supposed to work:
```
ALTER TABLE MyTable ALTER COLUMN CreateDate DATETIME NOT NULL
CONSTRAINT DF_Constraint DEFAULT GetDate()
```
So here I'm trying to make my column NOT NULL and also set the default value, but I'm getting an Incorrect Syntax error near CONSTRAINT. Am I missing something?
|
I think the issue here is confusion between the `Create Table` and `Alter Table` commands.
If we look at `Create Table`, then we can add a default value and a default constraint at the same time:
```
<column_definition> ::=
column_name <data_type>
[ FILESTREAM ]
[ COLLATE collation_name ]
[ SPARSE ]
[ NULL | NOT NULL ]
[
[ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
| [ IDENTITY [ ( seed,increment ) ] [ NOT FOR REPLICATION ]
]
[ ROWGUIDCOL ]
[ <column_constraint> [ ...n ] ]
[ <column_index> ]
ex:
CREATE TABLE dbo.Employee
(
CreateDate datetime NOT NULL
CONSTRAINT DF_Constraint DEFAULT (getdate())
)
ON PRIMARY;
```
you can check for complete definition here:
<http://msdn.microsoft.com/en-IN/library/ms174979.aspx>
but if we look at the `Alter Table` definition, then with `ALTER TABLE ALTER COLUMN` you cannot add a
`CONSTRAINT`; the options available for `ADD` are:
```
| ADD
{
<column_definition>
| <computed_column_definition>
| <table_constraint>
| <column_set_definition>
} [ ,...n ]
```
Check here: <http://msdn.microsoft.com/en-in/library/ms190273.aspx>
So you will have to write two different statements, one for altering the column:
```
ALTER TABLE MyTable ALTER COLUMN CreateDate DATETIME NOT NULL;
```
and another for altering table and add a default constraint
`ALTER TABLE MyTable ADD CONSTRAINT DF_Constraint DEFAULT GetDate() FOR CreateDate;`
Hope this helps!!!
|
There is no direct way to change the default value of a column in SQL Server, but the following parameterized script will do the work:
```
DECLARE @table NVARCHAR(100);
DECLARE @column NVARCHAR(100);
DECLARE @newDefault NVARCHAR(100);
SET @table = N'TableName';
SET @column = N'ColumnName';
SET @newDefault = N'0';
IF EXISTS (
SELECT name
FROM sys.default_constraints
WHERE parent_object_id = OBJECT_ID(@table)
AND parent_column_id = COLUMNPROPERTY(OBJECT_ID(@table), @column, 'ColumnId')
)
BEGIN
DECLARE @constraintName AS NVARCHAR(200);
DECLARE @constraintQuery AS NVARCHAR(2000);
SELECT @constraintName = name
FROM sys.default_constraints
WHERE parent_object_id = OBJECT_ID(@table)
AND parent_column_id = COLUMNPROPERTY(OBJECT_ID(@table), @column, 'ColumnId');
SET @constraintQuery = N'ALTER TABLE ' + @table + N' DROP CONSTRAINT '
+ @constraintName + N'; ALTER TABLE ' + @table + N' ADD CONSTRAINT '
+ @constraintName + N' DEFAULT ' + @newDefault + N' FOR ' + @column;
EXECUTE sp_executesql @constraintQuery;
END;
```
Just fill in the parameters and execute. The script removes the existing constraint and creates a new one with the designated default value.
|
Alter column default value
|
[
"",
"sql",
"sql-server",
"default-value",
"notnull",
""
] |
I have a table like the one given below, in Oracle:
```
[Products]
Product_ID | Product_NME | Product_SUP | Quantity
=================================================
1 Apple USA 100
2 Fish Japan 50
3 Wine Italy 10
4 Apple China 30
5 Fish Germany 10
```
I need a query that will find the total `Quantity` for every distinct `Product_NME`.
The expected result should be:
* apple 130
* fish 60
* wine 10
I've tried to modify it like the one [shown here](https://stackoverflow.com/questions/15512628/sql-query-with-distinct-and-sum) as:
```
SELECT
distinct(Product_NME, Product_SUP), sum(Quantity)
FROM
Products
```
But it doesn't fit my case. I've also tried this one:
```
SELECT DISTINCT Product_NME
FROM Products
UNION
SELECT SUM(Quantity) FROM Products
```
But it is also not working.
Can anyone help me with this?
Thanks
|
`DISTINCT` is not the clause you are looking for!
`GROUP BY` is.
The following query will return with all products and the total quantity for each one.
```
SELECT
Product_NME
, SUM(Quantity) AS TotalQuantity
FROM
Products
GROUP BY
Product_NME
```
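A quick check of the grouped sum with Python's `sqlite3`, using the question's sample quantities:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (Product_NME TEXT, Quantity INTEGER)")
conn.executemany(
    "INSERT INTO Products VALUES (?, ?)",
    [("Apple", 100), ("Fish", 50), ("Wine", 10), ("Apple", 30), ("Fish", 10)],
)

# GROUP BY folds duplicate names into one row, summing their quantities.
rows = conn.execute(
    """
    SELECT Product_NME, SUM(Quantity) AS TotalQuantity
    FROM Products
    GROUP BY Product_NME
    ORDER BY Product_NME
    """
).fetchall()
print(rows)  # one total per product name
```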
|
You don't need DISTINCT; just group by product name:
```
SELECT Product_NME , SUM(Quantity) AS TotalQTY
FROM Products
GROUP BY Product_NME
```
|
How to combine SELECT DISTINCT and SUM()
|
[
"",
"sql",
"select",
""
] |
Here is my code in SQL Server.
I have an error saying "The multi-part identifier "t2.isactive" could not be bound."
Also, I want to do this update in one command and one command only.
Thanks for the help.
```
UPDATE tb_active_priority_alert
SET
priority_alert_guid = t2.priority_alert_guid,
priority_alert_title = t2.priority_alert_title,
priority_alert_zone = t2.priority_alert_zone,
priority_alert_color =t2.priority_alert_color,
priority_alert_allow_cancel = t2.priority_alert_allow_cancel,
priority_alert_time_duration = t2.priority_alert_time_duration,
priority_alert_type = t2.priority_alert_type,
priority_alert_text = t2.priority_alert_text,
web_url =t2.web_url,
video_url = t2.video_url,
video_style = t2.video_style,
banner_playlist_guid = t2.banner_playlist_guid,
signage = t2.signage,
signage_guid = t2.signage_guid,
alert_icon = t2.alert_icon,
isactive ='true',
user_guid =t2.user_guid,
creation_datetime = GETDATE(),
expiration_datetime = dateadd(MINUTE,t2.priority_alert_time_duration,GETDATE()),
t2.isactive='true'
from tb_users_priority_alerts t2 left outer join tb_active_priority_alert t1 on t2.priority_alert_guid =@priority_alert_guid
WHERE t2.priority_alert_guid =@priority_alert_guid;
```
|
You can't make it work; the statement:
```
t2.isactive='true'
```
is trying to update a table that is not the one stated in the first row of your statement.
An UPDATE instruction can update data in a single table/object; what you are trying to do is update data in 2 different tables at once, and this is not supported.
[here](https://stackoverflow.com/questions/5154615/how-to-update-multiple-tables-at-the-same-time) you can find the very same question answered on SO.
[here](http://msdn.microsoft.com/en-us//library/ms177523.aspx) you can find official MS documentation about `UPDATE`; it is not explicitly written but in each and every reference to the item to be updated you will find a single object as target and even in the syntax guide there is only one target expected.
you can go with 2 separate update statements in a transaction.
|
You have an error in your join with t1. You have to join the two tables on a specific id/value/whatever.
try this instead:
```
from tb_users_priority_alerts t2
left join tb_active_priority_alert t1 on t2.priority_alert_guid = t1.priority_alert_guid
```
instead of t2.priority\_alert\_guid =@priority\_alert\_guid
|
update in sql server with joined table
|
[
"",
"sql",
"sql-server",
""
] |
I am using the following script to insert 100,000 records into a table. Basically, ints from 500,001 to 600,000 are inserted. I am casting the integer into a string before inserting because that's how I want it in the table (an integer in the form of a string). I am using a merge to check whether the record already exists.
```
DECLARE @first AS INT
SET @first = 500001
DECLARE @step AS INT
SET @step = 1
DECLARE @last AS INT
SET @last = 600000
BEGIN TRANSACTION
WHILE(@first <= @last)
BEGIN
MERGE dbo.Identifiers As target
USING (SELECT CAST(@first as varchar(10)) AS Identifier) AS source
ON (source.Identifier = target.Identifier)
WHEN NOT MATCHED THEN
INSERT (Identifier)
VALUES (source.Identifier);
SET @first += @step
END
COMMIT TRANSACTION
```
It's taking more than 2 minutes to run. I am doing something terribly wrong, but I'm unable to trace out where.
Note: The table has a unique non-clustered index on the Identifier column.
|
I am wondering how much your procedural looping and the `MERGE` (instead of a simple `INSERT`) contributes to bad performance. I would opt for a strictly set-based solution like this:
```
INSERT INTO dbo.Identifiers (Identifier)
SELECT n FROM dbo.GetNums(500001, 600000)
WHERE n NOT IN (SELECT Identifier FROM dbo.Identifiers);
```
Now, this relies on a user-defined table-valued function `dbo.GetNums` that returns a table containing all numbers between 500,001 and 600,000 in a column called `n`. How do you write that function? You need to generate a range of numbers on the fly inside it.
The following implementation is taken from the book [*"Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions"* by Itzik Ben-Gan](https://www.microsoft.com/learning/en-us/book.aspx?id=15759).
```
CREATE FUNCTION dbo.GetNums(@low AS BIGINT, @high AS BIGINT) RETURNS TABLE
AS
RETURN
WITH L0 AS (SELECT c FROM (VALUES(1),(1)) AS D(c)),
L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rownum FROM L5)
SELECT @low + rownum - 1 AS n
FROM Nums
ORDER BY rownum
OFFSET 0 ROWS FETCH FIRST @high - @low + 1 ROWS ONLY;
```
(Since this comes from a book on SQL Server 2012, it might not work on SQL Server 2008 out-of-the-box, but it should be possible to adapt.)
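The set-based idea is easy to sanity-check outside SQL Server. Below is a sketch using Python's bundled sqlite3: a recursive CTE stands in for `dbo.GetNums` (sqlite has no table-valued functions), and `INSERT OR IGNORE` against the unique index plays the role of the `NOT MATCHED` check. Table and column names follow the question; everything else is an assumption for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Identifiers (Identifier TEXT UNIQUE)")
# Pre-seed a couple of existing rows so the duplicate check has work to do.
conn.executemany("INSERT INTO Identifiers VALUES (?)", [("500001",), ("500002",)])

# One set-based statement instead of 100,000 MERGE round trips.
conn.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT 500001
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 600000
    )
    INSERT OR IGNORE INTO Identifiers (Identifier)
    SELECT CAST(n AS TEXT) FROM nums
""")
conn.commit()

row_count = conn.execute("SELECT COUNT(*) FROM Identifiers").fetchone()[0]
print(row_count)  # 100000 distinct identifiers; the two seeded duplicates were skipped
```

On an in-memory database this completes in well under a second, which illustrates how much of the original two minutes was per-row overhead rather than data volume.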
|
Try this one. It uses a tally table. Reference: <http://www.sqlservercentral.com/articles/T-SQL/62867/>
```
create table #temp_table(
N int
)
declare @first as int
set @first = 500001
declare @step as int
set @step = 1
declare @last as int
set @last = 600000
with
e1 as(select 1 as N union all select 1), --2 rows
e2 as(select 1 as N from e1 as a, e1 as b), --4 rows
e3 as(select 1 as N from e2 as a, e2 as b), --16 rows
e4 as(select 1 as N from e3 as a, e3 as b), --256 rows
e5 as(select 1 as N from e4 as a, e4 as b), --65,536 rows
e6 as(select 1 as N from e5 as a, e1 as b), -- 131,072 rows
tally as (select 500000 + (row_number() over(order by N) * @step) as N from e6) -- change 500000 with desired start
insert into #temp_table
select cast(N as varchar(10))
from tally t
where
N >= @first
and N <=@last
and not exists(
select 1 from #temp_table where N = t.N
)
drop table #temp_table
```
|
Fastest way to insert 100000 records into SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have two tables, one containing a path and the other containing a filename. Here's an example:
```
t1
- file.mov
- myfile.txt
- 2file.py
t2
- /new/path/file.mov
- /path/hello.txt
- /path/2file.py
```
I want to build a query to get me all the filenames for which I have a path:
```
- file.mov /new/path/file.mov
- 2file.py /path/2file.py
```
What would be the query I could use to find this? The closest I could think of is using `IN`, but that is based on an exact match and not a `LIKE`, which is what I need to use:
```
SELECT name FROM names WHERE name in (SELECT path FROM paths);
```
|
Using an `INNER JOIN` between to 2 tables will allow you to list either or both the file names and paths
```
SELECT names.name, paths.path
FROM names
INNER JOIN paths ON names.name = SUBSTRING_INDEX(paths.path, '/', -1)
;
```
However the output will be (from the sample):
```
| NAME | PATH |
|----------|--------------------|
| file.mov | /new/path/file.mov |
| 2file.py | /path/2file.py |
```
because you do not have a path for "myfile.txt", or a name for "/path/hello.txt", (so this result does not match the result shown in the question).
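As a portable cross-check of the basename-matching idea: sqlite has no `SUBSTRING_INDEX`, but the same join can be expressed as a `/`-suffix match with `LIKE`. This is only a sketch (it would misfire if a filename contained the `%` or `_` wildcard characters), with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE names (name TEXT);
    CREATE TABLE paths (path TEXT);
    INSERT INTO names VALUES ('file.mov'), ('myfile.txt'), ('2file.py');
    INSERT INTO paths VALUES ('/new/path/file.mov'), ('/path/hello.txt'), ('/path/2file.py');
""")

# Match the basename as a '/'-suffix instead of extracting it with SUBSTRING_INDEX.
rows = conn.execute("""
    SELECT n.name, p.path
    FROM names n
    JOIN paths p ON p.path LIKE '%/' || n.name
    ORDER BY n.name
""").fetchall()
print(rows)
```

As in the answer above, `myfile.txt` and `/path/hello.txt` drop out because they have no counterpart.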
|
```
SELECT name
FROM names
WHERE name in
(SELECT SUBSTRING_INDEX(path, '/', -1) FROM paths);
```
|
SQL name in list of items
|
[
"",
"mysql",
"sql",
""
] |
[SQL Fiddle](http://sqlfiddle.com/#!6/e1116/1)
I'm trying, without success, to change an iterative/cursor query (that works fine) into a relational set-based query to achieve better performance.
What I have:
**table1**
```
| ID | NAME |
|----|------|
| 1 | A |
| 2 | B |
| 3 | C |
```
Using a function, I want to insert my data into another table. The following function is a simplified example:
**Function**
```
CREATE FUNCTION fn_myExampleFunction
(
@input nvarchar(50)
)
RETURNS @ret_table TABLE
(
output nvarchar(50)
)
AS
BEGIN
IF @input = 'A'
INSERT INTO @ret_table VALUES ('Alice')
ELSE IF @input = 'B'
INSERT INTO @ret_table VALUES ('Bob')
ELSE
INSERT INTO @ret_table VALUES ('Foo'), ('Bar')
RETURN
END;
```
My expected result is to insert data in table2 like the following:
**table2**
```
| ID | NAME |
|----|-------|
| 1 | Alice |
| 2 | Bob |
| 3 | Foo |
| 3 | Bar |
```
To achieve this, I've tried some CTEs (Common Table Expressions) and relational queries, but none worked as desired. The only working solution that I've got so far is an iterative one that performs poorly.
**My current working solution**:
```
BEGIN
DECLARE
@ID int,
@i int = 0,
@max int = (SELECT COUNT(name) FROM table1)
WHILE ( @i < @max ) -- In this example, it will iterate 3 times
BEGIN
SET @i += 1
-- Select table1.ID where row_number() = @i
SET @ID =
(SELECT
id
FROM
(SELECT
id,
ROW_NUMBER() OVER (ORDER BY id) as rn
FROM
table1) rows
WHERE
rows.rn = @i
)
-- Insert into table2 one or more rows related with table1.ID
INSERT INTO table2
(id, name)
SELECT
@ID,
fn_result.output
FROM
fn_myExampleFunction (
(SELECT name FROM table1 WHERE id = @ID)
) fn_result
END
END
```
**The objective is to achieve the same without iterating through the IDs.**
|
If the question is about how to apply a function in a set-oriented way, then `cross apply` (or `outer apply`) is your friend:
```
insert into table2 (
id, name
) select
t1.id,
t2.output
from
table1 t1
cross apply
fn_myExampleFunction(t1.name) t2
```
[Example SQLFiddle](http://sqlfiddle.com/#!6/e1116/2)
If the non-simplified version of your function is amenable to rewriting, the other solutions will likely be faster.
|
A query like this will do what you want:
```
insert into table2(id, name)
select id, (case when name = 'A' then 'Alice'
when name = 'B' then 'Bob'
when name = 'C' then 'Foo'
end)
from table1
union all
select id, 'Bar'
from table1
where name = 'C';
```
|
Change an iterative query to a relational set-based query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a master table and detail table like below
**Master Table**
```
Id code Name
-------------------
1 00 qqq
```
**Detail Table**
```
Id code Name
-------------------
1 01 xyz
1 02 pqr
1 03 abc
1 04 aaa
```
now I need the result like below
**Result**
```
Id code Name
-----------------
1 00 qqq
1 01 xyz
1 02 pqr
1 03 abc
1 04 aaa
```
***I like to avoid UNION***
Thanks in advance :)
|
Well, I don't know why you don't want to use Union.
But you can use a [**FULL OUTER JOIN**](http://www.w3schools.com/sql/sql_join_full.asp) for that, if you don't want to use UNION. Something like this -
```
SELECT COALESCE(Master1.id, Detail.id) AS id
,COALESCE(Master1.code, Detail.code) AS code
,COALESCE(Master1.NAME, Detail.NAME) AS NAME
FROM MASTER1
FULL OUTER JOIN DETAIL ON Master1.code = Detail.code
```
## Output
```
Id code Name
-----------------
1 00 qqq
1 01 xyz
1 02 pqr
1 03 abc
1 04 aaa
```
Check SQL Fiddle
## [Here](http://sqlfiddle.com/#!3/6abdd/1)
|
If the table is not too large, you can create a temporary table and insert the data from both master and detail:

```
insert into temp_table select * from detail;
insert into temp_table select * from master;
```

Or, if you want, you can combine the data directly into the master table:

```
insert into master select * from detail;
```
Join master and detail tables as one table
|
[
"",
"sql",
"sql-server",
""
] |
This has probably been covered before, but my skill level with SQL is low enough that I'm not even sure how to properly search for what I want!
I think what I want to do is relatively simple, and it seems that using pivoting might be the solution, but I'm really not sure about the syntax for that.
I have a SELECT query which can return data in this format:
```
TeamID | PlayerName
--------+-------------------
1 | Arthur Schiller
1 | Dimitre Rogatchev
1 | Mohamed Hechmeh
1 | Santosh Pradhan
2 | Adriano Ferrari
2 | Amanda Compagnone
2 | Danail Delchev
2 | David Bergin
```
I want to create columns from the ID with rows filled with the appropriate names, like this:
```
1 | 2
------------------+-------------------
Arthur Schiller | Adriano Ferrari
Dimitre Rogatchev| Amanda Compagnone
Mohamed Hechmeh | Danail Delchev
Santosh Pradhan | David Bergin
```
The purpose is to use the returned data in a php mysqli\_fetch\_assoc call to display the two columns on a website.
Thanks in advance for any help!
|
Unfortunately MySQL doesn't support windowing functions to generate a unique value for each of those rows per team. You can create a derived "row number" using variables or you could use a correlated subquery similar to:
```
select
max(case when teamid = 1 then playername else '' end) Team1,
max(case when teamid = 2 then playername else '' end) Team2
from
(
select TeamId,
PlayerName,
(select count(*)
from yourtable d
where t.teamId = d.TeamId
and t.playername <= d.PlayerName) rn
from yourtable t
) d
group by rn;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/79aab4/3).
**Note:** depending on the size of your data you might have some performance issues. This works great with smaller datasets.
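For a quick check of the correlated-subquery approach on any engine, here is a sketch against Python's sqlite3 with the sample roster from the question. The subquery assigns each player a per-team rank by counting names less than or equal to their own, and `MAX(CASE ...)` then pivots the two teams into columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (teamid INT, playername TEXT)")
conn.executemany("INSERT INTO players VALUES (?, ?)", [
    (1, 'Arthur Schiller'), (1, 'Dimitre Rogatchev'),
    (1, 'Mohamed Hechmeh'), (1, 'Santosh Pradhan'),
    (2, 'Adriano Ferrari'), (2, 'Amanda Compagnone'),
    (2, 'Danail Delchev'), (2, 'David Bergin'),
])

# Derive a per-team row number with a correlated subquery, then pivot with MAX(CASE ...).
rows = conn.execute("""
    SELECT
        MAX(CASE WHEN teamid = 1 THEN playername ELSE '' END) AS team1,
        MAX(CASE WHEN teamid = 2 THEN playername ELSE '' END) AS team2
    FROM (
        SELECT teamid, playername,
               (SELECT COUNT(*) FROM players d
                WHERE d.teamid = p.teamid AND d.playername <= p.playername) AS rn
        FROM players p
    )
    GROUP BY rn
    ORDER BY rn
""").fetchall()
print(rows)
```

Each result tuple is one output row of the pivoted table, pairing the nth player of team 1 with the nth player of team 2.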
|
You cannot do this with a pivot because there is no common ground to pivot off of. What you can do is make a row count that you join on for the second team. Try this:
```
SELECT t.playername as '1', f.playername as '2'
FROM
( SELECT @a := @a + 1 as id, playername
FROM players
WHERE teamid = 1
)t
LEFT JOIN
( SELECT @b := @b +1 as id , playername
FROM players
WHERE teamid = 2
)f ON f.id = t.id
CROSS JOIN (SELECT @a :=0, @b :=0)t1
```
[DEMO](http://sqlfiddle.com/#!2/ee3a0e/1)
|
SQL query to pivot table on column value
|
[
"",
"mysql",
"sql",
"pivot",
""
] |
I am **trying to install and test a MySQL ODBC Connector** on my machine (Windows 7) to connect to a remote MySQL DB server, but, when I configure and test the connection, I keep getting the following error:
```
Connection Failed
[MySQL][ODBC 5.3(w) Driver]Access denied for user 'root'@'(my host)' (using password: YES):
```
The problem is, **I can connect with MySQL Workbench (remotely - from my local machine to the remote server) just fine.** I have read [this FAQ](http://dev.mysql.com/doc/refman/5.6/en/access-denied.html) extensively but it's not helping out. I have tried:
* Checking if mysql is running on the server (it is. I even tried restarting it many times);
* Checking if the port is listening for connection on the remote server. It is.
* Connecting to the remote server using MySQL Workbench. It works.
* Checking if the IP address and Ports of the remote database are correct;
* Checking if the user (root) and password are correct;
* Re-entering the password on the ODBC config window;
* Checking and modifying the contents of the "my.conf" on the remote server to allow connections from all sides (0.0.0.0);
* Including (my host) on the GRANT HOST tables from mySQL (I also tried the wildcard '%' but it's the same as nothing);
* Running a FLUSH HOSTS; And FLUSH PRIVILEGES; command on the remote mySQL server to reset the privilege cache;
* Turning off my Firewall during the configuration of the ODBC driver;
* Checked if the MySQL variable 'skip\_networking' is OFF in order to allow remote connections.
What is frustrating is that I can connect with MySQL Workbench on my local machine (with the same IP/user/password), just not with ODBC.
What could I be doing wrong, or what could be messing up my attempt to connect with ODBC?
**Update**: I managed to set up the ODBC driver and get it running correctly on the server side. I can connect there to the localhost using the command line (with the "isql" command). But I still can't connect remotely from my Windows 7 machine.
|
Solved.
As it turns out, it was a permissions problem. I ran the following command on the remote server SQL:
```
GRANT ALL PRIVILEGES ON *.* TO 'root'@'(my_host)' IDENTIFIED BY '(my_password)';
```
I had run the previous command, but without the "IDENTIFIED BY" password. Then, to reset the mysql permissions cache, I also ran
```
FLUSH PRIVILEGES;
```
And now it works.
|
This worked for me too, but only with the 64-bit ODBC driver, not the 32-bit MySQL driver.
|
MySQL - ODBC connect fails, Workbench connect works
|
[
"",
"mysql",
"sql",
"database",
"windows",
"odbc",
""
] |
I have dates in different format :
Date 1 : `30-JAN-14`
Date 2 : `2014-01-30 10:06:04.0`
I am trying to compare the two dates using the following :
```
to_char(date2,'YYYY-MM-DD HH24:MM:SS') <> to_char(date1,'dd-MON-YY')
```
But it is giving the following error for the date1 part :
```
literal does not match format string
```
I tried comparing them directly
```
date2<>date1
```
I am getting the same error again.
EDIT : Date2 is varchar and Date1 is DATE
|
Probably easiest to use `substr` and truncate the timestamp from the `date2` column. Then you can use `to_date` to compare:
```
select *
from sometable
where date1 <> to_date(substr(date2,1,10), 'YYYY-MM-DD')
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!4/8302a2/1)
|
You should convert both values to the same date type (or the same string format) before comparing them.
|
Compare Dates in different format oracle sql
|
[
"",
"sql",
"oracle",
"date",
"oracle-sqldeveloper",
""
] |
Consider this data from TestTable:

I need a view that will provide that most recent beginning StatusDate where IsAssist is true. For example, the most recent StatusDate where IsAssist is true is 2014-05-25. Since the immediately previous records are also IsAssist = true, I need to include those in the calculation. So the beginning of the "IsAssist Block" is 2014-05-22. This is the date I need.
How can I write a view that will provide this?
|
Ok - I managed to figure this out with the help of JL Peyret's response. I had to reverse the parameters for the between statement (when did MS do this? Has it always been this way?).
What it does is determine the low date by getting the StatusDate where IsAssist = 0 and is less than the max StatusDate where IsAssist = 1. The high date is simply the max StatusDate where IsAssist = 1.
Check between these dates where IsAssist = 1 and there you have it, the most recent block of records where IsAssist = 1.
I realize this isn't quite finished yet. I have to cover for the possibility that the low date calculation might fail because there aren't any records where IsAssist = 0. Details...
JL was very close and deserves credit for getting me on the right path. Thanks!
```
select *
from TestTable
where StatusDate
between
/* the low date */
(select max(StatusDate)
from TestTable
Where IsAssist = 0
and StatusDate < (select max(StatusDate) from TestTable s where IsAssist = 1))
and
/* the high date */
(select max(StatusDate) from TestTable s where IsAssist = 1)
and IsAssist = 1
```
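Since the original TestTable only exists as a screenshot, here is a sketch against Python's sqlite3 with *assumed* rows that follow the narrative (an assist block beginning 2014-05-22, preceded by non-assist rows), just to sanity-check the BETWEEN logic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable (StatusDate TEXT, IsAssist INT)")
# Assumed reconstruction of the screenshot data: two non-assist rows, then the block.
conn.executemany("INSERT INTO TestTable VALUES (?, ?)", [
    ('2014-05-20', 0), ('2014-05-21', 0),
    ('2014-05-22', 1), ('2014-05-23', 1), ('2014-05-25', 1),
])

rows = conn.execute("""
    SELECT StatusDate FROM TestTable
    WHERE StatusDate BETWEEN
        (SELECT MAX(StatusDate) FROM TestTable
         WHERE IsAssist = 0
           AND StatusDate < (SELECT MAX(StatusDate) FROM TestTable WHERE IsAssist = 1))
    AND (SELECT MAX(StatusDate) FROM TestTable WHERE IsAssist = 1)
      AND IsAssist = 1
    ORDER BY StatusDate
""").fetchall()
block_start = rows[0][0]
print(block_start)  # beginning of the most recent IsAssist block
```

The low bound resolves to the last non-assist date (2014-05-21), so the earliest row the query returns is the start of the block, 2014-05-22, matching the narrative above.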
|
this should give you a good start
```
Select s.tableid, s.StatusDate, e.TableId, e.StatusDate
from TestTable s -- for start
left join TestTable e -- for end
on e.statusdate =
(Select Min(StatusDate)
From TestTable
Where StatusDate > s.StatusDate
and isAssist = 1
and Not exists
(Select * From TestTable
Where StatusDate Between s.StatusDate and e.StatusDate
and isAssist = 0))
Where s.IsAssist = 1
```
|
SQL Server View to Return Consecutive rows with a Specific Value
|
[
"",
"sql",
"view",
""
] |
To lookup a country for a phone number prefix I running the following query:
```
SELECT country_id
FROM phonenumber_prefix
WHERE '<myphonnumber>' LIKE prefix ||'%'
ORDER BY LENGTH(calling_prefix) DESC
LIMIT 1
```
To query phone numbers from a table I run a query like:
```
SELECT phonenumber
FROM phonenumbers
```
Now I want to combine those query into one, to get countries for all phone numbers. I know that I could put the first query into a function e.g. getCountry() and then query
```
SELECT phonenumber, getCountry(phonenumber)
FROM phonenumbers
```
But is there also a way to to do this with joins in one query, I'm using postgresql 9.2?
|
You can do this with a correlated subquery:
```
SELECT phonenumber,
(SELECT country_id
FROM phonenumber_prefix pp
WHERE pn.phonenumber LIKE prefix ||'%'
ORDER BY LENGTH(calling_prefix) DESC
LIMIT 1
) as country_id
FROM phonenumbers pn;
```
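The correlated subquery ports directly to other engines; here is a sketch with Python's sqlite3 and some made-up prefixes and numbers (the country codes are illustrative assumptions, not real routing data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE phonenumber_prefix (prefix TEXT, country_id TEXT);
    CREATE TABLE phonenumbers (phonenumber TEXT);
    INSERT INTO phonenumber_prefix VALUES
        ('1', 'US'), ('44', 'GB'), ('49', 'DE'), ('491', 'DE-mobile');
    INSERT INTO phonenumbers VALUES ('4915112345678'), ('442071234567');
""")

# The longest matching prefix wins, thanks to ORDER BY LENGTH(...) DESC LIMIT 1.
rows = conn.execute("""
    SELECT phonenumber,
           (SELECT country_id
            FROM phonenumber_prefix pp
            WHERE pn.phonenumber LIKE pp.prefix || '%'
            ORDER BY LENGTH(pp.prefix) DESC
            LIMIT 1) AS country_id
    FROM phonenumbers pn
    ORDER BY phonenumber
""").fetchall()
print(rows)
```

Note that '4915112345678' matches both '49' and '491', and the length-descending order makes the more specific '491' win.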
|
This will give you list of numbers with corresponding country ids for the longest matching prefix:
```
SELECT * FROM (
SELECT
p.phonenumber, pc.country_id,
ROW_NUMBER() OVER (PARTITION BY phonenumber ORDER BY LENGTH(pc.prefix) DESC) rn
FROM
phonenumber_prefix pc INNER JOIN
phonenumbers p ON p.phonenumber LIKE pc.prefix || '%' ) t
WHERE t.rn = 1
```
|
Lookup table with best match query
|
[
"",
"sql",
"postgresql",
""
] |
I have a UNION statement with different datatypes and want to sort the results.
```
SELECT * FROM
(SELECT name, 'P' as 'type', to_char(order_number) as order_type FROM abc
UNION ALL
SELECT name, 'T' as 'type', to_char(name) as order_type FROM def
)
ORDER BY
CASE type
WHEN 'P' THEN order_type
ELSE
order_type
END
```
This works so far.
Now the content of order\_type from table `abc` is integer and from table `def` is varchar.
That's why the order of the result is wrong (e.g. 1000 comes before 11).
I tried using
```
ORDER BY
CASE type WHEN 'P' THEN CAST(order_type AS NUMBER)
ELSE order_type END
```
in the order part but I'm getting
```
INCONSISTENT DATATYPES
```
What am I doing wrong?
table contents:
`abc`:
```
name | order_number
'Example 1' | 10001
'Example 2' | 11
```
`def`:
```
name | order_number
'Example 4' | 0
'Example 3' | 0
```
Expected Result:
```
Example 2
Example 1
Example 3
Example 4
```
|
Using the constant `'4'` or `4` in the order by doesn't achieve anything; that won't be translated to the column position.
Whatever the original data types, the union will present the data from both branches as the same type (determined by the first branch). You've got `to_char(c)` which means `order_type` is a string; `f` is already a string but even if it was a number it would be implicitly converted to match.
```
CASE type WHEN 'P' THEN CAST(order_type AS NUMBER)
ELSE order_type END
```
When you do this order\_type is a string; the `then` is turning it into a number, the `else` is not, so the data type is different.
If you want to order numerically then make `order_type` numeric and just order by that:
```
SELECT * FROM
(SELECT a,b,'P' as "type", c as order_type FROM abc
UNION ALL
SELECT d,e,'T' as "type", to_number(f) as order_type FROM def
)
ORDER BY order_type;
```
Or if you need the `order_type` in the result set to be a string, convert it back in the order-by clause:
```
SELECT * FROM
(SELECT a,b,'P' as "type", to_char(c) as order_type FROM abc
UNION ALL
SELECT d,e,'T' as "type", f as order_type FROM def
)
ORDER BY to_number(order_type);
```
... but that seems rather redundant.
Of course, this assumes all the values in `f` are actually valid numbers stored as strings (which is a whole different topic). If they cannot all be converted then you'll get an invalid-number error at some point either way; and then your best bet might be to pad the string result as @KimBergHansen suggests, though as he said non-integer values might give odd results, and you'd need to pick a suitably large length.
---
Based on your question edit, you seem to want the `abc` values first sorted by `order_num`, then the `def` values sorted by `name`. In that case use multiple elements:
```
SELECT * FROM
(SELECT name, 'P' as order_type, order_num FROM abc
UNION ALL
SELECT name,'T' as order_type, null as order_num FROM def
)
ORDER BY CASE WHEN order_type = 'P' THEN 1 ELSE 2 END,
order_num,
name;
NAME ORDER_TYPE ORDER_NUM
---------- ---------- ----------
Example 2 P 11
Example 1 P 10001
Example 3 T
Example 4 T
```
[SQL Fiddle](http://sqlfiddle.com/#!4/9a051/1) from your sample data.
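The multi-key ORDER BY trick is engine-agnostic; a quick sketch with Python's sqlite3, reusing the sample data from the question, shows the P-rows sorted numerically first and the T-rows sorted by name after them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE abc (name TEXT, order_number INT);
    CREATE TABLE def (name TEXT, order_number INT);
    INSERT INTO abc VALUES ('Example 1', 10001), ('Example 2', 11);
    INSERT INTO def VALUES ('Example 4', 0), ('Example 3', 0);
""")

# Keep order_num numeric through the union; sort P-rows numerically, then T-rows by name.
rows = conn.execute("""
    SELECT name, order_type, order_num FROM (
        SELECT name, 'P' AS order_type, order_number AS order_num FROM abc
        UNION ALL
        SELECT name, 'T' AS order_type, NULL AS order_num FROM def
    )
    ORDER BY CASE WHEN order_type = 'P' THEN 1 ELSE 2 END, order_num, name
""").fetchall()
names = [r[0] for r in rows]
print(names)
```

This reproduces the expected result from the question: Example 2 (11) before Example 1 (10001), then Example 3 and 4 alphabetically.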
|
Your column ORDER\_TYPE is a string and will sort as a string, so 11 comes before 2. You want to order those strings coming from table ABC as numbers and those strings coming from table DEF alphabetically.
One way could be:
```
ORDER BY CASE type WHEN 'P' THEN lpad(order_type,20,'0') ELSE order_type END
```
That order by is string ordering all the time, but by left-padding the numbers with zeroes, that will be ordered numerically (as you state it is integer data - if you had fractions that could complicate it a bit ;-)
|
CAST with different datatypes from UNION Select
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I am getting the following error when I try to run my package. I am new to SSIS. Any suggestions? Thanks
===================================
Package Validation Error (Package Validation Error)
===================================
Error at Data Flow Task [SSIS.Pipeline]: "OLE DB Source" failed validation and returned validation status "VS\_NEEDSNEWMETADATA".
Error at Data Flow Task [SSIS.Pipeline]: One or more component failed validation.
Error at Data Flow Task: There were errors during task validation.
(Microsoft.DataTransformationServices.VsIntegration)
---
Program Location:
at Microsoft.DataTransformationServices.Project.DataTransformationsPackageDebugger.ValidateAndRunDebugger(Int32 flags, IOutputWindow outputWindow, DataTransformationsProjectConfigurationOptions options)
at Microsoft.DataTransformationServices.Project.DataTransformationsProjectDebugger.LaunchDtsPackage(Int32 launchOptions, ProjectItem startupProjItem, DataTransformationsProjectConfigurationOptions options)
at Microsoft.DataTransformationServices.Project.DataTransformationsProjectDebugger.LaunchActivePackage(Int32 launchOptions)
at Microsoft.DataTransformationServices.Project.DataTransformationsProjectDebugger.LaunchDtsPackage(Int32 launchOptions, DataTransformationsProjectConfigurationOptions options)
at Microsoft.DataTransformationServices.Project.DataTransformationsProjectDebugger.Launch(Int32 launchOptions, DataTransformationsProjectConfigurationOptions options)
|
VS\_NEEDSNEWMETADATA shows up when the underlying data behind one of the tasks changes. The fastest solution will probably be to just delete and re-create each element which is throwing an error.
|
How about disabling validation checks?
If you right-click on the source or destination component and select Properties, you will find a property named ValidateExternalMetadata; set it to False and try again.
This solution is working for me.
|
ssis Package validation error ole db source failed
|
[
"",
"sql",
"ssis-2012",
""
] |
Here's my associations:
```
class User < ActiveRecord::Base
has_many :tickets
end
class Ticket < ActiveRecord::Base
belongs_to :user
end
```
Pretty straightforward. I want a scope on the `User` model which returns all users having exactly ZERO associated tickets.
Presently I'm using Ruby to do it for the most part like so:
```
def self.never_flown
results = []
find_each do |user|
results << user if user.tickets.count == 0
end
results
end
```
... which sucks because this isn't a good use-case for Ruby. But I'm struggling to figure out how to do this using SQL.
Can someone let me know a good, clean SQL solution to this?!
|
Maybe something like: `User.includes(:tickets).where(tickets: {user_id: nil})`
in a scope it would be like `:without_tickets, -> { includes(:tickets).where(tickets: {user_id: nil}) }`
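Under the hood, that scope boils down to a LEFT OUTER JOIN plus an IS NULL filter. A sketch of the equivalent SQL against Python's sqlite3 (schema and names assumed for illustration, mirroring the Rails associations):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, user_id INT);
    INSERT INTO users (id, name) VALUES (1, 'amy'), (2, 'bob'), (3, 'cal');
    INSERT INTO tickets (id, user_id) VALUES (1, 1), (2, 1), (3, 3);
""")

# Users with zero tickets: left join, then keep only the unmatched rows.
rows = conn.execute("""
    SELECT u.name
    FROM users u
    LEFT OUTER JOIN tickets t ON t.user_id = u.id
    WHERE t.user_id IS NULL
""").fetchall()
print(rows)  # only 'bob' has no tickets
```

Crucially, this runs as a single set-based query instead of the original per-user `tickets.count` round trips.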
|
A SQL subquery could be used, which might be more efficient than using `includes()`:
```
User.where('id not in (select user_id from tickets)')
```
|
Rails: get parent records having ZERO has_many associations
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
"ruby-on-rails-4",
""
] |
I am trying to do a simple SQL exercise where I have to get as much info as I can about people who work in Japan in my database. However, I just started learning, so I don't even know what to Google to answer my problem. So here it is:
My Code:
```
SELECT *
FROM Employees
WHERE DEPARTMENT_ID =
(SELECT *
FROM Departments
WHERE LOCATION_ID =
(SELECT *
FROM Locations
WHERE Country_ID =
(SELECT *
FROM Countries
WHERE Country_Name = 'Japan')))
```
My Error:
```
Msg 116, Level 16, State 1, Line 12
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
Msg 116, Level 16, State 1, Line 12
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
Msg 116, Level 16, State 1, Line 12
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
```
My Database:
<https://i.stack.imgur.com/ar0aT.png>
EDIT: I think i hit a wall... <https://i.stack.imgur.com/FjTps.png>
|
You are using `select *` in your nested selects. You need to select a particular column instead.
Something like this:
```
SELECT *
FROM Employees
WHERE DEPARTMENT_ID IN
(SELECT DEPARTMENT_ID
FROM Departments
WHERE LOCATION_ID IN
(SELECT LOCATION_ID
FROM Locations
WHERE Country_ID IN
(SELECT Country_ID
FROM Countries
WHERE Country_Name = 'Japan')))
```
The same query using `JOINS` which are efficient.
```
SELECT *
FROM Employees
INNER JOIN Departments ON Employees.DEPARTMENT_ID = Departments.DEPARTMENT_ID
INNER JOIN Locations ON Departments.LOCATION_ID = Locations.Location_ID
INNER JOIN Countries ON Locations.Country_ID = Countries.Country_ID
WHERE Countries.Country_Name = 'Japan'
```
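Both forms return the same rows; a sketch with Python's sqlite3 and a minimal made-up version of the schema confirms the equivalence:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Countries (Country_ID INT, Country_Name TEXT);
    CREATE TABLE Locations (Location_ID INT, Country_ID INT);
    CREATE TABLE Departments (Department_ID INT, Location_ID INT);
    CREATE TABLE Employees (Employee_Name TEXT, Department_ID INT);
    INSERT INTO Countries VALUES (1, 'Japan'), (2, 'France');
    INSERT INTO Locations VALUES (10, 1), (20, 2);
    INSERT INTO Departments VALUES (100, 10), (200, 20);
    INSERT INTO Employees VALUES ('Aiko', 100), ('Pierre', 200), ('Kenji', 100);
""")

# Nested IN subqueries, one level per table.
in_version = conn.execute("""
    SELECT Employee_Name FROM Employees
    WHERE Department_ID IN (
        SELECT Department_ID FROM Departments
        WHERE Location_ID IN (
            SELECT Location_ID FROM Locations
            WHERE Country_ID IN (
                SELECT Country_ID FROM Countries WHERE Country_Name = 'Japan')))
""").fetchall()

# The same filter expressed as a chain of inner joins.
join_version = conn.execute("""
    SELECT e.Employee_Name
    FROM Employees e
    JOIN Departments d ON e.Department_ID = d.Department_ID
    JOIN Locations l ON d.Location_ID = l.Location_ID
    JOIN Countries c ON l.Country_ID = c.Country_ID
    WHERE c.Country_Name = 'Japan'
""").fetchall()
print(sorted(in_version), sorted(join_version))
```

The names, IDs, and sample rows here are assumptions for illustration, but the two queries select the same employees either way.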
|
You cannot use "SELECT \*" in these subqueries; you have to select a single column. Change your query to this:
```
SELECT *
FROM Employees
WHERE DEPARTMENT_ID IN
(SELECT DEPARTMENT_ID
FROM Departments
WHERE LOCATION_ID =
(SELECT LOCATION_ID
FROM Locations
WHERE Country_ID =
(SELECT Country_ID
FROM Countries
WHERE Country_Name = 'Japan')))
```
|
Error when using nested selects in sql
|
[
"",
"sql",
""
] |
I'm a newbie in SQL Server and I need some help with the JOIN method.
My code is:
```
SELECT TOP 1000
p.value AS userposition,
p2.value AS usercell,
t.id
FROM
[es] t
JOIN
[user] u ON t.user_uid = u.uid
JOIN
[user] su ON u.superior_uid = su.uid
JOIN
[user_params] up ON t.user_uid = up.user_uid
LEFT JOIN
[params] p ON up.param_id = p.id AND p.name_id = 1
JOIN
[user_params] up2 ON t.user_uid = up2.user_uid
LEFT JOIN
[params] p2 ON up2.param_id = p2.id AND p.name_id = 2
```
but it returns duplicated records. I want only as many rows as there are in the [es] table. In MySQL I would use `GROUP BY t.id`, but in SQL Server that method doesn't work.
Thanks in advance.
EDIT (clarification):
Thank you for your replies. Maybe I should describe my tables structure and what I need to display.
```
Table [ES]
[id],[user_uid],[more_data]
Table [User]
[uid],[superior_uid],[more_data]
Table [UserParams]
[id],[user_uid],[param_id]
Table [Params]
[id],[param_id],[value]
```
Now what I need is to get all records from [ES] add user data from [User] add his superior data on [User][superior\_uid] which is also an [User] record, add [Params] with [Params][name\_id] = 1 as value1 AND add [Params] with [Params][name\_id] = 2 as value2 ... through [UserParams] if exists.
I think the problem is with JOIN or GROUP BY. [ES] records with users has no [UserParams] are shown only once, but those with [UserParams] are doubled.I tried LEFT OUTER JOIN but it doesn't work. :(
|
How about
```
SELECT DISTINCT TOP 1000
p.value AS userposition,
p2.value AS usercell,
t.id
FROM [es] t
JOIN [user] u ON t.user_uid = u.uid
JOIN [user] su ON u.superior_uid = su.uid
JOIN [user_params] up ON t.user_uid = up.user_uid
LEFT JOIN [params] p ON up.param_id = p.id AND p.name_id = 1
JOIN [user_params] up2 ON t.user_uid = up2.user_uid
LEFT JOIN [params] p2 ON up2.param_id = p2.id AND p.name_id = 2
ORDER BY (whichever rows that you want it to be ordered by) ?
```
|
All of your columns need to be in the GROUP BY clause, or be part of an aggregate function:
```
p.value AS userposition, #group by or agg func
p2.value AS usercell, #group by or agg func
t.id #group by
```
I wouldn't be certain without knowing what p.value and p2.value actually mean.
|
SQL Server : JOIN query
|
[
"",
"sql",
"sql-server",
"join",
""
] |
If I have these tables:
```
Thing
id | name
---+---------
1 | thing 1
2 | thing 2
3 | thing 3
Photos
id | thing_id | src
---+----------+---------
1 | 1 | thing-i1.jpg
2 | 1 | thing-i2.jpg
3 | 2 | thing2.jpg
Ratings
id | thing_id | rating
---+----------+---------
1 | 1 | 6
2 | 2 | 3
3 | 2 | 4
```
How can I join them to produce
```
id | name | rating | photo
---+---------+--------+--------
1 | thing 1 | 6 | NULL
1 | thing 1 | NULL | thing-i1.jpg
1 | thing 1 | NULL | thing-i2.jpg
2 | thing 2 | 3 | NULL
2 | thing 2 | 4 | NULL
2 | thing 2 | NULL | thing2.jpg
3 | thing 3 | NULL | NULL
```
Ie, left join on each table simultaneously, rather than left joining on one than the next?
[This](http://sqlfiddle.com/#!2/468e48/5) is the closest I can get:
```
SELECT Thing.*, Rating.rating, Photo.src
From Thing
Left Join Photo on Thing.id = Photo.thing_id
Left Join Rating on Thing.id = Rating.thing_id
```
|
You can get the results you want with a union, which seems the most obvious, since you return a field from either ranking or photo.
Your additional case (have none of either), is solved by making the joins `left join` instead of `inner joins`. You will get a duplicate record with `NULL, NULL` in ranking, photo. You can filter this out by moving the lot to a subquery and do `select distinct` on the main query, but the more obvious solution is to replace `union all` by `union`, which also filters out duplicates. Easier and more readable.
```
select
t.id,
t.name,
r.rating,
null as photo
from
Thing t
left join Rating r on r.thing_id = t.id
union
select
t.id,
t.name,
null,
p.src
from
Thing t
left join Photo p on p.thing_id = t.id
order by
id,
photo,
rating
```
|
Here's what I came up with:
```
SELECT
Thing.*,
rp.src,
rp.rating
FROM
Thing
LEFT JOIN (
(
SELECT
Photo.src,
Photo.thing_id AS ptid,
Rating.rating,
Rating.thing_id AS rtid
FROM
Photo
LEFT JOIN Rating
ON 1 = 0
)
UNION
(
SELECT
Photo.src,
Photo.thing_id AS ptid,
Rating.rating,
Rating.thing_id AS rtid
FROM
Rating
LEFT JOIN Photo
ON 1 = 0
)
) AS rp
ON Thing.id IN (rp.rtid, rp.ptid)
```
|
SQL left join two tables independently
|
[
"",
"sql",
"left-join",
""
] |
I have a database which I didn't make and now I have to work on it. I have to insert some information, but some of it must be saved not in one table but in several tables.
I can use the program which created the database and insert information with that. While I am doing that, I want to see which tables are updated. I heard that SQL Server Management Studio has a tool or something which lets us see changes.
Do you know something like that? If not, how can I see changes on the database's tables? If you don't understand my question, please ask what I mean. Thanks
**Edit :** Yes absolutely Sql Profiler is what I want but I am using SQL Server 2008 R2 Express and in Express edition, Sql Profiler tool does not exist in Tools menu option. Now I am looking for how to add it.
**Edit 2 :** Thank you all, especially @SchmitzIT for his pictured answer. I upgraded my SQL Server Management Studio from 2008 R2 Express edition to 2012 Web Developer Edition. SQL Profiler Trace definitely works.
|
I agree with @Lmu92. SQL Server Profiler is what you want.
From SQL Server Management Studio, click on the "Tools" menu option, and then select "SQL Server Profiler" to launch the tool. The profiler will allow you to see statements executed against the database in real time, along with statistics on these statements (time spent handling the request, as well as stats on the impact of a statement on the server itself).
The statistics can be a real help when you're troubleshooting performance, as it can help you identify long running queries, or queries that have a significant impact on your disk system.
On a busy database, you might end up seeing a lot of information zip by, so the key to figuring out what's happening behind the scenes is to ensure that you implement proper filtering on the events.
To do so, after you connect Profiler to your server, in the "Trace properties" screen, click the "Events Selection" tab:

You probably are good to uncheck the boxes in front of the "Audit" columns, as they are not relevant for your specific issue. However, the important bit on this screen is the "Column filters" button:

This is where you will be able to implement filters that only show you the data you want to see. You can, for instance, add a filter to the "ApplicationName", to ensure you only see events generated by an application with the name you specify. Simply click on the "+" sign next to "Like", and you will be able to fill in an application name in the textbox.

You can choose to add additional filters if you want (like "NTUsername" to filter by AD username, or "LoginName" for an SQL Server user.
Once you are satisfied with the results, click "OK", and you will hopefully start seeing some results. Then you can simply use the app to perform the task you want while the profiler trace runs, and stop it once you are done.
You can then scroll through the collected data to see what exactly it has been doing to your database. Results can also be stored as a table for easy querying.
Hope this helps.
|
Although you describe in your question what you want, you don't explain ***why*** you want it. This would be helpful to properly answer your question.
[ExpressProfiler](https://expressprofiler.codeplex.com/) is a free profiler that might meet your needs.
If you're looking to track DDL changes to your database, rather than all queries made against it, you might find [SQL Lighthouse](https://www.simple-talk.com/blogs/2014/09/10/tackling-database-drift-with-sql-lighthouse/) useful, once it is released in Beta shortly.
Disclosure: I work for Red Gate.
|
How can I see which tables are changed in SQL Server?
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I was wondering whether I can change a product's status to 0 if its product\_id hasn't been assigned any category\_id in the other table.
```
+------------+----------+ +------------+-------------+
| Product_id | Status | | Product_id | cateogry_id |
+------------+----------+ +------------+-------------+
| 1 | 1 | | 1 | 10 |
| 2 | 1 | | 3 | 20 |
| 3 | 1 | +------------+-------------+
+------------+----------+
```
As a result, I need product\_id = 2, which doesn't have a category, to get status 0.
Is there a single MySQL query for that?
|
Simple: first you need to get the rows that aren't in the second table.
table1 is the table with the status
table2 is the table with the category id
```
table1 table2
+------------+----------+ +------------+-------------+
| Product_id | Status | | Product_id | cateogry_id |
+------------+----------+ +------------+-------------+
| 1 | 1 | | 1 | 10 |
| 2 | 1 | | 3 | 20 |
| 3 | 1 | +------------+-------------+
+------------+----------+
```
so now run this query to get the rows that don't have a category\_id
```
SELECT product_id
FROM table1 t
LEFT JOIN table2 t2 ON t2.product_id = t.product_id
WHERE t2.product_id IS NULL
```
now update table1 like so
```
UPDATE table1 t,
( SELECT product_id
FROM table1 t
LEFT JOIN table2 t2 ON t2.product_id = t.product_id
WHERE t2.product_id IS NULL
) t1
SET t.status = 0
WHERE t1.product_id = t.product_id
```
[DEMO](http://sqlfiddle.com/#!2/4fa12b/1)
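As a sanity check, the same anti-join pattern can be reproduced with Python's built-in sqlite3 module. Note that SQLite does not support MySQL's multi-table `UPDATE ... JOIN` syntax, so the update step below is expressed with `NOT EXISTS` instead; the table and data mirror the example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (product_id INTEGER, status INTEGER);
    CREATE TABLE table2 (product_id INTEGER, category_id INTEGER);
    INSERT INTO table1 VALUES (1, 1), (2, 1), (3, 1);
    INSERT INTO table2 VALUES (1, 10), (3, 20);
""")

# The LEFT JOIN ... IS NULL query finds products with no category.
missing = conn.execute("""
    SELECT t.product_id
    FROM table1 t
    LEFT JOIN table2 t2 ON t2.product_id = t.product_id
    WHERE t2.product_id IS NULL
""").fetchall()
print(missing)  # [(2,)]

# Flip status to 0 for exactly those rows (NOT EXISTS instead of
# MySQL's multi-table UPDATE, which SQLite lacks).
conn.execute("""
    UPDATE table1 SET status = 0
    WHERE NOT EXISTS (SELECT 1 FROM table2 t2
                      WHERE t2.product_id = table1.product_id)
""")
rows = conn.execute(
    "SELECT product_id, status FROM table1 ORDER BY product_id").fetchall()
print(rows)  # [(1, 1), (2, 0), (3, 1)]
```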
|
You can use left join in update with is null
```
update product p
left join product_relation pr on (p.Product_id = pr.Product_id)
set p.Status = 0
where pr.Product_id is null
```
[**Demo**](http://sqlfiddle.com/#!2/006a0/1)
|
MYSQL update join is null
|
[
"",
"mysql",
"sql",
""
] |
I have a table like this -
```
C1 C2 C3
A 20130101 10
A 20130102 10
A 20130103 20
A 20130104 10
```
I want to set row no like this -
```
C1 C2 C3 RowNo
A 20130101 10 1
A 20130102 10 2
A 20130103 20 1
A 20130104 10 1
```
How can I do this with a query?
Or is looping over this table the only way?
Thanks..
|
I have updated my answer with a recursive CTE. It builds a hierarchy tree starting with the records where C3's value changes, and displays the level as RowNo.
```
with t as
(select t.*, row_number () over (order by c2) rn from table1 t)
,temp (c2,c3,rn,lvl) AS
(SELECT c2,c3,rn,1 lvl from t t1
where not exists(
select 1 from t t0
where t1.rn=t0.rn+1
and t1.c3=t0.c3
)
UNION ALL
select t1.c2,t1.c3,t1.rn,lvl + 1 AS lvl FROM t t1
join temp t2 on t1.rn=t2.rn+1 and t1.c3=t2.c3)
SELECT c2, c3, lvl rowno FROM temp order by rn;
```
<http://sqlfiddle.com/#!3/4adbd/1>
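To make the intended numbering concrete, here is a cross-check in plain Python: the row number restarts whenever C3's value changes between consecutive rows ordered by C2 (sample data from the question).

```python
from itertools import groupby

rows = [("A", "20130101", 10),
        ("A", "20130102", 10),
        ("A", "20130103", 20),
        ("A", "20130104", 10)]

# groupby splits the ordered rows into runs of equal C3; within
# each run the counter restarts at 1.
numbered = []
for _, run in groupby(rows, key=lambda r: r[2]):
    for i, row in enumerate(run, start=1):
        numbered.append(row + (i,))

print(numbered)
```

The output matches the RowNo column the question asks for: (1, 2, 1, 1).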
|
The ROW\_NUMBER() function can help you number rows:
```
SELECT ROW_NUMBER() over(order by [some field]), *
FROM [your table]
```
|
How to set row number()
|
[
"",
"sql",
"sql-server-2008",
""
] |
Having the following tables:
* "TypeSomething (ID, NAME) -- 2 Entries - Id 1 and 2"
* "OtherThings (ID, NAME) - 3 entries - Id 10,20,30"
What is the easiest way to retrieve the following SQL result:
TypeId, OtherThingId: (1, 10)(1, 20)(1, 30)(2, 10)(2, 20)(2, 30)
Thanks...
|
It's a *cartesian join*:
```
select t.id TypeId,
o.id OtherThingId
from TypeSomething t
cross join OtherThings o
```
I've added `cross join`, which is not required by the *SQL-92 standard*, in order to stress that the cross product is intentional and not the *typical error* of a join whose condition was simply left out.
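A quick illustration of the cartesian product using Python's built-in sqlite3 (hypothetical name values, since the question only specifies the IDs): 2 rows times 3 rows yields all 6 combinations.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TypeSomething (id INTEGER, name TEXT);
    CREATE TABLE OtherThings (id INTEGER, name TEXT);
    INSERT INTO TypeSomething (id, name) VALUES (1, 'a'), (2, 'b');
    INSERT INTO OtherThings (id, name) VALUES (10, 'x'), (20, 'y'), (30, 'z');
""")

# CROSS JOIN pairs every row of the first table with every row of
# the second one.
pairs = conn.execute("""
    SELECT t.id, o.id
    FROM TypeSomething t
    CROSS JOIN OtherThings o
    ORDER BY t.id, o.id
""").fetchall()
print(pairs)  # [(1, 10), (1, 20), (1, 30), (2, 10), (2, 20), (2, 30)]
```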
|
Looks like you just want a cross join (also called a cartesian product):
```
SELECT
t.ID TypeId,
o.ID OtherThingId
FROM TypeSomething t, OtherThings o
```
|
How to "project" 2 tables
|
[
"",
"sql",
""
] |
I have some fields in a table and need to average them.
I run the following query, because I don't want zero values included in the calculation.
```
SELECT myDate, AVG(CASE myField1 WHEN 0 THEN NULL ELSE myField1 END) AS avgmyField1
FROM myTable WHERE myDate = '2014-06-01'
```
In my query, the average calculation converts zero values to NULL so they are excluded.
My question is: what happens if all the values are zero?
Thank you.
|
Then you get NULL.
If you want zero instead, use COALESCE:
```
COALESCE( AVG(CASE myField1 WHEN 0 THEN NULL ELSE myField1 END) , 0)
```
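The NULL-then-COALESCE behavior is easy to verify with Python's built-in sqlite3 (SQLite supports the same `NULLIF`/`COALESCE`/`AVG` functions; the table name and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE myTable (myField1 REAL);
    INSERT INTO myTable VALUES (0), (4), (8);
""")

# Zeros become NULL and are ignored by AVG: (4 + 8) / 2 = 6.0
avg_nonzero = conn.execute(
    "SELECT AVG(NULLIF(myField1, 0)) FROM myTable").fetchone()[0]
print(avg_nonzero)  # 6.0

# If every value is zero, AVG sees only NULLs and returns NULL;
# COALESCE turns that NULL into 0.
conn.execute("UPDATE myTable SET myField1 = 0")
fallback = conn.execute(
    "SELECT COALESCE(AVG(NULLIF(myField1, 0)), 0) FROM myTable").fetchone()[0]
print(fallback)  # 0
```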
|
Average will not use nullvalues to calculate an average value
IsNull or Coalesce can be used to change null values to different values.
This script will change 0 to null and take the average value:
```
SELECT IsNull(AVG(NullIf(val,0)), 0)
FROM
(Values(5),(7),(0)) tbl(val)
```
Since 0 is excluded the result is 6
|
Average without calculate zero
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have 2 strings, let's suppose stringA and stringB. I want to know if stringA contains stringB. Since I am new to SQL Server and T-SQL, I am not sure which prebuilt function I can use.
It should be something like this.
```
If (contains(stringA,stringB))
```
then Print `'It Contains it'`
|
```
DECLARE @StringA VARCHAR(100)
DECLARE @StringB VARCHAR(100)
SET @StringA = 'ABCDEF'
SET @StringB= 'CD'
IF @StringA LIKE '%' + @StringB + '%'
Print 'It Contains it'
ELSE
PRINT 'It doesn''t'
SET @StringA = 'ABCDEF'
SET @StringB= 'WU'
IF @StringA LIKE '%' + @StringB + '%'
Print 'It Contains it'
ELSE
PRINT 'It doesn''t'
```
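For a runnable illustration of the same `LIKE '%' + x + '%'` idea, here is a sketch with Python's built-in sqlite3 (SQLite concatenates with `||` instead of `+`, and its default `LIKE` is case-insensitive for ASCII, much like a typical SQL Server collation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def contains(string_a, string_b):
    # The '%' wildcards match any (possibly empty) prefix and suffix,
    # so the LIKE succeeds whenever string_b appears inside string_a.
    row = conn.execute("SELECT ? LIKE '%' || ? || '%'",
                       (string_a, string_b)).fetchone()
    return bool(row[0])

print(contains("ABCDEF", "CD"))  # True
print(contains("ABCDEF", "WU"))  # False
```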
|
You can use [`LIKE`](http://msdn.microsoft.com/en-us/library/ms179859.aspx):
```
SELECT t.* FROM dbo.Table t
WHERE stringA LIKE '%' + @stringB + '%'
```
or with an `IF`:
```
IF @stringA LIKE '%' + @stringB + '%' PRINT 'It contains it'
ELSE PRINT 'It does not contain it';
```
|
How to check if string A "contains" B in TSQL?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
How to check if a stored procedure or query is still running in SQL Server?
**Ideas**
1. I've thought of having a log that I write to when the procedure starts and delete from when it ends.
*Flaws:*
* it leaves open the case where the server restarts or some kind of failure occurs inside the procedure.
* this method needs some work to be done before running the procedure, so it can't be applied on already running procedures.
2. Use process monitor
I would prefer a solution that can be incorporated as a stored procedure with `procedure_name` and/or `pid`, `parameters` as input, so tracing programs or solutions using the SQL Server interface won't work.
## Update #1
---
Usage example:
```
CREATE PROCEDURE dbo.sp_sleeping_beauty
@time_str varchar(50)
AS
SET NOCOUNT ON;
WAITFOR DELAY @time_str;
GO
dbo.sp_sleeping_beauty '00:00:10'
dbo.sp_sleeping_beauty '00:00:20'
dbo.sp_sleeping_beauty '00:00:30'
```
the procedure should be called like
```
test_if_running 'dbo.sp_sleeping_beauty '00:00:20''
```
and return true while running (for 20 seconds) and false after or if the function fails or the system is restarted
|
You might query `sys.dm_exec_requests`, which will provide session\_id, wait time and further columns of interest, and CROSS APPLY `sys.dm_exec_sql_text`, filtering on the SQL text of your procedure.
```
Select * from
(
SELECT * FROM sys.dm_exec_requests
where sql_handle is not null
) a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) t
where t.text like 'CREATE PROCEDURE dbo.sp_sleeping_beauty%'
```
|
**Update:** The answer given by John Clayton references the [outdated SQL Server 2000 system table](https://learn.microsoft.com/en-us/sql/relational-databases/system-tables/mapping-system-tables-to-system-views-transact-sql?view=sql-server-2017) (sys.sysprocesses). The updated SQL is:
```
SELECT
object_name(st.objectid) as ProcName
FROM
sys.dm_exec_connections as qs
CROSS APPLY sys.dm_exec_sql_text(qs.most_recent_sql_handle) st
WHERE
object_name(st.objectid) is not null
```
The SQL code above returns a list of names of your running processes. Note that you will [need permission to view the Server/Database state](https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-connections-transact-sql?view=sql-server-2017#permissions).
|
Check if stored procedure is running
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a simple categories table. A category can have a parent category (`par_cat` column) or NULL if it is a main category, and under the same parent category there shouldn't be 2 or more categories with the same name or url.
Code for this table:
```
CREATE TABLE IF NOT EXISTS `categories` (
`id` int(10) unsigned NOT NULL,
`par_cat` int(10) unsigned DEFAULT NULL,
`lang` varchar(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'pl',
`name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`url` varchar(120) COLLATE utf8_unicode_ci NOT NULL,
`active` tinyint(3) unsigned NOT NULL DEFAULT '1',
`accepted` tinyint(3) unsigned NOT NULL DEFAULT '1',
`priority` int(10) unsigned NOT NULL DEFAULT '1000',
`entries` int(10) unsigned NOT NULL DEFAULT '0',
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3 ;
ALTER TABLE `categories`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `categories_name_par_cat_unique` (`name`,`par_cat`),
ADD UNIQUE KEY `categories_url_par_cat_unique` (`url`,`par_cat`),
ADD KEY `categories_par_cat_foreign` (`par_cat`);
ALTER TABLE `categories`
MODIFY `id` int(10) unsigned NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=3;
ALTER TABLE `categories`ADD CONSTRAINT `categories_par_cat_foreign`
FOREIGN KEY (`par_cat`) REFERENCES `categories` (`id`);
```
The problem is that even if I have unique keys it doesn't work. If I try to insert into database 2 categories that have `par_cat` set to `null` and same name and url, those 2 categories can be inserted into database without a problem (and they shouldn't). However if I select for those categories other `par_cat` (for example 1 assuming category with id 1 exists), only first record will be inserted (and that's desired behaviour).
Question - how to handle this case? I read that:
> A UNIQUE index creates a constraint such that all values in the index
> must be distinct. An error occurs if you try to add a new row with a
> key value that matches an existing row. This constraint does not apply
> to NULL values except for the BDB storage engine. For other engines, a
> UNIQUE index permits multiple NULL values for columns that can contain
> NULL. If you specify a prefix value for a column in a UNIQUE index,
> the column values must be unique within the prefix.
however if I have unique on multiple columns I expected it's not the case (only `par_cat` can be null, `name` and `url` cannot be null). Because `par_cat` references to `id` of the same table but some categories don't have parent category it should allow `null` values.
|
This works as defined by the SQL standard. NULL means unknown. If you have two records of par\_cat = NULL and name = 'X', then the two NULLs are not regarded to hold the same value. Thus they don't violate the unique key constraint. (Well, one could argue that the NULLs still *might* mean the same value, but applying this rule would make working with unique indexes and nullable fields almost impossible, for NULL could as well mean 1, 2 or whatever other value. So they did well to define it such as they did in my opinion.)
As MySQL does not support functional indexes where you could have an index on `ISNULL(par_cat,-1), name`, your only option is to make par\_cat a NOT NULL column with 0 or -1 or whatever for "no parent", if you want your constraints to work.
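SQLite follows the same standard behavior here, so the effect is easy to reproduce with Python's built-in sqlite3: two NULLs are treated as distinct in a unique index, while two equal non-NULL values collide (simplified version of the question's table).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE categories (
        id INTEGER PRIMARY KEY,
        par_cat INTEGER,
        name TEXT NOT NULL,
        UNIQUE (name, par_cat)
    )
""")

# Two rows with the same name and NULL par_cat: both inserts succeed,
# because NULL != NULL as far as the unique index is concerned.
conn.execute("INSERT INTO categories (par_cat, name) VALUES (NULL, 'X')")
conn.execute("INSERT INTO categories (par_cat, name) VALUES (NULL, 'X')")

# With a non-NULL par_cat the duplicate is rejected.
conn.execute("INSERT INTO categories (par_cat, name) VALUES (1, 'X')")
try:
    conn.execute("INSERT INTO categories (par_cat, name) VALUES (1, 'X')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True

count_null = conn.execute(
    "SELECT COUNT(*) FROM categories WHERE par_cat IS NULL").fetchone()[0]
print(count_null)  # 2
```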
|
I see that this was asked in 2014.
However it is often requested from MySQL: <https://bugs.mysql.com/bug.php?id=8173> and <https://bugs.mysql.com/bug.php?id=17825> for example.
People can click on affects me to try and get attention from MySQL.
Since MySQL 5.7 we can now use the following workaround:
```
ALTER TABLE categories
ADD generated_par_cat INT UNSIGNED AS (ifNull(par_cat, 0)) NOT NULL,
ADD UNIQUE INDEX categories_name_generated_par_cat (name, generated_par_cat),
ADD UNIQUE INDEX categories_url_generated_par_cat (url, generated_par_cat);
```
The generated\_par\_cat is a virtual generated column, so it has no storage space. When a user inserts (or updates) then the unique indexes cause the value of generated\_par\_cat to be generated on the fly which is a very quick operation.
|
Unique (multiple columns) and null in one column
|
[
"",
"mysql",
"sql",
"database",
"unique-constraint",
""
] |
I have string "this is my test string" and I want to remove only word "my".
The problem is that the word "my" will be selected from another query, like
```
select word from table
```
This works if I know the word:
```
set string = REPLACE(string,'my','');
```
So, any help?
I want to do something like
```
set string = REPLACE(string,select word from table,'');
```
my query is
```
set @test = 'this is my test query';
DROP TABLE IF EXISTS `test`;
CREATE TEMPORARY TABLE IF NOT EXISTS test(word longtext );
insert into test values('my');
insert into test values('test');
select replace(@test,(select word from test),'');
```
I get an error saying the subquery should return 1 row.
|
`select replace(@test,word,'') from test`;
...
```
select @test:=replace(@test,word,'') from test;
```
However, keep in mind that because you have multiple entries in the test table, @test will end up with all the words in the test table replaced with ''. If that is what you want, all is okay.
If you want a list output with @test with each word replaced separately, use the first query and parse it in whatever you are passing the data to.
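The accumulation the query performs (each word from the table stripped in turn) can be sketched in plain Python, with the sample sentence and words from the question:

```python
def strip_words(text, words):
    # Replace each word in sequence; the result of one replacement
    # feeds into the next, just as @test is reassigned per row.
    for word in words:
        text = text.replace(word, "")
    return text

result = strip_words("this is my test query", ["my", "test"])
print(repr(result))  # 'this is   query'
```

Note that surrounding spaces are not collapsed, which is also what the SQL `REPLACE` does.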
|
Maybe there's a better way but try this procedure
```
CREATE PROCEDURE myProc(IN teststring VARCHAR(255),
OUT teststring_new varchar(255))
BEGIN
DECLARE n INT DEFAULT 0;
DECLARE i INT DEFAULT 0;
SELECT COUNT(*) FROM test INTO n;
SET teststring_new = teststring;
SET i=0;
WHILE i<n DO
SELECT replace(teststring_new,(SELECT word FROM test LIMIT 1 OFFSET i),'') INTO teststring_new;
SET i = i + 1;
END WHILE;
SELECT teststring_new;
END//
```
to call this
```
set @test = 'this is my test query';
call myProc(@test, @outvalue)
```
|
How to delete specific word from string in mysql?
|
[
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
I'm working with the Oracle Developer Days demo VM (VirtualBox),
and I can't seem to find the password for SQL\*Plus, so I go ahead and log into it like so
```
sqlplus sys/sys_password as sysdba
```
I need to run some oracle memory management tests (REDO logs and creating archive logs, etc).
Is this a risky plan though, using the sysdba user?
thanks
|
If you're just on a VirtualBox VM I can't imagine any risk at all, but it isn't best practice to operate as sysdba in a production environment (or any environment that you really care about).
Also all passwords for dev days are usually oracle.
You could always change them by going in as sysdba and executing:
`alter user system identified by "some_new_password";`
|
The default password for all accounts in the Oracle Developer Day VM is 'oracle'. Best practice is to create a DBA account and use that account to do your Oracle memory management tests. To create the DBA account, follow step 5 in the following link:
<http://praitheesh.blogspot.com.au/2013/10/oracle-developer-day-vm-how-to-use-pre.html>
```
SQL> CREATE USER oracle_dba IDENTIFIED BY password$1;
User created.
SQL> GRANT CONNECT TO oracle_dba;
Grant succeeded.
SQL> GRANT DBA TO oracle_dba;
Grant succeeded.
SQL>; CREATE USER SNSDIA IDENTIFIED BY password$1;
User created.
SQL> GRANT CONNECT TO SNSDIA;
Grant succeeded.
SQL>ALTER USER SNSDIA QUOTA 100M ON USERS;
User altered.
```
|
Regarding logging into SQL*PLUS as sysdba versus regular user?
|
[
"",
"sql",
"oracle",
"virtualbox",
"sqlplus",
""
] |
I have two tables, A and B. Each has one column (ID) in **ascending order**.
I want to join the tables as in table C. The data is not static, but in A and B the row counts are always equal. I've tried join statements but couldn't work it out. Thanks for your help.
```
A (ID)
--
2
3
4
6
8
B (ID)
--
11
12
13
14
15
C ( IDA , IDB )
--
2 11
3 12
4 13
6 14
8 15
```
|
If you're using a database that supports row\_number(), such as Oracle, PostgreSQL, or SQL Server:
```
select a_id, b_id
from (select row_number() over(order by id) as a_rn, id as a_id from a) x
join (select row_number() over(order by id) as b_rn, id as b_id from b) y
on x.a_rn = y.b_rn
```
**Fiddle:** <http://sqlfiddle.com/#!15/5ac6b/1/0>
If you're using mysql you can mimic row\_number using a variable:
```
select a_id, b_id
from (select @rn := @rn + 1 as a_rn, id as a_id
from a
cross join (select @rn := 0) r) x
join (select @rx := @rx + 1 as b_rn, id as b_id
from b
cross join (select @rx := 0) r) y
on x.a_rn = y.b_rn
```
**Fiddle:** <http://sqlfiddle.com/#!9/5ac6b/2/0>
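The row\_number() pairing can also be checked with Python's built-in sqlite3, which supports the same window function (SQLite 3.25+, bundled with recent Python builds), using the sample IDs from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (2), (3), (4), (6), (8);
    INSERT INTO b VALUES (11), (12), (13), (14), (15);
""")

# Number each table's rows by ascending id, then join on that
# ordinal so the n-th row of A pairs with the n-th row of B.
pairs = conn.execute("""
    SELECT a_id, b_id
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY id) rn, id AS a_id FROM a) x
    JOIN (SELECT ROW_NUMBER() OVER (ORDER BY id) rn, id AS b_id FROM b) y
      ON x.rn = y.rn
    ORDER BY a_id
""").fetchall()
print(pairs)  # [(2, 11), (3, 12), (4, 13), (6, 14), (8, 15)]
```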
|
Try this:
```
select a.id ida,b.id idb from
( select a.*, row_number() over (order by id) rn from a) a
join (select b.*, row_number() over (order by id) rn from b) b on a.rn=b.rn
```
|
Join Two Non Related Tables
|
[
"",
"sql",
"t-sql",
""
] |
Suppose I want to find, for two tables A and B, the counts of all records that are in A but not B, all records that are in A and B, and all records that are in B but not in A. I don't want the actual records, just the counts of all 3 components (think of a Venn diagram).
When I say, for example, records in A and B, I mean a count of all records that have identical values for, say, four variables (like ID, Year, Month, Day).
Is there a snazzy query that will return these counts efficiently?
|
For all that are in A and B, it's a simple JOIN:
```
SELECT COUNT(*)
FROM A
JOIN B ON A.ID = B.ID AND A.Year = B.Year AND A.Month = B.Month AND A.Day = B.Day
```
Note that this assumes that the combination `(ID, Year, Month, Day)` is unique in each table; if there are duplicates, it will count all the cross-products between the equivalent one. If `ID` is a unique key in the tables, that shouldn't be a problem.
For all A's not in B, use a LEFT JOIN:
```
SELECT COUNT(*)
FROM A
LEFT JOIN B ON A.ID = B.ID AND A.Year = B.Year AND A.Month = B.Month AND A.Day = B.Day
WHERE B.ID IS NULL
```
For all B's not in A, do the same thing but reverse the roles of A and B:
```
SELECT COUNT(*)
FROM B
LEFT JOIN A ON A.ID = B.ID AND A.Year = B.Year AND A.Month = B.Month AND A.Day = B.Day
WHERE A.ID IS NULL
```
You could combine the first two into a single query:
```
SELECT SUM(B.ID IS NOT NULL) AS A_and_B_count, SUM(B.ID IS NULL) AS A_not_B_count
FROM A
LEFT JOIN B ON A.ID = B.ID AND A.Year = B.Year AND A.Month = B.Month AND A.Day = B.Day
```
But I don't think it's possible to include the third query in this. That would require a `FULL OUTER JOIN`, which MySQL doesn't have.
For all these queries, make sure that at least one of the columns you're comparing has an index, otherwise this will be very slow; the more the better. Although if any of them is unique (e.g. the ID) field, that should be sufficient.
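To make the three Venn regions concrete, here is a cross-check with plain Python sets over hypothetical (ID, Year, Month, Day) keys; each count corresponds to one of the queries above:

```python
# Hypothetical keys for tables A and B; (2, ...) and (3, ...) are shared.
a = {(1, 2014, 1, 1), (2, 2014, 1, 2), (3, 2014, 1, 3)}
b = {(2, 2014, 1, 2), (3, 2014, 1, 3), (4, 2014, 1, 4)}

only_a = len(a - b)  # rows in A but not B (LEFT JOIN ... WHERE B IS NULL)
both = len(a & b)    # rows in both (inner JOIN)
only_b = len(b - a)  # rows in B but not A (roles reversed)
print(only_a, both, only_b)  # 1 2 1
```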
|
You can use `union` (which automatically removes duplicates) to get a master table of all unique rows and left join that table to tables a and b to get your counts.
This assumes that tables a and b don't contain duplicates within the table (otherwise the left joins will produce inflated counts).
```
select
count(all_rows.id) total_unique_count,
sum(a.id is not null and b.id is not null) in_both_count,
sum(a.id is not null and b.id is null) only_in_a_count,
  sum(a.id is null and b.id is not null) only_in_b_count
from (
select id, year, month, day from tablea
union
select id, year, month, day from tableb
) all_rows
left join tablea a
on a.id = all_rows.id
and a.year = all_rows.year
and a.month = all_rows.month
and a.day = all_rows.day
left join tableb b
on b.id = all_rows.id
and b.year = all_rows.year
and b.month = all_rows.month
and b.day = all_rows.day
```
|
How to get counts for all 3 components of two tables using SQL?
|
[
"",
"mysql",
"sql",
""
] |
I have the following two tables:
**CashRegister**
```
Date PaymentId CustId VenId Debit Credit Date
=========================================================
2/2/12 Pay_1 Cus_1 444 3/2/2012
4/2/12 Pay_2 Ven_1 555
```
**SalesOrder**
```
SaleOrderId CustId Debit Credit
==================================
So_1 Cus_1 666
```
From these two tables I have to generate a **Journal Report** like the following:
```
Date TrId AccountId Debit Credit
========================================
2/2/12 Pay_1 Cus_1 444
3/3/12 So_1 Cus_1 666
4/2/12 Pay_2 Ven_1 555
```
I have tried this query:
```
SELECT *
FROM (
SELECT Date, PaymentId, TrId, CustId, AccountId, VenId, AccountId, Debit,
Credit
FROM CashRegister
UNION
SELECT Date, SaleOrderId, TrId, CustId, AccountId, Debit, Credit
FROM SalesOrder
) t
ORDER BY Date
```
But it gives this error:
> In Union number of expressions should be equal for target list.
|
It seems like you want to use either CustId or VenId from the CashRegister table, and if so I think this is what you want:
```
SELECT
Date,
PaymentId AS TrId,
COALESCE(CustId, VenId) AS AccountId, -- COALESCE returns the first
Debit, -- non null value as AccountId
Credit
FROM CashRegister
UNION -- use UNION ALL if you don't want to eliminate duplicate rows.
SELECT
Date,
SaleOrderId,
CustId,
Debit,
Credit
FROM SalesOrder
ORDER BY Date
```
Sample [SQL Fiddle](http://www.sqlfiddle.com/#!6/51ab7/1)
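The COALESCE-plus-UNION shape can be sanity-checked with Python's built-in sqlite3. One assumption here: the question's SalesOrder table shows no Date column, but the answer's query selects one, so a Date column is added to SalesOrder for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CashRegister (Date TEXT, PaymentId TEXT, CustId TEXT,
                               VenId TEXT, Debit REAL, Credit REAL);
    CREATE TABLE SalesOrder (SaleOrderId TEXT, CustId TEXT,
                             Debit REAL, Credit REAL, Date TEXT);
    INSERT INTO CashRegister VALUES
        ('2012-02-02', 'Pay_1', 'Cus_1', NULL, 444, NULL),
        ('2012-02-04', 'Pay_2', NULL, 'Ven_1', NULL, 555);
    INSERT INTO SalesOrder VALUES ('So_1', 'Cus_1', NULL, 666, '2012-03-03');
""")

# COALESCE picks CustId when present, otherwise VenId, so both kinds
# of CashRegister rows end up with a single AccountId column.
journal = conn.execute("""
    SELECT Date, PaymentId AS TrId, COALESCE(CustId, VenId) AS AccountId,
           Debit, Credit
    FROM CashRegister
    UNION ALL
    SELECT Date, SaleOrderId, CustId, Debit, Credit
    FROM SalesOrder
    ORDER BY Date
""").fetchall()
for row in journal:
    print(row)
```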
|
To correct *your query* you have to add *stub columns* (`null` is the usual stub value), e.g.
```
select *
from (select Date,
PaymentId TrId,
CustId AccountId,
VenId,
Debit,
Credit
from CashRegister
union
select Date,
SaleOrderId TrId,
CustId AccountId,
null, -- Stub: there's no verdor id
Debit,
Credit
from SalesOrder) t
order by Date
```
However, it seems that the *right query* (that returns desired cursor) is
```
select Date,
PaymentId as TrId,
Coalesce(CustId, VenId) as AccountId, -- if CustId is null take VenId
Debit,
Credit
from CashRegister
union all -- I don't think you want to remove occasional dublicates
select Date,
SaleOrderId,
CustId,
Debit,
Credit
from SalesOrder
order by Date
```
|
How do I select columns from two tables with differing layouts
|
[
"",
"sql",
"sql-server",
"union",
""
] |
Is there any way to find the largest value for any column or any data type in a table?
Let me elaborate on this a bit more. Suppose I have a table with 10 columns, where some columns have an int data type and some have a decimal data type. Is there any SQL function to get the largest 'numerical' value from that table?
```
P_ID Price Quantity CreateDate UpdateDate
1 5.99 60 2014-09-02 07:06:16.000 2014-09-02 07:06:16.000
2 45 20 2014-09-02 07:06:16.000 2014-09-02 07:06:16.000
3 29.99 35 2014-09-02 07:06:16.000 2014-09-02 07:06:16.000
4 95 10 2014-09-02 07:06:16.000 2014-09-02 07:06:16.000
5 199 20 2014-09-02 07:06:16.000 2014-09-02 07:06:16.000
```
EDIT: The example I have given is just an illustration, not actual data. In the example I want the result to be '199'. I know that MAX(Price) will give me 199, but I cannot assume this in my actual table. So I need something that will give me the max values for about 50000 records...
I need something generic, if possible, that can work on any table...
|
Try this; unfortunately you have to list all the columns. It would be easier if they were the same type:
```
SELECT max(val)
FROM
(SELECT CAST(p_id as decimal(11,2)) p_id,
CAST(Price as decimal(11,2)) Price,
CAST(Quantity as decimal(11,2)) Quantity
FROM yourtable) as p
UNPIVOT
(val FOR Seq IN
([p_id], [Price], [Quantity]) ) AS unpvt
```
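As a cross-check of the "largest value across numeric columns" idea, here is a sketch with Python's built-in sqlite3. SQLite has no UNPIVOT, but its multi-argument `MAX()` scalar function gives the per-row maximum, which the aggregate `MAX()` then reduces (sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourtable (P_ID INTEGER, Price REAL, Quantity INTEGER);
    INSERT INTO yourtable VALUES (1, 5.99, 60), (2, 45, 20),
                                 (3, 29.99, 35), (4, 95, 10), (5, 199, 20);
""")

# Inner MAX(a, b, c) is the scalar per-row maximum; the outer MAX is
# the ordinary aggregate over all rows.
largest = conn.execute(
    "SELECT MAX(MAX(P_ID, Price, Quantity)) FROM yourtable").fetchone()[0]
print(largest)  # 199.0
```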
|
Well, you could use a combination of selecting from syscolumns and systypes. Systypes gives you a length and precision, which could be used to give the information you want?
|
Is there any way to find the largest value for any numeric column over all in a table?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table with five years worth of data. It has, among other values, `JulianDate`, `RefETo`, and `Precipitation`.
I need to get the average of RefETo over that five years and the average of the last year, plus the sum of precipitation over the last year.
At the same time the averages and sums have to be for a period of time between 7 and 28 days.
Right now, I'm doing it with a function:
```
FUNCTION [dbo].[CIMISAvg](@Stn INT, @Yr INT, @Period INT)
RETURNS @AvgTable TABLE (Period INT, RefETo float, RefETo1 float, Precipitation float)
AS
BEGIN
DECLARE @PeriodInc INT = 1
DECLARE @RefETo float
DECLARE @Precip float
DECLARE @RefETo1 float
DECLARE @P INT = 1
BEGIN
WHILE @PeriodInc < 366
BEGIN
IF @PeriodInc < 365
BEGIN
SET @RefETo = (SELECT AVG(RefETo) FROM Cimis
            WHERE StationNo = @Stn AND RefETO >= 0 AND JulianDate BETWEEN @PeriodInc AND
            @PeriodInc + @Period - 1)
SET @RefETo1 = (SELECT AVG(RefETo) FROM Cimis WHERE StationNo = @Stn AND
RefETO >= 0 AND JulianDate BETWEEN @PeriodINC AND @PeriodINC + @Period - 1
AND DATEPART(Year, DateCollected) = @Yr)
SET @Precip = (SELECT SUM(Precipitation) FROM Cimis
WHERE StationNo = @Stn AND Precipitation >= 0 AND JulianDate
BETWEEN @PeriodINC AND @PeriodINC + @Period - 1
AND DATEPART(Year, DateCollected) = @Yr)
END
ELSE
BEGIN
SET @RefETo = (SELECT AVG(RefETo) FROM Cimis
WHERE StationNo = @Stn AND RefETO >= 0
AND (JulianDate > 364 OR JulianDate < @Period - 1))
SET @RefETo1 = (SELECT AVG(RefETo) FROM Cimis WHERE StationNo = @Stn
AND RefETO >= 0 AND JulianDate > 364
AND DATEPART(Year, DateCollected) = @Yr)
SET @Precip = (SELECT SUM(Precipitation) FROM Cimis WHERE StationNo = @Stn
AND Precipitation >= 0 AND JulianDate > 364
AND DATEPART(Year, DateCollected) = @Yr)
END
INSERT INTO @AvgTable(Period, RefETo, RefETo1, Precipitation)
VALUES (@P, @RefETo, @RefETo1, @Precip)
SET @PeriodInc += @Period
SET @P += 1
END
END
RETURN
END
```
It returns the following table if I use:
```
SELECT * FROM dbo.CIMISAvg(80,2014,28)
```
```
Period RefETo RefETo1 Precipitation
1 0.0417192857142857 0.0470392857142857 0.0156
2 0.0672328571428571 0.0585214285714286 0
3 0.121372142857143 0.135967857142857 1.2755
4 0.170277519379845 0.186428571428571 0.7991
5 0.235207258064516 0.240425 0.7087
6 0.268260240963855 0.294403571428571 0.1811
7 0.293128125 0.290282142857143 0
8 0.273767123287671 0.267457142857143 0.0196
9 0.244358333333333 0.2513375 0
10 0.176087142857143 NULL NULL
11 0.10749 NULL NULL
12 0.0625579831932773 NULL NULL
13 0.0382158273381295 NULL NULL
14 0.0413401459854015 NULL NULL
```
Which is fine and dandy but does anyone have any better ideas?
I've fiddled around with
```
SELECT JulianDate, AVG(RefETO)
OVER (ORDER BY JulianDate ROWS BETWEEN 28 PRECEDING AND CURRENT ROW)
FROM Cimis
```
and variations on that but haven't got anywhere
|
I agree with [PM 77-1](https://stackoverflow.com/users/2055998/pm-77-1) in the comments. Conditional aggregation is the way to go.
Try this:
```
Declare @Stn INT, @Yr INT, @Period INT
Select @Stn = 80, @Yr=2014, @Period=28
SELECT Period, RefETo/RefEToDays AS RefETo, RefETo1/RefETo1Days AS RefETo1, Precipitation
FROM (
SELECT
    CEILING(JulianDate / (@Period * 1.0)) AS Period,
SUM(
RefETo
) AS RefETo,
SUM(
CASE WHEN DATEPART(Year, DateCollected) = @Yr THEN RefETo ELSE 0 END
) AS RefETo1,
COUNT(*)
AS RefEToDays,
SUM(
CASE WHEN DATEPART(Year, DateCollected) = @Yr THEN 1 ELSE 0 END
) AS RefETo1Days,
SUM(
CASE WHEN DATEPART(Year, DateCollected) = @Yr Then Precipitation ELSE 0 END
) AS Precipitation
FROM Cimis
WHERE StationNo = @Stn
  GROUP BY CEILING(JulianDate / (@Period * 1.0))
) c
```
|
> I need to get the average of RefETo over that five years and the
> average of the last year, plus the sum of precipitation over the last
> year.
From where I sit, this is a perfect example of a requirement that should be implemented in the report writer, not in the dbms.
I would not do this in a single SQL statement.
|
A way to get averages over a period of days
|
[
"",
"sql",
"sql-server",
""
] |
The site produces results, but the SELECT COUNT query and the SELECT query with GROUP BY return two different result counts. This is likely related to the error that phpMyAdmin displays but the site does not.
**The Queries:**
```
SELECT count(DISTINCT `name`) as `numrows` FROM `users` WHERE `verified` = '1'
SELECT `name`, `type`, `language`, `code` FROM `users` WHERE `verified` = '1' GROUP BY `name` ORDER BY `count` DESC LIMIT 0, 25
```
PhpMyAdmin provides the following error:
> 1055 - 'main.users.type' isn't in GROUP BY
When reading MySQL docs, I'm still unclear what it is I have to fix. I can't seem to grasp this.
|
You need to have a full group by:
```
SELECT `name`, `type`, `language`, `code`
FROM `users`
WHERE `verified` = '1'
GROUP BY `name`, `type`, `language`, `code`
ORDER BY `count` DESC LIMIT 0, 25
```
SQL92 requires that all columns (except aggregates) in the select clause are part of the group by clause. SQL99 loosens this restriction a bit and states that all columns in the select clause must be functionally dependent on the group by clause. MySQL by default allows for partial group by, and this may produce non-deterministic answers, for example:
```
create table t (x int, y int);
insert into t (x,y) values (1,1),(1,2),(1,3);
select x,y from t group by x;
+------+------+
| x | y |
+------+------+
| 1 | 1 |
+------+------+
```
I.e. a random y is selected for the group x. One can prevent this behavior by setting @@sql\_mode:
```
set @@sql_mode='ONLY_FULL_GROUP_BY';
select x,y from t group by x;
ERROR 1055 (42000): 'test.t.y' isn't in GROUP BY
```
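SQLite, like default MySQL, also permits bare columns in a GROUP BY query, so the nondeterminism is easy to demonstrate with Python's built-in sqlite3; the partial group by collapses to one row with an arbitrary y, while the full group by is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (x INTEGER, y INTEGER);
    INSERT INTO t (x, y) VALUES (1, 1), (1, 2), (1, 3);
""")

# Bare column y: one row per group, with an engine-chosen y value.
partial = conn.execute("SELECT x, y FROM t GROUP BY x").fetchall()
print(len(partial))  # 1

# Full group by: every selected column appears in GROUP BY.
full = conn.execute(
    "SELECT x, y FROM t GROUP BY x, y ORDER BY y").fetchall()
print(full)  # [(1, 1), (1, 2), (1, 3)]
```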
|
The best solution to this problem is, of course, using a complete `GROUP BY` expression.
But there's another solution that works around the `ONLY_FULL_GROUP_BY` blocking of the old MySQL extension to `GROUP BY`.
```
SELECT name,
ANY_VALUE(type) type,
ANY_VALUE(language) language,
ANY_VALUE(code) code
FROM users
WHERE verified = '1'
GROUP BY name
ORDER BY count DESC LIMIT 0, 25
```
`ANY_VALUE()` explicitly declares what used to be implicit in MySQL's incomplete `GROUP BY` operations -- that the server can choose, well, *any* value to return.
|
MySQL : isn't in GROUP BY
|
[
"",
"mysql",
"sql",
"phpmyadmin",
"mysql-error-1055",
""
] |
I have created a database in SQL Server 2012 with mdf and ldf pointing to a external hard drive attached to my machine. I created tables, stored procedures, populated tables, etc. etc.
I removed the hard drive at the end of the day.
Today, when I attached the hard drive and tried to access the DB in Management Studio, I see the name of the database with (Recovery Pending).
What does this mean? I see the mdf and ldf files in the D drive.
|
When you removed the drive, you forcefully disconnected the database from the SQL Server service. SQL Server does not like that.
SQL Server is designed by default so that any database created is automatically kept open until either the computer shuts down, or the SQL Server service is stopped. Prior to removing the drive, you should have "Detached" the database, or stopped the SQL Server service.
You "may" be able to get the database running by executing the following command in a query window: `RESTORE DATABASE [xxx] WITH RECOVERY;`
You could, although I would not normally recommend this, alter the database to automatically close after there are no active connections.
To accomplish this, you would execute the following query:
```
ALTER DATABASE [xxx] SET AUTO_CLOSE ON WITH NO_WAIT;
```
|
What worked for me was to take the database offline, then back online - *no RESTORE DATABASE was necessary* in this case, so far as I can tell.
In SQL Server Management Studio:
1. right-click on the database
2. select Tasks / Take Offline ... breathe deeply, cross fingers...
3. right-click on the database again
4. select Tasks / Bring Online
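If you prefer T-SQL over the SSMS menus, a sketch of the same offline/online cycle (the database name here is a placeholder):

```sql
-- replace [MyDb] with the actual database name
ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [MyDb] SET ONLINE;
```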
|
MSSQL database on external hard drive shows Recovery Pending
|
[
"",
"sql",
"sql-server",
""
] |
So I'm trying to insert 3 rows into 2 different tables by doing this:
```
INSERT INTO GAMES.ATHLETE (ATHLETE_NO, ATHLETE_NAME, ATHLETE_BIRTHDATE, ATHLETE_BIRTHPLACE, ATHLETE_BORN_COUNTRY, ATHLETE_GENDER, ATHLETE_HEIGHT, ATHLETE_WEIGHT, ATHLETE__TEAM_COUNTRY)
VALUES ('12345', 'Saif Haseeb', '08-NOV-1995', 'Clayton', 'AUS', 'M', '176', '75', 'AUS'), ('12323', 'Rajandeep', '01-FEB-1995', 'Melbourne', 'AUS', 'M', '180', '77', 'AUS'), ('2818', 'Jen Selter', '18- APR-1993', 'Paris', 'FRA', 'F', '169', '63', 'FRA')
INSERT INTO GAMES.COMPETES (ATHLETE_NO, DISCIPLINE_CODE, SG_GAMENO)
VALUES ('12345', 'FB', '30'), ('12323', 'FB', '30'), ('2818', 'TT', '29')
INSERT INTO GAMES.VENUE (VENUE_NO, VENUE_NAME, VENUE_LOCATION, VENUE_USEDFROM, VENUE_USEDTO, VENUE_SEATINGCAPACITY, VENUE_STRUCTURE, VENUE_USE)
VALUES ('SY', 'SYDNEYSTADIUM', 'SYDNEY', '14-AUG-2014', '19-AUG-2014', '98172', 'A', 'Y');
```
However I keep getting the error in the title. Can anyone help me out here please?
|
These are three statements, but you forgot to end each with a semicolon.
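Applied to the question, that means terminating each statement (and, since Oracle also does not accept a multi-row VALUES list, keeping each INSERT to a single row). A shortened sketch:

```sql
INSERT INTO GAMES.COMPETES (ATHLETE_NO, DISCIPLINE_CODE, SG_GAMENO)
VALUES ('12345', 'FB', '30');  -- note the terminating semicolon

INSERT INTO GAMES.VENUE (VENUE_NO, VENUE_NAME, VENUE_LOCATION, VENUE_USEDFROM, VENUE_USEDTO, VENUE_SEATINGCAPACITY, VENUE_STRUCTURE, VENUE_USE)
VALUES ('SY', 'SYDNEYSTADIUM', 'SYDNEY', '14-AUG-2014', '19-AUG-2014', '98172', 'A', 'Y');
```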
|
Oracle doesn't let you write `insert into ... values (...),(...)` the way PostgreSQL does; you need to split it into several inserts:
```
INSERT INTO GAMES.ATHLETE (ATHLETE_NO, ATHLETE_NAME, ATHLETE_BIRTHDATE, ATHLETE_BIRTHPLACE, ATHLETE_BORN_COUNTRY, ATHLETE_GENDER, ATHLETE_HEIGHT, ATHLETE_WEIGHT, ATHLETE__TEAM_COUNTRY)
VALUES ('12345', 'Saif Haseeb', '08-NOV-1995', 'Clayton', 'AUS', 'M', '176', '75', 'AUS');
INSERT INTO GAMES.ATHLETE (ATHLETE_NO, ATHLETE_NAME, ATHLETE_BIRTHDATE, ATHLETE_BIRTHPLACE, ATHLETE_BORN_COUNTRY, ATHLETE_GENDER, ATHLETE_HEIGHT, ATHLETE_WEIGHT, ATHLETE__TEAM_COUNTRY)
VALUES ('12323', 'Rajandeep', '01-FEB-1995', 'Melbourne', 'AUS', 'M', '180', '77', 'AUS');
INSERT INTO GAMES.ATHLETE (ATHLETE_NO, ATHLETE_NAME, ATHLETE_BIRTHDATE, ATHLETE_BIRTHPLACE, ATHLETE_BORN_COUNTRY, ATHLETE_GENDER, ATHLETE_HEIGHT, ATHLETE_WEIGHT, ATHLETE__TEAM_COUNTRY)
VALUES ('2818', 'Jen Selter', '18- APR-1993', 'Paris', 'FRA', 'F', '169', '63', 'FRA');
INSERT INTO GAMES.COMPETES (ATHLETE_NO, DISCIPLINE_CODE, SG_GAMENO)
VALUES ('12345', 'FB', '30');
INSERT INTO GAMES.COMPETES (ATHLETE_NO, DISCIPLINE_CODE, SG_GAMENO)
VALUES ('12323', 'FB', '30');
INSERT INTO GAMES.COMPETES (ATHLETE_NO, DISCIPLINE_CODE, SG_GAMENO)
VALUES ('2818', 'TT', '29');
INSERT INTO GAMES.VENUE (VENUE_NO, VENUE_NAME, VENUE_LOCATION, VENUE_USEDFROM, VENUE_USEDTO, VENUE_SEATINGCAPACITY, VENUE_STRUCTURE, VENUE_USE)
VALUES ('SY', 'SYDNEYSTADIUM', 'SYDNEY', '14-AUG-2014', '19-AUG-2014', '98172', 'A', 'Y');
```
|
"SQL command not properly ended" (Oracle SQL)
|
[
"",
"sql",
"oracle",
""
] |
I have following data in MySQL table named info:
```
chapter | section
3 | 0
3 | 1
3 | 2
3 | 3
4 | 0
5 | 0
```
I would like to delete a row for chapter = n, but only when there is no section > 0 for the same chapter. So chapter 3 can't be deleted, while chapters 4 and 5 can. I know the following doesn't work:
`DELETE info WHERE chapter = 3 AND NOT EXISTS (SELECT * FROM info WHERE chapter = 3 AND section>0)`;
The same table is used twice in the statement. So what is the easiest way to achieve my goal?
|
You've got the idea right. Here is the syntax:
```
DELETE
FROM mytable
WHERE chapter NOT IN (
SELECT * FROM (
select tt.chapter
from mytable tt
where tt.section <> 0
group by tt.chapter
) tmp
)
```
The nested select is a workaround for a MySQL restriction: you cannot delete from a table and select from that same table in a single statement.
[Demo.](http://sqlfiddle.com/#!2/bc347/1)
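For illustration, this is the simpler form that the extra wrapper avoids; MySQL refuses to read from the table it is deleting from:

```sql
-- fails with ERROR 1093 (HY000): You can't specify target table
-- 'mytable' for update in FROM clause
DELETE FROM mytable
WHERE chapter NOT IN (
    SELECT tt.chapter
    FROM mytable tt
    WHERE tt.section <> 0
);
```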
|
**You can run a sub query to return the chapters that have a section of one or more, and then delete the rows whose chapter is not returned by the sub query.**
---
`DELETE FROM table1 WHERE table1.chapter Not IN (select chapter from
(SELECT table1.chapter FROM table1 WHERE Table1.section >=1 ) Results);`
---
[Example Fiddle based on your question](http://sqlfiddle.com/#!2/28504f/1/0)
**You could also supply the chapter in the sub query where clause if you only want to delete a specific chapter. If no row meets the where clause then no records will be deleted.**
|
How to delete a row where there are only one of the kind in MySql?
|
[
"",
"mysql",
"sql",
""
] |
How can I get result like below with mysql?
```
> +--------+------+------------+
> | code | qty | total |
> +--------+------+------------+
> | aaa | 30 | 75 |
> | bbb | 20 | 45 |
> | ccc | 25 | 25 |
> +--------+------+------------+
```
**total** is the sum of the row's own value and the values of all rows that come after it.
|
You can do this with a correlated subquery -- assuming that the ordering is alphabetical:
```
select code, qty,
(select sum(t2.qty)
from mytable t2
where t2.code >= t.code
) as total
from mytable t;
```
SQL tables represent unordered sets. So, a table, by itself, has no notion of rows coming after. In your example, the codes are alphabetical, so they provide one definition. In practice, there is usually an id or creation date that serves this purpose.
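If the table does have such an ordering column, say a hypothetical `id`, the same correlated subquery works with it in place of the code:

```sql
select code, qty,
       (select sum(t2.qty)
        from mytable t2
        where t2.id >= t.id
       ) as total
from mytable t;
```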
|
I would use join, imho usually fits better.
Data:
```
create table tab (
code varchar(10),
qty int
);
insert into tab (code, qty)
select * from (
select 'aaa' as code, 30 as qty union
select 'bbb', 20 union
select 'ccc', 25
) t
```
Query:
```
select t.code, t.qty, sum(t1.qty) as total
from tab t
join tab t1 on t.code <= t1.code
group by t.code, t.qty
order by t.code
```
The best way is to try both queries (mine and the subquery version that @Gordon mentioned) and choose the faster one.
Fiddle: <http://sqlfiddle.com/#!2/24c0f/1>
|
Mysql each row sum
|
[
"",
"mysql",
"sql",
""
] |
I have a History table (like a log) that records changes to parts:
```
TransactionID Part ID Description Last Updated
1 1 Fixed O-ring 2006-03-14 20:00:04.700
2 2 Replaced coil 2009-01-02 20:00:04.700
3 1 Replaced coil 2009-01-02 20:00:04.700
4 1 Replaced LED 2002-08-20 20:00:04.700
5 2 Sealed leakage 2007-03-08 20:00:04.700
6 3 Replace connector 2004-05-16 20:00:04.700
```
I have another table that will show what each Part ID stands for, but that is not the problem I'm facing now. I'm required to write a query that returns the latest maintenance done on every parts. So in this case, my expected output would be:
```
TransactionID Part ID Description Last Updated
2 2 Replaced coil 2009-01-02 20:00:04.700
3 1 Replaced coil 2009-01-02 20:00:04.700
6 3 Replace connector 2004-05-16 20:00:04.700
```
Explanation: For example, the latest maintenance for Part ID #1 was completed on 2009-01-02 20:00:04.700 and so on.
I have tried `SELECT DISTINCT` but it won't work because basically every row is different. I'm completely out of ideas. And if I use `MAX(Last Updated)`, it will only return one row for the entire table.
Edited: In any case, I am NOT allowed to use Dynamic query.
|
```
SELECT TransactionID
,PartID
,[Description]
,[Last Updated]
FROM (
SELECT TransactionID
,PartID
,[Description]
,[Last Updated]
,ROW_NUMBER() OVER (PARTITION BY [PartID] ORDER BY [Last Updated] DESC) RN
FROM TableName
)A
WHERE A.RN = 1
```
Or you can use CTE
```
;WITH CTE AS
(
SELECT TransactionID
,PartID
,[Description]
,[Last Updated]
,ROW_NUMBER() OVER (PARTITION BY [PartID] ORDER BY [Last Updated] DESC) RN
FROM TableName
)
SELECT TransactionID
,PartID
,[Description]
,[Last Updated]
FROM CTE
WHERE RN = 1
```
|
```
select
TransactionID, PartID, Description, LastUpdated
from
History H
where
LastUpdated =
(
select
max(LastUpdated)
from
History
where
PartID = H.PartID
)
```
|
Get the latest date for each record
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"greatest-n-per-group",
""
] |
I'm having trouble finding the min and max pricing status on the YearMonth column.
Below is my table data
```
YearMonth STATE ProductGroup LocaProdname Price
201407 MH AIRTEL AIRTEL-3G 10,000
201208 GJ IDEA IDEA-3G 1,200
201406 WB AIRCEL AIRCEL PERPAID 5,866
201407 DL TATA DOCOMA TATA LANDLINE 8,955
201207 KAR VODAFONE VODAFONE-3G 7,899
201312 MH AIRTEL AIRTEL-3G 15,000
201408 GJ IDEA IDEA-3G 25,000
```
I require below output:
```
YearMonth STATE ProductGroup LocaProdname Price Indicator-YEAR
201407 MH AIRTEL AIRTEL-3G 10,000 MAX
201312 MH AIRTEL AIRTEL-3G 15,000 MIN
201408 GJ IDEA IDEA-3G 25,000 MAX
201208 GJ IDEA IDEA-3G 1,200 MIN
```
I need the max YearMonth and min YearMonth values.
|
Use `ROW_NUMBER()` with `PARTITION BY`, and remove the parts you don't need:
```
SELECT yearmonth,state,productgroup,locaprodname,price,operation
FROM (
SELECT * FROM (SELECT p.yearmonth,p.state,p.productgroup,p.locaprodname,p.price,'MAX' AS Operation,
Row_number() OVER( partition BY p.productgroup, p.locaprodname
ORDER BY p.price DESC) AS Row
FROM pricingtest p) AS Maxx
WHERE Maxx.row = 1
UNION ALL
SELECT * FROM (SELECT p.yearmonth,p.state,p.productgroup,p.locaprodname,p.price,'MIN' AS Operation,
Row_number() OVER( partition BY p.productgroup, p.locaprodname
ORDER BY p.price ASC) AS Row
FROM pricingtest p) AS Minn
WHERE Minn.row = 1
) AS whole
ORDER BY yearmonth,productgroup
```
|
If I understand correctly, you can do this with `row_number()`:
```
select YearMonth, STATE, ProductGroup, LocaProdname, Price,
(case when seqnum_asc = 1 then 'MIN' else 'MAX' end) as Indicator
from (select d.*,
row_number() over (partition by state, productgroup, localprodname
order by price asc) as seqnum_asc,
row_number() over (partition by state, productgroup, localprodname
                              order by price desc) as seqnum_desc
from data
) d
where seqnum_asc = 1 or seqnum_desc = 1;
```
EDIT:
Does this do what you want?
```
select YearMonth, STATE, ProductGroup, LocaProdname, Price,
(case when seqnum_asc = 1 then 'MIN' else 'MAX' end) as Indicator
from (select d.*,
row_number() over (partition by YearMonth
order by price asc) as seqnum_asc,
row_number() over (partition by YearMOnth
                              order by price desc) as seqnum_desc
from data
) d
where seqnum_asc = 1 or seqnum_desc = 1;
```
|
SQL query for Pricing analysis
|
[
"",
"sql",
"sql-server-2008",
"aggregate-functions",
"greatest-n-per-group",
""
] |
What would be the suggested way (if even possible) to call MSSQL SP from Node.js. The documentation <https://www.npmjs.org/package/mssql> is great but there is no mention of SP (stored procedure) anywhere.
|
The [linked document](https://www.npmjs.org/package/mssql) does actually mention stored procedures:
```
var request = new sql.Request(connection);
request.input('input_parameter', sql.Int, 10);
request.output('output_parameter', sql.VarChar(50));
request.execute('procedure_name', function(err, recordsets, returnValue) {
// ... error checks
console.dir(recordsets);
});
```
Not sure it's wise to answer this question, but it might be valuable for future readers/googlers.
|
It's getting better with ES6/7 additions to JS. This is how you can do it with [async/await](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function):
```
async function getDataFromProcedure(dbConfig, procedureName) {
try {
await sql.connect(dbConfig);
const request = new sql.Request();
    const recordsets = await request.execute(procedureName);
return recordsets[0];
} catch (error) {
// handle error here
}
};
```
|
Node.js and MSSQL stored procedures
|
[
"",
"sql",
"sql-server",
"node.js",
""
] |
I have a 24-hour store and I have a bunch of data points for customer entry and exit timestamps. I want to know how many customers were in the store at any given time.
My data looks like:
```
CREATE TABLE Events (CustomerID INT, EventDTS DATETIME2, Delta INT);
```
Delta is always either 1 or -1 and represents the change in store population. For every customer entry event, there is eventually a customer exit event.
I want to know what the occupancy of the store was at any given point in time. I don't know how to do this efficiently in SQL (MS SQL Server 2012).
This is what I want:
```
select * from EventsWithPopulation;
CustomerID | EventDTS            | Delta | Population
1 | 2014-01-01 00:01:00 | 1 | 0
2 | 2014-01-01 00:04:00 | 1 | 1
3 | 2014-01-01 00:05:00 | 1 | 2
1 | 2014-01-01 00:07:00 | -1 | 3
3 | 2014-01-01 00:07:00 | -1 | 2
2 | 2014-01-01 00:09:00 | -1 | 1
```
I've tried creating a cursor to iterate over the data and apply the delta to a running count variable, but this is very slow. It's on the order of 3 million rows and it takes 5 minutes to compute the running count of the events - I'm looking for a way to do it in a few seconds.
|
You are using SQL Server 2012, so you can use cumulative sum. That makes this easy:
```
select ewp.*,
sum(Delta) over (order by EventDTS) as Population
from EventsWithPopulation ewp;
```
This will give you the population at the instant after any event occurred.
EDIT:
The above looks a bit strange when there are multiple time stamps that are exactly the same. You can fix this by putting something into the ordering to make the rows distinct, presumably the customer id:
```
select ewp.*,
sum(Delta) over (order by EventDTS, CustomerId) as Population
from EventsWithPopulation ewp;
```
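As a further refinement (a sketch; table and column names as in the question): with only an ORDER BY, SQL Server's window frame defaults to RANGE, which gives tied rows the same cumulative value. An explicit ROWS frame makes the per-row running total unambiguous:

```sql
select ewp.*,
       sum(Delta) over (order by EventDTS, CustomerID
                        rows between unbounded preceding
                                 and current row) as Population
from EventsWithPopulation ewp;
```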
|
Gordon's answer will list the population for each event. If you want to give it a time and get the population, just sum the delta between midnight and now.
```
declare @mydate DateTime
set @mydate = GetDate()
Select
sum(delta) + 1
from
EventsWithPopulation
where
EventDTS between DateAdd(Day, Datediff(Day,0,@mydate), 0) and @mydate
```
EDIT: Because an entry event is not reflected until the next event, the delta sum is always behind by one. I've added one to the delta sum to adjust for this.
[Fiddle](http://sqlfiddle.com/#!6/df41c/12)
|
Calculate a running count (store population) in SQL using entry and exit timestamps
|
[
"",
"sql",
"sql-server",
""
] |
I am using SQL Server. I want to select multiple columns with only one distinct column.
For example,
TABLE 1:
```
ID NAME ...(other columns)
1 A
2 B
3 C
```
Table 2 (ID and number together is the unique key):
```
ID Number Year...(other columns)
1 111 2011
2 12345678 2011
2 22222222 2012
3 333 2013
```
Table 3:
```
Name Company ...(other columns)
A Amazon
B Google
C Amazon
```
Each table above has many columns (more than 2). How can I get a result with only 5 columns, without the other "useless" columns, where the ID column is distinct?
More specifically, for example,
The normal sql statement I had is the following:
```
select distinct ID, NAME, NUMBER, COMPANY, Year
from table1
left join table2 on table1.ID = table2.ID
left join table3 on table1.name = table3.name
group by ID, NAME, NUMBER, COMPANY, year
order by ID desc, Year desc
```
This will output the following:
```
ID NAME NUMBER COMPANY YEAR
1 A 111 Amazon 2011
2 B 12345678 google 2011
2 B 22222222 google 2012
3 c 333 Amazon 2013
```
What I want to have is actually the following:
```
ID NAME NUMBER COMPANY YEAR
1 A 111 Amazon 2011
2 B 22222222 google 2012
3 c 333 Amazon 2013
```
I want the results without duplicated IDs. If there are duplicate IDs, I want to show only the latest one. In the above example, ID 2 has 2 rows in table2; I want to show the one with the latest year, which is 2012.
How can I achieve this. Thanks in advance.
|
You can use `not exists` to only select the latest rows per id (where another row with the same id and a greater year does not exist).
```
select * from table1 t1
where not exists (
select 1 from table1 t2
where t2.id = t1.id
and t2.year > t1.year
)
```
using analytic functions (this should be faster than the query above)
```
select * from
(select *,
row_number() over(partition by id order by year desc) rn
from table1) t1 where rn = 1
```
**edit:** applied to your tables
```
select t2.id, t3.name, t2.number, t3.company, t2.year from
(
select * from
(select *,
row_number() over(partition by id order by year desc) rn
from table2
) t1 where rn = 1
) t2 join table1 t1 on t2.id = t1.id
join table3 t3 on t3.name = t1.name
```
|
```
WITH CTE AS
(
SELECT t1.ID, t1.NAME, t2.NUMBER, t3.COMPANY, t2.Year,
Row_number() OVER(partition BY t1.ID, t1.NAME, t2.NUMBER, t3.COMPANY ORDER BY t2.Year DESC) AS rn
FROM table1 t1
LEFT JOIN table2 t2 ON t1.ID = t2.ID
LEFT JOIN table3 t3 ON t1.name = t3.name
)
SELECT ID, NAME, NUMBER, COMPANY, Year
FROM CTE
WHERE rn = 1
ORDER BY ID desc, Year desc
```
|
sql - how to select multiple columns with only one distinct column from joining multiple tables
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm using Oracle SQL. I have a column of `String` values that are actually supposed to be currency amounts.
The string values are in the following format:
```
103,2 %
```
I need to convert it to:
```
103.2
```
Also, I need to convert the values to `Float`.
Is there a good conversion function that can help me? Do I need to do that with the `Replace()` function?
|
How about something like this?
```
select cast(translate(col, ', %', '.') as float)
```
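To unpack the one-liner: `TRANSLATE` maps each character in the second argument to the character at the same position in the third, and deletes any character that has no counterpart. Here `,` becomes `.`, while the space and `%` are simply dropped:

```sql
-- ',' -> '.'; ' ' and '%' have no mapping, so they are removed
select cast(translate('103,2 %', ', %', '.') as float) as val
from dual;
-- 103.2
```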
|
You can use `replace` to strip out the fixed `%` part, which changes `103,2 %` to `103,2`, and pass the result of that into `to_number()`. Since the value has a comma as the decimal separator while the database uses a period, you can specify NLS\_NUMERIC\_CHARACTERS:
```
select to_number(replace('103,2 %', ' %'),
'999999D99', 'NLS_NUMERIC_CHARACTERS='',.''') as value
from dual;
VALUE
----------
103.2
```
If you don't have quite a fixed format - the space may or not be there, for example, or you could have different currency symbols instead of % - then you could use `translate` as Gordon Linoff showed, or use a regular expression to remove any non-numeric characters:
```
select to_number(regexp_replace('103,2 %', '[^[:digit:],]'),
'999999D99', 'NLS_NUMERIC_CHARACTERS='',.''') as value
from dual;
```
|
Converting string to currency value
|
[
"",
"sql",
"oracle",
""
] |
I have these tables, Orders Table:
```
Name Null? Type
ORDER_ID NOT NULL NUMBER(5)
CUSTOMER_ID NUMBER(8)
SHIPMENT_METHOD_ID NUMBER(2)
```
and Shipment\_method Table:
```
Name Null? Type
SHIPMENT_METHOD_ID NOT NULL NUMBER(2)
SHIPMENT_DESCRIPTION VARCHAR2(80)
```
I'm trying to get the most-used shipping method based on the orders. I'm kind of a beginner here, so I need some help.
I'm thinking it might be possible with MAX(count(order\_id)), but how can I do that for each shipment\_method\_id?
|
This is another approach:
```
select shipment_method_id, shipment_description, count(*) as num_orders
from orders
join shipment_method
using (shipment_method_id)
group by shipment_method_id, shipment_description
having count(*) = (select max(count(order_id))
from orders
group by shipment_method_id)
```
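On Oracle 12c and later there is a shorter alternative using the row-limiting clause, which also keeps ties (a sketch; it omits the description join for brevity):

```sql
select shipment_method_id, count(*) as num_orders
from orders
group by shipment_method_id
order by num_orders desc
fetch first 1 rows with ties;
```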
|
Here is a method that allows for more than one Shipment Method having the same maximum number of Orders.
```
SELECT shipment_method_id
,shipment_description
,orders
FROM
(SELECT shipment_method_id
,shipment_description
,orders
,rank() OVER (ORDER BY orders DESC) orders_rank
FROM
(SELECT smm.shipment_method_id
,smm.shipment_description
,count(*) orders
FROM orders odr
INNER JOIN shipment_method smm
ON (smm.shipment_method_id = odr.shipment_method_id)
GROUP BY smm.shipment_method_id
,smm.shipment_description
)
)
WHERE orders_rank = 1
```
|
Max value of count in oracle
|
[
"",
"sql",
"oracle",
""
] |
> I have the following table.

I need to select the SemesterID, AcadamiYear and AcadamicSemester of the record with the highest AcadamiYear and AcadamicSemester for the intake year 2015.
Expected output is
> 2013 1 2
I tried the following query but it returns both of the records
```
select MAX(AcadamiYear) as Year,
MAX(AcadamicSemester) as Semester
,SemesterID
from
tblSemesterRegistration
where [IntakeYear]='2015'
Group by SemesterID
```
|
Since you are searching for a single record, you might use TOP 1, ordered by your intent:
```
select TOP 1 *
from
tblSemesterRegistration
where [IntakeYear]='2015'
Order by AcadamiYear DESC, AcadamicSemester DESC
```
|
This is the query you're looking for:
```
SELECT SR.*
FROM tblSemesterRegistration SR
INNER JOIN (SELECT MAX(SR2.AcadamiYear) AS [AcadamiYear]
,MAX(SR2.AcadamicSemester) AS [AcadamicSemester]
,IntakeYear
FROM tblSemesterRegistration SR2
                   GROUP BY SR2.IntakeYear) T ON T.AcadamiYear = SR.AcadamiYear
AND T.AcadamicSemester = SR.AcadamicSemester
AND T.IntakeYear = SR.IntakeYear
WHERE SR.IntakeYear = '2015'
```
Hope this will help you.
|
Select Max with groupby
|
[
"",
"sql",
"sql-server",
""
] |
I have a table called calendar with only one column, Date\_I (type Date). The requirement is to create a PL/SQL function insert\_data which takes start\_date and end\_date. This function checks whether each date between start\_date and end\_date is present in the calendar table, and if not, inserts that date into the table.
I know the requirement is quite unusual, but it has to be this way. Following is the code I have written, but I'm wondering if there is a better way to do the same:
```
PROCEDURE Insert_Data(Start_Date DATE, End_Date DATE)
IS
cal_v calendar%rowtype;
BEGIN
DECLARE
CURSOR cal_c IS
SELECT *
FROM Calendar;
BEGIN
FOR cal_v IN calc_c LOOP
FOR date_v IN Insert_Data.Start_Date..Insert_Data.End_Date LOOP
BEGIN
SELECT * FROM calendar WHERE calc_v.calc_date = date_v;
EXCEPTION
WHEN NO_DATA_FOUND THEN
INSERT INTO calendar VALUES date_v;
END;
END LOOP;
END LOOP;
END;
END Insert_Data;
```
The above code loops through each record from the calendar table and, for each date in the range, executes a select; if no data is found it inserts the date into the table. This code is working, but it executes the select a great many times before each insert (when required). I am now worried about the performance. Any suggestion would be of great help.
Thanks!
|
As a more concrete example of the `MERGE` approach already suggested:
```
PROCEDURE Insert_Data(Start_Date DATE, End_Date DATE)
IS
BEGIN
MERGE INTO calendar ca
USING (
SELECT Insert_Data.Start_Date + level - 1 as cal_date
FROM dual
CONNECT BY level <= Insert_Data.End_Date - Insert_Data.Start_Date + 1
) t
ON (t.cal_date = ca.cal_date)
WHEN NOT MATCHED THEN INSERT VALUES (t.cal_date);
END Insert_Data;
```
It doesn't need to be a procedure at all, but that seems to be a requirement on its own. You can just run the merge as plain SQL, using your date range directly instead of through variables. (Or as bind variables, depending on how/where you're running this).
The `USING` clause is a generated table that creates all of the dates in the supplied range, using a common `CONNECT BY` method. [The `LEVEL` pseudocolumn](http://docs.oracle.com/cd/E11882_01/server.112/e26088/pseudocolumns001.htm#SQLRF50942) is similar to the loop you're trying to do; overall the inner query generates all dates in your range as an inline view, which you can then check against the actual table. The rest of the statement only inserts new records from that range if they don't already exist.
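Run on its own, the date generator is easy to inspect; for example, for a hypothetical one-week range:

```sql
-- one row per day, 2014-01-01 through 2014-01-07
SELECT DATE '2014-01-01' + LEVEL - 1 AS cal_date
FROM dual
CONNECT BY LEVEL <= 7;
```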
You could also do the same thing manually, and less efficiently, with a `NOT EXISTS` check:
```
PROCEDURE Insert_Data(Start_Date DATE, End_Date DATE)
IS
BEGIN
INSERT INTO calendar
WITH t AS (
SELECT Insert_Data.Start_Date + level - 1 as cal_date
FROM dual
CONNECT BY level <= Insert_Data.End_Date - Insert_Data.Start_Date + 1
)
SELECT cal_date
FROM t
WHERE NOT EXISTS (
SELECT 1
FROM Calendar
WHERE Calendar.cal_date = t.cal_date
);
END Insert_Data;
```
[SQL Fiddle](http://sqlfiddle.com/#!4/11cca2/1).
---
You have a few other issues in your procedure.
This is redundant because of the form of cursor-for loop you're using:
```
cal_v calendar%rowtype;
```
You have an unnecessary nested block here; it doesn't hurt I suppose but it isn't adding anything either. The first BEGIN, DECLARE and the first END can be removed (and the alignment is a bit off):
```
BEGIN -- remove
DECLARE -- remove
CURSOR cal_c IS
SELECT *
FROM Calendar;
BEGIN
...
END; -- remove
END Insert_Data;
```
The outer loop, and the entire cursor, isn't needed; it actually means you're repeating the inner loop which actually does the work (or tries to, the first time anyway) as many times as there are existing records in the calendar table, which is pointless and slow:
```
FOR cal_v IN calc_c LOOP
FOR date_v IN Insert_Data.Start_Date..Insert_Data.End_Date LOOP
...
END LOOP;
END LOOP;
```
The inner loop won't compile as you can't use dates for a range loop, only integers (giving PLS-00382):
```
FOR date_v IN Insert_Data.Start_Date..Insert_Data.End_Date LOOP
```
The innermost select doesn't have an INTO; this won't compile either:
```
SELECT * FROM calendar WHERE calc_v.calc_date = date_v;
```
The insert needs the value to be enclose in parentheses:
```
INSERT INTO calendar VALUES date_v;
```
So if you really did want to do it this way you'd do something like:
```
PROCEDURE Insert_Data(Start_Date DATE, End_Date DATE)
IS
tmp_date DATE;
BEGIN
FOR i IN 0..(Insert_Data.End_Date - Insert_Data.Start_Date) LOOP
BEGIN
dbms_output.put_line(i);
SELECT cal_date INTO tmp_date FROM calendar
WHERE cal_date = Insert_Data.Start_Date + i;
EXCEPTION
WHEN NO_DATA_FOUND THEN
INSERT INTO calendar VALUES (Insert_Data.Start_Date + i);
END;
END LOOP;
END Insert_Data;
```
... but really, use merge.
|
Try inserting values into your table using [MERGE](http://www.oracle-base.com/articles/9i/merge-statement.php):
```
MERGE INTO calendar ca
USING (SELECT *
FROM calendar
WHERE < your sql condition>
)qry ON (ca.<your_primary_key> = qry.<your_primary_key>)
WHEN NOT MATCHED THEN
INSERT INTO calendar VALUES....
```
A great deal about performance is elucidated [here](http://www.oracle-base.com/articles/9i/merge-statement.php#performance)
You could wrap merge statement in a procedure too.
|
Loop through date range and insert when no data found
|
[
"",
"sql",
"database",
"oracle",
"plsql",
""
] |
I have attached my code below for your review. The error comes from the 4th CASE statement, where I am using math functions to calculate information. Does anyone know what is causing the error and how to get around it? I'm guessing that the issue is in my ORDER BY clause at the end.
```
SELECT
H.DATE AS "DATE"
,TRIM(H.NUMBER) AS "NUMBER"
,CASE WHEN TT.RY_DATE IS NULL THEN '' ELSE CHAR(TT.RY_DATE) END AS "RY DATE"
,CASE WHEN T.PT_DATE = '0001-01-01' THEN 'N' ELSE 'Y' END AS "PT"
,T.ON AS "ON"
,CASE WHEN H.CLASS = '0.00' THEN H.CLASS ELSE H.RATED END AS "CLASS"
,SUBSTR(TRIM(S.NAME), 1, LENGTH(TRIM(S.NAME))-2) AS "CITY"
,H.STATE AS "STATE"
,H.ZIPCODE AS "ZIP"
,SUBSTR(TRIM(CN.NAME), 1, LENGTH(TRIM(CN.NAME))-2) AS "DCITY"
,H.STATE AS "DSTATE"
,H.ZIPCODE AS "DZIP"
,H.WGT AS "WEIGHT"
,CASE
WHEN Q.AMOUNT IS NULL THEN (Y.CHGS - Z.AMOUNT)
WHEN Q.AMOUNT IS NOT NULL THEN (Y.CHGS - (SUM(Q.AMOUNT + Z.AMOUNT))) ELSE ''
END AS "LL"
,Z.AMOUNT AS "FU"
,SUM(Q.AMOUNT) AS "AC"
,Y.CHGS AS "CHARGES"
FROM A.COST H
INNER JOIN A.MASTER S
ON H.CITY = S.CITY
INNER JOIN A.MASTER CN
ON H.CITY = CN.CITY
INNER JOIN A.SPEC Z
ON Z.NUMBER = H.NUMBER
AND Z.DATE = H.DATE
AND Z.TYPE = 'F'
INNER JOIN A.ALT Y
ON Y.NUMBER = H.NUMBER
AND Y.DATE = H.DATE
LEFT OUTER JOIN A.SPEC Q
ON Q.NUMBER = H.NUMBER
AND Q.DATE = H.DATE
AND Q.TYPE = 'S'
LEFT OUTER JOIN A.T TT
ON H.NUMBER = TT.NUMBER
LEFT OUTER JOIN A.TIME T
ON T.NUMBER = H.NUMBER
WHERE H.CTRL = '000000'
AND (MONTH(CURRENT DATE)-1) = MONTH(H.DATE)
AND H.DATE > CURRENT DATE - 90 DAYS
GROUP BY
H.DATE
,TRIM(H.NUMBER)
    ,CASE WHEN TT.RY_DATE IS NULL THEN '' ELSE CHAR(TT.RY_DATE) END
,CASE WHEN T.PT_DATE = '0001-01-01' THEN 'N' ELSE 'Y' END
,T.ON
,CASE WHEN H.CLASS = '0.00' THEN H.CLASS ELSE H.RATED END
,SUBSTR(TRIM(S.NAME), 1, LENGTH(TRIM(S.NAME))-2)
,H.STATE
,H.ZIPCODE
,SUBSTR(TRIM(CN.NAME), 1, LENGTH(TRIM(CN.NAME))-2)
,H.STATE
,H.ZIPCODE
,H.WGT
,CASE WHEN Q.AMOUNT IS NULL THEN (Y.CHGS - Z.AMOUNT)
WHEN Q.AMOUNT IS NOT NULL THEN (Y.CHGS - (SUM(Q.AMOUNT + Z.AMOUNT))) END
,Z.AMOUNT
,Y.CHGS
ORDER BY DATE
```
|
The issue is that you are trying to reference both individual and aggregated `Q.AMOUNT` values in the SELECT.
If I understand your intention correctly, you probably meant to do this with your fourth CASE:
```
Y.CHGS - (COALESCE(SUM(Q.AMOUNT), 0) + Z.AMOUNT) AS "LL"
```
If every `Q.AMOUNT` in a group is NULL, `SUM(Q.AMOUNT)` will be NULL as well and replaced with a zero, which will essentially make the expression equivalent to `Y.CHGS - Z.AMOUNT`.
You don't need to repeat this expression in GROUP BY as all references in it are either already GROUP BY criteria or being aggregated.
The `COALESCE` function accepts multiple arguments and returns the first one that is not NULL.
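As a quick illustration of both behaviors on DB2 (using the SYSIBM dummy table; the casts give the NULLs a concrete type):

```sql
-- COALESCE returns the first non-NULL argument: 3
SELECT COALESCE(CAST(NULL AS INT), 3, 5) FROM SYSIBM.SYSDUMMY1;
-- SUM over only-NULL values is NULL, which COALESCE replaces with 0
SELECT COALESCE(SUM(CAST(NULL AS INT)), 0) FROM SYSIBM.SYSDUMMY1;
```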
|
In the fourth case there is no need for the extra `SUM` function. Try something like this:
```
CASE
WHEN Q.AMOUNT IS NULL THEN (Y.CHGS - Z.AMOUNT)
WHEN Q.AMOUNT IS NOT NULL THEN (Y.CHGS - Q.AMOUNT + Z.AMOUNT)
ELSE ''
END AS "LL"
```
Hope this is helpful to you.
|
"INVALID USE OF AN AGGREGATE FUNCTION OR OLAP FUNCTION" ERROR IN DB2
|
[
"",
"sql",
"db2",
""
] |
I have a customer database which includes LMD (Last Modified Date), of data type date, and LMT (Last Modified Time), of data type int. I need to create a datetime from the LMD and LMT columns. Here is a sample row:
```
LMD LMT
2014-09-03 172351
```
|
You might use division and modulo to extract the parts needed for DATEADD:
```
declare @datefield date
declare @timefield int
set @datefield='20140910'
set @timefield=121314
Select CAST(@datefield AS DATETIME) + DATEADD(HOUR,@timefield/10000,0) + DATEADD(MINUTE,@timefield/100 % 100,0)+ DATEADD(SECOND,@timefield % 100,0)
Select CAST(LMD AS datetime) + DATEADD(HOUR,LMT/10000,0) + DATEADD(MINUTE,LMT/100 % 100,0)+ DATEADD(SECOND,LMT % 100,0), LMT, LMD
FROM MyTable
```
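To see why the division/modulo arithmetic works, take LMT = 172351 (17:23:51):

```sql
SELECT 172351 / 10000     AS hrs,   -- 17 (integer division)
       172351 / 100 % 100 AS mins,  -- 23 (1723 % 100)
       172351 % 100       AS secs;  -- 51
```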
|
This works using `TIMEFROMPARTS` and adding the result to the date. (Note that the `SUBSTRING` positions assume LMT always has six digits; times before 10:00:00 would need zero-padding first.)
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/3dc52/1)**
```
select cast(LMD as datetime) + cast(timefromparts(substring(cast(lmt as varchar), 1, 2), substring(cast(lmt as varchar), 3, 2), substring(cast(lmt as varchar), 5, 2),0,0) as datetime)
from yourTable
```
|
How to combine separate Date and Time columns into DateTime?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I want to add a column to my table with a random number using seed.
If I use RAND:
```
select *, RAND(5) as random_id from myTable
```
I get the same value (0.943597390424144, for example) for all the rows in the random\_id column. I want this value to be different for every row - and, since a seed should make it repeatable, passing the same seed (0.5, for example) should produce the same values again.
How can I do this?
(
For example, in PostrgreSql I can write
`SELECT setseed(0.5);
SELECT t.* , random() as random_id
FROM myTable t`
And I will get different values in each row.
)
---
**Edit:**
After I saw the comments here, I have managed to work this out somehow - but it's not efficient at all.
If someone has an idea how to improve it - it will be great. If not - I will have to find another way.
I used the basic idea of the example [here](http://technet.microsoft.com/en-us/library/ms177610.aspx).
**Creating a temporary table** with blank seed value:
```
select * into t_myTable from (
select t.*, -1.00000000000000000 as seed
from myTable t
) as temp
```
**Adding a random number** for each seed value, one row at a time (this is the bad part...):
```
USE CPatterns;
GO
DECLARE @seed float;
DECLARE @id int;
DECLARE VIEW_CURSOR CURSOR FOR
select id
from t_myTable t;
OPEN VIEW_CURSOR;
FETCH NEXT FROM VIEW_CURSOR
into @id;
set @seed = RAND(5);
WHILE @@FETCH_STATUS = 0
BEGIN
set @seed = RAND();
update t_myTable set seed = @seed where id = @id
FETCH NEXT FROM VIEW_CURSOR
into @id;
END;
CLOSE VIEW_CURSOR;
DEALLOCATE VIEW_CURSOR;
GO
```
**Creating the view** using the seed value and ordering by it
```
create view my_view AS
select row_number() OVER (ORDER BY seed, id) AS source_id ,t.*
from t_myTable t
```
|
So, in case it helps someone someday, here's what I eventually did.
I'm generating the random seeded values in the server side(Java in my case), and then create a table with two columns: the id and the generated random\_id.
Now I create the view as an `inner join` between the table and the original data.
The generated SQL looks something like that:
```
CREATE TABLE SEED_DATA(source_id INT PRIMARY KEY, random_id float NOT NULL);
select Rand(5);
insert into SEED_DATA values(1,Rand());
insert into SEED_DATA values(2, Rand());
insert into SEED_DATA values(3, Rand());
.
.
.
insert into SEED_DATA values(1000000, Rand());
```
and
```
CREATE VIEW DATA_VIEW
as
SELECT row_number() OVER (ORDER BY random_id, id) AS source_id,column1,column2,...
FROM
( select * from SEED_DATA tmp
inner join my_table i on tmp.source_id = i.id) TEMP
```
In addition, I create the random numbers in batches, 10,000 or so in each batch(may be higher), so it will not weigh heavily on the server side, and for each batch I insert it to the table in a separate execution.
All of that because I couldn't find a good way to do what I want purely in SQL. Updating row after row is really not efficient.
My own conclusion from this story is that SQL Server is sometimes really annoying...
|
I think the simplest way to get a repeatable random id in a table is to use `row_number()` or a fixed `id` on each row. Let me assume that you have a column called `id` with a different value on each row.
The idea is just to use this as a seed:
```
select rand(id*1) as random_id
from mytable;
```
Note that the seed for the id is an integer and not a floating point number. If you wanted a floating point seed, you could do something with `checksum()`:
```
select rand(checksum(id*0.5)) as random_id
. . .
```
If you are doing this for sampling (where you will say `random_id < 0.1` for a 10% sample for instance, then I often use modulo arithmetic on `row_number()`:
```
with t as (
select t.*, row_number() over (order by id) as seqnum
from mytable t
)
select *
from t
where ((seqnum * 17 + 71) % 101) < 10
```
This returns about 10% of the numbers (okay, really 10/101). And you can adjust the sample by fiddling with the constants.
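To see that this modulo trick is deterministic and yields the advertised sample size, here is a small sketch in Python/SQLite (the table name is hypothetical). Over any run of 101 consecutive sequence numbers, exactly 10 remainders fall below 10:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO mytable VALUES (?)", [(i,) for i in range(1, 1011)])

# The same rows come back on every run: the "randomness" is a fixed permutation.
sample = conn.execute("""
    WITH t AS (
      SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS seqnum
      FROM mytable
    )
    SELECT id FROM t WHERE (seqnum * 17 + 71) % 101 < 10
""").fetchall()
print(len(sample))  # 100 of 1010 rows, i.e. exactly 10/101
```

(Window functions need SQLite 3.25+, which ships with any recent Python.)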
|
SQL Server random using seed
|
[
"",
"sql",
"sql-server",
"random",
"random-seed",
""
] |
I work on a MySQL database table that has a column containing timestamps (Ex. 2014-09-16 09:08:05) of the time I ping different hosts. My question is how could I calculate in minutes the difference between the first ping and the last one for specific hosts? Also how could I specify different timestamps for the start and the end of the difference mentioned above (instead of the first and last ping). Here is an example of the table:
```
|--|---------|-------------------|----|
|id|http_code|pingtime |host|
|--|---------|-------------------|----|
|10|200 |2014-09-16 09:08:05|2 |
|--|---------|-------------------|----|
|11|200 |2014-09-16 10:07:05|1 |
|--|---------|-------------------|----|
|12|200 |2014-09-16 10:14:10|2 |
|--|---------|-------------------|----|
```
I hope that I've explained myself clearly enough.
|
You could use the native [TIMESTAMPDIFF](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_timestampdiff) function :
```
SELECT TIMESTAMPDIFF(<INTERVAL>,<timestampFrom>,<timestampTo>);
```
If you want to find the difference between the first and the last timestamp of a given host ID, here you are:
```
SELECT TIMESTAMPDIFF(MINUTE,MIN(pingtime),MAX(pingtime))
FROM yourTable
WHERE host = 2;
```
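`TIMESTAMPDIFF` is MySQL-specific, but the min/max idea is portable. A rough SQLite equivalent via `julianday`, sketched with the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pings (id INTEGER, pingtime TEXT, host INTEGER)")
conn.executemany("INSERT INTO pings VALUES (?, ?, ?)", [
    (10, "2014-09-16 09:08:05", 2),
    (11, "2014-09-16 10:07:05", 1),
    (12, "2014-09-16 10:14:10", 2),
])

# Whole minutes between the first and last ping for host 2 (66 min 5 s -> 66).
minutes = conn.execute("""
    SELECT CAST((julianday(MAX(pingtime)) - julianday(MIN(pingtime))) * 24 * 60 AS INTEGER)
    FROM pings
    WHERE host = 2
""").fetchone()[0]
print(minutes)  # 66
```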
|
If you are on SQL Server instead, the equivalent function is `DATEDIFF`:
```
SELECT DATEDIFF(minute, '2008-11-29', '2008-11-30') AS DiffDate
```
In addition to minute, you can use day as well.
|
Calculate the time difference between two timestamps in mysql
|
[
"",
"mysql",
"sql",
"database",
"time",
""
] |
Heres what i have
**Query**
```
with temp as (
select 11 as x from dual
union
select 222 as x from dual
)
select x from temp t,(SELECT 1
FROM DUAL
CONNECT BY ROWNUM <= 3)
```
**Output**
```
X
--
11
222
11
222
11
222
```
**Output i desire**
```
X
--
11
11
222
222
222
```
Basically I would like each row to repeat itself based on the length of its 'X' value.
So if the value is 'abc',
then that row should repeat 3 times.
|
This works:
```
with temp as (
select '11' as x from dual
union
select '222' as x from dual
)
SELECT x, LEVEL from temp t
CONNECT BY LEVEL <= LENGTH(t.x)
and prior x = x
and prior sys_guid() is not null;
```
Last line does the trick. Same can be achieved with other reference to PRIOR:
```
with temp as (
select '11' as x from dual
union
select '222' as x from dual
)
SELECT x, LEVEL from temp t
CONNECT BY LEVEL <= LENGTH(t.x)
and PRIOR x = x
and PRIOR DBMS_RANDOM.VALUE () IS NOT NULL;
```
|
Since you're on 11g you can use [recursive subquery factoring](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10002.htm#sthref6730) to achieve this:
```
with t as (
select 11 as x from dual
union all
select 222 as x from dual
),
r (x, idx, len) as (
select x, 1, length(x)
from t
union all
select r.x, r.idx + 1, r.len
from r
where r.idx < r.len
)
select x from r
order by x;
X
-----
11
11
222
222
222
```
The anchor member gets the original rows and the length of the value. The recursive member adds one to `idx` until it reaches the length.
[SQL Fiddle](http://sqlfiddle.com/#!4/d41d8/35241).
You can do it with a hierarchical query too:
```
with t as (
select 11 as x from dual
union all
select 222 as x from dual
)
select x
from t
connect by level <= length(x)
and prior x = x
and prior sys_guid() is not null;
```
The combination of the two `prior` clauses - one restricting duplicates, the other involving a non-deterministic function to prevent cycling when you do that - gives you the desired rows:
```
X
-----
11
11
222
222
222
```
[SQL Fiddle](http://sqlfiddle.com/#!4/d41d8/35255).
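The recursive version translates almost directly to any engine with recursive CTEs. For example, a sketch of the same idea in SQLite, run from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Each value is emitted once per character of its length, as in the Oracle answers.
rows = conn.execute("""
    WITH RECURSIVE
    t(x) AS (VALUES ('11'), ('222')),
    r(x, idx, len) AS (
      SELECT x, 1, length(x) FROM t
      UNION ALL
      SELECT x, idx + 1, len FROM r WHERE idx < len
    )
    SELECT x FROM r ORDER BY x
""").fetchall()
print([x for (x,) in rows])  # ['11', '11', '222', '222', '222']
```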
|
Repeat rows dynamically in Oracle based on a condition
|
[
"",
"sql",
"oracle",
"oracle11g",
"connect-by",
""
] |
I have table with following structure:

And I'm trying to get grouped values between two dates. The problem is that I would also like rows returned for dates which have no matching data; for example, I have the range
```
WHERE m.date BETWEEN "2014-09-02" AND "2014-09-10"
```
But, for example, for the date 2014-09-06 there is no related row in the table, so the result should include
```
2014-09-06| 0 | 0 | 0 | 0
```
How can I do it, please? (If it is possible with an SQLite database.)
**Here is the query which i'm using:**
```
SELECT substr(m.date, 1, 10) as my_date, COUNT(m.ID) AS 'NUMBER_OF_ALL_CALLS',
(SELECT COUNT(*) FROM dialed_calls subq WHERE subq.call_result = 'DONE'
AND substr(m.date, 1, 10) = substr(subq.DATE, 1, 10)) as 'RESULT_DONE',
(SELECT COUNT(*) FROM dialed_calls subq WHERE subq.call_result = 'NOT_INTERESTED'
AND substr(m.date, 1, 10) = substr(subq.DATE, 1, 10)) as 'RESULT_NOT_INTERESTED',
(SELECT COUNT(*) FROM dialed_calls subq WHERE subq.call_result = 'NO_APPOINTMENT'
AND substr(m.date, 1, 10) = substr(subq.DATE, 1, 10)) as 'RESULT_NO_APP'
FROM dialed_calls m
WHERE m.date BETWEEN "2014-09-02" AND "2014-09-05"
GROUP BY my_date
```
Many thanks for any help.
**Table structure:**
```
BEGIN TRANSACTION;
CREATE TABLE dialed_calls(Id integer PRIMARY KEY,
'date' datetime,
'called_number' VARCHAR(45),
'call_result' VARCHAR(45),
'call_duration' INT,
'synced' BOOL);
/* Create few records in this table */
INSERT INTO dialed_calls VALUES(1,'2014-09-02 15:54:34+0200',
'800123456', 'NOT_INTERESTED', 10, 0);
INSERT INTO dialed_calls VALUES(2,'2014-09-02 15:56:30+0200',
'800123456', 'NO_APPOINTMENT', 10, 0);
INSERT INTO dialed_calls VALUES(3,'2014-09-02 16:01:49+0200',
'800123456', 'DONE', 9, 0);
INSERT INTO dialed_calls VALUES(4,'2014-09-02 16:03:03+0200',
'800123456', 'NO_APPOINTMENT', 69, 0);
INSERT INTO dialed_calls VALUES(5,'2014-09-02 18:09:34+0200',
'800123456', 'NO_APPOINTMENT', 3, 0);
INSERT INTO dialed_calls VALUES(6,'2014-09-02 18:54:02+0200',
'123456789', 'NO_APPOINTMENT', 89, 0);
INSERT INTO dialed_calls VALUES(7,'2014-09-02 18:55:25+0200',
'123456789', 'NOT_INTERESTED', 89, 0);
INSERT INTO dialed_calls VALUES(8,'2014-09-03 18:36:58+0200',
'123456789', 'DONE', 185, 0);
INSERT INTO dialed_calls VALUES(9,'2014-09-04 18:36:58+0200',
'123456789', 'DONE', 185, 0);
INSERT INTO dialed_calls VALUES(10,'2014-09-05 18:36:58+0200',
'123456789', 'DONE', 185, 0);
COMMIT;
```
|
Try this:
```
SELECT
d.date AS DATE,
IFNULL(NUMBER_OF_ALL_CALLS, 0) AS NUMBER_OF_ALL_CALLS,
IFNULL(RESULT_DONE, 0) AS RESULT_DONE,
IFNULL(RESULT_NOT_INTERESTED, 0) AS RESULT_NOT_INTERESTED,
IFNULL(RESULT_NO_APP, 0) AS RESULT_NO_APP
FROM
(SELECT DATE('1970-01-01', '+' || (t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) || ' days') date FROM
(SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) t0,
(SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) t1,
(SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) t2,
(SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) t3,
(SELECT 0 i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) t4) d
LEFT JOIN
(
SELECT substr(m.date, 1, 10) as my_date, COUNT(m.ID) AS 'NUMBER_OF_ALL_CALLS',
(SELECT COUNT(*) FROM dialed_calls subq WHERE subq.call_result = 'DONE'
AND substr(m.date, 1, 10) = substr(subq.DATE, 1, 10)) as 'RESULT_DONE',
(SELECT COUNT(*) FROM dialed_calls subq WHERE subq.call_result = 'NOT_INTERESTED'
AND substr(m.date, 1, 10) = substr(subq.DATE, 1, 10)) as 'RESULT_NOT_INTERESTED',
(SELECT COUNT(*) FROM dialed_calls subq WHERE subq.call_result = 'NO_APPOINTMENT'
AND substr(m.date, 1, 10) = substr(subq.DATE, 1, 10)) as 'RESULT_NO_APP'
FROM dialed_calls m
GROUP BY my_date
) t
ON d.date = t.my_date
WHERE d.date BETWEEN '2014-09-02' AND '2014-09-10'
ORDER BY d.date;
```
The above query will first generate the dates in the specified date range and then join the retrieved values with your table.
|
This is a good case for joining to a Calendar table.
<http://web.archive.org/web/20070611150639/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html>
Note this is a SQL server link, but you can adapt this to SQLlite.
You could do your calculations, and then right join the results to the calendar table, so that the dates appear with NULL values. Or you could COALESCE() the nulls to something that makes more sense, like 0.
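If a permanent calendar table is overkill, SQLite can also generate one on the fly with a recursive CTE. A sketch, using a simplified version of the question's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dialed_calls (id INTEGER PRIMARY KEY, date TEXT, call_result TEXT)")
conn.executemany("INSERT INTO dialed_calls (date, call_result) VALUES (?, ?)", [
    ("2014-09-02 15:54:34", "NOT_INTERESTED"),
    ("2014-09-03 18:36:58", "DONE"),
    ("2014-09-05 18:36:58", "DONE"),
])

# Generate every date in the range, then LEFT JOIN so empty days show up as 0.
rows = conn.execute("""
    WITH RECURSIVE cal(d) AS (
      SELECT '2014-09-02'
      UNION ALL
      SELECT date(d, '+1 day') FROM cal WHERE d < '2014-09-05'
    )
    SELECT cal.d, COUNT(c.id)
    FROM cal
    LEFT JOIN dialed_calls c ON substr(c.date, 1, 10) = cal.d
    GROUP BY cal.d
    ORDER BY cal.d
""").fetchall()
print(rows)  # [('2014-09-02', 1), ('2014-09-03', 1), ('2014-09-04', 0), ('2014-09-05', 1)]
```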
|
How to get missing values in date range?
|
[
"",
"sql",
"date",
"range",
"between",
""
] |
Simplified I have two tables in MySQL: One holding a sort of person entity and one relation table which associates multiple rights two the person entity. The tables look like:
person
```
person_id | person_name
1 | Michael
2 | Kevin
```
person2right
```
person_id | right_id
1 | 1
1 | 2
1 | 4
2 | 1
2 | 2
```
What I want to achieve now is getting all persons including all associated rights, which have at least the defined rights - right\_id 1 and 4 in this example.
What I have so far is a query with subselect, but I wonder if there is a more efficient way to achieve my goal without the subselect, because of MySQL not being able to use an index when joining a subselect. Here is my query:
```
SELECT person_name, GROUP_CONCAT(`person2right`.`right_id`) as `all_rights`
FROM `person`
LEFT JOIN `person2right` ON `person`.`person_id` = `person2right`.`person_id`
LEFT JOIN (
SELECT `person_id` FROM `person2right` WHERE `right_id` IN (1, 4)
GROUP BY `person_id` HAVING COUNT(`person2right`.`right_id`) >= 2
) as `p2r` ON `person`.`person_id` = `p2r`.`person_id`
WHERE `p2r`.`person_id` IS NOT NULL GROUP BY `person_id`
```
Maybe someone has an idea how to do it without the subquery. Appreciate your help.
Thanks in advance!
|
This will only select persons (and all their associated rights) who have both rights 1 and 4. Note that it's different from your query in that your query selects all persons (regardless of their rights) and only selects their associated rights if they have rights 1 and 4.
```
SELECT person_name, GROUP_CONCAT(`person2right`.`right_id`) as `all_rights`
FROM `person`
JOIN `person2right` ON `person`.`person_id` = `person2right`.`person_id`
GROUP BY `person`.`person_id`
HAVING SUM(`right_id` = 4) > 0 AND SUM(`right_id` = 1) > 0
```
**Edit:** if the rows in `person2right` are unique, then you can change your `having` clause to
```
HAVING SUM(`right_id` IN (1,4)) = 2
```
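SQLite evaluates comparisons to 0/1 just like MySQL, so the conditional-aggregation trick can be verified directly. A sketch with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (person_id INTEGER PRIMARY KEY, person_name TEXT)")
conn.execute("CREATE TABLE person2right (person_id INTEGER, right_id INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)", [(1, "Michael"), (2, "Kevin")])
conn.executemany("INSERT INTO person2right VALUES (?, ?)",
                 [(1, 1), (1, 2), (1, 4), (2, 1), (2, 2)])

# Only persons holding BOTH right 1 and right 4 survive the HAVING clause.
rows = conn.execute("""
    SELECT person_name, GROUP_CONCAT(r.right_id) AS all_rights
    FROM person p
    JOIN person2right r ON p.person_id = r.person_id
    GROUP BY p.person_id
    HAVING SUM(r.right_id = 4) > 0 AND SUM(r.right_id = 1) > 0
""").fetchall()
print([name for name, _ in rows])  # ['Michael']
```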
|
Let's see if additional joins can do the trick:
```
select person_name, group_concat(distinct p2r.right_id) as all_rights
from person as p
inner join person2right as p2r using (person_id) -- You don't need LEFT JOIN, because you'll only return persons with rights
-- The new stuff starts here: Two new LEFT JOINs to track the rights you want
left join person2right as p2r_1 using (person_id)
left join person2right as p2r_4 using (person_id)
where
-- Here is where you check if the rights exist
(p2r_1.right_id = 1 and p2r_4.right_id = 4)
group by p.person_id;
```
Take a look to [this SQL fiddle example](http://sqlfiddle.com/#!2/92a51b/12).
|
Most efficient way in MySQL to get all associated values from a relation table while filtering after the same values
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to execute a query who generates inserts statements, like that :
```
select
'insert into MYTABLE (
ID,
COLUMN_A,
COLUMN_B)
values (' +
ID + ',' +
COLUMN_A + ',' +
COLUMN_B
as INSERTGENERATOR from MYTABLE;
```
But I have this error :
```
ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause:
*Action:
```
Why ?
I tried with `||` instead of `+`, and with `to_char`, but it but it does not change.
|
Don't put end of lines within quotes. And `||` is used to concatenate.
```
select
'insert into MYTABLE (ID, COLUMN_A, COLUMN_B) values (' ||
ID || ',' ||
COLUMN_A || ',' ||
COLUMN_B || ');'
as INSERTGENERATOR from MYTABLE;
```
|
This works:
```
SELECT
'INSERT INTO MYTABLE
(ID, COLUMN_A, COLUMN_B)
VALUES (' ||
ID || ',' ||
COLUMN_A || ','||
COLUMN_B || ');'
AS INSERTGENERATOR
FROM MYTABLE;
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!4/be2cc/4)
|
Query who generates insert statements : invalid number
|
[
"",
"sql",
"oracle",
""
] |
I'm new to SQL and trying to find the correct join to use in this situation. Table A contains a series of unique (i.e. no repeats) IDs. Table B contains multiple measurements for each ID, including IDs that don't exist in table A. I need a combined table that contains all the data in table B for the IDs that are in table A.
For example:
```
A B JOINED:
ID ID | count | date ID | count | date
- -- | ----- | ---- -- | ----- | ----
001 001 | 512 | 01-Aug-2014 001 | 512 | 01-Aug-2014
003 001 | 123 | 08-Aug-2014 001 | 123 | 08-Aug-2014
004 002 | 2325 | 01-Aug-2014 003 | 2325 | 01-Aug-2014
002 | 7653 | 08-Aug-2014 003 | 423 | 08-Aug-2014
003 | 2325 | 01-Aug-2014 004 | 2 | 01-Aug-2014
003 | 423 | 08-Aug-2014 004 | 76 | 08-Aug-2014
004 | 2 | 01-Aug-2014
004 | 76 | 08-Aug-2014
005 | 232 | 01-Aug-2014
005 | 67 | 08-Aug-2014
```
Now, if I understand joins correctly, a LEFT join on A-B would produce a result that only had one line for each ID, not the multiple lines for each date. A RIGHT join would include all the IDs in B that aren't in A. Is that correct?
So, in summary, how do I get a results table like that shown above?
|
```
SELECT A.ID
,B.[Count]
,B.[Date]
FROM TableA A INNER JOIN TableB B
ON A.ID = B.ID
```
**INNER JOIN** will give you the matching rows between Two tables.
**RIGHT JOIN** will give you matching rows from the table on left side of join key word and all rows from the table on right side of the join key word.
**LEFT JOIN** will give you matching rows from the table on right side of join key word and all rows from the table on left side of the join key word.
**FULL JOIN** will return all the matching and non matching rows from both tables.
**Matching Row** Joining condition defined in `ON` condition will map each value in ID column in tableA to each `matching` value in ID column of tableB.
|
Inner join is the one you are looking for.
As for your questions: if I understood them correctly, then the answer is no. A left join would be an inner join plus anything the LHS table has that is not in the RHS, with the blanks filled in with NULL; similarly for a right join.
|
SQL join to retrieve multiple lines in right table
|
[
"",
"sql",
"join",
"left-join",
"right-join",
""
] |
I'm new to SQL and SSRS and can do many things already, but I think I must be missing some basics and therefore keep banging my head against the wall.
A **report** that is almost working, **needs to have more results** in it, based on conditions.
My working query so far is like this:
```
SELECT projects.project_number, project_phases.project_phase_id, project_phases.project_phase_number, project_phases.project_phase_header, project_phase_expensegroups.projectphase_expense_total, invoicerows.invoicerow_total
FROM projects INNER JOIN
project_phases ON projects.project_id = project_phases.project_id
LEFT OUTER JOIN
project_phase_expensegroups ON project_phases.project_phase_id = project_phase_expensegroups.project_phase_id
LEFT OUTER JOIN
invoicerows ON project_phases.project_phase_id = invoicerows.project_phase_id
WHERE ( projects.project_number = @iProjectNumber )
AND
( project_phase_expensegroups.projectphase_expense_total >0 )
```
The parameter is for `selectionlist` that is used to choose a project to the report.
**How can I also include records that have**
`( project_phase_expensegroups.projectphase_expense_total )` with value `0`, but where there might be invoices for that project phase?
Tried already to add another condition like this:
```
WHERE ( projects.project_number = @iProjectNumber )
AND
( project_phase_expensegroups.projectphase_expense_total > 0 )
OR
( invoicerows.invoicerow_total > 0 )
```
but while it gives **some** results - including the ones with `projectphase_expense_total` value `0` - the report is a total mess.
So my question is: what am I doing wrong here?
|
There is a core problem with your query in that you are left joining to two tables, implying that rows may not exist, but then putting conditions on those tables, which will eliminate NULLs. That means your query is internally inconsistent as is.
The next problem is that you're joining two tables to `project_phases` that both may have multiple rows. Since these data are not related to each other (as proven by the fact that you have no join condition between `project_phase_expensegroups` and `invoicerows`), your query is not going to work correctly. For example, given a list of people, a list of those people's favorite foods, and a list of their favorite colors like so:
```
People
Person
------
Joe
Mary
FavoriteFoods
Person Food
------ ---------
Joe Broccoli
Joe Bananas
Mary Chocolate
Mary Cake
FavoriteColors
Person Color
------ ----------
Joe Red
Joe Blue
Mary Periwinkle
Mary Fuchsia
```
When you join these with links between Person <-> Food and Person <-> Color, you'll get a result like this:
```
Person Food Color
------ --------- ----------
Joe Broccoli Red
Joe Bananas Red
Joe Broccoli Blue
Joe Bananas Blue
Mary Chocolate Periwinkle
Mary Chocolate Fuchsia
Mary Cake Periwinkle
Mary Cake Fuchsia
```
This is essentially a cross-join, also known as a Cartesian product, between the Foods and the Colors, because they have a many-to-one relationship with each person, but no relationship with each other.
There are a few ways to deal with this in the report.
1. Create ExpenseGroup and InvoiceRow subreports, that are called from the main report by a combination of `project_id` and `project_phase_id` parameters.
2. Summarize one or the other set of data into a single value. For example, you could sum the invoice rows. Or, you could concatenate the expense groups into a single string separated by commas.
Some notes:
* Please, please format your query before posting it in a question. It is almost impossible to read when not formatted. It seems pretty clear that you're using a GUI to create the query, but do us the favor of not having to format it ourselves just to help you.
* While formatting, please use aliases; don't use full table names. It just makes the query that much harder to understand.
|
You need an extra parentheses in your where clause in order to get the logic right.
```
WHERE ( projects.project_number = @iProjectNumber )
AND (
(project_phase_expensegroups.projectphase_expense_total > 0)
OR
(invoicerows.invoicerow_total > 0)
)
```
Also, you're using a column in your `WHERE` clause from a table that is left joined without checking for `NULL`s. That basically makes it a (slow) inner join. If you want to include rows that don't match from that table you also need to check for `NULL`. Any other comparison besides `IS NULL` will always be false for `NULL` values. See this page for more information about SQL's three value predicate logic: <http://www.firstsql.com/idefend3.htm>
To keep your `LEFT JOIN`s working as you intended you would need to do this:
```
WHERE ( projects.project_number = @iProjectNumber )
AND (
project_phase_expensegroups.projectphase_expense_total > 0
OR project_phase_expensegroups.project_phase_id IS NULL
OR invoicerows.invoicerow_total > 0
OR invoicerows.project_phase_id IS NULL
)
```
|
SSRS query and WHERE with multiple
|
[
"",
"sql",
"t-sql",
"reporting-services",
""
] |
I need to pull values from one table that are only 24 hours from the date in another table in sybase ase 15.5.
Here is my wrong code:
```
SELECT p_generaldata.admissiontime,*
FROM Patient.dbo.P_MonVals P_MonVals
INNER JOIN Patient.dbo.P_GeneralData P_GeneralData
ON P_MonVals.PatientID=P_GeneralData.PatientID
where p_generaldata.admissiontime < P_MonVals.entertime +1
order by p_generaldata.patientid ASC
```
I'm trying to return all rows in p\_monvals where the entertime in that table is less than 24 hours after the admissiontime.
> the error im getting is INT is not compatible with DATETIME
Any help greatly appreciated
thank you
|
Take a look a the [`DateAdd`](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc38151.1530/doc/html/san1278453012069.html) function, and add a day to the `entertime`
Example from docs:
Adds one day to a date:
```
declare @a date
select @a = "apr 12, 9999"
select dateadd(dd, 1, @a)
```
In your case...
```
...
where p_generaldata.admissiontime < dateadd(dd, 1, P_MonVals.entertime)
```
|
Use function `dateadd` to sum 1 day in your date time:
```
dateadd(dd, 1, P_MonVals.entertime)
```
Reference: [Sybase dateadd function](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc36271.1570/html/blocks/X10609.htm).
|
select results from one date field compared to another + 24 hours
|
[
"",
"sql",
"datetime",
"sap-ase",
""
] |
I'm using SQLite3 (version 3.8.6) and I have a list of transactions in a database. I want to `DELETE` the records for ids which have fewer than 10 associated transactions.
I can use this to list the ids and their count values:
```
SELECT id,count(id) AS cnt FROM purchases GROUP BY id HAVING cnt < 10;
```
I tried to use this to `DELETE` records:
```
DELETE
FROM purchases
WHERE
id IN
(SELECT id,count(id) AS cnt FROM purchases GROUP BY id HAVING cnt < 10);
```
However this gives me an error:
```
Error: only a single result allowed for a SELECT that is part of an expression
```
Is there an elegant way to accomplish this goal?
|
Try this:
```
DELETE
FROM purchases
WHERE
id IN
(SELECT id FROM purchases GROUP BY id HAVING count(*) < 10);
```
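A quick check of this `DELETE`, sketched in Python/SQLite with a threshold of 3 instead of 10 to keep the sample data small:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (id INTEGER)")
conn.executemany("INSERT INTO purchases VALUES (?)",
                 [(1,)] * 2 + [(2,)] * 5)  # id 1 has 2 rows, id 2 has 5

# Delete every id that has fewer than 3 purchase rows.
conn.execute("""
    DELETE FROM purchases
    WHERE id IN (SELECT id FROM purchases GROUP BY id HAVING COUNT(*) < 3)
""")
remaining = [r[0] for r in conn.execute("SELECT DISTINCT id FROM purchases")]
print(remaining)  # [2]
```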
|
What's SQLite supposed to do with this?
```
id in (select id, count(id) from ...)
```
The `IN` operator expects a list, not a rowset. Omit the named `count(id)` column and use `count(*)` directly in the `having` clause:
```
id in (select id from purchases group by id having count(*) < 10)
```
|
deleting records based on aggregate function criteria
|
[
"",
"sql",
"sqlite",
"aggregate-functions",
"sql-delete",
""
] |
The link is <http://sqlzoo.net/wiki/The_JOIN_operation>. Question No.13.
Now I can list all the matches as long as there is at least one goal with the code:
```
SELECT mdate,
team1,
SUM(CASE WHEN teamid = team1 THEN 1 ELSE 0 END) AS score1,
team2,
SUM(CASE WHEN teamid = team2 THEN 1 ELSE 0 END) AS score2 FROM
game JOIN goal ON (id = matchid)
GROUP BY id
ORDER BY mdate, matchid, team1, team2
```
However, there are some games whose score is 0:0. My code can't display these games, and I can't find any other available solutions. I really hope someone can help me with this problem.
|
```
SELECT mdate,
team1,
SUM(CASE WHEN teamid = team1 THEN 1 ELSE 0 END) AS score1,
team2,
SUM(CASE WHEN teamid = team2 THEN 1 ELSE 0 END) AS score2 FROM
game LEFT JOIN goal ON (id = matchid)
GROUP BY mdate,team1,team2
ORDER BY mdate, matchid, team1, team2
```
You want to GROUP BY columns without aggregate functions in the SELECT and use a LEFT JOIN.
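To see the difference the `LEFT JOIN` makes, here is a small sketch in Python/SQLite with one goalless match (the match ids and teams are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE game (id INTEGER, mdate TEXT, team1 TEXT, team2 TEXT)")
conn.execute("CREATE TABLE goal (matchid INTEGER, teamid TEXT)")
conn.executemany("INSERT INTO game VALUES (?, ?, ?, ?)",
                 [(1, "1 July 2012", "ESP", "ITA"), (2, "2 July 2012", "GRE", "POL")])
conn.executemany("INSERT INTO goal VALUES (?, ?)",
                 [(1, "ESP"), (1, "ESP")])  # match 2 finished 0:0

# LEFT JOIN keeps match 2 even though it has no goal rows; the NULL teamid
# fails both CASE conditions, so both scores sum to 0.
rows = conn.execute("""
    SELECT mdate, team1,
           SUM(CASE WHEN teamid = team1 THEN 1 ELSE 0 END) AS score1,
           team2,
           SUM(CASE WHEN teamid = team2 THEN 1 ELSE 0 END) AS score2
    FROM game LEFT JOIN goal ON (id = matchid)
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)
```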
|
Try `LEFT JOIN` instead of `JOIN`, because you need to show all games even if there are no goals.
|
Solution to #13 of SQL Join in SQLZoo
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have 3 tables:
```
Meeting_t : Meeting_id (PK), create_date (Date)
Prog_call : prog_call_id (PK) and Meeting_id (FK)
Lock_t : lock_id (PK) and prog_call_id (FK)
```
There could be multiple records corresponding to each Meeting\_id in Prog\_call and similarly multiple records in Lock\_t corresponding to each prog\_call\_id.
Now I want to create a procedure which deletes all the records with create\_date < first day of previous year. This should also delete the corresponding data from other 2 tables (Prog\_call and Lock\_t).
I created the following prodecure to do the task:
```
PROCEDURE CleanBefore(delete_before DATE)
IS
meeting_id_v dev.prog_call.meeting_id%TYPE;
prog_call_id_v dev.prog_call.prog_call_id%TYPE;
CURSOR meeting_c IS
SELECT st.meeting_id
FROM dev.meeting_t st
WHERE TRUNC(st.logoff_datum) <= TRUNC(CleanBefore.delete_before);
CURSOR prog_call_c IS
SELECT pa.prog_call_id
FROM dev.prog_call pa
WHERE pa.meeting_id = meeting_id_v;
BEGIN
OPEN meeting_c;
LOOP
FETCH meeting_c INTO meeting_id_v;
OPEN prog_call_c;
LOOP
FETCH prog_call_c INTO prog_call_id_v;
DELETE FROM dev.lock_t lt
WHERE lt.prog_call_id = prog_call_id_v;
DELETE FROM dev.prog_call pa
WHERE pa.prog_call_id = prog_call_id_v;
END LOOP;
CLOSE prog_call_c;
DELETE FROM dev.meeting_t st
WHERE st.meeting_id = meeting_id_v;
END LOOP;
CLOSE meeting_c;
END CleanBefore;
```
So far it is looking good to me. I have compiled it and testing is in progress. However, I always look for a better solution, and I believe there is room to optimize this further. Any suggestion would be of great help.
|
You could use three straight deletes, e.g.
```
PROCEDURE cleanBefore( delete_before DATE )
IS
BEGIN
  DELETE FROM lock_t l
  WHERE l.prog_call_id IN
        (SELECT p.prog_call_id
           FROM meeting_t m
          INNER JOIN prog_call p
             ON (p.meeting_id = m.meeting_id)
          WHERE trunc(m.logoff_datum) <= trunc(cleanBefore.delete_before));

  DELETE FROM prog_call p
  WHERE p.meeting_id IN
        (SELECT m.meeting_id
           FROM meeting_t m
          WHERE trunc(m.logoff_datum) <= trunc(cleanBefore.delete_before));

  DELETE FROM meeting_t m
  WHERE trunc(m.logoff_datum) <= trunc(cleanBefore.delete_before);
END cleanBefore;
```
This would avoid the context switches when looping over cursor records, and mean much fewer DML statements are issued. You may wish to consider locking the tables before issuing the deletes, so the whole procedure is processed atomically.
|
Simply use **`ON DELETE CASCADE`** option :
```
SQL> drop table comm;
SQL> DROP TABLE EMPLOYEE;
SQL>
SQL> CREATE TABLE employee
2 (employee_id NUMBER(10) NOT NULL
3 ,employee_name VARCHAR2(500) NOT NULL
4 ,salary NUMBER(20) NOT NULL
5 ,department VARCHAR2(300) NOT NULL
6 ,CONSTRAINT employee_pk PRIMARY KEY (employee_id)
7 );
SQL>
SQL> CREATE TABLE comm
2 (emp_id NUMBER(10)
3 ,commission_percent NUMBER(20)
4 ,CONSTRAINT fk_employee
5 FOREIGN KEY (emp_id)
6 REFERENCES employee(employee_id)
7 ON DELETE CASCADE
8 );
SQL>
SQL> INSERT INTO employee
2 VALUES (101,'Emp A',10000,'Sales');
SQL>
SQL> INSERT INTO employee
2 VALUES (102,'Emp B',20000,'IT');
SQL>
SQL> INSERT INTO employee
2 VALUES (103,'Emp C',28000,'IT');
SQL>
SQL> INSERT INTO employee
2 VALUES (104,'Emp D',30000,'Support');
SQL>
SQL> INSERT INTO EMPLOYEE
2 VALUES (105,'Emp E',32000,'Sales');
SQL>
SQL> INSERT INTO comm
2 VALUES (102,20);
SQL>
SQL> INSERT INTO comm
2 VALUES (103,20);
SQL>
SQL> INSERT INTO comm
2 VALUES (104,NULL);
SQL>
SQL> INSERT INTO COMM
2 VALUES (105,10);
SQL>
SQL> SELECT *
2 FROM employee;
EMPLOYEE_ID EMPLOYEE_NAME SALARY DEPARTMENT
----------- ------------------------------ ---------- ------------------------------
101 Emp A 10000 Sales
102 Emp B 20000 IT
103 Emp C 28000 IT
104 Emp D 30000 Support
105 Emp E 32000 Sales
SQL>
SQL> SELECT *
2 FROM comm;
EMP_ID COMMISSION_PERCENT
---------- ------------------
102 20
103 20
104
105 10
SQL>
SQL> DELETE FROM EMPLOYEE
2 WHERE EMPLOYEE_ID = 105;
SQL>
SQL> SELECT *
2 FROM employee;
EMPLOYEE_ID EMPLOYEE_NAME SALARY DEPARTMENT
----------- ------------------------------ ---------- ------------------------------
101 Emp A 10000 Sales
102 Emp B 20000 IT
103 Emp C 28000 IT
104 Emp D 30000 Support
SQL>
SQL> SELECT *
2 FROM COMM;
EMP_ID COMMISSION_PERCENT
---------- ------------------
102 20
103 20
104
SQL>
```
**Update in response to the comment below:**
```
SQL> SELECT *
2 FROM employee;
EMPLOYEE_ID EMPLOYEE_NAME SALARY DEPARTMENT
----------- ------------------------------ ---------- ------------------------------
101 Emp A 10000 Sales
102 Emp B 20000 IT
103 Emp C 28000 IT
104 Emp D 30000 Support
105 Emp E 32000 Sales
SQL>
SQL> SELECT *
2 FROM comm;
EMP_ID COMMISSION_PERCENT
---------- ------------------
102 20
103 20
104
105 10
SQL>
SQL> BEGIN
2 FOR i IN
3 (SELECT TABLE_NAME
4 FROM all_constraints
5 WHERE r_constraint_name IN
6 (SELECT constraint_name FROM all_constraints WHERE TABLE_NAME='EMPLOYEE'
7 )
8 )
9 LOOP
10 EXECUTE IMMEDIATE 'delete from '||i.TABLE_NAME||' where emp_id = 101';
11 END LOOP;
12 DELETE FROM EMPLOYEE WHERE EMPLOYEE_ID = 101;
13 END;
14 /
SQL>
SQL> SELECT * FROM EMPLOYEE;
EMPLOYEE_ID EMPLOYEE_NAME SALARY DEPARTMENT
----------- ------------------------------ ---------- ---------------------------
102 Emp B 20000 IT
103 Emp C 28000 IT
104 Emp D 30000 Support
SQL> SELECT * FROM COMM;
EMP_ID COMMISSION_PERCENT
---------- ------------------
102 20
103 20
104
SQL>
```
|
Deleting data with multiple referential integrities
|
[
"",
"sql",
"oracle",
"plsql",
"cursor",
""
] |
I have a location table that has thousands of rows.
My SQL query is:
```
select *
from location_table
where country = "PH"
and (province = "NCR" OR province = "")
and (city = "makati" or city="") limit 10
```
location\_table
```
ID | country | province | city
1 | PH | NCR | makati
2 | PH | NCR | makati
3 | PH | NCR | makati
4 | PH | NCR | makati
5 | PH | NCR | marikina
6 | PH | NCR |
7 | PH | |
8 | US | |
```
Is this correct? Will it guarantee that my output will be like this?
```
ID | country | province | city
1 | PH | NCR | makati
2 | PH | NCR | makati
3 | PH | NCR | makati
4 | PH | NCR | makati
6 | PH | NCR |
7 | PH | |
```
I need to be sure that my query is accurate. Do I need to improve my query? I need to prioritize the values of city and province: rows that have a province should come first, then the ones with empty values, and the same with city.
For example, with
city = makati
I need to get all rows whose city is "makati" first. Even if the first "makati" row is at ID 3000, I need to get all rows with city "makati" before the rows with empty city values. Same with province.
|
Use `ORDER BY` to put the non-empty values first:
```
select *
from location_table
where country = "PH"
and (province = "NCR" OR province = "")
and (city = "makati" or city="")
order by province = "", city = ""
```
`columnname = ""` will be `1` when the column is empty, otherwise it will be `0`.
[DEMO](http://www.sqlfiddle.com/#!2/96e8f6/1)
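The accepted approach can be checked end-to-end with Python's built-in sqlite3 module. The data below is a hypothetical, abbreviated copy of the question's table; note that SQLite wants single quotes for string literals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE location_table (id INTEGER, country TEXT, province TEXT, city TEXT)")
conn.executemany("INSERT INTO location_table VALUES (?, ?, ?, ?)", [
    (1, "PH", "NCR", "makati"),
    (5, "PH", "NCR", "marikina"),
    (6, "PH", "NCR", ""),
    (7, "PH", "", ""),
    (8, "US", "", ""),
])

# The boolean expression (province = '') is 0 for non-empty values and 1 for
# empty ones, so non-empty provinces and cities sort first.
result = conn.execute("""
    SELECT id FROM location_table
    WHERE country = 'PH'
      AND (province = 'NCR' OR province = '')
      AND (city = 'makati' OR city = '')
    ORDER BY province = '', city = ''
""").fetchall()
print([r[0] for r in result])  # -> [1, 6, 7]
```

Row 5 (marikina) is filtered out by the city predicate; among the rest, the makati row comes first, then the empty-city row, then the empty-province row.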
|
```
select *
from location_table
where country = "PH"
and (province = "NCR" OR province = "")
and (city = "makati" or city="")
order by province, city
```
Order By will work on your problem.
|
mysql selecting a values with priority
|
[
"",
"mysql",
"sql",
""
] |
I want to run the first query on the data list and then the second on the data that is selected by the first query. Needless to say this is not working out. How do I go about doing this?
```
SELECT *
FROM data_list
WHERE USAGE_TIME > 1530 AND USAGE_TIME < 2030
SELECT *
FROM data_list
WHERE USAGE_DATE = #12/07/2013#;
```
I also tried making another Query and selecting from the first completed Query, this did not work either.
Each Query runs fine on its own, but together... I am obviously doing it wrong... I am very new to SQL and have started picking it up to deal with a large data file I need to look at. Thanks!
|
No need to write 2 different select statements. Instead you can have multiple filters (conditions) in you `WHERE` clause.
I hope you want this:
```
SELECT *
FROM data_list
WHERE USAGE_DATE = #12/07/2013#
AND USAGE_TIME > 1530 AND USAGE_TIME < 2030;
```
|
It seems you're complicating the situation. You can simply combine the conditions in the `WHERE` clause.
Here is the query you're looking for:
```
SELECT *
FROM data_list D
WHERE D.USAGE_TIME > 1530
AND D.USAGE_TIME < 2030
AND D.USAGE_DATE = #12/07/2013#
```
Hope this will help you.
|
SQL: Run one SELECT query and then another SELECT query on the last query?
|
[
"",
"sql",
"database",
"ms-access",
""
] |
I have the following table with a different price in every week, and I need a numbering like in the last column: consecutive rows with the same price should share a number, as in weeks 11/12 or 18/19. On the other hand, weeks 2 and 16 have the same price but are not consecutive, so they should get different numbers.
```
w | price | r1 | need
===========================
1 167,93 1 1
2 180 1 2
3 164,72 1 3
4 147,42 1 4
5 133,46 1 5
6 145,43 1 6
7 147 1 7
8 147,57 1 8
9 150,95 1 9
10 158,14 1 10
11 170 1 11
12 170 2 11
13 166,59 1 12
14 161,06 1 13
15 162,88 1 14
16 180 2 15
17 183,15 1 16
18 195 1 17
19 195 2 17
```
I have already experimented with the analytic functions (row_number, rank, dense_rank) but haven't found a solution to this problem so far.
(Oracle SQL 10, 11)
Does anyone have a hint? Thanks.
|
Simulating your table first:
```
SQL> create table mytable (w,price,r1)
2 as
3 select 1 , 167.93, 1 from dual union all
4 select 2 , 180 , 1 from dual union all
5 select 3 , 164.72, 1 from dual union all
6 select 4 , 147.42, 1 from dual union all
7 select 5 , 133.46, 1 from dual union all
8 select 6 , 145.43, 1 from dual union all
9 select 7 , 147 , 1 from dual union all
10 select 8 , 147.57, 1 from dual union all
11 select 9 , 150.95, 1 from dual union all
12 select 10, 158.14, 1 from dual union all
13 select 11, 170 , 1 from dual union all
14 select 12, 170 , 2 from dual union all
15 select 13, 166.59, 1 from dual union all
16 select 14, 161.06, 1 from dual union all
17 select 15, 162.88, 1 from dual union all
18 select 16, 180 , 2 from dual union all
19 select 17, 183.15, 1 from dual union all
20 select 18, 195 , 1 from dual union all
21 select 19, 195 , 2 from dual
22 /
Table created.
```
Your need column is calculated in two parts: first compute a delta column which denotes whether the previous price-column differs from the current rows price column. If you have that delta column, the second part is easy by computing the sum of those deltas.
```
SQL> with x as
2 ( select w
3 , price
4 , r1
5 , case lag(price,1,-1) over (order by w)
6 when price then 0
7 else 1
8 end delta
9 from mytable
10 )
11 select w
12 , price
13 , r1
14 , sum(delta) over (order by w) need
15 from x
16 /
W PRICE R1 NEED
---------- ---------- ---------- ----------
1 167.93 1 1
2 180 1 2
3 164.72 1 3
4 147.42 1 4
5 133.46 1 5
6 145.43 1 6
7 147 1 7
8 147.57 1 8
9 150.95 1 9
10 158.14 1 10
11 170 1 11
12 170 2 11
13 166.59 1 12
14 161.06 1 13
15 162.88 1 14
16 180 2 15
17 183.15 1 16
18 195 1 17
19 195 2 17
19 rows selected.
```
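The same two-step technique (LAG to flag price changes, then a running SUM over the flags) also works outside Oracle. A sketch against SQLite, which supports these window functions from version 3.25 (bundled with Python 3.8+), using the question's data:

```python
import sqlite3  # window functions need SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (w INTEGER, price REAL)")  # r1 omitted for brevity
data = [(1, 167.93), (2, 180), (3, 164.72), (4, 147.42), (5, 133.46),
        (6, 145.43), (7, 147), (8, 147.57), (9, 150.95), (10, 158.14),
        (11, 170), (12, 170), (13, 166.59), (14, 161.06), (15, 162.88),
        (16, 180), (17, 183.15), (18, 195), (19, 195)]
conn.executemany("INSERT INTO mytable VALUES (?, ?)", data)

# delta is 1 whenever the price differs from the previous week's price,
# and the running SUM of the deltas yields the desired "need" column.
need = conn.execute("""
    WITH x AS (
      SELECT w, price,
             CASE LAG(price, 1, -1) OVER (ORDER BY w)
                  WHEN price THEN 0 ELSE 1 END AS delta
      FROM mytable
    )
    SELECT w, SUM(delta) OVER (ORDER BY w) AS need
    FROM x
    ORDER BY w
""").fetchall()
print(dict(need)[19])  # -> 17
```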
|
You can nest your analytic functions using inline views, so you first group the consecutive weeks with same prices and then dense\_rank using those groups:
```
select w
, price
, r1
, dense_rank() over (
order by first_w_same_price
) drank
from (
select w
, price
, r1
, last_value(w_start_same_price) ignore nulls over (
order by w
rows between unbounded preceding and current row
) first_w_same_price
from (
select w
, price
, r1
, case lag(price) over (order by w)
when price then null
else w
end w_start_same_price
from your_table
)
)
order by w
```
The innermost inline view with LAG function lets the starting week of every consecutive group get it's own week number, but every consecutive week with same price gets null (weeks 12 and 19 in your data.)
The middle inline view with LAST\_VALUE function then use the IGNORE NULLS feature to give the consecutive weeks the same value as the first week within each group. So week 11 and 12 both gets 11 in first\_w\_same\_price and week 18 and 19 both gets 18 in first\_w\_same\_price.
And finally the outer query use DENSE\_RANK to give the desired result.
|
oracle sql - numbering group of rows
|
[
"",
"sql",
"oracle",
"analytic-functions",
""
] |
Can you tell me what I am doing wrong here?
```
CREATE PROC [dbo].[GetUserEULASelection] (@username VARCHAR(150))
AS
BEGIN
DECLARE @result VARCHAR(250);
SELECT @result = a.NAME
FROM licmn_license_users_accounts_c ua
JOIN accounts a ON a.id = ua.licmn_license_users_accountsaccounts_ida
JOIN licmn_license_users lu ON lu.id = licmn_license_users_accountslicmn_license_users_idb
WHERE lu.username = @username;
CASE
WHEN @result IS NOT NULL
THEN SELECT @result AS 'name'
END
CASE
WHEN @result IS NULL
THEN SELECT '0' AS 'name'
END
END
```
I am new to T-SQL (I am porting MySQL Procedures to MSSQL)
The problem is that the case statement causes compilation errors.
|
Your problem (as noted in other answers) is that you cannot use a case statement like this, an if would be more appropriate.
However, you could get rid of that conditional statement altogether, along with the local variable:
```
CREATE PROC [dbo].[GetUserEULASelection] (@username VARCHAR(150))
AS
BEGIN
SELECT COALESCE(a.NAME, '0') AS Name
FROM licmn_license_users_accounts_c ua
JOIN accounts a ON a.id = ua.licmn_license_users_accountsaccounts_ida
JOIN licmn_license_users lu ON lu.id = licmn_license_users_accountslicmn_license_users_idb
WHERE lu.username = @username;
END
```
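COALESCE is standard SQL (available in SQL Server, MySQL and SQLite alike) and returns its first non-NULL argument, which is what replaces the IS NULL / IS NOT NULL branches here. A minimal check with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A NULL name falls back to '0'; a non-NULL name passes through unchanged.
missing_name = conn.execute("SELECT COALESCE(NULL, '0')").fetchone()[0]
present_name = conn.execute("SELECT COALESCE('Acme', '0')").fetchone()[0]
print(missing_name, present_name)  # -> 0 Acme
```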
|
What's wrong is you're using CASE incorrectly. You're mixing it up with an IF in that context
So you should properly do:
```
IF @result IS NOT NULL
SELECT @result AS name
ELSE
SELECT '0' AS name
```
If you want to use CASE you should do it in the query:
```
SELECT CASE
WHEN a.NAME IS NULL THEN '0'
ELSE a.NAME
END as Name
FROM licmn_license_users_accounts_c ua
JOIN accounts a ON a.id = ua.licmn_license_users_accountsaccounts_ida
JOIN licmn_license_users lu ON lu.id = licmn_license_users_accountslicmn_license_users_idb
WHERE lu.username = @username;
```
|
T-SQL Stored procedure CASE issue
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I am designing a database for use with a Ruby on Rails application. For a given object, I need to access the date of an event in both the [Gregorian](http://en.wikipedia.org/wiki/Gregorian_calendar) format and the [Hebrew calendar](http://en.wikipedia.org/wiki/Hebrew_calendar) equivalent. I can easily convert between the two formats, but the issue is that in the Hebrew calendar, *the date changes at sunset*, not midnight. Therefore, I'll need to either store two separate dates, or store a Gregorian date and a separate boolean field, `after_sunset`. Then, whenever I need to access the Hebrew date, I'll need to query for both fields, convert the date, and if `after_sunset==true`, increment the date.
Which of these options is considered "better"?
And, if I store the Hebrew date separately, is it best to store it as a String, an Integer, or can I use a regular Date?
|
With an after\_sunset flag you store a Gregorian date and add all the additional information needed to know the Hebrew date.
With two dates you would store the two dates explicitly. However, to keep the data consistent you would install a check constraint to ensure that the dates match. This is because the two dates share part of their information (redundancy). This means the data is not normalized.
For this reason, to have data normalized in your database (and thus not having to install a check constraint to keep the data consistent) the first approach is better. Store the date plus an after- sunset flag.
|
Unless you are going back to the dawn of time, I think I would simply have a many-year lookup table of UTC datetimes and Hebrew dates where the UTC column is the first second of the Hebrew day in a specific time zone (Greenwich?).
Conversions are a quick binary search,
```
SELECT hebrew_date FROM hebrew_gregorian_lookup
WHERE some_input_time >= gregorian_cutoff
ORDER BY gregorian_cutoff DESC LIMIT 1;
```
If you index and cluster the lookup table on `gregorian_cutoff`, it should be very quick, even for 100 years. (If your RDBMS has a way to force a table into RAM, even better.) Also depending on your RDBMS, you may be able to wrap this in a function/procedure with no loss of efficiency.
I suggest storing the Hebrew date not as a string but as a record of three shorts, day, month, year. You can have a tiny lookup table for month to string, or perhaps use an enumeration. That will give you some flexibility in formatting, e.g., Hebrew characters vs. Latin in the output.
|
Store Hebrew Dates in a SQL Database
|
[
"",
"sql",
"ruby-on-rails",
"date",
"database-design",
""
] |
I have a table with only 1 column that contains a string. I am trying to only get the email address. How can I do that? I looked at Substring/Ltrim, etc, but I haven't been able to piece together how to extract only part of the string. I am fairly new to SQL. Thank you for your help!
```
Column1:
John Smith Email: John.Smith@987456email.com Terminate:
Jacqueline Smith Email: Jacqueline.Smith@987456email.com Terminate:
```
|
Assuming the email is prefixed by `Email:` and does not contain spaces, you can just take all characters after `Email:` and before the next space (or end of string);
```
SELECT CASE WHEN CHARINDEX(' ', a.em) <> 0
THEN SUBSTRING(a.em, 1, CHARINDEX(' ', a.em) - 1)
ELSE a.em END email
FROM (
SELECT SUBSTRING(column1, CHARINDEX('Email: ', column1) + 7, LEN(column1)) em
FROM mytable
) a
```
The subquery keeps anything after `Email:` and the outer query cuts everything trailing the next space (or end of string).
The query assumes that there is an `Email:` tag, if that's not guaranteed, you'll want to use a `WHERE` to make sure that only rows that have will be returned.
[An SQLfiddle to test with](http://sqlfiddle.com/#!6/c8a75/1).
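The accepted query translates almost mechanically to other engines; note that SQL Server's CHARINDEX(needle, haystack) becomes SQLite's instr(haystack, needle) with the arguments swapped. A runnable sketch using the question's two rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (column1 TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [
    ("John Smith Email: John.Smith@987456email.com Terminate:",),
    ("Jacqueline Smith Email: Jacqueline.Smith@987456email.com Terminate:",),
])

# Inner query keeps everything after 'Email: ' (7 characters); outer query
# cuts everything from the next space onwards, if there is one.
emails = [r[0] for r in conn.execute("""
    SELECT CASE WHEN instr(em, ' ') <> 0
                THEN substr(em, 1, instr(em, ' ') - 1)
                ELSE em END AS email
    FROM (SELECT substr(column1, instr(column1, 'Email: ') + 7) AS em
          FROM mytable)
""")]
print(emails)
```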
|
I'm making a few assumptions about your data, namely that the characters 'Name:' don't appear before the name and that each line includes the substring 'Terminate:'
In SQL Server, use a combination of PATINDEX, CHARINDEX and SUBSTRING to parse the address from the string in each row. The cursor lets you loop through your table. This will print out all the e-mail addresses in your table. It needs formatting and if you want to search for a particular person's email, you will have to modify the select statement with a WHERE clause. I hope this helps:
```
declare @strung as nvarchar(max)
       ,@start as int
       ,@end as int
       ,@result as int
       ,@address as nvarchar(max)   -- was missing from the original declaration
       ,@emailCursor Cursor

set @emailCursor = CURSOR FORWARD_ONLY STATIC READ_ONLY FOR
    Select yourColumnName
    from yourTableName
OPEN @emailCursor
FETCH NEXT FROM @emailCursor INTO @strung
WHILE @@FETCH_STATUS = 0
BEGIN
set @start = (select charindex(':',@strung)+1);
set @end = (SELECT PATINDEX('%Terminate:%', @strung)-1)
set @result = @end-@start
set @address = (select SUBSTRING(@strung, @start, @result ) AS eMailAddress)
print @address
FETCH NEXT FROM @emailCursor INTO @strung
END
CLOSE @emailCursor
DEALLOCATE @emailCursor
```
CHARINDEX returns the position of the first ':' character in your string (the one after EMAIL). I add one to that value to move you past the ':'
PATINDEX returns the beginning position of the substring 'Terminate'
SUBSTRING returns all the characters between the starting position [CHARINDEX(':', @strung)] and the space before 'Terminate' [PATINDEX('%Terminate:%', @strung)]
|
How do I only return part of a string with varying lengths in SQL?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table which looks roughly like this:
```
+---------------------+----------+
| date | item |
+---------------------+----------+
| 2008-11-30 11:15:59 | Plums |
| 2012-11-08 19:42:37 | Lemons |
| 2013-01-30 18:58:07 | Apples |
| 2013-02-12 13:44:45 | Pears |
| 2014-06-08 11:46:48 | Apples |
| 2014-09-01 20:28:03 | Oranges |
+---------------------+----------+
```
I would now like to select items for two cases:
1) I'd like to select all distinct items for this year (or a different specified year) which are NEW this year, meaning that they have not appeared in previous years. This query should therefore give me "Oranges" as a result for 2014.
2) I'd also like to select all distinct items for this (or another specified) year which are RETURNING, which obviously means that they have appeared in previous years. This query should give me "Apples" as a result for 2014.
To clarify, I would like to use both queries only for comparison to previous years. So if I were to specify 2013 as the year for which this query is run, "Apples" should appear as new, not returning.
I don't necessarily need both things in one query (I doubt it's possible), so two different queries, one for each case, are perfectly fine.
|
How about ([**SQL Fiddle**](http://sqlfiddle.com/#!2/6d915/9/0)) for **NEW** items:
```
SELECT DISTINCT m1.item
FROM MyTable m1
LEFT JOIN MyTable m2 ON m1.item = m2.item AND YEAR(m2.date) < 2014
WHERE YEAR(m1.date) = 2014
AND m2.date IS NULL
```
And ([**SQL Fiddle**](http://sqlfiddle.com/#!2/6d915/10)) for **RETURNING** items:
```
SELECT DISTINCT m1.item
FROM MyTable m1
INNER JOIN MyTable m2 ON m1.item = m2.item
WHERE YEAR(m1.date) = 2014
AND YEAR(m2.date) < 2014
```
If you want to combine the two queries together ([**SQL Fiddle**](http://sqlfiddle.com/#!2/6d915/7/0) or [**SQL Fiddle**](http://sqlfiddle.com/#!2/e8d36f/1)):
```
SELECT DISTINCT m1.item, 'New' AS Status
FROM MyTable m1
LEFT JOIN MyTable m2 ON m1.item = m2.item AND YEAR(m2.date) < 2014
WHERE YEAR(m1.date) = 2014
AND m2.date IS NULL
UNION ALL
SELECT DISTINCT m1.item, 'Returning' AS Status
FROM MyTable m1
INNER JOIN MyTable m2 ON m1.item = m2.item
WHERE YEAR(m1.date) = 2014
AND YEAR(m2.date) < 2014
ORDER BY Status;
```
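Both queries from the accepted answer can be verified with Python's sqlite3 module; since SQLite has no YEAR() function, strftime('%Y', date) stands in for it here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (date TEXT, item TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [
    ("2008-11-30 11:15:59", "Plums"),
    ("2012-11-08 19:42:37", "Lemons"),
    ("2013-01-30 18:58:07", "Apples"),
    ("2013-02-12 13:44:45", "Pears"),
    ("2014-06-08 11:46:48", "Apples"),
    ("2014-09-01 20:28:03", "Oranges"),
])

# NEW: a 2014 row with no same-item row in an earlier year (anti-join).
new_items = [r[0] for r in conn.execute("""
    SELECT DISTINCT m1.item
    FROM mytable m1
    LEFT JOIN mytable m2
      ON m1.item = m2.item AND strftime('%Y', m2.date) < '2014'
    WHERE strftime('%Y', m1.date) = '2014' AND m2.date IS NULL
""")]

# RETURNING: a 2014 row that does have a same-item row in an earlier year.
returning_items = [r[0] for r in conn.execute("""
    SELECT DISTINCT m1.item
    FROM mytable m1
    JOIN mytable m2 ON m1.item = m2.item
    WHERE strftime('%Y', m1.date) = '2014' AND strftime('%Y', m2.date) < '2014'
""")]
print(new_items, returning_items)  # -> ['Oranges'] ['Apples']
```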
|
1 - This solution can use an index
```
SELECT DISTINCT x.item
FROM my_table x
LEFT
JOIN my_table y
ON y.item = x.item
AND y.date < x.date
WHERE x.date BETWEEN '2014-01-01' AND '2014-12-31' AND y.date IS NULL;
```
2 - And so can this...
```
SELECT DISTINCT x.item
FROM my_table x
JOIN my_table y
ON y.item = x.item
AND y.date < x.date
WHERE x.date BETWEEN '2014-01-01' AND '2014-12-31';
```
2 is obviously a simpler problem than 1, so I'm a little surprised you've asked for them in this order !?!?
If you remove the WHERE ... IS NULL from the first query then you in effect get the answer which you 'doubt is possible', but a UNION of these two queries will achieve the same effect.
|
Select new or returning items for a specified year
|
[
"",
"mysql",
"sql",
"union",
"self-join",
""
] |
Suppose there are three tables `T1`, `T2` and `T3`.
* `T1` has columns `Ordernum` and `bizdate`
* `T2` has columns `Ordernum` and `Orderitem`
* `T3` has columns `Orderitem` and `Catid`
FYI: there are multiple orderitems under each ordernum; each orderitem has multiple catids.
I want to eliminate ordernums where any orderitem has catid = 100.
As I said, each ordernum has multiple orderitems, so I would like to eliminate the whole ordernum even if only one of its orderitems has catid = 100.
In other words, I want to print only the ordernums where no orderitem has catid = 100.
|
You can use a subquery that picks out the orders with that category and use to filter out those orders:
```
select
OrderNum
from
T1 as t
where
not exists(
select *
from T1
inner join T2 on T2.Ordernum = T1.Ordernum
inner join T3 on T3.Orderitem = T2.Orderitem and T3.Catid = 100
where T1.Ordernum = t.Ordernum
)
```
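A runnable sketch of the NOT EXISTS idea with hypothetical sample data; the subquery here correlates directly on T2.Ordernum, a slight simplification of the accepted query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T1 (Ordernum INT, bizdate TEXT);
    CREATE TABLE T2 (Ordernum INT, Orderitem TEXT);
    CREATE TABLE T3 (Orderitem TEXT, Catid INT);
    INSERT INTO T1 VALUES (1, '2014-01-01'), (2, '2014-01-02');
    INSERT INTO T2 VALUES (1, 'A'), (1, 'B'), (2, 'C');
    INSERT INTO T3 VALUES ('A', 100), ('B', 50), ('C', 50);
""")

# Order 1 has an item in category 100, so the whole order is excluded.
kept = [r[0] for r in conn.execute("""
    SELECT t.Ordernum
    FROM T1 t
    WHERE NOT EXISTS (
        SELECT *
        FROM T2
        JOIN T3 ON T3.Orderitem = T2.Orderitem AND T3.Catid = 100
        WHERE T2.Ordernum = t.Ordernum
    )
""")]
print(kept)  # -> [2]
```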
|
You can use the `NOT EXISTS` clause:
```
SELECT T1.OrderNum
FROM T1
WHERE NOT EXISTS
(SELECT * FROM T2 INNER JOIN T3 ON T2.Orderitem = T3.Orderitem
WHERE T1.OrderNum = T2.OrderNum AND T3.Catid != 100)
```
|
Multiple tables, eliminate data
|
[
"",
"sql",
""
] |
How would you advise to find out in Sql Server 2010/2012 if a query contains a substring equal to a 6 digits number?
e.g. "`agh123456 dfsdfdf`" matches the requirements
"`x123 ddd456`" doesn't match the requirements because the 6 digits are not consecutive
"`lm123`" doesn't match the requirements because only 3 digits are found (out of the required 6)
The problem I have encountered so far is that SUBSTRING requires a starting-position parameter (and the position where the number starts is arbitrary),
while PATINDEX returns the location of a pattern in a string, but we don't know the exact pattern (it can be any 6-digit number).
Any pointers or advice, much appreciated.
Thank you
|
You can use the [LIKE](http://msdn.microsoft.com/en-GB/library/ms179859.aspx) operator:
```
SELECT *
FROM MyTable
WHERE Mycolumn LIKE '%[0-9][0-9][0-9][0-9][0-9][0-9]%'
```
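For engines without bracket classes in LIKE there is usually an equivalent: SQLite's GLOB operator, for instance, supports [0-9] ranges (with * in place of %). A runnable check against the question's three examples; note that, like the accepted pattern, this also matches runs of more than six digits:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (s TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [
    ("agh123456 dfsdfdf",),  # six consecutive digits -> match
    ("x123 ddd456",),        # digits not consecutive -> no match
    ("lm123",),              # only three digits      -> no match
])

# GLOB is SQLite's glob-style match: * is any sequence, [0-9] a digit class.
matches = [r[0] for r in conn.execute(
    "SELECT s FROM t WHERE s GLOB '*[0-9][0-9][0-9][0-9][0-9][0-9]*'")]
print(matches)  # -> ['agh123456 dfsdfdf']
```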
|
Even this should work, provided you don't have a string like this:
> abc123 abc123456
Try this
```
DECLARE @str varchar(max) = 'abcxyz123456'
SELECT ISNUMERIC(SUBSTRING(@str,(SELECT PATINDEX('%[0-9]%',@str)),6))
```
|
find if there is a 6 digit number within a string
|
[
"",
"sql",
"sql-server",
""
] |
I have these MySQL tables:
table1:
```
id | writer
1 | Bob
2 | Marley
3 | Michael
```
table2:
```
user_one | user_two
1 | 2
```
And this query:
```
SELECT table1.id FROM table1 LEFT JOIN table2 ON table1.id = table2.user_one
```
This query will return all rows of table1 which are 1,2,3
I want to select only the rows which are not matched in the left join. So it should return only the row with id `3`.
I want sort of the opposite of an INNER JOIN, which selects only the rows that are matched in the join. How do I get the opposite: if the left join finds a match, ignore that row and move to the next one? Hope I'm clear.
|
You could use the following query:
```
SELECT table1.id
FROM table1
LEFT JOIN table2
ON table1.id IN (table2.user_one, table2.user_two)
WHERE table2.user_one IS NULL;
```
Although, depending on your indexes on `table2` you may find that two joins performs better:
```
SELECT table1.id
FROM table1
LEFT JOIN table2 AS t1
ON table1.id = t1.user_one
LEFT JOIN table2 AS t2
ON table1.id = t2.user_two
WHERE t1.user_one IS NULL
AND t2.user_two IS NULL;
```
|
One of the best approaches, if you do not want to return any columns from `table2`, is to use [`NOT EXISTS`](http://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html):
```
SELECT table1.id
FROM table1 T1
WHERE
NOT EXISTS (SELECT *
FROM table2 T2
WHERE T1.id = T2.user_one
OR T1.id = T2.user_two)
```
Semantically this says what you want to query: Select every row where there is no matching record in the second table.
MySQL is optimized for `EXISTS`: It returns as soon as it finds the first matching record.
One more note to this:
When you check against a nullable column in the joined table, depending on your needs, you may have to use `NOT EXISTS` (or check against other columns when using `LEFT JOIN`), because MySQL won't be able to distinguish between a column which is `NULL`, but there is an existing record in the joined table and a column which is generated as `NULL` because the joined table have no matching record.
|
MySQL select rows where left join is null
|
[
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
I am trying to get the sum of 2 values in my inner select but get an error about the aggregate function.
I am just trying to understand how I need to format this differently to get it working.
Is there a different order I need to do this in?
```
(SELECT CONVERT(VARCHAR,D.[booksCost],1) AS booksCost,
CONVERT(VARCHAR,D.[tuitionCost],1) AS tuitionCost,
SUM (D.[booksCost] + D.[tuitionCost]) AS totalCost,
D.[className]
FROM tuitionApplicationClasses AS D
WHERE applicationID = A.[applicationID]
FOR XML PATH ('classData'), TYPE, ELEMENTS),
Column 'tuitionApplicationClasses.booksCost' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
|
[`SUM`](http://msdn.microsoft.com/en-us/library/ms187810.aspx) is an aggregate function that is applied over the specified column in a query. What you are trying to do is simply add the first two columns in every row, so you need to modify the query to drop the `SUM` keyword.
```
(SELECT CONVERT(VARCHAR,D.[booksCost],1) AS booksCost,
CONVERT(VARCHAR,D.[tuitionCost],1) AS tuitionCost,
(D.[booksCost] + D.[tuitionCost]) AS totalCost,
D.[className]
FROM tuitionApplicationClasses AS D
WHERE applicationID = A.[applicationID]
FOR XML PATH ('classData'), TYPE, ELEMENTS),
```
|
Try using `SUM() OVER (PARTITION BY ...)`:
```
(SELECT CONVERT(VARCHAR,D.[booksCost],1) AS booksCost,
CONVERT(VARCHAR,D.[tuitionCost],1) AS tuitionCost,
       SUM (D.[booksCost] + D.[tuitionCost]) OVER (PARTITION BY applicationID) AS totalCost,
D.[className]
FROM tuitionApplicationClasses AS D
WHERE applicationID = A.[applicationID]
FOR XML PATH ('classData'), TYPE, ELEMENTS),
```
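Both fixes, the plain per-row addition from the accepted answer and a windowed SUM() OVER (PARTITION BY ...), can be tried out with hypothetical data in SQLite 3.25+ via Python's sqlite3 module:

```python
import sqlite3  # window functions need SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tuitionApplicationClasses"
             " (applicationID INT, className TEXT, booksCost INT, tuitionCost INT)")
conn.executemany("INSERT INTO tuitionApplicationClasses VALUES (?, ?, ?, ?)", [
    (1, 'Math', 100, 900),   # hypothetical sample rows
    (1, 'Art', 50, 400),
    (2, 'Bio', 80, 700),
])

# totalCost is a plain per-row sum (no aggregate needed); appTotal shows
# what SUM() OVER (PARTITION BY applicationID) computes per application.
rows = conn.execute("""
    SELECT className,
           booksCost + tuitionCost AS totalCost,
           SUM(booksCost + tuitionCost) OVER (PARTITION BY applicationID) AS appTotal
    FROM tuitionApplicationClasses
""").fetchall()
totals = {name: (total, app_total) for name, total, app_total in rows}
print(totals['Art'])  # -> (450, 1450)
```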
|
TSQL Inner Select with SUM
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
Table `airports`:
```
id | from | to | price | photo | notes
_______________________________________
1 | LON | JFK| 1000 | | test
2 | LON | JFK| 2000 | | test2
```
I want to retrieve the best-price entry for every `from-to` combination in the database.
I want to fetch the whole record in which that minimum price is found, or at least specific columns.
The following works, BUT only gives me the 3 columns from, to, price. Not the whole entity.
`SELECT from, to, min(price) FROM airports GROUP BY from, to`
How would I have to adapt this?
|
You can order the results and use distinct on to take the first result from each grouping
```
select distinct on (from,to) * from airports order by from,to,price asc;
```
the above query should work
|
This is typically done using window functions:
```
select id, "from", "to", price, photo, notes
from (
    select id, "from", "to", price, photo, notes,
           min(price) over (partition by "from", "to") as min_price
from the_table
) t
where price = min_price
order by id;
```
`from` is a reserved word and it's a bad idea to use that as a column name (not entirely sure about `to`)
To deal with "ties" (same values in from, to and price), you can use the `dense_rank()` function instead:
```
select id, "from", "to", price, photo, notes
from (
    select id, "from", "to", price, photo, notes,
           dense_rank() over (partition by "from", "to" order by price) as price_rank
from the_table
) t
where price_rank = 1
order by id;
```
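The window-function approach runs on SQLite 3.25+ as well as PostgreSQL. A sketch with hypothetical data, quoting "from" and "to" since both are keywords:

```python
import sqlite3  # window functions need SQLite 3.25+

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE airports (id INT, "from" TEXT, "to" TEXT, price INT, notes TEXT)')
conn.executemany("INSERT INTO airports VALUES (?, ?, ?, ?, ?)", [
    (1, 'LON', 'JFK', 1000, 'test'),
    (2, 'LON', 'JFK', 2000, 'test2'),
    (3, 'LON', 'CDG', 300, 'x'),   # hypothetical extra route
])

# Compute the per-route minimum alongside each row, then keep only the
# rows whose price equals that minimum.
rows = conn.execute("""
    SELECT id, "from", "to", price, notes
    FROM (
      SELECT *, MIN(price) OVER (PARTITION BY "from", "to") AS min_price
      FROM airports
    )
    WHERE price = min_price
    ORDER BY id
""").fetchall()
print(rows[0])  # -> (1, 'LON', 'JFK', 1000, 'test')
```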
|
How to retrieve specific columns in a GROUP BY sql statement?
|
[
"",
"sql",
"database",
"postgresql",
"greatest-n-per-group",
""
] |
I have not seen any clear, concise examples of this anywhere online.
With an existing table, how do I add a foreign key which references this table? For example:
```
CREATE TABLE dbo.Projects(
ProjectsID INT IDENTITY(1,1) PRIMARY KEY,
Name varchar(50)
);
```
How would I write a command to add a foreign key which references the same table? Can I do this in a single SQL command?
|
I'll show you several equivalent ways of declaring such a foreign key constraint. (This answer is intentionally repetitive to help you recognise the simple patterns for declaring constraints.)
**Example:** This is what we would like to end up with:

---
**Case 1:** The column holding the foreign keys already exists, but the foreign key relationship has not been declared / is not enforced yet:

In that case, run this statement:
```
ALTER TABLE Employee
ADD FOREIGN KEY (ManagerId) REFERENCES Employee (Id);
```
---
**Case 2:** The table exists, but it does not yet have the foreign key column:

```
ALTER TABLE Employee
ADD ManagerId INT, -- add the column; everything else is the same as with case 1
FOREIGN KEY (ManagerId) REFERENCES Employee (Id);
```
or more succinctly:
```
ALTER TABLE Employee
ADD ManagerId INT REFERENCES Employee (Id);
```
---
**Case 3:** The table does not exist yet.
```
CREATE TABLE Employee -- create the table; everything else is the same as with case 1
(
Id INT NOT NULL PRIMARY KEY,
ManagerId INT
);
ALTER TABLE Employee
ADD FOREIGN KEY (ManagerId) REFERENCES Employee (Id);
```
or, declare the constraint inline, as part of the table creation:
```
CREATE TABLE Employee
(
Id INT NOT NULL PRIMARY KEY,
ManagerId INT,
FOREIGN KEY (ManagerId) REFERENCES Employee (Id)
);
```
or even more succinctly:
```
CREATE TABLE Employee
(
Id INT NOT NULL PRIMARY KEY,
ManagerId INT REFERENCES Employee (Id)
);
```
---
> **P.S. regarding constraint naming:** Up until [the previous revision of this answer](https://stackoverflow.com/revisions/25885755/3), the more verbose SQL examples contained `CONSTRAINT <ConstraintName>` clauses for giving unique names to the foreign key constraints. After [a comment by @ypercube](https://stackoverflow.com/questions/25878192/how-to-add-a-foreign-key-referring-to-itself-in-sql-server-2008/25885755?noredirect=1#comment40528027_25885755) I've decided to drop these clauses from the examples, for two reasons: Naming a constraint is an orthogonal issue to (i.e. independent from) putting the constraint in place. And having the naming out of the way allows us to focus on the actual adding of the constraints.
>
> In short, in order to name a constraint, precede any mention of e.g. `PRIMARY KEY`, `REFERENCES`, or `FOREIGN KEY` with `CONSTRAINT <ConstraintName>`. The way I name foreign key constraints is `<TableName>_FK_<ColumnName>`. I name primary key constraints in the same way, only with `PK` instead of `FK`. (Natural and other alternate keys would get the name prefix `AK`.)
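The succinct inline form from case 3 works in SQLite too, which makes for a quick check that the self-reference is actually enforced. Unlike SQL Server, SQLite only enforces foreign keys when the foreign_keys pragma is on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs
conn.execute("""
    CREATE TABLE Employee (
        Id INTEGER NOT NULL PRIMARY KEY,
        ManagerId INTEGER REFERENCES Employee (Id)
    )
""")
conn.execute("INSERT INTO Employee VALUES (1, NULL)")  # top of the hierarchy
conn.execute("INSERT INTO Employee VALUES (2, 1)")     # reports to employee 1

# Referencing a non-existent manager violates the self-referential FK.
try:
    conn.execute("INSERT INTO Employee VALUES (3, 99)")
    violated = False
except sqlite3.IntegrityError:
    violated = True

n = conn.execute("SELECT COUNT(*) FROM Employee").fetchone()[0]
print(violated, n)  # -> True 2
```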
|
You can add the column and constraint in one operation
```
ALTER TABLE dbo.Projects ADD
parentId INT NULL,
CONSTRAINT FK FOREIGN KEY(parentid) REFERENCES dbo.Projects
```
Optionally you could specify the PK column in brackets after the referenced table name but it is not needed here.
|
How to add a foreign key referring to itself in SQL Server 2008?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a rather complex query that returns a single row, and I'd like to extend it to return multiple rows, one for each region. Right now, I can use the following query to get stats for one region at a time (in the example, the 'West' region):
```
SELECT Count(id) totalcustomers,
Sum(ordertotal) totaldollars
FROM (SELECT c.*
FROM customers c
JOIN customeraddresses ca
ON c.id = ca.customerid
WHERE ca.state IN (SELECT state
FROM stateregions
WHERE region = "West")
GROUP BY c.id) t
```
I would like to get a summary table for all regions at once: the first column should have the region names, and then the other columns should have the summary stats, as they already do.
***A running example is available at <http://sqlfiddle.com/#!2/297e86/5>***
In case something goes wrong with SQLFiddle, here's the data structure:
```
CREATE TABLE customers
(
id INT,
ordertotal INT,
PRIMARY KEY (id)
);
INSERT INTO customers
(id, ordertotal)
VALUES
(1, 100),
(2, 250),
(3, 120);
CREATE TABLE customeraddresses (
customerid INT,
state char(2),
PRIMARY KEY (customerid,state)
);
INSERT INTO customeraddresses
(customerid, state)
VALUES
(1, 'ca'),
(2, 'ny'),
(3, 'ny'),
(3, 'ca');
CREATE TABLE stateregions (
region varchar(20),
state char(2),
PRIMARY KEY (region,state)
);
INSERT INTO stateregions
(region, state)
VALUES
('West', 'ca'),
('East', 'ny');
```
Obviously, the structure and values have been greatly simplified for this example, so please ask if anything is unclear. Thanks!
|
You just need to `JOIN` to the `stateregions` table, and then `GROUP BY` the region:
```
SELECT region, count(id) totalcustomers,sum(ordertotal) totaldollars
FROM customers c
JOIN customeraddresses ca ON c.id = ca.customerid
JOIN stateregions s on ca.state = s.state
GROUP BY s.region
```
* [Updated Fiddle](http://sqlfiddle.com/#!2/297e86/15)
This will return you one row for each region:
```
REGION TOTALCUSTOMERS TOTALDOLLARS
East 2 370
West 2 220
```
---
In lieu of potential duplicates, a subquery with `distinct` should work for you:
```
SELECT region, COUNT(id), SUM(ordertotal)
FROM (
SELECT DISTINCT c.id, c.ordertotal, s.region
FROM customers c
JOIN customeraddresses ca ON c.id = ca.customerid
JOIN stateregions s on ca.state = s.state
) t
GROUP BY region
```
Resulting in:
```
REGION TOTALCUSTOMERS TOTALDOLLARS
All 3 470
East 2 370
West 2 220
```
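Using the question's own schema and data, the accepted query runs unchanged on SQLite via Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INT PRIMARY KEY, ordertotal INT);
    INSERT INTO customers VALUES (1, 100), (2, 250), (3, 120);
    CREATE TABLE customeraddresses (customerid INT, state CHAR(2),
                                    PRIMARY KEY (customerid, state));
    INSERT INTO customeraddresses VALUES (1, 'ca'), (2, 'ny'), (3, 'ny'), (3, 'ca');
    CREATE TABLE stateregions (region VARCHAR(20), state CHAR(2),
                               PRIMARY KEY (region, state));
    INSERT INTO stateregions VALUES ('West', 'ca'), ('East', 'ny');
""")

# Join to stateregions and group by region; customer 3 counts in both
# regions because it has an address in each.
rows = conn.execute("""
    SELECT region, COUNT(id) AS totalcustomers, SUM(ordertotal) AS totaldollars
    FROM customers c
    JOIN customeraddresses ca ON c.id = ca.customerid
    JOIN stateregions s ON ca.state = s.state
    GROUP BY s.region
    ORDER BY region
""").fetchall()
print(rows)  # -> [('East', 2, 370), ('West', 2, 220)]
```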
|
turn each subquery into an inner join
```
SELECT Count(id) totalcustomers,
Sum(ordertotal) totaldollars
FROM customers c
JOIN customeraddresses ca
ON c.id = ca.customerid
inner join stateregions sr on sr.state = ca.state
  where sr.region = "WEST"
```
|
Untangling subqueries
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
Could someone point me toward general principals for speeding up my query below?
I have a working query which aggregates a count of "property" values grouped across five columns. But it takes over twenty minutes to run.
The count is aggregated across three related "Case" data tables which each have about 500,000 rows, and they are linked using a "UserID" plus "CaseNumber" composite key. (CaseNumbers are only unique per user.) I am using SQL Server 2005.
**My key problems seem to be:**
1. I need to "Group" *after* joining the three tables because each uniquely contains at least one of the columns that I am grouping against (so recommendations discussed [here](https://periscope.io/blog/use-subqueries-to-count-distinct-50x-faster.html) and [here](https://stackoverflow.com/questions/10444333/sql-server-how-can-i-use-the-count-clause-without-grouping) don't seem to apply).
2. My range of possible permutations in my desired result set (the product of the five column ranges) is large (~200,000 possibilities).
I am able to get results an order of magnitude more quickly if I limit my "range". So I could, for example, redesign this query as a "foreach" loop that retrieves one month at a time. But I would prefer to design a set-based approach.
I created a similar version of this query without the temp table, and another version with small temp tables for each "range" value, and the resulting speeds were similarly slow.
Ultimately, I want to get a count of the total number of permutations of "Categories" times "Properties" across every "Case" in the database, Grouped by Month and User. Each "UserID" + "CaseNumber" is uniquely tied to one Month and Year, and may be tied to two or three "Categories" or "Properties", in which case I would like to count every permutation of Properties \* Categories.
**The result set would look something like this:**

**Primary Keys:**
* "CaseMaster" has a composite primary key against "UserID" and
"CaseNumber".
* "CaseCategory" has a composite primary key against
"UserID" and "CaseNumber" and "CategoryID".
* "CaseProperty" has a
composite primary key against "UserID" and "CaseNumber" and "OtherID"
(not PropertyID).
* "CaseNumber" is "varchar". The rest are "char".
**Here is my draft query:**
```
USE MyDB
-- Drop Temp Table if it Exists
IF OBJECT_ID('tempdb..#DataRange') IS NOT NULL
DROP TABLE #DataRange
SELECT [UserID]
,[Year]
,[Month]
,[CategoryID]
INTO #DataRange
FROM [MyDB].[dbo].[IndexTable]
-- Aggregate a COUNT of "property" values joined across three large "Case" tables.
SELECT range.[UserID] AS [UserID]
,range.[Year] AS [Year]
,range.[Month] AS [Month]
,range.[CategoryID]
,cp.[PropertyID]
,COUNT(cp.[PropertyID]) AS [PropertyCount]
FROM
(
-- (1) Get the range of possible permutations.
(SELECT [UserID]
,[Year]
,[Month]
,[CategoryID]
FROM #DataRange) range
-- (2) Join against Dates AND Categories in the "Case Master" AND "Case Category" tables.
INNER JOIN
(
SELECT cm.[CaseNumber] AS [CaseNumber]
,cm.[UserID] AS [UserID]
,cm.[Year] AS [Year]
,cm.[Month] AS [Month]
,cc.[CategoryID] AS [CategoryID]
FROM
((SELECT [CaseNumber]
,[UserID]
,(CASE WHEN value1 = 'A' THEN datepart(year, date1)
ELSE datepart(year, date2) END) AS Year
,(CASE WHEN value2 = 'B' THEN datepart(month, date1)
ELSE datepart(month, date2) END) AS Month
FROM [MyDB].[dbo].[CaseMaster]) cm
INNER JOIN
(SELECT [CaseNumber]
,[UserID]
,[CategoryID]
FROM [MyDB].[dbo].[CaseCategory]) cc
ON cm.UserID = cc.UserID AND cm.CaseNumber = cc.CaseNumber)
) case
ON range.[UserID] = case.[UserID] AND range.[Year] = case.[Year]
AND range.[Month] = case.[Month] AND range.[CategoryID] = case.[CategoryID]
-- (3) Join against a "Property" fields in the "Case Property" table.
INNER JOIN
(
SELECT [CaseNumber]
,[UserID]
,[property1] AS [PropertyID]
FROM [MyDB].[dbo].[CaseProperty]
) cp
ON range.UserID = cp.UserID AND case.CaseNumber = cp.CaseNumber
AND cp.[PropertyID] IN (SELECT [PropertyID] FROM [MyDB].[dbo].[PropertyTypes])
)
GROUP BY range.[UserID], range.[Year], range.[Month], range.[CategoryID], cp.[PropertyID]
DROP TABLE #DataRange
GO
```
|
Indices. Check the query plan for all necessary indices.
If they are in place and it is still too slow, and you need this to be faster, find the bottleneck and buy the proper hardware to fix it, OR find another way to get the data (caching in memory etc.), though I think that won't work for this query.
So, at one point, large data analysis simply needs expensive hardware to handle it. Same reason I am just plugging 5 TB of SSD into my database server.
That said, you likely have a throughput bottleneck on tempdb - and that loves a nice RAID 0 of SSDs ;)
|
Given some assumptions:
* The query hits (aggregates) across virtually every row in every table, and
* The tables are truly large (joins over three tables with 500k rows in each qualify), and
* The query gets run frequently OR if the query has to run quickly when it is called
Then you might be looking at a data warehouse (datamart, reporting table) situation. Some of the basic concepts behind this are:
* Design the tables to support reporting (olap) queries, not writing/update (oltp) queries
* At designated points in time (daily? hourly?) you refresh the warehouse, loading in all data that has been added to the system since the last refresh. (Or, reload everything from scratch each time, but that is not ideal)
* Designed properly, reporting queries could run very quickly
In cases where you are just processing too much data, queries like this simply cannot be run quickly—think “end of the day” report, or something run overnight. The advantage of the warehouse here is that these long-running queries will not be running on the regular transaction, so you don’t get locking, blocking, or deadlock situations (so long as you’re not running the queries at the same time you’re trying to load the tables.) In addition, the underlying data won’t change while the query is being run.
|
How to speed up a COUNT of values grouped across an INNER JOIN of three large tables?
|
[
"",
"sql",
"sql-server",
"performance",
"join",
""
] |
Let's say I have 2 tables:
```
tOrder tOrderLine
------ ----------
OrderID OrderID
Other Fields ProductID
Other Fields
```
I want to get back all orders that have a certain 2 products on the same order. Currently, I'm doing:
```
select o.OrderID
from tOrder o
inner join tOrderLine ol on o.OrderID = ol.OrderID
inner join tOrderLine ol2 on o.OrderID = ol2.OrderID
where ol.ProductID = 67
and ol2.ProductID = 68
```
This works and gives me the results I need, but it seems a little hokey. Also, if I wanted to get all orders that have a set of 10 items, the query is going to get monstrous.
Is there a better way to do what I'm looking for?
|
There are 2 simple ways I can think to do it:
1. Inner join to `tOrderLine` twice. Exactly how you did it. I actually would have never thought of that. It was an interesting solution when I saw it.
2. `EXISTS` clauses. For each tOrder row, check if there exists a row in `tOrderLine` corresponding to it and one of the products.
For example:
```
select o.OrderID
from tOrder o
where exists (select 1 from tOrderLine ol where ol.OrderID = o.OrderID and ol.ProductID = 67)
and exists (select 1 from tOrderLine ol where ol.OrderID = o.OrderID and ol.ProductID = 68)
```
This is how I would write it, simply because it's more obvious what the query is doing. On the other hand, yours seems easier for the parser to optimize.
I'd be curious to know if you look at the execution plan of either one if they are the same. The correlated sub-queries could be rewritten internally to be inner joins.
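Both forms can be checked side by side on a tiny in-memory dataset. This is a sketch using Python's `sqlite3` as a stand-in for SQL Server, with made-up order data:

```python
import sqlite3

# Hypothetical sample data: order 1 has both products, orders 2 and 3 do not.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tOrder (OrderID INTEGER PRIMARY KEY);
CREATE TABLE tOrderLine (OrderID INTEGER, ProductID INTEGER);
INSERT INTO tOrder VALUES (1), (2), (3);
INSERT INTO tOrderLine VALUES (1, 67), (1, 68), (2, 67), (3, 68), (3, 99);
""")

# Style 1: join tOrderLine twice, once per required product.
join_sql = """
SELECT o.OrderID
FROM tOrder o
JOIN tOrderLine ol  ON o.OrderID = ol.OrderID
JOIN tOrderLine ol2 ON o.OrderID = ol2.OrderID
WHERE ol.ProductID = 67 AND ol2.ProductID = 68
"""

# Style 2: one EXISTS clause per required product.
exists_sql = """
SELECT o.OrderID
FROM tOrder o
WHERE EXISTS (SELECT 1 FROM tOrderLine ol
              WHERE ol.OrderID = o.OrderID AND ol.ProductID = 67)
  AND EXISTS (SELECT 1 FROM tOrderLine ol
              WHERE ol.OrderID = o.OrderID AND ol.ProductID = 68)
"""

join_ids   = sorted(r[0] for r in con.execute(join_sql))
exists_ids = sorted(r[0] for r in con.execute(exists_sql))
print(join_ids, exists_ids)  # both: [1]
```

Only order 1 carries both products, and both query styles agree on that.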
|
You could try to solve it using a distinct count on a `having`:
```
select t.OrderId
, count(*)
from tOrder t
join tOrderLine o
on t.OrderID = o.OrderID
where o.ProductId in (67, 68)
group
by t.OrderId
having count(distinct o.ProductId) = 2
```
|
SQL Query Multiple Products on One Order
|
[
"",
"sql",
"sql-server",
""
] |
I have a table in database as "EXPERIENCE RANGE" with rows as (I can also edit this table according to my need)
```
0
0.5
1
2
3
5
10
20
```
I have total experience as integer. I need to display the range in which it lies.
Example - for experience of 8, Range will be 5 - 10
I need to write a sql query. Any ideas will be quite helpful as I am new to SQL.
I cannot hard code it..need to take values from tables only.
|
Assuming that you are using Oracle, the following query works fine with your existing table:
```
SELECT
( SELECT MAX( value ) FROM experience_range WHERE value <= :search_value ) AS range_start,
( SELECT MIN( value ) FROM experience_range WHERE value > :search_value ) AS range_end
FROM dual;
```
No need to hardcode the values, and no need to store the lower and upper bounds redundantly.
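The same MAX/MIN lookup can be tried out quickly outside Oracle. A sketch using Python's `sqlite3` with the table values from the question:

```python
import sqlite3

# The EXPERIENCE RANGE values from the question, loaded into sqlite3.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE experience_range (value REAL)")
con.executemany("INSERT INTO experience_range VALUES (?)",
                [(0,), (0.5,), (1,), (2,), (3,), (5,), (10,), (20,)])

def find_range(search_value):
    # Lower bound: largest stored value not above the search value.
    # Upper bound: smallest stored value strictly above it.
    return con.execute("""
        SELECT (SELECT MAX(value) FROM experience_range WHERE value <= :v),
               (SELECT MIN(value) FROM experience_range WHERE value >  :v)
    """, {"v": search_value}).fetchone()

print(find_range(8))    # (5.0, 10.0) -> "Range will be 5 - 10"
print(find_range(0.7))  # (0.5, 1.0)
```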
|
you can do it with [CASE](http://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm) Expression, the syntax is:
```
SELECT
CASE
WHEN experience >= 0 and experience <= 4 THEN '0-4'
WHEN experience >= 5 and experience <= 10 THEN '5-10'
.....
ELSE 'No Range'
END as Range
FROM Table_Name
```
|
finding range by comparing two tables
|
[
"",
"sql",
"oracle",
""
] |
I have a table that contains values per user per month.
What I need to find is the average value for the sum of users in each month.
I can't do this:
## **QUERY**
```
SELECT AVG(SUM(TACMargin))
FROM report_sales_month
WHERE YEAR(date) = 2013
GROUP BY userId
```
Which I don't think would get me what I want anyway.
If I group by user I get multiple rows (obviously) but I need have an average of these.
|
I think you can do what you want with a subquery:
```
SELECT AVG(sumtc)
FROM (SELECT SUM(TACMargin) as sumtc
FROM report_sales_month
WHERE YEAR(date) = 2013
GROUP BY month(date)
) t;
```
You can also express this without the subquery:
```
SELECT SUM(TACMargin) / count(distinct month(date))
FROM report_sales_month
WHERE year(date) = 2013;
```
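Both forms should agree. Here is a quick check on made-up per-user monthly rows, using Python's `sqlite3` as a stand-in for MySQL (`strftime` replaces MySQL's `YEAR()`/`MONTH()`):

```python
import sqlite3

# Hypothetical sample: January totals 150, February totals 200 -> average 175.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE report_sales_month (userId INT, date TEXT, TACMargin REAL)")
con.executemany("INSERT INTO report_sales_month VALUES (?,?,?)", [
    (1, '2013-01-15', 100.0), (2, '2013-01-20', 50.0),
    (1, '2013-02-10', 80.0),  (2, '2013-02-11', 120.0),
])

# Form 1: average over a subquery of monthly sums.
subquery_avg = con.execute("""
    SELECT AVG(sumtc) FROM (
        SELECT SUM(TACMargin) AS sumtc
        FROM report_sales_month
        WHERE strftime('%Y', date) = '2013'
        GROUP BY strftime('%m', date)
    )
""").fetchone()[0]

# Form 2: grand total divided by the number of distinct months.
ratio_avg = con.execute("""
    SELECT SUM(TACMargin) / COUNT(DISTINCT strftime('%m', date))
    FROM report_sales_month
    WHERE strftime('%Y', date) = '2013'
""").fetchone()[0]

print(subquery_avg, ratio_avg)  # 175.0 175.0
```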
|
I'm not sure about what result you're looking for. It may be the query that you want:
```
SELECT AVG(M.totalTACMargin)
FROM (SELECT R.userId
,MONTH(R.date) AS month
,SUM(R.TACMargin) AS totalTACMargin
FROM report_sales_month R
WHERE YEAR(R.date) = 2013
GROUP BY R.userId, MONTH(R.date)) M
```
The subquery provide a list of the total `TACMargin` grouped by month and by user. Then I get the average of all these results.
Hope this will help you.
|
Average of the Sum of X in MySQL
|
[
"",
"mysql",
"sql",
""
] |
I need auto-increment, but in the format abc\_1, abc\_2. The code shown below gives plain auto-increment, but I need the constant prefix abc\_ followed by the incrementing number, like abc\_1, abc\_2.
## **CODE**
```
sql = "CREATE TABLE MY_TABLE
(
table_id int NOT NULL AUTO_INCREMENT,
PRIMARY KEY(table_id),
table_1 varchar(45),
table_2 varchar(45),
table_3 varchar(999),
table_4 varchar(45)
)"
```
|
You have 2 options - both include keeping the autoincrement field exactly as it is.
1st Option is to add a short char type field, which simply stores your Alpha part. When you want to retrieve the whole key, then you can `SELECT (alpha_part + table_id) as ID`. As you can see this generates smaller storage, but requires more work for each select statement.
2nd option is to add a longer column that gets populated by an insert trigger normally. It is simply storing the concatenation on creation and then you don't have to concatenate it when you want to select it. This option also allows you to create an index or clustered index easier.
```
CREATE TABLE MY_TABLE (
table_id int NOT NULL AUTO_INCREMENT, PRIMARY KEY(table_id),
alpha_part varchar(10) NOT NULL, -- This
display_id varchar(40) NOT NULL, -- OR This (not both)
table_1 varchar(45),
table_2 varchar(45),
table_3 varchar(999),
table_4 varchar(45) )
```
"Database Id" and "Insurance Policy Id" are two separate entities - they may contain the "same" number, but don't mix up what the database needs to perform effectively, with what your business application needs to generate IDs for customers. Business rules and database Id are separate entities. You can "seed" a policy Id from a database generated one, but if something changes the policy id (yes this happens) your database suddenly needs to be refactored and you don't want that to happen.
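The trigger-populated column from option 2 can be sketched end to end. This is an illustrative demo using Python's `sqlite3` rather than MySQL (table and trigger names are made up); the trigger concatenates the prefix with the freshly assigned id:

```python
import sqlite3

# A display_id column kept in sync by an AFTER INSERT trigger.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE my_table (
    table_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    display_id TEXT,
    payload    TEXT
);
CREATE TRIGGER set_display_id AFTER INSERT ON my_table
BEGIN
    -- Store the concatenation once, so selects never have to rebuild it.
    UPDATE my_table SET display_id = 'abc_' || NEW.table_id
    WHERE table_id = NEW.table_id;
END;
""")
con.execute("INSERT INTO my_table (payload) VALUES ('first')")
con.execute("INSERT INTO my_table (payload) VALUES ('second')")
rows = con.execute("SELECT display_id FROM my_table ORDER BY table_id").fetchall()
print(rows)  # [('abc_1',), ('abc_2',)]
```

In MySQL itself the trigger body would use `CONCAT('abc_', NEW.table_id)` instead of `||`.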
|
You could add another column to derive this value, then have a trigger that automatically updates this column to add the derived value whenever a row is inserted.
However, it is not clear why this would be needed. It is likely better to just store the number and derive the form `abc_123` where that value needs to be used.
|
Auto Increment in sql with specific name
|
[
"",
"mysql",
"sql",
""
] |
I have been trying to code up a stored procedure that will grab the top tenant for each property ID and contract number for the user logged in.
I have been able to get SQL to do most of it, but not the bit where I only grab the top one for each group, which I need it to do.
It is a little hard to explain as it's the first time I have had to do this. Maybe if you see the example below it will become clear what the end result should be...
The code i am using is:
```
SELECT Id,
PropertyId,
Name + ', ' + Address1,
contractNumber
FROM PropertyTenant
WHERE PropertyId IN (SELECT PropertyId FROM UserProperty WHERE UserId = '5E541037-747C-43B9-BB9F-97C8728083B7')
AND ContractNumber BETWEEN 1 AND 10;
```
Results I am getting
(tenant names and addresses would be different in a real situation, but this is just dummy data I used)
```
Id PropertyId Tenant Name and address contractNumber
142 1501 chris devine, 12 1
143 1501 steven mo, 12 1
144 1502 chris devine, 12,whitebeam close 1
145 1502 steven, 12,whitebeam close 2
146 1503 sdf adsf, asda 1
147 1504 dddd ddds, sdsdwsd 1
160 1532 Christopher, 1 1
161 1532 Christopher Devine, 6 1
162 1532 Christopher Devine, 11 2
163 1532 Christopher Devine, 16 2
164 1529 Christopher Devine, 12, windy 1
165 1529 Christopher Devine, 12 dog 1
166 1523 Christopher Devine, asdfasiuuh 1
167 1523 Christopher Devine, asdasd 1
182 1538 chris devine, 12 whitebam 1
183 1538 chris devine, 12, whitebeam close 2
184 1538 chris devine, 12asd 1
185 1561 chris devine, 12 whitebeam 1
```
The Results i would like and need, should be:
```
Id PropertyId Tenant Name and address contractNumber
142 1501 chris devine, 12 1
144 1502 chris devine, 12,whitebeam close 1
145 1502 steven, 12,whitebeam close 2
146 1503 sdf adsf, asda 1
147 1504 dddd ddds, sdsdwsd 1
160 1532 Christopher, 1 1
162 1532 Christopher Devine, 11 2
164 1529 Christopher Devine, 12, windy 1
167 1523 Christopher Devine, asdasd 1
182 1538 chris devine, 12 whitebam 1
183 1538 chris devine, 12, whitebeam close 2
185 1561 chris devine, 12 whitebeam 1
```
Any help would be great, thanks, as I think I am going about this the totally wrong way :(
|
Here is a way to do this.
```
with MyCTE as
(
SELECT pt.Id
, pt.PropertyId
, pt.Name + ', ' + pt.Address1 as tenantinfo
, pt.contractNumber
, ROW_NUMBER() over (PARTITION BY PropertyID, contractnumber order by id asc) as RowNum
FROM PropertyTenant pt
WHERE pt.PropertyId IN
(
SELECT PropertyId
FROM UserProperty
WHERE UserId = '5E541037-747C-43B9-BB9F-97C8728083B7'
)
AND ContractNumber BETWEEN 1 AND 10
)
select *
from MyCTE
where RowNum = 1
order by id asc
```
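The `ROW_NUMBER() OVER (PARTITION BY ...)` pattern can be demonstrated on a slice of the question's data. This sketch uses Python's `sqlite3` (window functions need SQLite 3.25 or newer) and keeps only the columns relevant to the ranking:

```python
import sqlite3

# A subset of the question's rows: (PropertyId, contractNumber) groups,
# where only the lowest Id per group should survive.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PropertyTenant (Id INT, PropertyId INT, contractNumber INT)")
con.executemany("INSERT INTO PropertyTenant VALUES (?,?,?)", [
    (142, 1501, 1), (143, 1501, 1),
    (144, 1502, 1), (145, 1502, 2),
    (160, 1532, 1), (161, 1532, 1), (162, 1532, 2), (163, 1532, 2),
])
rows = con.execute("""
    WITH ranked AS (
        SELECT Id,
               ROW_NUMBER() OVER (PARTITION BY PropertyId, contractNumber
                                  ORDER BY Id ASC) AS RowNum
        FROM PropertyTenant
    )
    SELECT Id FROM ranked WHERE RowNum = 1 ORDER BY Id
""").fetchall()
ids = [r[0] for r in rows]
print(ids)  # [142, 144, 145, 160, 162]
```

Each (PropertyId, contractNumber) pair contributes exactly one row, the one with the smallest Id, matching the desired output in the question.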
|
Try this:-
```
SELECT Min(Id),
PropertyId,
Name + ', ' + Address1,
contractNumber
FROM PropertyTenant
WHERE PropertyId IN (SELECT PropertyId
FROM UserProperty
WHERE UserId = '5E541037-747C-43B9-BB9F-97C8728083B7')
AND ContractNumber BETWEEN 1 AND 10
GROUP BY PropertyId, Name + ', ' + Address1, contractNumber
```
Hope this helps you.
|
How to show only top result for each feild group in SQL
|
[
"",
"sql",
"sql-server",
""
] |
Basically I have just two tables. The first is "Objects" and the second is "Properties". Now I want objects with specified properties.
Objects Table:
```
+-----------+
| Column |
+-----------+
| object_id |
+-----------+
```
Properties Table:
```
+-------------+
| Columns |
+-------------+
| property_id |
| object_id |
| key |
| value |
+-------------+
```
Now what I want to select is, of course, all objects by predicates (AND, OR, XOR).
e.g.:
```
property_id object_id key value
1 2 name Mike
2 2 gender male
3 2 hair blonde
4 2 married yes
5 5 name John
6 5 gender male
7 5 married yes
8 5 job Cop
```
Say I want to select all objects with the gender="male" and married="yes".
Do you have any idea how this is achieved most efficiently with MySQL? And is it clear enough what I need?
Chrispi
|
If I understand correctly it is something like
```
Select object_id
from objects
where object_id in (select object_id from properties where gender="male") and
object_id in (select object_id from properties where married="yes") ;
```
An alternative using only properties:
```
(select object_id from properties where gender="male")
intersect
(select object_id from properties where married="yes");
```
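The "IN two subqueries" approach can be run directly against the sample EAV rows from the question. A sketch using Python's `sqlite3`:

```python
import sqlite3

# The question's sample data: objects 2 and 5 are both male and married.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE properties (property_id INT, object_id INT, key TEXT, value TEXT)")
con.executemany("INSERT INTO properties VALUES (?,?,?,?)", [
    (1, 2, 'name', 'Mike'),   (2, 2, 'gender', 'male'),
    (3, 2, 'hair', 'blonde'), (4, 2, 'married', 'yes'),
    (5, 5, 'name', 'John'),   (6, 5, 'gender', 'male'),
    (7, 5, 'married', 'yes'), (8, 5, 'job', 'Cop'),
])
rows = con.execute("""
    SELECT DISTINCT object_id FROM properties
    WHERE object_id IN (SELECT object_id FROM properties
                        WHERE key = 'gender' AND value = 'male')
      AND object_id IN (SELECT object_id FROM properties
                        WHERE key = 'married' AND value = 'yes')
    ORDER BY object_id
""").fetchall()
ids = [r[0] for r in rows]
print(ids)  # [2, 5]
```

Here the AND predicate maps to two IN clauses; an OR would map to a single subquery over both key/value pairs.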
|
This cannot be done efficiently in a RDBMS. However, you can achieve it along the lines of:
```
SELECT object_id FROM your_table t1
INNER JOIN your_table t2 ON (
t1.object_id = t2.object_id
AND t2.property_id = 4
AND t2.value = 'yes')
WHERE t1.property_id = 6 and t1.value = 'male';
```
|
Select row depending on what is linked to it in other table MySQL
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to implement this SQL code in PL/SQL
SQL Code:
```
select SUM(SUM(distinct inv_total) - SUM(distinct credit_total) ) as TOTAL from invoice
where cus_id =95
group by inv_total, credit_total
```
PL/SQL Code:
```
Declare
total invoice.inv_total%TYPE;
cust_id NUMBER := 95;
BEGIN
select SUM(SUM(distinct inv_total) - SUM(distinct credit_total) )
INTO total from invoice
where cus_id= cust_id;
dbms_output.put_line('Total is ' || total);
END;
ERROR at line 1:
ORA-00978: nested group function without GROUP BY
ORA-06512: at line 8
```
Any tips how to use GROUP BY with PL/SQL?
Thank you
|
You seem to have several problems:
1. I find it highly unlikely that you actually want `distinct` inv\_total or credit\_total. If two invoices have the same total, you only want to count one of them?
2. Without the distinct, the `sum` can be written as `SUM(inv_total - credit_total)`. This is much more straightforward and will not require a `group by` clause.
3. Your original query groups by inv\_total and credit\_total. This also doesn't seem right, as you're unlikely want to group multiple rows based on totals like this.
4. If you really wanted the `distinct` and the `group by`, then you'd have to include the `group by` in the PL/SQL block, just as it is in your original SQL statement.
|
Try this:-
```
DECLARE
total invoice.inv_total%TYPE;
cust_id NUMBER := 95;
BEGIN
select SUM( inv_total - credit_total)
INTO total from invoice
where cus_id= cust_id;
dbms_output.put_line('Total is ' || total);
END;
/
```
|
PL/SQL SUM Difference Example
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I'm trying to find the following statistic: `How many users have made at least one order`
(yeah, sounds like homework .. but this is a simplistic example of my real query).
Here's the made up query
```
SELECT COUNT(UserId)
FROM USERS a
WHERE EXISTS(
SELECT OrderId
FROM ORDERS b
WHERE a.UserId = b.UserId
)
```
I feel like I'm getting the correct answers but I feel like this is an overkill and is inefficient.
Is there a more efficient way I can get this result?
If this was `linq` I feel like I want to use the `Any()` keyword....
|
It sounds like you just could use [`COUNT DISTINCT`](http://msdn.microsoft.com/en-us/library/ms175997.aspx):
```
SELECT COUNT(DISTINCT UserId)
FROM ORDERS
```
This will return the number of distinct values of `UserId` appear in the table `OrderId`.
In response to [*sgeddes*'s comment](https://stackoverflow.com/users/1073631/sgeddes), to ensure that `UserId` also appears in `Users`, simply do a `JOIN`:
```
SELECT COUNT(DISTINCT b.UserId)
FROM ORDERS b
JOIN USERS a
ON a.UserId = b.UserId
```
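Both counts can be compared on a small sample. This sketch uses Python's `sqlite3` with invented users and orders, checking that `COUNT(DISTINCT UserId)` over ORDERS matches the original EXISTS query:

```python
import sqlite3

# Users 1 and 3 have orders; user 2 does not -> the answer is 2.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE USERS (UserId INT);
CREATE TABLE ORDERS (OrderId INT, UserId INT);
INSERT INTO USERS VALUES (1), (2), (3);
INSERT INTO ORDERS VALUES (10, 1), (11, 1), (12, 3);
""")
distinct_count = con.execute(
    "SELECT COUNT(DISTINCT UserId) FROM ORDERS").fetchone()[0]
exists_count = con.execute("""
    SELECT COUNT(UserId) FROM USERS a
    WHERE EXISTS (SELECT 1 FROM ORDERS b WHERE a.UserId = b.UserId)
""").fetchone()[0]
print(distinct_count, exists_count)  # 2 2
```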
|
```
Select count(distinct u.userid)
From USERS u
Inner join ORDERS o
On o.userid = u.userid
```
|
Is this SQL query with an EXISTS the most performant way of returning my result?
|
[
"",
"sql",
"t-sql",
""
] |
I have a question. I have two tables, employees and salary, and I want to display the full name and salary grade of all employees whose grade\_ID is 2 or 3, sorted by employee\_ID.
Here are the tables:
```
Salary_ID LOW_SALARY High Salary
1 1 250000
2 250001 500000
3 500001 750000
4 750001 999999
```
---
```
EMPLOYEE_ID FIRST_NAME Last_NAME Salary
1 James Smith 800000
22 Roy Red 375000
2 Ron Johnson 550000
5 Rob Green 225000
```
I know I have to join the tables, this is what I have so far:
## **QUERY**
```
SELECT employees2.FIRST_NAME,
employees2.last_name,
salary_grades.SALARY_GRADE_ID,
employees2.SALARY
FROM employees2,
salary_grades
ORDER BY employees2.EMPLOYEE_ID;
```
What do I do next? I am stuck. Can anyone help me out? Thank you.
|
It may be the query you're looking for:
```
SELECT E.EMPLOYEE_ID
,E.FIRST_NAME
,E.LAST_NAME
,E.SALARY
FROM employees E
INNER JOIN salary S ON S.LOW_SALARY <= E.SALARY
AND S.HIGH_SALARY >= E.SALARY
AND S.SALARY_ID IN (2,3)
ORDER BY E.EMPLOYEE_ID
```
Hope this will help you.
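The range join above can be verified on the question's sample data. A sketch using Python's `sqlite3` as a stand-in:

```python
import sqlite3

# Salary grade bands and employees from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE salary (Salary_ID INT, LOW_SALARY INT, HIGH_SALARY INT)")
con.execute("CREATE TABLE employees "
            "(EMPLOYEE_ID INT, FIRST_NAME TEXT, LAST_NAME TEXT, SALARY INT)")
con.executemany("INSERT INTO salary VALUES (?,?,?)", [
    (1, 1, 250000), (2, 250001, 500000), (3, 500001, 750000), (4, 750001, 999999),
])
con.executemany("INSERT INTO employees VALUES (?,?,?,?)", [
    (1, 'James', 'Smith', 800000), (22, 'Roy', 'Red', 375000),
    (2, 'Ron', 'Johnson', 550000), (5, 'Rob', 'Green', 225000),
])
rows = con.execute("""
    SELECT E.EMPLOYEE_ID, E.FIRST_NAME, E.LAST_NAME, S.Salary_ID
    FROM employees E
    JOIN salary S
      ON E.SALARY BETWEEN S.LOW_SALARY AND S.HIGH_SALARY
    WHERE S.Salary_ID IN (2, 3)
    ORDER BY E.EMPLOYEE_ID
""").fetchall()
print(rows)  # [(2, 'Ron', 'Johnson', 3), (22, 'Roy', 'Red', 2)]
```

James (grade 4) and Rob (grade 1) fall outside the requested grades, leaving Ron and Roy.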
|
```
SELECT employees2.FIRST_NAME, employees2.LAST_NAME, employees2.SALARY, salary_grades.SALARY_GRADE_ID
FROM employees2, salary_grades
WHERE employees2.SALARY BETWEEN salary_grades.LOW_SALARY AND salary_grades.HIGH_SALARY
ORDER BY salary_grades.SALARY_GRADE_ID
```
So I tried using this script; it works, but do I use IN to display the range of 2 and 3 in SQL Developer?
|
Joining two tables SQL
|
[
"",
"mysql",
"sql",
"oracle",
""
] |
First, I'll claim no extreme expertise in SQL.
What I am trying to do is pull down some incident counts from a SQL Server database based on a couple of varying criteria and put them into SSRS in a line chart that shows monthly counts on top of each other. Both of the queries that I had initially created pull down the right data but SSRS will not allow for using multiple data sets on one chart. This is where I get hung up. I'm sure I can do some sort of sub-querying to get both data sets into one just separating the counts into columns, but what I've tried thus far aggregates the entire counts.
The data I need is the reporteddate which is a date/time value and a count of incidents that occurred (no need to aggregate the counts, or shouldn't be, since SSRS will allow me to group them by month, which is what the report should look like) matching each of the 2 criteria.
Below are the two queries that I am using:
```
SELECT DISTINCT v_Incident.ReportedDate, COUNT(*) AS Count
FROM v_Incident INNER JOIN
v_IncidentMaxWorkLog_Category ON v_Incident.IncidentID = v_IncidentMaxWorkLog_Category.Incident_Number
WHERE (v_Incident.Summary LIKE N'Intelligent Incident for Impacted CI%') AND (v_Incident.CustomerFirstName = N'BMC') AND
(v_Incident.CustomerLastName = N'Impact Manager') AND (v_IncidentMaxWorkLog_Category.WORK_LOG_TYPE IN ('General Information', 'Incident Task / Action',
'Working Log')) AND (v_Incident.ReportedDate >= '7/1/2014') AND (v_Incident.ReportedDate >= DATEADD(year, - 1, GETDATE()))
GROUP BY v_Incident.ReportedDate
```
Second Query
```
SELECT DISTINCT v_Incident.ReportedDate, COUNT(*) AS Count
FROM v_Incident INNER JOIN
v_IncidentMaxWorkLog_Category ON v_Incident.IncidentID = v_IncidentMaxWorkLog_Category.Incident_Number
WHERE (v_Incident.Summary LIKE N'Intelligent Incident for Impacted CI%') AND (v_Incident.CustomerFirstName = N'BMC') AND
(v_Incident.CustomerLastName = N'Impact Manager') AND (v_Incident.ReportedDate >= '7/1/2014') AND (v_Incident.ReportedDate >= DATEADD(year, - 1, GETDATE()))
GROUP BY v_Incident.ReportedDate
```
As you can see, I am pulling down the reporteddate and a count. If I simply bring those two statements together in a single statement [select (query1) as filtered, (query2) as base] and move the group by's to the outside, I get results, but it is the entire total for each value on every single line, and about 10k lines too many.
Is there a simple solution that will get me each date, and a single count for base and filtered using these 2 or similar queries? Thank you very much in advance!
|
Your queries are quite similar. It looks like you have a total count, and then a partial count of some records that have specific work log types. Here is a query that returns date, BaseCount, and 'WorkLogFilteredCount' SSRS should be able to consume the date as an axis and the other two columns as separate series very easily for your chart.
```
SELECT v_Incident.ReportedDate
, COUNT(*) AS BaseCount
, SUM(CASE WHEN v_IncidentMaxWorkLog_Category.WORK_LOG_TYPE IN ('General Information', 'Incident Task / Action', 'Working Log') THEN 1 ELSE 0 END) AS WorkLogFilteredCount
FROM v_Incident
INNER JOIN v_IncidentMaxWorkLog_Category
ON v_Incident.IncidentID = v_IncidentMaxWorkLog_Category.Incident_Number
WHERE (v_Incident.Summary LIKE N'Intelligent Incident for Impacted CI%')
AND (v_Incident.CustomerFirstName = N'BMC')
AND (v_Incident.CustomerLastName = N'Impact Manager')
AND (v_Incident.ReportedDate >= '7/1/2014')
AND (v_Incident.ReportedDate >= DATEADD(year, - 1, GETDATE()))
GROUP BY v_Incident.ReportedDate
```
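The conditional-aggregation trick at the heart of this answer can be shown on a toy dataset. This sketch uses Python's `sqlite3` with a deliberately simplified single-table schema (not the real v_Incident views):

```python
import sqlite3

# One pass over the rows yields both a base count and a filtered count per date.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE incidents (ReportedDate TEXT, work_log_type TEXT)")
con.executemany("INSERT INTO incidents VALUES (?,?)", [
    ('2014-07-01', 'Working Log'),         ('2014-07-01', 'Other'),
    ('2014-07-02', 'General Information'), ('2014-07-02', 'Working Log'),
])
rows = con.execute("""
    SELECT ReportedDate,
           COUNT(*) AS BaseCount,
           SUM(CASE WHEN work_log_type IN
                    ('General Information', 'Incident Task / Action', 'Working Log')
               THEN 1 ELSE 0 END) AS WorkLogFilteredCount
    FROM incidents
    GROUP BY ReportedDate
    ORDER BY ReportedDate
""").fetchall()
print(rows)  # [('2014-07-01', 2, 1), ('2014-07-02', 2, 2)]
```

SSRS can then chart BaseCount and WorkLogFilteredCount as two series over the shared date axis, with no UNION or second dataset needed.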
|
Based on what I am thinking you want I am going to use a [UNION](http://msdn.microsoft.com/en-us/library/ms180026.aspx) and treat the unioned queries as a [subquery](http://technet.microsoft.com/en-us/library/ms189575(v=sql.105).aspx).
```
SELECT *
FROM
((SELECT DISTINCT
v_Incident.ReportedDate, COUNT(*) AS Count, 'BASE' as [Designator]
FROM
v_Incident
INNER JOIN
v_IncidentMaxWorkLog_Category ON v_Incident.IncidentID = v_IncidentMaxWorkLog_Category.Incident_Number
WHERE
(v_Incident.Summary LIKE N'Intelligent Incident for Impacted CI%')
AND (v_Incident.CustomerFirstName = N'BMC')
AND (v_Incident.CustomerLastName = N'Impact Manager')
AND (v_IncidentMaxWorkLog_Category.WORK_LOG_TYPE IN ('General Information', 'Incident Task / Action', 'Working Log'))
AND (v_Incident.ReportedDate >= '7/1/2014')
AND (v_Incident.ReportedDate >= DATEADD(year, - 1, GETDATE()))
GROUP BY
v_Incident.ReportedDate
    )
UNION ALL
(SELECT DISTINCT v_Incident.ReportedDate, COUNT(*) AS Count, 'Filtered' as [Designator]
FROM v_Incident INNER JOIN
v_IncidentMaxWorkLog_Category ON v_Incident.IncidentID = v_IncidentMaxWorkLog_Category.Incident_Number
WHERE (v_Incident.Summary LIKE N'Intelligent Incident for Impacted CI%') AND (v_Incident.CustomerFirstName = N'BMC') AND
(v_Incident.CustomerLastName = N'Impact Manager') AND (v_Incident.ReportedDate >= '7/1/2014') AND (v_Incident.ReportedDate >= DATEADD(year, - 1, GETDATE()))
GROUP BY v_Incident.ReportedDate
)) AS u ORDER BY ReportedDate, Designator
```
|
TSQL 2 Queries in 1
|
[
"",
"sql",
"sql-server",
"t-sql",
"reporting-services",
""
] |
I'm using PostgreSQL and am an SQL beginner. I'm trying to create a table from a query, and if I run:
```
CREATE TABLE table_name AS
(....query...)
```
it works just fine. But then if I add 'if not exists' and run:
```
CREATE TABLE IF NOT EXISTS table_name AS
(....query...)
```
using exactly the same query, I get:
> ```
> ERROR: syntax error at or near "as"
> ```
Is there any way to do this?
|
[CREATE TABLE AS](http://www.postgresql.org/docs/current/static/sql-createtableas.html) is considered a separate statement from a normal [CREATE TABLE](http://www.postgresql.org/docs/current/static/sql-createtable.html), and *until Postgres version 9.5* (see [changelog entry](https://www.postgresql.org/docs/9.5/static/release-9-5.html#AEN127121)) didn't support an `IF NOT EXISTS` clause. (Be sure to look at the correct version of the manual for the version you are using.)
Although not quite as flexible, the `CREATE TABLE ... LIKE` syntax might be an alternative in some situations; rather than taking its structure (and content) from a `SELECT` statement, it copies the structure of another table or view.
Consequently, you could write something like this (untested); the final insert is a rather messy way of doing nothing if the table is already populated:
```
CREATE OR REPLACE VIEW source_data AS SELECT * FROM foo NATURAL JOIN bar;
CREATE TABLE IF NOT EXISTS snapshot LIKE source_data;
INSERT INTO snapshot
SELECT * FROM source_data
WHERE NOT EXISTS ( SELECT * FROM snapshot );
```
Alternatively, if you want to discard previous data (e.g. an abandoned temporary table), you could conditionally drop the old table, and unconditionally create the new one:
```
DROP TABLE IF EXISTS temp_stuff;
CREATE TEMPORARY TABLE temp_stuff AS SELECT * FROM foo NATURAL JOIN bar;
```
|
`CREATE TABLE IF NOT EXISTS ...` was added in Postgres 9.1. See:
* [PostgreSQL create table if not exists](https://stackoverflow.com/questions/1766046/postgresql-create-table-if-not-exists/7438222#7438222)
### Postgres 9.0 or older
If you are going to write a function for this, base it on [system catalog table **`pg_class`**](https://www.postgresql.org/docs/current/catalog-pg-class.html), not on views in the [information schema](https://www.postgresql.org/docs/current/information-schema.html) or the [statistics collector](https://www.postgresql.org/docs/current/monitoring-stats.html) (which only exist if activated).
* [How to check if a table exists in a given schema](https://stackoverflow.com/questions/20582500/how-to-check-if-a-table-exists-in-a-given-schema/24089729#24089729)
```
CREATE OR REPLACE FUNCTION create_table_qry(_tbl text
, _qry text
, _schema text = NULL)
RETURNS bool
LANGUAGE plpgsql AS
$func$
DECLARE
_sch text := COALESCE(_schema, current_schema());
BEGIN
IF EXISTS (
SELECT FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = _sch
AND c.relname = _tbl
) THEN
RAISE NOTICE 'Name is not free: %.%',_sch, _tbl;
RETURN FALSE;
ELSE
EXECUTE format('CREATE TABLE %I.%I AS %s', _sch, _tbl, _qry);
RAISE NOTICE 'Table created successfully: %.%',_sch, _tbl;
RETURN TRUE;
END IF;
END
$func$;
```
The function takes a table name and the query string, and optionally also a schema to create the table in (defaults to the [current schema](https://stackoverflow.com/a/9067777/939860)).
Note the correct use of `=` in the function header and `:=` in the function body:
* [The forgotten assignment operator "=" and the commonplace ":="](https://stackoverflow.com/questions/7462322/the-forgotten-assignment-operator-and-the-commonplace/22001209#22001209)
Also note how identifiers are escaped as identifiers. You can't use `regclass`, since the table does not exist, yet:
* [Table name as a PostgreSQL function parameter](https://stackoverflow.com/questions/10705616/table-name-as-a-postgresql-function-parameter/10711349#10711349)
|
PostgreSQL: Create table if not exists AS
|
[
"",
"sql",
"postgresql",
"create-table",
""
] |
I have a table variable which I use to collect records from several tables, which have to be processed later on in my query.
It looks a bit like this:
```
DECLARE @Prices TABLE (
ProductId INT,
Price MONEY,
Fee INT,
Discount INT,
IsSpecialPrice BIT
)
```
Now, after gathering price-related records from several tables (about 400K records), I have to delete just a few records. For the products for which I have at least 2 records (one or more with IsSpecialPrice = 0 and one with IsSpecialPrice = 1), I have to delete all records where IsSpecialPrice = 0.
Now using a DELETE WHERE IN or DELETE WHERE EXISTS takes way too long to get rid of these few records. So, I'm looking for a faster type of query here.
Right now this is what I use:
```
DELETE P1
FROM @Prices P1
WHERE P1.IsSpecialPrice = 0 AND EXISTS (SELECT P2.ProductId FROM @Prices P2 WHERE P2.ProductId = P1.ProductId AND P2.IsSpecialPrice = 1)
```
I tried both WHERE IN and WHERE EXISTS, but both are just as slow.
|
what about a join?
```
DELETE P1
FROM @Prices P1 join (
SELECT DISTINCT ProductId
FROM @Prices
WHERE IsSpecialPrice = 1
) P2 on P1.ProductId = P2.ProductId
WHERE P1.IsSpecialPrice = 0
```
I made a test on a table with 500k rows and it completed in 2 seconds, deleting 90k rows; that's a huge improvement compared to the never-ending `exists`.
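The semantics of this delete can be checked on a tiny sample. A sketch using Python's `sqlite3` (SQLite has no joined DELETE, so the equivalent IN form is used; the rows are made up):

```python
import sqlite3

# Product 1 has both a normal and a special price -> its normal row goes.
# Products 2 and 3 each have only one kind of row -> they stay.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Prices (ProductId INT, Price REAL, IsSpecialPrice INT)")
con.executemany("INSERT INTO Prices VALUES (?,?,?)", [
    (1, 9.99, 0), (1, 7.99, 1),
    (2, 5.00, 0),
    (3, 4.00, 1),
])
con.execute("""
    DELETE FROM Prices
    WHERE IsSpecialPrice = 0
      AND ProductId IN (SELECT DISTINCT ProductId
                        FROM Prices WHERE IsSpecialPrice = 1)
""")
rows = con.execute(
    "SELECT ProductId, IsSpecialPrice FROM Prices ORDER BY ProductId").fetchall()
print(rows)  # [(1, 1), (2, 0), (3, 1)]
```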
|
This is your query:
```
DELETE P1
FROM @Prices P1
WHERE P1.IsSpecialPrice = 0 AND
EXISTS (SELECT 1
FROM @Prices P2
WHERE P2.ProductId = P1.ProductId AND P2.IsSpecialPrice = 1
);
```
The normal way to speed this up would be to add indexes: `@Prices(IsSpecialPrice)` and `@Prices(ProductId, IsSpecialPrice)`. Alas, you cannot add indexes on table variables unless you are using SQL Server 2014 (new feature).
An alternative would be to store this in an explicit temporary table and add the indexes on that table. So, use `#Prices` rather than `@Prices` for the table.
|
Faster SQL DELETE WHERE IN/EXISTS needed
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |