| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
```
CommentID Name Replies OrderID
313 Ed dlfkndl sdknldgf dlffdg 496
313 James sdsflsdf snflsf sfnslf sfnklsdf 499
313 Jeff sdsflsdf snflsf sfnslf sfnklsdf 500
313 Alan sdsflsdf snflsf sfnslf sfnklsdf 501
313 William sdflksnfdlsk sdfknslnf slfnks 503
```
I have the sample table above, which is in ascending order by [OrderID]. I want to fetch the last
3 rows of the table, also in ascending order, as shown below.
```
CommentID Name Replies OrderID
313 Jeff sdsflsdf snflsf sfnslf sfnklsdf 500
313 Alan sdsflsdf snflsf sfnslf sfnklsdf 501
313 William sdflksnfdlsk sdfknslnf slfnks 503
```
What is the exact syntax to do this in T-SQL? Thanks...
I tried the following, but it still does not fetch the last 3 rows in ascending order.
```
SqlCommand cmd = new SqlCommand("SELECT * FROM [RepTab] WHERE [OrderID] > (SELECT MAX([OrderID]) - 3 FROM [RepTab] WHERE [CommentID]='" + Id + "') ", con);
```
|
You can also write the query using `Common Table Expression` as:
```
With CTE as
( select row_number() over ( partition by CommentID order by OrderID desc) as rownum,
CommentID,
Name,
Replies,
OrderID
From reptab
)
select CommentID,
Name,
Replies,
OrderID
from CTE
where rownum <=3
and CommentID = 313
order by OrderID asc
```
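The CTE approach can be sanity-checked outside SQL Server; here is a minimal sketch using Python's sqlite3 (which also supports `ROW_NUMBER`), with the sample data from the question:

```python
import sqlite3

# Miniature stand-in for the RepTab table from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE RepTab (CommentID INT, Name TEXT, OrderID INT)")
con.executemany(
    "INSERT INTO RepTab VALUES (313, ?, ?)",
    [("Ed", 496), ("James", 499), ("Jeff", 500), ("Alan", 501), ("William", 503)],
)

# Number rows per comment from newest to oldest, keep the top 3,
# then present them in ascending OrderID order.
rows = con.execute("""
    WITH CTE AS (
        SELECT CommentID, Name, OrderID,
               ROW_NUMBER() OVER (PARTITION BY CommentID
                                  ORDER BY OrderID DESC) AS rownum
        FROM RepTab
    )
    SELECT Name, OrderID FROM CTE
    WHERE rownum <= 3 AND CommentID = 313
    ORDER BY OrderID ASC
""").fetchall()
print(rows)  # [('Jeff', 500), ('Alan', 501), ('William', 503)]
```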
|
Select the bottom 3 records in a subquery ordered descending, then order the outer query ascending.
```
Select *
from
(select top 3 orderid, commentid, replies, name
From
[RepTab]
Order by orderid desc
) t
Order by orderid
```
## [fiddle](http://sqlfiddle.com/#!3/93692/1)
|
fetching last 3 rows in ascending order with t-sql?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have an SP that has a parameter @NumeroTransferencia int.
I need to select rows that match that variable when it is not null.
But when it's null, I have to return the rows which have H.CodigoTransferencia null.
I tried the following but didn't get a good result:
```
SELECT
*
FROM RUTEO.HOJADERUTA H
WHERE
(H.CodigoTransferencia IS NULL OR H.CodigoTransferencia = @NumeroTransferencia)
```
Thanks!
|
```
SELECT
*
FROM RUTEO.HOJADERUTA H
WHERE
((H.CodigoTransferencia IS NULL AND @NumeroTransferencia IS NULL) OR H.CodigoTransferencia = @NumeroTransferencia)
```
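The NULL-safe predicate can be illustrated with a small sketch in Python's sqlite3, using `?` placeholders in place of @NumeroTransferencia and made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE HojaDeRuta (id INT, CodigoTransferencia INT)")
con.executemany("INSERT INTO HojaDeRuta VALUES (?, ?)",
                [(1, 10), (2, None), (3, 10), (4, 20)])

def rows_for(numero):
    # A NULL parameter matches only NULL rows; a value matches only that value.
    return [r[0] for r in con.execute("""
        SELECT id FROM HojaDeRuta H
        WHERE (H.CodigoTransferencia IS NULL AND ? IS NULL)
           OR H.CodigoTransferencia = ?
        ORDER BY id
    """, (numero, numero))]

print(rows_for(10))    # [1, 3]
print(rows_for(None))  # [2]
```

A plain `H.CodigoTransferencia = ?` would return nothing for the NULL case, because `NULL = NULL` is not true in SQL.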
|
You can try this
```
SELECT
*
FROM RUTEO.HOJADERUTA H
WHERE
(@NumeroTransferencia IS NULL
AND H.CodigoTransferencia IS NULL)
OR (@NumeroTransferencia IS NOT NULL
AND H.CodigoTransferencia = @NumeroTransferencia)
```
|
SQL Server: WHERE clause with null comparison
|
[
"",
"sql",
"null",
"case",
"where-clause",
""
] |
I am looking at a checklist that contains several unique checklist items. I only want to select records (ID, name, etc) of those who have NO null values in the checklist date field.
```
Select distinct sp.id as "ID",
SP.LAST_NAME as "Last",
SP.FIRST_NAME as "First",
SA.TERM_CODE_ENTRY as "Term",
SA.APST_CODE as "Status"
FROM SPRITE SP
JOIN SARC CK
on sp.sprite_pidm = CK.SARC_pidm
JOIN ADAP SA
on sp.sprite_pidm = sa.adap_PIDM
WHERE
Sp.sprite_change_ind is null
and SA.ADAP_TERM_CODE_ENTRY = '201480'
and SA.ADAP_APST_CODE = 'I'
and SA.ADAP_APPL_NO = CK.SARC_APPL_NO
-- where there are no null records - all records should be not null
and CK.SARC_RECEIVE_DATE is not null
```
Currently, it is selecting those who have at least one not null checklist date. This means it is still selecting records of those who have null dates for some checklist items.
How do I tell it to select only where all of the checklist item `SARC_RECEIVE_DATE` values are non-null?
Simplified Example:
```
ID Name Checklist Items DateReceived Other data...
01 Sherry missing item 1
01 Sherry missing item 2 02-02-14
05 Mike missing item 8 02-03-13
17 Carl missing item 2
17 Carl missing item 3
28 Luke missing item 3 04-03-13
28 Luke missing item 5 04-03-13
28 Luke missing item 8 04-03-13
The results should be
05 Mike (other data...)
28 Luke (other data...)
Instead, it is returning
01 Sherry (other data...)
05 Mike (other data...)
28 Luke (other data...)
```
|
```
WITH MYVIEW AS
(
Select sp.id as ID,
SP.LAST_NAME ,
SP.FIRST_NAME,
SA.TERM_CODE_ENTRY,
SA.APST_CODE,
CK.SARC_RECEIVE_DATE As RECEIVED_DATE
FROM SPRITE SP
JOIN SARC CK
on sp.sprite_pidm = CK.SARC_pidm
JOIN ADAP SA
on sp.sprite_pidm = sa.adap_PIDM
WHERE
Sp.sprite_change_ind is null
and SA.ADAP_TERM_CODE_ENTRY = '201480'
and SA.ADAP_APST_CODE = 'I'
and SA.ADAP_APPL_NO = CK.SARC_APPL_NO
)
SELECT ID as "ID",
MAX(LAST_NAME) as "Last",
MAX(FIRST_NAME) as "First",
MAX(TERM_CODE_ENTRY) as "Term",
MAX(APST_CODE) as "Status"
FROM MYVIEW
GROUP BY id
HAVING SUM(NVL2(RECEIVED_DATE,0,1)) = 0;
```
|
You wouldn't do it that way. Instead, use an analytic function to count the number of NULL values and choose the ones that don't have any. Here is the idea:
```
with t as (
<your query here>
)
select *
from (select t.*, sum(case when SARC_RECEIVE_DATE is null then 1 else 0 end) over (partition by id) as numNulls
from t
) t
where numNulls = 0;
```
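A sketch of this idea in Python's sqlite3 (analytic/window functions work the same way there), with a boiled-down version of the checklist data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE checklist (id INT, name TEXT, received TEXT)")
con.executemany("INSERT INTO checklist VALUES (?, ?, ?)", [
    (1, "Sherry", None), (1, "Sherry", "02-02-14"),
    (5, "Mike", "02-03-13"),
    (28, "Luke", "04-03-13"), (28, "Luke", "04-03-13"),
])

# Count NULL dates per id with a window SUM, then keep only ids with none.
rows = con.execute("""
    SELECT DISTINCT id, name
    FROM (SELECT t.*,
                 SUM(CASE WHEN received IS NULL THEN 1 ELSE 0 END)
                     OVER (PARTITION BY id) AS numNulls
          FROM checklist t)
    WHERE numNulls = 0
    ORDER BY id
""").fetchall()
print(rows)  # [(5, 'Mike'), (28, 'Luke')]
```

Sherry is excluded because one of her checklist rows has a NULL date, even though another row does not.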
|
oracle sql - Select ONLY if there are NO null values in that column
|
[
"",
"sql",
"oracle",
"notnull",
""
] |
I want to retrieve a full table with some of the values sorted. The sorted values should all appear before the unsorted values. I thought I could pull this off with a UNION, but ORDER BY is only valid after unioning the tables, and my data isn't set up in a way that makes that useful here. I want rows with a column value of 0-6 to show up sorted in DESC order, and then the rest of the results after that.

Is there some way to specify a condition in the ORDER BY clause? I saw something that looked close to what I wanted, but I couldn't get the equality condition working in SQL. I'm going to try to build a query using CASE expressions, but I'm not sure if there's a way to specify a case like currentValue <= 6. If anyone has any suggestions, that would be awesome.
|
You could do something like this:
```
order by (case when currentValue <= 6 then 1 else 0 end) desc,
(case when currentValue <= 6 then column end) desc
```
The first puts the values you care about first. The second puts them in sorted order. The rest will be ordered arbitrarily.
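A quick illustration in Python's sqlite3, with a hypothetical single-column table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INT)")
con.executemany("INSERT INTO t VALUES (?)", [(9,), (2,), (6,), (11,), (0,), (4,)])

# Rows with v <= 6 come first, sorted descending; the rest follow
# in arbitrary order (their second sort key is NULL).
rows = [r[0] for r in con.execute("""
    SELECT v FROM t
    ORDER BY (CASE WHEN v <= 6 THEN 1 ELSE 0 END) DESC,
             (CASE WHEN v <= 6 THEN v END) DESC
""")]
print(rows[:4])  # [6, 4, 2, 0]
```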
|
Try this:
```
SELECT *
FROM yourdata
ORDER BY CASE WHEN yourColumn BETWEEN 0 AND 6 THEN yourColumn ELSE -1 End Desc
```
|
SQL Order By Except When You Don't
|
[
"",
"sql",
"sql-order-by",
""
] |
How do I find the number of Sundays in a month?
Please help me with this.
Number of Sundays: 4 for the current month, and I need to subtract that count from the number of days in the month:
days in the month - Sundays.
If I pass from and to dates, I need to get the count of Sundays for that period.
Many thanks for your help.
Sunitha.
|
Try this:
```
select to_char(last_day(sysdate),'dd') -
((next_day(last_day(trunc(sysdate)),'sunday')-7
-next_day(trunc(sysdate,'mm')-1,'sunday'))/7+1) as result
from dual
```
For count dates between a period you can do this:
```
SELECT (next_day(TRUNC(TO_DATE('28/09/2014','DD/MM/YYYY')),'sunday') -
next_day((trunc(TO_DATE('13/09/2014','DD/MM/YYYY'))),'sunday')-7)/7+1 AS RESULT
FROM DUAL
```
|
If you have from and to dates, you should first generate all the dates between them using a hierarchical query, and then filter only the Sundays from them.
```
select count(1)
from (
select start_date + level - 1 as dates
from dual
connect by start_date + level - 1 <= end_date
)
where to_char(dates,'fmday') = 'sunday';
```
If you want to calculate the number of non-Sundays for the current month, find the first and last day of the month and follow the same procedure as above.
```
select count(1)
from (
select start_date + level - 1 as dates
from (
select trunc(sysdate,'month') as start_date,
last_day(sysdate) as end_date
from dual
)
connect by start_date + level - 1 <= end_date
)
where to_char(dates,'fmday') != 'sunday';
```
[Sqlfiddle](http://sqlfiddle.com/#!4/d41d8/35401).
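The same counts can be cross-checked outside the database; a small Python sketch of the date arithmetic (the function names here are made up for illustration):

```python
from datetime import date, timedelta
import calendar

def sundays_between(start, end):
    # Count Sundays in the closed interval [start, end].
    # weekday(): Monday=0 ... Sunday=6.
    first_sunday = start + timedelta(days=(6 - start.weekday()) % 7)
    if first_sunday > end:
        return 0
    return (end - first_sunday).days // 7 + 1

def non_sundays_in_month(year, month):
    days = calendar.monthrange(year, month)[1]
    return days - sundays_between(date(year, month, 1), date(year, month, days))

# 13/09/2014 .. 28/09/2014 contains the Sundays 14, 21 and 28.
print(sundays_between(date(2014, 9, 13), date(2014, 9, 28)))  # 3
# September 2014 has 30 days and 4 Sundays.
print(non_sundays_in_month(2014, 9))  # 26
```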
|
how to subtract the Sundays from no of days in a month
|
[
"",
"sql",
"oracle",
""
] |
I want to sort records as follows:
1. Future/present events ASC
2. Past events DESC
So first today, then tomorrow, until there are no more future records.
Then I want to show the past events, but the latest first.
So far I've found a solution for the first point:
```
ORDER BY (
CASE WHEN ev.StartDate < CURDATE()
THEN 1
ELSE 0
END) ASC, ev.StartDate ASC
```
But the issue with this query is that all posts are ordered ASC, including the past posts (which need to be DESC).
How do I combine this in the `CASE`?
|
You need a slightly more complex `order by`:
```
ORDER BY (ev.StartDate < CURDATE()),
(case when ev.StartDate > CURDATE() then ev.StartDate end) ASC,
(case when ev.StartDate < CURDATE() then ev.StartDate end) DESC
```
You could actually do this with two clauses:
```
ORDER BY greatest(ev.StartDate, CURDATE()) DESC,
least(ev.StartDate, CURDATE()) ASC
```
But I think the first version is clearer in its intention.
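A sketch of the first version in Python's sqlite3, with ISO date strings standing in for real dates and a fixed "today" instead of CURDATE() (today is counted as future here, via `>=`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ev (StartDate TEXT)")
con.executemany("INSERT INTO ev VALUES (?)",
                [("2014-01-01",), ("2014-12-25",), ("2014-06-15",), ("2013-05-05",)])

today = "2014-06-15"  # stand-in for CURDATE()
rows = [r[0] for r in con.execute("""
    SELECT StartDate FROM ev
    ORDER BY (StartDate < :d),
             (CASE WHEN StartDate >= :d THEN StartDate END) ASC,
             (CASE WHEN StartDate < :d THEN StartDate END) DESC
""", {"d": today})]
print(rows)  # ['2014-06-15', '2014-12-25', '2014-01-01', '2013-05-05']
```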
|
I find this the most straightforward, without needing complex conditional syntax:
the first clause ranks future before past, the second orders the future ASC, the third orders the past DESC
(the second and third are interchangeable)
```
ORDER BY
(date < CURDATE()) ASC,
 greatest(date, CURDATE()) ASC,
 least(date, CURDATE()) DESC
```
|
Sort records by; future ASC, past DESC
|
[
"",
"mysql",
"sql",
""
] |
I have a database in which there is a column named "codes". When this column of codes is displayed in gridview I would like to modify the contents of the column. For example:
In Database if the codes column contains A,B and C values, then in gridview it should show Abeloth(instead of A), Bollux (instead of B), Chewbacca (instead of C).
Thank you
Update:
I have added the following code; what am I missing?
```
Protected Sub GrvPassengerReport_RowDataBound(sender As Object, e As GridViewRowEventArgs) Handles GrvPassengerReport.RowDataBound
If e.Row.RowType = DataControlRowType.DataRow Then
If e.Row.Cells(8).Text = "C" Then
e.Row.Cells(8).Text = "Chewbacca"
End If
End If
End Sub
```
|
I was able to solve this issue by using the GridView.RowDataBound event. A few articles that helped me are
1) [MSDN](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.gridview.rowdatabound(v=vs.110).aspx)
2) [Question on StackOverflow](https://stackoverflow.com/questions/11671905/how-to-change-in-a-gridview-on-rowdatabound-event-the-value-of-an-eval-field)
Here is the code
```
Protected Sub GrvPassengerReport_RowDataBound(sender As Object, e As GridViewRowEventArgs) Handles GrvPassengerReport.RowDataBound
If e.Row.RowType = DataControlRowType.DataRow Then
Dim labelone As System.Web.UI.WebControls.Label
labelone = e.Row.FindControl("lblcStopType")
If labelone.Text = "C" Then
labelone.Text = "Chewbaca"
End If
End If
End Sub
```
|
One way to achieve this is to use the `RowDataBound` event of the gridview control and then decide what values you'd rather display in the gridview with respect to the actual values in the database table.
|
Modify data from database to gridview
|
[
"",
"sql",
"vb.net",
"gridview",
""
] |
I have a sample table like below where Course Completion Status of a Student is being stored:
```
Create Table StudentCourseCompletionStatus
(
CourseCompletionID int primary key identity(1,1),
StudentID int not null,
AlgorithmCourseStatus nvarchar(30),
DatabaseCourseStatus nvarchar(30),
NetworkingCourseStatus nvarchar(30),
MathematicsCourseStatus nvarchar(30),
ProgrammingCourseStatus nvarchar(30)
)
Insert into StudentCourseCompletionStatus Values (1, 'In Progress', 'In Progress', 'Not Started', 'Completed', 'Completed')
Insert into StudentCourseCompletionStatus Values (2, 'Not Started', 'In Progress', 'Not Started', 'Not Applicable', 'Completed')
```
Now as part of normalizing the schema I have created two other tables - `CourseStatusType` and `Status` for storing the Course Status names and Status.
```
Create Table CourseStatusType
(
CourseStatusTypeID int primary key identity(1,1),
CourseStatusType nvarchar(100) not null
)
Insert into CourseStatusType Values ('AlgorithmCourseStatus')
Insert into CourseStatusType Values ('DatabaseCourseStatus')
Insert into CourseStatusType Values ('NetworkingCourseStatus')
Insert into CourseStatusType Values ('MathematicsCourseStatus')
Insert into CourseStatusType Values ('ProgrammingCourseStatus')
Insert into CourseStatusType Values ('OperatingSystemsCourseStatus')
Insert into CourseStatusType Values ('CompilerCourseStatus')
Create Table Status
(
StatusID int primary key identity(1,1),
StatusName nvarchar (100) not null
)
Insert into Status Values ('Completed')
Insert into Status Values ('Not Started')
Insert into Status Values ('In Progress')
Insert into Status Values ('Not Applicable')
```
The modified table is as below:
```
Create Table StudentCourseCompletionStatus1
(
CourseCompletionID int primary key identity(1,1),
StudentID int not null,
CourseStatusTypeID int not null CONSTRAINT [FK_StudentCourseCompletionStatus1_CourseStatusType] FOREIGN KEY (CourseStatusTypeID) REFERENCES dbo.CourseStatusType (CourseStatusTypeID),
StatusID int not null CONSTRAINT [FK_StudentCourseCompletionStatus1_Status] FOREIGN KEY (StatusID) REFERENCES Status (StatusID),
)
```
I have a few questions on this:
1. Is this the correct way to normalize it? The old table made it easy to get at the data - I could store a student's course status in a single row, but now 5 rows are required. Is there a better way to do it?
2. Moving the data from the old table to this new table does not seem to be an easy task. Can I achieve this using a query, or do I have to do it manually?
Any help is appreciated.
|
You could also consider storing the results in a flat table like this:
## studentID,courseID,status
* 1,1,"completed"
* 1,2,"not started"
* 2,1,"not started"
* 2,3,"in progress"
you will also need additional Courses table like this
## courserId,courseName
* 1, math
* 2, programming
* 3, networking
and a students table
## students
* 1 "john smith"
* 2 "perry clam"
* 3 "john deere"
etc. You could also optionally create a status table to store the distinct status strings and refer to their PK instead of the strings
## studentID,courseID,status
* 1,1,1
* 1,2,2
* 2,1,2
* 2,3,3
... etc
and status table
## id,status
* 1,"completed"
* 2,"not started"
* 3,"in progress"
The beauty of this representation is that it is quite easy to filter and aggregate the data; i.e. it is easy to query which subjects a particular person has completed, how many subjects are completed by an average student, etc. These things are much more difficult in a columnar design like yours. You can also easily add new subjects without needing to adapt your tables or even your queries; they will just work.
You can also always use SQL's PIVOT query to get it into a familiar columnar presentation like
name, mathstatus, programmingstatus, networkingstatus, etc.
|
> but now 5 rows are required
No, it's still just one row. That row simply contains identifiers for values stored in other tables.
There are pros and cons to this. One of the main reasons to normalize in this way is to protect the integrity of the data. If a column is just a string then anything can be stored there. But if there's a foreign key relationship to a table containing a finite set of values then *only* one of those options can be stored there. Additionally, if you ever want to change the text of an option or add/remove options, you do it in a centralized place.
> Moving the data from the old table to this new table seems to be not an easy task.
No problem at all. Create your new numeric columns on the data table and populate them with the identifiers of the lookup table records associated with each data table record. If they're nullable, you can make them foreign keys right away. If they're not nullable then you need to populate them before you can make them foreign keys. Once you've verified that the data is correct, remove the old de-normalized columns. Done.
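A minimal sketch of such a migration in Python's sqlite3, using a cut-down two-course version of the schema (table and column names shortened here for brevity; they are not the originals):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Old (StudentID INT, AlgorithmStatus TEXT, DatabaseStatus TEXT);
    INSERT INTO Old VALUES (1, 'Completed', 'In Progress'),
                           (2, 'Not Started', 'Completed');
    CREATE TABLE CourseStatusType (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO CourseStatusType (name) VALUES
        ('AlgorithmStatus'), ('DatabaseStatus');
    CREATE TABLE Status (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO Status (name) VALUES
        ('Completed'), ('Not Started'), ('In Progress');
    CREATE TABLE New (StudentID INT, CourseStatusTypeID INT, StatusID INT);
    -- Unpivot each status column into a row, mapping names to lookup ids.
    INSERT INTO New
    SELECT u.StudentID, c.id, s.id
    FROM (SELECT StudentID, 'AlgorithmStatus' AS course, AlgorithmStatus AS st FROM Old
          UNION ALL
          SELECT StudentID, 'DatabaseStatus', DatabaseStatus FROM Old) u
    JOIN CourseStatusType c ON c.name = u.course
    JOIN Status s ON s.name = u.st;
""")
rows = con.execute("""SELECT StudentID, CourseStatusTypeID, StatusID
                      FROM New ORDER BY StudentID, CourseStatusTypeID""").fetchall()
print(rows)  # [(1, 1, 1), (1, 2, 3), (2, 1, 2), (2, 2, 1)]
```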
|
Database Normalization using Foreign Key
|
[
"",
"sql",
"sql-server",
"database-normalization",
""
] |
I have an order table that contains an order number, customer id, and agent id. Then there's a customer table with an id and an agent table with an id.
I need to get all the customer ids that have an order from both agent id 'a03' and agent id 'a05'.
Right now I'm trying to get all the customer ids, but only the ones that appear both in a list of customer ids from agent 'a03' and in a list from agent 'a05'.
Chaining the second WHERE IN doesn't work.
```
select customer.cid
from customer
where customer.cid in
(select order1.cid
from order1
inner join agent on order1.aid=agent.aid
where agent.aid="a05")
and
(select order1.cid
from order1
inner join agent on order1.aid=agent.aid
where agent.aid="a03");
```
|
It sounds like you want to return only those customers who appear for both agents. If that is the case, then you can use group by:
```
select c.cid
from customer c
join order1 o on c.cid = o.cid
join agent a on o.aid = a.aid
where a.aid in ('a05','a03')
group by c.cid
having count(distinct a.aid) = 2
```
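A quick check of the group-by approach in Python's sqlite3, with a made-up order1 table (the joins to customer and agent are omitted here since cid and aid already live on order1):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE order1 (cid INT, aid TEXT)")
con.executemany("INSERT INTO order1 VALUES (?, ?)", [
    (1, "a03"), (1, "a05"),   # customer 1 ordered from both agents
    (2, "a03"), (2, "a03"),   # customer 2 only from a03
    (3, "a05"),               # customer 3 only from a05
])

# Filter to the two agents, then require both distinct agent ids per customer.
rows = [r[0] for r in con.execute("""
    SELECT cid FROM order1
    WHERE aid IN ('a03', 'a05')
    GROUP BY cid
    HAVING COUNT(DISTINCT aid) = 2
""")]
print(rows)  # [1]
```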
|
You need to replace `and` with `or customer.cid in`:
```
select customer.cid
from customer
where customer.cid in
(select order1.cid
from order1
inner join agent on order1.aid=agent.aid
where agent.aid="a05")
or customer.cid in
(select order1.cid
from order1
inner join agent on order1.aid=agent.aid
where agent.aid="a03");
```
Or shorter version:
```
select customer.cid
from customer
where customer.cid in
(select order1.cid
from order1
inner join agent on order1.aid=agent.aid
where agent.aid="a05" or agent.aid="a03")
```
|
MySQL chaining WHERE IN
|
[
"",
"mysql",
"sql",
""
] |
Is there a way to use intersect without selecting distinct values only? Something like `INTERSECT ALL`.
For example, consider table A and B
```
A --> 1, 1, 1, 2, 3, 4
B --> 1, 1, 2
```
Would result in
```
Result --> 1, 1, 2
```
**EDIT**
I think this [link](http://www.freelists.org/post/oracle-l/Except-Minus-all-and-Intersect-all,1) explains well what I want. This [other link](http://books.google.com.br/books?id=_BycAgAAQBAJ&pg=PT343&lpg=PT343&dq=%22intersect%20all%22%20oracle&source=bl&ots=P_VXpC7QGy&sig=ChZiB0wDmcarYTrkPAxOEAz2W_M&hl=pt-BR&sa=X&ei=C0wbVKiiAsrAggTl7YHADQ&ved=0CDYQ6AEwAzgK#v=onepage&q=%22intersect%20all%22%20oracle&f=false) is also intersting to understand the question. Or [this other link](http://www.dbtalks.com/uploadfile/dkverma87/intersect-intersect-all-operators-in-db2/) explains event better.
**EDIT 2**
Suppose the tables:
Table A
```
A      | B  | C | D  | E
-------+----+---+----+----
Car    | 10 | 1 | OK | -1
Car    | 10 | 1 | OK | -1
Car    | 10 | 1 | OK | -1
House  | 10 | 1 | NO | -5
Monkey | 15 | 1 | OK | -1
Dog    |  3 | 1 | OK | -1
```
Table B
```
A   | B  | C | D  | E
----+----+---+----+----
Car | 10 | 1 | OK | -1
Car | 10 | 1 | OK | -1
Car | 15 | 1 | OK | -1
Dog |  3 | 1 | OK | -1
```
The answer for intersect (`select * from A INTERSECT select * from B`) would be:
```
A   | B  | C | D  | E
----+----+---+----+----
Car | 10 | 1 | OK | -1
Dog |  3 | 1 | OK | -1
```
*Because it takes only distinct values*. What I want is taking common rows, just like:
```
A   | B  | C | D  | E
----+----+---+----+----
Car | 10 | 1 | OK | -1
Car | 10 | 1 | OK | -1
Dog |  3 | 1 | OK | -1
```
Observe that I don't need to know what to link on (the connection is positional, just like `INTERSECT`). The ID would be something constructed using all columns (the link between the tables is all columns, based on their position).
|
In SQL Server, `INTERSECT` works on distinct rows only. If you want it to distinguish between duplicate rows, you will need to make the rows distinct. The only way to do so I can think of is to add another column and populate it with unique values per duplicate, but in such a way that the resulting rows would be matchable across different tables.
The problem, however, is that so far there is no universal syntax for that. For instance, you could use ROW\_NUMBER() to enumerate every duplicate, but you would have to write out its PARTITION BY clause for every case individually: there is no `PARTITION BY *`, not in SQL Server at least.
Anyway, for the purpose of illustration, here is what the ROW_NUMBER method would look like:
```
SELECT
A, B, C, D, E,
ROW_NUMBER() OVER (PARTITION BY A, B, C, D, E ORDER BY (SELECT 1))
FROM
dbo.A
INTERSECT
SELECT
A, B, C, D, E,
ROW_NUMBER() OVER (PARTITION BY A, B, C, D, E ORDER BY (SELECT 1))
FROM
dbo.B
;
```
As written above, the query would also return an extra column, the row number column, in the output. If you wanted to suppress it, you would need to make the query more complex:
```
SELECT
A, B, C, D, E
FROM
(
SELECT
A, B, C, D, E,
rn = ROW_NUMBER() OVER (PARTITION BY A, B, C, D, E ORDER BY (SELECT 1))
FROM
dbo.A
INTERSECT
SELECT
A, B, C, D, E,
rn = ROW_NUMBER() OVER (PARTITION BY A, B, C, D, E ORDER BY (SELECT 1))
FROM
dbo.B
) AS s
;
```
And just to clarify, when I said above there was no universal syntax, I meant you could not do it without resorting to dynamic SQL. With dynamic SQL, a great many things are possible but such a solution would be much more complex and, in my opinion, much less maintainable.
Again, to illustrate the point, this is an example of how you could solve it with dynamic SQL:
```
DECLARE
@table1 sysname,
@table2 sysname,
@columns nvarchar(max),
@sql nvarchar(max)
;
SET @table1 = 'dbo.A';
SET @table2 = 'dbo.B';
-- collecting the columns from one table only,
-- assuming the structures of both tables are identical
-- if the structures differ, declare and populate
-- @columns1 and @columns2 separately
SET @columns = STUFF(
(
SELECT
N', ' + QUOTENAME(name)
FROM
sys.columns
WHERE
object_id = OBJECT_ID(@table1)
FOR XML
PATH (''), TYPE
).value('text()[1]', 'nvarchar(max)'),
1,
2,
''
);
SET @sql =
N'SELECT ' + @columns + N'
FROM
(
SELECT
' + @columns + N',
ROW_NUMBER() OVER (PARTITION BY ' + @columns + N' ORDER BY (SELECT 1))
FROM
' + @table1 + N'
INTERSECT
SELECT
' + @columns + N',
ROW_NUMBER() OVER (PARTITION BY ' + @columns + N' ORDER BY (SELECT 1))
FROM
' + @table2 + N'
) AS s
';
EXECUTE sp_executesql @sql;
```
You can probably see now what I meant by "much more complex" at least.
|
```
SELECT
COLUMN1
FROM B
WHERE B.COLUMN1 IN
(SELECT COLUMN1
FROM A)
```
|
Intersect in SQL Server
|
[
"",
"sql",
"sql-server",
"intersect",
""
] |
I have a rather complex and time consuming query which returns a result set with two columns like this:
```
| ID1 | ID2
1 10
2 11
3 15
```
I want to have those results in one column:
```
| ID
1
10
2
11
3
15
```
Is this possible in (oracle) sql?
The (simplified) query looks like this:
```
SELECT adr.nad_nkt_id id1, firmenadr.nad_nkt_id id2 FROM nav_adressen adr,
nav_kontakte kontakt, nav_kontakte firma, nav_adressen firmenadr WHERE
kontakt.nkt_id = adr.nad_nkt_id AND
firma.nkt_id (+)= kontakt.nkt_nkt_id AND
firmenadr.nad_id (+)= firma.nkt_id AND
(firmenadr.nad_strasse like '%needle%' OR
adr.nad_strasse like '%needle%');
```
|
```
with source as (
SELECT adr.nad_nkt_id id1, firmenadr.nad_nkt_id id2 FROM nav_adressen adr,
nav_kontakte kontakt, nav_kontakte firma, nav_adressen firmenadr WHERE
kontakt.nkt_id = adr.nad_nkt_id AND
firma.nkt_id (+)= kontakt.nkt_nkt_id AND
firmenadr.nad_id (+)= firma.nkt_id AND
(firmenadr.nad_strasse like '%needle%' OR
adr.nad_strasse like '%needle%')
)
select decode(x.l, 1, a.id1, 2, a.id2) as id
from source a
cross join (select level as l from dual connect by level <=2) x
```
In this case you do not need to scan the result set twice. It might be faster than the union method.
|
You're after a `UNION`:
```
SELECT ID1
FROM YourTable
UNION
SELECT ID2
FROM YourTable
```
`UNION` will return distinct items, `UNION ALL` will return all items (and is therefore faster).
Given the amount of filtering/joining you're doing, it makes sense to do the filtering first into a temp table then run the `UNION`.
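A minimal illustration of the UNION approach in Python's sqlite3, with the result set from the question materialized as a table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id1 INT, id2 INT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10), (2, 11), (3, 15)])

# UNION ALL stacks both columns into one column;
# UNION would additionally deduplicate.
rows = [r[0] for r in con.execute("""
    SELECT id1 AS id FROM t
    UNION ALL
    SELECT id2 FROM t
    ORDER BY id
""")]
print(rows)  # [1, 2, 3, 10, 11, 15]
```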
|
How to make two rows out of one
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I want the OrderType for the min(Date) for each Name.
So, I want this:
```
Name Date OrderType
Alex 1/1/2014 Direct
Alex 9/15/2014 Distributor
Cindy 6/4/2014 Distributor
John 5/8/2014 Direct
John 2/14/2014 Distributor
```
to return this:
```
Name Date OrderType
Alex 1/1/2014 Direct
Cindy 6/4/2014 Distributor
John 2/14/2014 Distributor
```
|
We can get a row number based on the date for each `[Name]` and pick the record with the least date.
```
SELECT [T].*
FROM (
SELECT [Name]
, [DATE]
, [OrderType]
, ROW_NUMBER() OVER (PARTITION BY [Name] ORDER BY [Date]) AS [seq]
FROM [TableA]
) AS [T]
WHERE [T].[seq] = 1
```
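A sketch of this approach in Python's sqlite3, which also supports `ROW_NUMBER`; the sample dates are rewritten in ISO format so string ordering matches date ordering:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (name TEXT, d TEXT, otype TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("Alex", "2014-01-01", "Direct"), ("Alex", "2014-09-15", "Distributor"),
    ("Cindy", "2014-06-04", "Distributor"),
    ("John", "2014-05-08", "Direct"), ("John", "2014-02-14", "Distributor"),
])

# Number each name's orders by date and keep only the earliest.
rows = con.execute("""
    SELECT name, d, otype FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY name ORDER BY d) AS seq
        FROM orders
    ) WHERE seq = 1 ORDER BY name
""").fetchall()
print(rows)
# [('Alex', '2014-01-01', 'Direct'), ('Cindy', '2014-06-04', 'Distributor'),
#  ('John', '2014-02-14', 'Distributor')]
```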
|
I think you need to select the min date per person, then join back to the original table to get the type for that row.
Assuming your table is called tab and each person has only one order per date (otherwise the question is impossible), then something like:
```
Select t.name, t.date, t.ordertype
From tab t,
( select min (i.date) date, i.name from tab i group by i.name) t2
Where t.date = t2.date
And t.name = t2.name
```
Sorry I work mainly with mysql and Oracle not tsql so it is generic sql syntax.
|
sql select min, return value from different column in same row with grouping
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'd like to select each pair of two columns in a database, but only select the entry with the lowest price. As a result, I want to output the `id` and the `price` column.
But it does not work:
My table:
```
id | category | type | name | price
1;"car";"pkw";"honda";1000.00
2;"car";"pkw";"bmw";2000.00
```
SQL:
```
select min(price) price, id
from cartable
group by category, type
```
Result:
`Column "cartable.id" must be present in GROUP-BY clause or used in an aggregate function.`
|
If you want the entry with the lowest price, then calculate the lowest price and join the information back in:
```
select ct.*
from cartable ct join
(select category, type, min(price) as price
from cartable
group by category, type
) ctp
on ct.category = ctp.category and ct.type = ctp.type and ct.price = ctp.price;
```
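The join-back approach can be sketched in Python's sqlite3 with the two sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cartable (id INT, category TEXT, type TEXT, price REAL)")
con.executemany("INSERT INTO cartable VALUES (?, ?, ?, ?)", [
    (1, "car", "pkw", 1000.0), (2, "car", "pkw", 2000.0),
])

# Compute the minimum price per (category, type), then join back
# to recover the id of the matching row.
rows = con.execute("""
    SELECT ct.id, ct.price
    FROM cartable ct
    JOIN (SELECT category, type, MIN(price) AS price
          FROM cartable GROUP BY category, type) ctp
      ON ct.category = ctp.category AND ct.type = ctp.type
     AND ct.price = ctp.price
""").fetchall()
print(rows)  # [(1, 1000.0)]
```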
|
You can achieve this with EXISTS clause:
```
SELECT *
FROM cartable ct
WHERE
NOT EXISTS (
SELECT *
FROM cartable
    WHERE ct.type = type and ct.category = category and price < ct.price)
```
For a speed comparison you can try this:
```
SELECT DISTINCT ON (type, category) id, price
FROM cartable
ORDER BY type, category, price ASC
```
|
How to select records with minimum price per group
|
[
"",
"sql",
"database",
"postgresql",
"greatest-n-per-group",
""
] |
I'm new to this so please bear with me.
I'm writing a query where I need to count the number of rows with two specific values.
I have used the following to get counts of the different values in one field, but I need those counts only when another field is set to a specific value. I pulled the following from a previous question on this site:
```
COALESCE(count(case when CW.MAINJOBROLE = 2 THEN 1 end),0) as ALLJOBROLES_2,
coalesce(count(case when CW.MAINJOBROLE = 3 then 1 end), 0) as ALLJOBROLES_3,
coalesce(count(case when CW.MAINJOBROLE = 4 then 1 end), 0) as ALLJOBROLES_4,
coalesce(count(case when CW.MAINJOBROLE = 7 then 1 end), 0) as ALLJOBROLES_7,
coalesce(count(case when CW.MAINJOBROLE = 8 then 1 end), 0) as ALLJOBROLES_8,
coalesce(count(case when CW.MAINJOBROLE = 23 then 1 end), 0) as ALLJOBROLES_23,
coalesce(count(case when CW.MAINJOBROLE = 24 then 1 end), 0) as ALLJOBROLES_24,
coalesce(count(case when CW.MAINJOBROLE = 25 then 1 end), 0) as ALLJOBROLES_25
```
As part of a larger query, I want to do the above only if `CW.EMPLSTATUS = 1`
|
You can just add the condition into each conditional `case` expression:
```
COALESCE(count(case when CW.MAINJOBROLE = 2 and CW.EMPLSTATUS = 1 THEN 1 end),0) as ALLJOBROLES_2,
```
By the way, the `COALESCE()` should be unnecessary. `COUNT()` will return `0` if there are no matches.
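A quick demonstration of conditional counting in Python's sqlite3 with made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cw (MAINJOBROLE INT, EMPLSTATUS INT)")
con.executemany("INSERT INTO cw VALUES (?, ?)", [
    (2, 1), (2, 1), (2, 0),   # two role-2 rows with status 1
    (3, 1), (3, 0),           # one role-3 row with status 1
])

# Each COUNT(CASE ...) only counts rows meeting BOTH conditions;
# rows failing either condition yield NULL, which COUNT ignores.
row = con.execute("""
    SELECT COUNT(CASE WHEN MAINJOBROLE = 2 AND EMPLSTATUS = 1 THEN 1 END),
           COUNT(CASE WHEN MAINJOBROLE = 3 AND EMPLSTATUS = 1 THEN 1 END)
    FROM cw
""").fetchone()
print(row)  # (2, 1)
```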
|
You will have to surround each expression with another CASE WHEN construct:
```
CASE WHEN CW.EMPLSTATUS = 1 THEN
count(case when CW.MAINJOBROLE = 2 THEN 1 end)
ELSE
NULL
END as ALLJOBROLES_2,
CASE WHEN CW.EMPLSTATUS = 1 THEN
count(case when CW.MAINJOBROLE = 3 THEN 1 end)
ELSE
NULL
END as ALLJOBROLES_3,
....
```
|
Bringing together coalesce, count, case and clauses
|
[
"",
"sql",
"coalesce",
""
] |
Are PL/SQL stored procedures transactions?
When a trigger, procedure or function runs, is it a transaction?
I usually end my procedures with `COMMIT` and when an error occurs I use `ROLLBACK`! Is that wrong?
I'm not asking about `START TRANSACTION` inside procedures; I want to know whether they are transactions.
Thank you.
|
`Are PL/SQL stored procedures transactions? When a trigger, procedure or function runs, is it a transaction?`
No. A transaction starts with the first DML encountered and ends when it encounters a COMMIT or ROLLBACK. A transaction can comprise many function and procedure calls and DML statements and trigger code. On the other hand, you can create a procedure that comprises of many transactions by issuing lots of COMMITs.
`I usually end my procedures with COMMIT and when an error occurs I use ROLLBACK! Is it wrong?`
Wrong is a strong word. Let's just say it's not a good practice. Making (packaged) functions and procedures is all about modularization: making reusable pieces of code. When a function/procedure contains ROLLBACK or COMMIT statements, it stops being reusable as it messes up the transaction of the caller. So it's better not to use ROLLBACK or COMMIT in your procedures and leave it to the topmost caller.
You could use SAVEPOINTS throughout your code which makes sure a single function or procedure doesn't leave open parts of a transaction. But for esthetical reasons I prefer to not use SAVEPOINTS. For me, it's just five lines of unnecessary code, because I know my caller function will handle the transaction just nicely.
Exception is when you create an autonomous procedure, which is by definition a single transaction and thus needs to end with a COMMIT.
**UPDATE**
Note that a RAISE\_APPLICATION\_ERROR or a RAISE [exception name] statement will also automatically rollback your PL/SQL block as a single atomic unit. Which is of course a desirable effect as it doesn't leave you with uncommitted changes.
```
SQL> create table mytable (id int)
2 /
Table created.
SQL> create procedure p
2 as
3 begin
4 insert into mytable values (2);
5 raise_application_error(-20000,'My exception');
6 end;
7 /
Procedure created.
SQL> select *
2 from mytable
3 /
no rows selected
SQL> insert into mytable values (1)
2 /
1 row created.
SQL> exec p
BEGIN p; END;
*
ERROR at line 1:
ORA-20000: My exception
ORA-06512: in "X.P", regel 5
ORA-06512: in regel 1
SQL> select *
2 from mytable
3 /
ID
----------
1
1 row selected.
```
|
Of course it's not a transaction by itself, and you don't need to issue COMMIT every time to save your work.
In Oracle, for example, all individual DML statements are atomic (i.e. they either succeed in full, or roll back any intermediate changes on the first failure), unless you use the EXCEPTIONS INTO option, which I won't go into here.
If you wish a group of statements to be treated as a single atomic transaction, you'd do something like this:
```
BEGIN
SAVEPOINT start_tran;
INSERT INTO .... ; -- first DML
UPDATE .... ; -- second DML
BEGIN ... END; -- some other work
UPDATE .... ; -- final DML
EXCEPTION
WHEN OTHERS THEN
ROLLBACK TO start_tran;
RAISE;
END;
```
That way, any exception will cause the statements in this block to be rolled back, but any statements that were run prior to this block will not be rolled back.
Note that I don't include a COMMIT - usually I prefer the calling process to issue the commit.
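The savepoint pattern can also be demonstrated from Python's sqlite3 (SQLite supports SAVEPOINT / ROLLBACK TO as well); this is just an illustration of the mechanics, not of Oracle semantics:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # manage transactions manually
con.execute("CREATE TABLE mytable (id INT)")

con.execute("BEGIN")
con.execute("INSERT INTO mytable VALUES (1)")
try:
    con.execute("SAVEPOINT start_tran")
    con.execute("INSERT INTO mytable VALUES (2)")
    raise RuntimeError("simulated failure")  # something goes wrong mid-block
except RuntimeError:
    # Undo only the work since the savepoint; the earlier insert survives.
    con.execute("ROLLBACK TO start_tran")
con.execute("COMMIT")

print(con.execute("SELECT id FROM mytable").fetchall())  # [(1,)]
```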
|
Are PL/SQL stored procedures transactions?
|
[
"",
"sql",
"database",
"oracle",
"stored-procedures",
"transactions",
""
] |
This is the first time I've tried including a row count within a select statement. I've tried the following, but including `COUNT()` of another table's column is apparently not allowed in the way I'm trying to do it.
How can I include a row count from another table in a select statement, mainly consisting of objects from the first table?
-Thanks
...
```
SELECT
Reports.ReportID,
EmployeeADcontext,
ReportName,
CreatedDate,
COUNT(Expenses.ExpID) AS ExpCount,
ReportTotal,
Status
FROM
[dbo].[Reports]
INNER JOIN
[dbo].[Expenses]
ON
[dbo].[Expenses].ReportID = [dbo].[Reports].ReportID
WHERE EmployeeADcontext = @rptEmployeeADcontext
```
|
You are missing your `GROUP BY`. Whenever you aggregate (`SUM`, `COUNT`, `MAX`, etc.) you need a `GROUP BY` clause that lists all selected fields except the aggregated ones. So your code should read:
```
SELECT
Reports.ReportID,
EmployeeADcontext,
ReportName,
CreatedDate,
COUNT(Expenses.ExpID) AS ExpCount,
ReportTotal,
Status
FROM
[dbo].[Reports]
INNER JOIN
[dbo].[Expenses]
ON
[dbo].[Expenses].ReportID = [dbo].[Reports].ReportID
WHERE EmployeeADcontext = @rptEmployeeADcontext
GROUP BY Reports.ReportID, EmployeeADcontext, ReportName, CreatedDate,
ReportTotal, Status
```
Here is some additional [documentation](http://msdn.microsoft.com/en-us/library/ms177673.aspx) on T-SQL `GROUP BY`.
|
You need a group by clause.
Add:
```
GROUP BY
Reports.ReportID,
EmployeeADcontext,
ReportName,
CreatedDate,
ReportTotal,
Status
```
|
How can I COUNT rows from another table using a SELECT statement when joining?
|
[
"",
"sql",
"t-sql",
"stored-procedures",
""
] |
Hi, I am new to SQL and stuck on a problem.
Below is a sample of my table. This is not the exact table but a sample of what I am trying to achieve.
```
Name Classification Hits
A A1 2
A A2 3
A A3 4
A A4 8
A A5 9
B B1 9
B B2 3
B B3 4
B B4 8
B B5 9
c c1 8
c c2 9
c c3 4
c c4 8
c c5 9
...
```
And I am looking for the result based on top Hits. For example:
```
Name Classification Hits
A A4 8
A A5 9
B B1 9
B B5 9
c c2 9
c c5 9
```
I have tried this query:
```
SELECT TOP (2) Name , Classification , Hits
FROM Table4
Group By Name , Classification , Hits
Order By Hits
```
But I am only getting two rows in total. What am I doing wrong here? Any suggestions?
|
You can use a CTE with the `Row_Number()` function
```
;WITH CTE AS(
SELECT Name,
Classification,
Hits,
Row_Number() OVER(Partition by name ORDER BY Hits DESC) AS RowNum
FROM Table4
)
SELECT Name,
Classification,
Hits
FROM CTE
WHERE RowNum <= 2
ORDER BY Name, Hits
```
[SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/6478e/4)
|
I'm working from memory so I'm not sure of the syntax, and there's probably a more efficient way to do this, but you'd want to do something like
```
;with rawdata as (
select Name, Classification, Hits,
Row_number() over (partition by Name order by Hits desc) as x
from Table4
select Name, Classification, Hits from rawdata where x < 3
```
|
Sql Getting Top 2 results in each classification
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
In one case, we have to store values for +infinity and -infinity in a Postgres DB.
What would be an appropriate value to use for each?
If there is no such value, then please suggest the best suitable alternative.
|
You can actually use `+infinity` and `-infinity` for `FLOAT4` and `FLOAT8` (i.e. `float` and `double precision`) data types, and for timestamps.
```
regress=> SELECT FLOAT8 '+infinity', FLOAT8 '-infinity';
float8 | float8
----------+-----------
Infinity | -Infinity
(1 row)
```
For other types, either use a separate column, use the minimum/maximum values for the type, or (where logically appropriate) use null.
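Timestamps accept the same special literals, which can be handy as open-ended range bounds. A quick check (any reasonably recent Postgres session):

```
-- 'infinity' works for timestamp types too, sorting after every finite value
SELECT TIMESTAMP 'infinity', TIMESTAMP '-infinity';

-- which makes it usable as an "until further notice" upper bound
SELECT now() < TIMESTAMP 'infinity';  -- true
```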
|
You can insert the string `'infinity'` or `'-infinity'` into **numeric/float/real/double precision** column types. However, this will raise an *out of range* error for **integer/bigint/smallint**.
* `'+infinity'` and `'infinity'` and `'Infinity'` are equivalent
* `'-infinity'` and `'-Infinity'` are equivalent
Some examples that work:
```
INSERT INTO table
(real_column)
VALUES ('-Infinity'::float);
```
or even
```
INSERT INTO table
(numeric_column)
VALUES ('infinity'::numeric);
```
or
```
INSERT INTO table
(float_column)
VALUES ('+infinity');
```
|
Appropriate values for -Infinity & Infinity in Postgres
|
[
"",
"sql",
"postgresql",
""
] |
My SQL Server database has many tables with a variety of data types in use. The current design has a standard datatype length for columns, e.g. `NVARCHAR` of 200, 1000 or MAX (this is a standard we have adopted); depending on what type of data I save, I use the datatype accordingly.
For a `NAME` column I believe it does not make sense to make it 200/1000, let alone MAX (I am not saying we did that). So, is it fine to make a field 200/1000 if you know for sure it will never hold more than 75 characters? Does it have any impact on query performance at all? What other problems/benefits are there to this approach?
Which way is better: use a standard datatype length irrespective of the data stored, or use a specific datatype length for the column?
Any help in this regard is highly appreciated
Thanks!!!
|
Generally I tend to use the smaller types whenever I can, but it's not always possible or convenient, and there are certainly trade-offs in each choice. For one, 200 doesn't seem a disproportionate amount, but if the data is guaranteed to be limited to 75 characters, a smaller type is perhaps a better option. I would analyze the pros and cons of using smaller sizes:
## Pros
* Helps prevent errors from user putting a larger text into controls.
* Also prevents possible malicious users if a bug appears due to long texts.
* The client application only has to handle strings it can actually receive, rather than the full 200 characters.
* Indexes on that column may be smaller, which helps improving performance.
## Cons
* Not "future-proof". Some day a name longer than 75 characters might appear, forcing an update to the DB and front-end application. Users' needs change over time.
* The UI must respond gracefully when the user enters longer text, either by rejecting it or by displaying a message.
* If you ever want to import data from other systems, those might have longer texts that you must handle.
* Breaking or changing company standards (200/1000/MAX) may not be easy at all, especially if the team has followed them for a long time.
Of course you must evaluate your exact situation and whether the change is worth it or not. From personal experience, I have had to balance all of those things.
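If, after weighing those trade-offs, you decide to tighten an existing column, the change itself is a single statement. The table and column names below are hypothetical, and this assumes SQL Server, as tagged:

```
-- check the longest existing value first; the ALTER fails
-- if any stored value exceeds the new length
SELECT MAX(LEN(Name)) FROM dbo.Person;

-- shrink an NVARCHAR(200) name column down to 75 characters
ALTER TABLE dbo.Person ALTER COLUMN Name NVARCHAR(75) NOT NULL;
```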
|
I always try to keep the data type and size realistic. I do think 200 is way too much, and why NVARCHAR? I think VARCHAR(50) or 75 would suffice. It does have an impact on the application if you were to create a class for it, because the application would have to reserve that byte size, requiring more memory. That is just a personal opinion; I'm no expert.
|
Database column data type
|
[
"",
"sql",
".net",
"sql-server",
"database",
""
] |
I have a table example like this:
```
date id status
01/01/2013 55555 high
01/01/2014 55555 low
01/01/2010 44444 high
01/01/2011 33333 low
```
I need to group by `id` and select the most recent date for each.
This is the result I want:
```
date id status
01/01/2014 55555 low
01/01/2010 44444 high
01/01/2011 33333 low
```
I do not care about the order of the rows.
|
You need to join your table with a subquery that finds the greatest date for each id:
```
select a.*
from your_table as a
inner join (
select id, max(date) as max_date
from your_table
group by id
) as b on a.id = b.id and a.date = b.max_date;
```
|
I think you will need a subquery to get the MAX(Date) and then inner join. Try this:
```
SELECT A.[Date], A.[Id], A.[Status]
FROM Table A
INNER JOIN(SELECT Id, MAX([Date]) AS MaxDate
FROM Table
GROUP BY [Id]) B ON
A.[Id] = B.[Id] AND
A.[Date] = B.[MaxDate]
```
|
Group by id and select most recent
|
[
"",
"sql",
"ms-access",
"group-by",
""
] |
I'm currently selecting Material from two tables using a `union` operator:
## **QUERY**
```
SELECT Material
FROM (
SELECT Material
FROM DP_Historical_Data
EXCEPT SELECT Material
FROM SAS_GLOBE_Material_to_SASBaseItem
) HD
UNION SELECT Material
FROM (
SELECT Material
FROM DP_Historical_Data_Archive
EXCEPT SELECT Material
FROM SAS_GLOBE_Material_to_SASBaseItem
) HDA
```
What I want to do now is use the retrieved Materials to get a description from a third table, i.e.:
```
Select MaterialDescription
from Description_Table DT
Where DT.Material = UnionResult.Material
```
Is there any tidy way to do this?
|
```
Select UnionResult.Material, DT.MaterialDescription
FROM Description_Table DT
JOIN
(
SELECT Material FROM
(
SELECT Material
FROM DP_Historical_Data
EXCEPT
SELECT Material
FROM SAS_GLOBE_Material_to_SASBaseItem
) HD
UNION
SELECT Material
FROM
(
SELECT Material
FROM DP_Historical_Data_Archive
EXCEPT
SELECT Material
FROM SAS_GLOBE_Material_to_SASBaseItem
) HDA
) UnionResult ON DT.Material = UnionResult.Material
```
|
```
SELECT DT.MaterialDescription
FROM Description_Table DT
INNER JOIN (
SELECT Material
FROM (
SELECT Material
FROM DP_Historical_Data
EXCEPT
SELECT Material
FROM SAS_GLOBE_Material_to_SASBaseItem
) HD
UNION
SELECT Material
FROM (
SELECT Material
FROM DP_Historical_Data_Archive
EXCEPT
SELECT Material
FROM SAS_GLOBE_Material_to_SASBaseItem
) HDA
) unires
WHERE DT.Material = unires.Material
```
|
Select from a third table based upon a union result
|
[
"",
"sql",
"sql-server",
""
] |
So I have achieved the results I want, but want to know if there is a better / cleaner way to do it:
```
SELECT * from Services
LEFT JOIN SellerServices ON SellerServices.Type = Services.Type
WHERE `id` =$user_id
OR
Services.Type NOT IN
(
SELECT SellerServices.Type
FROM SellerServices
where id =$user_id
)
```
This returns all entries from the left table (Services in this case).
I can test later whether `$user_id` equals the `id` field in the results to set a check box on or off in my HTML output, but what I've done seems a little kludgy and convoluted.
Does anybody know a cleaner way?
|
If your query returns the expected result, here is the same query with just a few adjustments (it may be more performant):
```
SELECT S.*
,CASE
WHEN S.id = $user_id THEN 'ON'
ELSE 'OFF'
END AS is_user
FROM Services S
LEFT JOIN SellerServices SS ON SS.Type = S.Type
LEFT JOIN SellerServices SSU ON SSU.id = S.id
AND SSU.Type = S.Type
WHERE S.id = $user_id
OR SSU.id IS NULL
```
Instead of using a subquery, I've added a second join to table `SellerServices` using the same conditions as your subquery. The `WHERE` clause is then much more readable: I check whether the `id` matches the user id and, if not, whether the second joined row exists.
Hope this will help you.
|
Do you want something like this?
```
select s.*,
       (exists (select 1 from SellerServices ss where ss.Type = s.type) ) as ExistsFlag
from services s
where s.id = $userid;
```
This selects all rows (and columns) from `services`, and adds a boolean indicating whether the `type` exists in the second table.
|
SQL getting one of each entries from left table and treating existence in the right as a boolean
|
[
"",
"mysql",
"sql",
"lamp",
""
] |
Sorry, I asked this question just before and got some good answers, but then I realised I made a mistake with the query in question. If I changed the question in the original post that could make the answers invalid, so I'm posting again with the right query this time. Please forgive me; I hope this is acceptable.
```
DECLARE @Temp TABLE
    (MeasureDate date, col1 decimal(18, 4), col2 decimal(18, 4), type int) -- column types assumed
INSERT INTO @Temp
SELECT MeasureDate, col1, col2, 1
FROM Table1
WHERE Col3 = 1
INSERT INTO @Temp
SELECT MeasureDate, col1, col2, 3
FROM Table1
WHERE Col3 = 1
AND Col4 = 7000
SELECT SUM(col1) / SUM(col2) AS Percentage, MeasureDate, Type
FROM @Temp
GROUP BY MeasureDate, Type
```
I do two inserts into the temp table, the second insert with an extra WHERE condition but the same columns from the same table, just a different type. Then I do SUM(col1) / SUM(col2) on the temp table to return the result I need per MeasureDate and type. Is there a way to merge all these inserts and selects into one statement, so I don't use a temp table and do a single select from Table1? Or, even if I still need the temp table, can I merge the selects into one select instead of two separate selects? The stored procedure works fine as it is; I'm just looking for a way to shorten it.
Thanks.
|
Sure can. I might start with combining the two queries from your inserts using `UNION ALL` (this variation of `UNION` will not remove duplicates), wrapped up in a [CTE](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx) from which you can perform your final query:
```
WITH MeasureData(MeasureDate, col1, col2, type) AS (
SELECT MeasureDate, col1, col2, 1
FROM Table1
WHERE Col3 = 1
UNION ALL
SELECT MeasureDate, col1, col2, 3
FROM Table1
WHERE Col3 = 1
AND Col4 = 7000
)
SELECT SUM(col1) / SUM(col2) AS Percentage, MeasureDate, Type
FROM MeasureData
GROUP BY MeasureDate, Type
```
That's it, no more table variable or insert statements.
|
No real need for a `UNION`, you can handle this with a `CASE` statement:
```
SELECT SUM(col1) / SUM(col2) AS Percentage, MeasureDate, Type
FROM (
SELECT MeasureDate, col1, col2, case when Col4 = 7000 then 3 else 1 end type
FROM Table1
WHERE Col3 = 1
) t
GROUP BY MeasureDate, Type
```
---
Edit: as Gordon correctly points out, for `Type = 1` this query wouldn't produce the same results. Here's a variation on Gordon's good answer that might be easier to understand visually, using a `CROSS JOIN` and `IF` logic:
```
SELECT T1.MeasureDate,
T.Type,
SUM(IF(T.Type=1,Col1,IF(T.Type=3 AND T1.Col4=7000,T1.Col1,0))) /
SUM(IF(T.Type=1,Col2,IF(T.Type=3 AND T1.Col4=7000,T1.Col2,0))) AS Percentage
FROM Table1 T1
CROSS JOIN (SELECT 1 Type UNION SELECT 3) T
WHERE T1.Col3 = 1
GROUP BY T1.MeasureDate, T.Type
```
* [Condensed SQL Fiddle](http://sqlfiddle.com/#!2/cc618/7)
|
merge two queries with different where and different grouping into 1
|
[
"",
"sql",
"t-sql",
"stored-procedures",
""
] |
I'm trying to write SQL which will keep the first N rows of a table and delete the rest. I have come up with this SQL, but it's saying I can't use COUNT here. Please help me rewrite the SQL.
```
DELETE
FROM ZZ_TEST_FINTABLE
WHERE PROCESS_INSTANCE = (
SELECT MIN(B.PROCESS_INSTANCE)
FROM ZZ_TEST_FINTABLE B)
AND COUNT(PROCESS_INTANCE) > 9
```
|
Maybe this works for you (with Oracle DB)
```
DELETE FROM
ZZ_TEST_FINTABLE
WHERE
PROCESS_INSTANCE NOT IN
(
SELECT PROCESS_INSTANCE
FROM ZZ_TEST_FINTABLE
WHERE ROWNUM < 9
);
```
|
You should use HAVING instead of AND.
```
DELETE
FROM ZZ_TEST_FINTABLE
WHERE PROCESS_INSTANCE = (
SELECT MIN(B.PROCESS_INSTANCE)
FROM ZZ_TEST_FINTABLE B
)
HAVING COUNT(PROCESS_INTANCE) > 9
```
or this
```
DELETE
FROM ZZ_TEST_FINTABLE A
INNER JOIN ZZ_TEST_FINTABLE B ON A.PROCESS_INSTANCE= MIN(B.PROCESS_INSTANCE)
HAVING COUNT(PROCESS_INTANCE) > 9
```
|
Keep First N number of rows and delete the rest
|
[
"",
"sql",
"db2",
"udb",
""
] |
Scenario:
I have 2 queries on 1 table, and I just want to view both query results as a single result.
Details:
Table: loantrans
```
+-----+----------+---------+---------+---------+
| tid | date | account | purpose | out |
+-----+----------+---------+---------+---------+
| 1 |2014-08-12| 975 | Loan | 5000 |
| 2 |2014-08-12| 975 |Interest | 850 |
| 3 |2014-08-12| 975 | Loan | 150 |
| 4 |2014-08-12| 975 |Interest | 5000 |
+-----+----------+---------+---------+---------+
```
Query 1:
```
SELECT MONTH(`loantrans`.`date`) as month, SUM(`loantrans`.`out`) AS loanout
FROM loantrans
WHERE (`loantrans`.`date` BETWEEN '2014-01-01' AND '2014-09-20')
AND (`loantrans`.`purpose` = 'Loan')
GROUP BY MONTH(`loantrans`.`date`)
ORDER BY `loantrans`.`date`
```
Result:
```
+-------+---------+
| month | loanout |
+-------+---------+
| 1 | 28000 |
| 2 | 27000 |
| 3 | 10200 |
| 4 | 7000 |
| 5 | 95000 |
| 6 | 2000 |
+-------+---------+
```
Query 2:
```
SELECT MONTH(`loantrans`.`date`) as month, SUM(`loantrans`.`out`) AS intout
FROM loantrans
WHERE (`loantrans`.`date` BETWEEN '2014-01-01' AND '2014-09-20')
AND (`loantrans`.`purpose` = 'Interest')
GROUP BY MONTH(`loantrans`.`date`)
ORDER BY `loantrans`.`date`
```
Result :
```
+-------+---------+
| month | intout |
+-------+---------+
| 1 | 2000 |
| 2 | 750 |
| 3 | 200 |
| 4 | 180 |
| 5 | 570 |
| 6 | 625 |
+-------+---------+
```
What I want is something like:
```
+-------+---------+---------+
| month | intout | loanout |
+-------+---------+---------+
| 1 | 2000 | 28000 |
| 2 | 750 | 27000 |
| 3 | 200 | 10200 |
| 4 | 180 | 7000 |
| 5 | 570 | 95000 |
| 6 | 625 | 2000 |
+-------+---------+---------+
```
How can I get a result like this?
Please check this screenshot for a clearer idea of what I'm looking for: <https://www.dropbox.com/s/dpptqb7y4c6xzi5/Capture3.PNG?dl=0>
Database:
<https://www.dropbox.com/s/8gbgrgvil915efr/bankdb.sql_7.zip?dl=0>
|
```
SELECT MONTH(lt.date) month,
SUM( case when lt.purpose='Interest' then lt.out else 0 end) intout,
SUM(case when lt.purpose='Loan' then lt.out else 0 end) loanout
FROM loantrans lt
WHERE lt.date BETWEEN '2014-01-01' AND '2014-09-20'
GROUP BY month
ORDER BY month
```
<http://sqlfiddle.com/#!2/fa8ac/5>
|
```
SELECT MONTH(lt.date) month,
SUM((lt.purpose='Interest')*lt.out) intout,
SUM((lt.purpose='Loan')*lt.out) loanout
FROM loantrans lt
WHERE lt.date BETWEEN '2014-01-01' AND '2014-09-20'
/* AND lt.purpose IN ('Interest', 'Loan') /* If you have more purposes. */
GROUP BY month
ORDER BY month
```
|
How to join 2 query results together?
|
[
"",
"mysql",
"sql",
""
] |
Why does the @result value in the code below print out a blank string? I would expect it to concatenate with the previous result.
```
DECLARE @size int
DECLARE @string nvarchar(10)
DECLARE @result nvarchar(10)
SELECT @string = '12345abc123'
DECLARE @count int, @total int
SELECT
@total = LEN(@string),
@count = 1
WHILE @count <= @total
BEGIN
SELECT @result = SUBSTRING(@string, @count, 1) + '-'+ @result
SELECT @count = @count + 1
PRINT @result
END
```
|
You never initialized `@result`, so it defaults to an sql `null`. SQL nulls are contagious poison, so when you do
```
SELECT @result = .... + @result
```
You're actually doing
```
SELECT @result = ... + null
```
and `@result` simply remains `null`
Initializing the value to an empty string solves your problem:
```
SET @result = ''
```
|
It is returning a blank because you are concatenating the substring with @result, which initially is NULL.
Try setting @result to an empty string like this:
```
SELECT @string = '12345abc123', @result = ''
```
|
Why does this TSql return a blank or null value?
|
[
"",
"sql",
"t-sql",
""
] |
```
| iID | saleID | cusID | date | total | discount | advance | paid_status |
+-----+--------+-------+------------+-------+----------+---------+-------------+
| 16 | 1 | 1 | 2014-01-17 | 2400 | 100 | 1000 | 1 |
| 17 | 1 | 1 | 2014-02-17 | 240 | 100 | 1000 | 1 |
| 18 | 1 | 1 | 2013-03-17 | 280 | 100 | 1000 | 1 |
| 19 | 1 | 1 | 2014-04-18 | 120 | 100 | 1000 | 1 |
```
Basically I have the database shown above. I want to get the sum of `total` for every month, but only for a specific year.
When I use this query it works, but it doesn't restrict to a specific year, as I haven't specified the year:
```
SELECT sum(total) FROM `invoice_reg` group by month(date) ;
```
But when I use a where clause like the following, it doesn't work and gives me an error:
```
SELECT sum(total) FROM `invoice_reg` group by month(date) where YEAR(date) = 2014
```
And I tried this one too, but it failed as well:
```
SELECT sum(total) FROM `invoice_reg` group by month(date) where DATE_FORMAT(date,'%Y')=2014
```
This is the error I'm getting:
```
check the manual that corresponds to your MySQL server version for the right syntax to use near 'where YEAR(date) = 2014
```
Any help will be appreciated.
|
Please use `where` before `group by`; the SQL grammar requires the `WHERE` clause to come before `GROUP BY`.
<http://dev.mysql.com/doc/refman/5.0/en/select.html>
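Applied to the query from the question, the clauses in their required order would look like this:

```
SELECT sum(total)
FROM `invoice_reg`
WHERE YEAR(date) = 2014
GROUP BY month(date);
```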
|
Try moving the `group by` clause to the end of the command, after the `where` clause.
|
mysql where clause give me an error
|
[
"",
"mysql",
"sql",
"group-by",
"where-clause",
""
] |
I have three varchar values:
```
@hour = '18'
@minute = '25'
@seconds = '45'
```
I need output in the form of '18:25:45'
```
Select from Convert(Time,@hour+@minute+@seconds)
```
The values for hour, minute and seconds come from an SSRS report drop-down.
|
An implicit conversion like this is not allowed. One solution is to build the value as a formatted string and then convert it to time, as below.
```
declare @hour smallint = '18'
declare @minute smallint = '25'
declare @seconds smallint = '45'
declare @format varchar(8) = (select (CAST(@hour as varchar(2)) + ':'+ CAST(@minute as varchar(2))+':'+ CAST(@seconds as varchar(2))))
select CAST(@format as time)
```
|
You can 'add-up' the values from zero, like this:
```
declare @hour smallint = 18
declare @minute smallint = 25
declare @seconds smallint = 45
declare @result time
SELECT @result = DATEADD(hour, @hour, DATEADD(minute, @minute, DATEADD(second, @seconds, 0)))
```
|
How to create Time with Hour+Minute+Seconds in sql server 2008
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I have 2 tables with duplicated values in one of the columns. I'd like to do a left join without taking rows where the values in that column are duplicated.
For example,
I have table X:
```
id Value
A 2
B 4
C 5
```
and table Y:
```
id Value
A 2
A 8
B 2
```
I'm doing a LEFT JOIN:
```
SELECT*
FROM X LEFT JOIN Y ON X.id = Y.id;
```
I would like to have something like:
```
id Value
A 2 A 2
B 4 B 2
C 5
```
so that the duplicate id (A, 8) from table Y is not considered.
|
You can do it with `GROUP BY`:
```
SELECT X.id, X.value, MIN(Y.value)
FROM X
LEFT JOIN Y ON X.id = Y.id
GROUP BY X.id, X.value
```
Note that it is not necessary to bring `Y.id` into the mix, because it is either `null` or equal to `X.id`.
|
You are looking for GROUP BY to aggregate the Y table records, effectively collapsing them down to one row per id. I have chosen MIN but you could use SUM if they are integers like your example data.
```
SELECT
x.id , x.Value, y.id, min(y.value)
FROM
X LEFT JOIN Y ON X.id = Y.id
GROUP BY
x.id, x.value, y.id;
```
I have given exactly what you asked for. But in my opinion the y.Id is unnecessary in the select and group by list.
|
How to do a LEFT JOIN in MS Access without duplicates?
|
[
"",
"sql",
"ms-access",
"ms-access-2010",
"left-join",
""
] |
Say you have a non-relational table `Persons` that is filled with badly formatted data:
Persons:
```
id name city state country
('1', 'username1', 'Cityville', 'Alabama', 'USA')
('1', 'username1', 'Cityville', 'Alabama', 'USA')
('2', 'username2', 'Cityville', 'Alabama', 'USA')
('2', 'username2', 'Cityville', 'Alabama', 'USA')
('3', 'username3', 'Knoxville', 'Tennessee', 'USA')
```
and you have a table setup that relates location id's to city state country tuples:
Location:
```
loc_id city state country
(1, 'Cityville', 'Alabama', 'USA')
(2, 'Knoxville', 'Tennessee', 'USA')
```
And you want to insert this data into Clean\_Persons, where the first column is `Persons.id` and the second column is the result of the city/state/country lookup from Location:
Clean\_Persons
```
user_id loc_id
(1, 1)
(2, 1)
(3, 2)
```
My attempt:
```
INSERT INTO Clean_Persons (user_ID, Loc_ID)
SELECT TO_NUMBER(USER_ID) FROM Persons
UNION
SELECT loc_ID FROM Location L JOIN Persons P ON (
L.city = P.city AND
L.state = P.state AND
L.country = P.country
);
```
|
I don't think you need to do a UNION here; I think it should be something like this...
```
INSERT INTO Clean_Persons (user_ID, Loc_ID)
SELECT DISTINCT TO_NUMBER(P.USER_ID), L.loc_ID
FROM Location L JOIN Persons P
ON (
L.city = P.city AND
L.state = P.state AND
L.country = P.country
);
```
|
You don't need `UNION`, but rather just a `JOIN`:
```
INSERT INTO Clean_Persons (Id, Name, Loc_Id)
SELECT DISTINCT TO_NUMBER(P.Id), P.Name, L.Loc_Id
FROM Persons P
JOIN Location L ON P.City = L.City
AND P.State = L.State
AND P.Country = L.Country
```
Given the duplicated data in the `Persons` table, I'd also use `DISTINCT` to avoid duplicates. With this said, is it possible the same person could have multiple addresses? If so, consider adding another table for `PersonLocations` as this current query would insert duplicate records in the `Clean_Persons` table. This will help keep your database normalized.
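A sketch of such a junction table (names and types are illustrative, not from the question; `NUMBER` assumed since this is Oracle, as tagged):

```
-- one row per (person, location) pair; the composite key
-- prevents the duplicates seen in the raw Persons data
CREATE TABLE PersonLocations (
    user_id  NUMBER NOT NULL,
    loc_id   NUMBER NOT NULL,
    PRIMARY KEY (user_id, loc_id)
);
```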
|
INSERT from UNION of 2 selects?
|
[
"",
"sql",
"oracle",
""
] |
I have a query like this:
```
SELECT
IMAGE_PATH, IMAGE_ID, DOC_NID
FROM
TBL_IMAGE WHERE IMAGE_ORD = 1
```
And I have the following conditions:
* the filter `IMAGE_ORD = 1` should be changed so that image\_ord equals the max value
* the image\_ord column is a number type and it contains some duplicates
* I need only one row, the one with the biggest value of image\_ord
* it could be handled with rownum, but I do not want to, since there are many layers above this (it is a subquery)
Could it be handled without a subquery?
|
Based on your requirements:
> "image\_ord [...] has some duplication"
>
> "I need only one row with biggest value of image\_ord"
```
WITH C AS ( SELECT MAX(IMAGE_ORD) M FROM TBL_IMAGE )
SELECT
TBL_IMAGE.IMAGE_PATH, TBL_IMAGE.IMAGE_ID, TBL_IMAGE.DOC_NID
FROM
TBL_IMAGE JOIN C ON IMAGE_ORD = C.M
WHERE ROWNUM < 2
```
* The `WITH ...` statement will retrieve the maximum value for the `image_ord` field
* The `SELECT ...` statement will use a `JOIN` on the temporary table above
* The `WHERE ROWNUM < 2` will limit the number of result to 1 row
Please note that in absence of an (unambiguous) `ORDER BY` clause, the result from the `SELECT` statement should be considered as an unordered set of row. So in case of duplicates, you shouldn't rely on any particular row (nor even the same row) being returned. Just *"one"* row having the maximum `image_ord` value.
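If determinism matters, one option (at the cost of an inline view) is to order before filtering, breaking ties on a unique column such as `IMAGE_ID`. This is the classic Oracle top-N pattern, sketched here as an assumption about the table:

```
SELECT IMAGE_PATH, IMAGE_ID, DOC_NID
FROM (
    SELECT IMAGE_PATH, IMAGE_ID, DOC_NID
    FROM TBL_IMAGE
    ORDER BY IMAGE_ORD DESC, IMAGE_ID  -- tie-breaker makes the pick repeatable
)
WHERE ROWNUM < 2
```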
|
Try this :
```
SELECT
IMAGE_PATH, IMAGE_ID, DOC_NID, Max(image_ord)
FROM
TBL_IMAGE WHERE IMAGE_ORD = 1 group by IMAGE_PATH, IMAGE_ID, DOC_NID
```
|
How can I get a row which has biggest value?
|
[
"",
"sql",
"oracle",
"sql-order-by",
"max",
"limit",
""
] |
I have a query that selects the oldest record from table B, which contains multiple rows for each row in table A, and joins it to table A:
```
SELECT A.Surname, A.Fornames, A.DOB,
(SELECT TOP 1 B.RegNumber
FROM B
WHERE B.ID=A.ID
ORDER BY B.Date ASC) AS RegNumber
FROM A
ORDER BY A.Surname;
```
This works great, however I also want to pull another column from table B. So I would have:
```
A.Surname, A.Fornames, A.DOB, B.RegNumber, B.RegDate
```
How can I do this, whilst still only getting the oldest record from table B?
|
You have two options. The first is to restructure your query using joins.
The principle is, you need to get the first date for each record in `B` grouped by `ID`:
```
SELECT ID, MIN(Date) AS [FirstDate]
FROM B
GROUP BY ID;
```
You can then JOIN this back to B, to filter the results, i.e.:
```
SELECT B.*
FROM B
INNER JOIN
( SELECT ID, MIN(Date) AS [FirstDate]
FROM B
GROUP BY ID
) AS B2
ON B2.ID = B.ID
    AND B2.FirstDate = B.Date;
```
You can then join this to Table A and select all the fields you need:
```
SELECT A.Surname, A.Fornames, A.DOB, B.RegNumber, B.RegDate
FROM (A
INNER JOIN B
ON B.ID = A.ID)
INNER JOIN
( SELECT ID, MIN(Date) AS [FirstDate]
FROM B
GROUP BY ID
) AS B2
ON B2.ID = B.ID
AND B2.FirstDate = B.Date
ORDER BY A.Surname;
```
An alterative way to use JOINs is:
```
SELECT A.Surname, A.Fornames, A.DOB, B.RegNumber, B.RegDate
FROM (A
INNER JOIN B
ON A.ID = B.ID)
LEFT JOIN B AS B2
ON B2.ID = B.ID
AND B2.Date < B.Date
WHERE B2.ID IS NULL
ORDER BY A.Surname;
```
This method works by joining to `B` twice, and the second time (`B2`) getting all the records that are earlier than the record in the first join (`B`), then by stating that `B2.ID` is null, you are effectively saying that you want all records in B, where a record with the same ID and an earlier date does not exist.
The second approach, is to just repeat your correlated subquery:
```
SELECT A.Surname, A.Fornames, A.DOB,
(SELECT TOP 1 B.RegNumber
FROM B
WHERE B.ID=A.ID
        ORDER BY B.Date ASC) AS RegNumber,
(SELECT TOP 1 B.RegDate
FROM B
WHERE B.ID=A.ID
ORDER BY B.Date ASC) AS RegDate
FROM A
ORDER BY A.Surname;
```
If you are only accessing two columns from the table then there is little to separate the two methods; both require B to be read twice. However, using JOINs tends to give the optimiser a better chance, so I would veer towards this method. The other advantage is that it gives you access to all the fields in `B`, so if you needed a third column, you wouldn't have to add a third correlated subquery.
|
You can repeat the subquery:
```
SELECT A.Surname, A.Fornames, A.DOB,
(SELECT TOP 1 B.RegNumber
FROM B
WHERE B.ID=A.ID
ORDER BY B.Date ASC
) AS RegNumber,
(SELECT min(b.Date)
FROM B
WHERE B.ID=A.ID
) AS RegDate
FROM A
ORDER BY A.Surname;
```
|
Selecting more than one column in subquery
|
[
"",
"sql",
"ms-access",
""
] |
I have a situation to handle: my Liquibase project is structured as per the recommended best practices, with the changelog XML structured as given below.
```
Master XML
-->Release XML
-->Feature XML
-->changelog XML
```
In our application group, we run updateSQL to generate the consolidated sql file and get the changes executed through our DBA group.
However, the real problem I have is executing a common set of SQL statements during every iteration, like
```
ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA
```
as the DBA executes the changes as SYSTEM but the target schema is APPLNSCHEMA.
How can I include such common repeating statements in a Liquibase changelog?
|
You would be able to write an extension (<http://liquibase.org/extensions>) that injects it in. If you need to do it per changeLog, it may work best to extend XMLChangeLogParser to automatically create and add a new changeSet that runs the needed SQL.
|
You could make a changeSet with the attribute 'runAlways' set to true and include the SQL.
|
executing a common sql file using liquibase
|
[
"",
"sql",
"oracle",
"liquibase",
"changelog",
""
] |
I have the following table
*test*
```
+----+---------+------+
| Id | NumFact | type |
+----+---------+------+
| 1 | 20 | A |
| 2 | 21 | A |
| 3 | 22 | A |
| 4 | 23 | B |
| 5 | 24 | B |
| 6 | 25 | B |
| 7 | 26 | B |
| 8 | 26 | T |
+----+---------+------+
```
I need to get the following result: the count of NumFact rows for type A and type B only.
*Expected Result*
```
+------+--------+
| Num | type |
+------+--------+
| 3 | A |
| 4 | B |
+------+--------+
```
I wrote the following *queries*.
```
-- select query 1 --
Select count(NumFact) as Num
from test
where type = 'A'
-- select query 2 --
Select count(NumFact) as Num
from test
where type = 'B'
```
How can I make only one *select* and get *Num*?
|
You just need to group by `type` and filter properly.
```
SELECT type, COUNT(NumFact) AS Num
FROM test
WHERE type IN ('A','B')
GROUP BY type;
```
What this says is to group the result set the column `type` and use the aggregate function `COUNT()` for each group but only where the `type` column value is equal to 'A' or 'B'.
It is pretty basic SQL really. Read more about [GROUP BY on MSDN](http://msdn.microsoft.com/en-au/library/ms177673.aspx).
|
Use `GROUP BY`
## **Query1**
```
SELECT COUNT(NumFact) AS Num,
type FROM test
WHERE type IN ('A','B')
GROUP BY type
```
## **Query2**
```
SELECT COUNT(NumFact) AS Num,
type FROM test
WHERE type='A' OR type='B'
GROUP BY type;
```
---
## [DEMO](http://sqlfiddle.com/#!3/ecb44/5)
|
Combine two select
|
[
"",
"sql",
"sql-server",
"dml",
""
] |
In my MySQL table there's a column called \_time of type varchar. The values it holds are in the format `year month day hour minute` without whitespace, e.g. `201409201945`. I want to convert it to `datetime`, so I'm doing this:
```
ALTER TABLE `my_table` CHANGE COLUMN `_time` `date_time` DATETIME NOT NULL;
```
And it throws this error for some reason:
`Error Code: 1292. Incorrect datetime value: '201409201945' for column '_date_time' at row 1 0.036 sec`
|
The three steps @Arkain mentioned would be with the help of the function [STR\_TO\_DATE](http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_str-to-date)
```
-- add the new column
ALTER TABLE `my_table` ADD COLUMN `date_time` DATETIME;
-- update the new column with the help of the function STR_TO_DATE
UPDATE `my_table` SET `date_time` = STR_TO_DATE(`_time`, '%Y%m%d%H%i');
-- drop the old column
ALTER TABLE `my_table` DROP COLUMN `_time`;
```
The complete list of specifiers for STR\_TO\_DATE can be found at [DATE\_FORMAT](http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_date-format), here an excerpt with those I used:
```
%d Day of the month, numeric (00..31)
%H Hour (00..23)
%i Minutes, numeric (00..59)
%m Month, numeric (00..12)
%Y Year, numeric, four digits
```
[Demo of the UPDATE](http://sqlfiddle.com/#!2/e6694/1)
If the new column should have the attribute NOT NULL, one way could be to set the sql mode to '' before the operation and reset the sql\_mode later on:
```
SET @old_mode = @@sql_mode;
SET @@sql_mode = ''; -- permits zero values in DATETIME columns
ALTER TABLE `my_table` ADD COLUMN `date_time` DATETIME NOT NULL;
UPDATE `my_table` SET `date_time` = STR_TO_DATE(`_time`, '%Y%m%d%H%i');
ALTER TABLE `my_table` DROP COLUMN `_time`;
SET @@sql_mode = @old_mode;
```
[Updated Demo](http://sqlfiddle.com/#!2/1be5d/1)
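If you want to double-check what that format string matches before running the UPDATE, the same pattern can be tried in plain Python (a sketch only; note that Python's `strptime` spells minutes `%M` where MySQL uses `%i`):

```python
from datetime import datetime

# MySQL's STR_TO_DATE(`_time`, '%Y%m%d%H%i') parses strings like '201409201945'.
# The equivalent Python format string uses %M for minutes:
parsed = datetime.strptime("201409201945", "%Y%m%d%H%M")
print(parsed)  # 2014-09-20 19:45:00
```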
|
If your `varchar` data were formatted like this: '2014-09-20 19:45', altering your column's data type would work. Why? That's the character representation used by `DATETIME` and other time-oriented data types.
But it isn't. So, what choices do you have?
One is to use these four steps:
1. alter the table to add a new `DATETIME` column with a temporary name
2. do an `UPDATE` with no `WHERE` clause to fill in the values of that column
3. alter the table to drop the previous column
4. alter the table to rename your new column to have the same name as the column you just dropped.
Here's how that would go.
```
ALTER TABLE my_table ADD COLUMN tempstamp DATETIME
UPDATE my_table SET tempstamp = STR_TO_DATE(_time, '%Y%m%d%H%i')
ALTER TABLE my_table DROP COLUMN _time
ALTER TABLE my_table CHANGE tempstamp _time DATETIME NOT NULL
```
Another approach: Change the strings in your `_time` to valid datetime values, then alter your column. If your `varchars()` are wide enough to hold a few extra characters, try this.
```
UPDATE my_table SET `_time`=STR_TO_DATE(`_time`, '%Y%m%d%H%i')
ALTER TABLE my_table CHANGE `_time` `_time` DATETIME NOT NULL
```
This works because `STR_TO_DATE()` makes `DATETIME` values of your strings, and then MySQL casts them back to strings to store back into your `varchar` column. Then you can change the datatype to `DATETIME`.
You probably noticed I threw in `NOT NULL`. If you're going to put an index on that column, `NOT NULL` is a good thing to have. But, if some of your time values are missing, it won't work.
|
Unable to convert varchar to datetime in MySql
|
[
"",
"mysql",
"sql",
"datetime",
"datetime-format",
""
] |
I have a query like this.
```
select
ad.escore,
ad.mscore,
round(sum(ps.cnt) / sum(n.cnt) * 100,1) as percent
from
(
select
account_no,
-- 602 becomes '595-604'
to_char(trunc(empirica_score - 5, -1) + 5, '9999') || '-' || to_char(trunc(empirica_score - 5, -1) + 14, '9999') as escore,
-- 97 becomes '76-100'. Change the expression to group differently.
cast(((mfin_score - 1) / 25) * 25 + 1 as text) || '-' || cast(((mfin_score - 1) / 25) * 25 + 25 as text) as mscore
from account_details
) ad
join
(
select custno, count(*) as cnt
from paysoft_results
where result = 'Successful'
and resultdate >= '13/08/2014' <------- HERE
and resultdate <= '12/19/2014' <------- HERE
group by custno
) ps on ps.custno = ad.account_no
join
(
select customer_code, count(distinct start_date) as cnt
from naedo
and start_date >= '13/08/2014' <------- HERE
and start_date <= '12/19/2014' <------- HERE
group by customer_code
) n on n.customer_code = ad.account_no
group by ad.escore, ad.mscore;
```
It works perfectly if I don't include the dates like above.
If I do put in the dates I get the error `ERROR: syntax error at or near "and"`
Any ideas why?
**UPDATE**
Okay, I suppose I could ask a new question, but I'll append it to this one.
`ERROR: date/time field value out of range: "13/08/2014"`
This comes from the date comparison in my query. What is the correct way to do it?
|
Well, *this* bit won't work:
```
select customer_code, count(distinct start_date) as cnt
from naedo
and start_date >= '13/08/2014' <------- HERE
and start_date <= '12/19/2014' <------- HERE
group by ...
```
since the `where` clause has to start with a `where`, not an `and`. Otherwise, we'd all be calling it an `and` clause :-)
It'll need to be:
```
select customer_code, count(distinct start_date) as cnt
from naedo
where start_date >= '13/08/2014'
and start_date <= '12/19/2014'
group by ...
```
The other bit you've marked with `HERE` (the second segment, first `join` clause) looks fine, it should work without error.
---
As an aside, at least *one* of your dates is in the incorrect format. The segment:
```
and start_date >= '13/08/2014'
and start_date <= '12/19/2014'
```
either has a date of the 8th of Undecimber or the 12th of, well, I don't even *know* what the Latin prefix for nineteen (or seventeen based on the real months already being out of step) is.
You'll need to figure out which of `mm/dd/yyyy` or `dd/mm/yyyy` your database supports and then stick with just that *one.*
Given that your question update states it's complaining about `13/08/2014`, you'll probably find it should be written as `08/13/2014`, in `mm/dd/yyyy` format.
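You can see the ambiguity outside the database too; this little Python sketch (using `strptime`, not PostgreSQL's date parser) shows '13/08/2014' only parses day-first:

```python
from datetime import datetime

# Day-first parses fine: 13 August 2014
day_first = datetime.strptime("13/08/2014", "%d/%m/%Y")
print(day_first.date())  # 2014-08-13

# Month-first fails: there is no month 13
try:
    datetime.strptime("13/08/2014", "%m/%d/%Y")
except ValueError as e:
    error = str(e)
print(error)
```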
|
```
Select customer_code, count(distinct start_date) as cnt
from naedo
Where start_date >= '13/08/2014'
and start_date <= '12/19/2014'
group by customer_code
```
|
ERROR: syntax error at or near "and"
|
[
"",
"sql",
"postgresql",
""
] |
I have a MySQL dataset that looks like this:
```
a b
.32 .72
.41 .80
.28 .64
.31 .80
```
And I want to assign values (*c*) to each row based on the conditions of a and b:
.3 < a < .4 and .71 < b < .83 = 1
.4 < a < .5 and .71 < b < .83 = 2
.2 < a < .3 and .58 < b < .77 = 3
and so on. This would result in my table looking like this:
```
a b c
.32 .72 1
.41 .80 2
.28 .64 3
.31 .80 1
```
How would I do this? I have tried a case when() statement but that didn't work since I don't know how/if it is possible to have one of those with more than one case.
|
I don't fully understand this: *I have tried a case when() statement but that didn't work since I don't know how/if it is possible to have one of those with more than one case.*
This will get you the output you described however. I'm a bit lazy/tired so I'll just paste the output from SQL Fiddle:
[SQL Fiddle](http://sqlfiddle.com/#!2/bdc44/1)
**MySQL 5.5.32 Schema Setup**:
```
CREATE TABLE Table1
(`a` decimal(2,2), `b` decimal(2,2))
;
INSERT INTO Table1
(`a`, `b`)
VALUES
(.32, .72),
(.41, .80),
(.28, .64),
(.31, .80)
;
alter table table1 add column c int;
update table1
set c =
case
when (.3 < a and a < .4) and (.71 < b and b < .83) then 1
when (.4 < a and a < .5) and (.71 < b and b < .83) then 2
when (.2 < a and a < .3) and (.58 < b and b < .77) then 3
end;
```
**Query 1**:
```
select * from table1
```
**[Results](http://sqlfiddle.com/#!2/bdc44/1/0)**:
```
| A | B | C |
|------|------|---|
| 0.32 | 0.72 | 1 |
| 0.41 | 0.8 | 2 |
| 0.28 | 0.64 | 3 |
| 0.31 | 0.8 | 1 |
```
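Here is the same multi-condition CASE replayed on SQLite via Python's sqlite3, just as a sketch to confirm the branch logic (the conditions are spelled out with explicit ANDs, since SQL has no chained `a < x < b` comparison):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (a REAL, b REAL, c INTEGER)")
con.executemany("INSERT INTO table1 (a, b) VALUES (?, ?)",
                [(0.32, 0.72), (0.41, 0.80), (0.28, 0.64), (0.31, 0.80)])

# One UPDATE, one CASE, three WHEN branches - same as the answer above
con.execute("""
    UPDATE table1 SET c = CASE
        WHEN a > 0.3 AND a < 0.4 AND b > 0.71 AND b < 0.83 THEN 1
        WHEN a > 0.4 AND a < 0.5 AND b > 0.71 AND b < 0.83 THEN 2
        WHEN a > 0.2 AND a < 0.3 AND b > 0.58 AND b < 0.77 THEN 3
    END
""")
rows = con.execute("SELECT c FROM table1 ORDER BY rowid").fetchall()
print(rows)  # [(1,), (2,), (3,), (1,)]
```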
|
I would create a secondary table and use it to drive the update.
Generally, try to parametrize things like this into tables, rather than making a big honking sql.
That still leaves you with the question of whether there is a contradiction in your criteria (i.e. multiple outcomes possible, no outcomes, etc...) but is cleaner in terms of sql and in terms of maintainability.
I've handled no outcome/multiple outcomes with exists and limit 1 below, respectively so that at least they don't error out.
```
drop table tgt;
create table tgt(a float, b float, calc int);
drop table range;
create table range(a_low float, a_high float,
b_low float, b_high float, calc float);
select * from tgt;
insert into tgt (a,b, calc) values(.32, .72, 0);
insert into tgt values(.41, .80, 0);
insert into tgt values(.28, .64, 0);
insert into tgt values(.31, .80, 0);
/*
.3 < a < .4 and .71 < b < .83 = 1
.4 < a < .5 and .71 < b < .83 = 2
.2 < a < .3 and .58 < b < .77 = 3
*/
insert into range (a_low, a_high, b_low, b_high, calc)
values (.3, .4, .71, .83, 1);
insert into range (a_low, a_high, b_low, b_high, calc)
values (.4, .5, .71, .83, 2);
insert into range (a_low, a_high, b_low, b_high, calc)
values (.2, .3, .58, .77, 3);
select * from tgt;
update tgt
set calc =
(select calc
from range
where tgt.a
between range.a_low and range.a_high
and tgt.b between range.b_low and range.b_high
/* limit avoids an error if
   there are multiple results
   - picks only one */
limit 1)
where
/* and exists avoids it if there are no results */
exists
(select calc
from range
where tgt.a
between range.a_low and range.a_high
and tgt.b between range.b_low and range.b_high)
;
select * from tgt;
```
in postgresql this is the result:
```
0.32; 0.72; 1
0.41; 0.8; 2
0.28; 0.64; 3
0.31; 0.8; 1
```
|
Assign integer value to row based on interval - SQL
|
[
"",
"sql",
""
] |
I have the following select statement:
```
SELECT * FROM pgp
WHERE group_id IN (
SELECT group_id FROM pgroups
WHERE label LIKE 'Registration%'
AND label NOT LIKE '%Snom%'
)
AND pid = 12;
```
it returns results like:
```
group_id | pid | value | updatev
----------+----------+-------+----------
34 | 12 | | f
11 | 12 | | t
4 | 12 | | t
13 | 12 | | t
17 | 12 | | f
19 | 12 | | f
```
For all the records returned, I want to force the value of the "updatev" field to be set to true. I'm not sure how to do that.
Thanks
|
Just change the `SELECT` to an `UPDATE`:
```
UPDATE pgp
SET updatev = 't'
WHERE group_id IN (SELECT group_id FROM pgroups WHERE label like 'Registration%' and label not like '%Snom%') and pid = 12;
```
That's actually a good practice - get your criteria right with a `SELECT` before actually changing any data.
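That select-first workflow is easy to rehearse on a toy copy of the tables. The sketch below uses Python's sqlite3 with made-up label values, and 0/1 standing in for `f`/`t` since SQLite has no boolean type:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE pgroups (group_id INTEGER, label TEXT);
    CREATE TABLE pgp (group_id INTEGER, pid INTEGER, updatev INTEGER);
    INSERT INTO pgroups VALUES (34, 'Registration A'), (11, 'Registration Snom'), (4, 'Other');
    INSERT INTO pgp VALUES (34, 12, 0), (11, 12, 0), (4, 12, 0);
""")

where = """group_id IN (SELECT group_id FROM pgroups
                        WHERE label LIKE 'Registration%'
                          AND label NOT LIKE '%Snom%')
           AND pid = 12"""

# 1. Preview which rows the criteria match
preview = con.execute(f"SELECT group_id FROM pgp WHERE {where}").fetchall()
# 2. Same criteria, now as the UPDATE
con.execute(f"UPDATE pgp SET updatev = 1 WHERE {where}")
rows = con.execute("SELECT group_id, updatev FROM pgp ORDER BY group_id").fetchall()
print(preview)  # [(34,)]
print(rows)     # [(4, 0), (11, 0), (34, 1)]
```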
|
Use an [`update`](http://www.postgresql.org/docs/9.2/static/sql-update.html) statement:
```
UPDATE pgp
SET updatev = true
WHERE group_id IN (SELECT group_id
FROM pgroups
WHERE label LIKE 'Registration%' AND
label NOT LIKE '%Snom%') AND
pid = 12;
```
|
how to select records from table A and update table A
|
[
"",
"sql",
"postgresql-9.2",
""
] |
Say I have a table Clients, with a field ClientID, and that client has orders that are loaded in another table Orders, with foreign key ClientID to link both.
A client can have many orders (1:N), but orders have different types, described by the field TypeID.
Now, I want to select the clients that have orders of a number of types. For instance, the clients that have orders of type 1 and 2 (both, not one or the other).
How do I build this query? I'm really at lost here.
EDIT: Assume I'm on SQL Server.
|
This query is based on the assumption that TypeId can be either 1 or 2. It will return the ClientIds that have both a Type 1 and a Type 2, no matter how many of each.
```
Select ClientId, COUNT(distinct TypeId) as cnt
from tblOrders o
group by ClientId
Having COUNT(distinct TypeId) >= 2
```
`COUNT(distinct TypeId)` is how this really works. It will count the distinct number of TypeId's for a particular ClientId. If you had say 5 different Types, then change the condition in the Having Clause to 5
This is a small sample DataSet
```
ClientId TypeId
1 1
1 2
1 2
2 2
2 1
3 1
3 1
```
Here is the resulting Query, it will exclude client 3 because it only has orders with Type1
Result Set
```
ClientId cnt
1 2
2 2
```
If you have many different TypeId's, but only want to check Type1 and Type2 put those Id's in a where clause
```
where TypeId in (1,2)
```
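Replaying the sample data above through sqlite3 in Python (SQLite supports `COUNT(DISTINCT ...)` too) confirms that client 3 drops out:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblOrders (ClientId INTEGER, TypeId INTEGER)")
# Same sample data set as in the answer
con.executemany("INSERT INTO tblOrders VALUES (?, ?)",
                [(1, 1), (1, 2), (1, 2), (2, 2), (2, 1), (3, 1), (3, 1)])

rows = con.execute("""
    SELECT ClientId, COUNT(DISTINCT TypeId) AS cnt
    FROM tblOrders
    WHERE TypeId IN (1, 2)
    GROUP BY ClientId
    HAVING COUNT(DISTINCT TypeId) >= 2
    ORDER BY ClientId
""").fetchall()
print(rows)  # [(1, 2), (2, 2)] - client 3 only has Type 1 orders
```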
|
Here's one solution:
```
select * from clients c
where exists (select 1 from orders o where typeid = 1 and o.clientid = c.clientid)
and exists (select 1 from orders o where typeid = 2 and o.clientid = c.clientid)
and exists (select 1 from orders o where typeid = 3 and o.clientid = c.clientid)
-- additional types ...
```
|
Find rows in a table based on the existance of two different rows in a 1:N-related table
|
[
"",
"sql",
"sql-server",
""
] |
I am using SQL Server.
I need to create a stored procedure that will update the Data field (table bellow) with different value for every ID value. (the values in the Data fields depend on the user input).
```
ID | Data
---------
1 | NULL
2 | NULL
3 | NULL
```
For example:
if ID = 1, Data should be "Test1"
The ID and Data pairs should somehow be input parameters to the stored procedures.
Is this possible, or I'll have to call simple update procedure for every ID/Data pair?
|
You need to use XML for sending data for multiple rows. For your current problem prepare (generate dynamically) an xml like below.
```
'<NewDataSet><Table><Id>1</Id><Data>test1</Data></Table><Table><Id>2</Id><Data>test2</Data></Table></NewDataSet>'
```
Then Prepare a procedure like below.
```
CREATE PROC [dbo].[UpdateMultipleRecords]
(
@XmlString VARCHAR(MAX)
)
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #DATA
(
Id int,
Data varchar(50) NULL
)
DECLARE @DocHandle int
EXEC sp_xml_preparedocument @DocHandle OUTPUT, @XmlString
INSERT INTO #DATA
SELECT Id,Data
FROM OPENXML (@DocHandle, '/NewDataSet/Table',2)
WITH
(
Id int,
Data varchar(50)
)
EXEC sp_xml_removedocument @DocHandle
UPDATE [dbo].[Table1] SET DATA=D.Data
FROM [dbo].[Table1] T INNER JOIN #DATA D ON T.ID=D.Id
IF (SELECT OBJECT_ID('TEMPDB..#DATA')) IS NOT NULL DROP TABLE #DATA
END
```
And call the procedure as
```
[UpdateMultipleRecords] '<NewDataSet><Table><Id>1</Id><Data>Test1</Data></Table><Table><Id>2</Id><Data>Test2</Data></Table></NewDataSet>'
```
|
You need user-defined table types for this:
Try this:
```
-- test table
create table yourtable(id int not null, data [varchar](256) NULL)
GO
-- test type
CREATE TYPE [dbo].[usertype] AS TABLE(
[id] [int] not null,
[Data] [varchar](256) NULL
)
GO
-- test procedure
create procedure p_test
(
@tbl dbo.[usertype] READONLY
) as
BEGIN
UPDATE yourtable
SET data = t.data
FROM yourtable
JOIN
@tbl t
ON yourtable.id = t.id
END
go
-- test data
insert yourtable(id)
values(1),(2),(3)
go
```
Test of script:
```
declare @t [dbo].[usertype]
insert @t values(1,'hello'),(2,'world')
exec p_test @t
select * from yourtable
```
Result:
```
id data
1 hello
2 world
3 NULL
```
|
Stored Procedure that updates fields with different values
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"sql-update",
""
] |
I've got a strange issue in Mule. I have a webservice exposed in Mule that performs simple CRUD operations.
Now the issue is there is a SQL query :-
`if not exists (select * from sysobjects where name='getData' and xtype='U')create table getData (ID int NOT NULL, NAME varchar(50) NULL,AGE int NULL,DESIGNATION varchar(50) NULL)`
What this query does is check whether the table exists in the database: if it exists, it leaves it alone, and if it doesn't exist it creates a new table with the same name and the same fields.
Now I want to use this query before an insert DB operation: if the table exists it will leave it and insert the data into it, and if it doesn't exist it will create the table first and then insert the data into it.
So my Mule Flow is following :
```
<jdbc-ee:connector name="Database_Global" dataSource-ref="DB_Source" validateConnections="true" queryTimeout="-1" pollingFrequency="0" doc:name="Database">
<jdbc-ee:query key="CheckTableExistsQuery" value="if not exists (select * from sysobjects where name='getData' and xtype='U')create table getData (ID int NOT NULL, NAME varchar(50) NULL,AGE int NULL,DESIGNATION varchar(50) NULL)"/>
<jdbc-ee:query key="InsertQuery" value="INSERT INTO getData(ID,NAME,AGE,DESIGNATION)VALUES(#[flowVars['id']],#[flowVars['name']],#[flowVars['age']],#[flowVars['designation']])"/>
</jdbc-ee:connector>
<flow name="MuleDbInsertFlow1" doc:name="MuleDbInsertFlow1">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8082" path="mainData" doc:name="HTTP"/>
<cxf:jaxws-service service="MainData" serviceClass="com.test.services.schema.maindata.v1.MainData" doc:name="SOAPWithHeader" />
<component class="com.test.services.schema.maindata.v1.Impl.MainDataImpl" doc:name="JavaMain_ServiceImpl"/>
<mulexml:object-to-xml-transformer doc:name="Object to XML"/>
<choice doc:name="Choice">
<when expression="#[message.inboundProperties['SOAPAction'] contains 'insertDataOperation']">
<processor-chain doc:name="Processor Chain">
<logger message="INSERTDATA" level="INFO" doc:name="Logger"/>
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="CheckTableExistsQuery" queryTimeout="-1" connector-ref="Database_Global" doc:name="Database (JDBC)"/>
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="InsertQuery" queryTimeout="-1" connector-ref="Database_Global" doc:name="Database (JDBC)"/>
//remaining code ......
```
As you can see .. I am trying to call **CheckTableExistsQuery** before **InsertQuery** so that it checks the table exists or not and then perform insertion of Data .. but I am getting following exception :-
```
ERROR 2014-09-21 14:03:48,424 [[test].connector.http.mule.default.receiver.02] org.mule.exception.CatchMessagingExceptionStrategy:
********************************************************************************
Message : Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=jdbc://CheckTableExistsQuery, connector=EEJdbcConnector
{
name=Database_Global
lifecycle=start
this=79fcce6c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=false
connected=true
supportedProtocols=[jdbc]
serviceOverrides=<none>
}
, name='endpoint.jdbc.CheckTableExistsQuery', mep=REQUEST_RESPONSE, properties={queryTimeout=-1}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: String
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. No SQL Strategy found for SQL statement: {if not exists (select * from sysobjects where name='getData' and xtype='U')create table getData (ID int NOT NULL, NAME varchar(50) NULL,AGE int NULL,DESIGNATION varchar(50) NULL)} (java.lang.IllegalArgumentException)
com.mulesoft.mule.transport.jdbc.sqlstrategy.EESqlStatementStrategyFactory:105 (null)
2. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=jdbc://CheckTableExistsQuery, connector=EEJdbcConnector
{
name=Database_Global
lifecycle=start
this=79fcce6c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=false
connected=true
supportedProtocols=[jdbc]
serviceOverrides=<none>
}
, name='endpoint.jdbc.CheckTableExistsQuery', mep=REQUEST_RESPONSE, properties={queryTimeout=-1}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: String (org.mule.api.transport.DispatchException)
org.mule.transport.AbstractMessageDispatcher:117 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.lang.IllegalArgumentException: No SQL Strategy found for SQL statement: {if not exists (select * from sysobjects where name='getData' and xtype='U')create table getData (ID int NOT NULL, NAME varchar(50) NULL,AGE int NULL,DESIGNATION varchar(50) NULL)}
at com.mulesoft.mule.transport.jdbc.sqlstrategy.EESqlStatementStrategyFactory.create(EESqlStatementStrategyFactory.java:105)
at org.mule.transport.jdbc.JdbcMessageDispatcher.doSend(JdbcMessageDispatcher.java:65)
at org.mule.transport.AbstractMessageDispatcher.process(AbstractMessageDispatcher.java:84)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
```
But the Strange fact is that .. if I implement the same using Java code it works fine .. for example in Java code I use JDBCTemplate to execute query :-
```
/* Check table exists and create it */
String checkTableExists=getQueryByKey("CheckTableExistsQuery"); // Query for check existing table
jdbcTemplate.execute(checkTableExists); //Create Table If not exists
try {
String insertDataIntoDB = getQueryByKey("InsertQuery");
jdbcTemplate.update(insertDataIntoDB, ID, NAME, AGE,
DESIGNATION);
dataResponse.setResponse("Data inserted Successfully");
} catch (DataIntegrityViolationException e) {
SQLException sql = (SQLException) e.getCause();
e.printStackTrace();
throw sql;
} catch (Exception e) {
e.printStackTrace();
throw e;
}
```
Please help me .. Please let me know how to execute the query
`if not exists (select * from sysobjects where name='getData' and xtype='U')create table getData (ID int NOT NULL, NAME varchar(50) NULL,AGE int NULL,DESIGNATION varchar(50) NULL)`
successfully ... why it's not getting executed from Mule JDBC endpoint while it's getting executed from JDBCTemplate in Java Code
|
Mule doesn't recognize the `if not exists...` query and thus doesn't know what to do with it.
To fix this you need to:
* create your own `org.mule.transport.jdbc.sqlstrategy.SqlStatementStrategyFactory` by sub-classing [the default one](https://github.com/mulesoft/mule/blob/mule-3.x/transports/jdbc/src/main/java/org/mule/transport/jdbc/sqlstrategy/DefaultSqlStatementStrategyFactory.java) and adding extra behaviour to support this type of query,
* Spring-inject it into the [JdbcConnector](https://github.com/mulesoft/mule/blob/mule-3.x/transports/jdbc/src/main/java/org/mule/transport/jdbc/JdbcConnector.java#L557).
|
So as per David's suggestion, I ended up using the `if not exists` query in a Java component and a Groovy component in the Mule flow, and it is working for me.
|
Mule JDBC endpoint causing exception while executing SQL query
|
[
"",
"sql",
"mule",
"mule-studio",
"mssql-jdbc",
""
] |
What is the Oracle equivalent of NVL for number datatypes?
This works since the datatype is VARCHAR:
```
select nvl(enrol.OUAC_APPLNO, 'blank') Invalid_OUAC_APPLNO
```
But this doesn't work since the datatype is NUMBER:
```
select nvl(enrol.ouac_type_id, 'blank') REGTYP
```
|
There is no equivalent and no Oracle functions will accept this ([apart from `DECODE()` but don't do that](https://stackoverflow.com/q/13712763/458741)); you're going to have to convert your number to a character:
```
select nvl(cast(enrol.OUAC_APPLNO as varchar2(10)), 'blank')
```
You may need to change the number of characters you're converting to as appropriate.
However, I don't know why you're doing this at all. By definition a NULL implies non-existence. If you want to display `blank` in order to confer non-existence this is something that you should be doing with your presentation layer rather than the database.
|
It won't work because Oracle wants every row to be of the same type, otherwise you cannot run functions on that column. You have to cast `ouac_type_id` to varchar as below:
```
select nvl(cast(enrol.OUAC_APPLNO as varchar2(10)), 'blank') REGTYP
```
|
Oracle equivalent of NVL for number datatype
|
[
"",
"sql",
"oracle",
""
] |
I have a test suite that runs against a database in a SQL Server 2012 cluster. I want this test suite to run as fast as possible. I'm willing to **sacrifice every durability and availability guarantee** for performance. The database is recreated during every test run, so it doesn't even have to survive server restarts.
Changing the recovery model with `ALTER DATABASE [dbname] SET RECOVERY SIMPLE` makes no noticeable difference.
A good option seems `DELAYED_DURABILITY`, but this is new in 2014 and therefore unavailable to me.
What can I do to make a crazy fast database on this cluster? I tried looking for in-memory databases but couldn't find any options. The cluster won't allow me to create a database on a local disk, insisting that it must be located on a clustered disk.
**Update:**
The application uses advanced SQL Server features, so I'm most likely stuck with MS SQL Server. The database itself is quite small because it's for testing (8MB mdf, 1MB ldf). The cluster nodes are the fastest servers in the network, so if I can misuse one of these nodes for an in-memory database that would certainly be fastest. But how?
|
If for some reason you are stuck on a clustered sql server instance but you do not want durability, maybe you could run your application on tempdb. Tempdb can be placed on local storage to avoid cluster overhead.
Also note that data stored on tempdb will initially stay on the buffer pool, which is RAM memory, and only spill to disk asynchronously as the sql server engine finds a better use for that memory space.
You can implement this solution, by scripting all your database objects and using a text editor to replace the name of your database with 'tempdb'. Then execute this script to create all objects on tempdb. Also set the initial catalog on the user running the application to tempdb and/or edit the required connection strings.
Keep in mind that tempdb is regenerated every time the instance is restarted, so you would lose all data and DDL changes.
That would certainly be a best effort to "sacrifice every durability and availability guarantee".
|
Could something like this work ([doc](http://msdn.microsoft.com/en-us/library/ms176061.aspx))?
```
CREATE DATABASE Sales
ON
( NAME = Sales_dat,
FILENAME = 'R:\saledat.mdf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 )
```
Where `R:` is a [RAM Drive](https://serverfault.com/questions/208560/how-to-create-a-ram-drive-ram-disk-in-windows-2008-r2)
|
How to create the fastest possible database on a SQL Server 2012 cluster, sacrificing any durability
|
[
"",
"sql",
"sql-server",
"performance",
"testing",
"durability",
""
] |
I'm trying to write a query that updates tbl8\_update\_transactions HID field (where it's null) with the primary key value (HID) that's highest in HOLIDAY\_DATE\_TABLE. I get the following error
> "An aggregate may not appear in the set list of an UPDATE statement"
I've read that I need to accomplish this using a subquery, but need help. Thanks
```
USE BillingUI;
UPDATE tbl8_update_transactions
SET tbl8_update_transactions.HID = MAX(HOLIDAY_DATE_TABLE.HID)
FROM HOLIDAY_DATE_TABLE
WHERE tbl8_update_transactions.HID = NULL;
```
**Update:** Tried the proposed solution
```
UPDATE tbl8_update_transactions
SET HID = h.maxHID
FROM (select max(HOLIDAY_DATE_TABLE.HID) as maxHID from HOLIDAY_DATE_TABLE) h
WHERE tbl8_update_transactions.HID IS NULL;
```
Unfortunately this affects 0 rows/doesn't work. I think this is because HID is a foreign key (in `tbl8_update_transactions`). The real issue seems to be my C# methodology for inserting the records into the table (it inserts the row without populating the foreign key). I'd like to handle it with triggers rather than C# code. My tables are as follows.
```
USE BillingUI;
CREATE TABLE HOLIDAY_DATE_TABLE
(
HID INT IDENTITY PRIMARY KEY,
TABLE_NUMBER nchar(2) NOT NULL,
HOLIDAY_DATE nchar(8) NOT NULL,
FIELD_DESCRIPTION nVARchar(43) NULL,
);
USE BillingUI;
CREATE TABLE tbl8_update_transactions
(
TID INT IDENTITY PRIMARY KEY,
TABLE_NUMBER nchar(2) NOT NULL,
HOLIDAY_DATE nchar(8) NOT NULL,
FIELD_DESCRIPTION nVARchar(43) NULL,
HID int,
FOREIGN KEY (HID) REFERENCES HOLIDAY_DATE_TABLE (HID)
);
```
I think this might solve the null foreign key issue if I can get help with it
```
CREATE TRIGGER tbl8_ins
ON HOLIDAY_DATE_TABLE
FOR INSERT
AS
BEGIN
INSERT INTO tbl8_update_transactions
SELECT * FROM HOLIDAY_DATE_TABLE
WHERE HID = MAX(HID);
END
```
In case you want to see my C# code that performs the insert successfully, but doesn't populate the foreign key
```
public ActionResult Create()
{
return View();
}
//
// POST: /Table8/Create
[HttpPost]
public ActionResult Create(HOLIDAY_DATE_TABLE holiday_date_table, tbl8_update_transactions tbl8_update_transaction)
{
if (ModelState.IsValid)
{
db.HOLIDAY_DATE_TABLE.Add(holiday_date_table);
db.SaveChanges();
db.tbl8_update_transactions.Add(tbl8_update_transaction);
db.SaveChanges();
return RedirectToAction("../Billing/HolidayDateTable");
}
return View(holiday_date_table);
}
```
|
You can write the query like this:
```
UPDATE tbl8_update_transactions
SET HID = h.maxHID
FROM (select max(HOLIDAY_DATE_TABLE.HID) as maxHID from HOLIDAY_DATE_TABLE) h
WHERE tbl8_update_transactions.HID IS NULL;
```
I find it confusing to use a `from` clause and not have the main table mentioned there. I prefer writing this as:
```
UPDATE ut
SET HID = h.maxHID
FROM tbl8_update_transactions ut CROSS JOIN
(select max(HID) as maxHID from HOLIDAY_DATE_TABLE) h
WHERE ut.HID IS NULL;
```
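To see the fix in action on toy data, here is a sketch using Python's sqlite3; older SQLite has no `UPDATE ... FROM`, so it uses the equivalent scalar-subquery form (holiday dates are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE HOLIDAY_DATE_TABLE (HID INTEGER PRIMARY KEY AUTOINCREMENT, HOLIDAY_DATE TEXT);
    CREATE TABLE tbl8_update_transactions (TID INTEGER PRIMARY KEY AUTOINCREMENT, HID INTEGER);
    INSERT INTO HOLIDAY_DATE_TABLE (HOLIDAY_DATE) VALUES ('20140101'), ('20140704');
    INSERT INTO tbl8_update_transactions (HID) VALUES (1), (NULL), (NULL);
""")

# Fill NULL foreign keys with the highest HID (2 here)
con.execute("""
    UPDATE tbl8_update_transactions
    SET HID = (SELECT MAX(HID) FROM HOLIDAY_DATE_TABLE)
    WHERE HID IS NULL
""")
rows = con.execute("SELECT TID, HID FROM tbl8_update_transactions ORDER BY TID").fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 2)]
```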
|
I guess your proposed code is correct; it just missed a SELECT:
```
UPDATE tbl8_update_transactions
SET HID = (SELECT h.maxHID
FROM (select max(HOLIDAY_DATE_TABLE.HID) as maxHID from HOLIDAY_DATE_TABLE) h
WHERE tbl8_update_transactions.HID IS NULL);
```
|
SQL Update Query - An aggregate may not appear in the set list of an UPDATE statement
|
[
"",
"sql",
"sql-server",
""
] |
I have a database table with an auto-increment index. When I insert a new item into the table, I don't know the index number. Is there a way to return the new index of my item at the same time?
I could assume that the new index would be `max-index + 1`. Is there some standard way to solve this?
Thanks for help.
|
For MySQL, you use the [`LAST_INSERT_ID()`](http://dev.mysql.com/doc/refman/5.6/en/information-functions.html#function_last-insert-id) function, like this:
```
INSERT INTO tbl (auto,text) VALUES(NULL,'text');
SELECT LAST_INSERT_ID();
```
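As an aside (my addition, not part of the MySQL answer): most engines also expose the last generated key through the client driver, which saves the extra round trip. For example SQLite's analogue in Python's sqlite3 is `cursor.lastrowid`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (auto INTEGER PRIMARY KEY AUTOINCREMENT, text TEXT)")

cur = con.execute("INSERT INTO tbl (text) VALUES ('text')")
first_id = cur.lastrowid   # SQLite's analogue of LAST_INSERT_ID()
cur = con.execute("INSERT INTO tbl (text) VALUES ('more')")
second_id = cur.lastrowid
print(first_id, second_id)  # 1 2
```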
|
In MSSQL use `@@IDENTITY`.
For example `INSERT INTO [TaTAble] (testchamps) VALUES (tesvaleurs); SELECT @@IDENTITY as [index];`
|
SQL: Insert into table and return new index?
|
[
"",
"sql",
"indexing",
""
] |
In SQL, how would I do something like a reverse join?
For example suppose I have the following two tables
```
UsedSlide
SlideId
UserId
SomeOtherValue
LegacySlide
SlideId
UserId
```
How would I select all rows in `UsedSlide` where the `SlideId` and `UserId` don't match the values in any row of `LegacySlide`?
Note that I specifically made it two things we're matching on since otherwise I know that I can use `NOT IN` and a subselect.
**Bonus:** In my scenario the dataset is small, but what if it was large? How do I do it most efficiently?
|
You could use the `not exists` operator:
```
SELECT *
FROM UsedSlide u
WHERE NOT EXISTS (SELECT *
FROM LegacySlide l
WHERE u.SlideId = l.SlideId AND u.UserId = l.UserId)
```
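Here is the anti-join rehearsed on toy data with Python's sqlite3 (row values are invented just for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE UsedSlide (SlideId INTEGER, UserId INTEGER, SomeOtherValue TEXT);
    CREATE TABLE LegacySlide (SlideId INTEGER, UserId INTEGER);
    INSERT INTO UsedSlide VALUES (1, 10, 'keep'), (2, 10, 'drop'), (1, 20, 'keep');
    INSERT INTO LegacySlide VALUES (2, 10);
""")

# Keep only (SlideId, UserId) pairs with no match in LegacySlide
rows = con.execute("""
    SELECT u.SlideId, u.UserId
    FROM UsedSlide u
    WHERE NOT EXISTS (SELECT 1 FROM LegacySlide l
                      WHERE u.SlideId = l.SlideId AND u.UserId = l.UserId)
    ORDER BY u.SlideId, u.UserId
""").fetchall()
print(rows)  # [(1, 10), (1, 20)] - (2, 10) is filtered out
```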
|
A `LEFT JOIN` can be used:
```
SELECT * from
UsedSlide US
LEFT JOIN LegacySlide LS
ON US.SlideId = LS.SlideId
and US.UserId = LS.UserId
WHERE LS.SlideId is NULL
AND LS.UserId is NULL
```
|
Get rows that don't match rows in another table
|
[
"",
"sql",
""
] |
I need to get the difference of the sums of two fields which are in a single table (really sorry if this is confusing); please read on for an example.
```
Id type account_id stock_id volume price value
==========================================================
1 BUY 1 1 5 500 2500
2 BUY 1 4 30 200 6000
6 BUY 1 1 10 500 5000
7 SELL 1 1 3 500 1500
8 SELL 1 1 2 500 1000
9 SELL 1 4 20 120 2400
```
Above is my sample data and I would like my SQL query result to be something like,
```
account_id stock_id volume totalAmount
============================================
1 1 10 5000
1 4 10 3600
```
basically here I am trying to get the total buy value for each unique account & stock combination and subtract the total sell value from it
Any help here would be highly appreciated.
Thanks in advance
|
**Fiddle Test:**
<http://sqlfiddle.com/#!2/53035/1/0>
```
select account_id,
stock_id,
sum(case when type = 'BUY' then volume else -volume end) as volume,
sum(case when type = 'BUY' then value else -value end) as totalamount
from tbl
group by account_id,
stock_id
having sum(case when type = 'BUY' then volume else -volume end) <> 0
```
I added the HAVING clause based on your comment.
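Running the question's sample rows through the conditional aggregation in SQLite (via Python's sqlite3, as a sketch) reproduces the expected result exactly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (type TEXT, account_id INTEGER, stock_id INTEGER,"
            " volume INTEGER, value INTEGER)")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?, ?)", [
    ("BUY", 1, 1, 5, 2500), ("BUY", 1, 4, 30, 6000), ("BUY", 1, 1, 10, 5000),
    ("SELL", 1, 1, 3, 1500), ("SELL", 1, 1, 2, 1000), ("SELL", 1, 4, 20, 2400),
])

# BUY adds, SELL subtracts - one pass, grouped per account/stock
rows = con.execute("""
    SELECT account_id, stock_id,
           SUM(CASE WHEN type = 'BUY' THEN volume ELSE -volume END) AS volume,
           SUM(CASE WHEN type = 'BUY' THEN value ELSE -value END) AS totalamount
    FROM tbl
    GROUP BY account_id, stock_id
    ORDER BY stock_id
""").fetchall()
print(rows)  # [(1, 1, 10, 5000), (1, 4, 10, 3600)]
```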
|
Just to reduce duplication I would change Brian's code to this:
```
SELECT
account_id,
stock_id,
SUM(volume * type_sign) as total_volume,
SUM(value * type_sign) as total_value
FROM
(select t.*, case when type = 'BUY' then 1 else -1 end as type_sign
from tbl) t
GROUP BY account_id,
stock_id
```
|
SQL Query - How to get difference of sum of multiple of cells of rows
|
[
"",
"mysql",
"sql",
""
] |
I have a PostgreSQL table with an `id` field (`bigint`). Can I somehow (possibly with the ORM) perform a check that the `id` value ends with '`00000`' (e. g. "7500000", "7600000", but not "123456")?
Thanks for any help.
|
Django has an [`__endswith` clause](https://docs.djangoproject.com/en/1.7/ref/models/querysets/#std:fieldlookup-endswith) you could use.
```
qs = MyModel.objects.filter(myfield__endswith='00000') #id in your case
```
This fetches the queryset with all ids that end with `00000`
Now, if you have an object's instance at hand, and need to check if its id ends with `00000`, you don't need the ORM. You can do something like:
```
if str(myObject.id).endswith('00000'):
```
DEMO:
```
>>> str(10000000).endswith('00000')
True
```
|
Try this: `YourModel.objects.filter(id__endswith='00000')`. It should also work on int fields :)
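Under the hood, `__endswith` on an integer column boils down to a `LIKE` on the value's text form. A minimal SQLite sketch of that SQL (the cast is written out explicitly here; the exact SQL Django emits depends on the backend):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mymodel (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO mymodel VALUES (?)",
                [(7500000,), (7600000,), (123456,)])

# The SQL an __endswith lookup roughly translates to: LIKE on the text form
rows = con.execute("""
    SELECT id FROM mymodel
    WHERE CAST(id AS TEXT) LIKE '%00000'
    ORDER BY id
""").fetchall()
```

This keeps 7500000 and 7600000 and filters out 123456, matching the question's examples.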
|
Django SQL LIKE on integer values?
|
[
"",
"sql",
"django",
""
] |
I have a `survey` where users can post `answers` and since the answers are being saved in the db as a `foreign key` for each question, I'd like to know which answer got the highest rating.
So if the DB looks somewhat like this:
```
answer_id
1
1
2
```
how can I find that the answer with an `id` of `1` was selected more times than the one with an id of `2` ?
**EDIT**
So far I've done this:
`@question = AnswerContainer.where(user_id: params[:user_id])` which lists the things a given user has voted for, but, obviously, that's not what I need.
|
You can do group by and then sort
```
Select answer_id, count(*) as maxsel
From poll
Group by answer_id
Order by maxsel desc
```
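The group-and-sort approach can be checked quickly with SQLite, using the three sample votes from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE poll (answer_id INTEGER)")
con.executemany("INSERT INTO poll VALUES (?)", [(1,), (1,), (2,)])

# Count votes per answer, most popular first
rows = con.execute("""
    SELECT answer_id, COUNT(*) AS maxsel
    FROM poll
    GROUP BY answer_id
    ORDER BY maxsel DESC
""").fetchall()
```

Answer 1 comes first with 2 votes, answer 2 follows with 1.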
|
you could try:
```
YourModel.group(:answer_id).count
```
for your example this returns something like: `{1 => 2, 2 => 1}`
|
How to order by largest amount of identical entries with Rails?
|
[
"",
"sql",
"ruby-on-rails",
""
] |
I have two tables (tags and tag map):
Tags:
```
id text
1 tag1
2 tag2
3 tag3
```
Tag map:
```
tag_id question_id
1 1
1 2
1 3
1 4
2 5
3 6
```
I would like to get results like in the table below:
```
id text count
1 tag1 4
2 tag2 1
3 tag3 1
```
My query:
```
SELECT
t.id,
t.text
FROM
`#__tags` AS t
```
How can I modify my query to also return the count?
Thanks!
|
Use below query:
```
SELECT t1.id,
t1.text,
count(t2.question_id) AS COUNT
FROM Table1 t1
LEFT JOIN Table2 t2 ON (t1.id=t2.tag_id)
GROUP BY t1.id;
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!2/3547a/6)
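Run against the question's sample data in SQLite, the `LEFT JOIN` version produces exactly the requested counts (and, unlike an inner join, it would still list a tag with zero questions, with a count of 0):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tags (id INT, text TEXT);
    CREATE TABLE tag_map (tag_id INT, question_id INT);
    INSERT INTO tags VALUES (1, 'tag1'), (2, 'tag2'), (3, 'tag3');
    INSERT INTO tag_map VALUES (1, 1), (1, 2), (1, 3), (1, 4), (2, 5), (3, 6);
""")

# COUNT(m.question_id) counts only matched rows, so unmatched tags get 0
rows = con.execute("""
    SELECT t.id, t.text, COUNT(m.question_id) AS cnt
    FROM tags t
    LEFT JOIN tag_map m ON m.tag_id = t.id
    GROUP BY t.id, t.text
    ORDER BY t.id
""").fetchall()
```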
|
You can achieve this by grouping on the individual IDs of the map table, and then counting the number of rows in each group. Edit: oh, and of course you need to filter out the rows which don't belong in the cartesian product that joining the 2 tables generates, by including a where-clause.
```
SELECT
t.id,
t.text,
COUNT(tm.tag_id) AS count
FROM
`#__tags` AS t,
`tag_map` AS tm
WHERE t.id = tm.tag_id
GROUP BY
tm.tag_id
```
|
Mysql | Count tags
|
[
"",
"mysql",
"sql",
"count",
"tags",
""
] |
Following SQL table definition is illustrated one of create table statement from my MYSQL database which is developed by a former developer of my company.
```
DROP TABLE IF EXISTS `classifieds`.`category_vehicles`;
CREATE TABLE `classifieds`.`category_vehicles`(
`adv_id_ref` BIGINT UNSIGNED NOT NULL,
`category_id_ref` TINYINT UNSIGNED NOT NULL,
`forsale_status` TINYINT (1) NOT NULL,
`vehicle_type_id_ref` TINYINT UNSIGNED NOT NULL,
`price` DOUBLE NULL DEFAULT NULL,
PRIMARY KEY (`adv_id_ref`)
) ENGINE = INNODB CHARSET = latin1 COLLATE = latin1_swedish_ci ;
```
In there look at the statement `price` DOUBLE NULL DEFAULT NULL,
Normally I'm using:
`price` DOUBLE NULL;
if I want to enable that column to accept NULL values.
So what are the differences between these 3 statements?
1. `price` DOUBLE NULL;
2. `price` DOUBLE DEFAULT NULL;
3. `price` DOUBLE NULL DEFAULT NULL;
|
There is no difference. `NULL DEFAULT NULL` is the implicit default.
[From the CREATE TABLE documentation:](http://dev.mysql.com/doc/refman/5.5/en/create-table.html#idm46972477071408)
* If neither NULL nor NOT NULL is specified, the column is treated as though NULL had been specified
[From the "Data Type Default Values" chapter:](http://dev.mysql.com/doc/refman/5.5/en/data-type-defaults.html)
* If a column definition includes no explicit DEFAULT value, MySQL determines the default value as follows: If the column can take NULL as a value, the column is defined with an explicit DEFAULT NULL clause.
|
```
price DOUBLE NULL;
```
`price` is a double and can be null and its default value is null.
```
price DOUBLE DEFAULT NULL;
```
`price` is a double and can be null and its default value is null.
```
price DOUBLE NULL DEFAULT NULL;
```
`price` is a double and can be null and its default value is null.
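The equivalence is easy to verify. SQLite behaves like MySQL here: a nullable column with no explicit DEFAULT already defaults to NULL, so inserting a row without a value gives the same result either way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (price DOUBLE);               -- implicit DEFAULT NULL
    CREATE TABLE t2 (price DOUBLE DEFAULT NULL);  -- explicit DEFAULT NULL
    INSERT INTO t1 DEFAULT VALUES;
    INSERT INTO t2 DEFAULT VALUES;
""")

defaults = (con.execute("SELECT price FROM t1").fetchone()[0],
            con.execute("SELECT price FROM t2").fetchone()[0])
```

Both columns come back as NULL (`None` in Python).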
|
NULL vs DEFAULT NULL vs NULL DEFAULT NULL in MYSQL column creation?
|
[
"",
"mysql",
"sql",
""
] |
Scenario: I have 2 queries on 2 tables, and I just want to view both query results as a single result set.
Details:
Table: loantrans
```
+-----+----------+---------+---------+---------+
| tid | date | account | purpose | out |
+-----+----------+---------+---------+---------+
| 1 |2014-08-12| 975 | Loan | 5000 |
| 2 |2014-08-12| 975 |Interest | 850 |
| 3 |2014-08-12| 975 | Loan | 150 |
| 4 |2014-08-12| 975 |Interest | 5000 |
+-----+----------+---------+---------+---------+
```
Table: fdrtrans
```
+-----+----------+---------+---------+---------+
| tid | date | account | purpose | out |
+-----+----------+---------+---------+---------+
| 1 |2014-08-12| 975 | FDR | 5000 |
| 2 |2014-08-12| 975 |Interest | 850 |
| 3 |2014-08-12| 975 | FDR | 150 |
| 4 |2014-08-12| 975 | Deposit | 5000 |
+-----+----------+---------+---------+---------+
```
Query 1:
```
SELECT MONTH(`loantrans`.`date`) as month, SUM(`loantrans`.`out`) AS loanout
FROM loantrans
WHERE (`loantrans`.`date` BETWEEN '2014-01-01' AND '2014-09-20')
AND (`loantrans`.`purpose` = 'Loan')
GROUP BY MONTH(`loantrans`.`date`)
ORDER BY `loantrans`.`date`
```
Result:
```
+-------+---------+
| month | loanout |
+-------+---------+
| 1 | 28000 |
| 2 | 27000 |
| 3 | 10200 |
| 4 | 7000 |
| 5 | 95000 |
| 6 | 2000 |
+-------+---------+
```
Query 2:
```
SELECT MONTH(`fdrtrans`.`date`) as month, SUM(`fdrtrans`.`in`) AS fdr
FROM fdrtrans
WHERE (`fdrtrans`.`date` BETWEEN '2014-01-01' AND '2014-09-20')
AND (`fdrtrans`.`purpose` = 'FDR')
GROUP BY MONTH(`fdrtrans`.`date`)
ORDER BY `fdrtrans`.`date`
```
Result :
```
+-------+---------+
| month | fdr |
+-------+---------+
| 1 | 2000 |
| 2 | 750 |
| 3 | 200 |
| 4 | 180 |
| 5 | 570 |
| 6 | 625 |
+-------+---------+
```
What I want is like
```
+-------+---------+---------+
| month | fdr | loanout |
+-------+---------+---------+
| 1 | 2000 | 28000 |
| 2 | 750 | 27000 |
| 3 | 200 | 10200 |
| 4 | 180 | 7000 |
| 5 | 570 | 95000 |
| 6 | 625 | 2000 |
+-------+---------+---------+
```
How can I get a result like this?
Please check this screenshot for more clear idea of what i'm looking for <https://www.dropbox.com/s/kn8z7z4v7sbahf0/Capture4.PNG?dl=0>
Database: <https://www.dropbox.com/s/8gbgrgvil915efr/bankdb.sql_7.zip?dl=0>
|
You can just join the two queries. Using `WITH` is not necessary, but it greatly enhances the readibility of your query for cases like this.
```
WITH q2 AS
(SELECT MONTH(`fdrtrans`.`date`) as month, SUM(`fdrtrans`.`in`) AS fdr
FROM fdrtrans
WHERE (`fdrtrans`.`date` BETWEEN '2014-01-01' AND '2014-09-20')
AND (`fdrtrans`.`purpose` = 'FDR')
GROUP BY MONTH(`fdrtrans`.`date`)
ORDER BY `fdrtrans`.`date`)
, q1 AS
(SELECT MONTH(`loantrans`.`date`) as month, SUM(`loantrans`.`out`) AS loanout
FROM loantrans
WHERE (`loantrans`.`date` BETWEEN '2014-01-01' AND '2014-09-20')
AND (`loantrans`.`purpose` = 'Loan')
GROUP BY MONTH(`loantrans`.`date`)
ORDER BY `loantrans`.`date`)
SELECT q1.month, q1.loanout, q2.fdr
FROM q1
JOIN q2 ON q1.month = q2.month
ORDER BY q1.month
```
|
In SQL, `UNION` is used to combine the rows returned by two `SELECT` queries:
```
select * from loantrans
union
select * from fdrtrans
```
|
How to get result from 2 different table?
|
[
"",
"mysql",
"sql",
""
] |
> I have following records
```
NAME BIRTHDATE
A 19/09/1990
B 25/09/1992
C 26/09/1993
```
and current date is 19/09/2014
I want to get the birthday record from current date to next seven days below is my query.
```
CREATE PROCEDURE [dbo].[Get_Birthday]
as
begin
Declare @CurrentDate date ,@NxtDate date
set @CurrentDate = GETDATE();
set @NxtDate = DATEADD(day,7,getdate())
print @CurrentDate
print @NxtDate
select DocId, DoctorName,DOA,Email from vw_DoctorDetail
where DOA between @CurrentDate and @NxtDate
end
```
|
You may try the following
```
CREATE PROCEDURE [dbo].[Get_Birthday]
as
begin
    Declare @CurrentDate date, @NxtDate date
    set @CurrentDate = DATEADD(year, -DATEDIFF(year, '19000101', GETDATE()), GETDATE());
    set @NxtDate = DATEADD(day, 7, @CurrentDate)
    print @CurrentDate
    print @NxtDate
    select DocId, DoctorName, DOA, Email from vw_DoctorDetail
    where DATEADD(year, -DATEDIFF(year, '19000101', DOA), DOA) between @CurrentDate and @NxtDate
end
```
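The idea of re-anchoring every birth date into the current year before comparing can be illustrated in SQLite with the question's sample birthdays (a fixed "today" keeps the example reproducible; note that a window crossing December 31 would need extra handling):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE doctors (name TEXT, doa TEXT)")
con.executemany("INSERT INTO doctors VALUES (?, ?)",
                [("A", "1990-09-19"), ("B", "1992-09-25"),
                 ("C", "1993-09-26"), ("D", "1991-11-30")])

today = "2014-09-19"  # fixed "current date" so the example is reproducible

# Replace the birth year with the current year, then test the 7-day window
rows = con.execute("""
    SELECT name FROM doctors
    WHERE date(substr(?, 1, 4) || substr(doa, 5))
          BETWEEN date(?) AND date(?, '+7 day')
    ORDER BY name
""", (today, today, today)).fetchall()
```

A, B and C fall inside 19-26 September; D's November birthday is excluded.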
|
Try this
```
SELECT DocId
,DoctorName
,DOA
,Email
FROM vw_DoctorDetail
WHERE (
        DATEPART(DAY, DOA) BETWEEN DATEPART(DAY, GETDATE())
        AND DATEPART(DAY, GETDATE() + 7)
      )
AND DATEPART(MONTH, DOA) = DATEPART(MONTH, GETDATE())
```
Note that `BETWEEN` needs the lower bound first; also, this day/month comparison only works while the 7-day window stays within a single month.
|
How to get record between two date and month not year?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I have a table called VAT, and I need to generate the monthly VAT report. So below is what I do...
```
SELECT SUM(Amount) AS "January"
FROM VAT
WHERE current_time_stamp BETWEEN '2014-01-01' AND '2014-01-31'
```
Just like the above query, I have 12 queries for January, February, March etc. So, instead of writing 12, is there a way that I can get all of these in 1 query?
|
You can use conditional sum with case statement,below query will give you one row with 12 columns as for each month's sum
```
SELECT
SUM(case when current_time_stamp BETWEEN '2014-01-01' AND '2014-01-31' then Amount else 0 end) AS `January` ,
SUM(case when current_time_stamp BETWEEN '2014-02-01' AND '2014-02-28' then Amount else 0 end) AS `February` ,
.
.
.
SUM(case when current_time_stamp BETWEEN '2014-12-01' AND '2014-12-31' then Amount else 0 end) AS `December`
FROM VAT
```
|
Use a `GROUP BY`, like this:
```
SELECT SUM(Amount), MONTH(current_time_stamp) AS `Month`
FROM VAT
WHERE current_time_stamp BETWEEN '2014-01-01' AND '2014-12-31'
GROUP BY MONTH(current_time_stamp)
```
However, it could be slow in case you have a lot of records.
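A quick SQLite check of the GROUP BY approach, using `strftime('%m', ...)` in place of MySQL's `MONTH()` (the VAT amounts are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vat (amount INT, current_time_stamp TEXT)")
con.executemany("INSERT INTO vat VALUES (?, ?)",
                [(100, "2014-01-05"), (250, "2014-01-20"), (400, "2014-02-11")])

# One row per month instead of 12 separate queries
rows = con.execute("""
    SELECT strftime('%m', current_time_stamp) AS month, SUM(amount)
    FROM vat
    WHERE current_time_stamp BETWEEN '2014-01-01' AND '2014-12-31'
    GROUP BY month
    ORDER BY month
""").fetchall()
```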
|
One `Select` query to generate 12 result sets
|
[
"",
"mysql",
"sql",
"database",
"select",
""
] |
I have found the first transaction (min), but when I add the column 'Winners', I get a row for their first win and a row for their first loss. I need only the first row, including whether they won or lost. I have tried aggregating the winners column to no avail. I would prefer not to sub-query if possible. Thanks in advance for checking this out.
```
SELECT
MIN(dbo.ADT.Time) AS FirstShowWager,
dbo.AD.Account, dbo.AD.FirstName,
dbo.AD.LastName, dbo.ADW.Winners
FROM
dbo.BLAH
WHERE
(dbo.ADT.RunDate = CONVERT(DATETIME, '2014-04-12 00:00:00', 102)) AND (dbo.ADW.Pool = N'shw')
GROUP BY
dbo.AD.Account,
dbo.AD.FirstName,
dbo.AD.LastName,
dbo.AD.RunDate,
dbo.ADW.Winners
ORDER BY
dbo.AD.Account
```
|
```
select sorted.*
from
(
SELECT dbo.ADT.Time AS FirstShowWager,
dbo.AD.Account, dbo.AD.FirstName,
dbo.AD.LastName, dbo.ADW.Winners,
ROW_NUMBER ( ) OVER (partition by dbo.AD.Account,
dbo.AD.FirstName,
dbo.AD.LastName,
dbo.AD.RunDate
order by dbo.ADT.Time) as rowNum
FROM dbo.AD
WHERE dbo.ADT.RunDate = CONVERT(DATETIME, '2014-04-12 00:00:00', 102)
AND dbo.ADW.Pool = N'shw'
) as sorted
where rowNum = 1
```
[ROW\_NUMBER](http://msdn.microsoft.com/en-us/library/ms186734.aspx)
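The `ROW_NUMBER` pattern can be tried out in SQLite 3.25 or later, which also supports window functions (the table here is a simplified stand-in for the joined view in the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE wagers (account INT, t TEXT, winners TEXT)")
con.executemany("INSERT INTO wagers VALUES (?, ?, ?)",
                [(1, "09:00", "win"),  (1, "10:00", "loss"),
                 (2, "08:30", "loss"), (2, "11:00", "win")])

# Earliest row per account, carrying the winners column along
rows = con.execute("""
    SELECT account, t, winners FROM (
        SELECT account, t, winners,
               ROW_NUMBER() OVER (PARTITION BY account ORDER BY t) AS rn
        FROM wagers
    ) WHERE rn = 1
    ORDER BY account
""").fetchall()
```

Each account keeps exactly one row, its earliest wager, including whether it won or lost.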
|
It sounds like you don't care about the value of winners column, by grouping on winners you'd get multiple rows, one for null and others for non-null values. If you don't care about the amount they've won but just simply if they've won or lost, you can do something like this,
```
SELECT
MIN(dbo.ADT.Time) AS FirstShowWager,
dbo.AD.Account, dbo.AD.FirstName,
dbo.AD.LastName, CASE WHEN dbo.ADW.Winners IS NULL THEN 0 ELSE 1 END
FROM
dbo.BLAH
WHERE
(dbo.ADT.RunDate = CONVERT(DATETIME, '2014-04-12 00:00:00', 102)) AND (dbo.ADW.Pool = N'shw')
GROUP BY
dbo.AD.Account,
dbo.AD.FirstName,
dbo.AD.LastName,
dbo.AD.RunDate,
dbo.ADW.Winners
ORDER BY
dbo.AD.Account
```
|
How to find the min, where TSQL groups by
|
[
"",
"sql",
"sql-server",
"t-sql",
"group-by",
"greatest-n-per-group",
""
] |
In my example, I have a table containing info about different venues, with columns for `city`, `venue_name`, and `capacity`. I need to select the `city` and `venue_name` for the venue with the highest `capacity` within each `city`. So if I have data:
```
city | venue | capacity
LA | venue1 | 10000
LA | venue2 | 20000
NY | venue3 | 1000
NY | venue4 | 500
```
... the query should return:
```
LA | venue2
NY | venue3
```
Can anybody give me advice on how to accomplish this query in SQL? I've gotten tangled up in joins and nested queries :P. Thanks!
|
```
select t.city, t.venue
from tbl t
join (select city, max(capacity) as max_capacity from tbl group by city) v
on t.city = v.city
and t.capacity = v.max_capacity
```
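Run against the question's sample venues in SQLite, the join-on-the-grouped-maximum returns exactly the expected pairs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (city TEXT, venue TEXT, capacity INT)")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?)",
                [("LA", "venue1", 10000), ("LA", "venue2", 20000),
                 ("NY", "venue3", 1000),  ("NY", "venue4", 500)])

# Join each row back to its city's maximum capacity
rows = con.execute("""
    SELECT t.city, t.venue
    FROM tbl t
    JOIN (SELECT city, MAX(capacity) AS max_capacity
          FROM tbl GROUP BY city) v
         ON t.city = v.city AND t.capacity = v.max_capacity
    ORDER BY t.city
""").fetchall()
```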
|
One way to do this is with `not exists`:
```
select i.*
from info i
where not exists (select 1
from into i2
where i2.city = i.city and i2.capacity > i.capacity);
```
|
SQL: get A with max B for every distinct C
|
[
"",
"mysql",
"sql",
"sqlite",
"greatest-n-per-group",
""
] |
I have a database table that is used to version in/out results based on the user "version". I need to combine where the Source and Target match, and where there is a Activate record that comes directly after a Deactivate record.
What I have currently:
```
ID Source Target Activate Deactivate
361440 1760 2569 1 78
532741 1760 2569 79 80
532742 1760 2569 81 84
574687 1760 2569 95 97
574687 1760 2569 98 NULL
```
What I would like to have:
```
ID Source Target Activate Deactivate
361440 1760 2569 1 84
574687 1760 2569 95 NULL
```
EDIT: My example only included a continuous chain of 1 additional record, there are some cases where the chain exists for multiple records. There is also the case where the Deactivation version has not been set yet. I have updated my example to reflect this.
Thanks
|
This will work for the data sample you provided:
```
SELECT
v1.ID,
v1.source,
v1.target,
v1.activate,
COALESCE(v2.Deactivate, v1.Deactivate) as Deactivate
FROM Versions v1 LEFT JOIN
Versions v2 ON v1.source = v2.source
and v1.target = v2. target
and v1.Deactivate + 1 = v2.Activate
```
But you need to decide what you want to do if you have a continuous chain of several records.
|
A totally different approach might use Oracle [`CONNECT BY` hierarchical queries](http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm). The idea here is to build a path from "consecutive" rows (according to activate/deactivate):
```
SELECT CONNECT_BY_ROOT A.id AS ROOTID,
A.SOURCE,
A.TARGET,
CONNECT_BY_ROOT A.activate AS ACTIVATE,
A.DEACTIVATE
FROM MYTABLE A
-- Keep only path ending on a leaf node (i.e.: no following range)
WHERE CONNECT_BY_ISLEAF <> 0
-- Make a path by chaining consecutive rows
CONNECT BY PRIOR A.deactivate+1 = A.activate
AND PRIOR A.source = source
AND PRIOR A.target = target
-- Keep only path starting on a root node (i.e.: no previous range)
START WITH NOT EXISTS (SELECT 1 FROM MYTABLE B WHERE B.deactivate+1 = A.activate)
```
Producing:
```
ROOTID SOURCE TARGET ACTIVATE DEACTIVATE
361440 1760 2569 1 84
574687 1760 2569 95 -
```
|
SQL: combine continuous records
|
[
"",
"mysql",
"sql",
"sql-server-2008",
"plsql",
""
] |
I have the date to check as shown below:
Input Date:
```
17-09-2014
```
For which I am converting in my dynamic script:
Attempt #1:
```
CAST((convert(date,@FDate, 105)) AS nvarchar(50))
```
Error:
> Error converting data type varchar to date.
Attempt #2:
```
convert(date, @FDate, 105)
```
Error:
> The data types nvarchar and date are incompatible in the add operator.
Attempt #3:
```
cast(@FDate as varchar(50))
```
Error:
> Error converting data type varchar to date.
One complete attempt, taken from sqlfiddle.com/#!3/d41d8/38976 in the comments:
```
DECLARE @querys NVARCHAR(max)
DECLARE @Date DATE
SET @Date = '17-09-2014'
SET @querys = 'SELECT' + CAST((convert(date,@Date, 105)) AS nvarchar(50)) + ''
EXEC(@querys)
```
|
Try
```
convert(Datetime, @FDate,105)
```
I tried following script and it worked well for SQL Server 2005 and SQL Server 2012:
```
Declare @FDate varchar(100);
set @FDate='17-09-2014';
Select convert(Varchar(50), convert(Datetime, @FDate,105) ,105)
```
Verified your fiddle script, just had small change and it worked as expected.
Here is new script that I tested on fiddle:
```
DECLARE @qs VARCHAR(max)
Declare @FsDate varchar(100)
set @FsDate = '17-09-2014'
SET @qs = 'Select convert(Varchar(50), convert(Datetime, '''+@FsDate+''',105) ,105) '
EXEC(@qs)
```
|
There seems to be some confusion with [Convert](http://msdn.microsoft.com/en-US/us-en/library/ms187928.aspx).
Comparing your fiddles from the comments with your shown attempts: your fiddles declare `DECLARE @Date DATE`.
The first argument of CONVERT is the target type.
(Attempt #1) Since your @FDate is already of type DATE, `convert(date,@FDate, 105)` leads to a conversion from DATE to DATE; your outer cast to nvarchar does not seem to work due to your locale settings.
(Attempt #2) is shown incomplete, since the shown part `convert(date, @FDate, 105)` does work, even though it won't change anything (conversion from DATE to DATE).
(Attempt #3) does not seem to work due to your locale settings.
Your shown fiddle:
```
DECLARE @querys NVARCHAR(max)
DECLARE @Date DATE
SET @Date = '17-09-2014'
SET @querys = 'SELECT' + CAST((convert(date,@Date, 105)) AS nvarchar(50)) + ''
EXEC(@querys)
```
already fails here: `SET @Date = '17-09-2014'`. A safe way, independent of locale settings, would be to use the format YYYYMMDD: `SET @Date = '20140917'`.
Since you are trying to build a varchar, your target type for CONVERT would be VARCHAR, not DATE, and you would have to add quotation marks; a simple PRINT @querys or SELECT @querys would show what you are trying to execute.
Taken from your fiddle, you are trying to convert a Date to a varchar and then add it to a dynamic SQL which you want to execute, so one way to go would be:
```
DECLARE @querys NVARCHAR(max)
DECLARE @Date DATE
SET @Date = '20140917'
-- get it as varchar
--SET @querys = ' SELECT ''' + convert(varchar(20),@Date,105) + ''''
--get it as date
SET @querys = ' SELECT convert(date,''' + convert(varchar(20),@Date,105) + ''',105)'
EXEC(@querys)
```
|
Error converting data type varchar to date in SQL Server 2008 R2
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
Consider the following table structure:

I'm creating a stored procedure which will delete data from `RelatedA` and `RelatedB`. Afterwards I want to delete everything from `MasterData` that no longer has a relation in **both** `RelatedA` and `RelatedB`.
I tried 3 different queries to get the IDs for deletion
## Using simple `WHERE`
```
SELECT MD.ID
FROM MasterData MD
WHERE MD.ID NOT IN (SELECT MasterDataID FROM RelatedA)
AND MD.ID NOT IN (SELECT MasterDataID FROM RelatedB)
```
## Using join + `IS NULL` in `WHERE`
```
SELECT MD.ID
FROM MasterData MD
JOIN RelatedA RelA on RelA.MasterDataID = MD.ID
JOIN RelatedB RelB on RelB.MasterDataID = MD.ID
WHERE RelA.MasterDataID IS NULL
AND RelB.MasterDataID IS NULL
```
## Using join + `IS NULL` in `ON`
```
SELECT MD.ID
FROM MasterData MD
JOIN RelatedA RelA on RelA.MasterDataID = MD.ID AND RelA.MasterDataID IS NULL
JOIN RelatedB RelB on RelB.MasterDataID = MD.ID AND RelB.MasterDataID IS NULL
```
**None of these return any records** for deletion, and I'm sure there are a couple because I just inserted fresh records into `MasterData` (and there are no triggers that insert data into the related tables).
I actually have two questions:
1. **What am I doing wrong?**
2. What is the exact difference between my queries? (bonus)
|
First let's look at the ones that don't work
```
DELETE FROM MasterData
WHERE ID IN (
SELECT MD.ID
FROM MasterData MD
JOIN RelatedA RelA on RelA.MasterDataID = MD.ID
JOIN RelatedB RelB on RelB.MasterDataID = MD.ID
WHERE RelA.MasterDataID IS NULL
AND RelB.MasterDataID IS NULL
)
```
The `WHERE` conditions will never be true because you're doing inner joins instead of left joins.
---
```
DELETE FROM MasterData
WHERE ID IN (
SELECT MD.ID
FROM MasterData MD
JOIN RelatedA RelA on RelA.MasterDataID = MD.ID AND RelA.MasterDataID IS NULL
JOIN RelatedB RelB on RelB.MasterDataID = MD.ID AND RelB.MasterDataID IS NULL
)
```
The join condition will never be true because `null` does not equal `null` (i.e. `null = null` evaluates to `null` rather than true), while the second part requires that `MasterDataID` be null.
---
```
DELETE FROM MasterData
WHERE ID IN (
SELECT MD.ID
FROM MasterData MD
WHERE MD.ID NOT IN (SELECT MasterDataID FROM RelatedA)
AND MD.ID NOT IN (SELECT MasterDataID FROM RelatedB)
)
```
This includes an unnecessary subquery but unless I'm missing something seems like it should work. You can rewrite w/o the subquery as (you can similarly also omit the subquery in the 1st 2 queries above)
```
DELETE MD
FROM MasterData MD
WHERE MD.ID NOT IN (SELECT MasterDataID FROM RelatedA)
  AND MD.ID NOT IN (SELECT MasterDataID FROM RelatedB)
```
Personally I prefer not exists
```
DELETE MD
FROM MasterData MD
WHERE NOT EXISTS (SELECT 1 FROM RelatedA WHERE MasterDataID = MD.ID)
  AND NOT EXISTS (SELECT 1 FROM RelatedB WHERE MasterDataID = MD.ID)
```
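The double `NOT EXISTS` delete can be exercised end-to-end in SQLite (which, incidentally, also rejects an alias on the DELETE target, so the full table name is used in the subqueries):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MasterData (ID INT);
    CREATE TABLE RelatedA (MasterDataID INT);
    CREATE TABLE RelatedB (MasterDataID INT);
    INSERT INTO MasterData VALUES (1), (2), (3);
    INSERT INTO RelatedA VALUES (1);  -- 1 is referenced by A
    INSERT INTO RelatedB VALUES (2);  -- 2 is referenced by B
""")

# Delete master rows with no relation in either table
con.execute("""
    DELETE FROM MasterData
    WHERE NOT EXISTS (SELECT 1 FROM RelatedA
                      WHERE MasterDataID = MasterData.ID)
      AND NOT EXISTS (SELECT 1 FROM RelatedB
                      WHERE MasterDataID = MasterData.ID)
""")
remaining = [r[0] for r in con.execute("SELECT ID FROM MasterData ORDER BY ID")]
```

Only the unreferenced row (ID 3) is removed.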
|
I would suggest using `NOT EXISTS`
```
select
md.*
from MasterData md
where not exists (
select 1 from RelatedA A where md.id = A.MasterDataID)
and not exists (
select 1 from RelatedB B where md.id = B.MasterDataID)
;
```
Use these together (as shown above) or separately to assess the data. If satisfied it converts easily to a delete query.
`See this SQLFiddle demo`
|
Delete from table with no related data - WHERE NOT IN and JOIN both failing
|
[
"",
"sql",
"sql-server",
"join",
"where-clause",
"sql-delete",
""
] |
```
id | sent_time | status | campaignId |
---+-------------------+--------+------------+
1 |2013-12-26 00:00:00| sent | 12 |
---+-------------------+--------+------------+
2 |2013-12-26 00:00:00| sent | 12 |
---+-------------------+--------+------------+
3 |2013-12-26 00:00:00| sent | 11 |
---+-------------------+--------+------------+
4 |2018-10-21 00:00:00| draft | 13 |
---+-------------------+--------+------------+
5 |2018-10-21 00:00:00| draft | 14 |
---+-------------------+--------+------------+
6 |2018-10-21 00:00:00| draft | 14 |
---+-------------------+--------+------------+
```
I want to display the data within a date range on the ***sent\_time*** field, and sum the rows with 'sent' status that share the same 'campaignId'
and this is my query:
```
SELECT DATE(sent_time) AS date_sent_time, SUM(status='sent') AS total_sent_status
FROM sms_table WHERE (DATE(sent_time) BETWEEN 2014-08-26 AND 2014-09-30)
AND campaignId = 12 GROUP BY DATE(sent_time)
```
Can somebody help me?
|
Try this:
```
SELECT
DATE(sent_time) AS sent_date,
COUNT(id) AS total_sent_status
FROM sms_table
WHERE
status = 'sent'
AND campaignID = 12
AND sent_time BETWEEN '2014-08-26 00:00:00' AND '2014-09-30 23:59:59'
GROUP BY sent_date
```
A few notes:
* You can just move status filter to WHERE clause to simplify things. Make sure you have index on this column
* You should not use a calculated value for comparison in a where clause like you are doing with `DATE(sent_time)` as this prevents use of index. You can simply use `00:00:00` and `23:59:59` to work around this, or better yet, if your `sent_time` field really has all time components of `00:00:00` as shown in your sample data, just make it a date field.
|
You can move the condition on `status='sent'` to the WHERE clause and add `COUNT(id)` to the SELECT as follows:
```
SELECT DATE(sent_time) AS date_sent_time, COUNT(id) AS total_sent_status
FROM sms_table
WHERE sent_time BETWEEN '2014-08-26 00:00:00' AND '2014-09-30 23:59:59'
AND campaignId = 12
AND status = 'sent'
GROUP BY date_sent_time
```
You should indeed not use functions in the where clause as this can prevent MySQL from using an index. Also note that you can use aliases in the GROUP BY clause.
Here is an [sqlfiddle](http://sqlfiddle.com/#!2/d1f779/1/0).
|
mysql - sum field by date range and group by
|
[
"",
"mysql",
"sql",
""
] |
I have a table named `infoone` with two columns `address` and `locationCITY`.
I want to write a search query.
If user searches for `Edmonton`, it should return all records that have city `Edmonton`.
If user searches for `12 main street Edmonton`, then it should return all respective records. `12 main street` is the address and `Edmonton` is the city
The query I have is the following:
```
string sql = "select PLACEID,LEFT(CONVERT(NVARCHAR(1000),description),500)+'...' as des1,LOCATIONCITY,ADDRESS,DateT,RENT from infoone where ";
sql += "(((address like '%"+txtSearch.Text+"%') or (address like '')) and ((locationcity like '%"+txtSearch.Text+"%') or (locationcity like '')) and ((address LIKE '%"+txtSearch.Text+"%') or (address like '')))";
```
Above query is not returning anything when I search:
```
main street Edmonton
```
User will also be allowed to search without city like: `12th main street`
What I am doing wrong?
Please help
Thanks
|
Free form searching for addresses is very difficult.
Let's look at your examples
```
Address locationCity
12 Main Street Edmonton
456 Thomas Ave St Martin
```
**Possible searches**
* Edmonton - If only 1 word, should we assume it is a city? If so, how
would the user find St. Martin?
* 12 Main Street Edmonton - Now, how do you know Edmonton is the city?
Could the user search for just a street name?
I would suggest that your interface accepts two columns, one for address and one for city, it will make searching much easier.
```
where <other conditions>
AND
(locationcity like '%CitySearchFld%' and address like '%AddresssSearchFld%')
```
No need to search for empty, because if the user leaves the field blank, a search of %% will match all rows
**Other considerations**
What happens in the user searches for
```
12 Main St
```
or
**Edmenton**
**Abbreviations? Misspellings?**
To handle abbreviations, I would build a stop-word list, which would remove common abbreviations from the address field, things like St, Street, Avenue, Ave, etc. So the search becomes
**12 Main**
I'd hate to miss a record because I wasn't sure if it was st or Street in the table.
You can also use a function know as Soundex (native SQL) or Metaphone (custom SQL or CLR) to deal with misspellings...
Good luck
|
I don't believe your where clause is doing what you've intended. Let's remove the outer string var and reformat for readability:
```
select PLACEID,LEFT(CONVERT(NVARCHAR(1000),description),500)+'...' as des1
,LOCATIONCITY,ADDRESS,DateT,RENT
from infoone
where(
((address like '%"+txtSearch.Text+"%') or (address like ''))
-- #1 address must match full text or be blank
and
((locationcity like '%"+txtSearch.Text+"%') or (locationcity like ''))
-- #2 locationcity must match full text or be blank
and
((address LIKE '%"+txtSearch.Text+"%') or (address like ''))
-- #3 address must match full text or be blank. Seems a duplicate of #1
)
```
These three are chained together with ANDs, so all three conditions must be true for it to return a result.
At the very least, the Where clause might be re-written to be this:
```
select PLACEID,LEFT(CONVERT(NVARCHAR(1000),description),500)+'...' as des1
,LOCATIONCITY,ADDRESS,DateT,RENT
from infoone
where(
(address like '%"+txtSearch.Text+"%')
or
(locationcity like '%"+txtSearch.Text+"%')
or
(address + locationcity like '%"+txtSearch.Text+"%')
or
(address + ' ' + locationcity like '%"+txtSearch.Text+"%')
)
```
This will return a record if a text match was found in either address or locationcity, or if the text matches them when combined, or combined with a space.
This is why you are not getting any results back on the sample input you provided. With the above code, you should get a match on the final fourth condition when searching for "main street edmonton"
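The concatenated fourth condition is the one that matches "main street Edmonton" across both columns. A runnable SQLite sketch (SQLite concatenates with `||` where T-SQL uses `+`; the parameter placeholders also avoid the SQL-injection risk of building the WHERE clause by string concatenation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE infoone (address TEXT, locationcity TEXT)")
con.executemany("INSERT INTO infoone VALUES (?, ?)",
                [("12 main street", "Edmonton"),
                 ("456 Thomas Ave", "St Martin")])

search = "main street Edmonton"
# Match on address, on city, or on "address city" combined
rows = con.execute("""
    SELECT address, locationcity FROM infoone
    WHERE address LIKE '%' || ? || '%'
       OR locationcity LIKE '%' || ? || '%'
       OR address || ' ' || locationcity LIKE '%' || ? || '%'
""", (search, search, search)).fetchall()
```

Only the Edmonton row matches, via the combined-column condition.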
You should get your code working with non-dynamic SQL in SSMS first, using a variable for your text (e.g. @TEXT), and once it is returning the results you want based on your text param, you can switch it over to dynamic. Writing this as a SQL string to execute is just complicating things while you're still working on developing your SQL code.
For a broader solution, you may want to look at Full Text Search:
[http://msdn.microsoft.com/en-us/library/ms142571.aspx](http://msdn.microsoft.com/en-us/library/ms142571.aspx "MSDN Full Text Search")
This breaks apart a string into individual words, and searches for those words, it allows you to come up with weighted guesses and rankings of match. It even lets you use similar terms using thesaurus, so if someone searched for "123 Main St." and "123 Main Street" was in the database, it would find those as a match.
|
sql search query with like operator
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
"sql-server-2012",
"sql-like",
""
] |
I have a `BaseProducts` table. I want to update a row from another row of the same table. This is what I have tried, but it does not update anything:
```
UPDATE DP
SET
DP.[Description] = SP.[Description]
................................
................................
................................
,[LastModificationDateUtc] = GETUTCDATE()
FROM
[dbo].[BaseProducts] DP
INNER JOIN
[dbo].[BaseProducts] SP ON DP.Id = SP.Id
WHERE
SP.Id = @SourceBaseProductId
AND DP.Id = @DestinationBaseProductId;
```
I have `SourceBaseProductId` and `DestinationBaseProductId`.
|
From what I understand you didn't mean to use an `inner join`, but a `cross join`:
```
UPDATE DP
SET
DP.[Description] = SP.[Description]
................................
................................
................................
,[LastModificationDateUtc] = GETUTCDATE()
FROM
[dbo].[BaseProducts] DP
CROSS JOIN
[dbo].[BaseProducts] SP
WHERE
SP.Id = @SourceBaseProductId
AND DP.Id = @DestinationBaseProductId;
```
In this case you can have a different source and destination id, as you provide them in your where clause.
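The same copy can also be written without any join, using a scalar subquery, which is portable across engines. A SQLite sketch with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BaseProducts (Id INT, Description TEXT)")
con.executemany("INSERT INTO BaseProducts VALUES (?, ?)",
                [(1, "source text"), (2, "old text")])

src, dst = 1, 2
# Copy the source row's column into the destination row
con.execute("""
    UPDATE BaseProducts
    SET Description = (SELECT Description FROM BaseProducts WHERE Id = ?)
    WHERE Id = ?
""", (src, dst))
desc = con.execute("SELECT Description FROM BaseProducts WHERE Id = 2").fetchone()[0]
```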
|
There is no need for an inner join. This will work:
```
UPDATE DP
SET DP.[Description] = SP.[Description]
FROM [dbo].[BaseProducts] DP
,[dbo].[BaseProducts] SP
WHERE
SP.Id = @SourceBaseProductId
AND DP.Id = @DestinationBaseProductId;
```
|
Update a Row from another row of same table?
|
[
"",
"sql",
"sql-server",
""
] |
I can't get this SQL query to work the way I want. I am trying to match a record using "like" and "and", but I don't get any result. When I change it to "or", I get results.
Why do I not get any results when I use "and"?
```
SELECT a.MOD_REQ_NUM,
a.REFERENCE_CODE,
a.REFERENCE_VALUE,
a.ACTIVE_DETAIL,
b.REQUESTOR_WWID,
b.REQUEST_DATE
from DB_MOD_REQ_DETAILS a, DB_MOD_REQ_HEADER b
where a.MOD_REQ_NUM=b.MOD_REQ_NUM
and a.ACTIVE_DETAIL='Y'
and b.ACTIVE_HEADER='Y'
----------------------------------------------------------
and
(a.REFERENCE_VALUE like '%43598%' and a.REFERENCE_CODE = 2)
and
(a.REFERENCE_VALUE like '%3694894%' and a.REFERENCE_CODE = 4)
```
---
## This is the screenshot when i use "or"

---
---
## Update
I want to search `a.REFERENCE_VALUE` only within the given `a.REFERENCE_CODE`. I have form fields and a C# foreach loop; every field has its own REFERENCE\_CODE. If the user searches a single field, I want to do:
```
(a.REFERENCE_VALUE like '%43598%' and a.REFERENCE_CODE = 2)
```
If the user searches more than one field, I want to match:
```
a.REFERENCE_VALUE like '%43598%' and a.REFERENCE_CODE = 2
and
a.REFERENCE_VALUE like '%3694894%' and a.REFERENCE_CODE = 4
```
this is the screen shot of the query with all the data

|
From your question and your comment to jmcilhinney's answer the SQL statement you want is:
```
SELECT a.MOD_REQ_NUM,
a.REFERENCE_CODE,
a.REFERENCE_VALUE,
a.ACTIVE_DETAIL,
b.REQUESTOR_WWID,
b.REQUEST_DATE
FROM DB_MOD_REQ_DETAILS a, DB_MOD_REQ_HEADER b
WHERE a.MOD_REQ_NUM=b.MOD_REQ_NUM
AND a.ACTIVE_DETAIL='Y'
AND b.ACTIVE_HEADER='Y'
----------------------------------------------------------
AND
(
(a.REFERENCE_VALUE like '%43598%' and a.REFERENCE_CODE = 2)
OR
(a.REFERENCE_VALUE like '%3694894%' and a.REFERENCE_CODE = 4)
)
```
Notice the parentheses to the last clause.
Here is why:
1. Your first statement has AND operators everywhere. This results in having a conjunction of other clauses (`a.MOD_REQ_NUM=b.MOD_REQ_NUM AND a.ACTIVE_DETAIL='Y' AND b.ACTIVE_HEADER='Y' AND a.REFERENCE_VALUE like '%43598%' AND a.REFERENCE_VALUE like '%3694894%'`) as well as these:
`a.REFERENCE_CODE = 4 AND a.REFERENCE_CODE = 2` which is always false. Therefore you will get no result from this SQL statement.
2. Your second statement is close to what you want but not quite it. As Luuan pointed out the parentheses are required: A AND B AND C OR D is not the same as A AND B AND (C OR D). See <http://en.wikipedia.org/wiki/Associative_property> for the long explanation.
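A small runnable sketch of why the parentheses matter (Python's `sqlite3` with toy table and column names): `AND` binds tighter than `OR`, so without parentheses the `OR` branch bypasses the `active = 'Y'` filter entirely:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (active TEXT, code INTEGER, val TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("N", 2, "43598"),
    ("Y", 2, "43598"),
    ("Y", 4, "3694894"),
    ("N", 4, "3694894"),   # inactive, but matches the second pattern
])

# No parentheses: parsed as (active AND code=2 AND ...) OR (code=4 AND ...),
# so the second branch ignores the active='Y' condition.
without = con.execute(
    "SELECT COUNT(*) FROM t WHERE active = 'Y' "
    "AND code = 2 AND val LIKE '%43598%' "
    "OR code = 4 AND val LIKE '%3694894%'").fetchone()[0]

# Parenthesized: active='Y' applies to both branches.
with_parens = con.execute(
    "SELECT COUNT(*) FROM t WHERE active = 'Y' AND ("
    "(code = 2 AND val LIKE '%43598%') OR "
    "(code = 4 AND val LIKE '%3694894%'))").fetchone()[0]

print(without, with_parens)
```

Here the unparenthesized form returns 3 rows (the inactive `code = 4` row leaks in), while the parenthesized form returns the intended 2.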
---
**EDIT:** the piece of software you have provided in a comment to your question is the source of the problem as it does not generate the SQL statement that you need (and which both jmcilhinney and Beatles1692 tried to provide).
**First** replace the following line in GetParamDataSet
```
strCond = " a.REFERENCE_VALUE LIKE'%" + item.Substring(item.IndexOf('=') + 1).Trim() + "%'" +
" and a.REFERENCE_CODE= " + id;
```
with
```
strCond = " (a.REFERENCE_VALUE LIKE'%" + item.Substring(item.IndexOf('=') + 1).Trim() + "%'" +
" and a.REFERENCE_CODE= " + id + ")";
```
*Notice the parentheses!*
**Second** replace the main block in `GetDataValue(List<string> conditions)` with
```
string strSQL = "SELECT a.MOD_REQ_NUM, " +
" a.REFERENCE_CODE, " +
" a.REFERENCE_VALUE, " +
" a.ACTIVE_DETAIL, " +
" b.REQUESTOR_WWID, " +
" b.REQUEST_DATE " +
" from DB_MOD_REQ_DETAILS a, DB_MOD_REQ_HEADER b " +
" where a.MOD_REQ_NUM=b.MOD_REQ_NUM " +
" and a.ACTIVE_DETAIL='Y' and b.ACTIVE_HEADER='Y' ";
// build the disjunction of "(a.REFERENCE_VALUE LIKE '%%' and a.REFERENCE_CODE = %)" blocks
string referenceCodeValueCondition = string.Join<string>( " OR ", conditions );
// append the combined block to the other conditions
if (! string.IsNullOrEmpty( referenceCodeValueCondition ))
{
strSQL = strSQL + " AND (" + referenceCodeValueCondition + ")";
}
```
|
This part doesn't make sense:
```
and
(a.REFERENCE_VALUE like '%43598%' and a.REFERENCE_CODE = 2)
and
(a.REFERENCE_VALUE like '%3694894%' and a.REFERENCE_CODE = 4)
```
How can `REFERENCE_CODE` ever be 2 and 4? At the very least, there should be an OR between those last two sets of conditions, i.e.
```
and
((a.REFERENCE_VALUE like '%43598%' and a.REFERENCE_CODE = 2)
or
(a.REFERENCE_VALUE like '%3694894%' and a.REFERENCE_CODE = 4))
```
You may or may not need to change other AND operators to OR as well.
|
How to use multiple "like" and "and"
|
[
"",
"sql",
"sql-server",
""
] |
I have two columns in the database:
```
Value | Date
---------------------
1.3 | 1410374280000
```
Value is a float and Date is an epoch integer in milliseconds
I am trying to optimize this query as this returns a few thousand rows.
```
SELECT * FROM data WHERE date >= [some_time_in_the_past]
```
Since I am rendering a chart on the front end, I need far fewer data points. So I am truncating the date.
```
SELECT data.date / 5000 * 5000 AS truncated, data.price FROM data
WHERE date >= [some_time_in_the_past]
```
The expression above truncates the date into 5-second buckets. So now I want to `SELECT DISTINCT` on it.
However:
```
SELECT DISTINCT truncated, price
```
This will SELECT DISTINCT on both the truncated date and the price. Is there a way to SELECT DISTINCT on only the truncated date and still get the price column too?
|
To pick an arbitrary price with [`DISTINCT ON`](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) for each set of identical `truncated` (which fits your non-existing definition):
```
SELECT DISTINCT ON (1)
(date / 5000) * 5000 AS truncated, price
FROM data
WHERE date >= [some_time_in_the_past]
ORDER BY 1;
```
Add more `ORDER BY` expressions to pick something more specific, like @Clodoaldo already provided.
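`DISTINCT ON` is PostgreSQL-specific. As an illustration of the same bucketing idea on a portable engine, this sketch (Python's `sqlite3`, invented sample rows) groups by the truncated date and picks one price per bucket with an aggregate, which plays the role that `ORDER BY` plays inside `DISTINCT ON`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data ("date" INTEGER, price REAL)')
con.executemany("INSERT INTO data VALUES (?, ?)",
                [(1000, 1.0), (2000, 2.0), (6000, 3.0)])

# One row per 5-second bucket; MIN picks a deterministic price per bucket.
rows = con.execute(
    'SELECT ("date" / 5000) * 5000 AS truncated, MIN(price) '
    'FROM data WHERE "date" >= ? GROUP BY truncated ORDER BY truncated',
    (0,)).fetchall()
print(rows)
```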
|
[`distinct on`](http://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT)
```
select distinct on (truncated) *
from (
select data.date / 5000 * 5000 as truncated, data.price
from data
where date >= [some_time_in_the_past]
) s
order by truncated, price desc
```
If you want the lowest price change from `price desc` to `price asc`
|
How to select distinct on one column while returning other columns
|
[
"",
"sql",
"postgresql",
"distinct",
""
] |
Good Day
I am trying to remove the part of a string after a specific character. It almost works when I use this query:
```
LEFT(T1.ItemCode, CHARINDEX('-VASA', T1.ItemCode) - 1) AS 'Item Code'
```
The problem I have is that when I add the -1 at the end, I get an error: "Invalid length parameter passed on the LEFT or SUBSTRING function." When I remove it, the query returns the item code but keeps that trailing '-' I am also trying to get rid of. This is an example of an item code I am trying to fix: 0C0002AC-GG-VASA. Without the '-1' I get 0C0002AC-GG-, but I want it to return: 0C0002AC-GG
Thanks
|
Try this:
```
LEFT(T1.ItemCode, CHARINDEX('-VASA', T1.ItemCode + '-VASA') - 1) AS 'Item Code'
```
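A sketch of why appending the marker works (Python's `sqlite3`, using `instr`/`substr` as stand-ins for `CHARINDEX`/`LEFT`): because the needle is guaranteed to be found in `ItemCode + '-VASA'`, the position is never 0 and the `- 1` never goes negative:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def strip_suffix(code):
    # substr(code, 1, instr(code || '-VASA', '-VASA') - 1):
    # appending '-VASA' guarantees instr() finds it, so codes without
    # the suffix come back unchanged instead of raising a length error.
    return con.execute(
        "SELECT substr(?1, 1, instr(?1 || '-VASA', '-VASA') - 1)",
        (code,)).fetchone()[0]

print(strip_suffix("0C0002AC-GG-VASA"))  # suffix removed
print(strip_suffix("0C0002AC-GG"))       # no suffix: unchanged
```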
|
The problem is that some of your item codes may not contain '-VASA'. For those rows, `CHARINDEX` returns 0, and subtracting 1 gives a negative length. So first check whether the value contains '-VASA' at all,
like:
```
Case when CHARINDEX('-VASA', T1.ItemCode) >=1 Then LEFT(T1.ItemCode, CHARINDEX('-VASA', T1.ItemCode) - 1) Else T1.ItemCode End AS 'Item Code'
```
|
Removing part of string after specific character
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have below tables:
```
Create Table Country(CountryID int primary key, CountryName nvarchar(100) not null)
Create Table DeviceType(DeviceTypeID int primary key, DeviceTypeName nvarchar(100) not null)
Create Table UserStat
(
LocalID int primary key identity(1,1),
TimePeriod datetime not null,
CountryID int not null,
DeviceTypeID int not null,
UserCount int not null,
CONSTRAINT [FK_UserStat_Country] FOREIGN KEY (CountryID) REFERENCES [dbo].[Country] (CountryID),
CONSTRAINT [FK_UserStat_DeviceType] FOREIGN KEY (DeviceTypeID) REFERENCES [dbo].[DeviceType] (DeviceTypeID))
Insert into Country values (1, 'India')
Insert into Country values (2, 'USA')
Insert Into DeviceType values (1, 'Mobile')
Insert Into DeviceType values (2, 'Desktop')
Insert into UserStat values (CAST(DATEFROMPARTS(2014,9,1) AS datetime),1,1,9999)
Insert into UserStat values (CAST(DATEFROMPARTS(2014,9,1) AS datetime),1,2,10000)
Insert into UserStat values (CAST(DATEFROMPARTS(2014,9,1) AS datetime),2,1,20000)
Insert into UserStat values (CAST(DATEFROMPARTS(2014,9,1) AS datetime),2,2,19999)
Insert into UserStat values (CAST(DATEFROMPARTS(2014,8,1) AS datetime),1,1,50000)
Insert into UserStat values (CAST(DATEFROMPARTS(2014,8,1) AS datetime),1,2,60000)
Insert into UserStat values (CAST(DATEFROMPARTS(2014,8,1) AS datetime),2,1,70000)
Insert into UserStat values (CAST(DATEFROMPARTS(2014,8,1) AS datetime),2,2,80000)
```
Now, I need to get total users based on location and device type for a particular month and year. I tried below query for this:
```
Declare @region nvarchar(50)='Both'
Declare @calendarYear int = 2014
SELECT YEAR(U.TimePeriod) AS [Year],
MONTH(U.TimePeriod) AS [Month],
CASE
WHEN @region = 'India' THEN (SELECT SUM(U.UserCount) WHERE C.CountryName = 'India' AND D.DeviceTypeName = 'Mobile' GROUP BY C.CountryName, D.DeviceTypeName)
WHEN @region = 'USA' THEN (SELECT SUM(U.UserCount) WHERE C.CountryName = 'USA' AND D.DeviceTypeName = 'Mobile' GROUP BY C.CountryName, D.DeviceTypeName)
WHEN @region = 'Both' THEN (SELECT SUM(U.UserCount) WHERE C.CountryName IN ('India', 'USA') AND D.DeviceTypeName = 'Mobile' GROUP BY C.CountryName, D.DeviceTypeName)
ELSE 0
END AS [MobileUsers],
CASE
WHEN @region = 'India' THEN (SELECT SUM(U.UserCount) WHERE C.CountryName = 'India' AND D.DeviceTypeName = 'Desktop' GROUP BY C.CountryName, D.DeviceTypeName)
WHEN @region = 'USA' THEN (SELECT SUM(U.UserCount) WHERE C.CountryName = 'USA' AND D.DeviceTypeName = 'Desktop' GROUP BY C.CountryName, D.DeviceTypeName)
WHEN @region = 'Both' THEN (SELECT SUM(U.UserCount) WHERE C.CountryName IN ('India', 'USA') AND D.DeviceTypeName = 'Desktop' GROUP BY C.CountryName, D.DeviceTypeName)
ELSE 0
END AS [DesktopUsers]
FROM dbo.UserStat AS U WITH (NOLOCK)
Join dbo.Country AS C WITH (NOLOCK) ON C.CountryID=U.CountryID
Join dbo.DeviceType AS D WITH (NOLOCK) ON D.DeviceTypeID=U.DeviceTypeID
WHERE YEAR(U.TimePeriod) = @calendarYear
GROUP BY YEAR(U.TimePeriod), MONTH(U.TimePeriod);
```
When I try to run this, I am getting below errors:
```
Column 'dbo.Country.CountryName' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Column 'dbo.DeviceType.DeviceTypeName' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
Even if I remove the `Group By` clause from the CASE statements above, I still get the same error. Can you please let me know why this error occurs and how I can proceed here? Any help is appreciated.
|
```
Declare @region nvarchar(50)='BOTH'
Declare @calendarYear int = 2014
SELECT YEAR(U.TimePeriod) AS [Year]
,MONTH(U.TimePeriod) AS [Month]
,SUM(CASE WHEN D.DeviceTypeName = 'Mobile' THEN U.UserCount ELSE NULL END) AS [MobileUsers]
,SUM(CASE WHEN D.DeviceTypeName = 'Desktop' THEN U.UserCount ELSE NULL END) AS [DesktopUsers]
FROM dbo.UserStat AS U WITH (NOLOCK)
Join dbo.Country AS C WITH (NOLOCK) ON C.CountryID = U.CountryID
Join dbo.DeviceType AS D WITH (NOLOCK) ON D.DeviceTypeID = U.DeviceTypeID
WHERE YEAR(U.TimePeriod) = @calendarYear
AND (
(C.CountryName = @region AND @region <> 'Both')
OR
(@region = 'Both' AND C.CountryName IN ('India','USA'))
)
GROUP BY YEAR(U.TimePeriod), MONTH(U.TimePeriod);
```
## [`SQL FIDDLE`](http://sqlfiddle.com/#!3/0a14d/1)
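The heart of this answer is conditional aggregation: one `SUM(CASE ...)` per output column pivots the device rows into columns. A minimal self-contained sketch (Python's `sqlite3`, with a simplified single-table schema instead of the three joined tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stats (month INTEGER, device TEXT, users INTEGER)")
con.executemany("INSERT INTO stats VALUES (?, ?, ?)", [
    (8, "Mobile", 50000), (8, "Desktop", 60000),
    (9, "Mobile", 9999),  (9, "Desktop", 10000),
])

# Each SUM only adds rows whose CASE matches; other rows contribute NULL.
rows = con.execute(
    "SELECT month, "
    "SUM(CASE WHEN device = 'Mobile' THEN users END) AS MobileUsers, "
    "SUM(CASE WHEN device = 'Desktop' THEN users END) AS DesktopUsers "
    "FROM stats GROUP BY month ORDER BY month").fetchall()
print(rows)
```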
|
I think this gives you want you want:
```
DECLARE @region varchar(10);
DECLARE @calendarYear int;
SET @calendarYear = 2014
SET @region = 'USA'
;WITH Data AS (
SELECT U.*, C.CountryName, D.DeviceTypeName
FROM dbo.UserStat AS U WITH (NOLOCK)
INNER JOIN dbo.Country AS C WITH (NOLOCK) ON C.CountryID=U.CountryID
INNER JOIN dbo.DeviceType AS D WITH (NOLOCK) ON D.DeviceTypeID=U.DeviceTypeID
)
SELECT YEAR(d1.TimePeriod) AS [Year],
MONTH(d1.TimePeriod) AS [Month],
CASE
WHEN @region = 'India' THEN (SELECT SUM(d2.UserCount) FROM Data d2
WHERE YEAR(d2.TimePeriod) = YEAR(d1.TimePeriod) AND MONTH(d2.TimePeriod) = MONTH(d1.TimePeriod)
AND d2.CountryName = 'India' AND d2.DeviceTypeName = 'Mobile' GROUP BY YEAR(d2.TimePeriod))
WHEN @region = 'USA' THEN (SELECT SUM(d2.UserCount) FROM Data d2
WHERE YEAR(d2.TimePeriod) = YEAR(d1.TimePeriod) AND MONTH(d2.TimePeriod) = MONTH(d1.TimePeriod)
AND d2.CountryName = 'USA' AND d2.DeviceTypeName = 'Mobile' GROUP BY YEAR(d2.TimePeriod))
WHEN @region = 'Both' THEN (SELECT SUM(d2.UserCount) FROM Data d2
WHERE YEAR(d2.TimePeriod) = YEAR(d1.TimePeriod) AND MONTH(d2.TimePeriod) = MONTH(d1.TimePeriod)
AND d2.CountryName IN ('India', 'USA') AND d2.DeviceTypeName = 'Mobile' GROUP BY YEAR(d2.TimePeriod))
ELSE 0
END AS [MobileUsers],
CASE
WHEN @region = 'India' THEN (SELECT SUM(d2.UserCount) FROM Data d2
WHERE YEAR(d2.TimePeriod) = YEAR(d1.TimePeriod) AND MONTH(d2.TimePeriod) = MONTH(d1.TimePeriod)
AND d2.CountryName = 'India' AND d2.DeviceTypeName = 'Desktop' GROUP BY d2.CountryName, d2.DeviceTypeName)
WHEN @region = 'USA' THEN (SELECT SUM(d2.UserCount) FROM Data d2
WHERE YEAR(d2.TimePeriod) = YEAR(d1.TimePeriod) AND MONTH(d2.TimePeriod) = MONTH(d1.TimePeriod)
AND d2.CountryName = 'USA' AND d2.DeviceTypeName = 'Desktop' GROUP BY d2.CountryName, d2.DeviceTypeName)
WHEN @region = 'Both' THEN (SELECT SUM(d2.UserCount) FROM Data d2
WHERE YEAR(d2.TimePeriod) = YEAR(d1.TimePeriod) AND MONTH(d2.TimePeriod) = MONTH(d1.TimePeriod)
AND d2.CountryName IN ('India', 'USA') AND d2.DeviceTypeName = 'Desktop' GROUP BY d2.DeviceTypeName)
ELSE 0
END AS [DesktopUsers]
FROM Data d1
WHERE YEAR(d1.TimePeriod) = @calendarYear
GROUP BY YEAR(d1.TimePeriod), MONTH(d1.TimePeriod);
```
|
Column is invalid in the select list because it is not contained in an aggregate function or in Group By clause
|
[
"",
"sql",
"sql-server",
"group-by",
""
] |
I have a configuration table and a users table.
users:
```
| id | name |
|----|----------|
| 0 | Bob |
| 1 | Ted |
| 2 | Sam |
```
config:
```
| user_id | name | value |
|---------|----------|-------|
| 0 | a | 11 |
| 0 | b | 2 |
| 0 | c | 54 |
| 1 | a | 5 |
| 1 | b | 3 |
| 1 | c | 0 |
| 2 | a | 1 |
| 2 | b | 74 |
| 2 | c | 54 |
```
I normalized the configuration this way since the number of config entries is unknown, but I will have to query users based on this config, so it couldn't be stored in serialized form.
My issue is how do I find users based on multiple rows? For instance:
Select all users with a > 4 and b < 5
This should return Bob and Ted.
|
Try this:
```
SELECT us.name
FROM USERS us
WHERE EXISTS (SELECT name FROM CONFIG WHERE name='a' AND value>4 AND user_id=us.id)
AND EXISTS (SELECT name FROM CONFIG WHERE name='b' AND value<5 AND user_id=us.id)
```
Alternatively, you can use two joins:
```
SELECT us.name
FROM USERS us, CONFIG c1, CONFIG c2
WHERE us.id=c1.user_id
AND c1.name='a'
AND c1.value<4
AND us.id=c2.user_id
AND c2.name='b'
AND c2.value>5
```
|
Using groups:
```
SELECT u.name
FROM users u
JOIN config c
ON c.user_id = u.id
GROUP BY u.name
HAVING MAX(c.name = 'a' AND c.value > 4)
AND MAX(c.name = 'b' AND c.value < 5)
```
Using joins:
```
SELECT u.name
FROM users u
JOIN config a
ON a.user_id = u.id
AND a.name = 'a'
AND a.value > 4
JOIN config b
ON b.user_id = u.id
AND b.name = 'b'
AND b.value < 5
```
I prefer the `JOIN` method, as you can name each `JOIN` after the property and collect the conditions in the `JOIN`. You also don't have to worry about the `GROUP`s which makes it more flexible for aggregates.
A bonus over `EXISTS` is that you can easily access the properties of the config if you require further joins/calculations.
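Both answers above solve a "must satisfy all properties" query (relational division). A runnable sketch of the `EXISTS` variant with the question's own data (Python's `sqlite3`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.execute("CREATE TABLE config (user_id INTEGER, name TEXT, value INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(0, "Bob"), (1, "Ted"), (2, "Sam")])
con.executemany("INSERT INTO config VALUES (?, ?, ?)", [
    (0, "a", 11), (0, "b", 2),  (0, "c", 54),
    (1, "a", 5),  (1, "b", 3),  (1, "c", 0),
    (2, "a", 1),  (2, "b", 74), (2, "c", 54),
])

# One EXISTS per required property: each subquery checks a different row.
names = [r[0] for r in con.execute(
    "SELECT u.name FROM users u "
    "WHERE EXISTS (SELECT 1 FROM config c "
    "              WHERE c.user_id = u.id AND c.name = 'a' AND c.value > 4) "
    "  AND EXISTS (SELECT 1 FROM config c "
    "              WHERE c.user_id = u.id AND c.name = 'b' AND c.value < 5) "
    "ORDER BY u.id")]
print(names)
```

As expected from the question, this returns Bob and Ted.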
|
How to select data given several rows match?
|
[
"",
"mysql",
"sql",
""
] |
How do I get the "AND" of all the values of a boolean column in a single SQL query?

For the values above the output should be false;
if all values were true, the output would be true;
if at least one value is false, the output is false.
|
Try this
```
IF EXISTS (SELECT ActiveStatus FROM TableName WHERE ActiveStatus = 0)
    SELECT 'False' AS OutputColumn
ELSE
    SELECT 'True' AS OutputColumn
```
|
You can indirectly use `MIN` for AND, and `MAX` for OR, if you take into account that a `BIT` column is either zero or one:
* `MIN`: only if all the values are 1 (true) the result will be 1 (true), it works like `AND`
* `MAX`: if there is at least a 1 (true) the result will be 1 (true), it works like `OR`
If you try to use `MIN` or `MAX` directly on a `BIT` column, it will fail, so you need to cast it to an integer type, and back to bit, like this:
```
SELECT CAST(MIN(CAST(BitColumn AS TINYINT)) AS BIT) FROM Table -- for AND
SELECT CAST(MAX(CAST(Bitcolumn AS TINYINT)) AS BIT) FROM Table -- for OR
```
It's easy to include this in a more complex query, for example one with `GROUP BY`
|
how to get "AND" of all the values of a boolean column in a single sql
|
[
"",
"sql",
"sql-server",
""
] |
I have 54567 rows in this jos\_clientes\_contratos table;
When I execute the query:
```
SELECT * FROM `jos_clientes_contratos`
INNER JOIN jos_users
ON jos_clientes_contratos.cpf_cnpj_cliente = jos_users.username;
```
It returns 54560 rows, which means there are 7 rows that were not listed in the results. How can I list those 7 rows from jos\_clientes\_contratos that do not satisfy the condition `jos_clientes_contratos.cpf_cnpj_cliente = jos_users.username`?
|
```
SELECT *
FROM `jos_clientes_contratos`
LEFT JOIN jos_users
ON jos_clientes_contratos.cpf_cnpj_cliente = jos_users.username
WHERE jos_users.username is null
```
Read up on Joins: <http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/>
LEFT Join basically says... Return all records from the left table (jos\_clientes\_contratos) and only those on the right (jos\_users) which match. This is the nature of an outer join. LEFT, RIGHT, or full outer. They return all data from one or both of the sets including records that don't match.
Since you are interested in the jos\_clientes\_contratos records that do not match, you want the ones that have a jos\_users\_username that is null.
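The anti-join pattern in miniature (Python's `sqlite3`, with shortened hypothetical column names): the `LEFT JOIN` keeps every contract row, and the `IS NULL` filter keeps only the ones that found no matching user:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contratos (cpf TEXT)")
con.execute("CREATE TABLE users (username TEXT)")
con.executemany("INSERT INTO contratos VALUES (?)", [("a",), ("b",), ("c",)])
con.executemany("INSERT INTO users VALUES (?)", [("a",), ("b",)])

# Rows from the left table with no match on the right come back with NULLs
# in the right table's columns; filtering on IS NULL keeps exactly those.
orphans = [r[0] for r in con.execute(
    "SELECT c.cpf FROM contratos c "
    "LEFT JOIN users u ON u.username = c.cpf "
    "WHERE u.username IS NULL")]
print(orphans)
```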
|
You can do an anti-join:
```
SELECT * FROM t1
LEFT JOIN t2 ON (t1.key = t2.key)
WHERE t2.key IS NULL
```
|
Query on mysql with inner join
|
[
"",
"mysql",
"sql",
""
] |
I am trying to create a trigger to act like an exception that will not let a unordered pair not be inserted or updated in a dataset twice.
For example given the set {A,B} where A and B are both columns that are primary keys and {A,B} exists in the table, I do not want to allow the set {B,A} to exist because that relationship is already defined with {A,B}.
Here is my attempt, but it gives `Trigger created with compilation errors.` and also I don't see how to do this against `new` and `old` information.
```
CREATE TRIGGER pair
BEFORE INSERT OR UPDATE ON pairing
DECLARE exists_pair NUMBER;
BEGIN
SELECT MAX(COUNT_VAL) INTO exist_pair
FROM (SELECT COUNT(*) FROM pairing p, pairing p2 WHERE p2.element_one = p.element_two AND p.element_one = p2.element_two)
IF exist_pair > 0 THEN
RAISE SOME_EXCEPTION;
END IF;
END;
```
Obviously this is not exactly what I want, but it gives the idea. As written, it returns 0 every time until a bad entry is made; after that it reports every entry, valid or not, as invalid. So it's not what I want, and I have no idea how to use `:new` and `:old` in this context.
This has to work for oracle.
Here is a SQLfiddle with an example insert that should fail:
<http://sqlfiddle.com/#!4/1afb7/1/0>
|
A function based index will work:
```
create unique index unique_pair_ix on pairing (least(element_one,element_two),greatest(element_one,element_two));
```
Btw: Using a row trigger to select from the same table will cause:
```
ORA-04091: table XXXX is mutating, trigger/function may not see it
```
if you attempt to insert or update more than a single row in a single statement. So you wont be able to use :OLD and :NEW.
|
> For example given the set {A,B} where A and B are both columns that are primary keys and {A,B} exists in the table, I do not want to allow the set {B,A} to exist because that relationship is already defined with {A,B}.
The most straightforward way of enforce uniqueness is to add an unique index. But a "simple" index will not work:
```
-- This does not help here
CREATE UNIQUE INDEX sample_uniq_ab_fn ON SAMPLE (B,A);
```
Assuming `PRIMARY KEY(A,B)`, with this setup, when inserting `(A=1,B=2)` you will simply enforce the uniqueness of `(1,2)` in the primary key index *and* the uniqueness of `(2,1)` in my unique index. This will not prevent the insertion of `(A=2,B=1)` as `(2,1)` is *not* in the primary key index. Nor `(1,2)` in the unique index.
---
Here you need a [function based index](http://docs.oracle.com/cd/E11882_01/appdev.112/e10471/adfns_indexes.htm#ADFNS00505), as you want *(min(a,b),max(a,b))* to be unique. Something like that:
```
CREATE TABLE SAMPLE (
A NUMBER(3),
B NUMBER(3),
PRIMARY KEY (A,B));
CREATE UNIQUE INDEX sample_uniq_ab_fn
ON SAMPLE (CASE WHEN A < B THEN A ELSE B END,
CASE WHEN A < B THEN B ELSE A END);
INSERT INTO SAMPLE(A,B) VALUES (10,20) -- OK
INSERT INTO SAMPLE(A,B) VALUES (20,10) -- ORA-00001: unique constraint (SYLVAIN.SAMPLE_UNIQ_AB_FN) violated
```
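The same normalize-then-index idea can be sketched portably (Python's `sqlite3`, which also supports unique indexes on expressions), with the `CASE` pair storing each pair as (smaller, larger) for uniqueness purposes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pairing (a INTEGER, b INTEGER)")
# Index the pair in canonical (smaller, larger) order so {A,B} and {B,A}
# collide on the same index key.
con.execute(
    "CREATE UNIQUE INDEX uq_pair ON pairing ("
    "CASE WHEN a < b THEN a ELSE b END, "
    "CASE WHEN a < b THEN b ELSE a END)")

con.execute("INSERT INTO pairing VALUES (10, 20)")  # OK
try:
    con.execute("INSERT INTO pairing VALUES (20, 10)")  # same unordered pair
    dup_rejected = False
except sqlite3.IntegrityError:
    dup_rejected = True
print(dup_rejected)
```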
|
Trigger for unique pair constraint
|
[
"",
"sql",
"database",
"oracle",
"triggers",
""
] |
I have 2 tables like this,
**Table1**
```
Id Locations
-- ---------
1 India, Australia
2 US , UK
```
**Table2**
```
Table2Id Location
-------- --------
101 Italy
102 UK
103 Hungary
104 India
```
I need to inner join these 2 tables on the condition that the `Locations` field in table1 contains the `Location` field from table2. The result will be like
```
Id Table2Id Location Locations
-- -------- -------- ---------
1 104 India India, Australia
2 102 UK US , UK
```
I tried something like
```
Select t1.id,
t2.Table2Id,
t1.Locations,
t2.Location
From Table1 t1
Inner join Table2 t2 On CONTAINS(t1.Locations, t2.Location)
```
But the second parameter of `CONTAINS` must be a string literal; it does not allow a column name there.
I cannot use a temp table or variable in the query, because this query needs to run on an email campaign tool called `ExactTarget`, which supports neither.
Any help will be highly appreciated. Thank you.
|
[SQLFiddle example](http://sqlfiddle.com/#!2/1a3e4) for MySQL 5.5
[SQLFiddle example](http://sqlfiddle.com/#!3/1a3e4/1) for SQL
**Table and data**
```
create table table1 (id int, locations varchar(100));
insert into table1 values
(1, 'India, Australia'),
(2, 'US, UK');
create table table2 (table2id int, location varchar(100));
insert into table2 values
(101, 'Italy'),
(102, 'UK'),
(103, 'Hungary'),
(104, 'India');
```
**MySQL query**
```
select
table1.id,
table2.table2id,
table2.location,
table1.locations
from table1
join table2 on table1.locations like concat('%', table2.location, '%')
```
**SQL Server query**
```
select
table1.id,
table2.table2id,
table2.location,
table1.locations
from table1
join table2 on table1.locations like '%' + table2.location + '%'
```
**Edit**
In the case where one location value (such as `US`) appears as a substring of another country name (such as `AUStralia`), the above query may not work as desired. To work around that problem, here's a possible query to use
```
select
table1.id,
table2.table2id,
table2.location,
table1.locations
from table1
join table2 on
',' + replace(table1.locations,', ', ',') + ',' like '%,' + table2.location + ',%'
```
This query forces `India, Australia` to become `,India,Australia,`. This is then compared with `,US,` and therefore will not suffer from incorrect results.
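A sketch of the substring false positive and the delimiter fix (Python's `sqlite3`; note its `LIKE` is case-insensitive for ASCII, which is exactly how `US` sneaks into `AUStralia`):

```python
import sqlite3

con = sqlite3.connect(":memory:")

def naive(locations, location):
    # plain '%needle%' match, prone to substring false positives
    return con.execute("SELECT ? LIKE '%' || ? || '%'",
                       (locations, location)).fetchone()[0]

def wrapped(locations, location):
    # 'India, Australia' -> ',India,Australia,', then match ',US,'
    return con.execute(
        "SELECT ',' || replace(?, ', ', ',') || ',' LIKE '%,' || ? || ',%'",
        (locations, location)).fetchone()[0]

print(naive("India, Australia", "US"))     # 1: false positive
print(wrapped("India, Australia", "US"))   # 0: fixed
print(wrapped("India, Australia", "India"))  # 1: still matches real entries
```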
|
If you are using Mysql you could take a look at the following option:
[INSTR](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_instr)
```
select table2id, location, table1.id, locations
from table2 inner join table1 on instr(locations,location) >= 1;
```
[SQL Fiddle Link](http://sqlfiddle.com/#!2/6284a/1)
|
Inner-join in sql with contains condition
|
[
"",
"sql",
"inner-join",
"contains",
"exacttarget",
""
] |
I have a particular question which is about MySQL. Please have a look at the query and the result set below:
```
SELECT `SentTo`,
`Read`
FROM `Emails` as t
```
```
hello@mydomain.com | 1
--------------------------------
hello@mydomain.com | 0
--------------------------------
hello@mydomain.com | 1
--------------------------------
support@mydomain.com | 0
--------------------------------
support@mydomain.com | 1
--------------------------------
```
I'd like to get the count of emails for each address, and the total read. I will get unread by subtracting read from total on the script side, so I don't really need it here.
```
Email | Total | Read
--------------------------------------
hello@mydomain.com | 3 | 1
--------------------------------------
support@mydomain.com | 2 | 1
--------------------------------------
```
Any advice? I have tried to apply sub-queries to this but I couldn't find a proper solution.
|
This should work using `GROUP BY`:
```
SELECT `SentTo`,
COUNT(*) Total,
SUM(`Read`=0)
FROM Emails
GROUP BY `SentTo`
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!2/cd3f3/2)
|
A 'Group by' clause can solve this problem as follows:
```
SELECT `SentTo`, COUNT(*) as totalSend, (COUNT(*) - SUM(`Read`)) as totalUnread
FROM `Emails`
GROUP BY `SentTo`
```
|
MySQL SUM(), COUNT() and GROUP BY
|
[
"",
"mysql",
"sql",
""
] |
Why isn't my query able to return any rows, and why doesn't it go to the *Else* branch in my conditional statement?
```
For i As Integer = 0 To dtoffenseinfo2.Rows.Count - 1
Dim dtoffenseinfo3 As New DataTable
Dim adapter3 As New OleDbDataAdapter("SELECT SUM(Price) AS TPRICE FROM tblFuelTransactionLogs " & _
"WHERE Created_Date=#" & Format(dtoffenseinfo2.Rows(i).Item("Dates"), "Short Date") & "#", DBConnection)
If dtoffenseinfo3.Rows.Count <= 0 Then
Else
Dim x As Decimal = dtoffenseinfo3.Rows(0).Item("TPRICE")
cmd.ExecuteNonQuery()
End If
Next
```
In my query, the value of dtoffenseinfo2.Rows(i).Item("Dates") comes from a lookup table with dates (for the whole month of September); on each loop iteration, the value of dtoffenseinfo2.Rows(i).Item("Dates") runs from 09/01/2014 up to 09/30/2014 respectively.
I already have 09/18/2014 in both tables but it still doesn't return any rows. I am also not getting errors. Am I using SELECT SUM() wrong? Sorry for any obvious mistake.
|
You forgot
```
adapter3.Fill(dtoffenseinfo3)
```
That's what's missing. :)
|
Who says that your query can't return any rows? You're not actually executing it so how would you even know? What's the point of the data adapter if you don't call its `Fill` method?
|
What's missing / wrong with my SQL Query? (Doesn't return any row)
|
[
"",
"sql",
"vb.net",
"ms-access-2010",
"oledb",
""
] |
I have a database with 169 tables
I need this column in every table:
```
wid integer not null primary key
```
I tried this(Thanks <https://stackoverflow.com/users/27535/gbn> for the solution):
```
SELECT
'ALTER TABLE ' + T.name + ' ADD foo int NULL'
FROM
sys.tables AS T
WHERE
T.is_ms_shipped = 0
```
But it didn't work on PostgreSQL; it only worked in T-SQL.
How to add this column in every table at once ?
|
```
do $$
declare
selectrow record;
begin
for selectrow in
select
'ALTER TABLE '|| T.mytable || ' ADD COLUMN foo integer NULL' as script
from
(
select tablename as mytable from pg_tables where schemaname ='public' --your schema name here
) t
loop
execute selectrow.script;
end loop;
end;
$$;
```
---
You can test whether all your *tables* altered with the new *column* using the following *select*
```
select
table_name,COLUMN_NAME
from
INFORMATION_SCHEMA.COLUMNS
where
COLUMN_NAME='foo' -- column name here
```
|
Try this (change 'public' to whatever schema you're doing this in)
```
DO $$
DECLARE
row record;
cmd text;
BEGIN
FOR row IN SELECT schemaname, tablename FROM pg_tables WHERE schemaname = 'public' LOOP
cmd := format('ALTER TABLE %I.%I ADD COLUMN foo SERIAL PRIMARY KEY ', row.schemaname, row.tablename);
RAISE NOTICE '%', cmd;
-- EXECUTE cmd;
END LOOP;
END
$$ LANGUAGE plpgsql;
```
If you run as is, it'll show you the commands. Uncomment the EXECUTE line to actually perform the alterations.
I'd run within a transaction so you can roll back if you're not happy with the results.
Note that the type is `SERIAL` - the column type will be integer, but also creates a sequence owned by the table and defaults the column value to the next value of that sequence.
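The shape of both answers, enumerate the catalog, generate one `ALTER` per table, then execute it, can be sketched outside plpgsql too. A hypothetical Python/`sqlite3` version (SQLite's catalog is `sqlite_master` rather than `pg_tables`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("CREATE TABLE t1 (x INTEGER); CREATE TABLE t2 (y INTEGER);")

# Enumerate user tables from the catalog, then run one generated ALTER each.
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for t in tables:
    con.execute(f'ALTER TABLE "{t}" ADD COLUMN foo INTEGER')

cols = [r[1] for r in con.execute("PRAGMA table_info(t1)")]
print(cols)
```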
|
PostgreSQL: How to add a column in every table of a database?
|
[
"",
"sql",
"database",
"postgresql",
""
] |
I'm trying to insert a new user into the users table if it does not already exist. I tried this query but receive an error:
```
INSERT INTO USER (name,email)
VALUES ('John','john@mmm.com')
WHERE NOT EXISTS
(SELECT id FROM USER WHERE email = 'john@mmm.com')
```
How do I insert the user only if it does not already exist?
error:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'where not exists (select id from user where email = 'john@mmm.com')' at line 5
```
thanks,
|
Use `insert . . . select`:
```
INSERT INTO USER (name, email)
SELECT 'John', 'john@mmm.com'
WHERE NOT EXISTS
(SELECT id FROM USER WHERE email = 'john@mmm.com');
```
I would write this as:
```
INSERT INTO USER (name, email)
SELECT name, email
FROM (SELECT 'John' as name, 'john@mmm.com' as email) t
WHERE NOT EXISTS (SELECT 1 FROM USER u WHERE u.email = t.email);
```
But a better approach is probably to just put in a unique index so the database protects the data:
```
create unique index idx_users_email on user(email);
```
|
Use [`INSERT ... SELECT`](http://dev.mysql.com/doc/refman/5.0/en/insert-select.html) instead of `INSERT`.
```
INSERT INTO USER (name,email)
SELECT 'John','john@mmm.com'
WHERE NOT EXISTS
(SELECT id FROM USER WHERE email = 'john@mmm.com')
```
You also might consider using the `ON DUPLICATE KEY UPDATE` syntax extension of MySQL.
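A runnable sketch of the `INSERT ... SELECT ... WHERE NOT EXISTS` idiom (Python's `sqlite3`, which also accepts a FROM-less `SELECT` with a `WHERE` clause); running it twice inserts only once:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def add_user(name, email):
    # The SELECT produces one row only when no matching email exists yet.
    con.execute(
        "INSERT INTO user (name, email) "
        "SELECT ?, ? WHERE NOT EXISTS "
        "(SELECT 1 FROM user WHERE email = ?)",
        (name, email, email))

add_user("John", "john@mmm.com")
add_user("John", "john@mmm.com")  # second call is a no-op
count = con.execute("SELECT COUNT(*) FROM user").fetchone()[0]
print(count)
```

Note this is still racy under concurrent writers, which is why the unique index suggested above is the safer guarantee.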
|
MySQL: insert where not exists
|
[
"",
"mysql",
"sql",
""
] |
I have two tables Those are following
**Table Name : stockIn**
```
+----+-------------+------------+------------+
| Id | date | Itemname | stockInqty |
+====+=============+============+============+
| 1 | 12/12/2014 | testitem | 12 |
| 2 | 14/12/2014 | testitem11 | 20 |
+----+-------------+------------+------------+
```
**Table Name : stockOut**
```
+----+------------+-------------+-------------+
| Id | date | Itemname | stockOutqty |
+====+============+=============+=============+
| 1 | 12/12/2014 | testitem | 7 |
| 2 | 13/12/2014 | testitem11 | 15 |
+----+------------+-------------+-------------+
```
I need a combined result of the testitem records between two dates, in the following order:
```
+------------+-----------+-------------+-------------+
| date | itemname | stockInqty | stockOutqty |
+============+===========+=============+=============+
| 12/12/2014 | testitem | 12 | 7 |
| 13/12/2014 | testitem | NIL | 15 |
| 14/12/2014 | testitem | 20 | NIL |
+------------+-----------+-------------+-------------+
```
**I got a query like this**
```
SELECT t1.`date`
, t1.itemname
, t1.stockInqty AS stockInqty
, t2.stockInqty AS stockOutqty
FROM tbl1 t1 LEFT JOIN tbl2 t2 ON t1.`date` = t2.`date`
AND t1.itemname = t2.itemname
UNION
SELECT t2.`date`
, t2.itemname
, t1.stockInqty AS stockInqty
, t2.stockInqty AS stockOutqty
FROM tbl1 t1 RIGHT JOIN tbl2 t2 ON t1.`date` = t2.`date`
AND t1.itemname = t2.itemname
```
The result of the query will be like this
```
+-------------+------------+------------+-------------+
|date | itemname | stockInqty | stockOutqty |
+=============+============+============+=============+
| 12/12/2014 | testitem | 12 | 7 |
| 14/12/2014 | testitem | 20 | NIL |
| 13/12/2014 | testitem | NIL | 15 |
+-------------+------------+------------+-------------+
```
The result is ordered on the basis of the first table's data.
I need the result to be ordered by date (i.e., the first record must be on `12/12/2014`, the second record on `13/12/2014`, and the third on `14/12/2014`).
Thanks in advance.
|
```
select * from
(
SELECT t1.date as sdate
, t1.itemname
, t1.stockInqty AS stockInqty
, t2.stockInqty AS stockOutqty
FROM tbl1 t1 LEFT JOIN tbl2 t2 ON t1.`date` = t2.`date`
AND t1.itemname = t2.itemname
UNION
SELECT t2.date as sdate
, t2.itemname
, t1.stockInqty AS stockInqty
, t2.stockInqty AS stockOutqty
FROM tbl1 t1 RIGHT JOIN tbl2 t2 ON t1.`date` = t2.`date`
AND t1.itemname = t2.itemname
) t
order by sdate
```
|
Try this, with an `order by date` appended:
```
SELECT t1.`date`
, t1.itemname
, t1.stockInqty AS stockInqty
, t2.stockInqty AS stockOutqty
FROM tbl1 t1 LEFT JOIN tbl2 t2 ON t1.`date` = t2.`date`
AND t1.itemname = t2.itemname
UNION
SELECT t2.`date`
, t2.itemname
, t1.stockInqty AS stockInqty
, t2.stockInqty AS stockOutqty
FROM tbl1 t1 RIGHT JOIN tbl2 t2 ON t1.`date` = t2.`date`
AND t1.itemname = t2.itemname order by date
```
|
Order by date in sql
|
[
"",
"mysql",
"sql",
""
] |
I have a journal-type table with `Account`, `Credit_Debit` and `Amount` columns. I am trying a
`select` where the result should list each amount followed by its contra (opposite-sign) amount. The Amount column is Numeric/Decimal.
Expected output: an amount followed by its equivalent opposite (plus or minus) amount
```
expected output
|Account|c_d|Amount|
| 1 |D | 100 |
| 7     |C  | -100 |
| 8 |D | 750 |
| 1 |C | -750 |
| 10 |C | -500 |
| 11 |D | 500 |
```
So, my query below threw invalid relational operator
```
select * from table where amount between 100 and 1000 AND (-100) - (-1000)
```
This one returned empty result set
```
select * from table where
(amount between('100') AND ('1000')) AND (amount between('-100') AND ('-1000'))
```
|
Ok, this has worked. Thanks for everyone's efforts.
```
select * from table where amount between 100 AND 1000
UNION
select * from table where amount between -1000 AND -100
```
|
Your second one will work with an OR operator, but note that BETWEEN expects the lower bound first:
```
select * from table where (amount between 100 AND 1000) OR (amount between -1000 AND -100)
```
It will show rows with amounts between -1000 and 1000, excluding those with amounts strictly between -100 and 100.
To do what you want to do, if I get it right, you can use:
```
SELECT T1.amount, T2.amount from table T1, table T2 WHERE ABS(T1.amount) = ABS(T2.amount)
```
|
How to apply multiple between operator in SQL
|
[
"",
"sql",
"oracle",
""
] |
I have the following table:
```
ID GroupID Date Value
1 A 2014-06-01 20
2 A 2014-06-02 21
3 A 2014-06-03 22
4 B 2014-06-01 50
5 B 2014-06-02 76
6 A 2014-06-04 25
7 C 2014-06-01 70
8 A 2014-06-05 29
```
I want to have the following result in SQL VIEW:
```
ID GroupID Date Value RowNumber
1 A 2014-06-01 20 1
2 A 2014-06-02 21 2
3 A 2014-06-03 22 3
4 B 2014-06-01 50 1
5 B 2014-06-02 76 2
6 A 2014-06-04 25 4
7 C 2014-06-01 70 1
8 A 2014-06-05 29 5
```
But I want to limit the RowNumber field to 24. When the number reaches 24, it should start from 1 again.
Does anyone have an idea how to do this?
Thank you.
|
Map the row number to `(RowNumber-1)%24+1`; when the row number reaches 24, the next one starts from 1 again:
```
SELECT (ROW_NUMBER() OVER
(PARTITION BY GroupID ORDER BY ID)
-1)%24+1 as RowNumber,
* FROM Table
```
Because ROW_NUMBER() starts from 1, I subtract 1 so the value starts from 0 before the modulo is applied.
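To make the wrap-around concrete, here is a small worked illustration (values assumed, not taken from the question's data) of how `(n-1)%24+1` maps consecutive row numbers:

```
-- n = raw ROW_NUMBER() value; (n-1)%24+1 is the reported RowNumber
-- n = 1  -> (1-1)%24  + 1 = 1
-- n = 24 -> (24-1)%24 + 1 = 24
-- n = 25 -> (25-1)%24 + 1 = 1    (wraps back to 1)
-- n = 48 -> (48-1)%24 + 1 = 24
```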
|
You can achieve this using the **decode** function. Once the RowNumber reaches 25, it starts again from 1.
try executing below query:
```
select ID, GroupID, Date, Value,
       decode(mod(rownum,24), 0, 24, mod(rownum,24)) mod_Val
from Table
```
|
Adding limited Row number SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
My question is very similar to this one:
## **[SE Question](https://stackoverflow.com/questions/92093/removing-leading-zeroes-from-a-field-in-a-sql-statement)**
However I want to run a query on a table, and actually update the values in that table, not just Select them.
I'm not sure how to select a value and then update it without using a for loop.
What Query can I run to do this?
{Edit:}
I store number values as a varchar, and an import I ran brought in numbers such as 00233.
I want it to be 233
|
So adapt the query for an `update`:
```
update table
set ColumnName = substring(ColumnName, patindex('%[^0]%', ColumnName), len(ColumnName));
```
|
If all the columns are numeric use this:
```
UPDATE table
SET col = cast(cast(col as int) as varchar(10))
WHERE col like '0%'
```
If some values are not numeric
```
UPDATE table
SET col = stuff(col, 1, patindex('%[^0]%', col) - 1, '')
WHERE col like '0%'
```
|
Remove Leading Zeros in a column in a sql server table,
|
[
"",
"sql",
"sql-server",
""
] |
I need some expert help; I can't seem to find the answer to this. The problem: find the titles and ISBNs of books that have the same title and publisher as at least 2 other books.
Order the result by title.
So basically, everything is coming from the book table.
```
CREATE TABLE BOOK
(
ISBN VARCHAR2(20),
Title VARCHAR2(250) NOT NULL,
YearPublished NUMBER(4) NOT NULL,
PublisherName VARCHAR2(40) NOT NULL,
CONSTRAINT pk_isbnP PRIMARY KEY(ISBN)
);
```
Here is my rough draft:
```
select _____
from book b2
where ______ (select____ from book b2
where _____ = ______ and ______ =______)
```
|
This ought to do it:
```
select isbn, title
from books
where (title,publishername) in(select title,publishername
from books
group by title,publishername
having count(*) >=3)
order by title;
```
|
Step 1: Find combinations of title and publisher that have at least 2 books:
```
SELECT title, PublisherName
FROM BOOK
GROUP BY title, PublisherName
HAVING COUNT(*) > 1
```
Step 2: find all other books that have this title and publisher:
```
SELECT *
FROM Books b1
WHERE EXISTS(
SELECT title, PublisherName
FROM BOOK b2
WHERE b1.title = b2.title AND b1.PublisherName = b2.PublisherName
GROUP BY title, PublisherName
HAVING COUNT(*) > 1
)
```
|
How can we retrieve same data in one table?
|
[
"",
"sql",
"oracle",
""
] |
I have 3 tables, A, B,C
* Table A,Columns: latlong, name
* Table B, columns : code, name
* Table C, Columns : latlong, code
I want to update table A,column Name with the values from Table B, Column Name, something like:
```
update A set A.name =
(select b.name
from B where code = c.code)
where a.latlong = c.latlong
```
Note that all the columns are not related.
Would appreciate the right direction to go about it.
Have tried Sub queries with inner joins, but no good.
|
You have mentioned in your question the following:
> I want to update table A,column Name with the values from Table B, Column Name
But what I can see from your query is that actually you need only those values of the column `Name` of table `B` which have the same value of `code` as in table `C`, and that your `latlong` in `A` should be the same as `latlong` in `C`, if I'm not mistaken.
Based on that, I can say you need an SQL **JOIN** operation for Tables `B` and `C` with table `A`. Something like this:
```
UPDATE A SET A.name = B.Name
FROM A
JOIN C
ON C.latlong = A.latlong
JOIN B
ON B.code = C.code
```
No need to create a **SUBQUERY**
|
You can do this with an `update` using `join`:
```
update a
set name = b.name
from a join
c
on c.latlong = a.latlong join
b
on b.code = c.code;
```
|
Sql Subquery result
|
[
"",
"sql",
"sql-server",
""
] |
I Have a table that has order data like this: Table Order\_Detail and Item\_Master are joined by Item#
We want to report on order number,
Order\_Detail table:
```
Order# Item#
1234 IPhone6
1234 IPhone5
1234 Battery
join Item_Master:
Item# Item_type Desc
IPhone6 Phone Smartphone
IPhone5 Phone Smartphone
```
Now we only want order numbers that have only one item with Item_type = Phone; we are only interested in type Phone. I tried using Query/400, doing a count on order# where Item_type = Phone, and then taking only the counts = 1. But this still brings in some orders that have more than one item with Item_type = Phone; in our example here we would not want this order.
|
this query will return ordernums where the only ordered item type is 'phone'
```
select ordernum
from order_detail od
join item_master im on im.itemnum = od.itemnum
group by ordernum
having count(case when im.item_type <> 'Phone' then 1 end) = 0
and count(*) = 1
```
if you want to allow multiple 'phone' orders you can remove `and count(*) = 1`
|
Your question is a little confusing. Are you looking to get a count of order numbers where `Item_Type = 'Phone'`? If so the following should work for you:
```
SELECT COUNT(DISTINCT OrderNum) AS OrderNumCount
FROM Order_Detail o
INNER JOIN Item_Master i ON o.ItemNum = i.ItemNum
WHERE Item_type = 'Phone'
```
Or are you after only the orders that have only one record from the item table linked. If so then you may want:
```
SELECT o.OrderNum
FROM Order_Detail o
INNER JOIN Item_Master i ON o.ItemNum = i.ItemNum
WHERE Item_type = 'Phone'
GROUP BY o.OrderNum
HAVING COUNT(*) = 1
```
|
Count in Query 400 not passing correctly
|
[
"",
"sql",
"ibm-midrange",
"db2-400",
""
] |
I have users table
```
ID NAME
1 John
2 Mike
3 Jack
```
and table with attributes and user IDs
```
USER ATTRIBUTE
1 1
1 2
2 4
```
I need to select all users with attribute 1 AND 2 (so, in this example user #1 John). Attributes can be more than two.
I've tried
```
SELECT * FROM user u LEFT JOIN attributes a ON u.id = a.user
WHERE a.attribute = 1 AND a.attribute = 2
```
but of course it is not working.
|
You will need to use a combination of `IN()` and `GROUP BY ... HAVING` to achieve this. Also no need for a join if all you need is user ID's. So something like:
```
SELECT user, COUNT(attribute) AS attribute_count
FROM attributes
WHERE attribute IN(...) /* include your set of attributes here */
GROUP BY user
HAVING attribute_count = ? /* include number equal to number of attribute ID's in IN() above */
```
If you need user id's and names you can simply join this record set derived from the query above as a filter to the users table:
```
SELECT user.id, user.name
FROM user
INNER JOIN
(
SELECT user, COUNT(attribute) AS attribute_count
FROM attributes
WHERE attribute IN(...) /* include your set of attributes here */
GROUP BY user
HAVING attribute_count = ? /* include number equal to number of attribute ID's in IN() above */
) AS filter
ON user.id = filter.user
```
|
The `HAVING` clause can be used with `SUM`:
```
SELECT u.id FROM user u
INNER JOIN attributes a ON u.id = a.user
group by u.id
having ( sum(case when attribute in (1,2) then 1 else 0 end) ) =2
```
|
Join and multiple and conditions
|
[
"",
"mysql",
"sql",
"symfony1",
"doctrine-1.2",
""
] |
If Oracle fails to execute the below query due to whatever reason, will the *supplier\_seq* get incremented? When exactly does Oracle increment sequences?
```
INSERT INTO suppliers
(supplier_id, supplier_name)
VALUES
(supplier_seq.NEXTVAL, 'Kraft Foods');
```
|
The sequence is incremented as soon as `nextval` is called. This increment will never be rolled back. The value is "gone". The only situation where this value could ever be generated again by the sequence, is if the sequence is set to `CYCLE` and it wraps around.
> *due to whatever reason*
The reason *does* matter.
If the statement fails because there is a syntax error or there is no table named `suppliers`, or if the user does not have the privilege to insert into `suppliers` then the sequence will not be incremented (because nextval is never actually called because execution is aborted in an earlier step).
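A minimal sketch of the resulting gap, assuming `supplier_name` is declared `NOT NULL` (an assumption; the question does not show the table definition):

```
-- nextval is consumed even though the INSERT fails
INSERT INTO suppliers (supplier_id, supplier_name)
VALUES (supplier_seq.NEXTVAL, NULL);        -- fails: supplier_name assumed NOT NULL

-- the next call returns a value one higher than the one burned above,
-- leaving a permanent gap in supplier_id
SELECT supplier_seq.NEXTVAL FROM dual;
```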
|
Independently of whether the query fails or not, Oracle sequences are incremented in all sessions right when they are called.
However, if there is a syntax error in your statement (wrong table name, column name etc.), the sequence will stay the same since the query won't be executed.
|
oracle sequence NEXTVAL behavior in case of query execution failure
|
[
"",
"sql",
"oracle",
"sequence",
"sql-execution-plan",
""
] |
I am trying to find out if there is any way to aggregate sales for each product. I realise I can achieve it either by using a group-by clause or by writing a procedure.
example:
```
Table name: Details
Sales Product
10 a
20 a
4 b
12 b
3 b
5 c
```
Is there a way to perform the following query without using group by?
```
select
product,
sum(sales)
from
Details
group by
product
having
sum(sales) > 20
```
I realize it is possible using a procedure; could it be done in any other way?
|
You could do
```
SELECT product,
(SELECT SUM(sales) FROM details x where x.product = a.product) sales
from Details a;
```
(and wrap it into another select to simulate the `HAVING`).
|
It's possible to use analytic functions to do the sum calculation, and then wrap that with another query to do your filtering.
[See and play with the example here.](https://data.stackexchange.com/stackoverflow/query/226781/sum-without-group-by)
```
select
running_sum,
OwnerUserId
from (
select
id,
score,
OwnerUserId,
sum(score) over (partition by OwnerUserId order by Id) running_sum,
last_value(id) over (partition by OwnerUserId order by OwnerUserId) last_id
from
Posts
where
OwnerUserId in (2934433, 10583)
) inner_q
where inner_q.id = inner_q.last_id
--and running_sum > 20;
```
We keep a running sum going on the partition of the owner (product), and we tally up the last id for the same window, which is the ID we'll use to get the total sum. Wrap it all up with another query to make sure you get the "last id", take the sum, and then do any filtering you want on the result.
This is an extremely round-about way to avoid using GROUP BY though.
|
Is it possible to calculate the sum of each group in a table without using group by clause
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
"plsql",
""
] |
I have two tables, `CountLogs` and `RegisterCountLogs`. They both have the same FK constraints, so I'd like to transfer the Timestamp from one to the other. How can I achieve this in one SQL statement?
E.g.
```
SELECT [DeviceSerial]
,[LogEntryID]
,[Timestamp]
FROM [dbo].[CountLogs]
UPDATE RegisterCountLogs
SET Timestamp = [OTHERQUERY?].Timestamp
WHERE [DeviceSerial] = [OTHERQUERY?].[DeviceSerial]
AND [OTHERQUERY?][LogEntryID] = [OTHERQUERY?].[LogEntryID]
```
|
Use a join:
```
UPDATE RegisterCountLogs
SET Timestamp = [OTHERQUERY?].Timestamp
FROM RegisterCountLogs
INNER JOIN [OTHERQUERY?] ON RegisterCountLogs.DeviceSerial = [OTHERQUERY?].[DeviceSerial]
AND RegisterCountLogs.[LogEntryID] = [OTHERQUERY?].[LogEntryID]
```
|
try this...
```
UPDATE R
SET R.Timestamp = C.Timestamp
FROM RegisterCountLogs R
INNER JOIN [dbo].[CountLogs] C ON
(R.[DeviceSerial] = C.[DeviceSerial]
AND R.[LogEntryID] = C.[LogEntryID])
```
|
Update rows in one table, with data from another
|
[
"",
"sql",
"sql-server",
""
] |
I am working on averaging several rows of data and I have been busting my butt figuring it out, and I "think" I'm down to the last error.
```
ERROR 1584 (42000): Incorrect parameters in the call to stored function 'cast'
```
I get this when I run this
```
select avg(volume) from _AAPL WHERE date between cast('2014-02-03' as date) and cast ('2014-05-05' as DATE);
```
`_AAPL` is my table and `date` is my column; the dates are already in the database under `date`.
I cannot get rid of this error, any ideas?
|
You should not have any space after "CAST" :
> cast('2014-05-05' as DATE)
instead of
```
cast ('2014-05-05' as DATE)
```
By the way, I'm not sure it's mandatory to cast dates when they are already formatted like 'yyyy-mm-dd'; I would just write
```
`column` BETWEEN '2014-02-03' AND '2014-05-05'
```
|
In [mysql](/questions/tagged/mysql "show questions tagged 'mysql'") you can convert a string literal to a date by using the [`str_to_date`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date) function:
```
SELECT AVG(volume)
FROM _AAPL
WHERE `date` BETWEEN STR_TO_DATE ('2014-02-03', '%Y-%m-%d') AND
STR_TO_DATE ('2014-05-05', '%Y-%m-%d')
```
|
Average in mysql with date
|
[
"",
"mysql",
"sql",
""
] |
I have this stored procedure:
```
exec SearchResume @KeywordSearch=N'', @GreaterThanDate='2013-09-22 00:00:00',
@CityIDs=N'0,56,31,271,117,327,3,328,228',
@ProvinceIDs=N'0,1,12,13',
@CountryIDs=N'1',
@IndustryIDs=N'0,2,3,4,38,113,114,115,116,117'
```
Which doesn't return any results because the ids are in nvarchar but the actual values are integer.
Now, when I test the actual SP with a manual list of int values I'm able to get the results, this is the example:
```
SELECT DISTINCT
UserID,
ResumeID,
CASE a.Confidential WHEN 1 THEN 'Confidential' ELSE LastName + ',' + FirstName END as 'Name',
a.Description 'ResumeTitle',
CurrentTitle,
ModifiedDate,
CASE ISNULL(b.SalaryRangeID, '0') WHEN '0' THEN CAST(SalarySpecific as nvarchar(8)) ELSE c.Description END 'Salary',
h.Description 'Relocate',
i.Description + '-' + j.Description + '-' + k.Description 'Location'
FROM dbo.Resume a JOIN dbo.Candidate b ON a.CandidateID = b.CandidateID
LEFT OUTER JOIN SalaryRange c ON b.SalaryRangeID = c.SalaryRangeID
JOIN EducationLevel e ON b.EducationLevelID = e.EducationLevelID
JOIN CareerLevel f ON b.CareerLevelID = f.CareerLevelID
JOIN JobType g ON b.JobTypeID = g.JobTypeID
JOIN WillingToRelocate h ON b.WillingToRelocateID = h.WillingToRelocateID
JOIN City i ON b.CityID = i.CityID
JOIN StateProvince j ON j.StateProvinceID = b.StateProvinceID
JOIN Country k ON k.CountryID = b.CountryID
WHERE (b.CityID IN (0,56,31,125,229,5,219,8,228))
AND (b.StateProvinceID IN (0,1,13))
AND (b.CountryID IN (1))
AND (b.IndustryPreferenceID IN (0,2,3,4,5,6,115,116,117))
```
I would like to know what I can do to send a list of int values, not a list of nvarchar values, since as you can see the query doesn't work properly.
## Update
Original SP:
```
ALTER PROCEDURE [dbo].[SearchResume]
@KeywordSearch nvarchar(500),
@GreaterThanDate datetime,
@CityIDs nvarchar(500),
@ProvinceIDs nvarchar(500),
@CountryIDs nvarchar(500),
@IndustryIDs nvarchar(500)
AS
BEGIN
DECLARE @sql as nvarchar(4000)
SET @sql = 'SELECT DISTINCT
UserID,
ResumeID,
CASE a.Confidential WHEN 1 THEN ''Confidential'' ELSE LastName + '','' + FirstName END as ''Name'',
a.Description ''ResumeTitle'',
CurrentTitle,
ModifiedDate,
CurrentEmployerName,
PersonalDescription,
CareerObjectives,
CASE ISNULL(b.SalaryRangeID, ''0'') WHEN ''0'' THEN CAST(SalarySpecific as nvarchar(8)) ELSE c.Description END ''Salary'',
e.Description ''EducationLevel'',
f.Description ''CareerLevel'',
g.Description ''JobType'',
h.Description ''Relocate'',
i.Description + ''-'' + j.Description + ''-'' + k.Description ''Location''
FROM dbo.Resume a JOIN dbo.Candidate b ON a.CandidateID = b.CandidateID
LEFT OUTER JOIN SalaryRange c ON b.SalaryRangeID = c.SalaryRangeID
JOIN EducationLevel e ON b.EducationLevelID = e.EducationLevelID
JOIN CareerLevel f ON b.CareerLevelID = f.CareerLevelID
JOIN JobType g ON b.JobTypeID = g.JobTypeID
JOIN WillingToRelocate h ON b.WillingToRelocateID = h.WillingToRelocateID
JOIN City i ON b.CityID = i.CityID
JOIN StateProvince j ON j.StateProvinceID = b.StateProvinceID
JOIN Country k ON k.CountryID = b.CountryID
WHERE ( (ModifiedDate > ''' + CAST(@GreaterThanDate as nvarchar(55)) + ''')
'
IF (LEN(@CityIDs) >0)
BEGIN
SET @sql = @sql + 'AND (b.CityID IN (' + @CityIDs + '))'
END
IF (LEN(@ProvinceIDs) >0)
BEGIN
SET @sql = @sql + 'AND (b.StateProvinceID IN (' + @ProvinceIDs + '))'
END
IF (LEN(@CountryIDs) >0)
BEGIN
SET @sql = @sql + 'AND (b.CountryID IN (' + @CountryIDs + '))'
END
IF (LEN(@IndustryIDs) >0)
BEGIN
SET @sql = @sql + 'AND (b.IndustryPreferenceID IN (' + @IndustryIDs + '))'
END
IF (LEN(@KeywordSearch) > 0)
BEGIN
SET @sql = @sql + ' AND (' + @KeywordSearch + ')'
END
SET @sql = @sql + ') ORDER BY ModifiedDate desc'
--select @sql
exec sp_executesql @sql
END
```
|
You can create a Table-Valued Function which takes the nVarChar and creates a new record for each value, where you tell it the delimiter. My example here returns a table with a single Value column, you can then use this as a sub query for your IN Selection :
```
Create FUNCTION [dbo].[fnSplitVariable]
(
@List nvarchar(2000),
@delimiter nvarchar(5)
)
RETURNS @RtnValue table
(
Id int identity(1,1),
Variable varchar(15),
Value nvarchar(100)
)
AS
BEGIN
Declare @Count int
set @Count = 1
While (Charindex(@delimiter,@List)>0)
Begin
Insert Into @RtnValue (Value, Variable)
Select
Value = ltrim(rtrim(Substring(@List,1,Charindex(@delimiter,@List)-1))),
Variable = 'V' + convert(varchar,@Count)
Set @List = Substring(@List,Charindex(@delimiter,@List)+len(@delimiter),len(@List))
Set @Count = @Count + 1
End
Insert Into @RtnValue (Value, Variable)
Select Value = ltrim(rtrim(@List)), Variable = 'V' + convert(varchar,@Count)
Return
END
```
Then in your where statement you could do the following:
```
WHERE (b.CityID IN (Select Value from fnSplitVariable(@CityIDs, ',')))
```
I have included your original Procedure, and updated it to use the function above:
```
ALTER PROCEDURE [dbo].[SearchResume]
@KeywordSearch nvarchar(500),
@GreaterThanDate datetime,
@CityIDs nvarchar(500),
@ProvinceIDs nvarchar(500),
@CountryIDs nvarchar(500),
@IndustryIDs nvarchar(500)
AS
BEGIN
DECLARE @sql as nvarchar(4000)
SET @sql = N'
DECLARE @KeywordSearch nvarchar(500),
@CityIDs nvarchar(500),
@ProvinceIDs nvarchar(500),
@CountryIDs nvarchar(500),
@IndustryIDs nvarchar(500)
SET @KeywordSearch = '''+@KeywordSearch+'''
SET @CityIDs = '''+@CityIDs+'''
SET @ProvinceIDs = '''+@ProvinceIDs+'''
SET @CountryIDs = '''+@CountryIDs+'''
SET @IndustryIDs = '''+@IndustryIDs+'''
SELECT DISTINCT
UserID,
ResumeID,
CASE a.Confidential WHEN 1 THEN ''Confidential'' ELSE LastName + '','' + FirstName END as ''Name'',
a.Description ''ResumeTitle'',
CurrentTitle,
ModifiedDate,
CurrentEmployerName,
PersonalDescription,
CareerObjectives,
CASE ISNULL(b.SalaryRangeID, ''0'') WHEN ''0'' THEN CAST(SalarySpecific as nvarchar(8)) ELSE c.Description END ''Salary'',
e.Description ''EducationLevel'',
f.Description ''CareerLevel'',
g.Description ''JobType'',
h.Description ''Relocate'',
i.Description + ''-'' + j.Description + ''-'' + k.Description ''Location''
FROM dbo.Resume a JOIN dbo.Candidate b ON a.CandidateID = b.CandidateID
LEFT OUTER JOIN SalaryRange c ON b.SalaryRangeID = c.SalaryRangeID
JOIN EducationLevel e ON b.EducationLevelID = e.EducationLevelID
JOIN CareerLevel f ON b.CareerLevelID = f.CareerLevelID
JOIN JobType g ON b.JobTypeID = g.JobTypeID
JOIN WillingToRelocate h ON b.WillingToRelocateID = h.WillingToRelocateID
JOIN City i ON b.CityID = i.CityID
JOIN StateProvince j ON j.StateProvinceID = b.StateProvinceID
JOIN Country k ON k.CountryID = b.CountryID
WHERE ( (ModifiedDate > ''' + CAST(@GreaterThanDate as nvarchar(55)) + ''')
'
IF (LEN(@CityIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.CityID IN (Select Value from fnSplitVariable(@CityIDs,'','') ))'
END
IF (LEN(@ProvinceIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.StateProvinceID IN (Select Value from fnSplitVariable(@ProvinceIDs,'','') ))'
END
IF (LEN(@CountryIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.CountryID IN (Select Value from fnSplitVariable(@CountryIDs,'','') ))'
END
IF (LEN(@IndustryIDs) >0)
BEGIN
SET @sql = @sql + N'AND (b.IndustryPreferenceID IN (Select Value from fnSplitVariable(@IndustryIDs,'','') ))'
END
IF (LEN(@KeywordSearch) > 0)
BEGIN
SET @sql = @sql + N' AND (' + @KeywordSearch + ')'
END
SET @sql = @sql + N') ORDER BY ModifiedDate desc'
--select @sql
exec sp_executesql @sql
END
```
|
```
DECLARE @SQL AS NVARCHAR(MAX)
SET @SQL = 'SELECT DISTINCT
UserID,
ResumeID,
CASE a.Confidential WHEN 1 THEN ''Confidential'' ELSE LastName + '','' + FirstName END as ''Name'',
a.Description ''ResumeTitle'',
CurrentTitle,
ModifiedDate,
CASE ISNULL(b.SalaryRangeID, ''0'') WHEN ''0'' THEN CAST(SalarySpecific as nvarchar(8)) ELSE c.Description END ''Salary'',
h.Description ''Relocate'',
i.Description + ''-'' + j.Description + ''-'' + k.Description ''Location''
FROM dbo.Resume a JOIN dbo.Candidate b ON a.CandidateID = b.CandidateID
LEFT OUTER JOIN SalaryRange c ON b.SalaryRangeID = c.SalaryRangeID
JOIN EducationLevel e ON b.EducationLevelID = e.EducationLevelID
JOIN CareerLevel f ON b.CareerLevelID = f.CareerLevelID
JOIN JobType g ON b.JobTypeID = g.JobTypeID
JOIN WillingToRelocate h ON b.WillingToRelocateID = h.WillingToRelocateID
JOIN City i ON b.CityID = i.CityID
JOIN StateProvince j ON j.StateProvinceID = b.StateProvinceID
JOIN Country k ON k.CountryID = b.CountryID
WHERE (b.CityID IN (' + @CityIDs + '))
AND (b.StateProvinceID IN (' + @ProvinceIDs + '))
AND (b.CountryID IN (' + @CountryIDs + '))
AND (b.IndustryPreferenceID IN (' + @IndustryIDs + '))'
EXEC (@SQL)
```
|
cast list of strings as int list in sql query / stored procedure
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
"sql-server-2008-r2",
""
] |
**How do I create a new sequence taking its name is from a variable?**
Let's take a look at the following example:
```
CREATE OR REPLACE FUNCTION get_value(_name_part character varying)
RETURNS INTEGER AS
$BODY$
DECLARE
result bigint;
sequencename character varying(50);
BEGIN
sequencename = CONCAT('constant_part_of_name_', _name_part);
IF((SELECT CAST(COUNT(*) AS INTEGER) FROM pg_class
WHERE relname LIKE sequencename) = 0)
THEN
CREATE SEQUENCE sequencename --here is the guy this is all about
MINVALUE 6000000
INCREMENT BY 1;
END IF;
SELECT nextval(sequencename) INTO result;
RETURN result;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
```
Now, let's say I want a sequence with \_name\_part = 'Whatever', so I type:
```
SELECT get_value('Whatever');
```
If sequence `constant_part_of_name_Whatever` does not exist, my function should create it and take a value; if it exists it should only take a value. However, I created sequence `constant_part_of_name_sequencename`.
How do I put the value of the variable in sequence definition to make it work?
|
The currently accepted answer has a number of problems. Most importantly it fails to take the **schema** into account.
Use instead:
```
CREATE OR REPLACE FUNCTION get_value(_name_part text)
RETURNS bigint AS
$func$
DECLARE
_seq text := 'constant_part_of_name_' || _name_part;
BEGIN
CASE (SELECT c.relkind = 'S'::"char"
FROM pg_namespace n
JOIN pg_class c ON c.relnamespace = n.oid
WHERE n.nspname = current_schema() -- or provide your schema!
AND c.relname = _seq)
WHEN TRUE THEN -- sequence exists
-- do nothing
WHEN FALSE THEN -- not a sequence
RAISE EXCEPTION '% is not a sequence!', _seq;
ELSE -- sequence does not exist, name is free
EXECUTE format('CREATE SEQUENCE %I MINVALUE 6000000 INCREMENT BY 1', _seq);
END CASE;
RETURN nextval(_seq);
END
$func$ LANGUAGE plpgsql;
```
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/90111/5)
### Major points
* Your test was needlessly expensive and incorrect. You need to take the schema into account. A sequence of the same name can exist in another schema, which would make your function fail.
I use the current schema as default, since you did not specify otherwise. Details:
+ [How does the search\_path influence identifier resolution and the "current schema"](https://stackoverflow.com/questions/9067335/how-to-create-table-inside-specific-schema-by-default-in-postgres/9067777#9067777)
* You also need to be aware that the name of a sequence conflicts with other names of other objects in the same schema. Details:
+ [How to create sequence if not exists](https://stackoverflow.com/questions/13752885/how-to-create-sequence-if-not-exists/13753955#13753955)
* `varchar(50)` as data type is pointless and may cause problems if you enter a longer string. Just use `text` or `varchar`.
* [The assignment operator in plpgsql is `:=`, not `=`.](https://stackoverflow.com/questions/7462322/the-forgotten-assignment-operator-and-the-commonplace)
* You can assign a variable at declaration time. Shorter, cheaper, cleaner.
* You need dynamic SQL, I am using `format()` with `%I` to escape the identifier properly. Details:
+ [INSERT with dynamic table name in trigger function](https://stackoverflow.com/questions/7914325/insert-with-dynamic-table-name-in-trigger-function/7915100#7915100)
* `concat()` is only useful if `NULL` values can be involved. I assume you don't want to pass `NULL`.
* `VOLATILE` is default and therefore just noise.
* If you want to return NULL on NULL input, add `STRICT`.
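A quick illustration of what `format()` with `%I` produces (the sequence name here is just the example from the question):

```
SELECT format('CREATE SEQUENCE %I MINVALUE 6000000 INCREMENT BY 1',
              'constant_part_of_name_Whatever');
-- %I double-quotes the identifier because of the mixed case:
-- CREATE SEQUENCE "constant_part_of_name_Whatever" MINVALUE 6000000 INCREMENT BY 1
```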
|
Try this. Hope this works for you.
```
CREATE OR REPLACE FUNCTION get_value(_name_part character varying) RETURNS INTEGER AS
$BODY$
DECLARE
result bigint;
sequencename character varying(50);
v_sql character varying;
BEGIN
sequencename = CONCAT('constant_part_of_name_', _name_part);
IF((SELECT CAST(COUNT(*) AS INTEGER) FROM pg_class WHERE relname LIKE sequencename) = 0)
THEN
v_sql := 'CREATE SEQUENCE '||sequencename||'
MINVALUE 6000000
INCREMENT BY 1;';
EXECUTE v_sql;
END IF;
SELECT nextval(sequencename) INTO result ;
RETURN result;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
```
|
Sequence name taken from variable
|
[
"",
"sql",
"postgresql",
"sequence",
"plpgsql",
"dynamic-sql",
""
] |
I have been working at this for hours and I cannot figure out what is wrong...
I have written this code
```
INSERT INTO Agents (ID,Name,Phone,Cert_Level,Join_Date) VALUES (1,’Chris’,’317-9578’,1,’01-NOV-91’);
```
and I get the same missing comma error over and over and I cannot find the problem PLEASE HELP ME
here is the database and I have confirmed that it was created correctly
Agents :
```
ID Int Primary key
Name Char(10) Not Null
Phone Char(9) Not Null
Cert_Level Int Restricted to values 1-10
Join_Date Date
```
|
try the below code, as you are using different quote characters; replace ’ with ' .
```
INSERT INTO Agents (ID,Name,Phone,Cert_Level,Join_Date) VALUES (1,'Chris','317-9578',1,'01-NOV-91');
```
Cheers !!
|
Your string literals are delimited with unicode character Right single quotation mark (u+2019)
instead of Apostrophe (' u+0027).
You can try typing the apostrophes.
```
’Chris’
```
vs
```
'Chris'
```
|
Oracle Missing Comma
|
[
"",
"sql",
"database",
"oracle",
""
] |
Supose we've got the following dataset:
```
DATE VAR1 VAR2
1 A 1
2 A 1
3 B 1
4 C 2
5 D 3
6 E 4
7 F 5
8 B 6
9 B 7
10 D 1
```
Each record belongs to a person; the problem is that a single person can have more than one record with different values.
To identify a person: If you share the same VAR1, you are the same person, BUT also if you share the same VAR2, you are the same person.
My objective is to create a new variable IDPERSON which uniquely identifies the person for each record. In my example, there are only 4 different people:
```
DATE VAR1 VAR2 IDPERSON
1 A 1 1
2 A 1 1
3 B 1 1
4 C 2 2
5 D 3 1
6 E 4 3
7 F 5 4
8 B 6 1
9 B 7 1
10 D 1 1
```
How could I achieve this by using SQL or SAS?
|
I forgot to post my final solution; it is a SAS macro. I've made another one for 3 variables.
```
%MACRO GROUPER2(INDATA,OUTDATA,ID1,ID2,IDOUT,IDN=_N_,MAXN=5);
%PUT ****************************************************************;
%PUT ****************************************************************;
%PUT **** GROUPER MACRO;
%PUT **** PARAMETERS:;
%PUT **** INPUT DATA: &INDATA.;
%PUT **** OUTPUT DATA: &OUTDATA.;
%PUT **** FIRST VARIABLE: &ID1.;
%PUT **** SECOND VARIABLE: &ID2.;
%PUT **** OUTPUT GROUPING VARIABLE: &IDOUT.;
%IF (&IDN.=_N_) %THEN %PUT **** STARTING NUMBER VARIABLE: AUTONUMBER;
%ELSE %PUT **** STARTING NUMBER VARIABLE: &IDN.;
%PUT **** MAX ITERATIONS: &MAXN.;
%PUT ****************************************************************;
%PUT ****************************************************************;
/* CREATE FIRST GUESS FOR GROUP ID */
DATA _G_TEMP1 _G_TEMP2;
SET &INDATA.;
&IDOUT.=&IDN.;
IF &IDOUT.=. THEN OUTPUT _G_TEMP2;
ELSE OUTPUT _G_TEMP1;
RUN;
PROC SQL NOPRINT;
SELECT MAX(&IDOUT.) INTO :MAXIDOUT FROM _G_TEMP1;
QUIT;
DATA _G_TEMP2;
SET _G_TEMP2;
&IDOUT.=_N_+&MAXIDOUT.;
RUN;
DATA _G_TEMP;
SET _G_TEMP1 _G_TEMP2;
RUN;
PROC SQL;
UPDATE _G_TEMP SET &IDOUT.=. WHERE &ID1. IS NULL AND &ID2. IS NULL;
QUIT;
/* LOOP, IMPROVE GROUP ID EACH TIME*/
%LET I = 1;
%DO %WHILE (&I. <= &MAXN.);
%PUT LOOP NUMBER &I.;
%LET I = %EVAL(&I. + 1);
PROC SQL NOPRINT;
/* FIND THE LOWEST GROUP ID FOR EACH GROUP OF FIRST VARIABLE */
CREATE TABLE _G_MAP1 AS SELECT MIN(&IDOUT.) AS &IDOUT., &ID1. FROM _G_TEMP WHERE &ID1. IS NOT NULL GROUP BY &ID1.;
/* FIND THE LOWEST GROUP ID FOR EACH GROUP OF SECOND VARIABLE */
CREATE TABLE _G_MAP2 AS SELECT MIN(&IDOUT.) AS &IDOUT., &ID2. FROM _G_TEMP WHERE &ID2. IS NOT NULL GROUP BY &ID2.;
/* FIND THE LOWEST GROUP ID FROM BOTH GROUPING VARIABLES */
CREATE TABLE _G_NEW AS SELECT A.&ID1., A.&ID2., COALESCE(MIN(B.&IDOUT., C.&IDOUT.), A.&IDOUT.) AS &IDOUT.,
A.&IDOUT. AS &IDOUT._OLD FROM _G_TEMP AS A FULL OUTER JOIN _G_MAP1 AS B ON A.&ID1. = B.&ID1.
FULL OUTER JOIN _G_MAP2 AS C ON A.&ID2. = C.&ID2.;
/* PUT RESULTS INTO TEMPORARY DATASET READY FOR NEXT ITERATION */
CREATE TABLE _G_TEMP AS SELECT * FROM _G_NEW ORDER BY &ID1., &ID2.;
/* CHECK IF THE ITERATION PROVIDED ANY IMPROVEMENT */
SELECT MIN(CASE WHEN &IDOUT._OLD = &IDOUT. THEN 1 ELSE 0 END) INTO :STOPFLAG FROM _G_TEMP;
%PUT NO IMPROVEMENT? &STOPFLAG.;
QUIT;
/* END LOOP IF ID UNCHANGED OVER LAST ITERATION */
%LET ITERATIONS=%EVAL(&I. - 1);
%IF &STOPFLAG. %THEN %LET I = %EVAL(&MAXN. + 1);
%END;
%PUT ****************************************************************;
%PUT ****************************************************************;
%IF &STOPFLAG. %THEN %PUT **** LOOPING ENDED BY NO-IMPROVEMENT CRITERIA. OUTPUT FULLY GROUPED.;
%ELSE %PUT **** WARNING: LOOPING ENDED BY REACHING THE MAXIMUM NUMBER OF ITERATIONS. OUTPUT NOT FULLY GROUPED.;
%PUT **** NUMBER OF ITERATIONS: &ITERATIONS. (MAX: &MAXN.);
%PUT ****************************************************************;
%PUT ****************************************************************;
DATA &OUTDATA.;
SET _G_TEMP;
DROP &IDOUT._OLD;
RUN;
/* OUTPUT LOOKUP TABLE */
PROC SQL;
CREATE TABLE &OUTDATA._1 AS SELECT &ID1., MIN(&IDOUT.) AS &IDOUT. FROM _G_TEMP WHERE &ID1. IS NOT NULL GROUP BY &ID1. ORDER BY &ID1.;
CREATE TABLE &OUTDATA._2 AS SELECT &ID2., MIN(&IDOUT.) AS &IDOUT. FROM _G_TEMP WHERE &ID2. IS NOT NULL GROUP BY &ID2. ORDER BY &ID2.;
QUIT;
/* CLEAN UP */
PROC DATASETS NOLIST;
DELETE _G_:;
QUIT;
%MEND GROUPER2;
```
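The iterative min-propagation above converges to one group ID per connected component. A minimal Python sketch of the same idea (the names `group_ids` and `rows` are illustrative, not part of the macro) treats each (id1, id2) pair as an edge and labels components with union-find:

```python
def group_ids(rows):
    """rows: list of (id1, id2) pairs; returns one group number per row."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for id1, id2 in rows:
        # Prefixes keep the id1 and id2 value spaces separate (1 vs "1")
        parent[find(("a", id1))] = find(("b", id2))

    # Map each component root to a small sequential group ID
    labels = {}
    return [labels.setdefault(find(("a", id1)), len(labels) + 1)
            for id1, _ in rows]
```

Unlike the fixed `maxN` loop in the macro, union-find needs no iteration cap: each edge is merged exactly once.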
|
```
%macro grouper(
inData /*Input dataset*/,
outData /*output dataset*/,
id1 /*First identification variable (must be numeric)*/,
id2 /*Second identification variable*/,
idOut /*Name of variable to contain group ID*/,
maxN = 5 /*Max number of iterations in case of failure*/);
/* Assign an ID to each distinct connected graph in a network */
/* Create first guess for group ID */
data _g_temp;
set &inData.;
&idOut. = &id1.;
run;
/* Loop, improve group ID each time*/
%let i = 1;
%do %while (&i. <= &maxN.);
%put Loop number &i.;
%let i = %eval(&i. + 1);
proc sql noprint;
/* Find the lowest group ID for each group of first variable */
create table _g_map1 as
select
min(&idOut.) as &idOut.,
&id1.
from _g_temp
group by &id1.;
/* Find the lowest group ID for each group of second variable */
create table _g_map2 as
select
min(&idOut.) as &idOut.,
&id2.
from _g_temp
group by &id2.;
/* Find the lowest group ID from both grouping variables */
create table _g_new as
select
a.&id1.,
a.&id2.,
coalesce(min(b.&idOut., c.&idOut.), a.&idOut.) as &idOut.,
a.&idOut. as &idOut._old
from _g_temp as a
full outer join _g_map1 as b
on a.&id1. = b.&id1.
full outer join _g_map2 as c
on a.&id2. = c.&id2.;
/* Put results into temporary dataset ready for next iteration */
create table _g_temp as
select *
from _g_new;
/* Check if the iteration provided any improvement */
select
min(
case when &idOut._old = &idOut. then 1
else 0
end) into :stopFlag
from _g_temp;
quit;
/* End loop if ID unchanged over last iteration */
%if &stopFlag. %then %let i = %eval(&maxN. + 1);
%end;
/* Output lookup table */
proc sql;
create table &outData. as
select
&id1.,
min(&idOut.) as &idOut.
from _g_temp
group by &id1.;
quit;
/* Clean up */
proc datasets nolist;
delete _g_:;
quit;
%mend grouper;
DATA baseData;
INPUT VAR1 VAR2 $;
CARDS;
1 A
1 A
1 B
2 C
3 D
4 E
5 F
6 B
7 B
1 D
1 X
7 G
6 Y
6 D
6 I
8 D
9 Z
9 X
;
RUN;
%grouper(
baseData,
outData,
VAR1,
VAR2,
groupID);
```
|
How to group using two variables
|
[
"",
"sql",
"sas",
""
] |
Assume I have a database set up and a table named `MyTable` which contains a large number of records (tens of thousands). Assume a query as follows...
```
select * from MyTable where ColumnX = 'X'
```
... returns just a handful of records (< 10). Let's say I wanted to filter this result set further to only those records where `ColumnY` matches `'Y1'` or `'Y2'`. Is it better from a speed and memory perspective to simply modify to the above query as follows...
```
select * from MyTable where ColumnX = 'X' and (ColumnY = 'Y1' or ColumnY = 'Y2')
```
... Or is it better to iterate over the (small) result set in code and filter out only those records where `ColumnY` matches `'Y1'` or `'Y2'`? The reason I ask is because I have been told that `OR` clauses are bad in database queries from a performance perspective (when dealing with large tables) and better avoided where possible.
> **Note:** The scenario in which this applies for me is an Android application with a local SQLite database but I guess the question is a bit more generic than that.
|
SQLite's documentation describes [multiple optimizations](http://www.sqlite.org/optoverview.html#or_opt) that can be done on queries with OR, and says:
> For any given query, the fact that the OR-clause optimization described here can be used does not guarantee that it will be used. SQLite uses a cost-based query planner that estimates the CPU and disk I/O costs of various competing query plans and chooses the plan that it thinks will be the fastest. If there are many OR terms in the WHERE clause or if some of the indices on individual OR-clause subterms are not very selective, then SQLite might decide that it is faster to use a different query algorithm, or even a full-table scan. Application developers can use the [EXPLAIN QUERY PLAN](http://www.sqlite.org/lang_explain.html) prefix on a statement to get a high-level overview of the chosen query strategy.
In any case, implementing the OR by hand in your code is very likely to be slower than letting the database do it, because the database has to read and return all rows that match on `ColumnX`, even those that will not match on `ColumnY`.
Furthermore, the database already has code to do this filtering; implementing it again just increases the complexity of your code and the chances of errors.
The statement that "`OR` clauses are bad in database queries from a performance perspective (when dealing with large tables) and better avoided where possible" is not quite true; if you need the `OR`, all alternatives are worse.
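As a quick sanity check, not taken from the answer itself, that the single OR query and hand filtering agree, here is a sketch using Python's stdlib `sqlite3` with made-up table contents:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (ColumnX TEXT, ColumnY TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [("X", "Y1"), ("X", "Y2"), ("X", "Y3"), ("Z", "Y1")])

# Let the database evaluate the OR
in_db = con.execute(
    "SELECT * FROM MyTable "
    "WHERE ColumnX = 'X' AND (ColumnY = 'Y1' OR ColumnY = 'Y2')").fetchall()

# Filter ColumnY by hand instead, reading every ColumnX match first
by_hand = [r for r in con.execute("SELECT * FROM MyTable WHERE ColumnX = 'X'")
           if r[1] in ("Y1", "Y2")]
```

Note that the hand-filtered version still ships every `ColumnX = 'X'` row out of the database before discarding some, which is exactly the extra cost the answer describes.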
|
You can try with IN clause :
```
select * from MyTable where ColumnX = 'X' and ColumnY in ('Y1','Y2')
```
|
Is it better to avoid OR clauses in database queries?
|
[
"",
"android",
"sql",
"sqlite",
"android-sqlite",
"database-performance",
""
] |
I have the following query
```
SELECT COUNT( iContactId ) AS Users, DATE( dCreatedAt ) AS ActivityDate
FROM contacts WHERE iAppId =".$iAppId."
AND DATE(dCreatedAt) IN (CURRENT_DATE(), CURRENT_DATE()-1 )
GROUP BY ActivityDate
```
With this I am getting
```
Users |ActivityDate
1 |2014-09-19
```
I want 0 if there are no matching rows,
e.g.
```
Users |ActivityDate
0 |CURRENT_DATE()
0 | CURRENT_DATE()-1
```
How can I do that?
|
You could use union
```
SELECT COUNT( iContactId ) AS Users, CURRENT_DATE() AS ActivityDate
FROM contacts WHERE iAppId =".$iAppId."
AND DATE(dCreatedAt) = CURRENT_DATE()
UNION
SELECT COUNT( iContactId ) AS Users, CURRENT_DATE() - 1 AS ActivityDate
FROM contacts WHERE iAppId =".$iAppId."
AND DATE(dCreatedAt) = CURRENT_DATE() - 1
```
|
## **QUERY**
```
SELECT
COUNT(C.iContactId) AS Users,
DATE(C.dCreatedAt) AS ActivityDate
FROM
contacts C
LEFT OUTER JOIN
(
SELECT CURRENT_DATE() AS Dates FROM dual
UNION
SELECT CURRENT_DATE() - 1 AS Dates FROM dual
) D
ON
D.Dates = DATE(C.dCreatedAt)
WHERE
C.iAppId =".$iAppId."
GROUP BY
C.ActivityDate
```
|
Get the 0 value if No result in column -- MySQL
|
[
"",
"mysql",
"sql",
"select",
""
] |
```
SELECT *
FROM
TableName
WHERE
ORDER BY
CASE @OrderByColumn
WHEN 1 THEN Forename
WHEN 2 THEN Surname
END;
```
I have a statement like above which lets me dynamically choose how to order the results of a query. However, how do I specify that I want the Forename ordered `DESC` and the Surname `ASC`?
|
You need to split your `ORDER BY` in two parts:
```
SELECT *
FROM
TableName
WHERE
ORDER BY
(CASE @OrderByColumn
WHEN 1 THEN Forename
END) DESC -- Forename --> descending
, (CASE @OrderByColumn
WHEN 2 THEN Surname
END) ASC -- Surname --> ascending
```
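The same two-part trick can be checked in SQLite via Python's stdlib `sqlite3`, with the named parameter `:col` standing in for `@OrderByColumn`; the table contents here are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableName (Forename TEXT, Surname TEXT)")
con.executemany("INSERT INTO TableName VALUES (?, ?)",
                [("Alice", "Young"), ("Carol", "Brown"), ("Bob", "Smith")])

# Whichever CASE does not match yields NULL for every row,
# so only the chosen key actually orders anything.
sql = """SELECT Forename, Surname FROM TableName
         ORDER BY (CASE :col WHEN 1 THEN Forename END) DESC,
                  (CASE :col WHEN 2 THEN Surname END) ASC"""

by_forename_desc = [r[0] for r in con.execute(sql, {"col": 1})]
by_surname_asc   = [r[1] for r in con.execute(sql, {"col": 2})]
```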
|
You need two clauses in the `order by`:
```
ORDER BY (CASE WHEN @OrderByColumn = 1 and @Dir = 'ASC' THEN Forename
WHEN @OrderByColumn = 2 and @Dir = 'ASC' THEN Surname
END) ASC,
(CASE WHEN @OrderByColumn = 1 and @Dir = 'DESC' THEN Forename
WHEN @OrderByColumn = 2 and @Dir = 'DESC' THEN Surname
END) DESC
```
|
Case statement for Order By clause with Desc/Asc sort
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a SQL Server stored procedure that looks simplified like this:
```
begin
select
(
case
when @Method = 'phone' then u.PhoneNumber
when @Method = 'email' then u.Email
end
)
from
Table a
join
Users u on a.ID = u.Id
where
a.Active = 1
and
-- This part does not work
case
when @Method = 'phone' then u.PhoneNumber is not null
when @Method = 'email' then u.Email is not null
end
end
```
So the problem is that I don't want null values and both phone and email can be null. So if the @Method parameter is phone, I don't want null values in `u.PhoneNumber` and when the input parameter is email I don't want null values in the `u.Email` column.
What is the best way to solve this?
|
Change your where condition as below
```
where
a.Active = 1
and (
(@Method = 'phone' and u.PhoneNumber is not null)
or
(@Method = 'email' and u.Email is not null)
)
```
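A small sketch (SQLite via Python's stdlib `sqlite3`, with `:m` in place of `@Method` and made-up rows) showing that the OR-of-ANDs rewrite picks the right non-null column for each method:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Users (Id INTEGER, PhoneNumber TEXT, Email TEXT)")
con.executemany("INSERT INTO Users VALUES (?, ?, ?)",
                [(1, "555-0100", None),   # phone only
                 (2, None, "a@b.c"),      # email only
                 (3, None, None)])        # neither -> never returned

sql = """SELECT Id FROM Users
         WHERE (:m = 'phone' AND PhoneNumber IS NOT NULL)
            OR (:m = 'email' AND Email IS NOT NULL)"""

phone_ids = [r[0] for r in con.execute(sql, {"m": "phone"})]
email_ids = [r[0] for r in con.execute(sql, {"m": "email"})]
```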
|
You do not need to use CASE WHEN in the WHERE clause; you can use
```
where
a.Active = 1
and
-- This part now work
(
(@Method = 'phone' and u.PhoneNumber is not null) OR
(@Method = 'email' and u.Email is not null)
)
```
|
SQL Server Procedure Case when in where clause
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to produce a sales report that shows, for example,
```
itemid salesdate qty_sold
item1 1/9/14 3
item1 1/9/14 2
item1 1/9/14 5
item2 2/9/14 2
item3 4/9/14 1
```
The problem I have is that the sales of, for example, 3 on Monday could be made up of 1-3 different sales orders, so I can only get it to show multiple lines rather than grouped together as I want.
I want to produce a report which shows item id, date of purchase and total sold on that day, not a list of all sales that include that item, if that makes sense.
thanks in advance
more details:
```
SELECT st.orderdate, sl.itemid, sl.qty
FROM salestable st
INNER JOIN salesline sl
ON st.salesid = sl.salesid
```
currently, it displays results as follows.
```
OrderDate ItemId Qty
1/1/14 101 1
1/1/14 101 3
1/1/14 102 1
```
I would like to group the rows if possible to only show 1 line per date & itemid. it doesn't work because they are obviously separate lines in the database as they have different order numbers etc.
```
OrderDate ItemId Qty
1/1/14 101 4
1/1/14 102 1
2/1/14 102 5
2/1/14 101 2
```
If it can't be done, then a grouping type within Report Builder would suffice, but I can't see a way of doing it!
Cheers
|
For the SQL you have included in your question you would need to use the aggregate function SUM with the GROUP BY statement.
```
SELECT st.orderdate, sl.itemid, SUM(sl.qty)
FROM salestable st
INNER JOIN salesline sl
ON st.salesid = sl.salesid
GROUP BY st.orderdate, sl.itemid
```
You mention other fields that would prevent this, but don't include them in your example anywhere.
If you could modify your question to include these we could come up with a definitive answer.
Based on your comment about the date, this is one possible solution:
```
SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, st.orderdate)), sl.itemid, SUM(sl.qty)
FROM salestable st
INNER JOIN salesline sl
ON st.salesid = sl.salesid
GROUP BY DATEADD(dd, 0, DATEDIFF(dd, 0, st.orderdate)), sl.itemid
```
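In SQLite terms (Python stdlib `sqlite3`, sample rows invented), `date(orderdate)` plays the role of the `DATEADD`/`DATEDIFF` midnight truncation, and the same `SUM` + `GROUP BY` collapses the result to one row per day and item:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (orderdate TEXT, itemid INTEGER, qty INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("2014-01-01 09:30", 101, 1), ("2014-01-01 14:10", 101, 3),
                 ("2014-01-01 11:00", 102, 1), ("2014-01-02 10:00", 102, 5)])

# date() strips the time part, so the two 101 sales on the same day merge
rows = con.execute(
    """SELECT date(orderdate) AS d, itemid, SUM(qty)
       FROM sales GROUP BY d, itemid ORDER BY d, itemid""").fetchall()
```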
|
If I understand it correctly, your problem is with sorting the record.
If that is the case, then append this to your SQL code.
```
ORDER BY salesdate DESC
```
|
SQL Query to use in report builder 3.0: grouping results together
|
[
"",
"sql",
""
] |
I have a database called Clients\_Visits and I have found that there are many duplicate records for a particular patient based on their visit date and type of visit. I'm just wondering if somebody can help me eliminate/drop these records, which are taking up space.
Sample Data:
**Clients\_Visits:tbl**
```
RecID | Customer_No | DateOfVisit | VisitType
--------------------------------------------
01 | 1001 | 01/05/2011 | 1
02 | 1001 | 01/05/2011 | 1
03 | 1002 | 06/09/2010 | 2
04 | 1004 | 07/08/2008 | 2
05 | 1001 | 01/05/2011 | 3
```
**In my sample table data, I want to drop record RecID 01 or RecID 02 (but for the sake of example I'm dropping RecID 02):**
```
RecID | Customer_No | DateOfVisit | VisitType
--------------------------------------------
02 | 1001 | 01/05/2011 | 1
```
This record has the same
* Customer\_No(1001)
* DateOfVisit(01/05/2011)
* VisitType (1)
considering that it is only possible to have one kind of visit type per Date of visit.
Thanks in advance.
**ADDITIONAL NOTES:**
Sorry, I think I have confused myself and some commenters.
Take note that the records below are not 3 duplicates, considering that the VisitType of **RecID 05** is 3. Thus I need to drop only 1 record, either **RecID 01 or RecID 02**, which is considered redundant based on its DateOfVisit and VisitType:
```
RecID | Customer_No | DateOfVisit | VisitType
--------------------------------------------
01 | 1001 | 01/05/2011 | 1
02 | 1001 | 01/05/2011 | 1
05 | 1001 | 01/05/2011 | 3
```
Thanks.
|
This statement deletes all the records with identical Customer\_No, DateOfVisit and VisitType.
The record with the lowest RecID stays in the DB:
```
DELETE
FROM Clients_Visits v1
WHERE EXISTS (SELECT RecID
FROM Clients_Visits v2
WHERE v1.Customer_No=v2.Customer_No
AND v1.DateOfVisit=v2.DateOfVisit
AND v1.VisitType=v2.VisitType
AND v1.RecID>v2.RecID)
```
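The same keep-the-lowest-RecID delete runs essentially unchanged on the question's sample data in SQLite (Python stdlib `sqlite3`); inside the correlated subquery the bare table name refers to the outer row being tested:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Clients_Visits
               (RecID INTEGER, Customer_No INTEGER,
                DateOfVisit TEXT, VisitType INTEGER)""")
con.executemany("INSERT INTO Clients_Visits VALUES (?, ?, ?, ?)",
                [(1, 1001, "01/05/2011", 1), (2, 1001, "01/05/2011", 1),
                 (3, 1002, "06/09/2010", 2), (4, 1004, "07/08/2008", 2),
                 (5, 1001, "01/05/2011", 3)])

# Delete every row for which a lower-RecID twin exists
con.execute("""DELETE FROM Clients_Visits
               WHERE EXISTS (SELECT 1 FROM Clients_Visits v2
                             WHERE Clients_Visits.Customer_No = v2.Customer_No
                               AND Clients_Visits.DateOfVisit = v2.DateOfVisit
                               AND Clients_Visits.VisitType   = v2.VisitType
                               AND Clients_Visits.RecID > v2.RecID)""")
remaining = [r[0] for r in
             con.execute("SELECT RecID FROM Clients_Visits ORDER BY RecID")]
```

Only RecID 02 is removed; RecID 05 survives because its VisitType differs.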
|
Assuming you want to keep the highest `recid`, there are several ways to do this. One is to establish a `row_number` and delete accordingly:
```
with cte as (
select *,
row_number() over (partition by customer_no, dateofvisit
order by recid desc) rn
from yourtable
)
delete from cte
where rn != 1
```
Given your desired results, I don't think you want to add `visitType` to your partition.
---
Given your title, you'd actually want to do something like this, but it doesn't match your desired results as this would also remove recid 5:
```
with cte as (
select *, count(*) over (partition by customer_no, dateofvisit) cnt
from yourtable
)
delete from cte where cnt != 1
```
|
SQL - Drop exactly duplicate rows of records based on 2 columns as validation
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
I am creating a package that will run every fifteen minutes and write a csv file into a shared folder. I am stuck on the query. I need the query to only go back to 12:00AM of the same day. Once it's the next day I need to have the query run only for that same day and not go back to the previous day.
I have tried this
```
DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE() - 1))
```
but obviously that isn't producing the result I need.
Everything else in the query is fine and I am receiving the appropriate data I need. However, I'm setting the dates manually at the moment, and that is not efficient.
Any help is greatly appreciated. I'm not a SQL Server expert, and trudging through the TechNet documents turns up lots of information. I'm just having a difficult time connecting their information to my implementation.
|
If you are using SQL Server 2008 + you can simply use:
```
SELECT CAST(GETDATE() AS DATE)
```
To get the current date.
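Because a date-typed value compares as midnight, `ts >= today` keeps only rows from the current day. A sketch of the equivalent filter with Python's stdlib `sqlite3`, using a fixed pretend "now" so the result is stable (table and column names are illustrative):

```python
import sqlite3
from datetime import datetime, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (ts TEXT)")

now = datetime(2014, 9, 19, 13, 0)  # pretend "now"
stamps = [now - timedelta(hours=2),  # earlier today -> kept
          now - timedelta(days=1)]   # yesterday     -> filtered out
con.executemany("INSERT INTO events VALUES (?)",
                [(t.strftime("%Y-%m-%d %H:%M:%S"),) for t in stamps])

today = now.strftime("%Y-%m-%d")  # stands in for CAST(GETDATE() AS DATE)
count = con.execute("SELECT COUNT(*) FROM events WHERE ts >= ?",
                    (today,)).fetchone()[0]
```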
|
You can use the following to get the current date (I'm sure there are many other methods as well)
```
SELECT CONVERT(DATETIME, CONVERT(VARCHAR(10), GETDATE(), 101))
```
as the -1 comment pointed out, using varchars with dates is a bad idea (though everyone seems to do it all the time). Another solution would be:
```
SELECT CONVERT(DATE, GETDATE())
```
though this won't work until a specific version of SQL server.
Yet another way to do it would be:
```
SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()))
```
which I believe would work on all SQL Server versions (though I find it the hardest to understand).
The inner (convert varchar) gives you your date without time, then it is converted back into date format.
So for your query you could use
```
DECLARE @today DATETIME
SET @today = (SELECT CONVERT(DATE, GETDATE()))
SELECT *
FROM table
WHERE date >= @today
```
|
How to run same day query for SQL Server
|
[
"",
"sql",
"datetime",
"sql-server-2008-r2",
""
] |
Firstly, I'm a SQL rookie and only know the basics. I have tried googling (and have seen some references and examples of similar problems, but I was not able to follow them or apply them to my problem), and I'm not even 100% sure it is possible in my situation. I'm trying to figure out how to perform a select that involves calculating a column based on a column from the previous row. I want the balance from the previous row to be the carry in the next:
I have a list of transactions
```
_____________________________________________
| id | Date | time | amount | type |
| 1 |2014-01-01 | 12:00 | 2000 | payin |
| 5 |2014-01-01 | 17:00 | -20 | payout |
| 2 |2014-01-01 | 18:00 | -20 | payout |
| 3 |2014-01-02 | 12:00 | -10 | payout |
| 4 |2014-01-03 | 12:00 | +300 | payin |
```
As you can see there is an id ( which is the primary key ) but its order is not guaranteed because a transaction could be back dated and added after more recent transactions have been. I would like to get the following but I am completely lost. SQL is not my area at all.
```
| Date | time | carry | amount | balance | type |
|2014-01-01 | 12:00 | 0 | 2000 | 2000 | payin |
|2014-01-01 | 17:00 | 2000 | -20 | 1980 | payout |
|2014-01-01 | 17:00 | 1980 | -20 | 1960 | payout |
|2014-01-02 | 12:00 | 1960 | -10 | 1950 | payout |
|2014-01-03 | 12:00 | 1950 | -1 | 1949 | payout |
|2014-01-03 | 12:00 | 1949 | -5 | 1944 | payout |
|2014-01-03 | 12:00 | 1944 | +300 | 2244 | payin |
```
and in turn I believe I can get the following by summing based on type and group by date
```
| Date | carry | total in | total out | balance |
|2014-01-01 | 0 | 2000 | -40 | 1960 |
|2014-01-02 | 1960 | 0 | -10 | 1950 |
|2014-01-03 | 1950 | 300 | -6 | 2244 |
```
Up till now I have been getting the desired result by querying the MySQL database from Node.js for each specific date using the following queries (all performed separately):
```
var sql_init_Balance_day ='select sum(amount) as total '+
'from transactions '+
'Where transaction_date<\''+mdate.format('YYYY-MM-DD')+'\'';
var sql_closing_Balance_day ='select sum(amount) as closingBalance '+
'from transactions '+
'Where transaction_date<\''+mtomorrow.format('YYYY-MM-DD')+'\'';
var sql_payins_day = 'select sum(amount) as deposits '+
'from transactions '+
'Where transaction_date=\''+mdate.format('YYYY-MM-DD')+'\''+
' and type=\'payin\'';
var sql_payouts_day = 'select sum(amount) as deductions '+
'from transactions '+
'Where transaction_date=\''+mdate.format('YYYY-MM-DD')+'\' '+
'and type=\'payout\'';
```
|
You can calculate the carry using variables
```
select `Date`, max(case when rowNum = 1 then carry end) carry,
sum(case when type = 'payin' then amount else 0 end) total_in,
sum(case when type = 'payout' then amount else 0 end) total_out,
sum(amount) balance
from (
select `Date`, time, @balance carry, amount,
(@balance := @balance + amount) balance, type,
@rowNum := IF(`Date` = @prevDate, @rowNum + 1, 1) rowNum,
@prevDate := `Date`
from transactions
cross join (select @balance := 0) t1
order by `Date`, time
) t1 group by `Date`
```
**Update:** Another query using a self join. Self joins are typically slower than using variables but it might be good enough for what you need.
```
select t1.date,
coalesce(@prevBalance,0) carry,
sum(case when t2.type = 'payin' and t2.date = t1.date then t2.amount else 0 end) total_in,
sum(case when t2.type = 'payout' and t2.date = t1.date then t2.amount else 0 end) total_out,
sum(t2.amount) balance,
@prevBalance := sum(t2.amount)
from transactions t1
join transactions t2 on t2.date <= t1.date
group by t1.date
order by t1.date
```
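The carry/balance bookkeeping that the session variables perform can be sketched in plain Python (function and field names are illustrative); rows are first sorted by date and time, exactly as the `ORDER BY` does:

```python
def daily_summary(rows):
    """rows: list of (date, time, amount, type); returns one dict per day."""
    rows = sorted(rows, key=lambda r: (r[0], r[1]))
    out, balance, day = [], 0, None
    for d, _, amount, typ in rows:
        if d != day:
            # Start a new day: carry is yesterday's closing balance
            out.append({"date": d, "carry": balance,
                        "total_in": 0, "total_out": 0, "balance": balance})
            day = d
        key = "total_in" if typ == "payin" else "total_out"
        out[-1][key] += amount
        balance += amount
        out[-1]["balance"] = balance
    return out
```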
|
I believe you can get your first result with the following:
```
SELECT date,
time,
@carry := COALESCE(@carry,0) carry,
amount,
@carry := @carry + amount balance,
type
FROM transaction
ORDER BY date, time
```
[**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/12db1/5)
|
How can I reference previous row in a MySQL select query
|
[
"",
"mysql",
"sql",
"mamp",
""
] |
Using Django, I often have to add new fields to our PostgreSQL database. The fields usually have an initial value and they must be set to NOT NULL. My procedure for this involves three steps:
1) Add field and set initial value:
```
ALTER TABLE my_table ADD COLUMN my_field boolean;
UPDATE my_table SET my_field = FALSE;
```
2) Set NOT NULL:
```
ALTER TABLE my_table ALTER COLUMN my_field SET NOT NULL;
```
All three queries cannot be run in one go, because this results in a PostgreSQL error.
What's the most efficient way of performing this task? I'd like to do this in a single query - and with adding several fields in one go. Something along those lines:
```
ALTER TABLE my_table ADD COLUMN my_field boolean INITIAL FALSE SET NOT NULL, ADD COLUMN my_field2 boolean INITIAL FALSE SET NOT NULL;
```
|
Did you try this;
```
alter table my_table add column my_field1 boolean default false NOT NULL,
add column my_field2 boolean default false NOT NULL;
```
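SQLite (Python stdlib `sqlite3`) happens to accept the same one-shot form: the constant `DEFAULT` backfills existing rows, so the `NOT NULL` constraint is satisfied immediately. A quick check with invented table names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (id INTEGER)")
con.execute("INSERT INTO my_table VALUES (1)")  # a pre-existing row

# NOT NULL on ADD COLUMN requires a non-NULL default to backfill old rows
con.execute("ALTER TABLE my_table "
            "ADD COLUMN my_field boolean DEFAULT 0 NOT NULL")
value = con.execute("SELECT my_field FROM my_table").fetchone()[0]
```

(SQLite allows only one `ADD COLUMN` per `ALTER TABLE`, so the two-column version from the answer would need two statements there.)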
|
You should do that by two steps:
1. Add column
2. Alter column: set constraint, add default
Query:
```
ALTER TABLE my_table ADD COLUMN my_field boolean;
ALTER TABLE my_table ALTER COLUMN my_field SET NOT NULL,
ALTER COLUMN my_field TYPE boolean USING false;
```
Or this:
```
ALTER TABLE my_table ADD COLUMN my_field boolean;
ALTER TABLE my_table ALTER COLUMN my_field SET NOT NULL,
ALTER COLUMN my_field SET DEFAULT false;
```
The benefit of first is that you can calculate value for each row based on your row data (in other words you can refer fields). for `DEFAULT` clause you must use bare value.
See [documentation examples](https://www.postgresql.org/docs/current/static/sql-altertable.html#id-1.9.3.33.8) with timestamps
|
Add a new column to PostgreSQL database, set initial value and set NOT NULL all in one go
|
[
"",
"sql",
"postgresql",
"ddl",
"alter-table",
""
] |