I have a table that contains information about reports being accessed, along with the date. I need to group the accesses by date range and count them.
I'm using T-SQL
Table
```
EventId ReportId Date
60 4 11/24/2015
59 11 11/23/2015
58 6 11/22/2015
57 11 11/22/2015
56 9 11/21/2015
55 3 11/20/2015
54 5 11/20/2015
53 6 11/19/2015
52 5 11/19/2015
51 4 11/18/2015
50 3 11/17/2015
49 9 11/16/2015
```
If the day difference is 3, then I need the result in this format:
```
StartDate EndDate ReportsAccessed
11/22/2015 11/24/2015 4
11/19/2015 11/21/2015 5
11/16/2015 11/18/2015 3
```
but the difference between days could change.
|
Assuming you have values for all the dates, then you can calculate the difference in days between each date and the maximum (or minimum) date. Then divide this by three and use that for aggregation:
```
select min(date), max(date), count(*) as ReportsAccessed
from (select t.*, max(date) over () as maxd
from table t
) t
group by (datediff(day, date, maxd) / 3)
order by min(date);
```
"3" is what I think you are referring to as the "difference in days".
|
These two blocks are just for clarity about which parameters you'd have to change:
```
DECLARE @t as TABLE(
id int identity(1,1),
reportId int,
dateAccess date)
DECLARE @NumberOfDays int=3;
```
And here is the actual select:
```
Select StartDate, EndDate, COUNT(reportId) from
(
select *,
DATEADD(day, DATEDIFF(DAY, dateAccess, maxdate.maxdate)%@NumberOfDays, dateAccess) as EndDate,
DATEADD(day, DATEDIFF(DAY, dateAccess, maxdate.maxdate)%@NumberOfDays-@NumberOfDays+1, dateAccess) as StartDate
from @t, (select MAX(dateAccess) maxdate from @t t2) maxdate
) results
GROUP BY StartDate, EndDate
ORDER BY StartDate desc
```
There are a few places I'm unsure are optimized, for instance the cross join with `select max(date)` instead of a subquery, but this returns the exact result from your OP.
Basically, I split the entries into groups based on how far they are from `MAX(date)`, and then use `COUNT`. On that note, it might be more useful to use `COUNT(DISTINCT ...)`: otherwise, if someone looks at document #9 three times, the query will say 3 documents were accessed when only 1 was actually looked at.
The upside with using `MAX(date)` over `MIN(date)` is that your first group will always have the maximal amount of days. This will prove very useful if you want to compare the last few periods to the average. The downside is that you don't have stable data. With every new entry (assuming it's a new day), your query will cycle itself to produce a new set of results. If you wanted to graph the data, you'd be better comparing to MIN(date) that way the first days won't change when you add a new one.
Depending on the usage, it could even be useful to extrapolate the number of accesses done in the last period (in that case `MIN(date)` is also preferable).
---
Here's an adaptation of Gordon's answer that's probably *much* more optimized (it's at the very least much more aesthetic):
```
SELECT DateADD(day, -datediff(day, dateAccess, maxdate)/3*3, maxdate) as EndDate,
       DateADD(day, -datediff(day, dateAccess, maxdate)/3*3 - 2, maxdate) as StartDate,
count(reportId)
from (select *, MAX(dateAccess) over() as maxdate from @t) t
GROUP BY datediff(day, dateAccess, maxdate)/3, maxdate
```
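For a sanity check, the bucketing idea can be reproduced against the sample data. A sketch in Python with SQLite, where (as assumptions for portability) ISO date strings plus `julianday()` stand in for T-SQL's `DATEDIFF`, and a scalar subquery replaces `MAX() OVER ()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (EventId INTEGER, ReportId INTEGER, Date TEXT);
INSERT INTO events VALUES
 (60,4,'2015-11-24'),(59,11,'2015-11-23'),(58,6,'2015-11-22'),
 (57,11,'2015-11-22'),(56,9,'2015-11-21'),(55,3,'2015-11-20'),
 (54,5,'2015-11-20'),(53,6,'2015-11-19'),(52,5,'2015-11-19'),
 (51,4,'2015-11-18'),(50,3,'2015-11-17'),(49,9,'2015-11-16');
""")

# Group rows into 3-day buckets counted back from the maximum date.
rows = conn.execute("""
    SELECT MIN(Date) AS StartDate, MAX(Date) AS EndDate,
           COUNT(*) AS ReportsAccessed
    FROM (SELECT e.*, (SELECT MAX(Date) FROM events) AS maxd
          FROM events e) t
    GROUP BY CAST(julianday(maxd) - julianday(Date) AS INTEGER) / 3
    ORDER BY StartDate DESC
""").fetchall()
for r in rows:
    print(r)
```

This reproduces the OP's expected three buckets of 4, 5, and 3 accesses.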
|
Group by contiguous dates and Count
|
[
"sql",
"t-sql"
] |
I'm working on the AdventureWorks2014 database and need to retrieve the IDs of clients who placed orders in BOTH 2011 and 2014. So far I have this query:
```
SELECT CustomerID, SalesOrderID, YEAR(OrderDate) AS 'Year'
FROM Sales.SalesOrderHeader
WHERE YEAR(OrderDate) IN (2011,2014)
HAVING COUNT(YEAR(OrderDate))=2;
```
When I try to run it though, I get an error:
> Column 'Sales.SalesOrderHeader.CustomerID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I tried adding `GROUP BY` or `ORDER BY`, but it resulted in having an empty table. I don't necessarily need to use `HAVING COUNT`, it was just one idea I came up with. Is there any way to list CustomerID's that repeat themselves in 2011 and 2014? I'm stuck, maybe I'm not seeing something simple?
**EDIT:** I really need to have both SalesOrderID and Year selected as well, not only CustomerID. That's usually where I run into trouble.
|
Try this; it will return customer IDs only for customers who purchased in both 2011 and 2014:
```
SELECT S.CustomerID, S.SalesOrderID, YEAR(S.OrderDate) AS 'Year'
FROM Sales.SalesOrderHeader S
WHERE S.CustomerId IN (
SELECT DISTINCT S1.CustomerId FROM Sales.SalesOrderHeader S1
INNER JOIN Sales.SalesOrderHeader S2 ON S1.CustomerID = S2.CustomerId
WHERE YEAR(S1.OrderDate) = 2011 AND YEAR(S2.OrderDate) = 2014)
AND YEAR(S.OrderDate) IN (2011,2014)
```
|
You have to `GROUP BY CustomerID` and use the `DISTINCT` keyword inside `COUNT`, so each year is counted only once (in case there were many orders in each year):
```
SELECT CustomerID
FROM Sales.SalesOrderHeader
WHERE YEAR(OrderDate) IN (2011,2014)
GROUP BY CustomerID
HAVING COUNT(DISTINCT YEAR(OrderDate))=2;
```
To select all orders for these customers try this:
```
SELECT CustomerID, SalesOrderID, YEAR(OrderDate) AS 'Year'
FROM Sales.SalesOrderHeader
WHERE CustomerID IN
(SELECT CustomerID
FROM Sales.SalesOrderHeader
WHERE YEAR(OrderDate) IN (2011,2014)
GROUP BY CustomerID
HAVING COUNT(DISTINCT YEAR(OrderDate))=2
)
```
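The `HAVING COUNT(DISTINCT ...)` filter can be checked on a small sample. A sketch in Python with SQLite (an assumption: `strftime('%Y', ...)` plays the role of T-SQL's `YEAR(...)`, and the table/column names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SalesOrderHeader (CustomerID INTEGER, SalesOrderID INTEGER, OrderDate TEXT);
INSERT INTO SalesOrderHeader VALUES
 (1, 100, '2011-05-01'), (1, 101, '2014-06-01'),  -- both years
 (2, 102, '2011-07-01'), (2, 103, '2011-08-01'),  -- 2011 only
 (3, 104, '2014-09-01');                          -- 2014 only
""")

# Keep only customers whose orders span both distinct years.
rows = conn.execute("""
    SELECT CustomerID, SalesOrderID, strftime('%Y', OrderDate) AS Year
    FROM SalesOrderHeader
    WHERE CustomerID IN (
        SELECT CustomerID
        FROM SalesOrderHeader
        WHERE strftime('%Y', OrderDate) IN ('2011', '2014')
        GROUP BY CustomerID
        HAVING COUNT(DISTINCT strftime('%Y', OrderDate)) = 2)
    ORDER BY SalesOrderID
""").fetchall()
print(rows)  # [(1, 100, '2011'), (1, 101, '2014')]
```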
Or use JOIN syntax as in @dasblinkenlight's answer.
|
How to write SQL query for matching two conditions in the same column?
|
[
"sql",
"sql-server"
] |
I have a MySql query, which is given below:
```
UPDATE signup SET lastname='Lastname', password='123'
WHERE firstname='Firstname';
```
I am using MySql Workbench to execute the query.
But it's not updating the row and shows this error:
> You are using safe update mode and you tried to update a table without
> a WHERE that uses a KEY column To disable safe mode, toggle the option
> in Preferences -> SQL Editor and reconnect.
|
In MySQL Workbench, safe update mode is enabled by default, so if your `WHERE` clause doesn't use a key column, the query is blocked. Try disabling it with these steps:
`Edit` > `Preferences` > `SQL Editor` > uncheck `"Safe Updates"`
**Note** - reconnect to the server (`Query` > `Reconnect to Server`) and then run your query again.
|
MySQL's safe update mode helps you avoid accidentally updating or deleting many rows in one shot: it refuses to run `UPDATE` (or `DELETE`) queries whose `WHERE` clause does not use a key column.
As @ManojSalvi said, you can turn it off permanently in the settings.
If you want to disable `SAFE UPDATES` mode only temporarily, you can try the following:
```
SET SQL_SAFE_UPDATES = 0;
UPDATE signup SET lastname='Lastname', password='123'
WHERE firstname='Firstname';
SET SQL_SAFE_UPDATES = 1;
```
|
Update query not working in mysql workbench
|
[
"mysql",
"sql",
"mysql-workbench"
] |
Is there a way to select rpd.name using a different WHERE clause if the previous one returns an empty string?
I've looked into CASE, but I'm not sure how would I go about using it.
## table replacement\_part
**part\_id** **href**
1 url\_1
2 url\_2
3 url\_3
## table replacement\_part\_description
**part\_id** **language\_id** **name**
1 2
1 1 hose
2 2
2 1 control module
3 2 vonkajsi kryt
3 1 outer casing
## expected output
**part\_id** **href** **name**
1 url\_1 hose
2 url\_2 control module
3 url\_3 vonkajsi kryt
```
SELECT *
FROM replacement_part AS rp
LEFT JOIN replacement_part_description AS rpd
ON (rp.part_id = rpd.part_id)
WHERE language_id = :id
```
So something like
```
if rpd.name = ''
WHERE language_id = a,
else
WHERE language_id = b
```
|
One way is to join the `rpd` table twice, moving the `WHERE` condition into the `ON` clause, and then use `COALESCE`:
```
SELECT
...,
COALESCE(rpd1.name, rpd2.name) AS name
FROM
  replacement_part rp LEFT JOIN replacement_part_description rpd1
    ON rp.part_id = rpd1.part_id AND rpd1.language_id = a
  LEFT JOIN replacement_part_description rpd2
    ON rp.part_id = rpd2.part_id AND rpd2.language_id = b
```
Here I assume the language column lives on the description table. If `name` can be an empty string rather than NULL, use `CASE WHEN` instead of `COALESCE`:
```
CASE WHEN rpd1.name='' OR rpd1.name IS NULL THEN rpd2.name ELSE rpd1.name END AS Name
```
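The double-join-with-fallback pattern can be verified against the sample tables. A sketch in Python with SQLite (assumptions: language 2 is preferred with language 1 as the fallback, and the blank names in the sample are empty strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE replacement_part (part_id INTEGER, href TEXT);
CREATE TABLE replacement_part_description
 (part_id INTEGER, language_id INTEGER, name TEXT);
INSERT INTO replacement_part VALUES (1,'url_1'),(2,'url_2'),(3,'url_3');
INSERT INTO replacement_part_description VALUES
 (1,2,''),(1,1,'hose'),(2,2,''),(2,1,'control module'),
 (3,2,'vonkajsi kryt'),(3,1,'outer casing');
""")

# Prefer the language-2 name; fall back to language 1 when it is empty.
rows = conn.execute("""
    SELECT rp.part_id, rp.href,
           CASE WHEN rpd1.name = '' OR rpd1.name IS NULL
                THEN rpd2.name ELSE rpd1.name END AS name
    FROM replacement_part rp
    LEFT JOIN replacement_part_description rpd1
      ON rp.part_id = rpd1.part_id AND rpd1.language_id = 2
    LEFT JOIN replacement_part_description rpd2
      ON rp.part_id = rpd2.part_id AND rpd2.language_id = 1
    ORDER BY rp.part_id
""").fetchall()
print(rows)
# [(1, 'url_1', 'hose'), (2, 'url_2', 'control module'), (3, 'url_3', 'vonkajsi kryt')]
```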
|
Something like this?
```
SELECT *
FROM replacement_part AS rp
LEFT JOIN replacement_part_description AS rpd
ON (rp.part_id = rpd.part_id)
WHERE
(
language_id = :id
AND (rpd.name = '' OR rpd.name IS NULL)
)
OR language_id = b
```
|
If the column is empty, select the same column using different where clause
|
[
"mysql",
"sql"
] |
I want a query that will display the rows of those **companies** that offer both **theft cover** and **key loss** with status **yes**.
See the Sample Data below :
```
addon status amt1 amt2 company
theft cover yes 7 7 comp1
key loss yes 2 2 comp1
theft cover no NULL NULL comp2
key loss yes 2 33 comp2
key loss yes 1 1 comp3
theft cover yes 12 22 comp3
theft cover yes 11 22 comp4
key loss no NULL NULL comp4
theft cover yes 22 55 comp5
key loss yes 33 44 comp5
SELECT addon, status, amt1, amt2, company
FROM test
WHERE (addon = 'theft cover' OR
addon = 'key loss') AND (status = 'yes')
```
Please help; the above query is not working.
The output I want is below:
```
addon status amt1 amt2 company
theft cover yes 7 7 comp1
key loss yes 2 2 comp1
key loss yes 1 1 comp3
theft cover yes 12 22 comp3
theft cover yes 22 55 comp5
key loss yes 33 44 comp5
```
|
Use a `GROUP BY` clause to group your records by company and addon, and sum the amounts (only needed if you have multiple records per group, which your sample data doesn't), like:
```
SELECT addon, status, SUM(amt1), SUM(amt2), company
FROM test
WHERE (addon = 'theft cover' OR
addon = 'key loss') AND (status = 'yes')
GROUP BY company, addon
```
|
```
SELECT addon, status, amt1, amt2, company
FROM test
WHERE addon IN ('theft cover', 'key loss') AND (status = 'yes')
GROUP BY company, addon, amt1, amt2
```
If there are any duplicate rows, the `GROUP BY` above will collapse them.
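Note that to match the expected output exactly, companies where one of the two add-ons has status `no` (comp2 and comp4 in the sample) must be excluded entirely. One way, not shown in the answers above, is an extra `HAVING COUNT(DISTINCT addon) = 2` filter per company. A sketch in Python with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (addon TEXT, status TEXT, amt1 INTEGER, amt2 INTEGER, company TEXT);
INSERT INTO test VALUES
 ('theft cover','yes',7,7,'comp1'),    ('key loss','yes',2,2,'comp1'),
 ('theft cover','no',NULL,NULL,'comp2'),('key loss','yes',2,33,'comp2'),
 ('key loss','yes',1,1,'comp3'),       ('theft cover','yes',12,22,'comp3'),
 ('theft cover','yes',11,22,'comp4'),  ('key loss','no',NULL,NULL,'comp4'),
 ('theft cover','yes',22,55,'comp5'),  ('key loss','yes',33,44,'comp5');
""")

# Keep only companies where BOTH add-ons have status 'yes'.
rows = conn.execute("""
    SELECT addon, status, amt1, amt2, company
    FROM test
    WHERE status = 'yes'
      AND addon IN ('theft cover', 'key loss')
      AND company IN (
          SELECT company FROM test
          WHERE status = 'yes' AND addon IN ('theft cover', 'key loss')
          GROUP BY company
          HAVING COUNT(DISTINCT addon) = 2)
    ORDER BY company
""").fetchall()
for r in rows:
    print(r)
```

This returns exactly the six rows of the expected output: comp1, comp3, and comp5.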
|
SQL Query to find Records companywise
|
[
"sql",
"sql-server",
"sql-server-2008"
] |
I can get nearly what I require from an SQL query, but not quite, and I would appreciate some help.
I have the following three tables:
```
people
+----+-------------------------+
| id | email |
+----+-------------------------+
| 1 | joe_soap@hotmail.com |
| 2 | john_doe@hotmail.com |
| 3 | fred_bloggs@hotmail.com |
+----+-------------------------+
jobs
+----+-------------+
| id | description |
+----+-------------+
| 1 | Plumber |
| 2 | Plasterer |
| 3 | Carpenter |
| 4 | Builder |
+----+-------------+
people_jobs
+-----------+--------+
| person_id | job_id |
+-----------+--------+
| 1 | 1 |
| 1 | 3 |
| 2 | 3 |
| 3 | 3 |
| 3 | 1 |
| 3 | 4 |
+-----------+--------+
```
Using this query I can output all the data in the format that I require:
```
SELECT people.id, people.email,
GROUP_CONCAT(DISTINCT jobs.description
ORDER By jobs.description DESC SEPARATOR ', ')
FROM jobs
INNER JOIN people_jobs On people_jobs.job_id = jobs.id
INNER JOIN people On people.id = people_jobs.person_id
GROUP BY people.id
```
As follows:
```
+----+-------------------------+----------------------------+
| id | email | GROUP_CONCAT |
+----+-------------------------+----------------------------+
| 1 | joe_soap@hotmail.com | Plumber, Carpenter |
| 2 | john_doe@hotmail.com | Carpenter |
| 3 | fred_bloggs@hotmail.com | Plumber, Carpenter,Builder |
+----+-------------------------+----------------------------+
```
By adding a WHERE clause I can output all the people who can do a specific job:
```
SELECT people.id, people.email,
GROUP_CONCAT(DISTINCT jobs.description
ORDER By jobs.description DESC SEPARATOR ', ')
FROM jobs
INNER JOIN people_jobs On people_jobs.job_id = jobs.id
INNER JOIN people On people.id = people_jobs.person_id
WHERE jobs.description = 'Plumber'
GROUP BY people.id
+----+-------------------------+--------------+
| id | email | GROUP_CONCAT |
+----+-------------------------+--------------+
| 1 | joe_soap@hotmail.com | Plumber |
| 3 | fred_bloggs@hotmail.com | Plumber |
+----+-------------------------+--------------+
```
What I want to achieve is the above output plus the other jobs that the people can do as follows:
```
+----+-------------------------+----------------------------+
| id | email | GROUP_CONCAT |
+----+-------------------------+----------------------------+
| 1 | joe_soap@hotmail.com | Plumber, Carpenter |
| 3 | fred_bloggs@hotmail.com | Plumber, Carpenter,Builder |
+----+-------------------------+----------------------------+
```
Can anyone help please?
|
Use `HAVING` clause instead:
```
SELECT people.id, people.email,
GROUP_CONCAT(DISTINCT jobs.description
ORDER By jobs.description DESC SEPARATOR ', ')
FROM jobs
INNER JOIN people_jobs On people_jobs.job_id = jobs.id
INNER JOIN people On people.id = people_jobs.person_id
GROUP BY people.id
HAVING COUNT(CASE WHEN jobs.description = 'Plumber' THEN 1 END) >= 1
```
`HAVING` filters out people *groups* not having at least one job description equal to `'Plumber'`, whereas `WHERE` operates per row, excluding all *rows* not having a job description equal to `'Plumber'`.
[**Demo here**](http://sqlfiddle.com/#!9/0d61b/1)
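The `HAVING` approach can be reproduced end to end. A sketch in Python with SQLite (one assumption: SQLite's `GROUP_CONCAT(DISTINCT ...)` does not accept a custom separator or `ORDER BY`, so the default comma separator is used here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (id INTEGER, email TEXT);
CREATE TABLE jobs (id INTEGER, description TEXT);
CREATE TABLE people_jobs (person_id INTEGER, job_id INTEGER);
INSERT INTO people VALUES
 (1,'joe_soap@hotmail.com'),(2,'john_doe@hotmail.com'),(3,'fred_bloggs@hotmail.com');
INSERT INTO jobs VALUES (1,'Plumber'),(2,'Plasterer'),(3,'Carpenter'),(4,'Builder');
INSERT INTO people_jobs VALUES (1,1),(1,3),(2,3),(3,3),(3,1),(3,4);
""")

# HAVING keeps whole groups (people) that include at least one 'Plumber' row,
# while the GROUP_CONCAT still sees all of that person's jobs.
rows = conn.execute("""
    SELECT people.id, people.email, GROUP_CONCAT(DISTINCT jobs.description)
    FROM jobs
    JOIN people_jobs ON people_jobs.job_id = jobs.id
    JOIN people ON people.id = people_jobs.person_id
    GROUP BY people.id
    HAVING COUNT(CASE WHEN jobs.description = 'Plumber' THEN 1 END) >= 1
    ORDER BY people.id
""").fetchall()
for r in rows:
    print(r)
```

Only people 1 and 3 survive, and each still lists all of their jobs.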
|
You can wrap your first query with subquery and use `FIND_IN_SET`:
```
SELECT *
FROM (
SELECT people.id, people.email,
GROUP_CONCAT(DISTINCT jobs.description
ORDER By jobs.description DESC SEPARATOR ', ') AS jobs
FROM jobs
JOIN people_jobs On people_jobs.job_id = jobs.id
JOIN people On people.id = people_jobs.person_id
GROUP BY people.id) AS sub
WHERE FIND_IN_SET('Plumber', sub.jobs) > 0;
```
`SqlFiddleDemo`
But it will be slower than `HAVING` approach in [Giorgos Betsos](https://stackoverflow.com/a/33984576/5070879) answer.
|
mySQL GROUP_CONCAT - Query
|
[
"mysql",
"sql",
"database"
] |
I have two tables (`delta` and `aa`) of flight data, and I am trying to create a new table that would be a subset of `delta`. This subset would only contain the rows in delta that share the same `origin_airport_id` and `dest_airport_id` as in `aa`.
`aa` has 89,940 rows and `delta` has 245,052.
I used:
```
CREATE TABLE dl_share
AS
SELECT delta.*
FROM delta,aa
WHERE (aa.origin_airport_id = delta.origin_airport_id
AND aa.dest_airport_id = delta.dest_airport_id)
```
which creates a table with 18,562,876 rows. Why is the size of the table bigger rather than smaller, and how can I do this correctly?
|
Each row in `delta` is returned once for every matching row in `aa`, which is why the join multiplies the row count. Use `WHERE EXISTS` rather than a `JOIN`, so each `delta` row is returned at most once:
```
SELECT *
FROM delta d
WHERE EXISTS (
SELECT 1
FROM aa
WHERE aa.origin_airport_id = d.origin_airport_id
AND aa.dest_airport_id = d.dest_airport_id);
```
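The difference between the join and the semi-join can be shown on a toy dataset. A sketch in Python with SQLite (the tables and a few illustrative columns are assumptions, sized down from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE delta (flight INTEGER, origin_airport_id INTEGER, dest_airport_id INTEGER);
CREATE TABLE aa (flight INTEGER, origin_airport_id INTEGER, dest_airport_id INTEGER);
INSERT INTO delta VALUES (1, 10, 20), (2, 10, 20), (3, 30, 40);
-- Two aa rows share the same route, so a join duplicates matching delta rows.
INSERT INTO aa VALUES (101, 10, 20), (102, 10, 20);
""")

# Implicit join: each matching delta row appears once per aa match.
joined = conn.execute("""
    SELECT delta.* FROM delta, aa
    WHERE aa.origin_airport_id = delta.origin_airport_id
      AND aa.dest_airport_id = delta.dest_airport_id
""").fetchall()
print(len(joined))  # 4

# EXISTS: each delta row appears at most once.
exists = conn.execute("""
    SELECT * FROM delta d
    WHERE EXISTS (SELECT 1 FROM aa
                  WHERE aa.origin_airport_id = d.origin_airport_id
                    AND aa.dest_airport_id = d.dest_airport_id)
""").fetchall()
print(len(exists))  # 2
```

The same multiplication effect, scaled up, explains the 18.5 million rows in the question.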
|
What about something like this?
```
SELECT delta.*
FROM delta
inner join aa on aa.origin_airport_id = delta.origin_airport_id
and aa.dest_airport_id = delta.dest_airport_id
```
|
How to create a subset of a table
|
[
"sql",
"postgresql",
"postgresql-9.3"
] |
I want to replace all the dots **before** the `@` in an email with an empty string in an Oracle query,
like:
```
anurag.mart@hotmail.com >> anuragmart@hotmail.com
```
|
* Instr - To identify the position(`@`)
* Substr - To extract data between start(`1`) and end(`@`) position
* Replace - To replace `.` with `''`
* || - To concatenate two strings
Try this
```
SELECT Replace(Substr('anurag.mart@hotmail.com', 1,
Instr('anurag.mart@hotmail.com', '@', 1)), '.', '')
|| Substr('anurag.mart@hotmail.com', Instr('anurag.mart@hotmail.com','@')+1)
FROM dual
```
**Result:**
```
anuragmart@hotmail.com
```
[**SqlFiddle Demo**](http://sqlfiddle.com/#!4/9eecb7db59d16c/5105)
|
The easiest way is to use **REGEXP\_REPLACE** to identify the pattern and replace it with required pattern.
```
regexp_replace('anurag.mart@hotmail.com', '(\w+)\.(\w+)(@+)', '\1\2\3')
```
For example,
```
SQL> SELECT 'anurag.mart@hotmail.com' email_id,
2 regexp_replace('anurag.mart@hotmail.com', '(\w+)\.(\w+)(@+)', '\1\2\3') new_email_id
3 FROM dual;
EMAIL_ID NEW_EMAIL_ID
----------------------- ----------------------
anurag.mart@hotmail.com anuragmart@hotmail.com
```
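Note that the regex shown handles a single dot before the `@`; addresses with several dots in the local part would keep all but one of them. As a quick cross-check of the intended behavior outside the database, a sketch in Python (the function name is my own):

```python
def strip_local_dots(email):
    # Remove every '.' that appears before the '@' (the local part),
    # leaving the domain untouched.
    local, sep, domain = email.partition('@')
    return local.replace('.', '') + sep + domain

print(strip_local_dots('anurag.mart@hotmail.com'))  # anuragmart@hotmail.com
print(strip_local_dots('a.b.c@example.com'))        # abc@example.com
```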
|
How to replace all the dots before @ in an email with empty string in Oracle SQL?
|
[
"sql",
"oracle"
] |
I am using SQL Server 2008. And I am trying to insert a string into a table but only part of the string is inserted. I have checked whether it is a SQL injection risk. How to solve or avoid this problem?
```
insert into tble (col1, col2, col3)
values (23, 34, "out of 8 works, 5 works are completed");
```
Only `Out of 8 Works` is inserted, `, 5 works are completed` was skipped
|
The double quotes will only work if `QUOTED_IDENTIFIER` is set to OFF. If you are worried about SQL injection, don't build the string into the `INSERT` - parameterize it from the application. The string could also be truncated if `col3` is not declared long enough - check that as well.
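To illustrate the parameterization advice, a sketch in Python with SQLite (the table definition is an assumption); the driver handles quoting, so commas, quotes, and injection attempts in the string are stored verbatim and safely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tble (col1 INTEGER, col2 INTEGER, col3 TEXT)")

# Placeholders instead of string concatenation: the full value survives intact.
msg = "out of 8 works, 5 works are completed"
conn.execute("INSERT INTO tble (col1, col2, col3) VALUES (?, ?, ?)", (23, 34, msg))

row = conn.execute("SELECT col3 FROM tble").fetchone()
print(row[0])  # out of 8 works, 5 works are completed
```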
|
Try this:
```
insert into table(col1,col2,col3) values (23,34,'out of 8 works, 5 works are completed');
```
|
Unable to insert this string into a column. Only part of the string Is inserted
|
[
"sql",
"sql-server",
"sql-server-2008",
"sql-injection",
"code-injection"
] |
The logic is pretty simple: I have first and last name columns, and I wish to concatenate them and search on the concatenated column `fullName`.
My query does not work, since I cannot use `WHERE` on an alias:
```
SELECT CONCAT(profile_firstname, ' ', profile_lastname) AS fullName FROM Profiles WHERE fullName = '%a%';
```
What would be an alternative that achieves a similar result?
|
You can use a `having` clause, for example (*in your specific case*):
```
having fullName = '%a%';
```
|
This may be more what you're looking for......
```
SELECT Sub_Query.*
FROM
(
SELECT CONCAT(profile_firstname, ' ', profile_lastname) AS fullName FROM Profiles
) Sub_Query
WHERE
Sub_Query.fullName = '%a%';
```
But you might want to consider a View which would have fullName and run your queries against that......which is essentially what's going on in this.
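One detail worth flagging: `= '%a%'` compares against that literal string; the wildcard match the question seems to intend needs `LIKE`. The subquery approach, with that fix, sketched in Python with SQLite (sample data is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Profiles (profile_firstname TEXT, profile_lastname TEXT);
INSERT INTO Profiles VALUES ('Anna', 'Smith'), ('Bob', 'Jones');
""")

# The alias is defined in the derived table, so the outer WHERE can use it.
# LIKE (not =) is needed for the '%a%' wildcard to act as a pattern.
rows = conn.execute("""
    SELECT fullName FROM (
        SELECT profile_firstname || ' ' || profile_lastname AS fullName
        FROM Profiles
    ) WHERE fullName LIKE '%a%'
""").fetchall()
print(rows)  # [('Anna Smith',)]
```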
|
Alternative to WHERE on alias
|
[
"mysql",
"sql"
] |
In Oracle, just using the `ORDER BY` does not sort version numbers.
My `Version_Number` field is declared as a `VARCHAR` and I cannot change it.
For Example: The following versions :
```
1.20
1.9
1.18
1.13
1.5
1.11
2.0
1.8
1.3
1.2
1.1
1.0
1.10
```
**should be sorted as**
```
2.0
1.20
1.18
1.13
1.11
1.10
1.9
1.8
1.5
1.3
1.2
1.1
1.0
```
I have researched several posts, but none of them really served my purpose, or the answers were intended for SQL Server rather than Oracle. I came across this particular SQL, which at first seemed to work:
```
select version_number from mytable
order by lpad(version_number, 4) desc;
```
which sorted the versions in this fashion:
```
1.20
1.18
1.13
1.11
1.10
2.0
1.9
1.8
1.5
1.3
1.2
1.1
1.0
```
I believe this SQL statement works for `SQL Server`:
```
select version_number from mytable
order by cast ('/' + replace(version_number , '.', '/') + '/' as hierarchyid) desc;
```
However, this does not work with Oracle. Is there an alternative to hierarchyid in Oracle?
Is there anyone who can come up with a SQL to sort these versions accurately?
I have already seen the posts mentioned below (links attached). So kindly do not tell me that this post is a duplicate.
[SQL sort by version "number", a string of varying length](https://stackoverflow.com/questions/528830/sql-sort-by-version-number-a-string-of-varying-length)
[How Can I Sort A 'Version Number' Column Generically Using a SQL Server Query](https://stackoverflow.com/questions/3474870/how-can-i-sort-a-version-number-column-generically-using-a-sql-server-query)
[Version number sorting in Sql Server](https://stackoverflow.com/questions/7870945/version-number-sorting-in-sql-server)
[mysql sorting of version numbers](https://stackoverflow.com/questions/7508313/mysql-sorting-of-version-numbers)
and many more.
|
This is one way to do it: first order by the part before the `.`, then by the length of the part after the `.` (so `20` sorts above `9`), then by that part itself.
```
select version_number
from mytable
order by substr(version_number, 1, instr(version_number,'.')-1) desc
,length(substr(version_number, instr(version_number,'.')+1)) desc
,substr(version_number, instr(version_number,'.')+1) desc
```
|
This SQL supports your input data plus any included Revision or Build digits.
```
with
inputs as (select '1.20' as version_number from dual union all
select '1.9' as version_number from dual union all
select '1.18' as version_number from dual union all
select '1.13' as version_number from dual union all
select '1.5' as version_number from dual union all
select '1.11' as version_number from dual union all
select '2.0' as version_number from dual union all
select '1.8' as version_number from dual union all
select '1.3' as version_number from dual union all
select '1.2' as version_number from dual union all
select '1.1' as version_number from dual union all
select '1.0' as version_number from dual union all
select '1.10' as version_number from dual union all
select ' 3.1 ' as version_number from dual union all
select '3.1.1000' as version_number from dual union all
select '3.1.1' as version_number from dual union all
select '3.1.100' as version_number from dual union all
select '3.1.2.1000' as version_number from dual union all
select '3.1.2.1' as version_number from dual union all
select '3.1.2.100 ' as version_number from dual)
,versions as (select trim(version_number) as version_number,
nvl(LPAD(trim(regexp_substr(version_number, '[^.]+', 1, 1)),5,'0'),'00000') AS Major,
nvl(LPAD(trim(regexp_substr(version_number, '[^.]+', 1, 2)),5,'0'),'00000') AS Minor,
nvl(LPAD(trim(regexp_substr(version_number, '[^.]+', 1, 3)),5,'0'),'00000') AS Revision,
nvl(LPAD(trim(regexp_substr(version_number, '[^.]+', 1, 4)),5,'0'),'00000') AS Build
from inputs
ORDER BY Major desc, Minor desc, Revision desc, Build desc)
--select * from versions;
select version_number from versions;
```
Remove the -- to see the intermediate result.
For OP, replace "inputs as (select ... from dual)" with:
```
inputs as (select version_number from mytable)
```
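The underlying idea in both answers, comparing each dot-separated component numerically rather than as text, is easy to verify outside the database. A sketch in Python (the helper name is my own):

```python
def version_key(v):
    # '1.20' -> (1, 20): numeric components compare as numbers, not strings,
    # so '1.9' sorts below '1.20'. Shorter tuples (fewer parts) compare fine.
    return tuple(int(part) for part in v.strip().split('.'))

versions = ['1.20', '1.9', '1.18', '1.13', '1.5', '1.11', '2.0',
            '1.8', '1.3', '1.2', '1.1', '1.0', '1.10']
print(sorted(versions, key=version_key, reverse=True))
# ['2.0', '1.20', '1.18', '1.13', '1.11', '1.10', '1.9', '1.8', '1.5', '1.3', '1.2', '1.1', '1.0']
```

This matches the expected ordering in the question exactly.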
|
Oracle SQL to Sort Version Numbers
|
[
"sql",
"oracle",
"sorting",
"rdbms",
"version-sort"
] |
```
SQL SERVER:
Select tblActivity.RoomID, tblActivity.Time , tblRoom.RoomType
from tblActivity
Inner Join tblRoom
On tblRoom.ID = tblActivity.RoomID
Where tblActivity.Time Between 10 And 11
And tblActivity.RoomID is NULL
GROUP BY tblActivity.RoomID , tblRoom.RoomType
```
Sorry, the question's title is pretty vague - I didn't really know how to explain it. Here it goes: I am trying to display the rooms that are not in use between 10 and 11. How can I get this to display?
The question itself - List the rooms that are not used between 10 and 11 in the morning. (5)
The error message: Msg 8120, Level 16, State 1, Line 1
Column 'tblActivity.Time' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
|
For your logic, you would seem to want `not exists`:
```
Select r.*
from tblRoom r
where not exists (select 1
from tblActivity a
where r.ID = a.RoomID and
a.Time >= 10 and
a.Time <= 11
);
```
When using date/times, it is usually better to use explicit comparisons rather than `between`, because the results of `between` can depend on the type of the column. (This is particularly true for `date`/`datetime` comparisons, but I would just get in the habit and not use it for `time` as well.)
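The `NOT EXISTS` pattern can be checked on a toy schema. A sketch in Python with SQLite (the sample rooms and integer `Time` values are assumptions, mirroring the question's comparison against 10 and 11):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblRoom (ID INTEGER, RoomType TEXT);
CREATE TABLE tblActivity (RoomID INTEGER, Time INTEGER);
INSERT INTO tblRoom VALUES (1, 'Lab'), (2, 'Lecture'), (3, 'Office');
INSERT INTO tblActivity VALUES (1, 10), (3, 14);
""")

# Room 1 is busy at 10, room 3 is busy at 14 (outside the window),
# room 2 has no activity at all: rooms 2 and 3 are free between 10 and 11.
rows = conn.execute("""
    SELECT r.ID, r.RoomType
    FROM tblRoom r
    WHERE NOT EXISTS (
        SELECT 1 FROM tblActivity a
        WHERE a.RoomID = r.ID AND a.Time >= 10 AND a.Time <= 11)
    ORDER BY r.ID
""").fetchall()
print(rows)  # [(2, 'Lecture'), (3, 'Office')]
```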
|
`Where tblActivity.Time Between 10 And 11` is identical to writing :
```
Where tblActivity.Time >= 10 And tblActivity.Time <= 11
```
Note that `BETWEEN` in SQL Server is *inclusive*, meaning the end (11) is included in the result, which is widely considered bad practice for ranges. Be aware of this behavior, as it can cause surprises when porting between database engines.
*Why are inclusive ranges a bad practice?*
When you have to partition results, you will prefer to have an algorithm to work with exclusive ranges like `X >= 0 and X < 10` then `X >= 10 and X < 20` instead of inclusive `X >= 0 and X <= 9` then `X >= 10 and X <= 19`.
The inclusive range means you have to know the smallest granular unit of your data to decide what "just below 10" is. That is easy with simple types, but can become a nightmare with a complex schema: the smallest unit of an integer is 1, but what is the smallest unit of a date, or a float?
|
How to properly use the Between statement
|
[
"sql",
"sql-server"
] |
I need to create a view in `Oracle 11g` that would take these tables:
**employees**
```
FirstName | LastName | EmployeeID
-----------------------------------
joe | shmo | 1
bob | moll | 2
```
**salesData**
```
Employee ID | commission on sale
----------------------------------
1 | $20
1 | $30
2 | $50
2 | $60
```
and then sum up the total commission each employee earned and return the employee who earned the most commission.
So using the sample data the view will contain the `employee id :: 2` or `bob moll`.
|
This should get you what you need:
```
CREATE VIEW someviewname AS
SELECT employees.EmployeeID, SUM(commission) AS total_commission
FROM employees
LEFT OUTER JOIN salesData ON salesData.EmployeeID = employees.EmployeeID
GROUP BY employees.EmployeeID
ORDER BY total_commission DESC
```
|
```
SELECT employeeID
FROM
(SELECT employeeID,
SUM(commission)
FROM sales
GROUP BY employeeID
      ORDER BY SUM(commission) DESC
)
WHERE rownum = 1
```
Not sure why you want a view of that, but hopefully, you can figure that out.
|
How to create a oracle view of the max sum of a sum of values of a column based on the values of another
|
[
"sql",
"oracle",
"view",
"sum"
] |
I tried these queries in our application, and each returned a different result set.
Query Set 1
```
SELECT *
FROM TABLE1 T1
LEFT OUTER JOIN TABLE2 T2 ON (T1.ID = T2.ID
AND T1.STATUS = 'A'
AND T2.STATUS = 'A')
INNER JOIN TABLE3 T3 ON (T2.ID = T3.ID)
WHERE T3.STATUS = 'A'
```
Query Set 2
```
SELECT *
FROM TABLE1 T1
LEFT OUTER JOIN TABLE2 T2 ON (T1.ID = T2.ID
AND T2.STATUS = 'A')
INNER JOIN TABLE3 T3 ON (T2.ID = T3.ID)
WHERE T3.STATUS = 'A'
AND T1.STATUS = 'A'
```
I couldn't work out why the queries return different outputs. Please also advise on the best approach when combining multiple joins (`LEFT`, `RIGHT`, `INNER`) with filtering clauses.
Thanks for any help
|
On the first query:
```
SELECT *
FROM TABLE1 T1
LEFT OUTER JOIN TABLE2 T2 ON (T1.ID = T2.ID
AND T1.STATUS = 'A'
AND T2.STATUS = 'A')
INNER JOIN TABLE3 T3 ON (T2.ID = T3.ID)
WHERE T3.STATUS = 'A'
```
`AND T1.STATUS = 'A'` has no filtering effect on T1 here.
It is a left join - you are going to get all of T1, period.
Only when you move `AND T1.STATUS = 'A'` to the `WHERE` clause is it applied as a filter.
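The ON-versus-WHERE distinction for a left join's *left* table can be demonstrated directly. A sketch in Python with SQLite (a two-table simplification of the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id INTEGER, status TEXT);
CREATE TABLE t2 (id INTEGER, status TEXT);
INSERT INTO t1 VALUES (1, 'A'), (2, 'B');
INSERT INTO t2 VALUES (1, 'A'), (2, 'A');
""")

# Condition in ON: every t1 row survives; rows failing the condition
# simply get NULLs for the t2 columns.
on_filter = conn.execute("""
    SELECT t1.id, t2.id FROM t1
    LEFT JOIN t2 ON t1.id = t2.id AND t1.status = 'A'
    ORDER BY t1.id
""").fetchall()
print(on_filter)   # [(1, 1), (2, None)]

# Condition in WHERE: applied after the join, so t1 rows with
# status <> 'A' are removed entirely.
where_filter = conn.execute("""
    SELECT t1.id, t2.id FROM t1
    LEFT JOIN t2 ON t1.id = t2.id
    WHERE t1.status = 'A'
    ORDER BY t1.id
""").fetchall()
print(where_filter)  # [(1, 1)]
```

In the question's queries, the later `INNER JOIN` on T2 then discards the NULL-padded rows, which is where the result sets diverge.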
|
You are putting a filter on the right table of a left join, effectively creating an inner join. Your first query will return fewer results, whilst your second will have NULLs against non-matching rows from the right table. This link helped me to understand joins much better, as I really struggled for a long time to grasp the concept: [HERE](http://www.tutorialspoint.com/sql/sql-using-joins.htm)
If my answer is not clear please ask me for a revision.
|
Data Difference Between Where Clause and AND Clause
|
[
"sql",
"sql-server",
"t-sql"
] |
How can I select only the distinct user IDs having ONLY ACCESS\_COLUMN\_ID value 1, excluding users that also have ACCESS\_COLUMN\_ID value 2?
Here is my query which returns 1 and 2:
```
SELECT DISTINCT(USER_ID) FROM USER_ACCESS WHERE ACCESS_COLUMN_ID = 1
```
The result returned contains userid who have `ACCESS_COLUMN_ID = 2` as well.
```
Here is my table data
USERID ACCESS_COLUMN_ID
1 1
1 2
2 1
```
I am expecting only USERID 2 as my query result.
|
You could use `HAVING`:
```
SELECT USER_ID
FROM USER_ACCESS
GROUP BY USER_ID
HAVING MIN(ACCESS_COLUMN_ID) = 1
AND MAX(ACCESS_COLUMN_ID) = 1
```
This query will get all `user_id`, but only unique ones because of the `group by` clause. Then it will take the minimum and maximum `access_column_id` it finds for each of them, and if these two values are both 1, then the `user_id` is retained in the final result set.
The above will have good performance, as it references the table only once.
For your interest, there are several other ways to get the same result. However they all need the table to be referenced twice. You might want to compare their readability and performance yourself:
**NOT EXISTS**
```
SELECT DISTINCT USER_ID
FROM USER_ACCESS UA1
WHERE UA1.ACCESS_COLUMN_ID = 1
AND NOT EXISTS (
SELECT 1
FROM USER_ACCESS UA2
WHERE UA1.USER_ID = UA2.USER_ID
AND UA2.ACCESS_COLUMN_ID <> 1)
```
**NOT IN**
This is very similar to the previous one, but in my experience has not as good performance:
```
SELECT DISTINCT USER_ID
FROM USER_ACCESS
WHERE ACCESS_COLUMN_ID = 1
AND USER_ID NOT IN (
SELECT USER_ID
FROM USER_ACCESS
WHERE ACCESS_COLUMN_ID <> 1)
```
**Outer Self-Join**
This often has better performance than the previous two solutions:
```
SELECT DISTINCT USER_ID
FROM USER_ACCESS UA1
LEFT JOIN USER_ACCESS UA2
ON UA1.USER_ID = UA2.USER_ID
AND UA2.ACCESS_COLUMN_ID <> 1
WHERE UA1.ACCESS_COLUMN_ID = 1
AND UA2.USER_ID IS NULL
```
The last `NULL` condition checks that the outer join did not yield any match (with `ACCESS_COLUMN_ID <> 1`).
**EXCEPT**
This uses `EXCEPT`, which SQL Server supports and which is easy to understand (Oracle has the similar `MINUS`):
```
SELECT DISTINCT USER_ID
FROM USER_ACCESS
WHERE ACCESS_COLUMN_ID = 1
EXCEPT
SELECT USER_ID
FROM USER_ACCESS
WHERE ACCESS_COLUMN_ID <> 1
```
**Remark on DISTINCT**
The `DISTINCT` keyword is easy to understand, but one might often get better performance by using a `GROUP BY` clause instead. This can be applied to all solutions mentioned above.
If it is certain that there cannot be two records with the same values for `USER_ID` and `ACCESS_COLUMN_ID` then the `DISTINCT` keyword can be left out in the above queries.
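The main `HAVING MIN/MAX` solution can be verified against the question's sample data. A sketch in Python with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE USER_ACCESS (USER_ID INTEGER, ACCESS_COLUMN_ID INTEGER);
INSERT INTO USER_ACCESS VALUES (1, 1), (1, 2), (2, 1);
""")

# User 1 has IDs {1, 2} so MAX <> 1; user 2 has only {1}, so both
# MIN and MAX equal 1 and the group survives.
rows = conn.execute("""
    SELECT USER_ID
    FROM USER_ACCESS
    GROUP BY USER_ID
    HAVING MIN(ACCESS_COLUMN_ID) = 1
       AND MAX(ACCESS_COLUMN_ID) = 1
""").fetchall()
print(rows)  # [(2,)]
```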
|
You can use `NOT IN` to filter out the ones with `ACCESS_COLUMN_ID = 2`
```
SELECT DISTINCT(USER_ID) FROM USER_ACCESS WHERE ACCESS_COLUMN_ID = 1 and
USER_ID NOT IN (
   SELECT DISTINCT(USER_ID) FROM USER_ACCESS WHERE ACCESS_COLUMN_ID = 2
)
```
|
Only select unique record for a condition in sql
|
[
"sql",
"sql-server",
"distinct"
] |
I am trying to disable a table's constraint with an ALTER statement using BCP commands. My code is as follows:
```
SET @disableConstraints = 'SQLCMD -E -S server-Name -d '+@databaseName+' -Q "EXEC sp_MSforeachtable @command1 = ALTER TABLE '+@schemaName+'.'+@tableName+' NOCHECK CONSTRAINT ALL;" >> Z:\Log\ErrorLog.txt'
```
However when I execute the `xp_cmdshell` command, the `ErrorLog.txt` will print an error saying:
> Msg 156, Level 15, State 1, Server server-Name, Line 1
> Incorrect syntax near the keyword 'ALTER'.
I've been fiddling around with single quotes and double quotes, but I am not getting anywhere.
Can someone help spot the syntax error?
|
Two single quotes were needed before `ALTER` and after `ALL`, to embed the command as a quoted string inside the outer string literal:
```
SET @disableConstraints = 'SQLCMD -E -S server-Name -d '+@databaseName+' -Q "EXEC sp_MSforeachtable @command1 = ''ALTER TABLE '+@schemaName+'.'+@tableName+' NOCHECK CONSTRAINT ALL'';" >> Z:\Log\ErrorLog.txt'
```
|
You are missing the double quote before `ALTER`:
```
SET @disableConstraints = 'SQLCMD -E -S server-Name -d ' + @databaseName + ' -Q EXEC sp_MSforeachtable @command1 = "ALTER TABLE ' + @schemaName + '.' + @tableName + ' NOCHECK CONSTRAINT ALL;" >> Z:\Log\ErrorLog.txt'
```
|
Incorrect syntax near ALTER NOCHECK CONSTRAINT - SQL Server 2008
|
[
"sql",
"sql-server-2008",
"bcp",
"alter"
] |
I'm trying to write one of my first SQL queries, but it seems like the syntax is wrong. I've gone over it for the past two hours trying to change things back and forth, but it still seems to be wrong somewhere on line 3. Could someone help me?
```
SELECT prodotti.ID_Prodotto, prodotti.ID_Ditta,
prodotti.Nome, prodotti.Descrizione, prodotti.ID_Tipologia,
prodotti.Immagine, cp.Nome, valori.Valore
FROM prodotti, campi tipologia as cp, valori
WHERE prodotti.ID_Ditta = '1'
and cp.ID_tipologia = prodotti.ID_tipologia
and (valori.ID_prodotto = prodotti.ID_prodotto
and valori.ID_Campo_Tipologia = cp.ID_campo);
```
This is the error message I get:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your
MySQL server version for the right syntax to use near 'as cp, valori WHERE prodotti.ID_Ditta = '1'
and cp.ID_tipologia = prodotti.ID_ti' at line 3
```
|
You need to back-tick the table name if it really is a table name with a space in it.
Otherwise, if it is yet another table, you need a comma.
```
SELECT prodotti.ID_Prodotto, prodotti.ID_Ditta, prodotti.Nome, prodotti.Descrizione,
prodotti.ID_Tipologia, prodotti.Immagine, cp.Nome, valori.Valore
FROM prodotti, `campi tipologia` as cp, valori
WHERE prodotti.ID_Ditta = '1'
and cp.ID_tipologia = prodotti.ID_tipologia
and (valori.ID_prodotto= prodotti.ID_prodotto and valori.ID_Campo_Tipologia = cp.ID_campo);
```
Yes, see below, I guess it is possible to have a table name with a space. Never tried it before to be honest
```
create table `a b`
(
id int
);
```
|
Your SQL has a typo at:
```
campi tipologia AS cp,
```
remove the space between the two names;
|
Wrong syntax in SQL?
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to extract some data where the Event column is distinct, but I specifically want only rows that contain the highest number in the Value column. I'm thinking it's some mish-mash of DISTINCT and GROUP BY, but my SQL knowledge is limited at the moment. Any help would be great.
[](https://i.stack.imgur.com/w8LSt.png)
|
Try this out:
```
SELECT [EVENT]
,MAX([Value])
FROM [MyTable]
GROUP BY [EVENT]
```
|
The easiest way to do this is using either a subquery or `row_number()`:
```
select event, value
from (select t.*, row_number() over (partition by event order by value desc) as seqnum
from mytable t
) t
where seqnum = 1;
```
or:
```
select t.*
from mytable t
where t.value = (select max(t2.value) from mytable t2 where t2.event = t.event);
```
You should be careful about naming columns. `event` and `value` might be reserved words in some databases.
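The correlated-subquery variant can be sketched on made-up sample data with SQLite via Python:

```python
import sqlite3

# Made-up sample data: keep only the max-value row per event.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (event TEXT, value INT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [("A", 10), ("A", 30), ("B", 5), ("B", 7)])

# The correlated-subquery variant (works even without window functions).
rows = con.execute("""
    SELECT t.event, t.value
    FROM mytable t
    WHERE t.value = (SELECT MAX(t2.value) FROM mytable t2
                     WHERE t2.event = t.event)
    ORDER BY t.event
""").fetchall()
print(rows)  # [('A', 30), ('B', 7)]
```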
|
SQL selecting specific DISTINCT rows
|
[
"",
"sql",
""
] |
I have two tables and I need to compare the data and update one table's records. Please let me know how this can be done; I am trying to avoid MERGE. This is the scenario:
**Proj1 Table**
This is the first table where data needs to be synchronized
```
ID Text
1 R1
2 R2
3 R3
```
**Proj2 Table**
This is the table where data updates are taking place
```
ID Text Active
3 R1 1
4 R3 1
5 R4 1
```
After a compare is done on Text field between both these tables result should be similar to below. We are syncing data in Proj2 to similar to Proj1.
```
ID Text Active
3 R1 1 (Ignore as it exists in both tables)
4 R3 1 (Ignore as it exists in both tables)
5 R4 0 (Update to inactive as it does not exist Proj1 table)
6 R2 1 (Insert as it does not exist in Proj2 table)
```
|
If you really can't use `MERGE`, you can simply split it into an update and an insert query :
```
INSERT INTO @Proj2(Text, Active)
SELECT Text,1 FROM @Proj1 p1
WHERE NOT EXISTS(
SELECT *
FROM @Proj2 p2
WHERE p2.Text = p1.Text
);
UPDATE
p2
SET
p2.Active = CASE WHEN p1.id is null THEN 0 ELSE 1 END
FROM
@Proj2 p2
LEFT JOIN
@Proj1 p1
ON
p2.Text = p1.Text;
```
This assumes that your ID is an auto-increment.
This is pretty much like Zak's new answer, but with the two update queries merged.
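Here is the two-statement sync run end-to-end against the question's sample rows, sketched in SQLite via Python. Older SQLite has no `UPDATE ... FROM`, so the update uses a correlated `EXISTS` instead of the `LEFT JOIN`; the column name `txt` stands in for `Text`:

```python
import sqlite3

# The question's Proj1/Proj2 rows; 'txt' stands in for the 'Text' column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Proj1 (id INTEGER PRIMARY KEY, txt TEXT)")
con.execute("CREATE TABLE Proj2 (id INTEGER PRIMARY KEY, txt TEXT, active INT)")
con.executemany("INSERT INTO Proj1 (txt) VALUES (?)", [("R1",), ("R2",), ("R3",)])
con.executemany("INSERT INTO Proj2 VALUES (?, ?, ?)",
                [(3, "R1", 1), (4, "R3", 1), (5, "R4", 1)])

# Step 1: insert rows present in Proj1 but missing from Proj2.
con.execute("""
    INSERT INTO Proj2 (txt, active)
    SELECT txt, 1 FROM Proj1 p1
    WHERE NOT EXISTS (SELECT 1 FROM Proj2 p2 WHERE p2.txt = p1.txt)
""")
# Step 2: flag a row active only if it still exists in Proj1
# (correlated EXISTS replaces the T-SQL UPDATE ... FROM ... LEFT JOIN).
con.execute("""
    UPDATE Proj2
    SET active = CASE WHEN EXISTS (SELECT 1 FROM Proj1 p1
                                   WHERE p1.txt = Proj2.txt)
                      THEN 1 ELSE 0 END
""")
final = con.execute("SELECT id, txt, active FROM Proj2 ORDER BY id").fetchall()
print(final)  # [(3, 'R1', 1), (4, 'R3', 1), (5, 'R4', 0), (6, 'R2', 1)]
```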
|
```
merge [Proj2Table] target
using ( select [id], [text] from [Proj1Table] ) source ([id], [text] )
on target.[id] = source.[id]
when not matched by source
then update
set target.[Active] = 0
when not matched by target
then insert
([id], [text] )
values( source.[id], source.[text] )
```
|
Compare data between two tables and update one table data
|
[
"",
"sql",
"sql-server",
""
] |
I have two different tables: one called XY and the other called XZ
In each table I have a column named "id", and I want to compare those columns to see which of the "id" values in table XY are also available in XZ and which aren't.
|
To get the id's that are in XY and not in XZ (The inner query gives the id's present in both XY and XZ)
```
SELECT XY.`id`
FROM XY
WHERE `id` NOT IN(
SELECT XY.`id`
FROM XY
JOIN XZ ON XY.`id`=XZ.`id`);
```
|
It's a simple inner join:
```
select * from XY
inner join XZ on (XY.id = XZ.id);
```
|
mysql compare columns
|
[
"",
"mysql",
"sql",
"database",
""
] |
Please help me debug the error in the stored procedure below.
Whenever I try to execute the SP, I get this error:
(142 row(s) affected)
Msg 2714, Level 16, State 6, Line 16
There is already an object named 'AOP\_Monthly\_GrowthRate\_Acquisition' in the database.
```
if exists (select * from INFORMATION_SCHEMA.TABLES where TABLE_NAME =
'AOP_Monthly_GrowthRate_Acquisition' AND TABLE_SCHEMA = 'dbo')
drop table BudgetWorld_temp.dbo.AOP_Monthly_GrowthRate_Acquisition
GO
SELECT dbo.LTM_ACQUISITION_MONTHLY.Year,
dbo.LTM_ACQUISITION_MONTHLY.Month,
dbo.LTM_ACQUISITION_MONTHLY.SALES_MANAGER_CODE,
dbo.LTM_ACQUISITION_MONTHLY.SALES_GROUP,
dbo.LTM_ACQUISITION_MONTHLY.NetProductSales,
dbo.LTM_ACQUISITION_MONTHLY.AverageCostBasedProfit,
dbo.LTM_ACQUISITION_MONTHLY.ReplacementCostBasedProfit,
isnull(dbo.[Provided 2016 AOP Values_Acquisition].[AOP Sales],0) as 'AOP Sales',
isnull(dbo.[Provided 2016 AOP Values_Acquisition].[AOP Profit Based On Average Cost],0) as 'AOP Profit Based On Average Cost'
INTO #TEMP3
FROM dbo.LTM_ACQUISITION_MONTHLY LEFT OUTER JOIN
     dbo.[Provided 2016 AOP Values_Acquisition] ON
     dbo.LTM_ACQUISITION_MONTHLY.Month = dbo.[Provided 2016 AOP Values_Acquisition].Month AND
     dbo.LTM_ACQUISITION_MONTHLY.SALES_MANAGER_CODE = dbo.[Provided 2016 AOP Values_Acquisition].[Sales Manager] AND
     dbo.LTM_ACQUISITION_MONTHLY.SALES_GROUP = dbo.[Provided 2016 AOP Values_Acquisition].[Sales Group]
SELECT *,
SALES_GROWTH = CASE
WHEN ([NetProductSales] + [AOP Sales]) = 0 THEN 0
WHEN ([AOP Sales] ) > 0 AND ([NetProductSales]) = 0 THEN 1
ELSE
(([AOP Sales] -[NetProductSales])/[NetProductSales]) END
,
Margin_Growth =
CASE
WHEN ([AverageCostBasedProfit] + [AOP Profit Based On Average Cost]) = 0
THEN 0
WHEN ([AOP Profit Based On Average Cost] ) > 0 AND
([AverageCostBasedProfit]) = 0 THEN 1
else
(([AOP Profit Based On Average Cost] -
[AverageCostBasedProfit])/[AverageCostBasedProfit]) END
INTO AOP_Monthly_GrowthRate_Acquisition
FROM #TEMP3
DROP TABLE #TEMP3
```
Thanks a ton
|
```
USE BudgetWorld_temp
GO
IF OBJECT_ID('dbo.AOP_Monthly_GrowthRate_Acquisition', 'U') IS NOT NULL
DROP TABLE dbo.AOP_Monthly_GrowthRate_Acquisition
GO
IF OBJECT_ID('tempdb.dbo.#TEMP3') IS NOT NULL
DROP TABLE #TEMP3
GO
SELECT
a.[Year],
a.[Month],
a.SALES_MANAGER_CODE,
a.SALES_GROUP,
a.NetProductSales,
a.AverageCostBasedProfit,
a.ReplacementCostBasedProfit,
ISNULL(b.[AOP Sales], 0) AS [AOP Sales],
ISNULL(b.[AOP Profit Based On Average Cost], 0) AS [AOP Profit Based On Average Cost]
INTO #TEMP3
FROM dbo.LTM_ACQUISITION_MONTHLY a
LEFT JOIN dbo.[Provided 2016 AOP Values_Acquisition] b ON a.[Month] = b.[Month]
AND a.SALES_MANAGER_CODE = b.[Sales Manager]
AND a.SALES_GROUP = b.[Sales Group]
SELECT *
, SALES_GROWTH =
CASE
WHEN [NetProductSales] + [AOP Sales] = 0 THEN 0
WHEN [AOP Sales] > 0 AND [NetProductSales] = 0 THEN 1
ELSE (([AOP Sales] -[NetProductSales])/[NetProductSales])
END
, Margin_Growth =
CASE
WHEN ([AverageCostBasedProfit] + [AOP Profit Based On Average Cost]) = 0 THEN 0
WHEN ([AOP Profit Based On Average Cost] ) > 0 AND ([AverageCostBasedProfit]) = 0 THEN 1
ELSE (([AOP Profit Based On Average Cost] - [AverageCostBasedProfit])/[AverageCostBasedProfit])
END
INTO dbo.AOP_Monthly_GrowthRate_Acquisition
FROM #TEMP3
IF OBJECT_ID('tempdb.dbo.#TEMP3') IS NOT NULL
DROP TABLE #TEMP3
```
|
SELECT
FROM
INTO
Creates a table, which it cannot do if it already exits (perhaps from a previous execution).
You could swap to a temp table (prefix table name with #). Or you could use INSERT INTO instead. With the second method you may need to clear the table before populating.
|
Error In Stored Procedure with error message as There is already an object named
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like this:
```
product_id model date computers
----------------------------------------------------------------
12204 Asus 'random_date' :::aass:::ddff:::ddfd:::dfwr:::
12205 Acer 'random_date' :::ersz:::dfwqq:::bbs:::
12205 Acer 'random_date' :::ettww:::iwoeur:::iwerq:::
```
As you can see, product\_id and model can be duplicated. The computers field lists the computers in the current row, with ::: as the separator. After I group them (GROUP BY product\_id), the first product (12204) has 4 computers in it, and the other product (12205) has 6 computers in it. So the query result must be like this:
```
GROUP(product_id) model count
-----------------------------------------
12204 Asus 4
12205 Acer 6
```
Is it possible to do it with a single query? Or must I do it in back-end code (in my case Python)?
(I can't modify the table; I'm working on someone else's pre-existing table.)
**Edit:** updated sample data format
|
Using a combination of `SUM`, `ROUND`, `LENGTH`, `REPLACE` & `COUNT` for the count field, and then group by the product\_id and model.
```
SELECT product_id, model,
SUM(ROUND ((LENGTH(computers) - LENGTH(REPLACE(computers, ":::", ""))) / LENGTH(":::"))) - count(product_id) AS count
FROM yourtable
GROUP BY product_id, model
```
Output
```
product_id model count
12204 Asus 4
12205 Acer 6
```
SQL Fiddle:<http://sqlfiddle.com/#!9/ad183/2/0>
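The counting trick here: the number of `:::` separators in a string is `(LENGTH(s) - LENGTH(REPLACE(s, ':::', ''))) / LENGTH(':::')`, and each row has one more separator than items (leading plus trailing), so subtracting `COUNT(*)` corrects for it. A minimal check in SQLite via Python, using the question's sample rows:

```python
import sqlite3

# Each row has one more ':::' separator than items (leading + trailing),
# so SUM(separators) - COUNT(rows) gives the total item count.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (product_id INT, model TEXT, computers TEXT)")
con.executemany("INSERT INTO products VALUES (?, ?, ?)",
    [(12204, 'Asus', ':::aass:::ddff:::ddfd:::dfwr:::'),
     (12205, 'Acer', ':::ersz:::dfwqq:::bbs:::'),
     (12205, 'Acer', ':::ettww:::iwoeur:::iwerq:::')])

rows = con.execute("""
    SELECT product_id, model,
           SUM((LENGTH(computers) - LENGTH(REPLACE(computers, ':::', ''))) / 3)
             - COUNT(*) AS cnt
    FROM products
    GROUP BY product_id, model
    ORDER BY product_id
""").fetchall()
print(rows)  # [(12204, 'Asus', 4), (12205, 'Acer', 6)]
```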
|
Try the following query:
```
SELECT product_id, model,
SUM(1 + ((LENGTH(computers) - LENGTH(REPLACE(computers, ':::', ''))) / 3)) AS count
FROM products
GROUP BY product_id, model
```
Click the link below for a running demo:
[**SQLFiddle**](http://sqlfiddle.com/#!9/eba29/4)
|
Mysql query explode and count
|
[
"",
"mysql",
"sql",
"split",
"explode",
""
] |
I was able to join 2 SQL tables use the following query:
```
SELECT *
FROM Table1, Table2 with (nolock)
WHERE Table1.field1 = Table2.field2
```
Then I tried to join 3 SQL tables like below:
```
SELECT *
FROM Table1, Table2, Table3 with (nolock)
WHERE Table1.field1 = Table2.field2, Table1.field2 = Table3.field3
```
But it didn't work. Did I miss anything here? Or how do I join 3 tables properly?
Thanks!
|
If you use the **proper** ANSI JOIN syntax, you won't have any of those issues:
```
SELECT *
FROM
Table1
INNER JOIN
Table2 ON Table1.field1 = Table2.field2
INNER JOIN
Table3 ON Table1.field2 = Table3.field3
```
|
You are joining tables in the old style, and for multiple conditions you have to use `and` instead of `,`.
Better, use an explicit inner join, like this:
```
SELECT *
FROM Table1
inner join Table2 on Table1.field1 = Table2.field2
inner join Table3 on Table1.field2 = Table3.field3
```
|
SQL Server: join three tables
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I have 2 Tables A & B
**A**
```
select TranType, DocType, Document FROM
db1.dbo.A
TRANTYPE | DOCTYPE | DOCUMENT
Coup | Stat | 1
Coup | Stat | 2
Coup | Stat | 3
Coup | Stat | ...
Coup | Stat | 100
Swp | Corr | 1
Swp | Corr | 2
Swp | Corr | ...
Swp | Corr | 5
```
**B** ...
**A:**
```
SELECT TranType, DocType, COUNT (*) AS Docs
FROM db1.dbo.A
GROUP BY TranType, DocType
TRANTYPE | DOCTYPE | DOCS
Coup | Stat | 100
Swp | Corr | 5
```
**B:**
```
SELECT RecType, SubType, COUNT (*) AS Docs
FROM db2.dbo.B
GROUP BY RecType, SubType
RECTYPE | SUBTYPE | DOCS
Coup | Stat | 50
Cr | Cr | 3
Swp | Cr | 10
```
I managed to get the intersection of the 2 Tables and Add up the Count result:
**A ∩ B**
```
SELECT a.TranType, a.DocType,
(SELECT COUNT(*) FROM db2.dbo.B b WHERE b.RecType= a.TranType AND b.SubType = a.DocType) +
COUNT(a.TranType) AS SUM
FROM db1.dbo.A a
GROUP BY a.TranType, a.DocType
ORDER BY TranType
TRANTYPE | DOCTYPE | SUM
Coup | Stat | 150
```
Anyone got an idea how to get all the entries (Union)?
```
TRANTYPE | DOCTYPE | DOCS
Coup | Stat | 150 <----
Swp | Corr | 5
Cr | Cr | 3
Swp | Cr | 10
```
SOLUTION:
I had different collation on my tables, resulting in an error.
Find the solution with proper collation handling here: [SQL Sample Fiddle](http://sqlfiddle.com/#!6/44b5f/7)
|
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/1c80d/3)**
`UNION` is easy. You just need matching column names.
Use `UNION ALL` to keep duplicates.
```
SELECT TranType, DocType, SUM (Docs)
FROM (
SELECT TranType, DocType, COUNT (*) AS Docs
FROM TableA
GROUP BY TranType, DocType
UNION ALL
SELECT RecType as TranType, SubType as DocType, COUNT (*) AS Docs
FROM TableB
GROUP BY RecType, SubType
) T
GROUP BY TranType, DocType
ORDER BY TranType, DocType;
```
**OUTPUT**
```
| TranType | DocType | |
|----------|---------|-----|
| Coup | Stat | 150 |
| Cr | Cr | 3 |
| Swp | Corr | 5 |
| Swp | Cr | 10 |
```
**EDIT**
Your first query should be written like this to avoid performing a subquery for each row.
```
SELECT TranType, DocType, SUM (Docs)
FROM (
SELECT TranType, DocType, COUNT (*) AS Docs
FROM TableA
GROUP BY TranType, DocType
UNION ALL
SELECT RecType as TranType, SubType as DocType, COUNT (*) AS Docs
FROM TableB
GROUP BY RecType, SubType
) T
GROUP BY TranType, DocType
HAVING COUNT(*) > 1;
```
|
I would suggest using subqueries and then aggregating:
```
select TranType, DocType, sum(docs) as docs
from ((select TranType, DocType, count(*) as docs
from db1.dbo.A a
group by TranType, DocType
) union all
(select RecType, SubType, count(*) as docs
from db1.dbo.B
group by RecType, SubType
)
) ab
group by TranType, DocType;
```
You can also use a `full outer join`, but then you have to deal with `NULL` values in the final result.
Note: this will return all pairs, even those in only one table.
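A quick sanity check of the UNION ALL approach on made-up mini tables, using SQLite via Python:

```python
import sqlite3

# Mini versions of tables A and B, then UNION ALL the per-table counts
# and aggregate once more so overlapping pairs are added together.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A (TranType TEXT, DocType TEXT)")
con.execute("CREATE TABLE B (RecType TEXT, SubType TEXT)")
con.executemany("INSERT INTO A VALUES (?, ?)",
                [("Coup", "Stat")] * 3 + [("Swp", "Corr")] * 2)
con.executemany("INSERT INTO B VALUES (?, ?)",
                [("Coup", "Stat")] * 2 + [("Cr", "Cr")])

rows = con.execute("""
    SELECT TranType, DocType, SUM(Docs)
    FROM (
        SELECT TranType, DocType, COUNT(*) AS Docs
        FROM A GROUP BY TranType, DocType
        UNION ALL
        SELECT RecType, SubType, COUNT(*)
        FROM B GROUP BY RecType, SubType
    ) t
    GROUP BY TranType, DocType
    ORDER BY TranType, DocType
""").fetchall()
print(rows)  # [('Coup', 'Stat', 5), ('Cr', 'Cr', 1), ('Swp', 'Corr', 2)]
```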
|
Join two Tables and add Count result
|
[
"",
"sql",
"select",
"join",
""
] |
I am doing an SQL assignment in which I need to create a few tables. The assignment requires:
> Make sure that you define NOT NULL constraints for the PK of each table
I don't get this. When we are defining a key as Primary Key, why should we write `NOT NULL` separately with it? Doesn't defining a key as `PRIMARY KEY` itself mean not null and unique?
Please explain!
**Edit** (copied from below):
```
CREATE TABLE Faculty(
FacNo char(11) not null,
FacFirstName varchar(30) not null,
FacLastName varchar(30) not null,
FacCity varchar(30) not null,
FacState char(2) not null,
FacZipCode char(10) not null,
FacRank char(4),
FacHireDate date,
FacSalary decimal(10,2),
FacSupervisor char(11),
FacDept char(6),
CONSTRAINT FacultyPK PRIMARY KEY (FacNo));
```
Is this correct? The `FacNo` column is NOT NULL, plus it's also the primary key.
|
<http://www.techonthenet.com/oracle/primary_keys.php>
> In Oracle, a primary key is a single field or combination of fields
> that uniquely defines a record. None of the fields that are part of
> the primary key can contain a null value. A table can have only one
> primary key.
when you set a `PK` on a table, the column will be set to `NOT NULL` even if you specify it as nullable
```
-- Create table
create table t_test_pk(
col1 varchar2(5) null
);
SQL> desc t_test_pk
Name Type Nullable Default Comments
---- ----------- -------- ------- --------
COL1 VARCHAR2(5) Y
```
so... the column is nullable
then we set PK for the table:
```
SQL> alter table t_test_pk add constraint pk_1 primary key (COL1);
Table altered
```
and try to insert null into it:
```
SQL> insert into t_test_pk values (null);
insert into t_test_pk values (null)
ORA-01400: cannot insert NULL into ("T_TEST_PK"."COL1")
```
Something changed! We cannot insert null into the column because it is used in the PK. Checking in SQL*Plus, the column is no longer nullable:
```
SQL> desc t_test_pk;
Name Type Nullable Default Comments
---- ----------- -------- ------- --------
COL1 VARCHAR2(5)
```
OK... try to set it to nullable
```
SQL> alter table t_test_pk modify col1 null;
alter table t_test_pk modify col1 null
ORA-01451: column to be modified to NULL cannot be modified to NULL
```
|
I imagine the reason your instructor asks for this is to make sure you write DDL that shows your intent clearly. Trusting auto conversion of null to not null does not help readability of DDL so I'd request all DDL to be written such that the create table statement shows the intended nullability of all columns.
|
Primary key not null
|
[
"",
"sql",
"oracle",
""
] |
I do not know how to select the id column along with the max.
```
Select id,MAX(salary),Min(Salary)
from C
GROUP BY id;
```
It gives me every id with its maximum,
but I want just the id with the maximum (and minimum) salary!
|
```
Select id,
salary
from C
where salary = (select MAX(salary)
from C)
```
|
Several options for you that only require a single scan of the table:
[SQL Fiddle](http://sqlfiddle.com/#!4/f8541/9)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE C ( ID, SALARY ) AS
SELECT 1, 100 FROM DUAL
UNION ALL SELECT 2, 110 FROM DUAL
UNION ALL SELECT 3, 100 FROM DUAL
UNION ALL SELECT 4, 110 FROM DUAL
UNION ALL SELECT 5, 90 FROM DUAL
```
**Query 1 - Get a single ID**:
```
SELECT *
FROM (
SELECT ID, SALARY
FROM c
ORDER BY SALARY DESC
)
WHERE ROWNUM = 1
```
**[Results](http://sqlfiddle.com/#!4/f8541/9/0)**:
```
| ID | SALARY |
|----|--------|
| 2 | 110 |
```
**Query 2 - Get a single ID (alternate method that will get min and max IDs)**:
```
SELECT MAX( ID ) KEEP ( DENSE_RANK LAST ORDER BY SALARY ) AS MAX_SALARY_ID,
MAX( SALARY ) AS MAX_SALARY,
MIN( ID ) KEEP ( DENSE_RANK FIRST ORDER BY SALARY ) AS MIN_SALARY_ID,
MIN( SALARY ) AS MIN_SALARY
FROM C
```
**[Results](http://sqlfiddle.com/#!4/f8541/9/1)**:
```
| MAX_SALARY_ID | MAX_SALARY | MIN_SALARY_ID | MIN_SALARY |
|---------------|------------|---------------|------------|
| 4 | 110 | 5 | 90 |
```
**Query 3 - Get all the IDs with the maximum salary**:
```
SELECT ID, SALARY
FROM (
SELECT ID,
SALARY,
RANK() OVER ( ORDER BY SALARY DESC ) AS RNK
FROM C
)
WHERE RNK = 1
```
**[Results](http://sqlfiddle.com/#!4/f8541/9/2)**:
```
| ID | SALARY |
|----|--------|
| 2 | 110 |
| 4 | 110 |
```
**Query 4 - Get all IDs for min and max salary**:
```
SELECT LISTAGG( CASE MIN_RANK WHEN 1 THEN ID END, ',' ) WITHIN GROUP ( ORDER BY ID ) AS MIN_SALARY_IDS,
MAX( CASE MIN_RANK WHEN 1 THEN SALARY END ) AS MIN_SALARY,
LISTAGG( CASE MAX_RANK WHEN 1 THEN ID END, ',' ) WITHIN GROUP ( ORDER BY ID ) AS MAX_SALARY_IDS,
MAX( CASE MAX_RANK WHEN 1 THEN SALARY END ) AS MAX_SALARY
FROM (
SELECT ID,
SALARY,
RANK() OVER ( ORDER BY SALARY ASC ) AS MIN_RANK,
RANK() OVER ( ORDER BY SALARY DESC ) AS MAX_RANK
FROM C
)
```
**[Results](http://sqlfiddle.com/#!4/f8541/9/3)**:
```
| MIN_SALARY_IDS | MIN_SALARY | MAX_SALARY_IDS | MAX_SALARY |
|----------------|------------|----------------|------------|
| 5 | 90 | 2,4 | 110 |
```
**Query 5**:
```
SELECT ID,
SALARY,
CASE WHEN MIN_RANK = 1 THEN 'MIN'
WHEN MAX_RANK = 1 THEN 'MAX' END AS MIN_MAX
FROM (
SELECT ID,
SALARY,
RANK() OVER ( ORDER BY SALARY ASC ) AS MIN_RANK,
RANK() OVER ( ORDER BY SALARY DESC ) AS MAX_RANK
FROM C
)
WHERE MIN_RANK = 1 OR MAX_RANK = 1
```
**[Results](http://sqlfiddle.com/#!4/f8541/9/4)**:
```
| ID | SALARY | MIN_MAX |
|----|--------|---------|
| 2 | 110 | MAX |
| 4 | 110 | MAX |
| 5 | 90 | MIN |
```
|
Insert column with Max in sql
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
I have a form with two subforms in it in the following format:
[](https://i.stack.imgur.com/9ZIBM.png)
on Subform1, Element1 has Data1. Value of Data1 points to Data1 on Subform2 which has a value of Value1
Right now both subforms are showing all data in each table. What I want to do is filter Subform2 based on the row selected in Subform1.
[](https://i.stack.imgur.com/n5dUz.png)
In this example, Element3 is selected, so the pair Data3 Value3 shows in Subform2.
I've tried accomplishing this by altering the SQL on Subform2, but nothing I do seems to do the trick. I don't know if I'm looking in the right place, or if I should look somewhere else.
If there's anything else I should provide, please don't hesitate to point it out. I want to provide enough information to come to a solution.
|
## Results and explanation using methods in other answers
Okay, so I ended up coming to a solution for my specific problem, and hopefully by sharing here, it will help others in the future.
The other solutions to this question assume that your form hierarchy is as follows:
```
MainForm->Subform1->Subform2
```
As in, the second subform is in and owned by the first form. This will work for most applications, but not when both Subform1 and Subform2 are Datasheets.
The hierarchy in my case, and the hierarchy for the people I hope to help in the future is as follows:
```
MainForm->Subform1
MainForm->Subform2
```
As in, the second Subform is NOT owned by the first Subform.
With this hierarchy, the code in the other solutions doesn't work, unfortunately. However, there is a simple workaround:
## Use hidden textbox control as a "link" between Subform1 and Subform2
**(The following method uses the example names found in my original question)**
From design view, create a Text Box in the MainForm, not in the Subform1 or Subform2.
On the Property Sheet for the newly created Text Box control, under the Data tab under Property "Control Source", put the following code:
```
=[Subform1].[Form]![Element1]
```
Obviously, replace Subform1 with your first subform, and typically Element1 will be Primary Key for that table.
Next, on the Property Sheet for the Text Box control, under the Format tab change Property "Visible" to No.
Next, we're going to change the "Link Master Fields" and "Link Child Fields" properties on Subform2, which can be found on the Data tab of the Property Sheet:
For the "Link Master Fields" property, put in the Name of the Text Box control you made. An example of a default name will be Text5. I renamed my Text Box to CurrentElementKey, but name it whatever you want. So in my "Link Master Fields" property, I put CurrentElementKey.
For the "Link Child Fields" property, since we put the Primary Key for Subform1 in the Text Box, we need to put the Foreign Key it relates to in Subform2. I can't tell you exactly what this will look like, because it will vary for your scenario. For example, you might have had Element1 be Element1PK on the first Subform, and it's Foreign Key on Subform2 as Element1FK. So you'd put Element1FK in "Link Child Fields".
If you have any questions, or require further explanation, please comment on this answer, and I'll do my best to help.
|
You can do that by changing the recordsource of Subform2 on the OnCurrent event of subform1. The steps to do that are as follows:-
* Open the form in design view
* Go to the properties of subform1
* Go to the event tab
* Select eventprocedure from the oncurrent combo
* double click on the button next to the event to go to the vba window
* insert the following code
```
Private Sub Form_Current()
Me.Parent.Subform2.Form.RecordSource = "Select data,value From TableName Where data=" & Me.Data
End Sub
```
|
Filter data in datasheet based on second datasheet
|
[
"",
"sql",
"ms-access",
"vba",
""
] |
My data is like so
```
item date country sales
----------------------------------------
motorola 2015-01-01 US 10
motorola 2015-01-01 UK 20
motorola 2015-01-02 US 40
motorola 2015-01-02 UK 80
motorola 2015-01-03 US 120
motorola 2015-01-03 UK 150
motorola 2015-01-04 US 170
motorola 2015-01-04 US 180
```
I want to get the daily sales delta of motorola from 2 jan 2015 until 4 jan 2015.
So for example
* total sales for 1 jan 2015 is 10 (US) + 20(UK) = 30
* total sales for 2 jan 2015 is 120 so daily sales delta (sales on date minus D-1) is 90
* total sales for 3 jan 2015 is 270 so daily delta is 150
* total sales for 4 jan 2015 is 350 so daily delta is 80
I'm expecting the result tuple :
```
date dailyDelta
2015-01-02 90
2015-01-03 150
2015-01-04 80
```
What is the syntax to get this? I'm using SQL Server 2012.
Thanks
|
This is it; the query logic is as simple as it gets, and the performance is better than inner joins:
```
select date, sum(sales) - coalesce(lag(sum(sales), 1) over (order by date), 0)
from my_sales
group by date
order by date
```
Use window function `lag`. Play with it: <http://sqlfiddle.com/#!6/bebab/8> and read about it: <https://msdn.microsoft.com/en-us/library/hh231256.aspx>
briefly, `lag(sum(sales), 1) over (order by date)` means "get sum(sales) column of previous record of this query, ordered by date", `coalesce(XXX, 0)` means "when XXX is null, let's pretend it was a zero"
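You can try this out locally with SQLite (3.25+ for window functions) via Python, using the sample rows from the question. Note the first date appears with its full total as the delta, since `lag` has nothing before it and `coalesce` substitutes 0:

```python
import sqlite3  # window functions need SQLite 3.25+

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_sales (date TEXT, sales INT)")
con.executemany("INSERT INTO my_sales VALUES (?, ?)",
    [('2015-01-01', 10), ('2015-01-01', 20),
     ('2015-01-02', 40), ('2015-01-02', 80),
     ('2015-01-03', 120), ('2015-01-03', 150),
     ('2015-01-04', 170), ('2015-01-04', 180)])

# Daily total minus the previous day's total; coalesce turns the
# missing predecessor of the first day into 0.
rows = con.execute("""
    SELECT date,
           SUM(sales) - COALESCE(LAG(SUM(sales)) OVER (ORDER BY date), 0)
    FROM my_sales
    GROUP BY date
    ORDER BY date
""").fetchall()
print(rows)
# [('2015-01-01', 30), ('2015-01-02', 90),
#  ('2015-01-03', 150), ('2015-01-04', 80)]
```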
|
use a self join
```
declare @t table (item varchar(10), [date] date,country char(2), sales int)
insert into @t (item, [date],country, sales) values
('motorola','2015-01-01','US',10),
('motorola','2015-01-01','UK',20),
('motorola','2015-01-02','US',40),
('motorola','2015-01-02','UK',80),
('motorola','2015-01-03','US',120),
('motorola','2015-01-03','UK',150),
('motorola','2015-01-04','US',170),
('motorola','2015-01-04','US',180)
;with a as (select row_number() over (order by [date]) r,[date],sum(sales) n from @t group by [date])
select a.[date],a.n-isnull(a1.n,0) dailyDelta from a join a a1 on a.r =a1.r+1
```
|
Syntax to get sum(sales) group by brand but different date
|
[
"",
"sql",
"sql-server",
"database",
"aggregate-functions",
""
] |
I have two tables in a database. One called Users and one called Days.
The Users table has a column called ID which contains some numbers and an ActiveUser column which contains either a 0(not active) or 1(active). The Days table has a column called UserID which contains the same numbers the ID column from the Users table has. It also has a column called Day which contains info like "2015-01-01 00:00:00:000".
Users Table:
```
ID | ActiveUser
-------------------
10 | 0
11 | 1
12 | 1
13 | 0
```
Days Table:
```
User_ID | Day
------------------
10 | 2010-06-24 00:00:00.000
11 | 2011-07-05 00:00:00.000
12 | 2008-06-19 00:00:00.000
13 | 2010-06-20 00:00:00.000
10 | 2009-09-02 00:00:00.000
12 | 2010-08-15 00:00:00.000
11 | 2011-05-06 00:00:00.000
13 | 2012-04-25 00:00:00.000
```
I'm trying to create a query that finds the most recent Day listed for each Active user. So using the tables above, the query I'm trying to make should give me the following:
```
Day
------
2011-07-05 00:00:00.000
2010-08-15 00:00:00.000
```
Which corresponds to the two Active users with user ID's 11 and 12, both of which have two entries for Day, but the query picks the most recent date.
I'm new to SQL, and the closest I've got is below, which doesn't take into account the Users table (also, used <https://stackoverflow.com/a/2411703/2480598> as a template):
```
select Days.Day
from Days
inner join (select User_ID, max(day) as MaxDay
from Days
group by User_ID
) tm on Days.User_ID = tm.User_ID and Days.Day = tm.MaxDay
```
This gives me the list of dates for both active and non-active users.
|
This is classic example where `APPLY` can be applied:
```
select * from users u
outer apply(select top 1 day from days where userid = u.id order by day desc)a
where u.ActiveUser = 1
```
If you want only the `day` column(I think it makes no sense):
```
select max(d) as day
from users u
join days d on d.userid = u.id
where u.ActiveUser = 1
group by u.id
```
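Running this join-then-aggregate variant against the question's sample rows, sketched in SQLite via Python:

```python
import sqlite3

# Sample rows from the question; ISO-format date strings compare
# correctly as text, so MAX works on them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Users (ID INT, ActiveUser INT)")
con.execute("CREATE TABLE Days (User_ID INT, Day TEXT)")
con.executemany("INSERT INTO Users VALUES (?, ?)",
                [(10, 0), (11, 1), (12, 1), (13, 0)])
con.executemany("INSERT INTO Days VALUES (?, ?)",
    [(10, '2010-06-24'), (11, '2011-07-05'), (12, '2008-06-19'),
     (13, '2010-06-20'), (10, '2009-09-02'), (12, '2010-08-15'),
     (11, '2011-05-06'), (13, '2012-04-25')])

rows = con.execute("""
    SELECT MAX(d.Day)
    FROM Users u
    JOIN Days d ON d.User_ID = u.ID
    WHERE u.ActiveUser = 1
    GROUP BY u.ID
    ORDER BY u.ID
""").fetchall()
print(rows)  # [('2011-07-05',), ('2010-08-15',)]
```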
|
```
Select a.User_ID, max(a.day) as LatestDay from Days a
Inner Join Users b
on a.User_ID = b.ID and b.ActiveUser = 1
group by a.User_ID
```
The `inner join` limits it to active users only. You can remove `a.User_ID` from the `select` if you only want a list of days.
|
SQL query to find most recent date using two tables
|
[
"",
"sql",
"sql-server",
""
] |
I need a resultset of weeknumers, year and startdate of all weeks between two dates. I need it to match other search results to the weeks. Since the report will span just over a year, I need it to match the calendars.
We're in Europe, so weeks start on MONDAY.
I use SQL Server via a JDBC connection. I cannot use the calendar table.
I've come across various solutions, but none does just what I need it to. This is the kind of list I need, but somehow the results are not correct. I can't find my mistake:
```
WITH mycte AS
(
SELECT DATEADD(ww, DATEDIFF(ww,0,CAST('2010-12-01' AS DATETIME)), 0) DateValue
UNION ALL
SELECT DateValue + 7
FROM mycte
WHERE DateValue + 7 < '2016-12-31'
)
SELECT DATEPART(wk, DateValue) as week, DATEPART(year, DateValue) as year, DateValue
FROM mycte
OPTION (MAXRECURSION 0);
```
I used `--SET DATEFIRST 1;` to make sure weeks start on Monday.
The result looks like:
```
week year DateValue
----------- ----------- -------------------------
49 2010 2010-11-29 00:00:00.0
50 2010 2010-12-06 00:00:00.0
51 2010 2010-12-13 00:00:00.0
52 2010 2010-12-20 00:00:00.0
53 2010 2010-12-27 00:00:00.0
2 2011 2011-01-03 00:00:00.0
3 2011 2011-01-10 00:00:00.0
4 2011 2011-01-17 00:00:00.0
5 2011 2011-01-24 00:00:00.0
6 2011 2011-01-31 00:00:00.0
```
The problem is obvious: 2010 doesn't have 53 weeks, and week 1 is gone.
This happens for other years as well. Only 2015 has 53 weeks.
(Note: in ISO weeks (Europe) there are only 52 weeks in 2010, see wiki: <https://en.wikipedia.org/wiki/ISO_week_date>)
> The following 71 years in a 400-year cycle (add 2000 for current
> years) have 53 weeks (leap years, with February 29, are emphasized),
> years not listed have 52 weeks: 004, 009, 015, 020, 026, 032, 037,
> 043, 048, 054, 060, 065, 071, 076, 082, 088, 093, 099,105, 111, 116,
> 122, 128, 133, 139, 144, 150, 156, 161, 167, 172, 178, 184, 189,
> 195,201, 207, 212, 218, 224, 229, 235, 240, 246, 252, 257, 263, 268,
> 274, 280, 285, 291, 296,303, 308, 314, 320, 325, 331, 336, 342, 348,
> 353, 359, 364, 370, 376, 381, 387, 392, 398.
The dates are correct though. 2010-12-27 is a Monday and so is 2011-01-03.
But in Europe we always have a full week 1 (so there is always a Monday with week number 1).
Any ideas what happened to week 1, or why there are so many years with 53 weeks (which is wrong)?
|
Use `iso_week` in [`DATEPART`](https://msdn.microsoft.com/en-us/library/ms174420.aspx):
> ISO 8601 includes the ISO week-date system, a numbering system for
> weeks. Each week is associated with the year in which Thursday occurs.
> For example, week 1 of 2004 (2004W01) ran from Monday 29 December 2003
> to Sunday, 4 January 2004. The highest week number in a year might be
> 52 or 53. This style of numbering is typically used in European
> countries/regions, but rare elsewhere.
```
WITH mycte AS
(
SELECT DATEADD(ww, DATEDIFF(ww,0,CAST('2010-12-01' AS DATETIME)), 0) DateValue
UNION ALL
SELECT DateValue + 7
FROM mycte
WHERE DateValue + 7 < '2016-12-31'
)
SELECT DATEPART(iso_week, DateValue) as week, DATEPART(year, DateValue) as year,
DateValue
FROM mycte
OPTION (MAXRECURSION 0);
```
`LiveDemo`
You can also consider replacing the recursive CTE with a `tally table`.
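For reference, Python's `datetime` implements the same ISO-8601 numbering, which makes it easy to check what `DATEPART(iso_week, ...)` should return for any date:

```python
from datetime import date

# isocalendar() returns (ISO year, ISO week, ISO weekday); weeks start
# on Monday and belong to the year containing their Thursday.
print(tuple(date(2011, 1, 1).isocalendar())[:2])    # (2010, 52)
print(tuple(date(2010, 12, 27).isocalendar())[:2])  # (2010, 52)
print(tuple(date(2011, 1, 3).isocalendar())[:2])    # (2011, 1)
print(tuple(date(2015, 12, 28).isocalendar())[:2])  # (2015, 53)
```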
|
The reason you're seeing 53 weeks in 2010 is simply because there *are* 53 weeks in 2010 (by SQL Server's default week numbering).
Let's take a closer look at how the weeks break down in that year:
```
Declare @FromDate Date = '2010-01-01',
@ToDate Date = '2011-01-03'
;With Date (Date) As
(
Select @FromDate Union All
Select DateAdd(Day, 1, Date)
From Date
Where Date < @ToDate
)
Select Date, DatePart(Week, Date) WeekNo, DateName(WeekDay, Date) WeekDay
From Date
Option (MaxRecursion 0)
```
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7d/621/0)
Here's how the beginning of the year is:
```
Date WeekNo WeekDay
---------- ----------- ------------------------------
2010-01-01 1 Friday
2010-01-02 1 Saturday
2010-01-03 2 Sunday
2010-01-04 2 Monday
2010-01-05 2 Tuesday
2010-01-06 2 Wednesday
2010-01-07 2 Thursday
2010-01-08 2 Friday
2010-01-09 2 Saturday
2010-01-10 3 Sunday
```
Since the year begins in the middle of a week, there are only two days for `Week 1`. This causes the year to have 53 total weeks.
Now, to answer your question for why you don't see a `Week 1` value for 2011, let's look at how that year ends:
```
Date WeekNo WeekDay
---------- ----------- ------------------------------
2010-12-26 53 Sunday
2010-12-27 53 Monday
2010-12-28 53 Tuesday
2010-12-29 53 Wednesday
2010-12-30 53 Thursday
2010-12-31 53 Friday
2011-01-01 1 Saturday
2011-01-02 2 Sunday
2011-01-03 2 Monday
```
You are selecting your dates in 7-day increments. The last date that you pulled for 2010 was `2010-12-27`, which was accurately being displayed as being in Week 53. But the beginning of the next year occurs within this week on the Saturday, making Saturday `Week 1` of 2011, with the following day starting `Week 2`.
Since you are not selecting a new date until Monday, `2011-01-03`, it will effectively skip the dates in the first week of 2011, and begin with `Week 2`.
|
Need list of weeknumbers between two dates
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I have very large data set as below (example) :
```
#ID #report_name #report_count
1 ReportA1 3
2 ReportA1(PDF) 4
3 ReportA2 2
4 ReportA2(PDF) 6
5 ReportA3 7
6 ReportA3(PDF) 2
7 ReportA4 9
8 ReportA4(PDF) 7
9 ReportA5 1
10 ReportA5(PDF) 2
11 ReportA6 8
12 ReportA6(PDF) 9
13 ReportA7 8
14 ReportA7(PDF) 6
15 ReportA8 6
16 ReportA8(PDF) 7
17 ReportA9 5
18 ReportA9(PDF) 9
19 ReportA10 7
20 ReportA10(PDF) 1
```
I want to write an **SQL query** that **merges each pair of rows into a single row**.
For example, ReportA1 and ReportA1(PDF) as ReportA1.
My question: how do I create an SQL statement that combines the two rows of data (i.e. ReportA1 and ReportA1(PDF))
and sums their counts under a single name (i.e. ReportA1), as in the table below (example only)?
```
#ID #report_name #report_count
1 ReportA1 7
2 ReportA2 8
3 ReportA3 9
4 ReportA4 16
5 ReportA5 3
6 ReportA6 17
7 ReportA7 14
8 ReportA8 13
9 ReportA9 14
10 ReportA10 8
```
|
In MySQL you can probably do this:
```
SELECT ReportName, SUM(ReportCount) as ReportsCount
FROM (
SELECT REPLACE(ReportName,'(PDF)','') as ReportName, ReportCount
FROM Reports ) T
GROUP BY ReportName
```
See `Demo`
|
This should help you
```
select substr(report_name, 1, instr(report_name || '(', '(') - 1) as report_name,
       sum(report_count)
from yourtablename
group by substr(report_name, 1, instr(report_name || '(', '(') - 1)
```
This is Oracle syntax (using `instr` to trim any `(PDF)` suffix before grouping), but every SQL dialect has equivalent substring functions.
|
Combine rows in same column and table with similar values and sum values
|
[
"",
"sql",
""
] |
I have a table with data like:
```
==============
item | tagid
--------------
1111 | 101
1111 | 102
2222 | 101
2222 | 103
3333 | 104
4444 | 105
4444 | 106
5555 | 101
5555 | 103
==============
```
I want all items with tagids `101` and `103`. In the sample above, that would be items `2222` and `5555`, since both of them have tags `101` and `103`.
How can I do this?
|
Assuming tagid is unique for each item (ie the combination of item and tagid is unqiue):
```
select item
from mytable
where tagid in (101, 103)
group by item
having count(*) = 2
```
or if not unique (unlikely), use `having count(distinct tagid) = 2`
|
Something like this should work:
```
select item from table
where tagid in(101, 103)
group by item
having count(distinct tagid) = 2
```
|
MySQL query to get value of coumn 1 having set of values in column 2
|
[
"",
"mysql",
"sql",
""
] |
I would like to create a constraint that verifies that a value (in the 'nominal\_value' column) classified as "minimum" in the 'stats\_type' column is less than or equal to the value classified as "average", whenever all the other columns match. In other words, given corresponding tuples (matching on every column except 'oid', 'stats\_type' and 'nominal\_value'), I'd like to ensure the value labeled "minimum" is always less than or equal to the value labeled "average".
It is difficult to explain, so I made the example below:
```
CREATE TABLE price (
oid SERIAL NOT NULL,
product INTEGER NOT NULL,
territory INTEGER NOT NULL,
stats_type INTEGER NOT NULL,
year INTEGER NOT NULL,
nominal_value NUMERIC(6,2) NOT NULL,
data_source INTEGER NOT NULL,
CONSTRAINT pk_price PRIMARY KEY (oid),
CONSTRAINT price_1 UNIQUE (product, territory, stats_type, year, data_source),
CONSTRAINT price_2 CHECK (nominal_value > 0)
);
INSERT INTO price (oid, product, territory, stats_type, year, nominal_value, data_source) VALUES (1, 55, 5611, 1, 2014, 120, 3);
INSERT INTO price (oid, product, territory, stats_type, year, nominal_value, data_source) VALUES (2, 55, 5611, 2, 2014, 160, 3);
INSERT INTO price (oid, product, territory, stats_type, year, nominal_value, data_source) VALUES (3, 55, 5615, 1, 2014, 60, 3);
INSERT INTO price (oid, product, territory, stats_type, year, nominal_value, data_source) VALUES (4, 55, 5611, 3, 2014, 180, 3);
INSERT INTO price (oid, product, territory, stats_type, year, nominal_value, data_source) VALUES (5, 62, 5615, 1, 2013, 1500, 3);
INSERT INTO price (oid, product, territory, stats_type, year, nominal_value, data_source) VALUES (6, 62, 5615, 2, 2013, 1300, 3);
```
The 'stats\_type' labels are: 1 = minimum, 2 = average, 3 = maximum.
Looking at the first two rows: they are equal except for 'oid', 'stats\_type' and 'nominal\_value', so I'd like to verify: is 120 (labeled as minimum in the first row) less than or equal to 160 (the value labeled as average)? In this case the answer is yes, so it passes the verification.
Now, the last two rows (5 and 6) also match on the columns 'product', 'territory', 'year' and 'data\_source'. However, the nominal value 1500 in row 5 should not be larger than the 1300 in row 6 (because 1500 was supposed to be a minimum, and 1300 an average value), so this pair should fail the verification.
How can I do this? Is it possible to accomplish such task using a 'check' constraint?
|
This cannot be done using a check constraint because of necessity of querying the table. You can use a trigger:
```
create or replace function price_trigger()
returns trigger language plpgsql as $$
begin
if exists (
select 1
from price p
where (p.product, p.territory, p.year, p.data_source) =
(new.product, new.territory, new.year, new.data_source)
and (
p.stats_type < new.stats_type and p.nominal_value > new.nominal_value
or
p.stats_type > new.stats_type and p.nominal_value < new.nominal_value
)
)
then
raise exception 'Nominal value error';
end if;
return new;
end $$;
create trigger price_trigger
before insert or update on price
for each row
execute procedure price_trigger();
```
The trigger function checks all conditions (min < avg < max).
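With the trigger in place, re-running the last insert from the question's sample data should now be rejected, since the new "average" of 1300 contradicts the existing "minimum" of 1500 (a sketch assuming rows 1-5 were already inserted):

```
-- row 5 has stats_type = 1 (minimum) with nominal_value = 1500,
-- so this "average" of 1300 trips the p.stats_type < new.stats_type
-- and p.nominal_value > new.nominal_value branch of the trigger
INSERT INTO price (oid, product, territory, stats_type, year, nominal_value, data_source)
VALUES (6, 62, 5615, 2, 2013, 1300, 3);
-- ERROR:  Nominal value error
```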
|
Since your scenario has the complexity that the insertions can be made in any order, I suggest that you perform the insertions into a secondary table beforehand, such as price\_check.
Then you could add a trigger that performs an SQL query such as:
```
SELECT pmin.*
FROM price_check AS pmin
JOIN price_check AS pavg
ON (pmin.product, pmin.territory, pmin.year, pmin.data_source) = (pavg.product, pavg.territory, pavg.year, pavg.data_source)
AND pmin.stats_type = 1
AND pavg.stats_type = 2
AND pmin.nominal_value <= pavg.nominal_value
```
This will return only the valid minimums; use it to perform the insertion into the final table. Be aware that I use an INNER JOIN, so a minimum without an average pair won't be returned; you would have to use a LEFT JOIN for that.
Similarly, you can design queries on price\_check that put non-valid rows into a price\_log table so you can manually check them later on.
Alternatively you could put an INSTEAD OF INSERT trigger on your table without the need for the secondary table, but even so it is a good practice to create a LOG table to manually check and fix non-valid rows.
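A sketch of that log-table idea (the `price_log` table name is hypothetical; it assumes the same `price_check` staging table as above):

```
-- move rows whose "minimum" exceeds the matching "average" into a log table
INSERT INTO price_log
SELECT pmin.*
FROM price_check AS pmin
JOIN price_check AS pavg
  ON (pmin.product, pmin.territory, pmin.year, pmin.data_source) =
     (pavg.product, pavg.territory, pavg.year, pavg.data_source)
 AND pmin.stats_type = 1
 AND pavg.stats_type = 2
WHERE pmin.nominal_value > pavg.nominal_value;
```

The logged rows can then be deleted from price\_check before the valid rows are copied into the final table.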
|
Constraint for values with different labels when all other columns are equal
|
[
"",
"sql",
"postgresql",
"triggers",
"constraints",
"multiple-columns",
""
] |
I have a `table` with Sum of Amounts for Each Weekday (Sunday to Saturday) . Table structure is as below.
[](https://i.stack.imgur.com/sFQCz.png)
I need to assign these table values to variables. For example: I need to assign the sum for rdate `'2015-11-15'`, i.e. `324`, to a variable @sundayval, 374 to variable @mondayval, and so on.
How can I do this in a single query? I have tried a CASE statement,
but it only assigns a value to the variable @saturdayval.
Thanks for the help.
|
Ok, let's do it this way: the `else` case is set to return the variable itself, so each variable carries its value forward across rows, effectively aggregating like a coalesce. Note: I don't have any way to test this right now. :)
```
SELECT
@sundayval = case when DATEPART(weekday, rdate) = 1 then sum else @sundayval end
, @mondayval = case when DATEPART(weekday, rdate) = 2 then sum else @mondayval end
, @tuesdayval = case when DATEPART(weekday, rdate) = 3 then sum else @tuesdayval end
, @wednesdayval = case when DATEPART(weekday, rdate) = 4 then sum else @wednesdayval end
, @thursdayval = case when DATEPART(weekday, rdate) = 5 then sum else @thursdayval end
, @fridayval = case when DATEPART(weekday, rdate) = 6 then sum else @fridayval end
, @saturdayval = case when DATEPART(weekday, rdate) = 7 then sum else @saturdayval end
FROM TABLE
```
|
This does the job. It doesn't depend on any particular [`DATEFIRST`](https://msdn.microsoft.com/en-GB/library/ms181598.aspx) setting; it instead uses an arbitrarily chosen Sunday (I picked 17th May this year), what I usually refer to as a "known good" date because it has the property we're looking for, in this case the right day of the week:
```
declare @t table ([sum] int not null,rdate datetime2 not null)
insert into @t([sum],rdate) values
(324,'20151115'),
(374,'20151116'),
(424,'20151117'),
(474,'20151118'),
(524,'20151119'),
(574,'20151120'),
(624,'20151121')
declare @sundayval int
declare @mondayval int
declare @tuesdayval int
declare @wednesdayval int
declare @thursdayval int
declare @fridayval int
declare @saturdayval int
select
@sundayval = SUM(CASE WHEN DATEPART(weekday,rdate) = DATEPART(weekday,'20150517') THEN [sum] END),
@mondayval = SUM(CASE WHEN DATEPART(weekday,rdate) = DATEPART(weekday,'20150518') THEN [sum] END),
@tuesdayval = SUM(CASE WHEN DATEPART(weekday,rdate) = DATEPART(weekday,'20150519') THEN [sum] END),
@wednesdayval = SUM(CASE WHEN DATEPART(weekday,rdate) = DATEPART(weekday,'20150520') THEN [sum] END),
@thursdayval = SUM(CASE WHEN DATEPART(weekday,rdate) = DATEPART(weekday,'20150521') THEN [sum] END),
@fridayval = SUM(CASE WHEN DATEPART(weekday,rdate) = DATEPART(weekday,'20150522') THEN [sum] END),
@saturdayval = SUM(CASE WHEN DATEPART(weekday,rdate) = DATEPART(weekday,'20150523') THEN [sum] END)
from @t
select @sundayval,@mondayval,@tuesdayval,@wednesdayval,@thursdayval,@fridayval,@saturdayval
```
Result:
```
----------- ----------- ----------- ----------- ----------- ----------- -----------
324 374 424 474 524 574 624
```
|
Update multiple variables in a single query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table that look like this:
```
ID | DATE | NAME | VALUE_1 | VALUE_2
1 | 27.11.2015 | Homer | A | B
2 | 27.11.2015 | Bart | C | B
3 | 28.11.2015 | Homer | A | C
4 | 28.11.2015 | Maggie | C | B
5 | 28.11.2015 | Bart | C | B
```
I currently delete duplicate rows (thank to [this thread](https://stackoverflow.com/questions/6025367/t-sql-deleting-all-duplicate-rows-but-keeping-one)) using this code :
```
WITH cte AS
(SELECT ROW_NUMBER() OVER (PARTITION BY [VALUE_1], [VALUE_2]
ORDER BY [DATE] DESC) RN
FROM [MY_TABLE])
DELETE FROM cte
WHERE RN > 1
```
But this code doesn't delete exactly the rows I want. I would like to delete only rows whose values already exist, so in my example I would like to delete only row 5, because row 2 has the same values and is older.
Code to create my table and insert values:
```
CREATE TABLE [t_diff_values]
([id] INT IDENTITY NOT NULL PRIMARY KEY,
[date] DATETIME NOT NULL,
[name] VARCHAR(255) NOT NULL DEFAULT '',
[val1] CHAR(1) NOT NULL DEFAULT '',
[val2] CHAR(1) NOT NULL DEFAULT '');
INSERT INTO [t_diff_values] ([date], [name], [val1], [val2]) VALUES
('2015-11-27','Homer', 'A','B'),
('2015-11-27','Bart', 'C','B'),
('2015-11-28','Homer', 'A','C'),
('2015-11-28','Maggie', 'C','B'),
('2015-11-28','Bart', 'C','B');
```
|
You need to add one more `CTE` in which you index all the islands, and then apply your duplicate logic in a second `CTE`:
```
DECLARE @t TABLE
(
ID INT ,
DATE DATE ,
VALUE_1 CHAR(1) ,
VALUE_2 CHAR(1)
)
INSERT INTO @t
VALUES ( 1, '20151127', 'A', 'B' ),
( 2, '20151128', 'C', 'B' ),
( 3, '20151129', 'A', 'B' ),
( 4, '20151130', 'A', 'B' );
WITH cte1
AS ( SELECT * ,
ROW_NUMBER() OVER ( ORDER BY date)
- ROW_NUMBER() OVER ( PARTITION BY VALUE_1, VALUE_2 ORDER BY DATE) AS gr
FROM @t
),
cte2
AS ( SELECT * ,
ROW_NUMBER() OVER ( PARTITION BY VALUE_1, VALUE_2, gr ORDER BY date) AS rn
FROM cte1
)
DELETE FROM cte2
WHERE rn > 1
SELECT *
FROM @t
```
|
You could use this query:
```
WITH cte AS
(
SELECT RN = ROW_NUMBER() OVER (ORDER BY ID)
, *
FROM @data
)
DELETE FROM c1
--SELECT *
FROM CTE c1
INNER JOIN CTE c2 ON c1.RN +1 = c2.RN AND c1.VALUE_1 = c2.VALUE_1 AND c1.VALUE_2 = c2.VALUE_2
```
Here I order them by ID. If the next row (RN+1) has the same VALUE_1 and VALUE_2, it is deleted.
**Output:**
```
ID DATE VALUE_1 VALUE_2
1 2015-11-27 A B
2 2015-11-28 C B
4 2015-11-30 A B
```
**Data:**
```
declare @data table(ID int, [DATE] date, VALUE_1 char(1), VALUE_2 char(1));
insert into @data(ID, [DATE], VALUE_1, VALUE_2) values
(1, '20151127', 'A', 'B'),
(2, '20151128', 'C', 'B'),
(3, '20151129', 'A', 'B'),
(4, '20151130', 'A', 'B');
```
|
SQL: Deleting row which values already exist
|
[
"",
"sql",
"sql-server",
"t-sql",
"delete-row",
""
] |
I need to match one column's data against another column: if a value from the first column exists anywhere in the second column, the result should be '0.5' (1/2); if it doesn't match, or the link is null, the result should be '0' (zero).
I have table which contain following data:
```
Job_Id link_Id
2 3
3 2
4 5
5 4
6 null
7 8
8 7
10 null
```
Expected result :
```
Job_Id link_Id cycle
2 3 0.5
3 2 0.5
4 5 0.5
5 4 0.5
6 null 0
7 8 0.5
8 7 0.5
10 null 0
```
[](https://i.stack.imgur.com/5xCFh.png)
My query :
```
select t.job_id
, t.link_id
, round((case when t.link_job_id IS NULL then 1 else null end))/2 cycles
from T_QCMS_JOB_STATE_HIS t
```
This doesn't quite work
|
Why not just a standard outer join?
```
select a.job_id, b.link_id, case when b.link is not null then 0.5 else 0 end cycle
from tb a left outer join tb b
on a.job_id = b.link_id;
```
Written freehand, as no CREATE TABLE statement or data was provided. Adapt as needed.
|
Your table looks like it might be hierarchical, in which case a [recursive CTE/sub-query factoring clause](http://docs.oracle.com/database/121/SQLRF/statements_10002.htm#BCEJGIBG) may help you in the future.
To obtain your current result though, you just need to do a self-join:
```
select coalesce(l.job_id, j.job_id) as job_id
, l.link_id
, case when l.link_id is not null then 0.5 else 0 end as cycle
from t_qcms_job_state_his j
left outer join t_qcms_job_state_his l
on j.job_id = l.link_id;
JOB_ID LINK_ID CYCLE
---------- ---------- ----------
2 3 .5
3 2 .5
4 5 .5
5 4 .5
7 8 .5
8 7 .5
10 0
6 0
8 rows selected.
```
The outer join is there to deal with the fact that not all link IDs exist.
Another non-ANSI compliant way, but which only involves a single table scan would be to use Oracle's [`FIRST`](http://docs.oracle.com/database/121/SQLRF/functions074.htm) function, this is significantly more confusing but will be more efficient:
```
with the_data as(
select job_id
, max(link_id) keep (dense_rank first order by case when job_id = link_id then 0 else 1 end) as link_id
from t_qcms_job_state_his
group by job_id
)
select job_id
, link_id
, case when link_id is not null then 0.5 else 0 end as cycle
from the_data
```
|
Find if a value exists anywhere in a second column and return a result
|
[
"",
"sql",
"oracle",
"birt",
""
] |
Can you please have a look at the code? Notice that I cannot use "MYDELTOT" in the main SELECT statement's WHERE clause, on either side of the "UNION".
Code:
```
select
'POID' = DYN_PORDERS.ID, 'PSID' = SYS_SUPPLIERS.ID,
(select sum(DYN_PORDERDELS.DelQty) as MMTD
from DYN_PORDERDELS
where DYN_PORDERDELS.DelPOID = DYN_PORDERS.ID) as MYDELTOT,
*
from
DYN_PORDERSRS
inner join
DYN_PORDERS on DYN_PORDERS.id = DYN_PORDERSRS.RSOrderID
inner join
SYS_SUPPLIERS on SYS_SUPPLIERS.id = DYN_PORDERS.SupplierID
inner join
DYN_porderdels on DYN_PORDERS.ID = DYN_PORDERDELS.DelPOID
where
DYN_PORDERSRS.RSDate <= '20151031'
and DYN_PORDERS.Qnty >= MYDELTOT
union
select
'POID' = DYN_PORDERS.ID, 'PSID' = SYS_SUPPLIERS.ID,
(select sum(DYN_PORDERDELS.DelQty) as MMTD
from DYN_PORDERDELS
where DYN_PORDERDELS.DelPOID = DYN_PORDERS.ID) as MYDELTOT,
*
from
DYN_PORDERSRS
inner join
DYN_PORDERS on DYN_PORDERS.id = DYN_PORDERSRS.RSOrderID
inner join
SYS_SUPPLIERS on SYS_SUPPLIERS.id = DYN_PORDERS.SupplierID
inner join
DYN_porderdels on DYN_PORDERS.ID = DYN_PORDERDELS.DelPOID
where
DYN_PORDERSRS.rsdate >= '20151101'
and DYN_PORDERS.Qnty >= MYDELTOT
```
Please help
Thanks
Mike
|
Column aliases are applied after the `WHERE` condition is evaluated, which means there is nothing called `MYDELTOT` yet when the condition is applied.
One way to address this is to create the alias in a subquery; that way the alias is associated with the value at the subquery level:
```
select 'POID' = DYN_PORDERS.ID
, 'PSID' = SYS_SUPPLIERS.ID
, t.MYDELTOT
, *
from DYN_PORDERSRS
inner join DYN_PORDERS on DYN_PORDERS.id = DYN_PORDERSRS.RSOrderID
inner join SYS_SUPPLIERS on SYS_SUPPLIERS.id = DYN_PORDERS.SupplierID
inner join DYN_porderdels on DYN_PORDERS.ID = DYN_PORDERDELS.DelPOID
CROSS APPLY (select sum(DYN_PORDERDELS.DelQty) as MYDELTOT
from DYN_PORDERDELS
where DYN_PORDERDELS.DelPOID = DYN_PORDERS.ID
) as t
where DYN_PORDERSRS.rsdate <= '20151031'
and DYN_PORDERS.Qnty >= t.MYDELTOT
```
|
You can try using a `CTE` as follows:
```
;with cte as (
select 'POID' = DYN_PORDERS.ID, 'PSID' = SYS_SUPPLIERS.ID,
(select sum(DYN_PORDERDELS.DelQty) as MMTD from DYN_PORDERDELS where
DYN_PORDERDELS.DelPOID = DYN_PORDERS.ID) as MYDELTOT,
* from DYN_PORDERSRS
inner join DYN_PORDERS on DYN_PORDERS.id=DYN_PORDERSRS.RSOrderID
inner join SYS_SUPPLIERS on SYS_SUPPLIERS.id=DYN_PORDERS.SupplierID
inner join DYN_porderdels on DYN_PORDERS.ID = DYN_PORDERDELS.DelPOID
where DYN_PORDERSRS.RSDate <= '20151031'
union
select 'POID' = DYN_PORDERS.ID, 'PSID' = SYS_SUPPLIERS.ID,
(select sum(DYN_PORDERDELS.DelQty) as MMTD from DYN_PORDERDELS where
DYN_PORDERDELS.DelPOID = DYN_PORDERS.ID) as MYDELTOT,
* from DYN_PORDERSRS
inner join DYN_PORDERS on DYN_PORDERS.id=DYN_PORDERSRS.RSOrderID
inner join SYS_SUPPLIERS on SYS_SUPPLIERS.id=DYN_PORDERS.SupplierID
inner join DYN_porderdels on DYN_PORDERS.ID = DYN_PORDERDELS.DelPOID
where DYN_PORDERSRS.rsdate >= '20151101'
)
select *
from cte
where Qnty >= MYDELTOT
```
|
SQL - Use a Column from a Sub Select Statement as a Query in Main Select Statement
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have an issue regarding date calculations.
I have a datetime column called `CreatedLocalTime` with values in this format: **`2015-11-15 19:48:50.000`**
I need to retrieve a new column called `Prod_Date` with:
```
if “CreatedLocalTime” between
(CreatedLocalTime 7 AM)
& (CreatedLocalTime+1 7 AM)
return CreatedLocalTime date with DD/MM/YYYY format
```
In other words, today's production = the sum from yesterday at 7 AM until today at 7 AM.
Any help using CASE?
|
For `day 7AM` to `day+1 7AM`, you can try:
```
SELECT CAST(CreatedLocalTime as date)
...
FROM ...
WHERE ...
CreatedLocalTime >= DATEADD(hour, 7, CAST(CAST(CreatedLocalTime as date) as datetime))
AND
CreatedLocalTime < DATEADD(hour, 31, CAST(CAST(CreatedLocalTime as date) as datetime))
...
```
For `previous day 7AM` to `day 7AM`, replace 7 by -17 and 31 by 7 (the original [7, 31) hour window shifted back 24 hours).
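Spelled out (a sketch mirroring the elided query above): shifting the original [7, 31) hour window back 24 hours gives [-17, 7), i.e. previous day 07:00 up to, but not including, day 07:00:

```
SELECT CAST(CreatedLocalTime as date)
...
FROM ...
WHERE ...
CreatedLocalTime >= DATEADD(hour, -17, CAST(CAST(CreatedLocalTime as date) as datetime))
AND
CreatedLocalTime < DATEADD(hour, 7, CAST(CAST(CreatedLocalTime as date) as datetime))
...
```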
|
It looks like you want something along the lines of
```
DECLARE @StartDateTime Datetime
DECLARE @EndDateTime Datetime
SET @EndDateTime = DATEADD(hour, 7,convert(datetime,convert(date,getdate())) )
SET @StartDateTime = DATEADD(day, -1, @EndDateTime )
--Print out the variables for demonstration purposes
PRINT '@StartDateTime = '+CONVERT(nchar(19), @StartDateTime,120)
PRINT '@EndDateTime = '+CONVERT(nchar(19), @EndDateTime,120)
SELECT SUM (Production) AS Prod_Date FROM YourSchema.YourTable WHERE CreatedLocalTime >= @StartDateTime AND CreatedLocalTime < @EndDateTime
```
But you could also look at it as all the times which after you remove 7 hours from them are yesterday
```
SELECT SUM (Production) AS Prod_Date
FROM YourSchema.YourTable
WHERE DATEDIFF(day, DATEADD(hour, -7, CreatedLocalTime), GETDATE()) = 1
```
The first version will be more efficient, as the query only has to do the date arithmetic once at the start, while the second involves executing DATEDIFF and DATEADD for every record. That will be slower on large amounts of data.
The Gold plated solution would be to add a computed column to your table
```
ALTER TABLE YourSchema.YourTable ADD EffectiveDate AS CONVERT(date, DATEADD(hour, -7, CreatedLocalTime))
```
And then an index on that column
```
CREATE INDEX IX_YourTable_EffectiveDate ON YourSchema.YourTable (EffectiveDate )
```
So you can write
```
DECLARE @YesterDay date = DATEADD(day,-1, getdate())
SELECT SUM (Production) AS Prod_Date
FROM YourSchema.YourTable
WHERE EffectiveDate = @YesterDay
```
|
Today Production - SQL Date Calculation case when
|
[
"",
"sql",
"sql-server",
"case",
"dynamics-ax-2012",
"production",
""
] |
I have a table:
table1
```
start end
1/jan/2012 15/jan/2012
1/feb/2013 5/april/2013
```
I need to find all the possible monthly, quarterly and yearly timeframes. For example:
1)
1/jan/2012 15/jan/2012
will fall between:
```
1/jan/2012 31/jan/2012
1/jan/2012 31/march/2012
1/jan/2012 31/dec/2012
```
2)
1/feb/2013 5/april/2013
will fall between:
```
1/feb/2013 28/feb/2013
1/march/2013 31/march/2013
1/april/2013 30/april/2013
1/jan/2013 31/march/2013
1/april/2013 30/june/2013
1/jan/2013 31/dec/2013
```
Is it possible to write an SQL query to get all the possible date combinations?
|
```
WITH date_range AS (
SELECT TO_DATE('2012-01-01', 'YYYY-MM-DD') AS start_date, TO_DATE('2012-01-15', 'YYYY-MM-DD') AS end_date FROM DUAL
UNION
SELECT TO_DATE('2012-02-01', 'YYYY-MM-DD') AS start_date, TO_DATE('2012-04-05', 'YYYY-MM-DD') AS end_date FROM DUAL
), monthly_range AS (
SELECT dr.start_date
, dr.end_date
, 'Monthly' AS range_type
, ADD_MONTHS(TRUNC(dr.start_date, 'MM'), LEVEL - 1) AS month_start
, ADD_MONTHS(LAST_DAY(dr.start_date), LEVEL - 1) AS month_end
FROM date_range dr
CONNECT BY LEVEL <= CEIL(MONTHS_BETWEEN(dr.end_date, dr.start_date))
), quarterly_range AS (
SELECT
dr.start_date
, dr.end_date
, 'Quarterly' AS range_type
, ADD_MONTHS(TRUNC(dr.start_date, 'MM'), (LEVEL - 1) * 3) AS range_start
, ADD_MONTHS(TRUNC(dr.start_date, 'MM'), LEVEL * 3) - 1 AS range_end
FROM date_range dr
CONNECT BY LEVEL <= CEIL(MONTHS_BETWEEN(dr.end_date, dr.start_date)/3)
), yearly_range AS (
SELECT
dr.start_date
, dr.end_date
, 'Yearly' AS range_type
, ADD_MONTHS(TRUNC(dr.start_date, 'MM'), (LEVEL - 1) * 12) AS range_start
, ADD_MONTHS(TRUNC(dr.start_date, 'MM'), LEVEL * 12) - 1 AS range_end
FROM date_range dr
CONNECT BY LEVEL <= CEIL(MONTHS_BETWEEN(dr.end_date, dr.start_date)/12)
)
SELECT mr.* FROM monthly_range mr
UNION
SELECT qr.* FROM quarterly_range qr
UNION
SELECT yr.* FROM yearly_range yr
ORDER BY 1,2,3,4;
```
|
Hope it helps:
```
-- test data
with table1 as
(select 1 as id,
to_date('20120101', 'YYYYMMDD') as start_dt,
to_date('20120115', 'YYYYMMDD') as end_dt
from dual
union all
select 2 as id,
to_date('20130201', 'YYYYMMDD') as start_dt,
to_date('20130405', 'YYYYMMDD') as end_dt
from dual),
-- get sequences in range [0..max date interval-1]
idx_tab as
(select level - 1 as idx
from dual
connect by level < (select max(end_dt - start_dt) from table1)),
-- expand interval [start_dt; end_dt] by day
dt_tb as
(select t.id, t.start_dt, t.end_dt, t.start_dt + i.idx as dt
from table1 t, idx_tab i
where t.start_dt + idx <= t.end_dt)
select 'Month-' || to_char(dt, 'YYYY-MM'), id, start_dt, end_dt
from dt_tb
union
select 'Quarter-' || to_char(dt, 'YYYY-Q'), id, start_dt, end_dt
from dt_tb
union
select 'Year-' || to_char(dt, 'YYYY'), id, start_dt, end_dt
from dt_tb
order by 1, 2;
```
|
Oracle - Find monthly, quarterly and yearly dates
|
[
"",
"sql",
"oracle",
"date",
""
] |
I am trying to select only the latest row from one of the LEFT JOINed tables, not from the main query. This is my SQL:
```
SELECT *
FROM messages
LEFT JOIN users
ON messages.message_to = users.user_id
OR messages.message_user = users.user_id
LEFT JOIN message_replies
ON messages.message_id = message_replies.message_reply_main
LEFT JOIN user_personal_information
ON users.user_id =
user_personal_information.user_personal_information_user
```
I want to select the latest row from:
```
LEFT JOIN message_replies
ON messages.message_id = message_replies.message_reply_main
```
My column is called: `message_reply_date` - how can I use that to LEFT JOIN the latest row?
`message_replies`:
```
CREATE TABLE IF NOT EXISTS `message_replies` (
`message_reply_id` int(11) NOT NULL,
`message_reply_user` int(11) NOT NULL,
`message_reply_main` int(11) NOT NULL,
`message_reply_message` text NOT NULL,
`message_reply_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`message_reply_read` int(11) NOT NULL DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```
I am using this for the WHERE clause:
```
WHERE m.message_user = ?
OR m.message_to = ?
AND m.message_deleted=0
AND m.message_permdeleted=0
GROUP BY mr.message_reply_main
ORDER BY mr.message_reply_date DESC
```
|
If I understood the question correctly, I'd do it this simple way:
```
SELECT *
FROM messages
LEFT JOIN users
ON messages.message_to = users.user_id OR messages.message_user = users.user_id
LEFT JOIN message_replies
ON messages.message_id = message_replies.message_reply_main
LEFT JOIN user_personal_information
ON users.user_id = user_personal_information.user_personal_information_user
WHERE message_replies.message_reply_date = (SELECT MAX(message_reply_date) FROM message_replies WHERE message_reply_main = messages.message_id)
/* AND more criterias */
```
No grouping in the main query but just using a `MAX` evaluation of `message_reply_date` in a subquery regarding the `WHERE` criteria on `message_reply_date` itself.
|
There is a "left join" method for getting the most recent message (as well as several others). But in keeping with the preferred join method of the question:
```
SELECT *
FROM messages m LEFT JOIN
users u
ON m.message_to = u.user_id OR m.message_user = u.user_id LEFT JOIN
message_replies mr
ON m.message_id = mr.message_reply_main LEFT JOIN
user_personal_information upi
ON u.user_id = upi.user_personal_information_user LEFT JOIN
message_replies mr2
ON mr2.message_reply_main = m.message_id AND
mr2.message_reply_date > mr.message_reply_date
WHERE mr2.message_reply_main IS NULL;
```
I also added table aliases because they make a query easier to write and to read.
The idea is to match to the table again, but only for later messages. Then, the `WHERE` clause checks that none exist -- so it has the latest one.
|
LEFT JOIN latest row using a date column
|
[
"",
"mysql",
"sql",
""
] |
I have the following format
```
Name someString theValue
abc bla bla 3
abc bla bla 3
abd bla bla 5
abd bla bla 5
xfz ffs ds ds 50
xfz ffs ds ds 50
```
expected result:
```
Name someString theValue
abc bla bla 3
abd bla bla 5
xfz ffs ds ds 50
```
Tried to solve my problem using this query:
```
Select Name, someString, theValue, count(*) FROM myTable
GROUP BY Name, someString, theValue
HAVING count(*) = 1
```
The logic here being: append to each row a count number, and select only the first one which was counted per set of rows.
This shrinks my result from 2160 rows to 40, but there are still duplicates here.
**My question is**: Is the logic I used in the query correct (for filtering out duplicate rows by two columns), and would you have written this in another way?
Using SQL Server 2012
Thank you
|
I would suggest a simple aggregation as the simplest approach:
```
select name, someString, min(theValue) as theValue
from t
group by name, someString;
```
|
Surely what you want to do is
```
SELECT DISTINCT Name, someString, theValue FROM myTable
```
Or if you need to know how many occurrences there are, then
```
SELECT Name, someString, theValue, Count(*) as ct FROM myTable GROUP BY Name, someString, theValue
```
|
SQL Server - select distinct rows by two columns
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Let's say I've got a list of clients and I want to allocate them to a bus. I've got John, Ringo, Paul... and they can travel on the Pink or the Green bus. The idea is that the allocation would be
```
John: Pink
Ringo: Green
Paul: Pink
```
Does anyone know how to do this without resorting to a loop statement?
```
--DROP TABLE [BusAllocation]
CREATE TABLE [dbo].[BusAllocation]
(
[ClientName] [varchar](50) NOT NULL,
[BusAllocation] [varchar](50) NULL
);
INSERT INTO BusAllocation([ClientName]) VALUES('John');
INSERT INTO BusAllocation([ClientName]) VALUES('Ringo');
INSERT INTO BusAllocation([ClientName]) VALUES('Paul');
INSERT INTO BusAllocation([ClientName]) VALUES('Simon');
INSERT INTO BusAllocation([ClientName]) VALUES('Tyrone');
CREATE TABLE [dbo].[Bus]
(
BusName [varchar](50) NOT NULL
);
INSERT INTO [Bus](BusName) VALUES('Pink');
INSERT INTO [Bus](BusName) VALUES('Green');
```
|
The following code demonstrates the basic technique. Hopefully the real tables have some unique field that can be used to provide an order in place of the `( order by ( select NULL ) )` hacks.
Note that the code generates numbers for each row independent of any other data. It will not get tripped up by gaps in identity values, e.g. when rows are deleted or transactions are rolled back.
```
-- Sample data.
declare @BusAllocation as Table ( ClientName VarChar(10), BusAllocation VarChar(10) );
insert into @BusAllocation values
( 'John', NULL ), ( 'Ringo', NULL ), ( 'Paul', NULL ), ( 'Simon', NULL ), ( 'Tyrone', NULL );
select * from @BusAllocation;
declare @Bus as Table ( BusName VarChar(10) );
insert into @Bus values
( 'Pink' ), ( 'Green' );
select * from @Bus;
-- Mix and match the buses and output the result.
with
NumberedBuses as (
select BusName, Row_Number() over ( order by ( select NULL ) ) - 1 as RN
from @Bus ),
NumberedClients as (
select ClientName, Row_Number() over ( order by ( select NULL ) ) - 1 as RN
from @BusAllocation )
select NC.RN, ClientName, BusName
from NumberedClients as NC inner join
NumberedBuses as NB on NB.RN = NC.RN % ( select count(42) from NumberedBuses )
order by NC.RN;
-- Do it again updating the table.
with
NumberedBuses as (
select BusName, Row_Number() over ( order by ( select NULL ) ) - 1 as RN
from @Bus ),
NumberedClients as (
select ClientName, Row_Number() over ( order by ( select NULL ) ) - 1 as RN
from @BusAllocation )
update @BusAllocation
set BusAllocation = NB.BusName
from @BusAllocation as BA inner join
NumberedClients as NC on NC.ClientName = BA.ClientName inner join
NumberedBuses as NB on NB.RN = NC.RN % ( select count(42) from NumberedBuses );
select * from @BusAllocation;
```
|
It is confusing to most of us here why you are not using Id columns. But, if it is as simple as you are saying, this would be the minimum code to achieve what you want.
```
DECLARE @RowIndex int = 0
UPDATE BusAllocation
SET BusAllocation = CASE WHEN @RowIndex % 2 = 0 THEN 'Pink' ELSE 'Green' END,
@RowIndex = @RowIndex + 1
```
I suspect when you go to the next step you will realize the value of identity columns and put one on each of the tables. There would then be a Client table, Bus table and BusAllocation table. The BusAllocation table would have a ClientId and BusId column and that is how you assign a bus to a client.
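A sketch of that normalized layout (table and column names are illustrative, not from the question):

```
CREATE TABLE [dbo].[Client]
(
    ClientId int IDENTITY(1,1) PRIMARY KEY,
    ClientName varchar(50) NOT NULL
);

CREATE TABLE [dbo].[Bus]
(
    BusId int IDENTITY(1,1) PRIMARY KEY,
    BusName varchar(50) NOT NULL
);

CREATE TABLE [dbo].[BusAllocation]
(
    ClientId int NOT NULL PRIMARY KEY REFERENCES [dbo].[Client](ClientId),
    BusId int NOT NULL REFERENCES [dbo].[Bus](BusId)
);
```

Each client then appears once in BusAllocation, and the round-robin assignment updates a BusId instead of a name string.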
|
Update a table with records in a sequential order
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have the following query --
```
select * from main_iteminstance where master_id=33568
or tv_series_id in
(select id from main_iteminstance where master_id=33568)
```
How would this best be simplified?
Basically, what I am trying to get is:
```
ids = select id from main_iteminstance where master_id=33568
child_ids = select id from main_iteminstance where tv_series_id in ids
my_answer = ids + child_ids
```
|
Upon looking at this further: the subquery performs its lookup using the same master\_id value as the parent query. This means the condition on the `tv_series_id` is superfluous, as it will only return results that have already been found in the parent.
Your most simple resultset would be:
```
SELECT *
FROM main_iteminstance
WHERE master_id=33568
```
Unless you're looking for `tv_series_id` to match `id` `where master_id != 33568`
---
## [SQLFiddle](http://sqlfiddle.com/#!9/ad5ab/1)
This example demonstrates where tv\_series\_id matches id's where the master\_id != 33568. If that's not desired, you can remove the OR and everything after.
|
Another way to do it is to join the table on itself:
```
SELECT Master.*
FROM main_iteminstance Master
INNER JOIN main_iteminstance Child
ON Child.master_id = 33568 AND ((Master.id = Child.id) OR (Master.tv_series_id = Child.id))
```
|
How to simplify sql statement
|
[
"",
"mysql",
"sql",
""
] |
I've been trying to find some way to simplify editing/adding a value in the corresponding column inside INSERT clauses when writing SQL in Management Studio 2008 R2.
Assume the following code:
```
INSERT INTO [Schema].[Table]
( [Col1],
[Col2],
[Col3],
[Col4],
[Col5],
[Col6],
[Col7],
[Col8],
[Col9],
[Col10],
[Col11],
[Col12],
[Col13],
[Col14],
[Col15],
[Col16],
[Col17],
[Col18]
)
VALUES
(
232,
0,
NULL,
NULL,
0,
NULL,
NULL,
NULL,
NULL,
NULL,
0,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL
)
```
Say I want to change a value in the 11th column: I have to start counting to see which value corresponds to that column. Perhaps it's not that difficult in this example, but many real-life tables contain a lot of columns. This becomes more difficult if there are a lot of NULLs in the value list, even with line numbers enabled.
Is there a way to have, for example a tooltip at mouse over on the value in the value list to see which column it belongs to?
In my dream world I would right click on the column name in the Insert Into and choose something like "Go to value...".
I'm using the free ApexSQL tools but I haven't found a function that enables me to do it. Are there any similar tools that allows me to do this?
Thanks in advance, and I hope I'm not too ambiguous in what I'm trying to achieve.
|
I'm not aware of any facility to do this, but when I've run into this issue, I usually just add comments to the code, something like:
```
VALUES
(
232, --Col1
0, --Col2
NULL, --Col3
NULL, --Col4
--etc.
)
```
|
I haven't seen an option like this.
Although it requires more typing, I prefer SELECT over VALUES because I can alias the inputs, saving time later. Also SELECT statements can be previewed (by highlighting just that section and pressing F5 - very handy).
Alternatively, you can turn on line numbers in SSMS and calculate the correct row. Actually, scrap that. Doesn't sound very user friendly, does it?
```
INSERT INTO Schema.Table
(
Col1,
Col2,
Col3,
Col4,
Col5
)
SELECT
1 AS Col1,
0 AS Col2,
NULL AS Col3,
1 AS Col4,
NULL AS Col5
;
```
|
How to find column in value list easily in Management Studio?
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
I have a procedure and an anonymous block, where I enter two dates; it searches the tickets table for the tickets that have been resolved between those two dates and outputs them. But I can't seem to figure out how to output multiple rows in PL/SQL.
```
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "SYSTEM.JOBS_DONE", line 9
ORA-06512: at line 8
01422. 00000 - "exact fetch returns more than requested number of rows"
*Cause: The number specified in exact fetch is less than the rows returned.
*Action: Rewrite the query or change number of rows requested
```
```
CREATE OR REPLACE PROCEDURE jobs_done(
month_start IN tickets.date_reported%TYPE,
month_end IN tickets.date_resolved%TYPE,
userid OUT tickets.user_id%TYPE,
supportstaffid OUT tickets.support_staff_id%TYPE,
resolution OUT tickets.resolution_details%TYPE)
AS
BEGIN
SELECT user_id, support_staff_id, resolution_details
INTO userid, supportstaffid, resolution
FROM tickets
WHERE date_resolved >= month_start AND date_resolved <= month_end;
dbms_output.put_line('userid, supportstaffid, resolution');
dbms_output.put_line(userid || supportstaffid || resolution);
END jobs_done;
/
SET serveroutput ON
DECLARE
month_start tickets.date_reported%TYPE := &Enter_date_start;
month_end tickets.date_resolved%TYPE := &Enter_date_end;
userid tickets.user_id%TYPE;
supportstaffid tickets.support_staff_id%TYPE;
resolution tickets.resolution_details%TYPE;
BEGIN
jobs_done(month_start, month_end, userid, supportstaffid, resolution);
END;
/
```
|
Based on new information: your query returns multiple records. With this type of select you can only fetch one row. Try using a cursor. e.g. structure like this:
```
declare
cursor c1 is <select your thing from table>;
begin
for r1 in c1
loop
    dbms_output.put_line (r1.firstcolumn || r1.anothercolumn);
end loop;
end;
```
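Applied to the tickets table from the question, a sketch might look like this (the date values are placeholders; substitute the dates you would otherwise prompt for):

```
DECLARE
  month_start tickets.date_reported%TYPE := DATE '2015-01-01'; -- placeholder
  month_end   tickets.date_resolved%TYPE := DATE '2015-12-31'; -- placeholder
  CURSOR c_resolved IS
    SELECT user_id, support_staff_id, resolution_details
    FROM tickets
    WHERE date_resolved BETWEEN month_start AND month_end;
BEGIN
  -- The cursor FOR loop fetches and prints one row per iteration,
  -- avoiding the ORA-01422 raised by SELECT ... INTO on multiple rows.
  FOR r IN c_resolved LOOP
    dbms_output.put_line(r.user_id || ', ' || r.support_staff_id || ', ' || r.resolution_details);
  END LOOP;
END;
/
```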
|
Try this. Returning a ref cursor is the best way for this kind of resultset:
```
CREATE OR REPLACE PROCEDURE jobs_done(
    month_start IN tickets.DATE_REPORTED%TYPE,
    month_end IN tickets.DATE_RESOLVED%TYPE,
    cur_out OUT sys_refcursor
)
AS
BEGIN
    OPEN cur_out FOR
        select user_id, support_staff_id, resolution_details
        from tickets where DATE_RESOLVED >= month_start AND DATE_RESOLVED <= month_end;
END jobs_done;
```
To execute:
```
var ls refcursor;
EXEC jobs_done(input1, input2, :ls);
print ls;
```
|
PL/SQl sqldeveloper want to output multiple rows in plsql
|
[
"",
"sql",
"oracle",
"plsql",
"select-into",
"ora-01422",
""
] |
I am trying to use a nested SQL statement against PostgreSQL in Talend's tPostgresqlRow\_1 component, but I receive an error.
The following sql runs okay if I run it in PgAdmin.
But in Talend I receive an error.
I am getting the max date from one table and updating the column in another table.
```
update "STG_magento_de"."configuration_table"
set created_at=(select MAX(created_at) from "STG_magento_de"."sales_flat_order_test")
where table_name='sales_flat_order_test'
```
|
The tPostgresqlRow component expects a Java string containing the SQL statement.
The most likely problem is that you have unescaped quotes in the statement. That works fine in pgAdmin because it is valid. To pass the same statement from Talend, you'll have to escape all the quotes in the statement itself. Alternatively, you could try removing the double quotes from the SQL statement.
Remember to enclose the whole thing in quotes, so that it is a proper Java string.
|
The following SQL with escaped quotes worked.
```
"update \"STG_magento_de\".\"configuration_table\"
set created_at=(select MAX(created_at) from \"STG_magento_de\".\"sales_flat_order_test\")
where table_name='sales_flat_order_test'"
```
|
Nested sql statement in talend (updated)
|
[
"",
"sql",
"talend",
""
] |
Consider this setup. Please understand that our setup is much more detailed but this is a simple example.
`competition` which has `name`. This is an annual competition.
`competition_instances` which has `location`, `starts_at`.
Each competition has `sports` which has `name`.
Example:
`competition`.`name`: "Super Bowl" has different `competition_instances` every year but `sport` remains the same.
Conversely, `competition`.`name`: "Olympics" has different `competition_instances` and different `sports` in each `competition_instance`.
Would it be best to create `competition_sports` and `competition_instance_sports` with `competition_instance_sports` as a subclass of `competition_sports`?
**GOAL**: Use `competition_instance_sports` records if they exist, otherwise use the `competition_sports` record. In our real world app, each competition/competition\_instance can have 20-50 sport records. How can we best achieve?
|
I'm just thinking of the case where there are standard sports which are always in the Olympics and some which are added on, such as those proposed by the host country.
I would use [Polymorphic Associations](http://guides.rubyonrails.org/association_basics.html#polymorphic-associations), in a ["reverse manner"](https://gist.github.com/runemadsen/1242485).
```
class Competition < ActiveRecord::Base
has_many :competition_instances
has_many :competition_sports, as: :event
end
class CompetitionInstance < ActiveRecord::Base
belongs_to :competition
has_many :competition_sports, as: :event
def events_array # neater by sacrificing ActiveRecord methods
competition.competition_sports + competition_sports
end
def events # messier, returns ActiveRecord relationship
CompetitionSport.where( " ( event_id = ? AND event_type = 'Competition' ) OR
( event_id = ? AND event_type = 'CompetitionInstance')", competition_id, id )
end
end
class Sport < ActiveRecord::Base
has_many :events, as: :competition_sport
end
class CompetitionSport < ActiveRecord::Base
belongs_to :sport
belongs_to :event, polymorphic: true
end
```
This allows:
```
competition.competition_sports # standard sports
competition_instance.competition_sports # only those specific for this instance
competition_instance.events # includes sports from both
```
|
Based on what I understand from the question I cannot see where STI will be helpful in this situation. However a join table will get you where you want.
I suggest creating a new table `sports`; this model will have all the specific details of each sport. `competition_instance.rb` will have one/many `sport.rb`. `competition.rb` will have many `sports` through `competition_instance.rb`.
**competition.rb**
```
Class Competition < ActiveRecord::Base
has_many :competition_instances
has_many :sports, through: :competition_instances
end
```
**competition\_instance.rb**
```
Class CompetitionInstance < ActiveRecord::Base
belongs_to :competition
belongs_to :sport
end
```
**sport.rb**
```
Class Sport < ActiveRecord::Base
has_many :competition_instances
end
```
By using this design you will be able to achieve the following:
1- You will have your predefined sports in your database along with their specific properties.
2- Each competition will have `.sports` which will give all the sports in this competition for the olympics case.
3- You will be able to set specific properties for each competition instance (example `event_start_time` and `event_end_time`) in the competition instance table.
|
Rails: Is this a use case for Single Table Inheritance (STI)?
|
[
"",
"sql",
"ruby-on-rails",
"database-design",
"database-normalization",
"sti",
""
] |
I seem to be stuck on this and can't find a solution having had a look around.
I have an SQL table whose first row looks something like this:
```
Name Val1 Val2 Val3
John 1000 2000 3000
```
What I need to do is Select the largest value within this row i.e. `3000`
Obviously if these values were in a column rather than row you could just use `SELECT MAX(column) FROM table` to get the largest value in the column. **Is there an equivalent of this for finding the max value in a row?**
I have also had a look at the uses of `PIVOT` and `UNPIVOT` but I don't think they are useful to me here..
The only way I have been able to do it is to create a temp table and insert each value into a single column like so:
```
CREATE TABLE #temp (colvals float)
INSERT INTO #temp (colvals)
SELECT Val1 FROM table WHERE ID=1
UNION
SELECT Val2 FROM table WHERE ID=1
UNION
SELECT Val3 FROM table WHERE ID=1
--------------------------------------------
SELECT MAX(colvals) FROM #temp
--------------------------------------------
DROP TABLE #temp
```
However I feel this is rather slow especially as my table has a lot more columns than the snippet I have shown above.
Any ideas?
Thanks in advance.
|
You can build a reference table for columns by `APPLY` and use native `MAX()`
```
-- Sample Data
declare @data table (Name varchar(10), Val1 int, Val2 int, Val3 int, Val4 int, Val5 int, Val6 int)
insert @data values
('John', 1000, 2000, 3000, 4000, 5000, 6000),
('Mary', 1, 2, 3, 4, 5, 6)
select Name, MaxValue from
@data
cross apply
(
select max(value) as MaxValue
from
(values
(Val1),(Val2),(Val3),(Val4),(Val5),(Val6) -- Append here
) t(value)
) result
```
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7d/646)
|
```
select MAX(case when c1 > c2 and c1 > c3 then c1
when c2 > c3 then c2
else c3
end)
from tablename
```
**Edit:** Modern SQL Server versions have the `GREATEST()` function:
```
select GREATEST(c1, c2, c3)
from tablename
```
<https://learn.microsoft.com/en-us/sql/t-sql/functions/logical-functions-greatest-transact-sql>
|
SQL - Select the largest value within a row
|
[
"",
"sql",
"max",
"ssms",
""
] |
In our application, we have some tables storing staff data. I need to find out whether the same office is allocated to a staff member multiple times. How can I figure this out in SQL Server?
I have tried the query below, but it didn't give me the correct result.
```
SELECT DISTINCT STAFFID, officeid
FROM [stafftable]
WHERE rowstatus = 'A'
GROUP BY STAFFID, officeid
HAVING COUNT(staffid) > 1
AND COUNT(officeid) > 1
ORDER BY STAFFID, officeid
```
It returns the staff members allocated to different offices also. Any help would be greatly appreciated.
|
```
SELECT *
FROM (
SELECT *, RowNum = ROW_NUMBER() OVER (PARTITION BY STAFFID, officeid ORDER BY STAFFID)
FROM dbo.stafftable
WHERE rowstatus = 'A'
) t
WHERE t.RowNum > 1
```
|
You can use `COUNT(*)` to count all rows in each group:
```
SELECT STAFFID, officeid
FROM [stafftable]
WHERE rowstatus = 'A'
GROUP BY STAFFID, officeid
HAVING COUNT(*) > 1
ORDER BY STAFFID, officeid
```
|
Find out the Duplicate Records in Sql-server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
In my query, when I use:
```
CONCAT(GROUP_CONCAT(wu_paqueteproducto.extra1),' ',GROUP_CONCAT(wu_paqueteproducto.producto)) as elproducto
```
I get something like this
```
+--------+---------+----+---------------------------------------------+
| poliza | cct | // | elproducto |
+--------+---------+----+---------------------------------------------+
| 0001 | kjjk565 | // | 1,2,2,6,1 pr001,pr002,pr003,pr004,pr005 |
+--------+---------+----+---------------------------------------------+
```
Is there a way to have it like this?
```
+--------+---------+----+---------------------------------------------+
| poliza | cct | // | elproducto |
+--------+---------+----+---------------------------------------------+
| 0001 | kjjk565 | // | 1 pr001, 2 pr002, 2 pr003, 6 pr004, 1 pr005 |
+--------+---------+----+---------------------------------------------+
```
The table wu\_paqueteproducto have this data:
```
| ids | extra1 | producto |
+------+----------+----------+
| 0001 | 1 | pr001 |
| 0002 | 2 | pr002 |
| 0003 | 2 | pr003 |
| 0004 | 6 | pr004 |
| 0005 | 1 | pr005 |
+------+----------+----------+
```
This is my full query
```
SELECT
wu_polizas.poliza,
wu_polizas.cct,
wu_polizas.paquete,
wu_escuelas.director,
wu_escuelas.localidad,
wu_escuelas.nombre_mun,
wu_escuelas.nombre,
wu_facturashijo.empresa,
wu_bancos.banco as nombrebanco,
CONCAT(GROUP_CONCAT(wu_paqueteproducto.extra1),' ',GROUP_CONCAT(wu_paqueteproducto.producto)) as elproducto
FROM
wu_polizas
INNER JOIN wu_escuelas ON (wu_polizas.cct=wu_escuelas.cct)
INNER JOIN wu_facturashijo ON (wu_polizas.factura=wu_facturashijo.clave)
INNER JOIN wu_bancos ON (wu_polizas.banco=wu_bancos.ids)
INNER JOIN wu_paqueteproducto ON (wu_polizas.paquete=wu_paqueteproducto.paquete)
WHERE
wu_polizas.factura=002
GROUP BY
poliza
ORDER BY
poliza
```
|
Put `CONCAT` inside `GROUP_CONCAT` instead of the other way around:
```
GROUP_CONCAT(CONCAT(wu_paqueteproducto.extra1, ' ', wu_paqueteproducto.producto)) AS elproducto
```
|
I have written a solution on [SQL Fiddle](http://sqlfiddle.com/#!9/0426f/2) that you can translate easily to your query:
**MySQL 5.6 Schema Setup**:
```
create table t ( a char(1), b char(1), i int);
insert into t values
( '1', 'a', 1 ),
( '2', 'b', 1 ),
('8', 'x', 2),
('9', 'y', 2);
```
**Query**:
```
select GROUP_CONCAT( CONCAT_WS(' ', a, b ))
from t
group by i
```
**[Results](http://sqlfiddle.com/#!9/0426f/2/0)**:
```
| GROUP_CONCAT( CONCAT_WS(' ', a, b )) |
|--------------------------------------|
| 1 a,2 b |
| 8 x,9 y |
```
|
How do I group columns into a field containing the data?
|
[
"",
"mysql",
"sql",
""
] |
Table @t1 is the master table that should have the summed values of table @t2 after an import.
```
DECLARE @t1 TABLE (
typ int,
total float
);
insert into @t1 (typ,total) values(1,30.0)
insert into @t1 (typ,total) values(2,70.0)
insert into @t1 (typ,total) values(3,99.9)
DECLARE @t2 TABLE (
typ int,
value float
);
insert into @t2 (typ,value) values(1, 10.0)
insert into @t2 (typ,value) values(1, 20.0)
insert into @t2 (typ,value) values(2, 30.0)
insert into @t2 (typ,value) values(2, 40.0)
insert into @t2 (typ,value) values(3, 50.0)
select
t1.typ,
t1.total,
t2.typ,
t2.value,
case when total = value then 'TRUE' else 'FALSE' end as Result
from @t1 t1
left join @t2 t2 on t2.typ = t1.typ
```
Results in:
```
typ|total|typ|value|Result
1 |30 |1 |10 |FALSE
1 |30 |1 |20 |FALSE
2 |70 |2 |30 |FALSE
2 |70 |2 |40 |FALSE
3 |99,9 |3 |50 |FALSE
```
Sure, the Result is always 'FALSE' because t2.value is not summed yet.
My first idea is this:
```
select
t1.typ,
t1.total,
-- t2.typ,
(select sum(t2.value)
from @t2 t2
where t1.typ = t2.typ
group by typ) as Summed,
case when total = (select sum(t2.value)
from @t2 t2
where t1.typ = t2.typ
group by typ) then 'TRUE' else 'FALSE' end as Result
from @t1 t1
left join @t2 t2 on t2.typ = t1.typ
```
but I get this
```
typ|total|Summed|Result
1 |30 |30 |TRUE
1 |30 |30 |TRUE
2 |70 |70 |TRUE
2 |70 |70 |TRUE
3 |99,9 |50 |FALSE
```
The correct result has to look so:
```
typ|total|Summed|Result
1 |30 |30 |TRUE
2 |70 |70 |TRUE
3 |99,9 |50 |FALSE
```
I would be glad to get a reply to that question.
|
This would normally be done with a `left join`:
```
select t1.typ, t1.total, t2.typ, t2.sumvalue,
(case when t1.total = t2.sumvalue then 'TRUE' else 'FALSE' end) as Result
from @t1 t1 left join
(select typ, sum(t2.value) as sumvalue
from @t2 t2
group by typ
) t2
on t2.typ = t1.typ;
```
`outer apply` is fine, but this uses standard SQL.
|
You can use `OUTER APPLY` for this:
```
SELECT t1.typ, t1.total, x.Summed,
CASE
WHEN x.Summed = t1.total THEN 'TRUE'
ELSE 'FALSE'
END
FROM @t1 AS t1
OUTER APPLY (
SELECT SUM(value) AS Summed
FROM @t2 AS t2
WHERE t1.typ = t2.typ ) AS x
```
|
Comparing summed values against the master table
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a database query question.
My USER table set up is like this..
```
Id | PIN | biz_id | active_ind
--- | ---------------------------
1 | 123 | NULL | Y
2 | 456 | 1 | Y
3 | 789 | NULL | N
4 | 012 | 2 | Y
```
and my USER\_PURCHASE table is like this
```
id user_id date amount
---|----------|------------|--------|
1 | 1 | 2014-04-03 | 5 |
2 | 2 | 2015-03-04 | 5 |
5 | 3 | 2014-04-03 | 6 |
3 | 4 | 2015-03-03 | 6 |
4 | 2 | 2015-03-04 | 7 |
6 | 3 | 2013-03-03 | 7 |
7 | 4 | 2013-12-24 | 8 |
8 | 4 | 2013-01-01 | 8 |
```
I am trying to write a query that will return user.id, user.pin, user\_purchase.date for ACTIVE users who have NOT made a purchase within the past 13 months.
I have tried to write the query as follows:
```
SELECT max( date ) AS mdate, `user_id` , `users`.`pin`
FROM `users_purchases`
INNER JOIN `users` ON `users_purchases`.`user_id` = `users`.`id`
WHERE `users`.`active_ind` = 'Y'
AND `users`.`biz_id` = NULL
GROUP BY `users`.`pin`
HAVING mdate < DATE_SUB( CURDATE( ) , INTERVAL 13
MONTH )
```
but I suspect that is not the correct way of doing it.
Can someone please help with this and MOST IMPORTANTLY explain why the correct way of doing it is in fact correct - I would really like to understand this. THANKS!
|
```
SELECT
u.id,
u.pin,
max(up.date) mdate
FROM
User u
LEFT JOIN User_Purchase up ON u.Id = up.user_id
WHERE
u.active_ind = 'Y'
and u.biz_id IS NULL
GROUP BY
u.id,
u.pin
HAVING
mdate IS NULL or
mdate < DATE_SUB( CURDATE( ) , INTERVAL 13 MONTH )
```
you can leave off the `mdate IS NULL or` if you only want users who have actually made a purchase +13 months ago
[SQL Fiddle](http://sqlfiddle.com/#!9/8ac4f8/3)
|
It's not correct because the inner join on users\_purchases automatically filters out users who have never made any purchase.
```
select distinct a.id, a.pin
from users a
left join users_purchases b on a.id = b.user_id and b.date >= curdate() - interval 13 month
where a.active_ind = 'Y' and a.biz_id is null and b.id is null;
```
Here we left join only the purchases made within the past 13 months. There will be a row for every user due to the left join. For users who have made any purchase within 13 months, users\_purchases.id will not be null, so we filter them out by keeping only the rows where `b.id is null` in the where clause.
|
MYSQL getting a user with latest purchase
|
[
"",
"mysql",
"sql",
"select",
"join",
"greatest-n-per-group",
""
] |
I have two tables:
Table 1:
```
Id
1232
1344
1313
4242
3242
555
```
Table 2:
```
Id sym mnth code
1232 9 1 32
1344 15 1 14
1313 10 1 32
4242 11 1 32
3242 9 1 32
1232 9 2 32
1344 13 2 14
1313 9 2 32
4242 10 2 32
3242 9 2 32
```
I want to check if all the id's in table 1 have the value 9 in sym for all the month (1,2 in the example) but only for those id's which for them the code in table 2 is '32'.
If not return me the id and the months for which the 9 is missing separate by a comma. If Id in table 1 doesn't exists at all in table 2 return null in the month column and the id.
The output in the example should be:
```
ID month
1313 1
4242 1,2
555 NULL
1344 doesn't exist because the code column for it is not 32.
```
I started writing this:
```
SELECT table1.id
FROM table1
WHERE not EXISTS (SELECT id FROM table2
WHERE table2.sml = '9' AND table2.code = 32)
```
But I really don't know how to make the query run for all month and plug the results like I've mentioned in the output. Any help?
Thank you!
|
* Here I create a CTE to find out which months are missing.
* I had to create a derived table `months` to include all the months.
* Then I perform a left join, so either no matching item exists in table 2, or the item that does exist is the wrong one.
* Once I have all the missing links, I create the string using `XML PATH`
+ [Concatenate many rows into a single text string?](https://stackoverflow.com/questions/194852/concatenate-many-rows-into-a-single-text-string)
[**Sql Fiddle Demo**](http://sqlfiddle.com/#!6/b24e8/4)
```
WITH cte as (
SELECT t1.*, t2.sym, t2.mnth, t2.code
FROM Table1 t1
CROSS JOIN (select 1 month_id union select 2) months
LEFT JOIN Table2 t2
ON t2.[id] = t1.id
AND t2.[mnth] = months.month_id
WHERE ([code] = 32 OR [code] is NULL)
AND ([sym] <> 9 OR [sym] is NULL)
), MAIN as (
SELECT DISTINCT c2.Id, (SELECT c1.mnth + ',' as [text()]
FROM cte c1
WHERE c1.Id = c2.Id
ORDER BY c1.Id
For XML PATH ('')
) [months]
FROM cte c2
)
SELECT Id,
IIF( len([months]) > 0,
LEFT ([months], len([months])-1),
NULL) as [months]
FROM Main
```
**OUTPUT**
```
| Id | months |
|------|--------|
| 555 | (null) |
| 1313 | 1 |
| 4242 | 1,2 |
```
|
You have a lot of conditions in your request that don't complement each other so well, so the query is likely going to look a bit messy, and probably perform slowly. You'll need a way to combine results into your comma-delimited list. SQL Server doesn't have a built-in string concatenation aggregate function, so you'll need to work on something [similar to this other question](https://stackoverflow.com/questions/5031204/does-t-sql-have-an-aggregate-function-to-concatenate-strings) in order to get the `month` output you are after.
What I've come up with that gives you the results you are after is:
```
SELECT t1.id, t2.[month]
FROM Table1 t1
OUTER APPLY (
SELECT stuff((SELECT ', ' + convert(varchar, mnth)
FROM Table2
WHERE id = t1.id and sym <> 9 and code = 32
ORDER BY mnth ASC
for xml path('')
),1,2,'') as [month]
) t2
WHERE
id in (SELECT id FROM Table2 WHERE sym <> 9 and code = 32)
or id not in (SELECT id FROM Table2);
```
Note that I added the `ORDER BY mnth ASC` line so that the result `month` field has the non-9-sym months in logical order. If you'd rather see the order they appear in the table, just remove this line.
**Edit:** Removed the initial "thinking out loud" answer and left just the actual solution to prevent confusion.
|
Check if all ID's in a Column have a specific value in another column, different tables
|
[
"",
"sql",
"sql-server",
"select",
"exists",
""
] |
I am quite new to SQL Server. I am trying to run the following query:
```
SELECT AVG( jl.[Stock Weight] )
FROM [Settlement Line] jl
WHERE jl.[Vendor No_] = 8516
AND ( jl.[Slaughter Date] BETWEEN '2015-11-01' AND '2015-11-30' )
AND ( CAST( jl.[Item No_] AS INT ) BETWEEN 17000 AND 17099 )
```
As a result, I get this error:
> OperationalError: (245, "Conversion failed when converting the varchar
> value 'PRISGAR.'
I cannot figure this out. I also can't figure out what `PRISGAR` is. I have tried with `CONVERT` as well and got the same error. The field `[Item No_]` is a `varchar(20)` which I am trying to convert into `INT` and check between two numbers.
|
It is unclear from the error whether the error is for `[Vendor No_]` or `[Item No_]` or `jl.[Stock Weight]`. SQL is strongly typed, but does type conversions. You should try to do the comparisons using the same type and not to store numbers as strings.
If the problem is the average, then you can use a `case`. Something like:
```
SELECT AVG(CASE WHEN ISNUMERIC(jl.[Stock Weight]) = 1
THEN CAST(jl.[Stock Weight] as FLOAT)
END)
```
For the comparison logic, this suggests using the same type as the column for the constant:
```
j1.[Vendor No_] = '8516'
```
The `[Item No_]` is a bit more troublesome. It is tempting to write:
```
jl.[Item No_] BETWEEN '17000' AND '17099'
```
However, this would match '170019991234545', because the comparisons would be done as strings, not numbers. Another method is:
```
(jl.[Item No_] BETWEEN '17000' AND '17099' AND LEN(jl.[Item No_]) = 5)
```
However, this would match '1702A'. Yet another can use `LIKE` to ensure the values are all digits:
```
(jl.[Item No_] BETWEEN '17000' AND '17099' AND
jl.[Item No_] LIKE '[0-9][0-9][0-9][0-9][0-9]'
)
```
And this can be simplified to:
```
(jl.[Item No_] LIKE '170[0-9][0-9]')
```
|
Try this one:
```
SELECT AVG(jl.[Stock Weight])
FROM dbo.[Settlement Line] jl
WHERE jl.[Vendor No_] = 8516
AND jl.[Slaughter Date] BETWEEN '20151101' AND '20151130'
AND jl.[Item No_] BETWEEN '17000' AND '17099'
```
|
SQL Server error "Conversion failed" while using CAST or CONVERT
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Guys, I'm having issues with this query of mine. I have done what I can and now it's coming towards the end. If I run the two parts separately it works perfectly, but once I select the whole thing to run as one piece it gives me an error about the temp table already existing, even though I check whether the temp table exists and drop it at the end and the beginning of each "batch", as I would call it.
I don't really know which piece of the query to post, so I'm just going to post the whole thing. If someone can give me insight into why it is doing this, along with other tips about anything you might see me doing wrong, I'd appreciate it.
```
Use test
IF OBJECT_ID('tempdb..#TEMP') IS NOT NULL DROP TABLE #TEMP
IF OBJECT_ID('MetricsServerAudit') IS NOT NULL
BEGIN
CREATE TABLE #TEMP ([TIME] nvarchar(max) NULL,[DATE] nvarchar(max) NULL,[USER_LOGIN] nvarchar(max) NULL,[USER_NAME] nvarchar(max) NULL,[MODEL_NAME] nvarchar(max) NULL,[SCORECARD_IDENTIFIER] nvarchar(max) NULL, [SCORECARD_NAME] nvarchar(max) NULL,[ELEMENT_IDENTIFIER] nvarchar(max) NULL,[ELEMENT_NAME] nvarchar(max) NULL,[SERIES_IDENTIFIER] nvarchar(max) NULL,[SERIES_NAME] nvarchar(max) NULL,[PERIOD_NAME] nvarchar(max) NULL,[ACTION_TYPE] nvarchar(max) NULL,[ACTION] nvarchar(max) NULL,[PREVIOUS_VALUE] nvarchar(max) NULL,[VALUE] nvarchar(max) NULL,[UNIT] nvarchar(max) NULL)
BULK INSERT #TEMP FROM 'C:\QPR_Logs\Audit\MetricsServerAudit.txt'
WITH (FIELDTERMINATOR ='\t', ROWTERMINATOR = '\r', FIRSTROW = 2, KEEPNULLS)
UPDATE #TEMP SET [DATE]= REPLACE(CONVERT(VARCHAR(11),[DATE],103),'/' ,'-')
ALTER TABLE #TEMP ALTER COLUMN [DATE] DATE
UPDATE #TEMP SET [TIME] = '12:00:00' Where [TIME] = ''
UPDATE #TEMP SET [TIME] = REPLACE(REPLACE(REPLACE([TIME], CHAR(10), ''), CHAR(13), ''), CHAR(9), '')
UPDATE #TEMP SET [TIME] = REPLACE([TIME], '/', ':')
UPDATE #TEMP SET [TIME] = left([TIME], 8)
UPDATE #TEMP SET [DATE] = '2015-01-01' Where [DATE] is null
INSERT INTO [dbo].[MetricsServerAudit]([DateStamp],[TIME],[DATE],[USER_LOGIN],[USER_NAME],[MODEL_NAME],[SCORECARD_IDENTIFIER],[SCORECARD_NAME],[ELEMENT_IDENTIFIER],[ELEMENT_NAME],[SERIES_IDENTIFIER],[SERIES_NAME],[PERIOD_NAME],[ACTION_TYPE],[ACTION],[PREVIOUS_VALUE],[VALUE],[UNIT])
SELECT CONCAT([DATE],'', [TIME]) AS [DateStamp], [TIME],[DATE],[USER_LOGIN],[USER_NAME],[MODEL_NAME],[SCORECARD_IDENTIFIER],[SCORECARD_NAME],[ELEMENT_IDENTIFIER],[ELEMENT_NAME],[SERIES_IDENTIFIER],[SERIES_NAME],[PERIOD_NAME],[ACTION_TYPE],[ACTION],[PREVIOUS_VALUE],[VALUE],[UNIT]
FROM #TEMP
WHERE NOT EXISTS(SELECT [TIME] FROM [dbo].[MetricsServerAudit] WHERE [TIME] = [TIME])
DROP TABLE #TEMP
END
Else --SEPERATOR
IF OBJECT_ID('tempdb..#TEMP') IS NOT NULL DROP TABLE #TEMP
IF OBJECT_ID('MetricsServerAudit') IS NULL
BEGIN
CREATE TABLE MetricsServerAudit ([DateStamp] nvarchar(max) NULL, [TIME] nvarchar(max) NULL,[DATE] date NULL,[USER_LOGIN] nvarchar(max) NULL,[USER_NAME] nvarchar(max) NULL,[MODEL_NAME] nvarchar(max) NULL,[SCORECARD_IDENTIFIER] nvarchar(max) NULL,[SCORECARD_NAME] nvarchar(max) NULL,[ELEMENT_IDENTIFIER] nvarchar(max) NULL,[ELEMENT_NAME] nvarchar(max) NULL,[SERIES_IDENTIFIER] nvarchar(max) NULL,[SERIES_NAME] nvarchar(max) NULL,[PERIOD_NAME] nvarchar(max) NULL,[ACTION_TYPE] nvarchar(max) NULL,[ACTION] nvarchar(max) NULL,[PREVIOUS_VALUE] nvarchar(max) NULL,[VALUE] nvarchar(max) NULL,[UNIT] nvarchar(max) NULL)
END
IF OBJECT_ID('tempdb..#TEMP') IS NULL
BEGIN
CREATE TABLE #TEMP ([TIME] nvarchar(max) NULL,[DATE] nvarchar(max) NULL,[USER_LOGIN] nvarchar(max) NULL,[USER_NAME] nvarchar(max) NULL,[MODEL_NAME] nvarchar(max) NULL,[SCORECARD_IDENTIFIER] nvarchar(max) NULL, [SCORECARD_NAME] nvarchar(max) NULL,[ELEMENT_IDENTIFIER] nvarchar(max) NULL,[ELEMENT_NAME] nvarchar(max) NULL,[SERIES_IDENTIFIER] nvarchar(max) NULL,[SERIES_NAME] nvarchar(max) NULL,[PERIOD_NAME] nvarchar(max) NULL,[ACTION_TYPE] nvarchar(max) NULL,[ACTION] nvarchar(max) NULL,[PREVIOUS_VALUE] nvarchar(max) NULL,[VALUE] nvarchar(max) NULL,[UNIT] nvarchar(max) NULL)
BULK INSERT #TEMP FROM 'C:\QPR_Logs\Audit\MetricsServerAudit.txt'
WITH (FIELDTERMINATOR ='\t', ROWTERMINATOR = '\r', FIRSTROW = 2, KEEPNULLS)
UPDATE #TEMP SET [DATE]= REPLACE(CONVERT(VARCHAR(11),[DATE],103),'/' ,'-')
ALTER TABLE #TEMP ALTER COLUMN [DATE] DATE
UPDATE #TEMP SET [TIME] = '12:00:00' Where [TIME] = ''
UPDATE #TEMP SET [TIME] = REPLACE(REPLACE(REPLACE([TIME], CHAR(10), ''), CHAR(13), ''), CHAR(9), '')
UPDATE #TEMP SET [TIME] = REPLACE([TIME], '/', ':')
UPDATE #TEMP SET [TIME] = left([TIME], 8)
UPDATE #TEMP SET [DATE] = '2015-01-01' Where [DATE] is null
INSERT INTO [dbo].[MetricsServerAudit]([TIME],[DATE],[USER_LOGIN],[USER_NAME],[MODEL_NAME],[SCORECARD_IDENTIFIER],[SCORECARD_NAME],[ELEMENT_IDENTIFIER],[ELEMENT_NAME],[SERIES_IDENTIFIER],[SERIES_NAME],[PERIOD_NAME],[ACTION_TYPE],[ACTION],[PREVIOUS_VALUE],[VALUE],[UNIT])
SELECT [TIME],[DATE],[USER_LOGIN],[USER_NAME],[MODEL_NAME],[SCORECARD_IDENTIFIER],[SCORECARD_NAME],[ELEMENT_IDENTIFIER],[ELEMENT_NAME],[SERIES_IDENTIFIER],[SERIES_NAME],[PERIOD_NAME],[ACTION_TYPE],[ACTION],[PREVIOUS_VALUE],[VALUE],[UNIT]
FROM #TEMP
DROP TABLE #TEMP
UPDATE [dbo].[MetricsServerAudit] SET [DateStamp] = CONCAT([DATE],'', [TIME])
UPDATE [dbo].[MetricsServerAudit] SET [DateStamp] = REPLACE([DateStamp], '/', ':')
UPDATE [dbo].[MetricsServerAudit] SET [DateStamp] = CONVERT(datetime,(SUBSTRING([DateStamp],1,10)+' '+SUBSTRING([DateStamp],11,15)))
ALTER TABLE [dbo].[MetricsServerAudit] ALTER COLUMN [DateStamp] DATETIME
END
```
The `Else` is the separator I'm talking about, so everything above it is "Batch1"
and everything below it is "Batch2".
The error I get is :
```
Msg 2714, Level 16, State 1, Line 36
There is already an object named '#TEMP' in the database.
```
|
`ELSE` is not a batch separator, and any temporary tables you use and don't drop will throw this error.
Unless you've specifically changed the batch separator, you should use `GO` in-between your batches. That way any temporary tables will be dropped between batch separation.
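A minimal sketch of the shape (the temp table definition is abbreviated here, and note the IF/ELSE branching would have to move inside the batches, since `GO` cannot appear in the middle of an IF block):

```
-- Batch 1
IF OBJECT_ID('tempdb..#TEMP') IS NOT NULL DROP TABLE #TEMP
CREATE TABLE #TEMP ([TIME] nvarchar(max) NULL /* ...remaining columns... */)
-- ...bulk insert, updates, final insert...
DROP TABLE #TEMP
GO

-- Batch 2: #TEMP from batch 1 is out of scope here, so this CREATE is valid
CREATE TABLE #TEMP ([TIME] nvarchar(max) NULL /* ...remaining columns... */)
-- ...batch 2 work...
DROP TABLE #TEMP
GO
```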
|
T-SQL is declarative: when you execute the SQL, the first thing the optimiser does is look at your batch and create an execution plan (before anything actually executes). So, to the optimiser you do indeed have two `CREATE TABLE #TEMP` statements, which is invalid. You can't even use dynamic SQL here, because the scope of a # table is the `EXEC('')` statement - you could use a global temporary table (##) with `EXEC('')`. That, or simply use two different table names.
|
Can not run the whole T-SQL query but in parts I can
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two tables and I need to compare their data and update/insert one table's records. What I am trying to do is take each record from Table1,
use a split function, and then, for each text in the split, compare the dataelement field between these tables. We are syncing the data in Table2 to match Table1.
Please let me know how this can be done. I am OK using a cursor or MERGE. This is the scenario:
```
DataTable:
dataId dataelement
1 Check
2 System
3 Balances
4 City
5 State
6 Zip
7 Other
Table1:
Id reqId dataelementValues
1 52 Check
2 52 City;State;System
3 52 Other
Table2:
elId dataId dataelement reqId Active
1 6 Zip 52 1
2 1 Check 52 1
3 4 city 52 1
4 5 State 52 1
```
Outcome Should be similar to after compare in table2
```
Table2:
elId dataId dataelement reqId Active
1 6 Zip 52 0 (Should be set to inactive as it exists in table2 but not in table1)
2 1 Check 52 1 (NO Updates as it exists in both the tables)
3 4 city 52 1 (NO Updates as it exists in both the tables)
4 5 State 52 1 (NO Updates as it exists in both the tables)
5 2 System 52 1 (Get the dataid for system from datatable and insert in table2 as it exists in table1 but not in table2)
6 7 Other 52 1 (Get the dataid for other from datatable and insert in table2 as it exists in table1 but not in table2)
```
This is where I am at; I'm not sure how to set rows inactive in Table2.
```
WHILE Exists(Select * from #Table1)
BEGIN
Select @currentId = Id, @dataValue = dataelementValues FROM #Table1 where rowID=(SELECT top 1 rowID from #Table1 order by rowID asc)
SET @pos = 0
SET @len = 0
WHILE CHARINDEX(';', @dataValue, @pos+1)>0
BEGIN
SET @dataValueValue = SUBSTRING(@dataValue, @pos, CHARINDEX('|', @dataValue, @pos+1) - @pos)
SET @glbaDEId = (Select DataTable.dataId from datatable where dataelement = @dataValue)
IF NOT Exists (Select * from #Table2 Where DataElement=@dataValue)
BEGIN
--Insert into table2
END
SET @pos = CHARINDEX('|', @dataValue, @pos+@len) +1
END
DELETE from #Table1 where rowID=(SELECT top 1 rowID from #Table1 order by rowID asc )
END
```
|
You can try using a MERGE statement with a few other tricks.
[Merge Guide](https://www.simple-talk.com/sql/learn-sql-server/the-merge-statement-in-sql-server-2008/)
```
-- Create a CTE that will split out the combined column and join to DataTable
-- to get the dataId
;WITH cteTable1Split AS
(
SELECT reqId, dt.* FROM
(
SELECT
[dataelement] = y.i.value('(./text())[1]', 'nvarchar(4000)'),
reqId
FROM
(
-- use xml to split column
-- http://sqlperformance.com/2012/07/t-sql-queries/split-strings
SELECT x = CONVERT(XML, '<i>'
+ REPLACE([dataelementValues], ';', '</i><i>')
+ '</i>').query('.'),
reqId
FROM Table1
) AS a CROSS APPLY x.nodes('i') AS y(i)
) a
JOIN DataTable dt ON dt.[dataelement] = a.[dataelement]
)
-- Merge Table2 with the CTE
MERGE INTO Table2 AS Target
USING cteTable1Split AS Source
ON Target.[dataelement] = Source.[dataelement]
-- If exists in Target (Table2) but not Source (CTE) then UPDATE Active flag
WHEN NOT MATCHED BY Source THEN
UPDATE SET ACTIVE = 0
-- If exists in Source (CTE) but not Target (Table2) then INSERT new record
WHEN NOT MATCHED BY TARGET THEN
INSERT ([dataId], [dataelement], [reqId], [Active])
VALUES (SOURCE.[dataId], SOURCE.[dataelement], SOURCE.[reqId], 1);
```
[**SQL Fiddle**](http://sqlfiddle.com/#!3/b5032/1)
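For readers outside SQL Server: `MERGE` is T-SQL-specific, but the two sync rules it encodes here (deactivate target rows missing from the source; insert source rows missing from the target) can be expressed as two plain statements. A minimal sketch using SQLite through Python's `sqlite3`, with hypothetical table names and the string-splitting step assumed already done:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE source_elems (dataelement TEXT);   -- already-split Table1 values
CREATE TABLE target_elems (dataelement TEXT, active INTEGER);
INSERT INTO source_elems VALUES ('Check'), ('City'), ('State'), ('System'), ('Other');
INSERT INTO target_elems VALUES ('Zip', 1), ('Check', 1), ('City', 1), ('State', 1);
""")

# Rule 1: deactivate target rows that no longer appear in the source.
cur.execute("""
UPDATE target_elems SET active = 0
WHERE dataelement NOT IN (SELECT dataelement FROM source_elems)
""")

# Rule 2: insert source rows that are missing from the target.
cur.execute("""
INSERT INTO target_elems (dataelement, active)
SELECT dataelement, 1 FROM source_elems
WHERE dataelement NOT IN (SELECT dataelement FROM target_elems)
""")

rows = dict(cur.execute("SELECT dataelement, active FROM target_elems"))
print(rows)
```

In SQL Server the single `MERGE` above remains preferable, since it evaluates the source once and applies both rules atomically.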
|
You've not mentioned whether you have control over the structure of these tables, so I'm going to go ahead and suggest you **redesign Table1 to normalise the dataelementValues column**.
That is, instead of this:
```
Table1:
Id reqId dataelementValues
1 52 Check
2 52 City;State;System
3 52 Other
```
You should be storing this:
```
Table1_New:
Id reqId dataelementValues
1 52 Check
2 52 City
2 52 State
2 52 System
3 52 Other
```
You may also need a new, surrogate, primary key column on the table, using an `IDENTITY(1,1)` specification.
Storing your data like this is how relational databases are intended to be used/designed. As well as simplifying the problem at hand right now, you might find it removes potential problems in the future as well.
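To illustrate the payoff of this design: once the values are stored one per row, finding what to insert or deactivate is a plain set difference, with no string splitting at all. A small sketch (hypothetical, simplified tables) using SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1_new (id INTEGER, reqId INTEGER, dataelementValue TEXT);
INSERT INTO table1_new VALUES
  (1, 52, 'Check'), (2, 52, 'City'), (2, 52, 'State'),
  (2, 52, 'System'), (3, 52, 'Other');
CREATE TABLE table2 (dataelement TEXT, reqId INTEGER, active INTEGER);
INSERT INTO table2 VALUES ('Zip', 52, 1), ('Check', 52, 1),
                          ('City', 52, 1), ('State', 52, 1);
""")

# Elements to add: present in the normalised Table1 but not yet in Table2.
to_add = [r[0] for r in cur.execute("""
    SELECT dataelementValue FROM table1_new WHERE reqId = 52
    EXCEPT
    SELECT dataelement FROM table2 WHERE reqId = 52
""")]

# Elements to deactivate: in Table2 but no longer in Table1.
to_deactivate = [r[0] for r in cur.execute("""
    SELECT dataelement FROM table2 WHERE reqId = 52
    EXCEPT
    SELECT dataelementValue FROM table1_new WHERE reqId = 52
""")]
print(to_add, to_deactivate)
```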
|
Loop through table data and compare using split
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Here is my stored procedure, used for printing purposes. I am facing an issue: I get each record twice due to the commented-out join. I want to get the output shown below. I am a novice with databases, so help in this regard will be appreciated.
```
DECLARE @path AS VARCHAR(100)
SET @path = N'~/BillAttachments/166/'
DECLARE @billID AS INT
SET @billID = 166
SELECT bh.billID,bh.statusID,bh.modifiedOn,bs.statusName
FROM dbo.eInvoice_tbl_billHistory AS bh
--INNER JOIN dbo.eInvoice_tbl_billAttachment AS ba ON bh.billID = ba.billID
--INNER JOIN dbo.eInvoice_tbl_billAttachment AS ba ON ba.billID = bh.billID
INNER JOIN dbo.eInvoice_tbl_billStatus AS bs ON bh.statusID = bs.statusID
WHERE bh.billID=@billID;
```
when i try to fetch these columns from the required tables
```
ba.attachmentID ,
@path + ba.fileName AS asad ,
ba.attachmentDescription ,
ba.billAttachmentStatus,
```
Output is:
```
166 3 2015-11-30 11:44:18.663 Approve
166 3 2015-11-30 11:44:18.663 Approve
166 5 2015-11-30 11:44:42.600 Paid
166 5 2015-11-30 11:44:42.600 Paid
```
but desired output is:
```
166 3 2015-11-30 11:44:18.663 Approve
166 5 2015-11-30 11:44:42.600 Paid
```
|
Use [`DISTINCT`](https://msdn.microsoft.com/en-us/library/ms176104.aspx):
> Specifies that only unique rows can appear in the result set
```
SELECT DISTINCT bh.billID,bh.statusID,bh.modifiedOn,bs.statusName
FROM dbo.eInvoice_tbl_billHistory AS bh
--INNER JOIN dbo.eInvoice_tbl_billAttachment AS ba ON bh.billID = ba.billID
--INNER JOIN dbo.eInvoice_tbl_billAttachment AS ba ON ba.billID = bh.billID
INNER JOIN dbo.eInvoice_tbl_billStatus AS bs ON bh.statusID = bs.statusID
WHERE bh.billID=@billID;
```
Other method is to use `GROUP BY`:
```
SELECT bh.billID,bh.statusID,bh.modifiedOn,bs.statusName
FROM dbo.eInvoice_tbl_billHistory AS bh
--INNER JOIN dbo.eInvoice_tbl_billAttachment AS ba ON bh.billID = ba.billID
--INNER JOIN dbo.eInvoice_tbl_billAttachment AS ba ON ba.billID = bh.billID
INNER JOIN dbo.eInvoice_tbl_billStatus AS bs ON bh.statusID = bs.statusID
WHERE bh.billID=@billID
GROUP BY bh.billID,bh.statusID,bh.modifiedOn,bs.statusName;
```
|
A different approach if your rule is getting more complex which row you want to keep using a CTE and the [`ROW_NUMBER`](https://msdn.microsoft.com/en-us/library/ms186734.aspx) ranking function:
```
WITH CTE AS
(
SELECT bh.billID,
bh.statusID,
bh.modifiedOn,
bs.statusName,
         rn = ROW_NUMBER() OVER (PARTITION BY bh.billID, bh.statusID ORDER BY bh.modifiedOn ASC)
FROM dbo.eInvoice_tbl_billHistory AS bh
INNER JOIN dbo.eInvoice_tbl_billStatus AS bs
ON bh.statusID = bs.statusID
    WHERE bh.billID = @billID
)
SELECT billID, statusID, modifiedOn, statusName
FROM CTE
WHERE RN = 1
```
In this example i keep the first row of each `BillId`+`StatusID` combination according to the `modifiedOn` datetime.
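This `ROW_NUMBER` pattern is portable to any engine with window functions (SQLite needs 3.25+). A runnable sketch of the same keep-first-row-per-group idea, using Python's `sqlite3` and illustrative data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE bill_history (billID INT, statusID INT, modifiedOn TEXT, statusName TEXT);
INSERT INTO bill_history VALUES
  (166, 3, '2015-11-30 11:44:18', 'Approve'),
  (166, 3, '2015-11-30 11:44:18', 'Approve'),
  (166, 5, '2015-11-30 11:44:42', 'Paid'),
  (166, 5, '2015-11-30 11:44:42', 'Paid');
""")

# Rank rows inside each billID/statusID group, then keep only rank 1.
rows = cur.execute("""
WITH ranked AS (
  SELECT billID, statusID, modifiedOn, statusName,
         ROW_NUMBER() OVER (PARTITION BY billID, statusID
                            ORDER BY modifiedOn ASC) AS rn
  FROM bill_history
)
SELECT billID, statusID, statusName FROM ranked WHERE rn = 1
ORDER BY statusID
""").fetchall()
print(rows)
```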
|
How to avoid duplicate records while retrieving using Inner Join in SQL Server
|
[
"",
"sql",
"sql-server",
"database",
"stored-procedures",
"inner-join",
""
] |
Need help figuring out how to determine if the date is the same 'day' as today in teradata. IE, today `12/1/15 Tuesday`, same day last year was actually `12/2/2014 Tuesday`.
I tried using `current_date - INTERVAL'1'Year` but it returns `12/1/2014`.
|
You can do this with a bit of math if you can convert your current date's "Day of the week" to a number, and the previous year's "Day of the week" to a number.
In order to do this in Teradata your best bet is to utilize the `sys_calendar.calendar` table. Specifically the `day_of_week` column. Although there are [other ways](http://teradatadba.blogspot.com/2010/04/day-of-week-sql.html) to do it.
Furthermore, instead of using `CURRENT_DATE - INTERVAL '1' YEAR`, it's a good idea to use `ADD_MONTHS(CURRENT_DATE, -12)` since `INTERVAL` arithmetic will fail on `2012-02-29` and other Feb 29th leap year dates.
So, putting it together you get what you need with:
```
SELECT
ADD_MONTHS(CURRENT_DATE, -12)
+
(
(SELECT day_of_week FROM sys_calendar.calendar WHERE calendar_date = CURRENT_DATE)
-
(SELECT day_of_week FROM sys_calendar.calendar WHERE calendar_date = ADD_MONTHS(CURRENT_DATE, -12))
)
```
This is basically saying: take the current date's day-of-week number (3) and subtract from it last year's day-of-week number (2) to get 1. Add that to last year's date and you'll have the same day of the week as the current date.
I tested this for all dates between `01/01/2010` and `CURRENT_DATE` and it worked as expected.
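The same weekday-alignment arithmetic can be sanity-checked outside the database. A sketch with Python's `datetime` (the helper name is hypothetical):

```python
from datetime import date, timedelta

def same_weekday_last_year(d: date) -> date:
    """Return the date ~1 year earlier that falls on the same weekday as d."""
    # Subtract one year the safe way (clamp Feb 29 down to Feb 28).
    try:
        last_year = d.replace(year=d.year - 1)
    except ValueError:  # d was Feb 29 in a leap year
        last_year = d.replace(year=d.year - 1, day=28)
    # Shift by the weekday difference; the difference may be negative,
    # which still lands on the same weekday, just before the anniversary.
    return last_year + timedelta(days=d.weekday() - last_year.weekday())

# 2015-12-01 was a Tuesday; the matching Tuesday a year earlier is 2014-12-02.
print(same_weekday_last_year(date(2015, 12, 1)))
```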
|
Why don't you simply subtract 52 weeks?
```
current_date - 364
```
|
Teradata SQL Same Day Prior Year in same Week
|
[
"",
"sql",
"teradata",
""
] |
If I have the following example table, is there any way to keep only the rows where the "Closed Date" column is empty? In this example, only one row has an empty "Closed Date" column; the others do not. [](https://i.stack.imgur.com/gqH0e.jpg)
```
Unique Key,Created Date,Month,Closed Date,Latitude,Longitude
32098276,12/1/2015 0:35,12,,40.78529363,-73.96933478,"(40.78529363449518, -73.96933477605721)"
32096105,11/30/2015 20:09,11,11/30/2015 20:09,40.62615508,-73.9606431,"(40.626155084398036, -73.96064310416676)"
32098405,11/30/2015 20:08,11,11/30/2015 20:08,40.6236074,-73.95914964,"(40.62360739765128, -73.95914964173129)"
```
I found this, but it is not exactly what I am looking for. Could any guru enlighten me? Thanks!
[return empty row based on condition in sql server](https://stackoverflow.com/questions/11002722/return-empty-row-based-on-condition-in-sql-server)
* Sorry, I forgot to mention: I use SQL Server 2014.
|
```
SELECT *
FROM your_table
WHERE [Closed Date] IS NULL OR LTRIM(RTRIM([Closed Date])) = ''
```
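The reason for the second condition is that NULL and an empty or whitespace-only string are different values, so `IS NULL` alone can miss rows. A quick demonstration with SQLite via Python's `sqlite3` (using `TRIM` in place of T-SQL's `LTRIM(RTRIM(...))`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (unique_key INT, closed_date TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [
    (32098276, None),                  # truly NULL
    (32096105, '11/30/2015 20:09'),
    (32098405, '   '),                 # whitespace-only, not NULL
])

# Matching NULL alone misses the whitespace-only row...
only_null = cur.execute(
    "SELECT unique_key FROM t WHERE closed_date IS NULL").fetchall()

# ...while also trimming catches both kinds of "empty".
null_or_blank = cur.execute(
    "SELECT unique_key FROM t WHERE closed_date IS NULL OR TRIM(closed_date) = ''"
).fetchall()
print(only_null, null_or_blank)
```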
|
You can use `IS NULL` to filter out records that contains `NULL` value:
```
SELECT *
FROM your_table
WHERE "Closed Date" IS NULL
```
Keep in mind that column identifier with space is bad practice. You should use something like `Closed_Date` to avoid quoting.
|
SELECT only the whole row where the column is empty in SQL
|
[
"",
"sql",
"sql-server-2014",
""
] |
I have a sql problem that I just cannot seem to solve.
Please imagine a table of software releases. In said table we have a release id, a product id, a version number and a status flag field.
*Please note I have heavily simplified this. In the real world this is a BIG table hence we introduced the status to avoid expensive grouping queries to only show the latest version of each product in the results etc.*
## Software Release
* ReleaseId (int)
* ProductId (int)
* Version (int)
* Status (int) // 0 = Old, 1 = current, 2 = future.
What we can do is create a select to find out which records are NOT the latest version so...
```
Select *
From SoftwareRelease SR
Where
(
Select Count(*)
From SoftwareRelease
Where ProductId = SR.ProductId
And Version > SR.Version
) > 0
```
But what we cannot do is a variant on this to update the status field appropriately. This is what we need to achieve for 0 (old) and 1 (latest). Note it is assumed that when the script runs, all records are either old or latest; "2" (future) records are for staging only and should not be considered.
|
How about using window functions?
```
with toupdate as (
      select sr.*, row_number() over (partition by ProductId order by version desc) as seqnum
      from SoftwareRelease sr
      where status <> 2  -- "future" releases are staging only and must not be touched
     )
update toupdate
set status = (case when seqnum = 1 then 1 else 0 end)
where seqnum = 1 and status = 0 or seqnum > 1 and status <> 0;
```
|
You can do this with a `Row_Number()` function over the ProductId partition, and then updating the `Status` column based on whether it is the latest version or not:
```
;With Cte As
(
Select *,
Row_Number() Over (Partition By ProductId Order By Version Desc) Row_Number
From SoftwareRelease
Where Status <> 2
)
Update Cte
Set Status = Case When Row_Number = 1 Then 1 Else 0 End
```
|
Update a record where a "newer" version exists
|
[
"",
"sql",
"sql-server",
"sql-update",
""
] |
I'm new to SQL and database design.
I have a task where I need to create a table with users (student or teacher), each user can be in a group or more, and each group can get one or more tasks to solve.
I created the design, but I think there is a problem with the `group` entity: when I want to add data there, I can't do it (I have only the ID and no other data; I'm sure I did something wrong).
Adding data to the `user` entity is working.
What should I change, or how should I add users to a group?
[](https://i.stack.imgur.com/tdISr.png)
|
Is this a possible way to do it?
```
insert into [group] (group_id) values((SELECT ISNULL(MAX([group_id])+1,1) FROM [group] WITH(SERIALIZABLE, UPDLOCK)))
insert into [Entity3] values(1,1) -- values(x,y): x = user_id, y = group_id of the group to add the user to
insert into [Entity3] values(2,1)
insert into [Entity3] values(3,1)
```
This is working as expected, but maybe something shorter or easier?
|
Something like this should work:
```
USERS
-----
user_id (pk)
name
surname
email
GROUPS
------
group_id (pk)
groupName
TASKS
-----
task_id (pk)
taskName
USERSGROUPSM2M (Many-to-Many table)
-------------
id (pk)
user_id (fk)
group_id (fk)
```
You could have user\_id #1 with group\_id #1 and #42, or any combination
```
GROUPSTASKSM2M (Another Many-to-Many table)
-------------
id (pk)
group_id (fk)
task_id (fk)
```
You could have group\_id #7 with task\_ids #3 and #76, or any combination
|
Designing SQL Server database
|
[
"",
"sql",
""
] |
I'm trying to make a report that finds rows in a table which have a mistake: a missing item order index. I.e.:
```
ID Item Order
----------------
1 A 1
2 A 2
3 A 3
4 B 1
5 B 2
6 C 2
7 C 3
8 D 1
```
Note that Item "C" is missing the row with Order index "1". I need to find all items which are missing index "1" and instead start with "2" or higher.
One way I figured is this:
```
SELECT DIstinct(Item) FROM ITEMS as I
WHERE I.Item NOT IN (SELECT Item FROM Items WHERE Order = 1)
```
But surprisingly (to me), it does not give me any results, even though I know I have such items. I guess it first selects items which are not in the sub-select and then distincts them, but what I wanted was to select distinct Items and find which of them have no rows with "Order = 1".
Also, this code is to be executed over some 70 thousand rows, so it has to be feasible (another way I can think of is a CURSOR, but that would be very slow and possibly unstable?).
Regards,
Oak
|
You can find the missing orders using a [HAVING](https://msdn.microsoft.com/en-GB/library/ms180199.aspx) clause. HAVING allows you to filter on aggregated records. In this case we are filtering for Items with a min Order in excess of 1.
The benefit of this approach over a sub query in the WHERE clause is SQL Server doesn't have to rerun the sub query multiple times. It should run faster on large datasets.
[Example](https://data.stackexchange.com/stackoverflow/query/401623/having-example)
```
/* HAVING allows us to filter on aggregated records.
*/
WITH SampleData AS
(
/* This CTE creates some sample records
* to experiment with.
*/
SELECT
r.*
FROM
(
VALUES
( 1, 'A', 1),
( 2, 'A', 2),
( 3, 'A', 3),
( 4, 'B', 1),
( 5, 'B', 2),
( 6, 'C', 2),
( 7, 'C', 3),
( 8, 'D', 1)
) AS r(ID, Item, [Order])
)
SELECT
Item,
COUNT([Order]) AS Count_Order,
MIN([Order]) AS Min_Order
FROM
SampleData
GROUP BY
Item
HAVING
MIN([Order]) > 1
;
```
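Since `HAVING` with an aggregate is standard SQL, the same one-pass query runs unchanged on other engines. A runnable check with Python's `sqlite3` on the sample data (note `Order` must be quoted, as it is a keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute('CREATE TABLE items (id INT, item TEXT, "order" INT)')
cur.executemany("INSERT INTO items VALUES (?, ?, ?)", [
    (1, 'A', 1), (2, 'A', 2), (3, 'A', 3), (4, 'B', 1),
    (5, 'B', 2), (6, 'C', 2), (7, 'C', 3), (8, 'D', 1),
])

# One aggregation pass: items whose smallest order index is above 1.
missing_first = cur.execute("""
    SELECT item, MIN("order") AS min_order
    FROM items
    GROUP BY item
    HAVING MIN("order") > 1
""").fetchall()
print(missing_first)
```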
|
The idea is sound, but there is one tiny detail with NOT IN that may be problematic: if the subquery after NOT IN returns any NULLs, the NOT IN condition evaluates to unknown (treated as false) for every row. This may be the reason why you get no results. You can try NOT EXISTS, like in the other answer, or just exclude the NULLs:
```
SELECT DISTINCT Item FROM ITEMS as I
WHERE I.Item NOT IN (SELECT Item FROM Items WHERE Order = 1 AND Item IS NOT NULL)
```
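This NULL pitfall is easy to reproduce. A small demonstration with SQLite via Python's `sqlite3`: a single NULL in the `NOT IN` list filters out every row, and excluding NULLs restores the expected result.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (item TEXT)")
cur.executemany("INSERT INTO items VALUES (?)", [('A',), ('B',), (None,)])

# NOT IN against a list containing NULL matches nothing:
# 'B' <> 'A' is true, but 'B' <> NULL is unknown, so the row is dropped.
with_null = cur.execute(
    "SELECT item FROM items WHERE item NOT IN "
    "(SELECT item FROM items WHERE item = 'A' OR item IS NULL)"
).fetchall()

# Excluding NULLs from the subquery restores the expected result.
without_null = cur.execute(
    "SELECT item FROM items WHERE item NOT IN "
    "(SELECT item FROM items WHERE item = 'A' AND item IS NOT NULL)"
).fetchall()
print(with_null, without_null)
```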
|
Select rows with missing value for each item
|
[
"",
"sql",
"sql-server",
"distinct",
""
] |
Is there any way to SELECT the first and last 2 characters and REPLACE all the other characters with \* as shown below?
```
WA***RT
EX*** ***IL
CH***ON
BE******* *****AY
AP*LE
```
Thanks in advance!
|
Spaces skipped:
```
SELECT
name,
[hidden] = CASE WHEN LEN(name) <= 4 THEN name
ELSE CONCAT(LEFT(name, 2),REPLICATE('*', LEN(name)- 2),RIGHT(name,2))
END
FROM #tab;
```
If you need spaces and there is **only one** you can use:
```
SELECT name,
[hidden] = CASE
WHEN LEN(name) <= 4 THEN name
WHEN CHARINDEX(' ', name) > 0
THEN STUFF(CONCAT(LEFT(name, 2),REPLICATE('*', LEN(name) - 2) ,
RIGHT(name,2)), CHARINDEX(' ', name),1, ' ')
ELSE CONCAT(LEFT(name, 2),REPLICATE('*', LEN(name) - 2) ,RIGHT(name,2))
END
FROM #tab;
```
`LiveDemo`
If you use a version lower than `SQL Server 2012`, concatenate strings with `+` instead of `CONCAT`.
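The masking rule itself (keep the first and last two characters when the value is longer than four, star the rest, and leave spaces alone) can be stated compactly outside SQL, which is handy for cross-checking the query's output. A hypothetical helper in Python:

```python
def mask(name: str) -> str:
    """Keep the first and last 2 chars, star out the middle, preserve spaces."""
    if len(name) <= 4:
        return name
    # Star every middle character except spaces.
    middle = "".join(c if c == " " else "*" for c in name[2:-2])
    return name[:2] + middle + name[-2:]

for word in ("WALMART", "CHEVRON", "APPLE", "EXXON MOBIL"):
    print(mask(word))
```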
|
```
select left(col, 2) + REPLICATE('*',len(col)-4) + right(col,2) from table;
```
|
SQL-Server: Replace except first and last characters
|
[
"",
"sql",
"sql-server",
""
] |
Noob question - I have the following tables;
ID is Unique as well as date & time.
How can I write an SQL query that will return a table with;
```
COLUMNS: id, man
```
Pretty sure I'm being extra stupid but losing patience rapidly with what appears to be so simple.
|
```
SELECT t2.*, t1.price
FROM film t1
JOIN movie t2 ON t1.id = t2.id
```
|
```
select film.id, date, time, price
from film
join movie on (film.id = movie.id)
```
|
SQL - joining tables together for a single table
|
[
"",
"sql",
""
] |
I have the following code:
```
declare @testValue nvarchar(50) = 'TEST';
select @testValue = 'NOTUSED' where 1 > 2;
select @testValue; -- Outputs 'TEST'
select @testValue = 'USED' where 2 > 1;
select @testValue; -- Outputs 'USED'
```
With the above, the first assignment is never used because the where clause fails. The second one is done properly and used is returned.
Why doesn't SQL return NULL in this case, assigning a NULL value to @testValue after the first assignment, whose WHERE clause matches no rows?
|
This is the expected behavior:
"If the SELECT statement returns no rows, the variable retains its present value. If expression is a scalar subquery that returns no value, the variable is set to NULL."
<https://msdn.microsoft.com/en-us/library/ms187330.aspx>
You can get around this in your example by using a subquery in the right side.
```
SELECT @testValue = (SELECT 'NOTUSED' where 1 > 2);
```
As for why it is this way, I cannot say for certain. Perhaps the entire `@testValue = 'NOTUSED'` expression evaluates to NULL, instead of only the right-hand `'NOTUSED'` portion, and this prevents the variable from being set. Not directly related, but I can say it took me some time to grow confident writing queries when NULLs are involved. You need to be aware of and familiar with the ANSI NULL specification and its associated behavior.
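The scalar-subquery half of this behaviour (no rows returned means NULL) is standard SQL and can be demonstrated on any engine; the variable-retention half is specific to T-SQL's `SELECT @var = ...`. A quick check of the subquery part using SQLite via Python's `sqlite3`:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# A scalar subquery that returns no rows evaluates to NULL...
(empty_subquery,) = cur.execute("SELECT (SELECT 'NOTUSED' WHERE 1 > 2)").fetchone()

# ...while one that returns a row yields that value.
(used,) = cur.execute("SELECT (SELECT 'USED' WHERE 2 > 1)").fetchone()
print(empty_subquery, used)
```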
|
This is the default behavior of `SELECT`.
When assigning a value to a variable using `SELECT`, if there is no value returned, `SELECT` will not make the assignment at all so the variable's value will not be changed.
On the other hand, `SET` will assign `NULL` to the variable if there is no value returned.
[For more info](http://vyaskn.tripod.com/differences_between_set_and_select.htm)
|
Why doesn't the Select statement assigns an empty string or null value if it doesn't return a result?
|
[
"",
"sql",
"sql-server",
""
] |
I'm a bit new to SQL Server, so hopefully this isn't something too convoluted. If I have a table with a bunch of data that shows different records that have been completed or not...
TABLE 1
```
ID CATEGORY COMPLETE
1 reports yes
2 reports no
3 processes no
4 processes yes
5 reports no
6 events yes
```
...what would be the best way of creating a new field that would show the percentage complete for every category?
TABLE 2
```
ID CATEGORY PERCENTAGE
1 events 100%
2 processes 50%
3 reports 33%
```
any help would be greatly appreciated, thank you.
|
`group by` category column and use conditional sum to get only `complete = 'yes'` cases in the numerator.
```
select category,
100 * 1.0 * sum(case when complete = 'yes' then 1 else 0 end)/count(*) as pct
from tablename
group by category
```
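Conditional aggregation like this is portable SQL. A runnable check of the same query on the sample data, using SQLite through Python's `sqlite3` (the `100.0` literal forces decimal rather than integer division, just like `100 * 1.0` above):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE t (id INT, category TEXT, complete TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 'reports', 'yes'), (2, 'reports', 'no'), (3, 'processes', 'no'),
    (4, 'processes', 'yes'), (5, 'reports', 'no'), (6, 'events', 'yes'),
])

# Count only 'yes' rows in the numerator, all rows in the denominator.
rows = cur.execute("""
    SELECT category,
           ROUND(100.0 * SUM(CASE WHEN complete = 'yes' THEN 1 ELSE 0 END)
                 / COUNT(*), 0) AS pct
    FROM t
    GROUP BY category
    ORDER BY pct DESC
""").fetchall()
print(rows)
```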
|
You can use windowed functions and `PARTITION BY Category`:
```
SELECT DISTINCT Category,
[percentage] = ROUND(100 * SUM(CASE complete WHEN 'yes' THEN 1.0 ELSE 0.0 END)
OVER (PARTITION BY Category)/
COUNT(*) OVER (PARTITION BY Category),0)
FROM #tab;
```
`LiveDemo`
With insert to second table:
```
SELECT DISTINCT
[id] = IDENTITY(INT, 1,1)
,category
,[percentage] = ROUND(100 * SUM(CASE complete WHEN 'yes' THEN 1.0 ELSE 0.0 END)
OVER (PARTITION BY CATEGORY)/
COUNT(*) OVER (PARTITION BY Category),0)
INTO #table2
FROM #tab
ORDER BY [percentage] DESC;
SELECT *
FROM #table2;
```
`LiveDemo2`
|
SQL Server Completion Percentage Category
|
[
"",
"sql",
"sql-server",
"percentage",
""
] |
I have this:
```
SELECT
invoice_number, invoice_year, invoice_month, invoice_amount,
payment_year, payment_month, payment_amount
FROM payments_table
```
**Result:**

So I have 4 invoices. The invoice amounts of the two invoices from 2015/01 add up to 900, and the two invoices from 2015/02 add up to 950.
I want this result:

So I want to sum the `invoice_amount` by `invoice_year` and `invoice_month` using the `invoice_number` just once. And I want to sum the `payment_amount` by `invoice_year`, `invoice_month`, `payment_year` and `payment_month`.
If I use `GROUP BY invoice_year, invoice_month, payment_year, payment_month` I get the right amounts for `SUM(payment_amount)` but I get the wrong amounts for `SUM(invoice_amount)`.
Any advice?
|
I've found the solution myself. Thanks all for thinking with me:
```
select b.invoice_year
, b.invoice_month
, b.invoice_amount
, c.payment_year
, c.payment_month
, c.payment_amount
from (
select a.invoice_year
, a.invoice_month
, sum(a.invoice_amount) as invoice_amount
from (
select distinct invoice_number
, invoice_year
, invoice_month
, invoice_amount
from payments
) a
group by a.invoice_year
, a.invoice_month
) b
inner join (
select a.invoice_year
, a.invoice_month
, a.payment_year
, a.payment_month
, sum(a.payment_amount) as payment_amount
from (
select invoice_year
, invoice_month
, payment_year
, payment_month
, payment_amount
from payments
) a
group by a.invoice_year
, a.invoice_month
, a.payment_year
, a.payment_month
) as c
on c.invoice_year = b.invoice_year
and c.invoice_month = b.invoice_month
```
This gives me exactly the result I was looking for.
|
The query you need is this one:
```
select a.invoice_year, a.invoice_month, a.payment_year, a.payment_month,
SUM(payment_amount), b.sumup
from payments_table a
inner join
(select invoice_year, invoice_month, sum(payment_amount) sumup
from payments_table
group by invoice_year, invoice_month) b
ON (a.invoice_year = b.invoice_year
and a.invoice_month = b.invoice_month )
GROUP BY a.invoice_year, a.invoice_month, a.payment_year, a.payment_month
```
But let me say that for the sample data you provided, the sum per `invoice_year` and `invoice_month` totals 900, not 950.
See it here on fiddle: <http://sqlfiddle.com/#!9/46249/4>
Note that I did the fiddle in MySql but it should be the same for SQLServer since there is no specific function or syntax, just plain SQL. The reason why I did it in Mysql is because sometimes SQLFiddle with SQLServer gets unstable.
**EDIT**
Turns out that I was summing the wrong field and missing a column, so the proper query should be:
```
select a.invoice_year, a.invoice_month,
b.incount,
SUM(payment_amount) invoice_amount,
a.payment_year,
a.payment_month,
b.payment__amount
from payments_table a
inner join
(select invoice_year, invoice_month,
count(distinct invoice_amount) incount,
sum(distinct invoice_amount) payment__amount
from payments_table
group by invoice_year, invoice_month) b
ON ( a.invoice_year = b.invoice_year
and a.invoice_month = b.invoice_month )
GROUP BY a.invoice_year, a.invoice_month, a.payment_year, a.payment_month
```
This will give you the results exactly as you want. See it here: <http://sqlfiddle.com/#!9/46249/10>
|
Two different grouping levels in one select
|
[
"",
"sql",
"sql-server",
""
] |
I have a database query:
```
DECLARE @Pager_PageNumber AS INT, @Pager_PageSize AS INT;
SET @Pager_PageNumber = 1;
SET @Pager_PageSize = 12;
SELECT
[Name], [Description], [Table1ID], [VersionNo], [Status]
FROM
(SELECT
CAST(Table1.name AS VARCHAR(MAX)) As [Name],
CAST(Table1.description AS VARCHAR(MAX)) AS [Description],
CAST(CAST(Table1.Table1_ID AS DECIMAL(18,0)) AS VARCHAR(MAX)) AS [Table1ID],
CAST(CAST(Table1.VERSION_NO AS DECIMAL(18,0)) AS VARCHAR(MAX)) AS [VersionNo],
CAST(Table2.br_status AS VARCHAR(MAX)) AS [Status]
FROM
Table1 WITH (NOLOCK)
INNER JOIN
(SELECT
Table1_id, MAX(version_no) as version_no
FROM Table1
WHERE Table1.status = '00002'
GROUP BY Table1_id) AS BR WITH (NOLOCK) ON Table1.Table1_id = BR.Table1_id
AND BR.version_no = Table1.version_no
INNER JOIN
Table2 WITH (NOLOCK) ON Table1.status = Table2.br_status_code) A
ORDER BY
[Name], [Description], [Table1ID], [VersionNo], [Status]
OFFSET ((@Pager_PageNumber - 1) * @Pager_PageSize) ROWS
FETCH NEXT @Pager_PageSize ROWS ONLY;
SELECT COUNT(*)
FROM
(SELECT
CAST(Table1.name AS VARCHAR(MAX)) AS [Name],
CAST(Table1.description AS VARCHAR(MAX)) AS [Description],
CAST(CAST(Table1.Table1_ID AS DECIMAL(18,0)) AS VARCHAR(MAX)) AS [Table1ID],
CAST(CAST(Table1.VERSION_NO AS DECIMAL(18,0)) AS VARCHAR(MAX)) As [VersionNo],
CAST(Table2.br_status AS VARCHAR(MAX)) AS [Status]
FROM
Table1 WITH (NOLOCK)
INNER JOIN
(SELECT Table1_id, MAX(version_no) as version_no
FROM Table1
WHERE Table1.status = '00002'
GROUP BY Table1_id) AS BR WITH (NOLOCK) ON Table1.Table1_id = BR.Table1_id
AND BR.version_no = Table1.version_no
INNER JOIN
Table2 WITH (NOLOCK) ON Table1.status = Table2.br_status_code) A;
```
In SQL Server I get the following error near `BR WITH (NOLOCK)`:
> Incorrect syntax near the keyword 'WITH'.
> Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
But as per my understanding from sources [like](http://sqlserverplanet.com/tsql/using-with-nolock) the syntax is as
```
SELECT
first_name, last_name,
FROM
dbo.person p WITH (NOLOCK)
JOIN
dbo.employee e WITH (NOLOCK) ON e.person_id = p.person_id
WHERE
p.person_id = 1;
```
So, my query looks just about right.
Also, when I remove the WITH (NOLOCK) next to `BR` - i.e. on my inner join subquery - the query runs fine. Any ideas what I might be missing?
**PS:** My DB compatibility level is 110.
|
You apply `with (nolock)` to *tables*, not to subqueries. So, instead of:
```
(SELECT Table1_id, MAX(version_no) as version_no
FROM Table1
where Table1.status='00002'
GROUP BY Table1_id
) as BR WITH (NOLOCK)
```
You would write:
```
(SELECT Table1_id, MAX(version_no) as version_no
FROM Table1 WITH (NOLOCK)
where Table1.status='00002'
GROUP BY Table1_id
) BR
```
|
Just put
```
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
```
before your query. NOLOCK behavior will be used for all your tables in the query. Using hint NOLOCK just means using READ UNCOMMITTED transaction isolation level for a particular table.
|
SQL Server Inner Join using WITH(NOLOCK)
|
[
"",
"sql",
"sql-server",
"join",
"inner-join",
"self-join",
""
] |
I have a table on my DB that record every call made by employees all over the day.
Every record is a call made by branch lines.
If a branch made 50 calls a day we will have 50 records on DB:
```
branch | date
-------+----------------------------
72489 | 2015-12-03 08:03:58
34002 | 2015-12-03 08:03:59
78700 | 2015-12-03 08:05:37
78700 | 2015-12-03 08:05:53
78700 | 2015-12-03 08:05:57
78700 | 2015-12-03 08:06:24
```
What I would like to do is take the first call of the day of every branch line.
I tried to use DISTINCT:
```
SELECT DISTINCT branch, date FROM table WHERE date::date='2015-12-03'::date;
```
Supposedly DISTINCT would avoid repetition on branch, but the result was:
```
branch | date
-------+----------------------------
34002 | 2015-12-03 07:58:24
34002 | 2015-12-03 08:00:40
59754 | 2015-12-03 08:01:31
34002 | 2015-12-03 08:01:56
59754 | 2015-12-03 08:02:09
57764 | 2015-12-03 08:02:17
59754 | 2015-12-03 08:02:28
```
I still have some repetitions in there.
What I would like to have in my result is:
```
branch | date
-------+----------------------------
34002 | 2015-12-03 07:58:24
59754 | 2015-12-03 08:01:31
59754 | 2015-12-03 08:02:09
```
I tried with GROUP BY:
```
SELECT branch, date FROM table WHERE date::date='2015-12-03'::date GROUP BY branch;
```
But get this error:
**ERROR: column "table.date" must appear in the GROUP BY clause or be used in an aggregate function.**
Can someone help me?
PS: Sorry for the english, my bad.
|
The cast using `::date` implies you are using Postgres. In this case getting the "*the first call of the day of every branch line*" is quite easy by using the `distinct on()` operator:
```
SELECT DISTINCT ON (branch) branch, date
FROM table
WHERE date::date = '2015-12-03'::date
ORDER BY branch, date;
```
Another possibility to solve this kind of query is to use a window function:
```
select branch, date
from (
  select branch, date,
         row_number() over (partition by branch order by date) as rn
  from table
  where cast(date as date) = date '2015-12-03'
) t
where rn = 1
order by branch;
```
The `distinct on ()` solution is Postgres specific, the second solution is ANSI SQL (using ANSI a date literal and ANSI casting)
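One more portable option when only the first timestamp per group is needed, and no other columns from that first row: a plain `GROUP BY` with `MIN`. A sketch on the sample data using SQLite via Python's `sqlite3`:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE calls (branch INT, date TEXT)")
cur.executemany("INSERT INTO calls VALUES (?, ?)", [
    (72489, '2015-12-03 08:03:58'), (34002, '2015-12-03 08:03:59'),
    (78700, '2015-12-03 08:05:37'), (78700, '2015-12-03 08:05:53'),
    (78700, '2015-12-03 08:05:57'), (78700, '2015-12-03 08:06:24'),
])

# First call of the day per branch: the minimum timestamp in each group.
first_calls = cur.execute("""
    SELECT branch, MIN(date) AS first_call
    FROM calls
    WHERE date(date) = '2015-12-03'
    GROUP BY branch
    ORDER BY branch
""").fetchall()
print(first_calls)
```

Unlike `DISTINCT ON` or `row_number()`, this cannot carry along additional columns from the earliest row, so it only fits queries with exactly this shape.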
|
Possible solution:
```
SELECT
    branch,
    (
        select
            min(date)
        from
            table tb
        where
            tb.branch = table.branch
            and tb.date::date = '2015-12-03'::date
    ) as date
FROM
    table
WHERE
    date::date = '2015-12-03'::date
GROUP BY
    branch;
```
|
SELECT DISTINCT or GROUP BY on field
|
[
"",
"sql",
"postgresql",
"group-by",
"distinct",
"greatest-n-per-group",
""
] |
I have the following query which counts the number of items created on a particular date in the last 10 days
```
SELECT
CONVERT (DATE, CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688) AS 'Logged Date',
Count (*) AS 'Total'
FROM
MTV_System$WorkItem$Incident
WHERE
CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688 >= DATEADD(DAY, DATEDIFF(DAY, 0, Getdate()) - 10, 0)
GROUP BY
CONVERT(DATE, CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688)
```
How do I get this to show the dates which have no values present (i.e. list every date in the last 10 days, returning the count if there is data, or 0 if none)? I am using SQL Server 2012.
|
You can write a recursive cte to get the date for the last 10 days into a table as follows:
```
WITH TableA (StartDate) AS (SELECT DATEADD(DAY, DATEDIFF(DAY, 0, Getdate()) - 10, 0)),
q as (
SELECT StartDate
, Number = 0
FROM TableA
UNION ALL
SELECT DATEADD(d,1,StartDate)
, Number = Number + 1
FROM q
WHERE 10 > Number )
```
Then join q with your original query, to get a row for every date.
```
select q.StartDate, yourtable.Total from q
left join (
SELECT
CONVERT (DATE, CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688) AS 'Logged Date',
Count (*) AS 'Total'
FROM
MTV_System$WorkItem$Incident
WHERE
CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688 >= DATEADD(DAY, DATEDIFF(DAY, 0, Getdate()) - 10, 0)
GROUP BY
CONVERT(DATE, CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688)
) as yourtable on [Logged Date] = q.StartDate
```
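The same gap-filling shape in portable terms: generate the date series with a recursive CTE, LEFT JOIN the facts onto it, and count (missing days yield 0). A sketch using SQLite via Python's `sqlite3`, with a fixed anchor date and a shortened 4-day window so the output is deterministic:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE incidents (created TEXT)")
cur.executemany("INSERT INTO incidents VALUES (?)", [
    ('2015-11-28',), ('2015-11-28',), ('2015-11-30',),
])

rows = cur.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT date('2015-11-27')   -- series start (10 days back in the real query)
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < '2015-11-30'
    )
    -- COUNT over the joined column ignores NULLs, so empty days count as 0.
    SELECT days.d, COUNT(incidents.created) AS total
    FROM days
    LEFT JOIN incidents ON date(incidents.created) = days.d
    GROUP BY days.d
    ORDER BY days.d
""").fetchall()
print(rows)
```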
|
Similar to BeanFrog's answer but a little shorter
```
-- sample data for testing
declare @MTV_System$WorkItemIncident table (
[CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688] DATE,
[Total] INT
);
INSERT INTO @MTV_System$WorkItemIncident VALUES ('2015-11-23', 23);
INSERT INTO @MTV_System$WorkItemIncident VALUES ('2015-11-21', 21);
INSERT INTO @MTV_System$WorkItemIncident VALUES ('2015-11-30', 30);
-- now the query
WITH TableA (LoggedDate) AS (
SELECT TOP 10 CONVERT (DATE, DATEADD(DAY, number * -1, GETDATE())) AS 'LoggedDate'
FROM master.dbo.spt_values
WHERE name IS NULL
)
SELECT TableA.[LoggedDate],
SUM(ISNULL(Data.Total, 0)) AS 'LoggedCount'
FROM TableA
LEFT JOIN @MTV_System$WorkItemIncident AS Data ON CONVERT (DATE, CreatedDate_6258638D_B885_AB3C_E316_D00782B8F688) = TableA.[LoggedDate]
GROUP BY TableA.[LoggedDate]
```
|
Show Dates that have no values for selected column
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
Suppose I have the following table
```
ClassID StudentID
1 10
1 11
2 12
2 10
3 13
```
I want to have table like this
```
ClassID StudentID1 StudenID2 ... StudentID32
1 10 11 null
2 12 10 null
3 13 null null
```
Is there a way to query above table. Constraint: there could be maximum of 32 (it can be any predefined number) students for each class. I guess while loop can be used somehow, but there should be other ways.
**EDIT**
I've tried to use Pivot, but there I've to use some kind of aggregates, but for above question I don't have to use any kind of aggregate. And in pivot I have to name the column (something like `in(col1, col2,...)`), but here I don't even know ids of students. Is there any general way to solve above simple question so that each row represents a unique ClassID with all students, where each StudentID is in different column (columns can be named anything you like), subject to each ClassID can have max of 32 StudentIDs (I'm not supposed to use self joins because it is not practical)
|
I hope I understood you correctly... I still used PIVOT, and here I assume I don't know the student IDs either.
```
create table Student (
ClassID INT
, StudentID INT
)
INSERT INTO Student (ClassID, StudentID) VALUES
(1,10)
, (1,11)
, (2,12)
, (2,10)
, (3,13)
select
'Student' + CONVERT(NVARCHAR(150),ROW_NUMBER () OVER (PARTITION BY ClassID ORDER BY StudentID)) AS StudentNo
, *
into #tmpStud
from Student
declare @distinct nvarchar(max) = ''
/*
option a : flexible to the number of students
*/
--set @distinct = (select distinct '[' + StudentNo + '],' as [text()] from #tmpStud for xml path(''))
--set @distinct = SUBSTRING(@distinct, 0, LEN(@distinct))
/*
option b : max of 32 student
*/
declare @max int = 33
, @loop int = 1
while (@loop < @max)
begin
if(@loop = 1) begin
set @distinct = @distinct + '[Student' + Convert(nvarchar(20),@loop) + ']'
set @loop = @loop + 1
end
else begin
set @distinct = @distinct + ',[Student' + Convert(nvarchar(20),@loop) + ']'
set @loop = @loop + 1
end
end
exec ('
select
*
from (
select
ClassID
, StudentNo
, StudentID
FROM #tmpStud
) AS s PIVOT
(
MAX(StudentID)
FOR StudentNo IN (' + @distinct + ')
) AS pvt
')
drop table #tmpStud
```
**EDIT :** Once you have your create table student, please run the below code :
```
select
'Student' + CONVERT(NVARCHAR(150),ROW_NUMBER () OVER (PARTITION BY ClassID ORDER BY StudentID)) AS StudentNo
, *
into #tmpStud
from Student
declare @distinct nvarchar(max) = ''
/*
option a : flexible to the number of students
*/
set @distinct = (select distinct '[' + StudentNo + '],' as [text()] from #tmpStud for xml path(''))
set @distinct = SUBSTRING(@distinct, 0, LEN(@distinct))
exec ('
select
*
from (
select
ClassID
, StudentNo
, StudentID
FROM #tmpStud
) AS s PIVOT
(
MAX(StudentID)
FOR StudentNo IN (' + @distinct + ')
) AS pvt
')
drop table #tmpStud
```
**EDIT :** Lets assume you'll be needed a static of 32 students @ max..use the script below.
```
select
'Student' + CONVERT(NVARCHAR(150),ROW_NUMBER () OVER (PARTITION BY ClassID ORDER BY StudentID)) AS StudentNo
, *
into #tmpStud
from Student
declare @distinct nvarchar(max) = ''
/*
option b : static max of 32 student
*/
declare @max int = 33
, @loop int = 1
while (@loop < @max)
begin
if(@loop = 1) begin
set @distinct = @distinct + '[Student' + Convert(nvarchar(20),@loop) + ']'
set @loop = @loop + 1
end
else begin
set @distinct = @distinct + ',[Student' + Convert(nvarchar(20),@loop) + ']'
set @loop = @loop + 1
end
end
exec ('
select
*
from (
select
ClassID
, StudentNo
, StudentID
FROM #tmpStud
) AS s PIVOT
(
MAX(StudentID)
FOR StudentNo IN (' + @distinct + ')
) AS pvt
')
drop table #tmpStud
```
|
With a known limit that is relatively small (e.g. 10), a self join would work.
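As a sketch of another option that works for a small known limit: conditional aggregation over a per-class row number, using the `Student` table from the question (shown for 3 columns; extend the `CASE` lines up to the predefined maximum):

```
SELECT ClassID,
       MAX(CASE WHEN rn = 1 THEN StudentID END) AS StudentID1,
       MAX(CASE WHEN rn = 2 THEN StudentID END) AS StudentID2,
       MAX(CASE WHEN rn = 3 THEN StudentID END) AS StudentID3
FROM (SELECT ClassID, StudentID,
             ROW_NUMBER() OVER (PARTITION BY ClassID ORDER BY StudentID) AS rn
      FROM Student) s
GROUP BY ClassID;
```

This avoids both dynamic SQL and the combinatorial growth of a literal self join.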
|
T-sql putting values of column into one row
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Hey so I have one set of data with the structure:
```
id product_number product_type
1 1001 car
2 1002 house
```
But the data has some duplicates where:
```
id product_number product_type
1 1001 car
2 1001 house
```
I need to delete the duplicates but only the value which is = house.
In my mind the query should be like:
```
DELETE *
FROM table
WHERE product_number is duplicate AND product_type = house
```
Thanks
|
In MySQL, you can do what you want with a `join`:
```
delete t
from table t join
     (select product_number, count(*) as cnt
      from table
      group by product_number
     ) tt
     on tt.product_number = t.product_number
where tt.cnt > 1 and t.product_type = 'house';
```
|
```
DELETE
FROM table
WHERE id not in
    (select max(id) from table
     group by product_number)
  AND product_type = 'house'
```
|
Deleting duplicates with a where clause
|
[
"",
"mysql",
"sql",
""
] |
I am trying to get the total sum of a column and the sum of the same column between 2 dates in one query. Is this possible?
My table looks like this:
```
uid|amount|date
```
The two queries I am trying to combine into one:
```
SELECT sum(amount) as `keys` FROM tbl_keys WHERE uid = 1
SELECT sum(amount) as `keys` FROM tbl_keys WHERE uid = 1 AND YEAR(`date`) = YEAR(CURRENT_DATE)
AND MONTH(`date`) = MONTH(CURRENT_DATE)
```
|
You could use a UNION query:
```
SELECT 'All' AS cnt, sum(amount) as `keys` FROM tbl_keys WHERE uid = 1
UNION ALL
SELECT 'Current_month' AS cnt, sum(amount) as `keys`
FROM tbl_keys
WHERE
uid = 1
  AND `date` <= last_day(current_date)
  AND `date` >= current_date - interval (day(current_date)-1) day
```
(I prefer to use `>=` and `<=` on the date column, as it can make use of an index if present, while functions like `MONTH()` or `YEAR()` cannot; also I assume that `date` is a date column and that it doesn't contain time information).
If you want the result in one row, you could use an inline query:
```
SELECT
(SELECT sum(amount) as `keys` FROM tbl_keys WHERE uid = 1) AS total,
(SELECT sum(amount) as `keys`
FROM tbl_keys
WHERE
uid = 1
    AND `date` <= last_day(current_date)
    AND `date` >= current_date - interval (day(current_date)-1) day
) AS current_month
```
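If a single table scan matters, the same pair of totals can be sketched with conditional aggregation (assuming the same column names and the same first-of-month expression as above):

```
SELECT
    SUM(amount) AS total,
    SUM(CASE WHEN `date` >= current_date - interval (day(current_date)-1) day
              AND `date` <= last_day(current_date)
             THEN amount END) AS current_month
FROM tbl_keys
WHERE uid = 1;
```

The `CASE` with no `ELSE` yields NULL outside the current month, and `SUM` ignores NULLs.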
|
Something like this:
```
SELECT sum(amount) as `keys`,
(
SELECT sum(t.amount)
FROM tbl_keys as t
WHERE t.uid = tbl_keys.uid AND YEAR(t.`date`) = YEAR(CURRENT_DATE)
AND MONTH(t.`date`) = MONTH(CURRENT_DATE)
) as `keys2`
FROM tbl_keys
WHERE uid = 1
```
|
MYSQL get total sum and sum between 2 date in 1 query
|
[
"",
"mysql",
"sql",
""
] |
Assuming we got two below tables:
**TravelTimes**
```
OriginId DestinationId TotalJourneyTime
1 1 10
1 2 20
2 2 30
2 3 40
1 3 50
```
**Destinations**
```
DestinationId Name
1 Destination 1
2 Destination 2
3 Destination 3
```
How do I find the quickest journey between each origin and destination?
I want to join TravelTimes with Destinations by DestinationId and then group them by OriginId and sort each group by TotalJourneyTime and select the first row of each group.
I did try joining and grouping, but it seems group by is not the solution for my case as I don't have any aggregation column in the output.
**Expected output**
```
OriginId DestinationId DestinationName TotalJourneyTime
1 1 Destination 1 10
2 3 Destination 3 40
```
|
Use a [`RANK`](https://msdn.microsoft.com/en-GB/library/ms176102.aspx) to rank each journey partitioned by the origin and destination and ordered by the travel time
```
WITH RankedTravelTimes
AS
(
select originid,
destinationId,
totaljourneytime,
rank() over (partition by originid,destinationid order by totaljourneytime ) as r
from traveltimes
)
SELECT rtt.*, d.name
FROM RankedTravelTimes rtt
INNER JOIN Destinations d
ON rtt.destinationId = d.DestinationId
WHERE rtt.r=1
```
The above will include both the journey from 1-2 and 2-2 as separate. If you're only interested in the *destination* you can remove `originId` out of the partition.
|
Not sure I see the problem here with just joining and grouping the data with a `MIN` on the journey time:
```
CREATE TABLE #Traveltimes
(
[OriginId] INT ,
[DestinationId] INT ,
[TotalJourneyTime] INT
);
INSERT INTO #Traveltimes
( [OriginId], [DestinationId], [TotalJourneyTime] )
VALUES ( 1, 1, 10 ),
( 1, 2, 20 ),
( 2, 2, 30 ),
( 2, 3, 40 ),
( 2, 3, 50 );
CREATE TABLE #Destinations
(
[DestinationId] INT ,
[Name] VARCHAR(13)
);
INSERT INTO #Destinations
( [DestinationId], [Name] )
VALUES ( 1, 'Destination 1' ),
( 2, 'Destination 2' ),
( 3, 'Destination 3' );
SELECT d.DestinationId ,
d.Name ,
tt.OriginId ,
MIN(tt.TotalJourneyTime) MinTime
FROM #Destinations d
INNER JOIN #Traveltimes tt ON tt.DestinationId = d.DestinationId
GROUP BY tt.OriginId ,
d.DestinationId ,
d.Name
DROP TABLE #Destinations
DROP TABLE #Traveltimes
```
Gives you:
```
DestinationId Name OriginId MinTime
1 Destination 1 1 10
2 Destination 2 1 20
2 Destination 2 2 30
3 Destination 3 2 40
```
Note: why do you travel from destination 1 to itself?
|
SQL: first row of group by after join and order
|
[
"",
"sql",
"sql-server",
""
] |
table is like this, say it represents various types of events in a log file
```
<type> <date>
```
I want to select the top 5 most common types
```
select type,count(type) as c from log order by c desc limit 5
```
this works fine, but I only want the type column so I can use this in a `where in` subquery. How do I do that? I can't work out how to suppress the count column.
|
Pretty straightforward:
```
SELECT type FROM log GROUP BY type ORDER BY COUNT(type) DESC LIMIT 5
```
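SQLite (per the question's tags) accepts `LIMIT` inside an `IN` subquery, so this drops straight into the `where in` the asker mentioned (note that MySQL rejects `LIMIT` in this position):

```
SELECT *
FROM log
WHERE type IN (SELECT type
               FROM log
               GROUP BY type
               ORDER BY COUNT(type) DESC
               LIMIT 5);
```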
|
You didn't specify the RDBMS and this is highly dependent on which one you're using. Here are some options.
```
-- works in postgres and mysql
select type from log group by type order by count(*) desc limit 5;
-- this variant works in mssql but probably nowhere else (maybe sybase)
select top(5) type from log group by type order by count(*) desc;
-- works in postgres and mssql but not mysql or oracle
select
type
from (select
type,
row_number() over (order by count(*) desc) as r
from
log
group by
type
) as t
where
r <= 5
;
-- portable to all standards-compliant RDBMS
select
type
from (select
type,
        row_number() over (order by c desc) as r
from
(select
type,
count(*) as c
from
log
group by
type
) as t
) as t
where
r <= 5
;
-- works if you don't have windowing functions, but returns more than 5 rows
select
type
from
(select
type,
count(*) as c
from
log
group by
type
) as t
order by
c desc
;
```
|
selecting a column sorted by count but without the count
|
[
"",
"sql",
"sqlite",
""
] |
I'm trying to do a select statement that orders the result set by the strength of the connections between entities.
```
CREATE CLASS Entity EXTENDS V;
CREATE CLASS isConnectedTo EXTENDS E;
CREATE PROPERTY isConnectedTo.strength INTEGER;
```
**'isConnectedTo'** edges relate **Entities** to another **Entities**
The tricky part is I have double connections between the same entities as so:
```
CREATE VERTEX Entity SET name = "John";
CREATE VERTEX Entity SET name = "Mike";
CREATE VERTEX Entity SET name = "Susan";
CREATE EDGE isConnectedTo FROM (SELECT FROM Entity WHERE name = "Mike") TO (SELECT FROM Entity WHERE name = "John") SET strength = 3;
CREATE EDGE isConnectedTo FROM (SELECT FROM Entity WHERE name = "Mike") TO (SELECT FROM Entity WHERE name = "Susan") SET strength = 4;
CREATE EDGE isConnectedTo FROM (SELECT FROM Entity WHERE name = "John") TO (SELECT FROM Entity WHERE name = "Mike") SET strength = 2;
```
So **Mike** is connected to **John** which is connected to him too. Also **Mike** is connected to **Susan**.
In this case if I were to run my desired query on Mike it should return:
1. **John**
2. **Susan**
by this order because the sum of strength between Mike and John is **3 + 2 = 5** and between Mike and Susan is only **4**.
I've iterated through many possible queries to no avail, and I can't figure out why this last one refuses to work.
```
LET $main = SELECT FROM Entity WHERE name = "John";
LET $vset = SELECT expand(in('isConnectedTo')) FROM $main;
SELECT @rid, eval('$ei.strength + $eo.strength') as total_strength FROM $vset
LET $ei = (SELECT expand(inE('isConnectedTo')) FROM $current WHERE $main.@rid IN out),
LET $eo = (SELECT expand(outE('isConnectedTo')) FROM $current WHERE $main.@rid IN in)
ORDER BY total_strength DESC;
```
Note: I already got the Entities' names indexed with a hash dictionary and the 'isConnectedTo' edges are indexed with unique links between Entities (hence they cannot be repeated in the same direction)
All the other trials I did or help from the other answers here result in query times of >= 2 mins.
|
I'd like to propose a small change on your domain. Don't create 2 edges in case both people are connected, but use the same edge with 2 properties.
```
CREATE CLASS Entity EXTENDS V;
CREATE CLASS isConnectedTo EXTENDS E;
CREATE PROPERTY isConnectedTo.strengthOut INTEGER;
CREATE PROPERTY isConnectedTo.strengthIn INTEGER;
CREATE VERTEX Entity SET name = "John";
CREATE VERTEX Entity SET name = "Mike";
CREATE VERTEX Entity SET name = "Susan";
CREATE EDGE isConnectedTo FROM (SELECT FROM Entity WHERE name = "Mike") TO (SELECT FROM Entity WHERE name = "John") SET strengthOut = 3, strengthIn = 2;
CREATE EDGE isConnectedTo FROM (SELECT FROM Entity WHERE name = "Mike") TO (SELECT FROM Entity WHERE name = "Susan") SET strengthOut = 4;
```
This way you have fewer edges and can use this super fast query:
```
SELECT out.name as name1, in.name as name2, eval('strengthOut + strengthIn') as strength
FROM (
SELECT expand( bothE('isConnectedTo') ) FROM Entity WHERE name = "Mike"
) ORDER BY strength
```
*NOTE: Remember to create an index on Entity.name to speedup the inner query.*
|
Try this query:
```
select expand($a.rid) from (select from Entity where name="Mike")
let $a=(select @rid,sum(bothE().strength) as sum from Entity where both('isConnectedTo').name contains $parent.current.name group by name order by sum desc)
```
|
Summing edges properties for order by in select statement
|
[
"",
"sql",
"graph",
"properties",
"graph-theory",
"orientdb",
""
] |
my assignment is to create a table-valued function which takes one DATETIME parameter and returns every date from the first to the last day of that month. For example, if I give the function the date 04/12/2015, it should return all 31 dates of December.
I've already created 2 scalar functions (this was a hint in the assignment); one calculates the first day:
```
CREATE FUNCTION firstday
(
@first DATETIME
)
RETURNS DATETIME
AS
BEGIN
RETURN CAST(MONTH(@first) AS VARCHAR) + '/' + '01/' + CAST(YEAR(@first) AS VARCHAR)
END
```
and the second calculates the last day:
```
CREATE FUNCTION lastday
(
@last DATETIME
)
RETURNS DATETIME
AS
BEGIN
RETURN CONVERT(VARCHAR(25),DATEADD(dd,-(DAY(DATEADD(mm,1,@last))),DATEADD(mm,1,@last)),101)
END
```
and here is the table-valued function, but I don't know how to use those scalar functions within it:
```
CREATE FUNCTION get_months_dates
(
@mydate DATETIME
)
RETURNS TABLE
AS
RETURN
WITH Calender AS
(
SELECT **first_function** AS CalenderDate
UNION ALL
SELECT CalenderDate + 1 FROM Calender
WHERE CalenderDate + 1 <= **last_function**
)
SELECT [Date] = CONVERT(VARCHAR(10),CalenderDate,25)
FROM Calender
OPTION (MAXRECURSION 0)
```
I am using SQL Server 2014.
|
We're not here to do your homework for you, or it defeats the purpose of the homework.
That said, some tips to get you going -
* Don't build your FirstDay using a CAST to string like that - it's not reliable across different servers with different system settings. Run that on a non-US server and it'd fail. Use DATEADD and DATEPART instead.
* Look up numbers / tally tables. Combine that with DATEADD and you're away.
* Functions are called inline like anything else in a SELECT statement - so SELECT Function(Parameters) FROM Table
Good luck!
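For instance, the first tip can be sketched with pure date arithmetic, which is independent of regional date formats (assuming a T-SQL `@d` parameter):

```
DECLARE @d datetime = '20151204';
-- first day: whole months elapsed since day 0, added back onto day 0
SELECT DATEADD(month, DATEDIFF(month, 0, @d), 0) AS first_day,
-- last day: first day of the next month, minus one day
       DATEADD(day, -1, DATEADD(month, DATEDIFF(month, 0, @d) + 1, 0)) AS last_day;
```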
|
Try something like this....
```
Declare @Date DATE = '20151104';
WITH X AS
(
SELECT TOP (31)
YEAR(@Date) AS [Year]
,MONTH(@Date) AS [Month]
,RIGHT('00' + CAST(ROW_NUMBER()
OVER (ORDER BY (SELECT NULL)) AS VARCHAR(2)),2) [Days]
FROM master..spt_values
), Dates AS
(
SELECT TRY_CONVERT ( DATE, ( CAST( [Year] AS varchar(4))
+ CAST( [Month] AS varchar(2))
+ CAST( [Days] AS varchar(2))) ) DatesVals
FROM X
)
Select * FROM Dates
Where DatesVals IS NOT NULL
```
|
find out every date from first to last day of the month using table valued function
|
[
"",
"sql",
"sql-server",
""
] |
I have the following query:
```
SELECT
CASE
WHEN ([DBO].fn_WorkDays(GETDATE(), DATEADD(d, 1, EOMONTH(GETDATE()))) = 2)
THEN 1
END
```
That should return 1 if the condition is true (obviously), but I want it to return nothing (no rows) if false. Instead, it's returning a null row. How do I force it to return 1 or nothing?
|
The way you have this coded it will always return a row. You could rearrange your query to put the scalar function in the where clause.
```
SELECT 1
where [DBO].fn_WorkDays(GETDATE(), DATEADD(d, 1, EOMONTH(GETDATE()))) = 2
```
Of course, you might reconsider using that scalar function at all; scalar functions are notoriously poor for performance.
|
How about
```
IF [DBO].fn_WorkDays(GETDATE(), DATEADD(d, 1, EOMONTH(GETDATE()))) = 2
SELECT 1
```
|
Force a SQL Server query to return no rows
|
[
"",
"sql",
"sql-server",
""
] |
I have a database where I am trying to count the total cancellations and changes. The problem is that there are duplicate rows in the database, and I only want to use one row for each id. In the example below, I want my final count to be cancellations: 5, changes: 1. The sum part is working correctly, but the MIN(id) is selecting every row instead of only one row per id. Thanks for any help; I have included my SQL below.
```
cancellations | changes | id |orderNumber |
--------------|---------|-----|------------|
3 |1 | 5 |4 |
3 |1 | 5 |4 |
2 |0 | 7 |5 |
SELECT SUM(cancellations),SUM(changes)
FROM table_1
WHERE id = (SELECT MIN(id))
AND orderNumber > 3
AND (cancellations >0) AND id = (SELECT
MIN(id))
```
|
You have to use `DISTINCT`:
```
SELECT
SUM(cancellations), SUM(changes)
FROM (
SELECT DISTINCT * FROM table_1
) AS t
```
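Keeping the question's original filters, the full query would look like this sketch (the `MIN(id)` trick becomes unnecessary once `DISTINCT` collapses the identical rows):

```
SELECT SUM(cancellations), SUM(changes)
FROM (
    SELECT DISTINCT id, cancellations, changes, orderNumber
    FROM table_1
) AS t
WHERE orderNumber > 3
  AND cancellations > 0;
```

With the sample rows above, this returns cancellations 5 and changes 1, matching the expected counts.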
|
I broke down the parts of your query into smaller ones and made a whole query out of the parts in this SQL Fiddle: [**Fiddle Here**](http://sqlfiddle.com/#!6/3808d/16)
|
SQL MIN not working correctly
|
[
"",
"mysql",
"sql",
"sql-server",
"database",
""
] |
I am trying to make a search method in my Rails app, but sadly my SQL knowledge is rather pathetic. I managed to find guides that contain the searched text in the description by:
```
@guides = Guide.where(['description LIKE ?', "%#{params[:search]}%"])
```
or in the title:
```
@guides = Guide.where(['title LIKE ?', "%#{params[:search]}%"])
```
but I don't know how to look in both title and description at the same time. Please help me solve this problem.
Additionally I would appreciate any suggestions of sources to learn SQL from for usage in Rails apps (PostgreSQL mainly I guess?).
|
You can use the keyword "or" in your SQL queries:
```
Guide.where('description LIKE :search OR name LIKE :search', search: "%#{params[:search]}%")
```
About the doc, you can try this website which seems to offer interactive courses about SQL in general: <http://www.sqlcourse.com/>
This website is about PostgreSQL: <http://www.tutorialspoint.com/postgresql/postgresql_overview.htm>
Also, `ILIKE` might be useful in this case (it is case-insensitive).
|
**The shortest answer is tag this method onto your model:**
```
def self.search_by_fields(q, fields)
sql = fields.map {|field| "#{field} LIKE ?"}.join(' OR ')
parameters = fields.map {"%#{q}%"}
where(sql, *parameters)
end
```
**A longer answer is in a concern!**
Using a concern is a great way to provide search functionality to ANY model you choose. Check this out. First create a file in your `app/models/concerns/` called `searchable.rb` with the following code:
```
#app/models/concerns/searchable.rb
module Searchable
def self.included(base)
base.extend(ClassMethods)
end
module ClassMethods
@@searchable_fields = []
@@searchable_scope = nil
def search(q, method=nil)
search_method = resolve_search_method(method)
self.send(search_method, q)
end
def search_by_fields(q, fields=nil)
fields = searchable_fields unless fields
sql = fields.map {|field| "#{field} LIKE ?"}.join(' OR ')
parameters = fields.map {"%#{q}%"}
where(sql, *parameters)
end
def search_by_scope(q, scope=nil)
scope = searchable_scope unless scope
scope.call(q)
end
def searchable_scope(scope=nil)
@@searchable_scope = scope unless scope.nil?
@@searchable_scope
end
def searchable_fields(*fields)
@@searchable_fields = fields if fields.present?
@@searchable_fields
end
private
def resolve_search_method(method)
method = method.downcase.to_sym unless method.nil?
if method == :searchable_fields ||
searchable_fields.present? && searchable_scope.nil?
:search_by_fields
elsif method == :searchable_scope ||
!searchable_scope.nil? && searchable_fields.empty?
:search_by_scope
else
raise "Unable to determine search method within #{self}, you must declare exactly one search method in the including class"
end
end
end
end
```
**Usage goes like this:**
This will enable the all columns `foo`, `bar`, `fiz` and `baz` to be queried via the SQL `WHERE LIKE`. By joining them with `OR` it allows any of them to match the query parameter.
```
#app/models/my_searchable_model.rb
class MySearchableModel < ActiveRecord::Base
include Searchable
  searchable_fields :foo, :bar, :fiz, :baz
end
```
The concern also enables you to specify a scope with Ruby's lambda syntax like so:
```
class User < ActiveRecord::Base
include Searchable
searchable_scope ->(q){where("first_name || ' ' || last_name LIKE ?", "%#{q}%")}
end
```
Note the use of the SQL concatenation operator `||`.
Now when we want to search a model that includes the concern:
```
MySearchableModel.search('foo')
```
|
Rails search SQL query
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
"postgresql",
"search",
""
] |
These are my tables:
`Room` (roomId, roomName, roomTypeId, roomStatus)
`RoomType` (roomTypeId, roomTypeName, roomTypeDesc, roomTypePrice)
`ReservationRoom` (reservationId, roomId,checkInDate, checkOutDate, totalStay)
I receive three parameters from the user (check-in date, check-out date, quantity) to search for the available room types that can be reserved, shown in a grid view.
This is my SQL that shows the available room types for reservation:
```
SELECT rt.roomTypeName, rt.roomTypePrice, count(*) as quantity
FROM Room r,
RoomType rt
WHERE roomStatus = 'Online'
AND r.roomTypeId = rt.roomTypeId
AND NOT EXISTS (SELECT 1 FROM ReservationRoom b
WHERE b.roomId = r.roomId
AND (@CheckInDate BETWEEN b.checkInDate AND b.checkOutDate
OR @CheckOutDate BETWEEN b.checkInDate AND b.checkOutDate
OR (@CheckInDate<= b.checkInDate AND @CheckOutDate >= b.checkOutDate)))
group by rt.roomTypeName, rt.roomTypePrice
Having COUNT(*) >= @quantity";
```
It works quite well, but there is a problem when my ReservationRoom table already has a record with check-in date 1/11/2015 and check-out date 2/11/2015, made by guest A for roomtypeA.
When guest B then searches for a room with check-in date 2/11/2015 and check-out date 3/11/2015, it still shows roomtypeA as unavailable. But it should be available because guest A checks out on 2/11. So is there any way to let guest B reserve the room type in this situation?
|
You could convert from type DATE columns to [DATETIME](https://dev.mysql.com/doc/refman/5.5/en/datetime.html) columns, and then compare specific times of day (i.e., your specific check-out time, such as 11am).
Or, convert all times to [UNIX\_TIMESTAMP](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_unix-timestamp), and then you will have simpler greater-than / less-than comparisons altogether, since it's just an integer.
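Alternatively, keeping plain dates but treating each stay as a half-open interval [checkIn, checkOut), so the check-out day is free again, the overlap test in the question's NOT EXISTS becomes a sketch like:

```
AND NOT EXISTS (SELECT 1
                FROM ReservationRoom b
                WHERE b.roomId = r.roomId
                  AND b.checkInDate  < @CheckOutDate  -- existing stay starts before the new one ends
                  AND b.checkOutDate > @CheckInDate)  -- and ends after the new one starts
```

With guest A's 1/11-2/11 stay and guest B's 2/11-3/11 request, `b.checkOutDate > @CheckInDate` is false, so the room correctly shows as available.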
|
Use MySQL `str_to_date` or `cast`, because you need to convert your value to the `DATE` type:
```
cast(str_to_date('1/11/2015','%d/%m/%Y')as date)
```
or
```
cast(str_to_date(checkindate,'%d/%m/%Y')as date)
```
|
Search Room Availability
|
[
"",
"mysql",
"sql",
"database",
""
] |
I'm having a hard time writing a query over a sequence of numbers: whenever the difference between two consecutive numbers is bigger than 30, the sequence should restart from that number. I have the following table, which has another column besides the number one that should be kept intact:
```
+----+--------+--------+
| Id | Number | Status |
+----+--------+--------+
| 1 | 1 | OK |
| 2 | 1 | Failed |
| 3 | 2 | Failed |
| 4 | 3 | OK |
| 5 | 4 | OK |
| 6 | 36 | Failed |
| 7 | 39 | OK |
| 8 | 47 | OK |
| 9 | 80 | Failed |
| 10 | 110 | Failed |
| 11 | 111 | OK |
| 12 | 150 | Failed |
| 13 | 165 | OK |
+----+--------+--------+
```
It should turn it into this one:
```
+----+--------+--------+
| Id | Number | Status |
+----+--------+--------+
| 1 | 1 | OK |
| 2 | 1 | Failed |
| 3 | 2 | Failed |
| 4 | 3 | OK |
| 5 | 4 | OK |
| 6 | 1 | Failed |
| 7 | 4 | OK |
| 8 | 12 | OK |
| 9 | 1 | Failed |
| 10 | 1 | Failed |
| 11 | 2 | OK |
| 12 | 1 | Failed |
| 13 | 16 | OK |
+----+--------+--------+
```
Thanks for your attention, I will be available to clear any doubt regarding my problem! :)
EDIT: Sample of this table here: <http://sqlfiddle.com/#!6/ded5af>
|
With this test case:
```
declare @data table (id int identity, Number int, Status varchar(20));
insert @data(number, status) values
( 1,'OK')
,( 1,'Failed')
,( 2,'Failed')
,( 3,'OK')
,( 4,'OK')
,( 4,'OK') -- to be deleted, ensures IDs are not sequential
,(36,'Failed') -- to be deleted, ensures IDs are not sequential
,(36,'Failed')
,(39,'OK')
,(47,'OK')
,(80,'Failed')
,(110,'Failed')
,(111,'OK')
,(150,'Failed')
,(165,'OK')
;
delete @data where id between 6 and 7;
```
This SQL:
```
with renumbered as (
select rn = row_number() over (order by id), data.*
from @data data
),
paired as (
select
this.*,
startNewGroup = case when this.number - prev.number >= 30
or prev.id is null then 1 else 0 end
from renumbered this
left join renumbered prev on prev.rn = this.rn -1
),
groups as (
select Id,Number, GroupNo = Number from paired where startNewGroup = 1
)
select
Id
,Number = 1 + Number - (
select top 1 GroupNo
from groups where groups.id <= paired.id
order by GroupNo desc)
,status
from paired
;
```
yields as desired:
```
Id Number status
----------- ----------- --------------------
1 1 OK
2 1 Failed
3 2 Failed
4 3 OK
5 4 OK
8 1 Failed
9 4 OK
10 12 OK
11 1 Failed
12 1 Failed
13 2 OK
14 1 Failed
15 16 OK
```
**Update**: using the new LAG() function allows somewhat simpler SQL without a self-join early on:
```
with renumbered as (
select
data.*
,gap = number - lag(number, 1) over (order by number)
from @data data
),
paired as (
select
*,
startNewGroup = case when gap >= 30 or gap is null then 1 else 0 end
from renumbered
),
groups as (
select Id,Number, GroupNo = Number from paired where startNewGroup = 1
)
select
Id
,Number = 1 + Number - ( select top 1 GroupNo
from groups
where groups.id <= paired.id
order by GroupNo desc
)
,status
from paired
;
```
|
Not to take credit from the answer above, but I think this is even shorter:
```
with gapped as
(   select id, number, gap = number - lag(number, 1) over (order by id)
    from @data data
)
select Id, status,
    ReNumber = Number + 1 - isnull( (select top 1 gapped.Number
                                     from gapped
                                     where gapped.id <= data.id
                                     and gap >= 30
                                     order by gapped.id desc), 1)
from @data data;
```
|
If the difference between two sequences is bigger than 30, deduct bigger sequence
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I want a list of columns, together with their table name, for columns in a database that contain only NULL values.
The table is too large and some columns contain only NULLs.
I want a stored procedure which lists the columns in a table that have no data at all (that is, every value is NULL),
so that I can trim the number of columns.
```
CREATE TABLE dbo.ngkbm_template_data_sets_
(
seq_no uniqueidentifier NOT NULL,
practice_id char(4) NULL,
created_by int NOT NULL,
create_timestamp datetime NOT NULL,
modified_by int NOT NULL,
modify_timestamp datetime NOT NULL,
create_timestamp_tz smallint NULL,
modify_timestamp_tz smallint NULL,
row_timestamp timestamp NOT NULL,
chk_combo_med varchar(1) NULL,
chk_inactive_ind varchar(1) NULL,
chk_label_values int NULL,
kbm_ind varchar(1) NULL,
opt_sp int NULL,
txt_cursor_hold varchar(1) NULL,
txt_data_set varchar(50) NULL,
txt_description_1 varchar(75) NULL,
txt_description_10 varchar(75) NULL,
txt_description_11 varchar(75) NULL,
txt_description_12 varchar(100) NULL,
txt_description_13 varchar(100) NULL,
txt_description_14 varchar(75) NULL,
txt_description_15 varchar(75) NULL,
txt_description_16 varchar(75) NULL,
txt_description_17 varchar(75) NULL
)
```
I'm just showing a few columns for readability. The original table has 324 columns and millions of rows of data. Performance is not a pressing issue for this code; I need it only for internal purposes.
This is a MySQL solution; I need a SQL Server solution:
```
SET group_concat_max_len = 4294967295; -- to overcome default 1KB limitation
SELECT CONCAT(
'SELECT * FROM ('
, GROUP_CONCAT(
'SELECT ', QUOTE(TABLE_NAME), ' AS `table`,'
, 'IF('
, 'COUNT(`', REPLACE(COLUMN_NAME, '`', '``'), '`),'
, 'NULL,'
, QUOTE(COLUMN_NAME)
, ') AS `column` '
, 'FROM `', REPLACE(TABLE_NAME, '`', '``'), '`'
SEPARATOR ' UNION ALL '
)
, ') t WHERE `column` IS NOT NULL'
)
INTO @sql
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE();
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```
|
Here's a quick stored procedure that will do a `NULL` check on all columns in a provided table.
```
Create Procedure spGetColumnsWithAllNullValues
(
@Database Varchar (100),
@Schema Varchar (100),
@Table Varchar (100)
)
As Begin
Declare @Column Varchar (100)
Declare @Columns Table
(
ColumnName Varchar (100)
)
Declare @Results Table
(
ColumnName Varchar (100)
)
Declare @Temp Table
(
Result Bit
)
Insert @Columns
Select COLUMN_NAME
From INFORMATION_SCHEMA.COLUMNS
Where IS_NULLABLE = 'YES'
And TABLE_CATALOG = @Database
And TABLE_SCHEMA = @Schema
And TABLE_NAME = @Table
Declare cur Cursor For
Select ColumnName
From @Columns
Open cur
While (1 = 1)
Begin
Fetch Next From cur Into @Column
If (@@FETCH_STATUS <> 0) Break
Declare @sql NVarchar(Max) = N'Select Case When Exists (Select * From '
+ QuoteName(@Database) + '.'
+ QuoteName(@Schema) + '.'
+ QuoteName(@Table)
+ ' Where ' + QuoteName(@Column) + ' Is Not Null) Then 0 Else 1 End'
Delete @Temp
Insert @Temp Execute (@sql)
Insert @Results
(ColumnName)
Select @Column
From @Temp
Where Result = 1
End
Close cur
Deallocate cur
Select ColumnName
From @Results
Order By ColumnName
End
```
All you need to do is supply it the database name, schema, and table name. You can tweak this as needed.
Demo Table:
```
A B DummyColumn
----------- ----------- -----------
1 1 NULL
1 2 NULL
1 3 NULL
2 5 NULL
2 4 NULL
3 NULL NULL
```
Usage:
```
Execute spGetColumnsWithAllNullValues 'Sandbox', 'dbo', 'B'
```
Output:
```
ColumnName
----------------
DummyColumn
```
|
Basically, you need to unpivot the values. There are several ways. This is a pretty easy one:
```
select name
from ngkbm_template_data_sets_ t outer apply
(values ('txt_description_1', txt_description_1),
('txt_description_2', txt_description_2),
. . .
) cols(name, val)
group by name
having count(val) = 0;
```
Yes, this does require listing out all the columns. I would suggest using `INFORMATION_SCHEMA.COLUMNS` and/or your favorite spreadsheet to generate the code represented by the `. . .`.
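A hedged helper for generating that list (run it once and paste the output into the `VALUES` clause; the table name is the one from the question):

```
SELECT ', (''' + COLUMN_NAME + ''', ' + QUOTENAME(COLUMN_NAME) + ')'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'ngkbm_template_data_sets_'
  AND IS_NULLABLE = 'YES'          -- NOT NULL columns can never be all-NULL
ORDER BY ORDINAL_POSITION;
```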
|
I have a large table with 324 columns, I want to get the list of columns which don't have any values at all
|
[
"",
"sql",
"sql-server",
""
] |
`Text` column is `NVARCHAR(MAX)` type.
```
ID Text
001 have odds and modds
002 odds>=12
003 modds
004 odds < 1
```
How can I search the `Text` column for rows that contain `odds` other than only as part of `modds`?
I tried:
```
Select * from MyTable
Where text LIKE '%odds%' AND text NOT LIKE '%modds%'
```
But the result is not correct; it does not return all the rows I want. I want to return:
```
ID Text
001 have odds and modds
002 odds>=12
004 odds < 1
```
Any ideas? Thanks!
|
```
WHERE (text LIKE '%odds%' AND text NOT LIKE '%modds%')
OR (text LIKE '%odds%odds%')
```
Some notes on how this works. First off, SQL works with "sets" of data, so we need a selector (the WHERE clause) to create our "set" (or the "set" is the entire table if none is included).
So here we create two portions of the set.
First, we select all the rows that include the value "odds" in them somewhere but do NOT include "modds" in them. This excludes rows that ONLY include "modds" in them.
Second, we include rows that have BOTH/two occurrences of "odds" in them (which is what a row containing "odds" alongside "modds" has). The "%" is a wildcard, so to break the pattern down starting at the beginning:
* "'%" anything at the start
* "'%odds" anything at the start followed by "odds"
* "'%odds%" anything at the start with anything following that
* "'%odds%odds" anything at the start with anything following that but has "odds" after that
* "'%odds%odds%'" anything at the start % with "odds" with anything in between % with "odds" following that with anything at the end %
This works for THIS SPECIFIC case because both words contain "odds", so the order is NOT significant here. IF we wanted to do that with different words, for example keeping rows with "cats" alone or with both "cats" and "dogs", but excluding rows with JUST "dogs", we would have:
```
WHERE (mycolumn LIKE '%cats%' AND mycolumn NOT LIKE '%dogs%')
OR ((mycolumn LIKE '%cats%dogs%') OR (mycolumn LIKE '%dogs%cats%'))
```
This could also be written like: (has BOTH with the AND)
```
WHERE (mycolumn LIKE '%cats%' AND mycolumn NOT LIKE '%dogs%')
OR (mycolumn LIKE '%cats%' AND mycolumn LIKE '%dogs%')
```
This would catch the values without regard to the order of the "cats" and "dogs" values in the column.
Note the groupings with the parenthesis is not optional for these last two solution examples.
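For anyone who wants to verify the pattern logic quickly, here is a small SQLite/Python sketch using the question's sample data (SQLite's `LIKE` is case-insensitive for ASCII, like SQL Server's default collation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (ID TEXT, Text TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?)", [
    ("001", "have odds and modds"),
    ("002", "odds>=12"),
    ("003", "modds"),
    ("004", "odds < 1"),
])
# Keep rows with 'odds' but not 'modds', plus rows with two 'odds' occurrences
rows = con.execute("""
    SELECT ID FROM MyTable
    WHERE (Text LIKE '%odds%' AND Text NOT LIKE '%modds%')
       OR (Text LIKE '%odds%odds%')
    ORDER BY ID
""").fetchall()
ids = [r[0] for r in rows]
print(ids)  # ['001', '002', '004']
```

Row 003 ("modds") is excluded because its only occurrence of "odds" is inside "modds", so it cannot match `'%odds%odds%'`.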
|
```
Select * from MyTable
Where text LIKE 'odds%'
```
|
Find exactly text
|
[
"",
"sql",
"sql-server",
""
] |
I initially had a web service which ran the first query (post details) and within the while loop of its results, I was running the second query to retrieve the number of comments on a post. I need to try and combine the two as now I am having to order the webservice by number of comments.
```
1. SELECT ReportID, Title, Description, posts.Pic, DatePosted, posts.UserID, FName, SName, users.Pic as userPic,
photoWidth, photoHeight
FROM posts
INNER JOIN Users
ON Users.UserID = posts.UserID
WHERE private = 0
ORDER BY ReportID Desc
LIMIT ?, 10
2. SELECT COUNT(ReportID) as numComments FROM Comments WHERE ReportID =? AND Comment IS NOT NULL
```
I'm unsure how to achieve this. Will I need to make a derived table?
My initial attempt:
```
SELECT ReportID, Title, Description, posts.Pic, DatePosted, posts.UserID, FName, SName, users.Pic as userPic,
photoWidth, photoHeight, numComments
FROM posts
INNER JOIN Users
ON Users.UserID = posts.UserID
WHERE private = 0 AND numComments = (SELECT COUNT(ReportID) as numComments FROM Comments WHERE ReportID = ReportID AND Comment IS NOT NULL)
ORDER BY numComments DESC
```
This gives the error: unknown column 'numComments' in field list.
Posts:
```
- ReportID (primary)
- Title
- Description
- Pic
- private
- DatePosted (epoch)
- photoWidth
- photoHeight
```
Comments:
```
- CommentID (primary)
- UserID
- ReportID (linking key)
- Comment (can be null if type = 'like')
- dateposted (epoch)
- type ('comment' or 'like')
```
|
If I understand your question correctly I think what you want is the following:
```
SELECT Posts.*, count(Comments.ReportID) as CommentCount FROM Posts
LEFT JOIN Comments
ON Comments.ReportID = Posts.ReportID
WHERE private = 0
GROUP BY Posts.ReportID
ORDER BY CommentCount, Posts.ReportID Desc;
```
Obviously, you will need to adjust it to contain all the fields you want and any other joins you want to do.
Here is a [demo](http://sqlfiddle.com/#!9/f99cab/3).
This will get all the posts as well as the number of Comments in each post.
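To see why the `LEFT JOIN` plus `COUNT` of a nullable column works (including posts with zero comments, and skipping the `NULL`-comment "like" rows), here is a minimal runnable sketch in SQLite/Python with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Posts (ReportID INTEGER PRIMARY KEY, private INT);
    CREATE TABLE Comments (ReportID INT, Comment TEXT);
    INSERT INTO Posts VALUES (1, 0), (2, 0), (3, 0);
    -- post 2 has one real comment and one NULL-comment 'like' row
    INSERT INTO Comments VALUES (1, 'a'), (1, 'b'), (2, 'c'), (2, NULL);
""")
# COUNT(c.Comment) counts only non-NULL comments, mirroring Comment IS NOT NULL
rows = con.execute("""
    SELECT p.ReportID, COUNT(c.Comment) AS CommentCount
    FROM Posts p
    LEFT JOIN Comments c ON c.ReportID = p.ReportID
    WHERE p.private = 0
    GROUP BY p.ReportID
    ORDER BY CommentCount DESC, p.ReportID DESC
""").fetchall()
print(rows)  # [(1, 2), (2, 1), (3, 0)]
```

Post 3 still appears with a count of 0 because the outer join produces a row of NULLs for it, and `COUNT` of a NULL expression contributes nothing.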
|
I don't have the data structures, but I think you could use this, using the count in a subquery:
```
SELECT
    ReportID, Title, Description, posts.Pic, DatePosted, posts.UserID, FName, SName, users.Pic as userPic,
    photoWidth, photoHeight,
    (SELECT COUNT(c.ReportID) FROM Comments c
     WHERE c.ReportID = posts.ReportID AND c.Comment IS NOT NULL) AS numComments
FROM posts
INNER JOIN Users
    ON Users.UserID = posts.UserID
WHERE private = 0
ORDER BY numComments DESC
```
|
join two mysql queries to order the first query by the result of the second
|
[
"",
"mysql",
"sql",
"join",
"inner-join",
""
] |
I have a table TABLE\_A with one column.
```
select VALUE from TABLE_A;
VALUE
---------
1
2
3
4
5
```
I need a second column which will give me the sum of all the values of the first column.
Expected:
```
VALUE SUM
--------- ---------
1 15
2 15
3 15
4 15
5 15
```
I need to do this without a sub-select query (a select in place of a column).
|
It could be easily done using **SUM() OVER()** analytic function.
```
SELECT VALUE,
       SUM(VALUE) OVER() as "SUM"
FROM TABLE_A;
```
**Working demo:**
```
SQL> WITH sample_data AS(
2 -- end of sample_data mocking as real table
3 SELECT 1 VALUE FROM dual UNION ALL
4 SELECT 2 VALUE FROM dual UNION ALL
5 SELECT 3 VALUE FROM dual UNION ALL
6 SELECT 4 VALUE FROM dual UNION ALL
7 SELECT 5 VALUE FROM dual
8 )
9 SELECT VALUE,
10 SUM(VALUE) OVER() as "SUM"
11 FROM sample_data;
VALUE SUM
---------- ----------
1 15
2 15
3 15
4 15
5 15
```
|
Just use a sub-select that sums:
```
select value, (select sum(value) from TABLE_A)
from TABLE_A
```
Or `JOIN`:
```
select t1.value, t2.sum_value
from TABLE_A t1
CROSS JOIN (select sum(value) as sum_value from TABLE_A) t2
```
Does Oracle support `CROSS JOIN` syntax?
|
select to sum the values of one column to other
|
[
"",
"sql",
"oracle",
"sum",
""
] |
Say I have a `TABLE Audit` with columns `ID, STATUS, TIME` and sample data as below:
```
1. {ID = 1, STATUS = 'APPROVE', TIME = '2015-02-01'}
2. {ID = 1, STATUS = 'DECLINE', TIME = '2014-12-01'}
3. {ID = 1, STATUS = 'CLOSED', TIME = '2015-11-01'}
4. {ID = 2, STATUS = 'APPROVE', TIME = '2015-02-01'}
5. {ID = 3, STATUS = 'DECLINE', TIME = '2015-10-01'}
6. {ID = 4, STATUS = 'CLOSED', TIME = '2015-02-01'}
```
There's a condition: If `status='approve'` then ignore `status='decline'` else select everything.
May I know how to construct a query so that I will get only records : 1,3,4,5,6?
My current approach is to first retrieve all rows with `status='approve' or 'closed'` and store them into a temp table, then insert the rows with `status = 'decline' and ID not in @temptable` into `@temptable`, and finally `select * from @temptable`.
I'm wondering if there's any other way to handle such situation?
|
You can use windowed functions:
```
WITH cte AS
(
SELECT *
,[dec] = COUNT(CASE WHEN STATUS = 'DECLINE' THEN 1 END) OVER (PARTITION BY ID)
,[app] = COUNT(CASE WHEN STATUS = 'APPROVE' THEN 1 END) OVER (PARTITION BY ID)
FROM #Audit
)
SELECT ID, STATUS, [TIME]
FROM cte
WHERE NOT ([dec] >= 1 AND [app] >= 1 AND [STATUS] = 'DECLINE');
```
`LiveDemo`
Your column `TIME` can be misleading especially when it holds date only :)
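The conditional windowed counts translate to other engines as well. Here is a hedged, runnable sketch of the same logic in SQLite/Python (requires SQLite 3.25+ for window functions; the data mirrors the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Audit (ID INT, STATUS TEXT)")
con.executemany("INSERT INTO Audit VALUES (?, ?)", [
    (1, 'APPROVE'), (1, 'DECLINE'), (1, 'CLOSED'),
    (2, 'APPROVE'), (3, 'DECLINE'), (4, 'CLOSED'),
])
# Count DECLINE and APPROVE rows per ID, then drop DECLINE rows
# only for IDs that also have an APPROVE row
rows = con.execute("""
    WITH cte AS (
        SELECT *,
               COUNT(CASE WHEN STATUS = 'DECLINE' THEN 1 END)
                   OVER (PARTITION BY ID) AS dec_cnt,
               COUNT(CASE WHEN STATUS = 'APPROVE' THEN 1 END)
                   OVER (PARTITION BY ID) AS app_cnt
        FROM Audit
    )
    SELECT ID, STATUS FROM cte
    WHERE NOT (dec_cnt >= 1 AND app_cnt >= 1 AND STATUS = 'DECLINE')
    ORDER BY ID, STATUS
""").fetchall()
print(rows)
```

ID 1 keeps its APPROVE and CLOSED rows but loses DECLINE, while ID 3's lone DECLINE survives because it has no APPROVE, matching the records 1, 3, 4, 5, 6 the question asks for.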
|
The idea is to get all the ones that have been approved and, for those, skip the declined ones.
I couldn't try it right now, but it should work:
```
;WITH App (ID, app) AS
(SELECT ID, COUNT(1) FROM #Audit WHERE STATUS = 'APPROVE' GROUP BY ID UNION
SELECT ID, 0 FROM #Audit WHERE ID NOT IN (SELECT ID FROM #Audit WHERE STATUS = 'APPROVE') GROUP BY ID )
SELECT *, ISNULL(App.app, 0)
FROM #Audit LEFT OUTER JOIN App ON #Audit.ID = App.ID
WHERE ISNULL(App.app, 0) = 0 OR (App.app = 1 AND #audit.status != 'DECLINE')
```
|
How to select data based on special condition?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I just can't find a solution to my problem:
I have a table employees with id's name and the id they report to.
```
Employees:
id name reports_to
2 name2 55
3 name3 2
4 name4 3
5 name5 3
6 name6 2
7 name7 33
8 name8 55
```
Now I need to select the people that report to id 55, and all the people that report to those people (e.g. id 2), but just one level down.
the result would look like this:
```
2 name2 55
3 name3 2
6 name6 2
8 name8 55
```
I tried joins and subquery but without success.
Edit:
If possible the result should be ordered in a way my result table looks:
- person that reports to id 55 (i.e. #2)
- person that report to #2 should follow immediately
|
You have a hierarchy that has been modelled using the adjacency list model. The advantage of this model is that it is fairly intuitive to the modeller and facilitates updates for the user. However, the disadvantage, as you've discovered, is that it is not so easy to query because you need a new join (or equivalent) for each level in the hierarchy. This is particularly tricky when the number of levels is arbitrary.
For mysql solutions, see [adjacency list vs nested set](https://stackoverflow.com/questions/31641504/adjacency-list-model-vs-nested-set-model-for-mysql-hierarchical-data).
|
Here is one method, using a subquery to get the second level:
```
select e.*
from employees e
where e.reports_to = 55 or
e.reports_to in (select e2.id from employees e2 where e2.reports_to = 55);
```
EDIT:
To get this sorted requires using `join` rather than `in`. I think the logic looks like this:
```
select e.*
from employees e left join
employees e2
on e.reports_to = e2.id
where 55 in (e.reports_to, e2.reports_to)
order by coalesce(e2.id, e.id), (e2.id is null) desc
```
|
join results from the same table
|
[
"",
"mysql",
"sql",
""
] |
I have a field that is `varchar(8)`, holding date values that I converted from `float` to `varchar`.
Some records have eight characters, and some have seven. I would like to make them all the same length by adding a leading zero to the ones that have 7.
* 8 char example: 12162003
* 7 char example: 5072004 (needs a leading zero)
The query:
```
select birthdate_new from table_name
```
|
A function that will work in more situations is REPLICATE. It repeats a value X times and concatenates the result to a string.
```
SELECT REPLICATE('0', 8 - LEN(birthdate_new)) + birthdate_new AS padded_birthdate
```
This will take the length of your birthdate, subtract it from 8, then put that many leading 0's on the front to make it 8 chars.
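`REPLICATE` and `RIGHT` are T-SQL specific, but the pad-then-truncate idea is portable. As a quick check, here is the equivalent in SQLite (via Python), where `substr(x, -8)` keeps the rightmost 8 characters, just like `RIGHT('00000000' + col, 8)`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (birthdate_new TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("12162003",), ("5072004",)])
# Prepend eight zeros, then keep only the rightmost 8 characters
rows = con.execute(
    "SELECT substr('00000000' || birthdate_new, -8) FROM t"
).fetchall()
padded = [r[0] for r in rows]
print(padded)  # ['12162003', '05072004']
```

Strings that are already 8 characters long pass through unchanged, and the 7-character value gains its leading zero.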
|
You can use `RIGHT`:
```
SELECT RIGHT('00000000' + birthdate_new, 8) AS birthdate_new
FROM table_name;
```
`LiveDemo`
If you want to `UPDATE` field use:
```
UPDATE table_name
SET birthdate_new = RIGHT('00000000' + birthdate_new, 8)
WHERE LEN(birthdate_new) < 8;
```
|
Add leading zeros to a varchar field
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have query written below.
```
SELECT T1.[AcctCode],T1.[Segment_0],SUM(T0.[DebLTotal]) AS BUDGET, SUM(T3.[Debit]) AS DEBIT,
AcctName = CASE WHEN T1.[Segment_0] LIKE '6001%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6002%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6003%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6004%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6005%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6006%%' THEN 'Operating Cost'
ELSE T1.[AcctName] END
FROM OBGT T0 INNER JOIN OACT T1 ON T0.[AcctCode]=T1.[AcctCode] INNER JOIN OBGS T2
ON T0.[Instance] = T2.[AbsId] INNER JOIN JDT1 T3 ON T1.[AcctCode] = T3.[Account] INNER JOIN OASC T4 ON T1.[Project]=T4.[Code]
where T1.[Segment_0] like '60%%'
GROUP BY T1.[AcctCode],T1.[Segment_0],T1.[AcctName]
```
This query gives me different values for operating cost. I need something like `WHEN T1.[Segment_0] LIKE '6002%%,6003%%,6004%%' THEN 'Operating Cost'`. Is it possible? I don't need different likes by one by one. I need all in one as operating cost. Please help me out for this.
|
If `T1.[Segment_0]` are only numeric value you can use this syntax with `between`:
```
case when substring(T1.[Segment_0], 1, 4) between 6001 and 6006 then 'Operating Cost' else T1.[AcctName] end
```
Else you can use `IN` :
```
case when substring(T1.[Segment_0], 1, 4) in ('6001','6002','6003','6004','6005','6006') then 'Operating Cost' else T1.[AcctName] end
```
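To illustrate the prefix test concretely, here is a small SQLite/Python sketch (account names are invented; note that `BETWEEN` on the 4-character substring is a string comparison here, which works because all the prefixes have the same length):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Segment_0 TEXT, AcctName TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    ("600110", "Rent"), ("600705", "Travel"), ("609900", "Misc"),
])
# Classify by the first four characters instead of six separate LIKEs
rows = con.execute("""
    SELECT CASE WHEN substr(Segment_0, 1, 4) BETWEEN '6001' AND '6006'
                THEN 'Operating Cost' ELSE AcctName END
    FROM t
""").fetchall()
names = [r[0] for r in rows]
print(names)  # ['Operating Cost', 'Travel', 'Misc']
```

Only the account whose code starts with 6001-6006 collapses into 'Operating Cost'; the rest keep their own names.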
|
Perhaps your issue is that you are including `AcctName` in the `group by` rather than the result of the `case`. Perhaps this does what you want:
```
SELECT T1.[AcctCode],T1.[Segment_0],SUM(T0.[DebLTotal]) AS BUDGET, SUM(T3.[Debit]) AS DEBIT,
AcctName = (CASE WHEN T1.[Segment_0] LIKE '6001%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6002%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6003%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6004%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6005%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6006%%' THEN 'Operating Cost'
ELSE T1.[AcctName] END)
FROM OBGT T0 INNER JOIN OACT
T1
ON T0.[AcctCode]=T1.[AcctCode] INNER JOIN
OBGS T2
ON T0.[Instance] = T2.[AbsId] INNER JOIN
JDT1 T3
ON T1.[AcctCode] = T3.[Account] INNER JOIN
OASC T4
ON T1.[Project] = T4.[Code]
where T1.[Segment_0] like '60%%'
GROUP BY T1.[AcctCode], T1.[Segment_0],
(CASE WHEN T1.[Segment_0] LIKE '6001%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6002%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6003%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6004%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6005%%' THEN 'Operating Cost'
WHEN T1.[Segment_0] LIKE '6006%%' THEN 'Operating Cost'
ELSE T1.[AcctName]
END);
```
EDIT:
You might also want to remove `Segment_0` from both the `select` and `group by`. Without sample data, it is hard to tell.
|
multiple Like expressions in sql query using case expression
|
[
"",
"sql",
"sql-server",
""
] |
I am writing a script that queries a table and counts all rows that have a status of 10, separating the total count by month. Something is
not right, considering I have two Decembers in my results.
In November there is only one date, meaning two rows have a status of 10 under the same date (11-04). December has 252 rows with the same date (12-04) and 1 row with a 12-05 date.
How to query and separate a count and date by months?
Any help is most appreciated.
```
SELECT CONVERT(CHAR(3), Datename(month, datecomplete)) AS Month,
Count(*) AS Val
FROM nwds
WHERE status = 10
GROUP BY Datediff(month, 0, datecomplete),
datecomplete
```
My Results
```
Nov 2
Dec 252
Dec 1
```
Desired Results
```
Nov 2
Dec 253
```
|
```
SELECT LEFT(DATENAME(M, datecomplete), 3) AS Month,
Count(*) AS Val
FROM nwds
WHERE status = 10
GROUP BY LEFT(DATENAME(M, datecomplete), 3)
```
If you ever get data for different years, you can add the year to the `GROUP BY`.
```
SELECT LEFT(DATENAME(M, datecomplete), 3) AS Month,
YEAR(datecomplete) AS Year,
Count(*) AS Val
FROM nwds
WHERE status = 10
GROUP BY LEFT(DATENAME(M, datecomplete), 3), YEAR(datecomplete)
```
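The core fix, grouping on a month-level expression rather than the raw date, is easy to verify in any engine. A hedged SQLite/Python sketch (data invented to mirror the question; `strftime` stands in for `DATENAME`/`YEAR`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nwds (status INT, datecomplete TEXT)")
con.executemany("INSERT INTO nwds VALUES (?, ?)", [
    (10, "2015-11-04"), (10, "2015-11-04"),
    (10, "2015-12-04"), (10, "2015-12-05"), (7, "2015-12-06"),
])
# Group by year-month so different days in the same month collapse together
rows = con.execute("""
    SELECT strftime('%Y-%m', datecomplete) AS ym, COUNT(*)
    FROM nwds
    WHERE status = 10
    GROUP BY ym
    ORDER BY ym
""").fetchall()
print(rows)  # [('2015-11', 2), ('2015-12', 2)]
```

The two December days (12-04 and 12-05) now fall into a single '2015-12' bucket, which is exactly what grouping by `datecomplete` itself failed to do.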
|
`datecomplete` should be excluded from `group by`, or the results would be grouped by day. Grouping needs to be on `month` and `year` part of datecomplete column.
```
SELECT
CONVERT(CHAR(3), Datename(month, datecomplete)) AS Month,
count(*) AS Val
FROM nwds
WHERE status = 10
GROUP BY CONVERT(CHAR(3), Datename(month, datecomplete)) ,
datepart(yyyy, datecomplete)
```
|
SQL Sum and Count Separated by Month
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm doing an assignment on SQL Server 2012 where I have to
> 7) Create a stored procedure (call it `SQL7`) which will retrieve the
> Charity ID and Charity Name and the total of all the contribution
> amounts that each charity has in the contribution table.
`Charity ID` and `Charity Name` are in one table and the second table has `CharityID` and `Total` contributions.
I don't know how to add the total contributions that each charity has received and output it. The code I have so far is
```
create proc SQL7
as
select distinct
dbo.CharityTbl.CharityID, CharityName,
from
dbo.CharityTbl, dbo.ContributionsTbl
where
dbo.CharityTbl.CharityID = dbo.ContributionsTbl.CharityID
```
Thanks in advance!
|
You'll probably need grouping (assuming table1 holds charity information and table2 holds contribution information)
```
create procedure SQL7
as
select a.charityid, a.charityname, sum(b.totalcontributions) as totals
from CharityTbl a
left join ContributionsTbl b on a.charityid = b.charityid
group by a.charityid, a.charityname
```
SQLFiddle example: <http://sqlfiddle.com/#!3/67cca> and <http://sqlfiddle.com/#!3/ca00e>
```
create table charityTbl (charityid int, charityname varchar(100));
insert into charityTbl values (1, 'Red Cross'), (2, 'Doctors without borders');
create table contributionsTbl (charityid int, totalcontributions int);
insert into contributionsTbl values
(1, 100),
(1, 200),
(2, 500);
```
(Just think that this was a stored procedure and was called with `exec SQL7`)
```
select a.charityid, a.charityname, sum(b.totalcontributions) as totals
from CharityTbl a
left join ContributionsTbl b on a.charityid = b.charityid
group by a.charityid, a.charityname
```
Result:
```
| charityid | charityname | totals |
|-----------|-------------------------|--------|
| 2 | Doctors without borders | 500 |
| 1 | Red Cross | 300 |
```
|
You need to learn about [Join](http://www.w3schools.com/sql/sql_join.asp)
```
CREATE PROC SQL7
AS
SELECT
charity.CharityID
,charity.CharityName
    ,SUM(contrib.TotalContrib)
FROM
dbo.CharityTbl AS charity
INNER JOIN dbo.ContributionsTbl AS contrib
ON contrib.CharityID = charity.CharityID
GROUP BY
charity.CharityID
,charity.CharityName
```
|
SQL Server 2012 add value of rows with matching ids
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
SQL Server
I have a parameter that contains a comma delimited string:
> 'abc,def,ghi'
I want to use that string in a IN statement that would take my parameter like this:
```
select * from tableA where val IN ('abc','def','ghi')
```
Any ideas on how I would do this?
|
If using dynamic SQL is an option, this can be executed:
```
SELECT 'SELECT * FROM tableA WHERE val IN (' +
'''' + REPLACE('abc,def,ghi', ',', ''',''') + ''')'
```
Basically, the `REPLACE()` function wraps each item in quotes by turning every **,** separator into **','**, so the list becomes a valid `IN` list.
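If the statement is being built from application code rather than inside T-SQL, a safer alternative is to split the string and generate one placeholder per item, which avoids injection entirely. A hedged Python/SQLite sketch (table and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableA (val TEXT)")
con.executemany("INSERT INTO tableA VALUES (?)",
                [("abc",), ("def",), ("zzz",)])

param = "abc,def,ghi"
items = param.split(",")
# Build "?,?,?" so each item is bound as a parameter, never concatenated
placeholders = ",".join("?" for _ in items)
rows = con.execute(
    f"SELECT val FROM tableA WHERE val IN ({placeholders})", items
).fetchall()
vals = sorted(r[0] for r in rows)
print(vals)  # ['abc', 'def']
```

Only the placeholder count is interpolated into the SQL text; the values themselves travel as bound parameters.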
|
The simplest way would be to do something like this:
```
SELECT *
FROM TableName
WHERE ',' + commaDelimitedString + ',' LIKE '%,' + FieldName + ',%'
```
But be careful about SQL injection. You might want to parameterize it.
|
SQL IN Statement splitting parameter
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to calculate the standard deviation of multiple columns for each given ID.
I have a table that shows demand over time and I need to calculate the volatility of the demand.
```
SELECT id ::Text, n0 ::numeric, n1 ::numeric, n2 ::Numeric, n3 ::numeric, n4 ::numeric, n5 ::numeric, n6 ::numeric, n7 ::numeric
FROM mytable
```
I would like to add another column that calculates the standard deviation from the values in columns `n0-n7` for each `id`.
|
Have you tried [stddev\_samp](http://www.postgresql.org/docs/9.4/static/functions-aggregate.html)? Or do you mean the standard deviation **between** columns? If so, take a look at [this SO question](https://stackoverflow.com/questions/23596515/aggregate-functions-across-multiple-columns-in-postgres).
|
Hmmm. Probably the easiest way is to unpivot the table and re-aggregate. For instance:
```
select id, stddev(n)
from ((select id, n0 as n from mytable) union all
(select id, n1 as n from mytable) union all
(select id, n2 as n from mytable) union all
(select id, n3 as n from mytable) union all
(select id, n4 as n from mytable) union all
(select id, n5 as n from mytable) union all
(select id, n6 as n from mytable) union all
(select id, n7 as n from mytable)
) t
group by id;
```
Note: you can choose between `stddev_pop()` or `stddev_samp()`. The latter would be the typical choice for this problem.
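If the unpivot happens in the client instead of the database, the same per-row statistic is a one-liner. A hedged Python sketch (narrowed to three columns, data invented; `statistics.stdev` is the sample standard deviation, matching `stddev_samp`):

```python
import sqlite3
from statistics import stdev

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id TEXT, n0 REAL, n1 REAL, n2 REAL)")
con.executemany("INSERT INTO mytable VALUES (?, ?, ?, ?)", [
    ("a", 1, 2, 3), ("b", 10, 10, 10),
])
# Unpivot each row in the client and apply the sample standard deviation
result = {row[0]: round(stdev(row[1:]), 4)
          for row in con.execute("SELECT * FROM mytable")}
print(result)  # {'a': 1.0, 'b': 0.0}
```

A constant row yields 0.0, and (1, 2, 3) yields 1.0, which is a quick sanity check that the per-row (rather than per-column) deviation is being computed.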
|
Postgres Standard deviation of multiple columns
|
[
"",
"sql",
"postgresql",
""
] |
I am new to SQL and I want to write a query that adds up (sums) multiple rows in a table.
For example:
Table:
```
matchid|player1id|player2id|player1score|player2score
101 |20 |10 |0 |100
101 |20 |10 |0 |100
101 |20 |10 |0 |100
201 |20 |10 |645 |0
201 |20 |10 |100 |700
201 |20 |10 |0 |100
```
Required output:
```
matchid|player1id|player2id|player1score|player2score
101 |20 |10 |0 |300
201 |20 |10 |745 |800
```
Note: I have to do this without using `GROUP BY`
|
Without using `GROUP BY`:
```
SELECT *
FROM (
SELECT DISTINCT matchid, player1id, player2id FROM tbl
) AS t
CROSS APPLY(
SELECT
SUM(player1score), SUM(player2score)
FROM tbl
WHERE
matchid = t.matchid
AND player1id = t.player1id
AND player2id = t.player2id
) AS x(player1score, player2score)
```
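`CROSS APPLY` is SQL Server specific, but the same "distinct keys plus correlated sums" shape can be expressed with plain correlated subqueries. A hedged SQLite/Python sketch using the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tbl (matchid INT, player1id INT, player2id INT,
                                 player1score INT, player2score INT)""")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?, ?)", [
    (101, 20, 10, 0, 100), (101, 20, 10, 0, 100), (101, 20, 10, 0, 100),
    (201, 20, 10, 645, 0), (201, 20, 10, 100, 700), (201, 20, 10, 0, 100),
])
# One correlated SUM per score column, driven by the distinct key triples
rows = con.execute("""
    SELECT t.matchid, t.player1id, t.player2id,
           (SELECT SUM(player1score) FROM tbl
             WHERE matchid = t.matchid AND player1id = t.player1id
               AND player2id = t.player2id) AS p1,
           (SELECT SUM(player2score) FROM tbl
             WHERE matchid = t.matchid AND player1id = t.player1id
               AND player2id = t.player2id) AS p2
    FROM (SELECT DISTINCT matchid, player1id, player2id FROM tbl) t
    ORDER BY t.matchid
""").fetchall()
print(rows)  # [(101, 20, 10, 0, 300), (201, 20, 10, 745, 800)]
```

This reproduces the required output exactly, though in practice a plain `GROUP BY` (as in the other answer) is both simpler and cheaper.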
|
```
SELECT
    matchid, player1id, player2id,
SUM(player1score) as player1score,
SUM(player2score) as player2score
FROM
tablename
GROUP BY
matchid, player1id, player2id
```
|
add specific multiple rows (SQL)
|
[
"",
"sql",
"sql-server",
""
] |
I am using SQL Server to create my database.
I want to add a column to my Table which would calculate the number of NULL values in each row, like this:
```
Column1 | Column2 | Column3 | Score
a | B | C | 0
x | NULL | NULL | 2
```
Currently, I have this:
```
Column1 | Column2 | Column3
a | B | C
x | NULL | NULL
```
I have created a new column called Score, and in order to calculate it, I have used:
```
SELECT
CASE WHEN Column1 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column2 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column3 IS NULL THEN 1 ELSE 0 END
As TMP
FROM MyTable
```
That returns a column with all my lines and the Score for each line:
```
|TMP
1 |0
2 |2
```
I would like to update the column Score in myTable with those values.
Thanks for your help.
|
You could use a computed column - a virtual column that is always computed with a given expression, and not stored on disk. This way, you avoid problems with data consistency. The syntax is easy:
```
CREATE TABLE myTab
(
column1 datatype
, column2 datatype
...
, Score AS CASE WHEN Column1 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column2 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column3 IS NULL THEN 1 ELSE 0 END
);
```
In order to alter the existing table and add such a column, use:
```
ALTER TABLE myTab ADD Score AS CASE WHEN Column1 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column2 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column3 IS NULL THEN 1 ELSE 0 END
```
Source: <https://msdn.microsoft.com/en-us/library/ms188300.aspx>
|
It's generally a bad idea to store calculated values depending on other columns. (Data redundancy, risk of data inconsistency.) Create a view instead:
```
create view MyView as
SELECT column1, column2, column3,
CASE WHEN Column1 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column2 IS NULL THEN 1 ELSE 0 END +
CASE WHEN Column3 IS NULL THEN 1 ELSE 0 END As TMP
FROM MyTable
```
|
Join the result of a Temp column to a table SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
My task is to rebuild database structure from queries found in code, these are queries that I have found:
Query 1:
```
Select o.id, n.nazwa, n.nazwa, n.typ, o.kwota, o.pozostala_kwota, o.pozostale_raty
from oplaty o
inner join wlasciciel w on w.id = o.id_wlasciciela
inner join nieruchomosc n on n.id = o.id_nieruchomosci
where w.dane_osobowe = ?
and o.rok = ?
```
Query 2:
```
Select n.id, n.powierzchnia, n.nazwa, n.typ, wn.procent_posiadania
from wlasciciele_nieruchomosci wn
inner join wlasciciel w on w.id = wn.id_wlasciciela
inner join nieruchomosc n on n.id = wn.id_nieruchomosci
where w.id = ?
```
What does this notations mean:
`select o.id` - why is there a dot, and what does `o` stand for?
`wlasciciel w` - I know `wlasciciel` is a table name, but what does `w` stand for?
|
After the table name `oplaty` you have the alias `o`, and the expression `select o.id` means that you want to select `id` from the table aliased as `o`, so it will be `id` from the table `oplaty`.
You can use an alias with the keyword `AS` or without it.
For more information, see:
<http://dev.mysql.com/doc/refman/5.7/en/select.html>
|
The "o" and the "w" are simply aliases for the table name. At this point, I'd say you need to take time on an SQL Tutorial, such as: <http://www.w3schools.com/sql/>.
|
How to rebuild database from queries
|
[
"",
"mysql",
"sql",
""
] |
I am facing the following problem.
I have a database with a table which saves Dates (with its time).
Now I would like to retrieve all the rows where the date is between two timestamps, but I am getting the following error:
ORA-01830: date format picture ends before converting entire input string.
What I did so far is this query:
```
SELECT * FROM ARBEITSBLOCK WHERE STARTZEIT BETWEEN '30.11.2015 19:00:00'
and '01.12.2015 19:05:00';
```
And this which doesn't give me any result but there should be:
```
SELECT * FROM ARBEITSBLOCK
WHERE TO_CHAR(STARTZEIT,'DD.MM.YYYY H24:MM:SS') BETWEEN '30.11.2015 13:00:00'
and '01.12.2015 19:05:00';
```
|
Try this statement (using Oracle syntax)
```
SELECT *
FROM ARBEITSBLOCK
WHERE STARTZEIT BETWEEN TO_DATE ('12/04/2015 09:00:00 AM', 'mm/dd/yyyy hh:mi:ss AM')
AND TO_DATE ('12/04/2015 10:00:00 AM', 'mm/dd/yyyy hh:mi:ss AM');
```
|
If STARTZEIT is a DATE column, then why are you trying to compare it to a string?
By doing that, you are relying on Oracle being able to say "aha! This string is really a date, so I will attempt to convert it for you!". That's all well and good, but how will Oracle know how the date-in-the-string is formatted?
Well, there's the nls\_date\_format parameter which is defaulted to 'DD-MON-RR', and I think you can now see why you're getting the "date format picture ends before converting entire input string" error, since 'DD-MON-RR' is a lot shorter than '30.11.2015 19:00:00'.
Instead of relying on this implicit conversion and the bugs that go right along with that (as you've discovered!), you should explicitly convert the string into a date, which you can easily do with the `to_date()` function.
E.g.:
```
select *
FROM ARBEITSBLOCK
WHERE STARTZEIT BETWEEN to_date('30.11.2015 19:00:00', 'dd.mm.yyyy hh24:mi:ss')
and to_date('01.12.2015 19:05:00', 'dd.mm.yyyy hh24:mi:ss');
```
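The same "explicit format mask, no implicit conversion" principle applies in any language. A hedged Python sketch showing the equivalent of `TO_DATE(..., 'dd.mm.yyyy hh24:mi:ss')`, where the format string must account for the entire input, just as Oracle's error message complains:

```python
from datetime import datetime

# %d.%m.%Y %H:%M:%S covers the whole '30.11.2015 19:00:00' string,
# the moral equivalent of the Oracle format mask 'dd.mm.yyyy hh24:mi:ss'
fmt = "%d.%m.%Y %H:%M:%S"
start = datetime.strptime("30.11.2015 19:00:00", fmt)
end = datetime.strptime("01.12.2015 19:05:00", fmt)
sample = datetime.strptime("01.12.2015 08:30:00", fmt)
in_range = start <= sample <= end
print(in_range)  # True
```

Once parsed, the values compare as real timestamps rather than strings, so the BETWEEN-style range check behaves correctly across month boundaries.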
|
Select date from between two timestamps
|
[
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
This is probably a really basic question but I have spent a lot of time on this.
I have 2 Tables:
**Files**
```
| FileId | Type | FolderId |
----------------------------
| 1 | txt | 11 |
| 2 | xml | 15 |
| 3 | xml | 17 |
| 4 | txt | 19 |
```
**Folders**
```
| FolderId | path | version | typeId |
-------------------------------------------
| 11 | //c:/here | 1 | 1 |
| 19 | //c:/here | 3 | 1 |
| 15 | //c:/dummy| 6 | 1 |
| 17 | //c:/dummy| 4 | 1 |
```
I want it to return: All info from `Files`, with the highest `Folder.Version` number, where the `Files.type`, `Folders.path` and `Folders.typeID` all match.
Essentially I need to group by `Files.type`, where `Folders.path` & `Folders.typeID` are the same. Then return the item with the max version field for each group.
So in the example above I would want the output:
```
| FileId | Type | FolderId |
----------------------------
| 2 | xml | 15 |
| 4 | txt | 19 |
```
So far I have the query below, which I think is ok for getting the items with the highest version number when TypeId and path match (I think????). But I cannot figure out how to group by Files.Type.
```
SELECT * FROM Files
WHERE FileId IN (
SELECT FileId FROM Files
WHERE FolderId IN (
SELECT fo.FolderId
FROM Folders fo
LEFT JOIN Folders fo2
ON fo.Path = fo2.Path AND fo.TypeId = fo2.TypeId
AND fo.Version < fo2.Version
WHERE fo2.Version IS NULL
)
)
```
Can anyone explain how I can get this query done? I have a feeling I need some group by statements but I can't figure it out!
This is SQL server 2012.
|
You didn't specify a version of SQL Server.
```
select Files.*
from
Files inner join
(
select f1.FolderId
from Folders f1 inner join
(
select TypeId, Path, max(Version) as MaxVersion
from Folders
    group by TypeId, Path
) as f2
  on f2.TypeId = f1.TypeId and f2.Path = f1.Path and f2.MaxVersion = f1.Version
) Folders
on Folders.FolderId = Files.FolderId
```
If you have a later edition of SQL Server then this should work too:
```
select Files.*
from
Files inner join
(
  select distinct first_value(FolderId)
over (partition by TypeId, Path order by Version desc) as FolderId
from Folders
) Folders
on Folders.FolderId = Files.FolderId
```
EDIT: Maybe this is the fix per your comment:
```
select Files.*
from
Files fi inner join Folders fo on fo.FolderId = fi.FolderId
inner join
(
select "Type", TypeId, Path, max(Version) as MaxVersion
from Files fi2 inner join Folders fo2 on fo2.FolderId = fi2.FolderId
group "Type", TypeId, Path
) as mv
on mv."Type" = fi."Type"
and mv.TypeId = fo.TypeId and mv.Path = fo.Path
and mv.MaxVersion = fo.Version
```
I'm in a rush but I think this might work on SQL Server 2012:
```
select distinct
first_value(FileId)
over (partition by "Type", TypeId, Path order by Version desc) as FileId,
"Type",
first_value(fo.FolderId)
over (partition by "Type", TypeId, Path order by Version desc) as FolderId
from
Files fi inner join Folders fo on fo.FolderId = fi.FolderId
```
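Since SQL Server 2012 is available, the window-function route can also be written with `ROW_NUMBER`, which many find easier to reason about than `first_value` plus `distinct`. A hedged, runnable sketch in SQLite/Python (needs SQLite 3.25+ for window functions) using the question's exact data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Files (FileId INT, Type TEXT, FolderId INT);
    CREATE TABLE Folders (FolderId INT, Path TEXT, Version INT, TypeId INT);
    INSERT INTO Files VALUES (1,'txt',11),(2,'xml',15),(3,'xml',17),(4,'txt',19);
    INSERT INTO Folders VALUES (11,'//c:/here',1,1),(19,'//c:/here',3,1),
                               (15,'//c:/dummy',6,1),(17,'//c:/dummy',4,1);
""")
# Rank each file within its (Type, TypeId, Path) group by folder version,
# then keep only the top-ranked row per group
rows = con.execute("""
    SELECT FileId, Type, FolderId FROM (
        SELECT fi.*, ROW_NUMBER() OVER (
                   PARTITION BY fi.Type, fo.TypeId, fo.Path
                   ORDER BY fo.Version DESC) AS rn
        FROM Files fi JOIN Folders fo ON fo.FolderId = fi.FolderId
    ) WHERE rn = 1 ORDER BY FileId
""").fetchall()
print(rows)  # [(2, 'xml', 15), (4, 'txt', 19)]
```

This matches the desired output in the question: file 2 wins the xml group (version 6 beats 4) and file 4 wins the txt group (version 3 beats 1).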
|
Give this a shot:
```
select f.*
from files f
inner join folders dir on f.folderid = dir.folderid
inner join (
select type, max(version) maxver
from files fi
inner join folders fo on fi.folderid = fo.folderid
group by type
) t on f.type = t.type and dir.version = t.maxver
```
Result:
```
| fileid | type | folderid |
|--------|------|----------|
| 4 | txt | 19 |
| 2 | xml | 15 |
```
Since SQLFiddle SQL Server 2008 and 2014 were not responding well, I created an example with MySQL here: <http://sqlfiddle.com/#!9/7abd4/13>. I expect the query results to be identical for SQL Server 2012.
|
SQL: Getting items which have a max value in another tables column, but also match some cross table clauses
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am creating a file organization system where you can add content items to multiple folders.
I am storing the data in a table that has a structure similar to the following:
```
ID TypeID ContentID FolderID
1 101 1001 1
2 101 1001 2
3 102 1002 3
4 103 1002 2
5 103 1002 1
6 104 1001 1
7 105 1005 2
```
I am trying to select the first record for each unique TypeID and ContentID pair. For the above table, I would want the results to be:
> **ID**
> 1
> 3
> 4
> 6
> 7
As you can see, the pairs 101 1001 and 103 1002 were each added to two folders, yet I only want the record with the first folder they were added to.
When I try the following query, however, I only get results that have at least two entries with the same TypeID and ContentID:
```
select MIN(ID)
from table
group by TypeID, ContentID
```
results in
> **ID**
> 1
> 4
If I change `MIN(ID)` to `MAX(ID)` I get the correct amount of results, yet I get the record with the last folder they were added to and not the first folder:
> **ID**
> 2
> 3
> 5
> 6
> 7
Am I using `GROUP BY` or the `MIN` wrong? Is there another way that I can accomplish this task of selecting the first record of each TypeID ContentID pair?
|
`MIN()` and `MAX()` should return the same amount of rows. Changing the function should *not* change the number of rows returned in the query.
Is this query part of a larger query? From looking at the sample data provided, I would assume that this code is only a snippet from a larger action you are trying to do. Do you later try to join `TypeID`, `ContentID` or `FolderID` with the tables the IDs are referencing?
If yes, this error is likely being caused by another part of your query and not this select statement. If you are using joins or multi-level select statements, you can get different amount of results if the reference tables do not contain a record for all the foreign IDs.
Another suggestion: check to see if any of the values in your records are `NULL`. Although this should not affect the `GROUP BY`, I have sometimes encountered strange behavior when dealing with `NULL` values.
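As a concrete check of the claim that `MIN` and `MAX` return the same number of rows, here is the asker's exact data run through the exact `GROUP BY` query in SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ID INT, TypeID INT, ContentID INT, FolderID INT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 101, 1001, 1), (2, 101, 1001, 2), (3, 102, 1002, 3),
    (4, 103, 1002, 2), (5, 103, 1002, 1), (6, 104, 1001, 1),
    (7, 105, 1005, 2),
])
# One row per distinct (TypeID, ContentID) pair, lowest ID wins
ids = [r[0] for r in con.execute(
    "SELECT MIN(ID) FROM t GROUP BY TypeID, ContentID ORDER BY MIN(ID)")]
print(ids)  # [1, 3, 4, 6, 7]
```

The query yields all five expected rows, which supports the diagnosis that the missing rows in the asker's results come from somewhere else in the surrounding query, not from `GROUP BY`/`MIN` itself.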
|
Use ROW\_NUMBER
```
WITH CTE AS
(SELECT ID,TypeID,ContentID,FolderID,
ROW_NUMBER() OVER (PARTITION BY TypeID,ContentID ORDER BY ID) as rn FROM t
)
SELECT ID FROM CTE WHERE rn=1
```
|
Using GROUP BY, select ID of record in each group that has lowest ID
|
[
"",
"sql",
"sql-server",
"select",
"group-by",
"min",
""
] |