| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a query like so -
```
select CAST(jobrun_proddt as Date) as 'Date', COUNT(*) as 'Error Occurred Jobs' from jobrun
where jobrun_orgstatus = 66 and jobmst_type <> 1
group by jobrun_proddt
order by jobrun_proddt desc
```
Not every date will have a count. What I want is for dates with no matching rows to show a count of 0, so the chart would look like this -
```
2014-11-18 1
2014-11-17 0
2014-11-16 0
2014-11-15 0
2014-11-14 0
2014-11-13 1
2014-11-12 0
2014-11-11 1
```
Currently it's not returning the lines where there's no count.
```
2014-11-18 1
2014-11-13 1
2014-11-11 1
```
Edit to add: the jobrun table DOES have all the dates; some dates just don't have the value I'm searching for.
|
Try this. Use a recursive CTE to generate the dates.
```
WITH cte
AS (SELECT CONVERT(DATE, '2014-11-18') AS dates --Max date
UNION ALL
SELECT Dateadd(dd, -1, dates)
FROM cte
WHERE dates > '2014-11-11') -- Min date
SELECT a.dates,
Isnull([Error_Occurred_Jobs], 0)
FROM cte a
LEFT JOIN (SELECT Cast(jobrun_proddt AS DATE) AS Dates,
Count(*) AS [Error_Occurred_Jobs]
FROM jobrun
WHERE jobrun_orgstatus = 66
AND jobmst_type <> 1
GROUP BY jobrun_proddt) B
ON a.dates = b.dates
Order by a.dates desc
```
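For anyone who wants to try the pattern locally, here is a self-contained sketch using Python's stdlib `sqlite3` (SQLite also supports recursive CTEs); the table and sample rows are invented to mirror the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobrun (jobrun_proddt TEXT, jobrun_orgstatus INT, jobmst_type INT);
INSERT INTO jobrun VALUES
  ('2014-11-18', 66, 2),
  ('2014-11-13', 66, 2),
  ('2014-11-11', 66, 2);
""")

rows = conn.execute("""
WITH RECURSIVE cte(dates) AS (
    SELECT '2014-11-18'                    -- max date
    UNION ALL
    SELECT date(dates, '-1 day') FROM cte
    WHERE dates > '2014-11-11'             -- min date
)
SELECT a.dates, IFNULL(b.errors, 0)
FROM cte a
LEFT JOIN (SELECT jobrun_proddt AS dates, COUNT(*) AS errors
           FROM jobrun
           WHERE jobrun_orgstatus = 66 AND jobmst_type <> 1
           GROUP BY jobrun_proddt) b
  ON a.dates = b.dates
ORDER BY a.dates DESC
""").fetchall()

print(rows)  # 8 rows, missing dates filled with 0
```

Note that SQL Server's default recursion limit is 100 levels, so for date ranges longer than 100 days you would append `OPTION (MAXRECURSION 0)` to the answer's query.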
|
If you have data for all dates, but the other dates are being filtered by the `where` clause, then you can use conditional aggregation:
```
select CAST(jobrun_proddt as Date) as [Date],
SUM(CASE WHEN jobrun_orgstatus = 66 and jobmst_type <> 1 THEN 1 ELSE 0
END) as [Error Occurred Jobs]
from jobrun
group by jobrun_proddt
order by jobrun_proddt desc
```
|
how to include dates where no value
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to copy ALL data (columns, too) from `Table1` into `Table2`, but ONLY where `Column1` is XYZ.
I have something like:
```
INSERT INTO Table2
SELECT *
FROM Table1
WHERE
Column1='XYZ'
```
This errors telling me that `Column1` is an unknown field name.
|
Your column names must match, so alias them as they come out of `Table1` so they match `Table2`
```
INSERT INTO Table2
SELECT table1_column1 as table2_column1
FROM Table1
WHERE
table1_column1='XYZ'
```
|
```
INSERT INTO table2 (column_name(s))
SELECT column_name(s)
FROM table1;
```
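Both answers boil down to making the column lists on each side line up. A quick sketch with Python's stdlib `sqlite3` (table and column names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Column1 TEXT, Column2 INT);
CREATE TABLE Table2 (Column1 TEXT, Column2 INT);
INSERT INTO Table1 VALUES ('XYZ', 1), ('ABC', 2), ('XYZ', 3);
""")

# Explicit column lists on both sides avoid "unknown field name" surprises
conn.execute("""
INSERT INTO Table2 (Column1, Column2)
SELECT Column1, Column2 FROM Table1 WHERE Column1 = 'XYZ'
""")

rows = conn.execute("SELECT * FROM Table2 ORDER BY Column2").fetchall()
print(rows)  # [('XYZ', 1), ('XYZ', 3)]
```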
|
Insert Into WHERE Data in a column is specific
|
[
"",
"mysql",
"sql",
"ms-access",
""
] |
I have a table called `profile` which has columns `region_code` and `country_code`, and another table called `regions` which has columns `country_code`, `region_name` and `region_code`. I want to select the `region_name` from table `regions` based on the `region_code` and `country_code` from table `profiles`. I thought the code below would work, but it always returns an empty set and I can't figure out what I'm doing wrong. Any help much appreciated. Thanks.
```
SELECT region_name
FROM regions
WHERE country_code IN(
SELECT country_code
FROM profiles
WHERE account_name = 'me'
)
AND region_code IN(
SELECT region_code
FROM profiles
WHERE account_name = 'me'
)
```
|
I would simplify your query, and use something like this one:
```
SELECT region_name
FROM
regions
WHERE
(country_code, region_code) IN (SELECT country_code, region_code
FROM profiles
WHERE account_name = 'me')
```
but if you still get empty result, I would execute just the subquery and see what rows it returns:
```
SELECT country_code, region_code
FROM profiles
WHERE account_name = 'me'
```
I suspect that it doesn't return any rows, or that the values returned are not present in the `regions` table.
|
It sounds like you want a join?
```
SELECT r.region_name
FROM regions r
JOIN profiles p ON r.country_code = p.country_code
AND r.region_code = p.region_code
WHERE p.account_name = 'me'
```
This would list the region name for the region that maps to the specific users region code and country code.
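The join version is easy to test in miniature with Python's stdlib `sqlite3`; the codes and names below are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE regions  (country_code TEXT, region_code TEXT, region_name TEXT);
CREATE TABLE profiles (account_name TEXT, country_code TEXT, region_code TEXT);
INSERT INTO regions  VALUES ('US', 'CA', 'California'), ('US', 'NY', 'New York');
INSERT INTO profiles VALUES ('me', 'US', 'CA'), ('other', 'US', 'NY');
""")

# Join on BOTH key columns so mismatched pairs can't leak through
rows = conn.execute("""
SELECT r.region_name
FROM regions r
JOIN profiles p ON r.country_code = p.country_code
               AND r.region_code  = p.region_code
WHERE p.account_name = 'me'
""").fetchall()
print(rows)  # [('California',)]
```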
|
selecting region names based on country and region codes
|
[
"",
"mysql",
"sql",
""
] |
Can anyone help me in solving this? In a table, I have data like this right now (image omitted).
How do I split the column `Nodes`, which has the delimited value TTBFA-TTBFB-TTBFC-TTBFD, into 4 rows with the other columns staying the same?
california region GAXAEB 102,520,000 18.71 4 8/30/2014
california region TTBFA 92,160,000 23.33 3 9/13/2014
california region TTBFB 92,160,000 23.33 3 9/13/2014
california region TTBFC 92,160,000 23.33 3 9/13/2014
california region TTBFD 92,160,000 23.33 3 9/13/2014
The value in the column NODES is not always 5 characters; it may vary (second image omitted).
Thanks in advance
|
You could use as many `UNION ALL` branches as your maximum number of nodes, with `SUBSTRING` and INSTR for the possible locations of a node.
Try something like:
```
SELECT region_name, nodes AS node,
sgspeed, sgutil, portCount, WeekendingDate
FROM t
WHERE instr(nodes,'-') = 0
UNION ALL
SELECT region_name, SUBSTRING(nodes FROM instr(nodes,'-',1,1) +1 FOR instr(nodes,'-',1,2)-1) AS node,
sgspeed, sgutil, portCount, WeekendingDate
FROM t
WHERE instr(nodes,'-') > 0
UNION ALL
SELECT region_name, SUBSTRING(nodes FROM instr(nodes,'-',1,2) +1 FOR instr(nodes,'-',1,3)-1) AS node,
sgspeed, sgutil, portCount, WeekendingDate
FROM t
WHERE instr(nodes,'-',1,2) > 0
UNION ALL
SELECT region_name, SUBSTRING(nodes FROM instr(nodes,'-',1,3) +1 FOR instr(nodes,'-',1,4)-1) AS node,
sgspeed, sgutil, portCount, WeekendingDate
FROM t
WHERE instr(nodes,'-',1,3) > 0
...
```
|
The function INSTR usually implies you're running TD14+.
There's also the STRTOK function; better to use it instead of SUBSTRING(INSTR).
And instead of up to 15 UNION ALLs you can also cross join to a table with numbers:
```
SELECT region_name, STRTOK(nodes, '-', i) AS x
FROM table
CROSS JOIN
( -- better don't use sys_calendar.CALENDAR as there are no statistics on day_of_calendar
SELECT day_of_calendar AS i
FROM sys_calendar.CALENDAR
WHERE i <= 15
) AS dt
WHERE x IS NOT NULL
```
And you can utilize STRTOK_SPLIT_TO_TABLE in TD14, too:
```
SELECT *
FROM table AS t1
JOIN
(
SELECT *
FROM TABLE (STRTOK_SPLIT_TO_TABLE(table.division, table.nodes, '-')
RETURNS (division VARCHAR(30) CHARACTER SET UNICODE
,tokennum INTEGER
,token VARCHAR(30) CHARACTER SET UNICODE)
) AS dt
) AS t2
ON t1.division = t2.division
```
Hopefully this is for data cleansing and not for daily use...
|
Split The Column which is delimited into separate Rows in Teradata 14
|
[
"",
"sql",
"teradata",
""
] |
There is a problem when I try to compute values using SQL queries:
`(SELECT quantity FROM db WHERE no='998')` - this is fine
but `(('500') - (SELECT quantity FROM db WHERE no='998'))` gives // incorrect syntax near -
But I need to use the constant 500. Where is the problem?
|
How about this?
```
SELECT 500 - quantity
FROM db
WHERE no = 998;
```
A `select` statement needs to start with a select. In addition, numeric constants should not use single quotes (although that has no effect on whether the query parses or runs).
|
`SELECT 500-quantity FROM db WHERE no='998'`
|
My sql query not work with math operators
|
[
"",
"mysql",
"sql",
""
] |
I am trying to sort the numbers,
```
MH/122020/101
MH/122020/2
MH/122020/145
MH/122020/12
```
How can I sort these in an Access query?
I tried `format(mid(first(P.PFAccNo),11),"0")` but it didn't work.
|
You need to use expressions in your ORDER BY clause. For test data
```
ID PFAccNo
-- -------------
1 MH/122020/101
2 MH/122020/2
3 MH/122020/145
4 MH/122020/12
5 MH/122021/1
```
the query
```
SELECT PFAccNo, ID
FROM P
ORDER BY
Left(PFAccNo,9),
Val(Mid(PFAccNo,11))
```
returns
```
PFAccNo ID
------------- --
MH/122020/2 2
MH/122020/12 4
MH/122020/101 1
MH/122020/145 3
MH/122021/1 5
```
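The same two-part sort key (text prefix, numeric suffix) can be tried out with Python's stdlib `sqlite3`, using `substr` and `CAST` in place of Access's `Left`/`Mid`/`Val`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE P (PFAccNo TEXT)")
conn.executemany("INSERT INTO P VALUES (?)",
                 [('MH/122020/101',), ('MH/122020/2',),
                  ('MH/122020/145',), ('MH/122020/12',)])

# Sort by the fixed-width prefix, then by the suffix converted to a number
rows = [r[0] for r in conn.execute("""
SELECT PFAccNo FROM P
ORDER BY substr(PFAccNo, 1, 9),
         CAST(substr(PFAccNo, 11) AS INTEGER)
""")]
print(rows)  # numeric order: 2, 12, 101, 145
```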
|
You have to convert the substring beginning at position 11 to a number, and then the numbers can be sorted.
|
How to sort the string 'MH/122020/[xx]x' in an Access query?
|
[
"",
"sql",
"sorting",
"ms-access",
""
] |
I am trying (and failing) to craft a simple SQL query (for SQL Server 2012) that counts the number of occurrences of a value for a given date range.
This is a collection of results from a survey.
So the end result would show there are only 3 values matching '2' and 6 values matching '1'.
Even better if the final result could return 3 values:
```
MatchZero = 62
MatchOne = 6
MatchTwo = 3
```
Something Like (I know this is horribly out):
```
SELECT
COUNT(0) AS MatchZero,
COUNT(1) AS MatchOne,
COUNT(2) As MatchTwo
WHERE dated BETWEEN '2014-01-01' AND '2014-02-01'
```
I don't need it grouped by date or anything, simply a total value for each.
Any insights would be greatly received.
```
+------------+----------+--------------+-------------+------+-----------+------------+
| QuestionId | friendly | professional | comfortable | rate | recommend | dated |
+------------+----------+--------------+-------------+------+-----------+------------+
| 3 | 0 | 0 | 0 | 0 | 0 | 2014-02-12 |
| 9 | 0 | 0 | 0 | 0 | 0 | 2014-02-12 |
| 14 | 0 | 0 | 0 | 2 | 0 | 2014-02-13 |
| 15 | 0 | 0 | 0 | 0 | 0 | 2014-01-06 |
| 19 | 0 | 1 | 2 | 0 | 0 | 2014-01-01 |
| 20 | 0 | 0 | 0 | 0 | 0 | 2013-12-01 |
| 21 | 0 | 1 | 0 | 0 | 0 | 2014-01-01 |
| 22 | 0 | 1 | 0 | 0 | 0 | 2014-01-01 |
| 23 | 0 | 0 | 0 | 0 | 0 | 2014-01-24 |
| 27 | 0 | 0 | 0 | 0 | 0 | 2014-01-31 |
| 30 | 0 | 1 | 2 | 0 | 0 | 2014-01-27 |
| 31 | 0 | 0 | 0 | 0 | 0 | 2014-01-11 |
| 36 | 0 | 0 | 0 | 1 | 1 | 2014-01-22 |
+------------+----------+--------------+-------------+------+-----------+------------+
```
|
You can use conditional aggregation:
```
SELECT SUM((CASE WHEN friendly = 0 THEN 1 ELSE 0 END) +
           (CASE WHEN professional = 0 THEN 1 ELSE 0 END) +
           (CASE WHEN comfortable = 0 THEN 1 ELSE 0 END) +
           (CASE WHEN rate = 0 THEN 1 ELSE 0 END) +
           (CASE WHEN recommend = 0 THEN 1 ELSE 0 END)
          ) AS MatchZero,
       SUM((CASE WHEN friendly = 1 THEN 1 ELSE 0 END) +
           (CASE WHEN professional = 1 THEN 1 ELSE 0 END) +
           (CASE WHEN comfortable = 1 THEN 1 ELSE 0 END) +
           (CASE WHEN rate = 1 THEN 1 ELSE 0 END) +
           (CASE WHEN recommend = 1 THEN 1 ELSE 0 END)
          ) AS MatchOne,
       SUM((CASE WHEN friendly = 2 THEN 1 ELSE 0 END) +
           (CASE WHEN professional = 2 THEN 1 ELSE 0 END) +
           (CASE WHEN comfortable = 2 THEN 1 ELSE 0 END) +
           (CASE WHEN rate = 2 THEN 1 ELSE 0 END) +
           (CASE WHEN recommend = 2 THEN 1 ELSE 0 END)
          ) AS MatchTwo
FROM . . .
WHERE dated BETWEEN '2014-01-01' AND '2014-02-01';
```
|
If I understand you correctly, you want to count the zeros, ones and twos for a particular (or each) column in your table. If this is correct, then you could do something like this:
```
select sum(case when your_column = 0 then 1 else 0 end) as zeros
, sum(case when your_column = 1 then 1 else 0 end) as ones
--- and so on
from your_table
-- where conditions go here
```
If you want to count the total for more than one column, enclose the needed `case...end`s in the `sum()`:
```
sum(
(case when column1 = 0 then 1 else 0 end) +
(case when column2 = 0 then 1 else 0 end)
-- and so on
) as zeros
```
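A compact way to sanity-check the conditional-aggregation idea with Python's stdlib `sqlite3`. Three invented columns stand in for the five in the question; SQLite lets a comparison like `friendly = 0` evaluate to the same 1/0 value that the `CASE` expressions produce in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (friendly INT, professional INT, rate INT)")
conn.executemany("INSERT INTO survey VALUES (?,?,?)",
                 [(0, 1, 2), (0, 1, 0), (0, 0, 1)])

# Each comparison is 1 or 0, so summing them counts matches per value
row = conn.execute("""
SELECT SUM((friendly = 0) + (professional = 0) + (rate = 0)) AS zeros,
       SUM((friendly = 1) + (professional = 1) + (rate = 1)) AS ones,
       SUM((friendly = 2) + (professional = 2) + (rate = 2)) AS twos
FROM survey
""").fetchone()
print(row)  # (5, 3, 1)
```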
|
SQL Query to COUNT fields that match a certain value across all rows in a table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I wrote a CREATE FUNCTION script and created a function, but I can't find it in the LONG list of functions under the Programmability folder in SSMS. Where did SQL Server put my function? It's not in any of the folders, or if it is, I missed it. And I looked twice.
I did specify at the beginning of my script:
```
USE myDatabase
GO
```
before starting the
```
CREATE FUNCTION...
```
... in the script, so it shouldn't have gotten lost in the master database.
|
Look under \Programmability\Functions. To use those objects refer to this post [SQLServer cannot find my user defined function function in stored procedure](https://stackoverflow.com/questions/4697241/sqlserver-cannot-find-my-user-defined-function-function-in-stored-procedure)
|
I apologise. The questioner is an idiot.
All that I needed to do was right click on the Function folder in SSMS and choose "Refresh".
Sorry for wasting your time.
|
I created a SQL Server function. Now I can't find it in SSMS
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
I have a table which has 2 columns of dates, and I want to look for the closest date (SELECT MIN()?) searching in both columns. Is that possible?
For example, if I have `column1=23/11/2014` and `column2=22/11/2014` in the same row, I want to get `22/11/2014`
I hope it is clear enough; ask me if it isn't.
Greetings.
|
In a single table use [CASE](https://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm)
```
SELECT CASE WHEN column1 < column2
            THEN column1
            ELSE column2
       END mindate
FROM yourtable
```
If you have the date column in multiple tables, just replace `yourtable` with your tables `JOIN`ed together
|
This is what the `least()` function is for:
```
select least(column_1, column_2)
from your_table;
```
`min()` is an aggregate function that operates on a single column but multiple rows.
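For what it's worth, SQLite spells `LEAST` as the two-argument scalar form of `min()`, which makes the row-wise minimum easy to demo from Python's stdlib `sqlite3` (ISO-formatted dates compare correctly as strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column1 TEXT, column2 TEXT)")
conn.execute("INSERT INTO t VALUES ('2014-11-23', '2014-11-22')")

# Scalar min() across columns, like Oracle's least()
row = conn.execute("SELECT min(column1, column2) FROM t").fetchone()
print(row)  # ('2014-11-22',)
```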
|
It is possible to do a SELECT MIN(DATE) searching in 2 or more columns?
|
[
"",
"sql",
"oracle",
"function",
""
] |
I have a student table with columns `name`, `attendance`, and `date`, in which data is entered every day for the students who attend the class. For example, if a student is absent on a day, no entry is made for that student for that day.
Finally, I need to find the names of students whose `attendance` is less than 50.
|
You have to use the [GROUP BY](http://www.w3schools.com/sql/sql_groupby.asp) clause to aggregate the rows for each student, and [HAVING](http://www.w3schools.com/sql/sql_having.asp) to check your condition and get the desired output.
```
SELECT name, count(name)
FROM student
GROUP BY name
HAVING count(name) < 50;
```
I hope this will help solve your problem.
|
You can use `GROUP BY` and `HAVING` statements for this.
```
SELECT name FROM student GROUP BY name HAVING COUNT(*) < 50;
```
Please note that above query is not tested.
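A tiny runnable version of the `GROUP BY`/`HAVING` approach, using Python's stdlib `sqlite3` with invented sample data (a threshold of 2 stands in for 50):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (name TEXT, date TEXT)")
# 'alice' attended 2 days, 'bob' only 1
conn.executemany("INSERT INTO student VALUES (?,?)",
                 [('alice', 'd1'), ('alice', 'd2'), ('bob', 'd1')])

# One row per attendance day, so COUNT(*) per name is the attendance count
rows = conn.execute(
    "SELECT name FROM student GROUP BY name HAVING COUNT(*) < 2"
).fetchall()
print(rows)  # [('bob',)]
```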
|
find student name less than 50 attendance
|
[
"",
"sql",
""
] |
There are two tables, a package table and a product table. In my case, a package contains multiple products. Given a set of products, we need to determine whether they exactly match a package already in the package records. Some scripts are below.
```
DECLARE @tblPackage TABLE(
PackageID int,
ProductID int
)
INSERT INTO @tblPackage VALUES(436, 4313)
INSERT INTO @tblPackage VALUES(436, 4305)
INSERT INTO @tblPackage VALUES(436, 4986)
INSERT INTO @tblPackage VALUES(437, 4313)
INSERT INTO @tblPackage VALUES(437, 4305)
INSERT INTO @tblPackage VALUES(442, 4313)
INSERT INTO @tblPackage VALUES(442, 4335)
INSERT INTO @tblPackage VALUES(445, 4305)
INSERT INTO @tblPackage VALUES(445, 4335)
```

```
DECLARE @tblProduct TABLE(
ProductID int
)
INSERT INTO @tblProduct VALUES(4305)
INSERT INTO @tblProduct VALUES(4313)
```

We have two products, 4305 and 4313, so I need to retrieve the matched package record 437. Only an exact match should be returned, so package 436 is not the right one. It's not easy to write this as a multi-row query. Does anyone have any suggestions? Thanks.
|
Try this. [***SQLFIDDLE DEMO***](http://sqlfiddle.com/#!3/fb8e1/16)
```
Declare @cnt Int
Select @cnt = count(distinct ProductID) from tblProduct
SELECT B.packageid
FROM (SELECT packageid
FROM tblpackage
GROUP BY packageid
HAVING Count(productid) = @cnt) A
JOIN tblpackage B
ON a.packageid = b.packageid
WHERE EXISTS (SELECT 1 FROM tblproduct c WHERE c.productid = b.productid)
GROUP BY B.packageid
HAVING Count(DISTINCT B.productid) = @cnt
```
|
This is a "set-within-sets" query. I would approach it using aggregation and `having`:
```
select p.PackageID
from @tblPackage p left join
@tblProduct pr
on p.ProductId = pr.ProductId
group by p.PackageId
having count(*) = count(pr.ProductId) and
count(*) = (select count(*) from @tblProduct);
```
The `left join` keeps all products for each package. The first condition in the `having` clause says that all these products match what is in the product table. The second says that all the products are actually there.
Note that if you have duplicates in either table, then you'll need to use `count(distinct)` in one or more places. Your sample data suggests that this is not an issue.
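The `LEFT JOIN` plus double `HAVING` trick can be verified against the question's sample data with Python's stdlib `sqlite3` (plain tables are used here instead of table variables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblPackage (PackageID INT, ProductID INT);
CREATE TABLE tblProduct (ProductID INT);
INSERT INTO tblPackage VALUES (436,4313),(436,4305),(436,4986),
                              (437,4313),(437,4305),
                              (442,4313),(442,4335);
INSERT INTO tblProduct VALUES (4305),(4313);
""")

# First condition: every product in the package matched; second: counts equal
rows = conn.execute("""
SELECT p.PackageID
FROM tblPackage p
LEFT JOIN tblProduct pr ON p.ProductID = pr.ProductID
GROUP BY p.PackageID
HAVING COUNT(*) = COUNT(pr.ProductID)
   AND COUNT(*) = (SELECT COUNT(*) FROM tblProduct)
""").fetchall()
print(rows)  # only the exact match, package 437
```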
|
T-SQL How to Match Multiple Rows
|
[
"",
"sql",
"sql-server",
""
] |
For example, I have a table like the one given below. I want to have separate columns on the basis of even/odd IDs.
```
-----------------------------------------------------
| ID | Names
-----------------------------------------------------
| 1 | Name1
-----------------------------------------------------
| 2 | Name2
-----------------------------------------------------
| 3 | Name3
-----------------------------------------------------
| 4 | Name4
-----------------------------------------------------
```
I want to design a query that could give me
```
-------------------
| Even | Odd |
-------------------
| Name2 | Name1 |
-------------------
| Name4 | Name3 |
-------------------
```
|
```
select
max(case id%2 when 1 then name end) as odd,
max(case id%2 when 0 then name end) as even
from your_table
group by floor((id+1)/2)
```
`SQL Fiddle Demo`
|
If your Id column contains sequential numbers without any gaps between them, then:
```
SELECT t2.name as EVEN,
       t1.name as ODD
FROM YourTable t1
left outer JOIN YourTable t2
  ON t1.Id + 1 = t2.Id
where t1.Id % 2 = 1
```
Note: if there are gaps between the numbers, some of the `EVEN` values will be shown as `NULL`, or a `name` may be skipped entirely if the gap spans it.
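For comparison, the grouping approach from the accepted answer runs as-is against the sample data with Python's stdlib `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?)",
                 [(1, 'Name1'), (2, 'Name2'), (3, 'Name3'), (4, 'Name4')])

# (id + 1) / 2 (integer division) pairs ids 1&2, 3&4, ... into one group each
rows = conn.execute("""
SELECT MAX(CASE id % 2 WHEN 0 THEN name END) AS even,
       MAX(CASE id % 2 WHEN 1 THEN name END) AS odd
FROM t
GROUP BY (id + 1) / 2
ORDER BY (id + 1) / 2
""").fetchall()
print(rows)  # [('Name2', 'Name1'), ('Name4', 'Name3')]
```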
|
how to get even odd ids in different columns on single query?
|
[
"",
"mysql",
"sql",
""
] |
I have a table in mysql database. Like
```
+-----+------+------+--------+
| id | p_id | lang | title |
+-----+------+------+--------+
| 1 | 1 | en | eng1 |
| 2 | 1 | fr | fr1 |
| 3 | 2 | en | eng2 |
+-----+------+------+--------+
```
I want to select records by `p_id` where `lang` = `fr`. If a p\_id does not have a record in `fr`, then select its `en` record instead.
Like
```
+-----+------+------+--------+
| id | p_id | lang | title |
+-----+------+------+--------+
| 2 | 1 | fr | fr1 |
| 3 | 2 | en | eng2 |
+-----+------+------+--------+
```
Is it possible?
I am using PHP, MySQL and Joomla.
**Edit :**
1 : There can be more than 2 rows for each p\_id, with different lang values; see p\_id = 1
```
+-----+------+------+--------+
| id | p_id | lang | title |
+-----+------+------+--------+
| 1 | 1 | en | eng1 |
| 2 | 1 | fr | fr1 |
| 3 | 2 | en | eng2 |
| 4 | 1 | de | de1 |
+-----+------+------+--------+
```
2 : It is guaranteed that each p\_id has a row with lang = en-GB
|
Mysql Query (untested)
```
SELECT DISTINCT a.p_id
FROM myTable a
WHERE a.lang = 'fr'
UNION
SELECT DISTINCT b.p_id
FROM myTable b
WHERE b.lang = 'en'
AND NOT EXISTS
(
SELECT 1
FROM myTable c
WHERE c.lang = 'fr'
AND c.p_id = b.p_id
)
```
I take all the fr rows, then I add all the en rows for which no fr row exists.
|
```
SELECT
id, p_id, lang, title
FROM
your_table
WHERE lang IN ('fr', 'en')
ORDER BY lang = 'fr' DESC
/*optionally*/
LIMIT 1;
```
The `lang = 'fr'` in the `order by` returns 1 or 0, true or false. This makes sure that the `fr` record comes first.
Or like this:
```
SELECT SQL_CALC_FOUND_ROWS
id, p_id, lang, title
FROM
your_table
WHERE lang ='fr'
UNION ALL
SELECT
id, p_id, lang, title
FROM
your_table
WHERE lang ='en'
AND FOUND_ROWS() = 0;
```
This executes the second query in the union only if the first one found no rows.
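The `NOT EXISTS` fallback from the first answer is easy to verify with Python's stdlib `sqlite3` (here the two branches are combined with `OR` instead of `UNION`, which is equivalent for this data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INT, p_id INT, lang TEXT, title TEXT)")
conn.executemany("INSERT INTO myTable VALUES (?,?,?,?)",
                 [(1, 1, 'en', 'eng1'), (2, 1, 'fr', 'fr1'),
                  (3, 2, 'en', 'eng2')])

# Take fr rows; fall back to en only when that p_id has no fr row
rows = conn.execute("""
SELECT id, p_id, lang, title FROM myTable b
WHERE b.lang = 'fr'
   OR (b.lang = 'en' AND NOT EXISTS
        (SELECT 1 FROM myTable c
         WHERE c.lang = 'fr' AND c.p_id = b.p_id))
ORDER BY p_id
""").fetchall()
print(rows)  # fr row for p_id 1, en fallback for p_id 2
```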
|
How to get records if a column has value A, if not then B?
|
[
"",
"mysql",
"sql",
""
] |
I have about 10000 rows in a SQL table, and I need to add a prefix to each value of the user column.
Example: I have values like John, Smith, and so on, and I need to set them to QA-John, QA-Smith, and so on.
Is there an SQL function that can do this automatically, or can it only be done one by one?
|
I got it.
```
UPDATE my_table SET column_name = CONCAT('prefix', column_name)
```
|
If I haven't misunderstood, this is what you're asking:
```
Update Table set name="QA-"+name
```
|
Adding prefix to value in SQL row values
|
[
"",
"sql",
""
] |
I have a table like this
```
Rules (RuleValue varchar (50))
```
It has values like
```
A1B1C1
A1B1C0
A1B0C0
```
There is another table
```
Input (RulePart varchar (2))
```
It can have values like:
```
A1
B1
C1
```
I want to get all RuleValues where all the RulePart matches anywhere in the RuleValue
Following is an example with hardcoded RuleParts:
```
Select RuleValue from Rules where Rules.RuleValue like '%A1%' and Rules.RuleValue like '%B1%' and Rules.RuleValue like '%C1%'
```
With the above examples my expected result is A1B1C1 or B1A1C1 or C1A1B1 etc.
I tried to use an inner join, but it does not match all the RuleParts in every row.
I can achieve this by building the query dynamically, but I don't want to go that route unless this approach affects query performance.
|
One approach is:
```
Select r.RuleValue
from Rules r
join Input i
on r.RuleValue like '%' + i.RulePart + '%'
group by r.RuleValue
having count(distinct i.RulePart) = 3 -- or (select count(*) from Input )
```
**UPDATE**
More elegant way is using `NOT EXISTS` to represent for `ALL`
```
select *
from Rules r
where not exists
(
select *
from Input i
where r.RuleValue not like '%'+i.RulePart+'%'
)
```
`SQL Fiddle Demo`
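The `NOT EXISTS` form of relational division runs unchanged on SQLite apart from string concatenation (`||` instead of `+`), so it can be checked with Python's stdlib `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Rules (RuleValue TEXT);
CREATE TABLE Input (RulePart TEXT);
INSERT INTO Rules VALUES ('A1B1C1'), ('A1B1C0'), ('A1B0C0');
INSERT INTO Input VALUES ('A1'), ('B1'), ('C1');
""")

# Keep a RuleValue only if no RulePart fails to appear in it
rows = conn.execute("""
SELECT RuleValue FROM Rules r
WHERE NOT EXISTS
  (SELECT 1 FROM Input i
   WHERE r.RuleValue NOT LIKE '%' || i.RulePart || '%')
""").fetchall()
print(rows)  # only 'A1B1C1' contains every RulePart
```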
|
```
SELECT *
FROM Rules a
WHERE RuleValue LIKE '%'+(SELECT stuff((select '%' + cast(c.RulePart as varchar(512)) from Input c for xml path('')),1,2,''))+'%'
```
|
SQL query to select records that has matching substring in a field from another table
|
[
"",
"sql",
"sql-server-2012",
""
] |
My apologies for a non-intuitive thread title.
I have a table, `Jobs`, where each row represents a maintenance task performed by a computer program. It has this design:
```
CREATE TABLE Jobs (
JobId bigint PRIMARY KEY,
...
Status int NOT NULL,
OriginalJobId bigint NULL
)
```
When a Job is created/started, its row is added to the table and its status is `0`. When a job is completed its status is updated to `1` and when a job fails its status is updated to `2`. When a job fails, the job-manager will retry the job by inserting a new row into the Jobs table by duplicating the details of the failed job and reset the `Status` to `0` and use the original (failed) JobId in `OriginalJobId` for tracking purposes. If this re-attempt fails then it should be tried again up to 3 times, each subsequent retry will maintain the original `JobId` in the `OriginalJobId` column.
My problem is trying to formulate a query to get the current set of Jobs that have failed and get their retry count.
Here's a sample data in the table:
```
JobId | Status | OriginalJobId
1, 1, NULL -- Successful initial job
2, 0, NULL -- Pending initial job
3, 2, NULL -- Failed initial job
4, 1, 3 -- Successful retry of Job 3
5, 2, NULL -- Failed initial job
6, 2, 5 -- Failed retry 1 of Job 5
7, 2, 5 -- Failed retry 2 of Job 5 -- should be tried again for 1 more time
8, 2, NULL -- Failed initial job
9, 2, 8 -- Failed retry 1 of Job 8
10, 2, 8 -- Failed retry 2 of Job 8
11, 2, 8 -- Failed retry 3 of Job 8 -- don't try again
12, 2, NULL -- Failed initial job
```
My query needs to return this:
```
JobId | RetryCount
5, 2
12, 0
```
Notice how Job `3` isn't included because its last retry succeeded (status `1`). Similarly Job `8` is excluded because the number of retries exceeds the limit of 3. Job `5` is included because it still failed and has had only 2 retries, and Job `12` is included and hasn't yet had any retries.
I'm thinking the solution would be something like this:
```
SELECT
J1.JobId
FROM
Jobs AS J1
LEFT OUTER JOIN Jobs AS J2 ON J1.JobId = J2.OriginalJobId
WHERE
J1.Status = 2
```
...but I can't think of how to get the RetryCount data.
Here is a SQLFiddle I created for this problem, with one of the solutions below:
<http://sqlfiddle.com/#!6/8765f>
## Update
Here is an updated SQLFiddle that compares the 5 solutions provided so far (I added an extra `HAVING` clause to remove jobs that had more than 3 retries)
<http://sqlfiddle.com/#!6/8765f/23>
Performance-wise, I think GarethD's answer is the best as it has the simplest execution plan and tends to finish with the fastest time in SqlFiddle.
My production table has about 14,000,000 rows so obviously the results will be different there. I'll try each out in production and see which is the fastest then and choose an answer accordingly.
Thank you all for the help!
|
The following returns the result required:
```
SELECT J1.JobId,
Retries = COUNT(J2.JobId)
FROM Jobs AS J1
INNER JOIN Jobs AS J2
ON J1.JobId = J2.OriginalJobId
WHERE J1.Status = 2
GROUP BY J1.JobId
HAVING COUNT(CASE WHEN J2.Status = 1 THEN 1 END) = 0;
```
I have changed it to an `INNER` join so that only jobs that have been retried are included, although this could feasibly be changed back to a `LEFT` join to include failed jobs that have not been retried yet. I also added a `HAVING` clause to exclude any jobs that have not failed when they have been retried.
---
**EDIT**
As mentioned above, using `INNER JOIN` will mean that you only return jobs that have been retried, to get all failed jobs you need to use a `LEFT JOIN`, this will mean that retries are returned as failed jobs, so I have added an additional predicate `J1.OriginalJobId IS NULL` to ensure only the original jobs are returned:
```
SELECT J1.JobId,
Retries = COUNT(J2.JobId)
FROM Jobs AS J1
LEFT JOIN Jobs AS J2
ON J1.JobId = J2.OriginalJobId
WHERE J1.Status = 2
AND J1.OriginalJobId IS NULL
GROUP BY J1.JobId
HAVING COUNT(CASE WHEN J2.Status = 1 THEN 1 END) = 0;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!6/bd98c/1)**
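Running the second query against the question's sample data with Python's stdlib `sqlite3` reproduces the expected output (the extra `COUNT(J2.JobId) < 3` predicate implements the retry limit the asker mentioned adding in the fiddle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Jobs (JobId INT, Status INT, OriginalJobId INT)")
conn.executemany("INSERT INTO Jobs VALUES (?,?,?)",
    [(1, 1, None), (2, 0, None), (3, 2, None), (4, 1, 3),
     (5, 2, None), (6, 2, 5), (7, 2, 5),
     (8, 2, None), (9, 2, 8), (10, 2, 8), (11, 2, 8), (12, 2, None)])

rows = conn.execute("""
SELECT J1.JobId, COUNT(J2.JobId) AS Retries
FROM Jobs J1
LEFT JOIN Jobs J2 ON J1.JobId = J2.OriginalJobId
WHERE J1.Status = 2 AND J1.OriginalJobId IS NULL
GROUP BY J1.JobId
HAVING COUNT(CASE WHEN J2.Status = 1 THEN 1 END) = 0  -- no retry succeeded
   AND COUNT(J2.JobId) < 3                            -- retry limit not hit
ORDER BY J1.JobId
""").fetchall()
print(rows)  # [(5, 2), (12, 0)]
```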
|
This should do the job. It does a COALESCE to combine `JobId` and `OriginalJobId`, gets the retry count by grouping them up then excluding any jobs that have a status of 1.
```
SELECT COALESCE(j.OriginalJobId, j.JobId) JobId,
COUNT(*)-1 RetryCount
FROM Jobs j
WHERE j.[Status] = 2
AND NOT EXISTS (SELECT 1
FROM Jobs
WHERE COALESCE(Jobs.OriginalJobId, Jobs.JobId) = COALESCE(j.OriginalJobId, j.JobId)
AND Jobs.[Status] = 1)
GROUP BY COALESCE(j.OriginalJobId, j.JobId), j.[Status]
```
|
Retrieving failed jobs from a table with retry details (id and retry count)
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables, created with the following SQL queries:
```
CREATE TABLE user
(
user_email varchar(255) not null primary key,
--other unimportant columns
subscription_start date not null,
subscription_end date,
CONSTRAINT chk_end_start CHECK (subscription_start != subscription_end)
)
CREATE TABLE action
(
--unimportant columns
user_email varchar(255) not null,
action_date date not null,
CONSTRAINT FK_user FOREIGN KEY (user_email) REFERENCES user(user_email)
)
```
What I would like to do is make sure with some sort of check constraint that the `action_date` is between the `subscription_start` and `subscription_end`.
|
This is not possible to do using check constraints, since check constraints can only refer to columns inside the same table. Furthermore, foreign key constraints only support equi-joins.
If you must perform this check at the database level instead of at the application level, you could do it using a trigger on INSERT/UPDATE on the action table. Each time a record is inserted or updated, you check whether the `action_date` lies within the corresponding `subscription_start`/`subscription_end` dates in the user table. If that is not the case, you use the RAISERROR function to flag that the row cannot be inserted/updated.
```
CREATE TRIGGER ActionDateTrigger ON tblaction
AFTER INSERT, UPDATE
AS
IF NOT EXISTS (
SELECT * FROM tbluser u JOIN inserted i ON i.user_email = u.user_email
AND i.action_date BETWEEN u.subscription_start AND u.subscription_end
)
BEGIN
RAISERROR ('Action_date outside valid range', 16, 1);
ROLLBACK TRANSACTION;
END
```
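SQLite has no RAISERROR, but the same guard can be sketched as a `BEFORE INSERT` trigger with `RAISE(ABORT, ...)` using Python's stdlib `sqlite3`; the table names follow the trigger above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbluser  (user_email TEXT PRIMARY KEY,
                       subscription_start TEXT, subscription_end TEXT);
CREATE TABLE tblaction (user_email TEXT, action_date TEXT);
CREATE TRIGGER ActionDateTrigger BEFORE INSERT ON tblaction
BEGIN
  SELECT RAISE(ABORT, 'action_date outside valid range')
  WHERE NOT EXISTS (
    SELECT 1 FROM tbluser u
    WHERE u.user_email = NEW.user_email
      AND NEW.action_date BETWEEN u.subscription_start AND u.subscription_end);
END;
INSERT INTO tbluser VALUES ('a@b.com', '2014-01-01', '2015-01-01');
""")

conn.execute("INSERT INTO tblaction VALUES ('a@b.com', '2014-06-01')")  # fine
try:
    conn.execute("INSERT INTO tblaction VALUES ('a@b.com', '2012-06-01')")
    ok = False
except sqlite3.DatabaseError:  # out-of-range date is rejected by the trigger
    ok = True
print(ok)
```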
|
I always try to avoid triggers where possible, so thought I would throw an alternative into the mix. You can use an indexed view to validate data here. First you will need to create a new table, that simply contains two rows:
```
CREATE TABLE dbo.Two (Number INT NOT NULL);
INSERT dbo.Two VALUES (1), (2);
```
Now you can create your indexed view, I have used `ActionID` as the implied primary key of your `Action` table, but you may need to change this:
```
CREATE VIEW dbo.ActionCheck
WITH SCHEMABINDING
AS
SELECT a.ActionID
FROM dbo.[User] AS u
INNER JOIN dbo.[Action] AS a
ON a.user_email = u.user_email
CROSS JOIN dbo.Two AS t
WHERE a.Action_date < u.subscription_start
OR a.Action_date > u.subscription_end
OR t.Number = 1;
GO
CREATE UNIQUE CLUSTERED INDEX UQ_ActionCheck_ActionID ON dbo.ActionCheck (ActionID);
```
So, your view will always return one row per action (`t.Number = 1` clause), however, the row in `dbo.Two` where number = 2 will be returned if the action date falls outside of the subscription dates, this will cause duplication of `ActionID` which will violate the unique constraint on the index, so will stop the insert. e.g.:
```
INSERT [user] (user_email, subscription_start, subscription_end)
VALUES ('test@test.com', '20140101', '20150101');
INSERT [Action] (user_email, action_date) VALUES ('test@test.com', '20140102');
-- WORKS FINE UP TO THIS POINT
-- THIS NEXT INSERT THROWS AN ERROR
INSERT [Action] (user_email, action_date) VALUES ('test@test.com', '20120102');
```
> Msg 2601, Level 14, State 1, Line 1
>
> Cannot insert duplicate key row in object 'dbo.ActionCheck' with unique index 'UQ\_ActionCheck\_ActionID'. The duplicate key value is (6).
>
> The statement has been terminated.
|
SQL constraint check with date from table linked with foreign key
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
So I have this table (table1):

I need to know all the `num` values that don't know HTML.
I tried - `SELECT num FROM table1 WHERE package <> 'HTML'`
The problem is, for example, that num 2 knows Excel as well, so he still shows up in the result...
Any ideas?
|
```
SELECT DISTINCT num FROM table1
WHERE
(num NOT IN (SELECT num FROM table1 WHERE package = 'HTML'))
```
I don't have access to a MySQL box at this moment, but that should work.
|
Try this:
```
SELECT DISTINCT num
FROM table1
WHERE num NOT IN (SELECT num FROM table1 WHERE package = 'HTML')
```
|
SQL - Removing other entries
|
[
"",
"mysql",
"sql",
""
] |
I've got a few columns that have values either as fractional strings (e.g. 6 11/32) or as decimals (1.5). Is there a `CAST` or `CONVERT` call that can convert these to consistently be decimals?
The error:
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to numeric.
Can I avoid doing any kind of parsing?
Thanks!
P.S. I'm working in `SQL Server Management Studio 2012`.
|
```
CREATE FUNCTION ufn_ConvertToNumber(@STR VARCHAR(50))
RETURNS decimal(18,10)
AS
BEGIN
DECLARE @L VARCHAR(50) = ''
DECLARE @A DECIMAL(18,10) = 0
SET @STR = LTRIM(RTRIM(@STR)); -- Remove extra spaces
IF ISNUMERIC(@STR) > 0 SET @A = CONVERT(DECIMAL(18,10), @STR) -- Check to see if already real number
IF CHARINDEX(' ',@STR,0) > 0
BEGIN
SET @L = SUBSTRING(@STR,1,CHARINDEX(' ',@STR,0) - 1 )
SET @STR = SUBSTRING(@STR,CHARINDEX(' ',@STR,0) + 1 ,50 )
SET @A = CONVERT(DECIMAL(18,10), @L)
END
IF CHARINDEX('/',@STR,0) > 0
BEGIN
SET @L = SUBSTRING(@STR,1,CHARINDEX('/',@STR,0) - 1 )
SET @STR = SUBSTRING(@STR,CHARINDEX('/',@STR,0) + 1 ,50 )
SET @A = @A + ( CONVERT(DECIMAL(18,10), @L) / CONVERT(DECIMAL(18,10), @STR) )
END
RETURN @A
END
GO
```
Then access it via `select dbo.ufn_ConvertToNumber('5 9/5')`
|
You'll need to parse. As Niels says, it's not really a good idea; but it can be done fairly simply with a T-SQL scalar function.
```
CREATE FUNCTION dbo.FracToDec ( @frac VARCHAR(100) )
RETURNS DECIMAL(14, 6)
AS
BEGIN
RETURN CASE
WHEN @frac LIKE '% %/%'
THEN CAST(LEFT(@frac, CHARINDEX(' ', @frac, 1) -1) AS DECIMAL(14,6)) +
( CAST(SUBSTRING(@frac, CHARINDEX(' ', @frac, 1) + 1, CHARINDEX('/', @frac, 1)-CHARINDEX(' ',@frac,1)-1) AS DECIMAL(14,6))
/ CAST(RIGHT(@frac, LEN(@frac) - CHARINDEX('/', @frac, 1)) AS DECIMAL(14,6)) )
WHEN @frac LIKE '%/%'
THEN CAST(LEFT(@frac, CHARINDEX('/', @frac, 1) - 1) AS DECIMAL(14,6)) / CAST(RIGHT(@frac, LEN(@frac) - CHARINDEX('/', @frac, 1)) AS DECIMAL(14,6))
ELSE
CAST(@frac AS DECIMAL(14,6))
END
END
GO
-- Test cases
SELECT dbo.FracToDec('22/7'), dbo.fracToDec('3.117'), dbo.fracToDec('7 3/4')
-- Output
-- 3.142857 3.117000 7.750000
```
Note that this will fail if the contents passed does not actually match the forms "mm/nn", "xx mm/nn" or a real decimal.
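Outside the database, the same three parsing cases (whole number plus fraction, bare fraction, plain decimal) can be sanity-checked with a short Python sketch; the function name and the use of `fractions.Fraction` are illustrative, not part of the T-SQL answer:

```python
from fractions import Fraction

def frac_to_dec(s):
    # Mirror the T-SQL function's three cases: "xx mm/nn", "mm/nn", or a decimal.
    s = s.strip()
    if " " in s and "/" in s:
        whole, frac = s.split(" ", 1)
        return float(whole) + float(Fraction(frac))
    if "/" in s:
        return float(Fraction(s))
    return float(s)

print(round(frac_to_dec("22/7"), 6))  # 3.142857
print(frac_to_dec("3.117"))           # 3.117
print(frac_to_dec("7 3/4"))           # 7.75
```

Like the T-SQL version, this raises on input that matches none of the three forms rather than guessing.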
|
Convert fractional string to decimal
|
[
"",
"sql",
"sql-server",
"casting",
""
] |
I'm running this query:
```
SELECT "uname","sip",count(1) as cnt FROM "random_data_date" GROUP BY "uname","sip";
```
To get something like this:
```
+------------+-----------------+--------------+
| uname | sip | cnt |
+------------+-----------------+--------------+
| root | 172.17.165.60 | 1 |
| root | 172.17.53.124 | 2 |
| root | 172.28.184.39 | 3 |
| root | 192.168.207.7 | 1 |
| root | 192.168.245.110 | 1 |
| user1 | 172.17.165.60 | 1 |
| user1 | 172.24.85.227 | 10 |
| user1 | 172.25.14.184 | 2 |
| user2 | 172.16.194.151 | 1 |
| user2 | 172.16.248.88 | 1 |
| user2 | 172.16.9.55 | 1 |
| user2 | 172.17.165.60 | 2 |
| user2 | 172.17.234.122 | 1 |
| user2 | 172.17.53.124 | 1 |
+------------+-----------------+--------------+
```
Is there a way so that I can make it add up the number of `sip`s for each username, and add up the `cnt`s of them, so the output is like this:
```
+------------+-----------------+--------------+
| uname | sipcnt | cnt |
+------------+-----------------+--------------+
| root | 5 | 8 |
| user1 | 3 | 13 |
| user2 | 6 | 7 |
+------------+-----------------+--------------+
```
I'm using [Apache Phoenix](http://phoenix.apache.org/language/index.html) to query Apache HBase.
I found [this question](https://stackoverflow.com/questions/5146978/count-number-of-records-returned-by-group-by) that is similar but I don't know how would I apply the answers to my situation.
|
Would be something like this:
```
SELECT
"uname",
count(distinct "sip"),
count("sip") as cnt
FROM
"random_data_date"
GROUP BY
"uname"
;
```
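The semantics of `COUNT(DISTINCT ...)` versus `COUNT(*)` in one `GROUP BY` can be verified with a small in-memory SQLite database; this is standard SQL, so it behaves the same as the Phoenix query (the sample rows here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE random_data_date (uname TEXT, sip TEXT)")
rows = [("root", "172.17.165.60"), ("root", "172.17.53.124"), ("root", "172.17.53.124"),
        ("user1", "172.24.85.227"), ("user1", "172.24.85.227"), ("user1", "172.17.165.60")]
conn.executemany("INSERT INTO random_data_date VALUES (?, ?)", rows)

# One DISTINCT count and one plain count in the same grouped query.
result = conn.execute("""
    SELECT uname, COUNT(DISTINCT sip) AS sipcnt, COUNT(*) AS cnt
    FROM random_data_date
    GROUP BY uname
    ORDER BY uname
""").fetchall()
print(result)  # [('root', 2, 3), ('user1', 2, 3)]
```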
|
```
SELECT "uname",
Count(distinct "sip") as "sipcnt",
count(1) as cnt
FROM "random_data_date"
GROUP BY "uname";
```
|
Count records of a column after group by in SQL
|
[
"",
"sql",
"select",
"apache-phoenix",
""
] |
I have a table named conductor. I want to select latest records that `date` less than `my_value`.
```
+----+-----------+------+
| id | program | date |
+----+-----------+------+
| 1 | program 1 | 1 |
| 2 | program 1 | 3 |
| 3 | program 2 | 3 |
| 4 | program 1 | 5 |
| 5 | program 1 | 7 |
+----+-----------+------+
```
If we consider `my_value` is 4 then output will be:
```
+----+-----------+------+
| id | program | date |
+----+-----------+------+
| 2 | program 1 | 3 |
| 3 | program 2 | 3 |
+----+-----------+------+
```
How can I select these records with SQL?
|
```
SELECT * FROM Conductor
WHERE `date` = (SELECT max(`date`) FROM Conductor
WHERE `date` < myvalue )
```
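A runnable sketch of the accepted approach, using Python's sqlite3 with the question's sample data; the column is renamed `dt` here simply to avoid quoting the reserved word `date`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conductor (id INTEGER, program TEXT, dt INTEGER)")
conn.executemany("INSERT INTO conductor VALUES (?, ?, ?)",
                 [(1, "program 1", 1), (2, "program 1", 3), (3, "program 2", 3),
                  (4, "program 1", 5), (5, "program 1", 7)])

my_value = 4
# The subquery finds the latest date below my_value; the outer query
# returns every row carrying that date.
result = conn.execute("""
    SELECT id, program, dt FROM conductor
    WHERE dt = (SELECT MAX(dt) FROM conductor WHERE dt < ?)
    ORDER BY id
""", (my_value,)).fetchall()
print(result)  # [(2, 'program 1', 3), (3, 'program 2', 3)]
```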
|
```
SELECT * FROM Conductor
WHERE date IN (SELECT max(date) FROM Conductor
WHERE date < myvalue )
```
|
How to select latest records less than value
|
[
"",
"mysql",
"sql",
""
] |
Given a table such as the following called form\_letters:
```
+---------------+----+
| respondent_id | id |
+---------------+----+
| 3 | 1 |
| 7 | 2 |
| 7 | 3 |
+---------------+----+
```
How can I select each of these rows except the ones that do not have the maximum id value for a given respondent\_id.
Example results:
```
+---------------+----+
| respondent_id | id |
+---------------+----+
| 3 | 1 |
| 7 | 3 |
+---------------+----+
```
|
Something like this should work;
```
SELECT respondent_id, MAX(id) as id FROM form_letters
group by respondent_id
```
MySQL fiddle:
<http://sqlfiddle.com/#!2/5c4dc0/2>
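The same `GROUP BY`/`MAX` query can also be checked end-to-end with sqlite3, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE form_letters (respondent_id INTEGER, id INTEGER)")
conn.executemany("INSERT INTO form_letters VALUES (?, ?)", [(3, 1), (7, 2), (7, 3)])

# Keep only the maximum id per respondent_id.
result = conn.execute("""
    SELECT respondent_id, MAX(id) AS id
    FROM form_letters
    GROUP BY respondent_id
    ORDER BY respondent_id
""").fetchall()
print(result)  # [(3, 1), (7, 3)]
```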
|
There are many ways of doing it: `group by` with `max()`, `not exists`, or a `left join`.
Here is using left join which is better in terms of performance on indexed columns
```
select
f1.*
from form_letters f1
left join form_letters f2 on f1.respondent_id = f2.respondent_id
and f1.id < f2.id
where f2.respondent_id is null
```
Using `not exists`
```
select f1.*
from form_letters f1
where not exists
(
select 1 from form_letters f2
where f1.respondent_id = f2.respondent_id
and f1.id < f2.id
)
```
**[Demo](http://www.sqlfiddle.com/#!2/8b5735/3)**
|
Select each row of table except where the id is not the maximum value for a given foreign key
|
[
"",
"mysql",
"sql",
""
] |
If I have the following query
> select sum(8.9177 + 7.1950 + 5.1246) as total
> from table1
Gives me `21.2373`
How do I round that figure to only be `21.2`?
I must always have only one decimal place, nothing more.
|
```
SELECT ROUND(SUM(8.9177 + 7.1950 + 5.1246),1) AS total
FROM table1
```
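`ROUND(expr, 1)` exists in SQLite as well, so the accepted answer can be spot-checked from Python (the table is unnecessary for a constant expression, so it is omitted here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# ROUND to one decimal place, as in the accepted answer.
total = conn.execute("SELECT ROUND(8.9177 + 7.1950 + 5.1246, 1)").fetchone()[0]
print(total)  # 21.2
```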
|
Here are two solutions:
1. Cast it to a decimal with 1 decimal place:

 select cast((sum(8.9177 + 7.1950 + 5.1246)) AS decimal(19,1)) from yourtable
2. Round it to 1 decimal place (the output will remain at 4 decimal places):

 select round(sum(8.9177 + 7.1950 + 5.1246),1) from yourtable

Use the 2nd option, as it is simple and will not change the actual datatype.
|
how to remove certain amount of decimals in sql server
|
[
"",
"sql",
"sql-server-2008",
""
] |
I am trying to get duplicate counts but without actually removing duplicates.
I tried using `GROUP BY id` and then `COUNT(id)` but it removes all duplicate entries.
Is there any way to not remove duplicates?
The table looks like this:
```
ID1 ID2 Value
1 2 someval
1 3 someval
1 4 someval
2 3 someval
2 1 someval
3 1 someval
4 1 someval
```
I am trying to get this:
```
ID1 ID2 Value COUNT
1 2 someval 3
1 3 someval 3
1 4 someval 3
2 3 someval 2
2 1 someval 2
3 1 someval 1
4 1 someval 1
```
I used this:
```
SELECT ID1, ID2, Value, COUNT(ID1) FROM table GROUP BY ID1;
```
|
One of way doing this is to have a separate query for the count and join on it:
```
SELECT t.id1, t.id2, t.value, cnt
FROM my_table t
JOIN (SELECT id1, count(*) AS cnt
FROM my_table
GROUP BY id1) c ON t.id1 = c.id1
```
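The derived-table join can be exercised against the question's data with sqlite3; every row keeps its duplicate, and the count is attached from the grouped subquery:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id1 INTEGER, id2 INTEGER, value TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?, ?)",
                 [(1, 2, "someval"), (1, 3, "someval"), (1, 4, "someval"),
                  (2, 3, "someval"), (2, 1, "someval"),
                  (3, 1, "someval"), (4, 1, "someval")])

# Join each row back to its per-id1 count instead of collapsing the groups.
result = conn.execute("""
    SELECT t.id1, t.id2, t.value, c.cnt
    FROM my_table t
    JOIN (SELECT id1, COUNT(*) AS cnt FROM my_table GROUP BY id1) c
      ON t.id1 = c.id1
    ORDER BY t.id1, t.id2
""").fetchall()
for row in result:
    print(row)
```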
|
You can do this with a correlated subquery in MySQL;
```
select id1, id2, value,
(select count(*) from table t2 where t2.id1 = t.id1) as count
from table t;
```
|
Get Duplicated Count Without Removing
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I have 4 columns in my table:
```
digit1 digit2 digit3 amount
1 2 3 1
2 2 2 1
2 1 3 1
3 2 1 1
```
and I want to group rows that contain the same set of digit values; the output would be like this:
```
digit1 digit2 digit3 amount
1 2 3 3
2 2 2 1
```
Sorry for the noob question; I've been stuck on this since last night. Thanks in advance.
|
Here is a solution.
If rows contain the same set of digits, e.g. (2,1,3), (3,2,1), ...etc, they have the same product and the same sum.
example: 2\*1\*3 = 3\*2\*1 ... etc
**NOTE**: this solution only works for digits different from zero (a zero makes every product zero).
Steps of the solution:
1. compute the multiplication (product) and the summation for each row
2. partition based on the multiplication and summation and name it [part]
3. take only the first record of each partition => [part]=1
4. count the records in each partition
Here is the solution:
```
with fact
as(
select Id,digit1,digit2,digit3,digit1*digit2*digit3 as [mult],digit1+digit2+digit3 as [sum]
from Data),part as(
select Id,digit1,digit2,digit3,[mult],[sum],row_number() over(partition by [mult],[sum] order by [mult],[sum]) as [Part]
from fact
)
select Id,digit1,digit2,digit3,(select count(*)
from fact f where f.[mult]=p.[mult] and f.[sum]=p.[sum]) as amount
from part p
where part=1
```
and here is the correct result: [DEMO](http://sqlfiddle.com/#!3/14f19/6)
Hope it helps.
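The same grouping can be expressed more directly, without the zero-digit restriction and without any risk of product/sum collisions, by keying on the sorted digits. This Python sketch is an alternative illustration of the grouping rule, not a translation of the SQL above:

```python
rows = [(1, 2, 3, 1), (2, 2, 2, 1), (2, 1, 3, 1), (3, 2, 1, 1)]

# Group rows whose digits form the same multiset, summing their amounts.
groups = {}
for d1, d2, d3, amount in rows:
    key = tuple(sorted((d1, d2, d3)))
    if key not in groups:
        groups[key] = [(d1, d2, d3), 0]   # keep the first row's digits
    groups[key][1] += amount

for digits, total in groups.values():
    print(digits, total)
# (1, 2, 3) 3
# (2, 2, 2) 1
```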
|
Please try this one. I did it in Oracle, but it's simple SQL so it should also work on your DBMS:
```
select digit1,digit2,digit3,(select sum(amount) from expdha
                             where digit2<>digit1+1 and digit3<>digit2+1) amount
from expdha
where digit2=digit1+1 and digit3=digit2+1
group by digit1,digit2,digit3
union
select digit1,digit2,digit3,amount from expdha
where digit1=digit2 and digit2=digit3;
```
where `expdha` is your table; if you need an explanation, I can explain it.
|
how to match values in sql ?
|
[
"",
"sql",
"vb.net",
""
] |
I am using SQL Server 2008 and need to create a query that shows rows that fall within a date range.
My table is as follows:
```
ADM_ID WH_PID WH_IN_DATETIME WH_OUT_DATETIME
```
My rules are:
* If the WH\_OUT\_DATETIME is on or within 24 hours of the WH\_IN\_DATETIME of another ADM\_ID with the same WH\_P\_ID
I would like another column added to the results which identify the grouped value if possible as `EP_ID`.
e.g.
```
ADM_ID WH_PID WH_IN_DATETIME WH_OUT_DATETIME
------ ------ -------------- ---------------
1 9 2014-10-12 00:00:00 2014-10-13 15:00:00
2 9 2014-10-14 14:00:00 2014-10-15 15:00:00
3 9 2014-10-16 14:00:00 2014-10-17 15:00:00
4 9 2014-11-20 00:00:00 2014-11-21 00:00:00
5 5 2014-10-17 00:00:00 2014-10-18 00:00:00
```
Would return rows with:
```
ADM_ID WH_PID EP_ID EP_IN_DATETIME EP_OUT_DATETIME WH_IN_DATETIME WH_OUT_DATETIME
------ ------ ----- ------------------- ------------------- ------------------- -------------------
1 9 1 2014-10-12 00:00:00 2014-10-17 15:00:00 2014-10-12 00:00:00 2014-10-13 15:00:00
2 9 1 2014-10-12 00:00:00 2014-10-17 15:00:00 2014-10-14 14:00:00 2014-10-15 15:00:00
3 9 1 2014-10-12 00:00:00 2014-10-17 15:00:00 2014-10-16 14:00:00 2014-10-17 15:00:00
4 9 2 2014-11-20 00:00:00 2014-11-20 00:00:00 2014-10-16 14:00:00 2014-11-21 00:00:00
5 5 1 2014-10-17 00:00:00 2014-10-18 00:00:00 2014-10-17 00:00:00 2014-10-18 00:00:00
```
The EP\_OUT\_DATETIME will always be the latest date in the group. Hope this clarifies a bit.
This way, I can group by the EP\_ID and find the EP\_OUT\_DATETIME and start time for any ADM\_ID/PID that fall within.
---
Each should roll into the next, meaning that if another row has an WH\_IN\_DATETIME which follows on the WH\_OUT\_DATETIME of another for the same WH\_PID, than that row's WH\_OUT\_DATETIME becomes the EP\_OUT\_DATETIME for all of the WH\_PID's within that EP\_ID.
I hope this makes some sense.
Thanks,
MR
|
Since the question does not specify that the solution be a "single" query ;-), here is another approach: using the "quirky update" feature, which is updating a variable at the same time you update a column. Breaking down the complexity of this operation, I create a scratch table to hold the piece that is the hardest to calculate: the `EP_ID`. Once that is done, it gets joined into a simple query and provides the window with which to calculate the `EP_IN_DATETIME` and `EP_OUT_DATETIME` fields.
The steps are:
1. Create the scratch table
2. Seed the scratch table with all of the `ADM_ID` values -- this lets us do an UPDATE as all of the rows already exist.
3. Update the scratch table
4. Do the final, simple select joining the scratch table to the main table
**The Test Setup**
```
SET ANSI_NULLS ON;
SET NOCOUNT ON;
CREATE TABLE #Table
(
ADM_ID INT NOT NULL PRIMARY KEY,
WH_PID INT NOT NULL,
WH_IN_DATETIME DATETIME NOT NULL,
WH_OUT_DATETIME DATETIME NOT NULL
);
INSERT INTO #Table VALUES (1, 9, '2014-10-12 00:00:00', '2014-10-13 15:00:00');
INSERT INTO #Table VALUES (2, 9, '2014-10-14 14:00:00', '2014-10-15 15:00:00');
INSERT INTO #Table VALUES (3, 9, '2014-10-16 14:00:00', '2014-10-17 15:00:00');
INSERT INTO #Table VALUES (4, 9, '2014-11-20 00:00:00', '2014-11-21 00:00:00');
INSERT INTO #Table VALUES (5, 5, '2014-10-17 00:00:00', '2014-10-18 00:00:00');
```
**Step 1: Create and Populate the Scratch Table**
```
CREATE TABLE #Scratch
(
ADM_ID INT NOT NULL PRIMARY KEY,
EP_ID INT NOT NULL
-- Might need WH_PID and WH_IN_DATETIME fields to guarantee proper UPDATE ordering
);
INSERT INTO #Scratch (ADM_ID, EP_ID)
SELECT ADM_ID, 0
FROM #Table;
```
Alternate scratch table structure to ensure proper update order (since "quirky update" uses the order of the Clustered Index, as noted at the bottom of this answer):
```
CREATE TABLE #Scratch
(
WH_PID INT NOT NULL,
WH_IN_DATETIME DATETIME NOT NULL,
ADM_ID INT NOT NULL,
EP_ID INT NOT NULL
);
INSERT INTO #Scratch (WH_PID, WH_IN_DATETIME, ADM_ID, EP_ID)
SELECT WH_PID, WH_IN_DATETIME, ADM_ID, 0
FROM #Table;
CREATE UNIQUE CLUSTERED INDEX [CIX_Scratch]
ON #Scratch (WH_PID, WH_IN_DATETIME, ADM_ID);
```
**Step 2: Update the Scratch Table** using a local variable to keep track of the prior value
```
DECLARE @EP_ID INT; -- this is used in the UPDATE
;WITH cte AS
(
SELECT TOP (100) PERCENT
t1.*,
t2.WH_OUT_DATETIME AS [PriorOut],
t2.ADM_ID AS [PriorID],
ROW_NUMBER() OVER (PARTITION BY t1.WH_PID ORDER BY t1.WH_IN_DATETIME)
AS [RowNum]
FROM #Table t1
LEFT JOIN #Table t2
ON t2.WH_PID = t1.WH_PID
AND t2.ADM_ID <> t1.ADM_ID
AND t2.WH_OUT_DATETIME >= (t1.WH_IN_DATETIME - 1)
AND t2.WH_OUT_DATETIME < t1.WH_IN_DATETIME
ORDER BY t1.WH_PID, t1.WH_IN_DATETIME
)
UPDATE sc
SET @EP_ID = sc.EP_ID = CASE
WHEN cte.RowNum = 1 THEN 1
WHEN cte.[PriorOut] IS NULL THEN (@EP_ID + 1)
ELSE @EP_ID
END
FROM #Scratch sc
INNER JOIN cte
ON cte.ADM_ID = sc.ADM_ID
```
**Step 3: Select Joining the Scratch Table**
```
SELECT tab.ADM_ID,
tab.WH_PID,
sc.EP_ID,
MIN(tab.WH_IN_DATETIME) OVER (PARTITION BY tab.WH_PID, sc.EP_ID)
AS [EP_IN_DATETIME],
MAX(tab.WH_OUT_DATETIME) OVER (PARTITION BY tab.WH_PID, sc.EP_ID)
AS [EP_OUT_DATETIME],
tab.WH_IN_DATETIME,
tab.WH_OUT_DATETIME
FROM #Table tab
INNER JOIN #Scratch sc
ON sc.ADM_ID = tab.ADM_ID
ORDER BY tab.ADM_ID;
```
**Resources**
* MSDN page for [UPDATE](http://msdn.microsoft.com/en-us/library/ms177523.aspx)
look for "@variable = column = expression"
* [Performance Analysis of doing Running Totals](http://blog.waynesheffield.com/wayne/archive/2011/08/running-totals-in-denali-ctp3/) (not exactly the same thing as here, but not too far off)
This blog post does mention:
+ PRO: this method is generally pretty fast
+ CON: "The order of the UPDATE is controlled by the order of the clustered index". This behavior might rule out using this method depending on circumstances. But in this particular case, if the `WH_PID` values are not at least grouped together naturally via the ordering of the clustered index and ordered by `WH_IN_DATETIME`, then those two fields just get added to the scratch table and the PK (with implied clustered index) on the scratch table becomes `(WH_PID, WH_IN_DATETIME, ADM_ID)`.
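The core of the EP_ID assignment, where a new episode starts whenever a stay's WH_IN_DATETIME is more than 24 hours after the previous WH_OUT_DATETIME for the same WH_PID, can be prototyped procedurally. This Python sketch uses the question's sample rows and illustrates only the grouping rule, not the quirky-update mechanics:

```python
from datetime import datetime, timedelta

rows = [
    (1, 9, "2014-10-12 00:00:00", "2014-10-13 15:00:00"),
    (2, 9, "2014-10-14 14:00:00", "2014-10-15 15:00:00"),
    (3, 9, "2014-10-16 14:00:00", "2014-10-17 15:00:00"),
    (4, 9, "2014-11-20 00:00:00", "2014-11-21 00:00:00"),
    (5, 5, "2014-10-17 00:00:00", "2014-10-18 00:00:00"),
]

fmt = "%Y-%m-%d %H:%M:%S"
# Order by WH_PID, then WH_IN_DATETIME -- the same ordering the update relies on.
parsed = sorted(
    ((adm, pid, datetime.strptime(i, fmt), datetime.strptime(o, fmt))
     for adm, pid, i, o in rows),
    key=lambda r: (r[1], r[2]),
)

ep_ids = {}
prev = {}  # per WH_PID: (last EP_ID, last WH_OUT_DATETIME)
for adm, pid, t_in, t_out in parsed:
    if pid in prev and t_in - prev[pid][1] <= timedelta(hours=24):
        ep = prev[pid][0]                       # within 24h: same episode
    else:
        ep = prev[pid][0] + 1 if pid in prev else 1
    prev[pid] = (ep, t_out)
    ep_ids[adm] = ep

print(ep_ids)  # {5: 1, 1: 1, 2: 1, 3: 1, 4: 2}
```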
|
I would do this using `exists` in a correlated subquery:
```
select t.*,
(case when exists (select 1
from table t2
where t2.WH_P_ID = t.WH_P_ID and
t2.ADM_ID = t.ADM_ID and
t.WH_OUT_DATETIME between t2.WH_IN_DATETIME and dateadd(day, 1, t2.WH_OUT_DATETIME)
)
then 1 else 0
end) as TimeFrameFlag
from table t;
```
|
Grouping rows with a date range
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
**I want to replace the space with Special characters while searching in OBIEE**
*Example:* When I search for "T MOBILE", I find "T-MOBILE" and "T\_MOBILE", etc.
**Here's my Select statement:**
```
SELECT "- Customer Install At"."Cust Number" saw_0,
"- Customer Install At"."Cust Name" saw_1,
"- Customer CRI Current Install At"."Global Duns Number" saw_2,
"- Customer CRI Current Install At"."Global Duns Name" saw_3
FROM "GS Install Base"
ORDER BY saw_0, saw_1, saw_2, saw_3
```
I tried to use REGEXP\_LIKE in a WHERE, but it gives me an error
"Error getting drill information:"
Can anyone help me with this query?
|
If you wanted to do something more complex than koriander's answer then the following should work.
You can't use database functions (such regexp\_like) directly, only OBI functions. So you need to use the OBI function EVALUATE to pass the regexp\_like function back to the database.
(There is plenty of documentation on the EVALUATE function, both by Oracle and others.)
On the column you are trying to filter, you will first need to convert the filter to SQL, replace the entire filter with something like:
```
evaluate('REGEXP_LIKE(%1, ''^T.MOBILE$'', ''i'')', MyTable.MyColumn)
```
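The regex passed through EVALUATE can be checked in isolation. As in POSIX-style regular expressions generally (Python's `re` shown here), `.` matches any single character and the `'i'` flag makes the match case-insensitive:

```python
import re

# '.' covers the '-', '_', or ' ' between the two words.
pattern = re.compile(r"^T.MOBILE$", re.IGNORECASE)

for name in ["T-MOBILE", "T_MOBILE", "T Mobile", "TMOBILE"]:
    print(name, bool(pattern.match(name)))
# T-MOBILE True
# T_MOBILE True
# T Mobile True
# TMOBILE False
```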
|
I'm not sure how general you need this to be. For your example, you would use the LIKE operator.
```
WHERE FIELDNAME LIKE 'T_MOBILE'
```
The wildcard underscore "\_" will look for any character matching in that position. The other wildcard you can use is "%" which will match a set of characters of any length.
|
Replace a Space with Special characters in a SELECT
|
[
"",
"sql",
"oracle",
"obiee",
""
] |
I want to convert 12/8/2006 12:30:00 to 12/8/2006
I tried -
```
1. trunc(TO_DATE (effective_date_time,'DD/MM/YYYY HH24:Mi:SS'))
2. TO_DATE (effective_date_time,'DD/MM/YYYY')
```
But all these are returning values as 12/8/0006.
Why is Oracle returning year 0006 instead of 2006?
|
Try this:
```
trunc(effective_date_time)
```
It's a date, you don't need TO\_DATE
When you're using TO\_DATE(effective\_date\_time, 'format') on a DATE column, effective\_date\_time is converted to a char using NLS params. I suppose your NLS settings is something like 'dd/mm/yy'. That's why you get a wrong year.
A simple example:
```
alter session set nls_date_format = 'dd/mm/yy';
select trunc(TO_DATE (sysdate,'DD/MM/YYYY HH24:Mi:SS')) from dual;
November, 22 0014 00:00:00+0000
alter session set nls_date_format = 'dd/mm/yyyy';
select trunc(TO_DATE (sysdate,'DD/MM/YYYY HH24:Mi:SS')) from dual;
November, 22 2014 00:00:00+0000
```
|
If `effective_date_time` is a `date` column using `to_date()` is totally useless.
It will first (implicitly!) convert the `date` to a `varchar` (based on the NLS settings), just to convert it back to a `date` again.
If you want a specific format for your date column use `to_char()`
```
to_char(effective_date_time,'DD/MM/YYYY HH24:Mi:SS')
```
**Never use `to_date()` on `date` or `timestamp` columns!**
|
Issue in removing time stamp in PL/SQL
|
[
"",
"sql",
"oracle",
"date",
"plsql",
"timestamp",
""
] |
I'm working to update a stored procedure that current selects up to n rows, if the rows returned = n, does a select count without the limit, and then returns the original select and the total impacted rows.
Kinda like:
```
SELECT TOP (@rowsToReturn)
A.data1,
A.data2
FROM
mytable A
SET @maxRows = @@ROWCOUNT
IF @rowsToReturn = @maxRows
BEGIN
    SET @maxRows = (SELECT COUNT(1) FROM mytable)
END
```
I'm wanting reduce this to a single select statement. Based on [this question](https://stackoverflow.com/questions/16175955/t-sql-get-row-count-before-top-is-applied), `COUNT(*) OVER()` allows this, but it is put on every single row instead of in an output parameter. Maybe something like `FOUND_ROWS()` in MYSQL, such as a @@TOTALROWCOUNT or such.
As a side note, since the actual select has an order by, the data base will need to already traverse the entire set (to make sure that it gets the correct first n ordered records), so the database should already have this count somewhere.
|
As @MartinSmith mentioned in a comment on this question, there is no direct (i.e. pure T-SQL) way of getting the total numbers of rows that would be returned while at the same time limiting it. In the past I have done the method of:
* dump the query to a temp table to grab `@@ROWCOUNT` (the total set)
* use `ROW_NUMBER() AS [ResultID]` on the ordered results of the main query
* `SELECT TOP (n) FROM #Temp ORDER BY [ResultID]` or something similar
Of course, the downside here is that you have the disk I/O cost of getting those records into the temp table. Put `[tempdb]` on SSD? :)
I have also experienced the "run COUNT(\*) with the same rest of the query first, then run the regular SELECT" method (as advocated by @Blam), and it is not a "free" re-run of the query:
* It is a full re-run in many cases. The issue is that when doing `COUNT(*)` (hence not returning any fields), the optimizer only needs to worry about indexes in terms of the JOIN, WHERE, GROUP BY, ORDER BY clauses. But when you want some actual data back, that *could* change the execution plan quite a bit, especially if the indexes used to get the COUNT(\*) are not "covering" for the fields in the SELECT list.
* The other issue is that even if the indexes are all the same and hence all of the data pages are still in cache, that just saves you from the physical reads. But you still have the logical reads.
I'm not saying this method doesn't work, but I think the method in the Question that only does the `COUNT(*)` conditionally is far less stressful on the system.
The method advocated by @Gordon is actually functionally very similar to the temp table method I described above: it dumps the full result set to [tempdb] (the `INSERTED` table is in [tempdb]) to get the full `@@ROWCOUNT` and then it gets a subset. On the downside, the INSTEAD OF TRIGGER method is:
* *a lot* more work to set up (as in 10x - 20x more): you need a real table to represent each distinct result set, you need a trigger, the trigger needs to either be built dynamically, or get the number of rows to return from some config table, or I suppose it could get it from `CONTEXT_INFO()` or a temp table. Still, the whole process is quite a few steps and convoluted.
* *very* inefficient: first it does the same amount of work dumping the full result set to a table (i.e. into the `INSERTED` table--which lives in `[tempdb]`) but then it does an additional step of selecting the desired subset of records (not really a problem as this should still be in the buffer pool) to go back into the real table. What's worse is that second step is actually double I/O as the operation is also represented in the transaction log for the database where that real table exists. But wait, there's more: what about the next run of the query? You need to clear out this real table. Whether via `DELETE` or `TRUNCATE TABLE`, it is another operation that shows up (the amount of representation based on which of those two operations is used) in the transaction log, plus is additional time spent on the additional operation. AND, let's not forget about the step that selects the subset out of `INSERTED` into the real table: it doesn't have the opportunity to use an index since you can't index the `INSERTED` and `DELETED` tables. Not that you always would want to add an index to the temp table, but sometimes it helps (depending on the situation) and you at least have that choice.
* overly complicated: what happens when two processes need to run the query at the same time? If they are sharing the same real table to dump into and then select out of for the final output, then there needs to be another column added to distinguish between the SPIDs. It could be `@@SPID`. Or it could be a GUID created before the initial `INSERT` into the real table is called (so that it can be passed to the `INSTEAD OF` trigger via `CONTEXT_INFO()` or a temp table). Whatever the value is, it would then be used to do the `DELETE` operation once the final output has been selected. And if not obvious, this part influences a performance issue brought up in the prior bullet: `TRUNCATE TABLE` cannot be used as it clears the entire table, leaving `DELETE FROM dbo.RealTable WHERE ProcessID = @WhateverID;` as the only option.
Now, to be fair, it is *possible* to do the final SELECT from within the trigger itself. This would reduce some of the inefficiency as the data never makes it into the real table and then also never needs to be deleted. It also reduces the over-complication as there should be no need to separate the data by SPID. However, this is a *very* time-limited solution as the ability to return results from within a trigger is going bye-bye in the next release of SQL Server, so sayeth the MSDN page for the [disallow results from triggers Server Configuration Option](http://msdn.microsoft.com/en-us/library/ms186337.aspx):
> This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible. We recommend that you set this value to 1.
**The only actual way to do:**
* the query one time
* get a subset of rows
* and still get the total row count of the full result set
is to use .Net. If the procs are being called from app code, please see "EDIT 2" at the bottom. If you want to be able to randomly run various stored procedures via ad hoc queries, then it would have to be a SQLCLR stored procedure so that it could be generic and work for any query as stored procedures can return dynamic result sets and functions cannot. The proc would need at least 3 parameters:
* @QueryToExec NVARCHAR(MAX)
* @RowsToReturn INT
* @TotalRows INT OUTPUT
The idea is to use "Context Connection = true;" to make use of the internal / in-process connection. You then do these basic steps:
1. call `ExecuteDataReader()`
2. before you read any rows, do a `GetSchemaTable()`
3. from the SchemaTable you get the result set field names and datatypes
4. from the result set structure you construct a `SqlDataRecord`
5. with that `SqlDataRecord` you call `SqlContext.Pipe.SendResultsStart(_DataRecord)`
6. now you start calling `Reader.Read()`
7. for each row you call:
1. `Reader.GetValues()`
2. `DataRecord.SetValues()`
3. `SqlContext.Pipe.SendResultRow(_DataRecord)`
4. `RowCounter++`
8. Rather than doing the typical "`while (Reader.Read())`", you instead include the @RowsToReturn param: `while(Reader.Read() && RowCounter < RowsToReturn.Value)`
9. After that while loop, call `SqlContext.Pipe.SendResultsEnd()` to close the result set (the one that you are sending, not the one you are reading)
10. then do a second while loop that cycles through the rest of the result, but never gets any of the fields:
while (Reader.Read())
{
RowCounter++;
}
11. then just set `TotalRows = RowCounter;` which will pass back the number of rows for the full result set, even though you only returned the top n rows of it :)
Not sure how this performs against the temp table method, the dual call method, or even @M.Ali's method (which I have also tried and kinda like, but the question was specific to *not* sending the value as a column), but it should be fine and does accomplish the task as requested.
**EDIT:**
Even better! Another option (a variation on the above C# suggestion) is to use the `@@ROWCOUNT` from the T-SQL stored procedure, sent as an `OUTPUT` parameter, rather than cycling through the rest of the rows in the `SqlDataReader`. So the stored procedure would be similar to:
```
CREATE PROCEDURE SchemaName.ProcName
(
@Param1 INT,
@Param2 VARCHAR(05),
    @RowCount INT = -1 OUTPUT -- default so it doesn't have to be passed in
)
AS
SET NOCOUNT ON;
{any ol' query}
SET @RowCount = @@ROWCOUNT;
```
Then, in the app code, create a new SqlParameter, Direction = Output, for "@RowCount". The numbered steps above stay the same, except the last two (10 and 11), which change to:
10. Instead of the 2nd while loop, just call `Reader.Close()`
11. Instead of using the RowCounter variable, set `TotalRows = (int)RowCountOutputParam.Value;`
I have tried this and it does work. But so far I have not had time to test the performance against the other methods.
**EDIT 2:**
If the T-SQL stored procs are being called from the app layer (i.e. no need for ad hoc execution) then this is actually a much simpler variation of the above C# methods. In this case you don't need to worry about the `SqlDataRecord` or the `SqlContext.Pipe` methods. Assuming you already have a `SqlDataReader` set up to pull back the results, you just need to:
1. Make sure the T-SQL stored proc has a `@RowCount INT = -1 OUTPUT` parameter
2. Make sure to `SET @RowCount = @@ROWCOUNT;` immediately after the query
3. Register the OUTPUT param as a `SqlParameter` having Direction = Output
4. Use a loop similar to: `while(Reader.Read() && RowCounter < RowsToReturn)` so that you can stop retrieving results once you have pulled back the desired amount.
5. Remember to *not* limit the result in the stored proc (i.e. no `TOP (n)`)
At that point, just like what was mentioned in the first "EDIT" above, just close the `SqlDataReader` and grab the `.Value` of the OUTPUT param :).
|
How about this....
```
DECLARE @N INT = 10
;WITH CTE AS
(
SELECT
A.data1,
A.data2
FROM mytable A
)
SELECT TOP (@N) * , (SELECT COUNT(*) FROM CTE) Total_Rows
FROM CTE
```
The last column will be populated with the total number of rows it would have returned without the TOP Clause.
The issue with your requirement is that you are expecting a SINGLE select statement to return both a table and a scalar value, which is not possible.
A single select statement will return a table or a scalar value. Or you can have two separate selects: one returning the table and the other returning the scalar. The choice is yours :)
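The `COUNT(*) OVER ()` behavior described here, where every returned row carries the total the query would produce without the row limit, can be demonstrated with sqlite3 (SQLite 3.25+ supports window functions; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
conn.execute("CREATE TABLE mytable (data1 INTEGER, data2 TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(i, "row%d" % i) for i in range(1, 8)])

n = 3
# The window count is computed over the full result set before LIMIT applies.
result = conn.execute("""
    SELECT data1, data2, COUNT(*) OVER () AS total_rows
    FROM mytable
    ORDER BY data1
    LIMIT ?
""", (n,)).fetchall()
print(result)  # [(1, 'row1', 7), (2, 'row2', 7), (3, 'row3', 7)]
```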
|
TSQL: Is there a way to limit the rows returned and count the total that would have been returned without the limit (without adding it to every row)?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sqlclr",
""
] |
For a table such as this:
```
tblA
A,B,C
1,2,t3a
1,3,d4g
1,2,b5e
1,3,s6u
```
I want to produce a table that selects distinct on both A and B simultaneously, and still keep one value of C, like so:
```
tblB
A,B,C
1,2,t3a
1,3,d4g
```
Seems like this would be simple, but not finding it for the life of me.
```
DROP TABLE IF EXISTS tblA CASCADE;
SELECT DISTINCT ON (A,B), C
INTO tblB
FROM tblA;
```
|
This should do the trick
```
CREATE TABLE tblB AS (
SELECT A, B, max(C) AS max_of_C FROM tblA GROUP BY A, B
)
```
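A runnable check of this approach with the question's data. Note that `MAX(C)` keeps the lexically greatest C per (A, B) group, which satisfies "keep one value of C" but is not necessarily the first value shown in the question's expected output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblA (A INTEGER, B INTEGER, C TEXT)")
conn.executemany("INSERT INTO tblA VALUES (?, ?, ?)",
                 [(1, 2, "t3a"), (1, 3, "d4g"), (1, 2, "b5e"), (1, 3, "s6u")])

# CREATE TABLE ... AS SELECT, grouping on both A and B simultaneously.
conn.execute("""
    CREATE TABLE tblB AS
    SELECT A, B, MAX(C) AS C FROM tblA GROUP BY A, B
""")
result = conn.execute("SELECT A, B, C FROM tblB ORDER BY A, B").fetchall()
print(result)  # [(1, 2, 't3a'), (1, 3, 's6u')]
```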
|
When you use `DISTINCT ON` you should have `ORDER BY`:
```
SELECT DISTINCT ON (A,B), C
INTO tblB
FROM tblA
ORDER BY A, B;
```
|
Select distinct on multiple columns simultaneously, and keep one column in PostgreSQL
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables.
**Invoices**
```
ID | Amount
-----------
1 | 123.54
2 | 553.46
3 | 431.34
4 | 321.31
5 | 983.12
```
**Credit Memos**
```
ID | invoice_ID | Amount
------------------------
1 | 3 | 25.50
2 | 95 | 65.69
3 | 51 | 42.50
```
I want to get a result set like this out of those two tables
```
ID | Amount | Cr_memo
---------------------
1 | 123.54 |
2 | 553.46 |
3 | 431.34 | 25.50
4 | 321.31 |
5 | 983.12 |
```
I've been messing with joins and whatnot all morning with no real luck.
Here is the last query I tried, which pulled everything from the Credit Memo table...
```
SELECT A.ID, A.Amount FROM Invoices AS A
LEFT JOIN Credit_Memos AS B ON A.ID = B.invoice_ID
```
Any help or pointers are appreciated.
|
Your query would work fine. Just add `Credit_memo.Amount` with an alias:
```
SELECT Inv.ID,Inv.Amount,IFNULL(C.Amount,'') AS Cr_memo
FROM Invoices Inv LEFT JOIN
Credit_Memos C ON Inv.ID=C.invoice_ID
```
Result:
```
ID AMOUNT CR_MEMO
1 124
2 553
3 431 25.50
4 321
5 983
```
See result in [**SQL FIDDLE**](http://www.sqlfiddle.com/#!2/4524a).
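The LEFT JOIN plus `IFNULL` pattern, reproduced with sqlite3 and the question's data (`IFNULL` happens to exist in both MySQL and SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Invoices (ID INTEGER, Amount REAL)")
conn.execute("CREATE TABLE Credit_Memos (ID INTEGER, invoice_ID INTEGER, Amount REAL)")
conn.executemany("INSERT INTO Invoices VALUES (?, ?)",
                 [(1, 123.54), (2, 553.46), (3, 431.34), (4, 321.31), (5, 983.12)])
conn.executemany("INSERT INTO Credit_Memos VALUES (?, ?, ?)",
                 [(1, 3, 25.50), (2, 95, 65.69), (3, 51, 42.50)])

# Every invoice is kept; unmatched credit memo amounts become empty strings.
result = conn.execute("""
    SELECT Inv.ID, Inv.Amount, IFNULL(C.Amount, '') AS Cr_memo
    FROM Invoices Inv
    LEFT JOIN Credit_Memos C ON Inv.ID = C.invoice_ID
    ORDER BY Inv.ID
""").fetchall()
for row in result:
    print(row)
```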
|
You almost got the answer `Left Outer Join` is what you need but you missed to select `Cr_memo` from `Credit_Memos` table. Since you don't want to show Null values when there is no `Invoices_ID` in `Credit Memos` table use `IFNULL` to make `NULL's` as Empty string
```
SELECT A.ID, A.Amount, IFNULL(B.Cr_memo,'') AS Cr_memo
FROM Invoices AS A
LEFT JOIN Credit_Memos AS B
ON A.ID = B.invoice_ID
```
|
Select data from another table if exists, if not display null
|
[
"",
"mysql",
"sql",
""
] |
Given two Tables that are Linked together
```
tbl_Gasoline
--------------
ID | Type
-------------
1 | Diesel
2 | Kerosene
```
and
```
tbl_Expense
-----------------------------
ID | Price | GasolineType (tbl_Gasoline foreign key)
-----------------------------
1 | 5000 | 1
2 | 4000 | 2
3 | 3000 | 1
```
I want to have an Output like this
```
tbl_GasolineExpense
----------------------------
ID | Price | Type
----------------------------
1 | 8000 | Diesel
2 | 4000 | Kerosene
```
I have tried to use the DISTINCT and SUM clauses but I can't seem to build a good query.
It's been a long time since I last used SQL, so some help would really be appreciated.
|
Try this one
```
SELECT g.id, SUM(price) AS 'price', g.Type
FROM tbl_expense e
INNER JOIN tbl_gasoline g ON e.GasolineType = g.id
GROUP BY g.id, g.Type
```
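A runnable version of the join-and-aggregate, grouped by the gasoline table's key; the question's schema names the description column `Type`, so that name is used here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_Gasoline (ID INTEGER, Type TEXT)")
conn.execute("CREATE TABLE tbl_Expense (ID INTEGER, Price INTEGER, GasolineType INTEGER)")
conn.executemany("INSERT INTO tbl_Gasoline VALUES (?, ?)",
                 [(1, "Diesel"), (2, "Kerosene")])
conn.executemany("INSERT INTO tbl_Expense VALUES (?, ?, ?)",
                 [(1, 5000, 1), (2, 4000, 2), (3, 3000, 1)])

# Sum expenses per gasoline type via the foreign key.
result = conn.execute("""
    SELECT g.ID, SUM(e.Price) AS Price, g.Type
    FROM tbl_Expense e
    JOIN tbl_Gasoline g ON e.GasolineType = g.ID
    GROUP BY g.ID, g.Type
    ORDER BY g.ID
""").fetchall()
print(result)  # [(1, 8000, 'Diesel'), (2, 4000, 'Kerosene')]
```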
|
**Please Try this.**
```
SELECT
    g.id,
    SUM(price) AS 'price',
    g.Type
FROM
    tbl_expense e
INNER JOIN tbl_gasoline g ON e.GasolineType = g.id
GROUP BY
    g.id, g.Type
```
|
SELECT DISTINCT TWO COLUMNS and SUM another COLUMN's VALUES
|
[
"",
"sql",
"sql-server-2008",
"sum",
"distinct",
""
] |
When I try to execute this query
```
INSERT INTO StateRegion
( FullName ,
Abbreviation ,
RegionType ,
Admitted ,
Capital
)
VALUES ( 'Alabama' ,
'AL' ,
1 ,
'1819-Dec-14' ,
'Montgomery'
);
```
it gives me a SQL date conversion error:
> Conversion failed when converting date and/or time from character
> string
`Admitted` is a Date type.
The issue is I can not change this format : `1819-Dec-14`, is it possible to add convert method to the query above ?
Table definition :
```
CREATE TABLE StateRegion
(
ID bigint PRIMARY KEY IDENTITY(1,1),
FullName varchar(50) NOT NULL,
Abbreviation varchar(2) NOT NULL,
RegionType smallint NOT NULL,
Admitted date NULL,
Capital varchar(50) NULL
);
```
|
The month name part of that date format is interpreted according to the language of the login.
You can change the default language of the login to US English or British English if you must work with that format or issue a
```
Set language english
```
To set the format at run time before the problematic query, then optionally switch it back afterwards.
If you have the choice, using yyyy-mm-dd or yyyymmdd would be preferable formats for date literals, as both avoid this issue when casting to `date`.
|
Use a parameterized query. Parameterization will send the date to the server in binary, avoiding any string conversions which depend upon the client locale.
Example in C#:
```
SqlCommand sqc = new SqlCommand("INSERT INTO MyTable (DateColumn) VALUES (@date)", con);
sqc.Parameters.AddWithValue("@date", new DateTime(1819, 12, 14));
```
If you are running this from an interactive batch (SQL Server Management Studio, or similar), use `SET LANGUAGE` to ensure the dates are parsed correctly:
```
SET LANGUAGE ENGLISH;
INSERT INTO StateRegion (FullName, Abbreviation, RegionType, Admitted, Capital)
VALUES ('Alabama', 'AL', 1, '1819-Dec-14', 'Montgomery');
```
[SqlFiddle example showing correct parsing](http://sqlfiddle.com/#!3/95c81/3)
|
Inserting date in sql server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I created two tables, Category and Products, for my shopping cart database. Should I build a third table, CategoryProduct, and save the IDs of both categories and products in it, or have a column in the Products table that stores the category?
I ask because I need to display the products category-wise. Please guide me or show me an example.
|
If the product belongs to only one category, then you don't need a third table. Just create a column `CategoryId` as a foreign key and references the `CategoryId` from the categories table.
However, if the product belongs to many categories, then you need a third table, something like `CategoryProducts` with columns:
* `Id`: primary key; works as a surrogate key, it has nothing to do with the relations,
* `ProductId`: references the product id column from the products table,
* `CategoryId`: references the category id column from the categories table.
|
It depends on whether a Product can have more than one Category. If that is the case, then you will need the third table to hold the ids of both. If it is one Category per Product, you can add a foreign key to the Product table to hold the category id.
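If you go with the third-table (many-to-many) design, here is a minimal sketch of the layout and the "products by category" query. It is shown with SQLite from Python for brevity; all table and column names are illustrative, and the question's MySQL DDL would be analogous:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Category (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Product  (Id INTEGER PRIMARY KEY, Name TEXT);
-- Junction table: one row per (product, category) pair
CREATE TABLE CategoryProduct (
    ProductId  INTEGER REFERENCES Product(Id),
    CategoryId INTEGER REFERENCES Category(Id),
    PRIMARY KEY (ProductId, CategoryId)
);
INSERT INTO Category VALUES (1, 'Books'), (2, 'Sale');
INSERT INTO Product  VALUES (1, 'SQL Primer');
INSERT INTO CategoryProduct VALUES (1, 1), (1, 2);
""")

# Display products grouped by category
rows = conn.execute("""
    SELECT c.Name, p.Name
    FROM CategoryProduct cp
    JOIN Category c ON c.Id = cp.CategoryId
    JOIN Product  p ON p.Id = cp.ProductId
    ORDER BY c.Name
""").fetchall()
print(rows)   # [('Books', 'SQL Primer'), ('Sale', 'SQL Primer')]
```

Note the composite primary key on the junction table also prevents linking the same product to the same category twice.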
|
How to add category to product in mysql
|
[
"",
"mysql",
"sql",
"shopping-cart",
""
] |
I have two tables; the structure is below:
```
person (id, fname, lname, ph, mob, dob, email)
address (id, address1, address2, address3, town, county, postcode)
person_address (id, person_id, address_id)
```
I have an issue here: if one person has more than one address, how would I work out which is the active or current address? Should I add a direct link to the address table, such as `person (id, fname, lname, ph, mob, dob, email, address_id)`,
or should I add a link from the person\_address table to the person table: `person (id, fname, lname, ph, mob, dob, email, person_address_id)`?
Any ideas?
|
Presumably, a person can have only one current address. If so, you should add a column into the `person` table, called something like `CurrentAddress`.
If you *require* a current address, you can even declare `CurrentAddress` to be `NOT NULL`.
If a person could have more than one current address, then use a flag in `person_address`.
Now, if you want the current address to be the most recently inserted address, you can use a trigger to reset the value on each insert. Or, if your database is not too big (thousands of rows, not millions of rows), you can calculate it on the fly by choosing the `person_address` record with the most recent creation time.
EDIT:
@Joanvo's point is a good one. You can fix it by adding a foreign key constraint in `person` referring to `person_address`. You will have to create a unique constraint on `current_address(person_id, address_id)` and use that for the foreign key.
|
You should add a bit in the person\_address table to indicate if it is the current address or not. Make sure there are not more than one current addresses via sql or code checks:
```
person_address (id, person_id, address_id, current)
```
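Here is a sketch of that flag-column approach, using SQLite from Python for illustration. The column and index names are made up; SQL Server gives the same guarantee via a filtered unique index, while MySQL would need a trigger or code check as suggested above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person_address (
    id INTEGER PRIMARY KEY,
    person_id INTEGER,
    address_id INTEGER,
    is_current INTEGER DEFAULT 0    -- 1 = current address
);
-- Partial (filtered) unique index: at most one current address per person
CREATE UNIQUE INDEX ux_one_current
    ON person_address(person_id) WHERE is_current = 1;
""")
conn.execute("INSERT INTO person_address VALUES (1, 10, 100, 1)")
conn.execute("INSERT INTO person_address VALUES (2, 10, 101, 0)")  # old address: OK
try:
    conn.execute("INSERT INTO person_address VALUES (3, 10, 102, 1)")  # 2nd current
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)   # True
```

The partial index rejects a second "current" row for the same person while allowing any number of historical addresses.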
|
MySQL relationship issue
|
[
"",
"mysql",
"sql",
"database",
"foreign-keys",
""
] |
I materialize the result of a view in a table.
I want to know if there is a way to tell when the result of this view changes, so I can refresh it.
A materialized view or the result cache could do that, but I don't want to use either technique.
Is there another way to answer this question?
In fact, we have a generic system that materializes and refreshes views every n minutes. The cost of the refresh can be high, and I am trying to find out whether the underlying data has changed or not.
We don't use materialized views because a full refresh truncates the table, and a fast refresh works only on tables with a primary key.
The result cache can't do the job either, because we don't have enough memory for that.
So I want to know if there is a technique to tell whether the underlying data of the view has changed since the last refresh.
|
You could have a flag table that looks something like this:
```
CREATE TABLE FlagTable (
    SourceTable varchar(30),
    UpdateDate DATETIME,
    Handled CHAR(1) DEFAULT 'N'
);
```
Create a trigger on each of your base tables so that on INSERT, UPDATE, or DELETE, a record is inserted into FlagTable with the name of the base table.
When you want to check for a refresh, just select WHERE Handled = 'N'; if rows come back, refresh the pseudo-view, then set Handled = 'Y'.
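A minimal runnable sketch of this flag-table pattern, using SQLite from Python (table and trigger names are made up; in SQL Server the trigger would read from the `inserted` pseudo-table, and in Oracle from `:new`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE base_table (id INTEGER, val TEXT);   -- stand-in for a view's base table
CREATE TABLE FlagTable (
    SourceTable TEXT,
    UpdateDate  TEXT DEFAULT (datetime('now')),
    Handled     TEXT DEFAULT 'N'
);
CREATE TRIGGER trg_base_ins AFTER INSERT ON base_table
BEGIN
    INSERT INTO FlagTable (SourceTable) VALUES ('base_table');
END;
""")
conn.execute("INSERT INTO base_table VALUES (1, 'a')")

# Refresh cycle: is anything pending?
pending = conn.execute(
    "SELECT COUNT(*) FROM FlagTable WHERE Handled = 'N'").fetchone()[0]
if pending:
    # ... refresh the materialized copy of the view here ...
    conn.execute("UPDATE FlagTable SET Handled = 'Y' WHERE Handled = 'N'")
print(pending)   # 1
```

After the refresh cycle the flag rows are marked handled, so the next poll sees nothing pending until another change fires the trigger.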
|
You can do all this in an easier way, by setting the mat\_view with refresh on commit option and if possible, also to make it refresh *fast* on commit. This will guarantee you that each time a change occurs the view will refresh only the new lines (compared to complete refresh, where the query behind the view is executed as a whole).
Now this last method can cause a lot of problems, because the *refresh fast on commit* option has limitations, and if you can't comply with them, you just can't use it. If, however, you set this to *refresh complete on commit*, it can slow the system down (as you suggested happens from time to time), and it also means you can't isolate the source of change.
If you want to see where the most frequent change occurs, I suggest you to use a custom table (per day) where to store the daily result of the query behind the mat\_view. Then after a few days, just do something like:
```
select * from daily_table_day_one
minus
select * from mat_view
union all
select * from mat_view
minus
select * from daily_table_day_one
```
as the method with `ORA_ROWSCN` does not always work. The method with triggers works in limited cases, and if it requires creating those triggers on something like 100 tables ... that's ludicrous.
When you find the source of change - for example several columns coming from some tables - then you might consider rewriting the mat\_view query following some DW principles and creating a star schema for the query. This will segment the query and will almost certainly speed it up. Check where the load is coming from: if those expensive joins, scans and so on can't be avoided, maybe breaking the query down into smaller queries will do the trick, if you can materialize them as well; maybe you can set those with refresh fast on commit too.
It all depends on the implementation. If you can give an example of the query, or a part of it, with the explain plan, we can also give you concrete solutions.
cheers
|
When refreshing the result of a view?
|
[
"",
"sql",
"oracle",
"view",
"materialized-views",
""
] |
Is it possible to use unique on multiple columns?
like:
```
user_vote user_id
------------------
1 1
1 2
2 1
```
both unique
This must be possible
But:
```
user_vote user_id
1 2
1 2
```
This must not be possible
|
You can add a unique constraint on the combination of columns:
```
ALTER TABLE my_table
ADD CONSTRAINT my_table_uq UNIQUE (user_vote, user_id)
```
|
# MySQL / SQL Server / Oracle / MS Access:
```
CREATE TABLE uservotetable
(
user_vote int NOT NULL,
user_id int NOT NULL,
CONSTRAINT uservote UNIQUE (user_vote ,user_id)
);
```
and if you have already created your table, then you can use *ALTER*:
```
ALTER TABLE uservotetable
ADD CONSTRAINT uservote UNIQUE (user_vote ,user_id)
```
this can be useful for you [sql\_unique](http://www.w3schools.com/sql/sql_unique.asp)
|
SQL multiple unique columns
|
[
"",
"sql",
"constraints",
"unique",
""
] |
For example I have this statement:
```
my name is Joseph and my father's name is Brian
```
This statement is split by word, like this table:
| ID | word |
| --- | --- |
| 1 | my |
| 2 | name |
| 3 | is |
| 4 | Joseph |
| 5 | and |
| 6 | my |
| 7 | father's |
| 8 | name |
| 9 | is |
| 10 | Brian |
I want to get previous and next word of each word
For example I want to get previous and next word of "name":
| my | name | is |
| --- | --- | --- |
| father's | name | is |
How could I get this result?
|
you didn't specify your DBMS, so the following is ANSI SQL:
```
select prev_word, word, next_word
from (
select id,
lag(word) over (order by id) as prev_word,
word,
lead(word) over (order by id) as next_word
from words
) as t
where word = 'name';
```
SQLFiddle: <http://sqlfiddle.com/#!12/7639e/1>
|
Why did nobody give the simple answer?
```
SELECT LAG(word) OVER ( ORDER BY ID ) AS PreviousWord ,
word ,
LEAD(word) OVER ( ORDER BY ID ) AS NextWord
FROM words;
```
|
Get previous and next row from rows selected with (WHERE) conditions
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"row",
""
] |
I have a table of data that looks like follows:
```
+--------+----------+------------+
| Client | Item No. | Serial No. |
+--------+----------+------------+
| A | 1 | 1 |
| A | 1 | 2 |
| B | 1 | 2 |
| B | 2 | 1 |
| C | 2 | 3 |
| C | 2 | 2 |
| C | 3 | 1 |
| C | 3 | 2 |
| D | 2 | 3 |
| D | 2 | 1 |
| D | 3 | 2 |
| D | 3 | 3 |
| D | 2 | 2 |
| D | 3 | 1 |
+--------+----------+------------+
```
What I'm looking to do is find for each client the highest **Item No.** followed by highest **Serial No.** (in that order). So for the above the output would be:
```
+--------+----------+------------+
| Client | Item No. | Serial No. |
+--------+----------+------------+
| A | 1 | 2 |
| B | 2 | 1 |
| C | 3 | 2 |
| D | 3 | 3 |
+--------+----------+------------+
```
I'm thinking this will require a nested MAX() statement, to first get the **MAX(Item No.)** for each client and for those with multiple then get the **MAX(Serial No.)** What's an efficient way to write this query?
|
Maybe this helps:
```
SELECT t.client,
t.item,
MAX(t.serial) AS serial
FROM(SELECT client,
MAX(item) AS item
FROM your_table
GROUP
BY client
) a
JOIN your_table t
ON a.client = t.client AND a.item = t.item
GROUP
BY t.client, t.item
```
If your DBMS supports window functions, another way would be:
```
SELECT client, item, serial
FROM(SELECT client, item, serial,
ROW_NUMBER() OVER (PARTITION BY client ORDER BY item DESC, serial DESC) rn
FROM your_table
)
WHERE rn = 1
```
|
Postgres has the awesome shortcut
```
SELECT DISTINCT ON (Client) Client, Item, SerialNumber
FROM some_table
ORDER BY Item DESC, SerialNumber DESC -- DESC gets highest
```
Because of the DISTINCT this grabs only one record per Client, and the ORDER BY makes sure it is the ones you want.
I have no idea if any other DBMSs will do this so easily.
|
Finding the Max of another Max query
|
[
"",
"sql",
"max",
""
] |
I'm building my application in a 3-tier architecture. In my `DataAccess` class I have a function for inserting a `Person` into the database.
The `Person` table has up to 81 columns in the database. The user may not insert data into all of these columns initially, so he might fill only 10 or 20 fields.
Is there a way I can make the `Insert` function in the `DataAccess` class accept a dynamic number of parameters, so that it only passes those which have data, instead of being forced to take 81 parameters and try to insert 81 values each time?
|
A very good option when writing sprocs that insert rows into a single table is to use [Table-Valued Parameters](http://msdn.microsoft.com/en-us/library/bb675163%28v=vs.110%29.aspx).
First, you need to create a `table type` that reflects the table into which you want to insert new records, e.g.
```
CREATE TYPE dbo.PersonType AS TABLE
( PersonID int, LastName nvarchar(50), FirstName nvarchar(50), ... etc )
```
Then, using this table type you can write a sproc that accepts table-valued parameters based on that type, e.g.:
```
CREATE PROCEDURE dbo.usp_InsertPerson (@tvpNewPersons dbo.PersonType READONLY)
AS
BEGIN
INSERT INTO Person (PersonID, LastName, FirstName)
SELECT PersonID, LastName, FirstName
FROM @tvpNewPersons
END
```
Once you have this infrastructure in place, then you can just pass as parameter to the `Insert` method of your `DataAccess` class *a single parameter*, i.e. a strongly-type DataTable that reflects the real sql table(btw you can even [manually create](http://www.codeproject.com/Articles/30490/How-to-Manually-Create-a-Typed-DataTable) such a DataTable).
This is how your `Insert` method would look like:
```
public void Insert(PersonDataTable dtPersons)
{
// ... build connection object here with 'using' block
// Create command that executes stored procedure
SqlCommand command = new SqlCommand("usp_InsertPerson", connection);
command.CommandType = System.Data.CommandType.StoredProcedure;
SqlParameter tvpParameter = command.Parameters.AddWithValue("@tvpNewPersons", dtPersons);
tvpParameter.SqlDbType = SqlDbType.Structured;
// now execute command, etc ...
}
```
`PersonDataTable` is the type of the typed data table that you need to create.
|
A quick and dirty solution would be to pass in dictionaries that hold the names and values of the items you want to change:
```
public void Upsert(Dictionary<string, object> KeyValues, Dictionary<string, object> Changes)
{
// insert logic here
}
```
PS: Take note that many ORMs solve this problem for you; Google Entity Framework, for example.
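For illustration, here is a hedged sketch of what that dictionary-driven insert could look like, shown in Python with SQLite. The column whitelist and names are made up; the key point is that only column *names* from a whitelist are interpolated into the SQL, while the values stay parameterized:

```python
import sqlite3

ALLOWED = {"fname", "lname", "email"}  # whitelist of real Person columns

def insert_person(conn, values):
    # Only whitelisted column NAMES are interpolated into the SQL;
    # the values themselves stay parameterized, so no injection risk.
    cols = [c for c in values if c in ALLOWED]
    if not cols:
        raise ValueError("no insertable columns supplied")
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO Person ({', '.join(cols)}) VALUES ({placeholders})"
    conn.execute(sql, [values[c] for c in cols])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (fname TEXT, lname TEXT, email TEXT)")
insert_person(conn, {"fname": "Ada", "email": "ada@example.com"})
row = conn.execute("SELECT fname, lname, email FROM Person").fetchone()
print(row)   # ('Ada', None, 'ada@example.com')
```

Columns the caller omits simply fall back to their defaults (NULL here), which is exactly the "only pass the fields that have data" behavior the question asks for.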
|
Creating a dynamic number of parameters for Insert based on User's input
|
[
"",
"sql",
"asp.net",
"sql-server",
""
] |
I have a hierarchy in SQL Server, with multiple parents, but can't seem to get the result set I need.
This is what i have so far.
```
DECLARE @Table TABLE (ChildId varchar(max), ParentId varchar(max))
INSERT INTO @Table (ChildId,ParentId)
VALUES
('England',NULL),
('Cities',NULL),
('Towns',NULL),
('South West','England'),
('Bristol','South West'),
('Bristol','Cities'),
('Suburb','Bristol'),
('Thornbury','South West'),
('Thornbury','Towns');
WITH CTE (ChildId, ParentId, Level)
AS (
SELECT
ChildId,
ParentID,
0
FROM @Table
WHERE ParentID IS NULL
UNION ALL
SELECT
r.ChildId,
r.ParentId,
ct.Level + 1
FROM @Table r
JOIN CTE ct
ON ct.ChildId = r.ParentId
)
SELECT * FROM CTE order by childId, level
```
Which gives me this result set:
```
ChildId | ParentId | Level
Bristol | Cities | 1
Bristol | South West | 2
Suburb | Bristol | 2
Suburb | Bristol | 3
Cities | NULL | 0
England | NULL | 0
South West | England | 1
Thornbury | Towns | 1
Thornbury | South West | 2
Towns | NULL | 0
```
But I also want grandparents, great-grandparents, and great-great-grandparents (etc):
```
ChildId | ParentId | Level
Bristol | Cities | 1
Bristol | South West | 2
Bristol | England | <------------------------
Suburb | South West | <------------------------
Suburb | England | <------------------------
Suburb | Cities | <------------------------
```
etc.
|
What you are trying to do resembles, at least to some extent, Ranganathan's classification. In this case you have to go up the hierarchy, not down:
```
with cte as (
select t.ChildId, t.ParentId, 0 as [Lvl]
from @Table t
where t.ParentId is not null
union all
select c.ChildId, t.ParentId, c.Lvl + 1
from @Table t
inner join cte c on c.ParentId = t.ChildId
where t.ParentId is not null
)
select * from cte c order by c.ChildId, c.Lvl, c.ParentId;
```
EDIT: updated the `WHERE` clause in the recursive part of the CTE. Looks like it was a leftover from initial tries, which I forgot to think through.
|
What about using a recursive table-valued function instead of a CTE:
```
CREATE FUNCTION tvf_GetParents
(
@childID VARCHAR(MAX),
@level INT
)
RETURNS
@output TABLE
(
ancestor VARCHAR(MAX),
level INT
)
AS
BEGIN
DECLARE @parentIDs TABLE (pID VARCHAR(MAX))
-- Get parent of child and add it to output
IF EXISTS (SELECT 1 FROM HTable WHERE ChildId = @childID AND ParentId IS NOT NULL)
BEGIN
INSERT @parentIDs
SELECT ParentId FROM HTable WHERE ChildId = @childID
INSERT INTO @output (ancestor, level)
SELECT pID, @level FROM @parentIDs
END
ELSE
RETURN
DECLARE @pID VARCHAR(MAX) = 0
-- Iterate over all parents (cursorless loop)
WHILE (1 = 1)
BEGIN
-- Get next ParentId
SELECT TOP 1 @pID = pID
FROM @parentIDs
WHERE pID > @pID
ORDER BY pID
-- Exit loop if no more parents
IF @@ROWCOUNT = 0 BREAK;
-- call function recursively so as to add to output
-- the rest of the ancestors (if any)
INSERT INTO @output (ancestor, level)
SELECT ancestor, level FROM tvf_GetParents(@pID, @level + 1)
END
RETURN
END
GO
```
Using the above function you can easily get all child - ancestor pairs:
```
SELECT DISTINCT ChildId, ancestor, level
FROM HTable h
OUTER APPLY tvf_GetParents(h.ChildId, 0) AS p
ORDER BY ChildId, Level
```
Output:
```
ChildId ancestor level
------------------------------
Bristol Cities 0
Bristol South West 0
Bristol England 1
Cities NULL NULL
England NULL NULL
South West England 0
Suburb Bristol 0
Suburb Cities 1
Suburb South West 1
Suburb England 2
Thornbury South West 0
Thornbury Towns 0
Thornbury England 1
Towns NULL NULL
```
Please note that 'Level' has a different meaning here: level NULL denotes a parent-less child, level 0 denotes a child-parent record, level 1 denotes a child-grandparent record, etc.
Please also note that there is a limitation as far as nesting level of recursive functions in sql server is concerned. I think it is 32. If your tree depth goes beyond that range then the solution I propose will not work.
|
Show all Children and Grandchildren in SQL Hierarchy CTE
|
[
"",
"sql",
"sql-server",
"t-sql",
"hierarchy",
"common-table-expression",
""
] |
My SQL table called `categories` has the following structure:
```
CREATE TABLE categories(
id int NOT NULL AUTO_INCREMENT,
parent_id int,
name varchar(50),
PRIMARY KEY(id)
);
```
I would like to know if it's possible to JOIN the same table and present the `parent_id` name. I can do this in PHP code, but for performance reasons I would like to retrieve it as a SQL query.
**[See example at SQLFiddle](http://sqlfiddle.com/#!2/b0ee23/4)**
I've managed to JOIN the table, but somehow the values aren't right. For example, the result of SQLFiddle retrieves:
```
ID | Name | Parent Category Name
-----------------------------------
1 Meats Steaks
```
Which is wrong, it should be:
```
ID | Name | Parent Category Name
-----------------------------------
3 Steaks Meats
```
|
Use `INNER JOIN` instead of `LEFT JOIN` and you have to join on the `id` equal to the `parent_id` of the other table like this:
```
SELECT c.id, c.name, parents.name AS `Parent Category Name`
FROM categories AS c
INNER JOIN categories AS parents ON parents.id = c.parent_id
ORDER BY c.name ASC;
```
* [**SQL Fiddle Demo**](http://sqlfiddle.com/#!2/b0ee23/19)
This will give you:
```
| ID | NAME | PARENT CATEGORY NAME |
|----|--------|----------------------|
| 3 | Steaks | Meats |
```
---
If you want to include those categories with no parent, use `LEFT JOIN` instead of `INNER JOIN`.
|
Of course you can do a self join. For the syntax to work you need to use aliases:
```
select c.id, c.name, cp.name as parent_category_name
from categories c left join
categories cp
on c.parent_id = cp.id;
```
In your query, you just have the join in the wrong direction. I would suggest that you use more meaningful table aliases than `category` and `cat`. In the above `cp` is intended to be the row for the parent.
Your `on` clause should be:
```
SELECT category.id, category.name, cat.name AS `Parent Category Name`
FROM categories category LEFT JOIN
categories cat
ON cat.id = category.parent_id
ORDER BY category.name ASC
```
|
MySQL category with parent_id - SELF Join
|
[
"",
"mysql",
"sql",
"self-join",
""
] |
I have a table with the following structure:
```
ID TIME_UNIX STATUS TIME_DDMMYY_HHMMSS
1 1416234308 1 (dateadd(second,[time],'19700101'))
1 1416234313 0 (dateadd(second,[time],'19700101'))
1 1416234319 1 (dateadd(second,[time],'19700101'))
1 1416234337 0 (dateadd(second,[time],'19700101'))
2 1416319268 1 (dateadd(second,[time],'19700101'))
2 1416319280 0 (dateadd(second,[time],'19700101'))
2 1416319631 1 (dateadd(second,[time],'19700101'))
```
Value Descriptions
```
ID is the identifier of the object who does the insert
TIME_UNIX is when it happened
Status can be 1 or 0 (enable/disable)
TIME_DDMMYY_HHMMSS is just a friendly view of TIME_UNIX column
```
So, what I'm trying to do is generate a view that shows me, for each object, the min and max times of each event - meaning the first enable and the last disable per day, per object:
```
ID FIRST_ENABLE_TIME LAST_DISABLE_TIME DDMMYYY
1 1416234308 1416234337 17Nov2014
2 1416319268 1416319280 17Nov2014
```
I have been trying many inner joins and views, and so far no luck...
Can anybody help me with this?
Thanks in advance
|
```
SELECT ID,
MIN(CASE WHEN STATUS = 1 THEN TIME_UNIX ELSE NULL END) AS FIRST_ENABLE_TIME,
MAX(CASE WHEN STATUS = 0 THEN TIME_UNIX ELSE NULL END) AS LAST_DISABLE_TIME,
TIME_DDMMYY_HHMMSS AS DDMMYYY
FROM MyTable
GROUP BY ID, TIME_DDMMYY_HHMMSS
```
|
```
SELECT ID,
MIN(CASE WHEN STATUS = 1 THEN TIME_UNIX ELSE NULL END) AS FIRST_ENABLE_TIME,
MAX(CASE WHEN STATUS = 0 THEN TIME_UNIX ELSE NULL END) AS LAST_DISABLE_TIME,
convert(varchar(10),TIME_DDMMYY_HHMMSS,103) AS DDMMYYY FROM
FROM MyTable
GROUP BY ID, convert(varchar(10),TIME_DDMMYY_HHMMSS,103)
```
|
Get min and max value from a multiple rows events
|
[
"",
"sql",
""
] |
What does `(rowid,0)` mean in the following query?
```
select * from emp
WHERE (ROWID,0) in (
select rowid, mod(rownum,2) from emp
);
```
I don't get the line `WHERE (ROWID,0)`.
What is it?
Thanks in advance
|
`IN` clause in Oracle SQL can support column groups. You can do things like this:
```
select ...
from tab1
where (tab1.col1, tab1.col2) in (
select tab2.refcol1, tab2.refcol2
from tab2
)
```
That can be useful in many cases.
In your particular case, the subquery uses `mod(rownum,2)` as the second expression. Since there is no order by, `rownum` will be assigned in whichever order the database retrieves the rows; that might be a full table scan or a fast full index scan.
Then, by using `mod`, every other row in the subquery gets the value 0, and the rest get the value 1.
The `IN` clause then filters on the second value in the subquery being equal to 0. The end result is that this query retrieves *half* of your employees. Which half depends on which access path the optimizer chooses.
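Oracle's `ROWNUM` trick can't be reproduced verbatim elsewhere, but a rough SQLite analogue (using `rowid`, which here runs 1, 2, 3, ... in insertion order) shows the "every other row" effect:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT)")
conn.executemany("INSERT INTO emp VALUES (?)", [("a",), ("b",), ("c",), ("d",)])

# rowid runs 1, 2, 3, ... in insertion order here, so keeping
# rowid % 2 = 0 returns every other employee.
half = [r[0] for r in conn.execute(
    "SELECT name FROM emp WHERE rowid % 2 = 0 ORDER BY rowid")]
print(half)   # ['b', 'd']
```

Unlike Oracle's `ROWNUM`, `rowid` here is a stable table column, so the same half is returned on every run; in the Oracle query the split depends on the access path chosen.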
|
Not sure what dialect of SQL you're using, but it appears that since the subquery in the IN clause has two columns in the select list, the (ROWID,0) indicates which columns align with the subquery. I have never seen multiple columns in an IN statement's select list before.
|
Confused syntax in Where clause
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a table like this:
```
col1 col2 col3 col4 col5
test1 1 13 15 1
test2 1 13 15 4
test3 2 7 3 5
test4 3 11 14 18
test5 3 11 14 8
test6 3 11 14 11
```
I want to select the `col1,col2,col3,col4` data for rows where the (`col2`,`col3`,`col4`) combination is duplicated.
For example, the result must be:
```
col1 col2 col3 col4
test1 1 13 15
test2 1 13 15
test4 3 11 14
test5 3 11 14
test6 3 11 14
```
How can I do it?
|
Presuming SQL-Server >= 2005 you can use [`COUNT(*) OVER`](http://msdn.microsoft.com/en-us/library/ms189461.aspx):
```
WITH CTE AS
(
SELECT col1, col2, col3, col4, cnt = COUNT(*) OVER (PARTITION BY col2, col3, col4)
FROM dbo.TableName t
)
SELECT col1, col2, col3, col4
FROM CTE WHERE cnt > 1
```
`Demo`
|
If I understand correctly:
```
select col1, col2, col3, col4
from table t
where exists (select 1
              from table t2
              where t2.col2 = t.col2 and
                    t2.col3 = t.col3 and
                    t2.col4 = t.col4 and
                    t2.col1 <> t.col1);
```
|
How to select duplicate columns data from table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Consider three parameters:
```
@Y=2014
@M=11
@D=24
```
I want to have a function in SQL Server which gets three numbers and returns one date as the result.
|
You can use SQL Server 2012 [DATEFROMPARTS](http://msdn.microsoft.com/en-us/library/hh213228.aspx) function.
```
SELECT DATEFROMPARTS(@year, @month, @day)
```
For versions below 2012, I'd use:
```
SELECT CONVERT(DATETIME, STR(@year * 10000 + @month * 100 + @day))
```
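The arithmetic behind the pre-2012 `CONVERT` trick is easy to sanity-check outside SQL. This small Python snippet (illustrative only) packs the three parts into a `yyyymmdd` number and parses it back:

```python
from datetime import datetime

y, m, d = 2014, 11, 24
# Same arithmetic the CONVERT trick uses: pack year/month/day into yyyymmdd
packed = y * 10000 + m * 100 + d
print(packed)                    # 20141124
dt = datetime.strptime(str(packed), "%Y%m%d")
print(dt.date().isoformat())     # 2014-11-24
```

Multiplying by 10000 and 100 shifts the year and month into fixed decimal positions, which is why the resulting string is always a valid `yyyymmdd` literal.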
|
You could do:
```
select cast(cast( (@y * 10000 + @m * 100 + @d) as varchar(255)) as date)
```
But `datefromparts()` is best if you are using SQL Server 2012+.
|
Converting three int parameter to one date
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"function",
""
] |
I need to check if a partial name matches a full name. For example:
```
Partial_Name | Full_Name
--------------------------------------
John,Smith | Smith William John
Eglid,Timothy | Timothy M Eglid
```
I have no clue how to approach this type of matching.
Another thing is that the first and last names may come in a different order, which makes it harder.
I could do something like this, but this only works if names are in the same order and 100% match
`decode(LOWER(REGEXP_REPLACE(Partial_Name,'[^a-zA-Z'']','')), LOWER(REGEXP_REPLACE(Full_Name,'[^a-zA-Z'']','')), 'Same', 'Different')`
|
This is what I ended up doing... Not sure if this is the best approach.
I split the partial name by the comma and check whether the first part is present in the full name and the second part is present in the full name. If both are present, it's a match.
```
CASE
WHEN
instr(trim(lower(Full_Name)),
trim(lower(REGEXP_SUBSTR(Partial_Name, '[^,]+', 1, 1)))) > 0
AND
instr(trim(lower(Full_Name)),
trim(lower(REGEXP_SUBSTR(Partial_Name, '[^,]+', 1, 2)))) > 0
THEN 'Y'
ELSE 'N'
END AS MATCHING_NAMES
```
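The same check is straightforward to prototype outside SQL. Here is a small Python sketch of the split-and-contains logic; note it tests whole words, whereas `instr` above does substring matching, so results can differ for names that are substrings of other names:

```python
def names_match(partial: str, full: str) -> str:
    # Split "First,Last" and check that both tokens appear among the words
    # of the full name, ignoring case and order (mirrors the CASE above).
    tokens = [t.strip().lower() for t in partial.split(",")]
    words = full.lower().split()
    return "Y" if all(t in words for t in tokens) else "N"

print(names_match("John,Smith", "Smith William John"))   # Y
print(names_match("Eglid,Timothy", "Timothy M Eglid"))   # Y
print(names_match("Jane,Smith", "Smith William John"))   # N
```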
|
You could use this pattern on the text provided; it works in most regex engines:
```
([^ ,]+),([^ ,]+)(?=.*\b\1\b)(?=.*\b\2\b)
```
[Demo](http://regex101.com/r/yW4aZ3/125)
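A quick way to sanity-check that pattern is Python's `re` module; note the partial and full names must appear in the same string, since the lookaheads search the remaining text for the back-referenced tokens:

```python
import re

pattern = re.compile(r"([^ ,]+),([^ ,]+)(?=.*\b\1\b)(?=.*\b\2\b)")

print(bool(pattern.search("John,Smith | Smith William John")))   # True
print(bool(pattern.search("Eglid,Timothy | Timothy M Eglid")))   # True
print(bool(pattern.search("Jane,Smith | Timothy M Eglid")))      # False
```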
|
Check if string variations exists in another string
|
[
"",
"sql",
"regex",
"oracle",
""
] |
I have 2 tables in my database, one tblNews and another tblNewsComments.
I want to select the 10 records from tblNews that have the most comments.
I used this query but it gives an error:
```
SELECT tblNews.id,
tblNews.newsTitle,
tblNews.createdate,
tblNews.viewcount,
COUNT(tblNewsComments.id) AS comcounts
FROM tblNews
INNER JOIN tblNewsComments ON tblNews.id = tblNewsComments.newsID
GROUP BY tblNews.id
```
|
Try to replace
```
GROUP BY tblNews.id
```
With
```
GROUP BY tblNews.id,
tblNews.newsTitle,
tblNews.createdate,
tblNews.viewcount
```
All the expressions in the SELECT list should be in the GROUP BY or inside an aggregate function.
|
I've always found this to be an annoyance in SQL. There's nothing logically wrong with your query; you're grouping by news item and selecting various attributes of the news item, and then selecting the count of comments linked to the news item. That makes sense.
The error arises because the SQL engine isn't smart enough to realize that all the columns in tblNews are at the same data context, and that grouping by tblNews.id effectively guarantees that there will only be one newsTitle, createdate, and viewcount for each group. It *should* be able to realize that, I think, and carry out the query. But it doesn't do that; the only column it considers to be unique in the group data context is the exact column that you grouped by, id.
One solution, as Multisync just posted, is to group by ALL the columns you want to include in the select clause. I don't think this is the best solution, however, as you shouldn't have to specify all those columns in the group by clause, and that would force you to keep adding to that list whenever you want to add a new TblNews column to the select clause.
The solution I've always used is to wrap the column in an ineffectual aggregate function in the select clause; I always use `max()`:
```
select
tblNews.id,
max(tblNews.newsTitle),
max(tblNews.createdate),
max(tblNews.viewcount),
count(tblNewsComments.id) comcounts
from
tblNews
inner join tblNewsComments on tblNews.id=tblNewsComments.newsID
group by
tblNews.id
;
```
|
How To Get Count of the records in sql
|
[
"",
"sql",
""
] |
I have a table CLIENT(id, name, date) in MSSQL. How can I return all the ids per day? Say, for instance, I have a poller that polls once a day, and I want to give it a script to run. What script will return all the ids created today?
|
Assuming that `date` is a date with no time component, the simplest way to get the records on the current date is:
```
select id
from client
where date = cast(getdate() as date);
```
If there is a time component, then use:
```
where date >= cast(getdate() as date) and date < cast(getdate() + 1 as date)
```
Notice that the functions (`cast()`) are not on the columns but on `getdate()`. This allows the query optimizer to use an index on `date`, if appropriate.
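The same half-open-range idea can be demonstrated with SQLite from Python. ISO-8601 strings compare correctly here; the table and column names are illustrative:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (id INTEGER, created TEXT)")  # ISO datetime strings
conn.executemany("INSERT INTO client VALUES (?, ?)", [
    (1, "2014-11-24 09:30:00"),
    (2, "2014-11-24 17:05:00"),
    (3, "2014-11-23 23:59:59"),
])

today = date(2014, 11, 24)            # stand-in for "today"
tomorrow = today + timedelta(days=1)

# Half-open range keeps the column bare, so an index on `created` stays usable
ids = [r[0] for r in conn.execute(
    "SELECT id FROM client WHERE created >= ? AND created < ? ORDER BY id",
    (today.isoformat(), tomorrow.isoformat()))]
print(ids)   # [1, 2]
```

The row from the previous day falls just outside the `>= today AND < tomorrow` window, which is exactly the behavior the date-only comparison above gives.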
|
Use the DATEDIFF function to get the difference between the datetime in the db and the current date:
```
SELECT ID FROM Client WHERE DATEDIFF(day, Date, GETDATE()) = 0;
```
|
How to return all id's per day in mssql
|
[
"",
"sql",
"sql-server",
""
] |
How do I add an offset in a "select" query in Oracle 11g?
I only know how to add a limit, e.g. `rownum <= 5`.
This question is not a duplicate; I already checked the other questions and they are not related to mine.
So, how do I add an offset in Oracle 11g?
|
You can do it easily in `12c` by specifying `OFFSET`.
In `12c`,
```
SELECT val
FROM table
ORDER BY val
OFFSET 4 ROWS FETCH NEXT 4 ROWS ONLY;
```
To do the same in `11g` and prior, you need to use `ROWNUM` twice, in an inner query and an outer query respectively.
The same query in `11g`,
```
SELECT val
FROM (SELECT val, rownum AS rnum
FROM (SELECT val
FROM table
ORDER BY val)
WHERE rownum <= 8)
WHERE rnum > 4;
```
Here `OFFSET` is 4.
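Outside Oracle, most engines (SQLite, MySQL, PostgreSQL) express the same window with `LIMIT`/`OFFSET`; a quick check of which rows the 11g query above returns (skip 4, take 4):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 11)])

# Same window the nested-ROWNUM query selects: skip the first 4 ordered
# rows, then take the next 4.
rows = [r[0] for r in conn.execute(
    "SELECT val FROM t ORDER BY val LIMIT 4 OFFSET 4")]
print(rows)   # [5, 6, 7, 8]
```

That matches the Oracle version: the inner `rownum <= 8` caps the window at 8 rows and the outer `rnum > 4` discards the first 4.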
|
You can use ROW\_NUMBER function for that.
Maybe this helps:
```
SELECT *
FROM(SELECT t.*,
ROW_NUMBER() OVER (ORDER BY ...) rn -- whatever ordering you want
FROM your_table t
)
WHERE rn >= ... -- your offset
```
Hope that helps
|
How to add offset in a "select" query in Oracle 11g?
|
[
"",
"sql",
"oracle",
"oracle11g",
"pagination",
"rownum",
""
] |
I'm looking for a text editor, or maybe a plugin for Notepad++, to help me when building SQL insert statements.
```
INSERT INTO table_name (column1,column2,column3)
VALUES (value1,value2,value3);
```
I would like that when I click on `column2`, `value2` is highlighted, and vice versa.
Do you know something that can do this?
|
Several good options: <http://www.phpmyadmin.net/home_page/index.php> (PHP + Apache + MySQL)
MySQL Workbench: <http://www.mysql.com/products/workbench/> (MySQL, PgSQL, others)
<https://www.jetbrains.com/dbe/> (a more general solution)
|
Maybe you can do this with SublimeText
Cheers.
|
SQL insert statement text editor
|
[
"",
"sql",
"insert",
"editor",
""
] |
I really need to know if I can somehow count 4 different values in a column and store them in 4 different variables. Those variables will be used later in the procedure.
This is the code I have, but I don't want to run the SELECT statement 4 times - preferably only once.
Is there any way to solve it?
```
create or replace PROCEDURE FCountPrizes (dData IN DATE)
IS
szostki NUMBER;
piatki NUMBER;
czworki NUMBER;
trojki NUMBER;
BEGIN
SELECT COUNT(*) into szostki FROM Losy WHERE dData=DataLosowania AND FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) = 6;
SELECT COUNT(*) into piatki FROM Losy WHERE dData=DataLosowania AND FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) = 5;
SELECT COUNT(*) into czworki FROM Losy WHERE dData=DataLosowania AND FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) = 4;
SELECT COUNT(*) into trojki FROM Losy WHERE dData=DataLosowania AND FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) = 3;
END;
```
|
```
SELECT sum(case FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) when 6 then 1 else 0 end),
sum(case FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) when 5 then 1 else 0 end),
sum(case FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) when 4 then 1 else 0 end),
sum(case FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) when 3 then 1 else 0 end)
into szostki, piatki, czworki, trojki
FROM Losy WHERE dData=DataLosowania;
```
According to the comments, wrap each sum with nvl:
```
NVL(sum(case FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze)
when 3 then 1 else 0 end),
0)
```
if you don't expect NULLs in your local variables.
Another option:
```
SELECT max(case func_res when 6 then n else null end),
max(case func_res when 5 then n else null end),
max(case func_res when 4 then n else null end),
max(case func_res when 3 then n else null end)
into szostki, piatki, czworki, trojki
FROM (
SELECT FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze) as func_res, count(*) as n
FROM Losy WHERE dData=DataLosowania
GROUP BY FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze)
);
```
|
```
create or replace PROCEDURE FCountPrizes (dData IN DATE)
IS
   szostki NUMBER;
   piatki  NUMBER;
   czworki NUMBER;
   trojki  NUMBER;
BEGIN
   SELECT COUNT(DECODE(FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze),6,1)),
          COUNT(DECODE(FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze),5,1)),
          COUNT(DECODE(FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze),4,1)),
          COUNT(DECODE(FCountGuessed(dData, LiczbyMniejsze, LiczbyWieksze),3,1))
   INTO szostki, piatki, czworki, trojki
   FROM Losy
   WHERE dData=DataLosowania;
END FCountPrizes;
/
```
|
Oracle / SQL Counting Different values from one Column and storing it in a variable
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I would like to get the latest record for each server.
Here is some example data:
```
TimeGenerated SourceName ComputerName Message
2014-11-22 21:48:30 Windows Update Agent Server1 Update Failed
2014-11-22 21:42:30 Windows Update Agent Server2 Update Failed
2014-11-22 21:45:30 Windows Update Agent Server2 Update Failed
2014-11-22 21:43:30 Windows Update Agent Server1 Update Failed
```
Desired Output:
```
TimeGenerated SourceName ComputerName Message
2014-11-22 21:48:30 Windows Update Agent Server1 Update Failed
2014-11-22 21:45:30 Windows Update Agent Server2 Update Failed
```
I tried:
```
SELECT * FROM TABLE
GROUP BY ComputerName
ORDER BY TimeGenerated ASC
```
But that outputs inconsistent results and does not give me the latest in most cases.
I also tried some sub queries, but failed miserably.
|
```
SELECT *
FROM yourtable
INNER JOIN (
SELECT MAX(timeGenerated) as maxtime, ComputerName
FROM yourtable
GROUP BY ComputerName
) AS latest_record ON (yourtable.timeGenerated = maxtime)
AND (latest_record.ComputerName = yourtable.ComputerName)
```
Inner query gets the latest timestamp for every computer name. The outer query then joins against that query result to fetch the rest of the fields from the table, based on the time/computername the inner query finds. If you have two events logged with identical max times, you'd get two records for that computername.
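If the server supports window functions (MySQL 8+, SQL Server 2005+), the same groupwise maximum can be expressed without a self-join. A sketch against the `yourtable` placeholder from above; unlike the join, ties on the max timestamp are collapsed to a single arbitrary row:

```sql
-- One row per ComputerName, the one with the latest TimeGenerated
SELECT TimeGenerated, SourceName, ComputerName, Message
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY ComputerName
                                ORDER BY TimeGenerated DESC) AS rn
      FROM yourtable t) ranked
WHERE rn = 1;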
|
If you only use a GROUP BY, you can only display the columns which are part of that grouping (or aggregates of other columns). If you need more data displayed, you need to work with a subquery. For example, if you want to "select \*", you can't just use a GROUP BY and no subquery. So, it depends what you want to display.
|
get latest record for each ID
|
[
"",
"mysql",
"sql",
"groupwise-maximum",
""
] |
I have made some SQL code to (what I thought) would update 1 field for only some records. The code I made was:
```
UPDATE Name
SET name.STATUS = 'a'
WHERE EXISTS
(SELECT Name.ID,
Name.MEMBER_TYPE,
Name.CATEGORY,
Name.STATUS,
Name.COMPANY_SORT,
Name.FULL_NAME,
Name.TITLE,
Name.FUNCTIONAL_TITLE,
Activity.ACTIVITY_TYPE,
Activity.PRODUCT_CODE,
Activity.TRANSACTION_DATE
FROM Name
INNER JOIN Activity ON Name.ID = Activity.ID
WHERE ACTIVITY_TYPE = 'ELECTIONS'
AND PRODUCT_CODE LIKE '%new%'
AND TRANSACTION_DATE LIKE '%2014%'
AND name.MEMBER_TYPE IN ('mcm',
'MCNM'))
```
but it updated all the records in the name name to the status of `a`, not just the one in the where exist statement ... what have I screwed up?
|
Try something like this:
```
UPDATE Name
SET name.STATUS = 'a'
WHERE EXISTS
(SELECT *
FROM Activity
WHERE Name.ID = Activity.ID
AND ACTIVITY_TYPE = 'ELECTIONS'
AND PRODUCT_CODE LIKE '%new%'
AND TRANSACTION_DATE LIKE '%2014%'
AND name.MEMBER_TYPE IN ('mcm', 'MCNM')
)
```
Some explanation: when you do `FROM Name` in the inner query, you mask the outer table `Name` and just query the whole table, so the inner query is the same for every row. You need to reuse `Name` from the outer query so that the inner query correlates with the outer one. Sorry, my code might need some polishing; I don't even know which DB you are using.
|
The issue is that your reference to `Name` in the update line and your reference to `Name` in the sub-query are treated as two separate, uncorrelated instances, so there is no relation between the update statement and the sub-query.
To get around it (in SQL Server), you can alias the table in a `FROM` clause and update through that alias:
```
UPDATE nm
SET nm.STATUS = 'a'
FROM Name nm
INNER JOIN Activity act ON nm.ID = act.ID
WHERE ACTIVITY_TYPE = 'ELECTIONS'
  AND PRODUCT_CODE LIKE '%new%'
  AND TRANSACTION_DATE LIKE '%2014%'
  AND nm.MEMBER_TYPE IN ('mcm', 'MCNM')
```
|
Why did this UPDATE change all rows instead of just one row?
|
[
"",
"sql",
""
] |
Given the following data:
```
EMPID NAME SALARY DID
1 kevin 32000 2
2 joan 42000 1
3 brian 37000 3
4 larry 82000 5
5 harry 92000 4
6 peter 45000 2
7 peter 68000 3
8 smith 39000 4
9 chen 71000 1
10 kim 46000 5
11 smith 46000 1
```
List the name of employees who do not share the same name.
I'm not sure if I have to join the table with itself (or do something else).
This is what I wrote so far:
```
SELECT distinct e.name
FROM employee e, employee ee
WHERE e.name <> ee.name
```
|
No need for joining:
```
SELECT name
FROM employee
GROUP BY name
HAVING COUNT(name) = 1
```
count how many of each name there are, then return only those where there's only one name. e.g. peter has a count of 2 and won't show up, while kim has a count of 1 and WILL show up.
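The self-join the question started from can also be made to work with `NOT EXISTS` instead of `GROUP BY`. A sketch, assuming `EMPID` is unique; it keeps a row only when no *other* row shares its name:

```sql
SELECT e.name
FROM employee e
WHERE NOT EXISTS (SELECT 1
                  FROM employee ee
                  WHERE ee.name = e.name
                    AND ee.EMPID <> e.EMPID);
```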
|
With an inner join against a subquery that counts the number of records for each name, you can filter the results where the count = 1:
```
SELECT
T.EMPID,
T.NAME,
T.SALARY,
T.DID
FROM
employee AS T
INNER JOIN (SELECT COUNT(*) AS CNT , NAME FROM employee GROUP BY NAME) AS A ON T.NAME = A.NAME
WHERE
CNT = 1
```
|
Query: employees who do not have the same name
|
[
"",
"sql",
""
] |
During the creation of a new Tablespace in Oracle Database, the user has to enter the DBF file name that he (or she) want to use. The DBF file is then created in a specific location.
The user may also specify a path in which the DBF file should be created.
I need to find a way to get the default location of the DBF file.
I know how to do it in MS Sql by using a SQL query:
```
select substring(physical_name, 1, charindex(N'master.mdf', lower(physical_name)) - 1) from master.sys.master_files where database_id = 1 and file_id = 1;
```
**But I have no idea about how to do it in Oracle.** I've tried several things:
* Ran a query on `all_directories` - didn't find any information there
* Looked at the `v$datafile` view - realized that this view and the others are accesible to database administrators only
There are also several limitations:
* The Oracle Database may be installed on another machine with a different operating system.
* My application may connect to the database with a user who is not an
admin.
* It should be done preferably with a SQL query.
Any help is much appreciated.
|
`DB_CREATE_FILE_DEST` specifies the default location for Oracle-managed datafiles (see [its entry](https://docs.oracle.com/cd/E11882_01/server.112/e40402/initparams053.htm#REFRN10034) in the Database Reference).
You can retrieve its value with the following SQL query:
```
select value from v$parameter where name = 'db_create_file_dest'
```
To access the `v$parameter` view a user needs at least the `SELECT_CATALOG_ROLE` role.
|
Use this sentence with system user:
```
Select * from dba_data_files ;
```
[](https://i.stack.imgur.com/ms2R0.png)
|
How to find the default location in which Oracle DBF files are created?
|
[
"",
"sql",
"oracle",
"dbf",
""
] |
Table - BASECASE(CaseNumber, PatientID)
```
CaseNumber------PatientID
S100---------------P201
S101---------------P201
S102---------------P200
S103---------------P199
S104---------------P201
```
2nd table
```
Table - CHECKUP(CheckupNumber, CaseNumber)
CheckupNumber------CaseNumber
C301-------------------S100
C302-------------------S100
C303-------------------S101
C304-------------------S102
C305-------------------S103
```
SQL Code
```
SELECT CaseNumber FROM BASECASE
& COUNT(CaseNumber)FROM CHECKUP, WHERE PatientID='P201'
```
If the CaseNumber is not present in CHECKUP, COUNT(CaseNumber) (which would otherwise be null) should return the value 0.
Expected result.
```
CaseNumber-----No.Of Checkups
S100------------------2
S101------------------1
S104------------------0
```
How to sort out this problem?
|
you need to use `left join` and `group by`
```
select B.caseNumber, ISNULL(count(C.checkupNumber) ,0)
from BaseTable B
left join Checkup C
on B.caseNumber = C.caseNumber
where B.patientId ='P201'
group by B.caseNumber
```
|
**Join with Group BY**
```
select BC.Casenumber,isnull(COUNT(BC.Casenumber),0) CNT from BaseCase BC
left
join CheckUp Ch
on BC.CasetNumber=Ch.CaseNumber
group by BC.Casenumber
```
|
SQL Server : Select from two tables with default value 0
|
[
"",
"sql",
"sql-server",
"select",
"default-value",
""
] |
I've got a problem with some query. What I need to do is write out all the orders from the customer with the highest number of orders (from one table).
So I got this (it works and tell me which customer have the highest number of orders):
```
SELECT custid, COUNT(*) as "Number of orders"
FROM sales.orders
GROUP BY custid
ORDER BY COUNT(*) DESC
```
And then I need to combine this with another query that gives me all "ORDERID" from this customers orders. I try different approaches, but nothings works fine. Finally I've got something like this:
```
SELECT custid, orderid FROM sales.orders
WHERE EXISTS
(
SELECT custid, COUNT(*) as "Number of orders"
FROM sales.orders
GROUP BY custid
)
ORDER BY COUNT(*) DESC
```
With error:
> "'SALES.ORDERS.custid' is invalid in the select list because it is not
> contained in either an aggregate function or the GROUP BY clause."
How can I use subqueries in this case to make it work? Thanks for your time and answers!
|
try
```
SELECT custid,
orderid
FROM sales.orders
WHERE custid = (SELECT TOP 1 custid
FROM sales.orders
GROUP BY custid
ORDER BY Count(*) DESC)
```
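One caveat: if several customers tie for the highest number of orders, `TOP 1` picks an arbitrary one of them. In SQL Server, `TOP 1 WITH TIES` keeps all tied customers; a sketch:

```sql
SELECT custid, orderid
FROM sales.orders
WHERE custid IN (SELECT TOP 1 WITH TIES custid
                 FROM sales.orders
                 GROUP BY custid
                 ORDER BY COUNT(*) DESC);
```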
|
```
SELECT custid, orderid FROM sales.orders
WHERE custid IN(
SELECT TOP 1 custid
FROM sales.orders
GROUP BY custid
ORDER BY COUNT(*) DESC)
```
This should work (for MySQL), here a SQL fiddle:
<http://sqlfiddle.com/#!2/d580a6/1>
|
using subqueries on example (find all orders from the customer with condition)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"subquery",
""
] |
I have a table with currency exchange rates:
```
CREATE TABLE ExchangeRates
(
ID int IDENTITY,
SellingCurrency nvarchar(20),
BuyingCurrency nvarchar(20),
Rate float,
CONSTRAINT PK__ExchangeRates__ID PRIMARY KEY (ID)
)
```
For example, table contains this data:
```
INSERT INTO ExchangeRates
VALUES ('USD', 'RUB', 1.2),
('RUB', 'EUR', 0.5),
('SEK', 'RUB', 1.3)
```
I need to write a query which should return an exchange rate of two currencies even if there is no row in table with this two currencies (using a chain of exchanges).
How can I do it?
|
This problem is an excellent fit for solutions from the [graph theory](https://en.wikipedia.org/wiki/Graph_theory) field (and graph databases like [Neo4j](http://neo4j.com/), for example), but modelling a graph in a relational database isn't that hard, and implementing a path-finding algorithm like BFS/DFS or Dijkstra (for the shortest path) is doable too. That could be a viable solution given a small enough data set (which an exchange rate table would be), although given the iterative nature of those algorithms I'm not sure they would scale that well (it should be easy enough to implement the algorithm in a CLR proc for better performance, though).
Anyway, I like graph theory and found this problem interesting and went looking for, and found, a t-sql stored procedure implementation of [Dijkstras algorithm](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) (among several others) [here](http://hansolav.net/sql/graphs.html), and played around with adapting it to your data (with a slight, and unnecessary change to the table structure - I put the currencies in a separate table to not have to modify the procedure too much) and got it working (the code isn't that hard to understand if you are familiar with how [Dijkstras algorithm](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) works).
You can look at the implementation with examples in [this SQL Fiddle](http://www.sqlfiddle.com/#!6/3a722/1).
The result from the example runs (five separate executions) :
```
| STARTNODE | ID | NAME | DISTANCE | PATH | NAMEPATH |
|-----------|----|------|----------|-------|-------------|
| USD | 2 | RUB | 1.2 | 1,2 | USD,RUB |
|-----------|----|------|----------|-------|-------------|
| USD | 3 | EUR | 1.7 | 1,2,3 | USD,RUB,EUR |
|-----------|----|------|----------|-------|-------------|
| RUB | 3 | EUR | 0.5 | 2,3 | RUB,EUR |
|-----------|----|------|----------|-------|-------------|
| SEK | 2 | RUB | 1.3 | 4,2 | SEK,RUB |
|-----------|----|------|----------|-------|-------------|
| SEK | 3 | EUR | 1.8 | 4,2,3 | SEK,RUB,EUR |
```
The test data uses these id numbers for currencies:
```
1 = USD
2 = RUB
3 = EUR
4 = SEK
```
Credit to the [author of the original algorithm](http://hansolav.net/sql/graphs.html)
On a side note, it's worth considering that even though it's clearly possible to use a relational database in this way, it's probably not a good idea; there are much better-suited solutions for this.
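When any conversion path (not necessarily the shortest or cheapest) is acceptable, a plain recursive CTE avoids the stored-procedure machinery entirely. This is only a sketch (SQL Server syntax) against the question's `ExchangeRates` table; the `Path` column is an ad-hoc cycle guard:

```sql
WITH Chain (SellingCurrency, BuyingCurrency, Rate, Path) AS (
    -- anchor: direct rates
    SELECT SellingCurrency, BuyingCurrency, Rate,
           CAST(SellingCurrency + '>' + BuyingCurrency AS nvarchar(4000))
    FROM ExchangeRates
    UNION ALL
    -- recursive step: extend each chain by one hop, multiplying the rates
    SELECT c.SellingCurrency, e.BuyingCurrency, c.Rate * e.Rate,
           CAST(c.Path + '>' + e.BuyingCurrency AS nvarchar(4000))
    FROM Chain c
    JOIN ExchangeRates e ON e.SellingCurrency = c.BuyingCurrency
    WHERE c.Path NOT LIKE '%' + e.BuyingCurrency + '%'  -- avoid revisiting a currency
)
SELECT Rate, Path
FROM Chain
WHERE SellingCurrency = 'SEK' AND BuyingCurrency = 'EUR';
```

With the sample data this finds the chained SEK>RUB>EUR conversion even though no direct SEK/EUR row exists.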
|
When you are sure you can fix it in 2 levels, you could use something like this:
---
```
select Rate
from ExchangeRates
where SellingCurrency = 'SEK'
and BuyingCurrency = 'EUR'
UNION
select er1.Rate*er2.Rate as Rate
from ExchangeRates er1
left join ExchangeRates er2 on er2.SellingCurrency = er1.BuyingCurrency
where er1.SellingCurrency = 'SEK'
and er2.BuyingCurrency = 'EUR'
```
|
Joining multiple tables in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
Basically I have a LoginHistory table in SQL.
I want to execute a command that will return the count of logins on a certain date.
But if a user has logged in multiple times I only want to record it as one log in.
So I can see how many logged in on one day and not how many times they logged in on that day.
So far I have this ....
```
select count(username) as TotalUserCountOnTheDay
from dbo.LoginHistory
where datediff(day,LoginDate, '2014-11-19') = 0
```
But this returns the number of log-ins for all users.
The result is 15 rows; it should be 4.
The table I'm querying:

|
You want to count *distinct* users:
```
select count(distinct username) as TotalUserCountOnTheDay
from dbo.LoginHistory
where datediff(day,LoginDate, '2014-11-19') = 0;
```
|
Maybe the *distinct* keyword could help you in this case.
|
How to get count of number of rows in table that have multiple matching row values?
|
[
"",
"mysql",
"sql",
""
] |
I have two tables, `Students` and `Course_Registrations` where each student registers several courses.
In my `students` table I have his/her degree(MS/BS) but it's missing in the `course_Registrations` table. I've tried the below code to get each student degree program in `course_registrations` but I get this error:
*Scalar subquery is only allowed to return a single row*
Any suggestions?
```
Update ROOT.ISB_COURSE_REGISTRATIONS
set ROOT.ISB_COURSE_REGISTRATIONS.degree=
(
Select ROOT.ISB_STUDENTS.degree
from ROOT.ISB_STUDENTS
where ROOT.ISB_STUDENTS.STUDENT_ID=ROOT.ISB_COURSE_REGISTRATIONS.STUDENT_ID
)
where exists
(
select *
from ROOT.ISB_STUDENTS
where ROOT.ISB_STUDENTS.STUDENT_ID=ROOT.ISB_COURSE_REGISTRATIONS.STUDENT_ID
GROUP BY STUDENT_ID
);
```
|
You have a problem because some students have multiple rows in `ISB_STUDENTS`. You need to figure out what to do to combine them. In the meantime, I would suggest this:
```
Update ROOT.ISB_COURSE_REGISTRATIONS
set degree = (Select MAX(s.degree)
              from ROOT.ISB_STUDENTS s
              where s.STUDENT_ID = ISB_COURSE_REGISTRATIONS.STUDENT_ID
             )
where exists (select 1
from ROOT.ISB_STUDENTS s
where s.STUDENT_ID = ISB_COURSE_REGISTRATIONS.STUDENT_ID
group by s.STUDENT_ID
having min(s.degree) = max(s.degree)
);
```
Note the use of the `having` clause in the subquery in the `where` clause. This will set the value only when all the degrees on the student records are the same. Then you can investigate two issues:
* In a table called `ISB_STUDENTS` why are there duplicate `STUDENT_ID`s?
* When there are such duplicates, how will you handle multiple inconsistent degrees?
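Once those duplicates are dealt with, the whole thing can also be written as a single `MERGE`. This is only a sketch; it assumes collapsing with `MAX(degree)` per student is acceptable:

```sql
MERGE INTO ROOT.ISB_COURSE_REGISTRATIONS r
USING (SELECT STUDENT_ID, MAX(degree) AS degree
       FROM ROOT.ISB_STUDENTS
       GROUP BY STUDENT_ID) s
ON (r.STUDENT_ID = s.STUDENT_ID)
WHEN MATCHED THEN
  UPDATE SET r.degree = s.degree;
```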
|
As I understood from your question, you have the following:
1. a student table that contains a degree field
2. a course\_registration table that also contains a degree field, which you want to update from the student table for each student
If so, you can try this:
```
Update r set r.degree=s.degree
from ROOT.ISB_COURSE_REGISTRATIONS r
inner join ROOT.ISB_STUDENTS s on r.STUDENT_ID=s.STUDENT_ID
```
If I misunderstood any of your requirements, please clarify.
Hope this helps.
|
Inputing column values in one table from another related table
|
[
"",
"sql",
""
] |
I'm trying to write an MS SQL statement to get the LAST row of data from the database; unfortunately, when I add DESC LIMIT 1 it says wrong syntax, despite the fact that it looks OK to me.
Can somebody with more skill look at it?
```
Select
sum(spareparts),
month(calculationdate)
from cz_axnmrs_calculations
where CASE_ID in (select case_id
from cz_axnmrs_cases
where insurer_memberid = 'MM-O-5B57274F')
and YEAR(calculationdate)='2014'
group by month(calculationdate) DESC LIMIT 1
```
It works like this:
```
Select
sum(spareparts),
month(calculationdate)
from cz_axnmrs_calculations
where CASE_ID in (select case_id
from cz_axnmrs_cases
where insurer_memberid = 'MM-O-5B57274F')
and YEAR(calculationdate)='2014'
group by month(calculationdate)
```
|
SQL Server uses `TOP` instead of `LIMIT` to restrict the number of records (and the `DESC` belongs in an `ORDER BY` clause, not the `GROUP BY`).
Your query becomes:
```
Select top 1
    sum(spareparts),
    month(calculationdate)
from cz_axnmrs_calculations
where CASE_ID in (select case_id
                  from cz_axnmrs_cases
                  where insurer_memberid = 'MM-O-5B57274F')
    and YEAR(calculationdate)='2014'
group by month(calculationdate)
order by month(calculationdate) desc
```
|
In SQL Server 2012+, Microsoft supports the ANSI standard OFFSET clause. You can write this as:
```
Select sum(spareparts), month(calculationdate)
from cz_axnmrs_calculations
where CASE_ID in (select case_id from cz_axnmrs_cases where insurer_memberid = 'MM-O-5B57274F') and
YEAR(calculationdate)='2014'
group by month(calculationdate)
order by month(calculationdate) desc
offset 0 rows fetch next 1 row only;
```
|
MS SQL last record
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables `restaurants` and `meals`. Restaurants has
```
`id`
`name`
`menu`
`image`
`text`
`address`
```
Meals has
```
`meal_id`
`meal_name`
`meal_image`
`meal_weight`
`meal_price`
`meal_menu`
```
What I'm thinking is to connect them via `meal.restaurant.id`. Is this a good approach, or is it better to make another (third) table and connect them there?
Can you provide me with a query so that when the restaurant with ID=1 is selected, only the meals for this restaurant are shown? For now I'm thinking of holding the restaurant id in the `meal_menu` column.
|
Create a pivot table e.g.
```
CREATE TABLE meal_restaurant
(
meal_id int not null,
restaurant_id int not null,
primary key (meal_id, restaurant_id),
foreign key (meal_id) references meals (meal_id),
foreign key (restaurant_id) references restaurants (id)
);
```
Now to select all meals for a specific restaurant you do
```
SELECT m.meal_id, meal_name, meal_image, meal_weight, meal_price
FROM meals m JOIN meal_restaurant mr
ON m.meal_id = mr.meal_id
WHERE restaurant_id = 1
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!2/c0ebd5/2)** demo
|
**If you think one restaurant has many meals and one meal belongs to only one restaurant,**
then you should try this restaurants table
```
`id`
`name`
`menu`
`image`
`text`
`address`
```
Meals table
```
`meal_id`
`meal_name`
`meal_image`
`meal_weight`
`meal_price`
`meal_menu`
`restaurant_id`
```
**Or, if you think one restaurant has many meals and one meal can belong to many restaurants,**
then you should try this restaurants table
```
`id`
`name`
`menu`
`image`
`text`
`address`
```
Meals table
```
`meal_id`
`meal_name`
`meal_image`
`meal_weight`
`meal_price`
`meal_menu`
```
and a restaurant\_meal table
```
`restaurant_id`
`meal_id`
```
The restaurant\_meal table only contains foreign keys referencing the restaurants table and the meals table.
It's up to you which one you use; you can use either one. It just depends on the requirements.
|
MySQL relations and query
|
[
"",
"mysql",
"sql",
""
] |
I have asked a question with the same query earlier but this time the question is different so I am posting another question. Below is the sql stored procedure query.
```
SELECT ROW_NUMBER() OVER (ORDER BY [PostedDate] Desc)AS RowNumber,[Products].[Id], [Name], [Description], [PostedDate],
ISNULL(AVG([Rating].[RatingValue]), 0) AverageRating, COUNT([Rating].[RatingValue]) RatingCount
INTO #DealResults1
FROM [Products]
LEFT OUTER JOIN [Rating] ON [Products].[Id] = [Rating].[ProductId]
WHERE [City] = CASE WHEN @CityId IS NULL THEN [City] ELSE @CityId END
AND [Description] IS NOT NULL
Group by [Products].[Id], [Name], [Description], [PostedDate]
ORDER BY [PostedDate] Desc
```
This is the query for 1 day. I have changed the table structure and now it is like this,
```
Id Rating_Monday Rating_Tuesday Rating_Wednesday .......
1 3.0 NULL NULL
2 3.5 NULL NULL
3 NULL 2.0 NULL
4 NULL 3.0 NULL
5 NULL 1.5 NULL
6 NULL NULL 1.0
7 NULL NULL 2.5
8 NULL NULL 4.5
```
On Monday, value for all other days will be Null. Now for Monday the rating selected is Rating\_Monday. Everything works fine. In the above query there are two important parts which are,
```
ISNULL(AVG([Rating].[Rating_Monday]), 0) AverageRating
COUNT([Rating].[Rating_Monday]) RatingCount
```
The average is selected absolutely fine, but RatingCount is wrong because it is selected according to the row count and does not check for Null values.
|
Using a case can solve this.
```
SELECT ROW_NUMBER() OVER (ORDER BY [PostedDate] Desc)AS RowNumber,[Products].[Id], [Name], [Description], [PostedDate],
ISNULL(AVG([Rating].[RatingValue]), 0) AverageRating,
SUM(case when (Rating.RatingValue) is null then 0 else 1 END) RatingCount
INTO #DealResults1
FROM [Products]
LEFT OUTER JOIN [Rating] ON [Products].[Id] = [Rating].[ProductId]
WHERE [City] = CASE WHEN @CityId IS NULL THEN [City] ELSE @CityId END
AND [Description] IS NOT NULL
Group by [Products].[Id], [Name], [Description], [PostedDate]
ORDER BY [PostedDate] Desc
```
|
COUNT(expression) evaluates expression for each row in a group and returns the **number of nonnull values**.
You may look into using the OVER clause : `COUNT(EmployeeID) OVER (PARTITION BY DepartmentID) AS EmployeesPerDept`
See [COUNT (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms175997.aspx) for more information.
|
Don't want to use columns in the COUNT which are NULL
|
[
"",
"sql",
"sql-server",
""
] |
I want to return 1 if some number already exists in table and 0 otherwise.
I tried something but it doesn't work:
```
select
case when 100 in (select distinct id from test) then '1'
else '0'
from test
```
I want something similar to exists function that already exists in PostgreSQL, but instead of `true` and `false` I want `1` or `0`.
|
**[`EXISTS`](https://www.postgresql.org/docs/current/functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS)** yields a `boolean` result.
The simple way to achieve what you are asking for is to cast the result to `integer`:
```
SELECT (EXISTS (SELECT FROM test WHERE id = 100))::int;
```
`TRUE` becomes `1`.
`FALSE` becomes `0`.
See:
* [Return a value if no record is found](https://stackoverflow.com/questions/8098795/return-a-value-if-no-record-is-found/8098816#8098816)
**Or** with `UNION ALL` / `LIMIT 1` (probably slightly faster):
```
SELECT 1 FROM test WHERE id = 100
UNION ALL
SELECT 0
LIMIT 1;
```
If a row is found, `1` is returned and Postgres stops execution due to `LIMIT 1`. Else, `0` is returned.
**Disclaimer** for the `UNION ALL` solution: this relies on **undocumented behavior** that Postgres would execute `UNION ALL` terms in sequence. This used to be the case until Postgres 11, where `Parallel Append` was added to Postgres. It's actually still the case (currently Postgres 15) for the given example, as retrieving a single row will never trigger a `Parallel Append` plan. But it remains undocumented behavior, so you might not want to rely on it. See:
* [Are results from UNION ALL clauses always appended in order?](https://dba.stackexchange.com/q/316818/3684)
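The original `CASE` attempt was actually close; it only needed an `END` and no outer `FROM test` (which produced one row per table row instead of one result). A fixed sketch:

```sql
SELECT CASE WHEN EXISTS (SELECT 1 FROM test WHERE id = 100)
            THEN 1
            ELSE 0
       END AS found;
```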
|
If the field you are testing is the Primary Key (or some other unique constraint), then you can simply return the count (which will always be 0 or 1):
```
SELECT count(*) FROM test WHERE id = 100;
```
|
Return 1 if number exists in table and 0 otherwise
|
[
"",
"sql",
"postgresql",
"boolean",
"exists",
""
] |
I want to find new, modified and deleted records in one table (tableA) by comparing it to another table (tableB). Both tables are of the same schema and has a unique ID field.
In my situation, tableA is originally the same as tableB but it has been edited by some external organisation and once they have done their edits, they send the table back via ZIP file, and we re-populate (truncate and insert) that data to tableA. So I want to find out what records have changed in tableA. I am using SQL Server 2012.
I can get new and modified records with the "except" keyword:
```
select * from tableA
except
select * form tableB
```
(Let's call the above results ResultsA)
I can also get deleted and modified records:
```
select * from tableB
except
select * form tableA
```
(Let's call the above results ResultsB)
The problem is, both ResultsA and ResultsB have the same records that have been modified/edited. So the modified/edited records are doubled up. I can use inner join or intersect on ResultsA and ResultsB to get *just the modified* records (call this results ResultsC). But then I will need to use join/except again between ResultsA and ResultsC to get *just the new* records, and join/except again between ResultsB and ResultsC to get *just the deleted* records... I tried [this](https://stackoverflow.com/questions/372126/t-sql-comparing-two-tables-records-that-dont-exist-in-second-table) and [this](https://stackoverflow.com/questions/2077807/sql-query-to-return-differences-between-two-tables) but they are not working for me.
Obviously this is not good. Are there any elegant and simpler ways to find out the records that have been deleted, modified or added in tableA compared to tableB?
|
How about:
```
-- DELETED
SELECT B.*, 'DELETED' AS 'CHANGE_TYPE'
FROM TableB B
LEFT JOIN TableA A ON B.PK_ID = A.PK_ID
WHERE A.PK_ID IS NULL
UNION
-- NEW
SELECT A.*, 'NEW' AS 'CHANGE_TYPE'
FROM TableA A
LEFT JOIN TableB B ON B.PK_ID = A.PK_ID
WHERE B.PK_ID IS NULL
UNION
-- MODIFIED
SELECT B.*, 'MODIFIED' AS 'CHANGE_TYPE'
FROM (
SELECT * FROM TableA
EXCEPT
SELECT * FROM TableB
) S1
INNER JOIN TableB B ON S1.PK_ID = B.PK_ID;
```
Not exactly elegant, but it works.
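An alternative single-pass form uses a `FULL OUTER JOIN` on the ID plus the T-SQL `EXCEPT` trick for a null-safe all-column comparison. A sketch, assuming `PK_ID` is the unique ID field shared by both tables:

```sql
SELECT COALESCE(a.PK_ID, b.PK_ID) AS PK_ID,
       CASE WHEN b.PK_ID IS NULL THEN 'NEW'
            WHEN a.PK_ID IS NULL THEN 'DELETED'
            ELSE 'MODIFIED'
       END AS CHANGE_TYPE
FROM TableA a
FULL OUTER JOIN TableB b ON a.PK_ID = b.PK_ID
WHERE b.PK_ID IS NULL                         -- only in A
   OR a.PK_ID IS NULL                         -- only in B
   OR EXISTS (SELECT a.* EXCEPT SELECT b.*);  -- present in both but differs
```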
|
Based on what I understood, I came up with the following solution.
```
DECLARE @tableA TABLE (ID INT, Number INT)
DECLARE @tableB TABLE (ID INT, Number INT)
INSERT INTO @tableA VALUES
(1,10),
(2,20),
(3,30),
(4,40)
INSERT INTO @tableB VALUES
(1,11),
(2,20),
(4,40),
(5,50)
SELECT *,'Modified or deleted' as 'Status' FROM
(
select * from @tableA
except
select * from @tableB
)a WHERE ID NOT IN
(
select ID from @tableB
except
select ID from @tableA
)
UNION
SELECT *,'New' as 'Status' FROM
(
select * from @tableB
except
select * from @tableA
)b WHERE ID NOT IN
(
SELECT ID FROM
(
select * from @tableA
except
select * from @tableB
)a WHERE ID NOT IN
(
select ID from @tableB
except
select ID from @tableA
)
)
```
|
Find Modified/New/Deleted Records Between Two Tables
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
SELECT state, business, a.report
FROM base
WHERE state IN
(SELECT a.state FROM heart a join (SELECT CAST(MAX(percent_adults) AS DOUBLE) max1 FROM heart)b on (a.percent_adults=b.max1));
```
In the above subquery, only one value can be returned, i.e. a.state from the table 'heart'.
That value is used in the main query to fetch business from the 'base' table. I need to return a.report from the 'heart' table in the subquery, along with state and business, in the result. Thanks much!
|
You don't need the subquery to return two values. All you need the subquery for is to provided the unique join value (relying on the odds that two calculated double values will ever be exactly the same).
So just do a join and choose the report from that:
```
select b.state, b.business, a.report
from base b
join heart a
on a.state = b.state
where a.percent_adults =(
select max( percent_adults )
from heart );
```
You're limiting the join to just the one state with the highest `percent_adult` value and getting the report from that.
|
Use `exists` instead of `in`:
```
SELECT state, business, a.report
FROM base b
WHERE EXISTS (SELECT 1
FROM heart h JOIN
(SELECT MAX(percent_adults) as max1
FROM heart h2
) sb
ON h.percent_adults = sb.max1
WHERE h.state = b.state and h.business = b.business
);
```
Don't convert the maximum value, particularly to a floating point representation. It is generally dangerous to compare floating point values for equality, but it *should* be safe with the result of a `MIN()` or `MAX()`.
|
How to return multiple values from a subquery
|
[
"",
"sql",
"hive",
""
] |
I have a record which has value with leading and space in a column.
eg: column value is `' D0019 '`
I want to pass this particular column in where clause.
```
select * from my_table where my_column='D0019';
```
Since the value has extra whitespace, the where clause doesn't match it.
How can I select the record even though the value has leading and trailing spaces?
My DB is `ORACLE`
========================================
UPDATE :
I get value only when I try
```
select * from my_table where my_column like '%D0019%'
```
not even with `' %D0019% '`
=============================================
UPDATE 2 :
```
SELECT my_column ,DUMP(my_column) FROM my_table WHERE my_column like '%D0019';
```
output is
```
" D0019" Typ=1 Len=6: 9,68,48,48,49,57
```
|
It's not a normal space you have to remove; it's a horizontal tab character (ASCII 9).
The `regexp` below strips all the characters in the ASCII range `0-32`, which covers the whitespace characters.
```
select * from my_table
WHERE
REGEXP_REPLACE(my_column,'['||chr(1)||'-'||chr(32)||']' ) = 'D0019';
```
[More on ASCII table](http://www.asciitable.com)
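Given the DUMP output in the question (`9,68,48,48,49,57`, i.e. a single leading tab CHR(9) before 'D0019'), a plain `TRIM` with an explicit trim character would also be enough for this particular row; a sketch:

```sql
SELECT *
FROM my_table
WHERE TRIM(CHR(9) FROM my_column) = 'D0019';
```

The `REGEXP_REPLACE` form above is more general, since it handles any mix of leading/trailing control characters, not just the tab.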
|
Seems as simple as (if I get it):
```
select * from TAB where REGEXP_LIKE (COL,'\s*D0019')
```
This returns values such as ' D0019', 'D0019', '  D0019', '   D0019', etc.
|
Find value having leading and trailing space in DB table column - Oracle
|
[
"",
"sql",
"oracle",
""
] |
I generated a .sql file with the mysqldump command:
```
system "mysqldump -u#{USERNAME} --password=#{PASSPORT} --extended-insert #{DATABASE} #{table_name} > init_#{table_name}.sql"
```
the .sql file looks like this:
```
.....
.....
LOCK TABLES `config_products` WRITE;
/*!40000 ALTER TABLE `config_products` DISABLE KEYS */;
INSERT INTO `config_products` VALUES (1,1000,30,54),(2,2000,30,56),(3,3000,30,51),(4,1000,30,54)....
.....
.....
```
All the data is merged into one line, but I want it to look like this:
```
.....
.....
LOCK TABLES `config_products` WRITE;
/*!40000 ALTER TABLE `config_products` DISABLE KEYS */;
INSERT INTO `config_products` VALUES
(1,1000,30,54),
(2,2000,30,56),
(3,3000,30,51),
(4,1000,30,54)....
.....
.....
```
Is there any easy way to do this?
|
You can use `gsub`, and replace opening bracket using a `newline`, a `tab` and the bracket itself.
You should either do this before writing or on reading
```
text = "INSERT INTO `config_products` VALUES (1,1000,30,54),(2,2000,30,56),(3,3000,30,51),(4,1000,30,54)...."
newtext = text.gsub(/\(/,"\n\t(")
#INSERT INTO `config_products` VALUES
# (1,1000,30,54),
# (2,2000,30,56),
# (3,3000,30,51),
# (4,1000,30,54)....
```
**UPDATE**
Since this is done with a system task, you can do this in the Rails console, or write a helper method somewhere if you are going to use this a lot.
```
def export_formatted_sql(database, table_name, username, password)
  system "mysqldump -u#{username} --password=#{password} --extended-insert #{database} #{table_name} > init_#{table_name}.sql"
  file = File.open("init_#{table_name}.sql", "r")
  newtext = file.read.gsub(/\(/, "\n\t(")
  file.close
  file = File.open("init_#{table_name}.sql", "w") # overwrite the existing file
  file.write newtext
  file.close
end
```
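The same substitution can also be sketched in Python (not in the original answer; shown only to make the regex behaviour concrete):

```python
import re

# Insert a newline plus a tab before each opening parenthesis,
# mirroring the Ruby gsub above.
text = "INSERT INTO `config_products` VALUES (1,1000,30,54),(2,2000,30,56)"
newtext = re.sub(r'\(', '\n\t(', text)
print(newtext)
```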
|
Check out this gem; it may be helpful to you:
`anbt-sql-formatter`
|
how to modify .sql file with ruby?
|
[
"",
"mysql",
"sql",
"ruby-on-rails",
"ruby",
""
] |
I have a set of data as below
```
number quantity
1 4
2 6
3 7
4 9
2 1
1 2
5 4
```
I need to find the unique value in the column "number"
The output should look like this:
```
number quantity
3 7
4 9
5 4
```
Any help would be appreciated. I am using MS SQL.
|
One way to go could be to have an aggregate query that counts the number of occurrences for each number, and use it in a subquery:
```
SELECT number, quantity
FROM my_table
WHERE number IN (SELECT number
FROM my_table
GROUP BY number
HAVING COUNT(*) = 1)
```
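A runnable sketch of this query against the sample data (SQLite via Python's `sqlite3`; the table name follows the answer, the data comes from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (number INT, quantity INT);
INSERT INTO my_table VALUES (1,4),(2,6),(3,7),(4,9),(2,1),(1,2),(5,4);
""")

# Keep only the numbers whose group has exactly one row.
rows = conn.execute("""
    SELECT number, quantity
    FROM my_table
    WHERE number IN (SELECT number FROM my_table
                     GROUP BY number
                     HAVING COUNT(*) = 1)
    ORDER BY number
""").fetchall()
print(rows)   # [(3, 7), (4, 9), (5, 4)]
```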
|
In the inner query get all the distinct numbers, then join again with the main table to get your expected results.
```
select o.*
from mytable o , (select number
from mytable
group by number) dist
where o.number = dist.number
```
|
Find the unique value in column MS SQL database
|
[
"",
"sql",
"sql-server",
""
] |
When I use a `DateDiff` in a `SELECT` statement in SQL Server, I get an `int` value back:
```
select DATEDIFF(day,'01/01/2011','15/05/2011')
from Table1
```
Result is : `134`
```
select DATEDIFF(day,'01/01/2011','15/05/2011')/15
from Table1
```
Result is : `8`
```
select cast(DATEDIFF(day,'01/01/2011','15/05/2011') / 15 as Decimal(9,2))
from Table1
```
Result is : `8.00`
But in reality the result is : **8.9333333**
How can I change the result from `int` to `Double`?
I want to get the result `8.9333333` - how can I do this?
|
The problem is that you're using `CAST` after dividing. If you perform the division after the `CAST`, you'll get the answer you want:
```
select
cast(DATEDIFF(day,'01/01/2011','15/05/2011') as Decimal(9,2))/15
from Table1
```
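A small sketch of the underlying integer-division rule (SQLite via Python's `sqlite3`; the constant 134 is the day difference from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Integer / integer truncates; cast one side first to keep the fraction.
(int_div,) = conn.execute("SELECT 134 / 15").fetchone()
(real_div,) = conn.execute("SELECT CAST(134 AS REAL) / 15").fetchone()
print(int_div, real_div)   # 8 8.933333333333334
```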
|
You need to cast your inputs into the division as decimal first:
```
select cast(DATEDIFF(day,'2011-01-01','2011-05-15') as Decimal(9,2))/15.0
```
Otherwise you are dividing an integer by an integer - you'll get an integer out. If you then cast this to a decimal after the division, you are going to get 8.0.
|
Issue with DateDiff / Nb in SQL Server
|
[
"",
"sql",
"sql-server",
"casting",
"datediff",
""
] |
Please check the tables below for a simplified version of my problem:
**Table Boys**
```
BoyId
BoyName
...
```
**Table Girls**
```
GirlId
GirlName
```
**Table Toys**
```
ToyId
ToyName
ToyOwnerBoyOrGirl ( The toy could be owned by a boy or a girl)
ToyOwnerId
```
I created two constraints:
1) `ToyOwnerId` is a foreign key of the Primary Key `Boys.BoyId`
2) `ToyOwnerId` is a foreign key of the Primary Key `Girls.GirlId`
My purpose is to tell the database that `ToyOwnerId` will always be one of these Ids
**My problem:**
When I tried to insert a new Toy with an id of a Boy, I got an error that there is a `foreign key conflict` in the Girls constraint.
Is this a bad design, or can I still use the same design with a fix?
|
It looks like a bad design. Why not have one table for all children, with some mark for whether each is a boy or a girl? Also I really doubt you need the `ToyOwnerBoyOrGirl` field, as it can be easily obtained by a join from toys to owners.
Consider following scheme:
```
Table Children
ID
Name
Is_Boy
Table Toys
ID
Name
Owner_ID
```
In this case you need just a foreign key from toys to owners, and other tasks you might encounter will be much simpler to solve.
**EDIT**: As per OP's comment - Boys and Girls tables are totally different.
So, in this case you still can have table `Children` (let's use previous terminology) as a "common" table for `Boys` and `Girls`.
Something like:
```
Table Children
ID
Table_Name ('Boys' or 'Girls' here)
Record_ID (ID from Boys or Girls respectively)
...maybe some common fields from boys and girls tables here...
Table Boys
ID
Child_ID
...the rest of fields
Table Girls
ID
Child_ID
...the rest of fields
```
|
I think you should combine the boys and girls tables into one table called children. It would have a sex column that would have an M or F. That will simplify things.
|
A field referencing two tables - Foreign Key Conflict
|
[
"",
"sql",
"sql-server",
"database-design",
""
] |
I'm trying to get the total minutes but it's not working correctly with my left joins. Is there a way to add a column that has the DISTINCT total of minutes for a given MachineDescription?
Here is the sqlfiddle: <http://sqlfiddle.com/#!2/7fd99/1>
```
SELECT Work.WorkID
, Work.Description
, Machine.MachineDescription
, Name.NAME
, Work2.RegMin
, Work.MINUTES
FROM Work Work
JOIN Machine Machine ON Machine.MachineID = Work.MachineID
LEFT JOIN Work2 Work2 ON Work2.WorkID = Work.WorkID
LEFT JOIN Name Name ON Name.NameID = Work2.NameID
```
I'd like to have a column with the DISTINCT Sum of Minutes. So the total of minutes would be '30' not '40' for WorkID 111.
My output looks like this:
```
WorkID | Description | MachineDescription | Name | RegMin | Minutes
-------------------------------------------------------------------
111 Replace Belt Splitter Joe 10 10
111 Replace Belt Splitter Bob 20 10
112 Door Broke Splitter Joe 10 20
```
I want a column with the total minutes by DISTINCT WorkID like this...
```
WorkID | Description | MachineDescription | Name | RegMin | Minutes | Total Minutes
-----------------------------------------------------------------------------------
111 Replace Belt Splitter Joe 10 10 30
111 Replace Belt Splitter Bob 20 10 30
112 Door Broke Splitter Joe 10 20 30
```
Is this possible?
|
Use `Correlated Sub-query` to get the result. Try this.
```
SELECT Work.WorkID,
Work.Description,
Machine.MachineDescription,
NAME.NAME,
Work2.RegMin,
Work.MINUTES,
(SELECT Sum(DISTINCT MINUTES)
FROM Work w
WHERE w.MachineID = Machine.MachineID) Total_minutes
FROM Work Work
JOIN Machine Machine
ON Machine.MachineID = Work.MachineID
LEFT JOIN Work2 Work2
ON Work2.WorkID = Work.WorkID
LEFT JOIN NAME NAME
ON NAME.NameID = Work2.NameID
```
**Output :**
```
WorkID Description MachineDescription NAME RegMin MINUTES Total_minutes
------ ----------- ------------------ ---- ------ ------- -------------
111 Sink Broken SPLITTER Joe 10 10 30
111 Sink Broken SPLITTER Bob 20 10 30
112 Door Broken SPLITTER Joe 10 20 30
```
|
```
SELECT Work.WorkID
, Work.Description
, Machine.MachineDescription
, Name.NAME
, Work2.RegMin
, Work.MINUTES
, (SELECT (SUM(work2.RegMin) ) FROM work2 left join work on work2.workid = work.workid) - work.minutes AS TotalMinutes
FROM Work Work
JOIN Machine Machine ON Machine.MachineID = Work.MachineID
LEFT JOIN Work2 Work2 ON Work2.WorkID = Work.WorkID
LEFT JOIN Name Name ON Name.NameID = Work2.NameID
GROUP BY work.workID
, Work.Description
, Machine.MachineDescription
, Name.NAME
, Work2.RegMin
, Work.MINUTES
```
**Returns**
```
WORKID DESCRIPTION MACHINEDESCRIPTION NAME REGMIN MINUTES TOTALMINUTES
111 Sink Broken SPLITTER Bob 20 10 30
111 Sink Broken SPLITTER Joe 10 10 30
112 Door Broken SPLITTER Joe 10 20 20
```
|
Get DISTINCT Sum of Total by WorkID with LEFT JOINs
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I'm in ColdFusion working with data from an SQL table, and using a query of queries to join the SQL data to some data from an Oracle database. Unfortunately, I need to order them by date, and the Oracle table has two columns - DRWR\_DATE which is of type DATE and TIME which is of type VARCHAR2. The two columns, put into one string, read `17-JUN-03 16:35:18` or something similar. I need to return these two columns as a TIMESTAMP so I can use query of queries to sort them.
Also, I think I read that a date column holds the time in Oracle anyway? I don't have much experience with Oracle so I am unsure how best to do this.
|
Try this:
```
SELECT to_timestamp(
to_char( drwr_date,'dd-mon-yy') ||' '|| time
, 'dd-mon-yy hh24:mi:ss'
)
FROM your_table
```
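The same combine-then-parse idea can be sketched in Python (not ColdFusion or Oracle; purely illustrative):

```python
from datetime import datetime

# Glue the DATE column text and the VARCHAR2 time together,
# then parse the pair with a single format string.
date_part = '17-JUN-03'
time_part = '16:35:18'
ts = datetime.strptime(date_part + ' ' + time_part, '%d-%b-%y %H:%M:%S')
print(ts)   # 2003-06-17 16:35:18
```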
|
Try using `TO_TIMESTAMP` function:
```
SELECT TO_TIMESTAMP('17-JUN-03 16:35:18', 'DD-MON-RR HH24:MI:SS')
FROM DUAL;
```
|
Return an oracle column as timestamp
|
[
"",
"sql",
"oracle",
"coldfusion",
""
] |
How could I get the first business day of the current month?
Without create a function, only select.
something like that:
```
SELECT CONVERT(VARCHAR(25),DATEADD(dd,-(DAY(GETDATE())-1), GETDATE()), 101)
```
somebody knows please?
Thanks.
|
A Simple case statement could do it
```
SELECT CASE
WHEN DATENAME(WEEKDAY, dateadd(mm, DATEDIFF(MM, 0, getdate()), 0)) = 'Saturday'
THEN dateadd(mm, DATEDIFF(MM, 0, getdate()), 0) + 2
WHEN DATENAME(WEEKDAY, dateadd(mm, DATEDIFF(MM, 0, getdate()), 0)) = 'Sunday'
THEN dateadd(mm, DATEDIFF(MM, 0, getdate()), 0) + 1
ELSE dateadd(mm, DATEDIFF(MM, 0, getdate()), 0)
END
```
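The same weekday rule sketched in plain Python (a sketch only; like the SQL above it ignores holidays):

```python
from datetime import date, timedelta

def first_business_day(year, month):
    # Take the 1st of the month, then push Saturday forward two
    # days and Sunday forward one, matching the CASE branches above.
    d = date(year, month, 1)
    if d.weekday() == 5:        # Saturday
        d += timedelta(days=2)
    elif d.weekday() == 6:      # Sunday
        d += timedelta(days=1)
    return d

print(first_business_day(2014, 11))   # 2014-11-01 is a Saturday -> 2014-11-03
```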
|
This will literally give you what you're asking for -- the first business day in a month if we define a business day as "any day that's not a Saturday or a Sunday". But this is a very narrow definition of "business day" that is not appropriate when taking into account holidays and cultural differences, so it generalizes poorly. The typical solution for this problem is to create a table that actually holds the working days (which is generated somewhere just before the year, or calculated in advance if that's feasible), and simply look it up in that.
```
SELECT DATEADD(DAY,
CASE
(DATEPART(WEEKDAY, DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)) + @@DATEFIRST - 1) % 7
WHEN 6 THEN 2
WHEN 7 THEN 1
ELSE 0
END,
DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)
)
```
This solution uses `@@DATEFIRST` to avoid any language issues -- using `DATEPART(WEEKDAY, ...)` on its own or `DATENAME()` only works if we assume a specific region.
|
First business day of the current month - SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I am running an SQL query and I want to use `min()` and `DATE()`
i tried:
```
and min(DATE(datetime)) = '".date("Y-m-d")."'
```
but that does not work. What should I be using?
the full query i have tried is:
```
select
min(datetime) as d,
ticketnumber
from ticket_updates
where type = 'update'
and min(DATE(datetime)) = '".date("Y-m-d")."'
group by ticketnumber
```
I have also tried:
```
select
min(datetime) as d,
ticketnumber
from ticket_updates
where type = 'update'
and min(datetime) >= '".date("Y-m-d 00:00:00")."'
and min(datetime) <= '".date("Y-m-d 23:59:59")."'
group by ticketnumber
```
but I get an error saying:
> Invalid use of group function
|
If you want the *earliest* date time, then use a `having` clause:
```
select min(datetime) as d, ticketnumber
from ticket_updates
where type = 'update'
group by ticketnumber
having min(DATE(datetime)) = '".date("Y-m-d")."'
```
This seems rather strange as a construct, because the min is going to get the date you are entering. All the work is in the filtering.
|
There is no need to use the `min` function in your where clause.
I've also changed the where to use `between` for readability.
```
select
min(datetime) as d,
ticketnumber
from ticket_updates
where type = 'update'
and datetime between '".date("Y-m-d 00:00:00")."'
and '".date("Y-m-d 23:59:59")."'
group by ticketnumber
```
This code will find the lowest date/time for each ticket number on the date provided.
Check out this [FIDDLE](http://sqlfiddle.com/#!2/5e32ca/5)
|
using min and DATE() in SQL
|
[
"",
"mysql",
"sql",
""
] |
I have an ETL process which takes values from an input table which is a key value table with each row having a field ID and turning it into a more denormalized table where each row has all the values. Specifically, this is the input table:
```
StudentFieldValues (
FieldId INT NOT NULL,
StudentId INT NOT NULL,
Day DATE NOT NULL,
Value FLOAT NULL
)
```
`FieldId` is a foreign key from table `Field`, `Day` is a foreign key from table `Days`. The PK is the first 3 fields. There are currently 188 distinct fields. The output table is along the lines of:
```
StudentDays (
StudentId INT NOT NULL,
Day DATE NOT NULL,
NumberOfClasses FLOAT NULL,
MinutesLateToSchool FLOAT NULL,
... -- the rest of the 188 fields
)
```
The PK is the first 2 fields.
Currently the query that populates the output table does a self join with `StudentFieldValues` 188 times, one for each field. Each join equates `StudentId` and `Day` and takes a different `FieldId`. Specifically:
```
SELECT Students.StudentId, Days.Day,
StudentFieldValues1.Value NumberOfClasses,
StudentFieldValues2.Value MinutesLateToSchool,
...
INTO StudentDays
FROM Students
CROSS JOIN Days
LEFT OUTER JOIN StudentFieldValues StudentFieldValues1
ON Students.StudentId=StudentFieldValues1.StudentId AND
Days.Day=StudentFieldValues1.Day AND
AND StudentFieldValues1.FieldId=1
LEFT OUTER JOIN StudentFieldValues StudentFieldValues2
ON Students.StudentId=StudentFieldValues2.StudentId AND
Days.Day=StudentFieldValues2.Day AND
StudentFieldValues2.FieldId=2
... -- 188 joins with StudentFieldValues table, one for each FieldId
```
I'm worried that this system isn't going to scale as more days, students and fields (especially fields) are added to the system. Already there are 188 joins and I keep reading that if you have a query with that number of joins you're doing something wrong. So I'm basically asking: Is this something that's gonna blow up in my face soon? Is there a better way to achieve what I'm trying to do? It's important to note that this query is minimally logged and that's something that wouldn't have been possible if I was adding the fields one after the other.
More details:
* MS SQL Server 2014, 2x XEON E5 2690v2 (20 cores, 40 threads total), 128GB RAM. Windows 2008R2.
* 352 million rows in the input table, 18 million rows in the output table - both expected to increase over time.
* Query takes 20 minutes and I'm very happy with that, but performance degrades as I add more fields.
|
Think about doing this using conditional aggregation:
```
SELECT s.StudentId, d.Day,
max(case when sfv.FieldId = 1 then sfv.Value end) as NumberOfClasses,
max(case when sfv.FieldId = 2 then sfv.Value end) as MinutesLateToSchool,
...
INTO StudentDays
FROM Students s CROSS JOIN
Days d LEFT OUTER JOIN
StudentFieldValues sfv
ON s.StudentId = sfv.StudentId AND
d.Day = sfv.Day
GROUP BY s.StudentId, d.Day;
```
This has the advantage of easy scalability. You can add hundreds of fields and the processing time should be comparable (longer, but comparable) to fewer fields. It is also easier to add new fields.
EDIT:
A faster version of this query would use subqueries instead of aggregation:
```
SELECT s.StudentId, d.Day,
       (SELECT TOP 1 sfv.Value FROM StudentFieldValues sfv WHERE sfv.FieldId = 1 and sfv.StudentId = s.StudentId and sfv.Day = d.Day) as NumberOfClasses,
       (SELECT TOP 1 sfv.Value FROM StudentFieldValues sfv WHERE sfv.FieldId = 2 and sfv.StudentId = s.StudentId and sfv.Day = d.Day) as MinutesLateToSchool,
...
INTO StudentDays
FROM Students s CROSS JOIN
Days d;
```
For performance, you want a composite index on `StudentFieldValues(StudentId, day, FieldId, Value)`.
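A runnable sketch of the conditional-aggregation pivot (SQLite via Python's `sqlite3` standing in for SQL Server; three invented key/value rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sfv (FieldId INT, StudentId INT, Day TEXT, Value REAL);
INSERT INTO sfv VALUES
  (1, 10, '2014-11-18', 6),
  (2, 10, '2014-11-18', 12),
  (1, 11, '2014-11-18', 5);
""")

# One pass over the key/value rows; one MAX(CASE ...) column per field id.
rows = conn.execute("""
    SELECT StudentId, Day,
           MAX(CASE WHEN FieldId = 1 THEN Value END) AS NumberOfClasses,
           MAX(CASE WHEN FieldId = 2 THEN Value END) AS MinutesLateToSchool
    FROM sfv
    GROUP BY StudentId, Day
    ORDER BY StudentId
""").fetchall()
print(rows)   # missing fields come back as NULL/None
```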
|
Yes, this is going to blow up. You have your definitions of "normalized" and "denormalized" backwards. The Field/Value table design is *not* a relational design. It's a variation of the [entity-attribute-value](/questions/tagged/entity-attribute-value "show questions tagged 'entity-attribute-value'") design, which has all sorts of problems.
I recommend you do not try to pivot the data in an SQL query. It doesn't scale well that way. Instead, you need to query it as a set of rows, as it is stored in the database, and fetch back the result set into your application. There you write code to read the data row by row, and apply the "fields" to fields of an object or a hashmap or something.
|
Creating a denormalized table from a normalized key-value table using 100s of joins
|
[
"",
"sql",
"sql-server",
"join",
"etl",
"olap",
""
] |
I have a table (ClassEnrolments) which contains student data:
```
StudentId Student Module Status EndDate Credits
12345678 J Bloggs Introduction Pass 2014/09/01 10
12345678 J Bloggs Advanced Pass 2014/06/01 15
23456789 T Guy Introduction Pass 2013/05/25 10
23456789 T Guy Advanced Pass 2014/03/21 15
```
What I want to do is return information on how many modules the student has taken in total during a set period of time, so for example, if we take the above data and look at the period 01/01/2014 to 24/11/2014 it would return the following (based on EndDate)
```
StudentId Student Modules Credits AnnivDate
12345678 J Bloggs 2 25 2015/06/01
23456789 T Guy 1 15 2014/05/25
```
This is using SQL Server 2008 - any help would be greatly appreciated.
I'm now also wondering whether, using the same scenario, it's possible to count the number of credits obtained from those taken modules?
Another thought! It would be useful to see some sort of anniversary date based on the student's first module taken. This will help to establish whether the student has obtained the correct amount of credits by the anniversary date. I've toyed with the idea of the DATEADD function, but this seemed to duplicate the student information again.
```
DATEADD(MONTH, DATEDIFF(MONTH, 0, EndDate()) +12,0) AS AnnivDate
```
I understand why, as there will be multiple instances, but is there a way to look at the first instance of EndDate?
|
Use `Group By` and Aggregate Function `count` to count the `Module` and filter the data by using `Between operator` to filter the data between particular date range
```
SELECT StudentId,
Student,
Count(Module) Module,
Sum(Credits) Credits
FROM tablename
WHERE EndDate BETWEEN '2014-01-01' AND '2014-11-24'
GROUP BY StudentId,
Student
```
|
Try this:
```
Set dateformat DMY;
Select StudentId, Student, Count(Modules) as ModuleCount
from ClassEnrolments
Where EndDate >= convert(Datetime, '01-01-2014', 105) and
EndDate <= convert(Datetime, '24-11-2014', 105)
Group By StudentId, Student
```
|
SQL to COUNT items associated with individuals
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"aggregate-functions",
""
] |
In my table I want to see a list of unoccupied id numbers in a certain range.
For example there are 10 records in my table with id's: "2,3,4,5,10,12,16,18,21,22" and say that I want to see available ones between 1 and 25. So I want to see a list like:
1,6,7,8,9,11,13,14,15,17,19,20,23,24,25
How should I write my sql query?
|
Select the numbers form 1 to 25 and show only those that are not in your table
```
select n from
( select rownum n from dual connect by level <= 25)
where n not in (select id from table);
```
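The same gap search sketched with SQLite (via Python's `sqlite3`), where a recursive CTE stands in for Oracle's `CONNECT BY` row generator:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT);
INSERT INTO t VALUES (2),(3),(4),(5),(10),(12),(16),(18),(21),(22);
""")

# Generate 1..25 recursively, then subtract the occupied ids.
free = [r[0] for r in conn.execute("""
    WITH RECURSIVE n(n) AS (
      SELECT 1 UNION ALL SELECT n + 1 FROM n WHERE n < 25
    )
    SELECT n FROM n WHERE n NOT IN (SELECT id FROM t) ORDER BY n
""")]
print(free)   # [1, 6, 7, 8, 9, 11, 13, 14, 15, 17, 19, 20, 23, 24, 25]
```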
|
Let's say you have a #numbers table with three numbers -
```
CREATE TABLE #numbers (num INT)
INSERT INTO #numbers (num)
SELECT 1
UNION
SELECT 3
UNION
SELECT 6
```
Now, you can use CTE to generate numbers recursively from 1-25 and deselect those which are in your #numbers table in the WHERE clause -
```
;WITH n(n) AS
(
SELECT 1
UNION ALL
SELECT n+1 FROM n WHERE n < 25
)
SELECT n FROM n
WHERE n NOT IN (select num from #numbers)
ORDER BY n
OPTION (MAXRECURSION 25);
```
|
How can I find unoccupied id numbers in a table?
|
[
"",
"sql",
"oracle11g",
""
] |
I am trying to implement a function with dates as parameters in PostgreSQL 9.3.5.
It should cover all the below conditions and return the results accordingly.
```
select * from tablename
where sid is not null and dob >= '11-14-2014' order by dob;
-- Only Start Date
select * from tablename
where sid is not null and dob <= '11-21-2014' order by dob;
-- Only End Date
select * from tablename
where sid is not null and dob between '11-10-2014' and '11-21-2014' order by dob;
-- Both Start and End Dates are Given
select * from tablename
where sid is not null and dob is null;
-- Both Start and End Dates are Null
```
[**SQL Fiddle**](http://sqlfiddle.com/#!12/4194e/1)
But my above SQL Fiddle is not working as expected.
I want to have a generic implementation so that if tomorrow I have some more `new Date Fields` into the criteria search, then I just can add them in a `where condition`, but not in every line of `IF` Statement.
[**Updated SQL Fiddle**](http://sqlfiddle.com/#!12/137dd/1)
|
This can be ***much simpler still***:
You can simplify the expression from @Clodoaldo's currently accepted answer to:
```
SELECT *
FROM sampletest
WHERE eid IS NOT NULL
AND (_sd IS NULL AND _ed IS NULL OR
_sd IS NULL AND dob <= _ed OR
_ed IS NULL AND dob >= _sd OR
dob BETWEEN _sd AND _ed);
```
* Once you compare `dob >= _sd`, there is no need to check `_sd is not null` additionally.
The expression can only evaluate to `TRUE` with non-null `_sd` anyway.
But none of this is necessary:
```
CREATE OR REPLACE FUNCTION test_dob_dates(_sd date, _ed date)
RETURNS SETOF sampletest AS
$func$
SELECT *
FROM sampletest
WHERE eid IS NOT NULL
AND dob BETWEEN COALESCE(_sd, -infinity) AND COALESCE(_ed, infinity)
$func$ LANGUAGE sql STABLE;
```
* Do not quote the language name. It's an identifier.
* Postgres provides the [special values `-infinity` and `infinity`](http://www.postgresql.org/docs/current/interactive/datatype-datetime.html#AEN5861) for `timestamp` and `date` types. All you need is to default to those values with `COALESCE`.
[**SQL Fiddle**](http://sqlfiddle.com/#!12/0a5de/4) (for pg 9.3).
Aside: I would advise to use standard ISO dates **`'2014-11-10'`** instead of `'11-10-2014'`, which is potentially ambiguous and breaks with a different locale setting. [The manual advises as much.](http://www.postgresql.org/docs/current/interactive/datatype-datetime.html#DATATYPE-DATETIME-DATE-TABLE)
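A runnable sketch of the COALESCE trick (SQLite via Python's `sqlite3`; SQLite has no date `infinity`, so sentinel ISO strings stand in at each end -- an assumption for the demo, not part of the Postgres answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sampletest (eid INT, dob TEXT);
INSERT INTO sampletest VALUES (1,'2014-11-12'),(2,'2014-11-20'),(3,NULL);
""")

def between(sd, ed):
    # NULL bounds fall back to sentinel dates; ISO strings compare correctly.
    return conn.execute("""
        SELECT eid FROM sampletest
        WHERE eid IS NOT NULL
          AND dob BETWEEN COALESCE(?, '0001-01-01')
                      AND COALESCE(?, '9999-12-31')
        ORDER BY eid
    """, (sd, ed)).fetchall()

print(between('2014-11-14', None))   # [(2,)] -- open-ended end date
print(between(None, None))           # [(1,), (2,)] -- NULL dob drops out
```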
|
You have to define the function as returning a `set of` sampletest and then [return different query restults](http://www.postgresql.org/docs/9.3/static/plpgsql-control-structures.html#AEN58217) based on the arguments passed.
```
CREATE FUNCTION
test_dob_dates(_sd date, _ed date)
RETURNS
SETOF sampletest
AS
$BODY$
BEGIN
IF _sd IS NOT NULL AND _ed IS NOT NULL THEN
RETURN QUERY SELECT * FROM sampletest WHERE dob BETWEEN _sd AND _ed;
ELSIF _sd IS NULL AND _ed IS NULL THEN
RETURN QUERY SELECT * FROM sampletest WHERE dob IS NULL;
ELSEIF _ed IS NOT NULL THEN
RETURN QUERY SELECT * FROM sampletest WHERE dob <= _ed;
ELSE
RETURN QUERY SELECT * FROM sampletest WHERE dob >= _sd;
END IF;
END;
$BODY$
LANGUAGE
'plpgsql' VOLATILE;
```
The `eid IS NOT NULL` clause seems redundant as you have already defined `eid` to be a PRIMARY KEY and that means that it can never be NULL.
|
Function with dates as params that can be NULL
|
[
"",
"sql",
"postgresql",
"null",
"date-range",
"postgresql-9.3",
""
] |
I have a table with student ids (sID) and the classes (cCODE) that they're taking.
I'm trying to retrieve the student ids of the student(s) who are taking the highest number of classes, so the query needs to allow for the fact that there might be a tie.
I know the highest number of occurrences of the same sID will indicate the one I'm looking for, so if I was just looking to retrieve the top record I'd go
```
mysql> select sID, count(sID) as NumberOfClasses
-> from STUDENT_CLASSES
-> group by sID
-> order by NumberOfClasses desc
-> limit 1;
+------+-----------------+
| sID | NumberOfClasses |
+------+-----------------+
| 2040 | 3 |
+------+-----------------+
```
I've tried
```
mysql> select sID, count(sID) as NumberOfClasses
-> from STUDENT_CLASSES
-> group by sID
    -> having NumberOfClasses=3;
+------+-----------------+
| sID | NumberOfClasses |
+------+-----------------+
| 2040 | 3 |
| 3040 | 3 |
+------+-----------------+
```
Which works, but only because I know the value of the highest number of classes anyone is taking is 3. What I need is some way of putting a MAX() function after the = that would calculate the highest value in the NumberOfClasses column of the newly created table….
Any thoughts?
This is the original STUDENT\_CLASSES table…
```
mysql> select * from STUDENT_CLASSES;
+------+-------+
| sID | cCODE |
+------+-------+
| 2040 | C101 |
| 2040 | C102 |
| 2040 | C103 |
| 3282 | C101 |
| 3282 | C102 |
| 2113 | C105 |
| 3040 | C102 |
| 3040 | C105 |
| 3040 | C108 |
+------+-------+
```
|
Have you tried combining the two approaches using a subquery?
```
select sID, count(sID) as NumberOfClasses
from STUDENT_CLASSES
group by sID
having count(sID) = (select count(sID) N from STUDENT_CLASSES group by sID order by N desc limit 1)
```
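A runnable sketch of the tie-safe version (SQLite via Python's `sqlite3`, with the sample rows from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sc (sID INT, cCODE TEXT);
INSERT INTO sc VALUES (2040,'C101'),(2040,'C102'),(2040,'C103'),
  (3282,'C101'),(3282,'C102'),(2113,'C105'),
  (3040,'C102'),(3040,'C105'),(3040,'C108');
""")

# The subquery pins the highest per-student count; HAVING keeps all ties.
rows = conn.execute("""
    SELECT sID, COUNT(sID) AS NumberOfClasses
    FROM sc
    GROUP BY sID
    HAVING COUNT(sID) = (SELECT COUNT(sID) AS c FROM sc
                         GROUP BY sID ORDER BY c DESC LIMIT 1)
    ORDER BY sID
""").fetchall()
print(rows)   # [(2040, 3), (3040, 3)]
```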
|
This is a bit of a pain in MySQL with an aggregation query. I would suggest just using variables
```
select sID, NumberOfClasses
from (select sID, count(sID) as NumberOfClasses,
@max := if(count(sID) > @max, count(sID), @max) as unused
from STUDENT_CLASSES cross join
(select @max := -1) vars
group by sID
) t
where NumberOfClasses = @max;
```
EDIT:
Okay, the correct and tested answer is:
```
select sID, NumberOfClasses
from (select t.*,
@max := if(NumberOfClasses > @max, NumberOfClasses, @max) as unused
from (select sID, count(sID) as NumberOfClasses
from STUDENT_CLASSES cross join
(select @max := -1) vars
group by sID
) t
) t
where NumberOfClasses = @max;
```
The SQL Fiddle is [here](http://www.sqlfiddle.com/#!2/cb5d8d/6).
In other words, the aggregation needs to be done first, and then the variable assignment. This is for an arcane reason, involving the order of processing of aggregations. If only MySQL supported window functions, this would be much easier, more standard, and compatible with other databases.
|
MySQL; Return most frequent occurrence(s) where there may be more than one
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I need to get the minimum count of data. My current SQL is
```
select min(count(*)) from table group by id ;
```
where I expect to get the minimum of the per-group count(\*). The above query is not allowed; the error
```
aggregate function calls cannot be nested
```
is thrown. Is there a way to achieve this using an alternative method?
|
Many solutions exist as attested by the number of answers here.
Interestingly, @ahorse\_with\_no\_name questions the performance of each of them.
### The subquery approach
```
SELECT MIN(count) FROM (SELECT COUNT(*) FROM table GROUP BY id) t;
```
It typically yields the following plan:
```
Aggregate
-> HashAggregate
-> Seq Scan on table
```
### The ORDER/LIMIT approach
```
SELECT COUNT(*) FROM table GROUP BY id ORDER BY 1 LIMIT 1;
```
It feels more natural to some but unfortunately yields a second sort:
```
Limit
-> Sort
Sort Key: (count(*))
-> HashAggregate
-> Seq Scan on table
```
### The CTE approach
```
WITH cte AS (SELECT count(*) FROM table GROUP BY id) SELECT MIN(count) FROM cte;
```
It is very similar to the subquery, except that the plan shows the CTE is scanned (and could be materialized if the table is large).
```
Aggregate
CTE cte
-> HashAggregate
-> Seq Scan on table
-> CTE Scan on cte
```
### The window aggregate function approach
Alternatively, you could use a window aggregate function combined with LIMIT and just avoid the second sort.
```
SELECT MIN(COUNT(*)) OVER () FROM table GROUP BY id LIMIT 1;
```
It yields a plan equivalent to the subquery approach (if we consider `LIMIT 1` as nearly free).
```
Limit
-> WindowAgg
-> HashAggregate
-> Seq Scan on table
```
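A minimal runnable sketch of the derived-table form (SQLite via Python's `sqlite3`; the ids are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT);
INSERT INTO t VALUES (1),(1),(1),(2),(2),(3);
""")

# Aggregate per id first, then take MIN over the derived table.
(min_count,) = conn.execute("""
    SELECT MIN(cnt)
    FROM (SELECT COUNT(*) AS cnt FROM t GROUP BY id)
""").fetchone()
print(min_count)   # 1 -- id 3 appears only once
```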
|
You need to wrap that in a sub-select:
```
select min(cnt)
from (
select id, count(*) as cnt
from the_table
group by id
) t
;
```
|
PostgreSQL get minimum of count(*)?
|
[
"",
"sql",
"postgresql",
""
] |
So I've got a question that shouldn't be too hard to answer, but for whatever reason, I'm just not able to get it. I need to sum the costs of two different tables. I've tried doing one long join, but the numbers themselves come out wrong. The only way I can get the correct numbers is by making two queries and summing them together. However, I want them to display under one ID.
```
SELECT s.storeId, s.street, s.city, s.state, s.zipcode, SUM(p.cost)
FROM store s
JOIN video v ON s.storeId=v.storeId
JOIN previousrental p ON v.videoid=p.videoid
GROUP BY s.storeId
UNION
SELECT s.storeId, s.street, s.city, s.state, s.zipcode, SUM(r.cost)
FROM store s
JOIN video v ON s.storeId=v.storeId
JOIN rental r ON v.videoid=r.videoid
GROUP BY s.storeId
```
|
Try this:
```
SELECT s.storeId, s.street, s.city, s.state, s.zipcode, SUM(p.cost)
FROM store s
INNER JOIN video v ON s.storeId=v.storeId
INNER JOIN (SELECT p.videoid, SUM(p.cost) cost
FROM previousrental p
GROUP BY p.videoid
UNION
SELECT r.videoid, SUM(r.cost) cost
FROM rental r
GROUP BY r.videoid
) AS p ON v.videoid=p.videoid
GROUP BY s.storeId;
```
**OR**
```
SELECT s.storeId, s.street, s.city, s.state, s.zipcode,
       SUM(ISNULL(p.cost, 0) + ISNULL(r.cost, 0))
FROM store s
INNER JOIN video v ON s.storeId = v.storeId
LEFT OUTER JOIN (SELECT videoid, SUM(cost) cost FROM previousrental GROUP BY videoid) p ON v.videoid = p.videoid
LEFT OUTER JOIN (SELECT videoid, SUM(cost) cost FROM rental GROUP BY videoid) r ON v.videoid = r.videoid
GROUP BY s.storeId;
```
|
One option would be to put the results in a subquery:
```
SELECT
storeId, street, city, state, zipcode, SUM(cost)
FROM
(SELECT s.storeId, s.street, s.city, s.state, s.zipcode, SUM(p.cost) cost
FROM store s
JOIN video v ON s.storeId = v.storeId
JOIN previousrental p ON v.videoid = p.videoid
GROUP BY s.storeId
UNION
SELECT s.storeId, s.street, s.city, s.state, s.zipcode, SUM(r.cost)
FROM store s
JOIN video v ON s.storeId = v.storeId
JOIN rental r ON v.videoid = r.videoid
GROUP BY s.storeId
) T
GROUP BY storeId
```
|
Having Trouble Summing Two Parts of Union in MySQL
|
[
"",
"mysql",
"sql",
"select",
"join",
"union",
""
] |
I have the following tables, Bankdetails and Trandetails. Using these two tables, I want to get the current balance for each account.
Tables:
```
Create table Bankdetails
(
AccName varchar(50),
AccNo int,
OpBal numeric(18,2)
)
Create table Trandetails
(
AccNo int,
Amount numeric(18,2),
Trantype varchar(10)
)
```
Insert scripts for both tables:
```
insert into Bankdetails values('A', 12345, 30000.00)
insert into Bankdetails values('B', 13345, 30000.00)
insert into Bankdetails values('C', 14545, 30000.00)
insert into Bankdetails values('D', 15045, 30000.00)
insert into Trandetails values(12345, 5000.00, 'Credit')
insert into Trandetails values(13345, 5000.00, 'Debit')
insert into Trandetails values(15045, 5000.00, 'Debit')
insert into Trandetails values(13345, 5000.00, 'Credit')
insert into Trandetails values(12345, 5000.00, 'Debit')
insert into Trandetails values(13345, 5000.00, 'Debit')
insert into Trandetails values(14545, 5000.00, 'Credit')
insert into Trandetails values(15045, 5000.00, 'Debit')
insert into Trandetails values(14545, 5000.00, 'Debit')
```
Output would be like this:
```
AccName Accno CurrBal
A 12345 30000.00
B 13345 25000.00
C 14545 30000.00
D 15045 20000.00
```
I need Account Holdername, Account No and current balance using the above two tables.
Below is my query, I want to get optimized query i.e without using subqueries if possible.
**Note:** In my case, credit = amount added into the account and debit = amount subtracted from the account.
```
Select bd.accname, bd.accno,
(bd.opbal - isnull((select SUM(Amount) from Trandetails where Trantype = 'Debit' and accno = bd.accno group by accno),0) + isnull((select SUM(Amount) from Trandetails where Trantype = 'Credit' and accno = bd.accno group by accno),0)) as Bal
From Bankdetails BD inner join Trandetails TD on td.AccNo = bd.AccNo
group by bd.accno, bd.accname, bd.opbal
```
My apologies for not following proper naming conventions for tables. Any help will be appreciated.
Thanks,
Paresh J
|
The idea is to generate first the sum for each transaction type, `Debit` and `Credit`. With that, join it to the `Bankdetails` to compute for the current balance.
```
;with cte as(
select
AccNo,
Credit = sum(case when TranType = 'Credit' then Amount else 0 end),
Debit = sum(case when TranType = 'Debit' then Amount else 0 end)
from Trandetails
group by
AccNo
)
select
bd.AccName,
bd.AccNo,
CurrBal = bd.opBal - c.Debit + c.Credit
from BankDetails bd
inner join cte c
on c.Accno = bd.Accno
```
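The conditional-aggregation CTE above can be verified end to end against an in-memory SQLite copy of the sample tables (SQLite used here only as a convenient stand-in for SQL Server):

```python
import sqlite3

# Sum Credit and Debit per account in one pass, then join to opening balances.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Bankdetails (AccName TEXT, AccNo INT, OpBal REAL);
CREATE TABLE Trandetails (AccNo INT, Amount REAL, Trantype TEXT);
INSERT INTO Bankdetails VALUES ('A',12345,30000),('B',13345,30000),
                               ('C',14545,30000),('D',15045,30000);
INSERT INTO Trandetails VALUES (12345,5000,'Credit'),(13345,5000,'Debit'),
 (15045,5000,'Debit'),(13345,5000,'Credit'),(12345,5000,'Debit'),
 (13345,5000,'Debit'),(14545,5000,'Credit'),(15045,5000,'Debit'),
 (14545,5000,'Debit');
""")
rows = conn.execute("""
WITH t AS (
  SELECT AccNo,
         SUM(CASE WHEN Trantype='Credit' THEN Amount ELSE 0 END) AS Credit,
         SUM(CASE WHEN Trantype='Debit'  THEN Amount ELSE 0 END) AS Debit
  FROM Trandetails GROUP BY AccNo)
SELECT b.AccName, b.AccNo, b.OpBal - t.Debit + t.Credit AS CurrBal
FROM Bankdetails b JOIN t ON t.AccNo = b.AccNo
ORDER BY b.AccName""").fetchall()
```

The balances match the expected output in the question (30000, 25000, 30000, 20000).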
|
For those that care about performance, here goes:
```
Select B.AccName
,B.AccNo
,(B.OpBal + SUM( Case When TranType = 'Credit' Then Amount Else Amount* -1 End)) CurrBal
From Bankdetails B
Left Join Trandetails T
On B.AccNo = T.AccNo
Group By B.AccName, B.AccNo, B.OpBal;
```
**[SQL Fiddle](http://sqlfiddle.com/#!18/bba7d/1/0)**
|
Optimizing the SQL Query for Calculating account balance
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Just thought of learning SQL on my own and started with
**[THIS](http://sqlzoo.net/wiki/SELECT_from_Nobel_Tutorial)**
I was practicing some tutorial and got stuck in one basic query.
After **trying a lot** I could not get it running.
Question is: in which years was the Physics prize awarded but no Chemistry prize.
Here is what I tried
```
SELECT yr
FROM nobel
WHERE subject ='Physics' AND
NOT (Select yr from nobel where subject = 'Chemistry')
```
Can someone help.
|
So close! You just needed `yr` and `IN`:
```
SELECT yr
FROM nobel
WHERE subject ='Physics' AND
yr NOT IN (Select yr from nobel where subject = 'Chemistry')
```
You could also do this with an exclusion join:
```
SELECT yr
FROM nobel n1
LEFT JOIN nobel n2 ON n1.yr = n2.yr AND n2.subject = 'Chemistry'
WHERE n1.subject = 'Physics' AND n2.subject is NULL
```
or a `NOT EXISTS`
```
SELECT yr
FROM nobel n1
WHERE n1.subject ='Physics'
AND NOT EXISTS
(
SELECT NULL
FROM nobel n2
WHERE n2.subject = 'Chemistry' AND n2.yr=n1.yr
)
```
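A quick sanity check that the `NOT IN` and `NOT EXISTS` forms agree, run against a small made-up `nobel` table in SQLite:

```python
import sqlite3

# Years with a Physics prize but no Chemistry prize: 1981 and 1983 here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nobel (yr INT, subject TEXT, winner TEXT);
INSERT INTO nobel VALUES
 (1980,'Physics','P1'),(1980,'Chemistry','C1'),
 (1981,'Physics','P2'),
 (1982,'Chemistry','C2'),(1983,'Physics','P3');
""")
not_in = conn.execute("""
SELECT DISTINCT yr FROM nobel
WHERE subject='Physics'
  AND yr NOT IN (SELECT yr FROM nobel WHERE subject='Chemistry')
ORDER BY yr""").fetchall()
not_exists = conn.execute("""
SELECT DISTINCT yr FROM nobel n1
WHERE n1.subject='Physics'
  AND NOT EXISTS (SELECT 1 FROM nobel n2
                  WHERE n2.subject='Chemistry' AND n2.yr=n1.yr)
ORDER BY yr""").fetchall()
```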
|
```
SELECT yr
FROM nobel
WHERE subject ='Physics' AND
yr NOT IN (Select yr from nobel where subject = 'Chemistry')
```
|
Simple SELECT SQL query not working
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to declare local variable like:
```
DECLARE @thresholdDate DATETIME = '2014-11-30'
```
And I am getting error:
> Cannot assign a default value to a local variable.
As per [documentation](http://msdn.microsoft.com/en-us/library/ms188927.aspx):
```
DECLARE @find varchar(30);
/* Also allowed:
DECLARE @find varchar(30) = 'Man%';
*/
```
What I am doing wrong?
|
Prior to SQL Server 2008, assigning a default value (or initial value) to a local variable is not allowed; otherwise this error message will be encountered.
**Solution 1:** (Use `SET`)
```
DECLARE @thresholdDate DATETIME
set @thresholdDate = '2014-11-30'
```
For more details about the error : <http://www.sql-server-helper.com/error-messages/msg-139.aspx>
**Solution 2:** (**Upgrade**)
Another way of avoiding this error, which is a little bit a far-fetched solution, is to upgrade to SQL Server 2008. SQL Server 2008 now allows the assigning of a value to a variable in the DECLARE statement.
|
You will hit this error
> Cannot assign a default value to a local variable
in case you are running SQL Server 2005 or earlier, and you try to declare a variable and assign it a value in one statement.
Something like -
```
DECLARE @Var Varchar(15) = 'Test'
```
> Message 139, Level 15, State 1, Line 0
> Cannot assign a default value to a local variable
Variable declaration has been enhanced in SQL Server 2008 and newer versions, and I realized this today when I was deploying code for multiple systems.

Apparently we also have the flexibility to just use one Declare statement to declare multiple variables -
```
DECLARE @Var Varchar(15) = 'Test',
@Char Varchar(10) = 'Test2',
@Char2 Varchar(10) = 'Test3'
```
In SQL Server 2005 or earlier, you will need to declare the variable and then assign it a value using Set statement.
Example -
```
DECLARE @Var Varchar(15)
SET @Var = 'Test'
```
|
Cannot assign a default value to a local variable in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"default-value",
"declare",
""
] |
I'm using mysql and I've got a table similar to this one:
```
id | user | task | time | checkout
----+-------+------+-----------------------+---------
1 | 1 | 1 | 2014-11-25 17:00:00 | 0
2 | 2 | 2 | 2014-11-25 17:00:00 | 0
3 | 1 | 1 | 2014-11-25 18:00:00 | 1
4 | 1 | 2 | 2014-11-25 19:00:00 | 0
5 | 2 | 2 | 2014-11-25 20:00:00 | 1
6 | 1 | 2 | 2014-11-25 21:00:00 | 1
7 | 1 | 1 | 2014-11-25 21:00:00 | 0
8 | 1 | 1 | 2014-11-25 22:00:00 | 1
```
id is just an autogenerated primary key, and checkout is 0 if that row registered a user checking in and 1 if the user was checking out from the task.
I would like to know how to make a query that returns how much time has a user spent at each task, that is to say, I want to know the sum of the time differences between the checkout=0 time and the nearest checkout=1 time for each user and task.
Edit: to make things clearer, the results I'd expect from my query would be:
```
user | task | SUM(timedifference)
------+------+-----------------
1 | 1 | 02:00:00
1 | 2 | 02:00:00
2 | 2 | 03:00:00
```
I have tried using `SUM(UNIX_TIMESTAMP(time) - UNIX_TIMESTAMP(time))`, while grouping by user and task to figure out how much time had elapsed, but I don't know how to make the query only sum the differences between the particular times I want instead of all of them.
Can anybody help? Is this at all possible?
|
As all the comments tell you, your current table structure is not ideal. However, it's still possible to pair checkins with checkouts. This is a SQL Server implementation, but I am sure you can translate it to MySQL:
```
SELECT id
, user_id
, task
, minutes_per_each_task_instance = DATEDIFF(minute, time, (
SELECT TOP 1 time
FROM test AS checkout
WHERE checkin.user_id = checkout.user_id
AND checkin.task = checkout.task
AND checkin.id < checkout.id
AND checkout.checkout = 1
))
FROM test AS checkin
WHERE checkin.checkout = 0
```
The above code works, but it will become slower and slower as your table grows. After a couple of hundred thousand rows the slowdown will become noticeable.
I suggest renaming `time` column to `checkin` and instead of having `checkout` boolean field make it datetime, and update record when user checkouts. That way you will have half the number of records and no complex logic for reading or querying
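The pairing idea translates to other databases as well; here is a sketch in SQLite (`LIMIT 1` in place of `TOP 1`, `julianday` for the time difference), run on the sample rows from the question:

```python
import sqlite3

# For each checkin row, find the next checkout for the same user/task and
# sum the elapsed hours per (user, task).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (id INTEGER PRIMARY KEY, user_id INT, task INT,
                   time TEXT, checkout INT);
INSERT INTO test (user_id, task, time, checkout) VALUES
 (1,1,'2014-11-25 17:00:00',0),(2,2,'2014-11-25 17:00:00',0),
 (1,1,'2014-11-25 18:00:00',1),(1,2,'2014-11-25 19:00:00',0),
 (2,2,'2014-11-25 20:00:00',1),(1,2,'2014-11-25 21:00:00',1),
 (1,1,'2014-11-25 21:00:00',0),(1,1,'2014-11-25 22:00:00',1);
""")
rows = conn.execute("""
SELECT ci.user_id, ci.task,
       SUM((julianday((SELECT co.time FROM test co
                       WHERE co.user_id=ci.user_id AND co.task=ci.task
                         AND co.id>ci.id AND co.checkout=1
                       ORDER BY co.id LIMIT 1))
            - julianday(ci.time)) * 24) AS hours
FROM test ci
WHERE ci.checkout=0
GROUP BY ci.user_id, ci.task
ORDER BY ci.user_id, ci.task""").fetchall()
```

This reproduces the expected totals: 2h for user 1 / task 1, 2h for user 1 / task 2, and 3h for user 2 / task 2.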
|
You can determine with a ranking method what are the matching check in/ check out records, and calculate time differences between them
In my example new\_table is the name of your table
```
SELECT n.user, n.task,n.time, n.checkout ,
CASE WHEN @prev_user = n.user
AND @prev_task = n.task
AND @prev_checkout = 0
AND n.checkout = 1
AND @prev_time IS NOT NULL
THEN HOUR(TIMEDIFF(n.time, @prev_time)) END AS timediff,
@prev_time := n.time,
@prev_user := n.user,
@prev_task := n.task,
@prev_checkout := n.checkout
FROM new_table n,
(SELECT @prev_user := 0, @prev_task := 0, @prev_checkout := 0, @prev_time := NULL) a
ORDER BY user, task, `time`
```
Then sum the time differences (timediff) by wrapping it in another select
```
SELECT x.user, x.task, sum(x.timediff) as total
FROM (
SELECT n.user, n.task,n.time, n.checkout ,
CASE WHEN @prev_user = n.user
AND @prev_task = n.task
AND @prev_checkout = 0
AND n.checkout = 1
AND @prev_time IS NOT NULL
THEN HOUR(TIMEDIFF(n.time, @prev_time)) END AS timediff,
@prev_time := n.time,
@prev_user := n.user,
@prev_task := n.task,
@prev_checkout := n.checkout
FROM new_table n,
(SELECT @prev_user := 0, @prev_task := 0, @prev_checkout := 0, @prev_time := NULL) a
ORDER BY user, task, `time`
) x
GROUP BY x.user, x.task
```
It would probably be easier to understand by changing the table structure though. If that is at all possible. Then the SQL wouldn't have to be so complicated and would be more efficient. But to answer your question it is possible. :)
In the above examples, names prefixed with '@' are MySQL variables, you can use the ':=' to set a variable to a value. Cool stuff ay?
|
How can I make an SQL query that returns time differences between checkins and checkouts?
|
[
"",
"mysql",
"sql",
""
] |
I have a MySQL table data that looks like below
```
+--------------------------+
| period |
+--------------------------+
| 2014-11-27 to 2014-11-28 |
| 2014-11-09 to 2014-11-09 |
| 2014-11-07 to 2014-11-07 |
| 2014-11-06 to 2014-11-06 |
| 2014-11-04 to 2014-11-04 |
| 2014-11-02 to 2014-11-02 |
| 2014-11-01 to 2014-11-02 |
| 2014-10-24 to 2014-10-24 |
| 2014-10-23 to 2014-10-24 |
| 2014-10-23 to 2014-10-24 |
| 2014-10-23 to 2014-10-23 |
| 2014-10-23 to 2014-10-23 |
| 2014-10-20 to 2014-10-20 |
| 2014-10-18 to 2014-10-18 |
| 2014-10-17 to 2014-10-17 |
| 2014-10-13 to 2014-10-13 |
| 2014-10-13 to 2014-10-13 |
| 2014-10-09 to 2014-10-10 |
| 2014-10-06 to 2014-10-07 |
| 2014-10-01 to 2014-10-10 |
| 2014-09-30 to 2014-09-30 |
| 2014-09-24 to 2014-09-24 |
| 2014-09-12 to 2014-09-12 |
| 2014-09-12 to 2014-09-12 |
| 2014-09-08 to 2014-09-09 |
| 2014-09-08 to 2014-09-08 |
| 2014-09-08 to 2014-09-08 |
| 2014-09-01 to 2014-09-01 |
| 2014-09-01 to 2014-09-01 |
+--------------------------+
```
Here I would like to get the details between two dates like (`2014-09-01 to 2014-09-30`)
or (`2014-09-01 to 2014-10-30`) or (`2014-09-01 to 2014-11-30`), and it's a **varchar** column. Is there any possibility to get results between two dates?
|
I think the query below is the right one:
```
select period from mysql_common.leave_details
where (period like '%-11-%' or period like '%-10-%' or period like '%-09-%')
order by period desc;
```
|
First thing, you should really normalize your table and store the dates with MySQL's native date data types, and make sure you store them in two columns `start_date` and `end_date`. This will make your life easy.
Now coming back to the current situation there is a way you can do it, first extract the start and end date from the varchar string using `substring_index` function and finally use them in the condition using `having` clause.
```
select
str_to_date(substring_index(period,'to',1),'%Y-%m-%d') as start_date,
str_to_date(substring_index(period,'to',-1),'%Y-%m-%d') as end_date,
period
from table_name
having start_date >='2014-09-01' and end_date <='2014-09-30';
```
|
MySQL, split column data and get result by splitted data
|
[
"",
"mysql",
"sql",
"date",
"select",
"between",
""
] |
In MySQL, I have a two tables as below :
```
ClientTable
clientID clientName
1 Client A
2 Client B
3 Client C
4 Client D
5 Client E
6 Client F
NotesTable
noteID clientID note noteDate
1 3 Test 1 12-Jun-14
2 3 Test 2 18-Aug-14
3 4 Test 3 23-Oct-14
4 6 Test 4 25-May-14
5 3 Test 5 25-Nov-14
6 6 Test 6 16-Jul-14
```
I want to select **all** the clients from the client table and, where a note exists for the client, the date of the **latest** note entry. If no note exists for a client, then return null for the noteDate. Desired result set as follows :
```
client ID clientName latestNoteDate
1 Client A null
2 Client B null
3 Client C 25-Nov-14
4 Client D 23-Oct-14
5 Client E null
6 Client F 16-Jul-14
```
Any help appreciated, I have tried a few options using nested Select with MAX(noteDate) and various left joins but can't seem to get it right.
|
Why all the subqueries?
```
select ct.clientID, ct.clientName,max(nt.noteDate) latestNoteDate
from ClientTable ct
left outer join NotesTable nt
on ct.clientID = nt.clientID
group by ct.clientID, ct.clientName
```
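The `LEFT JOIN` + `MAX()` approach above is easy to confirm on the question's sample data; a sketch in SQLite:

```python
import sqlite3

# Clients with no notes still appear, with NULL as their latest note date.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ClientTable (clientID INT, clientName TEXT);
CREATE TABLE NotesTable (noteID INT, clientID INT, note TEXT, noteDate TEXT);
INSERT INTO ClientTable VALUES (1,'Client A'),(2,'Client B'),(3,'Client C'),
                               (4,'Client D'),(5,'Client E'),(6,'Client F');
INSERT INTO NotesTable VALUES
 (1,3,'Test 1','2014-06-12'),(2,3,'Test 2','2014-08-18'),
 (3,4,'Test 3','2014-10-23'),(4,6,'Test 4','2014-05-25'),
 (5,3,'Test 5','2014-11-25'),(6,6,'Test 6','2014-07-16');
""")
rows = conn.execute("""
SELECT c.clientID, c.clientName, MAX(n.noteDate) AS latestNoteDate
FROM ClientTable c
LEFT JOIN NotesTable n ON c.clientID = n.clientID
GROUP BY c.clientID, c.clientName
ORDER BY c.clientID""").fetchall()
```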
|
You can use an `outer join` with a subquery:
```
select c.clientid, c.clientname, n.latestnotedate
from client c
left join (
select clientId, max(noteDate) latestnotedate
from notes
group by clientId
) n on c.clientId = n.clientId
```
This assumes the `max(noteDate)` is the **latest** note entry. If that's not the case, easy enough to use the `noteid` instead and then just include one additional join.
|
SQL SELECT statement MAX date and table joins
|
[
"",
"mysql",
"sql",
"select",
"left-join",
""
] |
I want to get the missing protocol Numbers from this list for each section
I have my list
```
ProtocolNumber Section
--------------------------------
14A1000014 | A1
14A1000015 | A1
14A1000018 | A1
14A1000019 | A1
14A2000014 | A2
14A2000015 | A2
14A2000019 | A2
```
I try this
```
SELECT lb1.ProtocolNumber, lb1.Section FROM #tmp lb1
WHERE not exists ( SELECT * FROM #tmp lb2
WHERE lb2.ProtocolNumber = lb1.ProtocolNumber + 1 and lb2.Section = lb1.Section)
```
The output should be like this
```
ProtocolNumber Section
--------------------------------
14A1000016 | A1
14A1000017 | A1
14A2000016 | A2
14A2000017 | A2
14A2000018 | A2
```
|
With the assumption that you are trying to generate a list of missing protocol numbers between the minimum and maximum range currently existing for that section, I'd suggest the following:
```
/*Sample Data*/
CREATE TABLE #tmp (ProtocolNumber VARCHAR(20), Section VARCHAR(2))
INSERT INTO #tmp (ProtocolNumber, Section) SELECT'14A1000014', 'A1'
INSERT INTO #tmp (ProtocolNumber, Section) SELECT'14A1000015', 'A1'
INSERT INTO #tmp (ProtocolNumber, Section) SELECT'14A1000018', 'A1'
INSERT INTO #tmp (ProtocolNumber, Section) SELECT'14A1000019', 'A1'
INSERT INTO #tmp (ProtocolNumber, Section) SELECT'14A2000014', 'A2'
INSERT INTO #tmp (ProtocolNumber, Section) SELECT'14A2000015', 'A2'
INSERT INTO #tmp (ProtocolNumber, Section) SELECT'14A2000019', 'A2'
/*CTEs to generate numbers list: 1 through 1,000,000*/
;WITH
E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) s(N)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
E5(N) AS (SELECT 1 FROM E4 a, E2 b), --1,000,000 rows max
cteTally(N) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E5 ),
/*CTE to identify ranges of current numbers for each Section*/
Ranges AS
(
SELECT
Section,
MIN(CAST(SUBSTRING(ProtocolNumber, 5,6) AS INT)) MinNumber,
MAX(CAST(SUBSTRING(ProtocolNumber, 5,6) AS INT)) MaxNumber
FROM
#tmp
GROUP BY Section
),
/*CTE to generate full list of available protocols for each Section*/
ProtocolList AS
(
SELECT DISTINCT
Section,
'14' + Section + RIGHT('00000' + CAST(N AS VARCHAR(6)),6) AS ProtocolNumber
FROM Ranges
INNER JOIN
cteTally ON
cteTally.N >= Ranges.MinNumber AND
cteTally.N <= Ranges.MaxNumber
)
/*Final SELECT - protocols in the master list that do not exist for those sections in the temp table*/
SELECT l.ProtocolNumber, l.Section
FROM
ProtocolList l
LEFT JOIN
#tmp t ON
l.ProtocolNumber = t.ProtocolNumber
WHERE t.ProtocolNumber IS NULL
ORDER BY
l.Section,
l.ProtocolNumber
DROP TABLE #tmp
```
|
Does it have to be a one-line select statement?
How about writing a stored procedure or table-valued function that uses a cursor to iterate through your table and create the missing records, which the function then returns?
The implementation would also need to check the max protocol number for each given section so that you only create records within range.
|
Get the missing numbers for each section
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to repeat the first row's first column value in every row, in a particular column, like a group. Please let me know how to do it.
```
Table
Id - Title
1 - ABC1
2 - ABC2
3 - ABC3
4 - ABC4
5 - ABC5
```
I want a result like this from a simple select query. Please don't use a variable for it (i.e. don't store the first row's value in a variable).
Resultant Table
```
Table
Id - Title - Group
1 - ABC1 - 1
2 - ABC2 - 1
3 - ABC3 - 1
4 - ABC4 - 1
5 - ABC5 - 1
```
|
This query will be helpful to get expected result.
```
SELECT Id, Title, (SELECT TOP 1 Id FROm Table ORDER BY ID) AS [Group] FROM Table
```
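The same first-row scalar subquery works elsewhere with `LIMIT 1` in place of SQL Server's `TOP 1`; a sketch in SQLite (table name `t` is illustrative):

```python
import sqlite3

# Every row gets the smallest Id in the table as its Group value.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (Id INT, Title TEXT);
INSERT INTO t VALUES (1,'ABC1'),(2,'ABC2'),(3,'ABC3'),(4,'ABC4'),(5,'ABC5');
""")
rows = conn.execute("""
SELECT Id, Title,
       (SELECT Id FROM t ORDER BY Id ASC LIMIT 1) AS grp
FROM t ORDER BY Id""").fetchall()
```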
|
Use the query below. (`ORDER BY ID ASC` will make sure that the first ID is always selected, even if the order of IDs changes.)
```
SELECT Id, Title,
(SELECT TOP 1 ID FROM Table ORDER BY ID ASC)
AS [Group] FROM Table
```
|
How to make a group with multiple column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2012",
""
] |
I'm trying to select persons who have an account at all branches of the city.
(With a SQL query)
```
SELECT A.customId
FROM accountholder as A
WHERE NOT EXISTS (
(SELECT name
FROM branch
WHERE city='LA')
EXCEPT (SELECT C.branch
FROM accountholder AS B, account AS C
WHERE B.accountnumber = C.accountnumber
AND A.customId = B.customId));
```
Now I got:
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'EXCEPT (SELECT C.branch FROM accountholder AS B, account AS C WHERE B.accountnumber=' at line 1
And I do not see the problem. Am I blind or just stupid?
Thanks for help.
|
As others have stated, MySQL does not support the `EXCEPT` operation. You will have to rewrite your query.
You appear to be querying accountholders if there is no other branch in LA besides those branches at which the accountholder's accounts are held.
Here's a rough guess:
```
SELECT A.customId
FROM accountholder AS A
JOIN account AS C
ON A.accountnumber = C.accountnumber
LEFT OUTER JOIN branch AS B
ON C.branch <> B.name AND B.city = 'LA'
WHERE B.city IS NULL;
```
I'm making some assumptions about your tables and columns and their relationships, so this query is just a guess. *Do not just run it blindly and expect it to work.* I ask you to use it as an example and confirm that the comparisons are being done correctly for your data.
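Since MySQL lacks `EXCEPT`, another standard way to express "has an account at *every* LA branch" is double `NOT EXISTS` (relational division). A sketch in SQLite with invented sample data and the same assumed table shapes:

```python
import sqlite3

# Customer 1 holds accounts at both LA branches; customer 2 at only one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE branch (name TEXT, city TEXT);
CREATE TABLE account (accountnumber INT, branch TEXT);
CREATE TABLE accountholder (customId INT, accountnumber INT);
INSERT INTO branch VALUES ('Downtown','LA'),('Uptown','LA'),('Central','NY');
INSERT INTO account VALUES (100,'Downtown'),(101,'Uptown'),(102,'Downtown');
INSERT INTO accountholder VALUES (1,100),(1,101),(2,102);
""")
rows = conn.execute("""
SELECT DISTINCT a.customId
FROM accountholder a
WHERE NOT EXISTS (
  SELECT 1 FROM branch b
  WHERE b.city='LA'
    AND NOT EXISTS (
      SELECT 1 FROM accountholder ah
      JOIN account ac ON ah.accountnumber = ac.accountnumber
      WHERE ah.customId = a.customId AND ac.branch = b.name))""").fetchall()
```

Reading: keep customers for whom no LA branch exists that they do *not* have an account at.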
|
MySQL does not use `EXCEPT`. Use [`NOT IN`](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_not-in).
```
SELECT A.customId
FROM accountholder as A
WHERE branch NOT IN (
(SELECT name FROM branch WHERE city='LA')
AND branch NOT IN (SELECT C.branch FROM accountholder AS B, account AS C WHERE B.accountnumber = C.accountnumber AND A.customId = B.customId));
```
|
SQL Error in Syntax near except
|
[
"",
"mysql",
"sql",
"database",
"mysql-error-1064",
""
] |
I am trying to calculate an average number from three columns but only include the column in the calculation if column is not null and is bigger than 0;
for example the average usually is
```
(column1 + column2 + column3) / 3
```
but if column3 is null or 0 then it will be
```
(column1 + column2 + column3) / 2 or (column1 + column2 ) / 2
```
I have this so far, but it is not complete. The average is wrong when one of the columns is 0 (0 is the default):
```
SELECT movie.title,
movie.imdbrating,
movie.metacritic,
tomato.rating,
((imdbrating + metacritic + tomato.rating)/3) as average
FROM movie, tomato
WHERE movie.imdbid = tomato.imdbid
```
How can I implement this?
|
I'm fixing the rest of the query to use table aliases and proper `join` syntax. But the `case` statements are what you really need:
```
SELECT m.title, m.imdbrating, m.metacritic,
t.rating,
((case when imdbrating > 0 then imdbrating else 0 end) +
(case when metacritic > 0 then metacritic else 0 end) +
        (case when t.rating > 0 then t.rating else 0 end)
       ) / nullif(coalesce(imdbrating > 0, 0) + coalesce(metacritic > 0, 0) + coalesce(t.rating > 0, 0), 0)
FROM movie m JOIN
tomato t
ON m.imdbid = t.imdbid;
```
The denominator is using a convenient MySQL extension where booleans are treated as 0 or 1 in a numeric context. The `nullif()` returns `NULL` if no rating meets the conditions. And, the `> 0` is is not true for `NULL` values.
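SQLite treats comparisons as 0/1 in numeric context the same way, so the technique can be sketched there with made-up ratings in a single table (column names follow the question):

```python
import sqlite3

# Average only the ratings that are non-NULL and > 0; all-zero rows get NULL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ratings (title TEXT, imdbrating REAL, metacritic REAL, tomato REAL);
INSERT INTO ratings VALUES ('Film A', 8.0, 6.0, 0),
                           ('Film B', 7.0, NULL, 9.0),
                           ('Film C', 0, 0, 0);
""")
rows = conn.execute("""
SELECT title,
  (CASE WHEN imdbrating > 0 THEN imdbrating ELSE 0 END
 + CASE WHEN metacritic > 0 THEN metacritic ELSE 0 END
 + CASE WHEN tomato     > 0 THEN tomato     ELSE 0 END)
 / NULLIF(COALESCE(imdbrating > 0, 0)
        + COALESCE(metacritic > 0, 0)
        + COALESCE(tomato     > 0, 0), 0) AS average
FROM ratings ORDER BY title""").fetchall()
```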
|
Try this:
```
SELECT (IF Column3 IS NULL OR Column3=0, AVG(Column1+Column2), Avg(Column1+Column2+ Column3)) as Result FROM Table
```
**EDIT:**
If any of the tree columns can be nullm, try this:
```
SELECT AVG(IF(Column1 IS NULL,0,Column1)+IF(Column2 IS NULL,0,Column2)+IF(Column3 IS NULL,0,Column3)) as Result FROM Table
```
|
Calculating an average number from three columns
|
[
"",
"mysql",
"sql",
""
] |
I have a table Jobs with 2 columns, JobId and City. When we save a job, the job location may span multiple cities, like below:
```
-----------------------------
JobId City
-------------------------------
1 New York
2 New York , Ohio , Virginia
3 New York , Virginia
```
How do I count JobId per particular city? For example, I want the count of JobId in New York.
I want a result like:
New York 3
Ohio 1
Virginia 2
|
```
SELECT
SUBSTRING_INDEX(SUBSTRING_INDEX(all_city, ',', num), ',', -1) AS one_city,
COUNT(*) AS cnt
FROM (
SELECT
GROUP_CONCAT(city separator ',') AS all_city,
        LENGTH(GROUP_CONCAT(city SEPARATOR ',')) - LENGTH(REPLACE(GROUP_CONCAT(city SEPARATOR ','), ',', '')) + 1 AS count_city
FROM table_name
) t
JOIN numbers n
ON n.num <= t.count_city
GROUP BY one_city
ORDER BY cnt DESC;
```
To get the count of each comma-separated value, run the query above. For correct results you also need one more table, `numbers`, which has a single integer column `num`, populated with sequential values.
If you get an error from `GROUP_CONCAT(city SEPARATOR ',') AS all_city`, set a global variable to raise the limit: `SET group_concat_max_len = 18446744073709547520;`
|
Your database is poorly designed and you are going to have a lot of trouble down the line.
Using the current structure you can get the count using the `find_in_set` function but that you should avoid .
Your table is as
```
create table test
(jobid int ,city varchar(100));
insert into test values
(1,'New York'),
(2,'New York, Ohio,Virginia'),
(3,'New York,Virginia');
```
Now to get the count you can use the following
```
select
count(*) as tot from test
where
find_in_set('Virginia',city) > 0;
```
As I mentioned this is a poor db design the ideal would be as
* first a job table with job details
* a location table containing all the locations
* and finally a table linking a job and a location
So it would look like
```
create table jobs (jobid int, name varchar(100));
insert into jobs values
(1,'PHP'),(2,'Mysql'),(3,'Oracle');
create table locations (id int, name varchar(100));
insert into locations values (1,'New York'),(2,'Ohio'),(3,'Virginia');
create table job_locations (id int, jobid int, location_id int);
insert into job_locations values
(1,1,1),(2,2,1),(3,2,2),(4,2,3),(5,3,1),(6,3,3);
```
Now getting the count and many more operations will be fairly easy
```
select
count(j.jobid) as tot
from jobs j
join job_locations jl on jl.jobid = j.jobid
join locations l on l.id = jl.location_id
where
l.name = 'Virginia'
```
For counting all the jobs per city and using the above schema it would very simple
```
select
l.name,
count(j.jobid) as tot
from jobs j
join job_locations jl on jl.jobid = j.jobid
join locations l on l.id = jl.location_id
group by l.name
```
**[DEMO](http://www.sqlfiddle.com/#!2/10d51/2)**
|
Get Count of different values in comma separated row in mysql
|
[
"",
"mysql",
"sql",
"string-aggregation",
""
] |
I've tried this query, but got the wrong result...
I looked at many references but I could not find the answer.
```
SELECT a.id, SUM(b1.amount) as section1, SUM(ABS(b2.amount)) as section2 FROM parent_table as a
INNER JOIN child_table as b1 ON a.id=b1.parent_id
INNER JOIN child_table as b2 ON a.id=b2.parent_id
WHERE b1.bh_status IN ('section1') AND b2.bh_status IN ('section2')
GROUP BY a.id
```
Tables are:
**parent\_table**
```
id | customer_namem | date_register
----+----------------+--------------
1 | customer1 | 2014-20-11
2 | customer2 | 2014-21-11
3 | customer3 | 2014-22-11
4 | customer4 | 2014-23-11
5 | customer5 | 2014-23-11
```
**child\_table**
```
id | ch_key | ch_value | parent_id
----+-----------+----------+----------
1 | deposit | 100 | 1
2 | deposit | 500 | 1
3 | withdraw | 100 | 1
4 | withdraw | 100 | 1
5 | deposit | 10 | 2
6 | deposit | 100 | 2
7 | deposit | 50 | 3
8 | deposit | 50 | 3
9 | withdraw | 10 | 3
10 | deposit | 50 | 4
11 | withdraw | 50 | 4
12 | withdraw | 150 | 5
```
I want to display like this:
```
id | customer_namem | deposit | withdtaw | date_register
1 | customer1 | 600 | 200 | 2014-20-11
2 | customer2 | 110 | 0 | 2014-20-11
3 | customer3 | 100 | 10 | 2014-20-11
4 | customer4 | 50 | 50 | 2014-20-11
5 | customer5 | 0 | 150 | 2014-20-11
```
|
You have to use `CASE`, like this:
```
SELECT
a.id, a.customer_namem,
SUM(CASE WHEN ch_key = "deposit" THEN ch_value ELSE 0 END) as deposit,
SUM(CASE WHEN ch_key = "withdraw" THEN ch_value ELSE 0 END) as withdraw,
a.date_register
FROM parent_table as a
INNER JOIN child_table as b ON a.id=b.parent_id
GROUP BY a.id
```
Working Demo: <http://sqlfiddle.com/#!2/01ad95/1>
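The `CASE`-pivot is straightforward to check against the question's sample data; here is a sketch using SQLite instead of MySQL:

```python
import sqlite3

# Pivot the key/value child rows into deposit and withdraw columns per parent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent_table (id INT, customer_namem TEXT, date_register TEXT);
CREATE TABLE child_table (id INT, ch_key TEXT, ch_value REAL, parent_id INT);
INSERT INTO parent_table VALUES (1,'customer1','2014-11-20'),
 (2,'customer2','2014-11-21'),(3,'customer3','2014-11-22'),
 (4,'customer4','2014-11-23'),(5,'customer5','2014-11-23');
INSERT INTO child_table VALUES
 (1,'deposit',100,1),(2,'deposit',500,1),(3,'withdraw',100,1),
 (4,'withdraw',100,1),(5,'deposit',10,2),(6,'deposit',100,2),
 (7,'deposit',50,3),(8,'deposit',50,3),(9,'withdraw',10,3),
 (10,'deposit',50,4),(11,'withdraw',50,4),(12,'withdraw',150,5);
""")
rows = conn.execute("""
SELECT a.id, a.customer_namem,
       SUM(CASE WHEN b.ch_key='deposit'  THEN b.ch_value ELSE 0 END) AS deposit,
       SUM(CASE WHEN b.ch_key='withdraw' THEN b.ch_value ELSE 0 END) AS withdraw,
       a.date_register
FROM parent_table a
JOIN child_table b ON a.id = b.parent_id
GROUP BY a.id ORDER BY a.id""").fetchall()
```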
|
## Query
```
SELECT a.id,
a.customer_name,
COALESCE(SUM(CASE WHEN b.ch_key='deposit' THEN b.ch_value END),0) AS deposit,
COALESCE(SUM(CASE WHEN b.ch_key='withdraw' THEN b.ch_value END),0) AS withdraw,
a.date_register
FROM parent_table a
JOIN child_table b
ON a.id=b.parent_id
GROUP BY a.id;
```
**Output**
```
+----+---------------+---------+----------+---------------+
| ID | CUSTOMER_NAME | DEPOSIT | WITHDRAW | DATE_REGISTER |
+----+---------------+---------+----------+---------------+
| 1 | customer 1 | 600 | 200 | 2014-11-20 |
| 2 | customer 2 | 110 | 0 | 2014-11-21 |
| 3 | customer 3 | 100 | 10 | 2014-11-22 |
| 4 | customer 4 | 50 | 50 | 2014-11-23 |
| 5 | customer 5 | 0 | 150 | 2014-11-23 |
+----+---------------+---------+----------+---------------+
```
## [Fiddle demo](http://sqlfiddle.com/#!2/188bd/1)
|
How do I sum the child table?
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"sum",
""
] |
I wrote a sql query and my sample query like below:
```
select
value1,
value2,
value3,
(
select CAST(amount1 as decimal(17,2)) from
table1 where something...)
- select CAST(amount2 as decimal(17,2)) from table1 where something...)
) as 'Total Purchase'
from
table1
where
something
```
But, I am getting syntax error at "-" operator.
i tried to use "Set" statement like below
```
Declare
@value1 decimal(17,2),
@value2 decimal(17,2),
@result decimal(17,2)
set value1 = select CAST(amount1 as decimal(17,2)) from table1 where something...);
set value2 = select CAST(amount2 as decimal(17,2)) from table1 where something...);
set result = value1 - value2;
```
but I am getting a syntax error again.
What can I use instead of the "-" operator?
Thanks for your advice.
|
There are a lot of things wrong in your SQL. Try this:
```
SELECT value1,
value2,
value3,
(SELECT Cast(amount1 AS DECIMAL(17, 2))
FROM table1 where something...) - (SELECT Cast(amount2 AS DECIMAL(17, 2))
FROM table1 where something...) AS 'Total Purchase'
FROM table1
```
or
```
DECLARE @value1 DECIMAL(17, 2),
@value2 DECIMAL(17, 2),
@result DECIMAL(17, 2)
SET @value1 = (SELECT Cast(amount1 AS DECIMAL(17, 2))
FROM table1 where something...)
SET @value2 = (SELECT Cast(amount2 AS DECIMAL(17, 2))
FROM table1 where something...)
SET @result = @value1 - @value2;
```
|
Just a syntax error; I believe it should be like this:
```
(
(select CAST(amount1 as decimal(17,2)) from table1 where something...) - (select CAST(amount2 as decimal(17,2)) from table1 where something...)
) as 'Total Purchase'
```
|
Using Extraction in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to create an SQL query for an ASP NET drop down list using two tables and a session variable.
I want to retrieve all the values from table 1 which correspond to matching ids in table 2, where table 2 is filtered by an external variable.
As it is clear I do not know how to word this question, here is a simplified example of what I am attempting to do:
* My site has a session variable which holds the current colour the user is "filtering".
* A dropdown list will show a list of cars corresponding to that colour using an SQL query.
For example if the session variable was "Blue" the dropdown list would contain "Punto" as it can see that the colour ID for "Blue" is 12 and "Punto" is the only car name corresponding to that colour.
Linked image: <https://i.stack.imgur.com/2zT90.png>

As session variables can be assigned and called in ASP NET custom queries the session variable can just be referred to as, for example, @ExternalVar (Colours.ID WHERE (Colours.Name = @ExternalVar))
Apologies I had to word this like a quiz question; giving a simplified example was the only way I could really articulate my question.
|
I think this should do the trick if I understand the question above
```
select * from Cars c
inner join Colours cl on c.colourID = cl.ID
where cl.Name = @ExternalVar
```
|
```
SELECT Cars.* FROM Colours
INNER JOIN Cars
ON Colours.ID = Cars.ColourID
WHERE Colours.Name = @Variable
```
|
SQL Query - SELECT WHERE Table1.ID = Table2.ID AND Table2.Var = @Var
|
[
"",
"sql",
"asp.net",
"webforms",
""
] |
I have the following query:
```
SELECT u.user_id, u.username, u.email,
hp.homepage_id
FROM table_u u
LEFT JOIN table_hp hp
ON (u.user_id = hp.user_id)
WHERE u.blocked = 'N'
AND u.email LIKE 'someemailaddress'
```
I am joining on the user_id column. For the given email address I know both values are the same, so I should get a result, but still I don't get anything... What is wrong with this query?
|
The point was how some data was stored in the database. It seems that in an old (buggy) version of the application, data was stored without decent trimming etc., so the field value was stored with a whitespace in it.
Stupid... I should have checked the actual data that was stored.
|
Put the '%' before and after in the LIKE clause - see below
```
SELECT u.user_id, u.username, u.email,
hp.homepage_id
FROM table_u u
LEFT JOIN table_hp hp
ON (u.user_id = hp.user_id)
WHERE u.blocked = 'N'
AND u.email LIKE '%someemailaddress%'
```
|
SQL Query shows no result with left join
|
[
"",
"mysql",
"sql",
"left-join",
"resultset",
""
] |
```
with cte as (
select '2014-03-10 08:00:00' as Dates
union all
select '2014-05-11 14:00:00'
)
select * from cte
join someTable on 1=1
OPTION (MAXRECURSION 0)
```
The here above SQL is outputing like a charm all hours between two dates and a field retrieved from a join with another table:
```
2014-03-10 02:00:00 A
2014-03-10 02:00:00 B
2014-03-10 03:00:00 A
2014-03-10 03:00:00 B
...
2014-05-11 13:00:00 A
2014-05-11 13:00:00 B
2014-05-11 14:00:00 A
2014-05-11 14:00:00 B
```
I would like to create a view from that but I do not manage to do it. I tried several things but without success. The following is returning : `Incorrect syntax near the keyword 'OPTION'.`
```
CREATE VIEW viewName as
with cte as (
select '2014-03-10 08:00:00' as Dates
union all
select '2014-05-11 14:00:00'
)
select * from cte
join someTable on 1=1
OPTION (MAXRECURSION 0)
```
|
You cannot specify the `MAXRECURSION` option inside a view.
From <http://benchmarkitconsulting.com/colin-stasiuk/2010/04/12/maxrecursion-with-a-cte-in-a-view/>:
In order to make use of the MAXRECURSION option you need to first create your view without using the MAXRECURSION option:
```
USE AdventureWorks;
GO
CREATE VIEW vwCTE AS
--Creates an infinite loop
WITH cte (EmployeeID, ManagerID, Title) as
(
SELECT EmployeeID, ManagerID, Title
FROM HumanResources.Employee
WHERE ManagerID IS NOT NULL
UNION ALL
SELECT cte.EmployeeID, cte.ManagerID, cte.Title
FROM cte
JOIN HumanResources.Employee AS e
ON cte.ManagerID = e.EmployeeID
)
-- Notice the MAXRECURSION option is removed
SELECT EmployeeID, ManagerID, Title
FROM cte
GO
```
Then when you query the view include the MAXRECURSION option:
```
USE AdventureWorks;
GO
SELECT EmployeeID, ManagerID, Title
FROM vwCTE
OPTION (MAXRECURSION 2);
```
See also AaskashM's answer at <https://stackoverflow.com/a/7428903/195687>
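The restriction is only on where the hint may appear, not on recursive CTEs in views in general. As a portable sketch (SQLite via Python's `sqlite3`; SQLite has no MAXRECURSION hint and no 100-level default cap, so the view is queried with no extra clause), a view wrapping a recursive CTE can be created and used directly:

```python
import sqlite3

# Sketch: a view over a recursive CTE. OPTION (MAXRECURSION n) is SQL Server-only;
# SQLite imposes no 100-level default, so no hint is needed at query time.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE VIEW vwNumbers AS
WITH RECURSIVE cte(n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM cte WHERE n < 200  -- deeper than SQL Server's default 100
)
SELECT n FROM cte;
""")
result = con.execute("SELECT COUNT(*), MAX(n) FROM vwNumbers").fetchone()
print(result)  # (200, 200)
```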
|
If you have more than 100 expected results, and want to avoid having to add the OPTION statement to your VIEW calls, try executing the CTE query - including the OPTION clause - in an OPENQUERY statement within your VIEW.
In your example, it would probably look something like this:
```
USE AdventureWorks;
GO
CREATE VIEW vwCTE AS
select * from OPENQUERY([YourDatabaseServer], '
--Creates an infinite loop
WITH cte (EmployeeID, ManagerID, Title) as
(
SELECT EmployeeID, ManagerID, Title
FROM AdventureWorks.HumanResources.Employee
WHERE ManagerID IS NOT NULL
UNION ALL
SELECT cte.EmployeeID, cte.ManagerID, cte.Title
FROM cte
JOIN AdventureWorks.HumanResources.Employee AS e
ON cte.ManagerID = e.EmployeeID
)
-- Notice the MAXRECURSION option is removed
SELECT EmployeeID, ManagerID, Title
FROM cte
OPTION (MAXRECURSION 0)
' ) x
GO
```
Notice that you must fully qualify object references inside the OPENQUERY string, i.e. the database and schema must prefix each object (table, view, sproc, or function) reference.
Sure, it's a little ugly, but gets the job done nicely, and avoids having to add that pesky OPTION clause.
|
MS SQL Server - How to create a view from a CTE?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I already tried my best reading documentation and searching on Google but I can't figure it out.

I need to build a query where it selects the value of field\_number 7 if field\_number 19 has the value 1 and the date of field\_number 21 is past the current date.
This is what I currently have:
```
SELECT * FROM wp_rg_lead_detail WHERE field_number IN (7, 19) AND value = 1
```
But that only gives me the row where field\_number is 19
**EDIT:**
I need to get the value of field\_number 7 where field\_number 19 is past the current date. There can be multiple inctances where field\_number is 7 (each lead\_id is a different "person" of which i need the field\_number 7 value) Where field\_number 22 has the value 1 is no longer applicable
**EDIT2:**
Hey guys thanks! With your help the problem is now solved. The code that i'm using now is (thank you sgeddes):
```
SELECT value FROM wp_rg_lead_detail WHERE field_number = 7 AND EXISTS (SELECT field_number, value FROM wp_rg_lead_detail WHERE field_number = 21 AND value > CURDATE())
```
|
You could use `group by` with `max` and `case`. I'm assuming the unique group is `lead_id` and `form_id`:
```
select max(case when field_number = 7 then value end) value
from wp_rg_lead_detail
group by lead_id, form_id
having max(case when field_number = 19 then value end) = 1
and max(case when field_number = 21 then value end) > curdate()
```
The last part depends a little on which database you are using (curdate is mysql, getdate() is sql server), but this should give you the general idea. You may need to cast the value to a date as well.
* [SQL Fiddle Demo](http://sqlfiddle.com/#!2/74326/1)
|
This should do the trick
```
SELECT o.value
FROM wp_rg_lead_detail o
JOIN wp_rg_lead_detail t1
ON t1.lead_id = o.lead_id
AND t1.form_id = o.form_id
AND t1.field_number = 19
AND t1.value = 1
JOIN wp_rg_lead_detail t2
ON t2.lead_id = o.lead_id
AND t2.form_id = o.form_Id
AND t2.field_number = 21
AND t2.value > CONVERT(char(10), GetDate(),126)
WHERE o.field_number = 7
```
* It selects the value from your table;
* Joins it on itself, where the field number is 19 and its value is 1;
* Performs the same join where the field number is 21 and the date is past today;
* Inner joins will limit this to show the relevant details only.
**Edit**
Thanks Hans for the reminder that I forgot the Where clause :)
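For comparison with the aggregation approach, the same filter can be sketched as a runnable self-join demo (Python's `sqlite3`, invented sample rows; `DATE('now')` stands in for `GetDate()`):

```python
import sqlite3

# Sketch of the self-join approach: one alias per field that must match
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wp_rg_lead_detail (lead_id INT, form_id INT, field_number INT, value TEXT);
INSERT INTO wp_rg_lead_detail VALUES
  (1, 1, 7, 'Alice'), (1, 1, 19, '1'), (1, 1, 21, '2999-01-01'),
  (2, 1, 7, 'Bob'),   (2, 1, 19, '0'), (2, 1, 21, '2999-01-01');
""")
rows = con.execute("""
    SELECT o.value
    FROM wp_rg_lead_detail o
    JOIN wp_rg_lead_detail t1
      ON t1.lead_id = o.lead_id AND t1.form_id = o.form_id
     AND t1.field_number = 19 AND t1.value = '1'
    JOIN wp_rg_lead_detail t2
      ON t2.lead_id = o.lead_id AND t2.form_id = o.form_id
     AND t2.field_number = 21 AND t2.value > DATE('now')
    WHERE o.field_number = 7
""").fetchall()
print(rows)  # [('Alice',)] -- each inner join drops leads missing a required field value
```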
|
Need help building sql query
|
[
"",
"sql",
""
] |