I need to do a SQL query to find some entries from a large table.
table:
```
id value1 value2
ny 35732 8023
ny 732 23
ny 292 109
nj 8232 813
nj 241 720
nj 590 287
```
I need to **randomly** select 2 entries **from each distinct id group** such that
```
id value1 value2
ny 35732 8023
ny 292 109
nj 8232 813
nj 590 287
```
My SQL code:
```
select top 2 * from my_table group by id value1 value2
```
But, it is not what I want.
I also need to insert the result into a table.
Any help would be appreciated.
|
You can use `ROW_NUMBER` and use `NEWID()` to generate a random `ORDER`:
*EDIT: I replaced `CHECKSUM(NEWID())` with `NEWID()` since I cannot prove which is faster, and `NEWID()` is, I think, the most commonly used.*
```
WITH CTE AS(
SELECT *,
RN = ROW_NUMBER() OVER(PARTITION BY id ORDER BY NEWID())
FROM tbl
)
SELECT
id, value1, value2
FROM Cte
WHERE RN <= 2
```
[**SQL Fiddle**](http://sqlfiddle.com/#!6/46710/1/0)
The fiddle should show different results across runs.
---
If you're inserting this into another table, use this subquery version:
```
INSERT INTO yourNewTable(id, value1, value2)
SELECT
id, value1, value2
FROM (
SELECT *,
RN = ROW_NUMBER() OVER(PARTITION BY id ORDER BY NEWID())
FROM tbl
)t
WHERE RN <= 2
```
|
```
DECLARE @Table1 TABLE
(id varchar(2), value1 int, value2 int)
;
INSERT INTO @Table1
(id, value1, value2)
VALUES
('ny', 35732, 8023),
('ny', 732, 23),
('ny', 292, 109),
('nj', 8232, 813),
('nj', 241, 720),
('nj', 590, 287)
;
SELECT *
FROM @Table1 T
WHERE
(
SELECT COUNT(*)
FROM @Table1 TT
WHERE T.id = TT.id AND
T.value1 >= TT.value1
) <= 2
```
|
randomly select a fixed number of rows in each group in SQL server table
|
[
"sql",
"sql-server"
] |
How to insert record from Table A to Table B based on the conditions in Table C.
Table A:
```
ID Name DateFrom DateTo
1 Alex 22-7-2015 26-7-2015
2 Alice 21-8-2015 25-8-2015
```
Table C:
```
ID Quarter DateFrom DateTo
1 Quarter 1 1-7-2015 31-7-2015
2 Quarter 2 1-8-2015 31-8-2015
```
If a record's date range from Table A falls within the date range in Table C, it should be inserted into a new Table B.
Redemption is Table B
```
insert into redemption(staffID,staffName,department,pointsAccumulated,referrerID)
select referrerStaffID,referrerName,referrerDepartment,SUM(points),activeDirectoryID
FROM referral
WHERE NOT EXISTS (select * from redemption1 where referrerID=activeDirectoryID) group by activeDirectoryID;
```
|
Try this
```
Insert into tableB(Id,name,datefrom,dateto)
select t1.Id,t1.name,t1.datefrom,t1.dateto from tableA as t1
inner join tableC as t2 on t1.id=t2.id
where
t1.datefrom between t2.datefrom and t2.dateto or
t1.dateto between t2.datefrom and t2.dateto
```
|
I would use a correlated subquery with a `WHERE NOT EXISTS` construct:
```
insert into
redemption(
staffID,
staffName,
department,
pointsAccumulated,
referrerID
)
select
referrerStaffID,
referrerName,
referrerDepartment,
SUM(points),
activeDirectoryID
FROM referral r
WHERE NOT EXISTS (
select 1 from redemption1 r1 where
r1.referrerID=r.activeDirectoryID
and (r.datefrom between r1.datefrom and r1.dateto or
r.dateto between r1.datefrom and r1.dateto)
)
group by r.referrerStaffID,r.referrerName,r.referrerDepartment,r.activeDirectoryID;
```
|
Insert into, select and where statement combined
|
[
"mysql",
"sql",
"sqlyog"
] |
I am building SQL queries and I'm wondering how using posix regular expressions (such as `~ , ~* , !~ , !~*`) or `LIKE` and `ILIKE` affects the performance of those queries. Does using any of these affects the speed and performance of my SQL queries? If yes, how? And which is more applicable to use?
|
The regex/like operators require **something** to happen in the DB, so of course they somehow affect performance... but they all do a specific job.
[`LIKE` provides a simple syntax](http://www.postgresql.org/docs/9.3/static/functions-matching.html#FUNCTIONS-LIKE), but not much functionality. According to [another SO answer](https://stackoverflow.com/a/29474797/320220), the `LIKE` functionality is pretty specialized and therefore probably more performant when compared to an equivalent regex.
> which is more applicable to use?
Not all text can be matched by a `LIKE`, so in those cases you'll have to use a regex. But if `LIKE` is sufficient, the linked answer suggests that it would be better to use that.
If you're concerned about a specific query, use postgres' [`EXPLAIN ANALYZE`](http://www.postgresql.org/docs/9.3/static/sql-explain.html) to see what postgres is actually going to do.
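For example, to compare an anchored prefix match under both operators (the table and column names here are hypothetical):

```
-- both queries match values starting with 'abc'; compare plans and timings
EXPLAIN ANALYZE SELECT * FROM mytable WHERE col LIKE 'abc%';
EXPLAIN ANALYZE SELECT * FROM mytable WHERE col ~ '^abc';
```

Note that an anchored pattern like `LIKE 'abc%'` can use a plain btree index (with `text_pattern_ops` in non-C locales), which often matters far more for performance than the choice of operator itself.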
|
Based on my research, POSIX regular expressions are more applicable than the LIKE and ILIKE clauses due to some advantages:
1. **SPEED**
2. **Simple clause query**
Here are some examples of the posix regex usage:
~ tilde for case sensitive
```
POSIX: SELECT record FROM mytable WHERE record ~ 'a';
LIKE:  SELECT record FROM mytable WHERE record like '%a%';
```
~\* for case insensitive
```
POSIX: SELECT record FROM mytable WHERE record ~* 'a';
LIKE:  SELECT record FROM mytable WHERE record ilike '%A%';
```
!~ exclude/not (case sensitive)
```
POSIX: SELECT record FROM mytable WHERE record !~ 'a';
LIKE:  SELECT record FROM mytable WHERE record not like '%a%';
```
!~\* exclude/not (case insensitive)
```
POSIX: SELECT record FROM mytable WHERE record !~* 'a';
LIKE:  SELECT record FROM mytable WHERE record not ilike '%a%';
```
Some other usage of posix regex , LIKE and ILIKE can be found [here](http://www.postgresql.org/docs/9.0/static/functions-matching.html).
|
What is the advantage of using POSIX regular expressions than LIKE and ILIKE in PostgreSQL?
|
[
"sql",
"regex",
"postgresql",
"sql-like"
] |
Suppose I have the following query (which is a part of my actual query) which retrieves person\_id and the age for that person:
```
select p.person_id,TRUNC ((SYSDATE - p.birth_date) / 365.25, 0) AS age
from person p
```
Now after getting this age I want to get the number of people in each age group like:
```
group count
age <=5 100
age >5 and age <=10 50
age > 10 15
... ...
```
I can use a group by clause to get the count for each individual age, but how do I get counts between two ages? Can someone tell me a way to do it?
You can use the following data for reference:
```
pid age
1 4
2 13
3 10
4 8
5 9
6 12
```
So the result should be:
```
group count
age <= 5 1
>5 & <=10 3
> 10 2
```
|
First, your age calculation is incorrect. You almost certainly want to measure the months between the two dates rather than hoping that dividing by 365.25 is close enough:
```
trunc( months_between( sysdate, p.birth_date )/ 12 )
```
Second, if you want to group by ranges, you just need to select the range in a `case` statement and group by that
```
SELECT (case when age <= 5
then 'age <= 5'
when age > 5 and age <= 10
then 'age > 5 and age <= 10'
else 'age > 10'
end) bucket,
count(*)
FROM( SELECT trunc( months_between( sysdate, p.birth_date )/ 12 ) age
FROM person p )
GROUP BY (case when age <= 5
then 'age <= 5'
when age > 5 and age <= 10
then 'age > 5 and age <= 10'
else 'age > 10'
end)
```
|
You may want to do something like this:
```
with data as (
select p.person_id,TRUNC ((SYSDATE - p.birth_date) / 365.25, 0) AS age
from person p
)
select '30 or below' as stat, sum(case when age between 0 and 30 then 1 else 0 end) as count from data
union all
select '31 to 60' as stat, sum(case when age between 31 and 60 then 1 else 0 end) as count from data
union all
select '61 to 130' as stat, sum(case when age between 61 and 130 then 1 else 0 end) as count from data
;
```
Example: <http://sqlfiddle.com/#!4/8c9ae4/13>
|
How to get count of people based on age groups using SQL query in Oracle database?
|
[
"sql",
"oracle"
] |
I am using SQL Server and I have a very big table.
If we use a `delete from table` query, we get a jump in the transaction log.
So I should use a query like this:
```
delete top (@count) from table
```
And then I should run that in a while loop in my ASP.NET code.
My question is what is the best `@count` amount to achieve these goals:
* The table should still be able to respond to other queries during the delete process
* No jump in the transaction log file
* Deleting all of the rows should be as fast as possible (I have hundreds of thousands of rows)
Added information:
This is an item\_in\_tags table that stores item IDs and tag IDs, so it has a very large number of rows. When I want to re-tag all items, I need to delete everything in it and then re-insert the data. I cannot run one very large transaction, because a jump in the transaction log is not acceptable to my host and I do not have shrink permission.
I want to keep each transaction small and fast, but I also want the best overall performance, and during the re-tagging process I don't want downtime from problems like table locking.
|
Here are different scenarios that seem to fall within the scope of what you are asking:
1. You want to delete everything in the table. Then `truncate table` is the right approach.
2. Your delete query needs to do a full table scan to identify the rows being deleted. Then a larger number would probably be better, to reduce the number of full table scans.
3. You have very complex triggers or cascading operations on the table. In that case, "1" might be the best answer.
4. The other queries are modifying the table, so you have the potential issue of locking conflicts and timeouts. In that case, you might need to wait until the system is quieter to do your work.
5. The other operations are read-only and don't need to see the deletes. In that case, selecting into another table, and using `sp_rename` when the table is not in use might be the most efficient solution.
And, there are no doubt other scenarios as well. My advice is to choose a number such as 100 or 1000 and just go with it. Reduce the number if the deletes appear to be interfering with other operations.
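The batched-delete loop the question describes can be sketched in T-SQL like this (the table name and batch size are illustrative):

```
-- delete in small batches to keep each transaction short
DECLARE @count int = 1000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@count) FROM item_in_tags;
    IF @@ROWCOUNT < @count BREAK; -- last (partial or empty) batch done
END
```

Under autocommit, each iteration commits on its own, so the transaction log only ever holds one batch's worth of changes at a time.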
|
Microsoft used to default to 1000 rows for many operations; now it defaults, in SSIS among other places, to 10000 rows at a time. Many developers use 10000 as the batch size for a large transaction, but you may want to experiment with that number (on your system, with its own memory and disk) to see whether a larger or smaller number finishes faster.
|
best number of rows to delete in a delete query
|
[
"sql",
"asp.net",
"sql-server",
"t-sql"
] |
I have a date in the format '201501'. I imported the table from Excel, so it is not a datetime datatype. I want to extract/return the month name. How can I convert '201501' to datetime while also getting the month name?
|
`YYYYMMDD` is safe so you can:
```
;with t(example) as
(
select '201512'
)
select
cast(example + '01' as date) as [DATE],
datename(month, cast(example + '01' as date)) as [MONTH]
from t
```
Result:
```
DATE        MONTH
2015-12-01  December
```
|
```
select DATE_FORMAT(str_to_date(substring("201501",5,6),"%m"),"%M") ;
```
|
converting to datetime time and extracting month name
|
[
"sql",
"datetime",
"sql-server-2012"
] |
I have two queries. One for the numerator and one for the denominator. How do I combine the two queries so that my result is one table with the numerator, denominator, and grouping?

Example of desired output:
```
Numerator | Denominator | Grouping
----------|-------------|---------
30 | 51 | 1111
172 | 216 | 2768
```
|
You really have two different aggregates over the same table. For many reasons, performance being one of them, you do not want to break your query down into two parts and then re-join them together. You can accomplish the correct result by using column-level filtering instead of WHERE clause filtering:
```
select [officerID]
,sum(case when [ST3ID] != '' then 1 else 0 end) as [Numerator]
,count(*) as [Denominator]
FROM [dbo].[cobanVideos]
WHERE [starting] > '6/1/2015 0:00:00 AM'
AND [starting] < '7/1/2015 0:00:00 AM'
GROUP BY [officerID]
```
By using a CASE statement to filter the data at the column level, you can retrieve both values at the same time. You can also calculate the percentage value (numerator/denominator) by adding the following as an additional column:
```
select [officerID]
,sum(case when [ST3ID] != '' then 1 else 0 end) as [Numerator]
,count(*) as [Denominator]
,case when count(*) <> 0
then sum(case when [ST3ID] != '' then 1.0 else 0 end) / count(*)
else 0
end as [Pct ST3]
FROM [dbo].[cobanVideos]
WHERE [starting] > '6/1/2015 0:00:00 AM'
AND [starting] < '7/1/2015 0:00:00 AM'
GROUP BY [officerID]
```
SQL Window functions give you a whole other set of tools for working with aggregates at different levels of aggregation, all in one query. If you are interested, I can follow up with an example of how you could calculate the ratio per officerId, for all officers, and also determine the contribution percent of each officer to the overall total, all with a single SELECT.
|
Use a Join:
```
Select numerator.Count, denominator.Count, numerator.officerID
from (SELECT COUNT(*) as Count, officerID
      FROM [dbo].[cobanVideos]
      WHERE starting > '6/1/2015 0:00:00 AM'
        AND starting < '7/1/2015 0:00:00 AM'
        AND ST3ID != ''
      GROUP BY officerID) numerator
Join (SELECT COUNT(*) as Count, officerID
      FROM [dbo].[cobanVideos]
      WHERE starting > '6/1/2015 0:00:00 AM'
        AND starting < '7/1/2015 0:00:00 AM'
      GROUP BY officerID) denominator
  On numerator.officerId = denominator.officerId
```
|
Combining two queries to get numerator, denominator, and grouping
|
[
"sql",
"sql-server"
] |
I'm trying to run the following query, but it gives me the
following error message:
> ALTER DATABASE statement not allowed within multi-statement
> transaction.
the query is:
```
ALTER DATABASE TSQL2012
SET READ_COMMITTED_SNAPSHOT ON;
```
and as shown in the pic: any idea why?
[](https://i.stack.imgur.com/zpxZ3.png)
|
A multi-statement transaction is one that is either created, explicitly, by a [`BEGIN TRANSACTION`](https://msdn.microsoft.com/en-GB/library/ms188929.aspx) statement, or one that has been created by use of the [Implicit Transactions](https://technet.microsoft.com/en-us/library/ms188317(v=sql.105).aspx) mode - that is, a statement has executed and, before the next statement executes, the [transaction count](https://msdn.microsoft.com/en-GB/library/ms187967.aspx) is still greater than 0.
So this implies that, on the connection on which you're trying to execute your code, the transaction count is already greater than zero.
You should execute either `ROLLBACK` (safest if you don't know what was already done in the transaction) or `COMMIT` (safest for preserving data changes already performed), and then you can execute the `ALTER DATABASE` statement.
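A minimal sketch of that check, using the database name from the question:

```
-- @@TRANCOUNT > 0 means a transaction is still open on this connection
SELECT @@TRANCOUNT;

IF @@TRANCOUNT > 0
    ROLLBACK; -- or COMMIT, to keep changes already made

ALTER DATABASE TSQL2012
SET READ_COMMITTED_SNAPSHOT ON;
```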
|
I'll make things easier for you.
The thing is, there is an uncommitted transaction open on that connection.
So SQL Server just wants you to commit that first and then run this query.
If you don't want that transaction, just close that query window; it will ask whether you want to save it or not.
Then open a new one and run the query you want.
|
error: ALTER DATABASE statement not allowed within multi-statement transaction
|
[
"sql",
"t-sql",
"sql-server-2012"
] |
I have a SQL query with a LEFT JOIN:
```
SELECT COUNT(stn.stocksId) AS count_stocks
FROM MedicalFacilities AS a
LEFT JOIN stocks stn ON
(stn.stocksIdMF = ( SELECT b.MedicalFacilitiesIdUser
FROM medicalfacilities AS b
WHERE b.MedicalFacilitiesIdUser = a.MedicalFacilitiesIdUser
ORDER BY stn.stocksId DESC LIMIT 1)
AND stn.stocksEndDate >= UNIX_TIMESTAMP() AND stn.stocksStartDate <= UNIX_TIMESTAMP())
```
With this query I want to select one row from the table `stocks` matching the conditions, with the field equal to the value `a.MedicalFacilitiesIdUser`.
I always get `count_stocks = 0` in the result, but I need to get `1`.
|
Your subquery seems redundant, and the main query is hard to read, as much of the join logic could be placed in the where clause. Additionally, the original query might have a performance issue.
Recall that `WHERE` is an implicit join and `JOIN` is an explicit join. Query optimizers
make no distinction between the two if they use the same expressions, but readability and maintainability are another thing to acknowledge.
Consider the revised version (notice I added a `GROUP BY`):
```
SELECT COUNT(stn.stocksId) AS count_stocks
FROM MedicalFacilities AS a
LEFT JOIN stocks stn ON stn.stocksIdMF = a.MedicalFacilitiesIdUser
WHERE stn.stocksEndDate >= UNIX_TIMESTAMP()
AND stn.stocksStartDate <= UNIX_TIMESTAMP()
GROUP BY stn.stocksId
ORDER BY stn.stocksId DESC
LIMIT 1
```
|
The `count(...)` aggregate doesn't count `null`, so its argument matters:
```
COUNT(stn.stocksId)
```
Since `stn` is your right hand table, this will not count anything if the `left join` misses. You could use:
```
COUNT(*)
```
which counts every row, even if all its columns are `null`. Or a column from the left hand table (`a`) that is never `null`:
```
COUNT(a.ID)
```
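The difference is easy to verify with a one-row derived table (a minimal, dialect-neutral sketch):

```
SELECT COUNT(x) AS cnt_col,  -- 0: the NULL is not counted
       COUNT(*) AS cnt_rows  -- 1: the row itself still is
FROM (SELECT NULL AS x) t;
```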
|
How to fix SQL query with Left Join and subquery?
|
[
"mysql",
"sql",
"join",
"subquery",
"left-join"
] |
**Table 1- Job**
```
JobID
JobCustomerID
JobAddressID
```
**Table 2- Addresses**
```
AddressID
AStreetAddress
```
**Table 3- Customer**
```
CustomerID
CustomerName
```
Query:
```
SELECT *
FROM [Jobs]
LEFT JOIN [Addresses] ON [Jobs].JobAddressID = dbo.Addresses.AddressID
LEFT JOIN [Customers] ON [Jobs].JobCustomerID = [Customers].CustomerID
GROUP BY AStreetAddress
HAVING (COUNT(AStreetAddress) > 1)
```
I am trying to find the jobs with duplicated addresses.
Error
> Column 'Jobs.JobID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
|
With GROUP BY, you can use in the select list only columns that are in the GROUP BY clause or inside aggregate functions:
```
SELECT AStreetAddress
FROM [Jobs]
LEFT JOIN [Addresses] ON [Jobs].JobAddressID = dbo.Addresses.AddressID
LEFT JOIN [Customers] ON [Jobs].JobCustomerID = [Customers].CustomerID
GROUP BY AStreetAddress
HAVING (COUNT(AStreetAddress) > 1)
```
You need something like this:
```
SELECT [Jobs].JobID
FROM [Jobs]
LEFT JOIN [Addresses] ON [Jobs].JobAddressID = dbo.Addresses.AddressID
LEFT JOIN [Customers] ON [Jobs].JobCustomerID = [Customers].CustomerID
GROUP BY [Jobs].JobID
HAVING (COUNT(*) > 1)
```
|
This should work for you:
```
SELECT *
FROM [Jobs]
left JOIN [Addresses] ON [Jobs].JobAddressID = dbo.Addresses.AddressID
left JOIN [Customers] ON [Jobs].JobCustomerID = [Customers].CustomerID
WHERE [AStreetAddress] IN (SELECT [AStreetAddress] FROM [Addresses] GROUP BY [AStreetAddress] HAVING COUNT(*) > 1)
```
|
Show duplicates from SQL join
|
[
"sql",
"join",
"duplicates"
] |
I wrote this SQL statement to calculate the days for each month
```
(select count(*) DAYs FROM
(
select trunc(ADD_MONTHS(sysdate,-1),'MM') + level -1 Dates from dual connect by
level <= ADD_MONTHS(trunc(sysdate,'MM'),1)-1 - trunc(sysdate,'MM')+1
) Where To_char(dates,'DY') NOT IN ('SA','SO'))
```
At the moment this statement ignores Saturdays and Sundays, and it calculates the days for the month before the sysdate (June).
June has 22 days excluding weekends, but sadly my statement says it has 23. I found out it includes the 1st of July, which is wrong.
How can I tell my statement to count only the days of the month I want, without including days from another month?
|
Doing this sort of thing is never going to look pretty... here's one way, which does it for the entire current year. You can restrict it to a single month by adding an additional condition to the where clause:
```
select to_char(trunc(sysdate, 'y') + level - 1, 'fmMON') as month, count(*)
from dual
where to_char(trunc(sysdate, 'y') + level - 1, 'fmDY', 'nls_date_language=english') not in ('SAT','SUN')
connect by level <= trunc(add_months(sysdate, 12), 'y') - trunc(sysdate, 'y')
group by to_char(trunc(sysdate, 'y') + level - 1, 'fmMON')
```
As I said, not pretty.
Note a couple of things:
* Use of the [`fm` format model modifier](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements004.htm#i170559) to remove leading spaces
* Explicit use of `nls_date_language` to ensure it'll work in all environments
* I've added 12 months to the current date and then truncated it to the first of January to get the first day of the new year for simplicity
* If you want to do this by month it might be worth looking at the [`LAST_DAY()`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions084.htm#SQLRF00654) function
The same statement (using `LAST_DAY()`) for the previous month only would be:
```
select count(*)
from dual
where to_char(trunc(sysdate, 'y') + level - 1, 'fmDY', 'nls_date_language=english') not in ('SAT','SUN')
connect by level <= last_day(add_months(trunc(sysdate, 'mm'), -1)) - add_months(trunc(sysdate, 'mm'), -1) + 1
```
|
Firstly, your inner query (`select trunc(ADD_MONTHS(sysdate,-1),'MM') + level -1 Dates from dual connect by level <= ADD_MONTHS(trunc(sysdate,'MM'),1)-1 - trunc(sysdate,'MM')+1`) returns the days of the month plus one extra day from the next month.
Secondly, a simpler query could use the LAST\_DAY function which gets the last day of the month.
Finally, use the `'D'` date format to get the day of the week as a number.
```
SELECT COUNT(*) FROM (
SELECT TO_CHAR(TRUNC(SYSDATE,'MM') + ROWNUM - 1, 'D') d
FROM dual CONNECT BY LEVEL <= TO_NUMBER(TO_CHAR(LAST_DAY(SYSDATE),'DD'))
) WHERE d BETWEEN 1 AND 5;
```
|
Oracle count days per month
|
[
"sql",
"oracle",
"sysdate"
] |
```
ProdStock
+---------+--------------+
| ID_Prod | Description |
+---------+--------------+
| 1 | tshirt |
| 2 | pants |
| 3 | hat |
+---------+--------------+
Donation
+---------+---------+----------+
| id_dona | ID_Prod | Quantity |
+---------+---------+----------+
| 1 | 1 | 10 |
| 2 | 2 | 20 |
| 3 | 1 | 30 |
| 4 | 3 | 5 |
+---------+---------+----------+
Beneficiation
+---------+---------+----------+
| id_bene | ID_Prod | Quantity |
+---------+---------+----------+
| 1 | 1 | -5 |
| 2 | 2 | -10 |
| 3 | 1 | -15 |
+---------+---------+----------+
Table expected
+---------+-------------+----------+
| ID_Prod | Description | Quantity |
+---------+-------------+----------+
| 1 | tshirt | 20 |
| 2 | pants | 10 |
| 3 | hat | 5 |
+---------+-------------+----------+
```
Donation = what is given to the institution.
Beneficiation = institution gives to people in need.
I need to achieve "Table expected". I tried `sum`, but I don't have much knowledge of SQL; it would be great if someone could help.
|
Try adding the SUMs of both together:
```
SELECT p.ID_Prod,
Description,
ISNULL(d.Quantity,0) + ISNULL(b.Quantity,0) AS Quantity
FROM ProdStock p
LEFT OUTER JOIN (SELECT ID_Prod,
SUM(Quantity) Quantity
FROM Donation
GROUP BY ID_Prod) d ON p.ID_Prod = d.ID_Prod
LEFT OUTER JOIN (SELECT ID_Prod,
SUM(Quantity) Quantity
FROM Beneficiation
GROUP BY ID_Prod) b ON p.ID_Prod = b.ID_Prod
```
|
Something like this...
```
SELECT ps.ID_Prod,
ps.Description,
SUM(d.Quantity) + SUM(b.Quantity) AS Quantity
FROM ProdStock ps
INNER JOIN Donation d ON ps.ID_Prod = d.ID_Prod
INNER JOIN Beneficiation b ON d.ID_Prod = b.ID_Prod
GROUP BY ps.ID_Prod, ps.Description
```
|
Sum quantities from different tables
|
[
"sql"
] |
I have the following table:
```
Card(
MembershipNumber,
EmbossLine,
status,
EmbossName
)
```
with sample data
```
(0009,0321,'E0','Finn')
(0009,0322,'E1','Finn')
(0004,0356,'E0','Mary')
(0004,0398,'E0','Mary')
(0004,0382,'E1','Mary')
```
I want to retrieve only those rows that have a `count` of `MembershipNumber` > 1 AND a count of rows with `status='E0'` > 1.
**For Example** The query should return following result
```
(0004,0356,'E0','Mary')
(0004,0398,'E0','Mary')
```
I have the query for filtering by the `MembershipNumber` count, but I can't figure out how to also filter by `status='E0'`. Here's the query so far:
```
SELECT *
FROM (SELECT *,
Count(MembershipNumber)OVER(partition BY EmbossName) AS cnt
FROM card) A
WHERE cnt > 1
```
|
You can just add `WHERE status = 'E0'` inside your subquery:
[**SQL Fiddle**](http://sqlfiddle.com/#!3/c34a4b/10/0) (*credit to Raging Bull for the fiddle*)
```
SELECT *
FROM (
SELECT *,
COUNT(MembershipNumber) OVER(PARTITION BY EmbossName) AS cnt
FROM card
WHERE status = 'E0'
)A
WHERE cnt > 1
```
|
You can do it this way:
```
select t1.*
from card t1 left join
(select EmbossName
from card
where [status]='E0'
group by EmbossName,[status]
having count(MembershipNumber)>1 ) t2 on t1.EmbossName=t2.EmbossName
where t2.EmbossName is not null and [status]='E0'
```
Result:
```
MembershipNumber EmbossLine status EmbossName
---------------------------------------------------
4 356 E0 Mary
4 398 E0 Mary
```
Sample result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/c34a4b/8)
|
Filter rows by count of two column values
|
[
"sql",
"sql-server"
] |
I am trying to display rows where a record has multiple categories, though my query only appears to show the first instance. I need the query to display the domain multiple times, once for each category it appears in.
The SQL statement I have is
```
SELECT domains.*, category.*
FROM domains,category
WHERE category.id IN (domains.category_id)
```
Which gives me the below results
[](https://i.stack.imgur.com/2gCNS.png)
|
You should not store numeric values in a string. Bad, bad idea. You should use a proper junction table and the right SQL constructs.
Sometimes, we are stuck with other people's bad design decisions. MySQL offers `find_in_set()` to help in this situation:
```
where find_in_set(category.id, domains.category_id) > 0
```
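`FIND_IN_SET` returns the 1-based position of the value within the comma-separated list, or 0 when it is absent, which is why the `> 0` test works:

```
SELECT FIND_IN_SET('2', '1,2,3'); -- 2: found at position 2
SELECT FIND_IN_SET('5', '1,2,3'); -- 0: not found
```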
|
Use find\_in\_set().
```
SELECT domains.*, category.*
FROM domains,category
WHERE find_in_set (category.id ,domains.category_id)
```
But it is very bad database design to store a foreign key as a CSV list.
|
SQL split column by comma in where clause
|
[
"mysql",
"sql",
"select",
"where-clause",
"where-in"
] |
**UNDERSTANDING INDEXES & MISSING INDEX RECOMMENDATIONS**
I'm trying to gain a better understanding of indexes. I have a lot of reading to do, and have found a number of valuable resources from other SO posts, some of which I've read, others I still need to read. In the meantime, I'm trying to get better performance out of my database.
I've learned that a covering index will perform better than separate indexes on individual columns, so I decided to start by deleting my individual indexes and letting the proposed query execution plan recommend indexes.
**SSMS INDEX RECOMMENDATION**
```
CREATE NONCLUSTERED INDEX IX_my_index_name
ON [dbo].[my_table] ([field_a],[field_b])
INCLUDE (
[field_1]
,[field_2]
,[field_3]
,[field_4]
,[field_5]
,[field_6]
)
```
**TABLE DETAILS**
Fields 1-6 are the columns I commonly use to join the two tables I'm using. Fields a & b are found in the where clause of a few time-consuming queries I run.
I understand using fields 1-6 because for the most part they all contain many different values, but `field a` has only about 75 distinct values, and `field b` only has 3 distinct values. This is in a table with 70MM records in it.
Note that this is a heap. All of the records in this table come from another table that has a primary key, so that unique value comes along, but it's not set up as a key or a unique index on this table. SSMS didn't recommend including that column in this index. How should I handle the unique value coming into this table? A clustered, unique index, I'm guessing?
**MY QUESTIONS**
1. I want to understand the logic behind this index recommendation. Given the information regarding the similar values in columns a & b, why was this recommended?
2. I want to understand the difference between the `ON` columns and the `INCLUDE` columns?
|
The first thing I'd ask is whether there is a good reason why a table of that size doesn't have a clustered index. A clustered key doesn't even have to be unique (SQL Server will add a 'uniquifier' to it if not, although it's usually best to use an IDENTITY column).
To answer your two questions:
1) The index recommendation is related to the query you are running. As a rule of thumb, the suggested columns will match the columns the query optimiser is using to probe into the table, so if you have a query like:
```
SELECT field1, field2, field3
FROM table1
WHERE field4 = 1 AND field5 = 'bob'
```
The suggested index is likely to be on the `field4` and `field5` columns, and in order of selectivity (i.e. the column with the most variation in values first). It may include other columns (for instance `field1, field2, field3`) because then the query optimiser will only have to visit the index to get that data, and not visit the data page.
Note also that sometimes the suggested index is not always the one you might choose yourself. If joining several tables, the query optimiser will choose the execution plan that it thinks best suits the data, based on available indexes and statistics. It might loop over one table and probe into another, when the best possible plan might do it the other way around. You have to inspect the actual query execution plan to see what is going on.
If you know your query is selective enough to drill down to a small range of records (for instance has a where clause like `WHERE table1.field1 = 1 AND table1.field2 = 'abc' AND table1.field3 = '2015-07-01' ...`), you can add an index that covers all the referenced columns. This might influence the query optimiser to scan this index to get a small number of rows to join to another table, rather than performing scans.
As a rule of thumb, a good place to start when examining the execution plans is trying to eliminate scans, where the server will be reading a large range of rows, and provide indexes that narrow down the amount of data that has to be processed.
2) I think others have probably explained this well enough by now - the included columns are there so that when the index is read, the server doesn't then have to read the data page to get those values; they are stored on the index as well.
The initial response a lot of people may have when they read about such 'covering indexes' is "why don't I add a whole bunch of indexes that do this", or "why don't I add an index that covers all the columns".
In some situations (usually small tables with narrow columns, such as many-to-many joining tables), this is useful. However, with each index you add comes some costs:
Firstly, every time you update or insert a value into your table, the index has to be updated. This means you will have to contend with locking, lock escalation issues (possibly deadlocking), page splits, and the associated fragmentation. There are various ways to mitigate these issues, such as using an appropriate fill-factor to allow more values to be inserted into an index page without having to split it.
Secondly, indexes take up space. At the very least, an index is going to contain the key values you use and either the RID (in a heap) or the clustering key (in a table with a clustered index). Covering indexes also contain a copy of the included columns. If these are large columns (such as big varchars) then the index can be quite large, and it is not unheard of for a table's indexes to add up to more than the table itself. Note that there are also limits on the size of an index, both in terms of columns and total size. Because the clustering key is always included in non-clustered indexes on a table with a clustered index (the clustered index is the data page itself), a smaller clustered key is better. Whilst you can use a composite index, this is likely to be several bytes wide, and whilst you can use a non-unique key, SQL Server will add that uniquifier to it, which is another 4 bytes. Best practice is to use an identity column (int, or bigint if you envisage ever having more than 2 billion rows in the table). Identities also always increment, so you won't get page splits in your data pages when inserting a new record, as it will always go at the end of the table.
So the tl;dr is:
The suggested indexes can be useful, but often don't give the best index. If you know the structure of your data and how it will be queried, you can construct indexes that contain the commonly used probing keys.
Always order the columns in your index in order of *selectivity* (i.e. the column with the most distinct values first). This might seem counter-intuitive, but it allows SQL Server to find the data you want faster, with fewer reads.
Included columns are useful, but usually only when they are small columns (e.g. integers). If your query needs six columns from a table and the index covers only five of them, SQL Server will still have to visit the data page, so in that case you're better off without the included columns, because they just take up space and have a maintenance cost.
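To make the tl;dr concrete, here is a sketch (the table and column names are hypothetical, not from the question) of a composite index with the most selective column first, plus one small included column:

```sql
-- Hypothetical table: assume CustomerId has many distinct values
-- (highly selective) and Region has few, so CustomerId leads the key.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Region
    ON dbo.Orders (CustomerId, Region)
    INCLUDE (OrderTotal);   -- small column, stored at the leaf level

-- This query can then be satisfied entirely from the index:
SELECT CustomerId, Region, OrderTotal
FROM dbo.Orders
WHERE CustomerId = 42 AND Region = 'NW';
```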
|
The ON columns in the index can be used for searching the rows. Those fields are included in the index tree. Once the rows are found, if any additional columns are needed, for example fields in select part or joins, they have to be fetched from the table. This is called a `key lookup` in the execution plan.
If the index has multiple columns, and not all columns are specified in the where clause, the columns can be used from first onwards as long as the fields are given. For example index has fields A, B, C, D and where clause has fields A, B and D, then only A and B can be used to fetch the data.
If the table has a clustered index, the values of the keys in the clustered index are stored in the other indexes and are used to find the row in the table itself. If there is no clustered index, the RID (Row ID) is used in a similar way to locate the rows in the table.
The include columns in index are additional columns and their data is stored at the leaf level of the non-clustered index. This way SQL Server can read the data directly from there and skip the whole part of reading the table. This is called a `covering index`.
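As a sketch of that difference (hypothetical table and columns, not from the question): without the `INCLUDE`, the query below would seek on the index and then perform a key lookup per row to fetch `Email`; with it, the index covers the query.

```sql
-- Hypothetical example: the key column (LastName) is searchable; the
-- included column (Email) is only stored at the leaf level, so the
-- query below needs no key lookup into the base table.
CREATE NONCLUSTERED INDEX IX_Customers_LastName
    ON dbo.Customers (LastName)
    INCLUDE (Email);

SELECT LastName, Email      -- fully covered by the index
FROM dbo.Customers
WHERE LastName = 'Smith';
```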
|
Understanding Indexes and Missing Index Recommendations in SSMS
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a table `CallTable` with columns named `Caller_id` and `Is_picked`, the latter containing the status of whether the call was picked up or not.
```
Caller_id Is_picked
1 no
1 yes
1 no
2 no
3 no
```
I want the callers who never picked up a call. In the above case, 2 and 3 would be the output.
|
You can `group by` each unique `Caller_id` and check that the conditional count of rows with `Is_picked = 'yes'` is 0 within the group:
```
select `Caller_id`
from `CallTable`
group by `Caller_id`
having sum(`Is_picked` = 'yes') = 0
```
**[SQLFiddle Demo](http://sqlfiddle.com/#!9/32f0d/1)**
|
You can do this with the `exists` operator:
```
SELECT DISTINCT caller_id
FROM call_table a
WHERE NOT EXISTS (SELECT *
FROM call_table b
                  WHERE a.caller_id = b.caller_id AND is_picked = 'yes')
```
|
Include Caller_id if all corresponding rows doesn't match certain condition
|
[
"",
"mysql",
"sql",
""
] |
Consider an INSERT statement running on a table TABLE\_A which takes a long time; I would like to see how far it has progressed.
What I tried was to open up a new session (new query window in SSMS) while the long running statement is still in process, I ran the query
```
SELECT COUNT(1) FROM TABLE_A WITH (nolock)
```
hoping that it would return right away with the current number of rows every time I ran the query, but the test result was that even with (nolock) it only returns after the INSERT statement has completed.
What have I missed? Do I add (nolock) to the INSERT statement as well? Or is this not achievable?
---
(Edit)
OK, I have found what I missed. If you first use CREATE TABLE TABLE\_A and then INSERT INTO TABLE\_A, the SELECT COUNT will work. If you use SELECT \* INTO TABLE\_A FROM xxx without first creating TABLE\_A, then none of the following will work (not even sysindexes).
|
If you are using SQL Server 2016 the [live query statistics](https://msdn.microsoft.com/en-us/library/dn831878.aspx) feature can allow you to see the progress of the insert in real time.
The below screenshot was taken while inserting 10 million rows into a table with a clustered index and a single nonclustered index.
It shows that the insert was 88% complete on the clustered index and this will be followed by a sort operator to get the values into non clustered index key order before inserting into the NCI. This is a blocking operator and the sort cannot output any rows until all input rows are consumed so the operators to the left of this are 0% done.
[](https://i.stack.imgur.com/q3DVq.png)
With respect to your question on `NOLOCK`
It is trivial to test
## Connection 1
```
USE tempdb
CREATE TABLE T2
(
X INT IDENTITY PRIMARY KEY,
F CHAR(8000)
);
WHILE NOT EXISTS(SELECT * FROM T2 WITH (NOLOCK))
LOOP:
SELECT COUNT(*) AS CountMethod FROM T2 WITH (NOLOCK);
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('T2');
RAISERROR ('Waiting for 10 seconds',0,1) WITH NOWAIT;
WAITFOR delay '00:00:10';
SELECT COUNT(*) AS CountMethod FROM T2 WITH (NOLOCK);
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('T2');
RAISERROR ('Waiting to drop table',0,1) WITH NOWAIT
DROP TABLE T2
```
## Connection 2
```
use tempdb;
--Insert 2000 * 2000 = 4 million rows
WITH T
AS (SELECT TOP 2000 'x' AS x
FROM master..spt_values)
INSERT INTO T2
(F)
SELECT 'X'
FROM T v1
CROSS JOIN T v2
OPTION (MAXDOP 1)
```
## Example Results - Showing row count increasing
[](https://i.stack.imgur.com/emFq7.png)
`SELECT` queries with `NOLOCK` allow dirty reads. They don't actually take no locks and can still be blocked, they still need a `SCH-S` (schema stability) lock on the table ([and on a heap it will also take a `hobt` lock](http://tenbulls.co.uk/2011/10/14/nolock-hits-mythbusters/)).
The only thing incompatible with a `SCH-S` is a `SCH-M` (schema modification) lock. Presumably you also performed some DDL on the table in the same transaction (e.g. perhaps created it in the same tran)
For the use case of a large insert, where an approximate in flight result is fine, I generally just poll `sysindexes` as shown above to retrieve the count from metadata rather than actually counting the rows ([non deprecated alternative DMVs are available](http://blogs.msdn.com/b/martijnh/archive/2010/07/15/sql-server-how-to-quickly-retrieve-accurate-row-count-for-table.aspx))
When an insert has a [wide update plan](http://blogs.msdn.com/b/bartd/archive/2006/07/27/wide-vs-narrow-plans.aspx) you can even see it inserting to the various indexes in turn that way.
If the table is created inside the inserting transaction this `sysindexes` query will still block though as the `OBJECT_ID` function won't return a result based on uncommitted data regardless of the isolation level in effect. It's sometimes possible to get around that by getting the object\_id from `sys.tables` with `nolock` instead.
|
**Short answer**: You can't do this.
**Longer answer**: A single INSERT statement is an [atomic](https://en.wikipedia.org/wiki/Atomicity_(database_systems)) operation. As such, the query has either inserted all the rows or has inserted none of them. Therefore you can't get a count of how far through it has progressed.
**Even longer answer**: Martin Smith has given you a way to achieve what you want. Whether you still want to do it that way is up to you of course. Personally I still prefer to insert in manageable batches if you really need to track progress of something like this. So I would rewrite the INSERT as multiple smaller statements. Depending on your implementation, that may be a trivial thing to do.
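As a rough sketch of that batching approach (the source table, key, and column names here are placeholders, not from the question), each small INSERT commits on its own, so a concurrent `SELECT COUNT(*)` can observe progress between batches:

```sql
-- Move rows in batches of 10000. @@ROWCOUNT after the INSERT tells us
-- when the source is exhausted. Each iteration is its own transaction,
-- so another session can watch the row count of TABLE_A grow.
DECLARE @BatchSize INT = 10000;

WHILE 1 = 1
BEGIN
    INSERT INTO TABLE_A (Col1, Col2)
    SELECT TOP (@BatchSize) s.Col1, s.Col2
    FROM SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM TABLE_A a WHERE a.Id = s.Id);

    IF @@ROWCOUNT < @BatchSize BREAK;
END;
```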
|
How to SELECT COUNT from tables currently being INSERT?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Is it possible to stop SQL from duplicating lines from a table when creating a JOIN to a table with more than one line?
Table 1
```
Car Name Colour Size
Car 1 Red big
car 2 Blue small
Car 3 Green small
```
Table 2
```
Car Name Part Number
Car 1 123456
Car 1 234567
Car 1 345678
Car 2 ABCDEFG
Car 2 BCDEFGH
Car 2 CDEFGHI
```
Then Join Table 1 with Table 2 on "Car Name" but only have the information once from each table,
Resulting SQL View
```
Car Name Colour Size Part Number
Car 1 Red big 123456
NULL NULL NULL 234567
NULL NULL NULL 345678
Car 2 Blue small ABCDEFG
NULL NULL NULL BCDEFGH
NULL NULL NULL CDEFGHI
```
edit: if the original "Car Name" column is duplicated, that isn't a problem. That's not really made clear above because I've put NULLs under that column, but I understand that's the column it's joined on and that information is already on the lines of the second table. It's more about being able to stop the duplication of the other information that isn't in table 2.
|
As mentioned in the comments, it is recommended to do this kind of data formatting in the client application, but if you still need a SQL answer then try something like this.
```
WITH cte
AS (SELECT RN=Row_number() over(PARTITION BY a.Car_Name ORDER BY (SELECT NULL)),
a.car_name,
a.color,
a.size,
b.part_number
       FROM table1 a
            INNER JOIN table2 b
                    ON a.car_name = b.car_name)
SELECT car_name=CASE WHEN rn = 1 THEN car_name ELSE NULL END,
color=CASE WHEN rn = 1 THEN color ELSE NULL END,
size=CASE WHEN rn = 1 THEN size ELSE NULL END,
part_number
FROM cte
```
|
You would just need to assign a `ROW_NUMBER()` and populate table1 fields only when row number = 1 something like this:
```
WITH q1
AS (
SELECT t2.*
,row_number() OVER (
PARTITION BY t2.CarName ORDER BY PartNumber
) AS rn
FROM table2 t2
)
SELECT CASE
WHEN q1.rn = 1
THEN t1.CarName
ELSE NULL
END AS CarName
,CASE
WHEN q1.rn = 1
THEN t1.Colour
ELSE NULL
END AS Colour
,CASE
WHEN q1.rn = 1
THEN t1.Size
ELSE NULL
END AS Size
,q1.PartNumber
FROM q1
INNER JOIN table1 t1 ON t1.carname = q1.carname
```
[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/b14d7b/3/0)
|
SQL join 1 line and return NULLs for others
|
[
"",
"sql",
"sql-server-2008",
""
] |
I'm trying to split a string with regexp\_substr, but I can't make it work.
So, first, I have this query:
```
select regexp_substr('Helloworld - test!' ,'[[:space:]]-[[:space:]]') from dual
```
which very nicely extracts my delimiter - *blank*-*blank*
But then, when I try to split the string with this option, it just doesn't work.
```
select regexp_substr('Helloworld - test!' ,'[^[[:space:]]-[[:space:]]]+')from dual
```
The query returns nothing.
Help will be much appreciated!
Thanks
|
[SQL Fiddle](http://sqlfiddle.com/#!4/6fa6f/2)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE TEST( str ) AS
SELECT 'Hello world - test-test! - test' FROM DUAL
UNION ALL SELECT 'Hello world2 - test2 - test-test2' FROM DUAL;
```
**Query 1**:
```
SELECT Str,
COLUMN_VALUE AS Occurrence,
REGEXP_SUBSTR( str ,'(.*?)([[:space:]]-[[:space:]]|$)', 1, COLUMN_VALUE, NULL, 1 ) AS split_value
FROM TEST,
TABLE(
CAST(
MULTISET(
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL < REGEXP_COUNT( str ,'(.*?)([[:space:]]-[[:space:]]|$)' )
)
AS SYS.ODCINUMBERLIST
)
)
```
**[Results](http://sqlfiddle.com/#!4/6fa6f/2/0)**:
```
| STR | OCCURRENCE | SPLIT_VALUE |
|-----------------------------------|------------|--------------|
| Hello world - test-test! - test | 1 | Hello world |
| Hello world - test-test! - test | 2 | test-test! |
| Hello world - test-test! - test | 3 | test |
| Hello world2 - test2 - test-test2 | 1 | Hello world2 |
| Hello world2 - test2 - test-test2 | 2 | test2 |
| Hello world2 - test2 - test-test2 | 3 | test-test2 |
```
|
Trying to negate the match string `'[[:space:]]-[[:space:]]'` by putting it in a character class with a circumflex (^) will not work. Everything between a pair of square brackets is treated as a list of optional single characters, except for named character classes, which expand out to a list of optional characters. Due to the way character classes nest, it's very likely that your outer brackets are being interpreted as follows:
* `[^[[:space:]]` A single non space non left square bracket character
* `-` followed by a single hyphen
* `[[:space:]]` followed by a single space character
* `]+` followed by 1 or more closing square brackets.
It may be easier to convert your multi-character separator to a single character with regexp\_replace, then use regexp\_substr to find your individual pieces:
```
select regexp_substr(regexp_replace('Helloworld - test!'
,'[[:space:]]-[[:space:]]'
,chr(11))
,'([^'||chr(11)||']*)('||chr(11)||'|$)'
,1 -- Start here
,2 -- return 1st, 2nd, 3rd, etc. match
,null
,1 -- return 1st sub exp
)
from dual;
```
In this code I first changed `-` to `chr(11)`. That's the ASCII vertical tab (VT) character which is unlikely to appear in most text strings. Then the match expression of the regexp\_substr matches all non VT characters followed by either a VT character or the end of line. Only the non VT characters are returned (the first subexpression).
|
Split string by space and character as delimiter in Oracle with regexp_substr
|
[
"",
"sql",
"regex",
"oracle",
"split",
"regexp-substr",
""
] |
I am building an Android app for a bus schedule between my village and Barcelona.
I have built a sqlite database as follows (probably it's a really bad design, but I'm a newbie in databases):
```
run | stopCode | time
1.........1.......620
1.........2.......650
.
.
.
5........11.......NULL
5........12.......1020
```
And so on.
I use the following SQL statement in order to pick the rows that match my criteria:
```
SELECT run,stopCode,time
FROM tablename
WHERE time> x AND time <>'' AND (stopCode=1 or stopCode=8);
```
Using this sentence gives me an output similar to this:
```
6(run)|8(stopCode)|1045(time)
9|1|1900
9|8|2000
```
My problem is that I want to select **only** the rows that have the run value repeated twice. If it only appears once, it means that the bus has a NULL destination in that particular run.
I tried using `COUNT(*) c`, `blabla c=2`, but that gives me only the second row with the repeated run number, and I want to select both of them (which would correspond to where you pick the bus and where you want to go down). Any clues how to do it?
Edit:
this is the statement i tried
```
SELECT run,stopNumber,time,COUNT(run) c
FROM Directes_dirBarna
WHERE time> 600 and time<>'' and (stopNumber=1 or stopNumber=8)
GROUP BY run having c=2;
```
Edit2:
Thanks to Marcos Vasconcelos I fixed it. This is my final statement, if anyone has the same problem:
```
SELECT run,parada, hora FROM Directes_dirBarna taula WHERE hora> 600
and TIME<>'' IN (parada=1, parada=8);
```
|
Here you go:
```
select t1.run, t1.code, t2.code from tablename t1, tablename t2
where t1.run = t2.run and t1.code = 1 and t2.code = 8;
```
You select 2 rows (t1 and t2) such as they are on the same run, one is the departure, and one is the arrival. It will give you a single line for each run that satisfies the condition.
See <http://sqlfiddle.com/#!5/7d3f3/3>
|
You could use the IN operator and repeat the query, but that would be a poor choice if any other option is available.
|
How to select only the rows that have a column with a repeated value?
|
[
"",
"android",
"sql",
"sqlite",
""
] |
How can I remove duplicates and merge Account Types?
I have a call log that reports duplicate phones based on Account Type.
**For example:**
```
Telephone | Account Type
304-555-6666 | R
304-555-6666 | C
```
* I know how to remove duplicate Telephones using RANK\MAXCOUNT
* But before removing duplicates I need to reset the Account Type to “B” if the duplicates have multiple account types.
In the example the surviving duplicate would be:
```
Telephone | Account Type
304-555-6666 | B
```
**Warning, it is not guaranteed that duplicate phones have multiple Account Types.**
Example:
```
Telephone | Account Type
999-888-6666 | R
999-888-6666 | R
```
Therefore the surviving duplicate should be:
```
Telephone | Account Type
999-888-6666 | R
```
**How can I remove duplicates and reset the account type at the same time?**
```
--
-- Remove Duplicate Recordings
--
SELECT * FROM (
SELECT i.dateofcall ,
i.recordingfile ,
i.telephone ,
s.accounttype ,
ROW_NUMBER() OVER (PARTITION BY i.telephone ORDER BY i.dateofcall DESC) AS 'RANK' ,
COUNT(i.telephone) OVER (PARTITION BY i.telephone) AS 'MAXCOUNT'
FROM #myactions i
LEFT JOIN #myphone s ON s.interactionID = i.Interactionid
) x
WHERE [RANK] = [MAXCOUNT]
```
|
```
SELECT * FROM (
SELECT i.dateofcall ,
i.recordingfile ,
i.telephone ,
s.accounttype ,
ROW_NUMBER() OVER (PARTITION BY i.telephone ORDER BY i.dateofcall DESC) AS 'RANK' ,
COUNT(i.telephone) OVER (PARTITION BY i.telephone) AS 'MAXCOUNT',
DENSE_RANK() OVER ( PARTITION BY i.telephone ORDER BY s.accounttype DESC ) AS 'ContPhone'
FROM #myactions i
LEFT JOIN #myphone s ON s.interactionID = i.Interactionid
) x
WHERE [RANK] = [MAXCOUNT]
```
|
Try this?
```
select
x.dateofcall
, x.recordingfile
, x.telephone
    , case when count(*) > 1 then 'B' else max(x.accounttype) end accounttype
from
    (
select
i.dateofcall
, i.recordingfile
, i.telephone
, s.accounttype
from
#myactions i
LEFT JOIN #myphone s ON s.interactionID = i.Interactionid
group by
i.dateofcall
, i.recordingfile
, i.telephone
, s.accounttype
) x
group by
x.dateofcall
, x.recordingfile
, x.telephone
```
|
Remove Duplicates while Merging values
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
My question similar to [MySQL Split String and Select with results](https://stackoverflow.com/questions/30071597/mysql-split-string-and-select-with-results). Currently I have 2 tables:
**`student`**
```
uid | subject_id | name
1 | 1^2^3^4 | a
2 | 2^3^ | b
3 | 1 | c
```
**`subject`**
```
uid | subject_name
1 | math
2 | science
3 | languange
4 | sport
```
The result I expected is:
```
uid | name | subject_passed
1 | a | math, science, languange, sport
2 | b | science, languange
 3 | c | math
```
I have tried this query:
```
SELECT
student.uid,
student.name,
group_concat(subject.subject_name) as subjects_passed
from student
join subject on find_in_set(subject.uid,student.subject_id ) > 0
group by student.uid
```
Which returns the error:
> #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use
> near 'join subject on find\_in\_set(subject.uid,student.subject\_id ) > 0
> group' at line 7
I believe this is because of [`FIND_IN_SET`](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set). According to the documentation, this function expects `,` as the delimiter. Is there any alternative query I might use?
|
Why not [`REPLACE`](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_replace) the separator:
```
SELECT
student.uid,
student.name,
GROUP_CONCAT(subject.subject_name) AS subjects_passed
FROM student
JOIN subject ON FIND_IN_SET(subject.uid, REPLACE(student.subject_id, '^', ',')) > 0
GROUP BY student.uid
```
[SQLFiddle](http://www.sqlfiddle.com/#!2/270b8/1)
---
If you decide to de-normalize your tables it is fairly straight forward to create the junction table and generate the data:
```
-- Sample table structure
CREATE TABLE student_subject (
student_id int NOT NULL,
subject_id int NOT NULL,
PRIMARY KEY (student_id, subject_id)
);
-- Sample query to denormalize student <-> subject relationship
SELECT
student.uid AS student_id,
subject.uid AS subject_id
FROM student
JOIN subject ON FIND_IN_SET(subject.uid, REPLACE(student.subject_id, '^', ',')) > 0
```
```
+------------+------------+
| student_id | subject_id |
+------------+------------+
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 1 | 4 |
| 2 | 2 |
| 2 | 3 |
| 3 | 1 |
+------------+------------+
```
|
You should never store delimiter-separated data in a column; you should normalize the tables and create a third table to store the student-to-subject relation.
However in the current case you may do it as
```
select
st.uid,
st.name,
group_concat(sb.subject_name) as subject_name
from student st
left join subject sb on find_in_set(sb.uid,replace(st.subject_id,'^',',')) > 0
group by st.uid
```
Here is the option to create the third table and store the relation:
```
create table student_to_subject (id int primary key auto_increment, stid int, subid int);
insert into student_to_subject(stid,subid) values
(1,1),(1,2),(1,3),(1,4),(2,2),(2,3),(3,1);
```
Now you can remove the column `subject_id` from the `student` table
So the query becomes
```
select
st.uid,
st.name,
group_concat(sb.subject_name) as passed_subject
from student st
join student_to_subject sts on sts.stid = st.uid
join subject sb on sb.uid = sts.subid
group by st.uid;
```
<http://www.sqlfiddle.com/#!9/f02df>
|
MySQL Select from column use ^ as delimiter
|
[
"",
"mysql",
"sql",
""
] |
I have a table List shown below:
```
+------+-------------------------------------+
| Code | name |
+------+-------------------------------------+
| A001 | ABBOTT_1000000 |
| A002 | AGCO_1000001 |
| A003 | ALFA LAVAL_1000002 |
| A004 | ALSTOM POWER INDIA LIMITED_1000003 |
| A005 | AMERICAN BUREAU OF SHIPPING_1000004 |
+------+-------------------------------------+
```
I need to update the table by extracting the characters present after the \_ in name and placing them in the code column, like this:
```
+---------+-------------------------------------+
| Code | name |
+---------+-------------------------------------+
| 1000000 | ABBOTT_1000000 |
| 1000001 | AGCO_1000001 |
| 1000002 | ALFA LAVAL_1000002 |
| 1000003 | ALSTOM POWER INDIA LIMITED_1000003 |
| 1000004 | AMERICAN BUREAU OF SHIPPING_1000004 |
+---------+-------------------------------------+
```
This has to be done in SQL Server 2012. Please help me.
|
You can try to use `SUBSTRING` in following:
**SAMPLE DATA**
```
CREATE TABLE #MyTable
(
Code NVARCHAR(60),
Name NVARCHAR(60)
)
INSERT INTO #MyTable VALUES
('A001','ABBOTT_1000000' ),
('A002','AGCO_1000001' ),
('A003','ALFA LAVAL_1000002' ),
('A004','ALSTOM POWER INDIA LIMITED_1000003' ),
('A005','AMERICAN BUREAU OF SHIPPING_1000004' )
```
**QUERY**
```
UPDATE #MyTable
SET Code = SUBSTRING(Name, CHARINDEX('_', Name) + 1, LEN(Name))
```
**TESTING**
```
SELECT * FROM #MyTable
DROP TABLE #MyTable
```
**OUTPUT**
```
Code Name
1000000 ABBOTT_1000000
1000001 AGCO_1000001
1000002 ALFA LAVAL_1000002
1000003 ALSTOM POWER INDIA LIMITED_1000003
1000004 AMERICAN BUREAU OF SHIPPING_1000004
```
**SQL FIDDLE**
**`DEMO`**
|
Try this
```
with cte as
(
select substring(name,charindex('_',name)+1,len(name)) as ext_str,*
from yourtable
)
update cte set code = ext_str
```
|
Extract characters after a symbol in sql server 2012
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I have a table `Orders (Id, OrderDate, CreatorId)` and a table `OrderLines (Id, OrderId, OwnerIdentity, ProductId, Amount)`
The scenario is as follows: someone opens up an `Order` and other users can then place their product orders on that order. Those users are identified by the `OwnerIdentity` of `OrderLines`.
I need to retrieve the top 3 latest orders that a user has placed an order on and display all of the orders he placed, to give him insight into his recent personal orders.
So my end result would be something like
```
OrderId | ProductId | Amount
----------------------------
1 | 1 | 2
1 | 7 | 1
1 | 2 | 5
4 | 4 | 3
4 | 1 | 2
8 | 4 | 1
8 | 9 | 2
```
|
```
Select o.Id as OrderId, ol.ProductId, ol.Amount from Orders o
inner join OrderLines ol
on o.Id = ol.OrderId where o.Id in
(Select top 3 OrderId from OrderLines where OwnerIdentity = @OwnerId)
Order By o.OrderDate desc
```
You could add a datetime column to the OrderLines table to query the latest personal orders, and then update the code by moving the "order by OrderDate desc" clause into the sub-select query.
|
Try the below query:
```
SELECT OL.OrderId, OL.ProductID, OL.Amount
FROM OrderLines OL WHERE OL.OrderId IN
(
SELECT TOP 3 O.OrderID FROM orders O LEFT JOIN OrderLines OL2
ON OL2.orderId=O.OrderID
WHERE OL2.OwnerIdentity =...
ORDER BY O.OrderDate DESC
) AND OL.OwnerIdentity =...
```
|
Get top n occurences based on related table value
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am working on db2 z/os 10.
When I am updating a single column, I can write a query like this (in order to prevent a null in column A.COL1):
---
```
UPDATE TABLE1 A
SET A.COL1 = COALESCE((
SELECT B.COL1
FROM TABLE2 B
WHERE A.KEY = B.KEY
), '');
```
---
However, in the case of updating multiple columns, since I cannot use COALESCE, I have to use an "EXISTS" clause (like below).
---
```
UPDATE TABLE1 A
SET (
A.COL1
,A.COL2
) = (
SELECT B.COL1
,B.COL2
FROM TABLE2 B
WHERE A.KEY = B.KEY
)
WHERE EXISTS (
SELECT 'X'
FROM TABLE2 B
WHERE A.KEY = B.KEY
);
```
---
Can I rewrite the second query using only a scalar function (without EXISTS) and still prevent updating COL1 and COL2 to null?
|
I *think* the following will work (I don't have DB2 on hand to test it):
```
UPDATE TABLE1 A
SET (A.COL1, A.COL2) =
(SELECT COALESCE(MAX(B.COL1), A.COL1),
COALESCE(MAX(B.COL2), A.COL2)
FROM TABLE2 B
WHERE A.KEY = B.KEY
);
```
The `MAX()` guarantees that exactly one row is returned -- even when there are no matches. The rest just chooses which value to use. Do note that this will keep the current value, even when the matching value is `NULL` (that is, there is a match but the value in the row is `NULL`).
Also, the `EXISTS` in the `WHERE` clause is typically a *good* idea, because it reduces the number of rows that need to be accessed for the update.
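Combining the two points above, a sketch (untested on DB2) that keeps both the `MAX()` guard, so the correlated subquery always returns exactly one row, and the `EXISTS` filter, so only rows with a match are touched:

```sql
UPDATE TABLE1 A
SET (A.COL1, A.COL2) =
    (SELECT COALESCE(MAX(B.COL1), A.COL1),
            COALESCE(MAX(B.COL2), A.COL2)
     FROM TABLE2 B
     WHERE A.KEY = B.KEY)
WHERE EXISTS (
    SELECT 1
    FROM TABLE2 B
    WHERE A.KEY = B.KEY
);
```

With the `EXISTS` predicate in place the `COALESCE` fallbacks should never actually fire, but they remain a cheap safety net.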
|
Maybe you are looking for something like this:
```
UPDATE A
SET A.COL1 = COALESCE(B.COL1,'')
,A.COL2 = COALESCE(B.COL2,'')
FROM table1 A
JOIN TABLE2 B ON A.[KEY] = B.[KEY]
```
|
When updating multiple columns, prevent updating null in case of no row is returned from select
|
[
"",
"sql",
"coalesce",
"db2-400",
""
] |
Database looks like:
```
ID | volume | timestamp (timestamp without time zone)
1 | 300 | 2015-05-27 00:
1 | 250 | 2015-05-28 00:
2 | 13 | 2015-05-25 00:
1 | 500 | 2015-06-28 22:
1 | 100 | 2015-06-28 23:
2 | 11 | 2015-06-28 21:
2 | 15 | 2015-06-28 23:
```
Is there any way to merge the hourly price history that is older than 1 month into daily records and put them back into the table? That means merging each day's hourly records into 1 record, with the summed volume and a timestamp at hour 00 (I mean only the day, e.g. 2013-08-15 00:00:00).
So, wanted result:
```
ID | volume | timestamp
1 | 300 | 2015-05-27 00:
1 | 250 | 2015-05-28 00:
2 | 13 | 2015-05-25 00:
1 | 600 | 2015-06-28 00:
2 | 26 | 2015-06-28 00:
```
|
Thanks for your help @Pholochtairze and @a\_horse\_with\_no\_name.
Final query:
```
WITH merged_history AS (
SELECT item_id, SUM(history_volume) AS history_volume,
(SUM(history_medianprice * history_volume) / SUM(history_volume)) AS history_medianprice,
history_timestamp::date
FROM prices_history
WHERE history_timestamp < NOW()::date - INTERVAL '1 month'
GROUP BY item_id, history_timestamp::date),
upsert AS (
UPDATE prices_history AS h
SET history_volume = mh.history_volume, history_medianprice = mh.history_medianprice
FROM merged_history AS mh
WHERE h.item_id = mh.item_id AND h.history_timestamp = mh.history_timestamp RETURNING h.*)
INSERT INTO prices_history
SELECT item_id, history_volume, history_medianprice, history_timestamp
FROM merged_history
WHERE NOT EXISTS (
SELECT 1 FROM upsert AS u, merged_history AS mh
WHERE u.item_id = mh.item_id AND u.history_timestamp = mh.history_timestamp);
DELETE FROM prices_history
WHERE history_timestamp != history_timestamp::date
AND history_timestamp < NOW()::date - INTERVAL '1 month';
```
(It took 2 minutes to complete this challenge (3m rows) :D I will run it once a week. It will be modified a bit later, because there is no need to merge already-merged data.)
|
Looks like a simply group by based on the date without the time:
```
select id,
sum(volume) as volume,
timestamp::date as timestamp
from the_table
group by id, timestamp::date
order by id, timestamp::date;
```
`timestamp::date` will cast the column named `timestamp`(which btw is a *horrible* name for a column) to a `date` thus removing the time part of the timestamp (the data type).
`timestamp::date` is Postgres specific. The ANSI SQL equivalent is `cast(timestamp as date)` (did I mention that *timestamp* is a horrible name for a column?)
|
Merge hourly prices history to daily
|
[
"",
"sql",
"postgresql",
""
] |
I recently imported some old data into a new SQL table and forgot to set the ID column to auto increment.
Now I have several "NULL"s in the column named ID.
Is there a way I can update all ID columns with a unique ID automatically?
Sadly, I have no clue how.
After that I will be able to set the column to INT and Auto Increment.
Thank you for your help.
|
Try using **Sequence Object** for Sql Server 2012
```
create Sequence Sq as int
minvalue 1
cycle;
update table set Column=NEXT VALUE FOR Sq where Column is null
```
|
The easiest way is to remove your old ID Column and add a new column ID with :
```
ALTER TABLE [dbo].[myTable] DROP COLUMN ID
ALTER TABLE [dbo].[myTable] ADD ID int IDENTITY
```
|
MS SQL update column with auto incremented value
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have 2 tables A and B with the following data: (I use oracle 11g)
[](https://i.stack.imgur.com/vncrv.png)
[](https://i.stack.imgur.com/mh8w7.png)
I need to combine the 2 tables above into 1 table based on the field "Code". This illustration is just a simplified version of my bigger problem at work. Basically, I have a form with structure stored in B and responses stored in A. However, the table A where the responses are kept does not keep the headers which are stored in B. In the report, I need to print the header along with the response. Anyway, not wanting to complicate the issue, the result I am looking for is in the following format:
[](https://i.stack.imgur.com/2FiZa.png)
Is it feasible to produce the result I want using select, join and possibly union? I cannot make it work. The statement I have come up with so far is:
```
select * from b left outer join a
on b."Code"= a."Code"
```
But the result is not what I am looking for. Is this even feasible without creating a procedure to format it? Ideally, it should be put in a view.
Below is the script to generate my test data:
```
CREATE TABLE "A"
( "Id" NUMBER,
"Row" NUMBER,
"Description" VARCHAR2(20 BYTE),
"Type" NUMBER,
"Answer" VARCHAR2(20 BYTE),
"Code" VARCHAR2(20 BYTE),
"ClientId" NUMBER
) ;
Insert into A ("Id","Row","Description","Type","Answer","Code","ClientId") values (1,1,'Question 1',2,'ABC','QCONTROL',1000);
Insert into A ("Id","Row","Description","Type","Answer","Code","ClientId") values (2,3,'Question 2',2,'DEC','QCONTROL',1000);
Insert into A ("Id","Row","Description","Type","Answer","Code","ClientId") values (1,1,'Question 1',2,'XYZ','QCONTROL',2000);
Insert into A ("Id","Row","Description","Type","Answer","Code","ClientId") values (2,3,'Question 2',2,'STU','QCONTROL',2000);
Insert into A ("Id","Row","Description","Type","Answer","Code","ClientId") values (3,1,'Question 5',3,'JKL','QCONTROL',3000);
CREATE TABLE "B"
( "Id" NUMBER,
"Desc" VARCHAR2(20 BYTE),
"Row" NUMBER,
"Type" NUMBER,
"Code" VARCHAR2(20 BYTE)
) ;
Insert into B ("Id","Desc","Row","Type","Code") values (10,'----------',2,0,'QCONTROL');
Insert into B ("Id","Desc","Row","Type","Code") values (20,'**********',10,0,'OTHER');
```
|
Is this what you want?
```
with res as(
select "Id" , "Row" , "Description" , "Type" , "Answer", "Code", "ClientId" from A
union all
select B."Id" as "Id" , B."Row" as "Row" , B."Desc" as "Description",
B."Type" as "Type", null as "Answer", B."Code" as "Code", A1."ClientId" as "ClientId"
from B inner join (select distinct "ClientId", "Code" from A) A1 ON B."Code"= A1."Code"
)
select * from res order by "Code", "ClientId", "Answer"
```
[SqlFiddle Link](http://sqlfiddle.com/#!4/6a6e1/11)
|
I'm not sure about the syntax (I'm not in front of an Oracle instance...), but my script can be useful as a starting point.
```
Select * from
(Select Id, '9999' as Row, Desc,0 as type, B.Code, A.ClientId
from B, A
where A.Code=B.Code
union all
Select * from A) ab
order by ab.ClientId, ab.row
```
I put '9999' in Row to order it correctly.
I hope it helps!
|
Combining 2 Oracle Tables
|
[
"",
"sql",
"oracle",
""
] |
I keep getting this error anytime I try to run the SQL statement. What could be wrong?
> Ambiguous column name 'trafficCrashReportId'.
|
The `trafficCrashReportId` is present in both tables (`InvolvedVehicle` and `TrafficCrashReport`)
You'll need to specify which one should be selected... This example shows how to select the `trafficCrashReportId` from the `InvolvedVehicle` table.
```
Invoke-Sqlcmd -Query "select InvolvedVehicle.[trafficCrashReportId]
          ,InvolvedVehicle.[involvedVehicleId]
          ,[assocVehNum]
          ,[wasStrikingVehicle]
      from InvolvedVehicle INNER JOIN TrafficCrashReport ON InvolvedVehicle.trafficCrashReportId = TrafficCrashReport.trafficCrashReportId ...
```
|
The problem is that the column is in two tables, and the database doesn't know which to choose. The best solution is to use table aliases and always qualify your column names:
```
select iv.[trafficCrashReportId], iv.[involvedVehicleId],
       tcr.[assocVehNum], tcr.[wasStrikingVehicle]
from InvolvedVehicle iv INNER JOIN
TrafficCrashReport tcr
ON iv.trafficCrashReportId = tcr.trafficCrashReportId
Where tcr.addDate between '$Yesterday' AND '$TodayDate';" `
```
I am guessing which columns come from which tables.
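If you want to see the failure mode and the fix side by side, here is a small sketch using SQLite via Python's `sqlite3` (the table names follow the question, the data is made up; SQL Server reports the same condition as "Ambiguous column name"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE InvolvedVehicle (trafficCrashReportId INTEGER, involvedVehicleId INTEGER);
    CREATE TABLE TrafficCrashReport (trafficCrashReportId INTEGER, assocVehNum INTEGER);
    INSERT INTO InvolvedVehicle VALUES (1, 100);
    INSERT INTO TrafficCrashReport VALUES (1, 2);
""")

# An unqualified column that exists in both joined tables raises an error
try:
    conn.execute("""
        SELECT trafficCrashReportId
        FROM InvolvedVehicle iv
        JOIN TrafficCrashReport tcr
          ON iv.trafficCrashReportId = tcr.trafficCrashReportId
    """)
    error = None
except sqlite3.OperationalError as exc:
    error = str(exc)

# Qualifying every column with its table alias resolves the ambiguity
rows = conn.execute("""
    SELECT iv.trafficCrashReportId, iv.involvedVehicleId, tcr.assocVehNum
    FROM InvolvedVehicle iv
    JOIN TrafficCrashReport tcr
      ON iv.trafficCrashReportId = tcr.trafficCrashReportId
""").fetchall()
```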
|
Getting an error after running join
|
[
"",
"sql",
"powershell",
""
] |
I have exported database from Oracle SQL developer tool into .sql file. Now I want to run this file which is of size 500+ MB.
I read about running scripts [here](https://stackoverflow.com/questions/13137258/oracle-sql-developer-how-to-dynamicly-run-sql-scripts), but I didn't understand the approach. Is there a command or query by which we can run this SQL script by providing its path?
|
You could execute the `.sql` file as a script in the **SQL Developer** worksheet. Either use the **Run Script** icon, or simply press `F5`.
[](https://i.stack.imgur.com/NJ9ST.jpg)
For example,
```
@path\script.sql;
```
Remember, you need to put `@` as shown above.
But, if you have exported the database using database export utility of SQL Developer, then you should use the Import utility. Follow the steps mentioned here [Importing and Exporting using the Oracle SQL Developer 3.0](http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldev/r30/SQLdev3.0_Import_Export/sqldev3.0_import_export.htm)
|
You need to open SQL Developer first, then click on the **File** option and browse to the location where your .sql file is placed. Once you are at that location, double-click the file; this opens it in SQL Developer. Now select all of the file's content (**CTRL + A**) and press the **F9** key. Just make sure there is a commit statement at the end of the .sql script so that the changes are persisted in the database.
|
How to run .sql file in Oracle SQL developer tool to import database?
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I need to fetch details from the DB. Is there anything wrong with my code?
```
SELECT DISTINCT FNAME, LNAME, MEMBERORG, DAYCOUNT, TIMESTAMP, COUNT(FNAME) AS total,(SELECT COUNT(*) FROM REPORT_VIEW_PAGE) AS tot
FROM REPORT_VIEW_PAGE
WHERE ID = '68' AND TYPE = 'node'
GROUP BY FNAME, LNAME, MEMBERORG, DAYCOUNT, TIMESTAMP
ORDER BY TITLE ASC
```
This giving me an error:
```
ORA-01791: not a SELECTed expression
01791. 00000 - "not a SELECTed expression"
*Cause:
*Action:
Error at Line: 6 Column: 10
```
|
The problem here is that the `ORDER BY` column `TITLE` isn't selected in the `DISTINCT` query. Since `DISTINCT` is used, the `SELECT` reduces the result set to the distinct combinations of the selected columns.
Because the `ORDER BY` column isn't among them, there is no single `TITLE` value per distinct row, and hence the `ORDER BY` cannot be applied.
|
Add the `title` column to your `SELECT` statement. When you're using `DISTINCT`, you must have all the columns from the `ORDER BY` in your `SELECT` statement as well.
```
-- correct
SELECT DISTINCT a, b, c FROM tbl.x ORDER BY a,b;
-- incorrect
SELECT DISTINCT c FROM tbl.x ORDER BY a,b;
```
The `a` and `b` columns must be selected.
|
ORA-01791: not a SELECTed expression
|
[
"",
"sql",
"oracle",
""
] |
I have a column Start\_date in TableA which contains a date stored as the Varchar2 datatype (e.g. '2011-09-17:09:46:13').
Now what I need to do is compare the Start\_date of TableA with SYSDATE and list out any values that are at most 7 days older than SYSDATE.
Can anybody help me with this issue?
|
You may perform the below to check the date:
```
select * from
TableA
where
to_date(start_date,'YYYY-MM-DD:HH24:MI:SS') between sysdate-7 and sysdate;
```
|
> I have a column in TableA which contains date as Varchar2 datatype for column Start\_date.( '2011-09-17:09:46:13').
Then you have a flawed design. You must use appropriate data types for the data. A datetime should always be stored as **DATE** data type.
> Now what i need to do is , compare the Start\_date of TableA with the SYSDATE, and list out any values thts atmost 7days older than SYSDATE.
Since the data type of your column is **VARCHAR2**, you must use **TO\_DATE** to explicitly convert the string into date.
```
where to_date(start_date,'YYYY-MM-DD:HH24:MI:SS') between sysdate-7 and sysdate;
```
Remember, a **DATE** has both **date and time elements**. If you want to ignore the time portion, then you need to use **TRUNC**.
For example,
```
SQL> alter session set nls_date_format='YYYY-MM-DD HH24:MI:SS';
Session altered.
SQL> SELECT sysdate FROM DUAL;
SYSDATE
-------------------
2015-07-28 16:03:32
SQL> SELECT TRUNC(sysdate) FROM DUAL;
TRUNC(SYSDATE)
-------------------
2015-07-28 00:00:00
SQL>
```
So, `BETWEEN sysdate-7 AND sysdate` will consider the past 7 days up to the current datetime. If you only want to consider the date portion, use **TRUNC**.
For example,
```
where TRUNC(to_date(start_date,'YYYY-MM-DD:HH24:MI:SS')) between TRUNC(sysdate-7) and TRUNC(sysdate);
```
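As a sanity check of the `BETWEEN` bounds (low value first), here is a sketch in SQLite via Python's `sqlite3`. SQLite stores datetimes as ISO strings, so no `TO_DATE` conversion is needed there, but the low-to-high order still matters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (start_date TEXT)")
conn.execute("INSERT INTO TableA VALUES ('2011-09-17 09:46:13')")       # far in the past
conn.execute("INSERT INTO TableA VALUES (datetime('now', '-3 days'))")  # inside the window

# BETWEEN low AND high: the lower bound (7 days ago) must come first
rows = conn.execute("""
    SELECT start_date FROM TableA
    WHERE start_date BETWEEN datetime('now', '-7 days') AND datetime('now')
""").fetchall()
```

Writing the bounds high-first (`BETWEEN now AND 7 days ago`) matches nothing.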
|
Oracle Date Comparsion
|
[
"",
"sql",
"oracle",
"date-comparison",
""
] |
I have 3 tables say Employee, Permanent\_Emp and Contract\_Emp
```
SELECT E.EMP_NO,
E.NAME,
JET.EMP_TYPE,
JET.DATE_JOINED
FROM Employee E
LEFT OUTER JOIN
/* Here Join Table Name(JET) it can be Permanent_Emp or Contract_Emp
which i want as a result of my case expression. */
ON (some condition here) ORDER BY E.EMP_NO DESC
```
case expression:
```
CASE
WHEN (E.EMP_TYPE_CODE >10 )
THEN
Permanent_Emp JET
ELSE
Contract_Emp JET
END
```
Note: table and column names are just for an example to understand requirement.
How can I have the join table name come from a CASE expression?
|
Using Alan's query as a starting point you can still use a case statement, just move it to the join condition:
```
SELECT *
FROM employee e
JOIN (
SELECT employee_id
, relevant_column
, 'P' AS source_indicator
FROM permanent_emp
UNION ALL
SELECT employee_id
, relevant_column
, 'C' AS source_indicator
FROM contract_emp
) se
ON se.employee_id = e.employee_id
and se.source_indicator = case when e.emp_type_code > 10
then 'P'
else 'C'
end
```
The only difference between this query and Alan's is the use of a case expression instead of an or condition.
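For anyone who wants to try the idea quickly, here is a sketch of the union-plus-indicator pattern in SQLite via Python's `sqlite3` (table names follow the answer, the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (employee_id INTEGER, emp_type_code INTEGER);
    CREATE TABLE permanent_emp (employee_id INTEGER, relevant_column TEXT);
    CREATE TABLE contract_emp (employee_id INTEGER, relevant_column TEXT);
    INSERT INTO employee VALUES (1, 20), (2, 5);
    INSERT INTO permanent_emp VALUES (1, 'perm-row');
    INSERT INTO contract_emp VALUES (2, 'contract-row');
""")

# The CASE in the join condition picks which branch of the UNION ALL to match
rows = conn.execute("""
    SELECT e.employee_id, se.relevant_column
    FROM employee e
    JOIN (
        SELECT employee_id, relevant_column, 'P' AS source_indicator FROM permanent_emp
        UNION ALL
        SELECT employee_id, relevant_column, 'C' AS source_indicator FROM contract_emp
    ) se
      ON se.employee_id = e.employee_id
     AND se.source_indicator = CASE WHEN e.emp_type_code > 10 THEN 'P' ELSE 'C' END
    ORDER BY e.employee_id
""").fetchall()
```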
|
Something like this (although without a description of your tables, the exact join conditions or any sample data its hard to give a more precise answer):
```
SELECT E.EMP_NO,
E.NAME,
COALESCE( P.EMP_TYPE, C.EMP_TYPE ) AS EMP_TYPE
COALESCE( P.DATE_JOINED, C.DATE_JOINED ) AS DATE_JOINED
FROM Employee E
LEFT OUTER JOIN
Permanent_Emp P
ON ( E.EMP_TYPE_CODE > 10 AND E.EMP_NO = P.EMP_NO )
LEFT OUTER JOIN
Contract_Emp C
ON ( E.EMP_TYPE_CODE <= 10 AND E.EMP_NO = C.EMP_NO )
ORDER BY
E.EMP_NO DESC
```
|
can we have CASE expression/case result as a join table name in oracle
|
[
"",
"sql",
"oracle",
""
] |
How to use the outer join and or operator together with the conditions mentioned below
Table 1: `PERSON`
```
ID_NO NAME
512 RAM
513 RAVI
514 RAMU
```
Table 2: `FORM_DETAILS`
```
APPL_DATE STATUS ID NO
12/09/2009 A 512
12/08/2010 A 513
12/07/2009 C 514
12/06/2006 S 513
```
The query
```
SELECT A.ID_NO,A.NAME B.APPL_DATE FROM PERSON A, FORM_DETAILS B
WHERE A.ID_NO=B.ID_NO(+) AND
B.APPL_DATE>='12/07/2009' AND
B.APPL_DATE<='12/09/2009' AND
B.STATUS='A' OR
B.STATUS='C'
ORDER BY APPL_DATE
```
Produces
```
Error: outer join operation(+) is not allowed in operand of or
```
Any suggestion is appreciated.
|
You should just use explicit join type instead of `(+)`:
```
SELECT A.ID_NO, A.NAME, B.APPL_DATE
FROM PERSON A LEFT JOIN FORM_DETAILS B ON A.ID_NO=B.ID_NO
WHERE
B.APPL_DATE BETWEEN '12/07/2009' AND '12/09/2009' AND
(B.STATUS = 'A' OR
B.STATUS = 'C')
ORDER BY APPL_DATE
```
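You can verify this against the sample data with SQLite via Python's `sqlite3` (dates rewritten as ISO `YYYY-MM-DD` strings so plain string comparison works):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id_no INTEGER, name TEXT);
    CREATE TABLE form_details (appl_date TEXT, status TEXT, id_no INTEGER);
    INSERT INTO person VALUES (512,'RAM'), (513,'RAVI'), (514,'RAMU');
    INSERT INTO form_details VALUES
        ('2009-09-12','A',512), ('2010-08-12','A',513),
        ('2009-07-12','C',514), ('2006-06-12','S',513);
""")

rows = conn.execute("""
    SELECT p.id_no, p.name, f.appl_date
    FROM person p
    LEFT JOIN form_details f ON p.id_no = f.id_no
    WHERE f.appl_date BETWEEN '2009-07-12' AND '2009-09-12'
      AND f.status IN ('A','C')
    ORDER BY f.appl_date
""").fetchall()
```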
|
Just change your query to the following (if you want to stick with the old `(+)` notation; discouraged, because it's vendor-specific, deprecated syntax):
```
SELECT A.ID_NO,
A.NAME,
       B.APPL_DATE
FROM PERSON A,
FORM_DETAILS B
WHERE A.ID_NO = B.ID_NO(+)
AND B.APPL_DATE BETWEEN '12/07/2009' AND '12/09/2009'
AND B.STATUS IN ('A', 'C')
ORDER BY APPL_DATE
```
Better, if you want to write ANSI compliant SQL:
```
SELECT A.ID_NO,
A.NAME,
B.APPL_STATUS
FROM PERSON A
LEFT JOIN FORM_DETAILS B ON A.ID_NO = B.ID_NO
WHERE B.APPL_DATE BETWEEN '12/07/2009' AND '12/09/2009'
AND B.STATUS IN ('A', 'C')
ORDER BY APPL_DATE
```
|
Use of OUTER JOIN AND OR operators together
|
[
"",
"mysql",
"sql",
""
] |
I have a search query whose parameters are Name, From date, and To date. When I pass an empty string or null to the date parameters, it should return all data matching the name only. This code is not working for me:
```
WHERE name=@name
AND
(CASE WHEN @_from IS not null
THEN
Servicedate<=@_from
END)
```
This is also not working for me.
```
WHERE name = @name AND
Servicedate>=ISNULL((case when @_from='' then NULL else @_from),Servicedate)
```
Thank You.
|
You can use this:
```
WHERE
name = @name
AND (@_from IS NULL OR ServiceDate >= @_from)
AND (@_to IS NULL OR ServiceDate <= @_to)
```
If the variables (`@_from` and `@_to`) are `NULL`, the corresponding condition evaluates to `true` since `@var IS NULL` is `true`, so that date filter is skipped.
|
```
WHERE ( name=@name AND @_from IS NOT NULL AND Servicedate<=@_from )
OR
( name=@name AND @_from IS null )
```
If the date parameter `@_from` is empty, then the search is done only on the name parameter `@name`; otherwise the `ServiceDate` column is matched against the date parameter `@_from`.
|
Where clause in SQL Query skip date filter when date is null
|
[
"",
"mysql",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have two tables in my database:
```
table_A: table_B:
id user id user
1 Mike 1 Mike
2 Dan 2 Dan
3 Tom 3 Tom
4 Lina 4 Lina
5 Cynthia
6 Sam
```
My aim is to identify which users in Table\_B do not exist in Table\_A based on id. I'm new to SQL, and this is what I came up with:
```
SELECT id FROM Table_B
WHERE B.id NOT IN ( SELECT A.id from Table_A)
```
Most likely my logic is wrong, so I'd appreciate any guidance.
|
You can use sub-query in `WHERE` clause predicate `NOT IN` Then it will return the `id` present in `table_B` only
**Sub-Query**
This query return `id` from `table_A`
```
SELECT table_A.id FROM table_A
```
The result is then passed to the `NOT IN` clause, which evaluates to `true` for every row of `table_B` whose `id` is not in that list. So only ids 5 and 6 are returned by the main query.
**Final Query**
```
SELECT table_B.id, table_B.user FROM table_B WHERE table_B.id NOT IN (SELECT table_A.id FROM table_A);
```
OR
to select all column use symbol `*` instead of column lists
```
SELECT * FROM table_B WHERE table_B.id NOT IN (SELECT table_A.id FROM table_A);
```
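A runnable sketch of this query in SQLite via Python's `sqlite3`, using the question's data. One caveat worth knowing: `NOT IN` returns no rows at all if the subquery ever yields a `NULL`, so `NOT EXISTS` (correlated) is safer when `id` is nullable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_A (id INTEGER, "user" TEXT);
    CREATE TABLE table_B (id INTEGER, "user" TEXT);
    INSERT INTO table_A VALUES (1,'Mike'), (2,'Dan'), (3,'Tom'), (4,'Lina');
    INSERT INTO table_B VALUES (1,'Mike'), (2,'Dan'), (3,'Tom'), (4,'Lina'),
                               (5,'Cynthia'), (6,'Sam');
""")

# Rows of table_B whose id has no match in table_A
rows = conn.execute("""
    SELECT id, "user" FROM table_B
    WHERE id NOT IN (SELECT id FROM table_A)
    ORDER BY id
""").fetchall()
```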
|
You can use the **NOT EXISTS** operator with a correlated subquery:
```
Select * from table_B where NOT EXISTS (select 1 from table_A where table_A.id = table_B.id)
```
|
SQL query comparison between two tables
|
[
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
I have this table:
```
code(integer) |number_of_data(integer)| date (Date)
```
I need to:
1. Group by day, for the month of June only
2. Select sum of number\_of\_data for the day for each code != 0
3. Select n\_data for code = 0
For the first 2 points I came up with:
```
select sum(number_of_data) nData, TO_CHAR(date, 'DD') dayOfMonth from T1
where to_char( date, 'mm') = 6
and code <> 0
group by TO_CHAR(date, 'DD') order by TO_CHAR(date, 'DD');
```
it gives me this table result:
```
nData | dayOfMonth
```
which is fine, anyway I'm missing requirement 3, whose query would be the same but with the opposite condition (code=0).
Is there a way to add it to the above query so to get this result:
```
nData | nDataZero | dayOfMonth
```
?
|
Assuming that there will only be one entry with `CODE = 0` for each day, then you can do:
```
SELECT SUM( CASE CODE WHEN 0 THEN NULL ELSE number_of_data END ) AS nData,
MAX( CASE CODE WHEN 0 THEN number_of_data END ) AS nDataZero,
EXTRACT( DAY FROM "Date" ) AS dayOfMonth
FROM T1
WHERE EXTRACT( MONTH FROM "Date" ) = 6
GROUP BY EXTRACT( DAY FROM "Date" )
ORDER BY EXTRACT( DAY FROM "Date" );
```
If there will be more than one entry then you will need to specify how it is to be handled (i.e. change `MAX` to `SUM` if you want the total of the `CODE = 0` values).
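A sketch of the same conditional aggregation in SQLite via Python's `sqlite3`; SQLite has no `TO_CHAR`/`EXTRACT`, so `strftime` stands in, and the date column is renamed to `d` for clarity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (code INTEGER, number_of_data INTEGER, d TEXT)")
conn.executemany("INSERT INTO T1 VALUES (?,?,?)", [
    (0, 7, '2015-06-01'),   # the code = 0 row for the day
    (1, 5, '2015-06-01'),
    (2, 3, '2015-06-01'),
])

# One pass: SUM over code <> 0, MAX picks out the single code = 0 value
rows = conn.execute("""
    SELECT SUM(CASE code WHEN 0 THEN NULL ELSE number_of_data END) AS nData,
           MAX(CASE code WHEN 0 THEN number_of_data END) AS nDataZero,
           strftime('%d', d) AS dayOfMonth
    FROM T1
    WHERE strftime('%m', d) = '06'
    GROUP BY strftime('%d', d)
    ORDER BY dayOfMonth
""").fetchall()
```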
|
With some caveats about exact syntax, this is how I would solve it in an Oracle-like way :)
```
SELECT dayOfMonth, sum(nData) nData, sum(nDataZero) nDataZero
from (
      select sum(number_of_data) nData, 0 nDataZero, TO_CHAR(date, 'DD') dayOfMonth
      from T1
      where to_char(date, 'mm') = 6 and code <> 0
      group by TO_CHAR(date, 'DD')
      UNION ALL
      select 0 nData, sum(number_of_data) nDataZero, TO_CHAR(date, 'DD') dayOfMonth
      from T1
      where to_char(date, 'mm') = 6 and code = 0
      group by TO_CHAR(date, 'DD'))
group by dayOfMonth
order by dayOfMonth;
```
Rgds
|
Is it possible to map multiple select statement into columns?
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a transaction table and want to calculate based on the type of transaction. How to?
Table:
```
id | amount | type
--------------------
01 | 230 | IncomingTransfer
02 | 100 | OutcomingTransfer
03 | 20 | IncomingTransfer
```
Logic:
```
SELECT SUM(amount)
IF IncomingTransfer
+ 230
+ 20
ELSE IF OutcomingTransfer
- 100
Total amount: +150
```
|
As simple as my **[sqlfiddle](http://sqlfiddle.com/#!9/43aed/1)** shows:
```
select
sum(CASE WHEN type = 'IncomingTransfer' THEN amount ELSE -amount END) as totalsum
from t;
```
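The same query runs unchanged in SQLite; a quick check via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT, amount REAL, type TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("01", 230, "IncomingTransfer"),
    ("02", 100, "OutcomingTransfer"),
    ("03", 20,  "IncomingTransfer"),
])

# Incoming amounts count positively, everything else negatively
(total,) = conn.execute("""
    SELECT SUM(CASE WHEN type = 'IncomingTransfer' THEN amount ELSE -amount END)
    FROM t
""").fetchone()
```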
|
```
SELECT SUM(amount) FROM (
SELECT CASE WHEN type='IncomingTransfer'
THEN +amount
ELSE -amount
END as amount
FROM MyTable
) as data
```
|
How to SQL SUM with condition
|
[
"",
"sql",
"sum",
""
] |
I am trying to create sub-nodes with the same name, but with data coming from different columns of a table, using FOR XML. But I am not getting the expected output from the query I have built.
Can someone point me to the right way of building this query?
Sample table and the FOR XML query used given below:
```
;WITH Temp(id, name1, name2)
AS
(
SELECT 1, 'A', 'B' UNION
SELECT 2, 'C', 'D' UNION
SELECT 3, 'E', 'F'
)
SELECT
id
,name1 AS [names/name]
,name2 AS [names/name]
FROM
Temp
FOR XML PATH('Data'), TYPE, ROOT('Feed')
```
Output :
```
<Feed>
<Data>
<id>1</id>
<names>
<name>AB</name>
</names>
</Data>
<Data>
<id>2</id>
<names>
<name>CD</name>
</names>
</Data>
<Data>
<id>3</id>
<names>
<name>EF</name>
</names>
</Data>
</Feed>
```
Expected output :
```
<Feed>
<Data>
<id>1</id>
<names>
<name>A</name>
<name>B</name>
</names>
</Data>
<Data>
<id>2</id>
<names>
<name>C</name>
<name>D</name>
</names>
</Data>
<Data>
<id>3</id>
<names>
<name>E</name>
<name>F</name>
</names>
</Data>
</Feed>
```
|
You can select the names in a subquery
```
;WITH Temp(id, name1, name2)
AS
(
SELECT 1, 'A', 'B' UNION
SELECT 2, 'C', 'D' UNION
SELECT 3, 'E', 'F'
)
SELECT
id
,(SELECT name
FROM (
SELECT name1 AS name
FROM Temp t2
WHERE t1.id = t2.id
UNION ALL
SELECT name2 AS name
FROM Temp t2
WHERE t1.id = t2.id) AS t
FOR XML PATH(''), TYPE) AS names
FROM
Temp t1
FOR XML PATH('Data'), TYPE, ROOT('Feed')
```
|
I guess, this should be quite efficient (works in SQL Server at least):
```
;WITH Temp(id, name1, name2)
AS
(
SELECT 1, 'A', 'B' UNION
SELECT 2, 'C', 'D' UNION
SELECT 3, 'E', 'F'
)
SELECT
id,
(
SELECT
name1 AS name
,null
,name2 AS name
FOR XML PATH(''), TYPE
) AS names
FROM
Temp
FOR XML PATH('Data'), TYPE, ROOT('Feed')
```
|
SQL FOR XML to generate multiple same name nodes
|
[
"",
"sql",
"xml",
"for-xml",
""
] |
Trying to make a query which generates this kind of result:
```
EmpId, EmpSickLeave,EmpCasualLeave, EmpAnnualLeave, TotalLeave
1, 1, 0, 0 1
2, 0, 0, 2 2
3, 0, 1, 0 1
4, 0, 1, 0 1
5, 1, 0, 0 1
```
while I have two tables
**Table EmpLeave:: include a column EmpId, Date, EmpLeaveTypeID**
**Table EmpLeaveType:: include EmpLeaveTypeID and LeaveName**
DATA In tables
```
EmpLeaveType Table
EmpLeaveTypeID , LeaveName
1, Sick Leave
2, Annual
3, Casual
EmpLeave Table
column EmpId, Date, EmpLeaveTypeID
1, 2015-07-01, 1
3, 2015-07-02, 2
5, 2015-07-04, 1
4, 2015-07-04, 2
2, 2015-07-05, 2
2, 2015-07-07, 2
```
I am pulling my hair out and am unable to produce this kind of result. Is this possible?
```
EEmpId, EmpSickLeave,EmpCasualLeave, EmpAnnualLeave, TotalLeave
1, 1, 0, 0 1
2, 0, 0, 2 2
3, 0, 1, 0 1
4, 0, 1, 0 1
5, 1, 0, 0 1
```
This is the (wrong) thing I have tried:
```
SELECT * count(EmpLeaveTypeID) FROM `EmpLeaveType`
WHERE SwitchDate Between '2015-07-01' AND '2015-07-28'
group by EmpLeaveType, EmpId
```
|
Please check the below :
```
SELECT
temp.EmpId ,
temp.EmpSickLeave,
temp.EmpCasualLeave,
temp.EmpAnnualLeave,
(temp.EmpSickLeave + temp.EmpCasualLeave + temp.EmpAnnualLeave )
as TotalLeave
FROM
( SELECT
EmpId,
SUM( CASE WHEN EmpLeaveTypeID=1 THEN 1 ELSE 0 END ) as EmpSickLeave,
    SUM( CASE WHEN EmpLeaveTypeID=3 THEN 1 ELSE 0 END ) as EmpCasualLeave,
    SUM( CASE WHEN EmpLeaveTypeID=2 THEN 1 ELSE 0 END ) as EmpAnnualLeave
FROM EmpLeave
WHERE Date Between '2015-07-01' AND '2015-07-28'
group by EmpId ) temp
```
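A runnable check with SQLite via Python's `sqlite3`, collapsed into a single level (`COUNT(*)` gives the total directly). Note: per the `EmpLeaveType` table, type 2 is Annual and type 3 is Casual, so employees 3 and 4 (type 2 rows in the sample data) land in the annual column here; the `Date` column is renamed to `LeaveDate` for clarity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmpLeave (EmpId INTEGER, LeaveDate TEXT, EmpLeaveTypeID INTEGER)")
conn.executemany("INSERT INTO EmpLeave VALUES (?,?,?)", [
    (1,'2015-07-01',1), (3,'2015-07-02',2), (5,'2015-07-04',1),
    (4,'2015-07-04',2), (2,'2015-07-05',2), (2,'2015-07-07',2),
])

rows = conn.execute("""
    SELECT EmpId,
           SUM(CASE WHEN EmpLeaveTypeID = 1 THEN 1 ELSE 0 END) AS EmpSickLeave,
           SUM(CASE WHEN EmpLeaveTypeID = 3 THEN 1 ELSE 0 END) AS EmpCasualLeave,
           SUM(CASE WHEN EmpLeaveTypeID = 2 THEN 1 ELSE 0 END) AS EmpAnnualLeave,
           COUNT(*) AS TotalLeave
    FROM EmpLeave
    WHERE LeaveDate BETWEEN '2015-07-01' AND '2015-07-28'
    GROUP BY EmpId
    ORDER BY EmpId
""").fetchall()
```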
|
Please try the SELECT query below. It does not take into consideration that leave types may be changed someday.
SQL fiddle link <http://sqlfiddle.com/#!9/0fd13/2>
The query simply does a conditional aggregate to calculate the leave counts for each employee.
```
SELECT
EmpId,
SUM(CASE WHEN EmpLeaveTypeID=1 THEN 1 ELSE 0 END) as EmpSickLeave,
  SUM(CASE WHEN EmpLeaveTypeID=3 THEN 1 ELSE 0 END) as EmpCasualLeave,
  SUM(CASE WHEN EmpLeaveTypeID=2 THEN 1 ELSE 0 END) as EmpAnnualLeave,
COUNT(1) as TotalLeave
FROM `EmpLeave`
WHERE Date Between '2015-07-01' AND '2015-07-28'
group by EmpId
```
|
SQL Query to bring group result of a column in multiple Column
|
[
"",
"mysql",
"sql",
""
] |
I am retrieving a column named removal\_count in my query using the COUNT() function. In the result set the datatype of removal\_count is BigDecimal. I want to convert the number into five digits, so if the value is less than five digits it should be represented with leading zeros.
e.g 1) If removal count is 540 then display 00540
2) If removal count is 60 then display 00060
If the removal count is integer/string value then I can add leading zero's using java expression :
```
--if removal_count is integer--
String.format("%05d",removal_count)
--if removal_count is string--
("00000"+removal_count).substring(removal_count.length())
```
Can we convert removal\_count into string or integer ( from big decimal) so that I can use given java expression? Or else is there any way to add leading zero's in query itself?
|
You could do it in two ways.
**Method 1**
Using **LPAD**.
For example,
```
SQL> WITH DATA(num) AS(
2 SELECT 540 FROM dual UNION ALL
3 SELECT 60 FROM dual UNION ALL
4 SELECT 2 FROM dual
5 )
6 SELECT num, lpad(num, 5, '0') num_pad FROM DATA;
NUM NUM_P
---------- -----
540 00540
60 00060
2 00002
SQL>
```
The WITH clause is only to build sample data for demo, in your actual query just do:
```
lpad(removal_count, 5, '0')
```
Remember, a **number** cannot have **leading zeroes**. The output of above query is a **string** and not a **number**.
**Method 2**
Using **TO\_CHAR** and format model:
```
SQL> WITH DATA(num) AS(
2 SELECT 540 FROM dual UNION ALL
3 SELECT 60 FROM dual UNION ALL
4 SELECT 2 FROM dual
5 )
6 SELECT num, to_char(num, '00000') num_pad FROM DATA;
NUM NUM_PA
---------- ------
540 00540
60 00060
2 00002
SQL>
```
**Update** : To avoid the extra leading space which is used for minus sign, use **FM** in the `TO_CHAR` format:
**Without FM:**
```
SELECT TO_CHAR(1, '00000') num_pad,
LENGTH(TO_CHAR(1, '00000')) tot_len
FROM dual;
NUM_PAD TOT_LEN
------- ----------
00001 6
```
**With FM:**
```
SELECT TO_CHAR(1, 'FM00000') num_pad,
LENGTH(TO_CHAR(1, 'FM00000')) tot_len
FROM dual;
NUM_PAD TOT_LEN
------- ----------
00001 5
```
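SQLite has neither `LPAD` nor Oracle's `TO_CHAR` format models, but its `printf` does the same job; and the application-side Python equivalent of Java's `String.format("%05d", n)` is an `05d` format spec:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# printf('%05d', n) is SQLite's analogue of LPAD(n, 5, '0') / TO_CHAR(n, 'FM00000')
padded = conn.execute("SELECT printf('%05d', 540), printf('%05d', 60)").fetchone()

# the same padding done application-side
app_side = f"{540:05d}"
```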
|
Use the Oracle `LPAD` function: <http://www.techonthenet.com/oracle/functions/lpad.php>.
```
removal_count = 540
LPAD(TO_CHAR(removal_count), 5, '0');
Result: '00540'
```
|
How to add leading zero in a number in Oracle SQL query?
|
[
"",
"sql",
"oracle",
""
] |
Like the title says, I am trying to build a query that selects all records from a database table where the date is equal to yesterday's date.
The date column in the table is however of the format `datetime` (with hours, minutes, seconds as well) so I do the select based on the dates year, month and day (times don't matter as long as the date is yesterday).
To achieve this I have build the following query:
```
SELECT
*
FROM
qryTouchBoekingen
WHERE
(DATEPART(yyyy, myDateTime) = DATEADD(dd, -1, Datepart(dd, GetDate()))
AND (DATEPART(mm, myDateTime) = DATEADD(dd, -1, Datepart(mm, GetDate()))
AND (DATEPART(dd, myDateTime) = DATEADD(dd, -1, Datepart(dd, GetDate())) )
ORDER BY
Starttijd ASC
```
Though this doesn't return any records. When I only use it on the day part of the `myDateTime` column then it works (but obviously also returns all other years and months with that specific date).
I also couldn't do it using:
```
SELECT
*
FROM
qryTouchBoekingen
WHERE
myDateTime = DATEADD(dd, -1, GetDate())
```
because this gives errors due to the `time` part.
|
How about this much simpler version:
```
cast(myDateTime as date) = cast(dateadd(day, -1, getdate()) as date)
```
Or, even better:
```
(myDateTime >= cast(dateadd(day, -1, getdate()) as date) and
myDateTime < cast(getdate() as date)
)
```
This version is more explicit in its ability to take advantage of an index. (The first will also take advantage of an index on `myDateTime`, but that is an exception to the rule that functions preclude the use of indexes.)
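The cast-to-date idea ports directly to SQLite, where `date(...)` strips the time part; a sketch via Python's `sqlite3`, with a fixed "today" so the result is deterministic (in production you would use `date('now', '-1 day')`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE qryTouchBoekingen (myDateTime TEXT)")
conn.executemany("INSERT INTO qryTouchBoekingen VALUES (?)", [
    ("2015-07-28 09:15:00",),  # yesterday relative to the fixed 'today'
    ("2015-07-29 08:00:00",),  # today
    ("2015-07-25 12:00:00",),  # older
])

today = "2015-07-29"

# date(...) drops the time portion, the SQLite analogue of CAST(... AS date)
rows = conn.execute("""
    SELECT myDateTime FROM qryTouchBoekingen
    WHERE date(myDateTime) = date(?, '-1 day')
""", (today,)).fetchall()
```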
|
Try this one.
You should compute the previous day first, and only then take the DATEPART of it:
```
Select * from qryTouchBoekingen
WHERE DATEPART(yyyy, myDateTime) = Datepart(yyyy, DATEADD(dd, -1, GetDate()))
  AND DATEPART(mm, myDateTime) = Datepart(mm, DATEADD(dd, -1, GetDate()))
  AND DATEPART(dd, myDateTime) = Datepart(dd, DATEADD(dd, -1, GetDate()))
Order by Starttijd ASC
```
|
Select every date where date is equal to today's date minus 1 day - Not working with dateadd on Month/year - T SQL
|
[
"",
"sql",
"sql-server",
"date",
"datetime",
""
] |
I want to know if there is a way to search a database and find out which tables are empty and which have data. I will be migrating some data to another system, and it would be nice to know which tables I should export. I'm using Oracle SQL Developer.
|
One way to do it, aside from running a silly pl/sql block to `count(*)` for each table, is to run this:
```
SELECT num_rows FROM ALL_TAB_STATISTICS WHERE OWNER = 'user name';
```
(Alternate tables: `DBA_TAB_STATISTICS`, `USER_TAB_STATISTICS`)
But then, it's valid only if you recently gathered statistics with the `DBMS_STATS` package.
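SQLite has no statistics views, so the only portable option there is the "silly `count(*)` per table" loop; a sketch via Python's `sqlite3` that walks the catalogue:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE filled (x INTEGER);
    CREATE TABLE empty_one (x INTEGER);
    INSERT INTO filled VALUES (1), (2);
""")

# Iterate the catalogue and count rows per table (names come from the
# catalogue itself, so interpolating them here is safe for this demo)
counts = {}
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    (n,) = conn.execute(f"SELECT COUNT(*) FROM {name}").fetchone()
    counts[name] = n
```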
|
Yes, you can select the count for all tables in a database with a query like:
```
select table_name,
to_number(
extractvalue(
xmltype(
dbms_xmlgen.getxml('select count(*) c from '||table_name))
,'/ROWSET/ROW/C')) count
from user_tables;
```
Here's a [demo](http://sqlfiddle.com/#!4/5b404/2).
|
Is there a way to in Oracle to see what tables have data and which don't?
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I need to update a column using the update statement.
```
SELECT *
FROM tng_managedobject
WHERE severity <> 0
AND class_name like 'Plaza_VES_Junction_Box'
AND propagate_status <> 0
```
I need to update the `propagate_status` for all these classes to 0. How can I use the SQL update statement to do this?
|
A simple update statement like this will do it:
```
update tng_managedobject
set propagate_status = 0
where severity <> 0
and class_name = 'Plaza_VES_Junction_Box'
and propagate_status <> 0
```
You don't need a `LIKE` clause when you are specifying the exact class name. `=` will suffice.
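A quick check of the update in SQLite via Python's `sqlite3`, confirming only the intended row is touched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tng_managedobject (severity INTEGER, class_name TEXT, propagate_status INTEGER)")
conn.executemany("INSERT INTO tng_managedobject VALUES (?,?,?)", [
    (1, 'Plaza_VES_Junction_Box', 1),  # matches all three conditions -> updated
    (0, 'Plaza_VES_Junction_Box', 1),  # severity 0 -> untouched
    (2, 'Other_Class', 1),             # different class -> untouched
])

cur = conn.execute("""
    UPDATE tng_managedobject
       SET propagate_status = 0
     WHERE severity <> 0
       AND class_name = 'Plaza_VES_Junction_Box'
       AND propagate_status <> 0
""")
updated = cur.rowcount
```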
|
```
update tng_managedobject
set propagate_status = 0
where severity <> 0
and class_name like '%Plaza_VES_Junction_Box%'
and propagate_status <> 0
```
For the class\_name column, If you know the exact name, it is better to use `=`. If you are looking for a string that contains `Plaza_VES_Junction_Box` anywhere use `%`
|
SQL Select and Update command
|
[
"",
"sql",
""
] |
I have a query that returns the following table
```
ID SubId Rate Time
1 1 10.00 '00:00:10'
2 1 11.00 '00:00:15'
3 2 12.00 '00:00:20'
4 3 13.00 '00:00:25'
5 4 14.00 '00:00:30'
6 5 15.00 '00:00:35'
7 6 16.00 '00:00:40'
```
Now the problem is that I need all those records whose **SubId** lies in the **top 5 by Time**.
```
ID SubId Rate Time
1 1 10.00 '00:00:10'
2 1 11.00 '00:00:15'
3 2 12.00 '00:00:20'
4 3 13.00 '00:00:25'
5 4 14.00 '00:00:30'
6 5 15.00 '00:00:35'
```
**My Approach**
```
Select ID,SubId,Rate from Query1 where SubId In (Select Top 5 SubId from Query1)
--Time was not included in it
```
**Note :** Please do not suggest an answer like the above, because it runs the **query twice**, and the query already takes too much time to return the above records.
|
If you don't want to use the same query twice, I suggest you insert the result into a temporary table. That way, you don't have to execute the complex query twice.
```
CREATE TABLE #TopFive(Id INT)
INSERT INTO #TopFive
SELECT TOP 5 SubId FROM Query1 ORDER BY [Time] DESC
```
Then in your subsequent queries, you can just use the temporary table:
```
SELECT * FROM <tbl> WHERE subId IN(SELECT Id FROM #TopFive)
```
You could also add a `NONCLUSTERED INDEX` on the temporary table for added performance gain:
```
CREATE NONCLUSTERED INDEX NCI_TopFive ON #TopFive(Id)
```
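Here is the materialise-once pattern sketched in SQLite via Python's `sqlite3`. The question's expected rows correspond to the five distinct `SubId`s with the earliest times, so this sketch groups by `SubId` and orders by `MIN(Time)` (flip to `DESC` if you want the latest ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Query1 (ID INTEGER, SubId INTEGER, Rate REAL, Time TEXT);
    INSERT INTO Query1 VALUES
        (1,1,10.00,'00:00:10'), (2,1,11.00,'00:00:15'), (3,2,12.00,'00:00:20'),
        (4,3,13.00,'00:00:25'), (5,4,14.00,'00:00:30'), (6,5,15.00,'00:00:35'),
        (7,6,16.00,'00:00:40');
""")

# Materialise the expensive query's top-5 SubIds once, then reuse the result
conn.execute("""
    CREATE TEMP TABLE TopFive AS
        SELECT SubId FROM Query1 GROUP BY SubId ORDER BY MIN(Time) LIMIT 5
""")

rows = conn.execute("""
    SELECT ID, SubId, Rate FROM Query1
    WHERE SubId IN (SELECT SubId FROM TopFive)
    ORDER BY ID
""").fetchall()
```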
|
My answer is only slightly different from Felix's: I would rather create a *covered* NC index. That way, I/O operations get reduced when it's used down the line.
Store the results once in a temporary table and create a covered non-clustered on `SubID`
```
Select ID, SubId, Rate, [Time]
INTO #results
FROM Query1
CREATE NONCLUSTERED INDEX IX_SubID ON #results(SubId) INCLUDE(Id, Rate, [Time])
SELECT A.ID, A.SubId, A.Rate, A.[Time]
FROM
#results A
JOIN
(SELECT TOP 5 SubID from #results order by [Time] desc) B
on A.SubID = B.SubID
```
|
SQL - How to get Top 5 record with sub-record
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have an XML which I am processing in SQL to pass data to SQL Server tables.
XML is as below
```
<SearchProgramsResponse xmlns="http://api.abc.com/2011-03-01/">
<page>1</page>
<items>50</items>
<total>3129</total>
<programItems>
<programItem id="7779">
<name>Coolblue NL</name>
<adrank>5.4</adrank>
<categories>
<category id="34">Shopping &amp; Mail Order Shops</category>
<category id="43">Other</category>
</categories>
</programItem>
</programItems>
</SearchProgramsResponse>
```
Now I am using below query to get results
```
;with xmlnamespaces(default 'http://api.abc.com/2011-03-01/')
select
t1.c.value('@id', 'int') as id,
t1.c.value('(name/text())[1]', 'nvarchar(200)') as name,
t1.c.value('(adrank/text())[1]', 'decimal(29,2)') as adrank
from
@xmldata.nodes('SearchProgramsResponse/programItems/programItem') as t1(c)
```
But here the `xmlnamespaces default` value is hard-coded. How can I pass it as a variable?
I have tried dynamic SQL but getting error
Here is what I am trying
```
DECLARE @xmldata XML
SET @xmldata = N'<SearchProgramsResponse xmlns="http://api.abc.com/2011-03-01/">
<page>1</page>
<items>50</items>
<total>3129</total>
<programItems>
<programItem id="7779">
<name>Coolblue NL</name>
<adrank>5.4</adrank>
<categories>
<category id="34">Shopping &amp; Mail Order Shops</category>
<category id="43">Other</category>
</categories>
</programItem>
</programItems>
</SearchProgramsResponse>';
DECLARE @d NVARCHAR(200)='http://api.abc.com/2011-03-01/'
DECLARE @ns NVARCHAR(max);
SET @ns = '
;with xmlnamespaces(default ' + @d +' )
select
t1.c.value(''@id'', ''int'') as id,
t1.c.value(''(name/text())[1]'', ''nvarchar(200)'') as name,
t1.c.value(''(adrank/text())[1]'', ''decimal(29,2)'') as adrank
from @xmldata.nodes(''SearchProgramsResponse/programItems/programItem'') as t1(c)';
exec sp_executesql @ns;
```
Now this throws error
> Must declare the scalar variable "@xmldata"
because `@xmldata` is a local variable and I am executing dynamic SQL.
But If I try replacing it
```
SET @ns = REPLACE(@ns, '@xmldata', @xmldata);
EXEC sp_executesql @ns;
```
This will again throw an error, because it replaces the reference with the whole XML text rather than binding the variable.
Is there any other way to achieve this without dynamic SQL?
If not how can I get result in dynamic SQL?
Thanks
|
You need to pass additional parameters to `sp_executesql` which specify the local parameter's declaration and the value of that parameter:
```
.....
.....
exec sp_executesql @ns, N'@xmldata XML', @xmldata=@xmldata;
```
**`Sqlfiddle Demo`**
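The principle behind `sp_executesql`'s parameter list — bind the value instead of splicing it into the SQL text — is the same in any host language; a small Python `sqlite3` illustration (the `docs` table is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, payload TEXT)")

xmldata = "<SearchProgramsResponse>...</SearchProgramsResponse>"  # stand-in payload
sql = "INSERT INTO docs VALUES (?, ?)"  # placeholder, like @xmldata in sp_executesql
conn.execute(sql, (1, xmldata))         # value is bound, not concatenated

(stored,) = conn.execute("SELECT payload FROM docs WHERE id = 1").fetchone()
```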
|
If the namespace is not needed in the query you can use a wildcard for the namespace and avoid the dynamic query.
```
select
t1.c.value('@id', 'int') as id,
t1.c.value('(*:name/text())[1]', 'nvarchar(200)') as name,
t1.c.value('(*:adrank/text())[1]', 'decimal(29,2)') as adrank
from
@xmldata.nodes('*:SearchProgramsResponse/*:programItems/*:programItem') as t1(c);
```
|
Pass xmlnamespaces with default as variable in SQL
|
[
"",
"sql",
"sql-server",
"xml",
""
] |
Is there any difference in the performance, or the background execution behavior when counting `*` , any custom field ( `n` ), or the `primary id` in MySQL?
What exactly does `*` refer to in the query, and how does it differ from those two other ways?
```
SELECT COUNT( * ) FROM t;
SELECT COUNT( id ) FROM t;
SELECT COUNT( n ) FROM t;
```
---
**UPDATE:**
Assume, that neither `id` nor `n` is null at any record.
|
```
COUNT(*) will include NULLS
COUNT(column_or_expression) won't.
```
This means `COUNT(any_non_null_column)` will give the same as `COUNT(*)` of course because there are no NULL values to cause differences.
Generally, `COUNT(*)` should be better because any index can be used, whereas `COUNT(column_or_expression)` may not be indexed or SARGable.
From [ANSI-92](http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt) (look for "Scalar expressions 125")
Case:
> a) If `COUNT(*)` is specified, then the result is the cardinality of **T**.
>
> b) Otherwise, let **TX** be the single-column table that is the result of applying the
> to each row of **T** and eliminating null values.
> If one or more null values are eliminated, then a completion condition is
> raised: warning- null value eliminated in set function.
The same rules apply to SQL Server and Sybase too at least
*Note: `COUNT(1)` is the same as `COUNT(*)` because 1 is a non-nullable expression.*
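The NULL behaviour is easy to demonstrate with SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, n TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?)", [(1, 'a'), (2, None), (3, 'c')])

# COUNT(*) counts rows; COUNT(col) skips rows where col is NULL
counts = conn.execute("SELECT COUNT(*), COUNT(id), COUNT(n) FROM t").fetchone()
```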
|
[`count(*)`](https://dev.mysql.com/doc/refman/5.6/en/group-by-functions.html#function_count) has some optimizations in certain cases (querying a single MyISAM table with no `where` clause), which could be the case in the OP, depending on the storage engine. If your query doesn't hit this special case, MySQL will have to construct an execution plan and run the query normally, which would be just as good (or bad) as `count(my_primary_key)` if you have a primary key.
Long story short - don't overthink it. Just use `count(*)` and let the database worry about optimizing your query. After all, that's what it's built for.
|
What to count when counting all rows MySQL
|
[
"",
"mysql",
"sql",
"performance",
"select",
"count",
""
] |
Here is my table:
```
id | title | lang
----+----------+------
1 | Moscow | en
1 | Москва | ru
2 | Helsinki | en
2 | Хельсинки| ru
```
I would like to efficiently get the `ru` title by `en` title.
At the moment, I get the `id` of the entry first and then make another query by hand.
Any other, more elegant, solutions?
|
A [`SELF JOIN`](https://stackoverflow.com/questions/3362038/what-is-self-join-and-when-would-you-use-it) might be of help and is usually a preferable solution to lots of nested queries, if not for performance reasons (see: [Join vs. sub-query](https://stackoverflow.com/a/2577224/3881403), [Rewriting Subqueries as Joins](https://dev.mysql.com/doc/refman/5.5/en/rewriting-subqueries.html)) certainly for readability.
In your case try:
```
SELECT movies_ru.title
FROM movies AS movies_ru
JOIN movies AS movies_en
ON movies_ru.id = movies_en.id
WHERE movies_ru.lang = "ru"
AND movies_en.lang = "en"
AND movies_en.title = "The English Title"
```
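A minimal runnable sketch of the same self-join, using Python's `sqlite3` and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (id INTEGER, title TEXT, lang TEXT)")
conn.executemany("INSERT INTO movies VALUES (?, ?, ?)", [
    (1, "Moscow", "en"), (1, "Москва", "ru"),
    (2, "Helsinki", "en"), (2, "Хельсинки", "ru"),
])

# Join the table to itself on id: one alias filtered to 'ru', the other to 'en'.
ru_title, = conn.execute("""
    SELECT movies_ru.title
    FROM movies AS movies_ru
    JOIN movies AS movies_en ON movies_ru.id = movies_en.id
    WHERE movies_ru.lang = 'ru'
      AND movies_en.lang = 'en'
      AND movies_en.title = 'Moscow'
""").fetchone()
print(ru_title)  # Москва
```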
|
**Edit:** It turns out [Tobia Tesan's answer](https://stackoverflow.com/a/31648626/2302862) is usually better practice.
---
Use a [subselect](https://dev.mysql.com/doc/refman/5.6/en/subqueries.html):
```
SELECT `title` FROM `table` WHERE `lang` = 'ru' AND `id` = (SELECT `id` FROM `table` WHERE `lang` = 'en' AND `title` = 'Moscow')
```
|
How to efficiently SELECT a column's field based on another row with the same `id`?
|
[
"",
"mysql",
"sql",
""
] |
I have the following table with three fields as shown below:
Table : `Testing`
```
create table testing
(
colnum varchar(10),
coldate date,
colid int
);
```
Insertion :
```
insert into testing values('111','2015-01-01',1);
insert into testing values('111','2015-01-02',1);
insert into testing values('111','2015-01-03',1);
insert into testing values('111','2015-01-04',1);
insert into testing values('111','2015-01-05',1);
insert into testing values('222','2015-01-01',1);
insert into testing values('333','2015-01-01',1);
```
Index creation:
```
create clustered index id_idx on testing(colid);
create nonclustered index date_nidx on testing(coldate);
create nonclustered index num_nidx on testing(colnum);
```
**Note**: Now I want to display the records that appear **only in** a specific date and specific ID, not in any other date.
For example: I want to display the records that are **ONLY IN** the specified date and ID, but not in the other dates.
Given Date: `2015-01-01`
Given ID : `1`
For which I have written the following query:
```
select * from testing
where coldate in ('2015-01-01')
and colid = 1
and colnum not in (select colnum from testing where colid = 1 and
                   coldate in ('2015-01-02','2015-01-03','2015-01-04',
                               '2015-01-05'));
```
Result:
```
colnum coldate colid
--------------------------
222 2015-01-01 1
333 2015-01-01 1
```
**Explanation**: The query shows two records because each of them appears only in the specified date and ID. **But** the record `111` is not shown, because it also belongs to other dates, as you can see in the above table.
The above query works fine for me, **but** it takes too long to execute over billions of records.
|
Try this query:
```
SELECT colnum, coldate,colid
FROM
(
select *,COUNT(1) OVER (PARTITION BY colnum) as cnt
from (SELECT DISTINCT colnum, coldate,colid from testing ) t
) q
where q.cnt=1 and q.coldate in ('2015-01-01') and q.colid = 1
```
fiddle link: <http://sqlfiddle.com/#!6/650c0/4>
|
I have made some assumptions on the basis of the result in your example.
* you want specific date and not the other date
* you want all the colid(s) (as per your example)
Can you please check if this is the expected result?
```
SELECT t.*
FROM testing t
LEFT JOIN (SELECT *
FROM testing
WHERE coldate <> '2015-01-01') x
ON x.colnum = t.colnum
WHERE x.colnum IS NULL
```
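As a rough check of this anti-join, here is a sketch with Python's `sqlite3` and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testing (colnum TEXT, coldate TEXT, colid INTEGER)")
rows = [('111', '2015-01-0%d' % d, 1) for d in range(1, 6)]
rows += [('222', '2015-01-01', 1), ('333', '2015-01-01', 1)]
conn.executemany("INSERT INTO testing VALUES (?, ?, ?)", rows)

# Left-join each row to any sibling row on a *different* date; keep only the
# rows with no such sibling (x.colnum IS NULL), i.e. colnums on one date only.
result = conn.execute("""
    SELECT t.colnum
    FROM testing t
    LEFT JOIN (SELECT * FROM testing WHERE coldate <> '2015-01-01') x
           ON x.colnum = t.colnum
    WHERE x.colnum IS NULL
    ORDER BY t.colnum
""").fetchall()
print(result)  # [('222',), ('333',)]
```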
|
Select rows from table where value in column appears once
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I have a below table
```
+------------------------+
| Column1 Column2 |
+------------------------+
| 101 A_TABLE |
| |
| 103 B_TABLE |
+------------------------+
```
I want to run a query such that:
```
if my column1 row 1 has 101 then select * from A_TABLE
If my column1 row 2 has 103 then select * from B_TABLE
and so on
```
I would much appreciate your inputs.
|
You'll need to do something like this:
```
declare @SQL varchar(max) = ''
select @SQL = @SQL + 'select * from ' + Column2 + ' union all '
from table1
set @SQL = left(@SQL, len(@SQL) - 10)
exec (@SQL)
```
This will just take all the tables from column2 and create union all clause from that.
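The concatenate-then-trim step can be sketched outside T-SQL as well; here a hypothetical `tables` list stands in for the values of Column2:

```python
# Build one big UNION ALL statement from the table names, then drop the
# trailing separator (the T-SQL version does this with LEFT/LEN).
tables = ["A_TABLE", "B_TABLE"]  # stand-ins for the values in Column2

sql = ""
for name in tables:
    sql += "select * from " + name + " union all "
sql = sql[:-len(" union all ")]

print(sql)  # select * from A_TABLE union all select * from B_TABLE
```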
|
You can use a while loop:
```
DECLARE @Max_Value INT ,@Min_Value INT ,@SQL VARCHAR (1000) =''
SELECT @Max_Value = MAX(Column1) , @Min_Value = MIN(Column1) FROM [Your_Table]
WHILE (@Min_Value <= @Max_Value)
BEGIN
SET @SQL= ( SELECT 'SELECT * FROM '+Column2 FROM [Your_Table] WHERE Column1=@Min_Value)
EXEC(@SQL)
SELECT @Min_Value = MIN(Column1) FROM [Your_Table] WHERE Column1 > @Min_Value
END
```
|
Use table values in a SQL query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I've been getting this error from an insert on duplicate update query in MYSQL randomly every now and then.
Any idea what's going on? I can't seem to reproduce the error consistently it occurs sometimes and then sometimes not.
Here is the query in question:
```
INSERT INTO friendships (u_id_1,u_id_2,status) VALUES (?,?,'active') ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id);
```
And the schema describing the table is:
```
DROP TABLE IF EXISTS `friendships`;
CREATE TABLE `friendships` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`u_id_1` int(11) NOT NULL,
`u_id_2` int(11) NOT NULL,
`status` enum('active','pending','rejected','blocked') DEFAULT 'pending' NOT NULL,
`initiatiator` enum('1','2','system') DEFAULT 'system' NOT NULL,
`terminator` enum('1','2','system') DEFAULT NULL,
`confirm_timestamp` timestamp DEFAULT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY (`u_id_1`,`u_id_2`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```
|
Your ON DUPLICATE KEY UPDATE statement isn't helping you at all here.
You are taking the LAST\_INSERT\_ID, which is the auto inc of the last successfully inserted row, and trying to update the duplicated row with that id. This will always cause a duplicate primary (you're trying to change the id of some row to match the id of the last thing you added)
|
A separate consideration is to check the source for duplicates. I had a simple audit table
```
INSERT INTO table (field1, field2, ... , field3)
SELECT ...
ON DUPLICATE KEY UPDATE row_id=row_id;
```
where `field1` is an `INDEX` but not `UNIQUE` with `row_ID` as `INTEGER UNSIGNED AUTO_INCREMENT PRIMARY KEY`.
Ran for years, but an unexpected duplicate row triggered this error.
**Fixed by de-duping the source.**
Possibly a trivial point to many readers here, but it cost me some head-scratching (followed by a facepalm).
|
auto-increment value in update conflicts with internally generated values
|
[
"",
"mysql",
"sql",
"prepared-statement",
"innodb",
""
] |
**The Problem:**
I'm working in PostgreSQL 9.0 and I'm having a difficult time figuring out how to tackle the situation where you want to return a specific column value of a certain row for use in a CASE WHEN THEN statement.
I want to basically go in and set the value of table A: someRow's someColumn value, equal to the value of table B: row X's column A value, given the value of row X's column B. (More detail in "Backround Info" if needed to understand the question)
This is what I want to do (but don't know how):
```
Update tableA
Set someColumn
CASE WHEN given_info_column = 'tableB: row X's column B value'
THEN (here I want to return row X's column A value, finding row X using the given column B value)
ELSE someColumn END
```
**Background Info: (Optional, for clarification)**
Imagine that there is a user activity table, and a device table in an already existing database, with already existing activity performed strings that exist throughout to codebase you are working in: (for example)
```
User_Activity:
id (int) | user_name (string) | activity_preformed (string) | category (string)
---------|-----------------------|----------------------------------------|------------------
1 | Joe Martinez | checked out iphone: iphone2 | dvc_activity
2 | Jon Shmoe | uploads video from device: (id: 12345) | dvc_activity
3 | Larry David | goes to the bathroom |other_activity
Device:
seq (int)| device_name (string) | device_srl_num (int) | device_status (string)|
---------+-----------------------+----------------------+-----------------------+
1 | iphone1 | 12344 | available
2 | iphone2 | 12345 | checked out
3 | android1 | 23456 | available
```
Your assignment from your boss is to create a report that shows one table with all device activity, like so:
```
Device Activity Report
(int) (int) (string) (string) (string) (int) (string)
act_seq |usr_id | usr_name | act_performed | dvc_name | dvc_srl_num | dvc_status
---------+-------+--------------+---------------------------------------+-----------+-------------+------------
1 |1 | Joe Martinez | Checked out iphone: iphone2 | iphone2 | 12345 | checked out
2 |2 | John Shmoe | uploads video from device: (id: 12345)| android1 | 23456 | available
```
For the purposes of this question, this has to be done by adding a new column to the user activity table called dvc\_seq which will be a foreign key to the device table. You will create a temporary table by querying from the user activity table and joining the two where User\_Activity (dvc\_seq) = Device (seq)
This is fine and will work great for new entries into the User\_Activity table, which will record a dvc\_seq linking to the associated device **if** the activity involves a device.
The problem is that you need to go in and fill in values for the new dvc\_seq column in the User\_Activity table for all previous entries relating to devices. Since the previous programmers decided to specify which device in the activity\_performed column using the serial number certain times and the device names other times, this presents an interesting problem, where you will need to derive the associated Device seq number from a device, given its name or serial number.
So once again, what I want to do: (using this example)
```
UPDATE User_Activity
SET dvc_seq
CASE WHEN activity_performed LIKE 'checked out iphone:%'
THEN (seq column of Device table)
WHERE (SELECT 1 FROM Device WHERE device_name = (substring in place of the %))
ELSE dvc_seq (I think this would be null since there would be nothing here yet)
END
```
Can any of you help me accomplish this?? Thanks in advance for all responses and advice!
|
The query below uses an update-join to update the sequence number when the serial number or the name is contained within the activity\_performed
```
UPDATE User_Activity AS a
SET dvc_seq = b.seq
FROM Device AS b
WHERE a.activity_performed LIKE '%from device: (id: ' || b.device_srl_num || '%'
   OR a.activity_performed LIKE '%: ' || b.device_name || '%'
```
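A runnable approximation with Python's `sqlite3`, rewritten as a correlated subquery (older SQLite versions lack `UPDATE ... FROM`) and using hypothetical table and column names based on the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device (seq INTEGER, device_name TEXT, device_srl_num INTEGER)")
conn.executemany("INSERT INTO device VALUES (?, ?, ?)",
                 [(1, 'iphone1', 12344), (2, 'iphone2', 12345), (3, 'android1', 23456)])
conn.execute("CREATE TABLE user_activity (id INTEGER, activity_performed TEXT, dvc_seq INTEGER)")
conn.executemany("INSERT INTO user_activity VALUES (?, ?, NULL)", [
    (1, 'checked out iphone: iphone2'),
    (2, 'uploads video from device: (id: 12345)'),
    (3, 'goes to the bathroom'),
])

# Correlated-subquery form of the update-join: look up the device whose
# serial number or name appears inside the activity string.
conn.execute("""
    UPDATE user_activity
    SET dvc_seq = (SELECT b.seq FROM device b
                   WHERE activity_performed LIKE '%(id: ' || b.device_srl_num || '%'
                      OR activity_performed LIKE '%: ' || b.device_name || '%')
""")
result = [r[0] for r in conn.execute("SELECT dvc_seq FROM user_activity ORDER BY id")]
print(result)  # [2, 2, None]
```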
|
Just an additional update on how to speed up this code based off of the correct answer given by @FuzzyTree (this would only work for the serial number, which has a standard length, and not for the device name which could be many different sizes)
Because of the LIKE used in the join, the query runs very slowly for large databases. An even better solution would utilize the Postgres `substring()` and `position()` functions and join the tables on the serial number, like so:
```
UPDATE User_Activity AS a
SET dvc_seq = b.seq
FROM Device AS b
WHERE b.device_srl_num =
      substring(a.activity_performed from position('from device: (id: ' in a.activity_performed)
                + (length of the prefix, so that position returns the start position of the serial number)
                for (lengthOfSerialNumberHere))
  AND a.activity_performed LIKE '%from device: (id: ' || b.device_srl_num || '%';
```
|
How do you return a specific column value of a certain row in an existing table within a database?
|
[
"",
"sql",
"postgresql",
"foreign-keys",
"case",
"case-when",
""
] |
Is there a way to give a result table of a join an alias?
Example query:
```
SELECT *
FROM t1
JOIN t2
ON t1.num = t2.num
AS result;
```
|
Yes!
```
select result.*
from (SELECT *
FROM t1
JOIN t2
ON t1.num = t2.num) result
```
You do need to watch out where you've got columns of the same name in both the inner tables; you'll get an ambiguous column error for `num` in the results table. Instead of `select *` it's a good idea to pick out the columns you want.
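A minimal sketch of the derived-table alias, using Python's `sqlite3` and hypothetical columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (num INTEGER, name TEXT)")
conn.execute("CREATE TABLE t2 (num INTEGER, category TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, 'a'), (2, 'b')])
conn.execute("INSERT INTO t2 VALUES (1, 'x')")

# "result" names the derived table; selecting explicit columns sidesteps
# the ambiguous-name problem for num.
rows = conn.execute("""
    SELECT result.name, result.category
    FROM (SELECT t1.num, t1.name, t2.category
          FROM t1 JOIN t2 ON t1.num = t2.num) AS result
""").fetchall()
print(rows)  # [('a', 'x')]
```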
|
You can make it a table expression (sub-query) and give that an alias, but in your simple example I don't see what benefit it could add. I assume you're going to use this in a more complex query
```
SELECT *
FROM (
SELECT t1.id, t1.name, t2.category, t2.subcategory
FROM t1
INNER JOIN t2 ON t1.id=t2.id
) AS result
LEFT JOIN t3 ON result.id=t3.id
```
|
SQL Join result table as alias
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a static list of strings from which I want to determine which strings do NOT exist in a particular table. The list contains over a thousand entries and does not exist in the database (and I cannot make changes to the database; I can only query against it).
For example, my list of strings could be `("Apple", "Orange", "Banana")` and the table I want to search against is:
```
Id Name
1 Apple
2 Banana
```
The results of the query should be `Orange`
I can find the ones that do exist easily, but can't do the inverse.
|
If you want your list to show up in the results, then you will need to construct a table of some type: permanent, temporary, or derived. For instance:
```
with list as (
select 'Apple' as name union all select 'Banana' union all select 'Orange'
)
select l.*
from list l
where not exists (select 1 from table t where t.name = l.name);
```
On the web, you can easily find a `split()` function, which would allow you to write a query such as:
```
with list(name) as (
select *
from dbo.split(@ListAsString, ',')
)
select l.*
from list l
where not exists (select 1 from table t where t.name = l.name);
```
Here is another idea. I don't really recommend it, but it might be applicable in some cases:
```
select @ListAsString = replace(@ListAsString, t.name, '')
from table t;
```
You would probably want to include appropriate delimiters. The idea is to remove elements from the string. In the end, you end up with a value of the non-matching names. It is as a string, though, and not as a table.
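The CTE-plus-`NOT EXISTS` approach from the first query can be sketched end-to-end with Python's `sqlite3` (the table is named `fruit` here, since `table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruit (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO fruit VALUES (?, ?)", [(1, 'Apple'), (2, 'Banana')])

# Turn the static list into a derived table, then anti-join with NOT EXISTS.
missing = conn.execute("""
    WITH list(name) AS (
        SELECT 'Apple' UNION ALL SELECT 'Banana' UNION ALL SELECT 'Orange'
    )
    SELECT l.name
    FROM list l
    WHERE NOT EXISTS (SELECT 1 FROM fruit t WHERE t.name = l.name)
""").fetchall()
print(missing)  # [('Orange',)]
```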
|
```
DECLARE @tbl TABLE( Id INT, Name VARCHAR(20))
INSERT @tbl
SELECT 1, 'Apple' UNION ALL
SELECT 2, 'Banana'
```
Using `NOT IN`
```
SELECT Name FROM
(SELECT 'Apple' Name UNION ALL
SELECT 'Banana' UNION ALL
SELECT 'Orange') t
WHERE Name NOT IN (SELECT Name FROM @tbl)
```
|
Finding items that do not exist in a table from static values
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to add two queries in Pentaho Report Designer so that I could get query results from more than one table.
Here is my case:
**Query 1:**
```
SELECT
`raw_crawl_shop_list`.`sales`,
`raw_crawl_shop_list`.`keyword`,
`raw_crawl_shop_list`.`number`
FROM
`raw_crawl_shop_list`
LIMIT 1
```
**Query 2:**
```
SELECT
`dim_keyword_tag`.`keyword`,
`dim_keyword_tag`.`tag`
FROM
`dim_keyword_tag`
```
**My problem:**
I want to use the five columns selected from the two tables respectively (there is no relationship between these columns; I just use them separately). However, in Pentaho Report Designer I can only use columns selected from one single query.
As the picture shown below:
[](https://i.stack.imgur.com/T7EtU.png)[](https://i.stack.imgur.com/dGffg.png)
If I choose Query 1, the columns selected by Query 2 disappear; if I choose Query 2, the same happens with Query 1.
**My question:**
How could I use two queries in Pentaho Report Designer in a single report **OR** is there a way to combine these two sql queries in a single one with the three columns from the first and the two columns from the second?
Thank you all!
|
There are two possible ways of reading two queries in PRD.
1. Creating a **SUB-REPORT** and then defining the second query. Check this [wiki](http://infocenter.pentaho.com/help/index.jsp?topic=%2Freport_designer_user_guide%2Ftask_sub_reports.html) for more.
2. **JOINING** the two tables inside a single query, as correctly said by Kotukunui.
|
If both tables are in the same database, just JOIN them into a single query
```
SELECT
`raw_crawl_shop_list`.`sales`,
`raw_crawl_shop_list`.`keyword`,
`raw_crawl_shop_list`.`number`,
`dim_keyword_tag`.`tag`
FROM
`raw_crawl_shop_list` JOIN `dim_keyword_tag` ON `raw_crawl_shop_list`.`keyword` = `dim_keyword_tag`.`keyword`
```
|
How to use more than one query in Pentaho Report Designer?
|
[
"",
"sql",
"database",
"pentaho",
"pentaho-report-designer",
""
] |
SQL code:
```
select DeviceUID, LogType
from `DeviceUsage_Table`
where DeviceUID in (select DeviceUID, Max(LogTime)
from `DeviceUsage_Table`
group by DeviceUID);
```
Error:
> Operand should contain 1 column(s)
How do I solve it?
---
I want to use the first select result as new condition.
```
select DeviceUID, Max(LogTime)
from `DeviceUsage_Table`
group by DeviceUID
```
With this code I can get each `DeviceUID` and its last `LogTime`.
I want to use that query result to select each DeviceUID's LogType at its last `LogTime`, which is why I wrote the query above.
|
Your sub-query:
```
"SELECT DeviceUID, Max(LogTime) FROM `DeviceUsage_Table` group by DeviceUID"
```
currently returns two columns, DeviceUID and Max(Logtime). This won't work with `DeviceUID in` as following the `in` you need a list. Either a hand-written list: `("A", "B", "C")` or a query that returns just one column.
What was your reasoning for returning Max(LogTime) too? Then I may be able to help more.
|
The error message is explicit: by the rules of SQL, in order to use the "in" keyword, the SELECT that follows it must return only one column. So you might want to do something like: DeviceUID in (SELECT **DeviceUID** FROM ...
|
My SQL code has an error
|
[
"",
"sql",
""
] |
I get a table `X` (with 1 row):
```
COL_XA COL_VG COL_LF COL_EQ COL_PP COL_QM ...
1 0 0 0 1 1
```
Each column `COL_x` can have only values 0 or 1.
I want to transform this table into this form `Y`:
```
NAME
"COL_XA"
"COL_PP"
"COL_QM"
...
```
This table should print only those columns from table `X` that the first (and only) row has value 1.
This question is related to any other question about transposition, with the difference that I don't want the actual values, but the column names, which are not known in advance.
I could use Excel or PL/SQL to create a list of strings of the form
`MIN(CASE WHEN t.COL_XA = 1 THEN 'COL_XA' ELSE null END) as NAME`, but this solution is inefficient (`EXECUTE IMMEDIATE`) and difficult to maintain. And the string passed to `EXECUTE IMMEDIATE` is limited to 32700 characters, which can be easily exceeded in production, where the table `X` can have well over 500 fields.
|
To completely automate the query you must be able to read the column names of the actual cursor. In PL/SQL this is possible using DBMS\_SQL (another way would be JDBC). Based on [this OTN thread](https://community.oracle.com/thread/702648), here is a basic table function.
The importent parts are
1) *dbms\_sql.parse* the query given as a text string and *dbms\_sql.execute* it
2) *dbms\_sql.describe\_columns* to get the list of the column names returned from the query on table x
3) *dbms\_sql.fetch\_rows* to fetch the first row
4) loop over the columns, checking each *dbms\_sql.column\_value*; if it equals 1, output the column\_name (with PIPE)
```
create or replace type str_tblType as table of varchar2(30);
/
create or replace function get_col_name_on_one return str_tblType
PIPELINED
as
l_theCursor integer default dbms_sql.open_cursor;
l_columnValue varchar2(2000);
l_columnOutput varchar2(4000);
l_status integer;
l_colCnt number default 0;
l_colDesc dbms_sql.DESC_TAB;
begin
dbms_sql.parse( l_theCursor, 'SELECT * FROM X', dbms_sql.native );
for i in 1 .. 1000 loop
begin
dbms_sql.define_column( l_theCursor, i,
l_columnValue, 2000 );
l_colCnt := i;
exception
when others then
if ( sqlcode = -1007 ) then exit;
else
raise;
end if;
end;
end loop;
dbms_sql.define_column( l_theCursor, 1, l_columnValue, 2000 );
l_status := dbms_sql.execute(l_theCursor);
dbms_sql.describe_columns(l_theCursor,l_colCnt, l_colDesc);
if dbms_sql.fetch_rows(l_theCursor) > 0 then
for lColCnt in 1..l_colCnt
loop
dbms_sql.column_value( l_theCursor, lColCnt, l_columnValue );
--DBMS_OUTPUT.PUT_LINE( l_columnValue);
IF (l_columnValue = '1') THEN
DBMS_OUTPUT.PUT_LINE(Upper(l_colDesc(lColCnt).col_name));
pipe row(Upper(l_colDesc(lColCnt).col_name));
END IF;
end loop;
end if;
return;
end;
/
select * from table(get_col_name_on_one);
COLUMN_LOOOOOOOOOOOOOONG_100
COLUMN_LOOOOOOOOOOOOOONG_200
COLUMN_LOOOOOOOOOOOOOONG_300
COLUMN_LOOOOOOOOOOOOOONG_400
COLUMN_LOOOOOOOOOOOOOONG_500
COLUMN_LOOOOOOOOOOOOOONG_600
COLUMN_LOOOOOOOOOOOOOONG_700
COLUMN_LOOOOOOOOOOOOOONG_800
COLUMN_LOOOOOOOOOOOOOONG_900
COLUMN_LOOOOOOOOOOOOOONG_1000
```
You should not get into trouble with wide tables using this solution; I tested with a 1000-column table with *long* column names.
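The core trick, reading the column names off a live cursor and pairing them with the first row's values, carries over to other environments as well; a sketch with Python's `sqlite3`, where `cursor.description` plays the role of `dbms_sql.describe_columns`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE x (col_xa INT, col_vg INT, col_lf INT,
                                col_eq INT, col_pp INT, col_qm INT)""")
conn.execute("INSERT INTO x VALUES (1, 0, 0, 0, 1, 1)")

# Read column names from the cursor metadata, fetch the single row, and
# keep the names whose value is 1.
cur = conn.execute("SELECT * FROM x")
names = [d[0].upper() for d in cur.description]
row = cur.fetchone()
result = [name for name, value in zip(names, row) if value == 1]
print(result)  # ['COL_XA', 'COL_PP', 'COL_QM']
```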
|
Here is a solution, but I have to break it into two parts.
First you extract all the column names of the table. I have used LISTAGG to collect the column names separated by `,`.
I will use the output of the first query in the second query.
```
select listagg(column_name,',') WITHIN GROUP (ORDER BY column_name )
from user_tab_cols where upper(table_name)='X'
```
The output of the above query will be like COL\_XA,COL\_VG,COL\_LF,COL\_EQ,COL\_PP,COL\_QM ... and so on.
Copy the above output and use it in the query below, replacing `<outputvaluesfromfirstquery>`:
```
select NAME from X
unpivot ( bit for NAME in (<outputvaluesfromfirstquery>))
where bit=1
```
I tried to merge the above two into one query, but there is an option for `pivot xml` and not for `unpivot xml`.
|
How to transpose a table from a wide format to narrow, using the values as a filter?
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a SQL statement that I wish to automate using SAS EG (9.4). The following statement has been tested in Teradata SQL Assistant and works.
```
select * from TD.DATA where date='2015-06-01'
```
Now I wish to push this through a proc SQL pass through, and feed the date to the SQL program, like so....
```
proc sql;
connect to teradata as tera(user=&tera_user password="&tera_pwd" tdpid=terap);
create table MYDATA as
select * from connection to tera
(
select * from TD.DATA where date='2015-06-01'
);
disconnect from tera;
quit;
```
The above code has been tested and produces the exact same output as the previous SQL statement. However, what I really want is to do something like this:
```
%let input_date='2015-06-01';
proc sql;
connect to teradata as tera(user=&tera_user password="&tera_pwd" tdpid=terap);
create table MYDATA as
select * from connection to tera
(
select * from TD.DATA where date=&input_date.
);
disconnect from tera;
quit;
```
I have tried various combinations of quotations and different date formats....what am I missing here? Thanks.
|
You can use the `%BQUOTE()` macro function to resolve macro variables within single quotes.
```
%let input_date = 2015-06-01;
proc sql;
connect to teradata as tera(user=&tera_user password="&tera_pwd" tdpid=terap);
create table MYDATA as
select * from connection to tera
(
select * from TD.DATA where date = %BQUOTE('&INPUT_DATE')
);
disconnect from tera;
quit;
```
|
Try this:
```
%let input_date=2015-06-01;
proc sql;
connect to teradata as tera(user=&tera_user password="&tera_pwd" tdpid=terap);
create table MYDATA as
select * from connection to tera
(
select * from TD.DATA where date=%str(%'&input_date%')
);
disconnect from tera;
quit;
```
|
Use SAS Macro Variable within Proc SQL Teradata passthrough
|
[
"",
"sql",
"sas",
""
] |
Question with left join. I am trying to LEFT JOIN a table that requires other tables to be joined on the initial left joined table. So..
```
SELECT * FROM tableA
LEFT JOIN tableB
ON tableB.id=tableA.id
JOIN tableC
ON tableC.id=tableB.id
```
The problem is that if I don't left join table C I get no results, and if I do left join it I get too many results.
What kind of joins should I be using so that if the tableB join is null, the tableC joins will also be null?
|
What about a subquery?
```
SELECT * FROM tableA
LEFT JOIN (SELECT tableB.id FROM tableB
JOIN tableC
ON tableC.id=tableB.id) tableZ
ON tableZ.id=tableA.id
```
|
> I don't left join table C I get no results, and if do left join I get
> too many results
You need to determine what your driving table and data are. In this case, it seems like table A is the driving table, and the join from B to C could also be a left join, meaning rows from A are still returned even when there is no matching row in B or C.
```
SELECT * FROM tableA
LEFT JOIN tableB
ON tableB.id=tableA.id
LEFT JOIN tableC
ON tableC.id=tableB.id
```
> if do left join I get too many results
Can you post some sample data to show what you mean by this?
|
Left Join With Regular Joins and Results
|
[
"",
"sql",
"postgresql",
""
] |
I have a simple query from which I only want to return some values:
```
SELECT advisor AS advisor,
SUM(`outcome` LIKE '%TPS_Approved%') AS approved
FROM shift_report
WHERE `team`!=6 AND date>20150720 AND date<20150728
GROUP BY `advisor`
ORDER BY `advisor`
```
I only want it to return the advisor and approved count if the count is >= 1.
|
Would this do what you want?
```
SELECT advisor AS advisor,
count(*) AS approved
FROM shift_report
WHERE `team`!=6 AND date>20150720 AND date<20150728 and `outcome` LIKE '%TPS_Approved%'
GROUP BY `advisor`
ORDER BY `advisor`
```
|
One of the possible options:
```
SELECT A.advisor, A.approved
FROM (
SELECT advisor,
SUM(`outcome` LIKE '%TPS_Approved%') AS approved
FROM shift_report WHERE `team`!=6 and date>20150720 and date<20150728
group by `advisor` order by `advisor`
) A
WHERE
A.approved >= 1
```
|
SQL SUM only return values >=1
|
[
"",
"mysql",
"sql",
"sum",
""
] |
```
SELECT *, someSpecialProperty
FROM table
WHERE x = @mail
OR y IN (SELECT y FROM someTable WHERE m = @mail)
OR z IN (SELECT z FROM otherTable WHERE otherM = @mail)
```
`someSpecialProperty` should be
* `1` if the first of the three conditions holds
* `2` if the second of the three conditions hold, but not the first
* `3` if the third of three conditions hold, but neither the second or the first
So I tried
```
CASE x WHEN @mail THEN 1 ELSE
CASE y WHEN IN(SELECT y FROM someTable WHERE m=@mail) THEN 2 ELSE
CASE z WHEN IN(SELECT z FROM otherTable WHERE otherM=@mail) THEN 3 ELSE 0 END
AS someSpecialProperty
```
But
> "the syntax is incorrect near `IN`".
How could I get this to work?
|
I think you need this:
```
SELECT
CASE
WHEN x = @mail THEN 1
WHEN y IN (SELECT y FROM someTable WHERE m = @mail) THEN 2
WHEN z IN (SELECT z FROM otherTable WHERE otherM = @mail) THEN 3
ELSE 0
END AS newField
```
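A runnable sketch of the searched-`CASE` precedence with Python's `sqlite3`, using hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_t (x TEXT, y INTEGER, z INTEGER)")
conn.execute("CREATE TABLE some_t (y INTEGER, m TEXT)")
conn.execute("CREATE TABLE other_t (z INTEGER, otherM TEXT)")
conn.executemany("INSERT INTO main_t VALUES (?, ?, ?)", [
    ('a@b.c', 1, 9),   # condition 1 holds
    ('no', 1, 9),      # condition 2 holds via y
    ('no', 0, 9),      # condition 3 holds via z
    ('no', 0, 0),      # nothing holds -> 0
])
conn.execute("INSERT INTO some_t VALUES (1, 'a@b.c')")
conn.execute("INSERT INTO other_t VALUES (9, 'a@b.c')")

# A searched CASE evaluates its WHEN branches in order, so the first true
# condition wins: exactly the 1/2/3 precedence asked for.
flags = [r[0] for r in conn.execute("""
    SELECT CASE
             WHEN x = :mail THEN 1
             WHEN y IN (SELECT y FROM some_t WHERE m = :mail) THEN 2
             WHEN z IN (SELECT z FROM other_t WHERE otherM = :mail) THEN 3
             ELSE 0
           END
    FROM main_t ORDER BY rowid
""", {"mail": 'a@b.c'})]
print(flags)  # [1, 2, 3, 0]
```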
|
You should use EXISTS instead of `y IN (SELECT ...)`:
```
SELECT
CASE WHEN x = @email THEN 1
WHEN EXISTS (SELECT y FROM someTable AS S WHERE S.m=@mail AND S.y = T.y ) THEN 2 /* y here should be the column from outer table T*/
WHEN EXISTS (SELECT z FROM otherTable AS O WHERE O.otherM=@mail AND O.z = T.z) THEN 3 /* z here should be the column from outer table T */
ELSE 0
END
FROM table T
```
Hope it helps.
|
SQL: CASE ... WHEN IN ..., to return which OR holds
|
[
"",
"sql",
"sql-server",
"case",
""
] |
If I have the following table:
```
| name | value |
------------------
| A | 1 |
| B | NULL |
```
Where at the moment name is of type `varchar(10)` and value is of type `bit`.
I want to change this table so that value is actually a `nvarchar(3)` however, and I don't want to lose any of the information during the change. So in the end I want to end up with a table that looks like this:
```
| name | value |
------------------
| A | Yes |
| B | No |
```
What is the best way to convert this column from one type to another, and also convert all of the data in it according to a pre-determined translation?
NOTE: I am aware that if I was converting, say, a `varchar(50)` to `varchar(200)`, or an `int` to a `bigint`, then I can just alter the table. But I require a similar procedure for a `bit` to a `nvarchar`, which will not work in this manner.
|
[Alex K's](https://stackoverflow.com/users/246342/alex-k "Alex K") comment to my question was the best.
> Simplest and safest; Add a new column, update with transform, drop existing column, rename new column
Transforming each item with a simple:
```
UPDATE Table
SET temp_col = CASE
WHEN value=1
THEN 'yes'
ELSE 'no'
END
```
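The add-column-then-translate steps can be sketched with Python's `sqlite3` (dropping the old column and renaming would follow, version permitting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, value INTEGER)")  # bit-like column
conn.executemany("INSERT INTO t VALUES (?, ?)", [('A', 1), ('B', None)])

# Step 1: add the replacement column. Step 2: CASE-translate the old values
# (value = 1 is NULL for the NULL row, so it falls through to the ELSE).
conn.execute("ALTER TABLE t ADD COLUMN temp_col TEXT")
conn.execute("UPDATE t SET temp_col = CASE WHEN value = 1 THEN 'Yes' ELSE 'No' END")

rows = conn.execute("SELECT name, temp_col FROM t ORDER BY name").fetchall()
print(rows)  # [('A', 'Yes'), ('B', 'No')]
```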
|
The best option is to ALTER the column from `bit` to `varchar` and then run an update to change 1 to 'Yes' and 0 or NULL to 'No'.
This way you don't have to create a new column and then rename it later.
|
What is the best way to change the type of a column in a SQL Server database, if there is data in said column?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need to aggregate amounts grouped into 12-month "horizons" over the next 5 years:
*assuming today is **2015-08-15***
```
SUM amount from 0 to 12 next months (from 2015-08-16 to 2016-08-15)
SUM amount from 12 to 24 next months (from 2016-08-16 to 2017-08-15)
SUM amount from 24 to 36 next months ...
SUM amount from 36 to 48 next months
SUM amount from 48 to 60 next months
```
Here is a [fiddled](http://sqlfiddle.com/#!15/af71d/1) dataset example:
```
+----+------------+--------+
| id | date | amount |
+----+------------+--------+
| 1 | 2015-09-01 | 10 |
| 2 | 2015-10-01 | 10 |
| 3 | 2016-10-01 | 10 |
| 4 | 2017-06-01 | 10 |
| 5 | 2018-06-01 | 10 |
| 6 | 2019-05-01 | 10 |
| 7 | 2019-04-01 | 10 |
| 8 | 2020-04-01 | 10 |
+----+------------+--------+
```
Here is the expected result:
```
+---------+--------+
| horizon | amount |
+---------+--------+
| 1 | 20 |
| 2 | 20 |
| 3 | 10 |
| 4 | 20 |
| 5 | 10 |
+---------+--------+
```
How can I get these 12-month grouped "horizons"?
---
I tagged *PostgreSQL* but I'm actually using an ORM so it's just to find the idea. (by the way I don't have access to the date formatting functions)
|
I would split the data into 12-month time frames and group by that:
```
SELECT
FLOOR(
(EXTRACT(EPOCH FROM date) - EXTRACT(EPOCH FROM now()))
/ EXTRACT(EPOCH FROM INTERVAL '12 month')
) + 1 AS "horizon",
SUM(amount) AS "amount"
FROM dataset
GROUP BY horizon
ORDER BY horizon;
```
[SQL Fiddle](http://sqlfiddle.com/#!15/af71d/55)
Inspired by: [Postgresql SQL GROUP BY time interval with arbitrary accuracy (down to milli seconds)](https://stackoverflow.com/questions/12045600/postgresql-sql-group-by-time-interval-with-arbitrary-accuracy-down-to-milli-sec)
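The same bucketing idea can be checked in plain Python, dividing each row's day offset from "today" into whole 365-day blocks (an approximation, so boundaries can drift slightly, just like the interval-epoch division):

```python
from datetime import date
from collections import defaultdict

today = date(2015, 8, 15)
data = [(date(2015, 9, 1), 10), (date(2015, 10, 1), 10),
        (date(2016, 10, 1), 10), (date(2017, 6, 1), 10),
        (date(2018, 6, 1), 10), (date(2019, 5, 1), 10),
        (date(2019, 4, 1), 10), (date(2020, 4, 1), 10)]

totals = defaultdict(int)
for d, amount in data:
    horizon = (d - today).days // 365 + 1  # which 12-month block the date falls in
    totals[horizon] += amount

print(dict(sorted(totals.items())))  # {1: 20, 2: 20, 3: 10, 4: 20, 5: 10}
```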
|
Perhaps CTE?
```
WITH RECURSIVE grps AS
(
SELECT 1 AS Horizon, (date '2015-08-15') + interval '1' day AS FromDate, (date '2015-08-15') + interval '1' year AS ToDate
UNION ALL
SELECT Horizon + 1, ToDate + interval '1' day AS FromDate, ToDate + interval '1' year
FROM grps WHERE Horizon < 5
)
SELECT
Horizon,
(SELECT SUM(amount) FROM dataset WHERE date BETWEEN g.FromDate AND g.ToDate) AS SumOfAmount
FROM
grps g
```
[SQL fiddle](http://sqlfiddle.com/#!15/af71d/40)
|
GROUP BY next months over N years
|
[
"",
"sql",
"postgresql",
"group-by",
"aggregation",
""
] |
I have a populated table with a 'Number' as `PK`. I would like a query which searches for a specific number and, if it is not found, returns NULL rather than no rows at all.
I have managed to do it for a single column:
```
SELECT (SELECT Risk_Impact FROM [dbo].[RFC] WHERE Number = 'RFC-018345')
```
However I would like to select multiple columns like:
```
SELECT (SELECT Risk_Impact, Risk_Impact, BI_TestingOutcome FROM [dbo].[RFC] WHERE Number = 'RFC-018345')
```
However it is giving me an error:
> "Msg 116, Level 16, State 1, Line 1
> Only one expression can be specified in the select list when the subquery is not introduced with EXISTS."
Can you please assist?
Thank you in advance
|
Try
```
select p.* from (select 1 as t ) v
left join (select * from [dbo].[RFC] WHERE Number = 'RFC-018345') p
on 1=1
```
|
In SQL Server you should use the `ISNULL` function, like this:
```
SELECT isnull((SELECT Risk_Impact FROM [dbo].[RFC] WHERE Number = 'RFC-018345'),0)
```
|
SQL Server return NULL (or value) in case entry does not exist - Multiple Columns
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I have to build a process that takes a VARCHAR string (for example 'AHT559') and converts it to a INT only string by converting the Alphabetic chars to INTEGERS based on the nth letter in the alphabet. The above would thus result in: 010820559.
I have done this in SAS before, but I'm relatively new to SQL. What would be the best way to do this in SQL?
Here is what I've done in SAS:
```
DO _i = 1 TO length( account );
IF (rank( char( account, _i ) ) -64) < 0 THEN agreement_hash = CATS( agreement_hash, char( account, _i ) );
ELSE IF (rank( char( account, _i ) ) -64) < 10 THEN agreement_hash = CATS( agreement_hash, 0, rank( char( account, _i ) )-64 );
ELSE agreement_hash = CATS( agreement_hash, rank( char( account, _i ) )-64 );
END;
```
|
Here is a similar script for SQL Server; any character that is not a capital letter is assumed to be a digit:
```
DECLARE @x varchar(100) = 'AHT559'
DECLARE @p int = len(@x)
WHILE @p > 0
SELECT @x =
CASE WHEN substring(@x, @p, 1) between 'A' and 'Z'
THEN stuff(@x, @p, 1, right(ascii(substring(@x, @p, 1)) - 64 + 100, 2))
ELSE @x END,
@p -= 1
SELECT @x
```
Result:
```
010820559
```
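The transformation is easy to sanity-check outside the database. This Python sketch mirrors the same ASCII arithmetic (`ord('A')` is 65, so subtracting 64 gives the letter's position, zero-padded to two digits):

```python
def letters_to_digits(s: str) -> str:
    out = []
    for ch in s:
        if "A" <= ch <= "Z":
            # nth letter of the alphabet, zero-padded to two digits
            out.append(f"{ord(ch) - 64:02d}")
        else:
            out.append(ch)  # digits pass through unchanged
    return "".join(out)

print(letters_to_digits("AHT559"))  # 010820559
```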
|
If the format of the values is always the same as you state in the comments, and you only need to process a single value at a time, you can do some simple string manipulation to convert the characters to integers using their [ASCII values](http://www.techonthenet.com/ascii/chart.php), subtracting 64 to get the position of each alphabetic character:
```
SELECT ASCII('A') -- produces 65
SELECT ASCII('A') - 64 -- produces 1
```
This is a little long winded and could be done in less lines of code, but it's separated for clarity.
```
DECLARE @val NVARCHAR(10) = 'AHT559'
-- get first, second and third character numeric values
DECLARE @first INT = ASCII(SUBSTRING(@val, 1, 1)) - 64
DECLARE @second INT = ASCII(SUBSTRING(@val, 2, 1)) - 64
DECLARE @third INT = ASCII(SUBSTRING(@val, 3, 1)) - 64
-- join them together adding a '0' if < 10
SELECT RIGHT('0' + CAST(@first AS VARCHAR(2)), 2)
+ RIGHT('0' + CAST(@second AS VARCHAR(2)), 2)
+ RIGHT('0' + CAST(@third AS VARCHAR(2)), 2)
+ RIGHT(@val, 3)
```
Tested on 4 million rows:
```
-- temp table creation - takes approx 100 seconds on my machine
CREATE TABLE #temp (val NVARCHAR(6))
DECLARE @rowno INT = 1
SELECT @rowno = 1
WHILE @rowno <= 4000000
BEGIN
INSERT INTO #temp ( val ) VALUES ( 'AHT559' )
SELECT @rowno = @rowno + 1
END
```
To run this code against the entire temp table takes < 20 seconds on my machine:
```
SELECT val AS OrignalValue,
RIGHT('0' + CAST( ASCII(SUBSTRING(val, 1, 1)) - 64 AS VARCHAR(2)), 2)
+ RIGHT('0' + CAST( ASCII(SUBSTRING(val, 2, 1)) - 64 AS VARCHAR(2)), 2)
+ RIGHT('0' + CAST( ASCII(SUBSTRING(val, 3, 1)) - 64 AS VARCHAR(2)), 2)
+ RIGHT(val, 3) AS FormattedValue
FROM #temp
```
|
Converting CHAR string to nth letter in Alphabet string in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I create a table:
```
create table empty
(
data_id int identity(70,30),
emp_name varchar(20),
address varchar(20),
city varchar(20)
)
```
and insert data like:
```
data_id emp_name address city
---------------------------------------------
70 ritesh 210 b_block sivapur
100 pritma visvaas khand lucknow
130 rakesh nehru nagar delhi
```
Now I want to alter auto increment from `(70, 30)` to `(70, 10)`.
Can I do this?
|
Create a new column called `data_id_New`.
```
ALTER TABLE empty ADD data_id_New int;
```
Now copy all values of `data_id` into `data_id_New`.
```
Update empty set data_id_New = data_id
```
Now delete column.
```
ALTER TABLE empty DROP COLUMN data_id;
```
Rename column `data_id_New` to `data_id` (SQL Server uses `sp_rename` for this rather than `ALTER TABLE ... RENAME COLUMN`).
```
EXEC sp_rename 'empty.data_id_New', 'data_id', 'COLUMN';
```
Note, however, that SQL Server has no `ALTER TABLE ... MODIFY`, and the `IDENTITY` property cannot be added to an existing column in place; to get the new seed and increment you have to recreate the column (or the whole table, as in the other answer).
Hope this solves your problem.
|
You can not change that in one go, you would need to create a new table and copy over the rows.
Try this to create a temp table, move over the rows, drop the old table and rename the temp table:
```
BEGIN TRANSACTION
CREATE TABLE dbo.Tmp_empty
(
data_id int NOT NULL IDENTITY (70, 10),
emp_name varchar(20) NULL,
address varchar(20) NULL,
city varchar(20) NULL
) ON [PRIMARY]
GO
SET IDENTITY_INSERT dbo.Tmp_empty ON
GO
IF EXISTS(SELECT * FROM dbo.empty)
EXEC('INSERT INTO dbo.Tmp_empty (data_id, emp_name, address, city)
SELECT data_id, emp_name, address, city FROM dbo.empty WITH (HOLDLOCK TABLOCKX)')
GO
SET IDENTITY_INSERT dbo.Tmp_empty OFF
GO
DROP TABLE dbo.empty
GO
EXECUTE sp_rename N'dbo.Tmp_empty', N'empty', 'OBJECT'
GO
COMMIT
```
|
How change auto increment statement in SQL Server?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Here is my first question about a SQL Server 2008 Express database, containing articles, quantities, prices and so on. Unfortunately when I loaded data the first time I loaded articles making the following mistake:
I created two very similar records for (almost) every article, differing only by a single character.
Here's an example:
* TUTO510088.9X3
* TUTO510088,9X3
The only differing characters are `,` and `.`. Is there a way to `SELECT` all these articles so I can `DELETE` the ones with the comma and leave the others?
Please note that the position of the differing character is not fixed; it could be 3rd from the right, or in the middle.
|
You can do it using a self join:
```
delete T2
from TableName T1 JOIN
TableName T2 on Replace(T1.article,'.','')=Replace(T2.article,',','')
where T2.article like '%,%'
```
Demo in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/d4b60/11)
You can check which records are going to be deleted by replacing `delete T2` by `select T2.*`
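As a portability note: SQLite has no joined `DELETE`, but the same match-and-delete idea can be expressed with an `IN` subquery. A small runnable sketch (table name invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE articles (article TEXT)")
con.executemany("INSERT INTO articles VALUES (?)",
                [("TUTO510088.9X3",), ("TUTO510088,9X3",), ("OTHER1",)])

# Delete a comma variant only when a matching dot variant exists:
# stripping the comma from one and the dot from the other makes them equal.
con.execute("""
    DELETE FROM articles
    WHERE article LIKE '%,%'
      AND replace(article, ',', '') IN
          (SELECT replace(article, '.', '')
           FROM articles WHERE article NOT LIKE '%,%')
""")
rows = [r[0] for r in con.execute("SELECT article FROM articles ORDER BY article")]
print(rows)  # ['OTHER1', 'TUTO510088.9X3']
```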
|
```
delete from my_table
where my_column like '%,%';
```
|
SQL Server 2008 Express : find similar records in a table
|
[
"",
"sql",
"sql-server-2008-express",
""
] |
I am trying to assign a rank like the one in the example below (**rowFinal**), where the rows are partitioned by **GalleryId** and ordered by **StartDateKey** ascending. However, I need a 'New' **EventName** to restart the count from 1, as in the example below with StartDateKey 20131219.
```
GalleryId StartDateKey row EventName rowFinal
425934233 20130226 1 Renew 1
425934233 20130326 2 Renew 2
425934233 20130426 3 Renew 3
425934233 20130526 4 Renew 4
425934233 20130626 5 Renew 5
425934233 20131219 6 New 1
425934233 20140119 7 Renew 2
425934233 20140219 8 Renew 3
425934233 20140319 9 Renew 4
425934233 20140419 10 Renew 5
…
425931351 20121210 1 Renew 1
425931351 20130110 2 Renew 2
425931351 20130210 3 Renew 3
425931351 20130310 4 Renew 4
425931351 20130410 5 Renew 5
425931351 20130510 6 Renew 6
425931351 20130610 7 Renew 7
425931351 20130710 8 Renew 8
425931351 20130810 9 Renew 9
```
Parsing and updating each row (with while loops) proved impractical due to speed. Any suggestions would be profoundly appreciated.
|
You can do this with a trick. The idea is to group rows based on the number of "New" records before it. Then you can do this using a cumulative sum:
```
select t.*,
       row_number() over (partition by GalleryId, numNew order by startdatekey) as rowFinal
from (select t.*,
             sum(case when EventName = 'New' then 1 else 0 end) over
                 (partition by GalleryId order by startdatekey) as numNew
      from table t
     ) t;
```
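To verify the cumulative-sum trick — counting 'New' rows to form the groups — here it is run against a slice of the sample data, using SQLite (3.25+) window functions for illustration: each 'New' row bumps the group number, and `ROW_NUMBER()` restarts inside each group.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (GalleryId INT, StartDateKey INT, EventName TEXT)")
con.executemany("INSERT INTO events VALUES (425934233, ?, ?)",
                [(20130226, "Renew"), (20130326, "Renew"), (20131219, "New"),
                 (20140119, "Renew"), (20140219, "Renew")])

# Cumulative count of 'New' rows defines the group; numbering restarts per group.
rows = con.execute("""
    SELECT StartDateKey, EventName,
           ROW_NUMBER() OVER (PARTITION BY GalleryId, grp ORDER BY StartDateKey) AS rowFinal
    FROM (SELECT *,
                 SUM(CASE WHEN EventName = 'New' THEN 1 ELSE 0 END)
                     OVER (PARTITION BY GalleryId ORDER BY StartDateKey) AS grp
          FROM events)
    ORDER BY StartDateKey
""").fetchall()
print([r[2] for r in rows])  # [1, 2, 1, 2, 3]
```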
|
```
select *,
row_number() over(partition by galleryid,substring(startdatekey,7,2) order by startdatekey)
as rowFinal
from tablename;
```
It looks like you are partitioning on galleryid and also on the last 2 characters of startdatekey.
|
alternative to sql rank () over (partition by... order by..)
|
[
"",
"sql",
""
] |
I need a query in SQL to combine several rows in the same table.
```
L_Life_Amount M_LifeAmount L_Health_Amount M_Health_amount
100 200 300 400
200 300 400 600
300 400 500 700
```
and I need to produce the result set below
```
InsuranceType L_Amount M_Amount L_amount-M_amount
Life amount 600 900 -300
Health amount 1200 1700 -500
Total 1800 2600 -800
```
Here I need to `sum` the values of `L_Life_Amount` and the values of `M_LifeAmount`, showing them as `L_Amount` and `M_Amount` on a row labelled "Life amount".
In the same way I need to `sum` `L_Health_Amount` and `M_Health_Amount` and show them on a row labelled "Health amount".
Finally I take the difference between `L_Amount` and `M_Amount`, and add a total row.
|
Use `CROSS APPLY` with a table valued constructor to *unpivot* the data, then do the math.
Something like this
```
;with cte as
(
SELECT insurancetype,
l_amount=Sum(l_amount),
m_amount=Sum(m_amount),
[l_amount-m_amount]=Sum([l_amount-m_amount])
FROM yourtable
CROSS apply (VALUES (l_life_amount,m_lifeamount,
l_life_amount - m_lifeamount,'Life amount'),
(l_health_amount,m_health_amount,
l_health_amount - m_health_amount,'Health amount')) cs(l_amount, m_amount, [L_amount-M_amount],insurancetype)
GROUP BY insurancetype
)
select insurancetype,
l_amount,
m_amount,
[l_amount-m_amount]
From CTE
UNION ALL
Select 'Total',
sum(l_amount),
sum(m_amount),
sum([l_amount-m_amount])
From CTE
```
[**SQLFIDDLE DEMO**](http://sqlfiddle.com/#!3/77183/6)
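To sanity-check the arithmetic, here is the same unpivot-then-aggregate idea in SQLite (which lacks `CROSS APPLY`, so `UNION ALL` is used instead); the totals match the expected result set. Table and column names follow the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE amounts (L_Life_Amount INT, M_LifeAmount INT,
                                     L_Health_Amount INT, M_Health_Amount INT)""")
con.executemany("INSERT INTO amounts VALUES (?,?,?,?)",
                [(100, 200, 300, 400), (200, 300, 400, 600), (300, 400, 500, 700)])

# Unpivot the two amount pairs into rows, then aggregate per insurance type.
rows = con.execute("""
    SELECT InsuranceType, SUM(L), SUM(M), SUM(L) - SUM(M)
    FROM (SELECT 'Life amount' AS InsuranceType,
                 L_Life_Amount AS L, M_LifeAmount AS M FROM amounts
          UNION ALL
          SELECT 'Health amount', L_Health_Amount, M_Health_Amount FROM amounts)
    GROUP BY InsuranceType
    ORDER BY InsuranceType
""").fetchall()
total = ('Total', sum(r[1] for r in rows), sum(r[2] for r in rows),
         sum(r[3] for r in rows))
print(rows + [total])
```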
|
```
CREATE VIEW Life AS
SELECT 'Life amount' AS InsuranceType,
sum(L_Life_Amount) AS L_Amount,
sum(M_Life_Amount) AS M_Amount,
sum(L_Life_Amount) - sum(M_Life_Amount) AS L_amount-M_Amount
FROM <tablename>
CREATE VIEW Health AS
SELECT 'Health amount' AS InsuranceType,
sum(L_Health_Amount) AS L_Amount,
sum(M_Health_Amount) AS M_Amount,
sum(L_Health_Amount) - sum(M_Health_Amount) AS L_amount-M_Amount
FROM <tablename>
CREATE VIEW Total AS
SELECT 'Total' AS InsuranceType,
sum(L_Amount),
sum(M_Amount),
sum(L_Amount-M_Amount),
FROM Life UNION Health
SELECT * FROM Life UNION Health UNION Total
```
(Not sure if syntax is correct...)
|
How to combine multiple named Columns as a Row and sum of it
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
""
] |
I have a table with many records and I want to retrieve only the record that was created second to last.
For ex: I have a table `customer` in which `customerID` are random numbers.
**Now I want to select second last row.**
```
customerID customer_name cont_no
---------------------------------------
7 david sam 5284
1 shinthol 1
11 lava 12548
2 thomas 1
3 peeter 1
4 magge 1
5 revas 1
6 leela 123975
```
Output row :
```
customerID customer_name cont_no
5 revas 1
```
I don't want second highest...
I want second last row.
|
As you asked, I can give you an example.
Imagine that you have a bag full of apples. How can you take the second-last apple? How will you know which one is second last? You can't, until you sort them in some way.
---
Right now your data isn't sorted, so you can't achieve this as expected. You can do it as follows, but only once you have a sorting criterion such as an `Id`, a `date created` column, or similar.
```
SELECT TOP 1 *
FROM(
SELECT TOP 2 *
FROM Tbl
ORDER BY SortingCol DESC -- here you need to pass column which will provide expected sorting
) t
ORDER BY SortingCol
```
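The same pattern in SQLite syntax, where `LIMIT 1 OFFSET 1` plays the role of the nested `TOP 2` / `TOP 1`; the `created` column here is an assumed stand-in for whatever sorting column you have:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (customerID INT, customer_name TEXT, created INT)")
con.executemany("INSERT INTO customer VALUES (?,?,?)",
                [(7, "david sam", 1), (5, "revas", 2), (6, "leela", 3)])

# Sort descending by the insertion-order column, skip the last row,
# take the next one: the second-last inserted record.
row = con.execute("""
    SELECT customerID, customer_name
    FROM customer
    ORDER BY created DESC
    LIMIT 1 OFFSET 1
""").fetchone()
print(row)  # (5, 'revas')
```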
|
As you probably already know, you need a column to order by to achieve this task. The `OVER` clause can be used for this.
```
;WITH CTE as
(
SELECT
customerid, customer_name, cont_no,
row_number() over (order by newlymadesortcolumn desc) rn
FROM customer
)
SELECT customerid, customer_name, cont_no
FROM CTE
WHERE rn = 2
```
|
How can I retrieve second last row?
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have a simple query which groups and orders date data:
```
SELECT COUNT(*) AS Events,
datepart(YYYY, call_date) AS YearPart,
datepart(MM, call_date) As MonthPart,
datename(YYYY, call_date) AS YearName,
dateName(MM, call_date) As MonthName
FROM rm_report
GROUP BY
datename(year, call_date),
datename(month, call_date),
datepart(year, call_date),
datepart(month, call_date)
ORDER BY
YearPart,
MonthPart;
```
The issue is that the first two months of the first year contain no data. As a result the ordering commences at the third month and then cycles around to the first:
```
EVENTS | YEARPART | YEARNAME | MONTHPART | MONTHNAME
3 2012 2012 3 March
6 2012 2012 4 April
6 2012 2012 5 May
6 2012 2012 6 June
6 2012 2012 7 July
6 2012 2012 8 August
6 2012 2012 9 September
6 2012 2012 10 October
6 2012 2012 11 November
6 2012 2012 12 December
6 2012 2013 1 January
6 2012 2013 2 February
```
How can I ensure that if there are no events, a 0 is allocated so that the list can be correctly ordered - that is the first entry in the `MonthPart` column is "1" and the First entry in the `YearPart` column is 2012?
We are using MS SQL.
|
You need to generate all combinations of `month`s and `year`s first. And then do a `LEFT JOIN` on `rm_report`:
```
WITH CteTally(N) AS(
SELECT N FROM(VALUES
(1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12)
)t(N)
),
CteDates(YearPart, MonthPart, YearName, MonthName) AS(
SELECT
DATEPART(YEAR, DATEADD(MONTH, t.N - 1, DATEADD(YEAR, a.Y - 1900, 0))),
DATEPART(MONTH, DATEADD(MONTH, t.N - 1, DATEADD(YEAR, a.Y - 1900, 0))),
DATENAME(YEAR, DATEADD(MONTH, t.N - 1, DATEADD(YEAR, a.Y - 1900, 0))),
DATENAME(MONTH, DATEADD(MONTH, t.N - 1, DATEADD(YEAR, a.Y - 1900, 0)))
FROM(
SELECT DISTINCT DATEPART(YEAR, call_date) AS Y
FROM rm_report
)a
CROSS JOIN CteTally t
)
SELECT
COUNT(r.call_date) AS Events,
d.YearPart,
d.MonthPart,
d.YearName,
d.MonthName
FROM CteDates d
LEFT JOIN rm_report r
ON d.YearPart = DATEPART(YEAR, r.call_date)
AND d.YearName= DATENAME(YEAR, r.call_date)
AND d.MonthPart = DATEPART(MONTH, r.call_date)
AND d.MonthName = DATENAME(MONTH, r.call_date)
GROUP BY
d.YearPart, d.YearName, d.MonthPart, d.MonthName
ORDER BY
d.YearName, d.MonthPart
```
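The zero-fill principle can be checked on a toy scale in SQLite: build the list of expected periods first, `LEFT JOIN` the data onto it, and `COUNT` the joined column (not `COUNT(*)`) so missing periods come out as 0. Month numbers and data below are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rm_report (call_month INT)")
con.executemany("INSERT INTO rm_report VALUES (?)", [(3,), (3,), (4,)])

# COUNT(r.call_month) -- not COUNT(*) -- so unmatched months count as 0.
rows = con.execute("""
    WITH m(MonthPart) AS (VALUES (1), (2), (3), (4))
    SELECT m.MonthPart, COUNT(r.call_month) AS Events
    FROM m
    LEFT JOIN rm_report r ON r.call_month = m.MonthPart
    GROUP BY m.MonthPart
    ORDER BY m.MonthPart
""").fetchall()
print(rows)  # [(1, 0), (2, 0), (3, 2), (4, 1)]
```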
|
Create a temp table/CTE with one column MONTHPART which contains numbers from 1-12. Right Join your result with this table and replace NULLS with zero.
Example:
```
CREATE TABLE #temp
(
[monthpart] INT
)
INSERT INTO #Temp (MONTHPART) Values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)
--SELECT * FROM #Temp
SELECT [monthpart],
ISNULL([deptid], 0) AS Department,
ISNULL(SUM([salart]), 0) AS Salary
FROM [DeeTest].[dbo].[employeesalary]
RIGHT JOIN #temp
ON empid = monthpart
GROUP BY deptid,
monthpart
DROP TABLE #temp
```
|
Ordering dt by year and date when some months are missing
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm running a pretty straightforward query using the `database/sql` and `lib/pq` (postgres) packages and I want to toss the results of some of the fields into a slice, but I need to know how big to make the slice.
The only solution I can find is to do another query that is just `SELECT COUNT(*) FROM tableName;`.
Is there a way to both get the result of the query AND the count of returned rows in one query?
|
Conceptually, the problem is that the database cursor may not be enumerated to the end, so the database does not really know how many records you will get before you actually read all of them. The only way to count (in the general case) is to go through all the records in the resultset.
But practically, you can enforce it to do so by using subqueries like
```
select *, (select count(*) from table) from table
```
and just ignore the second column for every record after the first. But it is very crude and I do not recommend doing so.
|
Not sure if this is what you are asking for but you can call the @@Rowcount function to return the count of the previous select statement that has been executed.
```
SELECT mytable.mycol FROM mytable WHERE mytable.foo = 'bar'
SELECT @@Rowcount
```
If you want the row count included in your result set you can use the OVER clause ([MSDN](http://msdn.microsoft.com/en-us/library/ms189461.aspx))
```
SELECT mytable.mycol, count(*) OVER(PARTITION BY mytable.foo) AS 'Count' FROM mytable WHERE mytable.foo = 'bar'
```
You could also perhaps just separate the two SQL statements with a `;`. This would return a result set for both executed statements.
|
Get count and result from SQL query in Go
|
[
"",
"sql",
"go",
""
] |
There are two stored procedures which have the same name and the same logic, but live in different databases. Only some of the object names differ (for example the synonym name, table name, or schema); the rest of the code is the same.
Is there any way to compare code between above two stored procedures?
|
For many types of comparison (folder, text, etc.) you can use [Beyond Compare](http://www.scootersoftware.com/) (it offers a 30-day trial; after it expires, you can simply reinstall it).
[](https://i.stack.imgur.com/Ed6g8.png)
If you want something free, you can use a compare plugin for `Notepad++`, though it is not as fancy as the first tool:
[](https://i.stack.imgur.com/M8ERA.png)
The `Plugin Manager` can be opened from `Plugins -> Plugin Manager -> Show Plugin Manager`.
|
Personally I prefer to use an SSDT project (SQL Server Data Tools).
It allows you to store the entire database schema in a Visual Studio project and in git/svn.
It is capable of comparing two databases (in full) or a database with the SSDT project schema. It will show you the differences and allow you to publish those differences.
Configurable and powerful.
I highly recommend it.
|
Compare code between two stored procedures in two different databases in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have a field named Path and it looks like this:
```
/{e8cfdcba-9572-4c64-828f-dea54d8a00b7}/sites/BI_Center/euroreporting/Reports/BITraining/Elena/GroupingEx.rdl
```
I need a parameter from which I can choose a folder name. Something like this:
```
/sites/BI_Center/euroreporting/Reports/BITraining/Elena
```
What I have done so far is remove the first part of the path. This is the code:
```
SELECT replace(reverse(substring(reverse(Path), 1, ISNULL(NullIF(charindex('}',reverse(Path)),0),len(Path))) ),'}','') AS Path2 from Catalog
```
Now, my path looks like this: `/sites/BI_Center/euroreporting/Reports/BITraining/Elena/GroupingEx.rdl`
How can I exclude the report's name (for example GroupingEx.rdl)? I tried the MID function, but it doesn't work because the report name's length is variable.
Thank you in advance.
|
This is one of the methods
```
declare @s varchar(200)
set @s='/sites/BI_Center/euroreporting/Reports/BITraining/Elena/GroupingEx.rdl'
select reverse(replace(reverse(@s),substring(reverse(@s),1,charindex('/',reverse(@s))),''))
```
EDIT:
This is much simpler
```
declare @s varchar(200)
set @s='/sites/BI_Center/euroreporting/Reports/BITraining/Elena/GroupingEx.rdl'
select substring(@s,1,len(@s)-charindex('/',reverse(@s)))
```
|
I suggest this function:
```
SUBSTRING([path], CHARINDEX('/', [path], 2),
LEN([path]) - CHARINDEX('/', [path], 2) - CHARINDEX('/', REVERSE([path]), 1) + 1)
```
for this: `/sites/BI_Center/euroreporting/Reports/BITraining/Elena`
|
Delete the last part of a string SQL
|
[
"",
"sql",
"string",
"function",
""
] |
I have a scores table like this:
```
code | week | points
1001 | 1 | 2
1001 | 1 | 1
1001 | 3 | 6
2001 | 1 | 0
2001 | 4 | 5
2001 | 4 | 2
```
What I'd like is a result like this:
```
code | 1 | 3 | 4
1001 | 3 | 6 |
2001 | 0 | | 7
```
I've written a simple group by which I could probably use to write some code around but I'd rather do the work in the SQL. <http://sqlfiddle.com/#!15/8ff5d>
```
select code, week, sum(points) as total
from scores
group by code, week
order by code, week;
```
And the result is:
```
code | week | total
1001 | 1 | 3
1001 | 3 | 6
2001 | 1 | 0
2001 | 4 | 7
```
I'm sure it's really simple but I'm stumped. Thanks in advance for any help.
|
You're looking for PIVOT Function:
similar question: [Create a pivot table with PostgreSQL](https://stackoverflow.com/questions/20618323/create-a-pivot-table-with-postgres-sql)
When the number of columns is fixed, it's more or less simple.
If the number is dynamic, then search for a dynamic pivot.
|
What you're trying to do is actually not simple at all, I don't think! It looks like you want a set of results that has a variable number of columns, which is not really how SQL works, most of the time.
Think of it this way: if your table was much larger, and had many different weeks in it, your result would look something like this:
```
code | 1 | 3 | 4 | 5 | 7 | 8 | 9 | ... | ...
1001 | 3 | 6 | | 1 | 3 | etc. ...
2001 | 0 | | 7 | 2 | | ...
```
Some varieties of SQL let you pivot the table, which switches the orientation of rows and columns, which is basically what you want to do. However, I don't think PostgreSQL has that.
What you can do, though, is turn one set of results into an array, so you will get one column that has an array in it with an arbitrary number of values.
It looks like something like this might be what you want:
```
SELECT code, array_agg(ROW(week, total)) AS week_array
FROM (select code, week, sum(points) as total
from scores
group by code, week
ORDER BY week) AS row
GROUP BY code
ORDER BY code;
```
The array will have tuples like `{("1", 0), ("3", 8)}` which means 0 points in week 1 and 8 points in week 3 for that code. This should work no matter how your table grows.
|
Grouping by as a single row with results as a column?
|
[
"",
"sql",
"postgresql",
"group-by",
"sum",
"pivot",
""
] |
I have two tables named "user" and "logs".
The user table has a column named "userID", which is also the PK.
The logs table has two columns named "log_detail" and "userID".
What I want is: "for each user.userID value in the user table, check whether a log_detail value contains it; if it does, update that row's logs.userID with the userID value".
I tried some queries but I really don't know how to do it.
```
UPDATE logs
SET logs.userID = user.userID
SELECT userID
FROM logs
WHERE logs.userID LIKE concat("%",user.userID,"%");
```
|
[SQL Fiddle](http://sqlfiddle.com/#!9/e367fa/1)
**Schema details**
```
create table user
(userid varchar(30));
create table logs
(log_detail varchar(100),
userid varchar(30));
insert into user values('user1');
insert into user values('user2');
insert into user values('user3');
insert into logs values('update by user1','user3');
insert into logs values('inserted by user2','user2');
insert into logs values('inserted by user3',null);
```
**Table data before update**
```
| log_detail | userid |
|-------------------|--------|
| update by user1 | user3 |
| inserted by user2 | user2 |
| inserted by user3 | (null) |
```
**Update Query**
```
update logs join user
set logs.userid=user.userid
where logs.log_detail LIKE concat("%",user.userID,"%");
```
**Table data after update**
```
| log_detail | userid |
|-------------------|--------|
| update by user1 | user1 |
| inserted by user2 | user2 |
| inserted by user3 | user3 |
```
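As an aside, engines without multi-table `UPDATE` (SQLite, for example) can achieve the same with a correlated subquery. A runnable sketch against the same sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (userid TEXT)")
con.execute("CREATE TABLE logs (log_detail TEXT, userid TEXT)")
con.executemany("INSERT INTO user VALUES (?)", [("user1",), ("user2",), ("user3",)])
con.executemany("INSERT INTO logs VALUES (?, ?)",
                [("update by user1", "user3"),
                 ("inserted by user2", "user2"),
                 ("inserted by user3", None)])

# Correlated subquery: for each log row, look up the userid its detail mentions.
con.execute("""
    UPDATE logs
    SET userid = (SELECT u.userid FROM user u
                  WHERE logs.log_detail LIKE '%' || u.userid || '%')
""")
rows = con.execute("SELECT log_detail, userid FROM logs ORDER BY log_detail").fetchall()
print(rows)
```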
|
```
update logs
inner join `user` on
logs.userID = user.userID
set logs.userID = user.userID
WHERE logs.log_detail LIKE concat("%",user.userID,"%");
```
Also take into account that user is a reserved keyword in mysql
|
Sql update statement with variable
|
[
"",
"mysql",
"sql",
""
] |
I have three tables. We'll call them Data, Period, and Type. Data contains a record for each record in Type and each record in Period, so that the length of Data is the length of Period times the length of Type.
```
Type:
TypeID(primary key int)
TypeName(vchar)
Period:
PeriodID(primary key int)
PeriodName(varchar)
Data:
DataID(primary key int)
PeriodID(relational int)
TypeID(relational int)
Value(int)
```
From time to time new values will be added to Period, but Type should remain the same for the foreseeable future. What I want to do is insert new records into Data for each new Period (PeriodIDs that Data does not yet contain), one for each TypeID. So if there are 5 new Period records and 6 Type records, Data should have 30 new entries. I'm unsure how to do this concisely with SQL. Performance is not an issue since both Period and Type are reasonably small tables.
|
A Cartesian Product and a Subquery should do what you want:
```
insert into Data (PeriodID, TypeID)
select PeriodID, TypeID
from Period, Type
where PeriodID not in (select PeriodID from Data)
```
Note that your DataID column is assumed to be automatically generated, and the Value column will be null (so needs to be nullable). You may need to change the SQL if this isn't the case.
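Here is the same pattern run end-to-end in SQLite for a quick check (the `CROSS JOIN` is spelled out explicitly and `Value` is left NULL; sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Period (PeriodID INTEGER PRIMARY KEY, PeriodName TEXT);
    CREATE TABLE Type   (TypeID   INTEGER PRIMARY KEY, TypeName TEXT);
    CREATE TABLE Data   (DataID INTEGER PRIMARY KEY, PeriodID INT, TypeID INT, Value INT);
    INSERT INTO Period VALUES (1,'P1'), (2,'P2'), (3,'P3');
    INSERT INTO Type   VALUES (1,'T1'), (2,'T2');
    -- Period 1 already has its Data rows
    INSERT INTO Data (PeriodID, TypeID) VALUES (1,1), (1,2);
""")

# Cartesian product of the two lookup tables, restricted to new periods.
con.execute("""
    INSERT INTO Data (PeriodID, TypeID)
    SELECT p.PeriodID, t.TypeID
    FROM Period p CROSS JOIN Type t
    WHERE p.PeriodID NOT IN (SELECT PeriodID FROM Data)
""")
added = con.execute("SELECT COUNT(*) FROM Data WHERE PeriodID IN (2, 3)").fetchone()[0]
print(added)  # 4 -> 2 new periods x 2 types
```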
|
Sounds like you are looking to get a cross product
```
INSERT INTO Data
SELECT P.PeriodID, T.TypeID, 'some value' AS Value
FROM Period P
CROSS JOIN Type T
```
A quick example of results
```
DECLARE @Period TABLE (
PeriodID int
)
DECLARE @Type TABLE (
TypeID int
)
insert into @Period
VALUES
(1),(2),(3)
insert into @Type
VALUES
(4),(5),(6)
SELECT P.PeriodID, T.TypeID, 'some value' AS Value
FROM @Period P
CROSS JOIN @Type T
```
Results
```
1 4 some value
2 4 some value
3 4 some value
1 5 some value
2 5 some value
3 5 some value
1 6 some value
2 6 some value
3 6 some value
```
|
How do I add NEW values in one table along with the values in another table to a third table?
|
[
"",
"sql",
"sql-server",
"sql-insert",
"bulkinsert",
""
] |
I'm looking in the SQL DB and I can see something that resembles orders inside wp_posts. However, I would expect them to be inside the tables beginning with wp_woocommerce.
Can anyone shed some light on this phenomenon?
Cheers
|
In woocommerce orders are modelled as a [custom post type](https://codex.wordpress.org/Post_Types#Custom_Post_Types) so they are stored in `wp_posts` as you found. See [WooCommerce taxonomies and post types](https://woocommerce.com/document/installed-taxonomies-post-types/) for a list of the custom post types that woocommerce uses. Orders are stored as the type `shop_order`
Separate items within an Order are stored as separate records in the custom table [woocommerce\_order\_items](https://github.com/woocommerce/woocommerce/wiki/Database-Description#table-woocommerce_order_items)
|
**Updated plug-in version information 8/10/2018**
Orders are a custom post type. From WooCommerce [Post Types](https://woocommerce.com/document/installed-taxonomies-post-types/):
> * Shop Order (shop\_order)
Orders live in the `wp_posts` table (`post_type = 'shop_order'`). More data is available by looking up the order's post\_id in the `wp_postmeta` table.
In addition, from the WooCommerce GitHub Wiki [Database Description](https://github.com/woocommerce/woocommerce/wiki/Database-Description)
> * **woocommerce\_order\_items** – Stores line items which are associated with orders.
> * **woocommerce\_order\_itemmeta** – Stores meta data about order line items.
Currently the WordPress WooCommerce plug-in version is 3.4.x
|
where does woocommerce store orders?
|
[
"",
"sql",
"database",
"wordpress",
"woocommerce",
""
] |
```
ITEM LOCATION QTY WEEK
A X 30 1
A X 35 2
A X 40 3
A X 0 4
A X 10 5
A X 19 6
```
I need to create a new column with the computation like..
```
ITEM LOCATION QTY WEEK NEW_COLUMN
A X 30 1 AVG(WEEK2(qty)+WEEK3(qty)+WEEK4(qty)+WEEK5(qty))
A X 35 2 AVG(WEEK3(qty)+WEEK4(qty)+WEEK5(qty)+WEEK6(qty))
```
similarly for all the rows....
The 4-week averaging window is fixed; it won't change.
The first week will have the average of next 4 weeks i.e., 2,3,4 and 5 `avg(35+40+0+10)`
The 2nd week will have the average of next 4 weeks i.e., 3,4,5 and 6
`avg(40+0+10+19)`.
I tried to bucket them based on the week number, say
`Week 1-4 as 1`
`Week 5-8 as 2`,
and computed the average per bucket, but I get the same average for every row in a bucket, i.e. the same value for line items 1, 2, 3 and 4.
|
Joining to the same table with a clause restricting the `Weeks` to be within your range should work. You'll have to decide what the right answer is for the last weeks (which won't have 4 weeks afterwards) and either `COALESCE` the right answer or `INNER JOIN` them out.
```
SELECT T.Item, T.Location, T.Week, AVG(N.Qty) as New_Column
FROM Table T
LEFT OUTER JOIN Table N ON
T.Item = N.Item
AND T.Location = N.Location
AND N.Week BETWEEN (T.Week + 1) AND (T.Week + 4)
GROUP BY T.Item, T.Location, T.Week
```
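Running the self-join against the question's numbers (in SQLite here) confirms the expected values: week 1 averages weeks 2–5 (21.25) and week 2 averages weeks 3–6 (17.25); for the last weeks, `AVG` simply covers whatever following rows exist:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Item TEXT, Location TEXT, Qty INT, Week INT)")
con.executemany("INSERT INTO t VALUES ('A','X',?,?)",
                [(30, 1), (35, 2), (40, 3), (0, 4), (10, 5), (19, 6)])

# Each row joins to the rows 1..4 weeks after it; AVG runs per group.
rows = con.execute("""
    SELECT T.Week, AVG(N.Qty) AS New_Column
    FROM t T
    LEFT JOIN t N ON T.Item = N.Item AND T.Location = N.Location
                 AND N.Week BETWEEN T.Week + 1 AND T.Week + 4
    GROUP BY T.Item, T.Location, T.Week
    ORDER BY T.Week
""").fetchall()
print(rows[:2])  # [(1, 21.25), (2, 17.25)]
```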
|
Some of the other answers work fine, but with 2012 it should be really easy:
```
SELECT *,New_Column = (SUM(Qty) OVER(ORDER BY Week ROWS BETWEEN 1 FOLLOWING AND 4 FOLLOWING)*1.0)/4
FROM Table1
```
Demo: [SQL Fiddle](http://www.sqlfiddle.com/#!6/f1463/5/0)
If it's by `item` and `location` then just add `PARTITION BY`:
```
SELECT *,New_Column = (SUM(Qty) OVER(PARTITION BY Item, Location ORDER BY Week ROWS BETWEEN 1 FOLLOWING AND 4 FOLLOWING)*1.0)/4
FROM Table1
```
To filter out records that don't have 4 subsequent records, you could use `LEAD()` for filtering:
```
;with cte AS ( SELECT *,New_Column = (SUM(Qty) OVER(PARTITION BY Item, Location ORDER BY Week ROWS BETWEEN 1 FOLLOWING AND 4 FOLLOWING)*1.0)/4
             ,Lead4Col = LEAD(week,4) OVER(PARTITION BY Item,Location ORDER BY Week)
FROM Table1
)
SELECT *
FROM cte
WHERE Lead4Col IS NOT NULL
```
You could also use `COUNT(Qty) OVER(PARTITION BY Item, Location ORDER BY Week ROWS BETWEEN 1 FOLLOWING AND 4 FOLLOWING)` instead of `LEAD()` to filter down to rows where 4 subsequent weeks exist.
Edit: I think you actually want to exclude this week from the calculation, so adjusted slightly.
|
Value for a column is the sum of the next 4 values - SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I can get any SELECT statement that has '\*' plus some other facet like a CASE statement or whatever in my SELECT clause to work if I always put '\*' first. So, this works:
```
SELECT *, (@row_number := @row_number + 1) AS num
FROM tblContracts, (SELECT @row_number := 0) as T;
```
But this does not:
```
SELECT (@row_number := @row_number + 1) AS num, *
FROM tblContracts, (SELECT @row_number := 0) as T;
```
Am I doing something obviously wrong here, or is this some quirk of MySQL? This is really my first exposure to MySQL.
|
The `*` needs qualifying with the table name:
```
SELECT (@row_number := @row_number + 1) AS num, tblContracts.*
FROM tblContracts, (SELECT @row_number := 0) as T;
```
|
It is always good practice to alias your table name. Also, any time you have a `select *` alongside other explicitly selected columns, you need to specify which table the `*` comes from:
```
SELECT (@row_number := @row_number + 1) AS num, tc.*
FROM tblContracts tc, (SELECT @row_number := 0) as T;
```
|
Can MySQL list a column before * in a SELECT?
|
[
"",
"mysql",
"sql",
""
] |
I have a table in SQL Server 2012 called AccntTemp that has a bunch of columns and an ID column. The person who has to enter data into this table should not be able to see the ID column and a few other columns, but needs to be able to enter data.
So, I need to create a view that allows the entry of new records, but only shows the fields that I want her to see. I know how to create the view so it only shows those fields, but I don't know how to make the view allow the entry of new records.
What do I have to change about my query so that this will work?
```
CREATE VIEW dbo.DataEntry
AS
SELECT code, invno, ven, invdate, amon, accno, amnt, jno, saccno, ckno
FROM accnttemp
```
This just created the view, but when I go to the view I can't modify any records or add new ones. How do I make this happen?
|
If your view is not already updatable, for some reason (most simple views should be) then you'll have to implement trigger(s) which manipulate the base tables manually.
E.g.
```
CREATE TRIGGER T_DataEntry_I on dbo.DataEntry
instead of insert
as
insert into accnttemp(code, invno, ven, invdate, amon, accno, amnt, jno, saccno, ckno)
SELECT code, invno, ven, invdate, amon, accno, amnt, jno, saccno, ckno
FROM inserted
```
With whatever changes are required such that the insert succeeds.
One common reason for a view to not allow `INSERT`s is if there are additional columns in the base table for which neither a `NULL` nor a default value will work. In such a case, you'd have to add those column(s) to the above trigger code and pick or compute appropriate values for them.
---
In case it's not clear, I disagree with TTeeple's answer. The design of SQL is such that, so far as possible, Views and Tables should be indistinguishable. You should be able to replace a table with a view (with the same column definitions, and saving data in appropriate other tables) and not have to make any changes to any client applications that use it.
---
It was, in fact, one of the original [Codd's Rules for Relational Databases](https://en.wikipedia.org/wiki/Codd%27s_12_rules) that views *should* be updatable:
> **Rule 6**: The view updating rule:
>
> All views that are theoretically updatable must be updatable by the system.
Unfortunately, it has since been found that for some views, although a human can implement an update for them, systems are unable to find ways to do so.
|
There is something in the documentation. Check this [link](https://technet.microsoft.com/en-us/library/ms180800%28v=sql.110%29.aspx)
> Updatable Views
>
> You can modify the data of an underlying base table through a view, as
> long as the following conditions are true:
>
> Any modifications, including UPDATE, INSERT, and DELETE statements,
> must reference columns from only one base table.
>
> The columns being modified in the view must directly reference the
> underlying data in the table columns. The columns cannot be derived in
> any other way, such as through the following:
>
> An aggregate function: AVG, COUNT, SUM, MIN, MAX, GROUPING, STDEV,
> STDEVP, VAR, and VARP.
>
> A computation. The column cannot be computed from an expression that
> uses other columns. Columns that are formed by using the set operators
> UNION, UNION ALL, CROSSJOIN, EXCEPT, and INTERSECT amount to a
> computation and are also not updatable.
>
> The columns being modified are not affected by GROUP BY, HAVING, or
> DISTINCT clauses.
>
> TOP is not used anywhere in the select\_statement of the view
> together with the WITH CHECK OPTION clause.
|
creating a view in sql-server that allows the entry of new records
|
[
"",
"sql",
"sql-server",
"view",
"insert",
"sql-view",
""
] |
I have a `products` table with these fields: `_id`, `product_name`, `priority` and `shelf_id`.
And I have a `shelves` table with these fields: `_id` and `shelf_name`.
Currently, I have this SQL which returns a resultset showing the name of each shelf along with the number of products within each shelf:
```
SELECT
shelves._id AS _id,
shelves.shelf_name AS shelf_name,
COUNT(products._id) AS total_num_products_in_shelf
FROM
shelves
INNER JOIN
products ON shelves._id = products.shelf_id
GROUP BY
shelves._id
HAVING
COUNT(products._id) > 0
ORDER BY
shelf_name ASC
```
What I am trying to achieve is the creation of an additional column in the resultset that will show the number of products in each shelf that have a `priority` value that is greater than zero. Something along the lines of...
```
SELECT
shelves._id AS _id,
shelves.shelf_name AS shelf_name,
COUNT(products._id) AS total_num_products_in_shelf,
COUNT(products._id WHERE products.priority > 0) AS num_products_in_shelf_with_priority
...
```
...but valid, of course.
I have searched for **sqlite subqueries** and found [this tutorial](https://www.tutorialspoint.com/sqlite/sqlite_sub_queries.htm), but it doesn't seem to be what I'm after.
Could someone help me out with a push in the right direction or, even better, modify my SQL query so that it will return valid `num_products_in_shelf_with_priority` data.
|
```
SELECT
shelves._id AS _id,
shelves.shelf_name AS shelf_name,
COUNT(products._id) AS total_num_products_in_shelf,
SUM(
CASE WHEN products.priority > 0 THEN 1 ELSE 0 END
) AS num_products_in_shelf_with_priority
FROM
shelves
INNER JOIN products ON shelves._id = products.shelf_id
GROUP BY
shelves._id,
shelves.shelf_name
HAVING
COUNT(products._id) > 0
ORDER BY
shelf_name ASC
```
You can include a `case` condition and then `sum` it. Also, included is the `shelf_name` in the `group by` clause.
|
In SQLite [v3.32.0 and above](https://www.sqlite.org/changes.html#version_3_32_0), you can use [`IIF()`](https://www.sqlite.org/lang_corefunc.html#iif) function to count only products with priority larger than `0`:
```
SUM(IIF(products.priority > 0, 1, 0))
```
Another alternative using `COUNT()`:
```
COUNT(IIF(products.priority > 0, 1, NULL))
```
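Both variants are easy to verify with Python's built-in sqlite3 and some made-up sample rows; the sketch below uses the portable `SUM(CASE ...)` form, which also works on SQLite builds older than 3.32:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shelves (_id INTEGER, shelf_name TEXT);
    CREATE TABLE products (_id INTEGER, shelf_id INTEGER, priority INTEGER);
    INSERT INTO shelves VALUES (1, 'A'), (2, 'B');
    INSERT INTO products VALUES (10, 1, 0), (11, 1, 2), (12, 1, 5), (13, 2, 0);
""")
rows = conn.execute("""
    SELECT s.shelf_name,
           COUNT(p._id) AS total_num_products_in_shelf,
           SUM(CASE WHEN p.priority > 0 THEN 1 ELSE 0 END) AS num_with_priority
    FROM shelves s
    INNER JOIN products p ON s._id = p.shelf_id
    GROUP BY s._id, s.shelf_name
    ORDER BY s.shelf_name
""").fetchall()
print(rows)  # → [('A', 3, 2), ('B', 1, 0)]
```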
|
SQLite - How to perform COUNT() with a WHERE condition?
|
[
"",
"sql",
"sqlite",
"subquery",
""
] |
I have a table that has over 100,000,000 rows and I have a query that looks like this:
```
SELECT
COUNT(IF(created_at >= '2015-07-01 00:00:00', 1, null)) AS 'monthly',
COUNT(IF(created_at >= '2015-07-26 00:00:00', 1, null)) AS 'weekly',
COUNT(IF(created_at >= '2015-06-30 07:57:56', 1, null)) AS '30day',
COUNT(IF(created_at >= '2015-07-29 17:03:44', 1, null)) AS 'recent'
FROM
items
WHERE
user_id = 123456;
```
The table looks like so:
```
CREATE TABLE `items` (
`user_id` int(11) NOT NULL,
`item_id` int(11) NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`user_id`,`item_id`),
KEY `user_id` (`user_id`,`created_at`),
KEY `created_at` (`created_at`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
```
The explain looks fairly harmless, minus the massive row counts:
```
1 SIMPLE items ref PRIMARY,user_id user_id 4 const 559864 Using index
```
I use the query to gather counts for a specific user for 4 segments of time.
Is there a smarter/faster way to obtain the same data or is my only option to tally these as new rows are put into this table?
|
I would add an index on created\_at field:
```
ALTER TABLE items ADD INDEX idx_created_at (created_at)
```
or (as Thomas suggested) since you are also filtering for user\_id a composite index on created\_at and user\_id:
```
ALTER TABLE items ADD INDEX idx_user_created_at (user_id, created_at)
```
and then I would write your query as:
```
SELECT 'monthly' as description, COUNT(*) AS cnt FROM items
WHERE created_at >= '2015-07-01 00:00:00' AND user_id = 123456
UNION ALL
SELECT 'weekly' as description, COUNT(*) AS cnt FROM items
WHERE created_at >= '2015-07-26 00:00:00' AND user_id = 123456
UNION ALL
SELECT '30day' as description, COUNT(*) AS cnt FROM items
WHERE created_at >= '2015-06-30 07:57:56' AND user_id = 123456
UNION ALL
SELECT 'recent' as description, COUNT(*) AS cnt FROM items
WHERE created_at >= '2015-07-29 17:03:44' AND user_id = 123456
```
yes, the output is a little different. Or you can use inline queries:
```
SELECT
(SELECT COUNT(*) FROM items WHERE created_at>=... AND user_id=...) AS 'monthly',
(SELECT COUNT(*) FROM items WHERE created_at>=... AND user_id=...) AS 'weekly',
...
```
and if you want an average, you could use a subquery:
```
SELECT
monthly,
weekly,
monthly / total,
weekly / total
FROM (
SELECT
(SELECT COUNT(*) FROM items WHERE created_at>=... AND user_id=...) AS 'monthly',
(SELECT COUNT(*) FROM items WHERE created_at>=... AND user_id=...) AS 'weekly',
...,
(SELECT COUNT(*) FROM items WHERE user_id=...) AS total
) s
```
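The inline-subquery variant is easy to sanity-check with Python's built-in sqlite3 (tiny made-up data; ISO-formatted date strings compare correctly as plain strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (user_id INTEGER, item_id INTEGER, created_at TEXT);
    INSERT INTO items VALUES
        (1, 1, '2015-06-15 00:00:00'),
        (1, 2, '2015-07-02 00:00:00'),
        (1, 3, '2015-07-27 00:00:00');
""")
# Each segment is its own range scan on (user_id, created_at).
row = conn.execute("""
    SELECT
      (SELECT COUNT(*) FROM items WHERE created_at >= '2015-07-01 00:00:00' AND user_id = 1) AS monthly,
      (SELECT COUNT(*) FROM items WHERE created_at >= '2015-07-26 00:00:00' AND user_id = 1) AS weekly,
      (SELECT COUNT(*) FROM items WHERE user_id = 1) AS total
""").fetchone()
print(row)  # → (2, 1, 3)
```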
|
If you have an index on created\_at, I would also put in the where clause created\_at >= '2015-06-30 07:57:56' which is the lowest date possible in your segment.
Also with the same index it might work splitting in 4 queries:
```
select count(*) AS '30day'
FROM
items
WHERE
user_id = 123456
and created_at >= '2015-06-30 07:57:56'
union ....
```
And so on
|
Possible to improve the performance of this SQL query?
|
[
"",
"mysql",
"sql",
"performance",
""
] |
I have two tables.
Table 1 is a single COLUMN of integers.
Table 2 has three COLUMNS : start\_integer, end\_integer, data
The simple query is to join the column of integers with data where
```
integer >= start_integer AND integer <= end_integer
```
In many SQL implementations this can be accomplished with a left conditional JOIN ... ON BETWEEN
```
SELECT tbl1.integer, tbl2.data FROM tbl1
LEFT JOIN tbl2 ON tbl1.integer BETWEEN tbl2.start_integer AND
tbl2.end_integer;
```
But it seems BigQuery's JOIN ON supports only an = condition.
This could be accomplished with a cross join, but BigQuery complains that my tables are too big. CROSS JOIN EACH is invalid.
How can I accomplish this join task within the limitations of BigQuery's SQL?
Below is my BigQuery SQL:
```
SELECT tbl1.integer, tbl2.data
FROM bq:data.tbl1
CROSS JOIN bq:data.tbl2
WHERE tbl1.integer BETWEEN tbl2.start_integer AND tbl2.end_integer;
```
Which returns the error:
> Error: 4.1 - 4.132: The JOIN operator's right-side table must be a small table. Switch the tables if the left-side table is smaller, or use JOIN EACH if both tables are larger than the maximum described at <http://goo.gl/wXqgHs>.
|
Good news (2016)! BigQuery does support inequality joins now - make sure to uncheck the "use legacy SQL option".
Example query:
```
SELECT *
FROM (
SELECT 1 x
) a JOIN (
SELECT 2 y
) b
ON a.x<b.y
```
With legacy SQL:
```
Error: ON clause must be AND of = comparisons of one field name from each table, ...
```
With standard SQL:
```
1 2
```
[](https://i.stack.imgur.com/6ZAg6.png)
* Docs: <https://cloud.google.com/bigquery/sql-reference/enabling-standard-sql>
* Discussion: <https://code.google.com/p/google-bigquery/issues/detail?id=448#c31>
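As a generic illustration of the inequality (`BETWEEN`) join the question is after, here is a runnable sketch with Python's built-in sqlite3 and made-up tables (not BigQuery itself, but the same join semantics):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl1 (i INTEGER);
    CREATE TABLE tbl2 (start_i INTEGER, end_i INTEGER, data TEXT);
    INSERT INTO tbl1 VALUES (5), (15), (99);
    INSERT INTO tbl2 VALUES (1, 10, 'low'), (11, 20, 'mid');
""")
# LEFT JOIN on a range condition: unmatched integers still appear, with NULL data.
rows = conn.execute("""
    SELECT t1.i, t2.data
    FROM tbl1 t1
    LEFT JOIN tbl2 t2 ON t1.i BETWEEN t2.start_i AND t2.end_i
    ORDER BY t1.i
""").fetchall()
print(rows)  # → [(5, 'low'), (15, 'mid'), (99, None)]
```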
|
In legacy SQL, BigQuery's CROSS JOIN requires the right-side table to be small, and there is no CROSS JOIN EACH for large right-side tables.
|
conditional join in bigquery
|
[
"",
"sql",
"join",
"google-bigquery",
""
] |
In the following table, records are stored of volume license agreements
purchased by customers. The `TotalLicenses` column stores the maximum number of
users that are allowed to be activated with that license.
**VolumeLicenses**
```
+-----------+---------------+
| LicenseId | TotalLicenses |
+-----------+---------------+
| 901 | 3 |
| 902 | 100 |
| 903 | 150 |
+-----------+---------------+
```
Another table stores the users who have activated with each license:
**VolumeLicenseUsers**
```
+-----------+--------+
| LicenseId | UserId |
+-----------+--------+
| 901 | fred |
| 901 | bob |
| 901 | donna |
| 902 | matt |
| 902 | sarah |
+-----------+--------+
```
In the example above, I should not be able to INSERT another record with a
`LicenseId` of 901, because the maximum number (3) has already been reached. I
should, however, be able to INSERT additional records with `LicenseId` of 902
until there are 100 such records in the table.
To do so reliably, it seems that I would have to check how many licenses are in
available and in use (by SELECTing on the `VolumeLicenses` and
`VolumeLicenseUsers` tables), then do the INSERT as long as nothing changes in
the meantime.
What SQL can I use to make this transactional? Ideally, I would only lock the
single row in `VolumeLicenses` of that `LicenseId` and only protect the
`VolumeLicenseUsers` table against additional INSERTS with the `LicenseId` in
question.
**CLARIFICATION**
I am not attempting to enforce database integrity in general, like a pseudo-CHECK CONSTRAINT. I am only looking for some SQL that will allow me to do this without failing to some race condition.
|
*Answering my own question, but not 100% sure this is valid,
transactionally-speaking....*
```
DECLARE @UserId nvarchar(50) = 'john'
DECLARE @LicenseId int = 902
INSERT INTO dbo.VolumeLicenseUsers (LicenseId, UserId)
SELECT l.LicenseId, @UserId
FROM
dbo.VolumeLicenses l
LEFT OUTER JOIN dbo.VolumeLicenseUsers u ON u.LicenseId = l.LicenseId
WHERE l.LicenseId = @LicenseId
GROUP BY l.LicenseId, l.TotalLicenses
HAVING COUNT(u.UserId) < l.TotalLicenses
-- Did it work?
SELECT * FROM dbo.VolumeLicenseUsers
WHERE LicenseId = @LicenseId AND UserId = @UserId
```
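The key property of this pattern is that the capacity check and the insert happen in one statement. A runnable sketch of the same idea with Python's built-in sqlite3 (schema from the question, helper function made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE VolumeLicenses (LicenseId INTEGER, TotalLicenses INTEGER);
    CREATE TABLE VolumeLicenseUsers (LicenseId INTEGER, UserId TEXT);
    INSERT INTO VolumeLicenses VALUES (901, 3), (902, 100);
    INSERT INTO VolumeLicenseUsers VALUES (901, 'fred'), (901, 'bob'), (901, 'donna');
""")

def try_activate(license_id, user_id):
    # Inserts a row only while the current user count is below TotalLicenses.
    conn.execute("""
        INSERT INTO VolumeLicenseUsers (LicenseId, UserId)
        SELECT l.LicenseId, ?
        FROM VolumeLicenses l
        LEFT JOIN VolumeLicenseUsers u ON u.LicenseId = l.LicenseId
        WHERE l.LicenseId = ?
        GROUP BY l.LicenseId, l.TotalLicenses
        HAVING COUNT(u.UserId) < l.TotalLicenses
    """, (user_id, license_id))
    conn.commit()

try_activate(901, 'john')   # 901 is already full (3 of 3) -- no row added
try_activate(902, 'matt')   # 902 has plenty of room -- row added
counts = conn.execute("""
    SELECT LicenseId, COUNT(*) FROM VolumeLicenseUsers
    GROUP BY LicenseId ORDER BY LicenseId
""").fetchall()
print(counts)  # → [(901, 3), (902, 1)]
```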
|
I don't know if it's actually a clean solution:
`1)` Open a transaction
`2)` Add a record in `VolumeLicenseUsers`
`3)` Check in `VolumeLicenseUsers` if you have more licenses than the number allowed
`4)` If `not 3)` then `commit` else `rollback`
Could it work?
|
Transactional SELECT and INSERT
|
[
"",
"sql",
"sql-server",
""
] |
consider the following data:
```
category | index | value
-------------------------
cat 1 | 1 | 2
cat 1 | 2 | 3
cat 1 | 3 |
cat 1 | 4 | 1
cat 2 | 1 | 5
cat 2 | 2 |
cat 2 | 3 |
cat 2 | 4 | 6
cat 3 | 1 |
cat 3 | 2 |
cat 3 | 3 | 2
cat 3 | 4 | 1
```
I am trying to fill in the holes, so that `hole = avg(value)` of 2 nearest neighbours with non-null values within a category:
```
category | index | value
-------------------------
cat 1 | 1 | 2
cat 1 | 2 | 3
cat 1 | 3 | 2*
cat 1 | 4 | 1
cat 2 | 1 | 5
cat 2 | 2 | 5.5*
cat 2 | 3 | 5.5*
cat 2 | 4 | 6
cat 3 | 1 | 1.5*
cat 3 | 2 | 1.5*
cat 3 | 3 | 2
cat 3 | 4 | 1
```
I've been playing with window functions and am pretty sure it can be achieved but the solution is eluding me.
Any ideas?
|
You are correct, window function is what you're looking for. Here's how it can be done (`with` part is used to define table, so you probably won't need it):
```
with dt as
(
select * from
(
values
('cat 1', 1, 2),
('cat 1', 2, 3),
('cat 1', 3, null),
('cat 1', 4, 1),
('cat 2', 1, 5),
('cat 2', 2, null),
('cat 2', 3, null),
('cat 2', 4, 6),
('cat 3', 1, null),
('cat 3', 2, null),
('cat 3', 3, 1),
('cat 3', 4, 2)
) tbl ("category", "index", "value")
)
select
"category",
"index",
case
when "value" is null then (avg("value") over (partition by "category") )
else "value"
end
from dt
order by "category", "index";
```
refer to `WINDOW Clause` section of [this](http://www.postgresql.org/docs/9.3/static/sql-select.html) page for further info on window functions.
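A runnable sketch of the same `CASE`/`AVG() OVER (PARTITION BY ...)` idea with Python's built-in sqlite3 (window functions need SQLite 3.25+; the `index` column is renamed `idx` since `INDEX` is a keyword). Note that this fills each hole with the average of *all* non-null values in the category, which happens to match the expected output for this sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dt (category TEXT, idx INTEGER, value REAL);
    INSERT INTO dt VALUES
        ('cat 1', 1, 2), ('cat 1', 2, 3), ('cat 1', 3, NULL), ('cat 1', 4, 1);
""")
rows = conn.execute("""
    SELECT category, idx,
           CASE WHEN value IS NULL
                THEN AVG(value) OVER (PARTITION BY category)
                ELSE value END AS value
    FROM dt
    ORDER BY category, idx
""").fetchall()
print(rows)  # → [('cat 1', 1, 2.0), ('cat 1', 2, 3.0), ('cat 1', 3, 2.0), ('cat 1', 4, 1.0)]
```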
|
I was working on a solution for you, but SQLfiddle is giving (internal) errors at the moment, so I can't complete it.
A statement like this should do the update for you:
```
update table1 as t1
set value =
(select avg(value)
from
(select value
from table1 as t3
where t1.category = t3.category
and t3.index in (t1.index - 1, t1.index + 1)
) AS T2
)
where value is null
;
```
The fiddle I was working on is here: <http://sqlfiddle.com/#!15/acbc2/1>
|
select nearest neighbours
|
[
"",
"sql",
"postgresql",
"window-functions",
""
] |
In the table below, i need to find the percentage of males who are in the state 'nyc'.
```
select count(*) from table where state = 'nyc' and gender = 'male';
```
gives me the count of males from nyc. I need the % of males from nyc as 75%.
```
+-------+-------+--------+
| name | state | gender |
+-------+-------+--------+
| Jon | nyc | male |
+-------+-------+--------+
| obama | LA | male |
+-------+-------+--------+
| kat | nyc | male |
+-------+-------+--------+
| andy | nyc | male |
+-------+-------+--------+
| sri | nyc | female |
+-------+-------+--------+
```
desired output:
state, male\_percentage--> nyc, 75 %
as 3 are males and 1 is female from total 4 from nyc.
|
With a single select from the table:
```
select state ,100 * sum( IF(gender = 'male',1,0) ) / count(*)
from table where state = 'nyc'
group by state
```
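The same conditional-sum trick, verified with Python's built-in sqlite3 on the question's sample rows (using the portable `SUM(CASE ...)` instead of MySQL's `IF()`, and `100.0` to force floating-point division):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (name TEXT, state TEXT, gender TEXT);
    INSERT INTO people VALUES
        ('Jon', 'nyc', 'male'), ('obama', 'LA', 'male'), ('kat', 'nyc', 'male'),
        ('andy', 'nyc', 'male'), ('sri', 'nyc', 'female');
""")
row = conn.execute("""
    SELECT state,
           100.0 * SUM(CASE WHEN gender = 'male' THEN 1 ELSE 0 END) / COUNT(*) AS male_pct
    FROM people
    WHERE state = 'nyc'
    GROUP BY state
""").fetchone()
print(row)  # → ('nyc', 75.0)
```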
|
```
select count(*)/(select count(*) from table where gender='male')*100
from table where state = 'nyc' and gender = 'male';
```
|
sql query to calculate percentage
|
[
"",
"mysql",
"sql",
""
] |
```
Select *
into relatesupply
from
OrderByStore m
m.Product_Name,m.Quantity,n.Quantity
inner join
presentsupply n on m.Product_Name = n.Product_Name
```
I want `relatesupply` to be a new table in which the join output is stored. How should I write the query? `OrderByStore` and `presentsupply` are two existing tables.
|
Query will create a table `relatesupply` as you need with inner join `OrderByStore` and `presentsupply` :
```
SELECT
OrderByStore.Product_Name, OrderByStore.Quantity,
presentsupply.Product_Name AS Expr1, presentsupply.Quantity AS Expr2
INTO
relatesupply
FROM
OrderByStore
INNER JOIN
presentsupply ON OrderByStore.Product_Name = presentsupply.Product_Name
```
[](https://i.stack.imgur.com/mk74U.png)
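`SELECT ... INTO` is SQL Server-specific; most other engines spell the same thing as `CREATE TABLE ... AS SELECT`. A runnable sketch with Python's built-in sqlite3 and made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE OrderByStore (Product_Name TEXT, Quantity INTEGER);
    CREATE TABLE presentsupply (Product_Name TEXT, Quantity INTEGER);
    INSERT INTO OrderByStore VALUES ('pen', 5), ('book', 2);
    INSERT INTO presentsupply VALUES ('pen', 100);
""")
# CREATE TABLE ... AS SELECT: the new table's columns come from the select list.
conn.execute("""
    CREATE TABLE relatesupply AS
    SELECT o.Product_Name, o.Quantity AS ordered_qty, p.Quantity AS in_stock
    FROM OrderByStore o
    INNER JOIN presentsupply p ON o.Product_Name = p.Product_Name
""")
rows = conn.execute("SELECT * FROM relatesupply").fetchall()
print(rows)  # → [('pen', 5, 100)]  -- 'book' drops out of the INNER JOIN
```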
|
Create the table in advance (either manually or in script) and then insert the data using the insert statement. Avoid using the \* to select columns, name them. Just a much neater way to do things in my opinion.
|
SQL Query related to joins?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
""
] |
I have a `user` table with the columns `year`, `month`, `day` and `loginCount`.
I need to select the total number of logins by date.
|
Group by date and use `sum` to aggregate all users:
```
select year, month, day, sum(loginCount)
from user
group by year, month, day
```
|
```
SELECT Day, Month, Year, SUM(LoginCount)
FROM User
GROUP BY Day, Month, Year
```
By grouping by the tuple (Day, Month, Year) you are basically grouping by date. SUM(LoginCount) will sum all the values of LoginCount for each tuple you are grouping by.
|
How to select the total number of logins by date?
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have following values in my sql column:-
```
a,b,c,d e,f
```
I want to check if `b` is present in the column.
|
You can use [FIND\_IN\_SET()](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set):
```
FIND_IN_SET('b',yourcolumn) > 0
```
As an example to be used in a query:
```
SELECT * FROM yourtable WHERE FIND_IN_SET('b',yourcolumn) > 0;
```
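`FIND_IN_SET()` is MySQL-specific. A portable fallback is to wrap both the column and the search value in delimiters and use `LIKE`, which avoids false matches on substrings like `ab`. A runnable sketch with Python's built-in sqlite3 (sample rows made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (vals TEXT);
    INSERT INTO t VALUES ('a,b,c'), ('ab,cd'), ('x,y');
""")
# ',' || vals || ',' turns 'a,b,c' into ',a,b,c,' so ',b,' matches whole items only.
rows = conn.execute("""
    SELECT vals FROM t
    WHERE ',' || vals || ',' LIKE '%,b,%'
""").fetchall()
print(rows)  # → [('a,b,c',)]  -- 'ab,cd' is correctly excluded
```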
|
you can use FIND\_IN\_SET()
```
FIND_IN_SET('a','a,b,c,d,e');
```
<http://www.w3resource.com/mysql/string-functions/mysql-find_in_set-function.php>
|
Check if value exists in a comma separated list
|
[
"",
"mysql",
"sql",
""
] |
I am trying to write an SQL statement based on the following code.
```
CREATE TABLE mytable (
year INTEGER,
month INTEGER,
day INTEGER,
hoursWorked INTEGER )
```
Assuming that each employee works multiple days over each month in a 3 year period.
I need to write an sql statement that returns the total hours worked in each month, grouped by earliest year/month first.
I tried doing this, but I don't think it is correct:
```
SELECT Sum(hoursWorked) FROM mytable
ORDER BY(year,month)
GROUP BY(month);
```
I am a little confused about how to use the SUM function in conjunction with the GROUP BY or ORDER BY clauses. How does one go about doing this?
|
Try this:
```
SELECT year, month, SUM(hoursWorked)
FROM mytable
GROUP BY year, month
ORDER BY year, month
```
This way you will have for example:
```
2014 December 30
2015 January 12
2015 February 40
```
Fields you want to group by always have be present in SELECT part of query. And vice-versa - what you put in SELECT part, need be also in GROUP BY.
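The query above can be verified end to end with Python's built-in sqlite3 and a few made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (year INTEGER, month INTEGER, day INTEGER, hoursWorked INTEGER);
    INSERT INTO mytable VALUES
        (2014, 12, 1, 10), (2014, 12, 2, 20), (2015, 1, 5, 12);
""")
rows = conn.execute("""
    SELECT year, month, SUM(hoursWorked)
    FROM mytable
    GROUP BY year, month
    ORDER BY year, month
""").fetchall()
print(rows)  # → [(2014, 12, 30), (2015, 1, 12)]
```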
|
```
SELECT year, month, Sum(hoursWorked)as workedhours
FROM mytable
GROUP BY year,month
ORDER BY year,month;
```
You have to `group by` year and month.
|
How do I correctly use the SQL Sum function with multiple variables and grouping?
|
[
"",
"sql",
""
] |
I'm not a SQL expert, but if anybody can help me.
I use a recursive CTE to get the values as below.
Child1 --> Parent 1
Parent1 --> Parent 2
Parent2 --> NULL
If data population has gone wrong, then I'll have something like below, because of which the CTE may go into an infinite recursive loop and raise the max recursion error. Since the data is huge, I cannot check this **bad data** manually. Please let me know if there is a way to find it out.
Child1 --> Parent 1
Parent1 --> Child1
or
Child1 --> Parent 1
Parent1 --> Parent2
Parent2 --> Child1
|
You haven't specified the dialect or your column names, so it is difficult to make the perfect example...
```
-- Some random data
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable
CREATE TABLE #MyTable (ID INT PRIMARY KEY, ParentID INT NULL, Description VARCHAR(100))
INSERT INTO #MyTable (ID, ParentID, Description) VALUES
(1, NULL, 'Parent'), -- Try changing the second value (NULL) to 1 or 2 or 3
(2, 1, 'Child'), -- Try changing the second value (1) to 2
(3, 2, 'SubChild')
-- End random data
;WITH RecursiveCTE (StartingID, Level, Parents, Loop, ID, ParentID, Description) AS
(
SELECT ID, 1, '|' + CAST(ID AS VARCHAR(MAX)) + '|', 0, * FROM #MyTable
UNION ALL
SELECT R.StartingID, R.Level + 1,
R.Parents + CAST(MT.ID AS VARCHAR(MAX)) + '|',
CASE WHEN R.Parents LIKE '%|' + CAST(MT.ID AS VARCHAR(MAX)) + '|%' THEN 1 ELSE 0 END,
MT.*
FROM #MyTable MT
INNER JOIN RecursiveCTE R ON R.ParentID = MT.ID AND R.Loop = 0
)
SELECT StartingID, Level, Parents, MAX(Loop) OVER (PARTITION BY StartingID) Loop, ID, ParentID, Description
FROM RecursiveCTE
ORDER BY StartingID, Level
```
Something like this will show if/where there are loops in the recursive cte. Look at the column `Loop`. With the data as is, there is no loops. In the comments there are examples on how to change the values to cause a loop.
In the end the recursive cte creates a `VARCHAR(MAX)` of ids in the form `|id1|id2|id3|` (called `Parents`) and then checks if the current `ID` is already in that "list". If yes, it sets the `Loop` column to 1. This column is checked in the recursive join (the `AND R.Loop = 0`).
The ending query uses a `MAX() OVER (PARTITION BY ...)` to set to 1 the `Loop` column for a whole "block" of chains.
A little more complex, that generates a "better" report:
```
-- Some random data
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable
CREATE TABLE #MyTable (ID INT PRIMARY KEY, ParentID INT NULL, Description VARCHAR(100))
INSERT INTO #MyTable (ID, ParentID, Description) VALUES
(1, NULL, 'Parent'), -- Try changing the second value (NULL) to 1 or 2 or 3
(2, 1, 'Child'), -- Try changing the second value (1) to 2
(3, 3, 'SubChild')
-- End random data
-- The "terminal" childrens (that are elements that don't have childrens
-- connected to them)
;WITH WithoutChildren AS
(
SELECT MT1.* FROM #MyTable MT1
WHERE NOT EXISTS (SELECT 1 FROM #MyTable MT2 WHERE MT1.ID != MT2.ID AND MT1.ID = MT2.ParentID)
)
, RecursiveCTE (StartingID, Level, Parents, Descriptions, Loop, ParentID) AS
(
SELECT ID, -- StartingID
1, -- Level
'|' + CAST(ID AS VARCHAR(MAX)) + '|',
'|' + CAST(Description AS VARCHAR(MAX)) + '|',
0, -- Loop
ParentID
FROM WithoutChildren
UNION ALL
SELECT R.StartingID, -- StartingID
R.Level + 1, -- Level
R.Parents + CAST(MT.ID AS VARCHAR(MAX)) + '|',
R.Descriptions + CAST(MT.Description AS VARCHAR(MAX)) + '|',
CASE WHEN R.Parents LIKE '%|' + CAST(MT.ID AS VARCHAR(MAX)) + '|%' THEN 1 ELSE 0 END,
MT.ParentID
FROM #MyTable MT
INNER JOIN RecursiveCTE R ON R.ParentID = MT.ID AND R.Loop = 0
)
SELECT * FROM RecursiveCTE
WHERE ParentID IS NULL OR Loop = 1
```
This query should return all the "last child" rows, with the full parent chain. The column `Loop` is `0` if there is no loop, `1` if there is a loop.
|
With Postgres it's quite easy to prevent this by collecting all visited nodes in an array.
Setup:
```
create table hierarchy (id integer, parent_id integer);
insert into hierarchy
values
(1, null), -- root element
(2, 1), -- first child
(3, 1), -- second child
(4, 3),
(5, 4),
(3, 5); -- endless loop
```
Recursive query:
```
with recursive tree as (
select id,
parent_id,
array[id] as all_parents
from hierarchy
where parent_id is null
union all
select c.id,
c.parent_id,
p.all_parents||c.id
from hierarchy c
join tree p
on c.parent_id = p.id
and c.id <> ALL (p.all_parents) -- this is the trick to exclude the endless loops
)
select *
from tree;
```
---
To do this for multiple trees at the same time, you need to carry over the ID of the root node to the children:
```
with recursive tree as (
select id,
parent_id,
array[id] as all_parents,
id as root_id
from hierarchy
where parent_id is null
union all
select c.id,
c.parent_id,
p.all_parents||c.id,
p.root_id
from hierarchy c
join tree p
on c.parent_id = p.id
and c.id <> ALL (p.all_parents) -- this is the trick to exclude the endless loops
and c.root_id = p.root_id
)
select *
from tree;
```
### Update for Postgres 14
Postgres 14 introduced the (standard compliant) `CYCLE` option to detect cycles:
```
with recursive tree as (
select id,
parent_id
from hierarchy
where parent_id is null
union all
select c.id,
c.parent_id
from hierarchy c
join tree p
on c.parent_id = p.id
)
cycle id -- track cycles for this column
set is_cycle -- adds a boolean column is_cycle
using path -- adds a column that contains all parents for the id
select *
from tree
where not is_cycle
```
As mentioned [in the documentation](https://www.postgresql.org/docs/16/queries-with.html#QUERIES-WITH-CYCLE), this syntax is a shortcut equivalent to adding an `is_cycle` and `path` array rows manually.
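For engines without arrays or a `CYCLE` clause, the same visited-set trick can be done with a delimited string. A runnable sketch using Python's built-in sqlite3 (recursive CTEs need SQLite 3.8.3+) on the sample data from above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hierarchy (id INTEGER, parent_id INTEGER);
    INSERT INTO hierarchy VALUES
        (1, NULL), (2, 1), (3, 1), (4, 3), (5, 4), (3, 5);  -- (3, 5) closes a loop
""")
# Carry a '|'-delimited path of visited ids; flag (and stop expanding) repeats.
rows = conn.execute("""
    WITH RECURSIVE tree(id, parent_id, path, is_cycle) AS (
        SELECT id, parent_id, '|' || id || '|', 0
        FROM hierarchy WHERE parent_id IS NULL
        UNION ALL
        SELECT c.id, c.parent_id, p.path || c.id || '|',
               p.path LIKE '%|' || c.id || '|%'
        FROM hierarchy c
        JOIN tree p ON c.parent_id = p.id
        WHERE p.is_cycle = 0
    )
    SELECT id, path FROM tree WHERE is_cycle = 1
""").fetchall()
print(rows)  # → [(3, '|1|3|4|5|3|')]  -- the offending chain, ending where it repeats
```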
|
To find infinite recursive loop in CTE
|
[
"",
"sql",
"recursive-cte",
""
] |
I have 2 tables
```
Table Users:
UserID | Name
Table Cars:
CarID | Car Name | FK_UserID
```
A user can have more than 1 car.
I want to join each user with 1 car only, not more.
Having looked at other threads here,
I've tried the following:
```
Select users.UserID, users.name, carid
from Users
join cars
on users.UserID =
(
select top 1 UserID
from users
where UserID = CarID
)
```
But it still returns more than 1 match for each user.
What am I doing wrong?
|
You can try like below using `ROW_NUMBER()` function
```
select userid, username, carname
from
(
Select users.UserID as userid,
users.name as username,
cars.carname as carname,
ROW_NUMBER() OVER(PARTITION BY users.UserID ORDER BY users.UserID) AS r
from Users
join cars
on users.UserID = cars.FK_UserID
) XXX
where r = 1;
```
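A quick runnable check of the `ROW_NUMBER()` approach with Python's built-in sqlite3 (window functions need SQLite 3.25+; sample rows made up, ordering by `CarID` so the pick is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users (UserID INTEGER, Name TEXT);
    CREATE TABLE Cars (CarID INTEGER, CarName TEXT, FK_UserID INTEGER);
    INSERT INTO Users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO Cars VALUES (10, 'ford', 1), (11, 'bmw', 1), (12, 'kia', 2);
""")
rows = conn.execute("""
    SELECT UserID, Name, CarName FROM (
        SELECT u.UserID, u.Name, c.CarName,
               ROW_NUMBER() OVER (PARTITION BY u.UserID ORDER BY c.CarID) AS rn
        FROM Users u
        JOIN Cars c ON u.UserID = c.FK_UserID
    ) WHERE rn = 1
    ORDER BY UserID
""").fetchall()
print(rows)  # → [(1, 'ann', 'ford'), (2, 'bob', 'kia')]  -- exactly one car per user
```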
|
Another way to do it
```
select u.UserID,
u.name,
(select TOP 1 carid
from cars c
where u.UserID = c.FK_UserID
order by carid) carid -- Could be ordered by anything
from Users u
-- where only required if you only want users with cars
where exists (select * from cars c where u.UserID = c.FK_UserID)
```
|
How to Join only first row, disregard further matches
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2000",
"greatest-n-per-group",
""
] |
I have two tables as following:
**Table1**
```
id name desc
----- ----- ---------
1 nd1 desc1
2 nd2 desc2
4 nd4
7 nd7 desc7
```
**Table2**
```
id name desc
----- ----- ---------
3 nd3 desc3
4 nd4 desc17
5 nd5 desc5
6 nd6 desc6
7 nd7
```
What I need is all records from 1st table plus records from 2nd table that their ID does not exists in 1st table.
The data that I need is following:
**Result**
```
id name desc
----- ----- ---------
1 nd1 desc1
2 nd2 desc2
3 nd3 desc3
4 nd4
5 nd5 desc5
6 nd6 desc6
7 nd7 desc7
```
Currently I am doing following:
```
SELECT
id, name, desc FROM table1
UNION
SELECT
id, name, desc FROM table2
WHERE
Id NOT IN (SELECT id FROM table1)
```
Is there a more efficient way of doing this?
|
How about this one:
Assuming `Table1` has a `PK` on `id`.
```
SELECT
id, name, [desc]
FROM Table1
UNION ALL
SELECT
id, name, [desc]
FROM Table2 t2
WHERE
NOT EXISTS(SELECT 1 FROM Table1 WHERE id = t2.id)
```
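The `UNION ALL` + `NOT EXISTS` combination is easy to verify with Python's built-in sqlite3 (a trimmed-down version of the question's data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, name TEXT);
    CREATE TABLE t2 (id INTEGER, name TEXT);
    INSERT INTO t1 VALUES (1, 'nd1'), (4, 'nd4');
    INSERT INTO t2 VALUES (3, 'nd3'), (4, 'other');
""")
# UNION ALL skips the duplicate-elimination sort; NOT EXISTS filters ids already in t1.
rows = conn.execute("""
    SELECT id, name FROM t1
    UNION ALL
    SELECT id, name FROM t2
    WHERE NOT EXISTS (SELECT 1 FROM t1 WHERE t1.id = t2.id)
    ORDER BY id
""").fetchall()
print(rows)  # → [(1, 'nd1'), (3, 'nd3'), (4, 'nd4')]  -- t1's row wins for id 4
```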
|
According to the Actual Query Plan, this query is slightly more efficient than your example query with the UNION.
```
select
case when t1.id is not null then t1.id else t2.id end id,
case when t1.id is not null then t1.name else t2.name end name,
case when t1.id is not null then t1.[desc] else t2.[desc] end [desc]
from
table1 t1
full outer join
table2 t2
on
t1.id = t2.id
```
I haven't tested it on a large dataset, so I don't know what will happen then. This is the complete fiddle: <http://sqlfiddle.com/#!6/bb95d/1>
|
UNION two table minus duplicate records from second table by their ID
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm aware of the **INSERT INTO** table\_name *QUERY*; however, I'm unsure how to go about achieving the desired result in this case.
Here's a slightly contrived example to explain what I'm looking for, but I'm afraid I cannot put it more succinctly.
I have two tables in a database designed for a hotel.
**BOOKING** and **CUSTOMER\_BOOKING**
Where **BOOKING** contains *PK\_room\_number, room\_type,* etc. and **CUSTOMER\_BOOKING** contains *FK\_room\_number, FK\_customer\_id*
**CUSTOMER\_BOOKING** is a linking table (many customers can make many bookings, and many bookings can consist of many customers).
[](https://i.stack.imgur.com/x7F0a.png)
Ultimately, in the application back-end I want to be able to list all rooms that have less than 3 customers associated with them. I could execute this a separate query and save the result in the server-side scripting.
However, a more elegant solution (from my point of view) is to store this within the **BOOKING** table itself. That is to add a column no\_of\_bookings that counts the number of times the current PK\_room\_number appears as the foreign key FK\_room\_number within the **CUSTOMER\_BOOKING** table. And why do this instead? Because it would be impossible for me to write a single complicated query which will both include the information from all ROOMS, among other tables, and also count the occurrences of bookings, without excluding ROOMS that don't have any bookings. A very bad thing for a hotel website attempting to show free rooms!
So it would look like this
**BOOKING**: PK\_room\_number (104B) room\_type (double) room\_price (high), no\_of\_bookings (3)
**BOOKING**: PK\_room\_number (108C) room\_type (single) room\_price (low), no\_of\_bookings (1)
**CUSTOMER\_BOOKING**: FK\_room\_number (104B) FK\_customer\_id (4312)
**CUSTOMER\_BOOKING**: FK\_room\_number (104B) FK\_customer\_id (6372)
**CUSTOMER\_BOOKING**: FK\_room\_number (104B) FK\_customer\_id (1112)
**CUSTOMER\_BOOKING**: FK\_room\_number (108C) FK\_customer\_id (9181)
How would I go about creating this?
|
> Because it would be impossible for me to write a single complicated
> query which will both include the information from all ROOMS, among
> other tables, and also count the occurrences of bookings, without
> excluding ROOMS that don't have any bookings.
I wouldn't say it's impossible and unless you're running into performance issues, it's easier to implement than adding a new summary column:
```
select b.*, count(cb.room_number)
from bookings b
left join customer_booking cb on b.room_number = cb.room_number
group by b.room_number
```
Depending on your query, you may need to use a derived table containing the booking counts for each room instead:
```
select b.*, coalesce(t1.number_of_bookings,0) number_of_bookings
from bookings b
left join (
select room_number, count(*) number_of_bookings
from customer_booking
group by room_number
) t1 on t1.room_number = b.room_number
```
You have to `left join` the derived table and select `coalesce(t1.number_of_bookings,0)` in case a room does not have any entries in the derived table (i.e. 0 bookings).
A summary column is a good idea when you're running into performance issues with counting the # of bookings each time. In that case I recommend creating `insert` and `delete` triggers on the `customer_booking` table that either increment or decrement the `number_of_bookings` column.
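The first query (counting per room with a `LEFT JOIN` so rooms with no bookings still appear with a count of 0) can be sanity-checked with Python's built-in sqlite3 and made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bookings (room_number TEXT);
    CREATE TABLE customer_booking (room_number TEXT, customer_id INTEGER);
    INSERT INTO bookings VALUES ('104B'), ('108C'), ('201A');
    INSERT INTO customer_booking VALUES ('104B', 1), ('104B', 2), ('108C', 3);
""")
# COUNT(cb.room_number) counts only matched rows, so the empty room counts as 0.
rows = conn.execute("""
    SELECT b.room_number, COUNT(cb.room_number) AS n
    FROM bookings b
    LEFT JOIN customer_booking cb ON b.room_number = cb.room_number
    GROUP BY b.room_number
    HAVING COUNT(cb.room_number) < 3
    ORDER BY b.room_number
""").fetchall()
print(rows)  # → [('104B', 2), ('108C', 1), ('201A', 0)]
```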
|
You could do it in a single straight select like this:
```
select DISTINCT
b1.room_pk,
c1.no_of_bookings
from cust_bookings b1,
(select room_pk, count(1) as no_of_bookings
from cust_bookings
group by room_pk) c1
where b1.room_pk = c1.room_pk
having c1.no_of_bookings < 3
```
Sorry i used my own table names to test it but you should figure it out easily enough. Also, the "having" line is only there to limit the rows returned to rooms with less than 3 bookings. If you remove that line you will get everything and could use the same sql to update a column on the bookings table if you still want to go that route.
|
How to store SQL Query result in table column
|
[
"",
"mysql",
"sql",
"calculated-columns",
""
] |
I have a table `CUST` with following layout. There are no constraints. I do see that one `ChildID` has more than one `ParentID` associated with it. (Please see the records for `ChildID = 115`)
Here is what I need -
Wherever one child has more than 1 parent, I want to update those `ParentID` and `ParentName` with the `ParentID` and `ParentName` which has max `match_per`. So in the below image, I want `ParentID` 1111 and `ParentName` LEE YOUNG WOOK to update all records where `ChildId = 115` (since the `match_per` 0.96 is maximum within the given set). In case there are two parents with equal max match\_per, then I want to pick any 1 one of them.
[](https://i.stack.imgur.com/lThrq.png)
I know it is possible using CTE but I don't know how to update CTE. Can anybody help?
|
One way of doing it
```
WITH CTE1 AS
(
SELECT *,
CASE WHEN match_per =
MAX(match_per) OVER (PARTITION BY ChildId)
THEN CAST(ParentId AS CHAR(10)) + ParentName
END AS parentDetailsForMax
FROM CUST
), CTE2 AS
(
SELECT *,
MAX(parentDetailsForMax) OVER (PARTITION BY ChildId) AS maxParentDetailsForMax
FROM CTE1
)
UPDATE CTE2
SET ParentId = CAST(LEFT(maxParentDetailsForMax,10) AS int),
    ParentName = SUBSTRING(maxParentDetailsForMax,11,8000)
```
|
Getting both the parent id and parent name is a bit tricky. I think the logic is easiest using `cross apply`:
```
with toupdate as (
select t.*, p.parentId as new_parentId, p.parentName as new_parentName,
max(match_per) over (partition by childid) as max_match_per,
count(*) over (partition by childid) as numparents
from table t cross apply
(select top 1 p.*
from table p
where p.childid = t.childid
order by match_per desc
) p
)
update toupdate
set parentId = new_ParentId,
parentName = new_ParentName
where numparents > 1;
```
As a note: the fact that parent id and parent name are both stored in the table, potentially multiple times seems like a problem. I would expect to look up the name, given the id, to reduce data redundancy.
|
How to do conditional update on columns using CTE?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-update",
"row-number",
""
] |
I need to store both **time** and **date** in the mysql. So I used of `NOW()` function for that. But I don't know what should I use for *type column* im phpmyadmin. It should be noted that `NOW()` returns both time and date like this:
```
2014-11-11 12:45:34
```
One workaround is to use a separator to split the date and time (`2014-11-11` and `12:45:34`) and then store them individually in a DATE column and a TIME column. Or I could store both of them together in one VARCHAR column. But I don't think these approaches are standard. What is the standard type for storing both date and time?
Here is my query: (also, I don't know why the `NOW()` function does not work)
```
INSERT INTO table (timedate) VALUES (NOW())
```
|
**DATE:** It is used for values with a date part but no time part. MySQL retrieves and displays DATE values in ***YYYY-MM-DD*** format. The supported range is `1000-01-01` to `9999-12-31`.
**DATETIME:** It is used for values that contain both date and time parts. MySQL retrieves and displays DATETIME values in ***YYYY-MM-DD HH:MM:SS*** format. The supported range is `1000-01-01 00:00:00` to `9999-12-31 23:59:59`.
**TIMESTAMP:** It is also used for values that contain both date and time parts. MySQL converts TIMESTAMP values from the current session time zone to UTC for storage, and back from UTC to the session time zone for retrieval. TIMESTAMP has a range of `1970-01-01 00:00:01` UTC to `2038-01-19 03:14:07` UTC.
**TIME:** Its values are in ***HH:MM:SS*** format (or ***HHH:MM:SS*** format for large hours values). TIME values may range from `-838:59:59` to `838:59:59`. The hours part may be so large because the TIME type can be used not only to represent a time of day (which must be less than 24 hours), but also elapsed time or a time interval between two events (which may be much greater than 24 hours, or even negative).
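As a quick illustration of the literal layouts above, here is a small Python sketch (the instant is arbitrary) producing the strings MySQL expects for these column types:

```python
from datetime import datetime

now = datetime(2014, 11, 11, 12, 45, 34)  # a fixed example instant

# The same literal layouts MySQL uses for these column types:
as_datetime = now.strftime("%Y-%m-%d %H:%M:%S")  # DATETIME: YYYY-MM-DD HH:MM:SS
as_date     = now.strftime("%Y-%m-%d")           # DATE:     YYYY-MM-DD
as_time     = now.strftime("%H:%M:%S")           # TIME:     HH:MM:SS

print(as_datetime)  # → 2014-11-11 12:45:34
```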
|
I have a slightly different perspective on the difference between a **DATETIME** and a **TIMESTAMP**. A **DATETIME** stores a literal value of a date and time with no reference to any particular timezone. So, I can set a **DATETIME** column to a value such as '2019-01-16 12:15:00' to indicate precisely when my last birthday occurred. Was this Eastern Standard Time? Pacific Standard Time? Who knows? Where the current session time zone of the server comes into play occurs when you set a **DATETIME** column to some value such as **NOW()**. The value stored will be the current date and time using the current session time zone in effect. But once a **DATETIME** column has been set, it will display the same regardless of what the current session time zone is.
A **TIMESTAMP** column on the other hand takes the '2019-01-16 12:15:00' value you are setting into it and interprets it in the current session time zone to compute an internal representation relative to 1/1/1970 00:00:00 UTC. When the column is displayed, it will be converted back for display based on whatever the current session time zone is. It's a useful fiction to think of a **TIMESTAMP** as taking the value you are setting and converting it from the current session time zone to UTC for storing and then converting it back to the current session time zone for displaying.
If my server is in San Francisco but I am running an event in New York that starts on 9/1/2009 at 20:00, I would use a **TIMESTAMP** column for holding the start time, set the session time zone to 'America/New_York' and set the start time to '2009-09-01 20:00:00'. If I want to know whether the event has occurred or not, regardless of the current session time zone setting I can compare the start time with **NOW()**. Of course, for displaying in a meaningful way to a prospective customer, I *would* need to set the correct session time zone. If I did not need to do time comparisons, then I would probably be better off just using a **DATETIME** column, which will display correctly (with an implied EST time zone) regardless of what the current session time zone is.
**TIMESTAMP LIMITATION**
The `TIMESTAMP` type has a range of **'1970-01-01 00:00:01' UTC** to **'2038-01-19 03:14:07' UTC**, so it may not be usable for your particular application. In that case you will have to use a `DATETIME` type. You will, of course, always have to be concerned that the current session time zone is set properly whenever you are using this type with date functions such as `NOW()`.
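The storage-and-display fiction described above can be sketched in Python with `zoneinfo` (the event values are the illustrative ones from this answer):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Event entered while the session time zone is New York:
local_start = datetime(2009, 9, 1, 20, 0, tzinfo=ZoneInfo("America/New_York"))

# What a TIMESTAMP effectively stores: the UTC instant.
stored_utc = local_start.astimezone(timezone.utc)
print(stored_utc)  # → 2009-09-02 00:00:00+00:00  (EDT is UTC-4 in September)

# Read back with a different session time zone (e.g. the San Francisco server):
shown_pacific = stored_utc.astimezone(ZoneInfo("America/Los_Angeles"))
print(shown_pacific)  # → 2009-09-01 17:00:00-07:00
```

Both values denote the same instant; only the session (here, the target time zone) changes what gets displayed.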
|
What difference between the DATE, TIME, DATETIME, and TIMESTAMP Types
|
[
"",
"mysql",
"sql",
"datetime",
""
] |
I have two tables with identical columns `ID`, `A`, `B`, `C`.
I need to ADD to `TableX` the values from `TableY` for the corresponding `ID`'s. I know how to do this for a SINGLE update as follows:
```
update TableX x
set x.A= x.A +
(select y.A
from TableY y
where x.id= y.id)
where exists (select y.id
from TableY y
                where x.id = y.id)
```
But how to modify this statement so that I can update multiple columns as sums?
TIA
|
```
update TableX x
set (x.A, x.B, x.C) = (select y.A + x.A,
y.B + x.B,
y.C + x.C
from TableY y
where x.id= y.id)
where exists (
select y.id
from TableY y
where x.id = Y.id)
```
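The same arithmetic can be tried outside Oracle; here is a sketch using Python's sqlite3 module, where SQLite has no tuple-assignment form so each column gets its own correlated subquery (table contents are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableX (id INT PRIMARY KEY, A INT, B INT, C INT)")
con.execute("CREATE TABLE TableY (id INT PRIMARY KEY, A INT, B INT, C INT)")
con.execute("INSERT INTO TableX VALUES (1, 10, 20, 30), (2, 1, 2, 3)")
con.execute("INSERT INTO TableY VALUES (1, 5, 5, 5)")

# SQLite lacks Oracle's (x.A, x.B, x.C) = (SELECT ...) tuple form, so each
# column gets its own correlated subquery; the EXISTS guard plays the same
# role as in the Oracle statement, leaving unmatched rows alone.
con.execute("""
    UPDATE TableX
    SET A = A + (SELECT y.A FROM TableY y WHERE y.id = TableX.id),
        B = B + (SELECT y.B FROM TableY y WHERE y.id = TableX.id),
        C = C + (SELECT y.C FROM TableY y WHERE y.id = TableX.id)
    WHERE EXISTS (SELECT 1 FROM TableY y WHERE y.id = TableX.id)
""")

print(con.execute("SELECT * FROM TableX ORDER BY id").fetchall())
# → [(1, 15, 25, 35), (2, 1, 2, 3)]
```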
|
```
merge into tableX x
using (select * from tableY) y
on (x.id = y.id)
when matched then update set
x.a = x.a + y.a, x.b = x.b + y.b, x.c = x.c + y.c;
```
[SQLFiddle](http://sqlfiddle.com/#!4/df0923/1)
You could use [merge](http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9016.htm), especially if you want also insert non existing rows.
|
Oracle 10 how to update a table as a sum
|
[
"",
"sql",
"oracle",
""
] |
My SQLite database in my app stores the date in the format `yyyy-MM-dd HH:mm:ss`. I want to query a transaction that happened in a certain month of a year, but my code is not working:
```
SELECT COUNT(id_sale) AS total_transactions,sold_to,
strftime('%m', sale_date) AS month,
strftime('%Y', sale_date) AS year,
sale_date FROM sales WHERE month=5 AND year=2015
```
|
After some research I discovered that I had to cast the values for the comparison to work: `strftime` returns text, so comparing it against an integer fails.
```
SELECT COUNT(id_sale) AS total_transactions,sold_to,
CAST(strftime('%m', sale_date) AS INTEGER) month,
CAST(strftime('%Y', sale_date) AS INTEGER) year,
sale_date FROM sales WHERE month=5 AND year=2015
```
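This behaviour is easy to reproduce; here is a minimal sketch using Python's built-in sqlite3 module with made-up sales rows, showing that the text result of `strftime` never equals the integer 5 while the cast version matches:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id_sale INT, sold_to TEXT, sale_date TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    (1, "A", "2015-05-03 10:00:00"),
    (2, "B", "2015-05-20 14:30:00"),
    (3, "C", "2015-06-01 09:00:00"),
])

# strftime returns TEXT ('05'); in SQLite a TEXT value never equals an
# INTEGER, so the uncast comparison matches nothing.
raw = con.execute("SELECT COUNT(*) FROM sales "
                  "WHERE strftime('%m', sale_date) = 5").fetchone()[0]

# CAST(... AS INTEGER) makes the numeric comparison work.
cast = con.execute("SELECT COUNT(*) FROM sales "
                   "WHERE CAST(strftime('%m', sale_date) AS INTEGER) = 5 "
                   "AND CAST(strftime('%Y', sale_date) AS INTEGER) = 2015").fetchone()[0]

print(raw, cast)  # → 0 2
```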
|
As your date values are strings, you need no casting. You can compare directly with string values in your first select statement
```
SELECT
COUNT(id_sale) AS total_transactions,
sold_to,
strftime('%m', sale_date) AS month,
strftime('%Y', sale_date) AS year,
sale_date
FROM sales
WHERE month='05' AND year='2015'
```
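The string comparison can be verified the same way; a small sqlite3 sketch with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id_sale INT, sold_to TEXT, sale_date TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    (1, "A", "2015-05-03 10:00:00"),
    (2, "B", "2015-06-01 09:00:00"),
])

# strftime('%m', ...) yields the two-digit string '05', so comparing
# against the string literal '05' matches without any cast.
n = con.execute("SELECT COUNT(*) FROM sales "
                "WHERE strftime('%m', sale_date) = '05' "
                "AND strftime('%Y', sale_date) = '2015'").fetchone()[0]
print(n)  # → 1
```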
|
SQLite not selecting date
|
[
"",
"android",
"sql",
"sqlite",
""
] |
I was running through some of our code at work and came across this structure for an SQL query. I was sure it was a typo in our PHP variables, but I ran it and it works.
```
Select column from table where value = column;
```
Everyone I know was taught that the correct syntax is:
```
Select column from table where column = value;
```
Is there any reason for this being legal, other than SQL simply checking whether both sides of the equation are equal?
I'm more posting this because I found it really interesting, like a 'the more you know' kind of thing.
|
The equality operator (`=`) is symmetric - if `a=b` is true, then so is `b=a`. `a` and `b` can be column names, values or complex expressions. It's common to use the form `column=value`, but syntactically speaking, it's completely equivalent to `value=column`.
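A quick sqlite3 check of that symmetry (illustrative table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col INT)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Both orderings of the equality are the same predicate:
a = con.execute("SELECT col FROM t WHERE col = 2").fetchall()
b = con.execute("SELECT col FROM t WHERE 2 = col").fetchall()
print(a == b)  # → True
```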
|
Yes, you have it right: SQL just checks whether both sides of the equation are equal. The equation could even contain a column on *neither* side, such as
```
SELECT column from table where 1=2;
```
|
Mysql where clause syntax
|
[
"",
"mysql",
"sql",
"syntax",
"where-clause",
""
] |