| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am querying a database that stores two IDs, separated by commas, in a single column:
table : user
```
B_ID FirstName LastName email
B5,B6 Mo Asif xxx
B1 Adam chung xxx
```
As Mo has two IDs (B5, B6), how can I query the database to split them into two separate rows, exactly like this:
```
B_ID FirstName LastName email
B5 Mo Asif xxx
B6 Mo Asif xxx
B1 Adam chung xxx
```
There are cases with 3 IDs, and I want the same result for 3, 4, or more IDs.
|
```
WITH cte AS
(SELECT B_ID,FirstName,LastName,email FROM t)
SELECT REGEXP_SUBSTR(t1.B_ID, '([^,])+', 1, t2.COLUMN_VALUE),FirstName,LastName,email
FROM cte t1 CROSS JOIN
TABLE
(
CAST
(
MULTISET
(
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= REGEXP_COUNT(t1.B_ID, '([^,])+')
)
AS SYS.odciNumberList
)
) t2;
```
[FIDDLE](http://sqlfiddle.com/#!4/bcb4f/8)
|
This handles as many B\_IDs as there are and NULL B\_ID elements. Always test for unexpected values/conditions and make sure you are handling them! I suggest renaming that `B_ID` column. The name implies a unique identifier which it obviously is not. Either that or some further normalization is required.
Note the regular expression which handles NULL list elements. [The commonly used expression of `'[^,]+'` for parsing lists does not handle NULL elements](https://stackoverflow.com/questions/31464275/split-comma-separated-values-to-columns-in-oracle/31464699#31464699).
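The difference between the two patterns can be illustrated outside the database as well (a Python sketch, not part of the original answer; the helper name is made up):

```python
import re

def split_list(s):
    """Split a comma-separated list two ways, to show why '[^,]+' is unsafe."""
    # The commonly used pattern drops empty (NULL) elements entirely
    naive = re.findall(r'[^,]+', s)
    # The safer pattern keeps empty elements in their positions;
    # the slice trims the zero-width match after the final element
    safe = re.findall(r'(.*?)(?:,|$)', s)[:s.count(',') + 1]
    return naive, safe
```

For `'B7,,B9'` the naive pattern yields two elements and silently loses the NULL, while the safer pattern yields three.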
```
SQL> with tbl(B_ID, FirstName, LastName, email) as (
select 'B5,B6', 'Mo', 'Asif', 'xxx@yxz.com' from dual
union
select 'B1', 'Adam', 'chung', 'xxx@xyz.com' from dual
union
select 'B7,,B9', 'Lance', 'Link', 'llink@ape.org' from dual
union
select '', 'Mata', 'Hari', 'mhari@ape.org' from dual
)
SELECT REGEXP_SUBSTR(B_ID , '(.*?)(,|$)', 1, COLUMN_VALUE, NULL, 1 ) AS B_ID,
firstname, lastname, email
FROM tbl,
TABLE(
CAST(
MULTISET(
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= REGEXP_COUNT(B_ID , ',' )+1
) AS SYS.ODCINUMBERLIST
)
);
B_ID FIRST LASTN EMAIL
------ ----- ----- -----------
B1 Adam chung xxx@xyz.com
B5 Mo Asif xxx@yxz.com
B6 Mo Asif xxx@yxz.com
B7 Lance Link llink@ape.org
Lance Link llink@ape.org
B9 Lance Link llink@ape.org
Mata Hari mhari@ape.org
7 rows selected.
SQL>
```
|
Splitting one row into two rows based on IDs
|
[
"",
"sql",
"oracle",
""
] |
I have a query that joins 3 tables together.
One is the SPAAPPOINTMENTS table (spa appointments, with the price charged).
The second is SPAAPPTADDONS, add-ons for the appointments (nibbles etc.); it links between the appointments and RST.
The third is the RSTCATALOG table, where the add-on price is located.
The query:
```
SELECT SPAAPPOINTMENTS.APTID, SPAAPPOINTMENTS.SERVICECODE, SPAAPPOINTMENTS.LOCATIONCODE, SPAAPPOINTMENTS.STAFFCODE,
SPAAPPOINTMENTS.BLOCKSTARTTIME, SPAAPPOINTMENTS.BLOCKENDTIME, SPAAPPOINTMENTS.I_RECID, SPAAPPOINTMENTS.S_STAYID,
SPAAPPOINTMENTS.STATUSCODE, SPAAPPOINTMENTS.BLOCKEDTIME, SPAAPPOINTMENTS.GRATUITY, SPAAPPOINTMENTS.TEMPHOLDID,
SPAAPPOINTMENTS.LASTCHGID, SPAAPPOINTMENTS.CANCELID, SPAAPPOINTMENTS.ITEMNO, SPAAPPOINTMENTS.COMMENTS,
SPAAPPOINTMENTS.OFFSITELOCATION, SPAAPPOINTMENTS.PRICE, SPAAPPOINTMENTS.PRIORITY, SPAAPPOINTMENTS.SPAPKGCODE,
SPAAPPOINTMENTS.ENFORCEGENDER, SPAAPPOINTMENTS.NEWSTATUS, SPAAPPOINTMENTS.RESTNO, SPAAPPOINTMENTS.DONOTMOVE,
SPAAPPOINTMENTS.NOSHOW, SPAAPPOINTMENTS.OTHERGENDERREQUESTED, SPAAPPOINTMENTS.MADEBY, SPAAPPOINTMENTS.LINKCODE,
SPAAPPOINTMENTS.CONFIRMED, SPAAPPOINTMENTS.CXLNUMBER, SPAAPPOINTMENTS.REQUEST, SPAAPPOINTMENTS.CUSTOMFIELD1,
SPAAPPOINTMENTS.CUSTOMFIELD2, SPAAPPOINTMENTS.CUSTOMFIELD3, SPAAPPOINTMENTS.VIP, SPAAPPOINTMENTS.NOPREFERENCE,
SPAAPPOINTMENTS.CUSTOMFIELD4, SPAAPPOINTMENTS.CUSTOMFIELD5, SPAAPPOINTMENTS.BOOKID, SPAAPPOINTMENTS.ARTSRESVNO,
SPAAPPOINTMENTS.LMSCONFNO, SPAAPPOINTMENTS.ARTSGUESTID, SPAAPPOINTMENTS.CLUBNO, SPAAPPOINTMENTS.MULTIGRPID,
SPAAPPOINTMENTS.MAINCONTACT, SPAAPPOINTMENTS.SPABKNAME, SPAAPPOINTMENTS.MULTIGRPAPPTXT, SPAAPPOINTMENTS.CONFCOMMENTS,
SPAAPPOINTMENTS.MEMCODE, SPAAPPOINTMENTS.FAMILYMEMTYPE, SPAAPPOINTMENTS.COCOMMENTS, SPAAPPOINTMENTS.GRPMASTERID,
SPAAPPOINTMENTS.GUESTTYPE, SPAAPPOINTMENTS.EXTUNIQUEID, SPAAPPOINTMENTS.SETUPTIME, SPAAPPOINTMENTS.BREAKDOWNTIME,
SPAAPPOINTMENTS.PSBASEPRICE, SPAAPPOINTMENTS.PSCODE, SPAAPPOINTMENTS.PSCALENDARADJPCT, SPAAPPOINTMENTS.PSDETAILADJ,
SPAAPPOINTMENTS.PSPRICE, SPAAPPOINTMENTS.PSDETAILADJSOURCE, SPAAPPOINTMENTS.PSDETAILADJTYPE, SPAAPPOINTMENTS.PSPRICETYPE,
RSTCATALOG.C_DPRICE
FROM SPAAPPOINTMENTS
INNER JOIN SPAAPPTADDONS ON SPAAPPOINTMENTS.APTID = SPAAPPTADDONS.APTID
INNER JOIN RSTCATALOG ON SPAAPPTADDONS.ITEMNO = RSTCATALOG.C_ITEM
```
This returns all data from the appointments table AS LONG AS the appointment HAD AN ADD-ON.
If they just had an appointment, and no add-on, it doesn't return the data.
For example, if I use the query and postfix it with `WHERE (SPAAPPOINTMENTS.APTID = 626746)`,
it doesn't show any data, even though there is a row in SPAAPPOINTMENTS but NOTHING in the add-ons table.
If I use another APTID, one which HAS HAD AN ADD-ON, it shows the data and is output from the query.
How can I get the query to show the data even when there is NO matching data in the SPAAPPTADDONS table?
|
When you join tables together you specify a *left* and a *right* table. An `inner join` will produce a set where the join condition is true for both the left and right side.
What you want is to get a set that has all the data from the *left* table along with the matching data from the *right* side. This is a `left join`.
But, as you also want the pricing information for the addons, you need to join the third table too. Since we can assume a one-to-one match between the addons and the pricing table, it looks like we could use an `inner join` here. That won't work, though, because joins are performed in order: the last join uses the result of the first join as its left side, and that set has null values for the appointments without any addons. If we then try an `inner join` with the pricing table, those rows would be excluded, so you need a `left join` here too.
In the end you want this:
```
FROM SPAAPPOINTMENTS
LEFT JOIN SPAAPPTADDONS ON SPAAPPOINTMENTS.APTID = SPAAPPTADDONS.APTID
LEFT JOIN RSTCATALOG ON SPAAPPTADDONS.ITEMNO = RSTCATALOG.C_ITEM
```
On a side note I would encourage you to use table aliases to make the query a bit shorter and more readable. Something like this:
```
FROM SPAAPPOINTMENTS apt
LEFT JOIN SPAAPPTADDONS ao ON apt.APTID = ao.APTID
LEFT JOIN RSTCATALOG rst ON ao.ITEMNO = rst.C_ITEM
```
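The behavioural difference between the two join types can be sketched with a tiny in-memory example (a Python/sqlite3 sketch; the table and column names here are made up, not the OP's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE appt(aptid INTEGER, price REAL);
CREATE TABLE addon(aptid INTEGER, itemno INTEGER);
INSERT INTO appt VALUES (1, 50.0), (2, 80.0);  -- appointment 2 has no add-on
INSERT INTO addon VALUES (1, 10);
""")

# INNER JOIN drops appointment 2; LEFT JOIN keeps it with NULL add-on columns
inner = con.execute(
    "SELECT a.aptid FROM appt a INNER JOIN addon x ON a.aptid = x.aptid").fetchall()
left = con.execute(
    "SELECT a.aptid FROM appt a LEFT JOIN addon x ON a.aptid = x.aptid").fetchall()
```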
|
Change the `INNER JOIN` lines below
```
INNER JOIN SPAAPPTADDONS ON SPAAPPOINTMENTS.APTID = SPAAPPTADDONS.APTID
INNER JOIN RSTCATALOG ON SPAAPPTADDONS.ITEMNO = RSTCATALOG.C_ITEM
```
to `LEFT OUTER JOIN`s, like this:
```
LEFT JOIN SPAAPPTADDONS ON SPAAPPOINTMENTS.APTID = SPAAPPTADDONS.APTID
LEFT JOIN RSTCATALOG ON SPAAPPTADDONS.ITEMNO = RSTCATALOG.C_ITEM
```
|
SQL Join - Show data if a joined table doesn't hold any data
|
[
"",
"sql",
"join",
""
] |
Sadly, SQL is my weakest skill.
I'm trying to use `UNION` in a `VIEW`, where I can get statistics from two different tables with one query.
```
SELECT COUNT(*) AS `customer_count` FROM `Customers`
UNION
SELECT COUNT(*) AS `supplier_count` FROM `Suppliers`;
```
**[[Demo table](http://www.w3schools.com/sql/trysql.asp?filename=trysql_select_union)]**
However, it only returns **customer\_count**, with two rows. Is there any way to make this work so it returns **customer\_count** and **supplier\_count** separately?
|
You would need a `cross join` to see the results adjacent to each other in one row. So you would `select` from both tables without a `join` condition.
```
select * from
(select count(*) as customer_count from Customers) x,
(select count(*) as supplier_count from Suppliers) y
```
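A quick way to check the cross-join approach (a sqlite3 sketch with toy data; the SQL itself is the portable part):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Customers(name TEXT);
CREATE TABLE Suppliers(name TEXT);
INSERT INTO Customers VALUES ('a'), ('b'), ('c');
INSERT INTO Suppliers VALUES ('x'), ('y');
""")

# Two single-row derived tables cross-joined give one row with both counts
row = con.execute("""
    SELECT * FROM
      (SELECT COUNT(*) AS customer_count FROM Customers) x,
      (SELECT COUNT(*) AS supplier_count FROM Suppliers) y
""").fetchone()
```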
|
Using your table demo:
The key is to use aliases so the field names match in each `UNION` select, in this case `TableSource` and `Total`:
```
SELECT 'Customer' as TableSource, Count(City) as Total FROM Customers
UNION
SELECT 'Suppliers' as TableSource, Count(City) as Total FROM Suppliers;
```
|
How to get AS two separate selectors with UNION?
|
[
"",
"mysql",
"sql",
""
] |
```
select
order_slip,
product_name,
month
from orders
where month >= 09/03/2015
and month <= 09/09/2015
```
This is my query and it has no result. I have tried this query also:
```
select
order_slip,
product_name,
month from orders
where month between date(09/03/2015) and date(09/09/2015)
```
but it returns rows later than the date 09/09/2015.
|
Try executing something like this:
```
select order_slip,product_name,month from orders where month between '09/03/2015' and '09/09/2015'
```
The `month` column has to be of a date type.
|
`09/03/2015` is interpreted as "9 divided by 3 divided 2015" (which is 0.001488...). This is obviously not what you want.
Dates need to be *quoted* as strings. They also need to be in the correct format. If you are using a `DATE` or `DATETIME` field, then you need to use `2015-09-03`.
```
select
order_slip,
product_name,
month
from orders
where month >= '2015-09-03'
and month <= '2015-09-09'
```
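The fix is easy to verify with ISO-8601 date strings, which compare correctly even as plain text (a sqlite3 sketch with made-up rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders(order_slip TEXT, month TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("s1", "2015-09-03"), ("s2", "2015-09-07"), ("s3", "2015-09-15")])

# Quoted ISO dates sort lexicographically in date order, so BETWEEN works
rows = con.execute(
    "SELECT order_slip FROM orders "
    "WHERE month BETWEEN '2015-09-03' AND '2015-09-09'").fetchall()
```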
|
Query shows no results when searching for dates
|
[
"",
"mysql",
"sql",
"database",
""
] |
In SQL Server 2012, I'm trying to select multiple columns and concatenate them as below.
```
LTRIM(RTRIM(ISNULL(S.TITLE, ''))) +
' ' +
LTRIM(RTRIM(ISNULL(S.FIRSTNAME, ''))) +
' ' + LTRIM(RTRIM(ISNULL(S.SURNAME, ''))) AS 'Full Name',
```
The problem I have is that if the first column, TITLE, is NULL or blank, then the next column, FIRSTNAME, starts with a space due to the concatenation.
Any ideas?
P.S - This is my first question so apologies if I have done anything wrong....
|
Enclose the whole expression within `LTRIM()`:
```
LTRIM(LTRIM(RTRIM(ISNULL(S.TITLE, ''))) +
' ' +
LTRIM(RTRIM(ISNULL(S.FIRSTNAME, ''))) +
' ' + LTRIM(RTRIM(ISNULL(S.SURNAME, '')))) AS 'Full Name',
```
|
You could just solve that problem using `ltrim()` around the entire expression:
```
LTRIM(LTRIM(RTRIM(ISNULL(S.TITLE, ''))) + ' ' + LTRIM(RTRIM(ISNULL(S.FIRSTNAME, ''))) + ' ' + LTRIM(RTRIM(ISNULL(S.SURNAME, '')))) AS [Full Name],
```
Helpful hint: don't use single quotes for column aliases. Although this sometimes works, at other times, the code will not do what you expect. Only use single quotes for string and date constants.
There are other methods, such as adding the space before the test for NULL
```
(COALESCE(LTRIM(RTRIM(s.Title)) + ' ', '') +
COALESCE(LTRIM(RTRIM(S.FirstName)) + ' ', '') +
COALESCE(LTRIM(RTRIM(s.SurName)), '')
)
```
This has the advantage of handling the double spaces if there is no first name. However, you will need an `rtrim()` around the entire expression to handle a missing surname.
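The space-before-the-NULL-test trick relies on `NULL || ' '` being NULL, which `COALESCE` then replaces with an empty string. A portable sketch (sqlite3 uses `||` and `TRIM` where T-SQL uses `+` and `ISNULL`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A NULL title contributes nothing at all, so no stray leading space appears
full = con.execute("""
    SELECT COALESCE(TRIM(?) || ' ', '') ||
           COALESCE(TRIM(?) || ' ', '') ||
           COALESCE(TRIM(?), '')
""", (None, " Mo ", "Asif")).fetchone()[0]
```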
|
How to Concatenate in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"concatenation",
""
] |
Stored procedure is:
```
CREATE PROCEDURE CountUtily
@domain varchar(50),
@count int,
@totalCount int OUT
AS
BEGIN
SET NOCOUNT ON
SET @totalCount=0
IF (EXISTS (SELECT @totalCount = count
FROM FormFillerAuto2_DomainCount
WHERE domain = @domain))
BEGIN
SET @totalCount = @totalCount + @count
UPDATE FormFillerAuto2_DomainCount
SET count = @totalCount
WHERE domain = @domain
END
ELSE
BEGIN
INSERT INTO FormFillerAuto2_DomainCount (domain, count)
VALUES (@domain, @count)
END
END
```
Error:
> Incorrect syntax near '='. Incorrect syntax near the keyword 'ELSE'.
|
See [**EXISTS**](https://msdn.microsoft.com/en-us/library/ms188336.aspx):
> Specifies a subquery to test for the existence of rows. It returns
> `TRUE` if the subquery contains any rows. It accepts a restricted `SELECT` statement; the `INTO` keyword is not allowed.
>
> The problem here is that you can't assign a value inside `EXISTS`.
Try
```
alter PROCEDURE CountUtily
@domain varchar(50),
@count int,
@totalCount int OUT
AS BEGIN
SET NOCOUNT ON
SET @totalCount=0;
IF (EXISTS (SELECT [count] FROM FormFillerAuto2_DomainCount WHERE domain=@domain))
begin
SELECT @totalCount=[count] FROM FormFillerAuto2_DomainCount WHERE domain=@domain
UPDATE FormFillerAuto2_DomainCount SET count=@totalCount WHERE domain=@domain
end
ELSE
begin
INSERT INTO FormFillerAuto2_DomainCount (domain, count) VALUES (@domain, @count)
end
end
```
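As a side note, the whole IF EXISTS / UPDATE / INSERT pattern is an upsert; databases with native upsert support can express it in one statement. A sketch in sqlite3 (not T-SQL, and with made-up names), just to show the shape:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE domain_count(domain TEXT PRIMARY KEY, count INTEGER)")

def add_count(domain, n):
    # Insert a new row, or add to the existing count on conflict
    con.execute("""
        INSERT INTO domain_count(domain, count) VALUES (?, ?)
        ON CONFLICT(domain) DO UPDATE SET count = count + excluded.count
    """, (domain, n))
    return con.execute(
        "SELECT count FROM domain_count WHERE domain = ?", (domain,)).fetchone()[0]

add_count("a.com", 2)          # first call inserts
total = add_count("a.com", 3)  # second call updates: 2 + 3 = 5
```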
|
Your `SELECT @totalCount = count` does not return a bool. Try setting `@totalCount` before the `IF` evaluates, and evaluate the count in the `IF`:
```
CREATE PROCEDURE CountUtily
@domain varchar(50),
@count int,
@totalCount int OUT
AS BEGIN
SET NOCOUNT ON
SET @totalCount=0
SELECT @totalCount=count FROM FormFillerAuto2_DomainCount WHERE domain=@domain
IF (@totalCount > 0)
begin
SET @totalCount=@totalCount+@count
UPDATE FormFillerAuto2_DomainCount SET count=@totalCount WHERE domain=@domain
end
ELSE
begin
INSERT INTO FormFillerAuto2_DomainCount (domain, count) VALUES (@domain, @count)
end
end
```
|
SQL Server stored procedure compile error
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I've got an SQL query which joins 2 tables, I'm trying to filter rows which match a condition, then filter out the results if the same condition matches with different values, but the last WHERE clause seems to be ignored:
```
SELECT DISTINCT
tblclients.firstname, tblclients.lastname, tblclients.email
FROM
tblclients
LEFT JOIN
(
SELECT *
FROM tblhosting
WHERE tblhosting.packageid IN (75,86)
) tblhosting ON tblclients.id = tblhosting.userid
WHERE
tblhosting.packageid NOT IN (76,77,78)
```
The idea is to get a list of customers who have a certain package (ID 75 or 86), then exclude any customers who also have another package (ID 76, 77, 78, etc.) in addition to 75/86. It's not excluding those results at all, though. I've tried numerous variations from Stack Overflow; where am I going wrong, please?
|
I wouldn't use a `LEFT JOIN` at all in this case. I would use a combination of `EXISTS` and `NOT EXISTS` to get your desired results.
```
SELECT DISTINCT c.firstname, c.lastname, c.email
FROM tblclients c
WHERE EXISTS (SELECT *
FROM tblhosting h
WHERE h.userid = c.id
AND h.packageid IN (75,86))
AND NOT EXISTS (SELECT *
FROM tblhosting h
WHERE h.userid = c.id
AND h.packageid IN (76,77,78))
```
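The EXISTS / NOT EXISTS pair behaves as intended on a small test set (a sqlite3 sketch with invented rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tblclients(id INTEGER, email TEXT);
CREATE TABLE tblhosting(userid INTEGER, packageid INTEGER);
INSERT INTO tblclients VALUES (1, 'a@x'), (2, 'b@x'), (3, 'c@x');
-- client 1: only 75; client 2: 75 plus excluded 77; client 3: unrelated 80
INSERT INTO tblhosting VALUES (1, 75), (2, 75), (2, 77), (3, 80);
""")

# Only client 1 has a wanted package and none of the excluded ones
rows = con.execute("""
    SELECT c.email FROM tblclients c
    WHERE EXISTS (SELECT * FROM tblhosting h
                  WHERE h.userid = c.id AND h.packageid IN (75, 86))
      AND NOT EXISTS (SELECT * FROM tblhosting h
                      WHERE h.userid = c.id AND h.packageid IN (76, 77, 78))
""").fetchall()
```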
|
Add it to the `join` condition itself. When you filter the joined table in the `where` clause, your `left join` is effectively treated as an `inner join`.
```
tblhosting ON tblclients.id = tblhosting.userid and tblhosting.packageid NOT IN (76,77,78)
```
|
SQL Join and exclude / filter
|
[
"",
"mysql",
"sql",
""
] |
I have an issue with a `SQL` query that I am trying to run (I am practicing at this point). I'm trying to get some records between 2 specific dates with some conditions.
The thing is, the query isn't working properly: it only shows the same record over and over. If there are `100` entries, the query shows the same entry `100` times.
Here's the query:
```
SELECT Base.Registry.Registry,
Base.Client.Name,
Base.Client.Surname1,
Base.Client.Surname2,
Base.Client.BirthDate,
Base.Registry.Edad,
Base.Client.NIF,
Base.Registry.Procedency,
Base.Registry.DateRegistry,
Base.Registry.DateValidation,
Base.Registry.CodeSample,
Base.SnomedRegistry.Code,
Base.Sample.T
FROM Base.Registry,
Base.Client,
Base.SnomedRegistry,
Base.Sample
WHERE Base.Registry.CodeClient = Base.Client.CodeClient
AND Base.Registry.Registry = Base.SnomedRegistry.Registry
AND Base.Registry.CodeSample = Base.Sample.Sample
AND Base.Registry.DateRegistry BETWEEN '2012-10-26' AND '2012-12-31'
AND Base.SnomedRegistry.Code LIKE 'M8%%%3'
OR Base.SnomedRegistry.Code LIKE 'M8%%%2'
OR Base.SnomedRegistry.Code LIKE 'M8%%%6'
OR Base.SnomedRegistry.Code LIKE 'M8%%%9'
OR Base.SnomedRegistry.Code LIKE 'M9%%%2'
OR Base.SnomedRegistry.Code LIKE 'M9%%%3'
OR Base.SnomedRegistry.Code LIKE 'M9%%%6'
OR Base.SnomedRegistry.Code LIKE 'M9%%%9'
OR Base.SnomedRegistry.Code LIKE 'M69730'
OR Base.SnomedRegistry.Code LIKE 'M60760'
OR Base.SnomedRegistry.Code LIKE 'M74000'
OR Base.SnomedRegistry.Code LIKE 'M74007'
OR Base.SnomedRegistry.Code LIKE 'M74008'
LIMIT 0, 100
```
I am quite inexperienced with SQL right now, so any advice would be appreciated.
Obs: Sorry for my english.
|
I would comment rather than provide an answer, but I need 50 points to do that. lol
I'm not quite sure why you need to use `%` so many times, like `%%%%%`; usually one `%` is enough.
I would say your problem is to do with your `FROM` clause; ideally you should use explicit joins.
I would also use brackets around your `LIKE` conditions:
```
WHERE 1 = 1
AND Base.Registry.CodeClient = Base.Client.CodeClient
AND Base.Registry.Registry = Base.SnomedRegistry.Registry
AND Base.Registry.CodeSample = Base.Sample.Sample
AND Base.Registry.DateRegistry BETWEEN '2012-10-26' AND '2012-12-31'
AND (
Base.SnomedRegistry.Code LIKE 'M8%3'
OR Base.SnomedRegistry.Code LIKE 'M8%2'
OR Base.SnomedRegistry.Code LIKE 'M8%6'
OR Base.SnomedRegistry.Code LIKE 'M8%9'
OR Base.SnomedRegistry.Code LIKE 'M9%2'
OR Base.SnomedRegistry.Code LIKE 'M9%3'
OR Base.SnomedRegistry.Code LIKE 'M9%6'
OR Base.SnomedRegistry.Code LIKE 'M9%9'
OR Base.SnomedRegistry.Code LIKE 'M69730'
OR Base.SnomedRegistry.Code LIKE 'M60760'
OR Base.SnomedRegistry.Code LIKE 'M74000'
OR Base.SnomedRegistry.Code LIKE 'M74007'
OR Base.SnomedRegistry.Code LIKE 'M74008'
)
LIMIT 0 , 100
```
|
You forgot the parentheses:
```
SELECT ...
FROM Base.Registry, Base.Client, Base.SnomedRegistry, Base.Sample
WHERE Base.Registry.CodeClient = Base.Client.CodeClient
AND Base.Registry.Registry = Base.SnomedRegistry.Registry
AND Base.Registry.CodeSample = Base.Sample.Sample
AND Base.Registry.DateRegistry
BETWEEN '2012-10-26' AND '2012-12-31'
AND ( #<----- here
Base.SnomedRegistry.Code LIKE 'M8%%%3'
OR Base.SnomedRegistry.Code LIKE 'M8%%%2'
OR Base.SnomedRegistry.Code LIKE 'M8%%%6'
OR Base.SnomedRegistry.Code LIKE 'M8%%%9'
OR Base.SnomedRegistry.Code LIKE 'M9%%%2'
OR Base.SnomedRegistry.Code LIKE 'M9%%%3'
OR Base.SnomedRegistry.Code LIKE 'M9%%%6'
OR Base.SnomedRegistry.Code LIKE 'M9%%%9'
OR Base.SnomedRegistry.Code LIKE 'M69730'
OR Base.SnomedRegistry.Code LIKE 'M60760'
OR Base.SnomedRegistry.Code LIKE 'M74000'
OR Base.SnomedRegistry.Code LIKE 'M74007'
OR Base.SnomedRegistry.Code LIKE 'M74008'
) #<----- here
LIMIT 0 , 100
```
|
SQL isn't working as intended
|
[
"",
"mysql",
"sql",
"t-sql",
""
] |
I know you can run SELECT queries on top of SELECT queries in Access, but the application also provides the Make Table query type.
I'm wondering what the benefits/reasons for using Make Table might be?
|
You would usually use `Make Table` for performance reasons. If you have a fairly complex query that returns a subset of your table's data, and that you may need to retrieve multiple times, it can be expensive to re-run the query multiple times.
Using `Make Table` allows you to incur the cost of running the expensive query *once*, and make a copy of the query results into a table. Querying this *copy* would then be a lot less expensive than running your original expensive query.
This is usually a good option when you don't expect your original data to change frequently, or if you don't care that you are working of a copy of the data that may not be 100% up-to-date with the original data.
Notice what the following article on [Create a make table query](https://support.office.com/en-us/article/Create-a-make-table-query-96424f9e-82fd-411e-aca4-e21ad0a94f1b) has to say:
> Typically, you create make table queries when you need to copy or archive data. For example, suppose you have a table (or tables) of past sales data, and you use that data in reports. The sales figures cannot change because the transactions are at least one day old, and constantly running a query to retrieve the data can take time — especially if you run a complex query against a large data store. Loading the data into a separate table and using that table as a data source can reduce workload and provide a convenient data archive. As you proceed, remember that the data in your new table is strictly a snapshot; it has no relationship or connection to its source table or tables.
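In generic SQL the same snapshot idea is `CREATE TABLE ... AS SELECT` (Access spells it `SELECT ... INTO`). A sqlite3 sketch with made-up names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales(amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?)", [(10,), (20,), (30,)])

# Materialize the (possibly expensive) query result once as its own table;
# the snapshot has no further connection to the source table
con.execute("CREATE TABLE sales_snapshot AS SELECT amount FROM sales WHERE amount > 10")
n = con.execute("SELECT COUNT(*) FROM sales_snapshot").fetchone()[0]
```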
|
The main argument against a make-table query is that it creates a table. And when you are done with the table, the effort and time to delete it, and to recover the VERY LARGE increase in the database file, still has to be spent. For general reports, a plain query of the data makes much more sense. A comparison would be building a NEW garage every time you want to park your car.
The database engine and query system can fetch and pull rows at a very high rate, and those results can be rendered into a report or form without creating a temp table. It makes little sense to go through all the trouble of having the system create a WHOLE NEW table for results that can easily be sent straight to a report.
In other words, creating a whole table just to display or use some data that the database engine has already fetched makes little sense. A table is a set of rows holding data that can be updated, and its contents are permanent. A query is an "on the fly" result, a subset of the data that only exists in memory and is discarded after you use it.
So for general reporting and display of data, it makes no sense to create a temp table. A MUCH WORSE issue: if two users want to run a report with different results and you send both results to the SAME temp table, you have a big mess and a collision between the two users. So use of a temp table in Access mostly makes little sense, and EVEN MORE so in a multi-user environment. And as noted, once the table is created and you are done with it, you have to delete and remove it. With many users in a multi-user database this becomes even more of a problem.
However, in a multi-user environment, if the resulting data needs additional processing, then sending the results to a temp table can be useful. This approach assumes EACH USER has their own front end and own copy of the application side. Better still, the temp table should be created outside of the front-end application that resides on each computer. Since the application part (front end) is placed on each computer, creating a temp table then does not touch the production (back-end) database, and as a result multiple users can work correctly without each one creating a temp table in the production back end. So if you do adopt a make-table query, it should likely target each local workstation, not the back-end database, when you have a multi-user application.
Thus, for the most part, a make-table query and the reporting or querying of data are VERY different tasks. You don't want to create a whole brand-new table for a simple query. In a multi-user system the users might run hundreds of reports in a given day, and FEW if any systems would send that data to a temp table instead of sending the query results directly to the report.
|
What are the benefits of a Make Table vs a Select query in Access?
|
[
"",
"sql",
"ms-access",
""
] |
I have a table like the following (there is, of course, other data in the table):
```
Col A Col B
1 Red
1 Red
2 Blue
2 Green
3 Black
```
I am trying to return a value for Col A only when ALL the Col B values match; otherwise return null.
This will be used as part of another SQL statement that will be passed the Col A value, i.e.
Select \* from Table where Col A = 1
I need to return the value in Col B. The correct result for the above table would be Red, Black.
Any ideas?
|
How about this?
[SQL Fiddle](http://sqlfiddle.com/#!4/45f6d/18)
**Oracle 11g R2 Schema Setup**:
```
create table t( id number, color varchar2(20));
insert into t values(1,'RED');
insert into t values(1,'RED');
insert into t values(2,'BLUE');
insert into t values(2,'GREEN');
insert into t values(3,'BLACK');
```
**Query 1**:
```
select color from t where id in (
select id
from t
group by id having min(color) = max(color) )
group by color
```
**[Results](http://sqlfiddle.com/#!4/45f6d/18/0)**:
| COLOR |
|-------|
| RED |
| BLACK |
|
If you just want the *values* in A (rather than each row), then use `group by`:
```
select a
from table t
group by a
having min(b) = max(b);
```
Note: this ignores `NULL` values. If you want to treat them as an additional value, then add another condition:
```
select a
from table t
group by a
having min(b) = max(b) and count(*) = count(b);
```
It is also tempting to use `count(distinct)`. In general, though, `count(distinct)` requires more processing effort than a `min()` and a `max()`.
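The `min = max` trick is easy to check on the sample data (a sqlite3 sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a INTEGER, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "Red"), (1, "Red"), (2, "Blue"), (2, "Green"), (3, "Black")])

# A group whose minimum equals its maximum has only one distinct value
rows = con.execute("SELECT a FROM t GROUP BY a HAVING MIN(b) = MAX(b)").fetchall()
```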
|
Oracle SQL - return record only if colB is the same for all of colA
|
[
"",
"sql",
"oracle",
""
] |
In a simple search like this:
```
find.where('name LIKE ?', "%#{search}%")
```
I understand that `#{search}` is just string interpolation. What do the `%` symbols do?
|
The percent sign `%` is a [wildcard](http://www.w3schools.com/sql/sql_wildcards.asp) in SQL that matches zero or more characters. Thus, if `search` is `"hello"`, it would match strings in the database such as `"hello"`, `"hello world"`, `"well hello world"`, etc.
Note that this is a part of SQL and is not specific to Rails/ActiveRecord. The queries it can be used with, and the precise behavior of `LIKE`, differ based on SQL dialect (MySQL, PostgreSQL, etc.).
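A quick illustration of the wildcard (a sqlite3 sketch; any SQL database behaves the same for this part of `LIKE`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE finds(name TEXT)")
con.executemany("INSERT INTO finds VALUES (?)",
                [("hello",), ("hello world",), ("well hello world",), ("goodbye",)])

# '%hello%' matches 'hello' anywhere in the string, including at the ends
rows = con.execute("SELECT name FROM finds WHERE name LIKE '%hello%'").fetchall()
```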
|
```
search = 'something'
find.where('name LIKE ?', "%#{search}%")
```
In your DB it will be interpreted as
```
SELECT <fields> FROM finds WHERE name LIKE '%something%';
```
|
In a Rails WHERE LIKE query, what does the percent sign mean?
|
[
"",
"sql",
"ruby-on-rails",
"search",
"model",
""
] |
I have a table storing the usage count of topics; each subsequent request increases the usage count. I have added a query to return the least-used topic.
```
SELECT b.body, n.nid
FROM node n
LEFT JOIN body b ON n.nid = b.entity_id
LEFT JOIN topic_usage u ON n.nid = u.entity_id
LEFT JOIN reference r ON n.nid = r.entity_id
AND (
r.entity_type = 'node'
AND
r.deleted = '0'
)
WHERE n.type = :type
AND n.status = 1
AND r.target_id= :id
ORDER BY u.field_topic_usage_value ASC LIMIT 0,1
```
Example table
```
nid | Usage Count
-----------------
1 | 0
2 | 0
3 | 0
4 | 1
```
The above query works fine to return the lowest used topic. But the requirement is to apply `RAND()` on the lowest usage.
As per the above example table, the query should do a `rand()` where usage count is 0.
I can write 2 queries: the first gets the lowest count and the second does the `rand()` on rows with that count. But can that be combined into 1?
Thank you.
Solution 1 @fancypants
```
SELECT b.body, n.nid
FROM node n
LEFT JOIN body b ON n.nid = b.entity_id
LEFT JOIN topic_usage u ON n.nid = u.entity_id
LEFT JOIN field_data_field_issue_reference f ON n.nid = f.entity_id ,
( select @min := (
        SELECT min(u2.field_topic_usage_value) from node n2
        LEFT JOIN topic_usage u2 ON n2.nid = u2.entity_id
    )
) var_init_subquery
WHERE n.status = 1
AND f.field_issue_reference_target_id = 708
order by u.field_topic_usage_value, if(u.field_topic_usage_value=@min, rand(), n.nid)
LIMIT 0,1
```
Solution 2 @Hitesh
```
SELECT b.body, node.nid
FROM node
LEFT JOIN body b ON node.nid = b.entity_id
LEFT JOIN topic_usage u ON node.nid = u.entity_id
LEFT JOIN issue_reference f ON node.nid = f.entity_id
AND (
    f.entity_type = 'node'
    AND
    f.deleted = '0'
)
WHERE node.type = 'issue_paragraphs'
AND node.status = 1
AND f.field_issue_reference_target_id = 708
GROUP BY node.nid HAVING u.field_topic_usage_value = min(u.field_topic_usage_value)
ORDER BY rand() LIMIT 0,1
```
|
I think you need to use a `GROUP BY` and `HAVING` clause for the selection and then use `ORDER BY rand()`. For example, I would write your query as follows. My query may be wrong on the column selections, but I am trying to focus on the approach.
```
SELECT b.body, n.nid
FROM node n
LEFT JOIN body b ON n.nid = b.entity_id
LEFT JOIN topic_usage u ON n.nid = u.entity_id
LEFT JOIN reference r ON n.nid = r.entity_id
AND (
r.entity_type = 'node'
AND
r.deleted = '0'
)
WHERE n.type = :type
AND n.status = 1
AND r.target_id= :id
GROUP BY n.nid having u.field_topic_usage_value=min(u.field_topic_usage_value)
ORDER BY rand()
```
|
**Sample data:**
```
CREATE TABLE t
(`a` int, `b` int)
;
INSERT INTO t
(`a`, `b`)
VALUES
(1, 1),
(2, 1),
(3, 0),
(4, 0),
(5, 1),
(6, 0)
;
```
**Query:**
```
select
*
from
t
order by b, if(b=0, rand(), a)
```
* see it working live in an [sqlfiddle](http://sqlfiddle.com/#!9/fe9c7/1/0)
Every time you execute it, the `a` column is ordered randomly for entries where `b` is `0` and ordered by `a` where `b` is not `0`.
To have it working for your minimum, you could simply use [variables](http://dev.mysql.com/doc/refman/5.5/en/user-variables.html):
```
select
a, b
from
t
, (select @min := (select min(b) from t)) var_init_subquery
order by b, if(b=@min, rand(), a)
```
* again, [sqlfiddle](http://sqlfiddle.com/#!9/fe9c7/3/0)
|
Mysql ORDER BY RAND()
|
[
"",
"mysql",
"sql",
""
] |
I have this time-duration: `00:00:23.323`
I want to convert it to milliseconds in SQL.
EDIT://
I tried this, but it isn't very nice:
```
SELECT (DATEPART(hh,'12:13:14.123') * 60 * 60 * 1000)
SELECT (DATEPART(n,'12:13:14.123') * 60 * 1000)
SELECT (DATEPART(s,'12:13:14.123') * 1000)
SELECT DATEPART(ms,'12:13:14.123')
```
How does it work?
Thanks for your answers.
|
Use [DATEDIFF](https://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&l=EN-US&k=k(DATEDIFF_TSQL);k(SQL11.SWB.TSQLRESULTS.F1);k(SQL11.SWB.TSQLQUERY.F1);k(MISCELLANEOUSFILESPROJECT);k(DevLang-TSQL)&rd=true):
```
SELECT DATEDIFF(MILLISECOND, 0, '00:00:23.323')
```
Result:
```
23323
```
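As a sanity check of the arithmetic (plain Python, not T-SQL), the hours, minutes, seconds and milliseconds combine to the same number:

```python
from datetime import timedelta

# 00:00:23.323 expressed as a duration, then converted to milliseconds
d = timedelta(hours=0, minutes=0, seconds=23, milliseconds=323)
ms = round(d / timedelta(milliseconds=1))
```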
|
You can use the `DATEPART` function, like this:
```
select DATEPART(MILLISECOND,GETDATE())+DATEPART(second,getdate())*1000
```
|
SQL convert time to milliseconds
|
[
"",
"sql",
"sql-server",
"time",
"milliseconds",
""
] |
I'm trying to declare some variables using DBeaver and keep hitting this error.
```
Unterminated dollar-quoted string at or near "$$
DO $$
DECLARE A integer; B integer;
BEGIN
END$$;
```
Any ideas?
|
DBeaver was the issue. Switched to PGAdmin and no more problems.
|
As of DBeaver 6, you can execute the script with ALT-X (on Windows), which does not attempt to do variable capture/interpolation involving dollar signs.
|
Unterminated dollar-quoted string at or near "$$
|
[
"",
"sql",
"postgresql",
"dbeaver",
"dollar-quoting",
""
] |
I'm trying to use a SELECT statement to get the number of matches from my database for rows with more than 0 matches, but I also want only the maximum matches. So if there are matches of 1, 2, and 3, I just want the matches of 3. After I added the part `AND 'matches' = MAX( 'matches' )`, it stopped getting results, for some reason. It was getting results before I added it. I'm wondering what I did wrong, and how to fix it. Thanks.
```
SELECT input, response,
( input LIKE '% you %' ) +
( input LIKE '% are %' ) +
( input LIKE '% here %' )
AS 'matches'
FROM allData
HAVING `matches` > 0
AND 'matches' = MAX( 'matches' )
```
|
You tagged mysql in your question. This is a bit unfortunate, because the question can be answered more cleanly for almost any other relational database popular today than it can be for MySQL. Nevertheless, with MySQL you can do it like this:
```
SELECT
input,
response,
( input LIKE '% you %' ) +
( input LIKE '% are %' ) +
( input LIKE '% here %' )
AS matches
FROM allData
HAVING matches > 0
AND matches = (
SELECT MAX(
( input LIKE '% you %' ) +
( input LIKE '% are %' ) +
( input LIKE '% here %' ) )
FROM allData
)
```
|
Simply use `ORDER BY` with `LIMIT` to get the best record only:
```
SELECT input, response,
( input LIKE '% you %' ) +
( input LIKE '% are %' ) +
( input LIKE '% here %' )
AS matches
FROM allData
HAVING matches > 0
ORDER BY matches DESC LIMIT 1;
```
In case of a tie you would get only one random best record though. If you want all, then go with John Bollinger's answer.
SQL fiddle: <http://sqlfiddle.com/#!9/1ece1/3>.
---
If you want to count multiple matches, use some math: remove the search string from input and count how often its length was removed:
```
SELECT input, response,
((length(input) - length(replace(input, ' you ', ''))) / length(' you ')) +
((length(input) - length(replace(input, ' are ', ''))) / length(' are ')) +
((length(input) - length(replace(input, ' here ', ''))) / length(' here '))
AS matches
FROM allData
HAVING matches > 0
ORDER BY matches DESC LIMIT 1;
```
SQL fiddle: <http://sqlfiddle.com/#!9/f680b/3>.
And one more fiddle to show how it works: <http://sqlfiddle.com/#!9/f680b/1>.
---
Here is John's query with the same technique:
```
SELECT input, response,
((length(input) - length(replace(input, ' you ', ''))) / length(' you ')) +
((length(input) - length(replace(input, ' are ', ''))) / length(' are ')) +
((length(input) - length(replace(input, ' here ', ''))) / length(' here '))
AS matches
FROM allData
HAVING matches > 0
AND matches =
(
SELECT MAX(
((length(input) - length(replace(input, ' you ', ''))) / length(' you ')) +
((length(input) - length(replace(input, ' are ', ''))) / length(' are ')) +
((length(input) - length(replace(input, ' here ', ''))) / length(' here '))
)
FROM allData
);
```
SQL fiddle: <http://sqlfiddle.com/#!9/57098/1>.
---
As to ' you you you ' you are right; by removing ' you ' from the string I take with it a space needed to find the next ' you '. You can easily circumvent this though by replacing each space with two in `input` before doing the operation. And you may want to add a leading and a trailing space, too. (You may even want to apply `LOWER()` on the string, so as to find 'You' as well as 'you'.)
So replace
```
FROM allData
```
with
```
FROM
(
select
replace(concat(' ', input, ' '), ' ', '  ') as input,
response
from allData
) prepared
```
|
Getting the maximum matches from database using select statement
|
[
"",
"mysql",
"sql",
""
] |
I have a tiny table like this:
```
****************************
| Name | old_v | new_v *
****************************
|ITEM1 | new | passed *
|ITEM1 | failed | passed *
|ITEM2 | new | failed *
|ITEM1 | passed | failed *
|ITEM3 | new | blocked *
|ITEM2 | failed | failed *
|ITEM4 | new | new *
```
I tried to build a sql query which shows me all the items, that **never** had the status **passed**.
According to the above example, the output should be:
```
ITEM2
ITEM3
```
However, this query:
```
SELECT Name FROM table WHERE old_v NOT IN 'passed' AND new_v NOT IN 'passed'
```
Does not give me the correct result. Do you have any clue how to solve this?
|
Use `NOT EXISTS` to return a row if no row with same name has passed:
```
select distinct t1.name
from tablename t1
where not exists (select 1 from tablename t2
where t2.name = t1.name
and 'passed' in (t2.old_v, t2.new_v))
```
|
In your case where all data needed is in the same table and you just need the name, you can aggregate your data: group by name and remove unwanted records with `HAVING`:
```
select name
from mytable
group by name
having count(case when old_v = 'passed' then 1 end) = 0
and count(case when new_v = 'passed' then 1 end) = 0;
```
|
Exclude results in a sql query
|
[
"",
"sql",
""
] |
Here is the table
```
id name
----------
1 john
2 dave
3 eve
```
`select * from table` will give us
```
id name
----------
1 john
2 dave
3 eve
```
How will you query in order to get the below output?
```
id name
----------------------
1,2,3 john,dave,eve
```
|
You first need to increase the value of the variable `group_concat_max_len` so that data isn't truncated in `GROUP_CONCAT`. Check the variable's value with:
```
show variables like 'group_concat_max_len'
```
And increase it as:
```
set global group_concat_max_len = 160000;
```
After this use query:
```
SELECT GROUP_CONCAT(id), GROUP_CONCAT(name) FROM table_name
```
|
You could use the [`group_concat`](https://dev.mysql.com/doc/refman/5.6/en/group-by-functions.html#function_group-concat) function:
```
SELECT GROUP_CONCAT(id), GROUP_CONCAT(name)
FROM mytable
```
|
How to select columns as CSV in MySQL?
|
[
"",
"mysql",
"sql",
"database",
"csv",
"select",
""
] |
This MySQL SQL statement conveys what I'm attempting to do, but isn't valid:
```
SELECT * FROM
(SELECT DISTINCT Thing FROM BigList) AS SomeThing
LEFT JOIN thingDetail ON SomeThing = thingDetail.Thing
-- other left joins follow
```
The problem is that in the ON clause, the alias isn't allowed. And, removing the alias from the subquery produces an error that the alias is required. Very Catch-22.
I've found lots of examples online where the JOIN-part is a subquery and aliases do work there. But I haven't found something that does the above.
If it's in the MySQL manual on Subqueries, I'm missing it. What syntax should I be using to do this?
|
You forgot to add the column name `thing` in `ON` clause. It should be like
```
SELECT * FROM
(SELECT DISTINCT Thing FROM BigList) AS SomeThing
LEFT JOIN thingDetail ON SomeThing.Thing = thingDetail.Thing
```
|
```
SELECT * FROM
(SELECT DISTINCT Thing FROM BigList) AS SomeThing
LEFT JOIN thingDetail ON SomeThing.Thing = thingDetail.Thing
-- other left joins follow
```
You were referring to the `table` alias instead of the `column` in the table.
|
Using an Alias on a Subquery and then doing a LEFT JOIN with it in the ON clause
|
[
"",
"mysql",
"sql",
"left-join",
""
] |
So, I have an Oracle database (in APEX) in which I have a column called "Scheme".
Like seen below:
```
CREATE TABLE Scheme
(
Scheme_ID NUMBER NOT Null,
description VARCHAR2 (800) ,
price FLOAT (3) ,
amount_sold INTEGER ,
Personal_Trainer_ID NUMBER NOT NULL
) ;
ALTER TABLE Schema ADD CONSTRAINT Schema_PK PRIMARY KEY ( Schema_ID ) ;
```
Now, all my tables are set-up like this and perfectly working, but when I try an insert on my Scheme, it says I'm trying to insert null into the primary key scheme\_ID.
I'll show you 2 SQL inserts I use. One for a `Personal_Trainer`, and one for the Scheme.
```
INSERT INTO Personal_Trainer (name, loginname, date_of_birth, password)
VALUES('Bojan', 'Bojan', '15-07-1974','fitline');
```
`Personal_Trainer` has a `Personal_Trainer_ID` as primary key, set up exactly like the Scheme. This insert works perfectly fine.
```
insert into schema (description, price, amount_sold, Personal_Trainer_ID)
values ('3x pushups, 5x bench, 7x squats - 15kg',200, 1, 2);
```
Now, when I try to insert this command I get this error message:
ORA-01400: cannot insert NULL into ("SCHEME"."SCHEME\_ID")
**EDIT**
```
CREATE TABLE Personal_Trainer
(
Personal_Trainer_ID NUMBER NOT NULL,
name VARCHAR2 (35) ,
date_of_birth DATE ,
loginname VARCHAR2 (35) ,
password VARCHAR2 (35)
) ;
ALTER TABLE Personal_Trainer ADD CONSTRAINT Personal_Trainer_PK PRIMARY KEY ( Personal_Trainer_ID ) ;
```
This is my table of Personal Trainer.
|
Oh, I found it: the NOT NULL constraint still works fine, but my column is actually called 'Schema' (another language) and apparently this has a meaning in Oracle. So it was never altered as a primary key. Renaming the table should work.
Thanks for the fast responses and help!
**DON'T USE RESERVED TERMS FOR YOUR TABLE NAMES!**
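If you are unsure whether a name is reserved, Oracle exposes its keyword list in a dictionary view; a quick check (assuming you have privileges to query it) is:

```
SELECT keyword, reserved
FROM v$reserved_words
WHERE keyword = 'SCHEMA';
```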
|
You need to provide a unique, not null, value for `schemeid`.
Oracle 12c allows you to elegantly define this column as an identity column:
```
CREATE TABLE Scheme
(
  Scheme_ID NUMBER GENERATED BY DEFAULT AS IDENTITY,
description VARCHAR2 (800) ,
price FLOAT (3) ,
amount_sold INTEGER ,
Personal_Trainer_ID NUMBER NOT NULL
);
```
In earlier Oracle versions this option isn't available, unfortunately. The idiomatic solution would be to declare a sequence:
```
CREATE SEQUENCE scheme_id_seq;
```
And either use it directly:
```
INSERT INTO schema
(scheme_id, description, price, amount_sold, Personal_Trainer_ID)
VALUES
(scheme_id_seq.nextval, '3x pushups, 5x bench, 7x squats - 15kg',200, 1, 2);
```
Or create a trigger to fill it in automatically:
```
CREATE OR REPLACE TRIGGER schema_insert_tr
BEFORE INSERT ON schema
FOR EACH ROW
BEGIN
IF :new.scheme_id IS NULL THEN
SELECT scheme_id_seq.nextval INTO :new.scheme_id FROM DUAL;
END IF;
END;
```
|
SQL/Oracle - "Cannot insert null into primary key"
|
[
"",
"sql",
"oracle",
"null",
"primary-key",
"oracle-apex",
""
] |
If I have a MySQL table such as:
[](https://i.stack.imgur.com/bcssV.png)
I want to use SQL to calculate the sum of the `PositiveResult` column and also the `NegativeResult` column. Normally I could simply do `SUM(PositiveResult)` in a query.
But what if I wanted to go a step further and place the totals in a row at the bottom of the result set:
[](https://i.stack.imgur.com/RLJlB.png)
Can this be achieved at the data level or is it a presentation layer issue? If it can be done by SQL, how might I do this? I am a bit of an SQL newbie.
Thanks to the respondents. I will now check things with the customer.
Also, can a text column be added so that the value of the last row of data is not shown in the summary row? Like this:
[](https://i.stack.imgur.com/38To8.png)
|
I would also do this in the presentation layer, but you *can* do it MySQL...
```
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
,pos DECIMAL(5,2)
,neg DECIMAL(5,2)
);
INSERT INTO my_table VALUES
(1,0,0),
(2,1,-2.5),
(3,1.6,-1),
(4,1,-2);
SELECT COALESCE(id,'total') my_id,SUM(pos),SUM(neg) FROM my_table GROUP BY id WITH ROLLUP;
+-------+----------+----------+
| my_id | SUM(pos) | SUM(neg) |
+-------+----------+----------+
| 1 | 0.00 | 0.00 |
| 2 | 1.00 | -2.50 |
| 3 | 1.60 | -1.00 |
| 4 | 1.00 | -2.00 |
| total | 3.60 | -5.50 |
+-------+----------+----------+
5 rows in set (0.02 sec)
```
Here's a hack for the amended problem - it ain't pretty but I think it works...
```
SELECT COALESCE(id,'') my_id
, SUM(pos)
, SUM(neg)
, COALESCE(string,'') n
FROM my_table
GROUP
BY id
, string
WITH ROLLUP
HAVING n <> '' OR my_id = ''
;
```
|
```
select Keyword, sum(PositiveResult) + sum(NegativeResult)
from mytable
group by Keyword
```
If you need the absolute value, use `sum(abs(NegativeResult))`.
|
SQL to add a summary row to MySQL result set
|
[
"",
"mysql",
"sql",
"resultset",
"summary",
""
] |
What I'm trying to do is to find all matching IDs from two tables then combine multiple rows into one cell in the original table.
So I have these two tables.
Table 1
```
+-----+-------+
| id | BS_ID |
+-----+-------+
| 999 | 12345 |
| 977 | 12347 |
| 955 | 12349 |
| 933 | 12351 |
+-----+-------+
```
Table 2
```
+-----+-------+------------+
| id | BS_ID | callstatus |
+-----+-------+------------+
| 999 | 12345 | noanswer |
| 999 | 12345 | contacted |
| 977 | 12347 | noanswer |
| 955 | 12349 | noanswer |
| 933 | 12351 | noanswer |
| 933 | 12351 | contacted |
+-----+-------+------------+
```
What I want to happen is find all matching rows in table 2 based on the id in table 1. Then copy the "callstatus" in all the matching rows and put it in one cell in table 1 like the one below:
```
+-----+-------+---------------------+
| id | BS_ID | callstatus |
+-----+-------+---------------------+
| 999 | 12345 | noanswer, contacted |
| 977 | 12347 | noanswer |
| 955 | 12349 | noanswer |
| 933 | 12351 | noanswer, contacted |
```
I have so far figured out how to count the instances in table 2 but I am stumped on how to copy the "callstatus" into that one cell in table 1.
```
SELECT table1.*
, (SELECT COUNT(*)
FROM table2
WHERE table2.id = table1.id) AS TOT
FROM table1
```
|
Here you are:
(But keep in mind: You have no sort order, so the callstatus could appear in random order. And - if you don't need this just for output - think about the possibility to put this in an XML column)
```
DECLARE @Table1 TABLE(id INT,BS_ID INT);
INSERT INTO @Table1 VALUES
(999,12345)
,(977,12347)
,(955,12349)
,(933,12351);
DECLARE @Table2 TABLE(id INT,BS_ID INT,callstatus VARCHAR(100));
INSERT INTO @Table2 VALUES
(999,12345,'noanswer')
,(999,12345,'contacted')
,(977,12347,'noanswer')
,(955,12349,'noanswer')
,(933,12351,'noanswer')
,(933,12351,'contacted');
SELECT DISTINCT tbl1.id
,tbl1.BS_ID
,STUFF(
(
SELECT ', ' + tbl2.callstatus
FROM @Table2 AS tbl2
WHERE tbl1.id = tbl2.id AND tbl1.BS_ID=tbl2.BS_ID
FOR XML PATH('')
), 1, 2, '') AS StatusList
FROM @Table1 AS tbl1
```
|
```
SELECT t1.id,t1.id_bsid,substring(statussummary1,1,len(statussummary1)-1)
from table1 t1
inner join table2 as t21 on t1.id=t21.id and t1.id_bsid=t21.id_bsid
cross apply
(
SELECT statussummary + ','
FROM table2 AS t2
WHERE t2.id = t21.id and t2.id_bsid = t21.id_bsid
FOR XML PATH('')
)grping(statussummary1)
group by t1.id,t1.id_bsid,statussummary1
```
Fiddle here: <http://sqlfiddle.com/#!3/18fb53>
|
Copy all matching rows into one cell in another table in SQL Server 2008
|
[
"",
"sql",
"sql-server",
""
] |
I'm new to Stack so please go easy on me. I've looked all over the web and can't find anything that really helps me.
So I need to provide details of all regular academics working in the Computing Department who were
over 60 years old as of 31/12/2014.
My trouble is how to approach showing data for someone 60+. Could you subtract one date from another date? Or is there an SQL command that I am missing?
my attempt:
```
SELECT *
FROM staff, department
WHERE DOB <= '31/12/1964'
AND staff.department_ID = department.department_ID
```
|
There are functions to calculate the difference between dates, but the most efficient is to first calculate the date that a person would be born to be 60 at 2014-12-31. That way you make a direct comparison to a value, so the database can make use of an index if there is one.
Example for Oracle:
```
select
PersonId, FirstName, LastName
from
Person
where
Born <= add_months(date '2014-12-31', -60 * 12)
```
Example for MySQL (eventhough you removed the MySQL tag):
```
select
PersonId, FirstName, LastName
from
Person
where
  Born <= date_sub('2014-12-31', interval 60 year)
```
|
I think in SQL Server:
```
SELECT DATEDIFF(DAY, '05-19-2015', '05-21-2015')
```
In MySQL:
```
SELECT TIMESTAMPDIFF(HOUR, start_time, end_time) AS difference
FROM timeattendance
WHERE timeattendance_id = '1484'
```
|
SQL Query on find individuals that age is over 60 as of a specific date
|
[
"",
"sql",
"oracle11g",
""
] |
I'm not sure how to improve the performance of this query. It takes over 100 seconds. I've added indexes and experimented with sub-queries but nothing seems to improve performance.
The Query
```
SELECT
GiftVoucher.VoucherNumber,
GiftVoucher.DateIssued,
GiftVoucher.DateRedeemed,
R.old_name as RedeemedBy,
I.old_name as IssuedBy,
RH.Name as RedeemedForHotel,
V.old_name as VoidedBy,
GiftVoucher.VoidedReplacment,
GiftVoucher.VoidedDescription
FROM GiftVoucher
LEFT JOIN StaffToWp R ON GiftVoucher.RedeemedBy=R.old_id
LEFT JOIN StaffToWp I ON GiftVoucher.IssuedBy=I.old_id
LEFT JOIN StaffToWp V ON GiftVoucher.VoidedBy=V.old_id
LEFT JOIN Hotel RH ON GiftVoucher.RedeemedForHotelID=RH.HotelID
WHERE DateIssued > "2011-12-31 23:59:59"
LIMIT 0, 20000
```
GiftVoucher Structure
```
GiftVoucher
Column Type Null Default Comments
GiftVoucherID int(11) No
ParentGiftVoucherID int(11) Yes NULL
Value decimal(19,4) No
VoucherNumber varchar(150) Yes NULL
SendToRecipientAddress int(11) No
DateIssued datetime No
DateRedeemed datetime Yes NULL
GiftVoucherPurchaseID int(11) No
RedeemedBy int(11) Yes NULL
IssuedBy int(11) Yes NULL
Active int(11) No
RedeemedForHotelID int(11) Yes NULL
RedeemedTo int(11) Yes NULL
Redeemed int(1) No 0
RedeemedAmount decimal(19,4) Yes NULL
Voided int(1) No 0
VoidedDate datetime Yes NULL
VoidedBy int(11) Yes NULL
VoidedReplacment int(11) Yes NULL
VoidedDescription mediumtext Yes NULL
SystemVersion int(11) No
Indexes
Keyname Type Unique Packed Column Cardinality Collation Null Comment
PRIMARY BTREE Yes No GiftVoucherID 23191 A No
VoidedBy BTREE No No VoidedBy 2 A Yes
RedeemedBy BTREE No No RedeemedBy 244 A Yes
IssuedBy BTREE No No IssuedBy 212 A Yes
DateIssued BTREE No No DateIssued 23191 A No
RedeemedForHotelID BTREE No No RedeemedForHotelID 10 A Yes
```
StaffToWP Structure
```
StaffToWp
Column Type Null Default Comments
id int(11) No
old_id int(11) No
old_name varchar(255) No
new_id int(11) No
new_name varchar(255) No
Indexes
Keyname Type Unique Packed Column Cardinality Collation Null Comment
PRIMARY BTREE Yes No id 121 A No
old_id BTREE No No old_id 121 A No
Space usage:
Data 4,524 B
Index 7,168 B
Total 11,692 B
Row Statistics:
Format dynamic
Rows 121
Row length ø 37
Row size ø 97 B
Next autoindex 122
Creation Sep 16, 2015 at 12:01 PM
Last update Sep 16, 2015 at 12:01 PM
Last check Sep 16, 2015 at 12:01 PM
```
Hotel Strcture
```
Hotel
Column Type Null Default Comments
HotelID int(11) No
Name varchar(250) No
Telephone varchar(50) No
AccommodationUrl varchar(250) No ''
ColourClass varchar(50) Yes NULL
Indexes
Keyname Type Unique Packed Column Cardinality Collation Null Comment
PRIMARY BTREE Yes No HotelID 7 A No
HotelID BTREE No No HotelID 7 A No
```
EXPLAIN results
```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE GiftVoucher ALL DateIssued NULL NULL NULL 22180 Using where
1 SIMPLE R ref old_id old_id 4 milsomho_voucher.GiftVoucher.RedeemedBy 1
1 SIMPLE I ref old_id old_id 4 milsomho_voucher.GiftVoucher.IssuedBy 1
1 SIMPLE V ref old_id old_id 4 milsomho_voucher.GiftVoucher.VoidedBy 1
1 SIMPLE RH eq_ref PRIMARY,HotelID PRIMARY 4 milsomho_voucher.GiftVoucher.RedeemedForHotelID 1
```
|
Kudos for actually including the table structures and explain plan!
> DateIssued > "2011-12-31 23:59:59"
Presumably this filter would significantly reduce the number of rows returned by the query. But you'll see that the DBMS is not using the index on DateIssued:
```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE GiftVoucher ALL DateIssued NULL NULL NULL 22180 Using where
```
The likely reason for this is the mismatch in types which is forcing MySQL to do a type conversion on each row in the source table.
Try:
```
DateIssued > 20111231235959
```
You also might consider other ways to be more specific about the data you are extracting (more filtering) and denormalizing the data to reduce the number of tables LEFT joined.
The cardinality figures look rather low - is this a test data set? Are they out of date?
|
The table has about 22K rows? Yet you are asking for 20K? Sounds like the `LIMIT` is useless; why have it?
That date is 4 years ago... Does it include most of the `GiftVoucher`? If so, an index on `DateIssued` probably won't be used. This is because it may be more efficient to scan the table rather than bouncing between the index and the data.
Composite index? None would help. Only one column, `DateIssued`, is referenced in `WHERE`, `GROUP BY`, and `ORDER BY`.
Is `old_id` indexed in the other tables? It seems to be ("ref"), but it does not seem to be the `PRIMARY KEY`.
You have a `LIMIT` without an `ORDER BY`. So you don't care which 20K rows you get?
You have not provided `SHOW CREATE TABLE`; I assume the Engine is InnoDB?
`DateIssued > "2011-12-31 23:59:59"` is fine for comparing a `DATETIME`; no need to use a different syntax.
Shrinking the table sizes would help a tiny bit... You have lots of `INTs` (signed, 4-byte), where you could probably use `SMALLINT UNSIGNED` (2-bytes, range 0..65535). Or `MEDIUMINT UNSIGNED`.
One thing that might help a bit... A "covering index":
```
INDEX(old_id, old_name)
```
on `StaffToWp`. This would make it a bit more efficient to look up the `old_id` to get the `old_name`, which seems to be the purpose of 3 of the `LEFT JOINs`.
|
Optimize slow MySQL query with lots of joins
|
[
"",
"mysql",
"sql",
"database",
"performance",
"optimization",
""
] |
I have a SQL problem which seem simple but I can't find the solution...
I have 3 tables :
1) "user" (id, lastname, firstname)
2) "user\_x\_group" (id\_user, id\_group)
3) "group" (id, name)
A "user" can have many "group".
What is the query to get all the users in the group 1 and the group 2 at the same time ?
```
SELECT *
FROM user u
JOIN user_x_group x ON x.id_user = u.id
WHERE id_group IN ('1', '2')
```
is not correct because I get all users in group 1 + all users in group 2 + all users in group 1&2. I just need all users in group 1&2 in one query.
How to do that ?
|
Do a `GROUP BY` and require (`HAVING`) that more than one distinct id_group is found:
```
SELECT u.*
FROM user u
JOIN user_x_group x ON x.id_user = u.id
WHERE id_group IN ('1', '2')
group by u.id
having count(distinct id_group) >= 2
```
You can easily adjust to 3 or more id\_groups if needed.
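For example, for three groups the same pattern becomes (with a hypothetical third group id `'3'`):

```
SELECT u.*
FROM user u
JOIN user_x_group x ON x.id_user = u.id
WHERE id_group IN ('1', '2', '3')
GROUP BY u.id
HAVING count(distinct id_group) = 3
```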
Alternative `HAVING` clause, for exactly 2 groups:
```
having max(id_group) <> min(id_group)
```
|
```
SELECT *
FROM user u
JOIN user_x_group x ON x.id_user = u.id and x.id_group = 1
JOIN user_x_group x1 ON x.id_user = x1.id_user and x1.id_group = 2
```
You can `join` twice once for each `id_group` condition to get all users in both the groups.
|
SQL : get all users in many specific group
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to use a different column in my `WHERE` clause depending on the date being passed in. So for example, I'd like to do something like:
```
SELECT *
FROM TABLE
WHERE ACTIVE_FLAG = 1
AND
CASE WHEN @param < '1/1/2015' THEN
COLUMN_1 = 'Warehouse'
ELSE
COLUMN_2 = 'Warehouse, CA'
```
As of now I'm handling it like this:
```
IF @param < '1/1/2015' THEN
SELECT * FROM TABLE WHERE ACTIVE_FLAG = 1 AND COLUMN_1 = 'Warehouse'
ELSE
SELECT * FROM TABLE WHERE ACTIVE_FLAG = 1 AND COLUMN_2 = 'Warehouse, CA'
```
But I assume there has to be a better solution than maintaining the same query twice?
Thanks for the input!
|
You can use simple `AND`, `OR` operations to get what you want:
```
SELECT *
FROM TABLE
WHERE ACTIVE_FLAG = 1
AND (
(@param < '1/1/2015' AND COLUMN_1 = 'Warehouse')
OR
(@param >= '1/1/2015' AND COLUMN_2 = 'Warehouse, CA')
)
```
If `@param < '1/1/2015'`, then the `WHERE` clause becomes:
```
ACTIVE_FLAG = 1 AND COLUMN_1 = 'Warehouse'
```
otherwise, in case when `@param >= '1/1/2015'`, the `WHERE` clause becomes:
```
ACTIVE_FLAG = 1 AND COLUMN_2 = 'Warehouse, CA'
```
|
You can express this as:
```
WHERE ACTIVE_FLAG = 1 AND
((@param < '1/1/2015' AND COLUMN_1 = 'Warehouse') OR
(@param >= '1/1/2015' AND COLUMN_1 = 'Warehouse, CA')
);
```
(Perhaps with an additional check for `NULL` `@param` values.)
Note on performance, however: when you put this in one query, you run the risk of SQL Server choosing a worse execution plan. So, if you have an index on `(ACTIVE_FLAG, COLUMN_1)`, then using two queries or dynamic SQL might produce a better execution plan.
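A dynamic SQL variant, sketched here with `sp_executesql` (an untested sketch, assuming `@param` is a DATE), would let each branch get its own execution plan:

```
DECLARE @sql nvarchar(max) =
    N'SELECT * FROM TABLE WHERE ACTIVE_FLAG = 1 AND ' +
    CASE WHEN @param < '20150101'
         THEN N'COLUMN_1 = ''Warehouse'''
         ELSE N'COLUMN_2 = ''Warehouse, CA''' END;

EXEC sp_executesql @sql;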
|
CASE in WHERE clause? Filter query by different column depending a value SQL Server 2008 R2
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have table `TableLL` which stores longitude and latitude, first 10 row as shown below:
```
Latitude Longitude ID
---------------------------
52.239215 -0.927128 1
52.627201 -1.701828 2
53.413624 -2.151294 3
52.402537 -1.519893 4
52.991135 -2.411111 5
53.409981 -2.598566 6
52.596913 -2.090278 7
52.041275 -0.777819 8
53.196655 -2.909875 9
52.945638 -1.13321 10
```
and I can count rows in different table (`MainTable`) for one latitude and one longitude from `tableLL` by using this query :
```
SELECT
Month, Crime_type, count(*) as TOTAL
FROM
MainTable
WHERE
ABS(MainTableLongitude - (SELECT Longitude
FROM TableLL
WHERE ID = 1)) < 0.014347
AND ABS(MainTableLongitude - (SELECT Latitude
FROM TableLL
WHERE ID = 1)) < 0.023033
GROUP BY
Crime_type,MONTH
```
and result is:
```
Month Crime_type TOTAL
-----------------------------
2015-04 ASB 326
2015-04 Burglary 44
2015-04 CDA 50
2015-05 ASB 126
2015-05 Burglary 21
2015-05 CDA 3
2015-06 ASB 14
2015-06 Burglary 7
2015-06 CDA 58
```
But I want to count results by using all latitude and longitude in `TableLL` (I have to compare every row from `TableLL` with `MainTable`'s values in the `WHERE` clause). Any ideas?
|
In @Serif's answer, try using an `INNER JOIN` instead of a `WHERE` clause:
```
SELECT Month, Crime_type, COUNT(*) as TOTAL
FROM MainTable m
INNER JOIN TableLL t
on ABS(m.MainTableLongitude - t.Longitude) < 0.014347
and ABS(m.MainTableLongitude - t.Latitude) < 0.023033
GROUP BY Crime_type,MONTH
```
Also, you should consider using a [spatial query](https://msdn.microsoft.com/en-us/library/bb933790.aspx) in SQL Server instead. You could create a spatial index to improve overall performance.
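A rough sketch of the spatial approach (assuming SRID 4326, a hypothetical `MainTableLatitude` column on the main table, and a placeholder distance threshold of 1500 meters):

```
SELECT Month, Crime_type, COUNT(*) AS TOTAL
FROM MainTable m
INNER JOIN TableLL t
  -- STDistance on geography points returns meters for SRID 4326
  ON geography::Point(t.Latitude, t.Longitude, 4326)
       .STDistance(geography::Point(m.MainTableLatitude, m.MainTableLongitude, 4326)) < 1500
GROUP BY Crime_type, Month
```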
|
Something like this?
```
SELECT Month,Crime_type,count(*) as TOTAL
FROM MainTable, TableLL
WHERE ABS(MainTableLongitude - Longitude) < 0.014347
AND ABS(MainTableLongitude - Latitude)< 0.023033
GROUP BY Crime_type,MONTH
```
To make the results more meaningful, you need to add some columns from TableLL to the SELECT and GROUP BY parts.
|
Using every subqueries on where clause to count in SQL Server
|
[
"",
"sql",
"sql-server",
"subquery",
"where-clause",
""
] |
The below SQL is returning 'Cannot perform an aggregate function on an expression containing an aggregate or a subquery.', can anyone help?
```
SELECT
sum(case when Frequency = 'Monthly' then ISNULL(SUM(Amount),0.0) else 0 end) +
sum(case when Frequency = '4 Weekly' then ISNULL(SUM(Amount),0.0) / 2 else 0 end) +
sum(case when Frequency = 'Fortnightly' then ISNULL(SUM(Amount),0.0) / 3 else 0 end) +
sum(case when Frequency = 'Weekly' then ISNULL(SUM(Amount),0.0) / 5 else 0 end)
FROM Table
WHERE Id = 1
```
|
If you want conditional aggregation, you only want one `sum()`:
```
SELECT sum(case when Frequency = 'Monthly' then Amount else 0 end) +
sum(case when Frequency = '4 Weekly' then Amount / 2 else 0 end) +
sum(case when Frequency = 'Fortnightly' then Amount / 3 else 0 end) +
       sum(case when Frequency = 'Weekly' then Amount / 5 else 0 end)
FROM Table
WHERE Id = 1;
```
|
I think you want to do something like
```
sum(case when Frequency = 'Monthly' then ISNULL(Amount,0.0) else 0 end)
```
|
Sum case statement
|
[
"",
"sql",
"sql-server",
""
] |
Is possible to use case expression inside of DATEADD interval parameter?
```
select DATEADD(case c1 when 1 then HOUR when 2 then DAY end, c2, date) from T
```
**Update1**: Sorry, I want to use it in the WHERE clause:
```
select * from T where DATEADD(case c1 when 1 then HOUR when 2 then DAY end, c2, date) < GETDATE()
```
Maybe there is another alternative.
Thanks in advance,
|
Try below..
```
select * from T
where case c1 when 1 then DATEADD(HOUR, c2, date)
when 2 then DATEADD(DAY, c2, date)
end < Getdate()
```
|
No, you cannot parameterize the `datepart` parameter of [`DATEADD`](https://msdn.microsoft.com/en-us/library/ms186819.aspx):
> The following table lists all valid *datepart* arguments. User-defined variable equivalents are not valid.
You'll have to use two different `DATEADD` expressions, or change your logic:
```
select DATEADD(hour, c2 * case c1 when 1 then 1 when 2 then 24 end, date) from T
```
|
Using case expression in DATEADD interval
|
[
"",
"sql",
"sql-server",
"case",
"dateadd",
""
] |
I used to name my parameters in my SQL query when preparing it for practical reasons like in php with PDO.
So can I use named parameters with node-postgres module?
For now, I saw many examples and docs on internet showing queries like so:
```
client.query("SELECT * FROM foo WHERE id = $1 AND color = $2", [22, 'blue']);
```
But is this also correct?
```
client.query("SELECT * FROM foo WHERE id = :id AND color = :color", {id: 22, color: 'blue'});
```
or this
```
client.query("SELECT * FROM foo WHERE id = ? AND color = ?", [22, 'blue']);
```
I'm asking this because of the numbered parameter `$n` that doesn't help me in the case of queries built dynamically.
|
There is a [library](https://www.npmjs.com/package/yesql) for what you are trying to do. Here's how:
```
var sql = require('yesql').pg
client.query(sql("SELECT * FROM foo WHERE id = :id AND color = :color")({id: 22, color: 'blue'}));
```
|
QueryConvert to the rescue. It will take a parameterized sql string and an object and converts it to pg conforming query config.
```
type QueryReducerArray = [string, any[], number];
export function queryConvert(parameterizedSql: string, params: Record<string, any>) {
const [text, values] = Object.entries(params).reduce(
([sql, array, index], [key, value]) => [sql.replace(`:${key}`, `$${index}`), [...array, value], index + 1] as QueryReducerArray,
[parameterizedSql, [], 1] as QueryReducerArray
);
return { text, values };
}
```
Usage would be as follows:
```
client.query(queryConvert("SELECT * FROM foo WHERE id = :id AND color = :color", {id: 22, color: 'blue'}));
```
|
Node-postgres: named parameters query (nodejs)
|
[
"",
"sql",
"node.js",
"prepared-statement",
"node-postgres",
""
] |
Given a date D, is there a way to find out the current active subscription or the most recent expired subscription for each user.
Here's the table structure
```
user_subscription(
user_subscription_id PRIMARY KEY,
user_id,
subscription_start_date,
subscription_end_date,
user_subscription_count
)
```
Sample Data:
```
1 1 01-jan-2011 31-jan-2011 1
2 1 01-mar-2011 01-apr-2011 2
3 1 03-jun-2011 05-dec-2011 3
4 2 05-jan-2011 11-jan-2011 1
5 2 01-jun-2011 01-nov-2011 2
```
Example result for:
* `D = 15-jan-2011` would be row 1 for user 1 and row 4 for user 2.
I'm struggling to figure out a way to do this in SQL.
Currently I do a separate query for each user, but it's expensive to do it for thousands of users.
Any help/ideas are really appreciated!
Thanks!
|
Fiddle here : <http://sqlfiddle.com/#!3/95688>
```
DECLARE @D AS DATE='15-jan-2011'
;WITH CTE
as
( SELECT *
FROM user_subscription
WHERE (@D BETWEEN subscription_start_date and subscription_end_date) OR subscription_end_date <= @D
),
CTE1
AS
(
SELECT ROW_NUMBER() OVER(PARTITION BY USER_ID ORDER BY subscription_end_date DESC) AS RN,*
FROM CTE
)
SELECT * FROM CTE1 WHERE RN=1
```
|
You can use `ROW_NUMBER`:
```
DECLARE @d DATE = '20110115'
;WITH Cte AS(
    SELECT *,
        rn = ROW_NUMBER() OVER(PARTITION BY user_id ORDER BY subscription_start_date DESC)
    FROM user_subscription
    WHERE
        subscription_start_date <= @d
)
SELECT
    user_subscription_id, user_id, subscription_start_date, subscription_end_date
FROM Cte
WHERE rn = 1
|
SQL query to find either current active date range, or last passed date range
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I have the following query. I simplified it for demo purpose. I am using SQL Server - t-sql
```
Select tm.LocID = (select LocID from tblLoc tl
where tl.LocID = tm.LodID )
from tblMain tm
```
If the subquery returns multiple records, I'd like to set tm.LocID to null; otherwise, if there is only one record returned, assign it to tm.LocID. I am looking for a simple way to do this. Any help would be appreciated.
One way I can see is to use a CASE statement and check if `COUNT(*) > 1` then assign null, else return the value, but that would require a select statement within a select statement.
|
You have the right idea about using a `case` expression for `count(*)`, but it will not require another subquery:
```
SELECT tm.LocID = (SELECT CASE COUNT(*) WHEN 1 THEN MAX(LocID) END
FROM tblLoc tl
WHERE tl.LocID = tm.LodID )
FROM tblMain tm
```
|
Your query above (and many of the other answers here) is a *correlated* subquery which will be very slow since it performs a separate aggregation query on each record. This following will address both your problem and *potentially* perform a bit better since the count happens in a single pass.
```
SELECT
CASE
WHEN x.locid IS NOT NULL THEN x.locid
ELSE NULL
END
FROM tblMain m
LEFT JOIN (
SELECT
locid
FROM tblLoc
GROUP BY locid
HAVING COUNT(1) = 1
) x
ON x.locid = m.locid
;
```
The above is in Postgres syntax (what I'm familiar with) so you would have to make it TSQL compatible.
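One way to render it in T-SQL (a sketch, also dropping the redundant `CASE`, since `x.locid` is already NULL when the LEFT JOIN finds no row):

```
SELECT x.locid
FROM tblMain m
LEFT JOIN (
    SELECT locid
    FROM tblLoc
    GROUP BY locid
    HAVING COUNT(1) = 1   -- keep only ids that occur exactly once
) x
  ON x.locid = m.locid;
```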
|
Assign null if subquery retrieves multiple records. How can it be done?
|
[
"",
"sql",
"sql-server",
""
] |
Could you please advise how to find the number of rows in the table which have 1 or more missing values? The missing values are represented in my table by `question marks = '?'`. The table has `15 columns and ~50k rows`. When I run the following query for some of the columns I can receive some results:
```
SELECT
COUNT(*)
FROM table_name
WHERE column_name ='?'
```
However, for some columns I instead get the error: `"Error converting data type varchar to float"`.
I would like to be able to find the number of rows in the table which have 1 or more missing values using one query, not run separately for each column.
Thank you in advance for your support!
|
```
Select Count(*)
From mySchema.myTable
Where Cast(Col1 As NVarChar(128)) +
Cast(Col2 As NVarChar(128)) +
Cast(Coln As NVarChar(128)) Like '%?%'
```
It's ugly and **WILL** be slow *and* you may need to modify the Casts accordingly, but should do the trick.
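For what it's worth, here is the same concatenate-and-search idea sketched in SQLite from Python (illustrative table; note the `COALESCE` guards, since a NULL column would otherwise turn the whole concatenation NULL — the T-SQL version above needs the same guard):

```python
import sqlite3

# Three columns of mixed types; rows 2 and 3 each contain a '?' marker.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 TEXT, col2 REAL, col3 TEXT);
INSERT INTO t VALUES ('a', 1.5, 'b'),
                     ('?', 2.0, 'c'),
                     ('d', 3.0, '?');
""")

# SQLite concatenates with || instead of T-SQL's +.
(missing,) = conn.execute("""
    SELECT COUNT(*)
      FROM t
     WHERE COALESCE(CAST(col1 AS TEXT), '') ||
           COALESCE(CAST(col2 AS TEXT), '') ||
           COALESCE(CAST(col3 AS TEXT), '') LIKE '%?%'
""").fetchone()
print(missing)   # 2
```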
|
Try the following query. Just set the table name and it will pick up all the columns.
You can also set `value_to_match` — `'?'` in your case, or any other value you want.
```
DECLARE @table_name nvarchar(max) = 'table_name'
DECLARE @value_to_match nvarchar(max) = '1'
DECLARE @query nvarchar(max) = ''
DECLARE @Condition nvarchar(max) = ' OR ' -- 1 OR when you want to count row if any column has that value -- 2 when you want all all columns to have same value
SELECT @query = @query + ' cast(' + COLUMN_NAME + ' as nvarchar(500)) = ''' + @value_to_match + '''' + @Condition FROM informatioN_schema.columns WHERE table_name = @table_name
if @@rowcount = 0
BEGIN
SELECT 'Table doesn''t Exists'
RETURN
END
SELECT @query = LEFT(@query,LEN(@query)-3)
PRINT ('select count(9) FROM ' + @table_name + ' WHERE ' + @query)
EXEC ('select count(9) FROM ' + @table_name + ' WHERE ' + @query)
```
|
Counting rows in the table which have 1 or more missing values
|
[
"",
"sql",
"sql-server",
""
] |
I am using SQL Developer. I need to export a SQL result set to an Excel sheet. I know how to export it manually.
Currently I am using a batch file, which in turn runs multiple SQL script files. At the end there is one SQL script which contains multiple SELECT statements. Now I need to export these results to Excel while running the batch file itself.
Batch file name: Mytest.sql. It contains multiple script files as below.
```
@@test1.sql;
@@test2.sql;
@@test3.sql;
@@test4.sql;
```
The last script, test4.sql, contains multiple select statements which need to be exported into multiple Excel files. Please suggest any solution.
|
Months ago I found this solution: a ready-to-use package to unload query results into an `xlsx` file, with formatting as well. Here is the link; it also includes a description of how to use the package.
[pl/sql package to unload as xlsx](https://technology.amis.nl/wp-content/uploads/2011/02/as_xlsx11.txt)
Hope this helps.
|
Do you need a specific xls file, or would a csv file do?
If you want a csv, you can spool a file with SQL.
For xls files you can't do it easily; you'll probably have to go through another programming language like Java or C# with a specific library to build your report (e.g. Apache POI for Java).
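A minimal sketch of the spool approach, assuming a reasonably recent SQL*Plus/SQL Developer script engine (`set markup csv on` appeared in SQL*Plus 12.2; older versions typically fall back to `set colsep ','`; the file name is illustrative):

```
set markup csv on
spool results_test4.csv
@@test4.sql
spool off
```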
|
How to generate SQL results to Excel sheet (Oracle)
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I am not good with `mysql`, so I need to know how to find a result from two tables.
Let's say I have two tables:
**Brand table:**
```
Id Email Password
-- ----- --------
1 kris@gmail.com 12345678
```
**Facebook table:**
```
Id Email Password
-- ----- --------
1 calbares@gmail.com 12345678
```
Now I want to look up the email address `calbares@gmail.com` in both tables and get the Email & Password from the result.
Any Idea how to do this?
Thanks.
|
```
SELECT * FROM brand_table
union
select * from facebook_table
```
`UNION` will bring distinct records.
If you use `union all`, it will bring duplicates as well.
In your case:
```
SELECT * FROM brand_table where email ='calbares@gmail.com'
union all
select * from facebook_table where email ='calbares@gmail.com'
```
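As a small illustrative check (made-up data, SQLite via Python), the `UNION ALL` form returns the matching row from whichever table contains it:

```python
import sqlite3

# Two look-alike tables as in the question; only one holds the target email.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE brand    (id INTEGER, email TEXT, password TEXT);
CREATE TABLE facebook (id INTEGER, email TEXT, password TEXT);
INSERT INTO brand    VALUES (1, 'kris@gmail.com',     '12345678');
INSERT INTO facebook VALUES (1, 'calbares@gmail.com', '12345678');
""")

target = 'calbares@gmail.com'
rows = conn.execute("""
    SELECT email, password FROM brand    WHERE email = ?
    UNION ALL
    SELECT email, password FROM facebook WHERE email = ?
""", (target, target)).fetchall()
print(rows)   # [('calbares@gmail.com', '12345678')]
```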
|
To do this, please try the following (note there must be no semicolon before `UNION ALL`, otherwise it becomes two separate statements):
```
Select * from Brand where Email = '$email'
UNION ALL
Select * from Facebook where Email = '$email';
```
|
MYSQL - How to find result from two tables?
|
[
"",
"mysql",
"sql",
"select",
"sql-like",
""
] |
I was going through "Learn SQL the hard way" and I am currently in exercise 13.
I am stuck in the part where we have to
> Write a query that can find all the names of pets and their owners
> bought after 2004. Key to this is to map the person\_pet based on the
> purchased\_on column to the pet and parent.
My tables look like this
```
sqlite> select * from person ;
id first_name last_name age dead phone_number salary dob
---------- ---------- ---------- ---------- ---------- ------------ ---------- ----------
0 john doe 20 0 9929 123123.0 2015-12-09
1 foo bar 25 0 12 123123.0 2004-12-11
2 michal jordan 19 0 12 123123.0 2005-12-11
3 tom ford 30 0 12 123123.0 2002-12-11
```
And
```
sqlite> select * from pet ;
id name breed age dead dob parent_id
---------- ---------- ---------- ---------- ---------- ---------- ----------
0 fluffy Unicorn 5 0 2012-02-01
1 quora social net 10 0 2010-02-01
2 Goldie German She 6 0 2009-02-01
3 boxer golden ret 3 0 2007-02-01
4 naman kutta 10 1 2011-02-01
5 hari ohk 7 0 2015-02-01
sqlite> select * from person_pet ;
person_id pet_id purchased_on
---------- ---------- ------------
2 2 2002-03-30
2 3 2001-04-30
2 4 2005-04-30
2 5 2003-04-30
3 1 2006-04-30
3 4 2005-04-30
```
My `schema` looks like this
```
sqlite> .schema
CREATE TABLE person(
id INTEGER PRIMARY KEY,
first_name TEXT,
last_name TEXT,
age INTEGER
, dead INTEGER, phone_number INTEGER, salary FLOAT, dob DATETIME);
CREATE TABLE pet(
id INTEGER PRIMARY KEY,
name TEXT,
breed TEXT,
age INTEGER,
dead INTEGER
, dob DATETIME, parent_id INTEGER);
CREATE TABLE person_pet(
person_id INTEGER,
pet_id INTEGER
, purchased_on DATETIME);
CREATE TABLE cars(
id INTEGER PRIMARY KEY,
car_name TEXT
);
```
**What I have tried**
I am able to get the `person_id` and `pet_id` for pets bought after 2004 with this query:
```
sqlite> select person_id, pet_id, purchased_on from person_pet
...> where purchased_on > "2004-01-01" ;
person_id pet_id purchased_on
---------- ---------- ------------
2 4 2005-04-30
3 1 2006-04-30
3 4 2005-04-30
```
How should I proceed to get the names out now?
This is my first time with SQL, so any help would be much appreciated. I've been stuck on this for a while.
**Edit** :
**Edit** : I got to know that this can be done using `joins`, but joins have not been covered up to Exercise 13. How can we do this without using `joins`?
As @vkp asked in the comments, `in` has been covered till now.
|
To replace the IDs with their associated names, you could use [correlated subqueries](http://en.wikipedia.org/wiki/Correlated_subquery):
```
select (select first_name
from person
where id = person_pet.person_id),
(select name
from pet
where id = person_pet.pet_id),
purchased_on
from person_pet
where purchased_on >= '2005';
```
(When you need to get multiple columns from the same table (e.g., first name and last name), this is less efficient than a join.)
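Since the book uses SQLite, the query above is easy to verify; here is a trimmed-down reproduction (only the columns and rows the query touches) driven from Python:

```python
import sqlite3

# Minimal copy of the book's tables with just the data matched by the query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person     (id INTEGER PRIMARY KEY, first_name TEXT);
CREATE TABLE pet        (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE person_pet (person_id INTEGER, pet_id INTEGER, purchased_on DATETIME);
INSERT INTO person     VALUES (2, 'michal'), (3, 'tom');
INSERT INTO pet        VALUES (1, 'quora'), (4, 'naman');
INSERT INTO person_pet VALUES (2, 4, '2005-04-30'),
                              (3, 1, '2006-04-30'),
                              (3, 4, '2005-04-30');
""")

# sorted() in Python only to make the output order deterministic.
rows = sorted(conn.execute("""
    SELECT (SELECT first_name FROM person WHERE id = person_pet.person_id),
           (SELECT name       FROM pet    WHERE id = person_pet.pet_id),
           purchased_on
      FROM person_pet
     WHERE purchased_on >= '2005'
""").fetchall())
print(rows)
# [('michal', 'naman', '2005-04-30'), ('tom', 'naman', '2005-04-30'),
#  ('tom', 'quora', '2006-04-30')]
```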
|
You just need to `join` the other two tables as well.
```
select pp.person_id, pp.pet_id, pp.purchased_on, pt.name, p.first_name, p.last_name
from person_pet pp
inner join person p on p.id = pp.person_id
inner join pet pt on pt.id = pp.pet_id
where purchased_on > "2004-01-01"
```
|
"Learn sql the hard way" - Exercise 13 : Trouble with a nested select statement
|
[
"",
"sql",
"sqlite",
"select",
""
] |
I have a bitmap stored as a `VARCHAR` in Netezza. I need to convert that `VARCHAR` to a binary string in Netezza.
Input (Netezza col value - `VARCHAR` ) = `'0xFFFFFFFFFFFFFFFF'`
Desired output (`VARCHAR`)->
```
'1111111111111111111111111111111111111111111111111111111111111111'
```
Is there a way to do this using Netezza query ?
I tried
```
SELECT CAST('0xFFFFFFFFFFFFFFFF' AS VARBINARY(64) );
```
but that throws an error
> ERROR [HY000]ERROR: Cannot cast type 'VARCHAR' to 'VARBINARY'
|
You can convert a hex string into binary data and store it in either a VARCHAR or VARBINARY column. I tend to prefer VARCHAR because of the rather limited CASTs that are available for VARBINARY.
To convert a hex string to binary and stored it in a VARCHAR, use the hextoraw function provided with the SQL Extension Toolkit. This is included with Netezza but must be configured and made available by your administrator.
To convert a hex string to binary and store it in a VARBINARY, use the hex\_to\_binary function included with Netezza (added in v 7.2).
```
drop table test_table if exists;
DROP TABLE
create table test_table (col1 varchar(50), col2 varbinary(50));
CREATE TABLE
insert into test_table values (hextoraw('464F4F'), hex_to_binary('464F4F'));
INSERT 0 1
select * from test_table;
COL1 | COL2
------+-----------
FOO | X'464F4F'
(1 row)
```
From there you'll need a UDF to handle the bit calculations that you want to do. I've put together three simple UDFs that I believe will suit your purpose.
FirstBit returns the position of the first non-zero bit.
BitCount returns the total count of non-zero bits.
CharToBase2 converts a binary value into a VARCHAR of 1s and 0s.
I think the first two get the end result that you need without the third, but in case you still wanted that, it's here.
```
select firstbit(hextoraw('0000')), bitcount(hextoraw('0000')), chartobase2(hextoraw('0000'));
FIRSTBIT | BITCOUNT | CHARTOBASE2
----------+----------+------------------
-1 | 0 | 0000000000000000
(1 row)
select firstbit(hextoraw('0001')), bitcount(hextoraw('0001')), chartobase2(hextoraw('0001'));
FIRSTBIT | BITCOUNT | CHARTOBASE2
----------+----------+------------------
15 | 1 | 0000000000000001
(1 row)
select firstbit(hextoraw('FFFF')), bitcount(hextoraw('FFFF')), chartobase2(hextoraw('FFFF'));
FIRSTBIT | BITCOUNT | CHARTOBASE2
----------+----------+------------------
0 | 16 | 1111111111111111
(1 row)
```
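For sanity-checking the UDF outputs, here are rough pure-Python equivalents of the three functions, taking the hex string (without any leading `0x`) as input:

```python
def chartobase2(hexstr: str) -> str:
    """Hex string -> string of bits, 4 bits per hex digit (like CharToBase2)."""
    return format(int(hexstr, 16), "0{}b".format(len(hexstr) * 4))

def bitcount(hexstr: str) -> int:
    """Total number of set bits (like BitCount)."""
    return bin(int(hexstr, 16)).count("1")

def firstbit(hexstr: str) -> int:
    """Zero-based position of the first set bit from the left, -1 if none (like FirstBit)."""
    return chartobase2(hexstr).find("1")

print(firstbit("0001"), bitcount("0001"), chartobase2("0001"))
# 15 1 0000000000000001
```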
Here are the sources for each. Please note that I am a terrible C++ coder, and would likely be fired if that were my job, so caveat emptor.
**BitCount.cpp**
```
#include "udxinc.h"
#include <string.h>
#include <math.h>   // for pow()
using namespace nz::udx;
class BitCount : public Udf
{
public:
static Udf* instantiate();
ReturnValue evaluate()
{
StringArg* str = stringArg(0);
int32 retval = 0;
for(int i=0; i< str->length; i++)
{
for (int y=7; y>=0 ; y--)
{
if ((str->data[i] & (unsigned char)pow(2,y)) > 0)
{
retval++;
}
}
}
NZ_UDX_RETURN_INT32(retval);
}
};
Udf* BitCount::instantiate()
{
return new BitCount;
}
```
**FirstBit.cpp**
```
#include "udxinc.h"
#include <string.h>
#include <math.h>   // for pow()
using namespace nz::udx;
class FirstBit : public Udf
{
public:
static Udf* instantiate();
ReturnValue evaluate()
{
StringArg* str = stringArg(0);
int32 retval = -1;
for(int i=0; i< str->length; i++) {
for (int y=7; y>=0 ; y--) {
if ((str->data[i] & (unsigned char)pow(2,y)) > 0)
{
retval = i*8 + 7 - y;
}
if (retval > -1) break;
}
if (retval > -1) break;
}
NZ_UDX_RETURN_INT32(retval);
}
};
Udf* FirstBit::instantiate()
{
return new FirstBit;
}
```
**CharToBase2.cpp**
```
#include "udxinc.h"
#include <string.h>
#include <math.h>   // for pow()
using namespace nz::udx;
class CharToBase2 : public Udf
{
public:
static Udf* instantiate();
ReturnValue evaluate()
{
StringArg* str = stringArg(0);
StringReturn* result = stringReturnInfo();
result->size = str->length*8;
for(int i=0; i< str->length; i++)
{
for (int y=7; y>=0 ; y-- )
{
if ((str->data[i] & (unsigned char)pow(2,y)) == 0) {
result->data[i*8 + 7 - y] = '0'; }
else {
result->data[i*8 + 7 - y] = '1'; }
}
}
NZ_UDX_RETURN_STRING(result);
}
uint64 calculateSize() const
{
return sizerStringSizeValue(sizerStringArgSize(0)*8);
}
};
Udf* CharToBase2::instantiate()
{
return new CharToBase2;
}
```
Finally, here are the scripts I used to compile and install each.
**install\_firstbit.sh DBNAME**
```
DB=$1
if [[ -z $DB ]]; then
DB=$NZ_DATABASE
fi
if [[ -z $DB ]]; then
print "Usage: install <database>"
return 1
fi
export NZ_DATABASE="${DB}"
nzudxcompile FirstBit.cpp \
--fenced \
--sig "FirstBit(varchar(any))" \
--return "integer" \
--class "FirstBit"
rm FirstBit.o_*
```
**install\_bitcount.sh DBNAME**
```
DB=$1
if [[ -z $DB ]]; then
DB=$NZ_DATABASE
fi
if [[ -z $DB ]]; then
print "Usage: install <database>"
return 1
fi
export NZ_DATABASE="${DB}"
nzudxcompile BitCount.cpp \
--fenced \
--sig "BitCount(varchar(any))" \
--return "integer" \
--class "BitCount"
rm BitCount.o_*
```
**install\_chartobase2.sh DBNAME**
```
DB=$1
if [[ -z $DB ]]; then
DB=$NZ_DATABASE
fi
if [[ -z $DB ]]; then
print "Usage: install <database>"
return 1
fi
export NZ_DATABASE="${DB}"
nzudxcompile CharToBase2.cpp \
--fenced \
--sig "CharToBase2(varchar(any))" \
--return "varchar(any)" \
--class "CharToBase2"
rm CharToBase2.o_*
```
|
I think you'll need to define a [UDF](https://www-304.ibm.com/support/knowledgecenter/SSULQD_7.2.0/com.ibm.nz.udf.doc/c_udf_create_udfs.html) in C, register it with the database, and then call it on your column.
I'd start by looking at either [this answer](https://stackoverflow.com/questions/8205298/convert-hex-to-binary-in-c) or [this one](https://stackoverflow.com/questions/1557400/hex-to-char-array-in-c). In both of those cases you'd likely have to strip the leading `0x`.
|
Netezza SQL convert VARCHAR to binary string
|
[
"",
"sql",
"netezza",
""
] |
We are facing a problem migrating a large data set from Postgres into Elasticsearch.
We have schema similar like this
```
+---------------+--------------+------------+-----------+
| user_id | created_at | latitude | longitude |
+---------------+--------------+------------+-----------+
| 5 | 23.1.2015 | 12.49 | 20.39 |
+---------------+--------------+------------+-----------+
| 2 | 23.1.2015 | 12.42 | 20.32 |
+---------------+--------------+------------+-----------+
| 2 | 24.1.2015 | 12.41 | 20.31 |
+---------------+--------------+------------+-----------+
| 5 | 25.1.2015 | 12.45 | 20.32 |
+---------------+--------------+------------+-----------+
| 1 | 23.1.2015 | 12.43 | 20.34 |
+---------------+--------------+------------+-----------+
| 1 | 24.1.2015 | 12.42 | 20.31 |
+---------------+--------------+------------+-----------+
```
We are able to find the latest position per user by `created_at` thanks to the rank window function in SQL:
```
... WITH latest AS (
    select user_id, latitude, longitude,
           rank() over (partition by user_id order by created_at desc) as r
    FROM locations)
SELECT user_id, latitude, longitude FROM latest WHERE r = 1
```
and the result is only the newest location for each user:
```
+---------------+--------------+------------+-----------+
| user_id | created_at | latitude | longitude |
+---------------+--------------+------------+-----------+
| 2 | 24.1.2015 | 12.41 | 20.31 |
+---------------+--------------+------------+-----------+
| 5 | 25.1.2015 | 12.45 | 20.32 |
+---------------+--------------+------------+-----------+
| 1 | 24.1.2015 | 12.42 | 20.31 |
+---------------+--------------+------------+-----------+
```
After we import the data into elasticsearch, our document model looks like:
```
{
"location" : { "lat" : 12.45, "lon" : 46.84 },
"user_id" : 5,
"created_at" : "2015-01-24T07:55:20.606+00:00"
}
etc...
```
I am looking for an equivalent of this SQL query as an Elasticsearch query. I think it must be possible, but I have not found out how yet.
|
You can achieve this using `field collapsing` combined with `inner_hits`.
```
{
"collapse": {
"field": "user_id",
"inner_hits": {
"name": "order by created_at",
"size": 1,
"sort": [
{
"created_at": "desc"
}
]
}
  }
}
```
Detailed Article: <https://blog.francium.tech/sql-window-function-partition-by-in-elasticsearch-c2e3941495b6>
|
It is simple: if you want to find the *newest* record (for a given id), you just need the records for which *no newer ones* (with the same id) exist. (This assumes that for a given id, no two records exist with the *same* created\_at date.)
---
```
SELECT * FROM locations ll
WHERE NOT EXISTS (
SELECT * FROM locations nx
WHERE nx.user_id = ll.user_id
AND nx.created_at > ll.created_at
);
```
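The `NOT EXISTS` version is easy to check; here it is run on the sample data in SQLite from Python (dates rewritten as ISO strings so they compare correctly as text):

```python
import sqlite3

# The question's sample rows, with ISO dates so text comparison works.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locations (user_id INTEGER, created_at TEXT, latitude REAL, longitude REAL);
INSERT INTO locations VALUES
  (5, '2015-01-23', 12.49, 20.39),
  (2, '2015-01-23', 12.42, 20.32),
  (2, '2015-01-24', 12.41, 20.31),
  (5, '2015-01-25', 12.45, 20.32),
  (1, '2015-01-23', 12.43, 20.34),
  (1, '2015-01-24', 12.42, 20.31);
""")

rows = conn.execute("""
    SELECT * FROM locations ll
     WHERE NOT EXISTS (SELECT 1 FROM locations nx
                        WHERE nx.user_id = ll.user_id
                          AND nx.created_at > ll.created_at)
     ORDER BY user_id
""").fetchall()
print(rows)
# [(1, '2015-01-24', 12.42, 20.31), (2, '2015-01-24', 12.41, 20.31),
#  (5, '2015-01-25', 12.45, 20.32)]
```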
---
EDITED (it appears the OP wants the *newest* observation, not the oldest)
|
Rank over partition from postgresql in elasticsearch
|
[
"",
"sql",
"postgresql",
"elasticsearch",
"window-functions",
""
] |
So I have my table shown below
```
CREATE TABLE [dbo].[Test]
(
[Id] INT NOT NULL,
[CommaSeperatedString] nvarchar(Max) NOT NULL,
CONSTRAINT [PK_Test] PRIMARY KEY ([Id])
)
```
and I would like to lock down this table so that it can only be updated through triggers.
I.e. I don't want a user to be able to run
> * Insert
> * Update
> * Delete
against this table. It may only be updated through some triggers I have set up.
Is this possible?
|
Well, here **[INSTEAD OF Triggers](https://technet.microsoft.com/en-us/library/ms175521(v=sql.105).aspx)** come into the picture.
You can create triggers for `INSTEAD OF INSERT` and `INSTEAD OF UPDATE`, and the same for delete.
```
CREATE TRIGGER Trig_INS_Test ON Test
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON
INSERT INTO Test (Id, CommaSeperatedString)
SELECT Id, CommaSeperatedString
FROM inserted
END
```
|
In Management Studio you can go to the user role --> Properties --> Securables.
Add the table there and you can deny rights on the table.
DENY overrules any GRANT. So even if the user has another role that grants access, the rights will still be denied.
More info: [Understanding GRANT, DENY, and REVOKE in SQL Server](https://www.mssqltips.com/sqlservertip/2894/understanding-grant-deny-and-revoke-in-sql-server/)
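A hedged sketch of the corresponding T-SQL statement (the role name is illustrative):

```
DENY INSERT, UPDATE, DELETE ON dbo.Test TO SomeAppRole;
```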
Edit: If your triggers are on the same table, use an INSTEAD OF trigger, as the other answers said. Otherwise use rights.
|
How do I lock table so that a user cannot perform Insert, Update, delete operations on it (Only triggers can)
|
[
"",
"sql",
"sql-server",
"permissions",
""
] |
I have a table with a username field as given below. I have a stored procedure which receives the username argument and processes it to show results. At the moment I manually execute the stored procedure for each username, specifying the argument, and after execution copy-paste its result into Excel.
The table with username is:
```
╔════╦══════════╦══╗
║ ID ║ Username ║ ║
╠════╬══════════╬══╣
║ 1 ║ abc ║ ║
║ 2 ║ def ║ ║
║ 3 ║ ghi ║ ║
║ 4 ║ jkl ║ ║
║ 5 ║ mno ║ ║
║ 6 ║ xyz ║ ║
╚════╩══════════╩══╝
```
The stored procedure is called like this:
```
spCalculateSomeValuesForThePassedUsername 'abc'
```
The output of the stored procedure is:
```
╔════╦══════════╦═══════╗
║ ID ║ Username ║ Value ║
╠════╬══════════╬═══════╣
║ 1 ║ abc ║ 100 ║
╚════╩══════════╩═══════╝
```
I can make a parent stored procedure which contains all the stored procedure calls and execute it to get the results, but I will still have to copy-paste the results. E.g.
```
CREATE PROCEDURE SPparent
AS
BEGIN
spCalculateSomeValuesForThePassedUsername 'abc'
spCalculateSomeValuesForThePassedUsername 'def'
spCalculateSomeValuesForThePassedUsername 'ghi'
spCalculateSomeValuesForThePassedUsername 'jkl'
.
.
.
END
```
I am wondering if it is possible to take each username and insert the output into a result table automatically.
The desired Result table
```
╔════╦══════════╦═══════╗
║ ID ║ Username ║ Value ║
╠════╬══════════╬═══════╣
║ 1 ║ abc ║ 100 ║
║ 2 ║ def ║ 200 ║
║ 3 ║ ghi ║ 150 ║
║ 4 ║ jkl ║ 300 ║
║ 5 ║ mno ║ 700 ║
║ 6 ║ xyz ║ 1000 ║
╚════╩══════════╩═══════╝
```
|
And here's a much shorter set-oriented solution that uses no loops or cursors:
```
CREATE TABLE #Results(ID INT, Username SYSNAME, Value INT);
DECLARE @sql AS NVARCHAR(MAX) = N'';
SELECT @sql = @sql + N'
INSERT #Results EXEC spCalculateSomeValuesForThePassedUsername N'''+Username+''';'
FROM <yourTable>;
EXEC(@sql);
Select * from #Results;
```
|
```
Declare @UserName NVarChar(128);
Create Table #Results(Id Int, UserName NVarChar(128), Value Int);
Declare curSO Cursor Local
Fast_Forward
Read_Only
For
Select UserName
From mySchema.myTable
;
Open curSO;
Fetch Next From curSO
Into @UserName;
While @@Fetch_Status = 0
Begin
Insert Into #Results
Exec spCalculateSomeValuesForThePassedUsername @UserName;
Fetch Next From curSO
Into @UserName;
End
Close curSO;
Deallocate curSO;
-- Or do something different here. I used a temporary table. You can use whatever
Select *
From #Results
```
|
How to pass each value of the column from a table to a stored procedure and save the result in a table in SQL-Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Suppose, we have query like this:
```
SELECT
tt.id first_column,
tt.value second_column,
--another values
qq.code_1 third_column,
qq.code_2 column4,
qq.code_3 column5,
qq.code_4 column6,
qq.code_5 column7
FROM
test_tbl tt LEFT JOIN
(SELECT * FROM (
SELECT id, code_value, ROW_NUMBER() OVER (ORDER BY code_value) AS RN
FROM attributes attr WHERE attr.id = tt.id AND
attr.code IN ('ATTRIBUTE_CODE_1',
'ATTRIBUTE_CODE_2',
'ATTRIBUTE_CODE_3',
'ATTRIBUTE_CODE_4',
'ATTRIBUTE_CODE_5')
)
PIVOT (MAX(code_value) FOR (RN) IN (1 as code_1, 2 as code_2, 3 as code_3, 4 as code_4, 5 as code_5))) qq
ON tt.id = qq.id
-- LEFT JOINS with another tables...
```
The problem is error:
```
ORA-00904: "TT"."ID": invalid identifier
```
How can I circumvent this obstacle?
Of course the above example is simple and artificial; the real query is more complicated.
This example can be checked here: <http://sqlfiddle.com/#!4/eec83/3>
|
Just remove the problematic condition:
```
WHERE attr.id = tt.id
```
... but to ensure correct `RN` values, you'll also need to add a `partition by id` inside your `row_number() over` clause.
```
ROW_NUMBER() OVER (PARTITION BY id ORDER BY code_value) AS RN
```
[SQLFiddle Demo](http://sqlfiddle.com/#!4/eec83/45):
```
SELECT
tt.id first_column,
tt.value second_column,
--another values
qq.code_1 third_column,
qq.code_2 column4,
qq.code_3 column5,
qq.code_4 column6,
qq.code_5 column7
FROM
test_tbl tt LEFT JOIN
(SELECT * FROM (
SELECT id, code_value, ROW_NUMBER() OVER (partition by id ORDER BY code_value) AS RN
FROM attributes attr WHERE
attr.code IN ('ATTRIBUTE_CODE_1',
'ATTRIBUTE_CODE_2',
'ATTRIBUTE_CODE_3',
'ATTRIBUTE_CODE_4',
'ATTRIBUTE_CODE_5')
)
PIVOT (MAX(code_value) FOR (RN) IN (1 as code_1, 2 as code_2, 3 as code_3, 4 as code_4, 5 as code_5))) qq
ON tt.id = qq.id
```
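To see what moving to `PARTITION BY id` buys you, here is the numbering logic sketched in plain Python on made-up `(id, code_value)` pairs: the row numbers restart for every `id` instead of running over the whole result set:

```python
from itertools import groupby

# Hypothetical (id, code_value) pairs standing in for the attributes rows.
attrs = [(1, 'b'), (1, 'a'), (2, 'c'), (2, 'a'), (2, 'b')]

numbered = []
# Group by id, then number rows within each group ordered by code_value,
# mirroring ROW_NUMBER() OVER (PARTITION BY id ORDER BY code_value).
for _id, grp in groupby(sorted(attrs), key=lambda r: r[0]):
    for rn, (gid, code) in enumerate(sorted(grp, key=lambda r: r[1]), start=1):
        numbered.append((gid, code, rn))
print(numbered)
# [(1, 'a', 1), (1, 'b', 2), (2, 'a', 1), (2, 'b', 2), (2, 'c', 3)]
```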
|
You can't reference another table of the same query level inside an **inline view** (there is no lateral correlation before Oracle 12c). I couldn't find documentation which states this explicitly, but that is how the query optimizer treats the query.
The actual issue is more obvious with this simple query:
```
SELECT
*
FROM
test_tbl tt ,
(
SELECT ID, CODE_VALUE
FROM attributes attr
WHERE attr.id = tt.id
) qq
```
Error:
```
ORA-00904: "TT"."ID": invalid identifier
```
If you want to do a join, then you should use `JOIN` and put the condition in the ON clause, or put the condition in the WHERE clause to join the inline view and the table, like here:
```
SELECT
*
FROM
test_tbl tt ,
(
SELECT ID, CODE_VALUE
FROM attributes attr
) qq
WHERE qq.ID = tt.id
```
Or consider whether you want a `CROSS APPLY` (a lateral join, available from Oracle 12c). It seems like this is really a join.
Also see this [answer](https://stackoverflow.com/a/20127137/1538014), there is a nice expression:
> An inline view / derived table creates a temporary unnamed view at the beginning of your query and then treats it like another table until the operation is complete. Because the compiler needs to create a temporary view when it sees on of these subqueries on the FROM line, those subqueries must be entirely self-contained with no references outside the subquery.
I believe that's not always true; in some cases Oracle can choose to merge the inline view, and merging can be prevented by use of the [NO\_MERGE](http://docs.oracle.com/cd/B13789_01/server.101/b10752/hintsref.htm#26161) optimizer hint.
[SEE THIS](https://community.oracle.com/message/9860690#9860690)
[SEE THIS](https://community.oracle.com/thread/342118)
I intend to add a query plan as soon as I can, which may give more insight
|
ORA-00904: invalid identifier in nested subquery
|
[
"",
"sql",
"oracle",
""
] |
Within Crystal Reports, I'm using the following query (against an Oracle database) to generate data for a single field in a report:
```
SELECT SUM(e1.ENT_LOCAL_AMOUNT+e1.ENT_DISCRETIONARY_AMOUNT) AS "Entitlement"
FROM CLAIM_PERIODS cp1
JOIN ENTITLEMENTS e1
ON cp1.CPE_REFNO=e1.ENT_CPE_REFNO
WHERE e1.ENT_REFNO=(SELECT MAX(to_number(e2.ENT_REFNO))
FROM ENTITLEMENTS e2
WHERE e1.ENT_CPE_REFNO=e2.ENT_CPE_REFNO
AND (e2.ENT_START_DATE <= {?HB_As_At_Date}
AND e2.ENT_END_DATE > {?HB_As_At_Date})
AND e2.ENT_CREATED_DATE<={?HB_As_At_Date})
AND cp1.CPE_CPA_CPY_CODE='HB'
```
This works fine and returns a single integer value, based on the {?HB\_As\_At\_Date} supplied (The {?} syntax is Crystal's way of embedding parameter values into SQL). The content of the above query isn't my issue though - what I want to do is run it repeatedly for several different dates, and have that output be what gets fed through to Crystal for use in the report.
So say I want this query run for every Monday in September, I'd currently run the Crystal report once with a parameter of 07/09/2015, then again for 14/09/2015, etc.
I'd instead like to use my SELECT statement in conjunction with a query that tabulates this as needed - running the above once each per date required. With the output being something like:
```
Date Entitlement
07/09/2015 450,000.00
14/09/2015 460,123.00
21/09/2015 465,456.00
28/09/2015 468,789.00
```
Could someone point me in the right direction in terms of which keywords I should be reading up on here? I'd imagine it's quite straight-forward to generate a set of dates and run my SQL as a subquery using them, but I'm not sure where to start.
|
The only way I can think of without using a stored procedure is to repeat (i.e. copy/paste) your query once per date parameter and then combine the copies with `UNION ALL`. Selecting the parameter itself in each branch gives you the Date column, and `UNION ALL` avoids collapsing two dates that happen to produce the same sum. Something like this:
```
SELECT {?HB_As_At_Date_1} AS "Date",
       SUM(e1.ENT_LOCAL_AMOUNT+e1.ENT_DISCRETIONARY_AMOUNT) AS "Entitlement"
FROM CLAIM_PERIODS cp1
JOIN ENTITLEMENTS e1
ON cp1.CPE_REFNO=e1.ENT_CPE_REFNO
WHERE e1.ENT_REFNO=(SELECT MAX(to_number(e2.ENT_REFNO))
                    FROM ENTITLEMENTS e2
                    WHERE e1.ENT_CPE_REFNO=e2.ENT_CPE_REFNO
                    AND (e2.ENT_START_DATE <= {?HB_As_At_Date_1}
                    AND e2.ENT_END_DATE > {?HB_As_At_Date_1})
                    AND e2.ENT_CREATED_DATE<={?HB_As_At_Date_1})
AND cp1.CPE_CPA_CPY_CODE='HB'
UNION ALL
SELECT {?HB_As_At_Date_2} AS "Date",
       SUM(e1.ENT_LOCAL_AMOUNT+e1.ENT_DISCRETIONARY_AMOUNT) AS "Entitlement"
FROM CLAIM_PERIODS cp1
JOIN ENTITLEMENTS e1
ON cp1.CPE_REFNO=e1.ENT_CPE_REFNO
WHERE e1.ENT_REFNO=(SELECT MAX(to_number(e2.ENT_REFNO))
                    FROM ENTITLEMENTS e2
                    WHERE e1.ENT_CPE_REFNO=e2.ENT_CPE_REFNO
                    AND (e2.ENT_START_DATE <= {?HB_As_At_Date_2}
                    AND e2.ENT_END_DATE > {?HB_As_At_Date_2})
                    AND e2.ENT_CREATED_DATE<={?HB_As_At_Date_2})
AND cp1.CPE_CPA_CPY_CODE='HB'
```
As for your comment about writing a script for that: I don't know how you are running your report, but if you have an app/website running it, you can generate the SQL in the app/website's language and assign it to the report object before you run it. Or, even better, you can generate the SQL, run it, and assign the results to the report object. I do this all the time, as I prefer my code to run the queries rather than the report itself, because I follow a layered design pattern: the report lives in the presentation layer, which cannot communicate with the database directly; instead it calls the business/data layer, which generates and runs the query and returns the results to the business/presentation layer.
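As an illustrative sketch of that generate-the-SQL approach (the inner query is abbreviated with `...` here, and Oracle's `DATE` literal syntax is assumed), building the UNION statement from a list of dates in Python might look like:

```python
# Hypothetical abbreviated template; {d} is filled in for each report date.
TEMPLATE = """SELECT DATE '{d}' AS as_at_date,
       SUM(e1.ENT_LOCAL_AMOUNT + e1.ENT_DISCRETIONARY_AMOUNT) AS entitlement
  FROM CLAIM_PERIODS cp1
  JOIN ENTITLEMENTS e1 ON cp1.CPE_REFNO = e1.ENT_CPE_REFNO
 WHERE ...same date filters as above, using DATE '{d}'..."""

# Every Monday in September 2015, as in the question.
dates = ["2015-09-07", "2015-09-14", "2015-09-21", "2015-09-28"]
sql = "\nUNION ALL\n".join(TEMPLATE.format(d=d) for d in dates)
print(sql.count("UNION ALL"))   # 3
```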
|
Edit the parameter to accept multiple values as input and change the query as below.
Use either the start date or the end date, but not both:
```
SELECT SUM(e1.ENT_LOCAL_AMOUNT+e1.ENT_DISCRETIONARY_AMOUNT) AS "Entitlement"
FROM CLAIM_PERIODS cp1
JOIN ENTITLEMENTS e1
ON cp1.CPE_REFNO=e1.ENT_CPE_REFNO
WHERE e1.ENT_REFNO=(SELECT MAX(to_number(e2.ENT_REFNO))
                    FROM ENTITLEMENTS e2
                    WHERE e1.ENT_CPE_REFNO=e2.ENT_CPE_REFNO
                    AND e2.ENT_END_DATE in ({?HB_As_At_Date})
                    AND e2.ENT_CREATED_DATE in ({?HB_As_At_Date}))
AND cp1.CPE_CPA_CPY_CODE='HB'
```
|
Running Oracle SQL query over several dates
|
[
"",
"sql",
"oracle",
"crystal-reports",
""
] |
My query returns the results below. Whenever the value "YES" appears in COLUMN4, I need to set "YES" for all the records in the same COLUMN1 group.
For example, DAVID has 2 records, with NO and YES, but the target state should be "YES" for both rows because he has the value "YES" in at least one of his records.
Query results:
```
Column1 Column2 Column3 Column4
=================================
Mary AA AAA YES
Mary BB BBB YES
David AA AAA YES
David BB BBB NO
Clara AA AAA NO
Clara BB BBB NO
```
Requested Target State
```
Column1 Column2 Column3 Column4
================================================
Mary AA AAA YES
Mary BB BBB YES
David AA AAA YES
David BB BBB **YES**
Clara AA AAA NO
Clara BB BBB NO
```
|
Here is the Select statement you want,
```
SELECT mt.Column1,mt.Column2,mt.Column3,
(CASE WHEN mt2.cc IS NULL THEN 'NO' ELSE 'YES' END) AS Column4
FROM mytable mt LEFT JOIN
(SELECT Column1,COUNT(*) AS cc FROM mytable WHERE Column4 = 'YES' GROUP BY Column1)
AS mt2 ON mt2.Column1 = mt.Column1
```
Result:
```
+-------------+-------------+-------------+-------------+
| Column1 | Column2 | Column3 | Column4 |
+-------------+-------------+-------------+-------------+
| Mary | AA | AAA | YES |
+-------------+-------------+-------------+-------------+
| Mary | BB | BBB | YES |
+-------------+-------------+-------------+-------------+
| David | AA | AAA | YES |
+-------------+-------------+-------------+-------------+
| David | BB | BBB | YES |
+-------------+-------------+-------------+-------------+
| Clara | AA | AAA | NO |
+-------------+-------------+-------------+-------------+
| Clara | BB | BBB | NO |
+-------------+-------------+-------------+-------------+
```
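The query is easy to verify on the sample rows; here it is run in SQLite from Python (Column3 omitted for brevity; the table name is illustrative):

```python
import sqlite3

# The question's sample data: only David mixes NO and YES within a group.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (Column1 TEXT, Column2 TEXT, Column4 TEXT);
INSERT INTO mytable VALUES
  ('Mary','AA','YES'),  ('Mary','BB','YES'),
  ('David','AA','YES'), ('David','BB','NO'),
  ('Clara','AA','NO'),  ('Clara','BB','NO');
""")

rows = conn.execute("""
    SELECT mt.Column1, mt.Column2,
           CASE WHEN mt2.cc IS NULL THEN 'NO' ELSE 'YES' END AS Column4
      FROM mytable mt
      LEFT JOIN (SELECT Column1, COUNT(*) AS cc
                   FROM mytable
                  WHERE Column4 = 'YES'
                  GROUP BY Column1) mt2
        ON mt2.Column1 = mt.Column1
""").fetchall()
print(rows)
```

David's "NO" row comes back as "YES", while both Clara rows stay "NO".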
|
```
UPDATE my_table m set m.column4='YES'
WHERE m.column4='NO'
AND exists(
select 1 from my_table mm where mm.column1= m.column1 and mm.column4 = 'YES')
```
|
how to Update a column with specific values found in any of the rows
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have 2 tables, "Dispatch" and "SalesLed", related by the Invoice column. "SalesLed" has multiple records for one specific invoice number; "Dispatch" has only one. I need the query to return the invoice only if all the matching Spiff values in "SalesLed" are zero; otherwise I do not want to see that invoice.
---
**Basic Query:**
```
SELECT invoice,dispatch
FROM dispatch
WHERE Invoice=0000001071
```
[](https://i.stack.imgur.com/t7qct.jpg)
```
SELECT invoice,spiff
FROM salesled
WHERE Invoice=0000001071
```
[](https://i.stack.imgur.com/hVw1R.jpg)
---
**Undesired Result:** It returns the invoice along with its 2 SalesLed records.
```
SELECT SalesLed.Invoice, SalesLed.Spiff, Dispatch.Dispatch
FROM Dispatch
INNER JOIN SalesLed
ON Dispatch.Invoice = SalesLed.Invoice
WHERE dispatch.Dispatch = 50007
```
[](https://i.stack.imgur.com/5NVpU.jpg)
**Desired Result:** I want it to return no invoices, since one of the SalesLed rows has a spiff larger than 0. I am wondering if there is a simple function I don't know about that would do this?
|
A simple `NOT EXISTS` clause will do the trick:
```
select d.*
from Dispatch d
where d.Dispatch = 50007
and not exists (select *
from SalesLed s
where s.Invoice = d.Invoice
and s.spiff > 0)
```
My query assumes that you're not actually interested in displaying the value for `SalesLed.Spiff`, since it would always come back `0` anyways, if I understand your data correctly.
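Here is the same `NOT EXISTS` shape checked in SQLite from Python, on made-up data where invoice 1071 has one non-zero spiff and invoice 1072 has none:

```python
import sqlite3

# Illustrative data: 1071 should be excluded, 1072 should survive.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dispatch (invoice INTEGER, dispatch INTEGER);
CREATE TABLE salesled (invoice INTEGER, spiff REAL);
INSERT INTO dispatch VALUES (1071, 50007), (1072, 50008);
INSERT INTO salesled VALUES (1071, 0), (1071, 25.0), (1072, 0), (1072, 0);
""")

rows = conn.execute("""
    SELECT d.* FROM dispatch d
     WHERE NOT EXISTS (SELECT 1 FROM salesled s
                        WHERE s.invoice = d.invoice AND s.spiff > 0)
""").fetchall()
print(rows)   # [(1072, 50008)]
```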
|
```
SELECT SalesLed.Invoice, max(SalesLed.Spiff), Dispatch.Dispatch
FROM Dispatch INNER JOIN SalesLed ON Dispatch.Invoice = SalesLed.Invoice
WHERE dispatch.Dispatch = 50007 group by SalesLed.Invoice,Dispatch.Dispatch
having max(SalesLed.Spiff)<=0
```
|
Query returns unwanted results
|
[
"",
"sql",
"sql-server",
""
] |
I have 10 groups in total. The group numbers (column `A`) restart at 1 for every distinct value in column `Type`. I want to divide 100 by the 10 total groups across all types, then divide the resulting figure for each group by the number of entries (rows) in that group.
Column `A` + `Type` is my group identifier; column `B` is what I want the final result to look like.
I tried some variations based on nested queries with counts grouped by `A` and `Type`, but I can't get my head around how to pull it all together.
```
ID A B TYPE
01 1 10 APPLE
02 2 5 APPLE
03 2 5 APPLE
04 3 2.5 APPLE
05 3 2.5 APPLE
06 3 2.5 APPLE
07 3 2.5 APPLE
08 4 5 APPLE
09 4 5 APPLE
10 5 10 APPLE
11 1 5 ORANGE
12 1 5 ORANGE
13 2 10 ORANGE
14 3 5 ORANGE
15 3 5 ORANGE
16 4 2.5 ORANGE
17 4 2.5 ORANGE
18 4 2.5 ORANGE
19 4 2.5 ORANGE
20 5 10 ORANGE
```
|
**EDITED**
to handle the additional row
```
21 5 ** ORANGE
```
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/42d7e/6)
```
WITH subGroup as (
SELECT [TYPE], [A], count([TYPE]) as total
FROM Table1
GROUP BY [TYPE], [A]
),
totalGroup as (
SELECT count(DISTINCT [TYPE] + CAST([A] as varchar(10))) as total
FROM Table1
)
SELECT t.ID, t.[A], (100.0 / tg.total / s.total) as [B], t.[TYPE]
FROM Table1 t
inner join subGroup s
on t.[Type] = s.[Type]
and t.[A] = s.[A]
, totalGroup tg
```
**OUTPUT**
```
| ID | A | B | TYPE |
|----|---|-----|--------|
| 1 | 1 | 10 | APPLE |
| 2 | 2 | 5 | APPLE |
| 3 | 2 | 5 | APPLE |
| 4 | 3 | 2.5 | APPLE |
| 5 | 3 | 2.5 | APPLE |
| 6 | 3 | 2.5 | APPLE |
| 7 | 3 | 2.5 | APPLE |
| 8 | 4 | 5 | APPLE |
| 9 | 4 | 5 | APPLE |
| 10 | 5 | 10 | APPLE |
| 11 | 1 | 5 | ORANGE |
| 12 | 1 | 5 | ORANGE |
| 13 | 2 | 10 | ORANGE |
| 14 | 3 | 5 | ORANGE |
| 15 | 3 | 5 | ORANGE |
| 16 | 4 | 2.5 | ORANGE |
| 17 | 4 | 2.5 | ORANGE |
| 18 | 4 | 2.5 | ORANGE |
| 19 | 4 | 2.5 | ORANGE |
| 20 | 5 | 5 | ORANGE |
| 21 | 5 | 5 | ORANGE |
```
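The same arithmetic can be checked in SQLite. Note this sketch counts the rows of `subGroup` directly instead of using the `DISTINCT ... + CAST` concatenation, which is equivalent here; the sample rows are made up and smaller than the question's:

```python
import sqlite3

# 4 (TYPE, A) groups total -> each group gets 100/4 = 25,
# split evenly across that group's rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID, A, TYPE)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    (1, 1, "APPLE"), (2, 2, "APPLE"), (3, 2, "APPLE"),
    (4, 1, "ORANGE"), (5, 1, "ORANGE"), (6, 2, "ORANGE"),
])
rows = conn.execute("""
    WITH subGroup AS (SELECT TYPE, A, COUNT(*) AS total
                      FROM t GROUP BY TYPE, A),
         totalGroup AS (SELECT COUNT(*) AS total FROM subGroup)
    SELECT t.ID, 100.0 / tg.total / s.total AS B
    FROM t
    JOIN subGroup s ON t.TYPE = s.TYPE AND t.A = s.A
    CROSS JOIN totalGroup tg
    ORDER BY t.ID
""").fetchall()
print(rows)
```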
|
Hope this query helps
```
Select At.A, B = CAST(tot as float) / Cnt, At.Type
From Test AT
Inner Join
(Select A, Type, Cnt = Count(*) From Test
Group By A, Type) BT On AT.A = BT.A and AT.Type = BT.Type
Inner Join
(Select Type, Tot = Count(A) From Test
Group By Type) CT On CT.Type = AT.Type
```
|
Dividing a single number by count of rows in each grouping
|
[
"",
"sql",
"sql-server",
""
] |
Please help me with the following question:
```
+------+----------+
| Name | Sub-name |
+------+----------+
| A | x |
| A | x |
| B | x |
| A | y |
| B | y |
+------+----------+
```
Desired Result:
```
+------+----------+-------+
| Name | Sub-name | Count |
+------+----------+-------+
| A | x | 2 |
| A | x | 2 |
| B | x | 1 |
| A | y | 1 |
| B | y | 1 |
+------+----------+-------+
```
Three columns Name, Subname, Count
I want to partition based on both name and subname.
|
[SQL Fiddle](http://sqlfiddle.com/#!4/caba4/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE test ( Name, "Sub-name" ) AS
SELECT 'A', 'x' FROM DUAL
UNION ALL SELECT 'A', 'x' FROM DUAL
UNION ALL SELECT 'B', 'x' FROM DUAL
UNION ALL SELECT 'A', 'y' FROM DUAL
UNION ALL SELECT 'B', 'y' FROM DUAL;
```
**Query 1**:
```
SELECT Name,
"Sub-name",
COUNT( 1 ) OVER ( PARTITION BY "Sub-name", Name ) AS "Count"
FROM test
```
**[Results](http://sqlfiddle.com/#!4/caba4/1/0)**:
```
| NAME | Sub-name | Count |
|------|----------|-------|
| A | x | 2 |
| A | x | 2 |
| B | x | 1 |
| A | y | 1 |
| B | y | 1 |
```
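The window count simply attaches the group size to every row without collapsing the rows. A plain-Python sketch of the same semantics, using the question's data:

```python
from collections import Counter

# COUNT(*) OVER (PARTITION BY Name, "Sub-name"): count each
# (Name, Sub-name) pair, then attach the count to every row.
rows = [("A", "x"), ("A", "x"), ("B", "x"), ("A", "y"), ("B", "y")]
counts = Counter(rows)
result = [(name, sub, counts[(name, sub)]) for name, sub in rows]
print(result)
```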
|
Try this:
```
select name, sub_name, count(name) over (partition by name, sub_name) as count
from table
```
|
How to partition based on two columns in Oracle/sql
|
[
"",
"sql",
"oracle",
"window-functions",
""
] |
I am creating a function in PL/pgSQL, and at this point I want to iterate over the results of a query and do something specific for each row. My current attempt is the following, where `temprow` is declared as `temprow user_data.users%rowtype`. The code in question is:
```
FOR temprow IN
SELECT * FROM user_data.users ORDER BY user_seasonpts DESC LIMIT 10
LOOP
SELECT user_id,user_seasonpts INTO player_idd,season_ptss FROM temprow;
INSERT INTO user_data.leaderboards (season_num,player_id,season_pts) VALUES (old_seasonnum,player_idd,season_ptss);
END LOOP;
```
However I get the following error from this: `ERROR: relation "temprow" does not exist`. If it's clear what I want done, could you point me to the right way to do it?
|
`temprow` is a record variable which is bound in turn to each record of the first `SELECT`.
So you should write:
```
FOR temprow IN
SELECT * FROM user_data.users ORDER BY user_seasonpts DESC LIMIT 10
LOOP
INSERT INTO user_data.leaderboards (season_num,player_id,season_pts) VALUES (old_seasonnum,temprow.user_id,temprow.user_seasonpts);
END LOOP;
```
This loop could be further simplified as a single query:
```
INSERT INTO user_data.leaderboards (season_num,player_id,season_pts)
SELECT old_seasonnum,user_id,user_seasonpts FROM user_data.users ORDER BY user_seasonpts DESC LIMIT 10
```
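A quick SQLite check of the single-query form, with invented user rows, `old_seasonnum` passed as a bind parameter, and `LIMIT 2` instead of 10 to keep the sample small:

```python
import sqlite3

# INSERT ... SELECT with ORDER BY/LIMIT: copy the top rows straight
# into the target table, no loop needed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER, user_seasonpts INTEGER);
CREATE TABLE leaderboards (season_num INTEGER, player_id INTEGER, season_pts INTEGER);
INSERT INTO users VALUES (1, 50), (2, 90), (3, 70);
""")
old_seasonnum = 7
conn.execute("""
    INSERT INTO leaderboards (season_num, player_id, season_pts)
    SELECT ?, user_id, user_seasonpts
    FROM users ORDER BY user_seasonpts DESC LIMIT 2
""", (old_seasonnum,))
final = conn.execute(
    "SELECT * FROM leaderboards ORDER BY season_pts DESC").fetchall()
print(final)
```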
|
For future reference, I want to emphasise [Thushara](https://stackoverflow.com/users/12162813/thushara-buddhika)'s comment on the accepted answer.
On Postgres 12 the following works:
```
DO $$
DECLARE temprow RECORD;
BEGIN FOR temprow IN
SELECT * FROM user_data.users ORDER BY user_seasonpts DESC LIMIT 10
LOOP
INSERT INTO user_data.leaderboards (season_num,player_id,season_pts) VALUES (old_seasonnum,temprow.user_id,temprow.user_seasonpts);
END LOOP;
END; $$
```
|
How to iterate over results of query
|
[
"",
"sql",
"postgresql",
"plpgsql",
""
] |
I have found many answers on selecting non-distinct rows where they group by a single column, for example, e-mail. However, there seems to have been an issue in our system where we are getting some duplicate data whereby everything is the same except the identity column.
```
SELECT DISTINCT
COLUMN1,
COLUMN2,
COLUMN3,
...
COLUMN14
FROM TABLE1
```
How can I get the non-distinct rows from the query above? Ideally it would include the identity column as currently that is obviously missing from the distinct query.
|
```
select COLUMN1,COLUMN2,COLUMN3
from TABLE_NAME
group by COLUMN1,COLUMN2,COLUMN3
having COUNT(*) > 1
```
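Here is the same `GROUP BY ... HAVING COUNT(*) > 1` pattern run against SQLite with a couple of invented rows, one pair duplicated:

```python
import sqlite3

# Rows identical in all grouped columns collapse to one group;
# HAVING keeps only the groups that occur more than once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1, c2, c3)")
conn.executemany("INSERT INTO t VALUES (?,?,?)",
                 [(1, "a", "x"), (1, "a", "x"), (2, "b", "y")])
dupes = conn.execute("""
    SELECT c1, c2, c3, COUNT(*) FROM t
    GROUP BY c1, c2, c3 HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # the duplicated combination, with its count
```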
|
```
With _cte (col1, col2, col3, id) As
(
Select cOl1, col2, col3, Count(*)
From mySchema.myTable
Group By Col1, Col2, Col3
Having Count(*) > 1
)
Select t.*
From _Cte As c
Join mySchema.myTable As t
On c.col1 = t.col1
And c.col2 = t.col2
And c.col3 = t.col3
```
|
How to select non-distinct rows with a distinct on multiple columns
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to do a join and a WHERE clause using an inner query. The actual query is a bit more complicated than the one below, but I have simplified it for this question. Consider a people table with two rows: the first name is George in both, but the last_name is Lloyd in one row and Lloyds in the other. I want to be able to return both rows. I am looking for Lloyd - returning that in the inner join and then trying to do a LIKE in the outer query. This only returns the first row with Lloyd, though, not the one with the s.
Is it possible to do this? If I was looking for Lloyd and hardcoding it I could do
```
like 'Lloyd%'
```
but in my case there are many rows I'm returning, so I want to do it dynamically.
```
select * from people p
join
(select p1.id, p1.first_name, p1.last_name
from people p1
where p1.first_name = 'George' and p1.last_name = 'Lloyd' and p1.id = 17)
un on un.id = p.id
where p.first_name = un.first_name and p.last_name like '%' || un.last_name || '%'
```
Thanks
|
You are only receiving one row because your query is predicated on just one row (assuming id is unique in your table)
Remove the id = 17
```
select *
from people
where first_name = :first_name
and last_name like :last_name
```
|
Because your join condition is on `un.id = p.id`, your result set is always restricted to `id = 17`.
It sounds like this is the query you were trying to write:
```
select * from people p
join
(select p1.id, p1.first_name, p1.last_name
from people p1
where p1.first_name = 'George' and p1.last_name = 'Lloyd' and p1.id = 17)
un on p.first_name = un.first_name and p.last_name like '%' || un.last_name || '%'
```
... which I would personally clean up a little to this:
```
select p2.*
from people p1
join people p2
on p2.first_name = p1.first_name
and p2.last_name like '%' || p1.last_name || '%'
where p1.first_name = 'George'
and p1.last_name = 'Lloyd'
and p1.id = 17
```
|
Using a parameter from another query in a like clause
|
[
"",
"sql",
"oracle",
""
] |
I have a table of data collected over time in the following format:
```
Title | Date | Value
-----------------------
test1 | 2015-09-18 | 99
test1 | 2015-09-02 | 97
test1 | 2015-08-31 | 101
test1 | 2015-08-03 | 11
test1 | 2015-07-20 | 100
test2 | 2015-09-05 | 102
test2 | 2015-09-04 | 101
test2 | 2015-08-22 | 91
test2 | 2015-07-19 | 76
test2 | 2015-07-12 | 66
```
I'd like to be able to output the **last result** for each month in columns, one row per distinct title. The column headings (Sept, Aug, July) aren't important, by the way. There will always be a 3 month timespan.
```
Title | Sept | Aug | July
----------------------------
test1 | 99 | 101 | 100
test2 | 102 | 91 | 76
```
Is this possible? I've thought about the use of CTE but I'm a little confused as to the next step.
Any help/advice is appreciated!
|
You can do this using a CTE to help prepare a result set for the actual SELECT.
```
WITH cte AS (
SELECT Title, MAX([Date]) d, ROW_NUMBER() OVER(
PARTITION BY Title ORDER BY DATEFROMPARTS(YEAR([Date]), MONTH([Date]), 1) DESC) rn
FROM t1
GROUP BY Title, DATEFROMPARTS(YEAR([Date]), MONTH([Date]), 1)
)
SELECT t.Title, MAX(CASE WHEN cte.rn = 1 THEN t.Value END) m1,
MAX(CASE WHEN cte.rn = 2 THEN t.Value END) m2,
MAX(CASE WHEN cte.rn = 3 THEN t.Value END) m3
FROM cte INNER JOIN t1 t ON
t.Title = cte.Title AND t.[Date] = cte.d
GROUP BY t.Title
```
See working [SQLFiddle](http://sqlfiddle.com/#!6/2fbad/13)
*Note:* This will only take the last 3 months of data per "Title", but will be un-aligned if, for example, the last month for test1 is Sept. and the last month for test2 is Aug.
|
Do a `GROUP BY`, use `CASE` to do conditional `MAX` for each desired month.
```
select title,
max(case when Month(Date) = 9 then value end),
max(case when Month(Date) = 8 then value end),
max(case when Month(Date) = 7 then value end)
from tablename
group by title
```
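A hedged SQLite sketch of the conditional-`MAX` pivot, with `strftime('%m', ...)` standing in for SQL Server's `Month()` and a reduced set of the question's rows:

```python
import sqlite3

# One row per Title; each month's value lands in its own column via
# MAX(CASE ...). The CASE without ELSE yields NULL, which MAX ignores.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (Title, Date, Value)")
conn.executemany("INSERT INTO readings VALUES (?,?,?)", [
    ("test1", "2015-09-18", 99), ("test1", "2015-08-31", 101),
    ("test1", "2015-07-20", 100), ("test2", "2015-09-05", 102),
])
rows = conn.execute("""
    SELECT Title,
           MAX(CASE WHEN strftime('%m', Date) = '09' THEN Value END) AS Sept,
           MAX(CASE WHEN strftime('%m', Date) = '08' THEN Value END) AS Aug,
           MAX(CASE WHEN strftime('%m', Date) = '07' THEN Value END) AS July
    FROM readings GROUP BY Title ORDER BY Title
""").fetchall()
print(rows)  # months with no data come back as NULL/None
```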
|
Top result by date as columns per unique item
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to have 2 different columns with different date cutoffs in MySQL.
For example I have this table
```
id name amount date
1 Jane 20.00 2015-08-14
2 Joe 10.00 2015-08-15
3 Joe 20.00 2015-08-21
4 Jane 30.00 2015-09-21
5 Joe 20.00 2015-09-25
6 Jane 10.00 2015-09-25
```
I want to get the sum of amount grouped by name, and display two columns for different date cutoffs: one
where `date <= '2015-08-31'` and the other one
where `date <= '2015-09-30'`.
My desired output is
```
id name amount_aug amount_sep
1 Jane 20.00 60.00
2 Joe 30.00 50.00
```
My query so far is `select name, SUM(amount) amount_sum from table_name group by name`, which will simply output
```
name amount_sum
Jane 60.00
Joe 50.00
```
|
```
SELECT
name,
SUM(
IF( date <= '2015-08-31', amount, 0 )
) amount_aug,
SUM(
IF( date <= '2015-09-30', amount, 0 )
) amount_sep
FROM table_name
GROUP BY name
```
|
Here is an example:
```
select
name,
sum(case when date <= '2015-08-31' then amount else 0 end) amount_on_or_before_aug,
sum(case when date <= '2015-09-30' then amount else 0 end) amount_on_or_before_sept
from table_name
group by name
```
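Both answers are the same conditional-aggregation idea. A runnable SQLite check against the question's exact rows:

```python
import sqlite3

# Each SUM only accumulates rows passing its own date condition,
# so one GROUP BY produces both cutoff columns at once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (name, amount, date)")
conn.executemany("INSERT INTO payments VALUES (?,?,?)", [
    ("Jane", 20.0, "2015-08-14"), ("Joe", 10.0, "2015-08-15"),
    ("Joe", 20.0, "2015-08-21"), ("Jane", 30.0, "2015-09-21"),
    ("Joe", 20.0, "2015-09-25"), ("Jane", 10.0, "2015-09-25"),
])
rows = conn.execute("""
    SELECT name,
           SUM(CASE WHEN date <= '2015-08-31' THEN amount ELSE 0 END),
           SUM(CASE WHEN date <= '2015-09-30' THEN amount ELSE 0 END)
    FROM payments GROUP BY name ORDER BY name
""").fetchall()
print(rows)  # matches the desired amount_aug / amount_sep output
```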
|
Two where "date" clause to have different column
|
[
"",
"mysql",
"sql",
""
] |
I have found the below query in one our `stored procedures`
```
SELECT * FROM TABLE1
WHERE CONVERT(DATE,DateTo) BETWEEN @wkstdate AND @wkenddate
```
Since the usage of functions in the `where` clause may hinder performance, I have changed it as below:
```
SELECT * FROM TABLE1
WHERE DateTo BETWEEN @wkstdate AND @wkenddate
```
The result is the same after changing the code, but I am not sure whether both will give the same result all the time. Are there any scenarios where the two versions bring different results?
> (P.S: @wkstdate and @wkenddate are `DATE` values & DateTo is a
> `DATETIME` value)
Appreciate Your Suggestions
|
This will not yield the same result.
Let's say your `DateTo`, which is a `DATETIME` value, has a time component:
```
'2015-09-21 01:00:00'
```
Your `@wkenddate` is `'2015-09-21'`. The `WHERE DateTo BETWEEN @wkstdate AND @wkenddate` will not retrieve the above row since `'2015-09-21 01:00:00'` > `@wkenddate`.
For more example:
```
CREATE TABLE tbl(DateTo DATETIME)
INSERT INTO tbl
SELECT CAST('2015-09-21 00:00:00.000' AS DATETIME) UNION ALL
SELECT CAST('2015-09-21 16:10:49.047' AS DATETIME) UNION ALL
SELECT CAST('2015-09-22 16:10:49.047' AS DATETIME) UNION ALL
SELECT CAST('2015-09-20 16:10:49.047' AS DATETIME)
DECLARE @wkstdate DATE = '20150921',
@wkenddate DATE = '20150921'
SELECT *
FROM tbl
WHERE DateTo BETWEEN @wkstdate AND @wkenddate
SELECT * FROM tbl
WHERE (CONVERT(DATE,DateTo) BETWEEN @wkstdate AND @wkenddate)
DROP TABLE tbl
```
Now, using function in `WHERE` clause does make your query un-SARGable but there are exceptions. One of them is [`CAST`ing to `DATE`](https://dba.stackexchange.com/questions/34047/cast-to-date-is-sargable-but-is-it-a-good-idea).
Another alternative if you do not want to `CAST` to `DATE` is to not use the `BETWEEN` operator. Instead use `>= and <`:
```
WHERE
DateTo >= @wkstdate
AND DateTo < DATEADD(DAY, 1, @wkenddate)
```
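The half-open range idea can be illustrated in SQLite with TEXT timestamps (string comparison here, so the semantics differ slightly from SQL Server's `DATETIME` - with a bare date as the upper bound, `BETWEEN` drops even the midnight row - but it shows why the `>= and <` form is the safe one):

```python
import sqlite3

# BETWEEN on a bare date string misses every timestamp on the end
# date; the half-open range [start, start + 1 day) keeps them all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (DateTo TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?)", [
    ("2015-09-21 00:00:00",), ("2015-09-21 16:10:49",),
    ("2015-09-22 16:10:49",), ("2015-09-20 16:10:49",),
])
between = conn.execute(
    "SELECT COUNT(*) FROM tbl "
    "WHERE DateTo BETWEEN '2015-09-21' AND '2015-09-21'").fetchone()[0]
half_open = conn.execute(
    "SELECT COUNT(*) FROM tbl WHERE DateTo >= '2015-09-21' "
    "AND DateTo < date('2015-09-21', '+1 day')").fetchone()[0]
print(between, half_open)  # BETWEEN finds none; half-open finds both 21st rows
```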
|
The `BETWEEN` operator will not cope properly with Times on your date data. So if you have two dates `1/1/2000` and `2/1/2000`, and then ask for `BETWEEN` to work on a datetime like `2/1/2000 14:00`, then this datetime does NOT fall between them. Stripping the Time portion off the datetime is advisable, using your `CONVERT` function as in your example is probably the best way. There are other ways to strip off the Time portion, but `CONVERT` is probably the most efficient. (My example using `dd/mm/yyyy` format)
The least efficient thing I noticed about your stored procedure is the use of `SELECT * FROM`. Try to use explicit field selections to minimize the load on SQL Server if you want more efficient stored procedures.
|
Comparing a Date column with a datetime value
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Trying to create a script for auto-numbering rows using a Sequence object, where rows having the same values in columns [Surname, Birthdate, Sex] are categorized as duplicates and each of these duplicates is assigned the same 'Ext ID' by the Sequence object.
When new rows are run through the script, if it can't find matches based on the selected columns it should increment, e.g. R5 to R6, but if it can find matches in the table it should assign the previously assigned [Ext ID] of the pre-existing match and not redundantly increment a new [Ext ID].
```
Ref Surname Firstname Birthdate Sex ExternalSource Ext ID
1 AAA AA 1/1/2000 M Alpha Null
2 BBB BB 1/1/2001 F Beta Null
3 AAA AA 1/1/2000 M Beta Null
4 CCC CC 1/1/2003 M Alpha Null
5 BBB BB 1/1/2001 F Gamma Null
6 DDD DD 1/1/2004 M Beta Null
7 CCC CC 1/1/2003 M Alpha Null
8 AAA AA 1/1/2000 M Gamma Null
```
Such that the Script populates the column [Ext ID] accordingly as below table
```
Ref Surname Firstname Birthdate Sex ExternalSource Ext ID
1 AAA AA 1/1/2000 M Alpha R1
2 BBB BB 1/1/2001 F Beta R2
3 AAA AA 1/1/2000 M Beta R1
4 CCC CC 1/1/2003 M Alpha R3
5 BBB BB 1/1/2001 F Gamma R2
6 DDD DD 1/1/2004 M Beta R4
7 CCC CC 1/1/2003 M Alpha R3
8 AAA AA 1/1/2000 M Gamma R1
```
Business Scenario> The table represents the union of all client records in separate business applications. Rows having the same Surname, Birthdate & Sex are considered to be the same 'Customer' across these different business applications, so assigning the same [Ext ID] value helps categorize these similar rows together, such that [Ext ID] can be used externally to query and retrieve all records where these values are the same.
---
**Further Clarification**
Assuming the 'desired' script populates [EXT ID] of the 'FIRST' foundation table loaded in the database, please can someone create a script for populating [EXT ID] in another table containing a fresh set of new rows such that, based on the same selected columns [Surname, Birthdate, Sex], if matches are found between this 'NEW' table and the 'FIRST' foundation table,
```
Ref Surname Firstname Birthdate Sex ExternalSource Ext ID
9 AAA AA 1/1/2000 M Alpha Null
10 EEE EE 1/1/2001 F Beta Null
11 AAA AA 1/1/2000 M Beta Null
12 CCC CC 1/1/2003 M Alpha Null
13 EEE EE 1/1/2001 F Gamma Null
14 FFF FF 1/1/2004 M Beta Null
15 CCC CC 1/1/2003 M Alpha Null
16 AAA AA 1/1/2000 M Gamma Null
```
the [EXT ID] assigned in the 'NEW' Table is retrieved from the [EXT ID] in the 'FIRST' Table but if there are no matches, the assigned [EXT ID] in the 'NEW' Table should continue from the end of the last assigned [EXT ID] in the 'FIRST' foundation Table e.g if [EXT ID] in 'FIRST' Table was R12, the [EXT ID] in the 'NEW' Table should be R13
```
Ref Surname Firstname Birthdate Sex ExternalSource Ext ID
9 AAA AA 1/1/2000 M Alpha R1
10 EEE EE 1/1/2001 F Beta R5
11 AAA AA 1/1/2000 M Beta R1
12 CCC CC 1/1/2003 M Alpha R3
13 EEE EE 1/1/2001 F Gamma R5
14 FFF FF 1/1/2004 M Beta R6
15 CCC CC 1/1/2003 M Alpha R3
16 AAA AA 1/1/2000 M Gamma R1
```
Reason being that in reality, new fresh records will be aggregated from these Business Applications regularly of which it will be ideal to always reference the [EXT ID] in the 'FIRST' foundation table for 'NEW' tables since this column permanently serves as the external reference key to Source Business Applications
|
You can use [`DENSE_RANK()`](https://msdn.microsoft.com/en-GB/library/ms173825.aspx) to give each of your unique combinations of `Surname`, `BirthDate` and `Sex` a unique number, then simply place this into an update statement to update your column:
```
UPDATE t
SET ExtID = NewExtID
FROM ( SELECT ExtID,
NewExtID = 'R' + CAST(DENSE_RANK()
OVER(ORDER BY Surname, Birthdate, Sex)
AS VARCHAR(10))
FROM dbo.YourTableName
) AS t;
```
**FULL WORKING EXAMPLE**
```
IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL
DROP TABLE #T;
CREATE TABLE #T
( Ref INT,
Surname VARCHAR(50),
Firstname VARCHAR(50),
Birthdate DATE,
Sex CHAR(1),
ExternalSource VARCHAR(50),
ExtID VARCHAR(11)
);
INSERT #T (Ref, Surname, Firstname, Birthdate, Sex, ExternalSource)
VALUES
(1, 'AAA', 'AA', '2000-01-01', 'M', 'Alpha'),
(2, 'BBB', 'BB', '2001-01-01', 'F', 'Beta'),
(3, 'AAA', 'AA', '2000-01-01', 'M', 'Beta'),
(4, 'CCC', 'CC', '2003-01-01', 'M', 'Alpha'),
(5, 'BBB', 'BB', '2001-01-01', 'F', 'Gamma'),
(6, 'DDD', 'DD', '2004-01-01', 'M', 'Beta'),
(7, 'CCC', 'CC', '2003-01-01', 'M', 'Alpha'),
(8, 'AAA', 'AA', '2000-01-01', 'M', 'Gamma');
UPDATE t
SET ExtID = NewExtID
FROM ( SELECT ExtID,
NewExtID = 'R' + CAST(DENSE_RANK()
OVER(ORDER BY Surname, Birthdate, Sex)
AS VARCHAR(10))
FROM #T
) AS t;
SELECT *
FROM #T
ORDER BY Ref;
```
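The core of `DENSE_RANK()` here is just "same key, same ID". A plain-Python sketch of that keying (numbered by first appearance rather than by sort order, so it illustrates the grouping, not the exact ordering `DENSE_RANK` uses):

```python
# Identical (Surname, Birthdate, Sex) tuples share one sequential ID,
# mirroring the DENSE_RANK() assignment above. Sample keys are made up.
rows = [("AAA", "2000-01-01", "M"), ("BBB", "2001-01-01", "F"),
        ("AAA", "2000-01-01", "M"), ("CCC", "2003-01-01", "M")]
ids = {}
ext = ["R%d" % ids.setdefault(key, len(ids) + 1) for key in rows]
print(ext)  # duplicates reuse the same R-number
```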
---
**ADDENDUM**
For maintaining this, I would suggest a slightly different approach, and have a separate table to maintain your ExtID, which would allow you to leverage an identity column:
```
CREATE TABLE dbo.Ext
(
ID INT IDENTITY(1, 1) NOT NULL,
Surname VARCHAR(50) NOT NULL,
BirthDate DATE NOT NULL,
Sex CHAR(1) NOT NULL,
ExtID AS 'R' + CAST(ExtIntID AS VARCHAR(10)),
CONSTRAINT PK_Ext__ID PRIMARY KEY (ID),
);
CREATE UNIQUE NONCLUSTERED INDEX UQ_Ext__Surname_Birthdate_Sex ON dbo.Ext (Surname, Birthdate, Sex);
```
Realistically, with a similar index on your base tables you probably don't need this ExtID column, you can just join to the above table to get the ExtID with not a huge performance hit, but on the off chance you did need to update the ExtID column you could use:
```
MERGE dbo.Ext AS e WITH (HOLDLOCK)
USING
( SELECT DISTINCT Surname, Birthdate, Sex
FROM dbo.YourTable
) AS t
ON t.Surname = e.Surname
AND t.Birthdate = e.Birthdate
AND t.Sex = e.Sex
WHEN NOT MATCHED THEN
INSERT (Surname, Birthdate, Sex)
VALUES (t.Surname, t.Birthdate, t.Sex);
UPDATE t
SET ExtID = r.ExtID
FROM db.YourTable AS t
INNER JOIN dbo.Ext AS e
ON e.Surname = t.Surname
AND e.Birthdate = t.Birthdate
AND e.Sex = t.Sex
WHERE t.ExtID IS NULL;
```
I have used `MERGE WITH (HOLDLOCK)` because this is the least vulnerable method I know of meeting a race condition, and getting unique constraint violations.
If all of this is not suitable, then I would still suggest as above (if possible) removing the `R` from the identifier, and making it just an integer. You can, if needed, create the text column as a computed column:
```
CREATE TABLE #T
( Ref INT,
Surname VARCHAR(50),
Firstname VARCHAR(50),
Birthdate DATE,
Sex CHAR(1),
ExternalSource VARCHAR(50),
ExtIntID INT,
ExtID AS 'R' + CAST(ExtIntID AS VARCHAR(10))
);
```
This will just make getting the maximum easier, and would probably make other uses easier too.
Then, your update statement is fairly similar:
```
UPDATE t
SET ExtIntID = NewExtID
FROM ( SELECT t.ExtIntID,
NewExtID = CASE WHEN e.ExtIntID IS NOT NULL THEN e.ExtIntID
ELSE
ISNULL(m.MaxID, 0) +
DENSE_RANK() OVER(PARTITION BY e.ExtIntID
ORDER BY t.Surname, t.Birthdate, t.Sex)
END
FROM #T AS t
LEFT JOIN
( SELECT Surname, Birthdate, Sex, ExtIntID = MAX(ExtIntID)
FROM #T
GROUP BY Surname, Birthdate, Sex
) AS e
ON e.Surname = t.Surname
AND e.Birthdate = t.Birthdate
AND e.Sex = t.Sex
OUTER APPLY (SELECT MAX(ExtIntID) FROM #T) AS m (MaxID)
WHERE t.ExtIntID IS NULL
) AS t;
```
If you can't make an `INT` column, again the update is pretty similar, you just need to mess around with formatting more:
```
UPDATE t
SET ExtID = NewExtID
FROM ( SELECT t.ExtID,
NewExtID = CASE WHEN e.ExtID IS NOT NULL THEN e.ExtID
ELSE
'R' +
CAST(ISNULL(m.MaxID, 0) +
DENSE_RANK() OVER(PARTITION BY e.ExtID
ORDER BY t.Surname, t.Birthdate, t.Sex)
AS VARCHAR(10))
END
FROM #T AS t
LEFT JOIN
( SELECT Surname, Birthdate, Sex, ExtID = MAX(ExtID)
FROM #T
GROUP BY Surname, Birthdate, Sex
) AS e
ON e.Surname = t.Surname
AND e.Birthdate = t.Birthdate
AND e.Sex = t.Sex
OUTER APPLY (SELECT MAX(CONVERT(INT, SUBSTRING(ExtID, 2, LEN(ExtID)))) FROM #T) AS m (MaxID)
WHERE t.ExtID IS NULL
) AS t;
```
|
Use SELECT DISTINCT to get each person only once, and then use ROW_NUMBER() to create your IDs. The DISTINCT has to happen in a derived table, before ROW_NUMBER() runs, otherwise every row would get its own number and the DISTINCT would remove nothing:
```
SELECT Surname, Birthdate, Sex,
       ROW_NUMBER() OVER (ORDER BY Surname, Birthdate, Sex) as RowNum
FROM (SELECT DISTINCT Surname, Birthdate, Sex FROM mytable) AS d
```
Then you can use this to assign those values using an UPDATE statement:
```
UPDATE mytable
SET [Ext ID] = 'R'+cast(RowNum as varchar)
FROM
mytable
INNER JOIN
    (SELECT Surname, Birthdate, Sex,
            ROW_NUMBER() OVER (ORDER BY Surname, Birthdate, Sex) as RowNum
     FROM (SELECT DISTINCT Surname, Birthdate, Sex FROM mytable) AS d
    ) AS generateIds
    ON generateIds.Surname=mytable.Surname
    AND generateIds.Birthdate=mytable.Birthdate
    AND generateIds.Sex=mytable.Sex
```
|
How to Auto-Number Duplicate Rows Using Sequence Based on Multiple Duplicate Columns (T-SQL)
|
[
"",
"sql",
"t-sql",
"ssis",
""
] |
I have a question:
I want to join two SQL queries into one query using UNION to avoid duplicates, but I need to know whether the data comes from the first select query or from the second select query.
Sample data:
```
A TABLE B TABLE
-----------------------------------------------------------------------------
01 JOHN 01 JOHN
02 JUAN 02 PETER
03 MARTIN 03 MARTIN
```
I have something like this:
```
Select A.code,A.name from A where some conditions
union
Select B.code,B.name from B where different conditions
```
RESULT TABLE
---
```
01 JOHN
02 JUAN
02 PETER
03 MARTIN
```
This works fine, but now, if I want to know whether the data comes from the first query or from the second, I try something like this:
```
Select A.code,A.name, 'A' from A where some conditions
union
Select B.code,B.name, 'B' from B where different conditions
```
RESULT TABLE
---
```
01 JOHN 'A'
01 JOHN 'B'
02 JUAN 'A'
02 PETER 'B'
03 MARTIN 'A'
03 MARTIN 'B'
```
But this doesn't avoid the "duplicates", because 'A' is different from 'B'. So the question is: can I do something so that the 'A' rows aren't compared against the 'B' rows? Is there another way to obtain the expected result?
EDIT:
The expected result
---
```
01 JOHN 'A'
02 JUAN 'A'
02 PETER 'B'
03 MARTIN 'A'
```
|
Here's another way of doing it:
```
SELECT code, name, MIN(SourceTable) AS SourceTable
FROM (
SELECT code, name, 'A' AS SourceTable
FROM A
UNION
SELECT code, name, 'B' AS SourceTable
FROM B) t
GROUP BY code, name
ORDER BY code
```
[**Demo here**](http://sqlfiddle.com/#!3/8774d6/8)
or perhaps:
```
SELECT code, name, SourceTable
FROM (
SELECT code, name, SourceTable,
ROW_NUMBER() OVER (PARTITION BY code, name
ORDER BY SourceTable) AS rn
FROM (
SELECT code, name, 'A' AS SourceTable
FROM A
UNION ALL
SELECT code, name, 'B' AS SourceTable
FROM B) t) AS x
WHERE x.rn = 1
```
[**Demo here**](http://sqlfiddle.com/#!3/8774d6/7)
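The first variant is easy to verify against SQLite with the question's sample data (the `MIN(SourceTable)` collapses a row found in both tables down to its 'A' occurrence):

```python
import sqlite3

# UNION tags each row with its source; grouping with MIN picks 'A'
# whenever the same (code, name) exists in both tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (code, name);
CREATE TABLE B (code, name);
INSERT INTO A VALUES ('01','JOHN'), ('02','JUAN'), ('03','MARTIN');
INSERT INTO B VALUES ('01','JOHN'), ('02','PETER'), ('03','MARTIN');
""")
rows = conn.execute("""
    SELECT code, name, MIN(SourceTable) AS SourceTable
    FROM (SELECT code, name, 'A' AS SourceTable FROM A
          UNION
          SELECT code, name, 'B' AS SourceTable FROM B) t
    GROUP BY code, name
    ORDER BY code, name
""").fetchall()
print(rows)  # matches the question's expected result
```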
|
```
Select A.code, A.name, 'A' from A where some conditions
union
Select B.code, B.name, 'B' from B
where different conditions
and not exists (select 1 from A
where some conditions
and A.code = B.code
and A.name = B.name)
```
Do the `UNION` as before, but don't return B rows that have already been returned from the A select.
|
select UNION except one column
|
[
"",
"sql",
"sql-server-2008",
"union",
""
] |
I hope I'm not repeating another question... I can't find the solution; all the solutions are for MySQL.
I want to order the result of my query by the values in the IN clause, but on the internet and Stack Overflow I only find solutions for MySQL, like here:
[Order by FIELD in MYSQL](https://stackoverflow.com/questions/13416153/order-by-field-in-mysql)
[Ordering by specific field value first](https://stackoverflow.com/questions/14104055/ordering-by-specific-field-value-first)
[MySQL ORDER BY FIELD with %](https://stackoverflow.com/questions/9917829/mysql-order-by-field-with)
using the FIELD or FIND_IN_SET functions. Can I use something similar to this?
thanks!!
---
I just tried... and it doesn't work. This is my query:
```
select * from (
select 4 as dato1 from systables where tabid = 1 union
select 2 as dato1 from systables where tabid = 1 union
select 1 as dato1 from systables where tabid = 1 union
select 3 as dato1 from systables where tabid = 1
)
where dato1 in (4,2,1,3)
order by instr('4,2,1,3', dato1)
```
This is what the query shows:
```
dato1
1
2
4
3
```
I don't understand...
|
For informix 12.10 (Developer Edition) the [INSTR()](http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_2336.htm%23ids_sqs_2336?lang=en "INSTR()") does not seem to be properly converting the function arguments to character types.
I did a explicit cast to VARCHAR and the [INSTR()](http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_2336.htm%23ids_sqs_2336?lang=en "INSTR()") function starts to return proper values.
```
select
dato1
from (
select 4 as dato1 from systables where tabid = 1 union
select 2 as dato1 from systables where tabid = 1 union
select 1 as dato1 from systables where tabid = 1 union
select 3 as dato1 from systables where tabid = 1
)
where dato1 in (4,2,1,3)
order by instr('4,2,1,3', CAST(dato1 AS VARCHAR(255)))
```
Returns:
```
dato1
4
2
1
3
```
EDIT:
To clarify the use of the [INSTR()](http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_2336.htm%23ids_sqs_2336?lang=en "INSTR()") function:
```
select
dato1
, instr('4213', CAST(dato1 AS VARCHAR(255))) AS position
from (
select 4 as dato1 from systables where tabid = 1 union
select 2 as dato1 from systables where tabid = 1 union
select 1 as dato1 from systables where tabid = 1 union
select 3 as dato1 from systables where tabid = 1
)
where dato1 in (4,2,1,3)
order by instr('4213', CAST(dato1 AS VARCHAR(255)))
```
Returns:
```
dato1 position
4 1
2 2
1 3
3 4
```
That being said, the [DECODE()](http://www-01.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_1447.htm?lang=en "DECODE()") suggestion from Ricardo seems to be a better option.
|
One approach that works in many databases is something like this:
```
where x in ('a', 'b', 'c')
order by instr('a,b,c', x)
```
Of course, delimiters can cause a problem, so this is safer:
```
where x in ('a', 'b', 'c')
order by instr(',a,b,c,', ',' || x || ',')
```
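SQLite also ships an `instr()` function, so the delimiter-safe version can be checked directly with a few made-up values:

```python
import sqlite3

# instr returns the 1-based position of ',x,' inside the wrapped
# list ',a,b,c,', which becomes the sort key.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [("b",), ("c",), ("a",)])
rows = conn.execute("""
    SELECT x FROM t
    WHERE x IN ('a','b','c')
    ORDER BY instr(',a,b,c,', ',' || x || ',')
""").fetchall()
order = [r[0] for r in rows]
print(order)  # rows come back in the order listed in IN()
```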
|
ORDER BY values specified in the IN() clause, in generic SQL and Informix
|
[
"",
"sql",
"informix",
""
] |
How can I create a new column for an existing dataset that is the running total of an existing column - partitioned by some identifier?
```
ID | Value | New Value
---|--------|--------------------
1 | 10 | 10
1 | 5 | 15 = 10 + 5
1 | 3 | 18 = 10 + 5 + 3
2 | 45 | 45
2 | 15 | 60 = 45 + 15
```
I'm used to accomplishing this in SQL (Oracle) with a simple SUM() OVER() statement, but that syntax is apparently not supported in PROC SQL.
If possible, I would like to accomplish this within PROC SQL (I am much more experienced with SQL than SAS coding).
Thanks!
Mike.
|
Joe - your answer did not work for whatever reason, but got me on the right track to figure it out. Thank you!
```
data want;
set have;
by id;
if first.id then running_total = 0;
if first.id then retained_total = 0;
running_total = retained_total + value;
retained_total = running_total;
retain retained_total;
run;
```
|
In the data step this is accomplished with the `sum statement`.
```
data want;
set have;
by id;
if first.id then running_total=0;
running_Total + value;
run;
```
In PROC SQL this would be impossible unless you have an ordering variable (in which case you could do something like this):
```
proc sql;
create table want as
select id, value,
(select sum(value) from have V
where V.id=H.id and V.ordervar le H.ordervar
) as running_total
from have H
;
quit;
```
But SAS doesn't have a `partition by` concept - the SAS data step is far more powerful than that.
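The data-step logic above maps directly onto an ordinary loop. Here it is in Python against the question's sample data, with comments tying each line back to the SAS statements:

```python
# Per-ID running total: reset the accumulator on the first row of
# each ID, then keep adding - the same shape as the data step.
rows = [(1, 10), (1, 5), (1, 3), (2, 45), (2, 15)]
out, prev_id, total = [], None, 0
for id_, value in rows:
    if id_ != prev_id:          # "if first.id then running_total=0;"
        total, prev_id = 0, id_
    total += value              # "running_Total + value;"
    out.append((id_, value, total))
print(out)  # reproduces the question's New Value column
```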
|
Partitioned Running Totals in SAS
|
[
"",
"sql",
"sum",
"sas",
"cumulative-sum",
""
] |
Table schemas (SQL Server 2012)
```
Create Table InterestBuffer
(
AccountNo CHAR(17) PRIMARY KEY,
CalculatedInterest MONEY,
ProvisionedInterest MONEY,
AccomodatedInterest MONEY,
)
Create Table #tempInterestCalc
(
AccountNo CHAR(17) PRIMARY KEY,
CalculatedInterest MONEY
)
```
I am doing an upsert: update the rows that exist and insert the others.
```
UPDATE A
SET A.CalculatedInterest = A.CalculatedInterest + B.CalculatedInterest
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo
INSERT INTO InterestBuffer
SELECT A.AccountNo, A.CalculatedInterest, 0, 0
FROM #tempInterestCalc A
LEFT JOIN InterestBuffer B ON A.AccountNo = B.AccountNo
WHERE B.AccountNo IS NULL
```
All is working fine. The problem occurs during concurrent executions. I am inserting data into `#tempInterestCalc` by joining various other tables, including a left join with the `InterestBuffer` table, and a different set of data is inserted into `#tempInterestCalc` for each concurrent execution.
My problem is that sometimes executions become blocked by another execution until I commit them serially.
My question is: as I am providing different sets of data, there should not be any row-lock impact on the other concurrent operations. Any suggestion will be appreciated.
**UPDATE 1:** I have used `SP_LOCK` for InterestBuffer table. It says `IndId = 1, Type = KEY, Mode = X, Status = GRANT`.
I think the update and insert blocks other transaction to make phantom reads.
**UPDATE 2:** Sorry! Previously I said that the update was fine, but now I realize that the first transaction's write is blocking the second transaction's write. In the first transaction I run the update and insert. In the second transaction, after I insert data into the #tempInterestCalc table, I just do the following and it works fine.
```
--INSERT DATA INTO #tempInterestCalc
SELECT * FROM #tempInterestCalc
RETURN
--UPDATE InterestBuffer
--INSERT InterestBuffer
```
**UPDATE 3:** I think my problem is to read data from InterestBuffer during update and insert into InterestBuffer.
**UPDATE 4:** My answer below works sometimes, if I `REBUILD INDEX` on BranchCode in the InterestBuffer table. Is there any reason that a batch insert/update causes problems with the index???
**UPDATE 5:** I have read that if the maximum number of rows on a page needs to be locked for a batch update, then SQL Server may lock that page. Is there any way to see which rows are contained on which page, or which page is going to be locked and released during execution??
**UPDATE 6:** I am providing my scenario.
```
CREATE TABLE [dbo].[Account](
[AccountNo] [char](17) NOT NULL,
[BranchCode] [char](4) NOT NULL,
CONSTRAINT [PK_Account] PRIMARY KEY CLUSTERED
(
[AccountNo] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[InterestBuffer](
[AccountNo] [char](17) NOT NULL,
[BranchCode] [char](4) NOT NULL,
[CalculatedInterest] [money] NOT NULL,
CONSTRAINT [PK_Buffer] PRIMARY KEY CLUSTERED
(
[AccountNo] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```
Query for Branch 0001:
```
BEGIN TRAN
Declare @BranchCode AS Char(4) = '0001'
Declare @CalculatedInterestNew MONEY = 10
CREATE TABLE #tempInterestCalc
(
AccountNo Char(17),
BranchCode Char(4),
CalculatedInterestNew MONEY,
CalculatedInterestOld MONEY
)
INSERT INTO #tempInterestCalc
SELECT A.AccountNo, A.BranchCode, ISNULL(B.CalculatedInterest, 0), B.CalculatedInterest
FROM Account A
LEFT JOIN InterestBuffer B ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode
UPDATE A
SET A.CalculatedInterest = B.CalculatedInterestNew + @CalculatedInterestNew
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode
INSERT INTO InterestBuffer
SELECT A.AccountNo, A.BranchCode, A.CalculatedInterestNew + @CalculatedInterestNew
FROM #tempInterestCalc A
WHERE A.CalculatedInterestOld IS NULL
DROP TABLE #tempInterestCalc
--ROLLBACK
--COMMIT TRAN
```
For Branch 0002 and 0003, just change the @BranchCode variable value to 0002 and 0003 and run them simultaneously.
[](https://i.stack.imgur.com/Lk4Ws.png)
[](https://i.stack.imgur.com/wLZCd.png)
[](https://i.stack.imgur.com/23hC5.png)
|
You could have a potential deadlock problem because you are doing another read against the `InterestBuffer` table after a write. A transaction could deadlock if another has blocked part of the `InterestBuffer` table for the update and your transaction is trying to read from it again for the select needed to do the insert.
You said you are already left joining with `InterestBuffer` while calculating your `#tempInterestCalc` table... why not use it to cache some of the data needed from `InterestBuffer` so you don't have to read from it again?
Change your temp table to:
```
Create Table #tempInterestCalc
(
AccountNo CHAR(17) PRIMARY KEY,
CalculatedInterestNew MONEY,
CalculatedInterestOld MONEY
)
```
You might possibly want to set repeatable read isolation level before beginning your transaction with:
```
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```
It's more restrictive locking, but will prevent other transactions from trying to process the same records at the same time, which you probably need because you are combining the old and new values. Consider this scenario:
* Transaction 1 reads data and wants to add 0.03 to existing
`CalculatedInterest` of 5.0.
* Transaction 2 reads data and wants to add 0.02 to the 5.0.
* Transaction 1 updates `CalculatedInterest` to 5.03.
* Transaction 2's update overwrites the values from transaction one to
5.03 (instead of adding to it and coming up with 5.05).
Maybe you don't need this if you're **sure** that transactions will never touch the same records, but if they might, read committed won't let transaction 2 read the values until transaction 1 is finished with them.
Then separate your transaction to a distinct read phase first and then a write phase:
```
--insert data into #tempInterestCalc and include the previous interest value
insert into #tempInterestCalc
select AccountNo,
Query.CalculatedInterest CalculatedInterestNew,
InterestBuffer.CalculatedInterest CalculatedInterestOLD
from
(
...
) Query
left join InterestBuffer
on Query.AccountNo = InterestBuffer.AccountNo
UPDATE A
SET A.CalculatedInterest = B.CalculatedInterestNew + B.CalculatedInterestOld
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo
INSERT INTO InterestBuffer
SELECT A.AccountNo, A.CalculatedInterestNew, 0, 0
FROM #tempInterestCalc A
--no join here needed now to read from InterestBuffer
WHERE CalculatedInterestOld is null
```
This shouldn't deadlock... but you could see "unnecessary" blocking due to [Lock Escalation](https://technet.microsoft.com/en-us/library/ms184286%28v=sql.100%29.aspx), particularly if you are updating a large number of rows. Once there are more than 5000 locks on a table it will escalate to a table. No other transactions will then be able to continue until the transaction completes. This isn't necessarily a bad thing... you just want to make sure that your transactions are as short as possible so as to not lock other transactions for too long. If lock escalation is causing you problems, there are [some things you can do to mitigate this](https://support.microsoft.com/en-us/kb/323630) such as:
* Breaking your transaction up to do smaller chunks of work so as to create fewer locks.
* Ensuring you have an efficient query plan.
* Making judicious use of lock hints.
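A hedged sketch of the first point, chunking the update so a single statement never approaches the ~5000-lock escalation threshold (the batch size `4000` is arbitrary, and the `<>` filter assumes the new value always differs from the stored one):

```
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- touch at most 4000 rows per statement, keeping lock counts low
    UPDATE TOP (4000) A
    SET A.CalculatedInterest = B.CalculatedInterestNew
    FROM InterestBuffer A
    INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo
    WHERE A.CalculatedInterest <> B.CalculatedInterestNew; -- skip rows already done
    SET @rows = @@ROWCOUNT;
END
```

Note that this only releases locks between chunks when each statement runs in its own implicit transaction; inside one explicit `BEGIN TRAN` the locks still accumulate.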
Check your query plan and see if there are any table scans of `InterestBuffer` in any statement, particularly in your initial population of `#tempInterestCalc`, since you didn't show how you are building that.
If you will absolutely never be updating accounts in one branch at the same time, then you might consider keeping your primary key the same but changing your clustered index to `Branch, Account number` (order is significant). This will keep all your records of the same branch physically next to each other and will reduce the chance that your plan will do a table scan or lock pages that other transactions might need. You then can also use the `PAGLOCK` hints, which will encourage SQL Server to lock by page instead of row and prevent reaching the threshold to trigger lock escalation. To do this, modifying your code from **UPDATE 6** in your question would look something like this:
```
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRAN
Declare @BranchCode AS Char(4) = '0001'
Declare @CalculatedInterestNew MONEY = 10
CREATE TABLE #tempInterestCalc
(
AccountNo Char(17),
BranchCode Char(4),
CalculatedInterestNew MONEY,
CalculatedInterestOld MONEY
)
INSERT INTO #tempInterestCalc
SELECT A.AccountNo, A.BranchCode, ISNULL(B.CalculatedInterest, 0), B.CalculatedInterest
FROM Account A
LEFT JOIN InterestBuffer B
ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode
UPDATE A WITH (PAGLOCK)
SET A.CalculatedInterest = B.CalculatedInterestNew + @CalculatedInterestNew
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode
INSERT INTO InterestBuffer WITH (PAGLOCK)
SELECT A.AccountNo, A.BranchCode, A.CalculatedInterestNew + @CalculatedInterestNew
FROM #tempInterestCalc A
WHERE A.CalculatedInterestOld IS NULL
DROP TABLE #tempInterestCalc
--ROLLBACK
--COMMIT TRAN
```
Because the records are physically sorted together this should only lock a few pages... even when updating thousands of records. You could then run a transaction for branch 0003 at the same time as 0001 without any blocking issues. However you will probably have a blocking problem if you try to do an adjacent branch such as 0002 at the same time. This is because some records from branch 0001 and 0002 will probably share the same page.
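To check how rows actually map to pages (and therefore how far a `PAGLOCK` reaches), here is a hedged sketch using the undocumented but widely used `%%physloc%%` virtual column together with the `sys.dm_tran_locks` DMV:

```
-- Which file:page:slot holds each row of InterestBuffer
SELECT AccountNo, BranchCode,
       sys.fn_PhysLocFormatter(%%physloc%%) AS file_page_slot
FROM InterestBuffer
ORDER BY BranchCode, AccountNo;

-- In another session, while a branch transaction is open:
-- look for resource_type = 'PAGE' rows to see which pages are locked
SELECT resource_type, resource_description, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```

If two branches' rows share a `file_page_slot` page, their transactions can block each other even when they touch disjoint rows.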
If you really need to separate your branches you could look into using a [Partitioned Table or Index](https://msdn.microsoft.com/en-us/library/ms190787%28v=sql.100%29.aspx). I don't know much about them, but it sounds like it might be useful for what you are trying to do; it also probably comes with its own set of complications.
|
i) If dirty reads are not a problem, you can use NOLOCK, or you can SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the top of your proc; both are the same. **You should just consider the "problem of dirty read" data before using NOLOCK.**
ii) You have not explained your problem very well. What is the use of #tempInterestCalc and #temp?
iii) Where does #tempInterestCalc get populated from?
iv) During the insert process the records of #temp B are not being used, so you can remove the left join and use EXISTS. But this depends on the above points being clear.
v) You are getting records from InterestBuffer into a temp table, then updating them back, then inserting into the same table again. This is not clear.
|
Concurrent execution in SQL Server
|
[
"",
"sql",
"sql-server",
"rowlocking",
""
] |
It appears that I cannot use a computed value in a `WHERE` clause?
```
SELECT id, TIMESTAMPDIFF(YEAR,dateOfBirth,CURDATE()) AS age
FROM user
WHERE age >= 20
```
I get `ERROR 1054 (42S22): Unknown column 'age' in 'where clause'`. A possible workaround is to use a subquery, but doesn't that overly complicate things?
|
You ***can*** use a computed value in a `WHERE` clause, but not its alias. You have to type the whole expression again:
```
WHERE TIMESTAMPDIFF(YEAR,dateOfBirth,CURDATE()) >=20
```
> An alias can be used in a query select list to give a column a different name. You can use the alias in GROUP BY, ORDER BY, or HAVING clauses to refer to the column:
>
> Standard SQL disallows references to column aliases in a WHERE clause. This restriction is imposed because when the WHERE clause is evaluated, the column value may not yet have been determined.
**[Reference](https://dev.mysql.com/doc/refman/5.5/en/problems-with-alias.html)**
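If the goal is simply to avoid repeating the expression, MySQL (as an extension to standard SQL) does let you reference the alias in a `HAVING` clause, even without a `GROUP BY`:

```
SELECT id, TIMESTAMPDIFF(YEAR,dateOfBirth,CURDATE()) AS age
FROM user
HAVING age >= 20
```

`HAVING` is applied after the select list is evaluated, which is exactly why the alias is visible there; on large tables a plain `WHERE` on the re-typed expression may still perform better because it can use indexes.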
|
Yes, but MySQL's behaviour isn't what you think:
```
SELECT id, TIMESTAMPDIFF(YEAR,dateOfBirth,CURDATE()) AS age
FROM user
WHERE TIMESTAMPDIFF(YEAR,dateOfBirth,CURDATE()) >= 20
```
MySQL's `WHERE` clause only looks at columns, not at aliases.
|
SQL WHERE clause with computed value
|
[
"",
"mysql",
"sql",
""
] |
I need help with my problem.
I have a value to be stored in a table, and the value is a range.
ex:
```
Category Value
-------------------- ---------------------------
1 < 70
2 > 70 - 90
3 > 90 - 100
4 > 100 - 120
5 > 120
```
So, when I have a value of 80, it is in category 2.
ex:
```
Value Category
-------------------- ---------------------------
80 2
75 2
115 4
50 1
99 2
```
What is the easiest way to store this range-type parameter in a table, and to query it easily?
Thank you in advance
|
I would design it this way.
```
CREATE TABLE CategoryRange(
CategoryId INT IDENTITY(1, 1),
MinValue INT NULL,
MaxValue INT NULL
)
```
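Populating it for the ranges in the question might look like this (a sketch; a `NULL` bound marks an open end of the range):

```
INSERT INTO CategoryRange (MinValue, MaxValue) VALUES
(NULL, 70),   -- category 1: < 70
(70, 90),     -- category 2: >= 70 and < 90
(90, 100),    -- category 3
(100, 120),   -- category 4
(120, NULL);  -- category 5: >= 120
```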
To get the corresponding category:
*Note that range is interpreted as `value >= min AND value < max`*
```
DECLARE @values TABLE(value INT)
INSERT INTO @values VALUES
(80), (75), (115), (50), (99)
SELECT
v.value,
r.CategoryId
FROM @values v
CROSS APPLY(
SELECT CategoryId
FROM CategoryRange
WHERE
(MinValue IS NULL OR v.value >= MinValue)
AND (
MaxValue IS NULL
OR v.value < MaxValue
)
)r
```
[**SQL Fiddle**](http://sqlfiddle.com/#!6/7ed5e/2/0)
```
| value | CategoryId |
|-------|------------|
| 80 | 2 |
| 75 | 2 |
| 115 | 4 |
| 50 | 1 |
| 99 | 3 |
```
|
Adding to Felix's solution: if the ranges are continuous, you can use an even simpler design for the range table.
```
-- Accept only one value of range, RangeTo
DECLARE @ContinuousRange table (CategoryId int identity(1,1), RangeTo int primary key)
INSERT @ContinuousRange VALUES (70), (90), (100), (120)
,(2147483647) -- Add this to the last entry for completing the range set
-- Your table
DECLARE @Values table (Value int)
INSERT INTO @Values VALUES (80), (75), (115), (50), (99)
-- Usage
SELECT *
FROM @Values v
OUTER APPLY
(
SELECT TOP 1 * FROM @ContinuousRange WHERE v.value <= RangeTo
ORDER BY RangeTo
) rng
```
|
Range type parameter in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have following 3 tables:
```
POS_Transactions(TransactionDate,TerminalID,TransactionTypeID,TotalAmount)
Terminal(TerminalID,CountryID)
Country(CountryID,CountryName,CurrencyName)
```
Now I am using inner joins to link these tables, but I am not getting the desired result, i.e. it is not grouping `Country-wise`.
```
SELECT C.countryname 'CountryName',
C.currencyname 'CurrencyName',
transactiondate,
Sum(CASE transactiontypeid
WHEN 6 THEN 1
ELSE 0
END) 'Number of Cards Issue',
Sum(CASE transactiontypeid
WHEN 6 THEN totalamount
ELSE 0
END) 'Total Amount Loaded',
Count(DISTINCT CASE transactiontypeid
WHEN 4 THEN pan
ELSE NULL
END)'Number of Card Redeemed',
Sum(CASE transactiontypeid
WHEN 4 THEN 1
ELSE 0
END) 'Number of Sales Transaction',
Sum(CASE transactiontypeid
WHEN 4 THEN totalamount
ELSE 0
END) 'Total Values of Sale Transaction'
INTO #temp
FROM pos_transactions p
INNER JOIN terminal T
ON T.terminalid = p.terminalid
INNER JOIN country C
ON T.countryid = C.countryid
GROUP BY transactiondate,
C.countryname,
C.currencyname,
C.countryid
select [Number of Cards Issue],[Total Amount Loaded], [Number of Card Redeemed],[Number of Sales Transaction],
[Total Values of Sale Transaction],CountryName,CurrencyName from #temp
where (TransactionDate >= @DateFrom)
and (TransactionDate < @DateTo)
drop table #temp
```
For example, if there are two `Transactions` records for `Country` UAE, then it shows individual results:
```
(CountryName,Numbers of Cards Issued,CurrencyName,Total Amount Loaded,Number of Sales Transaction,Total Values of Sale Transaction)
UAE 1 SAR 320.000 0 0.0000
UAE 2 SAR 320.000 0 0.0000
```
Instead, it should group the results for `Country` UAE. It should be:
```
UAE 3 SAR 640.000 0 0.0000
```
What am I doing wrong?
|
You're grouping by *transactionDate* (among others) and putting that data into #temp. Nothing in the second query changes this, so the result is still grouped by *transactionDate* whether you select it or not.
So I guess you're left with a couple of options:
1. Remove *transactionDate* from the "select into #temp"-query. I guess this is not a good option though or you would have removed it already.
2. Group by and aggregate the second query by all selected columns.
Example (edit: in my haste I forgot to use aggregates. Updated):
```
SELECT SUM([Number of Cards Issue]), SUM([Total Amount Loaded]), SUM([Number of Card Redeemed]),
SUM([Number of Sales Transaction]), SUM([Total Values of Sale Transaction]),
CountryName, CurrencyName
FROM #temp
WHERE (TransactionDate >= @DateFrom) and (TransactionDate < @DateTo)
GROUP BY
CountryName, CurrencyName
```
|
If the TransactionDate is irrelevant for your aggregation, you should remove it from both the select and the group by. If you are trying to group by the transaction date and not the time, may I suggest you use the following in the group by and select parts of your statement.
```
GROUP BY CAST(transactiondate AS DATE)
```
|
GROUP BY clause not showing desired result
|
[
"",
"sql",
"sql-server",
"group-by",
"inner-join",
""
] |
I have a table named `LoginTable` with just 2 columns: `id` as VarChar and `login_date` as Date. Here is a small sample:
```
+-------+------------+
| id | login_date |
+-------+------------+
| user1 | 2014-10-15 |
| user1 | 2014-10-15 |
| user2 | 2014-10-15 |
| user3 | 2014-10-15 |
| user1 | 2014-10-16 |
| user3 | 2014-10-16 |
| user4 | 2014-10-16 |
| user2 | 2014-10-17 |
| user4 | 2014-10-17 |
+-------+------------+
```
I would like to write a SQL query that shows, for each day, how many users logged in the next day. For instance, Oct 15th has 3 unique logins - out of those 3, only 2 users logged in again on Oct 16th. The output should be this:
```
+------------+--------------+
| Date | NextDayLogin |
+------------+--------------+
| 2014-10-15 | 2 |
| 2014-10-16 | 1 |
| 2014-10-17 | 0 |
+------------+--------------+
```
My best try (theoretically) was this:
```
SELECT
DISTINCT(id),
DATE(login_date) as 'Dates'
FROM LoginTable t1
INNER JOIN (
SELECT
DISTINCT(id) as id2,
DATE(login_date) as 'DatesNew'
FROM LoginTable
WHERE 'DatesNew' = DATE(t1.login_date) + INTERVAL '1' DAY
) t2
ON DATE(t1.login_date) = t2.DatesNew
AND t1.id = t2.id2
```
However, I receive an error: `Unknown column t1.log_date in where clause` .
How can this be achieved? I am using MySQL if this matters.
|
You can `left join` on the table with the specified +1 date to get the desired count.
[Fiddle with sample data](http://sqlfiddle.com/#!9/9fb35/15)
```
select t1.login_date, count(distinct t2.id) as nextdaylogin
from t t1
left join t t2 on t1.login_date = t2.login_date - interval 1 day and t1.id = t2.id
group by t1.login_date
```
|
Just set up a new column, maybe called `consecutive_days`.
Then each time the user logs in, test the `curr_date` against a `last_login_datetime` (a column you can create as well), and if the times are less than 24 hours apart, set `consecutive_days = consecutive_days + 1;`
If not, then set it equal to 0.
|
SQL - Counting re-occuring records based on date
|
[
"",
"mysql",
"sql",
""
] |
I need to write something like:
```
select *
from HR_EMP_TRANSFER t1, HR_EMP_PROFILE_MSTR_TT t2
where (t1.APPROVAL_STATUS='P' and t1.EMPNO = '13724')
OR (t2.APPROVAL_STATUS='P' and t2.EMPNO = '13724')
```
but currently it is not giving me results only for empno 13724.
Both tables have different structures, but these two columns are the same. I just need to combine these two queries somehow and check whether they return any rows:
```
select * from HR_EMP_TRANSFER t1 where t1.APPROVAL_STATUS='P' and t1.EMPNO = '13724'
select * from HR_EMP_PROFILE_MSTR_TT t2 where t2.APPROVAL_STATUS='P' and t2.EMPNO = '13724'
```
|
You could use a `FULL OUTER JOIN`:
```
SELECT *
FROM HR_EMP_TRANSFER t1
FULL OUTER JOIN
HR_EMP_PROFILE_MSTR_TT t2
ON ( t1.APPROVAL_STATUS = t2.APPROVAL_STATUS
AND t1.EMPNO = t2.EMPNO )
WHERE (t1.APPROVAL_STATUS='P' and t1.EMPNO = '13724')
OR (t2.APPROVAL_STATUS='P' and t2.EMPNO = '13724')
```
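If all you need is to know whether either table returns a row (as the question suggests), a hedged alternative skips the join entirely:

```
SELECT CASE
         WHEN EXISTS (SELECT 1 FROM HR_EMP_TRANSFER t1
                      WHERE t1.APPROVAL_STATUS = 'P' AND t1.EMPNO = '13724')
           OR EXISTS (SELECT 1 FROM HR_EMP_PROFILE_MSTR_TT t2
                      WHERE t2.APPROVAL_STATUS = 'P' AND t2.EMPNO = '13724')
         THEN 1 ELSE 0
       END AS has_pending
FROM DUAL
```

Oracle can stop probing each table as soon as one matching row is found, so this is usually cheaper than materialising the full result.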
|
You might want `UNION ALL`:
```
SELECT t1.*
FROM HR_EMP_TRANSFER t1
WHERE t1.APPROVAL_STATUS='P' and t1.EMPNO = '13724'
UNION ALL
SELECT t2.*
FROM HR_EMP_PROFILE_MSTR_TT t2
WHERE t2.APPROVAL_STATUS='P' and t2.EMPNO = '13724';
```
This assumes that the two tables have the same structure (same columns, in the same order, with the same types). It returns the records in separate rows, rather than in one row.
|
AND and OR in oracle sql
|
[
"",
"sql",
"oracle",
""
] |
I have one vb.net windows application and I want to deliver it to my client with 1 year validity.
After one year this software will automatically stop working or ask for renewal.
The client PC doesn't have internet access.
Please tell me a secure way to do this.
|
When the program is installed, have it set a registry value with the current date. Then, on every subsequent program start, have it check that registry value against the current time. If more than a year has passed, do whatever you plan on doing to lock up your application.
[This post](https://social.msdn.microsoft.com/Forums/vstudio/en-US/5b22e94c-37a9-4be5-ad55-3d9229220194/how-to-use-add-read-change-delete-registry-keys-with-vbnet?forum=vbgeneral) has some excellent info on the specifics of adding, modifying, and accessing registry values in `vb.net`.
|
Check the date.
```
If dateToday > dateProgramSold.AddYears(1) Then
    'open a form that can't be closed, saying the program has expired
End If
```
|
expire application after 1 year span from client PC, No internet is available on client PC
|
[
"",
"sql",
"sql-server",
"vb.net",
"desktop-application",
""
] |
I'm trying to join two tables where the columns `emp_id` and `scheme_id` may be null or unpopulated in either table, but if either is populated then I need to return the total `pen_ee` for that employee for each scheme (further table descriptions below). I can't edit the table structure and have to work with what I have.
I've been trying to use a full join to do this, but don't understand whether you can do a full join on the two fields `emp_id` & `scheme_id` to get the required result.
**Table PAYAUDPEN**
This is the first two months of the year.
- Employee A has given 44.06 to scheme BMAL.
- Employee B has given 98.06 to scheme BMAL.
- Employee B has given 98.06 to scheme CLFL.
```
emp_id, period_id, scheme_id, pen_ee
A, 201601, BMAL, 22.03
A, 201602, BMAL, 22.03
B, 201601, BMAL, 98.06
B, 201602, CLFL, 98.06
```
**Table PAYISPEN**
This is the third & current month of the year. The system always puts the current month into this table)
- Employee A gave 22.03.
- Employee B gave 98.06.
(Note employee B did not contribute to the BMAL scheme again in month 3 which is part of the issue).
```
emp_id, scheme_id, pen_ee
A, BMAL, 22.03
B, CLFL, 98.06
```
**Required Result**
The SQL statement needs to return the 3 periods added together, for each employee for each scheme that they contributed to.
- Employee A would be 44.06 + 22.03=66.09 for scheme BMAL.
- Employee B would be 98.06 + NULL =98.06 for scheme BMAL.
- Employee B would be 98.06 + 98.06=196.12 for scheme CLFL.
```
A, BMAL, 66.09
B, BMAL, 98.06
B, CLFL, 196.12
```
To create the basics of the two tables and populate with the example data above run the following queries.
```
CREATE TABLE [dbo].[payaudpen](
[emp_id] [char](10) NOT NULL,
[period_id] [char](6) NOT NULL,
[scheme_id] [char](10) NOT NULL,
[pen_ee] [numeric](15, 2) NULL)
CREATE TABLE [dbo].[payispen](
[emp_id] [char](10) NOT NULL,
[scheme_id] [char](10) NOT NULL,
[pen_ee] [numeric](15, 2) NULL )
INSERT INTO payaudpen VALUES ('A','201601','BMAL','22.03'), ('A','201602','BMAL','22.03'), ('B','201601','BMAL','98.06'), ('B','201602','CLFL','98.06')
INSERT INTO payispen VALUES ('A','BMAL','22.03'), ('B','CLFL','98.06')
```
Current statement that I'm using:
```
SELECT a.emp_id,
a.scheme_id,
SUM(a.pen_ee)+AVG(b.pen_ee)
FROM payaudpen a
FULL JOIN payispen b
ON a.emp_id=b.emp_id
GROUP BY a.scheme_id, a.emp_id
```
**Incorrect result**
Doesn't return the correct values for employee B for each scheme.
```
A, BMAL, 66.09
B, BMAL, 196.12
B, CLFL, 196.12
```
|
You are trying to sum across two tables, use union all to make the tables into one relation with more rows, instead of join to make the tables into a relation with more columns:
```
WITH all_records AS (SELECT emp_id
, scheme_id
, pen_ee
FROM payispen
UNION ALL
SELECT emp_id
, scheme_id
, pen_ee FROM payaudpen)
SELECT emp_id, scheme_id, SUM(pen_ee)
FROM all_records
GROUP BY emp_id, scheme_id
```
Results:
```
emp_id scheme_id (No column name)
A BMAL 66.09
B BMAL 98.06
B CLFL 196.12
```
|
Apparently you want to join only rows that have both the same `emp_id` *and* the same `scheme_id`. This is possible in outer joins, just as it is in inner joins. I infer that you also want to merge the `emp_id` and `scheme_id` columns from the two tables so that when `a` does not provide them, they come from `b`, instead. This will do it:
```
SELECT
COALESCE(a.emp_id, b.emp_id) AS emp_id,
COALESCE(a.scheme_id, b.scheme_id) AS scheme_id,
SUM(a.pen_ee)+AVG(b.pen_ee) AS pen_ee
FROM
payaudpen a
FULL JOIN payispen b
ON a.emp_id = b.emp_id AND a.scheme_id = b.scheme_id
WHERE
COALESCE(a.emp_id, b.emp_id) in ('A','B')
AND (a.period_id IS NULL OR a.period_id in ('201601','201602'))
GROUP BY COALESCE(a.scheme_id, b.scheme_id), COALESCE(a.emp_id, b.emp_id)
```
Note the use of `COALESCE()` to handle the cases where table `a` does not provide `emp_id` or `scheme_id`; with SQL Server you could also use `ISNULL()` in its place. Note also the allowance for `a.period_id IS NULL` in the `WHERE` condition -- this is necessary (in conjunction with the `COALESCE()`ing) to include data from `b` rows that do not have corresponding `a` rows.
|
SQL Full join on two columns
|
[
"",
"sql",
"sql-server",
"group-by",
"join",
""
] |
I have the following query that pulls all orders for an order:
```
SELECT
order_id,
GROUP_CONCAT(order_item_name SEPARATOR ' | ') as 'items'
FROM
wp_woocommerce_order_items oi1
WHERE
oi1.order_item_type = 'line_item'
AND order_id = 422
GROUP BY
order_id
```
It returns my desired results:
[](https://i.stack.imgur.com/ZEA46.png)
What I'd like to do now is to add data from another table that looks like this:
`SELECT * FROM wp_woocommerce_order_itemmeta where order_item_id = 17`
`wp_woocommerce_order_itemmeta` and `wp_woocommerce_order_items` join on `order_item_id`
[](https://i.stack.imgur.com/H4YT0.png)
So my final desired results would look something like:
```
order_item_name:_qty:_line_subtotal
order_id | items
--------- -----------------------------------
422 | Metal Polish:1:13.09
```
|
Since each value in `wp_woocommerce_order_itemmeta` is in its own row, you'll need to join that table multiple times.
```
SELECT order_id, GROUP_CONCAT(
    order_item_name,':',qty.meta_value,':',subtotal.meta_value SEPARATOR ' | '
  ) as 'items'
FROM wp_woocommerce_order_items oi1
JOIN wp_woocommerce_order_itemmeta qty ON
    qty.order_item_id = oi1.order_item_id
    AND qty.meta_key = '_qty'
JOIN wp_woocommerce_order_itemmeta subtotal ON
    subtotal.order_item_id = oi1.order_item_id
    AND subtotal.meta_key = '_line_subtotal'
WHERE oi1.order_item_type = 'line_item'
AND order_id = 422
GROUP BY order_id
```
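An alternative sketch (same assumed table and column names) that avoids joining the meta table twice by first pivoting the meta rows with conditional aggregation:

```
SELECT oi1.order_id,
       GROUP_CONCAT(
         oi1.order_item_name, ':', m.qty, ':', m.subtotal
         SEPARATOR ' | '
       ) AS items
FROM wp_woocommerce_order_items oi1
JOIN (
    -- one row per order item, with _qty and _line_subtotal as columns
    SELECT order_item_id,
           MAX(CASE WHEN meta_key = '_qty'           THEN meta_value END) AS qty,
           MAX(CASE WHEN meta_key = '_line_subtotal' THEN meta_value END) AS subtotal
    FROM wp_woocommerce_order_itemmeta
    GROUP BY order_item_id
) m ON m.order_item_id = oi1.order_item_id
WHERE oi1.order_item_type = 'line_item'
  AND oi1.order_id = 422
GROUP BY oi1.order_id
```

This reads the meta table once and scales more gracefully if you later need additional meta keys.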
|
Do you need something like this?
```
SELECT * FROM wp_woocommerce_order_itemmeta as wc
inner join
(SELECT
order_id,
GROUP_CONCAT(order_item_name SEPARATOR ' | ') as 'items'
FROM
wp_woocommerce_order_items oi1
WHERE
oi1.order_item_type = 'line_item'
AND order_id = 422
GROUP BY
order_id) as order_counts
using (order_id)
```
|
Using GROUP_CONCAT and CONCAT when joining values from different tables
|
[
"",
"mysql",
"sql",
""
] |
I am implementing search using a sql, this is the query
```
Select CM.ID, ProductName,ImageURL,SKU,AA.Name as
MemberName,Price,Discount,DM.Name as CategoryName from tblMasterProduct CM
INNER JOIN tblProducts OM ON CM.ID=OM.MasterProductID
INNER JOIN tblMasterCategory DM ON CM.SubCategoryID=DM.ID
INNER JOIN tblOnlineRetailMember AA ON OM.MemberID=AA.ID
WHERE 1=1 AND CM.ProductName LIKE '% watches %' AND CM.SubCategoryID= 112
AND (OM.Price - OM.Discount) BETWEEN 0 And 200000
```
The issue is that the query returns results containing the word "watches", whereas I would also like to get all the entries containing the word "watch".
EDIT: Sorry for the incomplete information. The query is written in a stored procedure and the string is entered by the user, so I cannot get "watch" from the user.
|
Try using the [SOUNDEX](https://msdn.microsoft.com/en-us/library/ms187384.aspx) or [DIFFERENCE](https://msdn.microsoft.com/en-us/library/ms188753.aspx) functions in MSSQL. If `ProductName` is multi-word, you can use [PARSENAME](https://msdn.microsoft.com/en-us/library/ms188006.aspx) to split it into words and use DIFFERENCE to find one similar word in the string:
```
select * from t WHERE DIFFERENCE(ProductName,'watches')>=3
```
`SQLfiddle demo`
|
You can use OR ..
```
Select CM.ID, ProductName,ImageURL,SKU,AA.Name as
MemberName,Price,Discount,DM.Name as CategoryName from tblMasterProduct CM
INNER JOIN tblProducts OM ON CM.ID=OM.MasterProductID
INNER JOIN tblMasterCategory DM ON CM.SubCategoryID=DM.ID
INNER JOIN tblOnlineRetailMember AA ON OM.MemberID=AA.ID
WHERE 1=1 AND ( CM.ProductName LIKE '%watches%' or CM.ProductName LIKE '%watch%' ) AND CM.SubCategoryID= 112
AND (OM.Price - OM.Discount) BETWEEN 0 And 200000
```
|
SQL Like query to get closest match to the parameter
|
[
"",
"sql",
"sql-server",
""
] |
My data is currently in the form:
```
ID Fill1 Fill2 Fill3 Fill4 Fill5
1 01JAN2014 28JAN2014 26FEB2014 . .
2 . 05FEB2012 03MAR2012 02APR2012 01MAY2012
3 10MAR2015 08APR2015 07MAY2015 05JUN2015 03JUL2015
4 . . 20FEB2013 18MAR2013 .
```
And I am trying to create treatment "episodes" per ID. In other words, for each ID I want to find the first and last non-empty Fills and then calculate the difference between the two dates. For example for ID=1 I need to find the time difference between 01JAN2014 and 26FEB2014. That is,
Fill1 - Fill3 = episodeduration
but for ID=4 I would need to find,
Fill3 - Fill4 = episodeduration
where episodeduration is a newly created variable. I have over 30k unique IDs with varying "first" and "last" Fill dates. Thanks in advance for your help.
|
```
data have;
input Id Fill1 date9. Fill2 date9. Fill3 date9. Fill4 date9. Fill5 date9.;
format Fill1 - Fill5 date9.;
cards;
1 01JAN201428JAN201426FEB2014
2 05FEB201203MAR201202APR201201MAY2012
3 10MAR201508APR201507MAY201505JUN201503JUL2015
4 20FEB201318MAR2013
;
run;
data want;
set have;
array fill {5};
format first last date9.;
do i = 1 to dim(fill);
first=coalesce(first, fill(i));
last=coalesce(fill(i), last);
end;
episodeduration = last - first;
drop i;
run;
```
Use `array` statement to create array and loop through variables and `coalesce()` function to find first/last non missing.
Comment: this code will find first/last by going from first to last variable. If you need first/last in terms of dates, min and max functions are good: `min(of fill1 -- fill5);` - no need to loop.
|
vasja's SAS version looks pretty nice, here's how it could be done SQL side (which is pretty much exactly the same procedure).
```
Select *,
DATEDIFF(day,
CONVERT(date,COALESCE(date1, date2, date3, date4, date5)),
CONVERT(date, COALESCE(date5,date4,date3,date2,date1))
)
from SomeTableNameAboutEpisodes;
```
Basically, you use COALESCE to find the first non-null value, and you convert it into a date. You then take the difference between the two dates. This, however, only works if the empty cells hold no values (NULL) and there is no empty line. (You could simply use **ISNULL(DATEDIFF(...), 0)** though.)
|
Using SAS or SQL find the first and last non-empty value within a row?
|
[
"",
"sql",
"date",
"sas",
"proc-sql",
"date-difference",
""
] |
I have the following code:
```
<cfquery name="somequery1" datasource="somedsn">
SELECT somecolumn1, somecolumn2, somecolumn3
FROM sometable
WHERE someid = <cfqueryparam cfsqltype="cf_sql_integer" value="1">
</cfquery>
<cfquery name="somequery2" dbtype="query">
SELECT *
FROM somequery1
</cfquery>
```
My code manager says I need to change the Query of Query to:
```
<cfquery name="somequery2" dbtype="query">
SELECT somecolumn1, somecolumn2, somecolumn3
FROM somequery1
</cfquery>
```
Can someone explain why I would need to redefine the column references in the Query of Query? Surely, the wildcard operator takes care of this.
Is there any technical or performance gain to redefining the column references in the SELECT clause of a Coldfusion Query of Queries? This assumes that the column references have already been explicitly set in the database query that is supplied to the Query of Queries.
I believe the use of the wildcard operator makes the code cleaner and easier to update, because any changes to the column references only need to be done once.
|
As you've discussed with Rahul: your "code manager" is offering good advice if this was a DB-based query, but I think it's a bit egregious in the context of a CFML query-on-query.
I suspect they have heard the guidance in the context of DB queries, and have not really thought it through sufficiently when giving guidance on in-memory query operations.
In short: your code is more optimal as it stands than the changes they're advising.
|
**EDIT:**
As discussed, yes, it is correct that your current code is more modular, considering that it would incorporate any changes to your query (for example, if you need to change the selected columns), i.e. it will take care of any columns which you might add in future. So your present query is efficient and good to proceed with.
---
The wildcard character surely takes care of it if you want to select all the columns; however, it is nowadays not recommended and usually not preferred to use the wildcard character when selecting columns. You can have a look at Aaron Bertrand's [Bad habits to kick : using SELECT \* / omitting the column list](https://sqlblog.org/2009/10/10/bad-habits-to-kick-using-select-omitting-the-column-list):
> But there are several reasons why you should avoid SELECT \* in
> production code:
>
> 1. You can be returning unnecessary data that will just be ignored,
> since you don't usually need every single column. This is wasteful
> in I/O, since you will be reading all of that data off of the pages,
> when perhaps you only needed to read the data from the index pages.
> It is also wasteful in network traffic and in many cases the memory
> required by the consuming application to hold the results.
> 2. When you use SELECT \* in a join, you can introduce complications
> when multiple tables have columns with the same name (not only on
> the joined columns, such as OrderID, which are typically the same,
> but also peripheral columns like CreatedDate or Status). On a
> straight query this might be okay, but when you try to order by one
> of these columns, or use the query in a CTE or derived table, you
> will need to make adjustments.
> 3. While applications should *not* be relying on ordinal position of
> columns in the resultset, using SELECT \* will ensure that when you
> add columns or change column order in the table, the shape of the
> resultset should change. Ideally, this should only happen
> intentionally.
|
Using a wildcard operator in a query of queries
|
[
"",
"sql",
"coldfusion",
"qoq",
""
] |
I have a table containing 42000+ zipcodes, longitude, latitude, and state information. What's the most accurate and fastest query to return all zipcodes within a 25 mile radius of the zipcode entered?
Current code (I don't think it's accurate):
```
SELECT
zipcode, (
3959 * acos (
cos ( radians(78.3232) )
* cos( radians( latitude ) )
* cos( radians( longitude ) - radians(65.3234) )
+ sin ( radians(78.3232) )
* sin( radians( latitude ) )
)
) AS distance
FROM Location
HAVING distance < 25
ORDER BY distance
```
|
**On Accuracy**
The only way to calculate distance accurately is with 3D trig, as you're doing. You can read more on that topic here: <https://en.wikipedia.org/wiki/Geographical_distance>
Although giving a pretty accurate distance between the lat/lng center-points of zipcodes, **those center-points are arbitrarily picked, and the distance is calculated "as the crow flies"**, so you won't get an accurate representation of actual travel distance between two points within each.
For example, you may have two homes next-door to each other in adjacent zipcodes, or two homes on opposite ends of each zipcode, which will calculate as equidistant given this calculation.
The only way to correct that issue is to calculate address distance, which requires USPS data to map an address to a more specific point, or the use of an API like Google Maps, which will also calculate actual travel distance given available roads.
**On Performance**
There are a couple ways to speed up your query.
**1. Reduce the Real-time Math**
The fastest way to do your calculations in real-time is to precalculate and store the expensive trig values in columns in your table, e.g.:
```
ALTER TABLE Location
  ADD COLUMN cos_rad_lat DOUBLE,
  ADD COLUMN cos_rad_lng DOUBLE,
  ADD COLUMN sin_rad_lng DOUBLE,
  ADD COLUMN sin_rad_lat DOUBLE;
```
Then
```
UPDATE Location
SET cos_rad_lat = cos(radians(latitude)),
    cos_rad_lng = cos(radians(longitude)),
    sin_rad_lng = sin(radians(longitude)),
    sin_rad_lat = sin(radians(latitude));
```
Do your `cos(radians(78.3232))`-type calculations outside the query, so that math isn't done for each row of data. Note that `cos(radians(longitude) - radians(65.3234))` has to be expanded with the identity `cos(a - b) = cos(a)*cos(b) + sin(a)*sin(b)` before the precomputed columns can be used, which is why `sin_rad_lng` is stored as well.
Reducing all calculations to constant values (computed before getting to SQL) and precalculated columns will make your query look like this:
```
SELECT
    zipcode,
    3959 * acos(
        0.20239077538110228                                     -- cos(radians(78.3232))
        * cos_rad_lat
        * (cos_rad_lng * 0.4174960 + sin_rad_lng * 0.9086788)   -- cos(radians(longitude) - radians(65.3234))
        + 0.979304842243025                                     -- sin(radians(78.3232))
        * sin_rad_lat
    ) AS distance
FROM Location
HAVING distance < 25
ORDER BY distance
```
**2. Bounding-box Reduction**
Note: You can combine this with method 1.
You could probably increase performance slightly by adding a bounding-box reduction of zips in a subquery before doing the trig, but that may be more complicated than you would like.
For example, instead of:
```
FROM Location
```
You could do
```
FROM (
SELECT *
FROM Location
WHERE latitude BETWEEN A and B
AND longitude BETWEEN C and D
) AS Location
```
Where A, B, C, and D are numbers corresponding to your center-point ± about 0.3 (as each tenth of a degree of lat/lng corresponds to about 5-7 miles in the US).
This method gets tricky at -180 / 180 Longitude, but that doesn't affect the US.
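For example, with a center point at latitude 39.5, longitude -98.3 and a 25-mile radius (illustration values, not taken from the question), the bounding box works out to roughly:

```
-- 25 miles ≈ 25/69 ≈ 0.36 degrees of latitude
-- 0.36 / cos(radians(39.5)) ≈ 0.47 degrees of longitude at that latitude
WHERE latitude  BETWEEN 39.5 - 0.36 AND 39.5 + 0.36
  AND longitude BETWEEN -98.3 - 0.47 AND -98.3 + 0.47
```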
**3. Store All Calculated Distances**
Another thing you could do is precalculate the distances between all zips, and store them in a separate table:
```
CREATE TABLE LocationDistance (
zipcode1 varchar(5) NOT NULL REFERENCES Location(zipcode),
  zipcode2 varchar(5) NOT NULL REFERENCES Location(zipcode),
  distance double NOT NULL,
PRIMARY KEY (zipcode1, zipcode2),
INDEX (zipcode1, distance)
);
```
Populate this table with every combination of zip and their calculated distance.
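A sketch of that population query as a self-join (column names taken from the question's `Location` table; with 42,000+ zipcodes this produces on the order of 1.8 billion rows, so expect it to run for a long time):

```
INSERT INTO LocationDistance (zipcode1, zipcode2, distance)
SELECT a.zipcode,
       b.zipcode,
       3959 * acos(
           cos(radians(a.latitude)) * cos(radians(b.latitude))
         * cos(radians(b.longitude) - radians(a.longitude))
         + sin(radians(a.latitude)) * sin(radians(b.latitude))
       )
FROM Location a
JOIN Location b ON a.zipcode <> b.zipcode;
```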
Your query would then look like:
```
SELECT zipcode2
FROM LocationDistance
WHERE zipcode1 = 12345
AND distance < 25;
```
This would by far be the fastest solution, though it involves storing on the order of 1 Billion records.
|
This may or may not be the fastest, but you can do it by first precomputing the normal vectors (NV) for each coordinate pair and expressing each vector in terms of its X, Y and Z components:
```
NV = [Nx, Ny, Nz]
```
where
```
Nx = cos(radians(latitude))*cos(radians(longitude))
Ny = cos(radians(latitude))*sin(radians(longitude))
Nz = sin(radians(latitude))
```
Then the distance between any two coordinates can be computed by determining the difference of the two normal vectors NV1 and NV2 and using Pythagoras equation in three dimensions to get the straight line distance between the two points, that is the chord length C:
```
C = SQRT(dx^2+dy^2+dz^2)
```
where
```
dx = Nx1-Nx2
dy = Ny1-Ny2
dz = Nz1-Nz2
```
Then the great circle distance can be found with the following formula:
```
D = arcsin(C/2)*2*R
```
Where R is the radius of the sphere, in this case the earth, which is 3,959 mi.
Putting it all together:
```
select pt2.zip
, asin(power(power(pt1.nx-pt2.nx,2)
+power(pt1.ny-pt2.ny,2)
+power(pt1.nz-pt2.nz,2)
,.5)/2)*2*3959 distance
  from (select 78.3232 latitude
             , 65.3234 longitude
             , cos(radians(78.3232))*cos(radians(65.3234)) nx
             , cos(radians(78.3232))*sin(radians(65.3234)) ny
             , sin(radians(78.3232)) nz
        ) pt1
     , (select zip
             , latitude
             , longitude
             , cos(radians(latitude))*cos(radians(longitude)) nx
             , cos(radians(latitude))*sin(radians(longitude)) ny
             , sin(radians(latitude)) nz
          from location) pt2
having distance < 25;
```
To further optimize this you can calculate some bounds on the coordinates. Every degree of latitude is roughly equal to 69 miles, so you can limit your search to latitudes within ±(D/69). The number of miles per degree of longitude, however, varies with latitude, ranging from about 69 miles per degree at the equator to zero at the poles (69\*cos(latitude) miles per degree), so for longitude you use ±D/(69\*cos(latitude)).
```
where pt2.latitude between pt1.latitude - 25/69
and pt1.latitude + 25/69
and pt2.longitude between pt1.longitude - 25/(69*cos(radians(abs(pt1.latitude)+25/69)))
and pt1.longitude + 25/(69*cos(radians(abs(pt1.latitude)+25/69)))
```
|
MySQL Calculate the Distance between Zipcodes Faster and More Accurately?
|
[
"",
"mysql",
"sql",
""
] |
I have many applications. Each applications can have many languages assigned.
For each language there can be many translations.
Each translation has a `Key` column which is the primary key.
When 2 applications create a translation with `Key = "AdminAreaTitle"` there will be a duplicate key exception. I know this will rarely happen, but getting an exception because another application already has this key is a pretty bad user experience.
What I want is a translation key per application so the primary key of translation would be `Key` and I guess `ApplicationName`.
How would you extend the design to get that behavior?
**Removed wrong image!**
**UPDATE**
This is now a sql view on the tables/relations not entity framework view anymore.
WITH **surrogate keys**
[](https://i.stack.imgur.com/M0bVW.png)
**UPDATE 2**
WITH **natural keys**
I must say I prefer this ERD because it's simpler to design and reason about; the surrogate key version is more cluttered.
[](https://i.stack.imgur.com/Z4Oc2.png)
**UPDATE 3**
I created a new database with 3 tables and all natural keys see the screenshot, but I keep getting an error message from sql management studio:
[](https://i.stack.imgur.com/CS7yD.png)
|
The ERD you show is broken, because you show tables with their primary keys plus their relations (1:n, n:m) and these do not match. The keys of application and language suggest 1:n, but your graphic is labeled n:m.
What you might have now is something like (PK bold):
* application (**app\_name**)
* application\_language (**app\_name, iso\_lang**)
* translation (**key, iso\_lang**, text)
But you want each application to define their own texts, so add the app\_name to your composite PK for the translations table.
* application (**app\_name**)
* application\_language (**app\_name, iso\_lang**)
* translation (**key, app\_name, iso\_lang**, text)
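A minimal DDL sketch of the natural-key design above (SQL Server syntax; the column types are assumptions, adjust as needed):

```
CREATE TABLE application (
    app_name nvarchar(100) NOT NULL PRIMARY KEY
);

CREATE TABLE application_language (
    app_name nvarchar(100) NOT NULL REFERENCES application (app_name),
    iso_lang char(2)       NOT NULL,
    PRIMARY KEY (app_name, iso_lang)
);

CREATE TABLE translation (
    [key]    nvarchar(100) NOT NULL,
    app_name nvarchar(100) NOT NULL,
    iso_lang char(2)       NOT NULL,
    text     nvarchar(max) NOT NULL,
    PRIMARY KEY ([key], app_name, iso_lang),
    FOREIGN KEY (app_name, iso_lang)
        REFERENCES application_language (app_name, iso_lang)
);
```

With this schema, two different applications can each insert their own `AdminAreaTitle` key without conflict, because the key is only unique per (application, language) pair.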
|
In the E-R model, your database is described with two tables (Application) and (Language), and a relationship (Translate) between them (entities are "nouns", and relationships are "verbs").
In a real DBMS, a relationship is implemented as a table that contains both primary keys: a certain application being translated into a certain language (plus any other attributes of the translation itself). This makes the relationship unique (an application may be translated into many languages, and a language can translate many applications, but an application may be translated only once into French).
```
APPLICATION
AppName (PK)
...
LANGUAGE
LangName (PK)
...
TRANSLATE
AppName (PK)
LangName (PK)
...
```
|
How can I make this column an unique column
|
[
"",
"sql",
"sql-server",
"erd",
""
] |
I am looking for a TSQL query that can return just a single record per day. For example, with the following data I would want to return just the entry at 01:59:00 for each day; note that I don't really want to specify the seconds, as it could sometimes be 01:59:03 or similar, so preferably just 01:59.
```
id snapshotTime name code total onTime
1 2015-09-22 02:01:01 GB Railfreight 54 0 0
2 2015-09-22 02:00:01 GB Railfreight 54 0 0
3 2015-09-22 01:59:00 GB Railfreight 54 78 70
4 2015-09-21 02:01:01 GB Railfreight 54 0 0
5 2015-09-21 02:00:01 GB Railfreight 54 0 0
6 2015-09-21 01:59:00 GB Railfreight 54 72 68
```
The results I want back would look like this
```
id snapshotTime name code total onTime
3 2015-09-22 01:59:00 GB Railfreight 54 78 70
6 2015-09-21 01:59:00 GB Railfreight 54 72 68
```
So I would have one row per day with just the last entry; in this case the last entry is added at 01:59 and then reset at 02:00 for another day's results. This data is coming from Network Rail, hence why I can't just get the values at midnight; data is stored from 02:00 to 02:00 the next day.
Would be great to get some help on this; many thanks
|
I got it working nicely with this
```
SELECT * FROM OperatorRTPPM
WHERE CONVERT(varchar(5),snapshotTime, 108) = '01:59'
AND snapshotTime > '2015-09-20' AND snapshotTime < DATEADD(d, 1, '2015-09-22 02:00')
AND code = 54
```
The important thing was to get the time at 01:59 and then just limit by the date range and train company code
|
If I've understood correctly, you want to return, for each individual day, the last record that occurred prior to 02:00:00.
See my working example below on how to do this. This example takes into account any scenario where there may be more than one record that occurs at 01:59:XX taking the last one that was entered:
```
IF OBJECT_ID('RailTimes', 'U') IS NOT NULL
DROP TABLE RailTimes
GO
CREATE TABLE RailTimes (
Id INT,
SnapshotTime DATETIME
)
INSERT RailTimes (Id, SnapshotTime)
VALUES (1, '21 september 2015 02:00:00')
, (2, '21 september 2015 01:59:02')
, (3, '21 september 2015 01:59:05')
, (4, '22 september 2015 01:59:00')
, (5, '22 september 2015 02:00:02')
SELECT
*
FROM (SELECT
*,
ROW_NUMBER() OVER (PARTITION BY DATEADD(D, 0, DATEDIFF(D, 0, SnapshotTime)) ORDER BY SnapshotTime DESC) MostRecentSnapshot
FROM RailTimes
WHERE DATEPART(HOUR, SnapshotTime) < 2) SnapShotTimes
WHERE MostRecentSnapshot = 1
```
The `ROW_NUMBER()` function, is essentially grouping each day together by using the following partition (which turns each snapshottime to a time of 00:00:00 so we can group the days together):
```
DATEADD(D, 0, DATEDIFF(D, 0, SnapshotTime))
```
It then orders them by `DATETIME` (with the most recent time first). The `WHERE` clause then only returns the ones that occurred prior to 02:00:00.
In the outer query, we then say just return the most recent snapshot time:
```
WHERE MostRecentSnapshot = 1
```
So in my example, you can see that for the `21st September 2015`. The row with Id `3` is returned with the time `01:59:05` as it occurred the latest.
|
TSQL Get rows with a certain time for each day
|
[
"",
"sql",
"t-sql",
"networking",
""
] |
Running into a wall when trying to pull info from tables similar to those below. Not sure how to approach this.
The results should have the most recent TRANSAMT for each ACCNUM along with NAME and address.
```
Select A.ACCNUM, MAX(B.TRANSAMT) as BAMT, B.ADDRESS
From TableA A inner join TableB B on A.ACCNUM = B.ACCNUM
```
This is what I have so far. Any help would be appreciated.
TableA
```
ACCNUM NAME ADDRESS
00001 R. GRANT Miami, FL
00002 B. PAUL Dallas, TX
```
TableB
```
ACCNUM TRANSAMT TRANSDATE
00001 150 1/1/2015
00001 200 13/2/2015
00002 100 2/1/2015
00003 50 18/2/2015
```
|
You can use `row_number` to order rows per each account number by the most recent first.
```
select accnum, amt, name, address
from (
select A.ACCNUM, B.TRANSAMT as BAMT, B.ADDRESS,A.Name,
row_number() over(partition by a.accnum order by b.transdate desc) as rn
From TableA A
inner join TableB on A.ACCNUM = B.ACCNUM
) t
where rn = 1;
```
Please note this will not work if you are using `MySQL` prior to version 8.0 (which added window functions).
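If you are on a MySQL version without window functions, a correlated-subquery sketch along the same lines (table and column names taken from the question) is:

```
select a.accnum, b.transamt as bamt, a.name, a.address
from TableA a
inner join TableB b on a.accnum = b.accnum
where b.transdate = (select max(b2.transdate)
                     from TableB b2
                     where b2.accnum = b.accnum);
```

Note that, unlike `row_number`, this can return two rows for an account if two of its transactions share the same maximum date.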
|
You can use the ANSI standard `row_number()` function in most databases. It numbers each account's transactions from most recent to oldest, so you can keep just the latest one:
```
select a.accnum, a.name, b.amount, a.address
from tableA a left join
(select b.*, row_number() over (partition by accnum order by transdate desc) as seqnum
from tableB b
) b
on a.accnum = b.accnum and b.seqnum = 1;
```
Note: I changed the `join` to a `left join`. This will keep all records in `tableA`, even those with no matches. I am not sure if that is the intention of your query.
|
Select Most Recent Date with Inner Join
|
[
"",
"sql",
"select",
"max",
"min",
""
] |
I have 2 lookup tables and 1 intersect table, like:
```
thing thing_feature feature
+----+-------------+ +----+----------+------------+ +----+-------------+
| id | name | | id | thing_id | feature_id | | id | name |
+----+-------------+ +----+----------+------------+ +----+-------------+
| 1 | Thing One | | 1 | 1 | 1 | | 1 | Feature A |
| 2 | Thing Two | | 2 | 2 | 1 | | 2 | Feature B |
| 3 | Thing Three | | 3 | 2 | 2 | | 3 | Feature C |
| 4 | Thing Four | | 4 | 3 | 3 | | 4 | Feature D |
+----+-------------+ +----+----------+------------+ +----+-------------+
```
I can't figure out the query to produce...
```
"Thing One", "Feature A"
"Thing Two", "Feature A"
"Thing Two", "Feature B"
"Thing Three", "Feature C"
"Thing Four", null
```
I can get it to include "Thing Four" if I stop at the intersect table...
```
select
thing.name,
tf.id
from
thing
left outer join thing_feature tf on thing.id = tf.thing_id;
"Thing One", 1
"Thing Two", 1
"Thing Two", 2
"Thing Three", 3
"Thing Four", null
```
...but I can't figure out the syntax to get the query to "join back up" to the feature table through thing\_feature.feature\_id. The query I almost have working loses the Feature-less "Thing Four".
```
select
thing.id,
thing.name,
tf.id,
feature.name
from
thing
left outer join thing_feature tf on thing.id = tf.thing_id,
feature
where
tf.feature_id = feature.id;
"Thing One", 1, "Feature A"
"Thing Two", 1, "Feature A"
"Thing Two", 2, "Feature B"
"Thing Three", 3, "Feature C"
```
Perhaps I've been deep in NoSQL for too long. This is MySQL, if it matters.
Little help here? What do I need to do to my query to include the "Thing Four" row?
|
You're *almost* there. You just need to do one more left outer join, to make sure you keep all your results on the left tables, and get all the nulls on the right.
```
select
thing.name,
f.name
from
thing
left outer join thing_feature tf on thing.id = tf.thing_id
left outer join feature f on tf.feature_id = f.id;
```
|
```
SELECT
t.name tname, t.id, f.name as fname, f.id, tf.tid, tf.feature_id
FROM
thing as t
LEFT JOIN
thing_feature as tf
ON
t.id = tf.thing_id
LEFT JOIN
feature as f
ON
tf.feature_id = f.id
```
|
Can't get LEFT OUTER JOIN to work through an intersect table
|
[
"",
"mysql",
"sql",
""
] |
I have a table like this in SQL Server:
```
src destination
-------------------
A B
B A
A D
D A
B D
D B
```
I want result like this
```
src destination
-------------------
A B
A D
B D
```
|
There are several different ways to accomplish this. Probably the best performing is `union all` with `not exists`:
```
select src, dest
from table t
where src < dest
union all
select dest, src
from table t
where dest < src and
not exists (select 1 from table t2 where t.src = t2.dest and t.dest = t2.src);
```
Note: this assumes that you have no duplicates in your data (as is the case with the sample data).
|
To expand a bit on Gordon's answer: it will not catch cases where the `src` and `destination` are the same value, and rows that have no reversed counterpart also need to be handled. This should cover both:
```
Select A.src, A.destination
From Table A
Join Table B On A.src = B.destination
And A.destination = B.src
Where A.src < B.src
Union
Select src, destination
From Table
Where src = destination
Union
Select src, destination
From Table A
Where Not Exists
(
Select *
From Table B
Where A.src = B.destination
And A.destination = B.src
)
```
|
SQL Server : two columns have same value in different order and want only distinct combination
|
[
"",
"sql",
"sql-server",
""
] |
I have a transactional database with sales data and user id like the following:
```
id_usuarioweb dt_fechaventa
1551415 2015-08-01 14:57:21.737
1551415 2015-08-06 15:34:21.920
6958538 2015-07-30 09:26:24.427
6958538 2015-08-05 09:30:06.247
6958538 2015-08-31 17:39:02.027
39101175 2015-08-05 16:34:17.990
39101175 2015-09-20 20:37:26.043
1551415 2015-09-05 13:41:43.767
3673384 2015-09-06 13:34:23.440
```
And I would like to calculate the average difference between dates for the same customer in the database (to find the average frequency with which the user buys).
I'm aware I can do DATEDIFF with two columns, but I'm having issues trying to do it on the same field while "grouping" by user id.
The desired outcome would be like this:
```
id_usuarioweb avgtime_days
1551415 5
6958538 25
39101175 25
1551415 0
3673384 0
```
How can I achieve this? I would have the database ordered by user\_id and then dt\_fechaventa (the sale time).
USING: SQL Server 2008
|
I think what you are looking for is calculated like this: take the maximum and minimum dates, get the difference between them, and divide by the number of gaps between purchases (one less than the purchase count).
```
SELECT id_usuarioweb, CASE
WHEN COUNT(*) < 2
THEN 0
ELSE DATEDIFF(dd,
MIN(
dt_fechaventa
), MAX(
dt_fechaventa
)) / (
COUNT(*) -
1
)
END AS avgtime_days
FROM mytable
GROUP BY id_usuarioweb
```
EDIT: (by @GordonLinoff)
The reason that this is correct is easily seen if you look at the math. Consider three dates, a, b, and c.
The average time between them is:
```
((b - a) + (c - b)) / 2
```
This simplifies to:
```
(c - a) / 2
```
In other words, the intermediate value cancels out. And, this continues regardless of the number of intermediate values.
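More generally, for `n` dates the telescoping sum of the `n - 1` gaps gives:

```
((a2 - a1) + (a3 - a2) + ... + (an - a(n-1))) / (n - 1) = (an - a1) / (n - 1)
```

which is exactly the `DATEDIFF` of the minimum and maximum divided by `COUNT(*) - 1`.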
|
This should do:
```
;WITH CTE AS
(
SELECT *,
RN = ROW_NUMBER() OVER(PARTITION BY id_usuarioweb ORDER BY dt_fechaventa),
N = COUNT(*) OVER(PARTITION BY id_usuarioweb)
FROM dbo.YourTable
)
SELECT A.id_usuarioweb,
AVG(DATEDIFF(DAY,A.dt_fechaventa,B.dt_fechaventa)) avgtime_days
FROM CTE A
INNER JOIN CTE B
ON A.id_usuarioweb = B.id_usuarioweb
AND A.RN = B.RN - 1
WHERE A.N > 1
GROUP BY A.id_usuarioweb;
```
I'm filtering the users that only have one row there, because you can't calculate an average of days with them.
[**Here is a demo**](http://sqlfiddle.com/#!3/7db63/2) in sqlfiddle of this. And the results are:
```
╔═══════════════╦══════════════╗
║ id_usuarioweb ║ avgtime_days ║
╠═══════════════╬══════════════╣
║ 1551415 ║ 17 ║
║ 6958538 ║ 16 ║
║ 39101175 ║ 46 ║
╚═══════════════╩══════════════╝
```
|
Average time between dates in same field by groups
|
[
"",
"sql",
"sql-server",
"date",
"average",
"datediff",
""
] |
When I have a conditional statement in my `WHERE` statement, are `NULL`s automatically not included?
For example, in a query with a `GROUP BY` statement that has the line `FROM @data WHERE Trade_Amt > 0 AND Execution_Code != 'EXPIRATION'`, `Execution_Code` values that are `null` will not be included in the summation.
If I include `OR Execution_Code IS NULL` then I get more records than expected.
The only work around that I have found is changing the `null` values to a specified character. Is this my only option or am I missing something?
An example:
Table `@data`
```
Trade_Amt Execution_Code Trade_Px
-----------------------------------------
4 XVD 5
-4 NULL 5
4 NULL 5
5 EXPIRATION 5
```
Query:
```
SELECT
SUM(Trade_Amt) AS [Trade_Amt],
dbo.ConcatStr(DISTINCT Execution_Code , ',', 'ASC') AS [Execution_Code],
Trade_Px
--above line concatenates execution code strings together during summation
FROM
@data
WHERE
Trade_Amt > 0 AND Execution_Code != 'EXPIRATION'
GROUP BY
Trade_Px
```
Expected output:
```
Trade_Amt Execution_Code Trade_Px
--------------------------------------------
8 XVD 5
```
What I actually get:
```
Trade_Amt Execution_Code Trade_Px
--------------------------------------------
4 XVD 5
```
|
You can use parentheses to specify that you want the execution codes that either aren't equal to `'EXPIRATION'` or are `NULL`:
```
WHERE Trade_Amt > 0 AND (Execution_Code != 'EXPIRATION' OR Execution_Code IS NULL)
```
Without the parentheses, the `OR` would combine with `Trade_Amt > 0` instead (because `AND` binds tighter than `OR`), and you won't get the results you're expecting.
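An alternative sketch is to fold the NULL case into the comparison with SQL Server's `ISNULL` (or the standard `COALESCE`); be aware that wrapping the column in a function can prevent an index on it from being used:

```
WHERE Trade_Amt > 0
  AND ISNULL(Execution_Code, '') != 'EXPIRATION'
```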
|
NULL is a different sort of value: it cannot be tested as greater than, less than, or equal to anything, because it is none of these. It is NULL.
So the only way to test whether a value is NULL is:
```
Select thisvalue from thistable where value is NULL
```
or
```
Select thisvalue from thistable where value is not NULL
```
So you need to separate these sorts of things in cases where the value could indeed be NULL.
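For example (a sketch illustrating SQL's three-valued logic; the comparison with NULL yields UNKNOWN, which is treated as "not true"):

```
SELECT CASE WHEN NULL != 'EXPIRATION'
            THEN 'comparison is true'
            ELSE 'comparison is not true'  -- this branch is taken
       END
```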
|
NULLS in WHERE statements - SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am looking for a smart way to update just a part of a string in a column. There are lots of similar questions, but I haven't found a way that works for me.
For example, I have a `varchar(10)` column:
```
1234|5|6789X
____________
2134|1|71891
```
How can I easily update the 5th element of my string without touching the rest of the string? There is no pattern.
I was trying to use `patindex` & `substring`.
|
use
`STUFF(column_name, start_position, length_to_replace, replacement_string)`
For your case (the digit between the pipes sits at character position 6) try
```
STUFF(column_name,6,1,'1')
```
**[STUFF()](https://msdn.microsoft.com/en-us/library/ms188043.aspx)**
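Since there is no fixed pattern, you could also locate the position dynamically with `CHARINDEX`; a sketch that replaces whatever sits between the first two pipes (`my_table` and `my_column` are placeholders):

```
UPDATE my_table
SET my_column = STUFF(
        my_column,
        CHARINDEX('|', my_column) + 1,            -- start: just after the first pipe
        CHARINDEX('|', my_column,
                  CHARINDEX('|', my_column) + 1)
            - CHARINDEX('|', my_column) - 1,      -- length: up to the second pipe
        '1');
```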
|
How about [*Stuff*](https://msdn.microsoft.com/en-us/library/ms188043.aspx)
```
SELECT Stuff('1234|5|6789X', 6, 1, '1')
```
|
Updating a part of string in column MS SQL
|
[
"",
"sql",
"sql-server",
"sql-update",
""
] |
I have deleted some tables from an Oracle 10g database using the `drop` command.
When I list the tables using `select * from cat;` I get the following :
[](https://i.stack.imgur.com/E9gMA.png)
Are the tables deleted or not? Why do I have these `BIN$...` things? How can I remove them or the tables once and for all?
Thanks!
|
They are entries in the Recycle Bin, created by the deletion of a table.
They can be purged if required.
<http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables011.htm>
|
The tables prefixed with `BIN$` are tables placed in the recycle bin for easy recovery. You can completely remove them by purging. Either specifically per table (the `BIN$` name must be enclosed in double quotes; alternatively, use the table's original name):
```
PURGE TABLE "BIN$ABCDEFG";
```
Or for the entire recycle bin at once:
```
PURGE RECYCLEBIN;
```
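Conversely, if a table was dropped by mistake, it can be restored from the recycle bin instead of being purged (`my_table` here is a placeholder for the original table name):

```
FLASHBACK TABLE my_table TO BEFORE DROP;
```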
|
Delete tables from database Oracle 10g
|
[
"",
"sql",
"oracle",
"oracle10g",
"ddl",
"database-administration",
""
] |
Imagine a SQL table `votes`, with two columns `vote` and `time`; `vote` is a string, `time` is a timestamp. I'd like to retrieve a "cumulative count" from the table. So, the table looks like this:
```
vote | time
-----------+----------------
stuart | 1443000000
stuart | 1443000000
bryan | 1443000000
stuart | 1443000001
bryan | 1443000001
bryan | 1443000002
bryan | 1443000002
```
so, "stuart" gets two votes in the first time interval and "bryan" gets one; they get one each in the second interval, and bryan gets two more in the third, and thus the output results I would like are these:
```
vote | time | cumulative total
-----------+--------------+-----------------
stuart | 1443000000 | 2
bryan | 1443000000 | 1
stuart | 1443000001 | 3
bryan | 1443000001 | 2
stuart | 1443000002 | 3
bryan | 1443000002 | 4
```
I'm not sure how to do this. Obviously, getting how many votes happened in each time interval is trivial (`SELECT vote, time, COUNT(*) from table GROUP BY time, vote ORDER BY time, vote`) but making a *cumulative* count seems harder. Is there a way to do so?
(I'm using MySQL, and am happy to use MySQL-specific functions here if useful.)
|
Using a correlated sub-query to do the counting:
```
select distinct vote, time, (select count(*) from table t2
where t2.vote = t1.vote
and t2.time <= t1.time)
from table t1
```
|
MySQL 8 introduced support for [window functions](https://blog.jooq.org/2013/11/03/probably-the-coolest-sql-feature-window-functions/), which are the perfect tool for cumulative sums. They will perform *much* better than the alternatives using correlated subqueries. Your desired query can be written as follows:
```
SELECT
vote,
time,
sum(count(*)) OVER (PARTITION BY vote ORDER BY time)
FROM table
GROUP BY time, vote
ORDER BY time, vote
```
Note that the inner `count(*)` is an ordinary aggregate function, which is nested in the outer `sum(...) OVER (...)` window function. This is possible because [aggregate functions are logically calculated before window functions](https://blog.jooq.org/2016/12/09/a-beginners-guide-to-the-true-order-of-sql-operations/).
|
Cumulative count(*) in MySQL
|
[
"",
"mysql",
"sql",
""
] |
I have a list of dates ("start", datetime) and I would like to select all dates where :
```
today = start + 1 WEEK or
today = start + 2 WEEK or
today = start + 3 WEEK or
today = start + 4 WEEK or
today = start + 5 WEEK or
today = start + 6 WEEK
```
Maximum is start + 6 weeks.
Any idea ?
|
I assume you want a `WHERE` filter to capture all rows containing `start` DATETIMEs on this weekday one week ago, and two ... six weeks ago. That's the effect of the logic in your question:
```
today = start + 1 WEEK or today = start + 2 WEEK or ...
```
means the same thing as
```
start = today - 1 WEEK etc.
```
The thing is, you are using DATETIME values for `start`. They're not guaranteed to compare equal to a plain date like `CURDATE()` minus some interval, because they may not be at midnight.
So, you need to use the `DATE()` function to reduce them to midnight values before comparing them. Something like this will work.
```
WHERE DATE(start) IN (
CURDATE() - INTERVAL 6 WEEK, CURDATE() - INTERVAL 5 WEEK, CURDATE() - INTERVAL 4 WEEK,
CURDATE() - INTERVAL 3 WEEK, CURDATE() - INTERVAL 2 WEEK, CURDATE() - INTERVAL 1 WEEK)
```
You could also do this -- it picks out all records six weeks old or newer, but not the ones in the most recent week, then picks the ones on today's weekday.
```
WHERE start >= CURDATE() - INTERVAL 6 WEEK
  AND start < CURDATE() - INTERVAL 6 DAY
  AND WEEKDAY(CURDATE()) = WEEKDAY(start)
```
This second formulation will be more efficient if you have a great deal of old data in your table and an index on your `start` column: the first two WHERE clauses are [*sargable*](https://stackoverflow.com/questions/799584/what-makes-a-sql-statement-sargable).
Pro tip: When specifying this kind of date filter, the more effort you spend making your specification exact *before* you write code, the faster you will finish your work. That's true even if you don't count debugging time.
|
**setup**
```
create table example
(
id integer primary key not null auto_increment,
start datetime not null
);
insert into example ( start )
values
( date_sub(current_date, interval 1 week) ),
( date_sub(current_date, interval 2 week) ),
( date_sub(current_date, interval 3 week) ),
( date_sub(current_date, interval 4 week) ),
( date_sub(current_date, interval 5 week) ),
( date_sub(current_date, interval 6 week) ),
( date_sub(current_date, interval 6 week) ),
( date_sub(current_date, interval 4 week) ),
( date_sub(current_date, interval 9 week) ),
( date_sub(current_date, interval 12 week) )
;
```
---
**query**
```
select id, start
from example
where
date(start) in
(
date_sub(current_date, interval 1 week) ,
date_sub(current_date, interval 2 week) ,
date_sub(current_date, interval 3 week) ,
date_sub(current_date, interval 4 week) ,
date_sub(current_date, interval 5 week) ,
date_sub(current_date, interval 6 week)
)
;
```
**output**
```
+----+-----------------------------+
| id | start |
+----+-----------------------------+
| 1 | September, 16 2015 00:00:00 |
| 2 | September, 09 2015 00:00:00 |
| 3 | September, 02 2015 00:00:00 |
| 4 | August, 26 2015 00:00:00 |
| 5 | August, 19 2015 00:00:00 |
| 6 | August, 12 2015 00:00:00 |
| 7 | August, 12 2015 00:00:00 |
| 8 | August, 26 2015 00:00:00 |
+----+-----------------------------+
```
**[sqlfiddle](http://sqlfiddle.com/#!9/2ae89f/4)**
|
select dates on this weekday in the past
|
[
"",
"mysql",
"sql",
"datetime",
""
] |
I have a stored procedure and I want to update a value in a table with the SYSDATE only if a parameter is NOT NULL.
In the following SQL, I want to set SYSENDDATE to NULL if pETime IS NULL, otherwise to SYSDATE:
```
UPDATE OLCACT SET
ENDDATE = pETime,
SYSENDDATE = SYSDATE,
GRD = pGRD,
PASS = v_pass
```
Not sure how to use either NVL or COALESCE to do that.
|
[`nvl2`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions106.htm) (yeah, "great" name, I know) will actually be much more convenient:
```
UPDATE OLCACT SET
ENDDATE = pETime,
  SYSENDDATE = NVL2(pETime, SYSDATE, NULL),
GRD = pGRD,
PASS = v_pass
```
|
As @Mureinik suggests, `nvl2` is a perfectly valid way of doing this. Even though this is a built-in function, it's not particularly well named or well known so I tend to avoid it. It's way too easy to inadvertently read this as `nvl` or for someone to not recall exactly what that function does. I would rather use a `CASE` statement that makes my intentions clear
```
UPDATE OLCACT SET
ENDDATE = pETime,
SYSENDDATE = (CASE WHEN pETime IS NOT NULL
THEN sysdate
ELSE null
                   END),
GRD = pGRD,
PASS = v_pass
```
That's a bit more verbose than the `nvl2` option. But it is more likely that a random developer looking at it in the future will be able to immediately understand what it is doing.
|
Oracle COALESCE or NVL
|
[
"",
"sql",
"oracle",
"sql-update",
"coalesce",
""
] |
I have an SQL query that retrieves a column whose values are strings. I want to create a column next to it that takes the value 1 if the substring 'MB' is contained in the value, or 0 otherwise.
|
You can output another column with a calculated value like this:
```
select COLUMN, IF(LOCATE('MB', COLUMN) > 0, 1, 0) as STR_FOUND
from TABLE
```
See the documentation of [`locate`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_locate) and [`if`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_if) for further Information.
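A sketch of an equivalent using `LIKE` (in MySQL a boolean expression evaluates to 1 or 0, so no `IF` is needed; `COLUMN` and `TABLE` are placeholders as above):

```
select COLUMN, (COLUMN LIKE '%MB%') as STR_FOUND
from TABLE
```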
|
You can try it using the case when then like this:
```
select case when INSTR(mycol, 'MB') > 0 then 1
else 0
end as myBoolCol
```
|
SQL how create a Boolean column based on the value of another column
|
[
"",
"mysql",
"sql",
"boolean",
""
] |
I'm trying to solve this problem.
This sql
```
select t.currency || ' ' || abs(t.price)
from apple_month t
where t.price <= 1
```
(t.price is a float)
returns number like this:
```
EUR ,97
USD ,87
```
There are no zero digits before the decimal. What is wrong?
|
The symbol || concatenates strings. In
```
t.currency || ' ' || abs(t.price)
```
you therefore rely on the implicit conversion of `abs(t.price)` to a string, and this converts the value 0.97 to ',97'.
Use `TO_CHAR` to format your number explicitly, e.g.
```
t.currency || ' ' || to_char(abs(t.price), 'fm9999990d00')
```
|
If you want to add a zero before the decimal when the value is less than 1, then try:
```
select t.currency ||
case when abs(t.price) < 1 then ' 0' || abs(t.price)
else ' ' || abs(t.price) end
from apple_month t
where t.price <= 1
```
|
No zero before decimal
|
[
"",
"sql",
"oracle",
""
] |
I have 2 tables named `event` and `checkin`.
They are joined on `event.id` = `checkin.event_id`.
I want to get the count of checkins in `checkin` for all records in `event`.
So far I have done this, and it returns counts, but only for the events that have entries in the `checkin` table.
I want to get the number of checkins for each `event.id`.
How do I get that?
I am using this query right now:
```
SELECT e.id,COUNT(*) from checkin c
LEFT JOIN event e ON (c.event_id=e.id)
GROUP by e.id
```
which is giving me a result like this:
```
event_id COUNT(*)
1 2
5 5
7 8
```
Which is fine, but I want a result like this:
```
event_id COUNT(*)
1 2
2 0
3 0
4 0
5 5
6 0
7 8
```
Hope my question is clear now. Thank you!
|
Reverse the join and count non-null event\_id:
```
SELECT e.id, COUNT(c.event_id) AS check_count
FROM event e LEFT OUTER JOIN checkin c ON c.event_id = e.id
GROUP BY e.id
```
Try adding `COUNT(e.id)` so you can compare it to `COUNT(*)` or `COUNT(c.event_id)`.
|
Use the below query with an alias for the counter:
```
SELECT e.id, COUNT(*) as check_count from checkin c
LEFT JOIN event e ON (c.event_id=e.id)
GROUP by e.id
```
Also, I recommend you use `PDO` or `mysqli_*` functions instead of the `mysql_*` functions, which are deprecated.
|
Get count for each record mysql
|
[
"",
"mysql",
"sql",
""
] |
I have periods like `012015`, `022015`, `032015`, etc.
Here the first 2 chars are the `Month` and the last four chars are the `Year`.
My result should be
`20150131`, `20150228`, `20150331` which is nothing but last date of that month.
Can you help me with this in SQL?
|
You can do it as follows:
**SAMPLE DATA**
```
CREATE TABLE #dates
(
Period NVARCHAR(40)
)
INSERT INTO #dates VALUES ('012015'),('022015'),('032015')
```
**QUERY**
```
SELECT DATEADD(MONTH,1+DATEDIFF(MONTH,0,CAST((CAST(RIGHT(Period,4) AS INT) * 100 + CAST(LEFT(Period,2) AS INT)) * 100 + 01 AS NVARCHAR(20))),-1)
FROM #dates
```
**OUTPUT**
```
Input Output
012015 2015-01-31 00:00:00.000
022015 2015-02-28 00:00:00.000
032015 2015-03-31 00:00:00.000
```
---
**UPDATE**
Or you can use `REPLACE` to get output as you expected:
```
SELECT REPLACE(CAST(DATEADD(MONTH,1+DATEDIFF(MONTH,0,CAST((CAST(RIGHT(Period,4) AS INT) * 100 + CAST(LEFT(Period,2) AS INT)) * 100 + 01 AS NVARCHAR(20))),-1) AS DATE),'-','')
```
**OUTPUT AFTER UPDATE**
```
Input Output
012015 20150131
022015 20150228
032015 20150331
```
---
**UPDATE 2**
With your provided data it works as expected:
```
CREATE TABLE #dates
(
Period NVARCHAR(40)
)
INSERT INTO #dates VALUES ('022019'),('022019'),('022019'),('022019'),('112018'),('082019'),('082019'),('082019'),('082019'),('112018'),('112018'),('112018'),('082019'),('022019'),('022019'),('022019'),('022019'),('052016'),('052016'), ('122016')
SELECT Period as Input,
REPLACE(CAST(DATEADD(MONTH,1+DATEDIFF(MONTH,0,CAST((CAST(RIGHT(Period,4) AS INT) * 100 + CAST(LEFT(Period,2) AS INT)) * 100 + 01 AS NVARCHAR(20))),-1) AS DATE),'-','') as [Output]
FROM #dates
```
**OUTPUT**
```
Input Output
022019 20190228
022019 20190228
022019 20190228
022019 20190228
112018 20181130
082019 20190831
082019 20190831
082019 20190831
082019 20190831
112018 20181130
112018 20181130
112018 20181130
082019 20190831
022019 20190228
022019 20190228
022019 20190228
022019 20190228
052016 20160531
052016 20160531
122016 20161231
```
|
Hope it helps:
`CONVERT(NVARCHAR(8), DATEADD(DAY, -1,SUBSTRING('012015', 3, 5) + SUBSTRING('012015', 0, 3) + '01'), 112)`
Replace the hardcoded date with the date you want to parse.
Good luck!
|
Calculate last date of the month from a period
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a simple SQL table containing some values, for example:
```
id | value (table 'values')
----------
0 | 4
1 | 7
2 | 9
```
I want to iterate over these values, and use them in a query like so:
```
SELECT value[0], x1
FROM (some subquery where value[0] is used)
UNION
SELECT value[1], x2
FROM (some subquery where value[1] is used)
...
etc
```
In order to get a result set like this:
```
4 | x1
7 | x2
9 | x3
```
It has to be in SQL as it will actually represent a database view. Of course the real query is a lot more complicated, but I tried to simplify the question while keeping the essence as much as possible.
I think I have to select from *values* and join the subquery, but as the value should be used in the subquery I'm lost on how to accomplish this.
**Edit:** I oversimplified my question; in reality I want to have 2 rows from the subquery and not only one.
**Edit 2:** As suggested I'm posting the real query. I simplified it a bit to make it clearer, but it's a working query and the problem is there. Note that I have hardcoded the value '2' in this query two times. I want to replace that with values from a different table, in the example table above I would want a result set of the combined results of this query with 4, 7 and 9 as values instead of the currently hardcoded 2.
```
SELECT x.fantasycoach_id, SUM(round_points)
FROM (
SELECT DISTINCT fc.id AS fantasycoach_id,
ffv.formation_id AS formation_id,
fpc.round_sequence AS round_sequence,
round_points,
fpc.fantasyplayer_id
FROM fantasyworld_FantasyCoach AS fc
LEFT JOIN fantasyworld_fantasyformation AS ff ON ff.id = (
SELECT MAX(fantasyworld_fantasyformationvalidity.formation_id)
FROM fantasyworld_fantasyformationvalidity
LEFT JOIN realworld_round AS _rr ON _rr.id = round_id
LEFT JOIN fantasyworld_fantasyformation AS _ff ON _ff.id = formation_id
WHERE is_valid = TRUE
AND _ff.coach_id = fc.id
AND _rr.sequence <= 2 /* HARDCODED USE OF VALUE */
)
LEFT JOIN fantasyworld_FantasyFormationPlayer AS ffp
ON ffp.formation_id = ff.id
LEFT JOIN dbcache_fantasyplayercache AS fpc
ON ffp.player_id = fpc.fantasyplayer_id
AND fpc.round_sequence = 2 /* HARDCODED USE OF VALUE */
LEFT JOIN fantasyworld_fantasyformationvalidity AS ffv
ON ffv.formation_id = ff.id
) x
GROUP BY fantasycoach_id
```
**Edit 3:** I'm using PostgreSQL.
|
SQL works with tables as a whole, which basically involves set operations. There is no explicit iteration, and generally no need for any. In particular, the most straightforward implementation of what you described would be this:
```
SELECT value, (some subquery where value is used) AS x
FROM values
```
Do note, however, that a correlated subquery such as that is very hard on query performance. Depending on the details of what you're trying to do, it may well be possible to structure it around a simple join, an uncorrelated subquery, or a similar, better-performing alternative.
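As a sketch of that restructuring (the `details` table and its `x` column are hypothetical stand-ins for whatever the subquery reads), the correlated form can often be rewritten as a join, letting the planner make one pass instead of one probe per row:

```sql
-- Correlated form: the subquery is re-evaluated for every row of "values".
SELECT v.value,
       (SELECT d.x FROM details d WHERE d.value = v.value) AS x
FROM "values" v;

-- Join form: same result when details.value is unique, usually much cheaper.
SELECT v.value, d.x
FROM "values" v
JOIN details d ON d.value = v.value;
```

The table name is double-quoted because `VALUES` is a reserved word in PostgreSQL.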
---
**Update:**
In view of the update to the question indicating that the subquery is expected to yield multiple rows for each value in table `values`, contrary to the example results, it seems a better approach would be to just rewrite the subquery as the main query. If it does not already do so (and maybe even if it does) then it would join table `values` as another base table.
---
**Update 2:**
Given the real query now presented, this is how the values from table `values` could be incorporated into it:
```
SELECT x.fantasycoach_id, SUM(round_points) FROM
(
SELECT DISTINCT
fc.id AS fantasycoach_id,
ffv.formation_id AS formation_id,
fpc.round_sequence AS round_sequence,
round_points,
fpc.fantasyplayer_id
FROM fantasyworld_FantasyCoach AS fc
-- one row for each combination of coach and value:
CROSS JOIN values
LEFT JOIN fantasyworld_fantasyformation AS ff
ON ff.id = (
SELECT MAX(fantasyworld_fantasyformationvalidity.formation_id)
FROM fantasyworld_fantasyformationvalidity
LEFT JOIN realworld_round AS _rr
ON _rr.id = round_id
LEFT JOIN fantasyworld_fantasyformation AS _ff
ON _ff.id = formation_id
WHERE is_valid = TRUE
AND _ff.coach_id = fc.id
-- use the value obtained from values:
AND _rr.sequence <= values.value
)
LEFT JOIN fantasyworld_FantasyFormationPlayer AS ffp
ON ffp.formation_id = ff.id
LEFT JOIN dbcache_fantasyplayercache AS fpc
ON ffp.player_id = fpc.fantasyplayer_id
-- use the value obtained from values again:
AND fpc.round_sequence = values.value
LEFT JOIN fantasyworld_fantasyformationvalidity AS ffv
ON ffv.formation_id = ff.id
) x
GROUP BY fantasycoach_id
```
Note in particular the `CROSS JOIN` which forms the cross product of two tables; this is the same thing as an `INNER JOIN` without any join predicate, and it can be written that way if desired.
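Spelled out, the two forms are interchangeable (again, the table name is double-quoted because `VALUES` is reserved in PostgreSQL):

```sql
-- A CROSS JOIN...
SELECT fc.id, v.value
FROM fantasyworld_FantasyCoach AS fc
CROSS JOIN "values" AS v;

-- ...is the same as an INNER JOIN whose predicate is always true:
SELECT fc.id, v.value
FROM fantasyworld_FantasyCoach AS fc
INNER JOIN "values" AS v ON true;
```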
The overall query could be at least a bit simplified, but I do not do so because it is a working example rather than an actual production query, so it is unclear what other changes would translate to the actual application.
|
In the example I create two tables. See how the outer table has an alias that you use in the inner select?
[SQL Fiddle Demo](http://sqlfiddle.com/#!6/47eab5/6)
```
SELECT T.[value], (SELECT [property] FROM Table2 P WHERE P.[value] = T.[value])
FROM Table1 T
```
This is a better way for performance
```
SELECT T.[value], P.[property]
FROM Table1 T
INNER JOIN Table2 p
on P.[value] = T.[value];
```
`Table 2` can be a QUERY instead of a real table
## Third Option
Using a `cte` to calculate your values and then join back to the main table. This way you have the subquery logic separated from your final query.
```
WITH cte AS (
SELECT
T.[value],
T.[value] * T.[value] as property
FROM Table1 T
)
SELECT T.[value], C.[property]
FROM Table1 T
INNER JOIN cte C
on T.[value] = C.[value];
```
|
SQL query: Iterate over values in table and use them in subquery
|
[
"",
"sql",
"postgresql",
"join",
"subquery",
"qsqlquery",
""
] |
I have table with two columns like this:
```
+-----------+------------+
| FROM | TO |
+-----------+------------+
|2015-03-01 | 2015-03-04 |
|2015-03-05 | 2015-03-09 |
+-----------+------------+
```
And I want to write a function which will take two arguments, DateFrom and DateTo, and check this interval. For example, if the function takes `DateFrom = 2015-03-03` and `DateTo = 2015-03-08` as arguments, it should return `true`, because every day in this interval is in the table.
But if table is like this:
```
+-----------+------------+
| FROM | TO |
+-----------+------------+
|2015-03-01 | 2015-03-04 |
|2015-03-06 | 2015-03-09 |
+-----------+------------+
```
the function should return `false`, because `2015-03-05` isn't in the table. Any idea for an algorithm? Thanks for your help.
|
If you don't already have a calendar table, you can use a tally table for this:
```
DECLARE @t TABLE
(
FromDate DATE ,
ToDate DATE
)
INSERT INTO @t
VALUES ( '2015-03-01', '2015-03-04' ),
( '2015-03-05', '2015-03-09' )
DECLARE @from DATE = '2015-03-03', @to DATE = '2015-03-08'
;WITH cte1 AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS d
FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) a(n)
CROSS JOIN (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) b(n)
CROSS JOIN (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) c(n)),
cte2 AS(SELECT DATEADD(dd, d - 1, @from) AS d
FROM cte1
WHERE DATEADD(dd, d - 1, @from) <= @to)
SELECT CASE WHEN EXISTS ( SELECT *
FROM cte2
WHERE NOT EXISTS ( SELECT *
FROM @t t
WHERE d BETWEEN t.FromDate AND t.ToDate ) )
THEN 0
ELSE 1
END AS IntervalExists
```
It will work for intervals of up to 1000 days. If more are needed, just add more `cross join`s (each additional `cross join` multiplies the interval by 10).
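For instance, here is a sketch of the extended version (SQL Server syntax), factoring the ten-row set into its own CTE; four cross joins give 10*10*10*10 = 10,000 numbers instead of 1,000:

```sql
-- Ten rows, cross-joined four times, numbered 1..10000:
WITH ten(n) AS (SELECT 0 FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) t(n)),
     cte1(d) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                 FROM ten a CROSS JOIN ten b CROSS JOIN ten c CROSS JOIN ten d)
SELECT COUNT(*) AS total, MAX(d) AS top_number FROM cte1;  -- 10000, 10000
```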
|
If you want to perform this operation for each row in a table, I would advise you to go with a usp (User Stored Procedure).
Below is the sample code to do the same:
```
CREATE PROC usp_CheckInterval
(
@DateFrom Date ,
@DateTo Date ,
@ReturnStatus bit OUTPUT
)
AS
BEGIN TRY
SET NOCOUNT ON;
IF(EXISTS (SELECT TOP 1 * FROM [YourTable] WHERE [StartDate] <= @DateFrom AND [EndDate] >= @DateTo))
BEGIN
SET @ReturnStatus = 1
END
ELSE
BEGIN
SET @ReturnStatus = 0
END
END TRY
BEGIN CATCH
--Catch Here
END CATCH;
```
Sample Execution:
```
DECLARE @RC int
DECLARE @DateFrom date = GETDATE()
DECLARE @DateTo date = GETDATE()
DECLARE @ReturnStatus bit
-- TODO: Set parameter values here.
EXECUTE @RC = [dbo].[usp_CheckInterval]
@DateFrom
,@DateTo
,@ReturnStatus OUTPUT
SELECT @ReturnStatus
```
Let me know how it pans out.
|
interval in intervals table
|
[
"",
"sql",
"sql-server",
"algorithm",
"date",
"intervals",
""
] |
Apologies for the newbie question.
I have data that looks like this:
```
ID|SubID|Category
1|1|Fixed
1|2|Fixed
2|1|Unfixed
2|2|Unfixed
3|1|Fixed
4|1|Unfixed
5|1|Fixed
5|2|Unfixed
```
I need to know all the IDs where the Category is "Fixed" for all SubIDs (i.e. I'd want the query to return IDs 1 and 3).
How can I do this?
As an extension, I need to know all the IDs where the Category contains a mix of "Fixed" AND "Unfixed" across the SubIDs (i.e. I'd want the query to return just ID 5).
Thanks in advance!
|
You can use a `group by` + `having` clause.
> I need to know all the IDs where the Category is "Fixed" for all SubIDs
```
select id
from tablename
group by id
having count(*) = count(case when Category = 'Fixed' then 'X' end)
```
> I need to know all the IDs where the Category contains a mix of "Fixed" AND "Unfixed" for all SubIDs
```
select id
from tablename
group by id
having count(distinct Category) = 2
```
|
Return an ID if there exists no row with that ID which is unfixed:
```
select distinct ID
from tablename t1
where not exists (select 1 from tablename t2
where t2.id = t1.id
and t2.Category = 'Unfixed')
```
Problem 2, mixed fixed and unfixed:
```
select id
from tablename
group by id
having max(Category) <> min(Category)
```
I.e. of two different Categories for an ID, return the id.
|
How to check whether all records have the same value in a field?
|
[
"",
"sql",
""
] |
I have big trouble with a query in an Oracle database, version 10. What I want is to find the last date, `dateofstat`.
I have tried many solutions; they work, but they take too much time.
- Using rownum
- Using row\_number()
- Using rank()
Here are my tries:
**1. rownum**
```
select dateofstat from (
select stat.dateofstat from dhg.statistics stat
join (
select distinct assetid from dhg.relatedasset
where (`CONDITION1`)
MINUS
select distinct assetid from dhg.relatedasset
where (`CONDITION2`)
) grs
on stat.assetid = grs.assetid
order by stat.dateofstat desc
) where rownum = 1
```
Explain plan:
[Explain plan](https://i.stack.imgur.com/2pOC0.png)
**row\_number()**
```
select dateofstat from (
select stat.dateofstat,
row_number() over (order by stat.dateofstat desc) rnumber
from dhg.statistics stat
join (
select distinct assetid from dhg.relatedasset
where (`CONDITION1`)
MINUS
select distinct assetid from dhg.relatedasset
where (`CONDITION2`)
) grs
on stat.assetid = grs.assetid
) where rnumber = 1
```
Explain plan:
[Explain plan](https://i.stack.imgur.com/LSplu.png)
**rank()**: I did try this solution, but it gives repetitive rank numbers; because of that, I don't think I should use it to find the top one.
I don't know what I should do now; I really need help. For testing, I use sqlplus in emacs; without `rownum` it takes less than 3 seconds to get the first rows of this query.
```
select stat.dateofstat from dhg.statistics stat
join (
select distinct assetid from dhg.relatedasset
where (`CONDITION1`)
MINUS
select distinct assetid from dhg.relatedasset
where (`CONDITION2`)
) grs
on stat.assetid = grs.assetid
order by stat.dateofstat desc
```
I wonder whether I could have a workaround solution with this tweak.
**UPDATE SOLUTION STATUS FROM @ANTON**
```
select max(stat.dateofstat) from dhg.statistics stat
join (
select distinct assetid from dhg.relatedasset
where relatedasset.assetid in (191759.0, 3.0, 5.0, 98.0, 99.0)
or relatedasset.linkid in (3232.0, 1049.0, 1057.0, 1067.0, 102.0, 1032.0, 104.0, 105.0, 1051.0)
MINUS
select distinct assetid from dhg.relatedasset
where relatedasset.assetid in (192106.0, 1014.0, 10302.0)
or relatedasset.linkid in (210.0, 737.0, 126.0, 1053.0, 1054.0, 119.0, 3133.0)
) grs
on stat.assetid = grs.assetid
```
I must say, it did work once; very weird behavior. On the first execution it took only 3 seconds, but the second time it took much, much longer (I did not count). Here is the explain plan:
[Explain plan](https://i.stack.imgur.com/TuLjS.png)
In addition, I also tried the second solution, which uses `exists` and `not exists`; however, it does not work well.
```
select max(stat.dateofstat)
from dhg.statistics stat
where exists(select *
from dhg.relatedasset rasset
where stat.assetid = rasset.assetid
and rasset.assetid in (191759.0, 3.0, 5.0, 98.0, 99.0)
or rasset.linkid in (3232.0, 1049.0, 1057.0, 1067.0, 102.0, 1032.0, 104.0, 105.0, 1051.0)
)
and not exists (select *
from dhg.relatedasset rasset2
where stat.assetid = rasset2.assetid
and rasset2.assetid in (192106.0, 1014.0, 10302.0)
or rasset2.linkid in (210.0, 737.0, 126.0, 1053.0, 1054.0, 119.0, 3133.0)
)
```
For this query, the plan gives me a painful result.
[Explain plan](https://i.stack.imgur.com/LZHCL.png)
|
Due to the various OR conditions (IN and OR) on `relatedasset` the DBMS decides to perform a full table scan. That seems reasonable. So what can we optimize? We can see to it that the full table scan is done just once. Query the table, group by `assetid` and check with HAVING if the first condition was true for any record and the second for none.
You can also use a parallel hint to make Oracle perform the full table scan in parallel if possible.
```
select max(dateofstat)
from dhg.statistics
where assetid in
(
select /*+ parallel(relatedasset,4) */ assetid
from dhg.relatedasset
group by assetid
having
max( case when assetid in (191759.0, 3.0, 5.0, 98.0, 99.0)
or linkid in (3232.0, 1049.0, 1057.0, 1067.0, 102.0, 1032.0, 104.0, 105.0, 1051.0)
then 1 else 0 end ) = 1
and
max( case when assetid in (192106.0, 1014.0, 10302.0)
or linkid in (210.0, 737.0, 126.0, 1053.0, 1054.0, 119.0, 3133.0)
then 1 else 0 end ) = 0
);
```
|
Why so complex?
if you need just last date, you may use max() function:
```
select max(stat.dateofstat)
from dhg.statistics stat
join (
select distinct assetid from dhg.relatedasset
where (`CONDITION1`)
MINUS
select distinct assetid from dhg.relatedasset
where (`CONDITION2`)
) grs
on stat.assetid = grs.assetid
```
If the dhg.statistics table is not too big and you can presume that you only need to probe a few records with the highest dateofstat to find one that satisfies your relatedasset requirements, then you may rewrite the query like this:
```
select max(stat.dateofstat)
from dhg.statistics stat
where exists(select *
from dhg.relatedasset asset1
where (`CONDITION1`)
and stat.assetid = asset1.assetid)
and not exists (select *
from dhg.relatedasset asset2
where (`CONDITION2`)
and stat.assetid = asset2.assetid)
```
But if you need to do too many probes in the relatedasset table to find the statistics you need, you may get worse performance.
**UPDATE TAKING INTO ACCOUNT NEW PLANS**
Sstan is right: as the `statistics` table is big (71M rows) and the minus result is small (5 rows), you just need proper indexing of the `relatedasset` table.
I'd suggest an index on `relatedasset.assetid` and one on `(relatedasset.linkid, relatedasset.assetid)` to avoid a table scan at all.
By the way, you missed brackets in the second query
(as `AND` has higher priority than `OR`, wrapping `CONDITION1` and `CONDITION2` in brackets is required to achieve the correct WHERE condition),
so you get a plan much worse than it could be. But anyway, with such a data distribution, even the corrected version will perform slower than your variant with proper indexing.
|
Find the lastest date in oracle database
|
[
"",
"sql",
"oracle",
"row-number",
"rownum",
""
] |
How can I check that a varchar contains all the chars from another varchar, where the sequence of characters is irrelevant?
For example: I have a varchar `@a = 'ABC'` and a column `'Col'` in table `'Table'` where there is a row with `'Col' = 'CBAD'`. I want to select this row, because it contains all the characters from the `@a` variable. Thanks for your help.
I tried something like that:
```
DECLARE @a varchar(5) = 'ABCD'
DECLARE @b varchar(5) = 'DCA'
DECLARE @i int = 0
DECLARE @pat varchar(30) = ''
while @i <> len(@b) BEGIN
SET @i = @i + 1
SET @pat = @pat + '[' + @a + ']'
END
SELECT @pat
IF @b LIKE @pat SELECT 1
ELSE SELECT 0
```
But I cannot put this in a `WHERE` condition.
|
You first need to split the variable you are checking into rows, and remove duplicates. For only a few characters you could simply use a table valued constructor:
```
DECLARE @b varchar(5) = 'DCA';
SELECT DISTINCT Letter = SUBSTRING(@b, n.Number, 1)
FROM (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS n (Number)
WHERE n.Number <= LEN(@b)
```
Which gives:
```
Letter
----------
D
C
A
```
Now you can compare this to your column, and limit it only to columns where the column contains all the letters (done in the `HAVING` clause)
```
DECLARE @b varchar(5) = 'DCA';
WITH Letters AS
( SELECT DISTINCT Letter = SUBSTRING(@b, n.Number, 1)
FROM (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS n (Number)
WHERE n.Number <= LEN(@b)
)
SELECT *
FROM (VALUES ('AA'), ('ABCD'), ('ABCDEFG'), ('CAB'), ('NA')) AS t (Col)
WHERE EXISTS
( SELECT 1
FROM Letters AS l
WHERE t.Col LIKE '%' + l.Letter + '%'
HAVING COUNT(DISTINCT l.Letter) = (SELECT COUNT(*) FROM Letters)
);
```
If your variable can be longer than 10 characters, then you may need to adopt a slightly different string splitting method. I would still use numbers to do this, but would instead use [Itzik Ben-Gan's stacked CTE method](http://www.sqlservercentral.com/articles/T-SQL/62867/):
```
WITH N1 AS (SELECT N FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS n (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
N3 (N) AS (SELECT 1 FROM N2 AS N1 CROSS JOIN N2 AS N2)
SELECT ROW_NUMBER() OVER(ORDER BY N)
FROM N3;
```
This will give you a set of numbers from 1 to 10,000, and you can simply add more CTE's and cross joins as necessary to extend the process. So with a longer string you might have:
```
DECLARE @b varchar(5) = 'DCAFGHIJKLMNEOPNFEDACCRADFAE';
WITH N1 AS (SELECT N FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS n (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
N3 (N) AS (SELECT 1 FROM N2 AS N1 CROSS JOIN N2 AS N2),
Numbers (Number) AS (SELECT TOP (LEN(@b)) ROW_NUMBER() OVER(ORDER BY N) FROM N3),
Letters AS (SELECT DISTINCT Letter = SUBSTRING(@b, n.Number, 1) FROM Numbers AS n)
SELECT *
FROM (VALUES ('ABCDDCAFGHIJKLMNEOPNFEDACCRADFAEEFG'), ('CAB'), ('NA')) AS t (Col)
WHERE EXISTS
( SELECT 1
FROM Letters AS l
WHERE t.Col LIKE '%' + l.Letter + '%'
HAVING COUNT(DISTINCT l.Letter) = (SELECT COUNT(*) FROM Letters)
);
```
|
You can try like this:
```
SELECT * FROM yourTable where colname like '%[A]%'
AND colname like '%[B]%'
AND colname like '%[C]%'
```
or you can try using [PATINDEX](https://msdn.microsoft.com/en-us/library/ms188395.aspx)
```
SELECT * FROM yourTable WHERE PATINDEX('%[ABC]%',colname) > 1
```
|
check chars in varchar
|
[
"",
"sql",
"sql-server",
"varchar",
"contain",
""
] |
I have a table called Product. I need to select all product records that have the MAX ManufactureDate.
Here is a sample of the table data:
```
Id ProductName ManufactureDate
1 Car 01-01-2015
2 Truck 05-01-2015
3 Computer 05-01-2015
4 Phone 02-01-2015
5 Chair 03-01-2015
```
This is what the result should be since the max date of all the records is 05-01-2015 and these 2 records have this max date:
```
Id ProductName ManufactureDate
2 Truck 05-01-2015
3 Computer 05-01-2015
```
The only way I can think of doing this is by first doing a query on the entire table to find out what the max date is, storing it in a variable @MaxManufactureDate, and then doing a second query where ManufactureDate = @MaxManufactureDate. Something tells me there is a better way.
There are 1 million+ records in this table:
**Here is the way I am currently doing it:**
```
@MaxManufactureDate = select max(ManufactureDate) from Product
select * from Product where ManufactureDate = @MaxManufactureDate
```
I figure this is a lot better than doing a subselect in a where clause. Or is it the exact same thing as doing a subselect in a where clause? I am not sure whether the query gets run for each row regardless, or whether SQL Server stores the variable value in memory.
|
```
select * from product
where manufactureDate = (select max(manufactureDate) from product)
```
The inner select-statements selects the maximum date, the outer all products which have the date.
|
You can use a subquery:
```
SELECT *
FROM Product
WHERE ManufactureDate = (
SELECT ManufactureDate
FROM Product
ORDER BY ManufactureDate
LIMIT 1
);
```
You may need to use `ASC` or `DESC` to collect the right order
|
Select multiple rows from a table where field is the max date
|
[
"",
"sql",
"sql-server",
"maxdate",
""
] |
I have seen a query to find nth highest salary in table as;
```
SELECT *
FROM one one1
WHERE ( n ) = ( SELECT COUNT( one2.salary )
FROM one one2
WHERE one2.salary >= one1.salary
)
```
Can you explain how this will work? What is meant by `"WHERE ( n )"` here?
|
It is called [row constructor](https://dev.mysql.com/doc/refman/5.0/en/row-subqueries.html).
For your case (`scalar subquery`) brackets are redundant, and you can use just:
```
SELECT *
FROM one one1
WHERE n = ( SELECT COUNT( one2.salary )
FROM one one2
WHERE one2.salary >= one1.salary
)
```
But it is useful when you need to compare more than one value:
```
SELECT *
FROM t1
WHERE (col1,col2) = (SELECT col3, col4 FROM t2 WHERE id = 10);
```
|
When writing a query, it is useful to use table aliases for all columns:
```
SELECT one1.*
FROM one one1
WHERE 2 = (SELECT COUNT( one2.salary )
FROM one one2
WHERE one2.salary >= one1.salary
);
```
What is this doing? For each row in `one1`, the subquery is counting the number of rows that have a salary as large or larger than that row.
For the row(s) that have the largest salary, the count returned by the subquery is the number of rows with the maximum. If we assume that the salary values are different, then this provides the ranking. When the value is 2, then we get the second largest.
However, you might not get a value of "2" -- for instance, if three rows are tied for the maximum, then the value will be "3" for the top salary. You might want "1" for this, so a better way to write this construct is:
```
SELECT one1.*
FROM one one1
WHERE 2 = (SELECT 1 + COUNT( one2.salary )
FROM one one2
WHERE one2.salary > one1.salary
);
```
The above is equivalent to the ANSI standard `rank()` function. Often, you really want to know the second highest *different* salary (ignoring ties). This is the ANSI standard `dense_rank()` function, and is implemented using:
```
SELECT one1.*
FROM one one1
WHERE 2 = (SELECT 1 + COUNT(DISTINCT one2.salary )
FROM one one2
WHERE one2.salary > one1.salary
);
```
|
What is exactly mean by " where(n) " in mysql query
|
[
"",
"mysql",
"sql",
""
] |
My DB has existing data. I'm trying to implement some sort of security rights system. It doesn't need to be truly secure...just limit what each level can effectively change. Technically...one exists...but it needs to be beefed up.
I need to adjust the existing Rights level for people that are Instructors. I have a query (qInstructors) that lists a DISTINCT query of anyone in the class table listed as an Instructor. There are a total of 38 Instructors.
Now I need to update the User table to adjust the rights of those 38 people...and this is where I'm stuck. A simple update query, no problem. But I must not be searching with the correct term because I can't find anything to help me hammer out the SQL.
```
UPDATE tblUserList
INNER JOIN tblUserList ON tblUserList.NTID = qInstructors.Instructor
SET tblUserList.Rights = 2
WHERE [NTID]=[Instructor];
```
When I try to run this, I get a syntax error in the JOIN. This is beyond my SQL knowledge...any leads?
|
I would suggest doing this using `IN`:
```
UPDATE tblUserList
SET tblUserList.Rights = 2
WHERE [NTID] IN (SELECT [Instructor] FROM qInstructors);
```
The use of `JOIN` in an `UPDATE` clause varies by database. The `IN` version is ANSI standard and should work both in MS Access and in whatever back-end database you might be using.
|
You were specifying `tblUserList` instead of `qInstructors` in the `join` clause:
```
UPDATE tblUserList
INNER JOIN qInstructors ON tblUserList.NTID = qInstructors.Instructor
SET tblUserList.Rights = 2
```
|
MS Access SQL update query based on another query
|
[
"",
"sql",
"ms-access",
"sql-update",
""
] |
Wonder if someone might be able to help me. I'm trying to build a view on a table that allows me to put the table name as an extra column into that view. I can do this through a manual process by writing `'TableName' AS ColumnA`, but I'm trying to see if there is a way to get SQL to grab the table name from its own query and add it as a column.
|
The short answer is yes (see example below). The long answer is that may depend upon how your view is constructed.
If you're on SQL Server 2008 or greater, you can use [sys.dm\_sql\_referenced\_entities](https://msdn.microsoft.com/en-us/library/bb677185.aspx) to get dependency info for a DB object.
Let's try it with a view:
```
create table myTable (myColumn int, myOtherColumn char(1));
insert into myTable select 1, 'a';
insert into myTable select 2, 'b';
create view myView as
select t.*, e.referenced_entity_name
from myTable t
join sys.dm_sql_referenced_entities ('dbo.myView', 'OBJECT') e
on e.referenced_minor_id = 0;
select * from myView;
```
Results:
```
myColumn myOtherColumn referenced_entity_name
-------- ------------- ----------------------
1 a myTable
2 b myTable
(2 row(s) affected)
```
Will this simple example fit your usage? Please let us know.
|
No. The problem with this may be that a view can draw upon multiple tables. You are looking for something like `Me.parent.name` or `this.parent.name` -- more precisely, `Me.parents(0)`, `Me.parents(1)` and so on, based on all the tables that were joined to create the view. Also, a view can be based on other views; then the reference would look like `Me.parents_at_up_level(0).name` if a view at dependency level 3 wants to look at its great-grandfather table's name. Nevertheless, it is a good idea and good thinking -- this should have been possible.
|
Can a view auto-identify the table name(s) on which it is based ?
|
[
"",
"sql",
"sql-server",
"objectid",
""
] |
Is there a way to name a column the value of what's in a parameter, without using dynamic SQL?
I need to somehow output the value of what is in the `@input` parameter as the name of the column in the SQL statement. I need to avoid using dynamic SQL.
```
DECLARE @input VARCHAR(10)=' Person ';
SELECT count(*) AS @input + ' Open Data'
FROM #Accounts a
JOIN dbo.FileFeed t On t.ID = a.AccountID
GROUP BY a.accountid
```
|
You need to use dynamic SQL:
```
DECLARE @input VARCHAR(10) = ' Person ';
DECLARE @sql NVARCHAR(MAX) = '
SELECT count(*) AS [@Input Open Data]
FROM #Accounts a JOIN
dbo.FileFeed t
On t.ID = a.AccountID
GROUP BY a.accountid';
SET @sql = REPLACE(@sql, '@Input', @Input);
exec sp_executesql @sql;
```
However, I don't really think this is a good idea. If you need to rename a column, do it in the application code.
|
One **ugly way**, without dynamic SQL, is using a temporary table and renaming the column:
```
DECLARE @input VARCHAR(10) = ' Person ';
DECLARE @new_name VARCHAR(100) = @input + ' Open Data';
SELECT [rename_me] = COUNT(*)
INTO #temp
FROM #Accounts a
JOIN dbo.FileFeed t On t.ID = a.AccountID
GROUP BY a.accountid;
EXEC tempdb..sp_rename '#temp.rename_me', @new_name, 'COLUMN';
SELECT *
FROM #temp;
```
|
Make value of a parameter the column name without dynamic SQL
|
[
"",
"sql",
"sql-server",
"parameters",
"ssms",
""
] |
I want to select rows from my Postgres database that meet the following criteria:
* There are other rows with the same value in column A
* Those other rows have a specific value in column B
* Those other rows have a larger value in column C
So if I had a table like this:
```
User | Item | Date
-----+------+------
Fred | Ball | 5/1/2015
Jane | Pen | 5/7/2015
Fred | Cup | 5/11/2015
Mike | Ball | 5/13/2015
Jane | Ball | 5/18/2015
Fred | Pen | 5/20/2015
Jane | Bat | 5/22/2015
```
The search might be "what did people buy after they bought a ball?" The output I would want would be:
```
User | Item | Date
-----+------+------
Fred | Cup | 5/11/2015
Fred | Pen | 5/20/2015
Jane | Bat | 5/22/2015
```
I've gotten as far as `SELECT * FROM orders AS or WHERE or.user IN (SELECT or2.second_id FROM orders AS or2 GROUP BY or2.user HAVING count(*) > 1);`, which gives me all of Fred's and Jane's orders (since they ordered multiple things). But when I try to put additional limitations on the WHERE clause (e.g. `SELECT * FROM orders AS or WHERE or.item = 'Ball' AND or.user IN (SELECT or2.second_id FROM orders AS or2 GROUP BY or2.user HAVING count(*) > 1);`, I get something that isn't what I expect at all -- a list of records where item = 'Ball' that seems to have ignored the second part of the query.
Thanks for any help you can provide.
Edit: Sorry, I misled some people at the end by describing the bad approach I was taking. (I was working on getting a list of the Ball purchases, which I could use as a subquery in a next step, but people correctly noted that this is an unnecessarily complex/expensive approach.)
|
I think this might give the result you are looking for:
```
SELECT orders.user, orders.item, orders.date
FROM orders, (SELECT * FROM orders WHERE item = 'ball') ball_table
WHERE orders.user = ball_table.user AND orders.date > ball_table.date;
```
|
```
select b.*
from orders a
join orders b
  on b.user = a.user
 and b.date > a.date
where a.item = 'Ball'
```
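Here is a minimal runnable sketch of this self-join approach, checked with SQLite (the `orders` table and ISO date strings are illustrative; dates in ISO format compare correctly as text):

```python
import sqlite3

# Self-join: row a is the 'Ball' purchase, row b is any later
# purchase by the same user.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (user TEXT, item TEXT, date TEXT);
INSERT INTO orders VALUES
  ('Fred','Ball','2015-05-01'),
  ('Jane','Pen' ,'2015-05-07'),
  ('Fred','Cup' ,'2015-05-11'),
  ('Mike','Ball','2015-05-13'),
  ('Jane','Ball','2015-05-18'),
  ('Fred','Pen' ,'2015-05-20'),
  ('Jane','Bat' ,'2015-05-22');
""")
rows = conn.execute("""
  SELECT b.user, b.item, b.date
  FROM orders a
  JOIN orders b
    ON b.user = a.user
   AND b.date > a.date
  WHERE a.item = 'Ball'
  ORDER BY b.date
""").fetchall()
print(rows)  # Fred/Cup, Fred/Pen, Jane/Bat
```

Mike bought a ball but nothing afterwards, and Jane's pen predates her ball, so neither appears in the output.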
|
How do I select records by comparing values in columns B, C among records with same value in column A?
|
[
"",
"sql",
"postgresql",
""
] |
I have a table with two columns that stores translations from English to German (labelled "english" and "german"). I lookup translations with a query like:
```
SELECT german
FROM translate_content
WHERE german = "A" || german = "B" || german = "C" || german = "A";
```
My problem is that sometimes the same word will appear on a page multiple times (in this case the word "A"), and I need the number of rows in my SQL result to match the number of words I want to translate. SQL will return rows like this:
```
A
B
C
```
but I need it to return
```
A
B
C
A
```
How can I accomplish this? If it helps I am using MySQL.
Thanks!
|
Try this:
```
SELECT german
FROM translate_content
WHERE german = "A"
UNION ALL
SELECT german
FROM translate_content
WHERE german = "B"
UNION ALL
SELECT german
FROM translate_content
WHERE german = "C"
UNION ALL
SELECT german
FROM translate_content
WHERE german = "A"
```
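A quick runnable check of the `UNION ALL` approach, using SQLite and an illustrative `translate_content` table (column contents are made up):

```python
import sqlite3

# UNION ALL keeps duplicate result rows, so the repeated lookup
# of 'A' yields two rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE translate_content (english TEXT, german TEXT);
INSERT INTO translate_content VALUES
  ('one','A'), ('two','B'), ('three','C');
""")
rows = conn.execute("""
  SELECT german FROM translate_content WHERE german = 'A'
  UNION ALL
  SELECT german FROM translate_content WHERE german = 'B'
  UNION ALL
  SELECT german FROM translate_content WHERE german = 'C'
  UNION ALL
  SELECT german FROM translate_content WHERE german = 'A'
""").fetchall()
print([r[0] for r in rows])  # ['A', 'B', 'C', 'A']
```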
|
I would suggest using a `join` for this:
```
SELECT tc.german
FROM translate_content tc JOIN
     (SELECT 'A' as g UNION ALL SELECT 'B' UNION ALL SELECT 'C' UNION ALL SELECT 'A'
     ) v
     ON v.g = tc.german;
```
Given the structure of your question, you don't even need the `translate_content` table. The following returns what you are asking for:
```
SELECT 'A' as g UNION ALL
SELECT 'B' UNION ALL
SELECT 'C' UNION ALL
SELECT 'A';
```
However, I suspect different column names might be involved.
|
Make SQL return one row for every WHERE clause, even if identical to previous one
|
[
"",
"mysql",
"sql",
""
] |
I have the following tables for a university database:
```
takes(ID, course_id, sec_id, semester, year, grade)
section(course_id, sec_id, semester, year)
```
I need to determine which sections had the maximum number of students enrolled in Fall semester of 2009.
I have tried variations of the following query:
```
with sec_enrollment as(
select course_id, sec_id, count(ID) as enrollment
from section natural join takes
where semester = 'Fall' and year = 2009
group by course_id, sec_id)
select course_id, sec_id, enrollment
from sec_enrollment
group by course_id, sec_id, enrollment
having enrollment = max(enrollment)
```
The above query returns all sections from Fall 2009 instead of just the sections with max enrollment. It seems that my 'having' clause is being ignored.
If I use the query:
```
with sec_enrollment as(
select course_id, sec_id, count(ID) as enrollment
from section natural join takes
where semester = 'Fall' and year = 2009
group by course_id, sec_id)
select max(enrollment)
from sec_enrollment
```
I can get the desired value for the maximum enrollment. I just can't figure out how to get the desired value along with course\_id and sec\_id of sections that contain that maximum. I'm thinking that the 'group by' clause is screwing things up, but I can't configure it any other way without drawing an error (ORA-00979: not a GROUP BY expression). Any help would be greatly appreciated
|
This is one way to do it using one more `cte`.
```
with sec_enrollment as (
select course_id, sec_id, count(ID) as enrollment
from section natural join takes
where semester = 'Fall' and year = 2009
group by course_id, sec_id)
, max_enrollment as (
select max(enrollment) as maxenrollment from sec_enrollment)
select course_id, sec_id
from sec_enrollment s
join max_enrollment m on s.enrollment = m.maxenrollment
```
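The two-CTE pattern can be verified with SQLite; a tiny illustrative `takes` table stands in for the `section natural join takes` result here:

```python
import sqlite3

# First CTE computes per-section enrollment, second CTE the max;
# joining the two keeps only sections at the maximum (including ties).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE takes (ID INTEGER, course_id TEXT, sec_id INTEGER,
                    semester TEXT, year INTEGER);
INSERT INTO takes VALUES
  (1,'CS-101',1,'Fall',2009), (2,'CS-101',1,'Fall',2009),
  (3,'BIO-301',1,'Fall',2009);
""")
rows = conn.execute("""
  WITH sec_enrollment AS (
    SELECT course_id, sec_id, COUNT(ID) AS enrollment
    FROM takes
    WHERE semester = 'Fall' AND year = 2009
    GROUP BY course_id, sec_id),
  max_enrollment AS (
    SELECT MAX(enrollment) AS maxenrollment FROM sec_enrollment)
  SELECT course_id, sec_id
  FROM sec_enrollment s
  JOIN max_enrollment m ON s.enrollment = m.maxenrollment
""").fetchall()
print(rows)  # [('CS-101', 1)]
```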
|
Order the results in descending order by the enrollment count and then limit the result to one record. Note that in Oracle, `ROWNUM` is assigned before `ORDER BY` runs, so the ordering has to happen in a subquery (and this returns a single row even if several sections tie for the maximum):
```
with sec_enrollment as(
select course_id, sec_id, count(ID) as enrollment
from section natural join takes
where semester = 'Fall' and year = 2009
group by course_id, sec_id)
select course_id, sec_id, enrollment
from (select course_id, sec_id, enrollment
      from sec_enrollment
      order by enrollment DESC)
where ROWNUM = 1
```
|
SQL (Oracle) - Selecting rows containing a max() using 'having'
|
[
"",
"sql",
"oracle",
"max",
""
] |
**Customer** table
number = 666, ContactId= 1a1b..
**Contact** table
Id = 1a1b.., Name = Bob smith
**Address** table
Address = 350 king road, AddressType = office, Order = 1, ContactId = 1a1b
Address = 123 queen street, AddressType = home, Order = 2, ContactId = 1a1b
SQL query:
```
SELECT TOP (100)
dbo.Customer.Number, dbo.Contact.Name,
dbo.Address.StreetAddress, dbo.Address.Order
FROM
dbo.Customer
LEFT JOIN
dbo.Contact ON dbo.Customer.ContactId = dbo.Contact.Id
LEFT JOIN
dbo.Address ON dbo.Contact.Id = dbo.Address.ContactId
WHERE
Contact.Name LIKE '%o%' OR Address.StreetAddress LIKE '%o%'
```
This return the following results :
```
666 | Bob Smith | 350 king road | 1
666 | Bob Smith | 123 queen street | 2
```
**Questions**:
I want to get the first available address (350 king road / sequence 1) and not both (even if the search string match both address)
When the match is on the second address only (like when doing a like %q% instead of like %o% in my example) I want to return this particular address and not the other one
Is it possible to do this in SQL? BTW my goal is also to get the results as fast as possible, is there any other way to do this kind of search in some "one to many" joined table?
I just need something to start with....thanks!
|
If you plan on retrieving multiple customers but only the primary address for each you can use row\_number() in a subquery and only return row 1 of each partition(customer), as such:
```
SELECT dbo.Customer.Number,
dbo.Contact.Name,
dbo.Address.StreetAddress,
dbo.Address.[Order]
FROM (SELECT TOP (100) Row_number()
OVER (
partition BY dbo.Customer.Number, dbo.Contact.Name
ORDER BY dbo.Address.[Order]) AS seq,
dbo.Customer.Number,
dbo.Contact.Name,
dbo.Address.StreetAddress,
dbo.Address.[Order]
FROM dbo.Customer
LEFT JOIN dbo.Contact
ON dbo.Customer.ContactId = dbo.Contact.Id
LEFT JOIN dbo.Address
ON dbo.Contact.Id = dbo.Address.ContactId
WHERE Contact.Name LIKE '%o%'
OR Address.StreetAddress LIKE '%o%') t
WHERE t.seq = 1
```
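The row_number()-per-partition pattern can be sketched and verified with SQLite (window functions require SQLite 3.25+); the joined tables are flattened into one illustrative table, and the reserved word `Order` is renamed `Ord`:

```python
import sqlite3

# Number each customer's addresses by their Order value and keep
# only the first one per customer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cust_addr (Number INTEGER, Name TEXT,
                        StreetAddress TEXT, Ord INTEGER);
INSERT INTO cust_addr VALUES
  (666,'Bob Smith','350 king road',1),
  (666,'Bob Smith','123 queen street',2);
""")
rows = conn.execute("""
  SELECT Number, Name, StreetAddress FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY Number ORDER BY Ord) AS seq
    FROM cust_addr
    WHERE Name LIKE '%o%' OR StreetAddress LIKE '%o%'
  ) t
  WHERE t.seq = 1
""").fetchall()
print(rows)  # [(666, 'Bob Smith', '350 king road')]
```

Even though both of Bob's addresses match `'%o%'`, only the one with the lowest Order survives the `seq = 1` filter.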
|
You need to consider that `Contact.Name LIKE '%o%'` is likely to match more than one unique name.
Predicates on the right-hand table in the WHERE clause negate the LEFT JOIN, so you might as well use an inner JOIN:
```
select top 100 *
from
(
SELECT dbo.Customer.Number,
dbo.Contact.Name,
dbo.Address.StreetAddress,
row_number() over (partition by dbo.Customer.Number order by dbo.Address.[Order]) as rn
FROM dbo.Customer
JOIN dbo.Contact
ON dbo.Contact.Id = dbo.Customer.ContactId
JOIN dbo.Address
ON dbo.Address.ContactId = dbo.Customer.ContactId -- = dbo.Contact.Id
where Contact.Name LIKE '%o%'
or Address.StreetAddress LIKE '%o%'
) tt
where tt.rn = 1
order by tt.Number
```
I think you can get away with just partitioning by dbo.Customer.Number, but you may need to add dbo.Contact.Name and dbo.Address.StreetAddress.
If you want to use a LEFT JOIN instead, you would need
( Contact.Name LIKE '%o%' or Contact.Name is null )
and Address would need to join on dbo.Customer.ContactId
to get empty joins.
|
Where clause when join is on many elements
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to count the number of instances of a value in a column and return it in another column. I've successfully done the calculation with a hard coded value but can't figure out how to get a variable from the main SELECT into the COUNT.
In the sample data at the bottom, CommissionRate shows the number of times the DealInstanceOID value in that row shows up in the DealInstanceOID column. My hard coded solution works for all of these values, but getting this to happen dynamically is mystifying me. The DealInstanceOID variable is out of the scope of the nested SELECT and I'm unsure of how to work around that.
I posted this question earlier and had some complaints about how my tables are joined - wasn't able to get any more feedback from those posters and I am reposting as they suggested.
```
SELECT
D.DealOID
DD.DealInstanceOID
, CommissionRate = (SELECT (DealInstanceOID COUNT(*) FROM dbo.DealDetail WHERE DealInstanceOID = 4530))
, Commission = CONVERT(MONEY,C.Commission,1)
FROM dbo.Book AS B WITH(NOLOCK)
INNER JOIN Contract as C WITH(NOLOCK) ON B.BookOID = C.BookOID
INNER JOIN Deal as D WITH(NOLOCK)ON C.ContractOID = D.ContractOID
INNER JOIN DealInstance DI WITH(NOLOCK) ON DI.DealOID = D.DealOID
INNER JOIN DealDetail AS DD WITH(NOLOCK)ON DD.DealInstanceOID = DI.DealInstanceOID
GROUP BY
DD.DealInstanceOID
, D.DealOID
, C.Commission
, B.BookOID
ORDER BY DealOID ASC
DealOID |Commission |CommissionRate|Commission/Rate|DealInstanceOID
101 | $1000 | 5 | $200.00 | 4530
101 | $1000 | 5 | $200.00 | 4530
101 | $1000 | 5 | $200.00 | 4530
101 | $1000 | 5 | $200.00 | 4530
101 | $1000 | 5 | $200.00 | 4530
101 | $5000 | 6 | $833.33 | 4531
102 | $5000 | 6 | $833.33 | 4531
102 | $5000 | 6 | $833.33 | 4531
102 | $5000 | 6 | $833.33 | 4531
102 | $5000 | 6 | $833.33 | 4531
102 | $5000 | 6 | $833.33 | 4531
103 | $6000 | 3 | $2,000.00 | 4540
103 | $6000 | 3 | $2,000.00 | 4540
103 | $6000 | 3 | $2,000.00 | 4540
```
|
Two problems with your scalar sub-select statement. One is a syntax error and the other is referencing. Fix them as follows:
```
CommissionRate = (SELECT COUNT(*) FROM dbo.DealDetail as s WHERE s.DealInstanceOID = dd.DealInstanceOID)
```
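A minimal runnable check of this correlated sub-select, using SQLite and an illustrative `DealDetail` table (only the `DealInstanceOID` column matters for the count):

```python
import sqlite3

# For each row, the correlated sub-select counts how many rows in
# DealDetail share its DealInstanceOID.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DealDetail (DealInstanceOID INTEGER);
INSERT INTO DealDetail VALUES (4530),(4530),(4530),(4531),(4531),(4540);
""")
rows = conn.execute("""
  SELECT DISTINCT dd.DealInstanceOID,
         (SELECT COUNT(*) FROM DealDetail s
          WHERE s.DealInstanceOID = dd.DealInstanceOID) AS CommissionRate
  FROM DealDetail dd
  ORDER BY dd.DealInstanceOID
""").fetchall()
print(rows)  # [(4530, 3), (4531, 2), (4540, 1)]
```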
|
You should be able to reference it by the table alias and column name:
```
...
WHERE dbo.DealDetail.DealInstanceOID = DD.DealInstanceOID))
...
```
|
Using COUNT in nested SELECTS
|
[
"",
"sql",
"sql-server",
"count",
""
] |
I have a union query, where I want to use the results of the select in the "left side" of the union query, in the select statement on the "right side" of the union query. The query below works correctly (on postgres at least), but I am running query1 2 times, once as query1, and again as sameAsQuery1.
```
select x as zz from (select 69 as x) as query1
union all
select count(zz) as zz from
(select x as zz from (select 69 as x) as sameAsQuery1) as query2
```
I would like to do something like this so I don't have to run query1 2 times, but it doesn't work:
```
select x as zz from (select 69 as x) as query1
union all
select count(zz) as zz from query1
```
I get this error message:
> ERROR: relation "query1" does not exist LINE 3: select count(zz)
> as zz from query1
Is there a way to rewrite this query so query1 only runs once?
Minor modification to Mr. Llama's response worked quite well, it looks like this (Note the addition of "as q2"):
```
WITH
query1 AS
(
SELECT x AS zz FROM (SELECT 69 AS x) as q2
)
SELECT zz FROM query1
UNION ALL
SELECT COUNT(zz) AS zz FROM query1
```
|
You're looking for [common table expressions](http://www.postgresql.org/docs/8.4/static/queries-with.html).
They let you define a result and use it multiple times within a query.
In your first case:
```
WITH
query1 AS
(
SELECT x AS zz FROM (SELECT 69 AS x)
)
SELECT zz FROM query1
UNION ALL
SELECT COUNT(zz) AS zz FROM query1
```
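Reusing a CTE on both sides of a `UNION ALL` can be verified with SQLite (the subquery alias `q2` is included, matching the OP's working modification):

```python
import sqlite3

# query1 is defined once in the WITH clause and referenced twice.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
  WITH query1 AS (
    SELECT x AS zz FROM (SELECT 69 AS x) AS q2
  )
  SELECT zz FROM query1
  UNION ALL
  SELECT COUNT(zz) AS zz FROM query1
""").fetchall()
print(rows)  # [(69,), (1,)]
```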
|
Why would you do something like this when most databases support `group by` and `rollup`? It seems you want something like this:
```
select x, count(*) as cnt
from <whatever>
group by x with rollup;
```
|
How to use the results of a select in another select in a union query?
|
[
"",
"sql",
"postgresql",
"union",
"subquery",
""
] |