| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a database project at my school and I am almost finished. The only thing I still need is the average number of movies per day. I have a watchhistory table where you can find the users who have watched a movie. The instruction is to filter the watchhistory for the people who average 2 movies per day.
I wrote the following SQL statement, but every time I get errors. Can someone help me?
SQL:
```
SELECT
customer_mail_address,
COUNT(movie_id) AS AantalBekeken,
COUNT(movie_id) / SUM(GETDATE() -
(SELECT subscription_start FROM Customer)) AS AveragePerDay
FROM
Watchhistory
GROUP BY
customer_mail_address
```
The error:
> Msg 130, Level 15, State 1, Line 1
> Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
I tried something different, and this query counts the total movies per day. Now I need the average over everything, and the SQL should only show the customers who average more than 2 movies per day.
```
SELECT
Count(movie_id) as AantalPerDag,
Customer_mail_address,
Cast(watchhistory.watch_date as Date) as Date
FROM
Watchhistory
GROUP BY
customer_mail_address, Cast(watch_date as Date)
```
|
I got it guys. Finally :)
```
SELECT customer_mail_address, SUM(AveragePerDay) / COUNT(customer_mail_address) AS gemiddelde
FROM (SELECT DISTINCT customer_mail_address, COUNT(CAST(watch_date AS date)) AS AveragePerDay
FROM dbo.Watchhistory
GROUP BY customer_mail_address, CAST(watch_date AS date)) AS d
GROUP BY customer_mail_address
HAVING SUM(AveragePerDay) / COUNT(customer_mail_address) >= 2
```
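For anyone landing here later, the same idea can be sketched in SQLite with made-up sample data (table and column names follow the question; `AVG` over the per-day counts is equivalent to the `SUM`/`COUNT` division above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Watchhistory (customer_mail_address TEXT, movie_id INT, watch_date TEXT);
INSERT INTO Watchhistory VALUES
  ('a@x.com', 1, '2023-01-01'), ('a@x.com', 2, '2023-01-01'),
  ('a@x.com', 3, '2023-01-02'), ('a@x.com', 4, '2023-01-02'),
  ('b@x.com', 5, '2023-01-01');
""")

# Count movies per customer per day, then average those daily counts
# per customer, keeping customers averaging at least 2 movies per day.
rows = conn.execute("""
SELECT customer_mail_address, AVG(per_day) AS avg_per_day
FROM (SELECT customer_mail_address, COUNT(movie_id) AS per_day
      FROM Watchhistory
      GROUP BY customer_mail_address, DATE(watch_date)) AS daily
GROUP BY customer_mail_address
HAVING AVG(per_day) >= 2
""").fetchall()
print(rows)
```

Here `a@x.com` watched 2 movies on each of 2 days (average 2.0) and is kept; `b@x.com` averages 1 and is filtered out.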
|
The big problem that I see is that you're trying to use a subquery as if it's a single value. A subquery could potentially return many values, and unless you have only one customer in your system it will do exactly that. You should be `JOIN`ing to the Customer table instead. Hopefully the `JOIN` only returns one customer per row in WatchHistory. If that's not the case then you'll have more work to do there.
```
SELECT
    C.customer_mail_address,
COUNT(movie_id) AS AantalBekeken,
CAST(COUNT(movie_id) AS DECIMAL(10, 4)) / DATEDIFF(dy, C.subscription_start, GETDATE()) AS AveragePerDay
FROM
WatchHistory WH
INNER JOIN Customer C ON C.customer_id = WH.customer_id -- I'm guessing at the join criteria here since no table structures were provided
GROUP BY
C.customer_mail_address,
C.subscription_start
HAVING
    CAST(COUNT(movie_id) AS DECIMAL(10, 4)) / DATEDIFF(dy, C.subscription_start, GETDATE()) <> 2
```
I'm guessing that the criteria isn't **exactly** 2 movies per day, but either less than 2 or more than 2. You'll need to adjust based on that. Also, you'll need to adjust the precision for the average based on what you want.
|
Access: Having trouble with getting average movies per day
|
[
"",
"sql",
"sql-server",
"database",
"ms-access",
"average",
""
] |
I have two Oracle tables and I am doing a UNION between them to find the differences in the data stored in those two tables. But when I run the query in SQL Developer it is too slow, and when I use the same query in Informatica its throughput is low too.
TABLE 1: W\_SALES\_INVOICE\_LINE\_FS EBS(NET\_AMT,
INVOICED\_QTY,
CREATED\_ON\_DT,
CHANGED\_ON\_DT,
INTEGRATION\_ID,
'EBS' AS SOURCE\_NAME)
TABLE 2: W\_SALES\_INVOICE\_LINE\_F DWH (NET\_AMT,
INVOICED\_QTY,
CREATED\_ON\_DT,
CHANGED\_ON\_DT,
INTEGRATION\_ID,
'EBS' AS SOURCE\_NAME)
I am attaching the query with the question:
```
SELECT EBS.NET_AMT,
nvl(EBS.INVOICED_QTY,
case nvl(EBS.NET_AMT,0) when 0 then EBS.INVOICED_QTY
else -1 end) INVOICED_QTY,
EBS.CREATED_ON_DT,
EBS.CHANGED_ON_DT,
EBS.INTEGRATION_ID,
'EBS' AS SOURCE_NAME
FROM
W_SALES_INVOICE_LINE_FS EBS
WHERE NOT EXISTS (SELECT INTEGRATION_ID FROM W_SALES_INVOICE_LINE_F DWH
WHERE EBS.INTEGRATION_ID = DWH.INTEGRATION_ID)
UNION
SELECT DWH.NET_AMT,
DWH.INVOICED_QTY,
DWH.CREATED_ON_DT,
DWH.CHANGED_ON_DT,
DWH.INTEGRATION_ID,
'DWH' AS SOURCE_NAME
FROM
W_SALES_INVOICE_LINE_F DWH
where DWH.IS_POS = 'N' and
not exists (SELECT INTEGRATION_ID FROM W_SALES_INVOICE_LINE_FS EBS
WHERE EBS.INTEGRATION_ID = DWH.INTEGRATION_ID);
```
Let me know if you want to see the explain plan. Can someone tell me how to improve the performance, or let me know if the issue is with something else and not with the above query!
[explain plan screenshot](https://i.stack.imgur.com/nwn3b.jpg)
|
You are not performing a `JOIN`, you are performing a `UNION`. You are performing subqueries, however, and those may be slowing down the overall performance. You might change the `EXISTS` to `IN`, which can take advantage of an index if one exists (but beware: `NOT IN` behaves unexpectedly if the subquery can return `NULL`s, so make sure `INTEGRATION_ID` is not nullable).
Try the following:
```
SELECT EBS.NET_AMT,
nvl(EBS.INVOICED_QTY,
case nvl(EBS.NET_AMT,0) when 0 then EBS.INVOICED_QTY
else -1 end) INVOICED_QTY,
EBS.CREATED_ON_DT,
EBS.CHANGED_ON_DT,
EBS.INTEGRATION_ID,
'EBS' AS SOURCE_NAME
FROM
W_SALES_INVOICE_LINE_FS EBS
WHERE EBS.INTEGRATION_ID NOT IN (
SELECT INTEGRATION_ID
FROM W_SALES_INVOICE_LINE_F
)
UNION ALL
SELECT DWH.NET_AMT,
DWH.INVOICED_QTY,
DWH.CREATED_ON_DT,
DWH.CHANGED_ON_DT,
DWH.INTEGRATION_ID,
'DWH' AS SOURCE_NAME
FROM
W_SALES_INVOICE_LINE_F DWH
where DWH.IS_POS = 'N'
and DWH.INTEGRATION_ID not in (
SELECT INTEGRATION_ID
FROM W_SALES_INVOICE_LINE_FS
);
```
Also, as mentioned by others in the comments, `UNION ALL` might be more appropriate than `UNION`, since `UNION` forces a duplicate-removal step you don't need when the two branches can't overlap.
Also, you could try using a `LEFT OUTER JOIN`, which, if you do have an index, is a more explicit way of doing the above. I don't have access to my Oracle instance from my current location to try an explain plan, but the query above and the one below may actually be optimized similarly.
```
SELECT EBS.NET_AMT,
Nvl(EBS.INVOICED_QTY,
CASE Nvl(EBS.NET_AMT, 0) WHEN 0
THEN EBS.INVOICED_QTY
ELSE -1 END
) AS INVOICED_QTY,
EBS.CREATED_ON_DT,
EBS.CHANGED_ON_DT,
EBS.INTEGRATION_ID,
'EBS' AS SOURCE_NAME
FROM W_SALES_INVOICE_LINE_FS EBS
LEFT OUTER JOIN W_SALES_INVOICE_LINE_F DWH
ON DWH.INTEGRATION_ID = EBS.INTEGRATION_ID
WHERE DWH.INTEGRATION_ID IS NULL
UNION ALL
SELECT DWH.NET_AMT,
DWH.INVOICED_QTY,
DWH.CREATED_ON_DT,
DWH.CHANGED_ON_DT,
DWH.INTEGRATION_ID,
'DWH' AS SOURCE_NAME
FROM W_SALES_INVOICE_LINE_F DWH
LEFT OUTER JOIN W_SALES_INVOICE_LINE_FS EBS
ON EBS.INTEGRATION_ID = DWH.INTEGRATION_ID
WHERE EBS.INTEGRATION_ID IS NULL
AND DWH.IS_POS = 'N'
;
```
Could you provide a brief description of the tables in your question? How many (approximately) records are in each table? Do you have any indexes? Are any of the fields calculated/derived? When you do an explain plan on these or your original query, where does it show the bottleneck?
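To make the anti-join equivalence concrete, here is a minimal SQLite sketch with hypothetical one-column tables; the `LEFT JOIN ... IS NULL` form and `NOT EXISTS` return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fs (integration_id INT NOT NULL);
CREATE TABLE f  (integration_id INT NOT NULL);
INSERT INTO fs VALUES (1), (2), (3);
INSERT INTO f  VALUES (2), (3), (4);
""")

# Anti-join: rows in fs that have no match in f
anti_join = conn.execute("""
SELECT fs.integration_id
FROM fs LEFT JOIN f ON f.integration_id = fs.integration_id
WHERE f.integration_id IS NULL
""").fetchall()

# The NOT EXISTS formulation returns the same rows
not_exists = conn.execute("""
SELECT integration_id FROM fs
WHERE NOT EXISTS (SELECT 1 FROM f WHERE f.integration_id = fs.integration_id)
""").fetchall()

print(anti_join, not_exists)
```

Whether the optimizer actually treats them identically depends on the engine and available indexes, so checking the explain plan is still worthwhile.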
|
`NOT EXISTS` and `NOT IN` subqueries can often be the performance bottleneck. A trick to get around this is to use a `LEFT OUTER JOIN` with a `WHERE` clause stating that the second table's join column is `NULL`, i.e. there is no matching row. So try:
```
SELECT EBS.NET_AMT,
nvl(EBS.INVOICED_QTY,
case nvl(EBS.NET_AMT,0) when 0 then EBS.INVOICED_QTY
else -1 end) INVOICED_QTY,
EBS.CREATED_ON_DT,
EBS.CHANGED_ON_DT,
EBS.INTEGRATION_ID,
'EBS' AS SOURCE_NAME
FROM
W_SALES_INVOICE_LINE_FS EBS
LEFT OUTER JOIN
W_SALES_INVOICE_LINE_F DWH
ON EBS.INTEGRATION_ID = DWH.INTEGRATION_ID
WHERE DWH.INTEGRATION_ID IS NULL
UNION
SELECT DWH.NET_AMT,
DWH.INVOICED_QTY,
DWH.CREATED_ON_DT,
DWH.CHANGED_ON_DT,
DWH.INTEGRATION_ID,
'DWH' AS SOURCE_NAME
FROM
W_SALES_INVOICE_LINE_F DWH
LEFT OUTER JOIN W_SALES_INVOICE_LINE_FS EBS
ON EBS.INTEGRATION_ID = DWH.INTEGRATION_ID
where EBS.INTEGRATION_ID IS NULL
AND DWH.IS_POS = 'N'
```
|
I am trying to improve the performance of an Oracle SQL that is finding the differences between two tables
|
[
"",
"sql",
"oracle",
"informatica",
""
] |
Imagine I have two tables, `food` and `people`, and I indicate who likes which food with a link table. So:
```
foods
-----
sausages
pie
Mars bar
people
------
john
paul
george
ringo
person | food (link table)
-------+-----
john | pie
john | sausage
paul | sausage
```
I'd like to get a list of foods, along with **a** person who likes that food. So I'd like a table like this:
```
food | a randomly chosen liker
---------+------------------------
sausage | john (note: this could be "paul" instead)
pie | john (note: must be john; he's the only liker)
Mars bar | null (note: nobody likes it)
```
Is it possible to do this in one query?
Obviously, I can do:
```
select
f.food, p.person
from
food f inner join link l
on f.food = l.food
inner join person p
on l.person = p.person
```
but that will give me *two* `sausage` rows, because two people like it, and I'll have to deduplicate the rows myself.
|
Do a `LEFT JOIN` to also get food that no one likes, `GROUP BY` to get each food only once, and use `MIN` to pick the first person that likes that food.
```
select f.food, min(p.person)
from food f
left join linktable l on f.id = l.food_id
left join people p on p.id = l.person_id
group by f.food
```
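A runnable sketch of this pattern in SQLite, with the link table storing names directly as in the question (so no id columns are assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE food (food TEXT);
CREATE TABLE link (person TEXT, food TEXT);
INSERT INTO food VALUES ('sausage'), ('pie'), ('Mars bar');
INSERT INTO link VALUES ('john', 'pie'), ('john', 'sausage'), ('paul', 'sausage');
""")

# LEFT JOIN keeps foods nobody likes (liker comes back NULL);
# MIN picks one liker deterministically per food.
rows = conn.execute("""
SELECT f.food, MIN(l.person) AS liker
FROM food f LEFT JOIN link l ON l.food = f.food
GROUP BY f.food
ORDER BY f.food
""").fetchall()
print(rows)
```

Note `MIN` always picks the alphabetically first liker, so it is deterministic rather than random, which satisfies the "a person who likes that food" requirement.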
|
Another variant, assuming it is SQL Server:
```
Select
a.Food, b.Person
from
foods a
outer apply
(
Select top 1 Person from linkTable b where a.Food = b.Food
) b
```
|
SQL: Fetching one row across a link table join
|
[
"",
"sql",
""
] |
If I have a table of Buses that has many Stops, and each Stop record has an arrival time, how do I retrieve and order Buses by the earliest Stop time?
```
_______ ________
| Buses | | Stops |
|-------| |--------|
| id | | id |
| name | | bus_id |
------- | time |
--------
```
I'm able to do it with the following query:
```
SELECT DISTINCT sub.id, sub.name
FROM
(SELECT buses.*, stops.time
FROM buses
INNER JOIN stops ON stops.bus_id = buses.id
ORDER BY stops.time) AS sub;
```
...but this has the downsides of having to do 2 queries and having to specify all the fields from buses in the SELECT DISTINCT clause. That gets particularly annoying if the buses table ever changes.
What I want to do is this:
```
SELECT DISTINCT buses.*
FROM buses
INNER JOIN stops ON stops.bus_id = buses.id
ORDER BY stops.time;
```
...however in order to get `DISTINCT buses.*`, I have to include `stops.time` there as well, which gives me duplicate buses with different stop times.
What would be a better way to do this query?
|
One thing you can do is to put the inner query into `ORDER BY`. This will keep the outer query "clean" as it will only select from buses. This way you won't need to return any additional fields.
```
SELECT buses.*
FROM buses
ORDER BY (
SELECT MIN(stops.time) FROM stops WHERE stops.bus_id = buses.id
)
```
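Here is a quick SQLite check of the ORDER-BY-subquery approach with invented data; the bus with the earliest stop comes out first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE buses (id INT, name TEXT);
CREATE TABLE stops (id INT, bus_id INT, time TEXT);
INSERT INTO buses VALUES (1, 'A'), (2, 'B');
INSERT INTO stops VALUES (1, 1, '10:00'), (2, 1, '08:00'), (3, 2, '07:30');
""")

# The correlated subquery computes each bus's earliest stop time,
# and the outer query only ever selects from buses.
rows = conn.execute("""
SELECT buses.*
FROM buses
ORDER BY (SELECT MIN(stops.time) FROM stops WHERE stops.bus_id = buses.id)
""").fetchall()
print(rows)
```

Bus B's earliest stop (07:30) precedes bus A's (08:00), so B sorts first, and no stop columns leak into the result.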
|
Specifying fields in the select is best practice, so I'm not sure why that is listed as a downside.
I would do:
```
Select buses.*
From buses inner join
     (Select stops.bus_id, min(stops.time) as mintime
      From Stops
      Group By stops.bus_id) st on buses.id = st.bus_id
Order By st.mintime
```
or
```
Select buses.*, min(stops.time) as stoptime
From buses inner join stops on buses.ID = stops.bus_ID
Group By buses.id, buses.name
Order By stoptime
```
|
Ordering SQL results by attributes in another table
|
[
"",
"sql",
"postgresql",
""
] |
I have a table (Main) which records the number of seconds worked in a day for a list of people. I'm trying to convert these seconds into hh:mm:ss.
I'm trying to use:
```
DECLARE @TimeinSecond INT
SET @TimeinSecond = main.column -- Change the seconds
SELECT RIGHT('0' + CAST(@TimeinSecond / 3600 AS VARCHAR),2) + ':' +
RIGHT('0' + CAST((@TimeinSecond / 60) % 60 AS VARCHAR),2) + ':' +
RIGHT('0' + CAST(@TimeinSecond % 60 AS VARCHAR),2)
```
main.column won't register as the thing that needs to be converted here. Can a column be selected for the full time conversion? (The seconds worked daily are different for everyone.)
|
Assigning a column to a variable makes no sense in SQL Server (I'm assuming you're using it since you've tagged your question with `ssms`).
It looks like you just need to select this column's value and do some calculations over it:
```
select
RIGHT('0' + CAST(column / 3600 AS VARCHAR),2) + ':' +
RIGHT('0' + CAST((column / 60) % 60 AS VARCHAR),2) + ':' +
RIGHT('0' + CAST(column % 60 AS VARCHAR),2)
from main
```
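The same `/3600`, `(/60)%60`, `%60` decomposition can be sanity-checked in SQLite, where `printf` does the zero-padding instead of the `RIGHT`/`CAST` trick (a sketch with an invented column name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main (seconds INT)")
conn.executemany("INSERT INTO main VALUES (?)", [(33634,), (59,)])

# hours = s / 3600, minutes = (s / 60) % 60, seconds = s % 60
# (integer division throughout); %02d zero-pads each component.
rows = conn.execute("""
SELECT printf('%02d:%02d:%02d', seconds / 3600, (seconds / 60) % 60, seconds % 60)
FROM main
""").fetchall()
print(rows)
```

33634 seconds comes out as 09:20:34 and 59 seconds as 00:00:59.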
|
```
select * , convert(varchar(20), dateadd(ss, secondsColumn, 0), 108) from main
```
Output:
**09:20:34**
(Note this approach wraps around once the seconds column exceeds 24 hours' worth of seconds.)
|
SQL: converting a column's recorded seconds to hours:minutes:seconds
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
I'm trying to create a stored procedure using T-SQL to insert variables into a table. Using the method that I know of, you declare the variables in the creation of the stored procedure, but I believe it's possible to declare the variables as part of the exec statement to make the insert dynamic. I've googled and googled and googled and cannot find anything that supports this.
Here is my example code as it stands with static variables:
```
create procedure spInsert as
declare @insertValueOne varchar(5) = 'Test1'
declare @insertValueTwo varchar(5) = 'Test2'
declare @insertValueThree varchar(5) = 'Test3'
declare @insertValueFour varchar(5) = 'Test4'
begin
insert into testTable
(ValueOne, ValueTwo, ValueThree, ValueFour)
values
(@insertValueOne, @insertValueTwo, @insertValueThree, @insertValueFour)
end
exec spInsert
```
What I'm trying to achieve is a situation where I can use an execution script like this, for example:
```
exec spInsert('Test1', 'Test2', 'Test3', 'Test4')
```
Instead of creating static variables within the procedure, this would allow the user to execute the stored procedure without amending its contents.
Does this make sense?
Any ideas?
|
Use arguments with default values instead of local variables:
```
create procedure dbo.spInsert
@insertValueOne varchar(5) = 'Test1'
,@insertValueTwo varchar(5) = 'Test2'
,@insertValueThree varchar(5) = 'Test3'
,@insertValueFour varchar(5) = 'Test4'
AS
BEGIN
-- You can still use local variables
    -- DECLARE @my_local_variable VARCHAR(5) = UPPER(@insertValueOne);
INSERT INTO testTable(ValueOne, ValueTwo, ValueThree, ValueFour)
VALUES (@insertValueOne, @insertValueTwo, @insertValueThree, @insertValueFour)
END;
```
Call:
```
-- named parameters (good practice, self-documenting)
EXEC spInsert @insertValueOne = 'Test1', @insertValueTwo = 'BBB';
-- positional parameters (order matters)
EXEC spInsert 'Test1', 'Test2', 'Test3', 'Test4'
```
Naming user-defined stored procedures with the `sp` prefix is not best practice.
---
> I think this looks like my solution
No, it isn't: your stored procedure below does not accept any arguments.
```
create procedure spInsert as
declare @insertValueOne varchar(5) = 'Test1'
declare @insertValueTwo varchar(5) = 'Test2'
declare @insertValueThree varchar(5) = 'Test3'
declare @insertValueFour varchar(5) = 'Test4'
begin
insert into testTable
(ValueOne, ValueTwo, ValueThree, ValueFour)
values
(@insertValueOne, @insertValueTwo, @insertValueThree, @insertValueFour)
end
```
|
You are nearly there. Your code contains a few errors. My example only demonstrates a fraction of what can be achieved with stored procedures. See Microsoft's MSDN [help docs on procedures](https://msdn.microsoft.com/en-GB/library/ms187926.aspx) for more.
The example uses a temp procedure (the hash in front of the name makes it temp). But the principles apply to regular SPs.
**SP**
```
/* Declares a temp SP for testing.
* The SP has two parameters, each with a default value.
*/
CREATE PROCEDURE #TempExample
(
@ValueOne VARCHAR(50) = 'Default Value One',
@ValueTwo VARCHAR(50) = 'Default Value Two'
)
AS
SET NOCOUNT ON;
BEGIN
SELECT
@ValueOne AS ReturnedValueOne,
@ValueTwo AS ReturnedValueTwo
END
GO
```
I've included the *SET NOCOUNT ON* statement. MSDN recommends this in the section on best practice.
> Use the SET NOCOUNT ON statement as the first statement in the body of
> the procedure. That is, place it just after the AS keyword. This turns
> off messages that SQL Server sends back to the client after any
> SELECT, INSERT, UPDATE, MERGE, and DELETE statements are executed.
> Overall performance of the database and application is improved by
> eliminating this unnecessary network overhead.
In the example I've given each parameter a default value, but this is optional. Below shows how you can call this SP using variables and hard coded values.
**Example Call**
```
/* We can declare variables outside the SP
 * to pass in values.
 */
DECLARE @ParamOne VARCHAR(50) = 'Passed Value One';
/* Calling the SP with params.
* You can use variables or hard coded values.
*/
EXECUTE #TempExample @ParamOne, 'Passed Value Two';
```
|
T-SQL Stored procedure, declaring variables within the exec
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
How can we identify the relationship between two SQL Server tables, whether it is one-to-one or some other relationship?
|
In SQL Server Management Studio, create a new view in your database and add the two tables whose relationship you want to see.
[view designer screenshot](https://i.stack.imgur.com/cnBOH.png)
|
You can use the SQL Server system catalog views for that purpose:
```
SELECT
obj.name AS fk
,sch.name AS [schema_name]
,tabParent.name AS [table]
,colParent.name AS [column]
,tabRef.name AS [referenced_table]
,colRef.name AS [referenced_column]
FROM sys.foreign_key_columns fkc
JOIN sys.objects obj ON obj.object_id = fkc.constraint_object_id
JOIN sys.tables tabParent ON tabParent.object_id = fkc.parent_object_id
JOIN sys.schemas sch ON tabParent.schema_id = sch.schema_id
JOIN sys.columns colParent ON colParent.column_id = fkc.parent_column_id AND colParent.object_id = tabParent.object_id
JOIN sys.tables tabRef ON tabRef.object_id = fkc.referenced_object_id
JOIN sys.columns colRef ON colRef.column_id = fkc.referenced_column_id AND colRef.object_id = tabRef.object_id
JOIN sys.schemas schRef ON tabRef.schema_id = schRef.schema_id
WHERE schRef.name = N'dbo'
AND tabRef.name = N'Projects'
```
It is possible to filter this query by referenced tables or columns, or just to look for everything that references a specific column.
|
How can we identify the relationship between two SQL Server tables, whether it is one-to-one or some other relationship?
|
[
"",
"sql",
"sql-server",
""
] |
I want to open a huge SQL file (20 GB) on my system. I tried [phpmyadmin](https://www.phpmyadmin.net/) and [bigdump](http://www.ozerov.de/bigdump/), but it seems bigdump does not support SQL files larger than 1 GB. Is there any script or software that I can use to open, view, search and edit it?
|
MySQL Workbench should work fine; it works well with large DBs and is very useful...
<https://www.mysql.com/products/workbench/>
Install, then basically you just create a new connection, and then double click it on the home screen to get access to the DB. Right click on a table and click Select 1000 for quick view of table data.
More info <http://mysqlworkbench.org/2009/11/mysql-workbench-5-2-beta-quick-start-tutorial/>
|
Try using mysql command line to do basic SELECT queries.
```
$ mysql -u myusername -p
mysql> SHOW DATABASES;                -- lists your databases
mysql> USE databasename;              -- selects the database to query
mysql> SHOW TABLES;                   -- lists the tables in the database
mysql> SELECT * FROM tablename WHERE column = 'somevalue';
```
|
Opening huge mySQL database
|
[
"",
"mysql",
"sql",
"database",
"bigdata",
""
] |
I'm stuck on crafting a MySQL query to solve a problem. I'm trying to iterate through a list of "sales" where I'm trying to sort the Customer IDs listed by their total accumulated spend.
```
Customer ID | Purchase price
------------|---------------
10          | 1000
10          | 1010
20          | 2111
42          | 9954
10          | 9871
42          | 6121
```
How would I iterate through the table where I sum up purchase price where the customer ID is the same?
Expecting a result like:
```
Customer ID|Purchase Total
10 |11881
20 |2111
42 |16075
```
I got to:
```
select Customer ID, sum(PurchasePrice) as PurchaseTotal from sales where CustomerID=(select distinct(CustomerID) from sales) order by PurchaseTotal asc;
```
But it's not working because it doesn't iterate through the CustomerIDs, it just wants the single result value...
|
You need to `GROUP BY` your customer id:
```
SELECT CustomerID, SUM(PurchasePrice) AS PurchaseTotal
FROM sales
GROUP BY CustomerID;
```
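For reference, the query behaves as expected against the sample data from the question; a quick SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (CustomerID INT, PurchasePrice INT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(10, 1000), (10, 1010), (20, 2111),
                  (42, 9954), (10, 9871), (42, 6121)])

# GROUP BY collapses the rows per customer; SUM totals each group.
rows = conn.execute("""
SELECT CustomerID, SUM(PurchasePrice) AS PurchaseTotal
FROM sales
GROUP BY CustomerID
ORDER BY CustomerID
""").fetchall()
print(rows)
```

This reproduces the expected result table exactly: 11881 for customer 10, 2111 for 20 and 16075 for 42.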
|
Select CustomerID, sum(PurchasePrice) as PurchaseTotal FROM sales GROUP BY CustomerID ORDER BY PurchaseTotal ASC;
|
MySQL - SUM of a column based on a common value in another column
|
[
"",
"mysql",
"sql",
"database",
""
] |
Say I've got some data from a SELECT query. Can I use this as a table later, meaning naming it something then using its rows and columns in other queries?
I can't solve this problem for the life of me. I'm a beginner. It's just one table but I just can't get a query working. Here is what I have:
Hotel table and Room table (I'll just need to use the Room table; I mentioned Hotel just as a reference point for understanding).
Room has the following columns: (Number, HID), which together form a composite primary key. Number is the room's number and HID is the ID of the hotel it belongs to. I also have one more column, Name. Now the problem is:
**Find all the Hotels which only have rooms Named OneBedroom**
I tried (and failed) by doing it by selecting all HIDs from Room, then filtering on not exist(hotels that have at least one non-OneBedroom named room), but I couldn't make this work.
```
Room
Number HID Name
1 H1 OneBedroom
2 H2 OneBedroom
3 H1 OneBedroom
4 H1 OneBedroom
5 H2 TwoBedroom
6 H3 OneBedroom
Desired Output: HID
H1
H3
```
|
The easiest way is using `group by` plus `having` with a conditional `count`, so there is no need for an additional subquery.
This returns hotels where every room's `name` is "OneBedroom"; remember that `COUNT` only counts non-`NULL` values.
```
SELECT `hid`
FROM `room`
GROUP BY `hid`
HAVING COUNT(CASE WHEN `name` != "OneBedroom" THEN 1 END ) = 0
```
To require that at least one room is called "OneBedroom", use:
```
HAVING COUNT(CASE WHEN `name` = "OneBedroom" THEN 1 END ) >= 1
```
The default result of a `CASE` expression with no matching branch is `NULL`, so there is no need for an `ELSE` part.
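A runnable check of the conditional-count `HAVING` against the question's sample data (SQLite syntax, backtick quoting dropped):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE room (Number INT, HID TEXT, Name TEXT)")
conn.executemany("INSERT INTO room VALUES (?, ?, ?)", [
    (1, 'H1', 'OneBedroom'), (2, 'H2', 'OneBedroom'), (3, 'H1', 'OneBedroom'),
    (4, 'H1', 'OneBedroom'), (5, 'H2', 'TwoBedroom'), (6, 'H3', 'OneBedroom')])

# The CASE yields 1 only for rooms that are NOT 'OneBedroom';
# COUNT skips the NULLs, so a count of 0 means every room qualifies.
rows = conn.execute("""
SELECT HID FROM room
GROUP BY HID
HAVING COUNT(CASE WHEN Name != 'OneBedroom' THEN 1 END) = 0
ORDER BY HID
""").fetchall()
print(rows)
```

H2 is excluded because of its TwoBedroom room, matching the desired output of H1 and H3.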
|
I suspect something like this would do it
```
SELECT DISTINCT `hid` FROM `room` WHERE
`room`.`hid` NOT IN
(SELECT `hid` as `hid` FROM `room` WHERE `name` != "OneBedroom");
```
Untested, so it might be wrong, but it seems like it should work. Basically, the inner query gets all `hid`s of rooms that are not `OneBedroom`s; we then subtract those `hid`s from the full list of `hid`s.
|
Can you use data that you got from a query in another query?
|
[
"",
"sql",
"sql-server",
""
] |
My data is something like this:
Table: customer\_error
[table screenshot](https://i.stack.imgur.com/Oixuo.png)
I just want to get, for each customer, the error ID that appeared first, not the succeeding ones.
[expected result screenshot](https://i.stack.imgur.com/VlbPK.png)
|
I believe you're looking for this; here's one way:
```
select distinct first_value(customer_id) over (partition by customer_id
order by error_id ) customer_id,
first_value(error_id) over (partition by customer_id
order by error_id ) error_id,
first_value(error_description) over (partition by customer_id
order by error_id ) error_description
from customer_error
/
```
and a slightly different way:
```
select customer_id, error_id, error_description
from (
select row_number() over (partition by customer_id
order by error_id ) rnum,
customer_id, error_id, error_description
from customer_error
)
where rnum = 1
/
```
Both use analytic (window) functions, a very useful tool for doing this sort of thing; I'd recommend reading up on them, as they come in handy often.
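The second (`ROW_NUMBER`) form can also be tried out in SQLite, which has supported window functions since 3.25; the sample data here is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer_error
                (customer_id INT, error_id INT, error_description TEXT)""")
conn.executemany("INSERT INTO customer_error VALUES (?, ?, ?)", [
    (1, 101, 'timeout'), (1, 105, 'retry'), (2, 103, 'bad input')])

# ROW_NUMBER restarts at 1 per customer, ordered by error_id,
# so rnum = 1 selects each customer's earliest error.
rows = conn.execute("""
SELECT customer_id, error_id, error_description
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY customer_id
                                ORDER BY error_id) AS rnum,
             customer_id, error_id, error_description
      FROM customer_error)
WHERE rnum = 1
ORDER BY customer_id
""").fetchall()
print(rows)
```

Only the lowest error_id per customer survives the `rnum = 1` filter.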
|
If we assume that the error\_id with the minimum value is the one that appeared first, we can do this with regular SQL. Our goal is to get the minimum error\_id per customer, and then attach the associated error description to it.
```
SELECT a.customer_id, a.error_id, b.error_description FROM
   ( SELECT customer_id, MIN(error_id) AS error_id FROM customer_error
     GROUP BY customer_id) a
LEFT JOIN customer_error b on a.error_id=b.error_id;
```
|
How to display only the higher value from a column that has multiple values from other column using SQL
|
[
"",
"sql",
"oracle",
""
] |
I have below table
```
City1 City2
NY TO
TO ON
TO NY
TO AT
AT TO
TO AT
```
The question considers NY-TO and TO-NY as duplicates. I need a query in Oracle to find and remove duplicate rows as described above; don't consider TO-AT and TO-AT as duplicates. I've tried several approaches (subquery, self-join, etc.) but could not solve it. Any query scientists here?
|
Assuming you have some way of ordering the table (i.e. by a primary key or a timestamp):
```
CREATE TABLE table_name ( id, city1, city2 ) AS
SELECT 1, 'NY', 'TO' FROM DUAL UNION ALL
SELECT 2, 'TO', 'ON' FROM DUAL UNION ALL
SELECT 3, 'TO', 'NY' FROM DUAL UNION ALL
SELECT 4, 'TO', 'AT' FROM DUAL UNION ALL
SELECT 5, 'AT', 'TO' FROM DUAL UNION ALL
SELECT 6, 'TO', 'AT' FROM DUAL;
```
Then you can do:
```
DELETE FROM table_name
WHERE ROWID IN (
SELECT ROWID
FROM (
SELECT CASE city1
WHEN FIRST_VALUE( city1 )
OVER ( PARTITION BY LEAST( City1, City2 ),
GREATEST( City1, City2 )
ORDER BY id )
THEN 0
ELSE 1
END AS to_delete
FROM table_name
)
WHERE to_delete = 1
)
```
Which will leave:
```
ID | C1 | C2
-------------
1 | NY | TO
2 | TO | ON
4 | TO | AT
6 | TO | AT
```
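The same approach ports to SQLite if you swap `LEAST`/`GREATEST` for the two-argument `MIN`/`MAX` scalar functions; a sketch with the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, city1 TEXT, city2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 'NY', 'TO'), (2, 'TO', 'ON'), (3, 'TO', 'NY'),
    (4, 'TO', 'AT'), (5, 'AT', 'TO'), (6, 'TO', 'AT')])

# Partition rows by the unordered city pair; within each partition,
# delete rows whose city1 differs from the first row's city1
# (i.e. the reversed-direction duplicates).
conn.execute("""
DELETE FROM t WHERE rowid IN (
  SELECT rowid FROM (
    SELECT rowid,
           CASE city1 WHEN FIRST_VALUE(city1) OVER (
                    PARTITION BY MIN(city1, city2), MAX(city1, city2)
                    ORDER BY id)
                THEN 0 ELSE 1 END AS to_delete
    FROM t)
  WHERE to_delete = 1)
""")
rows = conn.execute("SELECT id, city1, city2 FROM t ORDER BY id").fetchall()
print(rows)
```

Rows 3 (TO-NY) and 5 (AT-TO) are removed, while the same-direction pair 4 and 6 (TO-AT twice) survives, matching the result table above.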
|
```
SELECT ct1.id
FROM city_table ct1
JOIN city_table ct2
  ON ct1.id <> ct2.id
 AND (ct1.city1 || ct1.city2 = ct2.city1 || ct2.city2
      OR ct1.city2 || ct1.city1 = ct2.city1 || ct2.city2)
```
You can base the JOIN on concatenating both cities using Oracle's `||` operator: two records are duplicates if `city1 || city2` of rec1 is the same as either `city1 || city2` or `city2 || city1` of rec2 (the `ct1.id <> ct2.id` condition stops a row from matching itself). Beware that plain concatenation can produce false matches when city codes have varying lengths ('AB' || 'C' equals 'A' || 'BC'); comparing the columns pairwise avoids that.
|
Delete Duplicate Rows in Oracle with combination of columns[Complex]
|
[
"",
"sql",
"oracle",
"duplicates",
""
] |
I have a table in Postgresql with:
```
id qty
1 10
2 11
3 18
4 17
```
I want to add to each row a number starting from 1,
meaning I want:
```
id qty
1 11 / 10+1
2 13 /11 +2
3 21 /18 +3
4 21 /17+4
```
first row gets +1, second row +2 , third row +3 etc...
It should be something like:
```
update Table_a set qty=qty+(increased number starting from 1) order by id asc;
```
how do I do that?
|
If the column `id` is unique then you can do it the following way:
```
UPDATE Table_a a
SET qty = qty + b.rn
FROM (
SELECT id,ROW_NUMBER() OVER (ORDER BY id) rn
FROM Table_a
) b
WHERE a.id = b.id
```
---
[`ROW_NUMBER()`](http://www.postgresql.org/docs/9.4/static/functions-window.html)
> assigns unique numbers to each row within the PARTITION given the
> ORDER BY clause
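If you are stuck on an engine or version without window functions, a correlated `COUNT(*)` can stand in for `ROW_NUMBER` when `id` is unique; here is a SQLite sketch with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table_a (id INT UNIQUE, qty INT)")
conn.executemany("INSERT INTO Table_a VALUES (?, ?)",
                 [(1, 10), (2, 11), (3, 18), (4, 17)])

# The count of rows with id <= this row's id is exactly this row's
# 1-based position when id is unique, so it plays the ROW_NUMBER role.
conn.execute("""
UPDATE Table_a
SET qty = qty + (SELECT COUNT(*) FROM Table_a b WHERE b.id <= Table_a.id)
""")
rows = conn.execute("SELECT id, qty FROM Table_a ORDER BY id").fetchall()
print(rows)
```

This reproduces the expected 11, 13, 21, 21, though the `ROW_NUMBER` version above scales better since the correlated count is O(n) per row.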
|
Using windowed function `ROW_NUMBER` will handle gaps in `id`:
```
CREATE TABLE Table_a(id INT PRIMARY KEY, qty INT);
INSERT INTO Table_a(id, qty)
SELECT 1 , 10
UNION ALL SELECT 2 , 11
UNION ALL SELECT 3 , 18
UNION ALL SELECT 4 , 17;
WITH cte AS
(
SELECT *, ROW_NUMBER() OVER(ORDER BY id) AS r
FROM Table_a
)
UPDATE Table_a AS a
SET qty = a.qty + c.r
FROM cte c
WHERE c.id = a.id;
SELECT *
FROM table_a;
```
|
How to update a column with an increasing interval value?
|
[
"",
"sql",
"postgresql",
""
] |
I am using the following query to pull some data. In the "QueryExpression" column, is there a way to remove any text that is to the left of N'Domain\?
Basically I only want to see the text after N'Domain\ in the "QueryExpression" column. I am not sure how to do this.
```
SELECT
v_DeploymentSummary.SoftwareName,
v_DeploymentSummary.CollectionName,
v_CollectionRuleQuery.QueryExpression
FROM
v_DeploymentSummary
INNER JOIN v_CollectionRuleQuery
ON v_DeploymentSummary.CollectionID = v_CollectionRuleQuery.CollectionID
```
|
At least for SQL Server:
```
SUBSTRING(v_CollectionRuleQuery.QueryExpression, CHARINDEX('N''Domain\', v_CollectionRuleQuery.QueryExpression) + 9, LEN(v_CollectionRuleQuery.QueryExpression))
```
Give it a try.
I wasn't sure whether you wanted N'Domain\ included in your string; if that's the case, just remove the `+ 9` (9 is the length of `N'Domain\`).
In my understanding you want something like this:
```
SELECT
v_DeploymentSummary.SoftwareName,
v_DeploymentSummary.CollectionName,
    SUBSTRING(v_CollectionRuleQuery.QueryExpression, CHARINDEX('N''Domain\', v_CollectionRuleQuery.QueryExpression) + 9, LEN(v_CollectionRuleQuery.QueryExpression)) AS QueryExpression
FROM
v_DeploymentSummary
INNER JOIN v_CollectionRuleQuery
ON v_DeploymentSummary.CollectionID = v_CollectionRuleQuery.CollectionID
```
|
In SQL Server, you can use `stuff()` for this purpose:
```
SELECT ds.SoftwareName, ds.CollectionName,
STUFF(crq.QueryExpression, 1,
            CHARINDEX('Domain\', crq.QueryExpression) + LEN('Domain\') - 1,
'')
FROM v_DeploymentSummary ds INNER JOIN
v_CollectionRuleQuery crq
ON ds.CollectionID = crq.CollectionID;
```
Note the use of table aliases makes the query easier to write and to read.
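To make the index arithmetic concrete, here is the same "keep everything after the marker" idea in plain Python with a hypothetical expression string; the `+ 9` in the SQL corresponds to `len(marker)`:

```python
# The marker is N'Domain\ -- nine characters, hence CHARINDEX(...) + 9 above.
expr = "... WHERE Netbios_Name0 = N'Domain\\WORKSTATION01'"
marker = "N'Domain\\"

# CHARINDEX is 1-based while Python's str.index is 0-based, but the
# "position of the marker plus its length" arithmetic is identical.
result = expr[expr.index(marker) + len(marker):]
print(result)
```

The machine name `Netbios_Name0` is invented for the example; the slice keeps only the text after the marker.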
|
SQL query trim a column?
|
[
"",
"sql",
"sql-server",
"trim",
""
] |
So I have a table in a database which contains the column "SELECTED". The values in this column can only be "CHECKED" or "UNCHECKED". I would like to enforce "CHECKED" can only be used once (like a radiobutton) through a PL/SQL trigger, though I cannot think of how to do this.
First, the idea (in case it didn't become clear):
Initial table "dummy":
```
ID | SELECTED
--------------
1 | 'UNCHECKED'
2 | 'CHECKED'
3 | 'UNCHECKED'
```
Then, I execute this query:
```
UPDATE dummy
SET SELECTED = 'CHECKED'
WHERE ID = 3;
```
Through a PL/SQL trigger, I'd like to have my table "dummy" to look like this after the execution:
```
ID | SELECTED
--------------
1 | 'UNCHECKED'
2 | 'UNCHECKED'
3 | 'CHECKED'
```
I hope you get the idea. I myself have tried to solve this, without success. I came up with the following code:
```
CREATE OR REPLACE TRIGGER DUMMY_ONE_CHECKED
AFTER INSERT OR UPDATE ON DUMMY
FOR EACH ROW
DECLARE
v_checked_is_present DUMMY.SELECTED%TYPE;
BEGIN
SELECT SELECTED
INTO v_checked_is_present
FROM DUMMY
WHERE SELECTED = 'CHECKED';
IF v_checked_is_present IS NOT NULL THEN
UPDATE DUMMY
SET SELECTED = 'UNCHECKED'
WHERE SELECTED = 'CHECKED';
UPDATE DUMMY
SET SELECTED = 'CHECKED'
WHERE ID = :NEW.ID;
END IF;
END;
```
However, I get the errors ORA-04091, ORA-06512 and ORA-04088 with the following message:
```
*Cause: A trigger (or a user defined plsql function that is referenced in
this statement) attempted to look at (or modify) a table that was
in the middle of being modified by the statement which fired it.
*Action: Rewrite the trigger (or function) so it does not read that table.
```
Clearly, this is not the right solution. I wonder how I could accomplish what I would like to do (if possible at all)?
Thank you in advance!
|
I would not design it that way. The database should enforce the rules, not automatically attempt to fix violations of them.
So, I'd enforce that only one row can be CHECKED at a time, like this:
```
CREATE UNIQUE INDEX dummy_enforce_only_one ON dummy ( NULLIF(selected,'UNCHECKED') );
```
Then, I'd make it the responsibility of calling code to deselect other rows before selecting a new one (rather than trying to have a trigger do it).
I know that doesn't answer the text of your question, but it does answer the *title* of your question: "how to enforce only one value..."
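The filtered-unique-index idea ports to other engines that support expression indexes; here is a SQLite sketch showing that a second CHECKED row is rejected while any number of UNCHECKED rows (which map to NULL in the index) are allowed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dummy (id INT, selected TEXT)")
# NULLIF maps 'UNCHECKED' to NULL; NULLs never collide in a unique
# index, so at most one row can hold the literal value 'CHECKED'.
conn.execute("CREATE UNIQUE INDEX dummy_one ON dummy (NULLIF(selected, 'UNCHECKED'))")
conn.executemany("INSERT INTO dummy VALUES (?, ?)",
                 [(1, 'UNCHECKED'), (2, 'CHECKED'), (3, 'UNCHECKED')])

try:
    conn.execute("INSERT INTO dummy VALUES (4, 'CHECKED')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)
```

As argued above, the database rejects the violation and the calling code is responsible for unchecking row 2 before checking row 4.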
|
I'm not sure a trigger is the best approach to this problem. The trigger needs to update all the records for every update -- even worse, the rows are in the same table leading to the dreaded mutating table error.
How about a different table structure altogether? The idea is just to keep track of the last time something was "checked" and then use the maximum timestamp:
```
create table t_dummy (
id int,
checkedtime timestamp(6)
);
create view dummy as
    select t_dummy.id,
           (case when checkedtime = maxct then 'CHECKED' else 'UNCHECKED' end) as selected
    from t_dummy cross join
         (select max(checkedtime) as maxct from t_dummy) x;
```
This should be simpler to implement than a trigger.
|
Oracle SQL: how to enforce only one value may be 'checked'?
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have three tables, which are:
* table1(ID,JName, XVal, YVal)
* table2(ID, RName, XVAl, YVal)
* table3(ID, TName, XValue, YValue)
XValue, YValue, XVal and YVal are of number type. I want the maximum and minimum values of XVal and YVal across all these tables. Previously I wrote the following queries:
```
Select max(XVal) FROM table1
Select max(XVal) FROM table2
Select max(XValue) FROM table3
```
And for minimum value:
```
Select min(XVal) FROM table1
Select min(XVal) FROM table2
Select min(XValue) FROM table3
```
By running these queries I get 3 different results for the minimum and maximum, and I could then compare them in application code to get the overall maximum and minimum, but what I want here is a single query which will do this for me.
I have used following queries:
```
Select max(XVal) from table1 UNION select max(XVal) from table2 UNION select max(XValue) as XVal from table3 ORDER BY 'MAX(XVal)'DESC LIMIT 1
```
And another one:
```
select max(XVal) as maxval from(select XVal from table1 UNION ALL SELECT XVal from table2 UNION ALL Select XValue from table3) as subquery
```
But both of them failed. Can somebody help me with this? And also point out the cause for the above queries to not function?
|
The correct syntax for the first of the two versions is:
```
Select max(XVal) as max_XVal from table1 UNION ALL
select max(XVal) from table2 UNION ALL
select max(XValue) as XVal from table3
ORDER BY max_XVal DESC
LIMIT 1;
```
You can use either `UNION` or `UNION ALL` here.
If you want the minimum and maximum values from the three tables:
```
SELECT MIN(XVal), MAX(XVal)
FROM
(
SELECT XVal FROM table1
UNION ALL
SELECT XVal FROM table2
UNION ALL
SELECT XVal FROM table3
) t123;
```
|
The `UNION` solution above is better but it's worth noting you could also just use your original query like this:
```
SELECT
GREATEST
(
(SELECT MAX(XVal) FROM table1),
(SELECT MAX(XVal) FROM table2),
(SELECT MAX(XValue) FROM table3)
),
LEAST
(
(SELECT MIN(XVal) FROM table1),
(SELECT MIN(XVal) FROM table2),
(SELECT MIN(XValue) FROM table3)
)
```
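A note on this approach: `GREATEST`/`LEAST` exist in MySQL (and Oracle/Postgres). As an illustration, SQLite's multi-argument scalar `MAX()`/`MIN()` play the same role, so the idea can be sketched like this with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (xval REAL);   INSERT INTO table1 VALUES (10), (3);
CREATE TABLE table2 (xval REAL);   INSERT INTO table2 VALUES (25), (7);
CREATE TABLE table3 (xvalue REAL); INSERT INTO table3 VALUES (1), (12);
""")
# scalar MAX()/MIN() over the three scalar subqueries,
# the SQLite analogue of GREATEST()/LEAST()
row = con.execute("""
SELECT MAX((SELECT MAX(xval)   FROM table1),
           (SELECT MAX(xval)   FROM table2),
           (SELECT MAX(xvalue) FROM table3)),
       MIN((SELECT MIN(xval)   FROM table1),
           (SELECT MIN(xval)   FROM table2),
           (SELECT MIN(xvalue) FROM table3))
""").fetchone()
print(row)  # (25.0, 1.0)
```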
|
maximum value from three different tables
|
[
"mysql",
"sql",
"oledb"
] |
Say I have a table with the following data:
[](https://i.stack.imgur.com/31oPP.png)
You can see columns a, b, & c have a lot of redundancies. I would like those redundancies removed while preserving the site\_id info. If I exclude the site\_id column from the query, I can get part of the way there by doing `SELECT DISTINCT a, b, c from my_table`.
What would be ideal is a SQL query that could turn the site IDs relevant to a permutation of a/b/c into a delimited list, and output something like the following:
[](https://i.stack.imgur.com/Fj46a.png)
Is it possible to do that with a SQL query? Or will I have to export everything and use a different tool to remove the redundancies?
The data is in a SQL Server DB, though I'd also be curious how to do the same thing with postgres, if the process is different.
|
For SQL Server, you can use the FOR XML trick as found in the accepted answer in [this](https://stackoverflow.com/questions/12668528/sql-server-group-by-clause-to-get-comma-separated-values) post.
For your scenario it would look something like this:
```
SELECT a, b, c, SiteIds =
    STUFF((SELECT ', ' + CAST(t2.SiteId AS varchar(20))
FROM your_table t2
WHERE t2.a = t1.a AND t2.b = t1.b AND t2.c = t1.c
FOR XML PATH('')), 1, 2, '')
FROM your_table t1
GROUP BY a, b, c
```
|
For Postgres:
```
select a,b,c, string_agg(site_id::varchar, ',')
from my_table
group by a,b,c;
```
I assume `site_id` is a number, and as `string_agg()` only accepts character values, it needs to be cast to a character string for the aggregation. This is what `site_id::varchar` does. Alternatively you can use the `cast()` operator: `string_agg(cast(site_id as varchar), ',')`
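For a quick illustration of the grouping-plus-aggregation idea, SQLite's `group_concat()` is the rough analogue of `string_agg()` (note SQLite does not guarantee the concatenation order); the sample data below is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE my_table (a TEXT, b TEXT, c TEXT, site_id INTEGER);
INSERT INTO my_table VALUES
 ('x', 'y', 'z', 1), ('x', 'y', 'z', 2), ('p', 'q', 'r', 3);
""")
# one row per distinct (a, b, c), with the site ids collapsed to a list
rows = con.execute("""
SELECT a, b, c, group_concat(site_id, ',') AS site_ids
FROM my_table
GROUP BY a, b, c
ORDER BY a
""").fetchall()
print(rows)
```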
|
SQL - Turn relationship IDs into a delimited list
|
[
"sql",
"sql-server",
"postgresql"
] |
Since Oracle 12c we can use IDENTITY fields.
Is there a way to retrieve the last inserted identity (i.e. `select @@identity` or `select LAST_INSERTED_ID()` and so on)?
|
Well. Oracle uses sequences and default values for IDENTITY functionality in 12c. Therefore you need to know about sequences for your question.
First create a test identity table.
```
CREATE TABLE IDENTITY_TEST_TABLE
(
ID NUMBER GENERATED ALWAYS AS IDENTITY
, NAME VARCHAR2(30 BYTE)
);
```
First, let's find the sequence name that was created for this identity column. The sequence appears in the column's default value.
```
Select TABLE_NAME, COLUMN_NAME, DATA_DEFAULT from USER_TAB_COLUMNS
where TABLE_NAME = 'IDENTITY_TEST_TABLE';
```
for me this value is "ISEQ$$\_193606"
insert some values.
```
INSERT INTO IDENTITY_TEST_TABLE (name) VALUES ('atilla');
INSERT INTO IDENTITY_TEST_TABLE (name) VALUES ('aydın');
```
then insert value and find identity.
```
INSERT INTO IDENTITY_TEST_TABLE (name) VALUES ('atilla');
SELECT "ISEQ$$_193606".currval from dual;
```
you should see your identity value. If you want to do it in one block, use
```
declare
s2 number;
begin
INSERT INTO IDENTITY_TEST_TABLE (name) VALUES ('atilla') returning ID into s2;
dbms_output.put_line(s2);
end;
```
Here, ID is my identity column name.
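For comparison, other engines expose the same "last generated id" concept directly, e.g. SQLite's `last_insert_rowid()`, surfaced as `cursor.lastrowid` in Python. A minimal sketch (not Oracle syntax, just the analogous idea):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE identity_test_table "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

cur = con.execute("INSERT INTO identity_test_table (name) VALUES ('atilla')")
first_id = cur.lastrowid   # SQLite's last_insert_rowid() for this connection
cur = con.execute("INSERT INTO identity_test_table (name) VALUES ('aydin')")
second_id = cur.lastrowid

print(first_id, second_id)  # 1 2
```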
|
An `IDENTITY` column uses a `SEQUENCE` "under the hood", creating and dropping the sequence automatically with the table that uses it.
You can also specify start and increment parameters, e.g.
`START WITH 1000 INCREMENT BY 2`. `IDENTITY` is really convenient when you don't want to operate on its values directly.
But if you need to operate on the sequence directly, you should use
another option available in Oracle 12c: column default values. Such default
values can be generated from a sequence's `nextval` or `currval`, which lets you have a comprehensible sequence name and use it as an "identity" without a trigger.
```
create table my_new_table
(id number default my_new_table_seq.nextval not null)
```
You will be always able to call: `my_new_table_seq.currval`.
It is possible to get ID generated from `SEQUENCE` on insert statement using `RETURNING` clause.
For example, create a temporary table:
```
create global temporary table local_identity_storage ("id" number) on commit delete rows
```
Make some insert saving this value in the temporary table:
```
CREATE TABLE identity_test_table (
id_ident NUMBER GENERATED BY DEFAULT AS IDENTITY,
same_value VARCHAR2(100)
);
declare
v_id number(10, 0);
begin
INSERT INTO identity_test_table
(same_value)
VALUES
('Test value')
RETURNING id_ident INTO v_id;
insert into local_identity_storage ("id") values (v_id);
commit;
end;
```
Now you have "local" inserted id.
```
select "id" from local_identity_storage
```
|
Retrieve Oracle last inserted IDENTITY
|
[
"sql",
"oracle",
"primary-key"
] |
Probably it's a duplicate, but I couldn't find a solution.
**Requirement:**
I have the below strings:
```
Heelloo
Heeelloo
Heeeelloo
Heeeeeelloo
Heeeeeeeelloo
.
.
.
Heeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeelloo
```
**Expected output:** `Hello`
What is the best way to achieve this in SQL?
***Version I am using*** :
```
Microsoft SQL Server 2012 - 10.0.7365.0 (X64) Jul 28 2015 00:39:54 Copyright (c)
Microsoft Corporation Parallel Data Warehouse (64-bit) on Windows NT 6.2 <X64>
(Build 9200: )
```
|
There is a nice trick for removing such duplicates for a single letter:
```
select replace(replace(replace(col, 'e', '<>'
), '><', ''
), '<>', 'e'
)
```
This does require two characters ("<" and ">") that are not in the string (or more specifically, not in the string next to each other). The particular characters are not important.
How does this work?
```
Heeello
H<><><>llo
H<>llo
Hello
```
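The same three-step replace trick is easy to verify in any language. Here is a small Python sketch that applies it once per letter; the question's expected output collapses the runs of `e` and `o` but keeps the double `l`:

```python
def collapse(s, ch, left='<', right='>'):
    """Collapse runs of ch in s to a single ch via the replace trick.

    Assumes left and right never appear next to each other in s,
    the same caveat as the SQL version above.
    """
    pair = left + right
    s = s.replace(ch, pair)          # Heeello  -> H<><><>llo
    s = s.replace(right + left, '')  # H<><><>llo -> H<>llo
    return s.replace(pair, ch)       # H<>llo   -> Hello

out = collapse(collapse('Heeeeeeeelloo', 'e'), 'o')
print(out)  # Hello
```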
|
Try this user defined function:
```
CREATE FUNCTION TrimDuplicates(@String varchar(max))
RETURNS varchar(max)
AS
BEGIN
while CHARINDEX('ee',@String)>0 BEGIN SET @String=REPLACE(@String,'ee','e') END
while CHARINDEX('oo',@String)>0 BEGIN SET @String=REPLACE(@String,'oo','o') END
RETURN @String
END
```
Example Usage:
```
select dbo.TrimDuplicates ('Heeeeeeeelloo')
```
returns Hello
|
Replace multiple instance of a character with a single instance in sql
|
[
"sql",
"sql-server",
"t-sql"
] |
Can't quite figure this one out. I have a set of conditions that I want applied only if a value is in a field.
So if the Status is 'Complete' I want three WHERE conditions; if the status doesn't equal 'Complete' then I don't want any WHERE clause.
Code
```
SELECT *
FROM mytable
WHERE CASE WHEN Status = 'Complete'
THEN (included = 0 OR excluded = 0 OR number IS NOT NULL)
ELSE *Do nothing*
```
|
It is usually simplest to use only boolean expressions in the `WHERE`. So:
```
WHERE (Status <> 'Complete') OR
(included = 0 OR excluded = 0 OR number IS NOT NULL)
```
If `Status` could be `NULL`:
```
WHERE (Status <> 'Complete' OR Status IS NULL) OR
(included = 0 OR excluded = 0 OR number IS NOT NULL)
```
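To illustrate, here is the NULL-safe variant run against a few made-up rows via Python's `sqlite3`; only the row that is 'Complete' with none of the three conditions satisfied is filtered out:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mytable (status TEXT, included INTEGER, excluded INTEGER, number INTEGER);
INSERT INTO mytable VALUES
 ('Complete', 1, 1, NULL),  -- filtered out: complete, no condition holds
 ('Complete', 0, 1, NULL),  -- kept: complete and included = 0
 ('Open',     1, 1, NULL),  -- kept: not complete
 (NULL,       1, 1, NULL);  -- kept: status is NULL
""")
rows = con.execute("""
SELECT status FROM mytable
WHERE (status <> 'Complete' OR status IS NULL)
   OR (included = 0 OR excluded = 0 OR number IS NOT NULL)
""").fetchall()
print(len(rows))  # 3
```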
|
You can translate the natural language in SQL, then, if possible, reformulate.
```
SELECT *
FROM mytable
WHERE (Status = 'Complete' and (included = 0 OR excluded = 0 OR number IS NOT NULL))
or status <> 'Complete'
or status IS NULL;
```
|
Oracle SQL CASE expression in WHERE clause only when conditions are met
|
[
"sql",
"oracle"
] |
I need to obtain records in a key-value table with the following structure:
```
CREATE TABLE `PROPERTY` (
`id` int(11) NOT NULL,
`key` varchar(64) NOT NULL,
`value` text NOT NULL,
PRIMARY KEY (`id`,`key`)
);
```
I need to get all ids that have MULTIPLE specific key-value entries. For example, all ids that have keys "foo", "bar", and "foobar".
|
Given updated question...
since you know the specific keys, you also know how many there are... so a count distinct in having should do it... along with a where...
```
SELECT id
FROM `PROPERTY`
Where key in ('foo','bar','foobar')
GROUP BY ID
having count(distinct key) = 3
```
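The relational-division idea (filter to the wanted keys, group by id, require a distinct-key count equal to the number of keys) can be checked quickly with `sqlite3`; `key` is quoted below since it is a keyword in several dialects:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE property (id INTEGER, "key" TEXT, value TEXT,
                       PRIMARY KEY (id, "key"));
INSERT INTO property VALUES
 (1, 'foo', 'a'), (1, 'bar', 'b'), (1, 'foobar', 'c'),
 (2, 'foo', 'a'), (2, 'bar', 'b'),
 (3, 'foo', 'a');
""")
# only ids holding all three keys survive the HAVING clause
ids = [r[0] for r in con.execute("""
SELECT id
FROM property
WHERE "key" IN ('foo', 'bar', 'foobar')
GROUP BY id
HAVING COUNT(DISTINCT "key") = 3
""")]
print(ids)  # [1]
```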
|
Simply use `GROUP BY` to group and then check the group count to count multiple values:
```
Select
id
from
`PROPERTY`
group by
key, value
having
count(*) > 1
```
|
SQL Key value table--select ids that have multiple keys
|
[
"mysql",
"sql"
] |
I have a query below which works, but it updates the LessonTaken field in the Availability table every time, even when StudentID = 0. I only want it to update the field if StudentID <> 0.
```
UPDATE Availability SET LessonTaken = 'Y'
WHERE (
SELECT LessonID
FROM Lesson
WHERE Availability.StudentID = Lesson.StudentID
);
```
The Tables are like so:
Availability:
AvailabilityID
StudentID
StartTime
EndTime
LessonTaken
NoOfFrees
Lesson:
LessonID
StudentID
StartTime
EndTime
DayOfWeek
LessonPaid.
I have a query which selects the student with the fewest frees, (selecting DayOfWeek, StartTime, EndTime) and inserts this into the LessonTable for the corresponding fields. This is for a timetabling programme. I hope this is clear, many thanks :)
|
Does adding the condition you want help?
```
UPDATE Availability
SET LessonTaken = 'Y'
WHERE Availability.studentID <> 0 AND
(SELECT LessonID
FROM Lesson
WHERE Availability.StudentID = Lesson.StudentID
);
```
|
This is for T-SQL, using join
```
update avail
set LessonTaken = 'Y'
from Availability avail
join Lesson less on avail.StudentID = less.StudentID
where avail.StudentID <> 0
```
Good luck
|
Updating a table from another table
|
[
"sql",
"ms-access"
] |
**SAMPLE**
I have string like:
```
AA=Item01,ZZ=Item111,ZZ=Item2,ZZ=Item3333,ZZ=Item4,ZZ=Item55
```
**EXPLANATION**
`AA=` and `ZZ=` are static and always count of `AA=` is 1 and count of `ZZ=` is 5.
All `Item*` are dynamic and their lengths are dynamic.
**WHAT I NEED**
I need to select `Item2` from that string. How can I achieve It?
**WHAT I'VE TRIED**
I've tried to use `RIGHT`, `LEFT`, `LEN`, `CHARINDEX` to detect `=`, but couldn't get very far (incorrect syntax)...
---
*NOTE: I know that comma separated strings is terrible practice, but I can't avoid It, customer provide us string like this.*
|
Assuming your item values can't contain `ZZ` and `,ZZ` patterns (so they can be treated as real delimiters) you can do it using a bunch of `charindex` and `substring` calls:
```
declare @src nvarchar(max), @Start_Position int, @End_Position int
select @src = 'AA=Item0,ZZ=Item1,ZZ=Item2,ZZ=Item3,ZZ=Item4,ZZ=Item5'
select @Start_Position = charindex('ZZ', @src, charindex('ZZ', @src) + 1) + 3
select @End_Position = charindex(',ZZ', @src, @Start_Position)
select substring(@src, @Start_Position, @End_Position - @Start_Position)
```
Explanation:
1. Find the occurrence of the first `ZZ` in the string: `charindex('ZZ', @src)`
2. Find the occurrence of the next `ZZ` starting from the position of the first `ZZ` and add three characters - this is the position where `Item2` starts.
3. Find the occurrence of `,ZZ` starting from the position determined in the previous step - this is the position where `Item2` ends.
4. Do the substring.
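The same position arithmetic is easy to trace in Python with `str.find` (0-based where `CHARINDEX` is 1-based, but the offsets work out the same), using the sample string from the question:

```python
src = 'AA=Item01,ZZ=Item111,ZZ=Item2,ZZ=Item3333,ZZ=Item4,ZZ=Item55'

first_zz = src.find('ZZ')                 # step 1: first 'ZZ'
second_zz = src.find('ZZ', first_zz + 1)  # step 2: next 'ZZ'
start = second_zz + 3                     # skip past 'ZZ='
end = src.find(',ZZ', start)              # step 3: terminating ',ZZ'
item = src[start:end]                     # step 4: the substring

print(item)  # Item2
```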
|
```
declare @src nvarchar(max)
set @src = 'AA=Item0,ZZ=Item1,ZZ=Item2,ZZ=Item3,ZZ=Item4,ZZ=Item5'
select item from [dbo].[SplitString](@src,',') where item like '%item2%'
```
## User defined function
[](https://i.stack.imgur.com/RhlN7.png)
```
GO
/****** Object: UserDefinedFunction [dbo].[SplitString] Script Date: 15-01-2016 18:13:21 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[SplitString]
(
@Input NVARCHAR(MAX),
@Character CHAR(1)
)
RETURNS @Output TABLE (
Item NVARCHAR(1000)
)
AS
BEGIN
DECLARE @StartIndex INT, @EndIndex INT
SET @StartIndex = 1
IF RIGHT(@Input, 1) <> @Character
BEGIN
SET @Input = @Input + @Character
END
WHILE CHARINDEX(@Character, @Input) > 0
BEGIN
SET @EndIndex = CHARINDEX(@Character, @Input)
INSERT INTO @Output(Item)
SELECT SUBSTRING(@Input, @StartIndex, @EndIndex - 1)
SET @Input = SUBSTRING(@Input, @EndIndex + 1, LEN(@Input))
END
RETURN
END
```
|
Get specific part of dynamic string in TSQL
|
[
"sql",
"sql-server",
"sql-server-2008",
"t-sql"
] |
I have 3 tables:
```
Table_Cars
-id_car
-description
Table_CarDocuments
-id_car
-id_documentType
-path_to_document
Table_DocumentTypes
-id_documentType
-description
```
I want to select all cars that do **NOT** have documents on the table Table\_CarDocuments with 4 specific id\_documentType.
Something like this:
```
Car1 | TaxDocument
Car1 | KeyDocument
Car2 | TaxDocument
```
With this I know that I'm missing 2 documents for car1 and 1 document for car2.
|
You are looking for missing car documents. So cross join cars and document types and look for combinations NOT IN the car douments table.
```
select c.description as car, dt.description as doctype
from table_cars c
cross join table_documenttypes dt
where (c.id_car, dt.id_documenttype) not in
(
select cd.id_car, cd.id_documenttype
from table_cardocuments cd
);
```
UPDATE: It turns out that SQL Server's IN clause is very limited and not capable of dealing with value lists. But a NOT IN clause can easily be replaced by NOT EXISTS:
```
select c.description as car, dt.description as doctype
from table_cars c
cross join table_documenttypes dt
where not exists
(
select *
from table_cardocuments cd
where cd.id_car = c.id_car
and cd.id_documenttype = dt.id_documenttype
);
```
UPDATE: As you are only interested in particular id\_documenttype (for which you'd have to add `and dt.id_documenttype in (1, 2, 3, 4)` to the query), you can generate records for them on-the-fly instead of having to read the table\_documenttypes.
In order to do that replace
```
cross join table_documenttypes dt
```
with
```
cross join (values (1), (2), (3), (4)) as dt(id_documentType)
```
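Putting the pieces together (on-the-fly document types, a cross join against the cars, and `NOT EXISTS` as the anti-join), here is a small `sqlite3` sketch with made-up data where Car1 is missing document types 3 and 4; a CTE stands in for the derived `VALUES` table since SQLite doesn't accept column aliases on it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_cars (id_car INTEGER, description TEXT);
CREATE TABLE table_cardocuments (id_car INTEGER, id_documenttype INTEGER,
                                 path_to_document TEXT);
INSERT INTO table_cars VALUES (1, 'Car1'), (2, 'Car2');
INSERT INTO table_cardocuments VALUES
 (1, 1, 'p'), (1, 2, 'p'),
 (2, 1, 'p'), (2, 2, 'p'), (2, 3, 'p'), (2, 4, 'p');
""")
# every (car, doctype) combination without a matching document row
rows = con.execute("""
WITH dt(id_documenttype) AS (VALUES (1), (2), (3), (4))
SELECT c.description, dt.id_documenttype
FROM table_cars c
CROSS JOIN dt
WHERE NOT EXISTS (SELECT 1
                  FROM table_cardocuments cd
                  WHERE cd.id_car = c.id_car
                    AND cd.id_documenttype = dt.id_documenttype)
ORDER BY c.description, dt.id_documenttype
""").fetchall()
print(rows)  # [('Car1', 3), ('Car1', 4)]
```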
|
You can use the query below to get the result:
```
SELECT
c.description,
dt.description
FROM
Table_Cars c
JOIN Table_CarDocuments cd ON c.id_car = cd.id_car
JOIN Table_DocumentTypes dt ON cd.id_documentType = dt.id_documentType
WHERE
dt.id_documentType NOT IN (1, 2, 3, 4) --replace with your document type id
```
|
SQL - Select records not present in another table (3 table relation)
|
[
"sql",
"sql-server"
] |
I'm a newbie in SQL. For my SAP B1 add-on I need a SQL query to display Birthdates of employees for a period of +-30days(this will be a user given int at the end).
I wrote a query according to my understanding, but it limits the period to the current month. Ex: If the current date is 2016.01.15 the correct query should show birthdates between 16th December and 14th February, but I only see the birthdates for January. You can see the query below.
```
SELECT T0.[BirthDate], T0.[CardCode], T1.[CardName], T0.[Name], T0.[Tel1],
T0.[E_MailL] FROM OCPR T0 INNER JOIN OCRD T1 ON T0.CardCode = T1.CardCode
WHERE DATEADD( Year, DATEPART( Year, GETDATE()) - DATEPART( Year, T0.[BirthDate]),
T0.[BirthDate]) BETWEEN CONVERT( DATE, GETDATE()-30)AND CONVERT( DATE, GETDATE() +30);
```
What are the changes I should do to get the correct result?
Any help would be highly appreciated! :-)
|
How about something like this:
```
SELECT T0.[BirthDate], T0.[CardCode], T1.[CardName], T0.[Name], T0.[Tel1], T0.[E_MailL]
FROM OCPR T0 INNER JOIN OCRD T1 ON T0.CardCode = T1.CardCode
WHERE T0.[BirthDate] BETWEEN DATEADD(DAY, -30, GETDATE()) AND DATEADD(DAY, +30, GETDATE())
```
You can adapt the [answer I've referenced](https://stackoverflow.com/questions/83531/sql-select-upcoming-birthdays) in the comments as follows:
```
SELECT T0.[BirthDate], T0.[CardCode], T1.[CardName], T0.[Name], T0.[Tel1], T0.[E_MailL]
FROM OCPR T0 INNER JOIN OCRD T1 ON T0.CardCode = T1.CardCode
WHERE 1 = (FLOOR(DATEDIFF(dd,T0.Birthdate,GETDATE()+30) / 365.25))
          -
          (FLOOR(DATEDIFF(dd,T0.Birthdate,GETDATE()-30) / 365.25))
```
As per Vladimir's comment you can amend the '**365.25**' to '**365.2425**' for better accuracy if needed.
|
I tested it in SQL Server, because it has the `DATEADD`, `GETDATE` functions.
Your query returns wrong results when the range of +-30 days goes across the 1st of January, i.e. when the range belongs to two years.
Your calculation
```
DATEADD(Year, DATEPART(Year, GETDATE()) - DATEPART( Year, T0.[BirthDate]), T0.[BirthDate])
```
moves the year of the `BirthDate` into the same year as `GETDATE`, so if `GETDATE` returns `2016-01-01`, then a `BirthDate=1957-12-25` becomes `2016-12-25`. But your range is from `2015-12-01` to `2016-01-30` and adjusted `BirthDate` doesn't fall into it.
There are many ways to take this boundary of the year into account.
One possible variant is to make not one range from `2015-12-01` to `2016-01-30`, but three - for the next and previous years as well:
```
from `2014-12-01` to `2015-01-30`
from `2015-12-01` to `2016-01-30`
from `2016-12-01` to `2017-01-30`
```
One more note - it is better to compare original `BirthDate` with the result of some calculations, rather than transform `BirthDate` and compare result of the function. In the first case optimizer can use index on `BirthDate`, in the second case it can't.
Here is a full example that I tested in SQL Server 2008.
```
DECLARE @T TABLE (BirthDate date);
INSERT INTO @T (BirthDate) VALUES
('2016-12-25'),
('2016-01-25'),
('2016-02-25'),
('2016-11-25'),
('2015-12-25'),
('2015-01-25'),
('2015-02-25'),
('2015-11-25'),
('2014-12-25'),
('2014-01-25'),
('2014-02-25'),
('2014-11-25');
--DECLARE @CurrDate date = '2016-01-01';
DECLARE @CurrDate date = '2015-12-31';
DECLARE @VarDays int = 30;
```
I used a variable `@CurrDate` instead of `GETDATE` to check how it works in different cases.
`DATEDIFF(year, @CurrDate, BirthDate)` is the difference in years between `@CurrDate` and `BirthDate`
`DATEADD(year, DATEDIFF(year, @CurrDate, BirthDate), @CurrDate)` is `@CurrDate` moved into the same year as `BirthDate`
The final `DATEADD(day, -@VarDays, ...)` and `DATEADD(day, +@VarDays, ...)` make the range of `+-@VarDays`.
This range is created three times for the "main" and previous and next years.
```
SELECT
BirthDate
FROM @T
WHERE
(
BirthDate >= DATEADD(day, -@VarDays, DATEADD(year, DATEDIFF(year, @CurrDate, BirthDate), @CurrDate))
AND
BirthDate <= DATEADD(day, +@VarDays, DATEADD(year, DATEDIFF(year, @CurrDate, BirthDate), @CurrDate))
)
OR
(
BirthDate >= DATEADD(day, -@VarDays, DATEADD(year, DATEDIFF(year, @CurrDate, BirthDate)+1, @CurrDate))
AND
BirthDate <= DATEADD(day, +@VarDays, DATEADD(year, DATEDIFF(year, @CurrDate, BirthDate)+1, @CurrDate))
)
OR
(
BirthDate >= DATEADD(day, -@VarDays, DATEADD(year, DATEDIFF(year, @CurrDate, BirthDate)-1, @CurrDate))
AND
BirthDate <= DATEADD(day, +@VarDays, DATEADD(year, DATEDIFF(year, @CurrDate, BirthDate)-1, @CurrDate))
)
;
```
**Result**
```
+------------+
| BirthDate |
+------------+
| 2016-12-25 |
| 2016-01-25 |
| 2015-12-25 |
| 2015-01-25 |
| 2014-12-25 |
| 2014-01-25 |
+------------+
```
|
Get Birthdates of employees for a period of +-30 days
|
[
"sql",
"sapb1"
] |
I'm writing a website user behaviour analyser tool as a hobby project. It tracks down user link clicks and pages they end up to from those links. It differentiates user sessions with unique UIN identifier within clicks.
I'm writing a milestone and click report from the data but the query is extremely slow. I haven't yet found out a way to increase the performance so that it would run reasonably fast (sub 5s execution time) so if anyone could help me that'd be greatly appreciated.
The part of the query below is very fast. Running time is close to 0.05s:
```
declare @startDate date = '2013-01-01'
declare @endDate date = '2016-01-14'
declare @user int = 4
declare @country int = 224
select
p.PageId,
p.Name,
-- count of successful page landings
SUM(CASE WHEN m.MileStoneTypeId = 1 AND m.UserId = @user
THEN 1
ELSE 0
END) AS [Successful landings],
-- count of failed page landings
SUM(CASE WHEN m.MileStoneTypeId = 2 AND m.UserId = @user
THEN 1
ELSE 0
END) AS [Failed landings],
-- count of unfinished page landings
SUM(CASE WHEN m.MileStoneTypeId = 3 AND m.UserId = @user
THEN 1
ELSE 0
    END) AS [Unfinished landings]
from
Page as p
inner join
Milestone as m
ON p.PageId = m.CampaignId
AND m.UserId = @user
AND m.Created >= @startDate
AND m.Created < @endDate
where
p.PageCountryId = @country
group by
p.PageId,
p.PageName
```
Here is the full query which performs VERY slowly. The running time is something between 45-60 seconds. The difference is that I'm attempting to gather a count of clicks generated for a specific Page Milestone:
```
declare @startDate date = '2013-01-01'
declare @endDate date = '2016-01-14'
declare @user int = 4
declare @country int = 224
select
p.PageId,
p.Name,
-- Unique clicks
(SELECT
COUNT(DISTINCT click.UIN)
FROM
Click as click
WHERE
click.PageId = p.PageId AND
click.Created >= @startDate AND
click.Created < @endDate AND
click.UserId = @user
) as [Unique clicks],
-- Total clicks
(SELECT
COUNT(click.UIN)
FROM
Click as click
WHERE
click.PageId = p.PageId AND
click.Created >= @startDate AND
click.Created < @endDate AND
click.User = @user
) as [Total clicks],
-- count of successful page landings
SUM(CASE WHEN m.MileStoneTypeId = 1 AND m.UserId = @user
THEN 1
ELSE 0
END) AS [Successful landings],
-- count of failed page landings
SUM(CASE WHEN m.MileStoneTypeId = 2 AND m.UserId = @user
THEN 1
ELSE 0
END) AS [Failed landings],
-- count of unfinished page landings
SUM(CASE WHEN m.MileStoneTypeId = 3 AND m.UserId = @user
THEN 1
ELSE 0
    END) AS [Unfinished landings]
from
Page as p
inner join
Milestone as m
ON p.PageId = m.CampaignId
AND m.UserId = @user
AND m.Created >= @startDate
AND m.Created < @endDate
where
p.PageCountryId = @country
group by
p.PageId,
p.PageName
```
Executing the click count queries as a standalone queries are reasonably fast. The running time is something close to 1 second for each (DISTINCT and non-distinct) query.
This is "fast" as a standalone query:
```
-- Unique clicks
(SELECT
COUNT(DISTINCT click.UIN)
FROM
Click as click
WHERE
click.PageId = p.PageId AND
click.Created >= @startDate AND
click.Created < @endDate AND
click.UserId = @user
) as [Unique clicks],
```
This is also "fast" as a standalone query:
```
-- Total clicks
(SELECT
COUNT(click.UIN)
FROM
Click as click
WHERE
click.PageId = p.PageId AND
click.Created >= @startDate AND
click.Created < @endDate AND
click.User = @user
) as [Total clicks],
```
The problem arises when I attempt to combine all in a single large query. For some reason standalone queries run very fast but the combined query execution time is extremely slow.
The table with clicks has a column "UIN" which is assigned for each user when they arrive to the website. When they click a link, a row is inserted to the Click -table with User Id and UIN. The UIN differentiates between user sessions, so UserId 4 with UIN abcdef123 can have multiple identical rows. This UIN is used to calculate unique clicks and total clicks within a user session.
The Page table has approximately 1000 rows. The Milestone table has approximately 200 000 rows and the Click table has approximately 10 000 000 rows.
Any idea how I can improve the performance of the full query with unique and total clicks included?
Here's the table contents and the target output
**Data from Page table**
```
+--------+-----------------------+-----------+
| PageId | Name | CountryId |
+--------+-----------------------+-----------+
| 3095 | Registration | 77 |
| 3110 | Customer registration | 77 |
| 5174 | View user details | 77 |
+--------+-----------------------+-----------+
```
**Data from User table**
```
+--------+------+
| UserId | Name |
+--------+------+
| 1 | Dan |
| 2 | Mike |
| 3 | John |
+--------+------+
```
**Data from Clicks table**
```
+---------+--------------------------------------+--------+-------------------------+--------+
| ClickId | Uin | UserId | Created | PageId |
+---------+--------------------------------------+--------+-------------------------+--------+
| 1296600 | B420D0F4-20BE-49BE-AAC9-47DD858B68DD | 4301 | 2016-01-14 12:08:03:723 | 8603 |
| 1296599 | DA5877BA-8FF5-4671-8DF9-CCCBF1555BA1 | 4418 | 2016-01-14 12:07:46:930 | 2009 |
| 1296598 | C6790CB9-6DA6-4A8B-84AA-7D2D3A4B5787 | 4276 | 2016-01-14 12:07:43:563 | 8678 |
+---------+--------------------------------------+--------+-------------------------+--------+
```
**Data from Milestone table**
```
+-------------+-----------------+------------+--------+-------------------------+--------+
| MilestoneId | MilestoneTypeId | CampaignId | UserId | Created | PageId |
+-------------+-----------------+------------+--------+-------------------------+--------+
| 1 | 1 | 1001 | 4 | 2014-02-06 13:18:04:487 | 52 |
| 2 | 1 | 1001 | 4 | 2014-02-06 13:41:01:257 | 9642 |
| 3 | 1 | 1001 | 4 | 2014-02-07 09:52:29:373 | 2393 |
+-------------+-----------------+------------+--------+-------------------------+--------+
```
**Here's the output data I'm trying to achieve:**
```
+---------+-----------------------+---------------+--------------+----------------------+-----------------+---------------------+
| Page Id | Page Name | Unique clicks | Total clicks | Successfull Landings | Failed Landings | Unfinished Landings |
+---------+-----------------------+---------------+--------------+----------------------+-----------------+---------------------+
| 3095 | Registration | 102 | 116 | 2 | 0 | 0 |
| 3110 | Customer registration | 3 | 6 | 1 | 1 | 0 |
| 5174 | View user details | 13 | 13 | 0 | 1 | 0 |
| 5178 | Edit content page | 11 | 11 | 1 | 0 | 0 |
| 6217 | Add new vehicle | 18 | 18 | 2 | 0 | 0 |
+---------+-----------------------+---------------+--------------+----------------------+-----------------+---------------------+
```
|
Clickstream data can be very hard to deal with, often due to the volume of records generated. But in this case I think the problem is due to the use of [correlated subqueries](https://en.wikipedia.org/wiki/Correlated_subquery) in the SELECT clause. If you are not familiar: a correlated subquery is any subquery that refers to the outer query. These harm performance because the SQL engine is forced to evaluate the query once for each row returned. This undermines the [set](https://en.wikipedia.org/wiki/Set_theoretic_programming) based nature of SQL.
I've made some changes to your sample data. As supplied I couldn't return any records to validate my resultset. I've updated values in the joining fields to address this:
**Sample Data**
```
DECLARE @Page TABLE
(
PageId INT,
Name VARCHAR(50),
CountryId INT
)
;
DECLARE @User TABLE
(
UserId INT,
Name VARCHAR(50)
)
;
DECLARE @Clicks TABLE
(
ClickId INT,
Uin UNIQUEIDENTIFIER,
UserId INT,
Created DATETIME,
PageId INT
)
;
DECLARE @Milestone TABLE
(
MiestoneId INT,
MilestoneTypeId INT,
CampaignId INT,
UserId INT,
Created DATETIME,
PageId INT
)
;
INSERT INTO @Page
(
PageId,
Name,
CountryId
)
VALUES
(3095, 'Registration', 77),
(3110, 'Customer registration', 77),
(5174, 'View user details', 77)
;
INSERT INTO @User
(
UserId,
Name
)
VALUES
(4301, 'Dan'),
(2, 'Mike'),
(3, 'John')
;
INSERT INTO @Clicks
(
ClickId,
Uin,
UserId,
Created,
PageId
)
VALUES
(1296600, 'B420D0F4-20BE-49BE-AAC9-47DD858B68DD', 4301, '2016-01-14 12:08:03:723', 3095),
(1296600, 'B420D0F4-20BE-49BE-AAC9-47DD858B68DD', 4301, '2016-01-14 12:08:03:723', 3095),
(1296599, 'DA5877BA-8FF5-4671-8DF9-CCCBF1555BA1', 4301, '2016-01-14 12:07:46:930', 3110),
(1296598, 'C6790CB9-6DA6-4A8B-84AA-7D2D3A4B5787', 4301, '2016-01-14 12:07:43:563', 5174)
;
INSERT INTO @Milestone
(
MiestoneId,
MilestoneTypeId,
CampaignId,
UserId,
Created,
PageId
)
VALUES
(1, 1, 1001, 4301, '2014-01-06 13:18:04:487', 3095),
(2, 1, 1001, 4301, '2014-01-06 13:41:01:257', 3110),
(3, 3, 1001, 4301, '2014-01-07 09:52:29:373', 5174)
;
```
As you spotted in your original query, you cannot directly join Milestone to Click, as each table has a different grain. In my query I've used [CTEs](https://msdn.microsoft.com/en-gb/library/ms175972.aspx) to return the totals from each table. The main body of my query joins the results.
**Example**
```
DECLARE @StartDate date = '2013-01-01';
DECLARE @EndDate date = '2016-01-15';
DECLARE @UserId int = 4301;
DECLARE @CountryId int = 77;
WITH Click AS
(
SELECT
UserId,
PageId,
COUNT(DISTINCT Uin) AS [Distinct Clicks],
COUNT(ClickId) AS [Total Clicks]
FROM
@Clicks
WHERE
UserId = @UserId
AND Created BETWEEN @StartDate AND @EndDate
GROUP BY
UserId,
PageId
),
Milestone AS
(
SELECT
UserId,
PageId,
SUM(CASE WHEN MileStoneTypeId = 1 THEN 1 ELSE 0 END) AS [Successful Landings],
SUM(CASE WHEN MileStoneTypeId = 2 THEN 1 ELSE 0 END) AS [Failed Landings],
SUM(CASE WHEN MileStoneTypeId = 3 THEN 1 ELSE 0 END) AS [Unfinished Landings]
FROM
@Milestone
WHERE
UserId = @UserId
AND Created BETWEEN @StartDate AND @EndDate
GROUP BY
UserId,
PageId
)
SELECT
p.PageId,
p.Name,
c.[Distinct Clicks],
c.[Total Clicks],
ms.[Successful Landings],
ms.[Failed Landings],
ms.[Unfinished Landings]
FROM
@Page AS p
INNER JOIN Click AS c ON c.PageId = p.PageId
INNER JOIN Milestone AS ms ON ms.PageId = c.PageId
AND ms.UserId = c.UserId
WHERE
p.CountryId = @CountryId
;
```
|
It is slow because your "click" subselects are evaluated twice for every row returned by your query.
Try to join the Click table as you did with the Milestone table and extend the `group by` clause accordingly.
upd.
Please, can you provide us the table structure and data like in the next example?
```
declare @Page as table (
PageId int,
etc
)
insert into @page (PageId, etc) values (3095, etc)
```
|
Slow SQL query on join and subquery
|
[
"sql",
"sql-server",
"t-sql"
] |
I have two tables:
```
Table1
--------------------------------
ID VAL1 DATE1
--------------------------------
1 1 20/03/2015
2 null null
3 1 10/01/2015
4 0 12/02/2015
5 null null
Table2
--------------------------------
ID VAL2 DATE1
--------------------------------
1 N 02/06/2015
1 N 01/08/2015
2 null null
3 O 05/04/2016
3 O 02/02/2015
4 O 01/07/2015
5 O 03/02/2015
5 N 10/01/2014
5 O 12/04/2015
```
I want to update :
* column VAL1 (of Table1) with '0', if VAL2 (of Table2) is equal to 'O'
* column DATE1 (of Table1) with the earliest DATE2 (of Table2) for each ID (here my problem)
(This two tables are not so simple, it's just for illustration, they can be joined with the ID column).
Here my code :
```
UPDATE Table1 t1
SET t1.VAL1 = '0',
t1.DATE1 = (select min(t2.DATE2) --To take the first DATE for each ID where VAL2='O' (not working fine)
FROM Table2 t2, Table1 t1
WHERE trim(t2.ID) = trim(t1.ID)
AND VAL2='O')
WHERE EXISTS (SELECT NULL
FROM Table2 t2
WHERE trim(t2.ID) = trim(t1.ID)
AND t2.Table2 = 'O')
AND VAL1<>'0'; --(for doing the update only if VAL1 not already equal to 0)
```
The expected result is :
```
Table1
--------------------------------
ID VAL1 DATE1
--------------------------------
1 1 20/03/2015
2 null null
3 0 02/02/2015
4 0 01/07/2015
5 0 10/01/2014
```
The result I get is :
```
Table1
--------------------------------
ID VAL1 DATE1
--------------------------------
1 1 20/03/2015
2 null null
3 0 10/01/2014
4 0 10/01/2014
5 0 10/01/2014
```
My problem is that the DATE1 is always updated with the same date, regardless of the ID.
|
You shouldn't have a second reference to `table1` in the first subquery; that is losing the correlation between the subquery and the outer query. If you run the subquery on its own it will always find the lowest date in `table2` for *any* ID that has `val2='O'` in `table1`, which is 10/01/2014. (Except your sample data isn't consistent; that's actually `N` so won't be considered - your current and expected results don't match the data you showed, but you said it isn't real). Every row eligible to be updated runs that same subquery and gets that same value.
You need to maintain the correlation between the outer query and the subquery, so the subquery should use the outer `table1` for its join, just like the second subquery already does:
```
UPDATE Table1 t1
SET t1.VAL1 = '0',
t1.DATE1 = (select min(t2.DATE2)
FROM Table2 t2
WHERE trim(t2.ID) = trim(t1.ID)
AND VAL2='O')
WHERE EXISTS (SELECT NULL
FROM Table2 t2
WHERE trim(t2.ID) = trim(t1.ID)
AND t2.Val2 = 'O')
AND VAL1<>'0';
```
|
You can use this UPDATE statement.
```
UPDATE TABLE1 T1
SET T1.VAL1 = '0',
T1.DATE1 = (SELECT MIN(T2.DATE2)
FROM TABLE2 T2
WHERE TRIM(T2.ID) = TRIM(T1.ID)
AND T2.VAL2='O')
WHERE T1.ID IN (SELECT T2.ID FROM TABLE2 T2 WHERE T2.VAL2='O')
```
Hope it will help you.
|
SQL (oracle) Update some records in table using values in another table
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I'm attempting to select values from the databases pictured below so that I may insert into a new table called `Desired Table`. Data and debugging is in a `Microsoft Access` database and I continue to receive the error:
> syntax error in query expression.
What is wrong with this query? The joins seem correct and so does the `FROM` clause. Please let me know if you need more information. Don't worry about the `INSERT` clause.
Query:
```
SELECT vicdescriptions.vid,
vicdescriptions.make,
vicdescriptions.vic_year,
vicdescriptions.optiontable,
vacdescriptions.accessory,
vacvalues.value,
vicvalues.valuetype,
vicvalues.value
FROM vicdescriptions
JOIN vicvalues
ON ( vicdescriptions.vic_make = vicvalues.vic_make
AND vicdescriptions.vic_year = vicvalues.vic_year );
```
Database Structure:
[DATABASE SCHEMA](https://i.stack.imgur.com/NcqF5.jpg)
Desired table for insertion:
[](https://i.stack.imgur.com/eVyPl.png)
|
```
SELECT VicDescriptions.VID,
VicDescriptions.Make,
VicDescriptions.VIC_Year,
VicDescriptions.OptionTable,
VacDescriptions.accessory,
VacValues.value,
VacValues.valuetype,
VacValues.value --(No such table as VicValues available in the database, you only have VacValues)
FROM VicDescriptions
JOIN VacValues --(No such table available in the database, you only have VacValues)
ON ( VicDescriptions.VIC_Make = VacValues.VIC_Make
AND VicDescriptions.VIC_Year = VacValues.VIC_Year )
JOIN VacDescriptions
ON ( VacDescriptions.Period = VacValues.Period
AND VacDescriptions.VAC = VacValues.VAC);
```
|
Access does not support `JOIN` as a synonym for `INNER JOIN`. You must always specify the type of `JOIN`:
```
FROM vicdescriptions
INNER JOIN vicvalues
ON ( vicdescriptions.vic_make = vicvalues.vic_make
AND vicdescriptions.vic_year = vicvalues.vic_year )
```
If there is not a table named `vicvalues`, Access will give you a different error message after you have changed `JOIN` to `INNER JOIN`.
|
SQL Join Incorrectly
|
[
"",
"sql",
"ms-access",
"join",
"left-join",
""
] |
How to get last date of particular month from the month ID and year in SQL ?
Inputs :
```
@MonthID INT = 2,
@Year INT = 2015
```
Required output : `28/02/2015`
|
It's easy in SQL Server 2012+
To frame date from the year and month use **[`DATEFROMPARTS (Transact-SQL)`](https://msdn.microsoft.com/en-us/library/hh213228.aspx)**
To find the last day of month use **[`EOMONTH (Transact-SQL)`](https://msdn.microsoft.com/en-us/library/hh213020.aspx)**
```
Declare @MonthID INT = 2,
@Year INT = 2015
SELECT Eomonth(Datefromparts(@Year, @MonthID, 1))
```
For anything less than SQL Server 2012:
```
SELECT Dateadd(dd, -1, Dateadd(mm, 1, Cast(Cast(@Year AS CHAR(4)) + '-'
+ Cast(@MonthID AS VARCHAR(2))+'-'+'1' AS DATE)))
```
**Note:** If the variables are of `Varchar` type then you can remove the `cast` to varchar
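As a quick cross-check outside the database, Python's standard library computes the same last day of the month (the inputs mirror the question's `@MonthID`/`@Year`; this is only an illustration, not part of the T-SQL solution):

```python
import calendar
from datetime import date

month, year = 2, 2015
# calendar.monthrange returns (weekday of the 1st, number of days in month)
last_day = calendar.monthrange(year, month)[1]
print(date(year, month, last_day))  # 2015-02-28
```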
|
You can find the last day of the month in the following manner
```
Declare @MonthID varchar(10) = '02', @Year varchar(10) = '2015'
select dateadd(dd,-1,dateadd(MM,1,cast(@Year+'-'+@MonthID+'-1' as date)))
```
|
How to get last date of particular month from the month ID and year in SQL?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a SQL table that looks like:
```
+----------+-----------+
| ID | Direction |
+----------+-----------+
| 1 | left |
| 1 | null |
| 2 | left |
| 2 | null |
| 3 | null |
| 4 | left |
| 4 | null |
| 5 | null |
+----------+-----------+
```
I want to show each value only once:
* If there will be ID `1` with Direction `null` and `left`, then show only ID `1` with direction `left`.
* If there will be ID `1` only with `null` value, show it with `null` value.
[](https://i.stack.imgur.com/rMwMt.png)
|
Use a common table expression (cte):
```
with cte as
(
Your huge select...
)
select *
from cte t1
where t1.Direction = 'left'
or not exists (select * from cte t2
where t2.kanbanid = t1.kanbanid
and t2.Direction = 'left')
```
I.e. if your select has Direction 'left' for a kanbanid, return that row. Also return that row if same kanbanid has no Direction 'left' at all.
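The prefer-'left' logic can be sanity-checked with an in-memory SQLite database (an illustrative sketch; the table name `t` is made up, the data follows the question):

```python
import sqlite3

# In-memory database seeded with the sample data from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, direction TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "left"), (1, None), (2, "left"), (2, None),
                 (3, None), (4, "left"), (4, None), (5, None)])

# Keep the 'left' row per id, or the NULL row if no 'left' row exists.
rows = con.execute("""
    SELECT id, direction
    FROM t t1
    WHERE t1.direction = 'left'
       OR NOT EXISTS (SELECT 1 FROM t t2
                      WHERE t2.id = t1.id AND t2.direction = 'left')
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'left'), (2, 'left'), (3, None), (4, 'left'), (5, None)]
```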
|
Why wouldn't the query below work?
```
select id,max(dir)
from #temp
group by id
```
Below is test data:
```
create table #temp
(
id int,
dir char(10)
)
insert into #temp
select 1,'left'
union all
select 1,null
union all
select 2,null
union all
select 3,'right'
union all
select 3,null
union all
select 3,null
select id,max(dir)
from #temp
group by id
```
Aggregate functions ignore nulls; below is the output:
[](https://i.stack.imgur.com/Mf0mo.png)
|
SQL 1 row twice
|
[
"",
"sql",
""
] |
I have two tables, `users` and `transactions`. The `transactions` table has a relation with `users`; the two table formats are like below:
```
users
id name email created
1 a a@mail.com 12-03-01
2 b b@mail.com 11-03-01
Transactions
id user_id balance
1 1 250
2 1 550
3 2 50
4 2 1000
```
I need the last inserted user's balance from the transactions table, together with all the user information. I am new to SQL.
So I have tried the code below:
```
select * from transactions
where id in (select max(id) from transactions group by user_id)
INNER JOIN users on transactions.user_id=users.id
```
It's giving me a syntax error near the inner join. Have I made a mistake in the inner join, or am I going in the wrong direction?
|
If you only want the balance, then a correlated subquery might be faster:
```
select u.*,
(select t.balance
from transactions t
where t.user_id = u.id
order by t.id desc
limit 1
) as MostRecentBalance
from users u;
```
For maximum performance, you want an index on `transactions(user_id, id desc, balance)`.
The reason this is faster is because it avoids the aggregation on the entire `transactions` table. This is even more important if you are only selecting a subset of users.
EDIT:
I originally read this question as one row per user. However, if you only want one row returned -- for the last insert into `transactions` -- then a simpler method is:
```
select u.*, t.balance
from users u join
transactions t
on u.id = t.user_id
order by t.id desc
limit 1;
```
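A minimal reproduction of the correlated-subquery variant against the question's sample rows, run in SQLite purely for illustration (`ORDER BY ... LIMIT 1` behaves the same way in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE transactions (id INTEGER, user_id INTEGER, balance INTEGER);
    INSERT INTO users VALUES (1, 'a'), (2, 'b');
    INSERT INTO transactions VALUES (1, 1, 250), (2, 1, 550),
                                    (3, 2, 50), (4, 2, 1000);
""")

# Most recent balance per user via a correlated subquery.
rows = con.execute("""
    SELECT u.id, u.name,
           (SELECT t.balance FROM transactions t
            WHERE t.user_id = u.id
            ORDER BY t.id DESC LIMIT 1) AS last_balance
    FROM users u ORDER BY u.id
""").fetchall()
print(rows)  # [(1, 'a', 550), (2, 'b', 1000)]
```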
|
The JOIN should be part of the FROM statement so it should look more like the code below.
```
select *
from transactions ts
INNER JOIN users
    ON (ts.user_id = users.id)
where ts.id in
    (
        select max(transactions.id)
        from transactions
        group by user_id
    );
```
Edited to clarify which id is in use, as per Gordon's suggestion.
|
How to get most recent balance from many users balances?
|
[
"",
"mysql",
"sql",
""
] |
I need a query that returns the column values concatenated, with single quotes around the string data types, like an INSERT INTO statement, taking their respective data types into account.
for example, the table in sql server look like this
```
String String Int
------------------------------------
000001 TUBO BUM 8 X 400 GRS 93,6
000002 TUBO BUM 2 X 50 GRS 10,6
000003 TUBO BUM 5 X 40 GRS 11,6
```
I need the result is the following
```
'000001','TUBO BUM 8 X 400 GRS', 93.6
'000002','TUBO BUM 8 X 50 GRS', 10.6
'000003','TUBO BUM 8 X 40 GRS', 11.6
```
is like an INSERT INTO STATEMENT.
I need to do it **dynamically** for all the columns of the table.
I researched and saw that I need to consult sys.columns, right?
Thanks !!!
|
[Here](https://github.com/drumsta/sql-generate-insert) is a github project for a solution to autogenerating insert statements for tables.
|
Use the script below. It generates insert statements.
```
declare @sql varchar(max) = (select STUFF((SELECT ', ' + C.name
FROM sys.columns C
WHERE object_id = object_id('MyTable')
FOR XML PATH('')), 1, 2, ''))
declare @data varchar(max) = 'SELECT ''INSERT INTO MyTable (xx5xx) VALUES ('' + ''vx5xv'' + CAST(' + @sql + ' as varchar) + ''vx5xv'' + '')'' FROM MyTable'
set @data = replace(@data, ', ', ' as varchar) + ''vx5xv'' + '', '' + ''vx5xv'' + CAST(')
set @data = replace(replace(@data, 'xx5xx', @sql), 'vx5xv', '''''')
print @data
exec(@data)
```
|
How to concatenate all columns values in SQL Server 2005?
|
[
"",
"sql",
"sql-server",
"csv",
"concatenation",
""
] |
I have two tables:
locations, and users. Users have a location\_id set, which is a one to many relation with the locations table.
I am trying to count how many users belong to a location.
So far I have a query:
```
SELECT users.location_id, locations.name FROM users
LEFT JOIN locations on locations.id = users.location_id
```
But this doesn't return the counts. I tried using GROUP BY, but I am not too familiar with how it works.
|
You can use the aggregate [count()](http://dev.mysql.com/doc/refman/5.7/en/counting-rows.html) function with the GROUP BY:
```
SELECT l.id, l.name, count(u.id)
FROM locations l
LEFT JOIN users u
ON l.id = u.location_id
GROUP BY l.id, l.name
```
|
You want all locations, so the query has to be of the form: locations LEFT JOIN users.
```
SELECT locations.id, locations.name, count(*) as usersCount
FROM locations
LEFT JOIN users on locations.id = users.location_id
GROUP BY locations.id, locations.name
```
It is correct, but imagine that locations has ten columns and you want them all. Would you group by ten columns? You would have to, but that is inefficient. You want to group at the lowest level possible:
```
SELECT locations.id, locations.name, usersCount
FROM locations
LEFT JOIN (SELECT users.location_id, count(*) AS usersCount
FROM users GROUP BY users.location_id) AS usersTable
ON locations.id = usersTable.location_id
```
or just use a correlated subquery (my favourite choice):
```
SELECT locations.id, locations.name,
(SELECT count(*) FROM users
WHERE users.location_id = locations.id) AS usersCount
FROM locations
```
Performance of the second and third solutions would be the same.
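A small SQLite sketch of the correlated-subquery form (table contents are illustrative). Note that it reports 0 for a location with no users, whereas `count(*)` over a `LEFT JOIN` would count such a location as 1; counting a user column, as in the first answer, avoids that too:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE locations (id INTEGER, name TEXT);
    CREATE TABLE users (id INTEGER, location_id INTEGER);
    INSERT INTO locations VALUES (1, 'HQ'), (2, 'Remote');
    INSERT INTO users VALUES (1, 1), (2, 1), (3, NULL);
""")

# Correlated subquery: a location with no users reports 0, not 1.
rows = con.execute("""
    SELECT l.id, l.name,
           (SELECT count(*) FROM users u
            WHERE u.location_id = l.id) AS users_count
    FROM locations l ORDER BY l.id
""").fetchall()
print(rows)  # [(1, 'HQ', 2), (2, 'Remote', 0)]
```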
|
Get column from ID in a join with group by
|
[
"",
"sql",
"postgresql",
""
] |
I want to create three temp tables and then combine their contents like so:
```
CREATE TABLE #TEMP1 (
MEMBERITEMCODE VARCHAR(25),
WEEK1USAGE VARCHAR(25),
WEEK1PRICE VARCHAR(25));
INSERT INTO #TEMP1 (MEMBERITEMCODE, WEEK1USAGE, WEEK1PRICE)
SELECT MEMBERITEMCODE, SUM(QTYSHIPPED), PRICE
FROM INVOICEDETAIL
WHERE UNIT=@UNIT AND INVOICEDATE BETWEEN @BEGDATE AND @WEEK1END
GROUP BY MEMBERITEMCODE, PRICE
CREATE TABLE #TEMP2 (
MEMBERITEMCODE VARCHAR(25),
WEEK2USAGE VARCHAR(25),
WEEK2PRICE VARCHAR(25));
INSERT INTO #TEMP2 (MEMBERITEMCODE, WEEK2USAGE, WEEK2PRICE)
SELECT MEMBERITEMCODE, SUM(QTYSHIPPED), PRICE
FROM INVOICEDETAIL
WHERE UNIT=@UNIT AND INVOICEDATE BETWEEN @WEEK2BEGIN AND @ENDDATE
GROUP BY MEMBERITEMCODE, PRICE
CREATE TABLE #TEMP3 (
MEMBERITEMCODE VARCHAR(25),
DESCRIPTION VARCHAR(200),
THIS VARCHAR(25),
THAT VARCHAR(25),
THEOTHERTHING VARCHAR(25));
INSERT INTO #TEMP3 (MEMBERITEMCODE, DESCRIPTION, THIS, THAT, THEOTHERTHING)
SELECT MEMBERITEMCODE, DESCRIPTION, THIS, THAT, THEOTHERTHING
FROM INVOICEDETAIL
WHERE UNIT=@UNIT AND INVOICEDATE BETWEEN @BEGDATE AND @ENDDATE
CREATE TABLE #TEMPCOMBINED (
MEMBERITEMCODE VARCHAR(25),
DESCRIPTION VARCHAR(200),
THIS VARCHAR(25),
THAT VARCHAR(25),
THEOTHERTHING VARCHAR(25),
WEEK1USAGE VARCHAR(25),
WEEK1PRICE VARCHAR(25),
WEEK2USAGE VARCHAR(25),
WEEK2PRICE VARCHAR(25),
USAGEVARIANCE VARCHAR(25),
PRICEVARIANCE VARCHAR(25),
PRICEVARIANCEPERCENTAGE VARCHAR(25));
INSERT INTO #TEMPCOMBINED (MEMBERITEMCODE, DESCRIPTION, THIS, THAT, THEOTHERTHING, WEEK1USAGE, WEEK1PRICE, WEEK2USAGE, WEEK2PRICE, USAGEVARIANCE,
PRICEVARIANCE, PRICEVARIANCEPERCENTAGE)
SELECT T1.MEMBERITEMCODE, T3.DESCRIPTION, T3.THIS, T3.THAT, T3.THEOTHERTHING, T1.WEEK1USAGE, T1.WEEK1PRICE, T2.WEEK2USAGE, T2.WEEK2PRICE, NULL, NULL, NULL
FROM #TEMP1 T1
LEFT JOIN #TEMP2 T2 ON T1.MEMBERITEMCODE = T2.MEMBERITEMCODE
LEFT JOIN #TEMP3 T3 ON T1.MEMBERITEMCODE = T3.MEMBERITEMCODE
```
Now I want to replace the placeholder Nulls in the calculated fields, but don't know how to do that. My best idea so far is:
```
UPDATE #TEMPCOMBINED
SET USAGEVARIANCE = WEEK2USAGE - WEEK1USAGE,
PRICEVARIANCE = WEEK2PRICE - WEEK1PRICE,
PRICEVARIANCEPERCENTAGE = (WEEK2PRICE - WEEK1PRICE) / WEEK1PRICE
```
Is there a better way (one that actually works, for instance)?
|
Using computed columns:
```
CREATE TABLE #TEMPCOMBINED(
MEMBERITEMCODE VARCHAR(25),
DESCRIPTION VARCHAR(200),
THIS VARCHAR(25),
THAT VARCHAR(25),
THEOTHERTHING VARCHAR(25),
WEEK1USAGE DECIMAL(18,10),
WEEK1PRICE DECIMAL(18,10),
WEEK2USAGE DECIMAL(18,10),
WEEK2PRICE DECIMAL(18,10),
USAGEVARIANCE AS WEEK2USAGE - WEEK1USAGE,
PRICEVARIANCE AS WEEK2PRICE - WEEK1PRICE,
PRICEVARIANCEPERCENTAGE AS (WEEK2PRICE - WEEK1PRICE) / WEEK1PRICE
);
```
`LiveDemo`
And skip them in `INSERT` statement:
```
INSERT INTO #TEMPCOMBINED (MEMBERITEMCODE, DESCRIPTION, THIS, THAT, THEOTHERTHING, WEEK1USAGE, WEEK1PRICE, WEEK2USAGE, WEEK2PRICE)
SELECT T1.MEMBERITEMCODE, T3.DESCRIPTION, T3.THIS, T3.THAT, T3.THEOTHERTHING, T1.WEEK1USAGE, T1.WEEK1PRICE, T2.WEEK2USAGE, T2.WEEK2PRICE
FROM #TEMP1 T1
LEFT JOIN #TEMP2 T2 ON T1.MEMBERITEMCODE = T2.MEMBERITEMCODE
LEFT JOIN #TEMP3 T3 ON T1.MEMBERITEMCODE = T3.MEMBERITEMCODE
```
*Warning:*
Division by `WEEK1PRICE` so you could add
`(WEEK2PRICE - WEEK1PRICE) / NULLIF(WEEK1PRICE,0)` to avoid Math exception or add `CHECK` constraint to ensure that `WEEK1PRICE` is not 0.
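A tiny illustration of what `NULLIF` buys you, run in SQLite purely for convenience (`NULLIF` is standard SQL, so the expression reads the same in T-SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NULLIF turns a zero divisor into NULL, so the division yields NULL
# instead of raising a divide-by-zero error.
row = con.execute(
    "SELECT (10.0 - 5.0) / NULLIF(0, 0), (10.0 - 5.0) / NULLIF(2, 0)"
).fetchone()
print(row)  # (None, 2.5)
```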
|
First thing: why are you using the varchar datatype for the following columns?
`USAGEVARIANCE, WEEK1USAGE, WEEK2USAGE, PRICEVARIANCE, WEEK1PRICE, WEEK2PRICE, PRICEVARIANCEPERCENTAGE`
It is better to use decimal or integer data types. As you are using the `sum()` function to calculate their values, it will return an int/decimal value.
Second: you can calculate `USAGEVARIANCE`, `PRICEVARIANCE`, and `PRICEVARIANCEPERCENTAGE` while inserting into the table. For example:
```
select a,b, (a-b) as USAGEVARIANCE, (c-d) as PRICEVARIANCE from table
```
etc and then insert it into the temporary table
|
How can I compute values in a TSQL Update statement?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-update",
"calculated-columns",
""
] |
What is the cleverest way to delete tuples from table1 that are in the second table, if the second table is not part of the initial database but the result of some really big query?
```
table1 *this table is a result of some query
------------- -------------
| id1 | id2 | | id1 | id2 |
------------- -------------
| 1 2 | | 5 6 |
| 3 4 | | 1 2 |
| 5 6 | | 11 12 |
| 7 8 | -------------
| 9 10 |
| 11 12 |
| 13 14 |
-------------
```
I came up with
```
delete from table1
where id1 in (select id1 from ( really long query to get a second table))
and id2 in (select id2 from (the same really long query to get a second table));
```
It works, but I feel like I'm doing it wrong and not keeping the query DRY.
And would the way you suggest work the same if table1 had an additional column, for example "somecol"?
|
IMO, You can use [`EXISTS`](http://www.techonthenet.com/sqlite/exists.php) statement like this:
```
DELETE FROM table1
WHERE EXISTS (
SELECT 1
FROM (<your long query>) AS dt
WHERE table1.id1 = dt.id1
AND table1.id2 = dt.id2);
```
**[[SQL Fiddle Sample]](http://www.sqlfiddle.com/#!5/81039/5)**
|
A Correlated Subquery using `EXISTS` allows matching multiple columns:
```
delete
from table1
where exists
( select * from
(
"really long query"
) as t2
where table1.id1 = t2.id1 -- correlating inner and outer table
and table1.id2 = t2.id2 -- similar to a join-condition
)
```
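A runnable sketch of this pattern in SQLite, where an inline `UNION ALL` stands in for the "really long query":

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id1 INTEGER, id2 INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12), (13, 14)])

# Correlated EXISTS matches both columns at once, unlike two independent
# IN clauses, which could delete unintended (id1, id2) combinations.
con.execute("""
    DELETE FROM table1
    WHERE EXISTS (SELECT 1
                  FROM (SELECT 5 AS id1, 6 AS id2
                        UNION ALL SELECT 1, 2
                        UNION ALL SELECT 11, 12) t2
                  WHERE table1.id1 = t2.id1 AND table1.id2 = t2.id2)
""")
rows = con.execute("SELECT * FROM table1 ORDER BY id1").fetchall()
print(rows)  # [(3, 4), (7, 8), (9, 10), (13, 14)]
```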
|
Delete subtable from a table, SQL
|
[
"",
"sql",
"sqlite",
""
] |
BigQuery has NOAA's gsod data loaded as a public dataset - starting in 1929: <https://www.reddit.com/r/bigquery/comments/2ts9wo/noaa_gsod_weather_data_loaded_into_bigquery/>
How can I retrieve the historical data for any city?
|
Update 2019: For convenience
```
SELECT *
FROM `fh-bigquery.weather_gsod.all`
WHERE name='SAN FRANCISCO INTERNATIONAL A'
ORDER BY date DESC
```
Updated daily - or report here if it doesn't
For example, to get the hottest days for San Francisco stations since 1980:
```
SELECT name, state, ARRAY_AGG(STRUCT(date,temp) ORDER BY temp DESC LIMIT 5) top_hot, MAX(date) active_until
FROM `fh-bigquery.weather_gsod.all`
WHERE name LIKE 'SAN FRANC%'
AND date > '1980-01-01'
GROUP BY 1,2
ORDER BY active_until DESC
```
[](https://i.stack.imgur.com/bgILk.png)
Note that this query processed only 28MB thanks to a clustered table.
And similar, but instead of using the station name I'll use a location and a table clustered by the location:
```
WITH city AS (SELECT ST_GEOGPOINT(-122.465, 37.807))
SELECT name, state, ARRAY_AGG(STRUCT(date,temp) ORDER BY temp DESC LIMIT 5) top_hot, MAX(date) station_until
FROM `fh-bigquery.weather_gsod.all_geoclustered`
WHERE EXTRACT(YEAR FROM date) > 1980
AND ST_DISTANCE(point_gis, (SELECT * FROM city)) < 40000
GROUP BY name, state
HAVING EXTRACT(YEAR FROM station_until)>2018
ORDER BY ST_DISTANCE(ANY_VALUE(point_gis), (SELECT * FROM city))
LIMIT 5
```
[](https://i.stack.imgur.com/HtEZA.png)
---
Update 2017: Standard SQL and up-to-date tables:
```
SELECT TIMESTAMP(CONCAT(year,'-',mo,'-',da)) day, AVG(min) min, AVG(max) max, AVG(IF(prcp=99.99,0,prcp)) prcp
FROM `bigquery-public-data.noaa_gsod.gsod2016`
WHERE stn='722540' AND wban='13904'
GROUP BY 1
ORDER BY day
```
---
Additional example, to show the coldest days in Chicago in this decade:
```
#standardSQL
SELECT year, FORMAT('%s%s',mo,da) day ,min
FROM `fh-bigquery.weather_gsod.stations` a
JOIN `bigquery-public-data.noaa_gsod.gsod201*` b
ON a.usaf=b.stn AND a.wban=b.wban
WHERE name='CHICAGO/O HARE ARPT'
AND min!=9999.9
AND mo<'03'
ORDER BY 1,2
```
---
To retrieve the historical weather for any city, first we need to find what station reports in that city. The table `[fh-bigquery:weather_gsod.stations]` contains the name of known stations, their state (if in the US), country, and other details.
So to find all the stations in Austin, TX, we would use a query like this:
```
SELECT state, name, lat, lon
FROM [fh-bigquery:weather_gsod.stations]
WHERE country='US' AND state='TX' AND name CONTAINS 'AUST'
LIMIT 10
```
[](https://i.stack.imgur.com/P8t0f.png)
This approach has 2 problems that need to be solved:
* Not every known station is present in that table - I need to get an updated version of this file. So don't give up if you don't find the station you are looking for here.
* Not every station found in this file has been operating every year - so we need to find stations that have data during the year we are looking for.
To solve the second problem, we need to join the stations table with the actual data we are looking for. The following query looks for stations around Austin, and the column `c` looks at how many days during 2015 have actual data:
```
SELECT state, name, FIRST(a.wban) wban, FIRST(a.stn) stn, COUNT(*) c, INTEGER(SUM(IF(prcp=99.99,0,prcp))) rain, FIRST(lat) lat, FIRST(lon) long
FROM [fh-bigquery:weather_gsod.gsod2015] a
JOIN [fh-bigquery:weather_gsod.stations] b
ON a.wban=b.wban
AND a.stn=b.usaf
WHERE country='US' AND state='TX' AND name CONTAINS 'AUST'
GROUP BY 1,2
LIMIT 10
```
[](https://i.stack.imgur.com/5Sogw.png)
That's good! We found 4 stations with data for Austin during 2015.
Note that we had to treat "rain" in a special way: When a station doesn't monitor for rain, instead of `null`, it marks it as 99.99. Our query filters those values out.
Now that we know the stn and wban numbers for these stations, we can pick any of them and visualize the results:
```
SELECT TIMESTAMP('2015'+mo+da) day, AVG(min) min, AVG(max) max, AVG(IF(prcp=99.99,0,prcp)) prcp
FROM [fh-bigquery:weather_gsod.gsod2015]
WHERE stn='722540' AND wban='13904'
GROUP BY 1
ORDER BY day
```
[](https://i.stack.imgur.com/jTqpu.png)
|
There's now an [official set of the NOAA data on BigQuery](https://bigquery.cloud.google.com/dataset/bigquery-public-data:noaa_gsod "NOAA data on BigQuery") in addition to [Felipe's "official" public dataset](https://stackoverflow.com/users/132438/felipe-hoffa "Felipe's"). There's [a blog post describing it](https://cloudplatform.googleblog.com/2016/09/global-historical-daily-weather-data-now-available-in-BigQuery.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:%20ClPlBl%20(Cloud%20Platform%20Blog) "Blog post").
An example getting minimum temperatures for August 15, 2016:
```
SELECT
name,
value/10 AS min_temperature,
latitude,
longitude
FROM
[bigquery-public-data:ghcn_d.ghcnd_stations] AS stn
JOIN
[bigquery-public-data:ghcn_d.ghcnd_2016] AS wx
ON
wx.id = stn.id
WHERE
wx.element = 'TMIN'
AND wx.qflag IS NULL
AND STRING(wx.date) = '2016-08-15'
```
Which returns:
[](https://i.stack.imgur.com/BpfPJ.png)
|
How to get the historical weather for any city with BigQuery?
|
[
"",
"sql",
"google-bigquery",
"weather",
"opendata",
""
] |
(I'm using postgres)
I've got two queries from different tables. Each is grouped by date.
What is the best way to join them into one query (UNION or JOIN)?
Thanks in advance.
Query #1:
```
SELECT to_char(date,'MM') as mon,
extract(year from date) as yyyy,
SUM("table_1"."user_money") AS user_sum,
SUM("table_1"."system_money") AS system_sum
FROM "table_1"
GROUP BY 1,2
ORDER BY 2,1;
```
Result for Query #1:
```
mon | yyyy | user_sum | system_sum
-----+------+------------------+-----------------
11 | 2015 | 10 | 50
12 | 2015 | 20 | 60
(2 rows)
```
Query #2:
```
SELECT to_char(created_at,'MM') as mon,
extract(year from created_at) as yyyy,
SUM("table_2"."amount") AS payments_sum
FROM "table_2"
GROUP BY 1,2
ORDER BY 2,1;
```
Result for Query #2:
```
mon | yyyy | payments_sum
-----+------+--------------
10 | 2015 | 500
11 | 2015 | 600
12 | 2015 | 700
01 | 2016 | 800
(4 rows)
```
Required result:
```
mon | yyyy | payments_sum | user_sum | system_sum
-----+------+--------------+------------------+----------------
10 | 2015 | 500 | |
11 | 2015 | 600 | 10 | 50
12 | 2015 | 700 | 20 | 60
01 | 2016 | 800 | |
(4 rows)
```
|
Here is another one in a more PostgreSQL specific syntax:
```
WITH query_one AS (
SELECT
to_char(date,'MM') as mon,
extract(year from date) as yyyy,
SUM("table_1"."user_money") AS user_sum,
SUM("table_1"."system_money") AS system_sum
FROM
"table_1"
GROUP BY
1,2
), query_two AS (
SELECT
to_char(created_at,'MM') as mon,
extract(year from created_at) as yyyy,
SUM("table_2"."amount") AS payments_sum
FROM
"table_2"
GROUP BY
1,2
)
SELECT
*
FROM
    query_two
        LEFT JOIN query_one USING (mon, yyyy)
ORDER BY
yyyy
, mon;
```
The `USING` clause is easier to use and prevents `mon` and `yyyy` from appearing twice in the output; the rest is just for the sake of a clear view.
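To see a join with `USING` produce the required four rows (including months with payments but no table_1 data), here is a small SQLite sketch driving from the payments side; the `q1`/`q2` tables stand in for the two aggregated subqueries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE q1 (mon TEXT, yyyy INTEGER, user_sum INTEGER);
    CREATE TABLE q2 (mon TEXT, yyyy INTEGER, payments_sum INTEGER);
    INSERT INTO q1 VALUES ('11', 2015, 10), ('12', 2015, 20);
    INSERT INTO q2 VALUES ('10', 2015, 500), ('11', 2015, 600),
                          ('12', 2015, 700), ('01', 2016, 800);
""")

# USING collapses the join columns so mon/yyyy appear once in the output.
rows = con.execute("""
    SELECT mon, yyyy, payments_sum, user_sum
    FROM q2 LEFT JOIN q1 USING (mon, yyyy)
    ORDER BY yyyy, mon
""").fetchall()
print(rows)
# [('10', 2015, 500, None), ('11', 2015, 600, 10),
#  ('12', 2015, 700, 20), ('01', 2016, 800, None)]
```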
|
Try this.
```
SELECT
to_char(date,'MM') as mon,
extract(year from date) as yyyy,
SUM("table_2"."amount") AS payments_sum,
SUM("table_1"."user_money") AS user_sum,
SUM("table_1"."system_money") AS system_sum
FROM
"table_1"
LEFT JOIN
"table_2" ON (to_char("table_1"."date", 'MM') = to_char("table_2"."date", 'MM') AND extract(year from "table_1"."date") = extract(year from "table_2"."date"))
GROUP BY 1,2
ORDER BY 2,1;
```
|
Group by date for multiple tables
|
[
"",
"sql",
"postgresql",
"join",
"group-by",
"union",
""
] |
I need to create a SQL query.
This query needs to select from a table where a column matches a regular expression.
For example, I have those values:
TABLE test (name)
* XHRTCNW
* DHRTRRR
* XHRTCOP
* CPHCTPC
* CDDHRTF
* PEOFOFD
I want to select all the rows that have "HRT" right after the first character (values 1, 2 and 3 - values that look like "-HRT---"), but not those where "HRT" appears later in the string (value 5).
So I'm not sure how to do it because a simple
```
SELECT *
FROM test
WHERE name LIKE '%HRT%'
```
will return value 1, 2, 3 and 5.
Sorry if I'm not really clear with what I want/need.
|
You can use `substring`.
```
SELECT * FROM test WHERE substring(name from 2 for 3) = 'HRT'
```
|
You can also change the pattern. Instead of using `%` which means zero-or-more anything, you can use `_` which means exactly one.
```
SELECT * FROM test WHERE name like '_HRT%';
```
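A quick check of the `_` wildcard against the question's sample values (SQLite is used here for convenience; the pattern behaves the same in PostgreSQL for these uppercase strings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (name TEXT)")
con.executemany("INSERT INTO test VALUES (?)",
                [("XHRTCNW",), ("DHRTRRR",), ("XHRTCOP",),
                 ("CPHCTPC",), ("CDDHRTF",), ("PEOFOFD",)])

# '_' matches exactly one character, so HRT must start at position 2.
rows = con.execute(
    "SELECT name FROM test WHERE name LIKE '_HRT%' ORDER BY name"
).fetchall()
print(rows)  # [('DHRTRRR',), ('XHRTCNW',), ('XHRTCOP',)]
```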
|
Select Where Like regular expression
|
[
"",
"sql",
"regex",
"postgresql",
""
] |
I have a table with 4 columns (`id, bm, name, act`).
I want to retrieve records grouped by `bm` and count, for every group, how many records have `act = no` and how many have `act = yes` at once...
So if I have records:
```
(Id, bm, name, act)
1, 5, Nik, yes
2, 6, Mike, yes
3, 5, Tom, no
4, 5, Alex, no
```
Result I need is:
```
(bm, totalYes, totalNo)
5, 1, 2
6, 1, 0
```
I guess that everything is possible to retrieve from SQL but I don't know how :(
|
You can use conditional aggregation to achieve this result:
```
select
bm,
sum(case when act = 'yes' then 1 else 0 end) as "Count of yes",
sum(case when act = 'no' then 1 else 0 end) as "Count of no"
from t
group by bm;
```
With some databases, like MySQL, you can reduce the aggregation to `sum(act = 'yes')`, but the ANSI SQL standard requires the full case expression.
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!15/66bd9/2)
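The conditional-aggregation query can also be verified against the question's sample data in a few lines of SQLite (illustrative only; the SQL is the same as above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, bm INTEGER, name TEXT, act TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                [(1, 5, "Nik", "yes"), (2, 6, "Mike", "yes"),
                 (3, 5, "Tom", "no"), (4, 5, "Alex", "no")])

# One pass over the table; each CASE contributes 1 only for matching rows.
rows = con.execute("""
    SELECT bm,
           SUM(CASE WHEN act = 'yes' THEN 1 ELSE 0 END) AS total_yes,
           SUM(CASE WHEN act = 'no'  THEN 1 ELSE 0 END) AS total_no
    FROM t GROUP BY bm ORDER BY bm
""").fetchall()
print(rows)  # [(5, 1, 2), (6, 1, 0)]
```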
|
Try the following:
```
SELECT bm,
       SUM(CASE WHEN act = 'no' THEN 1 ELSE 0 END) AS NoCount,
       SUM(CASE WHEN act = 'yes' THEN 1 ELSE 0 END) AS YesCount
FROM tbl
GROUP BY bm
```
|
SQL SELECT Query with group and distinct
|
[
"",
"sql",
"database",
""
] |
Right now I have a select query which basically does table1 minus table2 (including records that are different): a left join, then filtering on nulls.
My query is:
```
SELECT table1.serial_number,
table1.equip_account_number,
table1.equip_service_address_id,
table1.equip_ani_phone_number,
table1.equip_part_number,
table1.equip_polled_date,
table1.equip_zone_map,
table1.equip_return_value,
table1.equip_renewal_frequency,
table1.equip_last_renewal_date,
table1.equip_in_stock_date,
table1.equip_assigned_addresses,
table1.equip_link_to_serial_number,
table1.equip_converter_type,
table1.equip_converter_id,
table1.equip_converter_model,
table1.equip_converter_options,
table1.equip_converter_value,
table1.equip_emp_code,
table1.equip_vendor_code,
table1.equip_headend_code,
table1.equip_distributor_code,
table1.equip_manufacturer_code,
table1.equip_location_code,
table1.equip_group_code,
table1.equip_ownership_code,
table1.equip_secondary_conv_info,
table1.equip_secondary_conv_type,
table1.equip_second_conv_manufacturer,
table1.equip_second_conv_install_date,
table1.call_back_cycle_day,
table1.call_back_last_date,
table1.call_back_request_date,
table1.trx_equip_status_code,
table1.trx_equip_reason_code,
table1.tv_ind,
table1.int_ind,
table1.tel_ind,
table1.dwh_create_date,
table1.dwh_update_date,
table1.equip_outlet_location_code,
table1.equip_return_date_due,
table1.equip_unrecovered_ind,
table1.equip_delete_date,
table1.install_date,
table1.work_order_number,
table1.ds_work_order_number,
table1.disconnect_emp_code,
table1.disconnect_date,
table1.ds_equip_location,
table1.equip_active_in_tv_ind,
table1.equip_active_in_tel_ind,
table1.equip_active_in_int_ind,
table1.equip_active_intv_change_date,
table1.equip_active_intel_change_date,
table1.equip_active_inint_change_date,
table1.pirat_ind,
table1.pirat_ind_change_date,
table1.equip_owner
FROM   dim_equip table1,
       scd_equip table2
WHERE table2.dwh_end_date(+) = To_date('31/12/2999', 'DD/MM/YYYY')
AND ( table2.serial_number IS NULL
OR ( table1.equip_account_number <> table2.equip_account_number
OR table1.equip_service_address_id <>
table2.equip_service_address_id
OR table1.equip_ani_phone_number <>
table2.equip_ani_phone_number
OR table1.equip_part_number <> table2.equip_part_number
OR table1.equip_polled_date <> table2.equip_polled_date
OR table1.equip_zone_map <> table2.equip_zone_map
OR table1.equip_return_value <> table2.equip_return_value
OR table1.equip_renewal_frequency <>
table2.equip_renewal_frequency
OR table1.equip_last_renewal_date <>
table2.equip_last_renewal_date
OR table1.equip_in_stock_date <> table2.equip_in_stock_date
OR table1.equip_assigned_addresses <>
table2.equip_assigned_addresses
OR table1.equip_link_to_serial_number <>
table2.equip_link_to_serial_number
OR table1.equip_converter_type <>
table2.equip_converter_type
OR table1.equip_converter_id <> table2.equip_converter_id
OR table1.equip_converter_model <>
table2.equip_converter_model
OR table1.equip_converter_options <>
table2.equip_converter_options
OR table1.equip_converter_value <>
table2.equip_converter_value
OR table1.equip_emp_code <> table2.equip_emp_code
OR table1.equip_vendor_code <> table2.equip_vendor_code
OR table1.equip_headend_code <> table2.equip_headend_code
OR table1.equip_distributor_code <>
table2.equip_distributor_code
OR table1.equip_manufacturer_code <>
table2.equip_manufacturer_code
OR table1.equip_location_code <> table2.equip_location_code
OR table1.equip_group_code <> table2.equip_group_code
OR table1.equip_ownership_code <>
table2.equip_ownership_code
OR table1.equip_secondary_conv_info <>
table2.equip_secondary_conv_info
OR table1.equip_secondary_conv_type <>
table2.equip_secondary_conv_type
OR table1.equip_second_conv_manufacturer <>
table2.equip_second_conv_manufacturer
OR table1.equip_second_conv_install_date <>
table2.equip_second_conv_install_date
OR table1.call_back_cycle_day <> table2.call_back_cycle_day
OR table1.call_back_last_date <> table2.call_back_last_date
OR table1.call_back_request_date <>
table2.call_back_request_date
OR table1.trx_equip_status_code <>
table2.trx_equip_status_code
OR table1.trx_equip_reason_code <>
table2.trx_equip_reason_code
OR table1.tv_ind <> table2.tv_ind
OR table1.int_ind <> table2.int_ind
OR table1.tel_ind <> table2.tel_ind
OR table1.equip_outlet_location_code <>
table2.equip_outlet_location_code
OR table1.equip_return_date_due <>
table2.equip_return_date_due
OR table1.equip_unrecovered_ind <>
table2.equip_unrecovered_ind
OR table1.equip_delete_date <> table2.equip_delete_date
OR table1.install_date <> table2.install_date
OR table1.work_order_number <> table2.work_order_number
OR table1.ds_work_order_number <>
table2.ds_work_order_number
OR table1.disconnect_emp_code <> table2.disconnect_emp_code
OR table1.disconnect_date <> table2.disconnect_date
OR table1.ds_equip_location <> table2.ds_equip_location
OR table1.equip_active_in_tv_ind <>
table2.equip_active_in_tv_ind
OR table1.equip_active_in_tel_ind <>
table2.equip_active_in_tel_ind
OR table1.equip_active_in_int_ind <>
table2.equip_active_in_int_ind
OR table1.equip_active_intv_change_date <>
table2.equip_active_intv_change_date
OR table1.equip_active_intel_change_date <>
table2.equip_active_intel_change_date
OR table1.equip_active_inint_change_date <>
table2.equip_active_inint_change_date
OR table1.pirat_ind <> Nvl(table2.pirat_ind, 0)
OR table1.pirat_ind_change_date <>
NVL(table2.pirat_ind_change_date,
TO_DATE('01/01/0001', 'DD/MM/YYYY'))
OR table1.equip_owner <> table2.equip_owner) )
AND table1.serial_number = table2.serial_number(+)
```
`Table1` has only a unique index and PK - `SERIAL_NUMBER`.
`Table2` has a unique index and PK - `DWH_SERIAL_KEY`,
and regular indexes - `DWH_END_DATE, EQUIP_ACCOUNT_NUMBER, SERIAL_NUMBER`.
`Table1` has 13M records and `Table2` has a lot more, but after the first condition (`END_DATE=2999`) it returns 13M records too.
The query takes about 10-25 minutes, depending each day on the amount of data that arrives.
Any thoughts on how to make it faster will be appreciated.
|
The query you wrote, and the variants using `MINUS`, require all columns to be compared, which causes the performance problem.
### recommended approach
I recommend changing how your tables are modelled, in order to allow easier and faster identification of modified records.
For example, you could keep track of the last modification date of each record, as well as the date of the last execution of your data update process. Comparing the last-modification column to your timestamp would give you the desired records much faster, as only one column needs to be compared.
### workaround
If this is not an option for you, you may want to do the following instead:
(But really, try the recommended approach first; it is what's used in the industry, and there's a reason for it.)
Add an extra column `record_hash` to table1 and table2, populated with the following:
```
ora_hash(equip_account_number||equip_service_address_id||equip_ani_phone_number||...||equip_owner)
```
Just make sure to map all the columns inside `ora_hash` the same way you want to compare the records (use `nvl`s where appropriate).
Then you can do the minus logic in a much simpler way:
```
SELECT *
FROM table1 tb1,
( -- this subquery compares record from both tables and returns the serial_number of all new/modified records inside table1
SELECT table1.serial_number,
table1.record_hash
FROM table1
minus
select table2.serial_number,
table2.record_hash
From table2
WHERE table2.dwh_end_date = to_date('31/12/2999', 'DD/MM/YYYY')
) diff
where diff.serial_number = tb1.serial_number
```
`ora_hash` is [documented here](http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10759/functions097.htm).
also: read about [collisions](https://jonathanlewis.wordpress.com/2009/11/21/ora_hash-function/).
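As a language-neutral illustration of the hash-column idea (not the author's Oracle setup: the table layout, column names, and SHA-1 standing in for `ora_hash` are all assumptions), here is a sketch using Python's built-in sqlite3:

```python
import hashlib
import sqlite3

def row_hash(*cols):
    # NULL-safe concatenation + hash, mimicking ora_hash(col1||col2||...)
    joined = "|".join("" if c is None else str(c) for c in cols)
    return hashlib.sha1(joined.encode()).hexdigest()

con = sqlite3.connect(":memory:")
for tbl in ("t1", "t2"):
    con.execute(f"CREATE TABLE {tbl} (serial_number TEXT PRIMARY KEY,"
                " acct TEXT, phone TEXT, record_hash TEXT)")

# t1 is the fresh load; in t2, S2 is stale and S3 is missing entirely.
con.executemany("INSERT INTO t1 VALUES (?,?,?,?)",
                [(sn, a, p, row_hash(a, p)) for sn, a, p in
                 [("S1", "A", "111"), ("S2", "B", "222"), ("S3", "C", "333")]])
con.executemany("INSERT INTO t2 VALUES (?,?,?,?)",
                [(sn, a, p, row_hash(a, p)) for sn, a, p in
                 [("S1", "A", "111"), ("S2", "B", "999")]])

# Only (key, hash) pairs are compared; the wide column list disappears.
changed = sorted(r[0] for r in con.execute(
    "SELECT serial_number, record_hash FROM t1"
    " EXCEPT "
    "SELECT serial_number, record_hash FROM t2"))
```

With a precomputed `record_hash`, the diff touches only the key and hash columns, which is what makes the set difference cheap regardless of how wide the real tables are.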
|
Looking at your code, it seems you want to select the values of one set that are not in another set.
If you are using Oracle, then you can also use the `MINUS` set operator.
The second part of your query ...
```
AND ( table2.serial_number IS NULL
OR ( table1.equip_account_number <> table2.equip_account_number
OR table1.equip_service_address_id <>
table2.equip_service_address_id
OR table1.equip_ani_phone_number <>
table2.equip_ani_phone_number ...
......
```
could easily be changed into a `MINUS` select of the data in table2 with respect to the data in table1.
The code below is just a partial sample (a suggestion):
```
SELECT table1.serial_number,
table1.equip_account_number,
table1.equip_service_address_id,
table1.equip_ani_phone_number,
table1.equip_part_number,
......
......
table1.pirat_ind,
table1.pirat_ind_change_date,
table1.equip_owner
FROM table1
minus
select table2.serial_number,
       table2.equip_account_number,
       table2.equip_service_address_id,
       table2.equip_ani_phone_number,
       table2.equip_part_number
       ......
from table2
WHERE table2.dwh_end_date = To_date('31/12/2999', 'DD/MM/YYYY');
```
|
Improve SELECT performance
|
[
"",
"sql",
"oracle",
"performance",
"plsql",
"etl",
""
] |
I have Oracle table `Demo_t`, having column `end_date` which may have value null. I want to select all rows having `end_date` greater than current date.
I have written query below
```
select * from Demo_t where NVL(End_date,sysdate+3) > sysdate;
```
No doubt, the query is giving the expected result, but I want to make sure that I am not missing something or that a better solution exists. It will help me understand the Oracle database better.
Thanks,
|
If `End_Date` is null then you include that row in your results. Is that what you want?
If so you could write it like
```
SELECT *
FROM DEMO_T
WHERE END_DATE > SYSDATE OR END_DATE IS NULL;
```
If you don't want that, could try
```
SELECT *
FROM DEMO_T
WHERE END_DATE > SYSDATE;
```
In your code:
`NVL(End_date, sysdate+3)` evaluates `sysdate + 3` for each row in your table, which isn't a good idea if you can simply check `End_Date` for null.
Edit: as @Thorsten Kettner said, there is only one `sysdate` in the query, so it won't be recalculated for each row.
|
Your query is fine. You can use that or check for NULL explicitly with `(end_date > sysdate or end_date is null)`. Both do the same, and while some people prefer the former, others prefer the latter, and still others always use `DECODE` to check nullable columns. Use what you consider most readable and stick to one way in all your queries (or in a team: use what your colleagues use). Stay consistent.
There is one way, though, you could speed up such queries: use a high fix date instead of `SYSDATE+3`, usually something like `DATE'9999-12-31'`. Then create a function index on this expression.
```
create index idx_end_date on demo_t ( nvl(end_date, date'9999-12-31') );
```
The matching query:
```
select * from demo_t where nvl(end_date, date'9999-12-31') > sysdate;
```
As the index covers *all* dates, it is much more likely to be used than a mere index on `end_date`. This can speed up queries immensely.
By the way: Be aware that `SYSDATE` contains a time part. If `end_date` only contains a date, then you may want to use `TRUNC(SYSDATE)` in the comparison.
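SQLite also supports expression indexes, so the same trick can be sketched there (a minimal illustration with assumed table and column names; Oracle's `NVL` becomes `coalesce`, and dates are ISO strings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo_t (id INTEGER PRIMARY KEY, end_date TEXT)")

# Index the COALESCE expression, mapping NULL to a high fixed date.
con.execute("CREATE INDEX idx_end_date "
            "ON demo_t (coalesce(end_date, '9999-12-31'))")

con.executemany("INSERT INTO demo_t (end_date) VALUES (?)",
                [("2010-01-01",), (None,), ("2030-01-01",)])

# The query must use the exact indexed expression for the index to apply.
rows = con.execute("""
    SELECT id FROM demo_t
    WHERE coalesce(end_date, '9999-12-31') > date('now')
    ORDER BY id
""").fetchall()
```

The open-ended (NULL) row and the future-dated row both qualify; the expired one does not.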
|
Oracle Query - Select row having end_date (may be NULL/EMPTY) greater than sysdate
|
[
"",
"sql",
"oracle",
""
] |
I was asked to alter a query to work with data from a given date selection instead of just the current month. The query should get the average sales per hour during that date range. It appears to work just fine when selecting one month of data, but when I try to go over a month, the averages appear to be higher than they ought to be.
I think the problem may have to do with grouping by the day, since the day would be duplicated when the data spans more than a month, but how would I go about fixing it? Thanks in advance.
```
DECLARE @Start DATETIME
DECLARE @End DATETIME
SET @Start = '6/15/2015'
SET @End = '8/15/2015'
SELECT TheHour, AVG(TheCount) AS SalesPerHour
FROM
(SELECT DATEPART(DAY, DateTimeCreated) AS TheDay,
DATEPART(HOUR, DateTimeCreated) AS TheHour,
COUNT(*) AS TheCount
FROM OrderHeader
WHERE Deleted = 0
AND OrderType = 1
AND BranchID = 4
AND BackOrderedFromID IS NULL
AND DateTimeCreated >= @Start
AND DateTimeCreated < @End
GROUP BY DATEPART(DAY, DateTimeCreated), DATEPART(HOUR, DateTimeCreated)) AS T
GROUP BY TheHour
ORDER BY TheHour
```
SAMPLE DATA for 6/15/2015 to 7/15/2015
```
TheHour SalesPerHour
5 2
6 5
7 6
8 5
9 4
10 4
11 2
12 2
13 3
14 2
15 2
16 1
```
SAMPLE DATA for 7/15/2015 to 8/15/2015
```
TheHour SalesPerHour
5 1
6 7
7 6
8 5
9 4
10 4
11 4
12 2
13 4
14 2
15 1
```
SAMPLE DATA for 6/15/2015 to 8/15/2015 (most values are too high?)
```
TheHour SalesPerHour
5 2
6 10
7 11
8 8
9 7
10 6
11 5
12 3
13 5
14 4
15 2
16 1
```
|
Don't use `datepart(day)`. This gives the day of the month. When your time frame spans multiple months, `datepart(day)` returns the same value for different days (for instance, "1" on the first of any month).
Instead, simply cast the value to a `date` to remove the time component. The rest of the query remains the same:
```
SELECT TheHour, AVG(TheCount) AS SalesPerHour
FROM (SELECT CAST(DateTimeCreated as Date) AS TheDay,
DATEPART(HOUR, DateTimeCreated) AS TheHour,
COUNT(*) AS TheCount
FROM OrderHeader
WHERE Deleted = 0 AND OrderType = 1 AND BranchID = 4 AND
BackOrderedFromID IS NULL AND
          DateTimeCreated >= @Start AND
          DateTimeCreated < @End
GROUP BY CAST(DateTimeCreated as Date), DATEPART(HOUR, DateTimeCreated)
) dh
GROUP BY TheHour
ORDER BY TheHour;
```
Alternatively, you can do this without the double aggregation:
```
SELECT DATEPART(HOUR, DateTimeCreated) as TheHour,
(COUNT(*) * 1.0 /
COUNT(DISTINCT CAST(DateTimeCreated as Date))
) as SalesPerHour
FROM OrderHeader oh
WHERE Deleted = 0 AND OrderType = 1 AND BranchID = 4 AND
BackOrderedFromID IS NULL AND
      DateTimeCreated >= @Start AND
      DateTimeCreated < @End
GROUP BY DATEPART(HOUR, DateTimeCreated);
```
Also, note that `AVG()` of an integer value does an integer average. So, the average of 1 and 2 is 1 in SQL Server, not 1.5. In this version the query multiplies the count by 1.0 to get decimal places -- that may or may not be desirable.
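The day-of-month pitfall is easy to reproduce on a toy dataset; this sketch uses Python's sqlite3, with `strftime` standing in for the SQL Server functions and made-up sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (created TEXT)")
# One sale on June 15 and one on July 15 -- two different days, same hour.
con.executemany("INSERT INTO orders VALUES (?)",
                [("2015-06-15 09:30:00",), ("2015-07-15 09:45:00",)])

# Wrong: grouping by day-of-month merges June 15 and July 15 into one "day",
# so the inner count is 2 and the average is inflated.
wrong = con.execute("""
    SELECT hr, AVG(cnt) FROM (
        SELECT strftime('%d', created) AS dom,
               strftime('%H', created) AS hr,
               COUNT(*) AS cnt
        FROM orders GROUP BY dom, hr)
    GROUP BY hr
""").fetchone()

# Right: grouping by the full date keeps the two days separate,
# so each day contributes a count of 1.
right = con.execute("""
    SELECT hr, AVG(cnt) FROM (
        SELECT date(created) AS d,
               strftime('%H', created) AS hr,
               COUNT(*) AS cnt
        FROM orders GROUP BY d, hr)
    GROUP BY hr
""").fetchall()[0]
```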
|
To round a `datetime` down to its nearest whole hour, use `DATEADD` and `DATEDIFF` together:
```
DECLARE @Start DATETIME
DECLARE @End DATETIME
SET @Start = '6/15/2015'
SET @End = '8/15/2015'
SELECT DATEPART(hour,RoundedHour) as Hour, AVG(TheCount) AS SalesPerHour
FROM
(SELECT DATEADD(hour,DATEDIFF(hour,0,DateTimeCreated),0) as RoundedHour,
COUNT(*) AS TheCount
FROM OrderHeader
WHERE Deleted = 0
AND OrderType = 1
AND BranchID = 4
AND BackOrderedFromID IS NULL
AND DateTimeCreated >= @Start
AND DateTimeCreated < @End
GROUP BY DATEADD(hour,DATEDIFF(hour,0,DateTimeCreated),0)) AS T
GROUP BY DATEPART(hour,RoundedHour)
ORDER BY DATEPART(hour,RoundedHour)
```
That way you don't have to think about all of the larger components (day, month, year) that you'd also want to group by, for larger ranges.
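SQLite has no `DATEADD`/`DATEDIFF`, but the same "round down to the whole hour" idea can be illustrated with `strftime` (sample data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE oh (created TEXT)")
con.executemany("INSERT INTO oh VALUES (?)",
                [("2015-08-15 13:05:00",), ("2015-08-15 13:59:00",),
                 ("2015-08-15 14:01:00",)])

# Zeroing the minutes and seconds in the format string plays the role of
# DATEADD(hour, DATEDIFF(hour, 0, created), 0).
per_hour = con.execute("""
    SELECT strftime('%Y-%m-%d %H:00:00', created) AS rounded_hour, COUNT(*)
    FROM oh
    GROUP BY rounded_hour
    ORDER BY rounded_hour
""").fetchall()
```

Because the rounded value still carries the date, ranges longer than a month group correctly without extra day/month/year columns.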
|
Averages are too high when getting data from over a month
|
[
"",
"sql",
"sql-server",
""
] |
I have 2 tables with these columns:
Table A
```
Id | Name | Salary
1 | TEST1 | 100
2 | TEST2 | 200
3 | TEST3 | 300
```
Table B
```
Id | Name | Salary
1 | TEST1 | 100
2 | TEST2 | 200
4 | TEST4 | 400
```
I want to delete similar data from two tables (without using joins). When I query
```
SELECT *
FROM A
SELECT *
FROM B
```
I should get this result:
Table A
```
Id | Name | Salary
3 | TEST3 | 300
```
Table B
```
Id | Name | Salary
4 | TEST4 | 400
```
Any help would be greatly appreciated. Thanks in advance.
PS : I'm going to load the table with around 10 millions rows
|
You had better replace *"delete lots of rows"* with *"create a new table with only those rows remaining"*.
Very simple if your SQL Server version supports `EXCEPT`:
```
SELECT * INTO newA FROM a
EXCEPT
SELECT * FROM b
;
SELECT * INTO newB FROM b
EXCEPT
SELECT * FROM a
;
```
See [fiddle](http://sqlfiddle.com/#!6/001b5/5)
`EXCEPT` also simplifies `NULL` treatment.
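The `EXCEPT` approach can be tried end to end on the question's sample rows; here is a sketch using Python's sqlite3 (SQLite's `EXCEPT` behaves like SQL Server's here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INT, name TEXT, salary INT)")
con.execute("CREATE TABLE b (id INT, name TEXT, salary INT)")
con.executemany("INSERT INTO a VALUES (?,?,?)",
                [(1, "TEST1", 100), (2, "TEST2", 200), (3, "TEST3", 300)])
con.executemany("INSERT INTO b VALUES (?,?,?)",
                [(1, "TEST1", 100), (2, "TEST2", 200), (4, "TEST4", 400)])

# Build the "remaining rows" tables instead of deleting in place.
con.execute("CREATE TABLE new_a AS SELECT * FROM a EXCEPT SELECT * FROM b")
con.execute("CREATE TABLE new_b AS SELECT * FROM b EXCEPT SELECT * FROM a")

rows_a = con.execute("SELECT * FROM new_a").fetchall()
rows_b = con.execute("SELECT * FROM new_b").fetchall()
```

Only the rows unique to each table survive, matching the expected result in the question.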
|
Use `NOT EXISTS`
```
SELECT *
FROM a
WHERE NOT EXISTS (SELECT 1
FROM b
WHERE a.id = b.id)
SELECT *
FROM b
WHERE NOT EXISTS (SELECT 1
FROM a
WHERE a.id = b.id)
```
To compare all the fields, use `EXCEPT`:
```
SELECT Id, Name, Salary FROM A
EXCEPT
SELECT Id, Name, Salary FROM B
SELECT Id, Name, Salary FROM B
EXCEPT
SELECT Id, Name, Salary FROM A
```
To delete the records from `tablea` use the below query
```
WITH cte
AS (SELECT *
FROM tablea a
WHERE EXISTS (SELECT 1
FROM tableb b
WHERE a.id = b.id
AND a.NAME = b.NAME
AND a.salary = b.salary))
DELETE FROM cte
SELECT *
FROM tablea
```
Before deleting the data from `tableA`, insert the data into a temp table so you can refer to it when deleting data from `tableB`.
|
Delete equal data from two tables in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-delete",
""
] |
I am using Oracle 10g. I want to make a SQL procedure that returns multiple values; I am using `IN` and `OUT` parameters in my procedure.
```
create or replace procedure proc_mull_val
(
e_id in chr_emgt_employee.employee_code%type,
head_id out chr_emgt_employee.employee_code%type,
zone_id out chr_emgt_employee.employee_code%type
)
is
begin
SELECT (SELECT employee_code
FROM chr_emgt_employee
WHERE EMPLOYEE_ID = emgt1.SUPERVISOR_EMP_ID)
, (SELECT employee_code
FROM chr_emgt_employee
WHERE employee_id = EMGT2.SUPERVISOR_EMP_ID)
INTO head_id,zone_id
FROM chr_emgt_employee emgt1, chr_emgt_employee emgt2
WHERE EMGT2.employee_id = emgt1.SUPERVISOR_EMP_ID
AND emgt1.employee_code = e_id;
end;
```
The procedure is successfully created, but when I execute this
```
declare
head_id chr_emgt_employee.employee_code%type;
zone_id chr_emgt_employee.employee_code%type;
begin
execute proc_mull_val(151018,head_id,zone_id);
end;
```
I face this error when I execute it:
***PLS-00103: Encountered the symbol "PROC\_MULL\_VAL" when expecting one of the following:
:= . ( @ % ; immediate
The symbol ":=" was substituted for "PROC\_MULL\_VAL" to continue.***
If someone has a better suggestion about this query or my question, please let me know.
|
I think your procedure has no issue; just the way you call it needs to change:
```
DECLARE
head_id varchar2 (20);
zone_id varchar2 (20);
BEGIN
proc_mull_val ('151018', head_id, zone_id);
DBMS_OUTPUT.put_line (head_id || '..... ' || zone_id);
END;
```
Just try this and tell me; if it does not solve the problem, change the data type and try again.
|
`EXECUTE` is a [SQL\*Plus command](https://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12022.htm), thus you cannot use it inside a PL/SQL anonymous block.
|
How Sql Procedure return mulltiple values with IN OUT :PLS-00103 error
|
[
"",
"sql",
"oracle",
"stored-procedures",
"plsql",
""
] |
I have a query that will select all records from my database based on the month. For example, I want to select all records for the month of January. The month() function doesn't work for me. The `ComboBox` has a value of month name ("January", "February", etc.). I'm using VB 2010 and my database is Microsoft Access.
```
query = "SELECT empid, empname, department, empdate, timeinam, " & _
"timeoutam, lateam, timeinpm, timeoutpm, latepm, thw " & _
"FROM tbldtr where Month(empdate) =" & cmbMonth.Text
```
|
Well, supposing that you have your combobox items sorted in *monthly order* (Jan, Feb, March, ...), then you could write your query as
```
query = "SELECT empid, empname, department, empdate, timeinam, " & _
"timeoutam,lateam, timeinpm, timeoutpm,latepm,thw " & _
"FROM tbldtr where Month(empdate) =" & cmbMonth.SelectedIndex + 1
```
[ComboBox.SelectedIndex](https://msdn.microsoft.com/en-us/library/system.windows.forms.combobox.selectedindex%28v=vs.110%29.aspx) property is an integer that tells you the index of the current selected item. This property starts at zero, so adding one, matches the number returned by the VBA Month function
Of course, this means that you have somewhere, before this line, a check that informs your user to select something from the combobox and the combobox itself should have `DropDownStyle` set to [ComboBoxStyle.DropDownList](https://msdn.microsoft.com/en-us/library/system.windows.forms.comboboxstyle(v=vs.110).aspx) to avoid user inputs its own 'month'
**WARNING:** While, in this context (converting an integer to a string), there is not much concern about SQL injection, it is better not to indulge in these practices and to always use a parameterized query.
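A parameterized version of the month filter can be sketched in Python with sqlite3 (dates stored as ISO strings and `strftime` standing in for Access's `Month()`; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbldtr (empname TEXT, empdate TEXT)")
con.executemany("INSERT INTO tbldtr VALUES (?, ?)",
                [("Ana", "2015-01-10"), ("Ben", "2015-02-03"),
                 ("Cy", "2015-01-22")])

selected_index = 0  # e.g. "January" selected in the combobox

# The month number travels as a bound parameter, never as string-concatenated SQL.
rows = con.execute(
    "SELECT empname FROM tbldtr"
    " WHERE CAST(strftime('%m', empdate) AS INTEGER) = ?",
    (selected_index + 1,),
).fetchall()
```

Only the two January rows come back, and the `SelectedIndex + 1` mapping is the same as in the VB answer above.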
|
Admittedly this is no different than other responses, but I wanted to stress checking the selected index before executing the query, along with using parameters, as shown below. This uses the OleDb provider; the same applies to all managed data providers, just change to the correct one, e.g. for SQL Server use SqlClient, etc.
Load ComboBox
```
cmbMonth.Items.AddRange(
(
From M In System.Globalization.CultureInfo.CurrentCulture.DateTimeFormat.MonthNames
Where Not String.IsNullOrEmpty(M)).ToArray
)
```
Sample to run statement
```
If cmbMonth.SelectedIndex > -1 Then
Using cn As New OleDb.OleDbConnection With
{
.ConnectionString = "Your connection string"
}
Using cmd As New OleDb.OleDbCommand With
{
.Connection = cn,
.CommandText =
"SELECT empid, empname, department, empdate, timeinam, timeoutam, lateam, timeinpm, timeoutpm, latepm, thw " &
"FROM tbldtr where Month(empdate) = @SelectedMonth"
}
cmd.Parameters.Add(New OleDb.OleDbParameter With
{
.ParameterName = "@SelectedMonth",
.DbType = DbType.Int32,
.Value = cmbMonth.SelectedIndex + 1
}
)
' continue
End Using
End Using
End If
```
|
Query selecting month only in date
|
[
"",
"sql",
"vb.net",
"ms-access-2010",
""
] |
It seems simple but I struggle with it. The question is: how can I lock, for example, a single row from the table **JOBS** with **JOB\_ID = IT\_PROG**? I want to do it because I want to test an exception from a procedure, which displays a message when you try to update a locked row. Thanks in advance for your time.
|
You may lock the record as described in other answers, but you will **not see any exception** while UPDATEing this row.
The `UPDATE` statement will **wait until the lock will be released**, i.e. the session with `SELECT ... FOR UPDATE` commits. After that the UPDATE will be performed.
The only exception you can manage is a DEADLOCK, i.e.
```
Session1 SELECT FOR UPDATE record A
Session2 SELECT FOR UPDATE record B
Session1 UPDATE record B --- wait as record locked
Session2 UPDATE record A --- deadlock as 1 is waiting on 2 and 2 waiting on 1
```
|
AskTom has an example of what you're trying to do:
<https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4515126525609>
From AskTom:
```
declare
resource_busy exception;
pragma exception_init( resource_busy, -54 );
success boolean := False;
begin
for i in 1 .. 3
loop
exit when (success);
begin
select xxx from yyy where .... for update NOWAIT;
success := true;
exception
when resource_busy then
dbms_lock.sleep(1);
end;
end loop;
if ( not success ) then
raise_application_error( -20001, 'row is locked by another session' );
end if;
end;
```
This attempts to get a lock, and if it can't get one (i.e. ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired is raised) it will raise an error.
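Stripped of the Oracle specifics, the AskTom block is a bounded retry loop. A generic Python sketch of the same pattern (the `try_lock` callback is hypothetical, standing in for `SELECT ... FOR UPDATE NOWAIT`):

```python
import time

def acquire_with_retry(try_lock, attempts=3, delay=0.01):
    """Call try_lock() up to `attempts` times, sleeping between tries.

    try_lock() returns True on success and False when the row is busy,
    mirroring ORA-00054 being raised by FOR UPDATE NOWAIT.
    """
    for _ in range(attempts):
        if try_lock():
            return True
        time.sleep(delay)
    # All attempts exhausted: surface the failure to the caller,
    # like raise_application_error(-20001, ...) in the PL/SQL version.
    raise RuntimeError("row is locked by another session")

# Simulate a lock that only frees up on the third attempt.
state = {"tries": 0}

def try_lock():
    state["tries"] += 1
    return state["tries"] >= 3

ok = acquire_with_retry(try_lock)
```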
|
How can I lock a single row in Oracle SQL
|
[
"",
"sql",
"oracle",
"row",
"locked",
""
] |
I am working with a database that has a field named "From", and any time I try to write a query like this:
```
Select [Tab.From] AS FromAddress From MyAddresses Tab
```
I get invalid column name "Tab.From".
However if I do `Select [From]` it works.
I believe it has something to do with "From" being a keyword in SQL. Is there any way I can access it in a select statement?
|
It is a truly bad column name (and yes, it certainly is a reserved word):
```
SELECT t.[From] AS FromAddress
FROM MyAddresses AS t
```
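The bracket placement can be checked quickly in SQLite, which also accepts `[bracketed]` identifiers for SQL Server/Access compatibility (sample table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyAddresses ([From] TEXT)")
con.execute("INSERT INTO MyAddresses ([From]) VALUES ('a@example.com')")

# Qualify as alias.[From], not [alias.From]: the brackets wrap ONE identifier,
# so putting the alias inside them creates a single bogus column name.
row = con.execute(
    "SELECT t.[From] AS FromAddress FROM MyAddresses AS t"
).fetchone()
```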
|
Try:
```
SELECT [From] AS FromAddress FROM MyTable
```
|
How to select "From" field with qualifier in sql server
|
[
"",
"sql",
"sql-server-2008",
""
] |
Similar to [this question](https://stackoverflow.com/q/11231544/327026), how can I find if a NULL value exists in an array?
Here are some attempts.
```
SELECT num, ar, expected,
ar @> ARRAY[NULL]::int[] AS test1,
NULL = ANY (ar) AS test2,
array_to_string(ar, ', ') <> array_to_string(ar, ', ', '(null)') AS test3
FROM (
SELECT 1 AS num, '{1,2,NULL}'::int[] AS ar, true AS expected
UNION SELECT 2, '{1,2,3}'::int[], false
) td ORDER BY num;
num | ar | expected | test1 | test2 | test3
-----+------------+----------+-------+-------+-------
1 | {1,2,NULL} | t | f | | t
2 | {1,2,3} | f | f | | f
(2 rows)
```
Only a trick with [`array_to_string`](http://www.postgresql.org/docs/9.1/static/functions-array.html#ARRAY-FUNCTIONS-TABLE) shows the expected value. Is there a better way to test this?
|
## Postgres 9.5 or later
Use [**`array_position()`**](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE). Basically:
```
SELECT array_position(arr, NULL) IS NOT NULL AS array_has_null
```
See demo below.
### Postgres 9.3 or later
You can test with the built-in functions [`array_remove()`](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE) or [`array_replace()`](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE).
### Postgres 9.1 or any version
If you ***know*** a single element that can never exist in your arrays, you can use this *fast* expression. Say, you have an array of positive numbers, and `-1` can never be in it:
```
-1 = ANY(arr) IS NULL
```
Related answer with detailed explanation:
* [Is array all NULLs in PostgreSQL](https://stackoverflow.com/questions/6852312/is-array-all-nulls-in-postgresql/22257626#22257626)
If you *cannot be absolutely sure*, you *could* fall back to one of the expensive but *safe* methods with `unnest()`. Like:
```
(SELECT bool_or(x IS NULL) FROM unnest(arr) x)
```
or:
```
EXISTS (SELECT 1 FROM unnest(arr) x WHERE x IS NULL)
```
But you can have *fast and safe* with a `CASE` expression. Use an unlikely number and fall back to the safe method if it should exist. You may want to treat the case `arr IS NULL` separately. See demo below.
### Demo
```
SELECT num, arr, expect
, -1 = ANY(arr) IS NULL AS t_1 -- 50 ms
, (SELECT bool_or(x IS NULL) FROM unnest(arr) x) AS t_2 -- 754 ms
, EXISTS (SELECT 1 FROM unnest(arr) x WHERE x IS NULL) AS t_3 -- 521 ms
, CASE -1 = ANY(arr)
WHEN FALSE THEN FALSE
WHEN TRUE THEN EXISTS (SELECT 1 FROM unnest(arr) x WHERE x IS NULL)
ELSE NULLIF(arr IS NOT NULL, FALSE) -- catch arr IS NULL -- 55 ms
-- ELSE TRUE -- simpler for columns defined NOT NULL -- 51 ms
END AS t_91
, array_replace(arr, NULL, 0) <> arr AS t_93a -- 99 ms
, array_remove(arr, NULL) <> arr AS t_93b -- 96 ms
, cardinality(array_remove(arr, NULL)) <> cardinality(arr) AS t_94 -- 81 ms
, COALESCE(array_position(arr, NULL::int), 0) > 0 AS t_95a -- 49 ms
, array_position(arr, NULL) IS NOT NULL AS t_95b -- 45 ms
, CASE WHEN arr IS NOT NULL
THEN array_position(arr, NULL) IS NOT NULL END AS t_95c -- 48 ms
FROM (
VALUES (1, '{1,2,NULL}'::int[], true) -- extended test case
, (2, '{-1,NULL,2}' , true)
, (3, '{NULL}' , true)
, (4, '{1,2,3}' , false)
, (5, '{-1,2,3}' , false)
, (6, NULL , null)
) t(num, arr, expect);
```
Result:
```
num | arr | expect | t_1 | t_2 | t_3 | t_91 | t_93a | t_93b | t_94 | t_95a | t_95b | t_95c
-----+-------------+--------+--------+------+-----+------+-------+-------+------+-------+-------+-------
1 | {1,2,NULL} | t | t | t | t | t | t | t | t | t | t | t
2 | {-1,NULL,2} | t | f --!! | t | t | t | t | t | t | t | t | t
3 | {NULL} | t | t | t | t | t | t | t | t | t | t | t
4 | {1,2,3} | f | f | f | f | f | f | f | f | f | f | f
5 | {-1,2,3} | f | f | f | f | f | f | f | f | f | f | f
6 | NULL | NULL | t --!! | NULL | f | NULL | NULL | NULL | NULL | f | f | NULL
```
Note that `array_remove()` and `array_position()` are not allowed for **multi-dimensional arrays**. All expressions to the right of `t_93a` only work for 1-dimensional arrays.
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=8b0375321dd98b6f0b14e42998f4f72b)* - Postgres 13, with more tests
Old [sqlfiddle](http://sqlfiddle.com/#!17/324f8/3)
### Benchmark setup
The added times are from a **benchmark test with 200k rows in Postgres 9.5**. This is my setup:
```
CREATE TABLE t AS
SELECT row_number() OVER() AS num
, array_agg(elem) AS arr
, bool_or(elem IS NULL) AS expected
FROM (
SELECT CASE WHEN random() > .95 THEN NULL ELSE g END AS elem -- 5% NULL VALUES
, count(*) FILTER (WHERE random() > .8)
OVER (ORDER BY g) AS grp -- avg 5 element per array
FROM generate_series (1, 1000000) g -- increase for big test case
) sub
GROUP BY grp;
```
### Function wrapper
For **repeated use**, I would create a function in Postgres **9.5** like this:
```
CREATE OR REPLACE FUNCTION f_array_has_null (anyarray)
RETURNS bool
LANGUAGE sql IMMUTABLE PARALLEL SAFE AS
'SELECT array_position($1, NULL) IS NOT NULL';
```
`PARALLEL SAFE` only for Postgres 9.6 or later.
Using a [polymorphic](https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC) input type this works for *any* array type, not just `int[]`.
Make it `IMMUTABLE` to allow performance optimization and index expressions.
* [Does PostgreSQL support "accent insensitive" collations?](https://stackoverflow.com/questions/11005036/does-postgresql-support-accent-insensitive-collations/11007216#11007216)
But don't make it `STRICT`, which would disable "function inlining" and impair performance because `array_position()` is not `STRICT` itself. See:
* [Function executes faster without STRICT modifier?](https://stackoverflow.com/questions/8455177/function-executes-faster-without-strict-modifier)
If you need to catch the case `arr IS NULL`:
```
CREATE OR REPLACE FUNCTION f_array_has_null (anyarray)
RETURNS bool
LANGUAGE sql IMMUTABLE PARALLEL SAFE AS
'SELECT CASE WHEN $1 IS NOT NULL
THEN array_position($1, NULL) IS NOT NULL END';
```
For Postgres **9.1** use the `t_91` expression from above. The rest applies unchanged.
Closely related:
* [How to determine if NULL is contained in an array in Postgres?](https://stackoverflow.com/questions/22695015/how-to-determine-if-null-is-contained-in-an-array-in-postgres)
|
PostgreSQL's [`UNNEST()`](http://www.postgresql.org/docs/current/static/functions-array.html#ARRAY-FUNCTIONS-TABLE) function is a better choice. You can write a simple function like the one below to check for NULL values in an array.
```
create or replace function NULL_EXISTS(val anyelement) returns boolean as
$$
select exists (
select 1 from unnest(val) arr(el) where el is null
);
$$
language sql
```
For example,
```
SELECT NULL_EXISTS(array [1,2,NULL])
,NULL_EXISTS(array [1,2,3]);
```
Result:
```
null_exists null_exists
----------- --------------
t f
```
So, You can use `NULL_EXISTS()` function in your query like below.
```
SELECT num, ar, expected,NULL_EXISTS(ar)
FROM (
SELECT 1 AS num, '{1,2,NULL}'::int[] AS ar, true AS expected
UNION SELECT 2, '{1,2,3}'::int[], false
) td ORDER BY num;
```
|
Check if NULL exists in Postgres array
|
[
"",
"sql",
"arrays",
"postgresql",
"null",
"postgresql-9.1",
""
] |
I am trying to use a case statement to determine which select statement should be executed.
I want to check the third character of the variable `:SSN`. If the third character is a dash, use the first SQL statement to mask the first 5 numbers (`XX-XXX1234`). Otherwise, use the second SQL statement to mask the first 5 numbers (`XXX-XX-1234`).
What is the correct syntax for this?
```
select case
when substr(:SSN, 3,1) = '-' then
SELECT 'XX-XXX'||substr(:SSN, 7,4) INTO :MaskedSSN FROM DUAL
else
SELECT 'XXX-XX-'||substr(:SSN, 8,4) INTO :MaskedSSN FROM DUAL
end
```
|
You can use SQL `CASE` *expression* for this
```
select case
when substr(:SSN, 3,1) = '-' then
'XX-XXX'||substr(:SSN, 7,4)
else
'XXX-XX-'||substr(:SSN, 8,4)
end
INTO :MaskedSSN
from dual
```
|
> Just an alternative to try. Enjoy.
```
SELECT DECODE(SUBSTR(:SSN, 3,1),'-','XX-XXX'
||SUBSTR(:SSN, 7,4),'XXX-XX-'
||SUBSTR(:SSN, 8,4))
INTO lv_var
FROM dual;
```
|
Using a PLSQL CASE statement
|
[
"",
"sql",
"oracle",
"plsql",
"case",
""
] |
I'm trying to show four columns in the query below: a count of total responses, a count of incorrect responses, and a `%` incorrect based off the two previous columns. The results will be grouped by the `question_id`.
```
SELECT
COUNT(correct) as total_resp,
SUM(case when correct = 'f' then 1 else 0 end) as incor_resp,
(incor_resp / total_resp) as percent_incor,
question_id
FROM answers
WHERE student_id IN (
SELECT id FROM students
WHERE lang = 'es'
LIMIT 50
)
GROUP BY question_id;
```
My question is, why doesn't the `percent_incor` definition in the above work? Do I not have access to `total_resp` and `incor_resp` to be able to perform operations off them for a 3rd field definition? If not, how would I be able to include this field in my output?
Thanks!
|
It's not possible to refer to other fields via their aliases in the same select. Either you need to repeat the expressions, or you can wrap the select in another select and compute it there:
```
SELECT total_resp, incor_resp, incor_resp/total_resp as percent_incor
FROM (
SELECT
COUNT(correct) as total_resp,
      SUM(case when correct = 'f' then 1 else 0 end) as incor_resp,
question_id
FROM answers
WHERE student_id IN (
SELECT id FROM students
WHERE lang = 'es'
LIMIT 50
)
GROUP BY question_id
) t;
```
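Here is the wrapped-subquery pattern run end to end on a tiny dataset, using Python's sqlite3 (the data is made up; `1.0 *` forces a non-integer division):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE answers (question_id INT, correct TEXT)")
con.executemany("INSERT INTO answers VALUES (?, ?)",
                [(1, "t"), (1, "f"), (1, "f"), (2, "t")])

# The aliases total_resp/incor_resp only become visible to the OUTER select,
# which is why the percentage has to be computed one level up.
rows = con.execute("""
    SELECT question_id, total_resp, incor_resp,
           1.0 * incor_resp / total_resp AS percent_incor
    FROM (
        SELECT question_id,
               COUNT(*) AS total_resp,
               SUM(CASE WHEN correct = 'f' THEN 1 ELSE 0 END) AS incor_resp
        FROM answers
        GROUP BY question_id
    )
    ORDER BY question_id
""").fetchall()
```

Question 1 has 2 incorrect out of 3 responses (about 0.67), question 2 has none.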
|
If you want to use the aliases, you have to access them through a subquery:
```
SELECT total_resp, incor_resp, incor_resp / total_resp as percent_incor
FROM (
SELECT
COUNT(correct) as total_resp,
SUM(case when correct = 'f' then 1 else 0 end) as incor_resp,
question_id
FROM answers
WHERE student_id IN (
SELECT id FROM students
WHERE lang = 'es'
LIMIT 50
)
GROUP BY question_id
) T
```
|
Perform operation on 2 aggregated fields in SQL
|
[
"",
"mysql",
"sql",
"rdbms",
""
] |
For example, I have a Customers table with the columns FirstName and LastName. I assume I have to concatenate the two columns in order to find the longest customer name. How would I go about this?
|
```
SELECT TOP 1 ID, FirstName, LastName, LEN(FirstName + LastName)
FROM Customers
ORDER BY LEN(FirstName + LastName) DESC;
```
Where ID is your primary key. This will give you the ID, first name, last name, and length of the longest concatenated name.
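The same idea in SQLite, where `TOP 1` becomes `LIMIT 1` and `LEN` becomes `length` (sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customers"
            " (ID INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT)")
con.executemany("INSERT INTO Customers (FirstName, LastName) VALUES (?, ?)",
                [("Jo", "Li"), ("Alexandra", "Fitzgerald"), ("Sam", "Po")])

# Sort by the length of the concatenated name and keep the top row.
row = con.execute("""
    SELECT ID, FirstName, LastName,
           length(FirstName || LastName) AS name_len
    FROM Customers
    ORDER BY name_len DESC
    LIMIT 1
""").fetchone()
```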
|
This will give you the length of the longest name in the DB:
```
SELECT MAX(LEN(FirstName) + LEN(LastName)) AS MaxLen
from Customers
```
|
Find the Max Length of two Columns Combined
|
[
"",
"sql",
"sql-server",
""
] |
According to the Redshift WITH Clause [documentation](http://docs.aws.amazon.com/redshift/latest/dg/r_WITH_clause.html), you can use a WITH clause with a `INSERT INTO...SELECT` statement. However when testing this, I am getting the below error. Is this not possible, or do I have the syntax wrong?
```
CREATE TABLE TestCTEInsert (SomeTimestamp TIMESTAMP);
WITH CTE AS
(SELECT GETDATE() as SomeTimestamp)
INSERT INTO TestCTEInsert
(SomeTimestamp) SELECT SomeTimestamp from CTE;
```
> ERROR: 42601: syntax error at or near "insert"
Interestingly, it does support inserting into a new table i.e.
```
WITH CTE AS
(SELECT GETDATE() as SomeTimestamp)
INSERT SomeTimestamp INTO NewTable
SELECT SomeTimestamp from CTE;
```
> The command completed successfully (1 rows affected)
**EDIT:** Just to confirm, I get the same error when using an `INTEGER` column rather than `TIMESTAMP`:
```
CREATE TABLE TestCTE (SomeInt INTEGER);
WITH CTE AS
(SELECT 1 as SomeInt)
INSERT INTO TestCTEInsert
SELECT SomeInt from CTE;
```
> ERROR: 42601: syntax error at or near "insert"
|
Try putting the CTE inside the insert (not sure if that defeats the purpose):
```
INSERT INTO TestCTEInsert
WITH CTE AS
(SELECT CURRENT_TIMESTAMP as SomeTimestamp)
SELECT SomeTimestamp from CTE;
```
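As a portability footnote, engines disagree on where the `WITH` may go. SQLite, for instance, wants it in front of the whole statement; a runnable sketch with Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TestCTEInsert (SomeInt INTEGER)")

# In SQLite the WITH clause precedes the whole INSERT,
# whereas Redshift accepts it between INSERT INTO and the SELECT.
con.execute("""
    WITH CTE AS (SELECT 1 AS SomeInt)
    INSERT INTO TestCTEInsert (SomeInt)
    SELECT SomeInt FROM CTE
""")

rows = con.execute("SELECT SomeInt FROM TestCTEInsert").fetchall()
```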
|
The `;` **terminates** a statement, so it needs to go at the end of the statement, not somewhere in the middle:
You can do this in two ways, either use a `create table as select`
```
create table TestCTEInsert
as
WITH CTE AS
(
SELECT current_timestamp as SomeTimestamp
)
SELECT SomeTimestamp
from CTE; -- ; only at the end
```
Or in two steps:
```
CREATE TABLE TestCTEInsert (SomeTimestamp TIMESTAMP); -- end this with a ;
insert into TestCTEInsert
WITH CTE AS
(
SELECT current_timestamp as SomeTimestamp
)
SELECT SomeTimestamp
from CTE; -- ; only at the end
```
The above runs on a vanilla Postgres installation; I don't have access to RDS.
|
Redshift INSERT INTO TABLE from CTE
|
[
"",
"sql",
"amazon-web-services",
"common-table-expression",
"amazon-redshift",
""
] |
I have two queries:
> Query for Table 1:
```
select a.[Kode AK] as KodeAK, b.Nama as NamaAK from RC_Member a left join
(SELECT Kode_AK, Nama FROM OPENQUERY([ARMS],
' select Kode_AK,Nama, Tgl_insert from
(select ROW_NUMBER () over ( partition by Kode_AK order by Tgl_insert desc ) rn,
Kode_AK, Nama, Tgl_insert from KEANGGOTAAN.dbo.LOG_NAMA ) A where rn = 1'))
b on a.[Kode AK] = b.Kode_AK
```
> Query for Table 2:
```
select a.secCode as KodeStock,c.SEC_DSC as NamaStock from openquery(PDC_MYAPPS,'select * from mii.secReq')a left join (
select * from RC_Saham
)b on a.secCode=b.kode left join (select * from openquery([ARMS],
'select CODE_BASE_SEC,SEC_DSC from REFERENCES_DATA_DEV.dbo.PRODUCT_EQUITY'))c on a.secCode=c.CODE_BASE_SEC
where b.[RC Sec (%)] is not null and b.[RC Sec (%)] ='0' and a.riskCharge !='0'
```
I want to cross join the result of the `first query` with the result of the `second query`. I put a cross join in the middle of those queries but it doesn't work.
How can I do it? Thanks
|
*A cross join can be used as below to combine two selected-table queries:*
```
SELECT * FROM (
SELECT a.[Kode AK] AS KodeAK
,b.Nama AS NamaAK
FROM RC_Member a
LEFT JOIN (
SELECT Kode_AK
,Nama
FROM OPENQUERY(
[ARMS]
,
' select Kode_AK,Nama, Tgl_insert from
(select ROW_NUMBER () over ( partition by Kode_AK order by Tgl_insert desc ) rn,
Kode_AK, Nama, Tgl_insert from KEANGGOTAAN.dbo.LOG_NAMA ) A where rn = 1'
)
)
b
ON a.[Kode AK] = b.Kode_AK ) Query1
CROSS JOIN
(SELECT a.secCode AS KodeStock
,c.SEC_DSC AS NamaStock
FROM OPENQUERY(PDC_MYAPPS ,'select * from mii.secReq')a
LEFT JOIN (
SELECT *
FROM RC_Saham
)b
ON a.secCode = b.kode
LEFT JOIN (
SELECT *
FROM OPENQUERY(
[ARMS]
,
'select CODE_BASE_SEC,SEC_DSC from REFERENCES_DATA_DEV.dbo.PRODUCT_EQUITY'
)
)c
ON a.secCode = c.CODE_BASE_SEC
WHERE b.[RC Sec (%)] IS NOT NULL
AND b.[RC Sec (%)] = '0'
AND a.riskCharge!= '0') Query2
```
Also, you need to learn how to use [Cross Join](http://www.tutorialspoint.com/sql/sql-cartesian-joins.htm)
|
How did you write the CROSS JOIN query? Something like this should work.
```
SELECT * FROM
(type your first query here) q1
CROSS JOIN
(type your second query here) q2;
```
|
Query using a cross join from two selected tables in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have a SQL table that contains three columns:
```
userId
userName
item
```
and I created this SQL query which counts all item types for one user:
```
select
count(ItemID) as 'count of all items types',
userId,
userName
from
userTable
where
ItemID in (2, 3, 4)
and userId = 1
group by
userId, userName
```
The result will be like this:
```
+--------+----------+--------------------------+
| userId | userName | count of all items types |
+--------+----------+--------------------------+
| 1 | kim | 25 |
```
and I am looking for a way to separate the counts per item type, so the result should be like this:
```
+--------+----------+----------------+----------------+-----------------+
| userId | userName | count of item1 | count of item2 | count of item3 |
+--------+----------+----------------+----------------+-----------------+
| 1 | kim | 10 | 10 | 5 |
```
|
```
SELECT
userID,
userName,
SUM(CASE WHEN ItemID = 2 THEN 1 ELSE 0 END) AS count_of_item1,
SUM(CASE WHEN ItemID = 3 THEN 1 ELSE 0 END) AS count_of_item2,
SUM(CASE WHEN ItemID = 4 THEN 1 ELSE 0 END) AS count_of_item3
FROM
My_Table
GROUP BY
userID,
userName
```
|
This is called conditional aggregation. Use CASE for this.
With COUNT:
```
select
count(case when ItemID = 1 then 1 end) as count_item1,
count(case when ItemID = 2 then 1 end) as count_item2,
count(case when ItemID = 3 then 1 end) as count_item3
...
```
(`then 1` could also be anything else except null, e.g. `then 'count me'`. This works because `COUNT` counts non-null values and when omitting the `ELSE` in `CASE WHEN` you get null. You could also explicitly add `else null`.)
Or with SUM:
```
select
sum(case when ItemID = 1 then 1 else 0 end) as count_item1,
sum(case when ItemID = 2 then 1 else 0 end) as count_item2,
sum(case when ItemID = 3 then 1 else 0 end) as count_item3
...
```
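Either variant is easy to sanity-check outside the database. A minimal sketch using Python's built-in `sqlite3` (the table and sample values are invented to mirror the question):

```python
import sqlite3

# In-memory database with a throwaway table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE userTable (userId INT, userName TEXT, ItemID INT)")
rows = [(1, "kim", 2)] * 10 + [(1, "kim", 3)] * 10 + [(1, "kim", 4)] * 5
conn.executemany("INSERT INTO userTable VALUES (?, ?, ?)", rows)

# Conditional aggregation: one SUM(CASE ...) column per item type.
result = conn.execute("""
    SELECT userId, userName,
           SUM(CASE WHEN ItemID = 2 THEN 1 ELSE 0 END) AS count_item1,
           SUM(CASE WHEN ItemID = 3 THEN 1 ELSE 0 END) AS count_item2,
           SUM(CASE WHEN ItemID = 4 THEN 1 ELSE 0 END) AS count_item3
    FROM userTable
    GROUP BY userId, userName
""").fetchone()
print(result)  # (1, 'kim', 10, 10, 5)
```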
|
SQL separate the count of one column
|
[
"",
"sql",
"database",
"count",
""
] |
I have a table Persons with the fields
```
pID , pName , pDateOfBirth , pCountry
```
There are 10 thousand or more people in the database.
**Question:** I want the TOP 10 countries (`pCountry`) by number of persons aged 40+.
My attempt:
```
select TOP 10 Count(pid) as ratio ,PCountry, datediff(pDOB, Date.Now) as
age where age > '40' Group By country
```
What I want is this:
```
Pak = 555
INDIA = 6666
USA= 88
Aus = 557
```
etc
|
If I understood correctly, you want a top of countries by number of persons aged 40+, so grouping should be by country only.
**Setup**
```
-- drop table Person
create table Person
(
pID INT NOT NULL IDENTITY(1, 1) CONSTRAINT PK_Person PRIMARY KEY,
pDateOfBirth DATE,
pCountry VARCHAR(3)
)
GO
INSERT INTO Person (pDateOfBirth, pCountry)
SELECT TOP 1000 DATEADD(DAY, message_id, '19740101'), 'BEL'
FROM sys.messages
GO
INSERT INTO Person (pDateOfBirth, pCountry)
SELECT TOP 1000 DATEADD(DAY, message_id, '19730101'), 'NED'
FROM sys.messages
GO
INSERT INTO Person (pDateOfBirth, pCountry)
SELECT TOP 1000 DATEADD(DAY, message_id, '19760101'), 'DEU'
FROM sys.messages
GO
INSERT INTO Person (pDateOfBirth, pCountry)
SELECT TOP 1000 DATEADD(DAY, message_id, '19750101'), 'ROM'
FROM sys.messages
GO
INSERT INTO Person (pDateOfBirth, pCountry)
SELECT TOP 1000 DATEADD(DAY, message_id, '19740615'), 'USA'
FROM sys.messages
GO
```
**Query**
```
DECLARE @today DATE = GETDATE()
DECLARE @age INT = 40
SELECT TOP 10 pCountry, COUNT(1) cnt
FROM Person
WHERE DATEDIFF(day, pDateOfBirth, @today) >= @age * 365.25
GROUP BY pCountry
ORDER BY cnt DESC
```
I have used `DATEDIFF` with the `day` option because using `year` performs the difference at year level only, so persons born on 1976-12-01 would still be included (which is not correct, demographically speaking).
Also, `365.25` instead of `365` is used to compensate for leap years. However, this may still lead to a 1-day error depending on the current date (I think this is more than reasonable when talking about age and large data).
**[edit]**
As correctly pointed out by `dnoeth`, date difference logic may be reverted and thus simplified. So, where condition becomes:
```
WHERE pDateOfBirth <= DATEADD(year, -40, @today)
```
|
This is too long for a comment.
If you want to measure age in years, then don't use `datediff()`. As is well documented, `datediff()` counts the number of time-unit ends between two dates. That is, it returns one for the difference between '2015-12-31' and '2015-01-01'. But it returns 0 for '2015-01-01' and '2015-12-31'.
Instead, use `dateadd()`:
```
where dob < dateadd(year, -40, getdate())
```
This applies to any solution you might choose.
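The "shift the reference date, then compare raw dates" idea is easy to sanity-check outside SQL; a rough Python sketch (the Feb-29 handling is my assumption about how the edge case would be treated):

```python
from datetime import date

def cutoff_40_years(today: date) -> date:
    """Equivalent of DATEADD(year, -40, @today): same month/day, 40 years back."""
    try:
        return today.replace(year=today.year - 40)
    except ValueError:  # Feb 29 on a non-leap target year
        return today.replace(year=today.year - 40, day=28)

today = date(2016, 6, 1)
cutoff = cutoff_40_years(today)
print(cutoff)  # 1976-06-01

# A person qualifies as 40+ when born on or before the cutoff.
assert date(1976, 6, 1) <= cutoff        # exactly 40 today -> included
assert not (date(1976, 6, 2) <= cutoff)  # one day short -> excluded
```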
|
Get the TOP 10 Countries have peoples age 40+ from their DOB ? SQL
|
[
"",
"sql",
"sql-server",
""
] |
I made this query but it gave me error just like in the title
```
INSERT INTO data_waktu_vaksinasi (id_binatang, id_vaksin, tanggal_vaksin, status_vaksin) VALUES
(1, 1, (SELECT DATE_ADD(max(tanggal_vaksin), INTERVAL 1 YEAR)) FROM data_waktu_vaksinasi, 'belum')
```
|
MySQL *does* allow the same table to be used for the source and target for `inserts`. You just need to use the correct syntax:
```
INSERT INTO data_waktu_vaksinasi (id_binatang, id_vaksin, tanggal_vaksin, status_vaksin)
SELECT 1, 1, DATE_ADD(max(tanggal_vaksin), INTERVAL 1 YEAR), 'belum'
FROM data_waktu_vaksinasi;
```
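As a quick illustration that reading the target table inside an `INSERT ... SELECT` is fine, here is a sketch via Python's `sqlite3` (SQLite stands in for MySQL, so `DATE_ADD(..., INTERVAL 1 YEAR)` becomes `DATE(..., '+1 years')`; the seed row is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE data_waktu_vaksinasi
                (id_binatang INT, id_vaksin INT, tanggal_vaksin TEXT, status_vaksin TEXT)""")
conn.execute("INSERT INTO data_waktu_vaksinasi VALUES (1, 1, '2015-03-01', 'sudah')")

# INSERT ... SELECT may read from its own target table; the VALUES form with a
# bare subquery is what triggers the error in the question.
conn.execute("""
    INSERT INTO data_waktu_vaksinasi (id_binatang, id_vaksin, tanggal_vaksin, status_vaksin)
    SELECT 1, 1, DATE(MAX(tanggal_vaksin), '+1 years'), 'belum'
    FROM data_waktu_vaksinasi
""")
row = conn.execute("SELECT tanggal_vaksin, status_vaksin FROM data_waktu_vaksinasi "
                   "ORDER BY tanggal_vaksin DESC LIMIT 1").fetchone()
print(row)  # ('2016-03-01', 'belum')
```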
|
Put the source table in `subquery` to fix this problem. `Mysql` does not allow same table in source and target for `DML` operation's
```
INSERT INTO data_waktu_vaksinasi
(id_binatang,
id_vaksin,
tanggal_vaksin,
status_vaksin)
SELECT 1,
1,
dt,
'belum'
FROM (SELECT Date_add(Max(tanggal_vaksin), interval 1 year) AS dt
FROM data_waktu_vaksinasi)a
```
|
table is specified twice both as a target for INSERT and as separate source of data
|
[
"",
"mysql",
"sql",
""
] |
I have that diagram. `UserId` in both tables is the same (take it as a foreign key).
How can I find which `UserId` values don't exist in the `UserAdditionStamp` table, and then insert that data into `UserAdditionStamp`?
[](https://i.stack.imgur.com/P22ln.png)
```
SELECT
min(ev.UserId) as UserId, ev.Value,
min(uas.UserId) as AdditionUserId,
max(uas.[TimeStamp]) as AdditionTimestamp,
count(ev.UserId) as [Registrations]
FROM
EventLog ev
LEFT JOIN
UserAdditionStamp uas ON ev.UserId = uas.UserId
WHERE
uas.UserId IS NULL
AND EventTypeId = 3
AND Value IS NOT NULL
GROUP BY
ev.Value
ORDER BY
UserId
```
I'm trying the above for the select, but it gives a wrong registrations count (there should be 2 registrations for UserId = 10, 2 for 13, etc.):
```
UserId AdditionUserId AdditionTimestamp [Registrations]
3 NULL NULL 1
10 NULL NULL 1
10 NULL NULL 1
13 NULL NULL 1
13 NULL NULL 1
```
and then I want to insert this data into the `UserAdditionStamp` table. How is that possible?
|
```
INSERT INTO UserAdditionStamp
SELECT e.UserId, e.TimeStamp
FROM EventLog e
WHERE NOT EXISTS (SELECT UserId FROM UserAdditionStamp WHERE UserId = e.UserId)
```
OR
```
INSERT INTO UserAdditionStamp
SELECT e.UserId, e.TimeStamp
FROM EventLog e
WHERE e.UserId NOT IN (SELECT UserId FROM UserAdditionStamp)
```
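Either form can be demonstrated end to end; a small sketch using Python's `sqlite3` with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE EventLog (UserId INT, TimeStamp TEXT);
    CREATE TABLE UserAdditionStamp (UserId INT, TimeStamp TEXT);
    INSERT INTO EventLog VALUES (3, 't1'), (10, 't2'), (13, 't3');
    INSERT INTO UserAdditionStamp VALUES (3, 't0');  -- already present
""")

# Insert only the UserIds that are not in the target table yet.
conn.execute("""
    INSERT INTO UserAdditionStamp
    SELECT e.UserId, e.TimeStamp
    FROM EventLog e
    WHERE NOT EXISTS (SELECT 1 FROM UserAdditionStamp u WHERE u.UserId = e.UserId)
""")
users = [r[0] for r in conn.execute(
    "SELECT UserId FROM UserAdditionStamp ORDER BY UserId")]
print(users)  # [3, 10, 13]
```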
|
If I understood correctly:
```
insert into UserAdditionStamp ( UserId, TimeStamp )
select UserId, TimeStamp
from EventLog
where (userId not in ( select UserId from UserAdditionStamp ))
```
|
Insert data when not exists
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I've created three tables.
```
CREATE TABLE Clients
(
ClientID INT IDENTITY(1,1) PRIMARY KEY,
First_Name VARCHAR(50) NOT NULL,
Last_Name VARCHAR(50) NOT NULL,
)
CREATE TABLE Reservation
(
ReservationID INT IDENTITY(1,1) PRIMARY KEY,
ClientID INT FOREIGN KEY (ClientID) REFERENCES Clients(ClientID),
Reservation_paid VARCHAR(3) DEFAULT 'NO',
)
CREATE TABLE Payment
(
Payment_ID INT IDENTITY(1,1) PRIMARY KEY,
ClientID INT FOREIGN KEY (ClientID) REFERENCES Clients(ClientID),
ReservationID INT FOREIGN KEY (ReservationID) REFERENCES Reservation(ReservationID),
)
```
I would like to change the value of the column `Reservation_paid` to YES in the Reservation table whenever the client pays for the reservation, and I want to do it automatically with a trigger.
Example: if the `ClientID` in the Reservation table exists in the Payment table, the value of `Reservation_paid` is automatically set to YES.
Thank you in advance.
|
```
CREATE TRIGGER trgAfterInsert ON [dbo].[Payment]
FOR INSERT
AS
declare @ClientID int;
select @ClientID =i.ClientID from inserted i;
if update(ClientID)
UPDATE Reservation set Reservation_paid='Yes' WHERE
ClientID=@ClientID;
--PRINT 'AFTER INSERT trigger fired.'
```
|
*After Insert* Trigger should do something like this
```
UPDATE R
SET Reservation_paid = 'Yes'
FROM reservation R
WHERE EXISTS (SELECT 1
FROM INSERTED I
WHERE I.clientid = R.clientid
AND I.reservationid = R.reservationid)
```
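The idea can be demonstrated with any engine that supports triggers; a sketch using Python's `sqlite3` (note SQLite triggers fire per row and use `NEW` instead of SQL Server's `inserted` pseudo-table, and the schema is trimmed down):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Reservation (ReservationID INTEGER PRIMARY KEY, ClientID INT,
                              Reservation_paid TEXT DEFAULT 'NO');
    CREATE TABLE Payment (Payment_ID INTEGER PRIMARY KEY, ClientID INT, ReservationID INT);
    INSERT INTO Reservation (ClientID) VALUES (1), (2);

    -- SQLite triggers are per-row and reference NEW instead of the inserted table.
    CREATE TRIGGER trg_payment AFTER INSERT ON Payment
    BEGIN
        UPDATE Reservation SET Reservation_paid = 'YES'
        WHERE ClientID = NEW.ClientID AND ReservationID = NEW.ReservationID;
    END;

    INSERT INTO Payment (ClientID, ReservationID) VALUES (1, 1);
""")
paid = conn.execute(
    "SELECT ReservationID, Reservation_paid FROM Reservation ORDER BY ReservationID"
).fetchall()
print(paid)  # [(1, 'YES'), (2, 'NO')]
```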
|
How to trigger a table to change the value of another table column
|
[
"",
"sql",
"sql-server",
"dynamic",
"triggers",
""
] |
I am having the following data in my database table in SQL Server:
```
Id Date Val_A Val_B Val_C Avg Vector MINMAXPOINTS
329 2016-01-15 78.09 68.40 70.29 76.50 BELOW 68.40
328 2016-01-14 79.79 75.40 76.65 76.67 BELOW 75.40
327 2016-01-13 81.15 74.59 79.00 76.44 ABOVE 81.15
326 2016-01-12 81.95 77.04 78.95 76.04 ABOVE 81.95
325 2016-01-11 82.40 73.65 81.34 75.47 ABOVE 82.40
324 2016-01-08 78.75 73.40 77.20 74.47 ABOVE 78.75
323 2016-01-07 76.40 72.29 72.95 73.74 BELOW 72.29
322 2016-01-06 81.25 77.70 78.34 73.12 ABOVE 81.25
321 2016-01-05 81.75 76.34 80.54 72.08 ABOVE 81.75
320 2016-01-04 80.95 75.15 76.29 70.86 ABOVE 80.95
```
The column `MINMAXPOINTS` should actually contain the lowest `Val_B` while `Vector` is `'BELOW'` and the highest `Val_A` while `Vector` is `'ABOVE'`. So, we would have the following values in `MINMAXPOINTS`:
```
MINMAXPOINTS
68.40
68.40
82.40
82.40
82.40
82.40
72.29
81.75
81.75
81.75
```
Is it possible without a cursor?
Any help will be greatly appreciated!
|
At first apply classic `gaps-and-islands` to determine groups (gaps/islands/above/below) and then calculate `MIN` and `MAX` for each group.
I assume that `ID` column defines the order of rows.
Tested on SQL Server 2008. Here is [SQL Fiddle](http://sqlfiddle.com/#!3/3d6954dd72e53b9015d2a6e6546058f8/15/0).
**Sample data**
```
DECLARE @T TABLE
([Id] int, [dt] date, [Val_A] float, [Val_B] float, [Val_C] float, [Avg] float,
[Vector] varchar(5));
INSERT INTO @T ([Id], [dt], [Val_A], [Val_B], [Val_C], [Avg], [Vector]) VALUES
(329, '2016-01-15', 78.09, 68.40, 70.29, 76.50, 'BELOW'),
(328, '2016-01-14', 79.79, 75.40, 76.65, 76.67, 'BELOW'),
(327, '2016-01-13', 81.15, 74.59, 79.00, 76.44, 'ABOVE'),
(326, '2016-01-12', 81.95, 77.04, 78.95, 76.04, 'ABOVE'),
(325, '2016-01-11', 82.40, 73.65, 81.34, 75.47, 'ABOVE'),
(324, '2016-01-08', 78.75, 73.40, 77.20, 74.47, 'ABOVE'),
(323, '2016-01-07', 76.40, 72.29, 72.95, 73.74, 'BELOW'),
(322, '2016-01-06', 81.25, 77.70, 78.34, 73.12, 'ABOVE'),
(321, '2016-01-05', 81.75, 76.34, 80.54, 72.08, 'ABOVE'),
(320, '2016-01-04', 80.95, 75.15, 76.29, 70.86, 'ABOVE');
```
**Query**
To understand better how it works examine results of each CTE.
`CTE_RowNumbers` calculates two sequences of row numbers.
`CTE_Groups` assigns a number for each group (above/below).
`CTE_MinMax` calculates `MIN/MAX` for each group.
Final `SELECT` picks `MIN` or `MAX` to return.
```
WITH
CTE_RowNumbers
AS
(
SELECT [Id], [dt], [Val_A], [Val_B], [Val_C], [Avg], [Vector]
,ROW_NUMBER() OVER (ORDER BY ID DESC) AS rn1
,ROW_NUMBER() OVER (PARTITION BY Vector ORDER BY ID DESC) AS rn2
FROM @T
)
,CTE_Groups
AS
(
SELECT [Id], [dt], [Val_A], [Val_B], [Val_C], [Avg], [Vector]
,rn1-rn2 AS Groups
FROM CTE_RowNumbers
)
,CTE_MinMax
AS
(
SELECT [Id], [dt], [Val_A], [Val_B], [Val_C], [Avg], [Vector]
,MAX(Val_A) OVER(PARTITION BY Groups) AS MaxA
,MIN(Val_B) OVER(PARTITION BY Groups) AS MinB
FROM CTE_Groups
)
SELECT [Id], [dt], [Val_A], [Val_B], [Val_C], [Avg], [Vector]
,CASE
WHEN [Vector] = 'BELOW' THEN MinB
WHEN [Vector] = 'ABOVE' THEN MaxA
END AS MINMAXPOINTS
FROM CTE_MinMax
ORDER BY ID DESC;
```
**Result**
```
+-----+------------+-------+-------+-------+-------+--------+--------------+
| Id | dt | Val_A | Val_B | Val_C | Avg | Vector | MINMAXPOINTS |
+-----+------------+-------+-------+-------+-------+--------+--------------+
| 329 | 2016-01-15 | 78.09 | 68.4 | 70.29 | 76.5 | BELOW | 68.4 |
| 328 | 2016-01-14 | 79.79 | 75.4 | 76.65 | 76.67 | BELOW | 68.4 |
| 327 | 2016-01-13 | 81.15 | 74.59 | 79 | 76.44 | ABOVE | 82.4 |
| 326 | 2016-01-12 | 81.95 | 77.04 | 78.95 | 76.04 | ABOVE | 82.4 |
| 325 | 2016-01-11 | 82.4 | 73.65 | 81.34 | 75.47 | ABOVE | 82.4 |
| 324 | 2016-01-08 | 78.75 | 73.4 | 77.2 | 74.47 | ABOVE | 82.4 |
| 323 | 2016-01-07 | 76.4 | 72.29 | 72.95 | 73.74 | BELOW | 72.29 |
| 322 | 2016-01-06 | 81.25 | 77.7 | 78.34 | 73.12 | ABOVE | 81.75 |
| 321 | 2016-01-05 | 81.75 | 76.34 | 80.54 | 72.08 | ABOVE | 81.75 |
| 320 | 2016-01-04 | 80.95 | 75.15 | 76.29 | 70.86 | ABOVE | 81.75 |
+-----+------------+-------+-------+-------+-------+--------+--------------+
```
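The same gaps-and-islands logic can be replayed with Python's `sqlite3` (requires SQLite ≥ 3.25 for window functions; the columns are trimmed to the ones that matter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (Id INT, Val_A REAL, Val_B REAL, Vector TEXT)")
conn.executemany("INSERT INTO T VALUES (?, ?, ?, ?)", [
    (329, 78.09, 68.40, 'BELOW'), (328, 79.79, 75.40, 'BELOW'),
    (327, 81.15, 74.59, 'ABOVE'), (326, 81.95, 77.04, 'ABOVE'),
    (325, 82.40, 73.65, 'ABOVE'), (324, 78.75, 73.40, 'ABOVE'),
    (323, 76.40, 72.29, 'BELOW'), (322, 81.25, 77.70, 'ABOVE'),
    (321, 81.75, 76.34, 'ABOVE'), (320, 80.95, 75.15, 'ABOVE'),
])

# rn1 - rn2 is constant within each run of equal Vector values (an "island").
points = [r[0] for r in conn.execute("""
    WITH g AS (
        SELECT Id, Val_A, Val_B, Vector,
               ROW_NUMBER() OVER (ORDER BY Id DESC)
             - ROW_NUMBER() OVER (PARTITION BY Vector ORDER BY Id DESC) AS grp
        FROM T
    ), m AS (
        SELECT Id, Vector,
               MAX(Val_A) OVER (PARTITION BY grp) AS MaxA,
               MIN(Val_B) OVER (PARTITION BY grp) AS MinB
        FROM g
    )
    SELECT CASE WHEN Vector = 'BELOW' THEN MinB ELSE MaxA END
    FROM m ORDER BY Id DESC
""")]
print(points)
# [68.4, 68.4, 82.4, 82.4, 82.4, 82.4, 72.29, 81.75, 81.75, 81.75]
```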
|
You can use the query below, which uses a `CASE` statement to select a conditional value based on the `Vector` value of each row, checking the group of records at or after the current one.
The query is
```
SELECT ID, DATE, VAL_A, VAL_B, VAL_C, AVG, VECTOR,
CASE
WHEN VECTOR = 'BELOW' THEN (SELECT MIN(VAL_B) FROM TABLE A WHERE ROWID >= B.ROWID)
WHEN VECTOR = 'ABOVE' THEN (SELECT MAX(VAL_A) FROM TABLE A WHERE ROWID >= B.ROWID)
END AS MINMAXVALUE
FROM TABLE B
GO
```
This should yield the result you are expecting from the data.
|
How do i calculate minimum and maximum for groups in a sequence in SQL Server?
|
[
"",
"sql",
"sql-server",
"group-by",
"gaps-and-islands",
""
] |
I have a table Employee in which some values start with ", ". I need to remove the **comma** and **white-space** at the beginning of the name at **SELECT query** time, using **LTRIM()** in SQL Server.
My Table : Employee
```
CREATE TABLE Employee
(
PersonID int,
ContactName varchar(255),
Address varchar(255),
City varchar(255)
);
INSERT INTO Employee(PersonID, ContactName, Address, City)
VALUES ('1001',', B. Bala','21, Car Street','Bangalore');
SELECT PersonID, ContactName, Address, City FROM Employee
```
Here the ContactName Column has a value "**, B. Bala**". I need to remove the **comma** and **white-space** at the beginning of the name.
|
You could potentially use [PATINDEX()](https://msdn.microsoft.com/en-us/library/ms188395.aspx) in order to get this done.
```
DECLARE @Text VARCHAR(50) = ', Well Crap';
SELECT STUFF(@Text, 1, PATINDEX('%[A-z]%', @Text) - 1, '');
```
This would output `Well Crap`. [PATINDEX()](https://msdn.microsoft.com/en-us/library/ms188395.aspx) will find the first letter in your string and cut everything before it.
It works fine even if there's no leading rubbish:
```
DECLARE @Text VARCHAR(50) = 'Mister Roboto';
SELECT STUFF(@Text, 1, PATINDEX('%[A-z]%', @Text) - 1, '');
```
This outputs `Mister Roboto`
If there are no valid characters, let's say ContactName is `, 9132124, :::`, this would output `NULL`, if you'd like to get blank result, you can use `COALESCE()`:
```
DECLARE @Text VARCHAR(50) = ', 9132124, :::';
SELECT COALESCE(STUFF(@Text, 1, PATINDEX('%[A-z]%', @Text) - 1, ''), '');
```
This will output an empty string.
|
Alas, SQL Server does not support the ANSI standard functionality of specifying the characters for [`LTRIM()`](https://msdn.microsoft.com/en-us/library/ms177827.aspx).
In this case, you can use:
```
(case when ContactName like ', %' then stuff(ContactName, 1, 2, '')
else ContactName
end)
```
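For comparison, engines that do implement the character-list form of `LTRIM` (e.g. SQLite, or PostgreSQL via `TRIM(LEADING ... FROM ...)`) make this a one-liner; a sketch via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite's LTRIM(x, y) strips any leading characters that appear in y.
cleaned = conn.execute("SELECT LTRIM(?, ', ')", (', B. Bala',)).fetchone()[0]
print(cleaned)  # B. Bala

# Characters later in the string are untouched.
addr = conn.execute("SELECT LTRIM(?, ', ')", ('21, Car Street',)).fetchone()[0]
print(addr)  # 21, Car Street
```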
|
How to Specify Trim Chars in SQL TRIM
|
[
"",
"sql",
"sql-server",
""
] |
The following queries both select data from a `posts` table and a `users` table.
The first query uses a join the second doesn't... My question is why would you use a JOIN?
Query with JOIN:
```
SELECT u.*, p.* FROM users AS u
JOIN posts AS p ON p.user_id=u.user_id
WHERE u.user_id=1
```
Query without:
```
SELECT u.*, p.* FROM users AS u, posts AS p
WHERE p.user_id=u.user_id
AND u.user_id=1
```
|
The second form is called an implicit join. First and foremost, implicit joins are considered deprecated by most [rdbms](/questions/tagged/rdbms "show questions tagged 'rdbms'")s. Personally, I sincerely doubt that any major RDBMS will drop support for them any time in the near future, but why take the risk?
Second, explicit joins have a standard way to perform outer joins. Implicit joins have all sorts of unreadable hacks solutions (like, e.g., Oracle's `(+)` syntax), but, as far as I know, nothing standard that has a reasonable expectancy of portability.
And third, and I admit this is purely a matter of taste, they just look better. Using explicit joins allows you to logically separate the conditions in the query to the "scaffolding" needed to join all the tables together and the actual logical conditions of the `where` clause. With implicit joins, everything just gets lumped into the `where` clause and with as little as three or four tables it becomes pretty hard to manage.
|
The second query *is* using a join. That's what the comma means in `users AS u, posts AS p`. This is an implicit join (implicit because although you're not explicitly using the `JOIN` keyword, you're getting its effects) also known as a `CROSS JOIN`, and means "every row of the left table, joined with every row in the right table".
The use of `JOIN ... ON` syntax is (in my opinion) *much* more explicit and readable, due in no small part to moving the joining condition from the `WHERE` clause to being directly attached to the `JOIN`, and it also opens up the syntax for other join types with different semantics (`LEFT JOIN`, and `INNER JOIN`, which is what a bare `JOIN` defaults to).
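Both spellings really do produce identical results; a quick check with Python's `sqlite3` and invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INT, name TEXT);
    CREATE TABLE posts (post_id INT, user_id INT, title TEXT);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO posts VALUES (10, 1, 'hello'), (11, 1, 'again'), (12, 2, 'hi');
""")

explicit = conn.execute("""
    SELECT u.name, p.title FROM users AS u
    JOIN posts AS p ON p.user_id = u.user_id
    WHERE u.user_id = 1 ORDER BY p.post_id
""").fetchall()
implicit = conn.execute("""
    SELECT u.name, p.title FROM users AS u, posts AS p
    WHERE p.user_id = u.user_id AND u.user_id = 1 ORDER BY p.post_id
""").fetchall()
print(explicit)  # [('ann', 'hello'), ('ann', 'again')]
assert explicit == implicit  # identical rows; only the syntax differs
```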
|
Why would you use a JOIN, if you can select from multiple tables without a JOIN?
|
[
"",
"mysql",
"sql",
"select",
"join",
"pdo",
""
] |
Please note that the question below is specifically for **MySQL**.
Imagine a table called `Cars` with the following structure (*we can ignore the lack of proper key constraints, etc. as it is not relevant to my question*):
```
CREATE TABLE Cars
(
id Integer,
maker_id Integer,
status_id Integer,
notes Varchar(100)
);
```
Now imagine loading some test data like this:
```
INSERT INTO Cars
(id, maker_id, status_id, notes)
VALUES
(1, 1001, 0, 'test1'),
(2, 1001, 0, 'test2'),
(3, 1001, 0, 'test3'),
(4, 1002, 0, 'test4'),
(5, 1002, 0, 'test5'),
(6, 1002, 1, 'test6'),
(7, 1002, 1, 'test7'),
(8, 1002, 2, 'test8'),
(9, 1003, 3, 'test9'),
(10, 1003, 3, 'test10'),
(11, 1003, 4, 'test11'),
(12, 1003, 4, 'test12'),
(13, 1003, 5, 'test13'),
(14, 1003, 5, 'test14')
```
There are 14 records, with 3 `DISTINCT` values in `maker_id` (1001, 1002, 1003), and 6 `DISTINCT` values in `status_id` (0,1,2,3,4,5).
Now, imagine taking the `DISTINCT` pairs of (`maker_id`, `status_id`).
```
SELECT DISTINCT maker_id, status_id FROM Cars;
```
Here is a link to an example in SQL Fiddle: <http://sqlfiddle.com/#!9/cb1c7/2>
This results in the following records (`maker_id`, `status_id`):
* (1001, 0)
* (1002, 0)
* (1002, 1)
* (1002, 2)
* (1003, 3)
* (1003, 4)
* (1003, 5)
The logic for what I need to return is as follows:
If a given `maker_id` value (e.g., 1001) only has 1 distinct record for its corresponding `DISTINCT` (`maker_id`, `status_id`) pairs, simply return it. In this example: (1001, 0).
If a given `maker_id` value has *more than 1* distinct record for its corresponding `DISTINCT` (`maker_id`, `status_id`) pairs, return all of them *except* the one with a `status_id` value of 0. In this example: (1002, 1), (1002, 2), (1003, 3), (1003, 4), and (1003, 5).
Notice that we left out (1002, 0).
Can anybody think of a more concise / more efficient (in terms of runtime) way of writing this query? In the real world, my table has millions of records.
I have come up with the following:
```
SELECT
subq.maker_id,
subq.status_id
FROM
(
SELECT DISTINCT
maker_id,
status_id,
(SELECT COUNT(*) FROM Cars WHERE maker_id = c.maker_id AND status_id != 0 GROUP BY maker_id) AS counter
FROM Cars AS c
) AS subq
WHERE
subq.counter IS NULL
OR (subq.counter IS NOT NULL AND subq.status_id != 0)
;
```
Here is an example in SQL Fiddle: <http://sqlfiddle.com/#!9/cb1c7/3>
|
There's several query patterns that can return the specified result. Some are going to look more complicated than others. There's likely to be *big* differences in performance.
Performing a `GROUP BY` operation on a huge set can be costly (in terms of resources and elapsed time ESPECIALLY if MySQL can't make use of an index to optimize that operation. (Using a `GROUP BY` operation is one way to get a count of `status_id` for each `maker_id`.)
And correlated subqueries can be expensive, when they are executed repeatedly. I usually only see better performance out of correlated subqueries when the number of times they need to be executed is limited.
I think the best shot at getting good performance would be something like this:
**NOT TESTED**
```
SELECT c.maker_id
, c.status_id
FROM Cars c
WHERE c.status_id > 0
UNION ALL
SELECT d.maker_id
, d.status_id
FROM Cars d
LEFT
JOIN Cars e
ON e.maker_id = d.maker_id
AND e.status_id > 0
WHERE e.maker_id IS NULL
AND d.status_id = 0
```
As to whether that's more efficient or more concise than other query approaches, we'd need to test.
But for any shot at good performance with this query, we are going to need an index.
```
.. ON Cars (maker_id, status_id)
```
We expect the EXPLAIN output will show "Using index" in the `Extra` column. And we're not expecting a "Using filesort".
One big downside of this approach is that's effectively going to be two passes through the table (or index).
The first SELECT is pretty straightforward... get me all the rows where `status_id` is not zero. We need all those rows. It's possible that an index e.g.
```
... ON Cars (status_id, maker_id)
```
might be of benefit for that query. But if we're returning a significant portion of the table, I'd bet dollars to donuts that a full scan of the other index will be just as fast, or faster.
The second `SELECT` uses an *anti-join* pattern. What this is doing is getting all rows that have a `status_id` equal to zero, and from that set, "filtering out" any of the rows where there's another row, for the same `maker_id` with a `status_id` other than zero.
We do the filtering with an outer join operation (`LEFT JOIN`) to return all rows with `status_id=0`, along with any and all matching rows. The *trick* is the predicate in the `WHERE` clause that filters out all the rows that had a match. So what we're left with is rows that didn't find a match. That is, values of `maker_id` which have *only* a `status_id=0` row.
We could get an equivalent result using a `NOT EXISTS` predicate rather than an *anti-join*. But in my experience, sometimes the performance is not as good. We could re-write that second `SELECT` (following the `UNION ALL` operation)
```
SELECT d.maker_id
, d.status_id
FROM Cars d
WHERE d.status_id = 0
AND NOT EXISTS
( SELECT 1
FROM Cars e
WHERE e.maker_id = d.maker_id
AND e.status_id > 0
)
```
And performance of that query is going to be dependent on a suitable index just like the anti-join is.
IMPORTANT: Do *not* omit the `ALL` keyword. A `UNION ALL` operation just concatenates the results of the two queries. If we omit the `ALL` keyword, then we are requiring that MySQL to perform a "sort unique" operation to eliminate duplicate rows.
NOTE: The reason for the `UNION ALL` rather than an `OR` condition is that I've usually gotten much better query plans with the `UNION ALL`. The MySQL optimizer doesn't seem to do too well with `OR` when the predicates are on different columns and conditions, and either predicate can be used to "drive" the execution plan. With the `UNION ALL`, breaking it into two queries, we can usually get a good plan for both parts.
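The `UNION ALL` + anti-join plan can be checked against the question's sample data; a sketch via Python's `sqlite3` (with `DISTINCT` added in each branch to collapse to the (maker_id, status_id) pairs the question asks for):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Cars (id INT, maker_id INT, status_id INT)")
conn.executemany("INSERT INTO Cars VALUES (?, ?, ?)", [
    (1, 1001, 0), (2, 1001, 0), (3, 1001, 0),
    (4, 1002, 0), (5, 1002, 0), (6, 1002, 1), (7, 1002, 1), (8, 1002, 2),
    (9, 1003, 3), (10, 1003, 3), (11, 1003, 4), (12, 1003, 4),
    (13, 1003, 5), (14, 1003, 5),
])

# Branch 1: all non-zero pairs. Branch 2: anti-join keeps makers with ONLY status 0.
pairs = conn.execute("""
    SELECT DISTINCT c.maker_id, c.status_id FROM Cars c WHERE c.status_id > 0
    UNION ALL
    SELECT DISTINCT d.maker_id, d.status_id
    FROM Cars d LEFT JOIN Cars e
      ON e.maker_id = d.maker_id AND e.status_id > 0
    WHERE e.maker_id IS NULL AND d.status_id = 0
    ORDER BY 1, 2
""").fetchall()
print(pairs)
# [(1001, 0), (1002, 1), (1002, 2), (1003, 3), (1003, 4), (1003, 5)]
```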
|
this query will help :)
```
select
distinct c1.maker_id, c1.status_id
from
Cars AS c1
where
c1.status_id!=0
or c1.maker_id not in (
select distinct c2.maker_id
from Cars AS c2
where c2.status_id!=0
)
```
|
Optimizing a SQL Query with Complex Filtering
|
[
"",
"mysql",
"sql",
""
] |
I have the following query which returns multiple rows per join but I need to return only the row with the highest id
Any ideas how I can do this without sub-queries?
```
SELECT cp.RefId, work.PhoneNumber AS work, work.id AS work_id, home.PhoneNumber AS home, home.id AS home_id
FROM Contacts cp
LEFT JOIN OtherPhoneNumber work ON cp.ZoneId = work.ZoneId AND work.PhoneNumber_Type = 'W' AND work.OwnerType = 'C' AND work.OwnerRefId = cp.RefId
LEFT JOIN OtherPhoneNumber home ON cp.ZoneId = home.ZoneId AND home.PhoneNumber_Type = 'H' AND home.OwnerType = 'C' AND home.OwnerRefId = cp.RefId
WHERE cp.ZoneId = '123123'
```
This returns something like:
```
RefId work work_id home home_id
QWERTY1234 01234523423 1739092 01234563232 1818181
QWERTY1234 01234523423267196 1739093 01234563232 1818181
```
I only want:
```
RefId work work_id home home_id
QWERTY1234 01234523423267196 1739093 01234563232 1818181
```
|
One method is to extract the ids for the home and work numbers, and then join back to the original tables:
```
SELECT cp.RefId, work.PhoneNumber AS work, work.id AS work_id,
       home.PhoneNumber AS home, home.id AS home_id
FROM Contacts cp LEFT JOIN
     (SELECT o.OwnerRefId, o.ZoneId,
             MAX(CASE WHEN o.PhoneNumber_Type = 'W' THEN o.id END) as workid,
             MAX(CASE WHEN o.PhoneNumber_Type = 'H' THEN o.id END) as homeid
      FROM OtherPhoneNumber o
      WHERE o.OwnerType = 'C'
      GROUP BY o.OwnerRefId, o.ZoneId
     ) wh
     ON cp.RefId = wh.OwnerRefId AND cp.ZoneId = wh.ZoneId LEFT JOIN
     OtherPhoneNumber work
     ON work.id = wh.workid LEFT JOIN
     OtherPhoneNumber home
     ON home.id = wh.homeid
WHERE cp.ZoneId = '123123';
```
EDIT:
In SQL Server, you can do this using `OUTER APPLY`:
```
select cp.RefId, work.PhoneNumber AS work, work.id AS work_id,
home.PhoneNumber AS home, home.id AS home_id
from Contacts cp outer apply
(select top 1 o.*
from OtherPhoneNumber o
where o.PhoneNumber_Type = 'W' AND o.OwnerType = 'C' AND
o.OwnerRefId = cp.RefId AND o.ZoneId = cp.ZoneId
order by o.id desc
) work outer apply
(select top 1 o.*
from OtherPhoneNumber o
where o.PhoneNumber_Type = 'H' AND o.OwnerType = 'C' AND
o.OwnerRefId = cp.RefId AND o.ZoneId = cp.ZoneId
order by o.id desc
) home
where cp.ZoneId = '123123';
```
This is probably the fastest approach, with the right indexes: `Contacts(ZoneId, RefId)` and `OtherPhoneNumber(ZoneId, OwnerRefId, PhoneNumber_Type, OwnerType, id)`.
|
Okay you can try this -
```
;WITH myCTE
AS
(
SELECT
cp.RefId
,work.PhoneNumber AS work
,work.id AS work_id
,home.PhoneNumber AS home
,home.id AS home_id
,ROW_NUMBER() OVER (PARTITION BY cp.RefId, work.PhoneNumber, work.id, home.PhoneNumber, home.id ORDER BY cp.RefId) AS RowNum
FROM Contacts cp
LEFT JOIN OtherPhoneNumber work
ON cp.ZoneId = work.ZoneId
AND work.PhoneNumber_Type = 'W'
AND work.OwnerType = 'C'
AND work.OwnerRefId = cp.RefId
LEFT JOIN OtherPhoneNumber home
ON cp.ZoneId = home.ZoneId
AND home.PhoneNumber_Type = 'H'
AND home.OwnerType = 'C'
AND home.OwnerRefId = cp.RefId
WHERE cp.ZoneId = '123123'
)
SELECT
*
FROM myCTE
WHERE RowNum = 1
```
|
Left Join same table multiple times getting only the max value each join
|
[
"",
"sql",
""
] |
I have three tables - bands, gigs and assigns (which assigns a gig to the registered user that entered it to create a many-many relationship between gigs and users.)
The tables are:
```
Bands: bandID, bandname
Gigs: GigID, bandID, venue, date
Assigns: assignID, gigid, userid (which is then linked to the users table)
```
A function on the site is to delete a band, but then it goes "Oh, before you delete the band, it'll delete all these gigs", then the user chooses confirm and so on.
I need a query that deletes the band, then deletes the gigs where gigs.bandID = bands.bandID, and then deletes the assigns where assigns.gigID = gigs.gigID.
I've worked out the SELECT statement and this displays the record I want to delete from the assigns table by feeding it the bandid.
```
SELECT assigns.*
FROM bands INNER JOIN
gigs ON bands.bandid = gigs.bandid INNER JOIN
gigsaccass ON gigs.gigid = gigsaccass.gigid AND bands.bandid = 91
```
But how do I create a DELETE statement from this?
|
There are a bunch of ways you could go about it, but this is the route I'd go if you don't want to do the `ON DELETE CASCADE` constraint option...
```
Declare @bandID Int
Set @bandID = 91
-- Delete the associated gigsaccess records
DELETE ga
FROM bands As b
INNER JOIN gigs As g
ON b.bandid = g.bandid
INNER JOIN gigsaccass As ga
ON g.gigid = ga.gigid
WHERE b.bandid = @bandID
-- Delete the associated gigs records
DELETE g
FROM bands As b
INNER JOIN gigs As g
ON b.bandid = g.bandid
WHERE b.bandid = @bandID
-- Delete the band record
DELETE b
FROM bands As b
WHERE b.bandid = @bandID
```
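The same grandchildren-then-children-then-parent order works in any engine; a sketch via Python's `sqlite3` (SQLite has no `DELETE ... FROM ... JOIN` form, so the joins become `IN` subqueries; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bands (bandid INT, bandname TEXT);
    CREATE TABLE gigs (gigid INT, bandid INT);
    CREATE TABLE gigsaccass (assignid INT, gigid INT);
    INSERT INTO bands VALUES (91, 'doomed'), (92, 'kept');
    INSERT INTO gigs VALUES (1, 91), (2, 91), (3, 92);
    INSERT INTO gigsaccass VALUES (100, 1), (101, 2), (102, 3);
""")

band_id = 91
# Delete grandchildren, then children, then the parent row, in one transaction.
with conn:
    conn.execute("""DELETE FROM gigsaccass WHERE gigid IN
                    (SELECT gigid FROM gigs WHERE bandid = ?)""", (band_id,))
    conn.execute("DELETE FROM gigs WHERE bandid = ?", (band_id,))
    conn.execute("DELETE FROM bands WHERE bandid = ?", (band_id,))

counts = [conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in ("bands", "gigs", "gigsaccass")]
print(counts)  # [1, 1, 1] -- only band 92 and its rows survive
```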
|
There are two ways that this can be done, one as has been mentioned is a `Cascading Delete`. The other would be to effectively recreate a cascading deletion by use of several queries. Personally I prefer the second option, as it allows you a little bit more control over your data. Utilizing cascading deletions (especially in a production environment) can lead to rather disastrous consequences if you're not extremely careful. Undoing a single record deletion can be fairly easy (if it's a simple table), undoing a cascading delete that inadvertently hits against 20 different tables is all but impossible without a database backup.
|
How can I delete a record, child records and grandchildren records?
|
[
"",
"sql",
"sql-server",
"inner-join",
""
] |
I would like to write a database procedure which will select all data from the `Tournaments` table plus a bool column. If the user is registered, it will return `true`.
Call:
```
exec TournamentsWithLoggedUser @user = 'asd123'
```
Procedure:
```
CREATE PROCEDURE [dbo].[TournamentsWithLoggedUser]
@user nvarchar(128)
AS
SELECT
t.Id, t.Info, BIT(r.Id)
FROM
Tournaments AS t
LEFT JOIN
Registrations AS r ON t.Id = r.TournamentId
WHERE
r.UserId IS NULL OR r.UserId = @user
RETURN
```
It means something like:
```
1, 'some info', true //1
2, 'some info2', false //2
```
|
You are looking for this query
```
SELECT t.id,
t.info,
Cast (CASE
WHEN r.userid IS NOT NULL THEN 1
ELSE 0
END AS BIT) AS IsRegistered
FROM tournaments AS t
LEFT JOIN registrations AS r
ON t.id = r.tournamentid
AND r.userid = @user
```
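The important detail here is that the user filter lives in the `ON` clause of the `LEFT JOIN`, not in `WHERE`; a sketch via Python's `sqlite3` with the question's sample shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Tournaments (Id INT, Info TEXT);
    CREATE TABLE Registrations (Id INT, TournamentId INT, UserId TEXT);
    INSERT INTO Tournaments VALUES (1, 'some info'), (2, 'some info2');
    INSERT INTO Registrations VALUES (10, 1, 'asd123');
""")

# Filtering on the user inside ON keeps unregistered tournaments in the result
# (with NULL r.UserId), which the CASE turns into a 0/1 flag.
rows = conn.execute("""
    SELECT t.Id, t.Info,
           CASE WHEN r.UserId IS NOT NULL THEN 1 ELSE 0 END AS IsRegistered
    FROM Tournaments t
    LEFT JOIN Registrations r
      ON t.Id = r.TournamentId AND r.UserId = ?
    ORDER BY t.Id
""", ("asd123",)).fetchall()
print(rows)  # [(1, 'some info', 1), (2, 'some info2', 0)]
```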
|
```
SELECT t.Id, t.Info,
-- this works in SQL Server
CAST ((CASE WHEN r.UserId IS NOT NULL THEN 1 ELSE 0 END) AS BIT) AS IsRegistered
FROM Tournaments as t
LEFT JOIN Registrations as r ON t.Id = r.TournamentId
where (r.UserId = '' OR r.UserId = @user)
```
I think this one will help you...
|
SQL procedure select
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I need to execute SQL from batch file.
I am executing following to connect to Postgres and select data from table
```
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
select * from test;
```
I am able to connect to database, however I'm getting the error
> 'select' is not recognized as an internal or external command,
> operable program or batch file.
Has anyone faced such an issue?
This is one of the queries I am trying; something similar works in a shell script (please ignore any syntax errors in the query):
```
copy testdata (col1,col2,col3) from '%filepath%/%csv_file%' with csv;
```
|
You could pipe it into psql
```
(
echo select * from test;
) | C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
```
When closing parentheses are part of the SQL query, they have to be escaped with three carets.
```
(
echo insert into testconfig(testid,scenarioid,testname ^^^) values( 1,1,'asdf'^^^);
) | psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
```
|
Use the `-f` parameter to pass the batch file name
```
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME% -f 'sql_batch_file.sql'
```
<http://www.postgresql.org/docs/current/static/app-psql.html>
> -f filename
>
> --file=filename
>
> Use the file filename as the source of commands instead of reading commands interactively. After the file is processed, psql terminates. This is in many ways equivalent to the meta-command \i.
>
> If filename is - (hyphen), then standard input is read until an EOF indication or \q meta-command. Note however that Readline is not used in this case (much as if -n had been specified).
|
How to execute postgres' sql queries from batch file?
|
[
"",
"sql",
"postgresql",
"batch-file",
"command-prompt",
""
] |
I have a field in one of my SQL tables which stores a version number, like `'3.4.23'` or `'1.224.43'`.
Is there a way to use a *greater than* condition for this field?
`SELECT * FROM versions WHERE version_number > '2.1.27'`
|
Thanks for the tips @symcbean and @gordon-linoff, my final query looks like this:
```
SELECT *
FROM versions WHERE CONCAT(
LPAD(SUBSTRING_INDEX(SUBSTRING_INDEX(version_number, '.', 1), '.', -1), 10, '0'),
LPAD(SUBSTRING_INDEX(SUBSTRING_INDEX(version_number, '.', 2), '.', -1), 10, '0'),
LPAD(SUBSTRING_INDEX(SUBSTRING_INDEX(version_number, '.', 3), '.', -1), 10, '0')
) > CONCAT(LPAD(2,10,'0'), LPAD(1,10,'0'), LPAD(27,10,'0'));
```
This allows each component to be up to 10 digits long.
It transforms this:
```
X.XX.XXX > 2.1.27
```
to this:
```
'000000000X00000000XX0000000XXX' > '000000000200000000010000000027'
```
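The padding transform is easy to check outside the database; this is a hypothetical Python sketch of the same zero-padding idea (not the MySQL query itself):

```python
def pad_version(version, width=10, parts=3):
    """Zero-pad each dotted component so string order matches numeric order."""
    comps = version.split(".")
    comps += ["0"] * (parts - len(comps))          # treat missing parts as 0
    return "".join(c.zfill(width) for c in comps)

# Plain string comparison orders '2.1.100' before '2.1.27', which is wrong:
naive_ok = "2.1.100" > "2.1.27"                              # False
# After padding, lexicographic order agrees with numeric version order:
padded_ok = pad_version("2.1.100") > pad_version("2.1.27")   # True
```

Each component here gets 10 digits, matching the query above, so `pad_version("2.1.27")` produces the same `'000000000200000000010000000027'` string shown in the answer.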
|
While it would be possible to write a function which would compare version numbers, is this the right way to solve the problem? Comparing f(x) and f(y) cannot be indexed. If you know that any part of the version number will never exceed, say, 4 digits, then you could create an additional field holding the value padded with 0's (or on MariaDB, use a virtual column) which can be indexed, e.g. `2.1.27` would become `'000200010027'`.
It would be a lot simpler if you stopped trying to use such a numbering schema and just used integers or datetimes. If you must stick with this numbering, then consider splitting the data into 3 columns.
For a quick hack, if you know that the version number will always have 3 component and each component will always be less than 256, then you could...
```
SELECT *
FROM versions
WHERE INET_ATON(CONCAT(version_number, '.0')) > INET_ATON('2.1.27.0');
```
|
MySQL query - compare version numbers
|
[
"",
"mysql",
"sql",
"versioning",
""
] |
I would like to use an if statement in a SQL query.
What I want:
```
if(tractions_delivery.send_date_id !=0 ){
date_send_commodities.id = tractions_delivery.send_date_id
}
```
---
my query :
```
from
tractions_delivery,user_address,province,city,date_send_commodities,users
WHERE
tractions_delivery.tr_id = $tr_id
AND
tractions_delivery.address_id = user_address.id
AND
user_address.province_id = province.id
AND
user_address.city_id = city.id
AND
//not work
(tractions_delivery.send_date_id IS NOT 0 date_send_commodities.id = tractions_delivery.send_date_id)
AND
users.id = user_address.user_id
```
|
You could use the CASE-statement
```
SELECT
*
FROM
tractions_delivery,
user_address,
province,
city,
date_send_commodities,users
WHERE
tractions_delivery.tr_id = $tr_id AND
tractions_delivery.address_id = user_address.id AND
user_address.province_id = province.id AND
user_address.city_id = city.id AND
CASE WHEN tractions_delivery.send_date_id != 0 THEN date_send_commodities.id = tractions_delivery.send_date_id ELSE 1=1 END AND
users.id = user_address.user_id
```
|
You can only use if statements in stored procedures or functions. If you just write a plain SQL statement, unfortunately you cannot wrap it in an if statement. But you can use logic in the query itself, e.g.:
```
SELECT CASE WHEN col1 = col2 THEN 'col1 equals col2' ELSE 'col1 doesnt equal col2' END
FROM table1
```
So wrapping the whole query doesn't work, but in the field list you can use CASE WHEN ... ELSE ... END logic.
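A quick way to see the `CASE` expression working in the field list is a small SQLite sketch from Python (table and values are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE table1 (col1 TEXT, col2 TEXT)")
cur.executemany("INSERT INTO table1 VALUES (?, ?)", [("a", "a"), ("a", "b")])

# CASE WHEN ... THEN ... ELSE ... END evaluates per row in the SELECT list:
rows = cur.execute("""
SELECT CASE WHEN col1 = col2 THEN 'equal' ELSE 'not equal' END
FROM table1
""").fetchall()
```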
|
using if statement in mysql query
|
[
"",
"mysql",
"sql",
""
] |
I am experimenting with PostgreSQL, coming from MySQL, and I simply wish to create a table with this piece of code, which is valid MySQL:
```
CREATE TABLE flat_10
(
pk_flat_id INT(30) DEFAULT 1,
rooms INT(10) UNSIGNED NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (flat_id)
);
```
I get the error
```
ERROR: syntax error at or near "("
LINE 3: pk_flat_id integer(30) DEFAULT 1,
```
I have conducted searches on the web and found no answer and I cant seem to find an answer in the PostgreSQL manual. What am I doing wrong?
I explicitly want to set a limit to the number of digits that can be inserted into the "pk\_flat\_id" field
|
> *I explicitly want to set a limit to the number of digits that can be inserted into the "pk\_flat\_id" field*
Your current table definition does ***not*** impose a "size limit" in any way. In MySQL the parameter for the `int` data type is only a *hint* for applications about the display width of the column when *displaying* it.
You can store the value 2147483647 in an `int(1)` without any problems.
If you want to limit the values to be stored in an integer column you can use a check constraint:
```
CREATE TABLE flat_10
(
pk_flat_id bigint DEFAULT 1,
rooms integer NOT NULL,
room_label CHAR(1) NOT NULL,
  PRIMARY KEY (pk_flat_id),
constraint valid_number
check (pk_flat_id <= 999999999)
);
```
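The effect of such a check constraint can be demonstrated with a quick SQLite sketch from Python (SQLite also accepts `CHECK` constraints; this is an illustration, not Postgres itself):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# The CHECK constraint caps the value, independent of the storage type:
cur.execute("""
CREATE TABLE flat_10 (
    pk_flat_id INTEGER DEFAULT 1,
    CONSTRAINT valid_number CHECK (pk_flat_id <= 999999999)
)
""")
cur.execute("INSERT INTO flat_10 VALUES (999999999)")       # 9 digits: accepted

try:
    cur.execute("INSERT INTO flat_10 VALUES (1000000000)")  # 10 digits: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

remaining = cur.execute("SELECT COUNT(*) FROM flat_10").fetchone()[0]
```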
|
The answer is that you use `numeric` or `decimal` types. These are documented [here](http://www.postgresql.org/docs/current/static/datatype-numeric.html).
Note that these types can take an optional precision argument, but you don't want that. So:
```
CREATE TABLE flat_10
(
pk_flat_id DECIMAL(30) DEFAULT 1,
rooms DECIMAL(10) NOT NULL,
room_label CHAR(1) NOT NULL,
PRIMARY KEY (pk_flat_id)
);
```
[Here](http://www.sqlfiddle.com/#!15/0f97a) is a SQL Fiddle.
I don't think that Postgres supports unsigned decimals. And, it seems like you really want serial types for your keys and the long number of digits is superfluous.
|
How can I set a size limit for an "int" datatype in PostgreSQL 9.5
|
[
"",
"sql",
"postgresql",
"database-design",
"sqldatatypes",
""
] |
I would like to retrieve all points within a given range of another set of points. Let's say, find all shops within 500m of any subway station.
I wrote this query, which is quite slow, and would like to optimize it:
```
SELECT DISTINCT ON(locations.id) locations.id FROM locations, pois
WHERE pois.poi_kind = 'subway'
AND ST_DWithin(locations.coordinates, pois.coordinates, 500, false);
```
I'm running on latest versions of Postgres and PostGis (Postgres 9.5, PostGis 2.2.1)
Here is the table metadata:
```
Table "public.locations"
Column | Type | Modifiers
--------------------+-----------------------------+--------------------------------------------------------
id | integer | not null default nextval('locations_id_seq'::regclass)
coordinates | geometry |
Indexes:
"locations_coordinates_index" gist (coordinates)
Table "public.pois"
Column | Type | Modifiers
-------------+-----------------------------+---------------------------------------------------
id | integer | not null default nextval('pois_id_seq'::regclass)
coordinates | geometry |
poi_kind_id | integer |
Indexes:
"pois_pkey" PRIMARY KEY, btree (id)
"pois_coordinates_index" gist (coordinates)
"pois_poi_kind_id_index" btree (poi_kind_id)
Foreign-key constraints:
"pois_poi_kind_id_fkey" FOREIGN KEY (poi_kind_id) REFERENCES poi_kinds(id)
```
Here is the result of EXPLAIN (ANALYZE, BUFFERS):
```
Unique (cost=2407390.71..2407390.72 rows=2 width=4) (actual time=3338.080..3338.252 rows=918 loops=1)
Buffers: shared hit=559
-> Sort (cost=2407390.71..2407390.72 rows=2 width=4) (actual time=3338.079..3338.145 rows=963 loops=1)
Sort Key: locations.id
Sort Method: quicksort Memory: 70kB
Buffers: shared hit=559
-> Nested Loop (cost=0.00..2407390.71 rows=2 width=4) (actual time=2.466..3337.835 rows=963 loops=1)
Join Filter: (((pois.coordinates)::geography && _st_expand((locations.coordinates)::geography, 500::double precision)) AND ((locations.coordinates)::geography && _st_expand((pois.coordinates)::geography, 500::double precision)) AND _st_dwithin((pois.coordinates)::geography, (locations.coordinates)::geography, 500::double precision, false))
Rows Removed by Join Filter: 4531356
Buffers: shared hit=559
-> Seq Scan on locations (cost=0.00..791.68 rows=24168 width=36) (actual time=0.005..3.100 rows=24237 loops=1)
Buffers: shared hit=550
-> Materialize (cost=0.00..10.47 rows=187 width=32) (actual time=0.000..0.009 rows=187 loops=24237)
Buffers: shared hit=6
-> Seq Scan on pois (cost=0.00..9.54 rows=187 width=32) (actual time=0.015..0.053 rows=187 loops=1)
Filter: (poi_kind_id = 3)
Rows Removed by Filter: 96
Buffers: shared hit=6
Planning time: 0.184 ms
Execution time: 3338.304 ms
(20 rows)
```
|
I eventually came to the conclusion that I could not live-compute distances between thousands of points of interest and thousands of locations within a realistic amount of time (< 1 sec).
So instead I precompute everything: each time a location or a POI is created/updated, I store the minimum distance between each location and each kind of POI in order to be able to answer the question "which locations are closer than X meters from this kind of POI".
Here is the module I coded for this purpose (it's in Elixir, but the main part is raw SQL)
```
defmodule My.POILocationDistanceService do
alias Ecto.Adapters.SQL
alias My.Repo
def delete_distance_for_location(location_id) do
run_query!("DELETE FROM poi_location_distance WHERE location_id = $1::integer", [location_id])
end
def delete_distance_for_poi_kind(poi_kind_id) do
run_query!("DELETE FROM poi_location_distance WHERE poi_kind_id = $1::integer", [poi_kind_id])
end
def insert_distance_for_location(location_id) do
sql = """
INSERT INTO poi_location_distance(poi_kind_id, location_id, poi_id, distance)
SELECT
DISTINCT ON (p.poi_kind_id)
p.poi_kind_id as poi_kind_id,
l.id as location_id,
p.id as poi_id,
MIN(ST_Distance_Sphere(l.coordinates, p.coordinates)) as distance
FROM locations l, pois p
WHERE
l.id = $1
AND ST_DWithin(l.coordinates, p.coordinates, $2, FALSE)
GROUP BY p.poi_kind_id, p.id, l.id
ORDER BY p.poi_kind_id, distance;
"""
run_query!(sql, [location_id, max_distance])
end
def insert_distance_for_poi_kind(poi_kind_id, offset \\ 0, limit \\ 10_000_000) do
sql = """
INSERT INTO poi_location_distance(poi_kind_id, location_id, poi_id, distance)
SELECT
DISTINCT ON(l.id, p.poi_kind_id)
p.poi_kind_id as poi_kind_id,
l.id as location_id,
p.id as poi_id,
MIN(ST_Distance_Sphere(l.coordinates, p.coordinates)) as distance
FROM pois p, (SELECT * FROM locations OFFSET $1 LIMIT $2) as l
WHERE
p.poi_kind_id = $3
AND ST_DWithin(l.coordinates, p.coordinates, $4, FALSE)
GROUP BY l.id, p.poi_kind_id, p.id;
"""
run_query!(sql, [offset, limit, poi_kind_id, max_distance])
end
defp run_query!(query, params) do
SQL.query!(Repo, query, params)
end
def max_distance, do: 5000
end
```
|
I think that you are using the geography version of st\_dwithin, because of the fourth parameter.
Try changing your query to this one:
```
SELECT DISTINCT ON(locations.id) locations.id FROM locations, pois
WHERE pois.poi_kind = 'subway'
AND ST_DWithin(locations.coordinates, pois.coordinates, 500);
```
If that doesn't solve it, please post the EXPLAIN ANALYZE output again.
|
PostGis nearest neighbours query
|
[
"",
"sql",
"postgresql",
"postgis",
"query-performance",
"nearest-neighbor",
""
] |
My problem is: I need to select all tables in my database which contain a column NrPad, and for exactly those tables I need to update the NrPad column.
I already have working select and update statements:
```
select
t.name as table_name
from sys.tables t
inner join sys.columns c
on t.object_id = c.object_id
where c.name like 'NrPad'
Update Anlage Set NrPad = CASE WHEN Len(Nr) < 10 THEN '0' + Convert(Nvarchar,Len(Nr)) ELSE Convert(Nvarchar,Len(Nr)) END + Nr
```
My problem is: how can I merge these two statements together?
I'm open to suggestions and your help is greatly appreciated.
|
Use the `INFORMATION_SCHEMA` rather than `sys.tables`, and create a dynamic SQL statement like so:
```
DECLARE @sql varchar(max) = '';
SELECT
@sql = @sql + '; UPDATE ' + c.TABLE_NAME + ' SET NrPAd = CASE WHEN LEN(Nr)<10 THEN ''0'' + CONVERT(NVARCHAR,LEN(NR)) ELSE CONVERT(NVARCHAR,LEN(NR)) END + Nr'
FROM INFORMATION_SCHEMA.COLUMNS c
where c.COLUMN_NAME = 'NrPad'
print @sql -- for debugging purposes
exec (@sql)
```
This assumes that all tables that have the `NrPad` column also have a `Nr` column. If you need to check for those, or if you just need to use the `Nr` column from a particular table, it's a bit different (either join against `INFORMATION_SCHEMA.COLUMNS` again or against `Anglage` to get the value of Nr or check that Nr is a column on that table).
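The same pattern — query the catalog for tables owning the column, concatenate one `UPDATE` per match, then execute the batch — can be sketched against SQLite's catalog from Python (catalog views differ per RDBMS; the table names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE anlage (nr TEXT, nrpad TEXT);
CREATE TABLE other  (nr TEXT, nrpad TEXT);
CREATE TABLE plain  (nr TEXT);
INSERT INTO anlage (nr) VALUES ('abc');
INSERT INTO other  (nr) VALUES ('defgh');
""")

# Find every table that has an 'nrpad' column (SQLite's stand-in for
# INFORMATION_SCHEMA.COLUMNS), then run one generated UPDATE per table.
tables = [name for (name,) in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
targets = [t for t in tables
           if any(col[1] == "nrpad" for col in cur.execute(f"PRAGMA table_info({t})"))]

for t in targets:
    # Prefix Nr with its zero-padded length, mirroring the answer's CASE logic:
    cur.execute(f"UPDATE {t} SET nrpad = printf('%02d', length(nr)) || nr")

anlage_result = cur.execute("SELECT nrpad FROM anlage").fetchone()[0]
other_result = cur.execute("SELECT nrpad FROM other").fetchone()[0]
```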
|
Not tested on your case, but you could do an UPDATE - SET - FROM - WHERE.
Have a look at this question with multiple answers: [How do I UPDATE from a SELECT in SQL Server?](https://stackoverflow.com/questions/2334712/update-from-select-using-sql-server)
|
SQL update in a select statement
|
[
"",
"sql",
"sql-server",
"select",
"sql-update",
""
] |
Here is my query:
```
SELECT DISTINCT v.codi, m.nom, v.matricula, v.data_compra, v.color,
v.combustible, v.asseguranca,
(CASE WHEN lloguer.dataf IS NOT NULL THEN 'Si' ELSE 'Llogat' END) AS Disponible
FROM vehicle v
INNER JOIN model m on model_codi=m.codi
INNER JOIN lloguer on codi_vehicle=v.codi
WHERE Disponible='Si';
```
What I'm trying to do is show only those rows where `lloguer.dataf` is not NULL, but it doesn't allow me to use the `Disponible` alias in the last-line comparison.
What can I do?
This is how the info is shown (with some more attributes) without the last-line comparison.
[](https://i.stack.imgur.com/ljhR3.png)
|
The problem is that the alias doesn't exist yet. So you have to repeat the full code or create a subquery.
```
SELECT *
FROM ( .... ) YourQuery
WHERE Disponible='Si';
```
You can read more details here <https://community.oracle.com/thread/1109532?tstart=0>
|
I'm a TSQL guy by nature, but can you do this?
```
Select distinct codi, nom,matricula, data_compra, colour, combustible, asseguranca from
(SELECT DISTINCT v.codi, m.nom, v.matricula, v.data_compra, v.color,
v.combustible, v.asseguranca,
(CASE WHEN lloguer.dataf IS NOT NULL THEN 'Si' ELSE 'Llogat' END) AS Disponible
FROM vehicle v
INNER JOIN model m on model_codi=m.codi
INNER JOIN lloguer on codi_vehicle=v.codi)
WHERE Disponible='Si';
```
As @JuanCarlosOropeza has stated, the alias doesn't exist until the data is initially fetched. This is why you would be able to use the alias in an `order by` clause without using a subquery, but not in the where clause as the data hasn't been fetched yet.
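A minimal SQLite sketch from Python shows the subquery trick working (the rental table and values are hypothetical stand-ins for the asker's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE rentals (vehicle TEXT, dataf TEXT)")
cur.executemany("INSERT INTO rentals VALUES (?, ?)",
                [("car1", "2016-01-01"), ("car2", None)])

# The alias only exists once the inner SELECT has produced it,
# so the filter lives on the outer query:
rows = cur.execute("""
SELECT vehicle, disponible FROM (
    SELECT vehicle,
           CASE WHEN dataf IS NOT NULL THEN 'Si' ELSE 'Llogat' END AS disponible
    FROM rentals
)
WHERE disponible = 'Si'
""").fetchall()
```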
|
CASE statement ALIAS comparison
|
[
"",
"sql",
"oracle",
"case",
"alias",
""
] |
I have two tables like this
Table a
```
id email firstname lastname
1 a@a.com xx xx
2 b@b.com xb ab
```
Table b
```
id email firstname lastname
1 a@a.com sd cx
2 c@c.com df dr
```
I want the result to be like this
```
email firstname lastname
a@a.com xx xx
b@b.com xb ab
c@c.com df dr
```
Can anyone please help me? I tried UNION and DISTINCT but I'm not getting the result I want.
|
You can try this:-
```
SELECT email, firstname, lastname
FROM table_a
UNION
SELECT email, firstname, lastname
FROM table_b
WHERE email NOT IN (SELECT email FROM table_a)
```
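Run against the sample data from the question, this query behaves as intended; here is a quick SQLite check from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE table_a (email TEXT, firstname TEXT, lastname TEXT);
CREATE TABLE table_b (email TEXT, firstname TEXT, lastname TEXT);
INSERT INTO table_a VALUES ('a@a.com','xx','xx'), ('b@b.com','xb','ab');
INSERT INTO table_b VALUES ('a@a.com','sd','cx'), ('c@c.com','df','dr');
""")

# Rows from table_b only count when their email is absent from table_a,
# so table_a's version of 'a@a.com' wins:
rows = cur.execute("""
SELECT email, firstname, lastname FROM table_a
UNION
SELECT email, firstname, lastname FROM table_b
WHERE email NOT IN (SELECT email FROM table_a)
ORDER BY email
""").fetchall()
```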
|
```
SELECT email
, firstname
, lastname
FROM table_a
UNION
SELECT b.email
, b.firstname
, b.lastname
FROM table_b b
LEFT
JOIN table_a a
ON a.email = b.email
WHERE a.email IS NULL;
```
|
Mysql join two tables get same field value based on condition
|
[
"",
"mysql",
"sql",
""
] |
I have a table in SQL Server with data that has an auto-increment column. The data in the auto increment column is not sequential. It is like `1, 2, 3, 5, 6, 7, 9` (missing 4 and 8).
I want to copy the exact data in this table to another fresh and empty identical table. The destination table also has an auto increment column.
Problem: when I copy the data using the query below, the `AttachmentID` has new and different values
```
INSERT INTO FPSDB_new.dbo.Form_Attachment
SELECT
CategoryID
FROM
FPSDB.dbo.Form_Attachment
```
The `Form_Attachment` table in both destination and source is the same, as below:
```
CREATE TABLE [dbo].[Form_Attachment]
(
[AttachmentID] [int] IDENTITY(1,1) NOT NULL,
[CategoryID] [int] NULL
)
```
Is there a SQL query solution to make the two tables with identical data?
|
You can insert into an `IDENTITY` column by using `SET IDENTITY_INSERT ON` in your transaction (don't forget to turn it off afterwards):
[How to turn IDENTITY\_INSERT on and off using SQL Server 2008?](https://stackoverflow.com/questions/7063501/how-to-turn-identity-insert-on-and-off-using-sql-server-2008)
```
SET IDENTITY_INSERT FPSDB_new.dbo.Form_Attachment ON
INSERT INTO FPSDB_new.dbo.Form_Attachment ( AttachmentID, CategoryID )
SELECT
AttachmentID,
CategoryID
FROM
FPSDB.dbo.Form_Attachment
SET IDENTITY_INSERT FPSDB_new.dbo.Form_Attachment OFF
```
|
You can also do this:
1. Drop the copy table
2. Create as select, which will copy the exact structure and data to the new table.
```
Select *
into new_table
from old_table
```
|
Copying data from a table with auto increment column
|
[
"",
"sql",
"sql-server",
"auto-increment",
""
] |
Anyone know why a situation like the following would run fine on MSSQL 2005 and not MSSQL 2008:
```
declare @X int = null;
select A, B, C from TABLE where X=@X
```
Without going into detail, I've got a stored proc which calls another stored proc that takes a hard coded Null as one of the parameters and it runs fine apparently on MSSQL2005 but not 2008.
|
That might be due to your ANSI_NULLS settings on the two servers.
> **When SET ANSI\_NULLS is ON**, a SELECT statement that uses WHERE
> column\_name = NULL returns zero rows even if there are null values in
> column\_name. A SELECT statement that uses WHERE column\_name <> NULL
> returns zero rows even if there are nonnull values in column\_name.
>
> **When SET ANSI\_NULLS is OFF**, the Equals (=) and Not Equal To (<>)
> comparison operators do not follow the SQL-92 standard. A SELECT
> statement that uses WHERE column\_name = NULL returns the rows that
> have null values in column\_name. A SELECT statement that uses WHERE
> column\_name <> NULL returns the rows that have nonnull values in the
> column. Also, a SELECT statement that uses WHERE column\_name <>
> XYZ\_value returns all rows that are not XYZ\_value and that are not
> NULL.
You can find detailed information here: <https://msdn.microsoft.com/en-us/library/ms188048(v=sql.90).aspx>
|
The code is poorly written regardless of which version of SQL you're using, because `NULL` is never "equal" to anything (even itself). It's "unknown", so whether or not it's equal (or greater than, or less than, etc.) another value is also "unknown".
One thing that can affect this behavior is the setting of `ANSI_NULLS`. If your 2005 server (or that connection at least) has `ANSI_NULLS` set to "`OFF`" then you'll see the behavior that you have. For a stored procedure the setting is dependent at the time that the stored procedure was created. Try recreating the stored procedure with the following before it:
```
SET ANSI_NULLS ON
GO
```
and you'll likely see the same results as in 2008.
You should correct the code to properly handle `NULL` values using something like:
```
WHERE X = @X OR (X IS NULL AND @X IS NULL)
```
or
```
WHERE X = COALESCE(@X, X)
```
The specifics will depend on your business requirements.
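The core behavior — `= NULL` never matching, and the NULL-safe predicate matching — can be demonstrated with any SQL engine; here is a small SQLite sketch from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (None,)])

# 'x = NULL' is never true: the comparison yields unknown, so no rows match.
eq_null = cur.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0]

# A NULL-safe predicate handles the NULL parameter explicitly:
param = None
safe = cur.execute(
    "SELECT COUNT(*) FROM t WHERE x = ? OR (x IS NULL AND ? IS NULL)",
    (param, param)).fetchone()[0]
```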
|
MSSQL variable = null
|
[
"",
"sql",
"sql-server",
"null",
""
] |
How can I speed up this rather simple UPDATE query? It's been running for over 5 hours!
I'm basically replacing SourceID in a table by joining on a new table that houses the Old and New IDs. All these fields are VARCHAR(72) and must stay that way.
Pub\_ArticleFaculty table has 8,354,474 rows (8.3 million). ArticleAuthorOldNew has 99,326,472 rows (99.3 million) and only the 2 fields you see below.
There are separate non-clustered indexes on all these fields. Is there a better way to write this query to make it run faster?
```
UPDATE PF
SET PF.SourceId = AAON.NewSourceId
FROM AA..Pub_ArticleFaculty PF WITH (NOLOCK)
INNER JOIN AA2..ArticleAuthorOldNew AAON WITH (NOLOCK)
ON AAON.OldFullSourceId = PF.SourceId
```
|
In my experience, looping your update so that it acts on a small number of rows each iteration is a good way to go. The ideal number of rows to update per iteration is largely dependent on your environment and the tables you're working with. I usually stick to around 1,000 - 10,000 rows per iteration.
**Example**
```
SET ROWCOUNT 1000 -- Set the batch size (number of rows to affect each time through the loop).
WHILE (1=1) BEGIN
UPDATE PF
SET NewSourceId = 1
FROM AA..Pub_ArticleFaculty PF WITH (NOLOCK)
INNER JOIN AA2..ArticleAuthorOldNew AAON WITH (NOLOCK)
ON AAON.OldFullSourceId = PF.SourceId
WHERE NewSourceId IS NULL -- Only update rows that haven't yet been updated.
-- When no rows are affected, we're done!
IF @@ROWCOUNT = 0
BREAK
END
SET ROWCOUNT 0 -- Reset the batch size to the default (i.e. all rows).
GO
```
|
If you are resetting all or almost all of the values, then the `update` will be quite expensive. This is due to logging and the overhead for the updates.
One approach you can take instead is `insert` into a temporary table, then truncate, then re-insert:
```
select pf.col1, pf.col2, . . . ,
coalesce(aaon.NewSourceId, pf.sourceid) as SourceId
into temp_pf
from AA..Pub_ArticleFaculty PF LEFT JOIN
AA2..ArticleAuthorOldNew AAON
on AAON.OldFullSourceId = PF.SourceId;
truncate table AA..Pub_ArticleFaculty;
insert into AA..Pub_ArticleFaculty
select * from temp_pf;
```
Note: You should either be sure that the columns in the original table match the temporary table or, better yet, list the columns explicitly in the `insert`.
I should also note that the major benefit is when your recovery mode is simple or bulk-logged. The reason is that logging for the truncate, `select into`, and `insert . . . select` is minimal (see [here](https://technet.microsoft.com/en-us/library/ms191244(v=sql.105).aspx)). This savings on the logging can be very significant.
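The rebuild-instead-of-update pattern can be sketched in SQLite from Python (SQLite spells `SELECT ... INTO` as `CREATE TABLE ... AS SELECT` and has `DELETE` rather than `TRUNCATE`; table names mirror the question but the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE pub_articlefaculty (sourceid TEXT, col1 TEXT);
CREATE TABLE articleauthoroldnew (oldfullsourceid TEXT, newsourceid TEXT);
INSERT INTO pub_articlefaculty VALUES ('old-1','a'), ('keep-2','b');
INSERT INTO articleauthoroldnew VALUES ('old-1','new-1');
""")

# Build the replacement rows once, empty the original, reload it.
cur.executescript("""
CREATE TABLE temp_pf AS
    SELECT COALESCE(aaon.newsourceid, pf.sourceid) AS sourceid, pf.col1
    FROM pub_articlefaculty pf
    LEFT JOIN articleauthoroldnew aaon
           ON aaon.oldfullsourceid = pf.sourceid;
DELETE FROM pub_articlefaculty;
INSERT INTO pub_articlefaculty SELECT * FROM temp_pf;
""")
rows = cur.execute(
    "SELECT sourceid FROM pub_articlefaculty ORDER BY sourceid").fetchall()
```

The mapped row gets its new id while unmapped rows pass through unchanged via `COALESCE`.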
|
How to speed up simple UPDATE query with millions of rows?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
Consider this 24-hour summary. The problem is that I'm getting 0s returned for hours where values haven't been inserted yet. I believe this is due to the **`SUM()`** function call.
```
SELECT
section,
[21], [22], [23], [0], [1], [2], [3], [4], [5], [6], [7], [8],
[21] + [22] + [23] + [0] + [1] + [2] + [3] + [4] + [5] + [6] + [7] + [8] as s1_total
    -- shift 2 fields are omitted for brevity
FROM (
SELECT
section,
-- hours from 21:00 (1st shift) to 20:00 (2nd shift)
SUM(CASE WHEN prTime = '21:00:00' THEN Amount ELSE 0 END) AS [21],
SUM(CASE WHEN prTime = '22:00:00' THEN Amount ELSE 0 END) AS [22],
SUM(CASE WHEN prTime = '23:00:00' THEN Amount ELSE 0 END) AS [23],
SUM(CASE WHEN prTime = '00:00:00' THEN Amount ELSE 0 END) AS [0],
SUM(CASE WHEN prTime = '01:00:00' THEN Amount ELSE 0 END) AS [1],
SUM(CASE WHEN prTime = '02:00:00' THEN Amount ELSE 0 END) AS [2],
        -- ... similar cases are omitted for brevity
SUM(CASE WHEN prTime = '20:00:00' THEN Amount ELSE 0 END) AS [20]
FROM (
SELECT prTime, prDate, section01 AS Amount, 'section 1' as [Section] FROM SectionsHourlyValues
UNION
SELECT prTime, prDate, section02 AS Amount, 'section 2' as [Section] FROM SectionsHourlyValues
UNION
SELECT prTime, prDate, section03 AS Amount, 'section 3' as [Section] FROM SectionsHourlyValues
) AS U
WHERE
(prDate = CONVERT(DATE, DATEADD(HOUR, -4, CONVERT(DATETIME2(7), @dt, 104))) and prTime > '20:00:00') or
(prDate = CONVERT(DATE, @dt, 104) and prTime <= '20:00:00')
GROUP BY section
) t;
```
For example, running the query
```
DECLARE @dt varchar(10) = 'certain_date';
SELECT * from [dbo].[SectionsHourlyValues] WHERE
(prDate = CONVERT(DATE, DATEADD(HOUR, -4, CONVERT(DATETIME2(7), @dt, 104))) and prTime > '20:00:00') or
(prDate = CONVERT(DATE, @dt, 104) and prTime <= '20:00:00');
```
wouldn't return us the data for say **09:00:00** / **section1** whereas the summary would show us **0**.
Then I want to show **`NULL`** (for not yet inserted records) instead of **`0`**. How do I do that?
|
How about replacing the 0 with `NULL` in your conditional aggregation?
```
SUM(CASE WHEN etc... ELSE NULL END)
```
|
Use `NULLIF`, for example:
```
NULLIF(SUM(CASE WHEN prTime = '21:00:00' THEN Amount ELSE 0 END),0)
```
or simply do not use ELSE
```
SUM(CASE WHEN prTime = '21:00:00' THEN Amount END)
```
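Dropping the `ELSE` works because aggregate functions ignore NULLs, and `SUM` over only NULLs is itself NULL; a quick SQLite check from Python (one hour of sample data, the other hour deliberately missing):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE hourly (prtime TEXT, amount INTEGER)")
cur.execute("INSERT INTO hourly VALUES ('21:00:00', 5)")  # nothing for 09:00:00

# Without ELSE, unmatched rows contribute NULL, and an all-NULL SUM is NULL:
row = cur.execute("""
SELECT SUM(CASE WHEN prtime = '21:00:00' THEN amount END) AS h21,
       SUM(CASE WHEN prtime = '09:00:00' THEN amount END) AS h09
FROM hourly
""").fetchone()
```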
|
How to show NULL instead of 0 when using sum aggregate / pivoting
|
[
"",
"sql",
"sql-server",
"pivot-table",
""
] |
It sounds absurd, but is it possible?
I have `EMPLOYEE_ID` and `DEPARTMENT_ID` and I have to sort it according to the `DEPARTMENT_ID` without using `ORDER BY`.
It should not be present anywhere in the query, i.e. not in a `USING` clause, a sub-query, the SELECT statement, or anywhere else.
Is it possible?
|
There are a few approaches that can work depending on particular RDBMS and none of them is to be used in production environment, but just for fun:
1. Use XML output and apply server-side XSLT transformation (through CLR for instance) with `<xsl:sort>`.
2. Use stored procedure to produce sorted list in one text return value.
3. Write own SQL proxy client replacing `-- HIDDEN MESSAGE` with `ORDER BY`. (I admit, this is not exactly SQL solution).
4. Create an Indexed (Materialized) View on the table sorted by `DEPARTMENT_ID` that would be solely used by this query. **Not guaranteed to work every single time**.
5. Create temporary table with all possible IDs in incremental order, left join source table on `DEPARTMENT_ID` and use hints to prevent optimizer from reordering joins. **Not guaranteed to work every single time**.
**Upd** 6. When there are fewer rows to sort then the RDBMS supported CTE recursion depth:
```
With Example (EMPLOYEE_ID, DEPARTMENT_ID) As (
Select 4, 2 Union All
Select 5, 2 Union All
Select 6, 3 Union All
Select 7, 3 Union All
Select 2, 1 Union All
Select 3, 1 Union All
Select 1, 1
),
Stringified (ID) AS (
Select
RIGHT('0000000000' + CAST(DEPARTMENT_ID AS NVARCHAR(10)), 10) +
RIGHT('0000000000' + CAST(EMPLOYEE_ID AS NVARCHAR(10)), 10)
From Example
),
Sorted (PREV_EMPLOYEE_ID, PREV_DEPARTMENT_ID,
NEXT_EMPLOYEE_ID, NEXT_DEPARTMENT_ID) As (
Select
CAST(Right(ex1.ID, 10) AS INT),
CAST(Left(ex1.ID, 10) AS INT),
CAST(Right(Min(ex2.ID),10) AS INT),
CAST(Left(Min(ex2.ID),10) AS INT)
From Stringified ex1
Inner Join Stringified ex2 On ex1.ID < ex2.ID
Group By ex1.ID
),
RecursiveCTE (EMPLOYEE_ID, DEPARTMENT_ID) AS (
Select
CAST(Right(Min(ID),10) AS INT),
CAST(Left(Min(ID),10) AS INT)
From Stringified
Union All
Select NEXT_EMPLOYEE_ID, NEXT_DEPARTMENT_ID
From Sorted
Inner Join RecursiveCTE
ON RecursiveCTE.EMPLOYEE_ID = Sorted.PREV_EMPLOYEE_ID
AND RecursiveCTE.DEPARTMENT_ID = Sorted.PREV_DEPARTMENT_ID
)
Select *
From RecursiveCTE
```
**Upd** 7. Many RDBMS engines would sort result when applying `GROUP BY`, `UNION`, `EXCEPT`, `INTERSECT` or just `DISTINCT` especially if they are single-threaded or forced not to use parallelism with a hint. **Not guaranteed to work every single time**.
|
That could be possible if you created an index on your table whose first (or only) key is `DEPARTMENT_ID` and forced your query engine to use this index. This would have to be a plain `SELECT` statement as well.
But even then, it won't guarantee correct sort order.
|
Sorting without ORDER BY
|
[
"",
"sql",
"sorting",
"sql-order-by",
""
] |
I have a question. I need to update two large tables - `t_contact` (170 million rows) and `t_participants` (11 million rows). These tables both have a column `CUSTOMER_ID`. Some of these IDs are wrong and I need to update them. There are about 140 thousand wrong IDs.
I understand that if I use `UPDATE TABLE` it will take a long time, but these two tables mustn't be unavailable for long. What should I do?
|
If you have the wrong IDs stored somewhere, you should use merge:
```
MERGE INTO t_contact D
USING (select * from t_wrong_ids) S
ON (D.CUSTOMER_ID = S.OLD_ID)
WHEN MATCHED THEN UPDATE SET D.CUSTOMER_ID = S.NEW_ID
```
A lot faster then a normal update.
Second table is the same:
```
MERGE INTO t_participants D
USING (select * from t_wrong_ids) S
ON (D.CUSTOMER_ID = S.OLD_ID)
WHEN MATCHED THEN UPDATE SET D.CUSTOMER_ID = S.NEW_ID
```
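Where `MERGE` is unavailable, the same one-pass mapping update can be written as a correlated `UPDATE`; here is a hypothetical SQLite sketch from Python (table names follow the question, the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE t_contact (customer_id INTEGER);
CREATE TABLE t_wrong_ids (old_id INTEGER, new_id INTEGER);
INSERT INTO t_contact VALUES (1), (2), (3);
INSERT INTO t_wrong_ids VALUES (2, 200);
""")

# Only rows whose id appears in the mapping table are touched:
cur.execute("""
UPDATE t_contact
SET customer_id = (SELECT new_id FROM t_wrong_ids
                   WHERE t_wrong_ids.old_id = t_contact.customer_id)
WHERE customer_id IN (SELECT old_id FROM t_wrong_ids)
""")
rows = cur.execute("SELECT customer_id FROM t_contact ORDER BY 1").fetchall()
```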
|
Split one of your tables into parts and process them one by one in a PL/SQL block. For example, supposing the IDs are sequential, take `t_participants` and split it into parts with 1 million rows each:
```
begin
-- 1 and 11 - hardcoded values,
-- since your t_participants table has 11 000 000 rows
for i in 1..11 loop
    merge into t_contact c
using (select * from t_participants
where id between (i - 1) * 1000000 and i * 1000000) p
on (c.id = p.id)
when matched then update ...;
commit;
end loop;
end;
```
I chose a part size of 1000000 records, but you can pick another size; it will depend on your server's performance. Try manually updating 100, 1000, 10000, etc. rows to find which size works best.
|
SQL Update large table
|
[
"",
"sql",
"oracle",
"merge",
"sql-update",
"bigdata",
""
] |
I have just added a new column, `Person_Id_Helper` to `MyTable`. It is supposed to contain 1,2,3 etc, in the order the table is now sorted.
This is what I want to do:
```
DECLARE @i INT = 1, @NumberOfRows INT = SELECT COUNT(*) FROM MyTable
WHILE(@i <= @NumberOfRows)
BEGIN
-- Person_Id_Helper = @i
-- @i = @i + 1
END
```
How do I write this?
|
I think this is what you want to achieve -
```
DECLARE @i INT = 0
UPDATE MyTable
SET
@i = Person_Id_Helper = @i + 1
```
Now check your column value.
Well, we cannot use an `ORDER BY` clause in an update statement. But to achieve that effect, here is the updated query.
```
UPDATE t
SET Person_Id_Helper = rn.RowNum
FROM MyTable t
INNER JOIN (SELECT
ID
,ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
FROM MyTable) rn
ON t.ID = rn.ID
```
@shungo: Thanks for pointing that out.
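The set-based numbering idea can be checked quickly in SQLite from Python; here a correlated count stands in for `ROW_NUMBER() OVER (ORDER BY id)` (an equivalent technique for small tables, not the T-SQL above verbatim):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, person_id_helper INTEGER)")
cur.executemany("INSERT INTO mytable (id) VALUES (?)", [(5,), (12,), (30,)])

# One set-based statement instead of a loop: each row's helper value is the
# count of rows at or before it in id order, i.e. its row number.
cur.execute("""
UPDATE mytable
SET person_id_helper = (SELECT COUNT(*) FROM mytable m2
                        WHERE m2.id <= mytable.id)
""")
rows = cur.execute(
    "SELECT id, person_id_helper FROM mytable ORDER BY id").fetchall()
```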
|
I think it might be the wrong idea to persist the sort order within a column. But - for sure! - it is the wrong idea to do this in a while loop.
Read about row-based and set-based approaches. SQL demands set-based thinking...
Look at this as an example how to do this (just paste it into an empty query window and execute, adapt to your needs):
```
DECLARE @tbl TABLE(SortDate DATE, Inx INT);
INSERT INTO @tbl VALUES({d'2016-01-20'},0)
,({d'2016-01-19'},0)
,({d'2016-01-14'},0)
,({d'2016-01-16'},0);
WITH cte AS
(
SELECT Inx,ROW_NUMBER() OVER(ORDER BY SortDate) AS RN
FROM @tbl
)
UPDATE cte SET Inx=RN;
SELECT * FROM @tbl;
```
|
SQL Server : loop through every row, add incremented value to column
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I might do something like:
```
select
    max(val1), max(val2), max(val3) from table
group by
val4;
```
Take this sample output:
```
Table
val1 val2 val3 val4
A P Z 1
P Z P 1
```
For my use case, let's assume that 'P' is an important value, so if a P is showing up somewhere when grouping by val4, that's the value I want to surface. That is, I'd want to surface ('P','P','P') in this case for val1, val2, val3 when grouping by val4.
How can or should this be done?
|
I would just use conditional aggregation. For instance:
```
select coalesce(max(case when val1 = 'P' then val1 end), max(val1)) as val1,
       coalesce(max(case when val2 = 'P' then val2 end), max(val2)) as val2,
       coalesce(max(case when val3 = 'P' then val3 end), max(val3)) as val3
from table
group by val4;
```
If any of the values match `'P'`, then the first expression in each `coalesce()` returns `'P'`. If not, the maximum is returned.
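Against the sample data from the question, the conditional aggregation yields ('P','P','P') as desired; a quick SQLite check from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (val1 TEXT, val2 TEXT, val3 TEXT, val4 INTEGER)")
cur.executemany("INSERT INTO t VALUES (?,?,?,?)",
                [("A", "P", "Z", 1), ("P", "Z", "P", 1)])

# Each COALESCE prefers 'P' when present anywhere in the group,
# otherwise it falls back to the plain MAX of the column:
row = cur.execute("""
SELECT COALESCE(MAX(CASE WHEN val1 = 'P' THEN val1 END), MAX(val1)),
       COALESCE(MAX(CASE WHEN val2 = 'P' THEN val2 END), MAX(val2)),
       COALESCE(MAX(CASE WHEN val3 = 'P' THEN val3 END), MAX(val3))
FROM t GROUP BY val4
""").fetchone()
```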
|
```
select
RIGHT(MAX(CASE WHEN val1='P' THEN {fn CONCAT(1,val1)} ELSE {fn CONCAT(0,val1)} end),1),
RIGHT(MAX(CASE WHEN val2='P' THEN {fn CONCAT(1,val2)} ELSE {fn CONCAT(0,val2)} end),1),
RIGHT(MAX(CASE WHEN val3='P' THEN {fn CONCAT(1,val3)} ELSE {fn CONCAT(0,val3)} end),1),
val4
FROM table
Group by val4
```
InterSystems Cache SQL above; the version below should work for other SQL flavors:
```
select
RIGHT(MAX(CASE WHEN val1='P' THEN CONCAT(1,val1) ELSE CONCAT(0,val1) end),1),
RIGHT(MAX(CASE WHEN val2='P' THEN CONCAT(1,val2) ELSE CONCAT(0,val2) end),1),
RIGHT(MAX(CASE WHEN val3='P' THEN CONCAT(1,val3) ELSE CONCAT(0,val3) end),1),
val4
FROM table
Group by val4
```
|
Oracle SQL -- When combining multiple rows with an analytic function, 'prefer' some value over others
|
[
"",
"sql",
"oracle",
""
] |
I have such data structure:
[](https://i.stack.imgur.com/a151Y.png)
I have to find all people who don't have `PHONE_TYPE = 2`.
I solved this with such query:
```
SELECT DISTINCT NAME FROM table WHERE NAME NOT IN (
SELECT S2.NAME FROM table S2
LEFT OUTER JOIN table S1
ON S1.PHONE_TYPE != 2 AND S2.PHONE_TYPE = S1.PHONE_TYPE
WHERE S1.PHONE_TYPE is null);
```
* Is there more sophisticate approach?
* What is the standard solution for search when you have `many-to-many` relation?
|
Ideally, you would have three tables; it may be worth taking a look at what **referential integrity** is. See <https://docs.oracle.com/cd/B19306_01/server.102/b14220/data_int.htm>
You would have the following tables: -
[](https://i.stack.imgur.com/XC8v2.png)
The reason I have split up into three tables is because I believe that your above data structure needs *normalizing*. See <https://en.wikipedia.org/wiki/Database_normalization>
> 1. To free the collection of relations from undesirable insertion, update and deletion dependencies;
> 2. To reduce the need for restructuring the collection of relations, as new types of data are introduced, and thus increase the life span of application programs;
> 3. To make the relational model more informative to users;
> 4. To make the collection of relations neutral to the query statistics, where these statistics are liable to change as time goes by.
> — E.F. Codd, "Further Normalization of the Data Base Relational Model"[8]
I would strongly recommend having primary keys for your person and phone type tables. I prefer surrogate keys, such as a UUID, but see <http://sqlmag.com/business-intelligence/surrogate-key-vs-natural-key> for determining what's best for your setup.
You could use the following query to then pull out rows: -
```
SELECT p.NAME FROM PERSON as p
WHERE p.id NOT IN (SELECT person_id FROM PersonPhone WHERE phone_id = 2)
```
You typically want to get rid of many-many relationships as you end up having redundant data in your database, which eventually could cause you some problems.
|
This is a task for NOT EXISTS:
```
select distinct name
from table as t1
where not exists
( select *
from table as t2
where t1.name = t2.name
and t2.phone_type = 2
)
```
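To see the `NOT EXISTS` pattern in action, here is a minimal sketch using SQLite through Python's `sqlite3` module; the table name and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE phones (name TEXT, phone_type INTEGER);
    INSERT INTO phones VALUES ('alice', 1), ('alice', 2),
                              ('bob', 1), ('bob', 3);
""")

# Keep only people for whom no row with phone_type = 2 exists.
names = [r[0] for r in conn.execute("""
    SELECT DISTINCT name FROM phones t1
    WHERE NOT EXISTS (SELECT 1 FROM phones t2
                      WHERE t2.name = t1.name AND t2.phone_type = 2)
""")]
```

Only `bob` survives the filter, since `alice` has a `phone_type = 2` row.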
|
SQL: excluding look up
|
[
"",
"sql",
"many-to-many",
"self-join",
""
] |
I have two queries that I'm grouping to get the account.
Is it possible to combine them into one query using the `dept_id` column? Some of the IDs may not exist in both queries.
The output like this:
```
dept_id | totalstars | totalstarsgiven
```
Query:
```
SELECT
employeedept as dept_id,
COUNT(*) as 'totalstars'
FROM
Responses a
WHERE
execoffice_status = 1
AND YEAR ([approveddate]) = 2015
AND MONTH ([approveddate]) = 11
and employeedept not in (22,16)
GROUP BY
execoffice_status, employeedept
SELECT
a.submitterdept as dept_id,
COUNT(*) as 'totalstarsgiven'
FROM
Responses a
WHERE
execoffice_status = 1
AND YEAR ([approveddate]) = 2015
AND MONTH ([approveddate]) = 11
GROUP BY
execoffice_status, submitterdept
```
|
I think this will do what you want:
```
SELECT
employeedept as dept_id
, COUNT(*) as totalstars
, totalstarsgiven
FROM
Responses a
LEFT JOIN (
SELECT
a.submitterdept as dept_id
, COUNT(*) as totalstarsgiven
FROM
Responses a
WHERE
execoffice_status = 1
and YEAR ([approveddate]) =2015
and month ([approveddate]) =11
GROUP BY
execoffice_status
, submitterdept
) b
ON a.employeedept = b.dept_id
WHERE
execoffice_status = 1
and YEAR ([approveddate]) =2015
and month ([approveddate]) =11
and employeedept not in (22,16)
GROUP BY
execoffice_status
, employeedept
```
This will aggregate the departments by the number of stars they received (when dept\_id = employeedept) and given (when dept\_id = submitterdept)
|
Because you want to see the rows returned by both queries, you need to do a [Full Outer Join](http://www.w3schools.com/sql/sql_join_full.asp).
```
SELECT COALESCE(ed.dept_id, sd.dept_id) AS dept_id, COALESCE(ed.totalstars, 0) totalstars,
       COALESCE(sd.totalstarsgiven, 0) totalstarsgiven
FROM
(SELECT employeedept as dept_id, COUNT(*) as totalstars
FROM Responses a
WHERE execoffice_status = 1
and YEAR ([approveddate]) =2015
and month ([approveddate]) =11
and employeedept not in (22,16)
GROUP BY execoffice_status, employeedept) ed
FULL OUTER JOIN
(SELECT a.submitterdept as dept_id, COUNT(*) as totalstarsgiven
FROM Responses a
WHERE execoffice_status = 1
and YEAR ([approveddate]) =2015
and month ([approveddate]) =11
GROUP BY execoffice_status, submitterdept) sd
ON ed.dept_id = sd.dept_id
```
|
How to join two queries counting from two different columns?
|
[
"",
"sql",
""
] |
How can I exclude matched elements of one array from another?
Postgres code:
```
a1 := '{1, 2, 5, 15}'::int[];
a2 := '{1, 2, 3, 6, 7, 9, 15}'::int[];
a3 := a2 ??magic_operator?? a1;
```
In `a3` I expect exactly '{3, 6, 7, 9}'
**Final Result**
Both my solution and lad2025's work fine.
The solution with `array_position()` requires **PostgreSQL 9.5** or later and executes about 3x faster.
|
The additional module [`intarray`](http://www.postgresql.org/docs/current/interactive/intarray.html) provides a simple and fast [subtraction operator **`-`**](http://www.postgresql.org/docs/current/interactive/intarray.html#INTARRAY-OP-TABLE) for integer arrays, exactly the *magic\_operator* you are looking for:
```
test=# SELECT '{1, 2, 3, 6, 7, 9, 15}'::int[] - '{1, 2, 5, 15}'::int[] AS result;
?column?
-----------
{3,6,7,9}
```
You need to install the module once per database:
```
CREATE EXTENSION intarray;
```
It also provides special operator classes for indexes:
* [Postgresql intarray error: undefined symbol: pfree](https://stackoverflow.com/questions/10491184/postgresql-intarray-error-undefined-symbol-pfree/10491433#10491433)
Note that it [only works for:](http://www.postgresql.org/docs/current/interactive/intarray.html)
> ... null-free arrays of integers.
|
It looks like `XOR` between arrays:
```
WITH set1 AS
(
SELECT * FROM unnest('{1, 2, 5, 15}'::int[])
), set2 AS
(
SELECT * FROM unnest('{1, 2, 3, 6, 7, 9, 15}'::int[])
), xor AS
(
(SELECT * FROM set1
UNION
SELECT * FROM set2)
EXCEPT
(SELECT * FROM set1
INTERSECT
SELECT * FROM set2)
)
SELECT array_agg(unnest ORDER BY unnest)
FROM xor
```
Output:
```
"{3,5,6,7,9}"
```
How it works:
1. Unnest both arrays
2. Calculate the UNION of both sets
3. Calculate the INTERSECT of both sets
4. Subtract the INTERSECT from the UNION
5. Combine the result back into an array
Alternatively you could use sum of both minus(except) operations:
```
(A+B) - (A^B)
<=>
(A-B) + (B-A)
```
---
Utilizing `FULL JOIN`:
```
WITH set1 AS
(
SELECT *
FROM unnest('{1, 2, 5, 15}'::int[])
), set2 AS
(
SELECT *
FROM unnest('{1, 2, 3, 6, 7, 9, 15}'::int[])
)
SELECT array_agg(COALESCE(s1.unnest, s2.unnest)
ORDER BY COALESCE(s1.unnest, s2.unnest))
FROM set1 s1
FULL JOIN set2 s2
ON s1.unnest = s2.unnest
WHERE s1.unnest IS NULL
OR s2.unnest IS NULL;
```
**EDIT:**
If you want only elements from second array that are not is first use simple `EXCEPT`:
```
SELECT array_agg(unnest ORDER BY unnest)
FROM (SELECT * FROM unnest('{1, 2, 3, 6, 7, 9, 15}'::int[])
EXCEPT
SELECT * FROM unnest('{1, 2, 5, 15}'::int[])) AS sub
```
Output:
```
"{3,6,7,9}"
```
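The `EXCEPT` variant can be checked with a minimal sketch using SQLite through Python's `sqlite3` module. SQLite has no array type, so each table row stands in for one unnested array element:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a1 (v INTEGER);
    CREATE TABLE a2 (v INTEGER);
    INSERT INTO a1 VALUES (1), (2), (5), (15);
    INSERT INTO a2 VALUES (1), (2), (3), (6), (7), (9), (15);
""")

# Elements of a2 that do not appear in a1, like the Postgres query above.
result = [r[0] for r in conn.execute(
    "SELECT v FROM a2 EXCEPT SELECT v FROM a1 ORDER BY v")]
```

With the question's sample arrays, `result` is `[3, 6, 7, 9]`.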
|
Exclude matched array elements
|
[
"",
"sql",
"arrays",
"postgresql",
""
] |
How do I sort a column (VARCHAR) like this:
```
[CSGO] Bot #1
[CSGO] Bot #2
[CSGO] Bot #3
...
[CSGO] Bot #10
```
My Query results in:
```
[CSGO] Bot #2
[CSGO] Bot #23
[CSGO] Bot #5
[CSGO] Bot #6
```
Query:
```
SELECT bot_id, name, username FROM bots ORDER BY ABS(REPLACE(name, '[CSGO] Bot #', '')) ASC
```
Without the ABS() and REPLACE(), gives basically the same result.
|
A simple way to do this assuming the prefixes are the same length:
```
order by length(name), name
```
If you just want to go by the portion after the #:
```
order by substring_index(name, '#', -1) + 0
```
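As a sanity check of the "sort by the number after the `#`" idea, here is a sketch using SQLite through Python's `sqlite3` module. SQLite has no `substring_index`, so the equivalent `instr`/`substr`/`CAST` combination is used instead; the data is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bots (name TEXT);
    INSERT INTO bots VALUES ('[CSGO] Bot #2'), ('[CSGO] Bot #23'),
                            ('[CSGO] Bot #5'), ('[CSGO] Bot #6');
""")

# Take everything after '#' and cast it to an integer for sorting.
names = [r[0] for r in conn.execute("""
    SELECT name FROM bots
    ORDER BY CAST(substr(name, instr(name, '#') + 1) AS INTEGER)
""")]
```

This yields `#2, #5, #6, #23` instead of the alphabetical `#2, #23, #5, #6`.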
|
```
SELECT bot_id, name, username
FROM bots
ORDER BY substring_index(name, '#', 1), substring_index(name, '#', -1) + 0
```
if your column name always starts with '[CSGO] Bot #', then do this:
```
SELECT bot_id, name, username
FROM bots
ORDER BY substr(name, 13) + 0
```
|
MySQL Sort varchar by its integer
|
[
"",
"mysql",
"sql",
"varchar",
""
] |
I have a table that looks like the below in SQL
[](https://i.stack.imgur.com/2d3PU.png)
I would like to select the row with the earliest date in either the PR create date or AC create date field. So in this case, I would want these two records.
[](https://i.stack.imgur.com/fOtPC.png)
I know that I can write
```
Select email, min(PR_create_date) from table
group by email
```
However, this will only find the minimum date in the PR\_create\_date column. How can I find the minimum date in either the PR\_create\_date or AC\_create\_date columns?
|
You can use
```
SELECT email,
MIN(least(pr_create_date, ac_create_date))
FROM my_table
GROUP BY email
```
Function `least` returns the minimal value of its arguments.
But if one of arguments is `NULL` then the result is `NULL` too. So you are to write some logic to handle nulls, depending on how nulls must be treated in your business logic: for example you can use "special date" in the far past or future to replace nulls as
```
SELECT email,
MIN( least( ifnull(pr_create_date, DATE '2999-12-31'),
ifnull(ac_create_date, DATE '2999-12-31')) )
FROM my_table
GROUP BY email
```
But in the simplest case, when the dates are mutually exclusive (i.e. in every row exactly one of the dates is null), it's enough to just write
```
SELECT email,
MIN( ifnull(pr_create_date, ac_create_date) )
FROM my_table
GROUP BY email
```
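The null-handling variant can be verified with a minimal sketch using SQLite through Python's `sqlite3` module. SQLite's two-argument `MIN()` behaves like `least` (it also returns `NULL` if any argument is `NULL`), so the same `IFNULL` sentinel trick applies; the data is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (email TEXT, pr_create_date TEXT, ac_create_date TEXT);
    INSERT INTO t VALUES
      ('a@x.com', '2015-01-10', NULL),
      ('a@x.com', NULL,         '2015-01-05'),
      ('b@x.com', '2015-02-01', '2015-01-20');
""")

# Per row: smallest non-null date (NULL replaced by a far-future sentinel);
# per email: the aggregate minimum of those per-row values.
rows = dict(conn.execute("""
    SELECT email, MIN(d)
    FROM ( SELECT email,
                  MIN(IFNULL(pr_create_date, '2999-12-31'),
                      IFNULL(ac_create_date, '2999-12-31')) AS d
           FROM t )
    GROUP BY email
"""))
```

ISO `YYYY-MM-DD` strings compare correctly as text, which is why the sentinel works here.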
|
One more solution :
```
select temp.EMAIL, min(temp.PR_AC_CREATE_DATE)
from
(
select EMAIL , min(PR_CREATE_DATE) as PR_AC_CREATE_DATE
from table
group by email
union
select EMAIL, min(AC_CREATE_DATE)
from table
group by email
) temp
group by temp.EMAIL ;
```
|
Get min date across two rows in SQL table
|
[
"",
"mysql",
"sql",
""
] |
Is there any function in Oracle that takes a phone number as a parameter and returns the area code plus the phone number?
example:
parameter : +3323658568526 or 003323658568526 (French country code)
return :
* area code : +33
* phone number : 23658568526
|
You could use **SUBSTR**.
For example,
```
SQL> WITH sample_data AS(
2 SELECT '+3323658568526' num FROM dual UNION ALL
3 SELECT '003323658568526' num FROM dual
4 )
5 -- end of sample_data mimicking real table
6 SELECT num,
7 CASE
8 WHEN SUBSTR(num, 1, 1) = '+'
9 THEN SUBSTR(num, 1, 3)
10 ELSE '+'
11 ||ltrim(SUBSTR(num, 1, 4), '0')
12 END area_code ,
13 CASE
14 WHEN SUBSTR(num, 1, 1) = '+'
15 THEN SUBSTR(num, 4)
16 ELSE SUBSTR(num, 5)
17 END phone_number
18 FROM sample_data;
NUM AREA_ PHONE_NUMBER
--------------- ----- ------------
+3323658568526 +33 23658568526
003323658568526 +33 23658568526
SQL>
```
NOTE : The number of digits of area code may vary, in that case you need to handle it in the **CASE** expression. The better design would be to store them separately. You could **normalize** the data further.
|
You could even try with regular expressions:
```
SELECT REGEXP_REPLACE(
phone,
'(\+|00)([0-9][0-9])([0-9]*)',
'+\2'
) AS AREA,
REGEXP_REPLACE(
phone,
'(\+|00)([0-9][0-9])([0-9]*)',
'\3'
) AS PHONE
FROM (SELECT '003323658568526' phone FROM DUAL
UNION ALL
SELECT '+3323658568526' phone FROM DUAL);
```
Again, this assumes that your area codes always have 2 digits.
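The same regular-expression split can be sketched outside the database, here in plain Python with the `re` module. As in the answer above, it assumes a 2-digit country code prefixed by `+` or `00`:

```python
import re

def split_number(phone):
    """Split a phone string into ('+CC', rest), assuming a 2-digit code."""
    m = re.fullmatch(r"(?:\+|00)(\d{2})(\d*)", phone)
    if m is None:
        raise ValueError("unrecognised format: " + phone)
    return "+" + m.group(1), m.group(2)
```

Both input formats from the question normalise to the same pair:

```python
split_number("+3323658568526")    # ('+33', '23658568526')
split_number("003323658568526")   # ('+33', '23658568526')
```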
|
is there any way to separate area code from phone number in oracle
|
[
"",
"sql",
"oracle",
"phone-number",
"country-codes",
""
] |
I am working with a query that contains a CASE statement within the WHERE clause, inside which I have to compare a date column with two variables, fromDate and toDate, if they are not null. But SQL Server 2008 gives an error like "Incorrect syntax near '>'" when executing it. Can anyone please help me with the correct query? Here is the query:
```
declare
@fromDate varchar(32)= '2015-08-04',
@toDate varchar(32) = '2016-01-04'
SELECT
DC_MASTER.DC_NO,
PSR_LOAD_DATE_TIME,
CUS_NAME
FROM DC_ITEM
INNER JOIN DC_MASTER ON DC_MASTER.DC_NO=DC_ITEM.DC_NO
INNER JOIN ITEM_MASTER ON item_master.ITEM_CODE = DC_ITEM.ITEM_CODE
INNER JOIN CUSTOMER_MASTER ON customer_master.CUS_CODE = DC_MASTER.CUS_CODE
INNER JOIN LOCATION_MASTER ON location_master.LOC_CODE = DC_ITEM.LOC_CODE
WHERE DC_ITEM.ITEM_CODE=item_master.ITEM_CODE
AND CASE
WHEN @fromDate IS NOT NULL AND @toDate IS NOT NULL THEN DC_MASTER.PSR_LOAD_DATE_TIME >= @fromDate and PSR_LOAD_DATE_TIME <= @toDate
END
```
|
You can easily turn it into a normal where condition instead of case :
```
declare
@fromDate varchar(32)= '2015-08-04',
@toDate varchar(32) = '2016-01-04'
SELECT
DC_MASTER.DC_NO,
PSR_LOAD_DATE_TIME,
CUS_NAME
FROM DC_ITEM
INNER JOIN DC_MASTER ON DC_MASTER.DC_NO=DC_ITEM.DC_NO
INNER JOIN ITEM_MASTER ON item_master.ITEM_CODE = DC_ITEM.ITEM_CODE
INNER JOIN CUSTOMER_MASTER ON customer_master.CUS_CODE = DC_MASTER.CUS_CODE
INNER JOIN LOCATION_MASTER ON location_master.LOC_CODE = DC_ITEM.LOC_CODE
WHERE DC_ITEM.ITEM_CODE=item_master.ITEM_CODE
AND ((@fromDate IS NOT NULL AND @toDate IS NOT NULL and DC_MASTER.PSR_LOAD_DATE_TIME >= @fromDate
and PSR_LOAD_DATE_TIME <= @toDate) or @fromDate IS NULL or @toDate IS NULL)
```
|
You can rewrite the logic to:
```
WHERE DC_ITEM.ITEM_CODE=item_master.ITEM_CODE
AND (@fromDate IS NULL OR @toDate IS NULL
OR (DC_MASTER.PSR_LOAD_DATE_TIME >= @fromDate
and PSR_LOAD_DATE_TIME <= @toDate)
)
```
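The "optional filter" pattern in this rewrite can be checked with a minimal sketch using SQLite through Python's `sqlite3` module; the table name and data are invented for illustration. When either bound is `NULL`, the range check is skipped and all rows come back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loads (load_time TEXT);
    INSERT INTO loads VALUES ('2015-09-01'), ('2016-03-01');
""")

def fetch(from_date, to_date):
    # NULL bounds short-circuit the OR, disabling the range filter.
    return [r[0] for r in conn.execute("""
        SELECT load_time FROM loads
        WHERE :f IS NULL OR :t IS NULL
           OR (load_time >= :f AND load_time <= :t)
    """, {"f": from_date, "t": to_date})]
```

`fetch(None, None)` returns both rows; `fetch('2015-08-04', '2016-01-04')` returns only the row inside the range.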
|
CASE statement for date comparison within WHERE clause in SQL Server 2008
|
[
"",
"sql",
"sql-server-2008",
"case",
""
] |
On my controlled assessment in school I'm stuck on this question:
Create, run, test, explain and demonstrate scripts to do the following:
1. Produce a list of all entries with the OCR exam board, showing:
* the names of the students with entries
* the subject names and level of entry for the exams the students are entered for.
2. Produce a list of all students, showing the studentsβ names, followed by the exams to be
taken. This list should be presented in alphabetical order by the studentβs last name.
In my code I don't know how to join more than 2 tables with INNER JOIN; when I try, the 'ON' clause doesn't work, and I don't know how to solve this question.
`CREATE` tables and `INSERT` data:
```
CREATE TABLE IF NOT EXISTS students
(
student_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
first_name VARCHAR(20) NOT NULL,
middle_name VARCHAR(20),
last_name VARCHAR(40) NOT NULL,
email VARCHAR(60) NOT NULL,
password CHAR(40) NOT NULL,
reg_date DATETIME NOT NULL,
PRIMARY KEY (student_id),
UNIQUE (email)
);
INSERT INTO students (first_name,last_name,email,password,reg_date) VALUES
("ex1","ex1.1","example1@gmail.com","11062001",'2009-12-04 13:25:30'),
("ex2","ex2.2","example2@gmail.com","ex123",'2015-02-12 15:20:45'),
("my name is jeff","21 kid","mynameis21kid@vine.com","yolo",'2014-09-21 14:15:25'),
("Mr.Right","Mr.Hildfiger","Mr.misters@mister.com","mistermaster",'2015-06-04 19:50:35'),
("Bob","Dabuilda","bobthebuilder@fixit.com","BTBCWFI?",'2005-11-12 21:20:55');
CREATE TABLE IF NOT EXISTS subjects
(
subject_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
subject_name VARCHAR(20) NOT NULL,
level_of_entry VARCHAR(5) NOT NULL,
exam_board VARCHAR(10) NOT NULL,
PRIMARY KEY (subject_id),
UNIQUE(subject_id)
);
INSERT INTO subjects (subject_name,level_of_entry,exam_board) VALUES
("Chemistry","AS","OCR"),
("Biology","GCSE","AQA"),
("Music","GCSE","Edexcel"),
("English","A","OCR"),
("Physics","A","AQA"),
("Computing","GCSE","Edexcel"),
("French","A","AQA"),
("Maths","AS","OCR"),
("Product Design","GCSE","AQA"),
("History","AS","OCR");
CREATE TABLE IF NOT EXISTS entries
(
entry_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
date_of_exam DATE NOT NULL,
student_id INT UNSIGNED NOT NULL,
subject_id INT UNSIGNED NOT NULL,
FOREIGN KEY (student_id) REFERENCES students(student_id),
FOREIGN KEY (subject_id) REFERENCES subjects(subject_id),
PRIMARY KEY (entry_id)
);
INSERT INTO entries (date_of_exam, student_id, subject_id) VALUES
('2015-05-31', 1, 6),
('2015-05-31', 2, 10),
('2015-01-21', 3, 3),
('2015-01-21', 4, 7),
('2015-09-13', 5, 1),
('2015-09-13', 2, 9),
('2015-12-06', 4, 8),
('2015-12-06', 1, 2),
('2015-04-01', 3, 5),
('2015-04-01', 5, 4);
```
And the `SELECT`:
```
SELECT entries.*, subjects.subject_name, subjects.level_of_entry
FROM subjects
INNER JOIN entries,
students ON entries.subject_id = subjects.subject_id
WHERE subjects.exam_board LIKE "OCR%";
```
|
Provide the join condition right after the join to keep it simple and readable. As jarlh wrote, do not mix the explicit joins with the comma separated list. There is no need to do a like with pattern matching if you know the exact value you are looking for. Just use the = operator with the exact value.
```
SELECT *
FROM subjects
INNER JOIN entries ON entries.subject_id = subjects.subject_id -- join the 2nd table
INNER JOIN students ON entries.student_id = students.student_id -- join the 3rd table
WHERE subjects.exam_board = "OCR";
```
|
What I think you might need the code to look like is this:
```
select entries.*, subjects.subject_name, subjects.level_of_entry
from entries
join students on students.student_id = entries.student_id
join subjects on subjects.subject_id = entries.subject_id
where exam_board = 'OCR';
```
I hope this helps.
|
SQL: Inner join help for assessment
|
[
"",
"mysql",
"sql",
"inner-join",
""
] |
I have a table named Photos (Id, Title, Description, Date, Online).
I want to search the table by Title and Description, but I want see first the results of the Title search and then the results of the Description.
I tried a normal search but don't know how to separate them.
```
SELECT * FROM [Photos]
WHERE ([Online] = 1)
AND ([Description] like '%" + querytext + "%')
OR ([Title] like '%" + querytext + "%')
ORDER BY [Date] DESC, [Id] DESC
```
Then I tried to do it with the UNION, but didn't work either.
```
SELECT * FROM [Photos]
WHERE ([Online] = 1)
AND ([Description] like '%" + querytext + "%')
UNION
SELECT * FROM [Photos]
WHERE ([Online] = 1)
AND ([Title] like '%" + querytext+ "%')
ORDER BY [Date] DESC, [Id] DESC
```
|
You can use `CASE` expressions in `ORDER BY` clause:
```
SELECT *
FROM Photos
WHERE
Online = 1
AND Description like '%" + querytext + "%'
OR Title like '%" + querytext + "%'
ORDER BY
CASE WHEN Title like '%" + querytext + "%' THEN 1 ELSE 2 END,
[Date] DESC,
[Id] DESC
```
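To see the `CASE` ranking at work, here is a minimal sketch using SQLite through Python's `sqlite3` module (with a bound parameter instead of string concatenation; the data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE photos (title TEXT, description TEXT);
    INSERT INTO photos VALUES
      ('sunset', 'a beach'), ('beach day', 'sand'), ('dog', 'on the beach');
""")

q = '%beach%'
# Title matches rank 1, description-only matches rank 2.
titles = [r[0] for r in conn.execute("""
    SELECT title FROM photos
    WHERE title LIKE :q OR description LIKE :q
    ORDER BY CASE WHEN title LIKE :q THEN 1 ELSE 2 END, title
""", {"q": q})]
```

The title match (`beach day`) comes first, followed by the description-only matches.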
|
Add some column that will mark rows:
```
SELECT *, 1 AS Mark
FROM [Photos]
WHERE ([Online] = 1)
AND ([Title] like '%" + querytext + "%')
UNION ALL
SELECT *, 2 AS Mark
FROM [Photos]
WHERE ([Online] = 1)
AND ([Description] like '%" + querytext+ "%')
ORDER BY Mark, [Date] DESC, [Id] DESC
```
|
Search table in 2 fields show first result first
|
[
"",
"sql",
"union",
""
] |
I have this query
```
SELECT Ets_Se_Categorie AS Ets_Se_Categorie,COUNT(Ets_Se_Categorie) AS cpte
FROM TableV_MyTest
GROUP BY Ets_Se_Categorie
```
It gives me the following table / results
```
Ets_Se_Categorie cpte
(Seems to be empty string) 5531
Old place 8
Secondary 1066
Principal 4713
Subsidiary 7985
First 9
headquarter 31610
Main Headquarter 1587
```
The `(Seems to be empty string)` means it is an empty field
Aim is to have this
```
Ets_Se_Categorie cpte
Old place 8
Secondary 1066
Principal 4713
Subsidiary 7985
First 9
headquarter 31610
Main Headquarter 1587
```
I've created this query
```
SELECT *
FROM
(
SELECT Ets_Se_Categorie AS Ets_Se_Categorie,COUNT(Ets_Se_Categorie) AS cpte
FROM TableV_MyTest
GROUP BY Ets_Se_Categorie
) AS A
WHERE (A.Ets_Se_Categorie IS NOT NULL OR A.Ets_Se_Categorie != '')
```
Issue is nothing has changed ...
Why did I use a nested query?
I tried this, first
```
SELECT Ets_Se_Categorie,COUNT(Ets_Se_Categorie) AS cpte
FROM TableV_MyTest
WHERE (Ets_Se_Categorie IS NOT NULL OR Ets_Se_Categorie != '')
GROUP BY Ets_Se_Categorie
```
Nothing happened...
Any insights would be greatly appreciated. Probably very simple but I'm bit confused on the why.
Thanks!
|
You have used `OR` instead of `AND`, this should work:
```
WHERE (A.Ets_Se_Categorie IS NOT NULL AND A.Ets_Se_Categorie != '')
```
Since there was confusion about why the `OR` didn't work as expected:
Both conditions must be `true` to return a record. All records are either `NOT NULL` or `!= ''`. Consider a record with an empty string, this record is `NOT NULL`, so it is returned.
Why you didn't see the `NULL` records:
`NULL` is not a valid value, it means *undefined*. So nothing is either equal or unequal to `NULL`. Neither `NULL = NULL` nor `Anything != NULL` is `true`. You have to use `IS (NOT) NULL`.
In your original query you've filtered out the `NULL` values:
```
SELECT Ets_Se_Categorie,COUNT(Ets_Se_Categorie) AS cpte
FROM TableV_MyTest
WHERE (Ets_Se_Categorie IS NOT NULL OR Ets_Se_Categorie != '')
GROUP BY Ets_Se_Categorie
```
because a `NULL` would neither be `NOT NULL` nor `!= ''`(remember last section).
Conclusion: either use `IS NOT NULL AND != ''` or `ISNULL(Column, '') != ''` or `COALESCE(Column, '') != ''` which is ANSI sql (works in all databases).
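The `COALESCE` form of the conclusion can be demonstrated with a minimal sketch using SQLite through Python's `sqlite3` module; the data is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (cat TEXT);
    INSERT INTO t VALUES ('headquarter'), (''), (NULL), ('Secondary');
""")

# COALESCE maps NULL to '', so one comparison filters out both cases.
cats = [r[0] for r in conn.execute(
    "SELECT cat FROM t WHERE COALESCE(cat, '') != '' ORDER BY cat")]
```

Both the `NULL` row and the empty-string row are excluded in a single condition.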
|
You can just use [ISNULL](https://msdn.microsoft.com/en-GB/library/ms184325.aspx) instead. Like this:
`WHERE ISNULL(A.Ets_Se_Categorie, '') != ''`
|
Sql(Server) NULL or empty string still here
|
[
"",
"sql",
"sql-server",
""
] |
Ok I have tried searching for the answer to this and I just can't seem to find it. I am using SQL Server 2012. I am pulling data that will be going to a flat file. At the start and end of the flat file I need a header and footer with specific data. My issue comes in the footer in that I need the row count from the data set. Right now I have my query set up as such. This is simplified for the purpose of just trying to get the rowcount.
```
select 'header'
union
select mytable.data
from mytable
union
select 'footer'+convert(varchar(4),ROWCOUNT)
```
So the query works as needed for the purpose of the flat file I just have to populate ROWCOUNT with the numbers of rows from mytable.data.
expected output
```
Header|04||160119|||2.0|160119||
D|||||...
D|||||...
Footer|ROWCOUNT||blank||
```
UPDATE:
So wrote the code as such
```
select 'header'
union all
select mytable.data
from mytable
union all
select 'footer'+convert(varchar(4),@@Rowcount)
```
And it started working. I am not entirely sure why @@ROWCOUNT works now when it wasn't working earlier, but it works now. Thank you all for helping me work through this.
|
You can also try this using `CTE` -
```
;With dataCTE
AS
(
select data, COUNT(*) over() as RowCnt
from mytable
),
footerCTE
AS
(
select 'footer'+convert(varchar(4),RowCnt) as data from dataCTE
)
select 'header' AS data
union
select data from dataCTE
union
select data from footerCTE
```
|
Your query would look like this:
```
select 'header'
union
select mytable.data
from mytable
union
select 'footer '+CAST((SELECT COUNT(*) FROM mytable) AS VARCHAR(16));
```
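The header/data/footer shape with a scalar `COUNT(*)` subquery can be checked with a minimal sketch using SQLite through Python's `sqlite3` module; the data is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (data TEXT);
    INSERT INTO mytable VALUES ('D|row1'), ('D|row2');
""")

# UNION ALL keeps duplicates and avoids re-sorting the branches;
# the footer embeds the row count via a scalar subquery.
lines = [r[0] for r in conn.execute("""
    SELECT 'header' AS line
    UNION ALL
    SELECT data FROM mytable
    UNION ALL
    SELECT 'footer' || CAST((SELECT COUNT(*) FROM mytable) AS TEXT)
""")]
```

Note that without an `ORDER BY`, the row order of a compound select is not guaranteed by the standard, even if it usually comes back in branch order.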
|
Count rows from a seperate query
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"rowcount",
""
] |
I have a query
```
SELECT
convert(varchar, dates, 101)
FROM
database
WHERE
dates BETWEEN '04/01/2015' AND '04/30/2015'
```
It returns all April dates but the problem is it is for every year (i.e. 2010, 2011, 2012, 2013, 2014, 2015). I think that it is not reading the last 2 digits of the year and just pulling everything from 2000 on. Am I correct in thinking this? How do I fix this?
|
Your dates column is likely not defined to be a date type. You can update the column type... or explicitly convert it in your query:
```
Select convert(varchar, dates, 101)
FROM database
Where convert(date,dates) BETWEEN '04/01/2015' and '04/30/2015'
```
The reason it would give you all April dates if your dates field was interpreted as a string of text is because all the April dates would be sorted together alphabetically. If it were properly interpreting your field to be a date, it would be able properly filter them.
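The alphabetical-sorting effect described above can be seen directly with plain string comparison, no database needed:

```python
# MM/DD/YYYY text compares character by character, so a 2011 date falls
# "between" two 2015 dates whenever the month/day portion sorts there.
wrong_year_included = '04/01/2015' <= '04/15/2011' <= '04/30/2015'

# ISO YYYY-MM-DD strings put the year first, so they compare correctly.
iso_excluded = not ('2015-04-01' <= '2011-04-15' <= '2015-04-30')
```

Both flags come out true, which is exactly the behaviour the question observed.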
|
You need to provide your dates in a standard format that SQL Server will understand correctly. See [Correct way of specifying a given date in T-SQL](https://stackoverflow.com/questions/8517804/correct-way-of-specifying-a-given-date-in-t-sql).
For you it will be:
```
Where dates BETWEEN '20150401' and '20150430'
```
|
SQL query not pulling correct year
|
[
"",
"sql",
"sql-server",
""
] |
IS there a system table I can join to so I can query to find all tables with a column flagged as ROWGUIDCOL?
thanks!
|
You could utilize [`sys.columns`](https://msdn.microsoft.com/en-us/library/ms176106.aspx) with [`COLUMNPROPERTY`](https://msdn.microsoft.com/en-us/library/ms174968.aspx):
```
SELECT DISTINCT OBJECT_NAME(object_id) AS tab_name
FROM sys.columns
WHERE COLUMNPROPERTY(object_id, name, 'IsRowGuidCol') = 1
```
`SqlFiddleDemo`
---
```
CREATE TABLE MyTable(ID UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID());
CREATE TABLE MyTable2(ID UNIQUEIDENTIFIER ROWGUIDCOL PRIMARY KEY
DEFAULT NEWSEQUENTIALID());
```
Output:
```
ββββββββββββ
β tab_name β
β βββββββββββ£
β MyTable2 β
ββββββββββββ
```
|
You can use this sql query to achive your goal. `Objects.Type = 'U'` is user table.
```
SELECT O.name AS table_name,
C.name AS column_name
FROM sys.objects AS O
JOIN sys.columns AS C ON C.object_id = O.object_id
WHERE o.type = 'U'
AND C.is_rowguidcol = 1
```
|
Find all tables that have ROWGUIDCOL
|
[
"",
"sql",
"sql-server",
""
] |
I have made an unsuccessful attempt to merge values from two different tables. The first query displays just as I would like, but the 2nd one displays only the first row every time.
```
Select * From (Select date as Date, Sum(qty) as qtySum, Sum(weight)
as weightSum From stock_list Group by date) as A,
(Select Sum(weight) as weightSum,Count(barcode)
as barcodeCount From selected_items Group by date) as B Group by date;
```
This is the output that I get.
[](https://i.stack.imgur.com/t48mt.png)
> These is my selected\_items table.
[](https://i.stack.imgur.com/5WHGt.png)
This is my stock\_list.[](https://i.stack.imgur.com/gU6zR.png)
Both my queries work individually and I get the correct output only when I try to run them together it gives a problem for the 2nd query.Can anyone point out my mistake or show me a better way to do it.
This is what my final objective is[](https://i.stack.imgur.com/t4bGd.png)
|
The first problem is that you are grouping by date in your subquery `B`, but you don't select it, so your result set might be something like:
```
weightSum barcodeCount
---------------------------
26 8
9 14
4 7
```
This is the result for 3 dates, but you have no idea which date which row refers to.
Your next problem is that you are using a cross join, because there is no link between your two queries, this means if your first query returns:
```
Date qtySum weightSum
----------------------------------------
2016-01-20 1 1
2016-01-21 2 2
```
After you have done this cross join you end up:
```
Date qtySum a.weightSum b.weightSum barcodeCount
--------------------------------------------------------------------------
2016-01-20 1 1 26 8
2016-01-20 1 1 9 14
2016-01-20 1 1 4 7
2016-01-21 2 2 26 8
2016-01-21 2 2 9 14
2016-01-21 2 2 4 7
```
So every row from A is matched with every row from B giving you 6 total rows.
Your third problem is that you then group by date but don't perform any aggregates. Without delving too much into the fine print of the SQL standard, the GROUP BY clause, and functional dependency, let's simplify it: MySQL allows this, but you shouldn't do it unless you understand the limitations (this is covered in more detail in [this answer](https://stackoverflow.com/a/7596265/1048425)). Anything in the select that is not in the group by clause should probably be within an aggregate.
So, due to MySQL's GROUP BY extension, by selecting everything and grouping only by date you are effectively saying: take 1 row per date, but with no control over which row. It might be the first row from each group as displayed above, so the result you would get is:
```
Date qtySum a.weightSum b.weightSum barcodeCount
--------------------------------------------------------------------------
2016-01-20 1 1 26 8
2016-01-21 2 2 26 8
```
Which I think is why you are ending up with all the same values from the subquery B repeated.
So that covers what is wrong, now on to a solution, assuming there will be dates in `stock_list` that don't exist in `selected_items`, and vice versa you would need a full join, but since this is not supported in MySQL you would have to use `UNION`, the simplest way would be:
```
SELECT t.Date,
SUM(t.StockQuantity) AS StockQuantity,
SUM(t.StockWeight) AS StockWeight,
SUM(t.SelectedWeight) AS SelectedWeight,
SUM(t.BarcodeCount) AS BarcodeCount
FROM ( SELECT date,
SUM(qty) AS StockQuantity,
SUM(weight) AS StockWeight,
0 AS SelectedWeight,
0 AS BarcodeCount
FROM stock_list
GROUP BY Date
UNION ALL
SELECT date,
0 AS StockQuantity,
0 AS StockWeight,
SUM(weight) AS SelectedWeight,
COUNT(BarCode) AS BarcodeCount
FROM selected_items
GROUP BY Date
) AS t
GROUP BY t.Date;
```
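The `UNION ALL` + re-aggregate pattern above (emulating a full join) can be checked with a minimal sketch using SQLite through Python's `sqlite3` module; the data is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stock_list (date TEXT, qty INTEGER, weight INTEGER);
    CREATE TABLE selected_items (date TEXT, weight INTEGER, barcode TEXT);
    INSERT INTO stock_list VALUES ('2016-01-20', 2, 10), ('2016-01-21', 1, 5);
    INSERT INTO selected_items VALUES ('2016-01-20', 4, 'b1');
""")

# Each branch contributes zeros for the other branch's columns, so the
# outer SUM/GROUP BY stitches both tables together per date.
rows = conn.execute("""
    SELECT date, SUM(StockQuantity), SUM(StockWeight),
           SUM(SelectedWeight), SUM(BarcodeCount)
    FROM ( SELECT date, SUM(qty) AS StockQuantity, SUM(weight) AS StockWeight,
                  0 AS SelectedWeight, 0 AS BarcodeCount
           FROM stock_list GROUP BY date
           UNION ALL
           SELECT date, 0, 0, SUM(weight), COUNT(barcode)
           FROM selected_items GROUP BY date
         ) t
    GROUP BY date ORDER BY date
""").fetchall()
```

A date that appears in only one table still produces a row, with zeros for the missing side.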
**EDIT**
I can't test this, nor am I sure of your exact logic, but you can use [variables to calculate a running total in MySQL](https://stackoverflow.com/a/1290936/1048425). This should give an idea of how to do it:
```
SELECT Date,
StockQuantity,
StockWeight,
SelectedWeight,
BarcodeCount,
(@w := @w + StockWeight - SelectedWeight) AS TotalWeight,
(@q := @q + StockQuantity - BarcodeCount) AS TotalQuantity
FROM ( SELECT t.Date,
SUM(t.StockQuantity) AS StockQuantity,
SUM(t.StockWeight) AS StockWeight,
SUM(t.SelectedWeight) AS SelectedWeight,
SUM(t.BarcodeCount) AS BarcodeCount
FROM ( SELECT date,
SUM(qty) AS StockQuantity,
SUM(weight) AS StockWeight,
0 AS SelectedWeight,
0 AS BarcodeCount
FROM stock_list
GROUP BY Date
UNION ALL
SELECT date,
0 AS StockQuantity,
0 AS StockWeight,
SUM(weight) AS SelectedWeight,
COUNT(BarCode) AS BarcodeCount
FROM selected_items
GROUP BY Date
) AS t
GROUP BY t.Date
) AS t
CROSS JOIN (SELECT @w := 0, @q := 0) AS v
GROUP BY t.Date;
```
|
You could use a `join`. However, if the set of dates is not the same in both tables, then you would want a `full outer join`. But that is not available in MySQL. Instead:
```
select date, sum(qtySum), sum(weightsum1), sum(weightsum2), sum(barcodeCount)
from ((Select date as Date, Sum(qty) as qtySum, Sum(weight) as weightSum1,
NULL as weightsum2, NULL as barcodeCount
From stock_list
Group by date
) union all
(Select date, null, null, Sum(weight), Count(barcode) as barcodeCount
From selected_items
Group by date
)
) t
Group by date;
```
I'm not sure how your desired output corresponds to the query you have provided. But this should aggregate and combine the data from the two tables by date, so you can finalize the query.
|
Merging two tables without using join in mysql?
|
[
"",
"mysql",
"sql",
"date",
"join",
"union",
""
] |
I have a `EMPLOYEE` table with `EMP_ID`,`NAME` and `DEPARTMENT_ID`.
I want to order all the records with odd `DEPARTMENT_ID` ascending and even `DEPARTMENT_ID` descending.
Can it be done?
Thank you
|
You can use `CASE` in the `ORDER BY`, change the sign accordingly:
```
...
ORDER BY CASE WHEN DEPARTMENT_ID % 2 = 0
THEN -DEPARTMENT_ID
ELSE DEPARTMENT_ID END ASC;
```
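The sign-flip trick can be checked with a minimal sketch using SQLite through Python's `sqlite3` module; the data is invented for illustration. Note that with a single `ORDER BY` expression the negated evens sort before the positive odds, so the evens come first in descending order, then the odds ascending:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (emp_id INTEGER, department_id INTEGER);
    INSERT INTO employee VALUES (1,1),(2,2),(3,3),(4,4),(5,5),(6,6);
""")

# Evens map to -id (so larger evens sort first), odds keep +id.
ids = [r[0] for r in conn.execute("""
    SELECT department_id FROM employee
    ORDER BY CASE WHEN department_id % 2 = 0
                  THEN -department_id
                  ELSE department_id END ASC
""")]
```

The mapped sort keys are `-6, -4, -2, 1, 3, 5`, giving the order `6, 4, 2, 1, 3, 5`.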
|
Makes two queries, first one filters odds, second pairs. Order by as you wish and then Union queries.
```
SELECT e.* FROM (SELECT *
FROM EMPLOYEE
WHERE MOD(DEPARTMENT_ID, 2) = 1
ORDER BY DEPARTMENT_ID ASC) e
UNION ALL
SELECT e1.* FROM (SELECT *
FROM EMPLOYEE
WHERE MOD(DEPARTMENT_ID, 2) = 0
ORDER BY DEPARTMENT_ID DESC) e1
```
|
Ascending and Descending according to the column value
|
[
"",
"sql",
""
] |
This question has probably been asked a thousand times, but I didn't know how to formulate it on Google, so I didn't get any good answers.
Let's say I have a table "words" in SQL which stores a word in the first column of each row. What I want to do is store all the synonyms of that word as well and link them to the word in the first column. For instance, for the word "angry" I want to tie the words "mad, bitter, enraged, furious, irritated" etc. to it, so that I can easily fetch all the synonyms with an SQL query.
As I understand it, trying to store lists in a cell is a no-go. The only thing I can think of is to have a row entry for "angry" for each and every one of its synonyms. But if a word has a lot of synonyms, or I have a lot of words, that means a lot of entries (and a lot of unnecessary duplicates of each word). Is there a better practice to achieve my goal?
|
You use two tables, one for each entity with defined foreign key relationships. The tables would look like:
```
create table words (
WordId int not null primary key auto_increment,
Word varchar(255)
. . .
);
create table synonyms (
SynonymId int not null primary key auto_increment,
WordId int not null,
Synonym varchar(255),
. . .
constraint fk_wordid foreign key (WordId) references Words(WordId)
);
```
If synonyms are necessarily words, then you would use the id in the second table:
```
create table synonyms (
SynonymId int not null primary key auto_increment,
WordId int not null,
SynonymWordId int not null,
. . .
constraint fk_wordid foreign key (WordId) references Words(WordId),
constraint fk_synonymwordid foreign key (SynonymWordId) references Words(WordId)
);
```
|
I would also use two tables, but my second one would be different from Gordon's.
```
table word
wordID int pk
word varchar
other fields
table synonym
wordID int FK to word
synonymID int FK to word
pk is both fields
```
Query for the synonyms of 'angry':
```
select s.word
from word w join synonym sy on w.wordID = sy.wordID
join word s on sy.synonymID = s.wordID
where w.word = 'angry'
```
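The self-join above can be checked end to end in SQLite via Python (column names follow the answer's schema; the words themselves are made up):

```python
import sqlite3

# The word table plus a synonym link table that references it twice.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE word (wordID INTEGER PRIMARY KEY, word TEXT);
    CREATE TABLE synonym (wordID INTEGER REFERENCES word(wordID),
                          synonymID INTEGER REFERENCES word(wordID),
                          PRIMARY KEY (wordID, synonymID));
    INSERT INTO word VALUES (1, 'angry'), (2, 'mad'), (3, 'furious');
    INSERT INTO synonym VALUES (1, 2), (1, 3);
""")

# Join word -> synonym -> word so every synonymID resolves back to its text.
rows = con.execute("""
    SELECT s.word
    FROM word w
    JOIN synonym sy ON w.wordID = sy.wordID
    JOIN word s ON sy.synonymID = s.wordID
    WHERE w.word = 'angry'
    ORDER BY s.word
""").fetchall()
print([r[0] for r in rows])  # ['furious', 'mad']
```

Each word is stored exactly once; the link table holds only ID pairs, which is what avoids the duplication the question worries about.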
|
How to store data which is essentially a list tied to a unique value in SQL
|
[
"",
"mysql",
"sql",
"list",
"store",
"synonym",
""
] |
I have a long SQL Server query which reads from different tables, does some pre-processing, and produces several SELECT outputs. I want each output stored in a separate CSV file.
Simplified example:
```
-- some declare variables and other pre-processing,
-- then save the outputs to different CSV files
select * from Table1 -- save to Table1_Output.csv
select * from Table2 -- save to Table2_Output.csv
```
Now, I can use [this](https://stackoverflow.com/questions/14212641/export-table-from-database-to-csv-file) to run SQLCmd and save the output of a single select statement to a single CSV. I need to be able to save the various outputs to different CSVs.
The first option in the solution above is not viable as that requires manual intervention and cannot be used in a scripting environment.
Thanks
|
Personally, I'd just use PowerShell if you can't use SSIS:
```
$SQLServer = 'SQLSERVERNAME';
# If you have a non-default instance such as with SQL Express, use this format
# $SQLServer = 'SQLSERVERNAME\SQLINSTANCENAME';
$Database = 'MyDatabase';
$ConnectionString = "Data Source={0};Initial Catalog={1};Integrated Security=SSPI" -f $SQLServer, $Database;
$CommandText = @'
DECLARE @ID int;
SET @ID = 123456;
SELECT ID, First_Name FROM dbo.Person WHERE ID = @ID;
SELECT ID, Last_Name FROM dbo.Person WHERE ID = @ID;
'@;
$SqlConnection = New-Object System.Data.SqlClient.SqlConnection;
$SqlConnection.ConnectionString = $ConnectionString;
$SqlCommand = New-Object System.Data.SqlClient.SqlCommand;
$SqlCommand.CommandText = $CommandText;
$SqlCommand.Connection = $SqlConnection;
$SqlDataAdapter = New-Object System.Data.SqlClient.SqlDataAdapter;
$SqlDataAdapter.SelectCommand = $SqlCommand;
$DataSet = New-Object System.Data.DataSet;
$SqlConnection.Open();
$SqlDataAdapter.Fill($DataSet) | Out-Null;
$SqlConnection.Close();
$SqlConnection.Dispose();
$DataSet.Tables[0] | Export-Csv -NoTypeInformation -Path 'U:\FirstNames.csv';
$DataSet.Tables[1] | Export-Csv -NoTypeInformation -Path 'U:\LastNames.csv';
```
|
When multiple results are returned, `sqlcmd` prints a blank line between each result set in a batch. The output can then be processed with `sed '/^$/q'` or `csplit -s - '/^$/' '{*}'` to split into individual files: [How to split file on first empty line in a portable way in shell](https://stackoverflow.com/questions/1644532/how-to-split-file-on-first-empty-line-in-a-portable-way-in-shell-e-g-using-sed).
**UPD** If Cygwin is not an option, a custom VBScript can run `sqlcmd` through `WScript.CreateObject("WScript.Shell")` `.Exec` and process the output line by line with `.StdOut.ReadLine()` until `.StdOut.AtEndOfStream`: [Getting command line output in VBScript (without writing to files)](https://stackoverflow.com/questions/5393345/getting-command-line-output-in-vbscript-without-writing-to-files)
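The blank-line split can also be sketched in plain Python when neither Cygwin nor `sed` is available (the sample `sqlcmd` output below is fabricated for illustration):

```python
# Fabricated sqlcmd output: two result sets separated by the blank line
# sqlcmd prints between them.
raw = """ID,First_Name
1,Alice
2,Bob

ID,Last_Name
1,Smith
2,Jones
"""

# Split on the blank lines; each chunk would then be written to its own CSV.
chunks = [c.strip() for c in raw.split("\n\n") if c.strip()]
files = {f"result_{i}.csv": chunk + "\n"
         for i, chunk in enumerate(chunks, start=1)}
print(sorted(files))  # ['result_1.csv', 'result_2.csv']
```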
|
How to export multiple select statements to different csv files in SQL Server query
|
[
"",
"sql",
"sql-server",
""
] |
I have the following SQL:
```
SELECT fldTitle
FROM tblTrafficAlerts
ORDER BY fldTitle
```
Which returns the results (from a `NVARCHAR` column) in the following order:
```
A1M northbound within J17 Congestion
M1 J19 southbound exit Congestion
M1 southbound between J2 and J1 Congestion
M23 northbound between J8 and J7 Congestion
M25 anti-clockwise between J13 and J12 Congestion
M25 clockwise between J8 and J9 Broken down vehicle
M3 eastbound at the Fleet services between J5 and J4A Congestion
M4 J19 westbound exit Congestion
```
You'll see that M23 and M25 are listed above the M3 and M4 rows, which isn't pleasing to the eye, and when scanning a much longer list of results you wouldn't expect to read them in this order.
Therefore I would like the results sorted alphabetically, then numerically, to look like:
```
A1M northbound within J17 Congestion
M1 J19 southbound exit Congestion
M1 southbound between J2 and J1 Congestion
M3 eastbound at the Fleet services between J5 and J4A Congestion
M4 J19 westbound exit Congestion
M23 northbound between J8 and J7 Congestion
M25 anti-clockwise between J13 and J12 Congestion
M25 clockwise between J8 and J9 Broken down vehicle
```
So M3 and M4 appear above M23 and M25.
|
This should handle it. Also added some strange data to make sure the ordering also works on that:
```
SELECT x
FROM
(values
('A1M northbound within J17 Congestion'),
('M1 J19 southbound exit Congestion'),
('M1 southbound between J2 and J1 Congestion'),
('M23 northbound between J8 and J7 Congestion'),
('M25 anti-clockwise between J13 and J12 Congestion'),
('M25 clockwise between J8 and J9 Broken down vehicle'),
('M3 eastbound at the Fleet services between J5 and J4A Congestion'),
('M4 J19 westbound exit Congestion'),('x'), ('2'), ('x2')) x(x)
ORDER BY
LEFT(x, patindex('%_[0-9]%', x +'0')),
0 + STUFF(LEFT(x,
PATINDEX('%[0-9][^0-9]%', x + 'x1x')),1,
PATINDEX('%_[0-9]%', x + '0'),'')
```
Result:
```
2
A1M northbound within J17 Congestion
M1 J19 southbound exit Congestion
M1 southbound between J2 and J1 Congestion
M3 eastbound at the Fleet services between J5 and J4A Congestion
M4 J19 westbound exit Congestion
M23 northbound between J8 and J7 Congestion
M25 anti-clockwise between J13 and J12 Congestion
M25 clockwise between J8 and J9 Broken down vehicle
x
x2
```
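For comparison, the same "text first, then number" ordering can be done client-side in Python with a natural-sort key (titles copied from the question):

```python
import re

titles = [
    "A1M northbound within J17 Congestion",
    "M1 J19 southbound exit Congestion",
    "M23 northbound between J8 and J7 Congestion",
    "M25 anti-clockwise between J13 and J12 Congestion",
    "M3 eastbound at the Fleet services between J5 and J4A Congestion",
    "M4 J19 westbound exit Congestion",
]

def natural_key(s):
    # Split into alternating text/number runs so "M3" compares before "M23".
    return [int(part) if part.isdigit() else part
            for part in re.split(r"(\d+)", s)]

ordered = sorted(titles, key=natural_key)
for t in ordered:
    print(t)
```

This mirrors what the PATINDEX/STUFF expressions do inside the database: compare the leading letters as text and the following digits as numbers.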
|
I'd go like this:
EDIT: I separated this into two portions: the leading letter and the second part. This allows you to - if needed - treat the second part numerically (but there is a disturbing "M" in the first row...)
It would be easier to do just the second step: Cut at the first blank, check the length and add a '0' on sorting if needed.
```
DECLARE @tblTrafficAlerts TABLE(fldTitle VARCHAR(500));
INSERT INTO @tblTrafficAlerts VALUES
('A1M northbound within J17 Congestion')
,('M1 J19 southbound exit Congestion')
,('M1 southbound between J2 and J1 Congestion')
,('M23 northbound between J8 and J7 Congestion')
,('M25 anti-clockwise between J13 and J12 Congestion')
,('M25 clockwise between J8 and J9 Broken down vehicle')
,('M3 eastbound at the Fleet services between J5 and J4A Congestion')
,('M4 J19 westbound exit Congestion');
SELECT ta.fldTitle
,Leading.Letter
,Leading.SecondPart
FROM @tblTrafficAlerts AS ta
CROSS APPLY(SELECT SUBSTRING(ta.fldTitle,1,1) AS Letter
,SUBSTRING(ta.fldTitle,2,CHARINDEX(' ',ta.fldTitle)-1) AS SecondPart) AS Leading
ORDER BY Leading.Letter,CASE WHEN LEN(Leading.SecondPart)=1 THEN Leading.SecondPart + '0' ELSE Leading.SecondPart END
```
The result:
```
fldTitle Letter SecondPart
A1M northbound within J17 Congestion A 1M
M1 J19 southbound exit Congestion M 1
M1 southbound between J2 and J1 Congestion M 1
M23 northbound between J8 and J7 Congestion M 23
M25 anti-clockwise between J13 and J12 Congestion M 25
M25 clockwise between J8 and J9 Broken down vehicle M 25
M3 eastbound at the Fleet services between J5 and J4A Congestion M 3
M4 J19 westbound exit Congestion M 4
```
|
How do I sort an alphanumeric SQL Server NVARCHAR column in numerical order?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have some time series data. For example, look at the following values (let's assume time here is in minutes):
```
User Time Value
a 0 10
b 1 100
c 2 200
a 3 5
e 4 7
a 5 999
a 6 8
b 7 10
a 8 10
a 9 10
a 10 10
a 11 10
a 12 100
```
Now I want to find out whether, within any given 5-minute interval, a total SUM of more than 1000 is reached.
For example, with the data above I should get an output such as: user a, minutes 5, 6, 8, 9.
|
That's an easy task for a window function:
```
select *
from
(
select t.*
,sum("Value") -- cumulative sum over the previous five minutes
over (partition by "user"
order by "Time"
range 4 preceding) as sum_5_minutes
from Table1 t
) dt
where sum_5_minutes > 1000
```
See [fiddle](http://sqlfiddle.com/#!4/fcc94/6)
Edit: SQLFiddle is offline again, but you can also search the next 5 minutes.
Edit2: SQLFiddle offline, but if the datatype is a `TimeStamp` or `Date` you must use intervals instead of integers:
```
select *
from
(
select t.*
,sum("Value")
over (partition by "User"
order by "Time"
range interval '4' minute preceding) as sum_prev5_minutes
,sum("Value")
over (partition by "User"
order by "Time"
range between interval '0' minute preceding -- or "current row" if there are no duplicate timestamps
and interval '4' minute following) as sum_next5_minutes
from Table1 t
) dt
where sum_prev5_minutes > 1000
or sum_next5_minutes > 1000
```
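As a cross-check, here is a plain-Python equivalent of the `RANGE 4 PRECEDING` frame, run over the question's rows (a sketch, not a replacement for the SQL):

```python
from collections import defaultdict

# The (User, Time, Value) rows from the question.
rows = [
    ('a', 0, 10), ('b', 1, 100), ('c', 2, 200), ('a', 3, 5), ('e', 4, 7),
    ('a', 5, 999), ('a', 6, 8), ('b', 7, 10), ('a', 8, 10), ('a', 9, 10),
    ('a', 10, 10), ('a', 11, 10), ('a', 12, 100),
]

# Group by user, then for each row sum the values in the closed window
# [t - 4, t], which is exactly what RANGE 4 PRECEDING computes.
by_user = defaultdict(list)
for user, t, v in sorted(rows):
    by_user[user].append((t, v))

hits = []
for user, series in by_user.items():
    for t, _ in series:
        window_sum = sum(v for tt, v in series if t - 4 <= tt <= t)
        if window_sum > 1000:
            hits.append((user, t))
print(hits)  # [('a', 5), ('a', 6), ('a', 8), ('a', 9)]
```

The result matches the output the question expects: user a at minutes 5, 6, 8, and 9.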
|
To illustrate my comment on dnoeth's post (and please don't mark my answer as correct; he did the heavy lifting and deserves the green checkmark), the following shows how you can set the range at runtime...
```
WITH DAT AS (
SELECT 'a' u, 0 t, 10 v from dual union all
SELECT 'b' u, 1 t, 100 v from dual union all
SELECT 'c' u, 2 t, 200 v from dual union all
SELECT 'a' u, 3 t, 5 v from dual union all
SELECT 'e' u, 4 t, 7 v from dual union all
SELECT 'a' u, 5 t, 999 v from dual union all
SELECT 'a' u, 6 t, 8 v from dual union all
SELECT 'b' u, 7 t, 10 v from dual union all
SELECT 'a' u, 8 t, 10 v from dual union all
SELECT 'a' u, 9 t, 10 v from dual union all
SELECT 'a' u, 10 t, 10 v from dual union all
SELECT 'a' u, 11 t, 10 v from dual union all
SELECT 'a' u, 12 t, 100 v from dual )
-- imagine passing a variable in to this second query, setting it in a config table, or whatever.
-- This is just showing that you don't have to hard-code it into the actual select clause, and that the value can be determined at runtime.
, wind as (select 5 rng from dual)
select d.*
,sum(v) -- cumulative sum over the previous five minutes
over (partition by u order by t
range w.rng preceding) as sum_5_minutes
from dat d
join wind w on 1=1
order by u,t;
```
I also note that lad2025 is correct that this windowing WILL miss some rows in the set. To correct that, you need to bring back all rows in the set over the range for a user where the preceding five minutes exceed 1000. This works correctly for user Z below, but the original version would have brought back only the second row.
```
WITH DAT AS (
SELECT 'a' u, 0 t, 10 v from dual union all
SELECT 'b' u, 1 t, 100 v from dual union all
SELECT 'c' u, 2 t, 200 v from dual union all
SELECT 'a' u, 3 t, 5 v from dual union all
SELECT 'e' u, 4 t, 7 v from dual union all
SELECT 'a' u, 5 t, 999 v from dual union all
SELECT 'a' u, 6 t, 8 v from dual union all
SELECT 'b' u, 7 t, 10 v from dual union all
SELECT 'a' u, 8 t, 10 v from dual union all
SELECT 'a' u, 9 t, 10 v from dual union all
SELECT 'a' u, 10 t, 10 v from dual union all
SELECT 'a' u, 11 t, 10 v from dual union all
-- two Z rows added. In the initial version only the second row would be caught.
SELECT 'z' u, 10 t, 999 v from dual union all
SELECT 'z' u, 11 t, 10 v from dual union all
SELECT 'a' u, 12 t, 100 v from dual )
, wind as (select 3 rng from dual)
SELECT dd.*, sum_5_minutes
from dat dd
JOIN (
SELECT * FROM (
select d.*
,sum(v) -- cumulative sum over the previous five minutes
over (partition by u order by t
range w.rng preceding) as sum_5_minutes
,min(t) -- start point of the range that we are covering
over (partition by u order by t
range w.rng preceding) as rng_5_minutes
from dat d
join wind w on 1=1
) WHERE sum_5_minutes > 1000 ) fails
on dd.u = fails.u
and dd.t >= fails.rng_5_minutes
and dd.t <= fails.t
order by dd.u, dd.t;
```
|
Finding where a running sum of a time series is above given threshold
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |