Prompt | Chosen | Rejected | Title | Tags
|---|---|---|---|---|
I have a query where I iterate through a table: for each entry I iterate through another table and then compute some results. I use a cursor for iterating through the table. This query takes ages to complete, always more than 3 minutes. If I do something similar in C#, where the tables are arrays or dictionaries, it doesn't even take a second. What am I doing wrong, and how can I improve the efficiency?
```
DELETE FROM [QueryScores]
GO
INSERT INTO [QueryScores] (Id)
SELECT Id FROM [Documents]
DECLARE @Id NVARCHAR(50)
DECLARE myCursor CURSOR LOCAL FAST_FORWARD FOR
SELECT [Id] FROM [QueryScores]
OPEN myCursor
FETCH NEXT FROM myCursor INTO @Id
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @Score FLOAT = 0.0
DECLARE @CounterMax INT = (SELECT COUNT(*) FROM [Query])
DECLARE @Counter INT = 0
PRINT 'Document: ' + CAST(@Id AS VARCHAR)
PRINT 'Score: ' + CAST(@Score AS VARCHAR)
WHILE @Counter < @CounterMax
BEGIN
DECLARE @StemId INT = (SELECT [Query].[StemId] FROM [Query] WHERE [Query].[Id] = @Counter)
DECLARE @Weight FLOAT = (SELECT [tfidf].[Weight] FROM [TfidfWeights] AS [tfidf] WHERE [tfidf].[StemId] = @StemId AND [tfidf].[DocumentId] = @Id)
PRINT 'WEIGHT: ' + CAST(@Weight AS VARCHAR)
IF(@Weight > 0.0)
BEGIN
DECLARE @QWeight FLOAT = (SELECT [Query].[Weight] FROM [Query] WHERE [Query].[StemId] = @StemId)
SET @Score = @Score + (@QWeight * @Weight)
PRINT 'Score: ' + CAST(@Score AS VARCHAR)
END
SET @Counter = @Counter + 1
END
UPDATE [QueryScores] SET Score = @Score WHERE Id = @Id
FETCH NEXT FROM myCursor INTO @Id
END
CLOSE myCursor
DEALLOCATE myCursor
```
The logic is that I have a list of docs, and I have a question/query. I iterate through each and every doc, and then have a nested iteration through the query terms/words to find whether the doc contains these terms. If it does, then I add/multiply the pre-calculated scores.
|
The problem is that you're trying to use a set-based language to iterate through things like a procedural language. SQL requires a different mindset. You should almost never be thinking in terms of loops in SQL.
From what I can gather from your code, this should do what you're trying to do in all of those loops, but it does it in a single statement in a set-based manner, which is what SQL is good at.
```
INSERT INTO QueryScores (id, score)
SELECT
D.id,
SUM(CASE WHEN W.[Weight] > 0 THEN W.[Weight] * Q.[Weight] ELSE NULL END)
FROM
Documents D
CROSS JOIN Query Q
LEFT OUTER JOIN TfidfWeights W ON W.StemId = Q.StemId AND W.DocumentId = D.id
GROUP BY
D.id
```
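Not part of the original answer, but here is a minimal runnable sketch of this set-based version, using Python's bundled `sqlite3` module with toy data (the table and column names come from the question; the data values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE Documents (id TEXT PRIMARY KEY);
CREATE TABLE Query (Id INTEGER, StemId INTEGER, Weight REAL);
CREATE TABLE TfidfWeights (StemId INTEGER, DocumentId TEXT, Weight REAL);
CREATE TABLE QueryScores (id TEXT, score REAL);
INSERT INTO Documents VALUES ('doc1'), ('doc2');
INSERT INTO Query VALUES (0, 10, 0.5), (1, 20, 0.25);
INSERT INTO TfidfWeights VALUES (10, 'doc1', 0.8), (20, 'doc2', 0.4);
""")
# One statement replaces both loops: pair every document with every query
# term, look up the tf-idf weight, and sum the products per document.
c.execute("""
INSERT INTO QueryScores (id, score)
SELECT D.id,
       SUM(CASE WHEN W.Weight > 0 THEN W.Weight * Q.Weight END)
FROM Documents D
CROSS JOIN Query Q
LEFT OUTER JOIN TfidfWeights W
       ON W.StemId = Q.StemId AND W.DocumentId = D.id
GROUP BY D.id
""")
scores = dict(c.execute("SELECT id, score FROM QueryScores"))
print(scores)  # doc1 scores 0.8 * 0.5, doc2 scores 0.4 * 0.25
```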
|
You might not even need the `Documents` table:
```
INSERT INTO QueryScores (id, score)
SELECT W.DocumentId as [id]
, SUM(W.[Weight] * Q.[Weight]) as [score]
FROM Query Q
JOIN TfidfWeights W
ON W.StemId = Q.StemId
AND W.[Weight] > 0
GROUP BY W.DocumentId
```
|
SQL Query with Cursor optimization
|
[
"sql",
"sql-server",
"t-sql"
] |
I have an author table
| au\_id | au\_fname | au\_lname | city | state |
What I am trying to do is get a query of first and last names based on who lives in the same state as Sarah.
Here's what I have so far:
```
SELECT AU_FNAME, AU_LNAME FROM authors WHERE "STATE" like 'CA'
```
I don't want to use a static state in my code; I want it to be based on the selected person, Sarah in this case.
Thanks
|
Use a sub-query to find the state of `Sarah`, then filter on that `state`.
Try this
```
SELECT AU_FNAME, AU_LNAME
FROM authors
WHERE STATE in (select state from authors where au_fname = 'Sarah')
```
|
```
SELECT AU_FNAME, AU_LNAME FROM authors WHERE STATE in (select state from authors where au_fname = 'sarah')
```
or
```
select a1.AU_FNAME, a1.AU_LNAME FROM authors a1
inner join authors a2 on a1.state = a2.state
where a2.au_fname = 'sarah'
```
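A quick way to convince yourself the two forms are equivalent is to run both against a toy table (a sketch using Python's `sqlite3`; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE authors (au_id INTEGER, au_fname TEXT, au_lname TEXT, city TEXT, state TEXT);
INSERT INTO authors VALUES
 (1, 'Sarah', 'Smith', 'Oakland', 'CA'),
 (2, 'Bob',   'Jones', 'Fresno',  'CA'),
 (3, 'Ann',   'Lee',   'Reno',    'NV');
""")
# Sub-query form: everyone whose state matches Sarah's state.
sub = c.execute("""
SELECT au_fname, au_lname FROM authors
WHERE state IN (SELECT state FROM authors WHERE au_fname = 'Sarah')
ORDER BY au_id
""").fetchall()
# Self-join form: same result for this data.
join = c.execute("""
SELECT a1.au_fname, a1.au_lname
FROM authors a1
JOIN authors a2 ON a1.state = a2.state
WHERE a2.au_fname = 'Sarah'
ORDER BY a1.au_id
""").fetchall()
print(sub)
```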
|
Select * from a table using data from specific entry in table
|
[
"sql"
] |
Unfortunately I'm not that good at SQL, and I'm trying to get a join between three tables done.
here's a rough simplified table structure:
```
links: id, url, description
categories: id, name, path
link_cat: link_id, cat_id
```
The select statement I'm aiming for should have
```
links.id, link.url, link.description, categories.name, categories.path
```
Where links and categories are matched via the link\_cat table. I think that shouldn't be too hard as long as there's only one category for each link. This is what I'm assuming. If not it would be good to have another way that pulls multiple categories comma separated into the categories.name field.
I hope this is all understandable and doesn't sound too silly.
|
```
SELECT links.id, links.url, links.description, categories.name, categories.path
FROM links
INNER JOIN link_cat ON links.id = link_cat.link_id
INNER JOIN categories ON categories.id = link_cat.cat_id
```
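For the "multiple categories, comma separated" case the question mentions, a string-aggregate can collapse the categories per link. A sketch using `GROUP_CONCAT` (available in MySQL and SQLite; other engines use e.g. `STRING_AGG`), run here through Python's `sqlite3` with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE links (id INTEGER, url TEXT, description TEXT);
CREATE TABLE categories (id INTEGER, name TEXT, path TEXT);
CREATE TABLE link_cat (link_id INTEGER, cat_id INTEGER);
INSERT INTO links VALUES (1, 'http://a.example', 'A link');
INSERT INTO categories VALUES (10, 'news', '/news'), (20, 'tech', '/tech');
INSERT INTO link_cat VALUES (1, 10), (1, 20);
""")
# One row per link, with its category names collapsed into one field.
rows = c.execute("""
SELECT links.id, links.url, links.description,
       GROUP_CONCAT(categories.name, ', ') AS category_names
FROM links
JOIN link_cat   ON links.id = link_cat.link_id
JOIN categories ON categories.id = link_cat.cat_id
GROUP BY links.id, links.url, links.description
""").fetchall()
print(rows)
```

Note that `GROUP_CONCAT` does not guarantee element order unless you ask for it (MySQL supports `ORDER BY` inside the call).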
|
```
# Add each field you want to the select list
SELECT links.id, link.url, link.description, categories.name, categories.path
# Add the "links" table to the list of tables to select from
FROM links
# Add the "link_cat" table, specify "link_id" as the common field
JOIN link_cat USING (link_id)
# Add the "categories" table specifying the "cat_id" as the common field
JOIN categories USING (cat_id)
```
|
Join three (3) MySQL tables
|
[
"mysql",
"sql",
"join"
] |
**SCHEMA**
I have the following set-up in MySQL database:
```
CREATE TABLE items (
id SERIAL,
name VARCHAR(100),
group_id INT,
price DECIMAL(10,2),
KEY items_group_id_idx (group_id),
PRIMARY KEY (id)
);
INSERT INTO items VALUES
(1, 'Item A', NULL, 10),
(2, 'Item B', NULL, 20),
(3, 'Item C', NULL, 30),
(4, 'Item D', 1, 40),
(5, 'Item E', 2, 50),
(6, 'Item F', 2, 60),
(7, 'Item G', 2, 70);
```
**PROBLEM**
> I need to select:
>
> * **All** items with `group_id` that has `NULL` value, **and**
> * **One** item from each group identified by `group_id` having the **lowest** price.
**EXPECTED RESULTS**
```
+----+--------+----------+-------+
| id | name | group_id | price |
+----+--------+----------+-------+
| 1 | Item A | NULL | 10.00 |
| 2 | Item B | NULL | 20.00 |
| 3 | Item C | NULL | 30.00 |
| 4 | Item D | 1 | 40.00 |
| 5 | Item E | 2 | 50.00 |
+----+--------+----------+-------+
```
**POSSIBLE SOLUTION 1:** Two queries with `UNION ALL`
```
SELECT id, name, group_id, price FROM items
WHERE group_id IS NULL
UNION ALL
SELECT id, name, group_id, MIN(price) FROM items
WHERE group_id IS NOT NULL
GROUP BY group_id;
/* EXPLAIN */
+----+--------------+------------+------+--------------------+--------------------+---------+-------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------+------------+------+--------------------+--------------------+---------+-------+------+----------------------------------------------+
| 1 | PRIMARY | items | ref | items_group_id_idx | items_group_id_idx | 5 | const | 3 | Using where |
| 2 | UNION | items | ALL | items_group_id_idx | NULL | NULL | NULL | 7 | Using where; Using temporary; Using filesort |
| NULL | UNION RESULT | <union1,2> | ALL | NULL | NULL | NULL | NULL | NULL | |
+----+--------------+------------+------+--------------------+--------------------+---------+-------+------+----------------------------------------------+
```
However, it is undesirable to have two queries, since there will be more complex conditions in the `WHERE` clause and I would need to sort the final results.
**POSSIBLE SOLUTION 2:** `GROUP BY` on expression ([reference](https://stackoverflow.com/a/18883400/3549014))
```
SELECT id, name, group_id, MIN(price) FROM items
GROUP BY CASE WHEN group_id IS NOT NULL THEN group_id ELSE RAND() END;
/* EXPLAIN */
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
| 1 | SIMPLE | items | ALL | NULL | NULL | NULL | NULL | 7 | Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
```
Solution 2 seems to be faster and simpler to use, but I'm wondering whether there is a better approach in terms of performance.
**UPDATE**:
According to documentation referenced by [@axiac](https://stackoverflow.com/questions/36010981/group-only-certain-rows-with-group-by/36011093?noredirect=1#comment59674237_36010981), this query is illegal in SQL92 and earlier and may work in MySQL only.
|
According to [this answer](https://stackoverflow.com/a/28090544/3549014) by [@axiac](https://stackoverflow.com/users/4265352/axiac), a better solution in terms of compatibility and performance is shown below.
It is also explained in [SQL Antipatterns book, Chapter 15: Ambiguous Groups](https://rads.stackoverflow.com/amzn/click/com/1934356557).
To improve performance, a combined index is also added on `(group_id, price, id)`.
> **SOLUTION**
```
SELECT a.id, a.name, a.group_id, a.price
FROM items a
LEFT JOIN items b
ON a.group_id = b.group_id
AND (a.price > b.price OR (a.price = b.price and a.id > b.id))
WHERE b.price is NULL;
```
See [explanation on how it works](https://stackoverflow.com/a/28090544/3549014) for more details.
As a convenient side effect, this query also works in my case, where I needed to include **all** records with `group_id` equal to `NULL` **and** **one** item from each group with the lowest price.
> **RESULT**
```
+----+--------+----------+-------+
| id | name | group_id | price |
+----+--------+----------+-------+
| 1 | Item A | NULL | 10.00 |
| 2 | Item B | NULL | 20.00 |
| 3 | Item C | NULL | 30.00 |
| 4 | Item D | 1 | 40.00 |
| 5 | Item E | 2 | 50.00 |
+----+--------+----------+-------+
```
> **EXPLAIN**
```
+----+-------------+-------+------+-------------------------------+--------------------+---------+----------------------------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+-------------------------------+--------------------+---------+----------------------------+------+--------------------------+
| 1 | SIMPLE | a | ALL | NULL | NULL | NULL | NULL | 7 | |
| 1 | SIMPLE | b | ref | PRIMARY,id,items_group_id_idx | items_group_id_idx | 5 | agi_development.a.group_id | 1 | Using where; Using index |
+----+-------------+-------+------+-------------------------------+--------------------+---------+----------------------------+------+--------------------------+
```
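To see the anti-join at work, here is the exact schema and data from the question run through Python's `sqlite3` (a sketch; SQLite stands in for MySQL, which is fine for this query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, group_id INTEGER, price REAL);
INSERT INTO items VALUES
 (1, 'Item A', NULL, 10), (2, 'Item B', NULL, 20), (3, 'Item C', NULL, 30),
 (4, 'Item D', 1, 40), (5, 'Item E', 2, 50), (6, 'Item F', 2, 60), (7, 'Item G', 2, 70);
""")
# Rows with NULL group_id never match the join condition, so they always
# survive the b.price IS NULL filter; within a group, only the cheapest
# row (lowest id as tie-break) finds no "better" partner.
rows = c.execute("""
SELECT a.id, a.name, a.group_id, a.price
FROM items a
LEFT JOIN items b
  ON a.group_id = b.group_id
 AND (a.price > b.price OR (a.price = b.price AND a.id > b.id))
WHERE b.price IS NULL
ORDER BY a.id
""").fetchall()
print([r[0] for r in rows])
```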
|
You can do this using `where` conditions:
`SQLFiddle Demo`
```
select t.*
from t
where t.group_id is null or
t.price = (select min(t2.price)
from t t2
where t2.group_id = t.group_id
);
```
Note that this returns all rows with the minimum price, if there is more than one for a given group.
EDIT:
I believe the following fixes the problem of multiple rows:
```
select t.*
from t
where t.group_id is null or
t.id = (select t2.id
from t t2
where t2.group_id = t.group_id
order by t2.price asc
limit 1
);
```
Unfortunately, SQL Fiddle is not working for me right now, so I cannot test it.
|
Group only certain rows with GROUP BY
|
[
"mysql",
"sql",
"group-by",
"greatest-n-per-group"
] |
Imagine the following two tables, named "Users" and "Orders" respectively:
```
ID NAME
1 Foo
2 Bar
3 Qux
ID USER ITEM SPEC TIMESTAMP
1 1 12 4 20150204102314
2 1 13 6 20151102160455
3 3 25 9 20160204213702
```
What I want to get as the output is:
```
USER ITEM SPEC TIMESTAMP
1 12 4 20150204102314
2 NULL NULL NULL
3 25 9 20160204213702
```
In other words: do a LEFT OUTER JOIN between Users and Orders, and if you don't find any orders for that user, return null; but if you do find some, only return the first one (the earliest one based on timestamp).
If I use only a LEFT OUTER JOIN, it will return two rows for user 1, and I don't want that. I thought of nesting the LEFT OUTER JOIN inside another select that would GROUP BY the other fields and fetch the MIN(TIMESTAMP), but that doesn't work either, because I need to have "SPEC" in my GROUP BY, and since those two orders have different SPECs, they both still appear.
Any ideas on how to achieve the desired result is appreciated.
|
The best way I can think of is using `OUTER APPLY`:
```
SELECT *
FROM Users u
OUTER APPLY (SELECT TOP 1 *
             FROM Orders o
             WHERE u.ID = o.[USER]
             ORDER BY TIMESTAMP ASC) ou
```
Additionally, creating the below non-clustered index on the `ORDERS` table will help increase the performance of the query:
```
CREATE NONCLUSTERED INDEX IX_ORDERS_USER
ON ORDERS ([USER], TIMESTAMP)
INCLUDE ([ITEM], [SPEC]);
```
|
This should do the trick :
```
SELECT Users.ID, Orders2.[USER], Orders2.ITEM, Orders2.SPEC, Orders2.TIMESTAMP
FROM Users
LEFT JOIN
(
    SELECT Orders.ID, Orders.[USER], Orders.ITEM, Orders.SPEC, Orders.TIMESTAMP,
           ROW_NUMBER() OVER (PARTITION BY Orders.[USER] ORDER BY TIMESTAMP ASC) AS RowNum
    FROM Orders
) Orders2 ON Orders2.[USER] = Users.ID AND RowNum = 1
```
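For reference, the `ROW_NUMBER()` idea can be checked against the question's sample data. A sketch through Python's `sqlite3` (requires SQLite 3.25+ for window functions), partitioning by user and ordering by timestamp ascending so row 1 is the earliest order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE Users  (ID INTEGER, NAME TEXT);
CREATE TABLE Orders (ID INTEGER, [USER] INTEGER, ITEM INTEGER, SPEC INTEGER, TIMESTAMP TEXT);
INSERT INTO Users VALUES (1, 'Foo'), (2, 'Bar'), (3, 'Qux');
INSERT INTO Orders VALUES
 (1, 1, 12, 4, '20150204102314'),
 (2, 1, 13, 6, '20151102160455'),
 (3, 3, 25, 9, '20160204213702');
""")
# rn = 1 marks each user's earliest order; users without orders keep NULLs.
rows = c.execute("""
SELECT u.ID, o.ITEM, o.SPEC, o.TIMESTAMP
FROM Users u
LEFT JOIN (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY [USER] ORDER BY TIMESTAMP ASC) AS rn
    FROM Orders
) o ON o.[USER] = u.ID AND o.rn = 1
ORDER BY u.ID
""").fetchall()
print(rows)
```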
|
LEFT OUTER JOIN and only return the first match
|
[
"sql",
"sql-server",
"left-join",
"outer-join"
] |
I am receiving an error message for this one. The error message is:
> Data type varchar of receiving variable is not equal to the data type
> nvarchar of column 'VEHICLE\_ID2\_FW'
Please help, thanks
```
DECLARE @IMPORTID INT
DECLARE @LASTID INT
DECLARE @VEHICLEID VARCHAR (20)
SELECT @LASTID = (SELECT LAST_REFERENCE_FW FROM REFERENCE_FW WHERE RECORD_TYPE_FW = 'VEHICLES_ORDERS_FW' AND REFERENCE_FIELD_FW = 'VEHICLE_ID2_FW' AND ARCHIVE_STATUS_FW ='N')
SELECT @IMPORTID = (SELECT IMPORT_ID_FW FROM VEHICLES_ORDERS_FW WHERE RECORD_NUMBER_FW = %RECORD_NUMBER_FW%)
SELECT @VEHICLEID = (SELECT VEHICLE_ID2_FW FROM VEHICLES_ORDERS_FW WHERE RECORD_NUMBER_FW = %RECORD_NUMBER_FW%)
IF @IMPORTID IS NOT NULL AND @VEHICLEID IS NULL
BEGIN
UPDATE VEHICLES_ORDERS_FW
SET @LASTID = VEHICLES_ORDERS_FW.VEHICLE_ID2_FW = @LASTID+1
FROM VEHICLES_ORDERS_FW
WHERE RECORD_NUMBER_FW = %RECORD_NUMBER_FW%;
UPDATE VEHICLES_ORDERS_FW
SET VEHICLES_ORDERS_FW.VEHICLE_ID2_FW = 'FW'+VEHICLE_ID2_FW
FROM VEHICLES_ORDERS_FW
WHERE RECORD_NUMBER_FW = %RECORD_NUMBER_FW%
END
```
|
Try replacing your script with the script below:
```
DECLARE @IMPORTID INT
DECLARE @LASTID INT
DECLARE @VEHICLEID VARCHAR (20)
SELECT @LASTID = (SELECT LAST_REFERENCE_FW FROM REFERENCE_FW WHERE RECORD_TYPE_FW like N'VEHICLES_ORDERS_FW' AND REFERENCE_FIELD_FW LIKE N'VEHICLE_ID2_FW' AND ARCHIVE_STATUS_FW LIKE N'N')
SELECT @IMPORTID = (SELECT IMPORT_ID_FW FROM VEHICLES_ORDERS_FW WHERE CAST(RECORD_NUMBER_FW AS NVARCHAR) LIKE CONCAT(N'%', CAST(RECORD_NUMBER_FW AS NVARCHAR),N'%'))
SELECT @VEHICLEID = (SELECT VEHICLE_ID2_FW FROM VEHICLES_ORDERS_FW WHERE CAST(RECORD_NUMBER_FW AS NVARCHAR) = CONCAT(N'%', CAST(RECORD_NUMBER_FW AS NVARCHAR),N'%'))
IF @IMPORTID IS NOT NULL AND @VEHICLEID IS NULL
BEGIN
UPDATE VEHICLES_ORDERS_FW
SET @LASTID = @LASTID+1
FROM VEHICLES_ORDERS_FW
WHERE CAST(RECORD_NUMBER_FW AS NVARCHAR) LIKE CONCAT(N'%', CAST(RECORD_NUMBER_FW AS NVARCHAR),N'%');
UPDATE VEHICLES_ORDERS_FW
SET VEHICLES_ORDERS_FW.VEHICLE_ID2_FW = CONCAT(N'FW',VEHICLE_ID2_FW)
FROM VEHICLES_ORDERS_FW
WHERE CAST(RECORD_NUMBER_FW AS NVARCHAR) LIKE CONCAT(N'%', CAST(RECORD_NUMBER_FW AS NVARCHAR),N'%');
END
```
|
Maybe here is the problem:
```
set @LASTID = VEHICLES_ORDERS_FW.VEHICLE_ID2_FW = @LASTID + 1
```
It would be good if you provided the table structure.
|
Data type varchar of receiving variable is not equal to the data type nvarchar of column
|
[
"sql",
"sql-server",
"t-sql"
] |
I have 2 tables,
Table 1 is fact\_table
```
style sales
ABC 100
DEF 150
```
and Table 2 is m\_product
```
product_code style category
ABCS ABC Apparel
ABCM ABC Apparel
ABCL ABC Apparel
DEF38 DEF Shoes
DEF39 DEF Shoes
DEF40 DEF Shoes
```
and I want to join those two tables, I want the result is
```
style category sales
ABC Apparel 100
DEF Shoes 150
```
I created a query like this, but it failed:
```
Select t1.style, t2.category, t1.sales
From fact_table t1
Inner Join m_product t2
On t1.style = t2.style
```
The result
```
style category sales
ABC Apparel 100
ABC Apparel 100
ABC Apparel 100
```
If I use SUM(sales) with GROUP BY, the result will sum all the sales.
Do I have to use AVG(sales) and then GROUP BY, or do you have another option?
I am using SQL Server.
Thanks
|
Use a sub-query:
```
Select t1.style, t2.category, t1.sales
From fact_table t1
Inner Join (SELECT DISTINCT style, category FROM m_product) t2
On t1.style = t2.style
```
BTW, you need to change the DB design to move the category to a separate table
|
Maybe something like this:
```
select p.style, p.category, f.sales
from (select distinct style, category
      from m_product) p
join fact_table f
  on p.style = f.style
```
|
Join in SQL Server with several data
|
[
"sql",
"sql-server"
] |
I have a scenario whereby the `accountNo` is not a `Primary Key` and it has duplicates and I would like to search for accounts that have `priority` with the value of `'0'`. The `priority` field is a `varchar` data type. The following table is an example:
```
ID AccountNo Priority
1 20 0
2 22 0
3 30 0
4 20 1
5 25 0
6 22 0
```
I want to get duplicate or single records of accounts that have a `priority` of `'0'`, with the condition that no other duplicate of the same `accountNo` has a `priority` of `'1'`. For example, `accountNo 20` has 2 records, but one has a `priority` of `'1'`, so it shouldn't be in the output. `accountNo 22` also has 2 records, but both have a `priority` of `'0'`, so it is included in the result.
```
AccountNo
22
30
25
```
The problem I encountered here is that I can only find accounts with `priority` `'0'`, but those accounts may still have duplicates with `priority` valued `'1'`. The following code is what I have implemented:
```
SELECT AccountNo
FROM CustTable
WHERE PRIORITY = '0'
GROUP BY AccountNo
```
|
If the `Priority` field only takes values in `('0', '1')`, then try this:
```
SELECT AccountNo
FROM CustTable
GROUP BY AccountNo
HAVING MAX(Priority) = '0'
```
otherwise you can use:
```
SELECT AccountNo
FROM CustTable
GROUP BY AccountNo
HAVING COUNT(CASE WHEN Priority <> '0' THEN 1 END) = 0
```
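The conditional-count version can be checked directly against the question's sample rows (a sketch via Python's `sqlite3`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE CustTable (ID INTEGER, AccountNo INTEGER, Priority TEXT);
INSERT INTO CustTable VALUES
 (1, 20, '0'), (2, 22, '0'), (3, 30, '0'),
 (4, 20, '1'), (5, 25, '0'), (6, 22, '0');
""")
# Keep only accounts where no row has a priority other than '0'.
accounts = [r[0] for r in c.execute("""
SELECT AccountNo
FROM CustTable
GROUP BY AccountNo
HAVING COUNT(CASE WHEN Priority <> '0' THEN 1 END) = 0
ORDER BY AccountNo
""")]
print(accounts)
```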
|
This should work regardless of the RDBMS you use, as not all of them accept a `GROUP BY` without selecting an aggregate function (e.g. `MAX()`).
It'd be better if you mentioned which RDBMS you use, going forward.
```
SELECT DISTINCT tmp.AccountNo
FROM
(SELECT AccountNo, MAX(Priority) AS MaxPriority
FROM CustTable
GROUP BY AccountNo
HAVING MAX(Priority) = '0'
) tmp
```
|
Finding specific values in SQL
|
[
"sql"
] |
For my website, I have a loyalty program where a customer gets some goodies if they've spent $100 within the last 30 days, with a query like the one below:
```
SELECT u.username, SUM(total-shipcost) as tot
FROM orders o
LEFT JOIN users u
ON u.userident = o.user
WHERE shipped = 1
AND user = :user
AND date >= DATE(NOW() - INTERVAL 30 DAY)
```
`:user` being their user ID. Column 2 of this result gives how much a customer has spent in the last 30 days; if it's over 100, then they get the bonus.
I want to display to the user which day they'll leave the loyalty program. Something like "x days until bonus expires", but how do I do this?
Take today's date, March 16th, and a user's order history:
```
id | tot | date
-----------------------
84 38 2016-03-05
76 21 2016-02-29
74 49 2016-02-20
61 42 2015-12-28
```
This user is part of the loyalty program now but leaves it on March 20th. What SQL could I do which returns how many days (4) a user has left on the loyalty program?
If the user then placed another order:
```
id | tot | date
-----------------------
87 12 2016-03-09
```
They're still in the loyalty program until the 20th, so the days remaining doesn't change in this instance, but if the total were 50 instead, then they instead leave the program on the 29th (so instead of 4 days it's 13 days remaining). For what it's worth, I care only about 30 days prior to the current date. No consideration for months with 28, 29, 31 days is needed.
Some `create table` code:
```
create table users (
userident int,
username varchar(100)
);
insert into users values
(1, 'Bob');
create table orders (
id int,
user int,
shipped int,
date date,
total decimal(6,2),
shipcost decimal(3,2)
);
insert into orders values
(84, 1, 1, '2016-03-05', 40.50, 2.50),
(76, 1, 1, '2016-02-29', 22.00, 1.00),
(74, 1, 1, '2016-02-20', 56.31, 7.31),
(61, 1, 1, '2015-12-28', 43.10, 1.10);
```
An example output of what I'm looking for is:
```
userident | username | days_left
--------------------------------
1 Bob 4
```
This is using March 16th as today for use with `DATE(NOW())` to remain consistent with the previous bits of the question.
|
You would need to take the following steps (per user):
* join the *orders* table with itself to calculate sums for different (bonus) starting dates, for any of the starting dates that are in the last 30 days
* select from those records only those starting dates which yield a sum of 100 or more
* select from those records only the one with the most recent starting date: this is the start of the bonus period for the selected user.
Here is a query to do that:
```
SELECT u.userident,
u.username,
MAX(base.date) AS bonus_start,
DATE(MAX(base.date) + INTERVAL 30 DAY) AS bonus_expiry,
30-DATEDIFF(NOW(), MAX(base.date)) AS bonus_days_left
FROM users u
LEFT JOIN (
SELECT o.user,
first.date AS date,
SUM(o.total-o.shipcost) as tot
FROM orders first
INNER JOIN orders o
ON o.user = first.user
AND o.shipped = 1
AND o.date >= first.date
WHERE first.shipped = 1
AND first.date >= DATE(NOW() - INTERVAL 30 DAY)
GROUP BY o.user,
first.date
HAVING SUM(o.total-o.shipcost) >= 100
) AS base
ON base.user = u.userident
GROUP BY u.username,
u.userident
```
Here is a [fiddle](http://sqlfiddle.com/#!9/db79d/4).
With this input as orders:
```
+----+------+---------+------------+-------+----------+
| id | user | shipped | date | total | shipcost |
+----+------+---------+------------+-------+----------+
| 61 | 1 | 1 | 2015-12-28 | 42 | 0 |
| 74 | 1 | 1 | 2016-02-20 | 49 | 0 |
| 76 | 1 | 1 | 2016-02-29 | 21 | 0 |
| 84 | 1 | 1 | 2016-03-05 | 38 | 0 |
| 87 | 1 | 1 | 2016-03-09 | 50 | 0 |
+----+------+---------+------------+-------+----------+
```
The above query will return this output (when executed on 2016-03-20):
```
+-----------+----------+-------------+--------------+-----------------+
| userident | username | bonus_start | bonus_expiry | bonus_days_left |
+-----------+----------+-------------+--------------+-----------------+
| 1         | Bob      | 2016-02-29  | 2016-03-30   | 10              |
+-----------+----------+-------------+--------------+-----------------+
```
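To make the logic reproducible, here is a sketch of the same self-join via Python's `sqlite3`, with `NOW()` replaced by a bound literal date ('2016-03-20') and the `first` alias renamed to `w`; dates are compared as ISO-8601 strings and `julianday` stands in for `DATEDIFF`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE users (userident INTEGER, username TEXT);
CREATE TABLE orders (id INTEGER, user INTEGER, shipped INTEGER,
                     date TEXT, total REAL, shipcost REAL);
INSERT INTO users VALUES (1, 'Bob');
INSERT INTO orders VALUES
 (61, 1, 1, '2015-12-28', 42, 0),
 (74, 1, 1, '2016-02-20', 49, 0),
 (76, 1, 1, '2016-02-29', 21, 0),
 (84, 1, 1, '2016-03-05', 38, 0),
 (87, 1, 1, '2016-03-09', 50, 0);
""")
today = '2016-03-20'
# w ranges over candidate window-start orders; the inner join sums every
# shipped order from that start date onward, and HAVING keeps only starts
# whose running total reaches 100.
row = c.execute("""
SELECT u.userident, u.username,
       MAX(base.date) AS bonus_start,
       30 - CAST(julianday(?) - julianday(MAX(base.date)) AS INTEGER) AS days_left
FROM users u
LEFT JOIN (
    SELECT o.user AS user, w.date AS date, SUM(o.total - o.shipcost) AS tot
    FROM orders w
    JOIN orders o
      ON o.user = w.user AND o.shipped = 1 AND o.date >= w.date
    WHERE w.shipped = 1
      AND w.date >= DATE(?, '-30 day')
    GROUP BY o.user, w.date
    HAVING SUM(o.total - o.shipcost) >= 100
) base ON base.user = u.userident
GROUP BY u.userident, u.username
""", (today, today)).fetchone()
print(row)
```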
|
The following is basically how to do what you want. Note that references to "30 days" are rough estimates, and what you may be looking for is "29 days" or "31 days", as needed to get the exact date you want.
1. Retrieve the list of dates and amounts that are still active, i.e., within the last 30 days (as you did in your example), as a table (I'll call it **Active**) like the one you showed.
2. Join that new table (**Active**) with the original table, where a row from **Active** is joined to all of the rows of the original table using the *date* fields. Compute a total of the amounts from the original table. The new table would have a **Date** field from **Active** and a **Total** field that is the sum of all the amounts in the joined records.
3. Select from the resulting table all records where the **Total** is greater than 100.00, and keep the minimum **Date** of those records.
4. Compute 30 days ahead from that date to find the ending date of the loyalty program.
|
Finding date where conditions within 30 days has elapsed
|
[
"mysql",
"sql",
"date"
] |
I would like to get one distinct record for each Design and Type, with a random ID from each group.
It is not possible to use
```
select distinct Design, Type, ID from table
```
It will return all rows.
This is structure of my table
```
Design | Type | ID
old chair 1
old table 2
old chair 3
new chair 4
new table 5
new table 6
newest chair 7
```
Possible result
```
Design | Type | ID
old table 2
old chair 3
new chair 4
new table 6
newest chair 7
```
|
If it doesn't matter which one, you can always take the maximum/minimum one:
```
SELECT design,type,max(ID)
FROM YourTable
GROUP BY design,type
```
This won't be random; it will always take the maximum/minimum one, but it doesn't seem like that matters.
|
Hope this one helps you :
```
WITH CTE AS
(
SELECT Design, Type, ID, ROW_NUMBER() OVER (PARTITION BY Design,
Type ORDER BY id DESC) rid
FROM table
)
SELECT Design, Type, ID FROM CTE WHERE rid = 1 ORDER BY ID
```
|
select distinct two columns with random id
|
[
"sql",
"random",
"db2",
"distinct"
] |
How do I sort this
```
a 1 15
a 2 3
a 3 34
b 1 55
b 2 44
b 3 8
```
to (by third column sum):
```
b 1 55
b 2 44
b 3 8
a 1 15
a 2 3
a 3 34
```
since (55+44+8) > (15+3+34)
|
If you are using SQL Server, Oracle, or PostgreSQL, you could use a windowed `SUM`:
```
SELECT *
FROM tab
ORDER BY SUM(col3) OVER(PARTITION BY col) DESC, col2
```
`LiveDemo`
Output:
```
+-----+------+------+
| col | col2 | col3 |
+-----+------+------+
| b   |    1 |   55 |
| b   |    2 |   44 |
| b   |    3 |    8 |
| a   |    1 |   15 |
| a   |    2 |    3 |
| a   |    3 |   34 |
+-----+------+------+
```
|
You can do this using ANSI standard window functions. I prefer to use a subquery although this is not strictly necessary:
```
select col1, col2, col3
from (select t.*, sum(col3) over (partition by col1) as sumcol3
from t
) t
order by sumcol3 desc, col3 desc;
```
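Both answers hinge on ordering by a windowed sum; the behavior can be verified on the question's data with Python's `sqlite3` (SQLite 3.25+ also allows window functions in `ORDER BY`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE tab (col TEXT, col2 INTEGER, col3 INTEGER);
INSERT INTO tab VALUES
 ('a', 1, 15), ('a', 2, 3), ('a', 3, 34),
 ('b', 1, 55), ('b', 2, 44), ('b', 3, 8);
""")
# Order groups by their total of col3 (descending), then by col2 within a group.
rows = c.execute("""
SELECT col, col2, col3
FROM tab
ORDER BY SUM(col3) OVER (PARTITION BY col) DESC, col2
""").fetchall()
print(rows)
```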
|
sql sorting by subgroup sum data
|
[
"sql",
"sorting",
"group-by"
] |
I want to extract all of the string after, and including, 'Th' (in a column of text called 'COL_A') and before, and including, the final full stop (period). So if the string is:
```
'3padsa1st/The elephant sat by house No.11, London Street.sadsa129'
```
I want it to return:
```
'The elephant sat by house No.11, London Street.'
```
At the moment I have:
```
substr(SUBSTR(COL_A, INSTR(COL_A,'Th', 1, 1)),1,instr(SUBSTR(COL_A, INSTR(COL_A,'Th', 1, 1)),'.'))
```
This nearly works but returns the text after and including 'Th' (which is right), but returns the text before the first full stop (period), rather than the final one. So it returns:
```
The elephant sat by house No.
```
Thanks in advance for any help!
|
Assuming that the full stop is given by the last period in your string, you can try something like this:
```
select regexp_substr('3padsa1st/The elephant sat by house No.11, London Street.sadsa129',
'(Th.*)\.')
from dual
```
|
From [the INSTR docs](http://docs.oracle.com/database/121/SQLRF/functions089.htm#SQLRF00651), you can use a negative value of `position` to search backwards from the end of the string, so this returns the position of the last full stop:
```
instr (cola, '.', -1)
```
So you can do this:
```
substr ( cola
, instr (cola, 'Th')
, instr (cola, '.', -1) - instr(cola, 'Th') + 1
)
```
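The same find-first/find-last slicing is easy to mirror in plain Python, which helps check what the Oracle expression should return (`find` plays the role of `INSTR` with a positive position, `rfind` the role of `INSTR` with position -1):

```python
s = '3padsa1st/The elephant sat by house No.11, London Street.sadsa129'

start = s.find('Th')        # position of the first 'Th'
stop = s.rfind('.')         # position of the last full stop
result = s[start:stop + 1]  # inclusive slice, like SUBSTR with length stop - start + 1
print(result)
```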
|
Extract string after character and before final full stop (/period) in SQL
|
[
"sql",
"string",
"oracle",
"plsql"
] |
I need to generate a text file containing employee names, and each name field should be 20 characters long. E.g., if the name is longer than 20 characters, display only the first 20; if it is shorter than 20, display the name and pad the rest with blank spaces (left aligned).
I tried the following example
**1)**
```
select right(' ' + CONVERT(NVARCHAR, 'Merbin Joe'), 20);
```
But this adds the blank-space fill before the name, and I need to fill after the name.
**2)**
```
select left(' ' + CONVERT(NVARCHAR, 'Merbin Joe'), 20)
```
But this fills the 20 blank spaces first.
|
The one with `left` is almost correct, except that you have to add spaces after the string, not before:
```
select
left(CONVERT(NVARCHAR, 'Merbin Joe') + replicate(' ', 20), 20)
```
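The pad-then-truncate idea generalizes: any left-justified, width-20 formatter does the same job. For instance, SQLite's `printf` (shown here from Python; SQL Server itself would use the `LEFT(... + REPLICATE(' ', 20), 20)` form above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# %-20.20s: left-justify to at least 20 chars, truncate at 20 chars.
pad = c.execute("SELECT printf('%-20.20s', 'Merbin Joe')").fetchone()[0]
cut = c.execute("SELECT printf('%-20.20s', 'A name longer than twenty chars')").fetchone()[0]
print(repr(pad))  # short name padded with trailing spaces
print(repr(cut))  # long name truncated to 20 characters
```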
|
Try this:
```
SELECT LEFT(CONVERT(NVARCHAR, 'Merbin Joe') + SPACE(20), 20)
```
|
How to restrict the character when the length is out of 20 characters in sql
|
[
"sql",
"sql-server"
] |
I have an Oracle SQL query which includes calculations in its column output. In this simplified example, we're looking for records with dates in a certain range where some field matches a particular thing; and then for those records, take the ID (not unique) and search the table again for records with the same ID, but where some field matches something else and the date is before the date of the main record. Then return the earliest such date. The following code works exactly as intended:
```
SELECT
TblA.ID, /* Not a primary key: there may be more than one record with the same ID */
(
SELECT
MIN(TblAAlias.SomeFieldDate)
FROM
TableA TblAAlias
WHERE
TblAAlias.ID = TblA.ID /* Here is the link reference to the main query */
AND TblAAlias.SomeField = 'Another Thing'
AND TblAAlias.SomeFieldDate <= TblA.SomeFieldDate /* Another link reference */
) AS EarliestDateOfAnotherThing
FROM
TableA TblA
WHERE
TblA.SomeField = 'Something'
AND TblA.SomeFieldDate BETWEEN TO_DATE('2015-01-01','YYYY-MM-DD') AND TO_DATE('2015-12-31','YYYY-MM-DD')
```
Further to this, however, I want to include another calculated column which returns text output according to what EarliestDateOfAnotherThing actually is. I can do this with a CASE WHEN statement as follows:
```
CASE WHEN
(
SELECT
MIN(TblAAlias.SomeFieldDate)
FROM
TableA TblAAlias
WHERE
TblAAlias.ID = TblA.ID /* Here is the link reference to the main query */
AND TblAAlias.SomeField = 'Another Thing'
AND TblAAlias.SomeFieldDate <= TblA.SomeFieldDate /* Another link reference */
) BETWEEN TO_DATE('2000-01-01','YYYY-MM-DD') AND TO_DATE('2004-12-31','YYYY-MM-DD')
THEN 'First period'
WHEN
(
SELECT
MIN(TblAAlias.SomeFieldDate)
FROM
TableA TblAAlias
WHERE
TblAAlias.ID = TblA.ID /* Here is the link reference to the main query */
AND TblAAlias.SomeField = 'Another Thing'
AND TblAAlias.SomeFieldDate <= TblA.SomeFieldDate /* Another link reference */
) BETWEEN TO_DATE('2005-01-01','YYYY-MM-DD') AND TO_DATE('2009-12-31','YYYY-MM-DD')
THEN 'Second period'
ELSE 'Last period'
END
```
That is all very well. However, the problem is that I'm re-running exactly the same subquery, which strikes me as very inefficient. What I'd like to do is run the subquery just once, then take the output and subject it to various cases, just as if I could use the VBA statement "Select Case" as follows:
```
''''' Note that this is pseudo-VBA not SQL:
Select case (Subquery which returns a date)
Case Between A and B
"Output 1"
Case Between C and D
"Output 2"
Case Between E and F
"Output 3"
End select
' ... etc
```
My investigations suggested that the SQL statement "DECODE" could do the job: however it turns out that DECODE only works with discrete values, and not date ranges. I also found some things about putting the subquery in the FROM section - and then re-using the output in multiple places in SELECT. However that failed because the subquery does not stand up in its own right, but relies upon comparing values to the main query... and those comparisons could not be made until the main query had been executed (therefore making a circular reference, as the FROM section is itself part of the main query).
I'd be grateful if anyone could tell me an easy way to achieve what I want - because so far the only thing that works is manually re-using the subquery code in every place I want it, but as a programmer it pains me to be so inefficient!
**EDIT:**
Thanks for the answers so far. However I think I'm going to have to paste the real, unsimplified code here. I tried to simplify it to just get the concept clear, and to remove potentially identifying information - but the answers so far make it clear that it's more complicated than my basic SQL knowledge will allow. I'm trying to wrap my head around the suggestions people have given, but I can't match up the concepts to my actual code. For example my actual code includes more than one table from which I am selecting in the main query.
I think I'm going to have to bite the bullet and show my (still simplified, but more accurate) actual code in which I have been trying to get the "Subquery in FROM clause" thing to work. Perhaps some kind person will be able to use this to more accurately guide me in how to use the concepts introduced so far in my actual code? Thanks.
```
SELECT
APPLICANT.ID,
APPLICANT.FULL_NAME,
EarliestDate,
CASE
WHEN EarliestDate BETWEEN TO_DATE('2000-01-01','YYYY-MM-DD') AND TO_DATE('2004-12-31','YYYY-MM-DD') THEN 'First Period'
WHEN EarliestDate BETWEEN TO_DATE('2005-01-01','YYYY-MM-DD') AND TO_DATE('2009-12-31','YYYY-MM-DD') THEN 'Second Period'
WHEN EarliestDate >= TO_DATE('2010-01-01','YYYY-MM-DD') THEN 'Third Period'
END
FROM
/* Subquery in FROM - trying to get this to work */
(
SELECT
MIN(PERSON_EVENTS_Sub.REQUESTED_DTE) /* Earliest date of the secondary event */
FROM
EVENTS PERSON_EVENTS_Sub
WHERE
PERSON_EVENTS_Sub.PER_ID = APPLICANT.ID /* Link the person ID */
AND PERSON_EVENTS_Sub.DEL_IND IS NULL /* Not a deleted event */
AND PERSON_EVENTS_Sub.EVTYPE_SDV_VALUE IN (/* List of secondary events */)
AND PERSON_EVENTS_Sub.COU_SDV_VALUE = PERSON_EVENTS.COU_SDV_VALUE /* Another link from the subQ to the main query */
AND PERSON_EVENTS_Sub.REQUESTED_DTE <= PERSON_EVENTS.REQUESTED_DTE /* subQ event occurred before main query event */
AND ROWNUM = 1 /* To ensure only one record returned, in case multiple rows match the MIN date */
) /* And here - how would I alias the result of this subquery as "EarliestDate", for use above? */,
/* Then there are other tables from which to select */
EVENTS PERSON_EVENTS,
PEOPLE APPLICANT
WHERE
PERSON_EVENTS.PER_ID=APPLICANT.ID
AND PERSON_EVENTS.EVTYPE_SDV_VALUE IN (/* List of values - removed ID information */)
AND PERSON_EVENTS.REQUESTED_DTE BETWEEN '01-Jan-2014' AND '31-Jan-2014'
```
|
Looking ***only*** at restructuring the existing query *(rather than logically or functionally different approaches)*.
The simplest approach, to me, is simply to do this as a nested query...
- The inner query would be your basic query, without the CASE statement
- It would also include your correlated sub-query as an additional ***field***
- The outer query can then embed that field in a CASE statement
```
SELECT
nested_query.ID,
nested_query.FULL_NAME,
nested_query.EarliestDate,
CASE
WHEN nested_query.EarliestDate BETWEEN TO_DATE('2000-01-01','YYYY-MM-DD') AND TO_DATE('2004-12-31','YYYY-MM-DD') THEN 'First Period'
WHEN nested_query.EarliestDate BETWEEN TO_DATE('2005-01-01','YYYY-MM-DD') AND TO_DATE('2009-12-31','YYYY-MM-DD') THEN 'Second Period'
WHEN nested_query.EarliestDate >= TO_DATE('2010-01-01','YYYY-MM-DD') THEN 'Third Period'
END AS CaseStatementResult
FROM
(
SELECT
APPLICANT.ID,
APPLICANT.FULL_NAME,
(
SELECT
MIN(PERSON_EVENTS_Sub.REQUESTED_DTE) /* Earliest date of the secondary event */
FROM
EVENTS PERSON_EVENTS_Sub
WHERE
PERSON_EVENTS_Sub.PER_ID = APPLICANT.ID /* Link the person ID */
AND PERSON_EVENTS_Sub.DEL_IND IS NULL /* Not a deleted event */
AND PERSON_EVENTS_Sub.EVTYPE_SDV_VALUE IN (/* List of secondary events */)
AND PERSON_EVENTS_Sub.COU_SDV_VALUE = PERSON_EVENTS.COU_SDV_VALUE /* Another link from the subQ to the main query */
AND PERSON_EVENTS_Sub.REQUESTED_DTE <= PERSON_EVENTS.REQUESTED_DTE /* subQ event occurred before main query event */
AND ROWNUM = 1 /* To ensure only one record returned, in case multiple rows match the MIN date */
)
AS EarliestDate
FROM
EVENTS PERSON_EVENTS,
PEOPLE APPLICANT
WHERE
PERSON_EVENTS.PER_ID=APPLICANT.ID
AND PERSON_EVENTS.EVTYPE_SDV_VALUE IN (/* List of values - removed ID information */)
AND PERSON_EVENTS.REQUESTED_DTE BETWEEN '01-Jan-2014' AND '31-Jan-2014'
) nested_query
```
|
You can do it without correlated sub-queries or sub-query factoring (`WITH .. AS ( ... )`) clauses using an analytic function (and in a single table scan):
```
SELECT ID,
EarliestDateOfAnotherThing
FROM (
SELECT ID,
MIN( CASE WHEN SomeField = 'Another Thing' THEN SomeFieldDate END )
OVER( PARTITION BY ID
ORDER BY SomeFieldDate
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW )
AS EarliestDateOfAnotherThing
FROM TableA
)
WHERE SomeField = 'Something'
AND SomeFieldDate BETWEEN TO_DATE('2015-01-01','YYYY-MM-DD')
AND TO_DATE('2015-12-31','YYYY-MM-DD')
```
And you could do the extended case example as:
```
SELECT ID,
CASE
WHEN DATE '2000-01-01' <= EarliestDateOfAnotherThing
AND EarliestDateOfAnotherThing < DATE '2005-01-01'
THEN 'First Period'
WHEN DATE '2005-01-01' <= EarliestDateOfAnotherThing
AND EarliestDateOfAnotherThing < DATE '2010-01-01'
THEN 'Second Period'
ELSE 'Last Period'
END AS period
FROM (
SELECT ID,
MIN( CASE WHEN SomeField = 'Another Thing' THEN SomeFieldDate END )
OVER( PARTITION BY ID
ORDER BY SomeFieldDate
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW )
AS EarliestDateOfAnotherThing
FROM TableA
)
WHERE SomeField = 'Something'
AND SomeFieldDate BETWEEN TO_DATE('2015-01-01','YYYY-MM-DD')
AND TO_DATE('2015-12-31','YYYY-MM-DD')
```
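To make the pattern concrete, here is a hedged, self-contained sketch of this analytic-function approach run through Python's `sqlite3` (SQLite 3.25+ supports window functions). The table `TableA` and its rows are invented for illustration; ID 2 deliberately has no 'Another Thing' row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (ID INTEGER, SomeField TEXT, SomeFieldDate TEXT);
INSERT INTO TableA VALUES
  (1, 'Another Thing', '2004-06-01'),
  (1, 'Something',     '2015-03-10'),
  (2, 'Something',     '2015-05-05');  -- no 'Another Thing' row for ID 2
""")
rows = conn.execute("""
SELECT ID, EarliestDateOfAnotherThing
FROM (
  SELECT ID, SomeField, SomeFieldDate,
         MIN(CASE WHEN SomeField = 'Another Thing' THEN SomeFieldDate END)
           OVER (PARTITION BY ID ORDER BY SomeFieldDate
                 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
           AS EarliestDateOfAnotherThing
  FROM TableA
)
WHERE SomeField = 'Something'
  AND SomeFieldDate BETWEEN '2015-01-01' AND '2015-12-31'
ORDER BY ID
""").fetchall()
print(rows)  # -> [(1, '2004-06-01'), (2, None)]
```

Note how ID 1's 'Something' row picks up the 2004 date from the earlier 'Another Thing' row, while ID 2 gets NULL.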
|
Oracle SQL: Re-use subquery for CASE WHEN without having to repeat subquery
|
[
"",
"sql",
"oracle",
"subquery",
"code-reuse",
""
] |
I have a table where several reporting entitities store several versions of their data (indexed by an integer version number). I created a view for that table that selects only the latest version:
```
SELECT * FROM MYTABLE NATURAL JOIN
(
SELECT ENTITY, MAX(VERSION) VERSION FROM MYTABLE
GROUP BY ENTITY
)
```
Now I want to create another view that always selects the one version before the latest for comparison purposes. I thought about using MAX()-1 for this (see below), and it generally works but the problem is that this excludes entries from entities who reported only one version.
```
SELECT * FROM MYTABLE NATURAL JOIN
(
SELECT ENTITY, MAX(VERSION) - 1 VERSION FROM MYTABLE
GROUP BY ENTITY
)
```
Edit: for clarity, if there is only one version available, I would like it to report that one. As an example, consider the following table:
```
ENTITY VERSION VALUE1
10000 1 10
10000 2 11
12000 1 50
14000 1 15
14000 2 16
14000 3 17
```
Now what I would like to get with my query would be
```
ENTITY VERSION VALUE1
10000 1 10
12000 1 50
14000 2 16
```
But with my current query, the entry for 12000 drops out.
|
You could avoid the self-join with an analytic query:
```
SELECT ENTITY, VERSION, LAST_VERSION
FROM (
SELECT ENTITY, VERSION,
NVL(LAG(VERSION) OVER (PARTITION BY ENTITY ORDER BY VERSION), VERSION) AS LAST_VERSION,
RANK() OVER (PARTITION BY ENTITY ORDER BY VERSION DESC) AS RN
FROM MYTABLE
)
WHERE RN = 1;
```
That finds the current and previous version at the same time, so you could have a single view to get both if you want.
The `LAG(VERSION) OVER (PARTITION BY ENTITY ORDER BY VERSION)` gets the previous version number for each entity, which will be null for the first recorded version; so `NVL` is used to take the current version again in that case. (You can also use the more standard `COALESCE` function). This also allows for gaps in the version numbers, if you have any.
The `RANK() OVER (PARTITION BY ENTITY ORDER BY VERSION DESC)` assigns a sequential number to each entity/version pair, with the `DESC` meaning the highest version is ranked 1, the second highest is 2, etc. I'm assuming you won't have duplicate versions for an entity - you can use `DENSE_RANK` and decide how to break ties if you do, but it seems unlikely.
For your data you can see what that produces with:
```
SELECT ENTITY, VERSION, VALUE1,
LAG(VERSION) OVER (PARTITION BY ENTITY ORDER BY VERSION) AS LAG_VERSION,
NVL(LAG(VERSION) OVER (PARTITION BY ENTITY ORDER BY VERSION), VERSION) AS LAST_VERSION,
RANK() OVER (PARTITION BY ENTITY ORDER BY VERSION DESC) AS RN
FROM MYTABLE
ORDER BY ENTITY, VERSION;
ENTITY VERSION VALUE1 LAG_VERSION LAST_VERSION RN
---------- ---------- ---------- ----------- ------------ ----------
10000 1 10 1 2
10000 2 11 1 1 1
12000 1 50 1 1
14000 1 15 1 3
14000 2 16 1 1 2
14000 3 17 2 2 1
```
All of that is done in an inline view, with the outer query only returning those ranked first - that is, the row with the highest version for each entity.
You can include the `VALUE1` column as well, e.g. just to show the previous values:
```
SELECT ENTITY, VERSION, VALUE1
FROM (
SELECT ENTITY,
NVL(LAG(VERSION) OVER (PARTITION BY ENTITY ORDER BY VERSION), VERSION) AS VERSION,
NVL(LAG(VALUE1) OVER (PARTITION BY ENTITY ORDER BY VERSION), VALUE1) AS VALUE1,
RANK() OVER (PARTITION BY ENTITY ORDER BY VERSION DESC) AS RN
FROM MYTABLE
)
WHERE RN = 1
ORDER BY ENTITY;
ENTITY VERSION VALUE1
---------- ---------- ----------
10000 1 10
12000 1 50
14000 2 16
```
|
You can formulate the task as: Get the two highest available versions per entity and from these take the minimum version per entity. You determine the n highest versions by ranking the records with `ROW_NUMBER`.
```
select entity, min(version)
from
(
select
entity,
version,
row_number() over (partition by entity order by version desc) as rn
from mytable
)
where rn <= 2
group by entity;
```
This works no matter if there is only one record or two or more for an entity and regardless of any possible gaps.
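As a quick check, here is the same query run against the question's sample data through Python's `sqlite3` (SQLite 3.25+ supports `ROW_NUMBER`); this is just a sketch, not the poster's Oracle environment:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MYTABLE (ENTITY INTEGER, VERSION INTEGER, VALUE1 INTEGER);
INSERT INTO MYTABLE VALUES
  (10000, 1, 10), (10000, 2, 11),
  (12000, 1, 50),
  (14000, 1, 15), (14000, 2, 16), (14000, 3, 17);
""")
rows = conn.execute("""
SELECT ENTITY, MIN(VERSION) AS VERSION
FROM (
  SELECT ENTITY, VERSION,
         ROW_NUMBER() OVER (PARTITION BY ENTITY ORDER BY VERSION DESC) AS RN
  FROM MYTABLE
)
WHERE RN <= 2
GROUP BY ENTITY
ORDER BY ENTITY
""").fetchall()
print(rows)  # -> [(10000, 1), (12000, 1), (14000, 2)]
```

Entity 12000, with only one version, correctly keeps version 1 instead of dropping out.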
|
Oracle SQL: select max minus 1 except lowest (get previous data version)
|
[
"",
"sql",
"oracle",
""
] |
I am wondering how to use the MySQL query below in Laravel 5.2 using Eloquent.
```
SELECT MAX(date) AS "Last seen date" FROM instances WHERE ad_id =1
```
I have a `date` column in the instances table.
I would like to select the latest date from that table where ad\_id = 1.
|
If you want to get the `date` column only then use the following:
```
$instance = Instance::select('date')->where('ad_id', 1)->orderBy('date', 'desc')->first();
```
Or if you want to get all the instances for that ad, ordered with the latest date first, then use:
```
$instance = Instance::where('ad_id', 1)->orderBy('date', 'desc')->get();
```
|
To actually select the latest row, you can now use the `latest()` helper. Note it orders by `created_at` by default, so pass your column explicitly:
```
Instance::where('ad_id', 1)->latest('date')->first();
```
Documentation: [laravel.com/docs/5.6/queries](https://laravel.com/docs/5.6/queries)
|
max(date) sql query in laravel 5.2
|
[
"",
"sql",
"laravel-5",
"eloquent",
"max",
""
] |
I'm trying to get a row count from a record set.
What I would like is to count the number of rows in the record-set, grouped by a common value in a column named `member_location`, ordered by a column named `reputation_total_points` in descending order, until the parser reaches a result with a specific value in the `ID` column.
For example, if the query was using **`member_location`= 10**, and **`id`= 2**, the final correct count result will be **3** by using the information below. Below is a sample of the db entries:
```
Columns: id | reputation_total_points | member_location
2 | 32 | 10
3 | 35 | 7
4 | 40 | 10
5 | 15 | 5
6 | 10 | 10
7 | 65 | 10
```
|
If I understood correctly this should work as expected:
```
SELECT rn
FROM
(
SELECT id
,ROW_NUMBER() -- assign a sequence based on descending order
OVER (ORDER BY reputation_total_points DESC) AS rn
FROM tab
WHERE member_location = 10
) AS dt
WHERE id = 2 -- find the matching id
```
In fact this seems like you want to rank your members:
```
SELECT id
,RANK()
OVER (PARTITION BY member_location
ORDER BY reputation_total_points DESC) AS rnk
FROM tab
```
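The first query is easy to verify against the question's sample rows with Python's `sqlite3` (window functions need SQLite 3.25+); a hedged sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab (id INTEGER, reputation_total_points INTEGER, member_location INTEGER);
INSERT INTO tab VALUES
  (2, 32, 10), (3, 35, 7), (4, 40, 10),
  (5, 15, 5), (6, 10, 10), (7, 65, 10);
""")
rank = conn.execute("""
SELECT rn
FROM (
  SELECT id,
         ROW_NUMBER() OVER (ORDER BY reputation_total_points DESC) AS rn
  FROM tab
  WHERE member_location = 10
)
WHERE id = 2
""").fetchone()[0]
print(rank)  # -> 3 (ids 7 and 4 score higher than id 2 at location 10)
```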
|
I am not sure your where works with both id and member\_location, as the id = 2 will only bring back one row. However if you're just after the occurences where member\_location = 10 then something like the below should work:
```
SELECT
member_location,
COUNT(member_location) AS [Total]
FROM yourTable
GROUP BY member_location
```
Hope that makes sense!
|
RECORD COUNT in SQL with an ORDER BY, until a specific value is found in a column
|
[
"",
"sql",
"vbscript",
"asp-classic",
"sql-server-2012",
""
] |
I have a full date (with time),
but I want only the milliseconds from the date.
Please tell me a one-line solution.
For example: `date = 2016/03/16 10:45:04.252`
I want this answer: `252`
I tried this query:
```
SELECT ADD_MONTHS(millisecond, -datepart('2016/03/16 10:45:04.252', millisecond),
'2016/03/16 10:45:04.252') FROM DUAL;
```
but I was not successful.
|
> I have a full date (with time)
This can only be done using a `timestamp`. Although Oracle's `date` does contain a time, it only stores seconds, not milliseconds.
---
To get the fractional seconds from a **timestamp** use `to_char()` and convert that to a number:
```
select to_number(to_char(timestamp '2016-03-16 10:45:04.252', 'FF3'))
from dual;
```
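As a cross-check, the same extraction in plain Python (a hedged analogue, since Python's `datetime` stores microseconds rather than a formatted fractional-seconds string):

```python
from datetime import datetime

# %f parses the fractional-second digits; "252" becomes 252000 microseconds
dt = datetime.strptime("2016/03/16 10:45:04.252", "%Y/%m/%d %H:%M:%S.%f")
millis = dt.microsecond // 1000
print(millis)  # -> 252
```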
|
```
SELECT
TRUNC((L.OUT_TIME-L.IN_TIME)*24) ||':'||
TRUNC((L.OUT_TIME-L.IN_TIME)*24*60) ||':'||
ROUND(CASE WHEN ((L.OUT_TIME-L.IN_TIME)*24*60*60)>60 THEN ((L.OUT_TIME-L.IN_TIME)*24*60*60)-60 ELSE ((L.OUT_TIME-L.IN_TIME)*24*60*60) END ,5) Elapsed
FROM XYZ_TABLE
```
|
How to extract millisecond from date in Oracle?
|
[
"",
"sql",
"oracle11g",
"timestamp",
""
] |
How do I convert the following `11/30/2014` into `Nov-2014`?
`11/30/2014` is stored as varchar
|
A solution that should work even with Sql server 2005 is using [convert](https://msdn.microsoft.com/en-us/library/ms187928(v=sql.90).aspx), [right](https://msdn.microsoft.com/en-us/library/ms177532(v=sql.90).aspx) and [replace](https://msdn.microsoft.com/en-us/library/ms186862(v=sql.90).aspx):
```
DECLARE @DateString char(10)= '11/30/2014'
SELECT REPLACE(RIGHT(CONVERT(char(11), CONVERT(datetime, @DateString, 101), 106), 8), ' ', '-')
```
result: `Nov-2014`
|
Try it like this:
```
DECLARE @str VARCHAR(100)='11/30/2014';
SELECT FORMAT(CONVERT(DATE,@str,101),'MMM yyyy')
```
The `FORMAT` function was introduced with SQL-Server 2012 - very handsome...
Despite the tags you set, you stated in a comment that you are working with SQL Server 2012, so this should be OK for you...
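For readers outside SQL Server, the same conversion is easy to sanity-check in Python (a sketch; note `%b` is locale-dependent, so an English locale is assumed here):

```python
from datetime import datetime

out = datetime.strptime("11/30/2014", "%m/%d/%Y").strftime("%b-%Y")
print(out)  # -> Nov-2014 (in an English locale)
```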
|
Conversion function to convert the following "11/30/2014" format to Nov-2014
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have this SQL at the moment:
```
SELECT Count(create_weekday),
create_weekday,
Count(create_weekday) * 100 / (SELECT Count(*)
FROM call_view
WHERE
( create_month = Month(Now() -
INTERVAL 1 month) )
AND ( create_year = Year(
Now() - INTERVAL 1 month) )
AND customer_company_name = "Company"
) AS Percentage
FROM call_view
WHERE ( create_month = Month(Now() - INTERVAL 1 month) )
AND ( create_year = Year(Now() - INTERVAL 1 month) )
AND customer_company_name = "Company"
GROUP BY CREATE_WEEKDAY
ORDER BY (CASE CREATE_WEEKDAY
WHEN 'Monday' THEN 1
WHEN 'Tuesday' THEN 2
WHEN 'Wednesday' THEN 3
WHEN 'Thursday' THEN 4
WHEN 'Friday' THEN 5
WHEN 'Saturday' THEN 6
WHEN 'Sunday' THEN 7
ELSE 100 END)
```
It's working and I received the result:
```
Count(create_weekday) | Create_Weekday | Percentage
225 Monday 28.0899
```
How do I round to only 1 decimal place?( Like 28.1)
Would appreciate any help
|
Use ROUND(Percentage, 1):
```
SELECT Count(create_weekday),
create_weekday,
ROUND(Count(create_weekday) * 100 / (SELECT Count(*)
FROM call_view
WHERE
( create_month = Month(Now() -
INTERVAL 1 month) )
AND ( create_year = Year(
Now() - INTERVAL 1 month) )
AND customer_company_name = "Company"
), 1) AS Percentage
FROM call_view
WHERE ( create_month = Month(Now() - INTERVAL 1 month) )
AND ( create_year = Year(Now() - INTERVAL 1 month) )
AND customer_company_name = "Company"
GROUP BY CREATE_WEEKDAY
ORDER BY (CASE CREATE_WEEKDAY
WHEN 'Monday' THEN 1
WHEN 'Tuesday' THEN 2
WHEN 'Wednesday' THEN 3
WHEN 'Thursday' THEN 4
WHEN 'Friday' THEN 5
WHEN 'Saturday' THEN 6
WHEN 'Sunday' THEN 7
ELSE 100 END)
```
|
You can just use the built-in `ROUND(N,D)` function, the second argument is the number of digits.
MySQL Reference: <http://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_round>
|
Calculate group percentage to 1 decimal places - SQL
|
[
"",
"mysql",
"sql",
""
] |
I need to make a query that takes the result and puts it on one line, separated by commas.
For example, I have this query:
```
SELECT
SIGLA
FROM
LANGUAGES
```
This query return the result below:
**SIGLA**
```
ESP
EN
BRA
```
I need to get this result on a single line, like this:
**SIGLA**
```
ESP,EN,BRA
```
Can anyone help me?
Thank you!
|
try
```
SELECT LISTAGG( SIGLA, ',' ) within group (order by SIGLA) as NewSigla FROM LANGUAGES
```
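`LISTAGG` is Oracle-specific; as a portable sketch of the same idea you can use SQLite's `GROUP_CONCAT` via Python's `sqlite3`, ordering the rows in a subquery (which SQLite preserves in practice, though it is not formally guaranteed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE LANGUAGES (SIGLA TEXT);
INSERT INTO LANGUAGES VALUES ('ESP'), ('EN'), ('BRA');
""")
# GROUP_CONCAT collapses all rows of the ordered subquery into one string
row = conn.execute("""
SELECT GROUP_CONCAT(SIGLA, ',')
FROM (SELECT SIGLA FROM LANGUAGES ORDER BY SIGLA)
""").fetchone()
print(row[0])  # -> BRA,EN,ESP
```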
|
```
SELECT LISTAGG(SIGLA, ', ') WITHIN GROUP (ORDER BY SIGLA) AS "S_List" FROM LANGUAGES
```
That should be the LISTAGG syntax you need.
|
Group query rows result in one result
|
[
"",
"sql",
"oracle",
""
] |
I've been looking all around Stack Overflow and can't seem to find a question like this, but it's probably super simple and has been asked a million times. So I am sorry if my insolence offends you guys.
I want to remove an attribute from the result if it appears anywhere in the table.
Here is an example: I want to display every team that does not have a pitcher. This means I don't want to display 'Phillies' with the rest of the results.
Example of table:
[](https://i.stack.imgur.com/VRFRW.png)
Here is the example of the code I have currently have where Players is the table.
```
SELECT DISTINCT team
FROM Players
WHERE position ='Pitcher' Not IN
(SELECT DISTINCT position
FROM Players)
```
Thanks for any help you guys can provide!
|
You can use NOT EXISTS() :
```
SELECT DISTINCT s.team
FROM Players s
WHERE NOT EXISTS(SELECT 1 FROM Players t
where t.team = s.team
and position = 'Pitcher')
```
Or with NOT IN:
```
SELECT DISTINCT t.team
FROM Players t
WHERE t.team NOT IN(SELECT s.team FROM Players s
WHERE s.position = 'Pitcher')
```
And a solution with a left join:
```
SELECT distinct t.team
FROM Players t
LEFT OUTER JOIN Players s
ON(t.team = s.team and s.position = 'pitcher')
WHERE s.team is null
```
|
Use `NOT EXISTS`
**Query**
```
select distinct team
from players p
where not exists(
select * from players q
where p.team = q.team
and q.position = 'Pitcher'
);
```
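Both forms are easy to verify with a toy dataset in Python's `sqlite3`; the rows below are invented for illustration (only the Phillies have a pitcher):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Players (name TEXT, position TEXT, team TEXT);
INSERT INTO Players VALUES
  ('Cole',   'Pitcher',    'Phillies'),
  ('Ruiz',   'Catcher',    'Phillies'),
  ('Wright', 'Third Base', 'Mets');
""")
rows = conn.execute("""
SELECT DISTINCT s.team
FROM Players s
WHERE NOT EXISTS (SELECT 1 FROM Players t
                  WHERE t.team = s.team
                    AND t.position = 'Pitcher')
ORDER BY s.team
""").fetchall()
print(rows)  # -> [('Mets',)] -- Phillies are excluded
```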
|
Removing an item from the result if it has a particular parameter somewhere in the table
|
[
"",
"mysql",
"sql",
""
] |
I want to calculate the average of a column of numbers, but I want to exclude the rows that have a zero in that column. Is there any way this is possible?
The code I have is just a simple sum/count:
```
SELECT SUM(Column1)/Count(Column1) AS Average
FROM Table1
```
|
```
SELECT AVG(Column1) FROM Table1 WHERE Column1 <> 0
```
|
One approach is `AVG()` and `CASE`/`NULLIF()`:
```
SELECT AVG(NULLIF(Column1, 0)) as Average
FROM table1;
```
Average ignores `NULL` values. This assumes that you want other aggregations; otherwise, the obvious choice is filtering.
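Both approaches agree, as this small `sqlite3` sketch shows (sample values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Column1 REAL);
INSERT INTO Table1 VALUES (10), (0), (20), (0);
""")
# Filtering out zeros in the WHERE clause...
filtered = conn.execute(
    "SELECT AVG(Column1) FROM Table1 WHERE Column1 <> 0").fetchone()[0]
# ...gives the same result as mapping zeros to NULL, which AVG ignores
via_nullif = conn.execute(
    "SELECT AVG(NULLIF(Column1, 0)) FROM Table1").fetchone()[0]
print(filtered, via_nullif)  # -> 15.0 15.0
```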
|
How to calculate an average in SQL excluding zeroes?
|
[
"",
"sql",
""
] |
I have a problem: how can we write an SQL statement to do this?
I have this:
```
id name color
1 A blue
3 D pink
1 C grey
3 F blue
4 E red
```
and I want my result to be like this:
```
id name name color color
1 A C blue grey
3 D F pink blue
4 E red
```
How can I do that in SQL?
Your help is very much appreciated.
Thank you!
|
**Query - Concatenate the values into a single column**:
```
SELECT ID,
LISTAGG( Name, ',' ) WITHIN GROUP ( ORDER BY ROWNUM ) AS Names,
LISTAGG( Color, ',' ) WITHIN GROUP ( ORDER BY ROWNUM ) AS Colors
FROM table_name
GROUP BY ID;
```
**Output**:
```
ID Names Colors
-- ----- ---------
1 A,C blue,grey
3 D,F pink,blue
4 E red
```
**Query - If you have a fixed maximum number of values**:
```
SELECT ID,
MAX( CASE rn WHEN 1 THEN name END ) AS name1,
MAX( CASE rn WHEN 1 THEN color END ) AS color1,
MAX( CASE rn WHEN 2 THEN name END ) AS name2,
MAX( CASE rn WHEN 2 THEN color END ) AS color2
FROM (
SELECT t.*,
ROW_NUMBER() OVER ( PARTITION BY id ORDER BY ROWNUM ) AS rn
FROM table_name t
)
GROUP BY id;
```
**Output**:
```
ID Name1 Color1 Name2 Color2
-- ----- ------ ----- ------
1 A blue C grey
3 D pink F blue
4 E red
```
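The fixed-columns variant translates readily to other engines with window functions. Here is a hedged `sqlite3` sketch using the question's rows, with SQLite's `rowid` standing in for Oracle's `ROWNUM` as the arrival order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, name TEXT, color TEXT);
INSERT INTO t VALUES
  (1,'A','blue'), (3,'D','pink'), (1,'C','grey'),
  (3,'F','blue'), (4,'E','red');
""")
rows = conn.execute("""
SELECT id,
       MAX(CASE rn WHEN 1 THEN name  END) AS name1,
       MAX(CASE rn WHEN 1 THEN color END) AS color1,
       MAX(CASE rn WHEN 2 THEN name  END) AS name2,
       MAX(CASE rn WHEN 2 THEN color END) AS color2
FROM (
  SELECT id, name, color,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY rowid) AS rn
  FROM t
)
GROUP BY id
ORDER BY id
""").fetchall()
print(rows)
# -> [(1, 'A', 'blue', 'C', 'grey'),
#     (3, 'D', 'pink', 'F', 'blue'),
#     (4, 'E', 'red', None, None)]
```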
|
There is a fundamental problem with what you are trying to achieve: you do not know how many values of name (/color) to expect per id, so you do not know how many columns the output should have.
A workaround would be to keep all the names (and colors) per id in one column:
```
select id,group_concat(name),group_concat(color) from tblName group by id
```
|
How write sql statement to have in one line the same ID
|
[
"",
"sql",
"database",
"oracle",
""
] |
I am currently working with a SQL back end and vb.net Windows Forms front end. I am trying to pull a report from SQL based on a list of checkboxes the user will select.
To do this I am going to use an `IN` clause in SQL. The only problem is if I use an if statement in vb.net to build the string, it's going to be a HUGE amount of code to set up the string.
I was hoping someone knew a better way to do this. The code example below shows only selecting line 1 and selecting both line 1 and 2. I will need for the code to be able to select any assortment of the lines. The string will have to be the line number with a comma following the number. This way when I include the code in my SQL query it will not bug.
Here is the code:
```
Dim LineString As String
'String set up for line pull
If CBLine1.Checked = False And CBLine2.Checked = False And CBLine3.Checked = False And CBLine4.Checked = False And _
CBLine7.Checked = False And CBLine8.Checked = False And CBLine10.Checked = False And CBLine11.Checked = False And CBLine12.Checked = False Then
MsgBox("No lines selected for download, please select lines for report.")
Exit Sub
End If
If CBLine1.Checked = True And CBLine2.Checked = False And CBLine3.Checked = False And CBLine4.Checked = False And _
CBLine7.Checked = False And CBLine8.Checked = False And CBLine10.Checked = False And CBLine11.Checked = False And CBLine12.Checked = False Then
MsgBox("This will save the string as only line 1")
ElseIf CBLine1.Checked = True And CBLine2.Checked = True And CBLine3.Checked = False And CBLine4.Checked = False And _
CBLine7.Checked = False And CBLine8.Checked = False And CBLine10.Checked = False And CBLine11.Checked = False And CBLine12.Checked = False Then
MsgBox("This will save the string as only line 1 and 2")
End If
```
The final string will have to be inserted into a SQL statement that looks like this:
```
SELECT *
FROM tabl1
WHERE LineNumber IN (-vb.netString-)
```
The above code will need commas added in for the string.
|
First you need to set up all your checkboxes with the Tag property set to the line number to which they refers. So, for example, the CBLine1 checkbox will have its property Tag set to the value 1 (and so on for all other checkboxes).
This could be done easily using the WinForm designer or, if you prefer, at runtime in the Form load event.
Next step is to retrieve all the checked checkboxes and extract the Tag property to build a list of lines required. This could be done using some Linq
```
Dim linesSelected = new List(Of String)()
For Each chk in Me.Controls.OfType(Of CheckBox)().
Where(Function(c) c.Checked)
linesSelected.Add(chk.Tag.ToString())
Next
```
Now you could start your verification of the input
```
If linesSelected.Count = 0 Then
    MessageBox.Show("No lines selected for download, please select lines for report.")
ElseIf linesSelected.Count = 1 Then
    MessageBox.Show("This will save the string only for line " & linesSelected(0))
Else
    Dim allLines = String.Join(",", linesSelected)
    MessageBox.Show("This will save the string for lines " & allLines)
End If
```
Of course, the List(Of String) and the string.Join method are very good to build also your IN clause for your query
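On the SQL side, a safer pattern than splicing the joined string directly into the statement is to generate one `?` placeholder per checked line and pass the values as parameters, which avoids SQL injection and quoting bugs. A hedged Python/`sqlite3` sketch of the idea (table and values invented, mirroring the question's `tabl1`):

```python
import sqlite3

selected_lines = [1, 2, 7]  # e.g. collected from the checked checkboxes
# Build "?,?,?" to match the number of selected values
placeholders = ",".join("?" * len(selected_lines))
sql = f"SELECT * FROM tabl1 WHERE LineNumber IN ({placeholders}) ORDER BY LineNumber"

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tabl1 (LineNumber INTEGER);
INSERT INTO tabl1 VALUES (1), (2), (3), (7), (10);
""")
rows = conn.execute(sql, selected_lines).fetchall()
print(rows)  # -> [(1,), (2,), (7,)]
```

The same parameterized approach works with ADO.NET from VB.net by adding one named parameter per value.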
|
I feel that I'm only half getting your question but I hope this helps. In VB.net we display a list of forms that have a status of enabled or disabled.
There is a checkbox that when checked displays only the enabled forms.
```
SELECT * FROM forms WHERE (Status = 'Enabled' or @checkbox = 0)
```
The query actually has a longer where clause that handles drop down options in the same way. This should help you pass the backend VB to a simple SQL statement.
|
Build string for SQL statement with multiple checkboxes in vb.net
|
[
"",
"sql",
"vb.net",
"string",
"if-statement",
""
] |
I have an SQL Server table where each row represents a machine log entry recording the time when the machine was switched on or off. The columns are ACTION, MACHINE\_NAME, TIME\_STAMP.
ACTION is a String that can be "ON" or "OFF"
MACHINE\_NAME is a String representing the machine id
TIME\_STAMP is a date.
An example:
```
ACTION MACHINE_NAME TIME_STAMP
ON PC1 2016/03/04 17:13:10
OFF PC1 2016/03/04 17:13:15
ON PC1 2016/03/04 17:14:15
OFF PC1 2016/03/04 17:15:45
```
I need to extract from these logs a new table that can tell me: "The machine X was ON for N minutes from START\_TIME to END\_TIME"
How could I write an SQL Query in order to do this?
Desired result
```
MACHINE_NAME START_TIME END_TIME
PC1 2016/03/04 17:13:10 2016/03/04 17:13:15
PC1 2016/03/04 17:14:15 2016/03/04 17:15:45
```
|
I was able to solve this using CTEs and the `LAG` function. This enables us to get the first `'ON'` action for each `'OFF'`, and thereafter apply `ROW_NUMBER` to match them:
```
;WITH first_ON AS
(
SELECT *, LAG(m.ACTION, 1, 'OFF') OVER (PARTITION BY m.MACHINE_NAME ORDER BY m.TIME_STAMP) AS previous_action
FROM your_table m
),
ON_actions AS
(
SELECT
m.ACTION,
m.MACHINE_NAME,
m.TIME_STAMP,
ROW_NUMBER() OVER ( ORDER BY TIME_STAMP ) AS RN
FROM first_ON m
WHERE m.previous_action = 'OFF' AND m.ACTION = 'ON'
),
OFF_actions AS (
SELECT
m.ACTION,
m.MACHINE_NAME,
m.TIME_STAMP,
ROW_NUMBER() OVER ( ORDER BY TIME_STAMP ) AS RN
FROM your_table m
WHERE m.ACTION = 'OFF'
)
SELECT a.MACHINE_NAME, a.TIME_STAMP AS START_TIME, b.TIME_STAMP AS END_TIME
FROM ON_actions a
INNER JOIN OFF_actions b ON a.MACHINE_NAME = b.MACHINE_NAME AND a.RN = b.RN
```
EDIT: This solution also takes unmatched ONs and OFFs into consideration, for example ON,ON,OFF,ON,ON,ON,OFF,OFF.
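The same CTE logic runs essentially unchanged (minus the leading semicolon) on SQLite 3.25+, which makes it easy to test with the question's sample rows; a hedged sketch with an invented table name `machine_log`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE machine_log (ACTION TEXT, MACHINE_NAME TEXT, TIME_STAMP TEXT);
INSERT INTO machine_log VALUES
  ('ON',  'PC1', '2016-03-04 17:13:10'),
  ('OFF', 'PC1', '2016-03-04 17:13:15'),
  ('ON',  'PC1', '2016-03-04 17:14:15'),
  ('OFF', 'PC1', '2016-03-04 17:15:45');
""")
rows = conn.execute("""
WITH first_ON AS (
  SELECT *, LAG(ACTION, 1, 'OFF')
              OVER (PARTITION BY MACHINE_NAME ORDER BY TIME_STAMP) AS prev
  FROM machine_log
),
ON_actions AS (
  SELECT MACHINE_NAME, TIME_STAMP,
         ROW_NUMBER() OVER (ORDER BY TIME_STAMP) AS RN
  FROM first_ON WHERE prev = 'OFF' AND ACTION = 'ON'
),
OFF_actions AS (
  SELECT MACHINE_NAME, TIME_STAMP,
         ROW_NUMBER() OVER (ORDER BY TIME_STAMP) AS RN
  FROM machine_log WHERE ACTION = 'OFF'
)
SELECT a.MACHINE_NAME, a.TIME_STAMP AS START_TIME, b.TIME_STAMP AS END_TIME
FROM ON_actions a
JOIN OFF_actions b ON a.MACHINE_NAME = b.MACHINE_NAME AND a.RN = b.RN
ORDER BY START_TIME
""").fetchall()
for r in rows:
    print(r)
# ('PC1', '2016-03-04 17:13:10', '2016-03-04 17:13:15')
# ('PC1', '2016-03-04 17:14:15', '2016-03-04 17:15:45')
```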
|
You can do it with a correlated query like this:
```
SELECT t.Machine_name,
t.time_stamp as start_date,
(SELECT min(s.time_stamp) from YourTable s
WHERE t.Machine_name = s.Machine_Name
and s.ACTION = 'OFF'
and s.time_stamp > t.time_stamp) as end_date
FROM YourTable t
WHERE t.action = 'ON'
```
EDIT:
```
SELECT * FROM (
SELECT t.Machine_name,
t.time_stamp as start_date,
(SELECT min(s.time_stamp) from YourTable s
WHERE t.Machine_name = s.Machine_Name
and s.ACTION = 'OFF'
and s.time_stamp > t.time_stamp) as end_date
FROM YourTable t
WHERE t.action = 'ON')
WHERE end_date is not null
```
|
Extracting start and end date in groups
|
[
"",
"sql",
"sql-server",
""
] |
I have a string such as this:
```
`a|b^c|d|e^f|g`
```
and I want to maintain the pipe delimiting but remove the caret sub-delimiting, retaining only the first value of each caret-delimited group.
The output result would be:
```
`a|b|d|e|g`
```
Is there a way I can do this with a simple SQL function?
|
Another option, using [`CHARINDEX`](https://msdn.microsoft.com/en-us/library/ms186323(v=sql.90).aspx), [`REPLACE`](https://msdn.microsoft.com/en-us/library/ms186862(v=sql.90).aspx) and [`SUBSTRING`](https://msdn.microsoft.com/en-us/library/ms187748(v=sql.90).aspx):
```
DECLARE @OriginalString varchar(50) = 'a|b^c^d^e|f|g'
DECLARE @MyString varchar(50) = @OriginalString
WHILE CHARINDEX('^', @MyString) > 0
BEGIN
SELECT @MyString = REPLACE(@MyString,
SUBSTRING(@MyString,
CHARINDEX('^', @MyString),
CASE WHEN CHARINDEX('|', @MyString, CHARINDEX('^', @MyString)) > 0 THEN
CHARINDEX('|', @MyString, CHARINDEX('^', @MyString)) - CHARINDEX('^', @MyString)
ELSE
LEN(@MyString)
END
)
, '')
END
SELECT @OriginalString As Original, @MyString As Final
```
Output:
```
Original Final
a|b^c^d^e|f|g a|b|f|g
```
|
This expression will remove the first instance of a caret up to the subsequent pipe (or the end of the string). You can just run a loop until no more rows are updated or no more carets are found inside the function, etc.
```
case
when charindex('^', s) > 0
then stuff(
s,
charindex('^', s),
charindex('|', s + '|', charindex('^', s) + 1) - charindex('^', s),
''
)
else s
end
```
Here's a loop you can adapt for a function definition:
```
declare @s varchar(30) = 'a|b^c^d|e|f^g|h^i';
declare @n int = charindex('^', @s);
while @n > 0
begin
set @s = stuff(@s, @n, charindex('|', @s + '|', @n + 1) - @n, '');
set @n = charindex('^', @s, @n + 1);
end
select @s;
```
A little bit of care needs to be taken at the end of the string, where there won't be a final pipe separator; you can see I've handled that by appending a `'|'` before searching.
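If pure T-SQL isn't strictly required, the same transformation is a one-line regular expression; a hedged Python sketch, where `\^[^|]*` matches a caret and everything up to the next pipe (or end of string):

```python
import re

def strip_caret_values(s: str) -> str:
    # Remove each '^'-introduced run up to (but not including) the next '|'
    return re.sub(r"\^[^|]*", "", s)

print(strip_caret_values("a|b^c|d|e^f|g"))  # -> a|b|d|e|g
print(strip_caret_values("a|b^c^d^e|f|g"))  # -> a|b|f|g
```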
|
SQL Remove string between two characters
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
This is my SQL script:
```
#-- creation de la table user
CREATE TABLE IF NOT EXISTS user(
iduser int AUTO_INCREMENT,
nom VARCHAR(50) NOT NULL,
prenom VARCHAR(50) ,
adressemail VARCHAR(200) NOT NULL,
motdepasse VARCHAR(200) NOT NULL,
CONSTRAINT pk_user PRIMARY KEY(iduser)
);
#-- creation de la table user
CREATE TABLE IF NOT EXISTS contact(
idcontact INT AUTO_INCREMENT,
nom VARCHAR(45) NOT NULL,
prenom VARCHAR(45),
adressemail VARCHAR(200) UNIQUE,
user_iduser INT NOT NULL,
CONSTRAINT pk_contact PRIMARY KEY(idcontact),
CONSTRAINT fk_contact_userIduser_user FOREIGN KEY (user_iduser) REFERENCES user(iduser) ON DELETE SET NULL ON UPDATE CASCADE
);
```
When I execute it on my MariaDB database, I get this error:
`Can't create table 'mydb.contact' (errno: 150)`
|
In your foreign key constraint you have set the action on delete to `set null`, but the column `user_iduser` does not allow null values as it is specified as `not null` which makes the constraint invalid. Either change the column to allow null values or change the delete action in the constraint.
The online MySQL manual even has [a warning](http://dev.mysql.com/doc/refman/5.7/en/create-table-foreign-keys.html) about this:
> If you specify a SET NULL action, *make sure that you have not declared
> the columns in the child table as NOT NULL*.
|
Your `ON DELETE SET NULL` clause prevents the table from being created, because the column `user_iduser` cannot be NULL, as you declared:
```
user_iduser INT NOT NULL
```
|
Can't create table 'mydb.contact' (errno: 150)
|
[
"",
"mysql",
"sql",
"mariadb",
"mariasql",
""
] |
I'm trying to include default (zero) values for groups that fall outside of the WHERE clause.
**Table**
```
Name Location
-----------------------
Chris North
John North
Jane North-East
Bryan South
```
**Query**
```
SELECT
Location,
COUNT(*)
FROM Users
WHERE Location = 'North' OR Location = 'North-East'
GROUP BY Location
```
**Output**
```
North 2
North-East 1
```
**Desired Output**
```
North 2
North-East 1
South 0
```
Is it possible to return a zero for each location outside of the where clause?
**Update**
Thank you everyone for the help. I ended up using the left join as this was quickest for me and produced the correct results.
```
DECLARE @Locations as Table(Name varchar(20));
DECLARE @Users as Table(Name varchar(20), Location varchar(20));
INSERT INTO @Users VALUES ('Chris', 'North')
INSERT INTO @Users VALUES ('John', 'North')
INSERT INTO @Users VALUES ('Jane', 'North-East')
INSERT INTO @Users VALUES ('Bryan', 'South')
INSERT INTO @Locations VALUES ('North')
INSERT INTO @Locations VALUES ('North-East')
INSERT INTO @Locations VALUES ('South')
SELECT
l.Name,
count(u.location)
FROM
@Locations l
LEFT JOIN
@Users u on l.Name = u.location and u.location in ('North', 'North-East')
group by
l.Name;
```
|
I think the simplest way is to use conditional aggregation:
```
SELECT Location,
SUM(CASE WHEN Location IN ('North', 'North-East') THEN 1 ELSE 0 END) as cnt
FROM Users u
GROUP BY Location;
```
Or, better yet, if you have a locations table:
```
select l.location, count(u.location)
from locations l left join
users u
on l.location = u.location and
u.location in ('North', 'North-East')
group by l.location;
```
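The left-join version is easy to check in Python's `sqlite3` with the poster's sample rows (a sketch, not the poster's SQL Server setup):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locations (location TEXT);
CREATE TABLE users (name TEXT, location TEXT);
INSERT INTO locations VALUES ('North'), ('North-East'), ('South');
INSERT INTO users VALUES
  ('Chris','North'), ('John','North'),
  ('Jane','North-East'), ('Bryan','South');
""")
# The location filter lives in the JOIN condition, not the WHERE clause,
# so unmatched locations survive with a count of zero
rows = conn.execute("""
SELECT l.location, COUNT(u.location)
FROM locations l
LEFT JOIN users u
  ON l.location = u.location
 AND u.location IN ('North', 'North-East')
GROUP BY l.location
ORDER BY l.location
""").fetchall()
print(rows)  # -> [('North', 2), ('North-East', 1), ('South', 0)]
```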
|
Assuming there is no locations table, the only way to do this is to use DISTINCT and a sub-select:
```
SELECT DISTINCT
Location,
(SELECT COUNT(*) FROM users AS U
WHERE U.Name = Users.Name
     AND (Location = 'North' OR Location = 'North-East'))
FROM Users
WHERE Location = 'North' OR Location = 'North-East'
```
This code does a lot of table scans and will probably cause your system issues when run on large tables in a production environment where this query would be run multiple times a day.
|
GroupBy Return Results Outside of Restriction
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to return the number of years someone has been a part of our team, based on their join date. However, I am getting an invalid minus operation error. The whole `getdate()` thing is not my friend, so I am sure my syntax is wrong.
Can anyone lend some help?
```
SELECT
Profile.ID as 'ID',
dateadd(year, -profile.JOIN_DATE, getdate()) as 'Years with Org'
FROM Profile
```
|
**MySQL Solution**
Use the `DATEDIFF` function.
> The DATEDIFF() function returns the time between two dates.
>
> DATEDIFF(date1,date2)
<http://www.w3schools.com/sql/func_datediff_mysql.asp>
This function returns the difference in days, so you need to convert to years by dividing by 365. To return an integer, wrap the result in `FLOOR`.
In your case, it would look like this
```
SELECT
Profile.ID as 'ID',
(FLOOR(DATEDIFF(profile.JOIN_DATE, getdate()) / 365)) * -1 as 'Years with Org'
FROM Profile
```
Here's an example fiddle I created
<http://sqlfiddle.com/#!9/8dbb6/2/0>
**MsSQL / SQL Server solution**
> The DATEDIFF() function returns the time between two dates.
>
> Syntax: DATEDIFF(datepart,startdate,enddate)
It's important to note here that, unlike its MySQL counterpart, the SQL Server version takes three parameters. For your example, the code looks as follows:
```
SELECT Profile.ID as 'ID',
DATEDIFF(YEAR,Profile.JoinDate,GETDATE()) as difference
FROM Profile
```
<http://www.w3schools.com/sql/func_datediff.asp>
|
Looks like you're using T-SQL? If so, you should use DATEDIFF:
```
DATEDIFF(year, profile.JOIN_DATE, getdate())
```
|
Subtract table value from today's date - SQL
|
[
"",
"sql",
"subtraction",
"getdate",
""
] |
I have been really struggling with this one! Essentially, I have been trying to use COUNT and GROUP BY within a subquery, and keep hitting errors about returning more than one value, among a whole host of others.
So, I have the following table:
```
start_date | ID_val | DIR | tsk | status|
-------------+------------+--------+-----+--------+
25-03-2015 | 001 | U | 28 | S |
27-03-2016 | 003 | D | 56 | S |
25-03-2015 | 004 | D | 56 | S |
25-03-2015 | 001 | U | 28 | S |
16-02-2016 | 002 | D | 56 | S |
25-03-2015 | 001 | U | 28 | S |
16-02-2016 | 002 | D | 56 | S |
16-02-2016 | 005 | NULL | 03 | S |
25-03-2015 | 001 | U | 17 | S |
16-02-2016 | 002 | D | 81 | S |
```
Ideally, I need to count the number of times each unique `ID_val` had, for example, U with 28 or D with 56, and only those combinations.
For example I was hoping to return the below results if its possible:
```
start_date | ID_val | no of times | status |
-------------+------------+---------------+--------+
25-03-2015 | 001 | 3 | S |
27-03-2016 | 003 | 1 | S |
25-03-2015 | 004 | 1 | S |
25-03-2015 | 002 | 3 | S |
```
I've managed to get the number of times on its own, but not as part of a result set with the other values (a subquery?).
Any advice is much appreciated!
|
You want one result per `ID_val`, so you'd group by `ID_val`.
You want the minimum start date: `min(start_date)`.
You want any status (as it is always the same): e.g. `min(status)` or `max(status)`.
You want to count matches: `count(case when <match> then 1 end)`.
```
select
min(start_date) as start_date,
id_val,
count(case when (dir = 'U' and tsk = 28) or (dir = 'D' and tsk = 56) then 1 end)
as no_of_times,
min(status) as status
from mytable
group by id_val;
```
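A quick runnable check of the conditional-count idea (Python's `sqlite3` standing in as a scratch SQL engine; the data is a trimmed-down, made-up subset of the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (start_date TEXT, id_val TEXT, dir TEXT, tsk INT, status TEXT);
    INSERT INTO mytable VALUES
        ('2015-03-25', '001', 'U', 28, 'S'),
        ('2015-03-25', '001', 'U', 28, 'S'),
        ('2015-03-25', '001', 'U', 17, 'S'),
        ('2016-02-16', '002', 'D', 56, 'S'),
        ('2016-02-16', '005', NULL, 3,  'S');
""")

# CASE yields 1 only for the wanted (dir, tsk) combinations;
# COUNT skips the NULLs produced for everything else.
rows = conn.execute("""
    SELECT id_val,
           min(start_date) AS start_date,
           COUNT(CASE WHEN (dir = 'U' AND tsk = 28)
                        OR (dir = 'D' AND tsk = 56) THEN 1 END) AS no_of_times,
           min(status) AS status
    FROM mytable
    GROUP BY id_val
    ORDER BY id_val
""").fetchall()
print(rows)
```

Note how the row with a NULL `dir` still appears in the output with a count of 0, because the `CASE` produces NULL for it and `COUNT` ignores NULLs.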
|
This is a basic conditional aggregation:
```
select id_val,
sum(case when (dir = 'U' and tsk = 28) or (dir = 'D' and tsk = 56)
then 1 else 0
end) as NumTimes
from t
group by id_val;
```
I left out the other columns because your question focuses on `id_val`, `dir`, and `tsk`. The other columns seem unnecessary.
|
SQL Server - COUNT with GROUP BY in subquery
|
[
"",
"sql",
"sql-server",
""
] |
[Here](https://stackoverflow.com/q/5653423/383688) I've found how to define a variable in Oracle SQL Developer.
But can we define the range of values somehow?
I need something like this:
```
define my_range = '55 57 59 61 67 122';
delete from ITEMS where ITEM_ID in (&&my_range);
```
|
Actually if you put commas in your list it will work since you are using a substitution parameter (not a bind variable):
```
define my_range = '55, 57, 59, 61, 67, 122';
delete from ITEMS where ITEM_ID in (&&my_range);
```
[](https://i.stack.imgur.com/8Pexz.png)
|
Use a collection:
```
CREATE TYPE INT_TABLE AS TABLE OF INT;
/
```
Then you can do:
```
DEFINE my_range = '55,57,59,61,67,122';
DELETE FROM items
WHERE ITEM_ID MEMBER OF INT_TABLE( &&my_range );
```
**Example**:
```
CREATE TABLE ITEMS ( ITEM_ID ) AS
SELECT LEVEL FROM DUAL CONNECT BY LEVEL <= 150;
DEFINE my_range = '55,57,59,61,67,122';
DELETE FROM items WHERE ITEM_ID MEMBER OF INT_TABLE( &&my_range );
```
**Output**:
```
Table ITEMS created.
old:DELETE FROM items WHERE ITEM_ID MEMBER OF INT_TABLE( &&my_range );
new:DELETE FROM items WHERE ITEM_ID MEMBER OF INT_TABLE( 55,57,59,61,67,122 );
6 rows deleted.
```
|
Can we define a variable which contains the range of values in Oracle SQL Developer?
|
[
"",
"sql",
"oracle",
"range",
""
] |
I need help with a sql query. I have a table like this:
```
ID bookType Date
----------- ---------- ------
1 85 01.01.2014
1 86 01.01.2014
1 88 01.01.2014
1 3005 01.01.2014
1 3028 01.01.2014
2 74 01.01.2016
2 85 01.01.2016
2 86 01.01.2016
3 88 01.01.2015
3 3005 01.01.2015
```
I need a query that returns all ids with booktype 85 and 86, and NOT the ids that also have booktype 88, 3005, or 3028. All the other types are not relevant; they can be included.
Example:
I just want ID 2, because it has no booktype of 88, 3005, or 3028. It also has booktype 74, but that doesn't matter; it can be included.
I tried something like this:
```
SELECT bookid AS id, COUNT(bookid) AS number
FROM books
WHERE date BETWEEN '01.01.2014' and '01.01.2016'
and booktype in (85,86)
GROUP BY bookid
HAVING COUNT(bookid) >1
MINUS
SELECT bookid AS id, count(bookid) AS number
FROM books
WHERE date BETWEEN '01.01.2014' and '01.01.2016'
and booktype in (88,3005,3028)
GROUP BY bookid;
```
It doesn't work: every time, I get results with booktype 88 or the others included.
I tried `EXCEPT`, but Oracle SQL Developer doesn't know it.
|
I see some inconsistency between your SQL and your column names:
there is no bookid in the table, and booktype is missing...
So assuming your first query is:
```
SELECT ID AS ID, COUNT(ID) AS number FROM books WHERE date
BETWEEN '2014-01-01' and '2016-01-01' and bookType in (85,86)
GROUP BY ID
HAVING COUNT(ID) >1;
```
This will have result set:
```
ID number
1 2
2 2
```
Your second query
```
SELECT ID AS ID, COUNT(ID) AS number FROM books WHERE date
BETWEEN '2014-01-01' and '2016-01-01' and bookType in (88, 3005, 3028)
GROUP BY ID;
```
This will have result set:
```
ID number
1 3
3 2
```
The MINUS operator in ORACLE returns only unique rows returned by the first query but not by the second. So the whole query will return first record set as both results of first query are different from the results of the second.
If you drop the count statement in your query you will have:
First query
```
SELECT ID AS ID FROM books WHERE date
BETWEEN '2014-01-01' and '2016-01-01' and bookType in (85,86)
GROUP BY ID
HAVING COUNT(ID) >1;
```
result set
```
ID
1
2
```
Second query:
```
SELECT ID AS ID FROM books WHERE date
BETWEEN '2014-01-01' and '2016-01-01' and bookType in (88, 3005, 3028)
GROUP BY ID;
```
result set
```
ID
1
3
```
and applying the MINUS operator you will get just 2 as required, as 1 is in the second result set.
This is just to confirm that your logic was right, but did not completely take in account the way the MINUS operates on result sets.
So your query has to be:
```
SELECT ID AS ID FROM books WHERE date
BETWEEN '2014-01-01' and '2016-01-01' and bookType in (85,86)
GROUP BY ID
HAVING COUNT(ID) >1
MINUS
SELECT ID AS ID FROM books WHERE date
BETWEEN '2014-01-01' and '2016-01-01' and bookType in (88, 3005, 3028)
GROUP BY ID;
```
Last remarks:
* I left WHERE date BETWEEN '2014-01-01' and '2016-01-01', as I assume it relates to other requirements, even though it is not relevant in your example
* I left HAVING COUNT(ID) >1 for the same reason
Regards
|
Try this:
```
SELECT bookid AS id, COUNT(*) AS number
FROM books
WHERE date BETWEEN DATE '2014-01-01' and DATE '2016-01-01'
GROUP BY bookid
HAVING COUNT(DISTINCT CASE WHEN booktype IN (85,86) THEN booktype END) = 2 AND
COUNT(CASE WHEN booktype IN (88, 3005, 3028) THEN 1 END) = 0
```
If you just want to count `(85,86)`occurrences then use :
```
COUNT(CASE WHEN booktype IN (85,86) THEN 1 END)
```
instead of:
```
COUNT(*)
```
|
How to select id's which contains just special values?
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
"having",
""
] |
```
student_mas:             receipt_mas:
name    class            name    class   month
john    2nd              john    2nd     JAN
bunny   3rd              john    2nd     FEB
sunny   4th              bunny   3rd     FEB
```
A student who submits fees for a particular month gets a row inserted into the second table, with the month recorded in its month column.
I want the list of students who have not submitted the fees for the month of JAN.
Please help me; thanks in advance.
|
You can use **`NOT EXISTS`**
**Query**
```
select * from student_mas t
where not exists (
select * from receipt_mas
where name = t.name
and class = t.class
and [month] = 'JAN'
);
```
**`SQL Fiddle demo`**
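The `NOT EXISTS` pattern is easy to verify with a self-contained scratch run (Python's `sqlite3` used here purely as a stand-in SQL engine; data mirrors the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student_mas (name TEXT, class TEXT);
    CREATE TABLE receipt_mas (name TEXT, class TEXT, month TEXT);
    INSERT INTO student_mas VALUES ('john', '2nd'), ('bunny', '3rd'), ('sunny', '4th');
    INSERT INTO receipt_mas VALUES
        ('john', '2nd', 'JAN'), ('john', '2nd', 'FEB'), ('bunny', '3rd', 'FEB');
""")

# NOT EXISTS keeps only students with no JAN receipt at all.
rows = conn.execute("""
    SELECT name FROM student_mas t
    WHERE NOT EXISTS (
        SELECT 1 FROM receipt_mas
        WHERE name = t.name AND class = t.class AND month = 'JAN'
    )
    ORDER BY name
""").fetchall()
print(rows)  # [('bunny',), ('sunny',)]
```

Inside the subquery, the bare `name`, `class`, and `month` resolve to `receipt_mas` (the innermost scope), while `t.name` and `t.class` correlate back to the outer `student_mas` row.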
|
Ullas's answer works perfectly, but you can also try the approach below.
```
DECLARE @student_mas TABLE (
NAME VARCHAR(50)
,class VARCHAR(10)
);
insert into @student_mas
values
('john', '2nd'),
('bunny', '3rd'),
('sunny', '4th');
DECLARE @receipt_mas TABLE (
NAME VARCHAR(50)
,class VARCHAR(10)
,[month] VARCHAR(3)
);
insert into @receipt_mas
values
('john', '2nd', 'JAN'),
('john', '2nd', 'FEB'),
('bunny', '3rd', 'FEB');
SELECT sm.*
FROM @student_mas sm
LEFT JOIN @receipt_mas rm ON sm.NAME = rm.NAME
AND sm.class = rm.class
AND rm.month = 'JAN'
WHERE RM.class IS NULL
```
|
There are two tables. I want to join them in such a way that I can get the following result
|
[
"",
"sql",
"sql-server",
"vb.net",
"visual-studio-2010",
""
] |
I am working on a join and I cannot seem to get the resultset that I need. Let me paint the scenario:
I have 2 tables:
Data table
```
+----+-------+
| ID | Name |
+----+-------+
| 10 | Test1 |
| 11 | Test2 |
| 12 | Test3 |
| 13 | Test4 |
| 14 | Test5 |
| 15 | Test6 |
+----+-------+
```
Join table
```
+----+-----+-----+-----+
| ID | FID | GID | Val |
+----+-----+-----+-----+
| 10 | 3 | | abc |
| 10 | | 1 | def |
| 11 | 3 | | ijk |
| 12 | | 1 | lmn |
| 13 | 4 | | opq |
+----+-----+-----+-----+
```
Expected Result Set
```
+---------------+-----------------+---------------+----------------+----------------+
| Data table id | Data table name | Join Tabe FID | Join Table GID | Join Table Val |
+---------------+-----------------+---------------+----------------+----------------+
| 10 | Test1 | 3 | | abc |
| 11 | test2 | 3 | | ijk |
| 12 | test3 | | 1 | lmn |
+---------------+-----------------+---------------+----------------+----------------+
```
My Query
```
Select
*
from
datatable A
join jointable b
on
A.ID = B.ID
and B.FID = 3
join jointable c
on
A.ID = C.ID
and C.GID = 1
and C.FID <> null
```
What is happening is that the join on table C is being done on the resultset of the join between table A and B, therefore the resultset is blank.
I want the join on table C to be applied on table A and not on the resultset from the join between table A and B; which will result on the expected resultset.
Can anyone help?
Thanks
|
The expression **`C.FID <> null`** will never evaluate to true; it will always return NULL. An inequality comparison to `NULL` always evaluates to `NULL`. (In SQL, in a boolean context, an expression evaluates to one of *three* possible values: `TRUE`, `FALSE` or `NULL`.)
If you want a comparison to `NULL` to return `TRUE` or `FALSE`, use an `IS [NOT] NULL` comparison test. An expression like
```
foo IS NULL
```
or
```
foo IS NOT NULL
```
Or, you could make use the MySQL specific null-safe comparison (spaceship) operator:
```
foo <=> NULL
```
or
```
NOT (foo <=> NULL)
```
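A two-line scratch check of this three-valued logic (Python's `sqlite3` standing in as the SQL engine; note sqlite spells the null-safe test `IS` rather than MySQL's `<=>`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An inequality against NULL evaluates to NULL (surfaced as Python None),
# never TRUE, so a row filter built on it matches nothing.
neq, is_null = conn.execute("SELECT 1 <> NULL, NULL IS NULL").fetchone()
print(neq, is_null)  # None 1
```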
---
As to the result you want to return, it's a bit confusing as to how you arrive at what you want to return.
To me, it looks like you are wanting to get the matching rows from `jointable`... if there are matching rows with `fid=3`, return just those rows. If there aren't any matching rows with `fid=3`, then return rows that have a NULL value in `fid` and `gid=1`.
If that's what we want returned, we can write a query that does that. If that's not what we want returned, then the rest of this answer doesn't matter.
We can use a `NOT EXISTS` predicate to test for the non existence of matching rows.
For example:
```
SELECT d.id
, d.name
, j.fid
, j.gid
, j.val
FROM datatable d
JOIN jointable j
ON j.id = d.id
WHERE ( j.fid = 3 )
OR ( j.fid IS NULL
AND j.gid = 1
AND NOT EXISTS ( SELECT 1
FROM jointable t
WHERE t.id = d.id
AND t.fid = 3
)
)
```
|
```
SELECT
*
FROM datatable A
LEFT JOIN jointable B ON A.ID = B.ID
WHERE B.FID = 3 OR B.GID = 1;
```
This will return you:
```
10 Test1 10 3 abc
10 Test1 10 1 def
11 Test2 11 3 ijk
12 Test3 12 1 lmn
```
Now, it seems you want to filter out:
```
10 Test1 10 3 abc
```
and keep
```
10 Test1 10 1 def
```
Is that what you want?
Regards
|
Two joins in mysql query not returning expected result
|
[
"",
"mysql",
"sql",
"join",
"resultset",
""
] |
I'm trying to select all client ID's that has `TypeId` equal 1 but not `TypeId` equal 3.
Table example:
```
---------------------
| ClientID | TypeId |
---------------------
| 1 | 1 |
| 1 | 3 |
| 2 | 3 |
| 3 | 1 |
---------------------
```
My query:
```
SELECT ClientId, TypeId
FROM Table
GROUP BY ClientId, TypeId
HAVING TypeId != 3
```
What I have:
```
---------------------
| ClientID | TypeId |
---------------------
| 1 | 1 |
| 3 | 1 |
---------------------
```
What I expect:
```
---------------------
| ClientID | TypeId |
---------------------
| 3 | 1 |
---------------------
```
The critical thing is that the table has more than 3 × 10^8 rows.
Thanks in advance!
|
I would suggest aggregation and `having`:
```
SELECT ClientId
FROM Table
GROUP BY ClientId
HAVING SUM(CASE WHEN TypeId = 1 THEN 1 ELSE 0 END) > 0 AND
SUM(CASE WHEN TypeId = 3 THEN 1 ELSE 0 END) = 0;
```
Each condition in the `HAVING` clause counts the number of rows having a particular `TypeId` value. The `> 0` means there is at least one. The `= 0` means there are none.
If you actually want to get the original *rows* that match -- so all `TypeId`s associated with a client. You can use a `JOIN` or window functions:
```
SELECT ClientId, TypeId
FROM (SELECT ClientId, TypeId,
SUM(CASE WHEN TypeId = 1 THEN 1 ELSE 0 END) OVER (PARTITION BY ClientId) as TypeId_1,
SUM(CASE WHEN TypeId = 3 THEN 1 ELSE 0 END) OVER (PARTITION BY ClientId) as TypeId_3
FROM Table
) t
WHERE TypeId_1 > 0 AND TypeId_3 = 0;
```
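Here is a minimal runnable check of the aggregation-with-`HAVING` approach (Python's `sqlite3` as a scratch engine; the table is renamed `t` since `TABLE` is a reserved word, and the data mirrors the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (ClientID INT, TypeId INT);
    INSERT INTO t VALUES (1, 1), (1, 3), (2, 3), (3, 1);
""")

# First HAVING term demands at least one TypeId 1 row per client,
# the second demands zero TypeId 3 rows.
rows = conn.execute("""
    SELECT ClientID
    FROM t
    GROUP BY ClientID
    HAVING SUM(CASE WHEN TypeId = 1 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN TypeId = 3 THEN 1 ELSE 0 END) = 0
""").fetchall()
print(rows)  # [(3,)]
```

Client 1 is rejected by the second condition, client 2 by the first, leaving only client 3 as expected.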
|
Try this:
```
SELECT t1.*
FROM Table AS t1
WHERE TypeId = 1 AND
NOT EXISTS (SELECT 1
FROM Table AS t2
WHERE t1.ClientId = t2.ClientId AND t2.TypeId = 3)
```
|
SQL Query with group and having
|
[
"",
"sql",
"sql-server",
"performance",
""
] |
I have 2 tables:
* `tbl1(ID, Name, Sex, OrderDate)`
* `tbl2(OrderDate, OrderCode)`
I try to display all data from `tbl1 (ID, Name, Sex, OrderDate)` and **only** one column from `tbl2(OrderCode)`.
I have tried this
```
SELECT tbl1.*, tbl2.OrderCode FROM tbl1, tbl2;
```
but it shows duplicate data. like this

I have searched for a while but only found MySQL queries where they use `join`, and when I try it I get a syntax error.
I want it appears like this **ID Name Sex OrderDate OrderCode**
OrderCode is a random AutoNumber, which is why I put it in the other table, since Access does not allow two AutoNumber fields in the same table.
|
Create two tables as following:
[](https://i.stack.imgur.com/edxsZ.png)
and then use this query
```
select t1.ID, t1.Name, t1.Sex, t2.OrderDate, t2.OrderCode from
Table1 t1
inner join Table2 t2 on t1.ID = t2.ID;
```
|
You are doing a CROSS JOIN which results in a cartesian product
You should JOIN on the field that is common in the 2 tables to limit the rows returned.
In your case the only field I see on both tables is OrderDate, and that's a bit weird.
Try this
```
SELECT tbl1.*, tbl2.OrderCode FROM tbl1, tbl2 WHERE tbl1.OrderDate=tbl2.OrderDate
```
|
Ms.Access query in vb.net select all from tbl1 and only 1 column from tbl2
|
[
"",
"sql",
"vb.net",
"ms-access",
""
] |
```
table 1
id name value activ
1 abc 5 1
2 def 6 1
3 ghi 10 0
4 jkl 15 1
table 2
id name value table1_id
1 abc 100 1
2 jkl 200 4
```
I want to return all records from table 1 where active = 1, and the records from table 2 whose table1_id references the id of table 1, thus skipping the table 1 record that matches a table1_id in table 2.
The output must be something like this:
```
name value
abc 100
def 6
jkl 200
```
I tried something like this:
```
select s.name,
s.value
from table1 as s
where s.active =1 and
      s.id NOT IN (select d1.table1_id
from table2 as d1
where d1.table1_id = s.id)
union
select d.id,
d.name,
d.value
from table2 as d
```
It always returns all records of table 2. I am unable to use the where condition at the end of the statement, like 'where d.table1_id = s.id'; it says s.id is unknown.
|
**[SQL Fiddle Demo](http://sqlfiddle.com/#!9/ae959/2)**
```
SELECT T1.name,
COALESCE(T2.value, T1.value) as value
FROM Table1 as T1
LEFT JOIN Table2 as T2
ON T1.id = T2.table1_id
WHERE T1.active = 1
```
**OUTPUT**
```
| name | value |
|------|-------|
| abc | 100 |
| jkl | 200 |
| def | 6 |
```
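The `LEFT JOIN` + `COALESCE` behavior can be verified end to end with a small scratch run (Python's `sqlite3` as a stand-in engine; tables and data mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INT, name TEXT, value INT, activ INT);
    CREATE TABLE table2 (id INT, name TEXT, value INT, table1_id INT);
    INSERT INTO table1 VALUES (1,'abc',5,1), (2,'def',6,1), (3,'ghi',10,0), (4,'jkl',15,1);
    INSERT INTO table2 VALUES (1,'abc',100,1), (2,'jkl',200,4);
""")

# LEFT JOIN keeps every active table1 row; COALESCE swaps in the
# table2 value whenever a matching row exists.
rows = conn.execute("""
    SELECT t1.name, COALESCE(t2.value, t1.value) AS value
    FROM table1 t1
    LEFT JOIN table2 t2 ON t1.id = t2.table1_id
    WHERE t1.activ = 1
    ORDER BY t1.id
""").fetchall()
print(rows)  # [('abc', 100), ('def', 6), ('jkl', 200)]
```

The inactive `ghi` row is filtered out by the WHERE clause, while `def` keeps its own value because no table2 row overrides it.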
|
```
SELECT a.name,
       COALESCE(b.value, a.value)
FROM table1 a
LEFT OUTER JOIN table2 b
ON a.id =b.table1_id
WHERE a.activ=1;
```
|
mysql - not in statement
|
[
"",
"mysql",
"sql",
"notin",
""
] |
I do not understand why mysql says that there is unknown column in where clause. If I remove alias and just use log\_archive.date then it works just fine.
Here is the sql:
```
SELECT DISTINCT(log_archive.msisdn) AS msisdn,
DATE(log_archive.date) AS actionDate,
users.activation_date
FROM log_archive
INNER JOIN users on log_archive.msisdn = users.msisdn
WHERE log_archive.action = 'pokrenuta aplikacija' AND
actionDate BETWEEN ':start_date' AND ':end_date'
ORDER BY actionDate DESC
```
Here is the error message:
> Unknown column 'actionDate' in 'where clause'
MySQL version is 5.1.36.
|
Column alias name can't be used in `WHERE` clause.
# Reference:
<http://dev.mysql.com/doc/refman/5.7/en/problems-with-alias.html>
> Standard SQL disallows references to column aliases in a WHERE clause.
> This restriction is imposed because when the WHERE clause is
> evaluated, the column value may not yet have been determined. For
> example, the following query is illegal:
|
Your `actionDate` is not an alias for a plain column; it is an alias for the result of the `DATE()` function, and either way MySQL does not let a WHERE clause reference a select-list alias.
If you change the SQL as below it works, which shows you must use the actual column (not the alias) in the WHERE clause:
```
SELECT DISTINCT(log_archive.msisdn) AS msisdn,
log_archive.date AS actionDate,
users.activation_date
FROM log_archive
INNER JOIN users on log_archive.msisdn = users.msisdn
WHERE log_archive.action = 'pokrenuta aplikacija' AND
      DATE(log_archive.date) BETWEEN ':start_date' AND ':end_date'
ORDER BY actionDate DESC
```
|
Mysql aliased column unknown in the where clause
|
[
"",
"mysql",
"sql",
""
] |
I have an `updates` table in Postgres is 9.4.5 like this:
```
goal_id | created_at | status
1 | 2016-01-01 | green
1 | 2016-01-02 | red
2 | 2016-01-02 | amber
```
And a `goals` table like this:
```
id | company_id
1 | 1
2 | 2
```
I want to create a chart for each company that shows the state of all of their goals, per week.
[](https://i.stack.imgur.com/3pDez.png)
I image this would require to generate a series of the past 8 weeks, finding the most recent update for each goal that came before that week, then counting the different statuses of the found updates.
What I have so far:
```
SELECT EXTRACT(year from generate_series) AS year,
EXTRACT(week from generate_series) AS week,
u.company_id,
COUNT(*) FILTER (WHERE u.status = 'green') AS green_count,
COUNT(*) FILTER (WHERE u.status = 'amber') AS amber_count,
COUNT(*) FILTER (WHERE u.status = 'red') AS red_count
FROM generate_series(NOW() - INTERVAL '2 MONTHS', NOW(), '1 week')
LEFT OUTER JOIN (
SELECT DISTINCT ON(year, week)
goals.company_id,
updates.status,
EXTRACT(week from updates.created_at) week,
EXTRACT(year from updates.created_at) AS year,
updates.created_at
FROM updates
JOIN goals ON goals.id = updates.goal_id
ORDER BY year, week, updates.created_at DESC
) u ON u.week = week AND u.year = year
GROUP BY 1,2,3
```
But this has two problems. The join on `u` isn't working as I thought it would; it seems to be joining on every row (?) returned from the inner query. Also, this only selects the most recent update that happened within each week, whereas it *should* grab the most recent update from before that week if it needs to.
This is some pretty complicated SQL and I love some input on how to pull it off.
### Table structures and info
The goals table has around ~1000 goals ATM and is growing about ~100 a week:
```
Table "goals"
Column | Type | Modifiers
-----------------+-----------------------------+-----------------------------------------------------------
id | integer | not null default nextval('goals_id_seq'::regclass)
company_id | integer | not null
name | text | not null
created_at | timestamp without time zone | not null default timezone('utc'::text, now())
updated_at | timestamp without time zone | not null default timezone('utc'::text, now())
Indexes:
"goals_pkey" PRIMARY KEY, btree (id)
"entity_goals_company_id_fkey" btree (company_id)
Foreign-key constraints:
"goals_company_id_fkey" FOREIGN KEY (company_id) REFERENCES companies(id) ON DELETE RESTRICT
```
The `updates` table has around ~1000 and is growing around ~100 a week:
```
Table "updates"
Column | Type | Modifiers
------------+-----------------------------+------------------------------------------------------------------
id | integer | not null default nextval('updates_id_seq'::regclass)
status | entity.goalstatus | not null
goal_id | integer | not null
created_at | timestamp without time zone | not null default timezone('utc'::text, now())
updated_at | timestamp without time zone | not null default timezone('utc'::text, now())
Indexes:
"goal_updates_pkey" PRIMARY KEY, btree (id)
"entity_goal_updates_goal_id_fkey" btree (goal_id)
Foreign-key constraints:
"updates_goal_id_fkey" FOREIGN KEY (goal_id) REFERENCES goals(id) ON DELETE CASCADE
Schema | Name | Internal name | Size | Elements | Access privileges | Description
--------+-------------------+---------------+------+----------+-------------------+-------------
entity | entity.goalstatus | goalstatus | 4 | green +| |
| | | | amber +| |
| | | | red | |
```
|
You need one data item per week and goal (before aggregating counts per company). That's a plain `CROSS JOIN` between `generate_series()` and `goals`. The (possibly) expensive part is to get the current `state` from `updates` for each. Like [@Paul already suggested](https://stackoverflow.com/a/36088907/939860), a `LATERAL` join seems like the best tool. Do it only for `updates`, though, and use a faster technique with `LIMIT 1`.
And simplify date handling with [**`date_trunc()`**](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC).
```
SELECT w_start
, g.company_id
, count(*) FILTER (WHERE u.status = 'green') AS green_count
, count(*) FILTER (WHERE u.status = 'amber') AS amber_count
, count(*) FILTER (WHERE u.status = 'red') AS red_count
FROM generate_series(date_trunc('week', NOW() - interval '2 months')
, date_trunc('week', NOW())
, interval '1 week') w_start
CROSS JOIN goals g
LEFT JOIN LATERAL (
SELECT status
FROM updates
WHERE goal_id = g.id
AND created_at < w_start
ORDER BY created_at DESC
LIMIT 1
) u ON true
GROUP BY w_start, g.company_id
ORDER BY w_start, g.company_id;
```
To make this ***fast*** you need a **multicolumn index**:
```
CREATE INDEX updates_special_idx ON updates (goal_id, created_at DESC, status);
```
Descending order for `created_at` is best, but not strictly necessary. Postgres can scan indexes backwards almost exactly as fast. ([Not applicable for inverted sort order of multiple columns, though.](https://dba.stackexchange.com/questions/39589/optimizing-queries-on-a-range-of-timestamps-two-columns/39599#39599))
Index columns in *that* order. Why?
* [Multicolumn index and performance](https://dba.stackexchange.com/a/33220/3684)
And the third column `status` is only appended to allow fast [index-only scans](https://wiki.postgresql.org/wiki/Index-only_scans) on `updates`. Related case:
* [Slow index scans in large table](https://dba.stackexchange.com/a/81554/3684)
1k goals for 9 weeks (your interval of 2 months overlaps with at least 9 weeks) only require 9k index look-ups for the 2nd table of only 1k rows. For small tables like this, performance shouldn't be much of a problem. But once you have a couple of thousand more in each table, performance will deteriorate with sequential scans.
`w_start` represents the start of each week. Consequently, counts are for the start of the week. You *can* still extract year and week (or any other details represent your week), if you insist:
```
EXTRACT(isoyear from w_start) AS year
, EXTRACT(week from w_start) AS week
```
Best with [`ISOYEAR`](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT), like @Paul explained.
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/3d5e1/1)
Related:
* [What is the difference between LATERAL and a subquery in PostgreSQL?](https://stackoverflow.com/questions/28550679/what-is-the-difference-between-lateral-and-a-subquery-in-postgresql/28557803#28557803)
* [Optimize GROUP BY query to retrieve latest record per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748)
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
* [PostgreSQL: running count of rows for a query 'by minute'](https://stackoverflow.com/questions/8193688/postgresql-running-count-of-rows-for-a-query-by-minute/8194088#8194088)
|
This seems like a good use for `LATERAL` joins:
```
SELECT EXTRACT(ISOYEAR FROM s) AS year,
EXTRACT(WEEK FROM s) AS week,
u.company_id,
COUNT(u.goal_id) FILTER (WHERE u.status = 'green') AS green_count,
COUNT(u.goal_id) FILTER (WHERE u.status = 'amber') AS amber_count,
COUNT(u.goal_id) FILTER (WHERE u.status = 'red') AS red_count
FROM generate_series(NOW() - INTERVAL '2 months', NOW(), '1 week') s(w)
LEFT OUTER JOIN LATERAL (
SELECT DISTINCT ON (g.company_id, u2.goal_id) g.company_id, u2.goal_id, u2.status
FROM updates u2
INNER JOIN goals g
ON g.id = u2.goal_id
WHERE u2.created_at <= s.w
ORDER BY g.company_id, u2.goal_id, u2.created_at DESC
) u
ON true
WHERE u.company_id IS NOT NULL
GROUP BY year, week, u.company_id
ORDER BY u.company_id, year, week
;
```
Btw I am extracting `ISOYEAR` not `YEAR` to ensure I get sensible results around the beginning of January. For instance `EXTRACT(YEAR FROM '2016-01-01 08:49:56.734556-08')` is `2016` but `EXTRACT(WEEK FROM '2016-01-01 08:49:56.734556-08')` is `53`!
**EDIT:** You should test on your real data, but I feel like this ought to be faster:
```
SELECT year,
week,
company_id,
COUNT(goal_id) FILTER (WHERE last_status = 'green') AS green_count,
COUNT(goal_id) FILTER (WHERE last_status = 'amber') AS amber_count,
COUNT(goal_id) FILTER (WHERE last_status = 'red') AS red_count
FROM (
SELECT EXTRACT(ISOYEAR FROM s) AS year,
EXTRACT(WEEK FROM s) AS week,
u.company_id,
u.goal_id,
(array_agg(u.status ORDER BY u.created_at DESC))[1] AS last_status
FROM generate_series(NOW() - INTERVAL '2 months', NOW(), '1 week') s(t)
LEFT OUTER JOIN (
SELECT g.company_id, u2.goal_id, u2.created_at, u2.status
FROM updates u2
INNER JOIN goals g
ON g.id = u2.goal_id
) u
ON s.t >= u.created_at
WHERE u.company_id IS NOT NULL
GROUP BY year, week, u.company_id, u.goal_id
) x
GROUP BY year, week, company_id
ORDER BY company_id, year, week
;
```
Still no window functions though. :-) Also you can speed it up a bit more by replacing `(array_agg(...))[1]` with a real `first` function. You'll have to define that yourself, but there are implementations on the Postgres wiki that are easy to Google for.
|
Aggregating the most recent joined records per week
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
I need to combine 2 tables into one big table.
My problem is that I need to merge 2 different columns (in the example: books, toys) into one column (things).
For the other columns:
* if a column is in both tables, it appears as a single column in every row (in the example: price)
* if a column is in only one table, then rows coming from the other table get null for it (in the example: cover, name).
Example:
table 1:
```
books cover price
----- ----- ------
book1 soft 19
book2 soft 23
book3 hard 39
```
table2:
```
toys name price
---- ---- -----
astro Buzz 29
mouse Jerr 35
```
Result:
```
things name cover price
------ ---- ----- -----
book1 null soft 19
book2 null soft 23
book3 null hard 39
astro Buzz null 29
mouse Jerr null 35
```
|
You can try using a `UNION ALL` more info [here](http://www.techonthenet.com/sql/union_all.php)
Something like this:
```
SELECT books "things", NULL "name", cover, price
FROM table1
UNION ALL
SELECT toys "things", name , NULL "cover", price
FROM table2
```
|
How about a simple UNION:
```
select books as things, null as name, cover, price
from table1
union
select toys as things, name, null as cover, price from table2
```
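Either answer can be checked with a quick scratch run (Python's `sqlite3` as a stand-in engine; `UNION ALL` is shown, but plain `UNION` gives the same rows here since there are no duplicates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (books TEXT, cover TEXT, price INT);
    CREATE TABLE table2 (toys TEXT, name TEXT, price INT);
    INSERT INTO table1 VALUES ('book1','soft',19), ('book2','soft',23), ('book3','hard',39);
    INSERT INTO table2 VALUES ('astro','Buzz',29), ('mouse','Jerr',35);
""")

# Each branch pads the column it lacks with NULL so the two
# SELECTs line up; UNION ALL just stacks the rows.
rows = conn.execute("""
    SELECT books AS things, NULL AS name, cover, price FROM table1
    UNION ALL
    SELECT toys, name, NULL, price FROM table2
    ORDER BY things
""").fetchall()
print(rows)
```

Column names and types for the combined result come from the first SELECT, which is why the aliases (`things`, `name`) only need to appear in the first branch.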
|
SQL: linking two different columns in one
|
[
"",
"sql",
""
] |
I have two tables:
* `customers` table, with columns `customer_id`, `created_at`
* `orders` table, with columns `order_id`, `customer_id`, `paid_at`, `amount`
I have to write a SQL query that breaks up the customers based on the year that they signed up (their cohort year), and determines total annual revenue for each cohort over the years (e.g., the 2011 cohort has total revenue of $x in year 1, $y in year 2, etc.)
```
select c.customer_id, c.created_at, SUM(o.amount) as Tot_amt
from customers c inner join orders o on c.customer_id = o.customer_id
group by c.created_at, Tot_amt;
```
|
GROUP BY year ( date column )
```
select c.customer_id, YEAR(c.created_at), SUM(o.amount) as Tot_amt
from customers c inner join orders o on c.customer_id = o.customer_id
group by c.customer_id, YEAR(c.created_at)
```
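A runnable sketch of the cohort grouping (Python's `sqlite3` as a scratch engine; sqlite has no `YEAR()`, so `strftime('%Y', col)` plays the same role, and the sample data is made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INT, created_at TEXT);
    CREATE TABLE orders (order_id INT, customer_id INT, paid_at TEXT, amount INT);
    INSERT INTO customers VALUES (1, '2011-05-01'), (2, '2012-03-01');
    INSERT INTO orders VALUES
        (1, 1, '2011-06-01', 10), (2, 1, '2011-07-01', 15),
        (3, 2, '2012-04-01', 20);
""")

# Group by the signup year (the cohort) extracted from created_at.
rows = conn.execute("""
    SELECT c.customer_id,
           strftime('%Y', c.created_at) AS cohort,
           SUM(o.amount) AS tot_amt
    FROM customers c
    JOIN orders o ON c.customer_id = o.customer_id
    GROUP BY c.customer_id, cohort
    ORDER BY c.customer_id
""").fetchall()
print(rows)  # [(1, '2011', 25), (2, '2012', 20)]
```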
|
To support your statement
> 2011 cohort has total revenue of $x in year 1, $y in year 2
you should have orderDate in your orders table
```
select
YEAR(c.created_at) as cohortYear,
(YEAR(orderDate)-YEAR(c.created_at)+1) as YearNum,
SUM(o.amount) OVER( PARTITION by YEAR(c.created_at),(YEAR(orderDate)-YEAR(c.created_at)) ORDER BY YEAR(c.created_at),(YEAR(orderDate)-YEAR(c.created_at))) as Tot_amt
from customers c inner join orders o on c.customer_id = o.customer_id
```
In case you need a solution using group by
```
select
YEAR(c.created_at) as cohortYear,
(YEAR(orderDate)-YEAR(c.created_at)+1) as YearNum,
SUM(o.amount) as Tot_amt
from customers c inner join orders o on c.customer_id = o.customer_id
group by YEAR(c.created_at),(YEAR(orderDate)-YEAR(c.created_at)+1)
order by YEAR(c.created_at),(YEAR(orderDate)-YEAR(c.created_at))
```
|
Need sum of a fact row grouped by start year and every year after it
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I am new to regular expression. I am using regular expression in SQL query. And want to display all the records which contain anything other than: Alphanumeric characters, white spaces, hyphen(-) in between the string, dot(.)in the end and in between the string.
I have been able to do this for alphanumeric characters and spaces, but hyphen and dot are causing problems.
For example: "first.name" and "firstname." should not come back, but ".firstname" should come back in the result.
And "first-name" should not come back, but "-firstname" and "firstname-" should come back in the result.
As of now i have done this.
```
SELECT * FROM my_table WHERE column1 like '.%' or column1 like '-%' or
regexp_like(column1 ,'[^[:alnum:]^[:blank:]]');
```
The above query allows the dot and hyphen at any place in the string, not only at the places I want as discussed above.
Can anyone help me with this?
|
I tried a whole lot of solutions. The following finally works exactly the way I wanted:
```
SELECT * FROM my_table WHERE (INSTR(column1,'.', 1, 1))=1
or (INSTR(column1,'-', 1, 1))=1 or (INSTR(column1,'-', 1,1)=length(column1))
or (REGEXP_LIKE(column1,'[^[:alnum:]^[:blank:]-.]'));
```
|
I've created a table just like yours, and this statement
```
select * from tmp where x like '.%' OR x like '-%' or x like '%-';
```
works perfectly fine: it returns only strings with a leading dot or a leading/trailing hyphen.
:)
|
What will be the regular expression for allowing alphanumeric characters, space, hyphen in between, and dot at the end or in between?
|
[
"",
"sql",
"regex",
"oracle",
""
] |
First of all I have this query, which returns the date of all the football games where
HomeShotsOnTarget (HST) = FullTimeHomeGoals (FTHG)
or
AwayShotsOnTarget (AST) = FullTimeAwayGoals (FTAG)
```
SELECT MatchDate, HomeTeam, AwayTeam
FROM matches
WHERE HST=FTHG or AST=FTAG
```
**This displays**
```
MatchDate | HomeTeam | AwayTeam
2003/08/23 17 32
2003/09/13 24 39
```
and so on and so on...
The numbers under HomeTeam and AwayTeam are the TeamCodes which are in another table called clubs which also has the teams real name.
The following matches the TeamCode for the HomeTeam with the RealName in table clubs.
```
SELECT MatchDate, RealName
FROM club T1
INNER JOIN matches T2 ON T1.TeamCode = T2.HomeTeam
```
This displays
```
MatchDate| RealName|
2003/08/23 Arsenal
2003/09/13 Blackburn
```
Etc...
So my problem is I can't seem to find a way that displays the RealName Under HomeTeam and AwayTeam instead of the TeamCode. Like this...
```
MatchDate | HomeTeam | AwayTeam
2003/08/23 Arsenal Aston Villa
2003/09/13 Blackburn Man Utd
```
|
You just have to join the club table twice. Try this query:
```
SELECT
MatchDate,
T1.RealName,
T2.RealName
FROM
matches INNER JOIN club T1 ON (matches.HomeTeam = T1.TeamCode)
INNER JOIN club T2 ON (matches.AwayTeam = T2.TeamCode)
WHERE
HST=FTHG OR AST=FTAG
```
|
Maybe something like:
```
SELECT MatchDate, homeTeam.RealName AS HomeTeam, awayTeam.RealName AS AwayTeam
FROM matches m
INNER JOIN club homeTeam ON (m.HomeTeam = homeTeam.TeamCode)
INNER JOIN club awayTeam ON (m.AwayTeam = awayTeam.TeamCode);
```
I prefer to use meaningful aliases instead of just `T1` and `T2`.
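A quick way to convince yourself the double self-join works is a sqlite3 sketch with two invented rows (the real schema may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE club (TeamCode INTEGER, RealName TEXT);
CREATE TABLE matches (MatchDate TEXT, HomeTeam INTEGER, AwayTeam INTEGER);
INSERT INTO club VALUES (17, 'Arsenal'), (32, 'Aston Villa');
INSERT INTO matches VALUES ('2003/08/23', 17, 32);
""")
# Join the club table once per alias: home resolves HomeTeam, away resolves AwayTeam.
rows = conn.execute("""
SELECT m.MatchDate, home.RealName AS HomeTeam, away.RealName AS AwayTeam
FROM matches m
JOIN club home ON m.HomeTeam = home.TeamCode
JOIN club away ON m.AwayTeam = away.TeamCode
""").fetchall()
print(rows)  # [('2003/08/23', 'Arsenal', 'Aston Villa')]
```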
|
MySQL - displaying two inner joins in separate columns
|
[
"",
"mysql",
"sql",
"database",
"join",
"inner-join",
""
] |
I have two config tables. The structure is as below:
```
Table 1: Client_Config
id, name, value, type, description
Table 2: App_Config
name, value, type, description
```
I want to get `name` and `value` from `Client_Config` **`where id = @id`**.
I also want to get `name` and `value` from `App_Config` for rows that have no matching entry (by name) in `Client_Config`. Values for the same name can differ between the two tables.
eg:
Values in Client\_Config
```
1, testName, testValue, testType, testDescription
1, testName1, testValue1, testType1, testDescription1
1, testName2, testValue2, testType2, testDescription2
```
Values in App\_Config
```
testName, testValue1, testType, testDescription
testName1, testValue1, testType1, testDescription1
testName2, testValue2, testType2, testDescription2
testName3, testValue3, testType3, testDescription3
```
In the result set I need the following rows:
```
1, testName, testValue, testType, testDescription
1, testName1, testValue1, testType1, testDescription1
1, testName2, testValue2, testType2, testDescription2
NULL, testName3, testValue3, testType3, testDescription3
```
|
You can do it using a left join:
```
SELECT t.id, s.name, s.value, s.type, s.description
FROM App_Config s
LEFT JOIN Client_Config t
ON(t.name = s.name and t.id = @id)
```
|
You can try a query like below
```
select
c.id, a.name, a.value, a.type, a.description
from App_Config a
left join
(
select * from Client_Config where id=@id
)c
on c.name=a.name
```
**Explanation:** We need all rows from `App_Config` and the corresponding id from `Client_Config`, so we do a **LEFT JOIN** from A to C. However, the C result set must only contain rows for a particular `@id`, so we sneak a `WHERE` clause into the C subquery.
Sql fiddle demo link : <http://sqlfiddle.com/#!6/44659/4>
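The key point in both answers is that the `id = @id` filter lives in the join condition (or a subquery), not in a `WHERE` on the joined result, so unmatched `App_Config` rows survive. A sqlite3 sketch with invented values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE App_Config (name TEXT, value TEXT);
CREATE TABLE Client_Config (id INTEGER, name TEXT, value TEXT);
INSERT INTO App_Config VALUES ('testName', 'appValue'), ('testName3', 'appValue3');
INSERT INTO Client_Config VALUES (1, 'testName', 'clientValue');
""")
# The id filter is part of the ON clause, so App_Config rows without a
# client entry still appear, with NULL for the client id.
rows = conn.execute("""
SELECT c.id, a.name, a.value
FROM App_Config a
LEFT JOIN Client_Config c ON c.name = a.name AND c.id = ?
ORDER BY a.name
""", (1,)).fetchall()
print(rows)  # [(1, 'testName', 'appValue'), (None, 'testName3', 'appValue3')]
```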
|
Joining two tables and getting values
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
""
] |
I could do
```
select substr(to_char(20041111), 1, 4) FROM dual;
2004
```
Is there a way without converting to string first?
|
You can use the [FLOOR function](http://docs.oracle.com/database/121/SQLRF/functions076.htm#SQLRF00643):
```
select floor(20041111/10000) from dual;
```
|
The following does not convert to a string but I'm not sure it's more readable...:
```
select floor(20041111 / power(10, floor(log(10, 20041111) - 3)))
from dual;
```
log(10, 20041111) -> 7.3... meaning that 10 ^ 7.3... = 20041111
flooring this value gives one less than the number of digits in the base-10 representation of your number
so to keep only the first 4 digits you divide by 10 ^ (floor(log10(n)) - 3), which drops everything after them
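The same arithmetic can be checked in plain Python (a sketch; the helper name is made up):

```python
import math

def first4(n: int) -> int:
    # Digit count of n, then integer-divide away everything past the first four.
    digits = math.floor(math.log10(n)) + 1
    return n // 10 ** (digits - 4)

print(first4(20041111))  # 2004
```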
|
How to get first 4 digits in a number, with oracle sql
|
[
"",
"sql",
"oracle",
""
] |
Hi, I want to search for a particular string in a table stored in a MySQL database. The table structure is as follows:
```
+--------------------------------------------------------------------------+
| day | 9-10 | 10-11 | 11-12 | 12-1 | 1-2 | 2-3 | 3-4 |
|---------------------------------------------------------------------------
| Monday | DA6210 | DA6220 | DA6010 | | DA6020 | | |
| Tuesday | DA6210 | DA6010 | DA6220 | | DA6020 | | |
| Wednesday | IA6010 | DA6220 | | | DA6020 | | |
| Thursday | IA6010 | | DA6210 | | GC6210 | | |
| Friday | IA6010 | | DA6010 | | GC6210 | | |
| Saturday | DA6210 | | | | | | |
+--------------------------------------------------------------------------+
```
I want to search for the string against the day, across all the time slots. For example, if the day is Thursday and the code is DA6220, it should return an empty set because that string is not in any time slot on Thursday.
I have written an SQL query, but it is not working as desired. Please provide a way to do it.
SQL query:
```
SELECT * from `table` where day='Thursday' AND `9-10`='DA6220' OR `10-11`='DA6220' OR `11-12`='DA6220'
OR `12-1`='DA6220' OR `1-2`='DA6220' OR `2-3`='DA6220' OR `3-4`='DA6220'
```
|
```
SELECT * from `table` where day='Thursday' AND (`9-10`='DA6220' OR `10-11`='DA6220' OR `11-12`='DA6220'
OR `12-1`='DA6220' OR `1-2`='DA6220' OR `2-3`='DA6220' OR `3-4`='DA6220')
```
`AND` has higher precedence than `OR`, so in your query the day filter only applied to the first time-slot comparison. Please try the above.
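The precedence difference is easy to reproduce; here is a sqlite3 sketch with a cut-down two-slot table (hypothetical rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE timetable (day TEXT, "9-10" TEXT, "10-11" TEXT);
INSERT INTO timetable VALUES ('Monday', 'X', 'DA6220'), ('Thursday', 'X', 'X');
""")
# Without parentheses: parsed as (day='Thursday' AND "9-10"=...) OR "10-11"=...,
# so the Monday row leaks through via the OR branch.
without = conn.execute("""
SELECT day FROM timetable
WHERE day='Thursday' AND "9-10"='DA6220' OR "10-11"='DA6220'
""").fetchall()
# With parentheses: the day filter applies to every slot comparison.
with_parens = conn.execute("""
SELECT day FROM timetable
WHERE day='Thursday' AND ("9-10"='DA6220' OR "10-11"='DA6220')
""").fetchall()
print(without, with_parens)  # [('Monday',)] []
```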
|
The answer to your problem has been given, but when I look at your data I see something that can make your life a lot easier and increase your possibilities:
You would do better to create a **VIEW**.
First of all, I have slightly adapted your column names because you had reserved words.
[Click here for SQL Fiddle](http://sqlfiddle.com/#!9/9b5897/1)
**Introducing your new friend *myview***
```
CREATE VIEW myview AS
SELECT wkday, '9-10' AS rng, `9-10` AS cd from `mytable`
UNION
SELECT wkday, '10-11' AS rng, `10-11` AS cd from `mytable`
UNION
SELECT wkday, '11-12' AS rng, `11-12` AS cd from `mytable`
UNION
SELECT wkday, '12-1' AS rng, `12-1` AS cd from `mytable`
UNION
SELECT wkday, '1-2' AS rng, `1-2` AS cd from `mytable`
UNION
SELECT wkday, '2-3' AS rng, `2-3` AS cd from `mytable`
UNION
SELECT wkday, '3-4' AS rng, `3-4` AS cd from `mytable`
;
```
This is a one shot query against your DB.
Your view data is :
```
SELECT * FROM myview
wkday rng cd
----------------------
Monday 9-10 DA6210
Tuesday 9-10 DA6210
Wednesday 9-10 IA6010
Thursday 9-10 IA6010
Friday 9-10 IA6010
Saturday 9-10 DA6210
Monday 10-11 DA6220
Tuesday 10-11 DA6010
Wednesday 10-11 DA6220
...
```
---
Once your view is created, your original query is lot more simple to write :
**It becomes this :**
```
SELECT rng FROM myview WHERE wkday='Thursday' AND cd='DA6220';
```
No more ORs, no need to repeat endlessly the code you are looking for
...and if you want to search for more than one code, it's also peanuts:
```
SELECT rng FROM myview WHERE wkday='Thursday' AND cd IN ('DA6220','IA6010');
```
---
But what's cool with your new view is that you can ask as simply :
*What are the days or the ranges for which I have code 'DA6220' ?*
**days**
```
SELECT wkday FROM myview WHERE cd='DA6220';
```
Results:
```
wkday
----------
Tuesday
Wednesday
Monday
```
**Ranges**
```
SELECT rng FROM myview WHERE cd='DA6220';
```
Results:
```
rng
-----
11-12
10-11
10-11
```
This is a typical application of a VIEW. It makes your life easier.
|
SQL query to search for string in a table?
|
[
"",
"mysql",
"sql",
"database",
""
] |
I need to extract data from an Address column value. The address column value may contain street no, house no, city, state, country, day, etc.
But I want to extract only the city value, state value, or country value. How can I extract a particular part when all the values are stored in one Address column?
I am using mysql database . My structure of that column is -
Column Name - Address
Column Type - varchar(500)
Common Column value -
`Street No-4 , House No-29 , Prabhat Nagar , Ludhiana , Punjab , India , wednessday , 25 may 2014`
A check in the application code ensures the required values are not empty. Street no, house no, city, country and state are required; if any required field is empty, the row will not be inserted or updated in the db.
The date format is also fixed in the code and will always be written this way.
|
You need to make some changes for this to work well.
While entering data in the Address column, you need to maintain a fixed structure. If every insert follows that structure, you can select your desired result with the query below:
```
SELECT Mid(Title, Instr(Title, 'Day') + 4, Instr(Title, '|')- Instr(Title, ':')-1) As LastName FROM tblblog
```
**My Address Column Value in Database is as below**
```
Address
House No:29| Street No:4| Country:India| City:Ludhiana| Day:Thursday|
```
That way you can search for country and city as well, and it works for the other values too.
**Explanation :**
**The Instr function**
It is often used in combination with other string functions for manipulating string values. The Instr function returns the position of a string occurring within another string. The format of the function is as follows:
**Instr ( [start], stringToSearch, stringToFind)**
The start parameter is optional and specifies where in the string we will start searching for the second string. If we wanted to start at the fifth character of the first string then we would specify 5 for the start parameter. If this is left blank then the search is started at the beginning of the first string.
**The Mid Function**
**The Mid(string, start, length)** function returns a portion of a string starting from a specified position and containing a specified number of characters.
The following extracts the month value from a string column in the format dd/mm/yy. The extracted string starts at the 4th character in the string and ends after 2 characters.
|
I created this. It will separate all your values. You forgot to mention the `Area` so I added it also.
`SQLFiddle Demo`
```
select
substring_index(substring_index(Address, ',', 1),',',-1) as street_no,
substring_index(substring_index(Address, ',', 2),',',-1) as house_no,
substring_index(substring_index(Address, ',', 3),',',-1) as area,
substring_index(substring_index(Address, ',', 4),',',-1) as city,
substring_index(substring_index(Address, ',', 5),',',-1) as state,
substring_index(substring_index(Address, ',', 6),',',-1) as country,
substring_index(substring_index(Address, ',', 7),',',-1) as day,
STR_TO_DATE(trim(substring_index(substring_index(Address, ',', 8),',',-1)),'%d %b %Y') as date1
from table1
```
Output
```
+-------------+-------------+---------------+----------+--------+---------+------------+-----------------------+
| street_no | house_no | area | city | state | country | day | date1 |
+-------------+-------------+---------------+----------+--------+---------+------------+-----------------------+
| Street No-4 | House No-29 | Prabhat Nagar | Ludhiana | Punjab | India | wednessday | May, 25 2014 00:00:00 |
+-------------+-------------+---------------+----------+--------+---------+------------+-----------------------+
```
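The nested `substring_index(substring_index(...))` pattern just picks the k-th comma-separated field. A plain-Python equivalent (the helper name is made up) shows the idea:

```python
address = ("Street No-4 , House No-29 , Prabhat Nagar , Ludhiana , "
           "Punjab , India , wednessday , 25 may 2014")

def field(s: str, k: int) -> str:
    # k-th comma-separated field, 1-based, with surrounding spaces trimmed.
    return s.split(",")[k - 1].strip()

print(field(address, 4))  # 'Ludhiana'
print(field(address, 6))  # 'India'
```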
|
How do I display a part of column Value in sql?
|
[
"",
"mysql",
"sql",
""
] |
I am learning databases and I am using SQL Server 2008. While going through the concept of joins, I ran into the problem of duplicated data in the output. I tried to avoid the duplicate values by using the DISTINCT keyword, but it's not working. Below are my table structure, the query I am trying, and the required output. Thank you in advance.
**Table 1**:
```
id GroupName
---------------
1 A
2 B
3 C
```
**Table 2**:
```
id ItemName Gid
----------------------
1 aa 1
2 bb 1
3 cc 1
4 dd 2
5 ee 2
6 ff 3
7 gg 3
```
This is the query which I am using
```
SELECT DISTINCT
T2.ItemName, T1.GroupName
FROM
TABLE2 T2
JOIN
TABLE1 T1 ON T1.ID = T2.GID
ORDER BY
T2.ItemName, T1.GroupName
```
Output I am getting is this:
```
ItemName GroupName
------------------------
aa A
bb A
cc A
dd B
ee B
ff C
gg C
```
but I need this output instead:
```
ItemName GroupName
----------------------
aa A
bb
cc
dd B
ee
ff C
gg
```
|
You cannot get the output you described with a plain join; using a left or right outer join does not produce the desired output either.
I tried this :
```
select distinct i.Item_Name, g.GrpName
from ItemMaster i right outer join GrpDetail g
on i.Id = g.Id
order by i.Item_Name, g.GrpName;
```
And also tried :
```
select distinct i.Item_Name, g.GrpName
from ItemMaster i left outer join GrpDetail g
on i.Id = g.Id
order by i.Item_Name, g.GrpName;
```
There is no difference in the output.
|
If you only want the "first" row to get a match, then you can use `row_number()`. Here is one way without a subquery:
```
SELECT T2.ItemName,
       (CASE WHEN ROW_NUMBER() OVER (PARTITION BY T1.GroupName ORDER BY T2.ItemName) = 1
             THEN T1.GroupName
        END) as GroupName
FROM TABLE2 T2 JOIN
     TABLE1 T1
     ON T1.ID = T2.GID
ORDER BY T2.ItemName, T1.GroupName;
```
Note: The data suggest that the `JOIN` is on `GroupName`. However, it is more reasonable to use the `Id` for the foreign key.
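A sketch of the first-row-per-group idea in sqlite3 (needs SQLite 3.25+ for window functions; sample rows invented): partition by the group name and blank the name out for every row after the first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (ID INTEGER, GroupName TEXT);
CREATE TABLE t2 (ID INTEGER, ItemName TEXT, Gid INTEGER);
INSERT INTO t1 VALUES (1, 'A'), (2, 'B');
INSERT INTO t2 VALUES (1, 'aa', 1), (2, 'bb', 1), (3, 'dd', 2);
""")
rows = conn.execute("""
SELECT t2.ItemName,
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY t1.GroupName ORDER BY t2.ID) = 1
            THEN t1.GroupName END AS GroupName
FROM t2 JOIN t1 ON t1.ID = t2.Gid
ORDER BY t2.ItemName
""").fetchall()
print(rows)  # [('aa', 'A'), ('bb', None), ('dd', 'B')]
```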
|
How to avoid duplicate values during joining of 2 table using SQL Server 2008
|
[
"",
"sql",
"sql-server-2008",
""
] |
I want to get the specific row data.
but in this query it shows me the error.In here i want to get 5th row
> ORA-00904: "RN": invalid identifier
Code
```
SELECT NEWSDATE,ROWNUM AS RN
FROM NEWS
WHERE NEWSNO='100000' AND (CAT='LR' OR CRT ='LD') AND RN = 5 //<-- It highlighted this RN is invalid
ORDER BY NEWSDATE ASC
```
|
Try this:
```
select * from (
SELECT NEWSDATE,row_number() over (order by newsdate) AS RN
FROM NEWS
WHERE NEWSNO='100000' AND (CAT='LR' OR CRT ='LD')
)
where rn = 5;
```
|
You can't use column aliases defined in the `SELECT` clause in the `WHERE` clause as the `WHERE` clause is evaluated first. To make your query syntactically valid it would be:
```
SELECT NEWSDATE,ROWNUM AS RN
FROM NEWS
WHERE NEWSNO='100000' AND (CAT='LR' OR CRT ='LD') AND ROWNUM = 5
ORDER BY NEWSDATE ASC
```
However, this will never return any rows, as it will consider the first row produced and discard it because `ROWNUM=1`; then it will consider the second row produced and, since the first was discarded, this will also have `ROWNUM=1` and be discarded. This repeats ad nauseam until all the rows have been discarded, and none of them will ever have a `ROWNUM` higher than 1.
For it to work you need to consider that `ROWNUM` is applied before the `ORDER BY` clause takes effect; so, if you want to number the ordered rows then firstly apply the `ORDER BY` then, in an outer query, assign the `ROWNUM` then finally, in further outer query, filter on the that number:
```
SELECT NEWSDATE
FROM (
    SELECT NEWSDATE,
ROWNUM AS RN -- Assign the ROWNUM in the outer query after the ORDER BY
FROM (
SELECT NEWSDATE
FROM NEWS
WHERE NEWSNO='100000' AND (CAT='LR' OR CRT ='LD')
ORDER BY NEWSDATE ASC
)
    WHERE ROWNUM <= 5 -- This WHERE clause is not necessary but might make it more efficient as
-- it can immediately stop filtering and discard all rows after the 5th.
)
WHERE RN = 5; -- Then in a further outer query filter on RN
```
|
Get the specific row data on Oracle
|
[
"",
"sql",
"oracle",
""
] |
I have a sql view, let's call it `SampleView`, whose results have the following format.
```
Id (INT), NameA (VARCHAR(50)), NameB (VARCHAR(50)), ValueA (INT), ValueB (INT)
```
The result set of the view contains rows that may have the same `Id` or not. When there are two or more rows with the same Id, I would like to get something like the following
```
SELECT
Id,
MAX(NameA),
MAX(NameB),
MAX(ValueA),
MAX(ValueB)
FROM SampleView
GROUP BY Id
ORDER BY Id
```
Regarding the columns `Id`, `ValueA` and `ValueB` there isn't any problem. On the other hand, using `MAX` for `NameA` and `NameB` doesn't work as expected. After some googling and searching I realized that `MAX` doesn't have the "expected" behavior for alphanumeric columns. By "expected" I mean that, in my case, it should return the value of `NameA` with the maximum number of characters, i.e. the one with `MAX(LEN(NameA))`. I should mention that `NameA` can never have two values of the same length for the same `Id`, which might make the problem easier to solve.
I use SQL Server 2012 and `TSQL`.
Have you any suggestion on how I could deal with this problem?
Thank you very much in advance for any help.
|
You can use window functions:
```
SELECT DISTINCT
id,
FIRST_VALUE(NameA) OVER (PARTITION BY id
ORDER BY len(NameA) DESC) AS MaxNameA,
MAX(ValueA) OVER (PARTITION BY id) AS MaxValueA,
FIRST_VALUE(NameB) OVER (PARTITION BY id
ORDER BY len(NameB) DESC) AS MaxNameB,
MAX(ValueB) OVER (PARTITION BY id) AS MaxValueB
FROM SampleView
```
[**Demo here**](https://data.stackexchange.com/stackoverflow/query/edit/451426)
|
You can use correlated queries like this:
```
SELECT
t.Id,
(SELECT TOP 1 s.NameA FROM SampleView s
WHERE s.id = t.id
     ORDER BY LEN(s.NameA) DESC) as NameA,
(SELECT TOP 1 s.NameB FROM SampleView s
WHERE s.id = t.id
     ORDER BY LEN(s.NameB) DESC) as NameB,
MAX(t.ValueA),
MAX(t.ValueB)
FROM SampleView t
GROUP BY t.Id
ORDER BY t.Id
```
|
tsql group by get alphanumeric column value with maximum length
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"group-by",
"max",
""
] |
So I have a dataset where I want to select the closest records to point X for my output,
What I have is
```
PROC SQL ;
create table Check_vs_Excel2 as
SELECT PROPERTY, START_DATE, END_DATE, DAY_OF_WEEK, MARKET_CODE_PREFIX, RATE_PGM, ROOM_POOL, QUOTE_SERIES_NO, QUOTE_POSITION
FROM Sbtddraf.Vssmauditdraftfull
group by Property, RATE_PGM
having START_DATE = MAX(START_DATE);
quit;
```
I want to take the START\_DATE = Max(Start\_DATE); and change it to something which is (effectively)
having START\_DATE = **close to**(TODAY())
Advice would be much appreciated
|
In SQL your query would be using a Correlated Subquery:
```
SELECT PROPERTY, START_DATE, END_DATE, DAY_OF_WEEK, MARKET_CODE_PREFIX, RATE_PGM, ROOM_POOL, QUOTE_SERIES_NO, QUOTE_POSITION
FROM Sbtddraf.Vssmauditdraftfull AS t1
-- group by Property, RATE_PGM
WHERE START_DATE =
( select MAX(START_DATE)
FROM Sbtddraf.Vssmauditdraftfull AS t2
where t1.Property = t2.Property
and t1.RATE_PGM = t2.RATE_PGM
)
```
|
Assuming I understand that you want the row that has the minimum absolute difference between start\_date and today() (so, `MIN(ABS(START_DATE-TODAY()))`), you can do a somewhat messy query using the having clause this way:
```
data have;
do id = 2 to 9;
do start_date = '02MAR2016'd to '31MAR2016'd by id;
output;
end;
end;
run;
proc sql;
select id, start_date format=date9.
from have
group by id
having abs(start_date-today()) = min(abs(start_date-today()));
quit;
```
I don't like this in part because it's non-standard SQL and gives a note about re-merging data (it's non-standard and gives you that note because you're using a value that's not really available in a group by), and in part because it gives you multiple rows if two are tied (see id=4 if you run this on 3/16/2016).
A correlated subquery version, which at least avoids the remerging note (but actually does effectively the same thing):
```
proc sql;
select id, start_date format=date9.
from have H
where abs(start_date-today()) = (
select min(abs(start_date-today()))
from have V
where H.id=V.id
);
quit;
```
Still gives two for id=4 though (on 3/16/2016). You'd have to make a way to pick if there are possibly two answers (or perhaps you want strictly less than?). This does a subquery to determine what the smallest difference is then returns it.
|
How to select within the closest thing to Y in SAS as a starting point
|
[
"",
"sql",
"sas",
"proc-sql",
""
] |
My supervisor asked me not to put transactions, commits, etc. in this code, because he says it's useless to put transactions in this procedure. He's well experienced and I can't directly argue with him, so I need your views on it.
```
ALTER PROCEDURE [Employee].[usp_InsertEmployeeAdvances](
@AdvanceID BIGINT,
@Employee_ID INT,
@AdvanceDate DATETIME,
@Amount MONEY,
@MonthlyDeduction MONEY,
@Balance MONEY,
@SYSTEMUSER_ID INT,
@EntryDateTime DATETIME = NULL,
@ProcedureType SMALLINT)
AS
BEGIN
BEGIN TRY
BEGIN TRANSACTION [Trans1]
IF EXISTS
(
SELECT *
FROM Employee.Advance
WHERE AdvanceID = @AdvanceID
)
BEGIN
--UPDATE THE RECORD
IF @ProcedureType = 1
BEGIN
SET @Amount = @Amount * -1;
END
UPDATE Employee.Advance
SET
Employee_ID = @Employee_ID,
AdvanceDate = @AdvanceDate,
Amount = @Amount,
MonthlyDeduction = @MonthlyDeduction,
Balance = @Balance,
EntryDateTime = GETDATE()
WHERE AdvanceID = @AdvanceID
END
ELSE
BEGIN
DECLARE @LastRecordID INT
DECLARE @LastBalance MONEY
SET @LastRecordID =
(
SELECT MAX(EA.AdvanceID)
FROM Employee.Advance EA
WHERE EA.Employee_ID = @Employee_ID
)
SET @LastBalance =
(
SELECT EA.Balance
FROM Employee.Advance EA
WHERE EA.AdvanceID = ISNULL(@LastRecordID, 0)
)
IF(@ProcedureType = 0) --Advances
BEGIN
SET @Balance = ISNULL(@LastBalance, 0) + @Amount
INSERT INTO Employee.Advance
(Employee_ID,
AdvanceDate,
Amount,
MonthlyDeduction,
Balance,
User_ID,
EntryDateTime
)
VALUES
(@Employee_ID,
@AdvanceDate,
@Amount,
@MonthlyDeduction,
@Balance,
@SYSTEMUSER_ID,
GETDATE())
END
ELSE --Receivings
BEGIN
IF NOT EXISTS
(
SELECT *
FROM Employee.Advance EA
WHERE EA.Employee_ID = @Employee_ID
AND EA.Balance > 0
AND EA.AdvanceID =
(
SELECT MAX(AdvanceID)
FROM Advance
WHERE Employee_ID = @Employee_ID
)
)
BEGIN
RAISERROR('This Employee has no advances history', 16, 1)
RETURN
--Select 0
END
ELSE
BEGIN
SET @Balance = ISNULL(@LastBalance, 0) - @Amount
INSERT INTO Employee.Advance
(Employee_ID,
AdvanceDate,
Amount,
MonthlyDeduction,
Balance,
User_ID,
EntryDateTime
)
VALUES
(@Employee_ID,
@AdvanceDate,
-1 * @Amount,
@MonthlyDeduction,
@Balance,
@SYSTEMUSER_ID,
GETDATE())
END
END
END
COMMIT TRANSACTION [Trans1]
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION [Trans1]
END CATCH
END
```
|
```
ALTER PROCEDURE [Employee].[usp_InsertEmployeeAdvances]
(
@AdvanceID BIGINT,
@Employee_ID INT,
@AdvanceDate DATETIME,
@Amount MONEY,
@MonthlyDeduction MONEY,
@Balance MONEY,
@SYSTEMUSER_ID INT,
@EntryDateTime DATETIME = NULL,
@ProcedureType SMALLINT
)
AS BEGIN
SET NOCOUNT ON
IF EXISTS (
SELECT 1
FROM Employee.Advance
WHERE AdvanceID = @AdvanceID
)
BEGIN
UPDATE Employee.Advance
SET
Employee_ID = @Employee_ID,
AdvanceDate = @AdvanceDate,
Amount = CASE WHEN @ProcedureType = 1 THEN -@Amount ELSE @Amount END,
MonthlyDeduction = @MonthlyDeduction,
Balance = @Balance,
EntryDateTime = GETDATE()
WHERE AdvanceID = @AdvanceID
END
ELSE BEGIN
DECLARE
@LastRecordID INT
, @LastBalance MONEY
, @IsBalance BIT
SELECT @LastRecordID = MAX(AdvanceID)
FROM Employee.Advance
WHERE Employee_ID = @Employee_ID
SELECT
@LastBalance = Balance,
@IsBalance = CASE WHEN Balance > 0 THEN 1 ELSE 0 END
FROM Employee.Advance
WHERE AdvanceID = ISNULL(@LastRecordID, 0)
IF ISNULL(@IsBalance, 0) = 0 BEGIN
RAISERROR('This Employee has no advances history', 16, 1)
RETURN
END
ELSE BEGIN
INSERT INTO Employee.Advance(Employee_ID, AdvanceDate, Amount, MonthlyDeduction, Balance, [User_ID], EntryDateTime)
SELECT
@Employee_ID,
@AdvanceDate,
CASE WHEN @ProcedureType = 0 THEN @Amount ELSE -@Amount END,
@MonthlyDeduction,
ISNULL(@LastBalance, 0) + CASE WHEN @ProcedureType = 0 THEN @Amount ELSE -@Amount END,
@SYSTEMUSER_ID,
GETDATE()
END
END
END
```
|
This is a changed answer, as I did not read the whole question at first.
Without a transaction, if this procedure were called concurrently and the record did not exist, both calls could insert and one would likely get @Balance wrong.
So yes, a transaction serves a purpose here.
|
Is it necessary to put transactions in this code?
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"t-sql",
""
] |
I have a column `Time` in my table. It holds a time value in minutes, ie: 605. What I want it to show it as 10:05, as `hh:mm` format.
I am doing it like this:
```
...
Time = cast(AVG(Time) / 60 as varchar) + ':' + cast(AVG(Time) % 60 as varchar),
...
```
It shows the value as 10:5, but I want to make it as 10:05 and I couldn't find out a way to add '0' before '5' in the minutes section.
Any help would be appreciated.
|
You can use a `case` expression to do this.
Also, you should always specify the length of varchar when you use `cast`; otherwise it defaults to 30 characters in SQL Server, which can silently truncate longer values.
```
case when len(cast(AVG(Time)%60 as varchar(2))) < 2 then '0'+cast(AVG(Time)%60 as varchar(2))
else cast(AVG(Time)%60 as varchar(2))
end
```
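The underlying fix is just zero-padding the minutes to two digits; a plain-Python sketch of the same hh:mm logic:

```python
def hhmm(total_minutes: int) -> str:
    # Hours unpadded, minutes zero-padded to two digits.
    return f"{total_minutes // 60}:{total_minutes % 60:02d}"

print(hhmm(605))  # '10:05'
```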
|
Try this:
```
SELECT
    TIME = CASE WHEN Time < 600 THEN '0' ELSE '' END +
           CAST(Time / 60 as varchar(10)) + ':' + RIGHT(100 + (Time % 60), 2)
```
Example:
```
DECLARE @t int = 55121
SELECT
TIME = CASE WHEN @t < 600 THEN '0' ELSE '' END +
CAST(@t / 60 as varchar(10)) + ':' +RIGHT(100 +(@t % 60), 2)
```
Result:
```
Time
918:41
```
|
Minutes to hh:mm format in SQL Server 2008 query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
In a document review tool, one can create "batches" of documents. A batch is a group of related documents, identified by GroupID.
These groups of documents are presented to reviewers, who update a field called Testcompleted. This field has 3 possible states: 1, 0 or null. The number of documents in a group varies.
In the example below I have 3 groups ("batches") of documents. The first batch (batch\_0001), for instance, has 2 documents (58 and 59).
```
#Document
ArtifactID Testcompleted GroupID
--------------------------------------
58 1 4
59 1 4
60 null 6
61 1 6
62 null 7
63 null 7
64 null 7
#DocumentBatch
BatchArtifactID DocumentArtifactID
-------------------------------------
66 58
66 59
67 60
67 61
68 62
68 63
68 64
#Batch
ArtifactID Name
------------------------
66 batch_0001
67 batch_0002
68 batch_0003
```
I need to know when a batch is completed -- that is: when all documents in that batch have the Testcompleted field set to 1. In the example, this is the case for batch\_0001.
The output I am looking for is:
```
batch documents reviewed completed
------------------------------------------------------
batch_0001 2 2 yes
batch_0002 2 1 no
batch_0003 3 0 no
```
I started by joining the tables:
```
select
*
from
#Document d
left join #DocumentBatch db
    on db.DocumentArtifactID = d.ArtifactID
left join #Batch b
on db.BatchArtifactID = b.ArtifactID
where
d.Testcompleted = 1
;
```
This obviously does not return the result I need, but I am stuck. Some help on how to solve this would be greatly appreciated.
|
```
select
b.name,
count(db.DocumentArtifactID) as documents,
-- count only completed
count(case when d.Testcompleted = 1 then d.ArtifactID end) as reviewed,
-- if the minimum = 1 there's no 0 or NULL
case when min(cast(Testcompleted as tinyint)) = 1 then 'yes' else 'no' end as completed
from
#Batch b
left join #DocumentBatch db
on db.BatchArtifactID = b.ArtifactID
left join #Document d
on db.DocumentArtifactID = d.ArtifactID
group by b.name;
```
If there are no missing rows you can switch to Inner Joins...
|
You can try something like this:
```
select b.name
, count(*) as documents
, sum(d.Testcompleted) as reviewed
, (case when count(*) = sum(d.Testcompleted) then 'yes' else 'no' end) as completed
from [#Document] d
join [#DocumentBatch] db on db.DocumentArtifactID = d.ArtifactID
join [#Batch] b on db.BatchArtifactID = b.ArtifactID
group by b.name
```
[**SQLFiddle**](http://sqlfiddle.com/#!6/45187/3)
* `count(*)` includes into calculation all values;
* `sum(d.Testcompleted)` count only cases where `Testcompleted` is `1`;
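Both answers reduce to comparing the total document count against the count of completed rows per group. A sqlite3 sketch with the batch name denormalized onto the documents (a simplification of the three-table schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE doc (ArtifactID INTEGER, Testcompleted INTEGER, BatchName TEXT);
INSERT INTO doc VALUES
  (58, 1, 'batch_0001'), (59, 1, 'batch_0001'),
  (60, NULL, 'batch_0002'), (61, 1, 'batch_0002');
""")
# COUNT(CASE ...) skips NULLs and zeros, so reviewed only counts Testcompleted=1.
rows = conn.execute("""
SELECT BatchName,
       COUNT(*) AS documents,
       COUNT(CASE WHEN Testcompleted = 1 THEN 1 END) AS reviewed,
       CASE WHEN COUNT(*) = COUNT(CASE WHEN Testcompleted = 1 THEN 1 END)
            THEN 'yes' ELSE 'no' END AS completed
FROM doc
GROUP BY BatchName
ORDER BY BatchName
""").fetchall()
print(rows)  # [('batch_0001', 2, 2, 'yes'), ('batch_0002', 2, 1, 'no')]
```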
|
How to count if all documents in a group have a value set to 1?
|
[
"",
"sql",
"sql-server",
""
] |
I have an Oracle table like this:
```
---------------------------------------
DATE | USERID | DOMAIN | VALUE
---------------------------------------
03/16/2016 1001 ASIA 10
03/16/2016 1001 EUROPE 20
03/16/2016 1002 ASIA 20
03/17/2016 1001 ASIA 20
03/17/2016 1002 EUROPE 10
----------------------------------------
```
I want to translate this table to a view something like this
```
-------------------------------------
DATE | USERID | ASIA | EUROPE
-------------------------------------
03/16/2016 1001 10 20
03/16/2016 1002 20
03/17/2016 1001 20
03/17/2016 1002 10
-------------------------------------
```
If I use PIVOT I can do it at the user level, but I don't know how to get it at the (date, user) level. Any pointers would be great.
|
PIVOT seems to achieve the result you need:
```
SQL> with test (DATE_, USERID, DOMAIN, VALUE)
2 as (
3 select '03/16/2016', 1001 ,'ASIA' ,10 from dual union all
4 select '03/16/2016', 1001 ,'EUROPE' ,20 from dual union all
5 select '03/16/2016', 1002 ,'ASIA' ,20 from dual union all
6 select '03/17/2016', 1001 ,'ASIA' ,20 from dual union all
7 select '03/17/2016', 1002 ,'EUROPE' ,10 from dual
8 )
9 SELECT *
10 FROM (select *
11 from test)
12 PIVOT ( sum(value) FOR (domain) IN ('ASIA', 'EUROPE'))
13 ORDER BY 1, 2;
DATE_ USERID 'ASIA' 'EUROPE'
---------- ---------- ---------- ----------
03/16/2016 1001 10 20
03/16/2016 1002 20
03/17/2016 1001 20
03/17/2016 1002 10
```
|
Do a `GROUP BY`, use `case` expression to chose Asia or Europe:
```
select DATE, USERID,
       sum(case when DOMAIN = 'ASIA' then VALUE end) as asia,
sum(case when DOMAIN = 'EUROPE' then VALUE end) as europe
from tablename
group by DATE, USERID
```
This answer is for products that don't support modern Oracle versions' `PIVOT`.
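This conditional-aggregation pivot is portable across databases; a sqlite3 sketch using the question's rows (the date column is renamed `d` here for convenience):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (d TEXT, userid INTEGER, domain TEXT, value INTEGER);
INSERT INTO t VALUES
  ('03/16/2016', 1001, 'ASIA', 10), ('03/16/2016', 1001, 'EUROPE', 20),
  ('03/16/2016', 1002, 'ASIA', 20), ('03/17/2016', 1001, 'ASIA', 20),
  ('03/17/2016', 1002, 'EUROPE', 10);
""")
# One output column per domain; SUM over a CASE leaves NULL where no row matches.
rows = conn.execute("""
SELECT d, userid,
       SUM(CASE WHEN domain = 'ASIA'   THEN value END) AS asia,
       SUM(CASE WHEN domain = 'EUROPE' THEN value END) AS europe
FROM t GROUP BY d, userid ORDER BY d, userid
""").fetchall()
print(rows)
# [('03/16/2016', 1001, 10, 20), ('03/16/2016', 1002, 20, None),
#  ('03/17/2016', 1001, 20, None), ('03/17/2016', 1002, None, 10)]
```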
|
Convert row values into columns based on row values multi level hierarchy
|
[
"",
"sql",
"oracle",
""
] |
**SAMPLE DATA**
Suppose I have table like this:
```
No Company Vendor Code Date
1 C1 V1 C1 2016-03-08
1 C1 V1 C1 2016-03-07
1 C1 V1 C2 2016-03-06
```
**DESIRED OUPUT**
Desired output should be:
```
No Company Vendor Code Date
1 C1 V1 C1 2016-03-08
```
It should take the max `Date` per `No, Company, Vendor` group (group by these columns), but it shouldn't group by `Code`; instead, `Code` has to be taken from the row with that max `Date`.
**QUERY**
SQL query like:
```
.....
LEFT JOIN (
SELECT No_, Company, Vendor, Code, MAX(Date)
FROM tbl
GROUP BY No_, Company, Vendor, Code
) t2 ON t1.Company = t2.Company and t1.No_ = t2.No_
.....
```
**OUTPUT FOR NOW**
But I got output for now:
```
No Company Vendor Code Date
1 C1 V1 C1 2016-03-08
1 C1 V1 C2 2016-03-06
```
That's because the `Code` values are different, but it should take code C1 in this case (because `No`, `Company` and `Vendor` match).
**WHAT I'VE TRIED**
I've tried removing `Code` from the `GROUP BY` clause and using `SELECT MAX(Code)...`, but this is wrong because it just takes the alphabetically highest `Code`.
Do you have any ideas how I can achieve this? If something is not clear, I can explain more.
|
If you don't have any identity column for your table, then each row is identified by the combination of all its column values. That gives us a somewhat unwieldy `on` statement: it includes all the columns we are grouping by, plus a date column that is the max for the given tuple `(No_, Company, Vendor)`.
```
select t1.No_, t1.Company, t1.Vendor, t1.Code, t1.Date
from tbl t1
join (select No_, Company, Vendor, MAX(Date) as Date
from tbl
group by No_, Company, Vendor) t2
on t1.No_ = t2.No_ and
t1.Company = t2.Company and
t1.Vendor = t2.Vendor and
t1.Date = t2.Date
```
Take a look at [this similar question](https://stackoverflow.com/questions/612231/how-can-i-select-rows-with-maxcolumn-value-distinct-by-another-column-in-sql).
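As a quick self-contained check of the join-back-to-max approach, here is a sketch using SQLite through Python's `sqlite3` (column names follow the sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (No_ INTEGER, Company TEXT, Vendor TEXT, Code TEXT, Date TEXT);
INSERT INTO tbl VALUES
  (1, 'C1', 'V1', 'C1', '2016-03-08'),
  (1, 'C1', 'V1', 'C1', '2016-03-07'),
  (1, 'C1', 'V1', 'C2', '2016-03-06');
""")

# The derived table t2 finds the max date per (No_, Company, Vendor);
# joining back keeps only the rows carrying that date, Code included.
rows = conn.execute("""
SELECT t1.No_, t1.Company, t1.Vendor, t1.Code, t1.Date
FROM tbl t1
JOIN (SELECT No_, Company, Vendor, MAX(Date) AS Date
      FROM tbl GROUP BY No_, Company, Vendor) t2
  ON t1.No_ = t2.No_ AND t1.Company = t2.Company
 AND t1.Vendor = t2.Vendor AND t1.Date = t2.Date
""").fetchall()
print(rows)
```

Only the `2016-03-08` row survives, with its own `Code`.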
---
**Edit**
> Thank you for the answer, but this is returning duplicates. Suppose that there can be rows with equal No, Company, Vendor and Date; some other columns are different, but no matter. The inner SELECT is fine and returns distinct values, but the problem occurs when joining t1, because it has multiple matching values.
Then you might be interested in T-SQL constructs such as `rank` or `row_number`. Take a look at [Ullas' answer](https://stackoverflow.com/a/36079799/1770952). Try `rank` as well, as it can give slightly different output which might fit your needs.
|
If one date can only have one record, then you can query it by finding the max date first and then checking against it.
```
select No_, Company, Vendor, Code, Date
FROM tbl t
where Date = (select MAX(Date) from tbl
              where No_ = t.No_ and Company = t.Company and Vendor = t.Vendor)
```
If more than one row can have the same date, then you could use `ROW_NUMBER` with a partition:
```
with cte as
(
select *, ROW_NUMBER() over(partition by No_, Company, Vendor order by Date DESC) as rn
from tbl
)
select No_, Company, Vendor, Code, Date
from cte
where rn=1
```
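The `ROW_NUMBER` variant can be sketched the same way (SQLite 3.25+ supports window functions; names mirror the sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl (No_ INTEGER, Company TEXT, Vendor TEXT, Code TEXT, Date TEXT);
INSERT INTO tbl VALUES
  (1, 'C1', 'V1', 'C1', '2016-03-08'),
  (1, 'C1', 'V1', 'C1', '2016-03-07'),
  (1, 'C1', 'V1', 'C2', '2016-03-06');
""")

# Number the rows per (No_, Company, Vendor) group, newest first,
# then keep only the first row of each group.
rows = conn.execute("""
WITH cte AS (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY No_, Company, Vendor
                               ORDER BY Date DESC) AS rn
  FROM tbl
)
SELECT No_, Company, Vendor, Code, Date FROM cte WHERE rn = 1
""").fetchall()
print(rows)
```

Unlike the `MAX(Date)` subquery, this also breaks ties deterministically if you extend the `ORDER BY`.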
|
Custom GROUP BY clause
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have two tables **a** and **b** in my Access Database. In both tables I have the field ID. However in table **a** my ID field is prefixed with '31' where as my ID field in table **b** is not.
So for example
```
table a table b
ID field2 ID field3
31L123 test123 L123 123test
31L122 test321 L122 321test
```
My prefixed table is imported regularly from an Excel export. I understand I could remove the prefix at the Excel level, but is there a way to join the two tables on the ID field by using some sort of concatenate function in the join within the SQL statement?
So for example something along the lines of:
```
SELECT Id, Field2, Field3
FROM a LEFT JOIN b ON CONCATENATE('31', a.ID) = b.ID
WHERE a.Field2 = 13
```
I am not sure if this is the correct approach or not, and that is why I cannot seem to find any existing help on my problem (ignoring processing the fields at the Excel level before the import).
|
`CONCATENATE()` is not supported in Access SQL. Generally you would use `&` for concatenation.
However I don't think you need concatenate anything for your join's `ON` condition. Just use `Mid()` to ignore the first 2 characters ...
```
ON Mid(a.ID, 3) = b.ID
```
That should work, but performance may become unacceptable as the tables grow over time. You should get better performance by "cleaning" the `a.ID` values immediately after each import ...
```
UPDATE a
SET ID = Mid(ID, 3)
WHERE ID ALike '31%';
```
Then your `SELECT` query's join can use a simpler, faster `ON` condition ...
```
ON a.ID = b.ID
```
Ensure `ID` is indexed to allow optimal performance.
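The same strip-the-prefix join can be sketched outside Access, here with SQLite's `substr` standing in for `Mid` (table contents follow the question's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (ID TEXT, field2 TEXT);
CREATE TABLE b (ID TEXT, field3 TEXT);
INSERT INTO a VALUES ('31L123', 'test123'), ('31L122', 'test321');
INSERT INTO b VALUES ('L123', '123test'), ('L122', '321test');
""")

# substr(ID, 3) drops the two-character '31' prefix, like Mid(ID, 3).
rows = conn.execute("""
SELECT a.ID, a.field2, b.field3
FROM a LEFT JOIN b ON substr(a.ID, 3) = b.ID
""").fetchall()
print(sorted(rows))
```

Note that, just as in Access, an expression in the join condition prevents index use, which is why cleaning the data once after import is preferable.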
|
Why don't you just update your query to be something like:
```
SELECT Id, Field2, Field3
FROM a LEFT JOIN b ON RIGHT(a.ID,4) = b.ID
WHERE a.Field2 = 13
```
You could also take the a.ID, and subtract 2 digits from the length of a.ID and then use the RIGHT-most characters in the event of a.ID being longer than 6 characters wide. And, re-writing, we get:
```
SELECT Id, Field2, Field3
FROM a LEFT JOIN b ON RIGHT(a.ID,LEN(a.ID)-2) = b.ID
WHERE a.Field2 = 13
```
Hope this helps?
|
SQL Query that joins two tables on a prefixed and non prefixed field
|
[
"",
"sql",
"ms-access",
"left-join",
"ms-query",
""
] |
I have an Oracle DB, and I don't control the date format. I want to know what the date format is to ensure that searches like
```
select * from search where search_date>='03/16/2016 00:00:00'
```
work as expected.
|
Don't do that - you are relying on implicit data type conversion which is going to fail at some point.
You have two options:
1) Use a proper ANSI SQL date literal:
```
select *
from search
where search_date >= timestamp '2016-03-16 00:00:00';
```
2) use `to_date()` (or `to_timestamp()`) and use a custom format.
```
select *
from search
where search_date >= to_date('03/16/2016 00:00:00', 'mm/dd/yyyy hh24:mi:ss');
```
With `to_date()` you should avoid any format that is language dependent. Use numbers for the month, not abbreviations (e.g. `'Mar'` or `'Apr'`) because they again rely on the *client* language.
More details can be found in the manual:
<https://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements003.htm#SQLRF51062>
---
**Never** rely on implicit data type conversion.
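The same rule (always spell out the format instead of relying on implicit conversion) is worth following in application code too. As an illustrative sketch in Python, `strptime` with an explicit format string plays the role of `to_date`:

```python
from datetime import datetime

# Explicit format, independent of locale/NLS settings -- the analogue of
# to_date('03/16/2016 00:00:00', 'mm/dd/yyyy hh24:mi:ss') in Oracle.
d = datetime.strptime('03/16/2016 00:00:00', '%m/%d/%Y %H:%M:%S')
print(d.isoformat())
```

Passing a `datetime` object as a bind variable to the database driver then sidesteps string-format questions entirely.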
|
You can get all the `NLS` session parameters with the query:
```
SELECT * FROM NLS_SESSION_PARAMETERS;
```
or, if you have the permissions `GRANT SELECT ON V_$PARAMETER TO YOUR_USERNAME;`, you can use the command:
```
SHOW PARAMETER NLS;
```
If you just want the date format then you can do either:
```
SELECT * FROM NLS_SESSION_PARAMETERS WHERE PARAMETER = 'NLS_DATE_FORMAT';
```
or
```
SHOW PARAMETER NLS_DATE_FORMAT;
```
However, you could also use ANSI date (or timestamp) literals which are format agnostic. An ANSI date literal has the format `DATE 'YYYY-MM-DD'` and a timestamp literal has the format `TIMESTAMP 'YYYY-MM-DD HH24:MI:SS.FF9'`. So your query would be:
```
select * from search where search_date>= DATE '2016-03-16'
```
or
```
select * from search where search_date>= TIMESTAMP '2016-03-16 00:00:00'
```
|
What is Oracle's Default Date Format?
|
[
"",
"sql",
"oracle",
"date-formatting",
""
] |
This is the error I get.
> 16.03.2016 12:02:16.413 *WARN* [xxx.xxx.xx.xxx [1458147736268] GET /en/employees-leaders/employee-s-toolkit2/epd-update/epd-update-archives/caterpillar-news/upcoming-brand-webinarfocusonmarketing.html HTTP/1.1] com.day.cq.wcm.core.impl.LanguageManagerImpl Error while
> retrieving language property. javax.jcr.AccessDeniedException: cannot
> read item xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (alpha-numeric)
I am trying to locate the node in JCR using the xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, which I believe is uuid, using a query in AEM.
* Is the xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx actually a uuid?
* How do I locate the source i.e node, causing the issue?
I tried running a sql with the above id in the jcr, but it returned no result.
```
//*[jcr:contains(., '91186155-45ad-474-9ad9-d5156a398629')] order by @jcr:score descending
```
Any other insights would be much appreciated.
|
You don't need a query if you know the Node's UUID, just use the [Session.getNodeByIdentifier(String id)](http://www.day.com/specs/jsr170/javadocs/jcr-2.0/javax/jcr/Session.html#getNodeByIdentifier%28java.lang.String%29) method.
|
Your query is not SQL as you stated, it's XPATH. Is that a typo or did you run the query incorrectly?
It certainly looks like a UUID. You can query for the `jcr:uuid` property or you can continue doing a full text search.
XPATH:
`/jcr:root//*[jcr:contains(., '91186155-45ad-474-9ad9-d5156a398629')]`
`/jcr:root//*[@jcr:uuid='91186155-45ad-474-9ad9-d5156a398629']`
JCR-SQL2:
`SELECT * FROM [nt:base] AS s WHERE contains(s.*, '91186155-45ad-474-9ad9-d5156a398629')`
`SELECT * FROM [nt:base] WHERE [jcr:uuid] = '91186155-45ad-474-9ad9-d5156a398629'`
What read permissions does your account have? You're going to find a lot of results for a `jcr:uuid` query will be under `/jcr:system/jcr:versionStorage`.
|
How to locate the node in JCR using uuid from query
|
[
"",
"sql",
"aem",
"jcr",
""
] |
[](https://i.stack.imgur.com/Wexz8.jpg)
[](https://i.stack.imgur.com/eq6eA.jpg)
I need to find the names of aircraft such that **all pilots** certified to operate them earn more than 60000.
Query I wrote:
```
select aname
from employee join certified
on employee.eid=certified.eid
join aircraft
on certified.aid=aircraft.aid
where salary>60000;
```
But it returns aname if there is any pilot with a salary above 60000. The difficult part is that aname should be displayed only if all certified pilots earn more than 60000.
|
You can just look for the opposite case - that **no** pilots earn less than 60,000:
```
SELECT
aname
FROM
Aircraft A
WHERE
NOT EXISTS
(
SELECT *
FROM Certified C
INNER JOIN Employee E ON
E.eid = C.eid AND
E.salary < 60000
WHERE C.aid = A.aid
)
```
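This is the classic relational-division trick via double negation: keep an aircraft unless some certified pilot earns less. A small runnable sketch (SQLite via Python; the sample salaries are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Aircraft  (aid INTEGER, aname TEXT);
CREATE TABLE Employee  (eid INTEGER, salary INTEGER);
CREATE TABLE Certified (eid INTEGER, aid INTEGER);
INSERT INTO Aircraft VALUES (1, 'Boeing 747'), (2, 'Piper Cub');
INSERT INTO Employee VALUES (10, 70000), (11, 50000);   -- 11 earns < 60000
INSERT INTO Certified VALUES (10, 1), (10, 2), (11, 2); -- 11 certified on 2
""")

# An aircraft qualifies only if NO certified pilot earns under 60000.
rows = conn.execute("""
SELECT aname FROM Aircraft A
WHERE NOT EXISTS (
  SELECT * FROM Certified C
  JOIN Employee E ON E.eid = C.eid AND E.salary < 60000
  WHERE C.aid = A.aid)
""").fetchall()
print(rows)
```

The Piper Cub is excluded because pilot 11 (certified on it) earns below the threshold.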
|
```
SELECT aname FROM Aircraft where NOT EXISTS (SELECT eid FROM Employee AS e INNER JOIN Certified AS c ON c.eid=e.eid WHERE salary<60000 AND aid=Aircraft.aid)
```
|
How to write this complex SQL query?
|
[
"",
"mysql",
"sql",
"join",
""
] |
So, I admit it was one of my exam tasks yesterday and I failed to deal with it...
I had a simple database of people with their name, salary and function (A, B, C, D, E and F), and I had to select the functions that have the biggest and the lowest average salary. I also had to ignore function C.
Example of database:
```
name salary function
Mike 100 A
John 200 F
Jenny 500 B
Fred 400 B
... 250 C
... 800 D
... 100 E
... 350 E
... 450 F
... 250 A
... 500 B
```
Example of result:
```
function avg salary
A 300
C 600
```
I know how to do it using UNION: in Oracle I can group by function, order by salary, union that with the same query ordered by salary desc, and for example fetch only 1 row in both selects. I could have used WHERE, but it's impossible to use WHERE with an aggregate (like AVG(salary)). How can I do it in a single query without UNION, MINUS or INTERSECT?
|
Similar to @vkp's approach, but without the join back to the base table:
```
select function, avg_salary
from (
select function, avg(salary) as avg_salary,
rank() over (order by avg(salary)) as rnk_asc,
rank() over (order by avg(salary) desc) as rnk_desc
from tablename
group by function
)
where rnk_asc = 1 or rnk_desc = 1;
F AVG_SALARY
- ----------
D 800
A 175
```
The two `rank()` calls put each function/average in order:
```
select function, avg(salary) as avg_salary,
rank() over (order by avg(salary)) as rnk_asc,
rank() over (order by avg(salary) desc) as rnk_desc
from tablename
group by function;
F AVG_SALARY RNK_ASC RNK_DESC
- ---------- ---------- ----------
D 800 6 1
B 4.7E+02 5 2
F 325 4 3
C 250 3 4
E 225 2 5
A 175 1 6
```
That forms the inline view; the outer query then just selects the rows ranked 1 in either of the generated columns, which is D and A here.
If two functions had the same average salary then they would get the same rank, and if both were ranked 1 you'd see both; so you can get more than two results. That may be what you want. If not, you can avoid it by defining how to break ties, either with `rank()` or `dense_rank()`.
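A runnable sketch of the double-`rank()` trick (SQLite 3.25+ window functions via Python; the rows reproduce the question's data, with placeholder names for the unnamed people and `func` instead of `function` as the column name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (name TEXT, salary INTEGER, func TEXT);
INSERT INTO emp VALUES
  ('Mike', 100, 'A'), ('John', 200, 'F'), ('Jenny', 500, 'B'),
  ('Fred', 400, 'B'), ('p5', 250, 'C'), ('p6', 800, 'D'),
  ('p7', 100, 'E'), ('p8', 350, 'E'), ('p9', 450, 'F'),
  ('p10', 250, 'A'), ('p11', 500, 'B');
""")

# Rank each function's average both ascending and descending, then keep
# whichever rows are ranked 1 either way (the min and the max).
rows = conn.execute("""
SELECT func, avg_salary FROM (
  SELECT func, AVG(salary) AS avg_salary,
         RANK() OVER (ORDER BY AVG(salary))      AS rnk_asc,
         RANK() OVER (ORDER BY AVG(salary) DESC) AS rnk_desc
  FROM emp
  GROUP BY func
) WHERE rnk_asc = 1 OR rnk_desc = 1
""").fetchall()
print(dict(rows))
```

D (average 800) and A (average 175) come back, matching the answer's output.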
|
You could use `rank` function (use `dense_rank` if there can be ties in averages) to order rows by their average salary. Then select the highest and lowest ranked rows.
```
select t1.function, avg(t1.salary)
from (select
rank() over(order by avg(salary) desc) rnk_high
,rank() over(order by avg(salary)) rnk_low
,function
from tablename
group by function) t
join tablename t1 on t.function = t1.function
where rnk_high = 1 or rnk_low = 1
group by t1.function
```
|
How to select first and last record from ordered query without UNION, INTERSECT etc
|
[
"",
"sql",
"oracle",
"group-by",
"sql-order-by",
"union",
""
] |
I have many to many association between `contact` and `project`.
```
contact:
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| deleted | boolean | NO | | NULL | |
+-----------------+--------------+------+-----+---------+----------------+
project:
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| status | varchar | NO | | NULL | |
+-----------------+--------------+------+-----+---------+----------------+
project_contact:
+---------------+---------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+---------+------+-----+---------+-------+
| project_id | int(11) | NO | PRI | NULL | |
| contact_id | int(11) | NO | PRI | NULL | |
| proj_con_role | varchar | NO | | NULL | |
+---------------+---------+------+-----+---------+-------+
```
I would like to count how many contacts are associated with no projects, one project, or more than one project. But for the latter two (one project and more than one project), the project status has to be `'STATUS_X'`, the `proj_con_role` has to be `'CLIENT'`, and the contact must not be marked as deleted. If I could get that in one single query, that'd be absolutely awesome; if not, 3 different queries would do as well.
I have this so far:
```
SELECT numprojects,
Count(*) AS numcontacts
FROM (
SELECT c.id,
Count(pc.contact_id) AS numprojects
FROM contact c
LEFT JOIN project_contact pc
ON pc.contact_id = c.id
AND pc.proj_con_role = 'CLIENT'
WHERE (
c.deleted isnull
OR c.deleted = false)
GROUP BY c.id ) c
GROUP BY numprojects
ORDER BY numprojects
```
Now, this works fine, but for the life of me, I cannot seem to add the condition that the project has to have a certain status... I have no idea how to add it. Any help would be absolutely great.
I have tried adding:
```
left join project p on p.status = 'STATUS_X' and p.id = pc.project_id
```
but of course, it doesn't work like this...
later edit 1:
if I add:
```
inner join project p on p.status = 'STATUS_X' and p.id = pc.project_id
```
I get the correct results for 1 or more projects, but the contacts on no projects are ignored. Maybe a union here? Not sure.
|
I fixed it this way:
```
SELECT numprojects,
Count(*) AS numcontacts
FROM (
SELECT c.id,
Count(pc.contact_id) AS numprojects
FROM contact c
LEFT JOIN project_contact pc
ON pc.contact_id = c.id
AND pc.proj_con_role = 'CLIENT'
INNER JOIN project p
ON p.status = 'STATUS_X'
                            AND p.id = pc.project_id
WHERE (
c.deleted isnull
OR c.deleted = false)
GROUP BY c.id ) c
GROUP BY numprojects
UNION ALL
SELECT numprojects,
count(*) AS numcontacts
FROM (
SELECT c.id,
count(pc.contact_id) AS numprojects
FROM contact c
LEFT JOIN project_contact pc
ON pc.contact_id = c.id
                            AND pc.proj_con_role = 'CLIENT'
WHERE (
c.deleted isnull
OR c.deleted = false)
GROUP BY c.id ) c
WHERE numprojects = 0
GROUP BY numprojects
ORDER BY numprojects
```
Thank you all for your answers and support.
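The shape of the result (contacts bucketed by how many qualifying projects they have) can be checked with a cut-down sketch (SQLite via Python; schema and data here are simplified assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contact (id INTEGER, deleted INTEGER);
CREATE TABLE project (id INTEGER, status TEXT);
CREATE TABLE project_contact (project_id INTEGER, contact_id INTEGER,
                              proj_con_role TEXT);
INSERT INTO contact VALUES (1, 0), (2, 0), (3, 0);
INSERT INTO project VALUES (10, 'STATUS_X'), (11, 'STATUS_X'), (12, 'OTHER');
INSERT INTO project_contact VALUES
  (10, 1, 'CLIENT'), (11, 1, 'CLIENT'),   -- contact 1: two qualifying
  (10, 2, 'CLIENT'),                      -- contact 2: one qualifying
  (12, 3, 'CLIENT');                      -- contact 3: project wrong status
""")

# Count qualifying projects per contact, then count contacts per bucket.
rows = conn.execute("""
SELECT numprojects, COUNT(*) AS numcontacts
FROM (SELECT c.id,
             COUNT(p.id) AS numprojects
      FROM contact c
      LEFT JOIN project_contact pc
             ON pc.contact_id = c.id AND pc.proj_con_role = 'CLIENT'
      LEFT JOIN project p
             ON p.id = pc.project_id AND p.status = 'STATUS_X'
      WHERE c.deleted = 0
      GROUP BY c.id) t
GROUP BY numprojects
ORDER BY numprojects
""").fetchall()
print(rows)
```

Using a second `LEFT JOIN` with the status filter in the `ON` clause keeps zero-project contacts in the result, which is why the inner-join version in the question lost them.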
|
This should work:
```
select
case when project_contracts = 0 then '0 projects'
when project_contracts = 1 then '1 project'
else '2+ projects' end as num_of_projects
count(contracts) as contracts
from
(select
c.id as contracts
sum(case when p.id is null then 0 else 1 end) as projects_contracts
from contracts c
left join project_contracts p on p.id = c.id
group by c.id)
group by case when project_contracts = 0 then '0 projects'
when project_contracts = 1 then '1 project'
else '2+ projects' end
```
|
counting in joins
|
[
"",
"sql",
"database",
"postgresql",
"join",
"left-join",
""
] |
Let's say that I have a simple database like this:
```
People
name age
Max 25
Mike 15
Lea 22
Jenny 75
Juliet 12
Kenny 10
Mark 44
```
and I want to select the N oldest people from it by JOINing the People table with itself. I've tried to JOIN them this way:
```
People p1 JOIN People p2 ON p1.age < p2.age
```
to be able to COUNT the number of people in p2 that are older than a specific one in p1, and then filter the result by this count, but I don't know how to COUNT it or whether the way I JOIN these tables is correct :)
For N = 4 expected result is:
```
name age
Jenny 75
Mark 44
Max 25
Lea 22
```
|
I think what you meant to do is:
```
SELECT t.name,t.age FROM (
SELECT p1.name,p1.age,count(*) as cnt FROM People p1
    JOIN People p2 ON p1.age < p2.age
GROUP BY p1.name,p1.age) t
WHERE t.cnt <= N
```
But there is no need for that; you can use Oracle's `rownum`:
```
SELECT * FROM (
SELECT * FROM People p1
ORDER BY p1.age DESC)
WHERE ROWNUM <= N
```
|
Newer Oracle versions support `FETCH FIRST`:
```
select *
from people
order by age desc
fetch first 4 rows only
```
You can also try: `fetch first 4 rows with ties`
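For engines without `FETCH FIRST` or `ROWNUM`, the same top-N query is an `ORDER BY` plus `LIMIT`. A sketch with SQLite via Python, using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE People (name TEXT, age INTEGER);
INSERT INTO People VALUES
  ('Max', 25), ('Mike', 15), ('Lea', 22), ('Jenny', 75),
  ('Juliet', 12), ('Kenny', 10), ('Mark', 44);
""")

# Sort oldest-first and keep the first N rows (N = 4 here).
rows = conn.execute(
    "SELECT name, age FROM People ORDER BY age DESC LIMIT 4").fetchall()
print(rows)
```

This reproduces the expected result from the question without any self-join.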
|
Selecting N first rows with the biggest value of an attribute using JOIN with the same table
|
[
"",
"sql",
"oracle",
"join",
"sql-order-by",
"greatest-n-per-group",
""
] |
<http://sqlfiddle.com/#!4/bab93d>
See the SQL Fiddle example... I have Customers, Tags, and a mapping table. I am trying to implement customer search by tags, and it has to be an AND search. The query is passed a list of tag identifiers (any number), and it has to return only customers that have ALL the tags.
In the example I have used an IN operator, but that corresponds to an OR search and doesn't solve my problem. What should the query look like to be an AND search?
```
select
*
from
customer c
inner join customer_tag ct on ct.customer_id = c.customer_id
where
ct.tag_id in (1, 2);
```
This returns both customers, but only the first customer is tagged with tag 1 and 2.
|
You could use a correlated subquery to get the list of all matching customers:
```
SELECT *
FROM customer c
WHERE c.customer_ID IN
(
SELECT customer_id
FROM customer_tag ct
WHERE ct.customer_id = c.customer_id
AND ct.tag_id IN (1,2)
GROUP BY customer_id
HAVING COUNT(DISTINCT tag_id) = 2
);
```
It is easy to extend; just:
```
WHERE ct.tag_id IN (1,2,3)
...
HAVING COUNT(DISTINCT tag_id) = 3
```
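A compact way to convince yourself that the `HAVING COUNT(DISTINCT ...)` trick implements AND-search (SQLite via Python; customer 1 has both tags, customer 2 only tag 1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (customer_id INTEGER, name TEXT);
CREATE TABLE customer_tag (customer_id INTEGER, tag_id INTEGER);
INSERT INTO customer VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO customer_tag VALUES (1, 1), (1, 2), (2, 1);
""")

# Only customers holding ALL of the requested tags survive the HAVING:
# the count of distinct matching tags must equal the number requested.
rows = conn.execute("""
SELECT name FROM customer c
WHERE c.customer_id IN (
  SELECT customer_id FROM customer_tag ct
  WHERE ct.tag_id IN (1, 2)
  GROUP BY customer_id
  HAVING COUNT(DISTINCT tag_id) = 2)
""").fetchall()
print(rows)
```

Plain `IN` alone would have returned Bob too; the `HAVING` count is what turns OR into AND.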
|
`JOIN` version:
```
SELECT c.*
FROM customer c
JOIN (SELECT customer_id
FROM customer_tag
WHERE tag_id IN (1,2)
GROUP BY customer_id
HAVING MAX(tag_id) <> MIN(tag_id)) ct ON c.customer_id = ct.customer_id
```
If you have more than 2 different values, use `COUNT DISTINCT` instead, like this:
```
SELECT c.*
FROM customer c
JOIN (SELECT customer_id
FROM customer_tag
WHERE tag_id IN (1,2,3)
GROUP BY customer_id
HAVING COUNT(DISTINCT tag_id) = 3) ct ON c.customer_id = ct.customer_id
```
|
Oracle return rows only if all join conditions match
|
[
"",
"sql",
"oracle",
""
] |
I have a table with a foreign key called `team_ID`, a date column called `game_date`, and a single char column called `result`. I need to find when the next volleyball game happens. I have successfully narrowed the game dates down to all the volleyball games that have not happened yet because the result `IS NULL`. I have all the select in line, I just need to find the earliest date.
Here is what I've got:
```
SELECT game.game_date, team.team_name
FROM game
JOIN team
ON team.team_id = game.team_id
WHERE team.sport_id IN
(SELECT sport.sport_id
FROM sport
WHERE UPPER(sport.sport_type_code) IN
(SELECT UPPER(sport_type.sport_type_code)
FROM sport_type
WHERE UPPER(sport_type_name) like UPPER('%VOLLEYBALL%')
)
)
AND game.result IS NULL;
```
*I'm a time traveler so don't mind the old dates.*
When I run it, I get this:
```
GAME_DATE TEAM_NAME
----------- ----------
11-NOV-1998 BEars
13-NOV-1998 BEars
13-NOV-1998 WildCats
14-NOV-1998 BEars
```
How do I set it up so I get *only* the `MIN(DATE)` and the `TEAM_NAME` playing on that date?
I've tried `AND game.game_date = MIN(game.game_date)` but it simply tells me that a group function is not allowed here. There has to be a way to retrieve the `MIN(game_date)` and use it as a condition to be met.
*I'm using Oracle 11g pl/sql.*
[](https://i.stack.imgur.com/GM7eJ.png)
This should be the final working code.
```
SELECT *
FROM
(
SELECT g.game_date, t.team_name
FROM game g
JOIN team t
ON t.team_id = g.team_id
JOIN sport s
ON t.sport_id = s.sport_id
JOIN sport_type st
    ON UPPER(s.sport_type_code) = UPPER(st.sport_type_code)
WHERE UPPER(sport_type_name) like UPPER('%VOLLEYBALL%')
AND g.result IS NULL
ORDER BY g.game_date
)
WHERE ROWNUM = 1;
```
|
The `ROWNUM` pseudocolumn is generated before any `ORDER BY` clause is applied to the query. If you just do `WHERE ROWNUM <= X` then you will get `X` rows in whatever order Oracle produces the data from the datafiles and not the `X` minimum rows. To guarantee getting the minimum row you need to use `ORDER BY` first and then filter on `ROWNUM` like this:
```
SELECT *
FROM (
SELECT g.game_date, t.team_name
FROM game g
JOIN team t
ON t.team_id = g.team_id
INNER JOIN sport s
ON t.sport_id = s.sport_id
INNER JOIN sport_type y
ON UPPER( s.sport_type_code ) = UPPER( y.sport_type_code )
WHERE UPPER( y.sport_type_name) LIKE UPPER('%VOLLEYBALL%')
AND g.result IS NULL
ORDER BY game_date ASC -- You need to do the ORDER BY in an inner query
)
WHERE ROWNUM = 1; -- Then filter on ROWNUM in an outer query.
```
If you want to return multiple rows with the minimum date then:
```
SELECT game_date,
team_name
FROM (
SELECT g.game_date,
t.team_name,
RANK() OVER ( ORDER BY g.game_date ASC ) AS rnk
FROM game g
JOIN team t
ON t.team_id = g.team_id
INNER JOIN sport s
ON t.sport_id = s.sport_id
INNER JOIN sport_type y
ON UPPER( s.sport_type_code ) = UPPER( y.sport_type_code )
WHERE UPPER( y.sport_type_name) LIKE UPPER('%VOLLEYBALL%')
AND g.result IS NULL
)
WHERE rnk = 1;
```
|
Could you make it simple: order by date and take just the first row? I think this is the syntax in Oracle:
`WHERE ROWNUM <= number;`
|
Find the latest or earliest date
|
[
"",
"sql",
"oracle",
""
] |
I would like to combine two columns into one and separate by a '/' using SQL statement
Currently, I could only do this.
```
Select A.Marks, Q.NoOfAnsBox FROM AnswerTable AS A INNER JOIN QuestionTable AS
Q WHERE A.QuestionID = Q.QuestionID
```
With the output:
```
Marks NoOfAnsBox
3 5
2 5
```
May I know if it is possible to output it as one column as follows?
```
Marks
3/5
2/5
```
I tried converting the integers into strings and performing concatenation, but failed to obtain the desired result:
```
Select (CONVERT(A.Marks, Char(50)) + '/' + Convert(Q.NoOfAnsBox,Char(50))) As
Marks FROM AnswerTable AS A INNER JOIN QuestionTable AS Q WHERE A.QuestionID =
Q.QuestionID
```
```
Marks
8
7
```
|
In MySQL, use the `CONCAT()` function to concatenate strings, as `+` acts differently here than in other RDBMSs:
```
Select CONCAT(A.Marks, '/', Q.NoOfAnsBox) As Marks
FROM AnswerTable AS A
INNER JOIN QuestionTable AS Q ON A.QuestionID = Q.QuestionID
```
Also consider changing your "WHERE" to an "ON". In your case they are synonymous, but things could get ugly if you swap your `INNER` join to a `LEFT OUTER` join and forget to monkey with the `WHERE` clause.
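For comparison, standard SQL concatenates with `||` (which SQLite also uses); `CONCAT()` is the MySQL spelling. A sketch via Python's `sqlite3` with the question's numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE AnswerTable (QuestionID INTEGER, Marks INTEGER);
CREATE TABLE QuestionTable (QuestionID INTEGER, NoOfAnsBox INTEGER);
INSERT INTO AnswerTable VALUES (1, 3), (1, 2);
INSERT INTO QuestionTable VALUES (1, 5);
""")

# The || operator coerces the integers to text and joins them with '/'.
rows = conn.execute("""
SELECT A.Marks || '/' || Q.NoOfAnsBox AS Marks
FROM AnswerTable A JOIN QuestionTable Q ON A.QuestionID = Q.QuestionID
""").fetchall()
print(sorted(rows))
```

In MySQL, `||` is a logical OR by default (unless `PIPES_AS_CONCAT` mode is set), which is exactly why `CONCAT()` is the safe choice there.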
|
```
Select CONCAT(A.Marks, '/', Q.NoOfAnsBox) AS Marks FROM AnswerTable AS A INNER JOIN QuestionTable AS
Q WHERE A.QuestionID = Q.QuestionID
```
|
SQL: How to combine two columns with punctuation?
|
[
"",
"mysql",
"sql",
"concatenation",
""
] |
I have a table that contains table names, for example:
TableA
```
UID TableName CifKey ...
1 xxx 12345
1 yyy 12345
1 xxx 12345
2 zzz 45678
```
How can I select data from tables whose names match the `TableName` column in TableA?
For example:
```
SELECT A.a, B.b
FROM TableA A
JOIN ' + @TableName + ' B ON A.Cifkey = B.Cifkey
WHERE A.uid = @uid AND A.cifkey = @cifkey
```
Thank you!
|
Dynamic SQL is the way to go:
```
DECLARE @uid INT = 1,
@cifkey INT = 12345
DECLARE @sql NVARCHAR(MAX) = ''
SELECT @sql = @sql +
'SELECT
A.a, B.b -- Replace with correct column names
FROM TableA A
JOIN ' + QUOTENAME(TableName) + ' B
ON A.Cifkey = B.Cifkey
WHERE
A.uid = @uid
AND A.cifkey = @cifkey
UNION ALL
'
FROM (
SELECT DISTINCT TableName FROM TableA WHERE UID = @uid AND cifkey = @cifkey
) t
IF @sql <> '' BEGIN
-- Remove the last UNION ALL
SELECT @sql = LEFT(@sql, LEN(@sql) - 11)
PRINT @sql
EXEC sp_executesql
@sql,
N'@uid INT, @cifkey INT',
@uid,
@cifkey
END
```
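The same build-then-execute pattern can be sketched in application code. The key points carry over: the table name came from data, so validate/quote it before splicing it into the SQL text, and pass `uid`/`cifkey` as bound parameters, never by string concatenation. A hypothetical sketch using SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (uid INTEGER, TableName TEXT, CifKey INTEGER, a TEXT);
CREATE TABLE xxx (CifKey INTEGER, b TEXT);
INSERT INTO TableA VALUES (1, 'xxx', 12345, 'a1');
INSERT INTO xxx VALUES (12345, 'b1');
""")

uid, cifkey = 1, 12345

# Step 1: read the table names (they are data) with bound parameters.
tables = [r[0] for r in conn.execute(
    "SELECT DISTINCT TableName FROM TableA WHERE uid = ? AND CifKey = ?",
    (uid, cifkey))]

# Step 2: splice only a validated identifier into the SQL text;
# the filter values still go in as bound parameters.
results = []
for t in tables:
    if not t.isidentifier():   # crude whitelist: plain identifiers only
        raise ValueError(f"suspicious table name: {t!r}")
    sql = (f'SELECT A.a, B.b FROM TableA A JOIN "{t}" B ON A.CifKey = B.CifKey '
           "WHERE A.uid = ? AND A.CifKey = ?")
    results += conn.execute(sql, (uid, cifkey)).fetchall()
print(results)
```

This is the application-side analogue of `QUOTENAME` plus `sp_executesql` with typed parameters in the T-SQL answer above.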
|
You will have to use a dyanmic query - something like this
```
DECLARE @tablename VARCHAR(50)
DECLARE @uid INT = 1
DECLARE @cifkey INT = 12345
SELECT @tablename = TableName FROM TableA WHERE uid = @uid AND cifkey = @cifkey
DECLARE @query VARCHAR(500)
SELECT @query = 'select A.a, B.b from TableA A JOIN ' + @tablename + ' B
ON A.Cifkey = B.Cifkey
WHERE A.uid = ' + CONVERT(VARCHAR(20),@uid) + ' and A.cifkey = ' + CONVERT(VARCHAR(20), @cifkey)
EXEC(@query)
```
|
How to select data from multiple tables where table name is value from a column of another table?
|
[
"",
"sql",
"sql-server",
""
] |
table1
```
| email         | result |
---------------------------
| abc@gmail.com | 0.12   |
| dsv@gmail.com | 0.23   |
| rthgmail.com  | 0.45   |
| hfg@gmail.com | 0.56   |
| yyt@gmail.com | 0.78   |
| hjg@gmail.com | 0.35   |
```
table2
```
| resource |
-------------------
| 0.5 |
```
table3
```
| temp |
-------------------
|NULL |
```
Now I want to add `result` from table1 and `resource` from table2 for email abc@gmail.com, and save the sum in table3. I want to update table3 every time.
I tried the following but it's not working:
```
UPDATE table3 SET temp = table1.result + table2.resource WHERE email = 'abc@gmail.com'
```
How can I do it guys?
|
You can do it with a subquery:
```
UPDATE Table3 t
SET t.temp = (SELECT s.result+p.resource
FROM table1 s INNER JOIN table2 p
ON(s.email = 'abc@gmail.com'))
```
If your Table3 doesn't have data yet:
```
INSERT INTO Table3
(SELECT s.result+p.resource
FROM table1 s INNER JOIN table2 p
ON(s.email = 'abc@gmail.com'))
```
|
Seems like a curious request, but you want an `update` with `join`:
```
UPDATE table3 t3 CROSS JOIN
       table2 t2 CROSS JOIN
       (SELECT SUM(t1.result) AS result
        FROM table1 t1
        WHERE t1.email = 'abc@gmail.com') t1
   SET t3.temp = t1.result + t2.resource;
```
Are you sure you don't really want an `insert` instead?
|
Do some calulations of the values in two table and store it in third table mysql
|
[
"",
"mysql",
"sql",
"database",
"select",
""
] |
I am getting repeating rows when I query this table using the below query. I am not sure why this is happening, could someone please help explain?
I get 4 repeated rows when I query the data, when the expected result should be only 1 row.
The query is:
```
SELECT d.DIRECTOR_FNAME, d.Director_lname, s.studio_name
FROM DIRECTOR d, STUDIO s, FILM f, CASTING c
WHERE s.STUDIO_ID = f.STUDIO_ID
AND f.FILM_ID = c.FILM_ID
AND d.DIRECTOR_ID = c.DIRECTOR_ID
AND f.FILM_TITLE = 'The Wolf Of Wall Street';
```
And here's the table, I probably didn't need to put the entire table in but it's done now.
```
drop table casting;
drop table film;
drop table studio;
drop table actor;
drop table director;
CREATE TABLE studio(
studio_ID NUMBER NOT NULL,
studio_Name VARCHAR2(30),
PRIMARY KEY(studio_ID));
CREATE TABLE film(
film_ID NUMBER NOT NULL,
studio_ID NUMBER NOT NULL,
genre VARCHAR2(30),
genre_ID NUMBER(1),
film_Len NUMBER(3),
film_Title VARCHAR2(30) NOT NULL,
year_Released NUMBER NOT NULL,
PRIMARY KEY(film_ID),
FOREIGN KEY (studio_ID) REFERENCES studio);
CREATE TABLE director(
director_ID NUMBER NOT NULL,
director_fname VARCHAR2(30),
director_lname VARCHAR2(30),
PRIMARY KEY(director_ID));
CREATE TABLE actor(
actor_ID NUMBER NOT NULL,
actor_fname VARCHAR2(15),
actor_lname VARCHAR2(15),
PRIMARY KEY(actor_ID));
CREATE TABLE casting(
film_ID NUMBER NOT NULL,
actor_ID NUMBER NOT NULL,
director_ID NUMBER NOT NULL,
PRIMARY KEY(film_ID, actor_ID, director_ID),
FOREIGN KEY(director_ID) REFERENCES director(director_ID),
FOREIGN KEY(film_ID) REFERENCES film(film_ID),
FOREIGN KEY(actor_ID) REFERENCES actor(actor_ID));
INSERT INTO studio (studio_ID, studio_Name) VALUES (1, 'Paramount');
INSERT INTO studio (studio_ID, studio_Name) VALUES (2, 'Warner Bros');
INSERT INTO studio (studio_ID, studio_Name) VALUES (3, 'Film4');
INSERT INTO studio (studio_ID, studio_Name) VALUES (4, 'Working Title Films');
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (1, 1, 'Comedy', 1, 180, 'The Wolf Of Wall Street', 2013);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (2, 2, 'Romance', 2, 143, 'The Great Gatsby', 2013);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (3, 3, 'Science Fiction', 3, 103, 'Never Let Me Go', 2008);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (4, 4, 'Romance', 4, 127, 'Pride and Prejudice', 2005);
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (1, 'Martin', 'Scorcese');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (2, 'Baz', 'Luhrmann');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (3, 'Mark', 'Romanek');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (4, 'Joe', 'Wright');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (1, 'Matthew', 'McConnaughy');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (2, 'Leonardo', 'DiCaprio');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (3, 'Margot', 'Robbie');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (4, 'Joanna', 'Lumley');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (5, 'Carey', 'Mulligan');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (6, 'Tobey', 'Maguire');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (7, 'Joel', 'Edgerton');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (8, 'Keira', 'Knightly');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (9, 'Andrew', 'Garfield');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (10, 'Sally', 'Hawkins');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (11, 'Judi', 'Dench');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (12, 'Matthew', 'Macfadyen');
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (1, 1, 1);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (1, 2, 1);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (1, 3, 1);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (1, 4, 1);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (2, 2, 2);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (2, 5, 2);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (2, 6, 2);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (2, 7, 2);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (3, 5, 3);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (3, 8, 3);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (3, 9, 3);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (3, 10, 3);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (4, 5, 4);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (4, 8, 4);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (4, 11, 4);
INSERT INTO casting (film_ID, actor_ID, director_ID) VALUES (4, 12, 4);
```
|
The first question you should be asking is "What result set am I looking for?". Your query returns the director first name, director last name, and studio name where the film title is "The Wolf of Wall Street", once for each casting row that belongs to that film. You get 4 records because there is one record for each casting where film_ID = 1 in this case. Include the actor_ID column and you will see what I mean. Hope that helps.
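You can see the multiplication directly with a cut-down sketch (SQLite via Python; only the columns needed for the join are kept): the four casting rows for film 1 produce four identical result rows, and `DISTINCT` collapses them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE director (director_id INTEGER, fname TEXT);
CREATE TABLE film (film_id INTEGER, title TEXT);
CREATE TABLE casting (film_id INTEGER, actor_id INTEGER, director_id INTEGER);
INSERT INTO director VALUES (1, 'Martin');
INSERT INTO film VALUES (1, 'The Wolf Of Wall Street');
INSERT INTO casting VALUES (1, 1, 1), (1, 2, 1), (1, 3, 1), (1, 4, 1);
""")

base = """
SELECT {} d.fname
FROM director d
JOIN casting c ON d.director_id = c.director_id
JOIN film f ON f.film_id = c.film_id
WHERE f.title = 'The Wolf Of Wall Street'
"""
dup   = conn.execute(base.format("")).fetchall()          # one row per casting
dedup = conn.execute(base.format("DISTINCT")).fetchall()  # collapsed to one
print(len(dup), dedup)
```

The duplicates are not a bug in the join conditions; they reflect the one-row-per-casting grain of the joined result.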
|
The comma-separated `FROM` list with `AND`ed conditions can be rewritten as explicit `JOIN`s, and since the duplicates come from the four casting rows for that film, add `DISTINCT` to collapse them:
```
SELECT DISTINCT d.DIRECTOR_FNAME, d.Director_lname, s.studio_name
FROM DIRECTOR d
INNER JOIN CASTING c ON d.DIRECTOR_ID = c.DIRECTOR_ID
INNER JOIN FILM f ON f.FILM_ID = c.FILM_ID
INNER JOIN STUDIO s ON s.STUDIO_ID = f.STUDIO_ID
WHERE f.FILM_TITLE = 'The Wolf Of Wall Street';
```
|
SQL Query prints 4 times/Advice
|
[
"",
"sql",
"oracle",
"join",
"duplicates",
"inner-join",
""
] |
Let's say I have 20 columns in a table and I run a manual query like:
```
SELECT *
FROM [TABLE]
WHERE [PRODUCT] LIKE '%KRABBY PADDY%'
```
After viewing the results, I realize I only need 10 of those columns. Is there a quick way to list out the 10 columns you want, something like right clicking on the wild card and somehow selecting the columns you want?
|
Right clicking the `*` and selecting the columns doesn't sound terribly fast either.
You can use SSMS to go to the table, and drag the "Columns":
[](https://i.stack.imgur.com/rFmr3.png)
You'll get every column, and then you can keep the ones you want:
[](https://i.stack.imgur.com/WqDHp.png)
|
As far as I know you can't do exactly what you are asking for, but in SQL Server Management Studio you can obtain a SELECT statement with all the columns of a table by right-clicking the table in Object Explorer and selecting the options:
> **Script Table as** --> **SELECT To** --> **Clipboard**
Once you have this SELECT it is pretty easy to eliminate the columns you don't need.
|
Quickly going from * wild card to a few specific columns
|
[
"",
"sql",
"sql-server",
"ssms",
""
] |
If I have the below select query:
```
select col1, col2, col3, sum(value1) as col4, col5
from table1
group by col1, col2, col3, col5
```
How to add col6 = col4/col5?
|
You cannot reference a column alias elsewhere in the same `SELECT` clause, so you have to repeat `sum(value1)`:
```
select col1, col2, col3,
sum(value1) as col4,
col5,
sum(value1) / col5 as col6
from table1
group by col1, col2, col3, col5
```
|
Do the `GROUP BY` in a derived table:
```
select col1, col2, col3, col4, col5, col4/col5 as col6
from
(
select col1, col2, col3, sum(value1) as col4, col5
from table1
group by col1, col2, col3, col5
) dt
```
|
Divide two columns into group by function SQL
|
[
"",
"sql",
"sql-server",
"group-by",
"divide",
""
] |
I'm having a problem with a query.
```
select p.id, p.firstname, p.lastname , max(s.year) as 'Last Year', min(s.year) as 'First Year', c.name from pilot p
join country on country.sigla = p.country
join circuit c on c.country_id = country.sigla
join season s
on(p.id = s.pilot_id)
group by p.id, p.firstname, p.lastname, c.name
order by p.id
```
Table Pilot
```
Id (Primary Key)
Name
```
Table Season
```
Year (Primary key)
Pilot_id (Foreign Key)
```
Table Country
```
Sigla (Primary Key)
```
Table Circuit
```
id (Primary Key)
name
```
The table Pilot is linked to Season and Country. And the table circuit is linked to Country.
I want to show, for every pilot, the last and the first circuit on a single line, but the problem is that I'm getting duplicate results. The first result shows me the first circuit and the duplicate shows me the last circuit. I'm getting 67 results where I want only 40 (the total number of pilots in the database).
|
Borrowed answer from above with further explanation because I couldn't comment on it.
You'll need to write subqueries for the first year and the last year to use different select criteria for each. For the first year, you want to order by the year ASC to get the smallest year in the column. For last year, you'll want to order by year DESC to get the largest year in the column.
```
SELECT *
FROM (
SELECT p.id
,p.firstname
,p.lastname
,FirstCircuit = (
SELECT TOP 1 circuitname
FROM circuit c
WHERE p.id = c.id
ORDER BY year ASC
)
,LastCircuit = (
SELECT TOP 1 circuitname
FROM circuit c
WHERE p.id = c.id
ORDER BY year DESC
)
FROM pilot p
INNER JOIN country ON country.sigla = p.country
INNER JOIN season s ON (p.id = s.pilot_id)
) tbl
GROUP BY id
,firstname
,lastname
ORDER BY id
```
|
I suspect the issue is with the join to the `circuit` table.
There's no need to replace the `MIN(s.year)` and `MAX(s.year)` expressions in the select list. (Despite what other answers recommend, that does nothing to address the real issue... getting a result that meets the specification to return only *one* row per pilot.)
Start debugging this by starting with a simpler query... join just the `pilot` table and the `season` table. For example:
```
select p.id
, p.firstname
, p.lastname
, max(s.year) as 'Last Year'
, min(s.year) as 'First Year'
from pilot p
join season s
on p.id = s.pilot_id
group by p.id, p.firstname, p.lastname
order by p.id
```
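As a hedged, runnable illustration of that debugging step (SQLite via Python, with invented pilot data), grouping on the pilot columns alone does return exactly one row per pilot:

```python
import sqlite3

# Invented data: two pilots, several seasons each.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pilot  (id INTEGER, firstname TEXT, lastname TEXT);
CREATE TABLE season (year INTEGER, pilot_id INTEGER);
INSERT INTO pilot  VALUES (1, 'Ayrton', 'Senna'), (2, 'Alain', 'Prost');
INSERT INTO season VALUES (1984, 1), (1988, 1), (1994, 1),
                          (1980, 2), (1993, 2);
""")

# One row per pilot: MAX/MIN collapse all season rows per group.
rows = con.execute("""
    SELECT p.id, p.firstname, p.lastname,
           MAX(s.year) AS last_year, MIN(s.year) AS first_year
    FROM pilot p
    JOIN season s ON p.id = s.pilot_id
    GROUP BY p.id, p.firstname, p.lastname
    ORDER BY p.id
""").fetchall()
```

Each pilot comes back exactly once; duplicates only appear when extra grouping columns (like the circuit name) are added.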
That should return, at most, one row per `pilot` (given that `id` is unique in the `pilot` table.) Rows in `pilot` that don't have any associated rows in `season` will be excluded, due to the inner join.
When you add in the joins to the other tables (`country` and `circuit`) you have the potential to introduce duplicate rows. But those rows will get "collapsed" into a single row for each pilot.
It's when you include `c.name` in the `GROUP BY`, that's when your "duplicate" rows start appearing in your result set. With that expression in the `GROUP BY` clause, you have the potential to get back more than one row for each pilot.
That's where the issue is.
You will be guaranteed that the rows returned will have distinct values of `c.name` for each pilot.
To fix that, you could remove `c.name` from the `GROUP BY` clause, and use an aggregate expression in the select list, for example `MAX(c.name)`.
That query would return (at most) one row per pilot. (Again, if there are no rows in `circuit` associated with a `country` associated with a pilot, then those pilot rows will be excluded.)
```
select p.id
, p.firstname
, p.lastname
, max(s.year) as 'Last Year'
, min(s.year) as 'First Year'
, max(c.name) as circuit_name
from pilot p
join season s
on p.id = s.pilot_id
join country
on country.sigla = p.country
join circuit c
on c.country_id = country.sigla
group by p.id, p.firstname, p.lastname
order by p.id
```
---
On the bit about returning the "first and last circuit on each line"...
How do you determine which circuit is "first" and which circuit is "last" for each pilot? The only two columns we see in the table are `id` and `name`. And the only relationship (shown) between a `pilot` and a `circuit` is through the `country` table. A `pilot` only has one `country`, so a pilot is associated with every circuit in that `country`.
|
SQL - Query between four tables
|
[
"",
"sql",
"sql-server",
"database",
"relational-database",
""
] |
I have the following values in my database:
[](https://i.stack.imgur.com/aptOa.png)
I would like to create a unique key on `tv_series_id+name+tv_season_number+tv_episode_number`. However, the above does not work for my purposes using the conventional:
```
ALTER TABLE main_itemmaster ADD UNIQUE KEY (tv_series_id, tv_season_number, tv_episode_number, name)
```
Because of the null values. What would be a way to solve this at the database level (and not the query side) ?
And to clarify, the above should not be allowed, as "97736-Season 4-4-NULL" is repeated 10 times. The solution I was thinking of implementing as a last resort would be to store an additional string column for uniqueness, so "97736-Season 4-4-" for the above.
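To illustrate why the plain composite unique key does not catch these rows: most engines (MySQL included; SQLite is used here only for a quick runnable sketch) treat NULLs in a unique index as distinct, so the "duplicate" rows all pass. Column names follow the question; the demo itself is mine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE item (
        tv_series_id INTEGER, tv_season_number TEXT,
        tv_episode_number INTEGER, name TEXT,
        UNIQUE (tv_series_id, tv_season_number, tv_episode_number, name)
    )
""")

# With a NULL in the key, the "duplicate" insert is accepted...
con.execute("INSERT INTO item VALUES (97736, 'Season 4', 4, NULL)")
con.execute("INSERT INTO item VALUES (97736, 'Season 4', 4, NULL)")
null_dupes = con.execute(
    "SELECT COUNT(*) FROM item WHERE name IS NULL").fetchone()[0]

# ...whereas a fully non-NULL duplicate is rejected.
con.execute("INSERT INTO item VALUES (1, 'Season 1', 1, 'Pilot')")
try:
    con.execute("INSERT INTO item VALUES (1, 'Season 1', 1, 'Pilot')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Both NULL-keyed inserts succeed, which is why the workarounds below (a trigger-filled uniquer column, or a hash of the NULL-marked values) are needed.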
|
As requested, my suggestion (untested):
```
ALTER TABLE main_itemmaster ADD COLUMN uniquer BIGINT NOT NULL;
DELIMITER $$
CREATE TRIGGER add_uniquer BEFORE INSERT ON main_itemmaster
FOR EACH ROW
BEGIN
SET NEW.uniquer = (SELECT UUID_SHORT());
END $$
DELIMITER ;
ALTER TABLE main_itemmaster ADD UNIQUE KEY (tv_series_id, tv_season_number, tv_episode_number, name, uniquer);
```
The algorithm behind the uniquer is entirely up to you. I chose a [short UUID](http://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid-short) here. You might want a long UUID. Or UUID [might not be right for you](https://rclayton.silvrback.com/do-you-really-need-a-uuid-guid). Time-based might work. You could base it on the MD5 of the values in the inserted row.
|
A simpler solution (if you have a lot of fields which could be NULL and you still want the rows to be unique):
1) add unique\_hash field
```
ALTER TABLE `TABLE_NAME_HERE`
ADD `unique_hash` BIGINT unsigned NOT NULL DEFAULT '0' COMMENT 'JIRA-ISSUE-ID' AFTER `SOME_EXISTING_FIELD`
;
```
2) fill them
```
UPDATE TABLE_NAME_HERE SET unique_hash=cast(conv(substring(md5(
CONCAT(
IF(field1_id IS NULL, 'NULL',field1_id),'_',
IF(field2_id IS NULL, 'NULL',field2_id),'_',
IF(field3_id IS NULL, 'NULL',field3_id),'_',
IF(field4_id IS NULL, 'NULL',field4_id),'_',
IF(field5_id IS NULL, 'NULL',field5_id),'_'
)
), 1, 16), 16, 10) as unsigned integer);
```
3) add a unique index on the bigint (it will be faster than a composite unique index, because it is smaller)
```
ALTER TABLE `TABLE` ADD UNIQUE (`unique_hash`);
```
4) add trigger
```
DELIMITER $$
CREATE TRIGGER add_unique_hash BEFORE INSERT ON TABLE_NAME_HERE
FOR EACH ROW
BEGIN
SET NEW.unique_hash = (cast(conv(substring(md5(
CONCAT(
IF(NEW.field1_id IS NULL, 'NULL',NEW.field1_id),'_',
IF(NEW.field2_id IS NULL, 'NULL',NEW.field2_id),'_',
IF(NEW.field3_id IS NULL, 'NULL',NEW.field3_id),'_',
IF(NEW.field4_id IS NULL, 'NULL',NEW.field4_id),'_',
IF(NEW.field5_id IS NULL, 'NULL',NEW.field5_id),'_'
)
), 1, 16), 16, 10) as unsigned integer));
END $$
DELIMITER ;
```
5) ENJOY! :)
You will have a **really** unique index on field1_id, field2_id, field3_id, field4_id, field5_id, and NULL values will be allowed only once. No changes to scripts are required.
|
Preferred way to create composite unique index with NULL values
|
[
"",
"mysql",
"sql",
"indexing",
""
] |
All, I have a table with similar values to this:
```
Name Value Server time
a None 3/17/2016 11:59:50 PM
a None 3/17/2016 11:59:36 PM
a None 3/17/2016 11:59:33 PM
a March Madness 3/17/2016 11:59:33 PM
a None 3/17/2016 11:59:19 PM
b TGIF 3/17/2016 11:59:04 PM
b None 3/17/2016 11:58:44 PM
b March Madness 3/17/2016 11:58:29 PM
b None 3/17/2016 11:58:22 PM
c None 3/17/2016 11:58:17 PM
c None 3/17/2016 11:58:15 PM
c None 3/17/2016 11:58:14 PM
c None 3/17/2016 11:57:50 PM
c None 3/17/2016 11:57:33 PM
```
My result set should be this:
```
a March Madness
b TGIF
c 0
```
So in other words, I should get the latest value sorted by ServerTime that is not 'None' and when I just have 'None' it should be zero
Thanks in advance
|
If you don't care about "c", an easy way uses a `where` clause:
```
select t.*
from t
where t.servertime = (select max(t2.server_time)
from t t2
where t2.name = t.name and t2.value <> 'None'
);
```
If you do want all names, then you can use an aggregation approach:
```
select name, max(case when t.value <> 'None' then servertime end),
max(case when t.value <> 'None' and seqnum = 1 then value end)
from (select t.*,
row_number() over (partition by name
order by (case when t.value = 'None' then 1 else 2 end) desc,
servertime desc
) as seqnum
from t
) t
group by name;
```
Or even `outer apply`:
```
select n.name, t2.servertime, coalesce(t2.value, '0') as value
from (select distinct name from t) n outer apply
     (select top 1 t2.servertime, t2.value
      from t t2
      where t2.name = n.name and t2.value <> 'None'
      order by t2.servertime desc
     ) t2;
```
I actually like this method.
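A hedged, runnable sketch of the same idea in SQLite via Python (SQLite has no `OUTER APPLY`, so a correlated subquery with `LIMIT 1` plays that role; the sample data is abridged from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (name TEXT, value TEXT, servertime TEXT);
INSERT INTO t VALUES
 ('a', 'None',          '2016-03-17 23:59:50'),
 ('a', 'March Madness', '2016-03-17 23:59:33'),
 ('b', 'TGIF',          '2016-03-17 23:59:04'),
 ('b', 'March Madness', '2016-03-17 23:58:29'),
 ('c', 'None',          '2016-03-17 23:58:17');
""")

# Latest non-'None' value per name, or '0' when none exists.
rows = con.execute("""
    SELECT n.name,
           COALESCE((SELECT t2.value FROM t t2
                     WHERE t2.name = n.name AND t2.value <> 'None'
                     ORDER BY t2.servertime DESC
                     LIMIT 1), '0') AS value
    FROM (SELECT DISTINCT name FROM t) n
    ORDER BY n.name
""").fetchall()
```

This reproduces the requested result set, including the `0` fallback for `c`.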
|
**[SQL Fiddle Demo](http://sqlfiddle.com/#!15/4ebfd5/30)**
Using just SQL, you need to create a list of every Name using DISTINCT. Then join to find the MAX(server time) for each name. Then a third join to get the value. Finally the NULL becomes `0`, as in the case of `c`.
```
SELECT S."Name",
T.mtime,
CASE WHEN T.mtime IS NULL THEN '0'
ELSE YT."Value"
END as Value
FROM (
SELECT DISTINCT YT."Name"
FROM YourTable YT
) S
LEFT JOIN (
SELECT "Name", MAX("Server time") as mtime
FROM YourTable
WHERE "Value" <> 'None'
GROUP BY "Name"
) T
ON S."Name" = T."Name"
LEFT JOIN YourTable YT
ON YT."Server time" = T.mtime
AND YT."Value" <> 'None'
```
**OUTPUT**
```
| Name | mtime | value |
|------|-------------------------|---------------|
| a | March, 17 2016 23:59:33 | March Madness |
| b | March, 17 2016 23:59:04 | TGIF |
| c | (null) | 0 |
```
Another solution with one less `JOIN`: create a dummy date to solve the `c` case.
```
SELECT TM."Name",
TM.mtime,
CASE WHEN T."Value" IS NULL THEN '0'
ELSE T."Value"
END as Value
FROM (
SELECT "Name", MAX(CASE WHEN t."Value" <> 'None' THEN "Server time"
ELSE '0001-1-1 0:0:0'
END) as mtime
FROM YourTable t
GROUP BY "Name"
) TM
LEFT JOIN YourTable T
ON TM."Name" = T."Name"
AND (TM.mtime = T."Server time" )
AND T."Value" <> 'None'
```
|
Get Last Value that is not 'None'
|
[
"",
"sql",
"t-sql",
""
] |
I have a view cnst_prsn_nm. I want to check for records which share the same cnst_mstr_id and the same last name but differ on first names. So in Teradata SQL I did:
```
SELECT TOP 20 prsn_nm_a.cnst_mstr_id FROM arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_a
INNER JOIN arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_b
ON prsn_nm_a.cnst_mstr_id = prsn_nm_b.cnst_mstr_id
GROUP BY prsn_nm_a.cnst_mstr_id
HAVING COUNT(DISTINCT prsn_nm_a.bz_cnst_prsn_last_nm) = 1
WHERE prsn_nm_a.bz_cnst_prsn_first_nm <> prsn_nm_b.bz_cnst_prsn_first_nm
```
And then for those records' cnst_mstr_ids, I want to check another table, cnst_mstr.
Basically I want to check for where **left join IS NULL**
```
LEFT JOIN arc_mdm_vws.bzal_cnst_mstr mstr_new
ON prsn_nm_a.cnst_mstr_id = mstr_new.new_cnst_mstr_id
WHERE mstr_new.new_cnst_mstr_id IS NULL
```
So my query becomes essentially
```
SELECT TOP 20 prsn_nm_a.cnst_mstr_id FROM arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_a
INNER JOIN arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_b
ON prsn_nm_a.cnst_mstr_id = prsn_nm_b.cnst_mstr_id
GROUP BY prsn_nm_a.cnst_mstr_id
HAVING COUNT(DISTINCT prsn_nm_a.bz_cnst_prsn_last_nm) = 1
WHERE prsn_nm_a.bz_cnst_prsn_first_nm <> prsn_nm_b.bz_cnst_prsn_first_nm
LEFT JOIN arc_mdm_vws.bzal_cnst_mstr mstr_new
ON prsn_nm_a.cnst_mstr_id = mstr_new.new_cnst_mstr_id
WHERE mstr_new.new_cnst_mstr_id IS NULL
```
But there are two WHERE Clauses . Also LEFT JOIN cannot be there directly after HAVING . How can I have a left join after Group By and HAVING Clause when there is Filter associated with the Grouping ?
|
Your original query is not correct (`WHERE` goes before `GROUP BY`). Let me assume you mean this:
```
SELECT TOP 20 prsn_nm_a.cnst_mstr_id
FROM arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_a INNER JOIN
arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_b
ON prsn_nm_a.cnst_mstr_id = prsn_nm_b.cnst_mstr_id
WHERE prsn_nm_a.bz_cnst_prsn_first_nm <> prsn_nm_b.bz_cnst_prsn_first_nm
GROUP BY prsn_nm_a.cnst_mstr_id
HAVING COUNT(DISTINCT prsn_nm_a.bz_cnst_prsn_last_nm) = 1;
```
A non-matching left join is equivalent to using `NOT EXISTS`, so you can do:
```
SELECT TOP 20 prsn_nm_a.cnst_mstr_id
FROM arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_a INNER JOIN
arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_b
ON prsn_nm_a.cnst_mstr_id = prsn_nm_b.cnst_mstr_id
WHERE prsn_nm_a.bz_cnst_prsn_first_nm <> prsn_nm_b.bz_cnst_prsn_first_nm
GROUP BY prsn_nm_a.cnst_mstr_id
HAVING COUNT(DISTINCT prsn_nm_a.bz_cnst_prsn_last_nm) = 1 AND
NOT EXISTS (SELECT 1
FROM arc_mdm_vws.bzal_cnst_mstr mstr_new
WHERE prsn_nm_a.cnst_mstr_id = mstr_new.new_cnst_mstr_id
);
```
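A quick runnable check of that equivalence, sketched in SQLite via Python with throwaway tables `a` and `b` (the names are mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (id INTEGER);
CREATE TABLE b (id INTEGER);
INSERT INTO a VALUES (1), (2), (3);
INSERT INTO b VALUES (2);
""")

# Anti-join form 1: LEFT JOIN then keep the non-matching rows.
left_join = con.execute("""
    SELECT a.id FROM a
    LEFT JOIN b ON a.id = b.id
    WHERE b.id IS NULL
    ORDER BY a.id
""").fetchall()

# Anti-join form 2: NOT EXISTS with a correlated subquery.
not_exists = con.execute("""
    SELECT a.id FROM a
    WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.id = a.id)
    ORDER BY a.id
""").fetchall()
```

Both forms return the same rows (1 and 3), and the `NOT EXISTS` version has the advantage that it can sit inside a `HAVING` clause, as in the answer above.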
|
The clauses in a SQL statement always come in a specific order. First `SELECT`, then `FROM`, then `JOIN`s, then `WHERE`, then `GROUP BY`, then `HAVING`. You cannot deviate from that order, and do not need (and cannot have) a second `WHERE` clause. Make your one and only `WHERE` clause include ***all*** the condition you need.
```
SELECT TOP 20 prsn_nm_a.cnst_mstr_id
FROM arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_a
INNER JOIN arc_mdm_vws.bz_cnst_prsn_nm prsn_nm_b
ON prsn_nm_a.cnst_mstr_id = prsn_nm_b.cnst_mstr_id
LEFT JOIN arc_mdm_vws.bzal_cnst_mstr mstr_new
ON prsn_nm_a.cnst_mstr_id = mstr_new.new_cnst_mstr_id
WHERE prsn_nm_a.bz_cnst_prsn_first_nm <> prsn_nm_b.bz_cnst_prsn_first_nm
AND mstr_new.new_cnst_mstr_id IS NULL
GROUP BY prsn_nm_a.cnst_mstr_id
HAVING COUNT(DISTINCT prsn_nm_a.bz_cnst_prsn_last_nm) = 1
```
|
LEFT JOIN after Group By and HAVING
|
[
"",
"sql",
"group-by",
"teradata",
""
] |
A conundrum I can't figure out
Given this data in an Oracle table (named tbs_test); the first row contains the column names:
```
A |B| C
100|6|1000
100|6|1001
100|6|1002
100|7|1003 **
200|6|2000
200|6|2001
300|7|3000
300|7|3001
400|6|4000 **
400|7|4001
400|7|4002
```
Through an Oracle SQL select I want to retrieve the two records marked with **.
The rules are:
1. I'm interested in A=100 and A=400 because there are two different B's in the same A
A=100 has B=6 and B=7
A=400 has B=6 and B=7
2. I'm interested in C=1003 because there are fewer B=7 (one) than B=6 (three) in A=100
And I'm interested in C=4000 because there are fewer B=6 (one) than B=7 (two) in A=400
Man, I'm troubled... Can anyone see the solution?
cheers
Torsten
|
You can do this with aggregation and analytic functions:
```
select a, b, c
from (select a, b, max(c) as c,
row_number() over (partition by a order by count(*)) as seqnum,
count(*) over (partition by a) as numBs
from tbs_test t
group by a, b
) ab
where seqnum = 1 and numBs > 1;
```
One issue is that this returns only one row for each "A". It is unclear what to do when there is more than one match on the `B` with the fewest matches.
If you want all `C` values, the simplest way is to use `listagg(C, ',') within group (order by C)` instead of `max(C)`.
|
**Oracle Setup**:
```
CREATE TABLE tbs_test (A, B, C )AS
SELECT 100,6,1000 FROM DUAL UNION ALL
SELECT 100,6,1001 FROM DUAL UNION ALL
SELECT 100,6,1002 FROM DUAL UNION ALL
SELECT 100,7,1003 FROM DUAL UNION ALL
SELECT 200,6,2000 FROM DUAL UNION ALL
SELECT 200,6,2001 FROM DUAL UNION ALL
SELECT 300,7,3000 FROM DUAL UNION ALL
SELECT 300,7,3001 FROM DUAL UNION ALL
SELECT 400,6,4000 FROM DUAL UNION ALL
SELECT 400,7,4001 FROM DUAL UNION ALL
SELECT 400,7,4002 FROM DUAL;
```
**Query**:
```
SELECT A,B,C
FROM (
SELECT t.*,
COUNT( CASE B WHEN 6 THEN 1 END ) OVER ( PARTITION BY A ) AS count6,
COUNT( CASE B WHEN 7 THEN 1 END ) OVER ( PARTITION BY A ) AS count7
FROM tbs_test t
)
WHERE ( B = 6 AND count6 < count7 )
OR ( B = 7 AND count7 < count6 );
```
**Output**:
```
A B C
---------- ---------- ----------
100 7 1003
400 6 4000
```
|
An extreme multidimension Oracle Group By Select
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
OK so here's what I'm trying to accomplish.
Table 1 has columns called inifile and path. Entries look like...
```
----------------------------------
| inifile | path |
----------------------------------
|example1.ini | c:\temp\text.txt |
|example2.ini | c:\temp\text2.txt |
----------------------------------
```
Table 2 has columns called jobs, and job name. Entries look like...
```
----------------------------
| jobs | job name |
----------------------------
| example1 | Example Job 1 |
| example2 | Example Job 2 |
----------------------------
```
The jobs data in Table 2 matches the inifile from Table 1, minus the '.ini' extension. So I need to trim the extension off the 'inifile' field and join it with the 'jobs' field, to return the 'job name' and 'path'. The inifile values vary in length.
So the return brings back
```
------------------------------------------------------
| jobs | job name | path |
------------------------------------------------------
| example1 | Example Job 1 | c:\temp\text.txt |
| example2 | Example Job 2 | c:\temp\text2.txt|
------------------------------------------------------
```
I'm wracking my brain over this and haven't had any luck.
|
Try this:
```
SELECT Table2.jobs, table2.JobName, table1.Path
FROM
Table1
JOIN Table2 ON table2.jobs = REPLACE(table1.inifile, '.ini','')
```
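A small runnable sketch of that join, using SQLite from Python with the sample data from the question (table and column names simplified to valid identifiers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (inifile TEXT, path TEXT);
CREATE TABLE table2 (jobs TEXT, job_name TEXT);
INSERT INTO table1 VALUES ('example1.ini', 'c:\\temp\\text.txt'),
                          ('example2.ini', 'c:\\temp\\text2.txt');
INSERT INTO table2 VALUES ('example1', 'Example Job 1'),
                          ('example2', 'Example Job 2');
""")

# Strip the '.ini' suffix with REPLACE and join on the result.
rows = con.execute("""
    SELECT t2.jobs, t2.job_name, t1.path
    FROM table1 t1
    JOIN table2 t2 ON t2.jobs = REPLACE(t1.inifile, '.ini', '')
    ORDER BY t2.jobs
""").fetchall()
```

This produces the requested result set; the `SUBSTRING`-based variants below are safer when '.ini' could appear mid-string.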
|
If you are guaranteed that there is at most one occurrence of '.ini' within the string value, and that it's at the end of the string, then the `REPLACE` function (demonstrated in other answers) can work for you.
To trim the string at the first occurrence of '.ini', you could use the `SUBSTRING_INDEX` function
```
SET @foo = 'foo.ini' ;
SELECT SUBSTRING_INDEX(@foo,'.ini',1) ;
```
If there's a possibility of multiple occurrences of '.ini' within the string, and your goal is to remove '.ini' from the end of the string...
```
SET @foo = 'foo.init.bar.ini';
SELECT IF(@foo LIKE '%.ini',SUBSTRING(@foo,1,CHAR_LENGTH(@foo)-4),@foo) ;
```
Whichever expression does what you need it to do to return the value in your column, you can use that expression in a predicate.
```
SELECT ...
FROM table1 t1
JOIN table2 t2
  ON t2.jobs = IF(t1.inifile LIKE '%.ini'
                 ,SUBSTRING(t1.inifile,1,CHAR_LENGTH(t1.inifile)-4)
,t1.inifile
)
```
As an alternative, you might consider just concatenating `".ini"` to the value of the column in table2, and doing the comparison
```
ON CONCAT(t2.jobs,'.ini') = t1.inifile
```
|
Need a SQL Query to string trim from data in 1 table and match to another
|
[
"",
"sql",
""
] |
I have 4 tables:
```
CREATE TABLE ROLE (
id INT PRIMARY KEY,
name VARCHAR(50)
);
CREATE TABLE PROFILE (
id INT PRIMARY KEY,
batch VARCHAR(10)
);
CREATE TABLE USER (
id INT PRIMARY KEY,
name VARCHAR(50),
role_id INT REFERENCES ROLE(id),
profile_id INT REFERENCES PROFILE(id)
);
CREATE TABLE POST (
id INT PRIMARY KEY,
content VARCHAR(4000),
user_id INT REFERENCES USER(id)
);
```
Write a SQL query to display the name(s) of the alumni user (Role 'Alumni') from the 2008 batch who has/have posted the maximum number of posts, sorted by name.
I've tried this:
```
select user.name
from user
inner join role
on user.role_id=role.id
inner join profile
on user.profile_id=profile.id
inner join post
on user.id=post.user_id
where profile.batch="2008"
group by user.name
having count(post.content)=3
order by user.name;
```
I'm not able to put the correct condition in the HAVING clause.
|
[SQL Fiddle](http://sqlfiddle.com/#!9/35893b/7)
**MySQL 5.6 Schema Setup**:
```
CREATE TABLE ROLE (
id INT PRIMARY KEY,
name VARCHAR(50)
);
CREATE TABLE PROFILE (
id INT PRIMARY KEY,
batch VARCHAR(10)
);
CREATE TABLE USER (
id INT PRIMARY KEY,
name VARCHAR(50),
role_id INT REFERENCES ROLE(id),
profile_id INT REFERENCES PROFILE(id)
);
CREATE TABLE POST (
id INT PRIMARY KEY,
content VARCHAR(4000),
user_id INT REFERENCES USER(id)
);
INSERT INTO ROLE VALUES ( 1, 'Role1' );
INSERT INTO PROFILE VALUES ( 1, '2008' );
INSERT INTO USER VALUES ( 1, 'Alice', 1, 1 );
INSERT INTO USER VALUES ( 2, 'Bob', 1, 1 );
INSERT INTO USER VALUES ( 3, 'Carol', 1, 1 );
INSERT INTO POST VALUES ( 1, 'Post 1', 1 );
INSERT INTO POST VALUES ( 2, 'Post 2', 1 );
INSERT INTO POST VALUES ( 3, 'Post 1', 2 );
INSERT INTO POST VALUES ( 4, 'Post 1', 3 );
INSERT INTO POST VALUES ( 5, 'Post 2', 3 );
```
**Query 1**:
```
SELECT name
FROM (
SELECT name,
CASE WHEN @prev_value = num_posts THEN @rank_count
WHEN @prev_value := num_posts THEN @rank_count := @rank_count + 1
END AS rank
FROM (
SELECT u.name,
( SELECT COUNT( 1 ) FROM post t WHERE t.user_id = u.id ) AS num_posts
from user u
inner join role r
on u.role_id=r.id
inner join profile p
on u.profile_id=p.id,
( SELECT @prev_value := NULL, @rank_count := 0 ) i
where p.batch="2008"
ORDER BY num_posts DESC
) posts
) ranks
WHERE rank = 1
ORDER BY name ASC
```
**[Results](http://sqlfiddle.com/#!9/35893b/7/0)**:
```
| name |
|-------|
| Alice |
| Carol |
```
|
It could be this query:
```
select user.name, count(post.id)
from user
inner join role on user.role_id = role.id
inner join profile on user.profile_id = profile.id
inner join post on user.id = post.user_id
where profile.batch = '2008'
and role.name = 'alumni'
group by user.name
order by user.name ;
```
If you want only the users with the maximum number of posts, then it should be this:
```
select user.name from (
select user.name, sum(post.id) totpost
from user
inner join role on user.role_id = role.id
inner join profile on user.profile_id = profile.id
inner join post on user.id = post.user_id
where profile.batch = '2008'
and role.name = 'alumni'
group by user.name
having totpost = max(post.id)
order by totpost ) as mytable
```
|
query to display the name(s) of the alumni user (Role 'Alumni') from 2008 batch who has/have posted the maximum number of posts, sorted by name
|
[
"",
"mysql",
"sql",
""
] |
I have rows in my table that need deleting based on a few columns being duplicates.
e.g. Col1, Col2, Col3, Col4
If Col1, Col2 and Col3 are duplicates, regardless of what value is in Col4, I want both of these duplicate rows deleted. How do I do this?
|
You can do this using the `where` clause:
```
delete from t
where (col1, col2, col3) in (select col1, col2, col3
from t
group by col1, col2, col3
having count(*) > 1
);
```
|
Group by these columns and check with HAVING whether there are duplicates; then delete the records thus found.
```
delete from mytable
where (col1,col2,col3) in
(
select col1,col2,col3
from mytable
group by col1,col2,col3
having count(*) > 1
);
```
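A runnable sketch of this delete in SQLite via Python (the row-value `(col1, col2, col3) IN (...)` syntax needs SQLite 3.15+; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (col1 TEXT, col2 TEXT, col3 TEXT, col4 TEXT);
INSERT INTO t VALUES ('a','b','c','x'),  -- duplicate on col1..col3
                     ('a','b','c','y'),  -- duplicate on col1..col3
                     ('d','e','f','z');  -- unique, survives
""")

# Delete every row whose (col1, col2, col3) tuple occurs more than once;
# note that BOTH copies go, as the question asks.
con.execute("""
    DELETE FROM t
    WHERE (col1, col2, col3) IN (SELECT col1, col2, col3
                                 FROM t
                                 GROUP BY col1, col2, col3
                                 HAVING COUNT(*) > 1)
""")
remaining = con.execute("SELECT col1, col2, col3, col4 FROM t").fetchall()
```

Only the row that never had a semi-duplicate survives. On engines without row-value `IN`, the same predicate can be spelled with a correlated `EXISTS`.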
|
Deleting Semi Duplicate Rows in ORACLE SQL
|
[
"",
"sql",
"oracle",
""
] |
I have 2 tables that can have many-to-many relations between them:
```
Person (pid, name, a,b) ,
Attributes (attribId, d,e)
```
The mapping is present in a separate table:
```
Mapping (mapId, pid, attribId)
```
The goal is to get all Person and Attributes values for a person who qualifies the filter criteria. The filter criteria is based on a column in the Attributes table. Eg - column d.
For example:
```
Person ->
(1,'person1','a1','b1')
(2,'person2','a1','b1')
Attributes ->
(1,'d1','e1')
(2,'d2','e1')
(3,'d3','e1')
(4,'d3','e2')
Mapping ->
(1,1,1)
(2,1,2)
(3,1,3)
After running the query ->
Result:
(1,'person1','a1','b1')(1,'d1','e1')
(1,'person1','a1','b1')(2,'d2','e1')
(1,'person1','a1','b1')(3,'d3','e1')
```
Query that i have been trying ->
```
select p.*, a.*
from
Person p
left outer join
Mapping m
on p.pid=m.pid
left outer join
Attributes a
on m.attribId=a.attribId
where
p.pid in (select p1.pid
from
Person p1
left outer join
Mapping m1
on p1.pid=m1.pid
left outer join
Attributes a1
on m1.attribId=a1.attribId
where
a1.d = 'd1')
```
Similarly, I also have to discard Person entries that have a certain d value.
So, currently, the final query looks like this:
```
SELECT
p.*,
a.*
FROM Person p
LEFT OUTER JOIN Mapping m
ON p.pid = m.pid
LEFT OUTER JOIN Attributes a
ON m.attribId = a.attribId
WHERE p.pid IN (SELECT
p1.pid
FROM Person p1
LEFT OUTER JOIN Mapping m1
ON p1.pid = m1.pid
LEFT OUTER JOIN Attributes a1
ON m1.attribId = a1.attribId
WHERE a1.d = 'd1')
AND p.pid NOT IN (SELECT
p2.pid
FROM Person p2
LEFT OUTER JOIN Mapping m2
ON p2.pid = m2.pid
LEFT OUTER JOIN Attributes a2
ON m2.attribId = a2.attribId
WHERE a2.d = 'd5');
```
It feels like this query is inefficient since the same join is done at 3 places. Is there a way to reuse the join for all the sub-queries and make this more efficient?
[sqlfiddle demo](http://sqlfiddle.com/#!4/e8167/4)
|
You can get all persons satisfying the filter using:
```
select m.pid
from mapping m join
attributes a
on m.attribId = a.attribId and a.d = 'dS';
```
You can get all person/attribute combinations using `IN` or `EXISTS` or a `JOIN`. Which is better depends on the database. But the idea is:
```
select p.*, a.*
from person p join
mapping m
on p.pid = m.pid join
attributes a
on m.attribId = a.attribId
where p.pid in (select m.pid
from mapping m join
attributes a
on m.attribId = a.attribId and a.d = 'dS'
);
```
I see no reason to have `left join`s for these queries.
EDIT:
If the filter criteria is based on multiple columns, then use `group by` and `having` for the subquery:
```
select m.pid
from mapping m join
attributes a
on m.attribId = a.attribId and a.d = 'dS'
group by m.pid
having sum(case when a.d = 'dS' then 1 else 0 end) > 0 and -- at least one of these
sum(case when a.d = 'd1' then 1 else 0 end) = 0; -- none of these
```
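A runnable sketch of that at-least-one/none-of-these pattern, using SQLite from Python with invented mapping data ('d1' to include, 'd5' to exclude, matching the question's filters):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mapping    (mapId INTEGER, pid INTEGER, attribId INTEGER);
CREATE TABLE attributes (attribId INTEGER, d TEXT, e TEXT);
INSERT INTO attributes VALUES (1,'d1','e1'), (2,'d2','e1'), (3,'d5','e1');
-- person 1 has d1 and d2 (keep); person 2 has d1 and d5 (drop);
-- person 3 has only d2 (drop: no d1)
INSERT INTO mapping VALUES (1,1,1),(2,1,2),(3,2,1),(4,2,3),(5,3,2);
""")

# Conditional aggregation: require at least one 'd1', forbid any 'd5'.
pids = con.execute("""
    SELECT m.pid
    FROM mapping m
    JOIN attributes a ON m.attribId = a.attribId
    GROUP BY m.pid
    HAVING SUM(CASE WHEN a.d = 'd1' THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN a.d = 'd5' THEN 1 ELSE 0 END) = 0
""").fetchall()
```

One pass over the mapping table replaces both the `IN` and the `NOT IN` subqueries of the original query.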
|
The first thing I noticed is that you use left joins in the subqueries; an inner join works as well and is much faster. Second, remove Person from the nested selects because it is not needed.
```
select m2.pid
from
Mapping m2
inner join
Attributes a2
on m2.attribId=a2.attribId
where
a2.d = 'd5'
```
|
Optimizing sub-queries and joins
|
[
"",
"sql",
"sql-optimization",
""
] |
I have a database which (For whatever reason) has a column containing pipe delimited data.
I want to parse this data quickly, so I've thought of converting this column (nvarchar) into an XML by replacing the pipes with XML attributes and putting it into an XML data typed column somewhere else.
It works, except in the case where that column had a character that required encoding, such as a '<' character.
I found I could encode XML using FOR XML clause, however, that appears to inject some XML tags around the data.
For example: (this gives error on bad character)
```
SELECT CAST('<f>' + replace(value,'|','</f><f>') + '</f>' AS XML)
FROM TABLE
```
this gives the XML-encoded value, but wraps it in a "<value></value>" tag
```
SELECT value
FROM table
FOR XML PATH('')
```
Any ideas on how I can get the XML encoded value without this extra tag added, so I can convert the pipe format to XML after it's done (preferably in one swoop)?
EDIT: since people are asking, this is what 5 potential rows of data might look like
```
foo
foo|bar
foo|bar|1
foo||
baz|
```
And the results would be
```
Col1, Col2, Col3
foo,null,null
foo,bar,null
foo,bar,1
foo,null,null
baz,null,null
```
I'm achieving this by using the resulting XML type in a sub query such as: (it can be up to 4 columns pr 3 pipes in any given row)
```
SELECT
*,
x.query('f[1]').value('.','nVarChar(2048)') Col1
,x.query('f[2]').value('.','nVarChar(2048)') Col2
,x.query('f[3]').value('.','nvarchar(2048)') Col3
,x.query('f[4]').value('.','nvarchar(2048)') Col4
FROM
(
SELECT *,
CAST('<f>' + REPLACE(Value,'|','</f><f>') + '</f>' AS XML) as x
FROM table
) y
```
@srutzky makes a great point. No, I do not need to do XML here at all. If I can find a fast & clean way to parse pipes in a set based operation, I'll do that. Will review the SQL# documentation...
|
```
SELECT CAST('<values><f>' +
    REPLACE(
     REPLACE(
      REPLACE(
       REPLACE(
        REPLACE(value,'&','&amp;')
       ,'"','&quot;')
      ,'<','&lt;')
     ,'>','&gt;')
    ,'|','</f><f>') + '</f></values>' AS XML)
FROM TABLE;
```
|
Your idea was absolutely OK: by making XML out of your string, the XML engine converts all special characters properly, so after your splitting the XML should be correct.
If your string is stored in a column, you can avoid the automatically generated element name either by doing a kind of *computation* (something like `'' + YourColumn`) or by giving the column the alias `AS [*]`:
Try it like this:
```
DECLARE @str VARCHAR(100)='300|2€&ÄÖÜ|This is text -> should be text|2015-12-31';
SELECT @str FOR XML PATH('');
/*
300|2€&amp;ÄÖÜ|This is text -&gt; should be text|2015-12-31
*/
DECLARE @Xml XML=(SELECT CAST('<x>' + REPLACE((SELECT @str FOR XML PATH('')),'|','</x><x>')+'</x>' AS XML));
SELECT @Xml.value('/x[1]','int') AS IntTypeSave
,@Xml.value('/x[3]','varchar(max)') AS VarcharTypeSave
,@Xml.value('/x[4]','datetime') AS DateTypeSave;
/*
300 This is text -> should be text 2015-12-31 00:00:00.000
*/
SELECT X.value('.','varchar(max)') AS EachX
FROM @Xml.nodes('/x') AS Each(X);
/*
300
2€&ÄÖÜ
This is text -> should be text
2015-12-31
*/
```
|
How to encode XML in T SQL without the additional XML overhead
|
[
"",
"sql",
"sql-server",
"xml",
"t-sql",
""
] |
I have a scenario in which I have to get the users whose age is between, say, 10 and 20 in SQL. I have a column with the date of birth, `dob`. By executing the query below I get the age of all users:
```
SELECT
FLOOR((CAST (GetDate() AS INTEGER) - CAST(dob AS INTEGER)) / 365.25) AS Age
FROM users
```
My question is: how should I write my query so that I get the users whose age is between 10 and 20?
|
I don't have a SQL-Server available to test right now. I'd try something like:
```
select * from users where datediff(year, dob, getdate()) between 10 and 20;
```
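One caveat: `DATEDIFF(year, ...)` only subtracts the year numbers, so it can be off by one until the birthday has passed. If exactness matters, the age rule itself is simple to state; here is a small Python sketch of it (the function name and sample users are mine):

```python
from datetime import date

def age(dob: date, today: date) -> int:
    """Whole years elapsed, correct across birthday boundaries."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

today = date(2016, 3, 17)
a1 = age(date(2006, 3, 17), today)  # 10th birthday is today
a2 = age(date(2006, 3, 18), today)  # birthday is tomorrow, still 9

# Filtering "age between 10 and 20" over a set of users:
users = {'u1': date(2005, 1, 1), 'u2': date(1990, 1, 1), 'u3': date(2003, 6, 6)}
in_range = sorted(n for n, d in users.items() if 10 <= age(d, today) <= 20)
```

The same comparison-of-(month, day) trick is what the more careful T-SQL age formulas linked in the other answer implement.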
|
First add the computed field `Age` like you already did. Then filter on it in a `WHERE` clause.
```
SELECT * FROM
(SELECT FLOOR((CAST (GetDate() AS INTEGER) - CAST(dob AS INTEGER)) / 365.25) AS Age, *
from users) as users
WHERE Age >= 10 AND Age < 20
```
There are [number of ways](https://stackoverflow.com/questions/1572110/how-to-calculate-age-in-years-based-on-date-of-birth-and-getdate) to calculate age.
|
Get between ages from date of birth in sql
|
[
"",
"sql",
""
] |
I have a TraitQuestion model that has a number of traits or answers, each of which has a value such as 1, 2, 3, or -3105 (for none of the above). I can't change these values because they are required by an external API.
I'd like to order them by the value, but when I do that, the answer with -3105 shows up at the top even though it says "None of the above":
```
Answer 4 = -3105
Answer 1 = 1
Answer 2 = 2
Answer 3 = 3
```
Any idea on how we could order this so that it's
```
1
2
3
-3105?
```
I'm pretty new to SQL but it seems like I should be able to do something like this:
```
@trait_question.traits.order('CASE WHEN value AS int > 0 then 1 ELSE 0 END')
```
But that doesn't work. Any ideas how I can adjust the order call SQL such that it would order them in the correct order?
EDIT:
This is Postgresql 9.4.1 via Amazon AWS
|
I don't know if this will work for you or not but try this:
```
@trait_question.traits.order('ABS(value)')
```
The `ABS` will change the negative value to positive hence taking the absolute value each time. If your database field is `string` then you can do it like this as suggested by Tom which worked for him:
```
@trait_question.traits.order('ABS(CAST(value AS int))')
```
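The SQL that this Rails call generates can be checked directly; here is a small SQLite sketch (via Python's `sqlite3`) of the `ABS(CAST(...))` ordering, with made-up table and column names mirroring the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traits (label TEXT, value TEXT)")  # value stored as string
conn.executemany("INSERT INTO traits VALUES (?, ?)", [
    ("Answer 4", "-3105"),
    ("Answer 1", "1"),
    ("Answer 2", "2"),
    ("Answer 3", "3"),
])
# Ordering by the absolute value pushes the -3105 sentinel to the end
rows = conn.execute(
    "SELECT value FROM traits ORDER BY ABS(CAST(value AS INTEGER))").fetchall()
```

The "None of the above" row now sorts last instead of first.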
|
You can try it this way:
```
@trait_question.traits.order("CASE WHEN CAST(value AS integer) > 0 THEN CAST(value AS integer) ELSE CAST(value AS integer) * -1 END")
```
Otherwise use
```
@trait_question.traits.order("ABS(CAST(value AS integer))")
```
|
Rails Order by SQL
|
[
"",
"sql",
"ruby-on-rails",
""
] |
I have a simple table as scripted below.
```
CREATE TABLE dbo.KeyNumbers (
RowId INT IDENTITY
,ProFormaNumber NCHAR(10) NULL
,SalesOrderNumber NCHAR(10) NULL
,PurchaseOrderNumber NCHAR(10) NULL
,InvoiceCreditNoteNumber NCHAR(10) NULL
,InvoiceNumber NCHAR(10) NULL
,PurchaseInvoiceCreditNoteNumber NCHAR(10) NULL
,ModifiedDate DATETIME NULL
,CONSTRAINT PK_KeyNumbers PRIMARY KEY CLUSTERED (RowId)
) ON [PRIMARY]
```
The table is used to store key document numbers (invoice number, sales order number etc.) for the company, and as such only requires a single row. The main interaction with this table is done through stored procedures, so the end user should never need to access it. I am wondering if there is a way in SQL Server to actively restrict the table to one, and only one, row, and to do that at the design stage.
EDIT
Proof that Gordon's suggestion works nicely
[](https://i.stack.imgur.com/fzBeZ.png)
|
The obvious method uses a trigger on insert to be sure the table is empty.
I've never tried this, but an index on a computed column might also work:
```
alter table dbo.KeyNumbers add OneAndOnly as ('OneAndOnly');
alter table dbo.KeyNumbers add constraint unq_OneAndOnly unique (OneAndOnly);
```
This should generate a unique key violation if a second row is inserted.
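The same one-row guarantee can be demonstrated in SQLite (via Python's `sqlite3`). SQLite's computed-column support varies by version, so this sketch uses a `CHECK` constraint on a constant primary key instead of the constant-computed-column trick above; that substitution, and the trimmed-down table, are my own assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE KeyNumbers (
        RowId INTEGER PRIMARY KEY CHECK (RowId = 1),  -- only the value 1 is legal
        InvoiceNumber TEXT
    )""")
conn.execute("INSERT INTO KeyNumbers (RowId, InvoiceNumber) VALUES (1, 'INV-001')")
try:
    # a second row must fail: RowId 2 violates CHECK, RowId 1 violates the PK
    conn.execute("INSERT INTO KeyNumbers (RowId, InvoiceNumber) VALUES (2, 'INV-002')")
    second_row_allowed = True
except sqlite3.IntegrityError:
    second_row_allowed = False
```

Either constraint route (unique computed column or constant-checked key) turns "insert a second row" into a hard error at the engine level.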
|
--I think this would be cleaner and utilizes the existing columns
```
CREATE TABLE dbo.KeyNumbers (
RowId AS (1) PERSISTED
,ProFormaNumber NCHAR(10) NULL
,SalesOrderNumber NCHAR(10) NULL
,PurchaseOrderNumber NCHAR(10) NULL
,InvoiceCreditNoteNumber NCHAR(10) NULL
,InvoiceNumber NCHAR(10) NULL
,PurchaseInvoiceCreditNoteNumber NCHAR(10) NULL
,ModifiedDate DATETIME NULL
,CONSTRAINT PK_KeyNumbers PRIMARY KEY CLUSTERED (RowId)
) ON [PRIMARY]
```
|
Is it possible to restrict a sql table to only have a single row at the design stage
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
Most resources that describe a `SELECT TOP ...` query in Postgres say that you should use `LIMIT` instead, possibly with an `ORDER BY` clause if you need to select the top elements by some ordering.
What do you do if you need to select the top N elements from a recursive query, where there is no ordering and there is a possibility the query can return fewer than N rows without the recursion (so that the `TOP` part is necessary to ensure the result set is *at least* N rows, whereas `LIMIT` can allow fewer rows)?
My specific use case is a modification of the [dynamic SQL pattern for selecting a random subsample of a table](https://stackoverflow.com/questions/8674718/best-way-to-select-random-rows-postgresql/8675160#8675160).
Here is a [link to the sql source](https://github.com/spearsem/learnmeapostgres/blob/master/random_sample/random_draw.sql) of my modification. The easiest thing is to look at the final function defined there, `_random_select`. It follows very closely to the above link, but has been modified to be polymorphic in the input table and output result set, as well as correctly accounting for the need to return only the columns that already existed in the input table (yet another dynamic SQL hack to exclude the interim `row_number` result from the final result set).
It's an eyesore, but it's the closest thing I have to a reproducible example. If you use `_random_select` and attempt to get something around 4500 rows from a table larger than 4500 rows, you begin to see smaller result sets with high probability, and it only gets worse as you increase the size of your desired sample (because the occurrence of duplicates gets worse as your desired sample gets larger).
Note that in my modification, I am not using the `_gaps` trick from this link, meant to over-sample to offset sampling inefficiency if there are gaps in a certain index column. That part doesn't relate to this question, and in my case I am using `row_number` to ensure that there is an integer column with no possible gaps.
The CTE is recursive, to ensure that if the first, non-recursive part of the CTE doesn't give you enough rows (because of duplicates removed by the `UNION`), then it will go back through another round of recursive call of the CTE, and keep tacking on more results until you've got enough.
In the linked example, `LIMIT` is used, but I am finding that this does not work. The method returns fewer results because `LIMIT` is only an *at most N rows* guarantee.
How do you get an at *least* N rows guarantee? Selecting the `TOP` N rows seemed like the natural way to do this (so that the recursive CTE had to keep chugging along until it gets enough rows to satisfy the `TOP` condition), but this is not available in Postgres.
|
Your assessment is to the point. The recursive query in [my referenced answer](https://stackoverflow.com/questions/8674718/best-way-to-select-random-rows-postgresql/8675160#8675160) is only somewhat more flexible than the original simple query. It still requires relatively few gaps in the ID space and a sample size that is substantially smaller than the table size to be reliable.
While we need a comfortable surplus ("limit + buffer") in the simple query to cover the worst case scenario of misses and duplicates, we can work with a smaller surplus that's typically enough - since we have the safety net of a recursive query that will fill in if we should fall short of the limit in the first pass.
Either way, the technique is intended to get a small random selection from a big table quickly.
The technique is ***pointless*** for cases with *too many gaps* or (your focus) a sample size that is too close to the total table size - so that the recursive term might run dry before the limit is reached. For such cases a plain old:
```
SELECT * -- or DISTINCT * to fold duplicates like UNION does
FROM TABLE
ORDER BY random()
LIMIT n;
```
.. is more efficient: you are going to read most of the table anyway.
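For the large-sample case, the plain `ORDER BY random() LIMIT n` really does give a hard row-count guarantee. A small SQLite sketch (via Python's `sqlite3`, with an invented 5000-row table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO docs VALUES (?)", [(i,) for i in range(5000)])
# Reads the whole table, but returns exactly n distinct rows --
# no duplicate-removal shortfall as in the recursive sampler
sample = conn.execute(
    "SELECT id FROM docs ORDER BY RANDOM() LIMIT 3000").fetchall()
```

Unlike the recursive CTE, the result size here is exact whenever the table has at least `n` rows.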
|
This is too long for a comment, but is informative about what's going on with my existing query. From the [documentation on recursive query evaluation](http://www.postgresql.org/docs/9.5/static/queries-with.html), the steps that a recursive query will take are:
> Evaluate the non-recursive term. For UNION (but not UNION ALL), discard duplicate rows. Include all remaining rows in the result of the recursive query, and also place them in a temporary working table.
>
> So long as the working table is not empty, repeat these steps:
>
> a. Evaluate the recursive term, substituting the current contents of the working table for the recursive self-reference. For UNION (but not UNION ALL), discard duplicate rows and rows that duplicate any previous result row. Include all remaining rows in the result of the recursive query, and also place them in a temporary intermediate table.
>
> b. Replace the contents of the working table with the contents of the intermediate table, then empty the intermediate table.
So my hunch in the comments (after trying with `UNION ALL`) was mostly on the right track.
As the documentation states, this is actually just a type of *iteration* that re-uses the previous non-recursive result part in place of the recursive *name* used in the recursive part.
So it's more like an ever shrinking process, where the "working table" used to substitute in for the recursive name consists only of the particular subset of results at the most recent recursive round which *were not duplicates of previous results*.
Here's an example. Say we have 5000 rows in the table and we want to sample 3000 unique rows, doing a recursive sample of 1000 (potentially not unique) samples at a time.
We do the first batch of 1000, remove duplicates so we end up with about 818 non-dupes [based on the binomial distribution](https://math.stackexchange.com/questions/1056540/sampling-with-replacement-expected-number-of-duplicates-triplicates-n-t) for these large numbers (N=5000, m = 1000, k=1, rearrange terms to avoid overflow).
These 818 become the working table and this result set is subbed in as the recursive term for our next pass. We draw another set of about 818 unique rows, but then have to remove duplicates (`UNION`) when comparing to the original 818 that are in the working table. Two different random draws of 818 will have significant overlap (somewhere around 150 on average), so all of those are discarded and whatever *new* unique rows remain become the *new* working table.
So you'll add about 818 unique samples on the first draw, then the working table shrinks, you'll add about 650 on the second draw, the working table shrinks, ... and you keep doing this until either you reach the overall total samples desired (3000 in this case) *or* the working table ends up being empty.
Once the working table is small enough, there will be a very high probability that everything within it will be duplicated in the next draw of 1000, at which point the working table becomes empty and the recursion terminates.
For drawing 3000 samples, you might be able to do this working table update enough times. But as you move from 3000 closer to the table's total size of 5000, the probability shrinks to nearly zero very fast.
So instead of this being an optimizer issue that's short-circuiting with a smaller result set, it's actually an issue with the specific way "recursion" is implemented in Postgres -- it's not actually recursion but simple iteration that operates on the set difference between the current working table and the most recent recursive query result set. For random sampling like this, that working table will get smaller very fast with each iteration, until it's extremely likely to be empty due to the high probability of selecting a duplicate.
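The shrinking-working-table iteration described above can be sketched in plain Python (this is a simulation, not Postgres itself; the choice of drawing only the current shortfall each pass is my assumption, mirroring the referenced sampling function):

```python
import random

def recursive_sample(table_size, want, seed=42):
    """Mimic the recursive CTE: each pass draws random ids, the UNION
    discards duplicates of earlier results, and an empty working table
    (no new ids this pass) terminates the 'recursion'."""
    rng = random.Random(seed)
    seen = set()
    while len(seen) < want:
        k = want - len(seen)                      # draw only the shortfall
        draw = {rng.randrange(table_size) for _ in range(k)}
        new = draw - seen                         # UNION removes duplicates
        if not new:                               # working table empty: stop
            break
        seen |= new
    return seen

# Sample size close to the table size: the run can stop short of `want`
got = recursive_sample(table_size=5000, want=4900)
```

Running this for sample sizes near the table size shows the behavior described: as `k` shrinks, the chance that an entire pass is duplicates rises sharply, so the loop can end before `want` rows are collected.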
|
How to guarantee that at least N rows are returned by recursive CTE in Postgres
|
[
"",
"sql",
"postgresql",
"common-table-expression",
"recursive-query",
"sql-limit",
""
] |
What is the difference between the following 2 queries?
```
DELETE FROM experience WHERE end IS NOT NULL;
```
And
```
DELETE FROM experience WHERE NOT( end=NULL);
```
First query was accepted as a correct answer but second was not.
|
NULL values are treated differently from other values.
NULL is used as a placeholder for unknown or inapplicable values.
It is not possible to test for NULL values with comparison operators, such as =, <, or <>
You will have to use the IS NULL and IS NOT NULL operators instead.
Please refer below link for more details.
<http://www.w3schools.com/sql/sql_null_values.asp>
|
NULLs are a bit weird. A NULL is *never* equal to *anything **including another NULL.*** Further, any boolean operation against a NULL returns NULL.
The expression `end IS NOT NULL` will evaluate **`false`** if `end` is NULL, and **`true`** if `end` is not NULL.
The expression `NOT( end=NULL)` will actually always evaluate to NULL because `(end = NULL)` equals NULL and `NOT (NULL)` also equals NULL. More to the point in a WHERE clause, it will *never* evaluate **`true`**.
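Both behaviors are easy to verify in SQLite (via Python's `sqlite3`); the table and rows are made up, and the column is renamed `end_date` because `end` is a reserved word there:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE experience (id INTEGER, end_date TEXT)")
conn.executemany("INSERT INTO experience VALUES (?, ?)",
                 [(1, "2015-01-01"), (2, None)])
# IS NOT NULL matches the non-null row
is_not_null = conn.execute(
    "SELECT id FROM experience WHERE end_date IS NOT NULL").fetchall()
# NOT (col = NULL) evaluates to NULL for every row, so nothing matches
not_equal = conn.execute(
    "SELECT id FROM experience WHERE NOT (end_date = NULL)").fetchall()
```

The second query returns an empty result set even for the row where `end_date` has a value, which is exactly why only the first query was accepted.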
|
Difference between "IS NOT NULL" and "NOT (field = NULL)" in these 2 queries
|
[
"",
"mysql",
"sql",
""
] |
I have script where I want to first drop view and then create it.
I know how to drop table:
```
IF EXISTS (SELECT * FROM sys.tables WHERE name = 'table1' AND type = 'U') DROP TABLE table1;
```
so I did the same for views:
```
IF EXISTS (SELECT * FROM sys.views WHERE name = 'view1' AND type = 'U') DROP VIEW view1;
create view1 as(......)
```
and then I got error:
> 'CREATE VIEW' must be the first statement in a query batch.
|
Your `EXISTS` syntax is wrong, and you should separate DDL with `GO`, like below:
```
if exists(select 1 from sys.views where name='tst' and type='v')
drop view tst;
go
create view tst
as
select * from test
```
You can also check for existence with `object_id`, like below:
```
if object_id('tst','v') is not null
drop view tst;
go
create view tst
as
select * from test
```
In SQL Server 2016, you can use the syntax below to drop:
```
Drop view if exists dbo.tst
```
From SQL Server 2016 SP1, you can do the below:
```
create or alter view vwTest
as
select 1 as col;
go
```
|
```
DROP VIEW if exists {ViewName}
Go
CREATE View {ViewName} AS
SELECT * from {TableName}
Go
```
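SQLite happens to support the same `DROP VIEW IF EXISTS` syntax as SQL Server 2016, so the drop-then-create pattern can be exercised via Python's `sqlite3` (the `test` table and `tst` view names follow the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (col INTEGER)")
for _ in range(2):  # idempotent: safe to run the pair repeatedly
    conn.execute("DROP VIEW IF EXISTS tst")
    conn.execute("CREATE VIEW tst AS SELECT * FROM test")
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'view'")]
```

Because each statement is issued separately here, there is no batch to worry about; in T-SQL the `GO` separator plays that role.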
|
Drop view if exists
|
[
"",
"sql",
"sql-server",
"view",
"create-view",
""
] |
Actually, I have 2 tables: the friend table and the users table.
What I'm trying to achieve is to retrieve mutual friends by checking the friends of another user, and then get the data for these mutual friends from the users table.
table friend is built like this
```
id | user1 | user2 | friend_status
```
then the table data looks like this
```
1 | 1 | 2 | 1
2 | 1 | 3 | 1
3 | 2 | 3 | 1
4 | 1 | 4 | 1
5 | 2 | 4 | 1
```
Then let's say that I am the user with id 2: in that table I have 3 friends - 1, 3 and 4. I want to retrieve the friends I have in common with user 1, who also has 3 friends - 2, 3 and 4 - and retrieve the data from the users table for the 2 mutual friends, 3 and 4.
|
You can use a `UNION` to get a users friends:
```
SELECT User2 UserId FROM friends WHERE User1 = 1
UNION
SELECT User1 UserId FROM friends WHERE User2 = 1
```
Then, joining two of these `UNION` for two different Users on `UserId` you can get the common friends:
```
SELECT UserAFriends.UserId FROM
(
SELECT User2 UserId FROM friends WHERE User1 = 1
UNION
SELECT User1 UserId FROM friends WHERE User2 = 1
) AS UserAFriends
JOIN
(
SELECT User2 UserId FROM friends WHERE User1 = 2
UNION
SELECT User1 UserId FROM friends WHERE User2 = 2
) AS UserBFriends
ON UserAFriends.UserId = UserBFriends.UserId
```
|
Here's a way using `union all` to combine all friends of user 1 and user 2 and using `count(distinct src) > 1` to only select those that are friends with both users.
```
select friend from (
select 2 src, user1 friend from friends where user2 = 2
union all select 2, user2 from friends where user1 = 2
union all select 1, user1 from friends where user2 = 1
union all select 1, user2 from friends where user1 = 1
) t group by friend
having count(distinct src) > 1
```
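The `UNION ALL` / `COUNT(DISTINCT src)` approach can be checked against the question's exact sample data in SQLite (via Python's `sqlite3`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE friends
                (id INTEGER, user1 INTEGER, user2 INTEGER, friend_status INTEGER)""")
conn.executemany("INSERT INTO friends VALUES (?, ?, ?, ?)",
                 [(1, 1, 2, 1), (2, 1, 3, 1), (3, 2, 3, 1),
                  (4, 1, 4, 1), (5, 2, 4, 1)])
# src tags each row with the user it belongs to; a friend common to both
# users appears under two distinct src values
mutual = conn.execute("""
    SELECT friend FROM (
        SELECT 2 src, user1 friend FROM friends WHERE user2 = 2
        UNION ALL SELECT 2, user2 FROM friends WHERE user1 = 2
        UNION ALL SELECT 1, user1 FROM friends WHERE user2 = 1
        UNION ALL SELECT 1, user2 FROM friends WHERE user1 = 1
    ) t GROUP BY friend
    HAVING COUNT(DISTINCT src) > 1
""").fetchall()
```

On the sample rows this yields users 3 and 4, the two mutual friends expected in the question.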
|
Finding mutual friend sql
|
[
"",
"sql",
"mutual-friendship",
""
] |
I am writing a query to select player bans from another table, but firstly it's very slow, taking around 7-14 seconds, and secondly it's returning invalid rows.
The first query is as follows:
```
SELECT *
FROM sourcebans.sb_bans
WHERE removetype IS NULL
AND removedon IS NULL
AND reason NOT LIKE '%[FragSleuth] Duplicate account%'
AND ip IN(SELECT DISTINCT ip
FROM fragsleuth.history
WHERE trackingid = "ad000c3803b48190aabf382e01b957c9")
OR authid IN(SELECT DISTINCT steamid
FROM fragsleuth.history
WHERE trackingid = "ad000c3803b48190aabf382e01b957c9")
```
The second query is as follows
```
SELECT * FROM `history` WHERE trackingid = "ad000c3803b48190aabf382e01b957c9"
```
And a couple of screenshots to show what I mean:
[First Query](https://i.stack.imgur.com/dAEI3.png) [Second Query](https://i.stack.imgur.com/0EPmF.png)
In screenshot 1 you can see that its returning a row where the removedon and removetype is not null when I asked the query to only return rows with NULL.
I am also afraid that inside the history table there will be duplicate entries for the steamid and ip columns which might make the query slow, is there any way to make the query only select rows with a unique ip or steamid based on the trackingid?
Any help would be much appreciated.
Thanks
Edit: I am overwhelmed by the help. Thanks to @maraca, @Skorpioh and @Adam Silenko, the query time is now less than a second!
|
The query returns rows that are not `NULL` because it is interpreted as
`(... AND ... AND ... ) OR ...` instead of
`... AND ... AND ( ... OR ...)`
So you need to add a braces, also the `DISTINCT` is not needed:
```
SELECT *
FROM sourcebans.sb_bans
WHERE removetype IS NULL
AND removedon IS NULL
AND reason NOT LIKE '%[FragSleuth] Duplicate account%'
AND (ip IN(SELECT ip
FROM fragsleuth.history
WHERE trackingid = "ad000c3803b48190aabf382e01b957c9")
OR authid IN(SELECT steamid
FROM fragsleuth.history
WHERE trackingid = "ad000c3803b48190aabf382e01b957c9"))
```
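The precedence point is easy to demonstrate in SQLite (via Python's `sqlite3`), using bare boolean expressions rather than the question's tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# AND binds tighter than OR: parsed as (0=1 AND 0=1) OR 1=1 -> true
without_parens = conn.execute("SELECT 0 = 1 AND 0 = 1 OR 1 = 1").fetchone()[0]
# With explicit grouping: 0=1 AND (0=1 OR 1=1) -> false
with_parens = conn.execute("SELECT 0 = 1 AND (0 = 1 OR 1 = 1)").fetchone()[0]
```

This is why the original query returned rows that failed the `IS NULL` filters: the trailing `OR` escaped the `AND` chain entirely.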
|
The `AND` has higher priority than `OR`.
You also need indexes on your tables - e.g. add an index to the trackingid field in fragsleuth.history if you don't have one.
You can probably make it faster using one subquery, but I'm not sure about this:
```
SELECT *
FROM sourcebans.sb_bans
WHERE removetype IS NULL
AND removedon IS NULL
AND reason NOT LIKE '%[FragSleuth] Duplicate account%'
AND exists (
SELECT 1 from fragsleuth.history
WHERE trackingid = "ad000c3803b48190aabf382e01b957c9"
        and (ip = sb_bans.ip or steamid = sb_bans.authid) )
```
|
MySQL Query returning invalid rows and very slow
|
[
"",
"mysql",
"sql",
"performance",
""
] |
I honestly have no idea how to give a better title for this :(
Basically I have these 3 tables
```
Table "public.users"
Column | Type | Modifiers
--------+-----------------------+----------------------------------------------------
id | integer | not null default nextval('users_id_seq'::regclass)
name | character varying(40) |
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
Referenced by:
TABLE "comments" CONSTRAINT "comments_user_id_fkey" FOREIGN KEY (user_id) REFERENCES users(id) ON UPDATE CASCADE ON DELETE CASCADE
Table "public.comments"
Column | Type | Modifiers
---------+---------+-------------------------------------------------------
id | integer | not null default nextval('comments_id_seq'::regclass)
user_id | integer |
content | text |
Indexes:
"comments_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"comments_user_id_fkey" FOREIGN KEY (user_id) REFERENCES users(id) ON UPDATE CASCADE ON DELETE CASCADE
Table "public.votes"
Column | Type | Modifiers
------------+---------+----------------------------------------------------
id | integer | not null default nextval('votes_id_seq'::regclass)
up | boolean | default false
comment_id | integer |
Indexes:
"votes_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"votes_comment_id_fkey" FOREIGN KEY (comment_id) REFERENCES comments(id) ON UPDATE CASCADE ON DELETE CASCADE
```
I want to select all users (including those who don't have any comment) and include 3 comments for each user and then for each comment select 2 votes (including those comments that don't have any vote)
What I have so far is the query to select 3 comments for each user
```
SELECT users.id as userId, comments.id as commentId, users.name, comments.content, comments.rn
FROM users
LEFT JOIN (
SELECT *, row_number() OVER (PARTITION BY comments.user_id) as rn FROM comments
) as comments
ON users.id = comments.user_id
WHERE comments.rn <= 3 OR comments.rn IS NULL;
```
|
You have the right idea. Just continue with it:
```
SELECT u.id as userId, c.id as commentId, u.name, c.content, c.rn
FROM users u LEFT JOIN
(SELECT c.*,
ROW_NUMBER() OVER (PARTITION BY c.user_id) as rn
FROM comments c
) c
ON u.id = c.user_id LEFT JOIN
(SELECT v.*,
ROW_NUMBER() OVER (PARTITION BY v.comment_id) as rn
FROM votes v
) v
ON c.id = v.comment_id
WHERE (c.rn <= 3 OR c.rn IS NULL) and
(v.rn <= 2 or v.rn IS NULL);
```
|
Because users to comments and comments to votes both have 1-to-many relationships, doing both joins will give you CxV (3x2=6) rows per user.
Because comments will be duplicated when a comment has more than 1 vote, use `dense_rank()` instead of `row_number()` so that "duplicate" comments will have the same rank and be included in the `rn < 4` criteria
```
select * from (
select *,
dense_rank() over (partition by u.id order by c.id) rn,
dense_rank() over (partition by c.id order by v.id) rn2
from users u
left join comments c on c.user_id = u.id
left join votes v on v.comment_id = c.id
) t where rn < 4 and rn2 < 3
```
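The fan-out that motivates `dense_rank()` can be seen with a tiny SQLite dataset (via Python's `sqlite3`; the rows are made up). One user with two comments, one of which has two votes, yields three joined rows, so the same comment id appears twice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, user_id INTEGER, content TEXT);
    CREATE TABLE votes (id INTEGER PRIMARY KEY, up INTEGER, comment_id INTEGER);
    INSERT INTO users VALUES (1, 'alice');
    INSERT INTO comments VALUES (10, 1, 'first'), (11, 1, 'second');
    INSERT INTO votes VALUES (100, 1, 10), (101, 0, 10), (102, 1, 11);
""")
rows = conn.execute("""
    SELECT u.id, c.id, v.id FROM users u
    LEFT JOIN comments c ON c.user_id = u.id
    LEFT JOIN votes v ON v.comment_id = c.id
""").fetchall()
# comment 10 appears once per vote, which is why ranking comments over
# this joined set needs dense_rank() rather than row_number()
```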
|
LIMIT the results of nested JOIN queries
|
[
"",
"sql",
"postgresql",
""
] |
I am formatting a number which is a date, and I want to display only the day and month part **(e.g. Jan 1, Jan 2, Jan 3, Jan 4…)**, but it is not being formatted and the complete date is displayed.
**Whereas If I use the same wizard with tabular then it is working fine.**
I am using wizard to format data
[](https://i.stack.imgur.com/qqolm.png)
**Is there any other way to format data while working with a matrix?**
|
Try right clicking on the field in your SSRS report and using an expression for it instead.
[](https://i.stack.imgur.com/2cdwh.gif)
|
Alternatively, you could use the `DATENAME` function in SQL Server for that column:
SELECT LEFT(DATENAME(MONTH, Col_Name), 3) + ' ' + DATENAME(DAY, Col_Name)
that is
SELECT LEFT(DATENAME(MONTH, '2016-03-21'), 3) + ' ' + DATENAME(DAY, '2016-03-21')
|
How to format a number in SSRS Report for Matrix?
|
[
"",
"sql",
"sql-server",
"reporting-services",
"ssrs-2008",
"ssrs-2008-r2",
""
] |
I'm looking for a way to group by `EmpNumber` and can't seem to find one.
This is my current query:
```
SELECT EmpNumber, [Match], [NonMatch], [Base], [SRA], [SRS]
from
(select a.empnumber, TT_GB.MAXDate, a.TT_Plan, a.TT_Ratio
from table_1 as a
inner join
(select b.empnumber, b.TT_Plan, MAX(b.TT_Date) as MAXDate
from table_1 as b
group by b.empnumber, b.TT_Plan) TT_GB
ON a.EmpNumber = TT_GB.EmpNumber AND a.TT_Plan = TT_GB.TT_Plan AND a.TT_Date = TT_GB.MAXDate
) M
PIVOT
(
SUM(TT_Ratio)
FOR [TT_Plan] IN ([Match], [NonMatch], [Base], [SRA], [SRS])
) P
```
This is what I have in the table
```
=============================================
EmpNumber TT_Plan TT_Date TT_Ratio
1 Base 2008-05-11 11
1 Base 2015-08-22 12
1 Base 2010-07-09 13
1 Match 2003-01-23 15
1 Match 2000-11-17 14
1 NonMatch 2014-09-22 19
1 NonMatch 2015-02-15 18
1 NonMatch 2009-08-13 17
1 SRA 2005-07-16 2
1 SRS 2006-04-12 3
2 Base 2008-08-08 5
2 Base 2009-09-09 7
2 Base 2010-10-10 4
2 NonMatch 2011-01-01 8
2 NonMatch 2012-12-12 6
2 NonMatch 2007-07-07 9
Desired output
==============
EmpNumber Match NonMatch Base SRA SRS
1 15 18 12 2 3
2 0 6 4 0 0
```
But what I have right now is this, and all I want is do group by EmpNumber as above
```
=============================================================================
EmpNumber Match NonMatch Base SRA SRS
1 15 NULL NULL NULL NULL
1 NULL NULL NULL 2 NULL
1 NULL NULL NULL NULL 3
2 NULL NULL 4 NULL NULL
2 NULL 6 NULL NULL NULL
1 NULL 18 NULL NULL NULL
1 NULL NULL 12 NULL NULL
```
|
I don't think this is the best solution, but you can do it by wrapping the result in a derived table (aliased subquery) like the following:
```
Select t.EmpNumber, Sum(t.Match), Sum(t.NonMatch), Sum(t.Base), Sum(t.SRA), Sum(t.SRS) from
(
SELECT EmpNumber, [Match], [NonMatch], [Base], [SRA], [SRS]
from
(select a.empnumber, TT_GB.MAXDate, a.TT_Plan, a.TT_Ratio
from table_1 as a
inner join
(select b.empnumber, b.TT_Plan, MAX(b.TT_Date) as MAXDate
from table_1 as b
group by b.empnumber, b.TT_Plan) TT_GB
ON a.EmpNumber = TT_GB.EmpNumber AND a.TT_Plan = TT_GB.TT_Plan AND a.TT_Date = TT_GB.MAXDate
) M
PIVOT
(
SUM(TT_Ratio)
FOR [TT_Plan] IN ([Match], [NonMatch], [Base], [SRA], [SRS])
) P
) t group by t.EmpNumber
```
|
As I wrote before:
```
declare @table_1 table ( EmpNumber int, TT_Plan varchar(20), TT_Date date, TT_Ratio int )
insert @table_1
values
(1,'Base','2008-05-11',11)
,(1,'Base','2015-08-22',12)
,(1,'Base','2010-07-09',13)
,(1,'Match','2003-01-23',15)
,(1,'Match','2000-11-17',14)
,(1,'NonMatch','2014-09-22',19)
,(1,'NonMatch','2015-02-15',18)
,(1,'NonMatch','2009-08-13',17)
,(1,'SRA','2005-07-16',2)
,(1,'SRS','2006-04-12',3)
,(2,'Base','2008-08-08',5)
,(2,'Base','2009-09-09',7)
,(2,'Base','2010-10-10',4)
,(2,'NonMatch','2011-01-01',8)
,(2,'NonMatch','2012-12-12',6)
,(2,'NonMatch','2007-07-07',9)
SELECT EmpNumber, [Match], [NonMatch], [Base], [SRA], [SRS]
from
(select a.empnumber,a.TT_Plan, a.TT_Ratio
from @table_1 as a
inner join
(select b.empnumber, b.TT_Plan, MAX(b.TT_Date) as MAXDate
from @table_1 as b
group by b.empnumber, b.TT_Plan) TT_GB
ON a.EmpNumber = TT_GB.EmpNumber AND a.TT_Plan = TT_GB.TT_Plan AND a.TT_Date = TT_GB.MAXDate
) M
PIVOT
(
SUM(TT_Ratio)
FOR [TT_Plan] IN ([Match], [NonMatch], [Base], [SRA], [SRS])
) P
```
And result is
```
EmpNumber Match NonMatch Base SRA SRS
----------- ----------- ----------- ----------- ----------- -----------
1 15 18 12 2 3
2 NULL 6 4 NULL NULL
```
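Conditional aggregation is a common `PIVOT`-free alternative that also gets the 0s (rather than NULLs) from the desired output for free. A SQLite sketch (via Python's `sqlite3`), under the assumption that the latest-date row per plan has already been picked out, as the inner join above does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t
                (EmpNumber INTEGER, TT_Plan TEXT, TT_Date TEXT, TT_Ratio INTEGER)""")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 'Base', '2015-08-22', 12), (1, 'Match', '2003-01-23', 15),
    (2, 'Base', '2010-10-10', 4), (2, 'NonMatch', '2012-12-12', 6)])
# One output row per employee; ELSE 0 yields 0 instead of NULL
rows = conn.execute("""
    SELECT EmpNumber,
           SUM(CASE WHEN TT_Plan = 'Match'    THEN TT_Ratio ELSE 0 END) AS "Match",
           SUM(CASE WHEN TT_Plan = 'NonMatch' THEN TT_Ratio ELSE 0 END) AS "NonMatch",
           SUM(CASE WHEN TT_Plan = 'Base'     THEN TT_Ratio ELSE 0 END) AS "Base"
    FROM t GROUP BY EmpNumber
""").fetchall()
```

The same `SUM(CASE ...)` pattern works in SQL Server and grows one `CASE` branch per pivoted column.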
|
GROUP BY after you get the result from PIVOT
|
[
"",
"sql",
"sql-server",
""
] |
I've been using this for *years*, so it is high time to understand it fully. Suppose a query like this:
```
SELECT
*
FROM a
LEFT JOIN b ON foo...
LEFT JOIN c ON bar...
```
The [documentation](http://www.postgresql.org/docs/current/static/queries-table-expressions.html) tells us that
> `T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 ON boolean_expression`
>
> `LEFT OUTER JOIN`
>
> First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition with any row in T2, a joined row is added with null values in columns of T2. Thus, the joined table always has at least one row for each row in T1.
The question is simple: what is `T1` in this case? Is it `a`? Or is it `a LEFT JOIN b ON foo`? (or, is it the same?)
|
A `FROM` clause parses the conditions from left to right (unless overridden by parentheses). So:
```
FROM a
LEFT JOIN b
ON foo...
LEFT JOIN c
ON bar...
```
is parsed as:
```
FROM (
a
LEFT JOIN b
ON foo...
)
LEFT JOIN c
ON bar...
```
This is explained in the [documentation](http://www.postgresql.org/docs/current/static/sql-select.html) under the `join-type` section of the `FROM` clause:
> Use parentheses if necessary to determine the order of nesting. In the
> absence of parentheses, `JOIN`s nest left-to-right. In any case `JOIN`
> binds more tightly than the commas separating `FROM`-list items.
As a consequence, a series of `LEFT JOIN`s keeps all records in the first mentioned table. This is a convenience.
Note that the parsing of the `FROM` clause is the same regardless of the join type.
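The "all rows of the first table survive" consequence is easy to see in SQLite (via Python's `sqlite3`), with invented single-column tables `a`, `b`, and an empty `c`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    CREATE TABLE c (id INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1);
    -- c stays empty
""")
rows = conn.execute("""
    SELECT a.id, b.id, c.id FROM a
    LEFT JOIN b ON b.id = a.id
    LEFT JOIN c ON c.id = b.id
""").fetchall()
# every row of a survives, padded with NULLs where b or c has no match
```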
|
Here you have a multiple-join operation.
SQL is a language where you describe the results to get, not how to get them. The optimizer will decide which join to do first, depending on what it thinks will be most efficient.
You can read some information here:
<https://community.oracle.com/thread/2428634?tstart=0>
I think it works the same for PostgreSQL.
|
Multiple LEFT JOINs - what is the "left" table?
|
[
"",
"sql",
"postgresql",
"join",
"outer-join",
""
] |
I need a query to find a film featuring both **Keira Knightly** and **Carey Mulligan**.
How would I go about finding the movies in which both of these actors have played a role? I have attempted it myself and it resulted in repeating rows - I think possibly because I wasn't using inner joins. Could someone please lend a hand and show me the correct way to go about this query?
Here is the table:
```
drop table film_director;
drop table film_actor;
drop table film;
drop table studio;
drop table actor;
drop table director;
CREATE TABLE studio(
studio_ID NUMBER NOT NULL,
studio_Name VARCHAR2(30),
PRIMARY KEY(studio_ID));
CREATE TABLE film(
film_ID NUMBER NOT NULL,
studio_ID NUMBER NOT NULL,
genre VARCHAR2(30),
genre_ID NUMBER(1),
film_Len NUMBER(3),
film_Title VARCHAR2(30) NOT NULL,
year_Released NUMBER NOT NULL,
PRIMARY KEY(film_ID),
FOREIGN KEY (studio_ID) REFERENCES studio);
CREATE TABLE director(
director_ID NUMBER NOT NULL,
director_fname VARCHAR2(30),
director_lname VARCHAR2(30),
PRIMARY KEY(director_ID));
CREATE TABLE actor(
actor_ID NUMBER NOT NULL,
actor_fname VARCHAR2(15),
actor_lname VARCHAR2(15),
PRIMARY KEY(actor_ID));
CREATE TABLE film_actor(
film_ID NUMBER NOT NULL,
actor_ID NUMBER NOT NULL,
PRIMARY KEY(film_ID, actor_ID),
FOREIGN KEY(film_ID) REFERENCES film(film_ID),
FOREIGN KEY(actor_ID) REFERENCES actor(actor_ID));
CREATE TABLE film_director(
film_ID NUMBER NOT NULL,
director_ID NUMBER NOT NULL,
PRIMARY KEY(film_ID, director_ID),
FOREIGN KEY(film_ID) REFERENCES film(film_ID),
FOREIGN KEY(director_ID) REFERENCES director(director_ID));
INSERT INTO studio (studio_ID, studio_Name) VALUES (1, 'Paramount');
INSERT INTO studio (studio_ID, studio_Name) VALUES (2, 'Warner Bros');
INSERT INTO studio (studio_ID, studio_Name) VALUES (3, 'Film4');
INSERT INTO studio (studio_ID, studio_Name) VALUES (4, 'Working Title Films');
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (1, 1, 'Comedy', 1, 180, 'The Wolf Of Wall Street', 2013);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (2, 2, 'Romance', 2, 143, 'The Great Gatsby', 2013);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (3, 3, 'Science Fiction', 3, 103, 'Never Let Me Go', 2008);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (4, 4, 'Romance', 4, 127, 'Pride and Prejudice', 2005);
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (1, 'Martin', 'Scorcese');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (2, 'Baz', 'Luhrmann');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (3, 'Mark', 'Romanek');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (4, 'Joe', 'Wright');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (1, 'Matthew', 'McConnaughy');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (2, 'Leonardo', 'DiCaprio');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (3, 'Margot', 'Robbie');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (4, 'Joanna', 'Lumley');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (5, 'Carey', 'Mulligan');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (6, 'Tobey', 'Maguire');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (7, 'Joel', 'Edgerton');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (8, 'Keira', 'Knightly');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (9, 'Andrew', 'Garfield');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (10, 'Sally', 'Hawkins');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (11, 'Judi', 'Dench');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (12, 'Matthew', 'Macfadyen');
INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 1);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 2);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 3);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 4);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 2);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 5);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 6);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 7);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 5);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 8);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 9);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 10);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 5);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 8);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 11);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 12);
INSERT INTO film_director (film_ID, director_ID) VALUES (1,1);
INSERT INTO film_director (film_ID, director_ID) VALUES (2,2);
INSERT INTO film_director (film_ID, director_ID) VALUES (3,3);
INSERT INTO film_director (film_ID, director_ID) VALUES (4,4);
```
|
Does your query actually return correct results? I doubt it.
You can use `EXISTS()`:
```
SELECT f.film_title AS "Mulligan and Knightly"
FROM Film f
INNER JOIN film_actor x
ON(f.film_id = x.film_id)
INNER JOIN actor a
ON(x.actor_id = a.actor_id)
WHERE a.actor_id = 8
AND EXISTS(select 1 FROM film_actor s
where s.film_id = x.film_id
and s.actor_id = 5)
```
This can also be done with a simple group by and a having clause:
```
SELECT f.film_title AS "Mulligan and Knightly"
FROM Film f
INNER JOIN film_actor x
ON(x.film_id = f.film_id)
WHERE x.actor_id in(5,8)
GROUP BY f.film_title
HAVING COUNT(*) = 2
```
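Both variants can be checked side by side. Here is a minimal runnable sketch using Python's `sqlite3` stdlib, with a reduced version of the schema above (the sample film titles are assumptions, not data from the question):

```python
import sqlite3

# Reduced in-memory sketch of the film/actor schema above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE film (film_id INTEGER PRIMARY KEY, film_title TEXT);
CREATE TABLE film_actor (film_id INTEGER, actor_id INTEGER);
INSERT INTO film VALUES (3, 'Never Let Me Go'), (4, 'Pride and Prejudice'), (1, 'Other');
INSERT INTO film_actor VALUES (3, 5), (3, 8), (4, 5), (4, 8), (1, 5);
""")

# EXISTS variant: keep Knightly's films (actor 8) where Mulligan (actor 5) also appears.
exists_rows = con.execute("""
    SELECT f.film_title FROM film f
    JOIN film_actor x ON x.film_id = f.film_id
    WHERE x.actor_id = 8
      AND EXISTS (SELECT 1 FROM film_actor s
                  WHERE s.film_id = x.film_id AND s.actor_id = 5)
    ORDER BY f.film_title
""").fetchall()

# GROUP BY / HAVING variant: both ids present means two matching rows per film.
having_rows = con.execute("""
    SELECT f.film_title FROM film f
    JOIN film_actor x ON x.film_id = f.film_id
    WHERE x.actor_id IN (5, 8)
    GROUP BY f.film_title
    HAVING COUNT(*) = 2
    ORDER BY f.film_title
""").fetchall()

print(exists_rows == having_rows)  # True: both find the same two films
```

Note the `HAVING COUNT(*) = 2` variant assumes `film_actor` has no duplicate rows; with duplicates you would count distinct actor ids instead.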
|
Perhaps something like this will do it?
```
select f.film_title as "Mulligan and Knightly"
from film f
inner join film_actor fa on f.film_id = fa.film_id
inner join actor a on fa.actor_id = a.actor_id
where a.actor_lname = 'Knightly'
intersect
select f.film_title as "Mulligan and Knightly"
from film f
inner join film_actor fa on f.film_id = fa.film_id
inner join actor a on fa.actor_id = a.actor_id
where a.actor_lname = 'Mulligan'
```
|
Query for multiple conditions
|
[
"",
"sql",
"oracle",
"inner-join",
""
] |
Is it possible to use a CTE to perform multiple UPDATE commands?
```
With Query AS
(
SELECT
Table_One.FOO AS FOO,
Table_Two.BAR AS BAR
FROM FOO
JOIN BAR ON FOO.ID = BAR.ID
)
UPDATE Query
SET FOO = 1;
UPDATE Query
SET BAR = 2;
```
In the example Query isn't available anymore on the second UPDATE command.
EDIT:
My working code looks like this:
```
With Query AS
(
SELECT
Table_One.FOO AS FOO,
Table_Two.BAR AS BAR
FROM FOO
JOIN BAR ON FOO.ID = BAR.ID
)
UPDATE Query
SET FOO = 1;
With Query AS
(
SELECT
Table_One.FOO AS FOO,
Table_Two.BAR AS BAR
FROM FOO
JOIN BAR ON FOO.ID = BAR.ID
)
UPDATE Query
SET BAR = 2;
```
Because you can't update two tables with one UPDATE command, I need two UPDATE commands.
The problem right now is that if I need to change the SELECT in the CTE, I have to do it in two locations in the code.
|
A SQL Server `UPDATE` only allows you to update a single table. As buried in the [documentation](https://msdn.microsoft.com/en-us/library/ms177523.aspx):
> The following example updates rows in a table by specifying a view as
> the target object. The view definition references multiple tables,
> however, the `UPDATE` statement succeeds because it references columns
> from only one of the underlying tables. **The `UPDATE` statement would
> fail if columns from both tables were specified.**
Although views and CTEs are not exactly the same thing, they often follow similar rules. So, this is also explained in the section on [updatable views](https://msdn.microsoft.com/en-us/library/ms187956.aspx):
> Any modifications, including `UPDATE`, `INSERT`, and `DELETE`
> statements, must reference columns from only one base table.
You can effectively do what you want by issuing two updates and wrapping them in a single transaction.
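That suggestion can be sketched with Python's `sqlite3` stdlib standing in for SQL Server (the table and column names below are invented): each UPDATE still touches a single table, but both commit atomically inside one transaction.

```python
import sqlite3

# Two single-table UPDATEs wrapped in one transaction, standing in for the
# multi-table CTE update that SQL Server disallows.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_one (id INTEGER PRIMARY KEY, foo INTEGER);
CREATE TABLE table_two (id INTEGER PRIMARY KEY, bar INTEGER);
INSERT INTO table_one VALUES (1, 0), (2, 0);
INSERT INTO table_two VALUES (1, 0), (3, 0);
""")

with con:  # one transaction: both updates commit together or roll back together
    con.execute("""UPDATE table_one SET foo = 1
                   WHERE id IN (SELECT t1.id FROM table_one t1
                                JOIN table_two t2 ON t1.id = t2.id)""")
    con.execute("""UPDATE table_two SET bar = 2
                   WHERE id IN (SELECT t1.id FROM table_one t1
                                JOIN table_two t2 ON t1.id = t2.id)""")

print(con.execute("SELECT foo FROM table_one WHERE id = 1").fetchone())  # (1,)
print(con.execute("SELECT bar FROM table_two WHERE id = 1").fetchone())  # (2,)
```

Only the row present in both tables (id 1) is updated; rows without a join partner are left untouched.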
|
You can insert your `CTE` result into a `@Table` variable and use this table wherever required in the code block. (You can `join` this table with the actual table to perform the `UPDATE/INSERT/DELETE`, etc.) You can't use the same CTE in multiple statements, because a CTE is part of the subsequent statement only.
|
USING Common Table Expression and perform multiple update commands
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a dataset in Excel with a few thousand ids. In a database I need a few columns to match these ids, but some ids are listed twice in the Excel list (and they need to be there twice). I'm trying to write a query with an `IN` statement, but it automatically filters the duplicates. I want the duplicates as well; otherwise I need to manually rearrange the data merge between the Excel and SQL results.
Is there any way to do something like
```
SELECT *
FROM table
WHERE id IN (
.. list of thousands ids
)
```
To also get the duplicates, without using `UNION ALL`, to prevent firing thousands of separate queries at the database?
|
You need to use a `left join` if you want to keep the duplicates. If the ordering is important, then you should include that information as well.
Here is one method:
```
select t.*
from (values (1, id1), (2, id2), . . .
) ids(ordering, id) left join
table t
on t.id = ids.id
order by ids.ordering;
```
An alternative is to load the ids into a temporary table with an identity column to capture the ordering:
```
-- Create the table
create table #ids (
    ordering int identity(1, 1) primary key,
    id int
);
-- Insert the ids
insert into #ids (id)
select @id;
-- Use them in the query
select t.*
from #ids ids left join
table t
on t.id = ids.id
order by ids.ordering;
```
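The temp-table variant can be sketched with Python's `sqlite3` stdlib (table names and sample data below are assumptions): the `ordering` column preserves both the duplicates in the id list and their original order.

```python
import sqlite3

# Load the id list (with an ordering column) and LEFT JOIN, so duplicates in
# the list yield duplicate result rows, in list order.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 'a'), (2, 'b'), (3, 'c')])

ids = [2, 1, 2, 99]  # list from Excel: contains a duplicate and a missing id
con.execute("CREATE TEMP TABLE ids (ordering INTEGER PRIMARY KEY, id INTEGER)")
con.executemany("INSERT INTO ids (ordering, id) VALUES (?, ?)", list(enumerate(ids)))

rows = con.execute("""
    SELECT t.val FROM ids LEFT JOIN t ON t.id = ids.id
    ORDER BY ids.ordering
""").fetchall()
print(rows)  # [('b',), ('a',), ('b',), (None,)] -- duplicates and order kept
```

The `LEFT JOIN` also surfaces ids with no match as `NULL` rows, which an `IN` filter would silently drop.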
|
If I understand this correctly, this is exactly the way `IN` is supposed to work...
```
DECLARE @tbl TABLE(value INT, content VARCHAR(100));
WITH RunningNumber AS
(
    SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS Nmbr
    FROM sys.objects
)
INSERT INTO @tbl
SELECT Nmbr,'Content for ' + CAST(Nmbr AS VARCHAR(100))
FROM RunningNumber;
--This ...
SELECT * FROM @tbl WHERE value IN(1,3,5);
-- ... is the same as this:
SELECT * FROM @tbl WHERE value IN(1,1,1,1,3,3,5,1,3,5);
```
If you want to combine two result-sets you have to join them...
|
T-SQL: select * from table where column in (...) with duplicates without using union all
|
[
"",
"sql",
"sql-server",
"t-sql",
"duplicates",
""
] |
I'm trying to select a column from another table like I used to do in **MSSQL**:
```
select * , Date = (select top 1 Date from [dbo].TableB where status = 1 order by Date desc)
from [dbo].TableA
```
**How can I do that in PostgreSQL?**
Additional sample data:
> TableA
```
Names
Richards
Marcos
Luke
Matthew
John
```
> TableB
```
Date Status
2016-01-01 1
2016-01-02 0
2016-01-03 1
2016-01-04 1
2016-01-05 1
```
Expected Output:
```
Name Date
Richards 2016-01-02
Marcos 2016-01-02
Luke 2016-01-02
Matthew 2016-01-02
John 2016-01-02
```
Thanks!
|
`Date = (...)` is invalid (standard) SQL and won't work in Postgres (or any other DBMS except for SQL Server)
A column alias is defined using `AS ...` in SQL (and Postgres). Postgres also doesn't have `top`. It uses `limit`.
Using square brackets in an identifier is also not allowed in SQL. So `[dbo]` needs to become `dbo` or `"dbo"` [depending on how you created the schema](http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS).
```
select a.*,
(select date
from dbo.tableb as b
where b.status = 1
order by b.date desc
limit 1) as date
from dbo.tablea a
```
`date` is a reserved word and should not be used as an identifier (column name)
If you want to use standard ANSI SQL, you can also use `fetch first 1 row only` instead of `limit 1`.
Another option would be to use `max()` instead of the limit in the sub-select which doesn't need the limit at all:
```
select a.*,
(select max(date)
from dbo.tableb as b
where b.status = 1) as date
from dbo.tablea a
```
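A quick check of the `max()` variant using Python's `sqlite3` stdlib (the column `d` stands in for the reserved word `date`; the sample rows follow the question):

```python
import sqlite3

# Scalar subquery: every row of tablea gets the max qualifying date from tableb.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tablea (name TEXT);
CREATE TABLE tableb (d TEXT, status INTEGER);
INSERT INTO tablea VALUES ('Richards'), ('Marcos');
INSERT INTO tableb VALUES ('2016-01-01', 1), ('2016-01-02', 0), ('2016-01-05', 1);
""")
rows = con.execute("""
    SELECT a.name,
           (SELECT max(d) FROM tableb b WHERE b.status = 1) AS d
    FROM tablea a
""").fetchall()
print(rows)  # [('Richards', '2016-01-05'), ('Marcos', '2016-01-05')]
```

Per the query's own logic (latest date with status 1), each row gets `2016-01-05` here, since the scalar subquery is evaluated once and repeated for every row of `tablea`.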
|
I'm not sure if this is the correct syntax, but did you try this:
```
select * , (select NewColumn from [dbo].TableB) as NewColumn
from [dbo].TableA
```
I hope it helps.
|
PostgreSQL: How to add Column from other table in select statement?
|
[
"",
"sql",
"postgresql",
""
] |
I'm continuing to learn `SQL` and stored procedures, and I'm having trouble: `@feedId` returns only one id, but there are, for example, 20 of them. How do I return all of the ids?
```
CREATE PROC spGetItemsByUser
@userName NVARCHAR(50)
AS BEGIN
DECLARE @userId INT,
@feedId INT;
SELECT @userId = ID
FROM users
WHERE name = @userName
SELECT @feedId = feedid
FROM userstofeed
WHERE userid = @userId
SELECT *
FROM feed
WHERE ID = @feedId
END
```
|
You have to use *table* variables:
```
declare @userId table (uid int)
declare @feedId table (id int)
insert into @userid(uid)
select id
from Users
where name = @userName
insert into @feedid(id)
select feedid
from userstofeed
where userid in (select uid from @userId)
select * from feed where id in (select id from @feedId)
```
|
Or just use a simple, plain join:
```
create proc spGetItemsByUser
@userName nvarchar(50)
as
begin
select * from Users usr
join UsersToFeed utf on usr.id = utf.userID
join feed fee on utf.feedid = fee.id
where usr.name = @userName
end
```
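The join approach can be sketched as a plain parameterised query with Python's `sqlite3` stdlib (a stand-in for the stored procedure; table and column names follow the question, the sample data is invented):

```python
import sqlite3

# One joined query returns every feed row for the user, instead of trying to
# hold multiple ids in a single scalar variable.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE userstofeed (userid INTEGER, feedid INTEGER);
CREATE TABLE feed (id INTEGER PRIMARY KEY, title TEXT);
INSERT INTO users VALUES (1, 'alice');
INSERT INTO userstofeed VALUES (1, 10), (1, 11);
INSERT INTO feed VALUES (10, 'news'), (11, 'sport');
""")
rows = con.execute("""
    SELECT fee.id, fee.title
    FROM users usr
    JOIN userstofeed utf ON usr.id = utf.userid
    JOIN feed fee ON utf.feedid = fee.id
    WHERE usr.name = ?
    ORDER BY fee.id
""", ("alice",)).fetchall()
print(rows)  # [(10, 'news'), (11, 'sport')] -- all feed ids, not just one
```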
|
Return all ids from a table in a stored procedure
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I am using an Oracle SQL DB and I am trying to count the number of terms starting with a given letter in a dictionary.
Here is my query :
```
SELECT Substr(Lower(Dict.Term),0,1) AS Initialchar,
Count(Lower(Dict.Term))
FROM Dict
GROUP BY Substr(Lower(Dict.Term),0,1)
ORDER BY Substr(Lower(Dict.Term),0,1);
```
This query works as expected, but I'm not happy that I have to repeat the long `Substr(Lower(Dict.Term),0,1)` in the GROUP BY and ORDER BY clauses. Is there any way to reuse the expression defined in the SELECT part?
Thanks
|
You can use a subquery. Because Oracle follows the SQL standard, `substr()` starts counting at 1. Although Oracle does explicitly allow 0 (["If position is 0, then it is treated as 1"](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions162.htm)), I find it misleading because "0" and "1" refer to the same position.
So:
```
select first_letter, count(*)
from (select d.*, substr(lower(d.term), 1, 1) as first_letter
from dict d
) d
group by first_letter
order by first_letter;
```
|
I prefer [subquery factoring](http://docs.oracle.com/cd/E11882_01/timesten.112/e21642/state.htm#TTSQL503) for this purpose.
```
with init as (
select substr(lower(d.term), 1, 1) as Initialchar
from dict d)
select Initialchar, count(*)
from init
group by Initialchar
order by Initialchar;
```
Despite opinions to the contrary, IMO this makes the query much clearer and defines a natural order, especially when using more subqueries.
I'm not aware of any performance caveats, but there are some limitations; for example, it is not possible to nest a `with` clause within another `with` clause: `ORA-32034: unsupported use of WITH clause`.
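The subquery-factoring idea also works in SQLite, so it can be sketched with Python's `sqlite3` stdlib (the sample terms are invented):

```python
import sqlite3

# Compute the expression once in a CTE so the outer GROUP BY / ORDER BY can
# reuse the alias instead of repeating substr(lower(...), 1, 1).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dict (term TEXT)")
con.executemany("INSERT INTO dict VALUES (?)",
                [("Apple",), ("avocado",), ("Banana",)])
rows = con.execute("""
    WITH init AS (
        SELECT substr(lower(term), 1, 1) AS initialchar FROM dict
    )
    SELECT initialchar, count(*) FROM init
    GROUP BY initialchar
    ORDER BY initialchar
""").fetchall()
print(rows)  # [('a', 2), ('b', 1)]
```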
|
Using part of the select clause without rewriting it
|
[
"",
"sql",
"oracle11g",
""
] |