| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have two separate databases (MySQL and PostgreSQL) that maintain different data sets for different departments in my organization; this can't be changed. I need to connect to the first DB with a DBAPI in Python to get a list of `symbols` or `ids`, then request the matching set from the other and operate on it.
(I've spent a lot of time on this approach, and it makes sense because of other components in my architecture, so unless there is a *much better* alternative, I'd like to stick with this method.)
```
CREATE TABLE foo (fooid int, foosubid int, fooname text);
INSERT INTO foo VALUES (1, 1, 'Joe');
INSERT INTO foo VALUES (1, 2, 'Ed');
INSERT INTO foo VALUES (2, 1, 'Mary');
CREATE FUNCTION get_results(text[]) RETURNS SETOF record AS $$
SELECT fooname, fooid, foosubid FROM foo WHERE fooname IN $1;
$$ LANGUAGE SQL;
```
In reality my SQL is much more complicated, but I think this method completely describes the purpose. **Can I pass in an arbitrary length parameter into a stored procedure or user defined function and return a result set?**
I would like to call the function like:
```
SELECT * FROM get_results(('Joe', 'Ed'));
SELECT * FROM get_results(('Joe', 'Mary'));
SELECT * FROM get_results(('Ed'));
```
I believe using the `IN` and passing these parameters (if it's possible) would give me the same (or comparable) performance as a `JOIN`. For my current use case the symbols won't exceed 750-1000 'names', but if performance is an issue here I'd like to know why, as well.
|
Use `RETURNS TABLE` instead of `RETURNS SETOF record`. This will simplify the function calls.
You cannot use the `IN` operator that way. Use `ANY` instead.
```
CREATE FUNCTION get_results(text[])
RETURNS TABLE (fooname text, fooid int, foosubid int)
AS $$
SELECT fooname, fooid, foosubid
FROM foo
WHERE fooname = ANY($1);
$$ LANGUAGE SQL;
SELECT * FROM get_results(ARRAY['Joe']);
fooname | fooid | foosubid
---------+-------+----------
Joe | 1 | 1
(1 row)
```
---
If the function returns `SETOF record`, you have to supply a column definition list in every function call:
```
SELECT *
FROM get_results(ARRAY['Joe']) AS (fooname text, fooid int, foosubid int)
```
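Since the asker will call this from a Python DBAPI anyway, another common option is to build the placeholder list client-side instead of (or in addition to) the server-side function. A minimal sketch, using the stdlib `sqlite3` module as a stand-in for the real PostgreSQL/MySQL driver (only the paramstyle, `?` vs `%s`, would differ):

```python
import sqlite3

# Client-side alternative to a server-side function: expand one "?" per name.
# sqlite3 stands in here for the real DBAPI driver (psycopg2, MySQLdb, ...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (fooid INT, foosubid INT, fooname TEXT)")
conn.executemany("INSERT INTO foo VALUES (?, ?, ?)",
                 [(1, 1, "Joe"), (1, 2, "Ed"), (2, 1, "Mary")])

names = ["Joe", "Ed"]  # arbitrary-length list fetched from the first database
placeholders = ", ".join("?" for _ in names)
rows = conn.execute(
    f"SELECT fooname, fooid, foosubid FROM foo WHERE fooname IN ({placeholders})",
    names,
).fetchall()
print(rows)  # [('Joe', 1, 1), ('Ed', 1, 2)]
```

The values are still passed as bound parameters; only the number of placeholders is built dynamically, so there is no injection risk from the names themselves.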
|
## Row vs Array constructor
`('Joe', 'Ed')` is equivalent to `ROW('Joe', 'Ed')` and creates a new row.
But your function accepts an array. To create one, call it with an Array constructor:
```
SELECT * FROM get_results(ARRAY['Joe', 'Ed']);
```
## Variadic functions
You can declare your input parameter as `VARIADIC` like so
```
CREATE FUNCTION get_results(VARIADIC text[]) RETURNS SETOF record AS $$
SELECT fooname, fooid, foosubid FROM foo WHERE fooname = ANY($1);
$$ LANGUAGE SQL;
```
It accepts a variable number of arguments. You can call it like this:
```
SELECT * FROM get_results('Joe', 'Ed');
```
More on functions with variable length arguments: <http://www.postgresql.org/docs/9.4/static/xfunc-sql.html>
|
PostgreSQL - Passing Array to Stored Function
|
[
"",
"sql",
"postgresql",
"stored-procedures",
""
] |
My table contains the details like with two fields:
```
ID DisplayName
1 Editor
1 Reviewer
7 EIC
7 Editor
7 Reviewer
7 Editor
19 EIC
19 Editor
19 Reviewer
```
I want to get the unique details per ID, with the DisplayName values combined, like
`1 Editor,Reviewer 7 EIC,Editor,Reviewer`
without the duplicate value for ID 7.
How do I combine the DisplayName details? How should the query be written?
|
In *SQL Server* you can do it as follows:
**QUERY**
```
SELECT id, displayname =
STUFF((SELECT DISTINCT ', ' + displayname
FROM #t b
WHERE b.id = a.id
FOR XML PATH('')), 1, 2, '')
FROM #t a
GROUP BY id
```
**TEST DATA**
```
create table #t
(
id int,
displayname nvarchar(max)
)
insert into #t values
(1 ,'Editor')
,(1 ,'Reviewer')
,(7 ,'EIC')
,(7 ,'Editor')
,(7 ,'Reviewer')
,(7 ,'Editor')
,(19,'EIC')
,(19,'Editor')
,(19,'Reviewer')
```
**OUTPUT**
```
id displayname
1 Editor, Reviewer
7 Editor, EIC, Reviewer
19 Editor, EIC, Reviewer
```
|
**SQL Server 2017+ and SQL Azure: STRING\_AGG**
Starting with SQL Server 2017, we can finally concatenate across rows without having to resort to variables or XML witchery.
[STRING\_AGG (Transact-SQL)](https://learn.microsoft.com/en-us/sql/t-sql/functions/string-agg-transact-sql?view=sql-server-ver15)
```
SELECT ID, STRING_AGG(DisplayName, ', ') AS DisplayNames
FROM TableName
GROUP BY ID
```
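The same dedup-and-concatenate result can be previewed with SQLite's `group_concat`, its rough equivalent of `STRING_AGG`. A sketch on the question's data; duplicates are removed in a subquery first, since ID 7 lists 'Editor' twice:

```python
import sqlite3

# STRING_AGG is SQL Server 2017+; SQLite's group_concat shows the same idea.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INT, DisplayName TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "Editor"), (1, "Reviewer"), (7, "EIC"), (7, "Editor"),
    (7, "Reviewer"), (7, "Editor"), (19, "EIC"), (19, "Editor"), (19, "Reviewer"),
])
rows = conn.execute("""
    SELECT ID, group_concat(DisplayName, ', ')
    FROM (SELECT DISTINCT ID, DisplayName FROM t)  -- dedup before concatenating
    GROUP BY ID
""").fetchall()
print(rows)
```

Note the concatenation order within each group is not guaranteed; add an `ORDER BY` inside `STRING_AGG(...) WITHIN GROUP` in T-SQL if order matters.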
|
How to concatenate many rows with same id in sql?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with two columns, UserId and FirstName.
What I want to do is fetch the FirstName of the row with the maximum UserId.
My novice attempt:
```
select FirstName from tblUsers where MAX(UserId)=@UserId
```
Which is showing the error of:
> An aggregate may not appear in the WHERE clause unless it is in a
> subquery contained in a HAVING clause or a select list, and the column
> being aggregated is an outer reference.
A query without @UserId is also welcome
|
Try using `HAVING`:
```
select FirstName
from tblUsers
group by FirstName
having MAX(UserId)=@UserId
```
**UPDATE 1**
If you want only the one `FirstName` that has the highest `UserId`, you can use `TOP 1`:
```
select top 1 FirstName
from tblUsers
order by UserId desc
```
**UPDATE 2**
I don't recommend this solution, but if you want (for any reason) to use `MAX(UserId)`, you can do something like:
```
select FirstName
from tblUsers
group by FirstName
having max(UserId ) = (select max(UserId) from tblUsers)
```
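The `TOP 1` pattern maps to `LIMIT` in most other databases. A quick sketch with `sqlite3` and made-up rows:

```python
import sqlite3

# T-SQL's "SELECT TOP 1 ... ORDER BY UserId DESC" maps to LIMIT 1 in
# SQLite/MySQL/PostgreSQL; same idea, illustrated with made-up data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblUsers (UserId INT, FirstName TEXT)")
conn.executemany("INSERT INTO tblUsers VALUES (?, ?)",
                 [(1, "Ann"), (2, "Bob"), (3, "Cem")])
row = conn.execute(
    "SELECT FirstName FROM tblUsers ORDER BY UserId DESC LIMIT 1"
).fetchone()
print(row)  # ('Cem',)
```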
|
```
select FirstName from tblUsers group by FirstName having MAX(UserId)=@UserId
```
HAVING is like a where clause for aggregate expressions.
But that doesn't seem to make sense to me. You'll only get a row if @UserId is the maximum user Id in the table, so I don't quite understand what you want.
|
Selecting a data from table while fetching the MAX value of a column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm trying to select the amount of articles each author has on my table.
Here's what I got so far:
```
SELECT
(SELECT COUNT(*) FROM tb_artigos as b WHERE a.id_autores = b.id_autores) as total
FROM tb_autores as a
GROUP BY
total
ORDER BY
total desc
```
I guess it's wrong because it only returns 19 rows and I have 24 authors. Another problem is that I need the id of each author, and with the above query I only get the total value.
Anyone can help me with that please?
Edit: fixing sql code
|
Normally, this would be done using an explicit `JOIN`. However, a correlated subquery is fine. The problem is your `GROUP BY` clause. The correct `GROUP BY` is on the author id:
```
SELECT a.id_autores,
(SELECT COUNT(*) FROM tb_artigos as b WHERE a.id_autores = b.id_autores) as total
FROM tb_autores a
GROUP BY a.id_autores
ORDER BY total desc;
```
I would suggest you compare the results of this query to your query and try to understand the difference between the two result sets (apart from the author id on each row).
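Both the correlated subquery above and an outer join produce one row per author. A runnable sketch of the join form (in SQLite, with made-up tables), counting a right-hand column so authors with no articles show 0:

```python
import sqlite3

# Note COUNT(b.id_autores), not COUNT(*): with a LEFT JOIN, COUNT(*) would
# count the NULL-extended row and report 1 for authors with no articles.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tb_autores (id_autores INT);
    CREATE TABLE tb_artigos (id_autores INT);
    INSERT INTO tb_autores VALUES (1), (2), (3);
    INSERT INTO tb_artigos VALUES (1), (1), (2);
""")
rows = conn.execute("""
    SELECT a.id_autores, COUNT(b.id_autores) AS total
    FROM tb_autores a
    LEFT JOIN tb_artigos b ON a.id_autores = b.id_autores
    GROUP BY a.id_autores
    ORDER BY total DESC
""").fetchall()
print(rows)  # [(1, 2), (2, 1), (3, 0)]
```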
|
Some authors may not have any articles, which means a plain inner `JOIN` would drop them. Use a `LEFT OUTER JOIN` (and count a column from the right-hand table, so authors without articles get 0 instead of 1) and all 24 authors will appear:
```
SELECT tb_autores.id_autores, COUNT(tb_artigos.id_autores)
FROM tb_autores
LEFT OUTER JOIN tb_artigos
    ON tb_autores.id_autores = tb_artigos.id_autores
GROUP BY tb_autores.id_autores;
```
This is also an efficient and straightforward way to write the query.
|
Trying to get the number of posts from authors with a subquery
|
[
"",
"mysql",
"sql",
""
] |
Say I have a meeting room named niagara. I want to find who occupied this room in a given start and end time range. The table name is "niagara". Let's just keep the search to today.
```
Person InTime OutTime
A 9AM 1PM
B 10AM 12PM
C 10:25AM 1:30PM
D 9AM 9:00 PM
E 12:20PM 5PM
F 10:45 AM 11:30 PM
```
Give the list of persons who occupied the room between 10:30 AM and 12:15 PM.
The expected answer is: A, B, C, D and F.
How can this be done?
I tried
```
SELECT PERSON
FROM NIAGARA
WHERE (IN_TIME > START_TIME AND OUT_TIME < END_TIME)
OR (IN_TIME < START_TIME AND OUT_TIME > END_TIME)
```
BTW, I was asked this in a job interview, which is why I'm trying to learn the answer.
|
The basic logic is that someone is in the room if the `in_time` is less than the period end and the `out_time` is after the period start. So, that would be:
```
SELECT PERSON
FROM NIAGARA
WHERE OUT_TIME > START_TIME AND IN_TIME < END_TIME;
```
How you actually express this in Oracle depends on how the values are stored. As phrased, it seems like they are stored as strings. Doing the actual comparisons would then require more work, but the same logic holds.
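The same `out_time > start AND in_time < end` test, checked in plain Python against the question's sample data (times converted to minutes since midnight):

```python
# The answer's overlap test, verified against the sample bookings.
def mins(h, m=0):
    return h * 60 + m

bookings = {
    "A": (mins(9), mins(13)),
    "B": (mins(10), mins(12)),
    "C": (mins(10, 25), mins(13, 30)),
    "D": (mins(9), mins(21)),
    "E": (mins(12, 20), mins(17)),
    "F": (mins(10, 45), mins(23, 30)),
}
start, end = mins(10, 30), mins(12, 15)
occupants = sorted(p for p, (t_in, t_out) in bookings.items()
                   if t_out > start and t_in < end)
print(occupants)  # ['A', 'B', 'C', 'D', 'F']  -- E enters after the range ends
```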
|
The common logic to check whether two ranges overlap is:
```
(start#1, end#1) overlaps (start#2, end#2)
start#1 <= end#2 AND end#1 >= start#2
```
Depending on whether your bounds are inclusive or exclusive, you may need strict comparisons instead:
```
start#1 < end#2 AND end#1 > start#2
```
|
SQL query: identify who occupied room in a specific time range
|
[
"",
"sql",
"oracle",
""
] |
I'm working on Visual Studio 2015 and whenever I try to add a table from the server explorer menu, it only shows two options **Properties** and **Refresh**.
There have been answers for [this](https://i.stack.imgur.com/8WEhm.png) problem, but I have already tried them, like, adding SQL data tools and repairing visual studio.
SQL data tools were already installed and even after repairing the problem persists.
So please suggest me how can I add tables to the database.
|
Thanks for the comments, but I solved the problem.
Open the command prompt and run:
```
C:\>sqllocaldb create "MyInstance"
```
*MyInstance* refers to your SQL Server LocalDB instance name; it can be **v11.0**, but for me it was **mssqllocaldb**.
If it runs successfully, it will report 'Instance created' and you will be able to add tables.
But if you get an error about creating the instance, delete it first by running:
```
C:\>sqllocaldb delete "MyInstance"
```
and then create the instance again.
I hope this helps.
|
1. Close Visual Studio.
2. [Download SQL Server Data Tools](https://download.microsoft.com/download/3/4/6/346DB3B9-B7BB-4997-A582-6D6008796846/Dev12/EN/SSDTSetup.exe)
3. Install it on your PC.
4. Close the installer once installation succeeds.
5. Reopen Visual Studio.
Hope this will help.
Enjoy :)
|
"Add new table" option missing - Visual Studio 2015
|
[
"",
"sql",
"visual-studio",
""
] |
I have the following table:
```
CREATE TABLE orders (
  id INT PRIMARY KEY,
oDate DATE NOT NULL,
oName VARCHAR(32) NOT NULL,
oItem INT,
oQty INT
-- ...
);
INSERT INTO orders
VALUES
(1, '2016-01-01', 'A', 1, 2),
(2, '2016-01-01', 'A', 2, 1),
(3, '2016-01-01', 'B', 1, 3),
(4, '2016-01-02', 'B', 1, 2),
(5, '2016-01-02', 'C', 1, 2),
(6, '2016-01-03', 'B', 2, 1),
(7, '2016-01-03', 'B', 1, 4),
(8, '2016-01-04', 'A', 1, 3)
;
```
I want to get the most recent rows (of which there might be multiple) for each name. For the sample data, the results should be:
| id | oDate | oName | oItem | oQty | ... |
| --- | --- | --- | --- | --- | --- |
| 5 | 2016-01-02 | C | 1 | 2 | |
| 6 | 2016-01-03 | B | 2 | 1 | |
| 7 | 2016-01-03 | B | 1 | 4 | |
| 8 | 2016-01-04 | A | 1 | 3 | |
The query might be something like:
```
SELECT oDate, oName, oItem, oQty, ...
FROM orders
WHERE oDate = ???
GROUP BY oName
ORDER BY oDate, id
```
Besides missing the expression (represented by `???`) to calculate the desired values for `oDate`, this statement is invalid as it selects columns that are neither grouped nor aggregates.
Does anyone know how to do get this result?
|
The [`rank`](https://msdn.microsoft.com/en-us/library/ms176102.aspx) window function allows you to, well, rank rows according to some partitioning, and then you can just select the top-ranked ones:
```
SELECT id, oDate, oName, oItem, oQty
FROM (SELECT id, oDate, oName, oItem, oQty,
             RANK() OVER (PARTITION BY oName ORDER BY oDate DESC) AS rk
      FROM orders) t
WHERE rk = 1
```
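The ranking logic is easy to cross-check in plain Python: find each name's latest date, then keep every row that ties with it. A sketch on the question's sample rows:

```python
# "Latest rows per name, ties included" - the RANK() logic in plain Python.
rows = [
    (1, "2016-01-01", "A", 1, 2), (2, "2016-01-01", "A", 2, 1),
    (3, "2016-01-01", "B", 1, 3), (4, "2016-01-02", "B", 1, 2),
    (5, "2016-01-02", "C", 1, 2), (6, "2016-01-03", "B", 2, 1),
    (7, "2016-01-03", "B", 1, 4), (8, "2016-01-04", "A", 1, 3),
]
latest = {}
for _, date, name, *_rest in rows:
    latest[name] = max(latest.get(name, date), date)  # ISO dates compare as text
result = [r for r in rows if r[1] == latest[r[2]]]
print([r[0] for r in result])  # [5, 6, 7, 8] -- matches the expected output
```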
|
This is a generic query that needs no analytic functions:
```
SELECT a.*
FROM orders a
INNER JOIN
    (SELECT oName,
            MAX(oDate) AS modate
     FROM orders
     GROUP BY oName
    ) b ON a.oName = b.oName
       AND a.oDate = b.modate
```
Note that the grouping is by `oName` only; adding `oItem` to the `GROUP BY` would return the latest row per name *and* item instead.
|
Get the latest records per Group By SQL
|
[
"",
"sql",
"sql-server",
"select",
"sql-server-2008-r2",
"groupwise-maximum",
""
] |
I have a table with two different stamps; let's call them oristamp and tarstamp. I need to find only the records that have different tarstamps for the same oristamp. Is it possible to do that with a simple query? I think a cursor would be needed, but I'm not familiar with that. Any help?
|
Use a sub-query to find oristamp values having at least two different tarstamp values. Join with that sub-query:
```
select t1.*
from tablename t1
join (select oristamp from tablename
group by oristamp
having count(distinct tarstamp) >= 2) t2 on t1.oristamp = t2.oristamp
```
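A runnable sketch of this pattern (in SQLite, with a tiny made-up table): only `oristamp` values with at least two distinct `tarstamp` values survive.

```python
import sqlite3

# Sub-query finds oristamps with >= 2 distinct tarstamps; the join keeps
# only their rows. Demo data: oristamp 1 has one distinct tarstamp, 2 has two.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (oristamp INT, tarstamp INT)")
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [(1, 1), (1, 1), (2, 1), (2, 2)])
rows = conn.execute("""
    SELECT t1.* FROM tablename t1
    JOIN (SELECT oristamp FROM tablename
          GROUP BY oristamp
          HAVING COUNT(DISTINCT tarstamp) >= 2) t2
      ON t1.oristamp = t2.oristamp
""").fetchall()
print(rows)
```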
|
I hope I understand the question. I am assuming you want all rows where more than 1 distinct value for tarstamp exists for each oristamp.
```
DECLARE @t table(tarstamp int, oristamp int)
INSERT @t values
(1,1),
(1,1),
(1,2),
(2,2)
;WITH CTE as
(
SELECT *,
max(tarstamp) over (partition by oristamp) mx,
min(tarstamp) over (partition by oristamp) mn
FROM @t
)
SELECT *
FROM CTE
WHERE mx <> mn
```
|
SQL query or cursor?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table which contains the following columns:
```
ProductName copmanyname arianame bno mrp exp_date Date qty
DANZEN DS HELIX PHARMA CITY 1 J026 215 01-Feb-16 30-Oct-19 41
DANZEN DS HELIX PHARMA CITY 2 J026 215 01-Feb-16 30-Aug-19 2
HIPRO HELIX PHARMA CITY 1 J035 225 01-Feb-16 30-Nov-18 20
NOGARD HELIX PHARMA CITY 1 J010 135 01-Feb-16 30-Nov-20 2
NOGARD HELIX PHARMA CITY 2 J010 135 01-Feb-16 30-Nov-20 8
NOGARD HELIX PHARMA TANK J004 135 01-Feb-16 30-May-20 1
ALINAMIN F HELIX PHARMA CITY 1 I002 195 02-Feb-16 30-Sep-19 2
ALINAMIN F HELIX PHARMA CITY 2 H003 195 02-Feb-16 30-Nov-18 1
```
I want to display the records for a specific date, and where records have the same ProductName, companyname, bno and mrp, their qty values should be summed. For example, in the above table:
```
ProductName copmanyname arianame bno mrp exp_date Date qty
NOGARD HELIX PHARMA CITY 1 J010 135 01-Feb-16 30-Nov-20 30
```
I tried the following statement, but it does not sum up the qty; it displays all records.
```
SELECT
ProductName, CopmanyName, AriaName, bno, mrp, exp_date,
Sum(quantity) AS qty
FROM
q_saledetail
GROUP BY
ProductName, CopmanyName, AriaName, bno, mrp, exp_date,date
WHERE
date = any date
```
|
The query in your question greatly differs from what you are asking to get as a result. Additionally your desired output seems to be off in quantity.
If you want the sum of the quantity, where Productname, companyname,bno and mrp are identical, then it is wrong to group by exp\_date and date as well.
If you take a look at this [SQL Fiddle](http://sqlfiddle.com/#!6/825fc/2/0), I have filtered by a random date and then grouped by the 4 columns you have mentioned.
```
SELECT
ProductName, companyName, bno, mrp, SUM(qty) as Quantity
FROM
FiddleTable
WHERE [date] = '30-Nov-20'
GROUP BY
ProductName, companyName, bno, mrp
```
|
To make the other comments clear:
```
ProductName copmanyname arianame bno mrp exp_date Date qty
DANZEN DS HELIX PHARMA CITY 1 J026 215 01-Feb-16 30-Oct-19 41
DANZEN DS HELIX PHARMA CITY 2 J026 215 01-Feb-16 30-Aug-19 2
^ can you see the different arianame? ^^^ date??
HIPRO HELIX PHARMA CITY 1 J035 225 01-Feb-16 30-Nov-18 20
NOGARD HELIX PHARMA CITY 1 J010 135 01-Feb-16 30-Nov-20 2
NOGARD HELIX PHARMA CITY 2 J010 135 01-Feb-16 30-Nov-20 8
NOGARD HELIX PHARMA TANK J004 135 01-Feb-16 30-May-20 1
^ again ^^^^
ALINAMIN F HELIX PHARMA CITY 1 I002 195 02-Feb-16 30-Sep-19 2
ALINAMIN F HELIX PHARMA CITY 2 H003 195 02-Feb-16 30-Nov-18 1
^ and ^^^^ ^^^^^^^^^
```
If you take a look at NOGARD: you have 3 different rows, so they can't be grouped as-is. If you leave arianame out of the grouping, the two CITY rows (which share bno J010) could be grouped to qty 10; the TANK row stays separate because its bno differs.
|
How we sum the value of unique record in sql
|
[
"",
"sql",
"group-by",
""
] |
my table looks like this
```
+--------+--------+--------------+--------------+
| CPI_id | Weight | score_100_UB | score_100_LB |
+--------+--------+--------------+--------------+
| 1.1 | 10 | 100 | 90 |
+--------+--------+--------------+--------------+
```
while executing the insert query the table should look like
```
+--------+--------+--------------+--------------+
| CPI_id | Weight | score_100_UB | score_100_LB |
+--------+--------+--------------+--------------+
| 1.1 | 10 | 100 | 90 |
| 5.5 | 10 | NULL | 93 |
+--------+--------+--------------+--------------+
```
but the NULL value should be replaced by 100.
I also tried using a trigger, but I couldn't get it to work.
Thanks in advance.
|
For MySQL use:
```
insert into table values (CPI_id , Weight ,IFNULL(score_100_UB ,100), score_100_LB )
```
or:
```
insert into table values (CPI_id , Weight ,COALESCE(score_100_UB ,100), score_100_LB )
```
SQL Server:
```
insert into table values (CPI_id , Weight ,ISNULL(score_100_UB ,100), score_100_LB )
```
Oracle:
```
insert into table values (CPI_id , Weight ,NVL(score_100_UB ,100), score_100_LB )
```
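`COALESCE` is the portable spelling, so a runnable sketch of the pattern is easy to show with the stdlib `sqlite3` module (demo table only):

```python
import sqlite3

# Wrapping the possibly-NULL value in COALESCE at INSERT time substitutes 100.
# IFNULL/ISNULL/NVL are the dialect-specific spellings of the same idea.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cpi (CPI_id REAL, Weight INT,
                                  score_100_UB INT, score_100_LB INT)""")
new_row = (5.5, 10, None, 93)  # score_100_UB arrives as NULL
conn.execute("INSERT INTO cpi VALUES (?, ?, COALESCE(?, 100), ?)", new_row)
print(conn.execute("SELECT * FROM cpi").fetchall())  # [(5.5, 10, 100, 93)]
```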
|
Alter your table and give the field `score_100_UB` a default value, like below:
```
ALTER TABLE t1 MODIFY score_100_UB INT UNSIGNED DEFAULT 100;
```
After this, whenever the column is omitted from an insert, 100 will be used. Note that an explicit `NULL` in the insert still stores `NULL`; the default only applies when the column is left out.
|
how to replace NULL value during insertion in sql
|
[
"",
"mysql",
"sql",
""
] |
I have a table where I store details about the chapters, I have to show data in following order of Table of Index
> 1. Chapter One
>
> 1.1 Chapter One Page 1
>
> 1.2 Chapter One Page 2
> 2. Chapter Two
>
> 2.1 Chapter Two Page 1
>
> 2.2 Chapter Two Page 2
> 3. Chapter Three
>
> Title One
>
> Title Two
>
> Title Three
>
> 3.1 Chapter Three Page 1
>
> 3.2 Chapter Three Page 2
>
> 3.3 Chapter Three Page 3
We may insert data into the database in sorted or unsorted order, but the data should be shown sorted by the PageOrder of the parent and child pages.
I have set up an SQL Fiddle, but for some reason I am not able to save the SQL. Below you will find the fiddle link and details.
```
CREATE TABLE [Book]
(
[id] int,
[Chapter] varchar(20),
[PageOrder] int,
[parentID] int
);
INSERT INTO [Book] ([id], [Chapter], [PageOrder], [parentID])
VALUES
('1', 'Chapter One', 1, 0),
('2', 'Chapter Two', 2, 0),
('3', 'Chapter Three', 3, 0),
('4', 'Chapter Four', 4, 0),
('5', 'Chapter Five', 5, 0),
('6', 'Chapter One Page 1', 1, 1),
('7', 'Chapter One Page 2', 2, 1),
('8', 'Chapter One Page 3', 3, 1),
('9', 'Chapter One Page 4', 4, 1),
('10', 'Chapter Two Page 1', 1, 2),
('11', 'Chapter Two Page 3', 3, 2),
('12', 'Chapter Two Page 2', 2, 2),
('13', 'Chapter Three Tite 1', 0, 3),
('14', 'Chapter Three Tite 2', 0, 3),
('15', 'Chapter Three Tite 3', 0, 3),
('16', 'Chapter Three Page 2', 2, 3),
('17', 'Chapter Three Page 3', 3, 3),
('18', 'Chapter Three Page 1', 1, 3);
WITH CTE(ID, parentID, Chapter, PageOrder, Depth, SortCol) AS
(
SELECT
ID, parentID, Chapter, PageOrder, 0,
CAST(ID AS varbinary(max))
FROM Book
WHERE parentID = 0
UNION ALL
SELECT
d.ID, d.parentID, d.Chapter, d.PageOrder, p.Depth + 1,
CAST(SortCol + CAST(d.ID AS binary(4)) AS varbinary(max))
FROM Book AS d
JOIN CTE AS p ON d.parentID = p.ID
)
SELECT
ID, parentID, Chapter, PageOrder, Depth,
REPLICATE('--', Depth) + Chapter as PageName
FROM CTE
ORDER BY SortCol
```
This CTE query sorts the data, but the child pages are not properly sorted: they show up in the order in which they were saved in the database.
SqlFiddle Link <http://www.sqlfiddle.com/#!3/9770a/1>
|
Use `PageOrder` instead of `ID` to build `SortCol`, also cast to `VARCHAR(MAX)` instead of `VARBINARY(MAX)`:
```
WITH CTE(ID, parentID, Chapter, PageOrder, Depth, SortCol) AS (
SELECT ID, parentID, Chapter,PageOrder, 0,
CAST(PageOrder AS varchar(max))
FROM Book
WHERE parentID = 0
UNION ALL
SELECT d.ID, d.parentID, d.Chapter, d.PageOrder, p.Depth + 1,
CAST(SortCol + CAST(d.PageOrder AS varchar(max)) AS varchar(max))
FROM Book AS d
JOIN CTE AS p ON d.parentID = p.ID
)
SELECT ID, parentID, Chapter, PageOrder, Depth, SortCol,
REPLICATE('--', Depth) + Chapter as PageName
FROM CTE
ORDER BY SortCol, Chapter
```
Additionally `Chapter` column is used to sort chapters having the same `PageOrder` and belonging to the same tree level.
[**Demo here**](http://www.sqlfiddle.com/#!3/9770a/2)
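The same idea can be sketched in SQLite's recursive-CTE syntax. Note the `PageOrder` values are zero-padded here before concatenation, which avoids the lexicographic pitfall of plain string concatenation (page 10 sorting before page 2); abbreviated sample data:

```python
import sqlite3

# Build a sort key by concatenating zero-padded PageOrder values down the tree.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Book (id INT, Chapter TEXT, PageOrder INT, parentID INT)")
conn.executemany("INSERT INTO Book VALUES (?, ?, ?, ?)", [
    (1, "Chapter One", 1, 0), (2, "Chapter Two", 2, 0),
    (6, "Chapter One Page 1", 1, 1), (7, "Chapter One Page 2", 2, 1),
    (10, "Chapter Two Page 1", 1, 2), (12, "Chapter Two Page 2", 2, 2),
])
rows = conn.execute("""
    WITH RECURSIVE cte(id, Chapter, Depth, SortCol) AS (
        SELECT id, Chapter, 0, printf('%04d', PageOrder)
        FROM Book WHERE parentID = 0
        UNION ALL
        SELECT b.id, b.Chapter, p.Depth + 1,
               p.SortCol || printf('%04d', b.PageOrder)
        FROM Book b JOIN cte p ON b.parentID = p.id
    )
    SELECT Chapter FROM cte ORDER BY SortCol
""").fetchall()
for (chapter,) in rows:
    print(chapter)  # parents first, children in PageOrder under each parent
```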
|
The complete solution looks like:
```
SELECT * FROM
(
SELECT p.CategoryID
, p.Category_Name
, p.IsParent
, p.ParentID
, p.Active
, p.Sort_Order AS Primary_Sort_Order
, CASE WHEN p.IsParent = 0 THEN (SELECT Sort_Order FROM tbl_Category WHERE
CategoryID = p.ParentID) ELSE p.Sort_Order END AS Secondary_Sort_Order
FROM tbl_Category p
) x
ORDER BY Secondary_Sort_Order,
CASE WHEN ParentID = 0 THEN CategoryID ELSE ParentID END,
CASE WHEN ParentID = 0 THEN 0 ELSE Primary_Sort_Order END,
Primary_Sort_Order ASC
```
Hope it helps!
|
Sort data in SQL Server based on parent child relation
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
[](https://i.stack.imgur.com/A5E95.jpg)
When I select all rows from table zvw\_test it returns 145 rows.
Table Customer\_Class\_Price has 160 rows.
When I try to join these 2 tables with 2 conditions it returns 122 rows.
I don't understand why it does not return all 145 rows from zvw\_test,
because with a left outer join it should return all rows from the left table.
Thank you.
```
SELECT zvw_test.Goods_ID,
zvw_test.Thai_Name,
zvw_test.UM,
zvw_test.CBal,
Customer_Class_Price.ListPrice
FROM zvw_test
LEFT OUTER JOIN
Customer_Class_Price ON zvw_test.Goods_ID = Customer_Class_Price.Goods_ID AND
zvw_test.UM = Customer_Class_Price.UM
WHERE (Customer_Class_Price.ClassCode = '444-666')
```
|
I had this problem before; I used a CTE to solve it, like:
```
WITH A AS
(
SELECT Customer_Class_Price.Goods_ID, Customer_Class_Price.UM, Customer_Class_Price.ListPrice
FROM Customer_Class_Price
WHERE Customer_Class_Price.ClassCode = '444-666'
)
SELECT zvw_test.Goods_ID, zvw_test.Thai_Name, zvw_test.UM, zvw_test.CBal, A.ListPrice
FROM zvw_test LEFT OUTER JOIN A
ON zvw_test.Goods_ID = A.Goods_ID AND zvw_test.UM = A.UM
```
|
By putting one of your columns from the `LEFT OUTER JOIN` table in your `WHERE` clause, you have effectively turned it into an `INNER JOIN`. You need to move that up to the `JOIN` clause.
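A runnable sketch of the difference (in SQLite, with tiny made-up tables): the same filter behaves like an inner join when placed in `WHERE`, but preserves unmatched left rows when placed in `ON`:

```python
import sqlite3

# Filtering the right-hand table in WHERE discards the NULL-extended rows;
# moving the filter into ON keeps every left row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE goods (goods_id INT);
    CREATE TABLE prices (goods_id INT, classcode TEXT, listprice INT);
    INSERT INTO goods VALUES (1), (2);
    INSERT INTO prices VALUES (1, '444-666', 10);
""")
in_where = conn.execute("""
    SELECT g.goods_id, p.listprice FROM goods g
    LEFT JOIN prices p ON g.goods_id = p.goods_id
    WHERE p.classcode = '444-666'
""").fetchall()
in_on = conn.execute("""
    SELECT g.goods_id, p.listprice FROM goods g
    LEFT JOIN prices p ON g.goods_id = p.goods_id AND p.classcode = '444-666'
""").fetchall()
print(in_where)  # [(1, 10)] - behaves like an INNER JOIN
print(in_on)     # [(1, 10), (2, None)] - all left rows preserved
```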
|
Left outer join with 2 column missing some output rows
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am selecting a date column which is in the format "YYYY-MM-DD".
I want to cast it to a timestamp such that it will be "YYYY-MM-DD HH:MM:SS:MS"
I attempted:
```
select CAST(mycolumn as timestamp) from mytable;
```
but this resulted in the format YYYY-MM-DD HH:MM:SS
I also tried
```
select TO_TIMESTAMP(mycolumn,YYYY-MM-DD HH:MM:SS:MS) from mytable;
```
but this did not work either. I cannot seem to figure out the correct way to format this. Note that I only want the first digit of the milliseconds.
**Second question**
I am also trying to select numeric data such that there are no trailing zeros.
For example, if I have values in a table such as 1, 2.00, 3.34, 4.50,
I want to be able to select those values as 1, 2, 3.34, 4.5.
I tried using `::float`, but I occasionally got strange output. I also tried the rounding function, but how can I use it properly without knowing beforehand how many decimal places I need?
thanks for your help!
|
It seems that the functions `to_timestamp()` and `to_char()` are unfortunately not perfect.
If you cannot find anything better, use these workarounds:
```
with example_data(d) as (
values ('2016-02-02')
)
select d, d::timestamp || '.0' tstamp
from example_data;
d | tstamp
------------+-----------------------
2016-02-02 | 2016-02-02 00:00:00.0
(1 row)
create function my_to_char(numeric)
returns text language sql as $$
select case
when strpos($1::text, '.') = 0 then $1::text
    else rtrim(rtrim($1::text, '0'), '.')
end
$$;
with example_data(n) as (
values (100), (2.00), (3.34), (4.50))
select n::text, my_to_char(n)
from example_data;
n | my_to_char
------+------------
100 | 100
2.00 | 2
3.34 | 3.34
4.50 | 4.5
(4 rows)
```
See also: [How to remove the dot in to\_char if the number is an integer](https://stackoverflow.com/a/33279672/1995738)
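If the values end up in Python anyway (a common case with PostgreSQL/Greenplum clients), `Decimal` can do the trimming client-side. A sketch; the integral-value check avoids `normalize()` producing scientific notation such as `1E+2` for 100:

```python
from decimal import Decimal

# Strip trailing zeros from numeric text without knowing the scale in advance.
def trim(num_text: str) -> str:
    d = Decimal(num_text).normalize()   # drops trailing zeros: 4.50 -> 4.5
    if d == d.to_integral_value():
        return str(d.quantize(Decimal(1)))  # 1E+2 -> 100
    return str(d)

print([trim(s) for s in ["100", "2.00", "3.34", "4.50"]])
# ['100', '2', '3.34', '4.5']
```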
|
```
SELECT to_char(current_timestamp, 'YYYY-MM-DD HH:MI:SS:MS');
```
prints
```
2016-02-05 03:21:18:346
```
|
Two questions for formatting timestamp and number using postgresql
|
[
"",
"sql",
"postgresql",
"timestamp",
"rounding",
"greenplum",
""
] |
I'm trying to group sales data based on a sellers' name. The name is available in another table. My tables look like this:
**InvoiceRow:**
```
+-----------+----------+-----+----------+
| InvoiceNr | Title | Row | Amount |
+-----------+----------+-----+----------+
| 1 | Chair | 1 | 2000.00 |
| 2 | Sofa | 1 | 1500.00 |
| 2 | Cushion | 2 | 2000.00 |
| 3 | Lamp | 1 | 6500.00 |
| 4 | Table | 1 | -500.00 |
+-----------+----------+-----+----------+
```
**InvoiceHead:**
```
+-----------+----------+------------+
| InvoiceNr | Seller | Date |
+-----------+----------+------------+
| 1 | Adam | 2016-01-01 |
| 2 | Lisa | 2016-01-04 |
| 3 | Adam | 2016-01-08 |
| 4 | Carl | 2016-01-17 |
+-----------+----------+------------+
```
**The query that I'm working with currently looks like this:**
```
SELECT SUM(Amount)
FROM InvoiceRow
WHERE InvoiceNr IN (
SELECT InvoiceNr
FROM InvoiceHead
WHERE Date >= '2016-01-01' AND Date < '2016-02-01'
)
```
This works and will sum the values of all rows of all invoices (total sales) in the month of january.
**What I want to do is a sales summary grouped by each sellers' name. Something like this:**
```
+----------+------------+
| Seller | Amount |
+----------+------------+
| Adam | 8500.00 |
| Lisa | 3500.00 |
| Carl | -500.00 |
+----------+------------+
```
And after that maybe even grouped by month (but that's not part of this question, I'm hoping to be able to figured that out if I solve this).
I've tried all kinds of joins but I end up with a lot of duplicates, and I'm not sure how to SUM and group at the same time. Does anyone know how to do this?
|
Try This
```
SELECT seller, SUM(amount) FROM InvoiceRow
JOIN InvoiceHead
ON InvoiceRow.InvoiceNr = InvoiceHead.InvoiceNr
GROUP BY InvoiceHead.seller;
```
Or, if you want to restrict to a date range, try this:
```
SELECT seller, SUM(amount) FROM InvoiceRow
JOIN InvoiceHead
ON InvoiceRow.InvoiceNr = InvoiceHead.InvoiceNr
WHERE InvoiceHead.Date >= '2016-01-01' AND InvoiceHead.Date < '2016-02-01'
GROUP BY InvoiceHead.seller;
```
|
You just need to join the tables, filter the result by date as needed, and then group:
```
select
H.Seller,
sum(R.Amount) as Amount
from InvoiceHead as H
left outer join InvoiceRow as R on R.InvoiceNr = H.InvoiceNr
where H. Date >= '2016-01-01' AND H.Date < '2016-02-01'
group by H.Seller
```
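A runnable sketch of the join-then-group pattern on the question's own sample data (in SQLite); Adam's two invoices (2000 + 6500) sum to 8500 as in the expected output:

```python
import sqlite3

# Join invoice rows to their header, filter to January, group by seller.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE InvoiceRow (InvoiceNr INT, Title TEXT, RowNo INT, Amount REAL);
    CREATE TABLE InvoiceHead (InvoiceNr INT, Seller TEXT, Date TEXT);
    INSERT INTO InvoiceRow VALUES (1,'Chair',1,2000), (2,'Sofa',1,1500),
        (2,'Cushion',2,2000), (3,'Lamp',1,6500), (4,'Table',1,-500);
    INSERT INTO InvoiceHead VALUES (1,'Adam','2016-01-01'), (2,'Lisa','2016-01-04'),
        (3,'Adam','2016-01-08'), (4,'Carl','2016-01-17');
""")
rows = conn.execute("""
    SELECT h.Seller, SUM(r.Amount)
    FROM InvoiceHead h
    JOIN InvoiceRow r ON r.InvoiceNr = h.InvoiceNr
    WHERE h.Date >= '2016-01-01' AND h.Date < '2016-02-01'
    GROUP BY h.Seller
""").fetchall()
print(dict(rows))
```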
|
SQL sum and group values from two tables
|
[
"",
"sql",
""
] |
I want to select the second highest value from tblTasks(JobID, ItemName, ContentTypeID).
That's what I thought of. I bet it can be done more easily, but I don't know how.
```
SELECT Max(JobID) AS maxjobid,
Max(ItemName) AS maxitemname,
ContentTypeID
FROM
(SELECT JobID, ItemName, ContentTypeID
FROM tblTasks Ta
WHERE JobID NOT IN
(SELECT MAX(JobID)
FROM tblTasks Tb
GROUP BY ContentTypeID)
) secmax
GROUP BY secmax.ContentTypeID
```
|
I'm guessing you'd want something like this.
```
SELECT JobID AS maxjobid,
ItemName AS maxitemname,
ContentTypeID
FROM (SELECT JobID,
ItemName,
ContentTypeID,
ROW_NUMBER() OVER (PARTITION BY ContentTypeID ORDER BY JobID DESC) Rn
FROM tblTasks Ta
) t
WHERE Rn = 2
```
This would give you the second-highest `JobID` record per `ContentTypeID`.
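For the simpler case of the single second-highest row overall (no per-group partitioning), `LIMIT`/`OFFSET` is the portable shortcut; a sketch with made-up rows:

```python
import sqlite3

# Sort descending, skip the top row, take one: the second-highest JobID.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblTasks (JobID INT, ItemName TEXT, ContentTypeID INT)")
conn.executemany("INSERT INTO tblTasks VALUES (?, ?, ?)",
                 [(10, "a", 1), (20, "b", 1), (30, "c", 2)])
row = conn.execute(
    "SELECT JobID, ItemName FROM tblTasks ORDER BY JobID DESC LIMIT 1 OFFSET 1"
).fetchone()
print(row)  # (20, 'b')
```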
|
I would suggest `DENSE_RANK()`, if you want the second `JobID`:
```
SELECT tb.*
FROM (SELECT tb.*, DENSE_RANK() OVER (ORDER BY JobID DESC) as seqnum
FROM tblTasks Tb
) tb
WHERE seqnum = 2;
```
If there are no duplicates and you want the single second-highest row overall, then `OFFSET`/`FETCH` is easier:
```
SELECT *
FROM tblTasks
ORDER BY JobID DESC
OFFSET 1 ROWS
FETCH NEXT 1 ROW ONLY;
```
```
|
Select second MAX value
|
[
"",
"sql",
"sql-server",
"max",
""
] |
In my table each row has some data columns and a `Priority` column (for example, a timestamp or just an integer). I want to group my data by ID and then, in each group, take the latest non-null value of each column. For example, I have the following table:
```
id A B C Priority
1 NULL 3 4 1
1 5 6 NULL 2
1 8 NULL NULL 3
2 634 346 359 1
2 34 NULL 734 2
```
Desired result is :
```
id A B C
1 8 6 4
2 34 346 734
```
In this example the table is small and has only 5 columns, but the real table will be much larger. I really want this script to work fast. I tried to do it myself, but my script required SQL Server 2012+, so I deleted it as not applicable.
Numbers: the table could have 150k rows, 20 columns, and 20-80k unique `id`s, and the average `SELECT COUNT(id) FROM T GROUP BY ID` is 2..5.
Now I have working code (thanks to @ypercubeᵀᴹ), but it runs very slowly on big tables; in my case the script can take a minute or more (even with indexes and so on).
How can it be speeded up?
```
SELECT
d.id,
d1.A,
d2.B,
d3.C
FROM
( SELECT id
FROM T
GROUP BY id
) AS d
OUTER APPLY
( SELECT TOP (1) A
FROM T
WHERE id = d.id
AND A IS NOT NULL
ORDER BY priority DESC
) AS d1
OUTER APPLY
( SELECT TOP (1) B
FROM T
WHERE id = d.id
AND B IS NOT NULL
ORDER BY priority DESC
) AS d2
OUTER APPLY
( SELECT TOP (1) C
FROM T
WHERE id = d.id
AND C IS NOT NULL
ORDER BY priority DESC
) AS d3 ;
```
In my test database with real amount of data I get following execution plan:
[](https://i.stack.imgur.com/qSGKf.png)
|
This should do the trick: everything raised to the power 0 returns 1, except NULL:
```
DECLARE @t table(id int,A int,B int,C int,Priority int)
INSERT @t
VALUES (1,NULL,3 ,4 ,1),
(1,5 ,6 ,NULL,2),(1,8 ,NULL,NULL,3),
(2,634 ,346 ,359 ,1),(2,34 ,NULL,734 ,2)
;WITH CTE as
(
SELECT id,
CASE WHEN row_number() over
(partition by id order by Priority*power(A,0) desc) = 1 THEN A END A,
CASE WHEN row_number() over
(partition by id order by Priority*power(B,0) desc) = 1 THEN B END B,
CASE WHEN row_number() over
(partition by id order by Priority*power(C,0) desc) = 1 THEN C END C
FROM @t
)
SELECT id, max(a) a, max(b) b, max(c) c
FROM CTE
GROUP BY id
```
Result:
```
id a b c
1 8 6 4
2 34 346 734
```
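The "latest non-null value per column" rule is easy to cross-check in plain Python: walk each id's rows in priority order and let later non-null values overwrite earlier ones. A sketch on the question's sample data:

```python
# Reference implementation of "latest non-null per column by priority".
rows = [  # (id, A, B, C, priority)
    (1, None, 3, 4, 1), (1, 5, 6, None, 2), (1, 8, None, None, 3),
    (2, 634, 346, 359, 1), (2, 34, None, 734, 2),
]
result = {}
for rid, a, b, c, _prio in sorted(rows, key=lambda r: (r[0], r[4])):
    cur = result.setdefault(rid, [None, None, None])
    for i, v in enumerate((a, b, c)):
        if v is not None:       # later (higher-priority) values overwrite
            cur[i] = v
print(result)  # {1: [8, 6, 4], 2: [34, 346, 734]}
```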
|
One alternative that might be faster is a multiple join approach. Get the priority for each column and then join back to the original table. For the first part:
```
select id,
max(case when a is not null then priority end) as pa,
max(case when b is not null then priority end) as pb,
max(case when c is not null then priority end) as pc
from t
group by id;
```
Then join back to this table:
```
with pabc as (
select id,
max(case when a is not null then priority end) as pa,
max(case when b is not null then priority end) as pb,
max(case when c is not null then priority end) as pc
from t
group by id
)
select pabc.id, ta.a, tb.b, tc.c
from pabc left join
t ta
on pabc.id = ta.id and pabc.pa = ta.priority left join
t tb
on pabc.id = tb.id and pabc.pb = tb.priority left join
t tc
on pabc.id = tc.id and pabc.pc = tc.priority ;
```
This can also take advantage of an index on `t(id, priority)`.
|
Aggregate data from multiple rows into single row
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a component that retrieves data from database based on the keys provided.
However, I want my Java application to get the data for all keys in a single database hit to speed things up.
I can use an `in` clause when I have only one key.
When working with more than one key, I can use the query below in Oracle:
```
SELECT * FROM <table_name>
where (value_type,CODE1) IN (('I','COMM'),('I','CORE'));
```
which is similar to writing
```
SELECT * FROM <table_name>
where value_type = 'I' and CODE1 = 'COMM'
```
and
```
SELECT * FROM <table_name>
where value_type = 'I' and CODE1 = 'CORE'
```
together
However, this form of the `in` clause produces the error below in SQL Server:
`ERROR: An expression of non-boolean type specified in a context where a condition is expected, near ','.`
Please let me know if there is any way to achieve the same in SQL Server.
|
This syntax doesn't exist in SQL Server. Use a combination of `And` and `Or`.
```
SELECT *
FROM <table_name>
WHERE
(value_type = 1 and CODE1 = 'COMM')
OR (value_type = 1 and CODE1 = 'CORE')
```
(In this case, you could make it shorter, because value\_type is compared to the same value in both combinations. I just wanted to show the pattern that works like IN in Oracle with multiple fields.)
---
When using IN with a subquery, you need to rephrase it like this:
Oracle:
```
SELECT *
FROM foo
WHERE
(value_type, CODE1) IN (
SELECT type, code
FROM bar
WHERE <some conditions>)
```
SQL Server:
```
SELECT *
FROM foo
WHERE
EXISTS (
SELECT *
FROM bar
WHERE <some conditions>
AND foo.type_code = bar.type
AND foo.CODE1 = bar.code)
```
There are other ways to do it, depending on the case, like inner joins and the like.
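The `AND`/`OR` pattern above is portable across engines. As a minimal sketch (using Python's built-in `sqlite3`, with table `t` and its column names invented for the example), the tuple list can be expanded into one parenthesised conjunction per pair, with bound parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (value_type TEXT, code1 TEXT, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("I", "COMM", "row1"),
    ("I", "CORE", "row2"),
    ("X", "COMM", "row3"),  # should NOT match
])

# One OR'ed (AND) conjunction per tuple, with bound parameters.
pairs = [("I", "COMM"), ("I", "CORE")]
clause = " OR ".join("(value_type = ? AND code1 = ?)" for _ in pairs)
params = [v for pair in pairs for v in pair]
rows = conn.execute(
    f"SELECT payload FROM t WHERE {clause} ORDER BY payload", params
).fetchall()
print(rows)  # [('row1',), ('row2',)]
```

Building the clause from a list this way also keeps the query parameterised, which matters if the tuples come from user input.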
|
If you have under 1000 tuples you want to check against and you're using SQL Server 2008+, you can use a [table value constructor](https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008/dd776382(v=sql.100)?redirectedfrom=MSDN) and perform a join against it. You can only specify up to 1000 rows in a table value constructor, hence the 1000-tuple limitation. Here's how it would look in your situation:
```
SELECT <table_name>.* FROM <table_name>
JOIN ( VALUES
('I', 'COMM'),
('I', 'CORE')
) AS MyTable(a, b) ON a = value_type AND b = CODE1;
```
This is only a good idea if your list of values is going to be unique, otherwise you'll get duplicate values. I'm not sure how the performance of this compares to using many ANDs and ORs, but the SQL query is at least much cleaner to look at, in my opinion.
You can also write this to use EXISTS instead of JOIN. That may have different performance characteristics, and it will avoid the problem of producing duplicate results if your values aren't unique. It may be worth trying both EXISTS and JOIN on your use case to see which is a better fit. Here's how EXISTS would look:
```
SELECT * FROM <table_name>
WHERE EXISTS (
SELECT 1
FROM (
VALUES
('I', 'COMM'),
('I', 'CORE')
) AS MyTable(a, b)
WHERE a = value_type AND b = CODE1
);
```
In conclusion, I think the best choice is to create a temporary table and query against that. But sometimes that's not possible, e.g. your user lacks the permission to create temporary tables, and then using a table value constructor may be your best choice. Use EXISTS or JOIN, depending on which gives you better performance on your database.
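As a quick portability check, here's the join-against-a-values-list idea sketched with Python's built-in `sqlite3` on illustrative data. Note that SQLite names the constructed columns `column1`, `column2`, ... rather than accepting the `AS MyTable(a, b)` aliasing shown above, so this demonstrates the technique, not SQL Server syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (value_type TEXT, code1 TEXT, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("I", "COMM", "row1"),
    ("I", "CORE", "row2"),
    ("X", "COMM", "row3"),  # should NOT match
])

# SQLite auto-names VALUES columns column1, column2, ...
rows = conn.execute("""
    SELECT t.payload
    FROM t
    JOIN (VALUES ('I','COMM'), ('I','CORE')) AS m
      ON m.column1 = t.value_type AND m.column2 = t.code1
    ORDER BY t.payload
""").fetchall()
print(rows)  # [('row1',), ('row2',)]
```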
|
'In' clause in SQL server with multiple columns
|
[
"",
"sql",
"sql-server",
"oracle",
""
] |
Consider a table:
```
ββββββββ¦βββββββββββββ¦ββββββββββ¦βββββββββββββββββββββ¦ββββββββββ
β Name β License No β Status β Status_update_date β Address β
β βββββββ¬βββββββββββββ¬ββββββββββ¬βββββββββββββββββββββ¬ββββββββββ£
β Jon β 1234 β Active β 01/01/2016 β aaaa β
β Rick β 5678 β Expired β 31/11/2015 β xxxx β
β Bob β 0987 β Expired β 30/01/2016 β ssss β
β Carl β 3456 β Active β 03/12/2015 β qqqq β
ββββββββ©βββββββββββββ©ββββββββββ©βββββββββββββββββββββ©ββββββββββ
```
Status\_update\_date is the date when the person's status last changed: it is set to the expiry date when the status becomes Expired, and to the activation date when the status becomes Active.
I want to get the records for all active licences and for licences that expired in the last 30 days; other expired licences are to be ignored.
Here is the expected result assuming the current date is `05/02/2016`:
```
ββββββββ¦βββββββββββββ¦ββββββββββ¦βββββββββββββββββββββ¦ββββββββββ
β Name β License No β Status β Status_update_date β Address β
β βββββββ¬βββββββββββββ¬ββββββββββ¬βββββββββββββββββββββ¬ββββββββββ£
β Jon β 1234 β Active β 01/01/2016 β aaaa β
β Bob β 0987 β Expired β 30/01/2016 β ssss β
β Carl β 3456 β Active β 03/12/2015 β qqqq β
ββββββββ©βββββββββββββ©ββββββββββ©βββββββββββββββββββββ©ββββββββββ
```
One restriction is that the query should not contain `UNION`.
|
You need an `OR` condition for **status** and **Status\_update\_date** as they can't occur at same time.
```
SELECT *
FROM table_name
WHERE status = 'Active'
OR ( status = 'Expired'
AND Status_update_date >= SYSDATE -30
);
```
To get the current date, you could use `SYSDATE` or `CURRENT_DATE` given that the *timezones are same for the session and that of the OS of the database server*.
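To make the 30-day window concrete, here's a minimal sketch with Python's built-in `sqlite3`, pinning "today" to the question's `2016-02-05`; SQLite's `date(..., '-30 day')` stands in for Oracle's `SYSDATE - 30`, and the table name is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE licences (name TEXT, status TEXT, status_update_date TEXT)"
)
conn.executemany("INSERT INTO licences VALUES (?, ?, ?)", [
    ("Jon",  "Active",  "2016-01-01"),
    ("Rick", "Expired", "2015-11-30"),
    ("Bob",  "Expired", "2016-01-30"),
    ("Carl", "Active",  "2015-12-03"),
])

# Active rows always pass; Expired rows pass only inside the 30-day window.
rows = conn.execute("""
    SELECT name
    FROM licences
    WHERE status = 'Active'
       OR (status = 'Expired'
           AND status_update_date >= date('2016-02-05', '-30 day'))
    ORDER BY name
""").fetchall()
print([r[0] for r in rows])  # ['Bob', 'Carl', 'Jon']
```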
|
You can try this:
```
SELECT *
FROM mytable
WHERE Status = 'Active' OR Status_update_date > sysdate-30
```
|
Filtering a query on a particular criteria like records for last 30 days
|
[
"",
"sql",
"oracle",
""
] |
Here is my sample table with only a bit of info.
```
select * from juniper_fpc;
id | router | part_name
-----------+-----------+--------------------
722830939 | BBBB-ZZZ1 | MPC-3D-16XGE-SFPP
722830940 | BBBB-ZZZ1 | MPC-3D-16XGE-SFPP
723103163 | AAAA-ZZZ1 | DPCE-R-40GE-SFP
723103164 | AAAA-ZZZ1 | MPC-3D-16XGE-SFPP
723103172 | AAAA-ZZZ1 | DPCE-R-40GE-SFP
722830941 | BBBB-ZZZ1 | MPC-3D-16XGE-SFPP
```
What I'm trying to do is identify elements from the router column that only have part\_name entries beginning with MPC. What I've come up with is this, but it's wrong because it lists both of the routers above.
```
SELECT router
FROM juniper_fpc
WHERE part_name LIKE 'MPC%'
GROUP BY router
ORDER BY router;
router
-----------
AAAA-ZZZ1
BBBB-ZZZ1
```
|
This should perform well:
```
SELECT j1.router
FROM (
SELECT router
FROM juniper_fpc
WHERE part_name LIKE 'MPC%'
GROUP BY router
) j1
LEFT JOIN juniper_fpc j2 ON j2.router = j1.router
AND j2.part_name NOT LIKE 'MPC%'
WHERE j2.router IS NULL
ORDER BY j1.router;
```
[@sagi's idea](https://stackoverflow.com/a/35181944/939860) with `NOT EXISTS` would work, too, if you get it right:
```
SELECT router
FROM juniper_fpc j
WHERE NOT EXISTS (
SELECT 1
FROM juniper_fpc
WHERE router = j.router
AND part_name NOT LIKE 'MPC%'
)
GROUP BY router
ORDER BY router;
```
Details:
* [Select rows which are not present in other table](https://stackoverflow.com/questions/19363481/select-rows-which-are-not-present-in-other-table/19364694#19364694)
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/96173/1)
Or, [@Frank's idea](https://stackoverflow.com/a/35182127/939860) with syntax for Postgres 9.4 or later:
```
SELECT router
FROM juniper_fpc
GROUP BY router
HAVING count(*) = count(*) FILTER (WHERE part_name LIKE 'MPC%')
ORDER BY router;
```
Best with an index on `(router, part_name)` for each of them.
|
Assuming you want the routers that only have part\_name like 'MPC%', you can use a conditional count:
```
select * from (
select router,
count(case when part_name like 'MPC%' then 1 else null end) as cnt_mpc,
count(*) as cnt_overall
from juniper_fpc
group by router) v_inner
where cnt_mpc = cnt_overall
```
This can be written more compact (albeit slightly less readable) as
```
select router
from juniper_fpc
group by router
having count(case when part_name like 'MPC%' then 1 else null end) = count(*)
```
[SQL Fiddle](http://sqlfiddle.com/#!15/8e91f/1)
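The conditional-count idea can be verified quickly with Python's built-in `sqlite3` on a subset of the question's data (the portable `CASE` form is used, since a router qualifies only when every one of its rows matches the pattern):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE juniper_fpc (router TEXT, part_name TEXT)")
conn.executemany("INSERT INTO juniper_fpc VALUES (?, ?)", [
    ("BBBB-ZZZ1", "MPC-3D-16XGE-SFPP"),
    ("BBBB-ZZZ1", "MPC-3D-16XGE-SFPP"),
    ("AAAA-ZZZ1", "DPCE-R-40GE-SFP"),
    ("AAAA-ZZZ1", "MPC-3D-16XGE-SFPP"),
])

# A router qualifies only if every part_name row begins with 'MPC'.
rows = conn.execute("""
    SELECT router
    FROM juniper_fpc
    GROUP BY router
    HAVING count(*) = sum(CASE WHEN part_name LIKE 'MPC%' THEN 1 ELSE 0 END)
    ORDER BY router
""").fetchall()
print(rows)  # [('BBBB-ZZZ1',)]
```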
|
GROUP BY isn't working as expected
|
[
"",
"sql",
"postgresql",
"aggregate",
""
] |
How can I get a RANK that restarts at partition change?
I have this table:
```
ID Date Value
1 2015-01-01 1
2 2015-01-02 1 <redundant
3 2015-01-03 2
4 2015-01-05 2 <redundant
5 2015-01-06 1
6 2015-01-08 1 <redundant
7 2015-01-09 1 <redundant
8 2015-01-10 2
9 2015-01-11 3
10 2015-01-12 3 <redundant
```
and I'm trying to delete all the rows where the Value is not changed from the previous entry (marked with **< redundant**).
I've tried using cursors but it takes too long, as the table has ~50 million rows.
I've also tried using RANK:
```
SELECT ID, Date, Value,
RANK() over(partition by Value order by Date ASC) Rank,
FROM DataLogging
ORDER BY Date ASC
```
but I get:
```
ID Date Value Rank (Rank)
1 2015-01-01 1 1 (1)
2 2015-01-02 1 2 (2)
3 2015-01-03 2 1 (1)
4 2015-01-05 2 2 (2)
5 2015-01-06 1 3 (1)
6 2015-01-08 1 4 (2)
7 2015-01-09 1 5 (3)
8 2015-01-10 2 3 (1)
9 2015-01-11 3 1 (1)
10 2015-01-12 3 2 (2)
```
In parentheses is the Rank I would want, so that I can keep the rows with Rank = 1 and delete the rest.
EDIT: I've accepted the answer that seemed the easiest to write, but unfortunately none of the answers runs fast enough for deleting the rows.
In the end I've decided to use the **CURSOR** after all. I've split the data into chunks of about 250k rows; the cursor runs through and deletes the rows in ~11 mins per batch of 250k rows, while the answers below, with DELETE, take ~35 mins per batch of 250k rows.
|
```
select *
from ( select ID, Date, Value, lag(Value, 1, 0) over (order by ID) as ValueLag
from table ) tt
where ValueLag is null or ValueLag <> Value
```
If the order is by Date, then use over (order by Date).
This should show you the good and bad rows. It is based on ID; if you need Date instead, revise accordingly.
It may look like a long way around, but it should be pretty efficient:
```
declare @tt table (id tinyint, val tinyint);
insert into @tt values
( 1, 1),
( 2, 1),
( 3, 2),
( 4, 2),
( 5, 1),
( 6, 1),
( 7, 1),
( 8, 2),
( 9, 3),
(10, 3);
select id, val, LAG(val) over (order by id) as lagVal
from @tt;
-- find the good
select id, val
from ( select id, val, LAG(val) over (order by id) as lagVal
from @tt
) tt
where lagVal is null or lagVal <> val
-- select the bad
select tt.id, tt.val
from @tt tt
left join ( select id, val
from ( select id, val, LAG(val) over (order by id) as lagVal
from @tt
) ttt
where ttt.lagVal is null or ttt.lagVal <> ttt.val
) tttt
on tttt.id = tt.id
where tttt.id is null
```
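The keep/delete rule itself (keep a row only when its Value differs from the previous row in date order) can be sketched engine-independently in plain Python, using the question's data:

```python
# (ID, Date, Value) from the question, already in Date order.
rows = [
    (1, "2015-01-01", 1), (2, "2015-01-02", 1),
    (3, "2015-01-03", 2), (4, "2015-01-05", 2),
    (5, "2015-01-06", 1), (6, "2015-01-08", 1), (7, "2015-01-09", 1),
    (8, "2015-01-10", 2), (9, "2015-01-11", 3), (10, "2015-01-12", 3),
]

keep, prev = [], object()  # sentinel: the first row is always kept
for rid, _date, value in rows:
    if value != prev:      # Value changed -> this row is NOT redundant
        keep.append(rid)
    prev = value

print(keep)  # [1, 3, 5, 8, 9]
```

Everything not in `keep` is what the DELETE should remove.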
|
Here is a somewhat convoluted way to do it:
```
WITH CTE AS
(
SELECT *,
ROW_NUMBER() OVER(ORDER BY [Date]) RN1,
ROW_NUMBER() OVER(PARTITION BY Value ORDER BY [Date]) RN2
FROM dbo.YourTable
), CTE2 AS
(
SELECT *, ROW_NUMBER() OVER(PARTITION BY Value, RN1 - RN2 ORDER BY [Date]) N
FROM CTE
)
SELECT *
FROM CTE2
ORDER BY ID;
```
The results are:
```
ββββββ¦βββββββββββββ¦ββββββββ¦ββββββ¦ββββββ¦ββββ
β ID β Date β Value β RN1 β RN2 β N β
β βββββ¬βββββββββββββ¬ββββββββ¬ββββββ¬ββββββ¬ββββ£
β 1 β 2015-01-01 β 1 β 1 β 1 β 1 β
β 2 β 2015-01-02 β 1 β 2 β 2 β 2 β
β 3 β 2015-01-03 β 2 β 3 β 1 β 1 β
β 4 β 2015-01-05 β 2 β 4 β 2 β 2 β
β 5 β 2015-01-06 β 1 β 5 β 3 β 1 β
β 6 β 2015-01-08 β 1 β 6 β 4 β 2 β
β 7 β 2015-01-09 β 1 β 7 β 5 β 3 β
β 8 β 2015-01-10 β 2 β 8 β 3 β 1 β
β 9 β 2015-01-11 β 3 β 9 β 1 β 1 β
β 10 β 2015-01-12 β 3 β 10 β 2 β 2 β
ββββββ©βββββββββββββ©ββββββββ©ββββββ©ββββββ©ββββ
```
To delete the rows you don't want, you just need to do:
```
DELETE FROM CTE2
WHERE N > 1;
```
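The `RN1 - RN2` trick is a classic gaps-and-islands technique; a plain-Python sketch shows that rows in the same island share the same `(Value, RN1 - RN2)` key:

```python
from collections import defaultdict

# (ID, Value) pairs in Date order, from the question's sample data.
rows = [(1, 1), (2, 1), (3, 2), (4, 2), (5, 1),
        (6, 1), (7, 1), (8, 2), (9, 3), (10, 3)]

rn2 = defaultdict(int)       # per-Value running row number (RN2)
islands = defaultdict(list)  # (Value, RN1 - RN2) -> member IDs
for rn1, (rid, value) in enumerate(rows, start=1):
    rn2[value] += 1
    islands[(value, rn1 - rn2[value])].append(rid)

print(sorted(islands.values()))
# [[1, 2], [3, 4], [5, 6, 7], [8], [9, 10]]
```

Within each island, only the first ID (the row with `N = 1`) survives the DELETE.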
|
RANK() OVER PARTITION with RANK resetting
|
[
"",
"sql",
"sql-server",
"window-functions",
""
] |
I have two tables - Client and Banquet
```
Client Table
----------------------------
ID NAME
1 John
2 Jigar
3 Jiten
----------------------------
Banquet Table
----------------------------
ID CLIENT_ID DATED
1 1 2016.2.3
2 2 2016.2.5
3 2 2016.2.8
4 3 2016.2.6
5 1 2016.2.9
6 2 2016.2.5
7 2 2016.2.8
8 3 2016.2.6
9 1 2016.2.7
----------------------------
:::::::::: **Required Result**
----------------------------
ID NAME DATED
2 Jigar 2016.2.5
3 Jiten 2016.2.6
1 John 2016.2.7
```
> The result to be generated is such that
>
> > **1.** For each client, the FUTURE date that is CLOSEST or EQUAL to the current date should be selected, and the rows filtered and ordered in the format given in **Required Result**
>
> CURDATE() for current case is 5.2.2016
**FAILED: Query Logic 1**
```
SELECT c.id, c.name, b.dated
FROM client AS c, banquet AS b
WHERE c.id = b.client_id AND b.dated >= CURDATE()
ORDER BY (b.dated - CURDATE());
------------------------------------------- OUTPUT
ID NAME DATED
2 Jigar 2016.2.5
2 Jigar 2016.2.5
3 Jiten 2016.2.6
3 Jiten 2016.2.6
1 John 2016.2.7
2 Jigar 2016.2.8
2 Jigar 2016.2.8
1 John 2016.2.9
```
**FAILED: Query Logic 2**
```
SELECT c.id, c.name, b.dated
FROM client AS c, banquet AS b
WHERE b.dated = (
SELECT MIN(b.dated)
FROM banquet as b
WHERE b.client_id = c.id
AND b.dated >= CURDATE()
)
ORDER BY (b.dated - CURDATE());
------------------------------------------- OUTPUT
ID NAME DATED
2 Jigar 2016.2.5
2 Jigar 2016.2.5
3 Jiten 2016.2.6
3 Jiten 2016.2.6
1 John 2016.2.7
```
[**sqlfiddle**](http://sqlfiddle.com/#!9/aded8/1)
> **UPDATE** : Further result to be generated is such that
>
> > **2.** Clients WITHOUT a DATED entry should also be listed, possibly with a NULL
> >
> > **3.** the information other than DATED in the BANQUET table also needs to be listed
**UPDATED Required Result**
```
ID NAME DATED MEAL
2 Jigar 2016.2.5 lunch
3 Jiten 2016.2.6 breakfast
1 John 2016.2.7 dinner
4 Junior - -
5 Master - supper
```
|
For this query, I suggest applying your `WHERE` condition `>= CURDATE()` and then `SELECT` the `MIN(dated)` with `GROUP BY client_id`:
```
SELECT b.client_id, MIN(b.dated) FROM banquet b
WHERE b.dated >= CURDATE()
GROUP BY b.client_id;
```
From this, you can add the necessary `JOIN` to the client table to get the client name:
```
SELECT b.client_id, c.name, MIN(b.dated) FROM banquet b
INNER JOIN client c
ON c.id = b.client_id
WHERE b.dated >= CURDATE()
GROUP BY b.client_id;
```
SQLFiddle: <http://sqlfiddle.com/#!9/aded8/18>
EDITED TO REFLECT NEW PARTS OF QUESTION:
Based on the new info you added about handling NULLs and the 'meal' column, I've made some changes. This updated query handles possible NULL values in `dated` (by adjusting the WHERE clause) and also includes meal information.
```
SELECT b.client_id, c.name,
MIN(b.dated) AS dated,
IFNULL(b.meal, '-') AS meal
FROM banquet b
INNER JOIN client c
ON c.id = b.client_id
WHERE b.dated >= CURDATE() OR b.dated IS NULL
GROUP BY b.client_id;
```
or you can take some of this and combine it with Gordon Linoff's answer, which sounds like it will perform better overall.
New SQLFiddle: <http://sqlfiddle.com/#!9/a4055/2>
|
One approach uses a correlated subquery:
```
select c.*,
       (select min(dated)
        from banquet b
        where b.client_id = c.id and
              b.dated >= CURDATE()
       ) as dated
from client c;
```
Then, I would recommend an index on `banquet(client_id, dated)`.
The advantage of this approach is performance. It does not require an aggregation over the entire client table. In fact, the correlated subquery can take advantage of the index, so the query should have good performance.
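The correlated-subquery logic (smallest `dated` on or after today, per client) can be sanity-checked in plain Python against the question's data:

```python
from datetime import date

today = date(2016, 2, 5)
# (client_id, dated) pairs from the banquet table.
banquet = [
    (1, date(2016, 2, 3)), (2, date(2016, 2, 5)), (2, date(2016, 2, 8)),
    (3, date(2016, 2, 6)), (1, date(2016, 2, 9)), (2, date(2016, 2, 5)),
    (2, date(2016, 2, 8)), (3, date(2016, 2, 6)), (1, date(2016, 2, 7)),
]

# Smallest dated >= today, per client (MIN(dated) WHERE dated >= CURDATE()).
nearest = {}
for client_id, dated in banquet:
    if dated >= today and (client_id not in nearest or dated < nearest[client_id]):
        nearest[client_id] = dated

result = sorted(nearest.items(), key=lambda kv: kv[1])  # order by upcoming date
print([cid for cid, _ in result])  # [2, 3, 1]
```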
|
MySQL get the nearest future date to given date, from the dates located in different table having Common ID
|
[
"",
"mysql",
"sql",
"join",
""
] |
I need to split a date like '01/12/15' and replace the year part from 15 to 2015 (ie; 01-12-2015). I get the year by the sql query:
```
select YEAR('10/12/15')
```
It returns the year 2015, but I have to replace 15 with 2015. How do I achieve this?
Can anyone here please help me? Thanks in advance.
Edited:
I've tried following query too..
```
declare @date varchar='10/12/2015'
declare @datenew date
SELECT @datenew=CONVERT(nvarchar(10), CAST(@date AS DATETIME), 103)
print @datenew
```
but it throws an error like this:
Conversion failed when converting date and/or time from character string.
How do I change the varchar to a date and replace its year part with 4 digits? Please help me.
```
DECLARE @intFlag INT,@date varchar(150),@payperiod numeric(18,0),@emp_Id varchar(50)
SET @intFlag = 1
declare @count as int set @count=(select count(*) from @myTable)
WHILE (@intFlag <=@count)
BEGIN
select @emp_Id=Employee_Id from @myTable where rownum=@intFlag
select @date=attendance_date from @myTable where rownum=@intFlag
declare @datenew datetime
SELECT @datenew=convert(datetime,CONVERT(nvarchar(10), CAST(@date AS DATETIME), 103) ,103)
```
It throws the error "The conversion of a varchar data type to a datetime data type resulted in an out-of-range value."
|
As you said your column is in `varchar` type, try the following
**Query**
```
CREATE TABLE #temp
(
dt VARCHAR(50)
);
INSERT INTO #temp VALUES
('01/12/15'),
('02/12/15'),
('03/12/15'),
('04/12/15'),
('05/12/15');
UPDATE #temp
SET dt = REPLACE(LEFT(dt, LEN(dt) - 2)
+ CAST(YEAR(CAST(dt AS DATE)) AS VARCHAR(4)), '/', '-');
SELECT * FROM #temp;
```
**EDIT**
While declaring the variable `@date` you have not specified the length.
Check the below sql query.
```
declare @date varchar(10)='10/12/2015'
declare @datenew date
SELECT @datenew=CONVERT(nvarchar(10), CAST(@date AS DATETIME), 103)
print @datenew
```
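If the conversion ever needs to happen client-side, note that Python's `strptime` already expands a two-digit year via `%y`; this is only an illustrative sketch, not part of the T-SQL answer:

```python
from datetime import datetime

# %y maps 00-68 to 20xx and 69-99 to 19xx, so '15' becomes 2015.
d = datetime.strptime("01/12/15", "%d/%m/%y").date()
print(d.isoformat())           # 2015-12-01
print(d.strftime("%d-%m-%Y"))  # 01-12-2015, the format the question asked for
```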
|
Problem with your query is that you haven't specified length for `varchar` datatype:
```
declare @date varchar(12)='10/12/2015'
declare @datenew date
SELECT @datenew=CONVERT(nvarchar(10), CAST(@date AS DATETIME), 103)
print @datenew
```
|
Splitting a datetime and replace the year part
|
[
"",
"sql",
"sql-server-2008",
"datetime",
""
] |
I need some help with a problem that I am struggling to solve.
Example table:
```
ID |Identifier1 | Identifier2
---------------------------------
1 | a | c
2 | b | f
3 | a | g
4 | c | h
5 | b | j
6 | d | f
7 | e | k
8 | i |
9 | l | h
```
I want to group identifiers that are related with each other between two columns and assign a unique group id.
Desired Output:
```
Identifier | Gr_ID | Gr.Members
---------------------------------------------------
a | 1 | (a,c,g,h,l)
b | 2 | (b,d,f,j)
c | 1 | (a,c,g,h,l)
d | 2 | (b,d,f,j)
e | 3 | (e,k)
f | 2 | (b,d,f,j)
g | 1 | (a,c,g,h,l)
h | 1 | (a,c,g,h,l)
j | 2 | (b,d,f,j)
k | 3 | (e,k)
l | 1 | (a,c,g,h,l)
i | 4 | (i)
```
Note: the Gr.Members column is not necessary; it is mostly shown for a clearer view.
> So the definition for a group is: A row belongs to a group if it
> shares at least one identifier with at least one row of this group
But the group id has to be assigned to each identifier(selected by the union of the two columns) not to the row.
Any help on how to build a query to give the desired output?
Thank you.
---
**Update:** Below are some extra sample sets with their expected output.
---
Given table:
```
Identifier1 | Identifier2
----------------------------
a | f
a | g
a | NULL
b | c
b | a
b | h
b | j
b | NULL
b | NULL
b | g
c | k
c | b
d | l
d | f
d | g
d | m
d | a
d | NULL
d | a
e | c
e | b
e | NULL
```
Expected output: all the records should belong to the same group with group ID = 1.
---
Given Table:
```
Identifier1 | Identifier2
--------------------------
a | a
b | b
c | a
c | b
c | c
```
Expected output: The records should be in the same group with group ID = 1.
|
Here is a variant that doesn't use cursor, but uses a single recursive query.
Essentially, it treats the data as edges in a graph and recursively traverses all edges, stopping when a cycle is detected. Then it puts each set of connected nodes into a group and gives each group a number.
See the detailed explanations of how it works below. I recommend you to run the query CTE-by-CTE and examine each intermediate result to understand what it does.
**Sample 1**
```
DECLARE @T TABLE (ID int, Ident1 char(1), Ident2 char(1));
INSERT INTO @T (ID, Ident1, Ident2) VALUES
(1, 'a', 'a'),
(2, 'b', 'b'),
(3, 'c', 'a'),
(4, 'c', 'b'),
(5, 'c', 'c');
```
**Sample 2**
I added one more row with `z` value to have multiple rows with unpaired values.
```
DECLARE @T TABLE (ID int, Ident1 char(1), Ident2 char(1));
INSERT INTO @T (ID, Ident1, Ident2) VALUES
(1, 'a', 'a'),
(1, 'a', 'c'),
(2, 'b', 'f'),
(3, 'a', 'g'),
(4, 'c', 'h'),
(5, 'b', 'j'),
(6, 'd', 'f'),
(7, 'e', 'k'),
(8, 'i', NULL),
(88, 'z', 'z'),
(9, 'l', 'h');
```
**Sample 3**
```
DECLARE @T TABLE (ID int, Ident1 char(1), Ident2 char(1));
INSERT INTO @T (ID, Ident1, Ident2) VALUES
(1, 'a', 'f'),
(2, 'a', 'g'),
(3, 'a', NULL),
(4, 'b', 'c'),
(5, 'b', 'a'),
(6, 'b', 'h'),
(7, 'b', 'j'),
(8, 'b', NULL),
(9, 'b', NULL),
(10, 'b', 'g'),
(11, 'c', 'k'),
(12, 'c', 'b'),
(13, 'd', 'l'),
(14, 'd', 'f'),
(15, 'd', 'g'),
(16, 'd', 'm'),
(17, 'd', 'a'),
(18, 'd', NULL),
(19, 'd', 'a'),
(20, 'e', 'c'),
(21, 'e', 'b'),
(22, 'e', NULL);
```
**Query**
```
WITH
CTE_Idents
AS
(
SELECT Ident1 AS Ident
FROM @T
UNION
SELECT Ident2 AS Ident
FROM @T
)
,CTE_Pairs
AS
(
SELECT Ident1, Ident2
FROM @T
WHERE Ident1 <> Ident2
UNION
SELECT Ident2 AS Ident1, Ident1 AS Ident2
FROM @T
WHERE Ident1 <> Ident2
)
,CTE_Recursive
AS
(
SELECT
CAST(CTE_Idents.Ident AS varchar(8000)) AS AnchorIdent
, Ident1
, Ident2
, CAST(',' + Ident1 + ',' + Ident2 + ',' AS varchar(8000)) AS IdentPath
, 1 AS Lvl
FROM
CTE_Pairs
INNER JOIN CTE_Idents ON CTE_Idents.Ident = CTE_Pairs.Ident1
UNION ALL
SELECT
CTE_Recursive.AnchorIdent
, CTE_Pairs.Ident1
, CTE_Pairs.Ident2
, CAST(CTE_Recursive.IdentPath + CTE_Pairs.Ident2 + ',' AS varchar(8000)) AS IdentPath
, CTE_Recursive.Lvl + 1 AS Lvl
FROM
CTE_Pairs
INNER JOIN CTE_Recursive ON CTE_Recursive.Ident2 = CTE_Pairs.Ident1
WHERE
CTE_Recursive.IdentPath NOT LIKE CAST('%,' + CTE_Pairs.Ident2 + ',%' AS varchar(8000))
)
,CTE_RecursionResult
AS
(
SELECT AnchorIdent, Ident1, Ident2
FROM CTE_Recursive
)
,CTE_CleanResult
AS
(
SELECT AnchorIdent, Ident1 AS Ident
FROM CTE_RecursionResult
UNION
SELECT AnchorIdent, Ident2 AS Ident
FROM CTE_RecursionResult
)
SELECT
CTE_Idents.Ident
,CASE WHEN CA_Data.XML_Value IS NULL
THEN CTE_Idents.Ident ELSE CA_Data.XML_Value END AS GroupMembers
,DENSE_RANK() OVER(ORDER BY
CASE WHEN CA_Data.XML_Value IS NULL
THEN CTE_Idents.Ident ELSE CA_Data.XML_Value END
) AS GroupID
FROM
CTE_Idents
CROSS APPLY
(
SELECT CTE_CleanResult.Ident+','
FROM CTE_CleanResult
WHERE CTE_CleanResult.AnchorIdent = CTE_Idents.Ident
ORDER BY CTE_CleanResult.Ident FOR XML PATH(''), TYPE
) AS CA_XML(XML_Value)
CROSS APPLY
(
SELECT CA_XML.XML_Value.value('.', 'NVARCHAR(MAX)')
) AS CA_Data(XML_Value)
WHERE
CTE_Idents.Ident IS NOT NULL
ORDER BY Ident;
```
**Result 1**
```
+-------+--------------+---------+
| Ident | GroupMembers | GroupID |
+-------+--------------+---------+
| a | a,b,c, | 1 |
| b | a,b,c, | 1 |
| c | a,b,c, | 1 |
+-------+--------------+---------+
```
**Result 2**
```
+-------+--------------+---------+
| Ident | GroupMembers | GroupID |
+-------+--------------+---------+
| a | a,c,g,h,l, | 1 |
| b | b,d,f,j, | 2 |
| c | a,c,g,h,l, | 1 |
| d | b,d,f,j, | 2 |
| e | e,k, | 3 |
| f | b,d,f,j, | 2 |
| g | a,c,g,h,l, | 1 |
| h | a,c,g,h,l, | 1 |
| i | i | 4 |
| j | b,d,f,j, | 2 |
| k | e,k, | 3 |
| l | a,c,g,h,l, | 1 |
| z | z | 5 |
+-------+--------------+---------+
```
**Result 3**
```
+-------+--------------------------+---------+
| Ident | GroupMembers | GroupID |
+-------+--------------------------+---------+
| a | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| b | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| c | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| d | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| e | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| f | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| g | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| h | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| j | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| k | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| l | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
| m | a,b,c,d,e,f,g,h,j,k,l,m, | 1 |
+-------+--------------------------+---------+
```
## How it works
I'll use the second set of sample data for this explanation.
**`CTE_Idents`**
`CTE_Idents` gives the list of all Identifiers that appear in both `Ident1` and `Ident2` columns.
Since they can appear in any order we `UNION` both columns together. `UNION` also removes any duplicates.
```
+-------+
| Ident |
+-------+
| NULL |
| a |
| b |
| c |
| d |
| e |
| f |
| g |
| h |
| i |
| j |
| k |
| l |
| z |
+-------+
```
**`CTE_Pairs`**
`CTE_Pairs` gives the list of all edges of the graph in both directions. Again, `UNION` is used to remove any duplicates.
```
+--------+--------+
| Ident1 | Ident2 |
+--------+--------+
| a | c |
| a | g |
| b | f |
| b | j |
| c | a |
| c | h |
| d | f |
| e | k |
| f | b |
| f | d |
| g | a |
| h | c |
| h | l |
| j | b |
| k | e |
| l | h |
+--------+--------+
```
**`CTE_Recursive`**
`CTE_Recursive` is the main part of the query that recursively traverses the graph starting from each unique Identifier.
These starting rows are produced by the first part of `UNION ALL`.
The second part of `UNION ALL` recursively joins to itself linking `Ident2` to `Ident1`.
Since we pre-made `CTE_Pairs` with all edges written in both directions, we can always link only `Ident2` to `Ident1` and we'll get all paths in the graph.
At the same time the query builds `IdentPath` - a string of comma-delimited Identifiers that have been traversed so far.
It is used in the `WHERE` filter:
```
CTE_Recursive.IdentPath NOT LIKE CAST('%,' + CTE_Pairs.Ident2 + ',%' AS varchar(8000))
```
As soon as we come across the Identifier that had been included in the Path before, the recursion stops as the list of connected nodes is exhausted.
`AnchorIdent` is the starting Identifier for the recursion, it will be used later to group results.
`Lvl` is not really used, I included it for better understanding of what is going on.
```
+-------------+--------+--------+-------------+-----+
| AnchorIdent | Ident1 | Ident2 | IdentPath | Lvl |
+-------------+--------+--------+-------------+-----+
| a | a | c | ,a,c, | 1 |
| a | a | g | ,a,g, | 1 |
| b | b | f | ,b,f, | 1 |
| b | b | j | ,b,j, | 1 |
| c | c | a | ,c,a, | 1 |
| c | c | h | ,c,h, | 1 |
| d | d | f | ,d,f, | 1 |
| e | e | k | ,e,k, | 1 |
| f | f | b | ,f,b, | 1 |
| f | f | d | ,f,d, | 1 |
| g | g | a | ,g,a, | 1 |
| h | h | c | ,h,c, | 1 |
| h | h | l | ,h,l, | 1 |
| j | j | b | ,j,b, | 1 |
| k | k | e | ,k,e, | 1 |
| l | l | h | ,l,h, | 1 |
| l | h | c | ,l,h,c, | 2 |
| l | c | a | ,l,h,c,a, | 3 |
| l | a | g | ,l,h,c,a,g, | 4 |
| j | b | f | ,j,b,f, | 2 |
| j | f | d | ,j,b,f,d, | 3 |
| h | c | a | ,h,c,a, | 2 |
| h | a | g | ,h,c,a,g, | 3 |
| g | a | c | ,g,a,c, | 2 |
| g | c | h | ,g,a,c,h, | 3 |
| g | h | l | ,g,a,c,h,l, | 4 |
| f | b | j | ,f,b,j, | 2 |
| d | f | b | ,d,f,b, | 2 |
| d | b | j | ,d,f,b,j, | 3 |
| c | h | l | ,c,h,l, | 2 |
| c | a | g | ,c,a,g, | 2 |
| b | f | d | ,b,f,d, | 2 |
| a | c | h | ,a,c,h, | 2 |
| a | h | l | ,a,c,h,l, | 3 |
+-------------+--------+--------+-------------+-----+
```
**`CTE_CleanResult`**
`CTE_CleanResult` leaves only relevant parts from `CTE_Recursive` and again merges both `Ident1` and `Ident2` using `UNION`.
```
+-------------+-------+
| AnchorIdent | Ident |
+-------------+-------+
| a | a |
| a | c |
| a | g |
| a | h |
| a | l |
| b | b |
| b | d |
| b | f |
| b | j |
| c | a |
| c | c |
| c | g |
| c | h |
| c | l |
| d | b |
| d | d |
| d | f |
| d | j |
| e | e |
| e | k |
| f | b |
| f | d |
| f | f |
| f | j |
| g | a |
| g | c |
| g | g |
| g | h |
| g | l |
| h | a |
| h | c |
| h | g |
| h | h |
| h | l |
| j | b |
| j | d |
| j | f |
| j | j |
| k | e |
| k | k |
| l | a |
| l | c |
| l | g |
| l | h |
| l | l |
+-------------+-------+
```
**Final SELECT**
Now we need to build a string of comma-separated `Ident` values for each `AnchorIdent`.
`CROSS APPLY` with `FOR XML` does it.
`DENSE_RANK()` calculates the `GroupID` numbers for each `AnchorIdent`.
|
This script produces the outputs for test sets 1, 2 and 3 as required. Notes on the algorithm are included as comments in the script.
Be aware:
* This algorithm **destroys** the input set. In the script the input set is `#tree`. So using this script requires inserting the source data into `#tree`
* This algorithm does not work for `NULL` values for nodes. Replace `NULL` values with `CHAR(0)` when inserting into `#tree` using `ISNULL(source_col,CHAR(0))` to circumvent this shortcoming. When selecting from the final result, replace `CHAR(0)` with `NULL` using `NULLIF(node,CHAR(0))`.
Note that the [answer using recursive CTEs](https://stackoverflow.com/a/35457468/243373) is more elegant in that it is a single SQL statement, but for large input sets using recursive CTEs may give abysmal execution time (see [this comment](https://stackoverflow.com/questions/35254260/how-to-find-all-connected-subgraphs-of-an-undirected-graph#comment58664276_35457468) on that answer). The solution as described below, while more convoluted, should run much faster for large input sets.
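When pulling the pairs out of the database is an option, this grouping is a textbook connected-components problem; a union-find sketch in Python (run here on the question's first sample, with `None` marking a missing Identifier2) handles it in near-linear time:

```python
from collections import defaultdict

def find(parent, x):
    """Return the root of x, compressing the path as we go."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    """Merge the components containing a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Edge list from the question's first sample; None = missing Identifier2.
edges = [("a", "c"), ("b", "f"), ("a", "g"), ("c", "h"), ("b", "j"),
         ("d", "f"), ("e", "k"), ("i", None), ("l", "h")]

parent = {}
for a, b in edges:
    for node in (a, b):
        if node is not None and node not in parent:
            parent[node] = node  # each new node starts as its own root
    if a is not None and b is not None:
        union(parent, a, b)

groups = defaultdict(list)
for node in parent:
    groups[find(parent, node)].append(node)
members = sorted(sorted(g) for g in groups.values())
print(members)
# [['a', 'c', 'g', 'h', 'l'], ['b', 'd', 'f', 'j'], ['e', 'k'], ['i']]
```

The group IDs are then just the positions of the component lists, assigned in any deterministic order.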
---
```
SET NOCOUNT ON;
CREATE TABLE #tree(node_l CHAR(1),node_r CHAR(1));
CREATE NONCLUSTERED INDEX NIX_tree_node_l ON #tree(node_l)INCLUDE(node_r); -- covering indices to speed up lookup
CREATE NONCLUSTERED INDEX NIX_tree_node_r ON #tree(node_r)INCLUDE(node_l);
INSERT INTO #tree(node_l,node_r) VALUES
('a','c'),('b','f'),('a','g'),('c','h'),('b','j'),('d','f'),('e','k'),('i','i'),('l','h'); -- test set 1
--('a','f'),('a','g'),(CHAR(0),'a'),('b','c'),('b','a'),('b','h'),('b','j'),('b',CHAR(0)),('b',CHAR(0)),('b','g'),('c','k'),('c','b'),('d','l'),('d','f'),('d','g'),('d','m'),('d','a'),('d',CHAR(0)),('d','a'),('e','c'),('e','b'),('e',CHAR(0)); -- test set 2
--('a','a'),('b','b'),('c','a'),('c','b'),('c','c'); -- test set 3
CREATE TABLE #sets(node CHAR(1) PRIMARY KEY,group_id INT); -- nodes with group id assigned
CREATE TABLE #visitor_queue(node CHAR(1)); -- contains nodes to visit
CREATE TABLE #visited_nodes(node CHAR(1) PRIMARY KEY CLUSTERED WITH(IGNORE_DUP_KEY=ON)); -- nodes visited for nodes on the queue; ignore duplicate nodes when inserted
CREATE TABLE #visitor_ctx(node_l CHAR(1),node_r CHAR(1)); -- context table, contains deleted nodes as they are visited from #tree
DECLARE @last_created_group_id INT=0;
-- Notes:
-- 1. This algorithm is destructive in its input set, ie #tree will be empty at the end of this procedure
-- 2. This algorithm does not accept NULL values. Populate #tree with CHAR(0) for NULL values (using ISNULL(source_col,CHAR(0)), or COALESCE(source_col,CHAR(0)))
-- 3. When selecting from #sets, to regain the original NULL values use NULLIF(node,CHAR(0))
WHILE EXISTS(SELECT*FROM #tree)
BEGIN
TRUNCATE TABLE #visited_nodes;
TRUNCATE TABLE #visitor_ctx;
-- push first nodes onto the queue (via #visitor_ctx -> #visitor_queue)
DELETE TOP (1) t
OUTPUT deleted.node_l,deleted.node_r INTO #visitor_ctx(node_l,node_r)
FROM #tree AS t;
INSERT INTO #visitor_queue(node) SELECT node_l FROM #visitor_ctx UNION SELECT node_r FROM #visitor_ctx; -- UNION to filter when node_l equals node_r
INSERT INTO #visited_nodes(node) SELECT node FROM #visitor_queue; -- keep track of nodes visited
-- work down the queue by visiting linked nodes in #tree; nodes are deleted as they are visited
WHILE EXISTS(SELECT*FROM #visitor_queue)
BEGIN
TRUNCATE TABLE #visitor_ctx;
-- pop_front for node on the stack (via #visitor_ctx -> @node)
DELETE TOP (1) s
OUTPUT deleted.node INTO #visitor_ctx(node_l)
FROM #visitor_queue AS s;
DECLARE @node CHAR(1)=(SELECT node_l FROM #visitor_ctx);
TRUNCATE TABLE #visitor_ctx;
-- visit nodes in #tree where node_l or node_r equal target @node;
-- delete visited nodes from #tree, output to #visitor_ctx
DELETE t
OUTPUT deleted.node_l,deleted.node_r INTO #visitor_ctx(node_l,node_r)
FROM #tree AS t
WHERE t.node_l=@node OR t.node_r=@node;
-- insert visited nodes in the queue that haven't been visited before
INSERT INTO #visitor_queue(node)
(SELECT node_l FROM #visitor_ctx UNION SELECT node_r FROM #visitor_ctx) EXCEPT (SELECT node FROM #visited_nodes);
-- keep track of visited nodes (duplicates are ignored by the IGNORE_DUP_KEY option for the PK)
INSERT INTO #visited_nodes(node)
SELECT node_l FROM #visitor_ctx UNION SELECT node_r FROM #visitor_ctx;
END
SET @last_created_group_id+=1; -- create new group id
-- insert group into #sets
INSERT INTO #sets(group_id,node)
SELECT group_id=@last_created_group_id,node
FROM #visited_nodes;
END
SELECT node=NULLIF(node,CHAR(0)),group_id FROM #sets ORDER BY node; -- nodes with their assigned group id
SELECT g.group_id,m.members -- groups with their members
FROM
(SELECT DISTINCT group_id FROM #sets) AS g
CROSS APPLY (
SELECT members=STUFF((
SELECT ','+ISNULL(CAST(NULLIF(si.node,CHAR(0)) AS VARCHAR(4)),'NULL')
FROM #sets AS si
WHERE si.group_id=g.group_id
FOR XML PATH('')
),1,1,'')
) AS m
ORDER BY g.group_id;
DROP TABLE #visitor_queue;
DROP TABLE #visited_nodes;
DROP TABLE #visitor_ctx;
DROP TABLE #sets;
DROP TABLE #tree;
```
---
Output for set 1:
```
+------+----------+
| node | group_id |
+------+----------+
| a | 1 |
| b | 2 |
| c | 1 |
| d | 2 |
| e | 4 |
| f | 2 |
| g | 1 |
| h | 1 |
| i | 3 |
| j | 2 |
| k | 4 |
| l | 1 |
+------+----------+
```
---
Output for set 2:
```
+------+----------+
| node | group_id |
+------+----------+
| NULL | 1 |
| a | 1 |
| b | 1 |
| c | 1 |
| d | 1 |
| e | 1 |
| f | 1 |
| g | 1 |
| h | 1 |
| j | 1 |
| k | 1 |
| l | 1 |
| m | 1 |
+------+----------+
```
---
Output for set 3:
```
+------+----------+
| node | group_id |
+------+----------+
| a | 1 |
| b | 1 |
| c | 1 |
+------+----------+
```
|
How to find all connected subgraphs of an undirected graph
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table of ward
```
ward_number | class | capacity
________________________________________
1 | A1 | 1
2 | A1 | 2
3 | B1 | 3
4 | C | 4
5 | B2 | 5
```
*capacity = how many beds there are in the ward*
I also have a table called ward\_stay:
```
ward_number | from_date | to_date
_____________________________________________
2 | 2015-01-01 | 2015-03-08
3 | 2015-01-16 | 2015-02-18
6 | 2015-03-05 | 2015-03-18
3 | 2015-04-15 | 2015-04-20
1 | 2015-05-19 | 2015-05-30
```
I want to count the number of beds available in ward with class 'B1' on date '2015-04-15':
```
ward_number | count
_____________________
3 | 2
```
The count is basically capacity minus the number of times ward\_number 3 appears in ward\_stay on that date.
I managed to get the number of times ward\_number 3 appears, but I don't know how to subtract that result from the capacity.
Here's my code:
```
select count(ward_number) AS 'result'
from ward_stay
where ward_number = (select ward_number
from ward
where class = 'B1');
```
How do I subtract capacity from this result?
|
```
select w.ward_number,
w.capacity - count(ws.ward_number) AS "result"
from ward as w left join ward_stay as ws
on ws.ward_number = w.ward_number
and date '2015-05-19' between ws.from_date and ws.to_date
where w.class = 'B1' -- which class
-- bed not occupied on that date
group by w.ward_number, w.capacity
having w.capacity - count(*) > 0 -- only available wards
```
See [fiddle](http://sqlfiddle.com/#!15/7053e/1)
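The pattern above is portable across databases. A minimal runnable sketch of the same LEFT JOIN approach, using SQLite via Python as a stand-in dialect (the plain string comparison works here only because the dates are stored in ISO format):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ward (ward_number INTEGER, class TEXT, capacity INTEGER);
CREATE TABLE ward_stay (ward_number INTEGER, from_date TEXT, to_date TEXT);
INSERT INTO ward VALUES (1,'A1',1),(2,'A1',2),(3,'B1',3),(4,'C',4),(5,'B2',5);
INSERT INTO ward_stay VALUES
  (2,'2015-01-01','2015-03-08'),(3,'2015-01-16','2015-02-18'),
  (6,'2015-03-05','2015-03-18'),(3,'2015-04-15','2015-04-20'),
  (1,'2015-05-19','2015-05-30');
""")

# The date filter lives in the ON clause so wards with no matching stay
# still appear (a WHERE filter on ward_stay would turn this into an inner join).
rows = conn.execute("""
    SELECT w.ward_number, w.capacity - COUNT(ws.ward_number) AS free_beds
    FROM ward AS w
    LEFT JOIN ward_stay AS ws
           ON ws.ward_number = w.ward_number
          AND '2015-04-15' BETWEEN ws.from_date AND ws.to_date
    WHERE w.class = 'B1'
    GROUP BY w.ward_number, w.capacity
""").fetchall()
print(rows)  # [(3, 2)]
```

`COUNT(ws.ward_number)` (rather than `COUNT(*)`) counts only matched rows, so a fully free ward yields `capacity - 0`.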
|
**[SQL Fiddle Demo](http://sqlfiddle.com/#!15/2ab55/12)**
Using `2015-01-17` instead, I calculate the total of `occupied` beds on that day, then join back to subtract from the original `capacity`. In case all beds are free, the `LEFT JOIN` will return `NULL`, so `COALESCE` will substitute `0`.
```
SELECT w."ward_number", "capacity" - COALESCE(occupied, 0) as "count"
FROM wards w
LEFT JOIN (
SELECT "ward_number", COUNT(*) occupied
FROM ward_stay
WHERE to_date('2015-01-17', 'yyyy-mm-dd') BETWEEN "from_date" and "to_date"
GROUP BY "ward_number"
) o
ON w."ward_number" = o."ward_number"
WHERE w."class" = 'B1'
```
**OUTPUT**
```
| ward_number | count |
|-------------|-------|
| 3 | 2 |
```
|
How to subtract two columns in different table
|
[
"",
"sql",
""
] |
I get the following error when I try to update my table, although there isn't any subquery:
```
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
```
---
MY QUERY :
```
UPDATE t1
SET t1.modified = 2
FROM TransActions AS t1
INNER JOIN Ruser R
ON t1.USERID = r.USERID
WHERE r.dep_code = 54 and r.dep_year =2014
and YEAR(t1.checktime) =2016 and MONTH(t1.checktime) =1 and t1.modified = 0
```
---
The data selected like this :
```
USERID empNum
3090 25
3090 25
2074 464
```
---
According to the comments my update trigger :
```
after update
as
declare @userid int , @date date
if (select userid from inserted)<>(select userid from deleted )
raiserror ('YOU ARE NOT ALLOWED TO PERFORME THIS ACTION',10 , 1)
ELSE
begin
set nocount on;
set @userid = (select userid from inserted)
set @date = (select convert(date , checktime) from inserted)
exec calc_atten @date , @userid
end
```
|
Triggers are executed per **statement**, not per **row**, that's the source of your error.
Your trigger assumes that the inserted and deleted tables will only ever have one row, however that is simply wrong.
The number of rows in the inserted / deleted tables is the number of rows affected by the DML statement (update/insert/delete).
I don't know what the procedure `calc_atten` does, but you need to find a way to execute its logic on a set level and not on scalar variables as it does now.
Your condition at the beginning of the trigger should be changed to fit a multi-row update.
One way to do it is this: (I could probably write it shorter and better if I would have known the table's structure)
```
IF EXISTS (
SELECT 1
FROM deleted d
INNER JOIN inserted i
ON d.[unique row identifier] = i.[unique row identifier]
WHERE i.userId <> d.UserId
)
```
\*[unique row identifier] stands for any column or column combination that is unique per row in that table. If the unique row identifier contains the UserId column then this will not work properly.
|
Your query is ok. The problem is the trigger. `inserted` and `deleted` are tables (well, really views but that is irrelevant), so they can contain multiple rows.
Assuming that `transactions` has a primary key, you can check the update by doing
```
declare @userid int , @date date ;
if (exists (select 1
from inserted i
where not exists (select 1
from deleted d
where d.transactionid = i.transactionid and
d.userid <> i.userid
)
)
)
begin
raiserror ('Changing user ids is not permitted', 10 , 1);
end;
else begin
set nocount on;
declare icursor cursor for select userid, checktime from inserted;
open icursor;
fetch next from icursor into @userid, @date;
while not @@FETCH_STATUS = 0
begin
exec calc_atten @date, @userid
fetch next from icursor into @userid, @date;
end;
close icursor; deallocate icursor;
end;
```
Cursors are not my favorite SQL construct. But, if you need to loop through a table and call a stored procedure, then they are appropriate. If you can rewrite the code to be set-based, then you can get rid of the cursor.
|
Update fails because Subquery returned more than 1 value
|
[
"",
"sql",
"sql-server",
"sql-update",
"subquery",
"inner-join",
""
] |
Basically I get this from a simple select query:
```
SELECT Site.RefID, SiteName, Dates.Date, Dates.Type
FROM Site, Dates
WHERE Site.RefID = Dates.RefID;
RefID | SiteName | Date | Type
1 Sydney 06-12-15 OPENED
1 Sydney 08-12-15 CLOSED
2 Mel 17-12-15 OPENED
2 Mel 19-12-15 CLOSED
```
But I want to separate it so the result is similar to this:
```
RefID | SiteName | DateOPENED | DateCLOSED
1 Sydney 06-12-15 | 08-12-15
```
Basically I want to compare the data tracking details
Apologies in advance if this question isn't structured very well :/ I'm a complete beginner
I was thinking maybe a select within a select or possible case when's, but I can't seem to get either working
|
Try the following approach, using `case` expression:
```
select s.RefID
, s.Name
     , min(case when d.Type = 'OPENED' then d.Date end) as DateOPENED
     , min(case when d.Type = 'CLOSED' then d.Date end) as DateCLOSED
from Site s
join Dates d on s.RefID = d.RefID
group by s.RefID, s.Name
```
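Conditional aggregation like this is portable. A self-contained sketch of the pattern using SQLite via Python (note that standard SQL uses `=`, not `==`, for comparison; the ISO date strings are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Site (RefID INTEGER, SiteName TEXT);
CREATE TABLE Dates (RefID INTEGER, Date TEXT, Type TEXT);
INSERT INTO Site VALUES (1, 'Sydney'), (2, 'Mel');
INSERT INTO Dates VALUES
  (1, '2015-12-06', 'OPENED'), (1, '2015-12-08', 'CLOSED'),
  (2, '2015-12-17', 'OPENED'), (2, '2015-12-19', 'CLOSED');
""")

# Each CASE picks out only the rows of one Type; MIN collapses the
# group to a single value per RefID, pivoting rows into columns.
rows = conn.execute("""
    SELECT s.RefID, s.SiteName,
           MIN(CASE WHEN d.Type = 'OPENED' THEN d.Date END) AS DateOpened,
           MIN(CASE WHEN d.Type = 'CLOSED' THEN d.Date END) AS DateClosed
    FROM Site s
    JOIN Dates d ON s.RefID = d.RefID
    GROUP BY s.RefID, s.SiteName
    ORDER BY s.RefID
""").fetchall()
print(rows)
# [(1, 'Sydney', '2015-12-06', '2015-12-08'),
#  (2, 'Mel', '2015-12-17', '2015-12-19')]
```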
|
```
SELECT A.RefId, A.SiteName, A.Date DateOpened, B.Date DateClosed
FROM #tbl A JOIN #tbl B
ON A.RefId = B.RefID
AND A.Type = 'OPENED'
AND B.Type = 'CLOSED'
```
For the sake of simplicity, have replaced the query with `#tbl`(you can deal with it howsoever you'd like to).
|
SQL query: Separate column where IDs are the same but type is different
|
[
"",
"mysql",
"sql",
""
] |
Is it possible to combine multiple CTEs in single query?
I am looking for way to get result like this:
```
WITH cte1 AS (
...
),
WITH RECURSIVE cte2 AS (
...
),
WITH cte3 AS (
...
)
SELECT ... FROM cte3 WHERE ...
```
As you can see, I have one recursive CTE and two non recursive.
|
Use the key word `WITH` *once* at the top. If any of your Common Table Expressions (CTE) are recursive (rCTE) you have to add the keyword `RECURSIVE` at the top *once* also, even if not all CTEs are recursive:
```
WITH RECURSIVE
cte1 AS (...) -- can still be non-recursive
, cte2 AS (SELECT ...
UNION ALL
SELECT ...) -- recursive term
, cte3 AS (...)
SELECT ... FROM cte3 WHERE ...
```
[The manual:](https://www.postgresql.org/docs/current/sql-select.html#SQL-WITH)
> If `RECURSIVE` is specified, it **allows** a `SELECT` subquery to
> reference itself by name.
Bold emphasis mine. And, even more insightful:
> **Another effect of `RECURSIVE` is that `WITH` queries need not be ordered**:
> a query can reference another one that is later in the list. (However,
> circular references, or mutual recursion, are not implemented.)
> Without `RECURSIVE`, `WITH` queries can only reference sibling `WITH`
> queries that are earlier in the `WITH` list.
Bold emphasis mine again. Meaning that the order of `WITH` clauses is *meaningless* when the `RECURSIVE` key word has been used.
BTW, since `cte1` and `cte2` in the example are not referenced in the outer `SELECT` and are plain `SELECT` commands themselves (no collateral effects), they are never executed (unless referenced in `cte3`).
|
Yes. You don't repeat the `WITH`. You just use a comma:
```
WITH cte1 AS (
...
),
cte2 AS (
...
),
cte3 AS (
...
)
SELECT ... FROM cte3 WHERE ...
```
And: Only use single quotes for string and date constants. Don't use them for column aliases. They are not allowed for CTE names anyway.
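The rules above are easy to verify in any engine with CTE support. A small sketch using Python's built-in SQLite, which accepts the same syntax (one `WITH RECURSIVE` at the top covering a mixed list of recursive and non-recursive CTEs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE
      nums AS (                     -- recursive CTE: generates 1..5
        SELECT 1 AS n
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 5
      ),
      squares AS (                  -- non-recursive, references nums
        SELECT n, n * n AS sq FROM nums
      ),
      big AS (                      -- non-recursive, references squares
        SELECT n, sq FROM squares WHERE sq > 5
      )
    SELECT n, sq FROM big ORDER BY n
""").fetchall()
print(rows)  # [(3, 9), (4, 16), (5, 25)]
```

Each CTE after the first is introduced with a comma only; the `RECURSIVE` keyword appears once even though only `nums` is actually recursive.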
|
How to use multiple CTEs in a single SQL query?
|
[
"",
"sql",
"postgresql",
"common-table-expression",
"recursive-cte",
""
] |
I've a simple SQL table which looks like this-
```
CREATE TABLE msg (
from_person character varying(10),
from_location character varying(10),
to_person character varying(10),
to_location character varying(10),
msglength integer,
ts timestamp without time zone
);
```
[](https://i.stack.imgur.com/5Lqbp.png)
I want to find out for each row in the table if a different 'from\_person' and 'from\_location' has interacted with the 'to\_person' in the current row in last 3 minutes.
For example, in above table, for row # 4, other than mary from Mumbai (current row), nancy from NYC and bob from Barcelona has also sent a message to charlie in last 3 minutes so the count is 2.
Similarly, for row#2, other than bob from Barcelona (current row), only nancy from NYC has sent a message to charlie in ca (current row) so the count is 1
Example desired output-
```
0
1
0
2
```
I tried using window function but it seems that in frame clause I can specify rows count before and after but I can't specify a time itself.
|
As is well known, every table in Postgres has a primary key - or at least should have one. It would be great if you had a primary key defining the expected order of rows.
Example data:
```
create table msg (
id int primary key,
from_person text,
to_person text,
ts timestamp without time zone
);
insert into msg values
(1, 'nancy', 'charlie', '2016-02-01 01:00:00'),
(2, 'bob', 'charlie', '2016-02-01 01:00:00'),
(3, 'charlie', 'nancy', '2016-02-01 01:00:01'),
(4, 'mary', 'charlie', '2016-02-01 01:02:00');
```
The query:
```
select m1.id, count(m2)
from msg m1
left join msg m2
on m2.id < m1.id
and m2.to_person = m1.to_person
and m2.ts >= m1.ts- '3m'::interval
group by 1
order by 1;
id | count
----+-------
1 | 0
2 | 1
3 | 0
4 | 2
(4 rows)
```
In the lack of a primary key you can use the function `row_number()`, for example:
```
with msg_with_rn as (
select *, row_number() over (order by ts, from_person desc) rn
from msg
)
select m1.id, count(m2)
from msg_with_rn m1
left join msg_with_rn m2
on m2.rn < m1.rn
and m2.to_person = m1.to_person
and m2.ts >= m1.ts- '3m'::interval
group by 1
order by 1;
```
Note that I have used `row_number() over (order by ts, from_person desc)` to get the sequence of rows as you have presented in the question. Of course, you should decide yourself how to resolve ambiguities arising from the same values of the column `ts` (as in the first two rows).
|
**Building on your *actual* question**, this would be a correct answer:
```
SELECT count(m2.to_person) AS ct_3min
FROM msg m1
LEFT JOIN msg m2
ON m2.to_person = m1.to_person
AND (m2.from_person, m2.from_location) <> (m1.from_person, m1.from_location)
AND m2.ts <= m1.ts -- including same timestamp (?)
AND m2.ts >= m1.ts - interval '3 min'
GROUP BY m1.ctid
ORDER BY m1.ctid;
```
Assuming `to_person`, `from_person` and `from_location` are all defined `NOT NULL`.
Returns:
```
1 -- !!
1
0
2
```
Note that the result is basically ***meaningless*** without additional columns, any unique combination of columns, ideally a PK. I return the rows in the current physical order - which can change any time without warning. There is no natural order of rows in a relational table. Without an unambiguous `ORDER BY` clause, the order of result rows is unreliable.
According to your definition the first two rows (according to your displayed order) need to have the same result: `1` - or `0` if you don't count same timestamp - `0` for one and `1` for the other would be incorrect according to your definition.
In the absence of any unique key, I am using the [`ctid`](http://www.postgresql.org/docs/current/interactive/ddl-system-columns.html) as poor-man's surrogate key. More:
* [In-order sequence generation](https://stackoverflow.com/questions/17500013/in-order-sequence-generation/17503095#17503095)
You *should* still have a primary key defined in your table, but it's by no means compulsory. That's not the only dubious detail in your table layout. You should probably operate with `timestamp with time zone`, have some `NOT NULL` constraints and only `person_id` columns referencing a `person` table in a properly normalized design. Something like:
```
CREATE TABLE msg (
msg_id serial PRIMARY KEY
, from_person_id integer NOT NULL REFERENCES person
, to_person_id integer NOT NULL REFERENCES person
, msglength integer
, ts timestamp with time zone
);
```
Either way, relying on a surrogate PK for the purpose of your query would be *plain wrong*. The "next" `msg_id` does not even have to have a later timestamp. In a multi-user database a sequence does not guarantee anything of the sort.
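As a cross-check of the counting logic (not of the Postgres-specific `ctid` trick), here is a sketch with simplified columns, using SQLite via Python; `datetime(ts, '-3 minutes')` stands in for Postgres's `ts - interval '3 min'`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE msg (id INTEGER PRIMARY KEY, from_person TEXT,
                  to_person TEXT, ts TEXT);
INSERT INTO msg VALUES
  (1, 'nancy',   'charlie', '2016-02-01 01:00:00'),
  (2, 'bob',     'charlie', '2016-02-01 01:00:00'),
  (3, 'charlie', 'nancy',   '2016-02-01 01:00:01'),
  (4, 'mary',    'charlie', '2016-02-01 01:02:00');
""")

# Count, for each message, earlier messages to the same recipient
# from a different sender within the last 3 minutes.
rows = conn.execute("""
    SELECT m1.id, COUNT(m2.id) AS ct_3min
    FROM msg m1
    LEFT JOIN msg m2
      ON m2.to_person = m1.to_person
     AND m2.from_person <> m1.from_person
     AND m2.ts <= m1.ts                          -- including same timestamp
     AND m2.ts >= datetime(m1.ts, '-3 minutes')
    GROUP BY m1.id
    ORDER BY m1.id
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 0), (4, 2)]
```

Note the first two rows both report `1`, as the answer predicts when same-timestamp messages are counted.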
|
In SQL how to select previous rows based on the current row values?
|
[
"",
"sql",
"postgresql",
"window-functions",
""
] |
I have a table like this:
```
+----+-----------+------+-------+--+
| id | Part | Seq | Model | |
+----+-----------+------+-------+--+
| 1 | Head | 0 | 3 | |
| 2 | Neck | 1 | 3 | |
| 3 | Shoulders | 2 | 29 | |
| 4 | Shoulders | 2 | 3 | |
| 5 | Stomach | 5 | 3 | |
+----+-----------+------+-------+--+
```
How can I insert another record with the next seq after `Stomach` for Model 3. So here is what the new table suppose to look like:
```
+----+-----------+------+-------+--+
| id | Part | Seq | Model | |
+----+-----------+------+-------+--+
| 1 | Head | 0 | 3 | |
| 2 | Neck | 1 | 3 | |
| 3 | Shoulders | 2 | 29 | |
| 4 | Shoulders | 2 | 3 | |
| 5 | Stomach | 5 | 3 | |
| 6 | Groin | 6 | 3 | |
+----+-----------+------+-------+--+
```
Is there a way to craft an insert query that will give the next number after the highest seq for Model 3 only. Also, looking for something that is concurrency safe.
|
If you do not maintain a counter table, there are two options. Within a transaction, first select the `MAX(seq_id)` with one of the following table hints:
1. `WITH(TABLOCKX, HOLDLOCK)`
2. `WITH(ROWLOCK, XLOCK, HOLDLOCK)`
`TABLOCKX + HOLDLOCK` is a bit overkill. It blocks regular select statements, which can be considered *heavy* even though the transaction is small.
A `ROWLOCK, XLOCK, HOLDLOCK` table hint is probably a better idea (but: read the alternative with a counter table further on). The advantage is that it does not block regular select statements, ie when the select statements don't appear in a `SERIALIZABLE` transaction, or when the select statements don't provide the same table hints. Using `ROWLOCK, XLOCK, HOLDLOCK` will still block insert statements.
Of course you need to be sure that no other parts of your program select the `MAX(seq_id)` without these table hints (or outside a `SERIALIZABLE` transaction) and then use this value to insert rows.
Note that depending on the number of rows that are locked this way, it is possible that SQL Server will escalate the lock to a table lock. Read more about lock escalation [here](https://technet.microsoft.com/en-us/library/ms172010%28v=sql.110%29.aspx).
The insert procedure using `WITH(ROWLOCK, XLOCK, HOLDLOCK)` would look as follows:
```
DECLARE @target_model INT=3;
DECLARE @part VARCHAR(128)='Spine';
BEGIN TRY
BEGIN TRANSACTION;
DECLARE @max_seq INT=(SELECT MAX(seq) FROM dbo.table_seq WITH(ROWLOCK,XLOCK,HOLDLOCK) WHERE model=@target_model);
IF @max_seq IS NULL SET @max_seq=0;
INSERT INTO dbo.table_seq(part,seq,model)VALUES(@part,@max_seq+1,@target_model);
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH
```
---
An alternative and probably a better idea is to have a *counter* table, and provide these table hints on the counter table. This table would look like the following:
```
CREATE TABLE dbo.counter_seq(model INT PRIMARY KEY, seq INT);
```
You would then change the insert procedure as follows:
```
DECLARE @target_model INT=3;
DECLARE @part VARCHAR(128)='Spine';
BEGIN TRY
BEGIN TRANSACTION;
DECLARE @new_seq INT=(SELECT seq FROM dbo.counter_seq WITH(ROWLOCK,XLOCK,HOLDLOCK) WHERE model=@target_model);
IF @new_seq IS NULL
BEGIN SET @new_seq=1; INSERT INTO dbo.counter_seq(model,seq)VALUES(@target_model,@new_seq); END
ELSE
BEGIN SET @new_seq+=1; UPDATE dbo.counter_seq SET seq=@new_seq WHERE model=@target_model; END
INSERT INTO dbo.table_seq(part,seq,model)VALUES(@part,@new_seq,@target_model);
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH
```
The advantage is that fewer row locks are used (ie one per model in `dbo.counter_seq`), and lock escalation cannot lock the whole `dbo.table_seq` table thus blocking select statements.
You can test all this and see the effects yourself, by placing a `WAITFOR DELAY '00:01:00'` after selecting the sequence from `counter_seq`, and fiddling with the table(s) in a second SSMS tab.
---
PS1: Using `ROW_NUMBER() OVER (PARTITION BY model ORDER BY ID)` is not a good way. If rows are deleted/added, or ID's changed the sequence would change (consider invoice id's that should never change). Also in terms of performance having to determine the row numbers of all previous rows when retrieving a single row is a bad idea.
PS2: I would never use outside resources to provide locking, when SQL Server already provides locking through isolation levels or fine-grained table hints.
|
The correct way to handle such insertions is to use an `identity` column or, if you prefer, a sequence and a default value for the column.
However, you have a `NULL` value for the `seq` column, which does not seem correct.
The problem with a query such as:
```
Insert into yourtable(id, Part, Seq, Model)
Select 6, 'Groin', max(Seq) + 1, 3
From yourtable;
```
is that two such queries, running at the same time, could produce the same value. The recommendation is to declare `seq` as a unique, identity column and let the database do all the work.
|
How to get the next number in a sequence
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I need help with this task:
I have a table in a MsSql database where I store inter users messages. I need a query to select the last message for each of the user conversation (WhatApp like list of conversations for a user).
Table structure (**x is the message ID, GUID, and is unique for each message, txtMsg is the text message**):
```
| messageId | fromUserId | toUserId | Message | sentDate |
| x | 1 | 2 | txtMsg | 1.1.2016 1:00:00 |
| x | 1 | 2 | txtMsg | 1.1.2016 1:00:01 |
| x | 1 | 2 | txtMsg | 1.1.2016 1:00:02 |
| x | 2 | 1 | txtMsg | 1.1.2016 1:00:03 |
| x | 3 | 1 | txtMsg | 1.1.2016 1:00:04 |
| x | 4 | 1 | txtMsg | 1.1.2016 1:00:05 |
| x | 2 | 3 | txtMsg | 1.1.2016 1:00:06 |
| x | 2 | 4 | txtMsg | 1.1.2016 1:00:07 |
| x | 2 | 3 | txtMsg | 1.1.2016 1:00:08 |
| x | 1 | 5 | txtMsg | 1.1.2016 1:00:09 |
| x | 3 | 1 | txtMsg | 1.1.2016 1:00:10 |
| x | 2 | 4 | txtMsg | 1.1.2016 1:00:11 |
| x | 2 | 5 | txtMsg | 1.1.2016 1:00:12 |
| x | 1 | 2 | txtMsg | 1.1.2016 1:00:13 |
```
Expected result example, for user with id = 1 (the sentDate is the newest date):
```
| x | 1 | 2 | txtMsg | 1.1.2016 1:00:13 |
| x | 3 | 1 | txtMsg | 1.1.2016 1:00:10 |
| x | 4 | 1 | txtMsg | 1.1.2016 1:00:05 |
| x | 1 | 5 | txtMsg | 1.1.2016 1:00:09 |
```
What sql query will create such result?
Thank you!
Later edit:
I added sqlfiddle example here:
<http://sqlfiddle.com/#!3/6ffbe/15>
|
You can use `row_number` with case logic to get the smallest and largest user ids:
```
select m.*
from (select m.*,
row_number() over (partition by (case when fromuserid < touserid then fromuserid else touserid end),
(case when fromuserid < touserid then touserid else fromuserid end)
order by sentDate desc
) as seqnum
from messages m
) m
where seqnum = 1;
```
EDIT:
This SQL works on the [SQL Fiddle](http://sqlfiddle.com/#!3/fad5a/2):
```
select m.*
from (select m.*,
row_number() over (partition by (case when fromuser < touser then fromuser else touser end),
(case when fromuser < touser then touser else fromuser end)
order by createdAt desc
) as seqnum
from messages m
) m
where seqnum = 1;
```
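A runnable sketch of the normalize-the-pair idea, using SQLite (3.25+ for window functions) via Python; SQLite's two-argument scalar `MIN`/`MAX` replace the `CASE` expressions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (fromUser INTEGER, toUser INTEGER,
                       msg TEXT, sentDate TEXT);
INSERT INTO messages VALUES
  (1, 2, 'a', '2016-01-01 01:00:00'),
  (2, 1, 'b', '2016-01-01 01:00:03'),
  (3, 1, 'c', '2016-01-01 01:00:04'),
  (1, 2, 'd', '2016-01-01 01:00:13');
""")

# Partition by the unordered user pair so (1,2) and (2,1) fall into
# the same conversation, then keep the latest row of each partition.
rows = conn.execute("""
    SELECT fromUser, toUser, msg
    FROM (SELECT m.*,
                 ROW_NUMBER() OVER (
                     PARTITION BY MIN(fromUser, toUser), MAX(fromUser, toUser)
                     ORDER BY sentDate DESC) AS seqnum
          FROM messages m)
    WHERE seqnum = 1
    ORDER BY sentDate
""").fetchall()
print(rows)  # [(3, 1, 'c'), (1, 2, 'd')]
```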
|
Pretty much the same as answer by Gordon Linoff. I just added a filter for user id = 1 (either `from` or `to`). And I put the query in your [SQL Fiddle](http://sqlfiddle.com/#!3/6ffbe/17/15).
The `ROW_NUMBER` partitions all rows into conversations and then assigns sequential numbers to each row in the conversation starting from the latest (ordered by `createdAt`).
```
WITH
CTE
AS
(
SELECT
M.Id
,M.fromUser
,M.toUser
,M.message
,M.createdAt
,ROW_NUMBER() OVER (
PARTITION BY
CASE WHEN M.fromUser < M.toUser THEN M.fromUser ELSE M.toUser END,
CASE WHEN M.fromUser > M.toUser THEN M.fromUser ELSE M.toUser END
ORDER BY M.createdAt DESC) AS rn
FROM Messages AS M
WHERE
M.fromUser = 1
OR M.toUser = 1
)
SELECT
CTE.Id
,CTE.fromUser
,CTE.toUser
,CTE.message
,CTE.createdAt
FROM CTE
WHERE rn = 1
ORDER BY fromUser, toUser;
```
|
sql query to group data by conversation
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I got three tables.
I want to query the badges table to get the badge if all the missions exist for the `badge_id` in the `badge_requirements` table.
In the case below, the badge would be returned because for `badge_id = 1`, all missions exist.
But if, for example, one record in the `finished_missions` table would not exist, then no badge would be returned.
The `user_id` will be supplied from the application.
```
table badges
+----+------+-------+
| id | name | image |
+----+------+-------+
| 1 | OHYE | path |
+----+------+-------+
PK(id)
table badge_requirements
+------------+----------+
| mission_id | badge_id |
+------------+----------+
| 3 | 1 |
+------------+----------+
| 5 | 1 |
+------------+----------+
UNIQUE(mission_id, badge_id)
FK(mission_id, missions.id)
FK(badge_id, badges.id)
table finished_missions
+----+---------+------------+
| id | user_id | mission_id |
+----+---------+------------+
| 3 | 221 | 3 | // if any of these record doesn't exist
+----+---------+------------+
| 5 | 221 | 5 | // the badge associated with this mission would not be returned
+----+---------+------------+
PK(id)
FK(user_id, users.id)
FK(mission_id, missions.id)
```
**EDIT**
Changed the `missions` table to `finished_missions` for better readability.
The user id and mission id are just referencing the user and mission table.
**EDIT 2**
I've tried this, given from an answer:
```
SELECT * FROM badges b
INNER JOIN finished_missions fm ON (fm.user_id = 221)
INNER JOIN badge_requirements br ON (br.mission_id = fm.mission_id AND br.badge_id = b.id)
```
But it still returns the badge even if I have just one record in `finished_missions` table.
|
There are many ways. This should be one:
```
SELECT badge_id
FROM ( -- count missions per badge for the given user
SELECT br.badge_id, count(*) AS ct
FROM finished_missions fm
JOIN badge_requirements br USING (mission_id)
WHERE fm.user_id = 221
GROUP BY 1
) u -- count missions per badge total
JOIN (
   SELECT br.badge_id, count(*) AS ct
   FROM badge_requirements br
   GROUP BY 1
   ) b USING (badge_id, ct) -- only badges with the full count of missions
```
In addition to the constraints you declared there should also be a `UNIQUE(user_id, mission_id)` constraint on `finished_missions` to disallow duplicate entries. Or you have to use `count(DISTINCT mission_id) AS ct` in the first subquery, so you can rely on the count.
And `UNIQUE(mission_id, badge_id)` should really be the PK - or add a `NOT NULL` constraint for both columns.
|
One method is a counting method:
```
select br.badge_id
from badge_requirements br
group by br.badge_id
having count(distinct mission_id) = (select count(*) from missions);
```
This returns the `badge_id`. If you want more information, join back to the `badges` table or use `in`.
And, if there are no duplicates in `badge_requirements`, then use `count(*)` instead of `count(distinct)`.
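Both answers rely on comparing a per-badge count of finished missions against the badge's full requirement count. A self-contained sketch of that relational-division check, with a correlated subquery for the per-badge total, using SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE badge_requirements (mission_id INTEGER, badge_id INTEGER);
CREATE TABLE finished_missions (user_id INTEGER, mission_id INTEGER);
INSERT INTO badge_requirements VALUES (3, 1), (5, 1);
INSERT INTO finished_missions VALUES (221, 3), (221, 5);
""")

# A badge qualifies only when the user's distinct finished missions
# for it equal the badge's total requirement count.
query = """
    SELECT br.badge_id
    FROM badge_requirements br
    JOIN finished_missions fm
      ON fm.mission_id = br.mission_id AND fm.user_id = ?
    GROUP BY br.badge_id
    HAVING COUNT(DISTINCT br.mission_id) =
           (SELECT COUNT(*) FROM badge_requirements b2
            WHERE b2.badge_id = br.badge_id)
"""
earned = conn.execute(query, (221,)).fetchall()
print(earned)  # [(1,)]  all required missions finished

# Remove one finished mission: the badge is no longer returned.
conn.execute("DELETE FROM finished_missions WHERE mission_id = 5")
not_earned = conn.execute(query, (221,)).fetchall()
print(not_earned)  # []
```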
|
Select badges from table only if all requirements rows exist
|
[
"",
"sql",
"postgresql",
"relational-division",
""
] |
```
select distinct account_num from account order by account_num;
```
The above query gave the below result
```
account_num
1
2
4
7
12
18
24
37
45
59
```
I want to split the account\_num column into tuple of three account\_num's like (1,2,4);(7,12,18);(24,37,45),(59); The last tuple has only one entry as there are no more account\_num's left. Now I want a query to output the min and max of each tuple. (please observe that the max of one tuple is less than the min of the next tuple). Output desired is shown below
```
1 4
7 18
24 45
59 59
```
Edit: I have explained my requirement in the best way I could
|
This is another solution.
```
SELECT *
FROM (SELECT DISTINCT MIN(val) over(PARTITION BY gr) min_,
MAX(val) over(PARTITION BY gr) max_
FROM (SELECT val,
decode(trunc(rn / 3), rn / 3, rn / 3, ceil(rn / 3)) gr
FROM (SELECT val,
row_number() over(ORDER BY val) rn
FROM (select distinct account_num from account order by account_num)))) ORDER BY min_
```
**UPDATED**
Solution without analytic function.
```
SELECT MIN(val) min_,
MAX(val) max_
FROM (SELECT val,
ceil(rn / 3) gr
FROM (SELECT val,
rownum rn
FROM A_DEL_ME)) GROUP BY gr
```
|
You can use the example below as a starting point; it is based only on the information you have provided so far. For further documentation, you can consult Oracle's [analytical functions](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm#SQLRF06174) docs:
```
with src as( --create a source data
select 1 col from dual union
select 2 from dual union
select 4 from dual union
select 7 from dual union
select 12 from dual union
select 18 from dual union
select 24 from dual union
select 37 from dual union
select 45 from dual union
select 59 from dual
)
select
col,
decode(col_2, 0, max_col, col_2) col_2 -- for the last row we get the maximum value for the row
from (
select
col,
    lead(col, 2, 0) over (order by col) col_2, -- we get the value from two rows ahead
max(col) over () max_col, -- we get the max value to be used for the last row in the result
rownum rn from src -- we get the rownum to handle the final output
) where mod(rn - 1, 3) = 0 -- only keep every third row (1st, 4th, 7th, ...)
```
|
Split a column into multiple columns
|
[
"",
"sql",
"oracle",
"plsql",
"oracle11g",
""
] |
[](https://i.stack.imgur.com/hhtMw.png)
In SQL Server 2008 R2 I have a table (database size is 450 GB) with over a billion rows, which I'd like to purge by deleting all records older than 180 days counting from the current date. Any help here would be appreciated!
I'm using the following query :
```
DELETE FROM table name
WHERE column name < '2015-01-01'
```
But it is taking too much time. Is there any maintenance plan or any query so that I can delete the data fast?
|
One approach that might save you some time:
* Start off with taking a backup (you never know)
* Insert the rows you want to keep into a temporary table (make sure you have enough room on disk for `tempdb`)
* `TRUNCATE` the table to remove all rows quickly (this statement will execute instantly)
* Insert the rows from the temporary table back into your source table
---
```
SELECT * INTO #keep FROM table_name WHERE column_name>='2015-01-01';
TRUNCATE TABLE table_name;
INSERT INTO table_name SELECT * FROM #keep;
DROP TABLE #keep;
```
|
It's taking a long time because (in part) all of those deletes are going into one gargantuan transaction. You need to break it down into smaller chunks (transactions) and periodically commit. It'll still take a long time but the impact on your server will be lessened. See <https://stackoverflow.com/a/28324562/1324345> and the blog post it references, <http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes>
```
SET NOCOUNT ON;
DECLARE @r INT;
SET @r = 1;
WHILE @r > 0
BEGIN
BEGIN TRANSACTION;
  DELETE TOP (100000) -- batch size; adjust as needed
  FROM table_name
  WHERE column_name < '2015-01-01';
SET @r = @@ROWCOUNT;
COMMIT TRANSACTION;
-- CHECKPOINT; -- if simple
-- BACKUP LOG ... -- if full
END
```
If your table is partitioned, it can be much easier and faster.
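The chunking idea is not tied to SQL Server. A hedged sketch of the same keep-transactions-small loop from Python against SQLite (which lacks `DELETE TOP`, so a keyed subquery with `LIMIT` selects each batch; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created TEXT)")
conn.executemany("INSERT INTO events (created) VALUES (?)",
                 [("2014-06-01",)] * 2500 + [("2015-06-01",)] * 500)
conn.commit()

deleted = 0
while True:
    cur = conn.execute("""
        DELETE FROM events
        WHERE id IN (SELECT id FROM events
                     WHERE created < '2015-01-01' LIMIT 1000)
    """)
    conn.commit()          # commit per batch: one small transaction per chunk
    if cur.rowcount == 0:  # nothing left to purge
        break
    deleted += cur.rowcount

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(deleted, remaining)  # 2500 500
```

Committing after each batch is what keeps log growth and lock duration bounded, at the cost of the purge no longer being atomic as a whole.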
|
Purge by deleting all records older than 180 days from the current date in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-delete",
""
] |
I have a number of values stored individually on separate lines in a text file, and I need to run a query in SQL Server to check whether a value in a column of a table matches any one of the values stored in the txt file.
How should I go about doing this ?
I am aware of how to formulate various types of queries in SQL Server, just not sure how to run a query that is dependent on a file for its query parameters.
**EDIT :**
**Issue 1 :** I am not doing this via a program since the query that I need to run traverses over 7 million datapoints which results in the program timing out before it can complete, hence the only alternative I have left is to run the query in SQL Server itself without worrying about the timeout.
**Issue 2 :** I do not have admin rights to the database that I am accessing which is why there is no way I could create a table, dump the file into it, then perform a query by joining those tables.
Thanks.
|
One option would be to use BULK INSERT and a temp table. Once in the temp table, you can parse the values. This is likely not the exact answer you need, but based on your experience, I'm sure you could tweak as needed.
Thanks...
```
SET NOCOUNT ON;
USE Your_DB;
GO
CREATE TABLE dbo.t (
i int,
n varchar(10),
d decimal(18,4),
dt datetime
);
GO
BULK INSERT dbo.t
FROM 'D:\import\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
```
|
1. Bulk import the data from text file into a temporary table.
2. Execute the query to do the comparison between your actual physical table & temporary table.
|
SQL Server : Query using data from a file
|
[
"",
"sql",
"sql-server",
""
] |
Dear friends on Stackoverflow,
I recently started learning SQL and I'm currently trying to add ' as a value in MySQL Community Edition.
For example, in order to add the letter A I'd put
```
INSERT INTO symbols (test_chars) VALUES ('A');
```
But If I wanted to add the ' itself how would go about doing that?
`'''` is not working for me and I have a feeling that's not how it should work.
Much appreciated and Best Regards,
Waqar
|
The [documentation](http://dev.mysql.com/doc/refman/5.7/en/string-literals.html) is your best friend.
[It says](http://dev.mysql.com/doc/refman/5.7/en/string-literals.html#idm140153918913264):
> There are several ways to include quote characters within a string:
>
> * A `'` inside a string quoted with `'` may be written as `''`.
> * A `"` inside a string quoted with `"` may be written as `""`.
> * Precede the quote character by an escape character (`\`).
> * A `'` inside a string quoted with `"` needs no special treatment and need not be doubled or escaped. In the same way, `"` inside a string quoted with `'` needs no special treatment.
As you can see there are three possible solutions to your problem. Pick your favorite.
|
You use a lot of single quotes. Doubling a single quote in a string is a single quote. So, four single quotes in a row defines a string with one single quote:
```
INSERT INTO symbols (test_chars) VALUES ('''');
```
The quotes at the beginning and end delimit the string. The two quotes in the middle are the single quote character.
This is ANSI standard and should work in any database.
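The doubled-quote form is the portable one; here is an illustrative sketch using Python's `sqlite3` (table name borrowed from the question) showing that `''''` inserts a single literal quote. The backslash form, by contrast, is MySQL-specific and will not work in every engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE symbols (test_chars TEXT)")

# Four single quotes: the outer pair delimits the string literal,
# and the doubled pair inside encodes one literal quote character.
conn.execute("INSERT INTO symbols (test_chars) VALUES ('''')")

rows = conn.execute("SELECT test_chars FROM symbols").fetchall()
```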
|
How to add ' as a value into SQL
|
[
"",
"mysql",
"sql",
""
] |
I have two `tables` have Account-Codes As Below,
```
table 1:
account
50000
50006
50015
50105
50150
50155
50165
table 2:
Account
50000
50010
50140
50105
50150
50155
50165
```
I need to join these two `tables`. If any account code of `table-1` does not match `table-2`, then I have to implicitly change the table-1 account code to the table-2 account code.
I have done something as below,
```
SELECT T1.Account, T2.Account
FROM table1 t1
INNER JOIN table2 t2
on (t2.account = CASE t1 .account
WHEN 50015 THEN 50010
WHEN 50006 THEN 50140
ELSE t1 .account
END )
```
But I got only matched codes as output,
```
account Account
50000 50000
50105 50105
50150 50150
50155 50155
50165 50165
```
I didn't get the Unmatched Account codes i.e `(50006 and 50015)`. Could anyone help me to find out what's wrong here?
my expected output is
```
account Account
50000 50000
50006 50140
50015 50010
50105 50105
50150 50150
50155 50155
50165 50165
```
Thanks for the help
|
Use `CASE` and then `DISTINCT` the data; that will give you a generic solution:
```
-- table1
declare @table1 table
(account bigint)
insert into @table1 values (50000)
insert into @table1 values (50006)
insert into @table1 values (50015)
insert into @table1 values (50105)
insert into @table1 values (50150)
insert into @table1 values (50155)
insert into @table1 values (50165)
-- table2
declare @table2 table
(account bigint)
insert into @table2 values (50000)
insert into @table2 values (50010)
insert into @table2 values (50140)
insert into @table2 values (50105)
insert into @table2 values (50150)
insert into @table2 values (50155)
insert into @table2 values (50165)
-- QUERY
select distinct t1.account as Account1,
Account2 = case
when t1.account = t2.account then t2.account else t1.account
end
from @table1 t1, @table2 t2
```
RESULT
```
Account1 Account2
50000 50000
50006 50006
50015 50015
50105 50105
50150 50150
50155 50155
50165 50165
```
**EDIT** after comment - *well that's a part of our requirement. I need to update amount corresponding to the 50006 account code in 50140 and so on...*
```
select distinct t1.account as Account1,
Account2 = case
when t1.account = 50006 then 50140
when t1.account = 50015 then 50010
else t1.account end
from @table1 t1 , @table2 t2
```
RESULT
```
Account1 Account2
50000 50000
50006 50140
50015 50010
50105 50105
50150 50150
50155 50155
50165 50165
```
|
Try this
[SQL Fiddle](http://sqlfiddle.com/#!3/c7671/5)
```
SELECT T1.Account, T2.Account
FROM table1 t1
INNER JOIN table2 t2
on (t1.account = CASE t1 .account
WHEN 50015 THEN t2 .account+5
WHEN 50006 THEN t2 .account-134
ELSE t2 .account
END )
```
|
Inner Join with CASE statement is not working in SQL SERVER
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a column in SQL where in the data is stored as 102915, 060315 and so on.
The data type of the column is `nvarchar(50)`. I want to convert the above data to a date format.
It should look like 10-29-15 or 10/29/15.
How can I convert the above in SQL Server Management Studio 2008?
Tried quite a few things: One of them is
```
create view v1
as
Select
CAST([ISSUE] as date) as ISSUE,
CAST([EXPIRE] as date) as EXPIRE
from tablename
```
The view is being created.
Note: ISSUE and EXPIRE are the column names which have values like 100515, 060315 and I need to convert all the values in those columns to 10-05-15 or 10/05/15.
When I do a `select * from v1;` I get the following error:
> Conversion failed when converting data and/or time from character string.
|
You can try this:
```
DECLARE @date VARCHAR(50) = '102915'
SELECT CAST( CAST( '20'+ --prefix for the year 2000
SUBSTRING( @date,5,2)+ --year
SUBSTRING( @date,1,2)+ --month
SUBSTRING( @date,3,2) --day
AS VARCHAR(10))
AS DATE)
```
result:
[](https://i.stack.imgur.com/dAiBt.png)
But this will assume your dates are all greater than 1999.
Since your date format is MMddYY, it's hard to determine the correct century for the year.
So for your view you can use:
```
create view v1 as
Select CAST( CAST( '20'+ --prefix for the year 2000
SUBSTRING( [ISSUE],5,2)+ --year
SUBSTRING( [ISSUE],1,2)+ --month
SUBSTRING( [ISSUE],3,2) --day
AS VARCHAR(10))
AS DATE) as ISSUE
,
CAST( CAST( '20'+ --prefix for the year 2000
SUBSTRING( [EXPIRE],5,2)+ --year
SUBSTRING( [EXPIRE],1,2)+ --month
SUBSTRING( [EXPIRE],3,2) --day
AS VARCHAR(10))
AS DATE) as EXPIRE
from tablename
```
To have the date in the format mm-dd-yyyy you need to use CONVERT you can see the different conversions [here](http://www.sqlusa.com/bestpractices/datetimeconversion/):
```
create view v1 as
Select convert(VARCHAR ,CAST( CAST( '20'+ --prefix for the year 2000
SUBSTRING( [ISSUE],5,2)+ --year
SUBSTRING( [ISSUE],1,2)+ --month
SUBSTRING( [ISSUE],3,2) --day
AS VARCHAR(10))
AS DATE), 110) as ISSUE
,
convert(VARCHAR ,CAST( CAST( '20'+ --prefix for the year 2000
SUBSTRING( [EXPIRE],5,2)+ --year
SUBSTRING( [EXPIRE],1,2)+ --month
SUBSTRING( [EXPIRE],3,2) --day
AS VARCHAR(10))
AS DATE), 110) as EXPIRE
from tablename
```
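The substring reassembly can be sanity-checked outside the database; here is a minimal Python sketch of the same logic, assuming (as the SQL does) that every year falls in 2000-2099:

```python
from datetime import date, datetime

def mmddyy_to_date(s: str) -> date:
    # Same pieces as the SQL answer: '20' + year + month + day,
    # where s is in MMddYY order (e.g. '102915').
    return datetime.strptime("20" + s[4:6] + s[0:2] + s[2:4], "%Y%m%d").date()

issue = mmddyy_to_date("102915")
expire = mmddyy_to_date("060315")
```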
|
This is a lot simpler:
```
select convert(nvarchar(50), convert(date, RIGHT('102915',2) + LEFT('102915',4), 112), 110)
```
and respectively:
```
create view v1 as
Select
convert(nvarchar(50), convert(date, RIGHT([ISSUE],2) + LEFT([ISSUE],4), 112), 110) as ISSUE,
convert(nvarchar(50), convert(date, RIGHT([EXPIRE],2) + LEFT([EXPIRE],4), 112), 110) as EXPIRE
from tablename
```
|
Converting nvarchar to DATE
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"ssms",
""
] |
I need to compare two column values, find the difference between them, and at the end display all the values of the row where that difference is at its maximum for each company.
```
companyname open close date
A 10 9.8 2015-01-01
A 11 8.2 2015-02-01
A 9.8 6.5 2015-02-04
B 10 8 2015-04-01
B 9.9 9.5 2015-04-15
C 8.7 2.3 2015-02-01
```
now I want query which show output
```
company name open close difference date
A 9.8 6.5 3.3 2015-02-04
B 10 8 2 2015-04-01
c 8.7 2.3 6.4 2015-02-01
```
|
As an alternative to the self-join in [@TimBiegeleisen's answer](https://stackoverflow.com/a/35286110/44853), you can use a windowing function to order by the difference partitioned by `companyname`, then select the first row from each partition:
```
select [companyname] as [company name], [open], [close], [difference], [date]
from (
select [companyname], [open], [close],
[open] - [close] as [difference], [date],
row_number() over (partition by [companyname]
order by [open] - [close] desc) rn
from Table1
) x
where rn = 1
```
[[SQL Fiddle demo](http://sqlfiddle.com/#!6/57722/2)]
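The same pattern runs unchanged on any engine with window functions; an illustrative sketch with Python's `sqlite3` (requires a SQLite build with window-function support, 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE quotes (companyname TEXT, open REAL, close REAL, date TEXT);
INSERT INTO quotes VALUES
  ('A', 10, 9.8, '2015-01-01'), ('A', 11, 8.2, '2015-02-01'),
  ('A', 9.8, 6.5, '2015-02-04'), ('B', 10, 8, '2015-04-01'),
  ('B', 9.9, 9.5, '2015-04-15'), ('C', 8.7, 2.3, '2015-02-01');
""")

# Rank rows within each company by open - close (descending),
# then keep only the top-ranked row per company.
rows = conn.execute("""
    SELECT companyname, open, close, ROUND(open - close, 1) AS difference, date
    FROM (
        SELECT *, ROW_NUMBER() OVER (
                      PARTITION BY companyname
                      ORDER BY open - close DESC) AS rn
        FROM quotes
    )
    WHERE rn = 1
    ORDER BY companyname
""").fetchall()
```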
|
A simple `INNER JOIN` of your original table to a subquery obtaining the maximum differences for each company is all you need.
```
SELECT c1.companyname, c1.open, c1.close, c2.maxTime AS difference, c1.date
FROM companies c1
INNER JOIN
(
SELECT companyname, MAX(open - close) AS maxTime
FROM companies
GROUP BY companyname
) c2
ON c1.companyname = c2.companyname AND (c1.open - c1.close) = c2.maxTime
```
**Follow the link below for a running demo** (courtesy of @lc.):
[**SQLFiddle**](http://sqlfiddle.com/#!6/57722/4)
|
maximum differenence between two column
|
[
"",
"sql",
"sql-server-2012",
""
] |
I want to pass a temp table from one execution path to another one nested inside it.
What I have tried is this:
```
DECLARE @SQLQuery AS NVARCHAR(MAX)
SET @SQLQuery = '
--populate #tempTable with values
EXECUTE('SELECT TOP (100) * FROM ' + tempdb..#tempTable)
EXECUTE sp_executesql @SQLQuery
```
but it fails with this error message:
> Incorrect syntax near 'tempdb'
Is there another/better way to pass a temporary table between execution contexts?
|
Your temp table will be visible inside the dynamic sql with no problem. I am not sure if you are creating the temp table inside the dynamic sql or before.
Here it is with the table created BEFORE the dynamic sql.
```
create table #Temp(SomeValue varchar(10))
insert #Temp select 'made it'
exec sp_executesql N'select * from #Temp'
```
|
You can create a global temp table using the `##tablename` syntax (double hash). The difference is explained [on the TechNet site](https://technet.microsoft.com/en-us/library/ms177399(v=sql.105).aspx):
> There are two types of temporary tables: local and global. They differ from each other in their names, their visibility, and their availability. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created, and they are deleted when all users referencing the table disconnect from the instance of SQL Server.
>
> For example, if you create the table employees, the table can be used by any person who has the security permissions in the database to use it, until the table is deleted. If a database session creates the local temporary table #employees, only the session can work with the table, and it is deleted when the session disconnects. If you create the global temporary table ##employees, any user in the database can work with this table. If no other user works with this table after you create it, the table is deleted when you disconnect. If another user works with the table after you create it, SQL Server deletes it after you disconnect and after all other sessions are no longer actively using it.
>
> If a temporary table is created with a named constraint and the temporary table is created within the scope of a user-defined transaction, only one user at a time can execute the statement that creates the temp table. For example, if a stored procedure creates a temporary table with a named primary key constraint, the stored procedure cannot be executed simultaneously by multiple users.
The next suggestion may be even more helpful:
> Many uses of temporary tables can be replaced with variables that have the table data type. For more information about using table variables, see [table (Transact-SQL)](https://technet.microsoft.com/en-us/library/ms175010(v=sql.105).aspx).
|
Passing temp table from one execution to another
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
To get good performance with Spark, I'm wondering whether it is better to run SQL queries via `SQLContext` or to use DataFrame functions like `df.select()`.
Any idea? :)
|
There is no performance difference whatsoever. Both methods use exactly the same execution engine and internal data structures. At the end of the day, all boils down to personal preferences.
* Arguably `DataFrame` queries are much easier to construct programmatically and provide a minimal type safety.
* Plain SQL queries can be significantly more concise and easier to understand. They are also portable and can be used without any modifications with every supported language. With `HiveContext`, these can also be used to expose some functionalities which can be inaccessible in other ways (for example UDF without Spark wrappers).
|
Ideally, Spark's Catalyst optimizer should optimize both calls to the same execution plan, so the performance should be the same. Which one you call is just a matter of style.
In reality, there is a difference, according to a report by Hortonworks (<https://community.hortonworks.com/articles/42027/rdd-vs-dataframe-vs-sparksql.html>), where SQL outperforms DataFrames for a case where you need grouped records with their total counts, sorted descending by record name.
|
Spark sql queries vs dataframe functions
|
[
"",
"sql",
"performance",
"apache-spark",
"dataframe",
"apache-spark-sql",
""
] |
How can I make a dynamic-search stored procedure in `SQL Server`, given a table name, a field name, and a field value?
```
select * from @nametable where @filedname = @valuename
```
|
```
DECLARE @value AS SMALLINT
DECLARE @SQLQuery AS NVARCHAR(500)
DECLARE @NameTable AS NVARCHAR(500)
DECLARE @field AS NVARCHAR(500)
SET @value = 2
SET @NameTable = 'Suppliers'
SET @field = 'Id'
SET @SQLQuery = 'SELECT * FROM '+@NameTable+' WHERE '+@field+' = ' +
                CAST(@value AS NVARCHAR(10))
EXECUTE(@SQLQuery)
```
|
You can use dynamic SQL
Example:
```
DECLARE @nametable VARCHAR(150)='MyTable' --You can change the name here
DECLARE @fieldname VARCHAR(150)='MyField'
DECLARE @valuename INT=4 --You must be careful here because you have to know the datatype of field to determine data type of parameter
DECLARE @sqlToExecute AS NVARCHAR(MAX) = N'select * from ['+ @nametable +'] where ['+ @fieldname +'] = ' + CAST(@valuename AS NVARCHAR(10))
EXEC (@sqlToExecute)
```
|
How to make stored procedure dynamic search with SQL Server
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have two tables that almost have identical columns. The first table contains the "current" state of a particular record and the second table contains all the previous stats of that records (it's a history table). The second table has a FK to the first table.
I'd like to query both tables so I get the entire records history, including its current state in one result. I don't think a JOIN is what I'm trying to do as that "joins" multiple tables "horizontally" (one or more columns of one table combined with one or more columns of another table to produce a result that includes columns from both tables). Rather, I'm trying to "join"(???) the tables "vertically" (meaning, no columns are getting added to the result, just that the results from both tables are falling under the same columns in the result set).
Not exactly sure if what I'm expressing makes sense -- or if it's possible in MySQL.
|
To accomplish this, you could use a `UNION` between two `SELECT` statements. I would also suggest selecting from a *derived table* in the following manner so that you can sort by columns in your result set. Suppose we wanted to combine results from the following two queries:
```
SELECT FieldA, FieldB FROM table1;
SELECT FieldX, FieldY FROM table2;
```
We could join these with a `UNION` statement as follows:
```
SELECT Field1, Field2 FROM (
SELECT FieldA AS `Field1`, FieldB AS `Field2` FROM table1
UNION SELECT FieldX AS `Field1`, FieldY AS `Field2` FROM table2)
AS `derived_table`
ORDER BY Field1 ASC, Field2 DESC
```
In this example, I have selected from `table1` and `table2` fields which are similar, but not identically named, sharing the same data type. They are matched up using aliases (*e.g.*, `FieldA` in `table1` and `FieldX` in `table2` both map to `Field1` in the result set, *etc.*).
If each table has the same column names, field aliasing is not required, and the query becomes simpler.
---
Note: In MySQL it is necessary to name derived tables, even if the name given is not intended to be used.
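As a runnable illustration of the pattern (table and column names are hypothetical, in the spirit of the current/history setup from the question), using Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE current_state (record_id INTEGER, status TEXT);
CREATE TABLE history (record_id INTEGER, status TEXT);
INSERT INTO current_state VALUES (1, 'active');
INSERT INTO history VALUES (1, 'created'), (1, 'reviewed');
""")

# UNION ALL stacks the two result sets vertically; the column names
# come from the first SELECT.
rows = conn.execute("""
    SELECT record_id, status FROM history
    UNION ALL
    SELECT record_id, status FROM current_state
    ORDER BY record_id, status
""").fetchall()
```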
|
UNION.
```
Select colA, colB From TblA
UNION
Select colA, colB From TblB
```
|
Query Two Tables - but not trying to JOIN?
|
[
"",
"mysql",
"sql",
"database",
""
] |
I want to merge two tables with different columns in MySQL.
For e.g.
Table 1 :
```
-------------------------------------------------------------
item_id title slug type views updatedAt createdAt
-------------------------------------------------------------
1 sometitle someslg 1 43454 timestamp timestamp
2 sometitle someslg 1 43566 timestamp timestamp
```
Table 2:
```
-------------------------------------------------------------
id ptitle slug pviews updatedAt createdAt
-------------------------------------------------------------
1 sometitle someslg 3434 timestamp timestamp
2 sometitle someslg 6454 timestamp timestamp
3 sometitle someslg 5454 timestamp timestamp
```
The above tables are examples.
I have merged them with **UNION**. Here is my query:
```
SELECT * FROM ((SELECT t1.item_id,t1.title,t1.slug,t1.type,t1.views,t1.updatedAt,t1.createdAt
                FROM table1 t1)
               UNION ALL
               (SELECT t2.id,t2.ptitle,t2.slug,'',t2.pviews,t2.updatedAt,t2.createdAt
                FROM table2 t2)) t3
ORDER BY t3.item_id ASC
```
This is working fine. After executing the query I get results like:
```
-------------------------------------------------------------
item_id title slug type views updatedAt createdAt
-------------------------------------------------------------
1 sometitle someslg 1 43454 timestamp timestamp
1 sometitle someslg 3434 timestamp timestamp
2 sometitle someslg 1 43566 timestamp timestamp
2 sometitle someslg 6454 timestamp timestamp
3 sometitle someslg 5454 timestamp timestamp
```
But I want a virtual column in the result that shows which table each row came from, as shown below:
```
--------------------------------------------------------------------------
item_id title slug type views updatedAt createdAt from_tbl
--------------------------------------------------------------------------
1 sometitle someslg 1 43454 timestamp timestamp t1
1 sometitle someslg 3434 timestamp timestamp t2
2 sometitle someslg 1 43566 timestamp timestamp t1
2 sometitle someslg 6454 timestamp timestamp t2
3 sometitle someslg 5454 timestamp timestamp t2
```
|
Just add the column in:
```
SELECT t3.*
FROM ((SELECT t1.item_id, t1.title, t1.slug, t1.type, t1.views, t1.updatedAt, t1.createdAt, 't1' as from_tbl
FROM table1 t1
) UNION ALL
      (SELECT t2.id, t2.ptitle, t2.slug, '', t2.pviews, t2.updatedAt, t2.createdAt, 't2'
       FROM table2 t2
)
) t3
ORDER BY t3.item_id ASC
```
|
try this way
```
SELECT t3.*
FROM ((SELECT t1.item_id, t1.title, t1.slug, t1.type, t1.views, t1.updatedAt, t1.createdAt, '1' as from_tbl
FROM table1 t1
) UNION ALL
(SELECT t2.id, t2.ptitle, t2.slug, '', t2.pviews, t2.updatedAt, t2.createdAt, '2' FROM table2 t2
)
) t3
ORDER BY t3.item_id ASC
```
|
merge two tables with different columns mysql
|
[
"",
"mysql",
"sql",
""
] |
Expected Output:
[](https://i.stack.imgur.com/nOVEN.png)
How can we run a query that returns "Robot" followed by a year between 2000 and 2099? (So 2015 is a valid value at the end, but 2123 is not.)
[](https://i.stack.imgur.com/ah4Wp.png)
|
Got the answer:
```
SELECT id, name FROM robots WHERE name LIKE '%Robot 20%_%'
```
|
```
SELECT name FROM robots WHERE name LIKE '%Robot 20__%'
```
If there might be other characters after `20` you need `REGEXP`:
```
SELECT name FROM robots
WHERE name REGEXP 'Robot 20[0-9][0-9]'
```
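Each `_` matches exactly one character, which is what constrains the year to exactly four digits; a quick illustrative check with Python's `sqlite3` (dropping the trailing `%` so that nothing may follow the year):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE robots (name TEXT)")
conn.executemany("INSERT INTO robots VALUES (?)",
                 [("Robot 2015",), ("Robot 2123",), ("Robot 20",)])

# '20__' means '20' followed by exactly two more characters,
# so only 20xx years can match.
rows = conn.execute(
    "SELECT name FROM robots WHERE name LIKE 'Robot 20__' ORDER BY name"
).fetchall()
```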
|
SQL Like Operator
|
[
"",
"mysql",
"sql",
"select",
"where-clause",
"sql-like",
""
] |
I have to modify a big pricelist table so that there is only one valid price for every article.
Sometimes the sales employees insert new prices and forget to change the old infinite validTo dates.
So I have to write a sql-query to change all validTo dates to the next validFrom date minus one day, when the validTo date has infinite validity (9999-12-31).
But I have no idea how I can achieve this with SQL alone (Oracle 12).
```
anr price validFrom validTo
1 447.1 2015-06-01 9999-12-31 <
1 447.2 2015-06-16 2015-06-16
1 447.3 2015-06-17 2015-06-17
1 447.4 2015-06-22 2015-06-22
1 447.5 2015-07-06 9999-12-31 <
1 395.0 2015-07-20 2015-07-20
1 447.6 2015-08-03 9999-12-31 <
1 447.7 2015-08-17 9999-12-31 <
1 447.8 2015-08-24 9999-12-31 <
1 395.0 2015-09-07 2015-09-07
1 450.9 2015-11-15 9999-12-31 < no change because it is the last entry
```
after updating the the table, the result should look like
```
anr price validFrom validTo
1 447.1 2015-06-01 2015-06-15 <
1 447.2 2015-06-16 2015-06-16
1 447.3 2015-06-17 2015-06-17
1 447.4 2015-06-22 2015-06-22
1 447.5 2015-07-06 2015-07-19 <
1 395.0 2015-07-20 2015-07-20
1 447.6 2015-08-03 2015-08-16 <
1 447.7 2015-08-17 2015-08-23 <
1 447.8 2015-08-24 2015-09-06 <
1 395.0 2015-09-07 2015-09-07
1 450.9 2015-11-15 9999-12-31 <
```
|
In order to update an end date you can simply select the minimum of all higher start dates.
```
update mytable upd
set enddate = coalesce(
(
select min(startdate) - 1
from mytable later
where later.startdate > upd.startdate
and later.anr = upd.anr -- same product
), date'9999-12-31') -- coalesce for the case there is no later record
where enddate = date'9999-12-31';
```
I have taken `anr` to be the product id. If it isn't then change the statement accordingly.
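The same correlated-subquery update translates directly to other engines; here is an illustrative sketch on a subset of the sample rows using Python's `sqlite3`, where `date(..., '-1 day')` stands in for Oracle's `- 1` date arithmetic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (anr INTEGER, price REAL, validFrom TEXT, validTo TEXT);
INSERT INTO prices VALUES
  (1, 447.1, '2015-06-01', '9999-12-31'),
  (1, 447.2, '2015-06-16', '2015-06-16'),
  (1, 447.5, '2015-07-06', '9999-12-31'),
  (1, 395.0, '2015-07-20', '2015-07-20'),
  (1, 450.9, '2015-11-15', '9999-12-31');
""")

# Close each open-ended interval at the day before the next validFrom
# of the same article; keep 9999-12-31 when there is no later row.
conn.execute("""
    UPDATE prices SET validTo = COALESCE(
        (SELECT date(MIN(later.validFrom), '-1 day')
         FROM prices later
         WHERE later.anr = prices.anr
           AND later.validFrom > prices.validFrom),
        '9999-12-31')
    WHERE validTo = '9999-12-31'
""")

rows = conn.execute(
    "SELECT validFrom, validTo FROM prices ORDER BY validFrom").fetchall()
```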
|
Oracle provides an analytic function `LEAD` that references the current-plus-n-th record given a sort criterion. This function may serve the purpose of selecting the proper date value in an update statement as follows ( let `test_prices` be the table name, `ppk` its PK ):
```
update test_prices p
set p.validTo = (
select ps.vtn
from (
select lead ( p1.validFrom, 1 ) over ( order by p1.validFrom ) - 1 vtn
, ppk
from test_prices p1
) ps
where ps.ppk = p.ppk
)
where to_char(p.validTo, 'YYYY') = '9999'
and p.validFrom != ( select max(validFrom) from test_prices )
;
```
|
sql-query that change all validTo dates to the next validFrom date minus one Day
|
[
"",
"sql",
"oracle",
""
] |
I need to build a SQL query to get all users' metadata (`meta`) from the `signup` table whose `id` is in the `courses_history` table.
The problem is that the `signup` table doesn't have `user_id`, but does have `user_login`, while `courses_history` has only `user_id`.
There is a third table - `users` - which holds both `user_id` and `user_login`, but I don't know how to connect all 3 tables in the right way.
```
CREATE TABLE users
(`user_id` int, `user_login` varchar(7), `description` varchar(55));
INSERT INTO users
(`user_id`, `user_login`, `description`)
VALUES
(100, 'user_1', 'userdataxxxx'),
(201, 'user_2', 'userdatayyyy'),
(301, 'user_3', 'userdatazzzz');
CREATE TABLE signups
(`id` int, `user_login` varchar(7), `meta` varchar(55));
INSERT INTO signups
(`id`, `user_login`, `meta`)
VALUES
(1, 'user_1', 'metaxxxxx'),
(2, 'user_2', 'metayyyy'),
(3, 'user_3', 'metazzzzz');
CREATE TABLE courses_history
(`id` int, `user_id` int, `stuff` varchar(55), `course_id` int);
INSERT INTO courses_history
(`id`, `user_id`, `stuff`, `course_id`)
VALUES
(1, 301, 'stuffxxx', 10),
(2, 301, 'stuffyyyy', 11),
(3, 100, 'stuffzzzz', 22),
(4, 201, 'stuffzzzz', 66),
(5, 201, 'stuffzzzz', 88);
```
Thanks
|
```
SELECT ch.id, u.user_login, su.meta
FROM signups su
INNER JOIN users u ON su.user_login = u.user_login
INNER JOIN courses_history ch ON u.user_id = ch.user_id
```
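A runnable illustration with Python's `sqlite3`, trimmed to the columns involved in the join chain (`users` bridges the two keys):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER, user_login TEXT);
CREATE TABLE signups (id INTEGER, user_login TEXT, meta TEXT);
CREATE TABLE courses_history (id INTEGER, user_id INTEGER, course_id INTEGER);
INSERT INTO users VALUES (100, 'user_1'), (301, 'user_3');
INSERT INTO signups VALUES (1, 'user_1', 'metaxxxxx'), (3, 'user_3', 'metazzzzz');
INSERT INTO courses_history VALUES (1, 301, 10), (2, 301, 11), (3, 100, 22);
""")

# signups joins users on user_login; users joins courses_history on user_id.
rows = conn.execute("""
    SELECT DISTINCT u.user_login, s.meta
    FROM signups s
    INNER JOIN users u ON s.user_login = u.user_login
    INNER JOIN courses_history ch ON u.user_id = ch.user_id
    ORDER BY u.user_login
""").fetchall()
```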
|
```
SELECT
s.meta
FROM
signups AS s
INNER JOIN courses_history AS ch
ON s.id = ch.id
INNER JOIN users AS u
ON ch.user_id = u.user_id
```
You can pull whatever data you want out of all three of the tables with this, but I believe this illustrates the point well enough.
|
How connect two SQL tables that don't have common columns using third table, which contains columns of those both
|
[
"",
"mysql",
"sql",
""
] |
I have a Package table in BigQuery as follows:
```
Packageid Scanid dispatchid timestamp status
p1 s1 null t1 'in'
p2 s1 xxx t2 'in'
p1 s2 yyy t3 'pkin'
p1 s3 sss t4 'iwi'
p1 s4 eee t5 'lhp'
p2 s2 uuuu t6 'uio'
p2 s3 null t7 'jsk'
```
I want to retrieve the following details:
```
Packageid Latest-Scanid First-Dispatch-time Last-Dispatch-time latest-status
p1 s4 t3 t5 'lhp'
p2 s3 t2 t6 'jsk'
```
First-Dispatch-time is the time when first time dispatch id appeared in the package scan.
Last-Dispatch-time is the time when last time dispatch id appeared in the package scan.
Is there any way to get the above table using BigQuery or user defined functions in BigQuery?
|
One method uses windows functions and conditional aggregation:
```
select packageid,
max(case when seqnum = 1 then dispatchid end) as dispatchid,
min(case when dispatchid is not null then timestamp end) as first_dispatchid,
max(case when dispatchid is not null then timestamp end) as last_dispatchid,
max(case when seqnum = 1 then status end) as status
from (select t.*,
row_number() over (partition by packageid order by timestamp desc) as seqnum
from t
) t
group by packageid;
```
|
I will note that this is for SQL Server and may or may not work in MYSQL.
```
SELECT Packageid,
MAX(Scanid) [Latest_Scanid],
MIN(timestamp) [First-Dispatch-time],
MAX(timestamp) [Last-Dispatch-time],
(SELECT status FROM Package p WHERE p.timestamp = Package.timestamp AND p.Packageid = Package.Packageid) [latest-status]
FROM Package
```
|
Querying in BigQuery?
|
[
"",
"sql",
"database",
"google-bigquery",
"user-defined-functions",
""
] |
I need to add two columns in a row.
Table Data
| id | Col1 | Col2 |
| --- | --- | --- |
| 1 | 10 | 20 |
| 2 | 11 | 20 |
| 3 | 12 | 20 |
Result expected
| id | Sum |
| --- | --- |
| 1 | 30 |
| 2 | 31 |
| 3 | 32 |
I tried `sum(col1 + col2)`, but that sums the values across all the rows together.
|
`sum()` is an *aggregate* function (one that gives a single result for a group of rows), not an *algebraic* one: you want the *addition* (the *mathematical* sum) of the two columns:
```
select id, col1 + col2 as sum
from mytable
```
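The difference is easy to see side by side; an illustrative sketch of the sample table using Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, col1 INTEGER, col2 INTEGER);
INSERT INTO t VALUES (1, 10, 20), (2, 11, 20), (3, 12, 20);
""")

# Row-wise addition: one result per row.
per_row = conn.execute("SELECT id, col1 + col2 FROM t ORDER BY id").fetchall()

# Aggregate SUM(): a single result for the whole table.
total = conn.execute("SELECT SUM(col1 + col2) FROM t").fetchone()[0]
```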
|
We have two types of columns in a grouped query (aggregate columns and group columns), as in this query:
```
select id, col1 + col2 as sum
from mytable
group by id
```
we would have to list `id`, `col1` and `col2` in the `GROUP BY` clause, otherwise we get this error:
> Column 'TEST.COL1' is invalid in the select list because it is not
> contained in either an aggregate function or the GROUP BY clause.
If we use the `MAX()` aggregate function instead, like this
```
SELECT
ID,
MAX(COL1+COL2) AS SUM
FROM TEST
GROUP BY ID
```
we get the result, BUT this isn't a good idea because the cost of this query is about 4 times more than that of the code below:
```
SELECT
ID,COL1+COL2 AS SUM
FROM TEST
```
|
DB2, SQL query to SUM 2 columns
|
[
"",
"sql",
"oracle",
""
] |
I have following query (`BOCRTNTIME - varchar e.g 2015-02-28 12:21:45, VIEW_BASE_MARIX_T - some view`):
```
select BOCRTNTIME
from VIEW_BASE_MARIX_T
where to_date(substr(BOCRTNTIME,1,10),'YYYY-MM-DD')
between (to_date ('2016-01-01', 'YYYY-MM-DD'))
and (to_date ('2016-02-01', 'YYYY-MM-DD'))
```
On executing I get error:
```
ORA-01839: "date not valid for month specified"
```
I thought there might be incorrect data in `BOCRTNTIME`, so I executed the following query:
```
select distinct
substr(BOCRTNTIME,1,8),
substr(BOCRTNTIME,9,2)
from VIEW_BASE_MARIX_T
order by substr(BOCRTNTIME,9,2);
```
But everything looks fine: <http://pastebin.com/fNjP4UAu>.
Also following query executes without any error:
```
select to_date(substr(BOCRTNTIME,1,10),'YYYY-MM-DD')
from VIEW_BASE_MARIX_T;
```
I already tried adding `trunc()` to all the `to_date()` calls, but no luck. I also created a PL/SQL procedure that takes items one by one from `VIEW_BASE_MARIX_T` and converts them to dates - and everything works just fine.
Any ideas why I get error on first query?
UPD: The query on the table used by the view works fine, but on the view it does not.
UPD2: We have a few environments with the same product, but get the error only on one.
UPD3: The issue was resolved by searching for invalid dates in the table used by the view.
|
I think that what might be happening is that Oracle is pushing the predicate to the underlying tables of the view.
Have you tried to run the query
```
select to_date(substr(BOCRTNTIME,1,10),'YYYY-MM-DD') BOCRTNTIME
from MY_TABLE
```
instead of querying the view?
you can also confirm this by using the NO\_PUSH\_PRED hint
```
select /*+ NO_PUSH_PRED(VIEW_BASE_MARIX_T) */
BOCRTNTIME
from VIEW_BASE_MARIX_T
where
to_date(substr(BOCRTNTIME,1,10),'YYYY-MM-DD') between (to_date ('2016-01-01', 'YYYY-MM-DD')) and (to_date ('2016-02-01', 'YYYY-MM-DD'))
```
|
A bit too long for a comment - create a simple function to test the dates:
```
CREATE FUNCTION is_Valid_Date(
in_string VARCHAR2,
in_format VARCHAR2 DEFAULT 'YYYY-MM-DD'
) RETURN NUMBER DETERMINISTIC
AS
dt DATE;
BEGIN
dt := TO_DATE( in_string, in_format );
RETURN 1;
EXCEPTION
WHEN OTHERS THEN
RETURN 0;
END;
/
```
Then you can do:
```
SELECT BOCRTNTIME
FROM VIEW_BASE_MARIX_T
WHERE is_Valid_Date( substr(BOCRTNTIME,1,10) ) = 0;
```
You will possibly find that April, June, September or November have an entry for the 31st of that month or February has values greater than 28/29th (although I can't see anything like that in your pasted data).
Otherwise you could try using ANSI Date literals:
```
SELECT BOCRTNTIME
FROM VIEW_BASE_MARIX_T
WHERE to_date(substr(BOCRTNTIME,1,10),'YYYY-MM-DD') between DATE '2016-01-01' and DATE '2016-02-01';
```
or, even simpler, given the format of the input:
```
SELECT BOCRTNTIME
FROM VIEW_BASE_MARIX_T
WHERE substr(BOCRTNTIME,1,10) between '2016-01-01' and '2016-02-01';
```
|
ORA-01839 "date not valid for month specified" for to_date in where clause
|
[
"",
"sql",
"oracle",
"date-format",
"to-date",
""
] |
I don't know how to frame this question, but please read the description.
I have two tables as below,
EntityProgress table-
```
EntityID CurrentStateID
101 1154
201 1155
301 1155
```
EnityApprovar Table
```
EntityID StateID ActorID
201 1154 8
201 1154 9
201 1155 8
301 1154 8
301 1154 9
301 1155 9
```
Now what I want is: if I pass `ActorID=2` as a parameter, it should return only one row as below, because we don't have any matching EntityID in the `entityapprovar` table.
```
EntityID CurrentStateID
101 1154
```
But If I pass the `ActorID=9` then it should give me the result as below,
```
EntityID CurrentStateID
301 1155
```
because we have a matching EntityID record in the `EntityApprover` table, and for that EntityID we have the CurrentStateID, and for that state the ActorID is 9.
So to get the result I have done as below,
```
SELECT
E.EntityID,
E.CurrentStateID
FROM
EntityProgress E LEFT JOIN EntityApprover EP
ON E.EntityID = EP.EntityID AND E.CurrentStateID = EP.StateID
WHERE
-- some conditions
AND ((ISNULL(EP.ActorID,0) )= 0
OR ((ISNULL(EP.ActorID,0))!= 0 AND EP.ActorID = @ActorID AND Ep.CurrentStateID = E.StateID))
```
But when I pass 2 I get the first result; when I pass 9 or 8 I don't get the desired result. Maybe this is simple, but I am stuck with it and need someone else's view to suggest a different way.
In case of any confusion, feel free to leave a comment.
|
Here is my attempt to answer you.
**Query**
```
DECLARE @ActorID int = 2
DECLARE @EntityProgress table
(
EntityID int,
CurrentStateID int
)
DECLARE @EnityApprovar table
(
EntityID int,
StateID int,
ActorID int
)
INSERT into @EntityProgress
values (101, 1154),
(201, 1155),
(301, 1155)
INSERT into @EnityApprovar
VALUES (201, 1154, 8),
(201, 1154, 9),
(201, 1155, 8),
(301, 1154, 8),
(301, 1154, 9),
(301, 1155, 9)
SELECT
E.EntityID
,E.CurrentStateID
,EP.ActorID
FROM @EntityProgress E
LEFT JOIN @EnityApprovar EP
ON E.EntityID = EP.EntityID
AND E.CurrentStateID = EP.StateID
WHERE ((EP.ActorID IS NULL AND NOT EXISTS (SELECT 1 FROM @EnityApprovar WHERE ActorID = @ActorID))
OR (EP.ActorID = @ActorID))
```
When you pass `@ActorID = 2`, it'll give the output below.
```
EntityID CurrentStateID
101 1154
```
And when you pass `@ActorID = 9`, it'll give the output below.
```
EntityID CurrentStateID
301 1155
```
Which is as expected you want.
|
You'd have to include a `not exists` in the query as the row that you are trying to exclude when passing in 9 has no information to determine that there are other rows in the table that match.
i.e.
```
SELECT
E.EntityID,
E.CurrentStateID
FROM
EntityProgress E LEFT JOIN EnityApprovar EP
ON E.EntityID = EP.EntityID AND E.CurrentStateID = EP.StateID
WHERE ((ISNULL(EP.ActorID,0) = 0
and not exists(select 1
from EnityApprovar ep2
where ep2.ActorID = @ActorID ))
OR (ISNULL(EP.ActorID,0) != 0
AND EP.ActorID = @ActorID
AND E.CurrentStateID = Ep.StateID))
```
|
Left Join and conditional where not giving desired result?
|
[
"",
"sql",
"sql-server",
"join",
"where-clause",
""
] |
This may be a stupid and very easy question, but I am very distracted right now and didn't find any solution.
I have a table like this:
```
ID | Value | Gr_Id | Gr_Value
------------------------------
a |0 |1 |Null
b |2 |2 |Null
c |4 |2 |Null
d |1 |3 |Null
e |3 |4 |Null
f |3 |4 |Null
g |2 |5 |Null
h |3 |5 |Null
```
Desired Output:
```
ID | Value | Gr_Id | Gr_Value
------------------------------
a |0 |1 |0
b |2 |2 |4
c |4 |2 |4
d |1 |3 |1
e |3 |4 |3
f |3 |4 |3
g |2 |5 |3
h |3 |5 |3
```
So I want to update `Gr_Value` and set it to the maximum `Value` within each `Gr_Id`.
Thank you.
|
I think this will solve your problem:
```
SELECT ID, Value, Gr_Id, (
SELECT MAX(Value)
FROM tableName t2
WHERE t1.Gr_Id = t2.Gr_Id
) as Gr_Value
FROM tableName t1
```
Try it; hope it helps
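Since the question actually asks for an *update* rather than a select, note the same correlated subquery also works inside an `UPDATE`. A minimal sketch in SQLite via Python (table name `t` assumed; not tested against SQL Server, but the pattern is standard SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ID TEXT, Value INT, Gr_Id INT, Gr_Value INT);
INSERT INTO t (ID, Value, Gr_Id) VALUES
  ('a',0,1),('b',2,2),('c',4,2),('d',1,3),
  ('e',3,4),('f',3,4),('g',2,5),('h',3,5);
-- persist the per-group maximum with a correlated subquery
UPDATE t SET Gr_Value = (SELECT MAX(Value) FROM t t2 WHERE t2.Gr_Id = t.Gr_Id);
""")
rows = con.execute("SELECT ID, Gr_Value FROM t ORDER BY ID").fetchall()
```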
|
Using `OUTER APPLY` you can do this
```
SELECT ID,Value,Gr_Id,M.Gr_Value
FROM URTable
OUTER APPLY
(
SELECT MAX (Value) as Gr_Value
FROM URTable tmp
WHERE tmp.Gr_Id=URTable.Gr_Id
)M
```
|
Set the maximum value of the group to each group id
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I would like to label quantities (in the `quantity table`) using the labels assigned (see `label assignment` table) until the quantity goes to 0. Then I know that I am done labeling that particular ID.
`label assignment` table is as follows:
```
ID | Label | Quantity
1 aaa 10
1 bbb 20
2 ccc 20
```
And my `quantity table`:
```
ID | Total Quantity
1 60
2 20
```
And I would like to get the following result:
```
ID | Label | Quantity
1 aaa 10 (read from reference table, remaining 50)
1 bbb 20 (read from reference table, remaining 30)
1 [NULL] 30 (no label in reference table, remaining 0)
2 ccc 20 (read from reference table, remaining 0)
```
|
Simplest answer I think, after getting ideas from the other answers: Just create a "FAKE" label for the missing amount:
```
DECLARE @PerLabelQuantity TABLE(Id int, Label varchar(10), Quantity int);
INSERT INTO @PerLabelQuantity
VALUES (1, 'aaa', 10), (1, 'bbb', 20), (2, 'ccc', 20);
SELECT *
FROM @PerLabelQuantity
DECLARE @QuantityRequired TABLE(Id int, TotalQuantity int);
INSERT INTO @QuantityRequired
VALUES (1, 60), (2, 20);
SELECT *
FROM @QuantityRequired
-- MAKE A FAKE LABEL LET'S CALL IT [NULL] WITH THE AMOUNT THAT IS NOT LABELED
-- i.e. WITH THE REMAINING AMOUNT
-- Probably should be done by copying the original data and the following
-- into a temp table but this is just for proof of concept
INSERT INTO @PerLabelQuantity( Id, Label, Quantity )
SELECT q.ID,
NULL,
ISNULL(q.TotalQuantity - p.TotalQuantityLabeled, q.TotalQuantity)
FROM @QuantityRequired q
LEFT JOIN (SELECT p.ID, SUM(Quantity) AS TotalQuantityLabeled
FROM @PerLabelQuantity p
GROUP BY p.Id) p ON
p.ID = q.ID
AND q.TotalQuantity - p.TotalQuantityLabeled > 0
SELECT *
FROM @PerLabelQuantity p
```
|
You can do it with a simple `JOIN` and `UNION` operation so as to include 'not covered' quantities:
```
SELECT la.ID, la.Label, la.Quantity
FROM label_assignment AS la
INNER JOIN quantity AS q ON la.ID = q.ID
UNION
SELECT q.ID, NULL AS Label, q.TotalQuantity - la.TotalQuantity
FROM quantity AS q
INNER JOIN (
SELECT ID, SUM(Quantity) AS TotalQuantity
FROM label_assignment
GROUP BY ID
) AS la ON q.ID = la.ID AND q.TotalQuantity > la.TotalQuantity
```
[**Demo here**](http://sqlfiddle.com/#!6/5e24f/1)
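The same shape can be checked quickly in SQLite via Python (a sketch using the question's sample data; `NULL` comes back as `None`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE label_assignment (ID INT, Label TEXT, Quantity INT);
CREATE TABLE quantity (ID INT, TotalQuantity INT);
INSERT INTO label_assignment VALUES (1,'aaa',10),(1,'bbb',20),(2,'ccc',20);
INSERT INTO quantity VALUES (1,60),(2,20);
""")
sql = """
SELECT la.ID, la.Label, la.Quantity
FROM label_assignment la JOIN quantity q ON la.ID = q.ID
UNION
SELECT q.ID, NULL, q.TotalQuantity - la.TotalQuantity
FROM quantity q
JOIN (SELECT ID, SUM(Quantity) AS TotalQuantity
      FROM label_assignment GROUP BY ID) la
  ON q.ID = la.ID AND q.TotalQuantity > la.TotalQuantity
"""
rows = con.execute(sql).fetchall()
```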
|
How would l write SQL to label quantities until they run out?
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Suppose you have a query that returns a record set like the following:
```
Date | Code | Val 1 | Val 2
1/1/2016 A 1 2
1/1/2016 B 3 4
1/2/2016 A 5 6
1/2/2016 B 7 8
1/2/2016 C 9 10
```
And a second table that contains all possible values for "Code" from the subquery:
```
Code
A
B
C
D
```
How can the two sets be joined such that a record is added to the resultant query for each missing Code from the second table for *each date* in the first? In the example above the resultant query should be:
```
Date | Code | Val 1 | Val 2
1/1/2016 A 1 2
1/1/2016 B 3 4
1/1/2016 C NULL NULL
1/1/2016 D NULL NULL
1/2/2016 A 5 6
1/2/2016 B 7 8
1/2/2016 C 9 10
1/2/2016 D NULL NULL
```
An outer join on the Code column works for a single day, but because the date is not included in the second set (or the join), it does not work when applied to a set that spans multiple dates.
|
Sample Data:
```
CREATE TABLE #temp (DateField DATE , Code CHAR , Val1 INT, Val2 INT)
INSERT INTO #temp
VALUES
('1/1/2016','A','1','2'),
('1/1/2016','B','3','4'),
('1/2/2016','A','5','6'),
('1/2/2016','B','7','8'),
('1/2/2016','C','9','10')
CREATE TABLE #Code (Code CHAR )
INSERT INTO #Code
VALUES('A'),
('B'),
('C'),
('D')
```
Query:
1. get the cartesian product of all distinct dates within #temp and all codes within #Code and place in the Common Table Expression CTE
2. LEFT OUTER JOIN #temp with the Common Table Expression CTE on Code
and DateField
Query:
```
WITH CTE AS ( SELECT *
FROM (SELECT DISTINCT DateField
FROM #temp) AS A CROSS JOIN
#Code AS C
)
SELECT C.*,t.Val1,t.Val2
FROM CTE AS C LEFT OUTER JOIN
#temp AS t ON c.Code=t.Code AND c.DateField=t.DateField
```
Result:
[](https://i.stack.imgur.com/jm84d.png)
|
You don't say which database you are using; if you don't have CTE available you can also use a `Calendar` table.
*I've assumed your data is a table of `Orders`, just to give the table a name.*
```
CREATE TABLE Orders
(order_date date
,code varchar(1)
,val_1 int
,val_2 int);
INSERT INTO Orders
([order_date], [code], [val_1], [val_2])
VALUES
('2016-01-01 00:00:00', 'A', 1, 2),
('2016-01-01 00:00:00', 'B', 3, 4),
('2016-01-02 00:00:00', 'A', 5, 6),
('2016-01-02 00:00:00', 'B', 7, 8),
('2016-01-02 00:00:00', 'C', 9, 10);
CREATE TABLE Codes(code CHAR(1));
INSERT INTO Codes(code) VALUES
('A'),
('B'),
('C'),
('D');
CREATE TABLE Calendar(calendar_date DATE);
INSERT INTO Calendar(calendar_date) VALUES
('1/1/2016'),
('1/2/2016')
```
You can then join these to get all `codes` for each day
```
SELECT calendar_date, T.code, val_1, val_2
FROM
(SELECT calendar_date, code FROM Codes CROSS JOIN Calendar) AS T
LEFT JOIN Orders ON T.calendar_date = Orders.order_date
AND T.code = Orders.code
ORDER BY calendar_date, T.code
```
Query result
```
calendar_date code val_1 val_2
------------- ---- ----- -----
"2016-01-01" "A" "1" "2"
"2016-01-01" "B" "3" "4"
"2016-01-01" "C" "NULL" "NULL"
"2016-01-01" "D" "NULL" "NULL"
"2016-01-02" "A" "5" "6"
"2016-01-02" "B" "7" "8"
"2016-01-02" "C" "9" "10"
"2016-01-02" "D" "NULL" "NULL"
```
An advantage of the calendar table is when you want to populate your result with all days and codes, even if there is not a matching row in the main table. For example, adding another date produces the result
```
calendar_date code val_1 val_2
------------- ---- ----- -----
"2016-01-01" "A" "1" "2"
"2016-01-01" "B" "3" "4"
"2016-01-01" "C" "NULL" "NULL"
"2016-01-01" "D" "NULL" "NULL"
"2016-01-02" "A" "5" "6"
"2016-01-02" "B" "7" "8"
"2016-01-02" "C" "9" "10"
"2016-01-02" "D" "NULL" "NULL"
"2016-01-03" "A" "NULL" "NULL"
"2016-01-03" "B" "NULL" "NULL"
"2016-01-03" "C" "NULL" "NULL"
"2016-01-03" "D" "NULL" "NULL"
```
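The cross-join-then-left-join pattern is portable beyond any one engine; here is a sketch of the same query in SQLite via Python (table and column names taken from the answer above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Orders (order_date TEXT, code TEXT, val_1 INT, val_2 INT);
CREATE TABLE Codes  (code TEXT);
INSERT INTO Orders VALUES
  ('2016-01-01','A',1,2),('2016-01-01','B',3,4),
  ('2016-01-02','A',5,6),('2016-01-02','B',7,8),('2016-01-02','C',9,10);
INSERT INTO Codes VALUES ('A'),('B'),('C'),('D');
""")
sql = """
SELECT d.order_date, c.code, o.val_1, o.val_2
FROM (SELECT DISTINCT order_date FROM Orders) d
CROSS JOIN Codes c
LEFT JOIN Orders o ON o.order_date = d.order_date AND o.code = c.code
ORDER BY d.order_date, c.code
"""
rows = con.execute(sql).fetchall()
```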
|
How to Outer Join a Table onto a Subset of a Query
|
[
"",
"sql",
""
] |
Can anyone please help me join these tables? I have used an outer join,
but I am not sure what I should put in the SELECT statement. I don't want to use a wildcard. Please help.
Table
```
DECLARE @MASTER TABLE
(
SKU VARCHAR (30),
Resistor_ID VARCHAR (30),
Capacitor_ID VARCHAR (30),
Inductor_ID VARCHAR (30)
)
INSERT @MASTER
SELECT 'AREN2XS', '101', '1F01', 'H561' UNION ALL
SELECT 'GTYO63K','201', '2F02', 'H562' UNION ALL
SELECT 'LKUTN58','301', '3F03', 'H563' UNION ALL
SELECT 'POILOI44','401', '4F04', 'H564' UNION ALL
SELECT 'GTYUIO99','501', '5F05', 'H565' UNION ALL
SELECT 'AREN2XS','101', '1F01', 'H561' UNION ALL
SELECT 'LKUTN58','301', '3F03', 'H563' union ALL
select 'KPNGHT39','','',''
DECLARE @RESISTOR_CHILD TABLE
(
Resitor_ID VARCHAR (30),
Resistor_Value VARCHAR (30)
)
INSERT @RESISTOR_CHILD
SELECT '101','10OHM' UNION ALL
SELECT '301','30OHM'UNION ALL
SELECT '401','40OHM'UNION ALL
SELECT '501','50OHM'
DECLARE @CAPACITOR_CHILD TABLE
(
Capacitor_ID VARCHAR (20),
Capacitor_Value VARCHAR (20)
)
INSERT @CAPACITOR_CHILD
SELECT '2F02', '2UF' UNION ALL
SELECT '3F03', '3UF' UNION ALL
SELECT '5F05', '5UF' UNION ALL
SELECT '5F05', '5UF'
DECLARE @INDUCTOR_CHILD TABLE
(
Inductor_ID VARCHAR (20),
Inductor_valuue VARCHAR (20)
)
INSERT @INDUCTOR_CHILD
SELECT 'H561', '1HEN' UNION ALL
SELECT 'H562', '2HEN' UNION ALL
SELECT 'H562', '2HEN' UNION ALL
SELECT 'H563', '3HEN' UNION ALL
SELECT 'H564', '4HEN'
```
--Expected Output
/\*
```
SKU, Resistor_ID, Resistor_Value, Capacitor_ID, Capacitor_Value, Inductor_ID, Inductor_Value
AREN2XS, 101, 10OHM, 1F01, '' ,H561 ,1HEN
GTYO63K , 201, '' , 2F02, 2UF ,H562 ,2HEN
LKUTN58, 301, 30OHM, 3F03, 3UF ,H563 ,3HEN
POILOI44, 401, 40OHM, 4F04, '' ,H564 ,4HEN
GTYUIO99, 501, 50OHM, 5F05, 5UF ,H565 ,''
KPNGHT39, '', '', '', '', '', ''
```
\*/
|
You need left outer joins like this:
```
SELECT distinct t1.sku,t2.resistor_id,t2.resistor_value,t3.capacitor_id,t4.inductor_id
FROM master t1
LEFT OUTER JOIN resistor_child t2 ON(t1.resistor_id = t2.resistor_id)
LEFT OUTER JOIN CAPACITOR_child t3 on(t1.capacitor_id = t3.capacitor_id)
LEFT OUTER JOIN inductor_child t4 on(t1.inductor_id = t4.inductor_id)
```
That will join all the tables together, and the left join will put NULL values in the columns that don't have a matching value in one of the 3 tables (resistor\_child, capacitor\_child, inductor\_child).
|
This is a pretty basic left join.
Left joins return all records from the table on the LEFT and only those that match from those on the right. So in this example: return all records from master and only those that match in each of the other tables. Thus you get the null values on those records where no matching record could be found in the child tables; yet you still get all records from master.
```
SELECT M.SKU, RC.*, CC.*, IC.*
FROM master M
LEFT JOIN resistor_Child RC
on M.Resistor_ID = RC.Resistor_ID
LEFT JOIN Capacitor_Child CC
on M.Capacitor_ID = CC.Capacitor_ID
LEFT join Inductor_Child IC
on M.Inductor_ID = IC.Inductor_ID
```
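A runnable sketch of the chained LEFT JOINs in SQLite via Python (cut down to two child tables and two rows; names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE master (sku TEXT, resistor_id TEXT, capacitor_id TEXT);
CREATE TABLE resistor_child  (resistor_id TEXT, resistor_value TEXT);
CREATE TABLE capacitor_child (capacitor_id TEXT, capacitor_value TEXT);
INSERT INTO master VALUES ('AREN2XS','101','1F01'),('GTYO63K','201','2F02');
INSERT INTO resistor_child  VALUES ('101','10OHM');
INSERT INTO capacitor_child VALUES ('2F02','2UF');
""")
sql = """
SELECT DISTINCT m.sku, r.resistor_value, c.capacitor_value
FROM master m
LEFT JOIN resistor_child  r ON m.resistor_id  = r.resistor_id
LEFT JOIN capacitor_child c ON m.capacitor_id = c.capacitor_id
ORDER BY m.sku
"""
rows = con.execute(sql).fetchall()
```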
|
Complexity in a join
|
[
"",
"sql",
""
] |
I am doing a report, and one of the tables, called `tblProduct`, has a calculation triggered in SQL Server that computes cost in a column. I am having trouble locating the formula. Is there a way to determine what the formula is, or where it is being pulled from?
|
By the sounds of it you are looking for a calculated column. If you are looking for the logic within a calculated column, you can get it by simply doing the following:
1. Right click on the table
2. Go to Script Table as
3. Click on CREATE To
Within the definition you should then see the calculated column definition.
So in the example below:
```
USE [KamTest]
GO
/****** Object: Table [dbo].[temp] Script Date: 2/8/2016 2:24:40 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[temp](
[Col1] [int] NULL,
[Col2] [int] NULL,
[col3] AS ([Col1]*[Col2]) --<-- the calculated column
) ON [PRIMARY]
GO
```
below is the calculated column definition
```
[col3] AS ([Col1]*[Col2])
```
|
I figured it out; posting it in case it helps someone in the future. It was in SQL Server Agent under Jobs: there were jobs that ran a stored procedure doing the calculation.
|
Where is the SQL Server 2012 formula stored?
|
[
"",
"sql",
"sql-server-2012",
""
] |
I have 2 tables:
1) et\_pics - here information about employees:
* ob\_no, int, key, e.g. 2020
* c\_name, varchar, e.g. Dribbler D.E.
* e\_post, varchar, e.g. Chairman
SELECT \* FROM et\_pics:
> ob\_no | c\_name | e\_post
>
> 2020 | Dribbler D.E. | Chairman
2) et\_vacations - here information about vacations is stored:
* ob\_no, int, e.g. 666
* e\_pic, int, connection to pic.ob\_no, e.g. 2020
* c\_name, varchar, e.g. Vacation blah blah
* e\_dateFrom, date, e.g. 2010-08-08 00:00:00.000
* e\_dateTo, date, e.g 2010-08-09 00:00:00.000
SELECT \* FROM et\_vacations vac returns
> ob\_no | e\_pic | c\_name | e\_dateFrom | e\_dateTo
>
> 777 | 2020 | Vacation blah blah | 2010-08-08 00:00:00.000 | 2010-08-09 00:00:00.000
>
> 777 | 2020 | Vacation blah blah | 2015-08-08 00:00:00.000 | 2015-08-09 00:00:00.000
What I need to do is to connect et\_vacations to et\_pics with conditions:
* there should be only one vacation record per person (seems to me
MAX(e\_dateTo));
* the vacation record must be >= getDate(), or NULL is
displayed.
Can't understand how to write the right subquery - I tried this way, but no luck:
```
SELECT
pics.c_name,
pics.e_post,
vac.e_dateTo
FROM et_pics pics
INNER JOIN et_division div on pics.e_division = div.ob_no
INNER JOIN et_vacations vac on vac.e_pic = pics.ob_no
WHERE
(pics.e_fireDate IS NULL OR pics.e_fireDate > getDate())
AND vac.e_dateTo IN (
SELECT MAX(vac.e_pic) from et_vacations vac
GROUP BY vac.e_pic
)
ORDER BY pics.c_name;
```
|
I believe your problem needs a bit more definition around what is considered the date of the vacation you are interested in (the beginning date? the end date?), but here is a start. Note I left the join to division in since you had it, but you are not using the data from that table in any of the select fields or where clauses. Unless it is being used to reduce the number of people in the query (because not everyone in et\_pics is in et\_division), this join should be removed.
```
SELECT
pics.c_name,
pics.e_post,
vac.e_dateTo
FROM et_pics pics
INNER JOIN et_division div on pics.e_division = div.ob_no
INNER JOIN (select e_pic, max(e_dateTo) as e_dateTo from et_vacations group by e_pic) vac on vac.e_pic = pics.ob_no
WHERE
(pics.e_fireDate IS NULL OR pics.e_fireDate > getDate())
ORDER BY pics.c_name;
```
|
I guess you need to output employees with the nearest planned vacation(?). Then you should JOIN the already prepared `et_vacations` table using a LEFT JOIN, in case some employees have no vacation records:
```
WITH T_vac as
(
select et_vacations.*,
ROW_NUMBER() OVER (PARTITION BY e_pic ORDER BY e_dateTo) as RowNum
from et_vacations
where e_dateTo>=getDate()
)
SELECT
pics.c_name,
pics.e_post,
vac.e_dateFrom,
vac.e_dateTo
FROM et_pics pics
LEFT JOIN T_vac vac on (vac.e_pic = pics.ob_no) AND (vac.RowNum=1)
ORDER BY pics.c_name;
```
`SQLFiddle demo`
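A small SQLite sketch via Python of the "earliest upcoming vacation, or NULL" idea, using a grouped derived table (a fixed cutoff date stands in for getDate(); table and column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE et_pics (ob_no INT, c_name TEXT);
CREATE TABLE et_vacations (e_pic INT, e_dateFrom TEXT, e_dateTo TEXT);
INSERT INTO et_pics VALUES (2020, 'Dribbler D.E.'), (2021, 'Smith J.');
INSERT INTO et_vacations VALUES
  (2020, '2010-08-08', '2010-08-09'),   -- in the past, ignored
  (2020, '2030-08-08', '2030-08-09'),
  (2020, '2040-01-01', '2040-01-02');
""")
sql = """
SELECT p.c_name, v.next_end
FROM et_pics p
LEFT JOIN (SELECT e_pic, MIN(e_dateTo) AS next_end
           FROM et_vacations
           WHERE e_dateTo >= '2020-01-01'   -- stand-in for getDate()
           GROUP BY e_pic) v ON v.e_pic = p.ob_no
ORDER BY p.c_name
"""
rows = con.execute(sql).fetchall()
```

Employees without any upcoming vacation (here 'Smith J.') come back with `NULL`.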
|
Problem with joining two tables on subquery condition
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
```
SELECT TO_CHAR((select logical_date -1 from logical_date
where logical_date_type='B'),'DD/MM/YYYY HH24:MI:SS') FROM DUAL;
```
This Query returns `23/04/2016 00:00:00`
```
-- o/p of: select logical_date -1 from logical_date where logical_date_type='B'  -->  4/23/2016
SELECT TO_DATE((select logical_date -1 from logical_date
where logical_date_type='B'),'DD/MM/YYYY HH24:MI:SS') FROM DUAL;
```
This Query returns `4/23/0016`.
How do I get the format given by TO\_CHAR using TO\_DATE?
|
Dates do not have any intrinsic format. Oracle has an internal representation which doesn't look like anything you'd recognise as a date. The client or application decides how to display the date (e.g. using NLS\_DATE\_FORMAT), or you can use `TO_CHAR` to convert the date to a string with a format you specify, as you're doing in your first example.
In your second example you are doing an implicit conversion of your actual date to a string, using your NLS setting - losing the century in the process, so presumably that has YY rather than YYYY - and then you convert that back to a date. I'm surprised that doesn't error, as you are swapping the month and day positions.
Do not do that. Converting to a string and back to a date is pointless, even if your settings don't lose information on the way.
If you want it as a date for a client or other process to use just do:
```
select logical_date -1
from logical_date
where logical_date_type='B'
```
If you want to compare against another date then still leave it as a date:
```
select other_columns
from logical_date
where logical_date_type='B'
and logical_date -1 < sysdate
```
If you want to specify the format **for display** then use:
```
select to_char(logical_date -1, 'DD/MM/YYYY')
from logical_date
where logical_date_type='B'
```
|
The problem is related to the default date format for your session that is configured in your oracle client settings
To check the NLS\_DATE\_FORMAT settings for your Session
```
SELECT value
FROM nls_session_parameters
WHERE parameter = 'NLS_DATE_FORMAT'
```
here's how you can change this setting for your session to achieve the desired results:
```
alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';
```
|
TO_CHAR and TO_DATE giving different results.How to achieve the TO_CHAR functionality using TO_DATE?
|
[
"",
"sql",
"oracle",
"date-formatting",
"to-date",
"to-char",
""
] |
Is there a way to convert the first letter to uppercase in Oracle SQL without using the INITCAP function?
My problem is that I must work with the DISTINCT keyword in the SQL clause, and the INITCAP function doesn't work.
Here is my SQL example:
```
select distinct p.nr, initcap(p.firstname), initcap(p.lastname), ill.describtion
from patient p left join illness ill
on p.id = ill.id
where p.deleted = 0
order by p.lastname, p.firstname;
```
I get this error message: ORA-01791: not a SELECTed expression
|
When `SELECT DISTINCT`, you can't `ORDER BY` columns that aren't selected. Use column aliases instead, as:
```
select distinct p.nr, initcap(p.firstname) fname, initcap(p.lastname) lname, ill.describtion
from patient p left join illness ill
on p.id = ill.id
where p.deleted = 0
order by lname, fname
```
|
This would do it, but I think you need to post your query, as there may be a better solution:
```
select upper(substr(<column>,1,1)) || substr(<column>,2,9999) from dual
```
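The `UPPER(SUBSTR(...)) || SUBSTR(...)` trick is portable; a quick check in SQLite via Python (lowercasing the tail as well, which is what INITCAP would do for a single word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# uppercase first character, lowercase the rest
val = con.execute(
    "SELECT upper(substr(:n, 1, 1)) || lower(substr(:n, 2))",
    {"n": "jOHN"},
).fetchone()[0]
```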
|
How to convert only first letter uppercase without using Initcap in Oracle?
|
[
"",
"sql",
"oracle",
""
] |
I have no idea where to even start with this, but here's what I need to do...
We have a table with addresses and telephone numbers in it. I need to reduce the telephone number columns from 6 to 3, moving the numbers along from right to left into any empty cells.
Example below -
**What the table looks like**

**What i want it to look like**

|
`PIVOT` and `UNPIVOT` will be able to do the job. The idea:
1. `UNPIVOT` to get the data into rows
2. Clean out empty rows and calculate what the new column will be
3. `PIVOT` to get the cleaned data back into columns.
Here's one way of doing it using a bunch of CTEs in one statement. Note I've assumed there is an ID column and I've made up the table name:
```
;WITH Unpivoted AS
(
-- our data into rows
SELECT ID, TelField, Tel
FROM Telephone
UNPIVOT
(
Tel FOR TelField IN (TEL01,TEL02,TEL03,TEL04,TEL05,TEL06)
) as up
),
Cleaned AS
(
-- cleaning the empty rows
SELECT
'TEL0' + CAST(ROW_NUMBER() OVER (PARTITION BY ID ORDER BY TelField) AS VARCHAR) [NewTelField],
ID,
TelField,
Tel
FROM Unpivoted
WHERE NULLIF(NULLIF(Tel, ''), 'n/a') IS NOT NULL
),
Pivoted AS
(
-- pivoting back into columns
SELECT ID, TEL01, TEL02, TEL03
FROM
(
SELECT ID, NewTelField, Tel
FROM Cleaned
) t
PIVOT
(
-- simply add ", TEL04, TEL05, TEL06" if you want to still see the
-- other columns (or if you will have more than 3 final telephone numbers)
MIN(Tel) FOR NewTelField IN (TEL01, TEL02, TEL03)
) pvt
)
SELECT * FROM Pivoted
ORDER BY ID
```
That will shift the telephone numbers into their correct place in one go. You can also change the `Pivoted` in `SELECT * FROM Pivoted` to any of the other CTEs - `Unpivoted`, `Cleaned` - to see what the partial results would look like. Final result:
[](https://i.stack.imgur.com/tfkoi.png)
|
There are a few different ways you can do this, the best choice depends on the database engine you're using, and/or what other tools are available.
The dirtiest method might be to write 64 different sql statements fetching out the data in each of the 64 possible combinations (including where 6 fields are all blank, and 6 fields are all populated) However, you might not relish this in practice.
So the problem is finding a simple, repeatable algorithm to reduce the problem space to something simpler.
One way to do this is to write 6 statements, each of which performs, where possible, a shift, to the left of one space.
e.g.
```
update table set field1 = field2, field2 = null from table
where field2 is not null and field1 is null
```
Moves content from field2 into field 1, if and only if field1 is blank, and field 2 is not blank
A second statement:
```
update table set field2 = field3, field3 = null from table
where field3 is not null and field2 is null
```
Does the same - and so on:
```
update table set field3 = field4, field4 = null from table
where field4 is not null and field3 is null
update table set field4 = field5, field5 = null from table
where field5 is not null and field4 is null
update table set field5 = field6, field6 = null from table
where field6 is not null and field5 is null
```
So that's 5 statements in total - running them all as a batch will incrementally shift records over by a minimum of one per batch - it will move more per batch in cases where there is an empty space to the left and all the records to the right are adjacent with no gaps. Gaps will tend to move to the right and so I'd guess that by running the batch a minimum of 6 times, you ought to find yourself with a dataset that's conformant with what you want.
The actual sql you need will be dependent on how your RDBMS language deals with nulls or empty strings or whatever, but the good news is you only need 5 statements, and you only need to run them a relatively small number of times.
There's probably cleverer methods than this, but this is fairly simple, and each run of the batch gets you one step closer to where you want to be. On the down-side, it's a bit clunky. If you need to run this periodically, I might be tempted to figure out something a bit more elegant.
Depending on the datasize, it might be worth running something that classifies/counts the number of records you have of each combination/null-count. That way it might be possible to do cut down the number of runs you've got to do - chances are there's only 4 or 5 different patterns, and not the full 64.
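A sketch of the repeated-shift batch in SQLite via Python (3 columns instead of 6; the `phones` table and `f1..f3` column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE phones (f1 TEXT, f2 TEXT, f3 TEXT);
INSERT INTO phones VALUES
  (NULL,'111',NULL), (NULL,NULL,'222'), ('333',NULL,'444');
""")
shift = [
    "UPDATE phones SET f1 = f2, f2 = NULL WHERE f2 IS NOT NULL AND f1 IS NULL",
    "UPDATE phones SET f2 = f3, f3 = NULL WHERE f3 IS NOT NULL AND f2 IS NULL",
]
for _ in range(3):          # repeat the batch until gaps are gone
    for stmt in shift:
        con.execute(stmt)
rows = con.execute("SELECT f1, f2, f3 FROM phones ORDER BY rowid").fetchall()
```

After the repeats, every number has drifted as far left as it can go.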
|
SQL moving data along columns
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
This seems really trivial, but for the life of me I cannot understand why this is not working. Any ideas would be great!
```
USE [ACCTQA]
GO
INSERT INTO [dbo].[glacct]
([Company]
,[Account]
,[Description])
VALUES
('E096','191802','INVESTMENT IN SUBS')
,('E096','320007','COMMON STOCK')
,('E096','330013','SURPLUS')
,('E096','340003','RETAINED EARNINGS')
,('E096','690876','LIAB & CASUALTY INS')
,('E096','710000','BONDS - AGENCY BILL')
,('E096','710001','COMM. P&C - AGENCY BILL')
,('E096','710002','EMPL BENEFITS-AGENCY BILL')
```
ERROR MSG:
```
Msg 102, Level 15, State 1, Line 8
Incorrect syntax near ','.
```
|
Try this
```
USE [ACCTQA]
GO
INSERT INTO [dbo].[glacct]
([Company]
,[Account]
,[Description])
SELECT * FROM (
SELECT 'E096','191802','INVESTMENT IN SUBS' UNION ALL
SELECT 'E096','320007','COMMON STOCK' UNION ALL
SELECT 'E096','330013','SURPLUS' UNION ALL
SELECT 'E096','340003','RETAINED EARNINGS'
.....) AS t
```
I'm guessing your version does not accept multiple values like this, so you have to use a select statement (INSERT ... SELECT) to make it work.
|
Your query won't work if you are using SQL Server 2005 or below. On those versions you have to try something like:
```
INSERT INTO glacct
([Company],[Account],[Description])
SELECT 'E096','191802','INVESTMENT IN SUBS'
UNION ALL
SELECT 'E096','320007','COMMON STOCK'
UNION ALL
SELECT 'E096','330013','SURPLUS'
```
Your query will work on SQL Server 2008 and above.
**[SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/fd272)**
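Both forms side by side, sketched in SQLite via Python (SQLite has supported the multi-row VALUES syntax since 3.7.11):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE glacct (Company TEXT, Account TEXT, Description TEXT)")
# Multi-row VALUES (SQL Server 2008+ syntax)
con.execute("""INSERT INTO glacct VALUES
    ('E096','191802','INVESTMENT IN SUBS'),
    ('E096','320007','COMMON STOCK')""")
# Portable fallback for older servers: INSERT ... SELECT ... UNION ALL
con.execute("""INSERT INTO glacct
    SELECT 'E096','330013','SURPLUS'
    UNION ALL SELECT 'E096','340003','RETAINED EARNINGS'""")
n = con.execute("SELECT COUNT(*) FROM glacct").fetchone()[0]
```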
|
SQL Insert statement with multiple rows
|
[
"",
"sql",
"sql-server",
""
] |
```
select name, (regexp_split_to_array(name, '')) from table
```
The result is `{a,b,c,d,e}`. Is it possible to split that up to individual rows and group by name so that it looks like:
```
a 1
b 2
c 3
d 4
e 5
```
|
If you are on 9.4 or later you can use `with ordinality`:
```
SELECT name, u.*
FROM the_table
cross join lateral unnest(regexp_split_to_array(name,'')) with ordinality as u(ch, nr)
```
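For readers without a Postgres instance handy, the shape of `unnest ... WITH ORDINALITY` over the characters of a string can be mimicked in plain Python:

```python
# (character, ordinal) pairs, as unnest(...) WITH ORDINALITY would return them
name = "abcde"
rows = [(ch, i) for i, ch in enumerate(name, start=1)]
```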
|
You are looking for the [unnest](http://www.postgresql.org/docs/9.3/static/functions-array.html) function:
```
select name, unnest(regexp_split_to_array(name, '')) from table
```
|
Splitting characters after regexp_split_to_array
|
[
"",
"sql",
"postgresql",
"unnest",
""
] |
The tables will build, but every time I try to insert values into the table I get a 1452 foreign key constraint error. I wonder if the problem has to do with the EMPLOYEE table having a foreign key for STORE\_CODE in the STORE table, while the STORE table has a foreign key for EMP\_CODE in the EMPLOYEE table. Is the circular reference the problem here?
```
ALTER TABLE EMPLOYEE DROP FOREIGN KEY STORE_CD;
ALTER TABLE STORE DROP FOREIGN KEY REGION_CD;
ALTER TABLE STORE DROP FOREIGN KEY EMPLOYEE_CD;
DROP TABLE IF EXISTS EMPLOYEE, REGION, STORE;
CREATE TABLE EMPLOYEE (
EMP_CODE int NOT NULL PRIMARY KEY,
EMP_TITLE varchar(4),
EMP_LNAME varchar(15),
EMP_FNAME varchar(15),
EMP_INITIAL varchar(1),
EMP_DOB datetime,
STORE_CODE int NOT NULL
) Engine=InnoDB;
-- Table Region
CREATE TABLE REGION (
REGION_CODE int NOT NULL PRIMARY KEY,
REGION_DESCRIPT varchar(20)
) Engine=InnoDB;
-- Table Store
CREATE TABLE STORE (
STORE_CODE int NOT NULL PRIMARY KEY,
STORE_NAME varchar(20) NOT NULL,
STORE_YTD_SALES numeric NOT NULL,
REGION_CODE int NOT NULL,
EMP_CODE int NOT NULL
) Engine=InnoDB;
ALTER TABLE EMPLOYEE ADD CONSTRAINT STORE_CD
FOREIGN KEY STORE_CD(STORE_CODE) REFERENCES STORE(STORE_CODE);
ALTER TABLE STORE ADD CONSTRAINT REGION_CD
FOREIGN KEY REGION_CD(REGION_CODE) REFERENCES REGION(REGION_CODE);
ALTER TABLE STORE ADD CONSTRAINT EMPLOYEE_CD
FOREIGN KEY EMPLOYEE_CD(EMP_CODE) REFERENCES EMPLOYEE(EMP_CODE);
```
|
In my experience with relational databases, I think you should create an
intermediate table to connect "store" with "employee" (let's name it store\_has\_employee) with the attributes idstore (FK), idemployee (FK) and isManager (boolean).
Then you should insert the "regions" first, so you can insert a "store"; then, when you have registered "employees", all you have to do is connect them in "store\_has\_employee", and if you want to indicate the manager, just insert isManager = true.
This is the most efficient way to do it and to get faster queries.
Hope it helps.
|
It's not possible to have mutual foreign keys unless you allow at least one of the columns to be `NULL`. Otherwise you can never have a consistent set of tables: If you add the store first, it will refer to a nonexistent employee; if you add the employee first, it will refer to a nonexistent store.
So you need to allow the referencing column to be `NULL`. Then you can add a row to the first table with `NULL` in the referencing column, add a row to the second table, then fill in the referencing column in the first table with the ID from the second table.
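A sketch of the insert-NULL-then-update dance in SQLite via Python (simplified columns; SQLite accepts a forward FK reference at CREATE time and only checks it on DML):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE employee (
    emp_code   INTEGER PRIMARY KEY,
    store_code INTEGER REFERENCES store(store_code)   -- nullable on purpose
);
CREATE TABLE store (
    store_code INTEGER PRIMARY KEY,
    emp_code   INTEGER NOT NULL REFERENCES employee(emp_code)
);
INSERT INTO employee (emp_code, store_code) VALUES (1, NULL);  -- step 1
INSERT INTO store    (store_code, emp_code) VALUES (10, 1);    -- step 2
UPDATE employee SET store_code = 10 WHERE emp_code = 1;        -- step 3
""")
rows = con.execute("SELECT emp_code, store_code FROM employee").fetchall()
```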
|
What is the correct way of have foreign keys in tables that reference each other
|
[
"",
"mysql",
"sql",
""
] |
I have a problem. I need to take the newly created table column (Field) I have just named "ID" and INDEX it (Unique). I have tried several ways with no success. All I'm asking is for someone to offer a fresh perspective. ~ Shaw
```
Public Sub OrcleJEtoUnmatched()
Dim db As Database
Dim tdf As DAO.TableDef
Dim fld1 As DAO.Field, fld2 As DAO.Field
Dim idx As Index
Dim rst As DAO.Recordset
Dim hertz As String
Set db = CurrentDb()
'Copies table data from ORACLE JE Table; Creates / overwrites existing data to UNMATCHED Table (Working Table)
DoCmd.RunSQL "SELECT [Oracle JE].* INTO Unmatched FROM [Oracle JE];"
Set tdf = db.TableDefs("Unmatched")
Set fld1 = tdf.CreateField("ID", dbText, 255)
Set fld2 = tdf.CreateField("BatchCalc", dbText, 255)
With tdf
.Fields.Append fld1
.Fields.Append fld2
End With
Set rst = db.OpenRecordset("Unmatched", dbOpenTable)
Do Until rst.EOF
hertz = rst![Accounting Document Item] & Mid(rst![JE Line Description], 20, 2) & Round(Abs(rst![Transaction Amount]), 0)
rst.Edit
rst!ID = Replace(hertz, " ", "")
rst!BatchCalc = Mid(rst![JE Line Description], 8, 8)
rst.Update
rst.MoveNext
Loop
rst.Close
Application.RefreshDatabaseWindow
Set fld1 = Nothing
Set fld2 = Nothing
Set tdf = Nothing
Set db = Nothing
End Sub
```
|
If you want to stick with the DAO object model to create your field and index it, look at the *Index.CreateField Method* in Access' help system.
But I think it's easier to do both by executing an `ALTER TABLE` statement ...
```
CurrentDb.Execute "ALTER TABLE Unmatched ADD COLUMN ID TEXT(255) UNIQUE;"
```
|
To add an index to a DAO tabledef object, use the createindex method.
Here's the basic documentation;
<https://msdn.microsoft.com/EN-US/library/office/ff196791.aspx>
|
How to INDEX a new DAO CreateField on the Fly - MS Access
|
[
"",
"sql",
"ms-access",
"vba",
"dao",
"unique-index",
""
] |
I have a column named "Date of Payment" (an nvarchar(50)) and I am trying to select all the rows in which the date is in the year 2015.
```
Select ID,[Date of Payment]
From mytable
WHERE ('Date of Payment' BETWEEN '2015/01/1' AND '2015/12/31')
```
When I run the select statement above, it doesn't return any rows. The date of payment samples could be 1/1/2015 or 1/1/15, etc. Is there something wrong with my syntax? TIA
|
You can't use between because your date field is not type DATETIME.
```
Select ID,[Date of Payment]
From mytable
WHERE [Date of Payment] like '%2015%'
```
|
There is an error in the WHERE clause.
If I understood correctly, you use MS SQL, and in this dialect field names are enclosed in square brackets, not in single quotation marks.
```
Select [ID],[Date of Payment]
From [mytable]
WHERE ([Date of Payment] BETWEEN '2015/01/01' AND '2015/12/31' )
```
|
SQL select between 2 dates not working
|
[
"",
"sql",
""
] |
I have a table with strings:
```
Orig_Value
-----------
Abc_abc_1.2.3
PQRST.abc_1
XY.143_z
```
I want to split this using `_` and `.` with the desired output:
```
Original_Value Col1 Col2 Col3 Col4 Col5
------------- ---- ---- ---- ---- ----
Abc_abc_1.2.3 Abc abc 1 2 3
PQRST.abc_1 PQRST abc 1
XY.143_z XY 143 z
```
A query for MS SQL, MySQL, or PostgreSQL will do.
I tried using the substring function as in the query below, but it's not applicable to all the data I have:

|
Here is one way to do it in `SQL Server`
```
;WITH cte
AS (SELECT Replace(col, '_', '.') + '.' AS col
FROM (VALUES ('Abc_abc_1.2.3'),
('PQRST.abc_1'),
('XY.143_z')) tc (col))
SELECT original_col = col,
column_1=COALESCE(LEFT(col, Charindex('.', col) - 1), ''),
column_2=COALESCE(Substring(col, P1.POS + 1, P2.POS - P1.POS - 1), ''),
column_3=COALESCE(Substring(col, P2.POS + 1, P3.POS - P2.POS - 1), ''),
       column_4=COALESCE(Substring(col, P3.POS + 1, P4.POS - P3.POS - 1), ''),
       column_5=COALESCE(Substring(col, P4.POS + 1, P5.POS - P4.POS - 1), '')
FROM cte
CROSS APPLY (VALUES (CASE
WHEN Charindex('.', col) >= 1 THEN Charindex('.', col)
END)) AS P1(POS)
CROSS APPLY (VALUES (CASE
WHEN Charindex('.', col, P1.POS + 1) >= 1 THEN Charindex('.', col, P1.POS + 1)
END)) AS P2(POS)
CROSS APPLY (VALUES (CASE
WHEN Charindex('.', col, P2.POS + 1) >= 1 THEN Charindex('.', col, P2.POS + 1)
END )) AS P3(POS)
CROSS APPLY (VALUES (CASE
WHEN Charindex('.', col, P3.POS + 1) >= 1 THEN Charindex('.', col, P3.POS + 1)
END)) AS P4(POS)
CROSS APPLY (VALUES (CASE
WHEN Charindex('.', col, P4.POS + 1) >= 1 THEN Charindex('.', col, P4.POS + 1)
END)) AS P5(POS)
```
**Result:**
```
original_col    column_1  column_2  column_3  column_4  column_5
--------------  --------  --------  --------  --------  --------
Abc.abc.1.2.3.  Abc       abc       1         2         3
PQRST.abc.1.    PQRST     abc       1
XY.143.z.       XY        143       z
```
|
**`PostgreSQL`**
Following is an example,
```
CREATE TABLE foo (org_val TEXT);
INSERT INTO foo
VALUES ('Abc_abc_1.2.3')
,('PQRST.abc_1')
,('XY.143_z');
```
using [`regexp_split_to_array()`](http://www.postgresql.org/docs/9.5/static/functions-string.html)
```
SELECT org_val
,a [1] col1
,a [2] col2
,a [3] col3
,a [4] col4
,a [5] col5
FROM (
SELECT org_val
,regexp_split_to_array(replace(org_val, '.', '_'),'_') AS a
FROM foo
) t
```
result:
```
org_val col1 col2 col3 col4 col5
------------- ----- ---- ---- ---- ----
Abc_abc_1.2.3 Abc abc 1 2 3
PQRST.abc_1 PQRST abc 1 NULL NULL
XY.143_z XY 143 z NULL NULL
```
---
[***SQLFIDDLE-DEMO***](http://sqlfiddle.com/#!15/a67cd/1)
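The same split can be prototyped in plain Python with `re.split`, padding short rows to five columns as the SQL does:

```python
import re

vals = ["Abc_abc_1.2.3", "PQRST.abc_1", "XY.143_z"]
rows = []
for v in vals:
    parts = re.split(r"[._]", v)            # split on both '_' and '.'
    rows.append([v] + parts + [None] * (5 - len(parts)))
```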
|
How to split value using underscore (_) and period (.)?
|
[
"",
"mysql",
"sql",
"postgresql",
""
] |
I know this question has been asked, but I have a slightly different flavour of it. I have a use case where the only thing I have control over is the WHERE clause of the query, and I have 2 tables.
Using simple example:
Table1 contains 1 column named "FULLNAME" with hundreds of values
Table2 contains 1 column named "PATTERN" with some matching text
So, what I need to do is select all values from Table1 which match the values in Table2.
Here's a simple example:
Table1 (FULLNAME)
ANTARCTICA
ANGOLA
AUSTRALIA
AFRICA
INDIA
INDONESIA
Table2 (PATTERN)
AN
Effectively what I need is the entries in Table1 which contain the values from Table2 (result would be ANTARCTICA, ANGOLA, INDIA, INDONESIA)
In other words, what I need is something like:
> Select \* from Table1 where FULLNAME IN LIKE (Select '%' || Pattern || '%' from
> Table2)
The tricky thing here is I only have control over the where clause, I can't control the Select clause at all or add joins since I'm using a product which only allows control over the where clause. I can't use stored procedures either.
Is this possible?
I'm using Oracle as the backend DB
Thanks
|
One possible approach is to use `EXISTS` in combination with `LIKE` in the subquery:
```
select * from table1 t1
where exists (select null
from table2 t2
where t1.fullname like '%' || t2.pattern || '%');
```
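To see the predicate in action without an Oracle instance, here is a small sketch using SQLite through Python's `sqlite3` module; the concatenation operator `||` and the `EXISTS`/`LIKE` combination behave the same way. Note that the question's expected output implies a second pattern (`IN`), so I include it here as an assumption:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE table1 (fullname TEXT);
    INSERT INTO table1 VALUES ('ANTARCTICA'),('ANGOLA'),('AUSTRALIA'),
                              ('AFRICA'),('INDIA'),('INDONESIA');
    CREATE TABLE table2 (pattern TEXT);
    INSERT INTO table2 VALUES ('AN'),('IN');
""")

# The whole EXISTS + LIKE predicate lives in the WHERE clause
matches = [row[0] for row in conn.execute("""
    SELECT fullname FROM table1 t1
    WHERE EXISTS (SELECT NULL FROM table2 t2
                  WHERE t1.fullname LIKE '%' || t2.pattern || '%')
""")]
print(sorted(matches))
```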
|
If the patterns are always two characters and only have to match the start of the full name, like the examples you showed, you could do:
```
Select * from Table1 where substr(FULLNAME, 1, 2) IN (Select Pattern from Table2)
```
This prevents any index on Table1 from being used, though, and your real case may need to be more flexible...
Or probably even less efficiently, similar to TomH's approach, but with the join *inside* a subquery:
```
Select * from Table1 where FULLNAME IN (
Select t1.FULLNAME from Table1 t1
Join Table2 t2 on t1.FULLNAME like '%'||t2.Pattern||'%')
```
|
Combine LIKE and IN using only WHERE clause
|
[
"",
"sql",
"oracle",
"where-in",
""
] |
I want to drop my tables in my database.
But, when I use, for example,
`DROP TABLE if exists users;`
I receive this message:
> cannot drop table users because other objects depend on it
I found the solution is to drop all tables. But, anyway, how to solve this problem without total data removal?
|
Use the `cascade` option:
```
DROP TABLE if exists users cascade;
```
This will drop any foreign key that is referencing the `users` table, as well as any view using it.
It will not drop other *tables* (or delete rows from them).
|
If it was really necessary to drop that specific table, with or without recreating it, then first find the object(s) that depend on it.
```
CREATE OR REPLACE VIEW admin.v_view_dependency AS
SELECT DISTINCT srcobj.oid AS src_oid
, srcnsp.nspname AS src_schemaname
, srcobj.relname AS src_objectname
, tgtobj.oid AS dependent_viewoid
, tgtnsp.nspname AS dependant_schemaname
, tgtobj.relname AS dependant_objectname
FROM pg_class srcobj
JOIN pg_depend srcdep ON srcobj.oid = srcdep.refobjid
JOIN pg_depend tgtdep ON srcdep.objid = tgtdep.objid
JOIN pg_class tgtobj ON tgtdep.refobjid = tgtobj.oid AND srcobj.oid <> tgtobj.oid
LEFT JOIN pg_namespace srcnsp ON srcobj.relnamespace = srcnsp.oid
LEFT JOIN pg_namespace tgtnsp ON tgtobj.relnamespace = tgtnsp.oid
WHERE tgtdep.deptype = 'i'::"char" AND tgtobj.relkind = 'v'::"char";
```
Then,
```
select * from admin.v_view_dependency where src_objectname like '%the_table_name_it_complaint_about%' limit 99;
```
The result set will show you the dependant object in the field "dependant\_objectname".
|
Cannot drop table users because other objects depend on it
|
[
"",
"sql",
"database",
"postgresql",
"sql-drop",
"drop-table",
""
] |
I've two tables: **users** and **friendship**. For example:
**users** table:
```
id | user | email | website
1 | Igor | i@o.com | stack.com
2 | Lucas | a@i.com | overflow.com
3 | John | j@w.com | www.com
```
**friendship** table (when `status = 2` means that they are friends):
```
id | friend1 | friend2 | status
1 | Igor | John | 2
2 | Lucas | Igor | 2
3 | John | Lucas | 2
```
And I do the following to select the friends (of which `$user` is the current user logged in):
```
SELECT friend1, friend2
FROM friendship
WHERE (friend1 = '$user' OR friend2 = '$user') AND status = 2
```
But I also want to select the user data in the **users** table, therefore, I would do:
```
SELECT friend1, friend2
FROM friendship
LEFT JOIN users
ON user = **friend**
WHERE (friend1 = '$user' OR friend2 = '$user') AND status = 2
```
In the ON clause above, I left asterisks around the word **friend** so you can see where my doubt is. I want to select only the data in the **users** table other than that of the currently logged-in user (`$user`). I know I should use CASE to verify whether the value is equal to the logged-in user, but I don't know how to do this in this case.
|
This is easily done with a simple `JOIN` that just includes your logic:
```
SELECT
U.id,
U.user,
U.email,
U.website
FROM
Friendship F
INNER JOIN Users U ON
(U.user = F.friend1 AND F.friend1 <> '$user') OR
(U.user = F.friend2 AND F.friend2 <> '$user')
    WHERE
        (F.friend1 = '$user' OR F.friend2 = '$user') AND F.status = 2
```
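A runnable sketch of this join logic, using SQLite via Python's `sqlite3` and the sample data from the question (the `status = 2` filter from the question is included, and a parameter placeholder stands in for `$user`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE users (id INT, user TEXT, email TEXT, website TEXT);
    INSERT INTO users VALUES (1,'Igor','i@o.com','stack.com'),
                             (2,'Lucas','a@i.com','overflow.com'),
                             (3,'John','j@w.com','www.com');
    CREATE TABLE friendship (id INT, friend1 TEXT, friend2 TEXT, status INT);
    INSERT INTO friendship VALUES (1,'Igor','John',2),
                                  (2,'Lucas','Igor',2),
                                  (3,'John','Lucas',2);
""")

logged_in = 'Igor'   # plays the role of $user
friends = [row[0] for row in conn.execute("""
    SELECT U.user
    FROM friendship F
    INNER JOIN users U ON
        (U.user = F.friend1 AND F.friend1 <> ?) OR
        (U.user = F.friend2 AND F.friend2 <> ?)
    WHERE (F.friend1 = ? OR F.friend2 = ?) AND F.status = 2
""", (logged_in,) * 4)]
print(sorted(friends))
```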
|
I think you can use:
```
select * from users where
 user in (
  select friend1 from friendship where friend2='$user' and status=2
  union all
  select friend2 from friendship where friend1='$user' and status=2
 )
```
I added a left join example:
```
select u.* from (
  select friend1 f from friendship where friend2='$user' and status=2
  union all
  select friend2 f from friendship where friend1='$user' and status=2) a
 left join users u on a.f=u.user
```
|
LEFT JOIN with relative value from main query
|
[
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
I have 3 possible delay categories for each gate. I am trying to find the Tot\_Delay\_Min at certain gates for each category, therefore I need to find a way to combine these delay categories into one column and the total minutes for that category into another. DLY1MIN, DLY2MIN, and DLY3MIN correspond with DLY1CAT, DLY2CAT, and DLY3CAT.
This is a sample of what I have for one gate:
```
Gate DLY1CAT DLY1MIN DLY2CAT DLY2MIN DLY3CAT DLY3MIN
==== ======= ======= ======= ======= ======= =======
SDF Weather 50 Fuel 3
SDF Late Jet 10 Weather 9
SDF Deice 9
SDF Late Jet 7 Fuel 11
SDF Computer 20 Weather 13
SDF Load Issue 8 Deice 5 Fuel 3
SDF Damage 20 Deice 7
```
This is a sample of what I would Like:
```
Gate DLYCAT DLYMIN
==== ======= =======
SDF Weather 72
SDF Deice 21
SDF Computer 20
SDF Damage 20
SDF Fuel 17
SDF Late Jet 17
SDF Load Issue 8
```
Any tips on how to go about this?
|
This is untested, and I'm sure there is a better way to do this, but this should work:
```
SELECT Gate, Category, SUM("Delay Minutes") AS "Delay Minutes"
FROM
(
SELECT Gate AS Gate,
Dly1Cat AS Category,
DLY1Min AS "Delay Minutes"
FROM YourTable
UNION ALL
SELECT Gate AS Gate,
Dly2Cat AS Category,
DLY2MIN AS "Delay Minutes"
FROM YourTable
UNION ALL
SELECT Gate AS Gate,
Dly3Cat AS Category,
DLY3MIN AS "Delay Minutes"
FROM YourTable
)
WHERE Category IS NOT NULL
GROUP BY Gate, Category;
```
What you're doing here is making three separate queries for each of the categories - and then combining them via UNION ALL. Then, you select from the combined result set and group by Gate and the newly created "Category" column.
EDIT:
Removed the groupings in the three inner queries since they are unneccessary.
SECOND EDIT:
Changed brackets to ticks since it's Oracle - thanks, a\_horse\_with\_no\_name
THIRD EDIT:
Only included quotes where needed. Added semicolon to the end. Removed table alias for Oracle compatibility.
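Here is the same unpivot-then-aggregate pattern run end to end against the sample data, using SQLite via Python's `sqlite3` (table and column names are lowercased for convenience):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE delays (gate TEXT, dly1cat TEXT, dly1min INT,
                         dly2cat TEXT, dly2min INT,
                         dly3cat TEXT, dly3min INT);
    INSERT INTO delays VALUES
      ('SDF','Weather',50,'Fuel',3,NULL,NULL),
      ('SDF','Late Jet',10,'Weather',9,NULL,NULL),
      ('SDF','Deice',9,NULL,NULL,NULL,NULL),
      ('SDF','Late Jet',7,'Fuel',11,NULL,NULL),
      ('SDF','Computer',20,'Weather',13,NULL,NULL),
      ('SDF','Load Issue',8,'Deice',5,'Fuel',3),
      ('SDF','Damage',20,'Deice',7,NULL,NULL);
""")

# UNION ALL stacks the three category/minute pairs into one column each,
# then the outer query aggregates per gate and category
totals = dict(conn.execute("""
    SELECT category, SUM(minutes)
    FROM (SELECT gate, dly1cat AS category, dly1min AS minutes FROM delays
          UNION ALL
          SELECT gate, dly2cat, dly2min FROM delays
          UNION ALL
          SELECT gate, dly3cat, dly3min FROM delays)
    WHERE category IS NOT NULL
    GROUP BY gate, category
"""))
print(totals)
```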
|
Try this one and let me know if it works:
```
Select Gate, DLYCAT AS DLYCAT, SUM(DLYMIN) AS DLYMIN
FROM
(Select Gate, DLY1CAT AS DLYCAT, SUM(DLY1MIN) AS DLYMIN FROM [ExperimentalDatabase].[dbo].[Table_1] GROUP BY GATE, DLY1CAT
UNION ALL
Select Gate, DLY2CAT AS DLYCAT, SUM(DLY2MIN) AS DLYMIN FROM [ExperimentalDatabase].[dbo].[Table_1] GROUP BY GATE, DLY2CAT
UNION ALL
Select Gate, DLY3CAT AS DLYCAT, SUM(DLY3MIN) AS DLYMIN FROM [ExperimentalDatabase].[dbo].[Table_1] GROUP BY GATE, DLY3CAT) AS TEMP
WHERE DLYCAT IS NOT NULL AND DLYMIN IS NOT NULL
GROUP BY GATE, DLYCAT
```
|
How to combine similar columns in SQL?
|
[
"",
"sql",
"oracle",
""
] |
Suppose I have a table like the one below:
```
+----+-----------+
| ID | TIME |
+----+-----------+
| 1 | 12-MAR-15 |
| 2 | 23-APR-14 |
| 2 | 01-DEC-14 |
| 1 | 01-DEC-15 |
| 3 | 05-NOV-15 |
+----+-----------+
```
What I want to do is for each year ( the year is defined as DATE), list the ID that has the highest count in that year. So for example, ID 1 occurs the most in 2015, ID 2 occurs the most in 2014, etc.
What I have for a query is:
```
SELECT EXTRACT(year from time) "YEAR", COUNT(ID) "ID"
FROM table
GROUP BY EXTRACT(year from time)
ORDER BY COUNT(ID) DESC;
```
But this query just counts how many times a year occurs, how do I fix it to highest count of an ID in that year?
**Output:**
```
+------+----+
| YEAR | ID |
+------+----+
| 2015 | 2 |
| 2012 | 2 |
+------+----+
```
**Expected Output:**
```
+------+----+
| YEAR | ID |
+------+----+
| 2015 | 1 |
| 2014 | 2 |
+------+----+
```
|
Starting with your sample query, the first change is simply to group by the ID as well as by the year.
```
SELECT EXTRACT(year from time) "YEAR" , id, COUNT(*) "TOTAL"
FROM table
GROUP BY EXTRACT(year from time), id
ORDER BY EXTRACT(year from time) DESC, COUNT(*) DESC
```
With that, you could find the rows you want by visual inspection (the first row for each year is the ID with the most rows).
To have the query just return the rows with the highest totals, there are several different ways to do it. You need to consider what you want to do if there are ties - do you want to see all IDs tied for highest in a year, or just an arbitrary one?
Here is one approach - if there is a tie, this should return just the lowest of the tied IDs:
```
WITH groups AS (
SELECT EXTRACT(year from time) "YEAR" , id, COUNT(*) "TOTAL"
FROM table
GROUP BY EXTRACT(year from time), id
)
SELECT year, MIN(id) KEEP (DENSE_RANK FIRST ORDER BY total DESC)
FROM groups
GROUP BY year
ORDER BY year DESC
```
|
You need to count per id and then apply a RANK on that count:
```
SELECT *
FROM
(
SELECT EXTRACT(year from time) "YEAR" , ID, COUNT(*) AS cnt
        , RANK() OVER (PARTITION BY EXTRACT(year from time) ORDER BY COUNT(*) DESC) AS rnk
FROM table
GROUP BY EXTRACT(year from time), ID
) dt
WHERE rnk = 1
```
If this returns multiple rows with the same high count per year and you want just one of them arbitrarily, you can switch to a ROW\_NUMBER.
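The ranking approach is portable to any database with window functions; here is a runnable sketch using SQLite (3.25+) via Python's `sqlite3`, with the sample dates rewritten in ISO format so `strftime('%Y', ...)` can extract the year:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE t (id INT, time TEXT);
    INSERT INTO t VALUES (1,'2015-03-12'),(2,'2014-04-23'),
                         (2,'2014-12-01'),(1,'2015-12-01'),(3,'2015-11-05');
""")

# Count rows per (year, id), then keep the rank-1 id within each year
top_per_year = conn.execute("""
    WITH counts AS (
        SELECT strftime('%Y', time) AS year, id, COUNT(*) AS cnt
        FROM t
        GROUP BY year, id
    )
    SELECT year, id FROM (
        SELECT year, id,
               RANK() OVER (PARTITION BY year ORDER BY cnt DESC) AS rnk
        FROM counts
    )
    WHERE rnk = 1
    ORDER BY year
""").fetchall()
print(top_per_year)
```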
|
Querying for an ID that has the most number of reads
|
[
"",
"sql",
"oracle",
""
] |
I'm working in SQL Server 2008. I want to concatenate two strings, and the condition is:
```
@str1 = 'A1,B1,C1'
@str2 = 'A2,B2,C2'
```
I want the result as:
```
@result = 'A1,A2,B1,B2,C1,C2'
```
Please help...
|
The way you are storing your data is really bad practice. However here is a solution for training purposes:
```
DECLARE
@str1 varchar(30) = 'A1,B1,C1',
@str2 varchar(30) = 'A2,B2,C2',
@result varchar(60)
;WITH split as
(
SELECT t.c.value('.', 'VARCHAR(2000)') x
FROM (
SELECT x = CAST('<t>' +
REPLACE(@str1 + ',' + @str2, ',', '</t><t>') + '</t>' AS XML)
) a
CROSS APPLY x.nodes('/t') t(c)
)
SELECT
@result =
STUFF((
SELECT ',' + x
FROM split
ORDER BY x
for xml path(''), type
).value('.', 'varchar(max)'), 1, 1, '')
SELECT @result
```
Result:
```
A1,A2,B1,B2,C1,C2
```
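For comparison, the whole split-sort-concatenate round trip is a one-liner outside the database; a Python sketch:

```python
str1 = 'A1,B1,C1'
str2 = 'A2,B2,C2'

# Split both lists, merge, sort, and re-join with commas
result = ','.join(sorted((str1 + ',' + str2).split(',')))
print(result)  # A1,A2,B1,B2,C1,C2
```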
|
First create a split function to get your items separately:
```
CREATE FUNCTION [dbo].[Split]
(
@String NVARCHAR(4000),
@Delimiter NCHAR(1)
)
RETURNS TABLE
AS
RETURN
(
WITH Split(stpos,endpos)
AS(
SELECT 0 AS stpos, CHARINDEX(@Delimiter,@String) AS endpos
UNION ALL
SELECT endpos+1, CHARINDEX(@Delimiter,@String,endpos+1)
FROM Split
WHERE endpos > 0
)
SELECT 'Id' = ROW_NUMBER() OVER (ORDER BY (SELECT 1)),
'Data' = SUBSTRING(@String,stpos,COALESCE(NULLIF(endpos,0),LEN(@String)+1)-stpos)
FROM Split
)
GO
```
Now you can split, order your result, and then concatenate the values:
```
DECLARE @DelimitedString NVARCHAR(128), @result NVARCHAR(256)
SET @DelimitedString = @str1 + ',' + @str2
SELECT @result = COALESCE(@result + ',', '') + Data
FROM dbo.Split(@DelimitedString, ',')
ORDER BY Data
```
|
SQL concatenate strings
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to find maximum of a price column in SQL but the prices have a $ sign in front of them, so when I do Max(column\_name) it does not show me the right answer. How do I get rid of the $ sign? Thanks.
|
Assuming you are using SQL Server and your data is in a varchar or nvarchar column:
```
DECLARE @Data varchar(max) = '$565'
SELECT MAX(CAST(REPLACE(@data,'$','') AS INT))
```
result:
[](https://i.stack.imgur.com/eLCDI.png)
So what you can do is:
```
SELECT MAX(CAST(REPLACE(column_name,'$','') AS INT)) FROM your_table
```
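A quick way to convince yourself why the cast matters: on text values, `MAX` compares strings, so `'$89'` would beat `'$565'`. A runnable sketch using SQLite via Python's `sqlite3` (the sample prices are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE prices (price TEXT)")
conn.executemany("INSERT INTO prices VALUES (?)",
                 [('$565',), ('$89',), ('$1200',)])

# Strip the '$' prefix, cast to integer, then take the maximum
max_price = conn.execute(
    "SELECT MAX(CAST(REPLACE(price, '$', '') AS INT)) FROM prices"
).fetchone()[0]
print(max_price)  # 1200
```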
|
Assuming that the amounts are all formatted the same (that is, with or without cents), then you can do:
```
select price
from t
order by len(price) desc, price desc
fetch first 1 row only;
```
Note that some databases spell `len()` as `length()`. And some spell `fetch first 1 row only` as `limit` -- or perhaps use another method such as `top` or `rownum`.
|
Removing prefix in numbers in SQL
|
[
"",
"sql",
""
] |
Is there a way to extract, from a date in the format `YYYY-MM-DD`, how many days there are in that month?
example:
for `2016-02-05` it will give `29` (Feb 2016 has 29 days)
for `2016-03-12` it will give `31`
for `2015-02-05` it will give `28` (Feb 2015 had 28 days)
I'm using PostgreSQL
**EDIT:**
[LAST\_DAY function in postgres](https://stackoverflow.com/questions/14229038/last-day-function-in-postgres) is not what i'm looking for. it returns DATE while I expect an Integer
|
One way to achieve this would be to subtract the beginning of the following month from the beginning of the current month:
```
db=> SELECT DATE_TRUNC('MONTH', '2016-02-05'::DATE + INTERVAL '1 MONTH') -
DATE_TRUNC('MONTH', '2016-02-05'::DATE);
?column?
----------
29 days
(1 row)
```
|
I just needed this today, and it seems I came up with pretty much the same as Mureinik, except that I needed it numeric (PostgreSQL can't convert from interval to number directly).
**previous month**:
```
select CAST(to_char(date_trunc('month', current_date) - (date_trunc('month', current_date) - interval '1 month'),'dd') as integer)
```
**current month**:
```
select CAST(to_char(date_trunc('month', current_date) + interval '1 month' - date_trunc('month', current_date), 'dd') as integer)
```
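For what it's worth, the same first-of-next-month-minus-one-day trick works in SQLite's date functions, which makes it easy to test; a Python sketch using the built-in `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(':memory:')

def days_in_month(d):
    # Last day of the month = first of next month minus one day
    row = conn.execute(
        "SELECT CAST(strftime('%d', date(?, 'start of month',"
        " '+1 month', '-1 day')) AS INT)", (d,)).fetchone()
    return row[0]

print(days_in_month('2016-02-05'))  # 29
```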
|
How to get number of days in month based on date?
|
[
"",
"sql",
"postgresql",
""
] |
I have a simple table `Customers` with `fields id, name, rating, seller_id`.
The result of query
```
select seller_id, MAX(rating)
from Customer
group by seller_id
```
is the customers grouped by `seller_id` with max values of the rating. Like this:
```
seller_id rating
2 17
3 20
4 17
```
How to modify this query to get customer's id additionally?
If I add `id` to the `SELECT` I get an error saying that I should add `id` to the grouping. But I only want to get the values shown above, with the id specified. Does anyone know how to do this? Thanks.
|
Try:
```
select c.id, c.seller_id, c.rating
from Customer c
join (
select seller_id, MAX(rating) rating
from Customer
group by seller_id
) x
on x.seller_id = c.seller_id
and x.rating = c.rating
```
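A runnable sketch of this join-against-the-aggregate pattern, using SQLite via Python's `sqlite3` (the customer ids and names are invented sample data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Customer (id INT, name TEXT, rating INT, seller_id INT);
    INSERT INTO Customer VALUES
      (10,'Ann',17,2),(11,'Bob',12,2),
      (12,'Cid',20,3),(13,'Dan',17,4),(14,'Eve',9,4);
""")

# Join back to the per-seller MAX(rating) to recover the full row
rows = conn.execute("""
    SELECT c.id, c.seller_id, c.rating
    FROM Customer c
    JOIN (SELECT seller_id, MAX(rating) AS rating
          FROM Customer GROUP BY seller_id) x
      ON x.seller_id = c.seller_id AND x.rating = c.rating
    ORDER BY c.seller_id
""").fetchall()
print(rows)
```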
|
**A dataset is divided into groups using the GROUP BY clause. The grouping attribute is a common key shared by members of each group. The grouping attribute is usually a single column, but may be multiple columns or an expression that is not based on group functions. Note that only grouping attributes and group functions are permitted in the SELECT clause when using GROUP BY.**
Refer to this:
[Select EMP with max SAL from each DEPT](https://stackoverflow.com/questions/16428473/select-emp-with-max-sal-from-each-dept)
|
Query to get results with the grouping
|
[
"",
"mysql",
"sql",
""
] |
I need some help to fetch records having an alternating set of entries associated with a unique value (ex: user\_id).
[](https://i.stack.imgur.com/bEruA.png)
> I want output to be only (1111,2222,3333)
**Here is the scenario:**
user\_id 1111 attended .net course from 2005-01-01 to 2006-12-31
he later attended java from 2007-01-01 to 2009-12-31
he later came back to .net
so i want to retrieve these kind of user\_id's
user\_id 4444 should not be in the output, because there are no alternating courses.
> UPDATE: 4444 started his Java course from 2007 to 2009 he again
> attended Java from 2010 - 2012 Later he attended .net but never came
> back to Java so he must be excluded from output
*If **Group by** is used, it will consider records irrespective of alternate course name.
We can create a procedure to accomplish this by looping and comparing the alternate course names, but I want to know if a single query can do this?*
|
You can use two `INNER JOIN` operations:
```
SELECT DISTINCT user_id
FROM mytable AS t1
INNER JOIN mytable AS t2
ON t1.user_id = t2.user_id AND t1.id < t2.id AND t1.course_name <> t2.course_name
INNER JOIN mytable AS t3
ON t2.user_id = t3.user_id AND t2.id < t3.id AND t1.course_name = t3.course_name
```
I assume that `id` is an auto-increment field that reflects the order the rows have been inserted in the DB. Otherwise, you should use a date field in its place.
|
Same as Giorgos Betsos' answer, only with select distinct to prevent duplicates.
```
SELECT DISTINCT user_id
FROM mytable AS t1
INNER JOIN mytable AS t2
ON t1.user_id = t2.user_id AND t1.Start_Date < t2.Start_Date AND
t1.course_name <> t2.course_name
INNER JOIN mytable AS t3
ON t2.user_id = t3.user_id AND t2.Start_Date < t3.Start_Date AND
t1.course_name = t3.course_name
```
EDIT: Using Start\_Date since the answer has been updated and IDs are not necessarily sequential.
|
How to fetch records that have an alternate entry
|
[
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"group-by",
""
] |
I sometimes want to test certain BQ functions and sql statements in the BQ console **without** creating a test table in my dataset. For example, I can test regexp\_match in the console with this:
```
Select Regexp_extract(StringToParse,r'\b(à)\b') as Extract,
Regexp_match(StringToParse,r'\b(à)\b') as match,
FROM
(SELECT 'Voilà la séance qui est à Paris.' as StringToParse)
```
I would like to do the same using complete tables, given perhaps as a json string.
For example, if I have a test table with two records:
```
[
{"rowNumber":1,
"index": [1,2,3]
},
{"rowNumber":2,
"index": [2,7,8,15]
}
]
```
Can I give that table to BQ sql for testing? Something like:
```
Select max(index) as max from parse('long json string')....
```
I realize that there is no schema given, so an on-the-fly table is probably not possible.
The schema would be the following (well, with 'string' where I have a 'record' for an array of integers, probably -- that's the sort of thing I want to test):
```
[
{
"name":"rowNumber",
"type":"integer"
},
{
"name": "index"
"type": "record" (oops, can't put an array of integers here...)
},
]
```
|
Note: I am answering/focusing on the question - `I sometimes want to test certain BQ functions and sql statements in the BQ console without creating a test table in my dataset`
I see a few cases (there might be more, but at least the three below can make a good start for you).
> Case #1 - Super Simple - no record type fields involved
Example:
```
SELECT a, b, c, d
FROM
    (SELECT 1 AS a, 'x' AS b, 'Voilà la séance qui est à Paris.' AS c, '[{"rowNumber":1,"index": [1,2,3]},{"rowNumber":2,"index": [2,7,8,15]}]' AS d),
    (SELECT 2 AS a, 'y' AS b, 'That session is in Paris.' AS c, '[{"rowNumber":3,"index": [4,5]},{"rowNumber":4,"index": [20, 23, 39]}]' AS d),
    (SELECT 3 AS a, 'z' AS b, 'Эта сессия в Париже.' AS c, '[{"rowNumber":5,"index": [6,7,8,9]},{"rowNumber":6,"index": [15, 45]}]' AS d)
```
So, now you can use this "virtual" table to experiment with your code like below:
```
SELECT
a, b,
  REGEXP_EXTRACT(c, r'(à)') AS extract,
  REGEXP_MATCH(c, r'(à)') AS match,
JSON_EXTRACT(d, '$[1].index[0]') AS index
FROM (
SELECT a, b, c, d
FROM
    (SELECT 1 AS a, 'x' AS b, 'Voilà la séance qui est à Paris.' AS c, '[{"rowNumber":1,"index": [1,2,3]},{"rowNumber":2,"index": [2,7,8,15]}]' AS d),
    (SELECT 2 AS a, 'y' AS b, 'That session is in Paris.' AS c, '[{"rowNumber":3,"index": [4,5]},{"rowNumber":4,"index": [20, 23, 39]}]' AS d),
    (SELECT 3 AS a, 'z' AS b, 'Эта сессия в Париже.' AS c, '[{"rowNumber":5,"index": [6,7,8,9]},{"rowNumber":6,"index": [15, 45]}]' AS d)
)
```
> Case #2 - Simple with Record
If you have a record with just one nested field, the query below builds it:
```
SELECT rowNumber, NEST(index) AS index
FROM
(SELECT 1 AS rowNumber, 1 AS index),
(SELECT 1 AS rowNumber, 2 AS index),
(SELECT 1 AS rowNumber, 3 AS index),
(SELECT 2 AS rowNumber, 2 AS index),
(SELECT 2 AS rowNumber, 3 AS index),
(SELECT 2 AS rowNumber, 8 AS index),
(SELECT 2 AS rowNumber, 15 AS index)
GROUP BY rowNumber
```
You can use this as a replacement in your experiments with a "simple" record field.
Btw, to confirm to yourself that this is actually two rows and not 7, run the query below:
```
SELECT COUNT(1) AS rows FROM (
SELECT rowNumber, NEST(index) AS index
FROM
(SELECT 1 AS rowNumber, 1 AS index),
(SELECT 1 AS rowNumber, 2 AS index),
(SELECT 1 AS rowNumber, 3 AS index),
(SELECT 2 AS rowNumber, 2 AS index),
(SELECT 2 AS rowNumber, 3 AS index),
(SELECT 2 AS rowNumber, 8 AS index),
(SELECT 2 AS rowNumber, 15 AS index)
GROUP BY rowNumber
)
```
> Case #3 - Schema with a record of arbitrary complexity, like the example in your question
If you want an arbitrary schema to experiment with, you should first experiment a little with how to create such schemas within GBQ using a JS UDF.
Check out below examples
[Create a table with Record type column](https://stackoverflow.com/questions/35040039/create-a-table-with-record-type-column/35043714#35043714)
[create a table with a column type RECORD](https://stackoverflow.com/questions/34996384/create-a-table-with-a-column-type-record/35000299#35000299)
After you master it, you can mimic a table of any complexity within GBQ and use it as a sub-select (instead of a real table) for experimenting with GBQ functionality.
|
Based on your example data, the output schema you want is:
```
[
{
"name":"rowNumber",
"type":"integer"
},
{
"name": "index",
"type": "integer",
"mode": "repeated"
  }
]
```
Here's something that will work for your example, finding the `MAX` of each index. The "SELECT NULL" in the innermost `SELECT` is unfortunate, but BigQuery complains about the use of `SPLIT` without a `FROM` clause.
```
SELECT rowNumber, MAX(index) AS max_index FROM
(SELECT 1 AS rowNumber, INTEGER(SPLIT('1,2,3')) AS index FROM (SELECT NULL)),
(SELECT 2 AS rowNumber, INTEGER(SPLIT('2,7,8,15')) AS index FROM (SELECT NULL))
GROUP BY rowNumber
```
If you're looking for a way to do this generally for JSON, you might investigate using the [JSON functions in the query reference.](https://cloud.google.com/bigquery/query-reference#jsonfunctions)
I wasn't able to get your exact example working with those functions, but depending on your JSONPath-fu / JSON structure you might be able to get something working. This, for example, grabs the values in the first row. Note however that output is stringified, so you get the string "[1,2,3]", but you could probably parse that into the right format with some string functions and `SPLIT`.
```
SELECT
JSON_EXTRACT(input, '$[0].rowNumber') as rowNumber,
JSON_EXTRACT(input, '$[0].index') as index
FROM
(SELECT '[
{"rowNumber":1,
"index": [1,2,3]
},
{"rowNumber":2,
"index": [2,7,8,15]
}
]' as input);
```
|
Can an entire table (as a string) be included in a sql statement for bigquery?
|
[
"",
"sql",
"google-bigquery",
""
] |
I have this two tables
```
TABLE_A TABLE_B
id_user|name id_user | points | date
------------- ------------------------
1 | A 1 | 10 | yesterday
2 | B 2 | 10 | today
3 | C 2 | 20 | today
3 | 15 | today
```
Well, I need to show how many points EVERY user will earn "today". I need to show it even if he doesn't earn any points, something like:
```
VIEW_POINTS
name | sum(points)_today
------------------------
1 | 0
2 | 30
3 | 15
```
I tried different statements, but I can't show user 1 when he doesn't earn any points; it just shows the other users that have some points.
|
You can do it using a `LEFT JOIN`:
```
SELECT t1.id_user, t1.name, COALESCE(SUM(t2.points), 0) AS points
FROM TABLE_A AS t1
LEFT JOIN TABLE_B AS t2 ON t1.id_user = t2.id_user AND t2.date = 'today'
GROUP BY t1.id_user, t1.name
```
The crucial part of the `LEFT JOIN` is the fact that its `ON` clause also contains the `t2.date = 'today'` predicate: this way *all* `TABLE_A` rows are selected, even the ones that have no points for today.
[**Demo here**](http://sqlfiddle.com/#!9/597fc/2)
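The difference between filtering in the `ON` clause and filtering in the `WHERE` clause is easy to verify; here is a runnable sketch using SQLite via Python's `sqlite3` (the date column is renamed `day` here and stores plain strings, matching the question's 'today'/'yesterday' values):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE table_a (id_user INT, name TEXT);
    INSERT INTO table_a VALUES (1,'A'),(2,'B'),(3,'C');
    CREATE TABLE table_b (id_user INT, points INT, day TEXT);
    INSERT INTO table_b VALUES (1,10,'yesterday'),
                               (2,10,'today'),(2,20,'today'),(3,15,'today');
""")

# The day filter sits in the ON clause, so unmatched users survive with 0
rows = conn.execute("""
    SELECT a.id_user, COALESCE(SUM(b.points), 0)
    FROM table_a a
    LEFT JOIN table_b b
      ON a.id_user = b.id_user AND b.day = 'today'
    GROUP BY a.id_user
    ORDER BY a.id_user
""").fetchall()
print(rows)
```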
|
You need to use left join:
```
SELECT a.id_user AS name, SUM(COALESCE(b.points, 0)) AS `sum(points)_today`
FROM TABLE_A a LEFT JOIN TABLE_B b
ON a.id_user = b.id_user AND b.date = 'today'
GROUP BY a.id_user
```
|
MySQL statment, how to show when a column has not results
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have two tables in my database.
I want to compare each row of `email` in `table-1` with each row of `email2` in `table-2`; if their content matches, then I want to store the `mac` of `table-1` in the `Mac` column of the `Result` table.
This is table-1
```
Email Mac
value1 21321
value2 45666
```
This is table-2
```
Name email2
name1 xyyxas
name2 xxxxxx
```
This is Result Table
```
ID Mac
1 21321
2 45666
```
I am completely out of ideas here. I tried join queries, but they don't seem to fit my case.
|
You can use an INSERT ... SELECT with an inner join:
```
insert into result_table (mac)
select table1.mac from table1
inner join table2 on table1.email = table2.email2
```
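A runnable sketch of the whole flow, using SQLite via Python's `sqlite3`; the emails are invented so that they actually match across the two tables, and `id` is generated by an `INTEGER PRIMARY KEY`:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE table1 (email TEXT, mac TEXT);
    INSERT INTO table1 VALUES ('value1','21321'),('value2','45666');
    CREATE TABLE table2 (name TEXT, email2 TEXT);
    INSERT INTO table2 VALUES ('name1','value1'),('name2','value2');
    CREATE TABLE result (id INTEGER PRIMARY KEY, mac TEXT);
""")

# Insert the macs of every matching email pair into the result table
conn.execute("""
    INSERT INTO result (mac)
    SELECT table1.mac FROM table1
    INNER JOIN table2 ON table1.email = table2.email2
""")
rows = conn.execute("SELECT id, mac FROM result ORDER BY id").fetchall()
print(rows)
```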
|
This is simple. First you join your tables using the common column, in your case the `email` column. Once they are joined, you collect the required data, in your case the `mac` column of table 1. Once this is done, you `insert` it into your table 3. This is my approach:
```
INSERT INTO tbl3(mac)
SELECT t1.mac
FROM tbl1 AS t1
JOIN tbl2 AS t2
ON t1.email = t2.email2
```
|
Compare two tables and store result in third table
|
[
"",
"mysql",
"sql",
""
] |
I have 3 queries, but I want to combine them into one (with three columns).
[these are my results](https://i.stack.imgur.com/yt4lM.jpg)
this is my code:
```
Select family = count(s.destiny) from rental
inner join cars as s on s.number_car = ID_car
where s.destiny in (select destiny from cars where destiny like 'kids')
Select lux = count(s.destiny) from rental
inner join cars as s on s.number_car = ID_car
where s.destiny in (select destiny from cars where destiny like 'luxury')
Select sports = count(s.destiny) from rental
inner join cars as s on s.number_car = ID_car
where s.destiny in (select destiny from cars where destiny like 'sport car')
```
Can you help me combine it into one 'select'?
|
Conditional aggregation seems like the right approach:
```
Select sum(case when destiny = 'kids' then 1 else 0 end) as family,
       sum(case when destiny = 'luxury' then 1 else 0 end) as luxury,
       sum(case when destiny = 'sport car' then 1 else 0 end) as sports
from rental r inner join
cars s
on s.number_car = ID_car ;
```
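A runnable sketch of conditional aggregation with invented sample data, using SQLite via Python's `sqlite3` (note the column is `destiny`, per the question's queries):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE cars (number_car INT, destiny TEXT);
    INSERT INTO cars VALUES (1,'kids'),(2,'luxury'),(3,'sport car');
    CREATE TABLE rental (id_car INT);
    INSERT INTO rental VALUES (1),(1),(2),(3),(3),(3);
""")

# One pass over the join; each SUM(CASE ...) counts one category
row = conn.execute("""
    SELECT SUM(CASE WHEN s.destiny = 'kids' THEN 1 ELSE 0 END) AS family,
           SUM(CASE WHEN s.destiny = 'luxury' THEN 1 ELSE 0 END) AS lux,
           SUM(CASE WHEN s.destiny = 'sport car' THEN 1 ELSE 0 END) AS sports
    FROM rental r
    JOIN cars s ON s.number_car = r.id_car
""").fetchone()
print(row)
```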
|
The simple answer is:
```
SELECT
(
    Select count(s.destiny) from rental
inner join cars as s on s.number_car = ID_car
where s.destiny in (select destiny from cars where destiny like 'kids')
) as kids,
(
    Select count(s.destiny) from rental
inner join cars as s on s.number_car = ID_car
where s.destiny in (select destiny from cars where destiny like 'luxury')
) as luxury,
(
    Select count(s.destiny) from rental
inner join cars as s on s.number_car = ID_car
where s.destiny in (select destiny from cars where destiny like 'sport car')
) as sportsCar
```
That being said, I strongly recommend considering removing the sub-queries here.
Something like this:
```
SELECT destiny, COUNT(1)
FROM cars
GROUP BY destiny
```
|
combining columns with different queries sql
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
The following code returns a list of duplicate rows of a specific column. In this case I am trying to find duplicates of university name. And my problem is how could I delete the duplicates and leave just one copy for each different university\_name?
```
Select * from `university` where `university_name` in ( select `university_name` from `university` group by `university_name` having count(*) > 1 )
```
This is the result:
[](https://i.stack.imgur.com/Gw2bL.png)
Can you please explain? I am very new to SQL! Thank you in advance!
|
1) If you want to keep the row with the lowest id value:
```
DELETE a
FROM university a, university b
WHERE a.id > b.id
AND b.university_name=a.university_name
```
2) If you want to keep the row with the highest id value:
```
DELETE a
FROM university a, university b
WHERE a.id < b.id
AND b.university_name=a.university_name
```
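The same keep-the-lowest-id idea can be expressed with a plain subquery, which also works in databases (like SQLite) that don't support multi-table `DELETE`; a runnable sketch via Python's `sqlite3` with invented names:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE university (id INTEGER PRIMARY KEY, university_name TEXT);
    INSERT INTO university (university_name) VALUES
      ('Oxford'),('Oxford'),('Harvard'),('Oxford'),('Harvard');
""")

# Keep the lowest id per name; delete every other copy
conn.execute("""
    DELETE FROM university
    WHERE id NOT IN (SELECT MIN(id) FROM university
                     GROUP BY university_name)
""")
survivors = conn.execute(
    "SELECT id, university_name FROM university ORDER BY id").fetchall()
print(survivors)
```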
|
Try this:
```
DELETE u
FROM university u
LEFT JOIN
(
SELECT MIN(ID) ID, university_name
FROM university
GROUP BY university_name
) v ON u.ID = v.ID AND
u.university_name = v.university_name
WHERE v.ID IS NULL
```
|
SQL How to delete duplicate rows of a specific column and keep one
|
[
"",
"mysql",
"sql",
"phpmyadmin",
"duplicates",
""
] |
I have a table with a date field of lastDelivery, and I want to retrieve all records that are 6 months or more older than today's date, how do I do this?
|
Try this:
```
SELECT * FROM Table
WHERE lastdelivery <= dateadd(month, -6, getdate())
```
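The same cutoff comparison can be tried in SQLite, where `date('now', '-6 months')` plays the role of `dateadd(month, -6, getdate())`; a runnable sketch via Python's `sqlite3` (sample dates invented, stored as ISO strings so string comparison orders them correctly):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE deliveries (lastdelivery TEXT)")
conn.executemany("INSERT INTO deliveries VALUES (?)",
                 [('2000-01-15',), (date.today().isoformat(),)])

# Keep only rows at least 6 months older than the current date
old = conn.execute("""
    SELECT lastdelivery FROM deliveries
    WHERE lastdelivery <= date('now', '-6 months')
""").fetchall()
print(old)
```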
|
Use [`DATEADD`](https://msdn.microsoft.com/en-us/library/ms186819.aspx)
**Query**
```
select * from your_table_name
where lastDelivery <= dateadd(month, -6, getdate());
```
|
How do I get all records where date is 6 months greater than today's date, using Microsoft SQL Server 2008?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
[](https://i.stack.imgur.com/uEEOf.jpg)
I have a poll system. I use this query to get the count of every answer to each question. How can I get the percentage of every answered option, like this:
```
question 1 ---> option1=20.13 % ---> option2=79.87 %
question 2 ---> option3=100 %
question 3 ---> option4=30 % ---> option5=70 %
....
```
I tried this query, but it is not what I want:
```
[getPollResult]
@poll_form_id int
AS
BEGIN
SELECT a.question_id,a.title,COUNT(*) vote
FROM tbl_poll_option a
JOIN tbl_poll_answer b ON a.Id=b.option_id
JOIN tbl_poll_question c ON a.question_id=c.Id
WHERE poll_form_id=@poll_form_id
GROUP BY a.title,a.question_id
END
```
|
I agree with the advice of doing calculations (like percentages) on the front end.
Having said that, this might be helpful for you.
Design limitations: this is suitable for relatively small data sets, and I find the "with()" syntax easier to work with than making temp tables and hoping I remember to clean them up (for more on the "with...as..." syntax, [see here](https://msdn.microsoft.com/en-us/library/ms175972.aspx)).
The following sql (untested) attempts to generate two temporary result sets.
```
counts_by_option - # of votes for each question+option pair
counts_by_question - total # of votes for each question
```
Let's ease into it:
## counts\_by\_option
This is more or less what you started with (just a little more actually, I think we'll likely want to know both question & option later, so we grab both of them now):
```
with counts_by_option( question_id, question_title, option_id, option_title, vote_cnt )
as (
SELECT opt.question_id
, quest.title as question_title
, opt.id as option_id
, opt.title as option_title
, COUNT(*) vote_cnt
FROM tbl_poll_option opt
JOIN tbl_poll_answer ans ON ans.option_id = opt.Id
JOIN tbl_poll_question quest ON quest.Id = opt.question_id
WHERE poll_form_id=@poll_form_id
GROUP BY opt.question_id, quest.title, opt.id, opt.title
)
select * from counts_by_option
order by question_id, option_id
```
If that works, let's extend the sql and add our next temporary result set...
## counts\_by\_question
Now we can use our option totals to generate the grand total of votes for each question; we'll need that in a bit to compute the actual percentage.
```
with counts_by_option( question_id, question_title, option_id, option_title, vote_cnt )
as (
SELECT opt.question_id
, quest.title as question_title
, opt.id as option_id
, opt.title as option_title
, COUNT(*) vote_cnt
FROM tbl_poll_option opt
JOIN tbl_poll_answer ans ON ans.option_id = opt.Id
JOIN tbl_poll_question quest ON quest.Id = opt.question_id
WHERE poll_form_id=@poll_form_id
GROUP BY opt.question_id, quest.title, opt.id, opt.title
)
-- select * from counts_by_option order by question_id, option_id
-- (I like to comment out select but leave in place for future testing...)
, counts_by_question( question_id, question_total_votes )
as (
select question_id, sum(vote_cnt) as question_total_votes
from counts_by_option group by question_id
)
select * from counts_by_question order by question_id
```
If that worked we are (finally) in a position to answer the original question about percentages:
## percentages example
```
with counts_by_option( question_id, question_title, option_id, option_title, vote_cnt )
as (
SELECT opt.question_id
, quest.title as question_title
, opt.id as option_id
, opt.title as option_title
, COUNT(*) vote_cnt
FROM tbl_poll_option opt
JOIN tbl_poll_answer ans ON ans.option_id = opt.Id
JOIN tbl_poll_question quest ON quest.Id = opt.question_id
WHERE poll_form_id=@poll_form_id
GROUP BY opt.question_id, quest.title, opt.id, opt.title
)
-- select * from counts_by_option order by question_id, option_id
-- (I like to comment out select but leave in place for future testing...)
, counts_by_question( question_id, question_total_votes )
as (
select question_id, sum(vote_cnt) as question_total_votes
from counts_by_option group by question_id
)
-- select * from counts_by_question order by question_id
select question_id
, question_title
, option_id
, option_title
, vote_cnt
, (100.0 * vote_cnt)
/ (select b.question_total_votes
from counts_by_question b
where b.question_id = a.question_id
) as option_percentage
from counts_by_option a
order by question_id, vote_cnt desc
```
The result set is driven by **counts\_by\_option a**.
The percentages expression asks **counts\_by\_question b** for help.
Let's focus on the expression for **option\_percentage**:
```
(100.0 * vote_cnt)
/ (select b.question_total_votes
from counts_by_question b
where b.question_id = a.question_id
)
as option_percentage
```
This is kind of complex (which is why I prefer to do things like aggregations in the front end) but not too bad...
We start by multiplying **vote\_count** by **100.0**.
Then we use the current OPTION to drive a sub-query against **counts\_by\_question** to find our current question's total number of votes.
Note the qualifiers **a** and **b**, which focus our subquery against **counts\_by\_question** on only **a**'s current question using this where clause: **where b.question\_id = a.question\_id**. This is important, since a subquery in that position can only return a single value; otherwise it errors out at execution time.
This would blow up if any questions could have zero total votes (e.g. division by zero error).
edit: The way **counts\_by\_option** is constructed, only questions with answers (table **tbl\_poll\_answer**) would be used, so all of the values in **counts\_by\_question** will be non-zero.
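To see the shape of this CTE chain in action, here is a minimal, self-contained sketch using Python's built-in sqlite3. The table and data are invented stand-ins for the poll schema; SQLite's `WITH` syntax matches the one used above:

```python
import sqlite3

# Invented stand-in schema for the poll tables above; SQLite's WITH syntax
# matches the one used in the answer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE votes (question_id INTEGER, option_id INTEGER);
INSERT INTO votes VALUES (1, 1), (1, 2), (1, 2), (1, 2), (1, 2);  -- q1: 1 vs 4
INSERT INTO votes VALUES (2, 3), (2, 3), (2, 3);                  -- q2: 3 votes
""")
rows = conn.execute("""
WITH counts_by_option(question_id, option_id, vote_cnt) AS (
    SELECT question_id, option_id, COUNT(*) FROM votes
    GROUP BY question_id, option_id
),
counts_by_question(question_id, question_total_votes) AS (
    SELECT question_id, SUM(vote_cnt) FROM counts_by_option GROUP BY question_id
)
SELECT a.question_id, a.option_id,
       (100.0 * a.vote_cnt) / b.question_total_votes AS option_percentage
FROM counts_by_option a
JOIN counts_by_question b ON b.question_id = a.question_id
ORDER BY a.question_id, a.option_id
""").fetchall()
print(rows)  # [(1, 1, 20.0), (1, 2, 80.0), (2, 3, 100.0)]
```

The `100.0` multiplier forces a float division, exactly as in the T-SQL version above.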
|
The schema for the above case
```
CREATE TABLE #POLL_QUESTION (QUESTION_ID INT, QUESTION VARCHAR(50))
INSERT INTO #POLL_QUESTION
SELECT 1,'WHAT?'
UNION ALL
SELECT 2,'WHEN?'
UNION ALL
SELECT 3,'WHY?'
UNION ALL
SELECT 4,'WHERE?'
CREATE TABLE #POLL_OPTION(OPTION_ID INT , QUESTION_ID INT, OPTION_NME VARCHAR(50))
INSERT INTO #POLL_OPTION
SELECT 1,1,'A'
UNION ALL
SELECT 2,1,'B'
UNION ALL
SELECT 3,1,'C'
UNION ALL
SELECT 4,2,'A'
UNION ALL
SELECT 5,2,'B'
UNION ALL
SELECT 6,2,'C'
UNION ALL
SELECT 7,3,'A'
UNION ALL
SELECT 8,3,'B'
UNION ALL
SELECT 9,3,'C'
UNION ALL
SELECT 10,4,'A'
UNION ALL
SELECT 11,4,'B'
UNION ALL
SELECT 12,4,'C'
CREATE TABLE #POLL_ANSWER(ANSWER_ID INT, OPTION_ID INT)
INSERT INTO #POLL_ANSWER
SELECT 1,2
UNION ALL
SELECT 2,2
UNION ALL
SELECT 3,3
UNION ALL
SELECT 4,4
UNION ALL
SELECT 5,4
UNION ALL
SELECT 6,4
UNION ALL
SELECT 7,5
UNION ALL
SELECT 8,7
UNION ALL
SELECT 9,8
UNION ALL
SELECT 10,9
UNION ALL
SELECT 11,10
UNION ALL
SELECT 12,10
```
The statement for the data
```
SELECT Q.QUESTION_ID,Q.QUESTION,O.OPTION_NME,COUNT(O.OPTION_NME)COUNT_OPTION,LEFTQUERY.COUNT_QUESTION, (COUNT(O.OPTION_NME)*100)/LEFTQUERY.COUNT_QUESTION AS PER FROM #POLL_QUESTION Q
INNER JOIN #POLL_OPTION O ON Q.QUESTION_ID=O.QUESTION_ID
INNER JOIN #POLL_ANSWER A ON O.OPTION_ID= A.OPTION_ID
LEFT JOIN
(
SELECT Q.QUESTION_ID,Q.QUESTION, COUNt(O.OPTION_NME)COUNT_QUESTION FROM #POLL_QUESTION Q
INNER JOIN #POLL_OPTION O ON Q.QUESTION_ID=O.QUESTION_ID
INNER JOIN #POLL_ANSWER A ON O.OPTION_ID= A.OPTION_ID
GROUP BY Q.QUESTION_ID,Q.QUESTION
)AS LEFTQUERY ON Q.QUESTION_ID= LEFTQUERY.QUESTION_ID
GROUP BY Q.QUESTION_ID,Q.QUESTION,O.OPTION_NME,
LEFTQUERY.COUNT_QUESTION
```
|
How can I get percentage of each answer in sql?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have client numbers that are three characters, '001' to '999'. Sometimes there will be gaps that can be reused. I am trying to fill these gaps, so I am searching for a way to find the first available gap.
```
CREATE TABLE co
( co_clno varchar(3));
INSERT INTO co
VALUES
('001'),
('002'),
('003'),
('005'),
('006'),
('007'),
('008');
```
The available gap here is '004'
I have tried to first create a list of available numbers, with no success:
```
WITH numbers AS
(SELECT to_char(generate_series(1,9),'000') num)
SELECT num FROM numbers
WHERE num NOT IN(SELECT co_clno FROM co)
```
The final code should be something like:
```
WITH numbers AS
(SELECT to_char(generate_series(1,9),'000') num)
SELECT min(num) FROM numbers
WHERE num NOT IN(SELECT co_clno FROM co)
```
SQLFiddle: <http://sqlfiddle.com/#!15/1e48d/1>
Thanks in advance for any clue.
|
```
select substring(to_char(n,'000') from 2) as num
from generate_series(1,9) gs(n)
except
select co_clno
from co
order by 1
limit 1
```
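As a quick sanity check, here is the same `EXCEPT` idea run against SQLite via Python's sqlite3. SQLite has neither `generate_series` nor `to_char`, so a recursive CTE and `printf` stand in for them; the logic is otherwise the one shown above:

```python
import sqlite3

# SQLite lacks generate_series/to_char; a recursive CTE and printf stand in.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE co (co_clno TEXT);
INSERT INTO co VALUES ('001'),('002'),('003'),('005'),('006'),('007'),('008');
""")
gap = conn.execute("""
WITH RECURSIVE gs(n) AS (
    SELECT 1 UNION ALL SELECT n + 1 FROM gs WHERE n < 9
)
SELECT printf('%03d', n) FROM gs
EXCEPT
SELECT co_clno FROM co
ORDER BY 1
LIMIT 1
""").fetchone()[0]
print(gap)  # 004
```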
|
You can use `lead()` to find where the gap starts:
```
select n.*
from (select n.*, lead(co_clno) over (order by co_clno) as next_num
from co n
) n
where next_num is null or
n.co_clno::int <> (next_num::int) - 1
order by co_clno
limit 1;
```
You can get the next value with:
```
select to_char((n.co_clno::int) + 1, '000')
from (select n.*, lead(co_clno) over (order by co_clno) as next_num
from co n
) n
where next_num is null or
n.co_clno::int <> next_num::int
order by co_clno
limit 1;
```
The only problem with this is that it won't get the first value if missing. Hmmm . . .
```
select (case when first_num <> '001' then '001'
else min(to_char((n.co_clno::int) + 1, '000'))
end)
from (select n.*, lead(co_clno) over (order by co_clno) as next_num,
min(co_clno) over () as first_num
from co n
) n
where next_num is null or
n.co_clno::int <> (next_num::int) - 1
group by first_num;
```
|
next available value - postgresql
|
[
"",
"sql",
"postgresql",
""
] |
I have a table with multiple records for each account; each record has a unit# with a balance. I want to select an account if one of its units has a negative balance, and return all records for that account, including the record with the negative unit. For example: acct #1234 has two records (each having a different unit# and balance). One unit has a negative balance and one has a positive balance. I want to retrieve both unit records for any account where one unit is negative.
How would I accomplish this?
|
A simple `IN` will do it in standard SQL;
```
SELECT * FROM myTable
WHERE account_id IN (
SELECT account_id FROM myTable WHERE balance < 0
)
```
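A runnable sketch of this `IN` approach, using Python's sqlite3 with an invented two-account table:

```python
import sqlite3

# Invented two-account table: account 1234 has one negative unit, 9999 has none.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (account_id INTEGER, unit INTEGER, balance INTEGER);
INSERT INTO myTable VALUES (1234, 1, -50), (1234, 2, 100), (9999, 1, 25), (9999, 2, 75);
""")
rows = conn.execute("""
SELECT * FROM myTable
WHERE account_id IN (SELECT account_id FROM myTable WHERE balance < 0)
ORDER BY account_id, unit
""").fetchall()
print(rows)  # [(1234, 1, -50), (1234, 2, 100)] -- both rows of the flagged account
```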
|
Try this way:
```
Select *
from yourAccountTable a
where EXISTS (select 1
from yourAccountTable b
where a.accID = b.accId
and b.unitBalance < 0 )
```
What the `EXISTS` clause does is check, in a boolean way, whether a given row (correlated via `a.accID = b.accId`) satisfies some condition. It is essentially a join operation that returns rows based on conditions; in your case, whether the balance is negative.
|
SQL query that will retrieve multiple records when only one meets the criteria
|
[
"",
"sql",
""
] |
Let's say I've got data like below.
```
Col1 Col2
A E
C C
C
D
G C
```
and the output should be like below
```
Column
A
C
C
D
G
```
So I want to take only the column that has a value; if both columns have a value, take the value from column1 only.
|
```
select nvl(col1,col2) from table1
```
`NVL` checks whether `col1` is null. If it is not, it takes the `col1` value; if it is null, it takes the `col2` value instead. (`NVL` is Oracle-specific; `COALESCE` is the standard-SQL equivalent.)
|
Also you could use `CASE` for common approach.
```
SELECT CASE
WHEN col1 IS NULL THEN col2
ELSE col1
END
FROM table1;
```
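`NVL` is Oracle-only, but `COALESCE` and the `CASE` form above are portable. A quick check in SQLite via Python's sqlite3, using the question's data, shows the two produce identical results:

```python
import sqlite3

# The question's sample data; COALESCE and the CASE form give the same answer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (col1 TEXT, col2 TEXT);
INSERT INTO table1 VALUES ('A','E'), ('C','C'), (NULL,'C'), ('D',NULL), ('G','C');
""")
coalesced = [r[0] for r in conn.execute(
    "SELECT COALESCE(col1, col2) FROM table1 ORDER BY rowid")]
cased = [r[0] for r in conn.execute(
    "SELECT CASE WHEN col1 IS NULL THEN col2 ELSE col1 END FROM table1 ORDER BY rowid")]
print(coalesced)  # ['A', 'C', 'C', 'D', 'G']
assert cased == coalesced
```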
|
Select data from 2 Field and use only 1 values
|
[
"",
"sql",
"oracle",
""
] |
The current data in my table is:
> ```
> a b
> ---------
> -1 5
> -11 2
> -5 32
> ```
My requirement is to convert every value in column a into a negative value.
But how do I update the positive values to negative across the entire column?
|
Try this:
```
Update mytable set a = 0 - a where a > 0
```
|
`UPDATE mytable SET a = a * -1;`
This multiplies all values in 'a' by -1. Now, if the value is already negative, it will become positive. If you want to make sure they are *always* negative, do this:
`UPDATE mytable SET a = a * -1 WHERE a > 0;`
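A quick check of the guarded `UPDATE` in SQLite (via Python's sqlite3): only the positive value is flipped, while values that are already negative stay untouched:

```python
import sqlite3

# One positive and two already-negative values; only the positive is flipped.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (a INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?)", [(-1,), (5,), (-11,)])
conn.execute("UPDATE mytable SET a = a * -1 WHERE a > 0")
result = sorted(r[0] for r in conn.execute("SELECT a FROM mytable"))
print(result)  # [-11, -5, -1]
```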
|
Convert positive data into negetive data (entire column) in Postgres Server
|
[
"",
"sql",
"postgresql",
""
] |
I have a model that has two columns (`started_at` and `ended_at`). I want to add a custom validator that ensures that no other record exists with dates that overlap with the record I'm validating. So far I have:
```
# app/models/event.rb
class Event < ActiveRecord::Base
validates_with EventValidator
end
# app/validators/event_validator.rb
class EventValidator < ActiveModel::Validator
attr_reader :record
def validate(record)
@record = record
validate_dates
end
private
def validate_dates
started_at = record.started_at
ended_at = record.ended_at
arel_table = record.class.arel_table
# This is where I'm not quite sure what type of query I need to perform...
constraints = arel_table[:started_at].gteq(ended_at)
.and(arel_table[:ended_at].lteq(started_at))
if record.persisted?
constraints = constraints
.and(arel_table[:id].not_eq(record.id))
end
if record.class.where(constraints).exists?
record.error[:base] << "Overlaps with another event"
end
end
end
```
I don't know exactly what query I need to ensure that there is no overlap. Any help is greatly appreciated.
|
I don't use Arel but I think the query should be:
```
constraints = arel_table[:started_at].lteq(ended_at)
.and(arel_table[:ended_at].gteq(started_at))
```
Two periods overlap when
```
period1.start < period2.end
period1.end > period2.start
```
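The two-comparison rule generalizes beyond SQL. As a sketch, here it is as a plain (hypothetical) helper function with the edge cases spelled out:

```python
def overlaps(start1, end1, start2, end2):
    # Two periods overlap iff each one starts before the other ends.
    return start1 < end2 and end1 > start2

assert overlaps(1, 5, 4, 8)      # partial overlap
assert overlaps(1, 10, 3, 4)     # containment counts as overlap
assert not overlaps(1, 3, 3, 5)  # merely touching edges is not an overlap
```

Whether touching edges count as an overlap is exactly the strict-vs-non-strict inequality choice in the Arel condition above.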
|
Have a look at [Validates Overlap gem](https://github.com/robinbortlik/validates_overlap)
You can either use it instead your code or take [condition code](https://github.com/robinbortlik/validates_overlap/blob/master/lib/validates_overlap/overlap_validator.rb#L119) from it
```
# Return the condition string depend on exclude_edges option.
def condition_string(starts_at_attr, ends_at_attr)
except_option = Array(options[:exclude_edges]).map(&:to_s)
starts_at_sign = except_option.include?(starts_at_attr.to_s.split(".").last) ? "<" : "<="
ends_at_sign = except_option.include?(ends_at_attr.to_s.split(".").last) ? ">" : ">="
query = []
query << "(#{ends_at_attr} IS NULL OR #{ends_at_attr} #{ends_at_sign} :starts_at_value)"
query << "(#{starts_at_attr} IS NULL OR #{starts_at_attr} #{starts_at_sign} :ends_at_value)"
query.join(" AND ")
end
```
|
Check for Dates Overlap in Ruby on Rails
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
I am trying to create a simple PostgreSQL function where, using an `INT` parameter, I'd like to get an array back. The example below will not work, but it should give an idea of what I'm trying to get back from the function. Thanks.
```
CREATE OR REPLACE FUNCTION contact_countries_array(INT)
RETURNS ANYARRAY AS '
SELECT ARRAY[contacts_primarycountry, contacts_othercountry] FROM contacts WHERE contacts_id = $1'
LANGUAGE SQL;
```
The data type of contacts\_primarycountry and contacts\_othercountry is integer. contacts\_id is unique and integer.
|
Per the docs:
> It is permitted to have polymorphic arguments with a fixed return
> type, but the converse is not.
As such, I think your attempt to return `anyarray` won't work.
Your fields look like text, so I think if you altered it to something like this, it would work:
```
CREATE OR REPLACE FUNCTION contact_countries_array(INT)
RETURNS text[] AS $$
select array[contacts_primarycountry::text, contacts_othercountry::text]
FROM contacts WHERE contacts_id = $1
$$
LANGUAGE SQL;
```
This should compile, and it *might* work, but I'm honestly not sure:
```
CREATE OR REPLACE FUNCTION contact_countries_array(anyelement)
RETURNS anyarray AS $$
select array[contacts_primarycountry::text, contacts_othercountry::text]
FROM contacts WHERE contacts_id = $1
$$
LANGUAGE SQL;
```
I think the datatypes would have to match perfectly for this to work, unless you did casting.
|
**Declaring Array, Looping, Adding items to Array, Returning Array with a Postgres Function**
You can declare `INTEGER` array instead of `TEXT` and avoid casting `(counter::TEXT)` as well as return type `TEXT[]`. *(Added those for reference.)*
```
CREATE OR REPLACE FUNCTION "GetNumbers"(maxNo INTEGER) RETURNS TEXT[] AS $nums$
DECLARE
counter INTEGER := 0;
nums TEXT[] := ARRAY[]::TEXT[];
BEGIN
LOOP
EXIT WHEN counter = maxNo;
counter = counter + 1;
nums = array_append(nums, counter::TEXT);
END LOOP;
RETURN nums;
END ;
$nums$ LANGUAGE plpgsql;
SELECT "GetNumbers"(5); -- {1,2,3,4,5}
```
|
How to return array of values from PostgreSQL function by INT id
|
[
"",
"sql",
"arrays",
"database",
"postgresql",
"function",
""
] |
I got a SQL statement which uses some parameters to filter the output.
Looking like this:
```
ALTER PROCEDURE [dbo].[spGetPerson]
@Vorname varchar(50),
@Nachname varchar(50),
@FirmaID int,
@AbteilungID int,
@ArbeitsortID int,
@FunktionID int
AS
BEGIN
SET NOCOUNT ON;
-- Insert statements for procedure here
Select
p.Vorname, p.Nachname, ar.Arbeitsort, fi.Firma,
ab.Abteilung, fu.Funktion
From
tblPerson p
inner join
tblArbeitsort ar on ar.ArbeitsortID = p.Arbeitsort_fk
inner join
tblFirmaHatAbteilungUndPersonHatFunktion zt on zt.Person_fk = p.PersonID
inner join
tblFirma fi on fi.FirmaID = zt.Firma_fk
inner join
tblAbteilung ab on ab.AbteilungID = zt.Abteilung_fk
inner join
tblFunktion fu on fu.FunktionID = zt.Funktion_fk
Where
p.Vorname = @Vorname
AND p.Nachname = @Nachname
AND fi.FirmaID = @FirmaID
AND ab.AbteilungID = @AbteilungID
AND ar.ArbeitsortID = @ArbeitsortID
AND fu.FunktionID = @FunktionID
END
```
I'm joining some stuff together but that's not important.
As you can see, at the end I use every parameter in my `WHERE` clause. What I would like is this: if a parameter is missing, the related `WHERE` condition should fall away.
**As an example** if `@Nachname` is missing the statement should look like this:
```
Where
p.Vorname = @Vorname
AND fi.FirmaID = @FirmaID
AND ab.AbteilungID = @AbteilungID
AND ar.ArbeitsortID = @ArbeitsortID
AND fu.FunktionID = @FunktionID
```
How do I do this?
|
The typical way to do this is to use logic like this:
```
Where (p.Vorname = @Vorname or @Vorname is null) AND
(p.Nachname = @Nachname or @Nachname is null) AND
(fi.FirmaID = @FirmaID or @FirmaID is null) AND
(ab.AbteilungID = @AbteilungID or @AbteilungID is null) AND
(ar.ArbeitsortID = @ArbeitsortID or @ArbeitsortID is null) AND
(fu.FunktionID = @FunktionID or @FunktionID is null)
```
One note: the use of such complex `where` clauses can affect the optimization strategies for the query. If performance is a big issue (and you have indexes on the columns in the `where` clause used for the conditions), then you might want to use dynamic SQL instead. That is, you would construct the `where` clause based on the parameters that have values, rather than checking for `NULL` at run-time.
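Here is a compact sketch of the `(col = @p OR @p IS NULL)` pattern using Python's sqlite3, with positional `?` placeholders standing in for T-SQL's named parameters (table and data are invented):

```python
import sqlite3

# Invented table; ? placeholders stand in for T-SQL's named @parameters.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (vorname TEXT, nachname TEXT);
INSERT INTO person VALUES ('Max','Muster'), ('Max','Meier'), ('Eva','Muster');
""")

def get_person(vorname=None, nachname=None):
    # Each condition is disabled when its parameter is NULL (None).
    return conn.execute("""
        SELECT vorname, nachname FROM person
        WHERE (vorname = ? OR ? IS NULL)
          AND (nachname = ? OR ? IS NULL)
    """, (vorname, vorname, nachname, nachname)).fetchall()

print(get_person(vorname='Max'))                    # both 'Max' rows
print(get_person(vorname='Max', nachname='Meier'))  # only ('Max', 'Meier')
```

Passing no arguments disables every filter and returns the whole table.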
|
Another approach that has come to my rescue in these cases is below. How it works is simple: if the parameter is not null it is used; otherwise the column is matched with itself, which is always true for non-null values, so the engine effectively skips that condition just like a `WHERE 1=1` filter. One caveat: `col = COALESCE(@param, col)` excludes rows where the column itself is NULL, since `NULL = NULL` is not true. Hope this helps.
```
Where
p.Vorname = COALESCE(@Vorname, p.Vorname)
AND fi.FirmaID = COALESCE(@FirmaID ,fi.FirmaID )
AND ab.AbteilungID = COALESCE(@AbteilungID ,ab.AbteilungID )
AND ar.ArbeitsortID = COALESCE(@ArbeitsortID,ar.ArbeitsortID )
AND fu.FunktionID = COALESCE(@FunktionID,fu.FunktionID)
```
|
SQL Server stored procedure: disable "where" filter if related parameter is not set
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
If I have a simple User table in my database and a simple Item table with a User.id as a foreign key thus:
```
(id UNIQUEIDENTIFIER DEFAULT (NEWID()) NOT NULL,
name NVARCHAR (MAX) NULL,
email NVARCHAR (128) NULL,
authenticationId NVARCHAR (128) NULL,
createdAt DATETIME DEFAULT GETDATE() NOT NULL,
PRIMARY KEY (id))
CREATE TABLE Items
(id UNIQUEIDENTIFIER DEFAULT (NEWID()) NOT NULL,
userId UNIQUEIDENTIFIER NOT NULL,
name NVARCHAR (MAX) NULL,
description NVARCHAR (MAX) NULL,
isPublic BIT DEFAULT 0 NOT NULL,
createdAt DATETIME DEFAULT GETDATE() NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (userId) REFERENCES Users (id))
```
If a user is removed from the table I need all of the related items to be removed first to avoid breaking referential integrity constraints. This is easily done with `ON DELETE CASCADE`:
```
CREATE TABLE Items
(id UNIQUEIDENTIFIER DEFAULT (NEWID()) NOT NULL,
userId UNIQUEIDENTIFIER NOT NULL,
name NVARCHAR (MAX) NULL,
description NVARCHAR (MAX) NULL,
isPublic BIT DEFAULT 0 NOT NULL,
createdAt DATETIME DEFAULT GETDATE() NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (userId) REFERENCES Users (id) ON DELETE CASCADE)
```
But if I also have collections which reference users, and a table collecting items into collections I am in trouble, i.e. the following additional code does not work.
```
CREATE TABLE Collections
(id UNIQUEIDENTIFIER DEFAULT (NEWID()) NOT NULL,
userId UNIQUEIDENTIFIER NOT NULL,
name NVARCHAR (MAX) NULL,
description NVARCHAR (MAX) NULL,
isPublic BIT DEFAULT 0 NOT NULL,
layoutSettings NVARCHAR (MAX) NULL,
createdAt DATETIME DEFAULT GETDATE() NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (userId) REFERENCES Users (id) ON DELETE CASCADE)
CREATE TABLE CollectedItems
(itemId UNIQUEIDENTIFIER NOT NULL,
collectionId UNIQUEIDENTIFIER NOT NULL,
createdAt DATETIME DEFAULT GETDATE() NOT NULL,
PRIMARY KEY CLUSTERED (itemId, collectionId),
FOREIGN KEY (itemId) REFERENCES Items (id) ON DELETE CASCADE,
FOREIGN KEY (collectionId) REFERENCES Collections (id) ON DELETE CASCADE)
```
The error indicates that this "may cause cycles or multiple cascade paths". The way around this I see recommended is to
1. Redesign the tables, but I cannot see how; or, and often stated as ["a last resort"](https://stackoverflow.com/a/852047/575530)
2. Use triggers.
So I remove the `ON DELETE CASCADE` and [instead use triggers](https://stackoverflow.com/a/1655334/575530) ([documentation](https://msdn.microsoft.com/en-us/library/ms189799(v=sql.110).aspx)) like this:
```
CREATE TRIGGER DELETE_User
ON Users
INSTEAD OF DELETE
AS
BEGIN
SET NOCOUNT ON
DELETE FROM Items WHERE userId IN (SELECT id FROM DELETED)
DELETE FROM Collections WHERE userId IN (SELECT id FROM DELETED)
DELETE FROM Users WHERE id IN (SELECT id FROM DELETED)
END
CREATE TRIGGER DELETE_Item
ON Items
INSTEAD OF DELETE
AS
BEGIN
SET NOCOUNT ON
DELETE FROM CollectedItems WHERE itemId IN (SELECT id FROM DELETED)
DELETE FROM Items WHERE id IN (SELECT id FROM DELETED)
END
CREATE TRIGGER DELETE_Collection
ON Collections
INSTEAD OF DELETE
AS
BEGIN
SET NOCOUNT ON
DELETE FROM CollectedItems WHERE collectionId IN (SELECT id FROM DELETED)
DELETE FROM Collections WHERE id IN (SELECT id FROM DELETED)
END
```
However this fails, although subtly. I have a bunch of unit tests (written in xUnit). Individually the tests always pass. But run en masse some randomly fail with a SQL deadlock. In [another answer](https://stackoverflow.com/a/35224577/575530) I was pointed to the SQL Profiler which shows a deadlock between two delete calls.
What is the correct way to solve these diamond shaped delete cascades?
|
I prefer to **not** have automatic cascade operations, be it DELETE or UPDATE, just for the sake of peace of mind. Imagine you've configured your cascade deletes and then your program, due to some bug, tries to delete the wrong user even though the database has some data related to it. All related data in related tables will be gone without any warning.
Normally I make sure that all related data is deleted first using explicit separate procedures, one for each related table, and then I delete the row in the master table. The delete will succeed because there are no child rows in referenced tables.
For your example I'd have a dedicated stored procedure `DeleteUser` with one parameter `UserID`, which knows which tables are related to the user and in what order the details should be deleted. This procedure is tested and is the only way to remove a user. If the rest of the program mistakenly tried to delete a row directly from the `Users` table, the attempt would fail if there is data in the related tables. If the mistakenly deleted user didn't have any details, the attempt would go through, but at least you would not lose a lot of data.
For your schema the procedure may look like this:
```
CREATE PROCEDURE dbo.DeleteUser
@ParamUserID int
AS
BEGIN
SET NOCOUNT ON; SET XACT_ABORT ON;
BEGIN TRANSACTION;
BEGIN TRY
-- Delete from CollectedItems going through Items
DELETE FROM CollectedItems
WHERE CollectedItems.itemId IN
(
SELECT Items.id
FROM Items
WHERE Items.userId = @ParamUserID
);
-- Delete from CollectedItems going through Collections
DELETE FROM CollectedItems
WHERE CollectedItems.collectionId IN
(
SELECT Collections.id
FROM Collections
WHERE Collections.userId = @ParamUserID
);
-- Delete Items
DELETE FROM Items WHERE Items.userId = @ParamUserID;
-- Delete Collections
DELETE FROM Collections WHERE Collections.userId = @ParamUserID;
-- Finally delete the main user
DELETE FROM Users WHERE ID = @ParamUserID;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
...
-- process the error
END CATCH;
END
```
---
If you really want to set up cascade deletes, then I'd define **one** trigger, only for `Users` table. Again, there will be no foreign keys with a cascade delete, but the trigger on `Users` table would have the logic very similar to the procedure above.
|
Another thing to try is setting isolation level to SERIALIZABLE in your trigger when you delete a user/item/collection. Since you are possibly deleting many items/collections/collected items when deleting a user, having another transaction INSERT something during this run can cause problems. SERIALIZABLE solves that to some extent.
SQL-Server uses that isolation level on cascading deletes for exactly this reason:
<https://learn.microsoft.com/en-us/archive/blogs/conor_cunningham_msft/conor-vs-isolation-level-upgrade-on-updatedelete-cascading-ri>
|
Cascading diamond shaped deletes in SQL
|
[
"",
"sql",
"sql-server",
"dml",
""
] |
I've got the following tables in a database. We have Product, which holds multiple products per Series, and ProductVariation, which holds multiple variations per product.
We are wishing to perform a set of aggregate queries on the ProductVariation table for a set of products based on the ID of the series. For a SeriesID of 276, ProductID's 400-415 match the SeriesID. We then want to find minimum and maximum of various fields in the ProductVariation table that have ProductID's 400-415 assigned to them.
The T-SQL statement I've written is the following:-
```
SELECT(
SELECT MAX([X]) FROM [ProductVariation] AS B WHERE B.ProductID = A.ProductID
)
FROM [Product] AS A
WHERE SeriesID = 12 AND IsDeleted = 0 and IsEnabled = 1
```
but this returns 15 rows of minimum and maximum data. I was looking for the maximum of the maximums, but I don't know how to adapt the above statement to retrieve that. We will need to do at least 10 of these aggregates in the same query as well.
Can anyone suggest how to get the maximum of the maximum?
Cheers,
Mike.
|
Ok, so my issue was that the above were all right, but it turns out I was looking for group by SeriesID, to return MIN and MAX of set fields per SeriesID. Many thanks for all the SQL statements, you guys have helped clear up a problem I've had many times before! :)
In SQL:
@Oliver's post was what made it click as to what I needed. I then added a GROUP BY SeriesID for what I needed to do.
In LINQ:-
```
(from x in productBlocks
join y in products on x.ProductBlockID equals y.ProductBlockID
where x.SeriesID == id && x.IsEnabled && !x.IsDeleted
group y by x.SeriesID into g
select new SeriesCharacteristicsViewModel
{
MinWheelDiameter = g.Min(s => s.WheelDiameter),
MaxWheelDiameter = g.Max(s => s.WheelDiameter),
ShoreHardness = g.Select(s => s.ShoreHardness).FirstOrDefault(),
MinimumCarryingCapacityAt4kmh = g.Min(s => s.StaticCapacity),
MaximumCarryingCapacityAt4kmh = g.Max(s=>s.StaticCapacity),
MinimumRollingResistance = g.Min(s=>s.RollingResistance),
MaximumRollingResistance = g.Max(s=>s.RollingResistance),
MinimumTemperature = g.Min(s=>s.TempFrom),
MaximumTemperature = g.Max(s=>s.TempTo)
}
)
```
|
So essentially, you just want the one Maximum for the whole series? I think there are a bit too many MAX's around in the other solutions. Just use a join:
```
SELECT MAX(PV.X), MAX(PV.Y)
FROM [Product] AS P
JOIN [ProductVariation] AS PV ON P.ProductID = PV.ProductID
WHERE P.SeriesID = 12 AND P.IsDeleted = 0 and P.IsEnabled = 1
```
This way, you can also query multiple maximums at once.
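A toy version of this join-then-aggregate query in SQLite (via Python's sqlite3), with column `X` standing in for the real measurement fields, confirms it returns a single min/max pair for the whole series:

```python
import sqlite3

# Toy schema: series 12 owns products 400 and 401; X stands in for the real fields.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Product (ProductID INTEGER, SeriesID INTEGER);
CREATE TABLE ProductVariation (ProductID INTEGER, X INTEGER);
INSERT INTO Product VALUES (400, 12), (401, 12), (500, 99);
INSERT INTO ProductVariation VALUES (400, 10), (400, 30), (401, 25), (500, 99);
""")
row = conn.execute("""
SELECT MIN(PV.X), MAX(PV.X)
FROM Product P
JOIN ProductVariation PV ON P.ProductID = PV.ProductID
WHERE P.SeriesID = 12
""").fetchone()
print(row)  # (10, 30) -- a single row for the whole series, not one per product
```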
|
T-SQL Select MAX from subquery
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
"aggregate-functions",
""
] |
I'm having some issues where I can't seem to get the regex to match 3 or more a's or e's in the name.
Find all managers that manage employees with at least 3 letters 'a' or 'e' in their name (both uppercase
and lowercase). For instance, having 2 'a's and 1 'e' in the name will satisfy the selection criteria.
```
select manager_name
from manages
where regexp_like(employee_name, '[a,e]{3, }');
```
When I do this it shows a proper list of names containing an 'e' or 'a', but when I try to match 3 or more it returns an empty set. Sample data is provided below.
```
select manager_name
from manages
where regexp_like(employee_name, '[a,e]');
```
Sample Data
```
William Gates III
Lakshmi Mittal
Ingvar Kamprad
Lawrence Ellison
Mark Zuckerberg
Sheryl Sandberg
Liliane Bettencourt
Michael Dell
```
|
You're looking for this instead
```
(.*[ae]){3,}
```
The `.*` accepts different characters between the ones we want
So your query becomes:
```
select manager_name
from manages
where
regexp_like(employee_name, '(.*[ae]){3,}', 'i');
```
The `i` flag is for case-insensitive matching, so capital `A` and `E` are taken into account too. If omitted, a case-sensitive match is performed.
You can also simply use `{3}` instead of `{3,}`; it will produce the same results in this case.
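The same pattern can be exercised with Python's `re` module, where `re.IGNORECASE` plays the role of the `'i'` flag (names taken from the question's sample data):

```python
import re

# re.IGNORECASE plays the role of REGEXP_LIKE's 'i' flag.
pattern = re.compile(r'(.*[ae]){3,}', re.IGNORECASE)

names = ['William Gates III', 'Lakshmi Mittal', 'Mark Zuckerberg']
matches = [n for n in names if pattern.search(n)]
print(matches)  # ['William Gates III', 'Mark Zuckerberg'] -- 'Lakshmi Mittal' has only 2
```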
|
If you want at least 3 `a`'s or `e`'s anywhere in the name then:
```
select manager_name
from manages
where regexp_like(employee_name, '(.*?[ae]){3,}', 'i' );
```
If you want at least 3 consecutive `a`'s or `e`s then:
```
select manager_name
from manages
where regexp_like(employee_name, '.*[ae]{3,}', 'i' );
```
|
SQL using regular expression REGEXP_LIKE()
|
[
"",
"sql",
"oracle",
"regexp-like",
""
] |
I have a database containing two tables `Team` and `User`.
Every team can have one or two users in it.
I wish to select an output of the table `Team` such that information of the both the users from the table `User` are included.
It'll be easier to understand once I define the table structures.
Table `Team`:
```
+-----------+-------------------+-------------------+-----------------+
| team_name | user_one | user_two | team_note |
+-----------+-------------------+-------------------+-----------------+
| Team one | skuhsa@jdds.dfd | kgihse@kdhf.dfj | one to twenty |
| Team two | dsjgknsz@djfd.fkg | | three to thirty |
+-----------+-------------------+-------------------+-----------------+
```
Table `User`:
```
+-------------------+-----------+-----------------+
| email | user_name | user_note |
+-------------------+-----------+-----------------+
| skuhsa@jdds.dfd | skuhsaone | gimme money |
| kgihse@kdhf.dfj | kgihse | drop it |
| dsjgknsz@djfd.fkg | dsjgknsz | just eat it |
+-------------------+-----------+-----------------+
```
The output I'm looking for goes like this.
```
+-----------+--------------------+-----------+------------------+-----------+-----------------+
| team_name | user_one | user_name | user_two | user_name | team_note |
+-----------+--------------------+-----------+------------------+-----------+-----------------+
| Team one | skuhsa@jdds.dfd | skuhsaone | kgihse@kdhf.dfj | kgihse | one to twenty |
+-----------+--------------------+-----------+------------------+-----------+-----------------+
| Team two | dsjgknsz@djfd.fkg | dsjgknsz | | | three to thirty |
+-----------+--------------------+-----------+------------------+-----------+-----------------+
```
I have a good feeling that it can be done easily, but right now I'm trying all sorts of JOINs and stuff and ending up with duplicate results or rows.
If there is any PostgreSQL specific ways to do it, it'd just fine with me.
`SELECT * FROM Team LEFT JOIN User ON Team.user_one=User.email` works, but how do I select only the columns I want? I.e., how will it differentiate between the columns for the first and the second user?
|
A couple of `left join`s should do the trick:
```
SELECT team_name, user_one, u1.user_name, user_two, u2.user_name, team_note
FROM team t
LEFT JOIN user u1 ON t.user_one = u1.email
LEFT JOIN user u2 ON t.user_two = u2.email
```
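A runnable sketch of the double-alias join in SQLite via Python's sqlite3. The emails and names are invented, and `user` is quoted because it is a reserved word in PostgreSQL:

```python
import sqlite3

# Invented emails/names; "user" is quoted since it is reserved in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE team (team_name TEXT, user_one TEXT, user_two TEXT);
CREATE TABLE "user" (email TEXT, user_name TEXT);
INSERT INTO team VALUES ('Team one','a@x','b@x'), ('Team two','c@x',NULL);
INSERT INTO "user" VALUES ('a@x','alice'), ('b@x','bob'), ('c@x','carol');
""")
rows = conn.execute("""
SELECT t.team_name, u1.user_name, u2.user_name
FROM team t
LEFT JOIN "user" u1 ON t.user_one = u1.email
LEFT JOIN "user" u2 ON t.user_two = u2.email
ORDER BY t.team_name
""").fetchall()
print(rows)  # [('Team one', 'alice', 'bob'), ('Team two', 'carol', None)]
```

The `LEFT JOIN` keeps teams with only one member, filling the second user's columns with NULL.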
|
The answer is actually quite simple, as the comment says simply use a `LEFT join`, this could by saying `LEFT JOIN [TableName]`
```
SELECT * FROM [Team]
LEFT JOIN [User] ON [Team].user_one = [User].email
```
Also take a look at
<http://www.w3schools.com/sql/sql_join_left.asp>
Edit:
Your second option would be to do this:
```
SELECT * FROM [Team]
UNION
SELECT * FROM [User]
```
|
Using Join, select values from multiple SQL tables without a foreign key
|
[
"",
"sql",
"postgresql",
"select",
"join",
""
] |
I am using SQL Server 2012 on Windows7. I currently have 2 tables.
The first table:
```
DeviceID DeviceSwVersion DeviceIPAddr
1 802 172.26.20.1
115 800 172.26.18.1
1234 264 172.26.18.3
4717 264 172.26.19.2 // <- new
14157 264 172.26.19.1 // <- new
```
The second table:
```
DeviceIPAddr Status TimeStamp (default=getdate())
172.26.20.1 1 2016-02-09 10:25:01
172.26.18.1 1 2016-02-09 10:30:12
172.26.18.3 1 2016-02-09 10:33:08
```
What I need is a SQL query to insert into the 2nd table new rows corresponding to the new `DeviceIP`s that are now present in the first table. *Only the new* `DeviceIP`s that *are not already there* in the 2nd table.
So, finally the 2nd table should look like this:
```
DeviceIPAddr Status TimeStamp // default = getdate()
172.26.20.1 1 2016-02-09 10:25:01
172.26.18.1 1 2016-02-09 10:30:12
172.26.18.3 1 2016-02-09 10:33:08
172.26.19.2 0 2016-02-10 09:53:00
172.26.19.1 0 2016-02-10 09:53:01
```
Remark: `Status` column is 0 for new inserted rows and `TimeStamp` is the current date-time (default value filled automatically by getdate() function).
|
You could write an **AFTER INSERT** trigger, something like below. Note that in SQL Server the newly inserted rows are exposed through the `inserted` pseudo-table (there is no `NEW` as in MySQL), and a single statement can insert several rows, so use `INSERT ... SELECT` rather than `VALUES`:
```
CREATE TRIGGER trigger_insert_table_1
ON table_1
AFTER INSERT AS
BEGIN
    INSERT INTO table_2 (DeviceIPAddr, Status, TimeStamp)
    SELECT i.DeviceIPAddr, 0, GETDATE()
    FROM inserted i;
END;
```
|
`MERGE` is another way:
```
MERGE [the second table] t
USING [the first table] s
    ON t.DeviceIPAddr = s.DeviceIPAddr
WHEN NOT MATCHED THEN
    INSERT (DeviceIPAddr, Status, Timestamp)
    VALUES (s.DeviceIPAddr, 0, GETUTCDATE());
```
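SQLite has no `MERGE`, but the same "insert only the missing rows" effect can be sketched with `INSERT ... SELECT ... WHERE NOT EXISTS`, shown here via Python's sqlite3 with a reduced version of the question's tables:

```python
import sqlite3

# Reduced version of the question's tables: one known IP, one new IP.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE devices (DeviceIPAddr TEXT);
CREATE TABLE status  (DeviceIPAddr TEXT, Status INTEGER);
INSERT INTO devices VALUES ('172.26.20.1'), ('172.26.19.2');
INSERT INTO status  VALUES ('172.26.20.1', 1);
""")
conn.execute("""
INSERT INTO status (DeviceIPAddr, Status)
SELECT d.DeviceIPAddr, 0
FROM devices d
WHERE NOT EXISTS (SELECT 1 FROM status s WHERE s.DeviceIPAddr = d.DeviceIPAddr)
""")
rows = conn.execute("SELECT * FROM status ORDER BY DeviceIPAddr").fetchall()
print(rows)  # [('172.26.19.2', 0), ('172.26.20.1', 1)]
```

Only the previously unseen IP gets a new row with `Status = 0`; the existing row is left alone.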
|
How to insert into a table new records based on info from another table?
|
[
"",
"sql",
"sql-server",
"insert",
"record",
""
] |
I am reading an SQL query in Redshift and can't understand the last part:
```
...
LEFT JOIN (SELECT MIN(modified) AS first_modified FROM user) ue
ON 1=1
```
What does `ON 1=1` mean here?
|
It's simply doing a cross join: every row from the first table is paired with every row from the second, i.e. the Cartesian product of the two.
JOIN statements (LEFT, INNER, RIGHT, etc.) normally require an `ON ...` condition. Putting in `1=1` is like saying "`1=1` is always true, so don't eliminate anything".
|
The intention is an *unconditional* `LEFT JOIN`, which is ***different*** from a `CROSS JOIN` in that all rows from the left table expression are returned, even if there is no match in the right table expression - while a `CROSS JOIN` drops such rows from the result. [More on joins in the manual.](https://www.postgresql.org/docs/current/sql-select.html#SQL-FROM)
However:
**`1=1` is pointless** in Postgres and [all derivatives](https://wiki.postgresql.org/wiki/PostgreSQL_derived_databases) including Amazon Redshift. Just use `true`. This has probably been carried over from another RDBMS that does not support the [`boolean`](https://www.postgresql.org/docs/current/datatype-boolean.html) type properly.
```
... LEFT JOIN (SELECT ...) ue ON true
```
Then again, `LEFT JOIN` is pointless for this particular subquery with `SELECT MIN(modified) FROM user` on the right, because a `SELECT` with an aggregate function (`min()`) and no `GROUP BY` clause always returns exactly one row. This case (but not other cases where ***no row*** might be found) can be simplified to:
```
... CROSS JOIN (SELECT MIN(modified) AS first_modified FROM user) ue
```
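Here is a small sqlite3 sketch of the pattern (invented data; an `orders` table stands in for whatever the `...` in the question joins against). Because the aggregate subquery always yields exactly one row, the unconditional join simply appends `first_modified` to every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE "user" (id INT, modified TEXT);
    INSERT INTO "user" VALUES (1, '2020-01-05'), (2, '2020-03-01');
    CREATE TABLE orders (id INT);
    INSERT INTO orders VALUES (10), (11);
""")

# An aggregate SELECT with no GROUP BY returns exactly one row,
# so joining it "ON 1=1" just attaches that value everywhere.
rows = conn.execute("""
    SELECT o.id, ue.first_modified
    FROM orders o
    LEFT JOIN (SELECT MIN(modified) AS first_modified FROM "user") ue
      ON 1 = 1
    ORDER BY o.id
""").fetchall()
print(rows)
```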
|
JOIN (SELECT ... ) ue ON 1=1?
|
[
"",
"sql",
"postgresql",
"join",
"left-join",
"amazon-redshift",
""
] |
Assume a many-to-many relation between team and player. This is modelled by the following tables:
```
create table team
(
identifier integer primary key
);
create table player
(
identifier integer primary key
);
create table member
(
team_identifier integer,
player_identifier integer,
primary key(team_identifier, player_identifier),
foreign key(team_identifier) references team on update cascade on delete cascade,
foreign key(player_identifier) references player on update cascade on delete cascade
);
```
Assume the following data:
```
insert into team values(1);
insert into team values(2);
insert into player values(1);
insert into member values(1, 1);
insert into member values(2, 1);
```
Let's delete the teams:
```
delete from team where identifier = 1;
delete from team where identifier = 2;
```
Now we have a player without a team. Is there a way to automatically delete this player? That is, when the deletion of a team results in an orphan player, this player should be removed as well (but not the other way around).
|
An ideal solution to your problem is to create an AFTER DELETE OR UPDATE trigger on `member`:
```
CREATE FUNCTION delete_player_not_in_member() RETURNS trigger AS $delete_player_not_in_member$
BEGIN
    DELETE FROM player WHERE Identifier NOT IN (SELECT player_identifier FROM member);
RETURN OLD;
END;
$delete_player_not_in_member$ LANGUAGE plpgsql;
CREATE TRIGGER delete_player_not_in_member AFTER DELETE OR UPDATE ON member
FOR EACH ROW EXECUTE PROCEDURE delete_player_not_in_member();
```
|
Well, not necessarily. The player could still be considered a player who needs a team if you plan on adding him to a team at a later time, or if you want a list of available players.
However, you could try using
```
DELETE FROM member WHERE team_identifier = 1
DELETE FROM member WHERE team_identifier = 2
DELETE FROM player
WHERE Identifier NOT IN (SELECT player_identifier FROM member)
```
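The manual-sweep approach can be verified end to end with sqlite3, using the question's own schema and data (foreign keys omitted since the deletes are done explicitly here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE team (identifier INTEGER PRIMARY KEY);
    CREATE TABLE player (identifier INTEGER PRIMARY KEY);
    CREATE TABLE member (team_identifier INT, player_identifier INT);
    INSERT INTO team VALUES (1), (2);
    INSERT INTO player VALUES (1);
    INSERT INTO member VALUES (1, 1), (2, 1);
""")

# Simulate the cascading deletes from the question, then sweep orphans:
# any player no longer referenced by member is removed.
conn.executescript("""
    DELETE FROM member WHERE team_identifier IN (1, 2);
    DELETE FROM team   WHERE identifier      IN (1, 2);
    DELETE FROM player
    WHERE identifier NOT IN (SELECT player_identifier FROM member);
""")
remaining = conn.execute("SELECT count(*) FROM player").fetchone()[0]
print(remaining)
```

After both teams are gone, the orphaned player is deleted as well.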
|
Many-to-many relation - automatically delete orphans
|
[
"",
"sql",
"postgresql",
"many-to-many",
""
] |
I have the following table:
```
+----+--------+-----+
| id | fk_did | pos |
+----+--------+-----+
```
This table contains hundreds of rows, each of them referencing another table with `fk_did`. The value in `pos` is currently always zero which I want to change.
Basically, for each group of `fk_did`, the `pos`-column should start at zero and be ascending. It doesn't matter how the rows are ordered.
Example output (`select * from table order by fk_did, pos`) that I wanna get:
```
+----+--------+-----+
| id | fk_did | pos |
+----+--------+-----+
| xx | 0 | 0 |
| xx | 0 | 1 |
| xx | 0 | 2 |
| xx | 1 | 0 |
| xx | 1 | 1 |
| xx | 1 | 2 |
| xx | 4 | 0 |
| xx | 8 | 0 |
| xx | 8 | 1 |
| xx | 8 | 2 |
+----+--------+-----+
```
* There must be no two rows that have the same combination of `fk_did` and `pos`
* `pos` must be ascending for each `fk_did`
* If there is a row with `pos` > 0, there must also be a row with the same `fk_did` and a lower `pos`.
Can this be done with a single update query?
|
You can do this using a window function:
```
update the_table
set pos = t.rn - 1
from (
select id,
           row_number() over (partition by fk_did) as rn
from the_table
) t
where t.id = the_table.id;
```
The ordering of `pos` will be more or less random, as there is no `order by`, but you said that doesn't matter.
This assumes that `id` is unique, if not, you can use the internal column `ctid` instead.
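The per-group renumbering can be demonstrated with sqlite3. As a sketch, this computes the 0-based row numbers with a window function and writes them back from Python (`UPDATE ... FROM` support varies across SQLite versions, so the write-back is done with `executemany`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, fk_did INT, pos INT DEFAULT 0);
    INSERT INTO t (fk_did) VALUES (0), (0), (0), (1), (1), (4);
""")

# Compute 0-based row numbers per fk_did group with ROW_NUMBER(),
# then apply them row by row.
numbered = conn.execute("""
    SELECT id, row_number() OVER (PARTITION BY fk_did ORDER BY id) - 1
    FROM t
""").fetchall()
conn.executemany("UPDATE t SET pos = ? WHERE id = ?",
                 [(rn, rid) for rid, rn in numbered])

rows = conn.execute("SELECT fk_did, pos FROM t ORDER BY fk_did, pos").fetchall()
print(rows)
```

Each `fk_did` group now starts at 0 and ascends without gaps.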
|
If `id` is the PK of your table, then you can use the following query to update your table:
```
UPDATE mytable
SET pos = t.rn
FROM (
SELECT id, fk_did, pos,
ROW_NUMBER() OVER (PARTITION BY fk_did ORDER BY id) - 1 AS rn
FROM mytable) AS t
WHERE mytable.id = t.id
```
`ROW_NUMBER` window function, used with a `PARTITION BY` clause, generates sequence numbers starting from 1 for each `fk_did` slice.
[**Demo here**](http://sqlfiddle.com/#!15/99673/1)
|
SQL query to create ascending values within groups
|
[
"",
"sql",
"postgresql",
"sql-update",
""
] |
I have a `Shipments` table which basically contains shipment data with dates:
id is `integer`
dateshipped is `date`
```
id dateshipped
1 1-JAN-16
2 1-JAN-16
3 3-FEB-16
4 9-FEB-16
```
I want to write a query which count all shipments based on Months.
What I should get is:
```
Jan Feb March....
2 2 0
```
I know I can do it by writing a query for each column, getting only the relevant rows for that specific month and counting them, as follows:
```
Select (Select count(*)
from Shipments
Where EXTRACT(YEAR FROM dateshipped)::int=2016 and EXTRACT(MONTH FROM dateshipped)::int=1 )as JAN,
(Select count(*)
from Shipments
Where EXTRACT(YEAR FROM dateshipped)::int=2016 and EXTRACT(MONTH FROM dateshipped)::int=2 )as FEB
```
**This works, however it's a lot of repeated code...**
I am wondering if it is possible to do it with a single `FROM` statement, where each column gets its own relevant rows for the count.
Something like:
```
Select COL1,COL2,COL3...
from Shipments
Where EXTRACT(YEAR FROM dateshipped)::int=2016;
```
and have something like:
```
COL1 = count only JAN records
COL2 = count only FEB records
....
```
Maybe there is something with partitions on months, or some other solution?
|
You need a pivot query to accomplish this:
```
SELECT SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 1 THEN 1 ELSE 0 END) AS Jan,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 2 THEN 1 ELSE 0 END) AS Feb,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 3 THEN 1 ELSE 0 END) AS Mar,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 4 THEN 1 ELSE 0 END) AS Apr,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 5 THEN 1 ELSE 0 END) AS May,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 6 THEN 1 ELSE 0 END) AS Jun,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 7 THEN 1 ELSE 0 END) AS Jul,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 8 THEN 1 ELSE 0 END) AS Aug,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 9 THEN 1 ELSE 0 END) AS Sep,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 10 THEN 1 ELSE 0 END) AS Oct,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 11 THEN 1 ELSE 0 END) AS Nov,
SUM(CASE WHEN EXTRACT(MONTH FROM dateshipped)::int = 12 THEN 1 ELSE 0 END) AS Dec
FROM Shipments
WHERE EXTRACT(YEAR FROM dateshipped)::int=2016
```
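The conditional-aggregation pivot can be checked with sqlite3 (only the first three months shown; `strftime` replaces Postgres's `EXTRACT` here, and the dates use ISO format):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Shipments (id INT, dateshipped TEXT);
    INSERT INTO Shipments VALUES
        (1, '2016-01-01'), (2, '2016-01-01'),
        (3, '2016-02-03'), (4, '2016-02-09');
""")

# One pass over the table; each SUM(CASE ...) counts one month's rows.
jan, feb, mar = conn.execute("""
    SELECT SUM(CASE WHEN strftime('%m', dateshipped) = '01' THEN 1 ELSE 0 END),
           SUM(CASE WHEN strftime('%m', dateshipped) = '02' THEN 1 ELSE 0 END),
           SUM(CASE WHEN strftime('%m', dateshipped) = '03' THEN 1 ELSE 0 END)
    FROM Shipments
    WHERE strftime('%Y', dateshipped) = '2016'
""").fetchone()
print(jan, feb, mar)
```

This yields the `2 2 0` layout the question asks for, from a single `FROM`.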
|
Since PostgreSQL 9.4 you can use [FILTER](http://www.postgresql.org/docs/current/static/sql-expressions.html):
```
SELECT
count(*) AS total,
count(*) FILTER (WHERE Extract(MONTH FROM dateshipped)::int=1) AS JAN,
count(*) FILTER (WHERE Extract(MONTH FROM dateshipped)::int=2) AS FEB,
...
FROM Shipments
WHERE Extract(YEAR FROM dateshipped)::int=2016;
```
|
Is it possible to count by different conditions in one query?
|
[
"",
"sql",
"postgresql",
""
] |
I am trying to run this query in Oracle
```
SELECT A1.ID_USER,A1.CNAME,
(SELECT CNAME
FROM
(SELECT CNAME,COUNTER
FROM
(SELECT id_Category AS IDCATEGORT,COUNT(*) AS COUNTER
FROM gamescateg
WHERE id_game IN
(SELECT ID_GAME
FROM UserGames
WHERE id_user=UsersGamers.ID_USER
)
GROUP BY id_Category
)
JOIN Categories
ON id_Category =IDCATEGORT
ORDER BY COUNTER DESC
)
WHERE rownum = 1
) AS FAVORITECATEGORY
FROM UsersGamers A1
JOIN UsersGamers A2 ON A1.ID_USER= A2.ID_USER
```
but I get the error
```
ORA-00904: "A2"."ID_USER": invalid identifier
```
The subquery needs `A2.ID_USER`. Why does this happen, and how can I solve it?
**I SOLVED IT BY CREATING A FUNCTION:**
```
CREATE OR REPLACE FUNCTION FAVORITECATEGORYfunc( idnumber NUMBER )
RETURN VARCHAR2
IS
l_conc_names VARCHAR2(32676);
BEGIN
SELECT CNAME INTO l_conc_names
FROM
(SELECT CNAME,COUNTER
FROM
(SELECT id_Category AS IDCATEGORT,COUNT(*) AS COUNTER
FROM gamescateg
WHERE id_game IN
(SELECT ID_GAME
FROM UserGames
WHERE id_user=idnumber
)
GROUP BY id_Category
)
JOIN Categories
ON id_Category =IDCATEGORT
ORDER BY COUNTER DESC
)
WHERE rownum = 1;
RETURN l_conc_names;
END;
```
and the query was
```
select CNAME,FAVORITECATEGORYfunc(id_user) from usersgamers
```
thanks for help :)
|
I've expanded your query to make it more readable and commented the problem (invalid identifier). There's a reference to alias A2 in a place where it isn't in scope.
```
select A1.ID_USER, A1.CNAME,
(
SELECT CNAME FROM
(
SELECT CNAME, COUNTER
FROM
(
SELECT id_Category AS IDCATEGORT,
COUNT(*) AS COUNTER
FROM gamescateg
WHERE id_game IN
(
SELECT ID_GAME FROM UserGames
WHERE id_user=A2.ID_USER /* your problem is here */
)
GROUP BY id_Category
)
JOIN Categories
ON id_Category =IDCATEGORT
ORDER BY COUNTER DESC
)
WHERE rownum = 1
) AS FAVORITECATEGORY
FROM UsersGamers A1
JOIN UsersGamers A2
ON A1.ID_USER= A2.ID_USER
```
Have you considered rewriting this to avoid so many nested queries? Instead of wrapping them one inside the other, try structuring it around the favourite category using joins instead.
|
Essentially, a superficial fix will not repair this query, so please explain what you are trying to do. I suspect the query is much more complicated than it needs to be, and it is very hard to decipher.
Your problem is, superficially, that you are using the table name rather than the alias. So, one would think that writing the query like this would fix the problem:
```
SELECT A1.ID_USER, A1.CNAME,
(SELECT CNAME
FROM (SELECT CNAME, COUNTER
FROM (SELECT id_Category AS IDCATEGORT, COUNT(*) AS COUNTER
FROM gamescateg
WHERE id_game IN (SELECT ug.ID_GAME
FROM UserGames ug
WHERE ug.id_user = UsersGamers.ID_USER
)
GROUP BY id_Category
) cc JOIN
Categories c
ON c.id_Category = cc.IDCATEGORT
ORDER BY COUNTER DESC
)
WHERE rownum = 1
) AS FAVORITECATEGORY
FROM UsersGamers A1 JOIN
UsersGamers A2
ON A1.ID_USER = A2.ID_USER;
```
But, alas, it will not. Oracle limits the scoping depth for correlated subqueries. So, this doesn't fix the problem; you'll just get another error.
The most reasonable thing that such a query might be doing is getting the most common category for a given user. If so:
```
SELECT u.*
FROM (SELECT ugr.ID_USER, ugr.CNAME, c.cname AS category_name, COUNT(*) as cnt,
ROW_NUMBER() OVER (PARTITION BY ugr.ID_USER, ugr.CNAME
ORDER BY COUNT(*) DESC
) as seqnum
FROM UsersGamers ugr JOIN
UserGames ug
ON ugr.ID_USER = ug.ID_USER JOIN
Categories c
ON c.id_Category = ug.id_Category
                 GROUP BY ugr.ID_USER, ugr.CNAME, c.cname
) u
WHERE seqnum = 1;
```
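The greatest-n-per-group pattern in that last query can be demonstrated on a toy schema with sqlite3 (table and column names here are invented, not the question's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plays (user_id INT, category TEXT);
    INSERT INTO plays VALUES
        (1, 'RPG'), (1, 'RPG'), (1, 'FPS'),
        (2, 'Puzzle');
""")

# Greatest-n-per-group: rank each user's categories by play count,
# then keep only the top-ranked one per user.
rows = conn.execute("""
    SELECT user_id, category
    FROM (SELECT user_id, category,
                 ROW_NUMBER() OVER (PARTITION BY user_id
                                    ORDER BY COUNT(*) DESC) AS seqnum
          FROM plays
          GROUP BY user_id, category)
    WHERE seqnum = 1
    ORDER BY user_id
""").fetchall()
print(rows)
```

User 1's favorite is `RPG` (2 plays vs. 1), and user 2's only category wins by default.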
|
Sql Subquery Not working "ORA-00904: "A2"."ID_USER": invalid identifier"
|
[
"",
"sql",
"oracle",
"subquery",
""
] |
I have many duplicate products that I need to delete. Is there a query to delete these duplicates with the lower price and/or same price, and just keep one of each product?
The duplicates have the same "Product Name". I am using OpenCart version 2.1.0.1.
|
According to the schema at <http://wiki.opencarthelp.com/doku.php?id=databse_schema>, and assuming only one language, the following query should solve your issue:
```
delete p1
from product p1
join product_description d1 on d1.product_id = p1.product_id
join product_description d2
on d2.product_id <> d1.product_id
and d2.language_id = d1.language_id
and d2.name = d1.name
join product p2 on p2.product_id = d2.product_id
where d1.language_id = 1 -- define the language used for product name
and (p2.price > p1.price -- delete if higher price exists
or p2.price = p1.price and p2.product_id < p1.product_id -- delete if same price with lower id exists
)
;
```
|
Since you are using MySQL, you need to use joins (`PARTITION BY` is not supported):
The select:
```
select p.*
from products as p
join
(
select name, min(price) as price
from products group by name having count(price) = 2
) as p2 on p2.name = p.name and p2.price = p.price;
```
Gets the lowest price for all duplicate products (where duplicate assumes there are exactly two rows of the same product).
To delete, change the initial select to a delete, as follows:
```
delete p.*
from products as p
join
(
select name, min(price) as price
from products group by name having count(price) = 2
) as p2 on p2.name = p.name and p2.price = p.price;
```
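The keep-one-per-name rule can be sketched on a simplified single table with sqlite3 (using a window function rather than the MySQL-compatible join above; ties keep the lowest id):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    INSERT INTO products (name, price) VALUES
        ('Widget', 9.99), ('Widget', 4.99),
        ('Gadget', 5.00), ('Gadget', 5.00),
        ('Unique', 1.00);
""")

# Keep, per name, the row with the highest price (lowest id on ties);
# delete everything else.
conn.execute("""
    DELETE FROM products
    WHERE id NOT IN (SELECT id
                     FROM (SELECT id, ROW_NUMBER() OVER
                                  (PARTITION BY name
                                   ORDER BY price DESC, id) AS rn
                           FROM products)
                     WHERE rn = 1)
""")
rows = conn.execute("SELECT name, price FROM products ORDER BY name").fetchall()
print(rows)
```

Each product name survives exactly once, with its highest price retained.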
|
Remove duplicate product with lower price using mysql query
|
[
"",
"mysql",
"sql",
"opencart2.x",
""
] |