| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have to make an SQL query that gives me the count of buyers per month who made at least one purchase in that month, NOT COUNTING that buyer's first purchase ever.
For example, I have this table:
```
id bill_date
1 2014-01-14
1 2014-02-14
2 2014-02-14
2 2014-02-18
1 2014-02-19
2 2014-03-14
1 2014-03-14
1 2014-03-16
1 2014-04-08
1 2014-06-03
2 2014-06-10
1 2014-06-11
3 2014-11-07
3 2014-11-13
```
Therefore:
```
Jan - 1 bill for ID1
Feb - 2 bills for ID2, 2 bills for ID1
Mar - 2 bills for ID1, 1 bill for ID2
Apr - 1 bill for ID1
Jun - 2 bills for ID1, 1 bill for ID2
Nov - 2 bills for ID3
```
Expected results:
```
period accounts
2014-02 2
2014-03 2
2014-04 1
2014-06 2
2014-11 1
```
Basically, since ID1 made a purchase in January, do a distinct count for each month after January that they appear.
Since ID2 made two purchases in February, they would count for 1 in February, and then one every month after that (even if they made multiple purchases).
For ID3 they made two purchases in November, so count them as 1 for November. Had they made one purchase in November, and another in December, December would show one, but November would not show anything.
Thank you very much in advance!
|
This may help you:
```
SELECT date_format(t1.bill_date, '%Y-%m') AS dt,
Count(DISTINCT t1.id) AS cnt
FROM tempt t1
JOIN (SELECT id,
Min(bill_date) AS bill_date
FROM tempt
GROUP BY id) AS t2
ON t1.id = t2.id
AND t1.bill_date <> t2.bill_date
GROUP BY date_format(t1.bill_date, '%Y-%m')
```
[sql fiddle](http://sqlfiddle.com/#!2/184c8/6)
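One caveat with the join above: `t1.bill_date <> t2.bill_date` also discards any repeat purchase made on the same day as the first one. A window-function variant avoids that, and is easy to sanity-check. Here is a sketch in Python with SQLite (3.25+ for window functions), using the question's data and `strftime` standing in for MySQL's `DATE_FORMAT`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tempt (id INTEGER, bill_date TEXT);
INSERT INTO tempt VALUES
 (1,'2014-01-14'),(1,'2014-02-14'),(2,'2014-02-14'),(2,'2014-02-18'),
 (1,'2014-02-19'),(2,'2014-03-14'),(1,'2014-03-14'),(1,'2014-03-16'),
 (1,'2014-04-08'),(1,'2014-06-03'),(2,'2014-06-10'),(1,'2014-06-11'),
 (3,'2014-11-07'),(3,'2014-11-13');
""")

# Rank each buyer's bills chronologically, drop only the very first bill,
# then count distinct buyers per month.
rows = conn.execute("""
SELECT strftime('%Y-%m', bill_date) AS period,
       COUNT(DISTINCT id)           AS accounts
FROM (
    SELECT id, bill_date,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY bill_date) AS rn
    FROM tempt
)
WHERE rn > 1
GROUP BY period
ORDER BY period
""").fetchall()

for period, accounts in rows:
    print(period, accounts)
```

This reproduces the expected results table from the question (2014-02 → 2, …, 2014-11 → 1).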
|
You want to first get the distinct buyers for each month, then do your counting. This is a portable solution and more in line with the way you would process it mentally.
```
select x.period, count(x.buyer_id) accounts
from (select distinct date_format(bill_date, '%Y-%m') period, buyer_id
      from billing) x
group by x.period
```
I don't like the column name "id", so I have named it "buyer_id" here. Consider renaming your column and using id for the primary key instead.
|
Get count of entries per period excluding first one (SQL query)
|
[
"",
"mysql",
"sql",
""
] |
A credit card company has several credit cards issued to each consumer.
```
CREATE TABLE card (
cardnumber int PRIMARY KEY,
customer int,
balance decimal(9,2),
status varchar(10)
);
INSERT INTO card VALUES ('100', '1', 100.00, 'active');
INSERT INTO card VALUES ('101', '1', 100.00, 'expired');
INSERT INTO card VALUES ('102', '2', 100.00, 'limited');
INSERT INTO card VALUES ('103', '2', 100.00, 'locked');
```
A report is desired consisting of one row per cardholder, total balance on all cards, and the least restrictive status in the exact order of: 1. Active, 2. Expired, 3. Limited, 4. Locked.
Cardholder #1 has a card in 'active', and a card in 'expired', the report should show 'active'.
Cardholder #2 has a card in 'limited', and a card in 'locked', the report should show 'limited'.
Query
```
select customer, sum(balance), '?'
from card
group by customer
```
How can the query show just the least restrictive status of the cardholder's cards?
|
You should create a separate table for the card "status". Then you can use a simple "CASE WHEN" block in the SQL to handle the ordering.
Example:
```
CREATE TABLE card_status (
status_id int PRIMARY KEY,
message varchar(255)
);
INSERT INTO card_status VALUES (1, 'active');
INSERT INTO card_status VALUES (2, 'expired');
...
```
Finally, the SQL:
```
SELECT customer, sum(balance), MIN(status)
FROM card
GROUP BY customer
```
where you can now filter by the status (sub-select, CASE WHEN, ...). Note that `MIN(status)` on the text column only gives the right answer here because the desired order happens to be alphabetical.
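To make the ordering explicit rather than relying on alphabetical luck, you can map each status to a rank with a `CASE` expression, take the minimum rank per customer, and map it back. A sketch in Python with SQLite, using the table from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE card (cardnumber INTEGER PRIMARY KEY, customer INTEGER,
                   balance REAL, status TEXT);
INSERT INTO card VALUES (100,1,100.00,'active'),(101,1,100.00,'expired'),
                        (102,2,100.00,'limited'),(103,2,100.00,'locked');
""")

# Map status -> explicit rank, take MIN(rank) per customer, map rank -> label.
rows = conn.execute("""
SELECT customer,
       SUM(balance) AS total,
       CASE MIN(CASE status WHEN 'active'  THEN 1
                            WHEN 'expired' THEN 2
                            WHEN 'limited' THEN 3
                            WHEN 'locked'  THEN 4 END)
            WHEN 1 THEN 'active'  WHEN 2 THEN 'expired'
            WHEN 3 THEN 'limited' WHEN 4 THEN 'locked' END AS status
FROM card
GROUP BY customer
ORDER BY customer
""").fetchall()
print(rows)
```

This keeps working even if someone later adds a status that doesn't sort alphabetically into the right place.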
|
Normalize the table by changing status to an integer with a foreign-key relationship to a status table; then you can select `min(status)` in the query.
|
MySQL how to group by and select certain text from multiple rows?
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I have two tables: a login table and a login-attempts table for tracking wrong login attempts.
login table
```
| id | username | password |
| 1 | Jon | 1234 |
| 2 | Danny | ttttt |
| 7 | Martin | fffff |
```
attempts table
```
| id_attempts| time |
| 1 | 535353 |
| 2 | 554335 |
| 1 | 654545 |
| 1 | 566777 |
```
The query must identify a user as authorized by checking the database for his combination of username/password, but it must also return the number of incorrect attempts for that user, even if the username and password are wrong.
I have tried this:
```
SELECT u.id,count(p.id_attempts)
FROM login as l, attempts as a
LEFT JOIN
ON l.id=a.id_attempts
WHERE username=Jon
AND pass=1234
```
EDITED:
Example: Jon tries to log in. If the username and password are correct, the query must return Jon's id and 3 (the number of Jon's wrong attempts). If Jon's username and password are wrong, the query must return only 3 (the number of Jon's wrong attempts).
|
Instead of "if Jon's username and password are wrong the query must return only 3 (the number of Jon's wrong attempts)", I recommend returning the authorization status in an additional column, e.g.:
Also, since the username is being specified in the query, there is not much point in returning the user id or username.
```
SELECT l.username, count(a.id_attempts) attempts, l.password='1234' authorized
FROM login as l LEFT JOIN attempts as a
ON l.id=a.id_attempts
WHERE l.id=1
GROUP BY l.username
```
will return:
```
username,attempts,authorized
Jon,3,1
```
Where 1 means authorized, and 0 not authorized.
But if you really want to meet your original requirement, do this:
```
SELECT IF( l.password = '1234' && l.id =1, l.id, "" ) id, COUNT( a.id_attempts ) attempts
FROM login AS l
LEFT JOIN attempts AS a ON l.id = a.id_attempts
WHERE l.id =1
GROUP BY l.username
```
For a correct password, this will return:
```
id,attempts
1,3
```
Where there was no correct password, this will return:
```
id,attempts
,3
```
|
I don't know which database engine you are using, since you have tagged both mysql and postgresql, but here is how the select statement looks in PostgreSQL:
```
SELECT (CASE WHEN l.password = ? THEN l.id END) AS userId,
COUNT(a.id_attempts) AS numAttempts
FROM login l LEFT JOIN attempts a ON a.id_attempts=l.id
WHERE l.username = ?
GROUP BY l.id;
```
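Both answers share the same shape: a `LEFT JOIN` so the login row survives even with zero attempts, `COUNT` over the attempt column (which ignores the NULLs the join produces), and a conditional expression for the authorization flag. A sketch of that pattern in Python with SQLite, using the question's tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE login (id INTEGER, username TEXT, password TEXT);
INSERT INTO login VALUES (1,'Jon','1234'),(2,'Danny','ttttt'),(7,'Martin','fffff');
CREATE TABLE attempts (id_attempts INTEGER, time INTEGER);
INSERT INTO attempts VALUES (1,535353),(2,554335),(1,654545),(1,566777);
""")

def check(username, password):
    # COUNT(a.id_attempts) skips the NULL row the LEFT JOIN produces for
    # users with no attempts; the CASE yields the authorization flag.
    return conn.execute("""
        SELECT l.id,
               COUNT(a.id_attempts) AS attempts,
               CASE WHEN l.password = ? THEN 1 ELSE 0 END AS authorized
        FROM login l LEFT JOIN attempts a ON a.id_attempts = l.id
        WHERE l.username = ?
        GROUP BY l.id
    """, (password, username)).fetchone()

print(check('Jon', '1234'))    # correct password, 3 attempts
print(check('Jon', 'wrong'))   # wrong password, attempts still visible
print(check('Martin', 'fffff'))  # no attempts at all -> count 0
```

Counting `a.id_attempts` rather than `l.id` matters for Martin: `COUNT(l.id)` would report 1 attempt for a user with none.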
|
SQL query with count(), join and where on two tables
|
[
"",
"mysql",
"sql",
"postgresql",
""
] |
I have three tables: Pupils, KS3Assessments and AssessmentSets.
* Pupils each have a StudentID, FName, SName etc.
* AssessmentSet contains the title of the assessment, the deadline, the year group that must complete it, etc. New ones are created throughout the year, so their titles/ids can't be named explicitly in the SQL.
* KS3Assessments records each have a StudentID that refers to the pupil who
completed the work, a SetID that refers to the relevant AssessmentSet record and an 'NCLevel' indicating the result that the pupil achieved.
I need a results overview table that looks like this:
```
- StudentID ¦ FName ¦ SName ¦ Creative Writing #1 ¦ Novel Study ¦ Random Thingy Test ¦ etc. ¦ etc.
- 072509273 ¦ Adam¦ Adamson¦ 5.5¦ 4.8¦ 6.5¦ etc.¦ etc¦
- 072509274 ¦ Bob ¦ Bobson¦ 5.8¦ 5.2¦ 7.2¦ etc.¦ etc¦
```
... so that, at any time, a teacher can see what a pupil has achieved in whatever assessments they've done so far.
So far, using pivot, I've managed to get this:
```
- StudentID, FName, SName, 147, 146, 154 (These numbers are the SetIDs)
- 072509273, Adam, Adamson, 5.5, 4.8, 6.5
- 072509274, Bob, Bobson, 5.8, 5.2, 7.2
```
Here's my SQL. I'd really appreciate any ideas about how to fix this and upgrade it to get the result that I'm looking for. I suspect it will involve an inner join (or two), but I'm still having trouble getting my head around the pivot syntax. Many thanks.
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(SetID)
from KS3Assessments
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT StudentID, FName, SName, ' + @cols + ' from
(
select KS3Assessments.StudentID,
Pupils.FName,
Pupils.SName,
KS3Assessments.NCLevel,
KS3Assessments.SetID
from KS3Assessments inner join Pupils on KS3Assessments.StudentID = Pupils.StudentID
where Pupils.GroupDesignation = ''8KF/En 14/15''
) x
pivot (max(NCLevel) for SetID in (' + @cols + ') ) p '
execute(@query)
```
|
Try:
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(Title)
from AssessmentSet
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT StudentID, FName, SName, ' + @cols + ' from
(
select KS3Assessments.StudentID,
Pupils.FName,
Pupils.SName,
KS3Assessments.NCLevel,
AssessmentSet.Title
from KS3Assessments inner join Pupils on KS3Assessments.StudentID = Pupils.StudentID
inner join AssessmentSet on KS3Assessments.SetID = AssessmentSet.SetID
where Pupils.GroupDesignation = ''8KF/En 14/15''
) x
pivot (max(NCLevel) for Title in (' + @cols + ') ) p '
execute(@query)
```
|
Give this a try:
```
DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX)
SELECT @cols = STUFF((
SELECT DISTINCT ',' + QUOTENAME(title)
FROM AssessmentSet
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
,1,1,'')
SET @query = N'
SELECT StudentID, FName, SName, ' + @cols + '
FROM (
SELECT
K.StudentID,
P.FName,
P.SName,
K.NCLevel,
A.title
FROM KS3Assessments K
INNER JOIN Pupils P ON K.StudentID = P.StudentID
INNER JOIN AssessmentSet A ON K.SetID = A.SetID
WHERE P.GroupDesignation = ''8KF/En 14/15''
) x
PIVOT (MAX(NCLevel) FOR title IN (' + @cols + ')
) p '
EXECUTE(@query)
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!3/caffc/12)
Sample output:
```
| STUDENTID | FNAME | SNAME | CREATIVE WRITING #1 | NOVEL STUDY | RANDOM THINGY TEST |
|-----------|-------|---------|---------------------|-------------|--------------------|
| 72509273 | Adam | Adamson | 5.5 | 4.8 | 6.5 |
| 72509274 | Bob | Bobson | 5.8 | 5.2 | 7.2 |
```
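The `PIVOT` keyword is SQL Server-specific; the reshaping it does is equivalent to one `MAX(CASE ...)` per output column, which is what the dynamic `@cols` string effectively generates. A portable sketch of that conditional-aggregation form, in Python with SQLite and sample data assumed to mirror the fiddle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Pupils (StudentID INTEGER, FName TEXT, SName TEXT);
CREATE TABLE AssessmentSet (SetID INTEGER, Title TEXT);
CREATE TABLE KS3Assessments (StudentID INTEGER, SetID INTEGER, NCLevel REAL);
INSERT INTO Pupils VALUES (72509273,'Adam','Adamson'),(72509274,'Bob','Bobson');
INSERT INTO AssessmentSet VALUES
 (147,'Creative Writing #1'),(146,'Novel Study'),(154,'Random Thingy Test');
INSERT INTO KS3Assessments VALUES
 (72509273,147,5.5),(72509273,146,4.8),(72509273,154,6.5),
 (72509274,147,5.8),(72509274,146,5.2),(72509274,154,7.2);
""")

# One MAX(CASE ...) per assessment title -- the portable equivalent of PIVOT.
rows = conn.execute("""
SELECT P.StudentID, P.FName, P.SName,
       MAX(CASE WHEN A.Title = 'Creative Writing #1' THEN K.NCLevel END) AS cw1,
       MAX(CASE WHEN A.Title = 'Novel Study'         THEN K.NCLevel END) AS novel,
       MAX(CASE WHEN A.Title = 'Random Thingy Test'  THEN K.NCLevel END) AS thingy
FROM KS3Assessments K
JOIN Pupils P        ON K.StudentID = P.StudentID
JOIN AssessmentSet A ON K.SetID = A.SetID
GROUP BY P.StudentID, P.FName, P.SName
ORDER BY P.StudentID
""").fetchall()
print(rows)
```

In the real query the column list is built dynamically because new assessments appear during the year; here the three titles are hard-coded only to keep the sketch self-contained.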
|
How do I turn this SQL pivot into an inner join pivot?
|
[
"",
"sql",
"sql-server",
""
] |
I have a query and its result below.
In this database, NULL and 0 mean the same thing.
Now I want a counter that combines NULL and 0 into a single bucket.
E.g. in the following example I want the result like this:
```
IsVirtual Category counter
NULL+0 3 343+8 = (351 is Total)
```
**Query**
```
select * from
(
Select IsVirtual, Category, count(*) as counter
from [Hardware]
group by IsVirtual, Category
) innercat
```
Output
```
+-----------+----------+---------+
| IsVirtual | Category | counter |
+-----------+----------+---------+
| NULL | 3 | 343 |
| 0 | 3 | 8 |
| 1 | 2 | 1 |
| 0 | 1 | 1 |
| NULL | 6 | 119 |
| 0 | 4 | 1 |
| NULL | 1 | 70 |
| 0 | 5 | 9 |
| NULL | 4 | 54 |
| 0 | 2 | 2 |
| NULL | 5 | 41 |
| NULL | 2 | 112 |
| 1 | 1 | 5 |
+-----------+----------+---------+
```
|
I think you want this:
```
SELECT COALESCE(IsVirtual, 0) as [IsVirtual],
Category,
Count(*) as [Counter]
FROM yourtable
GROUP BY COALESCE(IsVirtual, 0),Category
```
This will give you expected result without using subquery.
|
Try this:
```
select * from (
    Select CASE ISNULL(IsVirtual, 0)
             WHEN 0 THEN 'NULL + 0'
             ELSE CAST(IsVirtual AS varchar(10))
           END AS IsVirtual,
           Category,
           count(*) as counter
    from [Hardware]
    group by ISNULL(IsVirtual, 0), Category
) innercat
```
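The `COALESCE` approach can be checked quickly. A sketch in Python with SQLite (`COALESCE` is portable, unlike SQL Server's `ISNULL`), using invented row counts that reproduce the 343 + 8 = 351 example from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Hardware (IsVirtual INTEGER, Category INTEGER)")
# 343 NULL rows and 8 zero rows in category 3, plus one IsVirtual=1 row
conn.executemany("INSERT INTO Hardware VALUES (?, ?)",
                 [(None, 3)] * 343 + [(0, 3)] * 8 + [(1, 2)])

# Folding NULL into 0 in both SELECT and GROUP BY merges the two buckets.
rows = conn.execute("""
SELECT COALESCE(IsVirtual, 0) AS IsVirtual, Category, COUNT(*) AS counter
FROM Hardware
GROUP BY COALESCE(IsVirtual, 0), Category
ORDER BY IsVirtual, Category
""").fetchall()
print(rows)
```

The NULL and 0 rows for category 3 collapse into one row with counter 351, as the question requested.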
|
Update innerquery result
|
[
"",
"mysql",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I am trying to write a script which drops some obsolete tables in a Postgres database. I want to be sure the tables are empty before dropping them. I also want the script to be safe to keep in our migration scripts, where it may run again even after these tables have actually been dropped.
Here is my script:
```
CREATE OR REPLACE FUNCTION __execute(TEXT) RETURNS VOID AS $$
BEGIN EXECUTE $1; END;
$$ LANGUAGE plpgsql STRICT;
CREATE OR REPLACE FUNCTION __table_exists(TEXT, TEXT) RETURNS bool as $$
SELECT exists(SELECT 1 FROM information_schema.tables WHERE (table_schema, table_name, table_type) = ($1, $2, 'BASE TABLE'));
$$ language sql STRICT;
CREATE OR REPLACE FUNCTION __table_is_empty(TEXT) RETURNS bool as $$
SELECT not exists(SELECT 1 FROM $1 );
$$ language sql STRICT;
-- Start migration here
SELECT __execute($$
DROP TABLE oldtable1;
$$)
WHERE __table_exists('public', 'oldtable1')
AND __table_is_empty('oldtable1');
-- drop auxilary functions here
```
And finally I got:
> ```
> ERROR: syntax error at or near "$1"
> LINE 11: SELECT not exists(SELECT 1 FROM $1 );
> ```
Is there any other way?
|
You must use `EXECUTE` if you want to pass a table name as parameter in a Postgres function.
Example:
```
CREATE OR REPLACE FUNCTION __table_is_empty(param character varying)
RETURNS bool
AS $$
DECLARE
v int;
BEGIN
EXECUTE 'select 1 WHERE EXISTS( SELECT 1 FROM ' || quote_ident(param) || ' ) '
INTO v;
IF v THEN return false; ELSE return true; END IF;
END;
$$ LANGUAGE plpgsql;
/
```
Demo: <http://sqlfiddle.com/#!12/09cb0/1>
|
No, no, no. For many reasons.
[@kordirko](https://stackoverflow.com/a/26944582/939860) already pointed out the immediate cause for the error message: In plain SQL, variables can only be used for *values* not for *key words* or *identifiers*. You can fix that with [**dynamic SQL**](https://stackoverflow.com/questions/tagged/plpgsql+dynamic-sql), but that still doesn't make your code right.
You are applying **programming paradigms** from other programming languages. With PL/pgSQL, it is *extremely* inefficient to split your code into multiple separate tiny sub-functions. The overhead is huge in comparison.
Your actual call is also a time bomb. Expressions in the `WHERE` clause are executed **in any order**, so this may or may not raise an exception for non-existing table names:
```
WHERE __table_exists('public', 'oldtable1')
AND __table_is_empty('oldtable1');
```
... which will roll back your whole transaction.
Finally, you are completely open to race conditions. Like [@Frank already commented](https://stackoverflow.com/questions/26943904/check-if-table-is-empty-in-runtime/26945929#comment42432387_26943904), a table can be in use by concurrent transactions, in which case open locks may stall your attempt to drop the table. Could also lead to deadlocks (which the system resolves by rolling back all but one competing transactions). Take out an **exclusive lock** yourself, *before* you check whether the table is (still) empty.
### Proper function
This is **safe for concurrent use**. It takes an array of table names (and optionally a schema name) and only drops existing, empty tables that are not locked in any way:
```
CREATE OR REPLACE FUNCTION f_drop_tables(_tbls text[] = '{}'
, _schema text = 'public'
, OUT drop_ct int) AS
$func$
DECLARE
_tbl text; -- loop var
_empty bool; -- for empty check
BEGIN
drop_ct := 0; -- init!
FOR _tbl IN
SELECT quote_ident(table_schema) || '.'
|| quote_ident(table_name) -- qualified & escaped table name
FROM information_schema.tables
WHERE table_schema = _schema
AND table_type = 'BASE TABLE'
AND table_name = ANY(_tbls)
LOOP
EXECUTE 'SELECT NOT EXISTS (SELECT 1 FROM ' || _tbl || ')'
INTO _empty; -- check first, only lock if empty
IF _empty THEN
EXECUTE 'LOCK TABLE ' || _tbl; -- now table is ripe for the plucking
EXECUTE 'SELECT NOT EXISTS (SELECT 1 FROM ' || _tbl || ')'
INTO _empty; -- recheck after lock
IF _empty THEN
EXECUTE 'DROP TABLE ' || _tbl; -- go in for the kill
drop_ct := drop_ct + 1; -- count tables actually dropped
END IF;
END IF;
END LOOP;
END
$func$ LANGUAGE plpgsql STRICT;
```
Call:
```
SELECT f_drop_tables('{foo1,foo2,foo3,foo4}');
```
To call with a different schema than the default 'public':
```
SELECT f_drop_tables('{foo1,foo2,foo3,foo4}', 'my_schema');
```
### Major points
* Reports the number of tables actually dropped. (Adapt to report info of your choice.)
* Using the **information schema** like in your original. Seems the right choice here, but be aware of subtle limitations:
+ [How to check if a table exists in a given schema](https://stackoverflow.com/questions/20582500/how-to-check-if-a-table-exists-in-a-given-schema/24089729#24089729)
* For use under heavy concurrent load (with long transactions), consider the [**`NOWAIT`** option for the `LOCK` command](http://www.postgresql.org/docs/current/interactive/sql-lock.html) and possibly catch exceptions from it.
* [Per documentation on **"Table-level Locks"**:](http://www.postgresql.org/docs/current/interactive/explicit-locking.html#LOCKING-TABLES)
> `ACCESS EXCLUSIVE`
>
> Conflicts with locks of all modes (`ACCESS SHARE`, `ROW SHARE`, `ROW EXCLUSIVE`,
> `SHARE UPDATE` `EXCLUSIVE`, `SHARE`, `SHARE ROW EXCLUSIVE`, `EXCLUSIVE`,
> and `ACCESS EXCLUSIVE`). This mode guarantees that the holder
> is the only transaction accessing the table in any way.
>
> Acquired by the `ALTER TABLE`, `DROP TABLE`, `TRUNCATE`, `REINDEX`, `CLUSTER`, and `VACUUM FULL` commands. This is also the
> default lock mode for **`LOCK TABLE` statements that do not specify a mode explicitly**.
Bold emphasis mine.
|
Check if table is empty in runtime
|
[
"",
"sql",
"postgresql",
"stored-procedures",
"plpgsql",
"dynamic-sql",
""
] |
I've got a table in postgres, with a column with int(8) values:
Table TABLE:
```
number | name
----------------------
1111 | a
1122 | b
1133 | c
1144 | d
1155 | e
2211 | f
2222 | g
2233 | h
2244 | i
2255 | k
```
I want to select all rows where 'number' begins with "11":
```
number | name
----------------------
1111 | a
1122 | b
1133 | c
1144 | d
1155 | e
```
When both columns (number and name) are varchar type, I can match values using the % wildcard, like:
```
select *
from TABLE
where number like '11%'
```
But it does not work on int.
How can I do it?
Thanks!
|
I would do this with explicit conversion:
```
select *
from TABLE
where cast(number as varchar(255)) like '11%';
```
Be very careful when you write SQL code that does implicit conversion.
|
```
SELECT *
FROM table
WHERE number::text LIKE '11%'
```
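Both answers do the same thing: convert the integer to text so a `LIKE` prefix match can apply (`::text` is the Postgres shorthand for the standard `CAST`). A sketch in Python with SQLite, using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (number INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1111,'a'),(1122,'b'),(1133,'c'),(1144,'d'),(1155,'e'),
                  (2211,'f'),(2222,'g'),(2233,'h'),(2244,'i'),(2255,'k')])

# CAST makes the text comparison explicit; LIKE (not =) does the pattern match.
rows = conn.execute("""
SELECT * FROM t
WHERE CAST(number AS TEXT) LIKE '11%'
ORDER BY number
""").fetchall()
print(rows)
```

If the numbers were all the same width, an arithmetic filter such as `number / 100 = 11` would also work and could use an index, but the cast-and-LIKE form matches the question as asked.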
|
Postgres - Selecting rows beginning with value
|
[
"",
"sql",
"postgresql",
""
] |
I have a table and I need to get unique rows with the latest called_date, but adding extra phone columns to the right when the phones differ and the name is the same. There could be even more different phones with the same name; I need to add them all to the right. I have been working on it all day. Is it even possible? How can I do that?
The database is SQL Server. The Excel example is just for illustration.

|
First it'd be nice to have some nicely formatted sample data to use:
```
SELECT
*
INTO ##CallData
FROM
(
SELECT '1245' as ID, 555963 as Phone, '2014-11-01' as Called_date, 'some_name' as Name, 'some@gmail.com' as Email
UNION
SELECT '5896' as ID, 896111 as Phone, '2014-11-05' as Called_date, 'other_name' as Name, 'other@yahoo.com' as Email
UNION
SELECT '4751' as ID, 666963 as Phone, '2014-11-14' as Called_date, 'some_name' as Name, 'some@gmail.com' as Email
UNION
SELECT '2896' as ID, 987987 as Phone, '2014-11-14' as Called_date, 'diff_name' as Name, 'diff@gmail.com' as Email
)t
```
Next, we need to use a CTE to return the row with the most recent Called_date for each Name:
```
;WITH CallData AS
(
SELECT
Id,
Phone,
Called_date,
Name,
Email,
ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Called_date DESC) AS RN
FROM
##CallData
)
SELECT
Id,
Phone,
Called_date,
Name,
Email
FROM
CallData
WHERE
RN = 1
```
Finally, you can use FOR XML PATH and STUFF to join in the aggregated phone numbers for each name:
```
;WITH CallData AS
(
SELECT
Id,
Phone,
Called_date,
Name,
Email,
ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Called_date DESC) AS RN
FROM
##CallData
)
SELECT
Id,
Phone,
Called_date,
Name,
Email,
STUFF((SELECT ', ' + CAST(t2.phone as VARCHAR(10))
FROM ##CallData t2
WHERE t.Name = t2.Name
AND t.Phone <> t2.Phone
FOR XML PATH('')
), 1, 2, '') as Additional_phone
FROM
CallData t
WHERE
RN = 1
```
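The `STUFF`/`FOR XML PATH` trick is SQL Server's pre-2017 way of string-aggregating; other engines expose it directly (`GROUP_CONCAT` in MySQL/SQLite, `STRING_AGG` in newer SQL Server and Postgres). A sketch of the same latest-row-plus-extra-phones logic in Python with SQLite, using the answer's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls (ID TEXT, Phone INTEGER, Called_date TEXT, Name TEXT, Email TEXT);
INSERT INTO calls VALUES
 ('1245',555963,'2014-11-01','some_name','some@gmail.com'),
 ('5896',896111,'2014-11-05','other_name','other@yahoo.com'),
 ('4751',666963,'2014-11-14','some_name','some@gmail.com'),
 ('2896',987987,'2014-11-14','diff_name','diff@gmail.com');
""")

# Latest row per name, plus every *other* phone for that name concatenated
# to the right (GROUP_CONCAT standing in for the FOR XML PATH trick).
rows = conn.execute("""
SELECT c.ID, c.Phone, c.Called_date, c.Name, c.Email,
       (SELECT GROUP_CONCAT(c2.Phone, ', ')
        FROM calls c2
        WHERE c2.Name = c.Name AND c2.Phone <> c.Phone) AS Additional_phone
FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Called_date DESC) AS rn
    FROM calls
) c
WHERE rn = 1
ORDER BY c.ID
""").fetchall()
for r in rows:
    print(r)
```

Names with a single phone simply get NULL in the extra column, which matches the "add them all to the right" requirement without a fixed number of columns.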
|
Try this
```
SELECT id, Phone, Called_date, Name, Email
FROM(
SELECT id, Phone, Called_date, Name, Email, ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Called_date DESC) as rn
FROM yourTable
) a
WHERE rn = 1
```
Updated Answer:
```
SELECT a.id, a.Phone, a.Called_date, a.Name, a.Email, c.Phone as SecondPhone
FROM
(
SELECT id, Phone, Called_date, Name, Email, ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Called_date DESC) as rn
FROM yourTable
) as a
left join (
SELECT name, Phone
FROM
( SELECT name, Phone, ROW_NUMBER() OVER (PARTITION BY Email ORDER BY Called_date DESC) as rn
FROM yourTable
) b
Where rn = 2
) c on a.name = c.name
WHERE rn = 1
```
|
SQL Server : select how to?
|
[
"",
"sql",
"sql-server",
"select",
""
] |
What I am trying to accomplish:
```
Dataset 1
Name1
Name2
Name3
Dataset 2
Number1
Number2
Number3
```
will become 2 columns:
```
dataset1 dataset2
Name1 Number1
Name2 Number2
Name3 Number3
```
My datasets 1 & 2 will always have an equal number of rows.
Which name links to which number I don't care, as long as two names are not linked to the same number and vice versa.
How can I solve this with SQL / SQL Server?
|
If you don't want to add an identity column to the tables, you can use the ROW_NUMBER() function like this:
```
SELECT
T1.Col1,
T2.Col1
FROM
(SELECT Col1, ROW_NUMBER() OVER (ORDER BY Col1) AS N FROM Table1) T1
INNER JOIN
(SELECT Col1, ROW_NUMBER() OVER (ORDER BY Col1) AS N FROM Table2) T2
ON T1.N = T2.N
```
Here, replace Table1 and Table2 with the name of your tables, and replace Col1 with the name of the column (or columns) that you want to output from the two tables.
|
Add identity columns to both tables and perform the join on the basis of these columns:
```
ALTER TABLE Table1
ADD ID INT IDENTITY(1,1) NOT NULL
ALTER TABLE Table2
ADD ID INT IDENTITY(1,1) NOT NULL
SELECT Table1.dataset1col , Table2.dataset2Col
From Table1 INNER JOIN Table2
ON Table1.ID = Table2.ID
```
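The `ROW_NUMBER()` approach can be sketched directly. In Python with SQLite (3.25+ for window functions), with the question's Name/Number rows as sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Col1 TEXT);
CREATE TABLE Table2 (Col1 TEXT);
INSERT INTO Table1 VALUES ('Name1'),('Name2'),('Name3');
INSERT INTO Table2 VALUES ('Number1'),('Number2'),('Number3');
""")

# Number the rows of each table independently, then join on that number.
rows = conn.execute("""
SELECT T1.Col1, T2.Col1
FROM (SELECT Col1, ROW_NUMBER() OVER (ORDER BY Col1) AS N FROM Table1) T1
JOIN (SELECT Col1, ROW_NUMBER() OVER (ORDER BY Col1) AS N FROM Table2) T2
  ON T1.N = T2.N
ORDER BY T1.N
""").fetchall()
print(rows)
```

The particular pairing depends on the `ORDER BY` inside each `ROW_NUMBER()`, which is fine here since the question explicitly does not care which name gets which number, only that the pairing is one-to-one.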
|
T-SQL: merge 2 datasets with equal number of rows next to each other
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am a complete SQL noob and am trying to understand why this query returns 115:
```
select datediff(yy, -3, getdate())
```
|
`datediff` takes three parameters: the first is the `interval`, the second the `start date`, and the third the `end date`. You are passing -3 as the start date, and we can show:
```
SELECT CAST(-3 AS datetime) -- results in '1899-12-29 00:00:00.000'
```
And because 2014 - 1899 is 115, you get this as the result.
|
Because DATEDIFF() calculates an interval between 2 dates, and you specified -3 as one of them.
Firstly, the date "zero" is 1900-01-01 on SQL Server.
Secondly, your "-3" is not a date literal, so it is interpreted as three days before date zero, i.e. 1899-12-29.
2014 - 1899 = 115
Use DATEADD() instead to achieve what you want to do.
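The arithmetic behind both answers is easy to reproduce. SQL Server treats an integer in a datetime context as a day offset from its date "zero", 1900-01-01, and `DATEDIFF(yy, ...)` counts calendar-year boundaries crossed. A quick check in Python:

```python
from datetime import datetime, timedelta

# SQL Server's date "zero" is 1900-01-01; an integer n in a datetime context
# means n days after that, so -3 lands three days earlier.
zero = datetime(1900, 1, 1)
d = zero + timedelta(days=-3)
print(d)            # 1899-12-29, matching CAST(-3 AS datetime)

# DATEDIFF(yy, ...) counts year boundaries crossed, not elapsed full years.
print(2014 - d.year)   # 115, the result the question observed
```

The same boundary-counting rule is why `DATEDIFF(yy, '2013-12-31', '2014-01-01')` is 1 even though only one day elapsed.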
|
SQL datediff with negative parameter
|
[
"",
"sql",
"sql-server",
"date",
""
] |
I have a database with two tables: `members` and `profilefields`.
`members` has the columns: `ID, name, email`
`profilefields` has the columns: `ID, field1, field2, etc.`
I'd like to select the `name` and `email` of each row in `members`, based on a query of `profilefields`
I think this is how it works, but I don't know how to make the query:
Get id from `profilefields` where `field1 = X` AND `field2 = Y`
Get name and email from members for those IDs
I'm really new to this so I'd really appreciate any help.
|
You could use the `in` operator:
```
SELECT name, email
FROM members
WHERE id IN (SELECT id
FROM profilefields
WHERE field1 = 'X' and field2 = 'Y')
```
|
This should do the trick:
```
SELECT
m.name,
m.email
FROM
members m
INNER JOIN profilefields pf ON
m.ID = pf.id
WHERE
pf.field1 = 'X' AND
pf.field2 = 'Y'
```
Here we use an `INNER JOIN` in the `FROM` clause to link the tables ON their `id` field. The filter goes into the `WHERE` clause, and the fields you want to bring back are up in the `SELECT` clause.
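With `id` unique in `profilefields`, the `IN` subquery and the `INNER JOIN` return the same rows (with duplicate matches, the join could repeat members). A sketch in Python with SQLite, with invented sample rows since the question gives none:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE members (ID INTEGER, name TEXT, email TEXT);
CREATE TABLE profilefields (ID INTEGER, field1 TEXT, field2 TEXT);
INSERT INTO members VALUES (1,'Ann','ann@x.com'),(2,'Ben','ben@x.com');
INSERT INTO profilefields VALUES (1,'X','Y'),(2,'X','Z');
""")

# Variant 1: subquery with IN
in_rows = conn.execute("""
SELECT name, email FROM members
WHERE ID IN (SELECT ID FROM profilefields WHERE field1='X' AND field2='Y')
""").fetchall()

# Variant 2: INNER JOIN with the filter in WHERE
join_rows = conn.execute("""
SELECT m.name, m.email
FROM members m JOIN profilefields pf ON m.ID = pf.ID
WHERE pf.field1='X' AND pf.field2='Y'
""").fetchall()

print(in_rows, join_rows)
```

Only Ann's profile matches `field1='X' AND field2='Y'`, so both variants return just her name and email.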
|
MySQL Query Of Two Tables
|
[
"",
"mysql",
"sql",
"database",
""
] |
How do I construct my left join to produce this output?
Table 1: Main Table
```
| ID | Team | Color |
| 1 | A | Red |
| 2 | A | Blue |
| 3 | B | Green |
| 4 | B | Green |
| 5 | C | Yellow |
```
Table 2: Lookup Table
```
| Team | Color | Final |
| A | Red | A-1 |
| A | Blue | A |
| B | Green | B |
| C | Yellow | C |
```
Output Table
```
| ID | Team | Color | Final |
| 1 | A | Red | A-1 |
| 2 | A | Blue | A |
| 3 | B | Green | B |
| 4 | B | Green | B |
| 5 | C | Yellow | C |
```
In excel the logic would be something like this ...
```
=IF(AND(Team = "A", Color = "Red"), "A-1", Team)
```
How does this translate to an SQL query in Access or MySQL using an inner join? I have something like the query below, but I don't know how to construct an IF statement within it.
I want the query to check if **Team** = 'A' **AND** **Color** = 'Red', then return 'A-1', otherwise return the same Team column value.
```
SELECT Table1.ID, Table1.Team, Table1.Color, Table2.Final FROM Table1
INNER JOIN Table2 ON Table1.Team = Table2.Team;
```
Any help would be highly appreciated. Thanks a lot.
|
You can use the AND keyword in your JOIN clause:
```
SELECT * FROM Table1
INNER JOIN Table2
ON Table1.Team = Table2.Team AND Table1.Color = Table2.Color
```
|
Try this untested query, which translates the Excel IF directly into a CASE expression (no join needed for this rule):
```
SELECT Table1.ID, Table1.Team, Table1.Color,
       CASE WHEN Table1.Team = 'A' AND Table1.Color = 'Red'
            THEN 'A-1'
            ELSE Table1.Team
       END AS Final
FROM Table1;
```
|
INNER JOIN Using IF logic
|
[
"",
"mysql",
"sql",
""
] |
I have a CRM database with primary table columns `createon` (`date`) & `createat` (`time`). We'll call it table A
There is a secondary table that is used to log data concerning activity taken in an attempt to contact records created in A, let's call this table B.
I'm trying to write a tSQL query that will show the average time it takes between when a record is created in A & the first activity is logged in B.
This is what I have so far:
```
SELECT
B.userid,
avg(cast(datediff(MINUTE, A.createat, B.ontime) as float))
AS 'AVG TIME FROM LEAD CREATION TO FIRST LOGGED ACTIVITY'
FROM
TableA A, TableB B
WHERE
A.KEY_value = B.KEY_value
AND B.USERID in ('USER1','USER2','USER3','USER4','USER5')
AND B.ONDATE > '09/01/2014'
GROUP BY
B.USERID
HAVING
B.ontime = min(B.ontime)
```
The query will work if I remove the 'having' clause, but then I can't be sure I'm getting the time of the first logged activity in Table B (min(B.ontime))
I'm still a SQL novice so any help/guidance with this would be greatly appreciated!
|
One way to do this is to get the minimum ontime for each user in a subquery and join it with table A:
```
SELECT
T.userid,
ISNULL(avg(cast(datediff(MINUTE, cast(A.createat as datetime)+A.createdon, T.MinOnTime) as float)),0)
AS [AVG TIME FROM LEAD CREATION TO FIRST LOGGED ACTIVITY]
FROM
TableA A
LEFT JOIN ( SELECT Key_value, userid, min(cast(B.ontime as datetime) + B.ondate) as MinOnTime FROM
TableB B
WHERE B.USERID in ('USER1','USER2','USER3','USER4','USER5')
AND B.ONDATE > '09/01/2014'
GROUP BY KEY_Value, userid
) T
on A.KEY_value = T.KEY_value
GROUP BY T.userid
```
|
```
SELECT
t.userid
,case when A.createon = B.ondate then avg(cast(datediff(MINUTE, A.createat, B.ontime) as float))
else avg(datediff(MINUTE, A.createon, B.ondate)+(cast(datediff(MINUTE, A.createat, B.ontime) as float)))
end as 'AVG TIME FROM LEAD CREATION TO FIRST LOGGED ACTIVITY'
FROM
TableA a
LEFT JOIN ( SELECT Key_value, userid, min(B.ontime) as MinOnTime
FROM TableB b
WHERE b.USERID in ('USER1','USER2','USER3','USER4','USER5')
and b.ondate > '2014/09/01'
GROUP BY KEY_Value, userid) t on a.KEY_value = t.KEY_value
GROUP BY
t.userid
```
|
tSQL: Get average amount of time from record creation to first contact
|
[
"",
"sql",
"sql-server",
"t-sql",
"average",
""
] |
SQL query:
```
Select *
from tabEmployee
where city='Delhi' OR locality='' OR pincode='' OR salary Between '' AND '' OR designation=''
```
I am quite new to SQL Server.
I want to write a sql query where city is mandatory and other fields may or may not be provided by the user.
I mean to say if one user enters locality='Kirti Nagar'.
Filter should work according to city='Delhi' and locality='Kirti Nagar'.
In other case, any user may enter only salary range.
Then Filter should work according to city='Delhi' and salary Between 5000 AND 10000.
If user doesn't enter any field then filter should work according to city='Delhi' only.
**How can I do this?**
|
Try this:
```
Select *
from tabEmployee
where city='Delhi' AND ISNULL(locality_parameter,locality)=locality AND
(pincode=pincode_parameter OR pincode_parameter=0) AND
ISNULL(designation_parameter ,designation)=designation AND
salary BETWEEN ISNULL(lower,salary) AND ISNULL(upper,salary)
```
Set pincode_parameter = 0 when the user doesn't enter a pincode.
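The pattern in this answer, `ISNULL(param, column) = column`, makes a NULL parameter match every row, so each filter becomes optional while the city stays mandatory. A sketch in Python with SQLite (`COALESCE` in place of `ISNULL`), with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tabEmployee (city TEXT, locality TEXT, salary INTEGER, designation TEXT);
INSERT INTO tabEmployee VALUES
 ('Delhi','Kirti Nagar',6000,'Engineer'),
 ('Delhi','Saket',12000,'Manager'),
 ('Mumbai','Andheri',7000,'Engineer');
""")

def search(locality=None, lo=None, hi=None, designation=None):
    # COALESCE(param, column) = column is always true when param is NULL,
    # so each optional filter disappears when the user leaves it empty.
    return conn.execute("""
        SELECT * FROM tabEmployee
        WHERE city = 'Delhi'
          AND COALESCE(?, locality)    = locality
          AND COALESCE(?, designation) = designation
          AND salary BETWEEN COALESCE(?, salary) AND COALESCE(?, salary)
    """, (locality, designation, lo, hi)).fetchall()

print(len(search()))                  # city only: both Delhi rows
print(search(locality='Kirti Nagar'))
print(search(lo=5000, hi=10000))
```

One caveat, which applies to the `ISNULL` version too: the trick fails for rows where the column itself is NULL, since `NULL = NULL` is not true.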
|
Try this :
```
Select * from tabEmployee
where city='Delhi' and (locality='Kirti Nagar' OR pincode='411521' OR
(salary Between '5000' AND '10000') OR designation='Engineer')
```
|
SQL query which filters according to one or more fields
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables named `Evaluation` and `Value`.
In both tables, there are four columns. But three of the four are the same. In other words, they both have the `CaseNum`, `FileNum`, `ActivityNum` columns. In addition to those, the `Evaluation` table has the `Grade` column, and the `Value` table has the `Score` column.
I want to merge the two into one table, joining by `CaseNum`, `FileNum`, and `ActivityNum`, so I have a new table of five columns, including `Value` and `Score`.
Can I use `INNER JOIN` multiple times to do this?
|
**Yes: You can use `Inner Join`** to join on multiple columns.
```
SELECT E.CaseNum, E.FileNum, E.ActivityNum, E.Grade, V.Score from Evaluation E
INNER JOIN Value V
ON E.CaseNum = V.CaseNum AND
E.FileNum = V.FileNum AND
E.ActivityNum = V.ActivityNum
```
Create table
```
CREATE TABLE MyNewTab(CaseNum int, FileNum int,
ActivityNum int, Grade int, Score varchar(100))
```
Insert values
```
INSERT INTO MyNewTab (CaseNum, FileNum, ActivityNum, Grade, Score)
SELECT E.CaseNum, E.FileNum, E.ActivityNum, E.Grade, V.Score from Evaluation E
INNER JOIN Value V
ON E.CaseNum = V.CaseNum AND
E.FileNum = V.FileNum AND
E.ActivityNum = V.ActivityNum
```
|
No, just include the different fields in the "ON" clause of one inner join statement:
```
SELECT * from Evaluation e JOIN Value v ON e.CaseNum = v.CaseNum
AND e.FileNum = v.FileNum AND e.ActivityNum = v.ActivityNum
```
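The whole merge is a single `INSERT ... SELECT` over a three-column join. A sketch in Python with SQLite, with invented rows (the table name `Value` is quoted since it collides with a keyword in some dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Evaluation (CaseNum INTEGER, FileNum INTEGER, ActivityNum INTEGER, Grade INTEGER);
CREATE TABLE "Value"    (CaseNum INTEGER, FileNum INTEGER, ActivityNum INTEGER, Score INTEGER);
INSERT INTO Evaluation VALUES (1,1,1,90),(1,1,2,80);
INSERT INTO "Value"    VALUES (1,1,1,75),(1,1,2,60);
CREATE TABLE Merged (CaseNum INTEGER, FileNum INTEGER, ActivityNum INTEGER,
                     Grade INTEGER, Score INTEGER);
-- one INSERT ... SELECT, joining on all three key columns at once
INSERT INTO Merged
SELECT E.CaseNum, E.FileNum, E.ActivityNum, E.Grade, V.Score
FROM Evaluation E
JOIN "Value" V
  ON  E.CaseNum     = V.CaseNum
  AND E.FileNum     = V.FileNum
  AND E.ActivityNum = V.ActivityNum;
""")
rows = conn.execute("SELECT * FROM Merged ORDER BY ActivityNum").fetchall()
print(rows)
```

Each (CaseNum, FileNum, ActivityNum) key matches exactly once, so the merged table has one five-column row per pair, as the question wanted.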
|
How to join two tables by multiple columns in SQL?
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I'm using Oracle 10g and want to delete approximately 5 million records from the Persons table (15 million records in total) that have no reference in the Orders table (5 million records in total).
Since deleting 5 million records in one go would cause undo log issues, I've decided to delete in batches of 100k. I'm using the query below:
```
DELETE FROM Persons p
WHERE City = 'ABC'
AND NOT EXISTS
(SELECT O_Id
FROM Orders o
WHERE p.P_Id = o.P_Id)
AND ROWNUM <= 100000
```
Now the problem is that this query takes as long to execute for 100k records as it would for 5 million, because full table scans and joins on both tables still happen.
Is there an efficient way to rewrite this query for faster execution?
Or replace the NOT EXISTS clause with a better join condition?
Or use some better way to limit records to 100k?
P.S. This is a one-time operation and I can't use any DDL operations for it; however, PL/SQL is fine.
|
If you want this query to run faster, add the following two indexes:
```
create index idx_persons_city_pid on persons(city, p_id);
create index idx_orders_pid on orders(p_id);
```
|
From my experience, the fastest way to delete a lot of rows is:
Solution 1 (recommended by Tom Kyte):
```
SET TRANSACTION USE ROLLBACK SEGMENT <your_seg>;
DELETE FROM <tab1> WHERE <cond>;
COMMIT;
```
OR
Solution 2:
```
create table new_table unrecoverable as select * from old_table where ...;
drop table old_table;
rename new_table to old_table;
create index old_table_idx1 on old_table(c1,c2) unrecoverable parallel 5;
```
I used the second solution in different contexts: it is always the fastest to delete huge amount of rows.
An alternative is to put the data to delete in a partition and then drop the partition (each partition has its own rollback segment, can use parallelism, ...).
|
How to delete large amount of data from Oracle table in batches
|
[
"sql",
"database",
"oracle",
"oracle10g"
] |
Good day.
I have the following question:
Is it possible to check for the existence of a table when an SQL script runs for the first time?
I'm using Advantage Data Architect 11.10.
I want to clarify my question.
In my script I need to create a temporary table each time the script starts. To do this, I delete the temporary table and recreate it. For example (1):
> ```
> ...
> if exists (select * from #tmp) then
> delete table #tmp;
> end if;
>
> create table #tmp (g integer);
> ...
> ```
But when I run my script for the first time I get the following error:
> The temporary table cannot be found.
To fix the error, I was forced to create the temporary table by hand. Then the code shown in "For example (1)" worked without errors.
Thanks.
Sorry for my English.
|
One solution is this:
```
TRY DROP TABLE #tmp; CATCH ALL END TRY;
CREATE TABLE #tmp ...
```
Another solution:
```
IF NOT EXISTS (SELECT 1 FROM (EXECUTE PROCEDURE sp_GetTables (NULL, NULL, 'tmp', 'LOCAL TEMPORARY')) getTables ) THEN
CREATE TABLE #tmp ...
END IF;
```
See also here:
<http://devzone.advantagedatabase.com/forum/questions/5573/determine-if-temp-table-exists>
|
```
--first variant
--if table exist then drop table
IF OBJECT_ID('Tempdb..#tmp') IS NOT NULL
DROP TABLE #tmp
CREATE TABLE #tmp
(
ID INT ,
Definitions VARCHAR(30)
)
-------------------------------------------------------------------------
--second variant
--if table exist then delete all records from table, otherwise create table
IF OBJECT_ID('Tempdb..#tmp') IS NOT NULL
TRUNCATE TABLE #tmp --delete all records from temp table
ELSE
BEGIN
CREATE TABLE #tmp
(
ID INT ,
Definitions VARCHAR(30)
)
END
```
|
Check the existence of a table when an SQL script runs for the first time (Advantage Data Architect)
|
[
"sql",
"advantage-database-server"
] |
I have a table.
This table holds each user's usage period.
An example follows:
```
|user_id|start_at |end_at |
|1 |2014/11/02|2014/11/05|
|2 |2014/11/03|2014/11/07|
|3 |2014/11/08|2014/11/09|
```
I would like to get the maximum concurrent number of users.
In the above case, between 2014/11/03 and 2014/11/05, 2 users were active, so I would like to get this "2".
Is there an easy way to set this up? What do you recommend?
|
The maximum is going to occur at one of the `start_at` times. So, you can do this with a self-join:
```
select u.start_at, count(*) as numconcurrent
from usage u join
usage u2
on u.start_at between u2.start_at and u2.end_at
group by u.start_at
```
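As a quick check, the self-join can be run in SQLite via Python with the question's sample data (dates kept as plain strings, which compare correctly in this yyyy/mm/dd format):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE usage (user_id INT, start_at TEXT, end_at TEXT);
INSERT INTO usage VALUES
  (1, '2014/11/02', '2014/11/05'),
  (2, '2014/11/03', '2014/11/07'),
  (3, '2014/11/08', '2014/11/09');
""")

# count how many periods overlap each start time
rows = cur.execute("""
    SELECT u.start_at, COUNT(*) AS numconcurrent
    FROM usage u
    JOIN usage u2 ON u.start_at BETWEEN u2.start_at AND u2.end_at
    GROUP BY u.start_at
    ORDER BY u.start_at
""").fetchall()
print(rows)  # [('2014/11/02', 1), ('2014/11/03', 2), ('2014/11/08', 1)]
```

The peak (2 concurrent users, at 2014/11/03) is then just the largest count.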
If you want the maximum number, then add:
```
order by count(*) desc
limit 1;
```
```
|
sounds like you may have mis-stated the question. Do you really mean "max" ? or just the number of users between two dates ? If that is so, then
```
Select count(*) from table
where start_at < @end
and end_at > @start
```
|
In SQL, how can you calculate maximum concurrent number of users
|
[
"mysql",
"sql"
] |
In full disclosure, I was kind of thrown into a project this week using SQL and, unfortunately, have very little experience using it up until now outside of very basic querying. I'm going to start studying up, but really could use some help here in the short-term.
In this example, the tables are named `Person_PR` and `Person_CME`.
Both tables
* contain information pertaining to the same person and contain the same person identifiers (record_num and person_num).
* are sorted by record\_num and person\_num, so they are in total alignment.
Tables:
Person_PR
```
record_num, person_num, pr_variable1, pr_variable2
```
Person_CME
```
record_num, person_num, cme_variable1, cme_variable2
```
I'd like for the new table `Person_new` to contain `record_num, person_num, cme_variable1, cme_variable2, pr_variable1 and pr_variable2`.
* The `cme_` variables would come from the `Person_CME` table and
* the `pr_` variables would come from the `Person_PR` table.
New table `Person_new`:
```
record_num, person_num, cme_variable1, cme_variable2, pr_variable1, pr_variable2
```
I have attempted to do this using both insert into and create table as select methods, but am unable to get anything to work properly. I was hoping that someone could get me started in the right direction. Thanks in advance for taking the time to read this and any help you might be able to provide.
|
The syntax to create a new table in sql server is
```
SELECT p.record_num
, p.person_num
, c.cme_variable1
, c.cme_variable2
, p.pr_variable1
, p.pr_variable2
INTO new_table
FROM Person_PR p
INNER JOIN Person_CME c
ON p.person_num = c.person_num
AND p.record_num = c.record_num
```
However, it may make more sense to use a view to present the data you want.
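`SELECT ... INTO` is SQL Server syntax; most other engines spell the same operation `CREATE TABLE ... AS SELECT`. A minimal sketch of that variant in SQLite via Python, with invented sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Person_PR  (record_num INT, person_num INT, pr_variable1 TEXT, pr_variable2 TEXT);
CREATE TABLE Person_CME (record_num INT, person_num INT, cme_variable1 TEXT, cme_variable2 TEXT);
INSERT INTO Person_PR  VALUES (1, 100, 'pr_a', 'pr_b');
INSERT INTO Person_CME VALUES (1, 100, 'cme_a', 'cme_b');

-- SQLite (like Oracle, MySQL, Postgres) uses CREATE TABLE ... AS SELECT
CREATE TABLE Person_new AS
SELECT p.record_num, p.person_num,
       c.cme_variable1, c.cme_variable2,
       p.pr_variable1, p.pr_variable2
FROM Person_PR p
JOIN Person_CME c
  ON p.person_num = c.person_num
 AND p.record_num = c.record_num;
""")

row = cur.execute("SELECT * FROM Person_new").fetchone()
print(row)  # (1, 100, 'cme_a', 'cme_b', 'pr_a', 'pr_b')
```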
|
You'll want to use a stored procedure, not a new table:
```
CREATE PROCEDURE [dbo].GetPersonData
AS
BEGIN
SET NOCOUNT ON;
SELECT cme.record_num ,
cme.person_num ,
cme.cme_variable1 ,
cme.cme_variable2 ,
pr.prvariable1 ,
pr.prvariable2
FROM Person_CME cme
INNER JOIN Person_PR pr ON cme.record_num = pr.record_num
AND cme.person_num = pr.person_num
END
```
You can then use the stored procedure as the record source for your Access form. There is no need to create a new table, which would need to be periodically refreshed and would store duplicate data. You could also use a view if applicable.
|
Creating new table from columns in multiple tables
|
[
"sql",
"sql-server"
] |
I tried UNION but it doesn't work; it says: #1221 - Incorrect usage of UNION and ORDER BY.
```
SELECT members.Name, friends.MemberNumber1
FROM friends
INNER JOIN members ON friends.MemberNumber1= members.MemNo
WHERE friends.MemberNumber2 LIKE 1
ORDER BY members.Name
UNION
SELECT members.Name, friends.MemberNumber2
FROM friends
INNER JOIN members ON friends.MemberNumber2= members.MemNo
WHERE friends.MemberNumber1 LIKE 1
ORDER BY members.Name
```
Both of the queries work great on their own but I am trying to have them in one query because I want to "for each" on the result set in JSTL after that. I also tried creating an alias for friends.MemberNumber1 and friends.MemberNumber2 but it didn't change anything. Maybe there is also a simpler query I'm just not seeing. Or do I absolutely need to make two queries?
Thanks
|
You can do a derived table, then order by.
The column names have to match up.
I know this answer is close to Barmar's, but the aliasName for the second column is important.
You can also look up the difference between UNION and UNION ALL, I use the latter typically.
```
SELECT Name , MemberNumber
FROM (
SELECT members.Name, friends.MemberNumber1 as MemberNumber
FROM friends
INNER JOIN members ON friends.MemberNumber1= members.MemNo
WHERE friends.MemberNumber2 LIKE 1
UNION ALL
SELECT members.Name, friends.MemberNumber2 as MemberNumber
FROM friends
INNER JOIN members ON friends.MemberNumber2= members.MemNo
WHERE friends.MemberNumber1 LIKE 1) AS derived1
ORDER BY derived1.Name
```
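A quick check of the derived-table pattern in SQLite via Python (sample members and friendships invented; member 1's friends come out ordered by name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE members (MemNo INT, Name TEXT);
CREATE TABLE friends (MemberNumber1 INT, MemberNumber2 INT);
INSERT INTO members VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');
INSERT INTO friends VALUES (1, 2), (3, 1);
""")

rows = cur.execute("""
    SELECT Name, MemberNumber
    FROM (
        SELECT m.Name, f.MemberNumber1 AS MemberNumber
        FROM friends f JOIN members m ON f.MemberNumber1 = m.MemNo
        WHERE f.MemberNumber2 = 1
        UNION ALL
        SELECT m.Name, f.MemberNumber2 AS MemberNumber
        FROM friends f JOIN members m ON f.MemberNumber2 = m.MemNo
        WHERE f.MemberNumber1 = 1
    ) AS derived1
    ORDER BY Name
""").fetchall()
print(rows)  # [('Bob', 2), ('Carol', 3)]
```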
|
You need to put the `UNION` into a subquery, then order the entire thing.
```
SELECT *
FROM (
SELECT members.Name, friends.MemberNumber1
FROM friends
INNER JOIN members ON friends.MemberNumber1= members.MemNo
WHERE friends.MemberNumber2 LIKE 1
UNION
SELECT members.Name, friends.MemberNumber2
FROM friends
INNER JOIN members ON friends.MemberNumber2= members.MemNo
WHERE friends.MemberNumber1 LIKE 1) AS u
ORDER BY u.Name
```
|
Merging two SELECTs into one from the same table?
|
[
"mysql",
"sql",
"select",
"union"
] |
To be able to explain the situation, let's say I have a table
```
Product price
Cola 2
Cola null
Fanta 1
Fanta 2
Sprite 2
Sprite null
```
I need to write a query that would return the maximum price per product and if the price is null, would consider it the maximum.
So for this table it should return Cola null, Fanta 2, Sprite null.
I really appreciate your help! Thank you in advance.
|
Standard SQL allows you to specify where `NULL` values should be sorted using the expression `NULLS FIRST` or `NULLS LAST` in the `ORDER BY` statement. This can be combined with a window function to get the desired behaviour:
```
select product, price
from (
select product,
price,
row_number() over (partition by product order by price desc nulls first) as rn
from products
) t
where rn = 1
order by product, price desc nulls first
;
```
With Postgres it is usually faster to use `distinct on` for this kind of queries:
```
select distinct on (product) product, price
from products
order by product, price nulls first
```
|
```
select product, case when sum(case when price is null then 1 else 0 end) > 0
then null
else max(price)
end as price
from your_table
group by product
```
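The CASE/SUM version is portable and can be checked in SQLite via Python, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE products (product TEXT, price INT);
INSERT INTO products VALUES
  ('Cola', 2), ('Cola', NULL),
  ('Fanta', 1), ('Fanta', 2),
  ('Sprite', 2), ('Sprite', NULL);
""")

# if any price in the group is NULL, the group's "max" is NULL
rows = cur.execute("""
    SELECT product,
           CASE WHEN SUM(CASE WHEN price IS NULL THEN 1 ELSE 0 END) > 0
                THEN NULL
                ELSE MAX(price)
           END AS price
    FROM products
    GROUP BY product
    ORDER BY product
""").fetchall()
print(rows)  # [('Cola', None), ('Fanta', 2), ('Sprite', None)]
```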
|
How can I make null values be considered as MAX in SQL?
|
[
"sql",
"postgresql",
"max"
] |
I am trying to create a table with a constraint where the date column should only accept any dates in the month of May or November.
How should I write the constraint like?
|
There are two ways to achieve this: Using `CHECK CONSTRAINT` on the column or a `TRIGGER`.
The `CHECK Constraint` could look like this:
```
create table tbl(
date_col date
check( MONTH(date_col) = 5 OR MONTH(date_col) = 11)
)
```
You have not specified a particular database, so treat this as pseudo-code only.
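As one concrete instance of that caveat: SQLite has no `MONTH()` function, so the same check is written with `strftime` (a sketch via Python, assuming dates are stored as ISO-8601 text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# SQLite has no MONTH(); strftime('%m', ...) extracts the zero-padded month
cur.execute("""
    CREATE TABLE tbl (
        date_col TEXT
        CHECK (strftime('%m', date_col) IN ('05', '11'))
    )
""")

cur.execute("INSERT INTO tbl VALUES ('2024-05-14')")   # May: accepted
cur.execute("INSERT INTO tbl VALUES ('2024-11-01')")   # November: accepted
try:
    cur.execute("INSERT INTO tbl VALUES ('2024-03-01')")  # March: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```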
|
I may be misunderstanding your question. If you're just trying to retrieve values in may and nov, you can do this:
```
SELECT column1, column2, column3
FROM yourTable
WHERE MONTH(yourTimeStamp)=5 or MONTH(yourTimeStamp)=11
```
If you're trying to keep values from other months out of the DB, you should really do that in the software that interfaces with the DB.
I'm no sql guru, but in the rare circumstance where I'm trying to discourage foreign software from messing with a DB in any way other that what I'm allowing, I'll write stored proceedures that take care of the constraints and just publish those proceedures to the software providers.
|
sql query accept only dates of two months
|
[
"sql",
"date"
] |
**Group by Date -Month -Day Hour and Time Query**
I would like to group by Rundate and then by JobDateStamp at yy/mm/dd hh:mm granularity
(no seconds)
**Results**
```
[RunDate] [count]
12/11/2014 21:00 3
13/11/2014 21:00 1
```
3 lots of jobs were run on 12/11/2014 (3 distinct date-and-time values)
1 lot of jobs was run on 13/11/2014 (1 date-and-time value)
```
create table tbl_tasks
(
Rundate datetime,
JobDateStamp datetime,
Runs int
)
insert into tbl_tasks values
('2014-11-13 21:00:46.393','2014-11-13 21:36:27.393',1),
('2014-11-13 21:00:46.393','2014-11-13 21:36:25.393',1),
('2014-11-13 21:00:46.393','2014-11-13 21:36:24.393',1),
('2014-11-12 21:00:47.000','2014-11-13 14:14:46.393',1),
('2014-11-12 21:00:47.000','2014-11-13 14:12:46.393',1),
('2014-11-12 21:00:47.000','2014-11-12 21:04:43.393',1),
('2014-11-12 21:00:47.000','2014-11-12 21:04:41.393',1)
```
**This data is the result of a query; the next step is to group by**
**yy/mm/dd hh:mm**
```
Rundate                  JobDateStamp             Runs
2014-11-13 21:00:46.393  2014-11-13 21:36:27.393  1
2014-11-13 21:00:46.393  2014-11-13 21:36:25.393  1
2014-11-13 21:00:46.393  2014-11-13 21:36:24.393  1
2014-11-12 21:00:47.000  2014-11-13 14:14:46.393  1
2014-11-12 21:00:47.000  2014-11-13 14:12:46.393  1
2014-11-12 21:00:47.000  2014-11-12 21:04:43.393  1
2014-11-12 21:00:47.000  2014-11-12 21:04:41.393  1
```
|
You can do it by truncating the time to minutes and using `count(distinct ...)`.
As you also need to distinguish by JobDateStamp, use it inside the count:
```
SELECT dateadd(minute, datediff(minute, 0, rundate), 0) ,
count( distinct dateadd(minute, datediff(minute, 0, JobDateStamp), 0))
FROM tbl_tasks
GROUP by dateadd(minute, datediff(minute, 0, rundate), 0)
```
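The same idea can be checked in SQLite via Python, where `strftime('%Y-%m-%d %H:%M', ...)` does the minute truncation (a sketch using the question's sample rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tbl_tasks (Rundate TEXT, JobDateStamp TEXT, Runs INT);
INSERT INTO tbl_tasks VALUES
 ('2014-11-13 21:00:46.393', '2014-11-13 21:36:27.393', 1),
 ('2014-11-13 21:00:46.393', '2014-11-13 21:36:25.393', 1),
 ('2014-11-13 21:00:46.393', '2014-11-13 21:36:24.393', 1),
 ('2014-11-12 21:00:47.000', '2014-11-13 14:14:46.393', 1),
 ('2014-11-12 21:00:47.000', '2014-11-13 14:12:46.393', 1),
 ('2014-11-12 21:00:47.000', '2014-11-12 21:04:43.393', 1),
 ('2014-11-12 21:00:47.000', '2014-11-12 21:04:41.393', 1);
""")

# truncate both timestamps to the minute, then count distinct job minutes per run minute
rows = cur.execute("""
    SELECT strftime('%Y-%m-%d %H:%M', Rundate) AS run_minute,
           COUNT(DISTINCT strftime('%Y-%m-%d %H:%M', JobDateStamp)) AS cnt
    FROM tbl_tasks
    GROUP BY run_minute
    ORDER BY run_minute
""").fetchall()
print(rows)  # [('2014-11-12 21:00', 3), ('2014-11-13 21:00', 1)]
```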
|
Just truncate the datetime to the previous minute and group by that value:
```
select
dateadd(minute, datediff(minute, 0, Rundate ), 0) RunDate,
COUNT(*) Count
FROM tbl_tasks
GROUP BY dateadd(minute, datediff(minute, 0, Rundate ), 0)
```
|
Group by Date -Month -Day Hour and Time Query
|
[
"sql",
"sql-server"
] |
I am a SQL noob in need of some help with a specific query using the NYC 2013 Taxi Trips Dataset [located here](https://bigquery.cloud.google.com/table/833682135931:nyctaxi.trip_data).
I want to analyze dropoffs at JFK Airport, but want to build my query so that I can include the next subsequent pickup that a taxi does after dropping off someone at the airport.
This query gets me all the trips at the airport for a given day:
```
SELECT * FROM [833682135931:nyctaxi.trip_data]
WHERE DATE(pickup_datetime) = '2013-05-01'
AND FLOAT(pickup_latitude) < 40.651381
AND FLOAT(pickup_latitude) > 40.640668
AND FLOAT(pickup_longitude) < -73.776283
AND FLOAT(pickup_longitude) > -73.794694
```
I want to join the dataset with itself to add next_pickup_time, next_pickup_lat, and next_pickup_lon values for each row.
To do this, I assume I need a correlated subquery, but don't know where to start building it out because the subquery is based on the outer query.
It needs to search for trips with the same medallion, on the same day, and with a pickup time later than the current airport dropoff, then limit 1... Any help is much appreciated!
|
This should give you all the dropoffs with next pickups
```
SELECT *
FROM
(SELECT medallion,
dropoff_datetime,
dropoff_longitude,
dropoff_latitude,
LEAD(pickup_datetime, 1, "") OVER (PARTITION BY medallion
ORDER BY pickup_datetime) AS next_datetime,
LEAD(pickup_longitude, 1, "0.0") OVER (PARTITION BY medallion
ORDER BY pickup_datetime) AS next_longitude,
LEAD(pickup_latitude, 1, "0.0") OVER (PARTITION BY medallion
ORDER BY pickup_datetime) AS next_latitude
FROM [833682135931:nyctaxi.trip_data]) d
WHERE date(next_datetime)=date(dropoff_datetime)
AND DATE(dropoff_datetime) = '2013-05-01'
AND FLOAT(dropoff_latitude) < 40.651381
AND FLOAT(dropoff_latitude) > 40.640668
AND FLOAT(dropoff_longitude) < -73.776283
AND FLOAT(dropoff_longitude) > -73.794694
```
|
I think that N.N. has the right idea, except that you want LEAD instead of LAG to get the next pickup. For example, this query will produce the next pickup time, lat and long after a pickup at JFK.
```
SELECT
medallion,
pickup_datetime,
pickup_longitude,
pickup_latitude,
LEAD(pickup_datetime, 1, "") OVER (PARTITION BY medallion ORDER BY pickup_datetime) AS next_datetime,
LEAD(pickup_longitude, 1, "0.0") OVER (PARTITION BY medallion ORDER BY pickup_datetime) AS next_longitude,
LEAD(pickup_latitude, 1, "0.0") OVER (PARTITION BY medallion ORDER BY pickup_datetime) AS next_latitude
FROM [833682135931:nyctaxi.trip_data]
WHERE DATE(pickup_datetime) = '2013-05-01'
AND FLOAT(pickup_latitude) < 40.651381
AND FLOAT(pickup_latitude) > 40.640668
AND FLOAT(pickup_longitude) < -73.776283
AND FLOAT(pickup_longitude) > -73.794694;
```
Any time you can avoid a self-join, it's good to do so.
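`LEAD` behaves the same in any engine with window functions; a minimal sketch in SQLite (3.25+) via Python, with invented trips:

```python
import sqlite3  # window functions need SQLite >= 3.25

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE trips (medallion TEXT, pickup_datetime TEXT);
INSERT INTO trips VALUES
  ('A', '2013-05-01 08:00'), ('A', '2013-05-01 09:30'),
  ('B', '2013-05-01 08:15');
""")

# each row sees the next pickup of the same medallion; last row per cab gets NULL
rows = cur.execute("""
    SELECT medallion, pickup_datetime,
           LEAD(pickup_datetime) OVER (
               PARTITION BY medallion ORDER BY pickup_datetime
           ) AS next_pickup
    FROM trips
    ORDER BY medallion, pickup_datetime
""").fetchall()
print(rows)
```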
|
Self-Join with Subquery in Google BigQuery
|
[
"sql",
"google-bigquery"
] |
This question has been asked before, but they all referenced to single and not composite keys, and the solutions don't seem to work for me. Basically, please consider the following table:
```
CREATE TABLE IF NOT EXISTS `my_answers` (
`id` int(11) NOT NULL auto_increment,
`question_id` int(11) NOT NULL default 0,
`user_id` int(11) NOT NULL default 0,
PRIMARY KEY (`id`),
UNIQUE KEY (`question_id`, `user_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```
How do I remove the unique key made of question_id and user_id in this case? I have tried the following:
```
ALTER TABLE my_answers DROP INDEX `UNIQUE`;
```
and
DROP INDEX `UNIQUE` ON my_answers;
Neither worked; both threw the following error: "#1091 - Can't DROP 'UNIQUE'; check that column/key exists"
Any help is greatly appreciated, thanks!
|
If you do not specify a name for the unique key (as with the composite key here), the first column's name is used as the key name by default.
Here is the example
```
mysql> CREATE TABLE IF NOT EXISTS `my_answers` (
-> `id` int(11) NOT NULL auto_increment,
-> `question_id` int(11) NOT NULL default 0,
-> `user_id` int(11) NOT NULL default 0,
-> PRIMARY KEY (`id`),
-> UNIQUE KEY (`question_id`, `user_id`)
-> ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.08 sec)
```
If you now run `show create table` you can see something as
```
mysql> show create table my_answers ;
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| my_answers | CREATE TABLE `my_answers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`question_id` int(11) NOT NULL DEFAULT '0',
`user_id` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `question_id` (`question_id`,`user_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 |
```
Here you can see that the key name is the first column in the composite key, `question_id`, and you can drop it as follows:
```
mysql> alter table `my_answers` drop INDEX question_id ;
Query OK, 0 rows affected (0.09 sec)
```
|
Try the following command:
```
show index from `my_answers`;
```
then inspect the key name of your index and drop it by its name.
|
How to drop composite UNIQUE KEY in MySQL?
|
[
"mysql",
"sql"
] |
Before a `select SOMETHING into v_something`, I would like to know if my query returns a row.
Here is a working method, but it costs two `select`s if the row exists:
```
select count(1) into isRowExists from PERSON where CONDITION='Something';
if (isRowExists > 0) then
select NAME into v_name from PERSON where CONDITION='Something';
else
raise name_not_found;
end if;
select count(1) into isRowExists from CAR where CONDITION='Something';
if (isRowExists > 0) then
select MODEL into v_model from CAR where CONDITION='Something';
else
raise model_not_found;
end if;
```
Or something like this:
```
select NAME into v_name from PERSON where CONDITION='Something';
select MODEL into v_model from CAR where CONDITION='Something';
exception
when no_data_found then
--do_something
```
But with this method, I don't know if the problem came from `PERSON` or `CAR`...
Is there any other solution? Something like passing a parameter to an `exception`?
|
You could do:
```
BEGIN
BEGIN
select NAME into v_name from PERSON where CONDITION='Something';
exception
when no_data_found then
--do_something
END;
BEGIN
select MODEL into v_model from CAR where CONDITION='Something';
exception
when no_data_found then
--do_something
END;
END;
/
```
|
For the second method, you can just wrap each in a `begin`/`end` block:
```
begin
select NAME into v_name from PERSON where CONDITION='Something';
exception
when no_data_found then
--do_something
end;
begin
select MODEL into v_model from CAR where CONDITION='Something';
exception
when no_data_found then
--do_something
end;
```
|
Fastest way to check if a row exists
|
[
"sql",
"oracle",
"plsql"
] |
The following query executes as expected, giving me the expected output given the fields being queried
```
SELECT 1 as SEQ, TERM_DESC as TERM, PRIMARY_COLLEGE_DESC as COLLEGE, LEVEL_GROUPING_CODE as LEVEL_CODE, LEVEL_GROUPING_DESC as LEVEL_DESC
FROM SECopy as SE
```
However, when I add the SUM() function on a field, I get the following error regarding TERM_DESC, which isn't even related to ID_COUNT, the field the SUM() function is applied to.
```
SELECT 1 as SEQ, TERM_DESC as TERM, PRIMARY_COLLEGE_DESC as COLLEGE, LEVEL_GROUPING_CODE as LEVEL_CODE, LEVEL_GROUPING_DESC as LEVEL_DESC, SUM(ID_COUNT) as HEADCOUNT
from SECopy as se
```
I'm getting the following error:
```
Column 'Student_Enrollment_copy.TERM_DESC' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
Why would it fail only after adding the SUM() function?
|
1. An alias name for the table is not needed
2. You should include every column in the GROUP BY clause except those inside an aggregate function
> ```
> SELECT 1 as SEQ,
> TERM_DESC as TERM,
> PRIMARY_COLLEGE_DESC as COLLEGE,
> LEVEL_GROUPING_CODE as LEVEL_CODE,
> SUM(ID_COUNT) as HEADCOUNT
> from SECopy
> GROUP BY TERM_DESC,PRIMARY_COLLEGE_DESC,LEVEL_GROUPING_CODE
> ```
|
You'll need to stick any of your fields that are in your SELECT statement, but are not being aggregated by a SUM(), AVG(), MIN(), etc... into a GROUP BY clause:
```
SELECT
1 AS SEQ,
TERM_DESC AS TERM,
PRIMARY_COLLEGE_DESC AS COLLEGE,
LEVEL_GROUPING_CODE AS LEVEL_CODE,
LEVEL_GROUPING_DESC AS LEVEL_DESC,
SUM(ID_COUNT) AS HEADCOUNT
FROM SECopy AS se
GROUP BY
TERM_DESC, PRIMARY_COLLEGE_DESC, LEVEL_GROUPING_CODE, LEVEL_GROUPING_DESC
```
|
Why is my query failing upon adding SUM function?
|
[
"sql",
"sql-server",
"sql-server-2008"
] |
I've read through a couple similar posts, but not found a solution for this issue:
I have a form with an unbound rich text, multiline textbox named tbxNote. When the textbox is exited, I use VBA code to create an SQL string which I subsequently execute to UPDATE a table field [Note] with the value in the unbound textbox. [Note] is a "Long Text" field (from my understanding, "Long Text" is equivalent to what used to be called a "Memo" field). The backend is an Access database.
Problem is: Only the first 250 characters of what is in tbxNote get stored in the target table field [Note] even though other "Long Text" fields in other tables are accepting values much longer than 250 characters. So, it does not seem to be an issue with the field type or characteristics in the backend table.
Furthermore, if I manually open the target table and paste 350 characters into the same [Note] field in the target table, all 350 characters get stored. But, if I load up that record into the form or put the same 350 characters into the form's tbxNote textbox, only 250 characters are pulled into tbxNote or saved out to [Note].
Is there a way to store more than 250 characters in an unbound textbox using an UPDATE SQL in code?
In case it matters, here's the SQL code that I used to prove only 250 of 350 characters gets saved to the table field [Note]:
```
dbs.Execute "UPDATE tblSupeGenNotes " & _
"SET [NoteDate] = #" & Me.tbxNoteDate & "#, " & _
"[SupeType] = " & Chr(34) & Me.cbxSupeType & Chr(34) & ", " & _
"[SupeAlerts] = " & alrt & ", " & _
"[Note] = " & Chr(34) & String(350, "a") & Chr(34) & " " & _
"WHERE [SupeGenNoteID] = " & Me.tbxSupeGenNoteID & ";"
```
Of course, normally I'd have `Me.tbxNote` instead of `String(350, "a")` but the `String` proves that only 250 of the 350 characters get stored in the [Note] field.
I must be missing something simple, but I cannot figure it out.
|
@HansUp suggested trying a DAO recordset to update the table. That did the trick! Thank you, HansUp. HansUp requested that I post the answer, so here is the code that worked, for anyone else who comes across this thread:
```
Dim dbs As DAO.Database
Dim rsTable As DAO.Recordset
Dim rsQuery As DAO.Recordset
Set dbs = CurrentDb
'Open a dynaset-type Recordset on the table
Set rsTable = dbs.OpenRecordset("tblSupeGenNotes", dbOpenDynaset)
'Open a dynaset-type Recordset using a saved query
Set rsQuery = dbs.OpenRecordset("qrySupeGenNotes", dbOpenDynaset)
'update the values based on the contents of the form controls
rsQuery.Edit
rsQuery![NoteDate] = Me.tbxNoteDate
rsQuery![SupeType] = Me.cbxSupeType
rsQuery![SupeAlerts] = alrt
rsQuery![Note] = Me.tbxNote
rsQuery.Update
'clean up
rsQuery.Close
rsTable.Close
Set rsQuery = Nothing
Set rsTable = Nothing
```
AH! Another bit to the solution is that prior to using the DAO recordset, I was pulling values from the table into a listbox and from the listbox into the form controls (instead of directly into the form controls from the table). Part of the problem (I believe) was that I was then populating the form controls from the selected item in the listbox instead of directly from the table. I believe listboxes will only allow 255 characters (250 characters?) in any single column, so, every time I pulled the value into the textbox from the listbox, the code was pulling only the first 255 characters into the textbox. Then, when the textbox was exited, the code was updating the table with the full textbox text, but when it was pulled back into the form through the listbox, we'd be back down to 255 characters. Of course, when I switched to the DAO approach, I also switched to reading the textbox value directly from the table instead of pulling it from the listbox.
**Moral: Beware of pulling Long Text values through a listbox!**
Thanks to everyone who helped me solve this. Sorry for such a newbie error seeming more complicated than it was.
|
Unfortunately, your posted test code works, but you failed to post the actual update string that fails. A common (and known) problem is that if you include a function (especially aggregates) in your SQL, you are limited to 255 characters.
In fact this can apply if you have function(s) that surrounds the unbound text box and is used in the query.
So such an update should and can work, but introduction functions into this mix can cause problems with the query processor.
If you included the actual update, then the above issue(s) likely could have been determined.
So the workarounds are:
Don't use any "functions" directly in the SQL update string, but build up the string.
So in place of say:
```
Dbs.Execute "update tblTest set Notes = String(350, 'a')"
```
Note how above the string function is INSIDE the sql.
You can thus place the function(s) OUTSIDE of the query and pre-build the string; the query processor never executes, nor even sees, such functions.
So, as per your example, we can change the above to:
```
Dbs.Execute "update tblTest set Notes = '" & String(350, "a") & "'"
```
(this is how/why your posted example works, but likely why your actual code fails). So functions can (and should) be moved out of the actual query string.
Also make sure there is no Format set on the text box, as once again this will truncate the value to 255 characters.
And as noted here the other suggestion is to consider using a recordset update in place of the SQL update.
Using a recordset can often remove issues of delimiters and functions then become a non issue.
So such SQL updates can work beyond 255 characters, but functions need to be evaluated in your VBA code before the query processor gets its hands on the data as per above examples.
And as noted, remove any "format" you have for the text box (property sheet, Format tab).
|
Access SQL to save value in unbound textbox cannot save more than 255 characters
|
[
"",
"sql",
"ms-access",
"vba",
"ms-access-2013",
""
] |
I need to calculate the year a week is assigned to. For example, the 29th of December 2003 (a Monday) was assigned to week one of year 2004 (this is the ISO convention, commonly used in Europe). You can take a look at this with this code:
```
SELECT DATEPART(isowk, '20141229');
```
But now I need an easy way to get the year this week is assigned to. What I currently do is not that elegant:
```
DECLARE @week int, @year int, @date char(8)
--set @date = '20150101'
set @date = '20141229'
SET @week = cast(datepart(isowk, @date) as int)
if @week = 1
begin
if DATEPART(MONTH, @date) = 12
begin
set @year = DATEPART(year, @date) + 1
end
else
begin
set @year = DATEPART(year, @date)
end
end
select @date "DATE", @week "WEEK", @year "YEAR"
```
If anybody knew a more elegant way, that would be nice :-)
|
~~This solution~~ The code in the question does not return the correct value for the date `'1-1-2027'`.
The following will return the correct value for all dates I tested (and I tested quite a few).
```
SELECT YEAR(DATEADD(day, 26 - DATEPART(isoww, '2012-01-01'), '2012-01-01'))
```
As taken from: <https://capens.net/content/sql-year-iso-week>
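Outside SQL, the ISO week-numbering year is available directly in most languages; for instance, Python's `date.isocalendar()` is handy for cross-checking the formula above on the tricky boundary dates:

```python
from datetime import date

# isocalendar() -> (ISO year, ISO week, ISO weekday)
print(tuple(date(2014, 12, 29).isocalendar())[:2])  # (2015, 1): 2014-12-29 falls in week 1 of 2015
print(tuple(date(2027, 1, 1).isocalendar())[:2])    # (2026, 53): 2027-01-01 still belongs to 2026
```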
|
This is the most compact solution I could come up with:
```
CASE
WHEN DATEPART(ISO_WEEK, @Date) > 50 AND MONTH(@Date) = 1 THEN YEAR(@Date) - 1
WHEN DATEPART(ISO_WEEK, @Date) = 1 AND MONTH(@Date) = 12 THEN YEAR(@Date) + 1
ELSE YEAR(@Date) END
```
Can be used directly inside a SELECT statement. Or you could consider creating a user-defined function that takes the @Date parameter as input and outputs the result of the case statement.
|
SQL Get "ISO Year" for ISO Week
|
[
"sql",
"sql-server",
"t-sql",
"date"
] |
Can't figure out how to get this result:
```
ProductID Variantno
53121 5197, 5198,5199
```
From this data collection.
```
ProductID Variantno
53121 5197
53121 5198
53121 5199
```
Tried with group by but no good result, total sql noob...
|
Try this..
```
SELECT
ProductID,
GROUP_CONCAT(Variantno)
FROM tbl
GROUP BY ProductID
```
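SQLite ships the same `group_concat` aggregate, so the query can be checked via Python (note that SQLite does not guarantee the concatenation order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tbl (ProductID INT, Variantno INT);
INSERT INTO tbl VALUES (53121, 5197), (53121, 5198), (53121, 5199);
""")

# one output row per ProductID, variants joined with commas
pid, variants = cur.execute("""
    SELECT ProductID, GROUP_CONCAT(Variantno)
    FROM tbl
    GROUP BY ProductID
""").fetchone()
print(pid, variants)
```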
|
```
select ProductID , group_concat(Variantno)
from table
group by ProductID
```
|
Select multiple values from multiple rows
|
[
"mysql",
"sql"
] |
In Oracle, will the query below work for getting the 3rd highest salary?
```
select empname, salary
from (select empname,salary from employee order by salary desc)
where rownum==3
```
I have seen answers in other threads but they seem to be complicated. If the above query works, then it is a simple solution for Oracle.
What would the query be for SQL Server?
I don't have Oracle or SQL Server available to try out these queries.
Please let me know.
|
I did some exploration and found the link below, which explains the difference between `row_number()`, `rank()` and `dense_rank()`. From it I found that `dense_rank()` is the right fit for this question.
<http://www.dwbiconcepts.com/tutorial/24-interview-questions/190-top-20-sql-interview-questions-with-answers.html>
`row_number()` does not give the expected results if two or more employees have the same salary.
The query below works for both Oracle and SQL Server when tried on <http://sqlfiddle.com/>. It gives you the list of employees with the third highest salary:
```
SELECT empname, salary FROM (SELECT empname, salary, dense_rank() over(order by salary desc) dense_rank_by_sal
FROM EMPLOYEE ) A where dense_rank_by_sal=3;
```
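`dense_rank()` behaves the same in SQLite (3.25+), which makes the tie-handling easy to check via Python (invented salaries; the two 90s share rank 2, so 80 is the 3rd highest):

```python
import sqlite3  # DENSE_RANK() needs SQLite >= 3.25

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE employee (empname TEXT, salary INT);
INSERT INTO employee VALUES
  ('Ann', 100), ('Bob', 90), ('Cid', 90), ('Dee', 80), ('Eve', 70);
""")

rows = cur.execute("""
    SELECT empname, salary
    FROM (SELECT empname, salary,
                 DENSE_RANK() OVER (ORDER BY salary DESC) AS dense_rank_by_sal
          FROM employee) a
    WHERE dense_rank_by_sal = 3
""").fetchall()
print(rows)  # [('Dee', 80)]
```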
|
Use `Dense_Rank()` ranking function in `SQL SERVER` to find the 3rd highest salary.
`Row_number()` returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition.The `ORDER BY` clause determines the sequence in which the rows are assigned their unique `ROW_NUMBER`
```
SELECT empname,
salary
FROM (SELECT Dense_Rank() OVER(ORDER BY salary DESC) rn,
empname,
salary
FROM employee) A
WHERE rn = 3
```
|
SQL query in Oracle and SQL server to get 3rd highest salary
|
[
"sql",
"sql-server",
"oracle"
] |
I want to retrieve distinct rows from each column in this SQL table.
My table
```
1 Apple
2 Banana
3 Apple
2 Apple
1 Orange
```
I want the result like this:
```
1 Apple
2 Banana
3 Orange
```
Please help me with this
|
Please try it
```
select *,row=rank() over(order by name) from (SELECT distinct name FROM abc) as cte
```
or
```
with cte as
(
SELECT distinct name FROM abc
)
select *,row=rank() over(order by name) from cte
```
Output
```
| row | Name |
|-----------|----------|
| 1 | Apple |
| 2 | Banana |
| 3 | Orange |
```
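The same pattern works in SQLite (3.25+) via Python, with the question's sample rows (a sketch; the alias `rn` is used instead of `row`, which is a keyword in some engines):

```python
import sqlite3  # ROW_NUMBER() needs SQLite >= 3.25

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE abc (id INT, name TEXT);
INSERT INTO abc VALUES (1, 'Apple'), (2, 'Banana'), (3, 'Apple'), (2, 'Apple'), (1, 'Orange');
""")

# number the distinct names in alphabetical order
rows = cur.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY name) AS rn, name
    FROM (SELECT DISTINCT name FROM abc)
    ORDER BY rn
""").fetchall()
print(rows)  # [(1, 'Apple'), (2, 'Banana'), (3, 'Orange')]
```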
|
You can get the distinct names by doing:
```
select distinct name
from table t;
```
You can add the first column by doing:
```
select row_number() over (order by name) as id, name
from (select distinct name
from table t
) t;
```
Most databases support the ANSI standard row number. You haven't tagged this with the database, so that is the most general solution.
EDIT:
Oh, you want two columns each with values. I would approach this as a `full outer join`:
```
select nu.num, na.name
from (select num, row_number() over (order by num) as seqnum
from table
group by num
) nu full outer join
(select name, row_number() over (order by name) as seqnum
from table t
group by name
) na
on nu.seqnum = na.seqnum;
```
Each subquery enumerates the values in each column. The `full outer join` makes sure that you have values even when they are missing on one side or the other.
|
How to select distinct records from individual column in sql table
|
[
"",
"sql",
"datatable",
"distinct",
""
] |
I have a table of winners for a prize draw, where each winner has earned a number of points over the year. There are 1300 registered users, with points varying between 50 and 43,000. I need to be able to select a random winner, which is straightforward, but the challenge I am having is building the logic where each point counts as an entry ticket into the prize draw. Would appreciate any help.
John
|
Your script would look something similar to this:
Script 1 :
```
DECLARE @Name varchar(100),
@Points int,
@i int
DECLARE Cursor1 CURSOR FOR SELECT Name, Points FROM Table1
OPEN Cursor1
FETCH NEXT FROM Cursor1
INTO @Name, @Points
WHILE @@FETCH_STATUS = 0
BEGIN
SET @i = 0
WHILE @i < @Points
BEGIN
INSERT INTO Table2 (Name)
VALUES (@Name)
SET @i = @i + 1
END
FETCH NEXT FROM Cursor1 INTO @Name, @Points
END
CLOSE Cursor1
DEALLOCATE Cursor1
```
I have created a table (Table1) with only a Name and Points column (varchar(100) and int), and a cursor to loop through all the records within Table1. For each record, the inner loop inserts one row into another table (Table2) per point.
This imports the Name a number of times equal to the Points column.
Script 2 :
```
DECLARE @Name varchar(100),
@Points int,
@i int,
@Count int
CREATE TABLE #temptable(
UserEmailID nvarchar(200),
Points int)
DECLARE Cursor1 CURSOR FOR SELECT UserEmailID, Points FROM Table1_TEST
OPEN Cursor1
FETCH NEXT FROM Cursor1
INTO @Name, @Points
WHILE @@FETCH_STATUS = 0
BEGIN
SET @i = 0
WHILE @i < @Points
BEGIN
INSERT INTO #temptable (UserEmailID, Points)
VALUES (@Name, @Points)
SET @i = @i + 1
END
FETCH NEXT FROM Cursor1 INTO @Name, @Points
END
CLOSE Cursor1
DEALLOCATE Cursor1
SELECT * FROM #temptable
DROP TABLE #temptable
```
In Script2 I have imported the result into a TEMP table as requested.
The script now runs through each record within your Table1 and imports the individual's UserEmailID and Points into the TEMP table as many times as there are Points in Table1.
So if John has a total of 3 points, and Sarah 2, the script will import John's UserEmailID 3 times into the TEMP table and Sarah's 2 times.
If you apply the random selector on the TEMP table, it will then randomly select an individual.
John would obviously stand a better chance to win because he has 3 records in the TEMP table whereas Sarah only has 2.
Suppose John's UserEmailID is 1 and Sarah's is 2:
The OUTPUT of TEMP table would then be:
```
UserEmailID | Points
1 | 3
1 | 3
1 | 3
2 | 2
2 | 2
```
Please let me know if you need any clarity.
Hope this helps.
|
So you want a winner with 1000 points have double the chances as another with only 500 points.
Sort the winners by whatever order and create a running total for the points:
```
id points
winner1 100
winner2 50
winner3 150
```
gives:
```
id points from to
winner1 100 1 100
winner2 50 101 150
winner3 150 151 300
```
Then compare with a random number from 1 to sum(points), in the example a number between 1 and 300. Find the winner with that number range and you're done.
```
select winpoints.id_winner
from
(
select
id as id_winner,
coalesce(sum(points) over(order by id rows between unbounded preceding and 1 preceding), 0) + 1 as from_points,
sum(points) over(order by id rows between unbounded preceding and current row) as to_points
from winners
) winpoints
where (select floor(rand() * (select sum(points) from winners)) + 1)
between winpoints.from_points and winpoints.to_points;
```
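As a small sketch of the running-total ticket ranges (sample data invented; using Python's sqlite3, which supports these window functions from SQLite 3.25+ — `rand()` in the answer is MySQL-specific, so here the random ticket is drawn in Python instead):

```python
import sqlite3
import random

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE winners (id TEXT, points INTEGER);
INSERT INTO winners VALUES ('winner1', 100), ('winner2', 50), ('winner3', 150);
""")
# Each winner's ticket range: (running total before this row) + 1 .. running total.
ranges = conn.execute("""
SELECT id,
       COALESCE(SUM(points) OVER (ORDER BY id
           ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) + 1 AS from_points,
       SUM(points) OVER (ORDER BY id
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS to_points
FROM winners
""").fetchall()
print(ranges)  # [('winner1', 1, 100), ('winner2', 101, 150), ('winner3', 151, 300)]

ticket = random.randint(1, ranges[-1][2])  # 1 .. sum(points)
winner = next(i for i, lo, hi in ranges if lo <= ticket <= hi)
```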
|
Create a random selection weighted on number of points, SQL
|
[
"",
"sql",
"sql-server",
""
] |
table master
```
| id | user |
| 1 | user1 |
| 3 | user2 |
| 4 | user3 |
```
table ver
```
| id_v | value |
| 2 | value1 |
| 3 | value2 |
```
I must delete id\_v 2 from table ver.
I have this query:
```
DELETE FROM `ver` WHERE id_v in (SELECT v.id_v
FROM master m
RIGHT JOIN ver v
ON m.id = v.id_v
WHERE m.id IS NULL)
```
but this is the error
> You can't specify target table 'ver' for update in FROM clause
|
It could be done using the following
```
delete
v
from ver v
left join master m on m.id = v.id_v
where m.id is null
```
**[Demo](http://www.sqlfiddle.com/#!9/40155/2)**
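For databases without MySQL's multi-table `DELETE ... JOIN` syntax, the equivalent `NOT EXISTS` form deletes the same orphaned rows; a quick check with Python's sqlite3 against the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE master (id INTEGER, user TEXT);
INSERT INTO master VALUES (1, 'user1'), (3, 'user2'), (4, 'user3');
CREATE TABLE ver (id_v INTEGER, value TEXT);
INSERT INTO ver VALUES (2, 'value1'), (3, 'value2');
""")
# Delete ver rows that have no matching master row (here: id_v = 2).
conn.execute("""
DELETE FROM ver
WHERE NOT EXISTS (SELECT 1 FROM master m WHERE m.id = ver.id_v)
""")
remaining = conn.execute("SELECT id_v, value FROM ver").fetchall()
print(remaining)  # [(3, 'value2')]
```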
|
Try this
```
DELETE FROM `ver` WHERE id_v in ( SELECT id_v from
(SELECT v.id_v FROM master m
RIGHT JOIN ver v
ON m.id = v.id_v
WHERE m.id IS NULL) as temp )
```
|
SQL select, join and delete together
|
[
"",
"mysql",
"sql",
""
] |
I have the following query:
```
$this->getDoctrine()->getManager();
$sql = "SELECT shop.*
FROM instagram_shop shop
LEFT JOIN instagram_shop_picture picture
ON picture.shop_id = shop.id
WHERE COUNT(picture) = 0
AND shop.isLocked = 0
AND shop.expirydate IS NOT NULL
AND shop.expirydate > now()
AND shop.deletedAt IS NULL
";
```
Wanted to get all shops that has 0 pictures in it. But this returns a sql error of:
```
General error: 1111 Invalid use of group function
```
Why is this?
|
```
SELECT shop.id
FROM instagram_shop shop
LEFT JOIN instagram_shop_picture picture ON picture.shop_id = shop.id
WHERE shop.isLocked = 0
AND shop.expirydate IS NOT NULL
AND shop.expirydate > now()
AND shop.deletedAt IS NULL
GROUP BY shop.id
HAVING COUNT(picture.shop_id) = 0
```
|
You can do what you want *without* a `group by` or `count()`. Just find the places where the `left join` fails:
```
SELECT shop.*
FROM instagram_shop shop LEFT JOIN
instagram_shop_picture picture
ON picture.shop_id = shop.id
WHERE picture.shop_id is NULL AND
shop.isLocked = 0 AND
shop.expirydate IS NOT NULL AND
shop.expirydate > now() AND
shop.deletedAt IS NULL
```
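A minimal sketch of this anti-join ("where the left join fails") pattern, using Python's sqlite3 with invented IDs and with the question's extra shop filters omitted for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instagram_shop (id INTEGER PRIMARY KEY);
CREATE TABLE instagram_shop_picture (id INTEGER PRIMARY KEY, shop_id INTEGER);
INSERT INTO instagram_shop (id) VALUES (1), (2), (3);
INSERT INTO instagram_shop_picture VALUES (10, 1), (11, 1), (12, 3);
""")
# Shops with no pictures: the LEFT JOIN leaves picture.shop_id NULL.
rows = conn.execute("""
SELECT shop.id
FROM instagram_shop shop
LEFT JOIN instagram_shop_picture picture ON picture.shop_id = shop.id
WHERE picture.shop_id IS NULL
""").fetchall()
print(rows)  # [(2,)] -- only shop 2 has no pictures
```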
|
one to many relationship query with count
|
[
"",
"mysql",
"sql",
""
] |
I have a simple table for the events log:
```
uid | event_id | event_data
----+----------+------------
1 | 1 | whatever
2 | 2 |
1 | 3 |
4 | 4 |
4 5 |
```
If I need the latest event for a given user, that's obvious:
```
SELECT * FROM events WHERE uid=needed_uid ORDER BY event_id DESC LIMIT 1
```
However, suppose I need the latest events for each user id in an array. For example, for the table above and users `{1, 4}` I'd expect events `{3, 5}`. Is that possible in plain SQL without resorting to a pgSQL loop?
|
A Postgres specific solution is to use `distinct on` which is usually faster than the solution using a window function:
```
select distinct on (uid) uid, event_id, event_data
from events
where uid in (1,4)
order by uid, event_id DESC
```
|
try below query :
```
select DesiredColumnList
from
(
select *, row_number() over ( partition by uid order by event_id desc) rn
from yourtable
) t
where rn = 1
```
`Row_Number` will assign a unique number, starting from 1, to each row ordered by `event_id desc`, and `partition by` ensures that the numbering restarts for each group of `uid`.
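A quick check of the `row_number()` variant against the question's sample data (shown here via Python's sqlite3, which supports window functions from SQLite 3.25+; the same query works in Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (uid INTEGER, event_id INTEGER, event_data TEXT);
INSERT INTO events VALUES (1, 1, 'whatever'), (2, 2, NULL), (1, 3, NULL),
                          (4, 4, NULL), (4, 5, NULL);
""")
# Keep only the newest event (rn = 1) per requested uid.
rows = conn.execute("""
SELECT uid, event_id FROM (
    SELECT uid, event_id,
           ROW_NUMBER() OVER (PARTITION BY uid ORDER BY event_id DESC) AS rn
    FROM events
    WHERE uid IN (1, 4)
) t
WHERE rn = 1
ORDER BY uid
""").fetchall()
print(rows)  # [(1, 3), (4, 5)] -- the expected events {3, 5}
```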
|
Postgres: getting latest rows for an array of keys
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
I have found no other specific problem like this on here.
This is a single-table query in MySQL. I have a 'book' table that holds the 'title' and 'price' columns. The problem is to find the minimum book price and display the titles with that minimum value. My problem is that I can only return one title, but there are multiple titles with that minimum value. The title my code returns isn't even a valid one, but simply the first title in the table.
My current code is:
```
SELECT TITLE,
MIN(PRICE) AS "PRICE"
FROM book;
```
|
You can find the title(s) of the books with the lowest price with a sub query.
```
SELECT TITLE
FROM book
WHERE PRICE = (SELECT MIN(PRICE)
FROM book);
```
The query in your question is not valid standard SQL and would be rejected by most other RDBMSs.
MySQL does not raise an error ([yet - though this is coming](http://rpbouman.blogspot.co.uk/2014/09/mysql-575-group-by-respects-functional.html)) but also doesn't guarantee any particular semantics of the result.
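A small check (via Python's sqlite3, with invented book data) that the `WHERE price = (SELECT MIN(price) ...)` form returns every title sharing the minimum price:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (title TEXT, price REAL);
INSERT INTO book VALUES ('Alpha', 5.0), ('Beta', 3.0), ('Gamma', 3.0), ('Delta', 9.0);
""")
# Both books priced at the minimum (3.0) come back.
rows = conn.execute("""
SELECT title FROM book
WHERE price = (SELECT MIN(price) FROM book)
ORDER BY title
""").fetchall()
print(rows)  # [('Beta',), ('Gamma',)]
```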
|
You can calculate the minimum price in one subquery, and then find all the books whose price matches that in an outer query.
```
SELECT a.title, a.price
FROM book AS a
JOIN (SELECT MIN(price) AS minprice
FROM book) AS b
ON a.price = b.minprice
```
Another way to write it is:
```
SELECT title, price
FROM book
WHERE price = (SELECT MIN(price) FROM book)
```
|
Select Multiple Book Titles that share the MIN(PRICE) value | MySQL
|
[
"",
"mysql",
"sql",
"min",
""
] |
**Description :**
I have a table of users as following
students
```
id ------ name
1 ------ John
2 ------ Sarah
3 ------ Peter
```
and friends table as
Buddies
```
person1 ------ person2
1 ------ 2
2 ------ 3
```
Now I want all the friends of my friends who are not already my friends.
E.g. in "people you may know" we see people that are friends of our friends but are not our friends.
I have successfully written the query to find all my friends, but I am not sure how to find "friends of my friends" in one query.
Is there a way to do it in one query ....
I am finding friends like this
```
select * from `students` join `buddy_circle` on
'$reg_no' = `person_1` and `stregno` = `person_2` or
'$reg_no' = `person_2` and `stregno` = `person_1`
```
where stregno is the id of student and buddy\_circle is the friends table and `$regno` is the id of the user
|
Maybe this?
I have tested it only with your example data.
```
select name from students where id in (
select p2 from buddies where p1 in (
select p2 from buddies where p1=[serach_for_id]));
```
|
Join twice to get friends of friends:
```
select distinct name
from buddy_circle a
join buddy_circle b on b.p1 = a.p2
join students on id = b.p2
where a.p1 = $reg_no
```
Note the order of tables in the query is such that the where clause applies to the first named table and joined tables flow on from that, which gives maximum performance.
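A sketch of the double self-join using the question's sample data (via Python's sqlite3; note the answers abbreviate `person1`/`person2` to `p1`/`p2`, spelled out here, and friendship rows are treated as directed person1 → person2, as in the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (id INTEGER, name TEXT);
INSERT INTO students VALUES (1, 'John'), (2, 'Sarah'), (3, 'Peter');
CREATE TABLE buddies (person1 INTEGER, person2 INTEGER);
INSERT INTO buddies VALUES (1, 2), (2, 3);
""")
# First hop: a finds John's friends; second hop: b finds their friends.
rows = conn.execute("""
SELECT DISTINCT s.name
FROM buddies a
JOIN buddies b ON b.person1 = a.person2
JOIN students s ON s.id = b.person2
WHERE a.person1 = ?
""", (1,)).fetchall()
print(rows)  # [('Peter',)] -- John's friend-of-a-friend
```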
|
how to find friends of friends in mysql
|
[
"",
"mysql",
"sql",
"hierarchical-data",
"recursive-query",
""
] |
## My Issue
I am trying to select one row from multiple tables based on parameters, but my limited knowledge of SQL joining is holding me back. Could somebody possibly point me in the right direction?
Consider these table structures:
```
+-----------------------+ +---------------------+
| Customers | | Sellers |
+-------------+---------+ +-----------+---------+
| Customer_ID | Warning | | Seller_ID | Warning |
+-------------+---------+ +-----------+---------+
| 00001 | Test 1 | | 00008 | Testing |
| 00002 | Test 2 | | 00010 | Testing |
+-------------+---------+ +-----------+---------+
```
What I would like to do is one `SELECT` to retrieve only one row, and in this row will be the 'Warning' field for each of the tables based on the X\_ID field.
## Desired Results
So, if I submitted the following information, I would receive the following results:
Example 1:
```
Customer_ID = 00001
Seller_ID = 00008
Results:
+-----------------------------------+
| Customer_Warning | Seller_Warning |
+------------------+----------------+
| Test 1 | Testing |
+------------------+----------------+
```
Example 2:
```
Customer_ID = 00001
Seller_ID = 00200
Results:
+-----------------------------------+
| Customer_Warning | Seller_Warning |
+------------------+----------------+
| Test 1 | NULL |
+------------------+----------------+
```
## What I Have Tried
This is my current code (I am receiving loads of rows):
```
SELECT c.Warning 'Customer_Warning', s.Warning AS 'Seller_Warning'
FROM Customers c,Sellers s
WHERE c.Customer_ID = @Customer_ID
OR s.Seller_ID = @Seller_ID
```
But I have also played around with `UNION`, `UNION ALL` and `JOIN`. Which method should I go for?
|
Since you're not really joining tables together, just selecting a single row from each, you could do this:
```
SELECT
(SELECT Warning
FROM Customers
WHERE Customer_ID = @Customer_ID) AS Customer_Warning,
(SELECT Warning
FROM Sellers
WHERE Seller_ID = @Seller_ID) AS Seller_Warning
```
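A sanity check of this two-scalar-subquery shape (via Python's sqlite3, with bound parameters standing in for the `@Customer_ID` / `@Seller_ID` variables; data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (Customer_ID TEXT, Warning TEXT);
INSERT INTO Customers VALUES ('00001', 'Test 1'), ('00002', 'Test 2');
CREATE TABLE Sellers (Seller_ID TEXT, Warning TEXT);
INSERT INTO Sellers VALUES ('00008', 'Testing'), ('00010', 'Testing');
""")
# Each scalar subquery returns its Warning independently; a miss yields NULL.
row = conn.execute("""
SELECT (SELECT Warning FROM Customers WHERE Customer_ID = ?) AS Customer_Warning,
       (SELECT Warning FROM Sellers WHERE Seller_ID = ?) AS Seller_Warning
""", ('00001', '00200')).fetchone()
print(row)  # ('Test 1', None) -- the missing seller yields NULL, as in Example 2
```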
|
The problem is you're getting a cartesian product of rows in each table where *either* column has the value you're looking for.
I think you just want `AND` instead of `OR`:
```
SELECT c.Warning 'Customer_Warning', s.Warning AS 'Seller_Warning'
FROM Customers c
JOIN Sellers s
ON c.Customer_ID = @Customer_ID
AND s.Seller_ID = @Seller_ID
```
If performance isn't good enough you could join two filtered subqueries:
```
SELECT c.Warning 'Customer_Warning', s.Warning AS 'Seller_Warning'
FROM (SELECT Warning FROM Customers WHERE Customer_ID = @Customer_ID) c,
     (SELECT Warning FROM Sellers WHERE Seller_ID = @Seller_ID) s
```
But I suspect SQL will be able to optimize the filtered join just fine.
> it wont return a row if one of the ID's doesnt exist.
Then you want a `FULL OUTER JOIN`:
```
SELECT c.Warning 'Customer_Warning', s.Warning AS 'Seller_Warning'
FROM Customers c
FULL OUTER JOIN Sellers s
ON c.Customer_ID = @Customer_ID
AND s.Seller_ID = @Seller_ID
```
|
UNION or JOIN for SELECT from multiple tables
|
[
"",
"sql",
"sql-server",
"join",
"union",
""
] |
I want to select items from TableB that have Start Date between one of the date ranges in VacationHistory (TableA) table.
Lets say I have VacationHistory datatable (TableA) that has Start and End date values. What I need to do is to compare a date from another datatable (TableB) to be between date ranges in VacationHistory (TableA). So if there are three entries in VacationHistory datatable, I would do something like this:
```
SELECT *
FROM TableB WHERE (
(DateInTableB BETWEEN '2014-10-21' AND '2014-10-27')
OR (DateInTableB BETWEEN '2014-11-05' AND '2014-11-13')
OR (...)
)
```
However, how can I build this query using all date ranges in VacationHistory table in this WHERE clause. I guess there is nothing that I could say `BETWEEN IN (....)`.
Something that I tried is to build a string of the entire where clause:
```
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr + ' OR ' , '') +
('DateInTableB BETWEEN ''' + CAST(StartDate AS NVARCHAR) + ''' AND '
+ '''' + CAST(EndDate AS NVARCHAR) + '''')
FROM VacationHistory
SELECT @listStr
```
With `@listStr` having something like this: `DateInTableB BETWEEN '2014-11-05' AND '2014-11-13' OR DateInTableB BETWEEN '2014-10-21' AND '2014-10-27'`
However, I don't think I can use @listStr in where clause directly, like this:
```
WHERE (@listStr)
```
What could I do instead?
|
If you just want to find the rows in TableB that have a Start Date in any of the date ranges in TableA, you can use a correlated `exists`:
```
select *
from TableB b
where exists (
select 1
from TableA a
where b.StartDate between a.StartDate and a.EndDate
)
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/be3e4/1)
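A sketch of the correlated `EXISTS` against invented vacation ranges (via Python's sqlite3; the dates are ISO-8601 strings, which compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE VacationHistory (StartDate TEXT, EndDate TEXT);
INSERT INTO VacationHistory VALUES ('2014-10-21', '2014-10-27'),
                                   ('2014-11-05', '2014-11-13');
CREATE TABLE TableB (id INTEGER, DateInTableB TEXT);
INSERT INTO TableB VALUES (1, '2014-10-25'), (2, '2014-11-01'), (3, '2014-11-10');
""")
# Keep TableB rows whose date falls inside ANY vacation range.
rows = conn.execute("""
SELECT id FROM TableB b
WHERE EXISTS (SELECT 1 FROM VacationHistory a
              WHERE b.DateInTableB BETWEEN a.StartDate AND a.EndDate)
ORDER BY id
""").fetchall()
print(rows)  # [(1,), (3,)] -- row 2 falls between the two vacation ranges
```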
|
A Simple `INNER JOIN` is what you need.
```
SELECT DISTINCT b.id,
b.StartDate
FROM TableB b
JOIN TableA a
ON b.StartDate BETWEEN a.StartDate AND a.EndDate
```
Or
```
SELECT *
FROM TableB
INNER JOIN (SELECT Min(StartDate) StartDate,
Max(EndDate) EndDate
FROM TableA) A
ON DateInTableB BETWEEN A.StartDate AND A.EndDate
```
Still if you want the answer in the way you have tried then you need to have `Dynamic SQL`
```
declare @sql nvarchar(max)
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr + ' OR ' , '') +
('DateInTableB BETWEEN ''' + CAST(StartDate AS NVARCHAR) + ''' AND '
+ '''' + CAST(EndDate AS NVARCHAR) + '''')
FROM VacationHistory
set @sql = 'SELECT *
FROM TableB where '+@listStr+''
exec sp_executesql @sql
```
|
Compare Date from TableB to several date ranges in TableA: Building WHERE statement
|
[
"",
"sql",
"sql-server",
"t-sql",
"date",
"between",
""
] |
**sample\_data**
```
id_type,seq_no,acct_name,_acct#,address
12345,67,jiimm,,167 s.40th st
12345,67,jiimm joe the 3rd,,167 s.40th st
12345,67,jiimm
12345,67,,0981_1,po box 1234
12345,80,Lee,,1234 street ave
12345,80,Lee
12345,80,,588_1,109 road st
```
**CODE**
```
SELECT `ID`_type,
seq_no,
MAX(`acct_name`) AS acct_name,
MAX(`acct_#`) AS acct_#,
address
FROM `test_table`
GROUP BY `ID`_type,
seq_no;
```
---
I want to merge rows based on by id\_type and seq\_no. I am using max to merge the rows but I am overwriting any exisiting addresses and acct\_names due to the MAX acct#.
**my results**
```
id_type,seq_no,acct_name,_acct#,address
12345,67,jiimm joe the 3rd,0981_1,167 s.40th st
12345,80,Lee,588_1,109 road st
```
* Losing po box 1234 for 67-
* Losing 1234 street ave for 80, losing jiimm-
**desired results**
```
12345,80,Lee,588_1,109 road st
12345,80,Lee,588_1,1234 street ave
12345,67,jiimm,0981_1,167 s.40th st
12345,67,jiimm,0981_1,po box 1234
12345,67,jiimm joe the 3rd,0981_1,167 s.40th st
```
|
This gives you what you're looking for, but read my comment / question underneath your question above. There is an ambiguous "pick one row from many" situation that needs clarification. In that ambiguous situation, you imply that rules call for delivering the minimum non-blank account name, which this code does, but you can see how it requires treating the account name one way, and treating the acct (#) and address a different way. I think you're headed for an application that delivers results based on hard to remember rules. Funky rules like that end up getting reported as defects, even if you publish said processing rules. Hence, you may want to enhance the process upstream that captures this data to deliver more disciplined data.
[SQLFIDDLE link](http://sqlfiddle.com/#!2/d511d0/12/0) - In short the inner query populates missing values, then the outer result set delivers the distinct rows. I tested this with blank values not null. I did make a quick effort to add the code to handle nulls, but I didn't test it using nulls, so I suggest testing it as such if that is what production will use.
```
select distinct * from (
select d.id_type, d.seq_no
,coalesce( nullif( acct_name, ''), min_acct_name ) as merged_acct_name
,coalesce( nullif( acct, ''), max_acct ) as merged_acct
,coalesce( nullif( address, ''), max_address ) as merged_address
from test_table d
left join ( select id_type, seq_no
,max( acct ) as max_acct
,max( address ) as max_address
from test_table
group by id_type, seq_no
) as max_
on max_.id_type = d.id_type and max_.seq_no = d.seq_no
and ( coalesce( d.acct,'' ) = ''
or coalesce( d.address,'' ) = '' )
left join ( select id_type, seq_no
,min( acct_name ) as min_acct_name
from test_table
where coalesce( acct_name, '' ) <> ''
group by id_type, seq_no
) as min_
on min_.id_type = d.id_type and min_.seq_no = d.seq_no
and coalesce( d.acct_name,'' ) = ''
) as t
order by id_type, seq_no desc, merged_acct_name, merged_acct, merged_address
```
|
```
SELECT
D1.id_type
, D1.seq_no
, IFNULL(D1.acct_name, (SELECT MIN(acct_name) FROM data D WHERE D.id_type = D1.id_type AND D.seq_no = D1.seq_no)) t
, IFNULL(D1.acct_no, (SELECT MAX(acct_no) FROM data D WHERE D.id_type = D1.id_type AND D.seq_no = D1.seq_no)) s
, D1.address
FROM data D1
WHERE D1.address IS NOT NULL
ORDER BY id_type, seq_no DESC, acct_name
;
```
returns
```
| ID_TYPE | SEQ_NO | T | S | ADDRESS |
|---------|--------|-------------------|--------|-----------------|
| 12345 | 80 | Lee | 588_1 | 109 road st |
| 12345 | 80 | Lee | 588_1 | 1234 street ave |
| 12345 | 67 | jiimm | 0981_1 | po box 1234 |
| 12345 | 67 | jiimm | 0981_1 | 167 s.40th st |
| 12345 | 67 | jiimm joe the 3rd | 0981_1 | 167 s.40th st |
```
Which is consistent with your expected output except for the order of the third and fourth rows. However, for larger amounts of data, `MAX` and `MIN` will be of increasingly limited help.
[SQL Fiddle](http://sqlfiddle.com/#!2/042cb/20/0)
|
combining rows in same table sql
|
[
"",
"mysql",
"sql",
"max",
""
] |
We develop SQL scripts in Oracle SQL Developer, and in production the DBA executes the same scripts in Oracle SQL\*Plus.
The problem is if we insert a line break:
```
Create table test(
text number(10)
```
The line break is causing the scripts to fail. Do you know how to fix this in SQL\*Plus?
We also generate the scripts using Liquibase, so are there any ways to address this there?
|
Try adding:
```
set sqlblanklines on
```
to the start of your sql script.
|
It's not the line break, it's the empty line that causes the problem:
```
Create table test(
-- here you have it
text number(10)
```
The solution is simple - delete the empty line.
|
line breaks in sqlplus
|
[
"",
"sql",
"oracle",
"oracle11g",
"sqlplus",
""
] |
I'm guessing there is a quick and easy solution to this but I've tried a number of different methods and keep hitting a stone wall. Tried searching a LOT both here and elsewhere but I don't think I am using the right words to clarify what I want to know (as per my confusing subject!). My apologies if this is a duplicate or similar.
So, to explain the problem (obfuscated as the actual data is somewhat sensitive), say you have a table of clients, a table of meetings that you have with those clients (the meetings may have multiple clients tied to each), and another table with fees charged to these clients during the meetings. There may be single or multiple fees charged at a single meeting (i.e. consulting fee, new contract fee, purchasing fee, etc.).
What I'm trying to find is any instances where the system may have erroneously charged multiple copies of the same type of fee to a client, i.e. a consulting fee that can only ever be charged once per meeting.
The way this would be identified is by finding fees of that type (let's say CONS for consulting) and then checking if there are multiple distinct **fee\_id**s of that type tied to a single **meeting\_id**. It is possible that you might have 10 rows for the same **fee\_type** within the same meeting (say, for 10 clients attending the same meeting) but they should all be tied to the same **fee\_id**.
The solutions I've tried seem to either count these as 10 entries (where it should just count them as one) or counts rows individually and doesn't group them all in to the same meeting, etc.
Here's a simple, rough example of what it'd look like (though this is wrong, as it doesn't group the distinct counting within unique **meeting\_id**s):
```
select c.client_name as "Client"
, m.meeting_id as "Meeting ID"
, m.meeting_date as "Meeting Date"
, f.fee_type as "Fee Type"
, count(distinct
(
case when f.fee_type = 'CONS'
then f.fee_id
else null
end
)
) as "Consultation Fees Charged"
from client c
inner join meetings m
on c.client_id = m.client_id
inner join fees f
on m.meeting_id = f.meeting_id
where f.fee_type = 'CONS'
group by c.client_name, m.meeting_id, m.meeting_date
```
I'm sure there's a simple solution and I'm just missing something obvious. Sorry for the mass of text.
|
I'm not 100% sure that I understand what you are looking for. I think it is fees of a certain type applied to a particular meeting for a particular client. If so, your basic query is on the right track, but it needs a `having` clause and some simplification in the `case` (the `case` is redundant with the `where`):
```
select c.client_name as "Client", m.meeting_id as "Meeting ID", m.meeting_date as "Meeting Date",
count(distinct f.fee_id ) as "Consultation Fees Charged"
from client c inner join
meetings m
on c.client_id = m.client_id inner join
fees f
on m.meeting_id = f.meeting_id
where f.fee_type = 'CONS'
group by c.client_name, m.meeting_id, m.meeting_date
having count(*) > 1;
```
|
To find the instances where the system may have erroneously charged multiple copies of the same type of fee to a client, we can use `GROUP BY` and then check total clients in the meeting and total fee\_ids per meeting are matching using `HAVING` clause.
The below query will return all such meetings, where there is discrepancy for `consultation fees`
```
SELECT m.meeting_id as "Meeting ID",
m.meeting_date as "Meeting Date",
count(f.fee_id ) as "Total Fee record count",
count(c.client_name) as "Total Client count"
FROM client c
INNER JOIN meetings m
ON c.client_id = m.client_id
INNER JOIN fees f
on m.meeting_id = f.meeting_id
AND f.fee_type = 'CONS'
group by c.client_name, m.meeting_id, m.meeting_date
having count(f.fee_id) <> count(c.client_name);
```
|
Finding IDs that join with multiple alternate IDs matching criteria
|
[
"",
"sql",
"oracle",
""
] |
A Java datetime (`date.getTime()`) is stored as a string in a MySQL field.
How can we convert this to a SQL date using a SQL query? I am using a MySQL database.
Is there any SQL function available?
For example - This is stored (1416231812348) for today's date in db.
Thanks for suggestions.
|
Java returns the date as a long; to convert it you can use:
```
SELECT FROM_UNIXTIME(event_time) FROM MY_TABLE
```
If you get an error, try the following (after testing, I can see that your data is stored in milliseconds so you need to use this method):
```
SELECT FROM_UNIXTIME(event_time/1000) FROM MY_TABLE
```
(Change event\_time to be the field name in your table and MY\_TABLE to be the table name.)
Here is a [SQLFiddle example](http://sqlfiddle.com/#!2/d41d8/48776) that shows it working.
Here is an answer that gives you formatting options as well:
<http://notsoyellowstickies.blogspot.co.uk/2011/11/converting-long-into-datetime-mysql.html>
|
Most probably you have recorded the value from:
```
System.currentTimeMillis()
```
so:
```
select DATE_FORMAT ( from_unixtime( your_table_field / 1000 ) , '%e %b %Y');
```
you can change the date format as you like.
|
converting java time to sqldate in query
|
[
"",
"mysql",
"sql",
""
] |
I have a column called `Indicator` in a table. It contains `Y`, `N`, `NULL`, or just blank.
What does the following two logic do?
```
coalesce(Indicator, 'N') = 'N'
coalesce(Indicator, 'N') = 'Y'
```
It doesn't seem just to return the rows where `Indicator` equals `N` or `Y`. Is there something else going on?
|
For each condition there's a different answer
For
```
coalesce(Indicator, 'N') = 'N'
```
You get
```
coalesce('N', 'N') = 'N' --> 'N' = 'N' --> True
coalesce('Y', 'N') = 'N' --> 'Y' = 'N' --> False
coalesce(Null, 'N') = 'N' --> 'N' = 'N' --> True
```
and for
```
coalesce(Indicator, 'N') = 'Y'
```
you get
```
coalesce('N', 'N') = 'Y' --> 'N' = 'Y' --> False
coalesce('Y', 'N') = 'Y' --> 'Y' = 'Y' --> True
coalesce(Null, 'N') = 'Y' --> 'N' = 'Y' --> False
```
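A small check of how `COALESCE` treats each `Indicator` value (via Python's sqlite3); note that a blank string is NOT null, so it matches neither condition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Indicator TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [('Y',), ('N',), (None,), ('',)])
matches_n = conn.execute(
    "SELECT Indicator FROM t WHERE COALESCE(Indicator, 'N') = 'N'").fetchall()
matches_y = conn.execute(
    "SELECT Indicator FROM t WHERE COALESCE(Indicator, 'N') = 'Y'").fetchall()
print(matches_n)  # [('N',), (None,)] -- NULL is treated as 'N'
print(matches_y)  # [('Y',)]  -- the blank-string row matches neither query
```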
|
The logic does two things. Functionally, the first expression is equivalent to:
```
(Indicator = 'N' or Indicator is null)
```
In addition, it also prevents an index from being used on `indicator` (in most databases).
For a binary indicator, the use of the index is typically of minor importance. In addition, SQL optimizers are pretty bad at using indexes for `or` conditions. And, they almost never use them when a column is an argument to a function.
|
coalesce in SQL
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I am a beginning SQL programmer - I am getting most things, but not EXISTS.
It looks to me, and looks by the documentation, that an entire EXISTS statement returns a boolean value.
However, I see specific examples where it can be used and returns part of a table as opposed to all or none of it.
```
SELECT DISTINCT PNAME
FROM P
WHERE EXISTS
(
SELECT *
FROM SP Join S ON SP.SNO = S.SNO
WHERE SP.PNO = P.PNO
AND S.STATUS > 25
)
```
This query returns to me one value, the one that meets the criteria (S.Status > 25).
However, with other queries, it seems to return the whole table I am selecting from if even one of the rows in the EXISTS subquery is true.
How does one control this?
|
Subqueries such as with EXISTS can either be correlated or non-correlated.
In your example you use a correlated subquery, which is usually the case with EXISTS. You look up records in SP for a given P.PNO, i.e. you do the lookup for each P record.
Without `SP.PNO = P.PNO` you would have a non-correlated subquery. I.e. the subquery no longer depends on the P record. It would return the same result for any P record (either a Status > 25 exists at all or not). Most often when this happens this is done by mistake (one forgot to relate the subquery to the record in question), but sometimes it is desired so.
|
You have actually created a `Correlated subquery`. `Exists` predicate accepts a subquery as input and returns TRUE if the subquery returns **any** rows and FALSE otherwise.
The outer query against table `P` doesn't have any filters, so all the rows from this table will be considered for which the EXISTS predicate returns TRUE.
```
SELECT DISTINCT PNAME -- Outer Query
FROM P
```
Now, the `EXISTS` predicate returns `TRUE` if the current row in table `P` has related rows in `SP Join S ON SP.SNO = S.SNO` where `S.STATUS > 25`
```
SELECT *
FROM SP Join S ON SP.SNO = S.SNO
WHERE SP.PNO = P.PNO -- Inner query
AND S.STATUS > 25
```
One of the benefits of using the `EXISTS` predicate is that it allows you to intuitively phrase English like queries. For example, this query can be read just as you would say it in ordinary English: select all `unique PNAME` from table `P` where at least one row exists in which `PNO` equals `PNO` in table SP and `Status in table S > 25`, provided table `SP` and `S` are joined based on `SNO`.
|
How does EXISTS return things other than all rows or no rows?
|
[
"",
"sql",
"sql-server",
""
] |
I have a table called TransferRequest with two columns that are (receivingPrison, currentPrison). They both reference a table Prison with columns (prisonID, location).
I need to have some sort of join to display the 'location' of both prisons, e.g. Result: "Washington", "New York".
I know to get the location from one table I can do:
```
SELECT Prison.location
FROM Prison
JOIN TransferRequest
ON Prison.prisonID = TransferRequest.currentPrison;
```
And:
```
SELECT Prison.location
FROM Prison
JOIN TransferRequest
ON Prison.prisonID = TransferRequest.receivingPrison;
```
But I'm not sure how to do a SELECT for two of the same type of items from the same type of table. What would I do to get this table? (Headers: Current Location, Receiving Location).
|
Maybe I'm missing something, but if you just want to get the location names for the current and receiving prisons you could get them by joining the `Prison` table twice:
```
SELECT
  rec_prison.location as 'Receiving Location',
cur_prison.location as 'Current Location'
FROM TransferRequest tr
INNER JOIN Prison rec_prison
ON rec_prison.prisonID = tr.receivingPrison
INNER JOIN Prison cur_prison
ON cur_prison.prisonID = tr.currentPrison;
```
[Sample SQL Fiddle](http://sqlfiddle.com/#!2/a7ca68/2)
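A sanity check of joining the Prison table twice, using Python's sqlite3 with made-up IDs and locations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Prison (prisonID INTEGER, location TEXT);
INSERT INTO Prison VALUES (1, 'Washington'), (2, 'New York');
CREATE TABLE TransferRequest (currentPrison INTEGER, receivingPrison INTEGER);
INSERT INTO TransferRequest VALUES (1, 2);
""")
# Two aliases of the same table: one resolves the current prison, one the receiving.
row = conn.execute("""
SELECT cur.location AS current_location, rec.location AS receiving_location
FROM TransferRequest tr
JOIN Prison cur ON cur.prisonID = tr.currentPrison
JOIN Prison rec ON rec.prisonID = tr.receivingPrison
""").fetchone()
print(row)  # ('Washington', 'New York')
```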
|
A correlated subquery could do it:
```
SELECT P1.location AS location1
     , (SELECT P2.location
        FROM Prison P2
        WHERE P2.prisonID = T1.receivingPrison) AS location2
FROM Prison P1
JOIN TransferRequest T1
     ON P1.prisonID = T1.currentPrison;
```
|
Select data from two rows of the same table
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a table that has 4 columns: Item, Year, Month, Amount. Some of the values for Amount are null and when that happens I want to fill those values in with the previous Amount value that is not null. I can easily do this with the LAG function when there is only one null value but when there are multiple in a row I am not sure how to approach it. Below is an example of what the table might look like with an added column for what I want to add in my query:
```
Item | Year | Month | Amount | New_Amount
AAA | 2013 | 01 | 100 | 100
AAA | 2013 | 02 | | 100
AAA | 2013 | 03 | 150 | 150
AAA | 2013 | 04 | 125 | 125
AAA | 2013 | 05 | | 125
AAA | 2013 | 06 | | 125
AAA | 2013 | 07 | | 125
AAA | 2013 | 08 | 175 | 175
```
I had two ideas which I can't seem to get to work to produce what I want. First I was going to use LAG but then I noticed when there are multiple null values in a row it won't satisfy that. Next I was going to use FIRST\_VALUE but that wouldn't help in this situation where there is a null followed by values followed by more nulls. Is there a way to use FIRST\_VALUE or another similar function to retrieve the last non-null value?
|
**last\_value** with IGNORE NULLS works fine in Oracle 10g:
```
select item, year, month, amount,
last_value(amount ignore nulls)
over(partition by item
order by year, month
rows between unbounded preceding and 1 preceding) from tab;
```
`rows between unbounded preceding and 1 preceding` sets the window for analytic function.
In this case Oracle is searching for the LAST\_VALUE inside the group defined in PARTITION BY (the same item) from the beginning (UNBOUNDED PRECEDING) until the current row - 1 (1 PRECEDING).
It's a common replacement for LEAD/LAG with IGNORE NULLS in Oracle 10g
However, if you're using Oracle 11g you can use LAG from the Gordon Linoff's answer (there is a small typo with "ignore nulls")
|
Here is an approach. Count the number of non-null values before a given row. Then use this as a group for a window function:
```
select t.item, t.year, t.month, t.amount,
max(t.amount) over (partition by t.item, grp) as new_amount
from (select t.*,
count(Amount) over (Partition by item order by year, month) as grp
from table t
) t;
```
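To see why the running count works as a grouping key, trace it over the sample data: every non-null Amount bumps the count, so each null row shares a `grp` value with the last non-null row before it, and `max()` over that group recovers the carried-forward amount.
```
Month | Amount | grp | max(Amount) over (item, grp)
01    | 100    | 1   | 100
02    |        | 1   | 100
03    | 150    | 2   | 150
04    | 125    | 3   | 125
05    |        | 3   | 125
06    |        | 3   | 125
07    |        | 3   | 125
08    | 175    | 4   | 175
```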
In Oracle version 11+, you can use `ignore nulls` for `lag()` and `lead()`:
```
select t.item, t.year, t.month, t.amount,
lag(t.amount ignore nulls) over (partition by t.item order by year, month) as new_amount
from table t
```
|
Fill null values with last non-null amount - Oracle SQL
|
[
"",
"sql",
"oracle",
""
] |
In my oracle DB, I have a table 'action' containing two columns, start\_time and end\_time (both date). I can query action duration for each action in seconds like this:
```
select (end_time-start_time)*24*60*60 as actionDuration from action
```
We have a 2 hour maintenance window, 00:00 - 02:00. I'd like to ignore the elapsed time of an action that occurs within this window.
* action may start & end outside of the maint window.
* action may begin & end within the window (ignore these).
* action may begin within the maint window and end outside the maint window. Just the seconds outside the maint window should count.
* action may begin outside but terminate within the maint window. Just the seconds outside the maint window should count.
One final complicated case : an action duration may span more than one maint window
|
If my guess in comments is correct, then:
```
select a.id, max((end_time - start_time) * 1440) - sum(nvl((mend2 - mbeg2), 0) * 1440) duration
from (select id, start_time, end_time, mbeg, mend,
case when start_time between mbeg and mend then start_time else maint.mbeg end mbeg2,
case when end_time between mbeg and mend then end_time else maint.mend end mend2
from action a left join
(select to_date(:PSTART, 'yyyy-mm-dd hh24:mi:ss') + rownum - 1 mbeg,
to_date(:PSTART, 'yyyy-mm-dd hh24:mi:ss') + 2/24 + rownum - 1 mend
from dual connect by rownum < :PDAYS) maint
on (maint.mbeg between start_time and end_time) or (maint.mend between start_time and end_time)
-- this condition I forgot earlier
where not (start_time between mbeg and mend
and
end_time between mbeg and mend)
) a
group by a.id
order by a.id;
```
Here you need to use parameters:
* `:PSTART` - date and time of your first maintenance
* `:PDAYS` - count of days of period in which you want to calculate duration of actions
Now the query counts durations in minutes; if you need another measurement unit, use another number instead of 1440.
**UPD** How it works.
* Subquery `maint` uses the hierarchical clause `connect by` to create as many rows as you need (equal to the count of days from first action to last)
* Then I make a left join with the action table. Join condition: maintenance begins or ends inside an action. Result of the join: a list of actions, where every action is followed by all its maintenances, and `NULL` if the action doesn't intersect with any maintenance
* Then I shift the start or end of a maintenance window if it occurs while the action takes place. If the action started during maintenance, I use the start of the action as the start of the maintenance (field `mbeg2`)
* I do the same with the end of maintenance (field `mend2`); result: fields mbeg2 and mend2 contain the intervals when the action and the maintenance window took place simultaneously (just the intersection of the periods)
* then I compute the length of the action using the `max` aggregate function. If an action was very long and intersected many windows, it has many lines in the subquery; that's why I use `max` (you can also use `min` or `avg` and get the same result).
* then I compute the sum of all reduced maintenance intervals (results of the intersection) and subtract this sum from the action length
I hope it is clear now.
|
This approach could really kill performance if there's a lot of time between the dates, and if you're running a report on multiple records, but might be worth a look. It requires a primary key for the action. I would consider adding a column to your table to store the elapsed time, and calculating it in an update trigger.
Substitute the correct name of your primary key column of course:
```
with action as
(select id, starttime, endtime from action where id = :P_ACTION_ID)
select count(*) seconds from action
where to_number(to_char(starttime + ((level-1) / (24*60*60)) , 'HH24')) between 2 and 23
CONNECT BY LEVEL <= (endtime-starttime)*24*60*60;
```
|
Oracle SQL - seconds between two dates, ignoring a maintenance window
|
[
"",
"sql",
"oracle",
"date",
""
] |
Records without customerId always have DT01-DT09 and WT01, and records with customerId have WIF...
`CustomerId` is essentially missing from records with `DT`, but it's the same customer.
How could I select records to get them in one row?

[Full Size](https://i.stack.imgur.com/vCSjQ.png)
Script I am tweaking ATM is below:
```
select * from (
select
tqs.transactiondetailid as transactiondetailid,
q.productid as productid,
tqs.answer as QAnswer,
QCODE,
tqs.transactionversion as transactionversion,
tqs.CUSTOMERID AS CUSTOMERID
from TRANSACTIONDETAILQSHIS tqs
inner join question q on q.questionid = tqs.questionid
and (
q.qcode like 'DT%'
or q.qcode like 'WT%'
or (q.qcode like 'WIF%' and q.isthunderheadonly = 0))
) pivot (
min(QAnswer) for QCODE in (
'DT01', 'DT02', 'DT03', 'DT04', 'DT05', 'DT06', 'DT07', 'DT09', 'WT01', 'WIF3.1', 'WIF4.1', 'WIF4.2', 'WIF6.1', 'WIF7.1', 'WIF7.2', 'WIF7.3', 'WIF7.7', 'WIF7.10', 'WIF9.1', 'WIF9.6', 'WIF10.1', 'WIF10.2', 'WIF12.1', 'WIF13.1', 'WIF13.1.1', 'WIF14.1'
))
where transactiondetailid = 5845
```
That `transactiondetailid = 5845` at the end is for clarity only, normally it's all records.
|
Figured it out myself.
The problem with all the answers above is that if you eliminate CustomerId before filtering on it, you will get the other customer's data **(transactionversion 9 and 10 have two customers)**. The trick is to filter the records on customerid without selecting customerid in the inner select; that way the rows collapse into one while the second customer's records are skipped.
The way I did it: I created a subselect (view) that contained the customerids I needed, then joined on that view to filter to matching customers. If I need the customerid, I can always get it from `customerv`.
```
SELECT *
FROM
( SELECT tqs.transactiondetailid AS transactiondetailid,
q.productid AS productid,
tqs.answer AS QAnswer,
QCODE,
tqs.transactionversion AS transactionversion
FROM TRANSACTIONDETAILQSHIS tqs
INNER JOIN question q ON q.questionid = tqs.questionid
INNER JOIN customerv curr ON curr.transactiondetailid = tqs.transactiondetailid
WHERE curr.CUSTOMERID = tqs.CUSTOMERID
OR tqs.customerid IS NULL ) pivot ( min(QAnswer)
FOR QCODE IN ( 'DT01', 'DT02', 'DT03', 'DT04', 'DT05', 'DT06', 'DT07', 'DT09', 'WT01', 'WIF3.1', 'WIF4.1', 'WIF4.2', 'WIF6.1', 'WIF7.1', 'WIF7.2', 'WIF7.3', 'WIF7.7', 'WIF7.10', 'WIF9.1', 'WIF9.6', 'WIF10.1', 'WIF10.2', 'WIF12.1', 'WIF13.1', 'WIF13.1.1', 'WIF14.1' )) x
WHERE x.transactiondetailid = 5845
```
see result below:

|
you can use `GROUP BY` and get the maximum value among the Qcode value based on customer id, transaction detail id, product id, transaction version in your Sub query before the `PIVOT`
```
select
tqs.transactiondetailid as transactiondetailid,
q.productid as productid,
max(tqs.answer) as QAnswer,
tqs.transactionversion as transactionversion,
    max(tqs.CUSTOMERID) AS CUSTOMERID,
    max(qcode) as QCODE
from TRANSACTIONDETAILQSHIS tqs
inner join question q on q.questionid = tqs.questionid
and (
q.qcode like 'DT%'
or q.qcode like 'WT%'
or (q.qcode like 'WIF%' and q.isthunderheadonly = 0))
group by tqs.transactiondetailid, q.productid, tqs.transactionversion
```
|
Oracle combine 2 rows into one with different join criteria
|
[
"",
"sql",
"oracle",
"pivot",
""
] |
I am trying to search for and update records which start with a particular number or fall within a range.
I have made a query which works where values start with a particular number. I can't figure out how to do it with a range.
For example, I want to update the following values in products\_ean with " ":
```
255
201-230 ---> starts with 201,202,203 ...230
236
980-990 ---> starts with 980, 981, 982 ...990
```
I have written the following query, which works, but I am not sure it is efficient, especially when it has to search over 100k records. It doesn't work with a range.
```
UPDATE products SET products_ean =""
where products_ean like "200%"
OR products_ean like "020%"
OR products_ean like "023%"
OR products_ean like "027%"
OR products_ean like "042%"
OR products_ean like "221%"
OR products_ean like "209%"
OR products_ean like "041%"
OR products_ean like "049%"
OR products_ean like "026%"
OR products_ean like "025%"
OR products_ean like "299%";
```
|
This will be a full table scan anyhow, so you can use a function on `products_ean` with no loss of performance. That said, you can make the query more readable, but probably not much faster. However, you can still try whether it is faster to take the three leading digits and compare these:
```
UPDATE products SET products_ean = ''
where left(products_ean,3) in ('200', '020', '027', ...);
```
If you find it more readable, you can even use ranges:
```
UPDATE products SET products_ean = ''
where left(products_ean,3) = '255'
or left(products_ean,3) between '201' and '230'
...
```
|
I don't think it's going to be faster, but it's another alternative:
To Find:
```
SELECT *
FROM products
WHERE products_ean
REGEXP '^(200|020|023|027|042|221|209|041|049|026|025|299)'
```
To Replace:
```
UPDATE products SET products_ean = ''
WHERE products_ean
REGEXP '^(200|020|023|027|042|221|209|041|049|026|025|299)'
```
note: it will not use indexes
|
is there a better way to write this query
|
[
"",
"mysql",
"sql",
""
] |
I have the following query which I am trying to optimize. The query runs in 3 seconds if I remove the join condition **I.ident\_id in (select distinct(ident\_id) from MISSING\_Images miss )** from the query below, but with it, it takes 3 minutes to execute.
```
SELECT IDENT_ID
FROM tbl_IDENT I
JOIN tbl_TA AN ON (AN.IDENT_ID = I.IDENT_ID and AN.anchor_point = I.c_id)
JOIN tbl_AB A ON (A.A_ID = I.A_ID)
JOIN tbl_FULL_ARCHIVE FT ON (FT.ARCHIVE_ID = I.ARCHIVE_ID)
WHERE (I.DATA_STATUS = 'ACTIVE' or I.DATA_STATUS = 'INACTIVE')
AND
(
I.FD = 'Y'
OR
I.ident_id in (select distinct(ident_id) from MISSING_Images miss where substr(miss.NAME, 0, INSTR(miss.NAME, '.',-1,1)) in (SELECT substr(IMG_NAME, 0, INSTR(IMG_NAME, '.',-1,1)) FROM IMAGES ))
```
);
**select distinct(ident\_id) from MISSING\_Images miss** returns 2000 records and the main tbl\_IDENT has over 100,000 records. I have an index created on I.ident\_id.
Any hints on how to improve it? I am using Oracle 10g.
|
You may try to replace
```
I.ident_id in (select distinct(ident_id) from MISSING_Images miss)
```
with
```
EXISTS (select 1 from MISSING_Images miss where miss.ident_id = I.ident_id)
```
And create an index on MISSING\_Images.ident\_id
**EDIT**: The most direct solution will be:
```
EXISTS (select 1 from MISSING_Images miss
where miss.ident_id = I.ident_id
and exists (select 1 from images img
                   where substr(img.IMG_NAME, 0, INSTR(img.IMG_NAME, '.',-1,1))
                       = substr(miss.NAME, 0, INSTR(miss.NAME, '.',-1,1))
)
)
```
And create function-based indexes:
```
create index indx_name1 on images(substr(IMG_NAME, 0, INSTR(IMG_NAME, '.',-1,1)));
create index indx_name2 on MISSING_Images(substr(miss.NAME, 0, INSTR(miss.NAME, '.',-1,1)));
```
Note that such indexes can have a bad impact on insert/update operations on the underlying objects and require some additional space. In addition, they don't work well with nulls.
Other choices:
```
EXISTS (select 1 from MISSING_Images miss join images img
             on substr(img.IMG_NAME, 0, INSTR(img.IMG_NAME, '.',-1,1))
              = substr(miss.NAME, 0, INSTR(miss.NAME, '.',-1,1))
        where miss.ident_id = I.ident_id
       )

EXISTS (select 1 from (select miss.ident_id from MISSING_Images miss join images img
                            on substr(img.IMG_NAME, 0, INSTR(img.IMG_NAME, '.',-1,1))
                             = substr(miss.NAME, 0, INSTR(miss.NAME, '.',-1,1))
                      ) sub
        where sub.ident_id = I.ident_id
       )
```
|
Try a union instead, to begin with?
```
SELECT IDENT_ID
FROM tbl_IDENT I
JOIN tbl_TA AN ON AN.IDENT_ID = I.IDENT_ID AND AN.anchor_point = I.c_id
JOIN tbl_AB A ON A.A_ID = I.A_ID
JOIN tbl_FULL_ARCHIVE FT ON FT.ARCHIVE_ID = I.ARCHIVE_ID
WHERE
(I.DATA_STATUS = 'ACTIVE' OR I.DATA_STATUS = 'INACTIVE')
AND I.FD = 'Y'
UNION
SELECT IDENT_ID
FROM tbl_IDENT I
JOIN tbl_TA AN ON AN.IDENT_ID = I.IDENT_ID AND AN.anchor_point = I.c_id
JOIN tbl_AB A ON A.A_ID = I.A_ID
JOIN tbl_FULL_ARCHIVE FT ON FT.ARCHIVE_ID = I.ARCHIVE_ID
JOIN MISSING_Images miss ON I.ident_id = miss.ident_id
WHERE
I.DATA_STATUS = 'ACTIVE'
OR I.DATA_STATUS = 'INACTIVE'
;
```
More improvements can surely be provided, but for that I'd need an explain plan of the query...
|
Slow query with multiple joins
|
[
"",
"sql",
"oracle",
"oracle10g",
""
] |
I am trying to display **all the columns** available in the table **EXCEPT IDENTITY COLUMN**.
Is it possible?
Pseudo code would be something like this.
```
SELECT * FROM Tab1 WHERE IDENTITY = NULL
```
or
```
SELECT * FROM Tab1 WHERE IDENTITY <> 'YES'
```
This is obviously invalid. But I want to achieve something like this.
Is it possible??
PS. I am doing this through a SQL Server connection from VBScript, and I don't want to hard-code the column names (obviously, for flexibility).
Here's the VbScript snippet.
```
SQL_Cols = "select Column_name from Information_schema.columns where Table_name = 'Tab1'"
Recordset.open SQL_Cols,ConnString
p = recordset.getrows()
STR_Cols = ""
for i=1 to Ubound(p,2) ' from 1 to UBound since in p(0,0) name of identity column is stored
if i = Ubound(p,2) Then
STR_Cols = STR_Cols &"["&p(0,i)&"]"
else
STR_Cols = STR_Cols &"["&p(0,i)&"],"
end if
NEXT
Recordset.close
```
**STR\_Cols** stores all the column names (except the 1st one, which is an identity column),
which I am later using to perform the downstream task.
This is working absolutely fine except for one thing: it only works if the 1st column is the IDENTITY column, so if the table is altered, the code will have to be changed. I want the code to be as flexible and robust as possible.
|
Try it like this:
`sys.columns` contains all the columns of the database
`sys.tables` contains all the tables of the database
`sys.identity_columns` contains all the identity columns of the database
So the result can be generated by `join`ing these views, using `is_identity=1` to find the identity columns and excluding them with `NOT IN` from all the columns of the particular table:
```
select c.name from sys.columns c
join sys.tables AS t
on t.object_id=c.object_id
where c.name not in (select name from sys.identity_columns where is_identity=1)
and t.name='MyTableName'
```
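Since the question already queries Information\_schema.columns from VBScript, a variant that stays with that view is to filter on the COLUMNPROPERTY function (a sketch; `Tab1` as in the question):
```
SELECT column_name
FROM Information_schema.columns
WHERE table_name = 'Tab1'
  AND COLUMNPROPERTY(OBJECT_ID(table_name), column_name, 'IsIdentity') = 0
```
With this, the VBScript loop can start at index 0 and no longer depends on the identity column being first.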
|
Try this. Use the `sys.columns` table to get the column list without the `identity column`. Then use `Dynamic SQL` to generate the column list and execute the query:
```
declare @collist varchar(max)='', @sql nvarchar(max)
select @collist += ','+name from sys.columns where object_name(object_id)='Tab1' and is_identity <> 1
select @collist = right(@collist,len(@collist)-1)
set @sql ='select '+@collist+ ' from Tab1'
exec sp_executesql @sql
```
|
Selecting All the columns except IDENTITY Column
|
[
"",
"sql",
"sql-server",
"vbscript",
""
] |
Yesterday I posted a question regarding oracle sql query being repeated.
[How to remove repeated lines in an Oracle SQL query](https://stackoverflow.com/questions/26965909/how-to-remove-repeated-lines-in-an-oracle-sql-query)
How do I modify the SQL Query with JOIN if I have multiple parent and child tables?
```
SELECT t1.table_id FROM TABLE_ONE t1, COMMON_TABLE cmn
WHERE
t1.table_name = 'TEST_ONE'
AND t1.table_id = cmn.common_id
AND cmn.column_one > 20
AND cmn.column_two <= 30
UNION
SELECT t2.table_id FROM TABLE_TWO t2, COMMON_TABLE cmn
WHERE
t2.table_name = 'TEST_TWO'
AND t2.table_id = cmn.common_id
AND cmn.column_one > 20
AND cmn.column_two <= 30
UNION
SELECT t3.table_id FROM TABLE_THREE t3, COMMON_TABLE cmn
WHERE
t3.table_name = 'TEST_THREE'
AND t3.table_id = cmn.common_id
AND cmn.column_one > 20
AND cmn.column_two <= 30
UNION
SELECT tc1.table_id FROM TABLE_CHILD_ONE tc1, TABLE_ONE t1, COMMON_TABLE cmn
WHERE
tc1.table_name = 'TEST_CHILD_ONE'
AND tc1.table_id = t1.table_id
AND cmn.column_one > 20
AND cmn.column_two <= 30
UNION
SELECT tc2.table_id FROM TABLE_CHILD_TWO tc2, TABLE_TWO t2, COMMON_TABLE cmn
WHERE
tc2.table_name = 'TEST_CHILD_TWO'
AND tc2.table_id = t2.table_id
AND cmn.column_one > 20
AND cmn.column_two <= 30
```
I would like to remove the repeated lines
```
AND cmn.column_one > 20
AND cmn.column_two <= 30
```
|
Use `UNION` to get the values from all tables and then do one JOIN with COMMON\_TABLE:
```
SELECT table_id
FROM
(
SELECT t1.table_id FROM TABLE_ONE t1
WHERE t1.table_name = 'TEST_ONE'
UNION
SELECT t2.table_id FROM TABLE_TWO t2
WHERE t2.table_name = 'TEST_TWO'
UNION
SELECT tc1.table_id FROM TABLE_CHILD_ONE tc1
JOIN TABLE_ONE t1
ON tc1.table_name = 'TEST_CHILD_ONE'
AND tc1.table_id = t1.table_id
) T
JOIN COMMON_TABLE cmn
ON T.table_id = cmn.common_id
AND cmn.column_one > 20
AND cmn.column_two <= 30
```
|
Did you try this?
```
select table_id
from
(
select table_id, cmn.column_one, cmn.column_two
etc
) derived_table
where column_one > 20
and column_two <= 30
```
|
How to remove repeated lines in an Oracle SQL query for multiple parent and child tables
|
[
"",
"sql",
"oracle",
"join",
""
] |
I am trying to find the difference between today's date and a value that is a concatenation of multiple values but begins with an 8-digit date without any dashes or forward slashes. There's something wrong with my syntax, I believe, but I'm not yet skilled enough to see what I'm doing incorrectly. Here is what I have so far:
```
select DateDiff(dd, (select MIN(CAST(Left(batchid, 8) as Date)) from
[Table]), getdate()) from [Table]
```
This is returning the following error: "Msg 241, Level 16, State 1, Line 1
Conversion failed when converting date and/or time from character string."
|
I think you have data where the left 8 characters are not a valid date in yyyymmdd format. You can run the following query to find them:
```
select batchid, isdate(Left(batchid, 8))
from [Table]
where isdate(Left(batchid, 8)) = 0
```
This is the correct syntax for your query. Your original example had an extra parenthesis, which I assume was a typo since your error appears to be data related.
```
select
datediff(dd, (select min(cast(left(batchid, 8) as date))
from [Table]), getdate())
```
|
This was my error. I was working with another table and forgot batchID was not the same for both. The concatenated batchID in the table I posted a question about can't be converted to a date.
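On SQL Server 2012 or later, TRY\_CONVERT returns NULL instead of raising the conversion error, which makes it easy to locate or skip the unconvertible batchids (a sketch against the same table):
```
-- rows whose leading 8 characters are not a valid yyyymmdd date
SELECT batchid
FROM [Table]
WHERE TRY_CONVERT(date, LEFT(batchid, 8)) IS NULL;

-- the original calculation, silently ignoring those rows
-- (aggregates skip NULLs)
SELECT DATEDIFF(dd, MIN(TRY_CONVERT(date, LEFT(batchid, 8))), GETDATE())
FROM [Table];
```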
|
Using DateDiff() to find the difference between getDate() and a concatenated value
|
[
"",
"sql",
"sql-server",
"concatenation",
"datediff",
"date-conversion",
""
] |
I would like to change the manner in which the mileage is represented in the database. For example, right now the mileage is represented as 080+0.348; this would mean that this particular feature is at mileage point 80.348 along the roadway corridor. I would like to have the data represented in the database in the latter form, 80.348 and so on. This would save me from having to export the dataset to excel for the conversion. Is this even possible? The name of the column is NRLG\_MILEPOINT.
Much appreciated.
|
One thing you could try is to pick the string value apart into its component pieces and then recombine them as a number. If your data is in a table called TEST you might do something like the following:
```
select miles, fraction,
nvl(to_number(miles), 0) + nvl(to_number(fraction), 0) as milepoint
from (select regexp_substr(nrlg_milepoint, '[0-9]*') as miles,
regexp_substr(nrlg_milepoint, '[+-][0-9.]*') as fraction
from test);
```
[SQLFiddle here](http://sqlfiddle.com/#!4/e77bf/1).
Share and enjoy.
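If the value always follows the `miles+fraction` pattern, a plain SUBSTR/INSTR split works too and avoids regular expressions (a sketch under that assumption):
```
SELECT to_number(substr(nrlg_milepoint, 1, instr(nrlg_milepoint, '+') - 1))
     + to_number(substr(nrlg_milepoint, instr(nrlg_milepoint, '+') + 1)) AS milepoint
FROM test;
```
For '080+0.348' this yields 80 + 0.348 = 80.348.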
|
Using the answer provided above, I was able to expand it to get exactly the answer I needed. Thanks a ton to everyone who helped! Here is the query I ended up with:
```
select distinct nrlg_dept_route,corridor_code_rb,nrlg_county,next_county,
nvl(to_number(miles), 0) + nvl(to_number(fraction), 0) as milepoint
from (select regexp_substr(nrlg_milepoint, '[0-9]*') as miles,
                 nrlg_milepoint as nrlg_mile_point,
nrlg_dept_route as nrlg_dept_route,
nrlg_county as nrlg_county,
next_county as next_county,
corridor_code_rb as corridor_code_rb,
corridor_code as corridor_code,
regexp_substr(nrlg_milepoint, '[+-][0-9.]*') as fraction
from corridor_county_intersect,south_van_data_view)
where nrlg_dept_route = corridor_code
order by 1,5;
```
|
How to change mileage representation forms in sql
|
[
"",
"sql",
"oracle",
"data-conversion",
""
] |
There is a table of numbers containing the values [1,1,2,3,7,3,4,2,7].
I want to select only duplicates, so the output set would contain [1,2,3,7]; 4 is filtered out.
I have code that does it like this:
```
DROP TABLE #tempTable;
SELECT [numbers] as nums, COUNT(*) as cny
INTO #tempTable
FROM [testBase].[dbo].[numbers] group by numbers;
SELECT nums from #tempTable where cny > 1;
```
Now I would like to know if I can get it without first selecting into #tempTable.
Something like a select from a select, or is this the only way I can do it?
|
The same in one query.
Conditions evaluated after **GROUP BY** belong in the **HAVING** clause.
```
SELECT [numbers] as nums, COUNT(*) as cny
FROM [testBase].[dbo].[numbers]
GROUP BY numbers
HAVING COUNT(*) > 1
```
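Against the sample data [1,1,2,3,7,3,4,2,7], the per-value counts show exactly which rows the HAVING clause keeps:
```
nums  cny
1     2   -> kept
2     2   -> kept
3     2   -> kept
4     1   -> filtered out
7     2   -> kept
```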
More on this
[HAVING (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms180199.aspx)
[Having clause tutorial](http://www.techonthenet.com/sql_server/having.php)
|
Try this:
```
SELECT [numbers] AS NUM
FROM TAB_NAME
GROUP BY [numbers]
HAVING COUNT([numbers]) > 1;
```
|
Select from another select without temporary table?
|
[
"",
"sql",
"sql-server",
""
] |
## Question
I would like to pull all assets from a database which is a certain amount of years old. Is this statement correct?
## Background
The database is called `AssetRegister`
The table is called `dbo.Assets`
the column is called `AcquiredDate`
## Statement so far
```
SELECT * FROM dbo.Assets WHERE AcquiredDate < '2008-01-01'
```
|
```
SELECT * FROM dbo.Assets
WHERE DATEDIFF(YEAR, AcquiredDate, GetDate()) >= 8
```
For a performance-optimized query, look at [@Horaciux's answer](https://stackoverflow.com/a/26993036/575376).
|
The answer by @Juergen brings the right results:
```
SELECT * FROM dbo.Assets
WHERE DATEDIFF(YEAR, AcquiredDate, GetDate()) >= 8
```
But the SQL optimizer can't use an index on AcquiredDate, even if one exists. It will literally have to evaluate this function for every row of the table.
For big tables it is recommended to use:
```
DECLARE @limitDate Date
SELECT @limitDate=DATEADD(year,-8,GETDATE()) --Calculate limit date 8 year before now.
SELECT * FROM dbo.Assets
WHERE AcquiredDate <= @limitDate
```
Or simply:
```
SELECT * FROM dbo.Assets
WHERE AcquiredDate <= DATEADD(year,-8,GETDATE())
```
|
SQL Server: How to get rows where the date is older than X years?
|
[
"",
"sql",
"sql-server",
""
] |
I have a very simple script in TSQL that *tries* to convert a timestamp in milliseconds to a `DATETIME` data type. This also includes the local time offset.
```
DECLARE @Time AS BIGINT
SET @Time = 1413381394000
SELECT DATEADD(MILLISECOND,
@Time - DATEDIFF(MILLISECOND, GETDATE(), GETUTCDATE()), CAST('1970-01-01 00:00:00' AS DATETIME)) AS [Datetime]
```
The error it's giving me all the time is:
```
Arithmetic overflow error converting expression to data type int.
```
Now, I don't have any *explicit* `int` variables in this query, and any `CAST()` to `BIGINT` or `DECIMAL(13,0)` I've done resulted in the same error.
Which is the wrong part in this query? Is `int` the default return data type of `DATEDIFF()`?
I know that I could just divide `@Time` by 1000 and work with `SECONDS` instead of `MILLISECONDS`, I just want to know if there is a way to work *directly* with milliseconds, since the idea is to use this script as an Inline Table-Valued Function (cannot use scalar ones for other reasons outside this query).
|
Your calculation for local offset has the potential to be wrong by an hour due to Daylight Savings Time. `DATEDIFF(MILLISECOND, GETDATE(), GETUTCDATE())` will only get the current offset and not the offset for the given date. Conversions to and from UTC and local time are generally best handled in application or SQLCLR code due to SQL's lack of functionality for this purpose. See [How can I get the correct offset between UTC and local times for a date that is before or after DST?](https://dba.stackexchange.com/q/28187/50669).
In [DATEADD (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms186819.aspx) Microsoft states that:
> number
> Is an expression that can be resolved to an int that is added
> to a datepart of date. User-defined variables are valid.
>
> ```
> If you specify a value with a decimal fraction, the fraction is
> truncated and not rounded.
> ```
Therefore, you cannot directly work with millisecond values larger than the maximum value for an int, which supports a range of -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647) as stated in [int, bigint, smallint, and tinyint (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms187745.aspx). You'll have to do two separate date adds and some modulo division.
```
DECLARE @Time bigint
DECLARE @Seconds int
DECLARE @RemainingMilliseconds int
DECLARE @EpochDate datetime
SET @Time = 1413381394000
SET @EpochDate = '1970-01-01 00:00:00'
SET @Seconds = @Time / 1000
SET @RemainingMilliseconds = @Time % 1000
SELECT DATEADD(MILLISECOND, @RemainingMilliseconds, DATEADD(SECOND,@Seconds, @EpochDate))
```
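Since the question mentions using this as an inline table-valued function, the two-step DATEADD folds straight into one (a sketch; the function and parameter names are made up, and it assumes the epoch value maps to a date before 2038 so the seconds part fits in an int):
```
CREATE FUNCTION dbo.EpochMsToDatetime (@Ms bigint)
RETURNS TABLE
AS
RETURN
    SELECT DATEADD(MILLISECOND, CAST(@Ms % 1000 AS int),
           DATEADD(SECOND, CAST(@Ms / 1000 AS int),
                   CAST('1970-01-01 00:00:00' AS datetime))) AS [Datetime];
```
Usage would then be `SELECT [Datetime] FROM dbo.EpochMsToDatetime(1413381394000);`.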
|
[`DateDiff()`](http://msdn.microsoft.com/en-us/library/ms189794.aspx) does indeed return an int, but I suspect that it's [`DateAdd()`](http://msdn.microsoft.com/en-us/library/ms186819.aspx) that's giving you the error message.
You'll need to work around that limit, unfortunately, in the way you said you wanted to avoid, since you're wanting to work in milliseconds.
> DATEADD (datepart , number , date )
>
> `number` Is an expression that can be resolved to an int that is added to a datepart of date. User-defined variables are valid.
You could obviously code your way around it with loops or something, but there's a cost/benefit trade-off there that you would need to go through.
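The loop idea could look like the following sketch: add the milliseconds in int-sized chunks. (Note this is procedural code, so it cannot live inside an inline table-valued function.)
```
DECLARE @Time bigint = 1413381394000
DECLARE @Result datetime = '1970-01-01 00:00:00'

-- peel off at most 2^31-1 ms (about 24.8 days) per iteration
WHILE @Time > 2147483647
BEGIN
    SET @Result = DATEADD(MILLISECOND, 2147483647, @Result)
    SET @Time = @Time - 2147483647
END
SET @Result = DATEADD(MILLISECOND, CAST(@Time AS int), @Result)

SELECT @Result
```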
|
T-SQL - Arithmetic overflow while working with milliseconds
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a View which has a SQL Script as:
```
Select
a.iAssetId,
ac.eEventCode,
vm.dtUTCDateTime,
g.iGeofenceId,
g.sGeofenceName,
c.sCategoryName,
c.iCategoryId,
s.sSiteName,
s.iSiteId,
CASE WHEN ac.eEventCode = 6 THEN vm.dtUTCDateTime ELSE NULL END as EnterTime,
CASE WHEN ac.eEventCode = 7 THEN vm.dtUTCDateTime ELSE NULL END as ExitTime,
CASE WHEN
a.iAssetId = Lead(a.iAssetId) OVER (ORDER BY a.iAssetId)
AND g.iGeofenceId = Lead(g.iGeofenceId) OVER (ORDER BY a.iAssetId)
AND ac.eEventCode != Lead(ac.eEventCode) OVER (ORDER BY a.iAssetId)
THEN DATEDIFF(minute, vm.dtUTCDateTime, Lead(vm.dtUTCDateTime) OVER (ORDER BY a.iAssetId)) ELSE NULL END as Test
From AssetCommunicationSummary ac
Inner join VehicleMonitoringLog vm on vm.iVehicleMonitoringId = ac.iVehicleMonitoringId
Inner Join Geofences g on g.iGeofenceId = vm.iGeofenceId
Inner Join Assets a on a.iAssetId = ac.iAssetId
Inner Join Categories c on c.iCategoryId = a.iCategoryId
Inner Join Sites s on s.iSiteId = c.iSiteId
Where ac.eEventCode = 6 OR ac.eEventCode = 7
Group by
a.iAssetId,
ac.eEventCode,
vm.dtUTCDateTime,
g.iGeofenceId,
g.sGeofenceName,
c.sCategoryName,
c.iCategoryId,
s.sSiteName,
s.iSiteId
```
I have used Lead to calculate the time difference in minutes for leading rows based on conditions.
I now need to merge the leading row and the current row based on a condition.
Is there a possible way to do this?
The goal is to get the EnterTime and ExitTime in the same row, with the time difference in the column next to it.
My result is like this:

|
If your eventcode is always going to be 6 and 7, then you can just join to that table twice using that clause in the join itself. I think I've got the rest of your schema joined up properly below, but if not, you can adjust it around to fit.
```
Select
a.iAssetId,
vmEnter.dtUTCDateTime,
g.iGeofenceId,
g.sGeofenceName,
c.sCategoryName,
c.iCategoryId,
s.sSiteName,
s.iSiteId,
vmEnter.dtUTCDateTime as EnterTime,
vmExit.dtUTCDateTime as ExitTime,
DATEDIFF(minute, vmEnter.dtUTCDateTime, vmExit.dtUTCDateTime) as TimeDifference
From Sites s
Inner Join Categories c on s.iSiteId = c.iSiteId
Inner Join Assets a on c.iCategoryId = a.iCategoryId
Inner Join AssetCommunicationSummary acEnter on a.iAssetId = acEnter.iAssetId and acEnter.eEventCode = 6
Inner Join VehicleMonitoringLog vmEnter on vmEnter.iVehicleMonitoringId = acEnter.iVehicleMonitoringId
Inner Join AssetCommunicationSummary acExit on a.iAssetId = acExit.iAssetId and acExit.eEventCode = 7
Inner Join VehicleMonitoringLog vmExit on vmExit.iVehicleMonitoringId = acExit.iVehicleMonitoringId
Inner Join Geofences g on g.iGeofenceId = vmEnter.iGeofenceId
```
|
I'm gonna guess that eventcode = 6 means that's the intake time.
If so, two of your data pairs don't make much sense, as the exit time is before the intake time.
The query below only accounts for the case where eventcode 6 = intake time
and the fact that the exit time should be after the enter time.
The query is based on the output you provided and not the view query.
If doing a select \* on your view table gives you that output, then replace vw\_table with *yourviewstablename*.
There are NULLs in the timedif column of the sqlfiddle because
* there was only one instance of assetid 2
* assetid 4 and 6 have exit times that happened before enter times
[SQLFIDDLE](http://sqlfiddle.com/#!3/bef89/2)
```
select
v1.iAssetid,
v1.EnterTime,
v2.ExitTime,
datediff(minute, v1.Entertime, v2.Exittime) timedif
from vw_table v1
left join vw_table v2 on
v1.iAssetid= v2.iAssetid
and v1.sCategoryNamea = v2.sCategoryNamea
and v2.eEventcode = 7
and v2.dtUTCDatetime >= v1.dtUTCDatetime
where
v1.eEventcode = 6
```
|
Merge two rows with condition SQL View
|
[
"",
"sql",
"sql-server",
"sql-view",
""
] |
I have a standard user table. Now each user can add an unknown number of languages to his profile. What is the best way of storing the languages for each user? I want to query only **once** to fetch a user and all his corresponding languages. What is the best design?
Example:
```
User
====
id name
----------
99 peter
Languages
=========
id userid lang
--------------------
44 99 en
45 99 fr
```
|
That's a many-to-many relationship (a user can have many languages, and vice versa), so to normalize the DB you should create a new table containing the two foreign keys (call it user\_x\_languages):
```
user_x_languages
================
user_id lang_id
99 44
99 45
```
You would need to remove the userid column from your languages table.
The query should then use a join:
```
select u.name, l.lang from user u
join user_x_languages x on u.id = x.user_id
join languages l on x.lang_id = l.id
where u.id = 99;
```
That would output a row for each language the user has; it will, however, repeat the username on each row.
You want to look into DB normalization, a very nice non-technical article is this: [A Simple Guide to Five Normal Forms in Relational Database Theory](http://www.bkent.net/Doc/simple5.htm)
Following the advice of [this](https://stackoverflow.com/questions/276927/can-i-concatenate-multiple-mysql-rows-into-one-field) thread about returning one line, the procedure would be something like this (I might have changed the table and column names a little bit from your original):
```
delimiter //
Create function languages (p_user varchar(30))
returns text
begin
DECLARE ret_val TEXT;
select GROUP_CONCAT(lang) into ret_val
from Users u
join user_x_lang x on u.user_id = x.user_id
join Lang l on x.lang_id = l.lang_id
where u.name = p_user;
return ret_val;
end//
delimiter ;
```
and now the select should be (for all users):
```
select name, languages(name) from users;
```
|
You may create a many-to-many relationship table. Like this:
Table "Users"
```
user_id | name
99 peter
100 tim
```
Table "Languages"
```
language_id | language
44 en
45 fr
```
Table "User\_to\_language"
```
User_id | Language_id
99 45
100 45
100 44
```
Or, if you need a lot of user attributes in future, you might use something like:
Table User\_attributes
```
user_id | attribute | value
99 'lang' 'en'
```
Also, this might be useful for you: [Database normalization](http://en.wikipedia.org/wiki/Database_normalization)
|
How to store variable number of languages per user?
|
[
"",
"mysql",
"sql",
""
] |
I have a table in my database. The table consists of two columns:
The first column, DeliveryDate, has type DATETIME.
The second column, DeliveryTime, has type VARCHAR.
Here is a little example of table data
```
DeliveryDate - DeliveryTime
2014-11-06 00:00:00.000 - 15:00
2014-11-06 00:00:00.000 - 15:00
2014-11-12 00:00:00.000 - 09:00
2014-11-12 00:00:00.000 - 09:00
```
Table has 92 rows in it.
The goal is to write a query that concatenates DeliveryDate and DeliveryTime into a single DATETIME column.
I run a query like this:
```
SELECT CAST((O.DeliveryDate + RIGHT(ISNULL(O.DeliveryTime,'00:00'),5)) AS DATETIME)
FROM MySuperTable
```
The query fails with an invalid-conversion-from-varchar error.
But if I run the query with TOP, for example TOP 92:
```
SELECT TOP 92 CAST((O.DeliveryDate + RIGHT(ISNULL(O.DeliveryTime,'00:00'),5)) AS DATETIME)
FROM MySuperTable
```
The query completes without errors.
Moreover, TOP values up to 620 are OK, but TOP 621 fails again.
Please help me understand how the TOP logic works and what the problem is with my code.
|
This is due to the Query Optimizer and how it optimizes the query. It most likely assumed it could rearrange a step or two to make things faster, not realizing that the data issue existed. This is why
* doing the `TOP 620` worked when there were only 92 rows as it was likely affecting the way the optimizer built the path on how to get the data
* separating the calculation to the UDF worked as it stopped the optimizer from being able to take whatever shortcut was causing the issue.
But UDFs are terribly slow. Instead, try being more explicit in your query and not leaving it to SQL Server to implicitly convert a string (the time portion coming out of the `RIGHT` function) into a `DATETIME`:
```
SELECT O.DeliveryDate + CONVERT(DATETIME, RIGHT(ISNULL(O.DeliveryTime, '00:00'), 5))
FROM MySuperTable;
```
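An alternative sketch that avoids string concatenation entirely, under the assumption that `DeliveryTime` is always `'HH:MM'` (or NULL): add the hour and minute parts numerically with `DATEADD`. Rows with malformed times will still error on the `CAST`, which helps locate the bad data:

```sql
SELECT DATEADD(MINUTE,
               CAST(LEFT(ISNULL(O.DeliveryTime, '00:00'), 2) AS INT) * 60
             + CAST(RIGHT(ISNULL(O.DeliveryTime, '00:00'), 2) AS INT),
               O.DeliveryDate)
FROM MySuperTable O;
```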
|
It's strange, but when I separated the `CAST` into a scalar function everything began to work fine. But it's very slow :(
```
CREATE FUNCTION [dbo].[ConcatDateAndTime]
(
@DatePart DATETIME,
@TimePart VARCHAR(50)
)
RETURNS DATETIME
AS
BEGIN
RETURN @DatePart + RIGHT(COALESCE(@TimePart,'00:00'),5)
END
```
|
strange CAST() behaviour in SQL Server 2008 R2
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I need to generate an alphanumeric random number of length 6, containing digits and letters (both lowercase and uppercase); check the query below.
**I NEED TO IMPLEMENT THIS IN A FUNCTION. (Is it possible to use NEWID() or RAND() inside a function?)**
```
SELECT SUBSTRING(CONVERT(VARCHAR(255), NEWID()),0,7)
```
**Output:**
```
23647D
06ABA9
542191
.
.
.
```
**I Need Output as:**
```
236m7D
3n64iD
6t4M7D
.
.
.
```
|
Since we cannot use NEWID() or RAND() inside a function, we first need to create a view.
**For Function**
```
CREATE VIEW NewID as select newid() as new_id
DECLARE @new_id VARCHAR(255)
SELECT @new_id = new_id FROM newid
SELECT @Password = CAST((ABS(CHECKSUM(@new_id))%10) AS VARCHAR(1)) +
CHAR(ASCII('a')+(ABS(CHECKSUM(@new_id))%25)) +
CHAR(ASCII('A')+(ABS(CHECKSUM(@new_id))%25)) +
LEFT(@new_id,3)
SELECT @PASSWORD
```
**Output:**
```
9eEF44
5uUFA2
7hHFA7
.
.
.
```
**For Select Statement**
```
DECLARE @new_id VARCHAR(200)
SELECT @new_id = NEWID()
SELECT CAST((ABS(CHECKSUM(@new_id))%10) AS VARCHAR(1)) +
CHAR(ASCII('a')+(ABS(CHECKSUM(@new_id))%25)) +
CHAR(ASCII('A')+(ABS(CHECKSUM(@new_id))%25)) +
LEFT(@new_id,3)
```
**Output:**
```
0aAF3C
5pP3CE
2wW85E
.
.
.
```
|
Try this:
```
select cast((Abs(Checksum(NewId()))%10) as varchar(1)) +
char(ascii('a')+(Abs(Checksum(NewId()))%25)) +
char(ascii('A')+(Abs(Checksum(NewId()))%25)) +
left(newid(),5) Random_Number
```
Also,
```
DECLARE @exclude varchar(50)
SET @exclude = '0:;<=>?@O[]`^\/'
DECLARE @char char
DECLARE @len char
DECLARE @output varchar(50)
set @output = ''
set @len = 8
while @len > 0 begin
select @char = char(round(rand() * 74 + 48, 0))
if charindex(@char, @exclude) = 0 begin
set @output = @output + @char
set @len = @len - 1
end
end
SELECT @output
```
can be used.
|
How to Generate Alphanumeric Random numbers using function in SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
"stored-procedures",
""
] |
I don't know how to write this SQL query; it's probably simple, but I can't work it out.
I have 2 tables:
```
Table_Articles:
COD NAME
1 Bottle
2 Car
3 Phone
Table_Articles_Registered
COD_ARTICLE DATE
1 05/11/2014
1 06/11/2014
1 07/11/2014
2 08/11/2014
2 09/11/2014
3 05/11/2014
```
I want to take, from the table Table\_Articles\_Registered, the row with the MAX date for each article. The result I want is:
```
COD NAME DATE
1 Bottle 07/11/2014
2 Car 09/11/2014
3 Phone 05/11/2014
```
I need to use a statement like this. The problem is in the subquery. Later I use another inner join in the statement; this is only a fragment.
```
select
_Article.Code,
_Article.Description ,
from Tbl_Articles as _Article left join
(
select top 1 *
from ArticlesRegisterds where DATE_REGISTERED <= '18/11/2014'
order by DATE_REGISTERED
)
as regAux
on regAux.CODE_ARTICLE= _Article.CODE
```
I don't know how to connect the field CODE\_ARTICLE in the table ArticlesRegisterds with the outer query.
|
Can't you just do this?:
```
SELECT
Table_Articles.COD,
Table_Articles.NAME,
(
SELECT MAX(Table_Articles_Registered.DATE)
FROM Table_Articles_Registered
        WHERE Table_Articles_Registered.COD_ARTICLE = Table_Articles.COD
) AS DATE
FROM
Table_Articles
```
|
I think this is a basic aggregation query with a `join`:
```
select a.cod, a.name, max(ar.date) as date
from Articles a join
ArticlesRegisterds ar
on ar.cod_article = a.cod
group by a.cod, a.name
```
|
How to get a correlated subquery as column
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I want to retrieve records for certain days .
```
DateTime BETWEEN DATEADD(dd,-1* 30,GETDATE()) and GETDATE()
```
The above code gets all the records from today's date back to 30 days ago, but I only want the records on the 30th day, not all the ones in between. How do I achieve this?
It won't always be 30 days: if I enter 5 days it must give me only the records for the 5th day, and so on.
|
Return data for the date that is exactly 30 days before today:
```
SELECT * FROM TABLENAME
WHERE DATEDIFF(day, CONVERT(VARCHAR(10), GETDATE(),110),
CONVERT(VARCHAR(10), COLUMNNAME,110)) = -30
```
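A sketch of a sargable alternative (assuming the filter column is a `DATETIME` named `COLUMNNAME`): comparing against a precomputed one-day range lets SQL Server use an index on the column instead of evaluating a conversion per row. `@days` here is a placeholder for the user-supplied day offset:

```sql
DECLARE @days INT = 30;

SELECT *
FROM TABLENAME
WHERE COLUMNNAME >= CAST(DATEADD(DAY, -@days, GETDATE()) AS DATE)
  AND COLUMNNAME <  CAST(DATEADD(DAY, -@days + 1, GETDATE()) AS DATE);
```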
|
Try this: `DATEPART(Day, ...)` returns the day of the month (for today's date it would return `18`). The condition below matches rows dated on the 30th:
```
DATEPART(Day,MyColumn)=30
```
|
how to retrieve records in sql when you pass in a certain day
|
[
"",
"sql",
"sql-server",
""
] |
I'm working with a database structure similar to this one: <http://dev.mysql.com/doc/employee/en/sakila-structure.html>
Table: **employees**
Table with information about each employee.
```
+---------+----------+
| emp_no* | emp_name |
+---------+----------+
| emp1 | John |
| emp2 | Mike |
| emp3 | Rob |
| emp4 | Kim |
+---------+----------+
```
Table: **departments**
Table with information about the departments of the company.
```
+----------+-----------+
| dept_no* | dept_name |
+----------+-----------+
| 1 | Dep 1 |
| 2 | Dep 2 |
| 3 | Dep 3 |
| 4 | Dep 4 |
| 5 | Dep 5 |
+----------+-----------+
```
JUNCTION TABLE: **emp\_dept**
primary key: [ *emp\_no*, *from\_date* ]
Table to keep track of the departments where an employee had worked before or is working right now.
```
+---------+----------+------------+------------+
| emp_no* | dept_no | from_date* | to_date |
+---------+----------+------------+------------+
| emp1 | 1 | 2010-01-01 | 2010-12-31 |
| emp2 | 2 | 2010-01-01 | 2013-10-31 |
| emp1 | 4 | 2010-12-31 | 2012-06-14 |
| emp3 | 3 | 2010-01-01 | 2011-08-14 |
| emp4 | 1 | 2010-01-01 | 2014-11-14 |
| emp2 | 5 | 2013-10-31 | 2014-11-14 |
| emp1 | 3 | 2012-06-14 | 2014-11-17 |
| emp3 | 1 | 2011-08-14 | 2013-07-20 |
| emp3 | 4 | 2013-07-20 | 2014-11-14 |
+---------+----------+------------+------------+
```
**THE EXPECTED TABLE:**
How could I join *only the latest record for each employee from the junction table* (emp\_dept) to my employee table and get a table like the one below?
```
+---------+----------+--------+
| emp_no* | emp_name | dep_no |
+---------+----------+--------+
| emp1 | John | 3 |
| emp2 | Mike | 5 |
| emp3 | Rob | 4 |
| emp4 | Kim | 1 |
+---------+----------+--------+
```
|
Assuming `emp_dept.Emp_no` is the relation between `employees.Emp_no`
```
Select * from
employees e
join emp_dept ed on e.emp_no = ed.emp_no
and from_date = (Select Max(from_date)
from emp_dept ed2 where ed2.emp_no = e.emp_no)
```
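On MySQL 8.0+ (or any database with window functions), the "latest row per employee" can also be picked with `ROW_NUMBER()`; a sketch assuming the same table and column names as above:

```sql
SELECT emp_no, emp_name, dept_no
FROM (
    SELECT e.emp_no, e.emp_name, ed.dept_no,
           ROW_NUMBER() OVER (PARTITION BY e.emp_no
                              ORDER BY ed.from_date DESC) AS rn  -- 1 = latest
    FROM employees e
    JOIN emp_dept ed ON ed.emp_no = e.emp_no
) ranked
WHERE rn = 1;
```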
|
You can get the maximum date in a subquery and join with it.
It looks like you have a typo in the emp\_dept table entries; the emp\_no values do not match the employees table.
If an employee is currently working in a department, will to\_date be NULL?
In that case, you need to handle it in the subquery.
```
SELECT e.emp_no, e.emp_name, ED.dept_no
FROM
(
SELECT emp_no, max(to_date) as maxDate
FROM emp_dept
group by emp_no)T
JOIN employees e
ON T.emp_no = e.emp_no
JOIN emp_dept ED
on T.maxDate = ED.to_date
AND ED.emp_no = T.emp_no
```
|
Querying all data from a table joining ONLY the latest record from a junction table
|
[
"",
"mysql",
"sql",
"database",
"junction-table",
""
] |
We have a table with events (as in calendar event with start and end times) that is regularily queried:
```
TABLE event (
`id` varchar(32) NOT NULL,
`start` datetime,
`end` datetime,
`derivedfrom_id` varchar(32),
`parent_id` varchar(32) NOT NULL
)
```
* The `parent_id` points to a calendar table that provides some additional information.
* Some of the events were created out of another event and hence have a reference pointing to that "origin" event via the `derivedfrom_id` column.
When retrieving a set of events, we usually query by date (`start`/`end`) and calendar (`parent_id`) and limit the number of results via `limit` for paging.
The problem we are now facing: sometimes we need to merge related events for the user into a single representation. So we do our normal query
```
SELECT id, start, parent_id
FROM event
WHERE parent_id in (<list of calendars>)
AND start >= 'some date'
LIMIT x
```
... and then filter out the original events, because the derivates have different information and refer to their origins anyways.
As you might have seen (sooner than we did), we do the limit before the filtering and thus receive a set of events with smaller cardinality than what we initially anticipated, i.e. the number of results is lower than 'x' after the filtering.
The only thing I could think of is to duplicate the query and do a sub-select:
```
SELECT id, start, parent_id
FROM event
WHERE parent_id in (<list_of_calendars>)
AND start >= 'some date'
AND (/* the part below duplicates the previous conditions */
derivedfrom_id is not null
or id not in (
SELECT derivedfrom_id
FROM event
WHERE parent_id in (<list_of_calendars>)
AND start >= 'some date'
AND derivedfrom_id is not null
)
)
LIMIT x
```
But I hardly believe that this is the only way to do this. Especially, since our query is much more complicated.
Is there a better way?
---
### Example Data
(as requested in a comment)
Given these three events:
```
│ *ID* │ *DERIVEDFROM_ID* │ *PARENT_ID* │ *START*
├──────┼──────────────────┼─────────────┼─────────────────
│ 100 │ - │ A │ 2014-11-18 15:00
│ 101 │ 100 │ B │ 2014-11-18 15:00
│ 150 │ - │ A │ 2014-11-20 08:00
```
... and a limit of 2, I want to get events 101 and 150.
Instead, with the current approach:
* The query with a limit of 2 results in events 100 and 101
* After filtering, event 100 is discarded and the only remaining event is 101
### Note on Expected Answer
The SQL above is actually generated from a Java application that uses JPA. My current solution is to generate a where clause and duplicate it. If there is something generic JPA-specific, I would appreciate any pointers.
|
Try this:
```
SELECT e.*
FROM `event` e # 'e' from 'event'
LEFT JOIN `event` d # 'd' from 'derived'; `LEFT JOIN` gets ALL entries from `e`
ON e.id = d.derivedfrom_id # match an event `e` with all those `d` derived from it
WHERE d.id IS NULL # keep only events `e` without derived events `d`
;
```
The `LEFT JOIN` selects all events from `e` and pairs them with the events `d` that derive from them. It ensures **all** the entries from `e` have the chance to be selected, no matter if they have derived events or not. The `WHERE` clause keeps only the events from `e` that do not have derived events. It keeps the derived events and also the original events that do not have derived events but strips out those original events that have derived events.
Add additional `WHERE` conditions on fields of table `e` as you wish, use a `LIMIT` clause, stir well, serve cold.
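The same anti-join can be written with `NOT EXISTS`, which some find easier to read; a sketch with the question's filters folded back in (the calendar list and date are placeholders):

```sql
SELECT e.id, e.start, e.parent_id
FROM event e
WHERE e.parent_id IN (1, 2)          -- placeholder calendar list
  AND e.start >= '2014-11-18'
  AND NOT EXISTS (                   -- skip events that have a derivative
        SELECT 1
        FROM event d
        WHERE d.derivedfrom_id = e.id
      )
LIMIT 2;
```

This keeps derived events and underived originals, and the `LIMIT` is applied only after the originals with derivatives have been filtered out.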
|
I suggest grouping the events by their DERIVEDFROM\_ID — or, if an event is not derived, by its own ID — using MySQL's `IFNULL` method, see [SELECT one column if the other is null](https://stackoverflow.com/questions/5697942/select-one-column-if-the-other-is-null)
```
SELECT id, start, parent_id, text, IFNULL(derivedfrom_id, id) as grouper
FROM event
WHERE parent_id in (<list_of_calendars>)
AND start >= '<some date>'
GROUP BY grouper
LIMIT <x>
```
This would however return randomly the original or a derived event. If you want to get only derived events you'd have to sort your results by ID before grouping (assuming the IDs are ascending and derived events thus have higher IDs than their ancestor). Because it's not possible to run an `ORDER BY` before a `GROUP BY` in MySQL you'll have to resort to an inner join ([MySQL order by before group by](https://stackoverflow.com/questions/14770671/mysql-order-by-before-group-by)):
```
SELECT e1.* FROM event e1
INNER JOIN
(
SELECT max(id) maxId, IFNULL(derivedfrom_id, id) as grouper
FROM event
WHERE parent_id in (<list_of_calendars>)
AND start >= '<some date>'
GROUP BY grouper
) e2
on e1.id = e2.maxId
LIMIT <x>
```
edit: As pointed out by Aaron the assumption of ascending IDs is in conflict with the given data structure. Assuming there is a timestamp `created` you could use a query like this:
```
SELECT e1.* FROM event e1
INNER JOIN
(
SELECT max(created) c, IFNULL(derivedfrom_id, id) grouper
FROM event
WHERE parent_id IN (<list_of_calendars>)
AND start >= '<some date>'
GROUP BY grouper
) e2
ON (e1.id = e2.grouper AND e1.created = c) OR (e1.derivedfrom_id = e2.grouper AND e1.created = c)
LIMIT <x>
```
[SQL Fiddle](http://sqlfiddle.com/#!2/9df5b/3)
|
MySQL filter out self-references
|
[
"",
"mysql",
"sql",
"performance",
""
] |
I have example data in the table as follows:
```
id, name , parentid
--------------------
1 , jason , null
2 , john , 1
3 , abe , 2
4 , mary , 2
5 , yong , 4
6 , albert, 5
7 , jane , 3
8 , alex , 7
```
How do I get the middle part of the tree if I only want 2 levels from the selected `parentid` — for example `john, abe, mary, jane` — in SQL Server?
Thanks in advance to all
|
Check this query: it selects the children of the current parent and the children's children.
```
DECLARE @SelectedID INT
SET @SelectedID = 1
SELECT ID, NAME, PARENTID FROM #TABLE1 WHERE PARENTID = @SelectedID
UNION ALL
SELECT ID, NAME, PARENTID FROM #TABLE1 WHERE PARENTID IN
(SELECT ID FROM #TABLE1 WHERE PARENTID = @SelectedID)
UNION ALL
SELECT ID, NAME, PARENTID FROM #TABLE1 WHERE PARENTID IN
(SELECT ID FROM #TABLE1 WHERE PARENTID IN
(SELECT ID FROM #TABLE1 WHERE PARENTID = @SelectedID))
```
I have created a stored procedure that gets the children based on user input.
Just run the SP (before that, change the table name #Table1 to your table name):
```
CREATE PROCEDURE GetChilds(@SelectedID INT, @SelectedLevel INT)
AS
BEGIN
DECLARE @CurrentLevel INT
SET @CurrentLevel = 0
CREATE TABLE #TABLENew(ID INT, NAME VARCHAR(20), PARENTID INT, Level INT)
CREATE TABLE #TABLETemp1(ID INT)
CREATE TABLE #TABLETemp2(ID INT)
INSERT INTO #TABLETemp1(ID)
SELECT ID FROM #TABLE1 WHERE PARENTID = @SelectedID
INSERT INTO #TABLENew (ID, NAME, PARENTID, Level)
SELECT ID, NAME, PARENTID, @CurrentLevel FROM #TABLE1 WHERE PARENTID IN(@SelectedID)
SET @CurrentLevel = @CurrentLevel + 1
INSERT INTO #TABLETemp2(ID)
SELECT ID FROM #TABLETemp1
WHILE (@CurrentLevel <= @SelectedLevel)
BEGIN
INSERT INTO #TABLENew (ID, NAME, PARENTID, Level)
SELECT ID, NAME, PARENTID, @CurrentLevel FROM #TABLE1 WHERE PARENTID IN(SELECT ID FROM #TABLETemp1)
TRUNCATE TABLE #TABLETemp1
INSERT INTO #TABLETemp1(ID)
SELECT ID FROM #TABLE1 WHERE PARENTID IN(SELECT ID FROM #TABLETemp2)
TRUNCATE TABLE #TABLETemp2
INSERT INTO #TABLETemp2(ID)
SELECT ID FROM #TABLETemp1
SET @CurrentLevel = @CurrentLevel + 1
END
SELECT * FROM #TABLENew
DROP TABLE #TABLENew
DROP TABLE #TABLETemp1
DROP TABLE #TABLETemp2
END
```
To execute (1 = parent ID, 2 = level):
```
EXEC GetChilds 1, 2
```
|
This should get you what you need:
```
WITH CTE
AS(
SELECT ID,Name,parentID, 1 Depth FROM YourTable
UNION ALL
SELECT E.ID,E.Name,E.ParentID,Depth+1 Depth FROM YourTable E
INNER JOIN CTE ON E.ParentID=CTE.ID)
SELECT * FROM CTE
WHERE Depth = 2
```
NOTE that the where clause in the final select filters by depth, and the answer assumes that you are looking for the second level.
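A sketch of a variant that anchors the recursion at a chosen parent, so `Depth` counts levels below that node rather than from the root (assuming the same table and column names as above):

```sql
DECLARE @SelectedID INT = 1;

WITH CTE AS (
    SELECT ID, Name, ParentID, 1 AS Depth
    FROM YourTable
    WHERE ParentID = @SelectedID          -- anchor: direct children only
    UNION ALL
    SELECT t.ID, t.Name, t.ParentID, CTE.Depth + 1
    FROM YourTable t
    INNER JOIN CTE ON t.ParentID = CTE.ID
)
SELECT *
FROM CTE
WHERE Depth <= 2;                         -- first two levels below the parent
```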
|
SQL Server recursive function getting middle data tree
|
[
"",
"sql",
"sql-server",
"performance",
""
] |
## Problem
I am trying to add a second join to a query; however, when I add the second line
```
INNER JOIN dbo.Assets ON Assets.AssetTypeID = AssetTypes.AssetTypeID
```
SQL Server throws this error
```
Msg 1013, Level 16, State 1, Line 1
The objects "dbo.Assets" and "dbo.Assets" in the FROM clause have the same exposed names. Use correlation names to distinguish them.
```
## Query
```
SELECT DENumber, AcquiredDate, ItemDescription, ItemName, LocationName, AssetTypeID
FROM dbo.Assets
INNER JOIN dbo.Locations ON Assets.LocationId = Locations.LocationId
INNER JOIN dbo.Assets ON Assets.AssetTypeID = AssetTypes.AssetTypeID
WHERE DATEDIFF(YEAR, AcquiredDate, GetDate()) >= 7
```
## Question
How do I edit the query to allow both JOINS to work? Possibly using correlation names?
|
```
INNER JOIN dbo.Assets ON Assets.AssetTypeID = AssetTypes.AssetTypeID
```
There is an issue here: is AssetTypes also a table? If so, why isn't it joined in your query?
Always give alias names and try to keep the query short and easy to understand:
```
SELECT Distinct --to remove un-wanted duplicate rows
--I don't know, so you will add alias before each column to avoid error.
DENumber, A.AcquiredDate, ItemDescription, ItemName, LocationName, AssetTypeID
FROM dbo.Assets A
INNER JOIN dbo.Locations L ON A.LocationId = L.LocationId
INNER JOIN dbo.Assets A1 ON A.AssetTypeID = A1.AssetTypeID
--INNER JOIN dbo.AssetsTypes AT ON A.AssetTypeID = AT.AssetTypeID --if assettypes you want to join
WHERE DATEDIFF(YEAR, A.AcquiredDate, GetDate()) >= 7
```
|
This error happens when you reference a table at least twice in the `FROM` clause and you did not specify a table alias to either table so that SQL Server can distinguish one from the other.
In your query you added the `Assets` table in the `FROM` clause without an alias, and again in the `INNER JOIN` without an alias. Try to improve your query like this:
```
SELECT DENumber, AcquiredDate, ItemDescription, ItemName, LocationName, AssetTypeID
FROM dbo.Assets ATS
INNER JOIN dbo.Locations LOC ON ATS.LocationId = LOC.LocationId
INNER JOIN dbo.Assets ATS1 ON ATS.AssetTypeID = ATS1.AssetTypeID
WHERE DATEDIFF(YEAR, AcquiredDate, GetDate()) >= 7
```
for more info on the error you are getting refer <http://www.sql-server-helper.com/error-messages/msg-1013.aspx>
|
Correlation Names, how to use them?
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I need the time difference between two times in minutes. I am having the start time and end time as shown below:
```
start time | End Time
11:15:00 | 13:15:00
10:45:00 | 18:59:00
```
I need the output for first row as 45,60,15 which corresponds to the time difference between 11:15 and 12:00, 12:00 and 13:00, 13:00 and 13:15 respectively.
|
The following works as expected:
```
SELECT Diff = CASE DATEDIFF(HOUR, StartTime, EndTime)
WHEN 0 THEN CAST(DATEDIFF(MINUTE, StartTime, EndTime) AS VARCHAR(10))
ELSE CAST(60 - DATEPART(MINUTE, StartTime) AS VARCHAR(10)) +
REPLICATE(',60', DATEDIFF(HOUR, StartTime, EndTime) - 1) +
+ ',' + CAST(DATEPART(MINUTE, EndTime) AS VARCHAR(10))
END
FROM (VALUES
(CAST('11:15' AS TIME), CAST('13:15' AS TIME)),
(CAST('10:45' AS TIME), CAST('18:59' AS TIME)),
(CAST('10:45' AS TIME), CAST('11:59' AS TIME))
) t (StartTime, EndTime);
```
To get 24 columns, you could use 24 case expressions, something like:
```
SELECT [0] = CASE WHEN DATEDIFF(HOUR, StartTime, EndTime) = 0
THEN DATEDIFF(MINUTE, StartTime, EndTime)
ELSE 60 - DATEPART(MINUTE, StartTime)
END,
[1] = CASE WHEN DATEDIFF(HOUR, StartTime, EndTime) = 1
THEN DATEPART(MINUTE, EndTime)
WHEN DATEDIFF(HOUR, StartTime, EndTime) > 1 THEN 60
END,
[2] = CASE WHEN DATEDIFF(HOUR, StartTime, EndTime) = 2
THEN DATEPART(MINUTE, EndTime)
WHEN DATEDIFF(HOUR, StartTime, EndTime) > 2 THEN 60
END -- ETC
FROM (VALUES
(CAST('11:15' AS TIME), CAST('13:15' AS TIME)),
(CAST('10:45' AS TIME), CAST('18:59' AS TIME)),
(CAST('10:45' AS TIME), CAST('11:59' AS TIME))
) t (StartTime, EndTime);
```
The following also works, and may end up shorter than repeating the same case expression over and over:
```
WITH Numbers (Number) AS
( SELECT ROW_NUMBER() OVER(ORDER BY t1.N) - 1
FROM (VALUES (1), (1), (1), (1), (1), (1)) AS t1 (N)
CROSS JOIN (VALUES (1), (1), (1), (1)) AS t2 (N)
), YourData AS
( SELECT StartTime, EndTime
FROM (VALUES
(CAST('11:15' AS TIME), CAST('13:15' AS TIME)),
(CAST('09:45' AS TIME), CAST('18:59' AS TIME)),
(CAST('10:45' AS TIME), CAST('11:59' AS TIME))
) AS t (StartTime, EndTime)
), PivotData AS
( SELECT t.StartTime,
t.EndTime,
n.Number,
MinuteDiff = CASE WHEN n.Number = 0 AND DATEDIFF(HOUR, StartTime, EndTime) = 0 THEN DATEDIFF(MINUTE, StartTime, EndTime)
WHEN n.Number = 0 THEN 60 - DATEPART(MINUTE, StartTime)
WHEN DATEDIFF(HOUR, t.StartTime, t.EndTime) <= n.Number THEN DATEPART(MINUTE, EndTime)
ELSE 60
END
FROM YourData AS t
INNER JOIN Numbers AS n
ON n.Number <= DATEDIFF(HOUR, StartTime, EndTime)
)
SELECT *
FROM PivotData AS d
PIVOT
( MAX(MinuteDiff)
FOR Number IN
( [0], [1], [2], [3], [4], [5],
[6], [7], [8], [9], [10], [11],
[12], [13], [14], [15], [16], [17],
[18], [19], [20], [21], [22], [23]
)
) AS pvt;
```
It works by joining to a table of 24 numbers, so the case expression doesn't need to be repeated, then rolling these 24 numbers back up into columns using `PIVOT`
|
Use [DateDiff](http://msdn.microsoft.com/en-us/library/ms189794.aspx) with MINUTE difference:
```
SELECT DATEDIFF(MINUTE, '11:10:10' , '11:20:00') AS MinuteDiff
```
Query that may help you:
```
SELECT StartTime, EndTime, DATEDIFF(MINUTE, StartTime , EndTime) AS MinuteDiff
FROM TableName
```
|
Calculate time difference in minutes in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"datetime",
"difference",
""
] |
If I want to add 5 days to a date, I can do it using the `INTERVAL` function:
```
select create_ts + interval '5 days' from abc_company;
```
However, my table has a field called `num_of_days` and I want to add it to my create\_ts. Something like this:
```
select create_ts + interval num_of_days || ' days' from abc_company;
```
This does not work. How can I accomplish this in postgresql?
|
Simply multiply the value with an interval:
```
select create_ts + num_of_days * interval '1' day
from abc_company;
```
Since Postgres 9.4 this is easier done using the `make_interval()` [function](https://www.postgresql.org/docs/9.4/functions-datetime.html):
```
select create_ts + make_interval(days => num_of_days)
from abc_company;
```
|
You just need a working type cast. This kind is standard SQL.
```
select current_timestamp + cast((num_of_days || ' days') as interval)
from abc_company;
```
This is an alternative syntax, peculiar to PostgreSQL.
```
select current_timestamp + (num_of_days || ' days')::interval
from abc_company;
```
I prefer not trying to remember the third kind of type cast supported by PostgreSQL, which is the function-like syntax.
```
select current_timestamp + "interval" (num_of_days || ' days')
from abc_company;
```
Why? Because [some function names have to be quoted](http://www.postgresql.org/docs/9.3/static/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS); *interval* is one of them.
> Also, the names interval, time, and timestamp can only be used in this
> fashion if they are double-quoted, because of syntactic conflicts.
> Therefore, the use of the function-like cast syntax leads to
> inconsistencies and should probably be avoided.
|
Postgres INTERVAL using value from table
|
[
"",
"sql",
"postgresql",
""
] |
I need to query for rows from 3 different tables based upon certain `WHERE` criteria.
My tables are:
* regions
* profiles
* usermeta
The profiles table has two foreign keys, user\_id and region\_id which reference the following tables.
The users table has an id, first\_name and last\_name column. I need to get the first\_name and last\_name for each row that has an id in the profiles table as well.
The regions table has a name and id column. I need to get the name for each region that has a corresponding id in the profiles table.
Out of the profiles table, I also have columns id, certs, and title which I need.
Right now I'm pulling these with three separate queries which is so inefficient. I know I can do it with JOINS, but I'm not sure what the syntax is supposed to be.
UPDATE:
After a lot of help from you guys and doing some additional digging, here is the right code for the query in PHP and WordPress formats:
```
$select_sql = "SELECT p.id, p.title, concat( u1.meta_value, ' ', u2.meta_value ) as fullname, concat( r.name, ', ', c.name ) as location";
$from_sql = "FROM {$modules->tables->profiles} p";
$join_sql = "LEFT JOIN {$wpdb->usermeta} u1 ON p.user_id = u1.user_id AND u1.meta_key = 'first_name'";
$join_sql .= " LEFT JOIN {$wpdb->usermeta} u2 ON p.user_id = u2.user_id AND u2.meta_key = 'last_name'";
$join_sql .= " LEFT JOIN {$modules->tables->regions} r ON r.id = p.region_id";
$join_sql .= " LEFT JOIN {$modules->tables->countries} c ON c.id = p.country_id";
$where_sql = "WHERE p.certification IN ( '%s' ) AND p.country_id IN ( %d )";
$orderby_sql= "ORDER BY p.user_id ASC";
$q = $wpdb->prepare( "{$select_sql} {$from_sql} {$join_sql} {$where_sql} {$orderby_sql}", $certification, $country_id );
$results = $wpdb->get_results( $q );
```
A var\_dump on $q gives us the following SQL:
```
string(515) "SELECT p.id, p.title, concat( u1.meta_value, ' ', u2.meta_value ) as fullname, concat( r.name, ', ', c.name ) as location FROM modules_profiles p LEFT JOIN moonlight_usermeta u1 ON p.user_id = u1.user_id AND u1.meta_key = 'first_name' LEFT JOIN moonlight_usermeta u2 ON p.user_id = u2.user_id AND u2.meta_key = 'last_name' LEFT JOIN modules_regions r ON r.id = p.region_id LEFT JOIN modules_countries c ON c.id = p.country_id WHERE p.certification IN ( 'certified' ) AND p.country_id IN ( 2 ) ORDER BY p.user_id ASC"
```
|
Do a `JOIN` between all the tables. The query below assembles the information provided across your posts:
```
select users.first_name,
users.last_name,
regions.name,
profiles.id,
profiles.certs,
profiles.title
from users
left join profiles on profiles.user_id = users.id
left join regions on profiles.region_id = regions.id;
```
|
The syntax for a join looks something like this:
```
SELECT stuff
FROM firstTable
JOIN secondTable ON firstTable.someColumn = secondTable.matchingColumn
```
In this case, I also recommend using table aliases (just adding an alias right after the table name is enough). This is just a simple push in the right direction:
```
SELECT columnsYouNeed
FROM users u
JOIN profiles p ON p.user_id = u.id
JOIN regions r ON r.id = p.region_id;
```
This is the syntax you are looking for, I'll challenge you to input the columns properly and add any WHERE clause you might need. If you are still stuck, feel free to ask questions.
|
Syntax for joining multiple tables
|
[
"",
"mysql",
"sql",
"wordpress",
"join",
""
] |
How can I change a column's text to `Not Exists` when it is empty or null?
My query :
```
Select TOP 1 ISNULL(NULLIF(DR.Name,''),'Not Exists') as Name,
DR.Name as Name ,Coalesce(NullIf(rtrim(DR.Name),''),'Not Exist') as Name,
Name = case when DR.Name is null then 'Not Exists'
when DR.Name='' then 'Not Exists' else DR.Name end
from Transfer TR
join Driver DR on DR.OID=TR.DriverID
WHERE TR.TruckID=51 AND TR.Statues<>7 and TR.DateScheduled<GETDATE()
AND TR.DateScheduled>=DATEADD(DAY,-7,GETDATE()) ORDER BY TR.OID DESC
```
Result :

|
If you just need a single column, then you can use a sub-select, this way when no rows are returned by the query you will still get `not exists`:
```
SELECT Name = ISNULL(( SELECT TOP 1 NULLIF(DR.Name,'')
FROM Transfer AS TR
INNER JOIN Driver AS DR
ON DR.OID = TR.DriverID
WHERE TR.TruckID = 51
AND TR.Statues <> 7
AND TR.DateScheduled < GETDATE()
AND TR.DateScheduled >= DATEADD(DAY, -7, GETDATE())
ORDER BY TR.OID DESC), 'Not Exists');
```
If you need multiple columns then you could union your `Not Exists` record to the bottom of the query, place all this inside a subquery then select the top 1 again, ensuring that your actual value takes precedence (by adding the column `SortOrder`):
```
SELECT TOP 1 Name, SomeOtherColumn
FROM ( SELECT TOP 1
Name = NULLIF(DR.Name,''),
SomeOtherColumn,
SortOrder = 0
FROM Transfer AS TR
INNER JOIN Driver AS DR
ON DR.OID = TR.DriverID
WHERE TR.TruckID = 51
AND TR.Statues <> 7
AND TR.DateScheduled < GETDATE()
AND TR.DateScheduled >= DATEADD(DAY, -7, GETDATE())
ORDER BY TR.OID DESC
UNION ALL
SELECT 'Not Exists', NULL, 1
) AS t
ORDER BY SortOrder;
```
|
I'm not entirely sure I understand your question, but if you are trying to catch nulls and empty strings "in one go", try this:
```
select TOP 1
case when LEN(LTRIM(RTRIM(coalesce(DR.Name, '')))) = 0 then
'Not Exists'
else
DR.Name
end as Name
....
```
The `coalesce` catches the `NULL`s and substitutes a replacement value; trimming removes any padding, and the length check tests whether what is left is an empty string — so this covers nulls as well as padded and non-padded trivial strings.
|
Check and Change for empty or null value of column in SQL?
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
Is there a way, through the `information_schema` or otherwise, to calculate how many percent of each column of a table (or a set of tables, better yet) are `NULL`s?
|
[Your query](https://stackoverflow.com/a/27019681/939860) has a number of problems, most importantly you are not escaping identifiers (which could lead to exceptions at best or SQL injection attacks in the worst case) and you are not taking the schema into account.
Use instead:
```
SELECT 'SELECT ' || string_agg(concat('round(100 - 100 * count(', col
, ') / count(*)::numeric, 2) AS ', col_pct), E'\n , ')
|| E'\nFROM ' || tbl
FROM (
SELECT quote_ident(table_schema) || '.' || quote_ident(table_name) AS tbl
, quote_ident(column_name) AS col
, quote_ident(column_name || '_pct') AS col_pct
FROM information_schema.columns
WHERE table_name = 'my_table_name'
ORDER BY ordinal_position
) sub
GROUP BY tbl;
```
Produces a query like:
```
SELECT round(100 - 100 * count(id) / count(*)::numeric, 2) AS id_pct
, round(100 - 100 * count(day) / count(*)::numeric, 2) AS day_pct
, round(100 - 100 * count("oDd X") / count(*)::numeric, 2) AS "oDd X_pct"
FROM public.my_table_name;
```
Closely related answer on dba.SE with a lot more details:
* [Check whether empty strings are present in character-type columns](https://dba.stackexchange.com/questions/81966/check-whether-empty-strings-are-present-in-character-type-columns/82030#82030)
|
In PostgreSQL you can compute this easily from the statistics tables, provided autovacuum is on (check with `SHOW autovacuum;`). You can also tune the autovacuum settings to control how quickly the statistics are refreshed. You can then get the NULL fraction (`null_frac`, a value between 0 and 1) simply with the query below:
```
select attname, null_frac from pg_stats where tablename = 'table_name'
```
|
Count how many percent of values on each column are nulls
|
[
"",
"sql",
"postgresql",
"information-schema",
""
] |
I am trying to run the query below in order to get ordered data from the category_child table and, accordingly, from the category table.

```
select * from category where id in (select child_id from category_child where category_id=1 order by sequence);
```
It's like
```
select * from category where id in (2,3,4);
```
and
```
select * from category where id in (3,2,4);
```
are giving me the same result.
Is there any way I can get the results in the same order as the subquery?
category and category\_child tables are:
```
-- Table structure for table `category`
--
DROP TABLE IF EXISTS `category`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `category` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(50) NOT NULL,
  `description` VARCHAR(250) NOT NULL,
  `image_url` VARCHAR(250),
  `created_on` timestamp NOT NULL DEFAULT '2014-11-06 00:00:00',
  `updated_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Table structure for table `category_child`
--
DROP TABLE IF EXISTS `category_child`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `category_child` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `category_id` int(11) NOT NULL,
  `child_id` int(11) NOT NULL,
  `child_type` VARCHAR(100) NOT NULL,
  `sequence` int(4) NOT NULL,
  `created_on` timestamp NOT NULL DEFAULT '2014-11-06 00:00:00',
  `updated_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
```
|
The following query
```
select c.*
from category c, category_child cc
where cc.category_id=1 and c.id=cc.child_id
order by cc.sequence;
```
will work; it is just a small change to the condition in the answer given by Nir-Z, using an implicit join instead of the explicit one.
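To see why this works, here is a hedged Python/SQLite sketch with invented data; the ordering comes from the mapping table's `sequence` column rather than from the `IN` list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE category_child (category_id INT, child_id INT, sequence INT);
    INSERT INTO category VALUES (2, 'Books'), (3, 'Music'), (4, 'Games');
    INSERT INTO category_child VALUES (1, 3, 1), (1, 2, 2), (1, 4, 3);
""")
# An IN (...) lookup would return ids in table order; joining lets us
# order by the sequence column of the mapping table instead.
rows = conn.execute("""
    SELECT c.id, c.name
    FROM category c
    JOIN category_child cc ON cc.child_id = c.id
    WHERE cc.category_id = 1
    ORDER BY cc.sequence
""").fetchall()
print(rows)
```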
|
Why not do this:
```
Select c.*
from category c
join category_child cc on cc.child_id = c.id
where cc.category_id=1
order by cc.sequence
```
|
how to get ordered result set while using in keyword in mysql
|
[
"",
"mysql",
"sql",
""
] |
I have a table in SQL that is a list of users checking in to a website. It looks much like this:
```
id | date | status
------------------
```
Status can be 0 for not checking in, 1 for checked in, 2 for covered, and 3 for absent.
I'm trying to build one single query that lists all rows with `status = 0`, but also has a COUNT on how many rows have `status = 3` on each particular `id`.
Is this possible?
|
# MySQL VERSION
just join a count that is joined by id.
```
SELECT t.*, COALESCE(t1.status_3_count, 0) as status_3_count
FROM yourtable t
LEFT JOIN
( SELECT id, SUM(status=3) as status_3_count
FROM yourtable
GROUP BY id
) t1 ON t1.id = t.id
WHERE t.status = 0
```
Note: this is doing a boolean sum (effectively a count).
The expression `status=3` evaluates to true or false, i.e. 1 or 0, so summing it returns the count of rows with status = 3 for each id.
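The boolean-sum trick can be illustrated with SQLite as well, since it also evaluates comparisons to 1/0 (table and values invented here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkins (id INT, status INT)")
conn.executemany("INSERT INTO checkins VALUES (?, ?)",
                 [(1, 0), (1, 3), (1, 3), (2, 0), (2, 1)])

# (status = 3) evaluates to 1 or 0 per row, so SUM() counts the matches.
rows = conn.execute("""
    SELECT id, SUM(status = 3) AS status_3_count
    FROM checkins
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)
```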
# SQL SERVER VERSION
```
SELECT id, SUM(CASE WHEN status = 3 THEN 1 ELSE 0 END) as status_3_count
FROM yourtable
GROUP BY id
```
or just use a `WHERE status = 3` and a `COUNT(id)`
|
Try a dependent subquery:
```
SELECT t1.*,
( SELECT count(*)
FROM sometable t2
WHERE t2.id = t1.id
AND t2.status = 3
) As somecolumnname
FROM sometable t1
WHERE t1.status=0
```
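A quick, hypothetical SQLite check of the correlated-subquery approach (data invented to match the question's shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (id INT, status INT)")
conn.executemany("INSERT INTO sometable VALUES (?, ?)",
                 [(1, 0), (1, 3), (1, 3), (2, 0)])

# For every status = 0 row, count that id's status = 3 rows in a subquery.
rows = conn.execute("""
    SELECT t1.id,
           (SELECT COUNT(*) FROM sometable t2
            WHERE t2.id = t1.id AND t2.status = 3) AS status_3_count
    FROM sometable t1
    WHERE t1.status = 0
    ORDER BY t1.id
""").fetchall()
print(rows)
```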
|
SELECT with a COUNT of another SELECT
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have written a stored procedure that selects from one table (some columns contain null data) and inserts into another table without duplicates.
Part of my stored procedure code:
```
LOOP
BEGIN
SELECT ID
INTO ROWCOUNT
FROM TBL_EMPLOYEE
WHERE
NAME = TEMPTABLE.NAME
AND AGE = TEMPTABLE.AGE;
EXCEPTION
WHEN no_data_found THEN
ROWCOUNT := 0;
END;
IF ROWCOUNT = 0 THEN
INSERT INTO TARGET TABLE ......
```
In the above piece of code some columns contain null data (e.g. TEMPTABLE.AGE).
When there is a null value, the lookup throws the no_data_found exception and the row gets inserted. We tried wrapping the columns in the nvl function instead, but that version is taking much more time:
```
LOOP
BEGIN
SELECT ID
INTO ROWCOUNT
FROM TBL_EMPLOYEE
WHERE
nvl(NAME,0000) = nvl(TEMPTABLE.NAME,0000)
AND nvl(AGE,0000) = nvl(TEMPTABLE.AGE,0000);
EXCEPTION
WHEN no_data_found THEN
ROWCOUNT := 0;
END;
IF ROWCOUNT = 0 THEN
INSERT INTO TARGET TABLE ......
```
Can anyone suggest an alternative way to do a null check with the equals operator?
I have tried it with LIKE as well, but it did not work.
|
I think you want to do this:
```
NAME = VALUE {in case of Some Value}
```
and
```
NAME IS NULL {in case the value is null}
```
Oracle doesn't handle it on its own. I think this is a logical condition and it should be handled explicitly in the query. Let me know if I have answered/understood you correctly.
|
You are going about the check in the wrong way. Eliminate both the `ROWCOUNT` variable and the loop altogether and use `NOT EXISTS()`:
```
BEGIN
INSERT INTO TARGET_TABLE
SELECT .....
FROM DUAL
WHERE NOT EXISTS (
SELECT *
FROM TBL_EMPLOYEE
WHERE nvl(NAME,0000) = nvl(TEMPTABLE.NAME,0000)
             AND nvl(AGE,0000) = nvl(TEMPTABLE.AGE,0000));
END;
```
And your condition may have a bug: If your intention is to treat two nulls as "equal", change the condition to:
```
WHERE (NAME = TEMPTABLE.NAME OR NVL(NAME, TEMPTABLE.NAME) IS NULL)
AND (AGE = TEMPTABLE.AGE OR NVL(AGE, TEMPTABLE.AGE) IS NULL)
```
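The expanded form of that null-safe comparison can be sketched in Python with SQLite (the table name is reused from the question, the data is invented); two NULLs are treated as equal:

```python
import sqlite3

def null_safe_match(conn, name, age):
    # (col = val OR both are NULL): the null-safe equality discussed above.
    return conn.execute("""
        SELECT COUNT(*) FROM TBL_EMPLOYEE
        WHERE (NAME = ? OR (NAME IS NULL AND ? IS NULL))
          AND (AGE  = ? OR (AGE  IS NULL AND ? IS NULL))
    """, (name, name, age, age)).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TBL_EMPLOYEE (NAME TEXT, AGE INT)")
conn.executemany("INSERT INTO TBL_EMPLOYEE VALUES (?, ?)",
                 [("Ann", 30), ("Bob", None), (None, None)])

print(null_safe_match(conn, "Bob", None))  # NULL age matches NULL age
print(null_safe_match(conn, None, None))   # the both-NULL row matches
```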
|
what is the alternative way to check for null with out using is null operator in oracle
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
Is there a way to generate unique (distinct) random numbers in posgresql (and redshift in particular)? The following code generates 5 random integers from 1 to 10 *with* replacement:
```
SELECT round(random()*(10 - 1) + 1) from generate_series(1, 5);
```
Is there a way to generate a set of 5 random integers from 1 to 10 *without* replacement? Thanks.
|
You could use `generate_series` to generate all the numbers between 1 and 10, shuffle them in the `order by` clause and then take the top 5:
```
SELECT num
FROM GENERATE_SERIES (1, 10) AS s(num)
ORDER BY RANDOM()
LIMIT 5
```
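Since the core idea is `generate_series` plus `ORDER BY RANDOM() LIMIT n`, here is a runnable SQLite sketch that uses a recursive CTE in place of `generate_series` (SQLite does not ship `generate_series` by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A recursive CTE stands in for generate_series(1, 10).
rows = conn.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM nums WHERE n < 10
    )
    SELECT n FROM nums ORDER BY RANDOM() LIMIT 5
""").fetchall()
picks = [r[0] for r in rows]
print(picks)
# Exactly five numbers, all distinct, all within 1..10: no replacement.
```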
|
Try these selects:
**1.)**
```
SELECT (9 * random())::int + 1 r_num
FROM generate_series(1, 10)
GROUP BY 1
LIMIT 5;
```
**2.)**
```
SELECT round(random()*(10 - 1) + 1)::int r_num
FROM generate_series(1, 10)
GROUP by 1
LIMIT 5;
```
|
Unique random integers in postgresql
|
[
"",
"sql",
"postgresql",
"random",
"amazon-redshift",
""
] |
I need help with a SQL Server query. I'm trying to update an existing table (#Masterfile) based on the results of left joining it with another outside source (table2). Column 6 is basically a flag that says whether the ID exists in table2 or not. I am new to SQL (I just started learning a couple of weeks ago), so my syntax is probably very basic. I left joined #Masterfile with the new table of interest (table2) and then selected the result (updating the column6_flag).
However, I get an error identifying the statement (`, a.column6_flag = case when x.column9 is not NULL then 1 else 0 end`) as the culprit. Any help would be much appreciated! I went over the CASE WHEN syntax again but can't identify why it yields an error. When I comment out the CASE WHEN portion of the query, it works.
Thanks again!
```
--Insert into #MasterFile
select distinct
a.column1
, a.column2
, a.column3
, a.column4
, a.column5
, a.column6_flag = case when x.column9 is not NULL then 1 else 0 end
, a.column7
, x.column9
from
#Alldata a
left join
(select m.column9
from #Masterfile m
left join table2 n on m.id = n.id) x on a.id = x.id
```
|
You just need to move the case around so it's like this:
```
select distinct
a.column1
, a.column2
, a.column3
, a.column4
, a.column5
, case when x.column9 is not NULL then 1 else 0 end AS column6_flag
, a.column7
, x.column9
from #Alldata a
left join
(
select
        m.id, m.column9
from #Masterfile m
left join table2 n
on m.id=n.id
) x
on a.id=x.id
```
|
Replace `a.column6_flag` with just `column6_flag`
```
column6_flag = case when x.column9 is not NULL then 1 else 0 end
```
|
Updating sql table with results from left join (sql server)
|
[
"",
"sql",
"sql-server",
""
] |
How can I remove the trailing numbers from the data and group the results in T-SQL?
Query
`select browser from Table1 group by browser`
Result
```
Chrome11
Chrome28
Chrome30
Chrome31
Chrome32
Chrome33
Firefox12
Firefox22
Firefox30
Firefox32
Firefox33
IE10
IE7
IE8
IE9
IEMobile
IEMobile10
InternetExplorer11
Mozilla
Opera9
Safari
Safari5
Safari6
```
Required format
```
Chrome
Firefox
IE
Safari
InternetExplorer
Mozilla
IEMobile
```
Note: we don't know which browser the client is using.
|
This will work for you:
```
select distinct browser from (
select REPLACE (REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE
(browser, '0', ''),'1', ''),'2', ''),'3', ''),'4', ''),'5', ''),
'6', ''),'7', ''),'8', ''),'9', '') as browser
from TestTable group by browser) temp
```
|
You could do something like this:
**Test data:**
```
DECLARE @tbl TABLE(browser VARCHAR(100))
INSERT INTO @tbl
VALUES
('Chrome11'),('Chrome28'),('Chrome30'),
('Chrome31'),('Chrome32'),('Chrome33'),
('Firefox12'),('Firefox22'),('Firefox30'),
('Firefox32'),('Firefox33'),('IE10'),
('IE7'),('IE8'),('IE9'),('IEMobile'),
('IEMobile10'),('InternetExplorer11'),('Mozilla'),
('Opera9'),('Safari'),('Safari5'),('Safari6')
```
And then a query like this:
```
;WITH CTE
AS
(
SELECT
(
CASE
WHEN PATINDEX('%[0-9.-]%', browser) = 0
THEN browser
ELSE SUBSTRING(browser,0,PATINDEX('%[0-9.-]%', browser))
END
) AS browser
FROM
@tbl AS tbl
)
SELECT
*
FROM
CTE
GROUP BY
CTE.browser
```
This will get you an output like this:
```
Chrome
Firefox
IE
IEMobile
InternetExplorer
Mozilla
Opera
Safari
```
Reference:
* [SUBSTRING (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms187748.aspx)
* [PATINDEX (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms188395.aspx)
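For comparison, the same cut-at-the-first-digit logic that `PATINDEX('%[0-9.-]%', ...)` implements above is a one-line regular expression outside T-SQL; a Python sketch using browser strings from the question:

```python
import re

browsers = ["Chrome11", "Firefox33", "IE10", "IEMobile", "IEMobile10",
            "InternetExplorer11", "Mozilla", "Opera9", "Safari", "Safari6"]

def strip_version(name: str) -> str:
    # Mirror of the PATINDEX('%[0-9.-]%', ...) logic: cut at the first
    # digit, dot, or dash; names with no version pass through unchanged.
    return re.sub(r"[0-9.\-].*$", "", name)

families = sorted({strip_version(b) for b in browsers})
print(families)
```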
|
Remove last number digit in T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to convert my query from inner join to straight join. My original query was:
`SELECT COUNT(*) FROM x AS t1 INNER JOIN x AS t2 USING (id);` And I'd like to simply replace inner join to straight join: `SELECT COUNT(*) FROM x AS t1 STRAIGHT_JOIN x AS t2 USING (id);`
The first query executes as expected, but the second query throws a syntax error. It works if I change the query to simply `SELECT COUNT(*) FROM x AS t1 STRAIGHT_JOIN x AS t2 ON t1.id=t2.id`.
Does MySQL disallow straight join with using? I didn't see that in the docs.
|
Yes, MySQL disallows straight join with USING. The syntax rules in <http://dev.mysql.com/doc/refman/5.6/en/join.html> show this:
```
join_table:
table_reference [INNER | CROSS] JOIN table_factor [join_condition]
| table_reference STRAIGHT_JOIN table_factor
| table_reference STRAIGHT_JOIN table_factor ON conditional_expr
| table_reference {LEFT|RIGHT} [OUTER] JOIN table_reference join_condition
| table_reference NATURAL [{LEFT|RIGHT} [OUTER]] JOIN table_factor
join_condition:
ON conditional_expr
| USING (column_list)
```
You can see that the syntax with `STRAIGHT_JOIN` allows only no condition at all, or an `ON` clause. Other join types allow `[join_condition]`, which is another syntax rule that expands to either an `ON` clause or a `USING` expression.
I don't know if there's a good reason for this different treatment of `STRAIGHT_JOIN`. Perhaps it was simply an oversight. For what it's worth, this was reported as a bug back in 2008: <http://bugs.mysql.com/bug.php?id=35268>
But there's no explanation for the difference, and no sign that it has any priority to get fixed.
As a workaround, you could apply the STRAIGHT\_JOIN query modifier to the whole query, and then use standard JOIN operators:
```
SELECT STRAIGHT_JOIN COUNT(*) FROM x AS t1 INNER JOIN x AS t2 USING (id)
```
|
I think you can go for `NATURAL JOIN` in MySQL like
```
SELECT COUNT(*) FROM x AS t1 NATURAL JOIN x AS t2;
```
|
Does MySQL allow using syntax with straight join?
|
[
"",
"mysql",
"sql",
"join",
"rdbms",
""
] |
```
CREATE TABLE GroovyExps_Tgt
(EMPNO SMALLINT,
FIRSTNAME VARCHAR(20) NOT NULL,
MIDINIT CHAR(1) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
SALARY_INT INT,
SALARY_Decimal DECIMAL,
SALARY_Numeric NUMERIC,
SALARY_FLOAT FLOAT(9),
SALARY_MONEY MONEY,
SALARY_SMALLMONEY SMALLMONEY,
BIRTHDATE DATETIME,
HIREDATE_DATETIME DATETIME,
JOIN_TIME TIME,
JOINTIME DATETIME)
insert into GroovyExps_Tgt
values(000010, 'CHRISTINE', 'I', 'HAAS', 52750, 52750.45, 52750.45000045, 52750.45454,
52750, 52750, '1980-08-22', '2014-08-22 10:00:00.000000', '16:00', '2014-08-22 10:00:00.000000')
```
Error:
> Msg 241, Level 16, State 1, Line 1
> Conversion failed when converting date and/or time from character string.
What should I do to successfully insert data into this table?
|
SQL Server's `datetime` type supports at most three fractional-second digits, so the six-digit values in your insert cannot be converted. These two queries demonstrate the problem: the first fails, the second succeeds:
```
select CAST('2014-08-22 10:00:00.000000' as datetime)
select CAST('2014-08-22 10:00:00.000' as datetime)
```
|
You're missing the time part of the data you are trying to insert under the `BIRTHDATE` column.
Try inserting `'1980-08-22 00:00:00.00'` instead of just `1980-08-22` if you don't have a time part for that value since the `datetime` data type requires both date and time values.
|
SQL SERVER 2008 data Insertion Error
|
[
"",
"sql",
"sql-server-2008",
"date",
""
] |
I have 10 tables and more than 10,000 records that contain ``.
How can I search for `` and replace it in the DB?
Since `` equals 0xEF,0xBB,0xBF, how can I search for this?
I used this code:
```
WITH foo(myvarbincolumn) AS
(
SELECT text from BPM_Letters
)
SELECT *
FROM foo
WHERE CONVERT(VARCHAR(max), myvarbincolumn) COLLATE Arabic_CI_AS
LIKE '%' + CONVERT(NVARCHAR(max), CHAR(0xEF)+CHAR(0xBB)+CHAR(0xBF)) + '%'
```
I found this code on Stack Overflow, but it is incomplete.
[script of `BPM_Letters`](https://drive.google.com/file/d/0B-NJ1J4kViZhcTRfWmxrMVBBNXM/view?usp=sharing)
This code does not find any record!
Please help me.
|
I wrote a query to find that weird character via the query below:
```
SELECT cast(LEFT(text,1) AS VARBINARY(MAX)) from BPM_Letters
```
and the result was 0xFFFE. So I wrote this query and it worked perfectly:
```
UPDATE BPM_Letters Set text=REPLACE(text,0xFFFE,'');
```
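If you end up cleaning the data before it reaches the database, the same BOM handling is easy in Python. Note that 0xEF 0xBB 0xBF is the UTF-8 encoding of the BOM code point U+FEFF, which is why the `0xFFFE` byte pattern showed up when casting the `nvarchar` (UTF-16) data above:

```python
raw = b"\xef\xbb\xbfHello"      # UTF-8 bytes with a leading BOM
text = raw.decode("utf-8-sig")  # the -sig codec strips the BOM on decode
print(text)

# For strings already decoded, remove the BOM code point directly:
s = "\ufeffHello"
cleaned = s.replace("\ufeff", "")
print(cleaned)
```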
|
What about this CTE:
```
;WITH StripBOM AS
(
SELECT CASE
WHEN LEFT(text,3) = 0xEFBBBF
THEN CONVERT(varbinary(max),SUBSTRING(text, 4, LEN(text)))
ELSE text
END AS text
FROM BPM_Letters
)
```
It should provide you with a new table where all BOM characters have been stripped off.
P.S. This code assumes '`text`' field is of type `varbinary`.
|
Search and Replace Byte Order Mark In Sql Server
|
[
"",
"sql",
"sql-server",
"search",
"encoding",
"byte-order-mark",
""
] |
Is there a SQL command for starting at a certain amount of results?
Example:
```
SELECT * FROM table WHERE ID=1 BEGIN AT 100
```
|
> SELECT \* FROM table WHERE ID=1 LIMIT 100,500;
This returns 500 rows, skipping the first 100.
|
Try to create a temporary **Row Number** then use that as criteria for your starting point which is in your case 100, like:
```
SELECT * FROM (
SELECT (@RowNum := @RowNum + 1) as Row, tbl.* FROM tbl,
(SELECT @RowNum:=0) ctr
) t
WHERE Row >= 100
```
See `Demo`
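The MySQL-style `LIMIT offset, count` form happens to be supported by SQLite as well, so the behaviour is easy to verify (invented table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INT)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [(i,) for i in range(200)])

# LIMIT offset, count: skip the first 100 rows, return the next 50.
rows = conn.execute("SELECT id FROM t ORDER BY id LIMIT 100, 50").fetchall()
print(rows[0], rows[-1], len(rows))
```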
|
MYSQL start at nth result (opposite of LIMIT)
|
[
"",
"mysql",
"sql",
"sql-limit",
""
] |
I am trying to get a selected item from datagridview to show in a combobox.
I have done
```
ComboBox1.Text = DataGridView1.CurrentCell.Value
```
However, I need it to display two fields from the datagrid in the combobox. Is there a way to do the above with a SQL query? I need to concatenate values in a query to display in the combobox, because right now it's only displaying one of the fields I need. Thank you :-)
|
You could try doing something like this.
```
ComboBox1.Text = DataGridView1.SelectedRows.Item(0).Cells(0).FormattedValue + " " + DataGridView1.SelectedRows.Item(0).Cells(1).FormattedValue
```
or
```
ComboBox1.Text = DataGridView1.SelectedRows.Item(0).Cells(0).FormattedValue + " " + _
DataGridView1.SelectedRows.Item(0).Cells(1).FormattedValue
```
However, if your drop-down list is bound to an ID value and you also have that ID in the grid, you can set:
```
ComboBox1.SelectedValue = DataGridView1.Rows(DataGridView1.CurrentRow.Index).Cells("HiddenIdRow").Value.ToString()
```
|
Try using `ComboBox1.Items.Add(cellValue1 & " " & cellValue2)`.
`ComboBox1.Text` will just set the displayed text, but will not create a list of items.
|
Selected item from Datagridview to Show in ComboBox
|
[
"",
"sql",
"vb.net",
"drop-down-menu",
"datagridview",
"combobox",
""
] |
I have a problem I can't get my head around.
I have a table of events and a table of users. Some events are chargeable and some are not. What I want to establish is the number chargeable days per week month and year for every user.
Here is the data...
```
ID FromDate ToDate isChargeable Username
1 2014-11-03 2014-11-04 Y AUser
2 2014-11-04 2014-11-06 Y AUser
3 2014-11-07 2014-11-07 Y AUser
```
And I've written a basic query to calculate the difference between the from and to dates and sum them adding 1 to the total as the toDate counts as a full day.
```
SELECT DISTINCT FromDate, ToDate, isChargeable, Username, DATEDIFF(day, FromDate, ToDate) + 1 AS 'count1'
FROM dbo.vDiary
GROUP BY FromDate, ToDate, isChargeable,Username
```
This results in...
```
FromDate ToDate isChargeable Username count1
2014-11-03 2014-11-04 Y AUser 2
2014-11-04 2014-11-06 Y AUser 3
2014-11-07 2014-11-07 Y AUser 1
```
This is incorrect: it shows 6 working days because two events overlap.
How can I allow for this overlap? I need to state that if there is a chargeable event on a particular day, then that day is a chargeable day.
I think I need a calendar table, but I'm not sure how to use one to get the right results.
Any help would be really appreciated!!
Thanks
Steve
EDIT: Trying solution from below
```
;WITH cte
AS (SELECT Row_number()OVER(PARTITION BY Username ORDER BY FromDate) rn,
FromDate,ToDate,isChargeable,Username
FROM #calen)
SELECT a.FromDate,a.ToDate,a.isChargeable,a.Username,
CASE
WHEN a.FromDate > b.ToDate
OR b.ToDate IS NULL THEN Datediff(day, a.FromDate, a.ToDate) + 1
ELSE Datediff(day, Dateadd(dd, 1, b.ToDate), a.ToDate)+ 1
END DatDiff,
*
FROM cte a
LEFT JOIN cte b
ON a.rn = b.rn + 1
AND a.username = b.username
WHERE ( a.ToDate > b.todate
OR b.ToDate IS NULL )
```
Produces (updated)
```
FromDate ToDate isChargeable Username DatDiff rn FromDate ToDate isChargeable Username rn FromDate ToDate isChargeable Username
2014-11-03 2014-11-04 Y AUser 2 1 2014-11-03 2014-11-04 Y AUser NULL NULL NULL NULL NULL
2014-11-04 2014-11-06 Y AUser 2 2 2014-11-04 2014-11-06 Y AUser 1 2014-11-03 2014-11-04 Y AUser
2014-11-07 2014-11-07 Y AUser 1 3 2014-11-07 2014-11-07 Y AUser 2 2014-11-04 2014-11-06 Y AUser
2014-12-03 2014-12-15 Y AUser 13 4 2014-12-03 2014-12-15 Y AUser 3 2014-11-07 2014-11-07 Y AUser
2014-12-10 2014-12-17 Y AUser 2 5 2014-12-10 2014-12-17 Y AUser 4 2014-12-03 2014-12-15 Y AUser
2015-12-04 2015-12-15 Y AUser 12 6 2015-12-04 2015-12-15 Y AUser 5 2014-12-10 2014-12-17 Y AUser
2014-11-03 2014-11-03 Y BUser 1 1 2014-11-03 2014-11-03 Y BUser NULL NULL NULL NULL NULL
2014-11-04 2014-11-04 Y BUser 1 2 2014-11-04 2014-11-04 Y BUser 1 2014-11-03 2014-11-03 Y BUser
```
|
Try this.
If you have Unique ID's try this.
```
create table #calen(ID int, FromDate date, ToDate date,isChargeable char(1),Username varchar(20))
INSERT #calen
VALUES (1,'2014-11-03','2014-11-04','Y ','AUser'),
(2,'2014-11-04','2014-11-06','Y','AUser'),
(3,'2014-11-07','2014-11-07','Y','AUser')
SELECT a.FromDate,a.ToDate,a.isChargeable,a.Username,
CASE
WHEN a.FromDate > b.ToDate
OR b.ToDate IS NULL THEN Datediff(day, a.FromDate, a.ToDate) + 1
ELSE Datediff(day, Dateadd(dd, 1, b.ToDate), a.ToDate)
+ 1
END DatDiff
FROM #calen a
LEFT JOIN #calen b
ON a.id = b.id + 1
WHERE ( a.ToDate > b.todate
OR b.ToDate IS NULL )
```
**Update :**
For more than one user.
```
;WITH cte
AS (SELECT Row_number()OVER(PARTITION BY Username ORDER BY FromDate) rn,
FromDate,ToDate,isChargeable,Username
FROM #calen)
SELECT a.FromDate,a.ToDate,a.isChargeable,a.Username,
CASE
WHEN a.FromDate > b.ToDate
OR b.ToDate IS NULL THEN Datediff(day, a.FromDate, a.ToDate) + 1
ELSE Datediff(day, Dateadd(dd, 1, b.ToDate), a.ToDate)+ 1
END DatDiff
FROM cte a
LEFT JOIN cte b
ON a.rn = b.rn + 1
AND a.username = b.username
WHERE ( a.ToDate > b.todate
OR b.ToDate IS NULL )
```
**OUTPUT :**
```
FromDate ToDate isChargeable Username DatDiff
---------- ---------- ------------ -------- -------
2014-11-03 2014-11-04 Y AUser 2
2014-11-04 2014-11-06 Y AUser 2
2014-11-07 2014-11-07 Y AUser 1
```
|
If all the ID values are not unique we can have the below query:
```
SELECT ID, FromDate, ToDate, isChargeable, Username,
CASE WHEN EXISTS(SELECT ToDate FROM UserTable B WHERE A.FromDate = B.ToDate
AND A.UserName = B.UserName AND A.ID <> B.ID) THEN
DATEDIFF(day, FromDate, ToDate)
ELSE
DATEDIFF(day, FromDate, ToDate) + 1
END AS 'count1'
FROM UserTable A
```
else we have to create CTE to create unique ROWID.
```
;WITH UserTable
AS (SELECT Row_number() OVER(ORDER BY FromDate) RowID,
FromDate,ToDate, isChargeable, Username
FROM #user)
SELECT RowID, FromDate, ToDate, isChargeable, Username,
CASE WHEN EXISTS(SELECT ToDate FROM UserTable B WHERE A.FromDate = B.ToDate
AND A.UserName = B.UserName AND A.RowID <> B.RowID) THEN
DATEDIFF(day, FromDate, ToDate)
ELSE
DATEDIFF(day, FromDate, ToDate) + 1
END AS 'count1'
FROM UserTable A
```
|
Working days SQL query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
SELECT
sku_master.sku
, sku_master.description
, sku_master.min_on_hand
, sku_master.max_on_hand
, location_inventory.qty_on_hand
FROM [FCI].[dbo].[location_inventory]
JOIN [FCI].dbo.[sku_master] ON location_inventory.sku = sku_master.sku
WHERE min_on_hand > 0
GROUP BY sku_master.sku
```
What I am trying to achieve is to group all duplicate SKUs together and sum the qty_on_hand; I just can't figure out how.
|
This will get you the total by SKU:
```
SELECT
sku_master.sku
, sum(location_inventory.qty_on_hand) as total_qty_on_hand
FROM [FCI].[dbo].[location_inventory]
JOIN [FCI].dbo.[sku_master] ON location_inventory.sku = sku_master.sku
WHERE min_on_hand > 0
GROUP BY sku_master.sku
```
If you want other attributes from the SKU_MASTER table as well, you can join this result to another query:
```
SELECT
sku_master.sku
, sku_master.description
, sku_master.min_on_hand
, sku_master.max_on_hand
, x.total_qty_on_hand
FROM [FCI].dbo.[sku_master]
inner join
(
SELECT
sku_master.sku
, sum(location_inventory.qty_on_hand) as total_qty_on_hand
FROM [FCI].[dbo].[location_inventory]
JOIN [FCI].dbo.[sku_master] ON location_inventory.sku = sku_master.sku
WHERE min_on_hand > 0
GROUP BY sku_master.sku
) x on sku_master.sku = x.sku
```
|
```
SELECT
sku_master.sku
, sku_master.description
, sku_master.min_on_hand
, sku_master.max_on_hand
, sum(location_inventory.qty_on_hand)
FROM [FCI].[dbo].[location_inventory]
JOIN [FCI].dbo.[sku_master] ON location_inventory.sku = sku_master.sku
WHERE min_on_hand > 0
GROUP BY sku_master.sku,
sku_master.description,
sku_master.min_on_hand,
sku_master.max_on_hand;
```
|
Remove Duplicates and SUM
|
[
"",
"sql",
""
] |
Here is my schema:
```
create TABLE sample (
userId INT,
score INT
);
INSERT INTO sample VALUES (1,10);
INSERT INTO sample VALUES (1,15);
INSERT INTO sample VALUES (1,20);
INSERT INTO sample VALUES (2,100);
INSERT INTO sample VALUES (2,200);
INSERT INTO sample VALUES (2,500);
INSERT INTO sample VALUES (4,100);
INSERT INTO sample VALUES (4,200);
INSERT INTO sample VALUES (4,500);
INSERT INTO sample VALUES (3,5);
INSERT INTO sample VALUES (3,5);
INSERT INTO sample VALUES (3,10);
INSERT INTO sample VALUES (3,7);
INSERT INTO sample VALUES (3,2);
```
I want to find the user IDs of those who have the maximum average score. Note there could be more than one! So for the above sample data, the answer would be: 2 and 4, because they both have an average score of 266.666... .
I have a working SQL for this problem:
```
select s.USERID
from sample s
group by USERID
having AVG(s.SCORE) IN (
-- Gets the Maximum Average Score (returns only 1 result)
select MAX(average_score) as Max_Average_Score
from (
-- Gets the Average Score
select AVG(s2.SCORE) as average_score
from sample s2
group by USERID
)
);
```
But I think it is a bit inefficient because I'm calculating the average score twice. Once for the main SQL and again for finding the max avg score. Is there a better way?
Note: I'm using SQL Plus
|
I can only say how I would solve this with DB2 SQL.
I would create a temporary table in which you save the average score for each user, then select the maximum value from it. This is possible in SQL*Plus too: [How to create a temporary table in Oracle](https://stackoverflow.com/questions/2671518/how-to-create-a-temporary-table-in-oracle).
Here is the solution in DB2 Syntax (not tested)
<http://www.cs.newpaltz.edu/~pletcha/DB/db2_TempTables.html>
```
WITH tempTable
AS (SELECT userid, AVG(score) AS score FROM sample GROUP BY userid)
SELECT * FROM tempTable WHERE score = (SELECT MAX(score) FROM tempTable)
```
|
```
select userid from
(select userid, rank() over (order by avg(score) desc) rw
from sample group by userid)
where rw = 1;
```
Calculate the average score for each user, then calculate the rank of each average using analytic functions (which are evaluated after the grouping). Lastly, keep the rows with rank 1.
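The `RANK()` approach can be tested with SQLite (version 3.25+ supports window functions; the data is trimmed from the question's sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (userId INT, score INT)")
conn.executemany("INSERT INTO sample VALUES (?, ?)",
                 [(1, 10), (1, 20), (2, 100), (2, 500),
                  (4, 100), (4, 500), (3, 5)])

# RANK() over the per-user averages: ties share rank 1, so every user
# with the maximum average is kept.
rows = conn.execute("""
    SELECT userid FROM (
        SELECT userid, RANK() OVER (ORDER BY AVG(score) DESC) rw
        FROM sample GROUP BY userid
    ) WHERE rw = 1 ORDER BY userid
""").fetchall()
print([r[0] for r in rows])
```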
|
SQL- Finding the users with the Maximum average score
|
[
"",
"sql",
"oracle",
""
] |
My requirement is to get each client's latest order, and then get top 100 records.
I wrote the query below to get the latest order for each client. The inner query works fine, but I don't know how to get the first 100 rows from its results.
```
SELECT * FROM (
SELECT id, client_id, ROW_NUMBER() OVER(PARTITION BY client_id ORDER BY create_time DESC) rn
FROM order
) WHERE rn=1
```
Any ideas? Thanks.
|
Assuming that create\_time contains the time the order was created, and you want the 100 clients with the latest orders, you can:
* add the create\_time in your innermost query
* order the results of your outer query by the `create_time desc`
* add an outermost query that filters the first 100 rows using `ROWNUM`
Query:
```
SELECT * FROM (
SELECT * FROM (
SELECT
id,
client_id,
create_time,
ROW_NUMBER() OVER(PARTITION BY client_id ORDER BY create_time DESC) rn
FROM order
)
WHERE rn=1
ORDER BY create_time desc
) WHERE rownum <= 100
```
**UPDATE for Oracle 12c**
With release 12.1, Oracle introduced ["real" Top-N queries](https://oracle-base.com/articles/misc/top-n-queries). Using the new `FETCH FIRST...` syntax, you can also use:
```
SELECT * FROM (
SELECT
id,
client_id,
create_time,
ROW_NUMBER() OVER(PARTITION BY client_id ORDER BY create_time DESC) rn
FROM order
)
WHERE rn = 1
ORDER BY create_time desc
FETCH FIRST 100 ROWS ONLY
```
|
You should use `ROWNUM` in Oracle to do what you seek:
```
where rownum <= 100
```
see also those answers to help you
[limit in oracle](https://stackoverflow.com/questions/470542/how-do-i-limit-the-number-of-rows-returned-by-an-oracle-query-after-ordering)
[select top in oracle](https://stackoverflow.com/questions/3451534/how-to-do-top-1-in-oracle)
[select top in oracle 2](https://stackoverflow.com/questions/2498035/oracle-select-top-10-records)
|
How to Select Top 100 rows in Oracle?
|
[
"",
"sql",
"oracle",
""
] |
Imagine I have two tables, A and B.
What I need is to get the data that exists in A but not in B; in this case my SELECT would have to return "2".
I've done it before, but right know I can't remember how. I suppose it was something like this:
```
SELECT a.*
FROM A as a
LEFT JOIN B AS b ON b.column = a.column
```
But it's not working. Can someone help me, please?
Thanks in advance.
|
You're just missing a filter:
```
SELECT a.*
FROM A as a
LEFT JOIN B AS b ON b.column = a.column
WHERE B.column IS NULL
```
|
If `B` could have multiple rows that match `A`, then this query would be more appropriate:
```
SELECT a.*
FROM A as a
WHERE NOT EXISTS(SELECT NULL FROM B WHERE b.column = a.column)
```
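Both forms are anti-joins and return the same rows for the example in the question; a small SQLite check (note that the duplicate in B does not duplicate the output, since unmatched rows join to a single NULL row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (col INT); CREATE TABLE B (col INT);
    INSERT INTO A VALUES (1), (2), (3);
    INSERT INTO B VALUES (1), (3), (3);
""")
left_anti = conn.execute("""
    SELECT a.col FROM A a
    LEFT JOIN B b ON b.col = a.col
    WHERE b.col IS NULL
""").fetchall()
not_exists = conn.execute("""
    SELECT a.col FROM A a
    WHERE NOT EXISTS (SELECT 1 FROM B b WHERE b.col = a.col)
""").fetchall()
print(left_anti, not_exists)
```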
|
Select data that exists in A but not in B
|
[
"",
"sql",
"select",
"join",
"left-join",
"exists",
""
] |
I have looked high and low on SO for an answer to this question over the last couple of hours (subqueries, CTEs, left joins with derived tables), but none of the solutions really meet my criteria.
I have a table with data like this :
```
COL1 COL2 COL3
1 A 0
2 A 1
3 A 1
4 B 0
5 B 0
6 B 0
7 B 0
8 B 1
```
Column 1 is the `primary key` and is an `int`, column 2 is `nvarchar(max)`, and column 3 is an `int`. I have determined that by using this query:
```
select name, COUNT(name) as 'count'
FROM [dbo].[AppConfig]
group by Name
having COUNT(name) > 3
```
I can return the total counts of "A", "B" and "C", but only for names that occur more than 3 times. I am now trying to remove all rows after the first occurrence of each value of column 3 within each name. The sample table I provided would then look like this:
```
COL1 COL2 COL3
1 A 0
2 A 1
4 B 0
8 B 1
```
Could anyone assist me with this?
|
If all you want is the first row with a ColB-ColC combination, the following will do it:
```
select min(id) as id, colB, colC
from tbl
group by colB, colC
order by id
```
[SQL Fiddle](http://sqlfiddle.com/#!3/b8ebca/5)
|
This should work:
```
;WITH numbered_rows as (
SELECT
Col1,
Col2,
Col3,
        ROW_NUMBER() OVER(PARTITION BY Col2, Col3 ORDER BY Col1) as row
FROM AppConfig)
SELECT
Col1,
Col2,
Col3
FROM numbered_rows
WHERE row = 1
```
|
Removing rows in SQL that have a duplicate column value
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have 3 DB table:
```
users: id, name
posts: id, user_id, content
post_likes: id, user_id, post_id
```
I want to fetch all of my user's likes, except the likes I made on my own posts. In other words: fetch all of my likes on other people's posts.
How can I filter by a related table's column (`posts.user_id`) when I'm actually fetching from `post_likes`?
|
Try this one
```
SELECT * FROM post_likes pl
LEFT JOIN posts p ON pl.post_id=p.id
WHERE pl.user_id = 3 AND p.user_id != 3
```
|
You can try with this
```
SELECT * FROM post_likes pls
INNER JOIN posts p ON pls.post_id=p.id
WHERE pls.user_id = 3 AND p.user_id != 3
```
|
SQL - Filter by a relationship
|
[
"",
"sql",
"join",
"relationship",
""
] |
I am trying to copy table information from a backup dummy database to our live SQL database (an accident happened in our program, Visma Business, where someone managed to overwrite 1300 customer names), but I am having a hard time figuring out the right code for this. I've looked around and, yes, there are several similar problems, but I just can't get this to work even though I've tried different solutions.
Here is the simple code I used last time. In theory, all I need is the equivalent of MySQL's ON DUPLICATE KEY UPDATE, which would be MERGE on SQL Server? I just don't quite know what to write to get that MERGE to work.
```
INSERT [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]
```
The error message I get with this is:
> Violation of PRIMARY KEY constraint 'PK\_\_Actor'. Cannot insert duplicate key in object 'dbo.Actor'.
|
What the error message says is simply **"You can't add the same value if an attribute has a PK constraint."** If you already have all the information in your backup table, what you should do is `TRUNCATE TABLE`, which removes all rows from a table while the table structure and its columns, constraints, indexes, and so on remain.
After that step you should follow this [answer](https://stackoverflow.com/q/187770/1581921). Alternatively, I recommend a tool called [Kettle](http://wiki.pentaho.com/display/BAD/Downloads), which is open source and easy to use for these kinds of data movements. That will save you a lot of work.
|
The `MERGE` statement is possibly the best fit here, unless the primary key of the `Actor` table is reused after a previous record is deleted (i.e. not autoincremented), so that, say, the record with id 13 in `F0001.dbo.Actor` is not the same "actor" information as in `FDummy.dbo.Actor`.
To use the statement with your code, it will look something like this:
```
begin transaction
merge [F0001].[dbo].[Actor] as t -- the destination
using [FDummy].[dbo].[Actor] as s -- the source
on (t.[PRIMARYKEY] = s.[PRIMARYKEY]) -- update with your primary keys
when matched then
update set t.columnname1 = s.columnname1,
t.columnname2 = s.columnname2,
t.columnname3 = s.columnname3
-- repeat for all your columns that you want to update
output $action,
Inserted.*,
Deleted.*;
rollback transaction -- change to commit after testing
```
Further reading can be done at the sources below:
[MERGE (Transact-SQL)](http://msdn.microsoft.com/en-us/library/bb510625.aspx)
[Inserting, Updating, and Deleting Data by Using MERGE](http://technet.microsoft.com/en-us/library/bb522522%28v=sql.105%29.aspx)
[Using MERGE in SQL Server to insert, update and delete at the same time](http://www.mssqltips.com/sqlservertip/1704/using-merge-in-sql-server-to-insert-update-and-delete-at-the-same-time/)
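SQLite has no `MERGE`, but its `INSERT ... ON CONFLICT DO UPDATE` upsert (SQLite >= 3.24) gives the same "update when matched, insert when not" behaviour, so the idea can be sketched portably; the table and data below are illustrative stand-ins for `dbo.Actor`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE live  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE backup(id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO live   VALUES (1, 'overwritten'), (2, 'ok');
    INSERT INTO backup VALUES (1, 'Alice'), (3, 'Carol');
""")
# Restore from backup: update matching ids, insert missing ones.
# (The WHERE clause on the SELECT is required by SQLite's upsert parser.)
conn.execute("""
    INSERT INTO live (id, name)
    SELECT id, name FROM backup WHERE 1
    ON CONFLICT(id) DO UPDATE SET name = excluded.name
""")
rows = conn.execute("SELECT id, name FROM live ORDER BY id").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'ok'), (3, 'Carol')]
```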
|
Trying to copy one table from another database to another in SQL Server 2008 R2
|
[
"",
"sql",
"database",
"sql-server-2008",
"insert",
"duplicate-data",
""
] |
Can I have a condition of something like this:
```
SELECT * FROM table WHERE ? LIKE (column || '%')
```
Where the `?` is a string parameter value. For example, each of these **parameter values `?`** should match when the column is equal to `/admin/products`:
```
/admin/products/1
/admin/products/new
/admin/products/1/edit
```
Is this possible?
**Update:** Added test case.
Basically, the where clause would render like this:
```
1. ? LIKE (column || '%')
2. '/admin/products/1' like ('/admin/products' || %)
3. '/admin/products/1' like ('/admin/products%')
```
But it always return false for me.
These queries works fine though:
```
column = '/admin/products' --returns true
column = '/admin/products/1' --returns false
column LIKE '/admin/prod%' --returns true
```
The problem arises when I put the parameter `?` before the `LIKE` clause. Is it not allowed?
If it's not, are there any workarounds for this?
|
The query:
```
SELECT * FROM table WHERE ? LIKE (col || '%');
```
can be rewritten as (Postgres and MySQL):
```
SELECT * FROM table WHERE col = left(?, length(col));
```
As commented, the first form should work as well. It can be tricky, though, because **characters with special meaning** for [`LIKE`](http://www.postgresql.org/docs/current/interactive/functions-matching.html#FUNCTIONS-LIKE) (at least `_%\`) *in the column* would have to be escaped. If you want it to work with both MySQL and Postgres, you'll have to observe special characters in both implementations. So the 2nd form is **much less error-prone** in principle.
### Performance
Neither of these queries can use an index on `col`, both are not [sargable](http://en.wikipedia.org/wiki/Sargable). The problem can be re-assessed as finding all possible prefixes to the given search pattern `?`, which can be optimized in a similar fashion like in this related answer (for Postgres) on dba.SE:
* [Algorithm for finding the longest prefix](https://dba.stackexchange.com/a/43444/3684)
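The prefix rewrite can be tried directly in SQLite, which lacks `left()` but has `substr()` with the same effect; the route data is taken from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE routes (col TEXT)")
conn.executemany("INSERT INTO routes VALUES (?)",
                 [("/admin/products",), ("/admin/orders",)])

# col = left(?, length(col)) expressed via substr(): the column must be
# a prefix of the parameter.
param = "/admin/products/1/edit"
rows = conn.execute(
    "SELECT col FROM routes WHERE col = substr(?, 1, length(col))",
    (param,)
).fetchall()
print(rows)  # [('/admin/products',)]
```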
|
Replacing
```
SELECT * FROM table WHERE ? LIKE (column || '%')
```
by
```
SELECT * FROM table WHERE ? LIKE CONCAT(column, '%')
```
works for me.
Maybe || is used as logical or-operation instead of concatenation.
|
? LIKE (column || '%')
|
[
"",
"mysql",
"sql",
"postgresql",
"pattern-matching",
"prefix",
""
] |
I'm working on an app that should find a 26-character code in a char(26) column among 5,760,000 rows. I need to know how long it's going to take. I'm using MS SQL Server 2012 Express.
I have a database which has only one table, `myTable`:
```
Idcolumn integer
CodeColumn char(26)
DateAndTimeColumn datetime
```
* Column 'CodeColumn' has an index.
* IdColumn is simply integer ID.
* CodeColumn has "00592098715648275649283746" format (this is an example).
* DateAndTimeColumn is a timestamp.
I would like to populate this table with data to do some tests and to find out how long it takes to get an answer from the database. I don't know how to write a proper T-SQL statement to populate my table with 5,760,000 rows, especially since the second column is very long. How can I populate the table?
Let's say the data should be like this when I use statement
```
SELECT IdColumn, CodeColumn, DateAndTimeColumn FROM myTable;
```
Output:
```
1 00000000000000000000000001 2014-11-19 15:46:50.843
2 00000000000000000000000002 2014-11-19 15:46:54.310
3 00000000000000000000000003 2014-11-19 15:46:56.060
```
and so on ... till 5,760,000 rows.
How can I do that?
|
```
;WITH Numbers AS
(
SELECT TOP (5760000)
IdColumn = CONVERT(INT, ROW_NUMBER() OVER (ORDER BY s1.[object_id]))
FROM sys.all_objects AS s1
CROSS JOIN sys.all_objects AS s2
CROSS JOIN sys.all_objects AS s3
)
INSERT INTO dbo.YourTable
SELECT IdColumn,
RIGHT(REPLICATE('0',26)+CONVERT(VARCHAR(26),IdColumn),26) CodeColumn,
GETDATE() DateAndTimeColumn
FROM Numbers;
```
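The same pattern (number generator plus zero-padding) can be expressed as a SQLite recursive CTE; the row count is scaled down to 10 here just to keep the demo fast, and the column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE myTable (IdColumn INTEGER, CodeColumn TEXT, DateAndTimeColumn TEXT)")
conn.execute("""
    WITH RECURSIVE numbers(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM numbers WHERE n < 10
    )
    INSERT INTO myTable
    SELECT n,
           printf('%026d', n),   -- zero-pad the id out to 26 characters
           datetime('now')
    FROM numbers
""")
rows = conn.execute(
    "SELECT IdColumn, CodeColumn FROM myTable ORDER BY IdColumn").fetchall()
print(rows[0])  # (1, '00000000000000000000000001')
```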
|
Here is another way to do this using Lamak's excellent example. The only difference is that this will create a 10-million-row CTE with zero reads. When you use `sys.all_objects` it can get extremely slow because of all the I/O.
```
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b),
E4(N) AS (SELECT 1 FROM E2 a, E2 b),
E6(N) AS (SELECT 1 from E4 a, E2 b, E1 c),
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E6
)
INSERT INTO dbo.YourTable
SELECT IdColumn,
RIGHT(REPLICATE('0',26)+CONVERT(VARCHAR(26),IdColumn),26) CodeColumn,
GETDATE() DateAndTimeColumn
FROM cteTally
where cteTally.N <= 5760000
```
|
How to populate a table with 5 million rows in MS SQL Server?
|
[
"",
"sql",
"sql-server",
""
] |
Need some help with an issue I'm having.
This is the code:
```
Private Sub dtpStartDate_ValueChanged(sender As Object, e As EventArgs) Handles dtpStartDate.ValueChanged
startDateChanged = 1
If endDateChanged = 1 Then
cbLocation.Enabled = True
cbLocation.Items.Clear()
cbLocation.Items.Add(New ListViewItem(""))
Dim unbookedLocationsSQL As String = "SELECT locationID FROM Locations WHERE NOT EXISTS (Select LocationID FROM Bookings WHERE @startDate <= bookingEndDate AND bookingStartDate <= @endDate)"
Dim unbookedLocationsCommand = New OleDbCommand(unbookedLocationsSQL, globalVariables.objConnection)
Dim unbookedLocationsAdapter As New OleDbDataAdapter(unbookedLocationsSQL, globalVariables.objConnection)
Dim unbookedLocationsDataSet As New DataSet
unbookedLocationsCommand.Parameters.AddWithValue("startDate", dtpStartDate.Value)
unbookedLocationsCommand.Parameters.AddWithValue("endDate", dtpEndDate.Value)
unbookedLocationsAdapter.Fill(unbookedLocationsDataSet, "Locations")
With cbLocation
.DataSource = unbookedLocationsDataSet.Tables("Locations")
.DisplayMember = "locationID"
.ValueMember = "locationID"
End With
End If
End Sub
```
First things first: if you change the SQL statement to `SELECT * FROM Locations`, the combo box displays all of the locations.
What I am trying to achieve is this; when someone changes both date time pickers, the combo box is enabled and populated with a list of locations that aren't booked between those two dates, which is determined by the bookings table.
I know the SQL statement is wrong. I've tried various combinations of things and tried isolating bit and pieces but I can't get any subquery to do what I want.
Any help would be appreciated.
|
Here is the code now it's working:
```
Private Sub dtpStartDate_ValueChanged(sender As Object, e As EventArgs) Handles dtpStartDate.ValueChanged
startDateChanged = 1
If endDateChanged = 1 Then
cbLocation.Enabled = True
Me.Refresh()
cbLocation.Items.Add(New ListViewItem(""))
Dim unbookedLocationsSQL As String = "SELECT * FROM Locations WHERE LocationID NOT IN (SELECT LocationID FROM Bookings WHERE bookingEndDate >= @startDate AND bookingStartDate <= @endDate)"
Dim unbookedLocationsCommand = New OleDbCommand(unbookedLocationsSQL, globalVariables.objConnection)
Dim unbookedLocationsAdapter As New OleDbDataAdapter(unbookedLocationsSQL, globalVariables.objConnection)
Dim unbookedLocationsDataSet As New DataSet
unbookedLocationsCommand.Parameters.AddWithValue("startDate", dtpStartDate.Value)
unbookedLocationsCommand.Parameters.AddWithValue("endDate", dtpEndDate.Value)
unbookedLocationsAdapter.SelectCommand = unbookedLocationsCommand
unbookedLocationsAdapter.Fill(unbookedLocationsDataSet, "Locations")
With cbLocation
.DataSource = unbookedLocationsDataSet.Tables("Locations")
.DisplayMember = "LocationName"
.ValueMember = "LocationID"
End With
End If
End Sub
```
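The corrected overlap test (`bookingEndDate >= @startDate AND bookingStartDate <= @endDate`) can be verified against a throwaway SQLite schema with invented sample bookings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Locations (LocationID INTEGER PRIMARY KEY, LocationName TEXT);
    CREATE TABLE Bookings  (LocationID INTEGER, bookingStartDate TEXT, bookingEndDate TEXT);
    INSERT INTO Locations VALUES (1, 'Hall A'), (2, 'Hall B');
    INSERT INTO Bookings  VALUES (1, '2024-05-10', '2024-05-12');
""")
# Requested window overlaps Hall A's booking, so only Hall B is free.
start, end = "2024-05-11", "2024-05-15"
rows = conn.execute("""
    SELECT LocationName FROM Locations
    WHERE LocationID NOT IN (
        SELECT LocationID FROM Bookings
        WHERE bookingEndDate >= ? AND bookingStartDate <= ?
    )
""", (start, end)).fetchall()
print(rows)  # [('Hall B',)]
```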
|
I think something is wrong here
```
WHERE @startDate <= bookingEndDate AND bookingStartDate <= @endDate
```
try changing it to
```
WHERE bookingStartDate >= @startDate AND bookingEndDate <= @endDate
```
Also keep in mind to use `.Date` on your parameter values:
```
unbookedLocationsCommand.Parameters.AddWithValue("startDate", dtpStartDate.Value.Date)
```
|
Trouble with SELECT WHERE NOT IS/EXISTS Subquery (VB.NET, Access)
|
[
"",
"sql",
"vb.net",
"subquery",
""
] |
Suppose I have 2 data frames A and B that both have c1, c2, c3 as columns
I wish to get a equivalent of
```
SELECT A.C1 AS "C1", A.C2 AS "C2", (A.C3 - B.C3) AS "C3"
FROM A
JOIN B
ON (A.C1 = B.C1) AND (A.C2 = B.C2);
```
How do I perform the same operation in R without using for-loop?
Edit:add example
```
A
c1, c2, c3
1, 1, 3
1, 2, 4
1, 2, 5
2, 1, 5
B
c1, c2, c3
1, 1, 2
1, 2, 4
2, 1, 8
```
should produce
```
c1, c2, c3
1, 1, 1
1, 2, 0
1, 2, 1
2, 1, -3
```
|
Lots of possible answers here, but here's one (of the many) using just standard/base functions:
```
> foo <- merge(x = A, y = B, by.x = c("c1", "c2"), by.y = c("c1", "c2"))
> foo$c3 <- foo$c3.x - foo$c3.y
> foo <- foo[c("c1", "c2", "c3")]
```
Generally speaking, `merge(...)` is the function you seek to replicate SQL-like joins.
Also consider using R packages that permit use of SQL directly on data frames, e.g. the [sqldf](http://code.google.com/p/sqldf/) package.
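For reference, the SQL that the `merge()` call replicates can be run on the question's sample data (here via SQLite from Python) so the expected result is easy to verify:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (c1 INT, c2 INT, c3 INT);
    CREATE TABLE B (c1 INT, c2 INT, c3 INT);
    INSERT INTO A VALUES (1,1,3),(1,2,4),(1,2,5),(2,1,5);
    INSERT INTO B VALUES (1,1,2),(1,2,4),(2,1,8);
""")
rows = conn.execute("""
    SELECT A.c1, A.c2, A.c3 - B.c3
    FROM A JOIN B ON A.c1 = B.c1 AND A.c2 = B.c2
    ORDER BY A.c1, A.c2, A.c3
""").fetchall()
print(rows)  # [(1, 1, 1), (1, 2, 0), (1, 2, 1), (2, 1, -3)]
```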
|
Here is a `data.table` alternative.
```
A = read.table(text="c1, c2, c3
1, 1, 3
1, 2, 4
1, 2, 5
2, 1, 5", header=T, sep=",")
B = read.table(text="c1, c2, c3
1, 1, 2
1, 2, 4
2, 1, 8", header=T, sep=",")
setDT(A); setDT(B) # convert data frames to data tables
setkey(A, c1, c2) # key columns c1, c2 for "joining"
new_dt <- A[B][,list(c1, # the operation
c2,
c3 = c3-i.c3)]
new_dt
# you get
c1 c2 c3
1: 1 1 1
2: 1 2 0
3: 1 2 1
4: 2 1 -3
```
|
R data frame equivalence of SQL join and select
|
[
"",
"sql",
"r",
""
] |
I have the following SQL statement where the ORDER BY clause is passed dynamically.
How can I pass ORDER BY ASC or DESC to the SQL dynamically?
```
SELECT table1.prod_id,table2.prod_name from table1 left outer join table2
ON table1.prod1 = table2.prod_id
ORDER BY CASE WHEN :odb = 1 THEN prod_id END
```
I would like to pass `order by asc or desc` to the above SQL dynamically;
how can I do this?
|
You can do solutions like @TonyAndrews by manipulating numeric or data values. For `VARCHAR2` an alternative to dynamic SQL could be to have two expressions:
```
order by
case when :sorting='ASC' then col1 end ASC,
case when :sorting='DESC' then col1 end DESC
```
When `:sorting` has the value `'ASC'` the result of that `ORDER BY` becomes like if it had been:
```
order by
col1 ASC,
NULL DESC
```
When `:sorting` has the value `'DESC'` the result of that `ORDER BY` becomes like if it had been:
```
order by
NULL ASC,
col1 DESC
```
One downside to this method is that those cases where the optimizer can skip a SORT operation because there is an index involved that makes the data already sorted like desired, that will not happen when using the CASE method like this. This will mandate a sorting operation no matter what.
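The two-CASE trick works the same way in SQLite, so it can be checked both ways on throwaway data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("b",), ("a",), ("c",)])

def fetch(sorting):
    # Only one of the two CASE keys is non-NULL, selecting the direction.
    return [r[0] for r in conn.execute("""
        SELECT col1 FROM t
        ORDER BY CASE WHEN ? = 'ASC'  THEN col1 END ASC,
                 CASE WHEN ? = 'DESC' THEN col1 END DESC
    """, (sorting, sorting))]

print(fetch("ASC"))   # ['a', 'b', 'c']
print(fetch("DESC"))  # ['c', 'b', 'a']
```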
|
If the column you are sorting by is numeric then you could do this:
```
order by case when :dir='ASC' then numcol ELSE -numcol END
```
For a date column you could do:
```
order by case when :dir='ASC' then (datecol - date '1901-01-01')
else (date '4000-12-31' - datecol) end
```
I can't think of a sensible way for a VARCHAR2 column, other than using dynamic SQL to construct the query (which would work for any data type of course).
|
SQL Dynamic ASC and DESC
|
[
"",
"sql",
"oracle",
"sql-order-by",
""
] |
I need to generate a full list of row\_numbers for a data table with many columns.
In SQL, this would look like this:
```
select
key_value,
col1,
col2,
col3,
row_number() over (partition by key_value order by col1, col2 desc, col3)
from
temp
;
```
Now, let's say in Spark I have an RDD of the form (K, V), where V=(col1, col2, col3), so my entries are like
```
(key1, (1,2,3))
(key1, (1,4,7))
(key1, (2,2,3))
(key2, (5,5,5))
(key2, (5,5,9))
(key2, (7,5,5))
etc.
```
I want to order these using commands like sortBy(), sortWith(), sortByKey(), zipWithIndex, etc. and have a new RDD with the correct row\_number
```
(key1, (1,2,3), 2)
(key1, (1,4,7), 1)
(key1, (2,2,3), 3)
(key2, (5,5,5), 1)
(key2, (5,5,9), 2)
(key2, (7,5,5), 3)
etc.
```
(I don't care about the parentheses, so the form can also be (K, (col1,col2,col3,rownum)) instead)
How do I do this?
Here's my first attempt:
```
val sample_data = Seq(((3,4),5,5,5),((3,4),5,5,9),((3,4),7,5,5),((1,2),1,2,3),((1,2),1,4,7),((1,2),2,2,3))
val temp1 = sc.parallelize(sample_data)
temp1.collect().foreach(println)
// ((3,4),5,5,5)
// ((3,4),5,5,9)
// ((3,4),7,5,5)
// ((1,2),1,2,3)
// ((1,2),1,4,7)
// ((1,2),2,2,3)
temp1.map(x => (x, 1)).sortByKey().zipWithIndex.collect().foreach(println)
// ((((1,2),1,2,3),1),0)
// ((((1,2),1,4,7),1),1)
// ((((1,2),2,2,3),1),2)
// ((((3,4),5,5,5),1),3)
// ((((3,4),5,5,9),1),4)
// ((((3,4),7,5,5),1),5)
// note that this isn't ordering with a partition on key value K!
val temp2 = temp1.???
```
Also note that the function sortBy cannot be applied directly to an RDD, but one must run collect() first, and then the output isn't an RDD, either, but an array
```
temp1.collect().sortBy(a => a._2 -> -a._3 -> a._4).foreach(println)
// ((1,2),1,4,7)
// ((1,2),1,2,3)
// ((1,2),2,2,3)
// ((3,4),5,5,5)
// ((3,4),5,5,9)
// ((3,4),7,5,5)
```
Here's a little more progress, but still not partitioned:
```
val temp2 = sc.parallelize(temp1.map(a => (a._1,(a._2, a._3, a._4))).collect().sortBy(a => a._2._1 -> -a._2._2 -> a._2._3)).zipWithIndex.map(a => (a._1._1, a._1._2._1, a._1._2._2, a._1._2._3, a._2 + 1))
temp2.collect().foreach(println)
// ((1,2),1,4,7,1)
// ((1,2),1,2,3,2)
// ((1,2),2,2,3,3)
// ((3,4),5,5,5,4)
// ((3,4),5,5,9,5)
// ((3,4),7,5,5,6)
```
|
The `row_number() over (partition by ... order by ...)` functionality was added to Spark 1.4. This answer uses PySpark/DataFrames.
Create a test DataFrame:
```
from pyspark.sql import Row, functions as F
testDF = sc.parallelize(
(Row(k="key1", v=(1,2,3)),
Row(k="key1", v=(1,4,7)),
Row(k="key1", v=(2,2,3)),
Row(k="key2", v=(5,5,5)),
Row(k="key2", v=(5,5,9)),
Row(k="key2", v=(7,5,5))
)
).toDF()
```
Add the partitioned row number:
```
from pyspark.sql.window import Window
(testDF
.select("k", "v",
F.rowNumber()
.over(Window
.partitionBy("k")
.orderBy("k")
)
.alias("rowNum")
)
.show()
)
+----+-------+------+
| k| v|rowNum|
+----+-------+------+
|key1|[1,2,3]| 1|
|key1|[1,4,7]| 2|
|key1|[2,2,3]| 3|
|key2|[5,5,5]| 1|
|key2|[5,5,9]| 2|
|key2|[7,5,5]| 3|
+----+-------+------+
```
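Outside Spark, the partition/order/number logic itself can be checked in plain Python (sort by key, then by `col1` asc, `col2` desc, `col3` asc within each key, then enumerate); the data is the question's sample:

```python
from itertools import groupby

data = [("key1", (1, 2, 3)), ("key1", (1, 4, 7)), ("key1", (2, 2, 3)),
        ("key2", (5, 5, 5)), ("key2", (5, 5, 9)), ("key2", (7, 5, 5))]

def row_numbers(rows):
    # Sort by key first so groupby sees each partition contiguously,
    # then by the ORDER BY columns within the partition.
    rows = sorted(rows, key=lambda kv: (kv[0], kv[1][0], -kv[1][1], kv[1][2]))
    out = []
    for key, group in groupby(rows, key=lambda kv: kv[0]):
        for i, (_, v) in enumerate(group, start=1):
            out.append((key, v, i))
    return out

print(row_numbers(data))
```

This reproduces the numbering from the question, e.g. `("key1", (1, 4, 7), 1)` and `("key1", (1, 2, 3), 2)`.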
|
This is an interesting problem you're bringing up. I will answer it in Python but I'm sure you will be able to translate seamlessly to Scala.
Here is how I would tackle it:
1- Simplify your data:
```
temp2 = temp1.map(lambda x: (x[0],(x[1],x[2],x[3])))
```
temp2 is now a "real" key-value pair. It looks like that:
```
[
((3, 4), (5, 5, 5)),
((3, 4), (5, 5, 9)),
((3, 4), (7, 5, 5)),
((1, 2), (1, 2, 3)),
((1, 2), (1, 4, 7)),
((1, 2), (2, 2, 3))
]
```
2- Then, use the group-by function to reproduce the effect of the PARTITION BY:
```
temp3 = temp2.groupByKey()
```
temp3 is now an RDD with 2 rows:
```
[((1, 2), <pyspark.resultiterable.ResultIterable object at 0x15e08d0>),
((3, 4), <pyspark.resultiterable.ResultIterable object at 0x15e0290>)]
```
3- Now, you need to apply a rank function for each value of the RDD. In python, I would use the simple sorted function (the enumerate will create your row\_number column):
```
temp4 = temp3.flatMap(lambda x: tuple([(x[0],(i[1],i[0])) for i in enumerate(sorted(x[1]))])).take(10)
```
Note that to implement your particular order, you would need to feed the right "key" argument (in python, I would just create a lambda function like those:
```
lambda tuple : (tuple[0],-tuple[1],tuple[2])
```
At the end (without the key argument function, it looks like that):
```
[
((1, 2), ((1, 2, 3), 0)),
((1, 2), ((1, 4, 7), 1)),
((1, 2), ((2, 2, 3), 2)),
((3, 4), ((5, 5, 5), 0)),
((3, 4), ((5, 5, 9), 1)),
((3, 4), ((7, 5, 5), 2))
]
```
Hope that helps!
Good luck.
|
How do I get a SQL row_number equivalent for a Spark RDD?
|
[
"",
"sql",
"apache-spark",
"row-number",
"rdd",
""
] |
I assume there is a way of doing this from code, at least some good workaround.
Please consider that I do not want to delete all tables (I've seen this command), just to delete rows from them but to keep existing schema and all constraints.
Maybe I can somehow obtain the list of all tables from metadata and apply TRUNCATE command for each separately? But what about their relations and foreign keys?
Any Idea?
|
You may do it this way:
* Disable referential integrity using [`SET REFERENTIAL_INTEGRITY FALSE`](https://web.archive.org/web/20200413121749/https://www.h2database.com/html/commands.html#set_referential_integrity)
* Get the list of all tables using [`SHOW TABLES`](https://web.archive.org/web/20200413121749/https://www.h2database.com/html/commands.html#show)
* Delete the data from each table using [`TRUNCATE TABLE tableName`](https://web.archive.org/web/20190517154424/http://www.h2database.com/html/commands.html#truncate_table)
* Enable referential integrity using [`SET REFERENTIAL_INTEGRITY TRUE`](https://web.archive.org/web/20200413121749/https://www.h2database.com/html/commands.html#set_referential_integrity)
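The same four-step pattern can be sketched in SQLite terms (the H2 commands are noted in comments; this is an analogous demonstration, not H2 itself): switch foreign-key enforcement off, list the tables, delete from each, switch enforcement back on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (1, 1);
""")
conn.execute("PRAGMA foreign_keys = OFF")            # H2: SET REFERENTIAL_INTEGRITY FALSE
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]  # H2: SHOW TABLES
for t in tables:
    conn.execute(f'DELETE FROM "{t}"')               # H2: TRUNCATE TABLE t
conn.execute("PRAGMA foreign_keys = ON")             # H2: SET REFERENTIAL_INTEGRITY TRUE
counts = [conn.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0] for t in tables]
print(counts)  # [0, 0]
```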
|
For now, I came up with this solution, but I still need to test it more thoroughly.
```
private void truncateDatabase () throws SQLException {
String tempDir = System.getProperty("java.io.tmpdir");
File tempRestoreFile = new File(tempDir + File.separator + "tempRestore");
Connection connection = dataSource.getConnection();
Statement statement = connection.createStatement();
statement.execute("SCRIPT SIMPLE NODATA DROP TO '" + tempRestoreFile + "' CHARSET 'UTF-8'");
statement.execute("RUNSCRIPT FROM '" + tempRestoreFile.getAbsolutePath() + "' CHARSET 'UTF-8'");
}
```
|
H2 - How to truncate all tables?
|
[
"",
"sql",
"database",
"h2",
""
] |