| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm on oracle 11g and I am stuck on this problem.
My table structure is as below
```
╔═══════╦══════╦════════╗
║ tm_id ║ flag ║ countr ║
╠═══════╬══════╬════════╣
║ 1     ║ 0    ║ null   ║
║ 2     ║ 0    ║ null   ║
║ 3     ║ 1    ║ null   ║
║ 4     ║ 0    ║ null   ║
╚═══════╩══════╩════════╝
```
I want to update all values of the column countr with a sequential value as below
```
╔═══════╦══════╦════════╗
║ tm_id ║ flag ║ countr ║
╠═══════╬══════╬════════╣
║ 1     ║ 0    ║ 1      ║
║ 2     ║ 0    ║ 2      ║
║ 3     ║ 1    ║ 2      ║
║ 4     ║ 0    ║ 3      ║
╚═══════╩══════╩════════╝
```
So basically the value of countr should only increase if the flag is 0. If it is 1, it shouldn't increase (it should keep the previous value).
I tried the following update statement
```
UPDATE calendar
SET countr = case when flag = 0 then tm_id else countr-1 end
```
|
[SQL Fiddle](http://sqlfiddle.com/#!4/b2beb1/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE test ( tm_id, flag, countr ) AS
SELECT 1,0, CAST( NULL AS NUMBER ) FROM DUAL
UNION ALL SELECT 2,0, CAST( NULL AS NUMBER ) FROM DUAL
UNION ALL SELECT 3,1, CAST( NULL AS NUMBER ) FROM DUAL
UNION ALL SELECT 4,0, CAST( NULL AS NUMBER ) FROM DUAL
/
UPDATE test t
SET countr = ( SELECT total
FROM (
SELECT tm_id,
SUM( 1 - FLAG ) OVER ( ORDER BY tm_id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) AS total
FROM test
) x
WHERE t.tm_id = x.tm_id
)
/
```
**Query 1**:
```
SELECT * FROM test
```
**[Results](http://sqlfiddle.com/#!4/b2beb1/1/0)**:
```
| TM_ID | FLAG | COUNTR |
|-------|------|--------|
| 1 | 0 | 1 |
| 2 | 0 | 2 |
| 3 | 1 | 2 |
| 4 | 0 | 3 |
```
**Edit - Explanation**
The use of `SUM` here is as an [analytic function](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm#SQLRF06174) rather than, as typically used, an aggregate function.
```
SUM( 1 - FLAG ) OVER ( ORDER BY tm_id ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW )
```
From (almost) right to left:
* `ORDER BY tm_id`- order the rows by ascending `tm_id`
* Then for each row:
+ Consider all the `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` (i.e. only the rows starting from the earliest `tm_id` to the current row)
+ For those rows find the `SUM( 1 - FLAG )` (i.e. increment the counter when the flag is zero and not when it is one).
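The same running-sum logic can be checked outside Oracle. Below is a minimal sketch using Python's `sqlite3` (window functions require SQLite 3.25+); the table and column names mirror the question, everything else is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (tm_id INTEGER, flag INTEGER, countr INTEGER);
INSERT INTO test (tm_id, flag) VALUES (1,0),(2,0),(3,1),(4,0);
""")
# Running SUM(1 - flag): the counter increments only when flag = 0
rows = conn.execute("""
    SELECT tm_id,
           SUM(1 - flag) OVER (ORDER BY tm_id
                               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS total
    FROM test
""").fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 2), (4, 3)]
```

Note how row 3 (flag = 1) keeps the previous total, exactly as in the expected output.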
|
You want to use a subquery in the `update`:
```
update calendar t
set countr = (select count(*)
from calendar t2
where t2.tm_id <= t.tm_id and t2.flag = 0
);
```
If your table is large, you might find that `merge` is more efficient.
|
Update a column sequentially based on condition
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I would like to update part of a string in a particular column of my database.
I have cm2 in that column and would like to update it to `cm<sup>2</sup>` (superscript markup).
My sql query to find the cm2 is:
```
SELECT *
FROM `question_table`
WHERE `option` LIKE '%cm2%'
```
To update I want to try is this:
```
Update question_table.option
set Value = replace(value,'cm2','cm<sup>2</sup>').
```
Not sure whether this is correct. Need some guidance on this.
**EDIT:**
Syntax:
```
UPDATE question_table SET option = Replace(option, 'cm2','cm<sup>2</sup>')
WHERE option LIKE '%cm2%'
```
It throws me an error as:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'option = Replace(option, 'cm2','cm<sup>2</sup>') WHERE option LIKE '%cm2%'' at line 1
```
|
The reason you get an error is probably that `option` is a reserved word in MySQL. To escape reserved words in MySQL, use the backtick character (**`**):
```
UPDATE question_table
SET `option` = Replace(`option`, 'cm2','cm<sup>2</sup>')
WHERE `option` LIKE '%cm2%'
```
[Here is a list of reserved words in MySQL](http://dev.mysql.com/doc/refman/5.7/en/keywords.html)
[Here is my favorite method of avoiding the use of reserved words.](https://stackoverflow.com/a/30132058/3094533)
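The `REPLACE` + `LIKE` pattern itself can be sanity-checked in any engine. Here is a sketch using Python's `sqlite3`, where `option` is not a reserved word, so the stand-in column name `opt` is used; the sample rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE question_table (opt TEXT)")  # 'opt' stands in for the 'option' column
conn.executemany("INSERT INTO question_table VALUES (?)",
                 [("area in cm2",), ("length in cm",)])
# Rewrite only rows that actually contain 'cm2'
conn.execute("UPDATE question_table SET opt = replace(opt, 'cm2', 'cm<sup>2</sup>') "
             "WHERE opt LIKE '%cm2%'")
rows = [r[0] for r in conn.execute("SELECT opt FROM question_table ORDER BY opt")]
print(rows)  # ['area in cm<sup>2</sup>', 'length in cm']
```

The second row is untouched because it does not match the `LIKE` filter.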
|
Your guess looks almost correct. I just tried your code in SQL Server and it works a treat.
```
UPDATE table SET field = Replace(field, 'string to replace','replace with this')
WHERE field LIKE '%filteredby%'
```
|
Update part of a string in SQL
|
[
"",
"mysql",
"sql",
"sql-update",
""
] |
Let's say an employee is hired on 10 Apr 15, transferred on 30 Apr 15, and then promoted on 10 Jun 15. From this history I need the very first and the last (most recent) record from emp\_data. The action\_date values are the dates mentioned above, so I need 10 Apr and 10 Jun as the output of my query. Action refers to hire, transfer, etc.
This is what I am attempting:
```
select empid, action, action_dt from ps_job where
action_dt in (select min(action_dt), max(action_dt) from ps_job where empid='88888');
```
But instead it is showing me all 3 dates from the record.
|
I think you have to modify your query to something like the following:
```
select empid, action, action_dt
from ps_job
where action_dt in (select min(action_dt)
from ps_job
where empid='88888'
UNION ALL
select max(action_dt)
from ps_job
where empid='88888')
```
`MIN` and `MAX` should be selected as *two separate rows* so that the `IN` clause works. One way to do so is by the use of `UNION ALL`, as in the above query.
[**Demo here**](http://sqlfiddle.com/#!4/2e6a3/2)
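The behaviour carries over to other engines as well; a small sketch with Python's `sqlite3`, using sample rows shaped like the question's (ISO date strings sort correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ps_job (empid INTEGER, action TEXT, action_dt TEXT);
INSERT INTO ps_job VALUES
  (88888, 'Hire',     '2015-04-10'),
  (88888, 'Transfer', '2015-04-30'),
  (88888, 'Promote',  '2015-06-10');
""")
# MIN and MAX come back as two separate rows via UNION ALL, so IN works
rows = conn.execute("""
    SELECT empid, action, action_dt
    FROM ps_job
    WHERE action_dt IN (SELECT MIN(action_dt) FROM ps_job WHERE empid = 88888
                        UNION ALL
                        SELECT MAX(action_dt) FROM ps_job WHERE empid = 88888)
    ORDER BY action_dt
""").fetchall()
print(rows)
```

Only the first and last actions survive; the middle 'Transfer' row is filtered out.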
Alternatively you can use `ROW_NUMBER`:
```
SELECT empid, action, action_dt
FROM (
select empid, action, action_dt,
ROW_NUMBER() OVER (PARTITION BY empid ORDER BY action_dt) AS frn,
ROW_NUMBER() OVER (PARTITION BY empid ORDER BY action_dt DESC) AS lrn
from ps_job ) t
WHERE frn = 1 OR lrn = 1
```
Note that this version is not equivalent to the first query, as there may be more than one record having the same max or min date. You have to replace `ROW_NUMBER` with `RANK` if you want *all* max/min records selected.
[**Demo here**](http://sqlfiddle.com/#!4/2e6a3/3)
|
[SQL Fiddle](http://sqlfiddle.com/#!4/ed329/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE ps_job ( empid, action, action_dt ) AS
SELECT 88888, 'Join', DATE '2015-04-10' FROM DUAL
UNION ALL SELECT 88888, 'Transfer', DATE '2015-04-30' FROM DUAL
UNION ALL SELECT 88888, 'Promote', DATE '2015-06-10' FROM DUAL;
```
**Query 1**:
```
SELECT empid,
MIN( action ) KEEP ( DENSE_RANK FIRST ORDER BY action_dt ) AS action,
MIN( action_dt ) AS action_dt
FROM ps_job
WHERE empid = 88888
GROUP BY
empid
UNION ALL
SELECT empid,
MAX( action ) KEEP ( DENSE_RANK LAST ORDER BY action_dt ) AS action,
MAX( action_dt ) AS action_dt
FROM ps_job
WHERE empid = 88888
GROUP BY
empid
```
**[Results](http://sqlfiddle.com/#!4/ed329/1/0)**:
```
| EMPID | ACTION | ACTION_DT |
|-------|---------|-------------------------|
| 88888 | Join | April, 10 2015 00:00:00 |
| 88888 | Promote | June, 10 2015 00:00:00 |
```
**Query 2**:
```
SELECT empid,
MIN( action ) KEEP ( DENSE_RANK FIRST ORDER BY action_dt ) AS first_action,
MIN( action_dt ) AS first_action_dt,
MAX( action ) KEEP ( DENSE_RANK LAST ORDER BY action_dt ) AS last_action,
MAX( action_dt ) AS last_action_dt
FROM ps_job
WHERE empid = 88888
GROUP BY
empid
```
**[Results](http://sqlfiddle.com/#!4/ed329/1/1)**:
```
| EMPID | FIRST_ACTION | FIRST_ACTION_DT | LAST_ACTION | LAST_ACTION_DT |
|-------|--------------|-------------------------|-------------|------------------------|
| 88888 | Join | April, 10 2015 00:00:00 | Promote | June, 10 2015 00:00:00 |
```
|
How to get the first and last dates/records from the employee history table
|
[
"",
"sql",
"oracle",
""
] |
I have a PostgreSQL database with only `SELECT` permissions. In this DB there are two tables with the same structure (the same columns).
I need to write several queries against each table and join the results.
Is there a way to write a query like this one?
```
SELECT
field1,
field2,
field3
FROM
table1
AND
table2
WHERE
condition;
```
[Select from 2 tables. Query = table1 OR table1 + table2](https://stackoverflow.com/q/21761272/2314915) have no answer and it is not my question.
|
`UNION ALL`
```
SELECT field1, field2, field3
FROM table1
WHERE condition
UNION ALL
SELECT field1, field2, field3
FROM table2
WHERE condition;
```
Or, to simplify your `WHERE` condition (note that in PostgreSQL the derived table must have an alias):
```
SELECT * FROM
( SELECT field1, field2, field3
  FROM table1
  UNION ALL
  SELECT field1, field2, field3
  FROM table2
) AS combined
WHERE condition;
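As a quick check of the derived-table form, here is a sketch with Python's `sqlite3`; the table contents and the `field1 > 1` condition are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (field1 INTEGER, field2 TEXT, field3 TEXT);
CREATE TABLE table2 (field1 INTEGER, field2 TEXT, field3 TEXT);
INSERT INTO table1 VALUES (1, 'a', 'x'), (2, 'b', 'y');
INSERT INTO table2 VALUES (3, 'c', 'z');
""")
# UNION ALL stacks both tables; the WHERE is applied once, to the combined set
rows = conn.execute("""
    SELECT field1, field2, field3 FROM
      (SELECT field1, field2, field3 FROM table1
       UNION ALL
       SELECT field1, field2, field3 FROM table2) AS combined
    WHERE field1 > 1
    ORDER BY field1
""").fetchall()
print(rows)
```

One row from each table matches the condition, so both appear in the result.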
|
You can use `UNION` (note that `UNION` removes duplicate rows; use `UNION ALL` if you want to keep them):
```
SELECT
field1,
field2,
field3
FROM
table1
UNION
SELECT
field1,
field2,
field3
FROM
table2
```
|
How to create a SELECT query FROM "TABLE1 AND TABLE2"
|
[
"",
"sql",
"sql-server",
"postgresql",
"sql-server-2008",
"join",
""
] |
I have the following table:
```
COLUMN_NAME DATA_TYPE PK NULLABLE
inc_id bigint YES NO
dt_cr timestamp NO NO
email varchar(255) NO YES
```
The `email` column is not unique, so I should use `GROUP BY email`.
**Question**:
How can I get from this table the minimal date, the next row after the minimal date, and the email related to these dates?
I read about the `first_value` and `nth_value` functions (code listed below), but it seems like it doesn't work correctly.
```
SELECT J1.email, J2.first_date, J2.second_date
FROM (
SELECT email
FROM orders
GROUP BY email) J1
LEFT OUTER JOIN (
SELECT email,
first_value(dt_cr) over (order by dt_cr ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) first_date,
NTH_VALUE(dt_cr, 2) over (order by dt_cr ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) second_date
FROM orders) J2
ON (J1.email=J2.email);
```
Any ideas?
**Edit:**
maybe it can be done with the `rank()` function.
|
This query uses a `WITH` clause (a common table expression), which works similarly to a subquery. Investigate this query with `EXPLAIN` before using it in production because it may be slow on big tables:
```
WITH orders AS (
SELECT
email
, first_value(dt_cr) OVER wnd1 AS min_date
, nth_value(dt_cr, 2) OVER wnd1 AS second_date
FROM orders
WINDOW wnd1 AS ( PARTITION BY email ORDER BY email, dt_cr)
)
SELECT DISTINCT *
FROM orders
WHERE second_date IS NOT NULL;
```
|
Try with `row_number` function.
Within whole table:
```
select *
from(select *, row_number() over(order by dt_cr) rn from tableName)t
where rn <= 2
```
Or within `email`:
```
select *
from(select *, row_number() over(partition by email order by dt_cr) rn from tableName)t
where rn <= 2
```
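The per-email `row_number` variant can be seen in action with a small sketch using Python's `sqlite3` (window functions need SQLite 3.25+); the sample emails and dates are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (inc_id INTEGER, dt_cr TEXT, email TEXT);
INSERT INTO orders VALUES
  (1, '2015-01-01', 'a@x.com'),
  (2, '2015-01-05', 'a@x.com'),
  (3, '2015-01-09', 'a@x.com'),
  (4, '2015-02-01', 'b@x.com');
""")
# rn = 2 picks the second-earliest row per email; emails with one row drop out
rows = conn.execute("""
    SELECT email, dt_cr FROM
      (SELECT email, dt_cr,
              ROW_NUMBER() OVER (PARTITION BY email ORDER BY dt_cr) AS rn
       FROM orders) t
    WHERE rn = 2
""").fetchall()
print(rows)
```

`b@x.com` has a single order, so only `a@x.com`'s second date is returned.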
|
How to get second row in PostgreSQL?
|
[
"",
"sql",
"postgresql",
""
] |
I need to convert **2014-11-18T14:08:43+00:00**, which is stored as a varchar (viewed in SQL Developer), to a date format in order to filter a few entries.
**I tried**
`to_date(LAST_UPDATE_DATE,'YYYY-MM-DD')`
**but it gives an error**
> ORA-01830: date format picture ends before converting entire input
> string.
Kindly help.
|
Just in case you didn't mean SQL Server but actually need Oracle (seeing as you are using `to_date` and getting an ORA exception), here is a quick conversion to a date with a time component (no milliseconds) for your date format:
```
SELECT to_Date(concat
(substr
(myvar,0,10),
concat(' ',
substr(myvar,12,8)
)
),'YYYY-MM-DD HH24:mi:ss') AS mydate
FROM mytable
```
[Fiddle](http://sqlfiddle.com/#!4/233262/30)
|
You could try this;
```
Select CAST ('2014-11-18T14:08:43+00:00' as date)
```
The assumption is you are in SQL Server 2012
|
Converting string to date in sql
|
[
"",
"sql",
"oracle",
"date",
"type-conversion",
"timestamp-with-timezone",
""
] |
I want to find any character that occurs between 4 and 10 times. I used `REGEXP_LIKE`, but it only works for the single character 'a'; I want it to work for every letter:
```
SELECT regex_test_name
FROM regex_test
WHERE REGEXP_LIKE(regex_test_name, 'a{4,10}')
```
|
```
SELECT regex_test_name
FROM regex_test
WHERE REGEXP_LIKE(regex_test_name, '([[:alpha:]])\1{3,9}')
```
Inspired by [dnoeth's answer](https://stackoverflow.com/a/30845819/4607488), but since the pattern captures the first character, specifying 3-9 subsequent repeats means 4-10 successive occurrences in total.
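The backreference pattern can be checked quickly outside Oracle. A sketch using Python's `re` module, with `[A-Za-z]` standing in for POSIX `[[:alpha:]]` (which Python's `re` does not support):

```python
import re

# ([A-Za-z])\1{3,9}: a letter followed by 3-9 repeats of itself,
# i.e. 4-10 identical letters in a row
pattern = re.compile(r'([A-Za-z])\1{3,9}')

print(bool(pattern.search('bbbb')))   # letter repeated 4 times -> True
print(bool(pattern.search('bbb')))    # only 3 times -> False
```

Ten *different* letters in a row (e.g. 'abcdefghij') do not match; the backreference requires the *same* letter to repeat.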
|
Use `[[:alpha:]]`
```
REGEXP_LIKE(regex_test_name, '[[:alpha:]]{4,10}');
```
|
Find any character occur more than 4 times
|
[
"",
"sql",
"regex",
"oracle",
""
] |
```
DECLARE @Date DATE = '2/28/2014'
```
I need to test whether the DAY in the DATE above is the LAST day of the month and it should work for all months in the year.
If it's true, then I need to return a true, if not, then false.
E.g.
```
SET @Date = '2/27/2014'
```
This should return FALSE.
```
SET @Date = '12/31/2014'
```
This should return TRUE.
I know you can manipulate this based on month but I'm just wondering whether there is an easy way to do it.
|
An easy way that works in almost any version of SQL Server is:
```
select (case when month(@date) <> month(dateadd(day, 1, @date))
then 'true' else 'false'
end)
```
That is, add a day and see if you are in the same month. This works for leap years, and Dec 31 and so on.
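The same add-a-day trick is easy to demonstrate outside SQL; a sketch in Python using `datetime` (the function name is invented for illustration):

```python
from datetime import date, timedelta

def is_last_day_of_month(d: date) -> bool:
    # Add one day; if the month changes, d was the last day of its month
    return (d + timedelta(days=1)).month != d.month

print(is_last_day_of_month(date(2014, 2, 28)))   # True (2014 is not a leap year)
print(is_last_day_of_month(date(2014, 2, 27)))   # False
print(is_last_day_of_month(date(2014, 12, 31)))  # True
```

Leap years come out right for free, since the date arithmetic handles Feb 29 itself.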
|
This will return 1 if it's the last day of the month, and 0 otherwise:
```
DATEDIFF(month,@date,@date+1)
```
|
How do I check whether the input date's DAY is the last day of the given month in SQL?
|
[
"",
"sql",
"sql-server",
"datepart",
""
] |
I have this ms access query:
```
SELECT t1.sb, suchbegriff2, menge
FROM (SELECT artnr & '/' & [lfdnr-kal] AS sb, left(suchbegriff,7) &
val(right(suchbegriff,4)) AS suchbegriff2
FROM kvks
WHERE suchbegriff like '*/*') AS t1
INNER JOIN (SELECT artnr & '/' & [lfdnr-kal] AS sb,
[artnr-hz] & '/' & val(lfdnr) AS hz, menge
FROM konf
WHERE [artnr-hz]<>'') AS t2
ON (t1.sb=t2.sb) AND (t1.suchbegriff2=t2.hz);
```
It runs really slowly (over 30 sec.). I figured out that it is because of the `inner join` part; if I leave it out, the speed is fine.
Maybe it is slow because the joined fields are calculated expressions?
**EDIT:**
I modified the query based on the answer of Smandoli:
```
SELECT kvks.artnr & '/' & kvks.[lfdnr-kal] AS sb,
left(suchbegriff,7) & val(right(suchbegriff,4)) AS suchbegriff2,
konf.menge
FROM kvks, konf
WHERE kvks.suchbegriff like '*/*'
and konf.[artnr-hz]<>''
and kvks.artnr=konf.artnr
and kvks.[lfdnr-kal]=konf.[lfdnr-kal]
and left(suchbegriff,7) & val(right(suchbegriff,4))=[artnr-hz] & '/' & val(lfdnr)
```
It now runs correctly.
Thanks for your contribution.
|
You do have a complicated mess with those calculated fields. Why not join more directly? This query below leaves one `'/'` unaccounted for, but should tell you what I'm thinking of.
```
SELECT
  t1.artnr & '/' & t1.[lfdnr-kal] AS sb,
  left(t1.suchbegriff,7) & val(right(t1.suchbegriff,4)) AS suchbegriff2,
  t2.menge
FROM kvks AS t1, konf AS t2
WHERE (t1.suchbegriff like '*/*')
AND (t2.[artnr-hz]<>'')
AND (t1.artnr=t2.artnr)
AND (t1.[lfdnr-kal]=t2.[lfdnr-kal])
AND (left(t1.suchbegriff,7)=t2.[artnr-hz])
AND (val(right(t1.suchbegriff,4))=val(t2.lfdnr));
```
|
For the inner join, you can try to use a saved query (or temp table) instead of writing the query at run time.
So, I would first try to abstract this query
```
SELECT artnr & '/' & [lfdnr-kal] AS sb,
       [artnr-hz] & '/' & val(lfdnr) AS hz, menge
FROM konf
WHERE [artnr-hz]<>''
```
Second of all, if possible, I would abstract some of the functions in the queries. You could do this with VBA, or manipulating the data outside of the queries.
Third, you could always create a field on your table that combines the two fields together that you need.
E.x: Make a new column in your `konf` table that stores the value of `artnr & '/' & [lfdnr-kal]`
What you need to do is limit the functions/calculations/coalescing of fields *at run time*. That's a lot for a query to do, and if it's running slowly I would expect a direct correlation either with that, or with something incorrect in your indexes/joins.
If you've identified this as a join issue, you can use VBA to spin up a temp table with your queries, and use those as the record source instead of the SQL.
Also, if you don't utilize a temp table, at least *save the queries*. This allows Access to have a plan for running the queries, whereas your query is 100% run-time dependent.
|
ms access query very slow
|
[
"",
"sql",
"ms-access",
"join",
""
] |
I need to retrieve data from MySQL from one day, but I don't know the interval between two rows (there may be 4 rows from one day and then a gap of a week, for example).
Let's say I have: *(following isn't code)*
```
3 rows where date = 2015-06-15
1 row where date = 2015-06-09
4 rows where date = 2015-06-05
```
I want to retrieve all 4 rows from 2015-06-05 with
```
[...] WHERE `date` < '2015-06-07'
```
or only one row from 2015-06-09 with
```
[...] WHERE `date` < '2015-06-14'
```
Is that possible with SQL only?
|
If I understand correctly, you want to receive one day's worth of rows before a given date. I think that would be:
```
SELECT t.*
FROM table t
WHERE date = (SELECT MAX(t2.date) FROM table t2 WHERE t2.`date` < '2015-06-07')
```
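The "latest day before the cutoff" pattern can be checked with a sketch in Python's `sqlite3`, using the row counts from the question (3, 1 and 4 rows per day):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (d TEXT);
INSERT INTO t VALUES
  ('2015-06-15'), ('2015-06-15'), ('2015-06-15'),
  ('2015-06-09'),
  ('2015-06-05'), ('2015-06-05'), ('2015-06-05'), ('2015-06-05');
""")
# All rows from the most recent day strictly before each cutoff
rows_05 = conn.execute(
    "SELECT d FROM t WHERE d = (SELECT MAX(d) FROM t WHERE d < '2015-06-07')"
).fetchall()
rows_09 = conn.execute(
    "SELECT d FROM t WHERE d = (SELECT MAX(d) FROM t WHERE d < '2015-06-14')"
).fetchall()
print(len(rows_05), len(rows_09))  # 4 1
```

The first cutoff yields the 4 rows from 2015-06-05, the second yields the single row from 2015-06-09, as the question asks.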
|
I think you want something like this:
```
select * from table
where date = (select max(date) from table where date < '2015-06-14')
```
|
Select before specific date from one day only
|
[
"",
"mysql",
"sql",
"date",
""
] |
I am using SQLite in an iOS application and I am using FMDB as a wrapper. This is my database schema :
```
CREATE TABLE Offer (code TEXT PRIMARY KEY NOT NULL, name TEXT);
CREATE TABLE OffreMarket (codeOffer TEXT NOT NULL,
codeMarket TEXT NOT NULL,
FOREIGN KEY(codeOffer) REFERENCES Offer(code),
FOREIGN KEY(codeMarket) REFERENCES Market(code));
CREATE TABLE Market (code TEXT PRIMARY KEY NOT NULL, name TEXT);
```
My model objects :
```
@interface Offer : NSObject
@property (nonatomic,copy) NSString *code;
@property (nonatomic,copy) NSString *name;
@property (nonatomic,copy) NSArray *markets;
@end
@interface OffreMarket : NSObject
@property (nonatomic,copy) NSString *codeOffer;
@property (nonatomic,copy) NSString *codeMarket;
@end
@interface Market : NSObject
@property (nonatomic,copy) NSString *code;
@property (nonatomic,copy) NSString *name;
@end
```
For example I am fetching all the offers in the database like this :
```
- (NSArray *)offers {
// Open database
NSMutableArray *offers = [NSMutableArray new];
FMResultSet *resultSet = [database executeQuery:@"SELECT * FROM Offer"];
while ([resultSet next]){
Offer *offer = [Offer new];
offer.code = [resultSet stringForKey:@"code"];
offer.name = [resultSet stringForKey:@"name"];
// Get the markets for each offer
FMResultSet *marketResultSet = [database executeQuery:@"SELECT * FROM OffreMarket WHERE codeOffer = ?",offer.code];
NSMutableArray *markets = [NSMutableArray new];
while ([marketResultSet next]) {
OffreMarket *offerMarket = [OffreMarket new];
....
[markets addObject:offerMarket];
}
offer.markets = [markets copy];
[offers addObject:offer];
}
return [offers copy];
}
```
This is working, but it takes time because I am using many SQL requests to fetch all the Offers and the corresponding Markets.
Can I avoid making many `SQL` requests to fetch all the Offers with the corresponding markets? Thanks for your answers
|
What I can suggest is:
* Refine your SQL statement. Instead of 2 loops, you can simply change your statement into "`SELECT * FROM OffreMarket WHERE codeOffer IN (SELECT code FROM Offer)`". If you want to use the column "`name`" from table "`Offer`", you can join the two tables "`OffreMarket`" and "`Offer`". The rule of thumb here is to avoid too many loops and instead combine or refine your SQL statement.
* Create index for column "`code`" in table "`Offer`". It will speed up your searching a lot. For example, once in one of my project, I had to work on a SQL table with 36K records. With a simple index set to the primary key column, I managed to reduce the searching time on that table by 10 seconds.
|
In this case you can get the results with just one query:
```
select * from Offer
left outer join OffreMarket OM on (OM.codeOffer = Offer.code)
```
|
Fetching complex data using FMDB
|
[
"",
"sql",
"objective-c",
"sqlite",
"fmdb",
""
] |
I have two tables: 'pc' and 'results'. The 'results' table contains many results for every pc.
I need to create a view that contains all the pc columns and the last (most recent) result of every pc.
I've tried this, but it didn't work:
```
select *
from pc,resultat where pc.code_pc=result.code_pc
order by code_resultat DESC limit 3
```
Do I have to use a cursor? If yes, how?
**EDIT:** PC table:
```
ID_pc  name      x  y
1      Station1  1  1
2      Station2  2  2
```
Result table:
```
code_rslt ID_pc parametre value date
1 1 ph 6 15/06/2015
2 2 ph 6.3 15/06/2015
3 1 ph 6.6 16/06/2015
4 2 ph 6.2 16/06/2015
```
I need a view like this
```
ID_pc name x y code_rslt parametre value date
1 Station1 1 1 3 ph 6.6 16/06/2015
2 Station2 2 2 4 ph 6.2 16/06/2015
```
|
I think what you are looking for is something like this:
```
Select p.*,r.*
from pc p
inner join
Results r
on p.ID_pc = r.ID_pc
Where r.code_rslt = (Select MAX(code_rslt) from results where ID_pc = p.ID_pc)
```
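This "latest row per group via a correlated MAX" pattern can be verified with a sketch in Python's `sqlite3`, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pc (ID_pc INTEGER, name TEXT, x INTEGER, y INTEGER);
CREATE TABLE results (code_rslt INTEGER, ID_pc INTEGER,
                      parametre TEXT, value REAL, date TEXT);
INSERT INTO pc VALUES (1, 'Station1', 1, 1), (2, 'Station2', 2, 2);
INSERT INTO results VALUES
  (1, 1, 'ph', 6.0, '2015-06-15'),
  (2, 2, 'ph', 6.3, '2015-06-15'),
  (3, 1, 'ph', 6.6, '2015-06-16'),
  (4, 2, 'ph', 6.2, '2015-06-16');
""")
# For each pc, keep only the result row with the highest code_rslt
rows = conn.execute("""
    SELECT p.ID_pc, p.name, r.code_rslt, r.value
    FROM pc p
    JOIN results r ON p.ID_pc = r.ID_pc
    WHERE r.code_rslt = (SELECT MAX(code_rslt) FROM results WHERE ID_pc = p.ID_pc)
    ORDER BY p.ID_pc
""").fetchall()
print(rows)
```

Each station comes back exactly once, paired with its latest result, matching the view the question sketches.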
|
You could do this:
```
select r.id_pc, p.name, p.x, p.y, r.code_rslt, r.parametre, r.value, r.date
from (
-- Get the max result per PC
select max(code_rslt) as code_rslt
from result
group by id_pc
) maxPerId
inner join result r on maxPerId.code_rslt = r.code_rslt -- join maxPerId onto result table, to only include the maximum results
inner join pc p on r.id_pc = p.id_pc -- join on PC to get the PC information for the max results.
```
I'm unclear on what the `limit 3` was for in your example query as nothing like that was mentioned in your actual question - so hopefully this is something like what you need.
|
SQL: Create view using multiple tables and a cursor
|
[
"",
"sql",
"view",
""
] |
How to get only last record
```
strConnStr = "Provider=SQLNCLI11;..."
Set cn = server.CreateObject("ADODB.Connection")
cn.Open strConnStr
strSQL = "SELECT DISTINCT year1 FROM cars WHERE year1 > 1900 ORDER BY year1;"
Set rs = Server.CreateObject("ADODB.Recordset")
Set rs.ActiveConnection = cn
rs.Open strSQL
YearList = rs.GetRows()
rs.MoveLast
MaxYear = rs.Fields(0)
cn.Close
set rs = Nothing
set cn = nothing
response.write MaxYear
```
It returns nothing. I also tried `MaxYear = rs(0)` and `MaxYear = rs("Year1")`; all return nothing.
I do not want to change strSQL!
I also get an error on `rs.MoveLast`:
> Microsoft SQL Server Native Client 11.0 error '80040e24'
>
> Rowset does not support fetching backward.
|
Based on the update:
```
Set rs.ActiveConnection = cn
rs.Open strSQL
YearList = rs.GetRows()
MaxYear = YearList(0, UBound(YearList, 2))
cn.Close
```
---
Older answer:
If you're insistent on not changing the SQL (and so forcing the server to do more work on data that you're just going to throw away), then you just need to move through the whole recordset:
```
rs.Open strSQL
While Not rs.EOF
    MaxYear = rs.Fields(0)
    rs.MoveNext
Wend
```
|
You can use the TOP keyword to get the latest record like this:
```
SELECT TOP 1 year1 FROM cars WHERE year1 > 1900
group by year1
ORDER BY year1 desc;
```
|
SQLSERVER get last record from recordset
|
[
"",
"sql",
"sql-server",
""
] |
```
=Switch(Fields!CompnyID.Value=5, "company 1" "dsSalesManagement", Fields!CompnyID.Value=6, "company 2" "dsSalesManagement" )
```
Trying to use switch statement to display two different results in my report.
I was maybe looking for a syntax correction, or is there a way to use an else-if statement? The problem is that I want this expression to be dependent on a value from another expression. Is that possible in SSRS?
|
It seems that you have more than one datasource in your report so you have to specify the correct scope of your aggregate function.
For example:
```
=Switch(
First(Fields!CompnyID.Value, "dsSalesManagement") = 5, "company 1",
First(Fields!CompnyID.Value, "dsSalesManagement") = 6, "company 2"
)
```
|
You want to use an IIF (immediate if) statement. The MSDN page is [here](https://msdn.microsoft.com/en-us/library/ms157328.aspx)
|
Trying to use Switch statement in SSRS
|
[
"",
"sql",
"reporting-services",
"ssrs-2012",
""
] |
Sorry for the subject line, as it is not very descriptive. I have 2 tables: one stores Person data and one stores Subject data along with the person interested in it. The two tables look like below.
**Person**
```
Id Name
1 Imad
2 Sumeet
3 Suresh
4 Navin
```
**Subjects**
```
Id PId Subject
1 1 DC
2 1 DS
3 3 DS
4 4 CA
```
PId is a Person's Id.
I need to get all students who are interested in the max number of subjects, e.g. Imad here.
Here is my query:
```
With c as
(
select Pid, count(Id) as 'Total' from subjects group by Pid
)
select Pid into #Temp from c where Total = (Select Max(Total) from c)
select * from Person where Id in (Select Pid from #Temp)
```
It gives me the desired output, but whenever this type of question is asked in an interview, I never get a good response from the interviewer, as they always expect a better solution. I am not confident in my SQL skills, which is why I think there must be a more efficient solution; hence I posted it here.
Thanks
|
Simply order the data and get the topmost record with ties (this means that if some students have equal counts, they will all come back in the result):
```
select top 1 with ties p.Id, p.Name
from Subjects s
join Person p on s.PId = p.Id
group by p.Id, p.Name
order by count(*) desc
```
|
You can try this :
```
;With c as
(
select Pid, count(Id) as 'Total' from subject group by Pid
)
select * from Person join c on c.Pid=Person.Id where c.total>1
```
|
Getting Person having interests in more number of subjects - Sql Server
|
[
"",
"sql",
"sql-server",
""
] |
Here is the Query:
```
SELECT DECODE(AVSR.INFORMATIONTYPE,'NULL','UNKNOWN','5','Infectious Agent Transmission Issue','4','Environmental Issue','3','Withdrawal Period Issue','2','Lack OF Expected Efficacy','1','Safety Issue') Infotype,
COUNT(VM.Vaer_No),
AVSR.INFORMATIONTYPE
FROM AGVET_VAE_INFO VM,
Agvet_Vae_Safetyreport AVSR
WHERE AVSR.RECORD_ID = VM.FK_AVSR_REC_ID
AND VM.Vaer_Delete = 0
AND VM.Archived = 0
AND VM.E2B_MESSAGE_LIST_TYPE <> 01
AND VM.IMPORT_FLAG <> 1
AND AVSR.ORIGINALRECEIVEDATE BETWEEN to_date ('2000/04/1', 'yyyy/mm/dd') AND to_date ('2000/04/28', 'yyyy/mm/dd')+1
GROUP BY AVSR.INFORMATIONTYPE
```
This query also displays records from 2000/04/29, but I need records only from `2000/04/1` to `2000/04/28` (inclusive of both dates).
|
1) Using `trunc`
```
AND trunc(AVSR.ORIGINALRECEIVEDATE) BETWEEN to_date ('2000/04/1', 'yyyy/mm/dd') AND to_date ('2000/04/28', 'yyyy/mm/dd')
```
but this will prevent any index on column `AVSR.ORIGINALRECEIVEDATE` from being used; use it only if there is no index on this column, or if there is a function-based index using `trunc`
2) using `>=` and `<`
```
AND (AVSR.ORIGINALRECEIVEDATE >= to_date ('2000/04/1', 'yyyy/mm/dd') AND AVSR.ORIGINALRECEIVEDATE < (to_date ('2000/04/28', 'yyyy/mm/dd') + 1))
```
3) using timestamp
```
AND AVSR.ORIGINALRECEIVEDATE BETWEEN to_date ('2000/04/1 00:00:00', 'yyyy/mm/dd hh24:mi:ss') AND to_date ('2000/04/28 23:59:59', 'yyyy/mm/dd hh24:mi:ss')
```
|
remove the +1 from the second date
```
SELECT DECODE(AVSR.INFORMATIONTYPE,'NULL','UNKNOWN','5','Infectious Agent Transmission Issue','4','Environmental Issue','3','Withdrawal Period Issue','2','Lack OF Expected Efficacy','1','Safety Issue') Infotype,
COUNT(VM.Vaer_No),
AVSR.INFORMATIONTYPE
FROM AGVET_VAE_INFO VM,
Agvet_Vae_Safetyreport AVSR
WHERE AVSR.RECORD_ID = VM.FK_AVSR_REC_ID
AND VM.Vaer_Delete = 0
AND VM.Archived = 0
AND VM.E2B_MESSAGE_LIST_TYPE <> 01
AND VM.IMPORT_FLAG <> 1
AND AVSR.ORIGINALRECEIVEDATE BETWEEN to_date ('2000/04/1', 'yyyy/mm/dd') AND to_date ('2000/04/28', 'yyyy/mm/dd')
GROUP BY AVSR.INFORMATIONTYPE
```
|
How to fetch records between a given date range and both date range should include for searching records?
|
[
"",
"sql",
"database",
"oracle",
""
] |
If I had this:
```
NAME EYES==ID==HAIR
Jon Brown==F9182==Red
May Blue==F10100==Brown
Bill Hazel/Green==F123==Brown
```
...and I wanted to create a new ID column with the ID alone, and I know that everyone's ID starts with an 'F' and will end at the '=' how would I select a substring from the compact column and take JUST the ID out?
ex. I want this as the end product
```
NAME EYES==ID==HAIR ID
Jon Brown==F9182==Red F9182
May Blue==F10100==Brown F10100
Bill Hazel/Green==F123==Brown F123
```
If I can't make it end at '=' is there any way to trim the rest of the content that isn't part of the ID after selecting it?
|
You can use a Regular Expression:
```
regexp_substr('Hazel/Green==F123==Brown','(==F.+?==)')
```
extracts '==F123==', now trim the `=`:
```
ltrim(rtrim(regexp_substr('Hazel/Green==F123==Brown','(==F.+?==)'), '='), '=')
```
If Oracle supported lookahead/lookbehind this would be easier...
Edit:
Based on @ErkanHaspulat's query, you don't need LTRIM/RTRIM, as you can specify to return only the first capture group (I always forget about that). But just to be safe you should change the regex not to be greedy:
```
regexp_substr('Hazel/Green==F123==Brown==iii','==(.+?)==', 1, 1, null, 1)
```
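The non-greedy capture behaves the same way in Python's `re`, which makes for a quick sanity check; the sample string is the one from the answer:

```python
import re

text = 'Hazel/Green==F123==Brown'
# Non-greedy capture between the == delimiters; group(1) drops the delimiters
m = re.search(r'==(.+?)==', text)
print(m.group(1))  # F123
```

Without the `?`, the greedy `.+` would run on to the last `==` in a string like `'...==F123==Brown==iii'` and capture too much, which is exactly the pitfall the edit warns about.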
|
The idea is to find the portion of the string you want using a regular expression. In the query below, notice the `()` characters that define a sub-expression in the given regular expression. That is the magic part; you can use as many sub-expressions as you want in a regular expression and select them using the final parameter of the `regexp_substr` function.
See [documentation](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions138.htm#SQLRF06303) for details.
```
with sample_data as(
select
'Brown==F9182==Red' text
from dual)
select
text
,regexp_substr(text, '==(.+)==', 1, 1, null, 1)
from sample_data
```
|
SQL Oracle | How would I select a substring where it begins with a certain letter and ends with a certain symbol?
|
[
"",
"sql",
"oracle",
"select",
"substring",
"oracle-sqldeveloper",
""
] |
I have a table with the following columns
```
NAME FRIEND
----------------------
Apple Flavour
Flavour Apple
New Banana
Banana Flavour
```
I want to remove the rows having records with the same combination; for example,
(apple, flavour) and (flavour, apple) are the same. I want only one record of the two when such combinations are repeated.
|
You could use **LEAST** and **GREATEST** functions to fetch the unique combination.
For example,
```
SQL> WITH DATA AS(
2 SELECT 'apple' a, 'flavour' b FROM dual UNION ALL
3 SELECT 'flavour', 'apple' FROM dual UNION ALL
4 SELECT 'new', 'flavour' FROM dual UNION ALL
5 SELECT 'banana', 'fruit' FROM dual
6 )
7 SELECT least(a,b) a,
8 greatest(a,b) b
9 FROM data
10 GROUP BY least(a,b),
11 greatest(a,b)
12 /
A B
------- -------
banana fruit
apple flavour
flavour new
SQL>
```
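SQLite has no `LEAST`/`GREATEST`, but its scalar multi-argument `MIN`/`MAX` play the same role, so the deduplication can be sketched in Python's `sqlite3` with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE friends (name TEXT, friend TEXT);
INSERT INTO friends VALUES
  ('Apple', 'Flavour'), ('Flavour', 'Apple'),
  ('New', 'Banana'), ('Banana', 'Flavour');
""")
# MIN(name, friend)/MAX(name, friend) canonicalise each pair,
# so (Apple, Flavour) and (Flavour, Apple) collapse to one row
rows = conn.execute("""
    SELECT DISTINCT MIN(name, friend) AS a, MAX(name, friend) AS b
    FROM friends
    ORDER BY a, b
""").fetchall()
print(rows)
```

The mirrored (Apple, Flavour)/(Flavour, Apple) pair survives as a single row.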
|
To avoid duplicates, select all rows where name < friend; additionally, return a row if the reversed name/friend pair doesn't exist.
```
select *
from tablename t1
where t1.name < t1.friend
or not exists (select 1 from tablename t2
where t1.name = t2.friend
and t1.friend = t2.name)
```
Alternatively, `select distinct`, using `greatest` and `least`:
```
select distinct least(name, friend), greatest(name, friend)
from tablename
```
|
How to remove duplicate records from the table
|
[
"",
"mysql",
"sql",
"oracle",
"hive",
""
] |
Good day everyone,
I am hoping someone could help me with an issue I am having. I have a database that stores the amount of data that was reviewed on a hard drive. The values can either be Megabytes or Gigabytes.
Example:
```
Hard Drive 1 100MB
Hard Drive 2 2.5 GB
Hard Drive 3 650 MB
```
My question is this. I need to add these values up to get a total amount of data that was reviewed. I need to convert the Megabytes to Gigabytes and then add all the values together to get a total.
Can I do this in a query, or would a function be better for something like this? And how would I go about doing this?
Thank you for your help in advance!
|
The basic idea is to normalize the size with a `CASE` expression, like:
```
Size /
CASE Unit
WHEN 'KB' THEN 1000 * 1000
WHEN 'MB' THEN 1000
WHEN 'GB' THEN 1
WHEN 'TB' THEN 0.001
END
```
Example for SQL Server:
```
DECLARE @Table table
(
Size decimal(18,3),
Unit char(2)
)
INSERT @Table VALUES (100, 'MB'), (2.5, 'GB'), (650, 'MB')
-- Get size for all rows
SELECT
*,
CONVERT(decimal(18, 3),
Size / CASE Unit WHEN 'MB' THEN 1000 WHEN 'GB' THEN 1 END
) AS GB
FROM @Table
-- Get total
SELECT
CONVERT(decimal(18, 3), SUM(
Size / CASE Unit WHEN 'MB' THEN 1000 WHEN 'GB' THEN 1 END
)) AS GB
FROM @Table
```
Result
```
Size Unit GB
--------------------------------------- ---- ---------------------------------------
100.000 MB 0.100
2.500 GB 2.500
650.000 MB 0.650
(3 row(s) affected)
TotalGB
---------------------------------------
3.250
(1 row(s) affected)
```
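The same `CASE`-based unit conversion can be checked in SQLite; the table name is made up and the rows are the question's sample data:

```python
import sqlite3

# Convert each row to GB via a CASE on the unit column, then SUM the result.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE drives (size REAL, unit TEXT)")
con.executemany("INSERT INTO drives VALUES (?, ?)",
                [(100, "MB"), (2.5, "GB"), (650, "MB")])
total = con.execute("""
    SELECT SUM(size / CASE unit WHEN 'MB' THEN 1000.0 WHEN 'GB' THEN 1 END)
    FROM drives
""").fetchone()[0]
print(round(total, 3))  # 3.25
```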
|
If your schema defines the size and the 'unit multiplier' in *separate columns*, like this:
```
create table disks (
id int not null,
  size int not null,
mult_id varchar(2)
)
go
```
Then create a table to map each string quantifier to the size (in MB)
```
create table mult (
id varchar(2) not null,
mult int not null
)
go
insert into disks(id, size, mult_id) values
(1, 200, 'GB'),
(2, 100, 'MB'),
(3, 200, 'TB')
insert into mult(id, mult) values
('MB', 1),
('GB', 1000),
('TB', 1000000)
```
This will give the size of each disk in MB
```
-- The SELECT to get the result of each disk in MB
SELECT disks.id, size * mult.mult size_mb FROM disks JOIN mult on disks.mult_id = mult.id
```
You can add other multipliers as required to the 'mult' table
|
Adding values of different sizes in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I am using a `RIGHT(LEFT())` method of stripping a string as each character needs to be put into its own holder so I can access it and use it for a report (each character needs to be in its own box for some reason).
There are 16 characters usually, but for space and to save repetition I've slimmed down the code.
What I am trying to do is put the separated character value into the corresponding column of the temp table - how is this best achieved?
I have no other use for this data once used I'll destroy it.
**Code**
```
CREATE table #StringSeparate
(
col1 varchar(1),
col2 varchar(1),
col3 varchar(1),
col4 varchar(1),
col5 varchar(1),
col6 varchar(1),
col7 varchar(1),
col8 varchar(1),
)
declare @string varchar(16)
set @string = 'tpg22052015-1204'
SELECT
LEFT(@string,1),
RIGHT(LEFT(@string,2),1),
RIGHT(LEFT(@string,3),1),
RIGHT(LEFT(@string,4),1),
RIGHT(LEFT(@string,5),1),
RIGHT(LEFT(@string,6),1),
RIGHT(LEFT(@string,7),1),
RIGHT(LEFT(@string,8),1)
INTO
#String Separate
```
|
Just do it like:
```
CREATE table #StringSeparate
(
col1 varchar(1),
col2 varchar(1),
col3 varchar(1),
col4 varchar(1),
col5 varchar(1),
col6 varchar(1),
col7 varchar(1),
col8 varchar(1)
)
declare @string varchar(16)
set @string = 'tpg22052015-1204'
INSERT INTO #StringSeparate
SELECT
LEFT(@string,1),
RIGHT(LEFT(@string,2),1),
RIGHT(LEFT(@string,3),1),
RIGHT(LEFT(@string,4),1),
RIGHT(LEFT(@string,5),1),
RIGHT(LEFT(@string,6),1),
RIGHT(LEFT(@string,7),1),
RIGHT(LEFT(@string,8),1)
```
Or don't create temp table and do this:
```
SELECT
LEFT(@string,1) col1,
RIGHT(LEFT(@string,2),1) col2,
RIGHT(LEFT(@string,3),1) col3,
RIGHT(LEFT(@string,4),1) col4,
RIGHT(LEFT(@string,5),1) col5,
RIGHT(LEFT(@string,6),1) col6,
RIGHT(LEFT(@string,7),1) col7,
RIGHT(LEFT(@string,8),1) col8
INTO
#StringSeparate
```
It will automatically create that temp table, because `SELECT ... INTO` creates the table.
|
Depending on your RDBMS, I suppose I might prefer SUBSTRING:
```
INSERT INTO #StringSeparate
SELECT
LEFT(@string,1),
SUBSTRING(@string,2,1),
SUBSTRING(@string,3,1),
...
RIGHT(@string,1)
```
|
Separated string into specific temp table colums
|
[
"",
"sql",
""
] |
**This is my question:**
*Find the capital and the name where the capital includes the name of the country.*
**This is my answer:**
```
SELECT name,capital
FROM world WHERE
capital=concat(name,"NULL")
```
I don't understand what I should do in the "WHERE" part.
Should I do **capital=concat(name,something?)**, or use **IN** or **LIKE?**
Can you help me please? Thank you.
*The question is from this link:
<http://sqlzoo.net/wiki/SELECT_names>
Question 13*
|
Not sure what you're trying to do with the `NULL` there, but basically, if you want to find a capital that contains the name of the country, the `like` operator is definitely in order. Just slap a couple of wildcards (`%`) around it, and you should be good to go:
```
SELECT name, capital
FROM world
WHERE capital LIKE CONCAT('%', name, '%')
```
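As a quick cross-check, the wildcard-wrapped `LIKE` works the same in SQLite, where string concatenation is `||` instead of `CONCAT`; the sample rows here are made up:

```python
import sqlite3

# Find countries whose capital contains the country name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE world (name TEXT, capital TEXT)")
con.executemany("INSERT INTO world VALUES (?, ?)",
                [("Mexico", "Mexico City"), ("France", "Paris")])
rows = con.execute(
    "SELECT name, capital FROM world WHERE capital LIKE '%' || name || '%'"
).fetchall()
print(rows)  # [('Mexico', 'Mexico City')]
```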
|
`CONCAT` is not necessary here, you should be able to use:
```
WHERE [capital] LIKE '%[name]%'
```
|
Sql string search
|
[
"",
"sql",
"string",
"search",
"select",
""
] |
The first table "teams" has TeamCode (varchar(5)) and TeamName (varchar(20)).
The second table "season" has Team1 (varchar(5)), Team2 (varchar(5)) and Gameday (date).
Team1 & Team2 are FKs that reference the TeamCode PK.
table: teams
| TeamCode | TeamName |
|:-----------:|:--------------|
| 1 | USA |
| 2 | UK |
| 3 | JAPAN |
---
table: season
each team plays the other once as a home team
```
| Team1 | Team2 |Gameday
|:-----:|:------|:------|
| 1 | 2 | 7 jan|
| 1 | 3 | 14 jan|
| 2 | 1 | 21 jan|
| 2 | 3 | 28 jan|
| 3 | 1 | 4 feb|
| 3 | 2 | 11 feb|
```
---
I want a query that would display the Team names and the day they will play together
So it should look like
```
HomeTeam Name | Team2 Name | Gameday
```
|
Try this
```
SELECT
T1.Name As Host ,
T2.Name As Guest,
S.Date
FROM [dbo].[Season] as S
Inner Join [dbo].[Team] as T1 on S.HostTeam = T1.ID
Inner Join [dbo].[Team] as T2 on S.GuestTeam = T2.ID
```
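The same join-the-lookup-table-twice pattern, checked in SQLite with the question's table and column names (only a subset of the sample rows is loaded):

```python
import sqlite3

# Join teams twice, once per foreign key, to resolve both team names.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE teams (TeamCode TEXT PRIMARY KEY, TeamName TEXT);
    CREATE TABLE season (Team1 TEXT, Team2 TEXT, Gameday TEXT);
    INSERT INTO teams VALUES ('1','USA'),('2','UK'),('3','JAPAN');
    INSERT INTO season VALUES ('1','2','7 jan'),('2','3','28 jan');
""")
rows = con.execute("""
    SELECT ht.TeamName, gt.TeamName, s.Gameday
    FROM season s
    JOIN teams ht ON s.Team1 = ht.TeamCode
    JOIN teams gt ON s.Team2 = gt.TeamCode
    ORDER BY s.rowid
""").fetchall()
print(rows)  # [('USA', 'UK', '7 jan'), ('UK', 'JAPAN', '28 jan')]
```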
|
```
SELECT
ht.TeamName,
at.TeamName,
s.GameDay
FROM
teams AS ht
INNER JOIN season AS s ON ht.TeamCode = s.Team1
INNER JOIN teams AS at ON s.Team2 = at.TeamCode
```
|
two foreign keys to same primary key select statement MYSQL
|
[
"",
"mysql",
"sql",
"select",
"foreign-keys",
""
] |
Here is my sample data for the select statement
```
banner_id subject_code
N00012301 MATH
N00012963 ENGL
N00012963 MATH
N00013406 ENGL
N00013406 ENGL
N00013406 MATH
N00013998 ENGL
N00016217 MATH
N00017367 MATH
N00017367 ENGL
N00017833 MATH
N00018132 MATH
N00019251 ENGL
N00019251 ENGL
N00019312 MATH
N00019312 ENGL
N00019312 ENGL
N00020261 ENGL
```
I want the count of banner\_id values that have both 'ENGL' and 'MATH'
ex:
```
N00019312 MATH
N00019312 ENGL
N00019312 ENGL
```
counts as 1 value based on banner\_id even though it has 2 ENGL and 1 MATH
Appreciate your help,
thanks
|
Try this:
```
SELECT banner_id
FROM @t
WHERE subject_code IN ('ENGL', 'MATH')
GROUP BY banner_id
HAVING COUNT(DISTINCT subject_code) > 1
```
If you want to just count then:
```
SELECT COUNT(*)
FROM ( SELECT banner_id
FROM @t
WHERE subject_code IN ( 'ENGL', 'MATH' )
GROUP BY banner_id
HAVING COUNT(DISTINCT subject_code) > 1
) t
```
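The `GROUP BY` / `HAVING COUNT(DISTINCT ...)` pattern can be verified in SQLite on a subset of the question's sample rows (the table name is made up):

```python
import sqlite3

# Count the banner_ids that appear with both subject codes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enrol (banner_id TEXT, subject_code TEXT)")
con.executemany("INSERT INTO enrol VALUES (?, ?)", [
    ("N00012301", "MATH"), ("N00012963", "ENGL"), ("N00012963", "MATH"),
    ("N00019312", "MATH"), ("N00019312", "ENGL"), ("N00019312", "ENGL"),
])
n = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT banner_id FROM enrol
        WHERE subject_code IN ('ENGL', 'MATH')
        GROUP BY banner_id
        HAVING COUNT(DISTINCT subject_code) > 1
    )
""").fetchone()[0]
print(n)  # 2
```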
|
If you want to see the banner\_ids, this should get you each `banner_id` where the `banner_id` exists with both `math` and `engl`:
```
SELECT banner_id
FROM YourTable
WHERE subject_code IN ('engl', 'math')
GROUP BY banner_id
HAVING COUNT(DISTINCT subject_code) > 1
```
|
Want to count Distinct values in column 1 based on column 2 values
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
In SQL Server, I can't seem to use an aliased column name in the `WHERE` clause
This doesn't work
```
SELECT *, [dbo].[myFunc] (X.Prop1) AS Width
FROM X
WHERE Width > 0
```
I have to do
```
SELECT *, [dbo].[myFunc] (X.Prop1) AS Width
FROM X
WHERE [dbo].[myFunc] (X.Prop1) > 0
```
My concern is that SQL is executing `[dbo].[myFunc] (X.Prop1)` twice.
My questions are:
* Is it executing twice or is SQL Server smart enough?
* Is there a way to use an aliased column in the `WHERE` clause?
|
When I try both ways, they have the exact same execution plan, so it looks like the query optimizer is smart enough to figure out that they are the same query and run the function once.
So both ways are OK, I assume.
But if you want to use the subquery approach and keep the alias, you can use something like this:
```
Select * from (
SELECT *, [dbo].[myFunc] (X.Prop1) AS Width FROM X
) T
WHERE Width > 0
```
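The derived-table trick that makes the alias visible in `WHERE` works in any SQL engine; here is a SQLite sketch where `abs()` stands in for the scalar function `dbo.myFunc` and the rows are made up:

```python
import sqlite3

# Wrap the computation in a subquery so the alias Width is usable in WHERE.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE X (Prop1 REAL)")
con.executemany("INSERT INTO X VALUES (?)", [(-2,), (3,), (0,)])
rows = con.execute("""
    SELECT * FROM (SELECT Prop1, abs(Prop1) AS Width FROM X) T
    WHERE Width > 0
    ORDER BY Prop1
""").fetchall()
print(rows)  # [(-2.0, 2.0), (3.0, 3.0)]
```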
|
I am not sure if it is calculated twice or not, but you can avoid this using `CROSS APPLY`:
```
SELECT *, Width
FROM X
CROSS APPLY (Select [dbo].[myFunc] (X.Prop1)) N(Width)
WHERE Width > 0
```
Look this [fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/639) that selects the square of values that are greater than 10.
|
Filter SQL query based on scalar function result
|
[
"",
"sql",
"sql-server",
""
] |
Long time reader but first time poster so apologies for any mistakes in creating a post.
I'm trying to work out the rates of sales for a product based on its time available and sales.
```
SELECT productname, datediff(day, MIN(DateAdded), getdate())/ nullif(SalesToDate,0) as [Days For A Sale]
FROM TABLE
```
Pretty simple but it works, the problem is some old products incorrectly have dateadded as the year as 1970 in the database (eg. 1970-02-02 09:21:00.000).
Is there a way to replace the all the DateAdded in the select that include 1970 with 2012 for example? Some kind of 'case if 1970 then 2012' but for the select and not affecting other dates.
I just want it for this query, I couldn't update the dates in the database without affecting a lot. Thanks in advance and any help is appreciated.
|
you can use Case inside Aggregate Function as
```
SELECT
productname,
DATEADD(YEAR, 42, datediff(day, MIN(case when year(DateAdded) = '1970'
then dateadd(YY,CEILING(datediff(YY,DateAdded,'1/1/2012')),DateAdded)
else a end ),getdate())/nullif(SalesToDate,0)) as [Days For A Sale]
FROM TABLE
```
Used Celling to round always maximum value as we are comparing with '1/1/2012'
Thanks
|
If you want to do it in the select, and specifically replace 1970 with 2012 then a `DATEADD` may help you out:
```
SELECT
productname,
DATEADD(YEAR, 42, datediff(day, MIN(DateAdded),getdate())/nullif(SalesToDate,0)) as [Days For A Sale]
FROM TABLE
```
|
SQL replace specific date in a select
|
[
"",
"mysql",
"sql",
""
] |
I have 2 tables with 3 columns that are suppose to have the same information.
I would like have a query that selects only the rows that don't have a complete row match. Below is an example of the 2 tables I would like to match:
Table 1
```
ID FPRICE FPRODUCT
1 1 A
2 2 B
3 3 C
4 4 D
5 5 F
```
Table 2
```
ID TPRICE TPRODUCT
1 1 A
2 2 B
3 3 C
4 5 D
6 6 E
```
Desired Output:
```
ID FPRICE FPRODUCT TPRICE TPRODUCT
4 4 D 5 D
5 5 F NULL NULL
6 NULL NULL 6 E
```
|
Easier to verify if we build some DDL and fill with sample data, but I think this would do it. It takes a full join to find records with a partial match and then filters out records with a full match.
[sqlfiddle.com](http://sqlfiddle.com/#!6/163c8/1)
```
CREATE TABLE Table1 (ID INT, FPRICE INT, FPRODUCT CHAR(1))
INSERT INTO Table1 (ID,FPRICE,FPRODUCT) VALUES
(1, 1, 'A')
,(2, 2, 'B')
,(3, 3, 'C')
,(4, 4, 'D')
,(5, 5, 'F')
CREATE TABLE TABLE2 (ID INT, TPRICE INT, TPRODUCT CHAR(1))
INSERT INTO Table2 (ID,TPRICE,TPRODUCT) VALUES
(1, 1, 'A')
,(2, 2, 'B')
,(3, 3, 'C')
,(4, 5, 'D')
,(6, 6, 'E')
SELECT *
FROM Table1 t1
FULL JOIN
Table2 t2 ON t1.ID = t2.ID
--EDIT: remove to exactly match the desired output
--OR t1.FPRICE = t2.TPRICE
--OR t1.FPRODUCT = t2.TPRODUCT
WHERE NOT ( t1.ID = t2.ID
AND t1.FPRICE = t2.TPRICE
AND t1.FPRODUCT = t2.TPRODUCT)
OR ( COALESCE(t1.ID,t1.FPRICE,T1.FPRODUCT) IS NULL
OR COALESCE(t2.ID,t2.TPRICE,T2.TPRODUCT) IS NULL)
```
|
Can you try this query?
```
SELECT DISTINCT t1.ID, t1.FPRICE, t1.FPRODUCT
from Table1 t1 LEFT JOIN Table2 ON
Table1.ID = Table2.ID AND Table1.FPRICE = Table2.TPRICE AND Table1.FPRODUCT= Table2.TPRODUCT
WHERE
Table1.ID is NULL or Table1.FPRICE is NULL or Table1.FPRODUCT is NULL
or Table2.TPRICE is NULL or Table2.TPRODUCT is NULL
```
|
Finding unmatched rows of 2 tables in SQL
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I have one table which has fields `X,Y,Z,BAGID.`
The BAGID is in the form of (12345-400) where 12345 is the user's id and 400 is the BAG's id.
I have another table which has fields `A,B,C,USERID.`
The USERID is in the form of 12345 which is same as the first part of BAGID.
So is it possible to join these two tables on the common USERID and get the fields USERID,X,Y,A,B?
```
Table 1:
X Y Z BAGID(userid+bagid)
1 2 4 12345-400
Table 2 :
A B C USERID
3 5 7 12345
```
I want the output as:
```
X Y A B USERID
1 2 3 5 12345
```
Is it possible to have a join these two tables?
```
select Table1.X, Table1.Y, Table2.A, Table2.B, Table2.USERID
from Table1
inner join Table2
on Table1.BAGID = Table2.USERID;
```
I know I cannot join BAGID and USERID directly as they are different. But is it possible for me to use the userid part of the BAGID of Table1, which is the same as the USERID of Table2?
Any help would be appreciated.
|
You can use the [`SUBSTRING_INDEX`](https://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index) to extract `USERID` out of `BAGID`:
```
select Table1.X, Table1.Y, Table2.A, Table2.B, Table2.USERID
from Table1
inner join Table2 on SUBSTRING_INDEX(Table1.BAGID, '-', 1) = Table2.USERID
```
This will work provided that there is only one `'-'` in `BAGID`.
[**Demo here**](http://sqlfiddle.com/#!9/c1cf38/1)
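The join on the extracted user-id prefix can also be checked in SQLite, which has no `SUBSTRING_INDEX`; `substr()`/`instr()` emulate it here, and only the question's single sample row per table is loaded:

```python
import sqlite3

# Join on the part of BAGID before the '-' separator.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (X INT, Y INT, BAGID TEXT);
    CREATE TABLE Table2 (A INT, B INT, USERID TEXT);
    INSERT INTO Table1 VALUES (1, 2, '12345-400');
    INSERT INTO Table2 VALUES (3, 5, '12345');
""")
rows = con.execute("""
    SELECT t1.X, t1.Y, t2.A, t2.B, t2.USERID
    FROM Table1 t1
    JOIN Table2 t2
      ON substr(t1.BAGID, 1, instr(t1.BAGID, '-') - 1) = t2.USERID
""").fetchall()
print(rows)  # [(1, 2, 3, 5, '12345')]
```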
|
Sure, just join on LEFT(BAGID,5). Depending on the USERID DataType you may need to CAST it as well.
If the USERID portion of BAGIT is variable length you first need to find the length using INSTR(BAGID, '-')
|
Inner join on two different fields of 2 different tables
|
[
"",
"mysql",
"sql",
""
] |
I have 5 tables which are
```
- Project(PK Project_ID),
- Bottles (PK Bottle_ID, FK Project_ID),
- Plastics (PK Plastic_ID, FK Project_ID),
- Glasses (PK Glass_ID,FK Project_ID) and
- Cups (PK Cup_ID, FK Project_ID)
```
How can I get one SQL query to return the number of Bottles ,Plastics , Glasses and Cup per Project, where One Project has many Bottles ,Plastics , Glasses and Cup ?
|
```
select p.project_id, count(distinct bottle_id), count(distinct plastic_id), count(distinct glass_id), count(distinct cup_id)
from project p
left join bottles b on p.project_id = b.project_id
left join plastics pl on p.project_id = pl.project_id
left join glasses g on p.project_id = g.project_id
left join cups c on p.project_id = c.project_id
group by p.project_id
```
|
```
Create table Project (Project_ID int )
Create table Bottles ( Bottle_ID int, Project_ID int)
Create table Plastics ( Plastics_ID int, Project_ID int)
Create table Glasses ( Glasses_ID int, Project_ID int)
Create table Cups ( Cups_ID int, Project_ID int)
insert into Project (Project_ID) values (1)
insert into Bottles (Bottle_ID,Project_ID) values (1,1)
insert into Bottles (Bottle_ID,Project_ID) values (2,1)
insert into Bottles (Bottle_ID,Project_ID) values (3,1)
insert into Plastics (Plastics_ID,Project_ID) values (1,1)
insert into Plastics (Plastics_ID,Project_ID) values (2,1)
insert into Glasses (Glasses_ID,Project_ID) values (1,1)
insert into Glasses (Glasses_ID,Project_ID) values (2,1)
insert into Cups (Cups_ID,Project_ID) values (1,1)
insert into Cups (Cups_ID,Project_ID) values (2,1)
select distinct p.Project_ID,
(select COUNT(*) from Bottles where p.Project_ID=Project_ID) Bottles ,
(select COUNT(*) from Plastics where p.Project_ID=Project_ID) Plastics ,
(select COUNT(*) from Glasses where p.Project_ID=Project_ID) Glasses ,
(select COUNT(*) from Cups where p.Project_ID=Project_ID) Cups
from Project p
```
|
One SQL Query to get number of items from different tables reference in another table
|
[
"",
"sql",
"sql-server",
"count",
""
] |
I have a master table with one record:
```
employee Id
-----------
10
```
and detail table with 4 records:
```
employee Id1
-----------
10
20
10
10
```
My join condition is `employee Id = employee Id1` with normal join type.
My question is which of the following output table will I get?
1. Output employee Id table with only 1 record (10)?
Or
2. Output employee Id table with 3 records (10, 10, 10)?
|
For the Normal join type, the output is 3 rows.
**NORMAL JOIN** - gives the matching rows from both tables.
**MASTER OUTER JOIN** - gives the matching rows from the MASTER table and all rows from the DETAIL table; the rest of the rows are discarded.
**DETAIL OUTER JOIN** - (just the opposite) gives the matching rows from the DETAIL table and all rows from the MASTER table; the rest of the rows are discarded.
The source with fewer rows and fewer duplicate keys should be chosen as the master and the other source as the detail.
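The join cardinality from the question can be reproduced with a plain inner join in SQLite (table names are made up; the rows are the question's sample data):

```python
import sqlite3

# One master row (10) matches three detail rows (10, 10, 10) -> 3 output rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE master (id INT)")
con.execute("CREATE TABLE detail (id INT)")
con.execute("INSERT INTO master VALUES (10)")
con.executemany("INSERT INTO detail VALUES (?)", [(10,), (20,), (10,), (10,)])
n = con.execute(
    "SELECT COUNT(*) FROM master m JOIN detail d ON m.id = d.id"
).fetchone()[0]
print(n)  # 3
```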
|
Output should be 3 records obviously.
|
Number of records using normal joiner in informatica
|
[
"",
"sql",
"informatica-powercenter",
"joiner",
""
] |
I have following query:
```
select user_id, trajectory_id, count(trajectory_id) from point group by user_id, trajectory_id
```
It returns following result:

And I want to delete all records from `point` table where `count` in produced table is < 5.
Right now I have the following query, but it does not compile. What is wrong with it and how can I fix it?
```
delete
from point
where count(trajectory_id) < 5
GROUP BY user_id, trajectory_id
```
I am using Postgres.
|
Since you are on Postgres, spell the multi-table delete with `USING` (the `delete p1 from ... join` form is MySQL syntax):
```
delete from point p1
using (
        select user_id, trajectory_id
        from point
        group by user_id, trajectory_id
        having count(trajectory_id) < 5
      ) p2
where p1.user_id = p2.user_id
  and p1.trajectory_id = p2.trajectory_id
```
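A quick sketch of the grouped-delete pattern in SQLite, which supports a row-value `IN` subquery (SQLite 3.15+); the data is made up:

```python
import sqlite3

# Delete every (user_id, trajectory_id) group with fewer than 5 rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE point (user_id INT, trajectory_id INT)")
con.executemany("INSERT INTO point VALUES (?, ?)",
                [(1, 10)] * 5 + [(1, 11)] * 2)
con.execute("""
    DELETE FROM point
    WHERE (user_id, trajectory_id) IN (
        SELECT user_id, trajectory_id FROM point
        GROUP BY user_id, trajectory_id
        HAVING COUNT(trajectory_id) < 5
    )
""")
remaining = con.execute("SELECT COUNT(*) FROM point").fetchone()[0]
print(remaining)  # 5  (the 2-row group was deleted)
```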
|
I would do this with a simple correlated subquery:
```
delete from point
where (user_id, trajectory_id) in (select user_id, trajectory_id
from point
group by user_id, trajectory_id
having count(*) < 5
);
```
|
sql delete based on group by
|
[
"",
"sql",
"postgresql",
"sql-delete",
""
] |
I've been trying to figure out how I can modify this query so that the result set does not include the numHits. I want the same results in the same order, just not have the numHits included.
```
SELECT
`newel_inventoryKeywordIdDictionaryId`.`inventoryId`
,COUNT(`newel_inventoryKeywordIdDictionaryId`.`inventoryId`) as numHits
FROM
`newel_inventoryKeywordIdDictionaryId`
, `newel_inventoryDictionary`
WHERE
`newel_inventoryKeywordIdDictionaryId`.`dicId` = `newel_inventoryDictionary`.`dicId`
AND (
`newel_inventoryDictionary`.`word` = 'alabaster' OR `newel_inventoryDictionary`.`word` = 'chess'
)
GROUP BY inventoryId
ORDER BY numHits DESC;
```
sample results:
```
inventoryId, numHits
6928, 2
6929, 2
6924, 2
6925, 2
13772, 2
6926, 2
18203, 1
6931, 1
13863, 1
18402, 1
```
Desired Results:
```
inventoryId
6928
6929
6924
6925
13772
6926
18203
6931
13863
18402
```
|
Move the column from SELECT clause to ORDER BY clause:
```
SELECT
`newel_inventoryKeywordIdDictionaryId`.`inventoryId`
FROM
`newel_inventoryKeywordIdDictionaryId`
, `newel_inventoryDictionary`
WHERE
`newel_inventoryKeywordIdDictionaryId`.`dicId` = `newel_inventoryDictionary`.`dicId`
AND (
`newel_inventoryDictionary`.`word` = 'alabaster' OR `newel_inventoryDictionary`.`word` = 'chess'
)
GROUP BY inventoryId
ORDER BY COUNT(`newel_inventoryKeywordIdDictionaryId`.`inventoryId`) DESC;
```
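Ordering by an aggregate that is not in the `SELECT` list works in SQLite too; a minimal sketch with made-up inventory ids:

```python
import sqlite3

# Return only inventoryId, ordered by a COUNT(*) that is never selected.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hits (inventoryId INT)")
con.executemany("INSERT INTO hits VALUES (?)", [(6928,), (6928,), (6931,)])
rows = con.execute("""
    SELECT inventoryId FROM hits
    GROUP BY inventoryId
    ORDER BY COUNT(*) DESC
""").fetchall()
print(rows)  # [(6928,), (6931,)]
```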
|
```
SELECT
`newel_inventoryKeywordIdDictionaryId`.`inventoryId`
FROM
`newel_inventoryKeywordIdDictionaryId`
, `newel_inventoryDictionary`
WHERE
`newel_inventoryKeywordIdDictionaryId`.`dicId` = `newel_inventoryDictionary`.`dicId`
AND (
`newel_inventoryDictionary`.`word` = 'alabaster' OR `newel_inventoryDictionary`.`word` = 'chess'
)
GROUP BY inventoryId
ORDER BY COUNT(`newel_inventoryKeywordIdDictionaryId`.`inventoryId`) DESC;
```
|
Group and order by a column but donot include that column in results
|
[
"",
"mysql",
"sql",
"group-by",
"sql-order-by",
""
] |
I am (still) new to postgresql and jsonb. I am trying to select some records from a subquery and am stuck. My data column looks like this (jsonb):
```
{"people": [{"age": "50", "name": "Bob"}], "another_key": "no"}
{"people": [{"age": "73", "name": "Bob"}], "another_key": "yes"}
```
And here is my query. I want to select all names that are "Bob" whose age is greater than 30:
```
SELECT * FROM mytable
WHERE (SELECT (a->>'age')::float
FROM (SELECT jsonb_array_elements(data->'people') as a
FROM mytable) as b
WHERE a @> json_object(ARRAY['name', 'Bob'])::jsonb
) > 30;
```
I get the error:
```
more than one row returned by a subquery used as an expression
```
I don't quite understand. If I do some simple substitution (just for testing) I can do this:
```
SELECT * FROM mytable
WHERE (50) > 30 -- 50 is the age of the youngest Bob
```
and that returns both rows.
|
The error means just what it says:
> more than one row returned by a subquery used as an expression
The expression in the `WHERE` clause expects a single value (just like you substituted in your added test), but your subquery returns ***multiple*** rows. `jsonb_array_elements()` is a set-returning function.
Assuming this table definition:
```
CREATE TABLE mytable (
id serial PRIMARY KEY
, data jsonb
);
```
The JSON array for `"people"` wouldn't make sense if there couldn't be multiple persons inside. Your examples with only a single person are misleading. Some more revealing test data:
```
INSERT INTO mytable (data)
VALUES
('{"people": [{"age": "55", "name": "Bill"}], "another_key": "yes"}')
, ('{"people": [{"age": "73", "name": "Bob"}], "another_key": "yes"}')
, ('{"people": [{"age": "73", "name": "Bob"}
,{"age": "77", "name": "Udo"}], "another_key": "yes"}');
```
The third row has two people.
I suggest a query with a `LATERAL` join:
```
SELECT t.id, p.person
FROM mytable t
, jsonb_array_elements(t.data->'people') p(person) -- implicit LATERAL
WHERE (t.data->'people') @> '[{"name": "Bob"}]'
AND p.person->>'name' = 'Bob'
AND (p.person->>'age')::int > 30;
```
[fiddle](https://dbfiddle.uk/pC22lLKm)
The first `WHERE` condition `WHERE (t.data->'people') @> '[{"name": "Bob"}]'` is logically redundant, but it helps performance by eliminating irrelevant rows early: don't even unnest JSON arrays without a `"Bob"` in it.
For big tables, this is ***much*** more efficient with a matching index. If you run this kind of query regularly, you should have one:
```
CREATE INDEX mytable_people_gin_idx ON mytable
USING gin ((data->'people') jsonb_path_ops);
```
Related, with more explanation:
* [Postgres 9.4 jsonb array as table](https://stackoverflow.com/questions/30827578/postgres-9-4-jsonb-array-as-table/30829650#30829650)
* [Index for finding an element in a JSON array](https://stackoverflow.com/questions/18404055/index-for-finding-an-element-in-a-json-array/18405706#18405706)
In **Postgres 12** or later consider using SQL/JSON path expressions instead. See:
* [Find rows containing a key in a JSONB array of records](https://dba.stackexchange.com/a/196635/3684)
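The unnest-then-filter logic can be cross-checked in plain Python over the same three sample documents (this is just a sanity check of the expected matches, not a replacement for the SQL):

```python
import json

# Unnest every "people" array, then keep the Bobs older than 30.
rows = [
    '{"people": [{"age": "55", "name": "Bill"}], "another_key": "yes"}',
    '{"people": [{"age": "73", "name": "Bob"}], "another_key": "yes"}',
    '{"people": [{"age": "73", "name": "Bob"},'
    ' {"age": "77", "name": "Udo"}], "another_key": "yes"}',
]
bobs = [person
        for row in rows
        for person in json.loads(row)["people"]
        if person["name"] == "Bob" and int(person["age"]) > 30]
print(len(bobs))  # 2
```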
|
In your subquery:
```
SELECT (a->>'age')::float
FROM (SELECT jsonb_array_elements(data->'people') as a
FROM mytable) as b
WHERE a @> json_object(ARRAY['name', 'Bob'])::jsonb
```
You selected all the rows of `mytable` all over again. That's why your subquery returns multiple values.
**If you want to select rows from your table** containing an element that satisfies certain conditions, then in your conditions, don't re-select from that table; use the row you've already selected in your outer query:
```
SELECT * FROM mytable
WHERE exists(SELECT 1
FROM (SELECT jsonb_array_elements(data->'people') as person) as x
WHERE person @> '{"name": "Bob"}'
AND (person->>'age')::float > 30)
```
As far as I can tell, that weird double-subquery syntax is necessary. Note that `data` is from the outer query.
**If you wanted to select all the JSON objects from the `"people"` fields** that satisfy your conditions, then just aggregate all those `"people"` elements and filter them:
```
SELECT person
FROM (SELECT jsonb_array_elements(data->'people') as person
FROM mytable) as x
WHERE person @> '{"name": "Bob"}'
AND (person->>'age')::float > 30
```
|
JSONB: more than one row returned by a subquery used as an expression
|
[
"",
"sql",
"postgresql",
"postgresql-9.4",
"jsonb",
"set-returning-functions",
""
] |
I have a table
```
declare @t table (val int)
insert INto @t (val) values (420), (420), (520), (520), (520), (620)
select
val, ROW_NUMBER() OVER (PARTITION BY val order by val) RN
from @t
```
Returns output:
```
val RN
------
420 1
420 2
520 1
520 2
520 3
620 1
```
Desired output :
```
val RN
420 1
420 2
NULL NULL
520 1
520 2
520 3
NULL NULL
620 1
```
How to achieve this?
|
This works:
```
declare @t table (val int)
declare @n table( val int)
insert INTO @n(val) values( NULL )
insert INto @t (val)values (420),(420),(520),(520),(520),(620)
SELECT T2.val, T2.RN AS RN
FROM (
select T.val AS GRP, T.val,ROW_NUMBER()OVER(PARTITION BY T.val order by T.val)RN from @t AS T
union
select DISTINCT T.VAL AS GRP, NULL AS val, NULL AS RN FROM @t AS T
) AS T2
ORDER BY T2.GRP, ISNULL(T2.RN, 99)
```
To have a summary row change the main SELECT:
```
SELECT T2.val, T2.RN AS RN
FROM (
select T.val AS GRP, T.val,ROW_NUMBER()OVER(PARTITION BY T.val order by T.val) RN from @t AS T
union
select T.VAL AS GRP, SUM(T.VAL) AS val, NULL AS RN FROM @t AS T GROUP BY T.Val
) AS T2
ORDER BY T2.GRP, ISNULL(T2.RN, 99)
```
|
This type of manipulation should really be done in the application. You can do this in SQL Server, but it is really more of an amusing trick than a recommended answer.
The idea is to join in the rows you want and then use `order by` to get the results in order:
```
select val, rn
from ((select val, ROW_NUMBER() OVER (PARTITION BY val order by val) as rn,
1 as ordering, val as val2
from @t t
) union all
(select NULL, NULL, 2, val
from @t t
group by val
)
) t
order by val2, ordering, rn;
```
Let me emphasize that this is a demonstration of the power of the `select` statement. The real answer is to do this at the application layer.
|
how can i achieve Null row after every sequence
|
[
"",
"sql",
"sql-server",
""
] |
These are the 2 columns (month, year). I want to create a single column out of them in an accurate date-time format ('YYYY-MM-DD HH:MM:SS') and add it as a new column in the table.
```
Month year
12/ 3 2013 at 8:40pm
12/ 3 2013 at 8:39pm
12/ 3 2013 at 8:39pm
12/ 3 2013 at 8:38pm
12/ 3 2013 at 8:37pm
```
What could be the best possible Hive query for this? I'm not able to form an accurate regex for it.
|
I'm going to assume that 12 is `month` and that 3 is `day` since you didn't specify. Also, you said you want `HH:MM:SS` but there are no seconds in your example, so I don't know how you're going to get them in there. I also changed `8:37pm` to `8:37am` in your example to try both cases.
**Query:**
```
select concat_ws(' ', concat_ws('-', yr, month, day)
, concat_ws(':', hour, minutes)) date_time
from (
select yr
, case when length(month) < 2 then concat('0', month) else month end as month
, case when length(day) < 2 then concat('0', day) else day end as day
, case when instr(minutes, 'pm') > 0 then cast(hour+12 as int)
when instr(minutes, 'am') > 0 and length(hour) < 2 then concat('0', hour)
else hour end as hour
, substr(minutes, 1, 2) minutes
from (
select ltrim(split(Month, '\\/')[1]) day
, split(Month, '\\/')[0] month
, split(year, ' ')[0] yr
, split(split(year, ' ')[2], '\\:')[0] hour
, split(split(year, ' ')[2], '\\:')[1] minutes
from test.sample_data ) x ) y
```
**Output:**
```
date_time
2013-12-03 20:40
2013-12-03 20:39
2013-12-03 20:39
2013-12-03 20:38
2013-12-03 08:37
```
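A plain-Python cross-check of the same string surgery, using one of the question's sample rows (the helper name `combine` is made up):

```python
from datetime import datetime

# Parse the two raw columns and emit "YYYY-MM-DD HH:MM" in 24-hour time.
def combine(month_col: str, year_col: str) -> str:
    mm, dd = [part.strip() for part in month_col.split("/")]
    yr, _, clock = year_col.split()          # e.g. "2013 at 8:40pm"
    parsed = datetime.strptime(f"{yr}-{mm}-{dd} {clock}", "%Y-%m-%d %I:%M%p")
    return parsed.strftime("%Y-%m-%d %H:%M")

print(combine("12/ 3", "2013 at 8:40pm"))  # 2013-12-03 20:40
```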
|
Yes, it worked fine, thank you very much @GoBrewers14. You saved my day!
There was a small mismatch; this is the corrected version:
```
select concat_ws(' ', concat_ws('-', yr, month, day), concat_ws(':', hour, minutes)) date_time
from (
select yr
, case when length(month) < 2 then concat('0', month) else month end as month
, case when length(day) < 2 then concat('0', day) else day end as day
, case when instr(minutes, 'pm') > 0 then cast(hour+12 as int)
when instr(minutes, 'am') > 0 and length(hour) < 2 then concat('0', hour) else hour end as hour
, substr(minutes, 1, 2) minutes
from (
select ltrim(split(month, '\\/')[1]) day
, ltrim(split(month, '\\/')[0]) month
, split(year, ' ')[1] yr
, split(split(year, ' ')[3], '\\:')[0] hour
, split(split(year, ' ')[3], '\\:')[1] minutes
from db.table_name )
x ) y limit 5;
```
|
Merging two columns into a single column and formatting the content to form an accurate date-time format in Hive?
|
[
"",
"sql",
"regex",
"hadoop",
"hive",
"datetime-format",
""
] |
I'm using `SQL Server 2008` and I have the following stored procedure :
```
ALTER PROC [dbo].[spGetLatestLangId]
@salt VARCHAR(50),
@result VARCHAR(50) OUTPUT
AS
DECLARE @CheckStr AS VARCHAR(50)
SET @CheckStr = @salt + '%'
SET @result = (SELECT TOP 1 Id FROM Lang WHERE salt LIKE @CheckStr ORDER BY dtCrt DESC)
```
Let's say my `Lang` table have following rows:
```
Id dtCrt
------------- ----------
PRODUCT_1 2011-01-01
PRODUCT_2 2011-01-02
PRODUCTCODE_1 2011-01-05
```
When I call the stored procedure with following parameter
```
EXEC spGetLatestLangId 'PRODUCT_', @a OUTPUT
```
I'm getting `PRODUCTCODE_1` instead of `PRODUCT_2` because it ignores the `underscore`. How do I escape `underscore` in `LIKE` clause?
|
I think you need to use:
```
SET @CheckStr = REPLACE(@salt, '_', '[_]') + '%'
```
|
> how do I escape underscore in LIKE clause
Using the `ESCAPE` clause:
```
where some_column LIKE 'X/_%' ESCAPE '/';
```
in your case:
```
WHERE salt LIKE @CheckStr ESCAPE '/'
```
You can use *any* character for the escape character (e.g. `ESCAPE '\'` or `ESCAPE '@'`) - it's a good idea to use one that is not part of the values.
But this means you also have to call the procedure with the escape character:
```
EXEC spGetLatestLangId 'PRODUCT/_', @a OUTPUT
```
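The `ESCAPE` clause is also supported by SQLite, so the behaviour is easy to check with the question's `Lang` rows:

```python
import sqlite3

# '/' escapes the underscore, so '_' matches literally instead of "any char".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Lang (Id TEXT)")
con.executemany("INSERT INTO Lang VALUES (?)",
                [("PRODUCT_1",), ("PRODUCT_2",), ("PRODUCTCODE_1",)])
rows = con.execute(
    "SELECT Id FROM Lang WHERE Id LIKE 'PRODUCT/_%' ESCAPE '/' ORDER BY Id"
).fetchall()
print(rows)  # [('PRODUCT_1',), ('PRODUCT_2',)]
```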
|
Escape underscore in LIKE
|
[
"",
"sql",
"sql-server",
""
] |
My table structure:
```
consumer_id, signup_date, plan_id, subscription_date
```
It has multiple subscription\_dates for the same consumer\_id.
What I need in the result is all fields with first (min) subscription\_date where plan\_id is not a specific value.
Without the where clause I tried:
```
select consumer_id,
signup_date,
plan_id,
min(subscription_date)
from myTable
Group by consumer_id
```
I get an error stating that
> Column 'signup\_date', 'plan\_id' is invalid in the select list because
> it is not contained in either an aggregate function or the GROUP BY
> clause.
But I do not wish to group the results by 'signup\_date' or 'plan\_id'.
Whatever is the value in the row of 'signup\_date' and 'plan\_id' after I get the earliest subscription\_date for each consumer\_id, that is the value I want. How can I write this condition in SQL?
|
You can try this:
```
select consumer_id, signup_date, plan_id
, min(subscription_date) over (partition by consumer_id)
from myTable
```
If needed you can remove unneded rows with your where.
|
Use `NOT EXISTS` to return a row if no other row with same consumer\_id but earlier date exists:
```
select consumer_id,
signup_date,
plan_id,
subscription_date
from myTable t1
where not exists (select 1 from myTable t2
where t2.consumer_id = t1.consumer_id
and t2.subscription_date < t1.subscription_date)
```
This will return both rows if two rows share the same, earliest date.
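The `NOT EXISTS` earliest-row pattern, checked in SQLite on made-up rows:

```python
import sqlite3

# Keep each consumer's row that has no earlier subscription_date.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE myTable (consumer_id INT, plan_id INT, subscription_date TEXT)")
con.executemany("INSERT INTO myTable VALUES (?, ?, ?)", [
    (1, 10, "2020-01-05"), (1, 20, "2020-03-01"), (2, 30, "2020-02-01"),
])
rows = con.execute("""
    SELECT consumer_id, plan_id, subscription_date
    FROM myTable t1
    WHERE NOT EXISTS (SELECT 1 FROM myTable t2
                      WHERE t2.consumer_id = t1.consumer_id
                        AND t2.subscription_date < t1.subscription_date)
    ORDER BY consumer_id
""").fetchall()
print(rows)  # [(1, 10, '2020-01-05'), (2, 30, '2020-02-01')]
```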
|
Get rows for each user with early date values
|
[
"",
"sql",
"sql-server",
""
] |
I have a database of baseball plays with a PlayerID and a TypeID (the kind of play: double, strike out, etc). The data looks something like this:
```
+----------+--------+
| playerid | typeid |
+----------+--------+
| 2 | 4 |
| 2 | 4 |
| 2 | 7 |
| 3 | 7 |
| 3 | 7 |
| 3 | 7 |
| 3 | 26 |
| 3 | 7 |
```
I'm trying to find which players had the most of each kind of play. E.g. Jim (PlayerID 3) had the most strike outs (TypeID 7) and Bob (PlayerID 2) had the most home runs (TypeID 4) which should result in the following table:
```
+----------+--------+----------------+
| playerid | typeid | max(playcount) |
+----------+--------+----------------+
| 2 | 4 | 12 |
| 3 | 7 | 9 |
| 3 | 26 | 1 |
```
My best attempt so far is to run:
```
SELECT playerid,typeid,MAX(playcount) FROM
(
SELECT playerid,typeid,COUNT(*) playcount FROM plays GROUP BY playerid,typeid
) AS t GROUP BY typeid;
```
Which returns the proper maximums of each type, but the associated PlayerIDs are all wrong and I can't figure out why. I'm sure I'm missing something simple (or making this overly complicated) but can't figure it out. Any ideas?
|
In MySQL this group-wise maximum is sadly not as simple as you want it to be.
Here's a way to do it using a method similar to what is suggested in [ROW\_NUMBER() in MySQL](https://stackoverflow.com/questions/1895110/row-number-in-mysql)
```
SELECT a.*
FROM (
SELECT playerid
,typeid
,COUNT(*) playcount
FROM plays
GROUP BY playerid,typeid
) a
LEFT JOIN
(
SELECT playerid
,typeid
,COUNT(*) playcount
FROM plays
GROUP BY playerid,typeid
) b
ON a.typeid = b.typeid
AND a.playcount < b.playcount
WHERE b.playerid IS NULL
```
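The self-anti-join group-wise maximum can be verified in SQLite on the question's sample plays:

```python
import sqlite3

# A group row survives only if no other group with the same typeid
# has a strictly larger playcount (the anti-join leaves b.playerid NULL).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE plays (playerid INT, typeid INT)")
con.executemany("INSERT INTO plays VALUES (?, ?)", [
    (2, 4), (2, 4), (2, 7), (3, 7), (3, 7), (3, 7), (3, 26), (3, 7),
])
rows = con.execute("""
    SELECT a.playerid, a.typeid, a.playcount
    FROM (SELECT playerid, typeid, COUNT(*) AS playcount
          FROM plays GROUP BY playerid, typeid) a
    LEFT JOIN (SELECT playerid, typeid, COUNT(*) AS playcount
               FROM plays GROUP BY playerid, typeid) b
      ON a.typeid = b.typeid AND a.playcount < b.playcount
    WHERE b.playerid IS NULL
    ORDER BY a.typeid
""").fetchall()
print(rows)  # [(2, 4, 2), (3, 7, 4), (3, 26, 1)]
```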
|
Would this work?
```
SELECT
playertypecounts.*
FROM
(SELECT
playerid,
typeid,
COUNT(*) as playcount
FROM plays
GROUP BY playerid, typeid) playertypecounts
INNER JOIN
(SELECT
typeid,
MAX(playcount) as maxplaycount
FROM
(SELECT
playerid,
typeid,
COUNT(*) as playcount
FROM plays
GROUP BY playerid, typeid) playcounts
GROUP BY typeid) maxplaycounts
ON playertypecounts.typeid = maxplaycounts.typeid
AND playertypecounts.playcount = maxplaycounts.maxplaycount
```
This part of the query block returns the maximum playcount for each typeid:
```
(SELECT
typeid,
MAX(playcount) as maxplaycount
FROM
(SELECT
playerid,
typeid,
COUNT(*) as playcount
FROM plays
GROUP BY playerid, typeid) playcounts
GROUP BY typeid) maxplaycounts
```
Then it's inner-joined to all the typeid/playcounts in order to filter those counts where the player(s) have the maximum counts for any given typeid.
See [SQLFiddle example](http://sqlfiddle.com/#!9/beaee/5/0 "SQL Fiddle example").
Having said all that, I actually prefer @KarlKieninger's answer since it's more elegant.
|
Select ID having maximum count of another ID
|
[
"",
"mysql",
"sql",
"count",
""
] |
I need to select `employee_id` from this table; if 00001 appears four times, I need to get it only once.
How do I write a query for this scenario?
```
id company_ID Employee_ID Name Relationship Dob Age Gender
1 EMPL 00001 Choodamani Spouse 11-Aug-66 49 Female
2 EMPL 00001 Komala Mother 30-Oct-39 76 Female
3 EMPL 00001 Varshini Daughter 29-Apr-04 11 Female
4 EMPL 00001 Vasudevan Employee 15-Jul-62 53 Male
5 EMPL 00002 Siddharth Son 1-Jun-00 15 Male
6 EMPL 00002 Poongavanam Mother 21-Oct-39 76 Female
7 EMPL 00002 Aruna Spouse 16-Sep-68 47 Female
8 EMPL 00002 Abirami Daughter 7-May-97 18 Female
9 EMPL 00002 Murali Employee 7-Oct-67 48 Male
```
|
You can use the `distinct` keyword:
```
SELECT DISTINCT Employee_ID FROM Employees
```
|
You can try with `distinct`:
```
select distinct Employee_ID from tbl
```
|
How to write query to select value without repeat same values again?
|
[
"",
"mysql",
"sql",
"select",
""
] |
The following query is working absolutely fine for me:
```
SELECT * From Customers
WHERE Customers.ContactName = (SELECT FirstName
FROM Employees as E, orders as O
WHERE <condition>
LIMIT 1);
```
However, if I use `LIKE` instead of `=` to compare with the result of the subquery, I'm not getting any results.
How do I use `LIKE '%%'` in the above query?
|
First, this query *should* not be working fine:
```
SELECT *
From Customers
WHERE Customers.ContactName = (SELECT FirstName
from Employees as E, orders as O
WHERE LIMIT 1);
```
Because `WHERE LIMIT 1` is not proper SQL. And, you should learn to use proper `join` syntax. Presumably, you intend:
```
SELECT c.*
From Customers c
WHERE c.ContactName = (SELECT FirstName
FROM Employees as E JOIN
Orders as O
ON . . .
LIMIT 1
);
```
You could conceivably add `LIKE` instead of `=` and '%' in the subquery:
```
WHERE c.ContactName LIKE (SELECT CONCAT('%', FirstName, '%') . . .
```
But I would write this using `EXISTS`:
```
SELECT c.*
From Customers c
WHERE EXISTS (SELECT 1
FROM Employees as E JOIN
Orders as O
ON . . .
WHERE c.ContactName LIKE CONCAT('%', FirstName, '%')
);
```
This does not do exactly the same thing as your query. It does something more reasonable. Instead of comparing one random name from the subquery, it will determine if there are *any* matches in the subquery. That seems a more reasonable intention for the query.
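The `EXISTS` pattern can be sketched against SQLite (which has no `CONCAT()`, so the standard `||` concatenation operator is used instead; the customer and employee names here are made up for the demo):

```python
import sqlite3

# Hypothetical Customers/Employees data to exercise EXISTS + LIKE.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (ContactName TEXT);
    CREATE TABLE Employees (FirstName TEXT);
    INSERT INTO Customers VALUES ('Maria Anders'), ('Ana Trujillo'), ('Thomas Hardy');
    INSERT INTO Employees VALUES ('Maria'), ('Thomas');
""")

# Keep customers whose contact name contains ANY employee first name.
rows = conn.execute("""
    SELECT c.ContactName
    FROM Customers c
    WHERE EXISTS (SELECT 1 FROM Employees e
                  WHERE c.ContactName LIKE '%' || e.FirstName || '%')
    ORDER BY c.ContactName
""").fetchall()
print(rows)  # [('Maria Anders',), ('Thomas Hardy',)]
```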
|
Why not use a simple `INNER JOIN`:
```
SELECT Customers.*
FROM Customers
INNER JOIN Employees ON Customers.ContactName LIKE CONCAT('%', Employees.FirstName,'%')
WHERE Employees.Foo = 'Bar'
```
Note: in MySQL, `+` is the addition operator and cannot be used to concatenate strings.
|
Using 'LIKE' with the result of a SQL subquery
|
[
"",
"mysql",
"sql",
"subquery",
"sql-like",
""
] |
I have a **SQL Database** containing **24 Columns** and about **24000 Rows**. I want to retrieve data from that **SQL Database**. I am new to **Android** and don't know how to use it.
I found an **Android library** called [Realm](http://Realm.io/docs/java/latest/) but I couldn't find a way of **Converting SQL** to **Realm Database**.
If there is a simple way to **read data** from and **store data** in an **SQL Database**, or another way of storing data in Android in the form of a **Database**, please let me know. Any help will be appreciated.
Thanks!!
|
Android uses SQLite. This is very similar to SQL. You would only need to have the app download the data from the server database and copy it into the local database.
SQLite information can be found at <http://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.html>
Converting from an SQL file has been discussed at [Convert MySQL to SQlite](https://stackoverflow.com/questions/3890518/convert-mysql-to-sqlite)
|
Android comes with SQLite; this is what you should use. Did you search for anything like "Android and SQL"? [android training on the subject](http://developer.android.com/training/basics/data-storage/databases.html)
Edit:
So you don't have an SQL file, you have an online MySQL database. You can use [jdbc](http://www.oracle.com/technetwork/java/javase/jdbc/index.html) but it is not widely recommended. There are some alternatives, as in [here](https://stackoverflow.com/questions/12233145/connecting-to-mysql-from-android-with-jdbc) and [here](https://stackoverflow.com/questions/7221620/android-jdbc-not-working-classnotfoundexception-on-driver/7221716#7221716)
|
How to use SQL Database or other means of Data Storage
|
[
"",
"android",
"mysql",
"sql",
"sql-server",
"database",
""
] |
I have a simple SQL Server 2008 update query using `LOWER(FIELD_NAME)` that should only update where `FIELD_NAME` contains uppercase characters.
```
UPDATE TABLE
SET FIELD_NAME = LOWER(FIELD_NAME)
WHERE FIELD_NAME *contains uppercase characters*
```
How can I select only columns with uppercase characters in the where clause?
|
You could use a case-sensitive collation:
```
UPDATE TABLE
SET FIELD_NAME = LOWER(FIELD_NAME)
WHERE FIELD_NAME LIKE '%[ABCDEFGHIJKLMNOPQRSTUVWXYZ]%' COLLATE SQL_LATIN1_GENERAL_CP1_CS_AS
```
|
Try the following out:
```
UPDATE TABLE
SET FIELD_NAME = LOWER(FIELD_NAME)
WHERE FIELD_NAME LIKE '%[A-Z]%' Collate SQL_Latin1_General_CP1_CS_AS
```
If you also have other characters such as numbers then the you could also do the query as follows:
```
UPDATE TABLE
SET FIELD_NAME = LOWER(FIELD_NAME)
WHERE FIELD_NAME = UPPER(FIELD_NAME) Collate SQL_Latin1_General_CP1_CS_AS
```
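The collation-based filter is SQL Server specific, but the underlying idea can be cross-checked in SQLite, whose default comparison is byte-wise case-sensitive: a value containing uppercase letters simply differs from its lowercased form (hypothetical table and data):

```python
import sqlite3

# Hypothetical table with a mix of cased values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field_name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("Hello",), ("world",), ("MIXED99",)])

# Only rows containing uppercase characters differ from LOWER(field_name).
updated = conn.execute(
    "UPDATE t SET field_name = LOWER(field_name) "
    "WHERE field_name <> LOWER(field_name)"
).rowcount
rows = [r[0] for r in conn.execute("SELECT field_name FROM t ORDER BY field_name")]
print(updated, rows)  # 2 ['hello', 'mixed99', 'world']
```

In SQL Server the same inequality only works under a case-sensitive collation, which is exactly what the `COLLATE` clauses in the answers provide.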
|
SQL select fields containing uppercase characters
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
When executing my doDelete.php, this error message appears:
" Cannot delete or update a parent row: a foreign key constraint fails (`fyp`.`book`, CONSTRAINT `book_user_key` FOREIGN KEY (`id`) REFERENCES `user` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION) "
the Delete query:
```
$queryDelete = "DELETE FROM user WHERE id = $theUserID";
```
the retrieve information query from 3 tables:
```
$query2 = "SELECT * FROM user,country,book where book.id=user.id AND user.country_id=country.country_id ORDER BY `user`.`id` ASC";
```
The Relationship between my 'book' and 'user' table is that a user is able to add a book. Hence, the user ID is placed in the book table to identify which user adds a book. However when I want to delete a user from my php, the above error message appeared. Here's a snippet of the retrieval of user ID to delete the user. How do I fix this?
```
<td><form method="post" action="doDelete.php"><input type="hidden" name="theUserID" value="<?php echo $rows['id']; ?>" /><input type="submit" value="Delete" /</form></td>
```
Here is the relevant tables
```
CREATE SCHEMA IF NOT EXISTS `fyp` ;
USE `fyp` ;
-- -----------------------------------------------------
-- Table `fyp`.`country`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `fyp`.`country` ;
CREATE TABLE IF NOT EXISTS `fyp`.`country` (
`country_id` INT NOT NULL AUTO_INCREMENT,
`country` VARCHAR(45) NOT NULL,
PRIMARY KEY (`country_id`))
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `fyp`.`user`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `fyp`.`user` ;
CREATE TABLE IF NOT EXISTS `fyp`.`user` (
`id` INT NOT NULL AUTO_INCREMENT,
`user_name` VARCHAR(45) NOT NULL,
`password` VARCHAR(45) NOT NULL,
`email_address` VARCHAR(45) NOT NULL,
`date_of_birth` DATE NOT NULL,
`country_id` INT NOT NULL,
`gender_id` INT NOT NULL,
`role_id` INT NOT NULL,
`last_login` TIMESTAMP NULL,
PRIMARY KEY (`id`),
INDEX `fk_user_country1_idx` (`country_id` ASC),
INDEX `fk_user_gender1_idx` (`gender_id` ASC),
INDEX `fk_user_role1_idx` (`role_id` ASC),
CONSTRAINT `user_country_key`
FOREIGN KEY (`country_id`)
REFERENCES `fyp`.`country` (`country_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `user_gender_key`
FOREIGN KEY (`gender_id`)
REFERENCES `fyp`.`gender` (`gender_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `user_role_key`
FOREIGN KEY (`role_id`)
REFERENCES `fyp`.`role` (`role_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `fyp`.`book`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `fyp`.`book` ;
CREATE TABLE IF NOT EXISTS `fyp`.`book` (
`book_id` INT NOT NULL AUTO_INCREMENT,
`title` VARCHAR(45) NOT NULL,
`ISBN` VARCHAR(45) NOT NULL,
`book_desc` VARCHAR(100) NOT NULL,
`year_published` VARCHAR(45) NOT NULL,
`year_of_birth` YEAR NOT NULL,
`image` VARCHAR(45) NULL,
`genre_id` INT NOT NULL,
`publisher_id` INT NOT NULL,
`user_id` INT NOT NULL,
PRIMARY KEY (`book_id`),
INDEX `fk_book_publishers1_idx` (`publisher_id` ASC),
INDEX `fk_book_user1_idx` (`user_id` ASC),
INDEX `fk_book_genre1_idx` (`genre_id` ASC),
CONSTRAINT `book_publishers_key`
FOREIGN KEY (`publisher_id`)
REFERENCES `fyp`.`publishers` (`publisher_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `book_user_key`
FOREIGN KEY (`user_id`)
REFERENCES `fyp`.`user` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `book_genre_key`
FOREIGN KEY (`genre_id`)
REFERENCES `fyp`.`genre` (`genre_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
```
|
Indeed, you have a constraint on the book table that does not let you delete a user as long as there are books added by that user:
```
CONSTRAINT `book_user_key`
FOREIGN KEY (`user_id`)
REFERENCES `fyp`.`user` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION
```
You can change it to
```
ON DELETE CASCADE
```
so that deleting the user will delete all their books too.
Or you can set the constraint to
```
ON DELETE SET NULL
```
and remove NOT NULL from book.user\_id: this will keep the books of the deleted user, but set their user\_id to NULL.
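A minimal sketch of the cascade behaviour, using SQLite (where foreign-key enforcement must be switched on per connection) and stripped-down versions of the user and book tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY);
    CREATE TABLE book (
        book_id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL
            REFERENCES user(id) ON DELETE CASCADE
    );
    INSERT INTO user (id) VALUES (1), (2);
    INSERT INTO book (book_id, user_id) VALUES (10, 1), (11, 1), (12, 2);
""")

conn.execute("DELETE FROM user WHERE id = 1")  # cascades to that user's books
books = conn.execute("SELECT book_id FROM book").fetchall()
print(books)  # [(12,)]
```

With `NO ACTION` instead of `CASCADE`, the same `DELETE` would raise a foreign key constraint error, which is exactly what the question reports.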
|
You cannot delete a user record because it has related records in the *book* table. That is how a foreign key constraint works. You created this constraint in these lines:
```
FOREIGN KEY (`user_id`) REFERENCES `fyp`.`user` (`id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION
```
To solve this problem you can:
1. Delete records in the *book* table for the given user id before deleting that user from the *user* table. This will require executing 2 DELETE statements one by one.
2. Modify the foreign key and use `ON DELETE CASCADE`. It will cause records in the *book* table to be deleted automatically together with the related record in the *user* table.
I prefer solution 1 because it gives you more control. It is also less error prone because you always have to explicitly delete records from both tables. In the second case it is possible that you will accidentally delete a lot of data from your database.
|
Error message: Cannot delete or update a parent row: a foreign key constraint fails
|
[
"",
"mysql",
"sql",
"constraints",
""
] |
I am sorry if this is duplicate. Please point me to correct question. I am using SQL SERVER 2008. I am using below query since I need to get data from 3 tables.
```
SELECT qc.FileID as [FileID],
qc.QID1 as [QID1],
xqs.SID1 as [SID1],
xqc.CID1 as [CID1],
xqs.Comments as [SComments],
xqc.Comments as [CComments]
FROM QCTable(nolock) qc
JOIN QCSectionTable (nolock) xqs ON qc.QCID = xqs.QCID
LEFT JOIN QCChargeTable (nolock) xqc ON xqc.QCXrefID = xqs.QCXrefID
```
For the above I am getting columns like `FileID QID1 SID1 CID1 SComments CComments`.
I have a row like below
```
FileID1 QID1 SID1 CID1 SComments CComments
```
I need to split above row as
```
FileID1 QID1 SID1 null SComments
FileID1 QID1 SID1 CID1 CComments
```
Thanks in advance.
|
You could do something like this using `UNION ALL`:
```
SELECT
qc.FileID AS [FileID1]
,qc.QID1 AS [QID1]
,xqs.SID1 AS [SID1]
,NULL AS [CID1] --assigning default value as null
,xqs.Comments AS [SComments]
FROM QCTable(NOLOCK) qc
JOIN QCSectionTable(NOLOCK) xqs ON qc.QCID = xqs.QCID
LEFT JOIN QCChargeTable(NOLOCK) xqc ON xqc.QCXrefID = xqs.QCXrefID
UNION ALL
SELECT
qc.FileID AS [FileID1]
,qc.QID1 AS [QID1]
,xqs.SID1 AS [SID1]
,xqc.CID1 AS [CID1]
,xqc.Comments AS [CComments]
FROM QCTable(NOLOCK) qc
JOIN QCSectionTable(NOLOCK) xqs ON qc.QCID = xqs.QCID
LEFT JOIN QCChargeTable(NOLOCK) xqc ON xqc.QCXrefID = xqs.QCXrefID
```
|
The easiest way is `union all`:
```
select FileID1, QID1, SID1, null as cId1, SComments
from table t
union all
select FileID1, QID1, SID1, cId1, CComments
from table t;
```
If you have a large amount of data, it can be a bit faster to do this using `cross apply` or a `cross join`:
```
select v.*
from table t cross apply
(values (FileID1, QID1, SID1, null, SComments),
(FileID1, QID1, SID1, cId1, CComments)
) v(FileID1, QID1, SID1, cId1, CComments);
```
The advantage is that this would scan the table only once.
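The `UNION ALL` variant can be sketched in SQLite against a simplified single-table stand-in for the joined result (SQLite has no `CROSS APPLY`, so only the first approach ports directly):

```python
import sqlite3

# One flattened row standing in for the three-table join result.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (FileID TEXT, QID1 TEXT, SID1 TEXT, CID1 TEXT,
                    SComments TEXT, CComments TEXT);
    INSERT INTO t VALUES ('FileID1', 'QID1', 'SID1', 'CID1',
                          'SComments', 'CComments');
""")

# Each source row contributes one "S" row (CID1 nulled out) and one "C" row.
rows = conn.execute("""
    SELECT FileID, QID1, SID1, NULL AS CID1, SComments AS Comments FROM t
    UNION ALL
    SELECT FileID, QID1, SID1, CID1, CComments FROM t
    ORDER BY Comments
""").fetchall()
print(rows)
# [('FileID1', 'QID1', 'SID1', 'CID1', 'CComments'),
#  ('FileID1', 'QID1', 'SID1', None, 'SComments')]
```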
|
SQL : Split one row into two rows depending on column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have to do a query without using views that displays the automobile brand that has the maximum number of cars that have a single owner.
that's the table that I have
```
CAR(idcar, brand, name, year_p)
OWNER(idowner,name, b_date)
PROPERTY(idcar,idowner,date_b, date_s)
```
That's what I was writing..
```
SELECT Car.brand, COUNT(PROPERTY.idowner) as con
FROM PROPERTY JOIN CAR ON Car.idcar=Property.idcar
GROUP BY Car.brand
ORDER BY con ASC
LIMIT 1;
```
Is it OK? What can I do to improve it?
That's how it should work
```
CAR
-----------------
1|FIAT|test1|2000
2|Alfa|test2|2011
3|FIAT|test3|2001
4|Alfa|test4|2006
5|Alfa|test5|2004
OWNER
-----------
1|John|1990
2|Jack|1980
3|Sam |1991
4|Rick|1980
5|Cris|1993
6|Holm|1992
PROPERTY
-------------
1|1|2000|2000
2|1|2001|2003
3|2|2000|2004
1|3|2002|2005
4|4|2004|2006
5|6|2003|2008
2|5|2001|2009
```
So the query should return Alfa because I have 3 people that own only one car, and those 3 people each own an Alfa (so 3 Alfas). It can't be FIAT because I only have 2 people whose only car is a FIAT (so 2 FIATs). OWNER no. 1 is excluded because he has 2 cars.
|
I think you were on the right path with your query; it only needs a little modification...
First, if you want to select the brand with the most owners then you should order them in descending order so the max will be at the top... And second, you need to add a WHERE clause which will exclude all owners that have more than one car...
It should look like this:
```
SELECT c.brand, COUNT(p.idowner) as con
FROM PROPERTY p
JOIN CAR c
ON c.idcar=p.idcar
WHERE p.idowner NOT IN (SELECT idowner
FROM (SELECT idowner, COUNT(idowner) as c
FROM PROPERTY
GROUP BY idowner) x
WHERE c > 1)
GROUP BY c.brand
ORDER BY con DESC
LIMIT 1;
```
Here is the [SQL Fiddle](http://sqlfiddle.com/#!9/4c149/18) to take a look how it's work...
GL!
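The same approach can be sketched in SQLite with the sample data from the question; the nested subquery around the `HAVING` is collapsed into a single `GROUP BY ... HAVING` here for brevity:

```python
import sqlite3

# Car and property data from the question (columns trimmed to what's needed).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE car (idcar INTEGER, brand TEXT);
    CREATE TABLE property (idcar INTEGER, idowner INTEGER);
    INSERT INTO car VALUES (1,'FIAT'), (2,'Alfa'), (3,'FIAT'), (4,'Alfa'), (5,'Alfa');
    INSERT INTO property VALUES (1,1), (2,1), (3,2), (1,3), (4,4), (5,6), (2,5);
""")

# Exclude owners with more than one property row, then count per brand.
rows = conn.execute("""
    SELECT c.brand, COUNT(p.idowner) AS con
    FROM property p JOIN car c ON c.idcar = p.idcar
    WHERE p.idowner NOT IN (SELECT idowner FROM property
                            GROUP BY idowner HAVING COUNT(*) > 1)
    GROUP BY c.brand
    ORDER BY con DESC
    LIMIT 1
""").fetchall()
print(rows)  # [('Alfa', 3)]
```

Owner 1 (two cars) is filtered out, so FIAT ends up with 2 single-car owners and Alfa with 3, matching the expected result.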
|
This is too long for a comment.
Your query is not correct. It returns the brands that have the most properties for its cars. That may or may not have to do with the number of owners. I'm sure there's a relationship of some sort ("no properties" --> "no owners" and vice versa).
In any case, I don't know what you mean by "the maximum number of cars that have a single owner". You should edit your question with sample data and desired results. A SQL Fiddle would also be helpful.
|
SQL query creation
|
[
"",
"mysql",
"sql",
""
] |
I have a client table from which I need to select data in the following order:
```
Gender Count
-----------------
Female 10
Male 50
Unknown 2
```
(The above table data is just as an example)
To achieve this, I used the following SQL:
```
SELECT decode(gender, 'F', 'Female', 'M', 'Male', 'Unknown') "Gender"
, count(ID) "Count"
FROM client c
WHERE c.created <= '01-JAN-2000'
GROUP BY decode(gender, 'F', 'Female', 'M', 'Male', 'Unknown')
ORDER BY decode(gender, 'F', 'Female', 'M', 'Male', 'Unknown')
;
```
But, the problem is there are no clients created before 1-JAN-2000! So, the table appears blank:
```
Gender Count
----------------
```
But, I would like to show in this case like this:
```
Gender Count
----------------
Female 0
Male 0
Unknown 0
```
Please tell how can I do that?
|
You could start of with a select on the genders (using an inline view), then do a `left outer join` on the results. This will guarantee that there is always a list of genders available to show in the first column:
```
SELECT g.gender
, count(ID) "Count"
FROM ( select 'Female' gender from dual
union all
select 'Male' gender from dual
union all
select 'Unknown' gender from dual
) g
left
outer
join client c
on decode(c.gender, 'F', 'Female', 'M', 'Male', 'Unknown') = g.gender
and c.created <= '01-JAN-2000'
GROUP
BY g.gender
ORDER
BY g.gender
;
```
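Here is the inline-view idea sketched in SQLite (ISO date strings replace Oracle's date literals; the sample clients are made up and all fall after the cutoff, so every count comes back 0):

```python
import sqlite3

# Hypothetical clients, all created after the 2000-01-01 cutoff.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE client (id INTEGER, gender TEXT, created TEXT);
    INSERT INTO client VALUES (1, 'F', '2005-03-01'), (2, 'M', '2007-08-15');
""")

# Left-join the fixed gender list to the clients so each gender always appears;
# COUNT(c.id) counts only matching (non-NULL) rows, giving 0 where none match.
rows = conn.execute("""
    SELECT g.gender, COUNT(c.id) AS cnt
    FROM (SELECT 'Female' AS gender UNION ALL
          SELECT 'Male'           UNION ALL
          SELECT 'Unknown') g
    LEFT JOIN client c
      ON CASE c.gender WHEN 'F' THEN 'Female'
                       WHEN 'M' THEN 'Male'
                       ELSE 'Unknown' END = g.gender
     AND c.created <= '2000-01-01'
    GROUP BY g.gender
    ORDER BY g.gender
""").fetchall()
print(rows)  # [('Female', 0), ('Male', 0), ('Unknown', 0)]
```

Note the date filter lives in the `ON` clause, not `WHERE`; moving it to `WHERE` would turn the outer join back into an inner join and the zero-count rows would vanish again.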
|
You should normalize your data to hold a gender table. Or at least use one (as an inline view) in your query, like below, and do an OUTER join.
And use standard date formats.
**EDIT:** For better readability and performance, you have to carefully categorize the gender. Here I am presuming an additional category 'U' for the unknown case.
```
SELECT g.GENDER "Gender"
, count(ID) "Count"
FROM client c,
( SELECT 'Female' As Gender,'F' as GenderId FROM DUAL
UNION ALL
SELECT 'Male','M' FROM DUAL
UNION ALL
SELECT 'Unknown','U' FROM DUAL
) g
WHERE c.created(+) <= DATE '2000-01-01'
AND c.Gender(+) = g.genderId
GROUP BY g.gender
ORDER BY g.gender
;
```
|
How to write an SQL query that returns count = 0 when no records found in group
|
[
"",
"sql",
"oracle",
""
] |
I have two unrelated tables:
```
contribution(id,amount, create_at, user_id)
solicitude(id, amount, create_at, status_id, type_id, user_id)
```
I need to subtract the sum of the amount of the contribution and of the solicitude from a user, but that result can't to be negative.
How can I do this? Function or query?
I tried this query:
```
SELECT sum(contribution.amount)
- (SELECT sum(solicitude.amount)
FROM solicitude
WHERE user_id = 1 AND status_id = 1) as total
FROM contribution
WHERE contribution.user_id = 1
```
|
I interpret your remark `but that result can't to be negative` as requirement to return 0 instead of negative results. The simple solution is [**`GREATEST()`**](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-GREATEST-LEAST):
```
SELECT GREATEST(sum(amount)
- (SELECT sum(amount)
FROM solicitude
WHERE status_id = 1
AND user_id = 1), 0) AS total
FROM contribution
WHERE user_id = 1;
```
Otherwise, I kept your original query, which is fine.
For other cases with the possible result that no row could be returned I would replace with *two* sub-selects. But the use of the aggregate function *guarantees* a result row, even if the given `user_id` is not found at all. Compare:
* [Get n grouped categories and sum others into one](https://stackoverflow.com/questions/29560381/get-n-grouped-categories-and-sum-others-into-one/30598887#30598887)
If the result of the subtraction would be `NULL` (because no row is found or the sum is `NULL`), `GREATEST()` will also return `0`.
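A sketch of the `GREATEST()` clamp ported to SQLite: SQLite's scalar `MAX()` returns NULL when any argument is NULL (unlike PostgreSQL's `GREATEST`, which ignores NULLs), hence the extra `COALESCE` around the subquery. The sample amounts are made up:

```python
import sqlite3

# Made-up amounts where the solicitudes exceed the contributions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contribution (amount REAL, user_id INTEGER);
    CREATE TABLE solicitude (amount REAL, status_id INTEGER, user_id INTEGER);
    INSERT INTO contribution VALUES (100, 1), (50, 1);
    INSERT INTO solicitude VALUES (120, 1, 1), (80, 1, 1);
""")

# 150 - 200 = -50, clamped to 0 by the two-argument scalar MAX().
(total,) = conn.execute("""
    SELECT MAX(SUM(amount)
               - COALESCE((SELECT SUM(amount)
                           FROM solicitude
                           WHERE status_id = 1 AND user_id = 1), 0), 0)
    FROM contribution
    WHERE user_id = 1
""").fetchone()
print(total)  # 0
```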
|
You can add an outer query to check the total value:
```
SELECT CASE WHEN total > 0 THEN total ELSE 0 END AS total
FROM (
SELECT
sum(contribution.amount) - (SELECT sum(solicitude.amount)
FROM solicitude
WHERE user_id = 1 AND status_id = 1) as total
FROM contribution
WHERE
contribution .user_id = 1
) alias;
```
This solution is OK, but I suggest an alternative approach. Check how this query works:
```
with contribution as (
select user_id, sum(amount) as amount from contribution
group by 1),
solicitude as (
select user_id, sum(amount) as amount from solicitude
where status_id = 1
group by 1)
select
c.user_id, c.amount as contribution, s.amount as solitude,
case when c.amount > s.amount then c.amount - s.amount else 0 end as total
from contribution c
join solicitude s on c.user_id = s.user_id;
```
---
I made a simple test, just out of curiosity, on this setup:
```
create table public.solicitude (
id integer,
amount numeric,
create_at timestamp without time zone,
status_id integer,
type_id integer,
user_id integer
);
create table public.contribution (
id integer,
amount numeric,
create_at timestamp without time zone,
user_id integer
);
insert into contribution (user_id, amount)
select (random()* 50)::int, (random()* 100)::int
from generate_series(1, 4000000);
insert into solicitude (user_id, amount, status_id)
select (random()* 50)::int, (random()* 100)::int, 1
from generate_series(1, 4000000);
```
Results (msecs):
```
Erwin's solution with greatest(): 922, 905, 922, 904, 904, 904, 905, 912, 905, 922
My solution with an outer query: 796, 795, 814, 814, 815, 795, 815, 796, 815, 796
```
|
Subtract two columns of different tables
|
[
"",
"sql",
"postgresql",
"sum",
"aggregate-functions",
"subtraction",
""
] |
I am having trouble trying to create a table using MS Access.
I know for sure my problem lies in the "datetime" field but I can not figure what I am missing or doing wrong.
When I click "run" I get the
> "Syntax Error in Field Definition"
```
Create Table Appointments
(DocID char(4) not null primary key,
PatID char(8) not null,
Day varchar(8) not null,
Time datetime not null,
Procedure varchar(50) null);
```
|
Time and procedure are reserved words, and therefore should be escaped:
```
Create Table Appointments
(DocID char(4) not null primary key,
PatID char(8) not null,
[Day] varchar(8) not null,
[Time] datetime not null,
[Procedure] varchar(50) null);
```
Or better yet, find names that aren't reserved words:
```
Create Table Appointments
(DocID char(4) not null primary key,
PatID char(8) not null,
AppointmentDay varchar(8) not null,
AppointmentTime datetime not null,
MedicalProcedure varchar(50) null);
```
|
Here `Procedure` and `Time` are reserved words and so need to be escaped using `[]` like below. See [Documentation](https://support.microsoft.com/en-us/kb/286335) for more information
```
Create Table Appointments
(DocID char(4) not null primary key,
PatID char(8) not null,
[Day] varchar(8) not null,
[Time] datetime not null,
[Procedure] varchar(50) null);
```
|
Create Table - Time Statement
|
[
"",
"sql",
"datetime",
"ms-access",
"ddl",
"create-table",
""
] |
I think this should be straightforward, but I'm at a loss. I want to return data in the columns that weren't used in the SQL statement HAVING clause without having to do a 2nd query. I'm using PostgreSQL, but this should be a fairly generic question. Here's the data:
```
CREATE TABLE sandbox (
ret_val character(10),
key1 character(10),
key2 character(10),
value1 real,
value2 real
);
COPY sandbox (ret_val, key1, key2, value1, value2) FROM stdin;
baa DEF Book 0.800000012 0.270000011
oink GHI Play 0.200000003 0.280000001
Oink DEF Book 0.300000012 0.109999999
chirp GHI Play 0.100000001 0
woof DEF Play 0.400000006 0.300000012
meow DEF Play 0.699999988 0.219999999
woof ABC Book 0.140000001 0.939999998
baa ABC Play 0.25999999 0.75
meow ABC Play 0.75999999 0.150000006
neigh DEF Movie 0.970000029 0.349999994
cluck DEF Movie 0.870000005 0.550000012
quack GHI Movie 0.870000005 0.119999997
\.
```
I want to return the values in (ret\_val, value1, value2) ONLY for cases where the (key1,key2) pair is unique. So first I do a SELECT using HAVING:
```
=> SELECT key1,key2 from sandbox group by 1,2 HAVING count(*)=1;
key1 | key2
------------+------------
GHI | Movie
ABC | Book
(2 rows)
```
to see that there are two unique rows. Then I select the data I need using the output of the HAVING:
```
=> SELECT ret_val,value1,value2 from sandbox where
(key1='ABC' and key2='Book') OR (key1='GHI' and key2='Movie');
ret_val | value1 | value2
------------+--------+--------
woof | 0.14 | 0.94
quack | 0.87 | 0.12
(2 rows)
```
Surely that should be done in a single query, right?
|
I would use a window function for this:
```
select ret_val, value1, value2
from (select s.*, count(*) over (partition by key1, key2) as cnt
from sandbox
) s
where cnt = 1;
```
As a note, you can also do this with aggregation, but I don't think the query is as clean:
```
select max(ret_val) as ret_val, max(value1) as value1, max(value2) as value2
from sandbox
group by key1, key2
having count(*) = 1;
```
The logic here is that if there is only one row in the group, then `max()` returns the value in that row.
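The window-function version can be sketched in SQLite (window functions need SQLite 3.25 or later), using a subset of the sandbox rows from the question:

```python
import sqlite3

# A subset of the sandbox data: two duplicated (key1, key2) pairs, two unique.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sandbox (ret_val TEXT, key1 TEXT, key2 TEXT,
                          value1 REAL, value2 REAL);
    INSERT INTO sandbox VALUES
        ('baa', 'DEF', 'Book', 0.8, 0.27), ('oink', 'GHI', 'Play', 0.2, 0.28),
        ('Oink', 'DEF', 'Book', 0.3, 0.11), ('chirp', 'GHI', 'Play', 0.1, 0.0),
        ('woof', 'ABC', 'Book', 0.14, 0.94), ('quack', 'GHI', 'Movie', 0.87, 0.12);
""")

# Attach the per-group row count to every row, then keep the singleton groups.
rows = conn.execute("""
    SELECT ret_val, value1, value2
    FROM (SELECT s.*, COUNT(*) OVER (PARTITION BY key1, key2) AS cnt
          FROM sandbox s)
    WHERE cnt = 1
    ORDER BY ret_val
""").fetchall()
print(rows)  # [('quack', 0.87, 0.12), ('woof', 0.14, 0.94)]
```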
|
Here is a join version:
```
SELECT ret_val,value1,value2 from sandbox,
( SELECT key1,key2 from sandbox group by 1,2 HAVING count(*)=1 ) as keys
where sandbox.key1 = keys.key1 and sandbox.key2 = keys.key2
```
|
return column data not used in SQL HAVING query
|
[
"",
"sql",
"postgresql",
"having-clause",
""
] |
I have a table
```
T (variable_name, start_no, end_no)
```
that holds values like:
```
(x, 10, 20)
(x, 30, 50)
(x, 60, 70)
(y, 1, 3)
(y, 7, 8)
```
All intervals are guaranteed to be disjoint.
I want to write a query in T-SQL that computes the intervals where a variable is not searched:
```
(x, 21, 29)
(x, 51, 59)
(y, 4, 6)
```
Can I do this without a cursor?
I was thinking of partitioning by variable\_name and then ordering by start\_no. But how to proceed next? Given the current row in the rowset, how to access the "next" one?
|
Since you didn't specify which version of SQL Server, I have multiple solutions. If you are still rocking SQL Server 2005, then Giorgi's answer uses CROSS APPLY quite nicely.
**Note:** For both solutions, I use the where clause to filter out improper values, so even if the data is bad and the rows overlap, it will ignore those values.
## My Version of Your Table
```
DECLARE @T TABLE (variable_name CHAR, start_no INT, end_no INT)
INSERT INTO @T
VALUES ('x', 10, 20),
('x', 30, 50),
('x', 60, 70),
('y', 1, 3),
('y', 7, 8);
```
## Solution for SQL Server 2012 and Above
```
SELECT *
FROM
(
SELECT variable_name,
LAG(end_no,1) OVER (PARTITION BY variable_name ORDER BY start_no) + 1 AS start_range,
start_no - 1 AS end_range
FROM @T
) A
WHERE end_range > start_range
```
## Solution for SQL 2008 and Above
```
WITH CTE
AS
(
SELECT ROW_NUMBER() OVER (PARTITION BY variable_name ORDER BY start_no) row_num,
*
FROM @T
)
SELECT A.variable_name,
B.end_no + 1 AS start_range,
A.start_no - 1 AS end_range
FROM CTE AS A
INNER JOIN CTE AS B
ON A.variable_name = B.variable_name
AND A.row_num = B.row_num + 1
WHERE A.start_no - 1 /*end_range*/ > B.end_no + 1 /*start_range*/
```
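The `LAG()` solution also runs under SQLite (3.25+ for window functions), fed with the interval data from the question:

```python
import sqlite3

# Interval data from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (variable_name TEXT, start_no INTEGER, end_no INTEGER);
    INSERT INTO t VALUES
        ('x', 10, 20), ('x', 30, 50), ('x', 60, 70),
        ('y', 1, 3), ('y', 7, 8);
""")

# Each row pairs its start with the previous row's end (per variable);
# the first row of each partition has a NULL LAG and is filtered out.
rows = conn.execute("""
    SELECT variable_name, start_range, end_range
    FROM (SELECT variable_name,
                 LAG(end_no) OVER (PARTITION BY variable_name
                                   ORDER BY start_no) + 1 AS start_range,
                 start_no - 1 AS end_range
          FROM t)
    WHERE end_range > start_range
    ORDER BY variable_name, start_range
""").fetchall()
print(rows)  # [('x', 21, 29), ('x', 51, 59), ('y', 4, 6)]
```

One caveat carried over from the original: the `end_range > start_range` filter also drops single-value gaps (where `end_range = start_range`); use `>=` if those should be reported.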
|
Here is another version with `cross apply`:
```
DECLARE @t TABLE ( v CHAR(1), sn INT, en INT )
INSERT INTO @t
VALUES ( 'x', 10, 20 ),
( 'x', 30, 50 ),
( 'x', 60, 70 ),
( 'y', 1, 3 ),
( 'y', 7, 8 );
SELECT t.v, t.en + 1, c.sn - 1 FROM @t t
CROSS APPLY(SELECT TOP 1 * FROM @t WHERE v = t.v AND sn > t.sn ORDER BY sn)c
WHERE t.en + 1 < c.sn
```
Fiddle <http://sqlfiddle.com/#!3/d6458/3>
|
T-SQL query - row iteration without cursor
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a SQL Server database with three tables: **Trips**, **Slices**, and **Legs**.
Each **Trip** has a one to many relationship with **Slices** and **Slices** has a one to many relationship with **Legs**.
Trips represents a full trip, a slice represents only the outbound or return portions of a trip, and the legs represent all the stops in either outbound or return slices.
I want to be able to find all the trips with matching legs.
Here's look at the tables:
**Trips**:
```
tripId saleTotal queryDate
1 $200 6/10/2015
2 $198 6/11/2015
```
**Slices**:
```
sliceId connections duration tripIdFK
1 1 50 1
2 1 45 1
3 0 60 2
4 1 85 2
```
**Legs**:
```
legId carrier flightNumber departureAirport departureDate ArrivalAirport ArrivalDate sliceIDFK
1 AA 1 JFK 7/1/2015 LON 7/2/2015 1
2 AA 2 LON 7/2/2015 FRA 7/2/2015 1
3 AA 11 FRA 7/10/2015 LON 7/10/2015 2
4 AA 12 LON 7/10/2015 JFK 7/10/2015 2
5 UA 5 EWR 8/1/2015 LAX 8/1/2015 3
6 UA 6 LAX 8/5/2015 ORD 8/5/2015 4
7 UA 7 ORD 8/5/2015 EWR 8/5/2015 4
```
How would I be able to find all the trips where the all the carrier and flight numbers match such as in legId 1-4 by searching departureAirport/arrivalAirport (JFK/FRA)?
In other words, legId 1-4 is one unit with the details for Trip 1 and legId 5-7 is another unit with the details for Trip 2. I need to find which other trips match exactly legId 1-4 details (except for PK and FK), etc. Any help would be greatly appreciated!!
|
Ow, my brain hurts...
Replace all of the question marks (3 of them) with the trip ID of the trip where you want to check for similar trips.
```
select distinct s.tripIDFK as tripId
from Legs l
left join Slices s on l.sliceIDFK = s.sliceId
where s.tripIDFK != ?
and not exists (
select carrier, flightNumber, departureAirport, departureDate
from Legs l2
left join Slices s2 on l2.sliceIDFK = s2.sliceId
where s2.tripIDFK = s.tripIDFK
except
select carrier, flightNumber, departureAirport, departureDate
from Legs l2
left join Slices s2 on l2.sliceIDFK = s2.sliceId
where s2.tripIDFK = ?
)
and not exists (
select carrier, flightNumber, departureAirport, departureDate
from Legs l2
left join Slices s2 on l2.sliceIDFK = s2.sliceId
where s2.tripIDFK = ?
except
select carrier, flightNumber, departureAirport, departureDate
from Legs l2
left join Slices s2 on l2.sliceIDFK = s2.sliceId
where s2.tripIDFK = s.tripIDFK
)
order by s.tripIDFK
```
The meat of the query is the `and not exists` clauses. They get the leg data for one trip and effectively subtracts the leg data of another trip using the `except` clause. If you're left with nothing, then the second trip data contains all of the first trip data. You have to run the `and not exists` clause twice (with the operands reversed) to ensure that the two sets of trip data are truly identical, and that one is not merely a subset of the other.
This is in no way scalable to large numbers of rows.
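The core of the double-`EXCEPT` trick can be sketched in isolation in SQLite with two hypothetical leg sets: the sets are identical exactly when both directed differences are empty. (As in the answer above, `EXCEPT` works on distinct rows, so duplicate legs within one trip are not distinguished.)

```python
import sqlite3

# Two hypothetical leg sets with the same rows in different order.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (carrier TEXT, flightNumber INTEGER);
    CREATE TABLE b (carrier TEXT, flightNumber INTEGER);
    INSERT INTO a VALUES ('AA', 1), ('AA', 2);
    INSERT INTO b VALUES ('AA', 2), ('AA', 1);
""")

# A equals B as a set iff A\B and B\A are both empty.
diff1 = conn.execute("SELECT * FROM a EXCEPT SELECT * FROM b").fetchall()
diff2 = conn.execute("SELECT * FROM b EXCEPT SELECT * FROM a").fetchall()
identical = not diff1 and not diff2
print(identical)  # True
```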
|
Hope this helps.
Just pass the base TripId (in @BaseTripID) with which you want to compare other records. I assume that you are concerned only about carrier,flightNumber,departureAirport,ArrivalAirport to match exactly with any other trip regardless of date fields.
```
create table Trips(tripId int,saleTotal int,queryDate date)
create table Slices(sliceId int ,connections int,duration int ,tripIdFK int)
create table Legs(legId int, carrier char(2), flightNumber int, departureAirport char(3), departureDate date, ArrivalAirport char(3), ArrivalDate date, sliceIDFK int)
insert into Trips values(1,200,'6/10/2015'),(2,198,'6/11/2015'),(3,300,'6/15/2015'),(4,200,'6/21/2015')
insert into Slices values(1,1,50,1),(2,1,45,1),(3,0,60,2),(4,1,85,2),(5,1,50,3),(6,1,45,3),(7,1,45,4),(8,1,45,4)
insert into Legs values(1,'AA',1,'JFK','7/1/2015','LON','7/2/2015',1) ,
(2,'AA',2,'LON','7/2/2015','FRA','7/2/2015',1),
(3,'AA',11,'FRA','7/10/2015','LON','7/10/2015',2),
(4,'AA',12,'LON','7/10/2015','JFK','7/10/2015',2),
(5,'UA',5,'EWR','8/1/2015','LAX','8/1/2015',3),
(6,'UA',6,'LAX','8/5/2015','ORD','8/5/2015',4),
(7,'UA',7,'ORD','8/5/2015','EWR','8/5/2015',4),
(8,'AA',1,'JFK','7/11/2015','LON','7/12/2015',5),
(9,'AA',2,'LON','7/12/2015','FRA','7/12/2015',5),
(10,'AA',11,'FRA','7/20/2015','LON','7/20/2015',6),
(11,'AA',12,'LON','7/20/2015','JFK','7/20/2015',6),
(12,'AA',1,'JFK','7/1/2015','LON','7/2/2015',7) ,
(13,'AA',2,'LON','7/2/2015','FRA','7/2/2015',7),
(14,'AA',11,'FRA','7/10/2015','BEL','7/10/2015',8),
(15,'AA',12,'BEL','7/10/2015','JFK','7/10/2015',8)
--select * from Trips
--select * from Slices
--select * from Legs
-------------------------------------------------------------------
Declare @BaseTripID int = 1, @Legs int ,@MatchingTripID int
declare @BaseTrip table(carrier char(2), flightNumber int, departureAirport char(3), ArrivalAirport char(3),row_no int)
declare @MatchingTrip table(carrier char(2), flightNumber int, departureAirport char(3), ArrivalAirport char(3),row_no int,legid int,tripid int)
insert into @BaseTrip
select carrier, flightNumber, departureAirport, ArrivalAirport,ROW_NUMBER() over(order by l.legId)
from Legs l join slices s on s.sliceId = l.sliceIDFK
where s.tripIdFK = @BaseTripID
select @Legs=count(*) from @BaseTrip
Insert into @MatchingTrip
select carrier, flightNumber, departureAirport, ArrivalAirport,ROW_NUMBER() over(partition by s.tripIdFK order by l.legId) as row_no,l.legId,s.tripIdFK
from Legs l join slices s on s.sliceId = l.sliceIDFK
and s.tripIdFK in
(select s.tripIdFK
from Legs l join slices s on s.sliceId = l.sliceIDFK
and s.tripIdFK <> @BaseTripID
Group by s.tripIdFK having count(l.legId)=@Legs)
select @MatchingTripID = m.tripid
from @MatchingTrip m join @BaseTrip b
on m.carrier = b.carrier
and m.flightNumber = b.flightNumber
and m.departureAirport = b.departureAirport
and m.ArrivalAirport = b.ArrivalAirport
and m.row_no = b.row_no
GROUP BY m.tripid HAVING COUNT(*) = @Legs
select s.tripIdFK as matchingTripID,l.legid,l.carrier,l.flightNumber,l.departureAirport,l.ArrivalAirport
from Legs l
join Slices s on s.sliceId = l.sliceIDFK
where s.tripIdFK = @MatchingTripID
---------------------
drop table Trips
drop table Slices
drop table Legs
```
Making use of the leg count is the key: it eliminates any matches that are not completely identical (trip 4, with just two matching legs, is dropped). So now we get only trip 3 as a matching record.
Please note that we also exclude trips that have any additional legs besides the matching ones. I hope this is what you expect: pairs of perfectly identical trips.
|
Finding records in main table that match records in another table in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
```
DATEDIFF(MONTH, '1/1/2014', '12/31/2014') + 1
```
This will give me 12.
What if I do this:
```
DATEDIFF(MONTH, '1/1/2014', '12/30/2014') + 1
```
It should give me 11 point something. How do I go about getting the exact number of months between these two dates? This needs to work for any combination of dates (any month of the year for any year).
|
You could do the calculation yourself in the following way:
```
DECLARE @startdate date = '1/1/2014'
DECLARE @enddate date = '12/30/2014'
DECLARE @startday int = DATEPART(DAY, @startdate)
DECLARE @endday int = DATEPART(DAY, @enddate)
DECLARE @startdateBase date = DATEADD(DAY, 1 - @startday, @startdate)
DECLARE @enddateBase date = DATEADD(DAY, 1 - @endday, @enddate)
DECLARE @deciMonthDiff float = CAST(DATEDIFF(MONTH, @startdate, @enddate) AS float) -
(@startday - 1.0) / DATEDIFF(DAY, @startdateBase, DATEADD(MONTH, 1, @startdateBase)) +
(@endday - 1.0) / DATEDIFF(DAY, @enddateBase, DATEADD(MONTH, 1, @enddateBase))
SELECT @deciMonthDiff
```
This calculates the `@deciMonthDiff` to be 11.935483870967.
Of course you can "inline" this as much as you want in order to avoid all the middle declarations.
The idea is to calculate the total month diff, then subtract the relative part of the first & last month depending on the actual day.
|
DATEDIFF with the MONTH option only returns an integer value. Using days or years would give you a rough "guesstimate" but still not exactly right (different number of days in a month/year so you can't just divide the days difference by 30).
If you want exact you would need to write your own function to walk through the months from start until end and account for how many days are in each month and get a percentage/factor of that month covered.
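A sketch of such a function in Python, mirroring the day-fraction arithmetic from the first answer (not an exact port of any SQL built-in):

```python
import calendar
from datetime import date

def month_diff(start: date, end: date) -> float:
    """Whole-month difference, adjusted by each endpoint's position in its month."""
    whole = (end.year - start.year) * 12 + (end.month - start.month)
    start_len = calendar.monthrange(start.year, start.month)[1]
    end_len = calendar.monthrange(end.year, end.month)[1]
    # Subtract the part of the start month already elapsed, add the part
    # of the end month that has elapsed.
    return whole - (start.day - 1) / start_len + (end.day - 1) / end_len

print(month_diff(date(2014, 1, 1), date(2014, 12, 30)))  # ~11.9355
print(month_diff(date(2014, 1, 1), date(2014, 12, 31)))  # ~11.9677
```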
|
how do I get the EXACT number of months between two dates?
|
[
"",
"sql",
"sql-server",
"monthcalendar",
""
] |
I have a SQL Server table ("Activities") which has a column containing an ID, e.g. "10553100". This is the activity ID and is composed of two parts -> first 4 digits indicates which project it belongs to, and the last 4 digits indicates which sub project.
I need to extract the project number from the `activityId` column (aka the first 4 digits). Is it possible to do this using a simple SELECT statement in SQL?
|
Use the `LEFT` function.
```
SELECT activityId, LEFT(activityId, 4) AS projectnumber
FROM Activities
```
Or the `SUBSTRING` function.
```
SELECT activityId, SUBSTRING (activityId, 1, 4) AS projectnumber
FROM Activities
```
And if you want to also include your subproject number, use the `LEFT` and `RIGHT` functions.
```
SELECT activityId, LEFT(activityId, 4) AS projectnumber, RIGHT(activityId, 4) AS subprojectnumber
FROM Activities
```
Or the `SUBSTRING` function.
```
SELECT activityId, SUBSTRING (activityId, 1, 4) AS projectnumber, SUBSTRING(activityId, 5, 4) AS subprojectnumber
FROM Activities
```
|
If the field is numeric then:
```
SELECT Col / 10000 FROM TableName
```
If it is a char type:
```
SELECT LEFT(Col, 4) FROM TableName
```
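Both approaches are easy to sanity-check; in SQLite the equivalents are `substr()` and integer division (the literal below is just the question's sample ID):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# substr() works on the text form; integer division strips the last 4 digits.
row = con.execute(
    "SELECT substr('10553100', 1, 4), substr('10553100', 5, 4), 10553100 / 10000"
).fetchone()
print(row)  # ('1055', '3100', 1055)
```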
|
Select the first N digits of an integer
|
[
"",
"sql",
"sql-server",
""
] |
I have a MySQL database with about 20,000 entries where files are named `Name_Subname_XXXXX`. Because the files are named that way, the name shown to the internet also gets that form when entered into the database.
How can I easily remove the `_` from the name and just keep `Name Subname XXXXX`?
|
You can use MySQL's [REPLACE](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_replace) method for strings:
```
UPDATE tbl SET filename = REPLACE(filename, '_', ' ');
```
|
You can use [`replace`](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_replace):
```
select replace(col, '_', ' ') from tbl
```
|
Remove all characters from the string
|
[
"",
"mysql",
"sql",
"replace",
""
] |
I have a table like this:
```
WOTranID WOID Status DateCreated
----------------------------------------------
1 5 Ready 6/6/2015
2 5 Pending 6/5/2015
3 7 Pending 6/9/2015
4 8 Scheduled 6/10/2015
```
What I need is to select all WOID where the status was pending but is not currently pending.
Thank you for your help.
Edit: Using the example table above I would only like to return WOID 5.
|
First, select all WOIDs which have had a status of pending, then intersect that with the list of WOIDs whose current status is not pending (using `row_number()` to select the latest status per WOID).
```
(Select Distinct WOID
from MyTable
where Status = 'Pending')
intersect
(Select WOID
from
(Select WOID
, Status
, Row_number() over (partition by WOID order by DateCreated desc) as RN
from MyTable) a
where Status <> 'Pending' and RN = 1)
```
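The same idea runs in SQLite as well; here the `ROW_NUMBER()` step is replaced by a correlated `MAX(DateCreated)` lookup for the latest row, which also works on older SQLite builds (a demo sketch, not a drop-in SQL Server script):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (WOTranID INT, WOID INT, Status TEXT, DateCreated TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?, ?, ?)", [
    (1, 5, "Ready",     "2015-06-06"),
    (2, 5, "Pending",   "2015-06-05"),
    (3, 7, "Pending",   "2015-06-09"),
    (4, 8, "Scheduled", "2015-06-10"),
])
rows = con.execute("""
    SELECT DISTINCT WOID FROM MyTable WHERE Status = 'Pending'
    INTERSECT
    SELECT WOID FROM MyTable t
    WHERE Status <> 'Pending'
      AND DateCreated = (SELECT MAX(DateCreated) FROM MyTable x WHERE x.WOID = t.WOID)
""").fetchall()
print(rows)  # [(5,)] -- WOID 5 was pending once but is now Ready
```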
|
Maybe this:
```
select distinct t1.WOID
from TableName t1
where Status <> 'Pending' and
not exists(select * from TableName t2
where t2.WOID = t1.WOID and t2.DateCreated > t1.DateCreated) and
exists(select * from TableName t3
where t3.WOID = t1.WOID and t3.DateCreated < t1.DateCreated and t3.Status = 'Pending')
```
|
SQL Server : select where row was X but is not now X
|
[
"",
"sql",
"sql-server",
""
] |
Okay, so I have a MySQL table called `entries` which contains the columns `name VARCHAR(255) NOT NULL` and `address VARCHAR(255)`.
The table has about a million rows of data.
For every row, `name` has a value (e.g. "john") whilst `address` is NULL.
For example:
```
+------+---------+
| name | address |
+------+---------+
| john | NULL |
+------+---------+
| jake | NULL |
+------+---------+
| zach | NULL |
+------+---------+
```
I received a CSV file which contains names along with their corresponding address in the format of `name:address`.
Like I said, the `entries` table has nearly a million entries, so the csv file has about 800,000 lines.
I want to take each line in the csv, and insert the address where the name is the same which would be:
```
UPDATE `entries` SET `address` = <address from csv> WHERE `name` = <name from csv>;
```
I made a Python script that opens the CSV file and reads it line by line. For each line, it stores the name and address in separate variables and then executes the query above, but this was taking far too long.
Is there any way I could do this in MySQL itself, and if so, what is the fastest way?
Thanks.
|
You can import the CSV file into a separate table using MySQL's `LOAD DATA INFILE` and then update the `entries` table with a `JOIN` on the shared `name` column.
E.g.:
`update entries a inner join new_table b on a.name = b.name set a.address = b.address;`
Here `new_table` is the table imported from the CSV file.
Don't forget to add an index on the `name` column of both tables so that the update is fast.
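A quick SQLite sketch of the staging-table approach (older SQLite lacks MySQL's `UPDATE ... JOIN` form, so a correlated subquery stands in for it; all names below are demo-only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (name TEXT, address TEXT)")
con.execute("CREATE TABLE staging (name TEXT, address TEXT)")  # loaded from the CSV
con.executemany("INSERT INTO entries VALUES (?, NULL)", [("john",), ("jake",), ("zach",)])
con.executemany("INSERT INTO staging VALUES (?, ?)",
                [("john", "1 Main St"), ("zach", "9 Elm St")])
con.execute("""
    UPDATE entries
    SET address = (SELECT s.address FROM staging s WHERE s.name = entries.name)
    WHERE name IN (SELECT name FROM staging)
""")
print(con.execute("SELECT * FROM entries ORDER BY name").fetchall())
```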
|
Create table1 and table2
```
LOAD DATA INFILE '/path/theFile1.csv'
INTO TABLE table1
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
```
Ditto for file2 into table2
Proceed
|
What would be the fastest way to insert this data
|
[
"",
"mysql",
"sql",
"database",
"performance",
"csv",
""
] |
I have a table (tblABC) which looks like
```
-----------------------------
BasicID | Filter 1| Filter2 |
------------------------------
100 1 2
100 3 4
101 8 9
```
I want to select the BasicID which has Filter1=1 and Filter2=4. ie I want to get the output as
100
I can't use AND here, as it searches within the same row only, i.e.
`select * from tblABC where Filter1=1 and Filter2=4` yields no result.
As of now the query that I use is
```
select * from tblABC
where
BasicID in
(
select BasicID from tblABC
where Filter1 IN (1)
)
and BasicID in
(
select BasicID from tblABC
where Filter2 IN (4)
)
```
This one works for me. But there are about 12 Filter columns, and when such a search is run over large volumes, wouldn't it make the query slow? What would be a more efficient way of doing this?
I am using Microsoft SQL 2014
|
Try below
```
select basicid from tblABC where filter1 = 1
intersect
select basicid from tblABC where filter2 = 4
```
|
If the two filter values can be either in different rows or the same row, the `GROUP BY / HAVING` method will fail (in the same row case). This method will work in all cases (the `intersect` query by @Azar will work, too):
```
select distinct a.BasicID
from tblABC as a
join tblABC as b
on a.BasicID = b.BasicID
where a.Filter1 = 1
and b.Filter2 = 4 ;
```
---
If you want the `GROUP BY / HAVING COUNT` method, this modification will work in all cases, too:
```
select basicid
from tblABC
where filter1 = 1
or filter2 = 4
group by
basicid
having count(case when filter1 = 1 then 1 end) >= 1
and count(case when filter2 = 4 then 1 end) >= 1 ;
```
|
Sql query for AND on multiple rows
|
[
"",
"sql",
"sql-server",
""
] |
I have read the many posts regarding collapsing multiple table columns into a single column using FOR XML PATH, but I am not a database person and cannot make other posts' suggestions work. I expect my problem is easy to solve for an experienced SQL DB person.
I have MS SQL Server 2012. I have 3 tables (table1, table2, table3). The middle table (table2) creates a one-to-many relationship between table1 and table3 like this:
```
TABLE1.eventId
\ _________________ one-to-one (on eventId)
\
TABLE2.eventId _____ creates one-to-many relationship
TABLE2.valueId
\ __________ one-to-one (on valueId)
\
TABLE3.valueId
TABLE3.stringValue
```
My select statement:
```
SELECT TABLE1.eventId, TABLE2.eventId, TABLE2.valueId, TABLE3.valueId AS myValueId, TABLE3.stringValue
FROM TABLE1
INNER JOIN TABLE2 ON TABLE1.eventId = TABLE2.eventId
INNER JOIN TABLE3 ON TABLE2.valueId = TABLE3.valueId
ORDER BY TABLE1.eventId DESC
```
Of course, the result is:
```
187252 187252 3 3 SomeString1 -\__ I WANT TO COMBINE INTO ONE
187252 187252 9 9 SomeString2 -/ COLUMN ON A SINGLE LINE
187251 187251 3 3 SomeString1
187251 187251 14 14 SomeString3
```
What I really want is something like this:
```
187252 SomeString1 SomeString2
187251 SomeString1 SomeString3
```
I have tried this sort of query using FOR XML PATH:
```
SELECT TABLE1.eventId, TABLE3.stringValue,
SUBSTRING((SELECT mt2.eventId, mt2.valueId
FROM TABLE2 AS mt2
WHERE mt2.valueId = TABLE3.valueId
ORDER BY mt2.eventId
FOR XML PATH('')), 3, 2000) AS JoinedValue
FROM TABLE1, TABLE3
ORDER BY TABLE1.eventId DESC
```
But it returns multiple lines with the same valueId and different strings. I think I simply don't understand database fundamentals well enough to make FOR XML PATH work for me.
Loads of appreciation given to all for not beating me up about duplicate posts. And, of course, for clues leading me to a solution.
(Please note that server-side programming is not an option, for programmatic reasons. And this must be done in a single query.)
Many thanks for your thoughts.
|
Similar to @MartianCodeHound I'd use STUFF
```
SELECT t1.eventId,
STUFF((SELECT ',' + t3.stringValue
FROM TABLE2 t2
JOIN TABLE3 t3 ON t2.valueId = t3.valueId
WHERE t2.eventId = t1.eventId
ORDER BY t3.stringValue
FOR XML PATH('')), 1, 1, '') AS stringValue
FROM TABLE1 t1
```
Here's a test for you
```
DECLARE @TABLE1 TABLE (eventId INT)
DECLARE @TABLE2 TABLE (eventId INT, valueId INT)
DECLARE @TABLE3 TABLE (valueId INT, stringValue VARCHAR(255))
INSERT INTO @TABLE1 VALUES (1), (2), (3)
INSERT INTO @TABLE2 VALUES(1, 1),(1, 2), (2, 1), (3, 1)
INSERT INTO @TABLE3 VALUES (1, 'StringValue1'), (2, 'StringValue2')
SELECT t1.eventId,
STUFF((SELECT ',' + t3.stringValue
FROM @TABLE2 t2
JOIN @TABLE3 t3 ON t2.valueId = t3.valueId
WHERE t2.eventId = t1.eventId
ORDER BY t3.stringValue
FOR XML PATH('')), 1, 1, '') AS stringValue
FROM @TABLE1 t1
```
|
Try the method in [this post](https://stackoverflow.com/questions/273238/how-to-use-group-by-to-concatenate-strings-in-sql-server). It's one of my favorites.
```
SELECT
eventID,
STUFF((
SELECT ', ' + stringValue
FROM table1 t2
WHERE (t2.eventID = t1.eventID)
FOR XML PATH(''))
,1,2,'') AS NameValues
FROM table1 t1
GROUP BY eventID
```
|
SQL Query using FOR XML PATH that works right
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
In SQL Server I join two queries using *UNION*, and my "Address" column has the *ntext* datatype, which causes an issue with DISTINCT. So I have to convert the Address column from *ntext* to *varchar*, but in the output I get symbolic data in Address. The actual data is in our local 'Gujarati' language.
|
`varchar:` Variable-length, non-Unicode character data. The database collation determines which code page the data is stored using.
`nvarchar:` Variable-length Unicode character data. Dependent on the database collation for comparisons.
So change your type from `varchar` to `nvarchar`; that will sort out your issue.
I faced the same issue myself when storing Arabic characters.
|
Please use "nvarchar" instead of "varchar"
|
data convert to symbolic language
|
[
"",
"sql",
"sql-server",
""
] |
table1 K
```
Date id
2015-01-01 10
2015-01-02 10
2015-01-03 10
2015-01-04 10
2015-01-05 10
2015-01-06 10
```
table2 H
```
Date id Holiday
2015-01-03 10 a holiday day
```
I want to exclude holidays from my `Date` column and create a new column like `AS new_dates`.
So the output is like:
table3 Output
```
new_dates id
2015-01-01 10
2015-01-02 10
2015-01-04 10
2015-01-05 10
2015-01-06 10
```
|
Now edited.
I was suggesting `EXCEPT ALL`, but Oracle has `MINUS` instead, according to MTO's answer.
```
select date, id from table1
MINUS
select date, id from table2
```
Or `NOT EXISTS`:
```
select date, id from table1 t1
where not exists (select 1 from table2 t2
where t1.date = t2.date
and t1.id = t2.id)
```
**Edited**: added `and t1.id = t2.id` to the `NOT EXISTS` version's sub-select.
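Oracle's `MINUS` is spelled `EXCEPT` almost everywhere else (SQLite, PostgreSQL, SQL Server). A quick SQLite check with the question's data (the `Date` column is renamed `day` here only to sidestep reserved-word quirks in other engines):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (day TEXT, id INT);
    CREATE TABLE table2 (day TEXT, id INT, holiday TEXT);
    INSERT INTO table1 VALUES
      ('2015-01-01',10),('2015-01-02',10),('2015-01-03',10),
      ('2015-01-04',10),('2015-01-05',10),('2015-01-06',10);
    INSERT INTO table2 VALUES ('2015-01-03',10,'a holiday day');
""")
rows = con.execute(
    "SELECT day, id FROM table1 EXCEPT SELECT day, id FROM table2"
).fetchall()
print(sorted(rows))  # the holiday row 2015-01-03 is gone
```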
|
Simple
```
select date, id from table1
where not exists(select 'x' from table2 where table1.date = table2.date)
```
|
Exclude holidays in Oracle table
|
[
"",
"sql",
"oracle",
""
] |
Is there any value which I can place in a SQL IN clause which will guarantee that the clause will evaluate to false?
```
SELECT *
FROM Products
WHERE ProductID IN (???)
```
Is there anything I could replace ??? with to guarantee no rows will be returned?
|
Replace it with `NULL`. There is no better guarantee!
No other value can equal `NULL`, not even `NULL` itself.
And this is a kind of universal value that works for any type (as @zohar-peled mentioned).
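This is easy to verify in any engine. In SQLite, for example, `x IN (NULL)` evaluates to NULL, which is not true, so no row ever passes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Products (ProductID INT)")
con.executemany("INSERT INTO Products VALUES (?)", [(1,), (2,), (None,)])
rows = con.execute("SELECT * FROM Products WHERE ProductID IN (NULL)").fetchall()
print(rows)  # [] -- even the NULL row doesn't match, since NULL = NULL is unknown
print(con.execute("SELECT 1 IN (NULL)").fetchone()[0])  # None: the comparison is NULL, not false
```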
|
Use `NULL`
```
SELECT *
FROM Products
WHERE ProductID IN (NULL)
```
As this will return nothing
|
SQL IN clause which is guaranteed to evaluate to false
|
[
"",
"sql",
"sql-server",
"sql-in",
""
] |
I am trying to use an Oracle SQL update statement using a 'SWITCHED' case statement as mentioned below:
```
update MY_TABLE SET STATE_ABBREVIATION=
(CASE STATE
WHEN 'MAHARASHTRA' THEN 'MH'
WHEN 'ASSAM' THEN 'AS'
WHEN 'HIMACHAL%' THEN 'HP'
...
ELSE
STATE
END
);
```
I got this clue from the thread [SQL - WHERE clause on each SET command in UPDATE?](https://stackoverflow.com/questions/1421116/sql-where-clause-on-each-set-command-in-update)
The issue is that the wildcard entries are not functioning as expected. The State column may contain values like 'Himachal Pradesh' as well as 'HimachalPradesh'.
I also searched <https://community.oracle.com/thread/290198>; one of the answers there used a '**searched** case', which works fine:
```
CASE
WHEN STATE LIKE 'ANDHRA%' THEN 'AP'
WHEN STATE LIKE 'HIMACHAL%' THEN 'HP'
END
```
How can we achieve this in the SWITCHED case style ?
I also tried
```
update MY_TABLE SET STATE_ABBREVIATION=
(CASE STATE
WHEN 'MAHARASHTRA' THEN 'MH'
WHEN 'ASSAM' THEN 'AS'
WHEN LIKE 'HIMACHAL%' THEN 'HP'
...
ELSE
STATE
END
);
```
This too did not work.
> (SQL Error: ORA-00905: missing keyword
> 00905. 00000 - "missing keyword").
|
You'll have to use the searched version in all cases.
See the [syntax diagram](http://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm)
|
You can use `LIKE` in all your conditions as there is no difference between `= 'aaaa'` and `LIKE 'aaaa'` in terms of output
```
update MY_TABLE SET STATE_ABBREVIATION=
(CASE
WHEN STATE LIKE 'MAHARASHTRA' THEN 'MH'
WHEN STATE LIKE 'ASSAM' THEN 'AS'
WHEN STATE LIKE 'HIMACHAL%' THEN 'HP'
...
ELSE
STATE
END
);
```
|
Updating table using wildcard in Switch case statement
|
[
"",
"sql",
"oracle",
""
] |
I have an items table with a Size column as follows:
```
1sq.mm.
1.5sq.mm.
0.5sq.mm.
2.5sq.mm.
0.75sq.mm.
4sq.mm.
20mm
25mm
20mtr
75mm x 50mm
100mm x 50mm
100mm x 100mm
75mm x 75mm
```
I want to display it as:
```
0.5sq.mm.
0.75sq.mm.
1.5sq.mm.
2.5sq.mm.
4sq.mm.
20mm
20mtr
25mm
75mm x 50mm
75mm x 75mm
100mm x 50mm
100mm x 100mm
```
I tried the following SQL query but am getting the error:
```
'Conversion failed when converting the varchar value '1 sq.mm.' to data type int.'
```
SQL query:
```
select * from Items
order by CAST(SUBSTRING(Sizes, PATINDEX('%[0-9]%', Sizes), LEN(Sizes)) AS INT)
```
|
Try this:
```
SELECT Sizes
FROM Items
ORDER BY CAST(LEFT(Sizes, PATINDEX('%[a-z]%', Sizes)-1) as numeric(9, 2))
```
that's assuming your data will always be a number followed by at least one alphabetic character.
[sql fiddle (thanks to matt!)](http://sqlfiddle.com/#!3/577e4/8)
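The leading-number extraction is easy to prototype outside SQL as well; here is a Python sketch of the sort key (a regex match on the leading digits plays the role of PATINDEX):

```python
import re

def numeric_prefix(size: str) -> float:
    """Leading number of a size string, e.g. '0.75sq.mm.' -> 0.75."""
    m = re.match(r"\d+(?:\.\d+)?", size)
    return float(m.group()) if m else float("inf")

sizes = ["1sq.mm.", "25mm", "0.5sq.mm.", "100mm x 50mm", "4sq.mm.", "20mm"]
print(sorted(sizes, key=numeric_prefix))
# ['0.5sq.mm.', '1sq.mm.', '4sq.mm.', '20mm', '25mm', '100mm x 50mm']
```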
|
Use `REPLACE` in the `ORDER BY`
```
SELECT Sizes
FROM Items
ORDER BY CAST(REPLACE(Sizes,'sq.mm.','') as NUMERIC(9,2))
```
OUTPUT:
```
Sizes
0.5sq.mm.
0.75sq.mm.
1sq.mm.
1.5sq.mm.
2.5sq.mm.
4sq.mm.
```
SQL Fiddle: <http://sqlfiddle.com/#!3/ad91f/3/0>
|
how to sort alphanumeric data in sql..?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
My table data looks like this
```
declare @t table (name varchar(10), amt int, dt datetime)
insert into @t (name, amt, dt)
values
('meeseva',100,'06-17-2015'), ('meeseva',200,'06-17-2015'),
('meeseva',200,'06-17-2015'), ('meeseva',100,'06-16-2015'),
('meeseva',100,'06-15-2015'), ('meeseva',100,'06-14-2015'),
('fish',100,'06-17-2015'), ('fish',200,'06-17-2015'),
('fish',100,'06-16-2015'), ('fish',200,'06-16-2015'),
('fish',100,'06-15-2015'), ('fish',100,'06-14-2015'),
('raju',100,'06-17-2015'), ('raju',200,'06-17-2015'),
('raju',100,'06-16-2015'), ('raju',100,'06-15-2015'),
('raju',100,'06-14-2015'), ('raju',500,'06-14-2015')
```
So far I have tried
```
select
name,
SUM(amt),
dt,
ROW_NUMBER() OVER (PARTITION BY name order by dt)
from
@t
where
dt >= (SELECT CONVERT (VARCHAR(10), Getdate() - 4, 101))
and dt <= (SELECT CONVERT (VARCHAR(10), Getdate(), 101))
GROUP BY
name, dt
ORDER BY
name, dt desc
```
I need to return results based on the sum of amounts: per day, the highest two totals should be returned; if amounts are equal, the top 2 records should be chosen based on record id and date.
My desired output :
```
name sum dt
-----------------------------------
fish 300 2015-06-17 00:00:00.000
meeseva 500 2015-06-17 00:00:00.000
fish 300 2015-06-16 00:00:00.000
raju 200 2015-06-16 00:00:00.000
fish 100 2015-06-15 00:00:00.000
meeseva 100 2015-06-15 00:00:00.000
raju 600 2015-06-14 00:00:00.000
meeseva 100 2015-06-14 00:00:00.000
```
|
you can try this
```
;WITH ranking AS(
SELECT name,
SUM(amt) [sum],
dt,
RANK() OVER (PARTITION BY dt ORDER BY SUM(amt) DESC, name -- use recordid here to order by sum then recordnumber
) AS rnk
FROM @t
WHERE dt >= DATEADD(DAY, -4, CONVERT(DATE, GETDATE()))
GROUP BY name,
dt
)
SELECT name,
[sum],
dt
FROM ranking
WHERE rnk <= 2
ORDER BY dt DESC,
name
```
|
Please, read my comment to the question. Here is a sample query:
```
SELECT *
FROM (
SELECT *, ROW_NUMBER() OVER(PARTITION BY dt ORDER BY amt DESC) AS RowNo
FROM @t
) T
WHERE RowNo IN (1, 2)
ORDER BY dt DESC
```
**[EDIT]**
A more appropriate sample for your description of the issue:
```
SELECT t.*
FROM (
SELECT name, dt, SUM(amt) AS amt, ROW_NUMBER() OVER(PARTITION BY dt ORDER BY SUM(amt) DESC) AS RowNo
FROM @t
WHERE dt BETWEEN ... AND ...
GROUP BY name, dt
) T
WHERE RowNo IN (1, 2)
ORDER BY dt DESC
```
|
how to get top 2 highest sum amounts per day
|
[
"",
"sql",
"sql-server",
""
] |
I have two columns `X` and `Y` and a where statement as following:
```
WHERE (x=0 or x=1 or x=4 or x=6 or x=7 or x=12 or x=75) and
(y=0 or y=1 or y=4 or y=6 or y=7 or y=12 or y=75)
```
Since it's the same condition on both columns of the same table, is there a way to shorten it?
Something like `x and y are (0 or 1 or 4....)` - it's not PostgreSQL syntax, but it clarifies the question.
Note: the numbers represent statuses, there is no mathematical logic behind this condition.
|
Provided that you don't need it to use an index on `x` or `y`, the simplest way is probably:
```
...WHERE ARRAY[x,y] <@ ARRAY[0,1,4,6,7]
```
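The `<@` ("is contained by") operator is doing a set-containment test. In Python terms (just an analogy for the logic, not executable PostgreSQL; this uses the question's full status list, whereas the snippet above abbreviates it):

```python
allowed = {0, 1, 4, 6, 7, 12, 75}

def passes(x: int, y: int) -> bool:
    # ARRAY[x, y] <@ ARRAY[...] is true when every element of the left
    # array also appears in the right one, i.e. a subset test.
    return {x, y} <= allowed

print(passes(4, 75))  # True
print(passes(4, 99))  # False
```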
|
You may put the values into an array and the array into a [CTE](http://www.postgresql.org/docs/current/static/queries-with.html).
Assuming the existence of `tablename(x int, y int)`:
```
with list(arr) as (select array[0,1,4,6,7])
select x,y from list,tablename where x=any(arr) and y=any(arr);
```
|
Check same long condition on two columns
|
[
"",
"sql",
"postgresql",
""
] |
I have a table allData
```
id name
1 meat
2 chicken
3 fish
4 chicken
5 fish
6 chicken
```
and table relevantData
```
id name
1 meat
2 fish
```
I want to count the total number of occurrences per name in `allData`, but only for names which appear in `relevantData`. I.e.
the result I want to get:
```
id name count
1 meat 1
2 fish 2
```
I thought about the query:
```
Select count(*) from allData group by name
```
Not sure how I correlate with `relevantData`. Any thoughts?
|
Change the point of view, you are interested in `relevantData`, that's the base of your query. You can use a correlated subquery, no need to use join or group-by:
```
SELECT rd.name,
(SELECT COUNT(*) FROM allData ad WHERE rd.name = ad.name) AS Count
FROM relevantData rd
```
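This runs unchanged on SQLite; a quick check with the question's rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE allData (id INT, name TEXT);
    CREATE TABLE relevantData (id INT, name TEXT);
    INSERT INTO allData VALUES
      (1,'meat'),(2,'chicken'),(3,'fish'),(4,'chicken'),(5,'fish'),(6,'chicken');
    INSERT INTO relevantData VALUES (1,'meat'),(2,'fish');
""")
rows = con.execute("""
    SELECT rd.name,
           (SELECT COUNT(*) FROM allData ad WHERE ad.name = rd.name) AS cnt
    FROM relevantData rd
""").fetchall()
print(rows)  # [('meat', 1), ('fish', 2)]
```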
|
You can try the following.
```
SELECT COUNT(*) FROM allData WHERE name in (SELECT name from relevantData) GROUP BY name
```
|
Count the number of entries in the table
|
[
"",
"sql",
"count",
""
] |
I have a problem with something in SQL, let's see an example of database :
```
CREATE TABLE person( -- Employee
pe_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
pe_name VARCHAR(20),
pe_office VARCHAR(20)
);
CREATE TABLE project( -- Mission
pr_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
pr_name VARCHAR(20),
pr_status VARCHAR(15)
);
CREATE TABLE techno( -- Programming language
te_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
te_name VARCHAR(20)
);
CREATE TABLE job( -- developer, manager, ...
jo_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
jo_name VARCHAR(20)
);
```
I would like to assign persons to projects for a job, using technos.
For example, Rob works as a developer and project manager on project #13 with AngularJS and HTML.
So I created this table :
```
CREATE TABLE assignment(
pe_id INT,
pr_id INT,
te_id INT,
jo_id INT,
as_days INT, -- Days of work
PRIMARY KEY(pe_id, pr_id, te_id, jo_id),
CONSTRAINT fk_as_pe_id FOREIGN KEY(pe_id) REFERENCES person(pe_id),
CONSTRAINT fk_as_pr_id FOREIGN KEY(pr_id) REFERENCES project(pr_id),
CONSTRAINT fk_as_te_id FOREIGN KEY(te_id) REFERENCES techno(te_id),
CONSTRAINT fk_as_jo_id FOREIGN KEY(jo_id) REFERENCES job(jo_id)
);
```
I would like to have the ability to assign a developer with some technos to a project without knowing exactly who, like this:
`INSERT INTO assignment(pr_id,te_id,jo_id,as_days) VALUES(1,2,3,4); -- No person!`
We suppose that these values exist in the project, techno and job tables.
But it seems that I cannot insert this, probably because I do not define the person's ID (which is part of the primary key).
How can I do this?
Hope I'm understandable :)
|
You solve this problem by not having this as a primary key. Primary keys cannot be NULL or, if they're composite primary keys, cannot contain NULL. *Make it a unique index instead*, and create an autonumber field for the primary key. I think this is a better solution in your case.
* Primary key:
  * There can be only one per table.
  * It never allows null values.
  * A primary key is a unique row identifier; it cannot be null and must be unique.
* Unique key:
  * There can be more than one unique key in a table.
  * A unique key can have null values (only a single null is allowed).
  * It can be a candidate key.
|
Maybe you should do this:
Before the insert, disable the constraint:
```
ALTER INDEX fk_as_pe_id ON assignment
DISABLE;
```
After the insert, re-enable it:
```
ALTER INDEX fk_as_pe_id ON assignment
REBUILD;
```
|
Insert null value in an association table
|
[
"",
"sql",
""
] |
We have a PostgreSQL database with billions of entries in the form:
```
CREATE TABLE entry (
session integer NOT NULL,
time integer NOT NULL,
key text NOT NULL,
data text NOT NULL
)
```
In this database `key` is a developer-defined string, while `data` is user-defined content. In other words, while there are virtually infinitely many `data` entries, there is only a very limited number of `key` entries.
Is there a way to tell SQL to optimize on `key` in such a way of telling it:
> *Well, you might receive a string, but you really should convert it to and store it as an integer since you will only receive a very finite amount of these (say, 300)*.
Of course we *could* create a 2nd table and translate / hash incoming strings and do explicit (reverse) lookups when performing queries. However, my feeling was that there must / should be a way to automate this.
We are currently using PostgreSQL 9.3, but we're willing to upgrade to later versions (and potentially even to other (No)SQL solutions) that can handle the above data time- and space-efficiently.
Thanks.
**Edit:** I forgot to clarify: the reason we cannot use enums is that the `key` values are defined implicitly by a bunch of distributed developers who do not have database access. This means that although there are only finitely many keys, they are created, from the database's perspective, on the fly.
**Edit 2:** More mathematically speaking, an implicitly applied, collision-free but small (in terms of target width) hash function would do the trick, if it existed.
**Edit 3:** And the reason why using a 2nd table is unfeasible is because we have a bunch of very complex queries which refer to `key` numerous times. If we had to redirect to a 2nd table every time things would probably get even messier and harder to understand.
|
There is no built-in facility to do what you want, at least in PostgreSQL. Doing it effectively would require significant changes to how data is stored, as currently each row is independent of all other rows (well, except TOAST pointers for out-of-line stored data that's unchanged in an UPDATE). A column store could do kind of what you want by highly compressing the keys, but would introduce other problems for some query patterns.
Your best bet is very likely to be the side lookup table. To solve the issues with query complexity (additional joins, planning time, etc) I would probably write a lookup function, so all references to `key` could be replaced with `lookup_key(key)`.
A naive implementation of `lookup_key` would just be a SQL function that does a `SELECT`. Such a function can even be inlined and optimized away if defined `STABLE` but not `STRICT`, so this might be a surprisingly good option.
A more sophisticated alternative if the key lookup table is effectively static would be to write a function that builds a session-lifetime in-memory cache as an associative array (hash) of the table when first called. You'd want to write it either in a procedural language like PL/Python, or in C. Subsequent calls could just look up the associative array, completely removing the need to access the other table. This might offer a big performance gain if done in C, but I suspect the costs of doing it in PL/Python or PL/Perl would actually outweigh the benefits of avoiding a simple scan of a cached table. The function would have to fall back to an SPI SQL query if it couldn't find the row, since it might've been added since the cache was built.
|
You could normalize `key` into a domain table, and add an FK to that. Below, I add a numerical FK pointing to a domain table, but you could just as well use the `key` text field to refer to the table of allowable strings (it would make your table a bit fatter, but it would also make the updates/inserts a bit simpler). Another way would be to wrap an updatable view around the two tables.
```
CREATE TABLE entry (
session integer NOT NULL
, time integer NOT NULL
, key text NOT NULL
, data text NOT NULL
);
CREATE TABLE key_domain
( key_id SERIAL NOT NULL PRIMARY KEY
, key_text text NOT NULL
);
INSERT INTO key_domain (key_text)
SELECT DISTINCT key FROM entry;
ALTER TABLE entry
ADD COLUMN key_id INTEGER
;
UPDATE entry e
SET key_id = k.key_id
FROM key_domain k
WHERE e.key = k.key_text
;
ALTER TABLE entry
ADD CONSTRAINT key_fk
FOREIGN KEY (key_id) REFERENCES key_domain(key_id)
;
ALTER TABLE entry
ALTER COLUMN key_id SET NOT NULL
;
ALTER TABLE entry
DROP COLUMN key
;
```
|
(Automatically) convert SQL strings to 'enums'?
|
[
"",
"sql",
"performance",
"postgresql",
""
] |
There is a table `Table1` with the rows shown below. The `Lable` column stores the priority for the `Tag` column: L1 is the highest priority, L2 the second, and L3 the lowest.
I also have a `Value` column which holds values for each tag and which can be null.
```
RecordNo. Lable Tag Value
----------------------------------------------
1 L1 T1
2 L2 T1 D12
3 L3 T1 D13
4 L1 T2 D21
5 L2 T2
6 L3 T3
7 L2 T3 D31
8 L2 T4
9 L3 T4 D41
10 L3 T5 D51
```
I want to write a query to get the output below.
For every tag, if no value is found for L1 then we look for L2, and if no value is found for L2 then we look for L3. So for each tag it should return the highest-priority non-null value.
The output will look like this:
```
RecordNo. Lable Tag Value
---------------------------------------------------------
2 L2 T1 D12
4 L1 T2 D21
7 L2 T3 D31
9 L3 T4 D41
10 L3 T5 D51
```
Can anyone please help with this?
Thanks in advance.
|
You can use `ROW_NUMBER`:
```
;WITH Cte AS(
SELECT *,
RN = ROW_NUMBER() OVER(PARTITION BY Tag ORDER BY Lable)
FROM Table1
WHERE Value IS NOT NULL
)
SELECT
RecordNo, Lable, Tag, Value
FROM Cte
WHERE RN = 1
```
[**SQL Fiddle**](http://sqlfiddle.com/#!6/333bf/1/0)
|
A CTE with `MIN` should do the trick:
```
;WITH cte AS (
SELECT Tag,
MIN(Lable) AS Lable
FROM Table1
WHERE Value IS NOT NULL
GROUP BY Tag
)
SELECT *
FROM Table1 t1
JOIN cte c ON t1.Lable = c.Lable
AND t1.Tag = c.Tag
```
|
Need SQL query to get conditional output
|
[
"",
"sql",
"sql-server",
""
] |
I want to update a string (`varchar`) column in my database. I want to concatenate `',' + value`.
This works, but I don't want to add the ',' (comma) if the value in the database column is empty.
It does not make sense to have a comma at the beginning.
```
UPDATE ImportState
SET ClientValidationRemark = ClientValidationRemark + ',' + @ClientValidationRemark
WHERE Id=@ImportId
```
Of course I want to add the comma every time the value is not empty. Do you know any simple, clean way to do this? I always update one row; `@ImportId` is the primary key.
|
Well, let me start by saying that keeping a delimited string in a single column is a terrible database design, with only one exception: when the data is never used in SQL and is needed as a delimited string in the application. In over 16 years of programming, I've stored values like this only once, and it was exactly this scenario.
However, should you choose to store delimited values, here is something easy to do:
```
UPDATE ImportState
SET ClientValidationRemark = ISNULL(NULLIF(ClientValidationRemark, '') + ',', '') +
@ClientValidationRemark
WHERE Id=@ImportId
```
Taking advantage of the fact that concatenating strings to null values will result as null (Note: only when you use the [string concatenation operator (`+`)](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/string-concatenation-transact-sql?view=sql-server-ver15), Not when you use the built in function [`concat`](https://learn.microsoft.com/en-us/sql/t-sql/functions/concat-transact-sql?view=sql-server-ver15)), I've used `NULLIF` to convert empty string values in `ClientValidationRemark` to null, and then `ISNULL` to convert the `null + ', '` back to an empty string.
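The NULLIF/ISNULL trick ports directly to other engines; a quick sketch with Python's sqlite3 (SQLite spells `ISNULL` as `IFNULL` and concatenates with `||`, which also propagates NULL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ImportState (Id INTEGER PRIMARY KEY, ClientValidationRemark TEXT)")
con.executemany("INSERT INTO ImportState VALUES (?, ?)", [(1, ''), (2, 'first')])

# Append a remark; prepend a comma only when the column is non-empty
sql = """
UPDATE ImportState
SET ClientValidationRemark =
    IFNULL(NULLIF(ClientValidationRemark, '') || ',', '') || ?
WHERE Id = ?
"""
con.execute(sql, ('new', 1))
con.execute(sql, ('new', 2))
rows = con.execute("SELECT ClientValidationRemark FROM ImportState ORDER BY Id").fetchall()
print(rows)
```

Row 1 starts empty, so no leading comma is produced; row 2 already has content, so the comma is inserted.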
|
Here's an untested sample
```
UPDATE ImportState
SET ClientValidationRemark =
    CASE WHEN ClientValidationRemark IS NULL THEN @ClientValidationRemark
         WHEN ClientValidationRemark = '' THEN @ClientValidationRemark
         ELSE ClientValidationRemark + ',' + @ClientValidationRemark
    END
WHERE Id=@ImportId
```
|
SQL string concatenation, separated by ',' only when column is not null
|
[
"",
"sql",
"sql-server",
"string-concatenation",
""
] |
I'm a SQL newbie. I want to perform an IF..THEN in a SQL SELECT statement. I tried to write code like this, but it returns a syntax error.
Here's my code :
```
SELECT *,IF(total_skor <= 100 AND total_skor >= 80,"Amat Baik")
AS pred FROM rekap_nilai ORDER BY id ASC
```
Can you tell me what's wrong with my code? Thanks sir/madam.
|
You are missing the else part in your query; otherwise your query is OK. Below is the query with the else part:
```
SELECT *,IF(total_skor <= 100 AND total_skor >= 80,"Amat Baik",NULL) AS pred FROM rekap_nilai ORDER BY id ASC;
```
|
The MySQL `IF` function takes *three* arguments. The first is evaluated as a boolean expression, if it evaluates to true (any non-zero integer value), the function returns the *second* argument, otherwise, it returns the *third* argument.
Your IF is missing a third argument, a value to return when the boolean expression does not evaluate to TRUE.
This can be any expression, including a literal empty string or NULL keyword, whatever you want to return.
```
IF(total_skor <= 100 AND total_skor >= 80,'Amat Baik','foo')
^^^^^^
```
---
NOTES:
Your first argument could also be re-written using a `BETWEEN` comparison, e.g.
```
IF(total_skor BETWEEN 80 AND 100, 'Amat Baik', '')
```
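Since `IF()` is MySQL-specific, the portable spelling is a `CASE` expression; a small sketch with hypothetical scores, run through Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rekap_nilai (id INTEGER, total_skor INTEGER)")
con.executemany("INSERT INTO rekap_nilai VALUES (?, ?)", [(1, 95), (2, 60), (3, 80)])

rows = con.execute("""
    SELECT id,
           -- CASE without an ELSE yields NULL, like IF(cond, x, NULL)
           CASE WHEN total_skor BETWEEN 80 AND 100 THEN 'Amat Baik' END AS pred
    FROM rekap_nilai ORDER BY id
""").fetchall()
print(rows)
```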
|
Code for IFβ¦THEN in SQL SELECT
|
[
"",
"mysql",
"sql",
""
] |
There is open date `2015-05-19 10:40:14` and close date `2015-05-20 09:21:11`
When I subtract them (close\_date.date\_value - open\_date.date\_value)
I get a value like `9.45104166666666666666666666666666666667E-01`.
I want to ignore the times 10:40:14 and 09:21:11 on the two dates.
Similarly I subtract (SYSDATE - open\_date.date\_value) to get the number of days as a number.
Could anyone help me resolve this problem?
```
case
when s then
(close_date.date_value - open_date.date_value)
else
(SYSDATE - open_date.date_value)
end as "dd",
```
|
From **performance** point of view, I would not use **TRUNC** as it would suppress any regular **index** on the date **column**. I would let the date arithmetic as it is, and **ROUND** the value.
For example,
```
SQL> SELECT SYSDATE - to_date('2015-05-20 09:21:11','YYYY-MM-DD HH24:MI:SS') diff,
2 ROUND(
3 SYSDATE - to_date('2015-05-20 09:21:11','YYYY-MM-DD HH24:MI:SS')
4 ) diff_round
5 FROM dual;
DIFF DIFF_ROUND
---------- ----------
29.1248264 29
SQL>
```
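The same arithmetic can be sketched outside Oracle: SQLite's `julianday` gives the fractional day difference, rounding plays the role of Oracle's `ROUND`, and taking `date()` of both operands plays the role of `TRUNC` (midnight to midnight). A quick check via Python's sqlite3 with the question's timestamps:

```python
import sqlite3

con = sqlite3.connect(":memory:")
open_ts, close_ts = '2015-05-19 10:40:14', '2015-05-20 09:21:11'

frac, rounded, dates_only = con.execute("""
    SELECT julianday(?2) - julianday(?1),             -- fractional days (0.9451...)
           ROUND(julianday(?2) - julianday(?1)),      -- like ROUND in Oracle
           julianday(date(?2)) - julianday(date(?1))  -- like TRUNC: time-of-day ignored
""", (open_ts, close_ts)).fetchone()
print(frac, rounded, dates_only)
```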
|
You can use DATEDIFF function. Here is the code
```
SELECT DATEDIFF(DAY, CONVERT(DATETIME, '2015-05-19 10:40:14'), CONVERT(DATETIME, ' 2015-05-20 09:21:11'))
```
|
How to mask date format in sql?
|
[
"",
"sql",
"oracle",
"oracle11g",
"date-arithmetic",
""
] |
I have 2 tables that I need to update:
Table A consists of: ID, personName, Date, status
Table B consist of: PersonID, Date, status
For every row in A there can be multiple rows in B with the same personID
I need to "loop" over all rows from A where status=2 and update the date and set the status to 1.
Also, for every row in A where status=2 I need to update all the rows in B that have the same personID (i.e., A.ID == B.PersonID): I need to update the date and set the status to 1 there as well.
So basically, if I was to do this programmatically (or algorithmically) its's something like that:
```
Foreach(var itemA in A)
If (itemA.status = 2)
itemA.status to 1
itemA.date = GetDate()
foreach(var itemB in B)
if(itemB.PersonID == itemA.ID && itemB.status != 2 )
Change itemB.status to 1
Change itemB.date = GetDate()
```
I know how to update all the rows in B using the following SQL statement:
```
UPDATE
B
SET
status = 1,
date = GETDATE()
FROM
B
INNER JOIN
A
ON
B.PersonID = A.ID
```
The problem is that I don't know how to also update table A, since there can't be multiple tables in one UPDATE statement.
Thanks for any help.
|
Here is an example using the `output` clause:
```
declare @ids table (id int);
update table1
set status = 1
output inserted.id into @ids
where status = 2;
update table2
set status = 1,
date = getdate()
where personid in (select id from @ids);
```
|
Question has been asked before:
[How to update two tables in one statement in SQL Server 2005?](https://stackoverflow.com/questions/2044467/how-to-update-two-tables-in-one-statement-in-sql-server-2005)
it is not possible to update multiple tables at once.
Summary answer from that question:
You can't update multiple tables in one statement, however, you can use a transaction to make sure that two UPDATE statements are treated atomically. You can also batch them to avoid a round trip.
```
BEGIN TRANSACTION;
UPDATE Table1
SET Table1.LastName = 'DR. XXXXXX'
FROM Table1 T1, Table2 T2
WHERE T1.id = T2.id
and T1.id = '011008';
UPDATE Table2
SET Table2.WAprrs = 'start,stop'
FROM Table1 T1, Table2 T2
WHERE T1.id = T2.id
and T1.id = '011008';
COMMIT;
```
For your question something like this would work:
```
BEGIN TRANSACTION;
UPDATE B
SET status = 1
, date = GETDATE()
WHERE B.PersonId IN ( SELECT ID
FROM A
WHERE A.status = 2
);
UPDATE A
SET status = 1
, date = GETDATE()
WHERE A.status = 2;
COMMIT;
```
|
How to update multiple tables with single query
|
[
"",
"sql",
"sql-server",
""
] |
What's the CTE syntax to *delete* from a table, then *insert* to the same table and return the values of the *insert*?
Operating on 2 hours of sleep and something doesn't look right (besides the fact that this won't execute):
```
WITH delete_rows AS (
DELETE FROM <some_table> WHERE id = <id_value>
RETURNING *
)
SELECT * FROM delete_rows
UNION
(
INSERT INTO <some_table> ( id, text_field )
VALUES ( <id_value>, '<text_field_value>' )
RETURNING *
)
```
The expected behavior is to first clear all the records for an ID, then insert records for the same ID (intentionally not an upsert) and return those inserted records (not the deletions).
|
Your question update made clear that you cannot do this in a single statement.
Packed into CTEs of the same statement, both operations (`INSERT` and `DELETE`) would see the same snapshot of the table and execute **virtually at the same time**. I.e., the `INSERT` would still see all rows that you thought to be deleted already. [The manual:](https://www.postgresql.org/docs/devel/queries-with.html#QUERIES-WITH-MODIFYING)
> All the statements are executed with the same *snapshot* (see [Chapter 13](https://www.postgresql.org/docs/current/mvcc.html)), so they cannot "see" one another's effects on the target tables.
You can wrap them as two independent statements into the same transaction - which doesn't seem strictly necessary either, but it would allow the whole operation to succeed / fail atomically:
```
BEGIN;
DELETE FROM <some_table> WHERE id = <id_value>;
INSERT INTO <some_table> (id, text_field)
VALUES ( <id_value>, '<text_field_value>')
RETURNING *;
COMMIT;
```
Now, the `INSERT` can see the results of the `DELETE`.
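From a client, the same delete-then-insert transaction can be sketched with Python's sqlite3 (hypothetical table and values; `with con:` opens a transaction that commits on success and rolls back on error):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE some_table (id INTEGER, text_field TEXT)")
con.execute("INSERT INTO some_table VALUES (42, 'old')")

with con:  # BEGIN ... COMMIT; both statements succeed or fail atomically
    con.execute("DELETE FROM some_table WHERE id = ?", (42,))
    con.execute("INSERT INTO some_table VALUES (?, ?)", (42, 'new'))

rows = con.execute("SELECT * FROM some_table").fetchall()
print(rows)
```

Because the statements run one after the other, the INSERT sees the effect of the DELETE, which is exactly what the single-statement CTE version cannot guarantee.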
|
```
CREATE TABLE test_table (value TEXT UNIQUE);
INSERT INTO test_table SELECT 'value 1';
INSERT INTO test_table SELECT 'value 2';
WITH delete_row AS (DELETE FROM test_table WHERE value='value 2' RETURNING 0)
INSERT INTO test_table
SELECT DISTINCT 'value 2'
FROM (SELECT 'dummy') dummy
LEFT OUTER JOIN delete_row ON TRUE
RETURNING *;
```
The query above handles the situations when DELETE deletes 0/1/some rows.
|
SQL CTE Syntax to DELETE / INSERT rows
|
[
"",
"sql",
"postgresql",
"common-table-expression",
"postgresql-9.2",
""
] |
My table:
```
ID Name Status1 Status2
-------------------------------------
1 foo bar grain
2 bar foo sball
3 foo bar grain
4 far boo sball
```
I need for it to actually come out like this:
```
ID Name Status
-------------------------------
1 foo bar
1 foo grain
2 bar foo
2 bar sball
3 foo bar
3 foo bar
4 far boo
4 far sball
```
How would I go about doing that and can you explain why?
I have tried concat but that is obviously wrong.
|
You can use `union all` (or `union`, it depends on what you want to get in case `Status1` and `Status2` are the same):
```
select ID, Name, Status1 as Status from tbl
union all
select ID, Name, Status2 from tbl
```
[**SQLFiddle**](http://sqlfiddle.com/#!4/66ef90/2)
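A runnable sketch of the `UNION ALL` unpivot with the question's data, via Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (ID INTEGER, Name TEXT, Status1 TEXT, Status2 TEXT)")
con.executemany("INSERT INTO tbl VALUES (?,?,?,?)", [
    (1, 'foo', 'bar', 'grain'), (2, 'bar', 'foo', 'sball'),
    (3, 'foo', 'bar', 'grain'), (4, 'far', 'boo', 'sball'),
])
# One SELECT per status column; UNION ALL stacks them into one long column
rows = con.execute("""
    SELECT ID, Name, Status1 AS Status FROM tbl
    UNION ALL
    SELECT ID, Name, Status2 FROM tbl
    ORDER BY ID, Status
""").fetchall()
print(rows)
```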
|
Here's another solution:
```
WITH counter AS
(SELECT LEVEL lvl FROM dual CONNECT BY LEVEL <= 2)
SELECT t.ID
,t.NAME
,CASE c.lvl
WHEN 1 THEN t.status1
WHEN 2 THEN t.status2
END status
FROM test_table t
CROSS JOIN counter c;
```
The hierarchical query is something you'll see quite often on <http://asktom.oracle.com/> to produce a list of numbers. The cross join returns a row for every combination of the rows in the joined tables.
|
Merge results from two separate columns into one elongated column
|
[
"",
"sql",
"oracle",
"select",
""
] |
I came across a question while building the schema for my application:
when to use `varchar(max)` and when to use `nvarchar(max)`, i.e. the exact use cases where each should be applied. I have searched the net too, but was not able to find an exact answer.
Could anyone suggest some concrete use cases?
|
This is for Microsoft **SQL Server**:
`NVARCHAR` is **Unicode** - 2 bytes per character, therefore max. of 1 billion characters; will handle East Asian, Arabic, Hebrew, Cyrillic etc. characters just fine.
`VARCHAR` is **non-Unicode** - 1 byte per character, max. capacity is 2 billion characters, but limited to the character set your SQL Server is using - basically, no support for the languages mentioned before
|
Varchar(max)
Varchar fields can be of any size up to a limit, which varies by database: an Oracle 9i database has a limit of 4000 bytes, a MySQL database has a limit of 65,535 bytes (for the entire row) and Microsoft SQL Server 2005 has a limit of 8000 characters (unless varchar(max) is used, which has a maximum storage capacity ...
nvarchar(max)
n defines the string length and can be a value from 1 through 4,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size, in bytes, is two times the actual length of data entered + 2 bytes. The ISO synonyms for nvarchar are national char varying and national character varying.
|
Difference between Varchar(max) and nvarchar(max)
|
[
"",
"sql",
"sqldatatypes",
""
] |
I have this records from my `users` table:
```
user_id first_name last_name gender email
******* ********** ********* ****** *****
229 Natalie Fern F natalie.fern@hotmail.com
```
and I want to search for the same `First Name` & `Last Name` in `first_name` OR `last_name`.
I have created a SQL query but I am not getting the record:
```
SELECT *
FROM user_detail
WHERE first_name LIKE '%Natalie Fern%'
OR last_name LIKE '%Natalie Fern%'
LIMIT 0 , 30
```
Notice that I am passing the first & last name against both fields.
Any idea why I am not getting records?
Thanks.
|
You are using the `LIKE` operator for `first_name` and `last_name` as follows:
```
LIKE '%Natalie Fern%'
```
This will only match strings which contain *anything* followed by 'Natalie Fern' followed by *anything*. But since the first and last name columns only contain (surprise) the first and last names, your query isn't matching any records. You can use `CONCAT` to try to match the combination of first and last names:
```
SELECT *
FROM user_detail
WHERE CONCAT(first_name, ' ', last_name)
LIKE '%Natalie Fern%'
LIMIT 0, 30
```
If you want to check for people whose first *or* last names could be 'Natalie' or 'Fern', then you could use this query:
```
SELECT *
FROM user_detail
WHERE first_name LIKE '%Natalie%'
OR first_name LIKE '%Fern%'
OR last_name LIKE '%Natalie%'
OR last_name LIKE '%Fern%'
LIMIT 0, 30
```
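SQLite has no `CONCAT`, but the same full-name match works with the standard `||` operator; a sketch with the question's row via Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_detail (user_id INTEGER, first_name TEXT, last_name TEXT)")
con.execute("INSERT INTO user_detail VALUES (229, 'Natalie', 'Fern')")

# Concatenate first and last name, then match the combined string
rows = con.execute("""
    SELECT user_id FROM user_detail
    WHERE first_name || ' ' || last_name LIKE '%' || ? || '%'
    LIMIT 30
""", ('Natalie Fern',)).fetchall()
print(rows)
```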
|
Try this
[Sql Fiddle](http://sqlfiddle.com/#!3/8edd6d/1)
Function
```
CREATE FUNCTION [dbo].[fn_Split](@text varchar(8000), @delimiter varchar(20))
RETURNS @Strings TABLE
(
position int IDENTITY PRIMARY KEY,
value varchar(8000)
)
AS
BEGIN
DECLARE @index int
SET @index = -1
WHILE (LEN(@text) > 0)
BEGIN
SET @index = CHARINDEX(@delimiter , @text)
IF (@index = 0) AND (LEN(@text) > 0)
BEGIN
INSERT INTO @Strings VALUES (@text)
BREAK
END
IF (@index > 1)
BEGIN
INSERT INTO @Strings VALUES (LEFT(@text, @index - 1))
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
ELSE
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
RETURN
END
```
Query
```
SELECT * FROM user_detail inner join
(select value from fn_split('Natalie Fern',' ')) as split_table
on (user_detail.first_name like '%'+split_table.value+'%'
or user_detail.last_name like '%'+split_table.value+'%');
```
|
SQL find same value on multiple filelds with like operator
|
[
"",
"mysql",
"sql",
"search",
"sql-like",
""
] |
In my database each row has a column for average rating.
Now let's say I have thousands of rows with a variety of averages such as `4.45 4.78 3.21 2.13 4.91`
How would I get the rows with the top 3 highest average?
|
You can order rows in descending with `order by average_rating desc` and `limit` output to the top 3 results:
```
select average_rating
from tbl
order by average_rating desc
limit 3
```
|
Order the averages in `DESC` order to find the largest ones, then `LIMIT 5` to take the top 5:
```
SELECT avg FROM table_name ORDER BY avg DESC LIMIT 5;
```
|
find top entries in MYSQL database
|
[
"",
"mysql",
"sql",
""
] |
Not being experienced with SQL, I was hoping someone could help me with this.
I have an empty temp table, as well as a table with information in it.
My outline of my query as it stands is as follows:
```
CREATE TABLE [#Temp] (ID Int, Field1 Varchar)
INSERT INTO [#Temp]
SELECT ID, Field1
FROM [Other_table]
WHERE ID IN (ID1, ID2, ID3...)
```
So I'm passing a whole bunch of IDs to the query, and where the ID corresponds to an ID in `Other_table`, it must populate the temp table with this information.
Is it possible to save the IDs that did not match somewhere else (say another temp table) within the same query? Or to the same temp table, just with Field1 = NULL in that case?
I need to do extra work on the IDs that were not matched, so I need ready access to them somewhere. I was hoping to do this all in this one query, if that's the fastest way.
Edit:
Thanks for all the help.
Apologies, I see now that my question is not entirely clear.
If Other\_table contains IDs 1 - 1000, and I pass in IDs 999, 1000 and 1001, I want the temp table to contain the information for 999 and 1000, and then also an entry with ID = 1001 with Field1 = NULL. I don't want IDs 1 - 998 returned with Field1 = NULL.
|
You can only use one target table for each insert statement. therefore, keeping `field1` as null seems like the easy way to go:
```
INSERT INTO [#Temp]
SELECT ID, CASE WHEN ID IN (ID1, ID2, ID3...) THEN Field1 END
FROM [Other_table]
```
the case statement will return `null` if the ID is not in the list.
**Update**
After you have updated the question, this is what I would recommend:
First, insert the list of ids you are using in the `in` operator into another temporary table:
```
create table #tempIDs (id int)
insert into #tempIDs values(id1), (id2), (id3), ....
```
then just use a simple left join:
```
INSERT INTO [#Temp]
SELECT t1.ID, Field1
FROM #tempIDs t1
LEFT JOIN [Other_table] t2 ON(t1.id = t2.id)
```
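The LEFT JOIN approach can be checked end-to-end with Python's sqlite3: the other table holds IDs 1 through 1000, we ask for 999, 1000 and 1001, and the unmatched ID comes back with a NULL field.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE other_table (id INTEGER PRIMARY KEY, field1 TEXT)")
con.executemany("INSERT INTO other_table VALUES (?, ?)",
                [(i, 'v%d' % i) for i in range(1, 1001)])
con.execute("CREATE TEMP TABLE temp_ids (id INTEGER)")
con.executemany("INSERT INTO temp_ids VALUES (?)", [(999,), (1000,), (1001,)])

# LEFT JOIN keeps every requested ID; missing ones get NULL for field1
rows = con.execute("""
    SELECT t.id, o.field1
    FROM temp_ids t LEFT JOIN other_table o ON t.id = o.id
    ORDER BY t.id
""").fetchall()
print(rows)
```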
|
The quickest fix to your existing solution is to also fetch all the IDs from the other table which don't exist in your temp table. I have marked Field1 as NULL for all those IDs.
```
CREATE TABLE [#Temp] (ID Int, Field1 Varchar)
INSERT INTO [#Temp]
SELECT ID, Field1
FROM [Other_table]
WHERE ID IN (ID1, ID2, ID3...)
INSERT INTO [#Temp]
SELECT ID, NULL AS Field1
FROM [Other_Table]
WHERE ID NOT IN (SELECT DISTINCT ID FROM #Temp)
```
Other way is to include it in the same INSERT statement
```
CREATE TABLE [#Temp] (ID Int, Field1 Varchar(100))
INSERT INTO [#Temp]
SELECT ID,
CASE WHEN ID IN (ID1,ID2....) THEN Field1
ELSE NULL END AS 'Field1'
FROM [Other_Table]
```
|
SQL INSERT INTO WITH SELECT query
|
[
"",
"sql",
"sql-server",
"select",
"insert-into",
""
] |
I have the following table:

What i want is the following result:
```
InteractionID - OpenTime - CloseTime
```
Opentime is the `DATE_STAMP` where `UPDATE_TYPE = "Open"` and
CloseTime is the `DATE_STAMP` where `UPDATE_TYPE = "Close"`
I have the following SQL Query:
```
Select DISTINCT A.INTERACTION_ID as idd,
(Select B.DATE_STAMP from dbo.SD_INTERACTIONS_HISTORY B WHERE B.UPDATE_TYPE = 'Open' and B.INTERACTION_ID = A.INTERACTION_ID) as OpenTime,
(Select TOP 1 C.DATE_STAMP from dbo.SD_INTERACTIONS_HISTORY C WHERE C.UPDATE_TYPE = 'Close' and C.INTERACTION_ID = A.INTERACTION_ID) as CloseTime
from dbo.SD_INTERACTIONS_HISTORY A where DATEPART(year, A.DATE_STAMP) > 2014
```
It works kinda OK, but I have **2 questions:**
1. Is this a correct way to do this? It seems like a slow query, so perhaps there is a way to optimize this?
2. Instead of checking that the datestamp is > than 2014, I would like to find the results where OpenTime is less than 30 days, how do i do this?
I am using SQL Server 2008
|
You can try this query
```
SELECT DISTINCT A.INTERACTION_ID as idd,
A.DATE_STAMP AS OpenTime,
B.DATE_STAMP AS CloseTime
FROM dbo.SD_INTERACTIONS_HISTORY A
JOIN dbo.SD_INTERACTIONS_HISTORY B ON A.INTERACTION_ID = B.INTERACTION_ID
AND A.UPDATE_TYPE = 'Open'
AND B.UPDATE_TYPE = 'Close'
WHERE DATEDIFF(DAY,A.DATE_STAMP,GETDATE())<30
```
|
something like this?
```
SELECT
INTERACTION_ID,
update_open,
update_close,
DATEDIFF(day, update_open, update_close) AS diff
FROM
(SELECT
INTERACTION_ID,
MAX(CASE WHEN update_type = 'Open' THEN DATE_STAMP ELSE NULL END) AS update_open,
MAX(CASE WHEN update_type = 'Close' THEN DATE_STAMP ELSE NULL END) AS update_close
FROM
SD_INTERACTIONS_HISTORY
WHERE
DATEPART(YEAR, DATE_STAMP) > 2014 AND
UPDATE_TYPE IN('Open','Close')
GROUP BY
INTERACTION_ID) mx
WHERE
DATEDIFF(day, update_open, update_close) < 30
```
|
Select record and get closeTime and OpenTime from the same table
|
[
"",
"sql",
"sql-server",
""
] |
My SQL skills are quite limited. I'm in my second year of computer science at a technical college. I'm building a windows forms application that will allow the BAS director at my college to keep track of students and their progress throughout the courses. I have complete control over the database design so if you think of a way to help me reach a solution that would involve tweaking the database that is a possibility.
I'm trying to select all of the `Students` who do not have an `EnrollmentStatus` of 3 in all of the `Courses` that have a `CreditSection` of `1`. There are 12 courses with a `CreditSection` of `1`
The tables I'm using look like this:

I can think of a few ways to get my solution in speech, but can't seem to write them SQL:
```
SELECT * FROM Students WHERE each student has 12 entries in CourseEnrollment AND
CourseEnrollment.EnrollmentStatus = 3 AND Courses.CreditSection = 1
```
or
```
SELECT * FROM Students WHERE Courses.CourseID 1 thru 12 EXIST in
CourseEnrollment for each student AND CourseEnrollment.EnrollmentStatus = 3
```
I can get to the desired solution using this mess below, but as I check for students who have completed 4 years worth of courses... this query becomes ridiculously long and probably mega resource consuming.
This query selects students who are not in the list of students who have completed one or more the given courses:
```
SELECT DISTINCT s.* FROM Students s
WHERE s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 1 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 2 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 3 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 4 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 5 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 6 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 7 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 8 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 9 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 10 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 11 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 12 AND ce.EnrollmentStatus = 3) OR
s.StudentID NOT IN (SELECT ce.StudentID FROM CourseEnrollment ce WHERE ce.CourseID = 13 AND ce.EnrollmentStatus = 3)
```
My goal is to figure out how to write this query in SQL and then convert it to LINQ which is ultimately what I need. If anyone can help with either part of this I would be grateful.
I've converted the above to LINQ and it looks just as horrendous:
```
var query =
from student in datStudents.Students.AsEnumerable<dsStudentManager.StudentsRow>()
where !(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 1 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 2 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 3 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 4 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 5 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 6 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 7 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 8 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 9 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 10 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 11 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 12 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID) ||
!(from ce2 in datStudents.CourseEnrollment.AsEnumerable<dsStudentManager.CourseEnrollmentRow>() where ce2.CourseID == 13 && ce2.EnrollmentStatus == 3 select ce2.Field<int>("StudentID")).Contains<int>(student.StudentID)
select new
{
id = student.StudentID,
rtcid = student.RTCStudentID,
firstname = student.FirstName,
lastname = student.LastName,
phone = student.Phone,
studentemail = student.StudentEmail,
personalemail = student.PersonalEmail,
address = student.Address,
city = student.City,
state = student.State,
zip = student.Zip,
birthdate = student.BirthDate,
gender = student.Gender,
notes = student.Notes,
studentdocumentslocation = student.StudentDocumentsLocation
};
```
|
Using a subquery with a group by and having statement you can come up with something similar to this:
```
SELECT * FROM Students WHERE StudentID NOT IN (
SELECT s.StudentID FROM Students s
JOIN CourseEnrollment ce ON s.StudentID = ce.StudentID
JOIN Courses c ON ce.CourseID = c.CourseID
WHERE ce.EnrollmentStatus = 3 AND c.CreditSection = 1
GROUP BY s.StudentID
HAVING COUNT(*) = 12
)
```
The inner query builds the criteria for students to return, and the "HAVING COUNT(\*) = 12" gets you students that match 12 courses. If you only need a subset of courses, you can also try the following.
```
SELECT * FROM Students WHERE StudentID NOT IN (
SELECT s.StudentID FROM Students s
JOIN CourseEnrollment ce ON s.StudentID = ce.StudentID
JOIN Courses c ON ce.CourseID = c.CourseID
WHERE ce.EnrollmentStatus = 3 AND c.CreditSection = 1
AND c.CourseID IN (1,2,3,4,5,6,7,8)
GROUP BY s.StudentID
HAVING COUNT(*) = 8 -- Number of courses in the ID in clause
)
```
Hope this helps you get on track.
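A scaled-down, runnable sketch of the `HAVING COUNT` pattern (three section-1 courses instead of twelve, hypothetical data), via Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Students (StudentID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Courses (CourseID INTEGER PRIMARY KEY, CreditSection INTEGER);
CREATE TABLE CourseEnrollment (StudentID INTEGER, CourseID INTEGER, EnrollmentStatus INTEGER);
INSERT INTO Students VALUES (1, 'done'), (2, 'partial');
INSERT INTO Courses VALUES (1, 1), (2, 1), (3, 1);
-- student 1 completed all three section-1 courses, student 2 only one
INSERT INTO CourseEnrollment VALUES (1,1,3),(1,2,3),(1,3,3),(2,1,3);
""")
rows = con.execute("""
    SELECT Name FROM Students WHERE StudentID NOT IN (
        SELECT ce.StudentID
        FROM CourseEnrollment ce JOIN Courses c ON ce.CourseID = c.CourseID
        WHERE ce.EnrollmentStatus = 3 AND c.CreditSection = 1
        GROUP BY ce.StudentID
        HAVING COUNT(*) = 3   -- number of CreditSection-1 courses in this toy data
    )
""").fetchall()
print(rows)
```

Only the student who has not completed every section-1 course is returned.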
|
Try something like this, I typed it by hand, so there could be a typo.
```
SELECT * FROM Students s
inner join (select StudentID, count(0) as CrsEnrollCount
from CourseEnrollment ce
inner join Courses c
on ce.CourseID = c.CourseID
where ce.EnrollmentStatus = 3 AND Courses.CreditSection = 1) cnt
on cnt.StudentID = s.StudentID
where CrsEnrollCount < 12
```
|
Improve my SQL Select statement to select students who have not fully completed a section
|
[
"",
"sql",
"sql-server",
"linq",
"t-sql",
""
] |
If a SQL Server 2012 table has 2 `varchar(max)` columns that are rarely used and causing a table to grow rapidly, does it make sense to split into a separate table?
Data in the two `varchar(max)` columns is used to store raw JSON responses from an API.
If stored in a separate table, rows would be truncated to only include previous 6 months, reducing table size of first table.
|
> If stored in a separate table, rows would be truncated to only include previous 6 months, reducing table size of first table.
The rows would have to be deleted, not truncated, and then the BLOB space would have to be reclaimed by running `ALTER INDEX ... REORGANIZE WITH (LOB_COMPACTION = ON)`
If you'd store instead the blobs in the original table, you would have to update the rows to `SET blob = NULL` and then reclaim the space with `ALTER INDEX ... REORGANIZE WITH (LOB_COMPACTION = ON)`
So when it boils down to the details, you aren't achieving much using a split table, imho. So I stick to my earlier advice from [SQL Server varbinary(max) and varchar(max) data in a separate table](https://stackoverflow.com/questions/8703942/sql-server-varbinarymax-and-varcharmax-data-in-a-separate-table/8704485): I see no benefits in split, but I see trouble from having to maintain the rows consistent 1:1 between the splits.
You may have a case if you split and *partition* the 'blobs' table. Then you could, indeed, deallocate the old space very efficiently by switching 'out' the old partition, replacing it with an empty one, and then dropping the switched-out data. That *is* something to consider. Of course, your code would have to be smart enough when it joins the two 'splits' to consider that the blobs may be gone (e.g. use an OUTER JOIN).
|
Just to give you a hint. You can use `SPARSE` on a rarely used column.
Example:
```
CREATE TABLE myTable (
id int identity(1,1),
name nvarchar(100),
blob nvarchar(max) SPARSE,
blob2 nvarchar(max) SPARSE
)
```
Sparse will just leave a small marker inside the page. But it's mostly bad practice to have `nvarchar(max)`. Is it really needed?
You can read more about it [here](https://msdn.microsoft.com/en-us/library/cc280604.aspx).
|
SQL Server 2012 - Separate 2 Varchar(max) columns to separate table?
|
[
"",
"sql",
"sql-server",
"database",
"database-design",
""
] |
I have a query that it's select statement is this:
```
select Greatest(p.price,0) as newprice, sum(q.qty) as qty
from ....
```
it gives me:
```
newprice qty
10 1
0 1
100 2
1 2
```
I want to multiply newprice with qty to get:
```
newprice qty result
10 1 10
0 1 0
100 2 200
1 2 2
```
When I try to do `select Greatest(p.price,0) as newprice, sum(q.qty) as qty, newprice * qty` it says
```
ERROR: column "newprice" does not exist
```
I don't really need this extra column.
What I really want is: `SUM(Greatest(p.price,0) * SUM(q.qty))`, which should give the value `212`, but it says `ERROR: aggregate function calls cannot be nested`
Basically all I need is to multiply two columns and sum the result.
I know I can use CTE something similar to what is shown [here](https://stackoverflow.com/questions/15437704/multiplying-two-columns-which-have-been-calculated-on-a-case-statement) but I'm wondering if there is an easier way with less code.
|
You can just repeat what you wrote:
```
select Greatest(p.price,0) as newprice,
sum(q.qty) as qty,
Greatest(p.price,0) * sum(q.qty) as result
from ...
```
or, you can wrap your *select statement* to *temporary derived table* ([PostgreSQL: using a calculated column in the same query](https://stackoverflow.com/questions/8840228/postgresql-using-a-calculated-column-in-the-same-query))
```
select tmp.newprice,
tmp.qty,
tmp.newprice * tmp.qty as result
from (
select Greatest(p.price,0) as newprice,
sum(q.qty) as qty
from ...
) as tmp
```
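A runnable sketch of the derived-table variant with the question's numbers, via Python's sqlite3 (SQLite's two-argument `MAX` stands in for `GREATEST`, and grouping by price stands in for the question's omitted `FROM ...` part):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (price INTEGER, qty INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(10, 1), (0, 1), (100, 2), (1, 2)])

# Per-row results via a derived table...
rows = con.execute("""
    SELECT tmp.newprice, tmp.qty, tmp.newprice * tmp.qty AS result
    FROM (SELECT MAX(price, 0) AS newprice, SUM(qty) AS qty
          FROM sales GROUP BY price) AS tmp
    ORDER BY tmp.newprice
""").fetchall()
# ...and the grand total by wrapping the same derived table in SUM
total = con.execute("""
    SELECT SUM(newprice * qty)
    FROM (SELECT MAX(price, 0) AS newprice, SUM(qty) AS qty
          FROM sales GROUP BY price)
""").fetchone()[0]
print(rows, total)
```

The total comes out to 212, matching the value expected in the question.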
|
Query should be like this :
```
select *, newprice*qty from
(
select Greatest(p.price,0) as newprice, sum(q.qty) as qty
from ....
) T
```
OR
```
select Greatest(p.price,0) as newprice, sum(q.qty) as qty, Greatest(p.price,0)*sum(q.qty) as result
from ....
```
UPDATE:
you are using `group by` in your query (I presume, because of the aggregation), and in order to find sum(newprice\*qty) you will need a sub-select:
```
select sum(newprice*qty) from
(
select Greatest(p.price,0) as newprice, sum(q.qty) as qty
from ....
) T
```
|
PostgreSQL: How to multiply two columns and display result in same query?
|
[
"",
"sql",
"postgresql",
""
] |
Say I have a query that returns the following
```
ID SomeValue
1 a,b,c,d
2 e,f,g
```
Id like to return this as follows:
```
ID SomeValue
1 a
1 b
1 c
1 d
2 e
2 f
2 g
```
I already have a UDF called Split that will accept a string and a delimiter and return it as a table with a single column called [Value]. Given this, how should the SQL look to achieve this?
|
Alternatively, you could use XML like so:
```
DECLARE @yourTable TABLE(ID INT,SomeValue VARCHAR(25));
INSERT INTO @yourTable
VALUES (1,'a,b,c,d'),
(2,'e,f,g');
WITH CTE
AS
(
SELECT ID,
[xml_val] = CAST('<t>' + REPLACE(SomeValue,',','</t><t>') + '</t>' AS XML)
FROM @yourTable
)
SELECT ID,
[SomeValue] = col.value('.','VARCHAR(100)')
FROM CTE
CROSS APPLY [xml_val].nodes('/t') CA(col)
```
|
You use `cross apply`. Something like this:
```
select t.id, s.val as SomeValue
from table t cross apply
dbo.split(SomeValue, ',') as s(val);
```
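`CROSS APPLY` and the XML trick are SQL Server-specific, but the same split can be sketched portably with a recursive CTE; here is a runnable demo using Python's built-in `sqlite3` (table and column names follow the question; the CTE shape is an assumption, not the OP's `Split` UDF):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER, SomeValue TEXT);
    INSERT INTO t VALUES (1, 'a,b,c,d'), (2, 'e,f,g');
""")

# Peel one comma-delimited token off the front of `rest` per recursion step.
rows = conn.execute("""
    WITH RECURSIVE split(id, val, rest) AS (
        SELECT id, NULL, SomeValue || ',' FROM t
        UNION ALL
        SELECT id,
               substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split
        WHERE rest <> ''
    )
    SELECT id, val FROM split WHERE val IS NOT NULL ORDER BY id, val
""").fetchall()

print(rows)
```

SQL Server 2008 supports the same recursive-CTE idea with `CHARINDEX`/`SUBSTRING` in place of `instr`/`substr`.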
|
Split comma separated string table row into separate rows using TSQL
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
> There is a table a\_status\_check with columns id, status and amt.
The requirement is: if the statuses LC and BE are both present for an id, then consider only LC; otherwise, consider BE. Ignore other status codes for that id.
So, the result should contain a single row per id with the chosen status and its amt.
I tried the DECODE and CASE functions with no luck. Can someone please help.
|
Use analytical functions:
```
select distinct
id,
first_value (status) over (partition by id order by status desc) status,
first_value (amt ) over (partition by id order by status desc) amt
from
tq84_a_status_check
where
status in ('LC', 'BE')
order by
id;
```
Testdata:
```
create table tq84_a_status_check (
id number,
status varchar2(10),
amt number
);
select distinct
id,
first_value (status) over (partition by id order by status desc) status,
first_value (amt ) over (partition by id order by status desc) amt
from
tq84_a_status_check
where
status in ('LC', 'BE')
order by
id;
```
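The `first_value` approach can be exercised end to end with Python's built-in `sqlite3` (SQLite 3.25+ for window functions); the sample amounts below are invented, since the original data was only shown as an image:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a_status_check (id INTEGER, status TEXT, amt INTEGER);
    INSERT INTO a_status_check VALUES
        (1, 'LC', 100), (1, 'BE', 50), (1, 'XX', 10),
        (2, 'BE', 70),  (2, 'XX', 5);
""")

# 'LC' sorts after 'BE', so ORDER BY status DESC puts LC first when present.
rows = conn.execute("""
    SELECT DISTINCT
           id,
           first_value(status) OVER (PARTITION BY id ORDER BY status DESC) AS status,
           first_value(amt)    OVER (PARTITION BY id ORDER BY status DESC) AS amt
    FROM a_status_check
    WHERE status IN ('LC', 'BE')
    ORDER BY id
""").fetchall()

print(rows)  # [(1, 'LC', 100), (2, 'BE', 70)]
```

Note that the trick only works because `'LC' > 'BE'` alphabetically; with other code pairs you would order by an explicit `CASE` ranking instead.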
|
You can try this:
```
SELECT yt.id, yt.status, yt.amt
FROM yourTable yt
INNER JOIN (
      -- keep only the max of the two relevant statuses per id ('LC' > 'BE')
      SELECT id, MAX(status) AS status
      FROM yourTable
      WHERE status IN ('LC', 'BE')
      GROUP BY id
     ) onlyMax
  ON yt.id = onlyMax.id
 AND yt.status = onlyMax.status
```
|
Query to filter records based on specific conditions
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a SQL query that returns all employees. It gives me an error because after the `WITH ( )` it expects a `SELECT` statement.
But if I would like to filter the result a second time, how do I work around using IF-ELSE IF?
Note: My SQL might sound a little confusing (i.e.: if @Gender = Male, look for all single employees) because those are placeholders I put in place of my real SQL. I apologize for not being too creative in naming.
```
DECLARE @Gender VARCHAR(10)
SET @Gender = 'Female'
/* Store all employees aged 50+ in a variable called queryResult */
WITH queryResult AS
(
SELECT * FROM Employees WHERE Age >= 50
)
/* If Gender = Male, I'm looking for all employees that are single */
IF @Gender = 'Male'
BEGIN
SELECT * From queryResult WHERE Status = 'Single'
END
/* If Gender = Female, I'm looking for all employees that are married */
ELSE IF @Gender = 'Female'
BEGIN
SELECT * From queryResult WHERE Status = 'Married'
END
```
**SOLVED:** In case you wanted to see my real query, here's my working query based on Rahul's answer.
```
WITH queryResult AS
(
SELECT TOP 1 t.userName, t.timeStamp
FROM [Tracking].[dbo].[Tracking] t join (SELECT URL, PrimaryOwner FROM Governance WHERE Content = @Content) g
ON t.referringURL like g.URL
WHERE
timeStamp >=
CASE WHEN @Frequency = 'Daily' THEN CAST(GETDATE() As Date)
WHEN @Frequency = 'Monthly' THEN DATEADD(month, -1, CAST(GETDATE() As Date))
WHEN @Frequency = 'Annually' THEN DATEADD(year, -1, CAST(GETDATE() As Date))
END
and PrimaryOwner = t.userName
ORDER BY timestamp desc
)
SELECT * from queryResult
```
|
Why not do the filtering in CTE itself like
```
DECLARE @Gender VARCHAR(10)
SET @Gender = 'Female'
WITH queryResult AS
(
SELECT * FROM Employees WHERE Age >= 50 AND Gender = @Gender
)
select * from queryResult;
```
**EDIT:** In that case change your CTE with a `CASE` expression like below
```
DECLARE @Gender VARCHAR(10)
SET @Gender = 'Female'
WITH queryResult AS
(
SELECT * FROM Employees WHERE Age >= 50
AND Status = CASE WHEN @Gender = 'Male' THEN 'Single' ELSE 'Married' END
)
SELECT * FROM queryResult;
```
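The `CASE`-in-the-`WHERE` pattern behaves like the original IF/ELSE and can be checked quickly with Python's built-in `sqlite3` (employee names and rows are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Employees (name TEXT, Age INTEGER, Gender TEXT, Status TEXT);
    INSERT INTO Employees VALUES
        ('Al',  55, 'Male',   'Single'),
        ('Bo',  60, 'Male',   'Married'),
        ('Cat', 52, 'Female', 'Married'),
        ('Di',  30, 'Female', 'Married');
""")

def filtered(gender):
    # The parameter only selects which Status to look for,
    # exactly like the IF/ELSE branches in the question.
    return conn.execute("""
        WITH queryResult AS (
            SELECT * FROM Employees
            WHERE Age >= 50
              AND Status = CASE WHEN :g = 'Male' THEN 'Single' ELSE 'Married' END
        )
        SELECT name FROM queryResult
    """, {"g": gender}).fetchall()

print(filtered("Male"), filtered("Female"))
```

As in the original, the variable picks the `Status` filter; it does not restrict rows by `Gender` itself.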
|
Why not just:
```
DECLARE @Gender VARCHAR(10)
SET @Gender = 'Female';
WITH queryResult AS
(
SELECT * FROM Employees WHERE Age >= 50
)
SELECT * From queryResult WHERE Gender = @Gender
```
EDIT:
If your WHERE clauses are very different you are likely better off putting the conditional on the outside. Putting the same cte or subquery in multiple places sort of offends the eye of a developer, but it could be the right answer here.
So to show that with your oversimplified example:
```
DECLARE @Gender VARCHAR(10)
SET @Gender = 'Female'
IF @Gender = 'Male'
BEGIN
;WITH queryResult AS
(
SELECT * FROM Employees WHERE Age >= 50
)
SELECT * From queryResult WHERE Gender = 'Male'
END
ELSE IF @Gender = 'Female'
BEGIN
;WITH queryResult AS
(
SELECT * FROM Employees WHERE Age >= 50
)
SELECT * From queryResult WHERE Gender = 'Female'
END
```
|
How to filter a SELECT result with IF condition?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm a first time poster and new to SQL, as are my colleagues.
I have a column of numbers (let's just call it "numbers") and the values in it range from 1-9999.
I need to create a grouping column (let's call it "numbers level") and assign the values of AA to all numbers from 1-999, BB from 1000-6999 and CC for 7000-9999.
So, the final output would look like below (output does not need to be ordered):
```
numbers | numbers level
-------------------------------------
136 | AA
2935 | BB
4288 | BB
8987 | CC
436 | AA
3737 | BB
```
Is there any easy way to script this, i.e.: by using a GROUP BY clause?
Or does that grouping column have to exist already and I need to just populate it with a command?
Please let me know if my question reeks of cluelessness and I'll clarify the best I can.
|
This would be easiest with a case statement (assuming you're using sql-server; if not, please add tags for the correct version).
```
Select [Numbers]
, case when [Numbers] between 1 and 999 then 'AA'
when [Numbers] between 1000 and 6999 then 'BB'
when [Numbers] between 7000 and 9999 then 'CC'
end as [Numbers Level]
from MyTable
```
If any of your numbers don't fall into those ranges, it will return NULL - use `else` if you want a different result when this happens.
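The range-bucketing `CASE` runs unchanged on SQLite, so it can be demonstrated with Python's built-in `sqlite3` (using the question's sample numbers; `MyTable` is the answer's placeholder name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MyTable (Numbers INTEGER);
    INSERT INTO MyTable VALUES (136), (2935), (4288), (8987), (436), (3737);
""")

# Each number falls into exactly one non-overlapping range.
rows = conn.execute("""
    SELECT Numbers,
           CASE WHEN Numbers BETWEEN 1    AND 999  THEN 'AA'
                WHEN Numbers BETWEEN 1000 AND 6999 THEN 'BB'
                WHEN Numbers BETWEEN 7000 AND 9999 THEN 'CC'
           END AS NumbersLevel
    FROM MyTable
""").fetchall()

print(rows)
```

The derived column can also feed a `GROUP BY NumbersLevel` if counts per bucket are wanted.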
|
The grouping column has to exist, but you could use a `case` instead.
```
select nr,
case when nr <= 999 then 'AA'
         when nr <= 6999 then 'BB'
else 'CC'
end as NrLevel
from Numbers
```
|
SQL: GROUP BY for number ranges?
|
[
"",
"sql",
"group-by",
""
] |
I need to increment each field in a column by 1 starting from 2866. I've built the following query, but all it does is set each field to 2867 as opposed to 2868, 2869 etc...
```
DECLARE @a int
SET @a = 2866
UPDATE view_kantech_matched SET image_id = @a + 1
WHERE image_id = null
```
Any suggestions? Thanks
|
The general structure for doing this would be:
```
;WITH Numbered as (
SELECT *,ROW_NUMBER() OVER (ORDER BY <some column>) rn
FROM view_kantech_matched
WHERE image_id is NULL
)
UPDATE Numbered SET image_id = 2865 + rn
```
But I don't know what `<some column>` would be.
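The `ROW_NUMBER`-based renumbering can be sketched with Python's built-in `sqlite3` (SQLite 3.25+); SQLite cannot `UPDATE` through a CTE, so this demo materializes the numbering first (the `name` column and sample rows are invented, and `rowid` stands in for `<some column>`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE view_kantech_matched (name TEXT, image_id INTEGER);
    INSERT INTO view_kantech_matched VALUES
        ('a', NULL), ('b', NULL), ('c', 99), ('d', NULL);

    -- Number only the NULL rows, then apply the offset per row.
    CREATE TEMP TABLE numbered AS
        SELECT rowid AS rid, ROW_NUMBER() OVER (ORDER BY rowid) AS rn
        FROM view_kantech_matched
        WHERE image_id IS NULL;

    UPDATE view_kantech_matched
    SET image_id = 2865 + (SELECT rn FROM numbered
                           WHERE rid = view_kantech_matched.rowid)
    WHERE image_id IS NULL;
""")

print(conn.execute("SELECT name, image_id FROM view_kantech_matched").fetchall())
```

Rows a, b and d receive 2866, 2867 and 2868, while the already-populated row keeps its value.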
|
```
DECLARE @a int
SET @a = 2866
UPDATE view_kantech_matched SET image_id = @a, @a=@a + 1
WHERE image_id IS NULL
```
|
Increment each field in a column by 1 - SQL Server 2008
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to make a SQL query between two columns.
table\_1
```
ID ProductName ProductDescription
1 Prod_1 Description_1
2 Prod_2 Description_1
3 Prod_3 Description_1
4 Prod_4 Description_1
5 Prod_5 Description_1
```
table\_2
```
ID Product Partner
1 1 21
2 2 21
3 3 21
4 1 32
5 1 32
6 4 21
7 5 21
8 5 32
```
By using the query below I get as a result only the list of products that are selected in table\_2. That is good, but I also want to print all values from table\_1 in the same query, which I need for later programming. I'm not sure if it is possible to make one column that will print 1 if Product and ID match, and 0 if not.
```
$query = "SELECT a.ID, a.ProductName, b.ID, b.Product, b.Partner
FROM table_1 a
LEFT JOIN table_2 b
ON a.ID = b.Product
WHERE b.Partner = 21"
```
I want to print all values from table\_1, marked according to whether they are selected in table\_2.
I am getting stuck here any advice is appreciated.
|
If I understand your requirements correctly, this untested query should work:
```
$query = "SELECT a.ID, a.ProductName, b.ID, b.Product, b.Partner, case when b.id is null then 0 else 1 end as selected
 FROM table_1 a
 LEFT JOIN table_2 b
 ON a.ID = b.Product and b.Partner = 21"
```
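The key idea is putting the partner filter in the `ON` clause (not the `WHERE`) so unmatched products survive the `LEFT JOIN`; here is a runnable sketch with Python's built-in `sqlite3` using the question's tables (partner 32 chosen so both flag values appear; `DISTINCT` added because product 1 matches twice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_1 (ID INTEGER, ProductName TEXT);
    CREATE TABLE table_2 (ID INTEGER, Product INTEGER, Partner INTEGER);
    INSERT INTO table_1 VALUES (1,'Prod_1'),(2,'Prod_2'),(3,'Prod_3'),(4,'Prod_4'),(5,'Prod_5');
    INSERT INTO table_2 VALUES (1,1,21),(2,2,21),(3,3,21),(4,1,32),(5,1,32),(6,4,21),(7,5,21),(8,5,32);
""")

# NULL on the right side of the LEFT JOIN means "no selection for this partner".
rows = conn.execute("""
    SELECT DISTINCT a.ID, a.ProductName,
           CASE WHEN b.ID IS NULL THEN 0 ELSE 1 END AS selected
    FROM table_1 a
    LEFT JOIN table_2 b
           ON a.ID = b.Product AND b.Partner = 32
    ORDER BY a.ID
""").fetchall()

print(rows)
```

Moving `b.Partner = 32` into a `WHERE` clause would silently drop the unmatched products and defeat the purpose of the outer join.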
|
Try this
```
$query = "SELECT a.ID, a.ProductName, b.ID, b.Product, b.Partner, case when b.id is null then 0 else 1 end as selected
FROM table_1 a
LEFT JOIN table_2 b
ON a.ID = b.Product and b.Partner = 21"
```
|
sql statement for two different columns
|
[
"",
"mysql",
"sql",
""
] |
In a certain table, I'm trying to get the `CCID`s that have rows with a `TypeID`, and no rows of any other `TypeID`. The following SQL query seems incorrect
```
SELECT
T1.CCID,
T1.TrCnt AS OurTypeCnt,
T2.TrCnt AS NotOurTypeCnt
FROM (SELECT
CCID,
COUNT(CCID) AS TrCnt
FROM CCsTransactions
WHERE TypeID = 5
GROUP BY CCID) T1,
(SELECT
CCID,
COUNT(CCID) AS TrCnt
FROM CCsTransactions
WHERE TypeID <> 5
GROUP BY CCID) T2
WHERE ((T1.TrCnt >= 1)
AND (T2.TrCnt < 1))
```
It returns empty because the results seem to be joining funny (with no 0 values in the `TrCnts`).
Is there a better way to do this?
|
You access the same table twice with different WHERE-conditions. This can be replaced by CASE:
```
SELECT CCID,
COUNT(CASE WHEN TypeID = 5 THEN CCID END) as OurTypeCnt,
COUNT(CASE WHEN TypeID <> 5 THEN CCID END) as NotOurTypeCnt
FROM CCsTransactions
GROUP BY CCID
HAVING COUNT(CASE WHEN TypeID = 5 THEN CCID END) >= 1
   AND COUNT(CASE WHEN TypeID <> 5 THEN CCID END) < 1
-- note: T-SQL cannot reference SELECT aliases in HAVING, so the expressions are repeated
```
|
You complete query can be written as:
```
select ccid
from CCsTransactions
group by ccid
having sum(case when TypeId = 5 then 1 else 0 end) > 0 and
sum(case when TypeId <> 5 then 1 else 0 end) = 0;
```
Or, a simpler way is:
```
select ccid
from CCsTransactions
group by ccid
having max(TypeId) = 5 and min(TypeId) = 5;
```
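The `min = max` trick is engine-agnostic and can be verified with Python's built-in `sqlite3` (the three sample CCIDs below are invented to cover each case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CCsTransactions (ccid INTEGER, TypeId INTEGER);
    INSERT INTO CCsTransactions VALUES
        (1, 5), (1, 5),   -- only type 5: should be returned
        (2, 5), (2, 3),   -- mixed types: excluded
        (3, 2);           -- never type 5: excluded
""")

# If both the smallest and largest TypeId are 5, every TypeId is 5.
rows = conn.execute("""
    SELECT ccid
    FROM CCsTransactions
    GROUP BY ccid
    HAVING MAX(TypeId) = 5 AND MIN(TypeId) = 5
""").fetchall()

print(rows)  # [(1,)]
```

The trick relies on `TypeId` being non-NULL; NULLs are ignored by `MIN`/`MAX` and would not disqualify a group.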
|
SQL Group By COUNT in two
|
[
"",
"sql",
"sql-server",
""
] |
My question here is: what is the difference between a `CTE` and a `View` in **SQL**? I mean, in which cases should I use a `CTE` and in which a `View`? I know that both are kinds of virtual tables, but I can't differentiate their use.
I found a similar question [here](https://stackoverflow.com/questions/6026842/cte-vs-view-performance-in-sql-server) but it's about performance.
**Update 1:**
For example: I have a database filled with trades (`tbl_trade`). I need to select, from 3.5 million records, only the trades that were opened in the current month up to the current time, and then manipulate the data (with different queries on the virtual table - this looks like a View). The problem here is that I need to `SUM` 3-4 columns and create a virtual column with the result (which looks like a CTE).
Eg: `tbl_trade` has columns: `profit`,`bonus` and `expenses`.
I need `SUM(profit)`,`SUM(bonus)`,`SUM(expenses)` and a new column `total` which will be equal to `SUM(profit)`+`SUM(bonus)`+`SUM(expenses)`.
PS. Rerunning the queries for `SUM` is not an option since I already have the result.
Thanks in advance!
|
Views can be indexed but CTEs can't. So this is one important point.
CTEs work excellently on a `tree hierarchy`, i.e. recursive queries.
Also, consider views when dealing with complex queries. A view is a physical object in the database (though it does not store data physically) and can be used in multiple queries, thus providing flexibility and a centralized approach. CTEs, on the other hand, are temporary and are created when they are used; that's why they are called `inline views`.
**Update**
According to your updated question, a view will be the right choice. Dealing with 3.5 million rows in a CTE will create extra overhead on TempDB, which will eventually slow down SQL Server performance. Remember, a CTE is a disposable view, hence no statistics are stored and you can't create indexes either. It is just like a subquery.
|
Both will be interpreted exactly the same by the plan optimizer; they are just different things.
A view can be used on its own. It can encapsulate complex statements into a simpler query.
A CTE, by contrast, is mostly used to write cleaner code with less redundancy in procedures/views, for example. You can use a CTE for recursive queries too, which is a very powerful feature!
I hope this helps to clarify things.
|
SQL - CTE vs VIEW
|
[
"",
"sql",
"sql-server",
"common-table-expression",
"sql-view",
""
] |
I'm looking for a way to fire a trigger based on a condition, and below is my SQL. How do I include an `IF` statement in this trigger so it fires if `ac_mail = '1@mail.com'`?
```
CREATE TRIGGER username
ON temp
AFTER INSERT
AS
DECLARE @var1 int ,@var2 char (100)
select @var1 = al_id, @var2 = ac_mail
From temp
BEGIN
INSERT INTO user (v_id,v_m)
VALUES (@var1,@var2)
END
```
The trigger should only fire if `ac_mail = '1@mail.com'`
|
try this :-
```
CREATE TRIGGER username
ON temp
AFTER INSERT
AS
DECLARE @var1 int ,@var2 char (100)
Select @var1 = al_id
,@var2 = ac_mail
From inserted
IF @var2 = '1@mail.com'
BEGIN
INSERT INTO user (v_id,v_m)
VALUES (@var1,@var2)
END
```
|
Triggers can fire for *multiple* rows - so using scalar variables to capture the data is usually wrong. As is querying the base table rather than [`inserted`](https://msdn.microsoft.com/en-us/library/ms191300.aspx) - `inserted` contains just the rows that caused the trigger to fire, whereas you're querying temp for any random row. And finally, of course, as I've said twice now, multiple rows. So you don't want an if because *some* of those rows might match your condition and others might *not* match.
I think you want:
```
CREATE TRIGGER username
ON temp
AFTER INSERT
AS
INSERT INTO user (v_id,v_m)
select al_id,ac_mail from inserted where ac_mail = '1@mail.com'
```
See how much simpler it is?
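The same set-based filtering can be demonstrated with Python's built-in `sqlite3`; SQLite has no `inserted` pseudo-table but its per-row `WHEN` clause plays the same filtering role (table names adapted because `temp` and `user` are awkward identifiers in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE temp_tbl (al_id INTEGER, ac_mail TEXT);
    CREATE TABLE users (v_id INTEGER, v_m TEXT);

    -- SQLite fires the trigger once per inserted row; WHEN filters
    -- just like the WHERE on "inserted" in the T-SQL answer.
    CREATE TRIGGER username AFTER INSERT ON temp_tbl
    WHEN NEW.ac_mail = '1@mail.com'
    BEGIN
        INSERT INTO users (v_id, v_m) VALUES (NEW.al_id, NEW.ac_mail);
    END;

    INSERT INTO temp_tbl VALUES (1, '1@mail.com'), (2, 'x@mail.com'), (3, '1@mail.com');
""")

print(conn.execute("SELECT * FROM users").fetchall())
```

Only the two matching rows are copied; the non-matching row is silently skipped, with no IF needed.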
|
Insert trigger with an IF statement
|
[
"",
"sql",
"sql-server",
""
] |
I have a table in the following structure:
```
id | att1 | att2 | att3
-----------------------
1 | a1 | b1 | c1
2 | a2 | b2 | c2
3 | a3 | b3 | c3
```
And I want to transpose the columns to become rows for each id. Like this:
```
id | attname | value
------------------------
1 | att1 | a1
1 | att2 | b1
1 | att3 | c1
2 | att1 | a2
2 | att2 | b2
2 | att3 | c2
3 | att1 | a3
3 | att2 | b3
3 | att3 | c3
```
I was reading up on the PIVOT function and wasn't sure if it would do the job or how to use it. Any help would be appreciated.
|
Do `UNION ALL`:
```
select id, 'att1' as attname, att1 as value from tablename
union all
select id, 'att2' as attname, att2 as value from tablename
union all
select id, 'att3' as attname, att3 as value from tablename
```
Note that `VALUE` is a reserved word in SQL, so if that's your real column name you need to double quote it, `"value"`.
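The `UNION ALL` unpivot is portable and easy to test; here is a runnable sketch with Python's built-in `sqlite3` using the question's sample data (`tablename` is the answer's placeholder):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tablename (id INTEGER, att1 TEXT, att2 TEXT, att3 TEXT);
    INSERT INTO tablename VALUES (1,'a1','b1','c1'), (2,'a2','b2','c2');
""")

# One branch per source column; each branch labels its column name.
rows = conn.execute("""
    SELECT id, 'att1' AS attname, att1 AS "value" FROM tablename
    UNION ALL
    SELECT id, 'att2', att2 FROM tablename
    UNION ALL
    SELECT id, 'att3', att3 FROM tablename
    ORDER BY id, attname
""").fetchall()

print(rows)
```

The column aliases only need to appear in the first branch; later branches inherit them.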
|
You can use unpivot for this task.
```
SELECT *
FROM yourtable
UNPIVOT (
   attvalue FOR attname
   IN (att1 AS 'att1', att2 AS 'att2', att3 AS 'att3')
);
```
|
SQL query transposing columns
|
[
"",
"sql",
"oracle11g",
""
] |
I have a schema with four tables Users, Groups, Memberships & MembershipControls
Memberships is a M:M between users & groups.
Membership controls specify the groups that a user must already be a member of to be a member of a group.
For example to be a member of the "Green" group a user must already be a member of either the "Blue" or "Yellow" groups.
When a user is a member of the "Green" group their membership is contingent upon them being a member of either the "Blue" or "Yellow" groups. If the user ceases to be a member of the "Blue" group their existing memberships remain; however, if they also cease to be a member of the "Yellow" group then their membership of "Green" should be deleted.
I am trying to work out the SQL that will delete the records from Memberships that are in violation of the membership controls.
<http://sqlfiddle.com/#!9/302d2>
Based up the fiddle above the following Memberships are valid:
(1, 1, 1)
(2, 1, 2)
(3, 1, 3)
(4, 1, 4)
If the middle two memberships above of green & blue were removed the only valid memberships would be:
(1, 1, 1)
i.e. if the Memberships table contained:
(1, 1, 1)
(1, 1, 4)
The last record would be invalid and should be deleted as it is in violation of the MembershipControls.
This is because the Membership Control specifies that to be a member of group: 4 you need to also be a member of group: 2 or group: 3
|
The sub query identifies the groups the user is entitled to be a member of and the parent query identifies the groups the user is currently a member of.
When this query is run against the fiddle above it identifies no records for deletion from memberships. If the middle two rows are removed from the memberships table (2, 1, 2) & (3, 1, 3) then the query identifies one row from memberships for deletion.
```
SELECT Memberships.*
FROM Memberships
INNER JOIN MembershipControls ON Memberships.group_id = MembershipControls.group_id
WHERE Memberships.user_id = 1
AND Memberships.group_id NOT IN
(
SELECT MembershipControls.group_id
FROM GROUPS
INNER JOIN Memberships ON Groups.id = Memberships.group_id
INNER JOIN MembershipControls ON Memberships.group_id = MembershipControls.criteria_id
WHERE Memberships.user_id = 1
GROUP BY MembershipControls.group_id
)
```
|
Try:
```
delete from memberships
where user_id in
(
select user_id
from memberships
group by user_id
having sum(case when group_id = 4 then 1 else 0 end) = 1
and sum(case when group_id = 2 then 1 else 0 end) = 0
and sum(case when group_id = 3 then 1 else 0 end) = 0
)
and group_id = 4
```
To put it simply, this deletes all rows from the memberships table where a user belongs to group 4, but not group 2 or 3 (and only their group 4 row).
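That conditional-sum `DELETE` can be exercised with Python's built-in `sqlite3` on a minimal reproduction of the invalid state from the question (user 1 in group 4 without its prerequisite groups 2 or 3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE memberships (id INTEGER, user_id INTEGER, group_id INTEGER);
    -- user 1 is in group 4 (Green) but in neither prerequisite group 2 nor 3
    INSERT INTO memberships VALUES (1, 1, 1), (2, 1, 4);
""")

# The HAVING clause finds users in group 4 with no row for groups 2 or 3;
# the outer WHERE removes only their group-4 rows.
conn.execute("""
    DELETE FROM memberships
    WHERE user_id IN (
        SELECT user_id
        FROM memberships
        GROUP BY user_id
        HAVING SUM(CASE WHEN group_id = 4 THEN 1 ELSE 0 END) = 1
           AND SUM(CASE WHEN group_id = 2 THEN 1 ELSE 0 END) = 0
           AND SUM(CASE WHEN group_id = 3 THEN 1 ELSE 0 END) = 0
    )
    AND group_id = 4
""")

print(conn.execute("SELECT id, user_id, group_id FROM memberships").fetchall())  # [(1, 1, 1)]
```

Only the violating group-4 membership is removed; the user's other memberships are untouched.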
|
SQL self referencing query
|
[
"",
"sql",
"activerecord",
""
] |
I am pulling my hair out trying to create a running / cumulative `median` of a partitioned value, in a chronological ordering. Basically I have a table:
```
create table "SomeData"
(
ClientId INT,
SomeData DECIMAL(10,2),
SomeDate TIMESTAMP
);
```
With some data:
```
INSERT INTO "SomeData" (ClientId, SomeData, SomeDate) VALUES
(1, 1, '1 Jan 2000'),
(1, 2, '2 Jan 2000'),
(1, 3, '3 Jan 2000'),
(1, 4, '4 Jan 2000'),
(2, 100, '1 Jan 2000'),
(2, 100, '2 Jan 2000'),
(2, 100, '3 Jan 2000'),
(2, 200, '4 Jan 2000'),
(2, 200, '5 Jan 2000'),
(2, 200, '6 Jan 2000'),
(2, 200, '7 Jan 2000');
```
And I need a running median partitioned by `ClientId`, ordered by `SomeDate`.
Basically, what I need to produce is this:
```
ClientId SomeDate Median of SomeData
1 "2000-01-01" 1.000
1 "2000-01-02" 1.500
1 "2000-01-03" 2.000
1 "2000-01-04" 2.500
2 "2000-01-01" 100.0
2 "2000-01-02" 100.0
2 "2000-01-03" 100.0
2 "2000-01-04" 100.0
2 "2000-01-05" 100.0
2 "2000-01-06" 150.0
2 "2000-01-07" 200.0
```
I am able to do this in a number of ways in PostgreSQL 9.x with the [`Aggregate_median` function](https://wiki.postgresql.org/wiki/Aggregate_Median), however this is proving to be difficult in Redshift, which only has an aggregate median
```
SELECT ClientId, SomeDate, median(SomeData) OVER (PARTITION BY ClientId ORDER BY SomeDate)
FROM "SomeData" xout
ORDER BY ClientId, SomeDate;
```
Running the above on Redshift however gives the error:
> ERROR: window specification should not contain frame clause and order-by for window function median
The median can be replaced with a manual correlated subquery back to the original table, however RedShift doesn't appear to support these either.
> ERROR: This type of correlated subquery pattern is not supported due to internal error
[Here are a bunch of fiddles](http://sqlfiddle.com/#!15/03c5b/3) which work in PostGres, none of which work in Redshift
At this point it looks like I'll need to pull the data into memory and [do this in code](https://stackoverflow.com/a/10657732/314291), but would be appreciative if this can be done in Redshift directly.
|
I wonder if you can do this with `nth_value()`:
```
SELECT ClientId, SomeDate,
NTH_VALUE(seqnum / 2) OVER (PARTITION BY ClientId ORDER BY SomeDate)
FROM (SELECT s.*,
COUNT(*) OVER (PARTITION BY ClientId ORDER BY SomeDate) as seqnum
FROM SomeData s
) s
ORDER BY ClientId, SomeDate;
```
As a note: that use of `COUNT(*)` instead of `ROW_NUMBER()` takes some getting used to.
|
I think the solution presented by @GordonLinoff is not correct, because it does not order the rows by the value you are trying to find the median of. The correct way, inspired by
[Moving Median, Mode in T-SQL](https://stackoverflow.com/questions/29542846/moving-median-mode-in-t-sql),
works on Redshift:
```
WITH CTE
AS
(
SELECT ClientId,
ROW_NUMBER() OVER (PARTITION BY ClientId ORDER BY SomeDate ASC) row_num,
SomeDate,
SomeData
FROM "SomeData"
)
SELECT A.SomeDate,
A.SomeData,
(SELECT MEDIAN(B.SomeData)
FROM CTE B
         WHERE B.ClientId = A.ClientId
           AND B.row_num BETWEEN 1 AND A.row_num
GROUP BY A.ClientId) AS median
FROM CTE A
```
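Since stock SQLite has no `MEDIAN` aggregate, a quick client-side cross-check of the expected running medians can be done in plain Python with `statistics.median` over an expanding per-client window (using the question's exact sample data):

```python
import sqlite3
from statistics import median

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SomeData (ClientId INTEGER, SomeData REAL, SomeDate TEXT);
    INSERT INTO SomeData VALUES
        (1, 1, '2000-01-01'), (1, 2, '2000-01-02'),
        (1, 3, '2000-01-03'), (1, 4, '2000-01-04'),
        (2, 100, '2000-01-01'), (2, 100, '2000-01-02'), (2, 100, '2000-01-03'),
        (2, 200, '2000-01-04'), (2, 200, '2000-01-05'), (2, 200, '2000-01-06'),
        (2, 200, '2000-01-07');
""")

# Grow one window per client and take its median after each row.
result, window = [], {}
for cid, val, day in conn.execute(
        "SELECT ClientId, SomeData, SomeDate FROM SomeData "
        "ORDER BY ClientId, SomeDate"):
    window.setdefault(cid, []).append(val)
    result.append((cid, day, median(window[cid])))

print(result)
```

The output reproduces the question's expected table, including the 150.0 midpoint on 2000-01-06, which is a handy oracle when validating the SQL rewrite.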
|
How can I achieve a windowed running median in Redshift?
|
[
"",
"sql",
"amazon-redshift",
""
] |
While removing duplicate rows with the same ID value, how do I remove the rows that have a null value in one particular column?
Note: there are other non-duplicate rows (e.g., ID 12 below) that have a NULL value and should still get selected in the result set.
Input table:
```
Id | sale_date | price
-----------------------------
11 20051020 22.1
11 NULL 20.1
12 NULL 20.1
13 20051020 20.1
```
Expected result:
```
Id | sale_date | price
-----------------------------
11 20051020 22.1
12 NULL 20.1
13 20051020 20.1
```
|
Assuming you have SQL Server 2008 or above, this will work for you. I use ROW\_NUMBER and assign values per ID starting at the max date. Any row numbered higher than 1 therefore has a date lower than the max date for that particular ID, so I delete rows where row\_num is greater than 1.
Check it out:
```
DECLARE @yourTable TABLE (ID INT,Sale_date DATE, Price FLOAT);
INSERT INTO @yourTable
VALUES (11,'20051020',22.1),
(11,NULL,20.1),
(12,NULL,20.1),
(13,'20051020',20.1);
WITH CTE
AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY sale_date DESC) AS row_num
FROM @yourTable
)
DELETE
FROM CTE
WHERE row_num > 1
SELECT *
FROM @yourTable
```
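The same dedupe can be sketched with Python's built-in `sqlite3` (SQLite 3.25+); SQLite cannot `DELETE` through a CTE, so this demo targets `rowid`s instead, relying on NULL sorting last under `ORDER BY ... DESC` just as in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourTable (ID INTEGER, sale_date TEXT, price REAL);
    INSERT INTO yourTable VALUES
        (11, '20051020', 22.1), (11, NULL, 20.1),
        (12, NULL, 20.1), (13, '20051020', 20.1);
""")

# Number rows per ID, newest date first; everything past rn = 1 is a dupe.
conn.execute("""
    DELETE FROM yourTable
    WHERE rowid IN (
        SELECT rid FROM (
            SELECT rowid AS rid,
                   ROW_NUMBER() OVER (PARTITION BY ID ORDER BY sale_date DESC) AS rn
            FROM yourTable
        ) WHERE rn > 1
    )
""")

print(conn.execute("SELECT * FROM yourTable ORDER BY ID").fetchall())
```

Only the NULL-dated duplicate for ID 11 is removed; the standalone NULL row for ID 12 survives, matching the expected result.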
|
Try this
```
SELECT * FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Sale_Date desc) AS ROW_NUM FROM AA1) A
WHERE ROW_NUM<2
```
|
How to remove duplicate ID rows. While removing, use the rows that has NULL value in another column
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I've tried flexing my Google-fu to no avail so here I am! Unfortunately I cannot change anything about these tables as they are coming out of an application that I have to report out of.
In SQL Server 2008, I'm trying to replace multiple values in one text string column (Table 1) with the value from another table (Table 2).
Thanks in advance!!
Table 1
```
id value
-------------
1 a1, a2, a3
2 a2, a3
3 a4
```
Table 2
```
id value
---------
a1 Value1
a2 Value2
a3 Value3
a4 Value4
```
Desired Output
```
id value
-----------------------------
1 Value1, Value2, Value3
2 Value2, Value3
3 Value4
```
|
I'm sorry for this solution in advance :) It does what you need though:
```
create table TableA(
id int,
string varchar(255)
)
create table table2(
id varchar , text varchar(255)
)
insert into tableA values(1,'a,b,c,d')
insert into tableA values(2,'e,f')
insert into table2 values('a', 'value1')
insert into table2 values('b', 'value2')
insert into table2 values('c', 'value3')
insert into table2 values('d', 'value4')
insert into table2 values('e', 'value5')
insert into table2 values('f', 'value6')
select id, left(myConcat,len(myConcat)-1) from (
select c.id, replace(replace(CAST(CAST('<i'+stuff((select * from(
SELECT A.[id] ,
Split.a.value('.', 'VARCHAR(1000)') AS String
FROM (SELECT [id],
CAST ('<M>' + REPLACE([string], ',', '</M><M>') + '</M>' AS XML) AS String
FROM TableA) AS A CROSS APPLY String.nodes ('/M') AS Split(a)) a
inner join table2 b on a.String = b.id
where a.id = c.id
FOR XML PATH ('')
),1,2,'') AS XML).query('/text') AS VARCHAR(1000)),'<text>',''),'</text>',',') myConcat
from TableA c
group by c.id
) d
```
|
This site has a delimited text split function <http://www.sqlservercentral.com/articles/Tally+Table/72993/>
Use that function to split your values out into a temp table. Replace the values in your temp table with the new values. Then use `STUFF..FOR XML` to combine the records back together and update your table.
One query with a few cte's should be able to handle all of this after you add the function to your database.
Example using [**Sql Fiddle**](http://sqlfiddle.com/#!3/767ba/1)
|
SQL Server 2008 - Replace Text Values in Column with Values from Another Table
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a simple question related to grouping rows by date with some "narrative" periods.
Let's assume that I have very simple table with articles. ID which is PK, title and date. The date column is datetime / timestamp.
I would like to group somehow my results so I can present them in the view like
**Written today:**
* art 944
* art 943
**Written in last 7 days:**
* art 823
* art 743
**Written in last 30 days:**
* art 520
* art 519
* art 502
**Older:**
* art 4
* art 3
* art 1
Can I achieve that in just one single query with some *group by* statements?
|
Gordon should have credit for writing this out. But I think you probably just want to append a column with the appropriate descriptor and probably sort them in the order you'd like to see them.
```
select
title,
case when date = curdate() then 'today'
when date >= curdate() - interval 6 day then 'last 7 days'
when date >= curdate() - interval 29 day then 'last 30 days'
else 'older'
       end as bucket
from ...
order by
case
when date = curdate() then 1
when date >= curdate() - interval 6 day then 2
when date >= curdate() - interval 29 day then 3
else 4
end,
title ...
```
It looks like you didn't have the titles in alphabetical order. If you want them sorted by age then remove the case expression and just use the date value.
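The bucketing-plus-ordering idea can be demonstrated with Python's built-in `sqlite3`, substituting SQLite's `date('now', ...)` for MySQL's `curdate() - interval n day` (article titles and ages are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE articles (title TEXT, d TEXT);
    INSERT INTO articles VALUES
        ('art 944', date('now')),
        ('art 823', date('now', '-3 day')),
        ('art 520', date('now', '-20 day')),
        ('art 4',   date('now', '-90 day'));
""")

# The CASE picks the first matching (i.e. narrowest) bucket;
# sorting by date DESC also sorts the buckets newest-first.
rows = conn.execute("""
    SELECT title,
           CASE WHEN d = date('now')             THEN 'today'
                WHEN d >= date('now', '-6 day')  THEN 'last 7 days'
                WHEN d >= date('now', '-29 day') THEN 'last 30 days'
                ELSE 'older' END AS bucket
    FROM articles
    ORDER BY d DESC
""").fetchall()

print(rows)
```

ISO-formatted date strings compare correctly as text, which is what makes the `>=` bucket boundaries work here.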
|
You can use a `case` statement for the grouping column. Something like this:
```
select (case when date = curdate() then 'today'
when date >= curdate() - interval 6 day then 'last 7 days'
when date >= curdate() - interval 29 day then 'last 30 days'
else 'older'
end),
count(*),
. . .
from . . .
group by (case when date = curdate() then 'today'
when date >= curdate() - interval 6 day then 'last 7 days'
when date >= curdate() - interval 29 day then 'last 30 days'
else 'older'
end)
```
|
Group query results by "today", "in last 7 days", "in last month" and "older"
|
[
"",
"mysql",
"sql",
"database",
"group-by",
""
] |
I am trying to join two tables and only select the dealers that did not have their promo code used on any order.
How can I do this?
I'm trying this below, but it's not working right. In the example I want to get just Bob, since his `promo_code` hasn't been used in any orders.
```
SELECT d.`name`
FROM z_dealer d
LEFT OUTER JOIN z_order o ON (d.promo_code = o.promo_code)
AND o.promo_code IS NULL
```
Here are my tables...
```
mysql> select * from z_dealer;
+----+------+------------+
| id | name | promo_code |
+----+------+------------+
| 1 | John | holiday |
| 2 | Suzy | special |
| 3 | Bob | laborday |
+----+------+------------+
mysql> Select * from z_order;
+----+-------+------------+
| id | total | promo_code |
+----+-------+------------+
| 1 | 10 | holiday |
| 2 | 20 | special |
| 3 | 15 | holiday |
| 4 | 45 | special |
+----+-------+------------+
```
|
```
SELECT d.`name` FROM z_dealer d LEFT JOIN z_order o ON (d.promo_code = o.promo_code) WHERE o.promo_code IS NULL
```

|
Have you tried an `INNER JOIN`? Or you can try `NOT IN` like this:
```
SELECT d.name
FROM z_dealer d
WHERE d.promo_code not in( SELECT promo_code FROM z_order)
```
|
How do I select all the dealers that did not have an order?
|
[
"",
"mysql",
"sql",
""
] |
I have a call in SQL Server 2012 that I need to have working on Server 2005 and 2008.
`FORMAT(Date, 'yyyyMM') = 201501` is what I am currently using; I need to use DATEPART (or similar) on older versions of SQL Server but cannot see a way of doing it easily.
Any help would be very much appreciated.
|
Earlier versions of SQL Server don't have `FORMAT`, so you'd need to do something like
```
YEAR(Date) = 2015 AND MONTH(Date) = 1
```
or something like that to check the two conditions
|
Should work with SQL Server 2005 and 2008
```
DECLARE @Date DATETIME;
SET @Date = GETDATE(); -- DECLARE with an inline initializer needs SQL Server 2008+
SELECT CAST(YEAR(@Date) AS VARCHAR(4))
     + RIGHT('00' + CAST(MONTH(@Date) AS VARCHAR(2)), 2)
-- RESULT: 201606
```
If you are going to use it in where clause you can do something like...
```
WHERE YEAR(DateColumn) = 2016
AND MONTH(DateColumn) = 6
```
|
SQL Server DATEPART for year and month
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am going to perform a table-wide update on a huge table (200+ million records), so I am going to populate the data via SELECT into a separate table, then drop the original and rename the new one.
[In one of the articles](https://stackoverflow.com/questions/3711217/fastest-way-to-update-120-million-records) someone mentioned that it is better to create the new table with the same name in a temporary schema (e.g. 'clone') and switch it to the used one (e.g. 'dbo'), rather than to use the original schema with a temporary name and call sp\_rename after the data is in place and the old table is dropped.
I was looking into this, but I cannot think of anything why the schema switch is better than the sp\_rename. Can anyone find any good reason why is better to use the first or the second approach?
Thanks!
EDIT: I want to update the values in a specific column
EDIT2: Ultimately my question is, if I decide to go down the way of creating a new table to transfer data to which alternative to use:
```
CREATE TABLE dbo.newTable
...
DROP TABLE dbo.originalTable
EXEC sp_rename N'dbo.newTable', N'dbo.originalTable'
```
OR
```
CREATE TABLE clone.originalTable
...
DROP TABLE dbo.originalTable
ALTER SCHEMA dbo TRANSFER clone.originalTable
```
|
Based on your edited answer the quickest way to do that is:
* If you have to add the new column with a default value:
```
ALTER TABLE {TABLENAME}
ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL}
CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE}
[WITH VALUES]
```
and then drop the old column from the table.
```
ALTER TABLE {TABLENAME} DROP COLUMN {OLD COLUMN}
```
* If you have to update table column based calculated values
1. Disable index on the column which you are updating
2. Create index on the column which are in WHERE clause
3. Update statistics
4. Use WITH(NOLOCK) table hint [if you are fine with dirty read]
**Update**
As per edit 2, your first statement is about changing the table name and your second statement is about changing the schema. They are different things and neither is about moving data or updating values. In this case, changing the schema would be the best bet.
|
By the way, I would suggest that you NOT populate the table by using `SELECT * INTO`. This will lock your source table for everyone else during the insertion, which could take quite some time.
Just a suggestion, try this instead:
```
SELECT TOP 0 *
INTO [newTable]
FROM [oldTable]
INSERT INTO [newTable]
SELECT * FROM [oldTable]
```
By the way, you can use `sp_rename` to rename your table to another name. But it won't change the schema. If you try to change the schema too it will produce a buggy table name.
You can instead try to move the table to another name. Example below:
```
EXEC sp_rename N'oldTable', N'oldTable_Backup'
EXEC sp_rename N'newTable', N'oldTable'
```
Hopefully this will help you.
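The copy-then-swap sequence can be demonstrated end to end with Python's built-in `sqlite3` (`ALTER TABLE ... RENAME TO` plays the role of `sp_rename`; table names and the `upper()` "transformation" are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE oldTable (id INTEGER, val TEXT);
    INSERT INTO oldTable VALUES (1, 'a'), (2, 'b');

    -- build the replacement and copy the (transformed) data across
    CREATE TABLE newTable (id INTEGER, val TEXT);
    INSERT INTO newTable SELECT id, upper(val) FROM oldTable;

    -- swap: keep the old data under a backup name, promote the new table
    ALTER TABLE oldTable RENAME TO oldTable_Backup;
    ALTER TABLE newTable RENAME TO oldTable;
""")

print(conn.execute("SELECT * FROM oldTable").fetchall())  # [(1, 'A'), (2, 'B')]
```

Queries against `oldTable` now see the updated data, and `oldTable_Backup` can be dropped once verified, mirroring the rename-swap approach in the answer.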
|
Rename table via sp_rename or ALTER SCHEMA
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |