| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Here is the code:
```
CREATE TABLE audit_trail (
old_email TEXT NOT NULL,
new_email TEXT NOT NULL
);
INSERT INTO audit_trail(old_email, new_email)
VALUES ('harold_gim@yahoo.com', 'hgimenez@hotmail.com'),
('hgimenez@hotmail.com', 'harold.gimenez@gmail.com'),
('harold.gimenez@gmail.com', 'harold@heroku.com'),
('foo@bar.com', 'bar@baz.com'),
('bar@baz.com', 'barbaz@gmail.com');
WITH RECURSIVE all_emails AS (
SELECT old_email, new_email
FROM audit_trail
WHERE old_email = 'harold_gim@yahoo.com'
UNION
SELECT at.old_email, at.new_email
FROM audit_trail at
JOIN all_emails a
ON (at.old_email = a.new_email)
)
SELECT * FROM all_emails;
old_email | new_email
--------------------------+--------------------------
harold_gim@yahoo.com | hgimenez@hotmail.com
hgimenez@hotmail.com | harold.gimenez@gmail.com
harold.gimenez@gmail.com | harold@heroku.com
(3 rows)
select old_email, new_email into iter1
from audit_trail where old_email = 'harold_gim@yahoo.com';
select * from iter1;
-- old_email | new_email
-- ----------------------+----------------------
-- harold_gim@yahoo.com | hgimenez@hotmail.com
-- (1 row)
select a.old_email, a.new_email into iter2
from audit_trail a join iter1 b on (a.old_email = b.new_email);
select * from iter2;
-- old_email | new_email
-- ----------------------+--------------------------
-- hgimenez@hotmail.com | harold.gimenez@gmail.com
-- (1 row)
select * from iter1 union select * from iter2;
-- old_email | new_email
-- ----------------------+--------------------------
-- hgimenez@hotmail.com | harold.gimenez@gmail.com
-- harold_gim@yahoo.com | hgimenez@hotmail.com
-- (2 rows)
```
As you can see, the recursive code gives the result in the right order, but the non-recursive code does not.
They both use `union`; why the difference?
|
Basically, your query is incorrect to begin with. Use **`UNION ALL`**, not `UNION` or you would incorrectly remove duplicate entries. (There is nothing to say the trail cannot switch back and forth between the same emails.) And `UNION` is highly likely to reorder rows.
The Postgres implementation for `UNION ALL` typically returns values in the sequence as appended - as long as you do *not* add `ORDER BY` at the end or do anything else with the result. But there is no formal guarantee, and since the advent of `Parallel Append` plans in Postgres 11, this can actually break. See this related post:
* [Are results from UNION ALL clauses always appended in order?](https://dba.stackexchange.com/q/316818/3684)
Be aware though, that each `SELECT` returns rows in arbitrary order unless `ORDER BY` is appended. There is no natural order in tables.
So this usually works:
```
SELECT * FROM iter1
UNION ALL -- union all!
SELECT * FROM iter2;
```
To get a **reliable sort order**, and "simulate the record of growth", you can track levels like this:
```
WITH RECURSIVE all_emails AS (
SELECT *, 1 AS lvl
FROM audit_trail
WHERE old_email = 'harold_gim@yahoo.com'
UNION ALL -- union all!
SELECT t.*, a.lvl + 1
FROM all_emails a
JOIN audit_trail t ON t.old_email = a.new_email
)
TABLE all_emails
ORDER BY lvl;
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=62489a68ce1e98f7bd8a309dd33158a1)*
Old [sqlfiddle](http://sqlfiddle.com/#!15/dbd06/4)
Aside: if `old_email` is not defined `UNIQUE` in some way, you can get multiple trails. You would need a unique column (or combination of columns) to keep it unambiguous. If all else fails you can (ab-)use the internal tuple ID `ctid` for the purpose of telling trails apart. But you should rather use your own columns. (Added example in the fiddle.)
* [In-order sequence generation](https://stackoverflow.com/questions/17500013/in-order-sequence-generation/17503095#17503095)
Consider:
* [How to return records in correct order in PostgreSQL](https://stackoverflow.com/questions/13895225/how-to-return-records-in-correct-order-in-postgresql/13897203#13897203)
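Since `TABLE all_emails` and the fiddles above are Postgres-specific, here is the level-tracking query as a self-contained sketch using Python's `sqlite3` (SQLite also supports `WITH RECURSIVE`; the `TABLE` shorthand is replaced by a plain `SELECT`, and the final `ORDER BY lvl` provides the guaranteed order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit_trail (old_email TEXT NOT NULL, new_email TEXT NOT NULL);
INSERT INTO audit_trail VALUES
  ('harold_gim@yahoo.com',     'hgimenez@hotmail.com'),
  ('hgimenez@hotmail.com',     'harold.gimenez@gmail.com'),
  ('harold.gimenez@gmail.com', 'harold@heroku.com');
""")

rows = conn.execute("""
WITH RECURSIVE all_emails AS (
   SELECT old_email, new_email, 1 AS lvl
   FROM   audit_trail
   WHERE  old_email = 'harold_gim@yahoo.com'
   UNION ALL                           -- union all, as recommended above
   SELECT t.old_email, t.new_email, a.lvl + 1
   FROM   all_emails a
   JOIN   audit_trail t ON t.old_email = a.new_email
)
SELECT old_email, new_email, lvl FROM all_emails
ORDER  BY lvl                          -- the only reliable ordering
""").fetchall()
print(rows)
```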
|
Ordering is never preserved after any operation in any reasonable database. If you want the result set in a particular order, use `ORDER BY`. Period.
This is *especially* true after a `UNION`. `UNION` removes duplicates and that operation is going to change the ordering of the rows, in all likelihood.
|
Is order preserved after UNION in PostgreSQL?
|
[
"",
"sql",
"postgresql",
"sql-order-by",
"union",
""
] |
I have a list of deals, and I need to check whether they exist in my `DEALS` table, and for each existing deal, display its properties from that table.
I use that query :
```
select * from deals
where run_id = 2550
and deal_id
in ('4385601', '4385602', ...);
```
However, I also want to know which deals don't exist in that table. How can I achieve this ?
|
Put all your deal ids in a separate lookup table and use this:
```
select t2.*,
       case when t2.deal_id is null then 'does not exist' else 'exists' end as status
from lookup_table as t1
left join deals as t2
  on t1.deal_id = t2.deal_id
 and t2.run_id = 2550
where t1.deal_id in ('4385601', '4385602', ...)
;
```
|
The short answer is that you can't. Any result row has to come from a row that exists somewhere.
So the answer is to create something somewhere that has the full list you want to check. If you can have a real permanent table listing every valid deal, then that's very simple...
```
SELECT
all_possible_deals.deal_id
FROM
all_possible_deals
LEFT JOIN
deals
ON deals.run_id = 2550
AND deals.deal_id = all_possible_deals.deal_id
WHERE
deals.deal_id IS NULL -- This is NULL if it exists in [all_possible_deals], but not in [deals]
```
But it may not be practical (or perhaps even possible) to create and/or maintain such a table.
In such a case you can use an inline view in place of your `IN (?,?,?)` list, and use that as the template to left-join onto.
```
SELECT
all_possible_deals.deal_id
FROM
(
SELECT '4385601' AS deal_id FROM dual
UNION ALL
SELECT '4385602' AS deal_id FROM dual
UNION ALL
SELECT '4385603' AS deal_id FROM dual
)
all_possible_deals
LEFT JOIN
deals
ON deals.run_id = 2550
AND deals.deal_id = all_possible_deals.deal_id
WHERE
deals.deal_id IS NULL -- This is NULL if it exists in [all_possible_deals], but not in [deals]
```
This involves a little bit of code to programmatically generate the `UNION ALL` blocks, but it functionally does what you're asking.
Either way, the principle is always going to be the same.
You can find what is in tableA but not in tableB using a LEFT JOIN and checking for a miss (the NULL check).
But SQL can't create the `rows that are not there`, so the table you LEFT JOIN on to *must* contain the full list of potential existing/missing values.
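As a minimal runnable sketch of the left-join anti-join described above, using Python's `sqlite3` (the inline `VALUES` CTE plays the role of `all_possible_deals`, the table contents are made up, and the Oracle-specific `FROM dual` is dropped):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE deals (run_id INTEGER, deal_id TEXT);
INSERT INTO deals VALUES (2550, '4385601'), (2550, '4385603');
""")

# The inline list of ids to check plays the role of all_possible_deals.
missing = conn.execute("""
WITH all_possible_deals(deal_id) AS (
    VALUES ('4385601'), ('4385602'), ('4385603')
)
SELECT all_possible_deals.deal_id
FROM   all_possible_deals
LEFT JOIN deals
       ON  deals.run_id  = 2550
       AND deals.deal_id = all_possible_deals.deal_id
WHERE  deals.deal_id IS NULL        -- no matching row in deals
""").fetchall()
print(missing)
```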
|
How can I display non existing records?
|
[
"",
"sql",
"oracle",
""
] |
So I don't know if this is possible, but I would like to return the name of the column which has the closest value to my query.
So this is my table:
```
--------------------------
| ID | D1 | D2 | D3 | D4 |
--------------------------
| 1 | 0 | 10 | 22 | 50 |
--------------------------
```
I would like to do a query where I am searching in D1 to D4 for the closest value of 20, in this case D3
Then I would like to return D3 and the value in it.
So my answer should be:
```
------
| D3 |
------
| 22 |
------
```
Is this by any way possible?
|
Not easily. Since you've got those as different fields in the same record, you'll have to use very ugly queries:
```
SELECT source, val, ABS(20 - val) AS diff
FROM (
SELECT 'd1' AS source, d1 AS val FROM foo
UNION ALL
SELECT 'd2', d2 FROM foo
UNION ALL
SELECT 'd3', d3 FROM foo
UNION ALL
SELECT 'd4', d4 FROM foo
) AS child
ORDER BY diff ASC -- smallest difference = closest value
LIMIT 1
```
You should [normalize](https://en.wikipedia.org/wiki/Database_normalization) your table, which would eliminate the `union` business entirely and leave only the outer query.
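A runnable sketch of this unpivot-and-sort approach using Python's `sqlite3` with the question's row; sorting the absolute difference ascending picks the closest column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (id INTEGER, d1 INTEGER, d2 INTEGER, d3 INTEGER, d4 INTEGER);
INSERT INTO foo VALUES (1, 0, 10, 22, 50);
""")

best = conn.execute("""
SELECT source, val, ABS(20 - val) AS diff
FROM (
    SELECT 'd1' AS source, d1 AS val FROM foo
    UNION ALL SELECT 'd2', d2 FROM foo
    UNION ALL SELECT 'd3', d3 FROM foo
    UNION ALL SELECT 'd4', d4 FROM foo
) AS child
ORDER BY diff ASC    -- smallest difference first
LIMIT 1
""").fetchone()
print(best)          # column name, value, difference
```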
|
You can do this with a complicated `case`:
```
select (case least(abs(d1 - 20), abs(d2 - 20), abs(d3 - 20), abs(d4 - 20))
when abs(d1 - 20) then 'd1'
when abs(d2 - 20) then 'd2'
when abs(d3 - 20) then 'd3'
when abs(d4 - 20) then 'd4'
end)
from mytable;
```
Note: This should have better performance than normalizing the data using `union all`. However, as a comment on the data structure, you probably want to store the values in separate rows rather than separate columns. Having separate columns with essentially the same values usually suggests the need for a junction table.
|
return column name with closest value
|
[
"",
"mysql",
"sql",
"sqlite",
""
] |
I have this table:
```
-----------------------
summonerId | timestamp
-----------------------
253222 | 14395235091096
929112 | 14395235091056
(...)
```
I want to update the row with the lower timestamp but I can't, when I do this
```
UPDATE summoners_shell
SET
summonerId = ".$s.",
timestamp = ".$time."
WHERE timestamp = (SELECT MIN(timestamp))
```
It updates all rows! Why? How do I do what I want?
|
Your subquery has no `FROM` clause, so `MIN(timestamp)` is evaluated against each row's own `timestamp`, and the condition is true for every row - that is why all rows get updated. (And MySQL would not let you simply add `FROM summoners_shell` to a subquery in an `UPDATE` of the same table; that raises error 1093.)
Just use a `JOIN` against a derived table instead
```
UPDATE summoners_shell AS s1
JOIN (SELECT MIN(timestamp) AS mt FROM summoners_shell) AS mt ON (s1.timestamp = mt.mt)
SET
summonerId = ".$s.",
timestamp = ".$time."
```
|
If the missing `FROM` clause is not just a typo, adding it is probably what you need (note, though, that MySQL rejects a subquery on the `UPDATE`'s own target table with error 1093, so this form works in other databases but not in MySQL):
```
UPDATE summoners_shell
SET summonerId=".$s.",
timestamp=".$time."
WHERE timestamp=(
SELECT MIN(timestamp)
FROM
summoners_shell
)
```
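As a runnable check of the subquery form: SQLite, unlike MySQL, does allow the `UPDATE`'s target table inside the `WHERE` subquery, so it can be demonstrated with Python's `sqlite3`. The concrete replacement values below stand in for the `".$s."` / `".$time."` placeholders and are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE summoners_shell (summonerId INTEGER, timestamp INTEGER);
INSERT INTO summoners_shell VALUES
  (253222, 14395235091096),
  (929112, 14395235091056);
""")

# Hypothetical new values standing in for the ".$s." / ".$time." placeholders.
conn.execute("""
UPDATE summoners_shell
SET    summonerId = ?, timestamp = ?
WHERE  timestamp = (SELECT MIN(timestamp) FROM summoners_shell)
""", (111111, 14395235091000))

rows = conn.execute(
    "SELECT * FROM summoners_shell ORDER BY summonerId").fetchall()
print(rows)   # only the row with the lowest timestamp was replaced
```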
|
Update row with minimum value sql
|
[
"",
"mysql",
"sql",
""
] |
I have this dataset:
```
id uid follows_uid status
1 1 2 ACTIVE
2 1 3 ACTIVE
3 3 1 ACTIVE
4 4 1 ACTIVE
5 2 1 ACTIVE
```
on giving `uid` I want to calculate how many users are following, and how many are followed by (the given user).
Result set will be:
```
following followers
2 3
```
and here is the query which does the work:
```
SELECT COUNT(*) as following,
(SELECT COUNT(*) FROM user_followers where follows_uid = 1 ) as followers
FROM user_followers
WHERE uid = 1 and `status` = 'ACTIVE'
```
Now the question is: isn't there any other way to get this done, or is this the best way to achieve it?
|
If you have separate indexes on `uid` and `follows_uid`, then I believe using subqueries as you did is the fastest way to retrieve the separate counts because each query will take advantage of an index to retrieve the count.
|
Here's another way of achieving it.
```
select following.*, followers.* from
(select count(uid) from user_followers where uid = 1) following,
(select count(follows_uid) from user_followers where follows_uid = 1) followers;
```
And, to answer your question, your subquery approach is, in fact, the best way to achieve it. As pointed out by @FuzzyTree, you could use indexes to optimise your performance.
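A runnable check of the scalar-subquery form using Python's `sqlite3` and the question's sample rows (the `status` filter is applied only to the `following` count, mirroring the original query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_followers (id INTEGER, uid INTEGER, follows_uid INTEGER, status TEXT);
INSERT INTO user_followers VALUES
  (1, 1, 2, 'ACTIVE'), (2, 1, 3, 'ACTIVE'), (3, 3, 1, 'ACTIVE'),
  (4, 4, 1, 'ACTIVE'), (5, 2, 1, 'ACTIVE');
""")

row = conn.execute("""
SELECT (SELECT COUNT(*) FROM user_followers
        WHERE uid = 1 AND status = 'ACTIVE')  AS following,
       (SELECT COUNT(*) FROM user_followers
        WHERE follows_uid = 1)                AS followers
""").fetchone()
print(row)   # (following, followers)
```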
|
Count 2 columns on different conditions
|
[
"",
"mysql",
"sql",
"social-networking",
""
] |
I have a table with a category tree stored in the column `category`, which is a varchar. The character "-" acts as a parent-child separator between category levels, but it is part of the string, like this (simplified for illustration purposes):
```
categoryid category
---------- --------
1 Colors
2 Colors-Red
3 Colors-Red-Bright
4 Colors-Red-Medium
5 Colors-Red-Dark
6 Colors-Red-Dark-saturated
7 Colors-Red-Dark-unsaturated
8 Temperatures
9 Temperatures-cold
10 Temperatures-cold-freezing
11 Temperatures-cold-mild
12 Temperatures-hot
13 Temperatures-hot-burning
14 Temperatures-hot-burning-1st degree
15 Temperatures-hot-burning-2nd degree
```
I need a query that would return me ONLY those categories that don't have any existing "child" category. So this query should return only:
```
categoryid category
---------- --------
3 Colors-Red-Bright
4 Colors-Red-Medium
6 Colors-Red-Dark-saturated
7 Colors-Red-Dark-unsaturated
10 Temperatures-cold-freezing
11 Temperatures-cold-mild
14 Temperatures-hot-burning-1st degree
15 Temperatures-hot-burning-2nd degree
```
Any ideas how to accomplish this?
Many thanks!
|
Here is one method:
```
select c.*
from categories c
where not exists (select 1
from categories c2
where c2.category like concat(c.category, '-%')
);
```
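A runnable check of this method using Python's `sqlite3` on a subset of the sample data (SQLite's `||` operator replaces MySQL's `concat`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories (categoryid INTEGER, category TEXT);
INSERT INTO categories VALUES
  (1, 'Colors'), (2, 'Colors-Red'),
  (3, 'Colors-Red-Bright'), (5, 'Colors-Red-Dark');
""")

leaves = conn.execute("""
SELECT c.*
FROM   categories c
WHERE  NOT EXISTS (SELECT 1
                   FROM   categories c2
                   WHERE  c2.category LIKE c.category || '-%')
ORDER  BY c.categoryid
""").fetchall()
print(leaves)   # only the categories with no child
```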
|
You can do it like this:
```
select * from test t
where not exists (
select * from test tt
where tt.id <> t.id AND left(tt.name,length(t.name))=t.name
)
```
The idea is to truncate the longer string to the length of the shorter one and see whether the results match.
[Demo.](http://www.sqlfiddle.com/#!9/00986/2)
|
Query to select only child categories where the category tree/path is in a varchar column
|
[
"",
"mysql",
"sql",
""
] |
I am having an issue where if I write to a table (**using Linq-to-SQL**) which is a dependency of a view, and then immediately turn around and query that view to check the impact of the write (using a new connection to the DB, and hence a new data context), the impact of the write doesn't show up immediately but takes up to a few seconds to appear. This only happens occasionally (perhaps `10-20` times per `10,000` or so writes).
This is the definition of the view:
```
CREATE VIEW [Position].[Transactions]
WITH SCHEMABINDING
AS
(
SELECT
Account,
Book,
TimeAPIClient AS DateTimeUtc,
BaseCcy AS Currency,
ISNULL(QuantityBase, 0) AS Quantity,
ValueDate AS SettleDate,
ISNULL(CAST(0 AS tinyint), 0) AS TransactionType
FROM Trades.FxSpotMF
WHERE IsCancelled = 0
UNION ALL
SELECT
Account,
Book,
TimeAPIClient AS DateTimeUtc,
QuoteCcy AS Currency,
ISNULL(-QuantityBase * Rate, 0) AS Quantity,
ValueDate AS SettleDate,
ISNULL(CAST(0 AS tinyint), 0) AS TransactionType
FROM Trades.FxSpotMF
WHERE IsCancelled = 0
UNION ALL
SELECT
Account,
Book,
ExecutionTimeUtc AS DateTimeUtc,
BaseCcy AS Currency,
ISNULL(QuantityBase, 0) AS Quantity,
ValueDate AS SettleDate,
ISNULL(CAST(1 AS tinyint), 1) AS TransactionType
FROM Trades.FxSpotManual
WHERE IsCancelled = 0
UNION ALL
SELECT
Account,
Book,
ExecutionTimeUtc AS DateTimeUtc,
QuoteCcy AS Currency,
ISNULL(-QuantityBase * Rate, 0) AS Quantity,
ValueDate AS SettleDate,
ISNULL(CAST(1 AS tinyint), 1) AS TransactionType
FROM Trades.FxSpotManual
WHERE IsCancelled = 0
UNION ALL
SELECT
Account,
Book,
ExecutionTimeUtc AS DateTimeUtc,
BaseCcy AS Currency,
ISNULL(SpotQuantityBase, 0) AS Quantity,
SpotValueDate AS SettleDate,
ISNULL(CAST(2 AS tinyint), 2) AS TransactionType
FROM Trades.FxSwap
UNION ALL
SELECT
Account,
Book,
ExecutionTimeUtc AS DateTimeUtc,
QuoteCcy AS Currency,
ISNULL(-SpotQuantityBase * SpotRate, 0) AS Quantity,
SpotValueDate AS SettleDate,
ISNULL(CAST(2 AS tinyint), 2) AS TransactionType
FROM Trades.FxSwap
UNION ALL
SELECT
Account,
Book,
ExecutionTimeUtc AS DateTimeUtc,
BaseCcy AS Currency,
ISNULL(ForwardQuantityBase, 0) AS Quantity,
ForwardValueDate AS SettleDate,
ISNULL(CAST(2 AS tinyint), 2) AS TransactionType
FROM Trades.FxSwap
UNION ALL
SELECT
Account,
Book,
ExecutionTimeUtc AS DateTimeUtc,
QuoteCcy AS Currency,
ISNULL(-ForwardQuantityBase * ForwardRate, 0) AS Quantity,
ForwardValueDate AS SettleDate,
ISNULL(CAST(2 AS tinyint), 2) AS TransactionType
FROM Trades.FxSwap
UNION ALL
SELECT
Account,
c.Book,
TimeUtc AS DateTimeUtc,
Currency,
ISNULL(Amount, 0) AS Quantity,
SettleDate,
ISNULL(CAST(3 AS tinyint), 3) AS TransactionType
FROM Trades.Commission c
JOIN Trades.Payment p
ON c.UniquePaymentId = p.UniquePaymentId
AND c.Book = p.Book
)
```
while this is the query generated by Linq-to-SQL to write to one of the underlying tables:
```
INSERT INTO [Trades].[FxSpotMF] ([UniqueTradeId], [BaseCcy], [QuoteCcy], [ValueDate], [Rate], [QuantityBase], [Account], [Book], [CounterpartyId], [Counterparty], [ExTradeId], [TimeAPIClient], [TimeAPIServer], [TimeExchange], [TimeHandler], [UniqueOrderId], [IsCancelled], [ClientId], [SequenceId], [ExOrdId], [TradeDate], [OrderCycleId], [CycleIndex])
VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8, @p9, @p10, @p11, @p12, @p13, @p14, @p15, @p16, @p17, @p18, @p19, @p20, @p21, @p22)
```
and this is the query generated by Linq-to-SQL to check the effect of the write:
```
SELECT
SUM([t0].[Quantity]) AS [Item2],
[t0].[Currency] AS [Item1]
FROM [Position].[Transactions] AS [t0]
WHERE ([t0].[Book] = @p0)
AND ([t0].[DateTimeUtc] < @p1)
GROUP BY [t0].[Currency]
```
Also, this is the Linq-to-SQL code that generates the write (using F# type providers):
```
type Schema = Microsoft.FSharp.Data.TypeProviders.DbmlFile<"TradeDb.dbml", ContextTypeName="TradeDb">
use db = new Schema.TradeDb(connectionString)
let trade = new Schema.Trades_FxSpotMF()
(* omitted: set object properties corresponding to column values here... *)
db.Trades_FxSpotMF.InsertOnSubmit(trade)
db.SubmitChanges()
```
while this is the corresponding Linq-to-SQL that generates the read:
```
use db = new Schema.TradeDb(connectionString)
query { for t in db.Position_Transactions do
where ( t.Book = book &&
t.DateTimeUtc < df.MaxExecutionTimeExcl
)
groupBy t.Currency into group
let total = query { for x in group do sumBy x.Quantity }
select (group.Key, total)
}
|> Map.ofSeq
```
I would have thought `System.Data.Linq.DataContext.SubmitChanges()` would only return once the write transaction was complete, and that any subsequent query of the view must contain the effect of the write... what am I missing/doing wrong?
|
I finally got to the bottom of this: the DB writes are done in their own threads, with the main thread waiting for all the write threads to complete before checking the results. However, there was a bug in the code which checked whether all the threads were complete, causing the main thread to do the check too early.
|
Could you try [table hints](https://msdn.microsoft.com/en-us/library/ms187373.aspx), e.g.:
```
CREATE VIEW [Position].[Transactions]
WITH SCHEMABINDING
AS
(
SELECT
Account,
Book,
TimeAPIClient AS DateTimeUtc,
BaseCcy AS Currency,
ISNULL(QuantityBase, 0) AS Quantity,
ValueDate AS SettleDate,
ISNULL(CAST(0 AS tinyint), 0) AS TransactionType
FROM Trades.FxSpotMF WITH(NOLOCK)
WHERE IsCancelled = 0
UNION ALL
SELECT
Account,
Book,
TimeAPIClient AS DateTimeUtc,
QuoteCcy AS Currency,
ISNULL(-QuantityBase * Rate, 0) AS Quantity,
ValueDate AS SettleDate,
ISNULL(CAST(0 AS tinyint), 0) AS TransactionType
FROM Trades.FxSpotMF WITH(NOLOCK)
WHERE IsCancelled = 0
...
)
```
Also check [this](http://blogs.msdn.com/b/sqlcat/archive/2007/02/01/previously-committed-rows-might-be-missed-if-nolock-hint-is-used.aspx) blog entry; in my case, using the NOLOCK hint solved the issue.
|
Querying a view immediately after writing to underlying tables in SQL Server 2014
|
[
"",
"sql",
".net",
"sql-server",
"linq-to-sql",
""
] |
I have a table `Contact`, and in that table there is a contact level and a `LevelID`. A contact can be Admin, Assistant, Friend, etc. Each of these levels has its own table, so right now, to get all the contacts and their information, I have a query with a series of levels, each with its own query. For example:
```
SELECT Admin.AdminDescripiton, Contact.ContactTypeID, Contact.LevelID, Contact.FundID, etc.
FROM Contact INNER JOIN Admin ON Contact.LevelID = Admin.AdminID
WHERE (Contact.LevelID = 1)
UNION
SELECT Assistant.AssistantDescripiton, Contact.ContactTypeID, Contact.LevelID,Contact.FundID, etc.
FROM Contact
INNER JOIN Assistant ON Contact.LevelID = Assistant.AssistantID
WHERE
(Contact.LevelID = 2)
UNION
SELECT Friend.FriendDescripiton, etc
FROM Contact
INNER JOIN Friend ON Contact.LevelID = Friend.FriendID
WHERE
(Contact.LevelID = 3)
```
Is there a way to put this into a case statement, or some other join type, that could replace the unions with just one select?
|
It's not completely clear what your tables and data look like, but maybe you could use left joins like this:
```
SELECT
COALESCE(AdminDescripiton, AssistantDescripiton, FriendDescripiton) AS Description
, c.ContactTypeID
, c.FundID
, c.LevelID
FROM contact c
LEFT JOIN Admin ON c.LevelID = Admin.AdminID AND (c.LevelID = 1)
LEFT JOIN Assistant ON c.LevelID = Assistant.AssistantID AND (c.LevelID = 2)
LEFT JOIN Friend ON c.LevelID = Friend.FriendID AND (c.LevelID = 3)
```
The `coalesce` function returns the first non-null value from its arguments.
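As a rough check of this left-join approach, here is a sketch in Python's `sqlite3` with made-up sample rows (the column names, including the question's `Descripiton` spelling, follow the question; the schema itself is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Contact   (ContactTypeID INTEGER, LevelID INTEGER, FundID INTEGER);
CREATE TABLE Admin     (AdminID INTEGER,     AdminDescripiton TEXT);
CREATE TABLE Assistant (AssistantID INTEGER, AssistantDescripiton TEXT);
CREATE TABLE Friend    (FriendID INTEGER,    FriendDescripiton TEXT);
INSERT INTO Contact   VALUES (10, 1, 100), (11, 2, 101);
INSERT INTO Admin     VALUES (1, 'Site admin');
INSERT INTO Assistant VALUES (2, 'Helper');
""")

rows = conn.execute("""
SELECT COALESCE(AdminDescripiton, AssistantDescripiton, FriendDescripiton) AS Description,
       c.ContactTypeID, c.FundID, c.LevelID
FROM   Contact c
LEFT JOIN Admin     ON c.LevelID = Admin.AdminID         AND c.LevelID = 1
LEFT JOIN Assistant ON c.LevelID = Assistant.AssistantID AND c.LevelID = 2
LEFT JOIN Friend    ON c.LevelID = Friend.FriendID       AND c.LevelID = 3
ORDER  BY c.ContactTypeID
""").fetchall()
print(rows)   # one row per contact, with the matching level's description
```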
|
Sounds like a LEFT JOIN might work.
```
SELECT Assistant.AssistantDescripiton, Contact.ContactTypeID, Contact.LevelID, Contact.FundID, ...
FROM Contact
LEFT JOIN ADMIN ON Contact.LevelID = ADMIN.AdminID
AND Contact.LevelID = 1
LEFT JOIN Assistant ON Contact.LevelID = Assistant.AssistantID
AND Contact.LevelID = 2
```
|
SQL Union Replacement
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I work on a contracting company's database (SQL Server). I'm not sure which is the best solution for calculating their customers' account balances:
1. Balance table: create table for balance and another for transactions. So my application add any transactions to transactions table and calculate the balance according to balance table value.
2. Calculate balance using query: so I'll create transactions table only.
Note: there may be up to 2 million records for each year, so I think they will need to back it up every year or something like that.
any new ideas or comments ?!
|
I would have a `transactions` table and a `balances` table as well, if I were you. For example, suppose you have 1,000,000 users: if a user has 20 transactions on average, then computing a balance from the transactions table would be roughly 20x slower than reading it from a balances table. And it is better to have the precomputed value available than not to have it.
So, I would choose to create a `balances` table without thinking twice.
|
Comments on your 2 ways:
1. Good solution if you have many more reads than updates (100 times or more). You add a new transaction, recalculate the balance and store it. You can do this in one database transaction, but it can take a while and block the user's action, so you can also defer it (for example, update balances once a minute/hour/day). Pros: fast reads. Cons: the stored balance can diverge from the sum of transactions, or user actions take longer.
2. Good solution if you have many more writes than reads (for example, a trading system with a lot of transactions). Updating the current balance takes time and may be wasted effort, because another transaction has already arrived :) so you can calculate the balance at runtime, on demand. Pros: the balance is always current. Cons: calculating it can take time.
As you can see, it depends on your workload profile (reads vs. writes). I'd advise you to begin with the second variant - it's easy to implement, and good DB indexes will let you compute the sum very fast (2 million rows per year is not as much as it sounds). But it's up to you.
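A minimal sketch of option 1 (keeping a balances table in sync with the transactions table) using Python's `sqlite3`; the table and column names are invented, and the upsert syntax assumes SQLite 3.24+:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (customer_id INTEGER, amount NUMERIC);
CREATE TABLE balances     (customer_id INTEGER PRIMARY KEY, balance NUMERIC NOT NULL);
""")

def add_transaction(customer_id, amount):
    # Insert the transaction and fold it into the stored balance inside one
    # database transaction, so the two tables cannot drift apart.
    with conn:
        conn.execute("INSERT INTO transactions VALUES (?, ?)",
                     (customer_id, amount))
        conn.execute("""
            INSERT INTO balances (customer_id, balance) VALUES (?, ?)
            ON CONFLICT(customer_id)
            DO UPDATE SET balance = balance + excluded.balance
        """, (customer_id, amount))

add_transaction(7, 100)
add_transaction(7, -30)
stored     = conn.execute("SELECT balance FROM balances WHERE customer_id = 7").fetchone()[0]
recomputed = conn.execute("SELECT SUM(amount) FROM transactions WHERE customer_id = 7").fetchone()[0]
print(stored, recomputed)   # the two views of the balance agree
```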
|
transactions and balance
|
[
"",
"sql",
"sql-server",
"database-design",
"database-schema",
"database-administration",
""
] |
I have a table `comments` with the following structure:
```
CommentId (int),
CommentText (nvarchar(max))
EditedBy (nvarchar(max))
ParentCommentId (int)
CaseId (int)
```
For a particular `caseId`, I want to select the latest comment text as well as the `EditedBy` column.
For example, if someone adds a comment, the table would be
```
CommentId CommentText EditedBy ParentCommentId CaseId
1 ParentComment1 ABC NULL 1
2 ParentComment2 ABC NULL 1
```
Now, if someone edits that comment, the table would look like
```
CommentId CommentText EditedBy ParentCommentId CaseId
1 ParentComment1 ABC NULL 1
2 ParentComment2 ABC NULL 2
3 Comment2 DEF 1 1
```
This editing can be done any number of times
I want to select the latest comment as well as the history.
In this case my dataset should be something like this :
```
CaseId CommentId CommentText Predicate
1 3 Comment2 Created By ABC , Updated by DEF
1 2 ParentComment2 Created By ABC
```
This is a simplified version of the problem.
TIA
|
You can use `FOR XML PATH` for creating your predicate column. Something like this.
**[SQL Fiddle](http://sqlfiddle.com/#!6/d5c6c/2)**
**Sample Data**
```
CREATE TABLE Comment
([CommentId] int, [CommentText] varchar(8), [EditedBy] varchar(3), [ParentCommentId] varchar(4), [CaseId] int);
INSERT INTO Comment
([CommentId], [CommentText], [EditedBy], [ParentCommentId], [CaseId])
VALUES
(1, 'Comment1', 'ABC', NULL, 1),
(2, 'Comment2', 'DEF', '1', 1);
```
**Query**
```
;WITH CTE AS
(
SELECT CommentID, CommentText,EditedBy,ParentCommentID,CaseID,ROW_NUMBER()OVER(PARTITION BY Caseid ORDER BY CommentID DESC) RN
FROM Comment
), CTE2 as
(
SELECT CommentID, CommentText,EditedBy,ParentCommentID,CaseID,
(
SELECT
CASE WHEN ParentCommentID IS NULL THEN 'Created By ' ELSE ', Updated By ' END
+ EditedBy
FROM CTE C1
WHERE C1.CaseID = C2.CaseID
ORDER BY CommentID ASC
FOR XML PATH('')
) as Predicate
FROM CTE C2
WHERE RN = 1
)
SELECT CaseID,CommentID, CommentText,Predicate FROM CTE2;
```
**EDIT**
If you do not want to repeat `Updated By` for each user who updated the `caseid`, use the following `CASE`
```
CASE WHEN ParentCommentID IS NULL THEN 'Created By ' + EditedBy + ', Updated By' ELSE '' END,
CASE WHEN ParentCommentID IS NOT NULL THEN ', ' + EditedBy ELSE '' END
```
Instead of
```
CASE WHEN ParentCommentID IS NULL THEN 'Created By ' ELSE ', Updated By ' END + EditedBy
```
**Output**
```
| CaseID | CommentID | CommentText | Predicate |
|--------|-----------|-------------|--------------------------------|
| 1 | 2 | Comment2 | Created By ABC, Updated By DEF |
```
**EDIT 2**
Use Recursive CTE to achieve your expected output. Something like this.
**[SQL Fiddle](http://sqlfiddle.com/#!6/f2c9e4/1)**
**Query**
```
;WITH CTEComment AS
(
SELECT CommentID as RootCommentID,CommentID, CommentText,EditedBy,ParentCommentID,CaseID
FROM Comment
WHERE ParentCommentID IS NULL
UNION ALL
SELECT CTEComment.RootCommentID as RootCommentID,Comment.CommentID, Comment.CommentText,Comment.EditedBy,Comment.ParentCommentID,Comment.CaseID
FROM CTEComment
INNER JOIN Comment
ON CTEComment.CommentID = Comment.ParentcommentID
AND CTEComment.CaseID = Comment.CaseID
), CTE as
(
SELECT CommentID,RootCommentID,CommentText,EditedBy,ParentCommentID,CaseID,ROW_NUMBER()OVER(PARTITION BY CaseID,RootCommentID ORDER BY CommentID DESC) RN
FROM CTEComment
), CTE2 as
(
SELECT CommentID, CommentText,EditedBy,ParentCommentID,CaseID,
(
SELECT
CASE WHEN ParentCommentID IS NULL THEN 'Created By ' ELSE ', Updated By ' END
+ EditedBy
FROM CTE C1
WHERE C1.CaseID = C2.CaseID
AND C1.RootCommentID = C2.RootCommentID
ORDER BY CommentID ASC
FOR XML PATH('')
) as Predicate
FROM CTE C2
WHERE RN = 1
)
SELECT CaseID,CommentID, CommentText,Predicate FROM CTE2;
```
**Output**
```
| CaseID | CommentID | CommentText | Predicate |
|--------|-----------|----------------|--------------------------------|
| 1 | 3 | Comment2 | Created By ABC, Updated By DEF |
| 2 | 2 | ParentComment2 | Created By ABC |
```
|
You can simply use a self join to get the results.
Find the query below:
```
select a.CaseId, b.CommentId, b.CommentText,
       'Created By ' + a.EditedBy + ', Updated By ' + b.EditedBy as Predicate
from Comment a
inner join Comment b
on a.CaseId=b.CaseId
where a.CommentId<b.CommentId
```
|
Select the row with the biggest ID and a specific condition
|
[
"",
"sql",
"sql-server",
"select",
"greatest-n-per-group",
""
] |
```
SQL> SELECT * FROM student;
NAME ID AGE MARK1 MARK2 TOTAL
-------------------- ---------- ---------- ---------- ---------- -----------
Ananda 200 22 90 95
Chris 250 18 80 75
Gokul 325 17 50 50
SQL> SELECT MAX(mark1),name FROM student;
SELECT MAX(mark1),name FROM student
*
ERROR at line 1:
ORA-00937: not a single-group group function
```
As you can see, I get an error.
Can anyone suggest a query to select the maximum mark from the table and display it along with the corresponding name of the student?
Is it even possible without using a GROUP BY clause?
As you can see, there's no logical way to use a GROUP BY clause here.
|
I found an easy solution:
```
SELECT mark1,name
FROM student
WHERE mark1=
(SELECT MAX(mark1) FROM student);
```
Result:
```
MARK1 NAME
----- -------
90 Ananda
```
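Here is the same approach as a runnable check in Python's `sqlite3`, using the question's sample data; note that it would return every student tied for the maximum, not just one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (name TEXT, id INTEGER, age INTEGER, mark1 INTEGER, mark2 INTEGER);
INSERT INTO student VALUES
  ('Ananda', 200, 22, 90, 95),
  ('Chris',  250, 18, 80, 75),
  ('Gokul',  325, 17, 50, 50);
""")

top = conn.execute("""
SELECT mark1, name
FROM   student
WHERE  mark1 = (SELECT MAX(mark1) FROM student)
""").fetchall()
print(top)   # all students holding the top mark1
```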
|
If you want to get the name of the student also, you need to use a join:
```
SELECT T2.Mark,T1.name
FROM student T1 JOIN
(SELECT MAX(Mark1) as Mark
FROM student) T2 on T1.Mark=T2.mark
```
Result:
```
MARK NAME
---------
90 Ananda
```
Sample result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!4/60977f/4)
|
How to use aggregate function without GROUP BY?
|
[
"",
"sql",
"oracle",
""
] |
I'm having trouble populating a schema in MySQL 5.5. Each table has a "create\_time" and "update\_time" which are Timestamps. I prefer to have no Default for these columns but I get a 1067 error referencing the Default value even when there is no Default value.
```
#1067 - Invalid default value for 'update_time'
```
Below is a portion of my script which is generated by MySQL Workbench.
```
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES';
CREATE SCHEMA IF NOT EXISTS `database` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ;
USE `database` ;
CREATE TABLE IF NOT EXISTS `database`.`table` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`create_time` TIMESTAMP NOT NULL,
`update_time` TIMESTAMP NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `id_UNIQUE` (`id` ASC))
ENGINE = InnoDB;
SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
```
How can I create this table with these Timestamps?
|
From the MySQL documentation:
> As of MySQL 5.6.5, TIMESTAMP and DATETIME columns can be automatically initialized and updated to the current date and time (that is, the current timestamp). Before 5.6.5, this is true only for TIMESTAMP, and for at most one TIMESTAMP column per table. The following notes first describe automatic initialization and updating for MySQL 5.6.5 and up, then the differences for versions preceding 5.6.5.
Since you are using 5.5 and have two columns of the TIMESTAMP data type, only one of them can be given an automatic default.
This is consistent with the error: it is `update_time` - the second TIMESTAMP column in the table - that is rejected, because it cannot be given an automatic default per the docs.
|
The creation of the table works.
But if you insert a record you need to set a value for `update_time`, since `null` values are not allowed.
Otherwise an error is thrown because there is no default value that can be inserted.
|
Error 1067 for Default of Timestamp column where there is no Default
|
[
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
I'm trying to figure out why this query is returning all of the data in the table:
```
State IN ('North Dakota') AND ('Homicide Offenses' > 0 OR 'Assault Offenses' > 0)
```
I've included the code I used to generate this statement below. The code works for single searches such as:
```
State IN ('North Dakota') AND 'Homicide Offenses' > 0
```
But does not work for anything else. Thank you for any insight you can give.
```
function getWhere() {
var caps = getCAPS(); //Returns an array of user selections
var states = getStates(); //Returns an array of selected states
var where = "State IN (" + states.join(",") + ')';
if (caps.length) {
if (caps.length == 1) {
where = where + " AND " + caps.pop();
} else {
where += " AND (";
while(caps.length) {
if(caps.length == 1) {
where += caps.pop() + ")";
} else {
where += caps.pop() + " OR ";
}
}
}
}
return where;
}
```
|
If both of
`State IN ('North Dakota') AND ('Homicide Offenses' > 0)` and
`State IN ('North Dakota') AND ('Assault Offenses' > 0)`
do actually work as expected, I'd check, whether
> <filter\_condition>
> A Boolean expression (returns TRUE or FALSE). Only table rows where the condition is TRUE will be included in the results of the SELECT. Use AND to combine multiple filters into a single query. [...] OR is **not** supported.
(from [Fusion Tables REST API - Row and Query SQL Reference](https://developers.google.com/fusiontables/docs/v2/sql-reference)) is really to be read as: No **`OR`**. (... and whether it leads -when ignored- to the observed effect.)
(This "OR is **not** supported" appears in both, the `SELECT` and the `CREATE VIEW` `<filter_condition>` sections of the document...).
|
I don't see the error in the code, but try this to see if it fixes it. Just add the offenses together using a join like you do with the states, and check if the sum is > 0.
```
function getWhere() {
var caps = getCAPS(); //Returns an array of user selections
var states = getStates(); //Returns an array of selected states
return where = "State IN (" + states.join(",") + ") AND " + caps.join("+") + " > 0"
}
```
You will need to change getCAPS to return a list of simply
```
"'Homicide Offenses'"
```
and not
```
"'Homicide Offenses' > 0"
```
which is probably the better way to do it anyway.
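Since the workaround is just string joining, the shape of the generated filter is easy to check outside Apps Script. Here is a Python sketch of the same builder logic (the function name and inputs are illustrative only, not part of any Fusion Tables API):

```python
# Build the OR-free Fusion Tables filter: sum the offense columns and
# compare the total against 0 instead of OR-ing per-column conditions.
def build_where(states, caps):
    # states: e.g. ["'North Dakota'"]; caps: e.g. ["'Homicide Offenses'"]
    return "State IN (" + ",".join(states) + ") AND " + "+".join(caps) + " > 0"

print(build_where(["'North Dakota'"],
                  ["'Homicide Offenses'", "'Assault Offenses'"]))
# State IN ('North Dakota') AND 'Homicide Offenses'+'Assault Offenses' > 0
```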
|
Query Returning All Data In Table
|
[
"",
"sql",
"google-fusion-tables",
""
] |
I'm trying to bucket unit statuses. What am I doing wrong with my case statement? I'm new to SQL.
```
CASE WHEN [sStatus] LIKE '%Notice%'
THEN 'Notice'
ELSE
CASE WHEN [sStatus] LIKE '%Occupied%'
THEN 'Occupied'
ELSE
CASE WHEN [sStatus] LIKE '%Vacant%'
THEN 'Vacant'
ELSE [sStatus]
END as [Status]
```
[](https://i.stack.imgur.com/E9H8i.png)
Thank you!
|
Your case statements are missing ends. But, they don't need to be nested in the first place:
```
(CASE WHEN [sStatus] LIKE '%Notice%' THEN 'Notice'
WHEN [sStatus] LIKE '%Occupied%' THEN 'Occupied'
WHEN [sStatus] LIKE '%Vacant%' THEN 'Vacant'
ELSE [sStatus]
END) as [Status]
```
And, if you just want the first word, you don't need a `case` at all:
```
SUBSTRING(sStatus, CHARINDEX(' ', sStatus + ' '), LEN(sStatus))
```
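The flattened `CASE` is easy to sanity-check with sqlite3 (sample statuses are invented, and the T-SQL `[brackets]` become plain identifiers here):

```python
# Verify that a single CASE with multiple WHEN branches buckets the
# statuses without any nesting.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE units (sStatus TEXT)")
con.executemany("INSERT INTO units VALUES (?)",
                [("Notice Unrented",), ("Occupied Rented",),
                 ("Vacant Unrented",), ("Model",)])
rows = con.execute("""
    SELECT CASE WHEN sStatus LIKE '%Notice%'   THEN 'Notice'
                WHEN sStatus LIKE '%Occupied%' THEN 'Occupied'
                WHEN sStatus LIKE '%Vacant%'   THEN 'Vacant'
                ELSE sStatus
           END AS Status
    FROM units ORDER BY rowid
""").fetchall()
print(rows)  # [('Notice',), ('Occupied',), ('Vacant',), ('Model',)]
```

Note the branches are evaluated in order, so a status containing two keywords gets the first matching bucket.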
|
You are getting the error because you have 3 CASE statements and only one END.
However, there is no need to nest these CASE statements at all. You can simply do this:
```
CASE
WHEN [sStatus] LIKE '%Notice%'
THEN 'Notice'
WHEN [sStatus] LIKE '%Occupied%'
THEN 'Occupied'
WHEN [sStatus] LIKE '%Vacant%'
THEN 'Vacant'
ELSE [sStatus]
END as [Status]
```
|
Nested SQL case statement
|
[
"",
"sql",
"sql-server",
"case-statement",
""
] |
This is my current oracle table:
* DATE = date
* HOUR = number
* RUN\_DURATION = number
[](https://i.stack.imgur.com/j6I8X.png)
I need a query to get RUN\_DURATION between two dates **with hours** like
`Select * from Datatable where DATE BETWEEN to_date('myStartDate', 'dd/mm/yyyy hh24:mi:ss') + HOURS? and to_date('myEndDate', 'dd/mm/yyyy hh24:mi:ss') + HOURS?`
*For example all data between 30.10.14 11:00:00 and 30.10.14 15:00:00*
I'm stuck on getting the hours added to the dates. I tried adding the hours into myStartDate, but this is ignored for the start date because of `BETWEEN`.
I know `BETWEEN` shouldn't be used for dates, but I can't try other options because I don't know how to combine `DATE` and `HOUR`...
Thanks!
|
A **DATE** has both date and time elements. To add hours to date, you just need to do some mathematics.
For example,
```
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> SELECT SYSDATE, SYSDATE + 1/24 FROM dual;
SYSDATE SYSDATE+1/24
------------------- -------------------
2015-08-14 15:02:59 2015-08-14 16:02:59
```
Adds 1 hour to SYSDATE. So, if you have a column of DATE data type, just add the **number of hours to be added divided by 24**, because date arithmetic works in **days**. In other words, convert the number of hours into a fraction of a day as `hours/24`.
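The same arithmetic can be checked outside the database: Oracle DATE maths works in days, so adding 1/24 of a day is exactly adding one hour. A quick Python illustration:

```python
# Oracle's SYSDATE + 1/24 is "add 1/24 of a day", i.e. one hour.
from datetime import datetime, timedelta

base = datetime(2015, 8, 14, 15, 2, 59)    # the SYSDATE from the example
one_hour_as_days = timedelta(days=1 / 24)  # what "+ 1/24" adds
print(base + one_hour_as_days)             # 2015-08-14 16:02:59
```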
|
Well, you can add the hour to your date field this way
```
select "DATE" + (hour / 24) from <yourTable>
```
this will give you (from your first sample; it may look different based on your format)
```
August, 14 2015 10:00:00
August, 14 2015 08:00:00
```
Based on that, you can do any `BETWEEN` or `SELECT` that you need. In your case:
```
where "DATE" + (hour / 24 )
```
which would make
```
Select *
from Datatable
where "DATE" + (hour / 24 )
BETWEEN to_date('30/10/2014 11:00:00', 'dd/mm/yyyy hh24:mi:ss') and
to_date('30/10/2014 15:00:00', 'dd/mm/yyyy hh24:mi:ss')
```
see [SqlFiddle](http://sqlfiddle.com/#!4/5d9e0/5)
(By the way, don't use reserved keywords for column name, but I guess this is just a sample).
|
oracle query between two dates with hours
|
[
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
I want to speed-up some scripts, so my question is:
**Is there a faster available solution for this?**
The uids are just IDs which are added to a temporary table.
After that, the original table is updated with the given uids.
Engine is InnoDB, MariaDB 10.0.19.
It takes approx 20s for ~400 entries.
```
CREATE TEMPORARY TABLE cleanup_users AS (
SELECT uid FROM users
WHERE FROM_UNIXTIME(last_connect) <= DATE_SUB(NOW(), INTERVAL 2 MONTH)
AND activity <= 20 AND hide != 1
);
UPDATE users SET hide = 1 WHERE uid IN (
SELECT * FROM cleanup_users
);
DROP TABLE IF EXISTS cleanup_users;
```
|
```
UPDATE users SET hide = 1 WHERE last_connect <= UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 2 MONTH)) AND activity <= 20 AND hide != 1
```
Do not use a function over the column in the WHERE clause, and add a covering index to speed things up.
```
ALTER TABLE users ADD INDEX(last_connect,activity,hide);
```
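The single-statement rewrite is easy to verify with sqlite3 (schema and values are invented for the demo, and MySQL's `UNIX_TIMESTAMP(DATE_SUB(...))` becomes a precomputed cutoff here):

```python
# One UPDATE with a sargable predicate replaces the temp-table round trip.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (uid INTEGER, last_connect INTEGER,"
            " activity INTEGER, hide INTEGER)")
cutoff = 1400000000  # stand-in for "two months ago" as a unix timestamp
con.executemany("INSERT INTO users VALUES (?,?,?,?)", [
    (1, cutoff - 10, 5, 0),   # stale and inactive -> should be hidden
    (2, cutoff - 10, 50, 0),  # too active         -> keep
    (3, cutoff + 10, 5, 0),   # recent connection  -> keep
])
con.execute("UPDATE users SET hide = 1"
            " WHERE last_connect <= ? AND activity <= 20 AND hide != 1",
            (cutoff,))
hidden = con.execute("SELECT uid FROM users WHERE hide = 1").fetchall()
print(hidden)  # [(1,)]
```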
|
You do not need a temporary table, even when the update matches data from another table; in the current case it can be done with one single query:
```
update users
set hide = 1
where
last_connect <= unix_timestamp(DATE_SUB(NOW(), INTERVAL 2 MONTH))
and activity <= 20 and hide != 1
```
Now assuming that there are lot of data involved in the table you would need some indexing and you can have the following covering index
```
alter table users add index date_act_hide_idx(last_connect,activity,hide);
```
Make sure to take a backup of the table before applying the index.
Note that there is a change in the WHERE clause: `FROM_UNIXTIME` is removed, and `unix_timestamp` is used on the right side of the comparison instead, allowing the index on the `last_connect` column to be used.
|
MySQL - UPDATE - faster solution?
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |
Say I have two tables --`people` and `pets`-- where each person may have more than one pet:
`people`:
```
+-----------+-------+
| person_id | name |
+-----------+-------+
| 1 | Bob |
| 2 | John |
| 3 | Pete |
| 4 | Waldo |
+-----------+-------+
```
`pets`:
```
+--------+-----------+--------+
| pet_id | person_id | animal |
+--------+-----------+--------+
| 1 | 1 | dog |
| 2 | 1 | dog |
| 3 | 1 | cat |
| 4 | 2 | cat |
| 5 | 3 | dog |
| 6 | 3 | tiger |
| 7 | 3 | tiger |
| 8 | 4 | tiger |
| 9 | 4 | tiger |
| 10 | 4 | tiger |
+--------+-----------+--------+
```
I'm trying to select the people who ONLY have `tiger`s as pets. Obviously the only one who fits this criterion is `Waldo`, since `Pete` has a `dog` as well... but I'm having some trouble writing the query for this.
The most obvious case is `select people.person_id, people.name from people join pets on people.person_id = pets.person_id where pets.animal = "tiger"`, but this returns `Pete` and `Waldo`.
It would be helpful if there was a clause like `pets.animal ONLY = "tiger"`, but as far as I know this doesn't exist.
How could the query be written?
|
```
select people.person_id, people.name
from people
join pets on people.person_id = pets.person_id
where pets.animal = "tiger"
AND people.person_id NOT IN (select person_id from pets where animal != 'tiger');
```
|
Use `group by` and `having`:
```
select p.person_id
from pets p
group by p.person_id
having max(animal) = 'tiger' and min(animal) = 'tiger';
```
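The MIN/MAX trick is quick to verify with sqlite3 using the pets data from the question:

```python
# If MIN(animal) = MAX(animal) = 'tiger' within a person's group, that
# person owns tigers and nothing else.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pets (pet_id INTEGER, person_id INTEGER, animal TEXT)")
con.executemany("INSERT INTO pets VALUES (?,?,?)", [
    (1, 1, 'dog'), (2, 1, 'dog'), (3, 1, 'cat'), (4, 2, 'cat'),
    (5, 3, 'dog'), (6, 3, 'tiger'), (7, 3, 'tiger'),
    (8, 4, 'tiger'), (9, 4, 'tiger'), (10, 4, 'tiger')])
rows = con.execute("""
    SELECT person_id
    FROM pets
    GROUP BY person_id
    HAVING MAX(animal) = 'tiger' AND MIN(animal) = 'tiger'
""").fetchall()
print(rows)  # [(4,)] -- only Waldo owns nothing but tigers
```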
|
Selecting rows whose foreign rows ONLY match a single value
|
[
"",
"mysql",
"sql",
"postgresql",
""
] |
I have a very large table (several hundred million rows) that stores test results along with a datetime and a foreign key to a related entity called 'link'. I need to group rows by time intervals of 10, 15, 20, 30 and 60 minutes, as well as filter by time and 'link\_id'. I know this can be done with this query:
```
SELECT time,AVG(RTT),MIN(RTT),MAX(RTT),COUNT(*) FROM trace
WHERE link_id=1 AND time>='2015-01-01' AND time <= '2015-01-30'
GROUP BY UNIX_TIMESTAMP(time) DIV 600;
```
This solution worked, but it was extremely slow (about 10 seconds on average), so I tried adding a datetime column for each 'group by interval'. For example, the row:
```
id | time | rtt | link_id
1 | 2014-01-01 12:34:55.4034 | 154.3 | 2
```
became:
```
id | time | rtt | link_id | time_60 |time_30 ...
1 | 2014-01-01 12:34:55.4034 | 154.3 | 2 | 2014-01-01 12:00:00.00 | 2014-01-01 12:30:00.00 ...
```
and I get the intervals with the following query:
```
SELECT time_10,AVG(RTT),MIN(RTT),MAX(RTT),COUNT(*) FROM trace
WHERE link_id=1 AND time>='2015-01-01' AND time <= '2015-01-30'
GROUP BY time_10;
```
This query was at least 50% faster (about 5 seconds on average), but it is still pretty slow. How can I optimize it further?
`EXPLAIN` outputs this:
```
+----+-------------+------------+------+------------------------------------------------------------------------+----------------------------------------------------+---------+-------+---------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+------+------------------------------------------------------------------------+----------------------------------------------------+---------+-------+---------+----------------------------------------------+
| 1 | SIMPLE | main_trace | ref | main_trace_link_id_c6febb11f84677f_fk_main_link_id,main_trace_e7549e3e | main_trace_link_id_c6febb11f84677f_fk_main_link_id | 4 | const | 1478359 | Using where; Using temporary; Using filesort |
+----+-------------+------------+------+------------------------------------------------------------------------+----------------------------------------------------+---------+-------+---------+----------------------------------------------+
```
and these are the table indexes:
```
+------------+------------+----------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------+------------+----------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| main_trace | 0 | PRIMARY | 1 | id | A | 2956718 | NULL | NULL | | BTREE | | |
| main_trace | 1 | main_trace_link_id_c6febb11f84677f_fk_main_link_id | 1 | link_id | A | 2 | NULL | NULL | | BTREE | | |
| main_trace | 1 | main_trace_07cc694b | 1 | time | A | 2956718 | NULL | NULL | | BTREE | | |
| main_trace | 1 | main_trace_e7549e3e | 1 | time_10 | A | 22230 | NULL | NULL | YES | BTREE | | |
| main_trace | 1 | main_trace_01af8333 | 1 | time_15 | A | 14783 | NULL | NULL | YES | BTREE | | |
| main_trace | 1 | main_trace_1681ff94 | 1 | time_20 | A | 10870 | NULL | NULL | YES | BTREE | | |
| main_trace | 1 | main_trace_f7c28c93 | 1 | time_30 | A | 6399 | NULL | NULL | YES | BTREE | | |
| main_trace | 1 | main_trace_0f29fcc5 | 1 | time_60 | A | 3390 | NULL | NULL | YES | BTREE | | |
+------------+------------+----------------------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
```
|
For this query:
```
SELECT time_10, AVG(RTT), MIN(RTT), MAX(RTT), COUNT(*)
FROM trace
WHERE link_id = 1 AND time >= '2015-01-01' AND time <= '2015-01-30'
GROUP BY time_10;
```
The best index is the covering index: `trace(link_id, time, time_10, rtt)`.
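The idea can be illustrated with sqlite3 (a different planner than MySQL, but the covering-index concept carries over): every column the query touches is in the index, so the plan can read the index alone, which sqlite reports as "COVERING INDEX".

```python
# Create the suggested (link_id, time, time_10, rtt) index and inspect
# the query plan; all referenced columns live in the index.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trace (link_id INTEGER, time TEXT,"
            " time_10 TEXT, rtt REAL)")
con.execute("CREATE INDEX trace_cover ON trace (link_id, time, time_10, rtt)")
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT time_10, AVG(rtt), MIN(rtt), MAX(rtt), COUNT(*)
    FROM trace
    WHERE link_id = 1 AND time >= '2015-01-01' AND time <= '2015-01-30'
    GROUP BY time_10
""").fetchall()
for row in plan:
    print(row[-1])
```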
|
A composite index on `(id, time)` followed by a potential `ANALYZE TABLE trace` would make it snappy.
This is just a suggestion, I am not saying to do it: `ANALYZE TABLE` can take hours to run with millions of rows.
Suggesting index creation based on just one query is not a great idea. The assumption is that you have other queries, and indexes are a drag on inserts/updates.
|
Mysql Group by time interval optimization
|
[
"",
"mysql",
"sql",
"database",
"optimization",
"indexing",
""
] |
I have two MySQL tables: one called Audit and the other Audit2. I need to copy the column names from Audit2 to Audit. For example table Audit has columns 1, 2 and 3 and Audit2 has columns 4, 5 and 6 and I need Audit to have columns 1, 2, 3, 4, 5 and 6.
I tried the following to no success:
```
ALTER TABLE `Audit` ADD (select `COLUMN_NAME` from information_schema.columns
where table_schema = 'BMS' and `TABLE_NAME` = 'Audit2') (select `DATA_TYPE` from information_schema.columns
where table_schema = 'BMS' and `TABLE_NAME` = 'Audit2') NOT NULL
```
|
You can do it by
```
create table new_audit as
select t1.*,t2.* from audit as t1 inner join audit_2 as t2 on 1=0;
```
The table new\_audit will have all the columns, and no rows, since the `1=0` join condition never matches.
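A sqlite3 check of the `1=0` trick (column names invented for the demo): the join condition never matches, so the new table gets the combined column set and zero rows.

```python
# CREATE TABLE ... AS SELECT with an always-false join copies the
# combined column list of both tables without copying any data.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE audit  (c1 INTEGER, c2 INTEGER, c3 INTEGER)")
con.execute("CREATE TABLE audit2 (c4 INTEGER, c5 INTEGER, c6 INTEGER)")
con.execute("INSERT INTO audit VALUES (1, 2, 3)")
con.execute("""CREATE TABLE new_audit AS
               SELECT t1.*, t2.*
               FROM audit AS t1 INNER JOIN audit2 AS t2 ON 1 = 0""")
cols = [r[1] for r in con.execute("PRAGMA table_info(new_audit)")]
nrows = con.execute("SELECT COUNT(*) FROM new_audit").fetchone()[0]
print(cols, nrows)  # ['c1', 'c2', 'c3', 'c4', 'c5', 'c6'] 0
```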
|
Have you tried:
```
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='your_table_name';
```
It'll return all the column names in a single column from the table you want.
|
Adding Mysql columns from one table to another
|
[
"",
"mysql",
"sql",
""
] |
I am new to this topic (i.e. tree structures) in SQL. I have gone through different sources but am still not clear.
Here in my case, I have a table which I have attached herewith.
[](https://i.stack.imgur.com/jjhKX.png)
First, I have to retrieve a full tree for "OFFICE".
I also have to find all the leaf nodes (those with no children) in the attached hierarchical data.
Please provide answers with a detailed explanation.
Thanks in advance.
|
You didn't specify your DBMS but with standard SQL (supported by all [modern DBMS](http://use-the-index-luke.com/de/blog/2015-02/modern-sql)) you can easily do a recursive query to get the full tree:
```
with recursive full_tree as (
select id, name, parent, 1 as level
from departments
where parent is null
union all
select c.id, c.name, c.parent, p.level + 1
from departments c
join full_tree p on c.parent = p.id
)
select *
from full_tree;
```
If you need a sub-tree, just change the starting condition in the common table expression. To get e.g. all "categories":
```
with recursive all_categories as (
select id, name, parent, 1 as level
from departments
where id = 2 --- << start with a different node
union all
select c.id, c.name, c.parent, p.level + 1
from departments c
join all_categories p on c.parent = p.id
)
select *
from all_categories;
```
Getting all leafs is straightforward: it's all nodes where their ID does not appear as a `parent`:
```
select *
from departments
where id not in (select parent
from departments
where parent is not null);
```
SQLFiddle example: <http://sqlfiddle.com/#!15/414c9/1>
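The same pattern runs unchanged on any DBMS with recursive CTEs; here is a quick sqlite3 check (sample rows invented to mimic the screenshot):

```python
# Walk the tree top-down with a recursive CTE, then find the leaves as
# ids that never appear in the parent column.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE departments (id INTEGER, name TEXT, parent INTEGER)")
con.executemany("INSERT INTO departments VALUES (?,?,?)", [
    (1, 'OFFICE', None), (2, 'CATEGORY', 1),
    (3, 'BOOKS', 2), (4, 'TOYS', 2)])
tree = con.execute("""
    WITH RECURSIVE full_tree AS (
        SELECT id, name, parent, 1 AS level
        FROM departments WHERE parent IS NULL
        UNION ALL
        SELECT c.id, c.name, c.parent, p.level + 1
        FROM departments c JOIN full_tree p ON c.parent = p.id
    )
    SELECT name, level FROM full_tree ORDER BY level, id
""").fetchall()
print(tree)    # [('OFFICE', 1), ('CATEGORY', 2), ('BOOKS', 3), ('TOYS', 3)]
leaves = con.execute("""
    SELECT name FROM departments
    WHERE id NOT IN (SELECT parent FROM departments WHERE parent IS NOT NULL)
    ORDER BY id
""").fetchall()
print(leaves)  # [('BOOKS',), ('TOYS',)]
```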
---
Edit after DBMS has been specified.
Oracle does support recursive CTEs (although you need 11.2.x for that) you just need to leave out the keyword `recursive`. But you can also use the `CONNECT BY` operator:
```
select id, name, parent, level
from departments
start with parent is null
connect by prior id = parent;
select id, name, parent, level
from departments
start with id = 2
connect by prior id = parent;
```
SQLFiddle for Oracle: <http://sqlfiddle.com/#!4/6774ee/3>
See the manual for details: <https://docs.oracle.com/database/121/SQLRF/queries003.htm#i2053935>
|
There is a design pattern called "hardcoded trees" that can be useful for this purpose.
[](https://i.stack.imgur.com/EzxnM.png)
Then you can use this query for finding a parent for each child
```
SELECT level1ID FROM Level2entity as L2 WHERE level2ID = :aLevel2ID
```
Or this one for finding the children for each parent:
```
SELECT level2ID FROM Level2entity as L2 WHERE level1ID = :aLevel1ID
```
|
SQL Tree Structure
|
[
"",
"sql",
"oracle",
"hierarchical-data",
"recursive-query",
""
] |
I have `Table A` with the following data:
```
Id Value
1 100
2 63
4 50
6 24
7 446
```
I want to select the first rows with a `SUM(value) <= 200`.
So the desired output should be:
```
Id Value
1 100
4 50
6 24
```
|
This should return expected result:
```
WITH TableWithTotals AS
(SELECT
id,
value,
SUM (Value) OVER (ORDER BY Value, Id) as total
FROM myTable)
SELECT * FROM TableWithTotals
WHERE total <=200;
```
This maximises the number of records that fit under the 200 limit, as the running total is calculated over the ordered values.
[SQL Fiddle](http://sqlfiddle.com/#!3/139ce/4)
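The running-total idea can be re-checked portably in sqlite3 (a correlated subquery stands in for `SUM() OVER`, which needs SQL Server 2012+ / SQLite 3.25+):

```python
# Compute each row's cumulative sum over (value, id) order, then keep
# rows whose running total stays under 200.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (id INTEGER, value INTEGER)")
con.executemany("INSERT INTO myTable VALUES (?,?)",
                [(1, 100), (2, 63), (4, 50), (6, 24), (7, 446)])
rows = con.execute("""
    SELECT id, value, total FROM (
        SELECT id, value,
               (SELECT SUM(t2.value) FROM myTable t2
                WHERE t2.value < t.value
                   OR (t2.value = t.value AND t2.id <= t.id)) AS total
        FROM myTable t)
    WHERE total <= 200
    ORDER BY total
""").fetchall()
print(rows)  # [(6, 24, 24), (4, 50, 74), (2, 63, 137)]
```

Note this picks ids 2, 4, 6 (total 137) because ordering by value packs in as many rows as possible, which matches the answer's behaviour rather than the question's sample output.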
|
**SIMPLE ANSWER**
You need to find the Cumulative Sum for each row, and since you want as many rows as possible, you need to start with the lowest value (`ORDER BY Value`):
```
WITH Data AS
( SELECT Id,
Value,
CumulativeValue = SUM(Value) OVER(ORDER BY Value, Id)
--FROM (VALUES (1, 100), (2, 300), (4, 50), (6, 24), (7, 446)) AS t (Id, Value)
FROM TableA AS t
)
SELECT d.Id, d.Value
FROM Data AS d
WHERE d.CumulativeValue <= 200
ORDER BY d.Id;
```
**COMPLETE ANSWER**
If you want to be more selective about which rows you choose that have a sum of less than 200, then it gets a bit more complicated. For example, in your new sample data:
```
Id Value
1 100
2 63
4 50
6 24
7 446
```
There are 3 different combinations that allow for a total of less than 200:
```
Id Value
1 100
2 63
6 24
--> 187
Id Value
2 63
4 50
6 24
--> 137
Id Value
1 100
4 50
6 24
--> 174
```
The only way to do this is to get all combinations that have a sum less than 200, then choose the combination that you want, in order to do this you will need to use a recursive common table expression to get all combinations:
```
WITH TableA AS
( SELECT Id, Value
FROM (VALUES (1, 100), (2, 63), (4, 50), (6, 24), (7, 446)) t (Id, Value)
), CTE AS
( SELECT Id,
IdList = CAST(Id AS VARCHAR(MAX)),
CumulativeValue = Value,
ValueCount = 1
FROM TableA AS t
UNION ALL
SELECT T.ID,
IdList = CTE.IDList + ',' + CAST(t.ID AS VARCHAR(MAX)),
CumulativeValue = CTE.CumulativeValue + T.Value,
ValueCount = CTE.ValueCount + 1
FROM CTE
INNER JOIN TableA AS T
ON ',' + CTE.IDList + ',' NOT LIKE '%,' + CAST(t.ID AS VARCHAR(MAX)) + ',%'
AND CTE.ID < T.ID
WHERE T.Value + CTE.CumulativeValue <= 200
)
SELECT *
FROM CTE
ORDER BY ValueCount DESC, CumulativeValue DESC;
```
This outputs (with single rows removed)
```
Id IdList CumulativeValue ValueCount
-------------------------------------
6 1,2,6 187 3
6 1,4,6 174 3
6 2,4,6 137 3
2 1,2 163 2
4 1,4 150 2
6 1,6 124 2
4 2,4 113 2
6 2,6 87 2
6 4,6 74 2
```
So you need to choose which combination of rows best meets your requirements. For example, if you wanted, as described before, the greatest number of rows with a value as close to 200 as possible, then you would need to choose the top result; if you wanted the lowest total, then you would need to change the ordering.
Then you can get your original output by using `EXISTS` to get the records that exist in `IdList`:
```
WITH TableA AS
( SELECT Id, Value
FROM (VALUES (1, 100), (2, 63), (4, 50), (6, 24), (7, 446)) t (Id, Value)
), CTE AS
( SELECT Id,
IdList = CAST(Id AS VARCHAR(MAX)),
CumulativeValue = Value,
ValueCount = 1
FROM TableA AS t
UNION ALL
SELECT T.ID,
IdList = CTE.IDList + ',' + CAST(t.ID AS VARCHAR(MAX)),
CumulativeValue = CTE.CumulativeValue + T.Value,
ValueCount = CTE.ValueCount + 1
FROM CTE
INNER JOIN TableA AS T
ON ',' + CTE.IDList + ',' NOT LIKE '%,' + CAST(t.ID AS VARCHAR(MAX)) + ',%'
AND CTE.ID < T.ID
WHERE T.Value + CTE.CumulativeValue <= 200
), Top1 AS
( SELECT TOP 1 IdList, CumulativeValue
FROM CTE
ORDER BY ValueCount DESC, CumulativeValue DESC -- CHANGE TO MEET YOUR NEEDS
)
SELECT *
FROM TableA AS t
WHERE EXISTS
( SELECT 1
FROM Top1
WHERE ',' + Top1.IDList + ',' LIKE '%,' + CAST(t.ID AS VARCHAR(MAX)) + ',%'
);
```
This is not very efficient, but I can't see a better way at the moment.
This returns
```
Id Value
1 100
2 63
6 24
```
This is the closest to 200 you can get with the most possible rows. Since there are multiple ways of achieving "x number of rows that have a sum of less than 200", there are also multiple ways of writing the query. You would need to be more specific about what your preference of combination is in order to get the exact answer you need.
|
Select x rows from table having total < y SQL Server
|
[
"",
"sql",
"sql-server",
"sum",
""
] |
I have 3 tables: customer, car and customer\_has\_car. I want to get the customers who have only red cars in the given scenario, not both red and green.
> My tables information given below:
[](https://i.stack.imgur.com/NH68a.png)
> Output should be :
Jhon ,
Ann
Any suggestion please...
|
If MAX(color) = MIN(color), then there is only one distinct value for color, and you do not need to specify any other color in the query.
```
SELECT
c.cus_id
, c.cus_name
, c.tel
FROM customer c
INNER JOIN customer_has_car chc
ON c.cus_id = chc.cus_id
INNER JOIN car
ON chc.id_car = car.id_car
GROUP BY c.cus_id
, c.cus_name
, c.tel
HAVING MIN(car.color) = 'red'
AND MAX(car.color) = 'red'
;
```
|
## Schema:
```
create table customer
( cus_id int not null,
cus_name varchar(20) not null,
tel varchar(20) not null
);
create table car
( id_car int not null,
car_name varchar(20) not null,
`year` int not null,
color varchar(20) not null
);
create table customer_has_car
( cus_id int not null,
id_car int not null,
`date` date not null
);
insert car (id_car,car_name,`year`,color) values (1,'corolla',2012,'red');
insert car (id_car,car_name,`year`,color) values (2,'corolla',2013,'blue');
insert car (id_car,car_name,`year`,color) values (3,'corolla',2014,'red');
insert car (id_car,car_name,`year`,color) values (4,'corolla',2003,'green');
insert customer(cus_id,cus_name,tel) values (1,'jhon','012345');
insert customer(cus_id,cus_name,tel) values (2,'Ann','875646');
insert customer(cus_id,cus_name,tel) values (3,'Sam','446363');
insert customer(cus_id,cus_name,tel) values (4,'Cristina','356561');
insert customer_has_car(cus_id,id_car,date) values (1,1,'2015-01-08');
insert customer_has_car(cus_id,id_car,date) values (1,2,'2015-07-08');
insert customer_has_car(cus_id,id_car,date) values (2,1,'2015-08-08');
insert customer_has_car(cus_id,id_car,date) values (3,4,'2015-09-08');
insert customer_has_car(cus_id,id_car,date) values (4,3,'2015-10-08');
insert customer_has_car(cus_id,id_car,date) values (4,4,'2015-11-08');
```
## Query:
```
-- has red cars but not green:
select cus_id,cus_name,tel
from ( select c.cus_id,c.cus_name,c.tel,group_concat(car.color) as colors
from customer c
join customer_has_car chc
on chc.cus_id=c.cus_id
join car
on car.id_car=chc.id_car
group by c.cus_id,c.cus_name) inr
where find_in_set('green',colors)=0 and find_in_set('red',colors)>0;
+--------+----------+--------+
| cus_id | cus_name | tel |
+--------+----------+--------+
| 1 | jhon | 012345 |
| 2 | Ann | 875646 |
+--------+----------+--------+
```
|
Select customers only got red cars in mysql (not red and green both )
|
[
"",
"mysql",
"sql",
"select",
""
] |
I'm quite new to SQL and working on a query that has been thoroughly defeating me for a while now. I come to this site often - it's a terrific resource thanks to all of your expertise, and generally I find what I need, but this time I think my query is a bit too specific and I've not found something applicable. Could someone give me a hand, please?
I have two tables: one Client table and one Contact (aka appointment) table. What I need to find is each client's most recent appointment day (before a certain date, in this case '11/08/2015') where the Outcome for every appointment on that day is NOT a '2'. Each client may have more than one appointment on a single day, and an Outcome of '2' for any of those appointments means that we have to ignore the whole day and move back to the next most recent day.
For example, Client '3' should have a returned appointment date of '01/07/2015' (and just one row) and not the two rows for '16/07/2015', because one of the appointments on '16/07/2015' had an Outcome of '2'. All other values for Outcome are acceptable (including NULL), just not '2'.
The multiple-appointments-on-the-same-day part is what I'm finding tricky. I can find the latest appointment day using a SELECT MAX (or TOP 1) statement, but when I add a "<> '2'" condition it still returns days that have an Outcome '2' appointment, because other appointments on that same day have a different Outcome. I've been playing around with my tables and GROUP BY and NOT EXISTS, but I don't seem to be making any headway.
```
Contact
ClientID AppDate Outcome
1 30/07/2015 17:00 2
1 01/07/2015 17:00 3
2 03/03/2015 16:00 NULL
2 01/03/2015 16:00 NULL
3 16/07/2015 15:40 6
3 16/07/2015 15:40 2
3 01/07/2015 15:40 3
4 05/08/2015 12:30 6
4 05/08/2015 12:30 2
4 01/08/2015 12:30 3
5 23/07/2015 15:30 2
5 23/07/2015 15:30 NULL
5 01/07/2015 15:30 4
6 20/07/2015 10:10 NULL
6 20/07/2015 10:10 2
6 01/07/2015 10:10 6
7 23/07/2015 15:40 2
7 01/07/2015 15:40 1
7 23/06/2015 15:40 8
8 13/07/2015 11:30 2
8 13/07/2015 11:30 6
8 01/07/2015 11:30 2
8 01/06/2015 11:30 3
9 29/07/2015 17:00 3
9 29/07/2015 17:00 6
10 14/07/2015 11:00 NULL
10 01/07/2015 11:00 5
Client
ClientID Forename Surname
1 I B
2 J B
3 S C
4 S T
5 P C
6 K D
7 P E
8 P H
9 S F
10 A G
```
Apologies if I'm missing something glaringly obvious! Thanks for reading and for any responses. I attach my truncated query for your general amusement...
```
SELECT
cli.ClientID ,
cli.Forename ,
cli.Surname ,
con.AppDate ,
con.Outcome
FROM
Client AS cli
INNER JOIN
Contact AS con
ON cli.ClientID = con.ClientID
AND con.AppDate =
(SELECT MAX(con1.AppDate)
FROM Contact AS con1
WHERE con.ClientID = con1.ClientID
AND con1.AppDate < '11/08/2015 00:00:00'
AND con1.Outcome <> '2')
ORDER BY
cli.ClientID
```
EDIT:
Thank you to Mr Linoff for the Cross Apply query, it worked perfectly.
Sorry that I didn't include the expected output earlier. For reference (for anyone else working with a similar problem in future) I was looking to obtain:
```
Appointments
Client ID Act Date and Time Outcome
1 01/07/2015 17:00 3
2 03/03/2015 16:00 NULL
3 01/07/2015 15:40 3
4 01/08/2015 12:30 3
5 01/07/2015 15:30 4
6 01/07/2015 10:10 6
7 01/07/2015 15:40 1
8 01/06/2015 11:30 3
9 29/07/2015 17:00 3
9 29/07/2015 17:00 6
10 14/07/2015 11:00 NULL
```
|
I think `cross apply` is the best approach to this:
```
select c.*, con.*
from client c cross apply
(select top 1 con.*
from (select con.*,
sum(case when Outcome = 2 then 1 else 0 end) over (partition by ClientId, AppDate) as num2s
from contact con
where con.ClientId = c.ClientId and
con.AppDate < '2015-11-08'
) con
where num2s = 0
order by AppDate desc
) con;
```
In this case, `cross apply` works a lot like a correlated subquery, but you can return multiple values. The subquery uses window functions to count the number of "2" on a given day and the rest of the logic should be pretty obvious.
This returns one row from the most recent date with appropriate appointments. If you want multiple such rows, use `with ties`.
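The same logic can be sketched without `CROSS APPLY` (which is SQL-Server-only) in sqlite3: exclude any day that has an Outcome-2 row, then take each client's latest remaining date. This uses a subset of the sample data from the question.

```python
# For each client: latest AppDate before the cutoff such that no
# appointment sharing that AppDate has Outcome 2.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contact (ClientID INTEGER, AppDate TEXT, Outcome INTEGER)")
con.executemany("INSERT INTO contact VALUES (?,?,?)", [
    (1, '2015-07-30 17:00', 2), (1, '2015-07-01 17:00', 3),
    (3, '2015-07-16 15:40', 6), (3, '2015-07-16 15:40', 2),
    (3, '2015-07-01 15:40', 3)])
rows = con.execute("""
    SELECT con.* FROM contact con
    WHERE con.AppDate = (
        SELECT MAX(c1.AppDate) FROM contact c1
        WHERE c1.ClientID = con.ClientID
          AND c1.AppDate < '2015-08-11'
          AND NOT EXISTS (SELECT 1 FROM contact c2
                          WHERE c2.ClientID = c1.ClientID
                            AND c2.AppDate = c1.AppDate
                            AND c2.Outcome = 2))
    ORDER BY con.ClientID
""").fetchall()
print(rows)  # [(1, '2015-07-01 17:00', 3), (3, '2015-07-01 15:40', 3)]
```

This matches the expected output for clients 1 and 3: the '2015-07-16' day is skipped entirely for client 3 because one of its appointments has Outcome 2.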
|
What you need to do is have a condition that picks out all the dates where Outcome is 2 and filters them out.
Something like this:
```
WITH ClientCTE AS
(
    SELECT MAX(con1.AppDate) AS AppDate, con1.ClientID
    FROM Contact AS con1
    WHERE con1.AppDate < '11/08/2015 00:00:00'
    AND con1.Outcome <> '2'
    AND con1.AppDate NOT IN (SELECT AppDate FROM Contact
                             WHERE Outcome = '2'
                             AND ClientID = con1.ClientID)
    GROUP BY con1.ClientID
)
SELECT cli.*, con.*
FROM Client AS cli
INNER JOIN Contact AS con
    ON cli.ClientID = con.ClientID
INNER JOIN ClientCTE AS CTE
    ON cli.ClientID = CTE.ClientID
    AND CTE.AppDate = con.AppDate
```
Let me know if it works
|
SQL Find latest date where condition doesn't exist
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to change the datatype in a column in a table from `Float` (null) to `Varchar(25)` (null). The largest float in the current data is 12 digits, but more digits may have to be added in the future, hence `varchar(25)`.
The current data in the column is phone numbers. The change must be made to allow for preceding zeros.
However, I'm having some difficulties doing this.
I've tried the following:
```
ALTER TABLE Customer
ALTER COLUMN Phonenumber varchar(25)
```
This does not give me the desired result.
For instance `1549779498` becomes `1.54978e+009`
Then I've tried something in the lines of this:
* Creating a new (temporary) column `PhonenumberVarchar`
* Converting and copying the data from one column to another
* Deleting the old column
* Renaming the new column to the old name
Code:
```
ALTER TABLE Customer
ADD PhonenumberVarchar varchar(25)
UPDATE Customer
SET PhonenumberVarchar = STR(Phonenumber, 12, 0)
ALTER TABLE Customer
DROP COLUMN Phonenumber
EXEC sp_rename 'Customer.PhonenumberVarchar', 'Phonenumber', 'COLUMN'
```
This doesn't work either:
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
And now it's late and my head hurts...
Can anyone help?
NOTE:
The table is rather large, some 1.5 million rows, so performance can be an issue.
Using SQL Server.
|
You can fix this by going through a `decimal` first:
```
ALTER TABLE Customer ALTER COLUMN Phonenumber decimal(25, 0);
ALTER TABLE Customer ALTER COLUMN Phonenumber varchar(25);
```
You get the same behavior when using `cast()`:
```
select cast(cast(1549779498 as float) as varchar(255))
```
So the fix is illustrated by:
```
select cast(cast(cast(1549779498 as float) as decimal(25)) as varchar(255))
```
The [documentation](https://msdn.microsoft.com/en-us/library/ms190273.aspx) for `alter table alter column` explicitly references `cast()`:
> Some data type changes may cause a change in the data. For example,
> changing an nchar or nvarchar column to char or varchar may cause the
> conversion of extended characters. For more information, **see CAST
> and CONVERT (Transact-SQL)**. Reducing the precision or scale of a
> column may cause data truncation.
EDIT:
After you load the data, I would suggest that you also add a check constraint:
```
check (PhoneNumber not like '%[^0-9]%')
```
This will ensure that numbers -- and only numbers -- remain in the column in the future.
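The mangling itself is easy to reproduce outside SQL Server: the default float-to-string conversion keeps only 6 significant digits (like C's `%g`), so a 10-digit phone number collapses into scientific notation, while routing through an exact integer/decimal type preserves every digit. A Python illustration:

```python
# Six-significant-digit formatting reproduces the '1.54978e+009'-style
# mangling; converting through an exact type keeps all digits.
value = 1549779498.0
print(format(value, '.6g'))  # 1.54978e+09 -- the mangled form
print(str(int(value)))       # 1549779498  -- exact, like the decimal detour
```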
|
Direct float to varchar conversions can be tricky. Merely altering the column data type won't be sufficient.
STEP 1: Take a backup of your data table.
```
SELECT * INTO Customer_Backup FROM Customer
```
STEP 2: Drop and recreate your original data table using SQL Server scripts, OR alter the data type of the column:
```
ALTER TABLE Customer
ALTER COLUMN Phonenumber varchar(25)
```
STEP 3: In your scenario, since phone numbers don't have decimal values in the float column, we can convert them first to int and then to varchar like below:
```
INSERT into Customer (Phonenumber)
SELECT convert (varchar(25), convert(int, [Phonenumber])) as [Phonenumber]
FROM Customer_Backup
```
|
SQL Converting Column type from float to varchar
|
[
"",
"sql",
"sql-server",
"copy",
"sql-types",
""
] |
I'm looking for an `awk` command, or similar tool, to transform standard well formatted SQL insert script into csv file.
By standard I mean there is no database vendor specific stuff anywhere.
By well formatted I mean the case where each line of the sql script has a full column set to insert, even if there are NULLs. Also the order of fields to insert is the same.
Sample input SQL script:
```
INSERT INTO tbl VALUES (1, 'asd', 923123123, 'zx');
INSERT INTO tbl VALUES (1, NULL, 923123123, 'zxz');
INSERT INTO tbl VALUES (3, 'asd3', 923123123, NULL);
```
Optionally:
```
INSERT INTO tbl (colA, colB, colC, colD) VALUES (1, 'asd', 923123123, 'zx');
```
Expected output should be a csv file:
```
1,'asd',923123123,'zx'
1,,923123123,'zxz'
3,'asd3',923123123,
```
Looking for a performance-efficient solution.
|
```
$ awk -F' *[(),]+ *' -v OFS=, '{for (i=2;i<NF;i++) printf "%s%s", ($i=="NULL"?"":$i), (i<(NF-1)?OFS:ORS)}' file
1,'asd',923123123,'zx'
1,,923123123,'zxz'
3,'asd3',923123123,
```
I'd recommend you test all potential solutions with this input:
```
$ cat file
INSERT INTO tbl VALUES (1, NULL, 923123123, 'foo NULL bar');
$ awk -F' *[(),]+ *' -v OFS=, '{for (i=2;i<NF;i++) printf "%s%s", ($i=="NULL"?"":$i), (i<(NF-1)?OFS:ORS)}' file
1,,923123123,'foo NULL bar'
```
to make sure the string `NULL` and blank chars are not deleted when they appear as part of a literal string.
|
Try this with GNU grep and sed:
```
grep -oP '\(\K[^)]*(?=\);)' file | sed 's/NULL//g;s/ //g'
```
Output from all four lines:
```
1,'asd',923123123,'zx'
1,,923123123,'zxz'
3,'asd3',923123123,
1,'asd',923123123,'zx'
```
or only with GNU sed:
```
sed 's/.*(\([^)]*\));/\1/;s/NULL//g;s/ //g' file
```
Output from all four lines:
```
1,'asd',923123123,'zx'
1,,923123123,'zxz'
3,'asd3',923123123,
1,'asd',923123123,'zx'
```
|
Transform SQL insert script into CSV format
|
[
"",
"sql",
"csv",
"awk",
"sed",
"export-to-csv",
""
] |
I want to perform a task if the input string contain any accented characters else do another task in SQL. Is there any way to check this condition in SQL ?
Eg:
```
@myString1 = 'àéêöhello!'
IF(@myString1 contains any accented characters)
Task1
ELSE
Task2
```
|
SQL Fiddle: <http://sqlfiddle.com/#!6/9eecb7d/1607>
```
declare @a nvarchar(32) = 'àéêöhello!'
declare @b nvarchar(32) = 'aeeohello!'
select case
when (cast(@a as varchar(32)) collate SQL_Latin1_General_Cp1251_CS_AS) = @a
then 0
else 1
end HasSpecialChars
select case
when (cast(@b as varchar(32)) collate SQL_Latin1_General_Cp1251_CS_AS) = @b
then 0
else 1
end HasSpecialChars
```
(based on solution here: [How can I remove accents on a string?](https://stackoverflow.com/questions/3578582/how-can-i-remove-accents-on-a-string))
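Outside of SQL, the same accent check can be sketched via Unicode decomposition (a Python illustration, not a T-SQL solution):

```python
import unicodedata

def has_accents(s):
    # NFD decomposition splits 'é' into 'e' plus a combining accent;
    # any combining mark therefore signals an accented character.
    return any(unicodedata.combining(c)
               for ch in s
               for c in unicodedata.normalize("NFD", ch))

task = "Task1" if has_accents("àéêöhello!") else "Task2"
```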
|
I use this function to convert text (mainly foreign family names) to more comparable plain Latin (with some individual goodies - feel free to implement your own...).
Maybe you can use this too: just compare whether your string is the same as the one passed back.
```
CREATE FUNCTION [dbo].[GetRunningNumbers](@anzahl INT=1000000, @StartAt INT=0)
RETURNS TABLE
AS
RETURN
SELECT TOP (ISNULL(@anzahl,1000000)) ROW_NUMBER() OVER(ORDER BY A) -1 + ISNULL(@StartAt,0) AS Nmbr
FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS tblA(A)
,(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS tblB(B)
,(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS tblC(C)
,(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS tblD(D)
,(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS tblE(E)
,(VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS tblF(F);
GO
CREATE FUNCTION [dbo].[GetTextPlainLatin]
(
@Txt VARCHAR(MAX)
,@CaseSensitive BIT
,@KeepNumbers BIT
,@NonCharReplace VARCHAR(100),@MinusReplace VARCHAR(100)
,@PercentReplace VARCHAR(100),@UnderscoreReplace VARCHAR(100) --for SQL-Masks
,@AsteriskReplace VARCHAR(100),@QuestionmarkReplace VARCHAR(100) --for SQL-Masks (Access-Style)
)
RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @txtTransformed VARCHAR(MAX)=(SELECT LTRIM(RTRIM(CASE WHEN ISNULL(@CaseSensitive,0)=0 THEN LOWER(@Txt) ELSE @Txt END)));
RETURN
(
SELECT Repl.ASCII_Code
FROM dbo.GetRunningNumbers(LEN(@txtTransformed),1) AS pos
--ASCII-Codes of all characters in your text
CROSS APPLY(SELECT ASCII(SUBSTRING(@txtTransformed,pos.Nmbr,1)) AS ASCII_Code) AS OneChar
--re-code
CROSS APPLY
(
SELECT CASE
WHEN OneChar.ASCII_Code BETWEEN ASCII('A') AND ASCII('Z') THEN CHAR(OneChar.ASCII_Code)
WHEN OneChar.ASCII_Code BETWEEN ASCII('a') AND ASCII('z') THEN CHAR(OneChar.ASCII_Code)
WHEN OneChar.ASCII_Code BETWEEN ASCII('0') AND ASCII('9') AND @KeepNumbers=1 THEN CHAR(OneChar.ASCII_Code)
WHEN OneChar.ASCII_Code = ASCII('ƒ') THEN 'f'
WHEN OneChar.ASCII_Code = ASCII('Š') THEN 'S'
WHEN OneChar.ASCII_Code = ASCII('š') THEN 's'
WHEN OneChar.ASCII_Code = ASCII('ß') THEN 'ss'
WHEN OneChar.ASCII_Code = ASCII('Ä') THEN 'Ae'
WHEN OneChar.ASCII_Code = ASCII('ä') THEN 'ae'
WHEN OneChar.ASCII_Code = ASCII('Æ') THEN 'Ae'
WHEN OneChar.ASCII_Code = ASCII('æ') THEN 'ae'
WHEN OneChar.ASCII_Code = ASCII('Ö') THEN 'Oe'
WHEN OneChar.ASCII_Code = ASCII('ö') THEN 'oe'
WHEN OneChar.ASCII_Code = ASCII('Œ') THEN 'Oe'
WHEN OneChar.ASCII_Code = ASCII('œ') THEN 'oe'
WHEN OneChar.ASCII_Code = ASCII('Ü') THEN 'Ue'
WHEN OneChar.ASCII_Code = ASCII('ü') THEN 'ue'
WHEN OneChar.ASCII_Code = ASCII('Ž') THEN 'Z'
WHEN OneChar.ASCII_Code = ASCII('ž') THEN 'z'
WHEN OneChar.ASCII_Code = ASCII('×') THEN 'x'
WHEN OneChar.ASCII_Code BETWEEN ASCII('À') AND ASCII('Å') THEN 'A'
WHEN OneChar.ASCII_Code BETWEEN ASCII('à') AND ASCII('å') THEN 'a'
WHEN OneChar.ASCII_Code = ASCII('Ç') THEN 'C'
WHEN OneChar.ASCII_Code = ASCII('ç') THEN 'c'
WHEN OneChar.ASCII_Code BETWEEN ASCII('È') AND ASCII('Ë') THEN 'E'
WHEN OneChar.ASCII_Code BETWEEN ASCII('è') AND ASCII('ë') THEN 'e'
WHEN OneChar.ASCII_Code BETWEEN ASCII('Ì') AND ASCII('Ï') THEN 'I'
WHEN OneChar.ASCII_Code BETWEEN ASCII('ì') AND ASCII('ï') THEN 'i'
                        WHEN OneChar.ASCII_Code = ASCII('Ð') THEN 'D' --Icelandic Eth
                        WHEN OneChar.ASCII_Code = ASCII('ð') THEN 'd' --Icelandic eth
WHEN OneChar.ASCII_Code = ASCII('Ñ') THEN 'N'
WHEN OneChar.ASCII_Code = ASCII('ñ') THEN 'n'
WHEN OneChar.ASCII_Code BETWEEN ASCII('Ò') AND ASCII('Ö') THEN 'O'
WHEN OneChar.ASCII_Code BETWEEN ASCII('ò') AND ASCII('ö') THEN 'o'
WHEN OneChar.ASCII_Code = ASCII('Ø') THEN 'O'
WHEN OneChar.ASCII_Code = ASCII('ø') THEN 'o'
WHEN OneChar.ASCII_Code BETWEEN ASCII('Ù') AND ASCII('Ü') THEN 'U'
WHEN OneChar.ASCII_Code BETWEEN ASCII('ù') AND ASCII('ü') THEN 'u'
WHEN OneChar.ASCII_Code = ASCII('Ý') THEN 'Y'
WHEN OneChar.ASCII_Code = ASCII('ý') THEN 'y'
                        WHEN OneChar.ASCII_Code = ASCII('Þ') THEN 'Th' --Icelandic Thorn
                        WHEN OneChar.ASCII_Code = ASCII('þ') THEN 'th' --Icelandic thorn
WHEN OneChar.ASCII_Code = ASCII('Ÿ') THEN 'Y'
WHEN OneChar.ASCII_Code = ASCII('ÿ') THEN 'y'
--Special with "minus"
WHEN OneChar.ASCII_Code = ASCII('-') THEN ISNULL(@MinusReplace,ISNULL(@NonCharReplace,CHAR(OneChar.ASCII_Code)))
--Special with mask characters
WHEN OneChar.ASCII_Code = ASCII('%') THEN ISNULL(@PercentReplace,ISNULL(@NonCharReplace,CHAR(OneChar.ASCII_Code)))
WHEN OneChar.ASCII_Code = ASCII('_') THEN ISNULL(@UnderscoreReplace,ISNULL(@NonCharReplace,CHAR(OneChar.ASCII_Code)))
WHEN OneChar.ASCII_Code = ASCII('*') THEN ISNULL(@AsteriskReplace,ISNULL(@NonCharReplace,CHAR(OneChar.ASCII_Code)))
WHEN OneChar.ASCII_Code = ASCII('?') THEN ISNULL(@QuestionmarkReplace,ISNULL(@NonCharReplace,CHAR(OneChar.ASCII_Code)))
--replace others
ELSE ISNULL(@NonCharReplace,CHAR(OneChar.ASCII_Code))
END AS ASCII_Code
) AS Repl
FOR XML PATH(''),TYPE
).value('.','varchar(max)');
END
GO
SELECT dbo.GetTextPlainLatin('Case sensitive ÄÖ àéêöhello!',1,1,NULL,NULL,NULL,NULL,NULL,NULL);
SELECT dbo.GetTextPlainLatin('Case in-sensitive ÄÖ àéêöhello!',0,1,NULL,NULL,NULL,NULL,NULL,NULL);
GO
DROP FUNCTION dbo.GetTextPlainLatin
GO
DROP FUNCTION dbo.GetRunningNumbers;
```
|
Check If the string contains accented characters in SQL?
|
[
"",
"sql",
"sql-server",
"diacritics",
"non-ascii-characters",
""
] |
Ruby on rails has the t.timestamps method which creates two columns, created\_at and updated\_at. How can I make just the created\_at column?
This works
```
class CreateLinesSources < ActiveRecord::Migration
def change
create_table :lines_sources do |t|
t.timestamps null: false
end
end
end
```
I want both of these to work, but they fail:
```
class CreateLinesSources < ActiveRecord::Migration
def change
create_table :lines_sources do |t|
t.created_at null: false
end
end
end
```
and
```
class CreateLinesSources < ActiveRecord::Migration
def change
create_table :lines_sources do |t|
t.timestamps null: false
remove_column updated_at
end
end
end
```
|
Both attempts are close but contain syntax errors.
The problem with the first attempt is that `t.` should be followed by the data type, not the column name.
```
class CreateLinesSources < ActiveRecord::Migration
def change
create_table :lines_sources do |t|
t.datetime :created_at, null: false
end
end
end
```
The problem with the second attempt is that `remove_column` does not work inside the `create_table` block. You can move it outside the block and provide the correct syntax to remove the column.
```
class CreateLinesSources < ActiveRecord::Migration
def change
create_table :lines_sources do |t|
t.timestamps null: false
end
  remove_column :lines_sources, :updated_at, :datetime
end
end
```
|
```
t.datetime :created_at, null: false
```
Just like any other column. Rails will still take care of magic updating because of column name.
|
Ruby Rails - Make Table With created_at Column
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
"timestamp",
""
] |
I need some help on sql-joins.
I have 3 select statements, which give me an output like this:
```
select-statement A:
location amount_A
7234 17
7456 2
select-statement B:
location number_x
7234 4455
7456 555
select-statement C:
location errors
7234 1
7456 44537
```
I want to have the result in one table like this:
```
location Amount_A number_x errors
7234 17 4455 1
7456 2 555 44537
```
What is the best and/or easiest way to achieve this? Each select statement uses different tables.
these are the statements:
```
A: select substring(column_a for 4) location, count(*) Amount_A from table_a where column_a like '7%' group by location ;
B: select substring(e.column_xy for 4), count(*) number_x from table_b b, table_e e , table_c c where b.stationextension_id = e.id and b.id = c.id and ( c.column_h in ( 'value_a', 'value_b' ) ) group by substring(e.column_xy for 4) ;
C: select substring(name from 1 for 4), count(*) from errors group by substring(name from 1 for 4) ;
```
|
Try this:
```
select
a.location, a.amount_A,
b.number_x,
c.errors
from (
  select_statement_A
) as a
join (
  select_statement_B
) as b on a.location = b.location
join (
  select_statement_C
) as c on a.location = c.location
```
If all rows from `select_statement_A` need to be retrieved, use a `left join`.
All rows from `select_statement_A` will then be returned with the corresponding data from `select_statement_B` and `select_statement_C`; where no match is found for the `join` condition, `null` is substituted.
```
select
  a.location, a.amount_A,
  b.number_x,
  c.errors
from (
  select_statement_A
) as a
left join (
  select_statement_B
) as b on a.location = b.location
left join (
  select_statement_C
) as c on a.location = c.location
```
For all data to be retrieved use `full join`.
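The difference between the two variants can be checked against a tiny in-memory SQLite database (illustrative data; a third location with no match is added to show the `null` substitution):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (location INT, amount_A INT);
CREATE TABLE b (location INT, number_x INT);
INSERT INTO a VALUES (7234, 17), (7456, 2), (9999, 5);
INSERT INTO b VALUES (7234, 4455), (7456, 555);
""")
# inner join drops the unmatched location entirely...
inner_rows = con.execute(
    "SELECT a.location, a.amount_A, b.number_x "
    "FROM a JOIN b ON a.location = b.location ORDER BY a.location"
).fetchall()
# ...while a left join keeps it with NULL for the missing side.
left_rows = con.execute(
    "SELECT a.location, a.amount_A, b.number_x "
    "FROM a LEFT JOIN b ON a.location = b.location ORDER BY a.location"
).fetchall()
```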
|
Use an `Inner Join` to connect all three queries.
Also use explicit `Inner Join` syntax to join two tables, which is more readable than the old-style comma-separated join.
The `Substring` function takes slightly different arguments across databases; I assume the ones in the original query are examples - pass the proper values to `Substring`.
```
select a.location,Amount_A ,number_x,errors from
(
SELECT substring(column_a for 4) location, count(*) Amount_A FROM table_a WHERE column_a LIKE '7%' GROUP BY location
) A
inner join
(
SELECT substring(e.column_xy for 4) location, count(*) number_x
FROM table_b b
inner join table_e e on b.stationextension_id = e.id
inner join table_c c on b.id = c.id
where c.column_h IN ( 'value_a', 'value_b' )
GROUP BY substring(e.column_xy FOR 4)
) B on a.location = b.location
inner join
(
SELECT substring(name from 1 FOR 4) location, count(*) errors FROM errors GROUP BY substring(name FROM 1 FOR 4)
)
C on a.location = c.location
```
|
SQL join on different select count result-tables
|
[
"",
"sql",
"sql-server",
"select",
"join",
"count",
""
] |
I need a query that selects rows in a table searching for the maximum value in a given column for each unique pair from two other columns. Consider this very simple example table:
```
a b c d
1 1 0 1
1 1 2 2
1 2 1 1
1 2 3 2
2 1 7 5
2 1 6 4
```
Unique pairs are formed from columns a and b. We look for the maximum in column d. The result returns the corresponding columns a, b and c. So, for the above sample data, the returned results should look like this:
```
a b c
1 1 2
1 2 3
2 1 7
```
I have tried nested queries and joins and I haven't reached my goal. I also performed an Google search looking for a discussion and consulted several database books, to no avail. I see plenty of examples of nested queries that involve multiple tables, but none that involve the same table.
It seems the problem requires three passes (nesting):
1) get a set of rows of unique a & b pairs
2) for each row in that set find the maximum value of d: return a set of a, b and d rows
3) for each triplet a, b, and d values, query returning a, b and c columns.
Help is much appreciated.
|
Got you wrong at first, sorry.
You can use GROUP BY on the 2 columns to find the max, and then join back to the original table.
In your example, this query does the work:
```
SELECT t2.a,t2.b,t2.c
FROM (SELECT a, b, max(d) as max_d
FROM TABLE_NAME
GROUP BY a,b) t1
inner join TABLE_NAME t2 on(t1.a=t2.a AND t1.b=t2.b AND t1.max_d=t2.d)
```
|
In SQLite 3.7.11 or later, you can simply SELECT values from columns that are neither grouped nor aggregated, and the output will come from the same row that matches MAX/MIN:
```
SELECT a, b, c, MAX(d)
FROM MyTable
GROUP BY a, b
```
Note: this does not work if you omit the `MAX(d)`.
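This behaviour is easy to verify with the sample data from the question (Python's sqlite3 module bundles a modern SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (a INT, b INT, c INT, d INT)")
con.executemany("INSERT INTO MyTable VALUES (?,?,?,?)", [
    (1, 1, 0, 1), (1, 1, 2, 2), (1, 2, 1, 1),
    (1, 2, 3, 2), (2, 1, 7, 5), (2, 1, 6, 4),
])
# Bare column c comes from the same row that supplied MAX(d).
rows = con.execute(
    "SELECT a, b, c, MAX(d) FROM MyTable GROUP BY a, b ORDER BY a, b"
).fetchall()
```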
|
SQLite SQL query - 3 level/nesting on same table?
|
[
"",
"sql",
"sqlite",
""
] |
I have run the query above for all columns in a massive table (Billion rows) and everything is fine except a couple which are returning 0. How is this possible?
|
`COUNT(DISTINCT)` can return zero under two circumstances. The first is that all the values for the column/expression evaluate to `NULL`. The second is that the `WHERE` clause (or `JOIN`) filters out all rows.
If you have no `WHERE` or `JOIN`, then the values are all `NULL` for `<Columnb>`.
|
To validate all the records in the column are having NULL values you can try the following query:
```
SELECT <columB> FROM <TABLE> WITH (NOLOCK) WHERE <columB> IS NOT NULL
```
It should return 0 records.
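Both checks can be reproduced in an in-memory SQLite database (`WITH (NOLOCK)` is SQL Server specific and omitted here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (columB TEXT)")
# Every value in the column is NULL.
con.executemany("INSERT INTO t VALUES (?)", [(None,), (None,), (None,)])
# COUNT(DISTINCT) ignores NULLs, so the distinct count is zero...
(distinct_count,) = con.execute(
    "SELECT COUNT(DISTINCT columB) FROM t"
).fetchone()
# ...and the validation query confirms there are no non-NULL rows.
(not_null_count,) = con.execute(
    "SELECT COUNT(*) FROM t WHERE columB IS NOT NULL"
).fetchone()
```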
|
Why "SELECT COUNT(DISTINCT <Column>) FROM <Table>" return 0?
|
[
"",
"sql",
"database",
""
] |
I am trying to write an SQL query that will return Table1, which has 10 columns. This table consists of a primary key id, 4 foreign key Id columns, and 5 other columns that I want to return but not change. The goal is to do a join to replace the foreign key Ids with their descriptions that are held in other tables.
Here is one attempt with the first FK Id:
```
Select * from Table1 t1
left join Table2 t2
on t1.BranchId = t2.BranchId;
```
This left join returns the description from table2, but does not replace it.
Here is another with the first FK Id:
```
Select t2.BranchName from Table1 t1
left join Table2 t2
on t1.BranchId = t2.BranchId;
```
This returns the name I want, but does not return table1 fully.
For the sake of an example you could pretend that OtherName3, OtherName4, OtherName5 are in tables Table3, Table4, Table5, respectively.
This may seem trivial for experienced SQL devs, but I am having a hard time figuring out the syntax.
Thanks!
|
I don't know what you mean by 'replace' but you just need to qualify what columns from which table you want. That goes for all tables you are joined to, especially if they have the same column name in multiple tables. I put junk columns in since I don't know your tables but you should get the general idea.
```
Select t2.BranchName, t1.BranchId, t1.Name, t1.Amount, t2.BranchLocation from Table1 t1
left join Table2 t2
on t1.BranchId = t2.BranchId;
```
|
I'm not sure what you mean by replace it.
I think you just need to list out all the columns you want:
```
Select t1.col1, t1.col2, t1.col3, . . .,
t2.name
from Table1 t1 left join
Table2 t2
on t1.BranchId = t2.BranchId;
```
|
SQL join to return a table with multiple columns from other tables replacing its own
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"join",
""
] |
I have a SQL query:
```
select
t1.name, t2.address
from
Table1 t1
inner join
Table2 t2 on t1.id = t2.id;
```
and a map:
```
Map<String,String> map = new HashMap<String,String>();
map.put("testTable", "hive.DB1");
map.put("testTable", "mongo.DB2");
```
I just want using this map the above mentioned query to be converted in :
```
select
t1.name, t2.address
from
hive.DB1.`Table1` t1
inner join
mongo.DB2.`Table2` t2 on t1.id = t2.id;
```
Which open source SQL parser is suitable for this purpose.
|
I used [JSqlParser](https://github.com/JSQLParser/JSqlParser).
I extracted table names from SQL query:
```
Statement statement = CCJSqlParserUtil.parse("select t1.name, t2.address from Table1 t1 inner join Table2 t2 on t1.id = t2.id;");
Select selectStatement = (Select) statement;
TablesNamesFinder tablesNamesFinder = new TablesNamesFinder();
List<String> tableList = tablesNamesFinder.getTableList(selectStatement);
```
Then I modified Table Names according to the map.
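If a full parser is overkill, a naive regex rewrite along the same lines can be sketched in Python - note the keys here are table names (unlike the map in the question), and this would also (incorrectly) rewrite matches inside string literals:

```python
import re

mapping = {"Table1": "hive.DB1", "Table2": "mongo.DB2"}
sql = ("select t1.name, t2.address from Table1 t1 "
       "inner join Table2 t2 on t1.id = t2.id;")

def qualify(match):
    # Prefix the matched table name with its schema and backtick it.
    name = match.group(0)
    return "%s.`%s`" % (mapping[name], name)

# Whole-word match on any mapped table name.
pattern = r"\b(%s)\b" % "|".join(map(re.escape, mapping))
rewritten = re.sub(pattern, qualify, sql)
```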
|
Do you really need a SQL parser?
Why not just use sed? E.g. for one table:
```
sed -e "s/Table1/hive.DB1.TABLE1/g" C:\test.sql > C:\new.sql
```
|
SQL parser to alter table name
|
[
"",
"sql",
"parsing",
"sql-parser",
""
] |
I have some problems with joining tables:
```
Table A -> ID,Col1,Col2,Col3
Table B -> Rank , ColX , A_ID (Relationship with A.ID)
```
I want to take the highest `Rank` (for each `A_ID`, like a group by `A_ID`) from table B.
My results must be something like `A.ID, Col1, Col2, Col3, ColX` - how can I do that?
And I want my result count to equal the `A.ID` count.
---
**TableA**
```
+--------------------+
| ID|Col1|Col2|Col3| |
+--------------------+
| 1 | C1 | C2 | C3 |
| 2 | C1 | C2 | C3 |
+--------------------+
```
**TABLE\_B**
```
+-----------------------------+
| ID| COL_X |RANK |A_ID| |
+-----------------------------+
| 1 | SomeValue | 1 | 1 |
| 2 | some22222 | 2 | 1 |
| 3 | SOMEXXXX | 3 | 1 |
| 4 | SOMEVAL | 1 | 2 |
| 5 | VALUE | 2 | 2 |
+-----------------------------+
```
**Expected Output:**
```
+--------------------------------------------------------------------+
| ID| Col1| Col2 | Col3| COLX |
+--------------------------------------------------------------------+
| 1 | C1 | C2 | C3 | SOMEXXXX (Higher Rank of TableB-> A_ID = 1) |
| 2 | C1 | C2 | C3 | VALUE (Higher Rank of TableB-> A_ID = 2) |
+--------------------------------------------------------------------+
```
|
You could easily do this using a `subquery` by first finding the `max` for each `A_ID` and then joining to tableA and TableB to get your desired rows:
```
SELECT a.ID,
a.col1,
a.Col2,
a.Col3,
b1.Col_X
FROM (
SELECT a_id
,max(rank) AS MaxRank
FROM tableb
GROUP BY a_id
) b
INNER JOIN tablea a ON a.id = b.a_id
INNER JOIN tableb b1 ON b.a_id = b1.a_id AND b1.rank = b.MaxRank
ORDER BY a.ID;
```
[`SQL Fiddle Demo`](http://sqlfiddle.com/#!3/d3f22/7/0)
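The subquery-plus-join approach can also be checked against the sample data in SQLite (`RANK` is quoted since it is also a window-function name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tablea (ID INT, Col1 TEXT, Col2 TEXT, Col3 TEXT);
CREATE TABLE tableb (ID INT, COL_X TEXT, "RANK" INT, A_ID INT);
INSERT INTO tablea VALUES (1,'C1','C2','C3'), (2,'C1','C2','C3');
INSERT INTO tableb VALUES
 (1,'SomeValue',1,1), (2,'some22222',2,1), (3,'SOMEXXXX',3,1),
 (4,'SOMEVAL',1,2), (5,'VALUE',2,2);
""")
# Find the max rank per A_ID, then join back for the matching COL_X.
rows = con.execute("""
SELECT a.ID, a.Col1, a.Col2, a.Col3, b1.COL_X
FROM (SELECT A_ID, MAX("RANK") AS MaxRank
      FROM tableb GROUP BY A_ID) b
INNER JOIN tablea a ON a.ID = b.A_ID
INNER JOIN tableb b1 ON b1.A_ID = b.A_ID AND b1."RANK" = b.MaxRank
ORDER BY a.ID
""").fetchall()
```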
|
I'm thinking you want to take the max rank from your table b for each row in table a?
There's lots of different ways of approaching this. Here's one simple one:
```
with maxCTE as
(select
a_id,
max(rank) as MaxRank
from
tableb
group by
a_id
)
select
*
from
tablea a
inner join tableb b
on a.id = b.a_id
inner join maxcte c
on b.a_id = c.a_id
and b.rank = c.MaxRank
```
[SQLFiddle](http://sqlfiddle.com/#!3/d3f22/2)
Basically, the CTE identifies the max rank for each a\_id, then we join that back to tableb to get the details about that row.
|
Simple SQL Questions with join
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I am trying to get a `SUM` of `Order` `amount`s converted to a fixed currency. The `amount`s in `Order` table can be of different currencies.
If the `IsBase` flag in `Currency` table is `1`, then I want to divide with the `Rate`, else I want to multiply with the `Rate`.
```
SELECT CASE WHEN c.isBase = 1
THEN SUM(o.Amount / c.Rate)
ELSE SUM(o.Amount * c.Rate)
END
FROM Orders o JOIN Currency c
ON o.Currency = c.Currency
```
This does not work and gives error:
> Column c.isBase' is invalid in the select list because it is not
> contained in either an aggregate function or the GROUP BY clause.
But the said column is not part of `SELECT`. It is part of a condition in `SELECT`. How can I make it work if I want to have the `SUM` like this?
|
```
SELECT SUM(CASE WHEN c.isBase = 1
THEN o.Amount / c.Rate
ELSE o.Amount * c.Rate
END) AS answer
FROM Orders o JOIN Currency c
ON o.Currency = c.Currency
```
|
You should put your `CASE` expression inside the `SUM` aggregate function:
```
SELECT
SUM(CASE WHEN c.isBase = 1 THEN o.Amount / c.Rate ELSE o.Amount * c.Rate END)
FROM Orders o
JOIN Currency c
ON o.Currency = c.Currency
```
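A quick check of the `SUM(CASE ...)` form with made-up rates (100 at base rate 2 gives 50; 50 at non-base rate 3 gives 150):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Orders (Currency TEXT, Amount REAL);
CREATE TABLE Currency (Currency TEXT, Rate REAL, isBase INT);
INSERT INTO Orders VALUES ('USD', 100.0), ('EUR', 50.0);
INSERT INTO Currency VALUES ('USD', 2.0, 1), ('EUR', 3.0, 0);
""")
# The CASE sits inside the aggregate, so no GROUP BY is needed.
(total,) = con.execute("""
SELECT SUM(CASE WHEN c.isBase = 1 THEN o.Amount / c.Rate
            ELSE o.Amount * c.Rate END)
FROM Orders o JOIN Currency c ON o.Currency = c.Currency
""").fetchone()
```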
|
Conditional SUM not allowing operands
|
[
"",
"sql",
"sql-server",
"aggregate-functions",
""
] |
Normally I wouldn't ask this question, however I am in a bit of a unique situation and I'm having trouble figuring out how to do it.
I have a table `A` as follows:
```
uniqueid , itemid, quantity, selected
1 2455 10 TRUE
2 7458 50 FALSE
3 58494 20.6 FALSE
4 2455 20 TRUE
```
I need to write a combine function that takes the rows which are `TRUE` for the same item and combines them into one row (with the updated `quantity`).
meaning I want to get:
```
uniqueid , itemid, quantity, selected
1          2455    30        FALSE
2 7458 50 FALSE
3 58494 20.6 FALSE/TRUE (doesn't matter)
```
OR:
```
uniqueid , itemid, quantity, selected
2 7458 50 FALSE
3 58494 20.6 FALSE
4 2455 30 FALSE/TRUE (doesn't matter)
```
My function doesn't take any parameters... I need some method by which I can identify and work with the rows which are selected.
First I thought of doing:
```
CREATE OR REPLACE FUNCTION func1()
  RETURNS void AS
$BODY$
declare
    ROW RECORD;
begin
    for ROW in select * from A where selected
    LOOP
        -- combine code here
    end loop;
end;
$BODY$
LANGUAGE plpgsql VOLATILE;
```
However this code will not work: after each combine, two rows become one. In the above example the loop would run two iterations, but I need only one iteration - only one combine action is needed.
I only need help with the structure of the function - loop? if? how to save the rows? There is no need to write the combine code itself.
**To make it simpler** assume there can be only two rows which `selected` is `TRUE`.
**Note**: `func1` is supposed to **save** the new state into `A`.
|
A simple solution would be to do this in a union - in that case it also doesn't matter how many rows there are with `selected = false`:
```
select min(uniqueid) as uniqueid,
itemid,
sum(quantity) as quantity,
false as selected
from a
where selected
group by itemid
union all
select uniqueid,
itemid,
quantity,
selected
from a
where not selected
order by 1;
```
---
**Edit** after it has been clarified that the table should be modified.
You can do this with a data modifying CTE. In the first step the sum of the quantity is updated and in the second step the no longer needed rows are deleted:
```
with updated as (
-- this updates the lowest uniqueid with the total sum
-- of all rows. If you want to keep/update the highest
-- uniqueid change the min() to max()
update a
set quantity = t.total_sum,
selected = false
from (
select min(uniqueid) as uniqueid,
itemid,
sum(quantity) as total_sum
from a
where selected
group by itemid
) t
where t.uniqueid = a.uniqueid
returning a.uniqueid
)
-- this now deletes the rows that are still marked
-- as "selected" and where not updated
delete from a
where selected
and uniqueid not in (select uniqueid from updated);
```
This assumes that the `uniqueid` column is indeed unique (e.g. a primary key or has a unique index/constraint defined). The value for `selected` column has to be changed in order for this to work. So it *does* matter if `selected` is set to false during this process.
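SQLite lacks Postgres's writable CTEs, but the same update-then-delete effect can be sketched procedurally (a Python illustration of the logic, not the Postgres answer itself):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (uniqueid INTEGER PRIMARY KEY, itemid INT,
                quantity REAL, selected INT);
INSERT INTO a VALUES (1,2455,10,1), (2,7458,50,0),
                     (3,58494,20.6,0), (4,2455,20,1);
""")
cur = con.cursor()
# First work out, per item, which row survives and the combined total.
targets = cur.execute("""
SELECT MIN(uniqueid), itemid, SUM(quantity)
FROM a WHERE selected = 1 GROUP BY itemid
""").fetchall()
# Then update the survivor and delete the rows that were merged in.
for keep_id, itemid, total in targets:
    cur.execute("UPDATE a SET quantity = ?, selected = 0 "
                "WHERE uniqueid = ?", (total, keep_id))
    cur.execute("DELETE FROM a WHERE selected = 1 AND itemid = ?",
                (itemid,))
con.commit()
rows = cur.execute(
    "SELECT uniqueid, itemid, quantity FROM a ORDER BY uniqueid"
).fetchall()
```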
|
You might want to give a try with a `UNION` maybe
```
SELECT u.*
FROM (
SELECT MIN(t.uniqueid) AS uniqueid,
t.itemid,
SUM(t.quantity) AS quantity,
FALSE AS selected
FROM table_name t
WHERE t.selected = TRUE
GROUP BY t.itemid
UNION
SELECT t.uniqueid,
t.itemid,
t.quantity,
t.selected
FROM table_name t
WHERE t.selected = FALSE) u
ORDER BY u.uniqueid ASC
```
|
How to combine two rows in a table postgresql?
|
[
"",
"sql",
"postgresql",
""
] |
When I execute this query in SSMS
```
SELECT SUBSTRING(NAME, 1, LEN(NAME) - 4)
FROM sys.tables
```
I get an error:
> Invalid length parameter passed to the LEFT or SUBSTRING function.
But when I execute this query here:
```
SELECT SUBSTRING(NAME, 1, LEN(NAME) - 4)
FROM sys.tables
WHERE NAME LIKE '%_OLD'
```
I get the output. What is the reason for the failure of first query?
|
The reason your script fails is that some table names have fewer than 4 characters, so `LEN(NAME) - 4` is negative, and the third parameter of SUBSTRING cannot be negative.
Here is a workaround:
```
SELECT
REVERSE(STUFF(REVERSE(NAME), 1,4,''))
FROM
Sys.tables
```
The script reverses the text, removes the first 4 characters and reverses the rest again.
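The reverse-strip-reverse trick is equivalent to dropping the last four characters, which a quick sketch outside SQL makes clear (and shows it is safe for short names):

```python
def strip_last4(name):
    # REVERSE(STUFF(REVERSE(name), 1, 4, '')): reverse, delete the
    # first 4 characters, reverse back -- i.e. drop the last 4,
    # yielding '' for short names instead of raising an error.
    return name[::-1][4:][::-1]

examples = [strip_last4("Customer_OLD"), strip_last4("abc")]
```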
|
Note that the last parameter of the `SUBSTRING` function must not be *negative*.
You probably have some tables with names shorter than 4 characters.
Try to execute this to see the minimum length of names in your table:
```
SELECT MIN(LEN(Name)) AS MinLength
FROM Sys.tables
```
|
Error while executing Substring Function in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables as below
```
Table 1
-----------------------------------
UserID | UserName | Age | Salary
-----------------------------------
1 | foo | 22 | 33000
-----------------------------------
```
```
Table 2
------------------------------------------------
UserID | Age | Salary | CreatedDate
------------------------------------------------
1 | NULL | 35000 | 2015-01-01
------------------------------------------------
1 | 28 | NULL | 2015-02-01
------------------------------------------------
1 | NULL | 28000 | 2015-03-01
------------------------------------------------
```
I need the result like this.
```
Result
-----------------------------------
UserID | UserName | Age | Salary
-----------------------------------
1 | foo | 28 | 28000
-----------------------------------
```
This is just an example. In my real project I have around 6 columns like Age and Salary in above tables.
In table 2, each record will have only one value, i.e. if Age has a value then Salary will be NULL and vice versa.
UPDATE :
Table 2 has a CreatedDate column, so I want to get the latest non-NULL cell value instead of the maximum value.
|
Note: I'm giving you the benefit of the doubt that you know what you're doing, and you just haven't told us everything about your schema.
It looks like `Table 2` is actually an "updates" table, in which each row contains a delta of changes to apply to the base entity in `Table 1`. In which case you can retrieve each column's data with a correlated join (technically an outer-apply) and put the results together. Something like the following:
```
select a.UserID, a.UserName,
coalesce(aAge.Age, a.Age),
coalesce(aSalary.Salary, a.Salary)
from [Table 1] a
outer apply (
select Age
from [Table 2] x
where x.UserID = a.UserID
and x.Age is not null
and not exists (
select 1
from [Table 2] y
where x.UserID = y.UserID
and y.Id > x.Id
and y.Age is not null
)
) aAge
outer apply (
select Salary
from [Table 2] x
where x.UserID = a.UserID
and x.Salary is not null
and not exists (
select 1
from [Table 2] y
where x.UserID = y.UserID
and y.Id > x.Id
and y.Salary is not null
)
) aSalary
```
Do note I am assuming you have at minimum an `Id` column in `Table 2` which is monotonically increasing with each insert. If you have a "change time" column, use this instead to get the latest row, as it is better.
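`OUTER APPLY` is SQL Server specific; the same latest-non-null lookup can be sketched portably with correlated scalar subqueries ordered by `CreatedDate` (SQLite shown):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (UserID INT, UserName TEXT, Age INT, Salary INT);
CREATE TABLE t2 (UserID INT, Age INT, Salary INT, CreatedDate TEXT);
INSERT INTO t1 VALUES (1, 'foo', 22, 33000);
INSERT INTO t2 VALUES
 (1, NULL, 35000, '2015-01-01'),
 (1, 28, NULL, '2015-02-01'),
 (1, NULL, 28000, '2015-03-01');
""")
# Per column: latest non-null delta wins, else fall back to t1's value.
rows = con.execute("""
SELECT a.UserID, a.UserName,
  COALESCE((SELECT x.Age FROM t2 x
            WHERE x.UserID = a.UserID AND x.Age IS NOT NULL
            ORDER BY x.CreatedDate DESC LIMIT 1), a.Age) AS Age,
  COALESCE((SELECT x.Salary FROM t2 x
            WHERE x.UserID = a.UserID AND x.Salary IS NOT NULL
            ORDER BY x.CreatedDate DESC LIMIT 1), a.Salary) AS Salary
FROM t1 a
""").fetchall()
```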
|
You can get this done using a simple `MAX()` and `GROUP BY`:
```
select t1.userid,t1.username, MAX(t2.Age) as Age, MAX(t2.Salary) as Salary
from table1 t1 join
table2 t2 on t1.userid=t2.userid
group by t1.userid,t1.username
```
Result:
```
userid username Age Salary
--------------------------------
1 foo 28 35000
```
Sample result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/49d549/1)
|
SQL Joins . One to many relationship
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to go from this:
```
+------+------+------+------+
| fld1 | fld2 | fld3 | etc… |
+------+------+------+------+
| a | 5 | 1 | |
| b | 5 | 0 | |
| c | 6 | 0 | |
| b | 2 | 5 | |
| b | 1 | 6 | |
| c | 0 | 6 | |
| a | 8 | 9 | |
+------+------+------+------+
```
To:
```
+--------+--------+-----------+-----+-----+------+
| Factor | Agg | CalcDate | Sum | Avg | etc… |
+--------+--------+-----------+-----+-----+------+
| fld2 | fld1/a | 8/14/2015 | 13 | 6.5 | |
| fld2 | fld1/b | 8/14/2015 | 8 | 2.7 | |
| fld2 | fld1/c | 8/14/2015 | 6 | 3 | |
| fld3 | fld1/a | 8/14/2015 | 10 | 5 | |
| fld3 | fld1/b | 8/14/2015 | 11 | 3.7 | |
| fld3 | fld1/c | 8/14/2015 | 6 | 3 | |
+--------+--------+-----------+-----+-----+------+
```
Notes:
* Obviously this data is simplified quite a bit.
* I have a ton of fields I need to do this for
* I included easy aggregation calcs here so it might be easier for someone to help me. The exhaustive list is: NaPct, Mean, Sd, Low, Q1, Median, Q3, High, IQR, Kurt, Skew, Obs. Where NaPct = Percent that are NULL, Sd = Standard deviation, Q1 = quartile 1, Q3 = quartile 3, IQR = Inter quartile range, Kurt = Kurtosis, Skew = Skewness, Obs = number of observations that are not NULL.
* In reality, in the second table above the factor field will be FactorID, Agg will be AggID, and CalcDate will be CalcDateID, but I put the actual values in there for illustration purposes. Shouldn't matter to the question/answer though.
* Speed is very important as I have 1305 fields and several aggregations to do calculations on before the work day starts.
* Answers using only MS Access, SQL, and VBA. Sorry business requirement. That said, a SQL only answer that works in MS Access would be best for simplicity.
* Below is code that uses a custom domain function (DCalcForQueries) and supporting functions that I built that return one calculated aggregate value per field and selected aggregation. Aka, not what I want. Maybe that code is usable for what I want, maybe not. Nevertheless it has the calculations I want in it which hopefully will help.
* The message boxes are just how I debug while I am alpha testing: not necessary.
To use the code, put all of the code in a VBA module, change table "tbl\_DatedModel\_2015\_0702\_0" to a table you have in MS Access, change field "Rk-IU Mkt Cap" to a field in your table, and run the TestIt() sub and you should get calculated values in the Immediate window.
* Don't worry about the calculations so much. I'll deal with that. I just need to know what is the best way to get from the first table above to the second table above in a way that allows for the calculations that I want. Thanks!
```
Sub TestIt()
Dim x
Set x = GetOrOpenAndGetExcel
Dim rst As DAO.Recordset
Dim sSql As String
Dim q As String
q = VBA.Chr(34)
sSql = "SELECT " & _
"DCalcForQueries(" & q & "NaPct" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS NaPct ," & _
"DCalcForQueries(" & q & "Mean" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Mean ," & _
"DCalcForQueries(" & q & "Sd" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Sd ," & _
"DCalcForQueries(" & q & "Low" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Low ," & _
"DCalcForQueries(" & q & "Q1" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Q1 ," & _
"DCalcForQueries(" & q & "Median" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Median ," & _
"DCalcForQueries(" & q & "Q3" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Q3 ," & _
"DCalcForQueries(" & q & "High" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS High ," & _
"DCalcForQueries(" & q & "IQR" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS IQR ," & _
"DCalcForQueries(" & q & "Kurt" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Kurt ," & _
"DCalcForQueries(" & q & "Skew" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Skew ," & _
"DCalcForQueries(" & q & "Obs" & q & ", " & q & "tbl_DatedModel_2015_0702_0" & q & ", " & q & "Rk-IU Mkt Cap" & q & ", " & q & "[Rk-IU Mkt Cap] IS NOT NULL AND [GICS Sector] = 'Consumer Discretionary'" & q & ") AS Obs " & _
"FROM tbl_DatedModel_2015_0702_0;"
Debug.Print sSql
Set rst = CurrentDb.OpenRecordset(sSql, dbOpenSnapshot)
rst.MoveFirst
Debug.Print rst.RecordCount
Debug.Print rst.Fields("NaPct")
Debug.Print rst.Fields("Mean")
Debug.Print rst.Fields("Sd")
Debug.Print rst.Fields("Low")
Debug.Print rst.Fields("Q1")
Debug.Print rst.Fields("Median")
Debug.Print rst.Fields("Q3")
Debug.Print rst.Fields("High")
Debug.Print rst.Fields("IQR")
Debug.Print rst.Fields("Kurt")
Debug.Print rst.Fields("Skew")
Debug.Print rst.Fields("Obs")
End Sub
Public Function DCalcForQueries(sCalc As String, Optional sTbl As String = "", Optional sMainFld As String = "", Optional sWhereClause As String = "", Optional k As Double) As Variant
Dim dblData() As Double
Dim oxl As Object
On Error Resume Next
Set oxl = GetObject(, "Excel.Application")
If Err.Number <> 0 Then
MsgBox "Excel object must be openned by the calling sub of DCalcForQueries so it isn't opened over and over, which is very slow"
GoTo cleanup
End If
Dim x As Integer
Dim aV() As Variant
Dim tmp
Dim lObsCnt As Long
Dim lNaCnt As Long
Dim i As Long
Dim vTmp As Variant
Dim lTtl As Long
Dim bDoCalc As Boolean
aV = a2dvGetSubsetFromQuery(sTbl, sMainFld, sWhereClause, "Numeric")
If aV(0, 0) = "Not Numeric" Then
MsgBox "Data returned by query was not numeric. Press OK to Stop and debug."
Stop
End If
If sCalc = "Percentile" Or sCalc = "Q1" Or sCalc = "Q2" Or sCalc = "Q3" Or sCalc = "Q4" Then
DCalcForQueries = oxl.WorksheetFunction.Percentile_Exc(aV, k)
ElseIf sCalc = "Median" Then
DCalcForQueries = oxl.WorksheetFunction.Median(aV)
ElseIf sCalc = "Kurt" Or sCalc = "Kurt" Then
DCalcForQueries = oxl.WorksheetFunction.Kurt(aV)
ElseIf sCalc = "Minimum" Or sCalc = "Low" Then
DCalcForQueries = oxl.WorksheetFunction.Min(aV)
ElseIf sCalc = "Maximum" Or sCalc = "High" Then
DCalcForQueries = oxl.WorksheetFunction.Max(aV)
ElseIf sCalc = "IQR" Then
DCalcForQueries = oxl.WorksheetFunction.Quartile_Exc(aV, 3) - oxl.WorksheetFunction.Quartile_Exc(aV, 1)
ElseIf sCalc = "Obs" Then
lObsCnt = 0
For Each tmp In aV
If Not IsNull(tmp) Then
lObsCnt = lObsCnt + 1
End If
Next
DCalcForQueries = lObsCnt
ElseIf sCalc = "%NA" Or sCalc = "PctNa" Or sCalc = "NaPct" Or sCalc = "%Null" Or sCalc = "PctNull" Then
lNaCnt = 0
lTtl = UBound(aV, 2) + 1
For Each tmp In aV
If IsNull(tmp) Then
lNaCnt = lNaCnt + 1
End If
Next
DCalcForQueries = (lNaCnt / lTtl) * 100
ElseIf sCalc = "Skewness" Or sCalc = "Skew" Then
DCalcForQueries = oxl.WorksheetFunction.Skew(aV)
ElseIf sCalc = "StDev" Or sCalc = "Sd" Then
DCalcForQueries = oxl.WorksheetFunction.StDev_S(aV)
ElseIf sCalc = "Mean" Then
DCalcForQueries = oxl.WorksheetFunction.Average(aV)
Else
MsgBox "sCalc parameter not recognized: " & sCalc
End If
cleanup:
End Function
Function a2dvGetSubsetFromQuery(sTbl As String, sMainFld As String, sWhereClause As String, sTest As String) As Variant()
'sTest can be "Numeric" or "None" ...will implement more as needed
Dim iFieldType As Integer
Dim rst As DAO.Recordset
Dim db As Database
Set db = CurrentDb
Dim sMainFldFull As String
Dim sSubSetFldFull As String
Dim sSql As String
sMainFldFull = "[" & sMainFld & "]"
sSubSetFldFull = ""
sSql = ""
sSql = "SELECT " & sMainFldFull & " FROM " & sTbl
If Len(sWhereClause) > 0 Then
sSql = sSql & " WHERE " & sWhereClause
End If
Set rst = db.OpenRecordset(sSql, dbOpenSnapshot)
'make sure the data is the right type
iFieldType = rst(sMainFld).Type
If sTest = "Numeric" Then
If iFieldType = dbByte Or _
iFieldType = dbInteger Or _
iFieldType = dbLong Or _
iFieldType = dbCurrency Or _
iFieldType = dbSingle Or _
iFieldType = dbDouble _
Then
rst.MoveLast
rst.MoveFirst
a2dvGetSubsetFromQuery = rst.GetRows(rst.RecordCount)
Else
Dim aV(0 To 1, 0 To 1) As Variant
aV(0, 0) = "Not Numeric"
a2dvGetSubsetFromQuery = aV
End If
ElseIf sTest = "None" Then
'don't do any testing
rst.MoveLast
rst.MoveFirst
a2dvGetSubsetFromQuery = rst.GetRows(rst.RecordCount)
Else
MsgBox "Test type (sTest) can only be 'None' or 'Numeric'. It was: " & sTest
Stop
End If
cleanup:
rst.Close
Set rst = Nothing
End Function
Public Function GetOrOpenAndGetExcel() As Object
'if excel is open it will return the excel object
'if excel is not open it will open excel and return the excel object
On Error GoTo 0
On Error Resume Next
Set GetOrOpenAndGetExcel = GetObject(, "Excel.Application")
If Err.Number <> 0 Then
Set GetOrOpenAndGetExcel = CreateObject("Excel.Application")
End If
On Error GoTo 0
End Function
```
Edit1: The code I provide above is just to illustrate my attempt and the calculations. I'm pretty sure it isn't directly related to a good answer, but I'm not 100% sure. If I use what I have above, it produces one record at a time and I'd have to add (INSERT INTO) each record one at a time, which would be quite slow. My plan was to build a 2d array of the results and use that 2d array to add the records in batches, but was told that you can't do that without looping through the array adding each record one at a time, which would defeat the purpose. I am pretty sure a solution that includes looping through the fld1 types, or one query with sub-queries that can do it in one step, is the direction that should be taken. What I have done to optimize so far: I pulled the creation of the Excel object out so it is created only once in the TestIt() Sub.
Edit2: I have 1305 fields to do calculations on. They are not all in the same table; however, for the purposes of this question I just need a working answer that does more than one field at a time. I.e. your answer can assume all fields are in the same table, and for simplicity your answer can include only 2 fields and I can expand it from there. In the code above I have calculated 12 metrics on one field "Rk-IU Mkt Cap", aggregating on one type, 'Consumer Discretionary' ([GICS Sector] = 'Consumer Discretionary'). What I have is not what I am after.
|
This doesn't seem so hard to do in MS Access. If I have the logic right:
```
select "fld2" as factor, "fld1/"&fld1, #8/14/2015# as calcdate,
sum(fld2), avg(fld2)
from table
group by fld1
union all
select "fld3" as factor, "fld1/"&fld1, #8/14/2015# as calcdate,
sum(fld3), avg(fld3)
from table
group by fld1;
```
|
Would something like this work, just using pure tSql?
1: Create table and insert some sample data
```
CREATE TABLE [dbo].[FLD](
[fld1] [nvarchar](2) NOT NULL,
[fld2] [int] NULL,
[fld3] [int] NULL
) ON [PRIMARY]
GO
INSERT FLD VALUES ('a', 5, 9)
INSERT FLD VALUES ('b', 1, 8)
INSERT FLD VALUES ('a', 3, 7)
```
2: Use a nested [UNPIVOT](https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx) to create the factors
```
SELECT t.factor,t.val + '/' + t.v AS Agg, SUM(value) AS [Sum], AVG(value) AS [AVG]
FROM
(
SELECT * from
(
select * from FLD f
UNPIVOT
(
v
for val in (fld1)
) piv
) f
UNPIVOT
(
value
for factor in (fld2, fld3)
) s
) t
group by t.v, t.factor, t.val
```
|
Query and/or function that subsets the data on one field for all types and calcs on other field
|
[
"",
"sql",
"vba",
"ms-access",
"subquery",
"aggregate-functions",
""
] |
I have a SQL upgrade script which has many SQL statements (DDL, DML). When I run this upgrade script in SQL Developer, it runs successfully. I also include a COMMIT at the bottom of my script. I can see all the changes in the database after running this upgrade script except the unique index constraints. When I insert a few duplicate records it says unique constraint violated, which means the table has unique constraints. But I don't know why I can't view these constraints in Oracle SQL Developer. The other DDL changes made I can view. I don't know if there is any setting to view them in Oracle SQL Developer.
```
CREATE UNIQUE INDEX "RATOR_MONITORING"."CAPTURING_UK1" ON "RATOR_MONITORING"."CAPTURING" ("DB_TABLE");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND" ("NAME");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_BUSINESS_PROCESS_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND_BUSINESS_PROCESS" ("BRAND_ID", "BP_ID");
CREATE UNIQUE INDEX "RATOR_MONITORING_CONFIGURATION"."BRAND_ENGINE_UK1" ON "RATOR_MONITORING_CONFIGURATION"."BRAND_ENGINE" ("BRAND_ID", "ENGINE_ID");
```
|
As A Hocevar noted, if you create an index
```
create unique index test_ux on test(id);
```
you see it in the Indexes tab of the table properties (not in the Constraints tab).
Please note that COMMIT is not required here; it is done implicitly by each DDL statement. A more usual source of problems is stale metadata in SQL Developer, i.e. a missing REFRESH (Ctrl+R on the user or table node).
If you want to define the constraint, **add** the following statement, which will reuse the index defined previously:
```
alter table test add constraint test_unique unique(id) using index test_ux;
```
See further discussion about the option in [Documentation](http://docs.oracle.com/cd/B28359_01/server.111/b28286/clauses002.htm#SQLRF52179)
|
I am assuming you are trying to look for index on a table in the correct tab in sql developer. If you are not able to see the index there, one reason could be that your user (the one with which you are logged in) doesn't have proper rights to see the Index.
|
DDL changes not showing in Oracle sql developer
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
"unique-constraint",
""
] |
I select data from table `Employee_Master` using the below query
```
SELECT EMP_ID
, EMP_NAME
, EMP_LOC1
, EMP_LOC2
, EMP_LOC3
, EMP_DESG
, EMP_ADDR1
, EMP_ADDR2
, EMP_ADDR3
, EMP_PHNO
, EMP_STATUS
FROM Employee_Master
WHERE EMP_STATUS = 'A'
```
The Result set looks like this.
```
EMP_ID EMP_NAME EMP_LOC1 EMP_LOC2 EMP_LOC3 EMP_DESG EMP_ADDR1 EMP_ADDR2 EMP_ADDR3 EMP_PHNO EMP_STATUS
23 Covey, Stephen J, Mr 1 2 4 Writer 3 3 3 11 A
24 Jhon, Abraham A, Mr 1 2 4 Actor 1 1 1 22 A
```
Now I have to split the one record into three records and insert it into the `Emp_history` table like this. `EMP_SAL_ID` is fetched from a different table which I can take care of.
```
EMP_SAL_ID First_Name Middle_Initial Last_Name Title Designation Location Address Phone_No Status
3456 Stephen J Covey Mr Writer 1 3 11 A
3456 Stephen J Covey Mr Writer 2 3 11 A
3456 Stephen J Covey Mr Writer 4 3 11 A
```
Is it possible to achieve this using a PL/SQL block or any other simple method with good performance?
|
You could use **UNION** to have separate rows for each address value.
For example, let's say you have a row as built in the with clause:
```
SQL> WITH DATA(EMP_NAME,EMP_ADDR1,EMP_ADDR2,EMP_ADDR3) AS(
2 SELECT 'Covey, Stephen J, Mr', 1, 2, 4 FROM DUAL UNION ALL
3 SELECT 'Jhon, Abraham A, Mr ', 1, 2, 4 FROM DUAL
4 )
5 SELECT * FROM DATA;
EMP_NAME EMP_ADDR1 EMP_ADDR2 EMP_ADDR3
-------------------- ---------- ---------- ----------
Covey, Stephen J, Mr 1 2 4
Jhon, Abraham A, Mr 1 2 4
SQL>
```
Now you could split the above row into multiple rows using **UNION**. The only additional effort is to use **SUBSTR** and **INSTR** to extract the name from emp_name.
For example,
```
SQL> WITH DATA(EMP_NAME,EMP_ADDR1,EMP_ADDR2,EMP_ADDR3) AS(
2 SELECT 'Covey, Stephen J, Mr', 1, 2, 4 FROM DUAL UNION ALL
3 SELECT 'Jhon, Abraham A, Mr ', 1, 2, 4 FROM DUAL
4 )
5 SELECT SUBSTR(emp_name, instr(emp_name, ',', 1, 1)+1, instr(emp_name, ' ', 1, 2) - instr(emp_name, ',', 1, 1)) AS "ename",
6 emp_addr1 AS "addr"
7 FROM DATA
8 UNION ALL
9 SELECT SUBSTR(emp_name, instr(emp_name, ',', 1, 1)+1, instr(emp_name, ' ', 1, 2) - instr(emp_name, ',', 1, 1)),
10 emp_addr2
11 FROM DATA
12 UNION ALL
13 SELECT SUBSTR(emp_name, instr(emp_name, ',', 1, 1)+1, instr(emp_name, ' ', 1, 2) - instr(emp_name, ',', 1, 1)),
14 emp_addr3
15 FROM DATA
16 /
ename addr
-------------------- ----------
Stephen 1
Abraham 1
Stephen 2
Abraham 2
Stephen 4
Abraham 4
6 rows selected.
SQL>
```
**NOTE** :
The **WITH** clause is only used to build sample data for demonstration purpose. In real case, you just need to use the **SELECT** statement on your table.
```
INSERT INTO hist_table SELECT statement as shown above...
```
|
```
create table adress_test(ename varchar2(30), addr number);
insert all
into adress_test values(SUBSTR(emp_name, instr(emp_name, ',', 1, 1)+1, instr(emp_name, ' ', 1, 2) - instr(emp_name, ',', 1, 1)), emp_addr1)
into adress_test values(SUBSTR(emp_name, instr(emp_name, ',', 1, 1)+1, instr(emp_name, ' ', 1, 2) - instr(emp_name, ',', 1, 1)),emp_addr2)
into adress_test values(SUBSTR(emp_name, instr(emp_name, ',', 1, 1)+1, instr(emp_name, ' ', 1, 2) - instr(emp_name, ',', 1, 1)),emp_addr3)
SELECT 'Covey, Stephen J, Mr' EMP_NAME , 1 EMP_ADDR1, 2 EMP_ADDR2, 4 EMP_ADDR3 FROM DUAL UNION ALL
SELECT 'Jhon, Abraham A, Mr ', 1, 2, 4 FROM DUAL
;
```
|
How do I split a single row into multiple rows and Insert into a table in Oracle?
|
[
"",
"sql",
"oracle",
""
] |
I have a field with numbers stored as text in 3 formats:
```
xx. (example: 31.)
xx.x (example: 31.2)
xx x/x (example: 31 2/7)
```
For the final result, I need all numbers to be in decimal format (that is, xx.x).
Converting the first two formats into decimals is fairly simple, but I haven't quite figured out how to convert the last case, as a simple CAST function doesn't work. I've used the INSTR function to isolate all the fractional cases of these numbers, but I don't know where to go from there. I've looked at other examples but some of the functions referenced (like SUBSTRING_INDEX) don't exist in Netezza.
|
Thanks for the help everyone. I forgot to close this out, I actually figured out a way to do it:
```
select
case when instr(num,'/') > 0 then
cast(substr(num,1,2) as float)
+ (cast(substr(num,4,1) as float)/cast(substr(num,6,1) as float))
when instr(num,'.') > 0 then cast(substr(num,1,4) as float)
else cast(num as float)
end as float_num
```
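If the cleanup can happen client-side instead of in the database, the same mixed-fraction arithmetic is easy with Python's `fractions` module. This is only a sketch; `mixed_to_float` is a hypothetical helper name:

```python
from fractions import Fraction

def mixed_to_float(text):
    """Parse 'xx.', 'xx.x' or 'xx n/d' (mixed fraction) into a float."""
    text = text.strip()
    if '/' in text:
        # Split the whole part from the fractional part: '31 2/7' -> '31', '2/7'.
        whole, frac = text.split(' ', 1)
        return float(int(whole) + Fraction(frac))
    # 'xx.' and 'xx.x' are already valid float literals.
    return float(text)

samples = ['31.', '31.2', '31 2/7']
values = [round(mixed_to_float(s), 4) for s in samples]
print(values)
```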
|
I think @Niederee has the solution from brute force, but I'd use the [sql extensions toolkit](http://www-01.ibm.com/support/knowledgecenter/SSULQD_7.2.0/com.ibm.nz.sqltk.doc/r_sqlext_regexp_extract.html?lang=en).
```
create temporary table fractions (
val nvarchar(64)
) distribute on random;
insert into fractions values ('2.');
insert into fractions values ('2.3');
insert into fractions values ('31 2/7');
insert into fractions values('2 0/8');
insert into fractions values('516 56/537');
select
val
,case
when regexp_like(val,'^[\d\.]+$') then val::numeric(20,10) --Cast it if we can.
when regexp_like(val,'^[\d\.\s\/]+$')
then regexp_extract(val,'\d+',1,1)::numeric(20,10) --Whole.
+ (
regexp_extract(val,'\d+',1,2)::numeric(20,10) --Numerator.
/ regexp_extract(val,'\d+',1,3)::numeric(20,10) --Denominator.
)
else null
end
from
fractions;
```
|
Converting a mixed fraction to a float number in Netezza
|
[
"",
"sql",
"netezza",
""
] |
I have a table `tab` that contains:
```
item identifier quantity methodid
10 1 20 2
10 1 30 3
11 1 10 3
11 1 12.5 3
11 2 20 5
12 1 20 1
12 1 30 1
```
I need to write a function that checks if there is a case of duplicate `methodid` for `item` and `identifier`.
In the above example `item` 11 `identifier` 1 has two rows of `methodid` 3, which means it's duplicated; `item` 12 `identifier` 1 has duplicated rows as well.
I don't need to do anything to the data just to identify this situation.
I don't need to find where and what was duplicated... just tell there is duplication.
The only information I have is the `identifier`
```
CREATE OR REPLACE FUNCTION func(identifier integer)
RETURNS integer AS
$BODY$
declare
errorcode int;
begin
if _____________ then
errorcode =1;
raise exception 'there is duplication in this identifier';
END IF;
continue work
return 0;
exception
when raise_exception then
return errorcode;
end;
$BODY$
LANGUAGE plpgsql VOLATILE
```
in the blank spot I want to put a query that checks for duplications.
How do I write a query that perform the check?
The structure of the function can be changed, but I need somehow to know when to raise the exception.
|
To check whether any datasets are duplicated based on selected columns you could group by these columns and count the occurrences.
So in your case you could do:
```
SELECT 1 FROM tab GROUP BY item, identifier, methodid HAVING COUNT(*) > 1;
```
To incorporate this into your functions you could just check if it exists:
```
if EXISTS (SELECT 1 ...) then
```
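The GROUP BY / HAVING duplicate check is plain SQL, so it can be verified outside PostgreSQL too. A quick sketch with Python's built-in sqlite3 module, using the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tab (item INTEGER, identifier INTEGER, quantity REAL, methodid INTEGER);
INSERT INTO tab VALUES
  (10, 1, 20, 2), (10, 1, 30, 3),
  (11, 1, 10, 3), (11, 1, 12.5, 3), (11, 2, 20, 5),
  (12, 1, 20, 1), (12, 1, 30, 1);
""")

# Groups with more than one row share item, identifier AND methodid -> duplicates.
dupes = cur.execute("""
    SELECT item, identifier, methodid
    FROM tab
    GROUP BY item, identifier, methodid
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # the (11, 1, 3) and (12, 1, 1) groups are duplicated
```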
|
Use `group by`:
```
select item, identifier, methodid, count(*)
from tab
group by item, identifier, methodid
having count(*) > 1
```
Where `having count(*) > 1` is used to return only duplicated rows.
|
How to compare two rows in postgresql?
|
[
"",
"sql",
"postgresql",
""
] |
I want to order the following things by their Ordernum (unique) regarding the RefID (which holds the same products together) with SQL on an Oracle database:
**It has to be first ordered by OrderNum followed by every product with the same RefID. The row with the lowest OrderNum should be first, then the products with the same RefID, after that the next higher OrderNum and so on...**
```
OrderNum | RefID | ID
10 | 100 | 8
1 | 200 | 9
2 | 100 | 4
8 | 200 | 12
3 | 200 | 20
0 | 10 | 11
```
What I tried and what gives me just the result ordered by OrderNum, not regarding the same RefIDs:
```
SELECT * FROM products
ORDER BY OrderNum, RefID
```
Expected result
```
0 | 10 | 11
1 | 200 | 9
3 | 200 | 20
8 | 200 | 12
2 | 100 | 4
10 | 100 | 8
```
I think this has to be done with a subselect, right? But how does this look like?
|
I believe that Oracle supports CTE and window functions, so something like the following should work:
```
WITH Extras as (
SELECT
p.*,
MIN(OrderNum) OVER (PARTITION BY RefID) as LowNum,
ROW_NUMBER() OVER (PARTITION BY RefID ORDER BY OrderNum) as rn
FROM
Products p
)
SELECT * from Extras ORDER BY LowNum,rn;
```
Common Table Expressions (CTEs) are similar to subqueries but I tend to prefer to use them, all other things being equal - there's no specific advantage in this query, but they *can* be reused multiple times, and they can easily build on previous ones without introducing lots of nesting.
|
You are ordering by OrderNum and inside each OrderNum by RefID, but what you want to do is just the opposite, i.e. to order by RefID first and inside each RefID by OrderNum:
```
SELECT * FROM products
ORDER BY RefID, OrderNum;
```
|
Order by two columns regarding their relationship
|
[
"",
"sql",
"oracle",
"sql-order-by",
""
] |
I have the following statement in MYSQL:
```
SELECT Site, Areateam,
SUM( IF( year = '15-16', 1, 0 ) ) "Y2",
SUM( IF( year = '14-15', 1, 0 ) ) "Y1",
SUM('Y2') / SUM('Y1')* 100 AS Diff
FROM CD2015_EmailIncidents
WHERE Areateam = 'Greater Manchester'
GROUP BY Site
```
which returns the following results:
```
**Site |Areateam |Y2 |Y1 |Diff**
Acute Trust |Greater Manchester |0 |1 |NULL
Care Home |Greater Manchester |3 |22 |NULL
CD Store Room |Greater Manchester |7 |4 |NULL
College Greater |Greater Manchester |0 |1 |NULL
```
I can't seem to find the correct syntax to show the percentage difference between Y2 and Y1, as it keeps displaying NULL.
Many thanks
Max
|
Your problem is that you are doing arithmetic on string values:
```
SUM('Y2') / SUM('Y1')* 100 AS Diff
```
MySQL does silent conversion, so it converts the values to numbers. There are no leading digits, so they are converted to 0.
MySQL then returns `NULL` for `0/0`.
Juergen has the right solution for this. Your overall query can be expressed as:
```
SELECT Site, Areateam,
SUM(year = '15-16') as Y2,
SUM(year = '14-15') as Y1,
100 * SUM(year = '15-16') / SUM(year = '14-15') AS Diff
FROM CD2015_EmailIncidents
WHERE Areateam = 'Greater Manchester'
GROUP BY Site;
```
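The boolean-sum trick behaves the same way in SQLite, which makes it easy to verify with Python's sqlite3 on made-up sample rows; note how a group with a zero denominator still comes back as NULL (`None`), for the same 0-division reason:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE CD2015_EmailIncidents (Site TEXT, Areateam TEXT, year TEXT);
INSERT INTO CD2015_EmailIncidents VALUES
  ('Care Home', 'Greater Manchester', '15-16'),
  ('Care Home', 'Greater Manchester', '14-15'),
  ('Care Home', 'Greater Manchester', '14-15'),
  ('CD Store Room', 'Greater Manchester', '15-16');
""")

# (year = '15-16') evaluates to 0/1, so SUM counts matching rows.
rows = cur.execute("""
    SELECT Site,
           SUM(year = '15-16') AS Y2,
           SUM(year = '14-15') AS Y1,
           100.0 * SUM(year = '15-16') / SUM(year = '14-15') AS Diff
    FROM CD2015_EmailIncidents
    WHERE Areateam = 'Greater Manchester'
    GROUP BY Site
    ORDER BY Site
""").fetchall()
print(rows)  # 'CD Store Room' has Y1 = 0, so its Diff is None (NULL)
```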
|
You can't reference column aliases computed in the same `select` clause. And multiply by 100 first to avoid integer limitations:
```
SUM(year = '15-16') * 100 / SUM(year = '14-15') AS Diff
```
|
MySQL Percentage of 2 columns not working =NULL
|
[
"",
"mysql",
"sql",
"multiple-columns",
"percentage",
""
] |
As part of a query I need to search for a nine digit number where the last number is 9.
How would I do such a query?
|
MOD for % in informix
and between for 9 digits
```
select colname
from tablename
where
MOD(colname, 10) = 9
and colname between 100000000 and 999999999
```
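The same logic is portable beyond Informix; here is a quick check with Python's sqlite3 (using `%` for MOD, on made-up sample values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tablename (colname INTEGER);
INSERT INTO tablename VALUES
  (123456789),   -- nine digits, ends in 9: match
  (123456788),   -- nine digits, ends in 8: no
  (987654329),   -- nine digits, ends in 9: match
  (99),          -- ends in 9 but only two digits: no
  (1234567890);  -- ten digits: no
""")

rows = cur.execute("""
    SELECT colname FROM tablename
    WHERE colname % 10 = 9
      AND colname BETWEEN 100000000 AND 999999999
    ORDER BY colname
""").fetchall()
print(rows)
```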
|
```
select colname from tablename where colname%10 = 9
```
|
Search for an integer ending in '9' using SQL
|
[
"",
"sql",
"informix",
""
] |
So I have this table
```
Product_ID Client_ID
1 2
1 3
2 2
3 2
```
Basically I need to select the `product_ID`'s that's exclusive to `client_id = 2`. In my case it should return only the 2 and 3 `product_ID` as the one with `id = 1` is not exclusive and has multiple clients setup.
|
Here's how to do it.
First, let's create your table.
```
create table SomeTable
(
Product_ID int,
Client_ID int
)
Go
insert into SomeTable values(1, 2)
insert into SomeTable values(1, 3)
insert into SomeTable values(2, 2)
insert into SomeTable values(3, 2)
```
The following script will return a list of all `Product_ID` values which have *just one* `Client_ID` value:
```
SELECT Product_ID
FROM SomeTable
GROUP BY Product_ID
HAVING COUNT(*) = 1
```
And you make this a sub-clause to get the results you're looking for:
```
SELECT st.Product_ID
FROM SomeTable st,
(
SELECT Product_ID
FROM SomeTable
GROUP BY Product_ID
HAVING COUNT(*) = 1
) tmp (Product_ID)
WHERE tmp.Product_ID = st.Product_ID
and st.Client_ID = 2
```
This will give you the row results (2 and 3) that you're looking for.
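A quick way to sanity-check the final query is to run it against SQLite via Python's sqlite3, recreating the same sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE SomeTable (Product_ID INTEGER, Client_ID INTEGER);
INSERT INTO SomeTable VALUES (1, 2), (1, 3), (2, 2), (3, 2);
""")

# Inner query keeps products with exactly one client row; the outer
# filter then keeps only the ones whose single client is 2.
rows = cur.execute("""
    SELECT st.Product_ID
    FROM SomeTable st
    JOIN (SELECT Product_ID
          FROM SomeTable
          GROUP BY Product_ID
          HAVING COUNT(*) = 1) tmp
      ON tmp.Product_ID = st.Product_ID
    WHERE st.Client_ID = 2
    ORDER BY st.Product_ID
""").fetchall()
print(rows)  # products 2 and 3, as expected
```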
|
Try this:
```
Select product_id from ClientsProducts where client_id in (
Select client_ID from
(select client_ID,count(product_id) from ClientsProducts
group by client_ID
having count(product_id)=1) a )
```
|
Select records with certain value but exclude if it has another one
|
[
"",
"sql",
""
] |
Hello I have the following MySQL table:
```
id item value
1 A 11
2 A 20
3 B 2
4 C 1
5 B 14
6 C 12
```
What I'm trying to do is to select items with the lowest values - what I tried to do based on search of this forum is to use the following query:
```
SELECT *,
MIN(value)
FROM mytable
GROUP BY item
```
The expected result should be **A => 11, B => 2, C => 1**; however, for some items the min value is correct and for others it is a different value. Please note that mytable consists of approx. ~100 rows.
|
You may try this way.
```
SELECT t.* FROM mytable t
JOIN
( SELECT item, MIN(value) minVal
FROM mytable GROUP BY item
) t2
ON t.value = t2.minVal AND t.item = t2.item;
```
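This is the classic greatest-n-per-group join and is portable SQL, so it can be verified quickly with Python's sqlite3 on the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE mytable (id INTEGER, item TEXT, value INTEGER);
INSERT INTO mytable VALUES
  (1,'A',11), (2,'A',20), (3,'B',2), (4,'C',1), (5,'B',14), (6,'C',12);
""")

# The derived table t2 finds the min per item; joining back on
# (item, value) pulls in the full matching row.
rows = cur.execute("""
    SELECT t.id, t.item, t.value
    FROM mytable t
    JOIN (SELECT item, MIN(value) AS minVal
          FROM mytable GROUP BY item) t2
      ON t.value = t2.minVal AND t.item = t2.item
    ORDER BY t.item
""").fetchall()
print(rows)  # A => 11, B => 2, C => 1
```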
|
this would work in oracle
```
SELECT item, min(value)
FROM mytable
GROUP BY item
```
|
Select min value per item in MySQL
|
[
"",
"mysql",
"sql",
"database",
"oracle",
"greatest-n-per-group",
""
] |
I have a table
```
date name
2014-01-01 AAA
2014-01-01 BBB
2014-01-01 CCC
2014-01-01 DDD
2015-05-05 AAA
2016-09-09 AAA
2016-09-09 BBB
```
and I want to have only the newest information about every person:
```
date name
2016-09-09 AAA
2016-09-09 BBB
2014-01-01 CCC
2014-01-01 DDD
```
I can simply write
```
SELECT MAX(date), name
FROM table1
GROUP BY name;
```
but if I want to add another column, it doesn't work
```
SELECT MAX(date), name, address
FROM table1
GROUP BY name;
```
(doesn't work)
```
SELECT MAX(date), name, address
FROM table1
GROUP BY name, address;
```
(it works, but I want to have only one record for one person)
How can I do it?
|
Assuming your dates for each name are unique, it's quite simple to solve using a derived table and join, like this:
```
SELECT maxdate, t.name, t.address
from table1 t
inner join
(
SELECT MAX(date) as maxdate, name
FROM table1
GROUP BY name
) d on(t.date = d.maxdate and t.name = d.name)
```
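The derived-table join is portable; a quick sqlite3 sketch, with an assumed `address` column filled with made-up values since the question's sample data omits it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (date TEXT, name TEXT, address TEXT);
INSERT INTO table1 VALUES
  ('2014-01-01','AAA','addr1'), ('2014-01-01','BBB','addr2'),
  ('2014-01-01','CCC','addr3'), ('2014-01-01','DDD','addr4'),
  ('2015-05-05','AAA','addr1'), ('2016-09-09','AAA','addr1'),
  ('2016-09-09','BBB','addr2');
""")

# d finds each name's newest date; joining back retrieves the other columns.
rows = cur.execute("""
    SELECT d.maxdate, t.name, t.address
    FROM table1 t
    JOIN (SELECT MAX(date) AS maxdate, name
          FROM table1 GROUP BY name) d
      ON t.date = d.maxdate AND t.name = d.name
    ORDER BY t.name
""").fetchall()
print(rows)  # one row per name, at its newest date
```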
|
In *SQL-Server* you can use `ROW_NUMBER()` in following:
```
SELECT [date], name, address
FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY name, address ORDER BY [date] DESC) rn
FROM Table1
)x
WHERE rn = 1
```
|
aggregate function, newest date
|
[
"",
"sql",
"aggregate",
""
] |
I have two tables
* "visit" which basically stores each visit on a website
```
| visitdate | city |
----------------------------------
| 2014-12-01 00:00:02 | Paris |
| 2015-01-03 00:00:02 | Marseille|
```
* "cityweather" that stores weather infos 3 times a day for a lot of cities
```
| weatherdate | city | temp |
-------------------------------------------
| 2014-12-01 09:00:02 | Paris | 20 |
| 2014-12-01 09:00:02 | Marseille| 22 |
```
Note that there can be cities in the table **visit** that are not in **cityweather** and vice versa, and I need to take only cities that are common to both tables.
**So my question is :**
How can I `SELECT`, for each `visitdate`, the `MAX(weatherdate)` that is earlier than the visitdate?
It should look like this :
```
| visitdate | city | beforedate |
--------------------------------------------------------
| 2014-12-01 00:00:02 | Paris | 2014-11-30 21:00:00 |
| 2015-01-03 15:07:26 | Marseille| 2015-01-03 09:00:00 |
```
I tried something like this :
```
SELECT t.city, t.visitdate, d.weatherdate as beforedate
FROM visitsub as t
JOIN cityweatherfrsub as d
ON d.weatherdate =
( SELECT MAX(d.weatherdate)
FROM cityweatherfrsub
WHERE d.weatherdate <= t.visitdate AND d.city=t.city
)
AND d.city = t.city;
```
But the size of the tables makes it impossible to compute in a "reasonable" time (10^14 steps):
```
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
---------------------------------------------------------------------------------------------------------------------------------------------------------
| 1 | PRIMARY | d | ALL | idx_city,Idx_citydate | NULL | NULL | NULL | 1204305 | Using where |
| 1 | PRIMARY | t | ref | Idxcity, Idxcitydate | Idxcitydate | 303 | meteo.d.city | 111 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | cityweather | index | NULL | Idx_date | 6 | NULL | 1204305 | Using where; Using index |
```
I am now investigating user variables like `@variable`, but I am very new to them and only wrote something that is not working (`Error Code: 1111. Invalid use of group function`):
```
SET @j :=0;
SET @k :=0;
SET @l :=0;
SET @m :=0;
CREATE TABLE intermedweather
SELECT @l as city, @k as visitdate, @j as beforedate
FROM visitsub t
JOIN cityweatherfrsub d
WHERE (@j := d.weatherdate) <= (@k := t.visitdate)
AND (@l := d.city) = (@m := t.city)
AND @j = MAX(d.weatherdate);
```
[You can find here a similar post but it can't work for my problem](https://stackoverflow.com/questions/6186962/sql-query-to-show-nearest-date)
|
I ended up finding the answer myself. It all comes down to narrowing the selection on the table cityweather. So I did it in two steps to avoid the O(n^2) problem we had up until now, and reduced the size of the first table (sometimes a virtual table) found in other answers:
**First step (the crucial one):**
```
CREATE TABLE intermedtable
SELECT t.city, t.visitdate, d.weatherdate
FROM visit as t
JOIN cityweather as d
WHERE d.city=t.city AND d.weatherdate <= t.visitdate AND d.weatherdate + interval 1 day >= t.visitdate;
```
What is crucial here compared to what we had before is the `d.weatherdate + interval 1 day >= t.visitdate` condition. It "only" took 22 minutes.
**The second step is to find the `MAX(weatherdate)` for each pair `(city, visitdate)` :**
```
Create table beforedatetable
SELECT city, visitdate, max(weatherdate) as beforedate
FROM intermedtable
GROUP BY city, visitdate;
```
With this solution I go from the 16 hours computation (with crash at the end) down to 32 minutes.
The core of this answer was to reduce the size of the virtual table created in previous answer by adding the `d.weatherdate + interval 1 day >= t.visitdate` condition. This is based on the fact that the weatherdate of interest can't be more than one day away from the visitdate.
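Both steps can be combined and verified on a toy dataset with Python's sqlite3. The one-day window is written with SQLite's `datetime(..., '+1 day')` modifier instead of MySQL's `interval` syntax, and the sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE visit (visitdate TEXT, city TEXT);
CREATE TABLE cityweather (weatherdate TEXT, city TEXT);
INSERT INTO visit VALUES ('2014-12-01 00:00:02', 'Paris');
INSERT INTO cityweather VALUES
  ('2014-11-30 09:00:00', 'Paris'),
  ('2014-11-30 21:00:00', 'Paris'),
  ('2014-12-01 09:00:00', 'Paris');
""")

# Step 1 (join restricted to the one-day window) and step 2
# (MAX per city/visitdate) folded into a single query here.
rows = cur.execute("""
    SELECT t.city, t.visitdate, MAX(d.weatherdate) AS beforedate
    FROM visit t
    JOIN cityweather d
      ON d.city = t.city
     AND d.weatherdate <= t.visitdate
     AND datetime(d.weatherdate, '+1 day') >= t.visitdate
    GROUP BY t.city, t.visitdate
""").fetchall()
print(rows)  # the latest reading before the visit, within one day
```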
|
Maybe something like this:
```
select
V.*,
(
select
MAX(weatherdate)
from Weather W
where
W.weatherdate < V.visitdate and
W.city = V.city
) beforedate
from Visit V
where
  exists ( select 1 from Weather W where V.city = W.city)
```
|
Find closest lower date among two sets of dates with big data set Mysql
|
[
"",
"mysql",
"sql",
"performance",
"datetime",
"bigdata",
""
] |
I want to retrieve data from a table according to a particular time, so I use the query below.
```
select * from a
where convert(smalldatetime,date,datepart(hh,date)) ='09:12:00'
```
but I get no rows as a result. What should I do to get my expected result?
|
You can convert directly to varchar as below
```
SELECT * FROM a
WHERE CONVERT(VARCHAR(8), [date], 108) = '09:12:00'
```
|
Try this:
```
SELECT *
FROM a
WHERE CONVERT(VARCHAR(8), [DATE],108) = '09:12:00'
```
Your datatype is already `smalldatetime`, so you have to convert it to `varchar` and compare it with a string.
**EDIT:**
Answer explanation:
You have `smalldatetime` inside [DATE] column. For example `2015-10-01 09:12:00`. To compare only time you need to convert to string which contains only time. For this reason you will use command [CONVERT](https://msdn.microsoft.com/en-us/library/ms187928.aspx) from TSQL.
It will convert the `smalldatetime` to an 8-character string. As a result you will have `09:12:00`, which you then compare with your string.
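SQL Server's `CONVERT(..., 108)` is not portable, but the same time-portion filter can be sketched in SQLite (via Python's sqlite3), where `strftime` plays the equivalent role. Sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE a (date TEXT);
INSERT INTO a VALUES
  ('2015-10-01 09:12:00'),
  ('2015-10-02 09:12:00'),
  ('2015-10-01 10:00:00');
""")

# strftime('%H:%M:%S', ...) extracts just the time portion, like CONVERT(..., 108).
rows = cur.execute("""
    SELECT date FROM a
    WHERE strftime('%H:%M:%S', date) = '09:12:00'
    ORDER BY date
""").fetchall()
print(rows)  # both 09:12:00 rows, regardless of their dates
```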
|
How to filter Time portion of a DateTime column
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
My question is to select pilots that are currently not assigned to any flight.
My idea is to use `LEFT JOIN` to select pilots that are currently assigned to flights, then select the `null` values. Am I right?
I am trying to select the `null` value by using `WHERE...IS NULL`, but I don't know how. Thanks for the help.
There are five tables:
* Flight:
```
PRIMARY KEY (id),
FOREIGN KEY (route) REFERENCES Route (id),
FOREIGN KEY (aircraft) REFERENCES Aircraft (id)
```
* Aircraft:
```
PRIMARY KEY (id),
FOREIGN KEY (aircraftType) REFERENCES AircraftType (id)
```
* AircraftType:
```
PRIMARY KEY (id)
```
* Pilot:
```
PRIMARY KEY (person, forAircraftType),
FOREIGN KEY (person) REFERENCES Person (id),
FOREIGN KEY (forAircraftType) REFERENCES AircraftType (id)
```
* Person:
```
PRIMARY KEY (id)
```
Here is my code, I can select pilots that are currently assigned to flight
```
SELECT name
FROM Flight
LEFT JOIN Aircraft ON Flight.aircraft = Aircraft.id
LEFT JOIN AircraftType ON Aircraft.AircraftType = AircraftType.id
LEFT JOIN Pilot ON AircraftType.id = Pilot.forAircraftType
LEFT JOIN Person ON Pilot.person = Person.id
```
|
Instead of trying to join your "Flight" table into the query, simply check if the pilot exists in the Flight table:
```
SELECT name
FROM Aircraft
LEFT JOIN AircraftType
ON Aircraft.AircraftType = AircraftType.id
LEFT JOIN Pilot
ON AircraftType.id = Pilot.forAircraftType
LEFT JOIN Person
ON Pilot.person = Person.id
WHERE NOT EXISTS (SELECT 1 FROM Flight WHERE Flight.aircraft = Aircraft.id)
```
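The NOT EXISTS anti-join is the core of this answer. A minimal two-table sketch with Python's sqlite3, on a hypothetical simplified schema (pilot linked directly to an aircraft type):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Pilot (name TEXT, aircraft INTEGER);
CREATE TABLE Flight (id INTEGER, aircraft INTEGER);
INSERT INTO Pilot VALUES ('Amy', 1), ('Bob', 2), ('Cid', 3);
INSERT INTO Flight VALUES (10, 1), (11, 3);
""")

# Pilots whose aircraft appears in no flight: an anti-join via NOT EXISTS.
rows = cur.execute("""
    SELECT name FROM Pilot p
    WHERE NOT EXISTS (SELECT 1 FROM Flight f WHERE f.aircraft = p.aircraft)
""").fetchall()
print(rows)  # only Bob's aircraft has no flight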
|
Reverse your query or use Right Join as opposed to Left. In its current configuration, Flight is your base table that all other tables are being joined to so it will only return records where pilots have been assinged to a flight i.e. this acts a filter which excludes unassigned Pilots.
By making Pilot or Person as your base, you are aligning all other tables to it and can simply use IS NULL to get your results.
```
SELECT name
FROM Person
LEFT JOIN Pilot ON Person.id = Pilot.person
LEFT JOIN AircraftType ON Pilot.forAircraftType = AircraftType.id
LEFT JOIN Aircraft ON AircraftType.id = Aircraft.AircraftType
LEFT JOIN Flight ON Aircraft.id = Flight.aircraft
Where Flight.aircraft IS NULL
```
|
How to select NOT using SQL JOIN
|
[
"",
"sql",
""
] |
I must create an SQL table with 90+ fields, the majority of them are bit fields like N01, N02, N03 ... N89, N90 is there a fast way of creating multiple fileds or is it possible to have one single field to contain an array of values true/false? I need a solution that can also easily be queried.
|
At least you can generate `ALTER TABLE` scripts for bit fields, and then run those scripts.
```
DECLARE @COUNTER INT = 1
WHILE @COUNTER <= 90  -- one statement per bit column N01..N90
BEGIN
    PRINT 'ALTER TABLE table_name ADD N' + RIGHT('00' + CONVERT(NVARCHAR(4), @COUNTER), 2) + ' bit'
    SET @COUNTER += 1
END
```
|
There is no easy way to do this, and it will be very challenging to write queries against such a table. Instead, create a table with three columns: item number, bit field number, and a value field. Then you will be able to write good, succinct T-SQL queries against the table.
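As a sketch of that three-column design (all names invented), here is how the (item, flag number, value) layout can be populated and queried, demonstrated with SQLite from Python; the same shape works in T-SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical names: one row per (item, flag) instead of 90 bit columns.
conn.execute("CREATE TABLE item_flags (item_id INTEGER, flag_no INTEGER, value INTEGER)")
conn.executemany(
    "INSERT INTO item_flags VALUES (?, ?, ?)",
    [(1, n, 1 if n in (3, 7) else 0) for n in range(1, 91)])

# "Which of the 90 flags are set for item 1?" is now a plain WHERE clause.
set_flags = [r[0] for r in conn.execute(
    "SELECT flag_no FROM item_flags WHERE item_id = 1 AND value = 1 ORDER BY flag_no")]
```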
|
creating a SQL table with multiple columns automatically
|
[
"",
"sql",
"sql-server",
"database-design",
"ssms",
""
] |
How can I drop a column in a Sybase ASE 15 table setup.
I have tried a few, to no avail:
```
alter table my_table
drop column1, column2
```
As well as looking at the [Sybase documentation](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00801.1510/html/iqrefso/BABCGFHG.htm) which provides no solution to my issues.
> The 'select into' database option is not enabled for database
> 'my\_db'. ALTER TABLE with data copy cannot be done. Set the 'select
> into' database option and re-run.
|
As the `select into` option on your database is off, there are two options. Either turn the `select into` option ON using
```
sp_dboption my_db, "select into", true
```
OR
use the `no datacopy` parameter of `alter table drop column`, which allows you to drop columns from a table without performing a data copy and reduces the amount of time required for the statement to run.
[SYBASE Documentation - Dropping Columns from a Table Without Performing a Data Copy](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00641.1572/html/new_features/new_features65.htm)
```
ALTER TABLE my_table
DROP column1, column2
WITH no datacopy
```
|
I just want to share that I had a similar issue, and in my case I had to add "drop" before each column name. The code looks like this:
```
alter table dba.ABC
drop column1,
drop column2,
drop column3
commit;
```
I work on an old version of Sybase that is not that flexible.
Cheers
|
Drop columns in Sybase ASE table
|
[
"",
"sql",
"t-sql",
"sybase",
"sap-ase",
""
] |
Which method would be better to use for getting the number of likes?
Assuming I have the table likes:
[](https://i.stack.imgur.com/aNH8z.png)
So `like_type` determines what type `id_liked` refers to. If `like_type` is "question" and `id_liked` is 10 then the like belongs to this row:
```
SELECT * FROM questions where question_id = 10;
```
If `like_type` is "answer", the row with the answer will be:
```
SELECT * FROM answers where answer_id = 10;
```
[](https://i.stack.imgur.com/9DvHH.png)
So the main question is: which method is best to get all likes for question id 10?
```
SELECT COUNT(*) as like_numbers from likes where like_type = 'question' AND id_liked = 10;
```
OR
Every time someone likes a question, increment `question_likes` in the questions table, so counting would not be necessary every time.
|
It depends on several factors. Here are some pros and cons so you can decide for yourself.
## Incrementing a number
### Pros
* Easier and more performant to lookup a single number.
* Easy to modify to skew results.
### Cons
* People can likely find a way to like multiple times.
* Can't count individual likes if they need to be recalculated.
* Can't remove likes for stuff like a user was deleted/banned.
---
## Storing individual likes
### Pros
* Can be re-calculated if the numbers get skewed for some reason.
* More verbose, you can literally count the likes yourself.
* Can store identifying information with the likes to prevent multiple likes for same question or answer.
* Can remove a vote if a user is deleted/banned (or just don't count votes from those users).
### Cons
* Less performant to count many rows, especially if the number gets really, really high (think [m|b|tr]illions+).
* Uses more space. Likely not an issue, but when you have tons of rows, it might. Would need quite a lot though.
---
I believe the forum software my website uses does both. Each post has a thumbs-up button, and it stores a count with each row but also keeps a record of each individual vote. As an admin I can edit the count to another number, which only adjusts the count, but I can hover over the number on the post and see who voted it up. There is also a "recalculate all post scores" option in the admin panel that will go over and readjust the number to match the actual votes.
|
It really depends on your specific needs, expected concurrency, and complexity of your application.
Caching a column with the current count on the `questions` table is likely to be more efficient if you're already reading that table to generate a listing of questions.
It *could* require additional application logic if, for example, a user's likes need to be removed from the totals when their account gets suspended or such (since just pulling them from the `likes` table wouldn't remove them from the cached count). Even if you have conditions like this, you have to decide if the performance benefit is worth adding additional logic in your scenario.
Another option is generating the cached count on a schedule via a cronjob or other mechanism. This can minimize your impact on your application logic and MySQL's query cache, but also adds a level of complexity to something which in your case seems to be low-impact (the cost of adding to the count when adding a like is low).
If you add an index to the `id_liked` column, it will improve the performance of counting likes from that table—but you'll still have the overhead of a `JOIN` or a separate query for each record when connecting this to your questions.
Note: If you're adding to a count column of some type, make sure you do it atomically within MySQL so you aren't wiping out likes that happen between querying the current count and when you update the value:
`` UPDATE `questions` SET `question_likes`=(`question_likes`+1) WHERE [...condition...] ``
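The atomic-increment point can be illustrated with SQLite from Python (table and values are hypothetical): the arithmetic happens inside the single `UPDATE`, so concurrent likes are not lost to a read-then-write race in application code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE questions (question_id INTEGER PRIMARY KEY, question_likes INTEGER)")
conn.execute("INSERT INTO questions VALUES (10, 5)")

# The increment happens inside the database, so three "likes" always add 3,
# even if they arrive from different application threads.
for _ in range(3):
    conn.execute(
        "UPDATE questions SET question_likes = question_likes + 1 WHERE question_id = 10")

likes = conn.execute(
    "SELECT question_likes FROM questions WHERE question_id = 10").fetchone()[0]
```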
|
COUNT from database vs increment
|
[
"",
"mysql",
"sql",
""
] |
Hi, I want to convert a datetime so that the time part becomes `23:59:59`, like `2015-05-12 23:59:59`.
**Ex:** I want to convert today's date like below: `2015-08-17 23:59:59`.
**Edited**
For `GETDATE()` in SQL Server I get a datetime like `2015-08-17 17:10:54.080`; I want to convert it into `2015-08-17 23:59:59.080`.
|
It seems this question is not about formatting, so here is a solution to get the last timestamps of the day:
To get the last minute of the day:
```
SELECT dateadd(d, datediff(d, 0, getdate()), cast('23:59:59' as datetime))
```
Returns:
```
2015-08-17 23:59:59.000
```
To get the last possible timestamp of the day:
```
SELECT dateadd(d, datediff(d, 0, getdate()), cast('23:59:59:997' as datetime))
2015-08-17 23:59:59.997
```
|
You need to use `convert()` here; try the query below:
```
SELECT CONVERT(char(19), GetDate(),121)
```
**Output :**
```
2015-08-17 11:37:29
```
|
converting datetime in sqlserver like yyyy-mm-dd hh:mi:ss to yyyy-mm-dd 23:59:59
|
[
"",
"sql",
"sql-server",
"date",
"datetime",
""
] |
I have two temp tables like this:
```
t1 t2
id a b c id a b c
1 250
2 251
3 .
. .
.
250
251
.
.
```
id 250 from t2 corresponds to id 1 from t1,
id 251 from t2 corresponds to id 2 from t1, and so on.
How can I achieve something like this?
```
select id, a, b, c
from #t1
join #t2
on #t1.id - 249 = #t2.id
```
I need this because I need to perform calculations, for example:
> (the value from a with the id 250 from t1 - the value from a with id 1
> from t1 ) \* 100 in a new column
Thanks
|
You've got it right except for resolving ambiguity in your query. See below syntax
```
select
t1.*,
t2.*,
(t2.a-t1.a)*100 as result
from #t1 t1
join #t2 t2
on t1.id - 249 = t2.id
```
further based on your comment to question
> I have 1 table and i perform two different queries to create 2 temp
> tables. From there you know the story. I want to insert the result
> into another temp table.
I'd suggest that instead of creating temp tables, you use the select queries as inner queries in the join, as below, so that problems with temp tables being invisible elsewhere are not a hindrance. This saves on memory as well.
```
select
t1.*,
t2.*,
(t2.a-t1.a)*100 as result
from
(
select id,a,b,c from orig_table
-- where ....
) t1
join
(
select id,a,b,c from orig_table
-- where ....
) t2
on t1.id - 249 = t2.id
```
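A small, self-contained sketch of the offset join (sample ids and values invented), run with SQLite from Python since the pattern is plain SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orig_table (id INTEGER PRIMARY KEY, a INTEGER)")
# Rows 1 and 2 pair up with rows 250 and 251 (ids 249 apart).
conn.executemany("INSERT INTO orig_table VALUES (?, ?)",
                 [(1, 10), (2, 20), (250, 14), (251, 26)])

rows = conn.execute("""
    SELECT t1.id, t2.id, (t2.a - t1.a) * 100 AS result
    FROM orig_table t1
    JOIN orig_table t2 ON t1.id - 249 = t2.id
    ORDER BY t1.id
""").fetchall()
```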
**update**
> Do you have any idea how i could write this for 12 different cases?
> insetead copy pasting the query 12 times. example: case when a like
> 'e1m' or 'e2m', or 'ey1' and so on, 12 in total
Based on asker's comment I'd suggest that following query is a better approach:
```
select
id as orig_id,
a,
b,
case when c like 'c1' then c else Null end as t1c,
case when c like 'c1' then id-249 else Null end as t1id,
case when c like 'c2' then c else Null end as t2c,
case when c like 'c1' then id-249*2 else Null end as t2id
from orig_table
```
Put this as an inner query and we can quickly do all the calculations over its result set instead of all those joins.
Demo SQL Fiddle link: <http://sqlfiddle.com/#!6/b7aee/4>
|
This will work:
`select * from #t1 JOIN #t2 ON (#t2.id -249)= #t1.id`
|
join 2 temp tables on different id values (always 249 positions apart)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I want to retrieve the `empId` that belongs to more than one city. So I use this query:
```
select empId
from phone
group by city
having count(city) > 1
```
but I get an error:
> Msg 8120, Level 16, State 1, Line 1
> Column 'phone.empId' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
|
Use `GROUP BY` and `HAVING` count distinct city to find empId's with more than 1 city:
```
SELECT empId
FROM phone
GROUP BY empId
HAVING COUNT(DISTINCT city) > 1
```
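A quick way to convince yourself that `COUNT(DISTINCT city)` is what handles duplicate rows correctly is a toy run with SQLite from Python (data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phone (empId INTEGER, city TEXT)")
conn.executemany("INSERT INTO phone VALUES (?, ?)",
                 [(1, 'Paris'), (1, 'Paris'),  # duplicate rows, but one distinct city
                  (2, 'Lyon'), (2, 'Nice'),    # genuinely two cities
                  (3, 'Oslo')])

multi_city = [r[0] for r in conn.execute("""
    SELECT empId FROM phone
    GROUP BY empId
    HAVING COUNT(DISTINCT city) > 1
""")]
```

Only employee 2 qualifies; a plain `COUNT(city)` would wrongly also flag employee 1.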
|
You can also do this with a ranking function; the following code should work (`DENSE_RANK()` gives duplicate cities the same rank, so an employee only qualifies when there is more than one *distinct* city):
```
select distinct empId from
(
select empId, DENSE_RANK() over(partition by empId order by city) rn
from phone
) a
where rn>1
```
|
Aggregate function error in SQL Server 2008 R2
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have the following code:
```
DECLARE @a INT
SET @a = 1
SELECT TOP 1
@a = id
FROM
test
WHERE
name = 'notexist'
SELECT @a
```
The variable `@a` will still have the value `1` if the `SELECT` statement doesn't find any row. Is it possible to set `@a` to null when the `SELECT` statement finds no row, without adding `SET @a = NULL` before the `SELECT` statement?
|
Assign the value this way:
```
SET @a = (
SELECT TOP 1
id
FROM
test
WHERE
name = 'notexist'
)
SELECT @a
```
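The reason this works is that a scalar subquery matching no rows evaluates to NULL. A minimal demonstration with SQLite from Python (SQLite has no T-SQL variables, so the subquery's value is selected directly; `LIMIT 1` plays the role of `TOP 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, name TEXT)")
conn.execute("INSERT INTO test VALUES (7, 'exists')")

# No matching row: the scalar subquery yields NULL (None in Python).
(missing,) = conn.execute(
    "SELECT (SELECT id FROM test WHERE name = 'notexist' LIMIT 1)").fetchone()
# A matching row: the subquery yields that row's id.
(found,) = conn.execute(
    "SELECT (SELECT id FROM test WHERE name = 'exists' LIMIT 1)").fetchone()
```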
|
Try this
```
SELECT TOP 1
@a = id
FROM
test
WHERE
name = 'notexist'
If @@rowcount = 0
set @a = NULL
```
or
```
set @a = case when @@rowcount = 0 then null else @a end
```
But it's better to avoid the initialization to `1`; by default it will be `NULL`.
|
Set variable to NULL if not found in query statement
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm required to split names to insert into the database. Some sample strings are given below.
```
Covey, Stephen J, Mr
Clinton, Hilary B,
Obama, Barack, Mr
```
I need to split the above string like follows.
```
First_Name Middle_Initial Last_NAME Title
Stephen J Covey Mr
Hilary B Clinton
Barack Obama Mr
```
I can achieve the above using Java code. Is it possible to split those strings directly in a SQL select?
Please note that a few names might not have a title or middle initial. I need the output as given above.
|
[SQL Fiddle](http://sqlfiddle.com/#!4/61d71/7)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE Names ( Name ) AS
SELECT 'Covey, Stephen J, Mr' FROM DUAL
UNION ALL SELECT 'Clinton, Hilary B,' FROM DUAL
UNION ALL SELECT 'Obama, Barack, Mr' FROM DUAL
```
**Query 1**:
```
SELECT REGEXP_SUBSTR( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', 1, 1, NULL, 1 ) AS Last_Name,
REGEXP_SUBSTR( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', 1, 1, NULL, 2 ) AS First_Name,
REGEXP_SUBSTR( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', 1, 1, NULL, 4 ) AS Middle_Initial,
REGEXP_SUBSTR( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', 1, 1, NULL, 5 ) AS Title
FROM Names
```
**[Results](http://sqlfiddle.com/#!4/61d71/7/0)**:
```
| LAST_NAME | FIRST_NAME | MIDDLE_INITIAL | TITLE |
|-----------|------------|----------------|--------|
| Covey | Stephen | J | Mr |
| Clinton | Hilary | B | (null) |
| Obama | Barack | (null) | Mr |
```
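The same regex works outside Oracle too; here is a sketch checking the capture groups (1, 2, 4, 5) with Python's `re` module against the question's sample names:

```python
import re

# Same pattern as the Oracle queries; groups 1, 2, 4 and 5 are
# last name, first name, middle initial and title.
pattern = re.compile(r'^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$')

def split_name(s):
    m = pattern.match(s)
    return (m.group(1), m.group(2), m.group(4), m.group(5))

parts = [split_name(s) for s in
         ['Covey, Stephen J, Mr', 'Clinton, Hilary B,', 'Obama, Barack, Mr']]
```

Note that a missing middle initial comes back as `None`, while a missing trailing title captures as an empty string (in Oracle both would be NULL).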
**Query 2**:
```
SELECT REGEXP_REPLACE( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', '\1' ) AS Last_Name,
REGEXP_REPLACE( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', '\2' ) AS First_Name,
REGEXP_REPLACE( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', '\4' ) AS Middle_Initial,
REGEXP_REPLACE( Name, '^(.*?),\s*(.*?)(\s+(\w))?,\s*(.*)$', '\5' ) AS Title
FROM Names
```
**[Results](http://sqlfiddle.com/#!4/61d71/7/1)**:
```
| LAST_NAME | FIRST_NAME | MIDDLE_INITIAL | TITLE |
|-----------|------------|----------------|--------|
| Covey | Stephen | J | Mr |
| Clinton | Hilary | B | (null) |
| Obama | Barack | (null) | Mr |
```
**Query 3**:
```
WITH Split_Names AS (
SELECT REGEXP_SUBSTR( Name, '^[^,]+' ) AS Last_Name,
REGEXP_REPLACE( Name, '^.*?,\s*|\s*,.*?$' ) AS Given_Names,
REGEXP_SUBSTR( Name, '[^\s,]+$' ) AS Title
FROM Names
)
SELECT Last_Name,
REGEXP_REPLACE( Given_Names, '\s+\w$' ) AS First_Name,
TRIM( REGEXP_SUBSTR( Given_Names, '\s+\w$' ) ) AS Middle_Initial,
Title
FROM Split_Names
```
**[Results](http://sqlfiddle.com/#!4/61d71/7/2)**:
```
| LAST_NAME | FIRST_NAME | MIDDLE_INITIAL | TITLE |
|-----------|------------|----------------|--------|
| Covey | Stephen | J | Mr |
| Clinton | Hilary | B | (null) |
| Obama | Barack | (null) | Mr |
```
|
This is Alexander's answer modified with an improved regex that handles NULL list elements. Also, instead of repeating that regex, make it reusable by creating a function as described here: [REGEX to select nth value from a list, allowing for nulls](https://stackoverflow.com/questions/25648653/regex-to-select-nth-value-from-a-list-allowing-for-nulls/25652018#25652018), then call that. That way the code is encapsulated and reusable by all, with only one place to change code if you have to:
```
SQL> with tbl(input_row) as (
select 'Covey, Stephen J, Mr' from dual
union
select 'Clinton,,Ms' from dual
union
select 'Obama, Barack, Mr' from dual
)
SELECT TRIM( REGEXP_SUBSTR(input_row, '([^,]*)(,|$)', 1, 1, NULL, 1)) AS Last_NAME,
TRIM( REGEXP_SUBSTR( REGEXP_SUBSTR(input_row, '([^,]*)(,|$)', 1, 2, NULL, 1), '[^ ]+', 1, 1)) AS First_Name,
TRIM( REGEXP_SUBSTR( REGEXP_SUBSTR(input_row, '([^,]*)(,|$)', 1, 2, NULL, 1), '[^ ]+', 1, 2)) AS Middle_Initial,
TRIM( REGEXP_SUBSTR(input_row, '([^,]*)(,|$)', 1, 3, NULL, 1)) AS Title
FROM tbl;
LAST_NAME FIRST_NAME MIDDLE_INITIAL TITLE
-------------------- -------------------- -------------------- --------------------
Clinton Ms
Covey Stephen J Mr
Obama Barack Mr
SQL>
```
|
How to split the given string as per requirement using oracle
|
[
"",
"sql",
"string",
"oracle",
"substring",
""
] |
I have this output:
```
Contact_Type Category_Type Category_Count
---------------------------------------------------
Window Admissions 1775
Window Financial Aid 17377
Window Miscellaneous 2720
Window Student Financials 14039
Phone Admissions 5758
Phone Financial Aid 10048
Phone Miscellaneous 4497
Phone Registration 11
Phone Student Financials 4857
```
and this is my query:
```
SELECT
Contact_Type, Category_Type1, Category_Type2, Category_Type3,
Category_Type4, Category_Type5
FROM
(SELECT
CASE
WHEN event.contact_type = 0 THEN 'Window'
WHEN event.contact_type = 1 THEN 'Phone'
END AS Contact_Type,
cat.category_type AS Category_Type,
COUNT(ec.category_id) AS Category_Count,
'Category_Type' + CAST(ROW_NUMBER() OVER (PARTITION BY Contact_Type
ORDER BY Contact_Type) AS varchar(20)) AS ColumnSequence
FROM
yLines.ylines_event AS Event
JOIN
ylines.ylines_event_category AS ec ON ec.event_id = event.event_id
JOIN
ylines.ylines_category AS cat ON ec.category_id = cat.category_id
WHERE /*event.contact_type = '0' AND*/
CAST(FORMAT(event.event_date_time, 'yyyy') AS int) BETWEEN 2014 AND dateadd(year, 1, event.event_date_time)
GROUP BY
Category_Type, Contact_Type) a
PIVOT
(MAX(Contact_Type)
FOR ColumnSequence IN (Category_Type1, Category_Type2, Category_Type3,
Category_Type4, Category_Type5)) as piv;
```
If I run this it gives me an error:
> Msg 207, Level 16, State 1, Line 1
> Invalid column name 'Contact\_Type'
and I can't seem to fix this. I am trying to transpose the data so I see only two rows ('Window' and 'Phone') with the five category types transposed as five columns, with the count in each. I am writing T-SQL statements. Please help!
|
I would try to do it with dynamic SQL:
```
; WITH [CONTACT]
AS (
SELECT *
FROM (
VALUES
('Window', 'Admissions ', ' 1775')
, ('Window', 'Financial Aid ', '17377')
, ('Window', 'Miscellaneous ', ' 2720')
, ('Window', 'Student Financials', '14039')
, ('Phone ', 'Admissions ', ' 5758')
, ('Phone ', 'Financial Aid ', '10048')
, ('Phone ', 'Miscellaneous ', ' 4497')
, ('Phone ', 'Registration ', ' 11')
, ('Phone ', 'Student Financials', ' 4857')
) X ([Contact_Type], [Category_Type], [Category_Count])
)
SELECT *
INTO #TEMP_PIVOT
FROM [CONTACT]
DECLARE @TYPE VARCHAR(MAX)
SET @TYPE = STUFF(
(SELECT DISTINCT ', ' + QUOTENAME(RTRIM(LTRIM([CATEGORY_TYPE])))
FROM #TEMP_PIVOT
FOR XML PATH('')
)
, 1, 1, '')
DECLARE @SQL VARCHAR(MAX)
SET @SQL = ' SELECT [CONTACT_TYPE] '
+ ' , ' + @TYPE
+ ' FROM #TEMP_PIVOT '
+ ' PIVOT ( '
+ ' MAX([CATEGORY_COUNT]) '
+ ' FOR [CATEGORY_TYPE] IN (' + @TYPE + ')'
+ ' ) P '
EXECUTE (@SQL)
```
|
In the `group by` clause you say
```
group by Category_Type, Contact_Type
```
However, you have defined a calculated column as `Contact_Type`, which is not available in the `group by` clause. You should use:
```
GROUP BY Category_Type, -- Contact_Type
case
when event.contact_type=0 then 'Window'
when event.contact_type=1 then 'Phone'
end
```
It is a better approach to name your calculated columns differently from the columns in any of your tables.
|
How do you transpose rows into columns in a SQL query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a local SQL Server 2008R2. I have configured Linked Server to a remote database.
The Linked Server works great when I login to the local server using a SQL-login account with `sysadmin` server role. I can query against the remote server, so I know the Linked Server setting is correct. However, I would get the error below if I use an account that does not have the `sysadmin` server role.
```
Msg 7416, Level 16, State 2, Line 2
Access to the remote server is denied because no login-mapping exists.
```
For both local and remote servers, SQL login is used (Windows authentication is not used)
What kind of security I need to configure for a regular SQL-login account to use Linked Server?
|
As an alternative solution you can use the parameter `@datasrc` instead of `@provstr`; `@datasrc` works without setting the User ID.
Sample:
```
EXEC master.dbo.sp_addlinkedserver @server = N'LinkServerName', @provider=N'SQLNCLI',@srvproduct = 'MS SQL Server', @datasrc=N'serverName\InstanceName'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'LinkServerName', @locallogin = NULL , @useself = N'False', @rmtuser = N'myUser', @rmtpassword = N'*****'
```
I've added a comment [here](https://blogs.technet.microsoft.com/mdegre/2011/03/10/access-to-the-remote-server-is-denied-because-no-login-mapping-exists/), too, but it's not visible (don't know why).
|
**UPDATE**: See @Anton's and @Wouter's answer for alternative solution.
According to [this blog](http://blogs.technet.com/b/mdegre/archive/2011/03/10/access-to-the-remote-server-is-denied-because-no-login-mapping-exists.aspx), I have to specify `User ID` in the provider string if non-sysadmin accounts are used. Here is an example.
```
EXEC master.dbo.sp_addlinkedserver
@server = N'MyLinkServerName',
@provider = N'SQLNCLI',
@srvproduct = 'SQLNCLI',
@provstr = N'SERVER=MyServerName\MyInstanceName;User ID=myUser'
```
This exactly matches what I have encountered and it solves my problem.
|
SQL Linked Server returns error "no login-mapping exists" when non-admin account is used
|
[
"",
"sql",
"sql-server",
"security",
"linked-server",
""
] |
I have the following database structure (*some fields removed for clarity*):
```
quotes
------
id
name
business_quotes
---------------
id
quote_id
business_id
status
```
For `status` field in `business_quotes` table the possible values are:
* `0` - Quote not answered
* `1` - Quote answered
* `2` - Quote ordered
* `3` - Quote refused
I'd like to retrieve all quotes which have *at least one* order (`status = 2`) **OR** where *all* associated business\_quotes records were refused. I can get one or another, with the queries below.
Getting all `quotes` which have an order:
```
SELECT quotes.*
FROM quotes
INNER JOIN business_quotes
ON quotes.id = business_quotes.quote_id
WHERE business_quotes.status = 2
GROUP BY quotes.id
```
Getting all `quotes` where all `business_quotes` were refused.
```
SELECT quotes.*
FROM quotes
INNER JOIN business_quotes
ON quotes.id = business_quotes.quote_id
HAVING SUM(business_quotes.status) = COUNT(business_quotes.id) * 3
```
What I couldn't do was mix the two conditions in a single query.
In some cases it's possible to simply convert the `WHERE` condition into a `HAVING`, like stated in this answer: <https://stackoverflow.com/a/20900476/1128918>. But I can't follow the same logic, because the `status` column is not in the `select` statement.
|
Something like this should work. The aggregates in the select are optional. Essentially you are just using having to query for quotes that have ANY "2" statuses, or have ALL status of value "3".
Disclaimers: not checked for syntax errors, and my usual SQL Language is T-SQL, so it may need tweaking.
```
SELECT
quotes.id,
quotes.name,
SUM(CASE WHEN business_quotes.status = 2 THEN 1 ELSE 0 END) as NumOrdered,
SUM(CASE WHEN business_quotes.status = 3 THEN 1 ELSE 0 END) as NumRefused
FROM
quotes
INNER JOIN business_quotes ON quotes.id = business_quotes.quote_id
GROUP BY
quotes.id,
quotes.name
HAVING
SUM(CASE WHEN business_quotes.status = 2 THEN 1 ELSE 0 END) > 0
OR
SUM(CASE WHEN business_quotes.status = 3 THEN 1 ELSE 0 END)
= COUNT(quotes.id)
```
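Here is a toy run of the conditional-aggregation approach with SQLite from Python (ids and statuses invented): quote 1 has an order, quote 2 is fully refused, and quote 3 matches neither condition.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE quotes (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE business_quotes (id INTEGER PRIMARY KEY, quote_id INTEGER, status INTEGER);

INSERT INTO quotes VALUES (1, 'has an order'), (2, 'all refused'), (3, 'still open');
INSERT INTO business_quotes VALUES
  (1, 1, 0), (2, 1, 2),   -- quote 1: one unanswered, one ordered
  (3, 2, 3), (4, 2, 3),   -- quote 2: every record refused
  (5, 3, 0), (6, 3, 1);   -- quote 3: matches neither condition
""")

matching = [r[0] for r in conn.execute("""
    SELECT quotes.id
    FROM quotes
    JOIN business_quotes ON quotes.id = business_quotes.quote_id
    GROUP BY quotes.id, quotes.name
    HAVING SUM(CASE WHEN business_quotes.status = 2 THEN 1 ELSE 0 END) > 0
        OR SUM(CASE WHEN business_quotes.status = 3 THEN 1 ELSE 0 END) = COUNT(quotes.id)
    ORDER BY quotes.id
""")]
```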
|
That's where UNION is useful:
```
SELECT quotes.*
FROM quotes
INNER JOIN business_quotes
ON quotes.id = business_quotes.quote_id
WHERE business_quotes.status = 2
GROUP BY quotes.id
UNION
SELECT quotes.*
FROM quotes
INNER JOIN business_quotes
ON quotes.id = business_quotes.quote_id
HAVING SUM(business_quotes.status) = COUNT(business_quotes.id) * 3
```
|
Mixing WHERE and HAVING in an OR operation
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to execute a stored procedure through a shell script and get its return value, but I don't get anything back from the stored procedure. On the other hand, when I do the same thing at the sqlplus prompt, I am able to get the result.
```
sqlplus -silent xxx@xxx <<EOF
set serveroutput on
declare
DE_REC_COUNT number(10);
begin
DE_DUP_PROC ('T_MCL_30404_20150317_020','MCL','30404','FT',DE_REC_COUNT);
end;
EOF
```
Through the sqlplus prompt:
```
SQL> set serveroutput on
declare
DE_REC_COUNT number;
begin
DE_DUP_PROC ('T_MCL_30404_20150317_020','MCL','30404','FT',DE_REC_COUNT);
end;
0
PL/SQL procedure successfully completed.
```
|
The version of the anonymous block in the shell script will not be executed as shown, because you don't have [a slash after the block to run it](http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_four.htm#i1039663). If you run that you get no output at all. If you change it to have a slash:
```
sqlplus -silent xxx@xxx <<EOF
set serveroutput on
declare
DE_REC_COUNT number(10);
begin
DE_DUP_PROC ('T_MCL_30404_20150317_020','MCL','30404','FT',DE_REC_COUNT);
end;
/
EOF
```
then you'll see:
```
0
PL/SQL procedure successfully completed.
```
You've shown the interactive version in SQL\*Plus without the slash too, but you must have had that to see the output you showed.
If you want the zero - which seems to be coming from a `dbms_output` call in your procedure, rather than directly from your anonymous block - in a shell variable you can refer to later, you can assign the output of the heredoc to a variable:
```
MY_VAR=`sqlplus -silent xxx@xxx <<EOF
set serveroutput on
set feedback off
declare
DE_REC_COUNT number(10);
begin
DE_DUP_PROC ('T_MCL_30404_20150317_020','MCL','30404','FT',DE_REC_COUNT);
end;
/
EOF`
printf "Got back MY_VAR as %s\n" ${MY_VAR}
```
Note that I've added `set feedback off` so you don't see the `PL/SQL procedure successfully completed` line. Now when you run that you'll see:
```
Got back MY_VAR as 0
```
and you can do whatever you need to with `${MY_VAR}`. It depends what you mean by 'capture' though.
|
Here's an example of how it can be done by surrounding the code with the evaluation operators (backquotes):
```
#!/bin/sh
results=`sqlplus -s xxx@xxx <<EOF
set serveroutput on feedback off
declare
DE_REC_COUNT number(10);
begin
DE_DUP_PROC ('T_MCL_30404_20150317_020','MCL','30404','FT',DE_REC_COUNT);
end;
/
EOF`
echo $results
```
|
How to capture the result of stored procedure through shell script?
|
[
"",
"sql",
"oracle",
"shell",
"plsql",
""
] |
I'm trying to create indexes in Amazon Redshift but I received an error
```
create index on session_log(UserId);
```
`UserId` is an integer field.
|
If you try and create an index (with a name) on a Redshift table:
```
create index IX1 on "SomeTable"("UserId");
```
You'll receive the error
> An error occurred when executing the SQL command:
> create index IX1 on "SomeTable"("UserId")
> ERROR: SQL command "create index IX1 on "SomeTable"("UserId")" not supported on Redshift tables.
This is because, like other [data warehouses](https://en.wikipedia.org/wiki/Column-oriented_DBMS), Redshift uses [columnar storage](http://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html), and as a result, many of the indexing techniques (like adding non-clustered indexes) used in other RDBMS aren't applicable.
You do however have the option of providing a single [sort key](http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-sort-key.html) per table, and you can also influence performance with a [distribution key](http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-best-dist-key.html) for sharding your data, and selecting appropriate [compression encodings](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables.html) for each column to minimize storage and I/O overheads.
For example, in your case, you may elect to use `UserId` as a sort key:
```
create table if not exists "SomeTable"
(
"UserId" int,
"Name" text
)
sortkey("UserId");
```
You might want to read a few primers [like](http://docs.aws.amazon.com/redshift/latest/dg/c_redshift_system_overview.html) [these](https://aws.amazon.com/redshift/faqs/)
|
You can [Define Constraints](http://docs.aws.amazon.com/redshift/latest/dg/t_Defining_constraints.html), but they will be informational only; as Amazon says, they are not enforced by Amazon Redshift. Nonetheless, primary keys and foreign keys are used as planning hints, and they should be declared if your ETL process or some other process in your application enforces their integrity.
Some services like pipelines with insert mode (REPLACE\_EXISTING) will need a primary key defined in your table.
For other performance purposes, Stuart's response is correct.
|
How to create an Index in Amazon Redshift
|
[
"",
"sql",
"amazon-web-services",
"indexing",
"amazon-redshift",
""
] |
I want to create a new table the same as some existing table, plus some additional columns. In the simple case we would do:
```
Select * INTO table2 from table1 where 1=0
```
But I also want to add some new columns. Can we do it in one step, i.e. without using `ALTER TABLE` as a subsequent step?
|
Yes, but in order to specify the data types, you should cast null as the desired type.
```
SELECT *
, CAST(NULL as NVARCHAR(100)) as NewColumn1
, CAST(NULL as INT) as NewColumn2
INTO B
FROM A
WHERE 1=0
```
Column constraints/defaults will need to be added separately via an `ALTER TABLE` statement, but they may not be necessary for what you are trying to do.
The column types certainly will be.
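`SELECT ... INTO` is SQL Server syntax, but the typed-NULL idea can be sketched portably; in SQLite the equivalent is `CREATE TABLE ... AS SELECT`, shown here from Python with invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (x INTEGER, y TEXT)")

# CREATE TABLE ... AS plays the role of SELECT ... INTO here; the typed
# NULLs add the new columns and WHERE 1=0 copies structure without rows.
conn.execute("""
    CREATE TABLE B AS
    SELECT *,
           CAST(NULL AS TEXT)    AS NewColumn1,
           CAST(NULL AS INTEGER) AS NewColumn2
    FROM A
    WHERE 1=0
""")

cols = [row[1] for row in conn.execute("PRAGMA table_info(B)")]
```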
|
Can you explicitly list the new columns in the `SELECT` statement? Do they need to be populated or can you just have empty fields? Something like
```
Select *, cast(NULL as type) as NewColumn1, cast(NULL as type) as NewColumn2... INTO table2 from table1 where 1=0
```
Although you might have to list all of the individual columns from the original table rather than `*`, too.
|
Create table from another table.(Sql Server)
|
[
"",
"sql",
"sql-server",
""
] |
EDIT2: Sorry everyone, problem solved. I WAS trying to concatenate a number with my string. I forgot that I simplified my statement when I posted here, thinking my problem had to do with the backslash rather than type compatibility. You guys are right and I was being an idiot. Thanks!
I'm using `Access 2013` where my query pulls its data from a SQL 10.0 server (using pass-through).
I am trying to use the backslash in my SQL query like below (**edit:** `tbltask.JobNum` is a string in my database):
```
SELECT [tblEstimator].[Name] + '\\20' + [tbltask].[JobNum] + ' JOBS\\' AS JobMidFilePath
```
But when I run the query, I get the error:
> Conversion failed when converting the varchar value '\\20' to data type smallint. (#245)
I have no idea what this means or why it's trying to convert anything to smallint.
|
To replicate your issue, we can write a query something like this:
```
declare @name varchar(50) = 'Test',
@JobNum smallint = 12
select @name + '\\20' + @JobNum + ' JOBS\\';
```
This will return the same error:
> Conversion failed when converting the varchar value 'Test\20' to data
> type smallint.
But if you convert the `smallint` or the number field to a string like this, the error should go away:
```
declare @name varchar(50) = 'Test',
@JobNum smallint = 12
select @name + '\\20' + cast(@JobNum as varchar(10)) + ' JOBS\\'
```
Your query should look something like this:
```
SELECT [tblEstimator].[Name] + '\\20' + cast([tbltask].[JobNum] as varchar(10))
+ ' JOBS\\' AS JobMidFilePath
```
I am not sure of the MS Access equivalent of this, but that should resolve your issue.
|
First off, you can try to cast/convert your int, as your output should be a string. If the single quotes don't work, you can try using double quotes (").
```
SELECT [tblEstimator].[Name] + "\\20" + CAST([tbltask].[JobNum] as varchar(100)) + " JOBS\\" AS JobMidFilePath
```
|
How to use literal backslash in SQL query statement?
|
[
"",
"sql",
"ms-access-2013",
"backslash",
""
] |
I have two tables:
```
METHOD_TYPES
---- ----------------
ID Methods_Type
---- ----------------
1 public
2 ALL_Methods
3 private1235678
4 social
METHOD_TABLE
-------- ----------------- ----------
Ser_ID Ser_Method_Type Emp_Name
-------- ----------------- ----------
1 (null) AAAA
2 (null) BBBB
3 All_Methods Rama
4 social Raja
5 private12345678 Rakesh
```
I used the below query for the `ORDER BY`:
```
SELECT SUBSTR(Methods_Type, 1, 10) AS disMisType
FROM METHOD_TABLE MET
LEFT JOIN METHOD_TYPES TRMT
ON MET.Ser_Method_Type = TRMT.Methods_Type
ORDER BY (NLSSORT(MET.Ser_Method_Type, 'NLS_SORT=binary_ai')) DESC NULLS FIRST;
```
**OUTPUT:**
```
(null)
All_Methods
(null)
social
private12345678
```
But I need to order all the nulls first.
Kindly provide the exact query.
|
Using the data you provided - and adding in the extra columns, I get:
```
with method_types as (select 1 id, 'public' methods_type from dual union all
select 2 id, 'ALL_Methods' methods_type from dual union all
select 3 id, 'private1235678' methods_type from dual union all
select 4 id, 'social' methods_type from dual),
method_table as (select 1 ser_id, null ser_method_type, 'AAAA' emp_name from dual union all
select 2 ser_id, null ser_method_type, 'BBBB' emp_name from dual union all
select 3 ser_id, 'All_Methods' ser_method_type, 'Rama' emp_name from dual union all
select 4 ser_id, 'social' ser_method_type, 'Raja' emp_name from dual union all
select 5 ser_id, 'private12345678' ser_method_type, 'Rakesh' emp_name from dual)
select substr(trmt.methods_type,1,10) as dismistype,
met.*,
trmt.*
from method_table met
left join method_types trmt on (met.ser_method_type = trmt.methods_type)
order by (nlssort(met.ser_method_type, 'NLS_SORT=binary_ai')) desc nulls first;
DISMISTYPE SER_ID SER_METHOD_TYPE EMP_NAME ID METHODS_TYPE
------------------------------ ---------- --------------- -------- ---------- --------------
1 AAAA
2 BBBB
social 4 social Raja 4 social
5 private12345678 Rakesh
3 All_Methods Rama
```
which is not what your expected output shows, but it does maybe explain why you see nulls apparently out of order in your results - you're selecting the trmt.methods\_type column, but ordering by the met.ser\_method\_type column. If there aren't any rows in the method\_types table matching those in the method\_table, then of course you will see nulls, but because there *IS* a value in the method\_table, they may well be displayed *after* rows that do have a value.
Perhaps all you need to do is to change the column being selected
from `substr(trmt.methods_type,1,10)`
to `substr(met.ser_method_type,1,10)`
or change the order clause
from `nlssort(met.ser_method_type, 'NLS_SORT=binary_ai')`
to `nlssort(trmt.methods_type, 'NLS_SORT=binary_ai')`
|
I'm not sure why your query is not working, but you can have a more explicit `order by`:
```
ORDER BY (CASE WHEN MET.Ser_Method_Type IS NULL THEN 1 ELSE 2 END),
NLSSORT(MET.Ser_Method_Type, 'NLS_SORT=binary_ai') DESC
```
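The CASE-based ordering trick is portable across databases. Here is a minimal sketch in SQLite via Python's `sqlite3` (using a hypothetical one-column table, not the poster's schema) showing that the CASE expression forces all NULLs to the front regardless of the descending sort on the value itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (method_type TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(None,), ("social",), (None,), ("All_Methods",)])

# Sort key 1: NULLs first; sort key 2: the value itself, descending
rows = conn.execute("""
    SELECT method_type FROM t
    ORDER BY CASE WHEN method_type IS NULL THEN 1 ELSE 2 END,
             method_type DESC
""").fetchall()
print(rows)  # [(None,), (None,), ('social',), ('All_Methods',)]
```

The same two-level sort key works in Oracle; only the NLSSORT wrapper would differ.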
|
Need Oracle Query Tune for order by
|
[
"",
"sql",
"oracle",
"syntax",
"sql-order-by",
""
] |
The problem splits into two parts.
How to check which working days are missing from my database, if some are missing then add them and fill the row with the values from the closest date.
First part: check and find the days. Should I use a gaps approach like in the example below?
```
SELECT t1.col1 AS startOfGap, MIN(t2.col1) AS endOfGap
FROM
(SELECT col1 = theDate + 1 FROM sampleDates tbl1
WHERE NOT EXISTS(SELECT * FROM sampleDates tbl2
WHERE tbl2.theDate = tbl1.theDate + 1)
AND theDate <> (SELECT MAX(theDate) FROM sampleDates)) t1
INNER JOIN
(SELECT col1 = theDate - 1 FROM sampleDates tbl1
WHERE NOT EXISTS(SELECT * FROM sampleDates tbl2
WHERE tbl1.theDate = tbl2.theDate + 1)
AND theDate <> (SELECT MIN(theDate) FROM sampleDates)) t2
ON t1.col1 <= t2.col1
GROUP BY t1.col1;
```
Then I need to see which is the closest date to the one that was missing and fill the newly inserted date (the one which was missing) with the values from the closest one. Some time ago, I came up with something to get the closest value from a row, but this time I need to adapt it to check both downwards and upwards.
```
SELECT
t,A, C,Y,
COALESCE(Y,
(SELECT TOP (1) Y
FROM tableT AS p2
WHERE
p2.Y IS NOT NULL
AND p2.[t] <= p.[t] and p.C = p2.C
ORDER BY p2.[t] DESC)) as 'YNew'
FROM tableT AS p
order by c, t
```
How to combine those into one?
Thanks
**EDIT:** Expected result
```
Date 1mA
20.12.2012 0.152
21.12.2012 0.181
22 weekend so it's skipped (they are skipped automatically)
23 weekend -,-
24 missing
25 missing
26 missing
27.12.2012 0.173
28.12.2012 0.342
Date 1mA
20.12.2012 0.152
21.12.2012 0.181
22 weekend so it's skipped (they are skipped automatically)
23 weekend 0.181
24 missing 0.181
25 missing 0.181
26 missing 0.173
27.12.2012 0.173
28.12.2012 0.342
```
So, 24,25,26 are not even there with null values. They are simply not there.
**EDIT 2:**
For taking the closest value, let's consider the scenario in which I'm always looking above, so always going back one when it's missing.
```
Date 1mA
20.12.2012 0.152
21.12.2012 0.181
22 weekend so it's skipped (they are skipped automatically)
23 weekend 0.181
24 missing 0.181
25 missing 0.181
26 missing 0.181
27.12.2012 0.173
28.12.2012 0.342
```
|
For these types of query you gain significant performance benefits from creating a calendar table containing every date you'll ever need to test. *(If you're familiar with the term "dimension tables", this is just one such table to enumerate every date of interest.)*
Also, the query as a whole can become significantly simpler.
```
SELECT
cal.calendar_date AS data_date,
CASE WHEN prev_data.gap <= next_data.gap
THEN prev_data.data_value
ELSE COALESCE(next_data.data_value, prev_data.data_value)
END
AS data_value
FROM
calendar AS cal
OUTER APPLY
(
SELECT TOP(1)
data_date,
data_value,
DATEDIFF(DAY, data_date, cal.calendar_date) AS gap
FROM
data_table
WHERE
data_date <= cal.calendar_date
ORDER BY
data_date DESC
)
prev_data
OUTER APPLY
(
SELECT TOP(1)
data_date,
data_value,
DATEDIFF(DAY, cal.calendar_date, data_date) AS gap
FROM
data_table
WHERE
data_date > cal.calendar_date
ORDER BY
data_date ASC
)
next_data
WHERE
cal.calendar_date BETWEEN '2015-01-01' AND '2015-12-31'
;
```
***EDIT*** Reply to your comment with a different requirement
To always get "the value above" is easier, and to insert those values in to a table is easy enough...
```
INSERT INTO
data_table
SELECT
cal.calendar_date,
prev_data.data_value
FROM
calendar AS cal
CROSS APPLY
(
SELECT TOP(1)
data_date,
data_value
FROM
data_table
WHERE
data_date <= cal.calendar_date
ORDER BY
data_date DESC
)
prev_data
WHERE
cal.calendar_date BETWEEN '2015-01-01' AND '2015-12-31'
AND cal.calendar_date <> prev_data.data_date
;
```
***Note:*** You could add `WHERE prev_data.gap > 0` to the bigger query above to only get dates that don't already have data.
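The "always take the value above" variant is a last-observation-carried-forward fill. A language-agnostic sketch of the same logic in Python (with hypothetical sample data mirroring the question's December figures):

```python
from datetime import date, timedelta

# Hypothetical sparse readings: date -> value, with a gap 22..26 Dec
data = {date(2012, 12, 20): 0.152,
        date(2012, 12, 21): 0.181,
        date(2012, 12, 27): 0.173,
        date(2012, 12, 28): 0.342}

start, end = min(data), max(data)
filled, last = {}, None
d = start
while d <= end:
    if d in data:
        last = data[d]     # a real reading resets the carried value
    filled[d] = last       # gap days inherit the previous reading
    d += timedelta(days=1)

print(filled[date(2012, 12, 24)])  # 0.181, carried forward from the 21st
```

This is exactly what the `CROSS APPLY ... ORDER BY data_date DESC` query does per calendar row, just expressed procedurally.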
|
As suggested by [Aaron Bertrand](https://stackoverflow.com/questions/21189369/better-way-to-generate-months-year-table) you can write a query as:
```
-- create a calendar table at run time if you don't have one:
DECLARE @FromDate DATETIME, @ToDate DATETIME;
SET @FromDate = (select min(Date) from test);
SET @ToDate = (select max(Date) from test);
--Get final result:
select Tblfinal.Date,
case when Tblfinal.[1mA] is null then
( select top 1 T2.[1mA] from Test T2
where T2.Date < Tblfinal.Date and T2.[1mA] > 0
order by T2.Date desc)
else Tblfinal.[1mA] end as [1mA]
from
(
select isnull( C.TheDate, T.Date) as Date ,T.[1mA]
from Test T
right join (
-- all days in that period
SELECT TOP (DATEDIFF(DAY, @FromDate, @ToDate)+1)
TheDate = DATEADD(DAY, number, @FromDate)
FROM [master].dbo.spt_values
WHERE [type] = N'P'
)C on T.Date= C.TheDate
) Tblfinal
```
`DEMO`
|
find the missing entries for the working days and fill the row with the values from the closest date
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a table like this
```
user | app
name1 app1
name1 not an app
name1 app1
name1 app2
name2 not an app
name3 app1
name3 app3
name4 app1
name4 not an app2
name5 app1
name5 not an app
name5 not an app2
```
I need to get the cases where a user has an app1 and at least one type of app "not an app", which im going to call in this example not an app and not an app2.
I need to get back something like this:
```
user
name1
name4
name5
```
I'm trying to filter with WHERE and NOT LIKE(s), then ORDER BY. I think the next step would be something like HAVING COUNT DISTINCT app = app1 > 2, but I get lost here where I have to start counting only if there is an app1...
Ideally I'd also want to know:
```
user name: list of apps
user name1: not an app, ...
user name4: not an app2, ...
user name5: not an app1, not an app2, ...
```
|
I am not sure what is your real goal, because if you need *to get the cases where a user has an app1 and at least one "not an app"*
Your expected result
```
user
name1
name4
name5
```
is wrong.
Check my fiddle: <http://sqlfiddle.com/#!9/cbb566/7>
```
SELECT `user`
FROM table1 t1
GROUP BY `user`
HAVING SUM(IF(`app`='app1',1,0))>0
AND SUM(IF(`app`='not an app',1,0))>0
```
**UPDATE** If you need any that starts with **'not an app'**
You can <http://sqlfiddle.com/#!9/cbb566/11> :
```
SELECT `user`
FROM table1 t1
GROUP BY `user`
HAVING SUM(IF(`app`='app1',1,0))>0
AND SUM(IF(`app` LIKE 'not an app%',1,0))>0
```
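The conditional-sum-per-group pattern can be checked in SQLite through Python's `sqlite3`. SQLite has no `IF()`, so the sketch below (hypothetical rows, not the full table from the question) uses the equivalent `CASE` form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (user TEXT, app TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)", [
    ("name1", "app1"), ("name1", "not an app"),
    ("name2", "not an app"),                      # no app1 -> excluded
    ("name3", "app1"), ("name3", "app3"),         # no 'not an app' -> excluded
    ("name4", "app1"), ("name4", "not an app2"),
    ("name5", "app1"), ("name5", "not an app"),
])

# Count app1 rows and 'not an app%' rows per user; keep users with both
users = [r[0] for r in conn.execute("""
    SELECT user FROM t1
    GROUP BY user
    HAVING SUM(CASE WHEN app = 'app1' THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN app LIKE 'not an app%' THEN 1 ELSE 0 END) > 0
    ORDER BY user
""")]
print(users)  # ['name1', 'name4', 'name5']
```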
|
Try this
```
select user from table
where app = 'app1' or app like 'not an app%'
group by user
having count(distinct app)>=2
```
|
SQL query filter, count if
|
[
"",
"mysql",
"sql",
""
] |
Getting the last 12 months from a specific date is easy and can be retrieved with the following command in SQL Server. The result is 2014-08-17.
```
select Dateadd(Month, -12, '2015-08-17')
```
What I want is to get the last 12 months but ending at 2014-08-**01** (in the above case) instead of any where in the middle of the month.
|
Using `DATEADD` and `DATEDIFF`:
```
DECLARE @ThisDate DATE = '20150817'
SELECT DATEADD(YEAR, -1, DATEADD(MONTH, DATEDIFF(MONTH, '19000101', @ThisDate), '19000101'))
```
For more common date routines, see this [article](http://www.sqlservercentral.com/blogs/lynnpettis/2009/03/25/some-common-date-routines/) by Lynn Pettis.
---
To use in your `WHERE` clause:
```
DECLARE @ThisDate DATE = '20150817'
SELECT *
FROM <your_table>
WHERE
<date_column> >= DATEADD(YEAR, -1, DATEADD(MONTH, DATEDIFF(MONTH, '19000101', @ThisDate), '19000101'))
```
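The `DATEADD`/`DATEDIFF` idiom first truncates the date to the first of its month, then steps back one year. The same arithmetic in Python (a sketch, independent of any database):

```python
from datetime import date

def year_back_month_start(d: date) -> date:
    """First day of d's month, one year earlier."""
    return date(d.year - 1, d.month, 1)

print(year_back_month_start(date(2015, 8, 17)))  # 2014-08-01
```

Any row with `date_column >= year_back_month_start(today)` falls in the "last 12 months plus the extra days back to the 1st" window the question asks for.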
|
```
SELECT dateadd(month,datediff(month,0,getdate())-12,0)
```
Result is
```
-----------------------
2014-08-01 00:00:00.000
```
So the where clause should be
```
WHERE datecol >=dateadd(month,datediff(month,0,getdate())-12,0)
```
to get all data starting from jan 01 of last year's same month
|
How can I get the last 12 months from the current date PLUS extra days till 1st of the last month retrieved
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a MS SQL DB table where I store a part status journal, i.e. how a HW part has changed over time.
I need to get exactly one row (ordered by date) for every part and then count all rows, where `new_status` is equal to 2.
Now to put it in example, I came up with this query:
```
Select part_sys_id, old_status, new_status, sys_created
From Vhw_status_journal
Where i = 5
Order by sys_created desc
```
Which gets me (note that `i` is just an internal column):
```
part_sys_id | old_status | new_status | sys_created
-----------------------------------------------------------
21 | 2 | 3 | 2015-08-19 11:00:25
21 | NULL | 2 | 2015-08-19 10:59:28
20 | 1 | 2 | 2015-08-18 14:13:04
20 | 2 | 1 | 2015-08-17 10:51:03
20 | NULL | 2 | 2015-08-12 15:05:46
```
Now it turns out that I am completely lost when I have to get only the newest entry for each part\_sys\_id (I tried `SELECT DISTINCT` to no avail) and then also count the output rows where new\_status = 2.
My requested output is:
```
part_sys_id | old_status | new_status | sys_created
-----------------------------------------------------------
21 | 2 | 3 | 2015-08-19 11:00:25
20 | 1 | 2 | 2015-08-18 14:13:04
```
And then I need to count the rows with new\_status = 2, ie. I should then get something like:
```
count
-----
1
```
|
Try this:
```
with cte
as
(
select row_number() over(partition by part_sys_id order by sys_created desc) as ri,part_sys_id, old_status, new_status, sys_created
from Vhw_status_journal
where i = 5
)
select count(*) from cte where ri=1 and new_status=2
```
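The ROW_NUMBER-then-filter pattern can be verified in SQLite (window functions need SQLite >= 3.25, as bundled with recent Python) using the rows from the question, minus the columns the count doesn't need:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE j (part_sys_id INT, new_status INT, sys_created TEXT)")
conn.executemany("INSERT INTO j VALUES (?, ?, ?)", [
    (21, 3, "2015-08-19 11:00:25"),
    (21, 2, "2015-08-19 10:59:28"),
    (20, 2, "2015-08-18 14:13:04"),
    (20, 1, "2015-08-17 10:51:03"),
    (20, 2, "2015-08-12 15:05:46"),
])

# ri = 1 marks the newest row per part; count those with new_status = 2
(cnt,) = conn.execute("""
    WITH cte AS (
        SELECT ROW_NUMBER() OVER (PARTITION BY part_sys_id
                                  ORDER BY sys_created DESC) AS ri,
               new_status
        FROM j)
    SELECT COUNT(*) FROM cte WHERE ri = 1 AND new_status = 2
""").fetchone()
print(cnt)  # 1  (part 20's newest row has status 2; part 21's has status 3)
```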
|
```
Select part_sys_id, old_status, new_status, max(sys_created )
From Vhw_status_journal Where i = 5 GROUP BY part_sys_id
```
That should give you only one of each part\_sys\_id, the newest one.
**EDIT: the group by has to be in front of the order by.**
**EDIT2: Using `order desc, limit 1` was improper; better to use `max()`.**
|
Select Count with trimmed rows
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables in a SQL db named tbl\_country and tbl\_seaport
I am trying to create a query that returns all possible combinations of both tables using a join on the field `CountryCode`.
```
tbl_country
Fields: CountryID, Country, CountryCode
tbl_seaport
Fields: PortID, PortName, RoutingCode, CountryCode
```
I started with the below but I can only get it to return 250 rows which is the actual table row count. I thought it would return 62500 (250 x 250) rows of data.
```
SELECT s.Country, m.Country
FROM tbl_country AS s
LEFT JOIN tbl_country AS m
ON s.CountryID = m.CountryID
```
Any ideas on how to achieve this?
|
Try this
```
SELECT s.Country, m.Country FROM tbl_country AS s cross JOIN tbl_country AS m
```
**EDIT:**
With regard to your comment, you can use the query below to join the 3rd table. Make sure to remove any duplicate column names.
```
with cte
as
(
SELECT s.Country as sCountry , m.Country as mCountry FROM tbl_country AS s cross JOIN tbl_country AS m
)
select * from cte cross join tbl_seaport
```
*but, please reconsider your design*
|
As mentioned already it is a CROSS JOIN that joins two tables unconditionally such as to build a cartesian product of the tables.
However, now that you clarified your request, it becomes clear that you do want a condition. Only the condition is not some column must match another as it is usually the case when joining tables, but the opposite: A record must *not* match itself.
Then it is actually all *seaport* combinations you seek. The seaports' countries are implicit, i.e. a seaport belongs to a country, so you can use a normal join to get it.
```
select
port1.portname as port1_name,
country1.country as port1_country,
port2.portname as port2_name,
country2.country as port2_country
from tbl_seaport port1
join tbl_seaport port2 on port2.portid <> port1.portid
join tbl_country country1 on country1.countryid = port1.countryid
join tbl_country country2 on country2.countryid = port2.countryid;
```
The above still gives you combinations twice, e.g. PORT\_A-PORT\_B and PORT\_B-PORT\_A. If you want to omit this, then change the ON clause to `port2.portid > port1.portid`.
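A tiny SQLite sketch (hypothetical three-row table) showing the difference between the `<>` and `>` join conditions, using 3 ports: `<>` yields every ordered pair (6 rows), `>` yields each unordered pair once (3 rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ports (id INT, name TEXT)")
conn.executemany("INSERT INTO ports VALUES (?, ?)",
                 [(1, "A"), (2, "B"), (3, "C")])

ordered = conn.execute("""
    SELECT p1.name, p2.name FROM ports p1
    JOIN ports p2 ON p2.id <> p1.id
""").fetchall()
unordered = conn.execute("""
    SELECT p1.name, p2.name FROM ports p1
    JOIN ports p2 ON p2.id > p1.id
""").fetchall()
print(len(ordered), len(unordered))  # 6 3
```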
|
SQL query design not returning expected results
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
How can I update one table's value based on a matching ID in another table? For example, in the SQL below, I want to update tb\_test `NAME` to tb\_test2 `NAME2` where the code is the same, `A001`.
[](https://i.stack.imgur.com/lkxiy.png)
Expected result after the update:
tb\_test `NAME` will become `ALI` where the code is `A001`.
What I've tried so far, based on an online solution (failed to apply):
`update tb_test set tb_test.name = tb_test2.name2 from tb_test A inner join tb_Test2 B on A.code = B.code2`
|
Something like the following should do:
`update tb_test set name=(select tb2.name2 from tb_test2 tb2 where tb2.code2=code)`
You update the name which is selected from the other table and the code/code2 columns need to match.
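The correlated-subquery form of the update can be demonstrated in SQLite (the pre-update name `OLD` below is a hypothetical placeholder, since the original value isn't shown in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_test (code TEXT, name TEXT)")
conn.execute("CREATE TABLE tb_test2 (code2 TEXT, name2 TEXT)")
conn.execute("INSERT INTO tb_test VALUES ('A001', 'OLD')")
conn.execute("INSERT INTO tb_test2 VALUES ('A001', 'ALI')")

# For each tb_test row, look up name2 in tb_test2 by matching code
conn.execute("""
    UPDATE tb_test
    SET name = (SELECT t2.name2 FROM tb_test2 t2
                WHERE t2.code2 = tb_test.code)
""")
(name,) = conn.execute("SELECT name FROM tb_test WHERE code = 'A001'").fetchone()
print(name)  # ALI
```

Note that rows with no match in tb\_test2 would be set to NULL; a `WHERE EXISTS (...)` clause would guard against that.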
|
The correct script would be :
> update tb\_test
> inner join tb\_test2
> on tb\_test.CODE = tb\_test2.CODE2
> set tb\_test.name = tb\_test2.NAME2 ;
|
How to update data in a table from another table?
|
[
"",
"sql",
"db2",
""
] |
I have a long list of `UPDATE` statements that I run on my Oracle 11g server:
```
UPDATE Table SET Column1 = 'One' WHERE Column2 = 'Example';
UPDATE Table SET Column1 = 'Two' WHERE Column2 = 'Something';
-- ...
```
I want to check that each `UPDATE` affects exactly one row (not zero, not more than one), and if it doesn't stop the execution, raise an error, and roll back all of the updates.
I was hoping for a syntax like this perhaps:
```
UPDATE Table SET Column1 = 'One' WHERE Column2 = 'Example';
ASSERT sql%rowcount = 1;
```
That does not seem to exist, so maybe I should do something like this:
```
UPDATE Table SET Column1 = 'One' WHERE Column2 = 'Example';
IF sql%rowcount != 1 THEN
-- Do something, but what?
END IF;
```
However, I am not sure what to put inside the `IF` statement or if that is even a good approach. So, what is the most elegant way of accomplishing this?
|
```
UPDATE your_table SET Column = 'One' WHERE Key = 'Example';
IF SQL%ROWCOUNT <> 1 THEN
ROLLBACK;
    RAISE_APPLICATION_ERROR( -20000, 'Incorrect number of rows updated for Key "Example".' );
END IF;
```
Or:
```
DECLARE
p_rowid ROWID;
p_key YOUR_TABLE.KEY%TYPE := 'Example';
BEGIN
SELECT ROWID INTO p_rowid FROM your_table WHERE Key = p_key;
UPDATE your_table SET Column = 'One' WHERE ROWID = p_rowid;
EXCEPTION
WHEN NO_DATA_FOUND THEN
ROLLBACK;
RAISE_APPLICATION_ERROR( -20000, 'No data found for key "'|| p_key || '".' );
WHEN TOO_MANY_ROWS THEN
ROLLBACK;
RAISE_APPLICATION_ERROR( -20001, 'More than one row found for key "'|| p_key || '".' );
END;
```
|
As you expect the updates to affect one row each, you could simply put the query to get that one row in a subquery, which would result in an ORA-01427 exception ("single-row subquery returns more than one row") when there is more than one.
```
UPDATE Table SET Column = 'One' WHERE ROWID = (SELECT ROWID FROM TABLE WHERE Key = 'ABC');
UPDATE Table SET Column = 'Two' WHERE ROWID = (SELECT ROWID FROM TABLE WHERE Key = 'DEF');
...
```
|
Make sure only one row is updated in PL/SQL?
|
[
"",
"sql",
"oracle",
"plsql",
"sql-update",
""
] |
I am trying to copy the data of all tables from one DB to another using an `Insert...select` statement.
```
Insert into collist select collist from table
```
I am doing the following things
1. Cleanup of all tables either truncate or delete.
2. Copy data from source to destination (by doing identity\_insert on).
After doing this process, do we need to reset the identity again after copying data to my target db?
|
I don't think you need to reset your `IDENTITY` column. Resetting your `IDENTITY` column might lead to constraint violations. Let's say your `IDENTITY` column is your `PK`.
```
CREATE TABLE aa(
id INT IDENTITY(1, 1) PRIMARY KEY,
x INT
)
INSERT INTO aa(x) VALUES (1), (2);
```
Your data looks like this now:
```
id x
----------- -----------
1 1
2 2
```
Then, you reset the `IDENTITY`:
```
DBCC CHECKIDENT ('aa', RESEED, 0)
```
When you insert new rows into your table, a PK violation error would occur:
```
INSERT INTO aa(x) VALUES (3), (4);
```
> Msg 2627, Level 14, State 1, Line 8 Violation of PRIMARY KEY
> constraint 'PK\_\_aa\_\_3213E83F76F5681B'. Cannot insert duplicate key in
> object 'dbo.aa'. The duplicate key value is (1).
Aaron Bertrand has great insight on resetting `IDENTITY` column. See [here](https://dba.stackexchange.com/questions/43910/reset-identity-value) for more details.
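SQLite has no `DBCC CHECKIDENT`, but the failure mode is the same in any engine: a reseeded counter eventually hands out a key value that is already in use. A sketch of the analogous collision (explicitly inserting the id a reset counter would generate next):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aa (id INTEGER PRIMARY KEY, x INT)")
conn.executemany("INSERT INTO aa (x) VALUES (?)", [(1,), (2,)])  # auto ids 1, 2

# A counter reseeded to 0 would next generate id 1 -- already taken
try:
    conn.execute("INSERT INTO aa (id, x) VALUES (1, 3)")
    failed = False
except sqlite3.IntegrityError:
    failed = True
print(failed)  # True: primary key violation, as in the Msg 2627 error above
```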
|
From the data point of view, Identity values should be meaningless.
Meaning that the value of an identity column should not be a part of the data, but only a row identifier.
They are very useful when you have lookup tables (id, name) or data tables where the row uniqueness is comprised of multiple columns and you need to use these tables in a relationship - having an identity column as a surrogate key in these situations can save both you and your database a lot of work.
Specifying a column as an identity doesn't guarantee anything - not even uniqueness - since you can use `SET IDENTITY_INSERT tableName ON` and manually insert whatever value you want, even duplicate values (unless there is a unique constraint or index on that column, of course).
Identity is simply a mechanism the database provides to simplify relationships and surrogate keys handling.
That being said, you really should not reset identity values.
|
Is Identity reset necessary?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I've recently learned from here how to do PIVOT in SQL, and I actually took an example from the other question on here. It works perfectly.
However, I want to perform additional joins, after the query, but I am unable to insert into temporary table the results of query? How may I do that?
Create table
```
CREATE TABLE yt
([Store] int, [Week] int, [xCount] int)
;
INSERT INTO yt
([Store], [Week], [xCount])
VALUES
(102, 1, 96),
(101, 1, 138),
(105, 1, 37),
(109, 1, 59),
(101, 2, 282),
(102, 2, 212),
(105, 2, 78),
(109, 2, 97),
(105, 3, 60),
(102, 3, 123),
(101, 3, 220),
(109, 3, 87);
```
Perform pivoting query
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(Week)
from yt
group by Week
order by Week
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT store,' + @cols + ' from
(
select store, week, xCount
from yt
) x
pivot
(
sum(xCount)
for week in (' + @cols + ')
) p '
execute(@query)
```
The result is
```
store 1 2 3
101 138 282 220
102 96 212 123
105 37 78 60
109 59 97 87
```
But I'd like to have it in a #temp table, and I tried placing INTO #temp before `execute(@query)` and before the FROM statement within the query.
Any idea? I am aware of SELECT \* INTO #temp FROM BlaBla, but it seems to behave differently with dynamic queries.
|
You can create a global temp table dynamically
```
declare @tblName nvarchar(10)
set @tblName = N'##Temp' + cast(@@spid as nvarchar(5))
declare @tblCreate nvarchar(max)
SET @tblCreate = N'create table ' + @tblName + ' ('
+ REPLACE(@cols,',',' int,') + N' int)'
EXECUTE sp_executesql @tblCreate
```
And then edit your @query to insert into the table.
```
set @query = 'INSERT INTO ' + @tblName + ' SELECT store,' + @cols + ' from
(
select store, week, xCount
from yt
) x
pivot
(
sum(xCount)
for week in (' + @cols + ')
) p;
drop table ' + @tblName
execute(@query)
```
|
The problem you're facing with `select ... into #temp` inside the execute is that the table gets created, but due to being in separate scope, it gets dropped immediately when the execute ends, so your procedure can't see it.
Your code will work, if you create the table before calling execute and just use insert into. You can check this in [SQL Fiddle](http://sqlfiddle.com/#!3/8a6974/5). This just causes the problem that if your query is dynamic, how to create the table so that it fits the query.
Trying to create logic where you dynamically adjust the number of columns in SQL is not simple to do, and really isn't something you should be doing. Better way would be to handle that in your presentation layer.
|
Insert query results into temp table
|
[
"",
"sql",
"sql-server",
"pivot",
""
] |
I have a column of type `nvarchar(max)`.
It contains HTML text as a string, like below.
```
<a>hi_this </a>
<i>is_sample</i>
<p>text_data</P>
<a>in_column</a>
<p>this_is_paragraph</p>
<a>this_is_end</a>
```
Is there any way to replace the underscore (`_`) symbol between the `<p>` and `</p>` tags with a space?
I want the final output to be like below.
```
<a>hi_this </a>
<i>is_sample</i>
<p>text data</P>
<a>in_column</a>
<p>this is paragraph</p>
<a>this_is_end</a>
```
Is it possible?
**Update:**
It is in a single row, like this:
'< a >hi\_this < /a >< i >is\_sample< /i >< p >text data< /P >< a >in\_column< /a >
< p >this is paragraph< /p > < a >this\_is\_end< /a >'
|
I really don't think SQL is the best tool for this, but I have done something that seems to work for your situation. The first step is to build a table of numbers. If you already have one then great, use that, but for the sake of a complete answer I have assumed you don't.
This is done by simply using a [table value constructor](https://msdn.microsoft.com/en-GB/library/dd776382.aspx) to create a table of 10 rows (`N1`), then cross joining this with itself to get 100 rows (`N3`), then cross joining this with itself to get a table of 10,000 rows (`Numbers`) then using [`ROW_NUMBER()`](https://msdn.microsoft.com/en-us/library/ms186734.aspx) to get a number from 1-10,000 for each row:
```
WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
Numbers (Number) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2)
SELECT *
FROM Numbers;
```
Further reading on this, and other methods of doing this can be found in the series [Generate a set or sequence without loops](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1). If you need more than 10,000 numbers simply add more cross joins until you have enough
Once you have your numbers table, you can use [`SUBSTRING()`](https://msdn.microsoft.com/en-us/library/ms187748.aspx) to identify the position of all of your opening `<p>` tags:
```
DECLARE @S NVARCHAR(500) = '
<a>hi_this </a>
<i>is_sample</i>
<p>text_data</P>
<a>in_column</a>
<p>this_is_paragraph</p>
<a>this_is_end</a>';
WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
Numbers (Number) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2)
SELECT OriginalString = SUBSTRING(@S, Number, CHARINDEX('</p>', @s, Number + 1) - Number + 4),
Start = Number,
NumChars = CHARINDEX('</p>', @s, Number + 1) - Number + 4,
NewString = REPLACE(SUBSTRING(@S, Number, CHARINDEX('</p>', @s, Number + 1) - Number + 4), '_', ' ')
FROM Numbers
WHERE SUBSTRING(@S, Number, 3) = '<p>';
```
Result:
```
OriginalString Start NumChars NewString
---------------------------------------------------------------------------
<p>text_data</P> 40 16 <p>text data</P>
<p>this_is_paragraph</p> 80 24 <p>this is paragraph</p>
```
Here you use `SUBSTRING(@S, Number, 3) = '<p>'` to get the starting position of each `p` tag, then you can use [`CHARINDEX()`](https://msdn.microsoft.com/en-us/library/ms186323.aspx) to get the position of the next closing `p` tag, and replace the text in between.
Finally you need to use the output from this to replace the original string, which you can do using [`STUFF()`](https://msdn.microsoft.com/en-us/library/ms188043.aspx):
```
DECLARE @S NVARCHAR(500) = '
<a>hi_this </a>
<i>is_sample</i>
<p>text_data</P>
<a>in_column</a>
<p>this_is_paragraph</p>
<a>this_is_end</a>';
WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)),
N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2),
Numbers (Number) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2),
Data AS
( SELECT OriginalString = SUBSTRING(@S, Number, CHARINDEX('</p>', @s, Number + 1) - Number + 4),
Start = Number,
NumChars = CHARINDEX('</p>', @s, Number + 1) - Number + 4,
NewString = REPLACE(SUBSTRING(@S, Number, CHARINDEX('</p>', @s, Number + 1) - Number + 4), '_', ' ')
FROM Numbers
WHERE SUBSTRING(@S, Number, 3) = '<p>'
)
SELECT @S = STUFF(@S, Start, NumChars, NewString)
FROM Data;
PRINT @S;
```
Which gives:
```
<a>hi_this </a>
<i>is_sample</i>
<p>text data</P>
<a>in_column</a>
<p>this is paragraph</p>
<a>this_is_end</a>
```
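As noted at the top, SQL really isn't the natural tool for this. For comparison, the same tag-scoped replacement is a one-liner with a regular expression outside the database - a sketch in Python, applied to the sample string from the question:

```python
import re

html = """<a>hi_this </a>
<i>is_sample</i>
<p>text_data</P>
<a>in_column</a>
<p>this_is_paragraph</p>
<a>this_is_end</a>"""

# Replace underscores only inside <p>...</p> spans (tags matched case-insensitively)
fixed = re.sub(r"<p>.*?</p>",
               lambda m: m.group(0).replace("_", " "),
               html, flags=re.IGNORECASE | re.DOTALL)
print(fixed)
```

Underscores inside the two paragraph tags become spaces; those in the `<a>` and `<i>` tags are untouched.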
|
Why not use a simple REPLACE function?
```
update my_table
set path = replace(path, '_', ' ')
WHERE path like '<p>%'
```
|
Replace text between specific words in sql server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
"sql-server-2012",
""
] |
I have a table which records questions, their answers, and their authors. The column names are as follows:
id, question, answers, author
I would like to get a list of top 10 authors who have written the most questions. So it would need to first count the number of questions each author has written then sort them by the count then return the top 10.
This is in SQLite and I'm not exactly sure how to get the list of counts. The second part should be fairly simple as it's just an `ORDER BY` and a `LIMIT 10`. How can I get the counts into a list which I can select from?
|
```
SELECT COUNT(author)
,author
FROM table_name
GROUP BY author
ORDER BY COUNT(author) DESC LIMIT 10;
```
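Since the question is about SQLite, the group/order/limit pattern can be run directly through Python's `sqlite3` module (hypothetical table and authors for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE questions (author TEXT)")
conn.executemany("INSERT INTO questions VALUES (?)",
                 [("alice",)] * 3 + [("bob",)] * 2 + [("carol",)])

# Count per author, sort by the count descending, keep the top 10
top = conn.execute("""
    SELECT author, COUNT(*) FROM questions
    GROUP BY author
    ORDER BY COUNT(*) DESC
    LIMIT 10
""").fetchall()
print(top)  # [('alice', 3), ('bob', 2), ('carol', 1)]
```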
|
You can apply an `order by` clause to an aggregate query:
```
SELECT author, COUNT(*)
FROM mytable
GROUP BY author
ORDER BY 2 DESC
LIMIT 10
```
|
Selecting top 10 counts in SQLite
|
[
"",
"sql",
"select",
"sqlite",
"top-n",
""
] |
For example, say you have a table of files and a logging table of transfer operations.
```
CREATE TABLE files
(
id INTEGER PRIMARY KEY
-- various other columns
);
CREATE TABLE transfers
(
id INTEGER PRIMARY KEY,
file_id INTEGER,
status TEXT NOT NULL,
-- various other columns
FOREIGN KEY (file_id) REFERENCES files (id)
)
```
A transfer operation can have various statuses -- `'succeeded'`, `'failed'`, `'in progress'`, etc. One file can have many transfer operations -- in particular, if a transfer fails, another transfer for that file may be scheduled later.
Now, say we want to find all the files with *only* failed transfers -- no transfers currently in progress, no later successful transfers.
So far I have the solution below with subselects:
```
SELECT files.*
FROM files
WHERE files.id IN (
SELECT DISTINCT file_id
FROM transfers
WHERE transfers.status == 'failed'
) AND files.id NOT IN (
SELECT DISTINCT file_id
FROM transfers
WHERE transfers.status <> 'failed'
)
```
However, this feels a little clunky and procedural. Is there a more elegant solution, possibly involving self-joins?
|
If you want to use self joins:
```
SELECT DISTINCT files.*
FROM
files INNER JOIN transfers t1
ON files.id = t1.file_id AND t1.status='failed'
LEFT JOIN transfers t2
    ON files.id = t2.file_id AND t2.status<>'failed'
WHERE
t2.id IS NULL
```
|
How about an aggregation with a `having` clause?
```
select t.file_id
from transfers t
group by t.file_id
having sum(case when status <> 'failed' then 1 else 0 end) = 0;
```
If there is additional information from `files` that you want, you can `join` it in.
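Since the question is tagged sqlite, the conditional-aggregation approach can be run as-is through Python's `sqlite3` module. A self-contained sketch with three hypothetical files covering the three cases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (file_id INT, status TEXT)")
conn.executemany("INSERT INTO transfers VALUES (?, ?)", [
    (1, "failed"), (1, "failed"),        # only failures -> should match
    (2, "failed"), (2, "succeeded"),     # later success -> excluded
    (3, "failed"), (3, "in progress"),   # retry running -> excluded
])

# Files whose count of non-failed transfers is zero
only_failed = [r[0] for r in conn.execute("""
    SELECT file_id FROM transfers
    GROUP BY file_id
    HAVING SUM(CASE WHEN status <> 'failed' THEN 1 ELSE 0 END) = 0
""")]
print(only_failed)  # [1]
```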
|
Select records in table A with *only* certain corresponding records in table B
|
[
"",
"sql",
"sqlite",
"subquery",
"self-join",
""
] |
I know the basics in SQL programming and I know how to apply some tricks in SQL Server in order to get the result set, but I don't know all tricks in Oracle.
I have these columns:
```
col1 col2 col3
```
And I wrote this query
```
SELECT
col1, MAX(col3) AS mx3
FROM
myTable
GROUP BY
col1
```
And I need to get the value of col2 in the same row where I found the max value of col3, do you know some trick to solve this problem?
|
A couple of different ways to do this:
In both cases I'm treating your initial query as either a common table expression or as an inline view and joining it back to the base table to get your added column. The trick here is that the INNER JOIN eliminates all the records not in your max query.
```
SELECT A.*,
FROM myTable A
INNER JOIN (SELECT col1 , MAX( col3 ) AS mx3 FROM myTable GROUP BY col1) B
on A.Col1=B.Col1
and B.mx3 = A.Col3
```
or
```
with CTE AS (SELECT col1 , MAX( col3 ) AS mx3 FROM myTable GROUP BY col1)
SELECT A.*
FROM MyTable A
INNER JOIN CTE
on A.col1 = B.Col1
and A.col3= cte.mx3
```
|
The easiest way to do this, IMHO, is not to use `max`, but the window function `rank`:
```
SELECT col1 , col2, col3
FROM (SELECT col1, col2, col3,
RANK() OVER (PARTITION BY col1 ORDER BY col3 DESC) rk
FROM myTable) t
WHERE rk = 1
```
BTW, the same syntax should also work for MS SQL-Server and most other modern databases, with MySQL being the notable exception.
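A quick check of the window-function approach in SQLite through Python's `sqlite3` (window functions need SQLite >= 3.25; table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, col3 INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("a", "x", 1), ("a", "y", 5), ("b", "z", 2),
])

# rk = 1 picks, per col1 group, the row holding the max col3 -
# and col2 comes along from that same row
rows = conn.execute("""
    SELECT col1, col2, col3 FROM (
        SELECT col1, col2, col3,
               RANK() OVER (PARTITION BY col1 ORDER BY col3 DESC) rk
        FROM t)
    WHERE rk = 1
    ORDER BY col1
""").fetchall()
print(rows)  # [('a', 'y', 5), ('b', 'z', 2)]
```

Note that `RANK()` returns all tied rows when several share the max col3; use `ROW_NUMBER()` to force exactly one per group.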
|
Getting the value of no grouping column
|
[
"",
"sql",
"oracle",
"select",
""
] |
I'm working with ASP.Net web application and whenever I tried to add a `FOREIGN KEY` this error appears in the Data tools operations :
> SQL71508 :: The model already has an element that has the same name
> dbo.FK\_Sellers\_Users. SQL71508 :: The model already has an element
> that has the same name dbo.FK\_Sellers\_Users.
I don't understand what's the problem with FK! I have 2 tables with this error
table `Sellers` :
```
CREATE TABLE [dbo].[Sellers] (
[Seller_ID] INT IDENTITY (1, 1) NOT NULL,
[Name] NVARCHAR (50) NOT NULL,
[SUsername] NVARCHAR (50) NOT NULL,
[Password] NVARCHAR (50) NOT NULL,
[SEmail] NVARCHAR (50) NOT NULL,
[Phone] NVARCHAR (50) NOT NULL,
[City] NVARCHAR (50) NOT NULL,
[LastLoginDate] DATETIME NULL,
[CreatedDate] DATETIME NULL,
PRIMARY KEY CLUSTERED ([Seller_ID] ASC),
CONSTRAINT [FK_Sellers_Users] FOREIGN KEY ([SEmail]) REFERENCES [Users]([Email]),
CONSTRAINT [FK_Sellers_Users] FOREIGN KEY ([SUsername]) REFERENCES [Users]([Username])
);
```
and table `Users` :
```
CREATE TABLE [dbo].[Users] (
[Id] INT NOT NULL IDENTITY,
[Username] NVARCHAR (50) NOT NULL,
[Password] NVARCHAR (50) NOT NULL,
[Email] NVARCHAR (50) NOT NULL,
[UserType] INT NULL,
PRIMARY KEY CLUSTERED ([Id]),
CONSTRAINT [AK_Users_Username] UNIQUE ([Username]),
CONSTRAINT [AK_Users_Email] UNIQUE ([Email]),
);
```
|
Right there in your CREATE TABLE statement for dbo.Sellers, you have two FK constraints named FK\_Sellers\_Users.
Make those names unique, perhaps by adding the column name on the end.
|
You have two foreign keys with the same name `FK_Sellers_Users`. You should better use `FK_Sellers_Users_Email` and `FK_Sellers_Users_Username`.
|
The model already has an element that has the same name - ASP.NET
|
[
"",
"sql",
"asp.net",
"database",
"error-handling",
""
] |
I have a table that contains records of different transactions that need to be updated monthly. Once the records for a specific month have been successfully updated, a new record is inserted into the table to indicate that the month is already updated. Take this example:
```
**date_of_transaction** **type**
2015-04-21 1 //A deposit record
2015-04-24 2 //A withdrawal record
2015-04-29 1
2015-04-30 2
2015-04-30 3 //3, means an update record
2015-05-14 1
2015-05-22 1
2015-05-27 2
2015-05-30 2
2015-06-09 1
2015-06-12 2
2015-06-17 2
2015-06-19 2
```
Suppose today is July 23, 2015. I can only get data up to one month before the current month, so the records available to me are from June and earlier.
As you can see, an update was performed in April because of the '3' in the type attribute, but in May and June no updates occurred. How can I get the months that have not yet been updated?
|
This will return the months that have no type=3 rows:
```
SELECT MONTH([trans-date]) FROM [table] GROUP BY MONTH([trans-date]) HAVING MAX([trans-type])<3
```
Note: this will not work if 3 is not max value in the column
|
My approach would be to first find all the months, then find the months whose records were updated, and select only those months not in the updated set (a set-minus operation).
The MySQL query would be something like this:
> select distinct extract(month from date\_of\_transaction) from your\_table\_name where extract(month from date\_of\_transaction) not in (select extract(month from date\_of\_transaction) from your\_table\_name where type = 3);
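The set-minus idea can be sanity-checked with an in-memory SQLite database. This is only a sketch using the question's sample rows; SQLite's `strftime` stands in for MySQL's `EXTRACT`:

```python
import sqlite3

# Reproduce the question's sample data in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (date_of_transaction TEXT, type INTEGER)")
rows = [
    ("2015-04-21", 1), ("2015-04-24", 2), ("2015-04-29", 1),
    ("2015-04-30", 2), ("2015-04-30", 3),  # April has an update record (type 3)
    ("2015-05-14", 1), ("2015-05-22", 1), ("2015-05-27", 2), ("2015-05-30", 2),
    ("2015-06-09", 1), ("2015-06-12", 2), ("2015-06-17", 2), ("2015-06-19", 2),
]
conn.executemany("INSERT INTO transactions VALUES (?, ?)", rows)

# Months that have no type=3 row, i.e. months not yet updated.
not_updated = [r[0] for r in conn.execute("""
    SELECT DISTINCT strftime('%m', date_of_transaction)
    FROM transactions
    WHERE strftime('%m', date_of_transaction) NOT IN
          (SELECT strftime('%m', date_of_transaction)
           FROM transactions WHERE type = 3)
    ORDER BY 1
""")]
print(not_updated)  # ['05', '06']
```

May and June come back as not updated, matching the expected answer.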
|
How can I get the month that is not yet updated in SQL by inserting another row on every update?
|
[
"",
"mysql",
"sql",
"date",
""
] |
I have the set of \*.sql files (>200) which contains scripts of tables creation. How can I apply all of them automatically to Oracle DB?
I use Oracle SQL Developer environment
|
Just create a "driver" sql file, and call that from sqlplus - example, if you have
script1.sql, script2.sql, create a file, "driver.sql" and include:
```
@script1.sql
@script2.sql
...etc...
```
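If there are really 200+ scripts, the driver file itself can be generated rather than written by hand. A minimal sketch (the `build_driver` helper and file names are hypothetical):

```python
# Build the contents of a "driver.sql" file: one @-include per script,
# which sqlplus then executes in order.
def build_driver(script_names):
    return "\n".join(f"@{name}" for name in sorted(script_names)) + "\n"

content = build_driver(["script2.sql", "script1.sql"])
print(content)
# @script1.sql
# @script2.sql
```

Write the returned string to `driver.sql` and run it once from SQL*Plus.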
|
Which environment are you working in? Please give a bit more information.
If you use SQL*Plus, try this:
```
SQL > @{path}{file}
```
For Example
```
SQL > @/usr/bin/script.sql
```
|
How to create tables in Oracle DB from the set of *.sql files
|
[
"",
"sql",
"database",
"oracle",
"oracle11g",
""
] |
I have two tables:
Table A of customer data
```
ID - Name - Age
1 - Joe - 24
2 - Bloggs - 46
```
Table B is a temp table of random numbers
```
ID - RandomNumber
1 - 68
2 - 46
3 - 49
4 - 24
```
I need to select from table A
```
SELECT * FROM tableA a WHERE a.age = [randomNumber from tableB]
```
The where clause needs to look into tableB. It looks at the first random number, 68: does it exist in the Age column of tableA? No, so move to the next random number. Does 46 exist in tableA? Yes, so it returns 2 - Bloggs - 46 and inserts that row into another table.
The process then continues from where it left off: does 49 exist in tableA? No, so next; does 24 exist in tableA? Yes, so copy that row into the other table.
I have a list of 150 students and want to randomly select 30 by age.
I could use the `RAND()` function, but the numbers would change every time, so I'd rather place random numbers in a table first and then look them up.
Any help is much appreciated.
|
If you want to randomly order the output rows, `ORDER BY RAND()` is your friend (note the function name varies by DBMS: `RAND()` in MySQL, `DBMS_RANDOM.VALUE` in Oracle, `NEWID()` in SQL Server):
```
SELECT sub.* FROM (
SELECT ta.ID,
ta.Name,
ta.Age,
ROWNUM num
FROM TABLEA ta
JOIN TABLEB tb ON ta.Age = tb.RandomNumber
ORDER BY RAND() ) sub
WHERE sub.num <=30
```
This will retrieve maximum 30 randomly selected names from your table A
Note: if you are using a DBMS where `LIMIT` is implemented, you can simplify the query and simply use
```
SELECT ta.ID,
ta.Name,
ta.Age
FROM TABLEA ta
JOIN TABLEB tb ON ta.Age = tb.RandomNumber
ORDER BY RAND()
LIMIT 30
```
|
You can try
```
SELECT tableA.name FROM tableA, tableB WHERE tableA.age IN (SELECT RandomNumber FROM tableB)
```
If you need only 30 students, you should also add
```
LIMIT 30
```
|
Looping function
|
[
"",
"sql",
"database",
""
] |
The following query returns 1-10 in 10 rows.
```
DECLARE @Range AS INT = 10
;WITH CTE AS(
SELECT TOP (@Range) Duration = ROW_NUMBER() OVER(ORDER BY OBJECT_ID)
FROM sys.all_columns
ORDER BY [Object_id]
)
SELECT Duration from CTE
```
But when I set @Range to 10000 it returns only 7374 rows. Why can't this query return more than 7374 rows?
**UPDATE**
I just found another way to achieve my requirement as following
```
DECLARE @start INT = 1;
DECLARE @end INT = 10;
WITH numbers AS (
SELECT @start AS number
UNION ALL
SELECT number + 1
FROM numbers
WHERE number < @end
)
SELECT *
FROM numbers
OPTION (MAXRECURSION 0);
```
Without the last line it breaks with the error "Maximum recursion 100 has been exhausted before statement completion"; specifying 0 allows unlimited recursion. But this query seems a little slow to me. Is there a faster way?
|
As commented earlier, it's because you reached the number of rows of `sys.columns`. Here is another way to generate list of numbers or what others call `Numbers Table` or `Tally Table`.
This uses cascaded `CTE`s and is said to be the fastest way to create a Tally Table:
```
DECLARE @Range AS INT = 7374
;WITH E1(N) AS( -- 10 ^ 1 = 10 rows
SELECT 1 FROM(VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t(N)
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b), -- 10 ^ 2 = 100 rows
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b), -- 10 ^ 4 = 10,000 rows
E8(N) AS(SELECT 1 FROM E4 a CROSS JOIN E4 b), -- 10 ^ 8 = 10,000,000 rows
CteTally(N) AS(
SELECT TOP(@Range) ROW_NUMBER() OVER(ORDER BY(SELECT NULL))
FROM E8
)
SELECT * FROM CteTally
```
You could easily add another CTE if you need more than 10,000 rows.
For more information about Tally Table, read this excellent [**article**](http://www.sqlservercentral.com/articles/T-SQL/62867/) by Jeff Moden.
For performance comparisons among ways to generate Tally Tables, read [**this**](http://www.sqlservercentral.com/articles/T-SQL/74118/).
---
Explanation taken from Jeff's [article](http://www.sqlservercentral.com/articles/T-SQL/74118/):
> The CTE called `E1` (as in 10E1 for scientific notation) is nothing
> more than ten `SELECT 1`'s returned as a single result set.
>
> `E2` does a `CROSS JOIN` of `E1` with itself. That returns a single
> result set of 10\*10 or up to 100 rows. I say "up to" because if the
> TOP function is 100 or less, the CTE's are "smart" enough to know that
> it doesn't actually need to go any further and `E4` and `E8` won't
> even come into play. If the `TOP` has a value of less than 100, not
> all 100 rows that `E2` is capable of making will be made. It'll always
> make just enough according to the `TOP` function.
>
> You can follow from there. `E4` is a `CROSS JOIN` of `E2` and will
> make up to 100\*100 or 10,000 rows and `E8` is a `CROSS JOIN` of `E4`
> which will make more rows than most people will ever need. If you do
> need more, then just add an `E16` as a `CROSS JOIN` of `E8` and change
> the final `FROM` clause to `FROM E16`.
>
> What's really amazing about this bad-boy is that is produces **ZERO
> READS**. Absolutely none, nada, nil.
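The cascaded-CTE idea is portable enough to sanity-check outside SQL Server. Here is a sketch using SQLite via Python's `sqlite3` (`LIMIT` stands in for `TOP`, and `ROW_NUMBER()` needs SQLite 3.25+):

```python
import sqlite3

# Same cascade: 10 rows -> 100 -> 10,000, numbered with ROW_NUMBER().
conn = sqlite3.connect(":memory:")
n = 10000
result = conn.execute("""
    WITH e1(n) AS (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)),
         e2(n) AS (SELECT 1 FROM e1 a CROSS JOIN e1 b),
         e4(n) AS (SELECT 1 FROM e2 a CROSS JOIN e2 b),
         tally(n) AS (SELECT ROW_NUMBER() OVER () FROM e4 LIMIT ?)
    SELECT COUNT(*), MAX(n) FROM tally
""", (n,)).fetchone()
print(result)  # (10000, 10000)
```

The full 10,000 numbers come back, with no recursion limit to worry about.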
|
One way to generate a large series of numbers would be to use a `cross join` to create a cartesian product between two tables which will generate a set that is `n^2` in size.
**This approach however performs a lot worse than the solution put forward in [the answer by Felix Pamittan](https://stackoverflow.com/a/32096374/1175077) and therefore shouldn't be used.**
```
DECLARE @Range AS INT = 10000
;WITH CTE AS(
SELECT TOP (@Range) Duration = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
FROM sys.all_columns a CROSS JOIN sys.all_columns b
)
SELECT Duration from CTE
```
This would generate a set of 54375876 rows in your case. Instead of generating the rows on the fly you should consider creating a tally table suitable for your needs.
|
Selecting N rows in SQL Server
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a float column in SQL Server and I need to get the minimum and maximum decimal scale.
[Here is SQL Fiddle](http://sqlfiddle.com/#!6/a93ba/2)
I am using
```
Len (Cast(Cast(Reverse(CONVERT(VARCHAR(50), value, 128)
) AS FLOAT) AS BIGINT))
```
This works great, but when the value is negative, e.g. `-0.061`, it throws the error `Error converting data type varchar to float.`
So as an alternative I searched and found the function `PARSENAME(@var,1)`, which works for negative numbers too, but it returns the part after the decimal point with return type `nchar`.
E.g., if the float value is -0.061, it returns 061 as `nchar`, so I cannot use the `LEN` function to get its length.
So how do I make this work, either with the approach in the fiddle or with the function above?
|
Try using the `ABS()` function
```
SELECT value,
Decimals = CASE Charindex('.', value)
WHEN 0 THEN 0
ELSE
Len (Cast(Cast(Reverse(CONVERT(VARCHAR(50), ABS(value), 128)
) AS FLOAT) AS BIGINT))
END
FROM numbers
```
|
I guess using [ABS()](https://msdn.microsoft.com/en-us/library/ms189800.aspx?f=255&MSPPError=-2147217396) function should be enough. Shouldn't it?
> A mathematical function that returns the absolute (positive) value of
> the specified numeric expression.
```
SELECT value
, Decimals = CASE CHARINDEX('.', value)
WHEN 0 THEN 0
ELSE LEN(CAST(CAST(REVERSE(CONVERT(VARCHAR(50), ABS(value), 128)) AS FLOAT) AS BIGINT))
END
FROM numbers;
```
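As a cross-check of the intended behaviour, the same "scale of a float, sign ignored" rule is easy to express in Python (a sketch, not part of either answer):

```python
# Decimal scale = number of digits after the decimal point, ignoring sign.
def decimal_scale(value):
    text = repr(abs(value))            # e.g. -0.061 -> '0.061'
    if "." not in text:
        return 0
    return len(text.split(".")[1].rstrip("0"))

print(decimal_scale(-0.061))  # 3
print(decimal_scale(12.5))    # 1
print(decimal_scale(7.0))     # 0
```

The `ABS()` in the SQL answers plays the same role as `abs()` here: stripping the sign before the string manipulation.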
|
Get minimum and maximum decimal scale value
|
[
"",
"sql",
"sql-server",
""
] |
I have a query , where I want to obtain some data for different time durations (this month, this week, etc).
For the "this week" column, I want it to get all the data from the most recent Monday until now. How would I do this?
I have the following SQL so far :
```
WHERE prla.CREATION_DATE >= SYSDATE - ?
```
|
`trunc(sysdate, 'iw')` is what you're after. IW is the format mask used for Monday of the week the specified date is in (as Monday is the ISO standard for the start of the week). E.g.:
```
with dates as (select trunc(sysdate, 'mm') - 10 + level dt
from dual
connect by level <= 40)
select dt
from dates
where dt >= trunc(sysdate, 'iw')
and dt <= sysdate; -- only needed if the dates in the column could be beyond now.
```
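The same "start of ISO week" logic as `trunc(sysdate, 'iw')` can be expressed in Python as a quick cross-check (Monday is `weekday() == 0`):

```python
from datetime import date, timedelta

# Truncate a date to the Monday of its ISO week.
def start_of_iso_week(d):
    return d - timedelta(days=d.weekday())

print(start_of_iso_week(date(2015, 8, 20)))  # 2015-08-17 (Thursday -> its Monday)
print(start_of_iso_week(date(2015, 8, 17)))  # 2015-08-17 (Monday maps to itself)
```

Filtering `prla.CREATION_DATE >= trunc(sysdate, 'iw')` is the Oracle equivalent of comparing against this computed Monday.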
|
Another option is `NEXT_DAY`, but it is better to use `sysdate-8` than `sysdate-7`: if the current day is the same as the day you are searching for, `sysdate-7` returns the current date. For example:
```
select next_day(sysdate-7,'WED') from dual;
OUTPUT
19-AUG-15
```
Whereas the one below gives you the previous week:
```
select next_day(sysdate-8,'WED') from dual;
OUTPUT
12-AUG-15
```
|
In Oracle SQL, how to get the time for only this current week?
|
[
"",
"sql",
"oracle",
""
] |
I have three tables, as illustrated in the *screenshots below*.
The primary design objective is to separate the values in the `LICENSE_REQ` table: `TAX`, `INSURANCE` and `BENEFIT` columns from their description in the `LICENSE_REQ_DESC` table.
The below query returns the `TAX`, `INSURANCE` and `BENEFIT` columns next to their description from the `LICENSE_REQ_DESC` table.
*Query:*
```
SELECT A.LICENSE_ID,
       B.TAX,
       T.REQ_CODE_DESC,
B.INSURANCE,
I.REQ_CODE_DESC,
B.BENEFIT,
J.REQ_CODE_DESC
FROM BUSINESS A INNER JOIN LICENSE_REQ B ON A.LICENSE_ID = B.LICENSE_ID
LEFT OUTER JOIN LICENSE_REQ_DESC T ON B.TAX = T.REQ_CODE
LEFT OUTER JOIN LICENSE_REQ_DESC I ON B.INSURANCE = I.REQ_CODE
LEFT OUTER JOIN LICENSE_REQ_DESC J ON B.BENEFIT = J.REQ_CODE
```
**Tables:**
`BUSINESS` - `Primary Key LICENSE_ID`
[](https://i.stack.imgur.com/OpnF7.png)
`LICENSE_REQ` - `Foreign Key LICENSE_ID`
[](https://i.stack.imgur.com/Ao2dH.png)
`LICENSE_REQ_DESC` - `Primary Key SEQ_NBR`
[](https://i.stack.imgur.com/BR2Ru.png)
And here is the `resultset` screenshot:
[](https://i.stack.imgur.com/fAw3U.png)
Is there a more effective and efficient method for separating data from metadata (descriptions) while still being able to query the desired result set simply?
|
I like your way.
Some developers might create a stored function like `get_desc(req_code)` and write the SQL as:
```
SELECT A.LICENSE_ID,
B.TAX,
get_desc(B.TAX) TAX_DESC,
B.INSURANCE,
get_desc(B.INSURANCE) INSURANCE_DESC,
B.BENEFIT,
get_desc(B.BENEFIT) BENEFIT_DESC
FROM BUSINESS A INNER JOIN LICENSE_REQ B ON A.LICENSE_ID = B.LICENSE_ID
```
But I much prefer the way you've got it. Keep it all in pure-SQL when you can!
|
There are pro's and con's to the single lookup table method.
The primary pro is that all the translations are in one place, you don't need to go searching hither and yon to find each translation.
The cons are that you end up with larger lookup tables and usually more complex join requirements since there is usually a discriminator column included in the join, and you are limited to having the exact same attributes available for every lookup.
Right now your scheme doesn't include a discriminator column to determine, from the contents of `LICENSE_REQ_DESC`, which code value source a description goes with. This can be a problem if for some reason there are `REQ_CODE` value conflicts between the `TAX`, `INSURANCE`, and `BENEFIT` code columns.
An improved version of your `LICENSE_REQ_DESC` table would be to:
```
ALTER TABLE LICENSE_REQ_DESC ADD (REQ_CODE_SOURCE VARCHAR2(15) );
```
Populate the new column with appropriate values e.g. 'TAX', 'INSURANCE', and 'BENEFIT' then:
```
ALTER TABLE LICENSE_REQ_DESC MODIFY (REQ_CODE_SOURCE NOT NULL);
ALTER TABLE LICENSE_REQ_DESC ADD CONSTRAINT LICENSE_REQ_DESC_UK UNIQUE
(
REQ_CODE_SOURCE
, REQ_CODE
)
ENABLE;
```
And finally change your query to use the following joins:
```
LEFT OUTER JOIN LICENSE_REQ_DESC T ON T.REQ_CODE_SOURCE = 'TAX' AND B.TAX = T.REQ_CODE
LEFT OUTER JOIN LICENSE_REQ_DESC I ON I.REQ_CODE_SOURCE = 'INSURANCE' AND B.INSURANCE = I.REQ_CODE
LEFT OUTER JOIN LICENSE_REQ_DESC J ON J.REQ_CODE_SOURCE = 'BENEFIT' AND B.BENEFIT = J.REQ_CODE
```
|
Database table design: How to use multiple tables to separate related data so that querying resultset is simpler or require less code?
|
[
"",
"sql",
"oracle",
""
] |
I have one *SQL* statement as:
```
SELECT ARTICLES.NEWS_ARTCL_ID, ARTICLES.NEWS_ARTCL_TTL_DES,
ARTICLES.NEWS_ARTCL_CNTNT_T, ARTICLES.NEWS_ARTCL_PUB_DT,
ARTICLES.NEWS_ARTCL_AUTH_NM, ARTICLES.NEWS_ARTCL_URL, ARTICLES.MEDIA_URL,
ARTICLES.ARTCL_SRC_ID, SOURCES.ARTCL_SRC_NM, MEDIA.MEDIA_TYPE_DESCRIP
FROM
RSKLMOBILEB2E.NEWS_ARTICLE ARTICLES,
RSKLMOBILEB2E.MEDIA_TYPE MEDIA,
RSKLMOBILEB2E.ARTICLE_SOURCE SOURCES
WHERE ARTICLES.MEDIA_TYPE_IDENTIF = MEDIA.MEDIA_TYPE_IDENTIF
AND ARTICLES.ARTCL_SRC_ID = SOURCES.ARTCL_SRC_ID
AND ARTICLES.ARTCL_SRC_ID = 1
ORDER BY ARTICLES.NEWS_ARTCL_PUB_DT
```
Now I need to combine another *SQL* statement into one which is:
```
SELECT COUNT ( * )
FROM RSKLMOBILEB2E.NEWS_LIKES LIKES, RSKLMOBILEB2E.NEWS_ARTICLE ARTICLES
WHERE LIKES.NEWS_ARTCL_ID = ARTICLES.NEWS_ARTCL_ID
```
Basically I have one table that contains articles, and I need to include the count of user likes, which are stored in another table.
|
Use a `subquery` to add the `likescount` in your first query like this:
```
SELECT ARTICLES.NEWS_ARTCL_ID
,ARTICLES.NEWS_ARTCL_TTL_DES
,ARTICLES.NEWS_ARTCL_CNTNT_T
,ARTICLES.NEWS_ARTCL_PUB_DT
,ARTICLES.NEWS_ARTCL_AUTH_NM
,ARTICLES.NEWS_ARTCL_URL
,ARTICLES.MEDIA_URL
,ARTICLES.ARTCL_SRC_ID
,SOURCES.ARTCL_SRC_NM
,MEDIA.MEDIA_TYPE_DESCRIP
,(
SELECT COUNT(*)
FROM RSKLMOBILEB2E.NEWS_LIKES LIKES
WHERE LIKES.NEWS_ARTCL_ID = ARTICLES.NEWS_ARTCL_ID
) AS LikesCount
FROM RSKLMOBILEB2E.NEWS_ARTICLE ARTICLES
,RSKLMOBILEB2E.MEDIA_TYPE MEDIA
,RSKLMOBILEB2E.ARTICLE_SOURCE SOURCES
WHERE ARTICLES.MEDIA_TYPE_IDENTIF = MEDIA.MEDIA_TYPE_IDENTIF
AND ARTICLES.ARTCL_SRC_ID = SOURCES.ARTCL_SRC_ID
AND ARTICLES.ARTCL_SRC_ID = 1
ORDER BY ARTICLES.NEWS_ARTCL_PUB_DT;
```
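The correlated-subquery pattern is easy to verify on a small scale. A sketch with made-up rows in SQLite (simplified column names, not the real schema):

```python
import sqlite3

# Two tiny tables standing in for articles and likes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE news_article (news_artcl_id INTEGER, title TEXT);
    CREATE TABLE news_likes (news_artcl_id INTEGER);
    INSERT INTO news_article VALUES (1, 'first'), (2, 'second');
    INSERT INTO news_likes VALUES (1), (1), (1), (2);
""")

# The subquery is re-evaluated per article row, yielding its like count.
result = conn.execute("""
    SELECT a.news_artcl_id,
           a.title,
           (SELECT COUNT(*) FROM news_likes l
             WHERE l.news_artcl_id = a.news_artcl_id) AS likes_count
    FROM news_article a
    ORDER BY a.news_artcl_id
""").fetchall()
print(result)  # [(1, 'first', 3), (2, 'second', 1)]
```

Each article keeps exactly one output row, with the like count attached.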
|
I think that solution is in using Analytic Functions. Please have a look on <https://oracle-base.com/articles/misc/analytic-functions>
Please check the following query (keep in mind I have no idea about your actual table structures). Because of the left join, records might be duplicated, which is why the grouping is added.
```
SELECT ARTICLES.NEWS_ARTCL_ID, ARTICLES.NEWS_ARTCL_TTL_DES,
ARTICLES.NEWS_ARTCL_CNTNT_T, ARTICLES.NEWS_ARTCL_PUB_DT,
ARTICLES.NEWS_ARTCL_AUTH_NM, ARTICLES.NEWS_ARTCL_URL, ARTICLES.MEDIA_URL,
ARTICLES.ARTCL_SRC_ID, SOURCES.ARTCL_SRC_NM, MEDIA.MEDIA_TYPE_DESCRIP,
count(LIKES.ID) over ( partition by ARTICLES.NEWS_ARTCL_ID ) as num_likes
FROM RSKLMOBILEB2E.NEWS_ARTICLE ARTICLES
join RSKLMOBILEB2E.MEDIA_TYPE MEDIA
on ARTICLES.MEDIA_TYPE_IDENTIF = MEDIA.MEDIA_TYPE_IDENTIF
join RSKLMOBILEB2E.ARTICLE_SOURCE SOURCES
on ARTICLES.ARTCL_SRC_ID = SOURCES.ARTCL_SRC_ID
LEFT JOIN RSKLMOBILEB2E.NEWS_LIKES LIKES
ON LIKES.NEWS_ARTCL_ID = ARTICLES.NEWS_ARTCL_ID
WHERE
ARTICLES.ARTCL_SRC_ID = 1
group by ARTICLES.NEWS_ARTCL_ID, ARTICLES.NEWS_ARTCL_TTL_DES,
ARTICLES.NEWS_ARTCL_CNTNT_T, ARTICLES.NEWS_ARTCL_PUB_DT,
ARTICLES.NEWS_ARTCL_AUTH_NM, ARTICLES.NEWS_ARTCL_URL, ARTICLES.MEDIA_URL,
ARTICLES.ARTCL_SRC_ID, SOURCES.ARTCL_SRC_NM, MEDIA.MEDIA_TYPE_DESCRIP
ORDER BY ARTICLES.NEWS_ARTCL_PUB_DT
```
I also changed the comma-separated list of tables in the FROM clause to explicit joins. I think this is more readable, since table join conditions are separated from result filtering in the WHERE clause.
|
Select Count of one table into another
|
[
"",
"sql",
"plsql",
"oracle11g",
""
] |
I have two tables, T1 AND T2.
T1 with following columns: `Id, TypeofValue, Year, value`
`Typeofvalue` can have 2 values
`1` - indicates `Actual`
`2` - indicates `Target`
T2 with the following column: `NoOfRecordsToDisplay`
I need to fetch the number of records (if existing) for `Target` corresponding to an `Id`.
However, the catches are:
1. Sometimes Target value might not be present for a year
2. I need to get only the last records for targets, on the basis of NoOfRecordsToDisplay (the number of records to display comes from T2) applied to the actual values
Example1:
NoOfRecordsToDisplay =3, ID =123
The data below should return 3, as we have 3 non-null values for target for the last 3 years (2015, 2014 and 2013 in this case):
```
Id TypeofValue Year Value
123 1 2015 55
123 1 2014 56
123 1 2013 57
123 1 2012 58
123 2 2015 50
123 2 2014 50
123 2 2013 50
123 2 2012 50
124 1 2015 55
124 1 2014 56
124 1 2013 57
124 1 2012 58
124 2 2015 50
124 2 2014 50
124 2 2013 50
124 2 2012 50
```
Another dataset -
NoOfRecordsToDisplay =3, ID =123
The data below should return 0, as we have no target values for the last 3 years (2015, 2014, 2013):
```
Id TypeofValue Year Value
123 1 2015 55
123 1 2014 56
123 1 2013 57
123 1 2012 58
123 2 2012 50
124 1 2015 55
124 1 2014 56
124 1 2013 57
124 1 2012 58
124 2 2012 50
```
|
OK so if I understand correctly, you want a count of rows where the TypeOfValue = 2, and the Year is in the top *n* values where TypeOfValue = 1, for a given `Id`.
This should be:
```
DECLARE @Id int, @NoOfRecordsToDisplay int
SET @Id = 123
SET @NoOfRecordsToDisplay = 3
SELECT COUNT(*) FROM myTable
WHERE
TypeofValue = 2
AND Id = @Id
AND [Year] IN ( SELECT TOP(@NoOfRecordsToDisplay) [Year] FROM myTable
WHERE TypeofValue = 1 AND Id = @Id
ORDER BY [Year] DESC)
```
In practice, you would probably want to create this as a stored proc with @Id as an input parameter. @NoOfRecordsToDisplay could either be a parameter too, or selected from some other table - I'm still not 100% clear on this from your question.
Updated SQL Fiddle here: <http://sqlfiddle.com/#!3/87b0c/2>
**Edit:** Forgot the ORDER BY on the subquery!
**Edit 2:** Updated query and SQL fiddle based on updated question.
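The query can be checked against the question's first dataset in SQLite: for id 123 the last three "actual" years are 2015-2013 and all three have target rows, so the count should be 3. A sketch (`LIMIT` stands in for `TOP`):

```python
import sqlite3

# Question's first dataset, reduced to the columns that matter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER, type INTEGER, year INTEGER, value INTEGER)")
data = [(123, 1, y, 0) for y in (2015, 2014, 2013, 2012)] \
     + [(123, 2, y, 0) for y in (2015, 2014, 2013, 2012)]
conn.executemany("INSERT INTO t1 VALUES (?, ?, ?, ?)", data)

# Count target rows (type=2) whose year is among the top 3 actual years.
(count,) = conn.execute("""
    SELECT COUNT(*) FROM t1
    WHERE type = 2 AND id = 123
      AND year IN (SELECT year FROM t1
                   WHERE type = 1 AND id = 123
                   ORDER BY year DESC LIMIT 3)
""").fetchone()
print(count)  # 3
```

Removing the 2015-2013 target rows from `data` would make the same query return 0, matching the second dataset.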
|
With SQL queries like these, understanding the problem and devising an algorithm or method for solving it is the most important part. Based on what you want:
> I need to get only last 3 records for targets on the basis of latest 3
> values for actual
you need two steps:
1. Determine the last 3 years of actual values:
```
SELECT TOP 3 [Year]
FROM Your_Table
WHERE Typeofvalue = 1
ORDER BY [Year] DESC
```
2. Count the records of target values whose years are in the above query:
```
SELECT COUNT(*) FROM Your_Table
WHERE
Typeofvalue = 2
AND
[Year] IN (
SELECT TOP 3 [Year]
FROM Your_Table
WHERE Typeofvalue = 1
ORDER BY [Year] DESC)
```
---
You can do it with join too instead of subquery:
```
SELECT COUNT(*) FROM Your_Table t
JOIN
(
SELECT TOP 3 [Year]
FROM Your_Table
WHERE Typeofvalue = 1
ORDER BY [Year] DESC
)q
ON T.[Year]=q.[Year]
WHERE t.Typeofvalue=2
```
|
Getting rowcount for a column based on values from another column
|
[
"",
"sql",
"sql-server-2008",
""
] |
I am using SQL parameters to allow the user to create their own SELECT query using dropdowns in the view. The parameter values are passed to the stored procedure from the view using entity framework in an MVC controller.
I am trying to avoid dynamic SQL, so I have a couple options for doing this.
Here is the method that I ultimately chose:
```
@topLeftInput varchar(100),
@topRightInput varchar(100)
AS
BEGIN
Select * from test
where
CASE @topLeftInput
WHEN 'BasicReturnReasonId' THEN BasicReturnReasonId
WHEN 'ClientName' THEN ClientName
WHEN 'CreditDeniedReasonId' THEN CreditDeniedReasonId
WHEN 'ItemsnotReturnedReasonId' THEN ItemsnotReturnedReasonId
WHEN 'ManufacturerId' THEN ManufacturerId
WHEN 'ManufacturerOrderNumber' THEN ManufacturerOrderNumber
WHEN 'ProductConditionId' THEN ProductConditionId
ELSE NULL
END
= @topRightInput;
END
```
The exception is occurring when `@topLeftInput` is `ClientName` or `ManufacturerOrderNumber`. When `@topLeftInput` is equal to either of these, the user can enter the value for `@topRightInput` in a textbox, instead of using a dropdown.
[](https://i.stack.imgur.com/NOMVU.png)
I am getting an exception that says error converting the varchar value to int. Not sure why it is trying to do that.
In the previous technique I was using, this exception was not happening and it was working perfectly (Just an example for the first option):
```
if @topLeftInput = 'BasicReturnReasonId'
Begin
Select * from test
where BasicReturnReasonId = @topRightInput;
End
```
Why would changing this if statement method to a `CASE` cause this kind of exception?
|
The problem is the `CASE` statement, which:
> Returns the highest precedence type from the set of types in
> result\_expressions and the optional else\_result\_expression
Try changing your query like this
```
Select *
from test
where
(@topLeftInput = 'BasicReturnReasonId' and BasicReturnReasonId = @topRightInput)
or
(@topLeftInput = 'ClientName' and ClientName = @topRightInput)
or
(@topLeftInput = 'CreditDeniedReasonId' and CreditDeniedReasonId = @topRightInput)
or
(@topLeftInput = 'ItemsnotReturnedReasonId' and ItemsnotReturnedReasonId = @topRightInput)
or
..
```
or you have to explicitly convert the `integer` columns to `varchar` to avoid implicit conversion error
```
CASE @topLeftInput
WHEN 'BasicReturnReasonId' THEN cast(BasicReturnReasonId as varchar(50))
WHEN 'ClientName' THEN ClientName
WHEN 'CreditDeniedReasonId' THEN cast(CreditDeniedReasonId as varchar(50))
WHEN 'ItemsnotReturnedReasonId' THEN cast(ItemsnotReturnedReasonId as varchar(50))
WHEN 'ManufacturerId' THEN cast(ManufacturerId as varchar(50))
WHEN 'ManufacturerOrderNumber' THEN cast(ManufacturerOrderNumber as varchar(50))
WHEN 'ProductConditionId' THEN cast(ProductConditionId as varchar(50))
ELSE NULL
END
```
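The OR-based rewrite sidesteps type precedence entirely, since each comparison stays within a single column's type. A sketch on a tiny made-up table in SQLite (which won't raise the SQL Server error, but shows the predicate shape working across an int column and a text column):

```python
import sqlite3

# One integer column, one text column, as in the question's mixed types.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (BasicReturnReasonId INTEGER, ClientName TEXT);
    INSERT INTO test VALUES (1, 'Acme'), (2, 'Globex');
""")

# Only the branch matching the chosen column name can ever be true.
def search(left, right):
    return conn.execute("""
        SELECT * FROM test
        WHERE (? = 'BasicReturnReasonId' AND BasicReturnReasonId = ?)
           OR (? = 'ClientName' AND ClientName = ?)
    """, (left, right, left, right)).fetchall()

print(search("ClientName", "Acme"))        # [(1, 'Acme')]
print(search("BasicReturnReasonId", "2"))  # [(2, 'Globex')]
```

Each branch compares the user's value against exactly one column, so no cross-column type conversion is forced.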
|
The value of a case expression must resolve to only one type. It can't return an `integer` in some cases and a `varchar` in others. When you have mixed types in your possible return values, SQL Server picks the type that has the highest [precedence](https://msdn.microsoft.com/en-us/library/ms190309%28v=sql.100%29.aspx). In your case statement it is `int`.
You have to make sure that all possible return values can be converted to the type of highest precedence.
|
Stored Procedure trying to convert varchar to int for unknown reason
|
[
"",
"sql",
"sql-server",
"asp.net-mvc",
"sql-server-2008",
"stored-procedures",
""
] |
This code doesn't work when I run it and I don't understand why not. The error is that a table already exists, so it seems less like an actual bug than a quirk of the verification process, but perhaps I am wrong. It's inside a procedure, so I can't just use GO everywhere.
In my first question I grossly oversimplified my problem.
These were my constraints:
I use temporary tables to communicate between procedures. In this case I needed to call the same procedure twice with different parameters, so the table "#a" had to be populated twice. The convention is to create a temporary table before executing the procedure, and from that the procedure knows whether it needs to do a select or populate the table. If it populates the table, it is also responsible for creating the columns (this is done to avoid dependencies between procedures), so I can't reuse the same table because the columns have already been created.
The solution I found was to create another stored procedure to do the work for the second part. I could have changed the procedure sp_a, but I don't have actual control over it.
```
CREATE PROCEDURE zzzzz_testtodelete
AS
BEGIN
CREATE TABLE #a(idtmp bit)
Exec sp_a
@a = 1
.....
DROP TABLE #a
CREATE TABLE #a(idtmp bit)
Exec sp_a
@a = 2
.....
DROP TABLE #a
END
```
Is this normal, or is it a SQL Server bug?
|
This is normal. What happens here is a compile error, not a run-time error: when parsing the batch, SQL Server sees that you have already created `#a`, so creating it again produces an error. Again, this is a compile error, not a run-time error. To test:
```
PRINT 'TESTING'
CREATE TABLE #a(idtmp bit)
DROP TABLE #a
CREATE TABLE #a(idtmp bit)
DROP TABLE #a
```
Note that `TESTING` is not printed, since the batch is not even executed due to the compile error.
Putting a `GO` in between the first `DROP` and second `CREATE` will fix the problem:
```
PRINT 'TESTING'
CREATE TABLE #a(idtmp bit)
DROP TABLE #a
GO
CREATE TABLE #a(idtmp bit)
DROP TABLE #a
```
This is because you have two separate batches now. SQL Server parses and validates sql queries by batch.
---
After the OP's edit, it is clear this is used inside a stored procedure. In that case there is no real solution; it is still a compile error, and the code is not executed.
Additionally, according to [BOL](https://technet.microsoft.com/en-us/library/ms174979(v=sql.100).aspx):
> If more than one temporary table is created inside a single stored
> procedure or batch, they must have different names.
The workaround is to use different names for the tables. Or just truncate the table after the first use.
|
Temp tables can be less efficient in stored procedures; try using table variables instead, which you don't need to drop:
```
DECLARE @a TABLE (idtmp bit)
```
|
Creating and deleting temp tables consecutively in a stored procedure
|
[
"",
"sql",
"sql-server",
"transactions",
"temp-tables",
""
] |
I'm trying to understand the performance logic between the following two stored procedures. Let's say I have a stored procedure that counts rows from three different tables based on a variable called @user. Is it better to filter the output on each table select or to filter only once on the "parent" select.
Stored Procedure 1
```
CREATE PROCEDURE [dbo].[myProcedure1]
@user NVARCHAR(150),
@postCount int out
AS
BEGIN
SELECT @postCount = COUNT(unionPostCount.IDpost)
FROM (
SELECT IDpost FROM Table1 WHERE Table1.IDuser = @user
UNION ALL SELECT IDpost FROM Table2 WHERE Table2.IDuser = @user
        UNION ALL SELECT IDpost FROM Table3 WHERE Table3.IDuser = @user
)
AS unionPostCount
END
```
Stored Procedure 2
```
CREATE PROCEDURE [dbo].[myProcedure2]
@user NVARCHAR(150),
@postCount int out
AS
BEGIN
SELECT @postCount = COUNT(unionPostCount.IDpost)
FROM (
        SELECT IDpost, IDuser FROM Table1
        UNION ALL SELECT IDpost, IDuser FROM Table2
        UNION ALL SELECT IDpost, IDuser FROM Table3
)
AS unionPostCount WHERE unionPostCount.IDuser = @user
END
```
I have the feeling that with very large tables Procedure 2 is slower because it has to select all rows from three tables and then filter them, but I might be wrong. Which of the two procedures is faster? Any help is much appreciated.
|
I changed the answer, as Viktor was indeed right: the plans are identical.
I still believe that three separate count operations could be faster depending on the size of the data retrieved, though in all solutions most of the time is spent finding the rows. So I guess all solutions are close in terms of performance.
|
The optimizer doesn't evaluate the unions individually; both plans are equal.
For example [first SQL fiddle](http://sqlfiddle.com/#!6/4f0aa/1) and [second SQL Fiddle](http://sqlfiddle.com/#!6/4f0aa/2)
|
Is it more efficient to filter rows in each union table select or filter only once on the parent select
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have been learning about query optimization and increasing query performance, but in general, if we create a query, how can we know whether it is a wise one?
I know we can see the execution time, but that time is not a clear indicator without a good amount of data, and when we create a new query we usually don't have much data to check against.
I have learned about the performance of various clauses and commands. But is there anything by which we can check the performance of a query? Performance here is not execution time; it means whether a query is **"ok"** or not, without depending on the data.
We cannot create as much data as there would be in the live database.
|
General performance of a query can be checked using the EXPLAIN command in MySQL. See <https://dev.mysql.com/doc/refman/5.7/en/using-explain.html>
It shows you how MySQL engine plans to execute the query and allows you to do some basic sanity checks i.e. if the engine will use keys and indexes to execute the query, see how MySQL will execute the joins (i.e. if foreign keys aren't missing) and many more.
You can find some general tips about how to use EXPLAIN for optimizing queries here (along with some nice samples): <http://www.sitepoint.com/using-explain-to-write-better-mysql-queries/>
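SQLite offers the same kind of pre-data sanity check through `EXPLAIN QUERY PLAN`; a sketch showing that the planner reports index usage before any rows exist (the exact detail text varies by SQLite version):

```python
import sqlite3

# An empty table with an index: the plan can still be inspected.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
    CREATE INDEX idx_users_email ON users(email);
""")

# EXPLAIN QUERY PLAN returns rows describing the chosen strategy.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
print(plan)
# e.g. [(2, 0, 0, 'SEARCH users USING INDEX idx_users_email (email=?)')]
```

Seeing `USING INDEX` (rather than a full `SCAN`) confirms the query shape is index-friendly, independent of data volume, which is exactly the kind of check the answer recommends with MySQL's `EXPLAIN`.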
|
As mentioned above, the right query is always data-dependent. Up to a point, you can use the methods below to check performance:
1. You can use `EXPLAIN` to understand the *query execution plan*, which may help you correct some things. For more info,
refer to the documentation: [Optimizing Queries with EXPLAIN](https://dev.mysql.com/doc/refman/5.7/en/using-explain.html)
2. You can use `Query Analyzer`. Refer [MySQL Query Analyzer](https://www.mysql.com/products/enterprise/query.html)
|
How to check performance of mysql query?
|
[
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I have two SQL Server tables that I'm trying to compare for missing records in the 2nd table.
The first table is the master table that simply contains job titles (e.g. Web Developer, DBA, etc) and levels (e.g. Manager, Expert, etc.)
The 2nd table has a list of companies with their potential positions and rate. Each company should have every job title and level combination from the master table and many do not.
* `Master` table - `Title | Level`
* `Positions` table - `Employer | Title | Level | Rate`
I've tried using JOINS (all types), `NOT IN`, and `NOT EXISTS`.
I'm coming up with either no data or false positives for all.
I need to know which employers are missing what title and level combinations, so I can get them loaded.
Thanks for your input!
|
**FIXED** after re-reading and understanding the question... The code sample below will output one row for "EmployerB / CFO" (the one position commented out during the INSERTs). If you don't have an Employers table (you should, I think) then you could always just `SELECT DISTINCT Employer FROM PositionsTable` instead.
```
DECLARE @MasterTable TABLE
(
Title varchar(100),
Level int
)
DECLARE @PositionsTable TABLE
(
Employer varchar(100),
Title varchar(100),
Level int,
Rate int
)
DECLARE @Employers TABLE
(
Employer varchar(100)
)
INSERT INTO @Employers VALUES ('EmployerA')
INSERT INTO @Employers VALUES ('EmployerB')
INSERT INTO @MasterTable VALUES ('CEO', 1)
INSERT INTO @MasterTable VALUES ('CFO', 2)
INSERT INTO @MasterTable VALUES ('CTO', 2)
INSERT INTO @PositionsTable VALUES ('EmployerA', 'CEO', 1, 100)
INSERT INTO @PositionsTable VALUES ('EmployerA', 'CFO', 2, 100)
INSERT INTO @PositionsTable VALUES ('EmployerA', 'CTO', 2, 100)
INSERT INTO @PositionsTable VALUES ('EmployerB', 'CEO', 1, 100)
--INSERT INTO @PositionsTable VALUES ('EmployerB', 'CFO', 2, 100)
INSERT INTO @PositionsTable VALUES ('EmployerB', 'CTO', 2, 100)
SELECT *
FROM @Employers e
CROSS JOIN @MasterTable mt
LEFT JOIN @PositionsTable pt ON e.Employer = pt.Employer AND mt.Title = pt.Title AND mt.Level = pt.Level
WHERE pt.Employer IS NULL
```
The `CROSS JOIN` functions to get a complete list of Employers and the expected positions, and the `LEFT JOIN` then allows you to determine which ones are missing. You can reduce the result set to just the columns you are interested in, but if you do that, make sure you select `e.Employer` and not `pt.Employer` (which will be NULL).
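A minimal, runnable version of the `CROSS JOIN` + `LEFT JOIN` pattern above, using SQLite via Python (table and column names follow the sample data; only the quoting differs from the T-SQL original):

```python
import sqlite3

# Build the full Employer x (Title, Level) grid with CROSS JOIN,
# then LEFT JOIN the actual positions; unmatched rows are the gaps.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employers (employer TEXT);
CREATE TABLE master (title TEXT, level INTEGER);
CREATE TABLE positions (employer TEXT, title TEXT, level INTEGER, rate INTEGER);

INSERT INTO employers VALUES ('EmployerA'), ('EmployerB');
INSERT INTO master VALUES ('CEO', 1), ('CFO', 2), ('CTO', 2);
INSERT INTO positions VALUES
  ('EmployerA', 'CEO', 1, 100), ('EmployerA', 'CFO', 2, 100),
  ('EmployerA', 'CTO', 2, 100), ('EmployerB', 'CEO', 1, 100),
  ('EmployerB', 'CTO', 2, 100);   -- EmployerB / CFO deliberately missing
""")

missing = conn.execute("""
    SELECT e.employer, m.title, m.level
    FROM employers e
    CROSS JOIN master m
    LEFT JOIN positions p
      ON p.employer = e.employer AND p.title = m.title AND p.level = m.level
    WHERE p.employer IS NULL
""").fetchall()
print(missing)  # → [('EmployerB', 'CFO', 2)]
```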
|
Check this query sample
```
create table #master(title varchar(100),level varchar(100))
create table #position(employer varchar(100),title varchar(100),level varchar(100))
insert into #master values('A','A1')
insert into #master values('B','B1')
insert into #master values('C','C1')
insert into #master values('D','D1')
insert into #position values('E1','A','A1')
insert into #position values('E2','A','A1')
insert into #position values('E3','A','A1')
insert into #position values('E4','A','A1')
insert into #position values('E5','X','X1')
select * from
(select m.title,m.level,p.employer from #master m cross join #position p
except
select m.title,m.level,p.employer from #master m inner join #position p
on m.title=p.title and m.level=p.level) a
order by employer
```
|
SQL Server Join Confusion - How to Find Missing Rows
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
My First Table and its data is :
```
DECLARE @TempTableA TABLE (FinYearVal VARCHAR(9))
FinYearVal
----------
2007-2008
2008-2009
2009-2010
2010-2011
2011-2012
2012-2013
2013-2014
2014-2015
```
Then I have another table with data as :
```
DECLARE @TempTableB TABLE (Category VARCHAR(10), FinYear VARCHAR(9), AMOUNT NUMERIC(18,10))
Category FinYear AMOUNT
---------- ------------- ----------
A 2013-2014 100.0000
A 2014-2015 200.0000
B 2012-2013 100.0000
B 2013-2014 200.0000
B 2014-2015 300.0000
B 2015-2016 400.0000
C 2011-2012 100.0000
C 2012-2013 200.0000
```
I want my Table1 to be right-joined with Table2 for every category, just as if we right-joined each category separately and UNIONed the results.
The Expected Result is :
```
Category FinYearVal AMOUNT
---------- ---------- ----------
A 2007-2008 0.0000
A 2008-2009 0.0000
A 2009-2010 0.0000
A 2010-2011 0.0000
A 2011-2012 0.0000
A 2012-2013 0.0000
A 2013-2014 100.0000
A 2014-2015 200.0000
B 2007-2008 0.0000
B 2008-2009 0.0000
B 2009-2010 0.0000
B 2010-2011 0.0000
B 2011-2012 0.0000
B 2012-2013 100.0000
B 2013-2014 200.0000
B 2014-2015 300.0000
C 2007-2008 0.0000
C 2008-2009 0.0000
C 2009-2010 0.0000
C 2010-2011 0.0000
C 2011-2012 100.0000
C 2012-2013 200.0000
C 2013-2014 0.0000
C 2014-2015 0.0000
```
NOTE: My Table2 has many Categories where I would dynamically choose how many categories I want to be joined in the Query.
|
```
SELECT t.FinYearVal, t.Category, ISNULL(ttb.AMOUNT,0)
FROM (
SELECT tta.FinYearVal, d.Category
FROM @TempTableA tta
CROSS JOIN (SELECT DISTINCT ttb.Category FROM @TempTableB ttb) AS d
) AS t
LEFT OUTER JOIN @TempTableB ttb ON t.FinYearVal = ttb.FinYear AND ttb.Category = t.Category
ORDER BY t.Category, t.FinYearVal
```
If you have a separate Category table, the query can be improved by replacing the cross join on the DISTINCT subquery with a cross join on that table.
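A runnable sketch of the same idea (SQLite via Python, with a cut-down version of the sample data): cross join the year list with the distinct categories, then left join the amounts and default to 0. Note that `ISNULL()` is SQL Server-specific; SQLite and standard SQL use `COALESCE()`.

```python
import sqlite3

# Every (category, year) combination, with missing amounts filled as 0.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE years (fin_year TEXT);
CREATE TABLE amounts (category TEXT, fin_year TEXT, amount REAL);
INSERT INTO years VALUES ('2013-2014'), ('2014-2015');
INSERT INTO amounts VALUES ('A', '2013-2014', 100), ('B', '2014-2015', 300);
""")

rows = conn.execute("""
    SELECT c.category, y.fin_year, COALESCE(a.amount, 0) AS amount
    FROM years y
    CROSS JOIN (SELECT DISTINCT category FROM amounts) c
    LEFT JOIN amounts a
      ON a.category = c.category AND a.fin_year = y.fin_year
    ORDER BY c.category, y.fin_year
""").fetchall()
for r in rows:
    print(r)
```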
|
Try this
```
SELECT C.Category,A.FinYearVal, ISNULL(B.AMOUNT,0) AS AMOUNT
FROM @TempTableA A
CROSS JOIN (SELECT DISTINCT Category FROM @TempTableB) C
LEFT JOIN @TempTableB B ON B.Category = C.Category AND B.FinYear = A.FinYearVal
```
|
Join One Table with Other Table for Every Row Type
|
[
"",
"sql",
"sql-server",
"join",
"cross-apply",
""
] |
I have a database of appointments, each with a personid, sequence number, appointment date and two questionnaire scores.
What I need to do is bring back one line per personid, showing their first appointment with scores, and their last appointment with scores, along with the scores themselves for both.
Having spent a long time already on what is probably a very simple query, I've tried to use the following to just give me the first appointment with scores:
```
SELECT
PERSONID,
MIN(CONTACTDATE) AS 'FIRST_CONTACT',
QUEST_1,
QUEST_2
FROM TBL_APPOINTMENTS
GROUP BY
PERSONID,
CONTACTDATE,
QUEST_1,
QUEST_2
ORDER BY
PERSONID,
FIRST_CONTACT
```
As you can probably guess, this is giving me duplicate rows every time the values in QUEST\_1 or QUEST\_2 change.
Can anyone help please? I'm sure this is all very simple but it's driving me up the wall!
Thanks in advance!
|
If the result you're looking for is something like this:
```
PERSONID FIRST_CONTACT FIRST_QUEST_1 FIRST_QUEST_2 LAST_CONTACT LAST_QUEST_1 LAST_QUEST_2
----------- ------------- ------------- ------------- ------------ ------------ ------------
1 2015-01-01 10 11 2015-01-21 21 211
2 2015-01-01 11 24 2015-01-31 12 25
3 2015-02-01 13 21 2015-03-01 14 28
4 2015-03-01 15 29 2015-04-01 16 21
```
then the query below would give you that. Note that the max and min dates and scores would be the same if there is only one record for a person.
```
SELECT
PERSONID,
FIRST_CONTACT, FIRST_QUEST_1, FIRST_QUEST_2,
LAST_CONTACT, LAST_QUEST_1, LAST_QUEST_2
FROM (
SELECT PERSONID, MIN(CONTACTDATE) AS 'FIRST_CONTACT', MAX(CONTACTDATE) AS 'LAST_CONTACT'
FROM TBL_APPOINTMENTS a
GROUP BY PERSONID
) a
CROSS APPLY (
SELECT QUEST_1 AS 'FIRST_QUEST_1', QUEST_2 AS 'FIRST_QUEST_2'
FROM TBL_APPOINTMENTS
WHERE PERSONID = a.PERSONID AND CONTACTDATE = A.FIRST_CONTACT
) o_first
CROSS APPLY (
SELECT QUEST_1 AS 'LAST_QUEST_1', QUEST_2 AS 'LAST_QUEST_2'
FROM TBL_APPOINTMENTS
WHERE PERSONID = a.PERSONID AND CONTACTDATE = A.LAST_CONTACT
) o_last
ORDER BY PERSONID, FIRST_CONTACT;
```
The same query could be written using joins:
```
SELECT
A.PERSONID,
FIRST_CONTACT,
F.QUEST_1 AS FIRST_QUEST_1,
F.QUEST_2 AS FIRST_QUEST_2,
LAST_CONTACT,
L.QUEST_1 AS LAST_QUEST_1,
L.QUEST_2 AS LAST_QUEST_2
FROM (
SELECT PERSONID, MIN(CONTACTDATE) AS 'FIRST_CONTACT', MAX(CONTACTDATE) AS 'LAST_CONTACT'
FROM TBL_APPOINTMENTS a
GROUP BY PERSONID
) a
JOIN TBL_APPOINTMENTS F ON F.PERSONID = a.PERSONID AND A.FIRST_CONTACT = F.CONTACTDATE
JOIN TBL_APPOINTMENTS L ON L.PERSONID = a.PERSONID AND A.LAST_CONTACT = L.CONTACTDATE
ORDER BY A.PERSONID, FIRST_CONTACT;
```
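As a self-contained check of the join-back approach (SQLite via Python, with illustrative data): aggregate `MIN`/`MAX` per person, then join the detail rows back to pick up the scores at each boundary date.

```python
import sqlite3

# MIN/MAX find the boundary dates; two self-joins recover the scores.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE appts (personid INTEGER, contactdate TEXT,
                    quest_1 INTEGER, quest_2 INTEGER);
INSERT INTO appts VALUES
  (1, '2015-01-01', 10, 11),
  (1, '2015-01-10', 15, 16),
  (1, '2015-01-21', 21, 211),
  (2, '2015-01-01', 11, 24);
""")

rows = conn.execute("""
    SELECT a.personid,
           a.first_contact, f.quest_1, f.quest_2,
           a.last_contact,  l.quest_1, l.quest_2
    FROM (SELECT personid,
                 MIN(contactdate) AS first_contact,
                 MAX(contactdate) AS last_contact
          FROM appts GROUP BY personid) a
    JOIN appts f ON f.personid = a.personid AND f.contactdate = a.first_contact
    JOIN appts l ON l.personid = a.personid AND l.contactdate = a.last_contact
    ORDER BY a.personid
""").fetchall()
print(rows)
```

For person 2, who has only one appointment, the first and last columns are identical, as noted above.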
|
Using `ROW_NUMBER()` you can assign appointments a sequential value starting from `1` on a per `PERSONID` basis.
In the example below I create two such ordinals, one forward in time and one reverse in time. Then I can pick the *first going forward* as well as the *first going backwards*.
```
WITH
sorted AS
(
SELECT
PERSONID,
CONTACTDATE,
QUEST_1,
QUEST_2,
ROW_NUMBER() OVER (PARTITION BY PERSONID ORDER BY CONTACTDATE ASC) AS ORD_FWD,
ROW_NUMBER() OVER (PARTITION BY PERSONID ORDER BY CONTACTDATE DESC) AS ORD_REV
FROM
TBL_APPOINTMENTS
)
SELECT
PERSONID,
CONTACTDATE,
QUEST_1,
QUEST_2
FROM
sorted
WHERE
ORD_FWD = 1 OR ORD_REV = 1
ORDER BY
PERSONID,
CONTACTDATE
```
**EDIT** *With both first and last aggregated in to one row...*
```
WITH
sorted AS
(
SELECT
PERSONID,
CONTACTDATE,
QUEST_1,
QUEST_2,
ROW_NUMBER() OVER (PARTITION BY PERSONID ORDER BY CONTACTDATE ASC) AS ORD_FWD,
ROW_NUMBER() OVER (PARTITION BY PERSONID ORDER BY CONTACTDATE DESC) AS ORD_REV
FROM
TBL_APPOINTMENTS
)
SELECT
PERSONID,
MAX(CASE WHEN ORD_FWD = 1 THEN CONTACTDATE END) AS FIRST_CONTACTDATE,
MAX(CASE WHEN ORD_FWD = 1 THEN QUEST_1 END) AS FIRST_QUEST_1,
MAX(CASE WHEN ORD_FWD = 1 THEN QUEST_2 END) AS FIRST_QUEST_2,
MAX(CASE WHEN ORD_REV = 1 THEN CONTACTDATE END) AS FINAL_CONTACTDATE,
MAX(CASE WHEN ORD_REV = 1 THEN QUEST_1 END) AS FINAL_QUEST_1,
MAX(CASE WHEN ORD_REV = 1 THEN QUEST_2 END) AS FINAL_QUEST_2
FROM
sorted
WHERE
ORD_FWD = 1 OR ORD_REV = 1
GROUP BY
PERSONID
ORDER BY
PERSONID
```
NOTE: It is possible that the first and last appointment are the same appointment *(if the person only had one appointment)*.
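The `ROW_NUMBER()` trick is runnable against SQLite as well (window functions need SQLite 3.25+, which recent Python builds bundle); this sketch uses illustrative data and only one score column:

```python
import sqlite3

# One ordinal counts forward in time, one backward; picking ordinal 1
# in each direction yields the first and the last appointment.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE appts (personid INTEGER, contactdate TEXT, quest_1 INTEGER);
INSERT INTO appts VALUES
  (1, '2015-01-01', 10), (1, '2015-01-10', 15), (1, '2015-01-21', 21);
""")

rows = conn.execute("""
    WITH sorted AS (
        SELECT personid, contactdate, quest_1,
               ROW_NUMBER() OVER (PARTITION BY personid
                                  ORDER BY contactdate ASC)  AS ord_fwd,
               ROW_NUMBER() OVER (PARTITION BY personid
                                  ORDER BY contactdate DESC) AS ord_rev
        FROM appts
    )
    SELECT personid, contactdate, quest_1
    FROM sorted
    WHERE ord_fwd = 1 OR ord_rev = 1
    ORDER BY contactdate
""").fetchall()
print(rows)  # first and last appointment only
```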
|
Selecting first and last scores from a list of appointments
|
[
"",
"sql",
"sql-server-2008",
""
] |
How can I make a left join that matches at most one row per row in each category? Here are category ids and product prices. Note that I do not want the repetitions in the 5th category that would occur if I used a plain LEFT JOIN.
[](https://i.stack.imgur.com/iHgF1.png)
(1) The most suitable join is when both category and price in each table match. This is the case for the 2nd category (note that the rows are in a different order in tables A and B).
(2) If only the category matches, then I want to show the field of any matching row (as I did for the 1st category, with a note that it is one of many rows).
(3) If neither category nor price match, I want to get NULL.
I used the following query but it is too slow for me.
```
with
A as (select A.id, A.price
      ,ROW_NUMBER() OVER (PARTITION BY id ORDER BY price) as Row_id_A
      ,ROW_NUMBER() OVER (PARTITION BY id, price ORDER BY price ASC) AS [Row_id_price_A]
      from TableA as A)
,
B as (select B.id, B.price, B.field
      ,ROW_NUMBER() OVER (PARTITION BY id ORDER BY price) as Row_id_B
      ,ROW_NUMBER() OVER (PARTITION BY id, price ORDER BY price ASC) AS [Row_id_price_B]
      from TableB as B)
select A.id, A.price, A.Row_id_A
,ResultField = case
     when A.Row_id_A = C.Row_id_B then C.field
     when A.[Row_id_price_A] = D.[Row_id_price_B] then D.field
     else N'One of many: ' + C.field
 end
from A
outer apply (select top 1 * from B
             where A.id = B.id
               and A.Row_id_A = B.Row_id_B
            ) as C
outer apply (select top 1 * from B
             where A.id = B.id
               and A.Row_id_A = B.Row_id_B
               and A.[Row_id_price_A] = B.[Row_id_price_B]
            ) as D
```
Update. I add sample data:
```
CREATE TABLE dbo.TableA(
id INTEGER NOT NULL
,price INTEGER NOT NULL
);
INSERT INTO TableA(id,price) VALUES (1,50);
INSERT INTO TableA(id,price) VALUES (2,20);
INSERT INTO TableA(id,price) VALUES (2,30);
INSERT INTO TableA(id,price) VALUES (2,50);
INSERT INTO TableA(id,price) VALUES (4,15);
INSERT INTO TableA(id,price) VALUES (4,5);
INSERT INTO TableA(id,price) VALUES (5,100);
INSERT INTO TableA(id,price) VALUES (5,100);
CREATE TABLE dbo.TableB(
id INTEGER NOT NULL
,price INTEGER NOT NULL
,field VARCHAR(2) NOT NULL
);
INSERT INTO TableB(id,price,field) VALUES (1,1,'A1');
INSERT INTO TableB(id,price,field) VALUES (2,30,'A2');
INSERT INTO TableB(id,price,field) VALUES (2,50,'A3');
INSERT INTO TableB(id,price,field) VALUES (2,20,'A4');
INSERT INTO TableB(id,price,field) VALUES (5,5,'A5');
INSERT INTO TableB(id,price,field) VALUES (5,100,'A6');
INSERT INTO TableB(id,price,field) VALUES (5,100,'A7');
INSERT INTO TableB(id,price,field) VALUES (6,1,'A8');
```
|
Sounds like using two left joins would work just fine:
```
select
...
coalesce (B1.Field, B2.Field) as Field,
...
left join TableB B1 on B1.id = TableA.id and B1.price = TableA.price
left join TableB B2 on B2.id = TableA.id
```
This would usually be tricky, because it could cause you trouble with row duplication, but it shouldn't hurt in your case.
If you need the `One of many` text as well, just add it to the coalesce, e.g. `coalesce(B1.Field, 'One of many: ' + B2.Field)` - make sure you have the proper types, though.
**EDIT:**
Oh, you do care about duplication. In that case, a subquery might be a better choice:
```
select
...
coalesce(B1.Field, (select top 1 Field from TableB where id = TableA.id)) as Field
...
```
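The subquery fallback can be checked end to end with a small dataset (SQLite via Python; the `||` operator replaces T-SQL's `+` for string concatenation): prefer the exact `(id, price)` match, otherwise fall back to any row for the id, labelled "One of many", otherwise NULL.

```python
import sqlite3

# COALESCE picks the exact match if the LEFT JOIN found one,
# else the correlated subquery supplies a labelled fallback row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER, price INTEGER);
CREATE TABLE b (id INTEGER, price INTEGER, field TEXT);
INSERT INTO a VALUES (1, 50), (2, 30), (4, 15);
INSERT INTO b VALUES (1, 1, 'A1'), (2, 30, 'A2');
""")

rows = conn.execute("""
    SELECT a.id, a.price,
           COALESCE(b1.field,
                    (SELECT 'One of many: ' || field
                     FROM b WHERE b.id = a.id LIMIT 1))
    FROM a
    LEFT JOIN b b1 ON b1.id = a.id AND b1.price = a.price
    ORDER BY a.id
""").fetchall()
print(rows)
```

Category 1 has an id match but no price match (fallback label), category 2 matches exactly, and category 4 has no match at all (NULL).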
|
You can do two `LEFT JOIN`s with Table2 (one for the exact match and one for the one-to-many match) and use `ROW_NUMBER()` to manage the association of rows for the exact match.
Something like this. [SQL Fiddle](http://sqlfiddle.com/#!3/79ac7/1)
**Sample Data**
```
CREATE TABLE Table1
(
ID INT NOT NULL,
Price INT NOT NULL
);
CREATE TABLE Table2
(
ID INT NOT NULL,
Price INT NOT NULL,
Field VARCHAR(20) NOT NULL
);
INSERT INTO Table1 VALUES(1,50),(2,20),(2,30),(2,50),(4,15),(4,5),(5,100),(5,100);
INSERT INTO Table2 VALUES
(1,1,'A1'),(2,30,'A2'),(2,50,'A3'),(2,20,'A4'),
(5,5,'A5'),(5,100,'A6'),(5,100,'A7'),(6,1,'A8');
```
**Query**
```
;WITH CT1 AS
(
SELECT *,rn = ROW_NUMBER()OVER(PARTITION BY ID,Price ORDER BY Price)
FROM Table1
), CT2 AS
(
SELECT *,rn = ROW_NUMBER()OVER(PARTITION BY ID,Price ORDER BY Field),
cc = ROW_NUMBER()OVER(PARTITION BY ID ORDER BY Price ASC)
FROM Table2
)
SELECT T1.*,ISNULL(T2.Field,'One of Many: ' + T3.Field) as Field
FROM CT1 T1
LEFT JOIN CT2 T2
ON T1.ID = T2.ID
AND (T1.Price = T2.Price AND T1.rn = T2.rn)
LEFT JOIN CT2 T3
ON T1.ID = T3.ID
AND T3.cc = 1
ORDER BY T1.Id,T1.Price
```
**Output**
```
| ID | Price | rn | Field |
|----|-------|----|-----------------|
| 1 | 50 | 1 | One of Many: A1 |
| 2 | 20 | 1 | A4 |
| 2 | 30 | 1 | A2 |
| 2 | 50 | 1 | A3 |
| 4 | 5 | 1 | (null) |
| 4 | 15 | 1 | (null) |
| 5 | 100 | 1 | A6 |
| 5 | 100 | 2 | A7 |
```
|
One to one left join
|
[
"",
"sql",
"sql-server",
"join",
"sql-server-2008-r2",
""
] |
I tried to run `yii migrate`, but it showed the following error:
```
create table news-cate ...Exception: SQLSTATE[42000]: Syntax error or access violation: 1075 Incorrect table definition; there can be only one auto column and it must be defined as a key
The SQL being executed was: CREATE TABLE `news-cate` (
`news-id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
`cate-id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY
```
Here is my code:
```
class m150821_083020_create_newscate_table extends Migration
{
public function safeUp()
{
$this->createTable('news-cate', [
'news-id' => $this->primaryKey(),
'cate-id' => $this->primaryKey(),
]);
$this->addForeignKey("fk_news_cate_nid", "news-cate", "news-id", "news", "id", "RESTRICT", "CASCADE");
$this->addForeignKey("fk_news_cate_cid", "news-cate", "cate-id", "category", "id", "RESTRICT", "CASCADE");
}
public function safeDown()
{
echo "m150821_083020_create_newscate_table cannot be reverted.\n";
$this->dropTable("news-cate");
return false;
}
}
```
So how to create composite primary key by using Migration in Yii2?
|
**UPDATE**
As of yii version 2.06 you can use new schema builder:
```
<?php
use yii\db\Migration;
...
$this->createTable('news-cate', [
'news-id' => $this->integer()->notNull(),
'cate-id' => $this->integer()->notNull(),
]);
$this->addPrimaryKey('news-cate_pk', 'news-cate', ['news-id', 'cate-id']);
...
?>
```
**ORIGINAL ANSWER**
Don't add primary keys in table creation, only declare types:
```
use yii\db\Schema;
...
$this->createTable('news-cate', [
'news-id' => Schema::TYPE_INTEGER . ' NOT NULL',
'cate-id' => Schema::TYPE_INTEGER . ' NOT NULL',
]);
```
After that you can add the composite primary key like this:
```
$this->addPrimaryKey('news-cate_pk', 'news-cate', ['news-id', 'cate-id']);
```
For multiple columns, an array is allowed in the [addPrimaryKey()](http://www.yiiframework.com/doc-2.0/yii-db-migration.html#addPrimaryKey%28%29-detail) method.
This is better than writing raw SQL.
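What the migration produces can be sketched directly in SQL (SQLite via Python here; the Yii column names are kept, with underscores since hyphens would need quoting): a two-column composite primary key rejects duplicate `(news_id, cate_id)` pairs while allowing each value to repeat in different combinations.

```python
import sqlite3

# Composite primary key: uniqueness is enforced on the pair, not
# on either column alone.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE news_cate (
        news_id INTEGER NOT NULL,
        cate_id INTEGER NOT NULL,
        PRIMARY KEY (news_id, cate_id)
    )
""")
conn.execute("INSERT INTO news_cate VALUES (1, 1)")
conn.execute("INSERT INTO news_cate VALUES (1, 2)")  # same news_id, OK

try:
    conn.execute("INSERT INTO news_cate VALUES (1, 1)")  # duplicate pair
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # → True
```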
|
Try this way:
```
public function safeUp()
{
$this->createTable('news-cate', [
'news-id' =>'int NOT NULL',
'cate-id' =>'int NOT NULL',
'PRIMARY KEY (news-id,cate-id)'
]);
$this->addForeignKey("fk_news_cate_nid", "news-cate", "news-id", "news", "id", "RESTRICT", "CASCADE");
$this->addForeignKey("fk_news_cate_cid", "news-cate", "cate-id", "category", "id", "RESTRICT", "CASCADE");
}
```
|
How to create composite primary key by using Migration in Yii2?
|
[
"",
"sql",
"yii2",
"database-migration",
""
] |
I am consistently getting an error telling me that there are too many values being inserted; however, this is clearly not the case. Can anybody help me? Here is my code. Currently the error message is:

> INSERT INTO a2\_account VALUES
> \*
> ERROR at line 1:
> ORA-00913: too many values
```
DROP TABLE a2_loanr;
DROP TABLE a2_accr;
DROP TABLE a2_customer;
DROP TABLE a2_account;
DROP TABLE a2_loan;
DROP TABLE a2_bankbranch;
DROP TABLE a2_bank;
CREATE TABLE a2_bank (
routingcode VARCHAR(200) PRIMARY KEY,
name VARCHAR(200) NOT NULL,
address VARCHAR(200) NOT NULL
);
INSERT INTO a2_bank VALUES
( '123456',' Stan walker', '3 gladstone rd');
INSERT INTO a2_bank VALUES
( '123556',' Sam ben', '5 gladstone rd');
INSERT INTO a2_bank VALUES
( '1256',' Stacy talker', '4 gladstone rd');
CREATE TABLE a2_bankbranch (
branch_num VARCHAR(200) PRIMARY KEY,
name VARCHAR(200) NOT NULL,
address VARCHAR(200) NOT NULL
);
INSERT INTO a2_bankbranch VALUES
( 'Ben Heir', '5', '3 gladstone rd');
INSERT INTO a2_bankbranch VALUES
( 'Kane Wen', '4', '28 stevee rd');
CREATE TABLE a2_loan (
loan_num VARCHAR(200) PRIMARY KEY,
type VARCHAR(200) NOT NULL,
amount VARCHAR(200) NOT NULL,
contract_date DATE NOT NULL
);
INSERT INTO a2_loan VALUES
( '323', 'Mortgage', '$2000000', TO_DATE('11-03-1994', 'DD-MM-YYYY') );
INSERT INTO a2_loan VALUES
( '33', 'Car', '$2000', TO_DATE('12-08-1994', 'DD-MM-YYYY') );
INSERT INTO a2_loan VALUES
( '3243', 'Pesonal', '$875', TO_DATE('14-06-1994', 'DD-MM-YYYY') );
INSERT INTO a2_loan VALUES
( '6', 'Mortgage', '$400500', TO_DATE('11-06-1994', 'DD-MM-YYYY') );
CREATE TABLE a2_account (
acc_num VARCHAR(20) PRIMARY KEY,
type VARCHAR(20) NOT NULL,
balance VARCHAR(10) NOT NULL
);
INSERT INTO a2_account VALUES
( '2539267332', 'Savings', '20');
INSERT INTO a2_account VALUES
( '8237893378', 'Cash', '300');
INSERT INTO a2_account VALUES
( '2378723936', 'Cheque', '75');
CREATE TABLE a2_customer (
ird_num CHAR(8) PRIMARY KEY,
name VARCHAR(200) NOT NULL,
address VARCHAR(200) NOT NULL,
phone VARCHAR(20)
);
INSERT INTO a2_customer VALUES
( '25362672', 'Stan Yel', '5 Wanna way', '02010201');
INSERT INTO a2_account VALUES
( '83783783', 'Cam Birch', '34 Trada st', '02302020202');
INSERT INTO a2_account VALUES
( '23723367', 'Jeff King', '5 Queens st', '38982383');
INSERT INTO a2_account VALUES
( '54637822', 'John Smith', '24 Queen st', '38922383');
CREATE TABLE a2_accr (
ird_num CHAR(8) NOT NULL UNIQUE,
account_num CHAR(10) NOT NULL UNIQUE
);
INSERT INTO a2_accr VALUES
( '25362672', '2537626722');
INSERT INTO a2_accr VALUES
( '83783783', '8237832783');
CREATE TABLE a2_loanr (
ird_num CHAR(8) NOT NULL UNIQUE,
loan_num CHAR(10) NOT NULL UNIQUE
);
INSERT INTO a2_loanr VALUES
( '54637822', '323');
INSERT INTO a2_loanr VALUES
( '23723367', '33');
COMMIT;
```
|
The problem is in the `INSERT`s after the creation of table `a2_customer`; it looks like you copied and pasted some `INSERT`s and did not change the table name.
```
CREATE TABLE a2_customer (
ird_num CHAR(8) PRIMARY KEY,
name VARCHAR(200) NOT NULL,
address VARCHAR(200) NOT NULL,
phone VARCHAR(20)
);
INSERT INTO a2_customer VALUES
( '25362672', 'Stan Yel', '5 Wanna way', '02010201');
INSERT INTO a2_account VALUES
( '83783783', 'Cam Birch', '34 Trada st', '02302020202');
INSERT INTO a2_account VALUES
( '23723367', 'Jeff King', '5 Queens st', '38982383');
INSERT INTO a2_account VALUES
( '54637822', 'John Smith', '24 Queen st', '38922383');
```
should be
```
CREATE TABLE a2_customer (
ird_num CHAR(8) PRIMARY KEY,
name VARCHAR(200) NOT NULL,
address VARCHAR(200) NOT NULL,
phone VARCHAR(20)
);
INSERT INTO a2_customer VALUES
( '25362672', 'Stan Yel', '5 Wanna way', '02010201');
INSERT INTO a2_customer VALUES
( '83783783', 'Cam Birch', '34 Trada st', '02302020202');
INSERT INTO a2_customer VALUES
( '23723367', 'Jeff King', '5 Queens st', '38982383');
INSERT INTO a2_customer VALUES
( '54637822', 'John Smith', '24 Queen st', '38922383');
```
|
Well, these three statements supply 4 values while the underlying table only has 3 columns:
```
INSERT INTO a2_account VALUES
( '83783783', 'Cam Birch', '34 Trada st', '02302020202');
INSERT INTO a2_account VALUES
( '23723367', 'Jeff King', '5 Queens st', '38982383');
INSERT INTO a2_account VALUES
( '54637822', 'John Smith', '24 Queen st', '38922383');
```
Adjust this and all will be fine.
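The same failure can be reproduced in miniature (SQLite via Python; Oracle reports it as ORA-00913, SQLite as "table ... has N columns but M values were supplied"): an INSERT that supplies more values than the table has columns is rejected before any row is written.

```python
import sqlite3

# Three columns in the table, four values in the INSERT -> error.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a2_account (acc_num TEXT, type TEXT, balance TEXT)")

try:
    conn.execute(
        "INSERT INTO a2_account VALUES "
        "('83783783', 'Cam Birch', '34 Trada st', '02302020202')"
    )
    too_many_values = False
except sqlite3.OperationalError:
    too_many_values = True
print(too_many_values)  # → True
```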
|
Error when creating table in Oracle Sql
|
[
"",
"sql",
"oracle",
""
] |