Prompt
stringlengths 10
31k
| Chosen
stringlengths 3
29.4k
| Rejected
stringlengths 3
51.1k
| Title
stringlengths 9
150
| Tags
listlengths 3
7
|
|---|---|---|---|---|
In ms sql database table files\_master I have a column named 'AstNum' `varchar(100)` it contain data like:`1/1980
2/1980
11/1980`And so on when I sort the column:
```
SELECT AstNum FROM Files_master ORDER BY AstNum ASC
```
It shows me the records `1/1980,11/1980` in the column. `/1980` is the year before slash is increment number please help me how to sort the records I want in result`1/1980
2/1980
3/1980`
|
Try this
```
Select AstNum from Files_master Order by convert(datetime,'1/'+AstNum,101) ASC
```
EDIT : Based on the comments, here is another solution
```
Select AstNum from Files_master
Order by
parsename(replace(AstNum,'/','.'),2)*1 ASC,parsename(replace(AstNum,'/','.'),1)*1 ASC
```
|
This will easily not work that way, and probably also violates [standard database rules](https://en.wikipedia.org/wiki/Database_normalization).
Your way to go would be to have two columns, both of type `int`, with one of them being the index and the other the year. Say, you have columns `id` and `year`, your request would then be
```
SELECT `id`, `year` FROM Files_master ORDER BY `id` ASC, `year` ASC
```
Note that the order in which you list the columns in your `ORDER BY` statement depends on primary and secondary order column, so maybe you first want to order by year.
If you **really, really, have to** use this format, and only then, you could probably find a way to use [`SUBSTRING_INDEX`](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_substring-index) to split your string, and then have something like (untested):
```
SELECT SUBSTRING_INDEX(`AstNum`, '/', 1) AS `id`,
SUBSTRING_INDEX(`AstNum`, '/', -1) AS `year`
ORDER BY `id` ASC,`year` ASC
```
Note that this will probably be quite slow to evaluate.
---
Edit: In reply to the comment, since it turns out SQL Server is being used instead of MySQL (what I assumed at the time of writing) the command has to be slightly different, more complex (still untested, offsets might be.. off):
```
SELECT SUBSTRING(AstNum, 1, LEN(AstNum)-5) AS id,
SUBSTRING(AstNum, LEN(AstNum)-3, 4) AS year
ORDER BY id ASC, year ASC
```
|
Sort by varchar column
|
[
"",
"sql",
"sql-server",
"sorting",
"sql-server-2014",
"varchar",
""
] |
I have the following table
```
Id, Class1, Class2, Class3
1 1 2 3
2 2 3 3
3 1 2 3
```
When exact duplicate rows (Class1,Class2,Class3) appear apart from the primary key I want to take the first record found and ignore further records with that same fields.
So
I want the output to be
```
1 1 2 3
2 2 3 3
```
I can't use distinct as I want to return the primary key.
I am using Sql Server 2012.
What SQL can give me the desired output?
I have other fields I'm interested in to get from the output that varies but the 3 criteria fields is Class1,Class2,Class3 which I don't want to duplicate in my output. I am not looking to eliminate duplicates as I just want to take the first record found for the duplicate and ignore the remaining.
Take note: This is a implied example but my real table has hundred of thousands of rows so performance is important.
|
you can use the following:
```
select * from t1 where ID in(select min(Id) FROM t1 group by Class1, Class2, Class3)
```
Example Demo:
<http://sqlfiddle.com/#!3/88fab>
|
```
with cte
as
(select *,
row_number() over (partition by class1,class2,class3 order by id ) as rn from #temp
)
select * from cte where rn=1
```
|
How to in SQL return first found primary key for criteria that has duplicates
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table representing a system of folders and sub-folders with an ordinal `m_order` column.
Sometimes sub-folders are sorted alphanumerically, others are sorted by date or by importance.
I recently had to delete some sub-folders of a particular parent folder and add a few new ones. I also had to switch the ordering scheme to alphanumeric. This needed to be reflected in the `m_order` column.
Here's an example of the table:
```
+-----+-----------+-----------+------------+
| ID | parent | title | m_order |
+-----+-----------+-----------+------------+
| 100 | 1 | docs | 3 |
| 101 | 1 | reports | 2 |
| 102 | 1 | travel | 1 |
| 103 | 1 | weekly | 4 |
| 104 | 1 | briefings | 5 |
| ... | ... | ... | ... |
+-----+-----------+-----------+------------+
```
And here is what I want:
```
+-----+-----------+-----------+------------+
| ID | parent | title | m_order |
+-----+-----------+-----------+------------+
| 100 | 1 | docs | 3 |
| 101 | 1 | reports | 4 |
| 102 | 1 | travel | 5 |
| 200 | 1 | contacts | 2 |
| 201 | 1 | admin | 1 |
| ... | ... | ... | ... |
+-----+-----------+-----------+------------+
```
|
I would do this with a simple `update`:
```
with toupdate as (
select m.*, row_number() over (partition by parent order by title) as seqnum
from menu m
)
update toupdate
set m_order = toupdate.seqnum;
```
This restarts the ordering for each parent. If you have a particular parent in mind, use a `WHERE` clause:
```
where parentid = @parentid and m_order <> toupdate.seqnum
```
|
After deleting the old folders and inserting the new records, I accomplished the reordering by using `MERGE INTO` and `ROW_NUMBER()`:
```
DECLARE @parentID INT
...
MERGE INTO menu
USING (
SELECT ROW_NUMBER() OVER (ORDER BY title) AS rowNumber, ID
FROM menu
WHERE parent = @parentID
) AS reordered
ON menu.ID = reordered.ID
WHEN MATCHED THEN
UPDATE
SET menu.m_order = reordered.rowNumber
```
|
Update an ordinal column based on the alphabetic ordering of another column
|
[
"",
"sql",
""
] |
i'm a newbie in SQL. I have 2 tables
Table 1 (TBMember):
```
MemberCode | Date
001 | Jan 21
002 | Jan 21
003 | Jan 21
004 | Jan 21
```
Table 2 (TBDeposit):
```
Date | MemberCode | Deposit
Jan 21 | 001 | $100
Jan 21 | 001 | $200
Jan 21 | 002 | $300
Jan 21 | 002 | $400
Jan 21 | 003 | $500
```
First, i want to find how many member that register on that day. Select Count(membercode) from TBMember where date = 'Jan 21'. This return 4. This one is ok for me.
Second, i want to find how many member that register and deposit on the same day. I want this return 3, because only 3 member that register and deposit at that day. How can i do this in SQL Server?
Thanks
|
@Darren: Ankit Bajpai answer is right. Please try with sample data that you provided below.
```
--Data Setup
Declare @TBMember table(MemberCode varchar(3), MDate varchar(20))
insert into @TBMember
values
('001','Jan 21'),
('002','Jan 21'),
('003','Jan 21'),
('004','Jan 21')
--Table 2 (TBDeposit):
Declare @TBDeposit table(MDate varchar(20), MemberCode varchar(3),Deposit varchar(10))
insert into @TBDeposit
values
('Jan 21','001','$100'),
('Jan 21','001','$200'),
('Jan 21','002','$300'),
('Jan 21','002','$400'),
('Jan 21','003','$500')
SELECT *
FROM @TBMember
SELECT *
FROM @TBDeposit
--First Question
SELECT Count(membercode) membercount
FROM @TBMember
WHERE mdate = 'Jan 21'
--Second Question
SELECT Count(DISTINCT td.membercode) membercount
FROM @TBMember tm
INNER JOIN @TBDeposit td
ON tm.MemberCode = td.MemberCode
AND tm.mdate = td.MDate
WHERE tm.MDate = 'Jan 21'
```
|
try this:
```
select count(distinct dp.MemberCode)
from TBDeposit dp
inner join TBMember mmb
on dp.MemberCode = mmb. MemberCode and dp.Date = mmb.Date
```
if you want to limit the date to some specific date you just go with where Date = 'date you want to known'
|
How To Find A Member That Register And Deposit At The Same Day
|
[
"",
"sql",
"sql-server",
"database",
""
] |
Here is what I tried:
```
SELECT SUM(PQ.QuotaValue)
FROM PackageQuotas AS PQ
JOIN (
SELECT DISTINCT PackageID,
ParentPackageID
FROM Packages
WHERE ParentPackageID = @ParentPackageID
) PA ON PQ.PackageID = PA.PackageID
WHERE PQ.QuotaID = @QuotaID
```
The common column is the ParentPackageID of table Packages with the PackageId from PackageQuotas.
The problem is how to avoid adding negative numbers of column `PackageQuotas`
Can this query become simpler ?
|
```
CREATE /* UNIQUE */ INDEX ix ON dbo.PackageQuotas (QuotaID, PackageID)
INCLUDE (QuotaValue)
WHERE QuotaValue > 0
SELECT SUM(q.QuotaValue)
FROM dbo.PackageQuotas q /* WITH(INDEX(ix)) */
WHERE q.QuotaID = @QuotaID
AND q.QuotaValue > 0
AND EXISTS(
SELECT 1
FROM dbo.Packages p
WHERE p.ParentPackageID = @ParentPackageID
AND q.PackageID = p.PackageID
)
```
My post about `0` and `NULL` inside `SUM`, `AVG` - <http://blog.devart.com/what-is-faster-inside-stream-aggregate-hash-match.html>
|
Remove the null and Negative values by adding condition in Where Clause
```
SELECT SUM(PQ.QuotaValue)
FROM PackageQuotas AS PQ
JOIN (
SELECT DISTINCT PackageID,
ParentPackageID
FROM Packages
WHERE ParentPackageID = @ParentPackageID
) PA ON PQ.PackageID = PA.PackageID
WHERE PQ.QuotaID = @QuotaID AND PQ.QuotaValue IS NOT NULL AND PQ.QuotaValue > 0
```
|
How to sum the values of one colum except negative numbers or null in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to pull in data from several tables, one of which has a one-to-many relationship. My SQL looks like this:
```
SELECT
VU.*,
UI.UserImg,
COALESCE(CI.Interests, 0) as NumInterests
FROM [vw_tmpUsers] VU
LEFT JOIN (
SELECT
[tmpUserPhotos].UserID,
CASE WHEN MAX([tmpUserPhotos].uFileName) = NULL
THEN 'dflt.jpg'
ELSE Max([tmpUserPhotos].uFileName)
END as UserImg
FROM [tmpUserPhotos]
GROUP BY UserID) UI
ON VU.UserID = UI.UserID
```
I've excluded a bunch of LEFT JOINS which follows this, because they're all working properly.
My trouble is, I only want to pull in *one* image name from the table which has a one-to-many relationship. In order to do this, I've used the MAX function.
My desired result would look like this:
```
UserID UserName UserState UserZip UserIncome UserHeight UserImg
1 Jimbo NY 10012 2 64 1[Blue hills.jpg
2 Jack MA 06902 3 66 dflt.jpg
3 Lisa CT 06820 4 64 dflt.jpg
4 Mary CT 06791 6 67 4[Natalie.jpg
5 Wanda CT 06791 6 67 dflt.jpg
```
but instead it looks like this:
```
UserID UserName UserState UserZip UserIncome UserHeight UserImg
1 Jimbo NY 10012 2 64 1[Blue hills.jpg
2 Jack MA 06902 3 66 NULL
3 Lisa CT 06820 4 64 NULL
4 Mary CT 06791 6 67 4[Natalie.jpg
5 Wanda CT 06791 6 67 NULL
```
It's got to be something wonky in my CASE statement, but I'm not very good with them. I've tried using IS NULL, = NULL, ='NULL', all to no avail.
Can anyone spot what I'm doing wrong? One little tidbit that might shed light; MAX(tmpUserPhotos.uFileName) will only return NULL if there is no matching record in tmpUserPhotos.
|
You need to check for `NULL` in the outer query:
```
SELECT VU.*,
COALESCE(UI.UserImg, 'dflt.jpg') as UserImg,
COALESCE(CI.Interests, 0) as NumInterests
FROM [vw_tmpUsers] VU LEFT JOIN
(SELECT [tmpUserPhotos].UserID, MAX([tmpUserPhotos].uFileName) as UserImg
FROM [tmpUserPhotos]
GROUP BY UserID
) UI
ON VU.UserID = UI.UserID ;
```
You are getting `NULL` values because the `LEFT JOIN` has no matches. No matches means that the `SELECT` is not being executed on any data.
I'll also note that if you have a preference for the priority of the images, then `OUTER APPLY` is easier than `LEFT JOIN`. However, that would be another question.
|
Instead of the case statement, try the 'isnull' function...
```
isnull(MAX([tmpUserPhotos].uFileName),'dflt.jpg') as UserImg
```
|
Trouble with a JOIN
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I am trying to design a database schema for the following entities: Stores, Products, Discounts and Buyers. The goal is to provide a discount campaign for a couple of products. However, on my first iteration over the database schema I can see there is a circular dependency as illustrated bellow:
[](https://i.stack.imgur.com/GT1lK.png)
The question is: Is having such dependencies a bad SQL Design ? Also, how can I include the entity of Buyer that will represent reference the product and the discount used ?Thank you.
|
I don't see such a "circular dependency" in your model. Look at it as an oriented (or directed) graph. It's impossible to start from one entity and return to it going neither clockwise nor counter-clockwise.
|
I don't have ERD software, but I would do something like this:
[sample store erd](https://i.stack.imgur.com/gl3tL.png)
Or:
[](https://i.stack.imgur.com/Cyurx.png)
|
SQL Circular Dependency
|
[
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I have a stored procedure that returns columns . This stored procedure is mainly being used by other query for functional reasons
So my stored procedure:
```
IF OBJECT_ID ( 'dbo.ProcDim', 'P' ) IS NOT NULL
DROP PROCEDURE dbo.ProcDim;
GO
CREATE PROCEDURE dbo.ProcDim
@Dim1 nvarchar(50),
@Dim2 nvarchar(50)
AS
SET NOCOUNT ON;
IF OBJECT_ID('tempdb..#TMP1') IS NOT NULL
DROP TABLE #TMP1
SELECT
INTO #TMP1
FROM DBase.dbo.Table1 AS Parameter
WHERE Parameter.Dim1 = @Dim1
AND Parameter.Dim2 = @Dim2;
GO
EXECUTE dbo.ProcDim N'value1', N'value2';
SELECT * from #TMP1
```
So when i excute my procedure without #TMP1 work fine but i want to insert the result into temp table
|
You can't use temporary table in such a manner.
By this code: `SELECT INTO #TMP1` you're implicity creating temporary table, and it is accessible in the scope of your stored procedure - but not outside of this scope.
If you need this temporary table to be accessible outside of stored procedure, you have to remove `INTO #TMP1` from stored procedure and explicitly create it outside:
```
create table #tmp1 (columns_definitions_here)
insert into #tmp1
exec dbo.ProcDim N'value1', N'value2';
select * from #TMP1
```
Notice you have to explicitly create temporary table in this case, supplying all column names and their data types.
Alternatively you can change your stored procedure to be user-defined table function, and in this case you will be able to implicitly create and populate your temporary table:
```
create function dbo.FuncDim
(
@Dim1 nvarchar(50),
@Dim2 nvarchar(50)
)
returns @result TABLE (columns_definition_here)
as
begin
... your code
return
end
select *
into #TMP1
from dbo.FuncDim(@Dim1, @Dim2)
```
|
Temp tables are scoped (in this case) to the stored procedure in which they're created. That is, when your stored procedure completes, the temp table is dropped.
If you need the contents of the temp table, select from it *before* the end of the procedure - IOW, `select * from #TMP1` should be the output of the procedure, not a separate statement executed outside of it.
|
Can't store procedure into dynamic temp table
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have an Syntax error in my SQL create table command but cant understand why?
```
CREATE TABLE E0 (
div VARCHAR(50),
date VARCHAR(50), hometeam VARCHAR(50), awayteam VARCHAR(50),
fthg VARCHAR(50), ftag VARCHAR(50), ftr VARCHAR(50),
hthg VARCHAR(50), htag VARCHAR(50), htr VARCHAR(50),
referee VARCHAR(50), hs VARCHAR(50),
as VARCHAR(50),
hst VARCHAR(50), ast VARCHAR(50), hf VARCHAR(50),
af VARCHAR(50), hc VARCHAR(50), ac VARCHAR(50),
hy VARCHAR(50), ay VARCHAR(50), hr VARCHAR(50),
ar VARCHAR(50), b365h VARCHAR(50), b365d VARCHAR(50),
b365a VARCHAR(50), bwh VARCHAR(50), bwd VARCHAR(50),
bwa VARCHAR(50), iwh VARCHAR(50), iwd VARCHAR(50),
iwa VARCHAR(50), lbh VARCHAR(50), lbd VARCHAR(50),
lba VARCHAR(50), psh VARCHAR(50), psd VARCHAR(50),
psa VARCHAR(50), whh VARCHAR(50), whd VARCHAR(50),
wha VARCHAR(50), vch VARCHAR(50), vcd VARCHAR(50),
vca VARCHAR(50), bb1x2 VARCHAR(50), bbmxh VARCHAR(50),
bbavh VARCHAR(50), bbmxd VARCHAR(50), bbavd VARCHAR(50), bbmxa VARCHAR(50), bbava VARCHAR(50), bbou VARCHAR(50), bbmx_2_5 VARCHAR(50), bbav_2_5 VARCHAR(50), bbmx_2_5 VARCHAR(50), bbav_2_5 VARCHAR(50), bbah VARCHAR(50), bbahh VARCHAR(50), bbmxahh VARCHAR(50), bbavahh VARCHAR(50), bbmxaha VARCHAR(50), bbavaha VARCHAR(50))
```
|
`as`, `div` and `date` are sql reserved words (see [manual](https://dev.mysql.com/doc/refman/5.7/en/keywords.html)), you cannot have such names without escaping with backticks.
|
One of the columns is named AS which is a reserved word. Either use different name or wrap around `` like `as` varchar(50)
|
Syntax error in SQL create table command
|
[
"",
"mysql",
"sql",
""
] |
I'm having trouble understanding other questions I see, as they are a little bit different.
I'm getting a XML as response from a webservice vi UTL\_HTTP. The XML has repeating child nodes and I want to extract only 1 specific value.
The response XML:
```
<Customer>
<Loyalty>
<Client>
<Identifications>
<Identification>
<Form>Form1</Form>
<value>1234</value>
</Identification>
<Identification>
<Form>Form2</Form>
<value>4442</value>
</Identification>
<Identification>
<Form>Form3</Form>
<value>9995</value>
</Identification>
</Identifications>
</Client>
</Loyalty>
</Customer>
```
I need to extract the the node `<value>` only where the node `<Form>` = "Form3".
So, within my code, I receive the response from another function
```
v_ds_xml_response XMLTYPE;
-- Here would lie the rest of the code (omitted) preparing the XML and next calling the function with it:
V_DS_XML_RESPONSE := FUNCTION_CALL_WEBSERVICE(
P_URL => V_DS_URL, --endpoint
P_DS_XML => V_DS_XML, --the request XML
P_ERROR => P_ERROR);
```
With that, I created a LOOP to store the values. I've tried using WHERE and even creating a type (V\_IDENTIFICATION below is the type), but It didn't return anything (null).
```
for r IN (
SELECT
ExtractValue(Value(p),'/Customer/Loyalty/Client/Identifications/Identification/*[local-name()="Form"]/text()') as form,
ExtractValue(Value(p),'/Customer/Loyalty/Client/Identifications/Identification/*[local-name()="value"]/text()') as value
FROM TABLE(XMLSequence(Extract(V_DS_XML_RESPONSE,'//*[local-name()="Customer"]'))) p
LOOP
V_IDENTIFICATION.FORM := r.form;
V_IDENTIFICATION.VALUE := r.value;
END LOOP;
SELECT
V_IDENTIFICATION.VALUE
INTO
P_LOYALTY_VALUE
FROM dual
WHERE V_IDENTIFICATION.TIPO = 'Form3';
```
Note, P\_LOYALTY\_VALUE is an OUT parameter from my Procedure
|
Aaand I managed to find the solution, which is quite simple, just added [text()="Form3"]/.../" to predicate the Xpath as in
```
SELECT
ExtractValue(Value(p),'/Customer/Loyalty/Client/Identifications/Identification/*[local-name()="Form"][text()="Form3"]/text()') as form,
ExtractValue(Value(p),'/Customer/Loyalty/Client/Identifications/Identification/Form[text()="Form3"]/.../*[local-name()="value"]/text()') as value
```
Also extracted the values just sending them directly into the procedure's OUT parameter:
```
P_FORM := r.form;
P_LOYALTY_VALUE := r.value;
```
|
with this sql you should get the desired value:
```
with data as
(select '<Customer>
<Loyalty>
<Client>
<Identifications>
<Identification>
<Form>Form1</Form>
<value>1234</value>
</Identification>
<Identification>
<Form>Form2</Form>
<value>4442</value>
</Identification>
<Identification>
<Form>Form3</Form>
<value>9995</value>
</Identification>
</Identifications>
</Client>
</Loyalty>
</Customer>' as xmlval
from dual b)
(SELECT t.val
FROM data d,
xmltable('/Customer/Loyalty/Client/Identifications/Identification'
PASSING xmltype(d.xmlval) COLUMNS
form VARCHAR2(254) PATH './Form',
val VARCHAR2(254) PATH './value') t
where t.form = 'Form3');
```
|
XML Oracle: Extract specific attribute from multiple repeating child nodes
|
[
"",
"sql",
"xml",
"oracle",
"plsql",
""
] |
I have written an SQL script which runs fine when executed directly in SQL Management Studio. However, when entering it into Power BI as a source, it reports that it has an incorrect syntax.
This is the query:
```
EXEC "dbo"."p_get_bank_balance" '2'
```
However, the syntax is apparently incorrect? See Picture:
[](https://i.stack.imgur.com/Xzua1.png)
Any help is much appreciated.
EDIT \*\*\*
When the double quotes are removed (as per Tab Alleman's suggestion):
[](https://i.stack.imgur.com/QTZvL.png)
|
I found time ago the same problem online on power bi site:
<http://community.powerbi.com/t5/Desktop/Use-SQL-Store-Procedure-in-Power-BI/td-p/20269>
> You must be using DirectQuery mode, in which you cannot connect to data with stored procedures. Try again using Import mode or just use a SELECT statement directly.
|
In DirectQuery mode, PowerBI automatically wraps your query like so: `select * from ( [your query] )`, and if you attempt this in SSMS with a stored procedure i.e.
```
select * from (exec dbo.getData)
```
You get the error you see above.
The solution is you have to place your stored procedure call in an OPENQUERY call to your local server i.e.
```
select * from OPENQUERY(localServer, 'DatabaseName.dbo.getData')
```
Prerequisites would be: enabling local server access in OPENQUERY with
```
exec sp_serveroption @server = 'YourServerName'
,@optname = 'DATA ACCESS'
,@optvalue = 'TRUE'
```
And then making sure you use three-part notation in the OPENQUERY as all calls there default to the `master` database
|
SQL reporting invalid syntax when run in Power BI
|
[
"",
"sql",
"sql-server",
"powerbi",
""
] |
Basically, I've got the following table:
```
ID | Amount
AA | 10
AA | 20
BB | 30
BB | 40
CC | 10
CC | 50
DD | 20
DD | 60
EE | 30
EE | 70
```
I need to get unique entries in each column as in following example:
```
ID | Amount
AA | 10
BB | 30
CC | 50
DD | 60
EE | 70
```
So far following snippet gives almost what I wanted, but `first_value()` may return some value, which isn't unique in current column:
```
first_value(Amount) over (partition by ID)
```
`Distinct` also isn't helpful, as it returns unique rows, not its values
**EDIT:**
Selection order doesn't matter
|
This works for me, even with the problematic combinations mentioned by Dimitri. I don't know how fast that is for larger volumes though
```
with ids as (
select id, row_number() over (order by id) as rn
from data
group by id
), amounts as (
select amount, row_number() over (order by amount) as rn
from data
group by amount
)
select i.id, a.amount
from ids i
join amounts a on i.rn = a.rn;
```
SQLFiddle currently doesn't work for me, here is my test script:
```
create table data (id varchar(10), amount integer);
insert into data values ('AA',10);
insert into data values ('AA',20);
insert into data values ('BB',30);
insert into data values ('BB',40);
insert into data values ('CC',10);
insert into data values ('CC',50);
insert into data values ('DD',20);
insert into data values ('DD',60);
insert into data values ('EE',30);
insert into data values ('EE',70);
```
Output:
```
id | amount
---+-------
AA | 10
BB | 20
CC | 30
DD | 40
EE | 50
```
|
My solution implements recursive `with` and makes following: first - select minival values of `ID` and `amount`, then for every next level searches values of `ID` and `amount`, which are more than already choosed (this provides uniqueness), and at the end query selects 1 row for every value of recursion level. But this is not an ultimate solution, because it is possible to find a combination of source data, where query will not work (I suppose, that such solution is impossible, at least in SQL).
```
with r (id, amount, lvl) as (select min(id), min(amount), 1
from t
union all
select t.id, t.amount, r.lvl + 1
from t, r
where t.id > r.id and t.amount > r.amount)
select lvl, min(id), min(amount)
from r
group by lvl
order by lvl
```
[SQL Fiddle](http://sqlfiddle.com/#!4/69dbd3/1)
|
How can I select unique values from several columns in Oracle SQL?
|
[
"",
"sql",
"oracle",
""
] |
Following is a simple table with only two columns `patientid` (id of a patient) and `visitdate` (date when patient visited the clinic) in SQL Server.
Each row represents a patient visit. Created a table variable and inserted some dummy data for testing purpose below. Attempted to write a query that display the days since last (previous) visit against next to every visit. If there is no previous visit, query is displaying display null and sorting by partientid and visit date (desc).
Can this query be further optimized? Also, can we avoid self join and use any SQL Server built-in construct/support/function to simplify the query. Any help will be appreciated.
```
declare @patientvisits table
(
patientid int,
visitdate datetime
)
insert into @patientvisits
values (1, dateadd(day, -7, getdate())),
(1, dateadd(day, -20, getdate())),
(1, dateadd(day, -1, getdate())),
(1, dateadd(day, -4, getdate())),
(2, dateadd(day, -19, getdate())),
(2, dateadd(day, -8, getdate())),
(2, dateadd(day, -5, getdate())),
(3, dateadd(day, -40, getdate())),
(3, dateadd(day, -9, getdate())),
(3, dateadd(day, -3, getdate())),
(3, dateadd(day, -1, getdate())),
(3, dateadd(day, 0, getdate()))
SELECT *
FROM
(SELECT
VisitsA.patientid, VisitsA.visitdate "Visit Date",
CAST(DATEDIFF(DAY, VisitsB.visitdate, VisitsA.visitdate) AS varchar(10)) "Last Visit (days)"
FROM
(SELECT
ROW_NUMBER() OVER (PARTITION BY patientid ORDER BY visitdate DESC) rowid,
patientid, visitdate
FROM
@patientvisits) VisitsA
CROSS JOIN
(SELECT
ROW_NUMBER() OVER (PARTITION BY patientid ORDER BY visitdate DESC) rowid,
patientid, visitdate
FROM
@patientvisits) VisitsB
WHERE
VisitsA.patientid = VisitsB.patientid
AND VisitsA.rowid + 1 = VisitsB.rowid
UNION
SELECT
patientid, MIN(visitdate) visitdate, 0
FROM
(SELECT
ROW_NUMBER() OVER (PARTITION BY patientid ORDER BY visitdate DESC) rowid,
patientid, visitdate
FROM
@patientvisits) Visits
GROUP BY
patientid) Result
ORDER BY
patientid, "Visit Date" DESC
```
|
This query is simpler and perform better based on provided sample data
Per execution plan, it is 4 times faster.
```
;with cte as (
select patientid, visitdate, (select max(visitdate) from @patientvisits
where patientid = p.patientid and visitdate < p.visitdate) prevVisitDate
from @patientvisits p
)
select patientid, visitdate, DATEDIFF(day, prevVisitDate, visitdate) as 'Last Visit (days)'
from cte
order by patientid, visitdate desc
```
|
`CROSS JOIN` plus join-condition in `WHERE` is the same as an Inner Join. Why do you cast the number of days as a `VarChar(10)`?
There's no need for UNION or repeating the same Select multiple times:
```
WITH cte as
(
SELECT row_number() over (partition by patientid order by visitdate desc) rowid, patientid, visitdate
FROM patientvisits
)
SELECT VisitsA.patientid, VisitsA.visitdate "Visit Date", cast(COALESCE(datediff(day, VisitsB.visitdate, VisitsA.visitdate), 0) as varchar(10)) "Last Visit (days)"
FROM cte AS VisitsA LEFT JOIN
cte AS VisitsB
ON VisitsA.patientid = VisitsB.patientid and VisitsA.rowid + 1 = VisitsB.rowid
order by patientid, "Visit Date" desc
```
|
SQL Server : query to displays days since last visit. Can optimize this and/or use in-built SQL function to avoid self join and simply the query
|
[
"",
"sql",
"sql-server",
"performance",
"sql-server-2008-r2",
""
] |
Using VBScript, I create a recordset from a SQL query through and ADO connection object. I need to be able to write the field names and the largest field length to a text file, essentially as a two dimensional array, in the format of FieldName|FieldLength with a carriage return delimiter, example:
Matter Number|x(13)
Description|x(92)
Due Date|x(10)
Whilst I am able to loop through the Columns and write out the field names, I cannot solve the issue of Field Length. Code as follows:
```
Set objColNames = CreateObject("Scripting.FileSystemObject").OpenTextFile(LF14,2,true)
For i=0 To LF06 -1
objColNames.Write(Recordset.Fields(i).Name & "|x(" & Recordset.Fields(i).ActualSize & ")" & vbCrLf)
Next
```
in this instance it only writes the current selected Field Length.
|
After a little more research and testing I solved the issue by creating a dictionary based on the recordset field (column) count, then iterating through each item and evaluating the length of each field:
Dim Connection
Dim Recordset
```
Set Connection = CreateObject("ADODB.Connection")
Set Recordset = CreateObject("ADODB.Recordset")
Connection.Open LF08
Recordset.Open LF05,Connection
LF06=Recordset.Fields.Count
Set d = CreateObject("Scripting.Dictionary")
Set objColNames = CreateObject("Scripting.FileSystemObject").OpenTextFile(LF14,2,true)
For i=0 to LF06 -1
d.Add i, 0
Next
Dim aTable1Values
aTable1Values=Recordset.GetRows()
Set objFileToWrite = CreateObject("Scripting.FileSystemObject").OpenTextFile(LF07,2,true)
Dim iRowLoop, iColLoop
For iRowLoop = 0 to UBound(aTable1Values, 2)
For iColLoop = 0 to UBound(aTable1Values, 1)
If d.item(iColLoop) < Len(aTable1Values(iColLoop, iRowLoop)) Then
d.item(iColLoop) = Len(aTable1Values(iColLoop, iRowLoop))
End If
If IsNull(aTable1Values(iColLoop, iRowLoop)) Then
objFileToWrite.Write("")
Else
objFileToWrite.Write(aTable1Values(iColLoop, iRowLoop))
End If
If iColLoop <> UBound(aTable1Values, 1) Then
objFileToWrite.Write("|")
End If
next 'iColLoop
objFileToWrite.Write(vbCrLf)
Next 'iRowLoop
For i=0 to LF06 -1
d.item(i) = d.item(i) + 3
objColNames.Write(Recordset.Fields(i).Name & "|x(" & d.item(i) & ")" & vbCrLf)
Next
```
I then have two text files, one with the field names and lengths, the other with the query results. Using this I can then create a two dimensional array in the CMS (VisualFiles) from the results.
|
If I understand the question correctly (I'm not certain I do)....
If you change your SQL statement you only need to return one record.
```
Select Max(Len([Matter Number])) as [Matter Number],
Max(Len([Description])) As Description, Max(Len([Due Date])) As [Due Date] FROM TableName
```
This will return the maximum length of each field. Then construct your output from there.
|
VBScript Recordset Field Names and Length
|
[
"",
"sql",
"vbscript",
""
] |
I have a table: tblperson
There are three columns in tblperson
```
id amort_date total_amort
C000000004 12/30/2015 4584.00
C000000004 01/31/2016 4584.00
C000000004 02/28/2016 4584.00
```
The user will have to provide a billing date `@bill_date`
I want to sum all the total amort of all less than the date given by the user on month and year basis regardless of the date
For example
```
@bill_date = '1/16/2016'
Result should:
ID sum_total_amort
C000000004 9168.00
```
Regardless of the date i want to sum all amort less than January 2016
This is my query but it only computes the date January 2016, it does not include the dates less than it:
```
DECLARE @bill_date DATE
SET @bill_date='1/20/2016'
DECLARE @month AS INT=MONTH(@bill_date)
DECLARE @year AS INT=YEAR(@bill_date)
SELECT id,sum(total_amort)as'sum_total_amort' FROM webloan.dbo.amort_guide
WHERE loan_no='C000000004'
AND MONTH(amort_date) = @month
AND YEAR(amort_date) = @year
GROUP BY id
```
|
You would use aggregation and inequalities:
```
select id, sum(total_amort)
from webloan.dbo.amort_guide
where loan_no = 'C000000004' and
year(amort_date) * 12 + month(amort_date) <= @year * 12 + @month
group by id;
```
Alternatively, in SQL Server 2012+, you can just use `EOMONTH()`:
```
select id, sum(total_amort)
from webloan.dbo.amort_guide
where loan_no = 'C000000004' and
amort_date <= EOMONTH(@bill_date)
group by id;
```
|
You can get the start of the month using:
```
DATEADD(MONTH, DATEDIFF(MONTH, 0, @bill_date), 0)
```
So to get the `SUM(total_amort)`, your query should be:
```
SELECT
id,
SUM(total_amort) AS sum_total_amort
FROM webloan.dbo.amort_guide
WHERE
loan_no='C000000004'
AND amort_date < DATEADD(MONTH, DATEDIFF(MONTH, 0, @bill_date) + 1, 0)
```
|
Query all dates less than the given date (Month and Year)
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to write a select statement gathering one row for each name. Expected output is hence:
Name=Al, Salary=30, Bonus=10
Table\_1
```
Name Salary
Al 10
Al 20
```
Table\_2
```
Name Bonus
Al 5
Al 5
```
How do I write that?
I try to:
```
Select t1.Name, SUM(t1.Salary), SUM(t2.Bonus) FROM table_1 t1
LEFT JOIN table_2 t2
ON t1.Name=t2.Name
Group By 1
```
I get bonus 20 instead of 10 as bonus. That is probably because there are two rows in t1 from which the bonus is summed up. How can I modify my function in order to get the correct bonus?
|
Group the tables separately by employee, then join them:
```
SELECT t1.Name, Salary, Bonus
FROM (
SELECT Name, SUM(Salary) Salary
FROM table_1
GROUP BY Name
) t1
LEFT JOIN (
SELECT Name, SUM(Bonus) Bonus
FROM table_2
GROUP BY Name
) t2 ON t1.Name = t2.Name
```
|
You can do it with a subquery like this:
```
declare @salary table (Name varchar(100), value int)
declare @bonus table (Name varchar(100), value int)
insert into @salary
values ('al', 10)
insert into @salary
values ('al', 20)
insert into @bonus
values ('al', 5)
insert into @bonus
values ('al', 5)
select s.Name, sum(value) as Salary, Bonus
from @salary s JOIN
(
select Name, sum(value) as Bonus
from @bonus
group by Name
) b on b.name = s.Name
group by s.Name, b.Bonus
```
|
Join gives incorrect sum
|
[
"",
"sql",
"teradata",
""
] |
I am new in mysql. What I would to do is create a new table which is a copy of the original one *table* with one more column under a specific condition. Which condition appears as a new column is the new table. I mean:
Let *table* be a sequence of given point (x,y) I want to create the table *temp* being (x,y,r) where r = x^2 + y^2<1 But what I did is
```
CREATE temp LIKE table;
ALTER TABLE temp ADD r FLOAT;
INSERT INTO temp (x,y) SELECT * FROM table WHERE x*x+y*y<1;
UPDATE temp SET r=x*x+y*y;
```
It is ok, it gives what I want, but my database is much more bigger than this simple example and here I calculate twice the radius r in two table. It is not so good about optimization.
Is there a way to pass the clause into the new column directly?
Thanks in advance.
|
You don't actually need to worry about doing the calculation twice. There is more overhead to doing an `insert` and `update`. So, you should do those calculations at the same time.
MySQL extends the use of the `having` clause, so this is easy:
```
CREATE temp LIKE table;
ALTER TABLE temp ADD r FLOAT;
INSERT INTO temp(x, y, r)
SELECT x, y, x*x+y*y as r
FROM table
HAVING r < 1;
```
It is quite possible that an additional table is not actually necessary, but it depends on how you are using the data. For instance, if you have rather complicated processing and are referring to `temp` multiple times and `temp` is rather smaller than the original data, then this could be a useful optimization.
Also, materializing the calculation in a table not only saves time (when the calculation is expensive, which this isn't), but it also allows building an index on the computed value -- something you cannot otherwise do in MySQL.
Personally, my preference is for more complicated queries rather than a profusion of temporary tables. As with many things with extremes, the best solution often lies in the middle (well, not really in the middle but temporary tables aren't all bad).
|
You should (almost) never store calculated data in a database. It ends up creating maintenance and application nightmares when the calculated values end up out of sync with the values from which they are calculated.
At this point you're probably saying to yourself, "Well, I'll do a really good job keeping them in sync." It doesn't matter, because down the road at some point, for whatever reason, they will get out of sync.
Luckily, SQL provides a nice mechanism to handle what you want - views.
```
CREATE VIEW temp
AS
SELECT
x,
y,
x*x + y*y AS r
FROM My_Table
WHERE
x*x + y*y < 1
```
|
MySQL CLAUSE can become a value?
|
[
"",
"mysql",
"sql",
"database",
"optimization",
""
] |
There's already an answer for this question in SO with a MySQL tag. So I just decided to make your lives easier and put the answer below for SQL Server users. Always happy to see different answers perhaps with a better performance.
Happy coding!
|
```
SELECT SUBSTRING(@YourString, 1, LEN(@YourString) - CHARINDEX(' ', REVERSE(@YourString)))
```
Edit: Make sure `@YourString` is trimmed first as Alex M has pointed out:
```
SET @YourString = LTRIM(RTRIM(@YourString))
```
|
Just an addition to answers.
The doc for `LEN` function in MSSQL:
> LEN excludes trailing blanks. If that is a problem, consider using the DATALENGTH (Transact-SQL) function which does not trim the string. If processing a unicode string, DATALENGTH will return twice the number of characters.
The problem with the answers here is that trailing spaces are not accounted for.
```
SELECT SUBSTRING(@YourString, 1, LEN(@YourString) - CHARINDEX(' ', REVERSE(@YourString)))
```
As an example few inputs for the accepted answer (above for reference), which would have wrong results:
```
INPUT -> RESULT
'abcd ' -> 'abc' --last symbol removed
'abcd 123 ' -> 'abcd 12' --only removed only last character
```
To account for the above cases one would need to trim the string (would return the last word out of 2 or more words in the phrase):
```
SELECT SUBSTRING(RTRIM(@YourString), 1, LEN(@YourString) - CHARINDEX(' ', REVERSE(RTRIM(LTRIM(@YourString)))))
```
The reverse is trimmed on both sides, that is to account for the leading as well as trailing spaces.
Or alternatively, just trim the input itself.
|
SQL Server query to remove the last word from a string
|
[
"",
"sql",
"sql-server",
""
] |
How can I select count(\*) from two different tables (table1 and table2) having as result:
```
Count_1 Count_2
123 456
```
I've tried this:
```
select count(*) as Count_1 from table1
UNION select count(*) as Count_2 from table2;
```
But here's what I get:
```
Count_1
123
456
```
I can see a solution for Oracle and SQL server here, but either syntax doesn't work for MS Access (I am using Access 2013).
[Select count(\*) from multiple tables](https://stackoverflow.com/q/606234/5556320)
I would prefer to do this using SQL (because I am developing my query dynamically within VBA).
|
Cross join two subqueries which return the separate counts:
```
SELECT sub1.Count_1, sub2.Count_2
FROM
(SELECT Count(*) AS Count_1 FROM table1) AS sub1,
(SELECT Count(*) AS Count_2 FROM table2) AS sub2;
```
|
```
Select TOP 1
(Select count(*) as Count from table1) as count_1,
(select count(*) as Count from table2) as count_2
From table1
```
|
Select count(*) from multiple tables in MS Access
|
[
"",
"sql",
"ms-access",
""
] |
I need to populate a currency value column with a value which is calculated from 2 other columns.
So:
```
original_value amount new_value
12 2 NULL
10 1 NULL
```
This would become:
```
original_value amount new_value
12 2 24
10 1 10
```
I only want to update the NULL columns.
This needs to work for SQL Server and MYSQL!
|
Just use `UPDATE`
```
UPDATE table_name
SET new_value = (original_value * amount)
WHERE new_value IS NULL
```
|
As I said in my comment; *don't store computed values (from other columns.) Redundancy, not normalized, risk of data inconsistency! Create a view instead - will always be up to date!*
```
create view viewname as
select original_value, amount, original_value * amount as new_value
from tablename
```
SQL Server has computed columns, do something like:
```
alter table tablename add new_value as original_value * amount
```
|
populate null columns in database
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
What is an efficient way to get all records with a datetime column whose value falls somewhere between yesterday at `00:00:00` and yesterday at `23:59:59`?
SQL:
```
CREATE TABLE `mytable` (
`id` BIGINT,
`created_at` DATETIME
);
INSERT INTO `mytable` (`id`, `created_at`) VALUES
(1, '2016-01-18 14:28:59'),
(2, '2016-01-19 20:03:00'),
(3, '2016-01-19 11:12:05'),
(4, '2016-01-20 03:04:01');
```
If I run this query at any time on 2016-01-20, then all I'd want to return is rows 2 and 3.
|
Since you're only looking for the date portion, you can compare those easily using [MySQL's `DATE()` function](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date).
```
SELECT * FROM table WHERE DATE(created_at) = DATE(NOW() - INTERVAL 1 DAY);
```
Note that if you have a very large number of records this can be inefficient; indexing advantages are lost with the derived value of `DATE()`. In that case, you can use this query:
```
SELECT * FROM table
WHERE created_at BETWEEN CURDATE() - INTERVAL 1 DAY
AND CURDATE() - INTERVAL 1 SECOND;
```
This works because date values such as the one returned by [`CURDATE()`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_curdate) are assumed to have a timestamp of 00:00:00. The index can still be used because the date column's value is not being transformed at all.
|
You can still use the index if you say
```
SELECT * FROM TABLE
WHERE CREATED_AT >= CURDATE() - INTERVAL 1 DAY
AND CREATED_AT < CURDATE();
```
|
All MySQL records from yesterday
|
[
"",
"mysql",
"sql",
"datetime",
""
] |
I've got this TSQL:
```
CREATE TABLE #TEMPCOMBINED(
DESCRIPTION VARCHAR(200),
WEEK1USAGE DECIMAL(18,2),
WEEK2USAGE DECIMAL(18,2),
USAGEVARIANCE AS WEEK2USAGE - WEEK1USAGE,
WEEK1PRICE DECIMAL(18,2),
WEEK2PRICE DECIMAL(18,2),
PRICEVARIANCE AS WEEK2PRICE - WEEK1PRICE,
PRICEVARIANCEPERCENTAGE AS (WEEK2PRICE - WEEK1PRICE) / NULLIF(WEEK1PRICE,0)
);
INSERT INTO #TEMPCOMBINED (DESCRIPTION,
WEEK1USAGE, WEEK2USAGE, WEEK1PRICE, WEEK2PRICE)
SELECT T1.DESCRIPTION, T1.WEEK1USAGE, T2.WEEK2USAGE,
T1.WEEK1PRICE, T2.WEEK2PRICE
FROM #TEMP1 T1
LEFT JOIN #TEMP2 T2 ON T1.MEMBERITEMCODE = T2.MEMBERITEMCODE
```
...which works as I want (displaying values with two decimal places for the "Usage" and "Price" columns) except that the calculated percentage value (PRICEVARIANCEPERCENTAGE) displays values longer than Cyrano de Bergerac's nose, such as "0.0252707581227436823"
How can I force that to instead "0.03" (and such) instead?
|
Try explicitly casting the computed column when you create the table, as below:
```
CREATE TABLE #TEMPCOMBINED(
DESCRIPTION VARCHAR(200),
WEEK1USAGE DECIMAL(18,2),
WEEK2USAGE DECIMAL(18,2),
USAGEVARIANCE AS WEEK2USAGE - WEEK1USAGE,
WEEK1PRICE DECIMAL(18,2),
WEEK2PRICE DECIMAL(18,2),
PRICEVARIANCE AS WEEK2PRICE - WEEK1PRICE,
PRICEVARIANCEPERCENTAGE AS CAST((WEEK2PRICE - WEEK1PRICE) / NULLIF(WEEK1PRICE,0) AS DECIMAL(18,2))
);
```
|
use ROUND function
```
SELECT ROUND(column_name,decimals) FROM table_name;
```
|
How can I get my calculated SQL Server percentage value to only display two decimals?
|
[
"",
"sql",
"t-sql",
"precision",
"decimal-precision",
""
] |
I'm trying to find instances where a user has entered a person with their name backwards and then entered the person again correctly.
```
FirstName LastName
----------------------
Doc Jones
Jones Doc
Doc Holiday
John Doe
```
I want to get
```
Doc Jones
Jones Doc
```
I tried
```
Select FirstName, LastName
From People
Where FirstName Like '%' + LastName + '%'
```
but I get no results and I know there are multiple instances of this. I know I'm overlooking something easy.
|
Just for the sake of completeness. You can also use `INTERSECT`:
```
Select FirstName, LastName
From People
INTERSECT
Select LastName, FirstName
From People
```
This will return only one pair of matching rows, i.e.:
```
+-----------+----------+
| FirstName | LastName |
+-----------+----------+
| Doc | Jones |
| Jones | Doc |
+-----------+----------+
```
even if original data has `Doc Jones` or `Jones Doc` more than once:
```
DECLARE @People TABLE ([FirstName] varchar(50), [LastName] varchar(50));
INSERT INTO @People ([FirstName], [LastName]) VALUES
('Doc', 'Jones'),
('Doc', 'Jones'),
('Jones', 'Doc'),
('Doc', 'Holiday'),
('John', 'Doe');
```
|
```
SELECT P1.FirstName, P1.LastName
FROM People P1
JOIN People P2
ON P1.FirstName = P2.LastName
AND P2.FirstName = P1.LastName
```
The problem I see is you dont have some form of ID you wont have a way to see what are the rows duplicated between lot of duplicates.
So maybe this is better
```
SELECT P1.*, P2.*
FROM People P1
JOIN People P2
ON P1.FirstName = P2.LastName
AND P2.FirstName = P1.LastName
AND P1.FirstName < P1.LastName
```
And you get
```
Doc Jones Jones Doc
```
|
Find rows where the value in column 1 exists in colunm2
|
[
"",
"sql",
"sql-server",
""
] |
I heard people saying that table joins can be used everywhere to replace sub-queries. I tested it in my query, but found that appropriate data set was only retrieved when I used sub-queries. I was not able to get same data set using joins. I am not sure if what I found is right because I am a newcomer in RDBMS, thus not so much experienced. I will try to draw the schema (in words) of the database in which I was experimenting:
The database has two tables:
`Users` (**ID**, Name, City) and Friendship (***ID***, ***Friend\_ID***)
`Goal`: Users table is designed to store simple user data and Friendship table represents Friendship between users. Friendship table has both the columns as foreign keys, referencing to Users.ID. Tables have many-to-many relationship between them.
Question: I have to retrieve Users.ID and Users.Name of all the Users, which are not friends with a particular user x, but are from same city (much like fb's friend suggestion system).
By using subquery, I am able to achieve this. Query looks like:
```
SELECT ID, NAME
FROM USERS AS U
WHERE U.ID NOT IN (SELECT FRIENDS_ID
FROM FRIENDSHIP,
USERS
WHERE USERS.ID = FRIENDSHIP.ID AND USERS.ID = x)
AND U.ID != x AND CITY LIKE '% A_CITY%';
```
Example entries:
```
Users
```
Id = 1 Name = Jon City = Mumbai
Id=2 Name=Doe City=Mumbai
Id=3 Name=Arun City=Mumbai
Id=4 Name=Prakash City=Delhi
```
Friendship
```
Id= 1 Friends\_Id = 2
Id = 2 Friends\_Id=1
Id = 2 Friends\_Id = 3
Id = 3 Friends\_Id = 2
Can I get the same data set in a single query by performing joins. How? Please let me know if my question is not clear. Thanks.
Note: I used inner join in the sub-query by specifying both tables: Friendship, Users. Omitting the Users table and using the U from outside, gives an error (But if not using alias for the table Users, query becomes syntactically okay but result from this query includes ID's and names of users, who have more than one friends, including the user having ID x. Interesting, but is not the topic of the question).
|
For `not in` you can use `left join` and check for `is null`:
```
select u.id, u.name
from Users u
left join Friends f on u.id = f.id and f.friend_id = @person
where u.city like '%city%' and f.friend_id is null and u.id <> @person;
```
There are some cases where you can't work out your way with just inner/left/right joins, but your case is not one of them.
Please check sql fiddle: <http://sqlfiddle.com/#!9/1c5b1/14>
Also about your note: What you tried to do can be achieved with `lateral` join or `cross apply` depending on the engine you are using.
|
You can rewrite your query using only joins. The trick is to join to the User tables once with an inner join to identify users within the same city and reference the Friendship table with a left join and a null check to identify non-friends.
```
SELECT
U1.ID,
U1.Name
FROM
USERS U1
INNER JOIN
USERS U2
ON
U1.CITY = U2.CITY
LEFT JOIN
FRIENDSHIP F
ON
U2.ID = F.ID AND
U1.ID = F.FRIEND_ID
WHERE
U2.id = X AND
U1.ID <> U2.id AND
F.id IS NULL
```
The above query doesn't handle the situation where USER x's primary key is in the FRIEND\_ID column of the FRIENDSHIP table. I assume because your subquery version doesn't handle that situation, perhaps you create 2 rows for each friendship, or friendships are not bi-directional.
|
Is it true that JOINS can be used everywhere to replace Subqueries in SQL
|
[
"",
"sql",
"join",
""
] |
So, I'm new to rails so this is probably a newbie question, but I didn't find any help for this problem anywhere...
Let's say I have a database containing "stories" here. The only column are the title and a timestamp :
```
create_table :stories
add_column :stories, :title, :string
add_column :stories, :date, :timestamp
```
I have a form in my views so I can create a new story by inputing a title :
```
<%= form_for @story do %>
<input type="text" name="title" value="Story title" />
<input type="submit" value="Start a story" />
<% end %>
```
I have this on my controller :
```
def create
Story.create title:params[:title]
redirect_to "/stories"
end
```
And the model 'Story' works fine.
So, when I create a new story, everything looks fine. My goal however is **to be able to sort the stories by date** (hence the :date timestamp).
**How can I make it so the current date is stocked in the :date timestamp, so I can sort my items by date ?**
Thank you very much in advance
|
Without having to recreate the table from scratch, in terminal run 'rails g migration add\_timestamps\_to\_stories'
- find the newly created migration file, make sure it has the following:
```
def change
add_timestamps :stories
end
```
Run rake db:migrate
This adds two new columns, created\_at and updated\_at, you'll be able to sort by created\_at from that point. Be sure to update previous records that don't have a created\_at date (since it'll be nil) so you don't run into errors.
|
If you only want to track the creation timestamp of your story records, you could do it like this in your migration:
```
create_table do |t|
t.string :title
t.timestamps
end
```
In addition to your title field, this would create "created\_at" and "updated\_at" fields which are handled by ActiveRecord for you, like Andrew Kim suggested.
|
Save date in database with rails
|
[
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
I am getting `Syntax error near 'ORDER'` from the following query:
```
SELECT i.ItemID, i.Description, v.VendorItemID
FROM Items i
JOIN ItemVendors v ON
v.RecordID = (
SELECT TOP 1 RecordID
FROM ItemVendors iv
WHERE
iv.VendorID = i.VendorID AND
iv.ParentRecordID = i.RecordID
ORDER BY RecordID DESC
);
```
If I remove the `ORDER BY` clause the query runs fine, but unfortunately it is essential to pull from a descending list rather than ascending. All the answers I have found relating to this indicate that `TOP` must be used, but in this case I am already using it. I don't have any problems with `TOP` and `ORDER BY` when not part of a subquery. Any ideas?
|
I'd use max instead of top 1 ... order by
SELECT i.ItemID, i.Description, v.VendorItemID
FROM Items i
JOIN ItemVendors v ON
v.RecordID = (
SELECT max(RecordID)
FROM ItemVendors iv
WHERE
iv.VendorID = i.VendorID AND
iv.ParentRecordID = i.RecordID);
|
This error has nothing to do with TOP.
ASE simply does not allow ORDER BY in a subquery. That's the reason for the error.
|
SQL error on ORDER BY in subquery (TOP is used)
|
[
"",
"sql",
"sybase-asa",
""
] |
I've a question about the use of recursive SQL in which I have following table structure
Products can be in multiple groups (for the sake of clarity, I am not using int )
```
CREATE TABLE ProductGroups(ProductName nvarchar(50), GroupName nvarchar(50))
INSERT INTO ProductGroups(ProductName, GroupName) values
('Product 1', 'Group 1'),
('Product 1', 'Group 2'),
('Product 2', 'Group 1'),
('Product 2', 'Group 6'),
('Product 3', 'Group 7'),
('Product 3', 'Group 8'),
('Product 4', 'Group 6')
+-----------+---------+
| Product | Group |
+-----------+---------+
| Product 1 | Group 1 |
| Product 1 | Group 2 |
| Product 2 | Group 1 |
| Product 2 | Group 6 |
| Product 3 | Group 7 |
| Product 3 | Group 8 |
| Product 4 | Group 6 |
+-----------+---------+
```
**Now the Question is** I want to find out all the related products
so i.e. if I pass **Product 1** then I need the following result
```
+-----------+---------+
| Product | Group |
+-----------+---------+
| Product 1 | Group 1 |
| Product 1 | Group 2 |
| Product 2 | Group 1 |
| Product 2 | Group 6 |
| Product 4 | Group 6 |
+-----------+---------+
```
So basically I want to first find out all the Groups for product 1 and then for each group I want to find out all the products and so on...
1. Product 1 => Group 1, Group 2;
2. Group 1 => Product 1, Product 2 (Group 1 and Product 1 already exist so should be avoided otherwise would go into infinite
loop);
3. Group 2 => Product 1 (already exist so same as above);
4. Product 2 => Group 1, Group 6 (Group 1 and Product 2 already exist)
5. Group 6 => Product 4
|
I don't think this is possible with a recursive CTE, because you're only allowed one recursive reference per recursive definition.
I did manage to implement it with a `while` loop, which is likely to be less efficient than the cte:
```
declare @related table (ProductName nvarchar(50), GroupName nvarchar(50))
-- base case
insert @related select * from ProductGroups where ProductName='Product 1'
-- recursive step
while 1=1
begin
-- select * from @related -- uncomment to see progress
insert @related select p.*
from @related r
join ProductGroups p on p.GroupName=r.GroupName or p.ProductName=r.ProductName
left join @related rr on rr.ProductName=p.ProductName and rr.GroupName=p.GroupName
where rr.ProductName is null
if @@ROWCOUNT = 0
break;
end
select * from @related
```
You should definitely be careful with the above - benchmark on real sized data before deploying!
|
It can be done with a recursive query, but it's not optimal because SQL Server does not allow you to reference the recursive table as a set. So you end up having to keep a path string to avoid infinite loops. If you use ints you can replace the path string with a `hierarchyid`.
```
with r as (
select ProductName Root, ProductName, GroupName, convert(varchar(max), '/') Path from ProductGroups
union all
select r.Root, pg.ProductName, pg.GroupName, convert(varchar(max), r.Path + r.ProductName + ':' + r.GroupName + '/')
from r join ProductGroups pg on pg.GroupName=r.GroupName or pg.ProductName=r.ProductName
where r.Path not like '%' + pg.ProductName + ':' + pg.GroupName + '%'
)
select distinct ProductName, GroupName from r where Root='Product 1'
```
<http://sqlfiddle.com/#!3/a65d1/5/0>
|
Implementing a recursive query in SQL
|
[
"",
"sql",
"sql-server",
"recursive-query",
""
] |
I have been asked this in many interviews:
> What is the first step to do if somebody complains that a query is running slowly?
I say that I run `sp_who2 <active>` and check the queries running to see which one is taking the most resources and if there is any locking, blocking or deadlocks going on.
Can somebody please provide me their feedback on this? Is this the best answer or is there a better approach?
Thanks!
|
This is one of my interview questions that I've given for years. Keep in mind that I do not use it as a yes/no, I use it to gauge how deep their SQL Server knowledge goes and whether they're server or code focused.
Your answer went towards how to find which query is running slow, and possibly examine server resource reasons as to why it's suddenly running slow. Based on your answer, I would start to label you as an operational DBA type. These are exactly the steps that an operational DBA performs when they get the call that the server is suddenly running slow. That's fine if that's what I'm interviewing for and that's what you're looking for. I might dig further into what your steps would be to resolve the issue once you find deadlocks for example, but I wouldn't expect people to be able to go very deep. If it's not a deadlock or blocking, better answers here would be to capture the execution plan and see if there are stale stats. It's also possible that parameter sniffing is going on, so a stored proc may need to be "recompiled". Those are the typical problems I see the DBA's running into. I don't interview for DBA's often so maybe other people have deeper questions here.
If the interview is for a developer job however, then I would expect the answer more to make an assumption that we've already located which query is running slowly, and that it's reproducible. I'll even go ahead and state as much if needed. The things that a developer has control over are different than what the operational DBA has control over, so I would expect the developer to start looking at the code.
People will often recommend looking at the execution plan at this point, and therefore recommend it as a good answer. I'll explain a little later why I don't necessarily agree that this is the best first step. If the interviewee does happen to mention the execution plan at this point however, my followup questions would be to ask what they're looking for on the execution plan. The most common answer would be to look for table scans instead of seeks, possibly showing signs of a missing index. The answers that show me more experience working with execution plans have to do with looking for steps with the highest percentage of the whole and/or looking for thick lines.
I find a lot of query tuning efforts go astray when starting with the execution plans and solutions get hacky because the people tuning the queries don't know what they want the execution plan to look like, just that they don't like the one they have. They'll then try to focus on the seemingly worst performing step, adding indexes, query hints, etc, when it may turn out that because of some other step, the entire execution plan is flipped upside down, and they're tuning the wrong piece. If, for example, you have three tables joined together on foreign keys, and the third table is missing an index, SQL Server may decide that the next best plan is to walk the tables in the opposite direction because primary key indexes exist there. The side effect may be that it looks like the first table is the one with the problem when really it's the third table.
The way I go about tuning a query, and therefore what I prefer to hear as an answer, is to look at the code and get a feel for what the code is trying to do and how I would expect the joins to flow. I start breaking up the query into pieces starting with the first table. Keep in mind that I'm using the term "first" here loosely, to represent the table that I want SQL Server to start in. That is not necessarily the first table listed. It is however typically the smallest table, especially with the "where" applied. I will then slowly add in the additional tables one by one to see if I can find where the query turns south. It's typically a missing index, no sargability, too low of cardinality, or stale statistics. If you as the interviewee use those exact terms in context, you're going to ace this question no matter who is interviewing you.
Also, once you have an expectation of how you want the joins to flow, now is a good time to compare your expectations with the actual execution plan. This is how you can tell if a plan has flipped on you.
If I was answering the question, or tuning an actual query, I would also add that I like to get row counts on the tables and to look at the selectivity of all columns in the joins and "where" clauses. I also like to actually look at the data. Sometimes problems just aren't obvious from the code but become obvious when you see some of the data.
|
I can't really say which is the best answer, but I'd answer: analyze the [Actual Execution Plan](https://learn.microsoft.com/en-us/sql/relational-databases/performance/display-an-actual-execution-plan). That should be a basis to check for performance issues.
There is plenty of information to be found on the internet about analyzing Execution Plans. I suggest you check it out.
|
Fixing a slow running SQL query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need to improve my view performance, right now the SQL that makes the view is:
```
select tr.account_number , tr.actual_collection_trx_date ,s.customer_key
from fct_collections_trx tr,
stg_scd_customers_key s
where tr.account_number = s.account_number
and trunc(tr.actual_collection_trx_date) between s.start_date and s.end_date;
```
Table fct\_collections\_trx has 170k+-(changes every day) records.
Table stg\_scd\_customers\_key has 430mil records.
Table fct\_collections\_trx have indexes as following:
(SINGLE INDEX OF ALL OF THEM) (ACCOUNT\_NUMBER, SUB\_ACCOUNT\_NUMBER, ACTUAL\_COLLECTION\_TRX\_DATE, COLLECTION\_TRX\_DATE, COLLECTION\_ACTION\_CODE)(UNIQUE) and ENTRY\_SCHEMA\_DATE(NORMAL). DDL:
```
alter table stg_admin.FCT_COLLECTIONS_TRX
add primary key (ACCOUNT_NUMBER, SUB_ACCOUNT_NUMBER, ACTUAL_COLLECTION_TRX_DATE, COLLECTION_TRX_DATE, COLLECTION_ACTION_CODE)
using index
tablespace STG_COLLECTION_DATA
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 80K
next 1M
minextents 1
maxextents unlimited
);
```
Table structure:
```
create table stg_admin.FCT_COLLECTIONS_TRX
(
account_number NUMBER(10) not null,
sub_account_number NUMBER(5) not null,
actual_collection_trx_date DATE not null,
customer_key NUMBER(10),
sub_account_key NUMBER(10),
schema_key VARCHAR2(10) not null,
collection_group_code CHAR(3),
collection_action_code CHAR(3) not null,
action_order NUMBER,
bucket NUMBER(5),
collection_trx_date DATE not null,
days_into_cycle NUMBER(5),
logical_delete_date DATE,
balance NUMBER(10,2),
abbrev CHAR(8),
customer_status CHAR(2),
sub_account_status CHAR(2),
entry_schema_date DATE,
next_collection_action_code CHAR(3),
next_collectin_trx_date DATE,
reject_key NUMBER(10) not null,
dwh_update_date DATE,
delta_type VARCHAR2(1)
)
```
Table stg\_scd\_customers\_key have indexes : (SINGLE INDEX OF ALL OF THEM)
(ACCOUNT\_NUMBER, START\_DATE, END\_DATE). DDL :
```
create unique index stg_admin.STG_SCD_CUST_KEY_PKP on stg_admin.STG_SCD_CUSTOMERS_KEY (ACCOUNT_NUMBER, START_DATE, END_DATE);
```
This table is also partitioned:
```
partition by range (END_DATE)
(
partition SCD_CUSTOMERS_20081103 values less than (TO_DATE(' 2008-11-04 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
tablespace FCT_CUSTOMER_SERVICES_DATA
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 8M
next 1M
minextents 1
maxextents unlimited
)
```
Table structure:
```
create table stg_admin.STG_SCD_CUSTOMERS_KEY
(
customer_key NUMBER(18) not null,
account_number NUMBER(10) not null,
start_date DATE not null,
end_date DATE not null,
curr_ind NUMBER(1) not null
)
```
I Can't add filter on the big table(need all range of dates) and i can't use materialized view. This query runs for about 20-40 minutes, I have to make it faster..
I've already tried to drop the trunc, makes no different.
Any suggestions?
Explain plan:
[](https://i.stack.imgur.com/jwdw2.png)
|
First, write the query using explicit `join` syntax:
```
select tr.account_number , tr.actual_collection_trx_date ,s.customer_key
from fct_collections_trx tr join
stg_scd_customers_key s
on tr.account_number = s.account_number and
trunc(tr.actual_collection_trx_date) between s.start_date and s.end_date;
```
You already have appropriate indexes for the customers table. You can try an index on `fct_collections_trx(account_number, trunc(actual_collection_trx_date), actual_collection_trx_date)`. Oracle might find this useful for the `join`.
However, if you are looking for a single match, then I wonder if there is another approach that might work. How does the performance of the following query work:
```
select tr.account_number , tr.actual_collection_trx_date,
(select min(s.customer_key) keep (dense_rank first order by s.start_date desc)
from stg_scd_customers_key s
where tr.account_number = s.account_number and
tr.actual_collection_trx_date >= s.start_date
) as customer_key
from fct_collections_trx tr ;
```
This query is not exactly the same as the original query, because it is not doing any filtering -- and it is not checking the end date. Sometimes, though, this phrasing can be more efficient.
Also, I think the `trunc()` is unnecessary in this case, so an index on `stg_scd_customers_key(account_number, start_date, customer_key)` is optimal.
The expression `min(x) keep (dense_rank first order by)` essentially does `first()` -- it gets the first element in a list. Note that the `min()` isn't important; `max()` works just as well. So, this expression is getting the first customer key that meets the conditions in the `where` clause. I have observed that this function is quite fast in Oracle, and often faster than other methods.
|
If the start and end dates have no time elements (ie. they both default to midnight), then you could do:
```
select tr.account_number , tr.actual_collection_trx_date ,s.customer_key
from fct_collections_trx tr,
stg_scd_customers_key s
where tr.account_number = s.account_number
and tr.actual_collection_trx_date >= s.start_date
and tr.actual_collection_trx_date < s.end_date + 1;
```
On top of that, you could add an index to each table, containing the following columns:
* for fct\_collections\_trx: (account\_number, actual\_collection\_trx\_date)
* for stg\_scd\_customers\_key: (account\_number, start\_date, end\_date, customer\_key)
That way, the query should be able to use the indexes rather than having to go to the table as well.
|
Improve view performance
|
[
"",
"sql",
"oracle",
"performance",
"select",
"view",
""
] |
I have 2 different tables employees and salaries the salaries table has multiple duplicate id's on it my question is how can i combined the employee and salaries table and removed its duplicate but i want the max salary to be displayed for that employee.
Employees table
[](https://i.stack.imgur.com/9KOwD.png)
Salaries table
[](https://i.stack.imgur.com/xmP0o.png)
|
Based on the definition of the `salaries` table (`from_date` & `to_date`) it's a [Slowly Changing Dimension](https://en.wikipedia.org/wiki/Slowly_Changing_Dimension). Your data might look like this:
```
Emp_no salary from_date to_date
22 14000 2007-01-01 2008-03-31 -- or 2008-04-01
22 16000 2008-04-01 2010-12-31 -- or 2011-01-01
22 18000 2011-01-01 9999-12-31 -- or NULL
```
In that case you don't want the `MAX` salary but the current/latest salary.
In a SCD `to_date` is usually set to either a maximum date like `9999-12-31` or `3999-12-31` or `NULL`. To get the current salary you use following conditions:
```
WHERE to_date IS NULL
or
WHERE to_date = DATE '9999-12-31' -- or whatever your max date is
or
WHERE CURRENT_DATE BETWEEN from_date AND to_date
```
To get the salary for any point in time:
```
WHERE whatever_date_you_want BETWEEN from_date AND to_date
```
|
You need something like this. I created a fiddle demo with necessary columns.
```
select e.*, s.salary from
employees e
inner join
(
select emp_no,max(salary) as salary from salaries
group by emp_no
) s
on e.emp_no=s.emp_no
```
**See Fiddle demo here**
<http://sqlfiddle.com/#!9/b5b67/5>
|
combined 2 different structure tables and remove duplicates
|
[
"",
"mysql",
"sql",
""
] |
Here's the code that is in production:
```
dynamic_sql := q'[ with cte as
select user_id,
user_name
from user_table
where regexp_like (bizz_buzz,'^[^Z][^Y6]]' || q'[') AND
user_code not in ('A','E','I')
order by 1]';
```
1. Start at the beginning and search bizz\_buzz
2. Match any one character that is NOT Z
3. Match any two characters that are not Y6
4. What's the ']' after the 6?
5. Then what?
|
I think that StackOverflow's formatting is causing some of the confusion in the answers. Oracle has a syntax for a string literal, `q'[...]'`, which means that the `...` portion is to be interpreted exactly as-is; so for instance it can include single quotes without having to escape each one individually.
But the code formatting here doesn't understand that syntax, so it is treating each single-quote as a string delimiter, which makes the result look different that how Oracle really sees it.
The expression is concatenating two such string literals together. (I'm not sure why - it looks like it would be possible to write this as a single string literal with no issues.) As pointed out in another answer/comment, the resulting SQL string is actually:
```
with cte as
select user_id,
user_name
from user_table
where regexp_like (bizz_buzz,'^[^Z][^Y6]') AND
user_code not in ('A','E','I')
order by 1
```
And also as pointed out in another answer, the `[^Y6]` portion of the regex matches a single character, not two. So this expression should simply match any string whose first character is not 'Z' and whose second character is neither 'Y' nor '6'.
|
When not in couples `]` means... Well... Itself:
```
^[^Z][^Y6]]/
^ assert position at start of the string
[^Z] match a single character not present in the list below
Z the literal character Z (case sensitive)
[^Y6] match a single character not present in the list below
Y6 a single character in the list Y6 literally (case sensitive)
] matches the character ] literally
```
1. Start at the beginning and search bizz\_buzz
2. Match any one character that is NOT Z
3. Match any two one characters that is not Y or 6
4. What's the ']' after the 6? it's a `]`
|
What is this Oracle regexp matching in this production code?
|
[
"",
"sql",
"regex",
"oracle",
""
] |
I have a table with 51 records . The table structure looks something like below :
**ack\_extract\_id** **query\_id** **cnst\_giftran\_key** **field1** **value1**
Now ack\_extract\_ids can be 8,9.
I want to check for giftran keys which are there for extract\_id 9 and not there in 8.
What I tried was
```
SELECT *
FROM ddcoe_tbls.ack_flextable ack_flextable1
INNER JOIN ddcoe_tbls.ack_main_config config
ON ack_flextable1.ack_extract_id = config.ack_extract_id
LEFT JOIN ddcoe_tbls.ack_flextable ack_flextable2
ON ack_flextable1.cnst_giftran_key = ack_flextable2.cnst_giftran_key
WHERE ack_flextable2.cnst_giftran_key IS NULL
AND config.ack_extract_file_nm LIKE '%Dtl%'
AND ack_flextable2.ack_extract_id = 8
AND ack_flextable1.ack_extract_id = 9
```
But it is returning me 0 records. Ideally the left join where right is null should have returned the record for which the cnst\_giftran\_key is not present in the right hand side table, right ?
What am I missing here ?
|
When you test columns from the left-joined table in the where clause (`ack_flextable2.ack_extract_id` in your case), you force that join to behave as if it were an inner join. Instead, move that test to be part of the join condition.
Then to find records where that value is missing, test for a NULL key in the where clause.
```
SELECT *
FROM ddcoe_tbls.ack_flextable ack_flextable1
INNER JOIN ddcoe_tbls.ack_main_config config
ON ack_flextable1.ack_extract_id = config.ack_extract_id
LEFT JOIN ddcoe_tbls.ack_flextable ack_flextable2
ON ack_flextable1.cnst_giftran_key = ack_flextable2.cnst_giftran_key
AND ack_flextable2.ack_extract_id = 8
WHERE ack_flextable2.cnst_giftran_key IS NULL
AND config.ack_extract_file_nm LIKE '%Dtl%'
AND ack_flextable1.ack_extract_id = 9
AND ack_flextable2.cnst_giftran_key IS NULL
```
|
**THIS IS NO ANSWER, JUST AN EXPLANATION**
From your comment to Joe Stefanelli's answer I gather that you don't fully understand the issue with WHERE and ON in an outer join. So let's look at an example.
We are looking for all supplier's last orders, i.e. the order records where there is no newer order for the supplier.
```
select *
from order
where not exists
(
select *
from order newer
where newer.supplier = order.supplier
and newer.orderdate > order.orderdate
);
```
This is straight-forward; the query matches what we just put in words: Find orders for which NOT EXISTS a newer order for the same supplier.
The same query with the anti-join pattern:
```
select order.*
from order
left join order newer on newer.supplier = order.supplier
and newer.orderdate > order.orderdate
where newer.id is null;
```
Here we join every order with all their newer orders, thus probably creating a huge intermediate result. With the left outer join we make sure we get a dummy record attached when there is no newer order for the supplier. Then at last we scan the intermediate result with the WHERE clause, keeping only records where the attached record has an ID null. Well, the ID is obviously the table's primary key and can never be null, so what we keep here is only the outer-joined results where the newer data is just a dummy record containing nulls. Thus we get exactly the orders for which no newer order exists.
Talking about a huge intermediate result: How can this be faster than the first query? Well, it shouldn't. The first query should actually either run equally fast or faster. A good DBMS will see through this and make the same execution plan for both queries. A rather young DBMS however may really execute the anti join quicker. That is because the developers put so much effort into join techniques, as these are needed in about every query, and didn't yet care about IN and EXISTS that much. In such a case one may run into performance issues with NOT IN or NOT EXISTS and use the anti-join pattern instead.
Now as to the WHERE / ON problem:
```
select order.*
from order
left join order newer on newer.orderdate > order.orderdate
where newer.supplier = order.supplier
and newer.id is null;
```
This looks almost the same as before, but some criteria has moved from ON to WHERE. This means the outer join gets different criteria. Here is what happens: for every order find all newer orders ‐ no matter which supplier! So it is all orders of the last order date that get an outer-join dummy record. But then in the WHERE clause we remove all pairs where the supplier doesn't match. Notice that the outer-joined records contain NULL for newer.supplier, so `newer.supplier = order.supplier` is never true for them; they get removed. But then, if we remove all outer-joined records we get exactly the same result as with a vanilla inner join. When we put outer join criteria in the WHERE clause we turn the outer join into an inner join. So the query can be re-written as
```
select order.*
from order
inner join order newer on newer.orderdate > order.orderdate
where newer.supplier = order.supplier
and newer.id is null;
```
And with tables in FROM and INNER JOIN it doesn't matter whether the criteria is in ON or WHERE; it's rather a matter of readability, because both criteria will equally get applied.
Now we see that `newer.id is null` can never be true. The final result will be empty ‐ which is exactly what happened with your query.
|
LEFT JOIN WHERE RIGHT IS NULL for same table in Teradata SQL
|
[
"",
"sql",
"left-join",
"teradata",
""
] |
Is there Go equivalent of Apple's GCD serial dispatch queue?
So far I have only found a solution that is a channel of functions.
```
work := make(chan func())
```
I would have a function receive from this channel and call the received functions. The functions must be executed in FIFO order.
Is there a better method or structure to do this in Go?
*This shouldn't make a difference but I am looking to queue SQL queries to run in FIFO for this.*
|
@OneOfOne, it was close but not quite.
I ended up making a Serial Dispatch Queue implementation in Go available [here](https://github.com/ansonl/serialqueue).
It is basically a go routine that blocks on a `channel` of type `func()` and runs the functions that are passed in order.
**Implementation:**
```
//Package serialqueue provides a serial queue for functions.
//Queue items are processed in First In First Out (FIFO) order.
package serialqueue
//New returns a new serial queue.
//Enqueue items like queueObj <- func() {doWork(data)}
func New() chan func() {
//create channel of type function
var queue = make(chan func())
//spawn go routine to read and run functions in the channel
go func() {
for true {
nextFunction := <-queue
nextFunction()
}
}()
return queue
}
```
**Usage:** (demonstrating writing to a string in the correct order)
```
//Package serialqueue provides provides tests for github.com/ansonl/serialqueue.
package serialqueue_test
import (
"testing"
"fmt"
"sync"
"github.com/ansonl/serialqueue"
)
func TestQueue(t *testing.T) {
//Create new serial queue
queue := serialqueue.New()
//Number of times to loop
var loops = 100
//Queue output will be added here
var queueOutput string
//WaitGroup for determining when queue output is finished
var wg sync.WaitGroup
//Create function to place in queue
var printTest = func(i int) {
queueOutput = fmt.Sprintf("%v%v",queueOutput, i)
wg.Done()
}
//Add functions to queue
var i int;
for i=0;i<loops;i++ {
wg.Add(1)
t:=i
queue <- func() {printTest(t)}
}
//Generate correct output
var correctOutput string
for i=0;i<loops;i++ {
correctOutput = fmt.Sprintf("%v%v", correctOutput, i)
}
//Wait until all functions in queue are done
wg.Wait()
//Compare queue output with correct output
if queueOutput != correctOutput {
t.Errorf("Serial Queue produced %v, want %v", queueOutput, correctOutput);
}
}
```
Hope this helps someone with the same issue!
|
Something like this should work, however I'm not familiar with how GCD works so I could be way off.
```
func main() {
q := NewQueue(10) // the size is mainly so it wouldn't block, you can play with that to your liking.
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
i := i
q <- func() { log.Println("i =", i); wg.Done() }
}
wg.Wait()
close(q)
}
func NewQueue(size int) (q chan func()) {
q = make(chan func(), size)
go func() {
for fn := range q {
fn()
}
}()
return
}
```
`playground`
|
Go equivalent of GCD serial dispatch queue
|
[
"",
"sql",
"go",
"queue",
"grand-central-dispatch",
""
] |
I have a table of logins that looks like this:
logins table
```
Emp_ID Created | Action
1 20/01/2016 10:44:42 AM login
1 20/01/2016 4:45:49 PM logout
1 20/01/2016 6:30:13 PM logout
1 21/01/2016 8:46:28 AM login
1 21/01/2016 9:46:42 AM login
1 21/01/2016 1:46:46 PM logout
1 22/01/2016 8:49:21 AM login
1 22/01/2016 1:49:27 PM logout
1 22/01/2016 2:29:53 PM login
1 22/01/2016 2:30:13 PM logout
3 22/01/2016 2:42:06 PM login
1 22/01/2016 9:57:22 PM login
1 22/01/2016 10:22:23 PM logout
1 23/01/2016 8:01:47 AM login
1 23/01/2016 9:01:58 AM logout
3 23/01/2016 8:02:06 AM login
3 23/01/2016 9:02:28 AM logout
```
The employees table
```
| ID | Fname | Lname |
|----|-------|-------|
| 1 | James | Brown |
| 2 | Mark | Bond |
| 3 | Kemi | Ojo |
```
The result I got
```
| created | login | logout | Employee | Emp_ID |
|------------|----------|----------|-------------|--------|
| 2016-01-20 | 10:44:42 | 18:30:13 | James Brown | 1 |
| 2016-01-21 | 08:46:28 | 13:46:46 | James Brown | 1 |
| 2016-01-22 | 08:49:21 | 22:22:23 | James Brown | 1 |
| 2016-01-22 | 14:42:06 | 22:22:23 | Kemi Ojo | 3 |
| 2016-01-23 | 08:01:47 | 09:02:28 | James Brown | 1 |
| 2016-01-23 | 08:02:06 | 09:02:28 | Kemi Ojo | 3 |
```
Here is what I have tried:
```
SELECT
CAST(LI.created AS DATE) AS created,
MIN(CAST(LI.created AS TIME)) AS login,
MAX(CAST(LO.created AS TIME)) AS logout,
e.fname+' '+e.lname Employee, li.Emp_ID
FROM
Logins LI
LEFT OUTER JOIN Logins LO ON
LO.action = 'logout' AND
CAST(LO.created AS DATE) = CAST(LI.created AS DATE)
JOIN dbo.Employees AS E ON E.ID = li.Emp_ID
WHERE
LI.action = 'login'
GROUP BY
CAST(LI.created AS DATE), E.fname + ' ' + E.lname, li.Emp_ID
```
But the result is not correct.
1. Notice that the last two result for different users are the same. for example `09:02:28` appears twice instead of `9:01:58`
2. Also I have issues with a login without a logout for emp\_id = 3. this happens when the app shuts down unexpectedly.
3 how can i place a 00:00:00 in the case when there is no logout
4. Or what would be your suggestion of what to do in this case?
I need to select a result set that looks like this:
```
| created | login | logout | Employee | Emp_ID |
|------------|----------|----------|-------------|--------|
| 2016-01-20 | 10:44:42 | 18:30:13 | James Brown | 1 |
| 2016-01-21 | 08:46:28 | 13:46:46 | James Brown | 1 |
| 2016-01-22 | 08:49:21 | 22:22:23 | James Brown | 1 |
| 2016-01-22 | 14:42:06 | 00:00:00 | Kemi Ojo | 3 |
| 2016-01-23 | 08:01:47 | 09:01:58 | James Brown | 1 |
| 2016-01-23 | 08:02:06 | 09:02:28 | Kemi Ojo | 3 |
```
[**SQL fiddle**](http://sqlfiddle.com/#!3/465f0/2)
|
First of all, cudos for a prefect question. Table structure, fiddle demo and expected output helps a lot.
Now I tried this and it is working in fiddle. Please recheck and let me know.
```
select t_login.emp_id,t_login.dt_created as created,t_login.login,
case when t_logout.logout is null
then cast('00:00:00' as time)
else t_logout.logout end as logout,
e.fname+' '+e.lname Employee
from
(select emp_id,CAST(created AS DATE) AS dt_created,
MIN(CAST(created AS TIME)) as login
from logins
where action='login'
group by emp_id,CAST(created AS DATE)) t_login
left join
(select emp_id,CAST(created AS DATE) AS dt_created,
max(CAST(created AS TIME)) as logout
from logins
where action='logout'
group by emp_id,CAST(created AS DATE)) t_logout
on t_login.emp_id=t_logout.emp_id
and t_login.dt_created=t_logout.dt_created
inner join
employees e
on e.id=t_login.emp_id
```
PS: This will not take care of cases where there is no login on a particular day and only a logout. If you want that, then use a `full outer join` and use same `case` statement as I used in outer `select` clause.
**See fiddle demo here**
<http://sqlfiddle.com/#!3/465f0/34>
Ouput
```
+---------+-------------+-------------------+-------------------+-------------+
| emp_id | created | login | logout | Employee |
+---------+-------------+-------------------+-------------------+-------------+
| 1 | 2016-01-20 | 10:44:42.0000000 | 18:30:13.0000000 | James Brown |
| 1 | 2016-01-21 | 08:46:28.0000000 | 13:46:46.0000000 | James Brown |
| 1 | 2016-01-22 | 08:49:21.0000000 | 22:22:23.0000000 | James Brown |
| 3 | 2016-01-22 | 14:42:06.0000000 | 00:00:00.0000000 | Kemi Ojo |
| 1 | 2016-01-23 | 08:01:47.0000000 | 09:01:58.0000000 | James Brown |
| 3 | 2016-01-23 | 08:02:06.0000000 | 09:02:28.0000000 | Kemi Ojo |
+---------+-------------+-------------------+-------------------+-------------+
```
|
This is my suggestion (be aware of culture specific date literals. I had to set a language to get a correct date conversion):
```
SET LANGUAGE GERMAN;
DECLARE @logins TABLE(Emp_ID INT,Created DATETIME, Action VARCHAR(100));
INSERT INTO @logins VALUES
(1,'20/01/2016 10:44:42 AM','login')
,(1,'20/01/2016 4:45:49 PM','logout')
,(1,'20/01/2016 6:30:13 PM','logout')
,(1,'21/01/2016 8:46:28 AM','login')
,(1,'21/01/2016 9:46:42 AM','login')
,(1,'21/01/2016 1:46:46 PM','logout')
,(1,'22/01/2016 8:49:21 AM','login')
,(1,'22/01/2016 1:49:27 PM','logout')
,(1,'22/01/2016 2:29:53 PM','login')
,(1,'22/01/2016 2:30:13 PM','logout')
,(3,'22/01/2016 2:42:06 PM','login')
,(1,'22/01/2016 9:57:22 PM','login')
,(1,'22/01/2016 10:22:23 PM','logout')
,(1,'23/01/2016 8:01:47 AM','login')
,(1,'23/01/2016 9:01:58 AM','logout')
,(3,'23/01/2016 8:02:06 AM','login')
,(3,'23/01/2016 9:02:28 AM','logout');
DECLARE @employees TABLE(ID INT,Fname VARCHAR(100),Lname VARCHAR(100));
INSERT INTO @employees VALUES
(1,'James','Brown')
,(2,'Mark','Bond')
,(3,'Kemi','Ojo');
WITH Logins AS
(
SELECT
MIN(PureDate) AS Created
,MIN(CAST(l.Created AS TIME)) AS LoginTime
,l.Emp_ID
FROM @logins AS l
CROSS APPLY(SELECT CAST(l.Created AS DATE) PureDate) AS Created
WHERE l.Action ='login'
GROUP BY l.Emp_ID,PureDate
)
,Logouts AS
(
SELECT
MAX(PureDate) AS Created
,MAX(CAST(l.Created AS TIME)) AS LogoutTime
,l.Emp_ID
FROM @logins AS l
CROSS APPLY(SELECT CAST(l.Created AS DATE) PureDate) AS Created
WHERE l.Action ='logout'
GROUP BY l.Emp_ID,PureDate
)
SELECT Logins.Created
,Logins.LoginTime
,ISNULL(Logouts.LogoutTime,'00:00:00')
,e.Lname
,e.ID
FROM Logins
INNER JOIN @employees AS e ON e.ID = Logins.Emp_ID
LEFT JOIN Logouts ON Logins.Emp_ID = Logouts.Emp_ID
AND Logins.Created = Logouts.Created
ORDER BY Created,LoginTime
```
The result
```
2016-01-20 10:44:42 18:30:13 Brown 1
2016-01-21 08:46:28 13:46:46 Brown 1
2016-01-22 08:49:21 22:22:23 Brown 1
2016-01-22 14:42:06 00:00:00 Ojo 3
2016-01-23 08:01:47 09:01:58 Brown 1
2016-01-23 08:02:06 09:02:28 Ojo 3
```
|
select columns into rows grouped by columns values
|
[
"",
"sql",
"sql-server-2008",
""
] |
Using Postgres 9.4, I am trying to select a single row from from a table that contains data nearest to, but not before, the current system time. The `datetime` colum is a `timestamp without time zone` data type, and the data is in the same timezone as the server. The table structure is:
```
uid | datetime | date | day | time | predictionft | predictioncm | highlow
-----+---------------------+------------+-----+----------+--------------+--------------+---------
1 | 2015-12-31 03:21:00 | 2015/12/31 | Thu | 03:21 AM | 5.3 | 162 | H
2 | 2015-12-31 09:24:00 | 2015/12/31 | Thu | 09:24 AM | 2.4 | 73 | L
3 | 2015-12-31 14:33:00 | 2015/12/31 | Thu | 02:33 PM | 4.4 | 134 | H
4 | 2015-12-31 21:04:00 | 2015/12/31 | Thu | 09:04 PM | 1.1 | 34 | L
```
Query speed is not a worry since the table contains ~1500 rows.
For clarity, if the current server time was `2015-12-31 14:00:00`, the row returned should be `3` rather than `2`.
EDIT:
The solution, based on the accepted answer below, was:
```
select *
from myTable
where datetime =
(select min(datetime)
from myTable
where datetime > now());
```
EDIT 2: Clarified question.
|
The general idea follows. You can adjust it for postgresql.
```
select fields
from yourTable
where datetimeField =
(select min(datetimeField)
from yourTable
where datetimeField > current_timestamp)
```
|
You can also use this. This will be faster. But it wont make much difference if you have few rows.
```
select * from table1
where datetime >= current_timestamp
order by datetime
limit 1
```
`SQLFiddle Demo`
|
Select row with timestamp nearest to, but not later than, now
|
[
"",
"sql",
"postgresql",
""
] |
I'm working on a query right now that will count the number of unique login days per user, place them in a bucket (1-2, 3-4 log in days etc.), and return the number of each users in each bucket by department. I apologize if this isn't too clear, and hopefully the examples below will help illustrate my question.
I have a table like this:
```
+-------------+-----------+------------+-----------------+
| time | user_name | dept | event |
+-------------+-----------+------------+-----------------+
| 2016-01-04 | Joe | finance | logged in |
| 2016-01-04 | Jeff | marketing | logged in |
| 2016-01-04 | Jeff | marketing | project created |
| 2016-01-04 | Bob | finance | logged in |
| 2016-01-04 | Mark | accounting | logged in |
| 2016-01-05 | Bob | finance | logged in |
| 2016-01-08 | Bob | finance | logged in |
| 2016-01-09 | Jeff | marketing | logged in |
| 2016-01-10 | Jeff | marketing | logged in |
| 2016-01-11 | Nate | accounting | logged in |
| 2016-01-11 | Nate | accounting | project created |
+-------------+-----------+------------+-----------------+
```
I want to return a table like this:
```
+------------------+-----------------+------------+
| number of logins | number of users | dept |
+------------------+-----------------+------------+
| 1-2 | 1 | finance |
| 3-4 | 1 | finance |
| 5+ | 0 | finance |
| 1-2 | 0 | marketing |
| 3-4 | 1 | marketing |
| 5+ | 0 | marketing |
| 1-2 | 2 | accounting |
| 3-4 | 0 | accounting |
| 5+ | 0 | accounting |
+------------------+-----------------+------------+
```
As of now, my query looks like:
```
select
(case when count(distinct(`time`)) between 1 and 2 then '1-2'
when count(distinct(`time`)) between 3 and 4 then '3-4'
else '5+'
end) as buckets, dept, user_name
from change_log where event in ('logged in')
group by dept, user_name
```
However, this is returning a table like below, which is the closest I could get to what I want, but I'm not sure how to roll it up to just by the buckets and department.
```
+---------+------------+-----------+
| buckets | dept | user_name |
+---------+------------+-----------+
| 1-2 | accounting | Mark |
| 1-2 | accounting | Nate |
| 3-4 | finance | Bob |
| 1-2 | finance | Joe |
| 3-4 | marketing | Jeff |
+---------+------------+-----------+
```
|
Something like this?
```
select buckets, dept, count(user_name) no_of_u from
(select
(case when count(distinct(`time`)) between 1 and 2 then '1-2'
when count(distinct(`time`)) between 3 and 4 then '3-4'
else '5+'
end) as buckets, dept, user_name
from change_log where event in ('logged in')
group by dept, user_name)
group by buckets, dept
```
|
But you can do it like this:
```
select
ELT(LEAST(count(*),5), '1-2', '1-2', '3-4', '3-4','5+') as buckets,
dept,
count(*) as `number of users`
FROM change_log where event in ('logged in')
WHERE event in ('logged in')
GROUP BY
dept,
ELT(LEAST(count((*),5), '1-2', '1-2', '3-4', '3-4','5+');
```
|
MySQL - Group By Number of Users within Ranges of Unique Login Days and by Department
|
[
"",
"mysql",
"sql",
""
] |
I have table with column id,firstName,lastName,fullName and I want to copy the combined value of column firstName and lastName into fullName column.Is it possible by using query without stored procedure ?
As i know we can set the one column value to other using something like:
```
UPDATE table_name SET column1 = column2
```
|
Use CONCAT function of mysql:
```
UPDATE table_name SET column1 = CONCAT(column2, ' ', column3);
```
|
For `SQL Server`...
`COMPUTED` column:
```
ALTER TABLE dbo.tbl
ADD FullName AS firstName + ' ' + lastName
GO
```
Or permanent `UPDATE`:
```
UPDATE dbo.tbl
SET FullName = firstName + ' ' + lastName
```
|
How to copy the combined value of two column to one Column in same table
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I have a SQL select statement that is basically just
```
SELECT * FROM myTable
WHERE status != 5
```
It is a lot more complex than in reality, but its mostly convoluted column names and joins.
what i want to be able to do is on a parameter being a value, I disable the where clause, i.e.
```
SELECT * FROM myTable
where CASE WHEN @parameter = 'true'
THEN --...add the where clause ('status != 5')
```
It doesn't work like that but does anyone have a different method of approach? i'm using ASP.net with the select in an SQLDatasource if that opens up any options. It's to do with hiding particular records from non-logged in users but showing others.
**TL;DR:** Can I enable/disable a where clause based on a parameter in SQL or ASP.net
|
You could do literally what you're asking (i.e. disable the where clause) using dynamic sql.
```
declare @ssql varchar(max)
set @ssql = 'SELECT * FROM myTable'
+ CASE WHEN @parameter = 'true' THEN ' WHERE status != 5' ELSE '' END
EXEC (@ssql)
```
However, it will be easier to read and modify in future if you can avoid dynamic sql.
Since you need to filter status where param = true, but when param = false you don't want to filter on status, your where clause should look like this:
```
WHERE (@parameter=True AND status!=5) OR (@parameter=false)
```
|
The following will work perfectly in your situation without any use of dynamic SQL. This is based on an assumption that `status` value can never take on a value of `-1`. You would need to come up with a similar value in your situation that status will never ever equal.
When @parameter is true then the query is like this: `SELECT * FROM myTable where status !=5`, but when @parameter is false, then the below query acts like this query: `SELECT * FROM myTable where status != -1` which is effectively returning all records as if the the where clause is not there *since the condition `status != -1` is always true*.
```
SELECT * FROM myTable
where status != CASE WHEN @parameter = 'true' THEN 5 else -1 END
```
|
SQL - Adding/Disabling a Where clause based on a boolean condition in Select Statement (ASP.net SQLDataSource)
|
[
"",
"sql",
"asp.net",
"sql-server",
"vb.net",
"t-sql",
""
] |
Can I modify the next to use the column aliases `avg_time` and `cnt` in an expression `ROUND(avg_time * cnt, 2)`?
```
SELECT
COALESCE(ROUND(stddev_samp(time), 2), 0) as stddev_time,
MAX(time) as max_time,
ROUND(AVG(time), 2) as avg_time,
MIN(time) as min_time,
COUNT(path) as cnt,
ROUND(avg_time * cnt, 2) as slowdown, path
FROM
loadtime
GROUP BY
path
ORDER BY
avg_time DESC
LIMIT 10;
```
**It raises the next error:**
```
ERROR: column "avg_time" does not exist
LINE 7: ROUND(avg_time * cnt, 2) as slowdown, path
```
**The next, however, works fine** (use primary expressions instead of column aliases:
```
SELECT
COALESCE(ROUND(stddev_samp(time), 2), 0) as stddev_time,
MAX(time) as max_time,
ROUND(AVG(time), 2) as avg_time,
MIN(time) as min_time,
COUNT(path) as cnt,
ROUND(AVG(time) * COUNT(path), 2) as slowdown, path
FROM
loadtime
GROUP BY
path
ORDER BY
avg_time DESC
LIMIT 10;
```
|
You can use a previously created alias in the `GROUP BY` or `HAVING` statement but not in a `SELECT` or `WHERE` statement. This is because the program processes all of the `SELECT` statement at the same time and doesn't know the alias' value yet.
The solution **is to encapsulate the query in a subquery and then the alias is available outside.**
```
SELECT stddev_time, max_time, avg_time, min_time, cnt,
ROUND(avg_time * cnt, 2) as slowdown
FROM (
SELECT
COALESCE(ROUND(stddev_samp(time), 2), 0) as stddev_time,
MAX(time) as max_time,
ROUND(AVG(time), 2) as avg_time,
MIN(time) as min_time,
COUNT(path) as cnt,
path
FROM
loadtime
GROUP BY
path
ORDER BY
avg_time DESC
LIMIT 10
) X;
```
|
The order of execution of a query (and thus the *evaluation* of expressions and aliases) is NOT the same as the way it is written. The "general" position is that the clauses are evaluated in this sequence:
```
FROM
WHERE
GROUP BY
HAVING
SELECT
ORDER BY
```
Hence the *column aliases* are unknown to most of the query **until the select clause is complete** (and this is why you **can** use aliases in the ORDER BY clause). However *table aliases* which are established in the from clause are understood in the where to order by clauses.
The most common workaround is to encapsulate your query into a "derived table"
Suggested reading: [Order Of Execution of the SQL query](https://stackoverflow.com/questions/4596467/order-of-execution-of-the-sql-query)
Note: different SQL dbms have different specific rules regarding use of aliases
**EDIT**
The purpose behind reminding readers of the logical clause sequence is that often (*but not always*) aliases only becomes referable AFTER the clause where the alias is declared. The most common of which is that aliases declared in the `SELECT` clause can be used by the `ORDER BY` clause. In particular, an alias declared in a `SELECT` clause cannot be referenced within the same `SELECT` clause.
But please do note that due to differences in products not every dbms will behave in this manner
|
Why can't I use column aliases in the next SELECT expression?
|
[
"",
"sql",
"postgresql",
"column-alias",
""
] |
I'm currently building a YTD report in SSRS. I'm looking to edit the default "FROM" date in a calendar selection.
I'm looking to retrieve January 1st of the previous months year. For example:
(If it's Feb 16th, 2016 .. the result should be **1/1/2016**
If it's Jan 10th, 2016 .. the result should be **1/1/2015**)
I built this to retrieve the current year for jan 1st, but it causes issues if we're in January because I need it to retrieve the year of the previous month (in that case it would be 2015, not 2016).
Thanks!
|
Try this, it should work
```
=DateAdd(DateInterval.Month,-1,DateSerial(Year(Today), Month(Today), 1))
```
**UPDATE:**
Based on your comment I've created this expression. It is not tested but should work.
```
=IIF(Today.Month>1,
DateAdd(DateInterval.Month,-1,DateSerial(Year(Today), Month(Today), 1)),
DateAdd(DateInterval.Year,-1,DateSerial(Year(Today), Month(Today), 1))
)
```
Let me know if this helps.
|
```
select cast(cast(year(dateadd(mm, -1,getdate())) as varchar)+'-01-01' as date)
```
replace getdate() with which ever field you're basing this calculation on.
for testing:
```
select cast(cast(year(dateadd(mm, -1,'2015-01-22')) as varchar)+'-01-01' as date)
select cast(cast(year(dateadd(mm, -1,'2016-02-01')) as varchar)+'-01-01' as date)
select cast(cast(year(dateadd(mm, -1,'2015-12-12')) as varchar)+'-01-01' as date)
```
|
Get Year of Previous Month (For Jan 1st)
|
[
"",
"sql",
"sql-server",
"reporting-services",
""
] |
I have a function like this in SQL Server. I can not understand what is it. Can anyone please explain it to me briefly?
```
6371 * ACOS(ROUND(
COS(RADIANS(Latitude1)) *
COS(RADIANS([Latitude2])) *
COS(RADIANS(Longitude1) - RADIANS(Longitude2)) +
SIN(RADIANS(Latitude1)) *
SIN(RADIANS([Latitude2])), 15, 2))*1000 <= 500
```
|
You are looking at the Haversine formula, which computes the distance on a sphere using their latitude and longitude coordinates. Here is a query using the full Haversine formula:
```
SELECT
ACOS(
COS(RADIANS(Latitude1))
* COS(RADIANS(Latitude2))
* COS(RADIANS(Longitude1) - RADIANS(Longitude2))
+ SIN(RADIANS(Latitude1))
* SIN(RADIANS(Latitude2))
) AS haversineDistance
FROM yourTable
```
Have a look at [this SO article](https://stackoverflow.com/questions/574691/mysql-great-circle-distance-haversine-formula) or [Wikipedia](https://en.m.wikipedia.org/wiki/Haversine_formula) for more information.
|
Just like @TimBiegeleisen and @RobertRodkey have said, it's an old school formula for calculating the distance between two points on a Earth.
So this leads to some other questions:
1. Q: Doesn't SqlServer have geospatial methods available? A: **Yes** for sql server version 2008 and later.
2. Q: If Sql has this available using the `Geography` methods, should I use the above math? A: **No** *if* you're on Sql Server 2008 or later. If before that, then you have no choice but you have to use that way/formula.
3. Q: Are Narwhales, unicorns of the ocean? A: **Yes**.
Further show off tip - use [STDistance](https://msdn.microsoft.com/en-us/library/bb933808.aspx) .. that's your friend :)
(browser code, not tested, etc)...
```
-- Arrange.
DECLARE @longitude1 FLOAT = -122.360,
@latitude1 FLOAT = 47.656,
@longitude2 FLOAT = -122.343,
@latitude2 FLOAT = 47.656;
DECLARE @point1 GEOGRAPHY = GEOGRAPHY::STGeomFromText('POINT(' + CAST(@longitude1 AS VARCHAR(10)) + ' ' + CAST(@latitude1 AS VARCHAR(10)) + ')', 4326);
DECLARE @point2 GEOGRAPHY = GEOGRAPHY::STGeomFromText('POINT(' + CAST(@longitude2 AS VARCHAR(10)) + ' ' + CAST(@latitude2 AS VARCHAR(10)) + ')', 4326);
-- Determine the distance (which would be in metres because we're using the 4326 == [common GPS format](http://spatialreference.org/ref/epsg/wgs-84/)).
SELECT @point1.STDistance(@point2);
```
|
Query using Longitude and Latitude in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have a table of logins that looks like this:
```
Created | Action
20/01/2016 08:00:00 AM login
20/01/2016 10:05:10 AM logout
20/01/2016 12:00:00 PM login
20/01/2016 04:12:22 PM logout
21/01/2016 08:00:50 AM login
21/01/2016 09:44:42 AM login
21/01/2016 10:44:42 AM login
21/01/2016 04:00:42 PM logout
```
I need to select a result set that looks like this:
```
Created | Login | Logout
20/01/2016 08:00:00 AM 04:12:22 PM
21/01/2016 08:00:50 PM 04:00:42 PM
```
Here is what I have tried:
```
SELECT
CONVERT(VARCHAR(10),li.Created,10) [Date],
CONVERT(VARCHAR(8),MAX(li.Created),8) [Login],
CONVERT(VARCHAR(8),MAX(lo.Created),8) [Logout]
FROM Logins li
LEFT JOIN Logins lo ON lo.[Action] = 'logout'
GROUP BY li.Created
```
But the result is not grouped by date.
What is the proper way?
|
Similar to Rahul's answer, but you can do a self join to get first login/last logout and just group by the date to get the result you're asking for;
```
SELECT CONVERT(DATE, li.created) [Date],
CONVERT(TIME, MIN(li.created)) [Login],
CONVERT(TIME, MAX(lo.created)) [Logout]
FROM Logins li
JOIN Logins lo
ON CONVERT(DATE, li.created) = CONVERT(DATE, lo.created)
AND li.action = 'login'
AND lo.action = 'logout'
GROUP BY CONVERT(DATE, li.created)
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!3/13f0e/1).
EDIT: untested for SQL Server 2005, you may need to set a length for the varchars;
```
SELECT CONVERT(VARCHAR, li.created, 110) [Date],
CONVERT(VARCHAR, MAX(li.created), 8) [Login],
CONVERT(VARCHAR, MAX(lo.created), 8) [Logout]
FROM Logins li
JOIN Logins lo
ON CONVERT(VARCHAR, li.created, 110) = CONVERT(VARCHAR, lo.created, 110)
AND li.action = 'login'
AND lo.action = 'logout'
GROUP BY CONVERT(VARCHAR, li.created, 110)
```
[Another SQLfiddle](http://sqlfiddle.com/#!3/d2872/2).
|
You can try like this:
```
SELECT
CONVERT(VARCHAR(10),li.Created,10) [Date],
CONVERT(VARCHAR(8),MAX(li.Created),8) [Login],
CONVERT(VARCHAR(8),MAX(lo.Created),8) [Logout]
From Logins li
Left Join Logins lo on lo.[Action] = 'logout'
GROUP BY CAST(li.Created AS DATE)
```
ie, you need to group by your records on the date only and not by date and time both.
|
select group same column by date into different column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
```
Server version: 5.7.10 MySQL Community Server (GPL)
```
In MySQL, I have a user mangos. The user worked perfectly when I created it. After rebooting my computer, though, attempting to login to mangos yielded this output:
```
$ mysql -u mangos -p
Enter password:
ERROR 1524 (HY000): Plugin '*some_random_long_hash_I_cannot_remember' is not loaded
$
```
It kind of reminded me of a password hash, so after investigating mysql.user, I found that mangos had no password!
I updated the password:
```
SET PASSWORD FOR 'mangos'@'127.0.0.1' = PASSWORD('mangos');
FLUSH PRIVILEGES;
```
Now, I get:
```
ERROR 1524 (HY000): Plugin '*3FBBDB84EA2B2A0EA599948396AD622B7FF68183' is not loaded
```
`3FBBDB84EA2B2A0EA599948396AD622B7FF68183` is the same number shown in the password column of mysql.user for mangos, and is a different number than originally. I still can't log in.
**How do I make MySQL recognize a password properly?** Is that even the issue here?
*Edits:*
```
mysql> SELECT * FROM mysql.user WHERE user = 'mangos' \G
*************************** 1. row ***************************
Host: localhost
User: mangos
Password: *3FBBDB84EA2B2A0EA599948396AD622B7FF68183
Select_priv: N
Insert_priv: N
Update_priv: N
Delete_priv: N
Create_priv: N
Drop_priv: N
Reload_priv: N
Shutdown_priv: N
Process_priv: N
File_priv: N
Grant_priv: N
References_priv: N
Index_priv: N
Alter_priv: N
Show_db_priv: N
Super_priv: N
Create_tmp_table_priv: N
Lock_tables_priv: N
Execute_priv: N
Repl_slave_priv: N
Repl_client_priv: N
Create_view_priv: N
Show_view_priv: N
Create_routine_priv: N
Alter_routine_priv: N
Create_user_priv: N
Event_priv: N
Trigger_priv: N
Create_tablespace_priv: N
ssl_type:
ssl_cipher:
x509_issuer:
x509_subject:
max_questions: 0
max_updates: 0
max_connections: 0
max_user_connections: 0
plugin: *3FBBDB84EA2B2A0EA599948396AD622B7FF68183
authentication_string: NULL
password_expired: N
```
|
It appears your user table is corrupted. Likely the reboot you mentioned triggered an upgrade to MySQL and the `mysql_upgrade` script was not run. This should resolve the situation:
```
mysql_upgrade -u root -ppassword --skip-grant-tables
mysql -u root -ppassword -e "UPDATE mysql.user SET plugin = 'mysql_native_password' WHERE user = 'mangos'; FLUSH PRIVILEGES"
```
Source: [http://kb.odin.com/en/126676](https://web.archive.org/web/20160221002152/http://kb.odin.com/en/126676)
Providing the `--force` option to `mysql_upgrade` will re-apply the upgrade scripts even if an upgrade has already been done. This may be needed in case of partial restoration from backup.
Also worth mentioning, the command to change a user password has changed in [MySQL 5.7.6](http://dev.mysql.com/doc/refman/5.7/en/set-password.html) / [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb/alter-user/) and forward:
```
ALTER USER mangos IDENTIFIED BY 'mangos';
```
This is now the preferred method for setting the password, although the older `SET PASSWORD` syntax is not officially deprecated.
|
`mysql_upgrade` (suggested by @miken32) wasn't working for me, so I had to do it the hard way, by shutting down the service and using `mysqld_safe`, as explained [here](https://askubuntu.com/questions/489098/unable-to-reset-root-password-of-mysql/671593#671593).
**UPDATE**: Actually, that didn't work either, so I had to do the hard hard hard way (beware, **this deletes all your databases**):
1. `sudo killall mysqld`
2. `sudo rm -rf /var/lib/mysql`
3. `sudo apt-get purge mysql-server`
4. Install `mysql-server` package again.
|
MySQL Won't let User Login: Error 1524
|
[
"",
"mysql",
"sql",
""
] |
Say I have some data like:
```
grp v1 v2
--- -- --
2 5 7
2 4 9
3 10 2
3 11 1
```
I'd like to create new columns which are independent of the ordering of the table - such that the two columns have independent orderings, i.e. sort by v1 independently of v2, while partitioning by grp.
The result (independently ordered, partitioned by grp) would be:
```
grp v1 v2 v1_ordered v2_ordered
--- -- -- ---------- ----------
2 5 7 4 7
2 4 9 5 9
3 10 2 10 1
3 11 1 11 2
```
One way to do this is to create two tables and CROSS JOIN. However, I'm working with too many rows of data for this to be computationally tractable - is there a way to do this within a single query without a JOIN?
Basically, I'd like to write SQL like:
```
SELECT
*,
v1 OVER (PARTITION BY grp ORDER BY v1 ASC) as v1_ordered,
v2 OVER (PARTITION BY grp ORDER BY v2 ASC) as v2_ordered
FROM [example_table]
```
This breaks table row meaning, but it's a necessary feature for many applications - for example computing ordered correlation between two fields `CORR(v1_ordered, v2_ordered).`
Is this possible?
|
I think you are in right direction! You just need to use proper window function . Row\_number() in this case. And it should work!
Adding working example as per @cgn request:
I dont think there is way to totally avoid use of JOIN.
At the same time below example uses just **ONE JOIN** vs **TWO JOIN**s in other answers:
```
SELECT
a.grp AS grp,
a.v1 AS v1,
a.v2 AS v2,
a.v1 AS v1_ordered,
b.v2 AS v2_ordered
FROM (
SELECT grp, v1, v2, ROW_NUMBER() OVER(PARTITION BY grp ORDER BY v1) AS v1_order
FROM [example_table]
) AS a
JOIN (
SELECT grp, v1, v2, ROW_NUMBER() OVER(PARTITION BY grp ORDER BY v2) AS v2_order
FROM [example_table]
) AS b
ON a.grp = b.grp AND a.v1_order = b.v2_order
```
Result is as expected:
```
grp v1 v2 v1_ordered v2_ordered
2 4 9 4 7
2 5 7 5 9
3 10 2 10 1
3 11 1 11 2
```
And now you can use CORR() as below
```
SELECT grp, CORR(v1_ordered, v2_ordered) AS [corr]
FROM (
SELECT
a.grp AS grp,
a.v1 AS v1,
a.v2 AS v2,
a.v1 AS v1_ordered,
b.v2 AS v2_ordered
FROM (
SELECT grp, v1, v2, ROW_NUMBER() OVER(PARTITION BY grp ORDER BY v1) AS v1_order
FROM [example_table]
) AS a
JOIN (
SELECT grp, v1, v2, ROW_NUMBER() OVER(PARTITION BY grp ORDER BY v2) AS v2_order
FROM [example_table]
) AS b
ON a.grp = b.grp AND a.v1_order = b.v2_order
)
GROUP BY grp
```
|
This will work for you.
`SQLFiddle Demo in SQL Server`
Note: The sequence you mentioned in the sample, is not necessary how the rows are returned from database. In my case, for `v1`, I got `4,5,10,11` unlike your `5,4,10,11`. However, your output will be same as you wanted.
```
Select t.grp,t.v1,t.v2,
v1.v1 as v1_ordered,v2.v2 as v2_ordered
From
(
select t1.*,
row_number() over (partition by grp
Order by v1) v1o
,
row_number() over (partition by grp
Order by v2) v2o
from table1 t1
) t
Inner join
(
Select t.*,
row_number() over (partition by grp
Order by v1) v1o
From table1 t
) v1
On t.grp=v1.grp
And t.v1o=v1.v1o
Inner join
(
Select t.*,
row_number() over (partition by grp
Order by v2) v2o
From table1 t
) v2
On t.grp=v2.grp
And t.v1o=v2.v2o
```
Output:
```
+------+-----+-----+-------------+------------+
| grp | v1 | v2 | v1_ordered | v2_ordered |
+------+-----+-----+-------------+------------+
| 2 | 4 | 9 | 4 | 7 |
| 2 | 5 | 7 | 5 | 9 |
| 3 | 10 | 2 | 10 | 1 |
| 3 | 11 | 1 | 11 | 2 |
+------+-----+-----+-------------+------------+
```
|
Google BigQuery SQL: Order two columns independently
|
[
"",
"sql",
"sorting",
"google-bigquery",
"window-functions",
"database-partitioning",
""
] |
I'm selecting one customer from `customers` table and trying to get the latest order number for the customer from a `jobs` table, but based on a different timestamp column in that table.
**Customers**
```
id | name | ...
```
**Jobs**
```
id | customer_id | order_id | assigned
----------------------------------------------------
1 | 985 | 8020 | 2015-12-03 00:00:00
2 | 985 | 4567 | 2015-04-19 00:00:00
3 | 985 | 9390 | 2016-20-01 00:00:00
4 | 985 | 6381 | 2015-08-26 00:00:00
```
The latest `order_id` which should be joined is **9390** because the `assigned` timestamp is the latest.
**SQL**
```
SELECT c.name, j.latest_order
FROM customers c
LEFT JOIN (
SELECT customer_id,
??? AS latest_order
FROM jobs
WHERE withdrawn IS NULL
GROUP BY customer_id
) j ON j.customer_id = c.id
WHERE c.id = 985
```
I can't really figure out the best way to get the `latest_order` in the sub query, but it should be the `jobs.order_id` where `jobs.assigned = MAX(jobs.assigned)` for that customer.
|
Use one more derived table to get the order number for the latest order.
```
SELECT c.name, t.order_id, j.latest_order
FROM customers c
JOIN (
SELECT customer_id,
max(assigned) AS latest_order
FROM jobs
WHERE withdrawn IS NULL
GROUP BY customer_id
) j ON j.customer_id = c.id
JOIN (select customer_id, order_id, assinged
from jobs
where withdrawn is null) t
ON t.customer_id = j.customer_id and t.assigned = j.latest_order
WHERE c.id = 985
```
|
I'm not sure if this is the fastest way but maybe you want to try an explain against the alternate answer to compare both performances:
```
SELECT c.name, j.latest_order
FROM customers c
LEFT JOIN (
SELECT j1.customer_id, j1.order_id
FROM jobs j1
JOIN (
SELECT customer_id, max(assigned) as maxassigned
FROM jobs j2
WHERE withdrawn IS NULL
GROUP BY customer_id
) j2
ON j1.customer_id = j2.customer_id and j1.assigned = j2.maxassigned) j
ON j.customer_id = c.id
AND c.id = 985;
```
|
MySQL - Join order number of latest order based on timestamp
|
[
"",
"mysql",
"sql",
""
] |
I would like to maintain a system for uploading data through Excel to SQL Server with ADO method. The process consists of two steps:
1. the raw data is inserted to temporary table, say `dbo.TableTemp`
2. the raw data is processed with a stored procedure and inserted to a `dbo.GoodTable`
3. `delete from dbo.TableTemp` at the end of stored procedure
Is there any way to be sure that the activities of two users not overlap? For example the `delete from dbo.TableTemp` of user1 will not be executed after user2 inserts data and before the data are processed?
**Update.** Unluckily I have not been successful with `#temp` tables. They seem to be too much temporary and when I try to insert data into them #temps already do not exist. For uploading data I use the variation of code by Sergey Vaselenko downloaded from here: [http://www.excel-sql-server.com/excel-sql-server-import-export-using-vba.htm#Excel Data Export to SQL Server using ADO](http://www.excel-sql-server.com/excel-sql-server-import-export-using-vba.htm#Excel%20Data%20Export%20to%20SQL%20Server%20using%20ADO)
In the Sergey's solution it is possible to create table by stored procedure prior to inserting the data in step 1. But when I create `#temp` table with stored procedure, it vanishes at the end of procedure, so I cannot insert data to it. Any help please?
|
Use [temporary tables](https://technet.microsoft.com/en-us/library/ms177399(v=sql.105).aspx) `#TableTemp`. Those are specific for each session and thus would not overlap.
> There are two types of temporary tables: local and global. They differ
> from each other in their names, their visibility, and their
> availability. Local temporary tables have a single number sign (#) as
> the first character of their names; they are visible only to the
> current connection for the user, and they are deleted when the user
> disconnects from the instance of SQL Server. Global temporary tables
> have two number signs (##) as the first characters of their names;
> they are visible to any user after they are created, and they are
> deleted when all users referencing the table disconnect from the
> instance of SQL Server.
**Update.** Looks like this particular [Excel-SQL Server Import-Export using VBA](http://www.excel-sql-server.com/excel-sql-server-import-export-using-vba.htm) use separate functions to create table and upload the data each opening and closing own connection. From SQL Server perspective those functions operate in different sessions and thus temporary tables do not persist. I think this solution can be rewritten to use single connection to create temporary table, populate, process the data and output the results into permanent table.
You might also find useful this question: [How do I make an ADODB.Connection Persistent in VBA in Excel?](https://stackoverflow.com/questions/6642548/how-do-i-make-an-adodb-connection-persistent-in-vba-in-excel) In particular - **Kevin Pope**'s answer suggesting the use of global connection variable opened and closed with the workbook itself:
> ```
> Global dbConnPublic As ADODB.Connection
> ```
>
> In the "ThisWorkbook" object:
>
> ```
> Private Sub Workbook_Open()
> Set dbConnPublic = openDBConn() 'Or whatever your DB connection function is called
> End Sub
> Private Sub Workbook_BeforeClose(Cancel As Boolean)
> dbConnPublic.Close
> End Sub
> ```
|
Another approach - use `TABLE` variable. [From MSDN](https://msdn.microsoft.com/en-us/library/ms175010.aspx)
```
CREATE @AddedValues TABLE (ID INT, SomeValue VARCHAR(50))
```
Then use it normally as tables in the query.
```
INSERT INTO @AddedValues (ID, SomeValue) VALUES (1, 'Test');
SELECT ID FROM @AddedValues WHERE SomeValue = 'Test';
```
Table variable's scope limited to the batch. So you can be sure that other user or even same user will not access it from another batch.
From MSDN
> A table variable behaves like a local variable. It has a well-defined
> scope. This is the function, stored procedure, or batch that it is
> declared in.
|
Handling insert and delete with multiple users in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I have a table as follows
```
Col1 Col2
12 34
34 12
```
Considering, these are duplicates how we delete them?
I tried solving this question using self joins. But I am not able to get the required answer. Can someone help?
|
you can use [GREATEST](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_greatest) and [LEAST](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_least) to 'order' your columns, and then use distinct:
```
SELECT DISTINCT GREATEST(col1, col2) as first, LEAST(col1, col2) as second from yourTable
```
This will give you a distinct result. If what you're looking for is delete, you can delete everything not in this result:
```
DELETE FROM yourTable where (col1, col2) NOT IN (
SELECT DISTINCT GREATEST(col1, col2) as first, LEAST(col1, col2) as second from yourTable
)
```
|
I assume you want to have a symmetric relation: for example, if A is a friend of B, then B is also a friend of A? I also assume that both columns are foreign IDs, and numeric. If this is not so, you will have to adapt.
The best way is to never ever insert the two versions at all; normalise the relation so that the smaller one is always in Col1, and larger one always in Col2. I.e. for `13 -> 27` you would insert `[13, 27]`; for `27 -> 13`, you would again insert `[13, 27]`, if it wasn't present again.
If you already have a messed up table, I'd probably just do:
```
UPDATE IGNORE t
SET col1=(@temp:=col1), col1 = col2, col2 = @temp
WHERE col1 > col2;
```
to normalise it (didn't try, could have errors; also, MySQL syntax, you'd probably have to adapt for other engines); then this to remove the extras in case both directions existed:
```
DELETE FROM t
WHERE col1 > col2;
```
|
How to remove duplicate tuples using SQL
|
[
"",
"mysql",
"sql",
""
] |
Given a table structured like that:
```
id | news_id(fkey)| status | date
1 10 PUBLISHED 2016-01-10
2 20 UNPUBLISHED 2016-01-10
3 10 UNPUBLISHED 2016-01-12
4 10 PUBLISHED 2016-01-15
5 10 UNPUBLISHED 2016-01-16
6 20 PUBLISHED 2016-01-18
7 10 PUBLISHED 2016-01-18
8 20 UNPUBLISHED 2016-01-20
9 30 PUBLISHED 2016-01-20
10 30 UNPUBLISHED 2016-01-21
```
I'd like to count distinct news that, in given period time, had first and last status equal(and also status equal to given in query)
So, for this table query from 2016-01-01 to 2016-02-01 would return:
* 1 (with `WHERE status = 'PUBLISHED'`) because news\_id 10 had PUBLISHED in both first( 2016-01-10 ) and last row (2016-01-18)
* 1 (with `WHERE status = 'UNPUBLISHED'` because news\_id 20 had UNPUBLISHED in both first and last row
notice how news\_id = 30 does not appear in results, as his first/last statuses were contrary.
I have done that using following query:
```
SELECT count(*) FROM
(
SELECT DISTINCT ON (news_id)
news_id, status as first_status
FROM news_events
where date >= '2015-11-12 15:01:56.195'
ORDER BY news_id, date
) first
JOIN (
SELECT DISTINCT ON (news_id)
news_id, status as last_status
FROM news_events
where date >= '2015-11-12 15:01:56.195'
ORDER BY news_id, date DESC
) last
using (news_id)
where first_status = last_status
and first_status = 'PUBLISHED'
```
Now, I have to transform query into SQL our internal Java framework, unfortunately it does not support subqueries, except when using `EXISTS` or `NOT EXISTS`. I was told to transform the query to one using `EXISTS` clause(if it is possible) or try finding another solution. I am, however, clueless. Could anyone help me do that?
edit: As I am being told right now, the problem lies not with our framework, but in Hibernate - if I understood correctly, "you cannot join an inner select in HQL" (?)
|
First of all, subqueries are a substantial part of SQL. A framework forbidding their use is a bad framework.
However, "first" and "last" can be expressed with NOT EXISTS: where not exists an earlier or later entry for the same news\_id and date range.
```
select count(*)
from mytable first
join mytable last on last.news_id = first.news_id
where date between @from and @to
and not exists
(
select *
from mytable before_first
where before_first.news_id = first.news_id
and before_first.date < first.date
and before_first.date >= @from
)
and not exists
(
select *
from mytable after_last
where after_last.news_id = last.news_id
and after_last.date > last.date
and after_last.date <= @to
)
and first.status = @status
and last.status = @status;
```
|
Not sure if this adresses you problem correctly, since it is more of a workaround. But considering the following:
News need to be published before they can be "unpublished". So if you'd add 1 for each "published" and substract 1 for each "unpublished" your balance will be positive (or 1 to be exact) if first and last is "published". It will be 0 if you have as many unpublished as published and negative, if it has more unpublished than published (which logically cannot be the case but obviously might arise, since you set a date threshhold in the query where a 'published' might be occured before).
You might use this query to find out:
```
SELECT SUM(CASE status WHEN 'PUBLISHED' THEN 1 ELSE -1 END) AS 'publishbalance'
FROM news_events
WHERE date >= '2015-11-12 15:01:56.195'
GROUP BY news_id
```
|
Count how many first and last entries in given period of time are equal
|
[
"",
"sql",
"database",
"hibernate",
"postgresql",
"hql",
""
] |
I have a table with Transactions, amongst whose columns are `id`, `created_at`, and `company_id`. I'd like to group the four first transactions of every company and return the created\_at values of each transaction on each row.
In other words, I want each row of my output to correspond to the four first transactions of each company (so grouping by `company_id`) with columns showing me the `company_id` and the created\_at of each of those four transactions.
How do I do that?
Sample data:
```
id | company_id | created_at
---------------------------------
1123 | abcd | 10/12/2015
8291 | abcd | 10/14/2015
9012 | abcd | 10/15/2015
9540 | abcd | 10/16/2015
10342 | abcd | 10/21/2015
10456 | abcd | 10/22/2015
2301 | efgh | 10/13/2015
4000 | efgh | 11/01/2015
4023 | efgh | 11/03/2015
6239 | efgh | 11/08/2015
7500 | efgh | 11/14/2015
```
Sample output:
```
company_id | created_at_1 | created_at_2 | created_at_3 | created_at_4
--------------------------------------------------------------------------
abcd | 10/12/2015 | 10/14/2015 | 10/15/2015 | 10/16/2015
efgh | 10/13/2015 | 11/01/2015 | 11/03/2015 | 11/08/2015
```
|
```
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
,company_id VARCHAR(12) NOT NULL
,created_at DATE NOT NULL
);
INSERT INTO my_table VALUES
( 1123,'abcd','2015/10/12'),
( 8291,'abcd','2015/10/14'),
( 9012,'abcd','2015/10/15'),
( 9540,'abcd','2015/10/16'),
(10342,'abcd','2015/10/21'),
(10456,'abcd','2015/10/22'),
( 2301,'efgh','2015/10/13'),
( 4000,'efgh','2015/11/01'),
( 4023,'efgh','2015/11/03'),
( 6239,'efgh','2015/11/08'),
( 7500,'efgh','2015/11/14');
SELECT x.*
FROM my_table x
JOIN my_table y
ON y.company_id = x.company_id
AND y.created_at <= x.created_at
GROUP
BY x.id
HAVING COUNT(*) <= 4
ORDER
BY company_id
, created_at;
+------+------------+------------+
| id | company_id | created_at |
+------+------------+------------+
| 1123 | abcd | 2015-10-12 |
| 8291 | abcd | 2015-10-14 |
| 9012 | abcd | 2015-10-15 |
| 9540 | abcd | 2015-10-16 |
| 2301 | efgh | 2015-10-13 |
| 4000 | efgh | 2015-11-01 |
| 4023 | efgh | 2015-11-03 |
| 6239 | efgh | 2015-11-08 |
+------+------------+------------+
```
A solution with variables will be orders of magnitude faster, e.g...
```
SELECT a.id
, a.company_id
, a.created_at
FROM
( SELECT x.*
, CASE WHEN @prev = x.company_id THEN @i:=@i+1 ELSE @i:=1 END i, @prev:=x.company_id prev
FROM my_table x
, (SELECT @i:=1,@prev:=null) vars
ORDER
BY x.company_id
, x.created_at
) a
WHERE i <= 4;
```
|
One possible way is the following:
```
select company_id,
min(created_at) as created_at_1,
(select created_at from t where company_id=t1.company_id order by created_at limit 1 offset 1) as created_at_2,
(select created_at from t where company_id=t1.company_id order by created_at limit 1 offset 2) as created_at_3,
(select created_at from t where company_id=t1.company_id order by created_at limit 1 offset 3) as created_at_4
from t as t1
group by company_id
```
EDIT:
Another possibility (inspired by [this answer](https://stackoverflow.com/a/3333697/3573263)) is:
```
select company_id,
min(created_at) as created_at_1,
min(case r when 2 then created_at else null end) as created_at_2,
min(case r when 3 then created_at else null end) as created_at_3,
min(case r when 4 then created_at else null end) as created_at_4
from (
select company_id, created_at,
(case company_id when @curType
then @curRank := @curRank + 1
else @curRank := 1 and @curType := company_id end)+1 as r
from t, (select @curRank := 0, @curType := '') f
order by company_id, created_at
) as o
where r <= 4
group by company_id
```
|
Group exactly four rows sql
|
[
"",
"mysql",
"sql",
""
] |
I have the following text:
```
Price is .....
return policy is....
Mega's accept....
```
I want to insert this whole text into a column of type `TEXT`
What I wrote was:
```
Insert into A (id,msg) values (5, ' Price is .....
return policy is....
Mega's store online ....')
```
`Mega's` is the name of the store. The `'` in this name causes the query to think that it's the end of the text...
So I get the error message:
> ERROR: column "Price is ....." does not exist
I tried to Insert with SELECT, but that doesn't work either. I get the same error message.
How do I perform this insert?
|
If you don't want to (or can't) change the actual text, use [dollar quoting](http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING). Just replace the first and last `'` with e.g. `$$`:
```
Insert into A (id,msg) values (5, $$ Price is .....
return policy is....
Mega's store online ....$$)
```
You need to use a delimiter that does not occur in the actual text. If your text *can* contain `$$` you can use something different, e.g: `$not_there$`
```
Insert into A (id,msg) values (5, $not_there$ Price is .....
return policy is....
Mega's store online ....$not_there$)
```
|
You can just double the single quote inside the string: `'Mega''s store online'`.
Alternatively, you can pass any string through the function `quote_literal()` and that will then take care of proper quoting and other issues such as [SQL-injection](http://bobby-tables.com). That is useful when the string comes from some application that allows single quotes in a string.
The `INSERT` command should be:
```
Insert into A (id,msg) values (5, ' Price is .....'
'return policy is....'
'Mega''s store online ....');
```
You can break a long string over multiple lines, but every lines has to have an opening and closing single quote.
|
How to enter text with ' into table?
|
[
"",
"sql",
"postgresql",
""
] |
SQL Server 2014 database. Table with 200 million rows.
Very large query with HUGE IN clause.
I originally wrote this query for them, but they have grown the IN clause to over 700 entries. The CTE looks unnecessary because I have omitted all the select columns and their substring() transformations for simplicity.
The focus is on the **IN clause**. 700+ pairs of these.
```
WITH cte AS (
SELECT *
FROM AODS-DB1B
WHERE
Source+'-'+Target
IN
(
'ACY-DTW',
'ACY-ATL',
'ACY-ORD',
:
: 700+ of these pairs
:
'HTS-PGD',
'PIE-BMI',
'PGD-HTS'
)
)
SELECT *
FROM cte
order by Source, Target, YEAR, QUARTER
```
When running, this query shoots CPU to 100% for hours - not unexpectedly.
There are indexes on all columns involved.
**Question 1**: Is there a better, or more effecient way to accomplish this query other than the huge IN clause? Would 700 UNION ALLs be better?
**Question 2**: When this query runs, it creates a Session\_ID that contains 49 "threads" (49 processes that all have the same Session\_ID). Every one of them an instance of this query with it's "Command" being this query text.
21 of them SUSPENDED,
14 of them RUNNING, and
14 of them RUNNABLE.
This changes rapidly as the task is running.
WHAT the *heck* is going on there? Is this SQL Server breaking the query up into pieces to work on it?
|
I recommend you store your 700+ strings in a permanent table as it is generally perceived as bad practice to store that much meta data in a script. You can create the table like this:
```
CREATE TABLE dbo.LookUp(Source varchar(250), Target varchar(250))
CREATE INDEX IX_Lookup_Source_Target on dbo.Lookup(Source,Target)
INSERT INTO dbo.Lookup (Source,Target)
SELECT 'ACY','DTW'
UNION
SELECT 'ACY','ATL'
.......
```
and then you can simply join on this table:
```
SELECT * FROM [AODS-DB1B] a
INNER JOIN dbo.Lookup lt ON lt.Source = a.Source
AND lt.Target=a.Target
ORDER BY Source, Target, YEAR, QUARTER
```
However, even better would be to normalise the `AODS-DB1B` table and store `SourceId` and `TargetId` INT values instead, with the VARCHAR values stored in `Source` and `Target` tables. You can then write a query that only performs integer comparisons rather than string comparisons and this should be much faster.
|
Put all of your codes into a temporary table (or permamnent if suitable).....
```
SELECT *
FROM AODS-DB1B
INNER JOIN NEW_TABLE ON Source+'-'+Target = NEWTABLE.Code
WHERE
...
...
```
|
SQL Server query with extremely large IN clause results in numerous queries in activity monitor
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
In my application I've got "articles" (similar to posts/tweets/articles) that are tagged with descriptive predefined tags: i.e "difficult", "easy", "red", "blue", "business" etc
These available tags are stored in a table, call it "tags" that contains all available tags.
Each article can be tagged with multiple tags, editable through a custom admin interface.
It could be tempting to simply bundle the tags for each entity into a stringified array of the IDs of each tag and store it alongside the article record in my "articles" table:
```
id | title | author | tags
---+-------+--------+-------------
1 | title | TG | "[1,4,7,12]"
```
though I'm sure this is a bad idea for a number of reasons, is there ever a reasonable reason to do the above?
|
I think you should read about [Database normalization](https://en.wikipedia.org/wiki/Database_normalization) and decide for yourself. In short though, there are a number of issues with your proposal, but you may decide you can live with them.
The most obvious are:
1. What if an additional tag is added to row(1)? Do you have to first parse, check if it's already present then update the row to be `tags.append(newTag)`.
2. Worse still deleting a tag? Search tags, is present, re-create tags.
3. What if a tag is to change name - some moderation process, perhaps?
4. Worse again, what about dfferent people specifying a tag-name differently - it'd be hard to rationalise.
5. What if you want to query data based on tags? Your query becomes far more complex than it would need to be.
6. Presentation: The client has to parse the tag in order to use it. What about the separator field? Change that and all clients have to change.
In short, all of these operations become harder and more cumbersome. Normalization is designed to overcome such issues. Probably the only reason for doing what you say, IMO, is that you're capturing the data as a one-off and it's informational only - that is, makes sense to a user but not to a system per-se. This is kind of like saying it's probably best avoided (again, IMO).
|
It seems to me like you want to have a separate table that stores tags and holds a foreign key which relates the tag records back to their parent record in the articles table (this is referred to as "normalizing" the database structure).
Doing it like you have suggested by cramming the tags into one field may seem to make sense now, but it will prove to be difficult to maintain and difficult/time consuming to pull the values out efficiently as your application grows in size or the amount of data grows a lot larger.
I would say that there are very few reasons to do what you have suggested, given how straightforward it is to create another table and setup a relationship to link keys between the two tables to maintain referential integrity.
|
Is it ever a good idea to store an array as a field value, or store array values as records?
|
[
"",
"sql",
"database",
"postgresql",
"database-design",
""
] |
I am trying to write a SQL query in SQL Server 2008 R2 that will allow a user to search a database table by a number of parameters. The way this should work is, my user enters his criteria and the query looks for all close matches, while ignoring those criteria for which the user did not enter a value.
I've written my query using `LIKE` and parameters, like so:
```
select item
from [item]
where a like @a and b like @b and c like @c ...
```
where '`a`', '`b`', and '`c`' are table columns, and my `@` parameters all default to '`%`' wildcards. This goes on for about twenty different columns, and that leads to my problem: if this query is entered as is, no input, just wildcards, it returns no results. Yet this table contains over 30,000 rows, so an all-wildcard query should return the whole table. Obviously I'm going about this the wrong way, but I don't know how to correct it.
I can't use '`contains`' or '`freetext`', as those look for whole words, and I need to match user input no matter where it occurs in the actual column value. I've tried breaking my query up into individual steps using 'intersect', but that doesn't change anything. Does anyone know a better way to do this?
|
Thanks for the guidance, all. It turns out that the table I'm querying can easily contain null values in the columns I'm searching against, so I expanded my query to say "where (a like @a or a is null) and ... " and it works now.
|
To allow for null inputs, this is a good pattern:
```
select * from my table where ColA LIKE isnull(@a, ColA) AND ColB like isnull(@b, ColB)
```
This avoids having to construct and execute a dynamic SQL statement (and creating possible SQL injection issues.)
|
How can I search a SQL database with multiple "%" wildcards?
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I have the following two tables:
```
Table A
+-------------------+
|___User___|__Value_|
| 3 | a |
| 4 | b |
| 5 | c |
|____6_____|__d_____|
Table B
+-------------------+
|___User___|__Value_|
| 1 | |
| 4 | |
| 5 | |
|____9_____|________|
```
My job is to take `user` from Table A (and their correspondings `value`) and then map it to Table B and insert those values in there. So from the above example Table B should look like this after running the script:
```
Table B
+-------------------+
|___User___|__Value_|
| 1 | |
| 4 | b |
| 5 | c |
|____9_____|________|
```
My question is how can I construct an SQL query that will do this for me in an efficient way, if Table A contains 300,000 + entries and Table B contains 70,000 entries?
*NOTES:* In Table A the `User` field is not unique and neither is the `Value` field. However in Table B, both the `User` and `Value` fields are unique and should not appear more than once. Neither are primary keys for either tables.
|
Could be this
```
update table_b as b
inner join table_a as a on a.User = b.User
set b.value = a.value
```
|
In real-world situations, it would be more likely that you want a predictable value, such as the greatest `value` for any given `user`. In that case you would want
```
update table_b as b
inner join (
select user, max(value) from table_a
group by user ) as a_max on a.user = b.user
set b.value = a_max.value
```
|
SQL: Update table by mapping two columns to each other
|
[
"",
"mysql",
"sql",
"database",
""
] |
There is a query I am trying to implement in which I am not having much success with in trying to find the MAX and MIN for each week.
I have 2 Tables:
SYMBOL\_DATA (contains open,high,low,close, and volume)
WEEKLY\_LOOKUP (contains a list of weeks(no weekends) with a WEEK\_START and WEEK\_END)
```
**SYMBOL_DATA Example:**
OPEN, HIGH, LOW, CLOSE, VOLUME
23.22 26.99 21.45 22.49 34324995
```
WEEKLY\_LOOKUP (contains a list of weeks(no weekends) with a WEEK\_START and WEEK\_END)
```
**WEEKLY_LOOKUP Example:**
WEEK_START WEEK_END
2016-01-25 2016-01-29
2016-01-18 2016-01-22
2016-01-11 2016-01-15
2016-01-04 2016-01-08
```
I am trying to find for each WEEK\_START and WEEK\_END the high and low for that particular week.
For instance, if the WEEK is WEEK\_START=2016-01-11 and WEEK\_END=2016-01-15, I would have
5 entries for that particular symbol listed:
```
DATE HIGH LOW
2016-01-15 96.38 93.54
2016-01-14 98.87 92.45
2016-01-13 100.50 95.21
2016-01-12 99.96 97.55
2016-01-11 98.60 95.39
2016-01-08 100.50 97.03
2016-01-07 101.43 97.30
2016-01-06 103.77 100.90
2016-01-05 103.71 101.67
2016-01-04 102.24 99.76
```
For each week\_ending (2016-01-15) the HIGH is 100.50 on 2016-01-13 and the LOW is 92.45 on 2016-01-14
I attempted to write a query that gives me a list of highs and lows, but when I tried adding a MAX(HIGH), I had only 1 row returned back.
I tried a few more things in which I couldn't get the query to work (some sort of infinite run type). For now, I just have this that gives me a list of highs and lows for every day instead of the roll-up for each week which I am not sure how to do.
```
select date, t1.high, t1.low
from SYMBOL_DATA t1, WEEKLY_LOOKUP t2
where symbol='ABCDE' and (t1.date>=t2.START_DATE and t1.date<=t2.END_DATE)
and t1.date<=CURDATE()
LIMIT 30;
```
How can I get for each week (Start and End) the High\_Date, MAX(High), and Low\_Date, MIN(LOW) found each week? I probably don't need a
full history for a symbol, so a LIMIT of like 30 or (30 week periods) would be sufficient so I can see trending.
If I wanted to know for example each week MAX(High) and MIN(LOW) start week ending 2016-01-15 the result would show
```
**Result:**
WEEK_ENDING 2016-01-15 100.50 2016-01-13 92.45 2016-01-14
WEEK_ENDING 2016-01-08 103.77 2016-01-06 97.03 2016-01-08
etc
etc
```
Thanks to all of you with the expertise and knowledge. I greatly appreciate your help very much.
**Edit**
Once the Week Ending list is returned containing the MAX(HIGH) and MIN(LOW) for each week, is it possible then on how to find the MAX(HIGH) and MIN(LOW) from that result set so it return then only 1 entry from the 30 week periods?
Thank you!
**To Piotr**
```
select part1.end_date,part1.min_l,part1.max_h, s1.date, part1.min_l,s2.date from
(
select t2.start_date, t2.end_date, max(t1.high) max_h, min(t1.low) min_l
from SYMBOL_DATA t1, WEEKLY_LOOKUP t2
where symbol='FB'
and t1.date<='2016-01-22'
and (t1.date>=t2.START_DATE and t1.date<=t2.END_DATE)
group by t2.start_date, t2.end_date order by t1.date DESC LIMIT 1;
) part1, symbol_data s1, symbol_data s2
where part1.max_h = s1.high and part1.min_l = s2.low;
```
[](https://i.stack.imgur.com/TLHsI.jpg)
You will notice that the MAX and MIN for each week is staying roughly the same and not changing as it should be different for week to week for both the High and Low.
|
[SQL Fiddle](http://sqlfiddle.com/#!9/f156d/4)
I have abbreviated some of your names in my example.
Getting the high and low for each week is pretty simple; you just have to use GROUP BY:
```
SELECT s1.symbol, w.week_end, MAX(s1.high) AS weekly_high, MIN(s1.LOW) as weekly_low
FROM weeks AS w
INNER JOIN symdata AS s1 ON s1.zdate BETWEEN w.week_start AND w.week_end
GROUP BY s1.symbol, w.week_end
```
**[Results](http://sqlfiddle.com/#!9/f156d/4/0)**:
```
| symbol | week_end | weekly_high | weekly_low |
|--------|---------------------------|-------------|------------|
| ABCD | January, 08 2016 00:00:00 | 103.77 | 97.03 |
| ABCD | January, 15 2016 00:00:00 | 100.5 | 92.45 |
```
Unfortunately, getting the dates of the high and low requires that you re-join to the symbol\_data table, based on the symbol, week and values. And even *that* doesn't do the job; you have to account for the possibility that there might be two days where the same high (or low) was achieved, and decide which one to choose. I arbitrarily chose the first occurrence in the week of the high and low. So to get that second level of choice, you need another GROUP BY. The whole thing winds up looking like this:
```
SELECT wl.symbol, wl.week_end, wl.weekly_high, MIN(hd.zdate) as high_date, wl.weekly_low, MIN(ld.zdate) as low_date
FROM (
SELECT s1.symbol, w.week_start, w.week_end, MAX(s1.high) AS weekly_high, MIN(s1.low) as weekly_low
FROM weeks AS w
INNER JOIN symdata AS s1 ON s1.zdate BETWEEN w.week_start AND w.week_end
GROUP BY s1.symbol, w.week_end) AS wl
INNER JOIN symdata AS hd
ON hd.zdate BETWEEN wl.week_start AND wl.week_end
AND hd.symbol = wl.symbol
AND hd.high = wl.weekly_high
INNER JOIN symdata AS ld
ON ld.zdate BETWEEN wl.week_start AND wl.week_end
AND ld.symbol = wl.symbol
AND ld.low = wl.weekly_low
GROUP BY wl.symbol, wl.week_start, wl.week_end, wl.weekly_high, wl.weekly_low
```
**[Results](http://sqlfiddle.com/#!9/f156d/4/1)**:
```
| symbol | week_end | weekly_high | high_date | weekly_low | low_date |
|--------|---------------------------|-------------|---------------------------|------------|---------------------------|
| ABCD | January, 08 2016 00:00:00 | 103.77 | January, 06 2016 00:00:00 | 97.03 | January, 08 2016 00:00:00 |
| ABCD | January, 15 2016 00:00:00 | 100.5 | January, 13 2016 00:00:00 | 92.45 | January, 14 2016 00:00:00 |
```
|
The week table seems entirely redundant...
```
SELECT symbol
, WEEK(zdate)
, MIN(low) min
, MAX(high) max_high
FROM symdata
GROUP
BY symbol, WEEK(zdate);
```
This is a simplified example. In reality, you might use DATE\_FORMAT or something like that instead.
<http://sqlfiddle.com/#!9/c247f/3>
|
Finding MAX and MIN values for each same start and end week
|
[
"",
"mysql",
"sql",
""
] |
```
SELECT
COUNT(emp.empNo)
FROM
Employee emp
WHERE
NOT EXISTS (SELECT dept.empNo
FROM department dept
WHERE emp.empNo = dept.empNo);
```
What does the `where` condition(where emp.empNo = dept.empNo) signify in the above query? I get different results with and without the where condition. I'm new to Oracle. Can any one help me to understand?
|
The query means that you're looking for only those employees, for which it does not exist a department with the same empNo as the employee's empNo.
I guess this is a query to find those employees which are not managers of any department (if we assume that the department's empNo is the empNo of the department's manager).
Still, it would be better if you provide the schema of the employee and the department tables.
|
your query displaying count of employees which are not present in emp table but present in dept table.
suppose we have two tables emp and dept :
```
emp dept
1 1
2 2
3 3
4 4
5 5
6
7
```
from the given table we have emp 1 to 5 in both of the tables but in dept table having 2 employees (6,7)which are not present in emp table and your query is displaying count for those emp i.e 2
|
What does where do in the below query?
|
[
"",
"sql",
"oracle11g",
"oracle10g",
""
] |
Need some help in writing the SQL query for SQL Server. I have following data model with three tables
Table 1
```
Seq ID Name
1 1234 Abc
2 4567 Pqr
3 7890 Xyz
```
Table 2
```
Seq Table1Id Table3Seq
1 1234 1
2 1234 2
3 7890 3
```
Table 3
```
Seq Status
1 Rejected
2 Accepted
3 Pending
```
My requirement is something like below. I want to have all the records from Table 1 with the LATEST status from Table 3 if it exists. So, as we can see for Id `1234`, there are two records matching in Table 3 (via Table 2), however I want the latest which is Accepted. But there is a record in Table 1 `4567` which doesn't have any record in Table 2, so I am fine to show empty against the Status.
```
Number Name Status
1234 Abc Accepted
4567 Pqr
7890 Xyz Pending
```
I guess, we might need to use Outer and Inner join together, however so far I am unable to find the right query. When I try to just use outer join using `MAX`, it still gives me two records against `1234` and when I use inner join, then I don't get record `4567` in the output.
|
```
select a.ID, a.Name, c.Status
from table1 a
left outer join (select max(b.seq) AS SEQ, b.table1id from table2 b group by b.table1id) t2 on a.id = t2.table1id
left outer join table2 t2b on t2.seq = t2b.seq and t2.table1id = t2b.table1id
left outer join table3 c on t2b.table3seq = c.seq
```
There are probably more efficient ways to do it, but this will give you the results you're after. Basically joins table 1 to a subset of table 2 getting the max sequence of each record, then join to table2 again to get the link to table 3, then join to table3 to get the status.
All joins are left outer so you get your pqr result coming back with a null status.
|
One way of doing this in SQL Server is using `outer apply`. This method often has the best performance:
```
select t1.*, t3.status
from table1 t1 outer apply
(select top 1 t3.*
from table2 t2 join
table3 t3
on t2.table3seq = t3.seq
where t1.id = t2.table1id
order by t2.id desc
) t3;
```
The appropriate indexes are `table2(table1id, table3seq, id)` and `table3(seq, status)`.
|
SQL Server - Inner and Outer join together
|
[
"",
"sql",
"sql-server",
"inner-join",
"outer-join",
""
] |
I have two tables: `users` and `userlogs`
```
table1
----------
id | name
----------
1 | A
2 | B
3 | C
4 | D
table2
----------
user_id | date
----------
1 | 2015-12-17
2 | 2015-12-18
3 | 2015-12-19
4 | 2015-12-20
```
If I do LEFT JOIN with selected date, it only gives data that exist in table 2.
```
SELECT r.user_id, count(r.user_id) as count
FROM table1 a LEFT OUTER JOIN table2 r ON ( r.user_id = a.id )
WHERE r.created_at BETWEEN '2015-12-17' AND '2015-12-19'
GROUP BY a.id
```
I get the following
```
Result
----------
user_id | count
----------
1 | 1
2 | 1
3 | 1
```
I need something like the following:
```
Result
----------
user_id | count
----------
1 | 1
2 | 1
3 | 1
4 | 0
```
I have tried many different ways, but was unsuccessful.
|
This query should do just fine:
```
DECLARE @T1 TABLE
(
id INT
, name VARCHAR(50)
);
DECLARE @T2 TABLE
(
[user_id] INT
, [date] DATE
);
INSERT INTO @T1 (id, name)
VALUES (1, 'A'), (2, 'B'), (3, 'C'), (4, 'D');
INSERT INTO @T2 ([user_id], [date])
VALUES (1, '2015-12-17'), (2, '2015-12-18'), (3, '2015-12-19'), (4, '2015-12-20');
SELECT T1.id, COALESCE(T2.Cnt, 0) AS Cnt
FROM (SELECT DISTINCT id FROM @T1) AS T1
LEFT JOIN (
SELECT [user_id], COUNT(*) AS Cnt
FROM @T2
WHERE [date] BETWEEN '2015-12-17' AND '2015-12-19'
GROUP BY [user_id]
) AS T2
ON T2.[user_id] = t1.id;
```
It outputs what you are expecting :)
Or wrap it up a in a stored procedure like that:
```
CREATE PROCEDURE dbo.YourProcedure
(
@company_id INT
, @start_date DATE
, @end_date DATE
)
as
SELECT T1.id
, T1.department_id
, COALESCE(T2.cnt, 0) AS cnt
FROM (
SELECT DISTINCT id, department_id
FROM users
WHERE company_id = @company_id
) AS T1
LEFT JOIN (
SELECT user_id, COUNT(*) AS cnt
FROM userlogs
WHERE created_at BETWEEN @start_date AND @end_date
GROUP BY user_id
) AS T2
ON T2.user_id = t1.id;
```
And just call it by passing variables to it. [Using a stored procedure in Laravel 4](https://stackoverflow.com/questions/22517903/using-a-stored-procedure-in-laravel-4) explains how to do it.
|
Try moving the filter condition into the left join.
```
SELECT
a.id,
count(r.user_id) as count
FROM table1 a
LEFT OUTER JOIN table2 r ON r.user_id = a.id AND
r.created_at BETWEEN '2015-12-17' AND '2015-12-19'
GROUP BY a.id
```
|
SQL query left joining issue
|
[
"",
"mysql",
"sql",
"sql-server",
"join",
"left-join",
""
] |
I have two tables, assets and asset\_params.
assets
```
|asset_id| some_asset_data |
-----------------------------
| 1 | 'some data' |
| 2 | 'some other data'|
```
asset\_params
```
|asset_param_id|asset_id|param_name|param_value|
------------------------------------------------
| 1 | 1 | 'Memory' | '4096' |
| 2 | 1 | 'OS' | 'Windows' |
| 3 | 2 | 'Memory' | '4096' |
| 4 | 2 | 'OS' | 'Linux' |
```
Now, how can i find assets which have parameters where they have a param called 'Memory' with value '4096' AND a param 'OS' with value 'Windows'.
Desired result is that i find one asset row, with id 1, in the example case.
I cant find a reasonable solution. Any help is appreciated.
|
```
select * from assets a where
exists (select 1 from asset_params p
where name = "OS" and value = "Win" and a.asset_id=p.asset_id)
and exists (select 1 from asset_params p
where name = "memory" and value = "4096" and a.asset_id=p.asset_id)
```
|
You can do this with aggregation and a `having` clause:
```
select asset_id
from asset_params
where param_name = 'Memory' and param_value = '3096' or
param_name = 'OS' and param_value = 'Windows'
group by asset_id
having count(*) = 2;
```
Note: if you can have multiple parameters with the same name, you should use `count(distinct param_name)` = 2.
This easily generalized. A more Postgres'ish way of writing it is:
```
select asset_id
from asset_params
where (param_name, param_value) in (('Memory', '3096'), ('OS', 'Windows'))
group by asset_id
having count(*) = 2;
```
|
sql check if join table has rows by mutliple criterias
|
[
"",
"sql",
"postgresql",
""
] |
I've a query like this:
```
select tractions.id,tractions.expired,tractions.succ,tractions.cradit_end_date,
tractions.user_id,users.family,tractions.coupon_code,users.username,traction_details.title,
traction_details_sub.title as sub_title,tractions.coupon_parent,tractions.coupon_id,tractions.coupon_property_id
from tractions,traction_details,traction_details_sub,users
WHERE
tractions.app='mobile'
AND tractions.succ = 1
AND tractions.user_id=$user_id
AND tractions.id = traction_details.tr_id
AND tractions.id = traction_details_sub.tr_id
AND tractions.user_id = users.id
```
now,some records in `tractions` have not any `tr_id` in `traction_details_sub` table.
how to check if `traction_details_sub` table have `tr_id` then join these tables ?
|
Switch to ANSI joins (they have been nearly universally available for a long time) and use an outer join for tables that may not have records:
```
SELECT
t.id
, t.expired
, t.succ
, t.cradit_end_date
, t.user_id
, u.family
, t.coupon_code
, u.username
, d.title
, s.title as sub_title
, t.coupon_parent
, t.coupon_id
, t.coupon_property_id
FROM tractions t
LEFT OUTER JOIN traction_details d ON t.id = d.tr_id
LEFT OUTER JOIN traction_details_sub s ON t.id = s.tr_id
JOIN users u ON t.user_id = u.id
WHERE t.app='mobile' AND t.succ = 1 AND t.user_id=$user_id
```
Note how this syntax moves tables away from the `FROM` clause into separate joins, with `LEFT OUTER JOIN` designating tables with optional rows. Also note how the last three conditions of your `WHERE` clause became `ON` conditions of the corresponding joins. Finally, note the use of *table aliases* (i.e. `t` for `tractions`, `d` for `traction_details`, `s` for `traction_details_sub`, and `u` for `users`) to shorten the query.
|
First select all tr\_id from traction\_details and after repeat you sql query add condition IN tractions.id (all id available)
|
using join if there is a record
|
[
"",
"mysql",
"sql",
""
] |
I haven't had much experience with SQL and it strikes me as a simple question, but after an hour of searching I still can't find an answer...
I have a table that I want to add up the totals for based on ID - e.g:
```
-------------
ID Quantity
1 30
2 11
1 4
1 3
2 17
3 16
.............
```
After summing the table should look something like this:
```
-------------
ID Quantity
1 37
2 28
3 16
```
I'm sure that I need to use the DISTINCT keyword and the SUM(..) function, but I can only get one total value for all unique value combinations in the table, and not separate ones like above. Help please :)
|
```
Select ID, Sum(Quantity) from YourTable
Group by ID
```
You can find here some resources to learn more about "Group by": <http://www.w3schools.com/sql/sql_groupby.asp>
|
```
SELECT ID, SUM(QUANTITY) FROM TABLE1 GROUP BY ID ORDER BY ID;
```
|
totalling rows for distinct values in SQL
|
[
"",
"sql",
""
] |
I have a database table (Oracle 11g) of questionnaire feedback, including multiple choice, multiple answer questions. The Options column has each value the user could choose, and the Answers column has the numerical values of what they chose.
```
ID_NO OPTIONS ANSWERS
1001 Apple Pie|Banana-Split|Cream Tea 1|2
1002 Apple Pie|Banana-Split|Cream Tea 2|3
1003 Apple Pie|Banana-Split|Cream Tea 1|2|3
```
**I need a query that will decode the answers, with the text versions of the answers as a single string.**
```
ID_NO ANSWERS ANSWER_DECODE
1001 1|2 Apple Pie|Banana-Split
1002 2|3 Banana-Split|Cream Tea
1003 1|2|3 Apple Pie|Banana-Split|Cream Tea
```
I have experimented with regular expressions to replace values and get substrings, but I can't work out a way to properly merge the two.
```
WITH feedback AS (
SELECT 1001 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '1|2' answers FROM DUAL UNION
SELECT 1002 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '2|3' answers FROM DUAL UNION
SELECT 1003 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '1|2|3' answers FROM DUAL )
SELECT
id_no,
options,
REGEXP_SUBSTR(options||'|', '(.)+?\|', 1, 2) second_option,
answers,
REGEXP_REPLACE(answers, '(\d)+', ' \1 ') answer_numbers,
REGEXP_REPLACE(answers, '(\d)+', REGEXP_SUBSTR(options||'|', '(.)+?\|', 1, To_Number('2'))) "???"
FROM feedback
```
I don't want to have to manually define or decode the answers in the SQL; there are many surveys with different questions (and differing numbers of options), so I'm hoping that there's a solution that will dynamically work for all of them.
I've tried to split the options and answers into separate rows by LEVEL, and re-join them where the codes match, but this runs exceedingly slow with the actual dataset (a 5-option question with 600 rows of responses).
```
WITH feedback AS (
SELECT 1001 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '1|2' answers FROM DUAL UNION
SELECT 1002 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '2|3' answers FROM DUAL UNION
SELECT 1003 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '1|2|3' answers FROM DUAL )
SELECT
answer_rows.id_no,
ListAgg(option_rows.answer) WITHIN GROUP(ORDER BY option_rows.lvl)
FROM
(SELECT DISTINCT
LEVEL lvl,
REGEXP_SUBSTR(options||'|', '(.)+?\|', 1, LEVEL) answer
FROM
(SELECT DISTINCT
options,
REGEXP_COUNT(options||'|', '(.)+?\|') num_choices
FROM
feedback)
CONNECT BY LEVEL <= num_choices
) option_rows
LEFT OUTER JOIN
(SELECT DISTINCT
id_no,
to_number(REGEXP_SUBSTR(answers, '(\d)+', 1, LEVEL)) answer
FROM
(SELECT DISTINCT
id_no,
answers,
To_Number(REGEXP_SUBSTR(answers, '(\d)+$')) max_answer
FROM
feedback)
WHERE
to_number(REGEXP_SUBSTR(answers, '(\d)+', 1, LEVEL)) IS NOT NULL
CONNECT BY LEVEL <= max_answer
) answer_rows
ON option_rows.lvl = answer_rows.answer
GROUP BY
answer_rows.id_no
ORDER BY
answer_rows.id_no
```
**If there isn't a solution just using Regex, is there a more efficient way than LEVEL to split the values? Or is there another approach that would work?**
|
It's slow because you're expanding each row too many times; the connect-by clauses you're using are looking across all the rows, so you're ending up with a huge amount of data to then sort - which is presumably why you ended up with the `DISTINCT` in there.
You can add two `PRIOR` clauses to the connect-by, firstly so the `ID_NO` is preserved, and a second to avoid a loop - any non-deterministic function will do for this, I've picked `dbms_random.value` but you can use `sys_guid` if you prefer, or something else. You also don't need to many subqueries, you can do it with two; or as CTEs which I think it s a bit clearer:
```
WITH feedback AS (
SELECT 1001 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '1|2' answers FROM DUAL UNION
SELECT 1002 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '2|3' answers FROM DUAL UNION
SELECT 1003 id_no, 'Apple Pie|Banana-Split|Cream Tea' options, '1|2|3' answers FROM DUAL
),
option_rows AS (
SELECT
id_no,
LEVEL answer,
REGEXP_SUBSTR(options, '[^|]+', 1, LEVEL) answer_text
FROM feedback
CONNECT BY LEVEL <= REGEXP_COUNT(options, '[^|]+')
AND id_no = PRIOR id_no
AND PRIOR dbms_random.value IS NOT NULL
),
answer_rows AS (
SELECT
id_no,
REGEXP_SUBSTR(answers, '[^|]+', 1, LEVEL) answer
FROM feedback
CONNECT BY LEVEL <= REGEXP_COUNT(answers, '[^|]+')
AND PRIOR id_no = id_no
AND PRIOR dbms_random.value IS NOT NULL
)
SELECT
option_rows.id_no,
LISTAGG(option_rows.answer, '|') WITHIN GROUP (ORDER BY option_rows.answer) AS answers,
LISTAGG(option_rows.answer_text, '|') WITHIN GROUP (ORDER BY option_rows.answer) AS answer_decode
FROM option_rows
JOIN answer_rows
ON option_rows.id_no = answer_rows.id_no
AND option_rows.answer = answer_rows.answer
GROUP BY option_rows.id_no
ORDER BY option_rows.id_no;
```
Which gets:
```
ID_NO ANSWERS ANSWER_DECODE
---------- ---------- ----------------------------------------
1001 1|2 Apple Pie|Banana-Split
1002 2|3 Banana-Split|Cream Tea
1003 1|2|3 Apple Pie|Banana-Split|Cream Tea
```
I've also changed your regex pattern so you don't have to append or strip the `|`.
|
Check out this compact solution:
```
with sample_data as
(
select 'ala|ma|kota' options, '1|2' answers from dual
union all
select 'apples|oranges|bacon', '1|2|3' from dual
union all
select 'a|b|c|d|e|f|h|i','1|3|4|5|8' from dual
)
select answers, options,
regexp_replace(regexp_replace(options,'([^|]+)\|([^|]+)\|([^|]+)','\' || replace(answers,'|','|\')),'[|]+','|') answer_decode
from sample_data;
```
Output:
```
ANSWERS OPTIONS ANSWER_DECODE
--------- -------------------- ---------------------------
1|2 ala|ma|kota ala|ma
1|2|3 apples|oranges|bacon apples|oranges|bacon
1|3|4|5|8 a|b|c|d|e|f|h|i a|c|d|f|h|i
```
|
SQL Regex - Replace with substring from another field
|
[
"",
"sql",
"regex",
"oracle",
"regexp-replace",
"regexp-substr",
""
] |
I have q query like this:
```
Select WarehouseCode from [tbl_VW_Epicor_Warehse]
```
my output looks like this
```
WarehouseCode
Main
Mfg
SP
W01
W02
W03
W04
W05
```
But sometimes I want to get W04 as the first record, sometimes I want to get W01 as the first record .
How I can write a query to get some records in first row??
Any help is appreciated
|
you could try and select the row with the code you want to appear first by specifying a where condition to select that row alone then you can union all another select with all other rows that doesn't have this name
as follows
> ```
> SELECT WarehouseCode FROM Table WHERE WarehouseCode ='W04'
> UNION ALL
> SELECT WarehouseCode FROM Table WHERE WarehouseCode <>'W04'
> ```
|
Use a parameter to choose the top row, which can be passed to your query as required, and sort by a column calculated on whether the value matches the parameter; something like the `ORDER BY` clause in the following:
```
DECLARE @Warehouses TABLE (Id INT NOT NULL, Label VARCHAR(3))
INSERT @Warehouses VALUES
(1,'W01')
,(2,'W02')
,(3,'W03')
DECLARE @TopRow VARCHAR(3) = 'W02'
SELECT *
FROM @Warehouses
ORDER BY CASE Label WHEN @TopRow THEN 1 ELSE 0 END DESC
```
|
how to get some records in first row in sql
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I'm trying to create a software for a gym but I'm getting this error and I don't know why, I've trying for hours but nothing
```
CREATE TABLE Socios (
IdSocio INT NOT NULL AUTO_INCREMENT,
Nombre VARCHAR(30) NOT NULL,
Apellido VARCHAR(30) NOT NULL,
N_Celular VARCHAR(12),
Correo VARCHAR(60),
Fecha_Nacimiento DATE NOT NULL,
Fecha_Asociacion DATE NOT NULL,
Fecha_Modificacion DATE NOT NULL,
Notas VARCHAR(100),
PRIMARY KEY (IdSocio)
) ENGINE=INNODB;
CREATE TABLE tipos(
IdTipos INT NOT NULL AUTO_INCREMENT,
Tipo VARCHAR (30) NOT NULL,
Precio DECIMAL(6,2) NOT NULL,
PRIMARY KEY (IdTipos)
) ENGINE = INNODB;
CREATE TABLE productos (
IdProducto INT NOT NULL AUTO_INCREMENT,
Producto VARCHAR (40) NOT NULL,
Descripcion VARCHAR (100),
Costo_Individual DECIMAL(6,2) NOT NULL,
Precio_venta DECIMAL(6,2) NOT NULL,
Estado BOOL NOT NULL,
Cantidad_Inicial INT NOT NULL,
Cantidad_actual INT NOT NULL,
PRIMARY KEY(IdProducto)
) ENGINE = INNODB;
```
Error Code: 1215. Cannot add foreign key constraint
I'm getting this error for this tables
```
CREATE TABLE membresia(
IdMembresia INT NOT NULL AUTO_INCREMENT,
Nombre VARCHAR(30) NOT NULL,
Tipo VARCHAR(30) NOT NULL,
Fecha_Inicio DATE NOT NULL,
Fecha_Vencimiento DATE NOT NULL,
Inscripcion BOOL NOT NULL,
Estado_membresia VARCHAR(15) NOT NULL,
Fecha_modificacion DATE NOT NULL,
Total DECIMAL(6,2) NOT NULL,
Nota VARCHAR(100),
Fecha_Nota DATE,
PRIMARY KEY (IdMembresia),
CONSTRAINT IdSocio FOREIGN KEY (Nombre)
REFERENCES Socios (Nombre),
CONSTRAINT IdTipos FOREIGN KEY (Tipo,Total)
REFERENCES tipos (Tipo,Precio)
) ENGINE = INNODB;
CREATE TABLE ventas (
IdVenta INT NOT NULL AUTO_INCREMENT,
Producto VARCHAR (40) NOT NULL,
Fecha_venta DATE NOT NULL,
cantidad INT NOT NULL,
Total DECIMAL(8,2),
Fecha_Modificacion DATE NOT NULL,
Nota VARCHAR (100),
PRIMARY KEY (IdVenta),
CONSTRAINT IdProducto FOREIGN KEY (Producto)
REFERENCES productos(Producto)
) ENGINE = INNODB;
```
|
This will get you over an initial hurdle. Though I doubt you really want an FK on a price but I don't know the translation into your mother tongue.
You need indexes on the *referenced* tables for the columns looked up in FK's.
```
create schema dbtest_xyz;
use dbtest_xyz;
-- drop table Socios;
CREATE TABLE Socios (
IdSocio INT NOT NULL AUTO_INCREMENT,
Nombre VARCHAR(30) NOT NULL,
Apellido VARCHAR(30) NOT NULL,
N_Celular VARCHAR(12),
Correo VARCHAR(60),
Fecha_Nacimiento DATE NOT NULL,
Fecha_Asociacion DATE NOT NULL,
Fecha_Modificacion DATE NOT NULL,
Notas VARCHAR(100),
PRIMARY KEY (IdSocio),
key(Nombre) -- ADDED *******************************
) ENGINE=INNODB;
-- drop table tipos;
CREATE TABLE tipos(
IdTipos INT NOT NULL AUTO_INCREMENT,
Tipo VARCHAR (30) NOT NULL,
Precio DECIMAL(6,2) NOT NULL,
PRIMARY KEY (IdTipos),
key(Tipo), -- ADDED *******************************
key(Precio) -- ADDED *******************************
) ENGINE = INNODB;
-- drop table productos;
CREATE TABLE productos (
IdProducto INT NOT NULL AUTO_INCREMENT,
Producto VARCHAR (40) NOT NULL,
Descripcion VARCHAR (100),
Costo_Individual DECIMAL(6,2) NOT NULL,
Precio_venta DECIMAL(6,2) NOT NULL,
Estado BOOL NOT NULL,
Cantidad_Inicial INT NOT NULL,
Cantidad_actual INT NOT NULL,
PRIMARY KEY(IdProducto),
key(Producto) -- ADDED *******************************
) ENGINE = INNODB;
CREATE TABLE membresia(
IdMembresia INT NOT NULL AUTO_INCREMENT,
Nombre VARCHAR(30) NOT NULL,
Tipo VARCHAR(30) NOT NULL,
Fecha_Inicio DATE NOT NULL,
Fecha_Vencimiento DATE NOT NULL,
Inscripcion BOOL NOT NULL,
Estado_membresia VARCHAR(15) NOT NULL,
Fecha_modificacion DATE NOT NULL,
Total DECIMAL(6,2) NOT NULL,
Nota VARCHAR(100),
Fecha_Nota DATE,
PRIMARY KEY (IdMembresia),
CONSTRAINT IdSocio FOREIGN KEY (Nombre)
REFERENCES Socios (Nombre),
CONSTRAINT IdTipos FOREIGN KEY (Tipo)
REFERENCES tipos (Tipo),
CONSTRAINT IdMembresia FOREIGN KEY (Total)
REFERENCES tipos (Precio)
) ENGINE = INNODB;
CREATE TABLE ventas (
IdVenta INT NOT NULL AUTO_INCREMENT,
Producto VARCHAR (40) NOT NULL,
Fecha_venta DATE NOT NULL,
cantidad INT NOT NULL,
Total DECIMAL(8,2),
Fecha_Modificacion DATE NOT NULL,
Nota VARCHAR (100),
PRIMARY KEY (IdVenta),
CONSTRAINT IdProducto FOREIGN KEY (Producto)
REFERENCES productos(Producto)
) ENGINE = INNODB;
-- Cleanup:
drop schema dbtest_xyz;
```
Mysql Manual Page on [Foreign Keys](https://dev.mysql.com/doc/refman/5.6/en/create-table-foreign-keys.html)
> MySQL requires indexes on foreign keys and referenced keys so that
> foreign key checks can be fast and not require a table scan. In the
> referencing table, there must be an index where the foreign key
> columns are listed as the first columns in the same order. Such an
> index is created on the referencing table automatically if it does not
> exist. This index might be silently dropped later, if you create
> another index that can be used to enforce the foreign key constraint.
> index\_name, if given, is used as described previously.
Note, the reason I have the `drop tables` above some tables is that it is wise to have them in place while one iteratively attempts the creations. When the FK creations fail, one often (almost always) has to re-jigger the *referenced* and *referencing* tables.
|
Your first 2 tables should be as per below, means referenced fields should be indexed.
```
CREATE TABLE Socios (
IdSocio INT NOT NULL AUTO_INCREMENT,
Nombre VARCHAR(30) NOT NULL,
Apellido VARCHAR(30) NOT NULL,
N_Celular VARCHAR(12),
Correo VARCHAR(60),
Fecha_Nacimiento DATE NOT NULL,
Fecha_Asociacion DATE NOT NULL,
Fecha_Modificacion DATE NOT NULL,
Notas VARCHAR(100),
KEY idx_Nombre(Nombre),
PRIMARY KEY (IdSocio)
) ENGINE=INNODB;
CREATE TABLE tipos(
IdTipos INT NOT NULL AUTO_INCREMENT,
Tipo VARCHAR (30) NOT NULL,
Precio DECIMAL(6,2) NOT NULL,
KEY idx_tipo_precio(Tipo,Precio),
PRIMARY KEY (IdTipos)
) ENGINE = INNODB;
```
|
Error Code: 1215. Cannot add foreign key constraint
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
```
SELECT
externalsource.id,
externalsource.url,
externalsource.last_fetch,
SUM(scripttracking.records) AS jobs_per_week
FROM
scripttracking
RIGHT JOIN
externalsource
ON
externalsource.script_name = scripttracking.script_name
AND
scripttracking.created_on > %s
GROUP BY
externalsource.id
```
Any idea how can I modify this query so that it will aggregate scripttracking.records with different "AND" condition?
Essentially, what I need to get is:
```
SUM(scripttracking.records) for scripttracking.created_on > now() - 1 week
SUM(scripttracking.records) for scripttracking.created_on > now() - 1 month
SUM(scripttracking.records) for scripttracking.created_on > now() - 1 year
```
All these fields I need to include in 1 result queryset.
Something like:
```
ID, URL, LAST_FETCH, JOBS_PER_WEEK, JOBS_PER_MONTH, JOBS_PER_YEAR
1, http://google.ru, 25/01/2016, 7, 23, 8889
```
|
Traditionally we'd use `CASE` statement but Postgres offers us more comfortable solution so better to use [FILTER](http://www.postgresql.org/docs/current/static/sql-expressions.html)
```
SELECT
externalsource.id
,externalsource.url
,externalsource.last_fetch
,SUM(scripttracking.records) AS jobs_per_week
,SUM(scripttracking.records) FILTER (WHERE scripttracking.created_on > now() - 1 week)
,SUM(scripttracking.records) FILTER (WHERE scripttracking.created_on > now() - 1 month)
,SUM(scripttracking.records) FILTER (WHERE scripttracking.created_on > now() - 1 year)
FROM
scripttracking
RIGHT JOIN
externalsource
ON
externalsource.script_name = scripttracking.script_name
AND
scripttracking.created_on > %s
GROUP BY
externalsource.id
```
|
Using a case statement to evaluate the values to sum otherwise use a 0.
Note this is inclusive logic so sum of month contains sum of week, and sum of year contains some of week and month.
```
SELECT
externalsource.id,
externalsource.url,
externalsource.last_fetch,
SUM(case when scripttracking.created_on > now() - 1 week then scripttracking.records else 0 end) AS jobs_per_week,
SUM(case when scripttracking.created_on > now() - 1 month then scripttracking.records else 0 end) AS jobs_per_Month,
SUM(case wehn scripttracking.created_on > now() - 1 year then scripttracking.records else 0 end) AS jobs_per_Year
FROM
scripttracking
RIGHT JOIN
externalsource
ON
externalsource.script_name = scripttracking.script_name
AND
scripttracking.created_on > %s
GROUP BY
externalsource.id
```
|
How to aggregate fields with different AND clause within RIGHT JOIN?
|
[
"",
"sql",
"django",
"postgresql",
""
] |
I was presented with the question:
*Write a SQL command to display for each employee who has a total
distance from all journeys of more than 100, the employee’s name,
the total number of litres used by the employee on all journeys. (The
number of litres for a journey is distanceInKm divided by kmPerLitre.)*
For the following set of data:
[](https://i.stack.imgur.com/GNMn4.png)
and I've more or less hit a brick wall. I had a question similar earlier and eventually some user told me to resolve it with an inner join, and I've been doing those since, but this question uniquely asks for data from all three tables. This is my solution:
```
SELECT DISTINCT E.NAME, sum(T.distanceInKM/C.kmPerLitre)
FROM Employe E, TravelCost T, CAR C
GROUP BY T.distanceInKm,E.name,C.kmPerLitre
HAVING SUM(distanceInKM) > 100;
```
(Please ignore the small differences; my Employee table is named Employe and my distanceSinceService column is just called "distance")
But this gives me absolutely insane outputs and I have no real clue what's happening. We did some group by's in class but none of them were as difficult as this so I'm a bit stuck.
|
Here is a version that joins your tables and should give you what you expect
```
SELECT E.NAME
, SUM(T.distanceInKM/C.kmPerLitre) AS NumLitres
, SUM(T.distanceInKM) AS TotalDistanceInKm
FROM Employe E
INNER
JOIN TravelCost T
ON E.id = T.employeeId
INNER
JOIN CAR C
ON T.carRegNo = C.regNo
GROUP
BY E.name
HAVING SUM(T.distanceInKM) > 100
;
```
Hope this helps
|
try it like this:
```
DECLARE @Employee TABLE(id INT,name VARCHAR(100));
INSERT INTO @Employee VALUES
(1,'Smith')
,(2,'Patel')
,(6,'Bennett')
,(7,'Booch');
DECLARE @Car TABLE(regNo VARCHAR(100),distanceSinceService INT,kmPerLitre FLOAT);
INSERT INTO @Car VALUES
('FF06DDS',122,10)
,('EE07SSA',110,8)
,('FG07BAK',165,7)
,('HH08BBW',0,12);
DECLARE @TravelCost TABLE(jouneyID INT,carRegNo VARCHAR(100),employeeID INT,occured DATE,distanceInKm FLOAT);
INSERT INTO @TravelCost VALUES
(4,'FF06DDS',1,{d'2007-07-11'},90)
,(6,'FG07BAK',2,{d'2007-07-21'},110)
,(11,'EE07SSA',6,{d'2007-07-21'},110)
,(12,'FF06DDS',1,{d'2007-09-05'},32)
,(21,'FG07BAK',7,{d'2007-09-05'},55);
SELECT e.name AS Employee
,SUM(tc.distanceInKm) AS SumKm
,SUM(tc.distanceInKm/c.kmPerLitre) AS UsedGasoline
FROM @TravelCost AS tc
INNER JOIN @Car AS c ON tc.carRegNo=c.regNo
INNER JOIN @Employee AS e ON tc.employeeID=e.id
GROUP BY e.name
HAVING SUM(tc.distanceInKm)>100;
```
The result
```
Employee SumKm UsedGasoline
Bennett 110 13,75
Patel 110 15,71
Smith 122 12,2
```
|
Some problems with extracting data from three separate tables
|
[
"",
"sql",
""
] |
By mistake I've added one column in my database Named as `doj` now if I want to drop that column from table using code first approach what should I do.
I've tried this things.
1- Removing the column definition from model.
2- Deleted Migration history.
3- Add-migration
4- Update database.
But still it is not reflecting in database? Where I made the mistake?
|
Don't need to delete migration history.
Just follow these steps
```
1: Remove properties from the model.
2: Run 'Enable-Migrations' in package manage console.
3: Next Run 'Add-Migration "MigrationsName"'. if any case it is showing 'The project tesproject failed to build then build project again.
4: Now run 'Update-Database'
```
It will surly effect on your code. Go with [MSDN](https://msdn.microsoft.com/en-us/data/jj591621.aspx)
|
While Neeraj's answer is correct and in most cases works well, if that column has previous data in it, it will be removed from the index, but the data will still be there (and the column, for that matter).
To completely remove the column (and data in it), use the `DropColumn()` method. To accomplish this, create a new migration `add-migration unwantedColumnCleanup` and include the necessary code in the `up()` method, and then `update-database`.
```
namespace MyDB.Migrations
{
using System;
using System.Data.Entity.Migrations;
public partial class unwantedColumnCleanup : DbMigration
{
public override void Up()
{
DropColumn("Table", "Column");
}
public override void Down()
{
}
}
}
```
|
How can we delete a column from sql table by using code first approach in EF 6.0?
|
[
"",
"sql",
"asp.net-mvc",
"entity-framework",
""
] |
Trying to figure out how to get my data to return the SELECT data first and then display the UNION data below. Currently it is returning the UNION data first two rows and SELECT data after. I need the SELECT on Top and the UNION on bottom. Tried a few ways but nothing is working.
```
SELECT
Record_Time_Stamp AS 'Record_Time_Stamp',
Car_ID,
Commodity,
CAST(Transaction_Num AS NVARCHAR(MAX)) AS 'Transaction_Num',
CAST(Weight_Out AS NVARCHAR(MAX)) AS 'Gross_Wt',
CAST(Weight_In AS NVARCHAR(MAX)) AS 'Tare_Wt',
CAST(abs(Weight_In - Weight_Out) AS NVARCHAR) AS 'Net_Wt'
FROM
dbo.Transactions_Truck
WHERE
(Date_Weighed_Out between @BegDate and @EndDate) AND
Vendor = @Vendor
UNION
SELECT
GETDATE() AS Record_Time_Stamp,
'' AS Car_ID,
'' AS Commodity,
'Total Tons' AS 'Transaction_Num',
'' AS 'Gross_Wt',
'' AS 'Tare_Wt',
@Total_Tons AS 'Net_Wt'
FROM
dbo.Transactions_Truck
UNION
SELECT
GETDATE() AS Record_Time_Stamp,
'' AS Car_ID,
'' AS Commodity,
'Total Lbs.' AS 'Transaction_Num',
'' AS 'Gross_Wt',
'' AS 'Tare_Wt',
@Total_Lbs AS 'Net_Wt'
FROM
dbo.Transactions_Truck
ORDER BY
Transaction_Num
```
|
Introduce an artificial sort key column (I'll call it `MySortKey` in the code sample below) to keep the two selects separated.
```
SELECT
Record_Time_Stamp AS 'Record_Time_Stamp',
Car_ID,
Commodity,
CAST(Transaction_Num AS NVARCHAR(MAX)) AS 'Transaction_Num',
CAST(Weight_Out AS NVARCHAR(MAX)) AS 'Gross_Wt',
CAST(Weight_In AS NVARCHAR(MAX)) AS 'Tare_Wt',
CAST(abs(Weight_In - Weight_Out) AS NVARCHAR) AS 'Net_Wt',
1 AS MySortKey
FROM
dbo.Transactions_Truck
WHERE
(Date_Weighed_Out between @BegDate and @EndDate) AND
Vendor = @Vendor
UNION
SELECT
GETDATE() AS Record_Time_Stamp,
'' AS Car_ID,
'' AS Commodity,
'Total Tons' AS 'Transaction_Num',
'' AS 'Gross_Wt',
'' AS 'Tare_Wt',
@Total_Tons AS 'Net_Wt',
2 AS MySortKey
FROM
dbo.Transactions_Truck
UNION
SELECT
GETDATE() AS Record_Time_Stamp,
'' AS Car_ID,
'' AS Commodity,
'Total Lbs.' AS 'Transaction_Num',
'' AS 'Gross_Wt',
'' AS 'Tare_Wt',
@Total_Lbs AS 'Net_Wt'
FROM
dbo.Transactions_Truck
ORDER BY
MySortKey, Transaction_Num
```
|
You could add own column:
```
SELECT ..., 1 AS query_num
FROM ...
UNION
SELECT ..., 2 AS query_num
FROM ...
UNION
SELECT ..., 3 AS query_num
FROM ...
ORDER BY query_num, ...
```
Keep in mind that adding column will circumvent `UNION` distinctness, so you could change it `UNION ALL`.
|
SQL - UNION - How to Manipulate Data Returned
|
[
"",
"sql",
"union",
""
] |
What is wrong with this syntax:
```
DELETE FROM customer WHERE id IN ('1,3')
```
Customer table:
[](https://i.stack.imgur.com/vBEdn.png)
It deletes only the row with id = 1. I'm expecting empty table. If I leave there only one row with id=3, it says that deleted records: 0.
|
No need for `'` around values:
```
DELETE FROM customer WHERE id IN (1,3);
```
|
You are looking for a single value that is a string with three characters: `'1,3'`, not two numeric values.
If you are constructing the query, then you should try to construct it with a proper `in` list. Under some circumstances, you might find it more convenient to use `find_in_set()`:
```
DELETE FROM customer WHERE find_in_set(id, '1,3') > 0
```
However, this cannot use an index, so lad2025 is a better general solution.
|
Can't delete using IN
|
[
"",
"mysql",
"sql",
""
] |
I have two tables (about 300,000 rows each table) with one column identifier column (ID) that is indexed. The following query takes a few seconds:
```
SELECT *
FROM Table_1 a JOIN Table_2 b on a.ID=b.ID
```
Versus the following query which takes hours:
```
SELECT *
FROM Table_1 a LEFT JOIN Table_2 b on a.ID=b.ID
```
The difference between the two queries is that one is done with a "join" and one with a "left join".
The query result need to include all the rows from Table\_1 so I have no choice but to perform a "left join".
Also tried this approach which also took allot of time as well:
```
SELECT *
FROM Table_1 a JOIN Table_2 b on a.ID=b.ID
UNION
SELECT *
FROM Table_1 a LEFT JOIN Table_2 b on a.ID=b.ID
WHERE b.ID is null;
```
Any suggestions??
Thanks allot in advance...
|
With the information provided, I can only think of trying something like this:
```
SELECT a.*, b.column1, b.column2, ..., column(n)
FROM Table_1 a JOIN Table_2 b on a.ID=b.ID
UNION
SELECT a.*, '' AS column1, '' AS column2, ..., '' as column(n)
FROM Table_1 a
WHERE a.ID NOT IN
(SELECT a.ID FROM Table_1 a JOIN Table_2 b on a.ID=b.ID);
```
**Edit:** I wanted to clarify why this is faster and make this a more complete answer.
This is because of the plan followed by the server to achieve the desired results.
A left outer join such as:
```
SELECT *
FROM Table_1 a LEFT JOIN Table_2 b on a.ID=b.ID
```
It has to go through every record in Table\_1 weather it matches something in Table\_2 or not. And, as the table grows this becomes more and more expensive.
By adding `WHERE b.ID is null` it basically has to go through a non-indexed column to filter the result of the previous operation. And, it has to create those nulls first in order to perform this, so it will likely not create a more optimal plan.
```
SELECT a.ID
FROM Table_1 a
JOIN Table_2 b on a.ID=b.ID
```
The inner join takes advantage of a better algorithm. And, it we're making use of indexed values not in a list. This perform better compared to its counterpart as the table increases in size and requires more operations. But, we could receive some performance hits later as the inner join return a larger number of indexes to filter by.
Since I there was no further context provided, or clearer use case. I think the solution is sufficient. But, it could be improved upon given a better understanding of the full problem.
|
JOIN is equivalent to INNER JOIN, so the first query will return rows *only* where the IDs match. The index here helps performance too.
Your second query performs a LEFT JOIN, which as you pointed out returns everything from table A, regardless if it matches something in table B.
|
"Join" between two tables is very fast versus "Left join" which is extremely slow (MySQL DB)
|
[
"",
"mysql",
"sql",
"performance",
""
] |
I have declared a string as shown below:
```
Declare @string
Set @string = 'this line is like this because this is repeated these many times in this line'
```
I am trying find that how many times each word is repeated. I am expecting a result like
```
Word Number
this x times
line y times
is z times
```
and so on...
Help me with the code. Any help would be appreciated. Thank you in advance.
Edit:
So far, I have got to the point of find number words in which a particular alphabet is replaced.
Here is the code of it
```
SELECT Len(@string) - Len(Replace(@string, 'x', '')).
```
Any explanation along with the provided code will be much appreciated. thank you.
|
```
;WITH splitString(val) AS
(
-- convert the string to xml, seperating the elements by spaces
SELECT CAST('<r><i>' + REPLACE(@string,' ','</i><i>') + '</i></r>' AS XML)
)
SELECT [Key],
COUNT(*) [WordCount]
FROM ( -- select all of the values from the xml created in the cte
SELECT p.value('.','varchar(100)') AS [Key]
FROM splitString
CROSS APPLY val.nodes('//i') t (p)) AS t
GROUP BY [Key]
```
if you want to get all technical..
```
;WITH splitString(val) AS
(
-- convert the string to xml, seperating the elements by spaces
SELECT CAST('<r><i>' + REPLACE(@string,' ','</i><i>') + '</i></r>' AS XML)
)
SELECT Word,
CAST(COUNT(*) AS VARCHAR)
+ (CASE WHEN COUNT(*) = 1 THEN ' time' ELSE ' times' END) AS Number
FROM ( -- select all of the values from the xml created in the cte
SELECT p.value('.','varchar(100)') AS Word
FROM splitString
CROSS APPLY val.nodes('//i') t (p)) AS t
GROUP BY Word
```
|
Using a t-sql splitter this is pretty straight forward. My personal preference is the Jeff Moden splitter which can be found here. <http://www.sqlservercentral.com/articles/Tally+Table/72993/> There are some other options which can be found here. <http://sqlperformance.com/2012/07/t-sql-queries/split-strings>
Using Jeff Moden's it is this simple.
```
Declare @string varchar(1000)
Set @string = 'this line is like this because this is repeated these many times in this line'
select x.Item
, COUNT(*) as WordCount
from dbo.DelimitedSplit8K(@string, ' ') x
group by x.Item
order by x.Item
```
|
How many times does each word is repeated in a string in sql server?
|
[
"",
"sql",
"sql-server",
""
] |
I'm using SQL server 2014,I'm fetching data from a view.The order of items is getting changed once i use **Group by** ,how can i get the order back after using this Group by,There is one date column,but its not saving any time,So i can't sort it based on date also..
How can I display the data in the same order as it displayed before using Group by?Anyone have any idea please help?
Thanks
|
Tables and views are essentially unordered sets. To get rows in a specific order, you should always add an `ORDER BY` clause on the columns you wish to order on.
I'm assuming you previously selected from the `VIEW` without an `ORDER BY` clause. The order in which rows are returned from a `SELECT` statement **without** an `ORDER BY` statement is **undefined**. The order you are getting them in, can change due to any number of reasons (eg some are listed [here](https://stackoverflow.com/a/21371444/243373)).
Your problem stems from the mistake you made on relying on the order from a `SELECT` from a `VIEW` without an `ORDER BY`. You should have had an `ORDER BY` clause in your `SELECT` statement to begin with.
---
> How can I display the data in the same order as it displayed before using Group by?
The answer: You can't if your initial statement did not have an `ORDER BY` clause.
The resolution: Determine the order you want the resultset in and add an `ORDER BY` clause on those columns, both in your initial query and the version with the `GROUP BY` clause.
|
The only way you can enforce a specific order is to explicitly use a `ORDER BY` clause. Otherwise the order of rows is not guaranteed ([take a look at this article](http://blogs.msdn.com/b/conor_cunningham_msft/archive/2008/08/27/no-seatbelt-expecting-order-without-order-by.aspx) for more details) and the database engine will return the rows based on "as fast as it can" or "as fast as it can retrieve them from disk" rule. So, order can also vary between executions of the same query in the span of a few seconds.
When doing a `DISTINCT`, `GROUP BY` or `ORDER BY`, SQL Server automatically does a `SORT` of the data based on an index it uses for that query.
Looking at the execution plan of your query will show you what index (and implicitly columns in that index) is being used to sort the data.
|
how to resolve this - group by changes the Order of items in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
We are doing this:
```
SELECT x from d where x not in
(
SELECT xx FROM a
INNER JOIN aa ON a.col = aa.col
UNION
SELECT xx FROM b
INNER JOIN bb on a.col = bb.col
)
```
When we run sub query separately, it executes in 10 seconds and returns 0 records because one of the tables used in INNER JOIN is blank
But, when we run the entire query, it keeps on running for more than 2 hours and never ends.
When entire query is run, we expect to see all the data from d, but why does the query hangs?
|
Sounds like the subquery is getting run for every x in d. Try it as a CTE, and if that fails a temp table, to pre-compute the subquery and only then use it to exclude records from x.
```
--CTE:
WITH subq AS
(
SELECT xx FROM a
INNER JOIN aa ON a.col = aa.col
UNION
SELECT xx FROM b
INNER JOIN bb on a.col = bb.col
)
SELECT x from d left join subq on subq.xx = d.x where subq.xx is null
```
Temp table approach exactly the same, just create a temp table with one row for col xx (i'm not sure what your data types are) and populate it with the subquery.
|
Try this
```
select x from d
where not exists(Select top 1 1 from a inner join aa on a.col = aa.col where d.x = xx)
and not exists (Select top 1 1 from b inner join bb on a.col = bb.col where d.x = xx)
```
|
Query Hangs when NOT IN is used with a SUBQUERY which returns NULL
|
[
"",
"sql",
"sql-server",
""
] |
I have a table which has two columns **UNIT**, **M\_UNIT** as shown below:
```
+------+--------+
| UNIT | M_UNIT |
+------+--------+
| 10 | 12 |
| 15 | 19 |
| 12 | 16 |
| 13 | 15 |
| 19 | 14 |
| 14 | 11 |
+------+--------+
```
I want to create a column **H\_TREE** which stores hierarchy as comma separated values after recursively searching through UNIT & M\_UNIT such that H\_TREE starts with UNIT and ends with the last possible M\_UNIT as shown below:
```
+------+--------+----------------+
| UNIT | M_UNIT | H_TREE |
+------+--------+----------------+
| 10 | 12 | 10,12,16 |
| 15 | 19 | 15,19,14,11 |
| 12 | 16 | 12,16 |
| 13 | 15 | 13,15,19,14,11 |
| 19 | 14 | 19,14,11 |
| 14 | 11 | 14,11 |
+------+--------+----------------+
```
Apologies if I am not clear enough, let me know if something is confusing. Thanks.
|
This should produce what you want:
```
WITH data (unit, m_unit) AS (
SELECT 10, 12 FROM dual UNION ALL
SELECT 15, 19 FROM dual UNION ALL
SELECT 12, 16 FROM dual UNION ALL
SELECT 13, 15 FROM dual UNION ALL
SELECT 19, 14 FROM dual UNION ALL
SELECT 14, 11 FROM dual)
SELECT
unit,
m_unit,
unit || ',' || listagg(root_unit, ',') WITHIN GROUP (ORDER BY depth) h_tree
FROM (
SELECT
id, unit, m_unit,
LEVEL depth, CONNECT_BY_ROOT m_unit root_unit
FROM
(SELECT ROWNUM id, unit, m_unit FROM data) data
CONNECT BY
PRIOR unit = m_unit)
GROUP BY
id,
unit,
m_unit
```
If the lines in your table are distinct the `id` column is not necessary.
|
You should not do it this way. You should have a column called ParentID or something. Then build the tree when needed.
This is an example (SQL Server) that wires it all together:
```
USE AdventureWorks2008R2;
GO
WITH DirectReports (ManagerID, EmployeeID, Title, DeptID, Level)
AS
(
-- Anchor member definition
SELECT e.ManagerID, e.EmployeeID, e.Title, edh.DepartmentID,
0 AS Level
FROM dbo.MyEmployees AS e
INNER JOIN HumanResources.EmployeeDepartmentHistory AS edh
ON e.EmployeeID = edh.BusinessEntityID AND edh.EndDate IS NULL
WHERE ManagerID IS NULL
UNION ALL
-- Recursive member definition
SELECT e.ManagerID, e.EmployeeID, e.Title, edh.DepartmentID,
Level + 1
FROM dbo.MyEmployees AS e
INNER JOIN HumanResources.EmployeeDepartmentHistory AS edh
ON e.EmployeeID = edh.BusinessEntityID AND edh.EndDate IS NULL
INNER JOIN DirectReports AS d
ON e.ManagerID = d.EmployeeID
)
-- Statement that executes the CTE
SELECT ManagerID, EmployeeID, Title, DeptID, Level
FROM DirectReports
INNER JOIN HumanResources.Department AS dp
ON DirectReports.DeptID = dp.DepartmentID
WHERE dp.GroupName = N'Sales and Marketing' OR Level = 0;
GO
```
|
Create column that stores hierarchy as comma separated values
|
[
"",
"sql",
"hierarchy",
"hierarchical-data",
""
] |
I have two tables. one table is named Shopper and it looks like
```
SHOPPER_ID | SHOPPER_NAME |
-------------------------
1 | Marianna |
2 | Jason |
```
and another table named Order has information like Date on the order
```
ORDER_ID | SHOPPER_ID | DATE
----------------------------------
1 | 1 | 08/09/2012
2 | 1 | 08/08/2012
```
Now I want to do a query that joins two tables and group by SHOPPER\_ID, because one shopper can have multiple orders, I want to pick the latest order base on DATE value.
My query looks like:
```
Select * from Shopper as s join Order as o
on s.SHOPPER_ID = o.SHOPPER_ID
group by s.SHOPPER_ID
```
The query is wrong right now because I don't know how to apply the filter to only get the latest order. Thanks in advance!
|
I suggest using a sub-select:
```
Select s.SHOPPER_ID, s.SHOPPER_NAME, o.MAX_DATE
from Shopper s
INNER join (SELECT SHOPPER_ID, MAX(DATE) AS MAX_DATE
FROM ORDER
GROUP BY SHOPPER_ID) o
on s.SHOPPER_ID = o.SHOPPER_ID
```
Best of luck.
|
Easy way is use `row_number` to find the lastest order
**[SQL Fiddle Demo](http://sqlfiddle.com/#!6/78546/2)**
```
SELECT *
FROM
(SELECT S.*,
O.[ORDER_ID], O.[DATE],
ROW_NUMBER() OVER ( PARTITION BY S.SHOPPER_ID
ORDER BY [DATE] DESC) as rn
FROM Shopper S
JOIN Orders O
ON S.SHOPPER_ID = O.SHOPPER_ID
) T
WHERE rn = 1
```
|
One to many join with group by
|
[
"",
"sql",
"sql-server",
"greatest-n-per-group",
""
] |
I want to update big table `TEMP_MA_CONTACT` and I use `MERGE INTO`:
```
MERGE INTO TEMP_MA_CONTACT C
USING (select * from TABLE_TO_CHANGE_2601) T
ON (C.CUSTOMER_RK = T.CUSTOMER_RK)
WHEN MATCHED THEN UPDATE SET C.CUSTOMER_RK = T.NEW_CUSTOMER_RK
```
but Oracle says:
```
ORA-38104: Columns referenced in the ON Clause cannot be updated
```
|
The best variant to resolve this problem is:
1) First add rowid to table\_to\_change\_2601
```
CREATE TABLE table_to_change_2601_new AS
SELECT T.*,
I.ROWID AS ROW_ID FROM
table_to_change_2601 T JOIN temp_ma_contact I ON T.CUSTOMER_RK = I.CUSTOMER_RK
```
2) Merge on this rowid's
```
MERGE INTO temp_ma_contact C
USING (select * from table_to_change_2601_new) T
ON (C.ROWID = T.ROW_ID)
WHEN MATCHED THEN UPDATE SET C.CUSTOMER_RK = T.NEW_CUSTOMER_RK
```
|
Mmmm, I'm not sure if it is possible with merge.. but you can always use update with a subquery:
```
UPDATE TEMP_MA_CONTACT C
set c.customer_rk =
(select P.new_customer_rk from table_to_change_2601 p where P.customer_rk = C.customer_rk)
```
Another idea is this:
```
ALTER TABLE TEMP_MA_CONTACT add TEMP_COL NUMBER(5);
UPDATE TEMP_MA_CONTACT set TEMP_COL = customer_rk;
```
and then use your original merge, but on this column like this:
```
MERGE INTO TEMP_MA_CONTACT C
USING (select * from TABLE_TO_CHANGE_2601) T
ON (C.TEMP_COLUMN = T.CUSTOMER_RK)
WHEN MATCHED THEN UPDATE SET C.CUSTOMER_RK = T.NEW_CUSTOMER_RK;
commit;
```
and then droping the column
```
ALTER TABLE TEMP_MA_CONTACT drop column TEMP_COLUMN
```
|
SQL How can I avoid ora-38104?
|
[
"",
"sql",
"oracle",
"merge",
""
] |
I have a table named 'resources'
```
Machine Host Enabled
mach1 host1 TRUE
mach2 host1 FALSE
mach3 host1 FALSE
mach4 host2 TRUE
mach5 host2 TRUE
```
I want to get the list of hosts where Enabled is True for all the resources/machines associated with them.
I tried the sql query-
```
select distinct Host from resources where Enabled = TRUE;
```
But that query gives me back both host1 and host2 as the answer whereas I am expecting the answer to be just host2. Can anyone help me with a sql query which can achieve this ?
|
Try this:
```
SELECT Host
FROM resources
GROUP BY Host
HAVING COUNT(*) = COUNT(CASE WHEN Enabled = True THEN 1 END)
```
or:
```
SELECT DISTINCT Host
FROM resources AS r1
WHERE Enabled = TRUE AND
NOT EXISTS (SELECT 1
FROM resources AS r2
WHERE r1.Host = r2.Host AND r2.enabled = FALSE)
```
|
TRY THIS ONE AND LET ME KNOW.
```
SELECT DISTINCT(Host)
FROM Resources
WHERE Enabled = TRUE AND Host NOT IN (Select Host FROM Resources WHERE Enabled = FALSE)
```
|
SQL Query to get unique list
|
[
"",
"mysql",
"sql",
""
] |
Given a PostgreSQL table named `requests` with a column named `status` and a constraint like this:
```
ALTER TABLE requests ADD CONSTRAINT allowed_status_types
CHECK (status IN (
'pending', -- request has not been attempted
'success', -- request succeeded
'failure' -- request failed
));
```
In `psql` I can pull up information about this constraint like this:
```
example-database=# \d requests
Table "public.example-database"
Column | Type | Modifiers
----------------------+-----------------------------+-------------------------------------------------------------------
id | integer | not null default nextval('requests_id_seq'::regclass)
status | character varying | not null default 'pending'::character varying
created_at | timestamp without time zone | not null
updated_at | timestamp without time zone | not null
Indexes:
"requests_pkey" PRIMARY KEY, btree (id)
Check constraints:
"allowed_status_types" CHECK (status::text = ANY (ARRAY['pending'::character varying, 'success'::character varying, 'failure'::character varying]::text[]))
```
But is it possible to write a query that specifically returns the `allowed_status_types` of pending, success, failure?
It would be great to be able to memoize the results of this query within my application, vs. having to maintain a duplicate copy.
|
You can query the system catalog [`pg_constraint`](http://www.postgresql.org/docs/current/static/catalog-pg-constraint.html), e.g.:
```
select consrc
from pg_constraint
where conrelid = 'requests'::regclass
and consrc like '(status%';
consrc
---------------------------------------------------------------------------
(status = ANY (ARRAY['pending'::text, 'success'::text, 'failure'::text]))
(1 row)
```
Use the following function to *unpack* the string:
```
create or replace function get_check_values(str text)
returns setof text language plpgsql as $$
begin
return query
execute format (
'select * from unnest(%s)',
regexp_replace(str, '.*(ARRAY\[.*\]).*', '\1'));
end $$;
select get_check_values(consrc)
from pg_constraint
where conrelid = 'requests'::regclass
and consrc like '(status%';
get_check_values
------------------
pending
success
failure
(3 rows)
```
|
### Building on your design
To simplify things, I would provide allowed values as (100 % equivalent) **array literal** instead of the `IN` expression (that is converted into a clumsy ARRAY constructor):
```
ALTER TABLE requests ADD CONSTRAINT allowed_status_types
CHECK (status = ANY ('{pending, success, failure}'::text[]));
```
The resulting text in the system column [`pg_constraint.consrc`](http://www.postgresql.org/docs/current/interactive/catalog-pg-constraint.html):
```
((status)::text = ANY ('{pending,success,failure}'::text[]))
```
Now it's simple to extract the list between curly braces with `substring()`:
```
SELECT substring(consrc from '{(.*)}') AS allowed_status_types
FROM pg_catalog.pg_constraint
WHERE conrelid = 'public.requests'::regclass -- schema qualify table name!
AND conname = 'allowed_status_types'; -- we *know* the constraint name
```
Result:
```
allowed_status_types
-------------------------
pending,success,failure
```
### Alternative design
What I really would do is normalize one more step:
```
CREATE TABLE request_status (
status_id "char" PRIMARY KEY
, status text UNIQUE NOT NULL
, note text
);
INSERT INTO request_status(status_id, status, note) VALUES
('p', 'pending', 'request has not been attempted')
, ('s', 'success', 'request succeeded')
, ('f', 'failure', 'req');
CREATE TABLE requests (
id serial PRIMARY KEY
, status_id "char" NOT NULL DEFAULT 'p' REFERENCES request_status
, created_at timestamp NOT NULL
, updated_at timestamp NOT NULL
);
```
The [data type `"char"`](http://www.postgresql.org/docs/current/interactive/datatype-character.html#DATATYPE-CHARACTER-SPECIAL-TABLE) is a single one-byte ASCII character that's ideal for cheap enumeration of a handful possible values.
The size of your row is now 48 instead of 56 bytes. [Details here.](https://stackoverflow.com/questions/13570613/making-sense-of-postgres-row-sizes/13570853#13570853)
And it's trivial to check allowed statuses:
```
SELECT status FROM request_status
```
|
PostgreSQL query for a list of allowed values in a constraint?
|
[
"",
"sql",
"database",
"postgresql",
"database-design",
"check-constraints",
""
] |
Given a table, for example **Article(Id,Body,Revisions)**, I would like to increment the Revisions attribute, and, once a certain limit is reached (it's a constant provided by the developer), an error should be thrown. Is this possible to achieve with a single UPDATE ... SET statement in T-SQL?
**What I've done:**
* To increment Revisions attribute by one, I solved as shown here: [Is UPDATE command thread safe (tracking revisions) in MS SQL](https://stackoverflow.com/questions/34975139/is-update-command-thread-safe-tracking-revisions-in-ms-sql).
**Problem**
* To find a way that is thread safe, which would allow incrementation of **Revisions** until a certain upper bound is reached.
**Context**
Since I'm using EF, the ideal solution would be to either thrown an error or specify a flag of some sort. The code I'm using (shown below) is encapsulated into a try-catch:
```
context.Database.ExecuteSqlCommand("UPDATE dbo.Articles SET Revisions = Revisions + 1 WHERE Id=@p0;", articleId);
```
|
You could do this with a `WHERE` clause in your `UPDATE` statement, which would do the test. If the test fails, the update will not happen and your call with `context.Database.ExecuteSqlCommand` will return 0 instead of 1.
In case of a limit of 1000, the update SQL would be:
```
count = context.Database.ExecuteSqlCommand(
"UPDATE dbo.Articles SET Revisions = Revisions + 1 WHERE Id=@p0 AND Revisions < 1000;", articleId);
```
Then afterwards you would test whether `count == 0` and raise an error message if so.
|
Use a CHECK constraint. No update statement can violate bounds that are implemented by a CHECK constraint. Not even an update statement issued by a sleep-deprived DBA at the console.
```
create table article (
id integer primary key,
body nvarchar(max) not null,
-- Allow six versions. (Original plus five revisions.)
revisions integer not null
check (revisions between 0 and 5)
);
insert into article values (1, 'a', 0);
update article
set body = 'b', revisions = 1
where id = 1;
update article
set body = 'c', revisions = 2
where id = 1;
-- Other updates . . .
-- This update will *always* fail with an error.
update article
set body = 'f', revisions = 6
where id = 1;
```
|
Specifying an upper bound for an int in UPDATE
|
[
"",
"sql",
"sql-server",
"entity-framework",
"t-sql",
""
] |
I was trying to access the time and timezone of a db. The query i had were :
```
select sysdate from dual
```
and
```
select systimestamp from dual
```
What would be the performance over-head of querying these ? I do understand that until 10g these were expensive and starting 11 they were to be in-memory. Would these run in 00:00:01 ?
|
I won't bother of that. Surely is fast.
An idea would be to measure it. Because is fast, let's do it 10000 times and measure the time:
```
declare
t number;
d date;
begin
t:=dbms_utility.get_time();
for k in 1..10000 loop
select sysdate into d from dual;
end loop;
dbms_output.put_line(dbms_utility.get_time() - t);
end;
/
```
On my machine(with oracle 11.2.0.4) this outputs 16, so 16 milliseconds for 10000 times, that means 0.000016 seconds per operation.
EDIT: Select 1 from dual is faster, the above returning 13 milliseconds. Systimestamp is a little slower than sysdate, with 17.
|
I stole @FlorinGhita's code and changed it to:
```
declare
t number;
d date;
begin
t := dbms_utility.get_time();
for k in 1..10000 loop
select sysdate into d from dual;
end loop;
dbms_output.put_line(dbms_utility.get_time() - t);
t := dbms_utility.get_time();
for k in 1..10000 loop
d := SYSDATE;
end loop;
dbms_output.put_line(dbms_utility.get_time() - t);
end;
```
because I wanted to contrast the amount of time required to do the SELECT with the amount of time required to perform the direct assignment.
The results of five runs of the above were
```
SELECT Assignment
73 6
84 6
74 6
74 6
74 5
```
Next, my thought was that perhaps there was overhead associated with having the SELECT statement first - so I changed it around so that the assignment loop was done first followed by the SELECT loop:
```
declare
t number;
d date;
begin
t := dbms_utility.get_time();
for k in 1..10000 loop
d := SYSDATE;
end loop;
dbms_output.put_line(dbms_utility.get_time() - t);
t := dbms_utility.get_time();
for k in 1..10000 loop
select sysdate into d from dual;
end loop;
dbms_output.put_line(dbms_utility.get_time() - t);
end;
```
Five runs of the above gave:
```
Assignment SELECT
5 78
6 75
7 72
6 75
6 86
```
If I'm reading the above correctly it shows that performing a `SELECT...FROM DUAL` takes more than 10 times LONGER than making a direct assignment.
Takeaway: don't use `SELECT xxxxx INTO zzz FROM DUAL`. Use `zzz := xxxxx` instead.
Timings above were generated by Oracle 11.1
Hope this helps.
|
Whats the performance of querying in dual on Oracle 11g
|
[
"",
"sql",
"oracle",
"select",
"oracle11g",
"oracle10g",
""
] |
I know that this is not possible to nest `insert ... exec` statements, but still I'm interested - is there a way to check if I already have active `insert ... exec` to avoid actual error?
So I want to have something like this:
```
....
if <check if I have outer insert into exec> = 0
insert into <#some temporary table>
exec <stored procedure>
```
In other words - `insert ... exec` is optional, it's nice to have it, but I want to skip it if somebody tries to call my procedure with outer `insert ... exec`
|
A naive approach is to use `TRY/CATCH` block:
```
BEGIN
DECLARE @isNested BIT = 0;
BEGIN TRY
BEGIN TRANSACTION;
EXEC p2;
ROLLBACK; --COMMIT;
END TRY
BEGIN CATCH
-- Msg 8164 An INSERT EXEC statement cannot be nested.
IF ERROR_NUMBER() = 8164 SET @isNested = 1;
ROLLBACK;
END CATCH
SELECT @isNested;
END
```
**[db<>fiddle demo](https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=08f4b0a8161a001cde8c5ee4dc9e5a2f)**
|
As stated [here](https://stackoverflow.com/a/3795413/2118383)
> This is a common issue when attempting to 'bubble' up data from a
> chain of stored procedures. A restriction in SQL Server is you can
> only have one INSERT-EXEC active at a time. I recommend looking at
> [How to Share Data Between Stored Procedures](http://www.sommarskog.se/share_data.html) which is a very
> thorough article on patterns to work around this type of problem.
Try with `OpenRowset`
```
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET ('SQLOLEDB','Server=(local);TRUSTED_CONNECTION=YES;','set fmtonly off EXEC [ServerName].dbo.[StoredProcedureName] 1,2,3')
```
|
Avoiding "An INSERT EXEC statement cannot be nested"
|
[
"",
"sql",
"sql-server",
"insert-into",
""
] |
This should be a simple one, but I have not found any solution:
The normal way is using an alias like this:
```
CASE WHEN ac_code='T' THEN 'time' ELSE 'purchase' END as alias
```
When using alias in conjunction with `UNION ALL` this causes problem because the alias is not treated the same way as the other columns.
Using an alias to assign the value is not working. It is still treated as alias, though it has the column name.
```
CASE WHEN ac_code='T' THEN 'time' ELSE 'purchase' END as ac_subject
```
I want to assign a value to a column based on a condition.
```
CASE WHEN ac_code='T' THEN ac_subject ='time' ELSE ac_subject='purchase' END
```
Now I get the error message
> UNION types character varying and boolean cannot be matched
How can I assign a value to a column in a case statement without using an alias in the column (shared by other columns in `UNION`)?
Here is the whole (simplified) query:
```
SELECT hr_id,
CASE WHEN hr_subject='' THEN code_name ELSE hr_subject END
FROM hr
LEFT JOIN code ON code_id=hr_code
WHERE hr_job='123'
UNION ALL
SELECT po_id,
CASE WHEN po_subject='' THEN code_name ELSE po_subject END
FROM po
LEFT JOIN code ON code_id=po_code
WHERE po_job='123'
UNION ALL
SELECT ac_id,
CASE WHEN ac_code='T' THEN ac_subject='time' ELSE ac_subject='purchase' END
FROM ac
WHERE ac_job='123'
```
|
1. There is no alias in your presented query. You are confusing terms. This would be a [**column alias**](http://www.postgresql.org/docs/current/interactive/queries-table-expressions.html#QUERIES-TABLE-ALIASES):
```
CASE WHEN hr_subject='' THEN code_name ELSE hr_subject END AS ac_subject
```
2. In a [**`UNION` query**](http://www.postgresql.org/docs/current/interactive/queries-union.html), the number of columns, column names and data types in the returned set are determined by the *first* row. All appended rows have to match the row type. Column names in appended rows (including aliases) are just noise and ignored. Maybe useful for documentation, nothing else.
3. The `=` operator does *not* assign anything in a `SELECT` query. It's the equality operator that returns a `boolean` value. `TRUE` if both operands are equal, etc. This returns a `boolean` value: `ac_subject='time'` Hence your error message:
> UNION types character varying and boolean cannot be matched
The only way to "assign" a value to a particular output column in this query is to include it at the right position in the `SELECT` list.
4. The information in the question is incomplete, but I suspect you are also confusing the empty string (`''`) with the `NULL` value. A distinction that you need to understand before doing *anything* else with relational databases. [Maybe start here.](http://www.postgresql.org/docs/current/interactive/functions-comparison.html) In this case you would rather use [`COALESCE`](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL) to provide a default for `NULL` values:
```
SELECT hr_id, COALESCE(hr_subject, code_name) AS ac_subject
FROM hr
LEFT JOIN code ON code_id=hr_code
WHERE hr_job = '123'
UNION ALL
SELECT po_id, COALESCE(po_subject, code_name)
FROM po
LEFT JOIN code ON code_id=po_code
WHERE po_job = '123'
UNION ALL
SELECT ac_id, CASE WHEN ac_code = 'T' THEN 'time'::varchar ELSE 'purchase' END
FROM ac
WHERE ac_job = '123'
```
Just an educated guess, assuming type `varchar`. You should have added table qualification to column names to clarify their origin. Or table definitions to clarify everything.
|
The CASE expression is supposed to return a value, e.g. 'time'.
Your value is another expression `subject ='time'` which is a boolean (true or false).
Is this on purpose? Does the other query you glue with UNION have a boolean in that place, too? Probably not, and this is what the DBMS complains about.
|
Assign a case value to a column rather than an alias
|
[
"",
"sql",
"postgresql",
"union",
"postgresql-9.1",
""
] |
I have a data table that essentially looks like this:
```
Employee ID ACTION Action DATE
1 SIC 12/10/15
1 CSL 11/10/15
2 SIC 12/10/15
2 CSL 11/10/15
1 CSL 10/22/15
2 CSL 10/22/15
1 SAC 10/21/15
2 SAC 10/21/15
```
I am trying to select every row with a CSL action that is not within 7 days of an SIC or SAC action row. The output I desire is this:
```
Employee ID ACTION Action DATE
1 CSL 11/10/15
2 CSL 11/10/15
```
What I have tried is joining the table to itself like this:
```
Select *
From
TableName A join TableName B on Keyfield A = Keyfield B
Where
A.Action = 'CSL' and
B.Action in ('SIC','SAC') and
A.Action_DT not between B.Action_DT-7 and B.Action_DT+7
```
But, that returns the undesirable 10/22/15 row, I am assuming because it is more than 7 days from the 12/10/15 SIC row.
Any help would be greatly appreciated.
|
You may rather need an self-antijoin instead of self-innerjoin. Try it like this:
```
select *
from TableName A
where A.action = 'CSL'
and not exists (
select 1
from TableName B
where B.action in ('SIC','SAC')
and B.key_field = A.key_field
and A.action_dt between B.action_dt-7 and B.action_dt+7
);
```
**Note:** I did not put any effort into replicating your input data, so this query is only hypothetical and may not even work. You can try it anyway.
|
Outer join the table to itself matching *unwanted* results, but filter out hits:
```
select A.*
from TableName A
left join TableName B on A.Action != B.Action
and A.Action_DT between B.Action_DT-7 and B.Action_DT+7
where A.Action = 'CSL'
and B.Action is null
```
A *successful* join means we *don't* want the row - we want the *missed* joins, which have all nulls for the joined columns, which is why a test for null for any joined column in the where clause returns what we want.
|
how can I select only rows that are not within a week of another type of row?
|
[
"",
"sql",
"oracle",
""
] |
I want to look at every row in `Examinations`, and for `Batra`, I want to update that column in `Patients` IF the value is 1 or 2.
A row for each patient already exists in `Patients`.
`Examinations`:
```
|ExaminationId | PatientId | Batra |
---------------------------------------
| 12345 | 123 | 2 |
| 54321 | 123 | 1 |
| 98765 | 123 | 0 |
```
`Patients`:
```
|PatientId | Batra |
-----------------------
| 123 | 0 |
```
The following is the result I want after performing this operation.
```
|PatientId | Batra|
-------------------
| 123 | 1 |
```
I have tried to produce some JOIN statement but I fail.
Edit: Something like this
```
UPDATE Patients
SET Patients.Batra = Examinations.Batra
WHERE Examinations.Batra = 1
OR Examinations.Batra = 2
```
Edit 2: It does not have to be set to 1. If 2 were the last value, it would be set to 2. Like this:
```
foreach(row in Examinations){
if(row.Batra == 1 || row.Batra == 2){
Patients.Batra = row.Batra
}
}
```
|
Try something like this Query
```
UPDATE p
SET p.Batra = e.Batra
FROM Patients p
INNER JOIN Examinition e ON p.PatientId = e.PatientId
WHERE e.Batra IN (1,2)
```
|
If I understand you correctly, you want something like this:
```
UPDATE
Patients
SET
Batra=em.batra
FROM
Patients AS p
INNER JOIN (
SELECT
e.patientid,
MAX(e.batra) AS batra
FROM
Examinations AS e
GROUP BY
e.patientid
) AS em ON
em.patientid=p.patientid;
```
|
SQL Server join statement
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Imagine I have two simple tables, such as:
```
CREATE TABLE departments(dept INT PRIMARY KEY, name);
CREATE TABLE employees(id PRIMARY KEY, fname, gname,
dept INT REFERENCES departments(dept));
```
(simplified, of course).
I could have any of the following statements:
```
SELECT * FROM employees e INNER JOIN departments d ON e.dept=d.dept;
SELECT * FROM employees e NATURAL JOIN departments d;
SELECT * FROM employees e JOIN departments d USING(dept);
```
A working example can be found here: [SQL Fiddle: http://sqlfiddle.com/#!15/864a5/13/10](http://sqlfiddle.com/#!15/864a5/13/10 "SQL Fiddle")
They all give nearly the same results — certainly the same rows.
I have always preferred the first form because of its flexibility, readability and predictability — you clearly define what is connected to what.
Now, apart from the fact that the first form has a duplicated column, is there a real advantage to the other two forms? Or are they just syntactic sugar?
I can see the disadvantage in the latter forms is that you are expected to have named your primary and foreign keys the same, which is not always practical.
|
> Now, apart from the fact that the first form has a duplicated column, is there a real advantage to the other two forms? Or are they just syntactic sugar?
**TL;DR** NATURAL JOIN is used in a certain style of relational programming that is simpler than the usual SQL style. (Although when embedded in SQL it is burdened with the rest of SQL query syntax.) That's because 1. it *directly* uses the *simple* operators of ***predicate logic***, the language of precision in engineering (including software engineering), science (including computer science) and mathematics, and moreover 2. *simultaneously* and *alternatively* it *directly* uses the *simple* operators of ***relational algebra***.
The common complaint about NATURAL JOIN is that since shared columns aren't explicit, after a schema change inappropriate column pairing may occur. And that may be the case in a particular development environment. But in that case there was a *requirement* that *only certain columns be joined* and NATURAL JOIN without PROJECT *was not appropriate*. So these arguments *assume* that NATURAL JOIN is being used inappropriately. Moreover the arguers *aren't even aware* that they are ignoring requirements. Such complaints are specious. (Moreover, sound software engineering design principles lead to not having interfaces with such specificiatons.)
Another related misconceived specious complaint from the same camp is that ["NATURAL JOIN does not even take foreign key relationships into account"](https://stackoverflow.com/questions/41085629/when-to-and-when-not-to-natural-join?noredirect=1&lq=1#comment69377502_41085629). But [*any* join is there because of *the table meanings*, not the constraints](https://stackoverflow.com/a/24909854/3404097). Constraints are not needed to query. If a constraint is added then a query remains correct. If a constraint is dropped then a query *relying* on it *becomes wrong* and must be changed to a phrasing that *doesn't* rely on it that *wouldn't have had to change*. This has nothing to do with NATURAL JOIN.
---
You have described the difference in effect: just one copy of each common column is returned.
From [Is there any rule of thumb to construct SQL query from a human-readable description?](https://stackoverflow.com/a/33952141/3404097):
> It turns out that natural language expressions and logical expressions and relational algebra expressions and SQL expressions (a hybrid of the last two) correspond in a rather direct way.
Eg from [Codd 1970](https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf):
> The relation depicted is called *component*. [...] The meaning of *component*(*x*, *y*,*z*) is that part *x* is an immediate component (or subassembly) of part *y*, and *z* units of part *x* are needed to assemble one unit of part *y*.
From [this answer](https://stackoverflow.com/a/27682724/3404097):
> Every base table has a statement template, aka *predicate*, parameterized by column names, by which we put a row in or leave it out.
> Plugging a row into a predicate gives a statement aka proposition. The rows that make a true proposition go in a table and the rows that make a false proposition stay out. (So a table states the proposition of each present row and states NOT the proposition of each absent row.)
> But *every table expression value* has a predicate per its expression. The relational model is designed so that if tables `T` and `U` hold rows where T(...) and U(...) (respectively) then:
* `T NATURAL JOIN U` holds rows where T(...) AND U(...)
* `T WHERE`*`condition`* holds rows where T(...) AND *condition*
* `T UNION CORRESPONDING U` holds rows where T(...) OR U(...)
* `T EXCEPT CORRESPONDING U` holds rows where T(...) AND NOT U(...)
* `SELECT DISTINCT`*`columns to keep`*`FROM T` holds rows where
THERE EXISTS *columns to drop* SUCH THAT T(...)
* etc
Whereas reasoning about SQL otherwise is... not "natural":
An SQL SELECT statement can be thought of algebraically as 1. implicitly RENAMEing each column `C` of a table with (possibly implicit) correlation name `T` to `T.C`, then 2. CROSS JOINing, then 3. RESTRICTing per INNER ON, then 4. RESTRICTing per WHERE, then 5. PROJECTing per SELECT, then 6. RENAMEing per SELECT, dropping `T.`s, then 7. implicitly RENAMEing to drop remaining `T.`s Between the `T.`-RENAMEings algebra operators can also be thought of as logic operators and table names as their predicates: `T JOIN ...` vs `Employee T.EMPLOYEE has name T.NAME ... AND ...`. But conceptually inside a SELECT statement is a double-RENAME-inducing CROSS JOIN table with `T.C`s for column names while outside tables have `C`s for column names.
Alternatively an SQL SELECT statement can be thought of logically as 1. introducing `FORSOME T IN E` around the entire statement per correlation name `T` and base name or subquery `E`, then 2. referring to the value of quantified `T` by using `T.C` to refer to its `C` part, then 3. building result rows from `T.C`s per FROM etc, then 4. naming the result row columns per the SELECT clause, then 4. leaving the scope of the `FORSOME`s. Again the algebra operators are being thought of as logic operators and table names as their predicates. Again though, this conceptually has `T.C` inside SELECTs but `C` outside with correlation names coming and going.
These two SQL interpretations are nowhere near as straightforward as just using JOIN or AND, etc, *interchangeably*. (You don't have to agree that it's simpler, but that perception is why NATURAL JOIN and UNION/EXCEPT CORRESPONDING are there.) (Arguments criticizing this style outside the context of its intended use are specious.)
USING is a kind of middle ground orphan with one foot in the NATURAL JOIN camp and one in the CROSS JOIN. It has no real role in the former because there are no duplicate column names there. In the latter it is more or less just abbreviating JOIN conditions and SELECT clauses.
> I can see the disadvantage in the latter forms is that you are expected to have named your primary and foreign keys the same, which is not always practical.
PKs (primary keys), FKs (foreign keys) & other constraints are not needed for querying. (Knowing a column is a function of others allows scalar subqueries, but you can always phrase without.) Moreover any two tables can be meaningfully joined. If you need two columns to have the same name with NATURAL JOIN you rename via SELECT AS.
|
`NATURAL JOIN` is not as widely supported and neither is `JOIN USING` (i.e. not in SQL Server)
There are many arguments for NATURAL JOIN being a bad idea. Personally I think that not explicitly naming things such as joins is inviting disaster.
For example if you add a column in a table without realising that it happens to fit a 'natural join', you can have unexpected code failures when a natural join suddenly does something completely different. You'd think adding a column wouldn't break anything but it can break badly written views and a natural join.
When you're building a system you should never allow these kinds of risks to creep in. It's the same as creating views across multiple tables without a table alias on every column and using insert without a column list.
For those reasons, if you are just learning SQL now, get *out* of the habit of using these.
|
Inner Join vs Natural Join vs USING clause: are there any advantages?
|
[
"",
"sql",
"join",
""
] |
i try to get top 5 salary and second highest from employees table but it shows me also lowest salaries
check this picture salary column
[](https://i.stack.imgur.com/0dCzx.png)
and this is the output
[](https://i.stack.imgur.com/sbVQJ.png)
this is the queries which i try
```
SELECT MAX(Salary) FROM Employees
WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employees )
select top 5 salary from Employees;
```
|
I think you are doing this for learning SQL purpose. So, if you can convert the salary column to integer ([Alter table, Alter column](http://www.w3schools.com/sql/sql_alter.asp)), that will really help you and your queries will become as simple as below:
For 2nd highest:
```
SELECT max(salary)
FROM employees
WHERE salary < (SELECT max(salary)
FROM employees);
```
'<' or '<>' are more efficient here, as 'NOT IN' is a costly operation.
and for Top 5:
```
SELECT TOP 5 salary
FROM employees
ORDER BY salary DESC;
```
Else as told above, you will have to convert it on the fly or give ranks to the rows.
|
The reason you are not getting correct result is because you are saving `salary` as `string`. So `200` in `string` will be more than `1000` in `string`.
To compare 2 string, their characters are converted to `ascii` characters and then compared based on `ascii` characters. So `ascii` value of `2` in `200` is more than `1` in `1000`. Hence incorrect result.
To give your answer, this will give you rank in descending order.
`SQLFiddle Demo for Top 5`
```
select salary,
rank() over(order by cast(salary as float) desc)
as rnk from employee
```
You can use this to get anything. Like top 5
```
select * from
(select salary,rank() over(order by cast(salary as float) desc)
as rnk from employee)
where rnk <=5
```
Only second highest
```
select * from
(select salary,rank() over(order by cast(salary as float) desc)
as rnk from employee)
where rnk =2
```
|
lowest and highest salaries display
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
""
] |
I want to insert only one letter into the table field. What will be the data type that will accept only one character?
I don't want to use `VARCHAR(1)`, because it will truncate the remaining characters. I want that if the input is 1 character, it will insert otherwise it will not insert into the table
|
If you set the column length to be longer (for example, 255), then you can add a trigger which checks the length of the new field. If greater than 1 then you can trigger an error.
For a test example:-
```
CREATE TABLE IF NOT EXISTS `insert_test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`sometext` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
--
-- Triggers `insert_test`
--
DROP TRIGGER IF EXISTS `length_check_trigger`;
DELIMITER //
CREATE TRIGGER `length_check_trigger` BEFORE INSERT ON `insert_test`
FOR EACH ROW BEGIN
DECLARE msg VARCHAR(255);
IF LENGTH(NEW.sometext) > 1 THEN
SET msg = "DIE: String Too Long.";
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = msg;
END IF;
END
//
DELIMITER ;
```
You can change the message to what you want. You will need a similar trigger to catch updates as well.
|
Have you tried the datatype:
```
TINYTEXT
```
?
or would it not work on the system you are using?
|
Datatype for 1 character only
|
[
"",
"mysql",
"sql",
""
] |
i'm using mysql, and i want to add some custom order for my select query.
For example i'm having this table name A, there are column 'a' and 'b'. And I can assure that 'b' is bigger than 'a'.
```
|a|b|
|3|4|
|1|9|
|2|7|
|6|9|
|8|9|
|2|6|
|4|8|
```
I want to select them out and order by value c = 5, the order rule is:
if c is less than both a and b then this is weight 1.
if c is between a and b then this is weight 2.
if c is bigger than both a and b then this is weight 3.
and then order by this weight value.
(the order of the same weight does not need to be considered here.)
so the result should be:
```
|a|b|
|6|9| -> weight 1
|8|9| -> weight 1
|1|9| -> weight 2
|2|6| -> weight 2
|2|7| -> weight 2
|4|8| -> weight 2
|8|9| -> weight 3
```
So how do I write this select query?
PS: It doesn't have to specify weight 1, 2 and 3 in the query, the weight I 'invented' above myself is just to address the order rule!
|
You can use `CASE` for this:
```
ORDER BY CASE WHEN @c < a AND @c < b THEN 1
WHEN a < @c AND @c < b THEN 2
WHEN @c > a AND @c > b THEN 3
END
```
|
```
Select a, b from (
Select a, b,
case when a>5 and b>5 then 1 when a<5 and b>5 then 2 when a<5 and b>5 then 3 end as weight
order by weight)w
```
|
SQL - How to add this custom order for my select query?
|
[
"",
"mysql",
"sql",
""
] |
I have the table called Host in SQL as follows.
```
**FirstName** **LastName**
Krishna Krishna Murthy
Jithendra Jithendra Reddy
Varun Varun Kumar
Mahendra Varma
```
if firstname column is a part of LastName column I need to create one more column with values yes/No.
Output should as follows
```
**FirstName** **LastName** **Subset**
Krishna Krishna Murthy Yes
Jithendra Jithendra Reddy Yes
Varun Varun Kumar Yes
Mahendra Varma No
```
any ideas?
|
You can use
```
SELECT FirstName, LastName,
CASE WHEN LastName LIKE '%' + FirstName + '%'
THEN 'Yes' ELSE 'No'
END AS Flag
FROM host
```
to get the results you are looking for.
If you just want to search for cases where the forename is repeated, you might want to adjust the case to `LastName LIKE FirstName + ' %'` to ensure that it only matches where there are at least two words, and the first is in FirstName, e.g.
```
**FirstName** **LastName** **Subset**
Bob Bobbins No
SingleName SingleName No
AAA BBB AAA No
```
|
You can use `like` for this. In standard SQL, this would look like:
```
select h.*,
(case when h.lastname like concat('%', h.firstname, '%')
then 'yes' else 'no'
end) as flag
from host h;
```
Note that most databases support operators for string concatenation, such as `||` or `+`.
|
Check to see if a column value is contained by another column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Is there anything equivalent to the `LIKE` we use with `varchar` that can be used with `int`?
I want to filter with three `parameters`, two of them are `varchar` so I can use `LIKE` and in case I get an empty string I still can retrieve all records. But how can I achieve similar thing with an `int`
I want to retrieve even if `fkProductTypeID` doesn't exist:
This is my query:
```
select * from Product
where Code like '%'+ @code +'%' AND name LIKE '%'+ @name +'%'
AND fkProductTypeID = @ptype
```
I want it to be able to retrieve results even when I supply an ID that doesn't exist. From front-end, if I pass ' ' in `@code`,' ' in `@name` and 0 in `@ptype` I want it to retrieve all records
|
You could use this:
```
select * from Product
where Code like '%'+ @code +'%' AND name LIKE '%'+ @name +'%'
AND (@ptype IS NULL OR fkProductTypeID = @ptype)
```
if it's in a stored-procedure you should use an `IF ... ELSE`:
```
CREATE PROCEDURE dbo.SP_Name(@ptype int, @code varchar(1000), @name varchar(1000))
AS
BEGIN
IF @ptype IS NULL
BEGIN
select * from Product
where Code like '%'+ @code +'%' AND name LIKE '%'+ @name +'%'
END
ELSE
BEGIN
select * from Product
where Code like '%'+ @code +'%' AND name LIKE '%'+ @name +'%'
AND fkProductTypeID = @ptype
END
END
```
|
Is this what you want?
```
select *
from Product
where Code like '%'+ @code +'%' AND name LIKE '%'+ @name +'%' AND
(fkProductTypeID = @ptype or @ptype is null);
```
The value `''` doesn't make sense for an integer. And, don't mix types for comparisons. Just use `NULL` for this purpose.
|
SQL LIKE with int
|
[
"",
"sql",
"sql-server",
"wildcard",
""
] |
```
select *
from sometable
where column1='somevalue' and column2='someothervalue'
```
For the above query if both column1 and column2 have index tables ix1 and ix2, then which index is used for the above example? assume that 'somevalue' and 'someothervalue' are existing values in the table and column1 and column2 have non unique values.
Also how would it differ if the and was replaced by or?
|
The index that is used is the one that the oracle optimizer believes is best for the query. "Best" in this case means the most selective -- that is, which single condition matches the fewest rows (estimated based on the statistics on the table).
For your query, the best index is either `sometable(column1, column2)` or `somtable(column2, column1)`. With equality conditions, either matches the `where` clause.
With an `or` the situation is much tricker. I think that Oracle can use both indexes for the query (check the explain plan), but this is more likely with a `union`/`union all`:
```
select *
from sometable
where column1 = 'somevalue' or
selet *
from sometable
where column2 = 'someothervalue' and column1 > 'somevalue';
```
|
Quite impossible to say, depending on many elements; for example:
```
create table someTable ( column1 number, column2 number);
create index someIndex1 on someTable(column1);
create index someIndex2 on someTable(column2);
insert into sometable values ( 1,10);
insert into sometable values ( 2,10);
insert into sometable values ( 3,10);
insert into sometable values ( 4,20);
insert into sometable values ( 5,30);
insert into sometable values ( 5,40);
insert into sometable values ( 5,50);
commit;
```
Now I see the following:
```
SQL> select * from sometable where column1 = 1 and column2 = 10;
Execution Plan
----------------------------------------------------------
Plan hash value: 1247292719
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 26 | 1 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| SOMETABLE | 1 | 26 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SOMEINDEX1 | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("COLUMN2"=10)
2 - access("COLUMN1"=1)
```
**Same behaviour** with different filter:
```
SQL> select * from sometable where column1 = 5 and column2 = 40;
Execution Plan
----------------------------------------------------------
Plan hash value: 1247292719
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 26 | 1 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| SOMETABLE | 1 | 26 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SOMEINDEX1 | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("COLUMN2"=40)
2 - access("COLUMN1"=5)
```
Now I **calculate statistics** and try again:
```
SQL> select * from sometable where column1 = 5 and column2 = 40;
Execution Plan
----------------------------------------------------------
Plan hash value: 2029385636
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| SOMETABLE | 1 | 6 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SOMEINDEX2 | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("COLUMN1"=5)
2 - access("COLUMN2"=40)
```
Now it has stats on my table, so it decides to use the **second index**, which performs better for the filters I gave.
Now I try to change again the filters:
```
SQL> select * from sometable where column1 = 2 and column2 = 10;
Execution Plan
----------------------------------------------------------
Plan hash value: 1247292719
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| SOMETABLE | 1 | 6 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | SOMEINDEX1 | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("COLUMN2"=10)
2 - access("COLUMN1"=2)
```
... and it uses again the **first index**, which is the best to match my conditions.
Even in this simple sequence, you can see how different can run a query, depending on data, statistics, and so on.
|
Which index is used ? ORACLE DB SQL
|
[
"",
"sql",
"oracle",
""
] |
I have the following relations in my db:
**Organization:** information about political and economical organizations.
name: the full name of the organization
abbreviation: its abbreviation
**isMember:** memberships in political and economical organizations.
organization: the abbreviation of the organization
country: the code of the member country
**geo\_desert:** geographical information about deserts
desert: the name of the desert
country: the country code where it is located
province: the province of this country
My task is to retrieve organizations which have within their members full set of countries with deserts. This organization can have also countries without deserts. So I have a set of countries with deserts and every organization in result should have all of them as members and arbitrary amount of other (no desert) countries.
I tried so far to write following code, but it doesn't work.
```
WITH CountriesWithDeserts AS (
SELECT DISTINCT country
FROM dbmaster.geo_desert
), OrganizationsWithAllDesertMembers AS (
SELECT organization
FROM dbmaster.isMember AS ism
WHERE (
SELECT count(*)
FROM (
SELECT *
FROM CountriesWithDeserts
EXCEPT
SELECT country
FROM dbmaster.isMember
WHERE organization = ism.organization
)
) IS NULL
), OrganizationCode AS (
SELECT name, abbreviation
FROM dbmaster.Organization
)
SELECT oc.name AS Organization
FROM OrganizationCode AS oc, OrganizationsWithAllDesertMembers AS owadm
WHERE oc.abbreviation=owadm.organization;
```
**UPD:** DBMS says: "ism.organization is not defined"
I'm using DB2/LINUXX8664 9.7.0
Output should look like this:
NAME
--------------------------------------------------------------------------------
African, Caribbean, and Pacific Countries
African Development Bank
Agency for Cultural and Technical Cooperation
Andean Group
|
I find the easiest way to handle this is by using `group by` and `having`. You just want to focus on the deserts, so the rest of the countries don't matter.
```
select m.organization
from isMember m join
geo_desert d
on m.country = d.country
group by m.organization
having count(distinct m.country) = (select count(distinct d.country) from geo_desert);
```
The `having` clause simply counts the number of matching (i.e. desert) countries and checks that all are included.
|
Word it like this: You are looking for organizations for which not exists a desert country they don't include.
```
select *
from organization o
where not exists
(
select country from geo_desert
except
select country from ismember
where organization = o.abbreviation
);
```
|
SQL query that will retrieve set containing all entries from another set
|
[
"",
"sql",
"db2",
""
] |
Let's imagine I have a query like the following:
```
SELECT
CASE WHEN ONE = 1 THEN 1 ELSE 0 END,
CASE WHEN JUST_ONE = 1 THEN 1 ELSE 0 END,
CASE WHEN ANOTHER_ONE = 1 THEN 1 ELSE 0 END,
CASE WHEN TWO = 1 THEN 1 ELSE 0 END,
CASE WHEN JUST_TWO = 1 THEN 1 ELSE 0 END,
CASE WHEN ANOTHER_TWO = 1 THEN 1 ELSE 0 END
-- 20 more things like that where changes only columns name
FROM
SOME_TABLE;
```
As you can see, the only difference between these two groups is that in the first one I use columns that have 'ONE' and in the second the ones that have 'TWO' and I have around 30 groups like this in my actual query so I wonder if there is a way to shorten it somehow?
|
Since they are different columns, you must explicitly mention them separately in the SELECT list. You cannot do it dynamically in pure **SQL**.
I would suggest, using a good **text editor**, it would hardly take a minute or two to write the entire SQL.
You could use **DECODE** which will have some less syntax instead of **CASE** expression which is verbose.
For example,
```
DECODE(ONE, 1, 1, 0) AS col1,
DECODE(JUST_ONE, 1, 1, 0) AS col2,
DECODE(ANOTHER_ONE, 1, 1, 0) AS col3,
DECODE(TWO, 1, 1, 0) AS col4,
DECODE(JUST_TWO, 1, 1, 0) AS col5,
DECODE(ANOTHER_TWO, 1, 1, 0) as col6
```
I would suggest to stick to **SQL**, and not use **PL/SQL**. They are not the same, they are different engines. `PL --> Procedural Language`.
But if you insist, then you could use a **cursor for loop** to loop through all the columns in [**DBA|ALL|USER]\_TAB\_COLS**. You could use a **SYS\_REFCURSOR** to see the data. First you will have to build the **dynamic SQL**.
|
Below is an example of dynamically creating query. You can put this in e.g. cursor variable if you want more queries created.
```
select 'SELECT ' || listagg('CASE WHEN '||column_name||' = 1 THEN 1 ELSE 0 END ' || column_name,',') within group(order by column_name) || ' FROM YOUR_TABLE_NAME'
from cols
where data_type in ('NUMBER')
and table_name = 'YOUR_TABLE_NAME';
```
You can use `COLS` view to get all column names and their datatypes for all your tables.
|
Metaprogramming oracle sql select statement
|
[
"",
"sql",
"oracle",
""
] |
I have a group of machines. They run and stop occasionally. These run times are recorded into a postgres table automatically with timestamps `starttime` and `endtime`.
I need to find the run time per machine in a 6 hour period. This is what I have so far:
```
SELECT machine, SUM(EXTRACT(EPOCH FROM (endtime - starttime)))
FROM table
WHERE
starttime >= '2016-01-27 12:00:00'
AND starttime <= '2016-01-27 18:00:00'
GROUP BY machine
ORDER BY machine
```
So this works, I get the run time in seconds by machine over that time period. BUT it has a flaw - any run times that started before 12:00 do not get counted. And any run times that started in my time frame but don't end until after it have time counted that shouldn't be there.
Is there a solution to ONLY extract the time that is inside the time frame? My initial thought would be to select all rows where:
```
endtime >= '2016-01-27 12:00:00'
```
and somehow, in memory, set all the start times at `'2016-01-27 12:00:00'`
where the start time is earlier than that and:
```
starttime <='2016-01-27 18:00:00'
```
and, again in memory without updating the database, set all end times to `'2016-01-27 18:00:00'` where the end time is later than that. and then run the extraction/summation query.
But I'm struggling on how to implement something like this. I have a working solution in Java/Python this data is returned to, but they are iterative and take more time than I'd like. I'd really like to find an SQL solution to this if possible.
Edit: To clarify, I need to count ALL run time that **occurs within the time frame** - if a run starts before the time frame only the portion of that run that occurs after the time frame should be counted.
|
EDIT : this is what you need.
When it start between the range, and ends after, it will calc it as ended at 18:00:00.
Same goes for when ended between the range and start before, it will calc as it started at 12:00:00
```
SELECT machine, SUM(EXTRACT(EPOCH FROM (endtime - starttime)))
FROM (SELECT machine,
case when starttime <= '2016-01-27 12:00:00' then '2016-01-27 12:00:00' else starttime end as starttime,
case when endtime>= '2016-01-27 18:00:00' then '2016-01-27 18:00:00' else endtime end as endtime, FROM table
WHERE
(endtime>= '2016-01-27 12:00:00'
AND endtime <= '2016-01-27 18:00:00')
OR (starttime>= '2016-01-27 12:00:00'
AND starttime<= '2016-01-27 18:00:00')
GROUP BY machine
ORDER BY machine
```
|
You can use the overlaps operator:
```
SELECT machine, SUM(EXTRACT(EPOCH FROM (endtime - starttime)))
FROM table
where (starttime, endtime) overlaps (timestamp '2016-01-27 12:00:00', timestamp '2016-01-27 18:00:00')
GROUP BY machine
ORDER BY machine
```
|
SQL solution to overlapping timeframes
|
[
"",
"sql",
"postgresql",
""
] |
How do I insert a record in a column having CLOB data type(so big text) having single quote in it?
I've already seen
[How to handle a single quote in Oracle SQL](https://stackoverflow.com/questions/2875257/how-to-handle-a-single-quote-in-oracle-sql)
but the solution is make it manual and I'm trying to insert Long text which contains lot of single quotes. Once Oracle detects ', my INSERT doesn't work.
My question is if there is a kind of command like "set define off" where I can tell Oracle to disable the ' in the text
|
Try with the [Q'](https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements003.htm) operator; for example:
```
create table t_clob ( a clob)
insert into t_clob values (q'[START aa'a''aa aa 'a'' aa'a' a'a' a END]')
```
|
You can use quoted notation:
`SELECT q'|text'containing'quotes|' FROM DUAL`
The pipes can be replaced by any matching symbol. It must be the same symbol at the beginning and at the end except when parentheses are used, then `()`, `[]` or `{}`.
See <https://docs.oracle.com/database/121/SQLRF/sql_elements003.htm#SQLRF00218>
|
How to handle a single quote in Oracle SQL with long text
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to extract the total number of website hits plus the rate of change(rate of increase/decrease) per client per city per day as a percentage in a single SQL query but not able to get it right.
I've created an example at
> [http://www.sqlfiddle.com/#!9/fd279/8](http://www.sqlfiddle.com/#!9/fd279/8 "Rate of change")
Could I please request assistance?
|
To get percentage based increase/decrease
```
select
t1.HitDate,
t1.City,
t1.Client,
SUM(t1.NumVisits),
IFNULL(((SUM(t1.NumVisits) - IFNULL((SELECT SUM(t2.NumVisits) FROM PAGE_HITS t2 WHERE t2.HitDate = t1.HitDate-1 AND t2.City = t1.City AND t2.Client = t1.Client), 0)) * 100) / IFNULL((SELECT SUM(t2.NumVisits) FROM PAGE_HITS t2 WHERE t2.HitDate = t1.HitDate-1 AND t2.City = t1.City AND t2.Client = t1.Client), 0),0) as rate_of_change
from
PAGE_HITS t1
WHERE
t1.Client='C'
group by
HitDate,
City
ORDER BY HitDate;
```
|
Try avoiding inner join, it has wrong condition anyway (`t1.hitDate = t2.hitDate and t1.hitDate-1 = t2.hitDate` which can't be satisfied):
```
select
t1.HitDate,
t1.City,
t1.Client,
SUM(t1.NumVisits) - IFNULL((SELECT SUM(t2.NumVisits) FROM PAGE_HITS t2 WHERE t2.HitDate = t1.HitDate-1 AND t2.City = t1.City AND t2.Client = t1.Client), 0) as rate_of_change
from
PAGE_HITS t1
WHERE
t1.Client='C'
group by
HitDate,
City
ORDER BY HitDate;
```
|
MySQL - Total count & rate of change in a single SQL query
|
[
"",
"mysql",
"sql",
""
] |
For testing / debugging purposes, I need to get an enormous string into a field for one of my records. Doesn't matter what the string is. Could be a million "\*"s or the contents of Moby Dick.... whatever.
I'm not able to save a string so large via the app's UI because it crashes the browser. I'd like to write an SQL query to generate the massive string. Something like this:
`UPDATE my_table SET text_field = <HUGE STRING CREATION> WHERE id = 42`
The part I'm not sure how to do is `<HUGE STRING CREATION>`. I know I can concatenate strings with `||` but is there an SQL way to do that in a loop?
|
You can use the `repeat()` function; quoting from the [Postgres *String Functions and Operators* documentation](https://www.postgresql.org/docs/current/functions-string.html):
> Function: `repeat(string text, number int)`
> Return Type: `text`
> Description: Repeat *string* the specified *number* of times
```
testdb=# SELECT REPEAT('SQL', 3);
repeat
-----------
SQLSQLSQL
(1 row)
```
You can also use a custom function to generate a random string with a defined lengh, for that refer to this question:
[How do you create a random string that's suitable for a session ID in PostgreSQL?](https://stackoverflow.com/questions/3970795/how-do-you-create-a-random-string-in-postgresql)
|
Create your own lipsum function.
```
create or replace function lipsum( quantity_ integer ) returns character varying
language plpgsql
as $$
declare
words_ text[];
returnValue_ text := '';
random_ integer;
ind_ integer;
begin
words_ := array['lorem', 'ipsum', 'dolor', 'sit', 'amet', 'consectetur', 'adipiscing', 'elit', 'a', 'ac', 'accumsan', 'ad', 'aenean', 'aliquam', 'aliquet', 'ante', 'aptent', 'arcu', 'at', 'auctor', 'augue', 'bibendum', 'blandit', 'class', 'commodo', 'condimentum', 'congue', 'consequat', 'conubia', 'convallis', 'cras', 'cubilia', 'cum', 'curabitur', 'curae', 'cursus', 'dapibus', 'diam', 'dictum', 'dictumst', 'dignissim', 'dis', 'donec', 'dui', 'duis', 'egestas', 'eget', 'eleifend', 'elementum', 'enim', 'erat', 'eros', 'est', 'et', 'etiam', 'eu', 'euismod', 'facilisi', 'facilisis', 'fames', 'faucibus', 'felis', 'fermentum', 'feugiat', 'fringilla', 'fusce', 'gravida', 'habitant', 'habitasse', 'hac', 'hendrerit', 'himenaeos', 'iaculis', 'id', 'imperdiet', 'in', 'inceptos', 'integer', 'interdum', 'justo', 'lacinia', 'lacus', 'laoreet', 'lectus', 'leo', 'libero', 'ligula', 'litora', 'lobortis', 'luctus', 'maecenas', 'magna', 'magnis', 'malesuada', 'massa', 'mattis', 'mauris', 'metus', 'mi', 'molestie', 'mollis', 'montes', 'morbi', 'mus', 'nam', 'nascetur', 'natoque', 'nec', 'neque', 'netus', 'nibh', 'nisi', 'nisl', 'non', 'nostra', 'nulla', 'nullam', 'nunc', 'odio', 'orci', 'ornare', 'parturient', 'pellentesque', 'penatibus', 'per', 'pharetra', 'phasellus', 'placerat', 'platea', 'porta', 'porttitor', 'posuere', 'potenti', 'praesent', 'pretium', 'primis', 'proin', 'pulvinar', 'purus', 'quam', 'quis', 'quisque', 'rhoncus', 'ridiculus', 'risus', 'rutrum', 'sagittis', 'sapien', 'scelerisque', 'sed', 'sem', 'semper', 'senectus', 'sociis', 'sociosqu', 'sodales', 'sollicitudin', 'suscipit', 'suspendisse', 'taciti', 'tellus', 'tempor', 'tempus', 'tincidunt', 'torquent', 'tortor', 'tristique', 'turpis', 'ullamcorper', 'ultrices', 'ultricies', 'urna', 'ut', 'varius', 'vehicula', 'vel', 'velit', 'venenatis', 'vestibulum', 'vitae', 'vivamus', 'viverra', 'volutpat', 'vulputate'];
for ind_ in 1 .. quantity_ loop
ind_ := ( random() * ( array_upper( words_, 1 ) - 1 ) )::integer + 1;
returnValue_ := returnValue_ || ' ' || words_[ind_];
end loop;
return returnValue_;
end;
$$;
```
And then use it, for example, to generate 50 words.
```
select lipsum( 50 );
```
|
How to create a huge string in Postgresql
|
[
"",
"sql",
"string",
"postgresql",
""
] |
what is the correct way to do a bulk insertOrUpdate in Slick 3.0?
I am using MySQL where the appropriate query would be
```
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
```
[MySQL bulk INSERT or UPDATE](https://stackoverflow.com/questions/6286452/mysql-bulk-insert-or-update)
Here is my current code which is very slow :-(
```
// FIXME -- this is slow but will stop repeats, an insertOrUpdate
// functions for a list would be much better
val rowsInserted = rows.map {
row => await(run(TableQuery[FooTable].insertOrUpdate(row)))
}.sum
```
What I am looking for is the equivalent of
```
def insertOrUpdate(values: Iterable[U]): DriverAction[MultiInsertResult, NoStream, Effect.Write]
```
|
There are several ways that you can make this code faster (each one *should* be faster than the preceding ones, but it gets progressively less idiomatic-slick):
* Run `insertOrUpdateAll` instead of `insertOrUpdate` if on slick-pg 0.16.1+
```
await(run(TableQuery[FooTable].insertOrUpdateAll rows)).sum
```
* Run your DBIO events all at once, rather than waiting for each one to commit before you run the next:
```
val toBeInserted = rows.map { row => TableQuery[FooTable].insertOrUpdate(row) }
val inOneGo = DBIO.sequence(toBeInserted)
val dbioFuture = run(inOneGo)
// Optionally, you can add a `.transactionally`
// and / or `.withPinnedSession` here to pin all of these upserts
// to the same transaction / connection
// which *may* get you a little more speed:
// val dbioFuture = run(inOneGo.transactionally)
val rowsInserted = await(dbioFuture).sum
```
* Drop down to the JDBC level and run your upsert all in one go ([idea via this answer](https://stackoverflow.com/a/25675325/135978)):
```
val SQL = """INSERT INTO table (a,b,c) VALUES (?, ?, ?)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);"""
SimpleDBIO[List[Int]] { session =>
val statement = session.connection.prepareStatement(SQL)
rows.map { row =>
statement.setInt(1, row.a)
statement.setInt(2, row.b)
statement.setInt(3, row.c)
statement.addBatch()
}
statement.executeBatch()
}
```
|
As you can see at [Slick examples](http://slick.typesafe.com/doc/3.1.1/gettingstarted.html#populating-the-database), you can use `++=` function to insert using JDBC batch insert feature. Per instance:
```
val foos = TableQuery[FooTable]
val rows: Seq[Foo] = ...
foos ++= rows // here slick will use batch insert
```
You can also "size" you batch by "grouping" the rows sequence:
```
val batchSize = 1000
rows.grouped(batchSize).foreach { group => foos ++= group }
```
|
Slick 3.0 bulk insert or update (upsert)
|
[
"",
"mysql",
"sql",
"scala",
"slick",
"typesafe",
""
] |
I'm trying to pull back multiple different sums from one table using different criteria. The only problem is that my subqueries were pulling back more than one response, so I changed my query a bit and then got extremely lost trying to find the best way to do this.
Here's the query I'm working with right now:
```
SELECT TER_ID AS TER_ID,
SUM(MED_AMT) AS NET_SLS_AMT,
SUM(SELECT MED_AMT
FROM DAY_TER_MEDIA
WHERE TND_CD = 1 AND
DAY_TER_MEDIA.DT = '') AS CASH_AMT,
SUM(SELECT MED_AMT
FROM DAY_TER_MEDIA
WHERE DAY_TER_MEDIA.TND_CD IN(4,5,16,18,23,31) AND
DAY_TER_MEDIA.DT = '') AS ELEC_AMT,
SUM(SELECT MED_AMT
FROM DAY_TER_MEDIA
WHERE DAY_TER_MEDIA.TND_CD IN(2,3,8,9,10,11,15,20,52) AND
DAY_TER_MEDIA.DT = '') AS OTHER_AMT
FROM DAY_TER_MEDIA
WHERE DT = ''
GROUP BY TER_ID;
```
As you can see, my intention is to pull back NET\_SLS\_AMT, CASH\_AMT, ELEC\_AMT, and OTHER\_AMT all from the same column and sum only those with a certain TND\_CD, DT, and TER\_ID. As I said, I've been changing this query up multiple times and I know the current iteration of it does not actually do what I need it to. I'm a bit lost as to how to do this while still grouping them all by date and TER\_ID. Any ideas? Or is there even a clean way to do this?
|
The reason you are not getting your expected results, is because there is no relationship between your subqueries and the table in your FROM. You would need to add that relationship in each of your subqueries `WHERE` clauses (which is called a Correlated Subquery since it uses values from the outer query in the subquery):
```
SELECT TER_ID AS TER_ID,
SUM(MED_AMT) AS NET_SLS_AMT,
SUM(SELECT MED_AMT
FROM DAY_TER_MEDIA
WHERE TND_CD = 1 AND TER_ID = t1.TER_ID AND
DAY_TER_MEDIA.DT = '') AS CASH_AMT,
SUM(SELECT MED_AMT
FROM DAY_TER_MEDIA
WHERE DAY_TER_MEDIA.TND_CD IN(4,5,16,18,23,31) AND TER_ID = t1.TER_ID AND
DAY_TER_MEDIA.DT = '') AS ELEC_AMT,
SUM(SELECT MED_AMT
FROM DAY_TER_MEDIA
WHERE DAY_TER_MEDIA.TND_CD IN(2,3,8,9,10,11,15,20,52) AND TER_ID = t1.TER_ID AND
DAY_TER_MEDIA.DT = '') AS OTHER_AMT
FROM DAY_TER_MEDIA t1
WHERE DT = ''
GROUP BY TER_ID;
```
But this is a very inefficient way of doing this. Instead, use `CASE` statements without any subqueries:
```
SELECT
TER_ID,
SUM(CASE WHEN TND_CD = 1 THEN MED_AMT ELSE 0 END) as CASH_AMT,
SUM(CASE WHEN TND_CD IN(4,5,16,18,23,31) THEN MED_AMT ELSE 0 END) as ELEC_AMT,
SUM(CASE WHEN TND_CD IN(2,3,8,9,10,11,15,20,52) THEN MED_AMT ELSE 0 END) AS OTHER_AMT
FROM DAY_TER_MEDIA
WHERE DT= ''
GROUP BY TER_ID
```
|
Use case statements instead of selects
```
SELECT TER_ID AS TER_ID,
SUM(MED_AMT) AS NET_SLS_AMT,
SUM(CASE WHEN TND_CD = 1 THEN MED_AMT
ELSE 0
END) AS CASH_AMT,
SUM(CASE WHEN DAY_TER_MEDIA.TND_CD IN (4,5,16,18,23,31) THEN MED_AMT
ELSE 0
END) AS ELEC_AMT,
SUM(CASE WHEN DAY_TER_MEDIA.TND_CD IN (2,3,8,9,10,11,15,20,52) THEN MED_AMT
ELSE 0
END) AS OTHER_AMT
FROM DAY_TER_MEDIA
WHERE DT = ''
GROUP BY TER_ID;
```
|
Multiple Subqueries on the Same Column
|
[
"",
"sql",
"sql-server",
""
] |
I've wanted to create a view in my sql but I've been getting this error:
> Duplicate column name 'product\_id'
Here's my query:
```
CREATE VIEW product_view AS
SELECT
*
FROM
products
JOIN storeitems on products.product_id = storeitems.product_id;
```
I believe creating an alias would solve the problem but then I don't know how to do it. I hope you could help me with this. Thanks!
|
You get the error because both tables in your query has a column named product\_name and selected column names must be unique.
You make your aliases like this:
```
CREATE VIEW product_view AS SELECT p.product_name, si.product_name StoreItemsProductName -- continue with what you want to select...
FROM products p JOIN storeitems si
on p.product_id=si.product_id;
```
|
yes, creating an column alias would solve the problem and you do it like
```
select column1 as test_column
```
But if this your original query then I doubt you won't get this error since you are selecting from single table called `products` and obviously same table won't have multiple column with same name.
Anyway you can modify your query to be like
```
CREATE VIEW product_view AS
SELECT p.product_id as product_product_id,
si.product_id as storeItem_product_id
FROM
products p
JOIN storeitems si on p.product_id = si.product_id;
```
|
Duplicate column name in SQL
|
[
"",
"sql",
"alias",
""
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.