Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have 2 tables
Course ( ID Date Description Duration Meatier\_ID Promotion\_ID).
second table
Ensign( E\_Id Meatier\_Id Promotion\_Id)
In fact, I have an id. Based on that id I have to select data from Ensign where id = Eng\_Id, then I need to select data from Course where Meatier\_Id and Promotion\_Id in table Course are equal to the Meatier\_Id and Promotion\_Id of the data selected in the earlier query.
Can I do it using one SQL query? Thanks
Br
Sara | Join the two tables together on Meatier\_ID and Promotion\_ID. Then select those rows where Eng\_Id is the id you are working with.
```
SELECT *
FROM Course c
INNER JOIN Ensign e
ON e.Meatier_ID = c.Meatier_ID
AND e.Promotion_ID = c.Promotion_ID
WHERE e.Eng_Id = <id value here>
```
EDIT:
The above should work for SQL Server. For derby, try:
```
SELECT *
FROM Course
INNER JOIN Ensign
ON Ensign.Meatier_ID = Course.Meatier_ID
AND Ensign.Promotion_ID = Course.Promotion_ID
WHERE Ensign.Eng_Id = <id value here>
``` | Your question is a bit vague, but I gave it a try:
```
--These two variables take the place for your 'Earlier Query' values
DECLARE @Meatier_ID INT = 100,
@Promotion_Id INT = 15
--The query
SELECT *
FROM Course AS C
INNER JOIN Ensign AS E ON C.ID = E.E_Id
WHERE C.Meatier_ID = @Meatier_ID
AND C.Promotion_Id = @Promotion_Id
``` | Selecting data from two tables with sql query | [
"sql",
"database",
"sql-server-2008"
] |
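The accepted join can be sanity-checked outside SQL Server with Python's built-in sqlite3 module. This is only a sketch: the table and column names follow the question, but the sample rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Course (ID INTEGER, Description TEXT, Meatier_ID INTEGER, Promotion_ID INTEGER);
    CREATE TABLE Ensign (Eng_Id INTEGER, Meatier_ID INTEGER, Promotion_ID INTEGER);
    INSERT INTO Course VALUES (1, 'Math', 10, 20), (2, 'Physics', 11, 21);
    INSERT INTO Ensign VALUES (100, 10, 20), (101, 11, 21);
""")

# One query: filter Ensign by the known id and join Course on both key columns
rows = conn.execute("""
    SELECT c.Description
    FROM Course c
    INNER JOIN Ensign e
        ON e.Meatier_ID = c.Meatier_ID
       AND e.Promotion_ID = c.Promotion_ID
    WHERE e.Eng_Id = ?
""", (100,)).fetchall()
print(rows)  # [('Math',)]
```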
I'm looking for a lightweight database/SQL-server to run on a Raspberry Pi. Since it should be accessible from more than one application, embedded databases like SQLite would not work, thus it should be a standalone database (I know that SQLite databases can be read by more than one process at the same time, but I'm mostly doing write-operations which would lead to many locks on the file).
From what I found on the web, people seem to advise against using databases like MySQL or PostgreSQL on the Pi for performance reasons. Is there a lightweight database to use on the Pi without slowing down the whole system? | Look into this <http://www.firebirdsql.org/>. It should be lightweight. | You're just speculating based on anecdotes, which is the worst way to make decisions.
Build a test case on a Pi, exercise it and collect metrics. You may find that the flash storage of the Pi is sufficiently fast that the write locks aren't a big deal. | Lightweight SQL server for Raspberry Pi | [
"mysql",
"sql",
"sql-server",
"sqlite",
"raspberry-pi"
] |
Can anyone help me get the names of the last 12 months from the current date (month)?
I want this query in SQL Server. | Requires SQL Server 2008:
```
select datename(m,dateadd(m,-a,current_timestamp)) monthname,
datepart(m,dateadd(m,-a,current_timestamp)) id
from (values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)) x(a)
```
Result:
```
monthname id
December 12
November 11
October 10
September 9
August 8
July 7
June 6
May 5
April 4
March 3
February 2
January 1
``` | You can use common table expression for your solution:
```
;WITH DateRange AS(
SELECT GETDATE() Months
UNION ALL
SELECT DATEADD(mm, -1, Months)
FROM DateRange
WHERE Months > DATEADD(mm, -11, GETDATE())
)
SELECT DateName(m, Months) AS Months, Month(Months) AS ID FROM DateRange
```
shows previous months in the order:
```
Months ID
------------------------------ -----------
January 1
December 12
November 11
October 10
September 9
August 8
July 7
June 6
May 5
April 4
March 3
February 2
``` | Get Previous 12 months name/id from current month | [
"sql",
"sql-server",
"t-sql"
] |
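Both answers are SQL Server specific (DATENAME, DATEADD). The same recursive-CTE idea can be tried in SQLite through Python; a fixed anchor date is used below so the output is reproducible (in practice you would substitute `date('now')`).

```python
import sqlite3
import calendar

conn = sqlite3.connect(":memory:")
anchor = "2014-01-15"  # fixed date so the example is deterministic

# Walk back one month at a time; LIMIT stops the recursion after 12 rows
ids = [r[0] for r in conn.execute("""
    WITH RECURSIVE m(d) AS (
        SELECT date(?, 'start of month')
        UNION ALL
        SELECT date(d, '-1 month') FROM m
        LIMIT 12
    )
    SELECT CAST(strftime('%m', d) AS INTEGER) FROM m
""", (anchor,))]

# SQLite has no DATENAME, so map month numbers to names in Python
names = [calendar.month_name[i] for i in ids]
print(ids)        # [1, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2]
print(names[:2])  # ['January', 'December']
```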
I just had to scour the internet for this code yet again, so I figured I would put it here so I can find it a little faster next time, and hopefully you found it a little faster too :) | Check this.
The code below can validate the check digit in all GTINs (EAN8, EAN13, EAN14, UPC/A, UPC/E)
```
CREATE FUNCTION [dbo].[check_digit]
(
@GTIN VARCHAR(14)
)
RETURNS TINYINT
AS
BEGIN
DECLARE @Index TINYINT,
@Multiplier TINYINT,
@Sum TINYINT,
@checksum TINYINT,
@result TINYINT,
@GTIN_strip VARCHAR(13)
SELECT @GTIN_strip = SUBSTRING(@GTIN,1,LEN(@GTIN)-1);
SELECT @Index = LEN(@GTIN_strip),
@Multiplier = 3,
@Sum = 0
WHILE @Index > 0
SELECT @Sum = @Sum + @Multiplier * CAST(SUBSTRING(@GTIN_strip, @Index, 1) AS TINYINT),
@Multiplier = 4 - @Multiplier,
@Index = @Index - 1
SELECT @checksum = CASE @Sum % 10
WHEN 0 THEN '0'
ELSE CAST(10 - @Sum % 10 AS CHAR(1))
END
IF (SUBSTRING(@GTIN,LEN(@GTIN),1) = @checksum)
RETURN 1; /*true*/
RETURN 0; /*false*/
END
``` | ```
CREATE FUNCTION [dbo].[fn_EAN13CheckDigit] (@Barcode nvarchar(12))
RETURNS nvarchar(13)
AS
BEGIN
DECLARE @SUM int , @COUNTER int, @RETURN varchar(13), @Val1 int, @Val2 int
SET @COUNTER = 1 SET @SUM = 0
WHILE @Counter < 13
BEGIN
SET @VAL1 = SUBSTRING(@Barcode,@counter,1) * 1
SET @VAL2 = SUBSTRING(@Barcode,@counter + 1,1) * 3
SET @SUM = @VAL1 + @SUM
SET @SUM = @VAL2 + @SUM
SET @Counter = @Counter + 2;
END
SET @SUM=(10-(@SUM%10))%10
SET @Return = @BARCODE + CONVERT(varchar,((@SUM)))
RETURN @Return
END
``` | SQL proc to calculate check digit for 7 and 12 digit upc | [
"sql",
"t-sql",
"barcode",
"checksum"
] |
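For reference, the mod-10 algorithm both answers implement is easy to port. Here is a Python version of the same scheme used in the accepted answer: weights alternate 3, 1, 3, 1, ... starting from the digit next to the check digit. The sample GTINs are well-known published examples.

```python
def gtin_check_digit_ok(gtin: str) -> bool:
    """Validate the GS1 mod-10 check digit of a GTIN (EAN-8/13/14, UPC-A/E)."""
    body, check = gtin[:-1], int(gtin[-1])
    # Weight 3 on the digit beside the check digit, then alternate 3, 1, 3, 1, ...
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

ok_upc = gtin_check_digit_ok("036000291452")   # valid UPC-A
ok_ean = gtin_check_digit_ok("4006381333931")  # valid EAN-13
bad = gtin_check_digit_ok("4006381333930")     # last digit corrupted
print(ok_upc, ok_ean, bad)  # True True False
```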
I am trying to select products that are found in a range of categories and where the product was created within the last 4 months. I have done this with this:
```
select DISTINCT category_skus.sku, products.created_date from category_skus
LEFT JOIN products on category_skus.sku = products.sku
WHERE category_code > '70699' and category_code < '70791'
and products.created_date > '2013-09-13';
```
This is the result:
```
+------------+------------+
|sku |created_date|
+------------+------------+
|511-696004PU|2014-01-07 |
+------------+------------+
|291-280 |2013-12-04 |
+------------+------------+
|89-80 |2013-10-07 |
+------------+------------+
|490-1137 |2013-11-21 |
+------------+------------+
```
However I need to select in multiple ranges within the category\_code table. Instead of searching from just '70699' to '70791', I need to also search in '60130' and '60420' (This is not a range, rather additional single categories that are related to the first range of categories). This is what I tried last, but I get "Empty set (0.00 sec)":
```
select DISTINCT category_skus.sku, products.created_date from category_skus
LEFT JOIN products on category_skus.sku = products.sku
WHERE (category_code BETWEEN '70699' and '70791')
and WHERE category_code = '60130' and products.created_date > '2013-09-13';
```
What am I doing wrong here??? I hope I explained it clearly enough, and thanks for any help! | Would
```
where category_code in (select category_code from category_skus where (category_code between '70699' and '70791') or (category_code in ('60130','60420')))
and products.created_date > ...
```
work? Not sure if I understood the question correctly. | Your conditions are a bit mixed up. There won't be any products between 70699-70791 that also = 60130. Try nesting your category\_code conditions in their own AND statement:
```
SELECT DISTINCT category_skus.sku, products.created_date
FROM category_skus
LEFT JOIN products ON category_skus.sku = products.sku
WHERE products.created_date > '2013-09-13'
AND (
category_code BETWEEN '70699' AND '70791'
OR category_code = '60130'
)
``` | mysql - selecting multiple ranges from the same table column | [
"mysql",
"sql",
"range",
"between"
] |
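The precedence problem behind the question (AND binds tighter than OR, plus the stray second WHERE) is easy to reproduce. A minimal SQLite sketch with invented rows shows why the parenthesized OR group in the answers matters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category_skus (sku TEXT, category_code TEXT);
    INSERT INTO category_skus VALUES
        ('A', '70700'),   -- inside the range
        ('B', '60130'),   -- one of the extra single categories
        ('C', '50000');   -- matches nothing
""")

# The range and the single categories are alternatives, so they must be OR'ed
skus = [r[0] for r in conn.execute("""
    SELECT sku FROM category_skus
    WHERE category_code BETWEEN '70699' AND '70791'
       OR category_code IN ('60130', '60420')
    ORDER BY sku
""")]
print(skus)  # ['A', 'B']
```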
Is there a way to run a query for a specified amount of time, say the last 5 months, and to be able to return how many records were created each month? Here's what my table looks like:
```
SELECT rID, dateOn FROM claims
``` | ```
SELECT COUNT(rID) AS ClaimsPerMonth,
MONTH(dateOn) AS inMonth,
YEAR(dateOn) AS inYear FROM claims
WHERE dateOn >= DATEADD(month, -5, GETDATE())
GROUP BY MONTH(dateOn), YEAR(dateOn)
ORDER BY inYear, inMonth
```
In this query the `WHERE dateOn >= DATEADD(month, -5, GETDATE())` ensures that it's for the past 5 months, the `GROUP BY MONTH(dateOn)` then allows it to count per month.
And to appease the community, here is a [SQL Fiddle](http://sqlfiddle.com/#!3/bc98c/2/0) to prove it. | Unlike the other two answers, this will return all 5 months, even when the count is 0. It will also use an index on the onDate column, if a suitable one exists (the other two answers so far are non-sargeable).
```
DECLARE @nMonths INT = 5;
;WITH m(m) AS
(
SELECT TOP (@nMonths) DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE())-number, 0)
FROM master.dbo.spt_values WHERE [type] = N'P' ORDER BY number
)
SELECT m.m, num_claims = COUNT(c.rID)
FROM m LEFT OUTER JOIN dbo.claims AS c
ON c.onDate >= m.m AND c.onDate < DATEADD(MONTH, 1, m.m)
GROUP BY m.m
ORDER BY m.m;
```
You also don't have to use a variable in the `TOP` clause, but this might make the code more reusable (e.g. you could pass the number of months as a parameter). | How to count number of records per month over a time period | [
"sql",
"sql-server",
"sql-server-2005"
] |
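SQLite has no DATEADD or MONTH(), but the same group-by-month idea can be sketched there with strftime. The rows below are invented, and the five-month cutoff is left out so the example stays deterministic; a real query would add it back as a WHERE on dateOn.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (rID INTEGER, dateOn TEXT);
    INSERT INTO claims VALUES
        (1, '2013-11-03'), (2, '2013-11-20'), (3, '2013-12-01'), (4, '2014-01-05');
""")

# Bucket by year-month and count claims in each bucket
per_month = conn.execute("""
    SELECT strftime('%Y-%m', dateOn) AS ym, COUNT(rID)
    FROM claims
    GROUP BY ym
    ORDER BY ym
""").fetchall()
print(per_month)  # [('2013-11', 2), ('2013-12', 1), ('2014-01', 1)]
```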
I'm using Azure to host my database. The most common solutions to this problem I've found all have to do with incorrect data in the SQL query. I'm using parameters so I wouldn't think that would be an issue. My input data doesn't include any characters that SQL would recognize for a query. I'm stumped. Here is my code.
```
Public Function camp_UploadScoutRecord(ByVal recordID As String, ByVal requirementsID As String, ByVal scoutID As String, _
ByVal scoutName As String, Optional ByVal unitType As String = "", Optional ByVal unitNumber As String = "", Optional ByVal district As String = "", _
Optional ByVal council As String = "", Optional ByVal street As String = "", Optional ByVal city As String = "", Optional ByVal campName As String = "", Optional ByVal req1 As String = "", Optional ByVal req2 As String = "", _
Optional ByVal req3 As String = "", Optional ByVal req4 As String = "", Optional ByVal req5 As String = "", Optional ByVal req6 As String = "", Optional ByVal req7 As String = "", _
Optional ByVal req8 As String = "", Optional ByVal req9 As String = "", Optional ByVal req10 As String = "", Optional ByVal req11 As String = "", Optional ByVal req12 As String = "", _
Optional ByVal req13 As String = "", Optional ByVal req14 As String = "", Optional ByVal req15 As String = "", Optional ByVal req16 As String = "", Optional ByVal req17 As String = "", _
Optional ByVal req18 As String = "", Optional ByVal req19 As String = "", Optional ByVal req20 As String = "", Optional ByVal req21 As String = "", Optional ByVal req22 As String = "", _
Optional ByVal badgeComplete As String = "", Optional ByVal badgeName As String = "", Optional ByVal subscriberID As String = "") As String Implements IMastersheetUpload.camp_UploadScoutRecord
Dim newRecordID As String
Dim dateToday As Date = Date.Today
newRecordID = Guid.NewGuid.ToString()
Dim selectcmd As New SqlCommand("SELECT * FROM campMeritBadgeRecords WHERE meritBadgeRequirementsID = @ID", myconn)
Dim sqlParam As New SqlParameter("@ID", newRecordID)
selectcmd.Parameters.Add(sqlParam)
Dim ds As New DataSet()
Dim da As New SqlDataAdapter(selectcmd)
da.Fill(ds)
'Find an unused recordID for this record
'If the GUID already exists in the database, then generate new one
If ds.Tables(0).Rows.Count <> 0 Then
While ds.Tables(0).Rows.Count <> 0
newRecordID = Guid.NewGuid.ToString()
da.Fill(ds)
End While
End If
Dim insertCMD As New SqlCommand("INSERT INTO campMeritBadgeRecords " + _
"VALUES (@recordID," + _
"@meritBadgeRequirementsID," + _
"@scoutID," + _
"@lastUpdated," + _
"@scoutName," + _
"@unitType," + _
"@unitNumber," + _
"@district," + _
"@council," + _
"@street," + _
"@city," + _
"@req1Complete," + _
"@req2Complete," + _
"@req3Complete," + _
"@req4Complete," + _
"@req5Complete," + _
"@req6Complete," + _
"@req7Complete," + _
"@req8Complete," + _
"@req9Complete," + _
"@req10Complete," + _
"@req11Complete," + _
"@req12Complete," + _
"@req13Complete," + _
"@req14Complete," + _
"@req15Complete," + _
"@req16Complete," + _
"@req17Complete," + _
"@req18Complete," + _
"@req19Complete," + _
"@req20Complete," + _
"@req21Complete," + _
"@req22Complete," + _
"@badgeComplete," + _
"@campName," + _
"@badgeName," + _
"@uploadSubscriberID);", myconn)
With insertCMD.Parameters
'Record Info
.AddWithValue("@recordID", newRecordID)
.AddWithValue("@meritBadgeRequirementsID", requirementsID)
'Scout Info
.AddWithValue("@scoutID", scoutID)
.AddWithValue("@lastUpdated", Date.Today.ToString)
.AddWithValue("@scoutName", scoutName)
.AddWithValue("@unitType", unitType)
.AddWithValue("@unitNumber", unitNumber)
.AddWithValue("@district", district)
.AddWithValue("@council", council)
.AddWithValue("@street", street)
.AddWithValue("@city", city)
'Merit Badge Completion Info
.AddWithValue("@req1Complete", req1)
.AddWithValue("@req2Complete", req2)
.AddWithValue("@req3Complete", req3)
.AddWithValue("@req4Complete", req4)
.AddWithValue("@req5Complete", req5)
.AddWithValue("@req6Complete", req6)
.AddWithValue("@req7Complete", req7)
.AddWithValue("@req8Complete", req8)
.AddWithValue("@req9Complete", req9)
.AddWithValue("@req10Complete", req10)
.AddWithValue("@req11Complete", req11)
.AddWithValue("@req12Complete", req12)
.AddWithValue("@req13Complete", req13)
.AddWithValue("@req14Complete", req14)
.AddWithValue("@req15Complete", req15)
.AddWithValue("@req16Complete", req16)
.AddWithValue("@req17Complete", req17)
.AddWithValue("@req18Complete", req18)
.AddWithValue("@req19Complete", req19)
.AddWithValue("@req20Complete", req20)
.AddWithValue("@req21Complete", req21)
.AddWithValue("@req22Complete", req22)
.AddWithValue("@badgeComplete", badgeComplete)
.AddWithValue("@campName", campName)
.AddWithValue("@badgeName", badgeName)
.AddWithValue("@uploadSubscriberID", subscriberID)
End With
insertCMD.ExecuteNonQuery()
myconn.Close()
'Return recordID to tablet software for future record updates
Return newRecordID
``` | I think your mistake is in the insert statement.
The table name `campMeritBadgeRecords` and `VALUES` are combined in the insert statement, so you have to add an extra space after the table name `campMeritBadgeRecords`.
So your statement will look like this:
```
Dim insertCMD As New SqlCommand("INSERT INTO campMeritBadgeRecords values" + _
``` | My guess would be this line...
```
INSERT INTO campMeritBadgeRecords" + _
"VALUES (@recordID," + _
```
You aren't leaving a space between campMeritBadgeRecords and VALUES, so SQL Server reads it as
```
INSERT INTO campMeritBadgeRecordsVALUES(
``` | SQL Incorrect syntax near "," when using parameterized SQL | [
"sql",
"vb.net",
"azure"
] |
I want the value for users with the right ref and also users with extra value 1.
I have the following query, but it's not giving unique rows; it's giving duplicate rows. How do I resolve that? I really appreciate any help.
Basically it's repeating values of ref.id if tab2.user=1, which repeats rows 4 and 6 again in the final query. I don't want both but only one, as ref.id=0 does not exist but I want extra=1.
```
SELECT * FROM
tab1,tab2
WHERE tab1.ref=tab2.ref
AND tab1.to_users = 1
OR users.extra=1;
```
Tab1
```
sno users ref extra
1 1 4 1
2 2 5 0
3 3 0 1
4 1 0 1
5 2 5 0
6 3 0 1
```
Tab2
```
ref ad user
4 A 1
5 B 2
6 C 1
``` | After your last comment:
```
SELECT sno, users, tab1.ref, extra, max(ad) as ad
FROM
tab1,tab2
WHERE tab1.ref=tab2.ref
AND tab1.users = 1
OR tab1.extra=1
group by sno, users, ref, extra;
```
This will max the ad column (alternatively you can use min - all depends on your requirements)
example: <http://sqlfiddle.com/#!2/1837e/25> | You will probably need to use join
```
SELECT * FROM
tab1 LEFT JOIN tab2 ON tab1.ref=tab2.ref
WHERE tab1.users = 1 OR tab1.extra=1;
GROUP BY tab1.users
```
edit: added group by | sql query has few rows which are duplicate | [
"mysql",
"sql"
] |
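The accepted answer's idea, when an OR condition lets one row match several joined rows, is to collapse each group and keep a single ad with MAX (or MIN, depending on requirements). A miniature sqlite3 sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hits (id INTEGER, ad TEXT);
    -- each id matched more than one ad, producing duplicate rows
    INSERT INTO hits VALUES (1, 'A'), (1, 'C'), (2, 'B'), (2, 'A');
""")

# GROUP BY collapses the duplicates; MAX picks one representative ad
deduped = conn.execute("""
    SELECT id, MAX(ad) FROM hits GROUP BY id ORDER BY id
""").fetchall()
print(deduped)  # [(1, 'C'), (2, 'B')]
```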
Noob SSIS question please. I am working on a simple database conversion script. The original data source have the phone number saved as a string with len = 50 in a column called Phone1. The destination table has the telephone number saved as a string with len = 20 in a column called Telephone. I keep getting this warning:
> [110]] Warning: Truncation may occur due to inserting data from data
> flow column "Phone1" with a length of 50 to database column
> "Telephone" with a length of 20.
I have tried a few things, including adding a Derived Column task to cast Phone1 into a DT\_WSTR string with length = 20 - (DT\_WSTR, 20) (SubString(Phone1, 1, 20)) - and adding a Data Conversion task to convert the field Phone1 from DT\_WSTR(50) into DT\_WSTR(20), but none of them work. I know I can SubStr Phone1 in the SQL string at the OLEDB Source, but I would love to find out how SSIS deals with this scenario. | Your conversion should result in a NEW variable, do not use Phone1. Use the name of the Converted value. | **Root cause** - This warning message appears over the SSIS data flow task when the datatype length of the source column is more than the datatype length of the destination column.
**Resolution** - In order to remove this warning message from the SSIS solution, make sure the datatype length of the source column is equal to the datatype length of the destination column. | SSIS Truncate Warning when moving data from a string max length 50 to a string max length 20 | [
"sql",
"sql-server",
"ssis"
] |
I have the following row in my database:
```
'1', '1', '1', 'Hello world', 'Hello', '2014-01-14 17:33:34'
```
Now I wish to select this row and get only the date from this timestamp.
I have tried the following:
```
SELECT title, FROM_UNIXTIME(timestamp,'%Y %D %M') AS MYDATE FROM Team_Note TN WHERE id = 1
```
I have also tried:
```
SELECT title, DATE_FORMAT(FROM_UNIXTIME(`timestamp`), '%e %b %Y') AS MYDATE FROM Team_Note TN WHERE id = 1
```
However, these just return an empty result (or rather, I get the Hello part but MYDATE is empty) | Did you try this?
```
SELECT title, date(timestamp) AS MYDATE
FROM Team_Note TN
WHERE id = 1
``` | You need to use DATE function for that:
<http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date>
```
SELECT title,
DATE(timestamp) AS MYDATE
FROM Team_Note TN
WHERE id = 1
``` | Select date from timestamp SQL | [
"mysql",
"sql"
] |
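Worth noting why the question's attempts came back empty: FROM_UNIXTIME expects a numeric Unix epoch, but the stored value is a datetime string, so a plain date extraction is the right tool. The same behaviour can be reproduced with sqlite3 (table trimmed to the relevant columns, with the timestamp column renamed to `ts` for clarity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Team_Note (id INTEGER, title TEXT, ts TEXT)")
conn.execute("INSERT INTO Team_Note VALUES (1, 'Hello', '2014-01-14 17:33:34')")

# date() strips the time portion of a datetime string
row = conn.execute(
    "SELECT title, date(ts) FROM Team_Note WHERE id = 1"
).fetchone()
print(row)  # ('Hello', '2014-01-14')
```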
Wondering how I can get data for any year and its prior year when a year is given in the where clause. Example: If I give 2012 as the year in the where clause, I should get both 2012's and 2011's data. This is in SQL Server 2008.
```
Select * from TableA where Year = 2012
```
but I want 2011's data in the same query too.
Thank you! | You can use [`IN`](http://technet.microsoft.com/en-us/library/ms177682.aspx), which allows you to define a list:
```
Select * from TableA where Year IN (2011, 2012);
``` | If you only want to put the constant year in once, you could do this:
```
Select *
from TableA
where 2012 in (Year, Year + 1)
``` | Sql query for a particular year and the previous year's data | [
"sql",
"sql-server-2008"
] |
I'm looking for a way to count how many results have been omitted by my SQL query (I'm working with SQLite):
```
SELECT id FROM users GROUP BY x, y, email;
```
This query returns 121 ids (of 50,000); it would be nice to know how many ids have been omitted for each pair (x, y).
Is it possible?
Thanks for your help,
EDIT :
sample :
```
+--+-----+----+-------------+
|ID|x |y |email |
+--+-----+----+-------------+
|1 |48.86|2.34|john@test.com|
+--+-----+----+-------------+
|2 |48.86|2.34|phil@test.com|
+--+-----+----+-------------+
|3 |40.85|2.31|john@test.com|
+--+-----+----+-------------+
|4 |48.86|2.34|phil@test.com|
+--+-----+----+-------------+
|5 |40.85|2.31|john@test.com|
+--+-----+----+-------------+
|6 |48.86|2.34|phil@test.com|
+--+-----+----+-------------+
```
Query:
```
SELECT id FROM users GROUP BY x, y, email;
```
Results:
```
+--+
|id|
+--+
|1 |
+--+
|2 |
+--+
|3 |
+--+
```
Because: id 4 and id 6 have the same x, y, email as id 2, and id 5 is the same as id 3.
I need the fastest way to know that :
```
id 1 -> 0 omitted
id 2 -> 2 omitted (id 4 and id 6 had same x, y, email)
id 3 -> 1 omitted (id 5 had same x, y, email)
``` | ```
SELECT id, COUNT(*) -1 AS omitted FROM users GROUP BY x, y, email;
```
...assuming you actually want "to know how many have been omitted for each tuple `(x, y, email)`. | You can try this:
```
SELECT COUNT(*)
FROM
(
SELECT *
FROM users
MINUS
SELECT id
FROM users
);
```
This shows you all of the records minus the ones you selected. Hope this helps. | SQL - Count the number of omitted results | [
"sql",
"sqlite",
"count"
] |
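The accepted COUNT(*) - 1 approach can be verified against the question's own sample rows with sqlite3. One tweak for reproducibility: MIN(id) is used instead of a bare id, so the representative row of each group is deterministic rather than whichever row SQLite happens to keep.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, x REAL, y REAL, email TEXT);
    INSERT INTO users VALUES
        (1, 48.86, 2.34, 'john@test.com'),
        (2, 48.86, 2.34, 'phil@test.com'),
        (3, 40.85, 2.31, 'john@test.com'),
        (4, 48.86, 2.34, 'phil@test.com'),
        (5, 40.85, 2.31, 'john@test.com'),
        (6, 48.86, 2.34, 'phil@test.com');
""")

# COUNT(*) - 1 = rows in the group beyond the one kept
omitted = conn.execute("""
    SELECT MIN(id), COUNT(*) - 1 AS omitted
    FROM users
    GROUP BY x, y, email
    ORDER BY MIN(id)
""").fetchall()
print(omitted)  # [(1, 0), (2, 2), (3, 1)]
```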
I have one master table in SQL Server 2012 with many fields.
I have been given an excel spreadhseet with new data to incorporate, so I have in the first instance, imported this into a new table in the SQL database.
The new table has only two columns 'ID', and 'Source'.
The ID in the new table matches the 'ID' from the master table, which also has a field called 'Source'
What I need to do is UPDATE the values for 'Source' in the Master table with the corresponding values in the new table, ensuring to match IDs between the two tables.
Now to Query and see all the information together I can use the following -
```
SELECT m.ID, n.Source
FROM MainTable AS m
INNER JOIN NewTable AS n ON m.ID = n.ID
```
But what I don't know is how to turn this into an UPDATE statement so that the values for 'Source' from the new table are inserted into the corresponding column in the master table. | ```
UPDATE
MainTable
SET
MainTable.Source = NewTable.Source
FROM
MainTable
INNER JOIN
NewTable
ON
MainTable.ID = NewTable .ID
```
That should do the trick | You could do
```
UPDATE MainTable
SET MainTable.Source = NewTable.Source
FROM NewTable
WHERE MainTable.ID = NewTable .ID
``` | Combining Values from SQL Table into a Second Table | [
"sql"
] |
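Both answers use SQL Server's UPDATE ... FROM extension. A portable variant of the same idea, a correlated subquery guarded by EXISTS so unmatched rows keep their current value, can be checked with sqlite3 and invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MainTable (ID INTEGER, Source TEXT);
    CREATE TABLE NewTable  (ID INTEGER, Source TEXT);
    INSERT INTO MainTable VALUES (1, 'old'), (2, 'old'), (3, 'old');
    INSERT INTO NewTable  VALUES (1, 'new1'), (3, 'new3');
""")

# Correlated subquery pulls the matching Source; EXISTS protects rows
# with no match from being set to NULL
conn.execute("""
    UPDATE MainTable
    SET Source = (SELECT n.Source FROM NewTable n WHERE n.ID = MainTable.ID)
    WHERE EXISTS (SELECT 1 FROM NewTable n WHERE n.ID = MainTable.ID)
""")

after = conn.execute("SELECT ID, Source FROM MainTable ORDER BY ID").fetchall()
print(after)  # [(1, 'new1'), (2, 'old'), (3, 'new3')]
```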
`hive` rejects this code:
```
select a, b, a+b as c
from t
where c > 0
```
saying `Invalid table alias or column reference 'c'`.
do I really need to write something like
```
select * from
(select a, b, a+b as c
from t)
where c > 0
```
EDIT:
1. the computation of `c` is complex enough for me not to want to repeat it in `where a + b > 0`
2. I need a solution which would work in `hive` | Use a Common Table Expression if you want to use derived columns.
```
with x as
(
select a, b, a+b as c
from t
)
select * from x where c >0
``` | You can run this query like this or with a Common Table Expression
```
select a, b, a+b as c
from t
where a+b > 0
```
Reference the below order of operations for logical query processing to know if you can use derived columns in another clause.
Keyed-In Order
1. SELECT
2. FROM
3. WHERE
4. GROUP BY
5. HAVING
6. ORDER BY
Logical Querying Processing Phases
1. FROM
2. WHERE
3. GROUP BY
4. HAVING
5. SELECT
6. ORDER BY | Using new columns in the "where" clause | [
"sql",
"hive"
] |
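The accepted CTE pattern is portable across engines; a quick sqlite3 check with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a INTEGER, b INTEGER);
    INSERT INTO t VALUES (1, 2), (-5, 3), (4, -1);
""")

# The CTE names the derived column once, so the WHERE can refer to it
rows = conn.execute("""
    WITH x AS (SELECT a, b, a + b AS c FROM t)
    SELECT a, b, c FROM x WHERE c > 0
""").fetchall()
print(rows)  # [(1, 2, 3), (4, -1, 3)]
```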
I have a CTE like:
```
WITH fnl as
(select lo.id , s.service_name , lo.obj_name ,
........
from dbo.r_objects ro
inner join dbo.services s on ro.service_id = s.id
inner join dbo.local_objects lo on ro.local_object_id = lo.id
where ro.service_id = @service_id
)
select id , service_name , CAST(obj_name as varbinary(200)) obj_name
FROM fnl
WHERE ....... some logic
```
It works well. Now I want to check: if the data in **fnl** is empty, then write **'Data is empty'**.
I tried:
```
WITH fnl as
(select lo.id , s.service_name , lo.obj_name ,
........
from dbo.r_objects ro
inner join dbo.services s on ro.service_id = s.id
inner join dbo.local_objects lo on ro.local_object_id = lo.id
where ro.service_id = @service_id
)
select @table_count = COUNT(*) from fnl
IF @table_count > 0
begin
select id , service_name , CAST(obj_name as varbinary(200)) obj_name
FROM fnl
WHERE ....... some logic
end
else
set @check = 'Data is empty'
...............some logic here
```
When I exec the procedure I get an error: **Invalid object name 'fnl'**. How do I check it for emptiness? I tried `IF EXISTS` but got an error too
**EDIT :**
**I solved it: create #tbl and then insert all the data from fnl into #tbl, because the CTE fnl can only be used once; I used it twice and got an error. Then check if any data exists like this:** `IF EXISTS (select top 1 1 from #tbl)` *if data exists, select it from #tbl; else use some logic* | ```
select lo.id , s.service_name , lo.obj_name ,
........
INTO #temp
from dbo.r_objects ro
inner join dbo.services s on ro.service_id = s.id
inner join dbo.local_objects lo on ro.local_object_id = lo.id
where ro.service_id = @service_id
IF EXISTS (SELECT 1 FROM #temp)
begin
select id , service_name , CAST(obj_name as varbinary(200)) obj_name
FROM #temp
WHERE ....... some logic
end
else
begin
set @check = 'Data is empty'
...............some logic here
end
``` | I don't think a CTE is appropriate for the query that you have. It would be much better to do something like this. Instead of your CTE, just place what you have inside that sub select. Later you can continue checking your variable.
```
DECLARE @table_count INT
SELECT @table_count = COUNT(*)
FROM ( SELECT somestuff
FROM SOMEtables ) a
``` | How to check WITH statement is empty or not | [
"sql",
"sql-server",
"t-sql",
"stored-procedures"
] |
I'm trying to see if it's possible to efficiently select a period a given date belongs to.
Let's say I have a table
```
id<long>|period_start<date>|period_end<date>|period_number<int>
```
and lets say I want for every id the period that "2013-11-20" belongs to.
i.e. naively
```
select id, period_number
from period_table
where '2013-11-20' >= period_start and '2013-11-20' < period_end
```
However, if my date is beyond any period\_end or before any period\_start, it won't find this id. In those cases I want the minimum (if before the first `period_start`) or the maximum (if after the last `period_end`).
Any thoughts if this can be done efficiently? I can obviously do multiple queries (i.e. select into the table as above and then do another query to figure out the min and max periods).
So for example
```
+--+------------+----------+-------------+
|id|period_start|period_end|period_number|
+--+------------+----------+-------------+
|1 |2011-01-01 |2011-12-31|1 |
|1 |2012-01-01 |2012-12-31|2 |
|1 |2013-01-01 |2013-12-31|3 |
+--+------------+----------+-------------+
```
If I want what period 2012-05-03 belongs to, my naive sql works and returns period #2 (1|2 as the row, id, period\_number). However, if I want what period 2014-01-14 (or 2010-01-14) it can't place it as it's outside the table.
Therefore since "2014-01-14" is > 2013-12-31, I want it to return the row "1|3" if I chose 2010-01-14, I'd want it to return 1|1, as 2010-01-14 < 2011-01-01.
The point of this is that we have an index table that keeps track of different types of periods and what their relative value is (think quarter, half year, years) for many different things, and they all don't line up to normal years. Sometimes we want to say we want period X (some integer) relative to date Y. If we can place Y within the table and figure out Y's `period_number`, we can easily do the math to figure out what to add/subtract to that value. If Y is outside the bounds of the table, we define Y to be the max/min of the table respectively. | Why aren't you creating "boundary periods"?
Choose arbitrary beginning\_of\_time and end\_of\_time dates e.g. 01/01/0001 and 31/12/9999 and insert a fake period. Your example period\_table will become:
```
+--+------------+----------+-------------+
|id|period_start|period_end|period_number|
+--+------------+----------+-------------+
|1 |0001-01-01 |2010-12-31|1 |
|1 |2011-01-01 |2011-12-31|1 |
|1 |2012-01-01 |2012-12-31|2 |
|1 |2013-01-01 |2013-12-31|3 |
|1 |2014-01-01 |9999-12-31|3 |
+--+------------+----------+-------------+
```
In this case, any query will retrieve one and only one row, e.g:
```
select id, period_number from period_table
where '2013-11-20' between period_start and period_end
+--+-------------+
|id|period_number|
+--+-------------+
|1 |2 |
+--+-------------+
select id, period_number from period_table
where '2010-11-20' between period_start and period_end
+--+-------------+
|id|period_number|
+--+-------------+
|1 |1 |
+--+-------------+
select id, period_number from period_table
where '2014-11-20' between period_start and period_end
+--+-------------+
|id|period_number|
+--+-------------+
|1 |3 |
+--+-------------+
``` | Note: I missed the database engine you were using, so I answered from the perspective of SQL Server. However, the query is pretty simple and you should be able to adapt it to your own needs.
The best I can come up with is, if your table is clustered (or at least indexed) on `FromDate`, a query that works in 2 seeks:
```
DECLARE @SearchDate datetime = '4062-05-04';
SELECT TOP 1 *
FROM
(
SELECT TOP 2
Priority = 0,
*
FROM dbo.Period
WHERE @SearchDate >= FromDate
ORDER BY FromDate DESC
UNION ALL
SELECT TOP 1
2,
*
FROM dbo.Period
WHERE @SearchDate < FromDate
ORDER BY FromDate
) X
ORDER BY Priority, FromDate DESC
;
```
# [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!6/9777a/1)
If you will post more information about your table structure and indexes, it's possible I can advise you better.
I also would like to suggest that if at all possible, you stop using inclusive end dates, where your `ToDate` column has the last day of the period in them such as `'2013-12-31'`, and start using exclusive end dates, where the `ToDate` column has the beginning of the next period. The reason for this is usually only apparent after long database experience, but imagine what would happen if you suddenly had to add periods that were shorter than 1 day (such as shifts or even hours)--everything would break! But if you had used exclusive end dates all along, everything would work *as is*. Also, queries that have to merge periods together become much more complicated because you are adding 1 all over the place instead of doing simple equijoins such as `WHERE P1.ToDate = P2.FromDate`. I promise you that you will run an enormously greater chance of regretting using inclusive end dates than you will exclusive ones. | Selecting an appropriate period or max/minimum period when outside of the period set | [
"sql",
"ingres"
] |
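The accepted "boundary periods" idea, adding sentinel rows so every date falls inside exactly one period, can be reproduced with sqlite3 (ISO date strings compare correctly as text, so BETWEEN works unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE period_table (id INTEGER, period_start TEXT, period_end TEXT, period_number INTEGER);
    INSERT INTO period_table VALUES
        (1, '0001-01-01', '2010-12-31', 1),  -- sentinel: clamps early dates to the first period
        (1, '2011-01-01', '2011-12-31', 1),
        (1, '2012-01-01', '2012-12-31', 2),
        (1, '2013-01-01', '2013-12-31', 3),
        (1, '2014-01-01', '9999-12-31', 3);  -- sentinel: clamps late dates to the last period
""")

def period_for(d):
    # Every date now matches exactly one row, in or out of the real range
    return conn.execute("""
        SELECT period_number FROM period_table
        WHERE ? BETWEEN period_start AND period_end
    """, (d,)).fetchone()[0]

result = [period_for(d) for d in ('2012-05-03', '2010-01-14', '2014-01-14')]
print(result)  # [2, 1, 3]
```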
For the first time in my professional career I found the need to join the result of a join with a table, the first join being an inner join and the second a left outer join.
I came to this solution:
```
SELECT * FROM
(
TableA INNER JOIN TableB ON
TableA.foreign = TableB.id
)
LEFT OUTER JOIN TableC
ON TableC.id = TableB.id
```
I know I could create a view for the first join but I wouldn't like to create new objects in my database.
Is this code right? I got coherent results but I would like to verify if it is theoretically correct.
Do you have any suggestion to improve this approach? | Yes it is correct, but it does nothing different than the same (simpler) query without the parentheses:
```
SELECT *
FROM TableA
INNER JOIN TableB
ON TableA.foreign = TableB.id
LEFT OUTER JOIN TableC
ON TableC.id = TableB.id;
```
**[SQL Fiddle showing both queries with the same result](http://sqlfiddle.com/#!2/7fcfb/3)**
*Also worth noting that both queries have exactly the same execution plan*
The parentheses are used when you need to left join on the results of an inner join, e.g. If you wanted to return only results from table b where there was a record in tableC, but still left join to TableA you might write:
```
SELECT *
FROM TableA
LEFT OUTER JOIN TableB
ON TableA.foreign = TableB.id
INNER JOIN TableC
ON TableC.id = TableB.id;
```
However this effectively turns the LEFT OUTER JOIN on TableB into an INNER JOIN because of the INNER JOIN below. In this case you would use parentheses to alter the JOIN as required:
```
SELECT *
FROM TableA
LEFT OUTER JOIN (TableB
INNER JOIN TableC
ON TableC.id = TableB.id)
ON TableA.foreign = TableB.id;
```
**[SQL Fiddle to show difference in results when adding parentheses](http://sqlfiddle.com/#!2/7fcfb/2)** | You should do it in a simpler way.
```
SELECT * FROM
TableA
INNER JOIN TableB ON TableA.foreign = TableB.id
LEFT OUTER JOIN TableC ON TableB.id = TableC.id
```
Usually it is done that way. | Outer left join of inner join and regular table | [
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
I am new to SQL and trying to learn on my own.
I am learning the usage of the if and else statement in SQL.
Here is the data where I am trying to use an if or else statement.
In the below table, I wish to have the comments updated based on the age using an SQL query. Let's say if the age is between 22 and 25, the comment is "under graduate":
```
Age: 26 to 27, comments " post graduate"
Age: 28 to 30, comments "working and single"
Age: 31 to 33, comments " middle level manager and married"
```
Table name: persons
```
personid lastname firstname age comments
1 Cardinal Tom 22
2 prabhu priya 33
3 bhandari abhijeet 24
4 Harry Bob 25
5 krishna anand 29
6 hari hara 31
7 ram hara 27
8 kulkarni manoj 35
9 joshi santosh 28
``` | # [How To Use Case](http://technet.microsoft.com/en-us/library/ms181765.aspx)
Try with CASE Statement
```
Select personid,lastname,firstname,age,
Case when age between 26 and 27 then 'post graduate'
when age between 28 and 30 then 'working and single'
when age between 31 and 33 then ' middle level manager and married'
Else 'Nil'
End comments
from persons
For this requirement, we can't use an if...else statement; a case...when statement will be the most suitable one. And one more thing: we can't use if...else inside of queries (I mean inside of select, insert, update).
And your
```
Select personid,lastname,firstname,age,
Case when age between 26 and 27 then 'post graduate'
Case when age between 28 and 30 then 'working and single'
Case when age between 31 and 33 then ' middle level manager and married'
Else 'Nil'
End comments
from persons
``` | SQL, Multiple if statements, then, or else? | [
"",
"sql",
""
] |
I'm trying to count all the items for each brand and concatenate the brand name + number of items.
I have this query in SQL Server 2008 R2:
```
SELECT DISTINCT
Brands.BrandName + ' ' + COUNT(Items.ITEMNO) as ITEMSNO,
Brands.BrandId
FROM Items, Brand_Products, Brands
WHERE
Items.ITEMNO=Brand_Products.ItemNo
AND Brands.BrandId=Brand_Products.BrandId
AND Items.SubcategoryID='SCat-020'
GROUP BY
Brands.BrandId,
Brands.BrandName,
Items.ITEMNO
```
I'm trying to concatenate 2 fields, but I have 2 problems:
1. if I do this as shown in my example here I have a problem with `nvarchar` and `int`.
2. if I use convert I have a problem with (Distinct)
Any help? :) | This will work: first you count items based on BrandId in a CTE and then join it with the Brand table.
```
WITH ItemCount
AS ( SELECT BrandId
,COUNT(Items.ITEMNO) AS item_Count
FROM Items
,Brand_Products
,Brands
WHERE Items.ITEMNO = Brand_Products.ItemNo
AND Brands.BrandId = Brand_Products.BrandId
AND Items.SubcategoryID = 'SCat-020'
GROUP BY Brands.BrandId)
SELECT b.BrandName + ' ' + CONVERT(VARCHAR(5), Item_Count)
FROM Brands AS b
JOIN ItemCount AS I
ON b.BrandId = i.BrandId
``` | Retrieve the field you're looking for twice, once in the concatenated field answer, and once by itself. This should solve your issue with DISTINCT. | Concatenate nvarchar and int while maintaining Distinct result | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have an Oracle table which has a column called `sql_query`. All the SQL queries are placed in this column for all the records in that table.
Now, I see many records which have NO SPACE before WHERE in the WHERE clause. I need the space because the field is used by several web forms and I get a syntax error.
Ex:
```
Select * from haystackwhere id = 3;
```
How do I find these kinds of records and fix them by inserting a space, like this:
```
Select * from haystack where id = 3;
```
Thanks!! | An easy way is to just use the `like` operation, assuming that your SQL queries are all simple (such as having a single `where` clause):
```
select *
from table t
where sql_query like '%where %' and sql_query not like '% where %';
```
You can readily turn this into an update:
```
update table
set sql_query = replace(sql_query, 'where ', ' where ')
where sql_query like '%where %' and sql_query not like '% where %';
``` | This query will look for any instances of the column that don't have the word 'WHERE' with a space before it. Then, if the word WHERE exists in those records, it will update it to '(space) WHERE'
```
UPDATE Table
SET SqlQuery = REPLACE(SqlQuery, 'WHERE', ' WHERE')
WHERE SqlQuery NOT LIKE '% WHERE%'
``` | Oracle SQL String search with no spaces and replace with space | [
"",
"sql",
"oracle-sqldeveloper",
""
] |
I have a table of products and each record has the price it was sold at, which can vary
```
+-------+-----+----+
|Product|Price|Date|
+-------+-----+----+
|a |2 |A |
+-------+-----+----+
|a |3 |B |
+-------+-----+----+
|a |4 |C |
+-------+-----+----+
|a |1 |D |
+-------+-----+----+
|b |10 |E |
+-------+-----+----+
|b |15 |F |
+-------+-----+----+
|b |20 |G |
+-------+-----+----+
```
I want to select the max-price row for each [Product] group; how do I write that query? The result I want:
```
+-------+-----+----+
|Product|Price|Date|
+-------+-----+----+
|a |4 |C |
+-------+-----+----+
|b |20 |G |
+-------+-----+----+
```
I tried
```
SELECT Product, Max(Price) as Price FROM TableName GROUP BY Product
```
but it does not get the [Date] column. | You could always use a CTE if this is on SQL Server.
```
WITH CTE AS
(
SELECT Product, MAX(Price) AS Price
FROM TableName
GROUP BY Product
)
SELECT CTE.Product, CTE.Price, T.Date
FROM CTE
INNER JOIN
TableName T ON CTE.Product = T.Product AND CTE.Price = T.Price
```
However, there could be other viable options that perform better, which you will have to verify yourself :) Also, I have not taken into account that a product could have the same price on two records (the description doesn't say whether that should be considered)... | Try this-
```
SELECT
PRODUCT
,PRICE
,DATE
FROM
( SELECT
PRODUCT
,PRICE
, DATE
, RANK() OVER (PARTITION BY PRODUCT ORDER BY PRICE DESC) AS RANKED_PRICE
FROM TableName
) RANKED_DATA
WHERE RANKED_PRICE IN (1,2,3....N)
```
What we are doing is ranking the rows partitioned by product and in descending order of the price. Then we select the rows which have rank 1 to get the result you need. | How to query a specified row from each group | [
"",
"sql",
"sql-server",
"group-by",
"row",
""
] |
For Example:
**Table:** test
```
+--+-----+-----+
|id|name |value|
+--+-----+-----+
|1 |test1|5 |
+--+-----+-----+
|2 |test2|3 |
+--+-----+-----+
|3 |test3|7 |
+--+-----+-----+
|4 |test4|9 |
+--+-----+-----+
```
I want to add up all the values from the "*test*" table using an SQL query:
```
DROP PROCEDURE IF EXISTS proc_loop_test;
CREATE PROCEDURE proc_loop_test()
BEGIN
DECLARE add INT DEFAULT 0;
test_loop : LOOP
SELECT value;
SET add = add +1;
END LOOP;
END;
```
Desired output is: 24 | Try this:
```
SELECT SUM(a.value)
FROM test a
```
**OR**
```
SELECT (@value:=@value+a.value) val
FROM test a, (SELECT @value := 0) b
ORDER BY val DESC LIMIT 1
``` | Not sure why do you need loop, but you can use SUM operator:
```
SELECT sum(`value`) FROM `test`
``` | How to perform loop in mysql query? | [
"",
"mysql",
"sql",
"select",
"stored-procedures",
"sum",
""
] |
I have the following table:-

I need to convert it into:-

Do take note that the date of TIMEINA must follow exactly as per TIMEIN1. As for the time, it will be fixed, i.e. 7.30AM, 8.30AM, etc.
I tried the below SQL but it doesn't work:-

Thanks | You should post your code as code and not as an image.
In any case, your code is comparing a `datetime` to a `time` value. Just do a conversion. Instead of `timein1 between . . .`, use:
```
cast(timein1 as time) between . . .
```
EDIT:
Oh, you also need to get the full date out. For that, use arithmetic on `datetime`:
```
cast('07:30:00' as datetime) + cast(cast(timein1 as date) as datetime)
```
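Putting the two fragments together, the full update might be sketched like this (the table name `TimeClock` and the exact time window are assumptions, since the original screenshots are not reproduced here):

```
-- Sketch: set TIMEINA to a fixed 07:30 on the same date as TIMEIN1
UPDATE TimeClock
SET TIMEINA = CAST('07:30:00' AS datetime)
            + CAST(CAST(TIMEIN1 AS date) AS datetime)
WHERE CAST(TIMEIN1 AS time) BETWEEN '04:30:00' AND '07:29:59';
```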
The double `cast` on `timein1` is just to remove the time component. | The main question I have is how many queries are going against this table?
If you are doing this complex logic in one report, then by all means use a SELECT.
But it is crying out to me for a better solution.
**Why not use computed column?**
Since it is a date and non-deterministic, you can not use the persisted key word to physically store the calculated value.
However, you will only have this code in the table definition, not in every query.
I did the case for the first two ranges and two sample date items. The rest is up to you.!
```
-- Just play
use tempdb;
go
-- Drop table
if object_id('time_clock') > 0
drop table time_clock
go
-- Create table
create table time_clock
(
tc_id int,
tc_day char(3),
tc_time_in datetime,
tc_time_out datetime,
tc_division char(3),
tc_empid char(5),
-- Use computed column
tc_time_1 as
(
case
-- range 1
when
tc_division = 'KEP' and
cast(tc_time_in as time) between '04:30:00' and '07:29:59'
then
cast((convert(char(10), tc_time_in, 101) + ' 07:30:00') as datetime)
-- range 2
when
tc_division = 'KEP' and
cast(tc_time_in as time) between '17:30:00' and '19:29:59'
then
cast((convert(char(10), tc_time_in, 101) + ' 19:30:00') as datetime)
-- no match
else NULL
end
)
);
-- Load store products
insert into time_clock values
(1,'SUN', '20131201 06:53:57', '20131201 16:23:54', 'KEP', 'A007'),
(2,'TUE', '20131201 18:32:42', '20131201 03:00:47', 'KEP', 'A007');
-- Show the data
select * from time_clock
```
Expected results.
 | Convert Date & Time | [
"",
"sql",
"sql-server",
""
] |
I am working on PostgreSQL 9.1.4.
I am inserting data into 2 tables and it works nicely.
I wish to apply a transaction to my tables; both tables exist in the same DB. If my 2nd table fails at any moment, my 1st table should be rolled back.
I tried setting "max\_prepared\_transactions" to a non-zero value in `/etc/postgres/postgres.conf`, but the transaction rollback is still not working. | In PostgreSQL you cannot write COMMIT or ROLLBACK explicitly within a function.
You can use a BEGIN ... END block with an EXCEPTION handler instead.
Keep it simple:
```
BEGIN;
INSERT INTO tst_table VALUES ('ABC');
BEGIN
    INSERT INTO second_table VALUES ('ABC');
EXCEPTION
    WHEN your_exception THEN
        ROLLBACK;
END;
END;
``` | Probably you didn't start a transaction.
Please, try
```
BEGIN;
INSERT INTO first_table VALUES(10);
-- second insert should fail
INSERT INTO second_table VALUES(10/0);
ROLLBACK;
``` | Transaction roll back not working in Postgresql | [
"",
"sql",
"postgresql",
"postgresql-9.1",
"phppgadmin",
"php-pgsql",
""
] |
I have the following Django Model:
```
class myModel(models.Model):
name = models.CharField(max_length=255, unique=True)
score = models.FloatField()
```
There are thousands of values in the DB for this model. I would like to efficiently and elegantly use that QuerySets alone to get the top-ten highest scores and display the names with their scores in descending order of score. So far it is relatively easy.
Here is where the wrinkle is: If there are multiple myModels who are tied for tenth place, I want to show them all. I don't want to only see some of them. That would unduly give some names an arbitrary advantage over others. If absolutely necessary, I can do some post-DB list processing outside of Querysets. However, the main problem I see is that there is no way I can know apriori to limit my DB query to the top 10 elements since for all I know there may be a million records all tied for tenth place.
Do I need to get all the myModels sorted by score and then do one pass over them to calculate the score-threshold? And then use that calculated score-threshold as a filter in another Queryset?
If I wanted to write this in straight-SQL could I even do it in a single query? | Of course you can do it in one SQL query. Generating this query using django ORM is also easily achievable.
```
top_scores = (myModel.objects
.order_by('-score')
.values_list('score', flat=True)
.distinct())
top_records = (myModel.objects
.order_by('-score')
.filter(score__in=top_scores[:10]))
```
This should generate a single SQL query (with a subquery). | As an alternative, you can also do it with two SQL queries, which might be faster on some databases than the single-query approach (an IN operation is usually more expensive than a comparison):
```
myModel.objects.filter(
score__gte=myModel.objects.order_by('-score')[9].score
)
```
Also while doing this, you should really have an index on score field (especially when talking about millions of records):
```
class myModel(models.Model):
name = models.CharField(max_length=255, unique=True)
score = models.FloatField(db_index=True)
``` | How to find top-X highest values in column using Django Queryset without cutting off ties at the bottom? | [
"",
"sql",
"django",
"django-models",
""
] |
I have written a query to create a string and to add padding between the values. This is then exported as a text file in order to load into a legacy system.
I have used a table variable to extract all the source data from table1 then run a query using CAST to create the required string with padding.
My question is: can this be achieved in fewer steps, without using a table variable (or temp table), and is CAST the best way to do it?
Unfortunately, using a padded string is the only way to create a suitable upload file.
Sample data and query:
```
CREATE TABLE dbo.table1(
[source1] [varchar](6),
[source2] [varchar](8),
[source3] [varchar](6),
[source4] [varchar](3),
[source5] [varchar](10),
[source6] [varchar](5),
[source7] [decimal](17, 2)
);
INSERT INTO dbo.table1 VALUES (999999,55566889,8964,'OPL',25648,'CR',12.35);
INSERT INTO dbo.table1 VALUES (222222,44422258,2548,'EWP',25698,'CR',10248.25);
INSERT INTO dbo.table1 VALUES (999999,33355589,3655,'SDO',75869,'DR',-897623.25);
INSERT INTO dbo.table1 VALUES (444444,11155987,5742,'SVI',25698,'CR',100023.36);
INSERT INTO dbo.table1 VALUES (555555,41555585,2586,'PLW',65879,'DR',-45.69);
Declare @TempTableVariable Table(
column1 nchar(15),
column2 nchar(6),
column3 nchar(3),
column4 nchar(10),
column5 nchar(6),
column6 nchar(25),
column7 nchar(17),
column8 nchar(17)
);
INSERT INTO @TempTableVariable
SELECT
source1 + source2 AS column1,
source3 AS column2,
source4 AS column3,
source5 AS column4,
source1 AS column5,
source6 AS column6,
CASE WHEN source7 > 0 THEN ABS(source7) ELSE NULL END AS column7,
CASE WHEN source7 < 0 THEN ABS(source7) ELSE NULL END AS column8
FROM dbo.table1
WHERE source1 = '999999';
SELECT
column1 AS SetID,
CAST(ISNULL(column2,'') AS nchar(4)) +
CAST(ISNULL(column3,'') AS nchar(6)) +
CAST(ISNULL(column4,'') AS nchar(14)) +
CAST(column5 AS nchar(7)) +
CAST(column1 AS nchar(15)) +
CAST(ISNULL(column7,'') AS nchar(17)) +
CAST(ISNULL(column8,'') AS nchar(17)) AS Input
FROM @TempTableVariable;
```
Result:
```
SETID|INPUT
99999955566889|8964OPL 25648 999999 99999955566889 12.35
99999933355589|3655SDO 75869 999999 99999933355589 897623.25
```
Thank you. | A solution with a CTE would be the following:
```
;WITH cte AS
(
SELECT
source1 + source2 AS column1,
source3 AS column2,
source4 AS column3,
source5 AS column4,
source1 AS column5,
source6 AS column6,
CASE WHEN source7 > 0 THEN ABS(source7) ELSE NULL END AS column7,
CASE WHEN source7 < 0 THEN ABS(source7) ELSE NULL END AS column8
FROM dbo.table1
WHERE source1 = '999999'
)
SELECT
column1 AS SetID,
CAST(ISNULL(column2,'') AS nchar(4)) +
CAST(ISNULL(column3,'') AS nchar(6)) +
CAST(ISNULL(column4,'') AS nchar(14)) +
CAST(column5 AS nchar(7)) +
CAST(column1 AS nchar(15)) +
CAST(ISNULL(CAST(column7 AS nchar(17)),'') AS nchar(17)) +
CAST(ISNULL(CAST(column8 AS nchar(17)),'') AS nchar(17)) AS Input
FROM cte
```
Slightly edited the post to overcome the fact that '' can't be CAST as decimal. | Not sure why you are using the @temp table, but try this:
```
SELECT
CAST(source1 + source2 AS nchar(15)) as column1,
CAST(ISNULL(source2,'') AS nchar(4))+
CAST(ISNULL(column4,'') AS nchar(6))+
CAST(ISNULL(column5,'') AS nchar(14))+
CAST(column1 AS nchar(7)) +
CAST(column6 AS nchar(15)) +
CASE WHEN source7 > 0 THEN CAST(ISNULL(abs(source7),'') AS nchar(17))
ELSE NULL END +
CASE WHEN source7 < 0 THEN CAST(ISNULL(abs(source7),'') AS nchar(17))
ELSE NULL END AS column2
FROM dbo.table1
WHERE source1 = '999999';
```
Can't test it, but should be would you need... | SQL Server add padding to a string | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I tried for hours and read many posts but I still can't figure out how to handle this request:
I have a table like this:
```
+------+------+
|ARIDNR|LIEFNR|
+------+------+
|1 |A |
+------+------+
|2 |A |
+------+------+
|3 |A |
+------+------+
|1 |B |
+------+------+
|2 |B |
+------+------+
```
I would like to select the ARIDNR that occurs more than once with the different LIEFNR.
The output should be something like:
```
+------+------+
|ARIDNR|LIEFNR|
+------+------+
|1 |A |
+------+------+
|1 |B |
+------+------+
|2 |A |
+------+------+
|2 |B |
+------+------+
``` | This ought to do it:
```
SELECT *
FROM YourTable
WHERE ARIDNR IN (
SELECT ARIDNR
FROM YourTable
GROUP BY ARIDNR
HAVING COUNT(*) > 1
)
```
The idea is to use the inner query to identify the records which have an `ARIDNR` value that occurs 1+ times in the data, then get all columns from the same table based on that set of values. | Try this please. I checked it and it's working:
```
SELECT *
FROM Table
WHERE ARIDNR IN (
SELECT ARIDNR
FROM Table
GROUP BY ARIDNR
HAVING COUNT(distinct LIEFNR) > 1
)
``` | Select rows with same id but different value in another column | [
"",
"sql",
""
] |
I have this query
```
SELECT
user_id,
played_level,
sum( time_taken ) AS time_take
FROM answer
WHERE completed =1
GROUP BY played_level, user_id
ORDER BY time_take
LIMIT 20
```
This query shows all user ids, ordered by the minimum time taken.
But now I want to display only distinct user ids, each with his minimum time.
```
user_id played_level time_take
1 18 19
1 12 21
2 3 25
6 3 26
2 2 27
6 4 27
1 8 32
```
expected output:
```
user_id played_level time_taken
1 18 19
2 3 25
6 3 26
First, select all distinct user\_id values from the table:
```
$distinct_users = "SELECT DISTINCT user_id FROM answer where addeddate BETWEEN :currdate1 AND :currdate2 AND completed=1";
$distinct_users_data = $conn->prepare($distinct_users);
$distinct_users_data->execute(array(':currdate1'=>$prev_date,':currdate2'=>$currdate2));
$distinct_users_arr = $distinct_users_data->fetchAll();
```
Now find the minimum time for each user\_id and store it in an array:
```
foreach($distinct_users_arr as $distict_data)
{
$user_ids=$distict_data['user_id'];
$all_user_time = "SELECT user_id, played_level, sum( time_taken ) AS time_take FROM answer where addeddate BETWEEN :currdate1 AND :currdate2
AND completed=1 AND user_id=:user_id GROUP BY played_level, user_id ORDER BY time_take limit 1";
$all_user_time_data = $conn->prepare($all_user_time);
$all_user_time_data->execute(array(':currdate1'=>$prev_date,':currdate2'=>$currdate2,':user_id'=>$user_ids));
$all_user_time_arr = $all_user_time_data->fetchAll();
$all_count= $all_user_time_data->rowCount();
foreach($all_user_time_arr as $leading_user)
{
$new_user_id= $leading_user['user_id'];
$new_time= $leading_user['time_take'];
$leader_board[] = array(
'user_id'=>$new_user_id,
'time_taken'=>$new_time
);
}
}
```
At the end, sort that array by time taken. We get the final result:
```
foreach ($leader_board as $key => $row)
{
$sorting[$key] = $row['time_taken'];
}
array_multisort($sorting, SORT_ASC, $leader_board);
``` | ALthough you can do this with a `join`, that is complicated because you have to repeat the aggregation. You can also do this with nested aggregations. The hard part is getting the `played_level` where the minimum occurs. The following query uses a trick to get that using `substring_index()`/`group_concat()`. Here is the query without the `limit` clause -- I'm not quite sure what you want to limit:
```
select user_id,
substring_index(group_concat(played_level order by time_take), ',', 1) as played_level,
min(time_take) as time_take
from (SELECT user_id, played_level, sum( time_taken ) AS time_take
FROM answer
WHERE completed =1
GROUP BY played_level, user_id
) a
group by user_id
ORDER BY time_take;
``` | How to use distinct in group by query? | [
"",
"sql",
"mysql",
"group-by",
"min",
""
] |
Say, I have an organizational structure that is 5 levels deep:
```
CEO -> DeptHead -> Supervisor -> Foreman -> Worker
```
The hierarchy is stored in a table `Position` like this:
```
PositionId | PositionCode | ManagerId
1 | CEO | NULL
2 | DEPT01 | 1
3 | DEPT02 | 1
4 | SPRV01 | 2
5 | SPRV02 | 2
6 | SPRV03 | 3
7 | SPRV04 | 3
... | ... | ...
```
`PositionId` is `uniqueidentifier`. `ManagerId` is the ID of employee's manager, referring `PositionId` from the same table.
I need a SQL query to get the hierarchy tree going down from a position, provided as parameter, *including the position itself.* I managed to develop this:
```
-- Select the original position itself
SELECT
'Rank' = 0,
Position.PositionCode
FROM Position
WHERE Position.PositionCode = 'CEO' -- Parameter
-- Select the subordinates
UNION
SELECT DISTINCT
'Rank' =
CASE WHEN Pos2.PositionCode IS NULL THEN 0 ELSE 1+
CASE WHEN Pos3.PositionCode IS NULL THEN 0 ELSE 1+
CASE WHEN Pos4.PositionCode IS NULL THEN 0 ELSE 1+
CASE WHEN Pos5.PositionCode IS NULL THEN 0 ELSE 1
END
END
END
END,
'PositionCode' = RTRIM(ISNULL(Pos5.PositionCode, ISNULL(Pos4.PositionCode, ISNULL(Pos3.PositionCode, Pos2.PositionCode))))
FROM Position Pos1
LEFT JOIN Position Pos2
ON Pos1.PositionId = Pos2.ManagerId
LEFT JOIN Position Pos3
ON Pos2.PositionId = Pos3.ManagerId
LEFT JOIN Position Pos4
ON Pos3.PositionId = Pos4.ManagerId
LEFT JOIN Position Pos5
ON Pos4.PositionId = Pos5.ManagerId
WHERE Pos1.PositionCode = 'CEO' -- Parameter
ORDER BY Rank ASC
```
It works not only for 'CEO' but for any position, displaying its subordinates, which gives me the following output:
```
Rank | PositionCode
0 | CEO
... | ...
2 | SPRV55
2 | SPRV68
... | ...
3 | FRMN10
3 | FRMN12
... | ...
4 | WRKR01
4 | WRKR02
4 | WRKR03
4 | WRKR04
```
**My problems are:**
The output does not include intermediate nodes - it will only output end nodes, i.e. workers and intermediate managers which have *no subordinates*. I need all intermediate managers as well.
I have to manually `UNION` the row with the original position on top of the output. Is there any more elegant way to do this?
I want the output to be sorted in hierarchical tree order: not all DeptHeads, then all Supervisors, then all Foremen, then all workers, but like this:
```
Rank | PositionCode
0 | CEO
1 | DEPT01
2 | SPRV01
3 | FRMN01
4 | WRKR01
4 | WRKR02
... | ...
3 | FRMN02
4 | WRKR03
4 | WRKR04
... | ...
```
Any help would be greatly appreciated. | Try a recursive CTE; the example on TechNet is almost identical to your problem, I believe:
<http://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx> | Thx, everyone suggesting CTE. I got the following code and it's working okay:
```
WITH HierarchyTree (PositionId, PositionCode, Rank)
AS
(
-- Anchor member definition
SELECT PositionId, PositionCode,
0 AS Rank
FROM Position AS e
WHERE PositionCode = 'CEO'
UNION ALL
-- Recursive member definition
SELECT e.PositionId, e.PositionCode,
Rank + 1
FROM Position AS e
INNER JOIN HierarchyTree AS d
ON e.ManagerId = d.PositionId
)
SELECT Rank, PositionCode
FROM HierarchyTree
GO
``` | SQL Server: Select hierarchically related items from one table | [
"",
"sql",
"sql-server",
"hierarchy",
""
] |
I have this table:
```
fy period division employee_id category_name amount
2013 4 Sales 123452 Salary 130000
2013 4 Marketing 124232 Salary 120000
2013 4 Sales-WC 124244 Bonus 10000
2013 4 Sales 124244 Adjustments 1000
2013 4 Sales-WC 897287 Salary 65000
```
I'm trying to get a query that will give me the sum of the amounts for each category, but the trick is I want to combine the division when it has '-WC' on it.
This query:
```
select division_name, category_name, SUM(amount) as amount
FROM tblStaff
where fy=2013 and period=4
group by division_name, category_name
```
Gives me close to what I want:
```
division category_name amount
Sales Salary 130000
Sales-WC Salary 65000
Sales-WC Bonus 1000
Marketing Salary 120000
Sales Adjustments 1000
```
But What I would like is:
```
division category_name amount
Sales Salary 195000
Sales-WC Bonus 1000
Marketing Salary 120000
Sales Adjustments 1000
```
Where category\_name 'Salary' has been combined for 'Sales' and 'Sales-WC'.
I tried starting with a case statement like:
```
SELECT case
when division_name = division_name + '-WC' THEN division_name
ELSE 'not found' --did this just for testing
END as 'division_name',
category_name,
SUM(amount)
FROM tblStaff
where fy=2013 and period=4
group by division_name, category_name
```
But it appears I can't use a column name in that part of the case statement, because when I run this command I just get division\_name = 'not found' for every row.
Any ideas? Thanks! | You have to check if the division\_name ends with '-WC' and then strip it off:
```
select
case
when division_name like '%-WC'
then substring(division_name from 1 for position('-WC' in division_name) -1)
else division_name
end,
category_name, SUM(amount) as amount
FROM tblStaff
where fy=2013 and period_=4
group by
case
when division_name like '%-WC'
then substring(division_name from 1 for position('-WC' in division_name) -1)
else division_name
end,
category_name
``` | I think you want `LIKE`:
```
SELECT
case
when division_name LIKE '%-WC' THEN replace(division_name, '-WC', '')
ELSE division_name
END as 'division_name',
...
```
or you could just do this:
```
SELECT
replace(division_name, '-WC', '') as division_name,
...
GROUP BY replace(division_name, '-WC', '')
``` | sum the result of certain rows by comparing string values of column | [
"",
"sql",
"select",
""
] |
My SQL code is fairly simple. I'm trying to select some data from a database like this:
```
SELECT * FROM DBTable
WHERE id IN (1,2,5,7,10)
```
I want to know how to declare the list before the select (in a variable, list, array, or something) and inside the select only use the variable name, something like this:
```
VAR myList = "(1,2,5,7,10)"
SELECT * FROM DBTable
WHERE id IN myList
``` | You could declare a variable as a temporary table like this:
```
declare @myList table (Id int)
```
Which means you can use the `insert` statement to populate it with values:
```
insert into @myList values (1), (2), (5), (7), (10)
```
Then your `select` statement can use either the `in` statement:
```
select * from DBTable
where id in (select Id from @myList)
```
Or you could join to the temporary table like this:
```
select *
from DBTable d
join @myList t on t.Id = d.Id
```
And if you do something like this a lot then you could consider defining a [user-defined table type](http://technet.microsoft.com/en-us/library/bb522526(v=sql.105).aspx) so you could then declare your variable like this:
```
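-- Note: the table type itself must be created once beforehand, e.g.
-- (a sketch; MyTableType is just this example's name):
-- CREATE TYPE dbo.MyTableType AS TABLE (Id int)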
declare @myList dbo.MyTableType
``` | That is not possible with a normal query since the `in` clause needs separate values and not a single value containing a comma separated list. One solution would be a dynamic query
```
declare @myList varchar(100)
set @myList = '1,2,5,7,10'
exec('select * from DBTable where id IN (' + @myList + ')')
``` | SQL Server procedure declare a list | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I really wanted to come up with the solution by myself for this one, but this is turning out to be slightly more challenging than I thought it would be.
The table I am trying to retrieve information from would look something like the one below, in simplified form.
Table: CarFeatures
```
+---+---+---+---+-----+
|Car|Nav|Bth|Eco|Radio|
+---+---+---+---+-----+
|a |y |n |n |y |
+---+---+---+---+-----+
|b |n |y |n |n |
+---+---+---+---+-----+
|c |n |n |y |n |
+---+---+---+---+-----+
|d |n |y |y |n |
+---+---+---+---+-----+
|e |y |n |n |n |
+---+---+---+---+-----+
```
On the SSRS report, I need to display all the cars that have all the features from the given parameters. This will receive parameters from the report like: Nav-yes/no, Bth-yes/no, Eco-yes/no, Radio-yes/no.
For instance, if the parameter input were 'Yes' for navigation and 'No' for others, the result table should be like;
```
+---+----------+
|Car|Features |
+---+----------+
|a |Nav, Radio|
+---+----------+
|e |Nav |
+---+----------+
```
I thought this would be simple, but as I try to get the query done, it is kind of driving me crazy. Below is what I initially thought would get me what I need, but it didn't:
```
select Car,
case when @nav = 'y' then 'Nav ' else '' end +
case when @bth = 'y' then 'Bth ' else '' end +
case when @eco = 'y' then 'Eco ' else '' end +
case when @radio = 'y' then 'Radio ' else '' end As Features
from CarFeatures
where (nav = @nav -- here I don't want the row to be picked if the input is 'n'
or bth = @bth
or eco = @eco
or radio = @radio)
```
Basically the logic should be something like, if there is a row for every parameter that is 'yes,' list me all the features with 'yes' for that row, even though the parameters are 'no' for those other features.
Also, I am not considering to filter on the report. I want this to be on stored proc itself.
I would certainly like to avoid multiple ifs; with 4 parameters, writing an if for every permutation would not be a good approach.
Thanks. | --Aha! I kind of figured it out myself (very happy) :). Since the value for the columns can only be 'y' or 'n', to ignore a column when its parameter value is 'n',
-- I will just ask it to look for a value that will never be there.
--If anyone has a better way of doing it, or can enhance what I have (preferred), it would be appreciated.
--Thanks to everyone who replied. Since this is part of an already existing table and also a piece of a big stored proc, I was reluctant to go with the previous answers to the question.
--variable declaring and assignments here
```
select Car,
case when @nav = 'y' then 'Nav ' else '' end +
case when @bth = 'y' then 'Bth ' else '' end +
case when @eco = 'y' then 'Eco ' else '' end +
case when @radio = 'y' then 'Radio ' else '' end As Features
from CarFeatures
where (nav = case when @nav = 'y' then 'Y' else 'B' end
    OR bth = case when @bth = 'y' then 'Y' else 'B' end
    OR eco = case when @eco = 'y' then 'Y' else 'B' end
    OR radio = case when @radio = 'y' then 'Y' else 'B' end
    )
``` | Your schema is awkward and denormalised, you should have 3 tables,
```
Car
Feature
CarFeature
```
The `CarFeature` table should consist of two columns, `CarId` and `FeatureId`. Then your could do something like,
```
SELECT DISTINCT
cr.CarId
FROM
CarFeature cr
WHERE
cr.FeatureId IN SelectedFeatures;
```
Rant
{
> Not only would it be easy to add features without changing the schema,
> offer better performance because of support of set based operations
> covered by good indecies, overall use less storage because you no
> longer need to store the `No` values, you would comply with some well
> thought out and established patterns backed by 40+ years of
> development effort and clarification.
}
---
If, for whatever reason, you cannot change the data or schema, you could `UNPIVOT` the columns like this, [Fiddle Here](http://sqlfiddle.com/#!3/34616/14)
```
SELECT
p.Car,
p.Feature
FROM
(
SELECT
Car,
Nav,
Bth,
Eco,
Radio
FROM
CarFeatures) cf
UNPIVOT (Value For Feature In (Nav, Bth, Eco, Radio)) p
WHERE
p.Value='Y';
```
Or, you could do it old style like this [Fiddle Here](http://sqlfiddle.com/#!3/34616/6),
```
SELECT
Car,
'Nav' Feature
FROM
CarFeatures
WHERE
Nav = 'Y'
UNION ALL
SELECT
Car,
'Bth' Feature
FROM
CarFeatures
WHERE
Bth = 'Y'
UNION ALL
SELECT
Car,
'Eco' Feature
FROM
CarFeatures
WHERE
Eco = 'Y'
UNION ALL
SELECT
Car,
'Radio' Feature
FROM
CarFeatures
WHERE
Radio = 'Y'
```
to essentially, denormalise into subquery. Both queries give results like this,
```
CAR FEATURE
A Nav
A Radio
B Bth
B Radio
C Eco
D Bth
D Eco
E Nav
``` | Conditional where; Sql query/TSql for SQLServer2008 | [
"",
"sql",
"sql-server-2008",
"t-sql",
"ssrs-2008",
""
] |
In my MYSQL database I have multiple tables like this:
`reference`:
```
------------------------------
| id | title | subtitle |
| 1 | Test | Just a test |
| 2 | Another | Second |
| 3 | Last | Third |
------------------------------
```
`author`:
```
-----------------------------
| id | firstname | lastname |
| 1 | Peter | Pan |
| 2 | Foo | Bar |
| 3 | Mr. | Handsome |
| 4 | Steve | Jobs |
-----------------------------
```
`keyword`:
```
----------------
| id | keyword |
| 1 | boring |
| 1 | lame |
----------------
```
`reference_author_mm`:
```
---------------------------
| uid_local | uid_foreign |
| 1 | 1 |
| 1 | 2 |
| 2 | 1 |
| 2 | 2 |
| 2 | 4 |
| 3 | 1 |
| 3 | 2 |
| 3 | 4 |
---------------------------
```
Each of these tables has an `n:m` relation table to the `reference` table. With one query I want to `SELECT` all references and **filter/order** them by author name, keywords, etc. So the output I want is something like this:
*User searched by **title***:
```
------------------------------------------------------------------------------
| id | title | subtitle | authors | keywords |
| 1 | Test | Just a test | Peter Pan, Foo Bar | boring |
| 2 | Another | Second | Peter Pan, Foo Bar, Steve Jobs | boring, lame |
| 3 | Last | Third | Peter Pan, Foo Bar, Steve Jobs | boring, lame |
-------------------------------------------------------------------------------
```
At this moment I have a very long construct that is not quite working:
```
SELECT r.* , Authors.author AS authors, Keywords.keyword AS keywords
FROM references AS r
LEFT OUTER JOIN (
SELECT r.pid AS pid,
GROUP_CONCAT(CONCAT_WS(\' \', CONCAT(p.lastname, \',\'), p.prefix, p.firstname) SEPARATOR \'; \') AS author
FROM reference AS r
LEFT JOIN reference_author_mm r2p ON r2p.uid_local = r.uid
LEFT JOIN author p ON p.uid = r2p.uid_foreign
GROUP BY r.pid
) Authors ON r.pid = Authors.pid
```
This `LEFT OUTER JOIN` statement exists multiple times, once for every `n:m` relation to `reference`. What would be the easiest way to query these tables? Keep in mind that I need to e.g. `ORDER BY` the authors or search through the keywords of a reference.
### Edit:
The new query I tried:
```
SELECT r.*,
GROUP_CONCAT(CONCAT_WS(\' \', CONCAT(a.lastname, \',\'), a.prefix, a.firstname) SEPARATOR \'; \') AS authors,
GROUP_CONCAT(CONCAT_WS(\' \', k.name)) AS keywords,
GROUP_CONCAT(CONCAT_WS(\' \', c.name)) AS categories,
GROUP_CONCAT(CONCAT_WS(\' \', p.name)) AS publishers
FROM reference AS r
LEFT OUTER JOIN (
SELECT * FROM reference_author_mm AS rha
INNER JOIN author AS a ON a.uid = rha.uid_foreign
) AS a ON a.uid_local = r.uid
LEFT OUTER JOIN (
SELECT * FROM reference_keyword_mm AS rhk
INNER JOIN keywords AS k ON k.uid = rhk.uid_foreign
) AS k ON k.uid_local = r.uid
``` | I tried to make my query as clear as possible according to the table structure you specified in your question (not what I could see in your queries).
You will probably need to change some field/table name.
You can have a look at the query result here: `SQLFiddle demo`
```
SELECT
r.id
,r.title
,r.subtitle
,ra.authors
,rk.keyworks
FROM
reference r
LEFT JOIN
(SELECT
r.id
,GROUP_CONCAT(CONCAT(a.firstname, ' ', a.lastname) ORDER BY CONCAT(a.firstname, ' ', a.lastname) SEPARATOR ', ') as authors
FROM
reference r
LEFT JOIN reference_author_mm r2a
ON r.id = r2a.uid_local
LEFT JOIN author a
ON a.id = r2a.uid_foreign
GROUP BY
r.id) ra
ON ra.id = r.id
LEFT JOIN
(SELECT
r.id
,GROUP_CONCAT(k.keyword ORDER BY k.keyword SEPARATOR ', ') as keyworks
FROM
reference r
LEFT JOIN reference_keyword_mm r2k
ON r.id = r2k.uid_local
LEFT JOIN keyword k
ON k.id = r2k.uid_foreign
GROUP BY
r.id) rk
ON rk.id = r.id
ORDER BY
r.id
```
If you need to order author names by lastname, just change `CONCAT(a.firstname, ' ', a.lastname)` to `CONCAT(a.lastname, ' ', a.firstname)`.
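A cut-down check of this per-relation `GROUP_CONCAT` approach, run with Python's built-in SQLite (authors only; note that SQLite's `group_concat` takes a separator argument but no `ORDER BY`, so name order within a group is not guaranteed):

```python
import sqlite3

# Reduced version of the question's schema: references, authors, and the
# n:m link table; only two references and three authors for brevity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE reference (id INTEGER, title TEXT);
    CREATE TABLE author (id INTEGER, firstname TEXT, lastname TEXT);
    CREATE TABLE reference_author_mm (uid_local INTEGER, uid_foreign INTEGER);
    INSERT INTO reference VALUES (1, 'Test'), (2, 'Another');
    INSERT INTO author VALUES (1, 'Peter', 'Pan'), (2, 'Foo', 'Bar'), (4, 'Steve', 'Jobs');
    INSERT INTO reference_author_mm VALUES (1, 1), (1, 2), (2, 1), (2, 2), (2, 4);
""")
rows = conn.execute("""
    SELECT r.id, r.title, ra.authors
    FROM reference r
    LEFT JOIN (
        SELECT r2a.uid_local AS ref_id,
               group_concat(a.firstname || ' ' || a.lastname, ', ') AS authors
        FROM reference_author_mm r2a
        JOIN author a ON a.id = r2a.uid_foreign
        GROUP BY r2a.uid_local
    ) ra ON ra.ref_id = r.id
    ORDER BY r.id
""").fetchall()
# Compare as sets, since group_concat order is not guaranteed here.
authors_by_ref = {ref_id: set(names.split(', ')) for ref_id, _title, names in rows}
```

The keyword, category, and publisher subqueries would follow the same pattern, each pre-aggregated before the outer join.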
If you have any issue, feel free to update the table structure in my sqlfiddle demo, and leave me a comment explaining what issue you are facing, with examples. | Maybe?
```
SELECT r.*,
GROUP_CONCAT(CONCAT_WS(\' \', CONCAT(a.lastname, \',\'), a.prefix, a.firstname) SEPARATOR \'; \') AS author,
GROUP_CONCAT(CONCAT_WS(\' \', k.keywords)) AS keywords
FROM references AS r
LEFT OUTER JOIN (SELECT * FROM references_has_authors AS rha INNER JOIN authors AS a ON a.id = rha.authors_id) AS a ON a.references_id = r.id
LEFT OUTER JOIN (SELECT * FROM references_has_keywords AS rhk INNER JOIN keywords AS k ON k.id = rhk.keywords_id) AS k ON k.references_id = r.id
GROUP BY r.id;
```
As a last-ditch effort, maybe do it in the application? | Optimize Subselect Search query | [
"",
"mysql",
"sql",
"query-optimization",
""
] |
So I need to get the sum of 20+ columns in a table I'm working with, and I'd really rather not write out the sum of every single column, but I'm not sure that's possible. Is there a way to have this generated dynamically for all the columns in the table?
```
select isnull(osname, 'Total'), COUNT(*), SUM(col1), SUM(col2), SUM(col3)...sum(col27)
from usage
group by osname
with rollup
``` | First and foremost, if you can automate something, please do. Typing out 20 or 100 columns is definitely tedious.
My solution below uses a table-valued function to get the list of numbered columns. This can be reused.
Second, I create some dynamic T-SQL using this function and applying your logic. The WITH ROLLUP syntax is non-ISO compliant. It was replaced in 2008 R2.
<http://technet.microsoft.com/en-us/library/ms177673.aspx>
Let's get down to business. I like creating a quick test table to make sure my syntax is correct. Even I make mistakes.
```
--
-- Setup test data
--
-- Just playing around
use tempdb;
go
-- Drop test table
if object_id ('usage') > 0
drop table usage
go
-- Create test table
create table usage
( osname varchar(16),
col1 int,
col2 int,
col3 int,
col4 int,
col5 int
);
go
-- Test data
insert into usage values
('UNIX', 1, 2, 3, 4, 5),
('UNIX', 2, 4, 6, 8, 10),
('WIN7', 1, 2, 3, 4, 5),
('WIN7', 2, 4, 6, 8, 10),
('WIN8', 5, 10, 15, 20, 25);
go
-- Show the data
select * from usage;
go
```
Next, let's create an inline table-valued function. They are relatively quick.
<http://blog.waynesheffield.com/wayne/archive/2012/02/comparing-inline-and-multistatement-table-valued-functions/>
```
--
-- Create helper function
--
-- Remove ITVF
if object_id('get_columns') > 0
drop function get_columns
go
-- Create ITVF
create function get_columns (@table_name sysname)
returns table
as
return
(
select top 100 column_id, name from sys.columns where object_id in
( select object_id from sys.tables t where t.name = @table_name )
order by column_id
)
go
-- Get the columns
select * from get_columns('usage');
go
```
Now, let's put it all together to solve your problem. Notice that you need real column names, which you left out. I also changed over to ISO-compliant syntax.
```
--
-- Solve the problem
--
-- Dynamic SQL
declare @var_tsql nvarchar(max) = 'SELECT ';
select
@var_tsql +=
case
when column_id <> 1 then 'SUM(' + name + ') as s_' + name + ', '
else 'ISNULL(' + name + ', ''Total'') as s_name, '
end
from get_columns('usage');
select @var_tsql += 'COUNT(*) as s_count FROM usage GROUP BY ROLLUP(osname); '
--print @var_tsql
EXEC sp_executesql @var_tsql;
```
The print statement is used to debug any syntax errors. Cut and paste into another window to check syntax. The more you learn, the easier dynamic SQL will be.
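The same idea can be sketched outside SQL Server too; here is a Python/SQLite version, where `PRAGMA table_info` stands in for `sys.columns` (same test data as above):

```python
import sqlite3

# Build the SUM list dynamically from column metadata instead of typing it out.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE usage (osname TEXT, col1 INT, col2 INT, col3 INT, col4 INT, col5 INT);
    INSERT INTO usage VALUES
        ('UNIX', 1, 2, 3, 4, 5), ('UNIX', 2, 4, 6, 8, 10),
        ('WIN7', 1, 2, 3, 4, 5), ('WIN7', 2, 4, 6, 8, 10),
        ('WIN8', 5, 10, 15, 20, 25);
""")
# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
cols = [r[1] for r in conn.execute("PRAGMA table_info(usage)") if r[1] != 'osname']
sum_list = ", ".join(f"SUM({c}) AS s_{c}" for c in cols)
sql = f"SELECT osname, COUNT(*), {sum_list} FROM usage GROUP BY osname ORDER BY osname"
rows = conn.execute(sql).fetchall()
```

SQLite has no `ROLLUP`, so this version only shows the dynamic column-list generation, not the grand-total row.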
The above dynamic T-SQL works fine. Here is your output.
 | I would run:
```
SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'usage'
```
Then I would copy the columns into Excel or a Google spreadsheet.
Then make column B in Excel:
```
="SUM(" & A1 & "),"
```
And then copy that to all appropriate B cells. | Sum each column for a table with over 20 columns without writing out each sum | [
"",
"sql",
"sql-server",
""
] |
```
SELECT
*, (SELECT count(1) from color i WHERE i.NAME <= c.NAME)
FROM color c
ORDER BY name
```
Output:
```
id NAME
----------- ------------------------- -----------
4 blue 1
1 orange 2
3 red 3
2 yellow 4
```
I don't understand how this query generates row numbers without using the `ROW_NUMBER` function.
What is the query flow: is the subquery executed before the `ORDER BY` or after? | It's a correlated subquery. For each row in `c`, count the number of rows in the same table with the same name or a lower name.
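That counting can be watched directly; here is a small SQLite check (run from Python) with the four rows from the question:

```python
import sqlite3

# For each outer row, the correlated subquery counts names <= the current name,
# which yields the row's 1-based rank when names are unique.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE color (id INTEGER, name TEXT);
    INSERT INTO color VALUES (4, 'blue'), (1, 'orange'), (3, 'red'), (2, 'yellow');
""")
rows = conn.execute("""
    SELECT id, name,
           (SELECT count(1) FROM color i WHERE i.name <= c.name) AS rn
    FROM color c
    ORDER BY name
""").fetchall()
```

The subquery is evaluated once per outer row, independently of the final `ORDER BY`; the ordering only affects how the already-computed rows are presented.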
This is an absolutely inefficient and wasteful way to generate a row number, but is likely the most common way people worked around the lack of such a function in, say, SQL Server 2000. Today, of course, you should be using:
```
SELECT id, name, ROW_NUMBER() OVER (ORDER BY name)
FROM dbo.color
ORDER BY name;
```
Assuming `name` is unique and you don't need to break ties. | the subselect
```
select count(1) from color i where i.name <= c.name
```
counts the rows whose name is smaller than or equal to the name in the current row of the outer select, thus counting the rows up to that point. You can confuse it by having a colour twice in that table: try inserting red again and see what happens. | row_nums without using row_number function? | [
"",
"sql",
"sql-server",
""
] |
I have the following SQL statement. Which is working okay. Except in some cases the processing of the query takes a lot of time, resulting in a system time out. So I need to convert the statement. Probably with a JOIN.
I can't figure out how to convert this statement:
```
SELECT table1.id as id,
table1.firstname,
table1.lastname,
table3.name,
table4.name
FROM table1, table2, table3, table4, table5
WHERE table1.id = table2.tabel1id
AND table2.table2id = table3.table2id
AND table3.table3id = table5.table3id
AND table5.somecode = '5'
AND table3.table3id = table4.name
AND table1.firstname LIKE '%John%'
GROUP BY table1.id
ORDER BY table3.name, table1.firstname, table1.lastname
```
I have the nesting of the tables and I also want to use the results of subqueries in the final SELECT (table3.name and table4.name) | ```
select t1.id,t1.firstname,t3.name,t4.name
from table1 as t1
inner join table2 as t2 on t1.id=t2.tabel1id
inner join table3 as t3 on t2.table2id=t3.table2id
inner join table4 as t4 on t3.table3id=t4.name
inner join table5 as t5 on t5.table3id=t3.table3id
where t1.firstname like '%John%' and t5.somecode='5'
group by t1.id
order by t3.name,t1.firstname,t1.lastname
``` | You already have JOINs, but you use the implicit version (comma-delimited table list in FROM plus join-conditions in WHERE) instead of the explicit (JOIN plus ON).
So rewriting should not improve performance (otherwise MySQL is more crappy than I thought).
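To see that the two forms are equivalent, here is a reduced two-table sketch run with Python's built-in SQLite (table and column names are simplified stand-ins for the question's tables, and the data is invented):

```python
import sqlite3

# The comma-join + WHERE form and the explicit JOIN ... ON form
# produce identical result sets.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (a1 INT, a2 INT, a3 INT);
    CREATE TABLE t2 (a1 INT);
    INSERT INTO t1 VALUES (1, 10, 100), (2, 20, 200), (3, 30, 300);
    INSERT INTO t2 VALUES (100), (300);
""")
implicit = conn.execute(
    "SELECT t1.a1, t1.a2, t1.a3 FROM t1, t2 WHERE t1.a3 = t2.a1 ORDER BY t1.a1"
).fetchall()
explicit = conn.execute(
    "SELECT t1.a1, t1.a2, t1.a3 FROM t1 INNER JOIN t2 ON t1.a3 = t2.a1 ORDER BY t1.a1"
).fetchall()
```

Since the optimizer treats both the same way, the timeout is more likely an indexing problem than a syntax one.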
You should instead check whether you created all necessary indexes. | Converting complex SQL to JOIN | [
"",
"mysql",
"sql",
"sql-server",
"subquery",
"inner-join",
""
] |
When we make a SQL request like this:
```
SELECT attr1, attr2, AGGR(*)
FROM SomeTable
GROUP BY attr1, attr2
```
for some aggregate function, we have to include a GROUP BY clause and list attr1 and attr2 in it. My question is: why doesn't the SQL DBMS do it itself? | Because you may want to group by `attr1`, `attr2` *and* `attr3`, even though you have no interest in the actual `attr3` value itself.
In other words, you may be required to group by the non-aggregated columns, but that is only a minimum. There's nothing preventing you from grouping by columns not actually included in the query.
And, since the query evaluator doesn't know what you wanted (or it wants you to follow the relevant standard), it may think it safer to insist you explicitly state it. | You could do it without `GROUP BY`; then you get the average over the whole table.
With `GROUP BY`, you get the average for each group. | Why we use group by in sql | [
"",
"sql",
"group-by",
""
] |
Please update the title as I couldn't come up with a suitable one, and remove this comment ;)
I have a table having data like:
```
id | assigned_to | foreign_id | purchase_date
---+-------------+------------+--------------
 1 | 5           | 1          | 2014-01-01
 4 | 7           | 1          | 2014-01-09
 6 | 2           | 1          | 2014-01-10
 8 | 9           | 1          | 2014-01-13
12 | 9           | 2          | 2014-01-13
18 | 9           | 3          | 2014-01-13
```
I want to retrieve the row from this data for `foreign_id=1` and `assigned_to=7` if a row exists for `assigned_to=7`;
otherwise the row having the latest `purchase_date` for `foreign_id=1`, **irrespective of assigned_to=7**.
Result Set:
**CASE I: IF ASSIGNED_TO=7 AND FOREIGN_ID=1**
```
id | assigned_to | foreign_id | purchase_date
---+-------------+------------+--------------
 4 | 7           | 1          | 2014-01-09
```
**CASE II: IF ASSIGNED_TO=7 AND FOREIGN_ID=2**
```
id | assigned_to | foreign_id | purchase_date
---+-------------+------------+--------------
12 | 9           | 2          | 2014-01-13
```
Please give a MySQL query that fetches this. Whichever way I tried (with my SQL knowledge ;)), I got either only the row having `assigned_to=7`, or nothing when no row with `assigned_to=7` exists for the given `foreign_id`. | Put the filter on `foreign_id` into the `WHERE` clause and the filter on `assigned_to` into the `ORDER BY` clause and add LIMIT 1, like this:
```
SELECT
id,
assigned_to,
foreign_id,
purchase_date
FROM
atable
WHERE
foreign_id = @foreign_id
ORDER BY
assigned_to = @assigned_to DESC,
purchase_date DESC
LIMIT 1
;
```
where `@foreign_id` and `@assigned_to` are query arguments.
The query will first find all rows matching the `@foreign_id`. Then it will divide them into two subsets:
1) rows matching the `@assigned_to`;
2) all the others.
The `@assigned_to` subset will be placed before the other subset and both will be additionally sorted in the descending order of `purchase_date`. Finally, the first row of the sorted dataset will be returned and the rest discarded.
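A quick check of this logic with SQLite from Python (SQLite has no `@`-style session variables, so ordinary bound parameters stand in):

```python
import sqlite3

# The question's rows; ordering by the boolean (assigned_to = ?) DESC
# floats the preferred match to the top, with latest purchase_date as
# the tie-breaker / fallback.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE atable (id INT, assigned_to INT, foreign_id INT, purchase_date TEXT);
    INSERT INTO atable VALUES
        (1, 5, 1, '2014-01-01'), (4, 7, 1, '2014-01-09'),
        (6, 2, 1, '2014-01-10'), (8, 9, 1, '2014-01-13'),
        (12, 9, 2, '2014-01-13'), (18, 9, 3, '2014-01-13');
""")

def pick(foreign_id, assigned_to):
    return conn.execute("""
        SELECT id FROM atable
        WHERE foreign_id = ?
        ORDER BY (assigned_to = ?) DESC, purchase_date DESC
        LIMIT 1
    """, (foreign_id, assigned_to)).fetchone()[0]

case1 = pick(1, 7)  # assigned_to=7 exists for foreign_id=1
case2 = pick(2, 7)  # no assigned_to=7 for foreign_id=2, falls back to latest date
```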
As a result, the query returns the latest `@foreign_id`/`@assigned_to` match if it exists, or just the latest `@foreign_id` match otherwise. | You can use case...when in mysql with count() function.
Check out the following link:
<http://www.mysqltutorial.org/mysql-case-statement/> | MySQL: Fetch conditional rows only, from a query dataset | [
"",
"mysql",
"sql",
"optimization",
""
] |
I'm not getting the value of `@lastupdate`. If I remove the `SELECT` statement, then I do get the value of `@lastupdate` in the code-behind.
```
@action int,
@autoid int = NULL,
@datecheck date = NULL,
@lastupdate datetime = '' OUTPUT
AS
BEGIN
SET NOCOUNT ON;
IF @action = 1
BEGIN
SET @lastupdate = (SELECT reddate FROM lastupdate)
SELECT * FROM view_mychart WHERE processdate=@datecheck
END
END
``` | Any output parameters are returned after the result set. Make sure you have processed/read the results before trying to access the output parameter. | From what I can see you should be using something along the lines of the following:
```
SELECT @lastupdate = reddate FROM lastupdate
```
Though personally I would probably limit the query as this could take a while depending on database size etc. | Select statement with output clause doesn't work | [
"",
"sql",
""
] |
Let's say we have a table like this:

I only want to select unique plans with the highest time, while also being able to select the other columns. To illustrate, here's a query (which obviously wouldn't work):
```
SELECT DISTINCT(plan), time, id
FROM table
ORDER BY time desc
```
How can I get a table like this:
```
plan|time|id
----+----+--------
1 |0 |9
2 |90 |10
3 |180 |11
4 |360 |12
5 |720 |13
6 |1080|15
7 |0 |16
8 |720 |23
``` | SELECT plan, max(time), id
FROM demo group by plan
[check fiddle](http://sqlfiddle.com/#!9/7483c/7) | This sounds like a simple aggregate query:
```
SELECT plan, MAX(time)
FROM mytable
GROUP BY plan
``` | SQL select distinct only highest values | [
"",
"mysql",
"sql",
"select",
"distinct",
""
] |
Not sure the title makes my question clear. What I need is:
TABLE: id, year
Example data:
```
+--+----+
|id|year|
+--+----+
|1 |1990|
+--+----+
|2 |1990|
+--+----+
|3 |1990|
+--+----+
|4 |1992|
+--+----+
|5 |1992|
+--+----+
```
As input I receive an ID. For id 1 the result must be 3 (because 3 rows share the same year as the row with id 1). Is there any way to avoid a subquery?
(Also, a code snippet for Android would be fine too :) Thanks. | ```
SELECT t2.id, t1.year, count(*)
FROM TEMP t1 INNER JOIN TEMP t2 ON t1.year=t2.year
GROUP BY t1.year, t2.id
```
Look at <http://sqlfiddle.com/#!7/38b05/4> | First extract the distinct year names using
```
String selectQuery = "SELECT DISTINCT year FROM "+TABLE_NAME;
```
Result of this query will be
```
1990
1992
```
After getting this, you can use the following to get all the rows for the selected year:
```
String query = "Select * from "+TABLE_NAME+" where year = '"+year+"'";
```
You can store this in an ArrayList and get the count by calling the size() method. | How to get row count with same field value as from row with provided id? | [
"",
"android",
"sql",
""
] |
I have a query that looks like this:
```
select Group, Sum(columnA) as SumColumn
FROM table
GROUP BY Group
```
I get results looking like this
```
+-----+---------+
|Group|SumColumn|
+-----+---------+
|A |10 |
+-----+---------+
|B |20 |
+-----+---------+
```
How can I change/add to this to show something like this?
```
+-----+---------+-----------+
|Group|SumColumn|TotalColumn|
+-----+---------+-----------+
|A |10 |30 |
+-----+---------+-----------+
|B |20 |30 |
+-----+---------+-----------+
``` | It is hard to see what your data looks like -- but from what you posted this is what you want:
```
SELECT Name,
SumColumn,
SUM(SumColumn) AS TotalColumn
FROM
(
SELECT Group as Name, SUM(columnA) AS SumColumn
FROM Table
GROUP BY Group
) T
```
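For engines without window functions, the same per-row total can be produced with a scalar subquery; a quick SQLite check from Python (illustrative table, with `Group` double-quoted because it is a reserved word):

```python
import sqlite3

# Per-group sums plus a grand total on every row, via a scalar subquery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t ("Group" TEXT, columnA INT);
    INSERT INTO t VALUES ('A', 4), ('A', 6), ('B', 20);
""")
rows = conn.execute("""
    SELECT "Group",
           SUM(columnA) AS SumColumn,
           (SELECT SUM(columnA) FROM t) AS TotalColumn
    FROM t
    GROUP BY "Group"
    ORDER BY "Group"
""").fetchall()
```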
You might want this -- depending on other stuff.
```
SELECT *,
SUM(columnA) OVER (PARTITION BY Group ORDER BY Group) AS SumColumn,
SUM(columnA) OVER (PARTITION BY Group) AS TotalColumn
FROM TABLE
``` | You can actually mix window functions and aggregation functions in a `select` statement. So, you can do this without subqueries:
```
select Group, Sum(columnA) as SumColumn,
sum(sum(columnA)) over () as TotalColumns
FROM table
GROUP BY Group;
``` | sum column to show total in every row | [
"",
"sql",
"t-sql",
"sql-server-2005",
""
] |
To start: I am querying a DB2 server built into the IBM i System package from Excel 2007 using "From Microsoft Query" and straight-up SQL. It is a financial database.
The DB is structured such that account #s are split up into 8 integer fields for fund, division, department, activity 1, activity 2, element 1, element 2, and object. In order to make query results readable I have concatenated them with the following code:
```
select
right(trim(trailing from (concat('00',cast(T1."GMFUND" as Char(8))))),3)||'-'
||right(trim(trailing from (concat('00',cast(t1."GMDPT" as Char(8))))),2)
||right(trim(trailing from (concat('00',cast(T1."GMDIV" as Char(8))))),2) ||'-'
||right(trim(trailing from (concat('00',cast(T1."GMSTAB" as Char(8))))),2)
||right(trim(trailing from (concat('00',cast(T1."GMSTAS" as Char(8))))),1)||'.'
||right(trim(trailing from (concat('00',cast(T1."GMELM1" as Char(8))))),1)
||right(trim(trailing from (concat('00',cast(T1."GMELM2" as Char(8))))),1)||'-'
||right(trim(trailing from (concat('00',cast(T1."GMOBJ" as Char(8))))),2)
as AJAccount
```
It's a bit difficult to read, so I'll explain: For each field I am concatenating '00' to each # typecast as an 8-char string (to add leading zeroes), then trimming the trailing blank spaces (from the typecast conversion), and finally grabbing the correct # of digits for each using right(). That ends up making up a single acct # which we use day-to-day in the following format:
000-0000-000.00-00
Now here comes the more complicated part. I need to sum a calculated field for debits and credits, defined by the following:
```
IFNULL(T1."GMDAMT",0)-IFNULL(T1."GMCAMT",0) as Trans_Total
```
But that isn't summed yet. And that's where I run into my problem. When I query those two fields I get accurate results for each and every transaction in the DB, labeled w/ acct #s. But when I attempt to enclose the [debits - credits] field in a SUM() aggregate function and group by 'AJAccount' my query throws an error.
The error points to the following section of my concatenated acct # field:
```
right(trim (trailing from (concat('00',cast(T1."GMOBJ" as Char(8))))),2)
```
Specifically it highlights "GMOBJ" as the source of the error. But without the SUM() function, the code runs just fine, producing results that look like the following:
```
001-0000-243.00-00 | 166898.00
001-0000-244.00-00 | -166898.00
161-0000-243.00-00 | 3000.00
161-0000-244.00-00 | -3000.00
470-0000-243.00-00 | 4999.00
470-0000-244.00-00 | -4999.00
490-0000-243.00-00 | 1000.00
490-0000-244.00-00 | -1000.00
```
This isn't even the half of what I really need to do (because the transaction data is split off into 4 different tables by transaction type that I will eventually need to JOIN on...but that's another task). But until I get this working I am at a dead stop.
This is the whole block of code:
```
select
right(trim(trailing from (concat('00',cast(T1."GMFUND" as Char(8))))),3)||'-'
||right(trim(trailing from (concat('00',cast(t1."GMDPT" as Char(8))))),2)
||right(trim(trailing from (concat('00',cast(T1."GMDIV" as Char(8))))),2) ||'-'
||right(trim(trailing from (concat('00',cast(T1."GMSTAB" as Char(8))))),2)
||right(trim(trailing from (concat('00',cast(T1."GMSTAS" as Char(8))))),1)||'.'
||right(trim(trailing from (concat('00',cast(T1."GMELM1" as Char(8))))),1)
||right(trim(trailing from (concat('00',cast(T1."GMELM2" as Char(8))))),1)||'-'
||right(trim(trailing from (concat('00',cast(T1."GMOBJ" as Char(8))))),2)
as AJAccount,
SUM(IFNULL(T1."GMDAMT",0)-IFNULL(T1."GMCAMT",0)) as Trans_Total
from "HTEDTA"."GM310AP" T1
where T1."GMAPYR" = 2014
group by 'AJAccount'
```
I thought maybe I typo'd or something, so I did the following easier query:
```
select
T1."GMFUND",
SUM(IFNULL(T1."GMDAMT",0)-IFNULL(T1."GMCAMT",0)) as Trans_Total
from "HTEDTA"."GM310AP" T1
where T1."GMAPYR" = 2014 and T1."GMAPMO" between 1 and 4
group by T1."GMFUND"
order by T1."GMFUND"
```
and it ran just fine, producing the following results:
```
1 | 20090901.49
111 | 32635.15
114 | 0.00
115 | 0.00
131 | 5916.66
```
So I suspect I am lacking knowledge about calculated fields and SUM(). Can someone enlighten me? | Am assuming the error is SQL0206 - Column or global variable AJAccount not found.
If true, the problem is that the GROUP BY clause is looking for a column in the database table, not a derived column in the SELECT clause. Try putting the entire derived column in your GROUP BY:
```
select...
where...
group by right(trim (trailing from (concat('00',cast(T1."GMFUND" as Char(8))))),3) || '-' || right(trim (trailing from (concat('00',cast(t1."GMDPT" as Char(8))))),2) || right(trim (trailing from (concat('00',cast(T1."GMDIV" as Char(8))))),2) || '-' || right(trim (trailing from (concat('00',cast(T1."GMSTAB" as Char(8))))),2) || right(trim (trailing from (concat('00',cast(T1."GMSTAS" as Char(8))))),1) || '.' || right(trim (trailing from (concat('00',cast(T1."GMELM1" as Char(8))))),1) ||
right(trim (trailing from (concat('00',cast(T1."GMELM2" as Char(8))))),1) || '-' || right(trim (trailing from (concat('00',cast(T1."GMOBJ" as Char(8))))),2)
order by 1
``` | A simplified example using a [common-table-expression](http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/db2/rbafzcomtexp.htm):
```
WITH gm310ap AS (SELECT
RIGHT('00'||DIGITS(gmfund),3) || '-' ||
RIGHT('0'||DIGITS(gmdpt),2) ||
RIGHT('0'||DIGITS(gmdiv),2) || '-' ||
RIGHT('0'||DIGITS(gmstab),2) ||
RIGHT(DIGITS(gmstas),1) || '.' ||
RIGHT(DIGITS(gmelm1),1) ||
RIGHT(DIGITS(gmelm2),1) || '-' ||
RIGHT('0'||DIGITS(gmobj),2) AS AJAccount,
COALESCE(gmdamt,0) - COALESCE(gmcamt,0) AS Trans_Total
FROM "HTEDTA"."GM310AP"
WHERE gmapyr = 2014)
SELECT AJAccount, SUM(Trans_Total)
FROM gm310ap
GROUP BY AJAccount
```
Documentation for the scalar functions:
* [COALESCE](http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/db2/rbafzscacoales.htm)
* [DEC](http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/db2/rbafzscadec.htm)
* [DIGITS](http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/db2/rbafzscadigits.htm)
* [RIGHT](http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/db2/rbafzscaright.htm) | Summing over a calculated field of concatenated strings (split acct #s) | [
"",
"sql",
"db2",
"excel-2007",
"ibm-midrange",
""
] |
I have two tables:
1. `subject` which holds a list of subjects and their credits
2. `exams` which shows which subjects the students failed or succeeded in the exams
I am trying to get a list of total credits earned for every student. The problem is that if a student failed all subjects he doesn't show up, because he's filtered out in the `WHERE` clause before the `GROUP BY`.
Here's the SQL:
```
CREATE TABLE IF NOT EXISTS subject (
name VARCHAR(50) NOT NULL,
credits INT NOT NULL
);
ALTER TABLE subject ADD PRIMARY KEY (name);
INSERT INTO subject(name, credits) VALUES('ALGEBRA', 100);
INSERT INTO subject(name, credits) VALUES('FRENCH' , 10);
CREATE TABLE IF NOT EXISTS exam (
student VARCHAR(50) NOT NULL,
subject VARCHAR(50) NOT NULL,
success BOOLEAN NOT NULL);
ALTER TABLE exam ADD PRIMARY KEY (student, subject);
ALTER TABLE exam ADD CONSTRAINT exam_2_subject FOREIGN KEY (subject) REFERENCES subject(name);
INSERT INTO exam(student, subject, success) VALUES('Bob', 'ALGEBRA', true);
INSERT INTO exam(student, subject, success) VALUES('Bob', 'FRENCH', false);
INSERT INTO exam(student, subject, success) VALUES('Mike', 'ALGEBRA', false);
INSERT INTO exam(student, subject, success) VALUES('Mike', 'FRENCH', false);
CREATE VIEW student_credits AS
SELECT a.student, SUM(b.credits) AS total_credits FROM
exam a INNER JOIN
subject b ON b.name = a.subject
WHERE a.success IS TRUE
GROUP BY a.student;
```
View `student_credits` only shows:
```
student total_credits
--------------------------
Bob 100
```
What's an idiomatic way to fix view `student_credits` to also include `Mike` who failed all subjects? | Try this
```
CREATE VIEW student_credits AS
SELECT a.student, SUM(CASE WHEN a.success then b.credits
else 0 end) AS total_credits FROM
exam a INNER JOIN
subject b ON b.name = a.subject
GROUP BY a.student;
``` | I did:
```
CREATE VIEW student_credits AS
SELECT a.student, SUM(b.credits) AS total_credits FROM
exam a INNER JOIN
subject b ON b.name = a.subject
WHERE a.success IS TRUE
GROUP BY a.student
UNION
SELECT DISTINCT STUDENT, 0 AS total_credits FROM exam a
WHERE NOT EXISTS (SELECT 1 FROM exam b WHERE b.student=a.student AND success IS TRUE);
```
But I think that's ugly. Plus this is just an SSCCE, the real case would be more complicated. | group by and SUM only rows satisfying a criterion, but show the others too | [
"",
"sql",
"postgresql",
"group-by",
""
] |
I'm developing an online shop application which includes a table named 'tbl\_items'. The primary key for each item is the 'item\_id' field.
Now, I want to add an option for each item posted on the shop to be associated with multiple pictures describing the item (unlimited amount of pictures per item), so I created another table called 'tbl\_item\_pictures' which includes two columns - 'item\_id' and the url of the picture (varchar with the size of 2083).
I believe this structure isn't the best; it might be because it's already late where I live and I just can't think of a better solution, but I'm kind of lost. I would really not like to leave the table without a primary key, nor do I want to assign a primary key to both of my fields.
Any ideas of what I can add/change in my current structure to make this work? | This is a very common design pattern, and putting both columns into a PK is the normal solution. If you **don't** do this you will potentially have multiple links from an item to the same picture.
There's nothing wrong with putting both columns into a PK for this.
**Update:**
to recap....
1 - Put your pictures into their own table, with an `ID` column and the `url`.
2 - In your linking table, use `tbl_itemID` and `pictureID`, and have them both be part of the PK for the lookup table. | You have 3 possibilities:
1.) Have no primary key. atm you seem not to need one.
2.) If `item_id` and `url` are unique together use both as primary key
3.) add a third column (like `picture_id`) and fill it manually or automatically from a sequence
Good luck! | Table Structure (Missing Primary Key) | [
"",
"sql",
""
] |
What's the difference between:
```
select t1.a1, t1.a2, t1.a3 from t1 cross join t2 where t1.a3 = t2.a1
```
and:
```
select t1.a1, t1.a2, t1.a3 from t1,t2 where t1.a3=t2.a1;
```
Can I use them interchangeably? | MySQL doesn't offer a distinction between `JOIN` and `CROSS JOIN`. They are the same.
In both your examples the clause
```
WHERE t1.a3 = t2.a1
```
converts any sort of join into an inner join. The standard way of expressing this query is
```
SELECT t1.a1, t1.a2, t1.a3
FROM t1
JOIN t2 ON t1.a3 = t2.a1
``` | SQL has the following types of joins, all of which come straight from set theory:
* **Inner join.**
`From A inner join B` is the equivalent of A ∩ B, providing the set of elements common to both sets.
* **Left outer join.**
`From A left outer join B` is the equivalent of (A − B) ∪ (A ∩ B). Each A will appear at least once; if there are multiple matching Bs, the A will be repeated once per matching B.
* **Right outer join.**
`From A right outer join B` is the equivalent of (A ∩ B) ∪ (B − A). It is identical to a left join with the tables trading places. Each B will appear at least once; if there are multiple matching As, each B will be repeated once per matching A.
* **Full outer join.**
`From A full outer join B` is the equivalent of (A − B) ∪ (A ∩ B) ∪ (B − A). Each A and each B will appear at least once. If an A matches multiple Bs it will be repeated once per match; if a B matches multiple As it will be repeated once per match.
* **Cross join.**
`From A cross join B` produces the *cartesian product* A × B. Each A will be repeated once for every B. If A has 100 rows and B has 100 rows, the result set will consist of 10,000 rows.
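The cardinalities above can be checked with a small SQLite session from Python (invented data):

```python
import sqlite3

# A cross join of a 3-row table with a 2-row table yields 3 x 2 = 6 rows;
# adding a matching condition turns it into an inner join and keeps only
# the common values.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (x INT);
    CREATE TABLE B (x INT);
    INSERT INTO A VALUES (1), (2), (3);
    INSERT INTO B VALUES (2), (3);
""")
cross = conn.execute("SELECT * FROM A CROSS JOIN B").fetchall()
inner = conn.execute(
    "SELECT A.x FROM A INNER JOIN B ON A.x = B.x ORDER BY A.x"
).fetchall()
```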
It should be noted that the theoretical execution of a `select` query consists of the following steps performed in this order:
1. Compute the full cartesian product of the source set(s) in the `from` clause to prime the candidate result set.
2. Apply the join criteria in the `from` clause and reduce the candidate result set.
3. Apply the criteria in the `where` clause to further reduce the candidate result set.
4. Partition the candidate result set into groups based on the criteria in the `group by` clause.
5. Remove from the candidate result set any columns other than those involved in the `group by` clause or involved in the evaluation of an aggregate function.
6. Compute the value of any such aggregate functions for each group in the candidate result set.
7. Collapse each group in the candidate result set into a single row consisting of the grouping columns and the computed values of each aggregate function. The candidate result set now consists of one row per group; all columns other than the `group by` columns and the computed aggregate values are eliminated.
8. Apply the criteria in the `having` clause to reduce the candidate result set and produce the final result set.
9. Order the final result set by the criteria in the `order by` clause and emit it.
There are more steps, having to do with things like `compute` and `compute by` clauses, but this is sufficient to get the theoretical notion of how it works.
It should also be noted that nothing but the most naΓ―ve implementation would actually evaluate a `select` statement this way, but the results produced must be the same as if the above steps were performed in full. | In SQL, what's the difference between JOIN and CROSS JOIN? | [
"",
"mysql",
"sql",
"join",
"cross-join",
""
] |
How can I INSERT INTO an **empty** field **without creating a new row**?
If I understand:
I cannot use `insert` into with a `where` clause:
```
INSERT INTO consulta (tema)
value('prova')
WHERE client=1;
```
I can `update` with a `where`, but I cannot do that if the field is empty:
```
UPDATE consulta
SET tema=prova
WHERE client=1;
```
So how should I insert in an existing row? | I think you're actually trying to update; INSERT INTO is always intended for creation, unless you use REPLACE INTO, which will overwrite your row ONLY IF an old row in the table has the same values as the new row for a PRIMARY KEY or a UNIQUE index (from the [MySQL 5 reference](http://dev.mysql.com/doc/refman/5.0/en/replace.html)). But I don't know which DBMS you're using.
From your update statement I see that prova isn't wrapped in quotes (''); maybe that is your problem.
You should use:
`UPDATE consulta
SET tema='prova' WHERE client=1;` | Do you want one of these?
```
UPDATE consulta
SET tema=prova
WHERE client is null;
```
or
```
UPDATE consulta
SET tema=prova
WHERE client = '';
```
"Empty" doesn't have a technical meaning in SQL, so it could be either. | Insert in an existing row | [
"",
"sql",
"insert",
"sql-update",
""
] |
This is a trivial example, but I am trying to understand how to think creatively using SQL.
For example, I have the following tables below, and I want to query the names of `folks` who have three or more `questions`. How can I do this without using `HAVING` or `COUNT`? I wonder if this is possible using `JOINS` or something similar?
FOLKS
```
folkID name
---------- --------------
01 Bill
02 Joe
03 Amy
04 Mike
05 Chris
06 Elizabeth
07 James
08 Ashley
```
QUESTION
```
folkID questionRating questionDate
---------- ---------- ----------
01 2 2011-01-22
01 4 2011-01-27
02 4
03 2 2011-01-20
03 4 2011-01-12
03 2 2011-01-30
04 3 2011-01-09
05 3 2011-01-27
05 2 2011-01-22
05 4
06 3 2011-01-15
06 5 2011-01-19
07 5 2011-01-20
08 3 2011-01-02
``` | Using `SUM` or `CASE` seems to be cheating to me!
I'm not sure if it's possible in your current formulation, but if you add a primary key to the question table (`questionid`) then the following seems to work:
```
SELECT DISTINCT Folks.folkid, Folks.name
FROM ((Folks
INNER JOIN Question AS Question_1 ON Folks.folkid = Question_1.folkid)
INNER JOIN Question AS Question_2 ON Folks.folkid = Question_2.folkid)
INNER JOIN Question AS Question_3 ON Folks.folkid = Question_3.folkid
WHERE (((Question_1.questionid) <> [Question_2].[questionid] And
(Question_1.questionid) <> [Question_3].[questionid]) AND
(Question_2.questionid) <> [Question_3].[questionid]);
```
Sorry, this is in MS Access SQL, but it should translate to any flavour of SQL.
Returns:
```
folkid name
3 Amy
5 Chris
```
**Update:** Just to explain why this works. Each join will return all the question ids asked by that person. The where clause then leaves only rows where all three question ids differ. If fewer than three questions were asked, there will be no such rows.
For example, Bill:
```
folkid name Question_3.questionid Question_1.questionid Question_2.questionid
1 Bill 1 1 1
1 Bill 1 1 2
1 Bill 1 2 1
1 Bill 1 2 2
1 Bill 2 1 1
1 Bill 2 1 2
1 Bill 2 2 1
1 Bill 2 2 2
```
There are no rows where all the ids are different.
however for Amy:
```
folkid name Question_3.questionid Question_1.questionid Question_2.questionid
3 Amy 4 4 5
3 Amy 4 4 4
3 Amy 4 4 6
3 Amy 4 5 4
3 Amy 4 5 5
3 Amy 4 5 6
3 Amy 4 6 4
3 Amy 4 6 5
3 Amy 4 6 6
3 Amy 5 4 4
3 Amy 5 4 5
3 Amy 5 4 6
3 Amy 5 5 4
3 Amy 5 5 5
3 Amy 5 5 6
3 Amy 5 6 4
3 Amy 5 6 5
3 Amy 5 6 6
3 Amy 6 4 4
3 Amy 6 4 5
3 Amy 6 4 6
3 Amy 6 5 4
3 Amy 6 5 5
3 Amy 6 5 6
3 Amy 6 6 4
3 Amy 6 6 5
3 Amy 6 6 6
```
There are several rows which have different ids and hence these get returned by the above query. | you can try sum , to replace count.
```
SELECT SUM(CASE WHEN Field_name >=3 THEN field_name ELSE 0 END)
FROM tabel_name
``` | SQL : How to find number of occurrences without using HAVING or COUNT? | [
"",
"sql",
""
] |
I mistyped a query today, but it still worked and gave the intended result. I meant to run this query:
```
SELECT e.id FROM employees e JOIN users u ON u.email=e.email WHERE u.id='139840'
```
but I accidentally ran this query
```
SELECT e.id FROM employees e JOIN users u ON u.email=e.email AND u.id='139840'
```
(note the `AND` instead of `WHERE` in the last clause)
and both returned the correct employee id from the user id.
What is the difference between these 2 queries? Does the second form only join members of the 2 tables meeting the criteria, whereas the first one would join the entire table, and then run the query? Is one more or less efficient than the other? Is it something else I am missing?
Thanks! | For inner joins like this they are logically equivalent. However, you can run in to situations where a condition in the join clause means something different than a condition in the where clause.
As a simple illustration, imagine you do a left join like so;
```
select x.id
from x
left join y
on x.id = y.id
;
```
Here we're taking all the rows from x, regardless of whether there is a matching id in y. Now let's say our join condition grows - we're not just looking for matches in y based on the id but also on id\_type.
```
select x.id
from x
left join y
on x.id = y.id
and y.id_type = 'some type'
;
```
Again this gives all the rows in x regardless of whether there is a matching (id, id\_type) in y.
This is very different, though:
```
select x.id
from x
left join y
on x.id = y.id
where y.id_type = 'some type'
;
```
In this situation, we're picking all the rows of x and trying to match to rows from y. Now for rows for which there is no match in y, y.id\_type will be null. Because of that, y.id\_type = 'some type' isn't satisfied, so those rows where there is no match are discarded, which effectively turned this in to an inner join.
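Here is a runnable sketch of the two behaviors, using SQLite through Python (toy tables matching the x/y example above):

```python
import sqlite3

# Toy data to contrast a filter in the ON clause vs. the WHERE clause
# of a LEFT JOIN (table and column names follow the example above).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (id INTEGER);
    CREATE TABLE y (id INTEGER, id_type TEXT);
    INSERT INTO x VALUES (1), (2);
    INSERT INTO y VALUES (1, 'some type');
""")

# Condition in ON: every row of x survives; unmatched rows get NULLs.
on_rows = conn.execute("""
    SELECT x.id FROM x
    LEFT JOIN y ON x.id = y.id AND y.id_type = 'some type'
""").fetchall()

# Condition in WHERE: the NULLs from unmatched rows fail the test,
# so the outer join quietly becomes an inner join.
where_rows = conn.execute("""
    SELECT x.id FROM x
    LEFT JOIN y ON x.id = y.id
    WHERE y.id_type = 'some type'
""").fetchall()

print(sorted(on_rows))  # [(1,), (2,)]
print(where_rows)       # [(1,)]
```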
Long story short: for inner joins it doesn't matter where the conditions go but for outer joins it can. | In the case of an INNER JOIN, the two queries are semantically the same, meaning they are guaranteed to have the same results. If you were using an OUTER join, the meaning of the two queries could be very different, with different results.
Performance-wise, I would *expect* that these two queries would result in the same execution plan. However, the query engine might surprise you. The only way to know is to view the execution plans for the two queries. | INNER JOIN condition in WHERE clause or ON clause? | [
"",
"mysql",
"sql",
"performance",
"inner-join",
""
] |
I am trying to fix an order problem for a column in a SQL table. The column has dates but with the type `varchar`, and they are in this format: `mm/dd/yy`. I did not make it this way, the person behind me did and left.
How do I change all the entries to be the format `yyyy/mm/dd`? Once I put them in this format I will make a new column of type `date` and I will convert all the values and put them in there. My question is, which command do I use to change these dates? They're all in the 2000s so I can just add "20" to all the year parts. | Do it like this, using [STR\_TO\_DATE](http://dev.mysql.com/doc/mysql/en/date-and-time-functions.html#function_str-to-date):
* Add a `date` column (called `newdate` for example)
* Execute this:
`UPDATE table SET newdate = STR_TO_DATE(olddate, '%m/%d/%y');`
* Test it
* Drop the old column and optionally rename the new one to suit.
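As a side note, Python's `strptime` uses the same `%y`/`%Y` directives as `STR_TO_DATE`, so the conversion can be sketched outside MySQL (sample value invented to match the `mm/dd/yy` format):

```python
from datetime import datetime

# MySQL's STR_TO_DATE isn't available here, but Python's strptime uses
# the same %y / %Y distinction this answer relies on.
s = "05/22/13"                        # mm/dd/yy, as stored in the varchar column
d = datetime.strptime(s, "%m/%d/%y")  # %y: two-digit year
print(d.date())  # 2013-05-22 (%Y expects a four-digit year here, so it would not fit)
```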
Make sure you use `%y` and not `%Y`: The lower-cased version works with 2-digit years. | You can do it with string manipulations:
```
update t
set col = concat('20', right(col, 2), '/', left(col, 5))
where col like '%/%/%'
``` | How do I correct date values in a SQL table? | [
"",
"mysql",
"sql",
"database",
"algorithm",
""
] |
I tried to search for this but maybe I'm looking for it incorrectly.
This is for an android app.
Essentially, I want to search the database based on some criteria (e.g. subcategory and item name), then use some of the other data values in the row to do a calculation (e.g. get the price value, and use it for a calculation)
I am thinking it is something like this query..
```
Cursor d = db.rawQuery("SELECT * FROM prices WHERE subcat = 'meat' AND item = 'lamb' ")
```
Now I would need to retrieve a column (like price) and use the values to calculate something like the average price.. How would I do that? (assuming step 1 is correct)... | You can use [aggregate functions](http://www.sqlite.org/lang_aggfunc.html) to compute a single value based on a column's value in multiple records:
```
SELECT AVG(price) FROM prices WHERE subcat = 'meat' AND item = 'lamb'
``` | After creating your cursor, get the index(es) of the column(s) you want to read. For example, if one of the columns returned by SELECT \* is price, the following gives the position of that column in each cursor row:
```
int PRICE_INDEX = d.getColumnIndexOrThrow("price");
```
Then you need to iterate over the result set. For each row, use the appropriate 'get' method to extract the values. Here I've assumed price is an INTEGER on the database, hence getInt():
```
while (d.moveToNext())
{
int price = d.getInt(PRICE_INDEX);
//Do whatever calculation you need with price here, or add to an array to process later
}
```
Remember to close the cursor when you are finished with it:
```
d.close();
```
If you can do the calculation within the SQL, then CL's answer is better. In that case, only one row is returned so no need for a loop - just use d.MoveToFirst() and then d.getInt(0). You can safely do getInt(0) because you are only returning a single column in the SELECT. | Search SQLite Database and use values from retrieved rows for calculation | [
"",
"android",
"sql",
"sqlite",
"search",
"calculated-columns",
""
] |
I need to get the extensions of filenames. Extensions could be any length (not just 3) and they could also be non-existent, in which case I need `null` returned. I know I could easily write a PL/SQL function that does this then just call that function in the query but I was hoping that I could somehow do it all inline. And I don't really care how long the solution is, what I need is the **fastest** solution. Speed matters because this will end up being ran against a very large table. This is what I have so far...
```
/*
The same method is being used in all 5 examples.
It works for all of them except the first one.
The first one I need to return null
*/
SELECT substr(filename,instr(filename,'.',-1)+1,length(filename)-instr(filename,'.',-1))
FROM (select 'no_extension_should_return_null' filename from dual);
--returns: no_extension_should_return_null
SELECT substr(filename,instr(filename,'.',-1)+1,length(filename)-instr(filename,'.',-1))
FROM (select 'another.test.1' filename from dual);
--returns: 1
SELECT substr(filename,instr(filename,'.',-1)+1,length(filename)-instr(filename,'.',-1))
FROM (select 'another.test.doc' filename from dual);
--returns: doc
SELECT substr(filename,instr(filename,'.',-1)+1,length(filename)-instr(filename,'.',-1))
FROM (select 'another.test.docx' filename from dual);
--returns: docx
SELECT substr(filename,instr(filename,'.',-1)+1,length(filename)-instr(filename,'.',-1))
FROM (select 'another.test.stupidlong' filename from dual);
--returns: stupidlong
```
So is there a fast way to accomplish this inline or should I just write this in a PL/SQL function?
This is what I'm working with...
```
select * from v$version;
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for 64-bit Windows: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
```
**UPDATE**
I'm moving this code into a function and will set up a test to call it a million times to see if the function slows it down; I'm thinking it won't make an impact since it's just string manipulation.
**UPDATE**
Thanks for the answers so far. I ended up making a PL/SQL function that does what I need...
```
create or replace function extrip(filename varchar2) return varchar2 as
begin
if ( instr(filename,'.',-1) = 0 ) then
return null;
end if;
return substr(filename,instr(filename,'.',-1)+1,length(filename)-instr(filename,'.',-1));
end;
```
I then ran two tests against a table with 2 million rows. When I viewed the explain plan for both they were 100% IDENTICAL. How could that be?
```
select regexp_substr(filename, '\.[^\.]*$') ext from testTable;
select extrip(filename) ext from testTable;
```
**UPDATE**
I added an `order by ext` to both of those, then reran the tests, and there was a difference. The regexp took 9sec and the function took 17sec. I guess without the order by TOAD was just returning the first X number of recs. So @Brian McGinity was right. I still need the regexp method to **NOT** return the dot "." though. | It will run fastest when done 100% sql, as you have.
The substr/instr are native compiled functions in oracle.
If you put this in a plsql function it will run slower due to context switching between sql and plsql:
This is slower due to context switching:
```
select extrip( filename ) from million_row_table
```
What you have is faster.
## Update:
try this:
```
select s,
substr(s, nullif( instr(s,'.', -1) +1, 1) )
from (
select 'no_extension_should_return_null' s from dual union
select 'another.test.1' from dual union
select 'another.test.doc' from dual union
select 'another.test.docx' from dual union
select 'another.test.stupidlng' from dual
)
``` | You need to use regular expressions.
Try
```
select regexp_substr(filename, '\.[^\.]*$')
from
(select 'no_extension_should_return_null' filename from dual);
```
I don't have an Oracle database to test this on but this should be pretty close.
Check the [Oracle docs on regexp\_substr](http://docs.oracle.com/cd/B12037_01/server.101/b10759/functions116.htm) and [Using regular expressions in Oracle database](http://docs.oracle.com/cd/B19306_01/appdev.102/b14251/adfns_regexp.htm) for more info.
## Update
To drop the period from the file extension:
```
select substr(regexp_substr(filename, '\.[^\.]*$'), 2)
from
(select 'abc.def' filename from dual);
``` | Oracle: Fastest Way to Extract Filename Extension Using SQL or PL/SQL | [
"",
"sql",
"oracle",
"plsql",
""
] |
I've had a SQL performance review done on a project we're working on, and one 'Critical' item that has come up is this:
> This kind of wildcard query pattern will cause a table scan, resulting
> in poor query performance.
```
SELECT *
FROM TabFoo
WHERE ColBar = @someparam OR @someparam IS NULL
```
Their recommendation is:
> In many cases, an OPTION (RECOMPILE) hint can be a quick workaround.
> From a design point of view, you can also consider using separate If
> clauses or (not recommended) use a dynamic SQL statement.
Dynamic SQL surely isn't the right way forward. Basically the procedure is one where I am searching for something, OR something else. Two parameters come into the procedure, and I am filtering on one, or the other.
A better example than what they showed is:
```
SELECT ..
FROM...
WHERE (ColA = @ParA OR @ColA IS NULL)
(AND ColB = @ParB OR @ParB IS NULL)
```
Is that bad practice, and besides dynamic SQL (because, I thought dynamic sql can't really compile and be more efficient in its execution plan?), how would this best be done? | A query like
```
select *
from foo
where foo.bar = @p OR @p is null
```
might or might not cause a table scan. My experience is that it will not: the optimizer is perfectly able to do an index seek on the expression `foo.bar = @p`, assuming a suitable index exists. Further, it's perfectly able to short-circuit things if the variable is null. You won't know what your execution plan looks like until you try it and examine the bound execution plan. A better technique, however, is this:
```
select *
from foo
where foo.bar = coalesce(@p,foo.bar)
```
which will give you the same behavior.
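For illustration, here is the same pattern sketched with SQLite through Python, where the `?` placeholder plays the role of `@p` (table and data are made up):

```python
import sqlite3

# Sketch of the coalesce() pattern above: a NULL parameter disables the
# filter, a non-NULL parameter applies it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (bar TEXT)")
conn.executemany("INSERT INTO foo VALUES (?)", [("a",), ("b",)])

q = "SELECT bar FROM foo WHERE bar = coalesce(?, bar)"
with_value = conn.execute(q, ("a",)).fetchall()   # filter applied
with_null = conn.execute(q, (None,)).fetchall()   # NULL disables the filter
print(with_value)          # [('a',)]
print(sorted(with_null))   # [('a',), ('b',)]
```

One caveat worth noting: this form also drops rows where the column itself is NULL, since `NULL = NULL` is never true.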
If you are using a stored procedure, one thing that can and will bite you in the tookus is something like this:
```
create procedure dbo.spFoo
@p varchar(32)
as
select *
from dbo.foo
where foo.bar = @p or @p is null
return @@rowcount
```
The direct use of the stored procedure parameter in the where clause will cause the cached execution plan to be based on the value of `@p` on its first execution. That means that if the first execution of your stored procedure has an outlier value for `@p`, you may get a cached execution plan that performs really poorly for the 95% of "normal" executions and really well only for the oddball cases. To prevent this from occurring, you want to do this:
```
create procedure dbo.spFoo
@p varchar(32)
as
declare @pMine varchar(32)
set @pMine = @p
select *
from dbo.foo
where foo.bar = @pMine or @pMine is null
return @@rowcount
```
That simple assignment of the parameter to a local variable makes it an expression and so the cached execution plan is not bound to the initial value of `@p`. Don't ask how I know this.
Further the recommendation you received:
> In many cases, an OPTION (RECOMPILE) hint can be a quick workaround.
> From a design point of view, you can also consider using separate
> If clauses or (not recommended) use a dynamic SQL statement.
is hogwash. `Option(recompile)` means that the stored procedure is recompiled **on every execution**. When the stored procedure is being compiled, compile-time locks are taken out on dependent objects. Further, nobody else is going to be able to execute the stored procedure until the compilation is completed. This has, shall we say, negative impact on concurrency and performance. Use of `option(recompile)` should be a measure of last resort.
Write clean SQL and vet your execution plans using production data, or as close as you can get to it: the execution plan you get is affected by the size and shape/distribution of the data. | I could be wrong, but I'm pretty sure a table scan will occur no matter what if the column you have in your `where` clause isn't indexed. Also, you could probably get better performance by reordering your `OR` clauses so that if `@ParA IS NULL` is true, it evaluates first and would not require evaluating the value in the column. Something to remember is that the `where` clause is evaluated for every row that comes back from the `from` clause. I would not recommend dynamic SQL, and honestly, even under relatively heavy load I'd find it difficult to believe that this form of filter would cause a significant performance hit, since a table scan is required anytime the column isn't indexed. | Wildcard Pattern usage | [
"",
"sql",
"sql-server",
""
] |
I have the query listed below; it is out from a PDO statement. Everything in the query works fine, apart from the collectionId = 3 part. It's returning results with other integers...
I've stared at this for a while and can't spot what is wrong, it all looks fine to me?
```
SELECT `Hat`.`id` AS `Hat_id` , `Hat`.`hatCode` AS `Hat_hatCode` , `Hat`.`hatCodeOther` AS `Hat_hatCodeOther` , `Hat`.`name` AS `Hat_name` , `Hat`.`description` AS `Hat_description` , `Hat`.`colorId` AS `Hat_colorId` , `Hat`.`collectionId` AS `Hat_collectionId` , `Hat`.`mainPicture` AS `Hat_mainPicture` , `Hat`.`subPicture` AS `Hat_subPicture` , `Hat`.`type` AS `Hat_type` , `Hat`.`featured` AS `Hat_featured` , `Hat`.`published` AS `Hat_published` , `Hat`.`deleted` AS `Hat_deleted`
FROM `modx_hats` AS `Hat`
WHERE (
`Hat`.`published` =1
AND `Hat`.`collectionId` = '3'
AND `Hat`.`colorId` LIKE '%||2||%'
OR `Hat`.`colorId` LIKE '2||%'
OR `Hat`.`colorId` LIKE '%||2'
OR `Hat`.`colorId` LIKE '2'
)
LIMIT 0 , 30
``` | Your OR conditions need to be in parentheses:
```
SELECT `Hat`.`id` AS `Hat_id` , `Hat`.`hatCode` AS `Hat_hatCode` , `Hat`.`hatCodeOther` AS `Hat_hatCodeOther` , `Hat`.`name` AS `Hat_name` , `Hat`.`description` AS `Hat_description` , `Hat`.`colorId` AS `Hat_colorId` , `Hat`.`collectionId` AS `Hat_collectionId` , `Hat`.`mainPicture` AS `Hat_mainPicture` , `Hat`.`subPicture` AS `Hat_subPicture` , `Hat`.`type` AS `Hat_type` , `Hat`.`featured` AS `Hat_featured` , `Hat`.`published` AS `Hat_published` , `Hat`.`deleted` AS `Hat_deleted`
FROM `modx_hats` AS `Hat`
WHERE (
`Hat`.`published` =1
AND `Hat`.`collectionId` = '3'
AND (`Hat`.`colorId` LIKE '%||2||%'
OR `Hat`.`colorId` LIKE '2||%'
OR `Hat`.`colorId` LIKE '%||2'
OR `Hat`.`colorId` LIKE '2')
)
LIMIT 0 , 30
``` | Can you try this, Added `()` for OR grouping
```
WHERE (
`Hat`.`published` =1
AND `Hat`.`collectionId` = '3'
AND ( `Hat`.`colorId` LIKE '%||2||%'
OR `Hat`.`colorId` LIKE '2||%'
OR `Hat`.`colorId` LIKE '%||2'
OR `Hat`.`colorId` LIKE '2'
)
)
```
Ref: [SQL Query multiple AND and OR's not working](https://stackoverflow.com/questions/21096134/sql-query-multiple-and-and-ors-not-working/21096210#21096210) | mysql query not filtering AND as expected | [
"",
"mysql",
"sql",
""
] |
Here's my sql statement:
```
SELECT
tA.a1, GROUP_CONCAT(tB.b2) AS b2
FROM
tableA tA
LEFT JOIN
tableB tB ON tA.a2 = tB.b1
WHERE
CONCAT(tA.a1, b2) LIKE '%somestring%'
GROUP BY tA.a1;
```
I get an sql error saying something along the lines of "unknown column name b2 in WHERE". | You can't use aliases in `WHERE` clause - but in your case that's even senseless, because `WHERE` applies filter to rows that **will be grouped** while `GROUP_CONCAT()` collects rows that are **already grouped**
You may do that, for example, with subquery:
```
SELECT *
FROM
(SELECT
tA.a1 AS ta1, GROUP_CONCAT(tB.b2) AS b2
FROM
tableA tA
LEFT JOIN
tableB tB ON tA.a2 = tB.b1
GROUP BY tA.a1) AS grouped
WHERE
CONCAT(ta1, grouped.b2) LIKE '%somestring%'
``` | ```
SELECT
tA.a1, GROUP_CONCAT(tB.b2) AS b2
FROM
tableA tA
LEFT JOIN
tableB tB ON tA.a2 = tB.b1
GROUP BY tA.a1
HAVING
CONCAT(tA.a1, b2) LIKE '%somestring%';
``` | SQL GROUP_CONCAT alias in WHERE | [
"",
"mysql",
"sql",
""
] |
In a table within my database there is a varchar2 column that mostly consists of numbers. Now I want to receive the NUMERICAL maximum of that column, ignoring non-numerical values.
How do I do that?
Thank you. | ```
SQL> select * from t;
X
----------
1234
836
836AA%%$$
BBcdfrg
12099
SQL> select max(to_number(x)) from t
2 where trim(translate(x,'0123456789',' ')) is null
3 /
MAX(TO_NUMBER(X))
-----------------
12099
``` | There are so many traps and exceptions when you try to convert a string into a number. I use a function like this:
```
FUNCTION Is_Numeric(Expression IN VARCHAR2) RETURN INTEGER IS
BEGIN
IF TO_NUMBER(Expression) IS NOT NULL THEN
RETURN 1;
ELSE
RETURN 0;
END IF;
EXCEPTION
WHEN VALUE_ERROR OR INVALID_NUMBER THEN
RETURN 0;
END Is_Numeric;
```
Then you can use it in any WHERE condition. | Oracle-SQL Query to obtain the maximum value of a column that has been converted from String to Integer | [
"",
"sql",
"oracle",
""
] |
The table name is "OrderDetails" and columns are given below:
```
OrderDetailID || ProductID || ProductName || OrderQuantity
```
I'm trying to select multiple columns and Group By ProductID while having SUM of OrderQuantity.
```
Select ProductID,ProductName,OrderQuantity Sum(OrderQuantity)
from OrderDetails Group By ProductID
```
But of course this code gives an error. I have to add other column names to group by, but that's not what I want and since my data has many items so **results are unexpected that way.**
Sample Data Query:
ProductID,ProductName,OrderQuantity from OrderDetails
Results are below:
```
ProductID ProductName OrderQuantity
1001 abc 5
1002 abc 23 (ProductNames can be same)
2002 xyz 8
3004 ytp 15
4001 aze 19
1001 abc 7 (2nd row of same ProductID)
```
Expected result:
```
ProductID ProductName OrderQuantity
1001 abc 12 (group by productID while summing)
1002 abc 23
2002 xyz 8
3004 ytp 15
4001 aze 19
```
How do I select multiple columns and Group By ProductID column since ProductName is not unique?
While doing that, also get the sum of the OrderQuantity column. | I use this trick to group by one column when I have a multiple columns selection:
```
SELECT MAX(id) AS id,
Nume,
MAX(intrare) AS intrare,
MAX(iesire) AS iesire,
MAX(intrare-iesire) AS stoc,
MAX(data) AS data
FROM Produse
GROUP BY Nume
ORDER BY Nume
```
This works. | I just wanted to add a more effective and generic way to solve this kind of problems.
The main idea is about working with sub queries.
Do your group by and join the same table on the ID of the table.
Your case is more specific: since your productId is **not unique**, there are two ways to solve this.
I will begin with the more specific solution:
Since your productId is **not unique**, we will need an extra step, which is to select `DISTINCT` product ids after grouping and doing the sub query, like the following:
```
WITH CTE_TEST AS (SELECT productId, SUM(OrderQuantity) Total
FROM OrderDetails
GROUP BY productId)
SELECT DISTINCT(OrderDetails.ProductID), OrderDetails.ProductName, CTE_TEST.Total
FROM OrderDetails
INNER JOIN CTE_TEST ON CTE_TEST.ProductID = OrderDetails.ProductID
```
this returns exactly what is expected
```
ProductID ProductName Total
1001 abc 12
1002 abc 23
2002 xyz 8
3004 ytp 15
4001 aze 19
```
**But** there is a cleaner way to do this. I guess that `ProductId` is a foreign key to the products table, and I guess that there should be an `OrderId` **primary key** (unique) in this table.
In this case there are a few steps to include extra columns while grouping on only one. It will be the same solution as the following.
Let's take this `t_Value` table for example:
[](https://i.stack.imgur.com/9SCl5.png)
If I want to group by description and also display all columns,
all I have to do is:
1. create `WITH CTE_Name` subquery with your GroupBy column and COUNT condition
2. select all(or whatever you want to display) from value table and the total from the CTE
3. `INNER JOIN` with CTE on the ID(***primary key or unique constraint***) column
and that's it!
Here is the query
```
WITH CTE_TEST AS (SELECT Description, MAX(Id) specID, COUNT(Description) quantity
FROM sch_dta.t_value
GROUP BY Description)
SELECT sch_dta.t_Value.*, CTE_TEST.quantity
FROM sch_dta.t_Value
INNER JOIN CTE_TEST ON CTE_TEST.specID = sch_dta.t_Value.Id
```
And here is the result:
[](https://i.stack.imgur.com/HtIlC.png) | Select multiple columns from a table, but group by one | [
"",
"sql",
"group-by",
""
] |
I am trying to write a SQL SELECT statement to find out the most recently hired employees in EACH department. I am working on Oracle database and am trying to display the department name, employee name, hire date, and salary. The table definition for employee and department tables are below
Department Table
```
CREATE TABLE dpt
( DEPARTMENT_ID NUMBER(4) PRIMARY KEY,
DEPARTMENT_NAME VARCHAR2(20) NOT NULL,
ADDRESS VARCHAR2(20) NOT NULL);
```
Example Data
```
(10, 'ACCOUNTING', 'NEW YORK');
(20, 'RESEARCH', 'DALLAS');
(30, 'SALES', 'CHICAGO');
```
Employee Table
```
CREATE TABLE emp
( EMPLOYEE_ID NUMBER(4) PRIMARY KEY,
EMPLOYEE_NAME VARCHAR2(20) NOT NULL,
JOB VARCHAR2(50) NOT NULL,
MANAGER_ID NUMBER(4),HIRE_DATE DATE,
SALARY NUMBER(9, 2),
COMMISSION NUMBER(9, 2),
DEPARTMENT_ID NUMBER(4) REFERENCES dpt(DEPARTMENT_ID));
```
Example Data
```
(7839, 'KING', 'PRESIDENT', NULL, '20-NOV-01', 5000, NULL, 50);
(7596, 'JOST', 'VICE PRESIDENT', 7839, '04-MAY-01', 4500, NULL, 50);
(7603, 'CLARK', 'VICE PRESIDENT', 7839, '12-JUN-01', 4000, NULL, 50);
```
The query that I have written outputs the most recently hired employee in only one department. Can someone please point out what I am doing wrong? I tried using GROUP BY but am not able to do it the right way.
```
SELECT dpt.department_name, dpt.department_id, employee_name, hire_date, Salary
from dpt, emp
where emp.department_id = dpt.department_id
and
hire_date = (select max(hire_date) from emp where department_id = emp.department_id)
order by dpt.department_name;
``` | You can do it by join
```
SELECT dpt.department_name, dpt.department_id, employee_name, hire_date, Salary
from dpt inner join emp on emp.department_id = dpt.department_id inner join
(select emp.department_id, max(hire_date) as datemax from emp) x on emp.department_id=x.department_id and emp.hire_date =x.datemax
order by dpt.department_name;
``` | You are missing a table alias on the `where` clause in the correlated subquery:
```
SELECT dpt.department_name, dpt.department_id, employee_name, hire_date, Salary
from dpt join
emp
on emp.department_id = dpt.department_id
where hire_date = (select max(hire_date) from emp emp2 where emp.department_id = emp2.department_id)
order by dpt.department_name;
```
Basically, the condition `where department_id = emp.department_id` fetches the column twice from the table in the subquery. It is not correlated to the outer query. By using proper table aliases, you get the correlation.
I also changed the `join` syntax to use the explicit form of joins. | SQL query _ not able to find recently hired employee is each department | [
"",
"sql",
"oracle",
""
] |
I have 2 tables:
* groups
* contacts
groups table's fields are:
* group\_id
* group\_name
contacts table fields are:
* contact\_group\_id
* contact\_id
* contact\_name
Now, I want to select all groups with count of group's contacts...
For e.g.:
groupname contacts count
friend 12 |
school 8 |
ennemy 0 |
family 25 |
I want all groups displayed (including groups that do not have any contacts).
thanks a lot | you need to do left join and then group by. Do following
```
select a.group_name, count(b.contact_id) from
groups a left join contacts b on
a.group_id = b.contact_group_id
group by a.group_name
```
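The reason for `count(b.contact_id)` rather than `count(*)` is that groups with no contacts come back from the left join as a row of NULLs, and `COUNT(column)` skips NULLs, giving 0. A small sketch of this using SQLite through Python (shortened, invented table names):

```python
import sqlite3

# COUNT(column) vs the LEFT JOIN's NULL rows: empty groups count as 0.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grp (group_id INTEGER, group_name TEXT);
    CREATE TABLE contacts (contact_group_id INTEGER, contact_id INTEGER);
    INSERT INTO grp VALUES (1, 'friend'), (2, 'ennemy');
    INSERT INTO contacts VALUES (1, 10), (1, 11);
""")
rows = conn.execute("""
    SELECT g.group_name, COUNT(c.contact_id)
    FROM grp g LEFT JOIN contacts c ON g.group_id = c.contact_group_id
    GROUP BY g.group_name
    ORDER BY g.group_name
""").fetchall()
print(rows)  # [('ennemy', 0), ('friend', 2)]
```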
See [Fiddle](http://sqlfiddle.com/#!6/3f98a/3) | Create a view containing the number of contacts per group and then outer join that view with your groups table.
```
CREATE VIEW contactscount AS
SELECT contact_group_id, COUNT(contact_id) AS count FROM contacts GROUP BY contact_group_id
SELECT * FROM groups OUTER JOIN contactscount ON contactscount.contact_group_id=groups.group_id
```
This selects Null for count of groups with no contacts. You could omit the view. | how do select all rows from table groups and count of contacts for each groups in sql | [
"",
"sql",
""
] |
I'm having difficulty finding the answer for this in my searching. This is a sample table:
```
1 | Bob Smith |
2 | Mary Jones |
3 | Paul Bunyon |
4 | Bob Smith |
5 | Bob Smith |
6 | Mary Jones |
7 | Tim Gunn |
```
I'd like to select a distinct amount of names BUT, I need the count of number of times that name is in the records displayed after returned results. So returned results should look like this:
```
Bob Smith (3)
Mary Jone (2)
Paul Bunyon
Tim Gunn
```
I am using SQL SERVER 2012 | `Group By` + `Count(*)` , you need to cast the `int` to `varchar`:
```
SELECT NameCount = Name + CASE WHEN COUNT(*) = 1 THEN ''
ELSE ' (' + CAST(Count(*) as varchar(10)) + ')' END
FROM dbo.Table1
GROUP BY Name
```
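For what it's worth, the same `CASE` trick can be sketched in SQLite through Python, where `||` concatenates and the count is coerced to text automatically, so no explicit CAST is needed (toy data):

```python
import sqlite3

# Conditional "(n)" suffix per group; names are invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)",
                 [("Bob Smith",), ("Bob Smith",), ("Tim Gunn",)])
rows = conn.execute("""
    SELECT name || CASE WHEN COUNT(*) = 1 THEN ''
                        ELSE ' (' || COUNT(*) || ')' END
    FROM people GROUP BY name ORDER BY name
""").fetchall()
print(rows)  # [('Bob Smith (2)',), ('Tim Gunn',)]
```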
`Demo` | You want the number to be in parentheses *conditionally* when it is greater than 1:
```
select (case when count(*) = 1 then Name
else concat(Name, ' (', Count(*), ')')
end)
from t
group by Name;
```
SQL Server 2012 supports the ANSI standard `concat()` function. And it takes numeric arguments, so we don't have to explicitly cast numbers to strings before concatenating. | Returning count after name if more than one record | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am trying to write a query in sql server:
table:
```
SELECT [Id]
,[BrokerId]
,[UserId]
FROM [UserBroker]
```
values I want to grab:
```
Id BrokerId UserId
21288 252055 223177
21291 252055 223178
```
attempt at query:
I want to grab all records that have duplicate userids to one brokerid. I tried this query but it is incorrect:
```
SELECT
[BrokerId]
,[UserId]
FROM [UserBroker]
group by [BrokerId],[UserId]
having count([BrokerId]) = 1 and count([UserId]) > 1
``` | ```
Select UB.ID, UB.BrokerID, UB.userId
FROM userBroker ub
INNER JOIN userBroker mUB
on Ub.BrokerID = mub.BrokerID
and Ub.userID <> mub.UserID
GROUP BY UB.ID, UB.BrokerID, UB.userId
ORDER BY UB.BrokerID, UB.userId, UB.Id
```
Returns the ID, broker and user IDs where different user IDs exist for the same broker. Note the group by is used in case there were duplicates in the data (the same brokerID and userId existing for different IDs).
Order by used to make the duplicates easier to find; they should be right after eachother. | ```
WITH t AS (
SELECT *,
COUNT(*) OVER(PARTITION BY [BrokerId], [UserId]) c
FROM [UserBroker]
)
SELECT [Id], [BrokerId], [UserId]
FROM t
WHERE c > 1
``` | sql grab duplicate record with same second column | [
"",
"sql",
"sql-server",
""
] |
What is the difference between `ColumnName = Null` and `ColumnName Is Null` in SQL Server? I have researched about this but I didn't get any clarification.
Actually I want to retrieve column has null value
```
Select * from Table1 where column1 is null
```
This query returning expected result but the following query not giving
```
Select * from Table1 where column1 = null
``` | Comparisons with null using normal operators are never true.
That's basically because null is "unknown" so it can't be determined whether it is or is not some value.
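A minimal illustration of this three-valued logic, sketched with Python's sqlite3 (SQLite follows the same rule as SQL Server's default `ANSI_NULLS` behaviour):

```python
import sqlite3

# '=' against NULL yields unknown (NULL); IS NULL yields true (1).
conn = sqlite3.connect(":memory:")
eq_null, is_null = conn.execute("SELECT NULL = NULL, NULL IS NULL").fetchone()
conn.close()
# eq_null is None (unknown), is_null is 1 (true)
```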
The special syntax `is null` and `is not null` must be used to determine nullness. | Using [comparison](http://technet.microsoft.com/en-us/library/ms188074.aspx) operators on `NULL` will result in an `Unknown` result.
Take a look at the following example:
```
Declare @temp table
(
val1 int,
val2 int
)
INSERT INTO @temp
VALUES (1,NULL),
(NULL,1),
(NULL,NULL)
SELECT CASE
WHEN val1 > val2 THEN 'Val 1 greater'
WHEN val2 > val1 THEN 'val 1 smaller'
WHEN val1 = val2 THEN 'Val1 is equal to val 2'
WHEN val1 = NULL THEN 'val1 = NULL'
WHEN val1 IS NULL THEN 'val1 IS NULL'
ELSE 'Unknown'
END result
FROM @temp
```
this will result:
```
Unknown
val1 IS NULL
val1 IS NULL
```
Please note that changing the `ANSI_NULLS` to `OFF`, will change the behaviour of the `=` operator, and will return `TRUE` for `NULL = NULL`
Any way, IMHO (and I think the prevailing view is), The best practice is using [IS NULL](http://technet.microsoft.com/en-us/library/ms188795.aspx) | Difference between ColumnName = Null and ColumnName Is Null in SQL Server | [
"",
"sql",
"sql-server",
""
] |
here is my query string:
```
SELECT
    Payment
    ,Balance
    ,PatientNo
FROM
    [GP_DB].[dbo].[GP]
where GP.GPDate = (SELECT CONVERT(VARCHAR(24), @GPDate, 103))
```
GPDate is a Date type column, not DateTime
and I pass the parameter like this:
```
cmd_select_treatment.Parameters.AddWithValue(
"@GPDate"
,Convert.ToDateTime(dateTimePicker1.Value));
```
but the following error occurs:
Conversion failed when converting date and/or time from character string. | If `@var` is a `DateTime`:
`select cast(convert(varchar(8), @var, 112) as Date)` | > GPDate is a Date type column
If it's a `DATE` column and your `Convert.ToDateTime` call returns a `DateTime` object then don't bother yourself with the `CAST`.
```
WHERE GP.GPDate = @GPDate
```
If `dateTimePicker1` might contain a time component and you don't care for that then simply discard it before using its value:
```
Convert.ToDateTime(dateTimePicker1.Value).Date
``` | how to solve Passing DateTime value to paramaterized query Error | [
"",
"sql",
"datetime",
"datetimepicker",
"sql-convert",
""
] |
I have a database as such
```
Update Record_ID Index Location
1 1 23 China
1 1 24 Beijing
1 1 45 Norway
1 1 23 China
2 1 423 Somne
2 1 24 Beijing
2 1 243 Nevela
3 1 334 DEro
3 1 555 Mood
```
I have tried other examples on this site, but they don't solve my situation, so please don't be quick to tag this as a duplicate.
**In this scenario I have only record 1, but there could be thousands.**
So I need to get the rows with the latest Update value for each unique record. In this case, something like this should be returned for record 1:
```
3 1 334 DEro
3 1 555 Mood
```
and if there are records 2, 3, etc., they will be included here as well.
I tried using "LIMIT" but my query returned only 1 row in this case.
```
3 1 555 Mood
```
**PS: There is an autogenerated PK column for this table which I didn't include.**
**[Online Demo](http://sqlfiddle.com/#!2/8e7fb/1)**. | ```
SELECT * FROM table t1
JOIN (SELECT Record_ID ,MAX(Updates)as maxupdate FROM table GROUP BY Record_ID)x
ON t1.Record_ID = x.Record_ID AND t1.Updates=x.maxupdate
```
[Fiddle](http://sqlfiddle.com/#!2/bfcdb/1)
It also works for multiple Record\_id values. | Try this:
```
SELECT * FROM Table_Name
WHERE `Update` = (SELECT TOP 1 `Update` FROM Table_Name ORDER BY ID DESC)
``` | Get last record in mysql table | [
"",
"mysql",
"sql",
"database",
""
] |
How can I create a SQL select query that returns a variable as true (1) if a column value exists in a table, and false (0) otherwise?
I want to use this variable in my scripts so that
if variable=1
execute A
ELSE
execute B | In Oracle you can use something like this
```
SELECT
CASE nvl(length(column1), 0)
WHEN 0 THEN 0
ELSE 1
END AS column1
FROM yourTableName;
``` | You can try this...
```
IF ((SELECT COUNT(*) FROM TableName WHERE FieldName = 'Whatever') <> 0)
-- Execute something if column value exists
ELSE
-- Execute something if column value does not exist
``` | SQL query with variable ,IF & select staement | [
"",
"mysql",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I have a relational database in SQL Server which I use to store Products, Competitor Companies and Competitor Prices. I regularly add new records to the Competitor Prices table rather than updating existing records so I can track prices changes over time.
I want to build a query which, given a particular product, finds the most recent price from each of the competitors. It is possible that a competitor doesn't have a price recorded.
Data Example
*tblCompetitorPrices*
```
+-----+----------+-------------+-----+----------+
|cp_id|product_id|competitor_id|price|date_added|
+-----+----------+-------------+-----+----------+
|1 |1 |3 |70.00|15-01-2014|
+-----+----------+-------------+-----+----------+
|2 |1 |4 |65.10|15-01-2014|
+-----+----------+-------------+-----+----------+
|3 |2 |3 |15.20|15-01-2014|
+-----+----------+-------------+-----+----------+
|4 |1 |3 |62.30|19-01-2014|
+-----+----------+-------------+-----+----------+
```
And I want the query to return...
```
+-----+----------+-------------+-----+----------+
|cp_id|product_id|competitor_id|price|date_added|
+-----+----------+-------------+-----+----------+
|4 |1 |3 |62.30|19-01-2014|
+-----+----------+-------------+-----+----------+
|2 |1 |4 |65.10|15-01-2014|
+-----+----------+-------------+-----+----------+
```
I can currently access all the prices for the product, but I'm not able to filter the results so that only the most recent price for each competitor is shown. I'm really unsure; here is what I have so far:
```
SELECT cp_id, product_id, competitor_id, price, date_added
FROM tblCompetitorPrices
WHERE product_id = '1'
ORDER BY date_added DESC
```
Thanks for any help! | As an alternative, you can also use `ROW_NUMBER()`, which is a window function that generates sequential numbers.
```
SELECT cp_id,
product_id,
competitor_id,
price,
date_added
FROM (
SELECT cp_id,
product_id,
competitor_id,
price,
date_added,
ROW_NUMBER() OVER (PARTITION BY competitor_id
ORDER BY date_added DESC) rn
FROM tblCompetitorPrices
WHERE product_ID = 1
) a
WHERE a.rn = 1
```
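The ranking step above can be sketched in-memory with Python's sqlite3 (illustrative only; requires SQLite 3.25+ for window functions, and the data is abbreviated from the question):

```python
import sqlite3

# Rank each competitor's prices newest-first, then keep rank 1.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE prices
                (cp_id INT, product_id INT, competitor_id INT,
                 price REAL, date_added TEXT)""")
conn.executemany("INSERT INTO prices VALUES (?,?,?,?,?)", [
    (1, 1, 3, 70.00, '2014-01-15'),
    (2, 1, 4, 65.10, '2014-01-15'),
    (4, 1, 3, 62.30, '2014-01-19'),
])
rows = conn.execute("""
    SELECT cp_id, competitor_id, price FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY competitor_id
                                     ORDER BY date_added DESC) rn
        FROM prices WHERE product_id = 1
    ) WHERE rn = 1 ORDER BY cp_id
""").fetchall()
conn.close()
# one latest row per competitor: [(2, 4, 65.1), (4, 3, 62.3)]
```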
This query can easily be modified to return the latest record for each competitor across every product. | Try this:
```
SELECT cp_id, product_id, competitor_id, price, date_added
FROM tblCompetitorPrices
WHERE product_id = '1' AND date_added=( SELECT MAX(date_added)
FROM tblCompetitorPrices
WHERE product_id = '1')
ORDER BY date_added DESC
``` | Confusing SQL Query, Group By? Having? | [
"",
"sql",
"sql-server",
"rdbms",
""
] |
I've looked at many SO questions related to this, but I can't seem to get anything to work. I'm not very good with semi-complex SQL queries.
I want to get the difference between the current time and a column that is in unix timestamp in hours.
I'm not sure what I'm doing wrong or right for that matter. The goal is to only pull the rows that is less than 24 hours old. If there is a better way or example that **works** that would be great.
I tried several answers from here [Timestamp Difference In Hours for PostgreSQL](https://stackoverflow.com/questions/1964544/timestamp-difference-in-hours-for-postgresql)
I can't get this query to work no matter how many different ways I try it. `wc.posted` is a `bigint` stored as a Unix timestamp.
```
SELECT w.wal_id, wc.com_id, w.posted AS post_time, wc.posted AS com_time
FROM wall as w LEFT JOIN wall_comments as wc ON w.wal_id=wc.wal_id
WHERE (EXTRACT(EPOCH FROM wc.posted)) > current_timestamp - interval '24 hours'
```
Then the **Error:**
```
ERROR: function pg_catalog.date_part(unknown, bigint) does not exist
LINE 1: ... wall_comments as wc ON w.wal_id=wc.wal_id WHERE (EXTRACT(EP...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
********** Error **********
ERROR: function pg_catalog.date_part(unknown, bigint) does not exist
SQL state: 42883
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Character: 148
```
Here is a simplified [fiddle](http://sqlfiddle.com/#!1/8a68a/3) | From the [fine manual](http://www.postgresql.org/docs/current/interactive/functions-formatting.html):
> A single-argument `to_timestamp` function is also available; it accepts a `double precision` argument and converts from Unix epoch (seconds since 1970-01-01 00:00:00+00) to `timestamp with time zone`. (`Integer` Unix epochs are implicitly cast to `double precision`.)
So to convert your `bigint` seconds-since-epoch to a `timestamptz`:
```
to_timestamp(wc.posted)
```
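For intuition, the same epoch-to-timestamp idea in plain Python (illustrative only; the sample epoch value is made up):

```python
from datetime import datetime, timedelta, timezone

# A bigint seconds-since-epoch value converts to a timezone-aware
# timestamp, which can then be compared against "now minus 24 hours".
posted = 1389024000  # example Unix epoch (2014-01-06 16:00:00 UTC)
ts = datetime.fromtimestamp(posted, tz=timezone.utc)
recent = ts > datetime.now(tz=timezone.utc) - timedelta(hours=24)
# ts falls on 2014-01-06; `recent` is False for this old timestamp
```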
Perhaps you're looking for this:
```
WHERE to_timestamp(wc.posted) > current_timestamp - interval '24 hours'
``` | Try:
```
SELECT EXTRACT(EPOCH FROM (timestamp_B - timestamo_A))
FROM TableA
```
Details here: [EXTRACT](http://www.postgresql.org/docs/current/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT). | PostgreSQL get time difference between timestamps | [
"",
"sql",
"postgresql",
"timestamp",
""
] |
```
+-------+-----------+------+----------------------+----------------------+
|RATE_ID|DESCRIPTION|CHARGE|FROM_DATE |TO_DATE |
+-------+-----------+------+----------------------+----------------------+
|1 |small |100 |01/01/2014 12:00:00 AM|31/03/2014 12:00:00 AM|
+-------+-----------+------+----------------------+----------------------+
|2 |mediam |200 |01/04/2014 12:00:00 AM|04/04/2014 12:00:00 AM|
+-------+-----------+------+----------------------+----------------------+
|3 |big |300 |05/04/2014 12:00:00 AM|31/12/2014 12:00:00 AM|
+-------+-----------+------+----------------------+----------------------+
```
Let the above be a sample table of charges within date ranges. I will have an input like
`start_date = to_date('30/mar/2014','dd/mon/yyyy')` and
`end_date = to_date('05/apr/2014','dd/mon/yyyy')`.
So the input range includes two days of charge 100 (rate\_id = 1), four days of charge 200, and one day of charge 300, for a total of `1300`.
Is there any simple method to find the number of days falling within each rate's range so that I can compute the total\_charge? Currently I am doing this in PL/SQL, using a `loop` to test each day.
As a whole :
from the input,
```
30 and 31st march belongs to small(100 charge) => 100* 2 = 200
1, 2, 3, 4 of april belongs to medium( 200 charge) => 200*4 = 800
5th april belongs to big ( 300 charge) => 300*1 = 300
so
the total:= 200 + 800 + 300 = 1300
```
Thanks in advance. | You can use CONNECT BY to generate the list of days between end and start date and join this list with the list of rates:
```
with v_days as (
SELECT TRUNC (to_date('2014-04-05', 'YYYY-MM-DD') - ROWNUM + 1) dt
FROM DUAL
CONNECT BY ROWNUM <= (to_date('2014-04-05', 'YYYY-MM-DD') + 1 -
to_date('2014-03-30', 'YYYY-MM-DD'))
),
v_rates as (
select 1 rate_id, 'small' rate, 100 charge,
to_date('2014-01-01', 'YYYY-MM-DD') start_date,
to_date('2014-03-31', 'YYYY-MM-DD') end_date
from dual union all
select 2 rate_id, 'medium' rate, 200 charge,
to_date('2014-04-01', 'YYYY-MM-DD') start_date,
to_date('2014-04-04', 'YYYY-MM-DD') end_date from dual
union all
select 3 rate_id, 'big' rate, 300 charge,
to_date('2014-04-05', 'YYYY-MM-DD') start_date,
to_date('2014-12-31', 'YYYY-MM-DD') end_date from dual
)
select sum(charge) as total_charge from (
select d.*, r.* from v_days d
join v_rates r on d.dt >= r.start_date and d.dt <= r.end_date
order by d.dt
)
```
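The expected total can be cross-checked in plain Python before trusting the SQL (illustrative sketch using the question's rates):

```python
from datetime import date, timedelta

# Walk each day of the input range and add the charge of the rate
# whose [start_date, end_date] window contains it.
rates = [
    (100, date(2014, 1, 1),  date(2014, 3, 31)),   # small
    (200, date(2014, 4, 1),  date(2014, 4, 4)),    # medium
    (300, date(2014, 4, 5),  date(2014, 12, 31)),  # big
]
start, end = date(2014, 3, 30), date(2014, 4, 5)
total = 0
d = start
while d <= end:
    total += next(c for c, s, e in rates if s <= d <= e)
    d += timedelta(days=1)
# total == 1300  (2*100 + 4*200 + 1*300)
```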
Explanation:
* v\_days generates the list of days between the start and end date (one row per day)
* v\_rates simply contains the rates you've given
* we then join these two subqueries - a given date belongs to a rate if it is between the start and end date of the rate
* at last, we just sum the charges to get the total charge | The DATEDIFF() function returns the time between two dates.
```
SELECT DATEDIFF(day,'01-01-2014','31-03-2014') AS DiffDate
```
And consider the date format to get the correct result.
Please refer to the link below for further information:
<http://www.w3schools.com/sql/func_datediff.asp> | find number of days exists within a date range | [
"",
"sql",
"oracle",
"date",
"date-range",
""
] |
I am connecting to Hive via an ODBC driver from a .NET application. Is there a query to determine if a table already exists?
For example, in MSSQL you can query the `INFORMATION_SCHEMA` table and in Netezza you can query the `_v_table` table.
Any assistance would be appreciated. | There are two approaches by which you can check that:
1.) As @dimamah suggested, just to add one point here, for this approach you need to
```
1.1) start the **hiveserver** before running the query
1.2) you have to run two queries
1.2.1) USE <database_name>
1.2.2) SHOW TABLES LIKE 'table_name'
1.2.3) Then you check your result using Result set.
```
2.) The second approach is to use the HiveMetastoreClient APIs, where you can directly check whether the **table\_name** exists in a particular **database** or not.
For further help please go through this [Hive 11](http://hive.apache.org/docs/r0.11.0/api/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.html) | Execute the following command : `show tables in DB like 'TABLENAME'`
If the table exists, its name will be returned, otherwise nothing will be returned.
This is done directly from hive. for more options see [this](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ShowTables/Partitions/Indexes).
`DB` is the database in which you want to see if the table exists.
`TABLENAME` is the table name you seek,
What actually happens is that Hive queries its [metastore](https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties) (depends on your configuration but it can be in a standard RDBMS like MySQL) so you can optionally connect directly to the same metastore and write your own query to see if the table exists. | How to check if a table exists in Hive? | [
"",
"sql",
"odbc",
"hive",
""
] |
I've got a project with a lot of SQL queries compiled into \*.DLL files.
Yesterday I received a new bug report: a username (which is generated automatically) containing a single quote causes an error.
The reason is queries like this one:
```
string.Format("SELECT TimeZone from yaf_User WHERE [Name]='{0}'", UserName);
```
Can someone suggest any kind of trick or hack to fix it?
Update: I don't know why the developers used this horrible way to generate SQL queries, but for now I have to fix it. The client will not understand why I should rewrite a lot of code for the fix. | What you are doing is dangerous because it lends itself to SQL injection attacks (as a side effect, it also causes the issue you're seeing).
The solution is to use parameterised queries - this also avoids SQL injection attacks.
```
SqlCommand cmd = new SqlCommand("SELECT TimeZone from yaf_User WHERE [Name]=@UserName", conn);
cmd.Parameters.Add(new SqlParameter("@UserName", theUsername));
```
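The same fix can be sketched in another stack for illustration (Python's sqlite3; the table and values here are made up): the placeholder keeps the quote as data, never as SQL.

```python
import sqlite3

# Parameterized query: the quote in "O'Brian" is treated as data only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yaf_User (Name TEXT, TimeZone TEXT)")
conn.execute("INSERT INTO yaf_User VALUES ('O''Brian', 'UTC+1')")
row = conn.execute("SELECT TimeZone FROM yaf_User WHERE Name = ?",
                   ("O'Brian",)).fetchone()
conn.close()
# row == ('UTC+1',)
```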
Your only alternative is to escape the single quote. However, that would fix your code while your solution remained insecure. **I cannot stress how important it is that you resolve this issue** - as it stands, I could wipe out entire tables of data in your system by logging in with a malicious username. | This is for Chris. Escaping characters may work. If the person is clever, there are some ways around it.
For instance.
```
-- Use Adventure works
use adventureworks2012
go
```
Say I know you replace a single quote with two (your chosen solution in the answer above). Enter the following
**Bothell'; GRANT CONTROL TO [adw\_user];PRINT'** into a text box.
This boils down to this @fld variable.
```
-- Declare the vars
declare @sql nvarchar(max);
declare @fld varchar(128) = 'Bothell''; GRANT CONTROL TO [adw_user];PRINT''';
print @fld
-- Perform some injection
set @sql = 'select * from [Person].[Address] where City = ' +
char(39) + @fld + char(39);
print @sql
exec sp_executesql @sql
```
There you have SQL Injection.
```
select * from [Person].[Address] where City = 'Bothell';
GRANT CONTROL TO [adw_user];PRINT''
(26 row(s) affected)
```
<http://www.w3schools.com/sql/sql_injection.asp>
Quote from W3Schools - The only proven way to protect a web site from SQL injection attacks, is to use SQL parameters.
A very good read. Check out link to truncation attacks. In short, parameterization makes sure the input is treated as a literal, not code.
<http://blogs.msdn.com/b/raulga/archive/2007/01/04/dynamic-sql-sql-injection.aspx> | MSSQL: single quotes causes to error in queries | [
"",
"sql",
".net",
"sql-server",
""
] |
I have a table of content like below...
```
ProjectID | ParentID | Project Name ...
----------------------------------------
1 | 1 | Project Name 1...
2 | 2 | Project Name 2...
3 | 3 | Project Name 3...
4 | 1 | Project Name 4 & Sub Project 1...
5 | 1 | Project Name 5 & Sub Project 1...
6 | 1 | Project Name 6 & Sub Project 1...
7 | 4 | Project Name 7 & Sub Project 1 & Sub Project 4...
8 | 4 | Project Name 8 & Sub Project 1 & Sub Project 4...
9 | 9 | Project Name 9...
```
Here **ProjectID** is the primary key & **ParentID** indicates the primary or main ProjectID of a sub-project. If ProjectID = ParentID, it means it is a primary (main) project.
Now the question is how to fetch the sub-projects of sub-projects. That means:
If I want to see the sub-projects under **1**, the output will be...
```
ProjectID | ParentID | Project Name ...
----------------------------------------
1 | 1 | Project Name 1...
4 | 1 | Project Name 4 & Sub Project 1...
5 | 1 | Project Name 5 & Sub Project 1...
6 | 1 | Project Name 6 & Sub Project 1...
7 | 4* | Project Name 7 & Sub Project 1 & Sub Project 4...
8 | 4* | Project Name 8 & Sub Project 1 & Sub Project 4...
```
* Here 4 is a sub-project of 1, and 7 and 8 are sub-projects of 4.
How do I compare? Do I have to use #temp for 1 and then consider the primary ID of the #temp table? Or is there any better solution? | I found the way to solve it. But the actual approach was given by [Jayvee](https://stackoverflow.com/users/2964963/jayvee). Sorry, I didn't know what the method is called, so I don't think I explained it perfectly. Actually I want to use recursion here on ProjectID.
```
DECLARE @ProjectID int = 1
WITH DirectReports
AS
(
-- Anchor member definition
SELECT C.*
FROM tblProject AS C
WHERE C.ProjectID = @ProjectID
UNION ALL
-- Recursive member definition
SELECT C1.*
FROM tblProject AS C1
JOIN DirectReports AS d ON C1.ProjectID = d.ProjectID
)
-- Statement that executes the CTE
SELECT * FROM DirectReports
UNION ALL
SELECT C.*
FROM tblProject AS C
WHERE C.ProjectID = @ProjectID and C.ProjectID=C.ParentID
```
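The recursive walk can be sketched in-memory with SQLite's `WITH RECURSIVE` (illustrative only; note that in this sketch the recursive member joins the child's ParentID to the parent's ProjectID, and self-parented roots are excluded to avoid infinite recursion):

```python
import sqlite3

# Descendants of project 1, using the question's sample hierarchy.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblProject (ProjectID INT, ParentID INT)")
conn.executemany("INSERT INTO tblProject VALUES (?,?)",
                 [(1, 1), (2, 2), (4, 1), (5, 1), (6, 1), (7, 4), (8, 4)])
rows = conn.execute("""
    WITH RECURSIVE tree(ProjectID) AS (
        SELECT 1
        UNION ALL
        SELECT p.ProjectID
        FROM tblProject p JOIN tree t ON p.ParentID = t.ProjectID
        WHERE p.ProjectID <> p.ParentID
    )
    SELECT ProjectID FROM tree ORDER BY ProjectID
""").fetchall()
ids = [r[0] for r in rows]  # [1, 4, 5, 6, 7, 8]
conn.close()
```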
You may find more elaboration [here](http://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx).
Thanks for the contributions. | I think the easiest way to find this is to search the `ProjectName` column. It appears to contain the hierarchy information:
```
select c.*
from content c
where ProjectName+' ' like '%Project 1 %'
```
The additional space is to be sure that `Project 1` does not match `Project 10`. | Get data from same table with the compare of multiple rows? | [
"",
"sql",
"t-sql",
""
] |
I'm executing a SQL command to create a new record in a database table and get the ID of the created record. However, there's a constraint error generated by the SQL command (uninitialized non-null field) which is not being picked up by the VB code. The code is roughly:-
```
connection = New SqlConnection(connection_string)
connection.Open()
sql_command = New SqlCommand(command) 'command = the SQL command to execute
sql_command.Connection = connection
sql_command.Parameters.AddRange(sql_parameters.ToArray()) ' sql_parameters is a parameter to the function
reader = sql_command.ExecuteReader()
If reader IsNot Nothing Then
If reader.HasRows Then
While reader.Read
response_handler(reader, data) 'response handler is a callback which populates the data object
End While
End If
reader.Close()
End If
```
The reader object is non-null but contains no data and no exception is generated. The SQL command is:-
```
insert into [table] ([column1], [column2], [column3], [column4])
output Inserted.[pk]
values (@1, @2, @3, @4)
```
Executing the SQL statement using SQL Server Management Studio, I get the error:-
> Msg 515, Level 16, State 2, Line 2
> Cannot insert the value NULL into column 'somecolumn', table 'tablename'; column does not allow nulls. INSERT fails.
I have also added a handler for the `InfoMessage` event on the `SqlConnection` object but that doesn't get called, even when I set `FireInfoMessageEventOnUserErrors` to true.
Why am I not getting an error and what is the correct way to ensure the error is reported to VB?
I'm using Visual Studio 2008. | The `HasRows` call returns false if there is an error, so you will never see the error. Remove both `If` statements: the null check is redundant, and the other one suppresses errors. | The severity level is 16, which does not interrupt the current session. If you want to throw a hard error, you could add something like this directly after your insert:
```
If @@Error <> 0
Begin
Raiserror('Constraint Error Encountered',20,1) With Log;
End
```
The security context will have to have sysadmin rights in order to perform the `RAISERROR WITH LOG`, but if you can't do this I'm sure there are other ways to throw a hard error. In any event a warning isn't going to throw an error to your VB code. | SqlCommand.ExecuteReader Fails to Report Errors | [
"",
"sql",
"sql-server",
"vb.net",
""
] |
I have two Oracle database tables.
```
A: Col1 | Col2 | Key
c1 c2 1
c3 c4 2
c5 c6 3
B: Co1 | Co2 | Key
a1 a2 2
a3 a4 3
```
I need a select that gives me the following table:
```
C:
a1 a2 c3 c4
a3 a4 c5 c6
- - c1 c2
```
How can I do it with SQL? The normal join:
```
select * from a, b where a.key = b.key;
```
does not give the desired result. | You need a left outer join:
```
select b.col1, b.col2, a.col1, a.col2
from a left outer join
b
on a.key = b.key;
``` | When doing a join like you are it will only return entries that exist in both tables. To get the result that you are looking for try something like this...
```
SELECT B.Co1, B.Co2, A.Col1, A.Col2
FROM A
LEFT JOIN B ON A.Key = B.Key
``` | How to join two tables | [
"",
"sql",
"oracle",
"join",
""
] |
```
CREATE TABLE IF NOT EXISTS `accesscards` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`department` varchar(255) NOT NULL,
`name` varchar(255) NOT NULL,
`entrydates` datetime NOT NULL, PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
INSERT INTO `accesscards` (`id`, `department`, `name`, `entrydates`) VALUES
(1, 'test', 't1', '2013-12-06 16:10:00'),
(2, 'test', 't1', '2013-12-06 15:10:00'),
(3, 'test', 't1', '2013-12-07 15:11:00'),
(4, 'test', 't1', '2013-12-07 15:24:00'),
(5, 'test', 't2', '2013-12-06 16:10:00'),
(6, 'test', 't2', '2013-12-06 16:25:00'),
(7, 'test', 't2', '2013-12-07 15:59:00'),
(8, 'test', 't2', '2013-12-07 16:59:00');
```
Above is my schema and data. I want to get records for each person for each day, and each such record should have the minimum datetime for that day. I need the whole record for that datetime.
My expected output [here](http://www.sqlfiddle.com/#!2/66c3f/2)
I tried using
```
SELECT id, MIN(entrydates) FROM accesscards WHERE 1=1 AND name!='' GROUP BY DATE(entrydates) ORDER BY id
```
but for 't1' I got only id=1 and the entrydates of the first row.
Please help me out. If this is a duplicate, please provide a link. | ```
SELECT a1.*
FROM accesscards a1
JOIN (SELECT name, MIN(entrydates) mindate
FROM accesscards
WHERE name != ''
GROUP BY name, date(entrydates)) a2
ON a1.name = a2.name AND a1.entrydates = a2.mindate
```
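The join-on-MIN pattern above can be checked quickly in-memory (Python's sqlite3, illustrative only, data abbreviated from the question):

```python
import sqlite3

# Earliest entry per person per day: join back on the per-day MIN.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accesscards (id INT, name TEXT, entrydates TEXT)")
conn.executemany("INSERT INTO accesscards VALUES (?,?,?)", [
    (1, 't1', '2013-12-06 16:10:00'), (2, 't1', '2013-12-06 15:10:00'),
    (3, 't1', '2013-12-07 15:11:00'), (4, 't1', '2013-12-07 15:24:00'),
])
rows = conn.execute("""
    SELECT a1.id FROM accesscards a1
    JOIN (SELECT name, MIN(entrydates) m
          FROM accesscards GROUP BY name, date(entrydates)) a2
      ON a1.name = a2.name AND a1.entrydates = a2.m
    ORDER BY a1.id
""").fetchall()
ids = [r[0] for r in rows]  # [2, 3]: earliest entry of t1 on each day
conn.close()
```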
[DEMO](http://www.sqlfiddle.com/#!2/66c3f/3) | If you are using mysql : [GROUP\_CONCAT](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) and [SUBSTRING\_INDEX](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index)
```
SELECT
DATE(entrydates) AS grouped_date,
GROUP_CONCAT(id ORDER BY entrydates ASC SEPARATOR ',') AS id_ordered_list,
SUBSTRING_INDEX(GROUP_CONCAT(id ORDER BY entrydates ASC), ',', 1) AS min_id_for_day
FROM
accesscards
WHERE
1=1 AND name!=''
GROUP BY
DATE(entrydates)
```
If you need other fields besides id to be shown, add this to you select :
```
SUBSTRING_INDEX(GROUP_CONCAT(YOUR_FIELDNAME_HERE ORDER BY entrydates ASC), ',', 1) AS min_YOUR_FIELDNAME_for_day
```
Play at <http://sqlfiddle.com/#!2/a2671/13>
After you updated your question with new data:
<http://sqlfiddle.com/#!2/a2671/20>
```
SELECT
DATE(entrydates) AS grouped_date,
SUBSTRING_INDEX(GROUP_CONCAT(id ORDER BY entrydates ASC), ',', 1) AS min_id_for_day,
department,
name,
SUBSTRING_INDEX(GROUP_CONCAT(entrydates ORDER BY entrydates ASC), ',', 1) AS min_entrydate_for_day
FROM
accesscards
WHERE
1=1 AND name!=''
GROUP BY
name,DATE(entrydates)
ORDER BY entrydates
``` | Get records for each person's each day's min datetime | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
**EDIT VERSION**
**EMPLOYEE TABLE**
```
|ID | employee_id | Name |
| 1 | 123 | John Richard |
| 2 | 554 | Daniel Domingo |
```
**educational background**
```
|ID | employee_id | School/institute | date graduated |
| 1 | 123 | highschool | 2007 |
| 2 | 123 | college | 2011 |
| 3 | 554 | college | 2010 |
| 4 | 554 | masteral | 2013 |
```
**job title**
```
|ID | employee_id | Job description |
| 1 | 123 | Free lancer |
| 2 | 554 | admin assistant |
```
I need to select the latest-dated educational background entry for each employee.
The result would be:
**result query**
```
|ID | employee_id | Name | Job title | year_graduated | school_institute |
| 1 | 123 | John Richard | Free Lancer | 2011 | college |
| 2 | 554 | Daniel Domingo | Admin Assistant | 2013 | masteral |
``` | Try this:
```
SELECT *
FROM (SELECT * FROM tableA ORDER BY employee_id, date_graduated DESC) A
GROUP BY employee_id
```
**OR**
```
SELECT a.*
FROM tableA a
INNER JOIN (SELECT employee_id, MAX(date_graduated) maxDate
FROM tableA GROUP BY employee_id
) b ON a.employee_id = b.employee_id AND a.date_graduated = b.maxDate
``` | ```
SELECT employee_id ,MAX([date graduated]) FROM Table GROUP BY [employee_id]
```
The result is
```
employee_id | date graduated
----------------------------
123 | 2011
554 | 2013
``` | how to select most recent date in mysql | [
"",
"mysql",
"sql",
"select",
"group-by",
"sql-order-by",
""
] |
In my situation, I need to deal with invalid date values like: '04070$'. I need to insert something even if I receive erroneous inputs like: '04070$'.
When I do:
```
select str_to_date('04070$','%m%d%Y') from dual;
```
The result is:
```
2000-04-07
```
But for insert statement, I get an error:
```
INSERT INTO `table1` ( `Date_Posted`) VALUES (str_to_date('04070$','%m%d%Y'))
#1292 - Truncated incorrect date value: '04070$'
```
I do not understand why the select statement runs fine while the insert statement gives an error. Is there a way to make the insert statement insert NULL or a date (even if it is incorrect) when presented with such incorrect values? | It happens because MySQL is running in [Strict Mode](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sql-mode-strict).
On a `SELECT` statement, `STR_TO_DATE()` will still return a date, but since the input wasn't matched 100%, it'll also issue a warning:
```
mysql> SELECT str_to_date('040710$','%m%d%y');
+---------------------------------+
| str_to_date('040710$','%m%d%y') |
+---------------------------------+
| 2010-04-07 |
+---------------------------------+
1 row in set, 1 warning (0.00 sec)
```
As you can see, the warning is just a small note on the select result, but it still exists:
```
mysql> show warnings;
+---------+------+-------------------------------------------+
| Level | Code | Message |
+---------+------+-------------------------------------------+
| Warning | 1292 | Truncated incorrect date value: '040710$' |
+---------+------+-------------------------------------------+
1 row in set (0.00 sec)
```
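A rough analogy outside MySQL (purely illustrative): a strict parser rejects the trailing `$` outright instead of truncating, much as strict mode turns the truncation warning into a hard error.

```python
from datetime import datetime

# Python's strptime is always strict: trailing garbage raises ValueError.
ok = datetime.strptime('04072000', '%m%d%Y')  # parses cleanly to 2000-04-07
try:
    datetime.strptime('04070$', '%m%d%Y')     # trailing '$' is rejected
    strict_error = False
except ValueError:
    strict_error = True
# ok.year == 2000; strict_error is True
```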
In Strict Mode, any warning becomes an error when trying to `INSERT`/`UPDATE`; that's why you can see a date on select but can't save it to the database. | Change the year format to uppercase, like:
```
str_to_date('01-01-2000','%d-%m-%Y')
``` | Truncated incorrect date value | [
"",
"mysql",
"sql",
"date",
""
] |
While working with the following query on MySQL, it gets locked:
```
SELECT event_list.*
FROM event_list
INNER JOIN members
ON members.profilenam=event_list.even_loc
WHERE (even_own IN (SELECT frd_id
FROM network
WHERE mem_id='911'
GROUP BY frd_id)
OR even_own = '911' )
AND event_list.even_active = 'y'
GROUP BY event_list.even_id
ORDER BY event_list.even_stat ASC
```
The inner query inside the `IN` clause has many `frd_id` values, and because of that the above query is very slow. Please help.
Thanks. | Try this:
```
SELECT el.*
FROM event_list el
INNER JOIN members m ON m.profilenam = el.even_loc
WHERE el.even_active = 'y' AND
(el.even_own = 911 OR EXISTS (SELECT 1 FROM network n WHERE n.mem_id=911 AND n.frd_id = el.even_own))
GROUP BY el.even_id
ORDER BY el.even_stat ASC
``` | You don't need the `GROUP BY` on the inner query, that will be making the database engine do a lot of unneeded work. | How to optimize this complected query? | [
"",
"mysql",
"sql",
"performance",
"select",
"query-optimization",
""
] |
Say I have the following:
```
PART SUBPART QUANTITY
-------- -------- -----------
01 02 2
01 03 3
01 04 4
01 06 3
02 05 7
02 06 6
03 07 6
04 08 10
04 09 11
...
```
For each `part` I need to identify the `subpart` with the max quantity.
My real example is a little bit more complex: there are not one but three subpart columns (like a composite key). So for each part I need to identify the triple `subpart1`, `subpart2`, `subpart3` with the maximum quantity...
As the database I use DB2 for AS400, but any examples are welcome.
I tried to do the following, but this does not work:
```
with T (PART, SUBPART1, SUBPART2, SUBPART3, SQ) AS
(SELECT PART, SUBPART1, SUBPART2, SUBPART3, SUM(QUANTITY)
FROM MYTABLE
GROUP BY PART, SUBPART1, SUBPART2, SUBPART3)
select PART, SUBPART1, SUBPART2, SUBPART3
WHERE SQ = max(SQ)
from T
group by PART
``` | "For each part I need the subpart that has the maximum quantity". How about this?
```
select t.*
from mytable t join
(select part, max(quantity) as maxq
from mytable
group by part
) m
on m.part = t.part and m.maxq = t.quantity;
``` | I assume DB2 for AS400 doesn't support MAX OVER, i.e. Windowed Aggregate Functions?
Then it would be easy, otherwise you need a correlated subquery:
```
with T (PART, SUBPART1, SUBPART2, SUBPART3, SQ) AS
(SELECT PART, SUBPART1, SUBPART2, SUBPART3, SUM(QUANTITY)
FROM MYTABLE
GROUP BY PART, SUBPART1, SUBPART2, SUBPART3)
select PART, SUBPART1, SUBPART2, SUBPART3
from T as t1
WHERE SQ =
(select max(SQ) from T as t2
where t1.PART = t2.PART)
```
Edit:
This is a version using Windowed Aggregate Functions
```
with T (PART, SUBPART1, SUBPART2, SUBPART3, SQ, MAXSQ) AS
(SELECT PART, SUBPART1, SUBPART2, SUBPART3, SUM(QUANTITY),
SUM(SUM(QUANTITY)) OVER (PARTITION BY PART)
FROM MYTABLE
GROUP BY PART, SUBPART1, SUBPART2, SUBPART3)
select PART, SUBPART1, SUBPART2, SUBPART3
WHERE SQ = MAXSQ
from T
```
But this version of DB2 seems to support only rankings:
```
with T (PART, SUBPART1, SUBPART2, SUBPART3, SQ, rnk) AS
(SELECT PART, SUBPART1, SUBPART2, SUBPART3, SUM(QUANTITY),
RANK() OVER (PARTITION BY PART ORDER BY SUM(QUANTITY) DESC)
FROM MYTABLE
GROUP BY PART, SUBPART1, SUBPART2, SUBPART3)
select PART, SUBPART1, SUBPART2, SUBPART3
WHERE rnk = 1
from T
``` | Select a composite key, where max(column_value) | [
"",
"sql",
"db2",
"ibm-midrange",
"aggregation",
""
] |
I wanted to take the average of two columns and display it in a new column, something like this:
```
+-+--+--+--+--------+--------+
|A|B |C |D |AVG(B/C)|AVG(C/B)|
+-+--+--+--+--------+--------+
|S|23|34|56| | |
+-+--+--+--+--------+--------+
|T|45|6 |79| | |
+-+--+--+--+--------+--------+
```
So, as shown above, I need to take each row's values, compute B/C, and then take the average to display it in a new column.
I want to do this in a SQL query. Is it possible to perform this in a SQL command? I know the `AVG()` function takes the average of a column, but how can I do `B/C` and then take the average? Also, if I need to take the average of B and C as well, how can I do that?
This is what I am doing right now:
```
Select A,B,C,D FROM tableTest where A='S';
```
I now have to take the average of the corresponding columns and also add two more columns to the query to show the respective results. | You should give it a try:
```
SELECT A, AVG((B+C)/2) as bc, AVG((C+B)/2) as cb
FROM tableTest
WHERE A = 'S'
GROUP BY A
``` | `AVG(col1/col2)` is not the same as `AVG((col1+col2)/2)`.
How about using `GROUP BY`?
> SELECT city, "average money spend:", AVG( sales-profit/
> number-of-customer) FROM Sales GROUP BY City
Then better use
> SUM(sales-profit)/SUM(number-of-customers)
or use
> AVG(sales-profit)/AVG(number-of-customers)
(but this is less pragmatic, i.e. less understandable). | Average of two columns in SQL | [
"",
"sql",
"database",
"postgresql",
"average",
""
] |
I have a MySQL customers table:
```
customer_id | customer_name | creation_date
1 | john | 2013-09-12 18:34:00
2 | banjo | 2013-01-11 14:34:00
```
What I would like to achieve is to find the closest DAY in the current or next month that matches the creation\_date field.
I.e., if the current date is 2014-01-20, I would like to have the following result:
```
customer_id | customer_name | creation_date | next_date
1 | john | 2013-09-12 18:34:00 | 2014-02-12
2 | banjo | 2013-01-11 14:34:00 | 2014-02-11
``` | The following seems to work but not tested for edge cases:
```
SELECT
CURRENT_DATE AS cutoff_date,
date_column AS creation_date,
CASE
WHEN STR_TO_DATE(CONCAT_WS('-', YEAR(CURRENT_DATE), MONTH(CURRENT_DATE), DAY(date_column)), '%Y-%c-%e') >= CURRENT_DATE
THEN STR_TO_DATE(CONCAT_WS('-', YEAR(CURRENT_DATE), MONTH(CURRENT_DATE), DAY(date_column)), '%Y-%c-%e')
ELSE STR_TO_DATE(CONCAT_WS('-', YEAR(CURRENT_DATE), MONTH(CURRENT_DATE), DAY(date_column)), '%Y-%c-%e') + INTERVAL 1 MONTH
END AS next_date
FROM dates2
```
Results:
```
cutoff_date creation_date next_date
------------------- ------------------- -------------------
2014-01-20 00:00:00 2010-01-01 00:41:00 2014-02-01 00:00:00
2014-01-20 00:00:00 2010-01-10 00:06:00 2014-02-10 00:00:00
2014-01-20 00:00:00 2010-01-19 22:34:00 2014-02-19 00:00:00
2014-01-20 00:00:00 2010-01-19 23:13:00 2014-02-19 00:00:00
2014-01-20 00:00:00 2010-01-20 00:36:00 2014-01-20 00:00:00
2014-01-20 00:00:00 2010-01-20 00:43:00 2014-01-20 00:00:00
2014-01-20 00:00:00 2010-02-15 08:05:00 2014-02-15 00:00:00
2014-01-20 00:00:00 2010-02-25 22:50:00 2014-01-25 00:00:00
``` | First calculate the wanted date, than use min/max to get the closest one. Maybe something like that:
```
-- If your stored date is always in year 2013
select CASE WHEN DATE_ADD(DATE_ADD(creation_date, INTERVAL 1 YEAR), INTERVAL 1 MONTH) < SYSDATE
THEN DATE_ADD(creation_date, INTERVAL 1 YEAR)
ELSE DATE_ADD(DATE_ADD(creation_date, INTERVAL 1 YEAR), INTERVAL 1 MONTH)
END AS next_date
from customers;
```
This will make your creation\_date `2013-09-12` to `2014-10-12`. But if you just want the day maybe this is useful?:
```
-- If your stored date is always in year 2013
select str_to_date(concat(date_format(curdate(),'%Y-%m'), date_format(creation_date,'%d')),'%Y-%m-%d') as next_date
from customers;
```
This should use the current year and month, but change just the day. You can use a `CASE` to check if the difference between the current or the next month is closer. | mysql next date by day of one field | [
"",
"mysql",
"sql",
"date",
"datetime",
""
] |
I don't know much about SQL, and I've got a problem using it.
I have two tables that are connected to each other 1-1:
Tbl1 (int\_id1, str\_desc1,....) and
Tbl2 (int\_id2, str\_desc2,....)
And these two are connected to each other
int\_id1 ---- int\_id2
First, I want to know: is my design correct?
And how can I insert into these two tables together?
I get an error when I try to insert into one;
here's the error description:
> The INSERT statement conflicted with the FOREIGN KEY constraint
> "FK\_Tbl2\_Tbl1". The conflict occurred in database "project", table
> "dbo.Tbl1", column 'int\_id1'.
Tnx... | It means you are trying to insert a value in a Foreign Key Column which does not exist in the Primary Key Column which it referencing to.
Any value you add to a foreign key column must exist in the primary key column it references; after all, that is the whole idea of adding foreign key constraints, so you will not end up with orphan records in a table, and it also reduces data redundancy.
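As a small illustration of the constraint in action, here is a sketch using Python's built-in SQLite driver (the table and column names are hypothetical, modeled on the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when enabled

conn.execute("CREATE TABLE tbl1 (int_id1 INTEGER PRIMARY KEY, str_desc1 TEXT)")
conn.execute("""CREATE TABLE tbl2 (
                    int_id2 INTEGER PRIMARY KEY,
                    FOREIGN KEY (int_id2) REFERENCES tbl1 (int_id1))""")

conn.execute("INSERT INTO tbl1 VALUES (1, 'parent row')")
conn.execute("INSERT INTO tbl2 VALUES (1)")        # OK: 1 exists in tbl1

try:
    conn.execute("INSERT INTO tbl2 VALUES (99)")   # 99 has no parent row in tbl1
except sqlite3.IntegrityError as exc:
    print("insert rejected:", exc)
```

The second insert into `tbl2` fails for exactly the reason described above: the referenced value does not exist in the parent table.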
`Read Here` for more information about `Foreign Key Constraints`. | > First I want to know that is my design true?
When you have tables in a one to one relationship, the first question should be: why not just use one table? There may not be a need to separate the data. | How can I insert into a table that is connected to another table with a 1-1 relationship | [
"",
"sql",
"sql-server",
""
] |
When we create a database with a date column, it can have the type `date`. But at the startup where I am interning, they use `int(11)` to store dates as yyyymmdd, e.g. 20140121. So is there any advantage to this, or is it just a matter of choice?
you are running into the exact problem that the first question (and my answer) is geared towards. You are looking at 20111201 and 20120131 as dates, and your brain is telling you that should be a difference of 60 days. Well, you're looping through based off of the delta...which is:
20120131 - 20111201 = 8930 (with the inclusive loop it'll be 8931)
In other words, your WHILE loop is executing 8931 times. This is happening because those are integer values and your loop will not jump from 20111231 straight to 20120101.
Your integers aren't going to take into account the boundaries of years and months.
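A quick sketch of the arithmetic problem described above (Python, purely illustrative):

```python
from datetime import date

start, end = 20111201, 20120131      # yyyymmdd dates stored as plain integers

naive_days = end - start             # integer subtraction gives 8930, not a day count

def to_date(n):
    # unpack a yyyymmdd integer into a real date
    return date(n // 10000, (n // 100) % 100, n % 100)

real_days = (to_date(end) - to_date(start)).days   # 61 actual days

print(naive_days, real_days)
```

The naive difference is off by two orders of magnitude, which is exactly why a loop driven by the integer delta executes 8931 times.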
**Additional Information :-**
**Potential data types and their sizes/limitations:**
Decimal(8,0): 5 bytes
Date: 3 bytes, 0001-01-01 through 9999-12-31
Int: 4 bytes
**Pros for numeric data type:**
They look pretty?
**Cons for numeric data type:**
1)Requires custom code for handling date operations
2)Requires custom code to manage correct dates (ie, not allowing 20120230 [Feb 30th, 2012])
3)Larger data footprint when compared to the Date data type.
So I hope you now know which datatype you should use to store dates;
you can decide easily from the above example.
I hope my answer will help you. | Integer encoded dates (e.g. 20140121) are common in warehouse databases.
Whether it's a good idea depends somewhat upon which RBDMS you are using. In some instances they use less space than an equivalent datetime datatype, and they are readable in a raw data form.
For example, it used to be the recommended approach in SQL Server prior to version 2008. But in SQL Server 2008+ there is a 3 byte pure date type which takes less space and is also easily readable as a date. | The Date in a Database should be Date type or int(11)? | [
"",
"sql",
""
] |
This is my table t1; it has one million rows.
```
CREATE TABLE `t1` (
`a` varchar(10) NOT NULL,
`b` varchar(10) DEFAULT NULL,
`c` varchar(10) DEFAULT NULL,
`d` varchar(10) DEFAULT NULL,
`e` varchar(10) DEFAULT NULL,
`f` varchar(10) DEFAULT NULL,
`g` varchar(10) DEFAULT NULL,
`h` varchar(10) DEFAULT NULL,
PRIMARY KEY (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
```
Result:
```
mysql> select * from t1 where a=10000000;
Empty set (1.42 sec)
mysql> select * from t1 where b=10000000;
Empty set (1.41 sec)
```
Why is selecting by the primary key as fast as by a normal field? | Try `select * from t1 where a='10000000';`.
You're probably forcing MySQL to convert all of those strings to integers - because integers have a higher type precedence than `varchar` - in which case an index on the strings is useless
---
Actually, apparently, I was slightly wrong. By my reading of the [conversions](http://dev.mysql.com/doc/refman/5.7/en/type-conversion.html) documentation, I believe that in MySQL we end up forcing both sides of the comparison to be converted to `float`, since I can't see any bullet point above:
> In all other cases, the arguments are compared as floating-point (real) numbers.
that would match a string on one side and an integer on the other. | Data is stored in blocks in almost all databases. Reading a block is an elementary Unit of IO.
Indexes help the system zero in on the data block which holds the data we are trying to read, avoiding a scan of all the data blocks. In a very small table which has a single or very few data blocks, the usage of an index could actually be an overhead and might be skipped altogether. Even if used, the indexes would rarely provide any performance benefit. Try the same experiment on a rather large table.
PS: Indexes and Key (Primary Keys) are not interchangeable concepts. The Former is Physical and the latter is logical. | Why primary key has no good effect on select? | [
"",
"mysql",
"sql",
"select",
""
] |
I have a table with a latitude, longitude and coordinates column. There are over 500,000 rows. The coordinates field is currently empty in every row. What I need to do is convert the latitude and longitude into a geospatial POINT() and put it in the coordinates column. Is this possible?
I know that `POINT`s can be inserted like this:
```
INSERT INTO `myTable` (`coordinates`) VALUES (GeomFromText('POINT(50.2 51.6)'));
```
I essentially need to do exactly this, but by pulling the values from `latitude` and `longitude` in each row. | `GeomFromText` needs a string, so you have to do a fair amount of concatenation:
```
INSERT INTO myTable (coordinates)
SELECT GeomFromText(CONCAT('POINT(',ot.latitude, ' ', ot.longitude,')'))
FROM otherTable ot;
```
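The `CONCAT` above just builds a WKT text literal; for illustration, here is the same string assembly sketched in Python (sample coordinates are made up):

```python
rows = [(50.2, 51.6), (40.7, -74.0)]            # (latitude, longitude) pairs

# each pair becomes a WKT literal like the one passed to GeomFromText
wkt_points = [f"POINT({lat} {lng})" for lat, lng in rows]
print(wkt_points)
```

Each element has exactly the shape `POINT(50.2 51.6)` that `GeomFromText` expects.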
If it's an update to an existing table with `latitude` and `longitude` columns into a new column `coordinates` do this:
```
UPDATE myTable
SET coordinates = GeomFromText(CONCAT('POINT(',latitude, ' ', longitude,')'));
``` | This is an older question, but I have noticed that the other answers are wrong (at least as of 2019): latitude and longitude must be in the opposite order from what the answers above use.
Instead of `POINT(latitude,longitude)`,
**it must be:** `POINT(longitude,latitude)`
In Mysql 5.6 and later can use :
```
Update myTable
Set coordinates = POINT(longitude, latitude);
```
If we try something like:
```
SELECT ST_Distance_Sphere(
POINT(13.1500000,99.9666700),
POINT(48.861105, 120.335337)
);
```
got error:
> ERROR 1210 (HY000): Incorrect arguments to st\_distance\_sphere
Because I used `POINT(latitude,longitude)`, as in the other answers.
But if we try `POINT(longitude,latitude)`:
```
SELECT ST_Distance_Sphere(
POINT(99.9666700, 13.1500000),
POINT(120.335337, 48.861105)
);
```
The result is:
```
+--------------------------------------------+
| ST_Distance_Sphere(
POINT(99.9666700, 13.1500000),
POINT(120.335337, 48.861105)
) |
+--------------------------------------------+
| 4389299.754585881 |
+--------------------------------------------+
1 row in set (0.00 sec)
``` | Convert latitude/longitude fields into Geospatial Points | [
"",
"mysql",
"sql",
"geospatial",
""
] |
I need some help with the SUM feature. I am trying to SUM the bill amounts for the same account into one grand total, but the results I am getting show that my SUM column just multiplies my first column by 3.
Here is what I want as results for my mock data:
```
AccountNumber Bill BillDate
1 100.00 1/1/2013
1 150.00 2/1/2013
1 200.00 3/1/2013
2 75.00 1/1/2013
2 100.00 2/1/2013
```
Query:
```
SELECT AccountNumber, Bill, BillDate, SUM(Bill)
FROM Table1
GROUP BY AccountNumber, Bill, BillDate
AccountNumber Bill BillDate SUM(Bill)
1 100.00 1/1/2013 450.00
1 150.00 2/1/2013 450.00
1 200.00 3/1/2013 450.00
2 75.00 1/1/2013 175.00
2 100.00 2/1/2013 175.00
```
OR
```
AccountNumber Bill SUM(Bill)
1 100.00 450.00
2 75.00 175.00
```
I would prefer to have both results if possible.
Here is what I am getting:
My SUM column is just multiplying by three; it's not actually summing the data based on account number.
```
AccountNumber Bill BillDate SUM(Bill)
1 100.00 1/1/2013 300.00
1 150.00 2/1/2013 450.00
1 200.00 3/1/2013 600.00
2 75.00 1/1/2013 225.00
2 100.00 2/1/2013 300.00
``` | Thank you for your responses. Turns out my problem was a database issue with duplicate entries, not with my logic. A quick table sync fixed that and the SUM feature worked as expected. This is all still useful knowledge for the SUM feature and is worth reading if you are having trouble using it. | If you don't want to group your result, use a window function.
You didn't state your DBMS, but this is ANSI SQL:
```
SELECT AccountNumber,
Bill,
BillDate,
SUM(Bill) over (partition by accountNumber) as account_total
FROM Table1
order by AccountNumber, BillDate;
```
Here is an SQLFiddle: <http://sqlfiddle.com/#!15/2c35e/1>
You can even add a running sum, by adding:
```
sum(bill) over (partition by account_number order by bill_date) as sum_to_date
```
which will give you the total up to the current's row date. | SQL Sum Multiple rows into one | [
"",
"sql",
"function",
"group-by",
"sum",
""
] |
I have the following table for evaluating students:
```
StudentID | EvaluationStatusID| Date
1011010 | 1 |2013-11-07 20:31:51.000
1011020 | 1 |2013-11-08 13:23:51.000
1011010 | 2 |2013-11-08 20:31:51.000
1011020 | 3 |2013-11-09 20:31:51.000
```
The evaluation of a student goes through different stages: 'submitted', 'assessed', 'accepted', etc.
I need to get the LATEST record (by date) for each student in the form 'StudentID-EvaluationStatusID'.
So,in the above data i should have the following returned:
```
1011010-2
1011020-3
```
In SQL Server 2008, how do I get this? | The simplest approach is to use a ranking function like `ROW_NUMBER`.
```
WITH CTE AS
(
SELECT StudentID, EvaluationStatusID, Date,
RN = ROW_NUMBER() OVER (PARTITION BY StudentID
ORDER BY Date DESC)
FROM dbo.Student
)
SELECT StudentID, EvaluationStatusID, Date
FROM CTE WHERE RN = 1
```
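As a quick way to verify the logic, here is the same CTE + `ROW_NUMBER` pattern run against an in-memory SQLite database (a sketch; it assumes a SQLite build ≥ 3.25, which is needed for window functions, and renames the `Date` column to avoid keyword clashes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (StudentID INT, EvaluationStatusID INT, EvalDate TEXT)")
conn.executemany("INSERT INTO student VALUES (?, ?, ?)", [
    (1011010, 1, "2013-11-07 20:31:51"),
    (1011020, 1, "2013-11-08 13:23:51"),
    (1011010, 2, "2013-11-08 20:31:51"),
    (1011020, 3, "2013-11-09 20:31:51"),
])

rows = conn.execute("""
    WITH cte AS (
        SELECT StudentID, EvaluationStatusID,
               ROW_NUMBER() OVER (PARTITION BY StudentID
                                  ORDER BY EvalDate DESC) AS rn
        FROM student
    )
    SELECT StudentID || '-' || EvaluationStatusID
    FROM cte WHERE rn = 1
    ORDER BY 1
""").fetchall()

print([r[0] for r in rows])
```

It returns `1011010-2` and `1011020-3`, the latest status per student, matching the expected output in the question.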
`Demo` | ```
SELECT StudentID +' ' +EvaluationStatusID
FROM tblTable T
WHERE T.Date = (SELECT MAX(TT.Date)
FROM tblTable TT
WHERE TT.StudentID = T.StudentID
)
``` | Retrieve last entry for each | [
"",
"sql",
"sql-server",
""
] |
I have table A with below records:
Table A
```
+----+-----+
|Tuid|Etuid|
+----+-----+
|d1 |E1 |
+----+-----+
|d2 |E1 |
+----+-----+
|d3 |E2 |
+----+-----+
|d4 |E3 |
+----+-----+
|d5 |E4 |
+----+-----+
```
Table B
```
+----+---+
|Tuid|FID|
+----+---+
|d1 |1 |
+----+---+
|d2 |2 |
+----+---+
|d3 |1 |
+----+---+
|d3 |2 |
+----+---+
|d4 |1 |
+----+---+
|d4 |2 |
+----+---+
|d5 |1 |
+----+---+
|d5 |3 |
+----+---+
```
Here Tuid of Table A and Tuid of Table B are related. I want to query all the records that have both FIDs 1 and 2 under the same Etuid. I am expecting E1, E2 and E3 in the result. I should not get E4, as it contains only FID '1' and not '2'.
I have written query like this:
```
SELECT TableA.ETuid
FROM TableA
WHERE TableA.Tuid IN (
SELECT TableB.Tuid
FROM TableB
WHERE TableB.FID IN (1, 2)
GROUP BY TableB.Tuid
HAVING COUNT(*) = 2
)
```
This gives only E2 and E3 in the result; please help me get E1 as well. | Try this code (not tested):
```
select a.ETuid
from tablea a
join tableb b on (a.Tuid = b.Tuid)
where b.fid in (1,2)
group by a.ETuid
having count(distinct b.fid) = 2
``` | This may not be an elegant solution, but the logic is pretty straight forward:
```
SELECT a1.Etuid
FROM TableA a1
INNER JOIN TableB b1 ON (a1.Tuid=b1.Tuid)
WHERE b1.FID=1
INTERSECT
SELECT a2.Etuid
FROM TableA a2
INNER JOIN TableB b2 ON (a2.Tuid=b2.Tuid)
WHERE b2.FID=2;
```
Outputs
```
ETUID
E1
E2
E3
```
[Online demo](http://sqlfiddle.com/#!4/9da6e0/8)
(Please be noted that due to SQLFiddle's convention and my laziness I have to use double quote for column names; This may not be the case in your real code)
[Oracle document on `INTERSECT`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/queries004.htm) | Correct Oracle Query | [
"",
"sql",
"database",
"oracle",
""
] |
I have two tables. Table A contains daily information on transactions on corporate bonds from 2004 to 2012, and table B contains bond rating information on specific dates. I need to join the two tables, so that for each transaction in table A, the most recent rating for that specific bond is appended.
```
Table A: daily_transactions
--------------------------------------------
DATE |BOND |PRICE
--------------------------------------------
20110401 |AES |100
20110402 |AES |101
20110403 |AES |102
20110404 |AES |103
20110401 |BPP |99
20110402 |BPP |98
Table B: bond_ratings
--------------------------------------------
DATE |BOND |RATING
--------------------------------------------
20110401 |AES |AAA
20110403 |AES |BB
20110401 |BPP |CCC
Table C: joined_data
--------------------------------------------
DATE |BOND |PRICE |RATING
--------------------------------------------
20110401 |AES |100 |AAA
20110402 |AES |101 |AAA
20110403 |AES |102 |BB
20110404 |AES |103 |BB
20110401 |BPP |99 |CCC
20110402 |BPP |98 |CCC
```
I have approx. 1,000,000 records in table A and 14,000 records in table B.
**UPDATE:**
What I have so far is:
```
create table test_merge as
SELECT a.date, b.date, a.bond, a.price, b.rating
FROM daily_transactions a
LEFT JOIN bond_ratings b ON a.bond = b.bond AND b.date <= a.date
WHERE NOT EXISTS (
SELECT 1 FROM bond_ratings b1
WHERE b1.bond = a.bond
AND b1.date <= a.date
AND b1.date > b.date
);
```
It seems to work perfectly (<http://sqlfiddle.com/#!3/d287f/2>), however its runs extremely slow with the amount of data that I have. Takes about 2 hours. Is there any way to optimize this to run faster?
I'm very (very) new to sql, and any help is thus much appreciated! | I managed to bring down the runtime to 5 minutes by indexing on the `bond` column.
```
proc sql;
create index bond
on work.daily_transactions(bond);
quit;
proc sql;
create index bond
on work.bond_ratings(bond);
quit;
``` | For a more SAS based approach (instead of SQL), you can use a SAS format for Table B and probably speed it up more. A [format in SAS](http://support.sas.com/documentation/cdl/en/proc/61895/HTML/default/viewer.htm#a002473491.htm) is just a lookup table mapping anything between START and END to a LABEL. For example, loading this table as a format:
```
fmtname | START | END | LABEL
-----------------------------------------------------------
$bondRate | AES20110401 | AES20110403 | AAA
```
Maps any text string between START and END to LABEL. So `AES20110302` --> AAA.
Here is the full code, using your TABLE B above (assuming DATE is a numeric field, if not use `input(DATE,YYDDMMN8.)` to convert it to a number):
```
PROC SORT DATA = TABLE_B;
by bond descending date;
run;
/*Use lag function to get the start and end date on one line*/
data bond_ratings_fmt;
set TABLE_B;
by bond descending date;
START_DT = put(date,$8);*Character date like '20110401';
END_DT = put(lag(date)-1,$8);* 1 less than the prior records end;
*first.bond is the most recent rating for each bond;
*setting the END_DT to some future date in this case.;
if first.bond then END_DT= '20991231';
START = cats(BOND,START_DT);*Cats concatenates and trims spaces, makes AES20110401;
END = cats(BOND,END_DT);
LABEL = Rating;
fmtName='$bondRate';
run;
*Load the format, using CNTLIN (Control Table In);
proc format cntlin=bond_ratings_fmt;
*Apply the format;
data TableC_withRating (drop=_:);
set TableA;
_DateChar = put(DATE,$8.);
Rating = put(BOND||_DateChar,$bondRate.);
run;
```
You can get fancier by adding an OTHER case to the format - there are lots of good examples on the web with `cntlin` and `proc format`. | How to join with most recent record? | [
"",
"sql",
"performance",
"sas",
""
] |
I have stored procedure which is building dynamic SQL statement depending on its input parameters and then executed it.
One of the queries is causing timeouts, so I have decided to check it. The first time (and only the first time) the statement in question is executed, it is slow (30-45 secs), and every subsequent execution takes 1-2 seconds.
In order to reproduce the issue, I am using
```
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
```
I am really confused about where the problem is, because ordinarily if a SQL statement is slow, it is always slow. Here, it has a long execution time only the first time.
Is it possible that the statement itself is slow and needs optimization, or could the problem be caused by something else?
The execution plan is below, but for me there is nothing strange with it:
 | From your reply to my comment, it would appear that the first time this query runs it is performing a lot of physical reads or read-ahead reads, meaning that a lot of IO is required to get the right pages into the buffer pool to satisfy this query.
Once pages are read into the buffer pool (memory) they generally stay there so that physical IO is not required to read them again (you can see this is happening as you indicated that the physical reads are converted to logical reads the second time the query is run). Memory is orders of magnitude faster than disk IO, hence the difference in speed for this query.
Looking at the plan, I can just about see that every read operation is being done against the clustered index of the table. As the clustered index contains every column for the row it is potentially fetching more data per row than is actually required for the query.
Unless you are selecting every column from every table, I would suggest that creating non-clustered covering indexes that satisfy this query (that are as narrow as possible), this will reduce the IO requirement for the query and make it less expensive the first time round.
Of course this may not be possible/viable for you to do, in which case you should either just take the hit on the first run and not empty the caches, or rewrite the query itself to be more efficient and perform less reads. | Its very simple the reason 1st and very 1st time it takes longer and then all later executions are done fairly quickly. the reason behind this mystery is "CACHED EXECUTION PLANS".
> While working with Stored Procedures, Sql server takes the following
> steps.
>
> 1) Parse Syntax of command.
> 2) Translate to Query Tree.
> 3) Develop
> Execution Plan.
> 4) Execute.
The 1st two steps only take place when you create a Stored Procedure.
The 3rd step only takes place on the very 1st execution, or if the CACHED PLAN has been flushed from the CACHE MEMORY.
Fourth Step takes place on every execution, and this is the only step that takes place after the very 1st execution if the Plan is still in cache memory.
In your case it's quite understandable that the very 1st execution took long and later executions ran fairly quickly.
To reproduce the "issue" you executed the `DBCC FREEPROCCACHE` and `DBCC DROPCLEANBUFFERS` commands, which basically flush the `BUFFER CACHE` and cause your stored procedure to create a new execution plan on its next execution. Hope this will clear the fog a little bit :) | Dynamic SQL - long execution time - first time only | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I want to select the number of users that has marked some content as favorite and also return if the current user has "voted" or not. My table looks like this
```
CREATE TABLE IF NOT EXISTS `favorites` (
`user` int(11) NOT NULL DEFAULT '0',
`content` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`user`,`content`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ;
```
Say I have 3 rows containing
```
INSERT INTO `favorites` (`user`, `content`) VALUES
(11, 26977),
(22, 26977),
(33, 26977);
```
Using this
```
SELECT COUNT(*), CASE
WHEN user='22'
THEN 1
ELSE 0
END as has_voted
FROM favorites WHERE content = '26977'
```
I expect to get `has_voted=1` and `COUNT(*)=3` but
I get `has_voted=0` and `COUNT(*)=3`. Why is that? How do I fix it? | This is because you mixed aggregated and non-aggregated expressions in a single `SELECT`. Aggregated expressions work on many rows; non-aggregated expressions work on a single row. An aggregated expression (e.g. `COUNT(*)`) and a non-aggregated one (e.g. `CASE`) should only appear in the same `SELECT` when you have a `GROUP BY`, which does not make sense in your situation.
You can fix your query by aggregating the second expression - i.e. adding a `SUM` around it, like this:
```
SELECT
COUNT(*) AS FavoriteCount
, SUM(CASE WHEN user=22 THEN 1 ELSE 0 END) as has_voted
FROM favorites
WHERE content = 26977
```
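As a quick sanity check of the conditional-aggregation fix, here is the same query run against an in-memory SQLite database (a sketch; `user` is quoted defensively since it is reserved in some databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE favorites ("user" INT, content INT, '
             'PRIMARY KEY ("user", content))')
conn.executemany("INSERT INTO favorites VALUES (?, ?)",
                 [(11, 26977), (22, 26977), (33, 26977)])

row = conn.execute("""
    SELECT COUNT(*) AS favorite_count,
           SUM(CASE WHEN "user" = 22 THEN 1 ELSE 0 END) AS has_voted
    FROM favorites
    WHERE content = 26977
""").fetchone()

print(row)
```

With the sample rows from the question, this returns a count of 3 and `has_voted = 1`, as expected.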
Now both expressions are aggregated, so you should get the expected results. | Try this with `SUM()` and without `CASE`
```
SELECT
COUNT(*),
SUM(USER = '22') AS has_voted
FROM
favorites
WHERE content = '26977'
```
[**See Fiddle Demo**](http://sqlfiddle.com/#!2/c249b/4) | SELECT CASE, COUNT(*) | [
"",
"mysql",
"sql",
"select",
"max",
"case",
""
] |
I have data like this:
```
Product Group Product Level Quatity Sold Trend
==============================================================
Group 1 L1 10 up
Group 1 L2 20 up
Group 1 L3 30 down
Group 2 L1 20 up
Group 2 L2 40 up
Group 2 L3 60 down
Group 2 L4 80 down
```
I need to get the data in this format:
```
Product Group L1 L1Trend L2 L2Trend L3 L3Trend L4 L4Trend
======================================================================================
Group 1 10 up 20 up 30 down
Group 2 20 up 40 up 60 down 80 down
```
I was able to pivot on "Product Level" by using something like:
```
PIVOT (MAX(quatity) FOR productlevel IN ([L1],[L2],[L3],[L4])) AS p
```
but got lost when dealing with the trend.
Thanks. | You could get the desired result by implementing the PIVOT function, but I would first UNPIVOT your multiple columns of `Quantity Sold` and `Trend`. The unpivot process will convert them from multiple columns into multiple rows of data.
Since you are using SQL Server 2008+, you can use `CROSS APPLY` with `VALUES` to unpivot the data:
```
select [Product Group],
col, value
from yourtable
cross apply
(
values
([Product Level], cast([Quatity Sold] as varchar(10))),
([Product Level]+'trend', [trend])
) c (col, value);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/01fcd/7). This converts your table data into the format:
```
| PRODUCT GROUP | COL | VALUE |
|---------------|---------|-------|
| Group 1 | L1 | 10 |
| Group 1 | L1trend | up |
| Group 1 | L2 | 20 |
| Group 1 | L2trend | up |
| Group 1 | L3 | 30 |
| Group 1 | L3trend | down |
```
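Outside SQL, the row-doubling that `CROSS APPLY (VALUES ...)` performs can be pictured in a few lines of Python (illustrative only; sample rows taken from the question):

```python
# one wide row per (group, level) becomes two narrow rows
wide = [("Group 1", "L1", 10, "up"),
        ("Group 1", "L2", 20, "up")]

narrow = []
for group, level, qty, trend in wide:
    narrow.append((group, level, str(qty)))           # quantity row, cast to text
    narrow.append((group, level + "trend", trend))    # trend row

for r in narrow:
    print(r)
```

Each source row yields a quantity row and a trend row, which is exactly the shape the subsequent `PIVOT` needs.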
Now you can easily apply the PIVOT function:
```
select [Product Group],
L1, L1trend,
L2, L2trend,
L3, L3trend,
L4, L4trend
from
(
select [Product Group],
col, value
from yourtable
cross apply
(
values
([Product Level], cast([Quatity Sold] as varchar(10))),
([Product Level]+'trend', [trend])
) c (col, value)
) d
pivot
(
max(value)
for col in (L1, L1trend, L2, L2trend,
L3, L3trend, L4, L4trend)
) piv;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/01fcd/8). This gives you a final result of:
```
| PRODUCT GROUP | L1 | L1TREND | L2 | L2TREND | L3 | L3TREND | L4 | L4TREND |
|---------------|----|---------|----|---------|----|---------|--------|---------|
| Group 1 | 10 | up | 20 | up | 30 | down | (null) | (null) |
| Group 2 | 20 | up | 40 | up | 60 | down | 80 | down |
``` | You can potentially do this using correlated subqueries:
```
select productGroup as [Product Group]
, (select sum(quantitySold) from myTable where productGroup = a.productGroup and productLevel = 'L1') as L1
, (select max(trend) from myTable where productGroup = a.productGroup and productLevel = 'L1') as L1Trend
, (select sum(quantitySold) from myTable where productGroup = a.productGroup and productLevel = 'L2') as L2
, (select max(trend) from myTable where productGroup = a.productGroup and productLevel = 'L2') as L2Trend
-- etc.
from myTable a
group by productGroup
order by productGroup
```
Here's an [example SqlFiddle](http://sqlfiddle.com/#!6/982a3/2).
It may help you to see it this way before you use the `PIVOT` keyword.
If you don't know how many productLevel values you have, you'd need a dynamic solution, however. | PIVOT on Multiple Columns | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
"pivot",
""
] |
here
```
SELECT
COUNT(*)
FROM
tbl_department
RIGHT JOIN tbl_video
ON tbl_video.department_id = tbl_department.department_id
LEFT JOIN tbl_language
ON tbl_language.language_id = tbl_video.video_language
WHERE tbl_video.department_id LIKE '%1%'
```
i already tried
```
SELECT
COUNT(*)
FROM
tbl_department
RIGHT JOIN tbl_video
ON tbl_video.department_id = tbl_department.department_id
LEFT JOIN tbl_language
ON tbl_language.language_id = tbl_video.video_language AS numbers
WHERE tbl_video.department_id LIKE '%1%'
```
but it doesn't work.
Help, please. | When you use `COUNT(*)` it applies to all columns; `AS` names a single output column, so the alias should be attached to the `COUNT(*)` expression itself in the SELECT list. | Use as in select
i.e
```
SELECT
COUNT(*) AS numbers
.....
```
Don't use `AS` anywhere else; use it in the SELECT list. | how do i use "as" in this SQL statement? | [
"",
"mysql",
"sql",
"sql-server",
"phpmyadmin",
""
] |
Suppose I want to return the number '1' if EXISTS (customers.salary>'1000')
I tried:
```
SELECT 1
FROM customers
WHERE EXISTS (customers.salary>'1000')
```
also this could work:
```
SELECT COUNT(salary)
FROM (SELECT DISTINCT(salary) FROM (SELECT salary FROM costomers WHERE salary>'1000'));
```
but what if I would like to return some other number not 1? | ```
SELECT
CASE WHEN (customers.salary>1000) THEN 1 ELSE 0 END AS overThousand
FROM customers
```
Will bring back a 1 for each row where salary is over 1000. So if you also took your customer name, your results would be "bob, 1" (Salary over 1000), "Dave, 0" (Salary under 1000).
If you literally just want a '1' if there is a salary over 1000 anywhere in the customer table, the following is my suggestion:
```
SELECT 1
WHERE EXISTS (SELECT c.salary FROM customers c WHERE c.salary > 1000)
```
* You don't need FROM customers in your main statement
* You need a full statement in your EXISTS. (SELECT, FROM, WHERE)
* Using apostrophes around your '1000' won't work. You need it to be recognised as a number to use the > operator. | You can return any number of your choice through your first query, or even an expression.
```
SELECT 10
FROM customers
WHERE EXISTS (customers.salary>'1000')
```
You can check the syntax of [SELECT](http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_10002.htm#i2065706) statement for more on this, and then read through the [SQL expressions](http://docs.oracle.com/cd/B28359_01/server.111/b28286/expressions001.htm#i1002626) on Oracle documentation, for more details.
Edit:
To clarify, this answer applies to Oracle database, even though you are not concerned about databases while looking for an answer. | How can you return an integer if a certain condition is met in SQL? | [
"",
"sql",
""
] |
I've been looking for a bit, but haven't found a good answer.
I've found that I could generate the code in a VBA procedure and run it from there; however, I'm working on someone else's design and don't want to change it up too much.
In part of that pass-through query we have something like this:
```
WHERE (((ID='380')
```
I want the 380 to be code that will look at a UserForm combo box, write a query for my database returning a value `WHERE column = combobox.value`
I hope that makes sense; let me know if anyone knows how to work with this.
Thanks. | After looking around a lot and researching a bit, it seems that my only options are a stored procedure on the server (which I cannot do) or creating a string and passing that string to the pass-through query.
I'm going to pursue the string option since it seems like the best choice given my constraints here. If anyone comes up with any ideas later on that would help, let me know.
Thanks. | You can set the combobox to put the selected value in a cell, and then link the query to the value of that cell via a parameter.
Cheers - | Access Dynamic Pass-through Query | [
"",
"sql",
"ms-access",
"vba",
"pass-through",
""
] |
I have table test0. This table referenced by tables:
```
test1
test2
test3
....
test150
```
Is it possible to write a simple, short query to delete 1 row in table test0 and, if they exist, the referencing rows in all tables test1 ... test150?
```
Create or replace
FUNCTION FN_GET_DELETE_SCRIPT
( Parent_ID IN VARCHAR2, Parent_Table_Name in varchar2
) RETURN varchar2
AS
sql_statement varchar2(200);
script varchar2(4000);
n pls_integer;
Tot Pls_Integer := 0;
Cc_Id Varchar2(500):=Null;
Cursor allTables
Is
Select uc.table_name , ac.column_name
from user_constraints uc , ALL_CONS_COLUMNS AC
Where
R_Constraint_Name = (Select Constraint_Name From User_Constraints Where Constraint_Type = 'P' And Table_Name = upper(Parent_Table_Name))
And Ac.Owner = Uc.Owner
And AC.constraint_name = uc.constraint_name;
Begin
for t in allTables loop
execute immediate 'select count(*) from '||t.table_name ||' where '||t.column_name||' = :1' into n using Parent_ID;
If N > 0 Then
script:='Delete From ' ||t.table_name||' Where '||t.column_name||'='''||Parent_ID||''';'||chr(10)||script;
End if;
End loop;
Return Script;
END FN_GET_DELETE_SCRIPT;
```
Note that this function gives you a delete script for deleting immediate children of the given parent row.
So this function needs a bit of modification to find all descendants of the given parent rows! | * Try like this
```
CREATE TABLE supplier
( supplier_id numeric(10) not null,
supplier_name varchar2(50) not null,
contact_name varchar2(50),
CONSTRAINT supplier_pk PRIMARY KEY (supplier_id)
);
CREATE TABLE products
(product_id numeric(10) not null,
supplier_id numeric(10) not null,
CONSTRAINT fk_supplier
FOREIGN KEY (supplier_id)
REFERENCES supplier(supplier_id)
ON DELETE CASCADE
);
```
* According to your requirement, first drop all constraints and recreate them using on delete cascade as shown in the above example
* Delete the supplier, and it will delete all products for that supplier
* but be careful while using on delete cascade "You can, by mistake, delete half of your database without even realizing it" | How to delete all referential tables in Oracle? | [
"",
"sql",
"oracle",
"foreign-keys",
""
] |
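The `ON DELETE CASCADE` approach from the second answer above can be sketched in SQLite with a single parent/child pair (table names follow the question; SQLite only enforces foreign keys once the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE test0 (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE test1 (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES test0(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO test0 VALUES (1)")
conn.execute("INSERT INTO test1 VALUES (10, 1)")

# Deleting the parent row removes its child row in test1 automatically.
conn.execute("DELETE FROM test0 WHERE id = 1")
remaining_children = conn.execute("SELECT COUNT(*) FROM test1").fetchone()[0]
remaining_parents = conn.execute("SELECT COUNT(*) FROM test0").fetchone()[0]
```

With 150 child tables the same single `DELETE` on test0 suffices, provided every foreign key was created with the cascade option.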
I have an SQL table contains data for the sale of some items. In fact, it has the logs of the sale of items.
For example, there is a sale that contains 2 items: Keyboard (`id:1`) and mouse (`id:2`). Buyers can bid on each item multiple times, like eBay. So let's assume 2 buyers (`ids 97 and 98`) made bids a couple of times. The related data would be:
```
bid_id | buyer_id | item_id | amount | time |
1 | 97 | 1 | 44.26 | 2014-01-20 15:53:16 |
2 | 98 | 2 | 30.47 | 2014-01-20 15:54:52 |
3 | 97 | 2 | 40.05 | 2014-01-20 15:57:47 |
4 | 97 | 1 | 42.46 | 2014-01-20 15:58:36 |
5 | 97 | 1 | 39.99 | 2014-01-20 16:01:13 |
6 | 97 | 2 | 24.68 | 2014-01-20 16:05:35 |
7 | 98 | 2 | 28 | 2014-01-20 16:08:42 |
8 | 98 | 2 | 26.75 | 2014-01-20 16:13:23 |
```
In this table, I need to select data for first item offers for each user and last offers for each user.
So if I select first item offers for each user (distinct), return data should be like:
```
bid_id | buyer_id | item_id | amount | time |
1 | 97 | 1 | 44.26 | 2014-01-20 15:53:16 |
2 | 98 | 2 | 30.47 | 2014-01-20 15:54:52 |
3 | 97 | 2 | 40.05 | 2014-01-20 15:57:47 |
```
If I select last offers for each user, return should be like:
```
bid_id | buyer_id | item_id | amount | time |
5 | 97 | 1 | 39.99 | 2014-01-20 16:01:13 |
6 | 97 | 2 | 24.68 | 2014-01-20 16:05:35 |
8 | 98 | 2 | 26.75 | 2014-01-20 16:13:23 |
```
Since I have to bring each item for each user, I tried to `GROUP BY` both `buyer_id` and `item_id`, then `SELECT` the `MIN` value of `time` or `bid_id`. But it always returned me the first `bid_id` with the latest `amount` rows (which are actually the last offers).
Here's the query I tried:
```
SELECT MIN(`bid_id`) AS `bid_id`,`buyer_id`,`item_id`,`amount`,`time` FROM `offers` GROUP BY `buyer_id`,`item_id`
```
And the result was:
```
bid_id | buyer_id | item_id | amount | time |
1 | 97 | 1 | 39.99 | 2014-01-20 16:01:13 |
2 | 97 | 2 | 24.68 | 2014-01-20 16:05:35 |
3 | 98 | 2 | 26.75 | 2014-01-20 16:13:23 |
```
As you can see, it groups by and the IDs are correct but the rest of the row values are not.
How can I correctly `SELECT` first and/or last rows when grouping by multiple columns?
SELECT o.`bid_id`,o.`buyer_id`,o.`item_id`,o.`amount`,o.`time` FROM `offers` o
JOIN
(SELECT MIN(`bid_id`) AS `bid_id`,`buyer_id`,`item_id`,`amount`,`time` FROM `offers` GROUP BY `buyer_id`,`item_id`)x
ON x.bid_id=o.bid_id AND x.buyer_id=o.buyer_id
``` | Here's another take, using [Quassnoi's ranking trick here](https://stackoverflow.com/questions/532878/how-to-perform-grouped-ranking-in-mysql/532941#532941)
For the first bids:
```
SELECT x.bid_id, x.buyer_id, x.item_id, x.amount, x.time
FROM
(
SELECT o.bid_id, o.buyer_id, o.item_id, o.amount, o.time,
@combo :=CASE WHEN NOT(@curItem = o.item_id AND @curBuyer = o.buyer_id)
THEN 1 ELSE @combo+1 END AS Rank,
@curItem:=o.item_id AS item,
@curBuyer:=o.buyer_id AS buyer
FROM
(
SELECT o.bid_id, o.buyer_id, o.item_id, o.amount, o.time
FROM offers o
ORDER BY o.buyer_id, o.item_id, o.bid_id
) o,
(SELECT @curItem := -1) itm,
(SELECT @curBuyer:= -1) buy
) x
WHERE x.Rank = 1;
```
For the last bids query, you just need to change the `ORDER BY` to `o.buyer_id, o.item_id, o.bid_id DESC`
[SqlFiddle here](http://sqlfiddle.com/#!2/916c2/5) | MySQL: SELECT First or Last Rows When Grouping By Multiple Columns | [
"",
"mysql",
"sql",
"select",
"group-by",
"distinct",
""
] |
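The chosen answer's join-back technique above can be reproduced with the question's sample data in SQLite. Joining on `bid_id` alone is enough here since `bid_id` is unique, and the sketch assumes (as the sample data does) that `bid_id` increases with `time`; swapping `MIN` for `MAX` gives the last offers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE offers (
    bid_id INTEGER, buyer_id INTEGER, item_id INTEGER,
    amount REAL, time TEXT)""")
conn.executemany("INSERT INTO offers VALUES (?,?,?,?,?)", [
    (1, 97, 1, 44.26, '2014-01-20 15:53:16'),
    (2, 98, 2, 30.47, '2014-01-20 15:54:52'),
    (3, 97, 2, 40.05, '2014-01-20 15:57:47'),
    (4, 97, 1, 42.46, '2014-01-20 15:58:36'),
    (5, 97, 1, 39.99, '2014-01-20 16:01:13'),
    (6, 97, 2, 24.68, '2014-01-20 16:05:35'),
    (7, 98, 2, 28.0,  '2014-01-20 16:08:42'),
    (8, 98, 2, 26.75, '2014-01-20 16:13:23'),
])

# Group to find MIN(bid_id) per (buyer, item), then join back to offers
# so the full matching row (amount, time) comes from the same bid.
first_bids = conn.execute("""
    SELECT o.bid_id, o.buyer_id, o.item_id, o.amount, o.time
    FROM offers o
    JOIN (SELECT MIN(bid_id) AS bid_id FROM offers
          GROUP BY buyer_id, item_id) x ON x.bid_id = o.bid_id
    ORDER BY o.bid_id""").fetchall()
first_ids = [r[0] for r in first_bids]

# Same shape with MAX(bid_id) picks each user's last offer per item.
last_ids = [r[0] for r in conn.execute("""
    SELECT o.bid_id FROM offers o
    JOIN (SELECT MAX(bid_id) AS bid_id FROM offers
          GROUP BY buyer_id, item_id) x ON x.bid_id = o.bid_id
    ORDER BY o.bid_id""").fetchall()]
```

This reproduces the expected bid ids from the question: 1, 2, 3 for first offers and 5, 6, 8 for last offers.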
My code gives TOTAL HOURS in hours, but I am trying to output something like
```
TotalHours
8:36
```
where 8 represents the hour part and 36 the minutes part, i.e. the total hours a person has worked in a single day at the office.
```
with times as (
SELECT t1.EmplID
, t3.EmplName
, min(t1.RecTime) AS InTime
, max(t2.RecTime) AS [TimeOut]
, t1.RecDate AS [DateVisited]
FROM AtdRecord t1
INNER JOIN
AtdRecord t2
ON t1.EmplID = t2.EmplID
AND t1.RecDate = t2.RecDate
AND t1.RecTime < t2.RecTime
inner join
HrEmployee t3
ON t3.EmplID = t1.EmplID
group by
t1.EmplID
, t3.EmplName
, t1.RecDate
)
SELECT EmplID
, EmplName
, InTime
, [TimeOut]
, [DateVisited]
, DATEDIFF(Hour,InTime, [TimeOut]) TotalHours
from times
Order By EmplID, DateVisited
``` | Small change like this can be done
```
SELECT EmplID
, EmplName
, InTime
, [TimeOut]
, [DateVisited]
, CASE WHEN minpart=0
THEN CAST(hourpart as nvarchar(200))+':00'
ELSE CAST((hourpart-1) as nvarchar(200))+':'+ CAST(minpart as nvarchar(200))END as 'total time'
FROM
(
SELECT EmplID, EmplName, InTime, [TimeOut], [DateVisited],
DATEDIFF(Hour,InTime, [TimeOut]) as hourpart,
DATEDIFF(minute,InTime, [TimeOut])%60 as minpart
from times) source
``` | Very simply:
```
CONVERT(TIME,Date2 - Date1)
```
For example:
```
Declare @Date2 DATETIME = '2016-01-01 10:01:10.022'
Declare @Date1 DATETIME = '2016-01-01 10:00:00.000'
Select CONVERT(TIME,@Date2 - @Date1) as ElapsedTime
```
Yields:
```
ElapsedTime
----------------
00:01:10.0233333
(1 row(s) affected)
``` | DateDiff to output hours and minutes | [
"",
"sql",
"sql-server",
""
] |
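Outside SQL, the hour/minute split behind the answers above boils down to integer division on the total minutes. A minimal Python sketch (the datetimes are illustrative, and this is the plain total-minutes split rather than a line-for-line port of the SQL):

```python
from datetime import datetime

def hours_minutes(in_time, time_out):
    """Format the gap between two datetimes as H:MM, e.g. '8:36'."""
    total_minutes = int((time_out - in_time).total_seconds()) // 60
    return f"{total_minutes // 60}:{total_minutes % 60:02d}"

# An 09:00 to 17:36 shift gives the "8:36" shape the question asks for.
sample = hours_minutes(datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 17, 36))
```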
I am using SQL Server 2005.
I'm having some issues when executing this query.
My code is
```
ALTER PROCEDURE [dbo].[Get]
@ApplicantID int
AS
BEGIN
SELECT
isnull(M_EvalApplicationStatuses.EvalApplicationStatus,'') EvalApplicationStatus,
isnull(M_Users.CompletionMailSent,'') MailSent,
isnull(APP_Applications.FirstName,'') FirstName,
isnull(APP_Applications.LastName,'') LastName,
isnull(M_Users.UserName,'') UserName,
isnull(APP_Applications.DocTrackingGenComment,'') DocTrackingGenComment
FROM
APP_Applications
left outer join
M_Users
ON
APP_Applications.UserID = M_Users.UserID
left outer join
M_EvalApplicationStatuses
ON
APP_Applications.Status = M_EvalApplicationStatuses.EvalApplicationStatusID and M_EvalApplicationStatuses.Status = 1
WHERE
ApplicantID =@ApplicantID
END
```
This now works perfectly. But I want to get data from another table, so I just left join that table to this query. Here I have found the issue. My new table name is `[APP_DocumentTracking]`, and below is the query I used to retrieve data.
```
SELECT DISTINCT
isnull(APP_DocumentTracking.Date,'') Date,
isnull(APP_DocumentTracking.IntervTime,'') IntervTime,
isnull(APP_DocumentTracking.Telephoneinterview,'') Telephoneinterview
FROM [APP_DocumentTracking]
where APP_DocumentTracking.ApplicantID = @ApplicantID
```
These two queries work fine separately, but I want to join them so the result comes back in one table. How can I do this? Please help.
```
left outer join [APP_DocumentTracking] on [APP_DocumentTracking].ApplicantID=APP_Applications.ApplicantID
WHERE [APP_DocumentTracking].ApplicantID =@ApplicantID END
``` | Just join as a subquery with first and second one .Please update the structure in SqlFiddle or here Itself.Then it will be easy to give u the answer...
Here is the code
```
ALTER PROCEDURE [dbo].[Get]
@ApplicantID int
AS
BEGIN
SELECT
isnull(M_EvalApplicationStatuses.EvalApplicationStatus,'') EvalApplicationStatus,
isnull(M_Users.CompletionMailSent,'') MailSent,
isnull(APP_Applications.FirstName,'') FirstName,
isnull(APP_Applications.LastName,'') LastName,
isnull(M_Users.UserName,'') UserName,
isnull(APP_Applications.DocTrackingGenComment,'') DocTrackingGenComment
,Trace.*
FROM
APP_Applications
left outer join
M_Users
ON
APP_Applications.UserID = M_Users.UserID
left outer join
M_EvalApplicationStatuses
ON
APP_Applications.Status = M_EvalApplicationStatuses.EvalApplicationStatusID and M_EvalApplicationStatuses.Status = 1
LEFT JOIN
(SELECT DISTINCT
isnull(APP_DocumentTracking.Date,'') Date,
isnull(APP_DocumentTracking.IntervTime,'') IntervTime,
isnull(APP_DocumentTracking.Telephoneinterview,'') Telephoneinterview
FROM [APP_DocumentTracking]
where APP_DocumentTracking.ApplicantID = @ApplicantID
) AS Trace
ON APP_Applications.ApplicantID = Trace.ApplicantID
WHERE APP_Applications.ApplicantID =@ApplicantID
END
```
Cheers. | SQL Query join issues | [
"",
"sql",
"sql-server",
"join",
""
] |
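The fix in the chosen answer (qualifying `ApplicantID` with its table in the join and `WHERE` clause) can be illustrated with a toy pair of tables in SQLite; the names are simplified stand-ins for the question's tables, and `IFNULL` plays the role of `ISNULL`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applications (applicant_id INTEGER, first_name TEXT)")
conn.execute("CREATE TABLE doc_tracking (applicant_id INTEGER, interv_time TEXT)")
conn.executemany("INSERT INTO applications VALUES (?, ?)",
                 [(1, 'Sara'), (2, 'Bob')])
conn.execute("INSERT INTO doc_tracking VALUES (1, '10:00')")

# Both sides qualify applicant_id, avoiding the ambiguous-column error;
# LEFT JOIN keeps applicants with no tracking row (interv_time falls back to '').
rows = conn.execute("""
    SELECT a.applicant_id, a.first_name,
           IFNULL(d.interv_time, '') AS interv_time
    FROM applications a
    LEFT JOIN doc_tracking d ON d.applicant_id = a.applicant_id
    ORDER BY a.applicant_id""").fetchall()
```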
I have table like this:
```
+--------+--------+--------+
| color | fruit | amount |
+--------+--------+--------+
| red | cherry | 124 |
| red | plum | 23 |
| green | gauva | 119 |
| green | pear | 14 |
| orange | orange | 23 |
+--------+--------+--------+
```
I want to add a rank column like this.
```
+------+--------+--------+--------+
| rank | color | fruit | amount |
+------+--------+--------+--------+
| 1 | red | cherry | 124 |
| 2 | red | plum | 23 |
| 1 | green | gauva | 119 |
| 2 | green | pear | 14 |
| 1 | orange | orange | 23 |
+------+--------+--------+--------+
```
I need to rank it based on amount for each color (separately).
Is this possible? | ```
select
color, fruit, amount,
case when (if(@prev_color != color, @rank:=1, @rank:=@rank + 1)) is null then null
when (@prev_color := color) is null then null
else @rank end as rank
from your_table
, (select @rank:=0, @prev_color := null) v
order by color, amount desc
``` | If the ranks are given based on the high `amount` of `color` availability, then you can also find number of rows less or more than with the `amount` value. That number of rows would be the `rank` for the concerned `color`.
```
select
( select count( color ) from fruits_table
where color = f.color and amount >= f.amount
) as 'rank',
color, fruit, amount
from fruits_table f
order by color, amount desc
-- order by field( color, 'red', 'green', 'orange' ), amount desc
;
```
You can alter the order of fields by `color` to show on top or bottom as desired using the `field` function. See the commented `order by` clause above. | Rank sql columns | [
"",
"mysql",
"sql",
""
] |
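The correlated-count idea from the second answer above can be checked against the question's sample data in SQLite (it assumes no ties in `amount` within a color, or equal rows would share an inflated rank):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruits (color TEXT, fruit TEXT, amount INTEGER)")
conn.executemany("INSERT INTO fruits VALUES (?,?,?)", [
    ('red', 'cherry', 124), ('red', 'plum', 23),
    ('green', 'gauva', 119), ('green', 'pear', 14),
    ('orange', 'orange', 23),
])

# A row's rank within its color = number of same-color rows with amount >= its own.
rows = conn.execute("""
    SELECT (SELECT COUNT(*) FROM fruits
            WHERE color = f.color AND amount >= f.amount) AS rank,
           color, fruit, amount
    FROM fruits f
    ORDER BY color, amount DESC""").fetchall()
ranks = {(r[1], r[2]): r[0] for r in rows}
```

This reproduces the expected output: rank 1 for cherry, gauva, and orange, and rank 2 for plum and pear.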
Say I have a table called `PHRASES` containing some text strings
```
+--+---------------+
|ID|PHRASE |
+--+---------------+
|0 |"HELLO BYE YES"|
+--+---------------+
|1 |"NO WHY NOT" |
+--+---------------+
|2 |"NO YES" |
+--+---------------+
```
And I want to add the number of times each of the following words occurs to the `OCCURRENCE` column; let's call this table `KEYWORDS`:
```
+--------+----------+
|KEYWORD |OCCURRENCE|
+--------+----------+
|"YES" |NULL |
+--------+----------+
|"NO" |NULL |
+--------+----------+
|"HELLO" |NULL |
+--------+----------+
|"CHEESE"|NULL |
+--------+----------+
```
I now want to write a query that would update `KEYWORDS` to the following:
```
+--------+----------+
|KEYWORD |OCCURRENCE|
+--------+----------+
|"YES" |2 |
+--------+----------+
|"NO" |2 |
+--------+----------+
|"HELLO" |1 |
+--------+----------+
|"CHEESE"|0 |
+--------+----------+
```
Note that I have already got a function called `dbo.RegExIsMatch` that can take care of string matches, such that it returns `1` if parameter 1 matches against the string in parameter 2:
```
UPDATE KEYWORDS SET OCCURRENCE =
(
SELECT SUM
(
-- the following returns 1 if the keyword exists in the phrase, or 0 otherwise
CASE WHEN dbo.RegExIsMatch('.*' + KEYWORDS.KEYWORD + '.*',PHRASES.PHRASE,1) = 1 THEN 1 ELSE 0 END
)
FROM PHRASES
CROSS JOIN KEYWORDS
)
```
This doesn't work, though; it just ends up filling each row with the same number. I'm sure this is a simple problem; I'm just struggling to get my head around SQL-think.
```
MERGE INTO KEYWORDS masterList
USING (
SELECT COUNT(*) AS OCCURRENCE,KEYWORDS.KEYWORD AS KEYWORD FROM
KEYWORDS AS keywordList
CROSS JOIN PHRASES AS phraseList
WHERE (dbo.RegExIsMatch('.*' + keywordList.KEYWORD + '.*',phraseList.PHRASE,1) = 1)
GROUP BY KEYWORD
) frequencyList
ON (masterList.KEYWORD = frequencyList.KEYWORD)
WHEN MATCHED THEN
UPDATE SET masterList.OCCURRENCE = frequencyList.OCCURRENCE;
``` | Your query has three different tables, but the question only two. Is this what you mean?
```
UPDATE Keywords
SET OCCURRENCE = (SELECT SUM(CASE WHEN dbo.RegExIsMatch('.*' + KEYWORDS.KEYWORD + '.*',PHRASES.PHRASE,1) = 1
THEN 1 ELSE 0
END)
FROM PHRASES
);
```
Otherwise, if you have three tables, you need to correlate the subquery with the outer table. | How do I count the number of times each keyword from a table appears in a table of phrases? | [
"",
"sql",
"sql-server-2008",
""
] |
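The count-per-keyword shape from the accepted `MERGE` answer above can be sketched in SQLite with the question's sample data. `LIKE` stands in for `dbo.RegExIsMatch` here, so substrings also match ('NO' matches 'NOT'), which happens to agree with the `.*keyword.*` pattern in the question but would differ from a word-boundary regex:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phrases (id INTEGER, phrase TEXT)")
conn.executemany("INSERT INTO phrases VALUES (?, ?)",
                 [(0, 'HELLO BYE YES'), (1, 'NO WHY NOT'), (2, 'NO YES')])
conn.execute("CREATE TABLE keywords (keyword TEXT, occurrence INTEGER)")
conn.executemany("INSERT INTO keywords VALUES (?, NULL)",
                 [('YES',), ('NO',), ('HELLO',), ('CHEESE',)])

# For each keyword, count the phrases that contain it, then write the count
# back; the correlated subquery plays the role of the MERGE source query.
conn.execute("""
    UPDATE keywords SET occurrence =
        (SELECT COUNT(*) FROM phrases
         WHERE phrases.phrase LIKE '%' || keywords.keyword || '%')""")
counts = dict(conn.execute("SELECT keyword, occurrence FROM keywords"))
```

This yields the expected output from the question, including 0 for 'CHEESE', which the MATCHED-only `MERGE` would leave at NULL.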