| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a running inventory table of different products that records the inventory count after every transaction. Transactions do not happen every day, so the table does not have a running daily count.
I need to have all dates listed for each product so that I can sum and average the counts over a period of time.
inventory
```
DATE ID Qty Count
2014-05-13 123 12 12
2014-05-19 123 -1 11
2014-05-28 123 -1 10
2014-05-29 123 -3 7
2014-05-10 124 5 5
2014-05-15 124 -1 4
2014-05-21 124 -1 3
2014-05-23 124 -3 0
```
I have a table that includes dates for a join, but I am not sure how to join in the missing dates across multiple products.
I need the query to return the counts over the selected period, but also include the dates in between, as follows.
```
DATE ID Qty Count
2014-05-01  123  0   0
2014-05-02  123  0   0
2014-05-03  123  0   0
2014-05-04  123  0   0
2014-05-05  123  0   0
2014-05-06  123  0   0
2014-05-07  123  0   0
2014-05-08  123  0   0
2014-05-09  123  0   0
2014-05-10  123  0   0
2014-05-11  123  0   0
2014-05-12  123  0   0
2014-05-13 123 12 12
2014-05-14  123  0   12
2014-05-15  123  0   12
2014-05-16  123  0   12
2014-05-17  123  0   12
2014-05-18  123  0   12
2014-05-19 123 -1 11
2014-05-20  123  0   11
2014-05-21  123  0   11
2014-05-22  123  0   11
2014-05-23  123  0   11
2014-05-24  123  0   11
2014-05-25  123  0   11
2014-05-26  123  0   11
2014-05-27  123  0   11
2014-05-28 123 -1 10
2014-05-29 123 -3 7
2014-05-30  123  0   7
2014-05-31  123  0   7
2014-05-01  124  0   0
2014-05-02  124  0   0
2014-05-03  124  0   0
2014-05-04  124  0   0
2014-05-05  124  0   0
2014-05-06  124  0   0
2014-05-07  124  0   0
2014-05-08  124  0   0
2014-05-09  124  0   0
2014-05-10 124 5 5
2014-05-11 124 0 5
2014-05-12 124 0 5
2014-05-13 124 0 5
2014-05-14 124 0 5
2014-05-15 124 -1 4
2014-05-16 124 0 4
2014-05-17 124 0 4
2014-05-18 124 0 4
2014-05-19 124 0 4
2014-05-20 124 0 4
2014-05-21 124 -1 3
2014-05-22 124 0 3
2014-05-23 124 -3 0
2014-05-24 124 0 0
2014-05-25 124 0 0
2014-05-26 124 0 0
2014-05-27 124 0 0
2014-05-28 124 0 0
2014-05-29 124 0 0
2014-05-30 124 0 0
2014-05-31 124 0 0
``` | Use `inv join inv` to build up at least 31 rows and construct a table of 31 days. Then join the ids, and finally the original table.
```
select a.d, a.id, a.qty,
if(a.id=@lastid, @count:=@count+a.qty, @count:=a.count) `count`,
@lastid:=a.id _lastid
from (
select a.d, b.id, ifnull(c.qty, 0) qty, ifnull(c.count, 0) `count`
from (
select adddate('2014-05-01', @row) d, @row:=@row+1 i
from inv a
join inv b
join (select @row := 0) c
limit 31) a
join (
select distinct id
from inv) b
left join inv c on a.d = c.date and b.id = c.id
order by b.id, a.d) a
join (select @count := 0, @lastid := 0) b;
```
[fiddle](http://sqlfiddle.com/#!2/70d3b/32) | Here are the steps needed:
1. Get all dates between the two given dates.
2. Get the initial stock per ID. This is: get the first date on or after the given start date for that ID, read this record's stock and subtract its transaction quantity.
3. For every date get the previous stock. If there is a record for this date, then add its transaction quantity and compare the result with its stock quantity. Throw an error if values don't match. (This is because you store data redundantly; a record's quantity must equal the quantity of the previous record plus its own transaction quantity. But data can always be inconsistent, so better check it.) Show the new stock and the difference to the previous stock.
All this would typically be achieved with a recursive CTE for the dates, a derived table for all initial stocks (ideally using a KEEP DENSE\_RANK function), and the LAG function to look into the previous record.
* MySQL doesn't support recursive CTEs - or CTEs at all for that matter. You can emulate this with a big enough table and a variable.
* MySQL doesn't support the KEEP DENSE\_RANK function. You can work with another derived table instead to find the minimum date per ID first.
* MySQL doesn't support the LAG function. You can work with a variable in MySQL instead.
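MySQL of that era cannot run this, but the shape of the approach — a generated date series, one calendar per product, a left join, and a running total — can be sketched in SQLite (which has recursive CTEs and, from 3.25, window functions) driven from Python. As an illustrative simplification, a running `SUM()` window stands in for LAG, and the rows mirror the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inv(date TEXT, id INTEGER, qty INTEGER);
INSERT INTO inv VALUES
  ('2014-05-13', 123, 12), ('2014-05-19', 123, -1),
  ('2014-05-28', 123, -1), ('2014-05-29', 123, -3),
  ('2014-05-10', 124,  5), ('2014-05-15', 124, -1),
  ('2014-05-21', 124, -1), ('2014-05-23', 124, -3);
""")

rows = conn.execute("""
WITH RECURSIVE dates(d) AS (              -- step 1: every date in the period
  SELECT '2014-05-01'
  UNION ALL
  SELECT date(d, '+1 day') FROM dates WHERE d < '2014-05-31'
)
SELECT dates.d, ids.id,
       COALESCE(inv.qty, 0) AS qty,
       SUM(COALESCE(inv.qty, 0))
         OVER (PARTITION BY ids.id ORDER BY dates.d) AS running_count
FROM dates
CROSS JOIN (SELECT DISTINCT id FROM inv) AS ids   -- one calendar per product
LEFT JOIN inv ON inv.date = dates.d AND inv.id = ids.id  -- fill gaps with 0
ORDER BY ids.id, dates.d
""").fetchall()

print(len(rows))   # 62 = 31 days x 2 products
print(rows[12])    # ('2014-05-13', 123, 12, 12)
print(rows[28])    # ('2014-05-29', 123, -3, 7)
```

The running sum of the transaction quantities reproduces the stored `Count` column directly, which sidesteps the consistency check — in a real system you would still want to verify the redundant column as described above.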
Having said this, I suggest using a programming language instead (Java, C#, PHP, whatever). You would just select the raw data with SQL, use a loop, and simply do all processing on a per-record basis. This is much more convenient (and readable) than building a very complex query that does all that's needed. You can do this in SQL, even MySQL; I just don't recommend it. | SQL Join Inventory Count Table to Date table | [
"mysql",
"sql",
"join"
] |
When retrieving a list of objects from SQL Server, I am getting the following error:
> System.Data.SqlClient.SqlException
> Invalid column name 'AccountName'
> Invalid column name 'AccountNumber'
I understand that the database does not contain those columns, but I don't believe my code should be passing those columns for retrieval.
I have a class that is derived from an Entity Framework code-first class, with a few extra properties added to it.
For some reason, when I make the database call, it passes the extra properties created by the `TransactionHistoryGrid` class to SQL Server, when it should only be passing the `TransactionHistory` properties.
Here is the code:
Entity Framework code-first class:
```
public partial class TransactionHistory : Transaction
{
public string BillingAddressCity { get; set; }
public string BillingAddressCompany { get; set; }
public string BillingAddressCountry { get; set; }
public string BillingAddressFax { get; set; }
public string BillingAddressFirst { get; set; }
public string BillingAddressLast { get; set; }
public string BillingAddressPhone { get; set; }
public string BillingAddressState { get; set; }
public string BillingAddressStreet { get; set; }
public string BillingAddressZip { get; set; }
[System.ComponentModel.DataAnnotations.Schema.NotMapped]
public Address BillingAddress { get; set; }
[System.ComponentModel.DataAnnotations.Schema.NotMapped]
public Address ShippingAddress { get; set; }
public System.Guid ID { get; set; }
public virtual Account Account { get; set; }
}
```
Class to populate Kendo Grid:
```
public class TransactionHistoryGrid : TransactionHistory
{
public string AccountName { get; set; }
public string AccountNumber { get; set; }
public static List<TransactionHistoryGrid> GetTransactionHistoryGrid()
{
List<TransactionHistoryGrid> grids = new List<TransactionHistoryGrid>();
// This is where the trouble begins:
foreach (TransactionHistory t in GetAllTransactionHistories())
{
// It doesn't make it this far.
var grid = new TransactionHistoryGrid()
{
Account = t.Account,
AccountName = t.Account.AccountName,
AccountNumber = t.Account.AccountNumber
};
grids.Add(grid);
}
return grids;
}
}
```
Method to retrieve data:
```
public static List<TransactionHistory> GetAllTransactionHistories()
{
using (DatabaseContext context = new DatabaseContext())
{
// Error gets thrown here... Why are the extra properties being read??
return context.Set<TransactionHistory>().ToList();
}
}
```
Usage from controller:
```
public ActionResult TransactionHistory_Read([DataSourceRequest]DataSourceRequest request = null)
{
List<TransactionHistoryGrid> transactionHistory = TransactionHistoryGrid.GetTransactionHistoryGrid();
DataSourceResult result = transactionHistory.ToDataSourceResult(request);
return Json(result, JsonRequestBehavior.AllowGet);
}
```
Thoughts? Suggestions?
This is what SQL Server Profiler returns:
(`AccountName`, `AccountNumber`, and `TransactionHistoryGrid` should never be sent to SQL. Why is it?)
```
SELECT
1 AS [C1],
[Extent1].[Discriminator] AS [Discriminator],
...
[Extent1].[AccountName] AS [AccountName],
[Extent1].[AccountNumber] AS [AccountNumber],
[Extent1].[Account_ID] AS [Account_ID],
[Extent1].[BillingStatement_ID] AS [BillingStatement_ID]
FROM [dbo].[TransactionHistories] AS [Extent1]
WHERE [Extent1].[Discriminator] IN (N'TransactionHistoryGrid',N'TransactionHistory')
``` | Reposting from the comment thread:
Type Discovery as described **[here](http://msdn.microsoft.com/en-us/data/jj679962.aspx)** is probably at play. The **[[NotMapped]](http://msdn.microsoft.com/en-us/library/system.componentmodel.dataannotations.schema.notmappedattribute%28v=vs.110%29.aspx)** attribute
has the AttributeTargets.Class, AttributeTargets.Property and AttributeTargets.Field flags, so unless you ever intend to manipulate the TransactionHistoryGrid properties directly via the DatabaseContext, you could just put the attribute on the entire class.
```
[NotMapped]
public class TransactionHistoryGrid : TransactionHistory
{
...
}
```
The alternative would be to move the TransactionHistoryGrid class into another assembly. | If these are your EF models and the property is not part of the database structure you should be annotating the properties with the **[NotMapped]** Attribute so that the sql generation does not pick up the property and try to add it into the select statement.
Hopefully this site will help explain the annotations:
<http://www.entityframeworktutorial.net/code-first/dataannotation-in-code-first.aspx>
I personally prefer to transform my database models into view models. This way I can ensure any non-database properties can be managed outside of the "data gathering layer" section of my project.
I am not an EF person as I prefer to use Telerik's DataAccess as my ORM solution. | Derived Class Properties Getting Passed to SQL Server via Entity Framework | [
"sql",
"entity",
"derived"
] |
I have two tables, and I want to merge their data into one table. How is this possible?
My data looks like this:
```
table1 (col1 is column name)
col1
------------------
data1
data2
data3
data4
data5
table2 (col1 is column name)
col1
-----------------------
data5
data6
data7
expected Result
col1 and col2 are my columns
col1 col2
--------------------------
data1 data5
data2 data6
data3 data7
data4
data5
``` | You can do this by adding a `join` key using `row_number()`:
```
select t1.col1, t2.col1 as col2
from (select col1, row_number() over (order by (select NULL)) as seqnum
from table1
) t1 full outer join
(select col1, row_number() over (order by (select NULL)) as seqnum
from table2
) t2
on t1.seqnum = t2.seqnum;
```
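The pairing-by-position idea can be exercised quickly in SQLite from Python (window functions require SQLite 3.25+). A plain `LEFT JOIN` is enough for this particular sample because `table1` has at least as many rows as `table2`; a `FULL OUTER JOIN` covers the general case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1(col1 TEXT);
CREATE TABLE table2(col1 TEXT);
INSERT INTO table1 VALUES ('data1'), ('data2'), ('data3'), ('data4'), ('data5');
INSERT INTO table2 VALUES ('data5'), ('data6'), ('data7');
""")

rows = conn.execute("""
SELECT t1.col1, t2.col1 AS col2
FROM (SELECT col1, ROW_NUMBER() OVER (ORDER BY col1) AS seqnum FROM table1) t1
LEFT JOIN
     (SELECT col1, ROW_NUMBER() OVER (ORDER BY col1) AS seqnum FROM table2) t2
  ON t1.seqnum = t2.seqnum
ORDER BY t1.seqnum
""").fetchall()

for row in rows:
    print(row)   # ('data1', 'data5') ... ('data4', None), ('data5', None)
```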
Note: the ordering of the two columns is not guaranteed to be the same as in your example. You would need a column to specify the ordering; since you have only one column, that doesn't seem to be available. But this will produce the two columns as in your question. | ```
SELECT col1, col2
FROM (
SELECT col1, ROW_NUMBER() OVER (ORDER BY col1) rn FROM table1
) a
FULL OUTER JOIN (
SELECT col1 col2, ROW_NUMBER() OVER (ORDER BY col1) rn FROM table2
) b
ON a.rn = b.rn
ORDER BY COALESCE(a.rn, b.rn);
```
Here - lacking any other sort criteria - I order the columns by their value. If you have some other sort criteria, you'll have to change the `ORDER BY` clause in `OVER()` to reflect that. The join key is generated dynamically using ROW\_NUMBER() over the specified ordering.
[An SQLfiddle to test with](http://sqlfiddle.com/#!6/2b387/3). | merge row with two column in sql server | [
"sql",
"sql-server",
"sql-server-2008-r2",
"sql-server-2012"
] |
I use the following SQL query to retrieve all entries that are older than `payment_target` days.
```
SELECT * from orders WHERE(EXTRACT(EPOCH FROM age(CURRENT_DATE,
date_trunc('day', orders.created_at))) - orders.payment_target*86400 >= 0)
```
However, I want to modify this query so that `orders.invoice_sent_at` is used as the calculation basis if it is not null. Otherwise, `orders.created_at` should be used.
I tried it with the following query but I guess it is more pseudo code than valid SQL. I don't know how I can set the attribute to be used in the statement block.
```
SELECT * from orders where (EXTRACT(EPOCH FROM age(CURRENT_DATE,
date_trunc('day', IF orders.invoice_sent_at IS NOT NULL
BEGIN orders.invoice_sent_at END
ELSE
BEGIN orders.created_at END ))) - orders.payment_target*86400 >= 0)
``` | Instead of `if` you can use `coalesce` which will work on every database supporting ANSI SQL:
```
coalesce( orders.invoice_sent_at, orders.created_at) - orders.payment_target*86400 >= 0)
``` | If I understand your question correctly, you can use a declared variable.
This will also make your code more readable, something along the lines of:
```
DECLARE @DUEDATE DATETIME
SET @DUEDATE = ISNULL(("GET THE VALUE FROM orders.invoice_sent_at"), ("GET THE VALUE FROM orders.created_at"))
```
Use @DUEDATE in your select. | SQL: conditional SELECT | [
"sql",
"postgresql"
] |
This is a question about PL/SQL and Oracle. I am new to this. Please help!
I have 2 tables: Table A and Table B
```
Table A: ID, Date
Table B: Name, Address
```
How do I join the 2 tables and have the query return only the row with the latest date per ID? Also, it should be restricted to IDs in a specified list.
My current query returns this:
```
1 | 1/1/2013 | Apple | 123 Malcolm
1 | 1/2/2013 | Apple | 123 Malcolm
1 | 1/3/2013 | Apple | 123 Malcolm
3 | 1/1/2013 | Orange| 124 Malcolm
3 | 1/2/2013 | Orange| 124 Malcolm
```
How do I get it to return just:
```
1 | 1/3/2013 | Apple | 123 Malcolm
3 | 1/2/2013 | Orange| 124 Malcolm
```
My current query is:
```
select unique(ID), a.Date, b.Name, b.Address
from tableA a
join tableB b
on a.ID = b.ID
where a.Date > TO_DATE('12/31/2012', 'mm/dd/yyyy') and a.ID in ('1', '3')
```
Thanks! | You need to `group` your result set. Also, you need an aggregate function, in this case `MAX()`.
This should work:
```
select unique(ID), MAX(a.Date), b.Name, b.Address
from tableA a
join tableB b
on a.ID = b.ID
where a.Date > TO_DATE('12/31/2012', 'mm/dd/yyyy') and a.ID in ('1', '3')
group by ID, b.Name, b.Address
```
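A quick sanity check of the grouping approach, run in SQLite from Python with made-up rows shaped like the question's data (the `Date` column is renamed `DT` here to avoid clashing with built-in names, and `tableB` is assumed to also carry the `ID` used in the join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA(ID INTEGER, DT TEXT);
CREATE TABLE tableB(ID INTEGER, Name TEXT, Address TEXT);
INSERT INTO tableA VALUES (1, '2013-01-01'), (1, '2013-01-02'), (1, '2013-01-03'),
                          (3, '2013-01-01'), (3, '2013-01-02');
INSERT INTO tableB VALUES (1, 'Apple', '123 Malcolm'), (3, 'Orange', '124 Malcolm');
""")

rows = conn.execute("""
SELECT a.ID, MAX(a.DT) AS latest, b.Name, b.Address
FROM tableA a
JOIN tableB b ON a.ID = b.ID
WHERE a.DT > '2012-12-31' AND a.ID IN (1, 3)
GROUP BY a.ID, b.Name, b.Address
ORDER BY a.ID
""").fetchall()

for row in rows:
    print(row)   # one row per ID, carrying its latest date
```

Only the latest date per ID survives, because `MAX(a.DT)` collapses each group defined by `GROUP BY a.ID, b.Name, b.Address`.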
You can read up on those methods and more over at <http://www.w3schools.com/sql>
Link to the `GROUP BY` explanation: <http://www.w3schools.com/sql/sql_groupby.asp>
Link to the `MAX()` explanation: <http://www.w3schools.com/sql/sql_func_max.asp> | There's lots of ways to do this:
This is one. It gets a subset of max dates by ID and then joins it to what you already have, thereby limiting the results to the max of each group.
```
Select A.ID, A.Date, B.Name, B.Address
FROM A
INNER JOIN B
on A.ID = B.ID
INNER JOIN (Select max(date) maxDate, ID from A group by ID) C
on C.ID=A.ID and C.MaxDate = A.Date
WHERE
A.ID IN ('1','3')
``` | PLSQL & Oracle - Join 2 tables and only return based on latest date | [
"sql",
"oracle",
"plsql"
] |
I have a lookup table whose records are referenced in other tables. For example, a countries lookup table, and a user table that references the country id from the countries table.
I want to know how I can get a count of references, and all the rows in other tables which reference a given country id.
If a reference to the row is found anywhere else, I need to show an alert to the user that this record is being referenced and cannot be deleted, and list all the referencing rows in a grid.
I have seen a similar topic at
[SQL Server: how to know if any row is referencing the row to delete](https://stackoverflow.com/questions/6301968/sql-server-how-to-know-if-any-row-is-referencing-the-row-to-delete)
But the answer didn't help at all, and gave an error. | ```
DECLARE @TableName sysname = 'YourTable'
DECLARE @Command VARCHAR(MAX)
SELECT @Command = isnull(@Command + ' UNION ALL ', '') +
'Select count(*) ,''' + SCHEMA_NAME(obj.schema_id)+'.'+OBJECT_NAME(fkc.parent_object_id) + ''' '+
'From ' + SCHEMA_NAME(obj.schema_id)+'.'+OBJECT_NAME(fkc.parent_object_id) + ' '+
'Where ' + col.name+ ' IS NOT NULL'
from sys.foreign_key_columns fkc
INNER JOIN sys.columns col on fkc.parent_object_id = col.object_id and fkc.parent_column_id = col.column_id
INNER JOIN sys.objects obj ON obj.object_id = col.object_id
where object_name(referenced_object_id) = @TableName
execute (@Command)
``` | If you set up the reference as a foreign key, then SQL Server can easily help you by [viewing dependencies](http://msdn.microsoft.com/en-IN/library/ms190624.aspx), and you can use a select query to [get all the tables](http://blog.sqlauthority.com/2010/02/04/sql-server-get-the-list-of-object-dependencies-sp_depends-and-information_schema-routines-and-sys-dm_sql_referencing_entities/)
```
select * from countrytable where countryid in
(select countryid from usertable )
```
or
```
select * from countrytable c
inner join usertable u on c.countryid = u.countryid
``` | How to know if a row in a table is being referenced in other tables before deletion? | [
"sql",
"sql-server",
"foreign-keys"
] |
I have a table with records of employees that shows a relationship of employees and who they report to:
```
From_ID position TO_ID position
----------------------------------------
1 Lowest_employee 3 employee
3 employee 4 employee
4 employee 5 BOSS
2 Lowest_employee 6 employee
6 employee 3 employee
10 Lowest_employee 50 BOSS2
```
I would like to show results that look like this, with the employee / boss IDs:
```
EmployeeID BossID
--------------------
1 5
2 5
10 50
```

This means employees 1 and 2 report to ID 5 and employee 10 reports to another boss with ID 50.
I know I need to use a CTE and a recursive query, but cannot understand how it can be done; I'm new to recursive CTE queries.
I read this article but it doesn't make any sense to me [MS link](http://technet.microsoft.com/en-us/library/ms186243%28v=sql.105%29.aspx)
Any help with the query required to achieve this would be useful. | This includes setting up test data; however, I think this is what you want:
Test Data:
```
DECLARE @Table TABLE
(
From_ID int,
TO_ID int
)
INSERT INTO @Table VALUES(1,3)
INSERT INTO @Table VALUES(3,4)
INSERT INTO @Table VALUES(4,5)
INSERT INTO @Table VALUES(2,6)
INSERT INTO @Table VALUES(6,3)
INSERT INTO @Table VALUES(10,50)
```
Query to get answer:
```
;WITH Hierarchy (Employee, Superior, QueryLevel)
AS
(
--root is all employees that have no subordinates
SELECT E.From_ID, E.TO_ID, 1
FROM @Table E
LEFT
JOIN @Table S
ON S.TO_ID = E.From_ID
WHERE S.TO_ID IS NULL
--recurse up tree to final superior
UNION ALL
SELECT H.Employee, S.TO_ID, H.QueryLevel + 1
FROM Hierarchy H
JOIN @Table S
ON S.From_ID = H.Superior
)
SELECT Employee, Superior
FROM
(
SELECT *, ROW_NUMBER() OVER(PARTITION BY Employee ORDER BY QueryLevel DESC) AS RowNumber
FROM Hierarchy
) H
WHERE RowNumber = 1
```
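The same recursion can be reproduced in SQLite (3.25+ for `ROW_NUMBER`) from Python, which is handy for experimenting without a SQL Server instance; the table name `reports` is made up here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reports(from_id INTEGER, to_id INTEGER);
INSERT INTO reports VALUES (1,3), (3,4), (4,5), (2,6), (6,3), (10,50);
""")

rows = conn.execute("""
WITH RECURSIVE hierarchy(employee, superior, lvl) AS (
  -- root: employees nobody reports to
  SELECT e.from_id, e.to_id, 1
  FROM reports e
  LEFT JOIN reports s ON s.to_id = e.from_id
  WHERE s.to_id IS NULL
  UNION ALL
  -- climb one level towards the final superior
  SELECT h.employee, s.to_id, h.lvl + 1
  FROM hierarchy h
  JOIN reports s ON s.from_id = h.superior
)
SELECT employee, superior
FROM (SELECT h.*, ROW_NUMBER() OVER (PARTITION BY employee ORDER BY lvl DESC) AS rn
      FROM hierarchy h)
WHERE rn = 1
ORDER BY employee
""").fetchall()

print(rows)   # [(1, 5), (2, 5), (10, 50)]
```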
Essentially, this works by:
1) get all employees with no reportees (the root)
2) recurses up through the bosses, recording the 'level'
3) use over/partition to select only the 'final' boss | ```
WITH q (employee, boss) AS
(
SELECT fromId, toId
FROM mytable
WHERE fromId NOT IN
(
SELECT toId
FROM mytable
)
UNION ALL
SELECT employee, toId
FROM q
JOIN mytable t
ON t.fromId = boss
)
SELECT *
FROM q
WHERE boss NOT IN
(
SELECT fromId
FROM mytable
)
``` | CTE Recursive Queries | [
"sql",
"sql-server",
"recursion",
"common-table-expression"
] |
What is the difference between a `cursor` and a `view`?
Because none of them store data in the database. | A cursor is defined and used within the scope of a stored procedure (it is used with PL/SQL).
On the other hand, a view is a database object (similar to a table), which can also be used outside of stored procedures, e.g. in queries (it can be used with both SQL and PL/SQL).
**Reference**:
1. [Views on Oracle Database Concepts](http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#i20690)
2. [Cursors on Oracle Magazine](http://www.oracle.com/technetwork/issue-archive/2013/13-mar/o23plsql-1906474.html) | A view is a pre-defined query which is stored in the database and can be used much like a table.
A cursor is a data structure which provides access to the rowset returned by a query.
Share and enjoy. | What is the difference between cursor and view? | [
"sql",
"oracle",
"plsql"
] |
I have a table "User" with 5 columns 'id','firstname','lastname','age','email'.
Is it possible to execute this sql command?
```
SELECT LTRIM(RTRIM(firstname,lastname,email)) FROM User
```
All I need to select is the 3 columns, trimmed. | You have to handle each column separately, as in:
```
SELECT LTRIM(RTRIM(firstname)) as firstname,
LTRIM(RTRIM(lastname)) as lastname,
LTRIM(RTRIM(email)) as email
FROM [User]
``` | You need to trim each field separately:
```
select ltrim(rtrim(firstname)), ltrim(rtrim(lastname)), ltrim(rtrim(email))
from User
``` | TRIM specific column names in SELECT statement | [
"sql",
"sql-server"
] |
Say I have the following schema:
```
create table client (clientid int)
create table mv (clientid int, fundid int, date datetime, value decimal)
insert into client values(1)
insert into mv values(1, 1, '1 may 2010', 35)
insert into mv values (1,1, '1 may 2011', 434)
insert into mv values (1, 2, '1 may 2011', 635)
```
The first table represents the client, and the second table represents their market value and the fund they are invested in.
I need to display their market value per fund at 2 dates, but display 0 for a date if I can't find a record for that date.
Here is my attempt:
```
select
c.clientid,
mvstart.fundid startfundid,
mvend.fundid endfundid,
mvstart.value startvalue,
mvend.value endvalue
from client c
left join mv mvstart
on c.clientid = mvstart.clientid
and mvstart.date = '1 may 2010'
left join mv mvend
on c.clientid = mvend.clientid
and mvend.date = '1 may 2011'
```
Which produces:
```
CLIENTID STARTFUNDID ENDFUNDID STARTVALUE ENDVALUE
1 1 1 35 434
1 1 2 35 635
```
I don't understand why the second row has a startvalue of 35.
I need to have the following output:
```
CLIENTID FUNDID STARTVALUE ENDVALUE
1 1 35 434
1 2 0 635
```
Can anyone help me join correctly or explain why my query produces 35 as the startvalue for the 2nd row?
Here's a [SQLFiddle](http://www.sqlfiddle.com/#!3/0090f/7) | Try this query without joins:
```
SELECT mv.clientid,
mv.fundid,
MAX(CASE WHEN date = '1 may 2010' THEN value ELSE 0 END) as startvalue,
       MAX(CASE WHEN date = '1 may 2011' THEN value ELSE 0 END) as endvalue
FROM mv
GROUP BY clientid,fundid
```
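The conditional-aggregation trick is easy to verify from Python with SQLite (dates stored as ISO text and integer values, for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mv(clientid INTEGER, fundid INTEGER, d TEXT, value INTEGER);
INSERT INTO mv VALUES (1, 1, '2010-05-01', 35),
                      (1, 1, '2011-05-01', 434),
                      (1, 2, '2011-05-01', 635);
""")

rows = conn.execute("""
SELECT clientid, fundid,
       MAX(CASE WHEN d = '2010-05-01' THEN value ELSE 0 END) AS startvalue,
       MAX(CASE WHEN d = '2011-05-01' THEN value ELSE 0 END) AS endvalue
FROM mv
GROUP BY clientid, fundid
ORDER BY clientid, fundid
""").fetchall()

print(rows)   # [(1, 1, 35, 434), (1, 2, 0, 635)]
```

Fund 2 has no row for the start date, so the `CASE` falls through to `0` — exactly the missing-record behaviour the question asks for.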
`SQLFiddle demo` | Again bear in mind that I have no access to a db right now :(
This query should do something similar to what you want
```
SELECT
X.CLIENTID,
X.FUNDID,
Y.STARTVAL,
Y.ENDVAL
FROM MV X,
(SELECT B.CLIENTID, B.FUNDID,
COALESCE(A.VALUE,0) STARTVAL,
COALESCE(B.VALUE,0) ENDVAL FROM MV A
FULL OUTER JOIN MV B
ON A.CLIENTID=B.CLIENTID AND
A.FUNDID=B.FUNDID AND
A.DATE='1 may 2010' AND
B.DATE='1 may 2011'
) Y
WHERE
X.CLIENTID=Y.CLIENTID AND
X.FUNDID=Y.FUNDID AND
EXISTS (SELECT 1 FROM CLIENT C
WHERE C.CLIENTID=X.CLIENTID)
```
Probably you don't even need to use the client table, but I added it just to be sure | How to join where there is a possibility of no record | [
"sql",
"sql-server",
"t-sql",
"join"
] |
Maybe it's too obvious, but I am sure there has to be a simple way to do this:
```
SELECT 'Not Contained Yet' AS FlagRow
FROM (
SELECT 'Already Contained'
FROM MyTable t
WHERE t.id=?
AND CURDATE()=t.date
LIMIT 1
) aux
HAVING COUNT(*)=0
```
That simply returns no rows if the record is already in the database (by id, date), and returns a row with the message 'Not Contained Yet' if it does not exist.
Is there any way to query that without using subselects? I just need a single row with gibberish, as it acts as a trigger for starting my ETL in case there are no entries for that day and id | You should be able to do this with `having`:
```
SELECT 'Not Contained Yet' AS FlagRow
FROM MyTable t
WHERE t.id = ? and CURDATE()=t.date
HAVING COUNT(*) = 0;
``` | Using `count()`, you get `0` if it does not contain the record:
```
SELECT count(*) as contains_count
FROM MyTable t
WHERE t.id = ?
AND DATE(NOW()) = t.date
``` | Any trick to make a query that ouputs no row when certain conditions are satisfied and a row when they are not (without subselects)? | [
"mysql",
"sql",
"subquery"
] |
I'm having trouble joining two varchar(31) fields on SQL Server 2008. Below is my query, which works fine:
```
select A.CustId,A.Country,B.Country from [ACC].[dbo].[Customer] as A
left join
[Task Centre].[dbo].[CountryCodes] as B on A.Country=B.Country
```
The results are as follows:
```
CustomerA United Kingdom Null
CustomerB Ireland Ireland
CustomerC Spain Spain
CustomerD South Africa Null
```
South Africa and United Kingdom don't match even though they are in both databases.
I have tried replacing the space, but it's very slow and doesn't work. I think it's something to do with the whitespace, but I can't find the right command to achieve what I want.
Bear with me if I have omitted anything, as I'm a novice; I have also searched everywhere for an answer but can't find one that works for me.
Any help is greatly appreciated
Mike | Try to execute the following query on both tables. This will tell you if there's any "hidden" difference between the tables (for example, blank characters, line breaks, etc.):
```
select Country, CAST(Country AS VARBINARY) AS BinaryCountry
from [ACC].[dbo].[Customer]
where Country = 'United Kingdom'
select Country, CAST(Country AS VARBINARY) AS BinaryCountry
from [Task Centre].[dbo].[CountryCodes]
where Country = 'United Kingdom'
```
The column `BinaryCountry` will show a different value if the content of the `Country` columns is not exactly the same. If that is the case, consider correcting the error in either table. Once you've made sure that the value is the same in both tables, your join should work just fine.
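The same "look at the raw bytes" idea can be sketched in Python — the strings here are assumptions for illustration, but the point carries over: a non-breaking space (0xA0) looks identical to a normal space on screen yet breaks equality:

```python
a = "South Africa"        # value as in [ACC] - a regular space (0x20)
b = "South\u00a0Africa"   # value as in [Task Centre] - a non-breaking space (0xA0)

print(a == b)                     # False, despite looking identical
print(a.encode("latin-1").hex())  # 536f75746820416672696361
print(b.encode("latin-1").hex())  # 536f757468a0416672696361

# Normalising the NBSP makes the comparison succeed again:
print(a == b.replace("\u00a0", " "))   # True
```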
Edit: The problem turns out to be a non-breaking space character in the Task Centre table. To work around this, use the following in your join criteria:
```
ON A.Country = Replace(B.Country, CHAR(0xA0), ' ')
``` | Try this:
If any space is in value you need to trim and check
```
SELECT A.CustId, A.Country, B.Country
FROM [ACC].[dbo].[Customer] AS A LEFT JOIN
[Task Centre].[dbo].[CountryCodes] AS B
ON LTRIM(RTRIM(A.Country)) = LTRIM(RTRIM(B.Country))
``` | Left join on 2 varchar fields not working | [
"sql",
"sql-server",
"sql-server-2008"
] |
```
ORA-00904: "COUNT1": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 8 Column: 73
```
```
SELECT a1.branch_name AS myfavourite,
COUNT(a1.branch_name) AS count1,
SUM(count1) AS total_purchase ,
a1.branch_id AS branch_id,
c1.city,
c1.Branch_description AS description,
c1.userid AS shopmailid,
c1.image
FROM tbl_orderdetails a1
INNER JOIN tbl_ordermaster b1
ON a1.order_master_id=b1.ordermasterid
INNER JOIN tbl_user c1
ON c1.id =a1.branch_id
WHERE b1.user_id='12'
GROUP BY a1.branch_name,
a1.branch_id,
c1.city,
c1.Branch_description,
c1.userid,
c1.image
ORDER BY COUNT(a1.branch_name) DESC
```
I want to sum `count1` and get the value in `total_purchase`. I got the error above. | [The documentation](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10002.htm) is fairly clear that you can't use a column alias as you are trying to within the same select list, as you can *only* use it in the `order by`:
> **`c_alias`** Specify an alias for the column expression. Oracle Database will use this alias in the column heading of the result set. The `AS` keyword is optional. The alias effectively renames the select list item for the duration of the query. The alias can be used in the `order_by_clause` but not other clauses in the query.
You can work around that with an in-line view (as Vignesh Kumar showed) or a common table expression (CTE, also called subquery factoring, as vinoth\_S showed). But both those answers only show you the `total_purchase` value, not the individual `count1` values for each row in the result set.
You can't just remove the alias and do `SUM(COUNT(a1.branch_name)) AS ...` since that would give you an ORA-00937 error - the grouping for the sum isn't clear.
You can use the analytic version of `SUM` to calculate both in one go though:
```
SELECT a1.branch_name AS myfavourite,
COUNT(a1.branch_name) AS count1,
SUM(COUNT(a1.branch_name)) OVER (PARTITION BY NULL) AS total_purchase ,
...
```
If the original query got:
```
MYFAVOURITE COUNT1 BRANCH_ID CITY DESCRIPTION SHOPMAILID IMAGE
----------- ---------- ---------- ---------- ----------- ---------- ----------
BR2 12 12 Paris Branch 2 12 image 2
BR1 10 11 London Branch 1 12 image 1
BR3 1 13 New York Branch 4 12 image 3
```
... then adding that analytic sum would give:
```
MYFAVOURITE COUNT1 TOTAL_PURCHASE BRANCH_ID CITY DESCRIPTION SHOPMAILID IMAGE
----------- ---------- -------------- ---------- ---------- ----------- ---------- ----------
BR2 12 23 12 Paris Branch 2 12 image 2
BR1 10 23 11 London Branch 1 12 image 1
BR3 1 23 13 New York Branch 4 12 image 3
```
... with the same overall total count value in all rows in the result set, which you said in comments is what you want.
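Engines differ on whether an aggregate may be nested directly inside a window function; the same grand total can always be obtained by grouping in a derived table first and windowing over it. This SQLite/Python sketch (with made-up branch rows) shows that variant:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orderdetails(branch_name TEXT)")
conn.executemany("INSERT INTO orderdetails VALUES (?)",
                 [("BR1",), ("BR1",), ("BR2",), ("BR2",), ("BR2",), ("BR3",)])

rows = conn.execute("""
SELECT branch_name, cnt, SUM(cnt) OVER () AS total_purchase
FROM (SELECT branch_name, COUNT(*) AS cnt
      FROM orderdetails
      GROUP BY branch_name)
ORDER BY cnt DESC
""").fetchall()

print(rows)   # [('BR2', 3, 6), ('BR1', 2, 6), ('BR3', 1, 6)]
```

Every row carries the same `total_purchase`, mirroring the analytic `SUM` shown above.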
[SQL Fiddle](http://sqlfiddle.com/#!4/08b20/1). | Try this,
```
With Cte(myfavourite,count1,branch_id,city,description,shopmailid,image)
As
(
SELECT a1.branch_name AS myfavourite,
COUNT(a1.branch_name) AS count1,
--SUM(count1) AS total_purchase ,
a1.branch_id AS branch_id,
c1.city,
c1.Branch_description AS description,
c1.userid AS shopmailid,
c1.image
FROM tbl_orderdetails a1
INNER JOIN tbl_ordermaster b1 ON a1.order_master_id=b1.ordermasterid
INNER JOIN tbl_user c1 ON c1.id =a1.branch_id
WHERE b1.user_id='12'
GROUP BY a1.branch_name,a1.branch_id,c1.city,c1.Branch_description,
c1.userid,c1.image
ORDER BY COUNT(a1.branch_name) DESC
)
select sum(Count1) as total_purchase, myfavourite,branch_id,city,description,shopmailid,image from Cte
group by myfavourite,branch_id,city,description,shopmailid,image
``` | Error while counting Column (Count1) in oracle | [
"sql",
"oracle"
] |
I am trying to create a query in MS SQL Server 2012 which gives me a `count`, `average` and some `sum` values of distinct records in a database table. I'll try to explain my situation and my wishes as best as I can. If there remains something unclear or if some extra information is needed, please let me know.
Having the following table `TEMP` with 10 records:
**TABLE**
```
ββββββββββ¦ββββββββββββββ¦βββββββββ¦ββββββββββββ
β Number β DateOfBirth β Gender β Activity β
β βββββββββ¬ββββββββββββββ¬βββββββββ¬ββββββββββββ£
β 191806 β 1940-08-31 β F β AMADMIN β
β 196484 β 1940-09-23 β F β AMHOST β
β 199480 β 1949-10-16 β F β AMTRAINER β
β 201089 β 1947-04-08 β M β AMTRAINER β
β 204528 β 1950-05-02 β F β AMHOST β
β 226356 β 1966-04-12 β M β AMADMIN β
β 226356 β 1966-04-12 β M β AMHOST β
β 377599 β 1985-05-15 β F β AMADMIN β
β 377599 β 1985-05-15 β F β AMHOST β
β 395809 β 1980-03-03 β F β AMADMIN β
ββββββββββ©ββββββββββββββ©βββββββββ©ββββββββββββ
```
Now, consider running the following query:
**SQL**
```
SELECT COUNT([Number]) AS Number, ROUND(AVG(CAST(DATEDIFF(DAY, [DateOfBirth], GETDATE()) / 365.2425 AS FLOAT)), 1) AS AverageAge,
SUM(CASE WHEN [Gender] = 'M' THEN 1 ELSE 0 END) AS Male,
SUM(CASE WHEN [Gender] = 'F' THEN 1 ELSE 0 END) AS Female
FROM [TEMP]
WHERE [Activity] IN ('AMHOST', 'AMADMIN', 'AMTRAINER')
```
This query will give me the following result:
**RESULT**
```
ββββββββββ¦βββββββββββββ¦βββββββ¦βββββββββ
β Number β AverageAge β Male β Female β
β βββββββββ¬βββββββββββββ¬βββββββ¬βββββββββ£
β 10 β 57,3 β 3 β 7 β
ββββββββββ©βββββββββββββ©βββββββ©βββββββββ
```
So far so good! But now for the tricky part. What I really want is this result for all `distinct` records in the table. That means calculating the average age and male/female counts for all persons minus the two "double" persons (having `Number` `226356` and `377599`). So I need a query which produces the following result:
**WANTED RESULT**
```
ββββββββββ¦βββββββββββββ¦βββββββ¦βββββββββ
β Number β AverageAge β Male β Female β
β βββββββββ¬βββββββββββββ¬βββββββ¬βββββββββ£
β 8 β 56,9 β 2 β 6 β
ββββββββββ©βββββββββββββ©βββββββ©βββββββββ
```
I know how to get the `distinct` records for one piece of the query like so:
**SQL**
```
SELECT COUNT(DISTINCT([Number])) AS Number, ROUND(AVG(CAST(DATEDIFF(DAY, [DateOfBirth], GETDATE()) / 365.2425 AS FLOAT)), 1) AS AverageAge,
SUM(CASE WHEN [Gender] = 'M' THEN 1 ELSE 0 END) AS Male,
SUM(CASE WHEN [Gender] = 'F' THEN 1 ELSE 0 END) AS Female
FROM [TEMP]
WHERE [Activity] IN ('AMHOST', 'AMADMIN', 'AMTRAINER')
```
But this produces:
**RESULT**
```
╔════════╦════════════╦══════╦════════╗
║ Number ║ AverageAge ║ Male ║ Female ║
╠════════╬════════════╬══════╬════════╣
║ 8      ║ 57,3       ║ 3    ║ 7      ║
╚════════╩════════════╩══════╩════════╝
```
Now the `Number` count is good, but the `AverageAge`, `Male` and `Female` values are not right.
My question is, how can I adjust my query in such way that I retrieve the values as shown in the **WANTED RESULT** set, if such a query is even possible to begin with? | Since activity does not appear in any of the aggregate functions you can simply discount this from the results, and use a subquery to get distinct records before your aggregation, then also applying `COUNT(DISTINCT CASE..` to your male/female counts:
```
SELECT COUNT(DISTINCT [Number]) AS Number,
ROUND(AVG(CAST(DATEDIFF(DAY, [DateOfBirth], GETDATE()) / 365.2425 AS FLOAT)), 1) AS AverageAge,
COUNT(DISTINCT CASE WHEN [Gender] = 'M' THEN [Number] END) AS Male,
COUNT(DISTINCT CASE WHEN [Gender] = 'F' THEN [Number] END) AS Female
FROM ( SELECT DISTINCT Number, DateOfBirth, Gender
FROM [sw_test].[dbo].[TEMP]
WHERE [Activity] IN ('AMHOST', 'AMADMIN', 'AMTRAINER')
) AS t;
```
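As a quick sanity check of this deduplicate-then-aggregate pattern, the snippet below replays the counting part with Python's bundled `sqlite3` module. The port to SQLite and the in-memory table are my assumptions; the age calculation is left out because `DATEDIFF`/`GETDATE()` are SQL Server functions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TEMP (Number INT, DateOfBirth TEXT, Gender TEXT, Activity TEXT)")
conn.executemany("INSERT INTO TEMP VALUES (?,?,?,?)", [
    (191806, '1940-08-31', 'F', 'AMADMIN'),   (196484, '1940-09-23', 'F', 'AMHOST'),
    (199480, '1949-10-16', 'F', 'AMTRAINER'), (201089, '1947-04-08', 'M', 'AMTRAINER'),
    (204528, '1950-05-02', 'F', 'AMHOST'),    (226356, '1966-04-12', 'M', 'AMADMIN'),
    (226356, '1966-04-12', 'M', 'AMHOST'),    (377599, '1985-05-15', 'F', 'AMADMIN'),
    (377599, '1985-05-15', 'F', 'AMHOST'),    (395809, '1980-03-03', 'F', 'AMADMIN'),
])
# The subquery removes the duplicate persons; COUNT(DISTINCT CASE ...) then
# counts each Number at most once per gender.
number, male, female = conn.execute("""
    SELECT COUNT(DISTINCT Number),
           COUNT(DISTINCT CASE WHEN Gender = 'M' THEN Number END),
           COUNT(DISTINCT CASE WHEN Gender = 'F' THEN Number END)
    FROM (SELECT DISTINCT Number, DateOfBirth, Gender
          FROM TEMP
          WHERE Activity IN ('AMHOST', 'AMADMIN', 'AMTRAINER')) AS t
""").fetchone()
print(number, male, female)  # 8 2 6
```

The counts match the wanted result: 8 distinct persons, 2 male, 6 female.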
**[Example on SQL Fiddle](http://sqlfiddle.com/#!6/6ff35/4)** | Your query did not solve the problem because you only told sql to use the distinct data points for one of the columns, the number. When sql moves out of the parentheses and on to the calculations for the next columns, it is no longer using the distinct command.
In order to solve your problem, I would recommend the use of a subquery. There are other ways to do this, but I believe a subquery is your best bet because you can first filter the data down to distinct rows and then do the mathematical operations on just those unique data points. The rows with the duplicated numbers are not duplicates in every column: they differ only in the activity column, which we can disregard since it is not needed in the calculations. I am going to assume that the gender and the date of birth will always be the same for a given number. Now, your query will look like:
```
SELECT COUNT(DISTINCT(t.Number)) AS Number, ROUND(AVG(CAST(DATEDIFF(DAY, t.DateOfBirth, GETDATE()) / 365.2425 AS FLOAT)), 1) AS AverageAge,
SUM(CASE WHEN t.Gender = 'M' THEN 1 ELSE 0 END) AS Male,
SUM(CASE WHEN t.Gender = 'F' THEN 1 ELSE 0 END) AS Female
From
( Select t.number, t.DateOfBirth, t.Gender
From temp t
Where activity in ('AMHOST', 'AMADMIN', 'AMTRAINER')
Group by t.number, t.DateOfBirth, t.Gender) t
``` | How can I get distinct average and sum values in one MS SQL query? | [
"",
"sql",
"sql-server",
"database",
"sql-server-2012",
"distinct",
""
] |
Below is an image that describes my requirements. I have a column holding different values, and I want to count them by condition: the count where data = 1 as col1, the count where data = 2 as col2, and so on....
 | ```
SELECT
COUNT(CASE data WHEN 1 THEN 1 ELSE NULL END) as Col1,
COUNT(CASE data WHEN 2 THEN 1 ELSE NULL END) as Col2,
COUNT(CASE data WHEN 3 THEN 1 ELSE NULL END) as Col3,
COUNT(CASE data WHEN 4 THEN 1 ELSE NULL END) as Col4
FROM yourTable
``` | ```
SELECT SUM(CASE WHEN data=1 THEN 1 ELSE 0 END) AS col1,
SUM(CASE WHEN data=2 THEN 1 ELSE 0 END) AS col2
FROM Table
``` | I want to display one columns data sum on different columns based on where clause | [
"",
"mysql",
"sql",
""
] |
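Both answers above are conditional aggregation; the only difference is that `COUNT` skips `NULL`s while `SUM` adds explicit zeros, so they produce the same numbers. A small check with Python's bundled `sqlite3` (the SQLite port and the sample data are my assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (data INT)")
conn.executemany("INSERT INTO yourTable VALUES (?)", [(1,), (1,), (2,), (3,), (2,), (1,)])
# CASE with no ELSE yields NULL for non-matching rows, which COUNT ignores;
# SUM(CASE WHEN ... THEN 1 ELSE 0 END) is the equivalent spelling.
col1, col2 = conn.execute("""
    SELECT COUNT(CASE data WHEN 1 THEN 1 END),
           SUM(CASE WHEN data = 2 THEN 1 ELSE 0 END)
    FROM yourTable
""").fetchone()
print(col1, col2)  # 3 2
```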
The query below gives me the address against the order date for three sites.
If more than one order is completed for a site, I want to select only the latest record for that site.
```
SELECT DISTINCT ORDERS.Address, ORDERS.ORDERDATE
FROM ORDERS
Left JOIN PHONEDATA AS P
ON ORDERS.RECID = P.OrderID
where client IN ('site1','site2','site3')
```
result
```
Address orderdate
------- -----------------------
Site1 2014-02-13 14:58:22.427
site1 2014-02-13 14:48:57.413
site1 2014-02-13 15:03:32.403
Site2 2014-02-13 13:48:22.427
site2 2014-02-13 13:30:57.413
site2 2014-02-13 13:03:32.403
Site3 2014-02-13 14:12:22.427
site3 2014-02-13 11:10:57.413
site3 2014-02-13 13:03:32.403
Site1 2014-02-14 14:58:22.427
site1 2014-02-14 14:48:57.413
site1 2014-02-14 15:03:32.403
Site2 2014-02-14 13:48:22.427
site2 2014-02-14 13:30:57.413
site2 2014-02-14 13:03:32.403
Site3 2014-02-14 14:12:22.427
site3 2014-02-14 11:10:57.413
site3 2014-02-14 13:03:32.403
```
Expected result
```
site1 2014-02-13 15:03:32.403
Site2 2014-02-13 13:48:22.427
Site3 2014-02-13 14:12:22.427
site1 2014-02-14 15:03:32.403
Site2 2014-02-14 13:48:22.427
Site3 2014-02-14 14:12:22.427
```
So I am picking the latest record.
UPDATE: sorry guys, I should have mentioned that I want the latest value for that day.
i have updated the expected result, so rather than selecting the overall latest value for site1, i want to display the latest value for site 1 for a given day, repeated each day if there is a value for that site | Assuming that `client` is in the `orders` table:
```
SELECT o.Address, o.ORDERDATE
FROM (select o.*, row_number() over (partition by client order by orderdate desc) as seqnum
from ORDERS o
) o Left JOIN
PHONEDATA P
ON o.RECID = P.OrderID
where o.client IN ('site1', 'site2', 'site3') and
o.seqnum = 1;
```
Note that this will give you the address from the most recent order as well as the date.
EDIT:
Modifying the above to handle most recent per day is easy. The only change is to the definition of `seqnum`:
```
SELECT o.Address, o.ORDERDATE
FROM (select o.*, row_number() over (partition by client, cast(orderdate as date)
order by orderdate desc
) as seqnum
from ORDERS o
) o Left JOIN
PHONEDATA P
ON o.RECID = P.OrderID
where o.client IN ('site1', 'site2', 'site3') and
o.seqnum = 1;
``` | Try this:
```
SELECT ORDERS.Address, MAX(ORDERS.ORDERDATE) AS ORDERDATE
FROM ORDERS O
LEFT JOIN PHONEDATA AS P
ON O.RECID = P.OrderID
WHERE client IN ('site1','site2','site3')
GROUP BY ORDERS.Address
``` | Selecting one record for a day | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
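The per-day `ROW_NUMBER()` idea from the accepted answer can be replayed with Python's bundled `sqlite3` (window functions require SQLite 3.25+; the trimmed-down table and sample rows are my assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("CREATE TABLE orders (recid INT, client TEXT, orderdate TEXT)")
conn.executemany("INSERT INTO orders VALUES (?,?,?)", [
    (1, 'site1', '2014-02-13 14:58:22'), (2, 'site1', '2014-02-13 15:03:32'),
    (3, 'site2', '2014-02-13 13:48:22'), (4, 'site1', '2014-02-14 15:03:32'),
])
# Partitioning by (client, calendar day) keeps one row per site per day.
latest = conn.execute("""
    SELECT client, orderdate
    FROM (SELECT client, orderdate,
                 ROW_NUMBER() OVER (PARTITION BY client, date(orderdate)
                                    ORDER BY orderdate DESC) AS seqnum
          FROM orders)
    WHERE seqnum = 1
    ORDER BY orderdate
""").fetchall()
print(latest)
```

For these rows it keeps site1's 15:03 order (not the 14:58 one) on the 13th, plus one row per remaining site/day.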
I'm getting grey hair by now...
I have a table like this.
```
ID - Place - Person
1 - London - Anna
2 - Stockholm - Johan
3 - Gothenburg - Anna
4 - London - Nils
```
And I want to get the result where all the different persons are included, but I want to choose which Place to order by.
For example. I want to get a list where they are ordered by LONDON and the rest will follow, but distinct on PERSON.
```
Output like this:
ID - Place - Person
1 - London - Anna
4 - London - Nils
2 - Stockholm - Johan
```
Tried this:
```
SELECT ID, Person
FROM users
ORDER BY FIELD(Place,'London'), Person ASC
```
But it gives me:
```
ID - Place - Person
1 - London - Anna
4 - London - Nils
3 - Gothenburg - Anna
2 - Stockholm - Johan
```
And I really dont want Anna, or any person, to be in the result more then once. | This is one way to get the specified output, but this uses MySQL specific behavior which is not guaranteed:
```
SELECT q.ID
, q.Place
, q.Person
FROM ( SELECT IF(p.Person<=>@prev_person,0,1) AS r
, @prev_person := p.Person AS person
, p.Place
, p.ID
FROM users p
CROSS
JOIN (SELECT @prev_person := NULL) i
ORDER BY p.Person, !(p.Place<=>'London'), p.ID
) q
WHERE q.r = 1
ORDER BY !(q.Place<=>'London'), q.Person
```
This query uses an inline view to return all the rows in a particular order, by Person, so that all of the 'Anna' rows are together, followed by all the 'Johan' rows, etc. The set of rows for each person is ordered by, `Place='London'` first, then by ID.
The "trick" is to use a MySQL user variable to compare the values from the current row with values from the previous row. In this example, we're checking if the 'Person' on the current row is the same as the 'Person' on the previous row. Based on that check, we return a 1 if this is the "first" row we're processing for a person, otherwise we return a 0.
The outermost query processes the rows from the inline view, and excludes all but the "first" row for each Person (the 0 or 1 we returned from the inline view.)
(This isn't the only way to get the resultset. But this is one way of emulating analytic functions which are available in other RDBMS.)
---
For comparison, in databases *other* than MySQL, we could use SQL something like this:
```
SELECT v.ID
     , v.Place
     , v.Person
  FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY t.Person ORDER BY
           CASE WHEN t.Place='London' THEN 0 ELSE 1 END, t.ID) AS rn
              , t.ID
              , t.Place
              , t.Person
           FROM users t
       ) v
 WHERE v.rn = 1
 ORDER BY CASE WHEN v.Place='London' THEN 0 ELSE 1 END, v.Person
```
---
**Followup**
At the beginning of the answer, I referred to MySQL behavior that was not guaranteed. I was referring to the usage of MySQL User-Defined variables within a SQL statement.
Excerpts from MySQL 5.5 Reference Manual <http://dev.mysql.com/doc/refman/5.5/en/user-variables.html>
"As a general rule, other than in SET statements, you should never assign a value to a user variable and read the value within the same statement."
"For other statements, such as SELECT, you might get the results you expect, but this is not guaranteed."
"the order of evaluation for expressions involving user variables is undefined." | You want to use `group by` instead of `distinct`:
```
SELECT ID, Person
FROM users
GROUP BY ID, Person
ORDER BY MAX(FIELD(Place, 'London')), Person ASC;
```
The `GROUP BY` does the same thing as `SELECT DISTINCT`. But, you are allowed to mention other fields in clauses such as `HAVING` and `ORDER BY`. | MySQL ORDER BY Column = value AND distinct? | [
"",
"mysql",
"sql",
"asp-classic",
"field",
"sql-order-by",
""
] |
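The portable `ROW_NUMBER()` variant can be exercised with Python's bundled `sqlite3` (window functions require SQLite 3.25+; running it in SQLite rather than MySQL is my assumption, the data is from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("CREATE TABLE users (ID INT, Place TEXT, Person TEXT)")
conn.executemany("INSERT INTO users VALUES (?,?,?)", [
    (1, 'London', 'Anna'), (2, 'Stockholm', 'Johan'),
    (3, 'Gothenburg', 'Anna'), (4, 'London', 'Nils'),
])
# One row per Person, preferring the London row, then London rows first.
rows = conn.execute("""
    SELECT ID, Place, Person
    FROM (SELECT ID, Place, Person,
                 ROW_NUMBER() OVER (PARTITION BY Person
                     ORDER BY CASE WHEN Place = 'London' THEN 0 ELSE 1 END, ID) AS rn
          FROM users)
    WHERE rn = 1
    ORDER BY CASE WHEN Place = 'London' THEN 0 ELSE 1 END, Person
""").fetchall()
print(rows)  # [(1, 'London', 'Anna'), (4, 'London', 'Nils'), (2, 'Stockholm', 'Johan')]
```

This reproduces the output the question asked for, with each person appearing only once.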
I really don't know what this is called, so I couldn't find an answer.
For example I have the following tables:
```
Table products Table users
+----------+----------+----------+ +----------+----------+
| id | name | price | | username | products |
+----------+----------+----------+ +----------+----------+
| 1 | Bread | 1.5 | | James | 1, 2 |
+----------+----------+----------+ +----------+----------+
| 2 | Cookies | 2.0 | | David | 2, 3 |
+----------+----------+----------+ +----------+----------+
| 3 | Milk | 1.2 | | Paul | 3, 1 |
+----------+----------+----------+ +----------+----------+
```
`products.id` is the foreign key for `users.products`.
I would like to get the name and price of all David's products using a MySQL query. This should return the following:
```
+----------+----------+
| name | price |
+----------+----------+
| Cookies | 2.0 |
+----------+----------+
| Milk | 1.2 |
+----------+----------+
```
The `JOIN` function would be the best to use I guess, but David can have just 1 or 100 products. Creating 100 columns to fit all products doesn't sound very efficient.
How can I realize this?
---
Currently I'm just fetching everything and filtering it using PHP; this is not a problem at the moment, but as the tables grow I guess this will become very inefficient. | This could be solved by changing your data model.
```
users
+----------+----------+
| id | username |
+----------+----------+
| 1 | Fred |
+----------+----------+
| 2 | John |
+----------+----------+
products
+----------+----------+----------+
| id | name | price |
+----------+----------+----------+
| 1 | Bread | 1.5 |
+----------+----------+----------+
| 2 | Cookies | 2.0 |
+----------+----------+----------+
| 3 | Milk | 1.2 |
+----------+----------+----------+
```
And here comes the magic: You could connect the two tables using a third table:
```
user_procuct_connections
+----------+----------+------------+
| id | user_id | product_id |
+----------+----------+------------+
| 1 | 1 | 2 | -> Fred has Cookies
+----------+----------+------------+
| 2 | 1 | 3 | -> Fred also has Milk
+----------+----------+------------+
| 3 | 2 | 1 | -> John has Bread
+----------+----------+------------+
```
If you want a user to be able to own a single product only, then you can remove the id column, and make the `user_id` and `product_id` the primary key together.
Then when you want to get for example all of Freds products then just
```
SELECT
*
FROM
products
WHERE
id IN (
SELECT
product_id
FROM
user_procuct_connections
WHERE
user_id = 1
)
``` | You could try this:
```
SELECT * FROM products pt
where FIND_IN_SET(pt.id,(select us.products from users us
WHERE us.username = "David"));
```
### Working fiddle: <http://sqlfiddle.com/#!2/4f78d/2> | Link to multiple foreign rows from one row | [
"",
"mysql",
"sql",
""
] |
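The junction-table design from the accepted answer can be sketched with Python's bundled `sqlite3`. The table and column names, including the corrected spelling `user_product_connections`, are illustrative assumptions; a join is used instead of the answer's `IN (...)` subquery, which is an equivalent formulation here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    CREATE TABLE user_product_connections (user_id INT, product_id INT,
                                           PRIMARY KEY (user_id, product_id));
    INSERT INTO users VALUES (1, 'David');
    INSERT INTO products VALUES (1, 'Bread', 1.5), (2, 'Cookies', 2.0), (3, 'Milk', 1.2);
    INSERT INTO user_product_connections VALUES (1, 2), (1, 3);
""")
# The junction table turns the comma-separated list into ordinary rows,
# so a plain two-step join retrieves David's products.
rows = conn.execute("""
    SELECT p.name, p.price
    FROM users u
    JOIN user_product_connections c ON c.user_id = u.id
    JOIN products p ON p.id = c.product_id
    WHERE u.username = 'David'
    ORDER BY p.id
""").fetchall()
print(rows)  # [('Cookies', 2.0), ('Milk', 1.2)]
```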
Thanks in advance for your help.
I'm working with an application that a user developed. It prompts you for something to search for and then performs a basic query:
```
SELECT * FROM Table
WHERE Entry=[ENTRY];
```
I cannot change that format. All I can do is modify the text of [ENTRY]. Is there a way I can pull multiple records without modifying the structure of the statement itself? For Example:
```
SELECT * FROM Table
WHERE Entry='COW | APPL* | ROO*';
```
to achieve the results:
```
COW, APPLE, APPLES, ROOF, ROOM, ROOSTER;
```
Please excuse the rudimentary example - Thanks,
Blake | This totally depends on the code. If there is a possibility, then you can use an SQL injection method to request multiple records.
```
SELECT * FROM Table
WHERE Entry='COW' OR Entry ='APPL' OR Entry = 'ROO';
```
Following this example your variable [ENTRY] should be this:
```
[ENTRY] = "'COW' OR Entry ='APPL' OR Entry = 'ROO'";
```
Note that this will not work if your [ENTRY] variable is protected against SQL injection.
EDIT:
So here is an SQL injection method that works without knowing the table name.
This should be your string to copy in:
```
COW' OR 1 = '1
``` | If the developer didn't prevent SQL injection, you can try adding `;` and creating a new query.
If you can, change `=` to `IN`. | How to pull multiple records using SQL statement? | [
"",
"sql",
"ms-access",
""
] |
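The "protected against SQL injection" caveat can be illustrated with any parameterized API. Below is a sketch using Python's bundled `sqlite3`; the table, data, and payload are my assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Entry TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [('COW',), ('APPLE',), ('ROOF',)])

payload = "COW' OR 'a'='a"
# Naive string concatenation: the payload escapes the quotes and matches every row.
unsafe = conn.execute("SELECT * FROM t WHERE Entry = '%s'" % payload).fetchall()
# Parameterized query: the payload is treated as one literal value, so nothing matches.
safe = conn.execute("SELECT * FROM t WHERE Entry = ?", (payload,)).fetchall()
print(len(unsafe), len(safe))  # 3 0
```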
Using a few SQL queries on our company Netezza box, I'm trying to concatenate a number of values into a single string. The hitch is that I need these values to be ordered, but Netezza won't let me order by terms that aren't grouped because it applies the ordering after doing the grouping.
I'm using a UDA called [group\_concat](https://www-304.ibm.com/connections/wikis/home?lang=en-us#!/wiki/W361a40e2ccf0_43c7_b802_e51fcadaec6b/page/C__%20UDX%20Examples) that concatenates strings and adds a separator between them. I'm pretty sure the UDA is functioning correctly (after tweaking it so it doesn't do any sorting internally).
Here's my test data:
```
CREATE TABLE TEST (GRP INTEGER, ID INTEGER, DATA VARCHAR(10));
INSERT INTO TEST VALUES (1,3,"Three");
INSERT INTO TEST VALUES (1,1,"One");
INSERT INTO TEST VALUES (1,2,"Two");
INSERT INTO TEST VALUES (2,3,"Three");
INSERT INTO TEST VALUES (2,2,"Two");
INSERT INTO TEST VALUES (2,1,"One");
```
I want the following output:
* GRP: 1, ConcatData: "One,Two,Three"
* GRP: 2, ConcatData: "One,Two,Three"
Here's what I would have liked to do:
```
SELECT GRP, GROUP_CONCAT(DATA)
FROM TEST
ORDER BY ID
GROUP BY GRP;
```
but this is not possible: syntax error because group by must come before order by, and after doing that order by can only apply to terms that appear in the result set.
Others have suggested using a subselect to get around this: order in the subquery and group in the outer query like this:
```
SELECT GRP, GROUP_CONCAT(DATA,',') AS CONCATDATA
FROM
(
SELECT *
FROM TEST
ORDER BY GRP, ID
) AS X
GROUP BY GRP;
```
This appears to work in PostgreSQL 9.3 but not in Netezza. The order of the result changes each time I run the query.
The issue with this last query has nothing to do with the group by. It is the outer select that is ignoring the ordering of the inner select as illustrated by the following snippet:
```
SELECT *
FROM
(
SELECT *
FROM TEST
ORDER BY GRP, ID
) AS X;
```
The inner select orders the results as expected, but the outer select reorders them arbitrarily (as far as I can tell).
So my questions are:
* Why is Netezza ignoring the ordering of my results?
* How can I build a string of grouped but ordered data?
PS: how should I include and format a result set in my question? I can't see how to make a table.
EDIT: Following @Alex 's comment, I've made it clear that I want to aggregate the values in one column (data) but order by another (id).
EDIT: I've realised that Netezza may not be able to order things in the same way as some other database engines because the data is distributed and worked on in parallel. The *Netezza UDF Developer's Guide* explains that in a UDA each SPU first aggregates the data in its possession, and then data from each SPU is merged centrally. In a simple UDA such as the ones I've looked at, the merge function knows nothing about what order the data should be in, and even if the data was ordered on each SPU, the final aggregated data cannot be guaranteed to be ordered. Maybe there's a way to write a UDA that accepts a ORDER BY clause... Alternatively, I could write a UDA that accepts two arguments, the first is the string to aggregate, the second is the order, however, I don't know that it's possible to easily work with associative arrays inside the UDA.
EDIT: [Niederee's solution](https://stackoverflow.com/a/24435895/3776418) works so I've accepted it but I ended up creating the strings in PostgreSQL because we already had a PostgreSQL preprocessing phase before loading to Netezza. FYI this was to transform a list of vertex coordinates into a [WKT string](http://en.wikipedia.org/wiki/Well-known_text) that can be used in the Netezza Spatial Toolkit (similar to PostGIS). | EDIT: Even better solution.
```
SELECT
GRP,
CONCAT_DATA
FROM (
SELECT
GRP,
GROUP_CONCAT(data) OVER (PARTITION BY grp ORDER BY id ASC) concat_data,
row_number() OVER (PARTITION BY grp ORDER BY id DESC) rn
FROM
test
) x
WHERE rn = 1;
```
**Note that this solution relies on using a very slightly modified group\_concat UDX where the `sort` line has been removed.**
Earlier solution left for posterity:
Just found a reasonably compact solution, but I'm not sure how robust it is to future changes in Netezza. By using an ordered window function to force ordering of the subquery, I seem to get results that are consistently in the correct order. Note that there is no explicit ordering of the results and the row number isn't used for anything, but if you comment out `MAX(rn)` then the results are no longer ordered, presumably because the call to `row_number()` gets optimised away.
```
SELECT
MAX(rn) as dummy, -- this prevents the row_number() from being optimised away and forces the output to be ordered
GRP,
GROUP_CONCAT(DATA,',') AS CONCATDATA
FROM
(
SELECT GRP, ID, DATA, ROW_NUMBER() OVER (PARTITION BY GRP ORDER BY ID) rn
FROM TEST
) AS X
GROUP BY GRP;
``` | The not so simple way to do this, if you have the SQL Functions Toolkit installed would be to use `Arrays`. I think a better way would be to add the group\_concat UDF from IBM. Array example below:
```
CREATE temp TABLE TEST (GRP INTEGER, ID INTEGER, DATA VARCHAR(10));
INSERT INTO TEST VALUES (1,3,'Three');
INSERT INTO TEST VALUES (1,1,'One');
INSERT INTO TEST VALUES (1,2,'Two');
INSERT INTO TEST VALUES (2,3,'Three');
INSERT INTO TEST VALUES (2,2,'Two');
INSERT INTO TEST VALUES (2,1,'One');
create temp table array_t(grp int,arr varchar(100));
-- create array placeholder
insert into array_t
select distinct grp, sql_functions.admin.array(8) from test;
-- populate the array
update array_t a set arr = sql_functions.admin.add_element(a.arr, b.data)
from (select grp, row_number() over(partition by grp order by id) as rown, data
from test) b
where a.grp=b.grp
and b.rown=1;
update array_t a set arr = sql_functions.admin.add_element(a.arr, b.data)
from (select grp, row_number() over(partition by grp order by id) as rown, data
from test) b
where a.grp=b.grp
and b.rown=2;
update array_t a set arr = sql_functions.admin.add_element(a.arr,b.data)
from (select grp, row_number() over(partition by grp order by id) as rown, data
from test) b
where a.grp=b.grp
and b.rown=3;
-- Return Result
select grp, sql_functions.admin.array_combine(arr,',')
from array_t;
``` | Ordering the input to an aggregate function | [
"",
"sql",
"netezza",
""
] |
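Netezza and its UDX cannot be reproduced here, but the accepted answer's run-the-window-then-keep-the-last-row trick also works with SQLite, whose built-in `group_concat` can be used as a window function (SQLite 3.25+). This SQLite substitution is my assumption, run via Python's bundled `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("CREATE TABLE test (GRP INT, ID INT, DATA TEXT)")
conn.executemany("INSERT INTO test VALUES (?,?,?)", [
    (1, 3, 'Three'), (1, 1, 'One'), (1, 2, 'Two'),
    (2, 3, 'Three'), (2, 2, 'Two'), (2, 1, 'One'),
])
# The default window frame is "unbounded preceding to current row", so the
# row with the largest ID (rn = 1 descending) carries the full ordered string.
rows = conn.execute("""
    SELECT GRP, concat_data
    FROM (SELECT GRP,
                 GROUP_CONCAT(DATA) OVER (PARTITION BY GRP ORDER BY ID) AS concat_data,
                 ROW_NUMBER() OVER (PARTITION BY GRP ORDER BY ID DESC) AS rn
          FROM test)
    WHERE rn = 1
    ORDER BY GRP
""").fetchall()
print(rows)  # [(1, 'One,Two,Three'), (2, 'One,Two,Three')]
```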
I have the following table
```
ID location date
-- -------- -----
01 loc#1 10-06-2014
05 loc#1 11-06-2014
06 loc#2 13-06-2014
08 loc#2 14-06-2014
10 loc#2 15-06-2014
14 loc#1 16-06-2014
17 loc#1 17-06-2014
20 loc#1 18-06-2014
```
What I need is to get each location and the number of adjacent records in which that location is mentioned, ordered by ID.
i.e.
```
location count(location)
-------- ---------------
loc#1 2
loc#2 3
loc#1 3
```
The problem with using COUNT + GROUP BY is that it counts all records for the same location, not only the adjacent ones, and outputs a unique set of locations.
i.e.
```
select location, count(location) from table group by location order by ID
location count(location)
-------- ---------------
loc#1 5
loc#2 3
```
Any idea how to get the correct query? I don't want to do it programmatically, because I'm working with many thousands of records on iPhone and that would cause a performance problem. | ```
select minAdjLoc.ID, max(minAdjLoc.location) 'Loc', count(distinct adjLocs.ID) 'Count'
from test minAdjLoc -- Minimum record in each adjacent group
inner join test adjLocs -- All adjacent record, including self
on adjLocs.location = minAdjLoc.location
and adjLocs.ID >= minAdjLoc.ID
left join test intruder -- Possible intruder with different location
on intruder.location <> minAdjLoc.location
and intruder.ID > minAdjLoc.ID
and intruder.ID < adjLocs.ID
left join test lowerThanMin -- Possible record lower than minAdjLoc
on lowerThanMin.ID < minAdjLoc.ID
and lowerThanMin.location <> minAdjLoc.location
left join test lowerIntruder
on (lowerThanMin.ID is null or lowerThanMin.ID < lowerIntruder.ID)
and lowerIntruder.ID < minAdjLoc.ID
and lowerIntruder.location = minAdjLoc.location
where intruder.ID is null -- There can't be any record with a different location inside the group
and lowerIntruder.ID is null -- Ensure minAdjLoc is in fact the record with minimum ID
group by minAdjLoc.ID --The minimum ID of the adjacent group is unique
order by minAdjLoc.ID
``` | I think this can work
```
SELECT location, COUNT(*)
FROM (SELECT CASE WHEN t1.location <> (SELECT location FROM t WHERE id = (SELECT MAX(id) FROM t WHERE id < t1.id))
THEN t1.id
WHEN (SELECT MIN(id) FROM t WHERE id > (SELECT MAX(id) FROM t WHERE location <> t1.location AND id < t1.id)) IS NULL
THEN (SELECT MIN(id) FROM t)
ELSE (SELECT MIN(id) FROM t WHERE id > (SELECT MAX(id) FROM t WHERE location <> t1.location AND id < t1.id))
END AS mark,
t1.id AS id,
t1.location AS location
FROM t AS t1)
GROUP BY mark, location
;
``` | Count adjacent records in sqlite | [
"",
"sql",
"sqlite",
""
] |
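For comparison, a different technique for the same problem, the classic gaps-and-islands difference of two `ROW_NUMBER()`s, can be checked with Python's bundled `sqlite3`. This is an alternative of mine, not either answer above, and it needs SQLite 3.25+ for window functions (which rules out older iOS SQLite builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.execute("CREATE TABLE t (id INT, location TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?)", [
    (1, 'loc#1'), (5, 'loc#1'), (6, 'loc#2'), (8, 'loc#2'),
    (10, 'loc#2'), (14, 'loc#1'), (17, 'loc#1'), (20, 'loc#1'),
])
# Within one run of adjacent rows, the overall row number and the per-location
# row number grow in step, so their difference is constant per "island".
rows = conn.execute("""
    SELECT location, COUNT(*)
    FROM (SELECT id, location,
                 ROW_NUMBER() OVER (ORDER BY id)
               - ROW_NUMBER() OVER (PARTITION BY location ORDER BY id) AS island
          FROM t)
    GROUP BY location, island
    ORDER BY MIN(id)
""").fetchall()
print(rows)  # [('loc#1', 2), ('loc#2', 3), ('loc#1', 3)]
```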
```
SELECT SUM( IF( userId = '123456', amount, 0 ) ) AS 'amount'
FROM `amountInfo`
```
When userId `123456` is not present in table `amountInfo`, the query returns NULL; I want 0 (numeric) instead. | You can use coalesce for this
**UPDATE** : I was missing the argument to set 0 instead of NULL, pointed by @Barmar
```
coalesce( SUM( IF( userId = '123456', amount, 0 ) ),0 )
``` | Use IFNULL:
```
SELECT IFNULL(SUM(IF(userId = '123456', amount, 0)), 0) AS amount
``` | return a SUM zero(0) instead of null | [
"",
"mysql",
"sql",
"sum",
""
] |
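Both answers wrap the aggregate so that the `NULL` a `SUM` produces over an empty group becomes 0. A check with Python's bundled `sqlite3` (SQLite has no `IF()`, so a `CASE` expression stands in for it; that substitution is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE amountInfo (userId TEXT, amount REAL)")  # left empty on purpose

# SUM over zero matching rows yields NULL, not 0...
bare = conn.execute(
    "SELECT SUM(CASE WHEN userId = '123456' THEN amount ELSE 0 END) FROM amountInfo"
).fetchone()[0]
# ...so COALESCE (or MySQL's IFNULL) supplies the numeric fallback.
wrapped = conn.execute(
    "SELECT COALESCE(SUM(CASE WHEN userId = '123456' THEN amount ELSE 0 END), 0)"
    " FROM amountInfo"
).fetchone()[0]
print(bare, wrapped)  # None 0
```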
I want to calculate the time difference between two dates in Oracle sql query. I have written the following query:
```
DECLARE
v_time varchar2(40);
diff_hours varchar2(40);
BEGIN
select substr(((select date_time from observation_measurement where observation_measurement_id=2861971)), 1,17)
into v_time
from dual;
dbms_output.put_line(v_time);
select 24 * (to_date('06-25-2014 09:46:36', 'MM-DD-YYYY hh24:mi:ss')
- to_date(v_time, 'YY-MM-DD hh24:mi:ss')) into diff_hours
from dual;
END;
```
The first select statement returns the correct result. But when I try to calculate the time difference from the current date to the previously retrieved date, it shows an error. How can I get the correct result?
Thanks!!!! | Increasing the size of the diff_hours variable worked. Thanks Krishna. :-) | Make `diff_hours` a `NUMBER` variable instead of a `VARCHAR2` variable!
Of course you can make `diff_hours` long enough to hold all the insignificant decimals your query produces, but declaring it `VARCHAR2` is pointless anyway! | ORA-06502: PL/SQL: numeric or value error: character string buffer too small in Oracle sql query | [
"",
"sql",
"oracle",
"date",
"difference",
""
] |
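The arithmetic itself, date difference in days multiplied by 24 to get hours, can be sketched in plain Python (the sample timestamps are assumptions):

```python
from datetime import datetime

# Oracle's date subtraction yields a difference in days; multiplying by 24
# converts it to hours, which is what the query above computes.
t1 = datetime.strptime('06-25-2014 09:46:36', '%m-%d-%Y %H:%M:%S')
t2 = datetime.strptime('06-24-2014 21:46:36', '%m-%d-%Y %H:%M:%S')
diff_days = (t1 - t2).total_seconds() / 86400.0
diff_hours = 24 * diff_days
print(diff_hours)  # 12.0
```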
Is there any possibility to write delete query inside **Where clause**.
Example:
```
Select ID,Name From MyTable Where ID IN(Delete From MyTable)
```
It may be crazy, but let me explain my situation. In our reporting tool, we let users enter the SQL WHERE part of a query.
We will use our own **Select** and **From** **Clause** query and combine the user's where query input.
Example:
```
Select ID,Name From MyTable Where ("Query typed by user")
```
Here, user can type any kind of where query filter..
If he types like **ID=100** our final query becomes like this
```
Select ID,Name From MyTable Where (ID=100)
```
One of our customers asked us what will happen if anyone types a delete query as the WHERE filter. He feels this may be a security hole, so we have tried that kind of input in our dev environment. But SQL returns an error for the following query.
```
Select ID,Name From MyTable Where ID IN(Delete From MyTable)
```
So finally, my question is, is there any other possibility to write a **Delete Query** inside the **Where clause** or **Select clause**? If it is possible, how can I restrict it? | Yes. They can run a delete. They can type:
```
1 = 1; DELETE FROM MY_TABLE;
```
Or even worse in some ways, (since you should have backups):
```
1 = 0 UNION SELECT SOCIAL_SECURITY_NUMBER, CREDIT_CARD_NUMBER, OTHER_SENSITIVE_DATA FROM MY_SENSITIVE_TABLE;
```
Now, in your case its hard to validate. Normally if you are just passing a value to filter on you can use parameterised sql to save yourself. You however also need to let the user select a column. In cases like these, usually we use a drop down to allow the user to select a predefined list of columns and then validate the column name server side. We give the user a text box to enter the value to match and then parameterise that. | It's not quite possible. But he can do something like this :
```
Select ID,Name From MyTable Where (ID=100); (DELETE FROM MyTable Where 1 = 1)
```
by using `ID=100); (DELETE FROM MyTable Where 1 = 1` instead of `ID=100` | Delete Query inside Where clause | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
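One practical mitigation the accepted answer hints at is that single-statement APIs refuse stacked queries outright. A sketch with Python's bundled `sqlite3`, whose `execute()` accepts exactly one statement (the table is an assumption; the exception type changed between Python versions, so both are caught):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (ID INT, Name TEXT)")
conn.execute("INSERT INTO MyTable VALUES (100, 'x')")
try:
    # A second, stacked statement smuggled in through the WHERE text.
    conn.execute("SELECT ID, Name FROM MyTable WHERE (1 = 1); DELETE FROM MyTable;")
    smuggled = True
except (sqlite3.Warning, sqlite3.ProgrammingError):
    smuggled = False  # execute() rejects more than one statement
remaining = conn.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(smuggled, remaining)  # False 1
```

The row survives because the stacked `DELETE` never runs; this only blocks stacked statements, not `UNION`-style leaks within one statement, which still need the column validation the answer describes.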
I have two tables.
```
|Table One: Adversitements-----------------------|
| ID | ADVTITLE |
|----|-------------------------------------------|
| 1 | IT Staff will be taken. |
| 2 | Human resources personnel will be taken. |
| 3 | CNC Operator will be taken. |
|Table Two: Applications-----|
| ID | ADVID | APPLICANTNAME |
|----|-------|---------------|
| 1 | 1 | John Doe |
| 2 | 1 | John Doe 2 |
| 3 | 1 | Jane Doe |
| 4 | 2 | John Doe |
| 5 | 2 | Jane Doe |
| 6 | 3 | John Doe |
```
**I Want result:**
```
| ADVTITLE | APPLICANTCOUNT |
|-------------------------------------------|----------------|
| IT Staff will be taken. | 3 |
| Human resources personnel will be taken. | 2 |
| CNC Operator will be taken. | 1 |
```
But returning a single result;
**OUTPUT:**
```
| ADVTITLE | APPLICANTCOUNT |
|-------------------------|----------------|
| IT Staff will be taken. | 6 |
```
**MySQL Query**;
```
SELECT adv.advtitle, COUNT(applications.id) as applicantCount
FROM advertisements as adv
LEFT JOIN applications
ON adv.id = applications.advid
```
How can I list all advertisements, each with its own number of applicants?
SQL Fiddle Link: <http://sqlfiddle.com/#!2/8644c/1/0> | You missed the `GROUP BY` clause:
```
SELECT adv.advtitle, COUNT(applications.id) as applicantCount
FROM advertisements as adv
LEFT JOIN applications ON adv.id = applications.advid
GROUP BY adv.advtitle
ORDER BY applicantCount desc
```
Result:
```
ADVTITLE APPLICANTCOUNT
---------------------------------------------------------
IT Staff will be taken. 3
Human resources personnel will be taken. 2
CNC Operator will be taken. 1
```
[**Fiddle Example**](http://sqlfiddle.com/#!2/8644c/20) | You need `GROUP BY` clause
```
SELECT adv.advtitle, COUNT(applications.id) as applicantCount
FROM advertisements as adv
LEFT JOIN applications
ON adv.id = applications.advid
GROUP BY applications.advid
```
`SQL Fiddle` | Left join with count return one result | [
"",
"mysql",
"sql",
"left-join",
""
] |
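The accepted answer's fix can be replayed with Python's bundled `sqlite3`. An extra advertisement with no applications is added (my assumption) to show that the `LEFT JOIN` still yields a 0 count for it; the rest of the data follows the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE advertisements (id INTEGER PRIMARY KEY, advtitle TEXT);
    CREATE TABLE applications (id INTEGER PRIMARY KEY, advid INT, applicantname TEXT);
    INSERT INTO advertisements VALUES
        (1, 'IT Staff'), (2, 'Human resources'), (3, 'CNC Operator'), (4, 'No applicants yet');
    INSERT INTO applications VALUES
        (1, 1, 'John Doe'), (2, 1, 'John Doe 2'), (3, 1, 'Jane Doe'),
        (4, 2, 'John Doe'), (5, 2, 'Jane Doe'), (6, 3, 'John Doe');
""")
# Without GROUP BY the whole join collapses into one aggregate row;
# grouping by title yields one count per advertisement.
rows = conn.execute("""
    SELECT adv.advtitle, COUNT(applications.id) AS applicantCount
    FROM advertisements AS adv
    LEFT JOIN applications ON adv.id = applications.advid
    GROUP BY adv.advtitle
    ORDER BY applicantCount DESC
""").fetchall()
print(rows)
```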
This is my scenario... I would like to check if the member has a mobile number in one table; if yes, then display his mobile. Otherwise, check if he has a landline phone in another table; if yes, then display the landline; else leave it blank.
not sure if we can use case statement on comparing two different fields
```
SELECT a.name as name, b.name as parent_name, a.mobile, b.phone,
case
when a.mobile is not null or a.mobile<>'' then a.mobile
else b.phone
end
as phone
FROM family_member a join family_header b where a.name='sam' and a.id=b.match_key;
```
please guide me.. only the first case statement is executed and a.mobile is displayed when available.. but if mobile is not available, the landline is NOT displayed.. | If `mobile` is an empty string, it is `not null`, so this condition matches all those records where `a.mobile` is an empty string. So my guess is, none of your empty fields are `null`.
Change the condition to:
```
when a.mobile is not null AND a.mobile<>'' then
```
Maybe even better: make the field not nullable, so it always contains a phone number or an empty string. Then you can simplify the condition to:
```
when a.mobile<>'' then
``` | ```
SELECT a.name as name, b.name as parent_name, a.mobile, b.phone,
case
when a.mobile is not null or a.mobile<>'' then a.mobile
else (CASE
when b.phone is not NULL or b.phone<>'' then b.phone
else '' END)
end
as phone
FROM family_member a join family_header b where a.name='sam' and a.id=b.match_key;
``` | case statements with two conditions of two different fields | [
"",
"mysql",
"sql",
""
] |
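An alternative spelling of the same NULL-or-empty fallback is `COALESCE(NULLIF(mobile, ''), phone)`; this is a suggestion of mine, not taken from the answers above. A check with Python's bundled `sqlite3` and made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (name TEXT, mobile TEXT, phone TEXT)")
conn.executemany("INSERT INTO members VALUES (?,?,?)", [
    ('sam', '', '555-1000'),         # empty-string mobile: fall back to landline
    ('ann', '777-2000', '555-2000'), # mobile present: keep it
    ('bob', None, '555-3000'),       # NULL mobile: fall back to landline
])
# NULLIF turns '' into NULL, so COALESCE skips both NULL and empty mobiles.
rows = conn.execute(
    "SELECT name, COALESCE(NULLIF(mobile, ''), phone, '') FROM members ORDER BY name"
).fetchall()
print(rows)  # [('ann', '777-2000'), ('bob', '555-3000'), ('sam', '555-1000')]
```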
**System background:** Coding in VBA using MS-Access 2010. Currently working on code behind module and calling stored procedure. The stored procedure is written in SQL and run on the Ms-SQL server 2008 application where the database is stored.
**Stored Procedure:** The stored procedure's purpose is to:
* Retrieve three input parameters: WOID, SampleID and Analyte
* Join two tables: tblWoSampleTest , tblTest
* Select testID WHERE the three values match
*note:* WOID and SampleID column are in tblWoSampleTest and Analyte is in tbltest
Once the stored procedure is called, the testId is saved to a local variable ThisTestID
```
CREATE PROCEDURE upGetTestIDForAnalyte @WOID nvarchar(60), @SampleID nvarchar(60),@Analyte nvarchar(60), @TestId int OUT
AS
SELECT @TestID = (Select TestID = t1.TestID
FROM tblWOSampleTest t1
JOIN tblTest t2
ON t1.TestID=t2.TestID
WHERE @WOID = t1.WOID AND @SampleID = t1.SampleID AND @Analyte = t2.Analyte)
GO
```
My issue is that every time I call the stored procedure, the value that ThisTestID was previously initialized to is returned, even though I know the test ID exists and the stored procedure seemed to run correctly. To verify it exists, I took my stored procedure and simply ran:
```
Select TestID = t1.TestID
FROM tblWOSampleTest t1
JOIN tblTest t2
ON t1.TestID=t2.TestID
WHERE @WOID = t1.WOID AND @SampleID = t1.SampleID AND @Analyte = t2.Analyte
```
and had the correct testId returned (there will only ever be one value). I don't think there is an issue with the data type because the testid is a number not a string. Also here is the way I call it, although I am pretty sure this method is correct.
```
ThisTestId = 5
Set Conn = New ADODB.connection
Conn.ConnectionString = "connection string"
Conn.Open
Set cmd = New ADODB.Command
cmd.ActiveConnection = Conn
cmd.CommandType = adCmdStoredProc
cmd.CommandText = "upGetTestIDForAnalyte"
cmd.Parameters.Append cmd.CreateParameter("@Analyte", adVarChar, adParamInput, 60, Analyte)
cmd.Parameters.Append cmd.CreateParameter("@WOID", adVarChar, adParamInput, 60, ThisWOID)
cmd.Parameters.Append cmd.CreateParameter("@SampleID", adVarChar, adParamInput, 60, 1)
cmd.Parameters.Append cmd.CreateParameter("@testid", adDouble, adParamOutput, , ThisTestID)
cmd.Execute
Conn.Close
msgbox ThisTestId
```
In this case a 5 will be printed. | Check that your parameter is marked with the OUTPUT keyword in your stored procedure.
Create the output parameter without passing an initial value; ADO does not write the result back into the VBA variable you pass here, and adInteger matches the procedure's `int` output parameter better than adDouble:
```
cmd.Parameters.Append cmd.CreateParameter("@testid", adInteger, adParamOutput)
```
Then, once you have called the stored procedure with cmd.Execute, you have to read the value back:
```
ThisTestId = cmd.Parameters("@testid").Value
``` | I'm pretty new to SQL, but where's your return?
<http://msdn.microsoft.com/en-us/library/ms188655.aspx> | Output parameter from Stored Procedure only returning zero | [
"",
"sql",
"sql-server",
"stored-procedures",
"ms-access-2010",
""
] |
I'm creating a NACHA file and if the number of records in the file is not a multiple of 10, we need to insert enough "dummy" records filled with nines (`replicate('9',94)`) to hit that next tens place.
I know that I could write a loop or perhaps fill a temp table with 10 records full of nines and select the top N. But those options feel clunky.
I was trying to think of a single select statement that could do it for me. Any ideas?
```
select nacha_rows
from NACHA_TABLE
union all
select replicate('9',94) --do this 0 to 9 times
``` | The formula `(10-COUNT(*)%10)%10` tells you how many rows to add, so you can just select that many dummy rows from an existing dummy table.
```
SELECT nacha_rows
FROM NACHA_TABLE
UNION ALL
SELECT TOP (SELECT (10-COUNT(*)%10)%10 FROM NACHA_TABLE) REPLICATE('9',94)
FROM master.dbo.spt_values
``` | One idea is to prepare 9 filler rows, then append only the ones needed to reach the next multiple of 10. This is the same idea as JChao's, with a different implementation:
```
With Filler AS (
SELECT n.n, replicate('9',94) nacha_rows
FROM (VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9)) n(n)
)
SELECT nacha_rows
FROM NACHA_TABLE
UNION ALL
SELECT nacha_rows
FROM Filler
OUTER APPLY (SELECT count(1) % 10 last
FROM NACHA_TABLE) l
WHERE filler.n + l.last <= 10
AND l.last > 0 -- to prevent filler line when NACHA_TABLE has exactly 10x rows
```
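Whichever SQL variant you pick, the padding arithmetic is the same and easy to check outside the database. A minimal Python sketch (the `nacha_padding` helper and its row list are illustrative, not part of either answer):

```python
def nacha_padding(rows):
    """Append enough '9'*94 filler lines to reach the next multiple of 10."""
    fill = (10 - len(rows) % 10) % 10  # 0 when already a multiple of 10
    return rows + ["9" * 94] * fill

print(len(nacha_padding(["detail record"] * 7)))   # 10
print(len(nacha_padding(["detail record"] * 20)))  # 20
```

The outer `% 10` is what prevents adding a full block of 10 filler rows when the count is already a multiple of 10.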
`SQLFiddle demo` | How to return N records in a SELECT statement without a table | [
"",
"sql",
"sql-server",
""
] |
I have 2 tables: Departments and Sales. In my department table I have a labor department id and the id of the sales department it is associated with. I also want to allow a department's sales to "roll up" to another department. I need help with the query to "roll up" the sales.
Here are the tables:
```
TABLE = Departments
LaborDeptid | AssociatedSalesDept | RollUpTo
1 101 0
2 102 0
3 103 1
4 104 0
TABLE = Sales
Date | Sales | SalesDept
1/1/2014 10.00 101
1/1/2014 10.00 101
1/1/2014 10.00 102
1/1/2014 10.00 102
1/1/2014 10.00 103
1/1/2014 10.00 103
1/1/2014 10.00 104
1/1/2014 10.00 104
```
Here is the output I would like:
```
OUTPUT
Date | LaborDept | TotalSales
1/1/2014 1 40.00
1/1/2014 2 20.00
1/1/2014 4 20.00
```
As you can see, labor department 1 includes sales for sales department 101 and 103. I have no idea how to do this, though. The query to sum by day, by department is easy enough:
```
select
Date,
LaborDept,
sum(sales) as TotalSales
from sales s
inner join departments d on s.SalesDept = d.AssociatedSalesDept
group by Date, LaborDept
```
but how would I do the "rollup"? I tried putting a case statement in the join like so:
```
select
sum(sales) as TotalSales,
Date,
LaborDept
from sales s
inner join departments d on s.SalesDept = case when d.RollUpTo <> 0 then
(select AssociatedSalesDept
from departments
where d.RollUpTo = LaborDeptID)
else d.AssociatedSalesDept end
group by Date,LaborDept
```
but that just dropped the 103 sales department altogether, and it doesn't seem like the right approach. | ```
select s.[Date], coalesce(d2.LaborDeptid,d1.LaborDeptid) [LaborDept], sum(s.Sales) [TotalSales]
from Sales s
join Departments d1 on s.SalesDept = d1.AssociatedSalesDept
left join Departments d2 on d1.RollUpTo = d2.LaborDeptid
group by s.[Date], coalesce(d2.LaborDeptid,d1.LaborDeptid)
``` | I had to think about this one a bit, but here is one solution:
```
WITH cte AS (
SELECT CASE WHEN RollUpTo = 0 THEN LaborDeptId ELSE RollUpTo END AS LaborDeptId, AssociatedSalesDept
FROM departments)
SELECT s.date, d.LaborDeptid, SUM(s.Sales) AS TotalSales
FROM Sales s
INNER JOIN cte d ON s.SalesDept = d.AssociatedSalesDept
GROUP BY s.date, d.LaborDeptid
``` | sum with conditional grouping | [
"",
"sql",
"sql-server",
"group-by",
"sql-server-2008-r2",
"sum",
""
] |
I am having trouble with a query I need to write. For some reason I can't seem to get anything to work. I know the solution is very simple, but I just can't seem to think of it.
Let's say I have a table in my DB called "entrys" that looks like this:
```
ID | Item | value
------------------------------
1 | item 1 | 3
2 | item 1 | 4
3 | item 1 | 3
4 | item 2 | 3
5 | item 3 | 3
6 | item 1 | 3
7 | item 1 | 3
8 | item 1 | 3
9 | item 4 | 3
10 | item 4 | 6
```
I want to count the times item 1 occurs where value = 3, and then group all the other items together and provide a single count of their rows where value = 3. For example, the output for this table would be:
```
Item | Count
---------------------
Item 1 | 5
Other | 3
```
Item 1 occurs 5 times with the value of 3, and all other items that have the value of 3 are grouped together and displayed as "Other" with a count of 3. | Try using a derived table where your items are named either `Item1` or `Other`. Then group the derived table by name and select the count.
```
SELECT
Item,
COUNT(*)
FROM (
SELECT
(CASE
WHEN Item = 'item 1'
THEN 'Item 1'
ELSE 'Other'
END) Item
FROM items
WHERE value = 3) t1
GROUP BY Item
``` | The key to the aggregation is a `case` statement. Then you need to count the appropriate values as well. In MySQL, you can do:
```
SELECT (CASE WHEN Item = 'item 1' THEN Item ELSE 'Other' END) AS Item, COUNT(*) AS `Count3`
FROM entrys e
WHERE value = 3
GROUP BY Item;
```
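The bucketing logic above can be sanity-checked with SQLite from Python (data copied from the question; the `bucket` alias name is mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entrys (id INTEGER, item TEXT, value INTEGER)")
conn.executemany("INSERT INTO entrys VALUES (?, ?, ?)", [
    (1, "item 1", 3), (2, "item 1", 4), (3, "item 1", 3), (4, "item 2", 3),
    (5, "item 3", 3), (6, "item 1", 3), (7, "item 1", 3), (8, "item 1", 3),
    (9, "item 4", 3), (10, "item 4", 6),
])

# CASE collapses everything except 'item 1' into one 'Other' bucket
rows = conn.execute("""
    SELECT CASE WHEN item = 'item 1' THEN item ELSE 'Other' END AS bucket,
           COUNT(*)
    FROM entrys
    WHERE value = 3
    GROUP BY bucket
    ORDER BY bucket
""").fetchall()
print(rows)  # [('Other', 3), ('item 1', 5)]
```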
MySQL allows the use of column aliases in the `GROUP BY`. If you want counts of everything *and* where value is 3, then you can do:
```
SELECT (CASE WHEN Item = 'item 1' THEN Item ELSE 'Other' END) AS Item,
COUNT(*) as AllCount, SUM(value = 3) as Count3
FROM entrys e
GROUP BY Item;
``` | SQL count records equal to 1 value and count and group the rest | [
"",
"mysql",
"sql",
"database",
"count",
"group-by",
""
] |
I have 2 tables: AllClients & AllPolicies. I'm trying to find client records where there are no "active policies". An active policy is a policy where the effective date is before today and the expiration date is after today. Here is the SQL I've come up with, but it won't execute:
```
SELECT ac.LookupCode
FROM AllClients ac
LEFT JOIN AllPolicies ap
ON ap.LookupCode = ac.LookupCode
AND ap.EffectiveDate < '2014-06-24'
AND ap.ExpirationDate > '2014-06-24'
```
I'm using MySQL and I'm getting some weird "unknown table status: TABLE\_TYPE" errors. I'm fairly confident my SQL statement is junk ;) Thanks for the help! | ```
SELECT LookupCode
FROM AllClients
where LookupCode not in
(select LookupCode
from AllPolicies
where EffectiveDate < '2014-06-24' and ExpirationDate > '2014-06-24')
``` | This might help. You haven't expounded much on your mySQL errors, and they might have something to do with your table structures or server setup.
```
SELECT
ac.LookupCode
FROM
AllClients ac
LEFT JOIN AllPolicies ap ON ap.LookupCode = ac.LookupCode
AND ap.EffectiveDate < @Today
AND ap.ExpirationDate > @Today
WHERE
ap.LookupCode IS NULL --This means do not get records that have active policies
``` | SQL Left Join 2 Tables where Condition | [
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
If you have a query like:
```
INSERT INTO
insert_table
SELECT
*
FROM
from_table
WHERE NOT EXISTS (
SELECT
*
FROM
from_table
WHERE
insert_table.column = from_table.column
);
```
Is that subquery evaluated for each row only at the initial `SELECT FROM from_table`?
My `insert_table` has a `UNIQUE` constraint that `from_table` does not, and it appears that duplicate keys exist in `from_table`, causing an error when trying to `INSERT` into `insert_table`. If the subquery returns duplicates on a particular key, will they be included in the result and attempted to be inserted? | The `where` clause is evaluated for each row in the table derived in the `from` clause.
If you need distinct keys, use `distinct on`:
```
insert into insert_table
select distinct on (key1, key2) *
from from_table
where not exists (
select
...
)
order by key1, key2, another_column
```
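`DISTINCT ON` is Postgres-specific, but the underlying idea, deduplicating the source on the key before inserting, can be sketched portably. Here is a sketch with SQLite from Python, using `GROUP BY` plus `MIN()` as a stand-in (the tables and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE from_table (k INTEGER, v TEXT)")
conn.execute("CREATE TABLE insert_table (k INTEGER UNIQUE, v TEXT)")
conn.executemany("INSERT INTO from_table VALUES (?, ?)",
                 [(1, "a"), (1, "b"), (2, "c")])  # key 1 is duplicated

# One row per key from the source, skipping keys already in the target
conn.execute("""
    INSERT INTO insert_table
    SELECT k, MIN(v)
    FROM from_table f
    WHERE NOT EXISTS (SELECT 1 FROM insert_table i WHERE i.k = f.k)
    GROUP BY k
""")
print(conn.execute("SELECT * FROM insert_table ORDER BY k").fetchall())
# [(1, 'a'), (2, 'c')]
```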
<http://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT> | The subquery **is** executed for each row. However, it's operating on a snapshot of the database, as it was at the beginning of the statement, so it won't see its own inserts.
A simpler example to illustrate this is:
```
INSERT INTO some_table SELECT * FROM some_table;
```
This will duplicate the contents of `some_table`, rather than getting stuck in an infinite loop.
This does **not** necessarily mean that the database performs the `SELECT` and `INSERT` as separate, sequential steps, but for all intents and purposes, it behaves that way; the database can "hide" these inserts, in much the same way that it "hides" uncommitted inserts by other users.
As the others have pointed out, a `SELECT DISTINCT` will fix your specific case. | How many times is where clause evaluated in INSERT ... SELECT? | [
"",
"sql",
"postgresql",
""
] |
I've never done an inner join SQL statement before, so I don't even know if this is the right thing to use, but here's my situation.
Table 1 Columns: id, course\_id, unit, lesson
Table 2 Columns: id, course\_id
Ultimately, I want to count the number of `id`s in each unit in Table 1 **that are also in Table 2**.
So, even though it doesn't work, maybe something like....
`$sql = "SELECT table1.unit, COUNT( id ) as count, table2.id, FROM table1, table2, WHERE course_id=$im_course_id GROUP BY unit";`
I'm sure the syntax of what I'm wanting to do is a complete fail. Any ideas on fixing it? | ```
SELECT unit, COUNT( t1.id ) as count
FROM table1 as t1 inner JOIN table2 as t2
ON t1.id = t2.id
GROUP BY unit
```
hope this helps. | If I understand what you want (maybe you could post an example input and output?):
```
SELECT unit, COUNT( id ) as count
FROM table1 as t1 JOIN table2 as t2
ON t1.id = t2.id
GROUP BY unit
``` | Inner Join SQL Syntax | [
"",
"mysql",
"sql",
"join",
""
] |
I am trying to write a script that will show the number of non-null values in each column as well as the total number of rows in the table.
I have found a couple of ways to do this:
```
SELECT sum(case when my_column is null then 1 else 0 end) "Null Values",
       sum(case when my_column is null then 0 else 1 end) "Non-Null Values"
FROM my_table;
```
and
```
SELECT count(*) FROM my_table WHERE my_column IS NULL
UNION ALL
SELECT count(*) FROM my_table WHERE my_column IS NOT NULL
```
But these require me to type in each column name manually. Is there a way to perform this action for each column without listing them? | As Paolo said, but here is an example:
```
DECLARE @TableName VARCHAR(512) = 'invoiceTbl';
DECLARE @SQL VARCHAR(1024);
WITH SQLText AS (
SELECT
ROW_NUMBER() OVER (ORDER BY c.Name) AS RowNum,
'SELECT ''' + c.name + ''', SUM(CASE WHEN ' + c.Name + ' IS NULL THEN 1 ELSE 0 END) AS NullValues FROM ' + @TableName AS SQLRow
FROM
sys.tables t
INNER JOIN sys.columns c ON c.object_id = t.object_id
WHERE
t.name = @TableName),
Recur AS (
SELECT
RowNum,
CONVERT(VARCHAR(MAX), SQLRow) AS SQLRow
FROM
SQLText
WHERE
RowNum = 1
UNION ALL
SELECT
t.RowNum,
CONVERT(VARCHAR(MAX), r.SQLRow + ' UNION ALL ' + t.SQLRow)
FROM
SQLText t
INNER JOIN Recur r ON t.RowNum = r.RowNum + 1
)
SELECT @SQL = SQLRow FROM Recur WHERE RowNum = (SELECT MAX(RowNum) FROM Recur);
EXEC(@SQL);
``` | You should use `execute`:
```
DECLARE @t nvarchar(max)
SET @t = N'SELECT '
SELECT @t = @t + 'sum(case when ' + c.name + ' is null then 1 else 0 end) "Null Values for ' + c.name + '",
sum(case when ' + c.name + ' is null then 0 else 1 end) "Non-Null Values for ' + c.name + '",'
FROM sys.columns c
WHERE c.object_id = object_id('my_table');
SET @t = SUBSTRING(@t, 1, LEN(@t) - 1) + ' FROM my_table;'
EXEC sp_executesql @t
``` | Count number of NULL values in each column in SQL | [
"",
"sql",
"sql-server",
""
] |
```
SELECT
@begindate = dateadd(mm,-1,CAST(datepart(mm,getdate()) AS VARCHAR(2)) + '/15/' + CAST(datepart(YYYY,getdate() -1) AS varchar(4))),
@enddate = CAST(datepart(mm,getdate()) AS VARCHAR(2)) + '/14/' + CAST(datepart(YYYY,getdate() -1) AS varchar(4))
```
Right now this code returns the dates May 15th - June 14th. Those are the required dates that I need, but what I also need is for it to be in the year 2013, and this returns the year 2014. I've tried doing a dateadd(yyyy, -1) and it didn't work. So I was wondering if anyone knew how I would be able to get last years date. | If DateAdd didn't work, there must have been an error in your implementation. Try it like this after your existing code:
```
SET @begindate = dateadd(year, -1, @begindate);
SET @enddate = dateadd(year, -1, @enddate)
``` | ```
declare @begindate datetime
declare @enddate datetime
set @begindate = dateadd(mm,-1,CAST(datepart(mm,getdate()) AS VARCHAR(2)) + '/15/' + CAST(datepart(YYYY, getdate()) - 1 AS varchar(4)))
set @enddate = CAST(datepart(mm,getdate()) AS VARCHAR(2)) + '/14/' + CAST(datepart(YYYY, getdate()) - 1 AS varchar(4))
select @begindate
union all
select @enddate
``` | Subtracting One Year | [
"",
"sql",
"date",
"dateadd",
""
] |
(I've looked it up and couldn't find an answer, If this is a duplicate, just point out a link and I'll delete this question)
Let's say I have the following code
```
select CASE WHEN lower(Http_User_Agent) like '%mobi%' then 'Mobile'
else 'Desktop' end as Device
from Http_User_Agent
where Device = 'Mobile'
```
Which returns
```
Msg 207, Level 16, State 1, Line 4
Invalid column name 'Device'.
```
What I'm basically trying to do is to filter by the newly created column `Device`. Is there an easy way to do this?
I'm using SQL server 2008 on windows 7 | At the time of execution of the query, the `Device` alias isn't yet assigned to the resultset. So you need to either do this:
```
SELECT
CASE WHEN LOWER(Http_User_Agent) LIKE '%mobi%' THEN 'Mobile'
ELSE 'Desktop'
END AS Device
FROM Http_User_Agent
WHERE
CASE WHEN LOWER(Http_User_Agent) LIKE '%mobi%' THEN 'Mobile'
ELSE 'Desktop'
END = 'Mobile'
```
...i.e. not reference the new alias at all. Or this
```
SELECT *
FROM
(SELECT
CASE WHEN LOWER(Http_User_Agent) LIKE '%mobi%' THEN 'Mobile'
ELSE 'Desktop'
END AS Device
FROM Http_User_Agent)
WHERE Device = 'Mobile'
```
...i.e. reference it from a surrounding select. | You can do this with a subquery:
```
select ua.*
from (select ua.*,
(CASE WHEN lower(Http_User_Agent) like '%mobi%' then 'Mobile'
else 'Desktop'
end) as Device
from Http_User_Agent ua
) ua
where Device = 'Mobile';
```
Or by repeating the condition:
```
select ua.*,
(CASE WHEN lower(Http_User_Agent) like '%mobi%' then 'Mobile'
else 'Desktop'
end) as Device
from Http_User_Agent ua
where lower(Http_User_Agent) like '%mobi%';
```
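The subquery form can be exercised end to end with SQLite from Python (a tiny hypothetical table stands in for the real one; the subquery approach is portable across databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE http_log (http_user_agent TEXT)")
conn.executemany("INSERT INTO http_log VALUES (?)", [
    ("Mozilla/5.0 (Linux; Android) Mobile Safari",),
    ("Mozilla/5.0 (Windows NT 10.0)",),
])

# The alias is defined in the inner query, so the outer WHERE may use it
rows = conn.execute("""
    SELECT * FROM (
        SELECT http_user_agent,
               CASE WHEN lower(http_user_agent) LIKE '%mobi%'
                    THEN 'Mobile' ELSE 'Desktop' END AS device
        FROM http_log
    ) WHERE device = 'Mobile'
""").fetchall()
print(rows)  # one row: the Android user agent, tagged 'Mobile'
```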
However, you cannot use a column alias in a `where` at the same level where it is defined. This is part of the ANSI standard and not a "limitation" of SQL Server (or any other database). | Filtering based on a newly created column | [
"",
"sql",
"t-sql",
""
] |
I have a table with: id, client, domain and bytes. I have a query that gets the top 3 users either by client:
```
SELECT client FROM log
GROUP BY 1
ORDER BY SUM(bytes) DESC
LIMIT 3
```
or by domain
```
SELECT domain FROM log
GROUP BY 1
ORDER BY SUM(bytes) DESC
LIMIT 3
```
Is there a way in MySQL to get these two mixed without a stored procedure? For example, get the top 3 clients, each client row followed by the top 3 domains used BY THAT CLIENT, in a single query execution?
* 192.168.1.1
+ google.com
+ facebook.com
+ twitter.com
* 192.168.1.2
+ facebook.com
+ twitter.com
+ ...
* ...
For example, to get the top 3 domain usage from client 192.168.1.1:
```
SELECT domain FROM log
WHERE client = '192.168.1.1'
GROUP BY 1
ORDER BY SUM(bytes) DESC
LIMIT 3
```
So for each row in query #1, somehow this query should be executed. So far I have managed to show a concatenated list of top domains for each client, sadly not related to the same client, because I can't access `outer_client` in the subquery:
```
SELECT client AS outer_client, top_domain
FROM log
JOIN (
SELECT GROUP_CONCAT(t.domain) AS top_domain
FROM (
SELECT domain
FROM log
WHERE client = outer_client
GROUP BY 1
ORDER BY SUM(bytes)
LIMIT 5
) t
) k
GROUP BY client
ORDER BY SUM(bytes)
LIMIT 5;
``` | One solution:
<http://sqlfiddle.com/#!2/cc1b1/2/0>
```
SELECT client
, (SELECT domain
FROM log
WHERE client=top_clients.client
GROUP BY client, domain
ORDER BY SUM(bytes) DESC
LIMIT 1) domain1
, (SELECT domain
FROM log
WHERE client=top_clients.client
GROUP BY client, domain
ORDER BY SUM(bytes) DESC
LIMIT 1 OFFSET 1) domain2
, (SELECT domain
FROM log
WHERE client=top_clients.client
GROUP BY client, domain
ORDER BY SUM(bytes) DESC
LIMIT 1 OFFSET 2) domain3
FROM (SELECT client FROM log GROUP BY client ORDER BY SUM(bytes) DESC LIMIT 3) top_clients;
```
My output:
```
+-------------+--------------+--------------+-------------+
| client | domain1 | domain2 | domain3 |
+-------------+--------------+--------------+-------------+
| 192.168.1.1 | google.com | facebook.com | twitter.com |
| 192.168.1.2 | facebook.com | twitter.com | NULL |
+-------------+--------------+--------------+-------------+
``` | Here it is:
```
select client,
substring_index(group_concat(domain order by sumbytes desc), ',', 5) as top5domains
from (
select client, domain, sum(bytes) as sumbytes
from log
group by client, domain
) cd
group by client
order by sum(sumbytes) desc
limit 5;
```
Courtesy of @Gordon Linoff here: [MySQL GROUP\_CONCAT from subquery](https://stackoverflow.com/questions/24336887/mysql-group-concat-from-subquery/24336948#24336948) | MySQL intertwine two queries based on previous query result | [
"",
"mysql",
"sql",
""
] |
I have a table MY\_TABLE in Oracle with a spatial index MY\_IDX and about 22000 rows. The following query runs in less than ~500 ms and returns ~2600 results.
```
SELECT /*+ INDEX (MY_TABLE MY_IDX) */ ID,GEOM,LABEL FROM MY_TABLE
where (
(sdo_filter(GEOM, mdsys.sdo_geometry(2003,8307,NULL,
mdsys.sdo_elem_info_array(1,1003,3),
mdsys.sdo_ordinate_array(-180.0,-48.0,-67.0,32.0)),
'querytype=WINDOW')='TRUE')
);
```
When I add an "OR" clause with another spatial filter, the query takes ~30 seconds to run, consuming vastly more CPU than it should:
```
SELECT /*+ INDEX (MY_TABLE MY_IDX) */ ID,GEOM,LABEL FROM MY_TABLE
where (
(sdo_filter(GEOM, mdsys.sdo_geometry(2003,8307,NULL,
mdsys.sdo_elem_info_array(1,1003,3),
mdsys.sdo_ordinate_array(-180.0,-48.0,-67.0,32.0)),
'querytype=WINDOW')='TRUE')
OR
(sdo_filter(GEOM, mdsys.sdo_geometry(2003,8307,NULL,
mdsys.sdo_elem_info_array(1,1003,3),
mdsys.sdo_ordinate_array(157.0,-48.0,180.0,32.0)),
'querytype=WINDOW')='TRUE')
);
```
The explain plans of the queries are very different - the first shows table access is "BY INDEX ROWID", where as the second is "FULL". Is there a way I can get the second query to perform in a manner similar to the first?
v$version returns:
```
Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
"CORE 11.2.0.1.0 Production"
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
```
On a side note, a different DB running Oracle Enterprise Edition produces a plan in which the index is used and the results merged. Can this be done with Standard Edition? | Refactoring the query using an inner query and a UNION ALL seemed to force Oracle to use the indexes as expected:
```
SELECT ID,GEOM,LABEL FROM MY_TABLE
WHERE ID IN (
(SELECT ID FROM MY_TABLE WHERE sdo_filter(GEOM, mdsys.sdo_geometry(2003,8307,NULL,
mdsys.sdo_elem_info_array(1,1003,3),
mdsys.sdo_ordinate_array(-180.0,-48.0,-67.0,32.0)),
'querytype=WINDOW')='TRUE')
UNION ALL
(SELECT ID FROM MY_TABLE WHERE sdo_filter(GEOM, mdsys.sdo_geometry(2003,8307,NULL,
mdsys.sdo_elem_info_array(1,1003,3),
mdsys.sdo_ordinate_array(157.0,-48.0,180.0,32.0)),
'querytype=WINDOW')='TRUE')
);
``` | "*On a side note, a different db running oracle enterprise edition produces a plan in which the index is used and the results merged. Can this be done with standard edition?*"
To complete the answer: Enterprise Edition supports bitmap indexing techniques. Often when the optimizer has a choice between using multiple indexes (or here the same spatial index for multiple predicates), it does use both of them.
It does so by getting the results from each predicate (a set of ROWIDs) and converting it in a bitmap. It is then a simple matter of combining those bitmaps (here of ORing them) and extracting the result back into a list of ROWIDs to fetch the actual results.
That does not exist in Standard Edition. The only option for the optimizer is to use one index (for a query that combines multiple predicates in an AND condition), or just do a full table scan for an OR (since an OR condition makes the query predicate much less selective).
Replacing the OR condition with a UNION ALL is the correct approach: we now have two queries that are optimized independently, and both will use the spatial index. | Can Oracle be forced to use the spatial index for sdo_filter in combination with an or clause? | [
"",
"sql",
"oracle",
"oracle-spatial",
""
] |
Basically,
```
id status timestamp
1 Good 0
2 Good 1
1 Bad 2
```
I want to have
```
id status timestamp
2 Good 1
1 Bad 2
```
returned to me.
This is because I want to know the most recent status of all the ids in the table. I've tried something naive like:
```
SELECT * FROM my_tbl group by id order by 'timestamp' desc;
```
But that obviously won't work, any idea? I'm sure it's very simple but I cannot figure it out :/ | Try this ..
Join is much faster than sub query.
```
select t1.* From Table t1
LEFT JOIN Table t2 on t2.id=t1.id and t2.timestamp > t1.timestamp
where t2.id IS NULL
``` | ```
select t1.id, t1.status, t1.timestamp
from my_tbl as t1
where t1.timestamp = (select max(timestamp)
from my_table t2
where t1.id = t2.id)
```
This will, however, return an id more than once when it has multiple rows sharing the same highest timestamp. | MySQL select all rows from table with distinct ID using newest timestamp | [
"",
"mysql",
"sql",
""
] |
I have a query say
`SELECT name,number FROM TABLE WHERE number='123'`
What I want is to display multiple records using the WHERE clause, or any other way, so that I can get the following data.
```
Name Number
ABC  123
PQR  127
PQR  130
```
I tried using `AND`, `&&`, and `,` in the WHERE clause, i.e.
`where number=123,127,130` or `where number=123 AND 127 AND 130` etc., but nothing works. | just use an `IN` clause
```
where number in (123, 127, 130)
```
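A quick check of the `IN` form with SQLite from Python, using the sample data from the question (the table name `t` is mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, number INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("ABC", 123), ("PQR", 127), ("PQR", 130), ("XYZ", 999)])

rows = conn.execute(
    "SELECT name, number FROM t WHERE number IN (123, 127, 130) ORDER BY number"
).fetchall()
print(rows)  # [('ABC', 123), ('PQR', 127), ('PQR', 130)]
```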
you could also use OR (just for info, I wouldn't use this in that case)
```
where number = 123 or
number = 127 or
number = 130
```
Don't forget parentheses around the "or"s if you have other conditions in your where clause. | Try this
```
SELECT name,number FROM TABLE WHERE number IN (123,127,130);
```
OR
```
SELECT name,number FROM TABLE
WHERE number = 123
OR number = 127
OR number = 130;
``` | SQL Where Clause with multiple values | [
"",
"mysql",
"sql",
""
] |
I have 3 tables as you can see:

The relationships between all 3 tables:
* `cal_events.id = cf_cal_events.model_id`
* `cf_cal_events.col_10 = source.ID_ELEMENT`
I want to delete data from `cal_events` and `cf_cal_events` if `ASUPPRIMER = '1'`
I know it's not very hard, but I'm not very familiar with JOINS and Sub-queries, but here are my tries to do it for `cal_events`:
```
DELETE FROM cal_events
INNER JOIN cf_cal_events ON cal_events.id = cf_cal_events.model_id
INNER JOIN source ON cf_cal_events.col_10 = source.ID_ELEMENT
WHERE source.ASUPPRIMER = '1';
```
AND
```
DELETE FROM cal_events
WHERE cal_events.id IN (
SELECT cf_cal_events.model_id AS cal_events.id
FROM cf_cal_events
WHERE cf_cal_events.col_10 IN (
SELECT ID_ELEMENT AS cf_cal_events.col_10
FROM source
WHERE source.ASUPPRIMER = '1'
)
);
``` | There's a good example for your problem:
[How to Delete using INNER JOIN with SQL Server?](https://stackoverflow.com/questions/16481379/how-to-delete-using-inner-join-with-sql-server)
seems to me like it should be:
```
DELETE cal_events, cf_cal_events
FROM cal_events
INNER JOIN cf_cal_events ON cal_events.id = cf_cal_events.model_id
INNER JOIN source ON cf_cal_events.col_10 = source.ID_ELEMENT
WHERE source.ASUPPRIMER = '1';
``` | Find `multi table syntax` from the [docs](http://dev.mysql.com/doc/refman/5.6/en/delete.html). Something like this
```
DELETE cal_events,
cf_cal_events
FROM cal_events
JOIN cf_cal_events
ON cal_events.id = cf_cal_events.model_id
JOIN source
ON cf_cal_events.col_10 = source.ID_ELEMENT
WHERE source.ASUPPRIMER = '1';
``` | DELETE - INNER JOIN - Sub Query | [
"",
"mysql",
"sql",
"inner-join",
"delete-row",
""
] |
I have a new website which is managed by a private company.
This morning I accidentally put a double quote (") in the URL and got the error below:
URL: `http://domain.com/sys.aspx?page=5&search=1"`
```
Server Error in '/' Application.
Syntax error near '"' in the full-text search condition '1"'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.SqlClient.SqlException: Syntax error near '"' in the full-text search condition '1"'.
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
Stack Trace:
[SqlException (0x80131904): Syntax error near '"' in the full-text search condition '1"'.]
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +212
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +245
System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2811
System.Data.SqlClient.SqlDataReader.ConsumeMetaData() +58
System.Data.SqlClient.SqlDataReader.get_MetaData() +112
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +6281668
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +6282737
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +424
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +28
System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +211
System.Data.SqlClient.SqlCommand.ExecuteReader() +117
Pazar3.list.Page_Load(Object sender, EventArgs e) in E:\mudi\ker_ss\Solution\trunk\sys.aspx.cs:119
System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42
System.Web.UI.Control.OnLoad(EventArgs e) +132
System.Web.UI.Control.LoadRecursive() +66
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428
Version Information: Microsoft .NET Framework Version:2.0.50727.4984; ASP.NET Version:2.0.50727.4971
```
My question is what kind of database is this and could be SQL INJECTION? | > I just call them and they tell me 'we use mysql'. Is that correct?
It's possible they use both MySQL and SQL Server at their site. Not likely -- but *possible*.
I think it's more likely that they simply made a mistake. I have heard a few IT people say "MySQL" when they actually use Microsoft SQL Server.
The error message, `Syntax error near 'XXXX' in the full-text search condition 'YYYY'`,
seems like a Microsoft SQL Server error message. If you [search StackOverflow for that error message](https://stackoverflow.com/search?q=syntax%20error%20near%20in%20the%20full-text%20search%20condition) (minus the specific pattern), you'll find only Microsoft SQL Server issues.
You can use double-quotes inside a pattern with the `CONTAINS()` function, so you can search for phrases (see examples [here](http://msdn.microsoft.com/en-us/library/ms142583.aspx)). I assume the `CONTAINS()` function doesn't like it when your pattern includes unbalanced double-quotes, and it throws an exception.
Also, the class `System.Data.SqlClient` seems like a .NET class, and it's more common to use .NET with a Microsoft SQL Server back-end instead of a MySQL back-end (though the latter is possible, it's just not as common).
MySQL also has a fulltext search function, but MySQL *doesn't throw an error* if you feed it an unbalanced double-quote in the search pattern.
They might be using MySQL for other parts of their site, but the error you saw appears to be a Microsoft SQL Server error.
Is it an SQL injection vulnerability? Not necessarily. Their application might be passing the pattern safely using a prepared statement and a query parameter for the search pattern. So it may not be at risk for SQL injection per se (i.e. a user can't submit a string that makes the query do something other than search for a pattern), but the application doesn't prevent the user from submitting invalid search patterns that result in an exception.
In short, we can't tell for certain from the error message whether it's an SQL injection vulnerability or not, because one could get the same error even if the query is executed with parameters. | It is the Microsoft SQL client talking to MS SQL Server.
Yes, you have proven SQL injection into the full-text search expression, not the SQL statement; that is probably just as bad, but I cannot say. | What kind of database is this, and could it be SQL injection? | [
"",
"sql",
"sql-injection",
""
] |
I have an SQL function returning a composite result.
```
CREATE TYPE result_t AS (a int, b int, c int, d numeric);
CREATE OR REPLACE FUNCTION slow_function(int)
RETURNS result_t
AS $$
-- just some placeholder code to make it slow
SELECT 0, 0, 0, (
SELECT sum(ln(i::numeric))
FROM generate_series(1, $1) i
)
$$ LANGUAGE sql IMMUTABLE;
```
When calling the function, I would like to have the parts of the composite type expanded into several columns. That works fine when I call:
```
SELECT (slow_function(i)).*
FROM generate_series(0, 200) i
a b c d
---- ---- ---- --------------------
0 0 0 (null)
0 0 0 0
0 0 0 0.6931471805599453
0 0 0 1.791759469228055
...
Total runtime: 6196.754 ms
```
Unfortunately, this causes the function to be called *once per result column*, which is unnecessarily slow. This can be tested by comparing the run time with a query, which directly returns the composite result and runs four times as fast:
```
SELECT slow_function(i)
FROM generate_series(0, 200) i
...
Total runtime: 1561.476 ms
```
The example code is also at <http://sqlfiddle.com/#!15/703ba/7>
How can I get the result with multiple columns without wasting CPU power? | A CTE is not even necessary. A **plain subquery** does the job.
```
SELECT i, (f).* -- decompose here
FROM (
SELECT i, (slow_func(i)) AS f -- do not decompose here
FROM generate_series(1, 3) i
) sub;
```
Be sure not to decompose the record (composite result) of the function in the subquery. Do that in the **outer query**.
Requires a registered row type. Not possible with anonymous records.
Or, [what @Richard wrote](https://stackoverflow.com/a/24393079/939860), a **`LATERAL JOIN`** works, too. The syntax can be simpler:
```
SELECT * FROM generate_series(1, 3) i, slow_func(i) f
```
`LATERAL` is applied implicitly in Postgres 9.3 or later.
A function can stand on its own in the `FROM` clause; it doesn't have to be wrapped in an additional sub-select, just like a table in its place.
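Spelled out with the explicit keyword, the shorthand above is equivalent to (assuming the same `slow_func` as before):
```
SELECT i, f.*
FROM generate_series(1, 3) i
CROSS JOIN LATERAL slow_func(i) f;
```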
[fiddle](https://dbfiddle.uk/CWd0Lruy) with `EXPLAIN VERBOSE` output for all variants. You can *see* multiple evaluation of the function if it happens.
Old [sqlfiddle](http://sqlfiddle.com/#!15/72703/3)
### `COST` setting
Generally (it should not matter for this particular query), make sure to apply a high cost setting to your function, so the planner knows to avoid evaluating it more often than necessary. Like:
```
CREATE OR REPLACE FUNCTION slow_function(int)
RETURNS result_t
LANGUAGE sql IMMUTABLE COST 100000 AS
$func$
-- expensive body
$func$;
```
[The manual:](https://www.postgresql.org/docs/current/sql-createfunction.html)
> Larger values cause the planner to try to avoid evaluating the function more often than necessary. | Perhaps a LATERAL subquery is what you want.
```
SELECT t.id, f.* FROM some_table t, LATERAL (SELECT slow_func(t.id)) f
```
That will call the function once for each row, and then "unwrap" the result into columns in the output. Any subquery will do for the "unwrapping" but LATERAL is what lets you reference columns from other clauses.
I believe LATERAL was introduced in PostgreSQL 9.3 | Avoid multiple calls on same function when expanding composite result | [
"",
"sql",
"postgresql",
"query-optimization",
"plpgsql",
""
] |
I'm trying to format a column that is timestamp with time zone using to\_char, since I don't want to include the zone part of the column. But the difference between a query using to\_char and one without it is about 10 seconds, which is a lot of time. I don't have much experience with databases, so maybe I'm doing something wrong.
query without to\_char time: 1313 ms:
```
select distinct on ("Results"."Timestamp") "Results"."Timestamp",
"TotalParticlesAccum", "BioAccumulated", "FlowVolume",
"DCOffsetCh0", "DCOffsetCh1", "DCOffsetCh2",
"LaserPower", "LaserCurrent", "LaserTemperature",
"LaserRunHour", "FlowRate", "FlowPressure",
"FlowTemperature", "CpuTemperature", "PwbTemperature",
"Temperature1", "Temperature2", "Temperature3",
"Temperature4", "TotalParticles", "Bio"
from "Results"
Left join "SensorLog" On "Results"."SampleID" = "SensorLog"."SampleID"
where "Results"."SampleID" = id order by 1 asc;
```
query with to\_char time: 12354 ms
```
select distinct on (to_char("Results"."Timestamp",'YYYY/MM/DD HH24:MI:SS'))
to_char("Results"."Timestamp",'YYYY/MM/DD HH24:MI:SS'),
"TotalParticlesAccum", "BioAccumulated", "FlowVolume",
"DCOffsetCh0", "DCOffsetCh1", "DCOffsetCh2",
"LaserPower", "LaserCurrent", "LaserTemperature",
"LaserRunHour", "FlowRate", "FlowPressure",
"FlowTemperature", "CpuTemperature", "PwbTemperature",
"Temperature1", "Temperature2", "Temperature3",
"Temperature4", "TotalParticles", "Bio"
from "Results"
Left join "SensorLog" On "Results"."SampleID" = "SensorLog"."SampleID"
where "Results"."SampleID" = id order by 1 asc;
```
I think the problem is that I have to\_char twice, but if I don't have that, it gives me an error:
ERROR: SELECT DISTINCT ON expressions must match initial ORDER BY expressions | Just wrap the fast query in an outer one that applies the desired format:
```
select
to_char("Timestamp",'YYYY/MM/DD HH24:MI:SS') as "Timestamp",
"TotalParticlesAccum", "BioAccumulated", "FlowVolume",
"DCOffsetCh0", "DCOffsetCh1", "DCOffsetCh2",
"LaserPower", "LaserCurrent", "LaserTemperature",
"LaserRunHour", "FlowRate", "FlowPressure",
"FlowTemperature", "CpuTemperature", "PwbTemperature",
"Temperature1", "Temperature2", "Temperature3",
"Temperature4", "TotalParticles", "Bio"
from (
select distinct on ("Results"."Timestamp")
"Results"."Timestamp",
"TotalParticlesAccum", "BioAccumulated", "FlowVolume",
"DCOffsetCh0", "DCOffsetCh1", "DCOffsetCh2",
"LaserPower", "LaserCurrent", "LaserTemperature",
"LaserRunHour", "FlowRate", "FlowPressure",
"FlowTemperature", "CpuTemperature", "PwbTemperature",
"Temperature1", "Temperature2", "Temperature3",
"Temperature4", "TotalParticles", "Bio"
from
"Results"
Left join
"SensorLog" On "Results"."SampleID" = "SensorLog"."SampleID"
where "Results"."SampleID" = id
order by 1 asc
) s
``` | My suggestion is to use the raw field for the grouping and ordering, but format it the way you want in the select clause:
```
SELECT DISTINCT ON ("Timestamp")
to_char("Results"."Timestamp", 'YYYY/MM/DD HH24:MI:SS')
,"TotalParticlesAccum"
,"BioAccumulated"
,"FlowVolume"
,"DCOffsetCh0"
,"DCOffsetCh1"
,"DCOffsetCh2"
,"LaserPower"
,"LaserCurrent"
,"LaserTemperature"
,"LaserRunHour"
,"FlowRate"
,"FlowPressure"
,"FlowTemperature"
,"CpuTemperature"
,"PwbTemperature"
,"Temperature1"
,"Temperature2"
,"Temperature3"
,"Temperature4"
,"TotalParticles"
,"Bio"
FROM "Results"
Left join "SensorLog" On "Results"."SampleID" = "SensorLog"."SampleID"
WHERE "Results"."SampleID" = 839
ORDER BY "Timestamp" ASC
```
Without an sqlfiddle I'm not 100% sure that this will run, but it is likely the right track to take. | Postgresql to_char slowing a query by a lot | [
"",
"sql",
"postgresql",
""
] |
A MySQL database I'm working with has thousands of URLs. Now, a lot of those URLs will be from the same domain, but each one will have differing sub-domains and page names, e.g.
```
somewebsite.com/gj29sjw
somewebsite.com/29shw0a
somewebsite.com/92jslwa
anothersite.net/jfdkden
anothersite.net/hj2892j
anothersite.net/282j290
```
etc...
Is there a query or syntax I can use to both group those URLs and count them in largest-first order, without the subdomains and page names? Ideally, after I run the count query, I would get:
```
somewebsite.com | 345
anothersite.net | 289
``` | You could use [`SUBSTRING_INDEX`](http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_substring-index) to make the query fairly simple; just take the last two period-separated groups before the first slash:
```
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(url, '/', 1), '.', -2) site, COUNT(*) cnt
FROM mytable
GROUP BY site;
```
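Seen step by step on one of the sample rows (a sketch of the intermediate values; `SUBSTRING_INDEX` with a negative count keeps everything to the right of that occurrence, and returns the whole string when there are fewer delimiters):
```
SELECT SUBSTRING_INDEX('somewebsite.com/gj29sjw', '/', 1); -- 'somewebsite.com'
SELECT SUBSTRING_INDEX('somewebsite.com', '.', -2);        -- 'somewebsite.com' (only one '.', so the whole string)
SELECT SUBSTRING_INDEX('sub.somewebsite.com', '.', -2);    -- 'somewebsite.com' (subdomain stripped)
```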
[An SQLfiddle to test with](http://sqlfiddle.com/#!2/e69ba1/2).
Note that to get *good* performance on the query, I'd recommend storing the site name separately; doing these calculations on each row will not give ultimate performance. | ```
select substr(url, 1, instr(url, '/') - 1) as domain_name, count(*) as cnt
from your_table
group by domain_name
order by cnt desc
```
[SQLFiddle demo](http://sqlfiddle.com/#!2/d41d8/39757) | sorting large amounts of urls by top domain in mysql | [
"",
"mysql",
"sql",
"sorting",
"url",
"count",
""
] |
I have a table with several rows of data like this :
```
16 W:\2-Work\ALBO\00_Proposal\ALxO_Amendement #1_20091022_signed.pdf
17 W:\2-Work\ALBO\00_Proposal\Level1\ALBO_Amendment #1_20110418.docx
18 W:\2-Work\ALBO\00_Proposal\A\BR\T\X_#1_20110418_final.docx
19 W:\2-Work\ALBO\MyOptionl\AO_Amendment_2 August 2013.docx
```
I have created columns from `Col1` to `Col10`
I would like to separate each value with the delimiter `'\'`
The idea is to have on each column :
```
Col1 | Col2 | Col3 | Col4 | Col5 |etc...
W: 2-Work ALBO 00_Proposal ALxO_Amendement #1_20091022_signed.pdf
```
I know how to use `charindex` and `substring`, but the number of `'\'` delimiters differs on each line (8,500 rows).
I'm using Microsoft SQL Server 2012.
## Edit
My goal is to generate an XML of the full path and split path.
Actually, here is my idea :
1 - Identify all the IDs in a temporary table to loop over
```
-- Declare a temp table
declare @IdTable Table (
id int,
src nvarchar(max))
-- Insert all existing ids from the table
insert into @IdTable (id, src)
select id, src from albo
-- Declare the starting id, beginning with the smallest
declare @id int = (select min(id) from ALBO)
-- As long as IDs remain, continue the loop
while @id is not null
begin
print @id
select @id = min(id) from @IdTable where ID > @id
end
-- End of the ID loop
```
2 - Split each row and update the columns (Colx => the columns have been created beforehand). This code should be placed inside my previous loop.
```
Declare @products varchar(max) = 'W:\2-Work\ALBO\13_WP Reporting\13_07_Monthly reports\13_07_01 Archives\2012\201211\Draft\ALBO-MR-201211\gp_scripts\v1\Top10_duree_final.txt'
Declare @individual varchar(max) = null
WHILE LEN(@products) > 0
BEGIN
IF PATINDEX('%\%',@products) > 0
BEGIN
SET @individual = SUBSTRING(@products, 0, PATINDEX('%\%',@products))
select @individual -- I have to make an update with the ID here
SET @products = SUBSTRING(@products, LEN(@individual + '\') + 1,
LEN(@products))
END
ELSE
BEGIN
SET @individual = @products
SET @products = NULL
print @individual
END
END
``` | As others have said, this probably isn't the best way to do things; if you explain what you'll be doing with the results, we might be able to suggest a better option.
```
drop table #Path
create table #Path (item bigint,location varchar(1000))
insert into #Path
select 16 ,'W:\2-Work\ALBO\00_Proposal\ALxO_Amendement #1_20091022_signed.pdf' union
select 17 ,'W:\2-Work\ALBO\00_Proposal\Level1\ALBO_Amendment #1_20110418.docx' union
select 18 ,'W:\2-Work\ALBO\00_Proposal\A\BR\T\X_#1_20110418_final.docx' union
select 19 ,'W:\2-Work\ALBO\MyOptionl\AO_Amendment_2 August 2013.docx'
select * from #Path;
with Path_Expanded(item,subitem,location, start, ending, split)
as(
select item
, 1 --subitem begins at 1
, location -- full location path
, 0 --start searching the file from the 0 position
, charindex('\',location) -- find the 1st '\' character
, substring(location,0,charindex('\',location)) --return the string from the start position, 0, to the 1st '\' character
from #Path
union all
select item
, subitem+1 --add 1 to subitem
, location -- full location path
, ending+1 -- start searching the file from the position after the last '\' character
, charindex('\',location,ending+1)-- find the 1st '\' character that occurs after the last '\' character found
, case when charindex('\',location,ending+1) = 0 then substring(location,ending+1,1000) --if you can't find any more '\', return everything else after the last '\'
else substring(location,ending+1, case when charindex('\',location,ending+1)-(ending+1) <= 0 then 0
else charindex('\',location,ending+1)-(ending+1) end )--returns the string between the last '\' character and the next '\' character
end
from Path_Expanded
where ending > 0 --stop once you can't find any more '\' characters
)
--pivots the results
select item
, max(case when subitem = 1 then split else '' end) as col1
, max(case when subitem = 2 then split else '' end) as col2
, max(case when subitem = 3 then split else '' end) as col3
, max(case when subitem = 4 then split else '' end) as col4
, max(case when subitem = 5 then split else '' end) as col5
, max(case when subitem = 6 then split else '' end) as col6
, max(case when subitem = 7 then split else '' end) as col7
, max(case when subitem = 8 then split else '' end) as col8
, max(case when subitem = 9 then split else '' end) as col9
, max(case when subitem = 10 then split else '' end) as col10
from Path_Expanded
group by item
```
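As an aside, on SQL Server 2016 and later (not your 2012 instance), the recursive CTE could be replaced with the built-in `STRING_SPLIT`; the `ordinal` output needed for pivoting only arrived with the `enable_ordinal` argument in SQL Server 2022:
```
-- SQL Server 2022+ only; shown for comparison
SELECT item, value, ordinal
FROM #Path
CROSS APPLY STRING_SPLIT(location, '\', 1);
```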
You might prefer to have each folder on its own row; if so, replace the pivot part above with the query below instead:
```
select item
, subitem
, location
, split from Path_Expanded where item = 16
``` | One way (de-dupes):
```
;with T(ordinal, path, starts, pos) as (
select 1, path, 1, charindex('\', path) from #tbl
union all
select ordinal + 1, path, pos + 1, charindex('\', path, pos + 1)
from t where pos > 0
)
select [1],[2],[3],[4],[5],[6],[7],[8],[9],[10] from (
select
ordinal, path, substring(path, starts, case when pos > 0 then pos - starts else len(path) end) token
from T
) T2
pivot (max(token) for ordinal in ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10])) T3
``` | Split string in column and add value in column | [
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I don't exactly know how to phrase the question, but an example would work. So I have this table
```
Users
Id Name
1 Tim
2 Jon
3 Matt
```
There is another table
```
Tags
TagId TagName
1 Test
2 Other
3 Dummy
4 More
```
In a temp table I have structure like this
```
TmpUserTags
User Tags
Tim Test
Jon Other, Test
Matt Dummy, More, Other
```
So, what I need to do is insert records from this temp table into the `UserTags` table with the corresponding Ids. For the example given above, the result would be:
```
UserTags
User TagId
1 1
2 2
2 1
3 3
3 4
3 2
```
So, this is the end result I want, to be inserted in `UserTags`. But since for each row in `TmpUserTags` each user can have many tags, separated by commas, I don't know what the best way to do it would be. I could probably use a while loop (or rather a cursor) to loop through all the rows in `TmpUserTags` and then, for each row, split the tags by comma, find their Ids, and insert those into `UserTags`. But that doesn't seem to be the most optimized way. Can someone please suggest a better way of doing it? | I think the simplest way would be to just join the tags column using `LIKE`:
```
CREATE TABLE #Users (ID INT, Name VARCHAR(4));
INSERT #Users (ID, Name)
VALUES (1, 'Tim'), (2, 'Jon'), (3, 'Matt');
CREATE TABLE #Tags (TagID INT, TagName VARCHAR(5));
INSERT #Tags (TagID, TagName)
VALUES (1, 'Test'), (2, 'Other'), (3, 'Dummy'), (4, 'More');
CREATE TABLE #TmpUserTags ([User] VARCHAR(4), Tags VARCHAR(100));
INSERT #tmpUserTags ([User], Tags)
VALUES ('Tim', 'Test'), ('Jon', 'Other,Test'), ('Matt', 'Dummy,More,Other');
SELECT u.ID, t.TagID
FROM #TmpUserTags AS ut
INNER JOIN #Users AS u
ON u.Name = ut.[User]
INNER JOIN #Tags AS t
ON ',' + ut.Tags + ',' LIKE '%,' + t.TagName + ',%';
```
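The commas wrapped around both sides are what prevent partial-tag matches; a quick check:
```
-- 'Test' must not match inside 'Test2', but must match as a whole tag
SELECT CASE WHEN ',Other,Test2,' LIKE '%,Test,%' THEN 1 ELSE 0 END; -- 0
SELECT CASE WHEN ',Other,Test,'  LIKE '%,Test,%' THEN 1 ELSE 0 END; -- 1
```
Note that the `TmpUserTags` sample in the question has a space after each comma; if your real data does too, normalize it first, e.g. with `REPLACE(ut.Tags, ', ', ',')` (an assumption about your data).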
You could also go down the route of creating a split function to split your comma separated list into rows:
```
CREATE FUNCTION [dbo].[Split](@StringToSplit NVARCHAR(MAX), @Delimiter NCHAR(1))
RETURNS TABLE
AS
RETURN
(
SELECT ID = ROW_NUMBER() OVER(ORDER BY n.Number),
Position = Number,
Value = SUBSTRING(@StringToSplit, Number, CHARINDEX(@Delimiter, @StringToSplit + @Delimiter, Number) - Number)
FROM ( SELECT TOP (LEN(@StringToSplit) + 1) Number = ROW_NUMBER() OVER(ORDER BY a.object_id)
FROM sys.all_objects a
) n
WHERE SUBSTRING(@Delimiter + @StringToSplit + @Delimiter, n.Number, 1) = @Delimiter
);
```
Then you can use:
```
SELECT u.ID, t.TagID
FROM #TmpUserTags AS ut
CROSS APPLY dbo.Split(ut.tags, ',') AS s
INNER JOIN #Users AS u
ON u.Name = ut.[User]
INNER JOIN #Tags AS t
ON t.TagName = s.Value;
``` | To me the main question would be how you arrived at the temp table containing the comma-delimited column. If it was an import from a file and all the data was comma delimited, it would be easy enough to save the file as a CSV, which will save the user and each tag separately; then create a table in your database containing the same number of columns as the file has, and bulk insert into this table from the file.
```
drop table #TmpUserTags
GO
create table #TmpUserTags
(
[user] varchar(10),
tag1 varchar(10),
tag2 varchar(10),
tag3 varchar(10)
)
bulk insert #TmpUserTags from '<filepath>' with (fieldterminator=',')
```
Then union the data to create two columns, which should be easy enough to reinterpret as ids.
```
SELECT [User],Tag1 FROM #TmpUserTags WHERE Tag1 IS NOT NULL
UNION ALL
SELECT [User],Tag2 FROM #TmpUserTags WHERE Tag2 IS NOT NULL
UNION ALL
SELECT [User],Tag3 FROM #TmpUserTags WHERE Tag3 IS NOT NULL
ORDER BY [User]
```
Of course all this might be conjecture but, like, how *did* you arrive at the table with the comma delimited values? | How to insert multiple values in SQL Server table? | [
"",
"sql",
"sql-server",
"insert",
""
] |
**What's faster?**
```
update c
set
c.createdon=q.CreatedOn
,c.createdby=case when q.createdby is not null then q.createdby end
,c.modifiedon=q.modifiedon
,c.modifiedby=case when q.ModifiedBy is not null then q.ModifiedBy end
from crm_annotationbase c
join IncidentWorknote q
on c.annotationid=q.annotationid
```
**or this:**
```
update c
set
c.createdon=q.CreatedOn
,c.createdby=isnull(q.createdby,c.createdby)
,c.modifiedon=q.modifiedon
,c.modifiedby=isnull(q.modifiedby,c.modifiedby)
from crm_annotationbase c
join IncidentWorknote q
on c.annotationid=q.annotationid
```
I have the first query running for 24 hours already. I'm updating a CRM 2013 table based on staging data.
I'd like to know whether I've chosen the most efficient way of doing this. | Ok, I had to dig around for this script. From reading the comments, it's a very large table that you are trying to update. The BEST way to speed this update up is to break it into batches. The reason it's taking so long is the transactional nature of the system: if something fails, the ENTIRE transaction (your whole update) will be rolled back, which takes a LOT of extra time. If you DON'T need this all-or-nothing behavior, try something like this (below). We have to update hundreds of millions of records, and we were able to speed it up by HOURS just by batching the update.
Tweaking this could make it faster for you based on your data.
```
DECLARE @Update INT
DECLARE @Batch INT
-- Total number of records in database
SELECT @Update = (
SELECT COUNT(id)
FROM [table] WITH (NOLOCK) -- be CAREFUL with this
WHERE [YourColumn] IS NOT NULL) -- optional filter; [YourColumn] is a placeholder (the original was dynamic SQL)
SELECT @Batch = 4000 --Batch update amount
WHILE (@Update > 0)
BEGIN
UPDATE TOP(@Batch) c
set
c.createdon=q.CreatedOn
,c.createdby=case when q.createdby is not null then q.createdby end
,c.modifiedon=q.modifiedon
,c.modifiedby=case when q.ModifiedBy is not null then q.ModifiedBy end
from crm_annotationbase c
join IncidentWorknote q
on c.annotationid=q.annotationid
SELECT @Update = @Update - @Batch; -- Reduce for next set
WAITFOR DELAY '000:00:00.400'; -- Allows for waiting transactions to process optional
END;
``` | What you are doing is wrong for two reasons:
1. Direct updates to a Dynamics CRM database is highly unsupported and can lead to several issues with your CRM instance (you need to use CRM Web Services to update the data)
2. `CreatedOn`, `CreatedBy`, `ModifiedOn` and `ModifiedBy` are system fields and they are always filled, they never contains null values. (in particular `CreatedOn` and `CreatedBy` are specified when the record is created and cannot be modified after, `ModifiedOn` and `ModifiedBy` are updated every time the record is updated)
As advised by Microsoft here:
<http://msdn.microsoft.com/en-us/library/gg328350.aspx#Unsupported>
**Unsupported Customizations**
*Data (record) changes in the Microsoft Dynamics CRM database using SQL commands or any technology other than those described in the Microsoft Dynamics CRM SDK.* | performance of isnull vs select case statement | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"dynamics-crm-2013",
""
] |
I have a table called `Leaves` which has `Employee ID`, `Leave Type` and `Date`. For example, if an employee with ID = 1234 applies for sick leave from 1-June-2014 to 5-June-2014, it will be stored in the `Leaves` table day by day, meaning that the following records will be added:
```
1234 sick leave 1-June-2014
1234 sick leave 2-June-2014
1234 sick leave 3-June-2014
1234 sick leave 4-June-2014
1234 sick leave 5-June-2014
```
This is considered one case. To clarify what I mean by a case: the total number of cases is how many leave requests have been applied for. For example:

What I need is to get the following information with a SQL statement (I should be able to specify a period: 1-January-2014 to 30-December-2014, for example):
Sick leave cases: 2
Escort leave cases: 2
Study leave cases: 1
I am using PostgreSQL 9.2. | This design is a bit strange because of the daily rows: if the same person has several escort leaves, you have to figure out how to tell the different cases apart.
For this particular case you can use something like this:
```
SELECT COUNT(*), leavetype
FROM (
SELECT leavetype
FROM Leaves
GROUP BY employee_id, leavetype
) t -- the derived table needs an alias in PostgreSQL
GROUP BY leavetype;
```
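If the same employee can have more than one case of the same type, a gaps-and-islands grouping handles it with the daily-row design (a sketch assuming consecutive dates form one case; column names follow the other snippets here):
```
SELECT leave_type, COUNT(*) AS cases
FROM (
  SELECT DISTINCT employee_id, leave_type,
         -- consecutive days share the same (date - row_number) value
         date - (ROW_NUMBER() OVER (PARTITION BY employee_id, leave_type
                                    ORDER BY date))::int AS grp
  FROM Leaves
  WHERE date BETWEEN '2014-01-01' AND '2014-12-30'
) islands
GROUP BY leave_type;
```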
My suggestion is to use case\_start and case\_end dates for one case row. | Perhaps you can try this:
```
select leave_type, count(*)
from (
select employee_id, leave_type
from leaves
where date between ...
group by employee_id, leave_type) t
group by leave_type;
``` | Count number of cases | [
"",
"sql",
"postgresql",
""
] |
I have to check whether the value of a particular column named `RESULT` is blank or not.
When I check with `RESULT IS NULL`, the query fails, but when I check `RESULT=''`, it works.
What is the difference between the two? Please explain.
```
"UPDATE RLS_TP_2012_03 a, RLS_TP_2012_03 b SET a.COMMENT=b.COMMENT where b.TCODE='T1199' and a.GROUPNAME='xyz' and a.HLABNO=b.HLABNO and a.RESULT =''; ";
"UPDATE RLS_TP_2012_03 a, RLS_TP_2012_03 b SET a.COMMENT=b.COMMENT where b.TCODE='T1199' and a.GROUPNAME='xyz' and a.HLABNO=b.HLABNO and a.RESULT is NULL; "
``` | 1. `NULL` is an absence of a value. An empty string is a value, but is just empty. `NULL` is special to a database.
2. `NULL` has no bounds, it can be used for `string`, `integer`, `date`, etc. fields in a database.
3. `NULL` isn't allocated any memory; a `string` with a `NULL` value is just a pointer pointing nowhere in memory. However, an empty string IS allocated a memory location, although the value stored there is `""`. | In tri-value logic, NULL should be used to represent "unknown" or "not applicable", while an empty column should represent "known but empty".
Some DBMS products don't follow this convention all the time but I consider them deficient. Yes, you know who you are, Oracle :-)
In addition, while NULL can be put into any column type (ignoring `not null` clauses for now), you cannot put an "empty" value into a non-character field (like `integer`). | Difference between NULL and Blank Value in Mysql | [
"",
"mysql",
"sql",
""
] |
I have a log table which records the run history of various background jobs.
Now I need to display the most recent run of each and every job along with some data.
Here is my solution:
```
SELECT BackgroundJobId, bjl.LogId, ExecStartTime, ExecEndTime, ErrorDescription, Debug
FROM BackgroundJobLog bjl
JOIN (
SELECT LogId, ROW_NUMBER() OVER (PARTITION BY BackgroundJobId ORDER BY ExecStartTime DESC) rowNumber
FROM BackgroundJobLog
WHERE BackgroundJobStatusId IN (1, 3)
) AS bjl2 ON bjl.LogId = bjl2.LogId AND bjl2.rowNumber = 1
```
It returns 157 rows as expected, each row containing a distinct `BackgroundJobId` with the information from the most recent run of that job.
However, the performance is a problem. Right now, that log table has about 25,000,000 rows satisfying the nested `SELECT` statement. It seems to be a terrible waste to join with 25,000,000 rows when all I need is the row with the most recent `ExecStartTime`.
So, I figured I could use the `MAX` aggregation window function. But for the life of me I do not understand how. The following statement:
```
SELECT BackgroundJobId, LogId, MAX(ExecStartTime) OVER (PARTITION BY BackgroundJobId) ExecStartTime
FROM BackgroundJobLog
WHERE BackgroundJobStatusId IN (1, 3)
```
attempts to return the same 25,000,000 rows. True, for the same `BackgroundJobId` the most recent `ExecStartTime` value is returned, but it is repeated as many times as there are rows with that `BackgroundJobId`! Of course, each row has its own `LogId`, whereas I want just the one row with the most recent `ExecStartTime` within each `BackgroundJobId`.
How can I do it efficiently?
**EDIT**
Guys, a nested select is a nested select. It makes little difference whether it is joined explicitly, selected from as a CTE, or used directly. As long as there is a nested select, the performance is abysmal.
**EDIT 2**
There is an index on the `BackgroundJobStatusId`:
```
CREATE NONCLUSTERED INDEX IX_BackgroundJobLog_BackgroundJobStatusId ON [BackgroundJobLog] ([BackgroundJobStatusId]) INCLUDE ([LogId],[BackgroundJobId],[ExecStartTime])
```
**EDIT 3**
The schema of the table is:
```
CREATE TABLE BackgroundJobLog
(
LogId uniqueidentifier NOT NULL,
BackgroundJobId int NOT NULL,
ExecStartTime datetime NULL,
ExecEndTime datetime NULL,
ErrorDescription ntext NULL,
BackgroundJobStatusId int NOT NULL,
Debug ntext NULL,
LogEntryId int IDENTITY(1,1) NOT NULL
CONSTRAINT PK_LogEntryId PRIMARY KEY CLUSTERED (LogEntryId),
CONSTRAINT IX_BackgroundJobLog UNIQUE NONCLUSTERED (LogId)
)
```
**EDIT 4**
Please find below the execution plan for Hamlet Hakobyan's answer:

**EDIT 5**
Please find below the execution plan for Kirill Zorin's answer:
 | For this query to work fast, you need two things:
* A list of distinct `BackgroundJobId`
* A composite index on `BackgroundJobLog (BackgroundJobId, ExecStartTime) INCLUDE (BackgroundJobStatusId)`
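Spelled out as DDL (the index name here is illustrative):
```
-- Key order (job, then start time) lets TOP 1 ... ORDER BY ExecStartTime DESC
-- be satisfied by a short backward range scan per job.
CREATE NONCLUSTERED INDEX IX_BackgroundJobLog_Job_ExecStart
    ON BackgroundJobLog (BackgroundJobId, ExecStartTime)
    INCLUDE (BackgroundJobStatusId);
```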
If you have a separate table with jobs, just use it:
```
SELECT bl.*
FROM job
CROSS APPLY
(
SELECT TOP 1
*
FROM BackgroundJobLog
WHERE BackgroundJobId = job.id
AND BackgroundJobStatusId IN (1, 3)
ORDER BY
ExecStartTime DESC
) bl
```
If you don't, you can create an indexed view to get such a list:
```
CREATE VIEW job
WITH SCHEMABINDING
AS
SELECT backgroundJobId, COUNT_BIG(*) cnt
FROM BackgroundJobLog
GROUP BY
backgroundJobId
GO
CREATE UNIQUE CLUSTERED INDEX
ux_job
ON job (backgroundJobId)
GO
```
then repeat the previous query, adding `NOEXPAND`:
```
SELECT bl.*
FROM job WITH (NOEXPAND)
CROSS APPLY
(
SELECT TOP 1
*
FROM BackgroundJobLog
WHERE BackgroundJobId = job.backgroundJobId
AND BackgroundJobStatusId IN (1, 3)
ORDER BY
ExecStartTime DESC
) bl
```
Alternatively, you might build such a list in a CTE:
```
WITH job (id) AS
(
SELECT MIN(BackgroundJobId)
FROM BackgroundJobLog
UNION ALL
SELECT (
SELECT backgroundJobId
FROM (
SELECT backgroundJobId,
ROW_NUMBER() OVER (ORDER BY backgroundJobId) rn
FROM BackgroundJobLog bl
WHERE bl.backgroundJobId > job.id
) q
WHERE rn = 1
)
FROM job
WHERE id IS NOT NULL
)
SELECT bl.*
FROM job
CROSS APPLY
(
SELECT TOP 1
*
FROM BackgroundJobLog
WHERE BackgroundJobId = job.id
AND BackgroundJobStatusId IN (1, 3)
ORDER BY
ExecStartTime DESC
) bl
WHERE job.id IS NOT NULL
``` | I don't see why the JOIN is necessary.
```
;WITH CTE
AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY BackgroundJobId ORDER BY ExecStartTime DESC) rn
FROM BackgroundJobLog
WHERE BackgroundJobStatusId IN (1, 3)
)
SELECT BackgroundJobId
, LogId
, ExecStartTime
, ExecEndTime
, ErrorDescription
, Debug
FROM CTE
WHERE rn = 1
``` | Using the MAX aggregation window function correctly in SQL Server 2012 | [
"",
"sql",
"sql-server",
""
] |
This is rather silly, but I really don't know what the problem is here. I have a table 'profiles' with a column 'country'. I filled this table from a tab-separated-value file, and the table view seems fine.
Now when executing this query:
```
select * from profiles where country = 'Sweden'
```
Nothing gets returned, although the table has more than a hundred entries with country 'Sweden', and I double-checked my spelling.
But when executing this query:
```
select * from profiles where country REGEXP 'Sweden'
```
it returns results as expected.
What's the cause of this, and how do I fix it?
Here is some data from the file used to fill the table:
```
0 f Germany
1 f Canada
2 Germany
3 m Mexico
4 m United States
5 m United Kingdom
6 m Finland
```
Thanks for reading | Please try with the `LIKE` operator:
`select * from profiles WHERE country LIKE '%Sweden%'` | Try with the [TRIM](http://www.w3resource.com/mysql/string-functions/mysql-trim-function.php) function if there is any leading or trailing space in the country field:
```
select * from profiles where TRIM(country) = 'Sweden'
```
OR
```
select * from profiles where TRIM(country) LIKE 'Sweden'
```
OR
```
select * from profiles where country LIKE '%Sweden%'
```
OR check for carriage return values in the field:
```
SELECT * FROM profiles WHERE country REGEXP "\r\n";
``` | Simple MySql select statement returns 0 rows | [
"",
"mysql",
"sql",
"database",
""
] |
I have this stored procedure:
```
CREATE PROCEDURE ListIds
@ObjectiveId int,
@SubjectId int
AS
BEGIN
SELECT Question.QuestionUId
FROM Objective
WHERE Objective.objectiveId = @ObjectiveId
AND Objective.subjectId = @SubjectId
END;
```
How can I make it so that if the stored procedure's @ObjectiveId is 0, it does not select **based on the objectiveId**? However, I always want to select on the Subject Id. | ```
SELECT Question.QuestionUId
FROM Objective
WHERE (Objective.objectiveId = @ObjectiveId OR @ObjectiveId = 0)
AND Objective.subjectId = @SubjectId
``` | Try this:
```
CREATE PROCEDURE ListIds
@ObjectiveId INT,
@SubjectId INT
AS
BEGIN
SELECT QuestionUId
FROM Objective
WHERE Objective.subjectId = @SubjectId AND
(ISNULL(@ObjectiveId,0) = 0 OR Objective.objectiveId = @ObjectiveId)
END;
``` | How can I make it so a where clause with a value of 0 makes it not check the where in a SQL Server stored procedure? | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
My goal is to avoid deadlocks, so I centralized all locks in the same place, ordering by table name and then by ID ascending:
```
SELECT * FROM table1 WHERE ID = 1 FOR UPDATE
SELECT * FROM table1 WHERE ID = 2 FOR UPDATE
SELECT * FROM table1 WHERE ID = 3 FOR UPDATE
SELECT * FROM table1 WHERE ID = 4 FOR UPDATE
SELECT * FROM table2 WHERE ID = 1 FOR UPDATE
SELECT * FROM table2 WHERE ID = 2 FOR UPDATE
SELECT * FROM table2 WHERE ID = 3 FOR UPDATE
SELECT * FROM table2 WHERE ID = 4 FOR UPDATE
```
but I wonder if I can do the same using IN() (which is probably a bit faster)
```
SELECT * FROM table1 WHERE ID IN(1,2,3,4) FOR UPDATE
SELECT * FROM table2 WHERE ID IN(1,2,3,4) FOR UPDATE
```
Will the rows be locked in the exact order specified by the IN() operand, or will the locks be applied using the "natural table ordering" instead?
ID is a primary auto\_increment field in all tables and I don't "reuse" old deleted IDs (so in theory the natural ordering should always be ascending)
thanks in advance!
---
added the update:
```
UPDATE table1 SET t1="hello1" WHERE ID = 1;
UPDATE table1 SET t1="hello2" WHERE ID = 2;
UPDATE table1 SET t1="hello3" WHERE ID = 3;
UPDATE table1 SET t1="hello4" WHERE ID = 4;
UPDATE table2 SET t2="hello1" WHERE ID = 1;
UPDATE table2 SET t2="hello2" WHERE ID = 2;
UPDATE table2 SET t2="hello3" WHERE ID = 3;
UPDATE table2 SET t2="hello4" WHERE ID = 4;
...
COMMIT;
``` | Even though it's a little unclear, part of the answer to your question is stated in [MySQL's documentation](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_in):
> **expr IN (value,...)**
>
> Returns 1 if expr is equal to any of the values in the IN list, else
> returns 0. *If all values are constants, they are evaluated according
> to the type of expr and sorted. The search for the item then is done
> using a binary search.*
Here's what you should take from it: *if all values in the list are constants, they are sorted and then compared using a binary search.*
So in the end, it doesn't matter if you have sorted the values or not, because MySQL will sort them even if they are not. Nevertheless this wasn't your question. Now let's get back to your question.
First of all, deadlocks are absolutely possible in MySQL when you are using InnoDB, and they happen all the time (at least to me). The strategy you've chosen to prevent deadlocks is a valid one (acquiring locks according to some order). But unfortunately, I don't think it's going to work in MySQL. You see, even though your query clearly states which records you want locked, the truth is that [they are not the only records that will be locked:](http://dev.mysql.com/doc/refman/5.0/en/innodb-locks-set.html)
> A locking read, an UPDATE, or a DELETE generally set record locks on
> every index record that is scanned in the processing of the SQL
> statement. It does not matter whether there are WHERE conditions in
> the statement that would exclude the row. InnoDB does not remember the
> exact WHERE condition, but only knows which index ranges were scanned.
> The locks are normally next-key locks that also block inserts into the
> "gap" immediately before the record. However, gap locking can be
> disabled explicitly, which causes next-key locking not to be used.
So it's hard to say which records are actually locked. Now consider MySQL searching the index for the first value in your list. As I've just said, a few more records might be locked along the way while MySQL is scanning the index. And since scanning indices does not happen in order (or at least that's what I believe), records would get locked regardless of their order, which means that deadlocks are not prevented.
The last part is my own understanding of the situation and I've actually never read that anywhere before. But in theory it sounds right. Yet I really would like someone to prove me wrong (just so I can trust MySQL even more). | Rows are locked in the order they are read, so no order is guaranteed. Even if you add an `ORDER BY` clause, the rows will be locked as they are read, not as they are ordered. [Here is another good question](https://stackoverflow.com/questions/5694658/how-many-rows-will-be-locked-by-select-order-by-xxx-limit-1-for-update) with some great answers | could a rows lock made with IN(,,,) generate dead locks? | [
"",
"mysql",
"sql",
"locking",
"innodb",
"deadlock",
""
] |
I have the following SQL Query
```
UPDATE ea
SET ea.isholiday = 1
FROM employee_attendance ea
JOIN setup_holiday h ON h.day = DATEPART(dd,ea.timestamp)
AND h.month = DATEPART(mm,ea.timestamp)
WHERE ea.isactive = 1
```
I need to add other filter that is:
```
AND h.year = DATEPART(yyyy, ea.timestamp)
```
But it's not that easy, because I don't want to apply this filter if `year` = 0
Any clue? | You can use a CASE expression and a trick: when the predicate should be ignored, you compare the right-hand side to itself (which is always true); otherwise you apply your predicate. Like this:
```
UPDATE ea
SET ea.isholiday = 1
FROM employee_attendance ea
JOIN setup_holiday h ON h.day = DATEPART(dd,ea.timestamp)
AND h.month = DATEPART(mm,ea.timestamp)
WHERE ea.isactive = 1
AND (CASE WHEN h.year = 0 THEN DATEPART(yyyy, ea.timestamp)
ELSE h.year END) = DATEPART(yyyy, ea.timestamp)
``` | Just add the condition using `or`:
```
UPDATE ea
SET ea.isholiday = 1
FROM employee_attendance ea JOIN
setup_holiday h
ON h.day = DATEPART(dd,ea.timestamp) AND
h.month = DATEPART(mm,ea.timestamp) AND
(h.year = DATEPART(yyyy, ea.timestamp) or h.year = 0)
WHERE ea.isactive = 1
``` | SQL conditional join or where statement | [
"",
"sql",
"sql-server",
""
] |
I have a data table in my SQL database and I want to update some entries in that table on a regular 1-day interval. How can I do it? | You can try this with the MySQL `Event Scheduler`:
```
CREATE EVENT myevent
ON SCHEDULE EVERY 1 DAY
DO
UPDATE myschema.mytable SET mycol = mycol + 1;
```
I hope this may be useful for you,
since I don't know your exact scenario. Note that the MySQL event scheduler must be enabled (`SET GLOBAL event_scheduler = ON;`) for events to run. | If this job is on a regular basis, try using [Scheduler Events](http://www.sitepoint.com/how-to-create-mysql-events/). | How To Update a Sql Table On Regular Interval | [
"",
"mysql",
"sql",
""
] |
Regarding the example data below, it is `ORDER BY Type` but each resulting group is `ORDER BY MIN(Valu)` without any aggregation.
```
Type Valu
---- ----
B 12
B 88
B 200
B 200
A 56
A 100
A 100
C 70
C 70
```
So all records of Type B are listed first because that group has a record containing the lowest Valu (12). Then Type A records because of the record with Valu 56 and then Type C records because of the record with Valu(s) 70.
Is there any way to do this type of sorting in SQL?
To explain further, if I were to do this programmatically, I might first sort records by Type and secondarily by Valu descending. But then I would need to sort groups of records by the Valu of the first record of each group.
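The two-phase programmatic sort described here can be sketched in a few lines of Python (using the sample data from above; sorting ascending within groups for illustration):

```python
from itertools import groupby

rows = [("B", 12), ("B", 88), ("B", 200), ("B", 200),
        ("A", 56), ("A", 100), ("A", 100), ("C", 70), ("C", 70)]

# Phase 1: sort by Type, then Valu, so rows within each group are ordered.
rows.sort(key=lambda r: (r[0], r[1]))

# Phase 2: split into groups by Type, then order the groups themselves
# by the Valu of each group's first (i.e. smallest) record.
groups = [list(g) for _, g in groupby(rows, key=lambda r: r[0])]
groups.sort(key=lambda g: g[0][1])

result = [row for g in groups for row in g]
print(result)  # B rows first (min 12), then A (min 56), then C (min 70)
```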
So something like:
```
SELECT Type, Valu FROM data ORDER BY Type, Valu
```
gets me most of the way there. But then I want to sort the groups by lowest Valu.
This might be compared to "threading" posts in a forum where posts with the same subject are grouped together but the groups are then sorted by the date of the latest post within the group (although threading forum posts is more complex because sorting is done by reply order whereas in my example I just want to sort numerically by Valu).
Is there any way to do this in SQL? If not, and I need to do it programmatically, is there at least a query that will yield the latest records and any records of the same Type but with a `LIMIT` so that I do not have to retrieve all records? If there are two records of the same Type but one with a very low Valu and the other with a very high Valu, a LIMIT clause might exclude the record with the high Valu. | Get all Types and their min values in a subquery and rejoin to the table. Then you can order by each Type group's min value.
```
SELECT
table.Type,
table.value
FROM (
SELECT
Type,
min(value) AS minValue
FROM table
GROUP BY Type
) AS sub
INNER JOIN table
ON table.Type = sub.Type
ORDER BY sub.minValue ASC, table.Type ASC, table.value ASC
``` | You can try using *analytic functions* if your DBMS *supports* them (Oracle, MS SQL, Postgres...)
```
select Type,
Valu
from MyTable
order by Min(Valu) over (partition by Type),
Type,
Valu
``` | SQL ORDER BY but also by MIN of different col (aka threading)? | [
"",
"mysql",
"sql",
"sorting",
"sql-order-by",
""
] |
> #1109 - Unknown table 'cal\_events\_' in field list
I have a database groupofficecom, and I have two tables in it `cf_cal_events` and `cal_events_`:
The primary key of `cal_events_` is the column `id` which is auto incremented
The primary key of `cf_cal_events` is the column `model_id` which is **NOT** auto incremented
I want to add elements to `cf_cal_events` and each element from this table has one equivalent element from the other table, so I want to get `model_id` from `id`.
I'm trying to execute the following code but it isn't working.
```
REPLACE INTO `cf_cal_events`(`model_id`)
SELECT `cal_events_`.`id`;
```
I tried removing the backticks, and I tried adding the database name, but it still doesn't find the table `cal_events_` in the field list. | ```
REPLACE INTO `cf_cal_events`(`model_id`)
SELECT `id`
FROM `cal_events_`
``` | Remove table name `cal_events_` from fieldName and add `cal_events_` to `FROM` clause like this
```
REPLACE INTO `cf_cal_events`(`model_id`)
SELECT `id`
FROM `cal_events_`;
``` | #1109 - Unknown table '...' in field list | [
"",
"mysql",
"sql",
""
] |
I have finished all my changes to a database table in sql server management studio 2012, but now I have a large gap between some values due to editing. **Is there a way to keep my data, but re-assign all the ID's from 1 up to my last value?**
I would like this cleaned up as I populate dropdownlists with these values and then I make interactions with my database with the assumption that my dropdownlist index and the table's ID match up, which is not the case right now.
My current DB has a large gap from 7 to 28, I would like to shift everything from 28 and up, back down to 8, 9, 10, 11, ect... so that my database has NO gaps from 1 and onward.

If the solution is tricky please give me some steps as I am new to SQL.
Thank you! | Yes, there are any number of ways to "close the gaps" in an auto generated sequence. You say you're new to SQL so I'll assume you're also new to relational concepts. Here is my advice to you: don't do it.
The ID field is a surrogate key. There are several aspects of surrogates one must be mindful of when using them, but the one I want to impress upon you is,
```
-- A surrogate key is used to make the row unique. Other than the guarantee that
-- the value is unique, no other assumptions may be made concerning the value.
-- In particular, no meaning may be derived from the value as to the contents of
-- the row or the row's relationship to any other row.
```
You have designed your app with a built-in assumption of the value of the key field (that they will be consecutive). Already it is causing you problems. Do you really want to go through this every time you make changes to the table? And suppose a future feature requires you to filter out some of the choices according to an option the user has selected? Or enable the user to specify the order of the items? Not going to be easy. So what is the solution?
You can create an additional (non-visible) field in the dropdown list that contains the key value. When the user makes a selection, use that index to get the key value of the selection and *then* go out to the database and get whatever additional data you need. This will work if you populate the list from the entire table or just select a few according to some as yet unknown filtering criteria or change the order in any way.
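The idea can be sketched independently of any UI framework (all names here are hypothetical): keep a hidden key alongside each display value, and translate the selected index into that key before touching the database.

```python
# Each dropdown entry carries (display text, hidden surrogate key).
# The visible position never has to match the database ID.
items = [("Red", 3), ("Green", 7), ("Blue", 28)]  # note the gap 7 -> 28

def key_for_selection(selected_index):
    # Translate the UI index into the surrogate key before querying the DB.
    return items[selected_index][1]

print(key_for_selection(2))  # 28, despite sitting at index 2
```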
VoilΓ . You never have this problem again, no matter how often you add and remove rows in the table.
However, on the off chance that you are as stubborn as me (not likely!) or just refuse to listen to the melodious voice of reason and experience, then try this:
* Create a new table exactly like the old table, including auto incrementing PK.
* Populate the new table using a Select from the old table. You can specify any order you want.
* Drop the old table.
* Rename the new table to the old table name.
You will have to drop and redefine any FKs from other tables. But this entire process
can be placed in a script because if you do this once, you'll probably do it again.
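As a sketch of that copy-drop-rename process (shown here with SQLite's AUTOINCREMENT for brevity; in SQL Server you would use an IDENTITY column and `sp_rename` instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
con.executemany("INSERT INTO t (id, name) VALUES (?, ?)",
                [(1, "a"), (7, "b"), (28, "c")])  # gaps left by deletions

# New table with the same shape; INSERT ... SELECT re-numbers the rows.
con.execute("CREATE TABLE t_new (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
con.execute("INSERT INTO t_new (name) SELECT name FROM t ORDER BY id")
con.execute("DROP TABLE t")
con.execute("ALTER TABLE t_new RENAME TO t")

print(con.execute("SELECT id, name FROM t ORDER BY id").fetchall())
# [(1, 'a'), (2, 'b'), (3, 'c')] -- consecutive again
```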
Now all the values are consecutive. Until you edit the table again... | **You should refactor the code for your dropdown list and not the PK of the table.**
If you do not agree, you can do one of the following:
1. Insert another column holding the dropdown's "order of appearance", make a unique index on it and fill this by hand (or programmatically).
2. Replace the SERIAL with an INT would work, make a unique index on the column and fill this by hand (or programmatically).
3. Remove the large ids and reseed your serial - the code depending on your DBMS | rebuild/refresh my table's PK list - gap in numbers | [
"",
"sql",
"sql-server",
"primary-key",
""
] |
I have the following query:
```
SELECT top 2500 *
FROM table a
LEFT JOIN table b
ON a.employee_id = b.employee_id
WHERE left(a.employee_rc,6) IN
(
SELECT employeeID, access
FROM accesslist
WHERE employeeID = '#client.id#'
)
```
The sub select in the `where` clause can return one or several access values, ex:
* js1234 BLKHSA
* js1234 HDF48R7
* js1234 BLN6
In the primary `where` clause I need to be able to change the integer expression from 6 to 5 or 4 or 7 depending on the length of the values returned in the sub select. I am at a loss as to whether this is the right way to go about it. I have tried using OR statements, but they really slow down the query. | Try using `exists` instead:
```
SELECT top 2500 *
FROM table a LEFT JOIN
table b
ON a.employee_id = b.employee_id
WHERE EXISTS (Select 1
FROM accesslist
WHERE employeeID = '#client.id#' and
a.employee_rc like concat(employeeID, '%')
) ;
```
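The point of building the pattern from the column is that the prefix length no longer matters. A quick way to see this (a sketch in SQLite, where string concatenation is `||` rather than SQL Server's `concat`; the table and values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accesslist (employeeID TEXT)")
con.execute("CREATE TABLE emp (employee_rc TEXT)")
con.executemany("INSERT INTO accesslist VALUES (?)",
                [("BLKH",), ("HDF48R7",)])  # prefixes of different lengths
con.executemany("INSERT INTO emp VALUES (?)",
                [("BLKH-001",), ("HDF48R7-9",), ("ZZZ-1",)])

# Each employee_rc is matched against a prefix of whatever length the
# accesslist row happens to hold.
rows = con.execute("""
    SELECT employee_rc FROM emp e
    WHERE EXISTS (SELECT 1 FROM accesslist a
                  WHERE e.employee_rc LIKE a.employeeID || '%')
    ORDER BY employee_rc
""").fetchall()
print(rows)  # [('BLKH-001',), ('HDF48R7-9',)]
```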
I don't see how your original query worked. The subquery is returning two columns and that normally isn't allowed in SQL for an `in`. | Move the subquery to a JOIN:
```
SELECT TOP 2500 *
FROM table a
LEFT JOIN table b ON a.employee_id = b.employee_id
LEFT JOIN accesslist al ON al.access LIKE concat('%', a.employee_id)
WHERE al.employeeID = '#client.id#'
```
Like Gordon, I don't quite see how your query worked, so I'm not quite sure if it should be `access` or `employeeID` which is matched. | can I use a variable for the integer expression in a left sql function | [
"",
"sql",
"sql-server",
"coldfusion",
""
] |
I have a SQL Query (simplified from real use):
```
SELECT MIN(cola), colb FROM tbl GROUP BY colb;
```
But actually, I don't need the minimum value; any cola value will do. It's only used to show an example value from the group.
At the moment PG has to do the group and then sort each group by cola to find the minimum value in the group, but this is slow because there's a lot of records in each group.
Does Postgres have some kind of FIRST(cola) or ANY(cola) that would just return whatever cola it sees first (like MySQL does when you don't use an aggregate function) or without needing to sort / read cola from every row? | I think using `DISTINCT ON()` with no order by will achieve what you are after:
```
SELECT DISTINCT ON (ColB) ColA, ColB
FROM tbl;
```
**[Example on SQL Fiddle](http://sqlfiddle.com/#!15/63fe9/1)**
The [docs state](http://www.postgresql.org/docs/9.0/static/sql-select.html)
> DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for ORDER BY (see above). Note that the "first row" of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first.
However, with no example data to work on I can't actually compare if this will outperform using `MIN` or any other aggregate function. | This statement:
> At the moment PG has to do the group and then sort each group by cola
> to find the minimum value in the group, but this is slow because
> there's a lot of records in each group.
May logically describe what Postgres does, but it does not explain what is actually going on.
Postgres -- as with any database that I'm familiar with -- will keep a "register" for the minimum value. As new data comes in, it will compare the value in the next row to the minimum. If the new value is smaller, then it will be copied in. This, incidentally, is why `min()`, `max()`, `avg()`, and `count()` are all faster than `count(distinct)`. For the latter, the list of values within a group must be maintained.
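The "register" idea is easy to mimic outside a database: constant memory for `min()`, growing memory for `count(distinct)` (a plain-Python sketch):

```python
def streaming_min(values):
    register = None
    for v in values:                 # one comparison per row, constant memory
        if register is None or v < register:
            register = v
    return register

def count_distinct(values):
    seen = set()                     # must remember every value seen so far
    for v in values:
        seen.add(v)
    return len(seen)

data = [5, 3, 9, 3, 1, 9]
print(streaming_min(data), count_distinct(data))  # 1 4
```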
The `distinct on` approach may be faster than the `group by`. The reason, however, is not because the database engine is sorting all values for a given `colb` to get the minimum. | Get ANY(col) instead of MIN(col) from a group | [
"",
"sql",
"postgresql",
"optimization",
"psql",
""
] |
Hey guys, I am new to MySQL development and am trying to inner join both of my tables. My code:
```
SELECT Customers.name, oop.id
FROM Customers(SELECT name,id FROM Orders) AS oop
INNER JOIN oop
ON Customers.name=oop.id
ORDER BY Customers.name;
```
When I tried this code it showed me an error. I know this can easily be done with another method, but I have seen this method on a website so I tried it. Am I making any error in my code?
Hope you guys can help me out. Any help would be appreciated. Thanks! | Your syntax is wrong; it should be something like this:
```
SELECT Customers.name, oop.id
FROM Customers
INNER JOIN (SELECT name,id FROM Orders) AS oop
ON Customers.name=oop.id
ORDER BY Customers.name;
``` | You don't actually need the second select (although you can use it as Ankit Bajpai's answer shows).
```
SELECT Customers.name, oop.id
FROM Customers
INNER JOIN orders as oop
ON Customers.name=oop.id
ORDER BY Customers.name;
``` | Inner join with mysql not works | [
"",
"mysql",
"sql",
""
] |
I have a SELECT TOP query with an ORDER BY clause that does not return the correct result. Below is a sample query and its output.
Any suggestion, workaround, or solution?
Query:
```
CREATE TABLE #TestTop
(
TopID INT,
TopStr VARCHAR(MAX)
)
INSERT INTO #TestTop VALUES(1749, '');
INSERT INTO #TestTop VALUES(1750, '');
INSERT INTO #TestTop VALUES(1752, 'z');
INSERT INTO #TestTop VALUES(1739, 'a');
SELECT * FROM #TestTop ORDER BY TopStr ASC
SELECT TOP 1 * FROM #TestTop ORDER BY TopStr ASC
SELECT TOP 4 * FROM #TestTop ORDER BY TopStr ASC
DROP TABLE #TestTop;
```
Result:
```
[SELECT *]
TopID TopStr
----------- -----------
1749
1750
1739 a
1752 z
[SELECT TOP 1]
TopID TopStr
----------- --------------
1750
[SELECT TOP 4]
TopID TopStr
----------- --------------
1750
1749
1739 a
1752 z
``` | Try this; it works in SQL Server:
```
SELECT TOP 4 * FROM #TestTop
order by ROW_NUMBER() OVER(ORDER BY (CASE WHEN TopStr = '' THEN NULL ELSE TopStr END))
```
OR
```
SET ROWCOUNT 4
SELECT * FROM #TestTop ORDER BY TopStr ASC
SET ROWCOUNT 0
``` | You have not specified complete ordering, so the database (whichever one it is) is entitled to return rows that match on TopStr in any order it wants. Just because your first query happens to give the ordering you want is luck, and that could change based on a database upgrade / load on the system / many other factors.
You need to add the TopId into the ordering list if you want results to be ordered by that column as well as TopStr. | SQL: SELECT TOP with ORDER BY clause does not return correct result | [
"",
"sql",
"sql-server",
""
] |
There's a big discussion going on at my office about the order of the joined columns in a SQL join. I'm finding it hard to explain, so I'll just present the two SQL statements. Which one is better, taking into account SQL best practices?
```
SELECT a.Au_id
FROM AUTHORS a
INNER JOIN TITLEAUTHOR ta
ON a.Au_id = ta.Au_id
INNER JOIN TITLES t
ON ta.Title_id = t.Title_id
WHERE t.Title LIKE '%Computer%'
```
OR
```
SELECT a.Au_id
FROM AUTHORS a
INNER JOIN TITLEAUTHOR ta
ON ta.Au_id = a.Au_id
INNER JOIN TITLES t
ON t.Title_id = ta.Title_id
WHERE t.Title LIKE '%Computer%'
```
So, in a join, in the ON part, does it matter whether you write `A.x = B.y or B.y = A.x`? | The best practice here is to choose one and stick with it **within the team**. Personally, I prefer the `FROM a JOIN b ON b.col = a.col` because it seems cleaner to me. | SQL was designed to be readable as usual English text. In a usual text, when we want to specify any mentioned object, we say something like "An object. This object is like that object, this object is related to that object, and this object has those properties." We do not say "That object is like this object, and that object is related this object, and those properties have this object".
So, when we are adding records of TableB to a query from TableA, we need to specify what a record from TableB must have to satisfy the current record from TableA. So we must say "we need those records from TableB which have an id equal to TableA.ID", i.e., `FROM TableA a JOIN TableB b ON b.col = a.col`. We must write it exactly in this order, to make this relation easy to read and understand.
And I'm very sad that almost all SQL documentation does not provide any kind of reasoning for going one direction or the other on this question, but has a lot of examples that write the equality arguments the wrong way around relative to the order in which the tables appear in the query. | Best practices for the order of joined columns in a sql join? | [
"",
"sql",
""
] |
I have a table of `beats` and for each beat there are multiple rows containing different prices in the `pricing` table.
```
SELECT b.id, b.name,
(SELECT p.price FROM pricing as p WHERE p.license = 1 AND p.beat_id = b.id) as price_1,
(SELECT p.price FROM pricing as p WHERE p.license = 2 AND p.beat_id = b.id) as price_2
FROM beats as b
WHERE b.added > 0
AND b.active = 1
AND b.deleted = 0
AND price_1 > 0
ORDER BY b.id DESC
LIMIT 50
```
I'm trying to make sure a `beat` is only retrieved when the `price_1` is greater than 0.
This doesn't work because you can't use the result of a nested SQL statement in a `WHERE` clause, but I've tried `HAVING price_1 > 0` and this doesn't work either.
How can I test `price_1` and `price_2`? | You can move that condition to a `having` clause. This is a feature of MySQL and not supported by other databases:
```
SELECT b.id, b.name,
(SELECT p.price FROM pricing as p WHERE p.license = 1 AND p.beat_id = b.id) as price_1,
(SELECT p.price FROM pricing as p WHERE p.license = 2 AND p.beat_id = b.id) as price_2
FROM beats as b
WHERE b.added > 0 AND b.active = 1 AND b.deleted = 0
GROUP BY b.producer
HAVING price_1 > 0
ORDER BY b.id DESC
LIMIT 50;
``` | You can use a JOIN:
```
SELECT b.id, b.name, p1.price AS price_1, p2.price AS price_2
FROM beats AS b
JOIN pricing AS p1 ON p1.beat_id = b.id
JOIN pricing AS p2 ON p2.beat_id = b.id
WHERE b.added > 0 AND b.active = 1 AND b.deleted = 0
AND p1.license = 1 AND p1.price > 0
AND p2.license = 2
GROUP BY b.producer
ORDER BY b.id DESC
LIMIT 50
``` | Result of nested SQL statements in WHERE clause | [
"",
"mysql",
"sql",
""
] |
I have a stored procedure that searches across multiple columns in a combined table.
It works; however, it takes 15 seconds to search for a value in that combined table. The table takes 9 seconds to load, so I'm not sure; maybe it's because my table is too big?
So I'm just wondering if there's a way to make this query run faster.
This is my stored procedure:
```
create procedure LRMWEB_Search
@input nvarchar(1500)
AS
SET NOCOUNT ON;
SELECT tr.ResourceID ,
tr.ProjectFile,
tr.ResourceFile,
tr.ResourceName,
trt.Culture,
trt.TranslatedFlag,
trt.TranslatedValue,
tr.Comments,
tr.IsApproved
FROM tblResourcesTranslated_NEW trt
INNER JOIN tblResources_NEW tr ON trt.ResourceID = tr.ResourceID
where tr.ResourceID like '%'+ @input + '%'
OR tr.ProjectFile like '%'+ @input + '%'
OR tr.ResourceFile like '%'+ @input + '%'
OR tr.ResourceName like '%'+ @input + '%'
OR tr.ResourceValue like '%'+ @input + '%'
OR tr.Comments like '%'+ @input + '%'
OR trt.Uid like '%'+ @input + '%'
OR trt.TranslatedValue like '%'+ @input + '%'
;
``` | Any use of `like` precludes an index, unless it has a fixed prefix with a wildcard suffix, such as `where foo like 'bar%'`. Your `like` expressions (e.g., `'%xxx%'` ) do not meet that requirement.
As a result, while the join criteria may well have a covering index, nothing else does and so a table scan of the join tables is required.
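You can watch a planner make exactly this decision; here is a sketch using SQLite's `EXPLAIN QUERY PLAN` (the wording of the plan output differs between engines, but SQL Server behaves analogously for this case):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NOCASE collation so the (case-insensitive) LIKE can use the index at all.
con.execute("CREATE TABLE t (name TEXT COLLATE NOCASE)")
con.execute("CREATE INDEX ix_name ON t(name)")

def plan(sql):
    # Flatten the EXPLAIN QUERY PLAN rows into one searchable string.
    return " ".join(str(r) for r in con.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT * FROM t WHERE name LIKE 'bar%'"))   # SEARCH: index range scan
print(plan("SELECT * FROM t WHERE name LIKE '%bar%'"))  # SCAN: every row examined
```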
In a nutshell, there is no way to fix performance outside of either
* rethinking what you're doing, or
* using something like a full text search | ```
SELECT tblResources_NEW.ResourceID
,tblResources_NEW.ProjectFile
,tblResources_NEW.ResourceFile
,tblResources_NEW.ResourceName
,tblResourcesTranslated_NEW.Culture
,tblResourcesTranslated_NEW.TranslatedFlag
,tblResourcesTranslated_NEW.TranslatedValue
,tblResources_NEW.Comments
,tblResources_NEW.IsApproved
FROM
(
SELECT
tblResources_NEW.ResourceID
,tblResources_NEW.ProjectFile
,tblResources_NEW.ResourceFile
,tblResources_NEW.ResourceName
,tblResources_NEW.Comments
,tblResources_NEW.IsApproved
FROM tblResources_NEW
WHERE
tblResources_NEW.ResourceID like '%'+ @input + '%'
OR tblResources_NEW.ProjectFile like '%'+ @input + '%'
OR tblResources_NEW.ResourceFile like '%'+ @input + '%'
OR tblResources_NEW.ResourceName like '%'+ @input + '%'
OR tblResources_NEW.ResourceValue like '%'+ @input + '%'
OR tblResources_NEW.Comments like '%'+ @input + '%'
) AS tblResources_NEW
INNER JOIN
(
SELECT
tblResourcesTranslated_NEW.ResourceID
,tblResourcesTranslated_NEW.Culture
,tblResourcesTranslated_NEW.TranslatedFlag
,tblResourcesTranslated_NEW.TranslatedValue
FROM tblResourcesTranslated_NEW
WHERE
tblResourcesTranslated_NEW.Uid like '%'+ @input + '%'
OR tblResourcesTranslated_NEW.TranslatedValue like '%'+ @input + '%'
) AS tblResourcesTranslated_NEW ON tblResourcesTranslated_NEW.ResourceID=tblResources_NEW.ResourceID
``` | How to make my search for multiple columns procedure more efficient? | [
"",
"sql",
"sql-server",
"performance",
""
] |
I have this table:
```
create table teams (team char(1) primary key, players text);
insert into teams values('A', 'Jhon');
insert into teams values('B', 'Mark');
```
Now, how do I add the player 'Carl' in team 'A'?
Maybe the column 'players' should be something like a list? | You would do:
```
insert into teams values('A', 'Carl');
```
*after* you remove the primary key constraint.
Actually, what you really want is:
```
create table TeamPlayers (
TeamPlayerId int auto_increment,
team char(1),
players text
);
```
Then you do the inserts that you want. This is a junction table (sort of). It suggests that you also want a `Teams` table with one row per team and a `Players` table with one row per player. Depending on the application, those tables may not be necessary. | You have made team the primary key, so you can't duplicate it.
Create an id column, make it the primary key, and then you can add as many members as you want to team A. | Add (not update) fields into column sql using insert | [
"",
"mysql",
"sql",
""
] |
I have query as below:
```
select
lower(convert(varchar(10), GETDATE(), 101) +
' '+
substring(convert(varchar(20), GETDATE(), 22),9,Len(GETDATE())))
```
**In this query i am getting result as:**
```
06/27/2014 6:23:57 pm
```
I just want to remove seconds (i.e. 57) from above.
My intentded result is:
```
06/27/2014 6:23 pm
```
How can I obtain it?
I tried adding Length in the query as:
```
select
lower(convert(varchar(10), GETDATE(), 101) +
' '+
substring(convert(varchar(20), GETDATE(), 22),9,Len(GETDATE())+1))
```
But it didn't help.
Please help me. | Try the below:
```
SELECT CAST(DATEPART(DD,GETDATE()) AS VARCHAR)+'/'
+CAST(DATEPART(MM,GETDATE()) AS VARCHAR)
+'/'+CAST(DATEPART(YYYY,GETDATE()) AS VARCHAR)
+' '+CAST(DATEPART(HH,GETDATE()) AS VARCHAR)
+':'+CAST(DATEPART(MI,GETDATE()) AS VARCHAR)
```
also look at this [Custom Date/Time formatting in SQL Server](https://stackoverflow.com/questions/202243/custom-date-time-formatting-in-sql-server) | Try this!
```
select
lower(convert(varchar(10), GETDATE(), 101) +
' '+
LEFT((substring(convert(varchar(20), GETDATE(), 22),9,Len(GETDATE())+1)), 6)
+
(RIGHT(substring(convert(varchar(20), GETDATE(), 22),9,Len(GETDATE())+1),2)))
``` | Remove seconds from datetime | [
"",
"sql",
"database",
"sql-server-2008-r2",
""
] |
I know I should try to avoid while loops in SQL, but I could not get it done. Can someone please suggest a better way to achieve the same result without using a while loop? Here is my code:
```
begin transaction
declare @count int;
set @count = 1;
declare @Id bigint;
set @Id = 3781543256;
while @count <=35000
begin
INSERT INTO [Table_name]
([Column1]
,[Column2]
,[Column3]
,[Column4]
,[Column5])
VALUES
(7,
@Id,
'20130909 16:42:43.157',
0,
NEWID())
set @Id = @Id+1;
set @count = @count+1;
end
``` | ```
INSERT INTO [Table_name] (
[Column1]
,[Column2]
,[Column3]
,[Column4]
,[Column5]
)
SELECT TOP 35000
7
,@Id + ROW_NUMBER() OVER (ORDER BY objects1.object_id)
,'20130909 16:42:43.157'
,0
,NEWID()
FROM sys.objects AS objects1
CROSS JOIN sys.objects AS objects2
``` | You need a numbers table. It is handy to have around anyway. Assuming you have it, you can generate the set of rows you want to insert and do it in one go:
```
INSERT ...
SELECT 7, Numbers.ID - 1 + 3781543256, ..., NEWID()
FROM Numbers
WHERE Numbers.ID BETWEEN 1 AND @count
```
Shorter, less error prone and faster. Depending on the row count and schema this can be faster by multiple orders of magnitude. | How to avoid While Loop in Sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am a newbie to Oracle database programming and I wish to INSERT (and also display) dates in 'DD MON YYYY' format. (PS: this only involves the INSERT event.) Which data type (DATE or TIMESTAMP) is the most suitable option for me to accomplish this format? How am I supposed to do that? Thanks. | A `DATE` column does not have any format.
So the format that you use when inserting or updating data is irrelevant for displaying that data (that's one of the reasons why you should never store a date in a `VARCHAR` column).
Any formatted output you see for a `DATE` column in your SQL tool (e.g. SQL\*Plus) is applied by that tool. It is not part of the data stored in that column.
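The same separation holds in most programming languages, which is why it is best to apply formats only at the edges. For example, in Python a `datetime` object carries no display format either:

```python
from datetime import datetime

# Parsing applies a format on the way in (compare TO_DATE in Oracle)...
d = datetime.strptime("27-06-2014", "%d-%m-%Y")

# ...and formatting applies one on the way out (compare TO_CHAR).
# Month names are locale-dependent, e.g. "27 JUN 2014" in an English locale.
print(d.strftime("%d %b %Y").upper())
```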
When providing a date literal you should either use the `to_date()` function with an explicit format mask:
```
insert into some_table (some_date_column)
values (to_date('27-06-2014', 'dd-mm-yyyy'));
```
I also do not recommend using formats with written month names (27-JUN-2014) when supplying a date literal because they *also* depend on the NLS settings of the client computer and might produce (random) errors due to different languages. Using numbers only is much more robust.
I prefer to use ANSI date literals because it's a bit less typing:
```
insert into some_table (some_date_column)
values (DATE '2014-06-27');
```
The format for an ANSI date (or timestamp) literal is *always* the ISO format (yyyy-mm-dd).
When you select your data you can display the date in whatever format you like. Either by using the `to_char()` function (e.g. when using a SQL tool) or using functions from your programming language (the preferred way to use inside an application):
```
select to_char(some_date_column,'dd-mon-yyyy')
from some_table;
```
Note that the `DATE` data type in Oracle (despite its name) also stores the time. A `TIMESTAMP` does the same thing, only with a higher precision (it includes fractional seconds, whereas the `DATE` data type only stores whole seconds). | To **insert** a record,
```
INSERT INTO TABLE_NAME (DATE_FIELD) VALUES (TO_DATE ('27-JUN-2014', 'DD-MON-YYYY');
```
It is advisable to use the DATE data type unless you need the date's accuracy down to fractional seconds. In your case, go with the DATE data type; TIMESTAMP is not necessary.
To **select** a record,
```
SELECT TO_CHAR(DATE_FIELD, 'DD-MON-YYYY') FROM TABLE_NAME;
```
In general, remember this:
* **TO\_DATE** is a function used to convert a string (CHAR) to a DATE
* **TO\_CHAR** is a function used to convert a DATE to a string (CHAR) | How do I display DATE in 'DD MON YYYY' format? | [
"",
"sql",
"oracle",
""
] |
I have a list of items which have states within date ranges, and some counts are coming up wrong (technically correct, but wrong in intent).
I'm wondering if there's some examples of the date ranges not matching up closely on the datetimes I'm checking, or if the datetimes I'm querying on happen to fall into holes in the date ranges between rows.
The table looks something like this:
```
table_above
itemid | start | stop | state
1234 | 2000-01-01 00:00:00.000 | 2014-02-01 10:04:00.000 | 1
1234 | 2014-02-01 10:04:00.003 | NULL | 2
1111 | 2000-01-01 00:00:00.000 | NULL | 2
```
`itemid 1234` was at `state 1` for about 14 years, then on `Feb 1` it switched to `state 2`, and the `stop = null` means that to this moment, `item 1234` is in `state 2`.
`item 1111` has been in `state 2` since `2000-01-01 00:00:00.000`, and is in `state 2` currently.
There was a tiny period of time when the DB recorded no state at all for `itemid 1234` for the following two datetimes: `'2014-02-01 10:04:00.001'` and `'2014-02-01 10:04:00.002'`
Since my counts aren't coming up quite the way I expected, I'd like to check whether it's because of this issue.
The query I run counts the number of items in a particular state at a particular datetime:
```
select
    count(1)
from
    table_above
where
    start < '2014-02-01 12:00:00.000'
    and (stop >= '2014-02-01 12:00:00.000' or stop is null)
    and state = 2
```
This would return `2` for table\_above. However, the following query would return `1`.
```
select
    count(1)
from
    table_above
where
    start < '2014-02-01 10:04:00.001'
    and (stop >= '2014-02-01 10:04:00.001' or stop is null)
    and state = 2
```
The problem is that in reality, for those tiny periods of time, the items associated with those `itemid`s exist, but they're not being counted anywhere.
The query is technically correct, but I need it to represent intention instead.
If they fall into those gaps, I'd like to count them as the state they were in immediately prior to the gaps.
Also, is there any way I can run a query to print all gaps for a particular `itemid`?
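For concreteness, this is the kind of gap check I have in mind, sketched here with SQLite and the `LEAD` window function (which SQL Server 2012+ also provides):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_above (itemid, start, stop, state)")
con.executemany("INSERT INTO table_above VALUES (?, ?, ?, ?)", [
    (1234, "2000-01-01 00:00:00.000", "2014-02-01 10:04:00.000", 1),
    (1234, "2014-02-01 10:04:00.003", None, 2),
    (1111, "2000-01-01 00:00:00.000", None, 2),
])

# A gap exists wherever a row's stop is before the next row's start.
gaps = con.execute("""
    SELECT itemid, stop AS gap_start, next_start AS gap_end
    FROM (SELECT itemid, stop,
                 LEAD(start) OVER (PARTITION BY itemid ORDER BY start) AS next_start
          FROM table_above)
    WHERE stop IS NOT NULL AND next_start > stop
""").fetchall()
print(gaps)  # [(1234, '2014-02-01 10:04:00.000', '2014-02-01 10:04:00.003')]
```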
Finally, I'm suspicious whether there is any overlap in the date ranges where there shouldn't be, and I'd like a query to discover whether there is any overlap. At any given time, an item can only be in one state at a time, but I'd like to check whether there are some points where items have multiple states, such as in the example below.
```
table_above
itemid | start | stop | state
1234 | 2000-01-01 00:00:00.000 | 2014-02-01 10:04:00.000 | 1
1234 | 2014-02-01 10:03:59.999 | NULL | 2
```
If I ran the query above twice, with `state = 2` or `state = 1` for the time point `'2014-02-01 10:04:00.000'`, both would return a count of `1`, but it is intended that each item have only one state at each point in time, so they should not both return `1`. | The other answer has some useful information, but here are some other things to consider as well:
* If you're using a `datetime` type, then the values are *always* going to be rounded to increments of .000, .003, or .007, because that's its precision. This is [in the documentation](http://msdn.microsoft.com/en-us/library/ms187819.aspx).
* Comparing a `datetime` to a string will cast that string to a `datetime`, so actually there is no "tiny period" of unrepresentation. Consider the following:
```
declare @dt0 datetime, @dt1 datetime, @dt2 datetime, @dt3 datetime
select @dt0 = '2014-02-01 10:04:00.000',
@dt1 = '2014-02-01 10:04:00.001',
@dt2 = '2014-02-01 10:04:00.002',
@dt3 = '2014-02-01 10:04:00.003'
if @dt0 = @dt1 print 'True' else print 'False'
if @dt2 = @dt3 print 'True' else print 'False'
```
Both tests print `True` because of the rounding happening with `datetime`. If you really need more precise values, consider using `datetime2` instead.
* Your logic is a bit off for the range comparison. Usually, the start of the range is inclusive while the end of the range is exclusive. You have this reversed.
Instead of:
```
start < '2014-02-01 12:00:00.000'
and (stop >= '2014-02-01 12:00:00.000' or stop is null)
```
It should be:
```
start <= '2014-02-01 12:00:00.000'
and (stop > '2014-02-01 12:00:00.000' or stop is null)
``` | > I'd like a query to discover whether there is any overlap. At any given time, an item can only be in one state at a time, but I'd like to check whether there are some points where items have multiple states
You could try something like the following to capture when the start/stop date of a status lands between the start and stop date of another status:
```
Select A.itemid
From table_above A
Join table_above B
On A.itemid = B.itemid
Where B.start Between A.start And A.stop
Or B.stop Between A.start And A.stop
``` | How can I discover and compensate for possible flaws in the method of date storage? | [
"",
"sql",
"sql-server",
"date",
"datetime",
""
] |
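The half-open convention recommended above (inclusive start, exclusive stop) is easy to sanity-check outside the database. Below is a minimal sketch using SQLite from Python; the table and rows are hypothetical stand-ins for `table_above`:

```python
import sqlite3

# Hypothetical state-history rows: each covers [start, stop),
# with a NULL stop meaning "still open".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (itemid INT, start TEXT, stop TEXT, state INT)")
conn.executemany("INSERT INTO states VALUES (?, ?, ?, ?)", [
    (1234, "2000-01-01 00:00:00", "2014-02-01 10:04:00", 1),
    (1234, "2014-02-01 10:04:00", None, 2),
    (5678, "2014-01-01 00:00:00", None, 2),
])

def count_in_state(t, state):
    # Inclusive start, exclusive stop: no gap and no double count
    # at the exact boundary instant.
    return conn.execute(
        "SELECT COUNT(*) FROM states "
        "WHERE start <= ? AND (stop > ? OR stop IS NULL) AND state = ?",
        (t, t, state)).fetchone()[0]

print(count_in_state("2014-02-01 10:04:00", 1))  # 0: old state already closed
print(count_in_state("2014-02-01 10:04:00", 2))  # 2: new state already open
```

At the boundary instant each item is counted exactly once, which is the behavior the corrected `<=` / `>` comparison is meant to guarantee.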
I'm trying to extract a number that follows a cardinal "#" from a string in SQL Server. I also want to ignore all numbers that aren't attached to the "#" symbol. For that purpose I created something like this:
```
DECLARE @val nvarchar(50)
SET @val = '#5777 some text'
PRINT SUBSTRING(@val, PATINDEX('%#[0-9]%', @val) + 1,
PATINDEX('%[^0-9]%', SUBSTRING(@val,
PATINDEX('%#[0-9]%', @val) + 1, LEN(@val))) - 1)
----------
@val = '#5777 some text':
result = 5777
@val = '#5777'
result = error
```
It works fine if the number has some text after it, but if the number is the last character I obviously get an error stating:
"
Invalid length parameter passed to the LEFT or SUBSTRING function
"
Of course I could do something like:
```
DECLARE @val nvarchar(50)
SET @val = '#5777'
PRINT SUBSTRING(@val, PATINDEX('%#[0-9]%', @val) + 1,
CASE WHEN PATINDEX('%[^0-9]%', SUBSTRING(@val,
PATINDEX('%#[0-9]%', @val) + 1, LEN(@val))) = 0 THEN
LEN(@val) ELSE
PATINDEX('%[^0-9]%', SUBSTRING(@val,
PATINDEX('%#[0-9]%', @val) + 1, LEN(@val))) - 1
END)
----------
result = 5777
```
So my question is: Is there any way to achieve this purpose in a simpler and more efficient way?
Thank you | A bit hacky, but it works for both of your scenarios (following text, no following text):
```
DECLARE @val nvarchar(50)
SET @val = '#5777 some text'
PRINT SUBSTRING(@val, PATINDEX('%#[0-9]%', @val) + 1,
ISNULL(NULLIF((PATINDEX('%[^0-9]%', SUBSTRING(@val, PATINDEX('%#[0-9]%', @val) + 1, LEN(@val))) - 1),-1),100))
SET @val = '#5777'
PRINT SUBSTRING(@val, PATINDEX('%#[0-9]%', @val) + 1,
ISNULL(NULLIF((PATINDEX('%[^0-9]%', SUBSTRING(@val, PATINDEX('%#[0-9]%', @val) + 1, LEN(@val))) - 1),-1),100))
``` | ```
DECLARE @val nvarchar(50)
SET @val = '#5777 some text'
PRINT REPLACE(@val, SUBSTRING(@val, PATINDEX('%[^a-zA-Z0-9 ]%', @val), 1), '')
``` | Extract int from String on SQL Server | [
"",
"sql",
"sql-server",
""
] |
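If the extraction can be done client-side instead of in T-SQL, the same "digits immediately after `#`" rule is a short regular expression. A hypothetical Python sketch:

```python
import re

def extract_after_hash(s):
    # Only digits that directly follow '#'; other numbers are ignored.
    m = re.search(r"#(\d+)", s)
    return m.group(1) if m else None

print(extract_after_hash("#5777 some text"))  # 5777
print(extract_after_hash("#5777"))            # 5777 (no trailing text needed)
print(extract_after_hash("42 but no hash"))   # None
```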
I have these users:
**NAME** **\_\_\_\_\_** **ROLE**\_\_\_\_\_\_\_\_\_**STATUS**
user1 -------- role1 ---------------- true
user2 -------- role2 ---------------- true
user3 -------- role2 ---------------- false
user4 -------- role3 ---------------- false
user5 -------- role4 ---------------- true
I have this query:
```
User.where("role = ? OR role = ? OR role = ? AND status = ?", "role1", "role2", "role3", true)
```
and I expect that the data retrieved will be:
user1
user2
but the system retrieve the user3 even if his status is false.
I also tried to change `AND` to `OR` but then it retrieves all rows.
What's wrong with my query? | What's wrong with the query is the lack of parentheses around the 'OR groups'
```
User.where("(role = ? OR role = ? OR role = ?) AND status = ?", "role1", "role2", "role3", true)
```
However, as already stated in another answer, it can be done in an easier way like this.
```
User.where(status: true).where(role: %w{role1 role2 role3})
``` | try this :
```
User.where(status: true).where(role: ["role1", "role2", "role3"])
``` | Rails MySQL query using where statement | [
"",
"mysql",
"sql",
"ruby-on-rails",
""
] |
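The precedence pitfall called out above (`AND` binds tighter than `OR`) is easy to reproduce on any engine. A small SQLite-in-Python sketch with the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT, status INT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    ("user1", "role1", 1), ("user2", "role2", 1), ("user3", "role2", 0),
    ("user4", "role3", 0), ("user5", "role4", 1),
])

# Without parentheses, AND binds tighter, so inactive user3 slips through.
no_parens = conn.execute(
    "SELECT name FROM users WHERE role='role1' OR role='role2' "
    "OR role='role3' AND status=1 ORDER BY name").fetchall()
# With parentheses, only active users in the listed roles match.
with_parens = conn.execute(
    "SELECT name FROM users WHERE (role='role1' OR role='role2' "
    "OR role='role3') AND status=1 ORDER BY name").fetchall()
print([r[0] for r in no_parens])   # ['user1', 'user2', 'user3']
print([r[0] for r in with_parens]) # ['user1', 'user2']
```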
I am using the following script to count the number of ShopId for one year.
I also need to break down the result by month for the year.
So the end result should be
```
MONTH 1
SHOPID 5 100
SHOPID 4 90
MONTH 2
SHOPID 1 150
SHOPID 4 80
```
---
```
SELECT ShopId, Count(ShopId) as Interest
FROM dbo.Analytics
WHERE DateCreated >= '2014' AND DateCreated < '2015'
GROUP BY ShopId ORDER BY Interest DESC
```
---
Table structure
```
CREATE TABLE Analytics
(
DateCreated dateTime2(2),
ShopId int
);
```
What should I change in my script? Should I use DATEPART in the GROUP BY? | You can use **[DatePart](http://msdn.microsoft.com/en-us/library/ms174420.aspx)**.
Try like this
```
SELECT DatePart(mm,datecreated) 'Month',ShopId, Count(ShopId) as Interest
FROM dbo.Analytics
WHERE year(DateCreated) = '2014'
GROUP BY Datepart(mm,datecreated),ShopId
ORDER BY Interest DESC
```
DatePart will return the month number only. If you need the result to have the month name instead, you should use **DateName**.
Try this
```
SELECT DateName( month , DateAdd( month , DatePart(mm,datecreated) , -1 ) ) 'Month',ShopId, Count(ShopId) as Interest
FROM dbo.Analytics
WHERE year(DateCreated) = '2014'
GROUP BY DateName( month , DateAdd( month , DatePart(mm,datecreated) , -1 ) ),ShopId
ORDER BY Interest DESC
``` | Try this
```
SELECT datepart(mm,datecreated) 'Month',ShopId, Count(ShopId) as Interest
FROM dbo.Analytics
WHERE year(DateCreated) = '2014'
GROUP BY datepart(mm,datecreated),ShopId ORDER BY Interest DESC
``` | How to use GROUP with DATEPART? | [
"",
"sql",
"sql-server",
""
] |
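The same month-bucketing idea can be tried quickly in SQLite, which uses `strftime` where T-SQL uses `DATEPART`/`YEAR`; the data below is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE analytics (DateCreated TEXT, ShopId INT)")
conn.executemany("INSERT INTO analytics VALUES (?, ?)", [
    ("2014-01-05", 5), ("2014-01-09", 5), ("2014-01-11", 4),
    ("2014-02-02", 1), ("2014-02-20", 1), ("2014-02-25", 4),
    ("2015-03-01", 9),  # different year: must be filtered out
])
rows = conn.execute(
    "SELECT strftime('%m', DateCreated) AS mon, ShopId, COUNT(ShopId) AS interest "
    "FROM analytics WHERE strftime('%Y', DateCreated) = '2014' "
    "GROUP BY mon, ShopId ORDER BY mon, interest DESC").fetchall()
print(rows)  # [('01', 5, 2), ('01', 4, 1), ('02', 1, 2), ('02', 4, 1)]
```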
I have tables like following:
**post:**
```
id status
1 0
2 1
3 1
```
**comment:**
```
id post_id
1 2
2 1
3 3
4 2
```
I want to select posts where status=0 or the post has comments. I made this query:
```
SELECT t.*, COUNT(cmt.id) as commentsCount FROM `post` `t` LEFT JOIN comment cmt ON (cmt.post_id = t.id) WHERE t.status='0' OR commentsCount>0 GROUP BY t.id
```
but it doesn't work properly.
How to fix this?
P.S. These are simplified tables to make this easier to understand; in my real database I can't add a field with the count. | You need to put this condition in a `having` clause:
```
SELECT t.*, COUNT(cmt.id) as commentsCount
FROM `post` `t` LEFT JOIN
comment cmt
ON (cmt.post_id = t.id)
WHERE t.status = '0'
GROUP BY t.id
HAVING commentsCount > 0;
```
EDIT:
For the `or` logic, you can move both conditions to the `having` clause:
```
SELECT t.*, COUNT(cmt.id) as commentsCount
FROM `post` `t` LEFT JOIN
comment cmt
ON (cmt.post_id = t.id)
GROUP BY t.id
HAVING max(t.status) = '0' OR commentsCount > 0;
```
The `max()` is, strictly speaking, unnecessary because `id` is a primary key. But I'm including it for clarity. | There is also more straightforward way to translate your query from English to SQL:
```
SELECT *
FROM post
WHERE status='0'
or (select count(1) from comment where post_id = post.id) > 0
```
I recommend you to check which query has better plan on your data (with join or with subquery). Anyway special field for filtering is highly recommended in your case. | COUNT of joined records in WHERE statement | [
"",
"sql",
"join",
"count",
"where-clause",
""
] |
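The "move both conditions into `HAVING`" version above behaves the same way in SQLite. A runnable sketch (post 4 is an extra row, added here to show a post that gets filtered out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post (id INT, status INT);
CREATE TABLE comment (id INT, post_id INT);
INSERT INTO post VALUES (1,0),(2,1),(3,1),(4,1);
INSERT INTO comment VALUES (1,2),(2,1),(3,3),(4,2);
""")
rows = conn.execute("""
    SELECT p.id, COUNT(c.id) AS commentsCount
    FROM post p LEFT JOIN comment c ON c.post_id = p.id
    GROUP BY p.id
    HAVING MAX(p.status) = 0 OR commentsCount > 0
    ORDER BY p.id
""").fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 1)] -- post 4 (status=1, no comments) is excluded
```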
I am trying to convert a long date stored as text to a short date via MS Access SQL.
For example, I have a table which parsed information from a website and one of the fields is Tuesday, June 17, 2014. I want to run an update query in another table which takes this value and converts it to 17/06/2014.
Any help on what functions I can use please?
thanks
Elton | Building on @VBlades' example, but allowing the year to be different from 2014, assuming that other years might appear in the source data.
This will only work if the dates you wish to parse are formatted consistently.
Paste this function into a vba module and call it from your query.
```
Function dateParser(datestr As String) As Variant
Dim day_month, year, day_month_year As String
day_month = Split(datestr, ",")(1)
year = Split(datestr, ",")(2)
day_month_year = day_month + ", " + year
dateParser = Format(day_month_year, "dd/mm/yyyy")
End Function
``` | For date strings like "Tuesday, June 17, 2014" a VBA function like this
```
Option Compare Database
Option Explicit
Public Function ParseDateString(DateString As Variant) As Variant
If IsNull(DateString) Then
ParseDateString = Null
Else
ParseDateString = CDate(Split(DateString, ", ", 2)(1))
End If
End Function
```
will convert the string to a true Date value. If you are running an update query and putting the resulting value into a `Date/Time` field in the table then you **DO NOT** want to convert the date to `dd/mm/yyyy` format. Just use the result of the function (the true Date value).
If you *must* convert the date to a string then use the unambiguous date format `yyyy-mm-dd`. If you convert to `dd/mm/yyyy` format, Access might mangle ambiguous dates and `12/06/2014` could be interpreted as December 6, *not* June 12. | Convert long date stored as text to short date Access SQL | [
"",
"sql",
"date",
"ms-access",
""
] |
I am facing a problem.
I am not able to make a query which is a bit complicated but crucial for my work.
I have a view which looks like this :
```
dpt | status
94 | OK
94 | Average
94 | Average
94 | Average
95 | OK
95 | NOK
95 | OK
96 | OK
96 | OK
96 | OK
```
I want a rule like:
* If AT LEAST ONE status of dpt is NOK -> return the dpt number with status NOK
* If there are more than 2 'Average' statuses for a dpt, then return the dpt number with the status 'Average'
According to the example posted the result should be :
```
dpt | status
94 | Average
95 | NOK
96 | OK
``` | You can do this with conditional aggregation. For your rules, this would look like:
```
select dpt,
       (case when sum(case when status = 'NOK' then 1 else 0 end) > 0 then 'NOK'
             when sum(case when status = 'Average' then 1 else 0 end) > 0 then 'Average'
else 'OK'
end) as Status
from myview v
group by dpt;
``` | This is one way that you can get the result
```
SELECT DISTINCT ok.dpt, COALESCE(nok.status,avgTable.status,ok.status) AS Status
FROM myTable AS ok
LEFT JOIN (SELECT dpt,status
FROM myTable
WHERE status='Average'
GROUP BY dpt,status
HAVING COUNT(*)>=2) AS avgTable
ON ok.dpt = avgTable.dpt
LEFT JOIN myTable nok ON ok.dpt = nok.dpt and nok.status = 'NOK'
``` | SQL Server Query with case | [
"",
"sql",
"sql-server",
"case",
""
] |
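Conditional aggregation like this is portable, so it can be verified quickly against SQLite from Python. The sketch below reproduces the expected output; the `>= 2` threshold on the 'Average' branch is an assumption taken from the stated rule:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE v (dpt INT, status TEXT)")
conn.executemany("INSERT INTO v VALUES (?, ?)", [
    (94, "OK"), (94, "Average"), (94, "Average"), (94, "Average"),
    (95, "OK"), (95, "NOK"), (95, "OK"),
    (96, "OK"), (96, "OK"), (96, "OK"),
])
rows = conn.execute("""
    SELECT dpt,
           CASE WHEN SUM(CASE WHEN status = 'NOK' THEN 1 ELSE 0 END) > 0
                    THEN 'NOK'
                WHEN SUM(CASE WHEN status = 'Average' THEN 1 ELSE 0 END) >= 2
                    THEN 'Average'
                ELSE 'OK'
           END AS status
    FROM v GROUP BY dpt ORDER BY dpt
""").fetchall()
print(rows)  # [(94, 'Average'), (95, 'NOK'), (96, 'OK')]
```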
I am reading someone else's SQL, and the code was like this:
There is a view called `user_v` with a column `path` that is an array
```
select * from user_v where 'USER_TYPE'=path[2]
```
Can't I use
`path[2] = 'USER_TYPE'` | This is a precaution taken by some programmers in languages where assignment and comparison can be easily confused, such as C or PHP, where the following statement looks innocent:
```
if ( $foo = 1 )
```
but it is actually *assigning* `1` to `$foo`, and the `if` will always evaluate to `true` (in PHP, at least, where `1` is true). What was meant was `if ( $foo == 1 )`.
If you reverse the arguments, the error becomes obvious sooner:
```
if ( 1 = $foo ) # SYNTAX ERROR
if ( 1 == $foo ) # Desired behaviour
```
This is sometimes known as "yoda coding", and is in [the coding standards of Wordpress, for example](http://make.wordpress.org/core/handbook/coding-standards/php/#yoda-conditions).
See also: [Why put the constant before the variable in a comparison?](https://stackoverflow.com/questions/370366/why-put-the-constant-before-the-variable-in-a-comparison)
In SQL, there is less chance of such a muddle, since although `=` can mean either assignment or comparison, there are rarely situations where a typo would select the wrong meaning.
However, if the coding standards for every other language used by a project mandate it, it would make sense to reinforce the habit by also using it in SQL, since I can't think of a specific reason *not* to write it that way. | There is no difference at all. | What is the difference between keeping column on left of = in sql | [
"",
"sql",
"postgresql",
""
] |
I am trying to get my head around using GROUP\_CONCAT within MYSQL.
Basically I have the following table, table1:
id, field1, field2, active
I want to bring back 5 rows within the table but in random order. So I'm using this:
```
SELECT GROUP_CONCAT(id ORDER BY rand()) FROM table1 WHERE active=1
```
This behaves as I would expect. I then want to use the output to select the other columns (field1, field2) from the table and display the results.
So I've tried using:
```
SELECT *
FROM table1
WHERE id IN
(
SELECT GROUP_CONCAT(id ORDER BY rand()) as id FROM table1 WHERE active=1
);
```
I expected something like the above to work but I can't figure out why it doesn't. It DOES bring back results, but not all of them: my table contains 10 rows, 6 of which are set to active=1. Therefore I would expect 6 rows to be returned, but this isn't happening; I may get 1, 2 or 0.
Additionally, if it helps, I'd like to limit the number of results returned by the sub-query to 3, but adding LIMIT doesn't seem to have any effect on the results returned.
Thank you in advance for your help | I think this is what you are looking for. This will bring back 5 random active rows.
```
SELECT *
FROM table1
WHERE active=1
ORDER BY RAND()
LIMIT 5;
``` | why not use this :
```
SELECT *, GROUP_CONCAT(id ORDER BY rand()) as randoms FROM table1 WHERE active=1
``` | Using MYSQL GROUP_CONCAT with sub query | [
"",
"mysql",
"sql",
"group-concat",
""
] |
I have a SQL table that contains the following
```
ID AccountNumber Name
1 12345 Tony
2 123456 Mike
3 123458 Mike
4 45689 Tom
5 666999 Tim
6 6669997 Lisa
7 44455 Tim
8 78901 Matt
9 789011 Roger
```
What I need to do is show all records where the Account Numbers begin with the same value (an indeterminate number of digits). For example, in this table, I'd want to select and display the following:
```
12345
123456
123458
666999
6669997
78901
789011
```
As you can see, it shows the each row where the AccountNumber matches or has the same beginning number. I haven't been able to find the proper query and would love any help.
Thanks! | The cases that you mention satisfy that the longer starts with the shorter. Here is a query that will get the shortest match for each account number:
```
select AccountNumber
from (select a.*, count(*) over (partition by ShortestAN) as numAN
from (select a.*,
(select top 1 a2.AccountNumber
from accounts a2
where a.AccountNumber like a2.AccountNumber + '%'
order by len(a2.AccountNumber) asc
) as ShortestAN
from accounts a
) a
) a
where numAN > 1
order by ShortestAN, AccountNumber;
```
The subquery finds the shortest account number that matches. The rest is just returning the ones where there is more than one match. | ```
select a1.ID, a1.AccountNumber, a1.Name,
a2.ID, a2.AccountNumber, a2.Name
from Accounts a1
join Accounts a2 on LEN(a1.name) <= LEN(a2.name) and SUBSTRING(a2.name, 1, LEN(a1.name)) = a1.name
where /*are not same rows*/ a1.ID <> a2.ID
``` | Get values and like values in SQL | [
"",
"sql",
"sql-server-2008",
""
] |
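The "shares a common prefix" relation can also be written as a plain self-join on `LIKE`; a SQLite sketch over the sample account numbers (the `a.id <> b.id` guard keeps a row from matching itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INT, acct TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [
    (1, "12345"), (2, "123456"), (3, "123458"), (4, "45689"),
    (5, "666999"), (6, "6669997"), (7, "44455"), (8, "78901"), (9, "789011"),
])
rows = conn.execute("""
    SELECT DISTINCT a.acct
    FROM accounts a JOIN accounts b
      ON a.id <> b.id
     AND (a.acct LIKE b.acct || '%' OR b.acct LIKE a.acct || '%')
    ORDER BY a.acct
""").fetchall()
print([r[0] for r in rows])
# ['12345', '123456', '123458', '666999', '6669997', '78901', '789011']
```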
There are two tables, Inspector (parent) and InspectorOfficeAccess (child). I need to give inspectors from countries (73, 74) access to offices (1, 20, 24, 31, 44).
There are many inspectors (over 100) in the Inspector table with country 73 and 74. Is it possible to insert all inspectors with officeid into the InspectorOfficeAccess table with one query?
The second screenshot shows how the final result should look. The InspectorOfficeAccess table is empty.
Each inspector will be inserted 5 times, once per office ID (1, 20, 24, 31, 44), where the inspector's country is IN (73, 74).

**ADDED**
I have tried this so far
```
insert into InspectorOfficeAccess
select i.inspectorid,o.Offices from Inspectors i
cross join
(
SELECT 1 AS Offices
UNION
SELECT 20
UNION
SELECT 24
UNION
SELECT 31
UNION
SELECT 44
) o
where i.CountryID IN (73,74)
``` | If you have to pick Office Id's from other table then another way for doing this would be :
```
Insert into InspectorOfficeAccess
Select I.InspectorID,O.OfficeId
from Inspector I
Cross Join Office O
where I.CountryId in (73,74) and O.OfficeId in (1,20,24,31,44);
```
You can check [Demo here.](http://rextester.com/SROFKL69840) | Try this
```
Insert into InspectorOfficeAccess
Select i.InspectorID, d.OfficeId
from Inspector i
cross apply(Select 1 as OfficeId union Select 20 union Select 24 union Select 31 union Select 44) d
where countryid in (73,74)
``` | insert multiple rows (one to many) based on other table | [
"",
"sql",
"sql-server-2008",
""
] |
I have two tables, for Bill & Payment. I have to show the balance sheet from these two tables.
The Data in the tables are:
**tblBill**

**tblPayment**

My current output is:

The query I'm trying to use is:
```
select Particulars,Date,BillAmount,0'PaidAmount' from tblBill
union
select Particulars,Date,0'BillAmount',PaidAmount from tblPayment
order by Date
```
However, I need my output in this format:

Is it possible to get the required format? | There you go:
Assuming there is only one transaction in a day....
```
With Tb1 as
(select Date,Particulars,BillAmount,0'PaidAmount' from tblBill
union
select Date,Particulars,0'BillAmount',PaidAmount from tblPayment
)
SELECT T1.Particulars,T1.[Date],T1.[BillAmount],T1.[PaidAmount],(Sum(T2.BillAmount) - Sum(T2.PaidAmount)) as Balance FROM Tb1 as T1
INNER JOIN
Tb1 as T2
ON T1.[date] >= T2.[date]
Group By T1.Particulars,T1.[Date],T1.[BillAmount],T1.[PaidAmount]
Order by [Date]
```
In case of more than one transactions in a day....
```
WITH Tb0 as
( SELECT [Date],Particulars,BillAmount,0'PaidAmount' from tblBill
UNION
SELECT [Date],Particulars,0'BillAmount',PaidAmount from tblPayment
)
, Tb1 as
(
SELECT Date,Particulars,BillAmount,PaidAmount,Row_Number() over (order by [Date] asc) as [OrderId]
FROM
Tb0
)
SELECT T1.Particulars,T1.[Date],T1.[BillAmount],T1.[PaidAmount],(Sum(T2.BillAmount) - Sum(T2.PaidAmount)) as Balance FROM Tb1 as T1
INNER JOIN
Tb1 as T2
ON T1.[OrderId] >= T2.[OrderId]
Group By T1.Particulars,T1.[Date],T1.[BillAmount],T1.[PaidAmount]
Order by [Date]
``` | You will need to JOIN the two tables. First, there must be a link between the two tables (say customerid existing to show which customer is involved).
Then, you can do.
```
CREATE VIEW vwTransactionHistory as
SELECT customerid, Particulars, [DATE], BillAmount, PaidAmount,
(SELECT SUM(BillAmount) FROM tblBill x WHERE x.customerid=temp1.customerid and x.date<=temp1.date) as bill2date, (SELECT SUM(PaidAmount) FROM tblPayment y WHERE y.customerid = temp1.customerid and y.date<=temp1.date) as Pay2Date
FROM
(
select customerid, Particulars,[Date],BillAmount,0 AS 'PaidAmount' from tblBill
union
select customerid,Particulars,[Date],0 AS 'BillAmount',PaidAmount from tblPayment
) AS temp1
GROUP BY customerid, Particulars,[Date],BillAmount,PaidAmount
```
Then you can do
```
SELECT TOP 1000 [customerid]
,[Particulars]
,[DATE]
,[BillAmount],[PaidAmount], isnull(bill2date,0) - isnull(pay2date,0) as Balance
FROM [vwTransactionHistory]
```
Remember that you don't need to create a View. I use views for clarity and abstraction of complex Queries. | Cumulative sum across two tables | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
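The triangular self-join used above for the running balance works unchanged in SQLite; a compact sketch with invented figures (assuming, as the first query does, at most one transaction per date):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE txn (d TEXT, particulars TEXT, bill REAL, paid REAL);
INSERT INTO txn VALUES
 ('2014-01-05', 'Bill A',  100, 0),
 ('2014-01-10', 'Payment',   0, 60),
 ('2014-01-15', 'Bill B',   50, 0);
""")
rows = conn.execute("""
    SELECT t1.d, t1.particulars, t1.bill, t1.paid,
           SUM(t2.bill) - SUM(t2.paid) AS balance
    FROM txn t1 JOIN txn t2 ON t2.d <= t1.d
    GROUP BY t1.d, t1.particulars, t1.bill, t1.paid
    ORDER BY t1.d
""").fetchall()
for r in rows:
    print(r)  # balance runs 100 -> 40 -> 90
```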
I've got a question which I am completely stumped on. It reads as follows:
*Write an import stored procedure that takes information from a table (source) and writes it into another table (target).*
I'm guessing it would look something like the following:
```
create procedure add_data(@name, @surname)
AS
BEGIN
SELECT name, surname
FROM cardholder
INSERT INTO new_table
values (name, surname)
```
Is my logic correct or am I completely missing it? | You are almost there. Since you are directly copying from one table to another, you can use the `INSERT INTO ... SELECT ...` idiom, like so:
```
create procedure add_data
    @name varchar(100),
@surname varchar(100)
AS
BEGIN
INSERT INTO new_table(name,surname)
SELECT name, surname
FROM cardholder
END
```
Note the 2 changes I made:
1. How you declare the parameters for the SP
2. The `values` clause should consist of the actual values being inserted. Since you are inserting data retrieved from a table rather than fixed values, you use the `select` query instead. | It is not correct. try dis
```
create procedure add_data
(
@name varchar(100),
@surname varchar(100)
)
AS
BEGIN
INSERT INTO target
SELECT name, surname
FROM cardholder
end
``` | procedure to take information from one table and write to another table | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have a SQL table, that contains a column named `Foo`.
I know how to get a row with the highest Foo value, with some SQL like this:
```
SELECT MAX(Foo) AS HighestFoo FROM Foos;
```
How can I get all the rows with the highest Foo? | Use a subquery:
```
SELECT
*
FROM
Foos
WHERE
Foo = (SELECT MAX(Foo) AS HighestFoo FROM Foos)
```
The subquery (on the last line) selects out the highest value, and then uses that value to modify the WHERE clause. | ```
With CTE As (
SELECT MAX(Foo) AS HighestFoo
FROM Foos
)
Select * From Foos Where Foo = ( Select HighestFoo From CTE)
``` | How to get a group of the highest Foo's in a SQL table | [
"",
"sql",
""
] |
I am currently working on a project that uses an Oracle database. I have observed in the application code that dates are almost never used directly. Instead, they are always used in conjunction with the TRUNC function (TRUNC(SYSDATE), TRUNC(event\_date), etc.)
Can anyone explain the reason behind using the trunc function instead of using the date directly? | A `DATE` in Oracle has not only a date part, but also a time part. This can lead to surprising results when querying data, e.g. the query
```
with v_data(pk, dt) as (
select 1, to_date('2014-06-25 09:00:00', 'YYYY-MM-DD hh24:mi:ss') from dual union all
select 2, to_date('2014-06-26 09:00:00', 'YYYY-MM-DD hh24:mi:ss') from dual union all
select 3, to_date('2014-06-27 09:00:00', 'YYYY-MM-DD hh24:mi:ss') from dual)
select * from v_data where dt = date '2014-06-25'
```
will return no rows, since you're comparing to 2014-06-25 at midnight.
The usual workaround for this is to use `TRUNC()` to get rid of the time part:
```
with v_data(pk, dt) as (
select 1, to_date('2014-06-25 09:00:00', 'YYYY-MM-DD hh24:mi:ss') from dual union all
select 2, to_date('2014-06-26 09:00:00', 'YYYY-MM-DD hh24:mi:ss') from dual union all
select 3, to_date('2014-06-27 09:00:00', 'YYYY-MM-DD hh24:mi:ss') from dual)
select * from v_data where trunc(dt) = date '2014-06-25'
```
Other, somewhat less frequently used approaches for this problem include:
* convert both dates with `to_char('YYYY-MM-DD')` and check for equality
* use a between clause: `WHERE dt between date '2014-06-25' and date '2014-06-26'` | You use the `trunc()` function to remove the time component of the date. By default, the `date` data type in Oracle stores both dates and times.
The `trunc()` function also takes a format argument, so you can remove other components of the dates, not just the time. For instance, you can trunc to the nearest hour. However, without the format, the purpose is to remove the time component. | Reason for using trunc function on dates in Oracle | [
"",
"sql",
"oracle",
""
] |
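SQLite has the same "date vs. datetime equality" trap, and its `date()` function plays the role Oracle's `TRUNC()` plays here. A quick sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE e (pk INT, dt TEXT)")
conn.executemany("INSERT INTO e VALUES (?, ?)", [
    (1, "2014-06-25 09:00:00"), (2, "2014-06-26 09:00:00"),
])
# Comparing the full value to a bare date finds nothing...
none_found = conn.execute("SELECT pk FROM e WHERE dt = '2014-06-25'").fetchall()
# ...while truncating the time part first works as intended.
truncated = conn.execute("SELECT pk FROM e WHERE date(dt) = '2014-06-25'").fetchall()
print(none_found, truncated)  # [] [(1,)]
```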
I have this column
```
NAME
John Stephenson
James Martin
Anna Corelia
```
How can I select this column so it becomes this?
```
NAME
Stephenson, John
Martin, James
Corelia, Anna
``` | One way
```
;with test(name) as (
select 'John Stephenson' union all
select 'James Martin' union all
select 'Anna J. Corelia' union all
select 'BOBBYTABLES'
)
select
case when charindex(' ', name) = 0 then name
else right(name, charindex(' ', reverse(name)) - 1) + ', ' + substring(name, 1, len(name) - charindex(' ', reverse(name))) end
from test
(No column name)
Stephenson, John
Martin, James
Corelia, Anna J.
BOBBYTABLES
``` | Your question has nothing to do with the `TRIM()` function. Probably you are trying to get something like the example below, using the `LEFT()` and `RIGHT()` functions of `SQL Server` and concatenating them with `,`:
```
select right('John Stephenson',(len('John Stephenson')-charindex(' ','John Stephenson')))
+ ', ' + left('John Stephenson',(charindex(' ','John Stephenson') - 1))
```
which will result in
```
Stephenson, John
``` | How do I split and comma-delimit names? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I've imported about 600 pages into my WordPress database and most (not all) of them have the word "park" at the end of their new URLs
```
domain.com/awesome-park/
```
I would like to bulk remove the word (and its preceding dash) via an SQL query or another recommended method. Any advice for a safe way to change URLs inside a database would be greatly appreciated.
```
UPDATE wp_posts
SET post_name = REPLACE(post_name, '-park', '')
WHERE post_name REGEXP '-park$';
```
I do recommend backing up your WordPress database before running this query! | If you know the table and column where this URL is stored, you could run the following query:
```
UPDATE table_name SET url_column = REPLACE(url_column, '-park', '');
``` | Bulk remove a word from all WordPress URLs | [
"",
"sql",
"wordpress",
"url",
""
] |
Given 174, is there a way to convert it to show the complete date?
Example: 174 should be 06/23/2014, since it is the 174th day of the year. My data looks like this: 2014174. Is there a SQL function for this?
Thank you | You can use the dateadd function like below, to accomplish what you are looking for.
```
declare @inputNumber int
set @inputNumber = 174
select dateadd(day,(@inputNumber-1),'1/1/'+cast (year(getdate())as varchar(10)))
``` | ```
DECLARE @DATEINT INT
SET @DATEINT = 2014174
select DATEADD(YEAR, @DATEINT/1000-1900,DATEADD(day, @DATEINT%1000 - 1, 0))
``` | convert the number of the calendar day to date in SQL | [
"",
"sql",
"sql-server",
""
] |
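If the conversion can be done client-side, Python's `datetime` handles the day-of-year arithmetic directly, and `strptime` with the `%j` directive even parses the combined `2014174` form:

```python
from datetime import date, datetime, timedelta

def from_year_day(yyyyddd):
    # e.g. 2014174 -> the 174th day of 2014
    year, day = divmod(yyyyddd, 1000)
    return date(year, 1, 1) + timedelta(days=day - 1)

print(from_year_day(2014174))                       # 2014-06-23
print(datetime.strptime("2014174", "%Y%j").date())  # same result via %j
```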
On SQL server 2008 R2, I would like to select one value of a column for each distinct value of another column.
e.g.
```
name id_num
Tom 53
Tom 60
Tom 27
Jane 16
Jane 16
Bill 97
Bill 83
```
I need to get one id\_num for each distinct name, such as
```
name id_num
Tom 27
Jane 16
Bill 97
```
For each name, the id\_num can be randomly picked up (not required to be max or min) as long as it is associated with the name.
For example, for Bill, I can pick up 97 or 83. Either one is ok.
I do not know how to write the SQL query.
Thanks | ```
SELECT
name,MIN(id_num)
FROM YourTable
GROUP BY name
```
**UPDATE:**
If you want to pick id\_num randomly, you may try this:
```
WITH cte AS (
SELECT
name, id_num,rn = ROW_NUMBER() OVER (PARTITION BY name ORDER BY newid())
FROM YourTable
)
SELECT *
FROM cte
WHERE rn = 1
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!3/40817/5) | You could grab the max id like this:
```
SELECT name, MAX(id_num)
FROM tablename
GROUP BY name
```
That would get you one id for each distinct name. | SQL server 2008 R2, select one value of a column for each distinct value of another column | [
"",
"sql",
"sql-server",
"sql-server-2008",
"windows-7",
""
] |
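Both approaches above are easy to verify on a scratch database; here is the deterministic `MIN()` variant run against the sample rows, using SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, id_num INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("Tom", 53), ("Tom", 60), ("Tom", 27), ("Jane", 16), ("Jane", 16),
    ("Bill", 97), ("Bill", 83),
])
rows = conn.execute(
    "SELECT name, MIN(id_num) FROM t GROUP BY name ORDER BY name").fetchall()
print(rows)  # [('Bill', 83), ('Jane', 16), ('Tom', 27)]
```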
I'm currently working on a table in Oracle database where there are time values stored in AM/PM format. By default, the times are a VARCHAR2 datatype so I have been converting them using the to\_date function.
What I'm doing is comparing two different times within the table and running a query which should return the results for all times contained within that interval. The interval is acquired by using the between condition. Here is an example:
```
SELECT TIME
FROM TIME_TABLE
WHERE to_date(TIME,'HH:MI:SS PM') between
to_date('04:00:00 PM', 'HH:MI:SS PM') and
to_date('08:00:00 PM', 'HH:MI:SS PM');
```
The problem is that when you do not declare the full date in the to\_date function, it will automatically default to the 1st of the current month. Thus, if I wanted to get all the results from 11:59 PM to 12:00 AM, I would get incorrect results.
Basically, I want to calculate the time interval in a manner where it loops instead of going only in one direction as it does naturally. Is there a way to accomplish this? | The biggest problem here is that as you add more data things will get slower and slower. It's difficult for an index to be efficiently used when you are calling a function on the field itself. It will have to scan the entire table to satisfy the query. There are three ways to avoid this:
* Don't store dates or times as strings. Use the correct field type. This is the best approach.
* Compare using strings instead of converting first. This will only work if you have guaranteed lexicographic ordering. You can do this with 24-hour times, but not with 12-hour times, because AM/PM gets in the way.
* As Barett enlightened me in comments, Oracle has the concept of [Function-Based Indexes](http://docs.oracle.com/cd/E11882_01/appdev.112/e10471/adfns_indexes.htm#ADFNS00505). You could create one that pre-computes the results of `to_date`, such that when you use it in the query that it has an index to work against. There are several [disadvantages](http://docs.oracle.com/cd/E11882_01/appdev.112/e10471/adfns_indexes.htm#ADFNS257) though, so if you take this approach, I suggest you consider the consequences carefully.
With regard to the question about looping over midnight on a time-only value, the general algorithm is as follows:
* Is the start-of-range value <= the end-of-range value?
+ If yes, then the return true if the test value is >= the start-of-range **and** < the end-of-range. Else return false.
+ If no, then return true if the test value is >= the start-of-range **or** < the end-of-range. Else return false.
There's no need to use :59, or any other value of the sort. Doing so can get in the way of other things, such as taking the duration of the range or testing a value like :59.5. | Not really. You'll need to add some logic for detecting a "loop" in your between criteria. If there is a loop (endRange < startRange), then the criteria should be:
```
to_char(to_date(TIME,'HH:MI:SS PM'),'HH24:MI:SS') >= '08:00:00' // endRange
OR to_char(to_date(TIME,'HH:MI:SS PM'),'HH24:MI:SS') <= '04:00:00' // startRange
```
On a side note, I reiterate wolΟi's suggestion to use 24 hour time. Otherwise you'll have AM/PM problems too. | Comparing clock times in SQL (Oracle) | [
"",
"sql",
"oracle",
"date",
"time",
""
] |
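The wrap-around algorithm above translates almost verbatim into code. A hypothetical Python sketch that treats times as 'HH:MM:SS' strings (which compare correctly as text):

```python
def in_range(t, start, end):
    # Half-open [start, end) on clock times; handles ranges that
    # wrap past midnight, e.g. 23:00 -> 02:00.
    if start <= end:
        return start <= t < end
    return t >= start or t < end

print(in_range("23:30:00", "23:00:00", "02:00:00"))  # True  (inside the wrap)
print(in_range("12:00:00", "23:00:00", "02:00:00"))  # False (outside it)
print(in_range("16:30:00", "16:00:00", "20:00:00"))  # True  (normal range)
print(in_range("20:00:00", "16:00:00", "20:00:00"))  # False (end is exclusive)
```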
Consider a data structure such as the below where the user has a small number of fixed settings.
### User
```
[Id] INT IDENTITY NOT NULL,
[Name] NVARCHAR(MAX) NOT NULL,
[Email] NVARCHAR(2034) NOT NULL
```
### UserSettings
```
[SettingA],
[SettingB],
[SettingC]
```
Is it considered correct to move the user's settings into a separate table, thereby creating a one-to-one relationship with the users table? Does this offer any real advantage over storing it in the same row as the user (the obvious disadvantage being performance). | You would normally split tables into two or more 1:1 related tables when the table gets very wide (i.e. has many columns). It is hard for programmers to have to deal with tables with too many columns. For big companies such tables can easily have more than 100 columns.
So imagine a product table. There is a selling price and maybe another price which was used for calculation and estimation only. Wouldn't it be good to have two tables, one for the real values and one for the planning phase? So a programmer would never confuse the two prices. Or take logistic settings for the product. You want to insert into the products table, but with all these logistic attributes in it, do you need to set some of these? If it were two tables, you would insert into the product table, and another programmer responsible for logistics data would care about the logistic table. No more confusion.
Another thing with many-column tables is that a full table scan is of course slower for a table with 150 columns than for a table with just half of this or less.
A last point is access rights. With separate tables you can grant different rights on the product's main table and the product's logistic table.
So all in all, it is rather rare to see 1:1 relations, but they can give a clearer view on data and even help with performance issues and data access.
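As a rough sketch of such a split (all table and column names invented for illustration), here is what a main table plus a 1:1 related side table might look like; Python's sqlite3 is used only to make the DDL runnable:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Main product table: the columns everyone needs.
cur.execute("""
    CREATE TABLE product (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        price REAL NOT NULL
    )""")

# 1:1 related side table: logistics attributes, keyed by the same id.
cur.execute("""
    CREATE TABLE product_logistics (
        product_id INTEGER PRIMARY KEY REFERENCES product(id),
        weight_kg  REAL,
        bin_code   TEXT
    )""")

cur.execute("INSERT INTO product VALUES (1, 'Widget', 9.99)")
cur.execute("INSERT INTO product_logistics VALUES (1, 0.5, 'A-17')")

# Joining on the shared key reassembles the full row when it is needed.
row = cur.execute("""
    SELECT p.name, p.price, l.weight_kg
    FROM product p JOIN product_logistics l ON l.product_id = p.id
""").fetchone()
print(row)  # ('Widget', 9.99, 0.5)
```

Note that the side table reuses the main table's key as its own primary key, which is what keeps the relationship strictly 1:1.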
EDIT: I'm taking Mike Sherrill's advice and will (hopefully) clarify the point about normalization.
Normalization is mainly about avoiding redundancy and the related lack of consistency. The decision whether to hold data in only one table or in several 1:1 related tables has nothing to do with this. You can decide to split a user table into one table for personal information like first and last name and another for school, graduation and job. Both tables would remain in the same normal form as the original table, because no data is more or less redundant than before. The only column used twice would be the user id, but this is not redundant, because it is needed in both tables to identify a record.
So asking "Is it considered correct to normalize the settings into a separate table?" is not a valid question, because you don't normalize anything by putting data into a 1:1 related separate table. | You're all wrong :) Just kidding.
**On a high load, high volume, heavily updated system splitting a table by 1:1 helps optimize I/O.**
For example, this way you can place heavily read columns *onto separate physical hard-drives* to speed-up parallel reads (the 1-1 tables have to be in different "filegroups" for this). Or you can optimize table-level locks. Etc. Etc.
But this type of optimization usually does not happen until you have millions of rows and huge read/write concurrency. | SQL one to one relationship vs. single table | [
"",
"sql",
"database-design",
"rdbms",
""
] |
I have this table:
```
create table Student(name varchar(20), surname varchar(20), age int);
```
First, insert into table this fields:
```
insert into Student values ("Jhon", "Smith", null);
insert into Student values ("Mark", "William", null);
```
Now, how do I insert age=28 where name = 'Jhon'? | You have to use [UPDATE](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB4QFjAA&url=http%3A%2F%2Fdev.mysql.com%2Fdoc%2Frefman%2F5.0%2Fen%2Fupdate.html&ei=uuirU8KPObPL0AWe5oGICA&usg=AFQjCNGVsZPXuo2367dgxMLfQjgy7VivQw&sig2=bvQnqF9s7LV2on5RzstNKg&bvm=bv.69837884,d.d2k) as below:
```
UPDATE Student SET age = 28 WHERE name = "Jhon";
```
**Notice:** *It will change the age of all people whose `name = "Jhon"`.*
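A quick sqlite3 sketch (standing in for MySQL, with an extra invented row) shows why this warning matters — the UPDATE touches every row matching the WHERE clause:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Student(name TEXT, surname TEXT, age INTEGER)")
cur.executemany("INSERT INTO Student VALUES (?, ?, NULL)",
                [("Jhon", "Smith"), ("Jhon", "Doe"), ("Mark", "William")])

# Both rows named 'Jhon' are updated, not just one of them.
cur.execute("UPDATE Student SET age = 28 WHERE name = 'Jhon'")
updated = cur.rowcount

rows = cur.execute("SELECT name, age FROM Student").fetchall()
print(updated, rows)  # 2 [('Jhon', 28), ('Jhon', 28), ('Mark', None)]
```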
It's far better to define a [Primary Key (ID)](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&sqi=2&ved=0CCUQFjAB&url=http%3A%2F%2Fdev.mysql.com%2Fdoc%2Frefman%2F5.5%2Fen%2Foptimizing-primary-keys.html&ei=xumrU_qnJur60gXE-YGABQ&usg=AFQjCNFqRq93Jd9tqU_v_8pd5li3T-U88Q&sig2=DA9it6S5tAgXw3WHeIBJCw&bvm=bv.69837884,d.d2k) column, so commands can target a specific ID. | `UPDATE Student SET age = 28 WHERE name = 'Jhon'`
But please take care of your referential integrity!
A data structure without any unique ID is not a safe way to store a large amount of data. | Command to insert into with select | [
"",
"mysql",
"sql",
""
] |
I have one base table (ID table) and multiple tables (data tables) which hold the data related to the IDs; each table holds a different type of data.
I want to filter IDs by multiple conditions from these different tables.
I created the query as follows, but it is too slow (about 3 minutes). I checked the EXPLAIN information but still cannot find a better way. Could anyone suggest a way to make it faster?
query
```
select jj.id, jj.imgtitle, jj.alias
from jjtable jj
inner join jjtable_catg jjc on jj.catid = jjc.cid
left join jjtable_map jjtm_name on jj.id = jjtm_name.picid
left join jjtable_tags jjts_name on jjtm_name.tid = jjts_name.tid
left join jjtable_map_ge jjtm_ge on jj.id = jjtm_ge.picid
left join jjtable_tags jjts_ge on jjtm_ge.tid = jjts_ge.tid
left join jjtable_map_ban jjtm_ban on jj.id = jjtm_ban.picid
left join jjtable_tags jjts_ban on jjtm_ban.tid = jjts_ban.tid
left join jjtable_map_per jjtm_per on jj.id = jjtm_per.picid
left join jjtable_tags jjts_per on jjtm_per.tid = jjts_per.tid
left join jjtable_map_fea jjtm_fea on jj.id = jjtm_fea.picid
left join jjtable_tags jjts_fea on jjtm_fea.tid = jjts_fea.tid
left join jjtable_map_ev jjtm_ev on jj.id = jjtm_ev.picid
left join jjtable_tags jjts_ev on jjtm_ev.tid = jjts_ev.tid
left join jjtable_map_ag jjtm_ag on jj.id = jjtm_ag.picid
left join jjtable_tags jjts_ag on jjtm_ag.tid = jjts_ag.tid
left join jjtable_map_fa jjtm_fa on jj.id = jjtm_fa.picid
left join jjtable_tags jjts_fa on jjtm_fa.tid = jjts_fa.tid
left join jjtable_map_im jjtm_im on jj.id = jjtm_im.picid
left join jjtable_tags jjts_im on jjtm_im.tid = jjts_im.tid where jj.published = 1
and jj.approved = 1
and jjts_fea.tid in(87,90)
and jjts_fea.delete_flag = 0
and jjtm_fea.delete_flag = 0
and jjc.cid in(4,10)
and jjts_name.tid in(77)
and jjts_name.delete_flag = 0
and jjtm_name.delete_flag = 0
and jjts_per.tid in(28,36)
and jjts_per.delete_flag = 0
and jjtm_per.delete_flag = 0
and jjts_ag.tid in(98,99)
and jjts_ag.delete_flag = 0
and jjtm_ag.delete_flag = 0
and jjts_fa.tid in(104,107)
and jjts_fa.delete_flag = 0
and jjtm_fa.delete_flag = 0
group by jj.id
order by case when date(jj.date) > date_add(date(now()), interval -14 day) then jj.view end DESC, jj.id DESC
```
EDIT: Thank you for your comment. First, I have added information as you requested.
WHY I NEED THIS QUERY:
This query is intended to receive the POST data from a "form input" tag, where the user selects options from different categories to filter IDs. That is why this query includes tables and data which are not used in this example.
The reason there are so many left joins is that each 'map' table holds category mapping information used to filter IDs; e.g. the user wants to find IDs which are mapped with 'fea' 87, 90 and 'ag' 98, 99.
explain select;
```
+----+-------------+--------------+--------+-------------------+-----------------+---------+------------------------------------------------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------+--------+-------------------+-----------------+---------+------------------------------------------------+------+---------------------------------+
| 1 | SIMPLE | jjts_name | const | PRIMARY | PRIMARY | 4 | const | 1 | Using temporary; Using filesort |
| 1 | SIMPLE | jjc | range | PRIMARY | PRIMARY | 4 | NULL | 2 | Using where |
| 1 | SIMPLE | jjts_pe | range | PRIMARY | PRIMARY | 4 | NULL | 2 | Using where; Using join buffer |
| 1 | SIMPLE | jjts_fea | range | PRIMARY | PRIMARY | 4 | NULL | 2 | Using where; Using join buffer |
| 1 | SIMPLE | jjts_ag | range | PRIMARY | PRIMARY | 4 | NULL | 2 | Using where; Using join buffer |
| 1 | SIMPLE | jjts_fa | range | PRIMARY | PRIMARY | 4 | NULL | 2 | Using where; Using join buffer |
| 1 | SIMPLE | jj | ref | PRIMARY,idx_catid | idx_catid | 4 | jjc.cid | 95 | Using where |
| 1 | SIMPLE | jjtm_ag | eq_ref | udx_picid_tid | udx_picid_tid | 8 | jj.id,jjts_ag.tid | 1 | Using where |
| 1 | SIMPLE | jjtm_fa | eq_ref | udx_picid_tid | udx_picid_tid | 8 | jj.id,jjts_fa.tid | 1 | Using where |
| 1 | SIMPLE | jjtm_fea | eq_ref | udx_picid_tid | udx_picid_tid | 8 | jj.id,jjts_fea.tid | 1 | Using where |
| 1 | SIMPLE | jjtm_pe | eq_ref | udx_picid_tid | udx_picid_tid | 8 | jjtm_fea.picid,jjts_per.tid | 1 | Using where |
| 1 | SIMPLE | jjtm_ev | ref | udx_picid_tid | udx_picid_tid | 4 | jjtm_fa.picid | 3 | Using index |
| 1 | SIMPLE | jjts_ev | eq_ref | PRIMARY | PRIMARY | 4 | jjtm_ev.tid | 1 | Using index |
| 1 | SIMPLE | jjtm_im | ref | udx_picid_tid | udx_picid_tid | 4 | jjtm_pe.picid | 6 | Using index |
| 1 | SIMPLE | jjts_im | eq_ref | PRIMARY | PRIMARY | 4 | jjtm_im.tid | 1 | Using index |
| 1 | SIMPLE | jjtm_ge | ref | udx_picid_tid | udx_picid_tid | 4 | jj.id | 23 | Using index |
| 1 | SIMPLE | jjts_ge | eq_ref | PRIMARY | PRIMARY | 4 | jjtm_ge.tid | 1 | Using index |
| 1 | SIMPLE | jjtm_ban | ref | udx_picid_tid | udx_picid_tid | 4 | jjtm_pe.picid | 9 | Using index |
| 1 | SIMPLE | jjts_ban | eq_ref | PRIMARY | PRIMARY | 4 | jjtm_ban.tid | 1 | Using index |
| 1 | SIMPLE | jjtm_name | eq_ref | udx_picid_tid | udx_picid_tid | 8 | jjtm_fea.picid,const | 1 | Using where |
+----+-------------+--------------+--------+-------------------+-----------------+---------+------------------------------------------------+------+---------------------------------+
20 rows in set (2 min 15.87 sec)
```
show columns;
```
mysql> show columns from jjtables;
+--------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+------------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| catid | int(11) | NO | MUL | 0 | |
| imgtitle | text | NO | | NULL | |
| alias | varchar(255) | NO | | | |
| date | datetime | NO | | NULL | |
| view | int(11) | NO | | 0 | |
| published | tinyint(1) | NO | | 0 | |
| approved | tinyint(1) | NO | | 0 | |
+--------------+------------------+------+-----+---------+----------------+
mysql> show columns from jjtable_catg;
+--------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+---------------------+------+-----+---------+----------------+
| cid | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(255) | NO | | | |
| alias | varchar(255) | NO | | | |
| parent | int(11) | NO | MUL | 0 | |
| desc | text | YES | | NULL | |
| order | int(11) | NO | | 0 | |
+--------------+---------------------+------+-----+---------+----------------+
mysql> show columns from jjtable_tags;
+--------------+-------------+------+-----+---------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------------------+----------------+
| tid | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(60) | NO | UNI | NULL | |
| talias | varchar(60) | NO | | | |
| ttypeid | int(11) | NO | | 1 | |
| pa_tid | int(11) | NO | | 0 | |
| delete_flag | tinyint(1) | NO | | 0 | |
| created | datetime | NO | | 0000-00-00 00:00:00 | |
| created_by | int(11) | NO | | 0 | |
| modified | datetime | NO | | 0000-00-00 00:00:00 | |
| modified_by | int(11) | NO | | 0 | |
| t_due | datetime | NO | | 0000-00-00 00:00:00 | |
+--------------+-------------+------+-----+---------------------+----------------+
mysql> show columns from jjtable_map;
+-------------+------------+------+-----+---------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------+------+-----+---------------------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| picid | int(11) | NO | MUL | NULL | |
| tid | int(11) | NO | | NULL | |
| delete_flag | tinyint(1) | NO | | 0 | |
| created | datetime | NO | | 0000-00-00 00:00:00 | |
| created_by | int(11) | NO | | 0 | |
| modified | datetime | NO | | 0000-00-00 00:00:00 | |
| modified_by | int(11) | NO | | 0 | |
+-------------+------------+------+-----+---------------------+----------------+
```
Sorry for my poor explanation. I hope this edit helps you understand better.
EDIT END
Regards | Try this: Convert your left joins to a bunch of `WHERE EXISTS` sub-queries and drop the `GROUP BY`, because (that's my suspicion here) that's what you really wanted to express.
```
select
jj.id, jj.imgtitle, jj.alias
from
jjtable jj
-- you could drop this join, you don't do anything with jjtable_catg
inner join jjtable_catg jjc on jjc.cid = jj.catid
where
jj.published = 1
and jj.approved = 1
and jjc.cid in(4, 10)
and exists (
select 1
from jjtable_map m inner join jjtable_tags t on m.tid = t.tid
    where m.picid = jj.id and m.delete_flag = 0 and t.tid in (77) and t.delete_flag = 0
)
and exists (
select 1
from jjtable_map_per m inner join jjtable_tags t on m.tid = t.tid
    where m.picid = jj.id and m.delete_flag = 0 and t.tid in (28,36) and t.delete_flag = 0
)
and exists (
select 1
from jjtable_map_fea m inner join jjtable_tags t on m.tid = t.tid
    where m.picid = jj.id and m.delete_flag = 0 and t.tid in (87,90) and t.delete_flag = 0
)
and exists (
select 1
from jjtable_map_ag m inner join jjtable_tags t on m.tid = t.tid
    where m.picid = jj.id and m.delete_flag = 0 and t.tid in (98,99) and t.delete_flag = 0
)
and exists (
select 1
from jjtable_map_fa m inner join jjtable_tags t on m.tid = t.tid
    where m.picid = jj.id and m.delete_flag = 0 and t.tid in (104,107) and t.delete_flag = 0
)
order by
case when date(jj.date) > date_add(date(now()), interval -14 day) then jj.view end DESC,
jj.id DESC
```
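On a toy scale, the same idea can be sketched with Python's sqlite3 (data invented for illustration): each `EXISTS` acts as an independent semi-join, so matching rows are never multiplied and no `GROUP BY` is needed to collapse duplicates:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE pics(id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE map(picid INTEGER, tid INTEGER)")
cur.executemany("INSERT INTO pics VALUES (?)", [(1,), (2,), (3,)])
# pic 1 has tags 87 and 98; pic 2 has only 87; pic 3 has neither
cur.executemany("INSERT INTO map VALUES (?, ?)", [(1, 87), (1, 98), (2, 87)])

# Only pics satisfying *both* tag conditions survive; each subquery stops
# at the first match instead of producing a row per matching tag.
rows = cur.execute("""
    SELECT p.id FROM pics p
    WHERE EXISTS (SELECT 1 FROM map m WHERE m.picid = p.id AND m.tid IN (87, 90))
      AND EXISTS (SELECT 1 FROM map m WHERE m.picid = p.id AND m.tid IN (98, 99))
""").fetchall()
print(rows)  # [(1,)]
```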
Also create these composite indexes (if they're missing):
* `jjtable_catg`: `(cid)`
* `jjtable_tags`: `(tid, delete_flag)`
* `jjtable_map` and all `jjtable_map_*`: `(picid, delete_flag, tid)` | One big improvement would be to apply conditions within the joins instead of getting a huge set and then filtering at the end. If you apply the condition at join time, your set will be smaller and your response time better:
```
select jj.id, jj.imgtitle, jj.alias
from jjtable jj
inner join jjtable_catg jjc on jj.catid = jjc.cid
left join jjtable_map jjtm_name on jj.id = jjtm_name.picid and jjtm_name.delete_flag = 0
...
``` | Very slow mysql query: left join multiple table and where clause for each table | [
"",
"mysql",
"sql",
""
] |
I have an insert query I'm currently using in a stored procedure that works as it should. It is as follows:
```
insert into tblAgentRank (AgtID, RankType, Rank, TimeFrame, RankValue)
select AgtID, 8, RANK() OVER (order by SUM(ColPrem*ModeValue) DESC) as Rank, 'Y', SUM(ColPrem*ModeValue)
from tblAppsInfo
where CompanyID in (select CompanyID from tblCompanyInfo
where DeptID = 7)
group by AgtID
order by Rank
```
This creates a total for each agent, and ranks them against their peers.
---
I need to create a similar statement that does the following calculations:
* If PolicyTypeID = 4, calculate SUM(ColPrem\*ModeValue) \* 0.07
* Else, calculate SUM((ColPrem\*ModeValue) + (ExcessPrem \* 0.07))
* Sum these two statements together for each agent, and then do the rank on the total.
I could easily do one of those, as demonstrated by the 1st query. My mental block is stemming from needing to do it on a case by case basis based on PolicyTypeID. | I think this `select` statement does the calculation that you want:
```
select AgtID, 8,
       RANK() OVER (order by SUM(case when PolicyTypeID = 4 then ColPrem*ModeValue * 0.07
                                      else ColPrem*ModeValue + ExcessPrem * 0.07
                                 end) DESC) as RANK2,
SUM(case when PolicyTypeID = 4 then ColPrem*ModeValue * 0.07
else ColPrem*ModeValue + ExcessPrem * 0.07
end)
from tblAppsInfo
where CompanyID in (select CompanyID from tblCompanyInfo where DeptID = 7)
group by AgtID
order by RANK2;
``` | You just need to use a case statement inside the sum.
```
SUM(CASE WHEN PolicyTypeID = 4 THEN ColPrem*ModeValue * 0.07
         ELSE (ColPrem*ModeValue) + (ExcessPrem * 0.07) END)
``` | SQL RANK() Calculation | [
"",
"sql",
"sql-server-2005",
""
] |
I am learning to use regular expressions and I'm using them to limit the results of a search query with REGEXP\_LIKE in Oracle 11. As an example of the available data, I have the following:
```
Plan navegación 200 MB
Plan navegación 1 GB
Plan navegación 1 GB
Plan de navegacion 3G
Plan de navegacion 4G
Plan de navegacion 3G Empresarial
Plan de navegacion 4G Empresarial
Plan de servicios 3G
Plan de servicios 4G
Plan navegación Datos
```
I want the result limited to the following (only 3G, 4G):
```
Plan de navegacion 3G
Plan de navegacion 4G
Plan de navegacion 3G Empresarial
Plan de navegacion 4G Empresarial
```
I am using the following search patterns, but I do not get properly filtered results:
* Upper(PLAN\_GSM),'(NAVEGA){1}|(3G|4G|5G)'
* Upper(PLAN\_GSM),'*((NAVEGA)+)*(3G|4G)+'
I have done several tests and cannot find the solution. Could someone give me some hints? | You could simply use LIKE, as below:
```
select *
from mytable
where PLAN_GSM LIKE 'Plan de navegacion _G%';
```
or use REGEXP\_LIKE, as below:
```
select *
from mytable
where REGEXP_LIKE(PLAN_GSM, '^Plan de navegacion (3|4|5)G');
```
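The anchored pattern can also be checked outside the database with Python's `re` module (sample strings taken from the question; `[345]G` is equivalent to the `(3|4|5)G` alternation):

```python
import re

# '^' anchors the match at the start of the string, so "Plan de servicios 3G"
# and the plain "Plan navegación ..." variants are rejected.
pattern = re.compile(r"^Plan de navegacion [345]G")

plans = [
    "Plan navegación 200 MB",
    "Plan de navegacion 3G",
    "Plan de navegacion 4G Empresarial",
    "Plan de servicios 3G",
]
matches = [p for p in plans if pattern.search(p)]
print(matches)  # ['Plan de navegacion 3G', 'Plan de navegacion 4G Empresarial']
```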
`SQL Fiddle demo`
**Reference**:
[Oracle/PLSQL: REGEXP\_LIKE Condition on Tech on the Net](http://www.techonthenet.com/oracle/regexp_like.php) | You can use this:
```
SELECT * FROM mytable
WHERE REGEXP_LIKE(mycolumn, '\APlan de navegacion \dG.*\z', 'c');
```
* `\d` represents a digit
* `\A` is the beginning of the string
* `.*` greedily matches any characters
* `\z` is the end of the string | Regular expression Oracle | [
"",
"sql",
"regex",
"oracle",
""
] |
I have two tables that look like:
```
table A:
ID, target_date, target_ID
table B:
ID, target_ID, begin_date, end_date
```
Table B may have multiple records for the same target\_ID but different date ranges. I am interested in a SQL query that is able to return target\_dates that are not within the begin\_date and end\_date ranges for the given target\_ID. | There is a trick to this. Look for the ones that match, using a `left join`, and then choose the ones that don't match:
```
select a.*
from tablea a left join
tableb b
on a.target_id = b.target_id and
a.target_date between b.begin_date and b.end_date
where b.target_id is null;
```
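Here is a tiny runnable sketch of the anti-join trick using Python's sqlite3 (toy rows and dates invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tablea(id INTEGER, target_date TEXT, target_id INTEGER)")
cur.execute("CREATE TABLE tableb(id INTEGER, target_id INTEGER, begin_date TEXT, end_date TEXT)")
cur.executemany("INSERT INTO tablea VALUES (?, ?, ?)",
                [(1, "2014-01-05", 10), (2, "2014-02-20", 10)])
cur.execute("INSERT INTO tableb VALUES (1, 10, '2014-01-01', '2014-01-31')")

# Rows with no matching range come back with NULL b columns; keep only those.
rows = cur.execute("""
    SELECT a.id FROM tablea a
    LEFT JOIN tableb b
      ON a.target_id = b.target_id
     AND a.target_date BETWEEN b.begin_date AND b.end_date
    WHERE b.target_id IS NULL
""").fetchall()
print(rows)  # [(2,)] -- only the date outside every range survives
```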
You can express this in several different ways. For instance, `not exists` may also seem natural:
```
select a.*
from tablea a
where not exists (select 1
from tableb b
where a.target_id = b.target_id and
a.target_date between b.begin_date and b.end_date
);
```
Note: I am using `between` for these comparisons as a convenient shorthand (to match the language you use in the question). Often with dates, explicit use of `<`, `<=`, `>`, or `>=` is preferred. | ```
SELECT A.target_date
FROM A LEFT OUTER JOIN B
ON (A.target_ID=B.target_ID
AND A.target_date>=B.begin_date
AND A.target_date<=B.end_date)
WHERE B.begin_date IS NULL
``` | SQL: Selecting between dates | [
"",
"sql",
"database",
""
] |
I'm having issues making this query work. Can anyone help?
I have two sets of queries that I want to merge together.
The first one is:
```
SELECT 'Total Calls Hitting the Office' as "Report"
,Call_date as "Date"
      ,SUM([Calls]) as "Total"
,'Calls' as "Units"
FROM Call
GROUP BY Call_date
```
And this query is generated using two tables:
```
select 'Total Calls Coming through the Call Centre' as "Report"
,Interval_Start_date as "Date"
,sum(MyCS.Incoming + MyCA.Abandoned)/4 as "Total"
, 'Calls' as " Units"
From call_callsummary as MyCS
left join Call_abandonedcalls as MyCA
on MyCS.Interval_Start_date = MyCA.CallAbandoned_date
where MyCS.Category <> 'Application'
```
My third query is meant to subtract the second query's TOTAL from the first query's TOTAL.
I have tried the query below, merging the three tables together, but I am getting results about 10 times larger than the desired result should be.
```
select 'Total Calls Dealt Within the office' as "Report"
,Call_date as "Date"
,sum(MyC.[Calls]) - Sum(MyCS.Incoming + MyCA.Abandoned)/4 as "Total"
,'Calls' as " Units"
FROM Call as MyC inner join call_callsummary as MyCS
on MyCS.Interval_Start_date = MyC.Call_date
inner join Call_abandonedcalls as MyCA
on MyCS.Interval_Start_date = MyCA.CallAbandoned_date
where MyCS.Category <> 'Application'
group by Call_date
``` | ```
WITH TotalCallsHittingTheOffice AS (
SELECT Call_date as "Date"
,SUM(Calls) as "Total"
FROM Call
GROUP BY Call_date
)
,TotalCallsComingThroughTheCallCentre AS (
    SELECT Interval_Start_date as "Date"
          ,SUM(MyCS.Incoming + MyCA.Abandoned)/4 as "Total"
    FROM call_callsummary as MyCS
    LEFT JOIN Call_abandonedcalls as MyCA
        ON MyCA.CallAbandoned_date = MyCS.Interval_Start_date
    WHERE MyCS.Category <> 'Application'
GROUP BY Interval_Start_date
)
SELECT 'Total Calls Dealt Within the office' AS "Report"
,TotalCallsHittingTheOffice."Date" AS "Date"
      ,SUM(TotalCallsHittingTheOffice."Total" - COALESCE(TotalCallsComingThroughTheCallCentre."Total", 0)) AS "Total"
,'Calls' AS " Units"
FROM TotalCallsHittingTheOffice
LEFT JOIN TotalCallsComingThroughTheCallCentre
ON TotalCallsHittingTheOffice."Date" = TotalCallsComingThroughTheCallCentre."Date"
GROUP BY TotalCallsHittingTheOffice."Date"
``` | Most databases have `minus` or `except` to subtract two queries. This sample uses `except` (on Oracle you would use `minus` instead):
```
SELECT Call_date as "Date"
     , SUM([Calls]) as "Total"
, 'Calls' as "Units"
FROM Call
GROUP
BY Call_date
except
select Interval_Start_date as "Date"
, sum(MyCS.Incoming + MyCA.Abandoned)/4 as "Total"
, 'Calls' as " Units"
From call_callsummary as MyCS
left
join Call_abandonedcalls as MyCA
on MyCS.Interval_Start_date = MyCA.CallAbandoned_date
where MyCS.Category <> 'Application'
```
If you want to subtract the *values* instead of the rows, use the query below. It joins the rows on `date` and subtracts their values:
```
select x.date
, x.total - y.total
, x.units - y.units
from ( SELECT Call_date as "Date"
            , SUM([Calls]) as "Total"
, 'Calls' as "Units"
FROM Call
GROUP
BY Call_date
) x
join ( select Interval_Start_date as "Date"
, sum(MyCS.Incoming + MyCA.Abandoned)/4 as "Total"
, 'Calls' as " Units"
From call_callsummary as MyCS
left
join Call_abandonedcalls as MyCA
on MyCS.Interval_Start_date = MyCA.CallAbandoned_date
         where MyCS.Category <> 'Application'
) y
on x.date = y.date
``` | sql for subtracting two queries | [
"",
"sql",
"sql-server",
""
] |
I have two tables:-
student table
```
std_ID|std_Name
------+--------
1 | Jhon
2 | Peter
3 | Mic
4 | James
```
studentBatch Table
```
B_std_ID|B_Batch_ID
--------+-------------
1 | 3
2 | 6
3 | 7
```
I want the students who are not enrolled in a batch.
I want this:
```
std_ID|std_Name
------+--------
4 | James
```
I tried this code:
```
SELECT std_ID, std_Name FROM student , studentBatch WHERE std_ID <> B_std_ID;
```
but it didn't work. Please help me with this. | This should do what you need:
```
SELECT std_ID, std_Name FROM student WHERE std_ID not in (select B_std_ID from studentBatch )
``` | ```
select std_id
from student
where std_id not in (select B_std_ID from studentbatch)
``` | SQL, SELECT WHERE NOT EQUAL | [
"",
"mysql",
"sql",
""
] |
I want to get the SQL *LOAD\_FILE* function to work, and I have read every single question/answer + documentation about this, but here is what's been happening.
When I want to *LOAD\_FILE* from my home directory:
```
mysql> SELECT LOAD_FILE('/home/myuser/somefile.txt');
+----------------------------+
| LOAD_FILE('/home/myuser/somefile.txt') |
+----------------------------+
| NULL |
+----------------------------+
1 row in set (0.00 sec)
```
So after getting this, I thought maybe the problem is that MySQL cannot access my home directory. And I tried running this, which worked fine:
```
SELECT LOAD_FILE('/etc/mysql/my.cnf');
```
Then `SELECT LOAD_FILE('/etc/passwd');` worked fine as well.
So I said gotcha, it's a file/folder read/ownership permission problem. So, I moved my file into /etc/mysql/ but then it still didn't work. I've tried `chown mysql:mysql somefile.txt` but still, I had no luck:
```
mysql> SELECT LOAD_FILE('/etc/mysql/somefile.txt');
+----------------------------+
| LOAD_FILE('/etc/mysql/somefile.txt') |
+----------------------------+
| NULL |
+----------------------------+
1 row in set (0.00 sec)
```
P.S.
**1.** All files are readable by all user groups, so no need to chmod. But I've even tried *chmod 777* if you'd like to ask.
**2.** I checked, secure-file-priv variable is not set up in MySQL, so no, LOAD\_FILE is not restricted to any path.
Any ideas what might be the problem here? | I found out that it has to do with AppArmor. I disabled AppArmor for MySQL and it worked.
For people having the same problem, please read here: <http://www.cyberciti.biz/faq/ubuntu-linux-howto-disable-apparmor-commands/> | Per [Documentation](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_load-file)
> To use this function, the file must be located on the server host.
>
> You must specify the full path name to the file.
>
> You must have the FILE privilege.
>
> The file must be readable by all
>
> File size should be less than `max_allowed_packet` bytes.
>
> If the secure\_file\_priv system variable is set to a nonempty directory
> name, the file to be loaded must be located in that directory.
>
> If the file does not exist or cannot be read because one of the
> preceding conditions is not satisfied, **the function returns NULL**.
So check all the above mentioned condition satisfies.
**EDIT:**
Not sure whether you understood properly.
1. Make sure the file's parent directory has execute permission. So if `somefile.txt` sits under the `myuser` directory, you must have execute permission on the `myuser` directory.
2. `You must have the FILE privilege` means the FILE privilege must be granted explicitly using `GRANT FILE ON *.* TO user@localhost`
3. Flush the privileges
4. Logout and Login back and check whether it's working or not.
See this post [MySQL LOAD\_FILE() loads null values](https://stackoverflow.com/questions/18069054/mysql-load-file-loads-null-values) | MySQL LOAD_FILE returns NULL | [
"",
"mysql",
"sql",
"chmod",
"chown",
"mysql-loadfile",
""
] |
I'm trying to build replace statements, but I'm getting this error:
The replace function requires 3 argument(s).
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
SQL script:
```
select ccc.*,
replace(replace(replace(replace(replace(bbb.Text,
'[Change]',convert(varchar,CAST(coalesce(ccc.change,0) as decimal(10,2)))),
'[CurrentAmount]','$'+convert(varchar,CAST(coalesce(ccc.currentamount,0) as money),1),
'[Increase]',convert(varchar,CAST(coalesce(ccc.increase,0) as decimal(10,2))))+ '%',
'[AmountIncrease]',convert(varchar,CAST(coalesce(ccc.amountincrease,0) as decimal(10,2))))+ '%',
'[AmountDecrease]',convert(varchar,CAST(coalesce(ccc.amountdecrease,0) as decimal(10,2))))+ '%'
) Condition
from CCCCheck ccc with (NOLOCK)
inner join BBBCheck bbb with (NOLOCK)on...
``` | ```
SELECT ccc.*,
REPLACE(
REPLACE(
REPLACE(
REPLACE(
REPLACE(bbb.Text,
'[Change]', CONVERT(varchar, CAST(COALESCE(ccc.change, 0) AS decimal(10, 2)))),
'[CurrentAmount]', '$' + CONVERT(varchar, CAST(COALESCE(ccc.currentamount, 0) AS money), 1)),
'[Increase]', CONVERT(varchar, CAST(COALESCE(ccc.increase, 0) AS decimal(10, 2))) + '%'),
'[AmountIncrease]', CONVERT(varchar, CAST(COALESCE(ccc.amountincrease, 0) AS decimal(10, 2))) + '%'),
'[AmountDecrease]', CONVERT(varchar, CAST(COALESCE(ccc.amountdecrease, 0) AS decimal(10, 2))) + '%')
as Condition FROM CCCCheck ccc WITH (NOLOCK)
``` | There are three places like this: `)))) + '%'`
Move that last parenthesis over: `))) + '%')` | Replace statement | [
"",
"sql",
"sql-server",
"replace",
""
] |