Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I have a datetime column in one of my tables (team\_opps) called start\_date.
I am trying to write methods in my model that allow me to classify them as Monday, Tuesday, etc... opportunities.
```
def self.friday_team_opps
where('WEEKDAY(start_date) = ?', 4)
end
```
In my view I am trying to call a .each on it.
```
<% TeamOpp.friday_team_opps.each do |team_opp| %>
<%= render team_opp, :team_opp => :team_opp %>
<% end %>
```
Error is:
```
SQLite3::SQLException: no such function: WEEKDAY: SELECT "team_opps".* FROM "team_opps" WHERE (WEEKDAY(start_date) = 4)
```
Thanks | First of all, you need to define the method on the TeamOpp class as a class method: `def self.friday_team_opps`.
Moreover, you can't call methods on the column since it would require ActiveRecord to load all the data in your table and then call the Ruby method on that data. What you can do is use direct SQL functions, like for example MySQL's [WEEKDAY](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_weekday) (monday = 0, tuesday = 1, etc.):
```
def self.friday_team_opps
where("WEEKDAY(team_opps.start_date) = ?", 4)
end
```
In SQLite, you can use the [strftime](http://www.sqlite.org/lang_datefunc.html) function (sunday = 0, monday = 1, etc.):
```
def self.friday_team_opps
where("strftime('%w', start_date) = ?", '5')
end
``` | You define it as an instance method
```
def friday_team_opps
```
And it should be defined as a class method
```
def self.friday_team_opps
``` | Trying to define method that makes datetime entries by day of week | [
"",
"sql",
"ruby-on-rails",
""
] |
I want to import data from MS SQL Server and apply linear regression to the data in R, but I am not sure how to manipulate the data from SQL Server so that I can do the regression. My table in SQL Server looks like this:
```
Pack Cubes Name Sales
1001 1.2 A 10
1001 1.2 B 12
1002 0.9 A 8
1002 0.9 B 5
1002 0.9 C 12
1003 1.5 A 5
1003 1.5 C 10
1004 0.8 B 8
1004 0.8 C 10
1005 1.3 A 5
1005 1.3 B 8
1005 1.3 C 12
```
If I manipulated the data in Excel for a regression model, it would look like this:
```
Cubes A B C
1.2 10 12 0
0.9 8 5 12
1.5 5 0 10
0.8 0 8 10
1.3 5 8 12
```
A, B and C are my dependent variables and Cubes my independent variable. The Pack column in my SQL table is just a reference. My SQL connection to a DSN looks like this (which works perfectly):
```
library(RODBC)
myconn <- odbcConnect("sqlserver")
data <- sqlQuery(myconn,"select Cubes,Name,Sales from mytable")
```
For the regression I tried this (which is wrong):
```
summary(data)
reg<-lm(Cubes~Sales,data)
summary(reg)
```
How can I manipulate the data from SQL Server as I would in Excel? | Try `reshape` from base R or the reshape package:
```
wide <- reshape(data, v.names = "Sales", idvar = "Cubes",
timevar = "Name", direction = "wide")
``` | I would use `dcast` from the `reshape2` package. Note that `dcast` leads to `NA` for non-existing combinations of `Name` and `Sales`. You need to manually change this to `0`:
```
library(reshape2)
res = dcast(data, Cubes ~ Name, value.var = 'Sales')
res[is.na(res)] = 0
res
Cubes A B C
1 0.8 0 8 10
2 0.9 8 5 12
3 1.2 10 12 0
4 1.3 5 8 12
5 1.5 5 0 10
``` | Linear regression in R with data from Sql server | [
"",
"sql",
"sql-server",
"r",
"excel",
"linear-regression",
""
] |
Suppose I have a table called `Events` with data similar to the following:
```
ID | Name | ParentEvent
----+----------------+-----------------
0 | Happy Event | NULL
1 | Sad Event | NULL
2 |Very Happy Event| 0
3 | Very Sad Event | 1
4 | Happiest Event | 2
5 |Unpleasant Event| 1
```
How can I query this table to get results returned in a way such that
* Events that have a non-null `ParentEvent` appear directly after the event whose `ID` matches their `ParentEvent`
* Events with a null `ParentEvent` have a depth of 0. If an event has a depth of *n*, any event that it is a parent to has a depth of *n + 1*.
* As long as the results satisfy the previous two conditions, the order the results appear in does not matter.
For the table given above, I would like to get a result set that looks like
```
ID | Name | ParentEvent | Depth |
----+----------------+--------------+--------+
0 | Happy Event | NULL | 0 |
2 |Very Happy Event| 0 | 1 |
4 | Happiest Event | 2 | 2 |
1 | Sad Event | NULL | 0 |
3 | Very Sad Event | 1 | 1 |
5 |Unpleasant Event| 1 | 1 |
```
How can I construct an SQL query to get this result set? I am using T-SQL, but if you can do this in any flavor of SQL please go ahead and answer. | The following queries all return the exact result set you asked for. All of these work by calculating the full path to the root nodes, and using some technique for making that path able to be ordered by.
SQL Server 2008 and up. Here, by converting to the `hierarchyid` data type, SQL Server handles the ordering properly.
```
WITH Data AS (
SELECT
ID,
Name,
ParentID,
Depth = 0,
Ancestry = '/' + Convert(varchar(max), ID) + '/'
FROM
hierarchy
WHERE
ParentID IS NULL
UNION ALL
SELECT
H.ID,
H.Name,
H.ParentID,
D.Depth + 1,
Ancestry = D.Ancestry + Convert(varchar(max), H.ID) + '/'
FROM
Data D
INNER JOIN hierarchy H
ON H.ParentID = D.ID
)
SELECT
ID,
Name,
ParentID,
Depth
FROM Data
ORDER BY Convert(hierarchyid, Ancestry);
```
SQL Server 2005 and up. We can convert the ID values to string and pad them out so they sort.
```
WITH Data AS (
SELECT
ID,
Name,
ParentID,
Depth = 0,
Ancestry = Right('0000000000' + Convert(varchar(max), ID), 10)
FROM
hierarchy
WHERE
ParentID IS NULL
UNION ALL
SELECT
H.ID,
H.Name,
H.ParentID,
Depth + 1,
Ancestry = D.Ancestry + Right('0000000000' + Convert(varchar(max), H.ID), 10)
FROM
Data D
INNER JOIN hierarchy H
ON H.ParentID = D.ID
)
SELECT
ID,
Name,
ParentID,
Depth
FROM Data
ORDER BY Ancestry;
```
Also we can use `varbinary` (otherwise, this is the same as the prior query):
```
WITH Data AS (
SELECT
ID,
Name,
ParentID,
Depth = 0,
Ancestry = Convert(varbinary(max), Convert(varbinary(4), ID))
FROM
hierarchy
WHERE
ParentID IS NULL
UNION ALL
SELECT
H.ID,
H.Name,
H.ParentID,
Depth + 1,
Ancestry = D.Ancestry + Convert(varbinary(4), H.ID)
FROM
Data D
INNER JOIN hierarchy H
ON H.ParentID = D.ID
)
SELECT
ID,
Name,
ParentID,
Depth
FROM Data
ORDER BY Ancestry;
```
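The padded-ancestry trick isn't specific to SQL Server. As a sanity check, here is a minimal sketch of the same idea using SQLite's `WITH RECURSIVE` (via Python's `sqlite3`), with the `Events` data from the question; `printf` stands in for the `Right('0000000000' + ...)` padding:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Events (ID INTEGER, Name TEXT, ParentEvent INTEGER);
INSERT INTO Events VALUES
  (0, 'Happy Event',      NULL),
  (1, 'Sad Event',        NULL),
  (2, 'Very Happy Event', 0),
  (3, 'Very Sad Event',   1),
  (4, 'Happiest Event',   2),
  (5, 'Unpleasant Event', 1);
""")

rows = conn.execute("""
WITH RECURSIVE Data AS (
    -- anchor: root events; ancestry is the zero-padded ID
    SELECT ID, Name, ParentEvent, 0 AS Depth,
           printf('%010d', ID) AS Ancestry
    FROM Events
    WHERE ParentEvent IS NULL
    UNION ALL
    -- recursive step: append each child's padded ID to its parent's path
    SELECT E.ID, E.Name, E.ParentEvent, D.Depth + 1,
           D.Ancestry || printf('%010d', E.ID)
    FROM Data D
    JOIN Events E ON E.ParentEvent = D.ID
)
SELECT ID, Depth FROM Data ORDER BY Ancestry
""").fetchall()

print(rows)  # [(0, 0), (2, 1), (4, 2), (1, 0), (3, 1), (5, 1)]
```

Because each row's path is prefixed by its parent's path, a plain string sort on Ancestry yields exactly the parent-then-children ordering the question asks for.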
SQL Server 2000 and up, allowing a tree a maximum of 800 levels deep:
```
SELECT
*,
Ancestry = CASE WHEN ParentID IS NULL THEN Convert(varchar(8000), Right('0000000000' + Convert(varchar(10), ID), 10)) ELSE '' END,
Depth = 0
INTO #hierarchy
FROM hierarchy;
WHILE @@RowCount > 0 BEGIN
UPDATE H
SET
H.Ancestry = P.Ancestry + Right('0000000000' + Convert(varchar(8000), H.ID), 10),
H.Depth = P.Depth + 1
FROM
#hierarchy H
INNER JOIN #hierarchy P
ON H.ParentID = P.ID
WHERE
H.Ancestry = ''
AND P.Ancestry <> '';
END;
SELECT
ID,
Name,
ParentID,
Depth
FROM #hierarchy
ORDER BY Ancestry;
DROP TABLE #hierarchy;
```
The same `varbinary` conversion can be done, allowing up to 2000 levels deep. | This is just an addition to M.Ali's answer. I realize the OP said, "As long as the results satisfy the previous two conditions, the order the results appear in does not matter". However, by adding a column to the query that keeps track of the hierarchy path, it is possible to display the results the same as in the question.
```
;WITH CTE
AS
(
SELECT
ID,
NAME,
ParentID,
0 as Depth,
convert(varbinary(max), convert(varbinary(2), ID)) as ThePath
FROM hierarchy
WHERE ParentID is null
UNION ALL
SELECT
h.ID,
h.NAME,
h.ParentID,
cte.Depth + 1,
cte.ThePath + convert(varbinary(max), convert(varbinary(2), h.ID)) as ThePath
FROM hierarchy AS h
INNER JOIN CTE as cte
ON h.ParentID = cte.ID
)
SELECT
ID,
NAME,
ParentID,
Depth,
ThePath
FROM CTE
ORDER BY ThePath
```
This displays the results like this.
```
ID NAME ParentID Depth ThePath
----------- ------------------------------ ----------- ----------- ---------------
0 Happy Event NULL 0 0x0000
2 Very Happy Event 0 1 0x00000002
4 Happiest Event 2 2 0x000000020004
1 Sad Event NULL 0 0x0001
3 Very Sad Event 1 1 0x00010003
5 Unpleasant Event 1 1 0x00010005
``` | Ordering results in SQL select query | [
"",
"sql",
"sql-server",
"t-sql",
"select",
""
] |
I've come across a problem while learning transaction isolation levels in SQL server.
The problem is that after I run this code (and it finishes without errors):
```
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT (...)
WAITFOR DELAY '00:00:5'
SELECT (...)
WAITFOR DELAY '00:00:3'
COMMIT TRAN T1;
```
I want to run this query:
```
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT (...)
INSERT (...)
COMMIT TRANSACTION T2;
```
But it just says "Executing query", and does nothing.
I think it's because the lock on the tables somehow continues after the first transaction has been finished. Can someone help?
Of course the selects and the inserts refer to the same tables. | Either the first tran is still open (close the window to make sure it is not), or some other tran is open (`exec sp_who2`). You can't suppress X-locks taken by DML because SQL Server needs those locks during rollback. | @usr offers good possibilities.
A related specific possibility is that you selected only part of the first transaction to execute while tinkering - i.e. executed `BEGIN TRAN T1` and never executed `COMMIT TRAN T1`. It happens - part of Murphy's Law I think. Try executing just `COMMIT TRAN T1`, then re-trying the second snippet.
The following worked just fine for me on repeated, complete executions in a single session:
```
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT * from tbl_A
WAITFOR DELAY '00:00:5'
SELECT * from tbl_B
WAITFOR DELAY '00:00:3'
COMMIT TRAN T1;
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT tbl_A (ModifiedDate) values (GETDATE())
INSERT tbl_B (ModifiedDate) values (GETDATE())
INSERT tbl_A (ModifiedDate) select top 1 ModifiedDate from tbl_A
INSERT tbl_B (ModifiedDate) select top 1 ModifiedDate from tbl_B
COMMIT TRANSACTION T2;
``` | sql server a simple query takes forever to run due to transaction isolation level | [
"",
"sql",
"sql-server",
"transactions",
"isolation-level",
"transaction-isolation",
""
] |
I have a varchar input variable that contains a comma-delimited list of integers that appear in one of the columns in my select statement. I know how to split the list and use it in a where clause, for example:
```
DECLARE @ListOfAges Varchar
SET @ListOfAges = '15,20,25'
select p.Name, a.Age
from People p
left join Ages a on p.AgeKey = a.AgeKey
Where a.Age in (dbo.Split(@ListOfAges))
```
What I'd like to do is if the @ListOfAges var is null, to select everything, so something like this:
```
select p.Name, a.Age
from People p
left join Ages a on p.AgeKey = a.AgeKey
Where (@ListOfAges = null OR a.Age in (dbo.Split(@ListOfAges)))
```
Is there a way to do this that performs better? Possibly without using the IN clause, or without it in the WHERE clause? I wasn't sure if including this in the join would be possible, or if an entirely different approach is recommended (such as not using comma-separated input variables).
Thanks! | The function call in the WHERE clause is slowing you down. This approach is not dramatically better, but should help a bit:
```
WHERE (@ListOfAges is NULL OR
      CHARINDEX(',' + TRIM(STR(A.AGE)) + ',', ',' + @ListOfAges + ',') > 0)
```
Again, not an ideal approach, but should perform better than the function call.
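The delimiter-wrapping idea (surrounding both the value and the list with commas, so that `20` cannot falsely match inside `120`) can be sanity-checked outside SQL Server; in this sketch SQLite's `instr` stands in for `CHARINDEX`, and the table name `Ages` is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Ages (Age INTEGER)")
conn.executemany("INSERT INTO Ages VALUES (?)",
                 [(15,), (20,), (25,), (120,), (5,)])

# Wrap both sides in commas so only whole tokens match
ages = [r[0] for r in conn.execute("""
SELECT Age FROM Ages
WHERE instr(',' || ? || ',', ',' || CAST(Age AS TEXT) || ',') > 0
ORDER BY Age
""", ("15,20,25",))]

print(ages)  # [15, 20, 25] -- 120 and 5 are correctly excluded
```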
I would try the following to optimize performance:
* Create a temporary table with the list of ages.
* LEFT JOIN this table with Ages on a.Age using alias TT
* WHERE CLAUSE should be
WHERE @ListOfAges IS NULL OR TT.Age IS NOT NULL | You could use a case statement within the where clause if this is Sql 2005 or above...
<http://msdn.microsoft.com/en-us/library/ms181765.aspx> | SQL for including rows based on list in input variable or all rows if variable is null | [
"",
"sql",
"sql-server",
""
] |
I've got 2 tables that have to be joined together without them joining up together.
Table 1.
```
130, 'HANSEN', 'ZIP1'
130, 'HANSEN', 'ZIP2'
130, 'HANSEN', 'ZIP3'
120, 'HANSEN', 'ZIP4'
120, 'HANSEN', 'ZIP5'
```
Table 2.
```
130, 'HANSEN', 'ZIP1'
130, 'HANSEN', 'ZIP2'
130, 'HANSEN', 'ZIP3'
120, 'HANSEN', 'ZIP4'
120, 'HANSEN', 'ZIP5'
```
Wanted outcome.
```
130, 'HANSEN', 'ZIP1'
130, 'HANSEN', 'ZIP2'
130, 'HANSEN', 'ZIP3'
120, 'HANSEN', 'ZIP4'
120, 'HANSEN', 'ZIP5'
130, 'HANSEN', 'ZIP1'
130, 'HANSEN', 'ZIP2'
130, 'HANSEN', 'ZIP3'
120, 'HANSEN', 'ZIP4'
120, 'HANSEN', 'ZIP5'
```
test script if someone is willing to help.
```
DROP TABLE TEST1;
DROP TABLE TEST2;
CREATE TABLE TEST1 ( ID INTEGER ,key VARCHAR(50),VALUE1 VARCHAR(50));
CREATE TABLE TEST2 ( ID INTEGER ,key VARCHAR(50),VALUE2 VARCHAR(50));
INSERT INTO TEST1 VALUES (130, 'HANSEN', 'STREET1');
INSERT INTO TEST1 VALUES (130, 'HANSEN', 'STREET2');
INSERT INTO TEST1 VALUES (130, 'HANSEN', 'STREET3');
INSERT INTO TEST1 VALUES (120, 'HANSEN', 'STREET5');
INSERT INTO TEST1 VALUES (120, 'HANSEN', 'STREET6');
INSERT INTO TEST2 VALUES (130, 'HANSEN', 'ZIP1');
INSERT INTO TEST2 VALUES (130, 'HANSEN', 'ZIP2');
INSERT INTO TEST2 VALUES (130, 'HANSEN', 'ZIP3');
INSERT INTO TEST2 VALUES (120, 'HANSEN', 'ZIP4');
INSERT INTO TEST2 VALUES (120, 'HANSEN', 'ZIP5');
```
**Note that the actual data are not duplicates** | Perform a union between the two tables:
```
select * from test1
union
select * from test2
```
If you want to retain duplicates in the result set use UNION ALL.
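A quick way to see the difference with the question's test script is to compare only the id/key columns, which do repeat across the two tables (sketched here in SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TEST1 (ID INTEGER, key VARCHAR(50), VALUE1 VARCHAR(50));
CREATE TABLE TEST2 (ID INTEGER, key VARCHAR(50), VALUE2 VARCHAR(50));
INSERT INTO TEST1 VALUES (130,'HANSEN','STREET1'),(130,'HANSEN','STREET2'),
                         (130,'HANSEN','STREET3'),(120,'HANSEN','STREET5'),
                         (120,'HANSEN','STREET6');
INSERT INTO TEST2 VALUES (130,'HANSEN','ZIP1'),(130,'HANSEN','ZIP2'),
                         (130,'HANSEN','ZIP3'),(120,'HANSEN','ZIP4'),
                         (120,'HANSEN','ZIP5');
""")

# UNION ALL keeps every row; UNION removes duplicates
all_rows = conn.execute(
    "SELECT ID, key FROM TEST1 UNION ALL SELECT ID, key FROM TEST2").fetchall()
distinct = conn.execute(
    "SELECT ID, key FROM TEST1 UNION SELECT ID, key FROM TEST2").fetchall()

print(len(all_rows), len(distinct))  # 10 2
```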
**SQL Fiddle:** <http://sqlfiddle.com/#!2/32bd6/2> | You are looking for the UNION clause. To keep duplicates you would have to use UNION ALL. To eliminate them use UNION without ALL.
```
select id, key, value1 from test1
union all
select id, key, value2 from test2;
``` | Output in SQL Table : without using JOIN | [
"",
"sql",
"join",
""
] |
I have a basic table named `animals` with two fields `name` and `type`. The field `type` is an enum field with these values: `enum('dog','cat','horse','zebra','lion')`. I am trying to run a query that counts the number of each species and names that species; for example, an expected result would show something like `dog=2, cat=2`, etc. In the query below I am able to count the overall total of `animals`, but not break it down by species. How can I do so? [SQLFIDDLE](http://sqlfiddle.com/#!2/0cccc7/1)
Query:
```
select COUNT(type) from animals
```
Table Schema:
```
CREATE TABLE animals
(
name varchar(20),
type enum('dog','cat','horse','zebra','lion')
);
INSERT INTO animals
(name, type)
VALUES
('Bertha', 'horse'),
('Louis', 'cat'),
('Gina', 'cat'),
('Rafa', 'lion'),
('lilo', 'dog'),
('kilo', 'dog'),
('stripy', 'zebra');
``` | Use `GROUP BY` and `COUNT` for counting animals by type.
Try this:
```
SELECT a.type, COUNT(1) AS Cnt
FROM animals a
GROUP BY a.type;
```
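If you want to check this without the fiddle, the question's schema runs almost unchanged in SQLite (there is no enum type there, so `type` becomes plain TEXT); a small sketch in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE animals (name VARCHAR(20), type TEXT);  -- enum -> TEXT in SQLite
INSERT INTO animals (name, type) VALUES
  ('Bertha', 'horse'), ('Louis', 'cat'), ('Gina', 'cat'),
  ('Rafa', 'lion'), ('lilo', 'dog'), ('kilo', 'dog'), ('stripy', 'zebra');
""")

# One row per type, with its count
counts = dict(conn.execute(
    "SELECT a.type, COUNT(1) AS Cnt FROM animals a GROUP BY a.type"))
print(counts["dog"], counts["cat"], counts["zebra"])  # 2 2 1
```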
Check this [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/01c38/1)
**OUTPUT**
```
| TYPE | CNT |
|-------|-----|
| dog | 2 |
| cat | 2 |
| horse | 1 |
| zebra | 1 |
| lion | 1 |
``` | Do you mean
```
select type, COUNT(*) from animals
group by type
```
[SqlFiddle](http://sqlfiddle.com/#!2/0cccc7/4) | Count items in table, table field enum type involved | [
"",
"mysql",
"sql",
"select",
"count",
"group-by",
""
] |
I am trying to get the booking count for each of the past 12 months, but for months with no bookings I need to get NULL or 0. My query below skips the months with no bookings:
```
SELECT bs.date,CONCAT(MONTHNAME(bs.date),' ', YEAR(bs.date)) as 'Month',count(*) AS Bookings, CONCAT(us.firstName,' ', us.lastName) as 'Name' from bookings bs RIGHT JOIN users us ON bs.clientID = us.recNo
WHERE bs.managerID = 6 AND bs.clientID = 1900
AND date > DATE_SUB(CURDATE(), INTERVAL 11 MONTH) AND date <= CURDATE() GROUP BY YEAR(date), month(date)
```
Can someone please help? | You can do it by hardcoding the 12 months in there and CROSS JOINing with `users`, then LEFT JOINing with `bookings` (LEFT JOIN is also generally preferred over RIGHT JOIN for readability; the two are symmetric to the optimizer).
So, the query below:
```
SELECT bs.date,
CONCAT(MONTHNAME(hardcoded.date),' ', YEAR(hardcoded.date)) as 'Month',
count(bs.date) AS Bookings,
CONCAT(us.firstName,' ', us.lastName) as 'Name'
from
(SELECT CURDATE() as date UNION
SELECT CURDATE()-INTERVAL 1 MONTH UNION
SELECT CURDATE()-INTERVAL 2 MONTH UNION
SELECT CURDATE()-INTERVAL 3 MONTH UNION
SELECT CURDATE()-INTERVAL 4 MONTH UNION
SELECT CURDATE()-INTERVAL 5 MONTH UNION
SELECT CURDATE()-INTERVAL 6 MONTH UNION
SELECT CURDATE()-INTERVAL 7 MONTH UNION
SELECT CURDATE()-INTERVAL 8 MONTH UNION
SELECT CURDATE()-INTERVAL 9 MONTH UNION
SELECT CURDATE()-INTERVAL 10 MONTH UNION
SELECT CURDATE()-INTERVAL 11 MONTH
)as hardcoded
CROSS JOIN users us
LEFT JOIN bookings bs ON (MONTH(bs.date) = MONTH(hardcoded.date)
AND YEAR(bs.date) = YEAR(hardcoded.date))
AND bs.clientID = us.recNo
WHERE ((bs.managerID = 6 AND bs.clientID = 1900)
OR
(bs.managerID IS NULL AND bs.clientID IS NULL))
AND hardcoded.date > DATE_SUB(CURDATE(), INTERVAL 11 MONTH) AND hardcoded.date <= CURDATE()
GROUP BY Month,Name
ORDER BY hardcoded.date
```
see this in ([sqlFiddle](http://sqlfiddle.com/#!2/98c5a/12/0))
Note: we have a WHERE condition on managerID and clientID to filter the results we want, but we also want to return the rows that are all NULL (our hardcoded months with no bookings); that's why there's the `OR (bs.managerID IS NULL AND bs.clientID IS NULL)`.
Also, the current GROUP BY is non-standard: it does not group by the first field of the select, `bs.date`, so if you have 2 or more bookings in one month it just returns an arbitrary date from that month. You might want to select `hardcoded.date` instead and add it to the GROUP BY, so that you have a proper ANSI-standard GROUP BY. Better yet, use an aggregate function such as MIN(bs.date) or MAX(bs.date) for the first field in the select, so it always returns the first or last booking date for each month that has 2 or more bookings, and you don't have to list it in the GROUP BY.
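One detail worth stressing in the query above: it counts `bs.date`, not `*`. With a LEFT JOIN against a month scaffold, `COUNT(*)` counts the scaffold row itself and would report 1 for an empty month, while `COUNT(column)` skips NULLs. A stripped-down sketch (SQLite via Python, hypothetical table names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE months (m TEXT);
CREATE TABLE bookings (m TEXT);
INSERT INTO months VALUES ('2014-01'), ('2014-02'), ('2014-03');
INSERT INTO bookings VALUES ('2014-01'), ('2014-01'), ('2014-03');
""")

rows = conn.execute("""
SELECT months.m,
       COUNT(bookings.m) AS cnt_col,  -- NULLs from the LEFT JOIN are skipped
       COUNT(*)          AS cnt_star  -- counts the scaffold row too
FROM months
LEFT JOIN bookings ON bookings.m = months.m
GROUP BY months.m
ORDER BY months.m
""").fetchall()

print(rows)  # [('2014-01', 2, 2), ('2014-02', 0, 1), ('2014-03', 1, 1)]
```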
I don't know if the CROSS JOIN is hurting your app, but you can try this one:
it is without the CROSS JOIN but preserves your RIGHT JOIN:
```
SELECT bs.date,
CONCAT(MONTHNAME(hardcoded.date),' ', YEAR(hardcoded.date)) as 'Month',
count(bs.date) AS Bookings,
CONCAT(us.firstName,' ', us.lastName) as 'Name'
from
(SELECT CURDATE() as date UNION
SELECT CURDATE()-INTERVAL 1 MONTH UNION
SELECT CURDATE()-INTERVAL 2 MONTH UNION
SELECT CURDATE()-INTERVAL 3 MONTH UNION
SELECT CURDATE()-INTERVAL 4 MONTH UNION
SELECT CURDATE()-INTERVAL 5 MONTH UNION
SELECT CURDATE()-INTERVAL 6 MONTH UNION
SELECT CURDATE()-INTERVAL 7 MONTH UNION
SELECT CURDATE()-INTERVAL 8 MONTH UNION
SELECT CURDATE()-INTERVAL 9 MONTH UNION
SELECT CURDATE()-INTERVAL 10 MONTH UNION
SELECT CURDATE()-INTERVAL 11 MONTH
)as hardcoded
LEFT JOIN bookings bs ON (MONTH(bs.date) = MONTH(hardcoded.date)
AND YEAR(bs.date) = YEAR(hardcoded.date))
RIGHT JOIN users us ON (bs.clientID = us.recNo OR bs.clientID IS NULL)
WHERE ((bs.managerID = 6 AND bs.clientID = 1900)
OR
(bs.managerID IS NULL AND bs.clientID IS NULL))
AND hardcoded.date > DATE_SUB(CURDATE(), INTERVAL 11 MONTH) AND hardcoded.date <= CURDATE()
GROUP BY Month,Name
ORDER BY hardcoded.date
```
This version tries to narrow down the records before joining:
```
SELECT MIN(bs.date) as FirstBookingDate,
CONCAT(MONTHNAME(hardcoded.date),' ', YEAR(hardcoded.date)) as 'Month',
count(bs.date) AS Bookings,
CONCAT(us.firstName,' ', us.lastName) as 'Name'
from
(SELECT CURDATE() as date UNION
SELECT CURDATE()-INTERVAL 1 MONTH UNION
SELECT CURDATE()-INTERVAL 2 MONTH UNION
SELECT CURDATE()-INTERVAL 3 MONTH UNION
SELECT CURDATE()-INTERVAL 4 MONTH UNION
SELECT CURDATE()-INTERVAL 5 MONTH UNION
SELECT CURDATE()-INTERVAL 6 MONTH UNION
SELECT CURDATE()-INTERVAL 7 MONTH UNION
SELECT CURDATE()-INTERVAL 8 MONTH UNION
SELECT CURDATE()-INTERVAL 9 MONTH UNION
SELECT CURDATE()-INTERVAL 10 MONTH UNION
SELECT CURDATE()-INTERVAL 11 MONTH
)as hardcoded
CROSS JOIN (SELECT * FROM users WHERE recNo = 1900) us
LEFT JOIN (SELECT * FROM bookings WHERE managerID = 6 AND clientID = 1900
AND date BETWEEN CURDATE()-INTERVAL 11 MONTH AND CURDATE()) bs
ON (MONTH(bs.date) = MONTH(hardcoded.date)
AND YEAR(bs.date) = YEAR(hardcoded.date))
GROUP BY Month,Name
ORDER BY hardcoded.date
I'm not very familiar with MySQL syntax, but I would approach this by first creating a temp table with only month and year integer columns and inserting a record for each month of the past year. A simple WHILE loop using DATE_SUB and a decrementing counter would probably suffice.
Then left join the temp table with your bookings/users query much as you have it now. This should result in months without bookings still showing in your output.
I don't believe you can pull it off in only one SELECT statement. | MySQL Query - return value for past 12 months record for each month | [
"",
"mysql",
"sql",
""
] |
I have the following table:
```
Country Year
100 201313
100 201212
101 201314
101 201213
101 201112
102 201313
102 201212
103 201313
103 201212
104 201313
104 201212
```
I need a query that delivers just one country and just the greatest value of year, for example:
```
Country Year
100 201313
101 201314
102 201313
103 201313
104 201313
```
My solution until now is to make a first query in which I get the distinct countries, and then, in a while loop, another query to get the years...
```
$resOne = $mysqli->query("SELECT DISTINCT Country from Table ORDER BY Country ASC");
while ($obj = $resOne->fetch_object()){
$resTwo = $mysqli->query("SELECT Country, Year from Table WHERE Country = $resOne->Country ORDER BY Country ASC LIMIT 1")->fetch_object();
echo $resTwo->Country $resTwo->Year;
}
```
**Question:** Is it someway possible to deliver this result with just one query?
Thanks for reading and answering.
**UPDATE**
The scripts in the answers from user2989408 and Drew are good and working, but when I join the table with another one I'm not getting the correct data.
Here's a fiddle to my DB sample and script: <http://sqlfiddle.com/#!2/31613/1/0>
How can I make the Description column show the description from MAX(s.Year)? For instance, the first row in the fiddle should show "France newest 201314". | Query:
[SQLfiddleexample](http://sqlfiddle.com/#!2/31613/5)
```
SELECT c.Name, s.Description, s.Year
FROM countries c
JOIN seasons s
ON c.Id=s.Id
LEFT JOIN seasons s2
ON s.ID = s2.ID
AND s2.Year > s.Year
WHERE s2.id is null
ORDER BY c.Name ASC
```
Result:
```
| NAME | DESCRIPTION | YEAR |
|-------------|-------------|--------|
| France | newest | 201314 |
| Germany | new | 201212 |
| Italy | new | 201313 |
| Netherlands | newest | 201313 |
| Spain | new | 201313 |
``` | Your Query should be
```
SELECT t.Country, MAX(t.Year)
FROM Table t
GROUP BY t.Country
ORDER BY t.Country ASC
```
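With the sample data from the question, this greatest-value-per-group query behaves as expected; here is a quick check in SQLite via Python (the table name `t` is made up, since the question just calls it "Table"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Country INTEGER, Year INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (100, 201313), (100, 201212), (101, 201314), (101, 201213),
    (101, 201112), (102, 201313), (102, 201212), (103, 201313),
    (103, 201212), (104, 201313), (104, 201212),
])

# One row per country with its greatest year
rows = conn.execute(
    "SELECT Country, MAX(Year) FROM t GROUP BY Country ORDER BY Country ASC"
).fetchall()

print(rows)
# [(100, 201313), (101, 201314), (102, 201313), (103, 201313), (104, 201313)]
```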
**EDIT:** After the updated requirement, to SELECT other info the query can be modified as follows to work.
```
SELECT c.Name, s.Description, s.Year
FROM
(
SELECT s.id as id, MAX(s.Year) as Max
FROM seasons s
GROUP BY s.Id
) as X
JOIN countries c ON c.Id = X.Id
JOIN seasons s ON s.id = X.id AND X.Max = s.Year
ORDER BY c.Name ASC
```
A sub query can also be used to select the required info as follows.
```
SELECT c.Name,
(SELECT s1.Description
FROM seasons s1 WHERE s1.id = X.id AND X.Max = s1.Year) as Description,
X.Max as Year
FROM
(
SELECT s.id as id, MAX(s.Year) as Max
FROM seasons s
GROUP BY s.Id
) as X
JOIN countries c ON c.Id = X.Id
ORDER BY c.Name ASC
```
Essentially there is more than one way to get the info you need if you understand how `GROUP BY` works. | Combine 2 SQL queries to get Distinct values from one column and biggest from second | [
"",
"mysql",
"sql",
""
] |
I want to write a SQL query that will always return false.
If I write
```
select 1 from dual where 1=2;
```
nothing is returned. how can i use where 1=2 which always returns "false". | ```
SELECT CASE WHEN 1=2 THEN 1 ELSE 0 END FROM DUAL
``` | That's because there is nothing where 1 = 2.
```
select false from dual
```
(Substitute the value you want it to return for false.) | SQL to always return false | [
"",
"sql",
"oracle",
""
] |
```
SELECT num, code, LN AS LabelName
FROM product
INNER JOIN link
ON product.seqno = link.seqno
INNER JOIN master
ON link.code = master.code
```
Is there any other way to write this SQL?
```
DB layout is like this:
1)TABLE: product
num(key)
ln
seqno
2)TABLE: link
seqno(key)
code(key)
3)Table: master
code(key)
``` | You don't actually use the master table
```
select num,code, LN as LabelName
from product
inner join link on product.seqno = link.seqno
```
of course you might have taken that part out of your question -- but then how can we answer?
(NB, as written your query might get different results as only codes in the master table would be returned. I believe this is not why it was linked) | Since you talked of *master-detail*, the concept typical for ISAM databases and implying manual management of records locating, I would make a guess you actually needed only ONE set of records, not all the whole database read. Depending on what you needed then it would go like
```
SELECT product.num, link.code, product.LN AS LabelName
FROM product , master , link
WHERE product.seqno = link.seqno
AND link.code = master.code
AND master.ID = 12345
```
or even
```
SELECT product.num, link.code, product.LN AS LabelName
FROM product , master , link
WHERE product.seqno = link.seqno
AND link.code = 12345
```
And since you mentioned Delphi you should replace 12345 with parameters like shown at <http://bobby-tables.com/delphi.html> | Make this query faster? | [
"",
"sql",
"firebird",
""
] |
I have two tables as below
tbl1
```
id qNum
1 1
2 2
3 3
```
tbl2
```
id qNum displayNum
1 1 3
2 2 1
3 2 2
4 2 4
```
Ideally I need a sql results to look like this
```
qNum display1 display2 display3 display4
1 0 0 1 0
2 1 1 0 1
3 0 0 0 0
```
I have tried the following sql but this was not correct
```
SELECT
tbl1.qNum,
CASE when tbl2.displayNum=1 then 1 else 0 end AS filter1,
CASE when tbl2.displayNum=2 then 1 else 0 end AS filter2,
CASE when tbl2.displayNum=3 then 1 else 0 end AS filter3,
CASE when tbl2.displayNum=4 then 1 else 0 end AS filter4,
CASE when tbl2.displayNum=5 then 1 else 0 end AS filter5
FROM
tbl1
Left Join tbl2 ON tbl1.qNum = tbl2.qNum
GROUP BY
tbl1.qNum
```
Could anyone help a little please!! | Your query is almost correct, you're just missing an aggregate function:
```
SELECT
tbl1.qNum,
MAX(CASE when tbl2.displayNum=1 then 1 else 0 end) AS filter1,
MAX(CASE when tbl2.displayNum=2 then 1 else 0 end) AS filter2,
MAX(CASE when tbl2.displayNum=3 then 1 else 0 end) AS filter3,
MAX(CASE when tbl2.displayNum=4 then 1 else 0 end) AS filter4,
MAX(CASE when tbl2.displayNum=5 then 1 else 0 end) AS filter5
FROM
tbl1
Left Join tbl2 ON tbl1.qNum = tbl2.qNum
GROUP BY
tbl1.qNum
```
The columns you select should always be either in the group by clause or an aggregate function should be applied to them. A group by "collapses" a group of rows and if you don't have an aggregate function on a column (which is not in the group by) a random row of that group is displayed.
Here you can read about the different aggregate functions: [GROUP BY (Aggregate) Functions](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html)
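To see the aggregate fix in action, here is a quick check with SQLite via Python, using the table names and sample data from the question (filter5 omitted since no row uses it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (id INTEGER, qNum INTEGER);
CREATE TABLE tbl2 (id INTEGER, qNum INTEGER, displayNum INTEGER);
INSERT INTO tbl1 VALUES (1,1),(2,2),(3,3);
INSERT INTO tbl2 VALUES (1,1,3),(2,2,1),(3,2,2),(4,2,4);
""")

rows = conn.execute("""
SELECT tbl1.qNum,
       MAX(CASE WHEN tbl2.displayNum=1 THEN 1 ELSE 0 END) AS filter1,
       MAX(CASE WHEN tbl2.displayNum=2 THEN 1 ELSE 0 END) AS filter2,
       MAX(CASE WHEN tbl2.displayNum=3 THEN 1 ELSE 0 END) AS filter3,
       MAX(CASE WHEN tbl2.displayNum=4 THEN 1 ELSE 0 END) AS filter4
FROM tbl1
LEFT JOIN tbl2 ON tbl1.qNum = tbl2.qNum
GROUP BY tbl1.qNum
ORDER BY tbl1.qNum
""").fetchall()

print(rows)  # [(1, 0, 0, 1, 0), (2, 1, 1, 0, 1), (3, 0, 0, 0, 0)]
```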
The MAX() function in our case here returns the greatest value (note: not the row with the greatest value. You can also have a query like this: `select min(col), max(col) from whatever`). | You have to use the `MAX` function to pivot the table.
Try this:
```
SELECT tbl1.qNum,
MAX(CASE WHEN tbl2.displayNum=1 THEN 1 ELSE 0 END) AS filter1,
MAX(CASE WHEN tbl2.displayNum=2 THEN 1 ELSE 0 END) AS filter2,
MAX(CASE WHEN tbl2.displayNum=3 THEN 1 ELSE 0 END) AS filter3,
MAX(CASE WHEN tbl2.displayNum=4 THEN 1 ELSE 0 END) AS filter4,
MAX(CASE WHEN tbl2.displayNum=5 THEN 1 ELSE 0 END) AS filter5
FROM tbl1
LEFT JOIN tbl2 ON tbl1.qNum = tbl2.qNum
GROUP BY tbl1.qNum
``` | joining with a group by and pivot | [
"",
"mysql",
"sql",
"select",
"group-by",
"max",
""
] |
I have seen questions on Stack Overflow similar or identical to the one I am asking now; however, I couldn't manage to solve the problem in my situation.
Here is the thing:
I have an Excel spreadsheet (.xlsx) which I converted to comma-separated values (.csv), as suggested in some answers.
My excel file looks something like this:
```
--------------------------------------------------
name | surname | voteNo | VoteA | VoteB | VoteC
--------------------------------------------------
john | smith | 1001 | 30 | 154 | 25
--------------------------------------------------
anothe| person | 1002 | 430 | 34 | 234
--------------------------------------------------
other | one | 1003 | 35 | 154 | 24
--------------------------------------------------
john | smith | 1004 | 123 | 234 | 53
--------------------------------------------------
john | smith | 1005 | 23 | 233 | 234
--------------------------------------------------
```
In PostgreSQL I created a table named `allfields` with 6 columns:
the first two as character[] and the last four as integers, with the same names as shown in the Excel table: `(name, surname, voteno, votea, voteb, votec)`
Now I'm doing this:
```
copy allfields from 'C:\Filepath\filename.csv';
```
But I'm getting this error:
> ```
> could not open file "C:\Filepath\filename.csv" for reading: Permission denied
> SQL state: 42501
> ```
### My questions are:
1. Should I create those columns in `allfields` table in PostgreSQL?
2. Do I have to modify anything else in Excel file?
3. And why do I get this 'permission denied' error? | OK, the problem was that I needed to change the path of the Excel file. I moved it to a public folder where all users can access it.
If you face the same problem, move your Excel file to e.g. the `C:\Users\Public` folder (a public folder without any restrictions); otherwise you have to deal with Windows permission issues. | 1. Based on your file, neither of the first two columns needs to be an array type (character[]) - unlike C-strings, the "character" type in postgres *is* a string already. You might want to make things easier and use `varchar` as the type of those two columns instead.
2. I don't think you do.
3. Check that you don't still have that file open and locked in excel - if you did a "save as" to convert from xlsx to csv from within excel then you'll likely need to close out the file in excel. | Import data from Excel to PostgreSQL | [
"",
"sql",
"database",
"excel",
"postgresql",
""
] |
I am querying part numbers from an Oracle (JDE) database. If I use the clause `WHERE Item LIKE 'AS-%'`, it correctly returns all the items that begin with 'AS-'. However, when I try to narrow that set by instead using the clause `WHERE Item LIKE 'AS-%A'` in order to find all parts matching the pattern and ending with an 'A', I get no results, even though they do exist!
What gives? | When you think that your query is misidentifying rows based on your understanding of the rows' values, examine the values using the DUMP() function.
This will tell you the exact contents of the cell, including any characters that you cannot see on the display. | Perhaps some control characters at end of the data that you cannot see. Try using regexp\_like:
```
with x as (
select 'AS-123B' as col from dual
UNION
select 'AS-456A' as col from dual
UNION
select 'AS-789A' || chr(0) as col from dual
)
select * from x
where regexp_like (col, 'AS-(.*)A[[:cntrl:]]?')
```
Output:
```
COL
AS-456A
AS-789A
``` | Oracle wildcard not matching? | [
"",
"sql",
"oracle",
""
] |
I have the following table in a database (Access - Microsoft SQL Server):
```
| Product Id | Month | Sales |
----------------------------------
| 1144 | 1 | 100 |
| 20131120 | 1 | 200 |
| 1144 | 2 | 333 |
| 1144 | 3 | 333 |
| 1144 | 4 | 333 |
| 1144 | 5 | 333 |
| 20131120 | 2 | 200 |
```
And I would like to add a new column to the table showing in how many months each product has been sold up to the given month. I need to keep this information in the database, in this table.
After updating the table I would like to get the following:
```
| Product Id | Month | Sales | Counter |
-------------------------------------------|
| 1144 | 1 | 100 | 0 |
| 20131120 | 1 | 200 | 0 |
| 1144 | 2 | 333 | 1 |
| 1144 | 3 | 333 | 2 |
| 1144 | 4 | 333 | 3 |
| 1144 | 5 | 333 | 4 |
| 20131120 | 2 | 200 | 1 |
```
For example, for product=1144 and month=3, counter=2 because this item has appeared twice before month 3.
I would like to update the Counter column with one query (update ... set = (select ... )). Could you help me construct the query? | This query should retrieve the correct data:
```
SELECT m1.product_id, m1.month, m1.sales, COUNT(m2.month) - 1 AS counter
FROM mysales AS m1 INNER JOIN
mysales AS m2 ON m1.product_id = m2.product_id AND m1.month >= m2.month
GROUP BY m1.product_id, m1.month, m1.sales
```
So the statement to update the table, once you have added the counter column, is:
```
UPDATE mysales
SET counter = x.counter
FROM (SELECT m1.product_id, m1.month, COUNT(m2.month) - 1 AS counter
FROM mysales AS m1 INNER JOIN
mysales AS m2 ON m1.product_id = m2.product_id AND m1.month >= m2.month
GROUP BY m1.product_id, m1.month) AS x INNER JOIN
mysales ON x.product_id = mysales.product_id AND x.month = mysales.month
```
This syntax for the update statement works on SqlServer; I don't know if it also works on MS Access. | In `SqlServer`, you can project your column ranking directly using [`row_number()`](http://technet.microsoft.com/en-us/library/ms186734.aspx)
```
select
productid,
[month],
sales,
row_number() over (partition by productid order by [month]) - 1 as [Rank]
from mysales
```
Assuming the columns `[ProductId, MonthId]` constitute a key, you can persist this:
```
-- Create new column
ALTER TABLE mysales ADD [Rank] INT
GO
-- Set the new Column
UPDATE m
SET m.[Rank] = X.[Rank]
FROM mysales m
INNER JOIN
(
SELECT
ProductId,
Month,
row_number() over (partition by productid order by [month]) - 1 AS [Rank]
FROM MySales
) X
ON m.ProductId = X.ProductID AND m.[Month] = X.[Month];
```
[Fiddle here](http://sqlfiddle.com/#!6/569f8/1) | Update table with subquery | [
"",
"sql",
"sql-server",
"subquery",
""
] |
I have a strange requirement. Here it is. I have a table.
```
DateTime PhoneNo Network
------------------- -------- ---------
11/01/2014 13:05:45 99999999 NetTwo
11/01/2014 13:05:45 99999999 NetOne
11/01/2014 13:05:45 99999999 NetOne
12/01/2014 13:05:45 99999999 NetOne
12/01/2014 13:06:45 99999999 NetOne
12/01/2014 13:07:45 88888888 NetTwo
12/01/2014 13:08:45 77777777 NetThree
```
I have to update another table by running a stored procedure once daily, such that it calculates a summary of the number of times each caller called per day and updates the corresponding column.
DestinationTable:
```
Date Month year Phone Network 01 02 03 -------11-12---30-31
11/01/2014 JAN 2014 99999999 NetOne 0 0 0 2 0 0 0
11/01/2014 JAN 2014 99999999 NetTwo 0 0 0 1 0 0 0
12/01/2014 JAN 2014 99999999 NetTwo 0 0 0 0 2 0 0
12/01/2014 JAN 2014 88888888 NetTwo 0 0 0 0 1 0 0
12/01/2014 JAN 2014 77777777 NetTwo 0 0 0 0 1 0 0
```
Here 01 to 31 are separate columns. I apologize if I didn't explain it properly.
Any suggestions or approaches on how to implement this? Thanks for your patience in looking into my problem, guys. Any help would be appreciated. | Thank you so much for your efforts. Thanks haki and Thinkjet. @think: I learned MERGE from you :D
I got my expected result using subqueries inside the result set of the WITH clause.
```
WITH MainFilter(CLI,DAYS,COUNTS)
AS (SELECT TBL_DTL_CALLACTIVITY.CLI,
TO_CHAR(TBL_DTL_CALLACTIVITY.CALLSTARTTIME, 'DD'),COUNT(TBL_DTL_CALLACTIVITY.CLI)
FROM TBL_DTL_CALLACTIVITY
WHERE UPPER(TO_CHAR(TBL_DTL_CALLACTIVITY.CALLSTARTTIME, 'mon')) =UPPER(i_Month) AND
(SELECTED_DNIS IS NULL OR TBL_DTL_CALLACTIVITY.DNIS = SELECTED_DNIS)
GROUP BY TBL_DTL_CALLACTIVITY.CLI,
TO_CHAR(TBL_DTL_CALLACTIVITY.CALLSTARTTIME,'DD'))
SELECT MF.CLI,
NVL((SELECT COUNTS FROM MainFilter WHERE MainFilter.DAYS = '01' AND CLI = MF.CLI ),0) AS Day_01,
NVL((SELECT COUNTS FROM MainFilter WHERE MainFilter.DAYS = '02' AND CLI = MF.CLI ),0) AS Day_02,
NVL((SELECT COUNTS FROM MainFilter WHERE MainFilter.DAYS = '03' AND CLI = MF.CLI ),0) AS Day_03,
--------------------
--------------------
FROM MainFilter MF GROUP BY MF.CLI ;
```
I am worried about the query performance, though. Is it a good approach? | It's not strange - it's called a pivot table
this is one way to go
```
select trunc(DateTime) as date , to_char(DateTime,'MON') as month , to_number(to_char(DateTime,'yyyy')) as year , PhoneNo, network ,
sum(case when to_char(DateTime,'hh24') = 0 then 1 else 0 end) as h00,
sum(case when to_char(DateTime,'hh24') = 1 then 1 else 0 end) as h01,
..
sum(case when to_char(DateTime,'hh24') = 23 then 1 else 0 end) as h23
from calls_tables
group by trunc(DateTime) , to_char(DateTime,'MON'), to_number(to_char(DateTime,'yyyy')) , PhoneNo, network
```
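The conditional-aggregation idea is easy to try out locally. Here is a hedged sketch via Python's sqlite3 (day columns instead of hours, and made-up sample data; `calls`, `phone`, and `called_on` are invented names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (phone TEXT, called_on TEXT)")
conn.executemany("INSERT INTO calls VALUES (?, ?)",
                 [("99999999", "2014-01-11"), ("99999999", "2014-01-11"),
                  ("99999999", "2014-01-12"), ("88888888", "2014-01-12")])
# One SUM(CASE ...) per pivoted column -- here just days 11 and 12.
rows = conn.execute("""
    SELECT phone,
           SUM(CASE WHEN strftime('%d', called_on) = '11' THEN 1 ELSE 0 END) AS d11,
           SUM(CASE WHEN strftime('%d', called_on) = '12' THEN 1 ELSE 0 END) AS d12
    FROM calls
    GROUP BY phone
    ORDER BY phone
""").fetchall()
print(rows)  # [('88888888', 0, 1), ('99999999', 2, 1)]
```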
11g has a pivot function as well.
You should run a search for it. | Update Count on Date wise in Oracle | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"oracle10g",
""
] |
I'm using MariaDB with HeidiSQL to execute this SQL:
DECLARE @AccountID INT;
Insert Into accounts(first\_name, mi, last\_name, email, is\_admin, is\_enabled, date\_created)
Values('testfirstname', 'a', 'testlastname', 'user@email.com', 1, 1, NOW());
set @AccountID = Last\_Insert\_Id();
I keep getting an error:
SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '@AccountID INT' at line 1
I was looking at the usage of DECLARE, but it says that it's used in a function. I've tried with a BEGIN/END before and after the DECLARE, with the same error.
I'm new to the MySQL syntax; this would work in SQL Server. If anyone can let me know what I got wrong, it would be appreciated.
Thanks | Remove the `DECLARE`. It is not neccessary. | I'm using mariadb version 10.0.19
The above problem is described and has an easy solution
[here: Delimiters in the mysql Client](https://mariadb.com/kb/en/mariadb/delimiters-in-the-mysql-client/)
in short:
**DELIMITER //**
create function .. as usual
BEGIN
.. as usual
END
**//**
**DELIMITER ;**
(The accepted solution did not work!) | mariadb declare variable syntax error | [
"",
"sql",
"mariadb",
"heidisql",
""
] |
I'm trying to format all my data in my SQL before displaying the information.
Both my joins work fine. I have a piece of data (status) that has associated titles to go with each value, but the `CASE` statement isn't working. I currently have a PHP switch statement that does this data conversion, but I'd like to do it in SQL if possible.
```
SELECT
l.leadid, l.companyname, l.status, l.dateadded, addedby.name AS addedby, assignedto.name AS assignedto, l.address, l.city, l.prov, l.country, l.postalcode,
l.phone, l.fax, l.facebook, l.twitter, l.youtube, l.instagram, l.website, l.logo
FROM leads l
INNER JOIN staff addedby ON l.addedby = addedby.staffid
INNER JOIN staff assignedto ON l.assignedto = assignedto.staffid
CASE l.status
WHEN l.status = 7 THEN l.status = 'Converted'
WHEN l.status = 6 THEN l.status = 'Hot'
WHEN l.status = 5 THEN l.status = 'Warm'
WHEN l.status = 4 THEN l.status = 'Open'
WHEN l.status = 3 THEN l.status = 'NEW'
WHEN l.status = 2 THEN l.status = 'Cold'
ELSE l.status = 'Dead'
END
WHERE l.leadid = $id
``` | The case should be in the `SELECT` list, not after the `INNER JOIN`.
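As a sanity check that a `CASE` expression is just another column in the select list, here is a tiny runnable sketch using Python's sqlite3 (status codes borrowed from the question, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leads (leadid INTEGER, status INTEGER)")
conn.executemany("INSERT INTO leads VALUES (?, ?)", [(1, 6), (2, 2), (3, 9)])
# The CASE expression sits in the SELECT list like any other column.
rows = conn.execute("""
    SELECT leadid,
           CASE status
               WHEN 7 THEN 'Converted'
               WHEN 6 THEN 'Hot'
               WHEN 2 THEN 'Cold'
               ELSE 'Dead'
           END AS status_label
    FROM leads
    ORDER BY leadid
""").fetchall()
print(rows)  # [(1, 'Hot'), (2, 'Cold'), (3, 'Dead')]
```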
The second issue is the use of `l.status =` after each `THEN`: SQL only expects the result value after `THEN`, not an assignment. Likewise, the `l.status =` part is not needed in your `WHEN` branches, because the case value is already listed next to the `CASE` keyword (you would only spell out full conditions in an ad hoc, searched `CASE` expression). So you can just write it as follows:
```
SELECT
l.leadid,
l.companyname,
CASE l.status
WHEN 7 THEN 'Converted'
WHEN 6 THEN 'Hot'
WHEN 5 THEN 'Warm'
WHEN 4 THEN 'Open'
WHEN 3 THEN 'NEW'
WHEN 2 THEN 'Cold'
ELSE 'Dead'
END As Status,
l.dateadded,
< rest of your fields >
FROM leads l
INNER JOIN staff addedby ON l.addedby = addedby.staffid
INNER JOIN staff assignedto ON l.assignedto = assignedto.staffid
WHERE l.leadid = $id
``` | 1. You want your `CASE` in `SELECT` clause
2. Your `CASE` syntax is invalid
3. As an alternative, you can use MySQL's less verbose `ELT()` function (since your question is tagged `mysql`)
That being said you can do either
```
SELECT
l.leadid, l.companyname,
CASE l.status
WHEN 7 THEN 'Converted'
WHEN 6 THEN 'Hot'
WHEN 5 THEN 'Warm'
WHEN 4 THEN 'Open'
WHEN 3 THEN 'NEW'
WHEN 2 THEN 'Cold'
ELSE 'Dead'
END status,
l.dateadded, addedby.name AS addedby,
assignedto.name AS assignedto, l.address,
l.city, l.prov, l.country, l.postalcode,
l.phone, l.fax, l.facebook, l.twitter,
l.youtube, l.instagram, l.website, l.logo
FROM leads l JOIN staff addedby
ON l.addedby = addedby.staffid JOIN staff assignedto
ON l.assignedto = assignedto.staffid
WHERE l.leadid = $id
```
or
```
SELECT
l.leadid, l.companyname,
IFNULL(ELT(l.status - 1, 'Cold', 'New', 'Open', 'Warm', 'Hot', 'Converted'), 'Dead') status,
l.dateadded, addedby.name AS addedby,
assignedto.name AS assignedto, l.address,
l.city, l.prov, l.country, l.postalcode,
l.phone, l.fax, l.facebook, l.twitter,
l.youtube, l.instagram, l.website, l.logo
FROM leads l JOIN staff addedby
ON l.addedby = addedby.staffid JOIN staff assignedto
ON l.assignedto = assignedto.staffid
WHERE l.leadid = $id
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/82d0f/3)** demo (for both cases) | Adding a case statment to SQL join statement | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I created a SQL Server database named `abc-123`, and in it I created a table Emp. When I run:
```
select * from abc-123.emp;
```
I am getting the results.
But when I try to grant some privileges to a user I am unable to do so, *getting a syntax error near the hyphen*.
Will anyone help me? | Make sure you are escaping the names with [] (T-SQL) or "" (ANSI SQL). You are using non-standard naming.
```
-- Sample select
SELECT * FROM [abc-123].[dbo].[emp];
SELECT * FROM "abc-123"."dbo"."emp";
```
1 - Can you send me an example of the grant TSQL? If you are doing the action from SSMS, right click and script the code.
2 - Here is the link to the GRANT TSQL command. I do not see any syntax like you are trying.
<http://technet.microsoft.com/en-us/library/ms188371.aspx>
```
TO 'drupal'@'localhost' IDENTIFIED BY 'Drup@l';
```
First, it should be `[drupal@localhost]`. Second, I have never seen the `IDENTIFIED BY` clause. Where are you getting that information from?
3 - Here is a quick TSQL script that creates a badly named database and user. If possible, change the name of the database and user.
Also, if you are granting permissions at the table level other than db\_owner (very granular and a lot of maintenance), then create a user-defined database role. Add securables to the role and add your user to the role.
<http://technet.microsoft.com/en-us/library/ms187936.aspx>
Sample code.
```
-- Create new database
create database [abc-123]
go
-- Use new database
use [abc-123];
go
-- Create table from sample data
select
[BusinessEntityID]
,[PersonType]
,[NameStyle]
,[Title]
,[FirstName]
,[MiddleName]
,[LastName]
,[Suffix]
,[EmailPromotion]
, cast([AdditionalContactInfo] as varchar(max))
as [AdditionalContactInfoTxt]
, cast([Demographics] as varchar(max))
as [DemographicsTxt]
,[rowguid]
,[ModifiedDate]
into
[abc-123].[dbo].[emp]
from
AdventureWorks2012.Person.Person;
-- Create a login
CREATE LOGIN [drupal@localhost] WITH PASSWORD=N'Ja08n13$', DEFAULT_DATABASE=[abc-123]
GO
-- Create a user
CREATE USER [drupal@localhost] FOR LOGIN [drupal@localhost] WITH DEFAULT_SCHEMA=[dbo]
GO
-- Add to database owner role
EXEC sp_addrolemember 'db_owner', [drupal@localhost]
GO
```
Output with user in db\_owner group.
 | Use Back Quote for the DB name
```
select * from `abc-123`.emp;
```
or, select the existing database with a USE statement and run the query.
```
USE `abc-123`;
select * from emp;
``` | can I give SQL Server database name with hyphen like abc-123? | [
"",
"sql",
"sql-server",
""
] |
I need to use `between` in a stored procedure. This is my code
```
ALTER PROCEDURE sp_tb_sales_entry_total_product_where_date
@to_date varchar(15),
@from_date varchar(15)
AS
BEGIN
SELECT *
FROM tb_sales_entry_total_product
WHERE [Sales Date] BETWEEN @to_date AND @from_date
END
```
I passed two values, `01-01-2014` and `10-01-2014`. My table stores dates in the same format.
The problem is it's not selecting any values. Where is my error? | Operator `BETWEEN` in SQL [has the following syntax](http://technet.microsoft.com/en-us/library/ms187922.aspx):
```
test_expression [ NOT ] BETWEEN begin_expression AND end_expression
```
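A quick way to convince yourself that the operand order matters is to test it directly; a minimal check via Python's sqlite3 (SQL Server follows the same standard semantics):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# With the bounds reversed, BETWEEN can never match: x >= 10 AND x <= 1 is always false.
reversed_bounds = conn.execute("SELECT 5 BETWEEN 10 AND 1").fetchone()[0]
correct_bounds = conn.execute("SELECT 5 BETWEEN 1 AND 10").fetchone()[0]
print(reversed_bounds, correct_bounds)  # 0 1
```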
Since the `begin_expression` needs to be first, you have to switch the order of `from_date` and `to_date`:
```
SELECT * FROM tb_sales_entry_total_product WHERE [Sales Date] BETWEEN @from_date AND @to_date
``` | Switch them
```
WHERE [Sales Date] BETWEEN @from_date AND @to_date
``` | How to use between in stored proc in SQL Server | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
```
select * from MYTABLE t
where EQUIPMENT = 'KEYBOARD' and ROWNUM <= 2 or
EQUIPMENT = 'MOUSE' and ROWNUM <= 2 or
EQUIPMENT = 'MONITOR' and ROWNUM <= 2;
```
I am trying to run a query that returns matches on a field (i.e. equipment) and limits the output to 2 records or fewer per equipment type. I know this is probably not the best way to combine multiple conditions, but I have used OR-separated conditions like this in the past; it does not work with ROWNUM, and it seems only the very last condition is applied. Thanks in advance. | ```
WITH numbered_equipment AS (
SELECT t.*,
ROW_NUMBER() OVER( PARTITION BY EQUIPMENT ORDER BY NULL ) AS row_num
FROM MYTABLE t
WHERE EQUIPMENT IN ( 'KEYBOARD', 'MOUSE', 'MONITOR' )
)
SELECT *
FROM numbered_equipment
WHERE row_num <= 2;
```
[SQLFIDDLE](http://sqlfiddle.com/#!4/c5182/2)
If you want to prioritize which rows are selected based on other columns then modify the `ORDER BY NULL` part of the query to put the highest priority elements first in the order.
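For what it's worth, the same partition-and-filter pattern runs under SQLite 3.25+ as well; a minimal sketch via Python's sqlite3 (ids and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, equipment TEXT)")
conn.executemany("INSERT INTO mytable (equipment) VALUES (?)",
                 [("KEYBOARD",), ("KEYBOARD",), ("KEYBOARD",), ("MOUSE",)])
# Number the rows within each equipment group, then keep at most two per group.
rows = conn.execute("""
    SELECT id, equipment FROM (
        SELECT id, equipment,
               ROW_NUMBER() OVER (PARTITION BY equipment ORDER BY id) AS rn
        FROM mytable
    ) WHERE rn <= 2
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'KEYBOARD'), (2, 'KEYBOARD'), (4, 'MOUSE')]
```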
**Edit**
To just pull out rows where the equipment matches and the status is active then use:
```
WITH numbered_equipment AS (
SELECT t.*,
ROW_NUMBER() OVER( PARTITION BY EQUIPMENT ORDER BY NULL ) AS row_num
FROM MYTABLE t
WHERE EQUIPMENT IN ( 'KEYBOARD', 'MOUSE', 'MONITOR' )
AND STATUS = 'Active'
)
SELECT *
FROM numbered_equipment
WHERE row_num <= 2;
```
[SQLFIDDLE](http://sqlfiddle.com/#!4/ab02e/2) | The Row count can be specific to every *Equipment* type!
```
SELECT * FROM MYTABLE t
where EQUIPMENT = 'KEYBOARD' and ROWNUM <= 2
UNION ALL
SELECT * FROM MYTABLE t
WHERE EQUIPMENT = 'MOUSE' and ROWNUM <= 2
UNION ALL
SELECT * FROM MYTABLE t
WHERE EQUIPMENT = 'MONITOR' and ROWNUM <= 2;
``` | SQL Oracle rownum on multiple where clauses? | [
"",
"sql",
"oracle",
"rownum",
""
] |
I am using `SQL Server 2008 R2`. I have a database table like the one below:
```
+--+-----+---+---------+--------+----------+-----------------------+
|Id|Total|New|Completed|Assigned|Unassigned|CreatedDtUTC |
+--+-----+---+---------+--------+----------+-----------------------+
|1 |29 |1 |5 |6 |5 |2014-01-07 06:00:00.000|
+--+-----+---+---------+--------+----------+-----------------------+
|2 |29 |1 |5 |6 |5 |2014-01-07 06:00:00.000|
+--+-----+---+---------+--------+----------+-----------------------+
|3 |29 |1 |5 |6 |5 |2014-01-07 06:00:00.000|
+--+-----+---+---------+--------+----------+-----------------------+
|4 |30 |1 |3 |2 |3 |2014-01-08 06:00:00.000|
+--+-----+---+---------+--------+----------+-----------------------+
|5 |30 |0 |3 |4 |3 |2014-01-09 06:00:00.000|
+--+-----+---+---------+--------+----------+-----------------------+
|6 |30 |0 |0 |0 |0 |2014-01-10 06:00:00.000|
+--+-----+---+---------+--------+----------+-----------------------+
|7 |30 |0 |0 |0 |0 |2014-01-11 06:00:00.000|
+--+-----+---+---------+--------+----------+-----------------------+
```
Now, I am facing a strange problem while `grouping` the records by the `CreatedDtUTC` column.
I want the `distinct records` from this table. Here you can observe that the *first three* records are *duplicates* created at the same date and time, so I ran the query given below:
```
SELECT Id, Total, New, Completed, Assigned, Unassigned, MAX(CreatedDtUTC)
FROM TblUsage
GROUP BY CreatedDtUTC
```
But it gives me this error:
*Column 'TblUsage.Id' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.*
I have also tried DISTINCT on the CreatedDtUTC column, but it gave the same error. Can anyone let me know how to get rid of this?
P.S. I want the `CreatedDtUTC` column in `CONVERT(VARCHAR(10), CreatedDtUTC,101)` format. | The error message itself is very explicit. You can't put a column into the `SELECT` clause without applying an aggregate function to it if it's not a part of `GROUP BY`. And the reason behind it is very simple: SQL Server doesn't know which value for that column within a group you want to select. It's not deterministic and therefore prohibited.
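One deterministic fix is to group by every duplicated column and aggregate the Id; here it is in miniature (SQLite via Python, table trimmed to two data columns for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblusage (id INTEGER PRIMARY KEY, total INTEGER, created TEXT)")
conn.executemany("INSERT INTO tblusage (total, created) VALUES (?, ?)",
                 [(29, "2014-01-07"), (29, "2014-01-07"), (29, "2014-01-07"),
                  (30, "2014-01-08")])
# Every non-aggregated column appears in GROUP BY; Id is aggregated instead.
rows = conn.execute("""
    SELECT MAX(id), total, created
    FROM tblusage
    GROUP BY total, created
    ORDER BY created
""").fetchall()
print(rows)  # [(3, 29, '2014-01-07'), (4, 30, '2014-01-08')]
```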
You can either put all the columns besides `Id` in `GROUP BY` and use `MIN()` or `MAX()` on `Id`, or you can leverage the windowing function `ROW_NUMBER()` in the following way:
```
SELECT Id, Total, New, Completed, Assigned, Unassigned, CONVERT(VARCHAR(10), CreatedDtUTC,101) CreatedDtUTC
FROM
(
SELECT t.*, ROW_NUMBER() OVER (PARTITION BY Total, New, Completed, Assigned, Unassigned, CreatedDtUTC
ORDER BY id DESC) rnum
FROM TblUsage t
) q
WHERE rnum = 1
```
Output:
```
| ID | TOTAL | NEW | COMPLETED | ASSIGNED | UNASSIGNED | CREATEDDTUTC |
|----|-------|-----|-----------|----------|------------|--------------|
| 3 | 29 | 1 | 5 | 6 | 5 | 01/07/2014 |
| 6 | 30 | 0 | 0 | 0 | 0 | 01/10/2014 |
| 7 | 30 | 0 | 0 | 0 | 0 | 01/11/2014 |
| 5 | 30 | 0 | 3 | 4 | 3 | 01/09/2014 |
| 4 | 30 | 1 | 3 | 2 | 3 | 01/08/2014 |
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!3/d31a3/5)** demo | Try this:
```
SELECT min(Id) Id, Total, New, Completed, Assigned, Unassigned, CreatedDtUTC
FROM TblUsage
GROUP BY Total, New, Completed, Assigned, Unassigned, CreatedDtUTC
``` | Group by records by date | [
"",
"sql",
"sql-server",
"date",
"select",
"group-by",
""
] |
I just started designing a database in `MySQLWorkbench 6.0`. I created a few tables but now when I click the "Add table" button it shows a message: `"Index out of range"` and the program crashes.
Any idea what is wrong, or what I could have done to get this error?
None of the tables have any values, they just have the fields and field type, with their primary keys and foreign keys (something tells me it's something I did when connecting tables via the foreign keys, I'm new to this software).
Any help appreciated! | Have you tried restarting MySQL Workbench? I haven't heard of such a problem before, but maybe steps you did in a certain order caused this exception.
In any case you should file a bug report for it at <http://bugs.mysql.com> to get this fixed. Only via such a report can developers take notice. | Once the alert popped up, I hit OK and then clicked on the little wrench icon above the UTF selection. I had to change how it would separate the columns from ';' to ','
This fixed my issue without having to change any of the content in the csv. | Error "Index out of range" in MySQL Workbench 6.0.8.11354 (6.0), what is wrong? | [
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
Basically, I have tables much like the picture below.

At first I'm getting SaleOrderID by given CustomerID :
```
Select
SaleOrderID
From tblSaleOrder
where CustomerID = 512992
```
Which returns 1002, 1003, 1005, 1009.
And I want to use those numbers to select from tblSaleOrderDetail:
```
Select
*
from tblSaleOrderDetail
where SaleOrderID = 1002,1003,1005,1009
```
(values from other query)
I need a suggestion for a query that uses values from another select. Is there a way to create an array to hold the values and then use it in another query? Or an easier way? | Use a subselect within an `in` clause:
```
select *
from tblSaleOrderDetail
where SaleOrderID in (
select SaleOrderID
from tblSaleOrder
where CustomerID = 512992)
```
When using a subselect within an in clause remember that you must only select one column within the subselect.
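To see the shape end-to-end, here is a small runnable sketch (Python's sqlite3; table and column names mirror the question, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblSaleOrder (SaleOrderID INTEGER, CustomerID INTEGER);
    CREATE TABLE tblSaleOrderDetail (SaleOrderID INTEGER, Item TEXT);
    INSERT INTO tblSaleOrder VALUES (1002, 512992), (1003, 512992), (2000, 7);
    INSERT INTO tblSaleOrderDetail VALUES (1002, 'a'), (1003, 'b'), (2000, 'c');
""")
# The subselect returns exactly one column: the matching order ids.
rows = conn.execute("""
    SELECT SaleOrderID, Item
    FROM tblSaleOrderDetail
    WHERE SaleOrderID IN (SELECT SaleOrderID
                          FROM tblSaleOrder
                          WHERE CustomerID = 512992)
    ORDER BY SaleOrderID
""").fetchall()
print(rows)  # [(1002, 'a'), (1003, 'b')]
```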
You could also perform a join:
```
select *
from tblSaleOrderDetail sod
join tblSaleOrder so
on sod.SaleOrderId = so.SaleOrderId
where so.CustomerID = 512992
``` | This is a very basic SQL operation, called the `join`. Although you can use `in` for this purpose, the more typical way is:
```
select sod.*
from tblSaleOrder so join
tblSaleOrderDetail sod
on so.SaleOrderID = sod.SaleOrderID
where so.CustomerID = 512992;
```
If you are learning SQL, the `join` operation is one of the first things you should be learning. | Sql Select by using another table value(s) | [
"",
"sql",
"select",
""
] |
I have looked all over and cannot seem to find the answer to this error:
> Could not find stored procedure 'dbo.aspnet\_CheckSchemaVersion'.
I am writing an ASP.NET application from an empty web project and trying to add roles/membership to the application. I have made the login/logout pages myself but I am using the premade "login" tool. The DB I am using is my own DB instance "Accounts" which uses windows authentication. When I am in debug mode, I can log in just fine! But when I push to my localhost, I get this error. Before I show all the code/stack trace/etc., let me list what I've done so far:
1. I **HAVE** run the aspnet\_regsql.exe multiple times from both the V4 and V2 folders just in case I missed anything (because all the help for this error I found was in V2)
2. I have checked the application pool in IIS, I switched it from "applicationpoolidentity" to "network service" because I was getting this error while it was "applicationpoolidentity"
> System.Data.SqlClient.SqlException: Login failed for user 'IIS
> APPPOOL\DefaultAppPool'.
3. I tried restarting my computer
4. I tried uninstalling/reinstalling VS2012 and SQL Server 2008 R2
5. I manipulated the web.config multiple times with no luck
6. I checked the connection string
7. I have tried starting the MSSQL services (both my own instance and SQLEXPRESS)
8. I have tried restarting the site in IIS
I am out of ideas. So with all that said, here is the code/stack trace/etc.:
web.config:
```
<configuration>
<connectionStrings>
<remove name="AccountsConnectionString"/>
<add name="AccountsConnectionString" connectionString="Data Source=development;Initial Catalog=Accounts;Integrated Security=True" providerName="System.Data.SqlClient"/>
</connectionStrings>
<system.web>
<authentication mode="Forms"/>
<roleManager enabled="true" defaultProvider="SqlRoleProvider">
<providers>
<clear/>
<add name="SqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="AccountsConnectionString" />
</providers>
</roleManager>
<membership defaultProvider="SqlMembershipProvider" userIsOnlineTimeWindow="1500">
<providers>
<remove name="SqlMembershipProvider"/>
<add name="SqlMembershipProvider"
type="System.Web.Security.SqlMembershipProvider"
connectionStringName="AccountsConnectionString"
enablePasswordRetrieval="false"
enablePasswordReset="true"
requiresQuestionAndAnswer="false"
requiresUniqueEmail="false"
applicationName="ConsultDemo"
minRequiredNonalphanumericCharacters="0"
minRequiredPasswordLength="5"/>
</providers>
</membership>
<profile enabled="true" defaultProvider="SqlProfileProvider">
<providers>
<clear/>
<add name="SqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="AccountsConnectionString" />
</providers>
</profile>
<compilation debug="true" targetFramework="4.0"/>
</system.web>
</configuration>
```
Stack trace:
```
[SqlException (0x80131904): Could not find stored procedure 'dbo.aspnet_CheckSchemaVersion'.]
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +388
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +688
System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +4403
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +6665097
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite) +6667096
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite) +577
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite) +735
System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +290
System.Web.Util.SecUtility.CheckSchemaVersion(ProviderBase provider, SqlConnection connection, String[] features, String version, Int32& schemaVersionCheck) +623
System.Web.Security.SqlMembershipProvider.GetPasswordWithFormat(String username, Boolean updateLastLoginActivityDate, Int32& status, String& password, Int32& passwordFormat, String& passwordSalt, Int32& failedPasswordAttemptCount, Int32& failedPasswordAnswerAttemptCount, Boolean& isApproved, DateTime& lastLoginDate, DateTime& lastActivityDate) +3888825
System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved, String& salt, Int32& passwordFormat) +186
System.Web.Security.SqlMembershipProvider.ValidateUser(String username, String password) +195
System.Web.UI.WebControls.Login.AuthenticateUsingMembershipProvider(AuthenticateEventArgs e) +105
System.Web.UI.WebControls.Login.AttemptLogin() +160
System.Web.UI.WebControls.Login.OnBubbleEvent(Object source, EventArgs e) +93
System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +84
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3804
```
Picture to show the schema and stored procedure is infact in place:

Any help is appreciated! If there is an answer that I may have missed please point me to it. I am trying to get it to run on localhost...and nothing is working! Thank you all in advance. | Please provide `UserID` and `Password` to your connection string. Its not there in connection string. | Paraphrasing @RonakBhatt from the comments on the OP:
**Even though you are using windows authentication, create a user using SQL authentication and add those credentials to the connection string.**
Thank you @RonakBhatt for pointing that out. I had figured one could still connect via Windows authentication. | localhost cannot find stored procedure 'dbo.aspnet_CheckSchemaVersion' | [
"",
"asp.net",
"sql",
"sql-server",
"iis-7",
"localhost",
""
] |
I am working with SQL server 2008R2. I have a query that returns the below dataset:
```
ID FirstName LastName Relation
1 Sam Ali Employee
1 Maya Ali Dependent
2 Nadia Amle Employee
1 Sue Ibram Dependent
3 Saher Jacobs Employee
2 Alie Salem Dependent
```
I want the query results this way below:
| ID | FirstName | LastName | Relation |
|----|-----------|----------|-----------|
| 1 | Sam | Ali | Employee |
| 1 | Maya | Ali | Dependent |
| 1 | Sue | Ibram | Dependent |
| 2 | Nadia | Amle | Employee |
| 2 | Alie | Salem | Dependent |
| 4 | Joe | Davis | Employee |
| 3 | Saher | Jacobs | Employee |
Now the requirements:
1. Group by ID so that employees and dependents that have the same ID are next to each other.
2. Order Column LastName by A-Z.
Greatly appreciate your help. | Use a self-join so you can get the employee associated with each dependent, and order by that.
```
SELECT t1.ID, t1.FirstName, t1.LastName, t1.Relation, t2.LastName AS EmployeeName
FROM YourTable AS t1
JOIN YourTable AS t2 ON t1.ID = t2.ID
WHERE t2.Relation = 'Employee'
ORDER BY EmployeeName, t1.ID, t1.LastName
```
Including `t1.ID` in the ordering is in case there are two employees with the same last name. This ensures that all the people in that group stay together in the result. | You can use barmer's answer like this:
```
SELECT * FROM TABLE ORDER BY ID, LastName
``` | Order SQL Server Query by two columns | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I have this simple table setup [see fiddle](http://www.sqlfiddle.com/#!2/469b36/4)
```
CREATE TABLE mytable
(`id` int, `itemid` int);
INSERT INTO mytable
(`id`, `itemid`)
VALUES
(1, 111),
(2, 222),
(3, 333),
(4, 444),
(5, 111),
(6, 222),
(7, 333),
(8, 564),
(9, 111),
(10, 121),
(11, 131),
(12, 111),
(13, 353),
(14, 373);
```
I want to show what the last itemid was in the row next to the current row itemid.
I have done that with the below
```
SELECT
mt.id,
mt.itemid,
(
select mt2.itemid
from mytable mt2
where mt2.id < mt.id
ORDER BY mt2.id DESC
LIMIT 1
) as lastitemid
FROM mytable mt
ORDER BY id DESC
LIMIT 5
```
This works as expected returning
```
ID ITEMID LASTITEMID
14 373 353
13 353 111
12 111 131
11 131 121
10 121 111
```
However I only want to show rows where the lastitemid = 111 for example.
I have tried doing
```
SELECT
mt.id,
mt.itemid,
(
select mt2.itemid
from mytable mt2
where mt2.id < mt.id
ORDER BY mt2.id DESC
LIMIT 1
) as lastitemid
FROM mytable mt
WHERE lastitemid = 111
ORDER BY id DESC
LIMIT 5
```
Which gives the error: Unknown column 'lastitemid' in 'where clause'.
I have also tried to add
```
AND mt2.itemid = 111
```
to the inner query.
This doesn't give any errors, but it returns 111 for all rows, which is not what I want; it is invalid, e.g. for id=12 the lastitemid is 131 but it says 111:
```
ID ITEMID LASTITEMID
14 373 111
13 353 111
12 111 111
11 131 111
10 121 111
```
With my example data set I should get the below results if I had the query right
```
ID ITEMID LASTITEMID
13 353 111
10 121 111
6 222 111
2 222 111
```
How can I achieve this? | Try this:
```
SELECT mt.id, mt.itemid,
(SELECT mt2.itemid FROM mytable mt2 WHERE mt2.id < mt.id ORDER BY mt2.id DESC LIMIT 1) AS lastitemid
FROM mytable mt
HAVING lastitemid = 111
ORDER BY id DESC
LIMIT 5
```
Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/469b36/6)
**OUTPUT**
```
| ID | ITEMID | LASTITEMID |
|----|--------|------------|
| 13 | 353 | 111 |
| 10 | 121 | 111 |
| 6 | 222 | 111 |
| 2 | 222 | 111 |
``` | If the ids are guaranteed to be consecutive, you can do this.
```
SELECT curr.id, curr.itemid, previous.itemid AS lastitemid
FROM mytable curr JOIN mytable previous ON previous.id = curr.id - 1
WHERE previous.itemid = 111
```
Otherwise, you'd need something like
```
SELECT curr.id, curr.itemid, previous.itemid AS lastitemid
FROM mytable curr, mytable previous
WHERE previous.id < curr.id
AND previous.itemid = 111
AND NOT EXISTS (
SELECT 1 FROM mytable interloper
WHERE interloper.id < curr.id
AND previous.id < interloper.id )
``` | Filter data based on last row mysql | [
"",
"mysql",
"sql",
"select",
"sql-order-by",
"having",
""
] |
I would like to generate SQL code using the `STUFF` function, based on a table name passed as a parameter.
This works:
```
declare @sql as nvarchar(max);
select @sql = stuff((SELECT distinct [Site]
FROM [ProcterGamble_analytics].[dbo].DATA_table
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
, 1, 0, '');
exec(@sql);
```
and I'm looking to do something like:
```
declare @presql as nvarchar(max), @sql as nvarchar(max), @table as nvarchar(max);
SET @table = 'DATA_table';
select @presql = 'SELECT distinct [Site]
FROM [ProcterGamble_analytics].[dbo].' + @table
select @sql = stuff((@presql
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
, 1, 0, '');
exec(@sql);
``` | You don't really need the `@presql` portion, just need to double up single quotes so they are handled properly when the dynamic portion is processed:
```
DECLARE @sql AS NVARCHAR(MAX)
,@table AS NVARCHAR(MAX) = 'DATA_table';
SET @sql = 'stuff(( SELECT distinct [Site]
FROM [ProcterGamble_analytics].[dbo].' + @table + '
FOR XML PATH(''''), TYPE
).value(''.'', ''NVARCHAR(MAX)'')
, 1, 0, '''')';
EXEC(@sql);
```
A good way to test dynamic SQL is to use `PRINT(@sql);` instead of `EXEC` to confirm the code that will be executed is what you want it to be. | Your `sql` statement is confused as to what is a string and what is the code that generates the string. I think this will work:
```
select @sql = 'select stuff((' + @presql + '
FOR XML PATH(''''), TYPE
).value(''.'', ''NVARCHAR(MAX)'')
, 1, 0, '''')';
```
When you execute `@sql`, it should return the value. | T-SQL: Using parameter for table name in stuff | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
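A quick illustration of the quote-doubling rule from the accepted answer: every single quote that must survive inside the dynamic T-SQL string is written twice in the outer literal. The sketch below builds the same string in Python (the table name is just a placeholder) and simulates, approximately, the collapse the T-SQL parser performs when it reads the literal.

```python
# Dynamic T-SQL as a Python string: each '' pair will collapse to one
# quote when SQL Server parses the outer literal.
table = "DATA_table"  # placeholder table name
sql = (
    "stuff(( SELECT distinct [Site]\n"
    "FROM [ProcterGamble_analytics].[dbo]." + table + "\n"
    "FOR XML PATH(''''), TYPE\n"
    ").value(''.'', ''NVARCHAR(MAX)'')\n"
    ", 1, 0, '''')"
)
# Simulate the parser collapsing each '' pair into a single quote.
executed = sql.replace("''", "'")
print(executed.splitlines()[2])  # -> FOR XML PATH(''), TYPE
```

So `PATH('''')` in the source becomes `PATH('')` in the executed code, which is exactly what the static version used.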
I have One table with People, and I have a second table to record if these people are "Absent"
The Absent Table (Absences) has the AbsenceID, PersonID and AbsentDate
The People Table has PersonID, FirstName, LastName
I am trying to make an SQL query that will get all the values from the People table, except for those that are absent that day. So if, for example, Joe is listed in the Absences table and the AbsentDate is 11/01/2014, I want his name excluded from the results.
I've tried the below, but it returns all the rows:
```
SELECT * FROM People
where not exists(Select PersonID
from Absences
where Absences.AbsentDate = 11/01/2014)
```
I'm sure it's something very simple that I am missing. Any help would be appreciated | ```
SELECT * FROM People
where PersonID NOT IN (Select PersonID
from Absences
where DateValue(Absences.AbsentDate) = '20140111')
^ YYYYMMDD
``` | Since `AbsentDate` is a Date/Time field, it is storing both date and time and you are looking for equality on both. So '11/01/2014' is equivalent to searching for '11/01/2014 00:00:00.000', which I can almost bet nothing is stored as. So, the better method is to look for your value *between* '11/01/2014 00:00:00.0000' and '11/01/2014 23:59:59.999', which is what the `BETWEEN` operator does.
```
SELECT *
FROM People
LEFT JOIN Absences ON (People.PersonID = Absences.PersonID) AND (Absences.AbsentDate BETWEEN '11/01/2014' AND '12/01/2014')
WHERE AbsenceID IS NULL
```
I'm not a fan of `NOT IN` queries as they can be non-performant in general. But a `LEFT JOIN` where the joined table's primary key is `NULL` works quite nicely (assuming appropriate indexing, etc.) | Select From One Table where Does not exist in another | [
"",
"sql",
"ms-access",
""
] |
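The root cause in the question is that the subquery never references PersonID, so whenever any absence row matches the date, every person is excluded. A small SQLite/Python sketch of the correlated `NOT EXISTS` version (Access syntax differs slightly; dates are ISO strings here, and the data is invented):

```python
import sqlite3

# Stand-ins for the Access tables from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE People (PersonID INTEGER, FirstName TEXT)")
con.execute("CREATE TABLE Absences (AbsenceID INTEGER, PersonID INTEGER, AbsentDate TEXT)")
con.executemany("INSERT INTO People VALUES (?, ?)", [(1, "Joe"), (2, "Ann")])
con.execute("INSERT INTO Absences VALUES (1, 1, '2014-01-11')")

# The subquery is correlated on PersonID, so each person is checked
# against *their own* absence rows only.
present = con.execute("""
    SELECT p.PersonID, p.FirstName
    FROM People p
    WHERE NOT EXISTS (SELECT 1 FROM Absences a
                      WHERE a.PersonID = p.PersonID
                        AND a.AbsentDate = '2014-01-11')
""").fetchall()
print(present)  # -> [(2, 'Ann')]
```

Joe is absent on that date and is excluded; Ann has no matching absence row and is returned.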
I'm feeling a little rusty with creating queries in MySQL. I thought I could solve this, but I'm having no luck and searching around doesn't result in anything similar...
Basically, I have two tables. I want to select everything from one table and the matching row from the second table. However, I only want to have the first result from the second table. I hope that makes sense.
The rows in the `daily_entries` table are unique. There will be one row for each day, but maybe not everyday. The second table `notes` contains many rows, each of which are associated with *ONE* row from `daily_entries`.
Below are examples of my tables;
**Table One**
```
mysql> desc daily_entries;
+----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+----------------+
| eid | int(11) | NO | PRI | NULL | auto_increment |
| date | date | NO | | NULL | |
| location | varchar(100) | NO | | NULL | |
+----------+--------------+------+-----+---------+----------------+
```
**Table Two**
```
mysql> desc notes;
+---------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+----------------+
| task_id | int(11) | NO | PRI | NULL | auto_increment |
| eid | int(11) | NO | MUL | NULL | |
| notes | text | YES | | NULL | |
+---------+---------+------+-----+---------+----------------+
```
What I need to do, is select all entries from `notes`, with only one result from `daily_entries`.
Below is an example of how I want it to look:
```
+----------------------------------------------+---------+------------+----------+-----+
| notes | task_id | date | location | eid |
+----------------------------------------------+---------+------------+----------+-----+
| Another note | 3 | 2014-01-02 | Home | 2 |
| Enter a note. | 1 | 2014-01-01 | Away | 1 |
| This is a test note. To see what happens. | 2 | | Away | 1 |
| Testing another note | 4 | | Away | 1 |
+----------------------------------------------+---------+------------+----------+-----+
4 rows in set (0.00 sec)
```
Below is the query that I currently have:
```
SELECT notes.notes, notes.task_id, daily_entries.date, daily_entries.location, daily_entries.eid
FROM daily_entries
LEFT JOIN notes ON daily_entries.eid=notes.eid
ORDER BY daily_entries.date DESC
```
Below is an example of how it looks with my query:
```
+----------------------------------------------+---------+------------+----------+-----+
| notes | task_id | date | location | eid |
+----------------------------------------------+---------+------------+----------+-----+
| Another note | 3 | 2014-01-02 | Home | 2 |
| Enter a note. | 1 | 2014-01-01 | Away | 1 |
| This is a test note. To see what happens. | 2 | 2014-01-01 | Away | 1 |
| Testing another note | 4 | 2014-01-01 | Away | 1 |
+----------------------------------------------+---------+------------+----------+-----+
4 rows in set (0.00 sec)
```
At first I thought I could simply `GROUP BY daily_entries.date`; however, that returned only the first row of each matching set. Can this even be done? I would greatly appreciate any help someone can offer. Using `Limit` at the end of my query obviously limited it to the value that I specified, but applied it to everything, which was to be expected. | Basically, there's nothing wrong with your query. I believe it is exactly what you need because it is returning the data you want. You should not look at it as if it is duplicating your `daily_entries`; you should look at it as returning all `notes` with their associated `daily_entry`.
Of course, you can achieve what you described in your question (there's already an answer that solves this issue), but think twice before you do it, because such nested queries can add noticeable performance overhead to your database server.
I'd recommend keeping your query as simple as possible with a single `LEFT JOIN` (which is all you need) and then letting consuming applications manipulate the data and present it the way they need to. | Use MySQL's non-standard GROUP BY functionality:
```
SELECT n.notes, n.task_id, de.date, de.location, de.eid
FROM notes n
LEFT JOIN (select * from
(select * from daily_entries ORDER BY date DESC) x
group by eid) de ON de.eid = n.eid
``` | MySQL - Select everything from one table, but only first matching value in second table | [
"",
"mysql",
"sql",
"select",
""
] |
Guys I have this table
```
+--------------------+------+
|stime (datetime) |svalue|
+--------------------+------+
|1/13/2014 8:40:00 AM|5 |
+--------------------+------+
|1/13/2014 8:45:00 AM|6 |
+--------------------+------+
|1/13/2014 8:46:00 AM|5 |
+--------------------+------+
|1/13/2014 8:50:00 AM|4 |
+--------------------+------+
```
Would it be possible in MSSQL to create a query that returns the data at an interval of 1 minute and, if a given minute does not exist, takes the value of the closest earlier row (`WHERE stime <=`) and assigns that value to the minute?
So the result I'm trying to get would look like this:
```
+--------------------+------+
|stime (datetime) |svalue|
+--------------------+------+
|1/13/2014 8:40:00 AM|5 |
+--------------------+------+
|1/13/2014 8:41:00 AM|5 |
+--------------------+------+
|1/13/2014 8:42:00 AM|5 |
+--------------------+------+
|1/13/2014 8:43:00 AM|5 |
+--------------------+------+
|1/13/2014 8:44:00 AM|5 |
+--------------------+------+
|1/13/2014 8:45:00 AM|6 |
+--------------------+------+
|1/13/2014 8:46:00 AM|5 |
+--------------------+------+
|1/13/2014 8:47:00 AM|5 |
+--------------------+------+
|1/13/2014 8:48:00 AM|5 |
+--------------------+------+
|1/13/2014 8:49:00 AM|5 |
+--------------------+------+
|1/13/2014 8:50:00 AM|4 |
+--------------------+------+
```
Thanks in advance! | You can use a [CTE](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) to generate time sequence from `MIN(stime)` to `MAX(stime)`:
```
WITH TMinMax as
(
SELECT MIN(stime) as MinTime,
MAX(stime) as MaxTime
FROM T
)
,CTE(stime) as
(
SELECT MinTime FROM TMinMax
UNION ALL
SELECT DATEADD(minute,1, stime )
FROM CTE
WHERE DATEADD(minute,1, stime )<=
(SELECT MaxTime from TMinMax)
)
select stime,
(SELECT TOP 1 svalue
FROM T
WHERE stime<=CTE.Stime
ORDER BY stime DESC) as svalue
from CTE
ORDER BY stime
```
`SQLFiddle demo` | This seems to do the job:
```
declare @t table (stime datetime,svalue int)
insert into @t(stime,svalue) values
('2014-01-13T08:40:00',5),
('2014-01-13T08:45:00',6),
('2014-01-13T08:46:00',5),
('2014-01-13T08:50:00',4)
;with times as (
select MIN(stime) as stime,MAX(stime) as etime from @t
union all
select DATEADD(minute,1,stime),etime from times where stime < etime
)
select
t.stime,t_1.svalue
from
times t
left join
@t t_1
on
t.stime >= t_1.stime --Find an earlier or equal row
left join
@t t_2
on
t.stime >= t_2.stime and --Find an earlier or equal row
t_2.stime > t_1.stime --That's a better match than t_1
where
t_2.stime is null --Second join fails
order by t.stime
option (maxrecursion 0)
```
We create a CTE called `times` that finds all of the minutes between the minimum and maximum `stime` values. We then attempt two joins back to the original table, with the comments indicating what those two joins are attempting to find. We then, in the `WHERE` clause, eliminate any rows where the `t_2` join succeeded - which are the exact rows where the `t_1` join found the best matching row from the table.
`option (maxrecursion 0)` is just to allow the CTE to be used with a wider range of input values, where generating all of the `stime` values might require a lot of recursion. | Fill in missing values | [
"",
"sql",
"sql-server",
""
] |
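The forward-fill idea from both answers (generate the minute series, then grab the closest earlier reading) is portable. Here is a sketch in SQLite via Python, using `WITH RECURSIVE` and ISO datetime strings in place of the SQL Server types:

```python
import sqlite3

# Sample data from the question, stored as ISO datetime strings.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (stime TEXT, svalue INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [
    ("2014-01-13 08:40:00", 5),
    ("2014-01-13 08:45:00", 6),
    ("2014-01-13 08:46:00", 5),
    ("2014-01-13 08:50:00", 4),
])

# Recursive CTE builds the minute series from MIN to MAX; a correlated
# subquery then pulls the last reading at or before each minute.
rows = con.execute("""
    WITH RECURSIVE mins(stime) AS (
        SELECT MIN(stime) FROM t
        UNION ALL
        SELECT datetime(stime, '+1 minute') FROM mins
        WHERE stime < (SELECT MAX(stime) FROM t)
    )
    SELECT stime,
           (SELECT svalue FROM t
            WHERE t.stime <= mins.stime
            ORDER BY t.stime DESC LIMIT 1) AS svalue
    FROM mins ORDER BY stime
""").fetchall()
print(len(rows), rows[4], rows[5])
```

This yields 11 rows (08:40 through 08:50), with 08:41-08:44 filled with 5 and 08:45 switching to 6, matching the expected output in the question.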
What's the simplest way to write and, then, read `Emoji` symbols in `Oracle` table?
Currently I have this situation:
* `iOS` client passes encoded `Emojis`: `One%20more%20time%20%F0%9F%98%81%F0%9F%98%94%F0%9F%98%8C%F0%9F%98%92`. For example, `%F0%9F%98%81` means ;
* Column type is `nvarchar2(2000)`, so when I view the saved text via `Oracle SQL Developer` it looks like: `One more time ????????`. | In the end, we do `BASE64` encoding/decoding of the text. It’s suitable for small texts. | This seems more a client problem than a database problem. Certain iOS programs are capable of interpreting that string and showing an image instead of that string.
SQL Developer does not do that.
As long as the data stored in the database is the same as the data retrieved from the database, you have no problem. | Storing and returning emojis | [
"",
"sql",
"oracle",
"unicode",
"emoji",
""
] |
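For reference, the round trip the accepted answer settled on can be sketched in Python: percent-decode the iOS payload, Base64-encode it for storage, and decode it again on the way out. This only illustrates the encoding mechanics, not the Oracle column handling itself:

```python
import base64
from urllib.parse import unquote

# Percent-encoded payload like the one in the question (first emoji only).
raw = "One%20more%20time%20%F0%9F%98%81"
text = unquote(raw)  # percent-decoding yields real UTF-8 text with the emoji

# Base64 round trip, as the accepted answer describes for storage.
stored = base64.b64encode(text.encode("utf-8")).decode("ascii")
restored = base64.b64decode(stored).decode("utf-8")
print(restored == text, restored[-1] == "\U0001F601")  # -> True True
```

`%F0%9F%98%81` is the UTF-8 byte sequence for U+1F601, so the emoji survives the round trip intact; the Base64 string itself is plain ASCII, which is why it stores safely in a column that otherwise mangles the emoji.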
I have a SQL Server database (2012 express) with many tables.
I have produced three different VIEWS based on different combinations of the underlying tables.
Each of these views consists of three columns, Year, Month & Total
The Total column in each of the 3 Views is of a different measure.
What I want to be able to do is to combine the three Totals into a single View
I have attempted this with the following script -
```
SELECT b.[Year], b.[Month], b.Fees AS [Billing],
f.Estimate AS [Estimate],
w.Fees AS [WIP]
FROM MonthlyBillingTotals AS b
FULL JOIN MonthlyFeeEstimates AS f
ON (b.[Year] = f.[Year] AND b.[Month] = f.[Month])
FULL JOIN MonthlyInstructionsWIP AS w
ON (b.[Year] = w.[Year] AND b.[Month] = w.[Month])
ORDER BY b.[Year], b.[Month]
```
Originally I tried INNER JOINS but of course unless the Year / Month combo existed in the first view (MonthlyBillingTotals) then it did not appear in the combined query. I therefore tried FULL JOINS, but the problem here is that I get some NULLS in the Year and Month columns, when they do not exist in the first view (MonthlyBillingTotals).
If the data in the three Views is as follows -

Then what I want is -

And even better (if it is possible) -

with the missing months filled in | You could try building the full list of Months/Years from your tables using a `UNION` subquery, and then use that to drive your joins. Something like this:
```
SELECT a.[Year], a.[Month], b.Fees AS [Billing],
f.Estimate AS [Estimate],
w.Fees AS [WIP]
FROM (SELECT a.[Year], a.[Month] FROM MonthlyBillingTotals AS a
UNION
SELECT b.[Year], b.[Month] FROM MonthlyFeeEstimates AS b
UNION
SELECT c.[Year], c.[Month] FROM MonthlyInstructionsWIP AS c) AS a
LEFT OUTER JOIN MonthlyBillingTotals AS b
ON (a.[Year] = b.[Year] AND a.[Month] = b.[Month])
LEFT OUTER JOIN MonthlyFeeEstimates AS f
ON (a.[Year] = f.[Year] AND a.[Month] = f.[Month])
LEFT OUTER JOIN MonthlyInstructionsWIP AS w
ON (a.[Year] = w.[Year] AND a.[Month] = w.[Month])
ORDER BY a.[Year], a.[Month]
``` | You could set up a small date table with year and month and left join the views with that, and use the `ISNULL(variable,0)` function to replace NULL with 0. Another option, instead of a date table, would be to use a common table expression to generate a date range to join with. In any case, I suggest you look up the date table (or numbers table) concept; it can be a really useful tool.
Edit: added an example on how a date table can be created (for reference):
```
declare @year_month table (y int, m int)
;with cte as (
select cast('2000-01-01' as datetime) date_value
union all
select date_value + 1
from cte
where date_value + 1 < '2010-12-31'
)
insert @year_month (y, m)
select distinct year(date_value), month(date_value)
from cte
order by 1, 2
option (maxrecursion 0)
select * from @year_month
``` | Combining Multiple SQL Views ON Year & Month | [
"",
"sql",
""
] |
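The accepted pattern (union the Year/Month keys, LEFT JOIN each view, wrap the totals in ISNULL/IFNULL) can be sketched in SQLite via Python; the three tables and their values below are invented stand-ins for the views:

```python
import sqlite3

# Three stand-in "views", each with a (year, month, total) shape.
con = sqlite3.connect(":memory:")
for name in ("billing", "estimates", "wip"):
    con.execute(f"CREATE TABLE {name} (y INTEGER, m INTEGER, total INTEGER)")
con.execute("INSERT INTO billing   VALUES (2014, 1, 100)")
con.execute("INSERT INTO estimates VALUES (2014, 2, 200)")
con.execute("INSERT INTO wip       VALUES (2014, 1, 50)")

# UNION builds the complete distinct key list, each view is LEFT JOINed
# against it, and IFNULL fills the gaps with 0.
rows = con.execute("""
    SELECT k.y, k.m,
           IFNULL(b.total, 0), IFNULL(e.total, 0), IFNULL(w.total, 0)
    FROM (SELECT y, m FROM billing
          UNION SELECT y, m FROM estimates
          UNION SELECT y, m FROM wip) k
    LEFT JOIN billing   b ON b.y = k.y AND b.m = k.m
    LEFT JOIN estimates e ON e.y = k.y AND e.m = k.m
    LEFT JOIN wip       w ON w.y = k.y AND w.m = k.m
    ORDER BY k.y, k.m
""").fetchall()
print(rows)  # -> [(2014, 1, 100, 0, 50), (2014, 2, 0, 200, 0)]
```

Every Year/Month that appears in any of the three sources gets a row, and the missing measures come back as 0 rather than NULL, which matches the "even better" output the question asked for.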
Please help me delete duplicate records based on `rownum`. Duplicate ids should not normally happen, but here it is the case.
```
select rownum,a.* from a;
ROWNUM ID NAME
---------- ---------- ----------
1 1 leo_1
2 2 leo_2
3 3 leo_3
4 1 leo_1
5 2 leo_2
6 3 leo_3
```
Query Tried but deletes all 6 rows.
```
DELETE FROM a
WHERE rownum not in
(SELECT MIN(rownum)
FROM a
GROUP BY name);
```
But this Query gives correct result:
```
SELECT MIN(rownum)
FROM a
GROUP BY name
ROWNUM
----------
1
2
3
```
Expected Result :
```
ROWNUM ID NAME
---------- ---------- ----------
4 1 leo_1
5 2 leo_2
6 3 leo_3
``` | Use the `rowid`
```
DELETE FROM table_name a
WHERE EXISTS( SELECT 1
FROM table_name b
WHERE a.id = b.id
AND a.name = b.name
AND a.rowid > b.rowid )
```
Of course, you could do `a.rowid < b.rowid` as well. The `rowid` is just the physical address of the row so it doesn't matter whether you delete the row that has the larger or the smaller address.
Your expected results, though, don't make sense.
```
Expected Result :
ROWNUM ID NAME
---------- ---------- ----------
4 1 leo_1
5 2 leo_2
6 3 leo_3
```
The `rownum` of a result set is always assigned at query time. That means that a particular row may appear with different `rownum` values in different queries (or when the same query is run multiple times). `rownum` is always sequential so you can never have a `rownum` of 4 in a result set without also having `rownum` values of 1, 2, and 3 in the same result set. Whichever duplicate row you delete, your result will be
Expected Result :
```
ROWNUM ID NAME
---------- ---------- ----------
1 1 leo_1
2 2 leo_2
3 3 leo_3
```
But the `rownum` values are arbitrary. It would be just as valid for Oracle to return
Expected Result :
```
ROWNUM ID NAME
---------- ---------- ----------
1 2 leo_2
2 3 leo_3
3 1 leo_1
``` | ```
DELETE FROM a
WHERE rowid not in
(SELECT MIN(rowid) FROM a group BY name);
``` | Delete duplicate records using rownum in sql | [
"",
"sql",
"oracle",
""
] |
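SQLite also exposes an implicit rowid, so the rowid-based de-duplication discussed above can be tried directly. A minimal sketch using the `NOT IN (SELECT MIN(rowid) ...)` form (the `EXISTS` form from the accepted answer keeps the same rows):

```python
import sqlite3

# Duplicate rows like the question's table a.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER, name TEXT)")
con.executemany("INSERT INTO a VALUES (?, ?)",
                [(1, "leo_1"), (2, "leo_2"), (3, "leo_3"),
                 (1, "leo_1"), (2, "leo_2"), (3, "leo_3")])

# Keep only the copy with the smallest rowid in each (id, name) group.
con.execute("""
    DELETE FROM a
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM a GROUP BY id, name)
""")
remaining = con.execute("SELECT id, name FROM a ORDER BY id").fetchall()
print(remaining)  # -> [(1, 'leo_1'), (2, 'leo_2'), (3, 'leo_3')]
```

As the accepted answer notes, whether you keep the smallest or the largest rowid is arbitrary: the rowid is just a physical row address, not a stable position in any result set.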
I need to find an exact file name by executing an SQL query on the table containing the file\_name column. The file\_name column stores the complete path of each file, like D:/Workspace/app.js.
I can find app.js with this query:
```
SELECT *
FROM details
WHERE file_name LIKE '%app.js'
```
but the problem is if I write the query like
```
SELECT *
FROM details
WHERE file_name LIKE '%p.js'
```
it lists the app.js file also. So could anyone guide me on how to get an exact match for a file name from the database if the file names are stored with the complete path?
Thanks in advance. | Thanks to all of you,
I got the result I wanted:
```
sql = "SELECT * FROM details WHERE file_name RLIKE ?";
ps = conn.prepareStatement(sql);
ps.setString(1, "[[:<:]]"+fname+"[[:>:]]");
```
This gives the exact string that the fname variable contains. | How about this?
```
SELECT * FROM details WHERE file_name LIKE '%/app.js' OR file_name LIKE '%\app.js'
``` | Find exact file name from the file name with complete path stored in database | [
"",
"sql",
""
] |
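To see why the `%p.js` pattern over-matches and how anchoring on the path separator fixes it, here is a small SQLite/Python sketch (forward-slash paths only; Windows backslash paths would need a second `LIKE`):

```python
import sqlite3

# Two stored paths: one is the wanted file, one merely ends with "app.js".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE details (file_name TEXT)")
con.executemany("INSERT INTO details VALUES (?)",
                [("D:/Workspace/app.js",), ("D:/Workspace/myapp.js",)])

fname = "app.js"
# Unanchored suffix match: also catches myapp.js.
loose = con.execute("SELECT file_name FROM details WHERE file_name LIKE ?",
                    ("%" + fname,)).fetchall()
# Require the separator right before the name (plus an exact match for
# files stored with no directory part at all).
exact = con.execute("""SELECT file_name FROM details
                       WHERE file_name LIKE ? OR file_name = ?""",
                    ("%/" + fname, fname)).fetchall()
print(len(loose), exact)  # -> 2 [('D:/Workspace/app.js',)]
```

This is the same idea as the word-boundary `RLIKE` in the accepted answer: the match must start exactly where the file name starts, not anywhere inside a longer name.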
I've have two sql queries which I'm trying to combine
The first:
```
SELECT * FROM wp_posts
JOIN wp_postmeta on (post_id=ID)
WHERE meta_key = "packageID" and meta_value = 1
ORDER BY post_date limit 50
```
Joins the WordPress wp\_post table to wp\_postmeta and gets all the posts matching packageID = 1 (I think it might be an inelegant way of doing it, but it works).
The second
```
SELECT * FROM wp_postmeta
JOIN wp_posts ON (meta_value=ID)
WHERE post_id = 2110
AND meta_key = '_thumbnail_id'
```
again joins the wp\_post table to the wp\_postmeta table, so for the post with the id 2110 it successfully gets the thumbnail for that post. NB 2110 is just an example of an id.
In Wordpress a thumbnail is a kind of post. So in this example the text which constitutes post 2110 is a associated with post 2115 - the latter being the thumbnail
What I'm trying to do is get the list as in the first query but also get thumbnails associated with each post
I think I need two joins but I can't see how to do it (being an sql beginner)
NB this will be in a script outside Wordpress so I can't use Wordpress's built-in functions | You can try this one,if there are more than one thumbnails for the post you can get the list of thumbnails separated by comma
```
SELECT
*,
(SELECT
GROUP_CONCAT(meta_value)
FROM
wp_postmeta
WHERE post_id = wp.ID
AND wpm.meta_key = "_thumbnail_id") AS `thumbnails`
FROM
wp_posts wp
JOIN wp_postmeta wpm
ON (wpm.post_id = wp.ID)
WHERE wpm.meta_key = "packageID"
AND wpm.meta_value = 1
ORDER BY wp.post_date
LIMIT 50
```
> Note: GROUP\_CONCAT has a limit on the number of characters it can concatenate, but you
> can increase this limit via the `group_concat_max_len` system variable.
To get only one thumbnail you can try this
```
SELECT
*,
(SELECT
(meta_value)
FROM
wp_postmeta
WHERE post_id = wp.ID
AND wpm.meta_key = "_thumbnail_id" LIMIT 1)
FROM
wp_posts wp
JOIN wp_postmeta wpm
ON (wpm.post_id = wp.ID)
WHERE wpm.meta_key = "packageID"
AND wpm.meta_value = 1
ORDER BY wp.post_date
LIMIT 50
``` | try with the following code
```
SELECT * FROM wp_posts wp JOIN wp_postmeta wm ON (wm.post_id = wp.ID) WHERE wm.meta_key = "packageID" AND wm.meta_value = 1 ORDER BY wp.post_date LIMIT 50;
```
use proper alias and try it. | combine two sql queries - | [
"",
"mysql",
"sql",
"wordpress",
""
] |
I am trying to delete large amounts of data from a table that is a vendor design. It is over-indexed and any update/insert/deletes are painful. Removing NC indexes is not available to me.
I am testing different ways to delete the data in batches. I discovered today that the below statement is considerably faster when I do not use a variable to hold the date. Why would this make such a difference? Use of TempDB? Do you have a better solution you would be willing to share? Performance is even worse when explicitly typed date is replaced with getdate().
```
--example 1:
--very slow
declare @cleanday as datetime
select @cleanday = dateadd(day,-60,DATEADD(dd, 0, DATEDIFF(dd, 0, CAST('2013-12-22' as datetime))))
delete ES1
from ( select top (10000) es.id1 from es where
es.ID2 in
(
21,
20,
19,
151
)
and es.DateCreated < @cleanday
order by es.id1
) ES1
--example 2:
--much faster
delete ES1
from ( select top (10000) es.id1 from es where
es.ID2 in
(
21,
20,
19,
151
)
and es.DateCreated < dateadd(day,-60,CAST('2013-12-22' as datetime))
order by es.id1
) ES1
``` | ```
/* Some Test Data */
CREATE TABLE Stats_Test_Table (ID INT NOT NULL PRIMARY KEY IDENTITY(1,1), VALUE INT)
GO
DECLARE @i INT = 1
WHILE (@i <= 100)
BEGIN
INSERT INTO Stats_Test_Table
VALUES (@i)
SET @i = @i + 1;
END
GO
/*
Execute the following command to flush any execution plans already
existing in your cache.
**WARNING**
DO NOT execute this command on your production server as it will
flush all of the cached execution plans for all queries.
I guess you are doing all this on a test server anyway.
*/
-- Clear cache
DBCC FREEPROCCACHE;
GO
/*
Four Queries with exactly the same syntax only difference is
for 1st Two queries I have Hardcoded the value in WHERE clause
for last two queries I have used an INT parameter in WHERE clause
*/
--Query 1 with Hardcoded value in WHERE clause
SELECT *
FROM Stats_Test_Table
WHERE ID = 50;
GO
--Query 2 with Hardcoded value in WHERE clause
SELECT *
FROM Stats_Test_Table
WHERE ID = 51;
GO
--Query 3 with Variable @ID_1 value in WHERE clause
DECLARE @ID_1 INT;
SET @ID_1 = 52;
SELECT *
FROM Stats_Test_Table
WHERE ID = @ID_1;
GO
--Query 4 with Variable @ID_2 value in WHERE clause
DECLARE @ID_2 INT;
SET @ID_2 = 52;
SELECT *
FROM Stats_Test_Table
WHERE ID = @ID_2;
GO
/*
Now execute the following statement to get all the cached execution plans.
Remember: once you have cleared the plan cache with the DBCC command,
you will have to execute all of the above queries and the following one
soon afterwards, because SQL Server is constantly executing queries
behind the scenes that we don't see. So the longer you wait, the more
results you will have in the result set of the following query.
*/
-- Query DMVs for execution plan reuse statistics
SELECT stats.execution_count AS [Execution_Count]
,p.size_in_bytes AS [Size]
,[sql].[text] AS [plan_text]
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_sql_text(p.plan_handle) sql
JOIN sys.dm_exec_query_stats stats
ON stats.plan_handle = p.plan_handle
ORDER BY [plan_text]
```
**Cached Execution Plans**
```
╔═════════════════╦═══════╦═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
║ Execution_Count ║ Size ║ plan_text ║
╠═════════════════╬═══════╬═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
║ 1 ║ 40960 ║ --Query 3 with Variable @ID_1 value in WHERE clause DECLARE @ID_1 INT; SET @ID_1 = 52; SELECT * FROM Stats_Test_Table WHERE ID = @ID_1; ║
║ 2 ║ 32768 ║ (@1 tinyint)SELECT * FROM [Stats_Test_Table] WHERE [ID]=@1 ║
║ 1 ║ 40960 ║ --Query 4 with Variable @ID_2 value in WHERE clause DECLARE @ID_2 INT; SET @ID_2 = 52; SELECT * FROM Stats_Test_Table WHERE ID = @ID_2; ║
╚═════════════════╩═══════╩═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
```
I executed four queries in total; let's say Q1, Q2, Q3 and Q4. SQL Server created 3 execution plans for me.
**Query 1 and 2**
`(@1 tinyint)SELECT * FROM [Stats_Test_Table] WHERE [ID]=@1`
> Now if you look closer at the result set of the above query, SQL Server
> created one execution plan for Q1 and reused it for Q2. Both had a
> hardcoded value in the WHERE clause.
>
> The execution plan with Execution\_Count 2 has a parameter appended to
> it (**=@1**). This is called auto-parameterization: SQL Server adds a
> parameter to an execution plan and reuses the plan for the next execution.
**Query 3 and 4**
> Now for Queries 3 and 4 we have two separate execution plans. Even
> though both queries are essentially the same, this time SQL Server decided
> not to reuse the same execution plan and created a new one for each
> query.
**Conclusion**
> When a query is passed a variable instead of a hardcoded value, SQL
> Server will create a new execution plan each time the query is
> executed.
>
> In your case, in the 1st query you passed a variable and in the 2nd
> query you passed a hardcoded value; therefore the 2nd query is faster than
> the 1st one :). | By moving the where clauses (or equivalent) to/from a variable vs. hard-coded values you can often see a difference in performance due to the query optimizer. I.e., when hard-coded, the optimizer may recognize that using a particular index would be optimal.
Sometimes you can get a big advantage by changing the isolation level, especially if you can stop deleting data in batches with an order by when you do this. | Significant Performance Difference in Delete using Derived Table | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have a sql code like below:
```
SELECT user_name, user_e_mail, user_phone
FROM users
WHERE user_name=:user_name
OR user_e_mail=:email
OR user_phone=:phone
```
Normally, I'm trying to get 0 or 1 (whether there is a match or not), but I don't want to create another SQL query. In short, my question is: how do I know which one matched (name, mail, or phone)?
I hope I could explain what my problem is. | The standard SQL syntax for this would return a separate flag for each condition:
```
SELECT user_name, user_e_mail, user_phone,
(case when user_name = :user_name then 1 else 0 end) as MatchesName,
(case when user_e_mail = :user_e_mail then 1 else 0 end) as MatchesEmail,
(case when user_phone = :user_phone then 1 else 0 end) as MatchesPhone
FROM users
WHERE user_name=:user_name OR user_e_mail=:email OR user_phone=:phone;
```
If you just want to know the first one that matches:
```
SELECT user_name, user_e_mail, user_phone,
(case when user_name = :user_name then 'Name'
when user_e_mail = :user_e_mail then 'Email'
when user_phone = :user_phone then 'Phone'
end) as which
FROM users
WHERE user_name=:user_name OR user_e_mail=:email OR user_phone=:phone;
``` | One approach would be like this:
```
SELECT user_name, user_e_mail, user_phone,
:user_name = user_name AS user_name_matches,
:email = user_e_mail AS email_matches,
:phone = user_phone AS phone_matches
FROM users
WHERE user_name=:user_name
OR user_e_mail=:email
OR user_phone=:phone
```
and here's a [SQL Fiddle that proves it](http://sqlfiddle.com/#!2/967a6f/1/0). | how to know which one matching when using WHERE and OR term | [
"",
"sql",
""
] |
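The flag-per-condition idea from the accepted answer can be sketched in SQLite via Python; a SQLite comparison already evaluates to 1/0, standing in for the `CASE WHEN ... THEN 1 ELSE 0 END` expressions (the user row below is invented):

```python
import sqlite3

# Invented users table with one row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (user_name TEXT, user_e_mail TEXT, user_phone TEXT)")
con.execute("INSERT INTO users VALUES ('bob', 'b@x', '555-1234')")

# Each comparison in the SELECT list produces a 1/0 flag, so one query
# tells us both *that* the row matched and *which* condition matched.
params = {"name": "bob", "email": "b@x", "phone": "000"}
row = con.execute("""
    SELECT user_name,
           user_name   = :name  AS name_matches,
           user_e_mail = :email AS email_matches,
           user_phone  = :phone AS phone_matches
    FROM users
    WHERE user_name = :name OR user_e_mail = :email OR user_phone = :phone
""", params).fetchone()
print(row)  # -> ('bob', 1, 1, 0)
```

The row is found via the `OR` filter, and the three flag columns show that name and e-mail matched while the phone did not.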
As part of a new process requirement, we will be creating a table which will contain approximately 3000-4000 records. We have a copy of these records in plain text in a txt file.
Loading these records in the table leaves me with two choices
1. Use a shell script to generate an SQL file containing INSERT statements for these records
* with the use of awk, shell variables, and loops, we can easily create the SQL file and script its execution
2. Use of SQL Loader.
* Realignment of the record list and generation of the ctl file are the only dependencies.
Which of the above two options would be the most efficient, in terms of DB resource usage and utilisation on the client server on which this is to be performed?
I do realise the number of records is rather small, but we may have to repeat this activity with a higher number of records (close to 60,000), in which case I would like to have the best possible option configured from the start. | `SQL*Loader` is the more efficient method. It gives you more control. You have the option to do a `DIRECT` load with `NOLOGGING`, which will reduce redo log generation, and when indexes have been disabled (as part of direct loading), the loading goes faster. The downside is that if the load is interrupted, indexes are left `unusable`.
But, considering the advantages, `SQL*Loader` is the best approach. And you will feel the difference when you have millions of records and many loading jobs running in parallel. I have heard DBAs complaining about the log size when we do conventional `INSERT` statement loading with 200+ such jobs running in parallel. The larger the data volume, the larger the difference you'll see in performance. | SQL\*Loader will be more efficient than thousands of individual `INSERT` statements. Even with 60,000 rows, though, both approaches should complete in a matter of seconds. | What is more efficient INSERT command or SQL Loader for bulk upload - ORACLE 11g R2 | [
"",
"sql",
"oracle",
"sql-loader",
"oracle11gr2",
""
] |
I have this `DATETIME` field in my oracle database shown in the image

I'm trying to make a query which returns something between specific dates but this query returns nothing.
```
SELECT *
FROM tbl_meter
WHERE TO_DATE(DATETIME,'DD/MM/YYYY') BETWEEN '%s' AND '%s'
```
What am I missing?
**EDIT:**
```
SELECT * FROM tbl_meter WHERE DATETIME BETWEEN '15/01/2014' AND '07/01/2014'
```
 | ```
SELECT *
FROM tbl_meter
WHERE TRUNC(DATETIME) BETWEEN to_date('%s','DD/MM/YYYY') AND to_date('%s','DD/MM/YYYY');
```
Assuming `DATETIME` is of the `DATE` data type.
If it is of `TIMESTAMP` datatype then,
```
SELECT *
FROM tbl_meter
WHERE TRUNC(CAST(DATETIME AS DATE)) BETWEEN to_date('%s','DD/MM/YYYY') AND to_date('%s','DD/MM/YYYY');
```
If it is `VARCHAR`, then
```
SELECT *
FROM tbl_meter
WHERE TRUNC(TO_TIMESTAMP(DATETIME,'DD/MM/YYYY HH24:MI:SS.FF6')) BETWEEN to_date('%s','DD/MM/YYYY') AND to_date('%s','DD/MM/YYYY');
``` | It looks like you are using the same parameter twice. I expect you to have
```
select *
from tbl_meter
where to_date(datetime, 'DD/MM/YYYY') between to_date('%s1', 'DD/MM/YYYY') and to_date('%s2', 'DD/MM/YYYY')
``` | Oracle db get values between specific dates | [
"",
"sql",
"oracle",
""
] |
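A portable sketch of the accepted fix: truncate the stored datetime to a date before comparing, and keep the lower bound first in `BETWEEN` (the question's `BETWEEN '15/01/2014' AND '07/01/2014'` has the bounds reversed, which is why it returned nothing). SQLite's `date()` stands in for Oracle's `TRUNC`, and the column is renamed `dt` here:

```python
import sqlite3

# Invented rows: one inside the wanted range, one outside it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_meter (dt TEXT)")
con.executemany("INSERT INTO tbl_meter VALUES (?)",
                [("2014-01-08 13:45:00",), ("2014-01-20 09:00:00",)])

# date() drops the time part; the lower bound of BETWEEN comes first.
rows = con.execute("""
    SELECT dt FROM tbl_meter
    WHERE date(dt) BETWEEN '2014-01-07' AND '2014-01-15'
""").fetchall()
print(rows)  # -> [('2014-01-08 13:45:00',)]
```

With the bounds in the right order, the 8 January reading is returned even though it carries a time-of-day, while the 20 January reading falls outside the range.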
It's a very simple query:
```
SELECT * FROM temp_company WHERE number NOT IN (SELECT number FROM company)
```
It was taking 15 minutes before, but that was on a MySQL installation with too low a buffer pool size, and 15 minutes was OK because this is a monthly task. I upgraded to MySQL 5.7 (from something like 5.1 or 5.2) as the original install was 32-bit and I couldn't raise the InnoDB buffer pool size to the minimum required 10GB for this DB (I've set it to 16GB on a machine with 32GB RAM). I've now gone to run this query a month later and it was still running after 6 hours.
The EXPLAIN for the above is:
```
id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
1 | PRIMARY | temp_company | | ALL | | | | | 3226661 | 100.00 | Using where |
2 | DEPENDENT SUBQUERY | company | | index | number | number | 33 | | 3383517 | 100.00 | Using where |
```
The PRIMARY index on company and temp\_company is id, but number is what they match on, and that is a KEY in both. Does the above suggest it's not using the index for the temp\_company table?
The other logical query I thought to try was:
```
EXPLAIN SELECT tc.* FROM temp_company tc
LEFT JOIN company c on c.number = tc.number
WHERE c.number IS NULL
```
This is just as slow and the EXPLAIN is:
```
id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
1 | SIMPLE | tc | | ALL | | | | | 3226661 | 100.00 | |
2 | SIMPLE | c | | index | number | number | 33 | | 3383517 | 100.00 | Using where; Using index; Using join buffer (Block Nested Loop) |
```
Any help would be much appreciated. Perhaps Mysql changed the way it finds indexes?
**UPDATE 1**
SHOW CREATE's:
company
```
CREATE TABLE `company` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`count_telephone` mediumint(8) unsigned NOT NULL,
`count_fax` mediumint(8) unsigned NOT NULL,
`count_person` mediumint(8) unsigned NOT NULL,
`person_date` date DEFAULT NULL COMMENT 'Date the company_person relation was updated',
`count_email_address` mediumint(8) unsigned NOT NULL,
`name` varchar(255) DEFAULT NULL,
`url` varchar(255) DEFAULT NULL,
`url_date` date DEFAULT NULL,
`url_status` smallint(5) unsigned NOT NULL DEFAULT '0' COMMENT 'Failure count for crawling the URL',
`website_stamp_start` int(10) unsigned DEFAULT NULL,
`website_stamp` int(10) unsigned DEFAULT NULL,
`ch_url` varchar(255) DEFAULT NULL COMMENT 'Companies house URL',
`keywords_stamp_start` int(10) unsigned DEFAULT NULL,
`keywords_stamp` int(11) DEFAULT NULL,
`number` varchar(30) CHARACTER SET ascii COLLATE ascii_bin DEFAULT NULL,
`category` varchar(255) DEFAULT NULL,
`status` varchar(255) DEFAULT NULL,
`status_date` date DEFAULT NULL COMMENT 'Date the status field was updated',
`country_of_origin` varchar(80) DEFAULT NULL,
`dissolution_date` date DEFAULT NULL,
`incorporation_date` date DEFAULT NULL,
`account_ref_day` smallint(5) unsigned DEFAULT NULL,
`account_ref_month` smallint(5) unsigned DEFAULT NULL,
`account_next_due_date` date DEFAULT NULL,
`account_last_made_up_date` date DEFAULT NULL,
`account_category` varchar(255) DEFAULT NULL,
`returns_next_due_date` date DEFAULT NULL,
`returns_last_made_up_date` date DEFAULT NULL,
`mortgages_num_charges` smallint(5) unsigned DEFAULT NULL,
`mortgages_num_outstanding` smallint(5) unsigned DEFAULT NULL,
`mortgages_num_part_satisfied` smallint(5) unsigned DEFAULT NULL,
`mortgages_num_satisfied` smallint(5) unsigned DEFAULT NULL,
`partnerships_num_gen_partners` smallint(5) unsigned DEFAULT NULL,
`partnerships_num_lim_partners` smallint(5) unsigned DEFAULT NULL,
`ext_name` varchar(255) DEFAULT NULL,
`turnover` decimal(18,2) DEFAULT NULL,
`turnover_date` date DEFAULT NULL,
`trade_debtors` decimal(18,2) DEFAULT NULL,
`other_debtors` decimal(18,2) DEFAULT NULL,
`debtors_date` date DEFAULT NULL,
`real_turnover_band` int(11) DEFAULT NULL,
`est_turnover_band` int(11) DEFAULT NULL,
`ext_address_date` date DEFAULT NULL,
`care_of` varchar(255) DEFAULT NULL,
`po_box` varchar(60) DEFAULT NULL,
`line_1` varchar(255) DEFAULT NULL,
`line_2` varchar(255) DEFAULT NULL,
`town` varchar(60) DEFAULT NULL,
`county` varchar(60) DEFAULT NULL,
`country` varchar(60) DEFAULT NULL,
`post_code` varchar(20) DEFAULT NULL,
`DirScrapeID` int(10) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `homepage_keywords_stamp` (`keywords_stamp`),
KEY `number` (`number`),
KEY `url` (`url`),
KEY `town` (`town`),
KEY `county` (`county`),
KEY `post_code` (`post_code`),
KEY `name` (`name`),
KEY `website_stamp` (`website_stamp`),
KEY `website_stamp_start` (`website_stamp_start`),
KEY `keywords_stamp_start` (`keywords_stamp_start`),
KEY `turnover` (`turnover`),
KEY `status` (`status`),
KEY `category` (`category`),
KEY `incorporation_date` (`incorporation_date`),
KEY `real_turnover_band` (`real_turnover_band`),
KEY `est_turnover_band` (`est_turnover_band`)
) ENGINE=InnoDB AUTO_INCREMENT=3706459 DEFAULT CHARSET=utf8
```
temp\_company:
```
CREATE TABLE `temp_company` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
`url` varchar(255) DEFAULT NULL,
`ch_url` varchar(255) DEFAULT NULL,
`number` varchar(30) DEFAULT NULL,
`category` varchar(255) DEFAULT NULL,
`status` varchar(255) DEFAULT NULL,
`country_of_origin` varchar(80) DEFAULT NULL,
`dissolution_date` date DEFAULT NULL,
`incorporation_date` date DEFAULT NULL,
`account_ref_day` smallint(5) unsigned DEFAULT NULL,
`account_ref_month` smallint(5) unsigned DEFAULT NULL,
`account_next_due_date` date DEFAULT NULL,
`account_last_made_up_date` date DEFAULT NULL,
`account_category` varchar(255) DEFAULT NULL,
`returns_next_due_date` date DEFAULT NULL,
`returns_last_made_up_date` date DEFAULT NULL,
`mortgages_num_charges` smallint(5) unsigned DEFAULT NULL,
`mortgages_num_outstanding` smallint(5) unsigned DEFAULT NULL,
`mortgages_num_part_satisfied` smallint(5) unsigned DEFAULT NULL,
`mortgages_num_satisfied` smallint(5) unsigned DEFAULT NULL,
`partnerships_num_gen_partners` smallint(5) unsigned DEFAULT NULL,
`partnerships_num_lim_partners` smallint(5) unsigned DEFAULT NULL,
`ext_name` varchar(255) DEFAULT NULL,
`turnover` decimal(18,2) DEFAULT NULL,
`turnover_date` date DEFAULT NULL,
`trade_debtors` decimal(18,2) DEFAULT NULL,
`other_debtors` decimal(18,2) DEFAULT NULL,
`debtors_date` date DEFAULT NULL,
`real_turnover_band` int(11) DEFAULT NULL,
`est_turnover_band` int(11) DEFAULT NULL,
`ext_address_date` date DEFAULT NULL,
`care_of` varchar(255) DEFAULT NULL,
`po_box` varchar(60) DEFAULT NULL,
`line_1` varchar(255) DEFAULT NULL,
`line_2` varchar(255) DEFAULT NULL,
`town` varchar(60) DEFAULT NULL,
`county` varchar(60) DEFAULT NULL,
`country` varchar(60) DEFAULT NULL,
`post_code` varchar(20) DEFAULT NULL,
`sic_code` varchar(10) DEFAULT NULL,
`DirScrapeID` int(10) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `number` (`number`),
KEY `status` (`status`),
KEY `name` (`name`),
KEY `sic_code` (`sic_code`)
) ENGINE=InnoDB AUTO_INCREMENT=3297833 DEFAULT CHARSET=utf8
```
UPDATE 2: Profile of the query (with limit 5)
```
+-------------------------------+----------+
| Status | Duration |
+-------------------------------+----------+
| executing | 0.000001 |
| Sending data | 0.000112 |
| executing | 0.000001 |
| Sending data | 0.000111 |
| executing | 0.000001 |
| Sending data | 0.000110 |
| executing | 0.000001 |
| Sending data | 0.000110 |
| executing | 0.000001 |
| Sending data | 0.000110 |
| executing | 0.000001 |
| Sending data | 0.000111 |
| executing | 0.000001 |
| Sending data | 0.000111 |
| executing | 0.000001 |
| Sending data | 0.000112 |
| executing | 0.000001 |
| Sending data | 0.000112 |
| executing | 0.000001 |
| Sending data | 0.000112 |
| executing | 0.000001 |
| Sending data | 0.000112 |
| executing | 0.000001 |
| Sending data | 0.000112 |
| executing | 0.000001 |
| Sending data | 0.000112 |
| executing | 0.000001 |
| Sending data | 0.000113 |
| executing | 0.000001 |
| Sending data | 0.000114 |
| executing | 0.000001 |
| Sending data | 0.000114 |
| executing | 0.000001 |
| Sending data | 0.000114 |
| executing | 0.000001 |
| Sending data | 0.000115 |
| executing | 0.000001 |
| Sending data | 0.000116 |
| executing | 0.000001 |
| Sending data | 0.000115 |
| executing | 0.000001 |
| Sending data | 0.000115 |
| executing | 0.000001 |
| Sending data | 0.000116 |
| executing | 0.000001 |
| Sending data | 0.000116 |
| executing | 0.000001 |
| Sending data | 0.000115 |
| executing | 0.000001 |
| Sending data | 0.000115 |
| executing | 0.000001 |
| Sending data | 0.000116 |
| executing | 0.000001 |
| Sending data | 0.000116 |
| executing | 0.000001 |
| Sending data | 0.000117 |
| executing | 0.000001 |
| Sending data | 0.000117 |
| executing | 0.000001 |
| Sending data | 0.000117 |
| executing | 0.000001 |
| Sending data | 0.000118 |
| executing | 0.000001 |
| Sending data | 0.000118 |
| executing | 0.000001 |
| Sending data | 0.000118 |
| executing | 0.000001 |
| Sending data | 0.000118 |
| executing | 0.000001 |
| Sending data | 0.000118 |
| executing | 0.000001 |
| Sending data | 0.000118 |
| executing | 0.000001 |
| Sending data | 0.000120 |
| executing | 0.000001 |
| Sending data | 0.000120 |
| executing | 0.000001 |
| Sending data | 0.000121 |
| executing | 0.000001 |
| Sending data | 0.000123 |
| executing | 0.000001 |
| Sending data | 0.000121 |
| executing | 0.000001 |
| Sending data | 0.000120 |
| executing | 0.000001 |
| Sending data | 0.000121 |
| executing | 0.000001 |
| Sending data | 0.000121 |
| executing | 0.000001 |
| Sending data | 0.000121 |
| executing | 0.000001 |
| Sending data | 0.000122 |
| executing | 0.000001 |
| Sending data | 0.000123 |
| executing | 0.000001 |
| Sending data | 0.000124 |
| executing | 0.000001 |
| Sending data | 1.063880 |
| end | 0.000009 |
| query end | 0.000008 |
| closing tables | 0.000009 |
| freeing items | 0.000007 |
| Waiting for query cache lock | 0.000002 |
| freeing items | 0.000062 |
| Waiting for query cache lock | 0.000002 |
| freeing items | 0.000001 |
| storing result in query cache | 0.000002 |
| cleaning up | 0.000028 |
+-------------------------------+----------+
``` | It turned out that the problem was that the temp\_company table number field did not have ascii\_bin set as its Collation like the Company table.
As explained on the MySQL forums (<http://forums.mysql.com/read.php?24,603620,603732#msg-603732>), varchar fields with different collation or character sets are regarded as being of different type and thus an index could not be used between them.
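A hedged sketch of aligning the two collations (the column type and length are taken from the SHOW CREATE output above; verify against your own schema before running):

```sql
-- Give temp_company.number the same character set and collation as
-- company.number so the index can be used for the comparison.
ALTER TABLE temp_company
  MODIFY `number` VARCHAR(30) CHARACTER SET ascii COLLATE ascii_bin DEFAULT NULL;
```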
The remedy was to set the same collation on the number field of the temp\_company table. The query then took 3.3 seconds (and 2.7 seconds using the left join method). | I don't know why it's suddenly running more slowly, but I recommend converting to a join, which should perform better:
```
SELECT t.*
FROM temp_company t
LEFT JOIN company c ON c.number = t.number
WHERE c.number is null
```
This is a fairly standard way of tackling a `not in (...)` via a join, and works because outer joins that *don't* match have nulls in the joined table's columns. | MySQL NOT IN Query much slower after Mysql Upgrade | [
"",
"mysql",
"sql",
"mysql-5.7",
""
] |
On outer joins (let's take a left outer join in this case), how does adding a filter on the right-side table work?
```
SELECT s.id, i.name FROM Student s
LEFT OUTER JOIN Student_Instructor i
ON s.student_id=i.student_id
AND i.name='John'
```
I understand that if the filter were on the `Student` table it would be more like "get all rows with name = 'John' first, then join the tables".
But I am not sure that is the case when the filter is on the right-side table (`Student_Instructor`). How does the filter `i.name='John'` get interpreted?
Thank you | All rows will be returned from your left table regardless. In the case of a left join, if the filter isn't met, all data returned from the right table will show up as null. In your case, all students will show up in your results. If the student doesn't have an instructor, i.name will be null.
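To make that concrete, here is a hedged side-by-side sketch (table and column names taken from the question):

```sql
-- Filter in the ON clause: every student is kept; rows with no matching
-- instructor named John simply return NULL for i.name.
SELECT s.id, i.name
FROM Student s
LEFT OUTER JOIN Student_Instructor i
  ON s.student_id = i.student_id
 AND i.name = 'John';

-- Filter in the WHERE clause: applied after the join, so the NULL rows
-- are discarded and the query behaves like an INNER JOIN.
SELECT s.id, i.name
FROM Student s
LEFT OUTER JOIN Student_Instructor i
  ON s.student_id = i.student_id
WHERE i.name = 'John';
```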
Since you are only selecting a column from your left table, your join is pretty useless. I would also add i.name to your select, so you can see the results
In the case of an inner join, rows will only be returned if the join filter is met. | Should be the same as:
```
SELECT s.id FROM Student s
LEFT OUTER JOIN (Select * from Student_Instructor where name='John' ) i
ON s.student_id=i.student_id
``` | Adding filter on the right side table on Left outer joins | [
"",
"sql",
"sql-server",
"join",
"left-join",
"right-join",
""
] |
I have a table with following columns and 2 rows :
```
COL1,COL2,COL3,NAME,DATE
```
Value of COL1,COL2,COL3 in both rows are A,B,C. Value of NAME in 1st row is 'DEL' and 2nd row is 'LAP'. Value of DATE in 1st row is '11.12.13' and 2nd row is '13.11.13'.
Now I want a view with a single row and the following columns:
```
COL1,COL2,COL3,DEL,LAP with values A,B,C,11.12.13,13.11.13.
```
Is that possible with PIVOT or any other function?
thanks | [SQL Fiddle](http://sqlfiddle.com/#!4/21abc/3)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE table_name (COL1,COL2,COL3,N_NAME,D_DATE) AS
SELECT 'A','B','C', 'DEL', '11.12.13' FROM DUAL
UNION ALL SELECT 'A','B','C', 'LAP', '13.11.13' FROM DUAL
UNION ALL SELECT 'A','B','C', 'DEL', '12.12.13' FROM DUAL
UNION ALL SELECT 'A','B','C', 'LAP', '14.11.13' FROM DUAL;
```
**Query 1**:
If the combination of `COL1`, `COL2`, `COL3` and `N_Name` is unique then you can do:
```
SELECT Col1,
Col2,
Col3,
MIN( CASE N_Name WHEN 'DEL' THEN D_Date END ) AS DEL,
MIN( CASE N_Name WHEN 'LAP' THEN D_Date END ) AS LAP
FROM table_name
GROUP BY
Col1,
Col2,
Col3
```
**[Results](http://sqlfiddle.com/#!4/21abc/3/0)**:
```
| COL1 | COL2 | COL3 | DEL | LAP |
|------|------|------|----------|----------|
| A | B | C | 11.12.13 | 13.11.13 |
```
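Since the question mentions pivot: on Oracle 11g and later the same result as Query 1 can also be written with the `PIVOT` clause. A hedged sketch against the schema setup above:

```sql
SELECT *
FROM (SELECT Col1, Col2, Col3, N_Name, D_Date FROM table_name)
PIVOT (
  MIN(D_Date) FOR N_Name IN ('DEL' AS DEL, 'LAP' AS LAP)
);
```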
**Query 2**:
However, if you can have multiple rows with the same combination of `COL1`, `COL2`, `COL3` and `N_Name` and you want all of them returning (in date order) then you can do:
```
WITH indexed_data AS (
SELECT Col1,
Col2,
Col3,
N_Name,
D_Date,
ROW_NUMBER() OVER ( PARTITION BY Col1, Col2, Col3, N_Name ORDER BY D_Date ) AS idx
FROM table_name
)
SELECT Col1,
Col2,
Col3,
MIN( CASE N_Name WHEN 'DEL' THEN D_Date END ) AS DEL,
MIN( CASE N_Name WHEN 'LAP' THEN D_Date END ) AS LAP
FROM indexed_data
GROUP BY
Col1,
Col2,
Col3,
idx
ORDER BY
Col1,
Col2,
Col3,
idx
```
**[Results](http://sqlfiddle.com/#!4/21abc/3/1)**:
```
| COL1 | COL2 | COL3 | DEL | LAP |
|------|------|------|----------|----------|
| A | B | C | 11.12.13 | 13.11.13 |
| A | B | C | 12.12.13 | 14.11.13 |
``` | ```
with tab (COL1,COL2,COL3,N_NAME,D_DATE) as (
select 'A','B','C', 'DEL', '11.12.13' from dual union all
select 'A','B','C', 'LAP', '13.11.13' from dual)
select COL1, COL2, COL3,
min(decode(N_NAME, 'DEL', D_DATE, NULL)) DEL,
min(DECODE(N_NAME, 'LAP', D_DATE, NULL)) LAP
from tab
group by COL1, COL2, COL3
```
output
```
| COL1 | COL2 | COL3 | DEL | LAP |
|------|------|------|----------|----------|
| A | B | C | 11.12.13 | 13.11.13 |
``` | Oracle rows to columns | [
"",
"sql",
"oracle",
""
] |
I want to use the SQL `BETWEEN` operator on the TIME data type.
I want to execute the query below, but it does not give the correct result.
Here, `starttime` and `endtime` are both of type `smalldatetime`, which I cast to TIME because I only need a time comparison; the date portion is not meaningful, it's just a dummy value.
```
SELECT
count(1)
FROM
t1 INNER JOIN t2
WHERE
CAST(t1.StartTime as TIME)
BETWEEN CAST(t2.StartTime as TIME)
AND CAST(t2.EndTime as TIME)
```
---
```
CAST(t1.StartTime as TIME) is 08:00:00.0000000
CAST(t2.StartTime as TIME) is 07:00:00.0000000
CAST(t2.EndTime as TIME) is 12:00:00.0000000
```
So the above query should return a record count of 1 (as 8 o'clock is between 7 and 12), but it returns null.
Please suggest what is wrong here and how to correct it.
Thank You | ```
WHERE CAST(t1.StartTime as TIME) >= CAST(t2.StartTime as TIME)
AND CAST(t1.StartTime as TIME) <= CAST(t2.EndTime as TIME)
```
Using this syntax makes your queries SARGable. [Read Here](http://msmvps.com/blogs/robfarley/archive/2010/01/22/sargable-functions-in-sql-server.aspx) for more information about making your queries sargable when working with datetime/date/time datatypes. | I suspect, based on your example, that you are looking for times in t1 between a start and end time in t2, but the two tables do not necessarily have a relationship between them on any key.
Trying to use a join in that scenario will not work easily, you'd have to move M. Ali's solution to the join predicates rather than in the where conditions.
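A hedged sketch of what moving it into the join predicates would look like (same tables as the question; whether this matching rule is what you want depends on your data):

```sql
SELECT COUNT(1)
FROM t1
INNER JOIN t2
  ON CAST(t1.StartTime AS TIME) >= CAST(t2.StartTime AS TIME)
 AND CAST(t1.StartTime AS TIME) <= CAST(t2.EndTime AS TIME);
```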
If my assumption is correct and you're simply looking for t1 rows that match your BETWEEN criteria in t2, this should get you that set without the two tables having a relationship:
```
DECLARE @sampleTime AS smalldatetime = GETDATE()
CREATE TABLE #t1 (StartTime smalldatetime)
CREATE TABLE #t2 (StartTime smalldatetime, EndTime smalldatetime)
INSERT INTO #t1 (StartTime) VALUES (@sampleTime)
INSERT INTO #t1 (StartTime) VALUES (DATEADD(hh, 6, @sampleTime))
INSERT INTO #t2 (StartTime, EndTime) VALUES (DATEADD(hh, -1, @sampleTime), DATEADD(hh, 3, @sampleTime))
-- you can see only one row in t1 lands between the Start and End in t2
SELECT * FROM #t1
WHERE EXISTS(SELECT * FROM #t2 WHERE CAST(#t1.StartTime AS time) BETWEEN CAST(#t2.StartTime AS time) AND CAST(#t2.EndTime AS time))
-- adding another row to t1 that now lands between the Start and End in t2 and the result
INSERT INTO #t1 (StartTime) VALUES (DATEADD(hh, 1, @sampleTime))
SELECT * FROM #t1
WHERE EXISTS(SELECT * FROM #t2 WHERE CAST(#t1.StartTime AS time) BETWEEN CAST(#t2.StartTime AS time) AND CAST(#t2.EndTime AS time))
DROP TABLE #t1, #t2
``` | How to apply SQL 'between' keyword on TIME data type | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"between",
""
] |
I have this table:
```
XPTO_TABLE (id, obj_x, date_x, type_x, status_x)
```
I want to create a unique constraint that applies to the fields `(obj_x, date_x, type_x)` only when `status_x <> 5`.
I have tried to create this one but Oracle says:
```
line 1: ORA-00907: missing right parenthesis
```
```
CREATE UNIQUE INDEX UN_OBJ_DT_TYPE_STATUS
ON XPTO_TABLE(
(CASE
WHEN STATUS_X <> 5
THEN
(OBJ_X,
TO_CHAR (DATE_X, 'dd/MM/yyyy'),
TYPE_X)
ELSE
NULL
END));
```
What's the correct syntax ? | @jamesfrj: it looks like you are trying to ensure that your table should contain only one record for which `status <>5`.
You can try creating a unique functional index by concatenating the columns, as given below
```
create table XPTO_TABLE (id number,
obj_x varchar2(20),
date_x date,
type_x varchar2(20),
status_x varchar2(20)
);
create unique index xpto_table_idx1 on XPTO_TABLE(case when status_x <>'5' THEN obj_x||date_x||type_x||STATUS_x ELSE null END);
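-- Editor's note (hedged): plain concatenation can collide across columns
-- ('ab'||'c' equals 'a'||'bc'), so adding a delimiter between the parts is
-- safer. Oracle also accepts a multi-column function-based unique index,
-- which is closer to the original intent (the index name here is arbitrary):
create unique index xpto_table_idx2 on XPTO_TABLE(
  case when status_x <> '5' then obj_x end,
  case when status_x <> '5' then date_x end,
  case when status_x <> '5' then type_x end);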
``` | Under Oracle 11, you can create a bunch of virtual columns that get non-NULL value only when STATUS\_X is 5, and then make *them* unique:
```
CREATE TABLE XPTO_TABLE (
ID INT PRIMARY KEY,
OBJ_X INT,
DATE_X DATE,
TYPE_X VARCHAR2(50),
STATUS_X INT,
OBJ_U AS (CASE STATUS_X WHEN 5 THEN OBJ_X ELSE NULL END),
DATE_U AS (CASE STATUS_X WHEN 5 THEN DATE_X ELSE NULL END),
TYPE_U AS (CASE STATUS_X WHEN 5 THEN TYPE_X ELSE NULL END),
UNIQUE (OBJ_U, DATE_U, TYPE_U)
);
```
You can freely insert duplicates, as long as STATUS\_X is **not** 5:
```
INSERT INTO XPTO_TABLE (ID, OBJ_X, DATE_X, TYPE_X, STATUS_X) VALUES (1, 1, '1-JAN-2014', 'foo', 4);
INSERT INTO XPTO_TABLE (ID, OBJ_X, DATE_X, TYPE_X, STATUS_X) VALUES (2, 1, '1-JAN-2014', 'foo', 4);
```
But trying to insert a duplicate when STATUS\_X is 5 fails:
```
INSERT INTO XPTO_TABLE (ID, OBJ_X, DATE_X, TYPE_X, STATUS_X) VALUES (3, 1, '1-JAN-2014', 'foo', 5);
INSERT INTO XPTO_TABLE (ID, OBJ_X, DATE_X, TYPE_X, STATUS_X) VALUES (4, 1, '1-JAN-2014', 'foo', 5);
Error report -
SQL Error: ORA-00001: unique constraint (IFSAPP.SYS_C00139498) violated
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
``` | Conditional unique constraint with multiple fields in oracle db | [
"",
"sql",
"oracle",
"conditional-statements",
"unique",
"unique-constraint",
""
] |
I have two separate databases. I am trying to update a column in one database to the values of a column from the other database:
```
UPDATE customer
SET customer_id=
(SELECT t1 FROM dblink('port=5432, dbname=SERVER1 user=postgres password=309245',
'SELECT store_key FROM store') AS (t1 integer));
```
This is the error I am receiving:
> ```
> ERROR: more than one row returned by a subquery used as an expression
> ```
Any ideas? | ***Technically***, to remove the error, add **`LIMIT 1`** to the subquery to return at most 1 row. The statement would still be nonsense.
```
... 'SELECT store_key FROM store LIMIT 1' ...
```
***Practically***, you want to match rows *somehow* instead of picking an arbitrary row from the remote table `store` to update every row of your local table `customer`.
I *assume* a text column `match_name` in both tables (`UNIQUE` in `store`) for the sake of this example:
```
... 'SELECT store_key FROM store
WHERE match_name = ' || quote_literal(customer.match_name) ...
```
But that's an extremely expensive way of doing things.
***Ideally***, you completely rewrite the statement.
```
UPDATE customer c
SET customer_id = s.store_key
FROM dblink('port=5432, dbname=SERVER1 user=postgres password=309245'
, 'SELECT match_name, store_key FROM store')
AS s(match_name text, store_key integer)
WHERE c.match_name = s.match_name
AND c.customer_id IS DISTINCT FROM s.store_key;
```
This remedies a number of problems in your original statement.
Obviously, the **basic error** is fixed.
It's typically better to join in additional relations in the [`FROM` clause of an `UPDATE` statement](https://www.postgresql.org/docs/current/sql-update.html) than to run **correlated subqueries** for every individual row.
When using dblink, the above becomes a thousand times more important. You do not want to call `dblink()` for every single row, that's **extremely expensive**. Call it once to retrieve all rows you need.
With correlated subqueries, if **no row is found** in the subquery, the column gets updated to NULL, which is almost always not what you want. In my updated query, the row only gets updated if a matching row is found. Else, the row is not touched.
Normally, you wouldn't want to update rows, when nothing actually changes. That's expensively doing nothing (but still produces dead rows). The last expression in the `WHERE` clause prevents such **empty updates**:
```
AND c.customer_id IS DISTINCT FROM sub.store_key
```
Related:
* [How do I (or can I) SELECT DISTINCT on multiple columns?](https://stackoverflow.com/questions/54418/how-do-i-or-can-i-select-distinct-on-multiple-columns/12632129#12632129) | The fundamental problem can often be simply solved by changing an `=` to **`IN`**, in cases where you've got a one-to-many relationship. For example, if you wanted to update or delete a bunch of accounts for a given customer:
```
WITH accounts_to_delete AS
(
SELECT account_id
FROM accounts a
INNER JOIN customers c
ON a.customer_id = c.id
WHERE c.customer_name='Some Customer'
)
-- this fails if "Some Customer" has multiple accounts, but works if there's 1:
DELETE FROM accounts
WHERE accounts.guid =
(
SELECT account_id
FROM accounts_to_delete
);
-- this succeeds with any number of accounts:
DELETE FROM accounts
WHERE accounts.guid IN
(
SELECT account_id
FROM accounts_to_delete
);
``` | Postgres Error: More than one row returned by a subquery used as an expression | [
"",
"sql",
"database",
"postgresql",
"subquery",
"dblink",
""
] |
I want to create a report using BIRT. I have 5 SQL criteria as parameters for the report. Usually, when I have 3 criteria, I use nested if-else for the WHERE statement with JavaScript.
Now that I have more criteria it becomes more difficult to write the code and also to check all the possibilities, especially for debugging purposes.
For example, the criteria for the table employee are these 5: age, city, department, title and education. All criteria are dynamic; you can leave any of them blank to show all contents.
Does anyone know an alternative to this method? | There is a magical way to handle this without any script, which makes reports much easier to maintain! We can use this kind of SQL query:
```
SELECT *
FROM mytable
WHERE (?='' OR city=? )
AND (?=-1 OR age>? )
AND (?='' OR department=? )
AND (?='' OR title=? )
```
So each criterion has two dataset parameters, with an "OR" clause that allows the criterion to be ignored when the parameter gets a specific value (an empty value or a null value, as you like). All those "OR" clauses are evaluated against a constant value, so query performance is not affected.
In this example we should have 4 report parameters, 8 dataset parameters (each report parameter is bound to 2 dataset parameters) and 0 script. See a live example of a report using this approach [here](http://www.visioneo.org/interactive-birt-table).
If there are many more criteria I would recommend using a stored procedure; then we can do the same with just one dataset parameter per criterion.
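A hedged sketch of that stored-procedure variant (MySQL syntax assumed; the procedure and parameter names are illustrative, and the sentinel values '' and -1 mean "ignore this criterion"):

```sql
DELIMITER //
CREATE PROCEDURE filter_employees(IN p_city VARCHAR(60), IN p_age INT)
BEGIN
  -- One parameter per criterion; a sentinel value disables the filter.
  SELECT *
  FROM mytable
  WHERE (p_city = '' OR city = p_city)
    AND (p_age = -1 OR age > p_age);
END //
DELIMITER ;
```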
**Integer parameter handling**
If we need to handle an "all" value for an integer column such as age: we can declare the report parameter "age" as a String type and the dataset parameters "age" as integers. Then, in the parameters tab of the dataset, use a value expression instead of "linked to report parameters". For example, if we'd like a robust input which handles "all", empty and null values, here is the expression to enter:
```
(params["age"].value=="all" || params["age"].value=="" || params["age"].value==null)?-1:params["age"].value
```
The sample report can be downloaded [here](http://www.visioneo.org/documents/10179/129221/visioneo-interactive-table.rptdesign/5426b058-10cc-4785-852e-590c8d6f1ead) (v 4.3.1) | Depending on the report requirements and audience you may find this helpful.
Use text box parameters and make the default value % (which is a wildcard):
```
SELECT *
FROM mytable
WHERE city like ?
AND age like ?
AND department like ?
AND title like ?
```
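One hedged caveat about the wildcard approach: `col LIKE '%'` does not match rows where the column is NULL, so nullable columns silently drop out of the results. Wrapping the column avoids that:

```sql
SELECT *
FROM mytable
WHERE COALESCE(city, '') LIKE ?
  AND COALESCE(department, '') LIKE ?
```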
This also allows your users to search for partial names. if the value in the city text box is %ville% it would return all the cities with "ville" anyplace in the city name. | IF-ELSE Alternative for Multiple SQL criteria for use in BIRT | [
"",
"sql",
"if-statement",
"birt",
""
] |
I am trying to query a huge database (aprroximately 20 millions records) to get some data. This is the query I am working on right now.
```
SELECT a.user_id, b.last_name, b.first_name, c.birth_date FROM users a
INNER JOIN users_signup b ON a.user_id = b.user_id
INNER JOIN users_personal c ON a.user_id = c.user_id
INNER JOIN
(
SELECT DISTINCT d.user_id FROM users_signup d
WHERE d.join_date >= '2013-01-01' and d.join_date < '2014-01-01'
)
AS t ON a.user_id = t.user_id
```
I have some problems trying to retrieve additional data from the database. I would like to add 2 additional fields to the results table:
1. I am able to get the birth date, but I would also like to get the age of each member in the results table. The data is stored as 'yyyy-mm-dd' in the users\_personal table.
2. I would like to get the total number of days from when a member joined until the day they left (if any) from a table called user\_signup, using data from join\_date & left\_date (format: yyyy-mm-dd). | Try this:
```
SELECT a.user_id, b.last_name, b.first_name, c.birth_date,
FLOOR(DATEDIFF(CURRENT_DATE(), c.birth_date) / 365) age,
DATEDIFF(b.left_date, b.join_date) workDays
FROM users a
INNER JOIN users_signup b ON a.user_id = b.user_id
INNER JOIN users_personal c ON a.user_id = c.user_id
WHERE b.join_date >= '2013-01-01' AND b.join_date < '2014-01-01'
GROUP BY a.user_id
``` | Or you can do just this ...
```
SELECT
TIMESTAMPDIFF(YEAR, birthday, CURDATE()) AS age_in_years,
TIMESTAMPDIFF(MONTH, birthday, CURDATE()) AS age_in_month,
TIMESTAMPDIFF(DAY, birthday, CURDATE()) AS age_in_days,
TIMESTAMPDIFF(MINUTE, birthday, NOW()) AS age_in_minutes,
TIMESTAMPDIFF(SECOND, birthday, NOW()) AS age_in_seconds
FROM
table_name
``` | MySQL - Getting age and numbers of days between two dates | [
"",
"mysql",
"sql",
"select",
"inner-join",
"datediff",
""
] |
Can anybody help me to create a good way to transfer data from one table to another table?
For example:
**table1**
```
ID | Name
1 | Juan
2 | Two
```
**table2**
```
(no content)
```
What I want is a loop that will transfer the data of `table1` to `table2`. While not all data of `table1` is transferred to `table2` the loop continues. | The standard SQL approach is:
```
insert into table2(id, name)
select id, name
from table1;
```
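If table2 might already contain some of the rows and only the missing ones should be copied, a hedged variant (assuming `id` identifies a row):

```sql
insert into table2(id, name)
select t1.id, t1.name
from table1 t1
where not exists (select 1 from table2 t2 where t2.id = t1.id);
```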
You don't need a loop. | I suppose you mean to do this in VB.
Assuming conn, rs1 and rs2 are already initialized, you can achieve your goal as shown:
```
rs1.Open "Table1", conn
rs2.Open "Table2", conn, 3, 3
Do Until rs1.EOF
rs2.AddNew()
rs2("id") = rs1("id").Value
rs2("name") = rs1("name").Value
rs2.Update
rs1.MoveNext()
Loop
rs2.Close()
rs1.Close()
``` | Transfer data from one table to another table | [
"",
"sql",
"vb.net",
"visual-studio-2010",
"loops",
""
] |
I have a table like this:
```
+---+----+
|TID|DATA|
+---+----+
|1 |C |
+---+----+
|1 |F |
+---+----+
|1 |D |
+---+----+
|1 |E |
+---+----+
|2 |X |
+---+----+
|2 |C |
+---+----+
|2 |B |
+---+----+
|2 |F |
+---+----+
|3 |C |
+---+----+
```
I want to count the frequency of the combination of data values C and F, grouped by their tid.
So the count result for data `C` and `F` would be 2 (the pair appears in tid 1 and tid 2).
This is my query:
```
select count(data) from tabel where data like '%C%' and '%F%' group by tid
```
It didn't return the desired result.
Is it possible to query the result like that in MySQL, or do I need to manipulate it in the program first? (I'm using VB.NET)
UPDATE REQUIREMENT
I want to create the combination dynamically. Is that possible? I want to use `while` to iterate through the `if statement`, but it didn't return any result and gave me an error.
Here's my code. It still has an error, and I think the cause is that I can't use an `ALIAS COLUMN` in an `IF CONDITIONAL`:
```
select count(*) from (
select kode_faktur,
if `DATA` like '%F%' and '%B%' then 1
else 0
end if
AS freq
FROM `table`
GROUP BY tid
HAVING freq = 2
) t
```
And this is what I want to do: iterating the `IF CONDITIONAL` through `WHILE`:
```
select count(*) from (
select kode_faktur,
while i<x
if `DATA` like '%i%' then 1
else 0
end if
AS freq
FROM `table`
GROUP BY tid
HAVING freq = x
) t
```
And this one is the original code from @Aziz Shaikh; this code worked:
```
SELECT COUNT(*) FROM (
SELECT tid,
SUM(CASE
WHEN `data` LIKE '%C%' THEN 1
WHEN `data` LIKE '%F%' THEN 1
ELSE 0
END
) AS freq
FROM `table`
GROUP BY tid
HAVING freq = 2
) t
```
Personally, I think this is too complicated in MySQL. I chose to do it this way to save time instead of writing code in the program; my program is already using too many connections and processing big data, so I think MySQL can act as an alternative for my requirement. | Try this:
```
SELECT COUNT(*) FROM (
SELECT tid,
SUM(CASE
WHEN `data` LIKE '%C%' THEN 1
WHEN `data` LIKE '%F%' THEN 1
ELSE 0
END
) AS freq
FROM `table`
GROUP BY tid
HAVING freq = 2
) t
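-- Editor's note (hedged): the SUM above counts any two matching rows, so a
-- tid with two 'C' rows and no 'F' row would also reach freq = 2. If each
-- value must appear at least once per tid, a DISTINCT variant is safer
-- (assuming `data` holds the single values shown in the question):
-- SELECT COUNT(*) FROM (
--   SELECT tid FROM `table`
--   WHERE `data` IN ('C', 'F')
--   GROUP BY tid
--   HAVING COUNT(DISTINCT `data`) = 2
-- ) t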
``` | You should use `OR` instead of `AND`
```
select count(data) from tabel where data like '%C%' OR '%F%' group by tid
``` | mysql count combination of non unique data in row | [
"",
"mysql",
"sql",
"select",
"group-by",
"having",
""
] |
Given the following table:
```
date_field_one date_field_two arbitrary_value
---------------- ---------------- -----------------
1/1/11 1/3/11 cheese
1/1/11 1/4/11 the color orange
2/2/11 2/3/11 1
2/2/11 2/4/11 2
```
My problem: I'm not sure how to go about structuring a query using a set based approach that yields the following results:
* for each distinct date, the record with the earliest
date\_field\_two value is returned
Any ideas? | Edit for new response! The solution posted by M.Ali may be the best fit for your specific case as it will ensure you only ever get one row result from your base data, even if there exist multiple candidate rows for your answer ( as in, date\_field\_one, date\_field\_two combinations are not distinct ). The following will return multiple results per date\_field\_one, date\_field\_two combination in the not-distinct scenario:
```
SELECT t.date_field_one, t.date_field_two, t.arbitrary_value
FROM ( SELECT date_field_one,
date_field_two = MIN( date_field_two )
FROM dbo.[table]
GROUP BY date_field_one ) dl
LEFT JOIN dbo.[table] t
ON dl.date_field_one = t.date_field_one
AND dl.date_field_two = t.date_field_two;
``` | ```
;WITH CTE
AS
(
SELECT *, rn = ROW_NUMBER() OVER (PARTITION BY date_field_one ORDER BY date_field_two
ASC)
FROM TableName
)
SELECT * FROM CTE
WHERE rn = 1
``` | Ensuring only distinct records are returned with DISTINCT | [
"",
"sql",
"sql-server",
""
] |
I have a database like this:
```
|--------------------------------------------------------------------------|
| NAME | SCORE1 | SCORE2 | SCORE3 | SCORE4 | RATING |
|--------------------------------------------------------------------------|
| Joe Bloggs | -50 | 0 | -10 | -30 | 67 |
| Bob Bobbing | -30 | -10 | 0 | -10 | 74 |
| Marjorie Doors | 0 | -10 | -30 | -50 | 88 |
| etc... ------------------------------------------------------------------|
```
What I am trying to do is to find the highest-rated name for any given score position.
I do fine when there is only one score position possible:
```
SELECT name FROM db ORDER BY Score2 DESC, Rating DESC LIMIT 1
```
...gives me the highest-rated person with the best score for Score2.
---
What I now need is to find a way to combine two or more score columns (there are 23 in total) but still return the highest-rated person for any score combination given.
For example, if I wanted the highest-rated person for Score2 OR Score3, doing the above query gives me Joe Bloggs even though his rating is lower than Bob Bobbing's.
Similarly, if I wanted the highest-rated person for Score1 OR Score2 OR Score4, I'd still need to specify one of the scores to sort by first. I need a way to combine the results of all X columns specified, THEN sort by the combined score 'column', then by rating. | You may want to use the [GREATEST()](http://dev.mysql.com/doc/refman/5.1/en/comparison-operators.html#function_greatest "GREATEST()") function:
```
With two or more arguments, returns the largest (maximum-valued) argument.
```
This code snippet does what you wanted for score2 and score3 columns in your example:
```
SELECT name, GREATEST( score2, score3 ) AS max_score, rating
FROM db
ORDER BY max_score DESC , rating DESC
```
Orders by the combined score, and if two are equal, then orders by rating (both highest to lowest).
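For a quick check, SQLite's scalar max() behaves like GREATEST() here; the table name and sample rows below are just the question's example:

```python
import sqlite3

# In-memory copy of the question's sample data; SQLite's scalar max()
# plays the role of MySQL's GREATEST() for this check.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db (name TEXT, score1 INT, score2 INT, "
             "score3 INT, score4 INT, rating INT)")
conn.executemany("INSERT INTO db VALUES (?,?,?,?,?,?)", [
    ("Joe Bloggs",     -50,   0, -10, -30, 67),
    ("Bob Bobbing",    -30, -10,   0, -10, 74),
    ("Marjorie Doors",   0, -10, -30, -50, 88),
])
rows = conn.execute(
    "SELECT name, max(score2, score3) AS max_score, rating "
    "FROM db ORDER BY max_score DESC, rating DESC"
).fetchall()
top_name = rows[0][0]  # Bob ties Joe on max_score 0 but wins on rating
```

Bob Bobbing comes out on top, which is exactly the behaviour the question asks for.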
For more columns, simply add them as arguments to GREATEST() function. | With the information from your comment I think what you're looking for is the [GREATEST](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_greatest) function.
Use it as follows:
```
SELECT GREATEST(Score1, Score2, Score3, ...) AS Score, Rating
FROM db
ORDER BY 1, Rating
```
ORDER BY 1 means order by the 1st column. | SQL: Combine multiple columns into one for sorting (output only) | [
"",
"mysql",
"sql",
"sorting",
""
] |
I have a view that lists available dates from an online calendar (basically, I created entries for all available dates and then compare this to the dates that have been booked out - the view then shows all dates within 'AvailableDates' where 'Date' <> 'BookedDate').
So far I can get it to run a list that looks like this....
```
24th January 2014
7th February 2014
8th February 2014
```
....but this takes up a lot of space and the idea is that the list can be copied and pasted into an email/message for quick reference.
What I'm looking for is a way to group the dates so that the output looks like this...
```
Jan - 24th
Feb - 7th, 8th
```
...so that there is a maximum of 12 lines.
Could somebody tell me how to do this? - the field is a 'date' type.
Thanks,
Darren | You might want to have the year in there, too, or you might get false results.
```
SELECT
YEAR(your_column) AS the_year,
DATE_FORMAT(your_column, '%b') AS the_month_abbreviated,
GROUP_CONCAT(DATE_FORMAT(your_column, '%D') ORDER BY DAY(your_column) SEPARATOR ', ') AS the_days
FROM your_table
GROUP BY YEAR(your_column), DATE_FORMAT(your_column, '%b')
ORDER BY YEAR(your_column), MONTH(your_column)
```
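The same grouping is easy to sanity-check outside MySQL; here is a plain-Python sketch of the output shape (the ordinal-suffix helper is simplified, and `%b` assumes an English locale):

```python
from datetime import date

# Sample dates from the question.
dates = [date(2014, 1, 24), date(2014, 2, 7), date(2014, 2, 8)]

def ordinal(n):
    # minimal ordinal-suffix helper (7 -> "7th", 21 -> "21st", ...)
    if 11 <= n % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

grouped = {}
for d in sorted(dates):
    grouped.setdefault(d.strftime("%b"), []).append(ordinal(d.day))
lines = [f"{month} - {', '.join(days)}" for month, days in grouped.items()]
```

This yields one line per month, matching the `GROUP_CONCAT` output.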
* you can read more about the `DATE_FORMAT()` function [here](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-format).
* see it working live in an `sqlfiddle` | You have to convert the datecolumn to month and use GROUP\_CONCAT(). Maybe this can work or at least in the right direction.
```
SELECT
LEFT(monthname(datecolumn),3),GROUP_CONCAT(DAY(datecolumn))
FROM table
GROUP BY CONCAT(YEAR(datecolumn),MONTH(datecolumn))
```
I missed formatting the DAY; now fixed. | My SQL - creating a grouped date list from database dates | [
"",
"mysql",
"sql",
"date",
""
] |
This query returns rows that have `is_message` = 'on' and `mass_message` = 'on'
```
SELECT *
FROM `messages`
WHERE
(`sender_id` = '111' AND `recipient_id` = '222')
OR (`sender_id` = '222' AND `recipient_id` = '111')
AND (`is_message` != 'on' OR `is_message` IS NULL)
AND (`mass_message` != 'on' OR `mass_message` IS NULL)
AND `invite_for_lunch` = 'on'
LIMIT 0 , 30
```
How do I make sure it only returns rows that have `invite_for_lunch` = 'on'
This was originally a count, but I wanted to see what rows were being returned.
I checked the columns and no two columns out of (`is_message` , `mass_message`, `invite_for_lunch`) have `on` in the same row.
Expected result: should return 5 rows | Try fixing your parentheses:
```
SELECT *
FROM `messages`
WHERE
(
(`sender_id` = '111' AND `recipient_id` = '222')
OR (`sender_id` = '222' AND `recipient_id` = '111')
)
AND (`is_message` != 'on' OR `is_message` IS NULL)
AND (`mass_message` != 'on' OR `mass_message` IS NULL)
AND `invite_for_lunch` = 'on'
LIMIT 0 , 30
```
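The precedence problem is easy to see with plain booleans — AND binds tighter than OR in SQL, and Python's `and`/`or` behave the same way:

```python
# Without parentheses, "a OR b AND c" parses as "a OR (b AND c)",
# so the first OR branch bypasses every later AND filter.
unparenthesized = True or False and False   # -> True or (False and False)
parenthesized = (True or False) and False
```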
Without the extra parentheses, the first OR is most likely to be mishandled. | Change your WHERE to
```
WHERE ((`sender_id` = '111'
AND `recipient_id` = '222')
OR (`sender_id` = '222'
AND `recipient_id` = '111')
)
AND ....
``` | Query not returning the right rows | [
"",
"sql",
""
] |
Trying to find the difference between two averages is giving an error.
<http://sqlfiddle.com/#!3/7160d/9>
```
select * from
(
select avg(avg_stars) as avg_1
from
(
select r.mid, avg(stars) as avg_stars
from
rating r inner join movie m
on r.mid = m.mid
where year < '1980'
group by r.mid
)
)
-
(
select avg(avg_stars) as avg_2
from
(
select r.mid, avg(stars) as avg_stars
from
rating r inner join movie m
on r.mid = m.mid
where year > '1980'
group by r.mid
)
)
``` | This can be simplified a "little bit":
```
select
avg(case when year < '1980' then stars end) -
avg(case when year > '1980' then stars end)
from movie m
inner join
(select mId, Cast(stars as int) as stars from Rating) r
on m.mID = r.mID
```
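The conditional-aggregation form is easy to verify on a toy dataset (SQLite here, with made-up rows; stars stored as integers, so no cast is needed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movie (mID INT, year TEXT)")
conn.execute("CREATE TABLE rating (mID INT, stars INT)")
conn.executemany("INSERT INTO movie VALUES (?,?)", [(1, "1975"), (2, "1990")])
conn.executemany("INSERT INTO rating VALUES (?,?)", [(1, 4), (1, 2), (2, 5)])
diff = conn.execute("""
    SELECT avg(CASE WHEN year < '1980' THEN stars END) -
           avg(CASE WHEN year > '1980' THEN stars END)
    FROM movie m INNER JOIN rating r ON m.mID = r.mID
""").fetchone()[0]  # avg(4, 2) - avg(5) = 3.0 - 5.0
```

`avg()` ignores the NULLs produced by the `CASE` expressions, which is what makes the single-pass form work.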
or (remove the cast if stars is of the correct type)
```
select
avg(case when year < '1980' then cast (stars as int) end) -
avg(case when year > '1980' then cast(stars as int) end)
from movie m
inner join
Rating r
on m.mID = r.mID
```
see [SqlFiddle](http://sqlfiddle.com/#!3/7160d/58) | Is this what you want?
```
select
(
select avg(avg_stars) as avg_1
from
(
select r.mid, avg(convert(int,stars)) as avg_stars
from
rating r inner join movie m
on r.mid = m.mid
where year < '1980'
group by r.mid
) x
)
-
(
select avg(avg_stars) as avg_2
from
(
select r.mid, avg(convert(int,stars)) as avg_stars
from
rating r inner join movie m
on r.mid = m.mid
where year > '1980'
group by r.mid
) x
)
``` | SQL queries avg difference showing error? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a database that contains website URLs. From those URLs I'd like to extract the domain name. Here are two (quite different) examples:
```
http://www.example.com -> example.com
example.co.uk/dir/index.html -> example.co.uk
```
In order to do this I am using a regular expression and the functions REGEXP\_SUBSTR and REGEXP\_REPLACE that Oracle provides. I am using replace to replace the preceding `http[s]` and the `www.` with an empty string (deleting it). Then I use substring to get the string between the beginning and the first `/` or if there is no `/` the whole string. My code looks like this:
```
REGEXP_SUBSTR(REGEXP_REPLACE(website_url, '^http[s]?://(www\.)?|^www\.', '', 1), '(.+?)(/|$)')
```
Everything works as expected, except the fact that my regex fails to exclude the `/`:
```
example.com/dir/index.html -> example.com/
```
I would like to get rid of the `/`. How do I do that? | Thanks to the hints in the answers I finally got it working!
The code I am using now looks like this:
```
REGEXP_REPLACE(website_url, '(http[s]?://)?(www\.)?(.*?)((/|:)(.)*|$)', '\3')
```
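The same pattern can be sanity-checked with Python's re module (group 3 is the captured domain; the third URL is an extra made-up case):

```python
import re

# Python transcription of the Oracle pattern above.
pattern = r'(http[s]?://)?(www\.)?(.*?)((/|:).*|$)'

def domain(url):
    return re.match(pattern, url).group(3)

results = [domain(u) for u in (
    "http://www.example.com",
    "example.co.uk/dir/index.html",
    "https://www.example.com:8080/x",
)]
```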
Thanks for the help everybody! | Use this :
```
WITH tab AS
(SELECT 'https://www.example.co.uk/dir/index.html' AS website_url
FROM dual)
SELECT REGEXP_SUBSTR(REGEXP_REPLACE(website_url, '^http[s]?://(www\.)?|^www\.', '', 1), '\w+(\.\w+)+')
FROM tab;
```
output:
```
|REGEXP_SUBSTR(REGEXP_REPLACE(W|
--------------------------------
|example.co.uk |
``` | Get domain from URL in Oracle SQL | [
"",
"sql",
"regex",
"oracle",
""
] |
I have two tables one called games and one called reviews.
I am trying to join these two tables together and have looked through the documentation and also the other questions here on Stack Overflow.
```
SELECT games.id, games.title, games.developer, reviews.review, reviews.review_title,
(SELECT review, COUNT(*)
FROM reviews
GROUP BY review) AS Numberofreviews
FROM games
INNER JOIN reviews
ON games.ean=reviews.games_ean;
```
The query that I am trying to make is to get a table that shows the list of games and how many reviews each game has received.
But when I try implementing the above code I get the error: operand should contain one column.
I've looked at other people getting this error, but not in the same situation.
Any help would be appreciated.
edit: this is with mySQL | You should move the subquery into the `from` clause, instead of `reviews` to get the number:
```
SELECT g.id, g.title, g.developer, r.Numberofreviews
FROM games g inner join
(SELECT games_ean, COUNT(*) as Numberofreviews
FROM reviews
GROUP BY games_ean
) r
on g.ean = r.games_ean;
```
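On a couple of made-up rows (SQLite syntax works the same way here), the derived-table join produces one count per game:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE games (id INT, title TEXT, developer TEXT, ean TEXT)")
conn.execute("CREATE TABLE reviews (games_ean TEXT, review TEXT, review_title TEXT)")
conn.executemany("INSERT INTO games VALUES (?,?,?,?)",
                 [(1, "Game A", "Dev", "e1"), (2, "Game B", "Dev", "e2")])
conn.executemany("INSERT INTO reviews VALUES (?,?,?)",
                 [("e1", "good", "t1"), ("e1", "bad", "t2"), ("e2", "ok", "t3")])
rows = conn.execute("""
    SELECT g.id, g.title, r.Numberofreviews
    FROM games g INNER JOIN
         (SELECT games_ean, COUNT(*) AS Numberofreviews
          FROM reviews GROUP BY games_ean) r
      ON g.ean = r.games_ean
    ORDER BY g.id
""").fetchall()
```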
It does not make sense to have a column called `review_title`, because there might be more than one review. | You need to use a correlated subquery; as for the error, it clearly says there should be only one column in your subquery:
```
SELECT
g.id,
g.title,
g.developer,
(SELECT
COUNT(*)
FROM
reviews
WHERE games_ean = g.ean) AS Numberofreviews
FROM
games g
INNER JOIN reviews r
ON g.ean = r.games_ean ;
``` | SQL Inner join with count | [
"",
"mysql",
"sql",
""
] |
I am new to SQL and having an issue. I want to delete from my database wherever somebody in the description column has the hashtag "#whatever". I was able to write the following query:
```
select id from table where description_field LIKE "%#whatever%" and user_id=333
```
But if I use the LIKE function here it will delete wherever it matches #whatever, but I fear that it might delete something where it has #whateverANDthis.
How can I write a query that deletes a row wherever it ONLY contains "#whatever" in the description and not other variations like "#whateverANDthis" or "#whateverORthis"?
I want to delete where it says:
"I had so much fun #whatever"
but not:
"I had so much fun #whateverAndWhatever" | Use [`RLIKE`](http://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_regexp), the regex version of LIKE:
```
WHERE description_field RLIKE '[[:<:]]#whatever[[:>:]]'
```
The expressions `[[:<:]]` and `[[:>:]]` are leading and trailing "word boundaries". | It would be better to save them in multiple columns but
```
SELECT id FROM table WHERE decription_field REGEXP '[[:<:]]#whatever[[:>:]]' and user_id=333
```
could do the trick | How do I use the LIKE function in SQL but for an exact word? | [
"",
"mysql",
"sql",
"database",
"subquery",
""
] |
in a relational database, can we have a table without any relation with the other tables? | Yes. The way relations are expressed is with foreign keys. If a table you create has no foreign keys, and no foreign keys in other tables point to this table, it has no relationships.
It can still be given a relationship later though, so don't worry about shooting yourself in the foot. | Of course. You can even create a table without fields. | in a relational database, can we have a table without any relation with the other tables? | [
"",
"sql",
"database-design",
"relational-database",
"entity-relationship",
""
] |
Why is this SQL giving rows with the `minimum` field as null? Also, when `A` has no data between the given date range, it returns the table with all `rooms` having `minimum` as null:
```
SELECT `rooms`.*,A.`minimum`
FROM (
SELECT `room_id`, min(`available_rooms`) AS `minimum`
FROM `room_bookings`
WHERE `date` BETWEEN '2014-02-01' and '2014-02-10'
GROUP BY `room_id`) as A
INNER JOIN `rooms` on `rooms`.`room_id`=A.`room_id`
WHERE `rooms`.`location`='kathmandu'
AND `rooms`.`status`=1
AND A.`minimum`!=NULL
``` | Try this:
```
SELECT r.*, MIN(rb.available_rooms) minimum
FROM rooms r
INNER JOIN room_bookings rb ON r.room_id = rb.room_id AND rb.date BETWEEN '2014-02-01' AND '2014-02-10'
WHERE r.location = 'kathmandu' AND r.status = 1
GROUP BY r.room_id HAVING minimum IS NOT NULL
``` | ```
SELECT `rooms`.*,A.`minimum`
FROM (
SELECT `room_id`, min(`available_rooms`) AS `minimum`
FROM `room_bookings`
WHERE `date` BETWEEN '2014-02-01' and '2014-02-10'
GROUP BY `room_id` having minimum > 0) as A
INNER JOIN `rooms` on `rooms`.`room_id`=A.`room_id`
WHERE `rooms`.`location`='kathmandu'
AND `rooms`.`status`=1
``` | Mysql error not following where condition | [
"",
"mysql",
"sql",
"select",
"group-by",
"where-clause",
""
] |
I'm sure this is really simple but I've been up through the night and am now getting stuck.
I have a piece of functionality that clones a record in a database however I need to ensure the new name field is unique in the database.
eg, the first record is
```
[ProjectName] [ResourceCount]
'My Project' 8
```
Then when I click the clone I want
```
'My Project Cloned', 8
```
But then if I hit the button again it should notice that the cloned name exists and rather spit out
```
'My Project Cloned 2', 8
```
Is that making sense?
I can do it with temp tables and cursors but there has to be a much nicer way to do this?
Using SQL Server 2008 R2
The solution needs to be entirely T-SQL based though, as this occurs in a single stored procedure | I resolved this using an IF EXISTS inside a WHILE loop.
Personally I can't see what's wrong with this method but will obviously take any comments into account
```
DECLARE @NameInvalid varchar(100)
DECLARE @DealName varchar(100)
DECLARE @Count int
SET @Count = 1
SET @NameInvalid = 'true'
SELECT @DealName = DealName FROM Deal WHERE DealId = @DealId
--Ensure we get a unique deal name
WHILE( @NameInvalid = 'true')
BEGIN
IF NOT EXISTS(SELECT DealName FROM Deal where DealName = @DealName + ' Cloned ' + cast(@Count as varchar(10)))
BEGIN
INSERT INTO Deal
(DealName)
SELECT @DealName + ' Cloned ' + cast(@Count as varchar(10))
FROM Deal
WHERE DealID = @DealId
SET @NewDealId = @@IDENTITY
SET @NameInvalid = 'false'
END
ELSE
BEGIN
SET @NameInvalid = 'true'
SET @Count = @Count + 1
END
END
``` | So from my understanding of your problem, here's how I would approach it:
My table:
```
CREATE TABLE [dbo].[deal]
(
[dealName] varchar(100),
[resourceCount] int
)
```
Then create a unique index on the dealName column:
```
CREATE UNIQUE NONCLUSTERED INDEX [UQ_DealName] ON [dbo].[deal]
(
[dealName] ASC
)
```
Once you have the unique index, you can then just handle any exceptions such as a unique constraint violation (error 2601) directly in T-SQL using try/catch
```
SET NOCOUNT ON;
DECLARE @dealName VARCHAR(100) = 'deal'
DECLARE @resourceCount INT = 8
DECLARE @count INT
BEGIN TRY
BEGIN TRANSACTION
INSERT INTO dbo.deal (dealName,resourceCount)
VALUES (@dealName, @resourceCount)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF @@ERROR = 2601
BEGIN
ROLLBACK TRANSACTION
SET @count = (SELECT COUNT(dealName) FROM dbo.deal WHERE resourceCount = @resourceCount)
SET @resourceCount = (SELECT resourceCount FROM dbo.deal WHERE dealName = @dealName)
SET @dealName = @dealName + ' Cloned ' + CAST(@count AS VARCHAR(100))
BEGIN TRANSACTION
INSERT INTO dbo.deal (dealName,resourceCount)
VALUES (@dealName,@resourceCount)
COMMIT TRANSACTION
END
END CATCH
SELECT * FROM dbo.deal
```
You can easily put this code into a procedure, all it does is try and insert a deal name with the resource count, if the unique constraint is violated, it enters the catch block, appends the information that you want onto the deal name after finding the resource count of the original deal and then inserts these values.
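Stripped of the T-SQL, the retry-until-unique idea both answers rely on is just this probe loop (plain Python, names invented for the demo):

```python
def next_clone_name(base, existing):
    # Probe "<base> Cloned", then "<base> Cloned 2", "... 3", ... until free.
    candidate = base + " Cloned"
    n = 1
    while candidate in existing:
        n += 1
        candidate = f"{base} Cloned {n}"
    return candidate

existing = {"My Project", "My Project Cloned"}
name = next_clone_name("My Project", existing)
```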
It's not bulletproof by any means, but I find this technique really useful, not just for enforcing uniqueness, but you can use a similar way of handling exception numbers to deal with deadlocking, primary key violations and loads of other errors, all in T-SQL. | How to ensure my SQL varchar field is unique | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a database with a table that stores int and string. The integer value is the primary key. I have created a function to just fetch the strings from the database and store them in a list which is then applied to the ListView using an ArrayAdapter as shown below.
```
List<String> list = db.getAllStringNotes();
ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
android.R.layout.simple_list_item_activated_1, list);
listview.setAdapter(adapter);
listview.setChoiceMode(ListView.CHOICE_MODE_SINGLE);
```
The trouble is deleting from this listview. Since the primary key is not present along with the string, I was deleting using the position of the item selected from the list view. But that obviously messes up things if I'm removing an entry from the middle of the list.
So I was wondering if I could add the primary key, i.e. an integer value, to this list view along with the String, but not display the integer value? | The simple thing is to create two Lists:
```
1. String - Stored String notes
2. Integer - Stored all Primary Keys
```
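In plain-Python terms (not Android APIs), the two parallel lists work like this:

```python
# Notes shown in the ListView and their primary keys, kept in the same order.
notes = ["buy milk", "call Bob", "fix bug"]
keys = [10, 11, 12]

position = 1                      # position the user tapped
key_to_delete = keys[position]    # primary key for the DELETE query
del notes[position]               # keep both lists in sync afterwards
del keys[position]
```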
So whenever the user clicks on the `ListView`, you get its position; based on that position, get the primary key value from the second list and then perform your delete query. | There are many ways to do this, but as you are already using an ArrayList, I would suggest just making another ArrayList while fetching from the database:
So while deleting using the position:
use the primary key from the **PrimaryKeyArrayList**
and delete the values from both ArrayLists.
With this you will get exactly what you need. | How to mirror ListView to Database? | [
"",
"android",
"sql",
"listview",
"android-listview",
""
] |
I am creating "dynamic" SQL statements from PowerShell and passing them to a PostgreSQL server on Windows.
Below is the code
```
for ($i=1; $i -le 9; $i++)
{
$CurDate = (get-date -format "yyyy-MM-dd HH:mm:ss")
$BatchLogInsert = "Insert into `"TEMP`".`"batchLog`" (batchid, filename, status, createdate) values ('" + $NewBatchID + "','File"+$i+"','Init','"+$CurDate+"');"
write-host $BatchLogInsert
C:\PostgreSQL\9.3\bin\psql.exe -h $DBSERVER -U $DBUSER -d $CLIENTPREFIX -w -c $BatchLogInsert
}
```
write-host returns:
```
Insert into "TEMP"."batchLog" (batchid, filename, status, createdate) values ('3','File1','Init','2014-01-13 16:24:49');
Insert into "TEMP"."batchLog" (batchid, filename, status, createdate) values ('3','File2','Init','2014-01-13 16:24:49');
Insert into "TEMP"."batchLog" (batchid, filename, status, createdate) values ('3','File3','Init','2014-01-13 16:24:49');
```
and so on.
When I execute these inserts in the "Query" window of PGAdmin, it works. However, when I am calling the psql.exe, it fails saying
```
ERROR: Relation "TEMP.batchLog" does not exist
LINE 1: Insert into TEMP.batchLog (batchid, filename, status, created...
```
What am I doing wrong here?
---
EDIT: Here is a screenshot of my powershell window and PgAdmin window...

--- | Your double quotes are eaten by Powershell, Postgres sees an unquoted table name, folds it to lowercase and fails to find such a table.
Proper double quotes escaping seems to involve backslashes **and** backticks:
```
$BatchLogInsert = "Insert into \`"TEMP\`".\`"batchLog\`" (...
```
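PowerShell's parser differs in detail, but the quote-consumption itself is easy to demonstrate with POSIX-style splitting via Python's shlex (an analogy only, not PowerShell):

```python
import shlex

# Unescaped inner quotes are consumed by the command-line parser,
# so psql would receive an unquoted (and case-folded) identifier.
naive = shlex.split('psql -c "Insert into "TEMP"."batchLog" (x) values (1);"')
escaped = shlex.split('psql -c "Insert into \\"TEMP\\".\\"batchLog\\" (x) values (1);"')
```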
It would be easier to avoid using different case in DB object names altogether. | Any time you're doing "string building," I recommend that people use .NET string formatting in PowerShell. Furthermore, using the `Start-Process` cmdlet can make passing arguments to executables a lot easier.
```
$PSql = 'C:\PostgreSQL\9.3\bin\psql.exe';
for ($i=1; $i -le 9; $i++)
{
$CurDate = (Get-Date -Format 'yyyy-MM-dd HH:mm:ss');
$BatchLogInsert = 'Insert into "TEMP"."batchLog" (batchid, filename, status, createdate) values (''{0}'', ''File{1}'', ''Init'', ''{2}'');' -f $NewBatchID, $i, $CurDate;
Write-Host -Object $BatchLogInsert;
$ArgumentList = '-h {0} -U {1} -d {2} -w -c "{3}"' -f $DBServer, $DBUser, $ClientPrefix, $BatchLogInsert;
Start-Process -FilePath $PSql -ArgumentList $ArgumentList -Wait -NoNewWindow;
}
``` | insert fails in psql.exe (created from powershell) | [
"",
"sql",
"postgresql",
"powershell",
""
] |
I have a log table with several statuses. It logs the position of physical objects in an external system. I want to get the latest rows for a status for each distinct physical object.
I need a list of typeids and their quantity for each status, minus the quantity of typeids that have an entry for another status that is later than the row with the status we are looking for.
e.g. each status move is recorded, but nothing else.
Here's the problem, I don't have a distinct ID for each physical object. I can only calculate how many there are from the state of the log table.
I've tried
```
SELECT dl.id, dl.status
FROM `log` AS dl
INNER JOIN (
SELECT MAX( `date` ) , id
FROM `log`
GROUP BY id ORDER BY `date` DESC
) AS dl2
WHERE dl.id = dl2.id
```
but this would require a distinct type id to work.
My table has a primary key id, datetime, status, product type\_id. There are four different statuses.
A product must pass through all statuses.
Example Data.
```
date typeid status id
2014-01-13 PF0180 shopfloor 71941
2014-01-13 ND0355 shopfloor 71940
2014-01-10 ND0355 machine 71938
2014-01-10 ND0355 machine 71937
2014-01-10 ND0282 machine 7193
```
when selected results for the status shopfloor I would want
```
quantity typeid
1 ND0355
1 PF0180
```
when selecting for status machine I would want
```
quantity typeid
1 ND0282
1 ND0355
```
The order of the statuses shouldn't matter it only matters if there is a later entry for the product. | If I understood you correctly, this will give you the desired output:
```
select
l1.typeid,
l1.status,
count(1) - (
select count(1)
from log l2
where l2.typeid = l1.typeid and
l2.date > l1.date
)
from log l1
group by l1.typeid, l1.status;
```
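The arithmetic the query performs can be restated in plain Python against the sample data (illustrative only — the representative date for each group is taken as its latest one):

```python
log = [  # (date, typeid, status) from the question's sample
    ("2014-01-13", "PF0180", "shopfloor"),
    ("2014-01-13", "ND0355", "shopfloor"),
    ("2014-01-10", "ND0355", "machine"),
    ("2014-01-10", "ND0355", "machine"),
    ("2014-01-10", "ND0282", "machine"),
]

def quantities(status):
    groups = {}
    for d, t, s in log:
        if s == status:
            groups.setdefault(t, []).append(d)
    # count per typeid, minus entries of that typeid recorded later
    return {t: len(dates) - sum(1 for d2, t2, _ in log
                                if t2 == t and d2 > max(dates))
            for t, dates in groups.items()}
```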
Check this [SQL Fiddle](http://sqlfiddle.com/#!2/2660c/15)
```
TYPEID STATUS TOTAL
-----------------------------
ND0282 machine 1
ND0355 machine 1
ND0355 shopfloor 1
PF0180 shopfloor 1
``` | You need to get the greatest date per `status`, not per `id`. Then join to the log table where the status *and* date are the same.
```
SELECT dl.id, dl.status
FROM `log` AS dl
INNER JOIN (
SELECT status, MAX( `date` ) AS date
FROM `log`
GROUP BY status ORDER BY NULL
) AS dl2 USING (status, date);
```
It would be helpful to have an index on `(status, date)` on this table, which would allow the subquery to run as an index-only query. | Mysql get latest row for status | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
Inserting data from one table to another is usually as simple as:
`SELECT * INTO A FROM B`
But just out of curiosity, suppose I have two tables `tbl_A` and `tbl_B`. I have 100 records in `tbl_B` and some 20 rows in `tbl_A` (some of which might be common to both tables). I want to insert rows from `tbl_B` into `tbl_A` which are not already present in `tbl_A`.
Also, let's assume that both tables have identity fields. | You can use `NOT EXISTS`:
```
INSERT INTO tbl_A
SELECT IdCol, Col2, Col3
FROM dbo.tbl_B B
WHERE NOT EXISTS(SELECT 1 FROM tbl_A A2 WHERE A2.IdCol = B.IdCol)
``` | You can use the MERGE command.
Description in the MS documentation:
<http://msdn.microsoft.com/en-us/library/bb510625.aspx> | Insert Data from one table to another leaving the already existing rows | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2005",
""
] |
I've got the following query. What I want to do is get just the month name from the date column; in the following query it's displaying the month number:
```
SELECT MONTH(Invoice_Date), SUM(R.Total) AS TOTAL
FROM Sales R
GROUP BY MONTH(Invoice_Date)
ORDER BY MONTH(Invoice_Date);
```
I tried the MonthName function, and I also tried the Convert function, but neither one is working.
The following error is displayed: Undefined Function.
Please help, thanks. | Try this:
```
SELECT DATENAME(month, Invoice_Date), SUM(R.Total) AS TOTAL
FROM Sales R
GROUP BY DATENAME(month, Invoice_Date)
ORDER BY DATENAME(month, Invoice_Date);
``` | Try datepart function of MS SQL
<http://msdn.microsoft.com/en-us/library/ms174420.aspx>
MS sql query will be:
```
SELECT
DATEPART(month,Invoice_Date),
SUM(R.Total) AS TOTAL
FROM Sales R
GROUP BY DATEPART(month,Invoice_Date)
ORDER BY DATEPART(month,Invoice_Date)
```
Assuming that Invoice\_Date "Is an expression that can be resolved to a time, date, smalldatetime, datetime, datetime2, or datetimeoffset value. date can be an expression, column expression, user-defined variable, or string literal." | SQL Query to retrieve month name from database | [
"",
"sql",
"ms-access",
""
] |
I have the following table
```
+----+----------+------+
|D_ID|date |value |
+----+----------+------+
|0 |1992-02-01|119940|
+----+----------+------+
|1 |1992-02-01|119941|
+----+----------+------+
|1 |1992-02-02|119942|
+----+----------+------+
|2 |1992-02-01|119943|
+----+----------+------+
|0 |1992-02-02|119944|
+----+----------+------+
|0 |1992-02-03|119945|
+----+----------+------+
|2 |1992-02-02|119946|
+----+----------+------+
|1 |1992-02-04|119947|
+----+----------+------+
|1 |1992-02-05|119948|
+----+----------+------+
|2 |1992-02-02|119949|
+----+----------+------+
|2 |1992-02-03|119940|
+----+----------+------+
|0 |1992-02-04|119940|
+----+----------+------+
```
Is there any way to get the following result? I want to get each day's value for every D_ID.
```
+----------+------+------+------+
|date |D_ID_0|D_ID_1|D_ID_2|
+----------+------+------+------+
|1992-02-01|119940|119941|119943|
+----------+------+------+------+
|1992-02-02|119944|119942|119949|
+----------+------+------+------+
|1992-02-03|119945| |119940|
+----------+------+------+------+
|1992-02-04|119940|119947| |
+----------+------+------+------+
|1992-02-05| |119948| |
+----------+------+------+------+
``` | Hope this helps!
## ORACLE:
```
SELECT date ,
MAX(DECODE(D_ID,0,value,NULL)) as D_ID_0,
MAX(DECODE(D_ID,1,value,NULL)) as D_ID_1,
MAX(DECODE(D_ID,2,value,NULL)) as D_ID_2
FROM
your_table
GROUP BY date ;
```
## MySQL:
```
SELECT date ,
MAX(IF(D_ID=0,value,NULL)) as `D_ID_0`,
MAX(IF(D_ID=1,value,NULL)) as `D_ID_1`,
MAX(IF(D_ID=2,value,NULL)) as `D_ID_2`
FROM
your_table
GROUP BY date ;
```
*OR*
```
SELECT date ,
MAX((CASE WHEN (d_id = 0) THEN value ELSE NULL end)) AS `D_ID_0`,
MAX((CASE WHEN (d_id = 1) THEN value ELSE NULL end)) AS `D_ID_1`,
MAX((CASE WHEN (d_id = 2) THEN value ELSE NULL end)) AS `D_ID_2`
FROM
your_table
GROUP BY date ;
``` | Use Pivot
```
WITH tab(D_ID,d_date,d_value) AS
(SELECT 0 , '1992-02-01', 119940 FROM dual UNION ALL
SELECT 1, '1992-02-01', 119941 FROM dual UNION ALL
SELECT 1, '1992-02-02', 119942 FROM dual UNION ALL
SELECT 2, '1992-02-01', 119943 FROM dual UNION ALL
SELECT 0, '1992-02-02', 119944 FROM dual UNION ALL
SELECT 0, '1992-02-03', 119945 FROM dual UNION ALL
SELECT 2, '1992-02-02', 119946 FROM dual UNION ALL
SELECT 1, '1992-02-04', 119947 FROM dual UNION ALL
SELECT 1, '1992-02-05', 119948 FROM dual UNION ALL
SELECT 2, '1992-02-02', 119949 FROM dual UNION ALL
SELECT 2, '1992-02-03', 119940 FROM dual UNION ALL
SELECT 0, '1992-02-04', 119940 FROM dual)
-------
--End of Data
-------
SELECT * FROM tab
pivot (min(d_VALUE) AS dd_value FOR d_id IN (0 ,1 ,2));
```
output:
```
| D_DATE | 0_DD_VALUE | 1_DD_VALUE | 2_DD_VALUE |
|------------|------------|------------|------------|
| 1992-02-04 | 119940 | 119947 | (null) |
| 1992-02-03 | 119945 | (null) | 119940 |
| 1992-02-02 | 119944 | 119942 | 119946 |
| 1992-02-05 | (null) | 119948 | (null) |
| 1992-02-01 | 119940 | 119941 | 119943 |
``` | How to write this query to get proper result? | [
"",
"jquery",
"mysql",
"sql",
"oracle",
""
] |
How can I get last month date like
```
select * from table where date in ( last month )
```
I don't want the last 30 days.
AND how can I get last month automatically? | **Edit**
If you mean last month from today, or the previous month from a specific date, then you need to do something like this:
```
SELECT DATEPART(MONTH, DATEADD(MONTH, -1, [Date]))
```
Or to get records from previous month of the year you can do something like this
```
SELECT * FROM Table
WHERE MONTH(Date) = DATEPART(MONTH, DATEADD(MONTH, -1, [Date]))
AND YEAR(Date) = DATEPART(YEAR, DATEADD(MONTH, -1, [Date])) --<-- or pass year for which year you are checking
```
**To make your query SARGable** (suggested by t-clausen.dk)
```
select * from table
where date >=dateadd(m, datediff(m, 0, current_timestamp)-1, 0)
and date < dateadd(m, datediff(m, 0, current_timestamp), 0) -- first day of the current month
```
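The half-open range the SARGable version targets — [first day of the previous month, first day of the current month) — looks like this in Python (datetime used purely to illustrate the boundary arithmetic):

```python
from datetime import date

def prev_month_bounds(today):
    first_of_current = today.replace(day=1)      # exclusive upper bound
    if first_of_current.month == 1:
        first_of_prev = first_of_current.replace(year=first_of_current.year - 1,
                                                 month=12)
    else:
        first_of_prev = first_of_current.replace(month=first_of_current.month - 1)
    return first_of_prev, first_of_current

lo, hi = prev_month_bounds(date(2014, 1, 13))
```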
[Read here more about sargable Queries](http://msmvps.com/blogs/robfarley/archive/2010/01/22/sargable-functions-in-sql-server.aspx) when working with date/datetime datatypes. | Assuming you want all items where the date is *within the last month* i.e. between today and 30/31 days ago:
```
Select *
From Table
Where Date Between DATEADD(m, -1, GETDATE()) and GETDATE()
``` | Select all where date in Last month sql | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
## The problem
We have a legacy Visual FoxPro reservation system with many tables. I have been asked to do some housekeeping on the tables to reduce their size.
* The tables are badly designed with no auto incrementing primary key.
* The largest table is 3 million rows.
* I am attempting to delete 380,000 rows.
* Due to the volume of data in the tables, I am trying to develop a solution which batch deletes.
## What I've got so far
I have created a C# application which accesses the database files via the vfpoledb.1 driver. This application uses recno() to batch the deletion. This is an example of the query I'm using:
```
delete from TableA
where TableA.Key in (
select Key from TableB
where Departure < date(2010,01,01) and Key <> ""
) and recno() between 1 and 10000
```
Executing this via vfpoledb.1 does not delete anything. Executing a select statement with the same where clause does not return anything.
It seems that the combination of the recno() function and an in() function is causing the issue. Testing the query with each clause in turn returns results.
## Questions
1. Is there another way of batch deleting data from Visual FoxPro?
2. Why are recno() and in() not compatible?
3. Is there anything else I'm missing?
## Additional information
* ANSI is set to TRUE
* DELETED is set to TRUE
* EXCLUSIVE is set to TRUE | Instead of batching by so many record numbers, why not a simpler approach? You are looking to kill off everything prior to some date (2010-01-01).
Why not start with 2009-12-31 and keep working backwards to the earliest date on file you are trying to purge? Also note, I don't know if Departure is a date vs datetime, so I changed it to
TTOD( Departure ) (meaning convert time to just the date component)
```
DateTime purgeDate = new DateTime(2009, 12, 31);
// the "?" is a parameter place-holder in the query
string SQLtxt = "delete from TableA "
+ " where TableA.Key in ( "
+ " select Key from TableB "
+ " where TTOD( Departure ) < ? and Key <> \"\" )";
OleDbCommand oSQL = new OleDbCommand( SQLtxt, YourOleDbConnectionHandle );
// default the "?" parameter place-holder
oSQL.Parameters.AddWithValue( "parmDate", purgeDate );
int RecordsDeleted = 0;
while( purgeDate > new DateTime(2000,1,1) )
{
// always re-apply the updated purge date for deletion
oSQL.Parameters[0].Value = purgeDate;
RecordsDeleted += oSQL.ExecuteNonQuery();
// keep going back one day at a time...
purgeDate = purgeDate.AddDays(-1);
}
```
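The same day-at-a-time batching can be sketched against SQLite (a stand-in for the VFP OLE DB provider here; table contents are made up):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableB (k TEXT, departure TEXT)")
conn.execute("CREATE TABLE tableA (k TEXT)")
for i, d in enumerate(["2009-12-29", "2009-12-30", "2009-12-31", "2010-05-01"]):
    conn.execute("INSERT INTO tableB VALUES (?, ?)", (f"k{i}", d))
    conn.execute("INSERT INTO tableA VALUES (?)", (f"k{i}",))

deleted = 0
purge = date(2009, 12, 31)
while purge > date(2009, 12, 1):   # one day's worth of keys per pass
    cur = conn.execute(
        "DELETE FROM tableA WHERE k IN "
        "(SELECT k FROM tableB WHERE departure = ?)",
        (purge.isoformat(),))
    deleted += cur.rowcount
    purge -= timedelta(days=1)
remaining = conn.execute("SELECT COUNT(*) FROM tableA").fetchone()[0]
```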
This way, it does not matter what RECNO() you are dealing with; it will only do whatever keys are for that particular day. If you have more than 10,000 entries for a single day, then I might approach it differently, but since this is more of a one-time cleanup, I would not be too concerned with doing 1000+ iterations (365 days per year for however many years) through the data... Or, you could do it with a date range and go maybe weekly; just change the WHERE clause and adjust the parameters... something like the following. (The date of 1/1/2000 is just a guess for how far back the data goes.) Also, since this is doing an entire date range, there is no need for a possible TTOD() conversion of the departure field.
```
DateTime purgeDate = new DateTime(2009, 12, 31);
DateTime lessThanDate = new DateTime( 2010, 1, 1 );
// the "?" is a parameter place-holder in the query
string SQLtxt = "delete from TableA "
+ " where TableA.Key in ( "
+ " select Key from TableB "
+ " where Departure >= ? "
+ " and Departure < ? "
+ " and Key <> \"\" )";
OleDbCommand oSQL = new OleDbCommand( SQLtxt, YourOleDbConnectionHandle );
// default the "?" parameter place-holder
oSQL.Parameters.AddWithValue( "parmDate", purgeDate );
oSQL.Parameters.AddWithValue( "parmLessThanDate", lessThanDate );
int RecordsDeleted = 0;
while( purgeDate > new DateTime(2000,1,1) )
{
// always re-apply the updated purge date for deletion
oSQL.Parameters[0].Value = purgeDate;
oSQL.Parameters[1].Value = lessThanDate;
RecordsDeleted += oSQL.ExecuteNonQuery();
// keep going back one WEEK at a time for both the starting and less than end date of each pass
purgeDate = purgeDate.AddDays(-7);
lessThanDate = lessThanDate.AddDays( -7);
}
``` | It looks like your condition with date isn't working. Try to execute SELECT statement with using of CTOD() function instead of DATE() you've used.
Once your condition works, you'll be able to run the DELETE statement. But remember that as a result of the DELETE execution the rows will only be marked as deleted; to remove them completely you should run the PACK statement after the DELETE.
As another option, you can also try our [DBF editor](http://dbf-software.com "DBF Commander Pro") - DBF Commander Professional. It allows you to execute SQL queries, including in command-line (batch) mode. E.g.:
```
dbfcommander.exe -q "DELETE FROM 'D:\table_name.dbf' WHERE RECNO()<10000"
dbfcommander.exe -q "PACK 'D:\table_name.dbf'"
```
You can use it for free within 20 days full-featured trial period. | How to delete large amounts of data from Foxpro | [
"",
"sql",
"visual-foxpro",
""
] |
I have a table containing user to user messages. A conversation has all messages between two users. I am trying to get a list of all the different conversations and display only the last message sent in the listing.
I am able to do this with a SQL sub-query in FROM.
```
CREATE TABLE `messages` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`from_user_id` bigint(20) DEFAULT NULL,
`to_user_id` bigint(20) DEFAULT NULL,
`type` smallint(6) NOT NULL,
`is_read` tinyint(1) NOT NULL,
`is_deleted` tinyint(1) NOT NULL,
`text` longtext COLLATE utf8_unicode_ci NOT NULL,
`heading` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`created_at_utc` datetime DEFAULT NULL,
`read_at_utc` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
);
SELECT * FROM
(SELECT * FROM `messages` WHERE TYPE = 1 AND
(from_user_id = 22 OR to_user_id = 22)
ORDER BY created_at_utc DESC
) tb
GROUP BY from_user_id, to_user_id;
```
**SQL Fiddle:**
<http://www.sqlfiddle.com/#!2/845275/2>
Is there a way to do this without a sub-query?
*(writing a DQL which supports sub-queries only in 'IN')* | You seem to be trying to get the last contents of messages to or from user 22 with type = 1. Your method is explicitly not guaranteed to work, because the extra columns (not in the `group by`) can come from arbitrary rows. As explained in the [documentation][1]:
> MySQL extends the use of GROUP BY so that the select list can refer to
> nonaggregated columns not named in the GROUP BY clause. This means
> that the preceding query is legal in MySQL. You can use this feature
> to get better performance by avoiding unnecessary column sorting and
> grouping. However, this is useful primarily when all values in each
> nonaggregated column not named in the GROUP BY are the same for each
> group. The server is free to choose any value from each group, so
> unless they are the same, the values chosen are indeterminate.
> Furthermore, the selection of values from each group cannot be
> influenced by adding an ORDER BY clause. Sorting of the result set
> occurs after values have been chosen, and ORDER BY does not affect
> which values within each group the server chooses.
The query that you want is more along the lines of this (assuming that you have an auto-incrementing `id` column for `messages`):
```
select m.*
from (select m2.from_user_id, m2.to_user_id, max(m2.id) as max_id
      from messages m2
      where m2.type = 1 and (m2.from_user_id = 22 or m2.to_user_id = 22)
      group by m2.from_user_id, m2.to_user_id
     ) lm join
     messages m
     on lm.max_id = m.id;
```
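As a quick sanity check of the max-id-per-pair approach, here is a throwaway SQLite session driven from Python; the schema is cut down and the data is my own toy example, not the poster's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE messages (id INTEGER PRIMARY KEY, from_user_id INT,
                           to_user_id INT, type INT, text TEXT);
    INSERT INTO messages VALUES
        (1, 22, 7, 1, 'hi'),
        (2, 7, 22, 1, 'hello'),
        (3, 22, 7, 1, 'latest 22->7'),
        (4, 9, 22, 1, 'latest 9->22');
""")

# Latest message id per (from_user_id, to_user_id) pair, joined back to
# the full rows -- essentially the shape of the query above, with an
# explicit GROUP BY on the pair.
rows = conn.execute("""
    SELECT m.id, m.text
    FROM (SELECT from_user_id, to_user_id, MAX(id) AS max_id
          FROM messages
          WHERE type = 1 AND (from_user_id = 22 OR to_user_id = 22)
          GROUP BY from_user_id, to_user_id) lm
    JOIN messages m ON m.id = lm.max_id
    ORDER BY m.id
""").fetchall()
print(rows)  # [(2, 'hello'), (3, 'latest 22->7'), (4, 'latest 9->22')]
```

Note that, like the query above, this treats (22, 7) and (7, 22) as two different conversations; collapsing both directions into one conversation would need the pair normalized first (e.g. with min/max).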
Or this:
```
select m.*
from messages m
where m.type = 1 and (m.from_user_id = 22 or m.to_user_id = 22) and
not exists (select 1
from messages m2
where m2.type = m.type and m2.from_user_id = m.from_user_id and
m2.to_user_id = m.to_user_id and
m2.created_at_utc > m.created_at_utc
);
```
For this latter query, an index on `messages(type, from_user_id, to_user_id, created_at_utc)` would help performance. | Since this is a rather specific type of data query which goes outside common ORM use cases, DQL isn't really fit for this - it's optimized for walking well-defined relationships.
For your case however Doctrine fully supports [native SQL with result set mapping](http://docs.doctrine-project.org/en/latest/reference/native-sql.html). Using a `NativeQuery` with `ResultSetMapping` like this you can easily use the subquery this problem requires, and still map the results on native Doctrine entities, allowing you to still profit from all caching, usability and performance advantages.
[Samples found here](http://docs.doctrine-project.org/en/latest/reference/native-sql.html#resultsetmappingbuilder). | SQL alternative to sub-query in FROM | [
"",
"mysql",
"sql",
"dql",
""
] |
I have 3 tables called `table1`, `table2` and `table3`. `table3` contains records that have `table1.id` and `table2.id` plus some other columns as well. I need to do the following: for each record in `table1`, check whether `table3` contains a row pairing that `table1.id` with every `table2.id`; if such a record does not exist, I want to insert it.
so here is the example.
suppose table1
```
1 ... ... ...
2 ... ... ...
```
table2
```
10 .. .. ..
20 .. .. ..
```
table3
```
1 | 10 .. .. ..
2 | 20 .. .. ..
```
I need to add
`1 20 .. .. ..` and
`2 10 .. .. ..` rows to the table3 because for `table1.id` 1 it did not have the row which had all `table2.id`s (in this case 20) and for `table1.id` 2 it also did not have the row which had all `table2.id`s (in this case 10) in it. any help would be appreciated | If I've got it right try this:
```
INSERT INTO Table3 (Table1_id,Table2_id)
SELECT Table1.id, Table2.id FROM Table1, Table2
WHERE NOT EXISTS (SELECT 1
FROM Table3
WHERE Table3.Table1_id=Table1.ID
AND
Table3.Table2_id=Table2.ID)
``` | Try this:
```
IF NOT EXISTS(SELECT 1 FROM Table3 WHERE Table3.Table1_ID = @ID1 AND Table3.Table2_ID = @ID2)
    INSERT INTO Table3(Table1_ID, Table2_ID) VALUES (@ID1, @ID2)
``` | Sql insert if row does not exist | [
"",
"sql",
"t-sql",
""
] |
I would like to implement a query in MYSQL that will remove the characters before the space
For example :
> UA 016
I want to remove 'UA ' (with the space), and only keep
> 016
Here is the query statement:
```
update field_data_field_id set field_id_value = '.... what i need here';
```
Thanks in advance | ```
UPDATE field_data_field_id
SET field_id_value=SUBSTRING_INDEX(
SUBSTRING_INDEX(field_id_value,' ',2),' ',-1);
``` | try this
```
CREATE FUNCTION IsNumeric (val varchar(255)) RETURNS tinyint
RETURN val REGEXP '^(-|\\+){0,1}([0-9]+\\.[0-9]*|[0-9]*\\.[0-9]+|[0-9]+)$';
CREATE FUNCTION NumericOnly (val VARCHAR(255))
RETURNS VARCHAR(255)
BEGIN
DECLARE idx INT DEFAULT 0;
IF ISNULL(val) THEN RETURN NULL; END IF;
IF LENGTH(val) = 0 THEN RETURN ""; END IF;
SET idx = LENGTH(val);
WHILE idx > 0 DO
IF IsNumeric(SUBSTRING(val,idx,1)) = 0 THEN
SET val = REPLACE(val,SUBSTRING(val,idx,1),"");
SET idx = LENGTH(val)+1;
END IF;
SET idx = idx - 1;
END WHILE;
RETURN val;
END;
```
---
Use it by calling the NumericOnly function like this:
```
select NumericOnly('1&2') as result;
```
Returns: "12"
```
select NumericOnly('abc987') as result;
```
Returns: "987" | MYSQL query will remove characters from a string | [
"",
"mysql",
"sql",
"character",
""
] |
```
SELECT *
FROM table                               -> 35 records

SELECT *
FROM table
WHERE x IN (SELECT x FROM table1)        -> 34 records

SELECT *
FROM table
WHERE x NOT IN (SELECT x FROM table1)    -> 0 records
```
Any ideas as to how this could be possible? | The simple fix for the `NULL` value is:
```
SELECT *
FROM table
WHERE x NOT IN (SELECT x
FROM table1
WHERE x is not null);
```
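If you want to see the `NULL` effect in isolation, here is a small throwaway demonstration using SQLite from Python (the table names are made up for the demo, not taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (x INTEGER);
    CREATE TABLE t1 (x INTEGER);
    INSERT INTO t VALUES (1), (2), (3);
    INSERT INTO t1 VALUES (1), (NULL);   -- the NULL is the troublemaker
""")

# x NOT IN (1, NULL) is never TRUE for any x, so the result set vanishes.
not_in = conn.execute(
    "SELECT COUNT(*) FROM t WHERE x NOT IN (SELECT x FROM t1)").fetchone()[0]

# Filtering the NULL out (or using NOT EXISTS) behaves as expected.
not_in_filtered = conn.execute(
    "SELECT COUNT(*) FROM t WHERE x NOT IN "
    "(SELECT x FROM t1 WHERE x IS NOT NULL)").fetchone()[0]
not_exists = conn.execute(
    "SELECT COUNT(*) FROM t WHERE NOT EXISTS "
    "(SELECT 1 FROM t1 WHERE t1.x = t.x)").fetchone()[0]

print(not_in, not_in_filtered, not_exists)  # 0 2 2
```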
However, it is recommended to use `not exists` rather than `not in` because of the `NULL` issue:
```
select t.*
from table t
where not exists (select 1 from table1 t1 where t1.x = t.x);
``` | One of your `x` values is `NULL`. `NULL` values will never evaluate to `true` in any comparison (since the value is unknown). | Issue with NOT IN | [
"",
"sql",
"sql-server",
""
] |
I'm using JDBC to insert a single row into a DB2 database table using the insert-select style insert. I am simultaneously trying to insert dynamic data from a variable, as well as the `ENTRY_DATE` using `CURRENT DATE`:
```
INSERT INTO mytable (COL_A, COL_B, ENTRY_DATE)
SELECT COL_A, ?, CURRENT DATE FROM mytable
WHERE COL_A > 1;
```
I'm new to sql and JDBC, so I don't know if using `CURRENT DATE` is common, or specific to our system, but it works in a normal insert like:
```
INSERT INTO mytable ENTRY_DATE values(CURRENT DATE);
```
I've never used insert-select style inserts, so I don't know if the error is on my '?' which I insert using PreparedStatement.setString, or the `CURRENT DATE` parameter. However I get the following error:
> [BEA][DB2 JDBC Driver][DB2]STRING TO BE PREPARED CONTAINS INVALID USE
> OF PARAMETER MARKERS
Do I need to surround either or both of those with something to show they aren't part of the select statement? Do I need to rearrange my statement?
EDIT:
I can't show you my exact code, but here is a close approximation:
```
String sql = null;
PreparedStatement prepStmnt = null;
Connection conn = getConnection("database");
sql = "INSERT INTO MYTABLE ";
sql += "(COLUMN_A, COLUMN_B, ENTRY_DATE) ";
sql += "SELECT COLUMN_A, ?, CURRENT DATE ";
sql += "FROM MYTABLE WHERE COLUMN_A > 1;";
prepStmnt = conn.prepareStatement(sql);
prepStmnt.setString(1, myVar);
prepStmnt.execute();
``` | I think the problem is that DB2 doesn't know anything about your parameter which is in the column list of the select, so can't know what the select will return, and so it can't prepare the statement properly.
Try this:
```
INSERT INTO mytable (COL_A, COL_B, ENTRY_DATE)
SELECT COL_A, CAST(? AS CHAR(1)), CURRENT DATE FROM mytable
WHERE COL_A > 1;
```
Replace the `CHAR(1)` with whatever the correct datatype for `COL_B` is. | The error message refers to `?` not to CURRENT DATE. Maybe this is what you wanted?
```
INSERT INTO mytable (COL_A, COL_B, ENTRY_DATE)
SELECT COL_A, NULL, CURRENT DATE FROM mytable
WHERE COL_A > 1;
```
In DB2 a `?` means there will be a parameter passed for that location. But it is saying you can't use a Parameter Marker. So remove the `?` which is the parameter marker. | Using CURRENT TIME in an insert with a select in DB2 | [
"",
"sql",
"select",
"jdbc",
"insert",
"db2",
""
] |
I am trying to get the output from one SQL statement into another SQL statement, although it does not seem to be working. Is there an alternative, and how would I use it?
```
$id = ($_SESSION["user_id"]);
$query1 = "SELECT event_id FROM booking where user_id = '$id'";
$result = mysql_query($query1);
$query2 = "SELECT * FROM event where event_id = '$result'";
$results = mysql_query($query2);
``` | You can join the tables, try using below syntax
```
SELECT event.event_id,booking.*
FROM event
JOIN booking on event.event_id=booking.event_id
WHERE booking.user_id='$id';
``` | What's about using the one query instead of two:
```
$id = ($_SESSION["user_id"]);
$query2 = "SELECT * FROM event
where event_id IN (SELECT event_id
FROM booking
where user_id = '$id')";
$results = mysql_query($query2);
``` | Using the output of an SQL statement in another SQL statement? | [
"",
"mysql",
"sql",
""
] |
I am having a problem with a query I have put together. I have followed the explanation on <http://www.sql-server-helper.com/error-messages/msg-147.aspx> (bottom of the page) and I can't see much difference between my code and the example, other than an addition to the WHERE clause and an inner join.
Yet I am still getting the following error:
> Msg 147, Level 15, State 1, Line 5
> An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.
This is my own code:
```
SELECT *
FROM [dbo].[mail] AS rm
INNER JOIN [dbo].[mytbl] AS ec ON [rm].[webref] = [ec].[Webref]
WHERE rm.[webref] = 'XYZ-594112'
AND [PolRef@] = ( SELECT MAX([PolRef@])
FROM [dbo].[mytbl]
)
```
Can anyone shed any light as to why this is occurring?
EDIT:
```
CREATE TABLE [dbo].[mail](
[id] [int] IDENTITY(1,1) NOT NULL,
[date] [datetime] NULL,
[webref] [nvarchar](20) NULL
) ON [PRIMARY]
```
The other table is actually a view that is comprised of several other tables; however, a cut-down version would be:
```
CREATE TABLE [dbo].[mytbl](
[PolRef@] [varchar](10) NULL,
[Webref] [varchar](30) NULL) ON [PRIMARY]
```
EDIT UPDATE:
Now getting the following error:
```
SELECT *
FROM [FreshSystems].[dbo].[mail] AS rm
INNER JOIN [dbo].[mytbl] AS ec ON [rm].[webref] = [ec].[Webref]
WHERE rm.[webref] = 'XYZ-594112'
HAVING [PolRef@] = ( SELECT MAX([PolRef@])
FROM [dbo].[mytbl]
)
```
Error
```
Msg 8121, Level 16, State 1, Line 5
Column 'mytbl.PolRef@' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause.
```
WORKING:
```
SELECT *
FROM [dbo].[mail] AS rm
LEFT OUTER JOIN [dbo].[mytbl] AS ec ON [rm].[webref] = [ec].[Webref]
WHERE rm.[webref] = 'XYZ-594112'
AND [PolRef@] = ( SELECT MAX([PolRef@])
FROM [dbo].[mytbl]
WHERE [Webref] = 'XYZ-594112'
)
``` | WORKING CODE:
```
SELECT *
FROM [dbo].[mail] AS rm
LEFT OUTER JOIN [dbo].[mytbl] AS ec ON [rm].[webref] = [ec].[Webref]
WHERE rm.[webref] = 'XYZ-594112'
AND [PolRef@] = ( SELECT MAX([PolRef@])
FROM [dbo].[mytbl]
WHERE [Webref] = 'XYZ-594112'
)
``` | Just order by `poltef@ desc` and take the top 1:
```
SELECT TOP 1 WITH TIES *
FROM mail rm
JOIN mytbl ec ON rm.webref = ec.Webref
WHERE rm.webref = 'XYZ-594112'
ORDER BY [PolRef@] DESC
```
Also, I have removed all unnecessary characters. | WHERE ID = MAX(ID) Getting Error Aggregate May Not Appear | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to import some data into SQL Server using the Import and Export Data wizard. In Excel I have a column of 8-digit numbers. However, when I bring these into the import wizard, the column is displayed as being double, and when I import it I get overflow errors. I then go back to Excel, select the cells, and format them as numbers with no decimal places. Then I go to the "edit mappings" section in the import wizard and change the destination to numeric(8,0). I still get those overflow errors.
I would like a way to better format the data in Excel so that it imports correctly.
Thank You | You can import your table into a temp or staging table with a larger data type; validate and clean your imported data; then load it into your real table. If you can't always enforce the data type in your Excel doc, then make it easier on the server side to import your data, then manipulate it there. | I had a project that ran into this problem last year.
What I ended up doing is I imported the data from Excel to Access. Then import from Access to SQL. I don't remember the specifics, but it solved most of my issues. | Preparing excel workbook for importing into SQL | [
"",
"sql",
"sql-server",
"vba",
"excel",
""
] |
I am working on updating some Python code and was looking for advice on the best way to deal with an idea I am working with. The part of my code that I wish to change is:
```
my_reader = csv.reader(input, delimiter = ',',quotechar='|')
mouse.executemany("INSERT INTO Example_Input (ID,Name,Job,Salary) VALUES (?,?,?,?)", my_reader)
```
The code works. My question is, can I change the "(?,?,?,?)" into something more dynamic like 'range()' to allow user input. I understand that I would also have to have a dynamic create table statement, so another solution might be to count the number of inputs.
To be a little more clear: for example if I had raw\_input("How many variables does the table contain?: ") and the input was 2, the program would know to run as if (?,?).
Thoughts?
(also I am using SQLite3 and python 2.7) | Assuming your csv has a header row, you could use a [DictReader](http://docs.python.org/2/library/csv.html#csv.DictReader) and generate the field names and parameters from its [fieldnames](http://docs.python.org/2/library/csv.html#csv.csvreader.fieldnames) property.
The constructor for [DictReader](http://docs.python.org/2/library/csv.html#csv.DictReader) allows you to specify the fieldnames if they are not in the file, so you could ask the user for that information if required.
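As a side note on the dynamic-placeholder part of the question, the `VALUES (?,?,...)` list can be generated from any column count; a minimal sketch of my own:

```python
# Build the "(?,?,...,?)" placeholder list for a user-supplied column count.
# n would come from something like int(raw_input(...)) on Python 2.
def placeholders(n):
    return '(' + ','.join(['?'] * n) + ')'

print(placeholders(2))  # (?,?)
print(placeholders(4))  # (?,?,?,?)
```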
Assuming the headers are in the file, this example code should work:
```
import csv
import sqlite3
#Give the table a name and use it for the file as well
table_name = 'Example'
a = open(table_name + '.csv', 'r')
#Use a dict reader
my_reader = csv.DictReader(a)
print my_reader.fieldnames # Use this to create table and get number of field values,etc.
# create statement
create_sql = 'CREATE TABLE ' + table_name + '(' + ','.join(my_reader.fieldnames) + ')'
print create_sql
#open the db
conn = sqlite3.connect('example.db')
c = conn.cursor()
# Create table using field names
c.execute(create_sql)
insert_sql = 'insert into ' + table_name + ' (' + ','.join(my_reader.fieldnames) + ') VALUES (' + ','.join(['?'] * len(my_reader.fieldnames))+ ')'
print insert_sql
values = []
for row in my_reader:
row_values = []
for field in my_reader.fieldnames:
row_values.append(row[field])
values.append(row_values)
c.executemany(insert_sql, values)
conn.commit()  # don't forget to commit the inserts
``` | On python3, using list comprehensions and dictionaries I came up with the following simple code that is capable of building a Dynamic SQLite insert string based on a given dictionary:
```
# Data to be inserted:
data = [
{
'table': 'customers',
'values': {
'name': '"Doctor Who"',
'email': '"doctorwho@timelords.com"'
}
},
{
'table': 'orders',
'values': {
'customer_id': '1',
'item': '"Sonic Screwdriver"',
'price': '1000.00'
}
}
]
def generate_insert_query(dictionary):
table = dictionary["table"] # Get name of the table
# Get all "keys" inside "values" key of dictionary (column names)
columns = ', '.join(dictionary["values"].keys())
# Get all "values" inside "values" key of dictionary (insert values)
values = ', '.join(dictionary["values"].values())
# Generate INSERT query
print(f"INSERT INTO {table} ({columns}) VALUES ({values})" + "\n")
# Generate QUERY for each dictionary inside data list
for query in data:
generate_insert_query(query)
```
Try the code on: <https://repl.it/KNZg/2> | Dynamic INSERT Statement in Python | [
"",
"sql",
"python-2.7",
"sqlite",
""
] |
I created a table named `tblEmployees` with code -
```
Create table tblEmployees
(
EmployeeID int identity primary key,
Name nvarchar(30),
Salary float,
Gender tinyint
)
```
Then I inserted values-
```
insert into tblEmployees values ('Akmal', 5000, 0)
insert into tblEmployees values ('Shakira', 6000, 1)
insert into tblEmployees values ('Kiron', 7000, 2)
insert into tblEmployees values ('Jamil', 5500, 0)
insert into tblEmployees values ('Faul', 4800, 4)
```
But, when the values are shown -
```
EmployeeID Name Salary Gender
2 Akmal 5000 0
3 Shakira 6000 1
4 Kiron 7000 2
5 Jamil 5500 0
7 Faul 4800 4
```
My question is: why does the EmployeeID column start with 2? And where is 6? Shouldn't it be incremented automatically? | Your T-SQL script must be incomplete, because when I run it my IDENTITY values are generated starting with 1 (and ending with 5).
Note #0: I'm only trying to describe some possible reasons for those missing IDENTITY values.
Note #1: Don't run this script on a production server.
Note #2: Using `IDENTITY` within a column definition means `IDENTITY(1,1)` <=> `IDENTITY(seed value/initial value=1,increment value=1)`.
Note #3: You should avoid using `DBCC CHECKIDENT` if you aren't aware of the [consequences of this command](http://technet.microsoft.com/en-us/library/ms176057.aspx).
**The first missing value (a possible explanation):**
> why EmployeeID column started with 2?
Run the following script:
```
IF OBJECT_ID(N'dbo.tblEmployees') IS NOT NULL
DROP TABLE dbo.tblEmployees;
GO
Create table tblEmployees
(
EmployeeID int identity primary key,
Name nvarchar(30),
Salary float,
Gender tinyint
)
GO
insert into tblEmployees values ('Akmal', 5000, 0)
insert into tblEmployees values ('Shakira', 6000, 1)
insert into tblEmployees values ('Kiron', 7000, 2)
insert into tblEmployees values ('Jamil', 5500, 0)
insert into tblEmployees values ('Faul', 4800, 4)
GO
SELECT SCOPE_IDENTITY() AS [Last IDENTITY #1];
/*
Last IDENTITY #1
----------------
5
*/
GO
```
At this moment the last IDENTITY value generated for this table is (as you can see) 5 and not 7 (like your example).
```
SELECT * FROM dbo.tblEmployees;
/*
EmployeeID Name Salary G
----------- ------------------------------ ---------------------- -
1 Akmal 5000 0
2 Shakira 6000 1
3 Kiron 7000 2
4 Jamil 5500 0
5 Faul 4800 4
*/
GO
```
All rows have continuous IDENTITY values: there are no gaps.
Now, for some reason, somebody deletes all rows from `dbo.tblEmployees` and also decides to *reset* (RESEED) the last identity value from 5 to 1.
```
DELETE dbo.tblEmployees;
GO
DBCC CHECKIDENT('dbo.tblEmployees', RESEED, 1);
GO
SELECT SCOPE_IDENTITY() AS [Last IDENTITY #2];
/*
Last IDENTITY #2
----------------
1
*/
GO
```
Now, the last IDENTITY value is 1 (because of that RESEED 1).
```
insert into tblEmployees values ('Akmal', 5000, 0)
insert into tblEmployees values ('Shakira', 6000, 1)
insert into tblEmployees values ('Kiron', 7000, 2)
insert into tblEmployees values ('Jamil', 5500, 0)
insert into tblEmployees values ('Faul', 4800, 4)
GO
SELECT * FROM dbo.tblEmployees;
GO
/*
EmployeeID Name Salary Gender
----------- ------------------------------ ---------------------- ------
2 Akmal 5000 0
3 Shakira 6000 1
4 Kiron 7000 2
5 Jamil 5500 0
6 Faul 4800 4
*/
```
When I insert those rows again, the first generated IDENTITY value is 2 (this time).
Why ? The reason is described in [MSDN](http://technet.microsoft.com/en-us/library/ms176057.aspx):
"If no rows have been inserted into the table since the table was created, or **if all rows have been removed** by using the
TRUNCATE TABLE statement, the first row inserted after you run DBCC CHECKIDENT uses new\_reseed\_value as the identity.
**Otherwise, the next row inserted uses new\_reseed\_value + the current increment value.**"
This last formula explains why this time the first IDENTITY value is 2:
**new\_reseed\_value** (is 1 - because of RESEED 1) **+ the current increment value** (1 - see Note #2) **= 1 + 1 = 2**.
Note #4: If you are using `TRUNCATE TABLE` instead of `DELETE` then the first row inserted after `TRUNCATE TABLE` will have the ID = seed value (see Note #2) or new\_reseed\_value = 1. So, in this case you don't need `DBCC(..., RESEED, 1)`.
**The second missing value (a possible explanation):**
> And where is 6?
```
DELETE dbo.tblEmployees WHERE EmployeeID = 6
insert into tblEmployees values ('Faul', 4800, 4)
GO
SELECT SCOPE_IDENTITY() AS [Last IDENTITY #3];
/*
Last IDENTITY #3
----------------
7
*/
GO
SELECT * FROM dbo.tblEmployees;
GO
/*
EmployeeID Name Salary Gender
----------- ------------------------------ ---------------------- ------
2 Akmal 5000 0
3 Shakira 6000 1
4 Kiron 7000 2
5 Jamil 5500 0
7 Faul 4800 4
*/
``` | Do not rely on `IDENTITY` columns to produce a contiguous set of values with no gaps. Period. This is not guaranteed at all; several things can cause gaps such as rollbacks, deletes, reseeds, etc. I don't believe you reproduced this problem with that exact code above; there was probably other activity in between those `INSERT` statements.
For such surrogate and meaningless values you really shouldn't care if there are gaps or not. If you care about gaps, use a different technique (e.g. a serializable max()+1 solution) - just be aware that you trade gaps for scalability / concurrency concerns. The other answer (which you accepted, but I suspect will get deleted) said:
> If you want to have an identity column with dependable and specified values then you need to set IDENTITY\_INSERT to ON on that column, insert your values (with specific ID values) and then set IDENTITY\_INSERT to OFF.
This only works if you already know the values you want to insert into that column. Which defeats the purpose of the `IDENTITY` property in the first place. If you don't already know what values to insert (e.g. what is the "next" `ID`), it means you need to `SELECT MAX()` from the table, and add 1 to it. Which means the whole thing needs to be serializable, otherwise someone else can read your same `MAX()` value and add the same `+1` to it. So aside from making the `IDENTITY` property useless if you're always going to override the generated value anyway, it also kills scalability by effectively limiting concurrency to 1. I highly recommend you strongly weigh that approach before implementing it.
What I suggest you do instead, is use an `IDENTITY` column, and don't be hung up on gaps. They're going to happen, there's not much you can do about it, and it shouldn't really be a concern anyway. Who cares if there is no employee #6? | SQL Server Identity column does not behave properly? | [
"",
"sql",
"sql-server",
""
] |
I have a `veh_speed` table with the fields `vid`, `date_time`, `speed`, `status`. My objective is to get the duration (`start_date_time` and `end_date_time`) of the periods when the vehicle's speed was greater than 30. Currently I am generating the report using `PL/SQL`. Is it possible to do this with plain `SQL`? Also, it would be great if it is possible to get the max_speed within each range.
My table is as follows:
```
VID START_DATE_TIME SPEED STATUS
--- ------------------- ----- ------
1 15/01/2014 10:00:05 0 N
1 15/01/2014 10:00:10 10 Y
1 15/01/2014 10:00:15 30 Y
1 15/01/2014 10:00:20 35 Y
1 15/01/2014 10:00:25 45 Y
1 15/01/2014 10:00:27 10 Y
1 15/01/2014 10:00:29 0 Y
1 15/01/2014 10:00:30 20 Y
1 15/01/2014 10:00:35 32 Y
1 15/01/2014 10:00:40 33 Y
1 15/01/2014 10:00:45 35 Y
1 15/01/2014 10:00:50 38 Y
1 15/01/2014 10:00:55 10 Y
```
And I would like to get the following output:
```
VID START_DATE_TIME END_DATE_TIME MAX_SPEED
--- --------------- ------------- ---------
1 15/01/2014 10:00:15 15/01/2014 10:00:25 45
1 15/01/2014 10:00:35 15/01/2014 10:00:50 38
```
Here is the table creation script:
```
CREATE TABLE veh_speed(vid NUMBER(3),
date_time DATE,
speed NUMBER(3),
status CHAR(1));
INSERT ALL
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:05', 'dd/mm/yyyy hh24:mi:ss'), 0, 'N')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:10', 'dd/mm/yyyy hh24:mi:ss'), 10, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:15', 'dd/mm/yyyy hh24:mi:ss'), 30, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:20', 'dd/mm/yyyy hh24:mi:ss'), 35, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:25', 'dd/mm/yyyy hh24:mi:ss'), 45, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:27', 'dd/mm/yyyy hh24:mi:ss'), 10, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:29', 'dd/mm/yyyy hh24:mi:ss'), 0, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:30', 'dd/mm/yyyy hh24:mi:ss'), 20, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:35', 'dd/mm/yyyy hh24:mi:ss'), 32, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:40', 'dd/mm/yyyy hh24:mi:ss'), 33, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:45', 'dd/mm/yyyy hh24:mi:ss'), 35, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:50', 'dd/mm/yyyy hh24:mi:ss'), 38, 'Y')
INTO veh_speed VALUES(1, to_date('15/01/2014 10:00:55', 'dd/mm/yyyy hh24:mi:ss'), 10, 'Y')
SELECT * FROM dual;
```
I hope I made my question clear.
Thanks in advance. | You can use analytic functions to group your records into blocks where the speed is 30 or more:
```
select vid, date_time, speed, status,
case when speed >= 30 then 30 else 0 end as speed_limit,
row_number() over (partition by vid order by date_time)
- row_number() over (
partition by vid, case when speed >= 30 then 30 else 0 end
order by date_time) as chain
from veh_speed;
VID DATE_TIME SPEED STATUS SPEED_LIMIT CHAIN
---------- ------------------- ---------- ------ ----------- ----------
1 15/01/2014 10:00:05 0 N 0 0
1 15/01/2014 10:00:10 10 Y 0 0
1 15/01/2014 10:00:15 30 Y 30 2
1 15/01/2014 10:00:20 35 Y 30 2
1 15/01/2014 10:00:25 45 Y 30 2
1 15/01/2014 10:00:27 10 Y 0 3
1 15/01/2014 10:00:29 0 Y 0 3
1 15/01/2014 10:00:30 20 Y 0 3
1 15/01/2014 10:00:35 32 Y 30 5
1 15/01/2014 10:00:40 33 Y 30 5
1 15/01/2014 10:00:45 35 Y 30 5
1 15/01/2014 10:00:50 38 Y 30 5
1 15/01/2014 10:00:55 10 Y 0 7
```
I can't take credit for the trick of using two `row_number()` calls to generate chains of records; I picked that up somewhere (possibly [here](https://stackoverflow.com/a/4324654/266304)). The actual value of `chain` doesn't matter, just that it is unique within each `vid` and the same for all records in a contiguous block of records matching your criteria.
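If it helps to see the tandem row-numbering trick outside SQL, here is a small Python sketch of my own (not part of the original answer) applying the same difference-of-row-numbers idea to the sample speeds:

```python
# Difference-of-row-numbers "gaps and islands" trick in plain Python.
# Each row is a (seconds, speed) sample; flag is 1 when speed >= 30.
rows = [(5, 0), (10, 10), (15, 30), (20, 35), (25, 45), (27, 10),
        (29, 0), (30, 20), (35, 32), (40, 33), (45, 35), (50, 38), (55, 10)]

per_flag = {}   # running row_number() partitioned by flag
islands = {}    # (flag, chain) -> contiguous rows sharing that chain value
for overall, (t, speed) in enumerate(rows, start=1):
    flag = 1 if speed >= 30 else 0
    per_flag[flag] = per_flag.get(flag, 0) + 1
    chain = overall - per_flag[flag]        # constant within each island
    islands.setdefault((flag, chain), []).append((t, speed))

# Keep only the "fast" islands; report start, end and max speed of each.
result = [(grp[0][0], grp[-1][0], max(s for _, s in grp))
          for (flag, _), grp in sorted(islands.items()) if flag == 1]
print(result)  # [(15, 25, 45), (35, 50, 38)]
```

The chain values themselves are as arbitrary as in the SQL version; only their constancy within a contiguous block matters.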
You're only interested in the chains of related records where the 'speed limit' was 30 (and that could just as easily be a Y/N flag or whatever), so you can use that to filter out the chains where the speed was less than 30, and then use normal aggregate functions to get what you want:
```
select vid,
min(date_time) as start_date_time,
max(date_time) as end_date_time,
max(speed) as max_speed
from (
select vid, date_time, speed, status,
case when speed >= 30 then 30 else 0 end as speed_limit,
row_number() over (partition by vid order by date_time)
- row_number() over (
partition by vid, case when speed >= 30 then 30 else 0 end
order by date_time) as chain
from veh_speed
)
where speed_limit = 30
group by vid, chain
order by vid, start_date_time;
VID START_DATE_TIME END_DATE_TIME MAX_SPEED
---------- ------------------- ------------------- ----------
1 15/01/2014 10:00:15 15/01/2014 10:00:25 45
1 15/01/2014 10:00:35 15/01/2014 10:00:50 38
```
[SQL Fiddle](http://sqlfiddle.com/#!4/748f3/1). | This problem is well-known as start-of-group, you can google this.
The generic approach is:
a) identify criteria that distinguish the rows satisfying the condition from the others
b) sort them in the correct order
c) make a group column for each period, to split them in time
d) group them.
Just as an example, for this particular case:
```
SQL> select vid, min(date_time) start_time, max(date_time) end_time, max(speed) max_speed
2 from (
3 select vid, date_time,
4 date_time - (row_number() over(partition by vid order by date_time))*speed_sign*5/24/3600 group_time, speed_sign, speed
5 from (
6 select vid, date_time, decode(sign(speed-30),0,1,sign(speed-30)) speed_sign , speed
7 from veh_speed order by date_time
8 )) where speed_sign > 0
9 group by vid, group_time
10 /
VID START_TIME          END_TIME            MAX_SPEED
--- ------------------- ------------------- ---------
  1 15.01.2014 10:00:15 15.01.2014 10:00:25        45
  1 15.01.2014 10:00:35 15.01.2014 10:00:50        38
``` | Grouping the records on a specific criteria and to find the maximum value | [
"",
"sql",
"oracle",
"oracle11g",
"gaps-and-islands",
""
] |
I'm trying to select all product names and how many of each has been sold on the current date, but I'm having a problem since not all products are sold every day. (When a product has not been sold, it must return 0.)
TABLE PRODUCTS
```
ID NAME
1 APPLE
2 PINEAPPLE
3 COFFE
```
TABLE SALES
```
ID DATE
1 2014-01-13
2 2014-01-13
```
TABLE PRODUCTS\_AND\_SALES
```
SALE_ID PRODUCT_ID AMOUNT
1 3 2
1 1 1
2 3 1
```
What I expect to receive:
```
PRODUCT AMOUNT
APPLE 1
PINEAPPLE 0
COFFE 3
```
What I receive:
```
PRODUCT AMOUNT
APPLE 1
COFFE 3
```
My query:
```
select product, sum(amount) from products
join products_and_sales using (product_id)
join sales using (sale_id)
where date(dt_sale) = curdate()
group by product_id;
``` | try this
```
select name as PRODUCT , ifnull(sum(AMOUNT),0) amount from products p
left join PRODUCTS_AND_SALES ps
on p.id = ps.PRODUCT_ID
group by product
```
[**DEMO HERE**](http://sqlfiddle.com/#!2/d0851/5)
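The same LEFT JOIN + IFNULL idea can also be checked locally; here is a quick SQLite-from-Python run of my own using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INT, name TEXT);
    CREATE TABLE products_and_sales (sale_id INT, product_id INT, amount INT);
    INSERT INTO products VALUES (1,'APPLE'), (2,'PINEAPPLE'), (3,'COFFE');
    INSERT INTO products_and_sales VALUES (1,3,2), (1,1,1), (2,3,1);
""")

# LEFT JOIN keeps unsold products; IFNULL turns their missing SUM into 0.
rows = conn.execute("""
    SELECT p.name, IFNULL(SUM(ps.amount), 0) AS amount
    FROM products p
    LEFT JOIN products_and_sales ps ON p.id = ps.product_id
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
print(rows)  # [('APPLE', 1), ('COFFE', 3), ('PINEAPPLE', 0)]
```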
EDIT:
If you want to use a specific date, then use this:
```
select name as PRODUCT ,if(date = '2014-01-13' ,ifnull(sum(AMOUNT),0), 0 ) amount
from products p
left join PRODUCTS_AND_SALES ps on p.id = ps.PRODUCT_ID
left join SALES s on s.id = ps.SALE_ID
group by product
```
[**DEMO HERE**](http://sqlfiddle.com/#!2/ca7cc/1)
Just replace the date `2014-01-13` with the date you want (e.g. `curdate()`). | Use an OUTER JOIN instead of an INNER JOIN.
If you show the query you use, we will correct it for you.
"",
"mysql",
"sql",
"select",
"product",
""
] |
Given the table below:
```
+----+---------+-----------+-------------+-------+
| ID | NAME | LAST NAME | PHONE | STATE |
+----+---------+-----------+-------------+-------+
| 1 | James | Vangohg | 04333989878 | NULL |
| 2 | Ashly | Baboon | 09898788909 | NULL |
| 3 | James | Vangohg | 04333989878 | NULL |
| 4 | Ashly | Baboon | 09898788909 | NULL |
| 5 | Michael | Foo | 02933889990 | NULL |
| 6 | James | Vangohg | 04333989878 | NULL |
+----+---------+-----------+-------------+-------+
```
I want to use MS SQL to find and update duplicates (based on name, last name and phone), but only the earlier one(s). So the desired result for the above table is:
```
+----+---------+-----------+-------------+-------+
| ID | NAME | LAST NAME | PHONE | STATE |
+----+---------+-----------+-------------+-------+
| 1 | James | Vangohg | 04333989878 | DUPE |
| 2 | Ashly | Baboon | 09898788909 | DUPE |
| 3 | James | Vangohg | 04333989878 | DUPE |
| 4 | Ashly | Baboon | 09898788909 | NULL |
| 5 | Michael | Foo | 02933889990 | NULL |
| 6 | James | Vangohg | 04333989878 | NULL |
+----+---------+-----------+-------------+-------+
``` | This query uses a CTE to apply a row number, where any number > 1 is a dupe of the row with the highest ID.
```
;WITH x AS
(
SELECT ID,NAME,[LAST NAME],PHONE,STATE,
ROW_NUMBER() OVER (PARTITION BY NAME,[LAST NAME],PHONE ORDER BY ID DESC) AS rn
FROM dbo.YourTable
)
UPDATE x SET STATE = CASE rn WHEN 1 THEN NULL ELSE 'DUPE' END;
```
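The `ROW_NUMBER` partitioning can be previewed with a plain `SELECT` before running the `UPDATE`. Here is a quick sketch in SQLite (3.25+ for window functions), using stand-in data mirroring the question's table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER, name TEXT, last_name TEXT, phone TEXT);
INSERT INTO t VALUES
  (1,'James','Vangohg','04333989878'), (2,'Ashly','Baboon','09898788909'),
  (3,'James','Vangohg','04333989878'), (4,'Ashly','Baboon','09898788909'),
  (5,'Michael','Foo','02933889990'),   (6,'James','Vangohg','04333989878');
""")

# rn = 1 marks the latest copy in each (name, last_name, phone) group;
# every other copy would be flagged DUPE.
rows = con.execute("""
  SELECT id,
         CASE ROW_NUMBER() OVER (PARTITION BY name, last_name, phone
                                 ORDER BY id DESC)
              WHEN 1 THEN NULL ELSE 'DUPE' END
  FROM t
  ORDER BY id
""").fetchall()
print(rows)  # ids 1-3 -> 'DUPE', ids 4-6 -> None
```

This matches the desired output table in the question: only the highest-ID copy of each trio stays NULL.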
Of course, I see no reason to actually update the table with this information; every time the table is touched, this data is stale and the query must be re-applied. Since you can derive this information at run-time, this should be part of a query, not constantly updated in the table. IMHO. | Try this statement.
**LAST UPDATE:**
```
update t1
set
t1.STATE = 'DUPE'
from
TableName t1
join
(
select name, last_name, phone, max(id) as id, count(id) as cnt
from
TableName
group by name, last_name, phone
having count(id) > 1
) t2 on ( t1.name = t2.name and t1.last_name = t2.last_name and t1.phone = t2.phone and t1.id < t2.id)
``` | Find and update specific duplicates in MS SQL | [
"",
"sql",
"sql-server",
"database",
"duplicates",
""
] |
I have Table T1 with Column C1,C2 and Table T2 with Column C3,C4,C5. I would like delete records from T1 where C1 = C3 AND C2 = C4 and C5 = '123'. What will be the query I tried following
```
DELETE FROM T1 WHERE (C1,C2) = SELECT (C3,C4) FROM T2 WHERE C5 = '123'
```
but this is not working. | There is no `WHERE (x,y) = (a,b)` syntax in SQL Server, sorry. This is how you perform a delete, based on a join, regardless of how many columns are involved in the join:
```
DELETE t1
FROM t1
INNER JOIN t2
ON t1.c1 = t2.c3
AND t1.c2 = t2.c4
WHERE t2.c5 = '123';
``` | You can do this using `exists`:
```
DELETE FROM T1
WHERE exists (SELECT 1 FROM T2 WHERE T2.C5 = '123' and T2.C3 = T1.C1 and T2.C4 = T1.C2)
```
In general, using `exists` is better than using `in` with a subquery, because NULLs can cause the latter to behave in strange ways.
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I was writing some tasks yesterday and it struck me that I don't really know **THE PROPER and ACCEPTED way** of checking if a row exists in a table when I'm using PL/SQL.
For example's sake, let's use the table:
```
PERSON (ID, Name);
```
Obviously I **can't do** (unless there's some secret method) something like:
```
BEGIN
IF EXISTS SELECT id FROM person WHERE ID = 10;
-- do things when exists
ELSE
-- do things when doesn't exist
END IF;
END;
```
So my standard way of solving it was:
```
DECLARE
tmp NUMBER;
BEGIN
SELECT id INTO tmp FROM person WHERE id = 10;
--do things when record exists
EXCEPTION
WHEN no_data_found THEN
--do things when record doesn't exist
END;
```
However, I don't know if it's the accepted way of doing it, or if there's any better way of checking. I would really appreciate it if someone could share their wisdom with me. | I wouldn't push regular code into an exception block. Just check whether any rows exist that meet your condition, and proceed from there:
```
declare
any_rows_found number;
begin
select count(*)
into any_rows_found
from my_table
where rownum = 1 and
... other conditions ...
if any_rows_found = 1 then
...
else
...
end if;
end;
``` | IMO code with a stand-alone SELECT used to check to see if a row exists in a table is not taking proper advantage of the database. In your example you've got a hard-coded ID value but that's not how apps work in "the real world" (at least not in *my* world - yours may be different :-). In a typical app you're going to use a cursor to find data - so let's say you've got an app that's looking at invoice data, and needs to know if the customer exists. The main body of the app might be something like
```
FOR aRow IN (SELECT * FROM INVOICES WHERE DUE_DATE < TRUNC(SYSDATE)-60)
LOOP
-- do something here
END LOOP;
```
and in the `-- do something here` you want to find if the customer exists, and if not print an error message.
One way to do this would be to put in some kind of singleton SELECT, as in
```
-- Check to see if the customer exists in PERSON
BEGIN
SELECT 'TRUE'
INTO strCustomer_exists
FROM PERSON
WHERE PERSON_ID = aRow.CUSTOMER_ID;
EXCEPTION
WHEN NO_DATA_FOUND THEN
strCustomer_exists := 'FALSE';
END;
IF strCustomer_exists = 'FALSE' THEN
DBMS_OUTPUT.PUT_LINE('Customer does not exist!');
END IF;
```
but IMO this is relatively slow and error-prone. IMO a Better Way (tm) to do this is to incorporate it in the main cursor:
```
FOR aRow IN (SELECT i.*, p.ID AS PERSON_ID
FROM INVOICES i
LEFT OUTER JOIN PERSON p
ON (p.ID = i.CUSTOMER_PERSON_ID)
WHERE DUE_DATE < TRUNC(SYSDATE)-60)
LOOP
-- Check to see if the customer exists in PERSON
IF aRow.PERSON_ID IS NULL THEN
DBMS_OUTPUT.PUT_LINE('Customer does not exist!');
END IF;
END LOOP;
```
This code counts on PERSON.ID being declared as the PRIMARY KEY on PERSON (or at least as being NOT NULL); the logic is that if the PERSON table is outer-joined to the query, and the PERSON\_ID comes up as NULL, it means no row was found in PERSON for the given CUSTOMER\_ID because PERSON.ID must have a value (i.e. is at least NOT NULL).
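The NULL-detection trick can be seen in miniature with SQLite; the table and row contents below are hypothetical stand-ins:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE invoice (id INTEGER PRIMARY KEY, customer_person_id INTEGER);
INSERT INTO person  VALUES (1, 'Alice');
INSERT INTO invoice VALUES (10, 1), (11, 99);  -- 99 has no PERSON row
""")

# Outer join: p.id comes back NULL exactly when the customer is missing.
rows = con.execute("""
  SELECT i.id, p.id
  FROM invoice i
  LEFT OUTER JOIN person p ON p.id = i.customer_person_id
  ORDER BY i.id
""").fetchall()
print(rows)  # [(10, 1), (11, None)]
```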
Share and enjoy. | Proper way of checking if row exists in table in PL/SQL block | [
"",
"sql",
"oracle",
"select",
"plsql",
""
] |
Suppose I have the following table:
Table USERS\_GROUPS
```
USER_ID | GROUP_ID
100 1
101 1
101 2
102 1
102 2
102 3
103 1
103 2
103 3
```
I need to select only those users who has all groups (1, 2 and 3) i.e.
Query result:
```
USER_ID
102
103
```
How to compose such sql query? | The most flexible way to structure such a query is using `group by` and `having`. If you want those three specific groups:
```
select ug.user_id
from users_groups ug
group by ug.user_id
having sum(case when group_id = 1 then 1 else 0 end) > 0 and
sum(case when group_id = 2 then 1 else 0 end) > 0 and
sum(case when group_id = 3 then 1 else 0 end) > 0 ;
```
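Against the question's sample rows, the first form can be sanity-checked with SQLite (the conditional aggregation ports unchanged):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users_groups (user_id INTEGER, group_id INTEGER);
INSERT INTO users_groups VALUES
  (100,1),(101,1),(101,2),(102,1),(102,2),(102,3),(103,1),(103,2),(103,3);
""")

# One SUM(CASE ...) per required group; a user passes only if all are > 0.
rows = con.execute("""
  SELECT user_id
  FROM users_groups
  GROUP BY user_id
  HAVING SUM(CASE WHEN group_id = 1 THEN 1 ELSE 0 END) > 0
     AND SUM(CASE WHEN group_id = 2 THEN 1 ELSE 0 END) > 0
     AND SUM(CASE WHEN group_id = 3 THEN 1 ELSE 0 END) > 0
  ORDER BY user_id
""").fetchall()
print(rows)  # [(102,), (103,)]
```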
If you want users that are in all groups in the table:
```
select ug.user_id
from users_groups ug
group by ug.user_id
having count(distinct ug.group_id) = (select count(distinct group_id) from user_groups);
``` | You can use a combination of `WHERE`, `GROUP BY` and `HAVING` to get the result. The `WHERE` clause will include the list of the `group_ids` that you want. You will apply the `GROUP BY` clause to your `user_id` column and finally you will use the `HAVING` clause to get a count of the distinct `group_ids` - this count should match the number of ids that you have in the `WHERE`:
```
select user_id
from USERS_GROUPS
where group_id in (1, 2, 3)
group by user_id
having count(distinct group_id) = 3;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/6e6d7/1) | How to select rows with given set of values | [
"",
"sql",
""
] |
I'm trying to limit results in a table to records with dates that don't overlap my data. As you can see in the screenshot below I'm trying to add a clause to filter out records that are equal to the "end" column. See the final line in the query for that.
I can't figure out why the results still show the record in the screenshot. Can someone help me out by explaining that? It's probably a syntax thing?
 | You basically have:
```
a OR b AND c AND d AND e AND f
```
... and you probably want:
```
(a OR b) AND c AND d AND e AND f
```
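The difference is easy to demonstrate: because `AND` binds tighter than `OR`, `1 OR 0 AND 0` evaluates to `1`, while the parenthesized `(1 OR 0) AND 0` evaluates to `0`. A quick check via SQLite (MySQL's precedence rules agree here):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# AND is evaluated first: 1 OR (0 AND 0) -> 1
no_parens = con.execute("SELECT 1 OR 0 AND 0").fetchone()[0]

# Parentheses force the OR first: (1 OR 0) AND 0 -> 0
parens = con.execute("SELECT (1 OR 0) AND 0").fetchone()[0]

print(no_parens, parens)  # 1 0
```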
Reference: [Operator Precedence](http://dev.mysql.com/doc/refman/5.5/en/operator-precedence.html) | `AND` has a higher [operator precedence](http://dev.mysql.com/doc/refman/5.0/en/operator-precedence.html) than `OR` has.
That means that your `AND` clauses will be interpreted first so that at the end there's something like
```
where (...) or (... and ... and ...)
```
Since the first condition before the `OR` (`('2014-01-14 18:30:00' between start and end)`) is met, your row shows up. Put both sides of the `OR` clause in parentheses and it should work as you want to. | Why is this "where" clause not limiting sql results | [
"",
"mysql",
"sql",
"select",
"operator-precedence",
""
] |
I'm trying to write a report to get a breakdown of all the ethnicities in my system by gender.
I have this query that I thought was working, but all of the dates are the same in the query, which they are NOT in the individual tables. I think the `group_by` is causing an issue, but I'm not 100% sure, and I'm not sure how to properly write this query.
```
SELECT f1.field_name, count(*) AS total
FROM application_fields f1
JOIN application_fields_values v1 ON v1.application_field_id = f1.id
JOIN application_fields_values v2 ON v1.applicant_id = v2.applicant_id
JOIN application_fields f2 ON v2.application_field_id = f2.id
JOIN templates t ON f1.template_id = t.id
JOIN template_fields tf ON f1.template_field_id = tf.id
WHERE v1.field_value = 1
AND v2.field_value = 1
AND f2.field_name = 'Male'
AND f1.field_name != 'Male'
-- AND f1.created_at BETWEEN '2014-01-01' AND '2014-12-31'
AND tf.id IN (9, 10, 11, 12, 13, 14, 15)
GROUP BY f1.field_name
ORDER BY f1.id;
```
This outputs:
```
FIELD_NAME CREATED_AT CREATED_AT TOTAL
Hispanic or Latino. 2013-06-20 02:35:56 2013-06-20 02:35:56 6
Caucasion. 2013-06-20 02:35:56 2013-06-20 02:35:56 26
African American. 2013-06-20 02:35:56 2013-06-20 02:35:56 25
Native Hawaiian. 2013-06-20 02:35:56 2013-06-20 02:35:56 1
Asian. 2013-06-20 02:35:56 2013-06-20 02:35:56 2
American Indian. 2013-06-20 02:35:56 2013-06-20 02:35:56 2
Two or more races . 2013-06-20 02:35:56 2013-06-20 02:35:56 1
```
I want to be able to filter this by date (as you can see in my commented out line) but that's not working correctly since all the dates are the same in this query when they most definitely are not in the tables themselves. | It seems that I was looking at the wrong tables `created_at` column in the between clause.
```
AND f1.created_at BETWEEN '2014-01-01' AND '2014-12-31'
```
should have been
```
AND v1.created_at BETWEEN '2014-01-01' AND '2014-12-31'
``` | Each non-aggregate field in your select list should be included in your `GROUP BY`:
```
SELECT f1.field_name, f1.created_at, f2.created_at, count(*) AS total
FROM application_fields f1
JOIN application_fields_values v1 ON v1.application_field_id = f1.id
JOIN application_fields_values v2 ON v1.applicant_id = v2.applicant_id
JOIN application_fields f2 ON v2.application_field_id = f2.id
JOIN templates t ON f1.template_id = t.id
JOIN template_fields tf ON f1.template_field_id = tf.id
WHERE v1.field_value = 1
AND v2.field_value = 1
AND f2.field_name = 'Male'
AND f1.field_name != 'Male'
-- AND f1.created_at BETWEEN '2014-01-01' AND '2014-12-31'
AND tf.id IN (9, 10, 11, 12, 13, 14, 15)
GROUP BY f1.field_name, f1.created_at, f2.created_at
ORDER BY f1.id;
```
MySQL does not require that all fields be included in the `GROUP BY`, but without them the returned values are arbitrary.
If there are multiple values for the fields you haven't included in your `GROUP BY` then rather than include them in your `GROUP BY` you may need to use an aggregate function on them, ie:
```
SELECT f1.field_name, MAX(f1.created_at), MAX(f2.created_at), count(*) AS total
FROM application_fields f1
JOIN application_fields_values v1 ON v1.application_field_id = f1.id
JOIN application_fields_values v2 ON v1.applicant_id = v2.applicant_id
JOIN application_fields f2 ON v2.application_field_id = f2.id
JOIN templates t ON f1.template_id = t.id
JOIN template_fields tf ON f1.template_field_id = tf.id
WHERE v1.field_value = 1
AND v2.field_value = 1
AND f2.field_name = 'Male'
AND f1.field_name != 'Male'
-- AND f1.created_at BETWEEN '2014-01-01' AND '2014-12-31'
AND tf.id IN (9, 10, 11, 12, 13, 14, 15)
GROUP BY f1.field_name
ORDER BY f1.id;
``` | Group by query causing incorrect data to appear | [
"",
"mysql",
"sql",
""
] |
I need this select query to return the sum of the same column based on different where clauses. Basically just looking for a way to merge these multiple select queries. Please see below. Any help is very much appreciated!
```
DECLARE @prEndDate smalldatetime
SET @prEndDate='2014-01-05'
SELECT Employee, SUM(Amount) as Deduction
FROM bPRDT
WHERE PREndDate=@prEndDate
AND EDLCode=100 AND EDLType='D'
GROUP BY Employee
SELECT Employee, SUM(Amount) as DeductionPlus
FROM bPRDT
WHERE PREndDate=@prEndDate
AND EDLCode=101 AND EDLType='D'
GROUP BY Employee
SELECT Employee, SUM(Amount) as Match
FROM bPRDT
WHERE PREndDate=@prEndDate
AND EDLCode=600 AND EDLType='L'
GROUP BY Employee
SELECT Employee, SUM(Amount) as MatchPlus
FROM bPRDT
WHERE PREndDate=@prEndDate
AND EDLCode=601 AND EDLType='L'
GROUP BY Employee
```
Finally fixed. Here's what it ended up being:
```
DECLARE @prEndDate smalldatetime
SET @prEndDate ='2013-12-29'
SELECT
REPLACE(c.SSN,'-',''),
SUM(CASE WHEN a.EDLType='D' AND a.Amount > 0 AND (a.EDLCode=100 OR a.EDLCode=101) THEN a.Amount ELSE 0 END) AS Deferral,
SUM(CASE WHEN a.EDLType='L' AND a.Amount > 0 AND (a.EDLCode=600 OR a.EDLCode=601) THEN a.Amount ELSE 0 END) AS EmployerMatch,
SUM(CASE WHEN a.EDLType='D' AND a.Amount > 0 AND (a.EDLCode=100 OR a.EDLCode=101) THEN a.SubjectAmt ELSE 0 END) AS Compensation415,
SUM(CASE WHEN a.EDLType='D' AND a.Amount > 0 AND (a.EDLCode=100 OR a.EDLCode=101) THEN a.SubjectAmt ELSE 0 END) AS PlanFullYearCompensation,
SUM(CASE WHEN a.EDLType='E' AND a.Amount > 0 AND (a.EDLCode=1 OR a.EDLCode=2 OR a.EDLCode=3) THEN a.Hours ELSE 0 END) AS PlanHours
FROM bPRDT a
JOIN (SELECT DISTINCT SSN, Employee FROM bPREH) c ON a.Employee=c.Employee
WHERE a.PREndDate=@prEndDate
GROUP BY c.SSN
ORDER BY Deferral DESC, c.SSN ASC
``` | You could use `UNION` or `CASE WHEN` clauses.
```
SELECT
Employee,
SUM(CASE WHEN EDLCode=100 AND EDLType='D' THEN Amount ELSE 0 END) as Deduction,
SUM(CASE WHEN EDLCode=101 AND EDLType='D' THEN Amount ELSE 0 END) as DeductionPlus,
SUM(CASE WHEN EDLCode=600 AND EDLType='L' THEN Amount ELSE 0 END) as Match,
SUM(CASE WHEN EDLCode=601 AND EDLType='L' THEN Amount ELSE 0 END) as MatchPlus
FROM bPRDT
WHERE PREndDate=@prEndDate
GROUP BY Employee
``` | You can move the condition out of the `WHERE` clause and in to a `CASE` statement, so you only sum the rows that interest you:
```
DECLARE @prEndDate smalldatetime
SET @prEndDate='2014-01-05'
SELECT Employee,
SUM(CASE WHEN EDLCode=100 AND EDLType='D' THEN amount ELSE 0 END)
AS Deduction,
SUM(CASE WHEN EDLCode=101 AND EDLType='D' THEN amount ELSE 0 END)
AS DeductionPlus,
SUM(CASE WHEN EDLCode=600 AND EDLType='L' THEN amount ELSE 0 END)
AS Match,
SUM(CASE WHEN EDLCode=601 AND EDLType='L' THEN amount ELSE 0 END)
AS MatchPlus
FROM bPRDT
WHERE PREndDate = @prEndDate
GROUP BY Employee
``` | Multiple Sums off Same Column | [
"",
"sql",
"sum",
"aggregate-functions",
"where-clause",
""
] |
I'm looking for a query capable of selecting from a single table in such a way that consecutive records for which an attribute is equal are collapsed together. Similar to group by, but instead of grouping every occurrence of the attribute together, I want one group for each consecutive range.
Example table:
```
+-----+-----+
|order|group|
+-----+-----+
|1 |aaa |
+-----+-----+
|2 |aaa |
+-----+-----+
|3 |bbb |
+-----+-----+
|4 |aaa |
+-----+-----+
|5 |aaa |
+-----+-----+
|6 |aaa |
+-----+-----+
|7 |ccc |
+-----+-----+
|8 |aaa |
+-----+-----+
```
Example desired result:
```
+-----+-------------------+
|group|group_concat(order)|
+-----+-------------------+
|aaa |1,2 |
+-----+-------------------+
|bbb |3 |
+-----+-------------------+
|aaa |4,5,6 |
+-----+-------------------+
|ccc |7 |
+-----+-------------------+
|aaa |8 |
+-----+-------------------+
```
I can't use stored procedures.
I have a vague notion I will need at least one level of nesting for sorting the table (probably more in total), and probably have to use variables, but no more than that. Please let me know if you need further details.
EDIT: Queries for creating example:
```
create temporary table tab (
ord int,
grp varchar(8)
);
insert into tab (ord, grp) values
(1, 'aaa'),
(2, 'aaa'),
(3, 'bbb'),
(4, 'aaa'),
(5, 'aaa'),
(6, 'aaa'),
(7, 'ccc'),
(8, 'aaa');
``` | Could you try this? You can test here <http://www.sqlfiddle.com/#!2/57967/12>.
```
Select grp_new, group_concat(ord)
From (
Select ord, if(grp = @prev, @seq, @seq := @seq + 1) as seq,
if(grp = @prev, grp, @prev := grp) as grp_new
From tab, (SELECT @seq := 0, @prev := '') AS init
Order by ord
) x
Group by grp_new, seq;
```
The key idea is to generate the same `seq` for each consecutive run of the same group, as follows.
```
Select
ord, if(grp = @prev, @seq, @seq := @seq + 1) as seq,
if(grp = @prev, grp, @prev := grp) as grp_new
From tab, (SELECT @seq := 0, @prev := '') AS init
Order by ord
```
then finally grouping with `GROUP BY grp_new, seq`, which differentiates consecutive runs even when they share the same `grp`.
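As a side note, on engines with window functions (MySQL 8.0+, SQLite 3.25+) the same islands can be built without session variables: `LAG` spots each boundary and a running `SUM` numbers the runs. A sketch against the question's sample table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab (ord INTEGER, grp TEXT);
INSERT INTO tab VALUES
  (1,'aaa'),(2,'aaa'),(3,'bbb'),(4,'aaa'),(5,'aaa'),(6,'aaa'),(7,'ccc'),(8,'aaa');
""")

# A boundary is wherever grp differs from the previous row;
# the running sum of boundaries numbers each consecutive island.
rows = con.execute("""
  SELECT grp, GROUP_CONCAT(ord)
  FROM (
    SELECT ord, grp,
           SUM(chg) OVER (ORDER BY ord) AS island
    FROM (
      SELECT ord, grp,
             CASE WHEN grp = LAG(grp) OVER (ORDER BY ord)
                  THEN 0 ELSE 1 END AS chg
      FROM tab
    )
  )
  GROUP BY island
  ORDER BY MIN(ord)
""").fetchall()
print(rows)  # islands: aaa {1,2}, bbb {3}, aaa {4,5,6}, ccc {7}, aaa {8}
```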
EDIT: To get exactly the result in the example:
```
Select grp_new, group_concat(ord order by ord)
From (
Select ord, if(grp = @prev, @seq, @seq := @seq + 1) as seq,
if(grp = @prev, grp, @prev := grp) as grp_new
From tab, (SELECT @seq := 0, @prev := '') AS init
Order by ord
) x
Group by seq
``` | Use:
```
select pquery.group, GROUP_CONCAT(pquery.order separator ', ') as order FROM
( select t.*, @lastSeq := if( t.group != @lastGroup
, 0, @lastSeq + 1 )
as groupSeq,
@lastGroup := t.group,
@cont := if(@lastSeq =0, @cont + 1, @cont) as counter,
@cgroup := concat(@cont,t.group) as cgroup
from t, ( select @lastGroup := '',
@lastSeq := 0, @cont :=0) sqlVars
) pquery
group by pquery.cgroup
```
Be careful with the variable group_concat_max_len=1024 (the result size limit in bytes). Change it depending on your needs. | Selecting groups of consecutive records with a common attribute? | [
"",
"mysql",
"sql",
""
] |
I have a somewhat complex database structure running that tracks products. Here is a diagram of it generated by MySQL Workbench:

Under this structure I have 3 products that I've added. All three of these products have the attribute `color` and an option of `red`.
I have a sql fiddle set up here: <http://sqlfiddle.com/#!2/68470/4> displaying a query I'm running to try to get the `opt_count` column to say 3 on rows where the `attribute` column is `color` and the `option` column is `red`.
Nearly all the other `opt_count` values are wrong also, so I'm suspecting I am either not grouping by the correct column or I'm approaching this whole problem incorrectly.
How can I get the correct `opt_count` to show for each row? | As others have said, your schema is the problem as you have a many to many relationship (many products may have many options) which makes queries more difficult.
Here is a query that gives you the exact output you asked for. It shows each option, how many unique products that option is assigned to (the COUNT(distinct product\_id)) and provides a comma separated list of the product\_id values that are assigned.
```
SELECT pvo.option,
count(distinct product_id),
group_concat(distinct product_id) products
FROM (`products`)
JOIN `product_variant_combinations` pvc using(`product_id`)
JOIN `product_variants` pv using(`combination_id`)
JOIN `product_variant_ao_relation` pv_ao using(`ao_id`)
JOIN `product_variant_options` pvo using(`option_id`)
JOIN `product_variant_attributes` pva using(`attribute_id`)
group by pvo.option;
```
This is the output for red:
**red *3* 111026,111025,111024**
See here:
<http://sqlfiddle.com/#!2/68470/133>
You asked how to add attribute:
```
SELECT pva.attribute, pvo.option, count(distinct product_id), group_concat(product_id)
FROM (`products`)
JOIN `product_variant_combinations` pvc using(`product_id`)
JOIN `product_variants` pv using(`combination_id`)
JOIN `product_variant_ao_relation` pv_ao using(`ao_id`)
JOIN `product_variant_options` pvo using(`option_id`)
JOIN `product_variant_attributes` pva using(`attribute_id`)
group by pva.attribute, option
```
You must GROUP BY each non-aggregate expression in the SELECT clause. In this case the two aggregate expressions are COUNT and GROUP\_CONCAT, thus, you must GROUP BY pva.attribute, pvo.option
You probably want to find a good SQL tutorial on GROUP BY. | See if this helps
```
SELECT products.product_name
, products.product_id
, pvc.combination_id
, pvc.combination
, pva.attribute
, pvo.option
, COUNT(pvo.option) as opt_count
FROM (`products`)
JOIN `product_variant_combinations` pvc ON `products`.`product_id` = `pvc`.`product_id`
JOIN `product_variants` pv ON `pv`.`combination_id` = `pvc`.`combination_id`
JOIN `product_variant_ao_relation` pv_ao ON `pv_ao`.`ao_id` = `pv`.`ao_id`
JOIN `product_variant_options` pvo ON `pvo`.`option_id` = `pv_ao`.`option_id`
JOIN `product_variant_attributes` pva ON `pva`.`attribute_id` = `pv_ao`.`attribute_id`
GROUP BY 1
```
Returns:
```
| PRODUCT_NAME | PRODUCT_ID | COMBINATION_ID | COMBINATION | ATTRIBUTE | OPTION | OPT_COUNT |
|--------------|------------|----------------|----------------------------------------------------|-----------|--------|-----------|
| Desk | 111025 | 4 | {"color":"Red","material":"Wood"} | color | red | 4 |
| Lamp | 111024 | 1 | {"color":"Red"} | color | red | 3 |
| T shirt | 111026 | 6 | {"color":"Red","size":"Small","material":"Cotton"} | color | red | 18 |
``` | COUNT(table_name.column_name) not giving accurate count. Am I applying GROUP BY on the wrong column? | [
"",
"mysql",
"sql",
"count",
"aggregate-functions",
"product",
""
] |
Basically I am attempting to auto-populate a dropdown box with team member names; this way, in the future, as team members get removed and added, the dropdown will automatically adjust. This is the simple part; the hard part is that I also need “All” to show up in the dropdown box. I can’t seem to find any way to make this happen, so I am looking for either ideas or for someone to tell me it is impossible.
At this point I just have the dropdown RowSource set to ‘Table/Query’ and the query is:
```
SELECT [Team Table].[Team_Member_ID], [Team Table].[USER_ID]
FROM [Team Table]
WHERE [Active] = True
ORDER BY [USER_ID] ;
```
I have made several attempts at adding in a `UNION SELECT` or `UNION ALL SELECT`, but I can't seem to get that to run; SQL isn't my strongest suit, so that could just be me.
```
Sub ValueList(ByRef rs As Recordset, _
strReturnColumn As String, _
strCtrlToChange As String, _
strDefaultValueIn As String, _
frm As Form)
Dim strRowSource As String
Dim strDefaultValue As String
strDefaultValue = strDefaultValueIn
strRowSource = ""
rs.MoveFirst
'If there is a Default Value add it to the RowSource
If strDefaultValue <> "" Then
strRowSource = strRowSource & strDefaultValue & ";"
End If
'Add all results onto the RowSource
Do While Not rs.EOF
If CStr(rs.Fields(strReturnColumn) & "") <> "" Then
strRowSource = strRowSource & CStr(rs.Fields(strReturnColumn) & "") & ";"
End If
rs.MoveNext
Loop
'Set Rowsource and default value
frm.Controls(strCtrlToChange).RowSource = strRowSource
frm.Controls(strCtrlToChange).DefaultValue = "'" & strDefaultValue & "'"
End Sub
``` | Use a union
```
SELECT 0 AS [Team_Member_ID], 'All' AS [USER_ID]
UNION
SELECT [team table].[team_member_id], [team table].[user_id]
FROM [team table]
WHERE [active] = true
ORDER BY [user_id];
``` | How do I concatenate a string onto an SQL query in Access? | [
"",
"sql",
"ms-access",
"ms-access-2010",
""
] |
I've been researching how to convert a date string that I have in my flat file, while also specifying a time. I found results for converting through a derived column in SSIS using DT_DBTIMESTAMP. But in my situation, I need to add in a time of 9 AM (9:00:00.000) as well, since the flat file source doesn't have it.
So in my situation, I have this example from the flat string:
```
5/9/80
```
I would like it to be in SSIS as the following 1980-05-09 9:00:00.000
Any ideas? | Try this:
```
DATEADD("HH",9,(DT_DBTIMESTAMP)((DT_STR,30,1252)(DT_DBDATE)([Create Date])))
```
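The intent of the expression (parse the two-digit date, then add nine hours) can be sketched in plain Python terms; note the two-digit-year pivot below is Python's `%y` rule, not SSIS's:

```python
from datetime import datetime, timedelta

raw = "5/9/80"
# %y pivots two-digit years (69-99 -> 19xx in Python's strptime).
stamp = datetime.strptime(raw, "%m/%d/%y") + timedelta(hours=9)
print(stamp)  # 1980-05-09 09:00:00
```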
[Create Date]= 5/9/80 | Use a Data Conversion to first change the date to DT_DBTIMESTAMP, then use a Derived Column with the following expression:
```
DATEADD("Hh",9,[COLUMN])
```
Obviously changing the column to the one in your dataflow. This should add 9 hours onto the timestamp. | SSIS: convert date to datetimestamp with specified time | [
"",
"sql",
"ssis",
"substring",
""
] |
I need to store a set of flags related to an entity in the database. `Flags` might not be the best word, because these are not binary information (on/off) but rather a to-be-defined set of codes.
Normally, you would store each information (say each flag value) in a distinct column, but I'm exploring opportunities for storing such information in data structures different than one-column-for-each-attribute to prevent a dramatic increase in column mappings. Since each flag is valid for each attribute of an entity, you understand that for large entities that **intrinsically** require a large number of columns the total number of columns may grow as 2n.
Eventually, these codes can be mapped to a positional string.
I'm thinking about something like: `02A` not being interpreted as `dec 42` but rather as:
* Flag 0 in position 1 (or zero if you prefer...)
* Flag 2 in position 2
* Flag A in position 3
Data formatted in such a way can be easily processed by high-level *programming* languages, because PL/SQL is out of the scope of the question and all these values are supposed to be processed by Java.
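For illustration only (the question targets Java; Python is used here just for brevity), reading and rewriting such a positional string is a couple of slice operations:

```python
def flag_at(flags: str, position: int) -> str:
    """Return the flag code stored at a 1-based position."""
    return flags[position - 1]

def with_flag(flags: str, position: int, value: str) -> str:
    """Return a copy of the flag string with one position replaced."""
    return flags[:position - 1] + value + flags[position:]

flags = "02A"
print(flag_at(flags, 3))         # 'A'
print(with_flag(flags, 2, "7"))  # '07A'
```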
## Now the real problem
One of my specs is to optimize searching. I have been required to find a way (say, an efficient way) to search for entities that show a certain flag (or a special `0` flag) in a given position.
Normally, in SQL, given the RDBMS-specific substring function, you would
```
SELECT * FROM ENTITIES WHERE SUBSTRING(FLAGS,{POSITION},1) = {VALUE};
```
This works, but I'm afraid it may be a little slow on all platforms but Oracle, which, AFAIK, supports creating secondary indexes mapped to a substring.
However, my solution **must** work in MySQL, Oracle, SQL Server and DB2 thanks to Hibernate.
Given such a design, is there some, possibly cross-platform, indexing strategy that I'm missing? | If performance is an issue I would go for a some different model here.
Say a table that store entities and a relation 1->N to another table (say: flags table: entId(fk), flag, position) and this table would have an index on flag and position.
The issue here would be to get these flags back into a single column, which can be done in Java or even on the database (but it would be difficult to have a cross-platform query for this). | If you want a database-independent, reasonable method for storing such flags, then use typical SQL data types. For a binary flag, you can use `bit` or `boolean` (this differs among databases). For other flags, you can use `tinyint` or `smallint`.
Doing bit-fiddling is *not* going to be portable. If nothing else, the functions used to extract particular bits from data differ among databases.
Second, if performance is an issue, then you may need to create indexes to avoid full table scans. You can create indexes on normal SQL data types (although some databases may not allow indexes on bits).
It sounds like you are trying to be overly clever. You should first get the application to work using reasonable data structures. Then you will understand where the performance issues are and can work on fixing them. | Storing flags in SQL column, and indexing them | [
"",
"sql",
"multi-database",
""
] |
I have a task management app where the user can create a "waiting for" on a task. If the item that they're waiting for hasn't been received, then the task is still active.
I'm trying to set the "actionable" scope to include the tasks that either:
* Have a waiting\_for where the received\_date is not nil (whatever they were waiting for was received)
OR
* Don't have a waiting for
```
class Task < ActiveRecord::Base
has_many :waiting_fors
def self.actionable
where("snooze_date <= ? OR snooze_date IS ?", Time.now, nil ).where(complete: false)
.includes(:waiting_fors).where("waiting_fors.received_date IS NOT ?", nil)
end
class WaitingFor < ActiveRecord::Base
belongs_to :task
end
```
However, it's filtering down to ONLY show me where an item has a waiting\_for AND that waiting\_for has a received\_date.
Any suggestions as to where I'm going wrong?
UPDATE:
This is what I have now. It's working if a task either has no waiting fors or has one waiting for that was received.
```
scope :not_snoozed, -> {where("snooze_date <= ? OR snooze_date IS ?", Time.now, nil)}
scope :incomplete, -> {where(complete: false)}
scope :no_waiting_fors, -> {includes(:waiting_fors).where(waiting_fors: {id: nil})}
scope :not_received, -> {includes(:waiting_fors).where.not(waiting_fors: {received_date: nil})}
scope :actionable, -> {incomplete.not_snoozed.where(id: (Task.no_waiting_fors+Task.not_received).map(&:id))}
```
However, if a task has TWO waiting\_fors and only ONE has a received date (which means that we're still waiting on something) it's showing up as active. | Try updating your scopes to this, using the NOT IN feature of SQL:
```
scope :received, -> {includes(:waiting_fors).where("tasks.id not in (select tasks.id from tasks inner join waiting_fors on waiting_fors.task_id = tasks.id where waiting_fors.received_date is null)")}
scope :actionable, -> {incomplete.not_snoozed.where(id: (Task.no_waiting_fors+Task.received).map(&:id))}
``` | Do you mean
```
where("waiting_fors.received_date = ?", nil)
```
instead of
```
where("waiting_fors.received_date IS NOT ?", nil)
``` | Rails scope with associations - optionally filtering scope with properties of associated class | [
"",
"sql",
"ruby-on-rails",
""
] |
```
Person Table
-------------------------------------
PersonId | Name | Surname |
-------------------------------------
1 John Smith
2 Sam Rainbow
Address Table
------------------------------------------
AddressId | PersonId | Address |
------------------------------------------
1 1 AAA
2 1 BBB
```
so what I want to do is, If I execute the query below,
```
DELETE Person WHERE PersonId = 1
```
SQL should not allow me to delete from Person table since there are rows related to that Person in Address table.
How can I create this relation? | You need to define a foreign key:
```
ALTER TABLE Address
ADD FOREIGN KEY (PersonId)
REFERENCES Person (PersonId)
``` | Add a foreign key constraint and restrict deletes:
```
ALTER TABLE Address
ADD CONSTRAINT Address_PersonId_fkey
FOREIGN KEY (PersonId)
REFERENCES Person (PersonId)
ON DELETE RESTRICT -- what you asked for
ON UPDATE CASCADE; -- maybe do something different for updates?
``` | Relation between tables | [
"",
"sql",
""
] |
I'm trying to fix up data in a column that stores a number of bit flags in an int.
What's happened is somewhere along the lines, an incorrect flag has been set (6) so I need to prepare a script to fix up the affected records.
I've tried performing some queries to extract data that appears wrong but it's based on assumption and I'm wondering if there's a smarter way to do it.
Some facts:
* The bit that should have been set is `8`, but `6` was used
* The column is currently storing up to 23 bits to represent on/off states for properties (has garden / is furnished / is house / parking etc)
* Some records are affected, some aren't
Considering the bit `6` is invalid is there something clever I can do to pull these records out based on that fact? | I am assuming you are talking about `bit masking` here in this case.
With that assumption, I do not think you will be able to just query the data for a fix. The way `bit masking` works is to add up the values of all the set bits (i.e. convert the binary to an int), so the mask `1001` would be stored as 9.
If you used 6 instead of 8, it would be the same as if the 4 and 2 bits were both set. So your query would also return valid records where the 4 and 2 bits are on.
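A quick plain-Python sketch of that ambiguity (the flag values here are hypothetical):

```python
WRONG_FLAG = 6   # accidentally OR-ed in: binary 110, overlaps the 4 and 2 flags
RIGHT_FLAG = 8   # what should have been set: binary 1000

corrupted  = 1 | WRONG_FLAG   # 0111 == 7
legitimate = 1 | 2 | 4        # 0111 == 7 as well

# The two records are indistinguishable, so a mask test matches both:
print(corrupted == legitimate)                  # True
print((legitimate & WRONG_FLAG) == WRONG_FLAG)  # True -- a false positive
```

So any query on `column & 6` will also match rows that legitimately have the 2 and 4 bits set.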
Using the same example if instead of 8 you accidentally used 6 then `1001` becomes 7 instead but how would you differentiate that from `0111` which would correctly be masked as 7? | Note that bit 31 can be problematic when handling 32-bit signed values. Otherwise:
```
-- Check each bit in an integer.
declare @Sample as Int = 6;
with Bits as (
select Cast( 1 as BigInt ) as BitMask, 0 as BitPosition
union all
select Bitmask * 2, BitPosition + 1
from Bits
where BitPosition < 31 )
select BitPosition, BitMask,
case when Cast( @Sample as BigInt ) & BitMask <> 0 then 'Set' else 'Clear' end as State
from Bits;
-- Play with some sample data in a table.
declare @Samples as Table ( SampleId Int Identity, Number Int );
insert into @Samples ( Number ) values ( 0 ), ( 1 ), ( 2 ), ( 3 ), ( 30 ), ( 65 ), ( 16385 );
select * from @Samples;
-- Clear bit 6 in each row.
update @Samples
set Number &= 2147483647 - Power( 2, 6 );
select * from @Samples;
-- Set bit 6 in each row.
update @Samples
set Number |= Power( 2, 6 );
select * from @Samples;
-- Clear bit 6 in each row again.
update @Samples
set Number &= 2147483647 - Power( 2, 6 );
select * from @Samples;
``` | Fixing the wrong 'bit' | [
"",
"sql",
"sql-server",
"bit-manipulation",
""
] |
I have a fairly complex SQL query - part of which requires looking up a company\_ID value found in the first table to obtain the company\_Name in the second table. The second table may have variants of the company name, but that is OK - I just need the first match.
So, tableA looks something like this (approx 2 dozen columns and many rows)
```
company_ID (CHAR(12))
161012348876
561254435253
103929478273
141567643542
```
tableB looks something like this
```
company_ID (Integer) Company_name
161012348876 Watson & Jones Ltd
161012348876 Watson and Jones
561254435253 Fictional Co. plc
103929478273 Made Up Corp.
161012348876 Watson Jones Ltd
141567643542 Thingymajig Gmbh.
```
This query will return multiple rows for 161012348876. What're good ways just to get one row returned for each matching company\_id (i.e. 4 rows instead of 6)?
```
SELECT *, t2.company_name
FROM tableA t1
JOIN tableB t2 ON t1.company_id = cast(t2.company_id as CHAR(12))
```
I am using Teradata SQL.
Any help much appreciated. | ```
SELECT *, t2.company_name
FROM tableA t1
JOIN tableB t2 ON t1.company_id = cast(t2.company_id as CHAR(12))
GROUP BY t1.company_id
```
Will return 1 row for each unique `t1.company_id` | Instead of user2989408's MAX subquery you can also do a
```
SELECT company_id , company_Name
FROM tableB
QUALIFY ROW_NUMBER() OVER (PARTITION BY company_id ORDER BY company_name) = 1
--if you don't care about MIN/MAX or want a more random result:
QUALIFY COUNT(*) OVER (PARTITION BY company_id ROWS UNBOUNDED PRECEDING) = 1
```
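For readers outside Teradata: the same one-row-per-group idea works in SQLite 3.25+ with `ROW_NUMBER()` in a subquery (SQLite has no QUALIFY). A runnable sketch with the sample data, driven from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableB (company_ID INTEGER, Company_name TEXT)")
conn.executemany("INSERT INTO tableB VALUES (?, ?)", [
    (161012348876, "Watson & Jones Ltd"),
    (161012348876, "Watson and Jones"),
    (561254435253, "Fictional Co. plc"),
    (103929478273, "Made Up Corp."),
    (161012348876, "Watson Jones Ltd"),
    (141567643542, "Thingymajig Gmbh."),
])

# Keep only the first name (alphabetically) per company_ID.
rows = conn.execute("""
    SELECT company_ID, Company_name
    FROM (SELECT company_ID, Company_name,
                 ROW_NUMBER() OVER (PARTITION BY company_ID
                                    ORDER BY Company_name) AS rn
          FROM tableB)
    WHERE rn = 1
""").fetchall()
print(len(rows))  # 4 -- one row per company_ID
```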
But assuming that \*company\_id\* is the PI of tableB the MAX will probably perform better. | How do I just get the first matching row? | [
"",
"sql",
"teradata",
""
] |
I’m in the process of creating a report that will tell end users what percentage of a gridview (the total number of records is a finite number) has been completed in a given month. I have a gridview with records that I’ve imported, and users have to go into each record and update a couple of fields. I’m attempting to create a report that tells me what percentage of the grand total of records was completed in a given month. All I need is the percentage. The grand total (for this example) is 2000.
I’m not sure if the actual gridview information/code is needed here but if it does, let me know and I’ll add it.
The problem is that I have been able to calculate the percentage total, but when it’s displayed the percentage total is repeated for every single line in the table. I’m scratching my head over how to make this result appear only once.
Right now here’s what I have for my SQL code (I use nvarchar because we import from many non windows systems and get all sorts of extra characters and added spaces to our information):
```
Declare @DateCount nvarchar(max);
Declare @DivNumber decimal(5,1);
SET @DivNumber = (.01 * 2541);
SET @DateCount = (SELECT (Count(date_record_entered) FROM dbo.tablename WHERE date_record_entered IS NOT NULL and date_record_entered >= 20131201 AND date_record_entered <= 20131231);
SELECT CAST(ROUND(@DivNumber / @DateCount, 1) AS decimal(5,1) FROM dbo.tablename
```
Let’s say for this example the total number of records in the date\_record\_entered for the month of December is 500.
I’ve tried the smaller pieces of code separately with no success. This is the most recent thing I’ve tried.
I know I'm missing something simple here but I'm not sure what.
**::edit::**
What I’m looking for as the expected result of my query is to have a percentage represented of records modified in a given month. If 500 records were done that would be 25%. I just want to have the 25 (and trailing decimal(s) when it applies) showing once and not 25 showing for every row in this table. | The following query should provide what you are looking for:
```
Declare @DivNumber decimal(5,1);
SET @DivNumber = (.01 * 2541);
SELECT
CAST(ROUND(@DivNumber / Count(date_record_entered), 1) AS decimal(5,1))
FROM dbo.tablename
WHERE date_record_entered IS NOT NULL
and date_record_entered >= 20131201
AND date_record_entered <= 20131231
``` | Why do you select the constant value `cast(round(@divNumber / @DateCount, 1) as decimal(5,1)` from the table? That's the cause of your problem.
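The repetition is easy to reproduce in miniature with SQLite from Python (hypothetical table and dates; column name borrowed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (date_record_entered INTEGER)")
conn.executemany("INSERT INTO tablename VALUES (?)",
                 [(20131205,), (20131210,), (None,), (20140102,)])

# A constant selected FROM the table is repeated once per row:
repeated = conn.execute("SELECT 25.0 FROM tablename").fetchall()
print(len(repeated))  # 4

# An aggregate collapses the result to a single row:
once = conn.execute("""
    SELECT ROUND(100.0 * COUNT(date_record_entered) / 2000, 1)
    FROM tablename
    WHERE date_record_entered BETWEEN 20131201 AND 20131231
""").fetchall()
print(once)  # a single row containing the percentage
```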
I'm not too familiar with SQL Server, but you might try to just select without a FROM clause. | How to accurately figure percentage of a finite total | [
"",
"sql",
"sql-server",
"gridview",
"sql-server-2012",
""
] |
I have the following sql
```
case gg.finalgrade
when NULL
then 'Nothing to Show'
else gg.finalgrade
end as 'Grade'
```
Many of my gg.finalgrade values are shown in the database as NULL, but the above statement simply ignores those values and does not print anything. I want it to show 'Nothing to Show' when the value is NULL.
I have looked at some examples on SO but can't seem to get them to work.
Thanks! | Don't bother using a case expression for this; MySQL has a built-in function called `IFNULL()`.
Have a look here for an example of how this works:
[SQL NULL Functions](http://www.w3schools.com/sql/sql_isnull.asp)
The syntax is:
```
SELECT IFNULL(gg.finalgrade,'Nothing to Show')
``` | In SQL this expression is never true:
```
null = null
```
So a comparison against null never evaluates to true (the result is unknown), which is why the `WHEN NULL` branch is never matched.
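This is easy to verify with SQLite from Python (the same three-valued logic applies in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULL = NULL does not evaluate to true; the result is itself NULL:
print(conn.execute("SELECT NULL = NULL").fetchone())  # (None,)

# IS NULL is the test that actually matches:
print(conn.execute(
    "SELECT CASE WHEN NULL IS NULL THEN 'Nothing to Show' END").fetchone())
```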
You can use something like this
```
coalesce(gg.finalgrade, 'Nothing to show')
```
or
```
case
when gg.finalgrade is null
then 'Nothing to show'
else gg.finalgrade
end
``` | SQL - CASE NULL | [
"",
"mysql",
"sql",
""
] |
I have a condition where I want to display a status of Approved or Rejected. I have been given a database which was created by someone and is in a mess. Not structured. I have a table with names and process\_code like below for example :
```
name | process_code
A | 7
B | 7
C | 3
D | 4
...
```
What I want to achieve is if the process\_code is 7, it will return a status of Rejected. If the process\_code is other than 7, it will return Approved.
It looks something like this :
```
SELECT name, process_code AS Status
CASE process_code
WHEN '7' THEN 'Rejected'
ELSE 'Approved'
FROM association
```
It didn't work. Can someone guide me to correct my SQL query? | You're just missing the `END` of the CASE expression, and a comma (and I'm not sure you need the quotes around process_code):
```
SELECT
name,
process_code AS Status,
CASE process_code
WHEN '7' THEN 'Rejected'
ELSE 'Approved'
END
FROM association
``` | First you need a comma before the case and you need an `end`:
```
SELECT name, process_code AS Status,
(CASE process_code
WHEN '7' THEN 'Rejected'
ELSE 'Approved'
END)
FROM association;
```
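Checked against the sample data with SQLite from Python (process_code is stored as an integer here, so the comparison uses an unquoted 7):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE association (name TEXT, process_code INTEGER)")
conn.executemany("INSERT INTO association VALUES (?, ?)",
                 [("A", 7), ("B", 7), ("C", 3), ("D", 4)])

rows = conn.execute("""
    SELECT name,
           CASE process_code WHEN 7 THEN 'Rejected' ELSE 'Approved' END AS Status
    FROM association
    ORDER BY name
""").fetchall()
print(rows)  # [('A', 'Rejected'), ('B', 'Rejected'), ('C', 'Approved'), ('D', 'Approved')]
```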
If the code is actually an integer, you might need to make the comparison without the single quotes. | In SQL Query, use CASE to print desired result if condition is met | [
"",
"sql",
"case",
""
] |