Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
Suppose I have a Product table
```
id product
1 Apple
2 Bag
3 Cat
4 Ducati
```
and a Cart table
```
id user_id product_id
1 1 2
2 1 3
3 2 1
4 3 1
```
So, I want to look at a particular user and see what he/she does NOT have in their Cart.
In other words, in the above example
```
SELECT ...... WHERE user_id=1 .....
```
would return Apple and Ducati because User 1 already has Bag and Cat.
(This may well duplicate another question, but there are so many variations that I couldn't find an exact match, and putting it in these simple terms may help.) | Perform a left join from product to all products purchased by user 1, which can be retrieved with a subselect in the join. This will cause all product IDs that are not in user 1's cart to come back as NULL. The WHERE clause then selects the NULL product IDs, meaning those products were not in the user's cart, essentially filtering out purchased items.
```
select p.name
from product p
left join (select product_id, user_id
from cart where user_id = 1)
c
on p.id = c.product_id
where c.product_id is null;
```
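For readers without a MySQL instance handy, the accepted query can be sanity-checked with SQLite from Python; the table and column names follow the question's sample data (a quick sketch, not production code):

```python
import sqlite3

# In-memory SQLite database with the question's sample data; the query
# is the accepted LEFT JOIN anti-join, unchanged.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE cart (id INTEGER PRIMARY KEY, user_id INTEGER, product_id INTEGER);
    INSERT INTO product VALUES (1,'Apple'),(2,'Bag'),(3,'Cat'),(4,'Ducati');
    INSERT INTO cart VALUES (1,1,2),(2,1,3),(3,2,1),(4,3,1);
""")
rows = conn.execute("""
    SELECT p.name
    FROM product p
    LEFT JOIN (SELECT product_id, user_id FROM cart WHERE user_id = 1) c
           ON p.id = c.product_id
    WHERE c.product_id IS NULL
    ORDER BY p.id
""").fetchall()
print([r[0] for r in rows])  # ['Apple', 'Ducati']
```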
**SQL Fiddle:** <http://sqlfiddle.com/#!2/5318eb/17> | ```
Select
*
From Product p
Where p.id Not In
(
Select c.product_id
From Cart c
Where c.user_id = ____
)
``` | Matching items in one table that don't match in a subset of a second table | [
"",
"mysql",
"sql",
""
] |
I have an SSRS report that does not display data in preview mode. However, when I run the same query in SQL Server 2008 R2, I can retrieve the results.
What could cause this?
I also used SET FMTONLY OFF because I use temp tables. | If you use "SQL Server Business Intelligence Development Studio" (not "Report Builder"), then in Reporting Services (where your table is):
1. click View -> Properties Window (or just press F4)
2. select the tablix
3. on properties window find "General" and in the "DataSetName" choose your "Dataset"
4. On tablix fields set values from your "DataSets"
Or just do it like here (from 8:50): <http://www.youtube.com/watch?v=HM_dquiikBA> | **The best solution**
**Select the entire row and change the font to Arial or any other font other than Segoe UI**
1. default font
[default font](https://i.stack.imgur.com/WbRRD.png)
2. no display in preview
[no display in preview](https://i.stack.imgur.com/8diWq.png)
3. changed font first row
[changed font first row](https://i.stack.imgur.com/QcN0E.png)
4. first row is displayed in preview
[first row is displayed in preview](https://i.stack.imgur.com/O2Qml.png)
5. changed second row font
[changed second row font](https://i.stack.imgur.com/WOSiK.png)
6. data is displaying
[data is displaying](https://i.stack.imgur.com/6KsTI.png) | SSRS not displaying data but displays data when query is run in T-SQL | [
"",
"sql",
"t-sql",
"reporting-services",
"sql-server-2008-r2",
"bids",
""
] |
According to [MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/if.html) this would run on mysql.version > 5. But I get:
> #1064 - You have an error in your SQL syntax; check the manual that
> corresponds to your MySQL server version for the right syntax to use
> near 'IF
Code:
```
IF SELECT MAX(`amount`) FROM transactions < 500
THEN
INSERT INTO transactions (amount) VALUES (500)
END IF
```
or
```
IF( (SELECT MAX(`amount`) FROM transactions < 500)
,INSERT INTO transactions (amount) VALUES (500)
, null
);
```
Transactions table:
```
id amount
1 100
2 150
3 400
```
Neither works. | The cause of the error is the fact that you can't use an `IF` statement in a plain query; it can be used **only** in the context of a stored routine (procedure, function, trigger, event).
Your first piece of code will work successfully in a stored procedure *with slight changes*:
```
DELIMITER $$
CREATE PROCEDURE insert500()
BEGIN
IF (SELECT MAX(`amount`) FROM transactions) < 500 -- note the parentheses around the SELECT
THEN
INSERT INTO transactions (amount) VALUES (500); -- semicolon at the end
END IF; -- semicolon at the end
END$$
DELIMITER ;
CALL insert500();
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/505c0/1)** demo
---
Now here is one way to do what you want in one statement
```
INSERT INTO transactions (amount)
SELECT 500
FROM transactions
HAVING MAX(amount) < 500;
```
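The single-statement guard above can be tried out in SQLite too; since `HAVING` without `GROUP BY` is not portable across engines, this sketch expresses the same "insert only if the current maximum is below 500" guard as a subquery (an equivalent rewrite, not the MySQL original):

```python
import sqlite3

# Insert 500 only if the current maximum amount is below 500:
# the same guard as the HAVING trick, written as a subquery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount INTEGER);
    INSERT INTO transactions (amount) VALUES (100), (150), (400);
""")
guard = """
    INSERT INTO transactions (amount)
    SELECT 500 FROM (SELECT MAX(amount) AS m FROM transactions) WHERE m < 500
"""
conn.execute(guard)   # max is 400 -> inserts 500
conn.execute(guard)   # max is now 500 -> inserts nothing
count = conn.execute(
    "SELECT COUNT(*) FROM transactions WHERE amount = 500").fetchone()[0]
print(count)  # 1
```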
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/d30fb/1)** demo | That should be:
```
IF SELECT MAX(`amount`) FROM transactions WHERE amount < 500
THEN
INSERT INTO transactions (amount) VALUES (500)
END IF
``` | INSERT '100' into table IF '100' is the highest value gives error | [
"",
"mysql",
"sql",
"if-statement",
""
] |
I have to select data from a table using column values that contain texts with numbers, like this:
```
"SELECT Title, Category from BookList where CategoryField BETWEEN '"
+ blockCodeStart + "' AND '" + blockCodeEnd + "'";
blockCodeStart = "A25"
blockCodeEnd = "A39"
```
I understand that BETWEEN could handle either text or numbers alone, but not both. Any help appreciated. | `foo BETWEEN bar AND baz` is just a handy shortcut for `bar <= foo AND foo <= baz`. You can NOT expect to get "normal" results if you're comparing apples and oranges - since you're comparing STRINGS, you'll get string results. You can NOT expect the DB to be able to figure out that `A25` is greater than `A9`. It's not human.
You'll have to decompose your strings into their individual components, and compare them as "native" values, e.g.
```
('A' == 'A') and (25 == 39)
``` | If all values in the field Category have the same format 'LETTER' + 'DIGITS', then you can create a function that compares categories.
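The decomposition described above can also be sketched outside the database; here is a hypothetical Python natural-sort key for 'LETTER' + 'DIGITS' codes, showing why the tuple comparison succeeds where plain string comparison fails:

```python
import re

def natural_key(code):
    """Split e.g. 'A25' into ('A', 25) so comparison is letters-then-number."""
    m = re.match(r"([^\d]*)(\d*)", code)
    prefix, digits = m.group(1), m.group(2)
    return (prefix, int(digits) if digits else -1)

codes = ["A9", "A25", "A39", "A355", "B1"]
print(sorted(codes, key=natural_key))  # ['A9', 'A25', 'A39', 'A355', 'B1']
# 'A25' <= 'A39' as tuples, unlike plain string comparison:
print(natural_key("A25") <= natural_key("A39"))  # True
print("A9" <= "A39")  # False -- the string comparison the answer warns about
```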
In this example I created and use schema `test`.
```
drop table if exists test.aaa;
drop function if exists test.natural_compare(s1 character varying, s2 character varying);
create or replace function test.natural_compare(s1 character varying, s2 character varying)
returns integer
as
$$
declare
s1_s character varying;
s2_s character varying;
s1_n bigint;
s2_n bigint;
begin
s1_s = regexp_replace(s1, '^([^[:digit:]]*).*$', '\1');
s2_s = regexp_replace(s2, '^([^[:digit:]]*).*$', '\1');
if s1_s < s2_s then
return -1;
elsif s1_s > s2_s then
return +1;
else
s1_n = regexp_replace(s1, '^.*?([[:digit:]]*)$', '\1')::bigint;
s2_n = regexp_replace(s2, '^.*?([[:digit:]]*)$', '\1')::bigint;
if s1_n < s2_n then
return -1;
elsif s1_n > s2_n then
return +1;
else
return 0;
end if;
end if;
end;
$$
language plpgsql immutable;
create table test.aaa (
id serial not null primary key,
categ character varying
);
insert into test.aaa (categ) values
('A1'),
('A2'),
('A34'),
('A35'),
('A39'),
('A355'),
('B1'),
('B6')
;
select * from test.aaa
where test.natural_compare('A34', categ) <= 0 and test.natural_compare(categ, 'A39') <= 0
``` | BETWEEN operator when range has texts with numbers | [
"",
"sql",
"between",
""
] |
Customer
```
CustomerID Name
4001 John Bob
4002 Joey Markle
4003 Johny Brown
4004 Jessie Black
```
Orders
```
OrderID Customer Status
50001 4001 Paid
50002 4002 Paid
50003 4001 Paid
50004 4003 Paid
50005 4001 Paid
50006 4003 Paid
50007 4004 Unpaid
```
I tried this join
```
Select c.Customer, COUNT(o.OrderID) as TotalOrders
from Customer c
inner join Orders o
on c.Customer = o.Customer
Where o.Status = 'Paid'
Group by c.Customer
```
But here is the result.
```
Customer TotalOrders
4001 3
4002 1
4003 2
```
The customer with unpaid orders is not included. How will I include all the customers?
```
Customer TotalOrders
4001 3
4002 1
4003 2
4004 0
``` | You will have to use a more complex `left join` in order to only count the `Paid` ones:
```
SELECT c.customerid, count(o.orderid) TotalOrders
FROM customer c
LEFT JOIN orders o
ON c.customerid = o.customer AND o.status = 'Paid'
GROUP BY c.customerid
```
Output:
```
| CUSTOMERID | TOTALORDERS |
|------------|-------------|
| 4001 | 3 |
| 4002 | 1 |
| 4003 | 2 |
| 4004 | 0 |
```
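The decisive detail here, moving the status filter into the `ON` clause instead of `WHERE`, can be verified with a quick SQLite session in Python (a sketch mirroring the question's sample data):

```python
import sqlite3

# Filtering in the ON clause keeps unmatched customers (count 0);
# filtering in WHERE would drop customer 4004 entirely.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (customerid INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (orderid INTEGER PRIMARY KEY, customer INTEGER, status TEXT);
    INSERT INTO customer VALUES (4001,'John Bob'),(4002,'Joey Markle'),
                                (4003,'Johny Brown'),(4004,'Jessie Black');
    INSERT INTO orders VALUES (50001,4001,'Paid'),(50002,4002,'Paid'),
        (50003,4001,'Paid'),(50004,4003,'Paid'),(50005,4001,'Paid'),
        (50006,4003,'Paid'),(50007,4004,'Unpaid');
""")
rows = conn.execute("""
    SELECT c.customerid, COUNT(o.orderid) AS totalorders
    FROM customer c
    LEFT JOIN orders o
           ON c.customerid = o.customer AND o.status = 'Paid'
    GROUP BY c.customerid
    ORDER BY c.customerid
""").fetchall()
print(rows)  # [(4001, 3), (4002, 1), (4003, 2), (4004, 0)]
```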
See the working fiddle [here](http://sqlfiddle.com/#!3/3cbc3/1). | Use a left join
```
Select c.Customer, COUNT(o.OrderID) as TotalOrders
from Customer c
left join Orders o
on c.Customer = o.Customer
Group by c.Customer
``` | Count and Joins with Where | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
Let's assume two tables:
TableA holds various data measurements from a variety of stations.
TableB holds metadata, about the columns used in TableA.
TableA has:
```
stationID int not null, pk
entryDate datetime not null, pk
waterTemp float null,
waterLevel float null ...etc
```
TableB has:
```
id int not null, pk, autoincrement
colname varchar(50),
unit varchar(50) ....etc
```
So for example, one line of data from tableA reads:
```
1 | 2013-01-01 00:00 | 2.4 | 3.5
```
two lines from tableB read:
```
1| waterTemp | celcius
2| waterLevel | meters
```
This is a simplified example. In truth, tableA might hold close to 20 different data columns, and table b has close to 10 metadata columns.
I am trying to design a view which will output the results like this:
```
StationID | entryDate | water temperature | water level |
1 | 2013-01-01 00:00 | 2.4 celcius | 3.5 meters |
```
So two questions:
1. Other than specifying subselects from TableB (..."where
colname='XXX'") for each column, which seems horribly insufficient
(not to mention...manual :P ), is there a way to get the result I
mentioned earlier with automatic match on colname?
2. I have a hunch
that this might be bad design on the database. Is it so? If yes,
what would be a more optimal design? (Bear in mind the complexity of
the data structure I mentioned earlier.) | Dynamic SQL with PIVOT is the answer. Though it is messy in terms of debugging, or for a new developer trying to understand the code, it will give you the result you expected.
Check the query below.
In it we need to prepare two things dynamically: one is the list of columns in the result set, and the second is the list of values that will appear in the PIVOT query. Notice that in the result I do not have NULL values for Column3, Column5 and Column6.
```
SET NOCOUNT ON
IF OBJECT_ID('TableA','u') IS NOT NULL
DROP TABLE TableA
GO
CREATE TABLE TableA
(
stationID int not null IDENTITY (1,1)
,entryDate datetime not null
,waterTemp float null
,waterLevel float NULL
,Column3 INT NULL
,Column4 BIGINT NULL
,Column5 FLOAT NULL
,Column6 FLOAT NULL
)
GO
IF OBJECT_ID('TableB','u') IS NOT NULL
DROP TABLE TableB
GO
CREATE TABLE TableB
(
id int not null IDENTITY(1,1)
,colname varchar(50) NOT NULL
,unit varchar(50) NOT NULL
)
INSERT INTO TableA( entryDate ,waterTemp ,waterLevel,Column4)
SELECT '2013-01-01',2.4,3.5,101
INSERT INTO TableB( colname, unit )
SELECT 'WaterTemp','celcius'
UNION ALL SELECT 'waterLevel','meters'
UNION ALL SELECT 'Column3','unit3'
UNION ALL SELECT 'Column4','unit4'
UNION ALL SELECT 'Column5','unit5'
UNION ALL SELECT 'Column6','unit6'
DECLARE @pvtInColumnList NVARCHAR(4000)=''
,@SelectColumnist NVARCHAR(4000)=''
, @SQL nvarchar(MAX)=''
----getting the list of Columnnames will be used in PIVOT query list
SELECT @pvtInColumnList = CASE WHEN @pvtInColumnList=N'' THEN N'' ELSE @pvtInColumnList + N',' END
+ N'['+ colname + N']'
FROM TableB
--PRINT @pvtInColumnList
----lt and rt are table aliases used in subsequent join.
SELECT @SelectColumnist= CASE WHEN @SelectColumnist = N'' THEN N'' ELSE @SelectColumnist + N',' END
+ N'CAST(lt.'+sc.name + N' AS Nvarchar(MAX)) + SPACE(2) + rt.' + sc.name + N' AS ' + sc.name
FROM sys.objects so
JOIN sys.columns sc
ON so.object_id=sc.object_id AND so.name='TableA' AND so.type='u'
JOIN TableB tbl
ON tbl.colname=sc.name
JOIN sys.types st
ON st.system_type_id=sc.system_type_id
ORDER BY sc.name
IF @SelectColumnist <> '' SET @SelectColumnist = N','+@SelectColumnist
--PRINT @SelectColumnist
----preparing the final SQL to be executed
SELECT @SQL = N'
SELECT
--this is a fixed column list
lt.stationID
,lt.entryDate
'
--dynamic column list
+ @SelectColumnist +N'
FROM TableA lt,
(
SELECT * FROM
(
SELECT colname,unit
FROM TableB
)p
PIVOT
( MAX(p.unit) FOR p.colname IN ( '+ @pvtInColumnList +N' ) )q
)rt
'
PRINT @SQL
EXECUTE sp_executesql @SQL
```
***here is the result***

Answer to your second question:
The design above provides neither performance nor flexibility. If the user wants to add new metadata (a column and unit), that cannot be done without changing the table definition of TableA.
If we are OK with writing dynamic SQL to give the user flexibility, we can redesign TableA as below. There is nothing to change in TableB. I would convert TableA into a key-value pair table. Notice that stationID is no longer an IDENTITY; instead, for a given stationID there will be N rows, where N is the number of columns supplying values for that stationID. With this design, if tomorrow the user adds a new column and unit in TableB, it will just add a new row in TableA; no table definition change is required.
```
SET NOCOUNT ON
IF OBJECT_ID('TableA_New','u') IS NOT NULL
DROP TABLE TableA_New
GO
CREATE TABLE TableA_New
(
rowID INT NOT NULL IDENTITY (1,1)
,stationID int not null
,entryDate datetime not null
,ColumnID INT
,Columnvalue NVARCHAR(MAX)
)
GO
IF OBJECT_ID('TableB_New','u') IS NOT NULL
DROP TABLE TableB_New
GO
CREATE TABLE TableB_New
(
id int not null IDENTITY(1,1)
,colname varchar(50) NOT NULL
,unit varchar(50) NOT NULL
)
GO
INSERT INTO TableB_New(colname,unit)
SELECT 'WaterTemp','celcius'
UNION ALL SELECT 'waterLevel','meters'
UNION ALL SELECT 'Column3','unit3'
UNION ALL SELECT 'Column4','unit4'
UNION ALL SELECT 'Column5','unit5'
UNION ALL SELECT 'Column6','unit6'
INSERT INTO TableA_New (stationID,entrydate,ColumnID,Columnvalue)
SELECT 1,'2013-01-01',1,2.4
UNION ALL SELECT 1,'2013-01-01',2,3.5
UNION ALL SELECT 1,'2013-01-01',4,101
UNION ALL SELECT 2,'2012-01-01',1,3.6
UNION ALL SELECT 2,'2012-01-01',2,9.9
UNION ALL SELECT 2,'2012-01-01',4,104
SELECT * FROM TableA_New
SELECT * FROM TableB_New
SELECT *
FROM
(
SELECT lt.stationID,lt.entryDate,rt.Colname,lt.Columnvalue + SPACE(3) + rt.Unit AS ColValue
FROM TableA_New lt
JOIN TableB_new rt
ON lt.ColumnID=rt.ID
)t1
PIVOT
(MAX(ColValue) FOR Colname IN ([WaterTemp],[waterLevel],[Column3],[Column4],[Column5],[Column6]))pvt
```
see the result below.
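When the wanted columns are known up front, the key-value design above can also be pivoted back without dynamic SQL, using conditional aggregation; here is a small SQLite sketch of the same TableA_New/TableB_New idea (names shortened, not the SQL Server original):

```python
import sqlite3

# Pivot the key-value design back into columns with conditional
# aggregation (works when the wanted columns are known up front).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tablea_new (stationid INTEGER, entrydate TEXT,
                             columnid INTEGER, columnvalue TEXT);
    CREATE TABLE tableb_new (id INTEGER PRIMARY KEY, colname TEXT, unit TEXT);
    INSERT INTO tableb_new (colname, unit) VALUES
        ('waterTemp','celcius'), ('waterLevel','meters');
    INSERT INTO tablea_new VALUES
        (1,'2013-01-01',1,'2.4'), (1,'2013-01-01',2,'3.5');
""")
rows = conn.execute("""
    SELECT a.stationid, a.entrydate,
           MAX(CASE WHEN b.colname='waterTemp'  THEN a.columnvalue || ' ' || b.unit END),
           MAX(CASE WHEN b.colname='waterLevel' THEN a.columnvalue || ' ' || b.unit END)
    FROM tablea_new a JOIN tableb_new b ON a.columnid = b.id
    GROUP BY a.stationid, a.entrydate
""").fetchall()
print(rows)  # [(1, '2013-01-01', '2.4 celcius', '3.5 meters')]
```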
 | I would design this database like the following:
A table `MEASUREMENT_DATAPOINT` that contains the measured data points. It would have the columns `ID`, `measurement_id`, `value`, `unit`, `name`.
One entry would be `1, 1, 2.4, 'celcius', 'water temperature'`.
A table `MEASUREMENTS` that contains the data of the measurement itself. Columns: `ID, station_ID, entry_date`. | Create a view based on column metadata | [
"",
"sql",
"sql-server-2008",
"metadata",
""
] |
I have 3 tables.
I need info from 2 of the tables, while the 3rd table links them.
Can someone give me an example of the joins to use?
For this example, I need the insured's first and last names, and the effective date and expiration date of their policies.
Mortal table
```
SQL> desc mortal
Name
---------------------
MORTAL_ID
SEX_TYPE_CODE
FIRST_NAME
LAST_NAME
DOB
MARITAL_STATUS_CODE
SSN
MIDDLE_NAME
WORK_PHONE
```
Insured (linking) table
```
SQL> desc insured
Name
------------------------
INSURED_ID
INSURED_TYPE_CODE
POLICY_ID
MORTAL_ID
BANK_ACCOUNT_NUM
INSURED_NUM
```
Policy table
```
SQL> desc policy
Name
---------------------------
POLICY_ID
PLAN_ID
POLICY_STATUS_TYPE_CODE
PAYER_GROUP_ID
EFFECTIVE_DATE
POLICY_NUM
EXPIRE_DATE
```
As you can see, I need data from tables 1 & 3, but must use table 2 to link them.
what type of join is this? How do I use them? | This is still an easy join. Here is one way.
```
select m.first_name, m.last_name, p.effective_date, p.expire_date
from mortal m
inner join insured i
on i.mortal_id = m.mortal_id
inner join policy p
on p.policy_id = i.policy_id
```
Not sure what the problem is. But if the question is just "can you join a table you don't select from," then the answer is yes. | You simply need to join the tables. The type of join depends on what you are trying to achieve. Below is a basic example:
```
SELECT a.field, c.field
FROM [Mortal Table] a
JOIN [Insured Table] b ON a.MORTAL_ID=b.MORTAL_ID
JOIN [Policy Table] c on b.POLICY_ID=c.POLICY_ID
*WHERE Clause*
```
Hope this helps. | Join 3 tables, only use data from 2? | [
"",
"sql",
"database",
"oracle11g",
""
] |
My MySQL 5.5 server has set `autocommit=1`.
My stored procedure has several DMLs but without explicit transaction management.
When I issue `call the_procedure()` from MySQL CLI (`autocommit` is still 1), do all the procedure's DMLs run in one transaction?
Or do they run in separate transactions, and causing implicit transaction commit after every DML (due to autocommit)? | They run in separate transactions if `autocommit=1`. Suppose you define
```
CREATE TABLE test ( id int PRIMARY KEY )//
CREATE PROCEDURE sp_test_trans()
BEGIN
INSERT INTO test (id) VALUES (1);
INSERT INTO test (id) VALUES (2);
ROLLBACK;
END//
```
If you run this procedure with `autocommit=0`, the `ROLLBACK` will undo the insertions. If you run it with `autocommit=1`, the `ROLLBACK` will do nothing. [Fiddle here](http://sqlfiddle.com/#!2/9f44de/2).
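The autocommit behaviour can be illustrated with SQLite's Python driver as an analogy (SQLite is not MySQL, but its autocommit mode makes the same point: once each statement commits on its own, a later rollback has nothing to undo):

```python
import sqlite3

# With isolation_level=None each statement is committed immediately,
# so a later rollback() has nothing left to undo -- analogous to
# running the procedure's DML under autocommit=1.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO test (id) VALUES (1)")
conn.rollback()  # no open transaction: the insert is already committed
count = conn.execute("SELECT COUNT(*) FROM test").fetchone()[0]
print(count)  # 1 -- the row survived the rollback
```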
Another example:
```
CREATE PROCEDURE sp_test_trans_2()
BEGIN
INSERT INTO test (id) VALUES (1);
INSERT INTO test (id) VALUES (1);
END//
```
If you run this procedure with `autocommit=0`, failure of the second insert will cause a `ROLLBACK` undoing the first insertion. If you run it with `autocommit=1`, the second insert will fail but the effects of the first insert will not be undone. | [This](https://www.oreilly.com/library/view/mysql-stored-procedure/0596100892/re31.html) is surprising to me but:
> Although MySQL will automatically initiate a transaction on your
> behalf when you issue DML statements, you should issue an explicit
> START TRANSACTION statement in your program to mark the beginning of
> your transaction.
>
> It's possible that your stored program might be run within a server in
> which autocommit is set to TRUE, and by issuing an explicit START
> TRANSACTION statement you ensure that autocommit does not remain
> enabled during your transaction. START TRANSACTION also aids
> readability by clearly delineating the scope of your transactional
> code. | Do DMLs in a stored procedure run in a single transaction? | [
"",
"mysql",
"sql",
"database",
"stored-procedures",
""
] |
I have a SQL Server database and I want to change the identity column because it started
with a big number `10010` and it's related to another table. Now I have 200 records and I want to fix this issue before the record count increases.
What's the best way to change or reset this column? | > **You can not update identity column.**
>
> SQL Server does not allow you to update the identity column, unlike what you can do with other columns with an update statement.
Although there are some alternatives to achieve a similar kind of requirement.
* **When Identity column value needs to be updated for new records**
Use [DBCC CHECKIDENT](http://msdn.microsoft.com/en-us/library/ms176057.aspx) *which checks the current identity value for the table and if it's needed, changes the identity value.*
```
DBCC CHECKIDENT('tableName', RESEED, NEW_RESEED_VALUE)
```
* **When Identity column value needs to be updated for existing records**
Use [IDENTITY\_INSERT](https://learn.microsoft.com/en-us/sql/t-sql/statements/set-identity-insert-transact-sql?view=sql-server-ver15) *which allows explicit values to be inserted into the identity column of a table.*
```
SET IDENTITY_INSERT YourTable {ON|OFF}
```
*Example:*
```
-- Set Identity insert on so that value can be inserted into this column
SET IDENTITY_INSERT YourTable ON
GO
-- Insert the record which you want to update with new value in the identity column
INSERT INTO YourTable(IdentityCol, otherCol) VALUES(13,'myValue')
GO
-- Delete the old row of which you have inserted a copy (above) (make sure about FK's)
DELETE FROM YourTable WHERE ID=3
GO
-- Now set IDENTITY_INSERT OFF to go back to the previous behaviour
SET IDENTITY_INSERT YourTable OFF
``` | If I got your question right, you want to do something like
```
update table
set identity_column_name = some value
```
Let me tell you, it is not an easy process, and it is not advisable to use it, as there may be a `foreign key` associated with it.
But here are the steps to do it. Please take a `backup` of the table first.
Step 1- Select design view of the table

Step 2- Turn off the identity column

Now you can use the `update` query.
Now redo steps 1 and 2 and turn the identity column back on.
**[Reference](http://blog.sqlauthority.com/2009/05/03/sql-server-add-or-remove-identity-property-on-column/)** | How to update Identity Column in SQL Server? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008",
""
] |
In MySQL, I want to fire a trigger *after the update* of specific columns.
I know how to do it in Oracle and DB2:
```
CREATE TRIGGER myTrigger
AFTER UPDATE of myColumn1,myColumn2 ... ON myTable
FOR EACH ROW
BEGIN
....
END
```
How to do that with MySQL? | You can't fire a trigger on a particular column update in MySQL; triggers are applied per row.
You can put your condition for columm in your trigger with an `IF` statement, as below:
```
DELIMITER //
CREATE TRIGGER myTrigger AFTER UPDATE ON myTable
FOR EACH ROW
BEGIN
IF !(NEW.column1 <=> OLD.column1) THEN
-- type your statements here
END IF;
END;//
DELIMITER ;
``` | You can't specify only to fire on specific column changes. But for a record change on a table you can do
```
delimiter |
CREATE TRIGGER myTrigger AFTER UPDATE ON myTable
FOR EACH ROW
BEGIN
...
END
|
delimiter ;
```
In your trigger you can refer to old and new content of a column like this
```
if NEW.column1 <> OLD.column1 ...
``` | Fire a trigger after the update of specific columns in MySQL | [
"",
"mysql",
"sql",
"triggers",
""
] |
I have a query where the individual selects are pulling the most recent result from the table. So I'm having the select order by id desc, so the most recent is top, then using a rownum to just show the top number. Each select is a different place that I want the most recent result.
However, the issue I'm running into is the order by can't be used in a select statement for the union all.
```
select 'MUHC' as org,
aa,
messagetime
from buffer_messages
where aa = 'place1'
and rownum = 1
order by id desc
union all
select 'MUHC' as org,
aa,
messagetime
from buffer_messages
where aa = 'place2'
and rownum = 1
order by id desc;
```
Each select has to have the order by, or else it won't pull the most recent version. Any ideas for a different way to do this entirely, or a way to do this with the union all that would get me the desired result? | By putting the `where .. and rownum = 1` condition before the `order by` clause you won't produce the desired result, because the result set is ordered after the `where` clause is applied; you end up ordering only one row, which can be whatever first row is returned by the query.
Moreover, putting an `order by` clause right before the `union all` clause is semantically incorrect - you will need a *wrapper* select statement.
You could rewrite your sql statement as follows:
```
select *
from ( select 'MUHC' as org
, aa
, messagetime
, row_number() over(partition by aa
order by id desc) as rn
from buffer_messages
) s
where s.rn = 1
```
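The `row_number()` approach above runs essentially unchanged on any engine with window functions; here is a quick SQLite check from Python (window functions require SQLite 3.25 or later; the sample data is made up):

```python
import sqlite3

# Greatest-row-per-group via ROW_NUMBER(), as in the answer above:
# one most-recent message per value of aa.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE buffer_messages (id INTEGER PRIMARY KEY, aa TEXT, messagetime TEXT);
    INSERT INTO buffer_messages VALUES
        (1,'place1','08:00'), (2,'place2','08:05'),
        (3,'place1','09:00'), (4,'place2','09:05');
""")
rows = conn.execute("""
    SELECT aa, messagetime
    FROM (SELECT aa, messagetime,
                 ROW_NUMBER() OVER (PARTITION BY aa ORDER BY id DESC) AS rn
          FROM buffer_messages) s
    WHERE s.rn = 1
    ORDER BY aa
""").fetchall()
print(rows)  # [('place1', '09:00'), ('place2', '09:05')]
```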
And here is the second approach:
```
select max('MUHC') as org
, max(aa) as aa
, max(messagetime) keep (dense_rank last order by id) as messagetime
from buffer_messages
group by aa
``` | Try this
```
select 'MUHC' as org,
aa,
messagetime
from buffer_messages bm
where aa = 'place1'
and id= (Select max(id) from buffer_messages where aa = 'place1' )
union all
select 'MUHC' as org,
aa,
messagetime
from buffer_messages
where aa = 'place2'
and id= (Select max(id) from buffer_messages where aa = 'place2' )
``` | Oracle - order by in subquery of union all | [
"",
"sql",
"oracle",
""
] |
The problem: I have a MySQL database table which holds article numbers and 2 parameters related to the article:
```
article_no | parameter1 | parameter2
1111111 | false | false
2111111 | true | true
1222222 | false | false
2222222 | false | false
```
Articles are represented by 2 article numbers, the difference is that one starts with "1", the other starts with "2".
The problem: parameter2 of all article numbers starting with "1" has to become "true" if parameter1 of the related "2"-article number is true. In the example above, parameter2 of 1111111 has to become "true". Is there a way to do this with SQL only? | You can achieve that with one `UPDATE` query, like:
```
UPDATE
t AS l
LEFT JOIN t AS r
ON SUBSTR(l.article_no, 2)=SUBSTR(r.article_no, 2)
AND l.article_no LIKE '1%'
AND r.article_no LIKE '2%'
SET
l.parameter2='true'
WHERE
r.parameter1='true'
``` | From my understanding, you want to update parameters 1 and 2 with the following condition: if the prefix is '1' then 'true', and if the prefix is '2' then 'false'. Here's the query:
```
UPDATE article SET parameter1 = 'true', parameter2 = 'true' WHERE article_no LIKE '1%';
UPDATE article SET parameter1 = 'false', parameter2 = 'false' WHERE article_no LIKE '2%';
```
I assume that your table would be `article`. | sql statement to change somehow related rows | [
"",
"mysql",
"sql",
""
] |
I have a MySql Table with the following schema.
```
table_products - "product_id", product_name, product_description, product_image_path, brand_id
table_product_varient - "product_id", "varient_id", product_mrp, product_sellprice, product_imageurl
table_varients - "varient_id", varient_name
table_product_categories - "product_id", "category_id"
```
and this is the Mysql `select` query i am using to fetch the data for the category user provided.
```
select * from table_products, table_product_varients, table_varients, table_product_categories where table_product_categories.category_id = '$cate_id' && table_product_categories.product_id = table_products.product_id && table_products.product_id = table_product_varients.product_id && table_varients.varient_id = table_product_varients.varient_id
```
The problem is that, as the table contains a lot of products, and each product has a lot of variants, it is taking too much time to fetch the data. And I suspect that, as the data grows, the time to fetch the items will increase. Is there any optimized way to achieve the same?
Your help will be highly appreciated.
Devesh | The query below would be a start, or something similar:
```
SELECT
*
FROM
table_products P
INNER JOIN
table_product_categories PC
ON
PC.product_id = P.product_id
INNER JOIN
table_product_varients PV
ON
P.product_id = PV.product_id
INNER JOIN
table_varients V
ON
V.varient_id = PV.varient_id
where
PC.category_id = '$cate_id'
```
And, as suggested, do you really need to return `*`? It means selecting all columns from all tables within the query, which, as we know from the joins themselves, contains duplicates.
You should use indexes on the tables for faster queries, and set relationships between the joined tables; this will also ensure referential integrity.
Hope this makes sense and helps :) | You can use the EXPLAIN command to see what's happening in the server. Then you can optimize the request by creating indexes.
Some links:
* [Some slides about tuning](http://de.slideshare.net/manikandakumar/mysql-query-and-index-tuning)
* [MYSQL manual: 8.2.1. Optimizing SELECT Statements](http://dev.mysql.com/doc/refman/5.5/en/select-optimization.html) | Is there any way to optimize this MYSQL query | [
"",
"mysql",
"sql",
"query-optimization",
""
] |
Here is a schema about battleships and the battles they fought in:
```
Ships(name, yearLaunched, country, numGuns, gunSize, displacement)
Battles(ship, battleName, result)
```
A typical Ships tuple would be:
```
('New Jersey', 1943, 'USA', 9, 16, 46000)
```
which means that the battleship New Jersey was launched in 1943; it belonged to the USA, carried 9 guns of size 16-inch (bore, or inside diameter of the barrel), and weighted (displaced, in nautical terms) 46,000 tons. A typical tuple for Battles is:
```
('Hood', 'North Atlantic', 'sunk')
```
That is, H.M.S. Hood was sunk in the battle of the North Atlantic. The other possible results are 'ok' and 'damaged'
Question: List all the pairs of countries that fought each other in battles. List each pair only once, and list them with the country that comes first in alphabetical order first
Answer: I wrote this:
```
SELECT
a.country, b.country
FROM
ships a, ships b, battles b1, battles b2
WHERE
name = ship
and b1.battleName = b2.battleName
and a.country > b.country
```
But it says ambiguous column name. How do I resolve it? Thanks in advance | Well, `name = ship` is the problem. `name` could be from `a` or `b`, and `ship` from `b1` or `b2`
You could do something like this:
```
select distinct s1.country, s2.country
from ships s1
inner join Battles b1 on b1.ship = s1.name
inner join Battles b2 on b2.ship <> s1.name and b2.battleName = b1.battleName
inner join ships s2 on s2.name = b2.ship and s2.country < s1.country
``` | Could you try a nested query getting a table of winners and losers and then joining them on the battle name?
```
SELECT
WINNER.country as Country1
,LOSER.country as Country2
FROM
(
SELECT DISTINCT country, battleName
FROM Battles
INNER JOIN Ships ON Battles.ship = Ships.name
WHERE Battles.Result = 1
) AS WINNER
INNER JOIN
(
SELECT DISTINCT country, battleName
FROM Battles
INNER JOIN Ships ON Battles.ship = Ships.name
WHERE Battles.Result = 0
) AS LOSER
ON WINNER.battlename = LOSER.battlename
ORDER BY WINNER.country
``` | battles, ships sql query | [
"",
"sql",
"countries",
""
] |
I am going through some kind of problem. Here is the table schema.
I have two tables job,application.
```
Application: aid,aname,stime,jname
Job:jid,jname,aid,start,end
Application table:
aid aname stime
A ABC 23-SEP-13
B DEF 24-SEP-13
Job table:
jid jname aid start end
1 job1 A 10-OCT-13 13:06:20 11-OCT-13 13:06:45
2 job2 A 10-OCT-13 14:06:20 11-OCT-13 14:09:55
3 job1 B 10-OCT-13 15:16:20 11-OCT-13 15:06:45
4 job2 B 10-OCT-13 15:26:20 11-OCT-13 15:46:45
```
I need the output as follows.
I need to generate the differences between the start and end times of all the jobs in every application.
```
aname stime jname (end-start)Days Hours Minutes Seconds
ABC 23-SEP-13 job1 1 0 0 25
ABC 23-SEP-13 job2 1 0 3 35
DEF 24-SEP-13 job1 1 0 10 25
DEF 24-SEP-13 job2 1 0 20 25
```
I tried using an IN clause to extract the data, but the problem is that I am unable to retrieve multiple columns from the second table.
Thank you. | Try like this,
```
SELECT a.aname,
A.stime,
j.jname,
floor(end - start) DAYS ,
MOD(FLOOR ((end - start) * 24), 24) HOURS ,
MOD (FLOOR ((end - start) * 24 * 60), 60) MINUTES ,
MOD (FLOOR ((end - start) * 24 * 60 * 60), 60) SECS
FROM application a,
JOB j
WHERE j.aid = a.aid;
```
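The `FLOOR`/`MOD` breakdown of Oracle's fractional day difference above is equivalent to integer division on the total seconds, which is easy to sanity-check in plain Python (a sketch, not Oracle itself), using the first job's timestamps:

```python
from datetime import datetime

# Mirror Oracle's (end - start) decomposition into days/hours/minutes/seconds
# for job1: 10-OCT-13 13:06:20 to 11-OCT-13 13:06:45.
start = datetime(2013, 10, 10, 13, 6, 20)
end = datetime(2013, 10, 11, 13, 6, 45)

total = int((end - start).total_seconds())  # 86425 seconds
days = total // 86400
hours = (total // 3600) % 24
minutes = (total // 60) % 60
seconds = total % 60
print(days, hours, minutes, seconds)  # 1 0 0 25
```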
With your sample data.
```
WITH application(aid, aname, stime) AS
(
SELECT 'A', 'ABC', TO_DATE('23-SEP-13', 'DD-MON-YY') FROM DUAL
UNION ALL
SELECT 'B', 'DEF', TO_DATE('24-SEP-13', 'DD-MON-YY') FROM DUAL
),
JOB(JID, JNAME, AID, START_, END_) AS
(
SELECT 1, 'job1', 'A', TO_DATE('10-OCT-13 13:06:20', 'DD-MON-YY HH24:MI:SS'), TO_DATE('11-OCT-13 13:06:45', 'DD-MON-YY HH24:MI:SS') FROM DUAL
UNION ALL
SELECT 2, 'job2', 'A', TO_DATE('10-OCT-13 14:06:20', 'DD-MON-YY HH24:MI:SS'), TO_DATE('11-OCT-13 14:09:55', 'DD-MON-YY HH24:MI:SS') FROM DUAL
UNION ALL
SELECT 3, 'job1', 'B', TO_DATE('10-OCT-13 15:16:20', 'DD-MON-YY HH24:MI:SS'), TO_DATE('11-OCT-13 15:06:45', 'DD-MON-YY HH24:MI:SS') FROM DUAL
UNION ALL
SELECT 4, 'job2', 'B', TO_DATE('10-OCT-13 15:26:20', 'DD-MON-YY HH24:MI:SS'), TO_DATE('11-OCT-13 15:46:45', 'DD-MON-YY HH24:MI:SS') FROM DUAL
)
SELECT a.aname,
A.stime,
j.jname,
floor(end_ - start_) DAYS ,
MOD(FLOOR ((end_ - start_) * 24), 24) HOURS ,
MOD (FLOOR ((end_ - start_) * 24 * 60), 60) MINUTES ,
MOD (FLOOR ((end_ - start_) * 24 * 60 * 60), 60) SECS
FROM application a,
JOB j
WHERE j.aid = a.aid
AND a.stime > to_date('23-sep-12','dd-mon-yy');
``` | Try this one using [**TIMESTAMPDIFF**](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_timestampdiff); specify the unit depending on whether you want the difference in months, days, minutes or seconds:
```
SELECT a.aname ,a.stime , j.jname,
TIMESTAMPDIFF(DAY,j.`start`,j.`end`) AS 'end-start',
TIMESTAMPDIFF(HOUR,j.`start`,j.`end`) AS `hours`,
TIMESTAMPDIFF(MINUTE,j.`start`,j.`end`) AS `minutes`,
TIMESTAMPDIFF(SECOND,j.`start`,j.`end`) AS `seconds`
FROM `Application` a
LEFT JOIN `Job` j ON (j.aid =a.aid)
```
> NOTE: for MySQL, make sure you have used the correct data types for the date columns | Combining different tables and extracting data on conditions | [
"",
"sql",
"oracle",
""
] |
I'm having a hard time figuring out how to **merge together two columns** with SQL (I'm new at it). It should be really simple, but I simply can't find a way to do it. I have two results from **two different select statements**, and they both have the **same number of rows**, but **different columns**. I just want to "attach" all the columns together.
Example.
This is the first table:

This is the second table:

The following query:
```
SELECT [t].* FROM [TrainerClass] AS [t];
```
or simply
```
SELECT * FROM [TrainerClass];
```
will give the result shown here:

Now, the second query, which is:
```
SELECT [d].[Description] AS [Name] FROM [DescriptionTranslation] AS [d], [TrainerClass] AS [t] WHERE [d].[TableName] = 'TrainerClass' AND [d].[FieldName] = 'Name' AND [d].[Code] = [t].[Code] AND [d].[Language] = 'en-EN';
```
will result in this table:

Pretty straight forward. Now, what I simply want to get is this:

Why is it so hard to me? What would you do to achieve that? Thanks in advance! | ```
SELECT [t].*,
[d].[Description] AS [Name],
[d2].[Description] AS [Description]
FROM [TrainerClass] AS [t] join [DescriptionTranslation] AS [d] on [t].[Code] = [d].[Code]
join [DescriptionTranslation] AS [d2] on [t].[Code] = [d2].[Code]
WHERE [d].[TableName] = 'TrainerClass' AND
[d].[FieldName] = 'Name' AND
[d].[Language] = 'en-EN' AND
[d2].[FieldName] = 'Description'
``` | I you have any common ID in your tables, you can use join as stated in answers, otherwise you can use union statement | Merging two or more columns from different SQL queries | [
"",
"sql",
"select",
"merge",
""
] |
So I am trying to build a query that will show me which users have the most points, for each type of activity. You can see the table structure below. Each activity has an **activity\_typeid** and each of those carries a certain **activity\_weight**.
In the example below, Bob has scored 50 points for calls and 100 points for meetings. James has scored 100 points for calls and 100 points for meetings.
```
userid activity_typeid activity_weight
------------------------------------------------------------
123 (Bob) 8765 (calls) 50
123 (Bob) 8121 (meetings) 100
431 (James) 8765 (calls) 50
431 (James) 8121 (meetings) 100
431 (James) 8765 (calls) 50
```
I want to be able to output the following:
1. Top Performer for Calls = James
2. Top Performer for Meetings = Bob, James.
I don't *know* the **activity\_typeid's** in advance, as they are entered randomly, so I was wondering if it is possible to build some sort of query that calculates the SUM for each DISTINCT/UNIQUE **activity\_typeid** ?
Thanks so much in advance. | What you need is equivalent of analytic function `DENSE_RANK()`. One way to do it in if you need top performers for each activity
```
SELECT a.activity_typeid, GROUP_CONCAT(a.userid) userid
FROM
(
SELECT activity_typeid, userid, SUM(activity_weight) activity_weight
FROM table1
-- WHERE ...
GROUP BY userid, activity_typeid
) a JOIN
(
SELECT activity_typeid, MAX(activity_weight) activity_weight
FROM
(
SELECT activity_typeid, userid, SUM(activity_weight) activity_weight
FROM table1
-- WHERE ...
GROUP BY userid, activity_typeid
) q
GROUP BY activity_typeid
) b
ON a.activity_typeid = b.activity_typeid
AND a.activity_weight = b.activity_weight
GROUP BY activity_typeid
```
Another way to emulate `DENSE_RANK()` in MySQL is to leverage session variables
```
SELECT activity_typeid, GROUP_CONCAT(userid) userid
FROM
(
SELECT activity_typeid, userid, activity_weight,
@n := IF(@g = activity_typeid, IF(@v = activity_weight, @n, @n + 1) , 1) rank,
@g := activity_typeid, @v := activity_weight
FROM
(
SELECT activity_typeid, userid,
SUM(activity_weight) activity_weight
FROM table1
-- WHERE ...
GROUP BY activity_typeid, userid
) q CROSS JOIN (SELECT @n := 0, @g := NULL, @v := NULL) i
ORDER BY activity_typeid, activity_weight DESC, userid
) q
WHERE rank = 1
GROUP BY activity_typeid
```
Output:
```
| ACTIVITY_TYPEID | USERID |
|-----------------|---------|
| 8121 | 123,431 |
| 8765 | 431 |
```
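As a self-contained sanity check (an aside, not part of the answer), the join-based version also runs on SQLite, which has `GROUP_CONCAT` as well; the sketch below uses the sample rows from the question, with the sum alias shortened to `w`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (userid INT, activity_typeid INT, activity_weight INT);
INSERT INTO table1 VALUES
  (123, 8765, 50), (123, 8121, 100),
  (431, 8765, 50), (431, 8121, 100), (431, 8765, 50);
""")
rows = con.execute("""
SELECT a.activity_typeid, GROUP_CONCAT(a.userid)
FROM (SELECT activity_typeid, userid, SUM(activity_weight) AS w
      FROM table1 GROUP BY userid, activity_typeid) AS a
JOIN (SELECT activity_typeid, MAX(w) AS w
      FROM (SELECT activity_typeid, userid, SUM(activity_weight) AS w
            FROM table1 GROUP BY userid, activity_typeid) AS q
      GROUP BY activity_typeid) AS b
  ON a.activity_typeid = b.activity_typeid AND a.w = b.w
GROUP BY a.activity_typeid
ORDER BY a.activity_typeid
""").fetchall()
# 8121 has two tied top performers (123 and 431); 8765 has only 431
print(rows)
```

The concatenation order inside `GROUP_CONCAT` is not guaranteed, so the check compares the sets of user ids rather than the exact strings.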
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/7dc87f/18)** demo for both queries | You must use the `GROUP BY` statement to calculate the sum for each user and each activity typeid. Try something like this:
```
SELECT userid, activity_typeid, SUM(activity_weight)
FROM table
GROUP BY userid, activity_typeid
```
Then use this as a subquery to determine the top performer for each activity\_typeid. | using SUM and DISTINCT in the same query | [
"",
"mysql",
"sql",
""
] |
I want to write a select statement whose output, among other things, has both a lowest\_bid and highest\_bid column. I know how to do that bit, but what I also want is to show the user (user\_firstname and user\_lastname combined into their own column) as lowest\_bidder and highest\_bidder. What I have so far is:
```
select item_name, item_reserve, count(bid_id) as number_of_bids,
min(bid_amount) as lowest_bid, ???, max(bid_amount) as highest_bid,
???
from vb_items
join vb_bids on item_id=bid_item_id
join vb_users on item_seller_user_id=user_id
where bid_status = 'ok' and
item_sold = 'no'
order by item_reserve
```
(The ???'s are where the columns should go, once I figure out what to put there!) | This seems like good use of window functions. I've assumed a column `vb_bids.bid_user_id`. If there's no link between a bid and a user, you can't answer this question
```
With x as (
Select
b.bid_item_id,
count(*) over (partition by b.bid_item_id) as number_of_bids,
row_number() over (
partition by b.bid_item_id
order by b.bid_amount desc
) as high_row,
row_number() over (
partition by b.bid_item_id
order by b.bid_amount
) as low_row,
b.bid_amount,
u.user_firstname + ' ' + u.user_lastname username
From
vb_bids b
inner join
vb_users u
on b.bid_user_id = u.user_id
Where
b.bid_status = 'ok'
)
Select
i.item_name,
i.item_reserve,
min(x.number_of_bids) number_of_bids,
min(case when x.low_row = 1 then x.bid_amount end) lowest_bid,
min(case when x.low_row = 1 then x.username end) low_bidder,
min(case when x.high_row = 1 then x.bid_amount end) highest_bid,
min(case when x.high_row = 1 then x.username end) high_bidder
From
vb_items i
inner join
x
on i.item_id = x.bid_item_id
Where
i.item_sold = 'no'
Group By
i.item_name,
i.item_reserve
Order By
i.item_reserve
```
**`Example Fiddle`** | In order to get the users, I broke out the aggregates into their own tables, joined them by the `item_id` and filtered them by a derived value that is either the min or max of `bid_amount`. I could have joined to `vb_bids` for a third time, and kept the aggregate functions, but that would've been redundant.
This will fail if you have two low bids of the exact same amount for the same item, since the join is on `bid_amount`. If you use this, then you'd want to create an index on `vb_bids` covering `bid_amount`.
```
select item_name, item_reserve, count(bid_id) as number_of_bids,
low_bid.bid_amount as lowest_bid, low_user.first_name + ' ' + low_user.last_name,
high_bid.bid_amount as highest_bid, high_user.first_name + ' ' + high_user.last_name
from vb_items
join vb_bids AS low_bid on item_id = low_bid.bid_item_id
AND low_bid.bid_amount = (
SELECT MIN(bid_amount)
FROM vb_bids
WHERE bid_item_id = low_bid.bid_item_id)
join vb_bids AS high_bid on item_id = high_bid.bid_item_id
AND high_bid.bid_amount = (
SELECT MAX(bid_amount)
FROM vb_bids
WHERE bid_item_id = high_bid.bid_item_id)
join vb_users AS low_user on low_bid.user_id=user_id
join vb_users AS high_user on high_bid.user_id=user_id
where bid_status = 'ok' and
item_sold = 'no'
group by item_name, item_reserve,
low_bid.bid_amount, low_user.first_name, low_user.last_name,
high_bid.bid_amount, high_user.first_name, high_user.last_name
order by item_reserve
``` | Select statement to show the corresponding user with the lowest/highest amount? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
In MS SQL Server I have a field of type double that is called `ID`, and that stores ID numbers, surprisingly enough.
Anyway, I want to be able to search ID numbers like text - say I want all IDs that start with 021 or end with 04 - and to do so I need to convert the double to a string.
My problem is that ID numbers here are 9 digits, so when I try `SELECT str([id])` I get something like `02123+e23`, which is not good for my purpose.
How do I go about converting it to a string that looks exactly the same, and can be compared against other strings?
EDIT: I tried `SELECT str([id],9,0)` and I got the right string, or at least what looked right to me, but when comparing against equal strings the comparison failed. Any ideas why?
Thanks! | If your IDs are stored as numbers, then there will not be prefixed zeros, so finding all records that start with '021' really means finding those that start with '21':
```
SELECT * FROM MyTable
WHERE CAST(Id AS VARCHAR) LIKE '21%'
``` | `SELECT * FROM tablename WHERE SUBSTRING(CAST(id AS VARCHAR), 1, 9) = '123456789'` | MS SQL Server: cast double to string and compare text | [
"",
"sql",
"sql-server",
"casting",
""
] |
I am trying to store the results of an SQL query into a variable.The query simply detects the datatype of a column, hence the returned result is a single varchar.
```
SET @SQL =
'declare @@x varchar(max) SET @@x = (select DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
WHERE Table_name = ' +char(39)+@TabName+char(39) +
' AND column_name = ' +char(39)+@colName+char(39) + ')'
EXECUTE (@SQL)
```
Anything within the 'SET declaration' cannot access any variables outside of it and vice versa, so I am stuck on how to store the results of this query in a varchar variable to be accessed by other parts of the stored procedure. | You dont need a dynamic query to achieve what you want, below query will give the same result as yours.
```
declare @x varchar(max)
declare @tableName varchar(100), @ColumnName varchar(50)
set @tableName = 'Employee'
set @ColumnName = 'ID'
select @x = DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
where
Table_Name = @tableName
and column_name = @ColumnName
select @x
``` | All user-defined variables in T-SQL have private local-scope *only*. They cannot be seen by any other execution context, not even nested ones (unlike #temp tables, which *can* be seen by nested scopes). Using "*@@*" to try to trick it into making a global-variable doesn't work.
If you want to execute dynamic SQL and return information there are several ways to do it:
1. Use [sp\_ExecuteSQL](http://technet.microsoft.com/en-us/library/ms188001%28v=sql.90%29.aspx) and make one of the parameters an `OUTPUT` parameter (recommended for single values).
2. Make a #Temp table before calling the dynamic SQL and then have the Dynamic SQL write to the same #Temp table (recommended for multiple values/rows).
3. Use the `INSERT..EXEC` statement to execute your dynamic SQL which returns its information as the output of a SELECT statement. If the `INSERT` table has the same format as the dynamic SQL's `SELECT` output, then the data output will be inserted into your table.
4. If you want to return only an integer value, you can do this through the `RETURN` statement in dynamic SQL, and receive it via `@val = EXEC('...')`.
5. Use the Session context-info buffer (not recommended).
However, as others have pointed out, you shouldn't actually need dynamic SQL for what you are showing us here. You can do just this with:
```
SET @x = ( SELECT DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
WHERE Table_name = @TabName
AND column_name = @colName )
``` | T-SQL: Variable Scope | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
"stored-procedures",
""
] |
I'm working on a small project in which I'll need to select a record from a temporary table based on the actual row number of the record.
How can I select a record based on its row number? | A couple of the other answers touched on the problem, but this might explain. There really isn't an order implied in SQL (set theory). So to refer to the "fifth row" requires you to introduce the concept
```
Select *
From
(
Select
Row_Number() Over (Order By SomeField) As RowNum
, *
From TheTable
) t2
Where RowNum = 5
```
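The same pattern can be exercised outside SQL Server: SQLite (3.25+) also supports `ROW_NUMBER()`. The table and data in this sketch are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TheTable (SomeField TEXT)")
con.executemany("INSERT INTO TheTable VALUES (?)",
                [("a",), ("b",), ("c",), ("d",), ("e",), ("f",)])
# Number the rows by SomeField in a subquery, then pick the fifth one
row = con.execute("""
SELECT SomeField FROM (
  SELECT ROW_NUMBER() OVER (ORDER BY SomeField) AS RowNum, SomeField
  FROM TheTable
) WHERE RowNum = 5
""").fetchone()
print(row)  # -> ('e',)
```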
In the subquery, a row number is "created" by defining the order you expect. Now the outer query is able to pull the fifth entry out of that ordered set. | Technically SQL Rows do not have "RowNumbers" in their tables. Some implementations (Oracle, I think) provide one of their own, but that's not standard and SQL Server/T-SQL does not. You can add one to the table (sort of) with an IDENTITY column.
Or you can add one (for real) in a query with the ROW\_NUMBER() function, but unless you specify your own unique ORDER for the rows, the ROW\_NUMBERS will be assigned non-deterministically. | How to select a row based on its row number? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I've read this article: <http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/> and search for other questions
I have a table that is something like:
```
| table.id | USER.id
----------------------------------------------
| 1 | 101
| 2 | 101
| 3 | 101
| 4 | 101
| 5 | 101
| 6 | 101
| 7 | 101
| 8 | 101
| 9 | 101
| 10 | 101
| 11 | 102
| 12 | 102
| 13 | 102
| 14 | 102
| 15 | 103
| 16 | 103
| 17 | 103
| 18 | 103
| 19 | 103
| 20 | 103
| 21 | 103
| 22 | 103
| 23 | 103
| 24 | 104
| 25 | 104
| 26 | 104
| 27 | 104
| 28 | 104
| 29 | 104
| 30 | 105
| 31 | 105
| 32 | 105
| 33 | 106
| 34 | 106
```
I'm trying to get the count of table.id grouped by user.id, and if the count of user.id is more than 7, only display the result as 7 (aka limiting the count results to 7).
In this example, the result should be:
```
| USER.id | count of table.ID
----------------------------------------
| 101 | 7
| 102 | 4
| 103 | 7
| 104 | 6
| 105 | 3
| 106 | 2
```
I've tried:
```
SELECT USERid, COUNT(table.id)
FROM table
WHERE table.id IN (select top 7 table.id from table)
GROUP BY USERid
```
and
```
SELECT USERid, COUNT(table.id)
FROM table
WHERE (
SELECT COUNT(table.ID) FROM table as t
WHERE t.id = t.id AND t.USERid <= table.USERid
) <= 7
GROUP BY USERid
``` | You can simplify your query, and use [LEAST](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_least) function
```
SELECT USERid, LEAST(7, COUNT(*))
FROM table
GROUP BY USERid
```
from the question in your comment
```
SELECT SUM(countByUser)
FROM
(SELECT LEAST(7, COUNT(*)) as countByUser
FROM table
GROUP BY USERid) c
```
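As an aside: SQLite has no `LEAST`, but its two-argument scalar `MIN()` plays the same role, so the capped count can be sanity-checked like this (row counts mirror the question's sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (userid INT)")
counts = {101: 10, 102: 4, 103: 9, 104: 6, 105: 3, 106: 2}
con.executemany("INSERT INTO t VALUES (?)",
                [(u,) for u, n in counts.items() for _ in range(n)])
rows = con.execute("""
SELECT userid, MIN(7, c)  -- scalar MIN acts like LEAST: cap each count at 7
FROM (SELECT userid, COUNT(*) AS c FROM t GROUP BY userid)
ORDER BY userid
""").fetchall()
print(rows)  # -> [(101, 7), (102, 4), (103, 7), (104, 6), (105, 3), (106, 2)]
```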
[SqlFiddle](http://sqlfiddle.com/#!2/1a1f4/1) | ```
SELECT userid,
CASE
WHEN COUNT(*) > 7 THEN 7
ELSE COUNT(*)
END AS Qty
FROM tbl
GROUP BY userid
``` | mysql select the first n rows per group | [
"",
"mysql",
"sql",
"group-by",
""
] |
I have a situation where I need to get the differences between two columns.
> Column 1: 01-OCT-13 10:27:15
> Column 2: 01-OCT-13 10:28:00
I need to get the differences between the two columns above. I tried using the '-' operator, but the output is not in the expected format.
I need output as follows: 00-00-00 00:00:45 | Try this query but it is not tested
```
SELECT extract(DAY
FROM diff) days,
extract(hour
FROM diff) hours,
extract(MINUTE
FROM diff) minutes,
extract(SECOND
FROM diff) seconds,
extract(YEAR
FROM diff) years
FROM
(SELECT (CAST (to_date(t.column1, 'yyyy-mm-dd hh:mi:ss') AS TIMESTAMP) -CAST (to_date(t.column2, 'yyyy-mm-dd hh:mi:ss') AS TIMESTAMP)) diff
FROM tablename AS t)
``` | Try this too,
```
WITH T(DATE1, DATE2) AS
(
SELECT TO_DATE('01-OCT-13 10:27:15', 'DD-MON-YY HH24:MI:SS'),TO_DATE('01-OCT-13 10:28:00', 'DD-MON-YY HH24:MI:SS') FROM DUAL
)
SELECT floor(date2 - date1)
|| ' DAYS '
|| MOD(FLOOR ((date2 - date1) * 24), 24)
|| ' HOURS '
|| MOD (FLOOR ((date2 - date1) * 24 * 60), 60)
|| ' MINUTES '
|| MOD (FLOOR ((date2 - date1) * 24 * 60 * 60), 60)
|| ' SECS ' time_difference
FROM T;
``` | Date Time differences calculation in Oracle | [
"",
"sql",
"oracle",
""
] |
I've got 2 databases, let's call them **Database1** and **Database2**, a user with very limited permissions, let's call it **User1**, and a stored procedure in **Database1**, let's call it **Proc1**.
I grant `EXECUTE` permission to **User1** on **Proc1**; `GRANT EXECUTE ON [dbo].[Proc1] TO [User1]` and things worked fine as long as all the referenced tables (for SELECT, UPDATE ... etc.) are in **Database1**, although **User1** does not have explicit permission on those tables.
I modified **Proc1** to SELECT from a table, let's call it **Table1**, in **Database2**. Now when I execute **Proc1** I get the following error: The SELECT permission was denied on the object 'Table1', database 'Database2', schema 'dbo'
My understanding is that SQL Server will take care of the required permissions when I grant EXECUTE to a stored procedure; does that work differently when the table (or object) is in another database?
Notes:
* **User1** is a user in both databases with same limited permissions
* I'm using SQL Server 2005 | There seem to be a difference when SQL Server checks the permissions along the permission chain. Specifically:
> SQL Server can be configured to allow ownership chaining between specific databases or across all databases inside a single instance of SQL Server. Cross-database ownership chaining is disabled by default, and should not be enabled unless it is specifically required.
(Source: <http://msdn.microsoft.com/en-us/library/ms188676.aspx>) | Late to the party, but I recommend taking a look at signed stored procedures for this scenario. They are much more secure than granting users permissions to multiple databases or turning on database ownership chaining.
> ... To make it possible for a user to run this procedure without
> SELECT permission on testtbl, you need to take these four steps:
>
> 1.Create a certificate.
>
> 2.Create a user associated with that certificate.
>
> 3.Grant that user SELECT rights on testtbl.
>
> 4.Sign the procedure with the certificate, each time you have changed the procedure.
>
> When the procedure is invoked, the rights of the certificate user are
> added to the rights of the actual user. ...
(From <http://www.sommarskog.se/grantperm.html#Certificates>)
Further documentation is also available on MSDN. | How GRANT EXECUTE and cross-database reference works in SQL Server? | [
"",
"sql",
"sql-server",
""
] |
I have the data in one column.
Ex:
dog
dog
cat
fox
How do I change the value of all dogs to fox and values of all fox to dog at the same time? If I run an UPDATE SET it will change all dogs to fox, then I run the second one it'll turn everything into dogs. | ```
update
table
set
animal = case animal when 'dog' then 'fox' else 'dog' end
where
animal in ('dog', 'fox')
``` | If you have more than just dogs and foxes you could consider making a conversion table to help you with the update. That way you won't need a case statement.
```
declare @original table
(
pk int identity(1,1),
column1 varchar(max)
)
insert into @original
select 'fox'
union all
select 'dog'
declare @conversion table
(
valueFrom varchar(max)
,valueTo varchar(max)
)
insert into @conversion
select 'fox','dog'
union all
select 'dog','fox'
select * from @original
update original
set original.column1 = c.valueTo
from @original as original inner join @conversion as c
on original.column1 = c.valueFrom
select * from @original
``` | How to swap values from two fields at the same time in SQL? | [
"",
"sql",
"sql-server",
""
] |
Let's say I have a table like this (with the `num` column being indexed) :
```
+-----+--------------+
| num | lots of cols |
+-----+--------------+
| 31 | bla 31 |
| 67 | bla 67 |
| 88 | bla 88 |
| 89 | bla 89 |
+-----+--------------+
```
And I want to swap the num of one row whose num is X with the precedent one (based on the order defined by `num`).
For example if I'm given X=88, I want to update the `num` of two rows so as to get
```
+-----+--------------+
| num | lots of cols |
+-----+--------------+
| 31 | bla 31 |
| 67 | bla 88 |
| 88 | bla 67 |
| 89 | bla 89 |
+-----+--------------+
```
What would be the simplest and most efficient query or queries to do this without fetching all the columns (if possible just updating the `num` column) ? | First get the number that you want to swap with:
```
select max(num)
from TheTable
where num < 88
```
Then use that to swap the numbers:
```
update TheTable
set num = (67 + 88) - num
where num in (67, 88)
```
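As a quick sketch (an aside, not part of the answer), both steps can be replayed on SQLite with the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TheTable (num INT, data TEXT)")
con.executemany("INSERT INTO TheTable VALUES (?, ?)",
                [(31, "bla 31"), (67, "bla 67"), (88, "bla 88"), (89, "bla 89")])

# Step 1: find the predecessor of 88 (here: 67)
(prev,) = con.execute("SELECT MAX(num) FROM TheTable WHERE num < 88").fetchone()

# Step 2: swap the two num values via the sum trick
con.execute("UPDATE TheTable SET num = (? + 88) - num WHERE num IN (?, 88)",
            (prev, prev))

rows = con.execute("SELECT num, data FROM TheTable ORDER BY num").fetchall()
print(rows)  # -> [(31, 'bla 31'), (67, 'bla 88'), (88, 'bla 67'), (89, 'bla 89')]
```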
(Note however that this only works as long as the sum of the two numbers are still within the range of the data type.) | This is based on @Guffa's answer. It simply combines the two queries into one:
```
update TheTable cross join
(select max(num) as num
from TheTable
where num < 88
) other
set num = (other.num + 88) - num
where num in (other.num, 88);
``` | Swap with precedent | [
"",
"mysql",
"sql",
""
] |
How do you run a SQL command in a shell script while storing its result in a variable? I tried this method, but it isn't executing the command; it treats the command as just a string.
```
#!/bin/ksh
variable1=`sqlplus -s username/pw@oracle_instance <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
select count(*) from table;
EXIT;
EOF`
if [ -z "$variable1" ]; then
echo "No rows returned from database"
exit 0
else
echo $variable1
fi
``` | ```
#!/bin/ksh
variable1=$(
echo "set feed off
set pages 0
select count(*) from table;
exit
" | sqlplus -s username/password@oracle_instance
)
echo "found count = $variable1"
``` | You can use a [heredoc](http://en.wikipedia.org/wiki/Here_document). e.g. from a prompt:
```
$ sqlplus -s username/password@oracle_instance <<EOF
set feed off
set pages 0
select count(*) from table;
exit
EOF
```
so `sqlplus` will consume everything up to the `EOF` marker as stdin. | How to run SQL in shell script | [
"",
"sql",
"oracle",
"shell",
"unix",
""
] |
I need to get a date range between 1st July and 31st October every year. Based on that, I have to update another column.
Date field is datetime. Should be like this below:
```
Select Cash = Case When date between '1st July' and '31st October' Then (Cash * 2) End
From MytTable
```
Note: this range should work for each and every year. | This is one way:
```
SELECT Cash = CASE WHEN RIGHT(CONVERT(VARCHAR(8),[date],112),4)
BETWEEN '0701' AND '1031' THEN Cash*2
ELSE Cash END --I added this
``` | For your case you could just use the month and make sure it falls between 7 to 10.
This is how your query will be:
```
select Cash = case when month([Date]) in (7, 8, 9, 10) then (Cash * 2) else Cash end
```
or
```
select Cash = case when month([Date]) between 7 and 10 then (Cash * 2) else Cash end
``` | Get date range between dates having only month and day every year | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have the 3 records below in my table (**TAG\_DATA**), column (**TAGS**).
```
car,bus,van
bus,car,ship,van
ship
```
I wrote a query to get records which has `car` and `bus` as below.
```
SELECT * FROM TAG_DATA
where TAGS like '%car, bus%'
```
But the above query returns only the record below.
```
car,bus,van
```
But I need to get the output below, because both records have `car` and `bus`:
```
car,bus,van
bus,car,ship,van
```
How can i write a query for this ? I'm using MS SQL Server.
**UPDATED**
I'm selecting tags from multi select combobox in my application. so i need to give text in that. so can't use `and`/ `or` in my query. | Please try:
```
DECLARE @input NVARCHAR(MAX)='car, bus'
SELECT DISTINCT B.*
FROM(
SELECT
LTRIM(Split.a.value('.', 'VARCHAR(100)')) AS CVS
FROM
(
SELECT
CAST ('<M>' + REPLACE(@input, ',', '</M><M>') + '</M>' AS XML) AS CVS
) AS A CROSS APPLY CVS.nodes ('/M') AS Split(a)
)x INNER JOIN TAG_DATA b on TAGS like '%'+CVS+'%'
``` | Here, only records having `car, bus` tags in sequence will be fetched.
For the desired result, below query is beneficial :
```
SELECT * FROM TAG_DATA
where TAGS like '%car, bus%' or TAGS like '%bus, car%'
```
**[SQL FIDDLE](http://sqlfiddle.com/#!3/55ad4f/1)** | Query a table to find records with given text | [
"",
"sql",
"sql-server",
"select",
""
] |
I hope that you can help me on some SQL theory, as I am not 100% sure how this works.
If I have a trigger and I define it as
```
AFTER DELETE ON xxxx
```
I was wondering when exactly this would fire, in terms of transaction management?
So if I delete a record from my table I assume that the trigger will not fire until I type commit and finish the transaction. Is this correct?
If so, then I assume that if the commit on my delete statement works but the trigger fails for some reason then only the trigger would be rolled back, and the original executed delete statement that I performed would still be committed (because I have it defined as AFTER DELETE).
Can somebody please confirm this?
Thanks. | ```
1. You delete a row on TABLE1 no COMMIT;
2. TRIGGER performs an action (This takes place before COMMIT or ROLLBACK for step1, but trigger will not have any commit or rollback in it)
3a. You apply commit - Both step1 and step2 gets completed .
3b. You apply rollback- Both step1 and step2 rolled back.
```
Either you give 3a or 3b | The purpose of SQL triggers is to ensure referential consistency. But when they would be exectued in a separate transaction commit, there would be the possibility that they leave data in an inconsistent state.
So the delete trigger is executed the moment you do the delete command. When this happens as a transaction and you roll it back, the triggered delete is also rolled back. | When exactly is an AFTER DELETE trigger fired | [
"",
"sql",
""
] |
I'm trying to select the user who has the MAX microposts count:
```
SELECT "name", count(*) FROM "users"
INNER JOIN "microposts" ON "microposts"."user_id" = "users"."id"
GROUP BY users.id
```
and this returns
```
"Delphia Gleichner";15
"Louvenia Bednar IV";10
"Example User";53
"Guadalupe Volkman";20
"Isabella Harvey";30
"Madeline Franecki II";40
```
But I want to select only `"Example User";53`, (user who has MAX microposts count)
I tried to add `HAVING MAX(count*)` but this didn't work. | I'd try an ORDER BY maximum DESC LIMIT 1, where maximum is the count(\*) alias. Something like:
```
SELECT "name", count(*) maximum FROM "users"
INNER JOIN "microposts" ON "microposts"."user_id" = "users"."id"
GROUP BY users.id
ORDER BY maximum DESC
LIMIT 1
```
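The idea can be verified on SQLite as well; this sketch (not part of the answer) generates micropost rows matching the counts from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE microposts (user_id INT);
""")
counts = {"Delphia Gleichner": 15, "Louvenia Bednar IV": 10, "Example User": 53,
          "Guadalupe Volkman": 20, "Isabella Harvey": 30, "Madeline Franecki II": 40}
for uid, (name, n) in enumerate(counts.items(), start=1):
    con.execute("INSERT INTO users VALUES (?, ?)", (uid, name))
    con.executemany("INSERT INTO microposts VALUES (?)", [(uid,)] * n)

row = con.execute("""
SELECT name, COUNT(*) AS maximum
FROM users JOIN microposts ON microposts.user_id = users.id
GROUP BY users.id
ORDER BY maximum DESC
LIMIT 1
""").fetchone()
print(row)  # -> ('Example User', 53)
```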
I don't have MySQL available right now, so I'm doing this on paper (and it might not work), but it's just an orientation. | ```
SELECT x.name, MAX(x.count)
FROM (
SELECT "name", count(*)
FROM "users" INNER JOIN "microposts" ON "microposts"."user_id" = "users"."id"
GROUP BY users.id
) x
GROUP BY x.name
``` | SQL select MAX(COUNT) | [
"",
"sql",
""
] |
Currently I am using the following query to display the following result.
```
SELECT * FROM RouteToGrowthRecord, GradeMaster,MileStoneMaster
WHERE MemberID = 'ALV01L11034A06' AND
RouteToGrowthRecord.GradeID=GradeMaster.GradeID AND
RouteToGrowthRecord.MileStoneID=MileStoneMaster.MileStoneID
ORDER BY CheckupDate DESC
```

Now I have another table named `RouteToGrowthRecord_st` that has the same
columns as `RouteToGrowthRecord`, plus some additional fields.
I need to display results that are present in both tables, i.e. if `RouteToGrowthRecord_st` has 3 records with the given `memberID`, then the output must contain 3 more records along with the above query result (for example, 9+3=12 records in total). | You can use UNION here to merge the results from both queries; use default values for the unmapped additional fields. | ```
SELECT * FROM RouteToGrowthRecord a inner join GradeMaster b inner
join MileStoneMaster c inner join RouteToGrowthRecord_st d on
a.GradeID=b.GradeID AND a.MileStoneID=c.MileStoneID and
d.GradeID=b.GradeID AND d.MileStoneID=c.MileStoneID
WHERE a.MemberID = 'ALV01L11034A06'
ORDER BY CheckupDate DESC
``` | Sql query to combine result of two tables | [
"",
"mysql",
"sql",
""
] |
NOTE: I checked [Understanding QUOTED\_IDENTIFIER](https://stackoverflow.com/questions/7481441/understanding-quoted-identifier) and it does not answer my question.
I got my DBAs to run an index I made on my Prod servers (they looked it over and approved it).
It sped up my queries just like I wanted. However, I started getting errors like this:
> 
As a developer I have usually ignored these settings. And it has never mattered. (For 9+ years). Well, today it matters.
I went and looked at one of the sprocs that are failing and it has this before the create for the sproc:
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
```
**Can anyone tell me from an application developer point of view what these set statements do?** (Just adding the above code before my index create statements did not fix the problem.)
NOTE: Here is an example of what my indexes looked like:
```
CREATE NONCLUSTERED INDEX [ix_ClientFilerTo0]
ON [ClientTable] ([Client])
INCLUDE ([ClientCol1],[ClientCol2],[ClientCol3] ... Many more columns)
WHERE Client = 0
CREATE NONCLUSTERED INDEX [IX_Client_Status]
ON [OrderTable] ([Client],[Status])
INCLUDE ([OrderCol1],[OrderCol2],[OrderCol3],[OrderCol4])
WHERE [Status] <= 7
GO
``` | OK, from an application developer's point of view, here's what these settings do:
## QUOTED\_IDENTIFIER
This setting controls how quotation marks `".."` are interpreted by the SQL compiler. When `QUOTED_IDENTIFIER` is ON then quotes are treated like brackets (`[...]`) and can be used to quote SQL object names like table names, column names, etc. When it is OFF (not recommended), then quotes are treated like apostrophes (`'..'`) and can be used to quote text strings in SQL commands.
## ANSI\_NULLS
This setting controls what happens when you try to use any comparison operator other than `IS` on NULL. When it is ON, these comparisons follow the standard, which says that comparing to NULL always fails (because it isn't a value, it's a flag) and never returns `TRUE`. When this setting is OFF (really ***not*** recommended) you can successfully treat it like a value and use `=`, `<>`, etc. on it and get back TRUE as appropriate.
The proper way to handle this is to instead use the `IS` (`ColumnValue IS NULL ..`).
## CONCAT\_NULL\_YIELDS\_NULL
This setting controls whether NULLs "propagate" when used in string expressions. When this setting is ON, it follows the standard and an expression like `'some string' + NULL ..` always returns NULL. Thus, in a series of string concatenations, one NULL can cause the whole expression to return NULL. Turning this OFF (also not recommended) will cause the NULLs to be treated like empty strings instead, so `'some string' + NULL` just evaluates to `'some string'`.
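Both ON behaviors follow the SQL standard, so they can be checked in any standards-following engine. A small sketch using SQLite (whose string concatenation operator is `||` rather than `+`):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Comparing to NULL with = is never true; IS NULL is the correct test
assert con.execute("SELECT NULL = NULL").fetchone() == (None,)
assert con.execute("SELECT NULL IS NULL").fetchone() == (1,)

# NULL propagates through concatenation; COALESCE stops the propagation
assert con.execute("SELECT 'some string' || NULL").fetchone() == (None,)
assert con.execute("SELECT 'some string' || COALESCE(NULL, '')").fetchone() \
    == ("some string",)
print("standard NULL semantics confirmed")
```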
The proper way to handle this is with the COALESCE (or ISNULL) function: `'some string' + COALESCE(NULL, '') ..`. | I find [the documentation](https://learn.microsoft.com/en-us/sql/t-sql/statements/set-quoted-identifier-transact-sql), [blog posts](http://ranjithk.com/2010/01/10/understanding-set-quoted_identifier-onoff/), [Stackoverflow answers](https://stackoverflow.com/questions/7481441/understanding-quoted-identifier) unhelpful in explaining what turning on `QUOTED_IDENTIFIER` means.
## Olden times
Originally, SQL Server allowed you to use **quotation marks** (`"..."`) and **apostrophes** (`'...'`) around strings interchangeably (like Javascript does):
* `SELECT "Hello, world!"` *--quotation mark*
* `SELECT 'Hello, world!'` *--apostrophe*
And if you wanted to name a table, view, stored procedure, column etc with something that would otherwise violate all the rules of naming objects, you could wrap it in **square brackets** (`[`, `]`):
```
CREATE TABLE [The world's most awful table name] (
[Hello, world!] int
)
SELECT [Hello, world!] FROM [The world's most awful table name]
```
And that all worked, and made sense.
## Then came ANSI
Then ANSI came along, and they had other ideas:
* use **apostrophe** (`'...'`) for strings
* if you have a funky name, wrap it in **quotation marks** (`"..."`)
* and we don't even care about your square brackets
Which means that if you wanted to *"quote"* a funky column or table name you must use quotation marks:
```
SELECT "Hello, world!" FROM "The world's most awful table name"
```
If you knew SQL Server, you knew that **quotation marks** were already being used to represent strings. Mechanically translating that *ANSI-SQL* into *T-SQL* string syntax produces nonsense:
```
SELECT 'Hello, world!' FROM 'The world''s most awful table name'
```
and SQL Server tells you so:
```
Msg 102, Level 15, State 1, Line 8
Incorrect syntax near 'The world's most awful table name'.
```
## You must opt-in to the new ANSI behavior
So Microsoft added a feature to let you opt-in to the ANSI flavor of SQL.
**Original** *(aka SET QUOTED\_IDENTIFIER OFF)*
```
SELECT "Hello, world!" --valid
SELECT 'Hello, world!' --valid
```
**SET QUOTED\_IDENTIFIER ON**
```
SELECT "Hello, world!" --INVALID
SELECT 'Hello, world!' --valid
```
These days everyone has `SET QUOTED_IDENTIFIER ON`, which technically means you should be using `quotes` rather than `square brackets` around identifiers:
**T-SQL (bad?)** *(e.g. SQL generated by Entity Framework)*
```
UPDATE [dbo].[Customers]
SET [FirstName] = N'Ian'
WHERE [CustomerID] = 7
```
**ANSI-SQL (good?)**
```
UPDATE "dbo"."Customers"
SET "FirstName" = N'Ian'
WHERE "CustomerID" = 7
```
In reality, nobody in the SQL Server universe uses U+0022 QUOTATION MARK (`"`) to wrap identifiers. We all continue to use square brackets (`[` and `]`). | ANSI_NULLS and QUOTED_IDENTIFIER killed things. What are they for? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a search screen where the user has 5 filters to search on.
I constructed a dynamic query based on these filter values, and I page 10 results at a time.
This is working fine in SQL2012 using `OFFSET` and `FETCH`, but I'm using ***two*** queries to do this.
I want to show the 10 results *and* display the total number of rows found by the query (let's say 1000).
Currently I do this by running the query ***twice*** - once for the Total count, then again to page the 10 rows.
Is there a more efficient way to do this? | You don't have to run the query twice.
```
SELECT ..., total_count = COUNT(*) OVER()
FROM ...
ORDER BY ...
OFFSET 120 ROWS
FETCH NEXT 10 ROWS ONLY;
```
Based on [the chat](https://chat.stackoverflow.com/rooms/38509/discussion-between-user788312-and-aaron-bertrand), it seems your problem is a little more complex - you are applying `DISTINCT` to the result in addition to paging. This can make it complex to determine exactly what the `COUNT()` should look like and where it should go. Here is one way (I just want to demonstrate this rather than try to incorporate the technique into your much more complex query from chat):
```
USE tempdb;
GO
CREATE TABLE dbo.PagingSample(id INT,name SYSNAME);
-- insert 20 rows, 10 x 2 duplicates
INSERT dbo.PagingSample SELECT TOP (10) [object_id], name FROM sys.all_columns;
INSERT dbo.PagingSample SELECT TOP (10) [object_id], name FROM sys.all_columns;
SELECT COUNT(*) FROM dbo.PagingSample; -- 20
SELECT COUNT(*) FROM (SELECT DISTINCT id, name FROM dbo.PagingSample) AS x; -- 10
SELECT DISTINCT id, name FROM dbo.PagingSample; -- 10 rows
SELECT DISTINCT id, name, COUNT(*) OVER() -- 20 (DISTINCT is not computed yet)
FROM dbo.PagingSample
ORDER BY id, name
OFFSET (0) ROWS FETCH NEXT (5) ROWS ONLY; -- 5 rows
-- this returns 5 rows but shows the pre- and post-distinct counts:
SELECT PostDistinctCount = COUNT(*) OVER(), -- 10
PreDistinctCount, -- 20
id, name
FROM
(
SELECT DISTINCT id, name, PreDistinctCount = COUNT(*) OVER()
FROM dbo.PagingSample
-- INNER JOIN ...
) AS x
ORDER BY id, name
OFFSET (0) ROWS FETCH NEXT (5) ROWS ONLY;
```
Clean up:
```
DROP TABLE dbo.PagingSample;
GO
``` | My solution is similar to "rs"'s answer:
```
DECLARE @PageNumber AS INT, @RowspPage AS INT
SET @PageNumber = 2
SET @RowspPage = 5
SELECT COUNT(*) OVER() totalrow_count,*
FROM tablename
where columnname like '%abc%'
ORDER BY columnname
OFFSET ((@PageNumber - 1) * @RowspPage) ROWS
FETCH NEXT @RowspPage ROWS ONLY;
```
The return result will include totalrow\_count as the first column name | Get total row count while paging | [
"",
"sql",
"sql-server",
"sql-server-2012",
"paging",
"fetch",
""
] |
I have the following so far.
```
DECLARE @Table Table (ID int, Value1 varchar(50), Value2 varchar(50), Value3 varchar(50))
INSERT INTO @Table (ID, Value1, Value2, Value3)
SELECT 1, 'One', 'Uno', 'FIrst'
UNION ALL
SELECT 2, 'Two', 'Dos', 'Second'
UNION ALL
SELECT 3, 'One', 'Uno', 'Third'
UNION ALL
SELECT 4, 'Three', 'Tres', 'Fourth'
SELECT *, CASE
WHEN COUNT(*) OVER (PARTITION BY Value1, Value2) > 1 THEN 1
ELSE 0
END AS Duplicate FROM @Table
```
This gives me the duplicate flag. I want to derive one more column holding the concatenated `Value3` values for the duplicate records, e.g. 'FIrst - Third' (shown on both duplicate rows).
Expected output
```
ID Value1 Value2 Value3 Duplicate Value3_Concat
1 One Uno FIrst 1 First - Third
3 One Uno Third 1 First - Third
4 Three Tres Fourth 0 NULL
2 Two Dos Second 0 NULL
``` | Unfortunately, SQL Server doesn't have an aggregate string-concatenation function (prior to `STRING_AGG` in SQL Server 2017), but you can use the XML trick:
```
select
t.ID, t.Value1, t.Value2, t.Value3,
case
when count(*) over(partition by t.Value1, t.Value2) > 1 then 1
else 0
end as Duplicate,
stuff(
(
select ' - ' + TT.Value3
from @Table as TT
where TT.Value1 = t.Value1 and TT.Value2 = t.Value2
for xml path(''), type
).value('.', 'nvarchar(max)')
, 1, 3, '') as Value3_Dupl
from @Table as t
```
**`sql fiddle demo`**
If you need to show duplicates only for `Duplicate = 1`, I think it's better to use cte (or subquery), it's a bit cleaner:
```
;with cte as (
select *, count(*) over(partition by Value1, Value2) as cnt
from @Table
)
select
t.ID, t.Value1, t.Value2, t.Value3,
case
when cnt > 1 then 1
else 0
end as Duplicate,
case
when cnt > 1 then
stuff(
(
select ' - ' + TT.Value3
from @Table as TT
where TT.Value1 = t.Value1 and TT.Value2 = t.Value2
for xml path(''), type
).value('.', 'nvarchar(max)')
, 1, 3, '')
end as Value3_Dupl
from cte as t
```
**`sql fiddle demo`**
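A small sketch of why the `TYPE` directive matters when concatenating with `FOR XML PATH` (illustrative literals):

```sql
-- Without TYPE/.value(), special characters come back XML-entitized:
SELECT (SELECT ' - ' + v
        FROM (VALUES ('A&B'), ('C<D')) AS t(v)
        FOR XML PATH(''));                                   -- ' - A&amp;B - C&lt;D'
-- With TYPE and .value(), the original characters survive:
SELECT (SELECT ' - ' + v
        FROM (VALUES ('A&B'), ('C<D')) AS t(v)
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'); -- ' - A&B - C<D'
```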
And just a quick note - I'm using `type` after `for xml path('')` and then taking the concatenated string with the `value('.', 'nvarchar(max)')` function, and this is not just because I like it. If you don't use the solution this way, and just rely on implicit conversion to `varchar`, characters like '<' or '&' will not be converted properly. | ```
SELECT *,
CASE
WHEN COUNT(*) OVER (PARTITION BY Value1, Value2) > 1 THEN 1
ELSE 0
END AS Duplicate,
CASE WHEN COUNT(*) OVER (PARTITION BY Value1, Value2) > 1
THEN STUFF((SELECT ' - ' + Value3
FROM @Table T
WHERE T.Value1 = V.Value1
AND T.Value2 = V.Value2
FOR XML PATH('')),
1, 3, '')
ELSE NULL
END
FROM @Table V
```
[SQL Fiddle Example](http://sqlfiddle.com/#!3/d41d8/22029) | Concatenation - T-SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I've got a stored procedure that runs a couple queries in a `WHILE` loop, and these queries in particular take exponentially longer than the rest of the SP, greatly bogging down my performance.
I've read in a few places that using an `IN` vs. an `INNER JOIN` can improve performance if you're only pulling columns from the first table, which is exactly what I'm doing.
The problem, however, is that I haven't the first clue on how I can replace my `INNER JOIN` with an `IN` clause. I'm getting the notion that the syntax for an `IN` is very different from an `INNER JOIN`, but I don't know where to start.
As a footnote, SQL is not my strong suit. All suggestions are greatly appreciated.
Here's one of the queries. They're both pretty much identical in terms of logic.
```
SELECT @CurrentKey = riv.Key
FROM ResourceInfo_VW riv
INNER JOIN (SELECT Track,
Code,
[Language]
FROM ResourceInfo_VW
WHERE Key = @CurrentKey) AS riv2
ON riv.Track = riv2.Track
AND riv.Code = riv2.Code
AND riv.[Language] = riv2.[Language]
INNER JOIN UserGroupCourseCatalog_VW ugcc
ON riv.Key = ugcc.Key
WHERE riv.[Type] = 'a'
AND ugcc.UserGroupKey = @UserGroupKey
``` | It took a bit of thinking, but I optimized this performance greatly by removing all of the `INNER JOIN`s from the relevant statements.
Relative to the time it takes to run, this query gets faster and faster based on the number of records being processed.
Here's what I ended up with. You'll notice a few extra variables. These are set at the start of every `WHILE` iteration.
```
SELECT @CurrentTrack = Track,
@CurrentCode = Code,
@CurrentLanguage = [Language]
FROM ResourceInfo_VW riv
WHERE riv.Key = @CurrentKey
SELECT riv.Key
FROM ResourceInfo_VW riv
WHERE riv.[Type] = 'a'
AND riv.Track = @CurrentTrack
AND riv.Code = @CurrentCode
AND riv.[Language] = @CurrentLanguage
AND riv.CourseKey IN (SELECT CourseKey
FROM UserGroupCourseCatalog_VW
WHERE UserGroupKey = @UserGroupKey)
``` | Yes the formats are very different and they technically do different things:
This is how a join is done:
```
SELECT * FROM TABLEA a JOIN TABLEB b on a.commonfield = b.commonfield
```
This is how you use an IN statement
```
SELECT * FROM TABLEA WHERE commonfield in (SELECT commonfield from tableb)
``` | How to replace an INNER JOIN with an IN? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
""
] |
I'm trying to use the SQL statement below, to retrieve data from 3 tables. I get a syntax error near [aren1002] however.
```
select
holiday_notes.*,
HOLIDAY_REF.holiday_name as holiday_name from [aren1002].[HOLIDAY_NOTES]
left join [aren1002].[HOLIDAY_REF] on holiday_notes.holiday_id=HOLIDAY_REF.holiday_id
[aren1002].[lookup].lookup_desc as type_desc from [aren1002].[HOLIDAY_NOTES]
left join [aren1002].[lookup] on holiday_notes.[type]=lookup.lookup_id
where [HOLIDAY_NOTES].delete_date is null order by [HOLIDAY_NOTES].create_date desc
```
I'm trying to add the holiday\_name column from HOLIDAY\_REF and type\_desc from the second table (lookup). I'm not totally sure how to structure the statement. | The columns you are selecting need to be put right after the `SELECT` portion of your query, then you put the `FROM` clause next, along with any join criteria. After that, you can put your `WHERE` clause.
```
select
holiday_notes.*,
HOLIDAY_REF.holiday_name as holiday_name,
[aren1002].[lookup].lookup_desc as type_desc
from [aren1002].[HOLIDAY_NOTES]
left join [aren1002].[HOLIDAY_REF] on holiday_notes.holiday_id=HOLIDAY_REF.holiday_id
left join [aren1002].[lookup] on holiday_notes.[type]=lookup.lookup_id
where [HOLIDAY_NOTES].delete_date is null order by [HOLIDAY_NOTES].create_date desc
``` | You have an extra `FROM` in there.
Try this
```
select
holiday_notes.*,
HOLIDAY_REF.holiday_name as holiday_name,
[aren1002].[lookup].lookup_desc as type_desc
from [aren1002].[HOLIDAY_NOTES]
left join [aren1002].[HOLIDAY_REF] on holiday_notes.holiday_id=HOLIDAY_REF.holiday_id
left join [aren1002].[lookup] on holiday_notes.[type]=lookup.lookup_id
where [HOLIDAY_NOTES].delete_date is null order by [HOLIDAY_NOTES].create_date desc
```
A select statement will normally follow the pattern of:
`SELECT` *required fields* `FROM` *table* (optionally) `JOIN` *join statements to other tables* `WHERE` *conditions* | SQL Select from multiple tables while matching values | [
"",
"sql",
"sql-server",
""
] |
I've made an event system for my school that handles registrations for events.
Since we only want students to access sign up on the site, I've got a table called **potentialusers**. Every entry has a FullName and an Email.
When a user signs up on the site, the site checks if the student's email is in the potentialusers table, and if it is, the user is added to the **users** table.
I've also got a table called **registrations**. Whenever a user registers for an event, it's added to this table.
An entry looks like this: **registrationid**, **eventid** (foreign key to the event table), **userid** (also a foreign key to the user table, not the potentialusers).
I want to search for a name with a LIKE statement and get a result list of users, a column that states if the user is registered for the event and a column that states if the user is even registered at all.
This is what I've tried (I added some comments in curly brackets):
```
SELECT FullName, Email from potentialusers
LEFT OUTER JOIN registrations ON (potentialusers.Email = registrations...{1})
WHERE events.eventid = '7'{2} AND potentialusers.Email LIKE = '%jazerix%';
```
**{1}** -> Here is the first problem: the registrations table doesn't contain an email column, only a reference to the user in the users table, which contains the email.
**{2}** -> Just so we can separate events, 7 is only an example.
In the end I want to return something like this:
 | Give this a try:
```
SELECT
p.FullName,
p.Email,
IF(u.userid IS NULL, 'False', 'True') RegisteredInSystem,
IF(r.registrationid IS NULL, 'False', 'True') RegisteredInEvent
FROM potentialusers p
LEFT JOIN users u ON p.Email = u.useremail
LEFT JOIN registrations r ON r.userid = u.id AND r.eventid = 7
WHERE p.FullName LIKE '%jazerix%'
``` | ```
SELECT potentialusers.FullName, potentialusers.Email
, IF(users.userid IS NULL, 'False', 'True') Registered
, registrations.registrationid
FROM potentialusers
LEFT JOIN users
ON potentialusers.Email = users.useremail
LEFT JOIN registrations
ON registrations.userid = users.id
WHERE potentialusers.Email LIKE '%jazerix%'
AND registrations.eventid = 7;
``` | MySQL Join two lists to check if a record exist | [
"",
"mysql",
"sql",
"join",
""
] |
I have two tables in an Access database. Table1 has more columns than Table2. I would like to merge those tables into one while removing duplicates. I have the following query:
```
SELECT FirstName, LastName, PhoneNumber FROM Table1
UNION
SELECT FirstName, LastName, Null as PhoneNumber FROM Table2
```
Problem is, I don't want to copy any entry from Table2 that has the same FirstName and LastName in Table1. How can I change the above query to accomplish that? Thanks in advance. | Start with a query which returns only those `Table2` rows which are not matched in `Table1`.
```
SELECT t2.FirstName, t2.LastName
FROM
Table2 AS t2
LEFT JOIN Table1 AS t1
ON
t2.FirstName = t1.FirstName
AND t2.LastName = t1.LastName
WHERE t1.FirstName Is Null;
```
Then use that `SELECT` in your `UNION` query.
```
SELECT FirstName, LastName, PhoneNumber FROM Table1
UNION ALL
SELECT t2.FirstName, t2.LastName, Null AS PhoneNumber
FROM
Table2 AS t2
LEFT JOIN Table1 AS t1
ON
t2.FirstName = t1.FirstName
AND t2.LastName = t1.LastName
WHERE t1.FirstName Is Null;
```
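A tiny illustration of the `UNION` vs `UNION ALL` difference (written in T-SQL/MySQL style for brevity; in Access you would select from a real table):

```sql
SELECT 'a' AS v UNION     SELECT 'a';  -- 1 row: UNION de-duplicates the combined result
SELECT 'a' AS v UNION ALL SELECT 'a';  -- 2 rows: UNION ALL keeps every row (and skips the de-duplication work)
```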
Note I used `UNION ALL` because it requires less work by the db engine so is therefore faster. Use just `UNION` when you want the db engine to weed out duplicate rows. But, in this case, that would not be necessary ... unless duplicates exist separately within one or both of those tables. | Try constraining like follows:
```
SELECT FirstName, LastName, PhoneNumber FROM Table1
UNION
SELECT FirstName, LastName, Null as PhoneNumber FROM Table2
WHERE FirstName NOT IN (SELECT FirstName FROM Table1)
AND LastName NOT IN (SELECT LastName FROM TABLE1);
``` | How do I merge two tables with different column number while removing duplicates? | [
"",
"sql",
"ms-access",
"duplicates",
"union",
"ms-access-2010",
""
] |
I have a column that contains dates, and for each row I would like to update only the year. For example:
```
-------------
date
-------------
22/01/2013
16/02/2013
19/08/2013
23/01/2013
```
I want to change only the year part for every row, like this:
```
-------------
date
-------------
22/01/2012
16/02/2012
19/08/2012
23/01/2012
```
change it for the whole table
thanks | Using:
```
Update TableName set date = DateAdd(yy,-1,Date)
```
should subtract one year from each date field for you. | ```
Update table_name set date_field=DateAdd(yyyy,2012-year(date_field),date_field)
``` | How do you update only the year in and keep day and month - sql server | [
"",
"sql",
"sql-server-2008",
""
] |
So, I'm trying to create a procedure that is going to find
a specific row in my table, save the row in a result to be
returned, delete the row and afterwards return the result.
The best thing I managed to do was the following:
```
CREATE OR REPLACE FUNCTION sth(foo integer)
RETURNS TABLE(a integer, b integer, ... other fields) AS $$
DECLARE
to_delete_id integer;
BEGIN
SELECT id INTO to_delete_id FROM my_table WHERE sth_id = foo LIMIT 1;
RETURN QUERY SELECT * FROM my_table WHERE sth_id = foo LIMIT 1;
DELETE FROM my_table where id = to_delete_id;
END;
$$ LANGUAGE plpgsql;
```
As you see, I have 2 `SELECT` operations that pretty much do the same thing (extra
overhead). Is there a way to just have the second `SELECT` and also set the `to_delete_id`
so I can delete the row afterwards? | You just want a [DELETE...RETURNING](http://www.postgresql.org/docs/current/static/sql-delete.html).
```
DELETE FROM my_table WHERE sth_id=foo LIMIT 1 RETURNING *
```
Edit based on ahwnn's comment. Quite right too - teach me to cut + paste the query without reading it properly.
```
DELETE FROM my_table WHERE id = (SELECT id ... LIMIT 1) RETURNING *
``` | Can be done much easier:
```
CREATE OR REPLACE FUNCTION sth(foo integer)
RETURNS SETOF my_table
AS
$$
BEGIN
return query
DELETE FROM my_table p
where sth_id = foo
returning *;
END;
$$
LANGUAGE plpgsql;
``` | Same queries in PostgreSQL stored procedure | [
"",
"sql",
"postgresql",
"select",
"stored-procedures",
""
] |
Let's assume I have the following 'Products' table:
```
ProductID | ProductName
----------+---------
0255463 | ProductA
0254483 | ProductB
0255341 | ProductC
0905454 | ProductD
```
Is there a way (in Android's SQLite) to select each two consecutive rows into a single result row?
Here is the desired query result:
```
FirstProductID | FirstProductName | SecondProductID | SecondProductName
---------------+------------------+-----------------+---------
0255463 | ProductA | 0254483 | ProductB
0255341 | ProductC | 0905454 | ProductD
```
I would like a generic solution that can be used with any table, regardless of the table content. | Create a temporary table with an auto-increment column
```
CREATE TEMP TABLE temp(
id INTEGER PRIMARY KEY AUTOINCREMENT,
pid int,
pname text
);
```
Insert select this data into the temporary table
```
INSERT INTO temp (pid, pname) SELECT * FROM Products;
```
Join the temporary table to itself on `t1.id + 1 = t2.id`, keeping the odd-numbered rows (`id % 2 = 1`, since the auto-increment ids start at 1) as the left side of each pair
```
SELECT t1.pid AS t1_pid, t1.pname AS t1_pname,
t2.pid AS t2_pid, t2.pname AS t2_pname
FROM temp AS t1 LEFT JOIN temp AS t2
ON t1.id + 1 = t2.id
WHERE t1.id % 2 = 1;
``` | Single query (not faster):
```
SELECT
First.ProductId AS FirstProductId,
First.ProductName AS FirstProductName,
Second.ProductId AS SecondProductId,
Second.ProductName AS SecondProductName
FROM
(SELECT *, Cnt/2 AS Line FROM (
SELECT *, (
SELECT COUNT() FROM Products AS _ WHERE ROWID<Products.ROWID
) AS Cnt FROM Products WHERE Cnt%2=0
)) AS First
LEFT JOIN
(SELECT *, Cnt/2 AS Line FROM (
SELECT *, (
SELECT COUNT() FROM Products AS _ WHERE ROWID<Products.ROWID
) AS Cnt FROM Products WHERE Cnt%2=1
)) AS Second
ON First.Line = Second.Line
ORDER BY First.Line;
```
If you need a faster solution, @kzarns proposed a good one. | select query - each two row into single result row | [
"",
"android",
"sql",
"sqlite",
""
] |
I have a column called title_runtime that's in seconds, which I want to show in hours, and I also want to show only rows where the runtime is over three hours. This is what I have, minus the conversion:
```
select title_name, title_type, title_release_year, title_runtime as title_runtime_hrs
from nf_titles
where title_release_year = '1999' or
title_release_year = '2000' or
title_release_year = '2001' and
title_runtime > 3
order by title_release_year, title_runtime_hrs
``` | If both operands of the division are `int`, the result will be `int`.
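A quick illustration of the integer-division pitfall (illustrative values):

```sql
SELECT 5400 / 3600;    -- 1: int / int truncates
SELECT 5400 / 3600.0;  -- 1.500000: a decimal operand forces a decimal result
```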
Try this:
```
select title_name, title_type, title_release_year, title_runtime / 3600.0 as title_runtime_hrs
from nf_titles
where (title_release_year = '1999' or
title_release_year = '2000' or
title_release_year = '2001')
and title_runtime > 3*3600
order by title_release_year, title_runtime_hrs
``` | To get a precise measurement of hours, you need to divide by 3600.0. Not just 3600. If you divide by 3600 you will never get an accurate amount of hours. For example, if the runtime is an hour 1/2, you need the result to be 1.5 hours. This is not possible without dividing by 3600.0.
```
SELECT title_name, title_type, title_release_year, title_runtime / 3600.0 as title_runtime_hrs
FROM nf_titles
WHERE (title_release_year = '1999' or
title_release_year = '2000' or
title_release_year = '2001')
AND title_runtime > (3.0 * 3600.0)
order by title_release_year, title_runtime_hrs
``` | Show a column that's currently in seconds in hours? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to insert two rows for each source row in the table, as follows:
```
INSERT INTO `wp_postmeta` (post_id, meta_key, meta_value)
SELECT post_id, 'custom', custom
SELECT post_id, '_custom', 'field_514e91e73640d'
FROM `wp_temp`
WHERE custom IS NOT NULL
```
However, this doesn't work. Is there any way to achieve this in one query in MySQL? Thanks. | ```
INSERT INTO `wp_postmeta` (post_id, meta_key, meta_value)
SELECT post_id, 'custom', custom FROM `wp_temp` WHERE custom IS NOT NULL
UNION ALL
SELECT post_id, '_custom', 'field_514e91e73640d' FROM `wp_temp` WHERE custom IS NOT NULL
``` | You will need a separate `FROM` and `WHERE` clause with each `SELECT`, but it's possible with UNION or UNION ALL.
```
INSERT INTO `wp_postmeta` (post_id, meta_key, meta_value)
SELECT post_id, 'custom', custom
FROM `wp_temp`
WHERE custom IS NOT NULL
UNION
SELECT post_id, '_custom', 'field_514e91e73640d'
FROM `wp_temp`
WHERE custom IS NOT NULL
``` | How to select insert more than once using INSERT INTO? | [
"",
"mysql",
"sql",
""
] |
I have a sql query in which I am trying to get sub groups on the database and bind to a dropdown list in asp.net. On the database side I am using MSSQL server 2008 R2 and I have the below query on it:
```
SELECT -1 ItemSubGroupId,
'Select Sub Group' AS ItemSubGroupName
UNION
SELECT ItemSubGroupId,
ItemSubGroupName
FROM dbo.tblSetupItemSubGroup SG
WHERE SG.ItemMainGroupId = 17
```
and the result is:

Now I want to sort all sub groups by ItemSubGroupName; after sorting, it gives me a result like
```
SELECT -1 ItemSubGroupId,
'Select Sub Group' AS ItemSubGroupName
UNION
SELECT ItemSubGroupId,
ItemSubGroupName
FROM dbo.tblSetupItemSubGroup SG
WHERE SG.ItemMainGroupId = 17
ORDER BY ItemSubGroupName
```

After sorting, the 'Select Sub Group' entry is also sorted into the list, as displayed. Is there any way to sort without moving the 'Select Sub Group' row? | It is unclear what you are asking.
I think you may be trying to have 'Select Sub Group' at the top of your list, with the rest of the list sorted alphabetically.
To do that, you would need to do something like:
```
SELECT -1, 'Select an item', 0
UNION ALL
SELECT 1, '1-test', null
UNION ALL
SELECT 2, '2-test', null
UNION ALL
SELECT 3, '0-test', null
ORDER BY 3 DESC,2;
```
The third column has a `0` for the item you want at the top, and `NULL` for the rest of the rows. The query is then sorted by that column in `DESC` order, then the 2nd column. This will put the desired item at the top, and retain the alphabetic sort for the remaining items.

Here is a more concrete example:
```
CREATE TABLE OrderTest
(
ID INT NOT NULL
, Data NVARCHAR(255)
);
INSERT INTO OrderTest (ID, Data) VALUES (1, '1st test');
INSERT INTO OrderTest (ID, Data) VALUES (1, '2nd test');
INSERT INTO OrderTest (ID, Data) VALUES (1, '3rd test');
INSERT INTO OrderTest (ID, Data) VALUES (1, '4th test');
SELECT -1 AS ID, 'Select an item' AS Data, 0 AS OrderByCol
UNION ALL
SELECT ID, Data, NULL AS OrderByCol
FROM OrderTest
ORDER BY OrderByCol DESC, Data;
```
 | You have to order both by creating a subquery
```
SELECT ItemSubGroupId, ItemSubGroupName
FROM (
SELECT -1 ItemSubGroupId,
'Select Sub Group' AS ItemSubGroupName
UNION
SELECT ItemSubGroupId,
ItemSubGroupName
FROM dbo.tblSetupItemSubGroup SG
WHERE SG.ItemMainGroupId = 17
) X
ORDER BY
CASE ItemSubGroupId WHEN -1 THEN 0 ELSE 1 END,
ItemSubGroupName
``` | How to sort sql query having an union statement without specific section | [
"",
"sql",
"sql-server",
"sorting",
"sql-order-by",
"union",
""
] |
When I try to execute this stored procedure it gives me an "is not a valid identifier" error. Can anyone help me?
```
CREATE Proc [dbo].[AddLogtrail]
@cmodule varchar(100),
@caudittype varchar(15),
@ctable varchar(100),
@cfield varchar(100),
@cfieldtype varchar(100),
@coriginalval varchar(100),
@cnewval varchar(100),
@cuser varchar(100),
@creason varchar(100) AS
DECLARE @SQL varchar (400)
set @SQL = 'Insert into AudittrailLog (ddatetime,cmodulename,caudittype,ctablename,cfieldname,cfieldtype,coriginalval,cnewval,cuser,creason) values ' +
'(convert(datetime,GETDATE(),105),'+@cmodule+','+@caudittype+','+@ctable+','+@cfield+','+@cfieldtype+','+@coriginalval+','+@cnewval+','+@cuser+','+@creason+')';
PRINT @SQL
Exec @SQL
``` | You need to add additional quotes for varchar values
```
set @SQL = 'Insert into AudittrailLog (ddatetime,cmodulename,caudittype,ctablename,cfieldname,cfieldtype,coriginalval,cnewval,cuser,creason) values ' +
'(convert(datetime,GETDATE(),105),'''+@cmodule+''','''+@caudittype+''','''+@ctable+''','''+@cfield+''','''+@cfieldtype+''','''+@coriginalval+''','''+@cnewval+''','''+@cuser+''','''+@creason+''')';
```
When printed it should display like this.
```
Insert into AudittrailLog ( ddatetime ,cmodulename ,caudittype ,ctablename ,cfieldname ,cfieldtype ,coriginalval ,cnewval ,cuser ,creason) values (convert(datetime,GETDATE(),105),'test','test','test','test','test','test','test','test','test')
```
Also execute it like below
```
EXEC (@SQL)
``` | You need to wrap each `varchar` in '':
```
CREATE Proc [dbo].[AddLogtrail]
@cmodule varchar(100),
@caudittype varchar(15),
@ctable varchar(100),
@cfield varchar(100),
@cfieldtype varchar(100),
@coriginalval varchar(100),
@cnewval varchar(100),
@cuser varchar(100),
@creason varchar(100) AS
DECLARE @SQL varchar (400)
set @SQL = 'Insert into AudittrailLog (ddatetime,cmodulename,caudittype,ctablename,cfieldname,cfieldtype,coriginalval,cnewval,cuser,creason) values ' +
'(convert(datetime,GETDATE(),105),'''+@cmodule+''','''+@caudittype+''','''+@ctable+''','''+@cfield+''','''+@cfieldtype+''','''+@coriginalval+''','''+@cnewval+''','''+@cuser+''','''+@creason+''')';
PRINT @SQL
Exec @SQL
```
I hope I didn't miss any of ''. | Error Executing stored procedure | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I want to update 1 column in SQL Table. Example: Current value in column is like this
```
2013/09/pizzalover.jpg
2013/10/pasta.jpg
```
Now I want to update the whole column like this: www.mypizza.com/2013/09/pizzalover.jpg
Is there any way I can accomplish this? Thanks in advance | You can simply update column using statement
```
update TableName set ColumnName = 'www.mypizza.com/' + ColumnName
``` | If you are using MYSql, you can use the [`concat()`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat) as :
```
update tableName set columnName= CONCAT('www.mypizza.com/', columnName);
```
[SQLFiddle](http://sqlfiddle.com/#!2/6dd15/1)
If you are using oracle you can use the [concatenation operator '||'](http://docs.oracle.com/cd/B19306_01/server.102/b14200/operators003.htm) as :
```
update tableName set "columnName"='www.mypizza.com/'||"columnName";
```
[SQLFiddle](http://sqlfiddle.com/#!4/c6549/1)
In SQL Server you can use [`+` for string concatenation](http://technet.microsoft.com/en-us/library/ms177561.aspx) as:
```
update tableName set columnName='www.mypizza.com/'+columnName;
```
[SQLFiddle](http://sqlfiddle.com/#!6/3aa68/1) | How can I add text to SQL Column | [
"",
"sql",
"sql-update",
""
] |
```
<people>
<parent>
<parent-name> joel </parent-name>
<child> john </child>
<child> sara </child>
<child> ram </child>
</parent>
<parent>
<parent-name> sam </parent-name>
<child> david </child>
<child> george </child>
<child> wilson </child>
</parent>
</people>
```
the desired output is:
```
parent | child
--------|---------
joel | john
joel | sara
joel | ram
sam | david
sam | george
sam | wilson
```
I tried the following SQL query to retrieve all child elements for all parents, but I can only get the first child element:
```
select a.b.value('parent-name[1]','varchar(50)') as parent
, a.b.value('child[1]' ,'varchar(50)') as child
from @myxml.nodes('people/parent')a(b)
``` | The trick is to iterate the children, then go up one level and get the parent name.
```
select a.b.value('(../parent-name)[1]','varchar(50)') as parent
, a.b.value('(.)[1]' ,'varchar(50)') as child
from @myxml.nodes('people/parent/child')a(b)
``` | You need to use `CROSS APPLY` and `.nodes()` on the `<child>` nodes of each parent node:
```
SELECT
a.b.value('(parent-name)[1]', 'varchar(50)') as parent,
XChild.value('.' ,'varchar(50)') as child
FROM
@myxml.nodes('people/parent') AS a(b)
CROSS APPLY
b.nodes('child') AS XTbl(XChild)
```
This will return all parents with all their children. | How to retrieve all child elements for all parent elements from xml in sql | [
"",
"sql",
"sql-server",
"xml",
"t-sql",
""
] |
So let's say I have a table of cars with attributes make and model.
Now I want to query for all cars whose model starts with the letters 'Ex'
so it would be something like
```
SELECT model FROM cars WHERE model = 'Ex';
```
But I guess that only returns cars whose entire model name is Ex, not whose model starts with Ex. What is the query I need here?
I can't find any good resources that list basic queries such as this. Thanks! | Use the [LIKE (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms179859.aspx) operator.
For example:
```
SELECT model FROM cars WHERE model like 'Ex%';
``` | You could do :
```
SELECT model FROM cars WHERE model LIKE 'Ex%';
``` | basic sql query -- where prefix = | [
"",
"sql",
""
] |
Given this script:
```
DECLARE @token NVARCHAR(max)
SET @token = 'mytexttosearchfor'
SELECT * FROM myTable WHERE myField1 LIKE '%@token%'
```
It will not work. What is the correct way to use the contents of @token in this context? | You can use [string concatenation](http://technet.microsoft.com/en-us/library/aa276862%28v=sql.80%29.aspx), like this:
```
SELECT * FROM myTable WHERE myField1 LIKE '%'+@token+'%'
```
The reason that your approach did not work as expected is that variables are not resolved inside string literals: `'%@token%'` means a string composed of `%`, `@`, `t`, `o`, `k`, `e`, `n`, `%` characters - no token replacement is performed. | Try something like this
```
DECLARE @token NVARCHAR(max)
SET @token = 'mytexttosearchfor'
SELECT * FROM myTable WHERE myField1 LIKE '%' + @token + '%'
``` | SQL 2012 + LIKE + SPRINTF | [
"",
"sql",
"sql-server",
""
] |
I have a column `abc varchar(100)` with data like `2011-09-26 16:36:57.810000`
I want to convert this column to `DATETIME`...
But doing a
```
Convert(DATETIME, abc,120)
```
is giving this error:
> Conversion failed when converting date and/or time from character string.
Can anyone please help me convert my `varchar` format to `datetime` in SQL Server 2008?
Thanks in advance | You can use style 121 but you can have **only 3 digits for milliseconds** (i.e `yyyy-mm-dd hh:mi:ss.mmm(24h)`) format.
```
declare @abc varchar(100)='2011-09-26 16:36:57.810'
select convert(datetime,@abc,121)
```
So you can sort it out by limiting the varchar field to 23 characters before converting as:
```
declare @abc varchar(100)='2011-09-26 16:36:57.810000'
select convert(datetime,convert(varchar(23),@abc),121)
```
Or use the `Left()` function to get first 23 characters as:
```
select convert(datetime,left(@abc,23),121)
```
Try to avoid storing date as string. | In case you need 6 digits precision use `DATETIME2`
```
SELECT CONVERT(DATETIME2, '2016-08-09T08:08:50.358000', 126) as MSSQLDateTime2
SELECT CONVERT(DATETIME, '2016-08-09T08:08:50.358', 126) as MSSQLDateTime
``` | Convert varchar to datetime in sql which is having millisec | [
"",
"sql",
"sql-server-2008",
""
] |
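Combining the two answers above into one sketch: parse all six fractional digits with `DATETIME2(6)`, then narrow to `DATETIME` only if the column must stay that type (narrowing rounds to `DATETIME`'s ~3.33 ms precision):
```
DECLARE @abc varchar(100) = '2011-09-26 16:36:57.810000'
SELECT CONVERT(datetime2(6), @abc, 121) AS full_precision,
       CAST(CONVERT(datetime2(6), @abc, 121) AS datetime) AS narrowed
```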
Here is my query:
```
SELECT *
FROM client_info
WHERE (cid LIKE 'G%' OR cid LIKE 'H%' OR cid LIKE 'J%')
AND (client_name NOT LIKE 'P4%' OR client_name NOT LIKE 'P5%')
```
The results still contain client names starting with P4 and P5. What is wrong with NOT LIKE clause? | Change the second set's `OR` to an `AND`.
```
AND (client_name NOT LIKE 'P4%' AND client_name NOT LIKE 'P5%')
``` | Others have given you the correct answer - you need to use AND instead of OR. But it's worth understanding why. Take a client named P5\_JONES, for instance - why is this client showing up in the list? Because its name is NOT LIKE 'P4%'. Yes, it is like P5%, but with an OR in there, only one of those expressions needs to be true to satisfy the condition. | NOT LIKE clause is not working | [
"",
"mysql",
"sql",
""
] |
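Putting the accepted fix into the full query from the question (by De Morgan's laws, `NOT (x OR y)` is equivalent to `NOT x AND NOT y`):
```
SELECT *
FROM client_info
WHERE (cid LIKE 'G%' OR cid LIKE 'H%' OR cid LIKE 'J%')
AND client_name NOT LIKE 'P4%'
AND client_name NOT LIKE 'P5%'
```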
I want to compare two dates from two columns and get the greatest and then compare against a date value.
The two columns can hold NULL values too.
For example I want the below OUTPUT.
```
Col A Col B OUTPUT
---------------------------------------
NULL NULL NULL
09/21/2013 01/02/2012 09/21/2013
NULL 01/03/2013 01/03/2013
01/03/2013 NULL 01/03/2013
```
How do I use the GREATEST function, or is there another way?
I am again using the output to compare against another date. | Use Oracle [`CASE... WHEN` structure](http://www.techonthenet.com/oracle/functions/case.php) in your select:
```
SELECT COLA, COLB, CASE
WHEN (COLA >= COLB OR COLB IS NULL)
THEN COLA
ELSE COLB
END
AS OUTPUT
FROM ...
``` | Your question specifically involves two columns, but I've run into situations where I needed `GREATEST`/`LEAST` of more than two columns. In those scenarios you can use `COALESCE` and expand the solution to as many columns you want.
Here is an example with three columns `a`, `b`, and `c`:
```
GREATEST(
COALESCE(a, b, c),
COALESCE(b, c, a),
COALESCE(c, a, b)
)
```
Note that the column ordering of the `COALESCE` changes so that each input column is the first element of a `COALESCE` at least once. The only time this will return NULL is when all input columns are NULL.
In the "general solution" the number of `COALESCE` statements will be equal to the number of input columns:
```
GREATEST(
COALESCE(col1, col2, col3, col4, ....),
COALESCE(col2, col3, col4, ...., col1),
COALESCE(col3, col4, ...., col1, col2),
COALESCE(col4, ...., col1, col2, col3),
COALESCE(...., col1, col2, col3, col4),
...
)
``` | Handling Null in Greatest function in Oracle | [
"",
"sql",
"oracle",
""
] |
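For the two-column case in the question, the `COALESCE` pattern from the second answer reduces to the sketch below (column and table names are placeholders; it returns NULL only when both columns are NULL):
```
SELECT ColA, ColB,
       GREATEST(COALESCE(ColA, ColB), COALESCE(ColB, ColA)) AS OUTPUT
FROM your_table
```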
I am hoping you can help with this question. I am using Oracle SQL (SQL Developer for this view)...
If I have a table with the following columns:
* ColumnA (Number)
* ColumnB (Number)
* ColumnC (Number)
In my view I have
```
Select
ColumnA,
ColumnB,
ColumnA + ColumnB As calccolumn1
```
Now at this point, I want to use calccolumn1
but I cannot just say...
```
Select
ColumnA,
ColumnB,
ColumnA + ColumnB As calccolumn1
calccolumn1 / ColumnC as calccolumn2
```
I am assuming I need some type of subquery... but this is where I need your help.
How would I word the query so that I can use calccolumn1 in another calculation within the same query? It might be an if-then or a case-when, but the bottom line is that it is some derived number. | You could use a nested query:
```
Select
ColumnA,
ColumnB,
calccolumn1,
calccolumn1 / ColumnC as calccolumn2
From (
Select
ColumnA,
ColumnB,
ColumnC,
ColumnA + ColumnB As calccolumn1
from t42
);
```
With a row with values `3`, `4`, `5` that gives:
```
COLUMNA COLUMNB CALCCOLUMN1 CALCCOLUMN2
---------- ---------- ----------- -----------
3 4 7 1.4
```
You can also just repeat the first calculation, unless it's really doing something expensive (via a function call, say):
```
Select
ColumnA,
ColumnB,
ColumnA + ColumnB As calccolumn1,
(ColumnA + ColumnB) / ColumnC As calccolumn2
from t42;
COLUMNA COLUMNB CALCCOLUMN1 CALCCOLUMN2
---------- ---------- ----------- -----------
3 4 7 1.4
``` | **In Sql Server**
You can do this using `cross apply`
```
Select
ColumnA,
ColumnB,
c.calccolumn1 As calccolumn1,
c.calccolumn1 / ColumnC As calccolumn2
from t42
cross apply (select (ColumnA + ColumnB) as calccolumn1) as c
``` | How to use a calculated column to calculate another column in the same view | [
"",
"sql",
"oracle",
""
] |
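A third option, not shown in either answer above: Oracle also supports naming the intermediate calculation in a `WITH` clause (common table expression), which reads top-down (`t42` is the table name used in the accepted answer):
```
WITH base AS (
  SELECT ColumnA, ColumnB, ColumnC, ColumnA + ColumnB AS calccolumn1
  FROM t42
)
SELECT ColumnA, ColumnB, calccolumn1, calccolumn1 / ColumnC AS calccolumn2
FROM base
```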
I'm sure there's a way to do this but I haven't been able to put the pieces together.
I'd like to:
```
select table_name from INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA = 'myschema';
```
and then....
```
for each table_name in that list
BEGIN
CREATE table myschemacopy.table_name like myschema.table_name;
INSERT INTO myschemacopy.table_name SELECT * FROM myschema.table_name;
END LOOP
```
How can this be done? | Just copy the result of the query below and run it, or you could build a dynamic query.
```
select 'select * into [newschema].' + table_name + ' from [myschema].' + table_name
from INFORMATION_SCHEMA.TABLES
where TABLE_SCHEMA = 'myschema'
``` | If it's on the same server just a different database, you can use
SOLUTION 1 :
```
CREATE TABLE newdb.mynewtable LIKE olddb.myoldtable;
```
You can then insert data into the new table from the old using
```
INSERT newdb.mynewtable SELECT * FROM olddb.myoldtable;
```
[or]
SOLUTION 2
```
ALTER table olddb.myoldtable rename newdb.mynewtable
``` | Script to copy tables and data from one schema to another in MySql | [
"",
"mysql",
"sql",
""
] |
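Since the question is tagged MySQL, a MySQL-flavoured version of the accepted generator would use `CONCAT` rather than `+` (a sketch; execute the generated statements in a second step):
```
SELECT CONCAT('CREATE TABLE myschemacopy.', table_name,
              ' LIKE myschema.', table_name, '; ',
              'INSERT INTO myschemacopy.', table_name,
              ' SELECT * FROM myschema.', table_name, ';') AS stmt
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'myschema'
```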
First time using SQL and already confused.
I have a table :
```
A | B
--------
a | 6
b | 10
c | 12
```
I want to filter it by using string comparison, but getting the value of the second column :
```
myNum = SELECT B WHERE A ='a'
```
At the end I want the value of myNum to be 6.
Just can't get it right... any help would be much appreciated! | You need to specify the table name.
```
SELECT B from table_name WHERE A = 'a';
``` | You need the table name.
`SELECT b FROM tablename WHERE A='a';` | SQL - how to get specific value of cell? | [
"",
"mysql",
"sql",
""
] |
I want to count the values in a query result, but within the same query, if that makes sense. The original query is
```
SELECT CLOSING_FEE+CLOSING_FEE_INT+CLOSING_INS+CLOSING_INT+CLOSING_OTHER as BAL, total_closing_balance
FROM statement
```
This returns over 4000 rows. I want to check that the two add up. Is there a way to use the count function in the same query to count the first two values of the select statement?
Or would i have to use something like a temp table and then count? | try this if total\_closing\_balance is static
```
SELECT SUM(CLOSING_FEE + CLOSING_FEE_INT + CLOSING_INS + CLOSING_INT + CLOSING_OTHER) AS BAL,
       MAX(total_closing_balance) AS total_closing_balance
FROM isql.VW_300_statement
WHERE brand = '1'
  AND DAT = '2013-01-18 00:00:00.00000'
  AND INS_TYPE = ''
GROUP BY Brand, DAT
Use `Select sum(CLOSING_FEE + CLOSING_FEE_INT + CLOSING_INS + CLOSING_INT + CLOSING_OTHER)`;
it will add up all these values. | How to count values in a query result | [
"",
"sql",
"database",
"count",
"sybase",
""
] |
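For the question above, both aggregates can come from a single pass over the table; a minimal sketch using the question's own table and columns:
```
SELECT SUM(CLOSING_FEE + CLOSING_FEE_INT + CLOSING_INS + CLOSING_INT + CLOSING_OTHER) AS bal_total,
       COUNT(*) AS record_count
FROM statement
```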
I'm trying to get the number of records from one table where a column matches another column in a `2nd` table. I then need the total values of another column that it has selected.
```
SELECT
h.holiday_id,
h.holiday_name,
CONVERT(Money,b.total_balance) AS total_balance,
b.booking_status_id,
Sum(CONVERT(Money,b.total_balance)) AS total_balance,
Count(*) AS record_count
FROM
[arend].[aren1002].[HOLIDAY_REF] AS h,
[arend].[aren1002].[BOOKING] AS b
LEFT JOIN
[arend].[aren1002].[BOOKING]
ON
h.holiday_id=booking.holiday_id
WHERE
b.booking_status_id = '330' AND h.holiday_id = b.holiday_id
ORDER BY h.holiday_id
Table 1 HOLIDAY_REF
holiday_id | holiday_name
1 | Italy
2 | Russia
3 | Spain
Table 2 BOOKING
holiday_id | booking_status_id | total_balance
1 | 330 | 2500
3 | 330 | 1500
1 | 330 | 1750
2 | 330 | 1240
2 | 330 | 5600
Results:
Holiday_id | holiday_name | total_balance | record_count
1 | Italy | 4250 | 2
2 | Russia | 6840 | 2
3 | Spain | 1500 | 1
```
Not sure I'm going about it the right way.
UPDATE: I've updated the SQL command to reflect where I have got to; I now get an error:
The multi-part identifier "h.holiday\_id" could not be bound. | I don't really understand why you need to join the table twice.
How about using `GROUP BY`? It will give the result you expect for the `SUM` and `COUNT`.
Something like
```
SELECT
h.holiday_id,
Sum(CONVERT(Money,b.total_balance)) AS total_balance,
Count(*) AS record_count
FROM
[arend].[aren1002].[HOLIDAY_REF] AS h,
[arend].[aren1002].[BOOKING] AS b
WHERE
b.booking_status_id = '330' AND h.holiday_id = b.holiday_id
GROUP BY h.holiday_id
ORDER BY h.holiday_id
``` | I would make sure you're storing total\_balance as money so you don't have to convert when displaying the data.
Even though you're using a left join, by checking that booking\_status\_id = '330' it will exclude all entries in Holiday\_Ref without a corresponding Booking entry with a status of '330'. If that is desired behavior you might make that more explicit and use an inner join.
In your current query you have more columns selected than in your desired result set. This is what I might suggest:
```
select
holiday_ref.holiday_id
,holiday_ref.holiday_name
,sum(booking.total_balance) as total_balance
,count(1) as record_count
from
holiday_ref
inner join
booking
on holiday_ref.holiday_id = booking.holiday_id
where
booking.booking_status_id = '330'
group by
holiday_ref.holiday_id
,holiday_ref.holiday_name
``` | Select from 2 tables and get count totals | [
"",
"sql",
"sql-server-2008",
""
] |
From mysql [ref manual](http://dev.mysql.com/doc/refman/5.0/en/explain-output.html#explain_select_type): "Second or later SELECT statement in a UNION, dependent on outer query".
But how can a UNION query be dependent on an outer query? Can someone give me a clear example of what it means? For instance, in this query:
```
SELECT * FROM sometable
WHERE id IN (
SELECT 1
UNION
SELECT 2 /* ! */
)
```
the second select statement in the union (which itself is a subquery) has the DEPENDENT UNION select_type and I don't know why; it should be just UNION because it doesn't depend on anything. | Well, the problem in this case is the `IN` operator, because subqueries in an `IN` clause are rewritten as correlated subqueries.
And a correlated subquery will give you a "dependent subquery" or a "dependent union" in case of an union.
See [doc](http://dev.mysql.com/doc/refman/5.7/en/subquery-restrictions.html)
> The problem is that, for a statement that uses an IN subquery, the
> optimizer rewrites it as a correlated subquery. Consider the following
> statement that uses an uncorrelated subquery:
>
> SELECT ... FROM t1 WHERE t1.a IN (SELECT b FROM t2); The optimizer
> rewrites the statement to a correlated subquery:
>
> SELECT ... FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.b = t1.a); | Because MySQL will optimise your SQL into an EXISTS dependent subquery, as below:
```
EXPLAIN SELECT *
FROM sometable
WHERE EXISTS (
(SELECT 1
WHERE sometable.id = 1) UNION (SELECT 1 WHERE sometable.id = 2)
)
``` | What is DEPENDENT UNION in select_type column from explain command? | [
"",
"mysql",
"sql",
"union",
"explain",
""
] |
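One way to sidestep the rewrite described in the accepted answer is to turn the constant list into a derived table and join against it, so nothing is correlated with the outer query (a sketch using the question's table):
```
SELECT s.*
FROM sometable s
JOIN (SELECT 1 AS id UNION SELECT 2) AS ids ON ids.id = s.id
```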
I have written the following SQL Stored Procedure, and it keeps giving me the error at
`@pid = SELECT MAX(...` The whole procedure is:
```
Alter PROCEDURE insert_partyco
@pname varchar(200)
AS
BEGIN
DECLARE @pid varchar(200);
@pid = SELECT MAX(party_id)+1 FROM PARTY;
INSERT INTO party(party_id, name) VALUES(@pid, @pname)
SELECT SCOPE_IDENTITY() as PARTY_ID
END
GO
```
Can anyone please tell me what I'm doing wrong here? | ```
Alter PROCEDURE insert_partyco
@pname varchar(200)
AS
BEGIN
DECLARE @pid varchar(200);
SELECT @pid = MAX(party_id)+1 FROM PARTY;
INSERT INTO party(party_id, name) VALUES(@pid, @pname)
SELECT SCOPE_IDENTITY() as PARTY_ID
END
```
This has an advantage over `SET` with `SELECT` in that you can select expressions in multiple variables in one statement:
```
SELECT @var1 = exp1, @var2 = expr2 ... etc
``` | ```
declare @total int
select @total = count(*) from news;
select * from news where newsid = @total+2
-- news = table name, newsid = column name
``` | How to use SQL Variables inside a query ( SQL Server )? | [
"",
"sql",
"sql-server",
"variables",
"stored-procedures",
""
] |
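The accepted answer uses `SELECT @pid = ...`; for a single variable, `SET` with a scalar subquery is an equivalent T-SQL form (a sketch using the question's table):
```
DECLARE @pid varchar(200);
SET @pid = (SELECT MAX(party_id) + 1 FROM PARTY);
```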
The structure of **T\_TABLE2** is
```
ID INT
TBL1_ID INT
TESTER VARCHAR
LOT_ID VARCHAR
GRP VARCHAR
SITE_NUM INT
TEST_NUM VARCHAR
TEST_DESC VARCHAR
MEASUREMENT DOUBLE PRECISION
UNIT VARCHAR
LL DOUBLE PRECISION
UL DOUBLE PRECISION
STATUS VARCHAR
```
and I use SQL editor in Firebird to test my query. The query is
```
SELECT TEST_DESC, MEASUREMENT, LL, UL
FROM T_TABLE2
GROUP BY TEST_DESC
```
but I got this error in `GROUP BY`.
```
Invalid token.
Dynamic SQL Error.
SQL error code = -104.
Invalid expression in the select list (not contained in either an aggregate function or the GROUP BY clause).
``` | You must be coming from MySQL. MySQL - IMHO misleadingly, incorrectly, and in a black-magicky, unpredictable sort of way - allows you to specify partial `GROUP BY` queries and the database engine tries to figure out from the rest of the query *which* value of the non-grouped-by columns you want. Standard SQL (Firebird and most other RDBMSes), on the other hand, does not; it requires any non-aggregate columns to be contained in the group by, and any non-group-by columns to explicitly specify *which row you want*.
In your case, the offending columns are `MEASUREMENT`, `LL`, and `UL`. You need to specify *which* `MEASUREMENT`, `LL`, and `UL` you want (yes, even if they are all the same; the database engine has no way of knowing or guaranteeing this), or if you want to group by one or more of the columns or possibly you forgot to aggregate (Did you want the `SUM`?)
---
Examples of valid queries:
1. Group by all columns (equivalent to a `SELECT DISTINCT`):
```
SELECT TEST_DESC, MEASUREMENT, LL, UL
FROM T_TABLE2
GROUP BY TEST_DESC, MEASUREMENT, LL, UL
```
2. Group by `MEASUREMENT` as well and return the `MIN` LL and `MAX` UL:
```
SELECT TEST_DESC, MEASUREMENT, MIN(LL), MAX(UL)
FROM T_TABLE2
GROUP BY TEST_DESC, MEASUREMENT
```
3. `SUM` non-grouped columns:
```
SELECT TEST_DESC, SUM(MEASUREMENT), SUM(LL), SUM(UL)
FROM T_TABLE2
GROUP BY TEST_DESC
```
4. A combination of aggregates:
```
SELECT TEST_DESC, COUNT(DISTINCT MEASUREMENT), SUM(LL), MAX(UL)
FROM T_TABLE2
GROUP BY TEST_DESC
``` | While some databases, such as MySQL, are more lenient, in standard SQL when you use `GROUP BY`, the `SELECT` list must only contain the columns being grouped by and aggregate functions (e.g. `SUM()`, `MAX()`). If you were allowed to specify other columns, it's unpredictable which of the rows of the grouped column these columns will come from -- you may even get a mix of columns from different rows.
So you need to do something like:
```
SELECT TEST_DESC, MAX(MEASUREMENT) MEASUREMENT, MAX(LL) LL, MAX(UL) UL
FROM T_TABLE2
GROUP BY TEST_DESC
``` | How to use GROUP BY in Firebird? | [
"",
"sql",
"group-by",
"firebird",
""
] |
I've got a very complicated answer about my question here:
[Select a record just if the one before it has a lower value](https://stackoverflow.com/questions/18801262/select-a-record-just-if-the-one-before-it-has-a-lower-value)
about 3 weeks ago.
now I'm troubled with altering this query.
so this is the final version of this query right now:
```
SELECT a.ID, DATE_FORMAT(a.Time,'%d/%m/%y') AS T, a.SerialNumber,
b.Remain_Toner_Black BeforeCount,
a.Remain_Toner_Black AfterCount
FROM
(
SELECT a.ID,
a.Time,
a.SerialNumber,
a.Remain_Toner_Black,
(
SELECT COUNT(*)
FROM Reports c
WHERE c.SerialNumber = a.SerialNumber AND
c.ID <= a.ID) AS RowNumber
FROM Reports a
) a
LEFT JOIN
(
SELECT a.ID,
a.Time,
a.SerialNumber,
a.Remain_Toner_Black,
(
SELECT COUNT(*)
FROM Reports c
WHERE c.SerialNumber = a.SerialNumber AND
c.ID <= a.ID) AS RowNumber
FROM Reports a
) b ON a.SerialNumber = b.SerialNumber AND
a.RowNumber = b.RowNumber + 1
WHERE b.Remain_Toner_Black < a.Remain_Toner_Black AND b.Remain_Toner_Black >= 0
```
and it takes about **0.0002** sec to complete.
what I want is to edit the last line of this query so it would be:
```
WHERE month(a.Time) = ".$i." AND b.Remain_Toner_Black < a.Remain_Toner_Black AND b.Remain_Toner_Black >= 0
```
but then the query takes about **6.9047** sec to complete.
How can I add this:
**month(a.Time) = ".$i."**
to the query in the most time efficient way? | Looking into this, the following way is possibly a quicker way of doing the basic select than you are already using:-
```
SELECT AfterSub.ID,
AfterSub.SerialNumber,
BeforeSub.Remain_Toner_Black BeforeCount,
AfterSub.Remain_Toner_Black AfterCount
FROM
(
SELECT ID, SerialNumber, Remain_Toner_Black, @Counter1:=@Counter1+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter1:=0) Sub1
ORDER BY SerialNumber, ID
) AfterSub
INNER JOIN
(
SELECT ID, SerialNumber, Remain_Toner_Black, @Counter2:=@Counter2+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter2:=1) Sub2
ORDER BY SerialNumber, ID
) BeforeSub
ON BeforeSub.SerialNumber = AfterSub.SerialNumber
AND BeforeSub.SeqCnt = AfterSub.SeqCnt
WHERE AfterSub.Remain_Toner_Black > BeforeSub.Remain_Toner_Black
ORDER BY AfterSub.SerialNumber, AfterSub.ID
```
The problem with checking the month here is that the following item could be in a different month, and this is relying on a count.
You could try:-
```
SELECT AfterSub.ID,
AfterSub.SerialNumber,
BeforeSub.Remain_Toner_Black BeforeCount,
AfterSub.Remain_Toner_Black AfterCount
FROM
(
SELECT ID, SerialNumber, Remain_Toner_Black, @Counter1:=@Counter1+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter1:=0) Sub1
ORDER BY SerialNumber, ID
) AfterSub
INNER JOIN
(
SELECT ID, SerialNumber, Remain_Toner_Black, @Counter2:=@Counter2+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter2:=1) Sub2
ORDER BY SerialNumber, ID
) BeforeSub
ON BeforeSub.SerialNumber = AfterSub.SerialNumber
AND BeforeSub.SeqCnt = AfterSub.SeqCnt
AND AfterSub.Remain_Toner_Black > BeforeSub.Remain_Toner_Black
WHERE month(BeforeSub.Time) = ".$i."
ORDER BY AfterSub.SerialNumber, AfterSub.ID
```
but this won't use an index (though I would hope the number of rows is low, so it shouldn't be an issue).
You could possibly do the select to get the sequence numbers, then only check the items for that month, before joining to the next month:-
```
SELECT AfterSub.ID,
AfterSub.SerialNumber,
BeforeSub.Remain_Toner_Black BeforeCount,
AfterSub.Remain_Toner_Black AfterCount
FROM
(
SELECT ID, SerialNumber, Remain_Toner_Black, @Counter1:=@Counter1+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter1:=0) Sub1
ORDER BY SerialNumber, ID
) AfterSub
INNER JOIN
(
SELECT ID, SerialNumber, Remain_Toner_Black, SeqCnt
FROM
(
SELECT ID, SerialNumber, Remain_Toner_Black, `Time`, @Counter2:=@Counter2+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter2:=1) Sub2
ORDER BY SerialNumber, ID
) BeforeSub
WHERE month(BeforeSub.Time) = ".$i."
) BeforeSub
ON BeforeSub.SerialNumber = AfterSub.SerialNumber
AND BeforeSub.SeqCnt = AfterSub.SeqCnt
AND AfterSub.Remain_Toner_Black > BeforeSub.Remain_Toner_Black
ORDER BY AfterSub.SerialNumber, AfterSub.ID
```
(note, neither of the last 2 selects are tested)
EDIT
Adding a check for year / month to the 2 subselects. However as the date is being formatted to do this check I am not sure the index will be useful:-
```
SELECT AfterSub.ID,
AfterSub.SerialNumber,
BeforeSub.Remain_Toner_Black BeforeCount,
AfterSub.Remain_Toner_Black AfterCount
FROM
(
SELECT ID, SerialNumber, Remain_Toner_Black, @Counter1:=@Counter1+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter1:=0) Sub1
WHERE DATE_FORMAT(`Time`,'%Y %m') >= '2013 01'
ORDER BY SerialNumber, ID
) AfterSub
INNER JOIN
(
SELECT ID, SerialNumber, Remain_Toner_Black, `Time`, @Counter2:=@Counter2+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter2:=1) Sub2
WHERE DATE_FORMAT(`Time`,'%Y %m') = '2013 01'
ORDER BY SerialNumber, ID
) BeforeSub
ON BeforeSub.SerialNumber = AfterSub.SerialNumber
AND BeforeSub.SeqCnt = AfterSub.SeqCnt
AND AfterSub.Remain_Toner_Black > BeforeSub.Remain_Toner_Black
ORDER BY AfterSub.SerialNumber, AfterSub.ID
```
Using a date in the subselects (which means working out the last day of the month) might be more efficient:-
```
SELECT AfterSub.ID,
AfterSub.SerialNumber,
BeforeSub.Remain_Toner_Black BeforeCount,
AfterSub.Remain_Toner_Black AfterCount
FROM
(
SELECT ID, SerialNumber, Remain_Toner_Black, @Counter1:=@Counter1+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter1:=0) Sub1
WHERE `Time` >= '2013-01-01'
ORDER BY SerialNumber, ID
) AfterSub
INNER JOIN
(
SELECT ID, SerialNumber, Remain_Toner_Black, `Time`, @Counter2:=@Counter2+1 AS SeqCnt
FROM TableName
CROSS JOIN (SELECT @Counter2:=1) Sub2
WHERE `Time` BETWEEN '2013-01-01' AND '2013-01-31'
ORDER BY SerialNumber, ID
) BeforeSub
ON BeforeSub.SerialNumber = AfterSub.SerialNumber
AND BeforeSub.SeqCnt = AfterSub.SeqCnt
AND AfterSub.Remain_Toner_Black > BeforeSub.Remain_Toner_Black
ORDER BY AfterSub.SerialNumber, AfterSub.ID
``` | Please place an index on a.Time and use this:
```
SELECT a.ID, DATE_FORMAT(a.Time,'%d/%m/%y') AS T, a.SerialNumber,
b.Remain_Toner_Black BeforeCount,
a.Remain_Toner_Black AfterCount
FROM
(
SELECT a.ID,
a.Time,
a.SerialNumber,
a.Remain_Toner_Black,
month(a.time) as Month,
(
SELECT COUNT(*)
FROM Reports c
WHERE c.SerialNumber = a.SerialNumber AND
c.ID <= a.ID) AS RowNumber
FROM Reports a
) a
LEFT JOIN
(
SELECT a.ID,
a.Time,
a.SerialNumber,
a.Remain_Toner_Black,
(
SELECT COUNT(*)
FROM Reports c
WHERE c.SerialNumber = a.SerialNumber AND
c.ID <= a.ID) AS RowNumber
FROM Reports a
) b ON a.SerialNumber = b.SerialNumber AND
a.RowNumber = b.RowNumber + 1
WHERE a.Month = ".$i." AND b.Remain_Toner_Black < a.Remain_Toner_Black AND b.Remain_Toner_Black >= 0
``` | Select a record just if the one before it has a lower value filtered by month | [
"",
"mysql",
"sql",
""
] |
I have a table with ID's from 1 to 20 and I need to fetch 10 rows (random), but 3 of the 10 rows are predefined and need to be at the beginning of the result list, in one MySQL statement:
This works, but the production table contains over 500K rows:
```
SELECT id
FROM tableName
WHERE id IN (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)
ORDER BY FIELD(id, 5,6,7) DESC, RAND()
LIMIT 10
```
I would need something like this:
```
SELECT id
FROM tableName
WHERE id IN (5,6,7,*)
ORDER BY FIELD(id, 5,6,7) DESC, RAND()
LIMIT 10
```
... what would be the right syntax? | You can use a statement like this:
```
SELECT
id
FROM
(
SELECT
id,
IF(id IN (5,6,7),1,0) AS priority
FROM tableName
) t
ORDER BY priority DESC, RAND()
LIMIT 10
``` | If UNION ALL is allowed, you could try something like this:
```
(select id from tablename where id in (compulsary_1, compulsary_2, compulsary_3)
order by id desc)
union all
(select id from tablename where id not in (compulsary_1, compulsary_2, compulsary_3)
order by rand() limit 7)
``` | MySQL fetch 10 randomly rows where 3 predefined rows needs to be part of the results and be at the beginning | [
"",
"mysql",
"sql",
""
] |
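MySQL also lets you order by a boolean expression directly, which shortens the accepted answer's derived table (a sketch over the question's table):
```
SELECT id
FROM tableName
ORDER BY (id IN (5,6,7)) DESC, RAND()
LIMIT 10
```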
I have a stored procedure which needs a different if condition to work properly.
The procedure has 2 parameters, namely @CategoryID and @ClassID, which basically come from a UI tree view. @CategoryID corresponds to the parent nodes, while @ClassID corresponds to the child nodes.
Based upon the above parameters I need to make a selection (column Code) from a table which has CategoryID and ClassID as columns.
Now there are 2 scenarios:
Scenario 1
@CategoryID:A
@ClassID:B (which is a child node of CategoryID A)
Result needed: Codes corresponding to only ClassID B, which is basically the intersection
Scenario 2
@CategoryID:A
@ClassID: C (which is not a child node for CategoryID A)
Result needed: Codes corresponding to the CategoryID A, as well as ClassID B, basically a union
The procedure which I wrote gives me the correct answer for the second scenario, but fails for the first. Below is my procedure:
```
ALTER PROCEDURE [dbo].[uspGetCodes]
@CategoryID varchar(50),
@ClassID varchar(50)
AS
BEGIN
BEGIN TRY
DECLARE @SQLQuery NVARCHAR(MAX)
SET @SQLQuery=N'SELECT Code FROM dbo.ClassToCategoryMapping WHERE '
IF (@CategoryID IS NULL OR @CategoryID='')
BEGIN
SET @SQLQuery=@SQLQuery + 'ClassId IN ('+@ClassID+')'
PRINT(@SQLQuery)
EXEC(@SQLQuery)
END
ELSE IF (@ClassID IS NULL OR @ClassID='')
BEGIN
SET @SQLQuery=@SQLQuery+'CategoryID IN ('+@CategoryID+')'
PRINT(@SQLQuery)
EXEC(@SQLQuery)
END
ELSE
BEGIN
SET @SQLQuery=@SQLQuery+'(CategoryID IN ('+@CategoryID+') OR ClassId IN ('+@ClassID+') )'
PRINT(@SQLQuery)
EXEC(@SQLQuery)
END
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS 'ErrorNumber', ERROR_MESSAGE() AS 'ErrorMessage', ERROR_SEVERITY() AS 'ErrorSeverity', ERROR_STATE() AS 'ErrorState', ERROR_LINE() AS 'ErrorLine'
RETURN ERROR_NUMBER()
END CATCH
END
```
The Last Else part actually does an 'OR', which gives me the union of the Codes for CategoryID's and ClassID's irrespective whether the given ClassID is a child of the given CategoryID or not.
My question over here would be, how to write the condition to achieve both the scenarios.
**Latest Sample Data:**
Scenario 1
@CategoryId=2,5, @ClassID=10 (Here 10 is the child while 2 is the parent, CategoryID 2 corresponds to ClassID's 10, 11, 12)
Expected Result: 10, 26, 27 (26 and 27 correspond to the CategoryID 5)
Scenario 2
@CategoryID=2, @ClassID=13,15 (13 and 15 is the child of a different parent, CategoryID 2 corresponds to ClassID's 10, 11 ,12)
Expected Result: 10, 11, 12, 13, 15
**Data in Table dbo.ClasstoCategoryMapping will be somewhat as below:**
```
CategoryID ClassID Code
2 10 200
2 11 201
2 12 202
5 26 501
5 27 502
6 15 601
6 16 602
6 17 603
7 20 701
7 21 702
7 22 703
```
I guess I have made my question quite clear; if not, please ask me to edit it and I would be happy to do so. Any pointers will be much appreciated.
Regards
Anurag | If I understand the question correctly, what you require in your result set is:
(all supplied classid) + (all classid for supplied categoryid with no matching supplied classid)
That would translate to the following:
```
CREATE PROCEDURE [dbo].[uspGetCodes]
(
@CategoryID varchar(50),
@ClassID varchar(50)
)
AS
BEGIN
SELECT COALESCE(CM.CategoryID, CM2.CategoryID) AS CategoryID,
COALESCE(CM.ClassID, CM2.ClassID) AS ClassID,
COALESCE(CM.Code, CM2.Code) AS Code
--Matched classIDs:
FROM dbo.udfSplitCommaSeparatedIntList(@ClassID) CLAS
JOIN dbo.ClassToCategoryMapping CM
ON CM.ClassId = CLAS.Value
--Unmatched CategoryIDs:
FULL
OUTER
JOIN dbo.udfSplitCommaSeparatedIntList(@CategoryID) CAT
ON CM.CategoryID = CAT.Value
LEFT
JOIN dbo.ClassToCategoryMapping CM2
ON CM.CategoryID IS NULL
AND CM2.CategoryID = CAT.Value
END
```
I have included Category, Class and Code in the result since it's easier to see what's going on; however, I guess you only really need Code.
This makes use of the following function to split the supplied comma separated strings:
```
CREATE FUNCTION [dbo].[udfSplitCommaSeparatedIntList]
(
@Values varchar(50)
)
RETURNS @Result TABLE
(
Value int
)
AS
BEGIN
DECLARE @LengthValues int
SELECT @LengthValues = COALESCE(LEN(@Values), 0)
IF (@LengthValues = 0)
RETURN
DECLARE @StartIndex int
SELECT @StartIndex = 1
DECLARE @CommaIndex int
SELECT @CommaIndex = CHARINDEX(',', @Values, @StartIndex)
DECLARE @Value varchar(50);
WHILE (@CommaIndex > 0)
BEGIN
SELECT @Value = SUBSTRING(@Values, @StartIndex, @CommaIndex - @StartIndex)
INSERT @Result VALUES (@Value)
SELECT @StartIndex = @CommaIndex + 1
SELECT @CommaIndex = CHARINDEX(',', @Values, @StartIndex)
END
SELECT @Value = SUBSTRING(@Values, @StartIndex, LEN(@Values) - @StartIndex + 1)
INSERT @Result VALUES (@Value)
RETURN
END
``` | This is a sample query that can achieve your goal; is this what you want?
```
DECLARE @SAMPLE TABLE
(
ID INT IDENTITY(1,1),
CategoryId INT,
ClassID INT
)
INSERT INTO @sample
VALUES(2,10)
INSERT INTO @sample
VALUES(2,11)
INSERT INTO @sample
VALUES(2,12)
INSERT INTO @sample
VALUES(3,13)
DECLARE @CategoryID INT
DECLARE @ClassID Int
--Play around your parameter(s) here
SET @CategoryID = 2
SET @ClassID = 13

--Scenario 1
--@CategoryId=2, @ClassID=10 (Here 10 is the child while 2 is the parent, CategoryID 2 corresponds to ClassID's 10, 11, 12)
--Expected Result: 10
IF EXISTS(SELECT * FROM @SAMPLE WHERE CategoryId = @CategoryID AND ClassID = @ClassID)
    SELECT ClassID FROM @SAMPLE WHERE CategoryId = @CategoryID AND ClassID = @ClassID

--Scenario 2
--@CategoryID=2, @ClassID=13 (13 is the child of a different parent, CategoryID 2 corresponds to ClassID's 10, 11, 12)
--Expected Result: 10, 11, 12, 13
ELSE
    SELECT ClassID FROM @SAMPLE WHERE ClassID = @ClassID OR CategoryId = @CategoryID
``` | Complex SQL selection query | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
"if-statement",
""
] |
I have this table:
Playing(Date,Time,Location,Name,TeamName,ShirtColor)
And I want to select all teams that wore exactly two different shirt colors.
Since I don't have an SQL DB, I'd like to ask which of these two (or maybe neither of them) is correct, and why?
1. `SELECT (DISTINCT TeamName)
FROM Playing
WHERE TeamName IN (SELECT TeamName FROM Playing GROUP BY TeamName HAVING COUNT(DISTINCT ShirtColor) = 2)`
2. `SELECT (DISTINCT TeamName)
FROM Playing
WHERE TeamName IN (SELECT TeamName FROM Playing WHERE COUNT(DISTINCT ShirtColor) = 2 GROUP BY ShirtColor)` | Unfortunately, none of your queries are correct from a syntactic point of view.
Here is a **[SQLFiddle](http://sqlfiddle.com/#!3/149ea/4)** with sample data, and with the query that you are looking for
```
SELECT TeamName
FROM Playing
GROUP BY TeamName
HAVING COUNT(DISTINCT ShirtColor) = 2
```
When you GROUP records, the conditions for your aggregate functions (COUNT, SUM, AVG etc.) need to be used in a `HAVING` statement.
Your **first query** is correct, if you eliminate the parentheses from `(DISTINCT TeamName)`, although as **ypercube** mentioned, it is a bit overcomplicated, but functionally correct.
Whereas your **second query** will not work, because the
`SELECT TeamName FROM Playing WHERE COUNT(DISTINCT ShirtColor) = 2` part is not correct. As I have mentioned previously, your conditions on aggregate functions (in your case `COUNT`) have to be used in a `HAVING` statement. | You don't need a subquery:
```
SELECT TeamName
FROM Playing
GROUP BY TeamName
HAVING COUNT(DISTINCT ShirtColor) = 2
``` | SQL - Select all teams that wore exactly two different shirt colors? | [
"",
"sql",
""
] |
I'm having trouble creating a query with a sub query to find the one customer in my DB who has made the most purchases. I need to list his/her full name, product name, price and quantity. Here is what I have so far
```
select first_name ||' '|| last_name "FullName", pt.name p.price, sum(ps.quantity)
from customers c
join purchases ps on c.customer_id = ps.customer_id
join products p on p.product_id = ps.product_id
join product_types pt on p.product_type_id = pt.product_type_id;
```
I need to use these three tables
**Customers Table**
```
Customer_ID
First_Name
Last_Name
DOB
Phone
```
**Purchases Table**
```
Product_ID
Customer_ID
Quantity
```
**Products Table**
```
Product_ID
Product_Type_ID
Name
Description
Price
```
**Product Types Table**
```
Product_Type_ID
Name
```
I am confused as to where I should place the subquery (in the select list, from, having or where clause), and whether the arithmetic function should be placed in the outer query or the subquery. I know there are Nested subqueries, Correlated subqueries, Multiple-column subqueries, Multiple-row subqueries, Single-row subqueries. By the way, I am trying to do this in Oracle.
Here is an image with my result, except I removed the sum from the quantity column. Also, updated link.
(<http://i1294.photobucket.com/albums/b618/uRsh3RRaYm0nD/Capture100_zps1f951b07.jpg>)
I'm sorry, I forgot to include a fourth table, as you can see there are two name columns, in products table and product type table. The difference is that in products table "Name" is the specific name of the product, as shown in my link. The product type "Name" column is the more general name of the product, such as books, dvds, cds, etc. I need to include the product type "Name column in my query not product's name column. Therefore, the end result should look something like this
```
FullName ProductTypeName Price Quantity
John Brown Book Sumof4books 4
John Brown DVD Sumof2DVDs 2
John Brown Magazine Sumof1Mag 1
``` | Here's one way to do it. It uses an analytic function to order customers by the total quantity of purchases: `row_number() over (order by sum(quantity) desc)`. If there's more than one person with the same quantity, this will pick out only one.
It then takes this customer id and joins the rest of the tables in the obvious way to get the breakdown by product type.
```
Select
c.FullName,
pt.name,
Sum(p.price * ps.quantity) price,
sum(ps.quantity) quantity
From (
Select
c.Customer_ID,
c.first_name ||' '|| c.last_name FullName,
row_number() over (order by Sum(Quantity) desc) r
From
Purchases ps
Inner Join
Customers c
On ps.Customer_ID = c.Customer_ID
Group By
c.Customer_ID,
c.first_name ||' '|| c.last_name
) c
Inner Join
Purchases ps
On c.Customer_ID = ps.Customer_ID
Inner Join
Products p
On ps.Product_ID = p.Product_ID
Inner Join
Product_Types pt
On p.Product_Type_ID = pt.Product_Type_ID
Where
c.r = 1
Group By
c.FullName,
pt.name
```
**`Example Fiddle`**
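As a quick cross-check of the ranking approach, here is a sketch using Python's bundled sqlite3 (the data is made up, and SQLite 3.25+ is assumed for `ROW_NUMBER()`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (customer_id INT, first_name TEXT, last_name TEXT);
CREATE TABLE product_types (product_type_id INT, name TEXT);
CREATE TABLE products (product_id INT, product_type_id INT, name TEXT, price REAL);
CREATE TABLE purchases (product_id INT, customer_id INT, quantity INT);
INSERT INTO customers VALUES (1,'John','Brown'),(2,'Ann','Smith');
INSERT INTO product_types VALUES (1,'Book'),(2,'DVD');
INSERT INTO products VALUES (10,1,'SQL 101',20.0),(11,2,'Some Film',10.0);
INSERT INTO purchases VALUES (10,1,4),(11,1,2),(10,2,1);
""")

# Rank customers by total quantity, keep rank 1, then break down by product type.
rows = con.execute("""
SELECT c.full_name, pt.name,
       SUM(p.price * ps.quantity) AS price,
       SUM(ps.quantity)           AS quantity
FROM (
    SELECT t.customer_id, t.full_name,
           ROW_NUMBER() OVER (ORDER BY t.total DESC) AS r
    FROM (
        SELECT c.customer_id,
               c.first_name || ' ' || c.last_name AS full_name,
               SUM(ps.quantity) AS total
        FROM purchases ps
        JOIN customers c ON ps.customer_id = c.customer_id
        GROUP BY c.customer_id, full_name
    ) t
) c
JOIN purchases ps ON c.customer_id = ps.customer_id
JOIN products p ON ps.product_id = p.product_id
JOIN product_types pt ON p.product_type_id = pt.product_type_id
WHERE c.r = 1
GROUP BY c.full_name, pt.name
ORDER BY pt.name
""").fetchall()
print(rows)
```

John Brown bought 6 units versus Ann's 1, so only his rows come back, one per product type.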
For the second problem (show the customer who has the highest quantity for each product type, together with what they've spent on that product type)
```
Select
c.FullName,
c.name,
c.price,
c.quantity
From (
Select
c.first_name ||' '|| c.last_name FullName,
pt.name,
sum(p.price * ps.quantity) price,
sum(ps.quantity) quantity,
row_number() over (partition by pt.name order by Sum(Quantity) desc) r
From
Purchases ps
Inner Join
Customers c
On ps.Customer_ID = c.Customer_ID
Inner Join
Products p
On ps.Product_ID = p.Product_ID
Inner Join
Product_Types pt
On p.Product_Type_ID = pt.Product_Type_ID
Group By
c.first_name ||' '|| c.last_name,
pt.name
) c
Where
c.r = 1
```
**`Example Fiddle`** | Here is the general idea. You can adapt it for your database tables.
```
select fred, barney, maxwilma
from bedrock join
(select max(wilma) maxwilma
from bedrock
group by fred ) flinstone on wilma = maxwilma
``` | Subquery "Finding customer who has the most purchases" | [
"",
"sql",
"oracle",
""
] |
I'm working on a project as an outsourced developer where I don't have access to the testing and production servers, only the development environment.
To deploy changes I have to create SQL scripts containing the changes to make on each server for the feature I wish to deploy.
**Examples:**
* When I make each change on the database, I save the script to a folder, but sometimes this is not enough because I sent a script to alter a view but forgot to include new tables that I created in another feature.
* Another situation would be changing a table via the SSMS GUI and forgetting to create a script with the changed or new columns, and later having to send a script to update the table in testing.
Since some features can be sent for testing and others straight to production (example: queries to feed Excel files), it's hard to keep track of what I have to send to each environment.
Since the deployment team just executes the scripts I send them to update the database, how can I manage and keep track of changes to a SQL Server database without a compare tool?
**[Edit]**
The current tools that I use are SSMS, VS 2008 Professional, and TFS 2008. | Look into Liquibase, especially its SQL format, and see if that gives you what you want. I use it for our database and it's great.
You can store all your objects in separate scripts, but when you do a Liquibase "build" it will generate one SQL script with all your changes in it. The really important part is getting your Liquibase configuration to put the objects in the correct dependency order. That is, tables get created before foreign key constraints, for example.
<http://www.liquibase.org/> | I can tell you how we at xSQL Software do this using our tools:
* deployment team has an automated process that takes a schema snapshot of the staging and production databases and dumps the snapshots nightly on a share that the development team has access to.
* every morning the developers have up to date schema snapshots of the production and staging databases available. They use our [Schema Compare tool](http://www.xsql.com/products/sql_server_schema_compare/) to compare the dev database with the staging/production snapshot and generate the change scripts.
Note: to take the schema snapshot you can either use the Schema Compare tool or our Schema Compare SDK. | How to manage/ track changes to SQL Server database without compare tool | [
"",
"sql",
"sql-server",
"deployment",
"compare",
""
] |
Let's pretend we have this relation:
```
+-----+-------+-------+
| i++ | name  | score |
+-----+-------+-------+
| 1   | Will  |   123 |
| 2   | Joe   |   100 |
| 3   | Bill  |    99 |
| 4   | Max   |    89 |
| 5   | Jan   |    43 |
| 6   | Susi  |    42 |
| 7   | Chris |    11 |
| 8   | Noa   |     9 |
| 9   | Sisi  |     4 |
+-----+-------+-------+
```
Now I need a subset based on the data I am searching for.
For instance, I'm searching for the fifth place.
In my result I need more than Jan's record; I need the two records before Jan and the two records after Jan too. So I have the following result set:
```
+------+-------+-------+
| id++ | name  | score |
+------+-------+-------+
| 3    | Bill  |    99 |
| 4    | Max   |    89 |
| 5    | Jan   |    43 |
| 6    | Susi  |    42 |
| 7    | Chris |    11 |
+------+-------+-------+
```
This is the SQL I have so far:
```
select @a:= id from quiz.score where username = 'Jan';
set @i=0;
SELECT @i:=@i+1 as Platz, s.*
FROM quiz.score s where id BETWEEN @a-5 AND @a+5
order by points desc;
```
The problem here is that `@a` is the `id` of the record. Is there a way to use the calculated value `@i:=@i+1`?
Thx a lot for your help. | If you do not need the rank in your output (and it appears from your comments and favored answers that you do not), you can simply combine the quiz scores nearest to Jan's score:
Query ([SQL Fiddle here](http://sqlfiddle.com/#!2/76ad1/11)):
```
-- XXX this assumes `scores`.`username` is UNIQUE !
SELECT * FROM (
-- Get those who scored worse (or tied)
( SELECT s.*
FROM scores s
CROSS JOIN (SELECT score FROM scores WHERE username = 'Jan') ref
WHERE s.score <= ref.score AND username <> 'Jan'
ORDER BY s.score DESC
LIMIT 2)
UNION
-- Get our reference point record
(SELECT s.* FROM scores s WHERE username = 'Jan')
UNION
-- Get those who scored better
( SELECT s.*
FROM scores s
CROSS JOIN (SELECT score FROM scores WHERE username = 'Jan') ref
WHERE s.score > ref.score AND username <> 'Jan'
ORDER BY s.score ASC
LIMIT 2)
) slice
ORDER BY score ASC;
```
(Note that I LIMITed the results to two records before Jan and two after Jan because your sample data set was so small.)
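The same slicing can be tried out locally with Python's bundled sqlite3. Note that SQLite does not accept MySQL's parenthesised UNION arms, so each LIMITed arm is wrapped in a subselect instead; the logic is unchanged (table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE scores (username TEXT UNIQUE, score INT);
INSERT INTO scores VALUES ('Will',123),('Joe',100),('Bill',99),('Max',89),
                          ('Jan',43),('Susi',42),('Chris',11),('Noa',9),('Sisi',4);
""")

# Two worse-or-tied scores, Jan's own row, and the two next-better scores.
rows = con.execute("""
SELECT * FROM (
    SELECT * FROM (SELECT s.* FROM scores s
                   WHERE s.score <= (SELECT score FROM scores WHERE username = 'Jan')
                     AND s.username <> 'Jan'
                   ORDER BY s.score DESC LIMIT 2)
    UNION
    SELECT * FROM scores WHERE username = 'Jan'
    UNION
    SELECT * FROM (SELECT s.* FROM scores s
                   WHERE s.score > (SELECT score FROM scores WHERE username = 'Jan')
                   ORDER BY s.score ASC LIMIT 2)
)
ORDER BY score ASC
""").fetchall()
print(rows)
```

With Jan at 43, this returns Chris (11), Susi (42), Jan (43), Max (89), and Bill (99).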
Those parentheses on the constituent queries above are needed to allow [LIMIT and UNION to work together](https://stackoverflow.com/a/1415380/132382). The outermost query then lets us [ORDER the results of a UNION](https://stackoverflow.com/a/3531301/132382). | It's not entirely clear what you want, but a creative use of limit might be able to help:
```
Set @i = 0;
Select
(@i := @i + 1) + 2 place,
s.*
From
quiz_score s
Order By
quality Desc
Limit
2, 5;
```
**`Example Fiddle`** | sql show partition based on calculated column with mysql | [
"",
"mysql",
"sql",
"ranking",
""
] |
I have a table containing data about events and festivals, with the following columns recording their start and end dates.
* Start\_Date
* End\_Date
The date format is `YYYY-MM-DD`. I need to fetch event details with the following condition:
* I need to fetch all events whose start date falls in the current month; their end dates can be anything up to, say, `currentDate+next30days`.
I am clear about the end-date part, but not sure how I can fetch data whose start dates are in the current month.
For this, I need to compare current year and current month against the `Start_Date` column in my database.
Can anyone point out how I can do that? | ```
select * from your_table
where year(Start_Date) = year(curdate())
and month(Start_Date) = month(curdate())
and end_date <= curdate() + interval 30 day
``` | I don't like either of the other two answers, because they do not let the optimizer use an index on `start_date`. For that, the functions need to be on the current date side.
So, I would go for:
```
where start_date >= date_add(curdate(), interval 1 - day(curdate()) day) and
start_date < date_add(date_add(curdate(), interval 1 - day(curdate()) day), interval 1 month)
```
All the date functions are on `curdate()`, which does not affect the ability of MySQL to use an index in this case.
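This half-open month-range technique can be sanity-checked with Python's bundled sqlite3, where the `date(..., 'start of month')` modifiers stand in for the MySQL `date_add` calls, and a fixed reference date replaces `curdate()` so the demo is deterministic (table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events (name TEXT, start_date TEXT, end_date TEXT);
INSERT INTO events VALUES
  ('in-month',   '2013-09-05', '2013-09-20'),
  ('prev-month', '2013-08-28', '2013-09-02'),
  ('next-month', '2013-10-01', '2013-10-15');
""")

ref = '2013-09-15'  # stand-in for curdate() to keep the output deterministic
rows = con.execute("""
SELECT name FROM events
WHERE start_date >= date(:ref, 'start of month')
  AND start_date <  date(:ref, 'start of month', '+1 month')
""", {"ref": ref}).fetchall()
print(rows)  # only the event starting this month
```

Because both bounds are computed from the reference date, the `start_date` column itself stays bare and remains indexable.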
You can also include the condition on `end_date`:
```
where (start_date >= date_add(curdate(), interval 1 - day(curdate()) day) and
start_date < date_add(date_add(curdate(), interval 1 - day(curdate()) day), interval 1 month)
) and
end_date <= date_add(curdate(), interval 30 day)
```
This can still take advantage of an index. | comparing dates by month and year in mysql | [
"",
"mysql",
"sql",
""
] |
I have a table with columns `date` and `time_spent`. For each date D, I want to find the sum of the values of `time_spent` over the period (D-7 to D), i.e. the past week plus the current day.
I can't figure out a way to do this, as I can only find examples for a total sum and not a sum over a variable period of time.
Here is a dataset example :
```
CREATE TABLE rolling_total
(
date date,
time_spent int
);
INSERT INTO rolling_total VALUES ('2013-09-01','2'),
('2013-09-02','1'),
('2013-09-03','3'),
('2013-09-04','4'),
('2013-09-05','2'),
('2013-09-06','5'),
('2013-09-07','3'),
('2013-09-08','2'),
('2013-09-09','1'),
('2013-09-10','1'),
('2013-09-11','1'),
('2013-09-12','3'),
('2013-09-13','2'),
('2013-09-14','4'),
('2013-09-15','6'),
('2013-09-16','1'),
('2013-09-17','2'),
('2013-09-18','3'),
('2013-09-19','4'),
('2013-09-20','1'),
('2013-09-21','6'),
('2013-09-22','5'),
('2013-09-23','3'),
('2013-09-24','1'),
('2013-09-25','5'),
('2013-09-26','2'),
('2013-09-27','1'),
('2013-09-28','4'),
('2013-09-29','3'),
('2013-09-30','2')
```
Result would look like :
```
date | time_spent | rolling_week_total
2013-09-01 | 2 | 2
2013-09-02 | 1 | 3
2013-09-03 | 3 | 6
2013-09-04 | 4 | 10
2013-09-05 | 2 | 12
2013-09-06 | 5 | 17
2013-09-07 | 3 | 20
2013-09-08 | 2 | 22
// now we omit values that are older than seven days
2013-09-09 | 1 | 21
2013-09-10 | 1 | 21
...
``` | And one more solution
```
SELECT r1.date, r1.time_spent, sum(r2.time_spent) AS rolling_week_total
FROM rolling_total AS r1 JOIN rolling_total AS r2
ON datediff(r1.date, r2.date) BETWEEN 0 AND 7
GROUP BY r1.date
ORDER BY r1.date
LIMIT 8
``` | MySQL 8 has [window functions](https://dev.mysql.com/doc/refman/8.0/en/window-functions-frames.html) that are meant for this exact case:
```
SELECT
SUM(time_spent) OVER(
ORDER BY date RANGE BETWEEN INTERVAL 7 DAY PRECEDING
AND CURRENT ROW) AS total,
date
FROM rolling_total
```
`ORDER BY` date determines the dimension you are using for the rolling window.
`RANGE BETWEEN A AND B` defines the filter criteria of the window.
`INTERVAL 7 DAY PRECEDING` means all 7 days prior to the current row.
`CURRENT ROW` uses the value of the current row. | Create a ROLLING sum over a period of time in mysql | [
"",
"mysql",
"sql",
"sum",
"query-optimization",
"rolling-computation",
""
] |
I have a select statement that always returns only 3 rows. One column has the static values RED, AMBER, and GREEN.
<http://sqlfiddle.com/#!6/5e107/2>
In the fiddle link, I have sample code.
```
select color,SUM(clrcount) from
tmp group by color
```
returns 3 rows on execution.
But if I delete the items marked AMBER and then do a group by, only two rows are returned. I need 3 rows to always be returned.
The AMBER row can have a count of 0. | Redesigning would be a better approach, but if you can't do that now and are certain about those three colours, try something like this:
**[Fiddle demo](http://sqlfiddle.com/#!6/0e59b/5)**
```
select t.color,coalesce(sum(clrcount),0) colorCount
from tmp right join
(values('RED'),('AMBER'),('GREEN')) As t(color)
on tmp.color = t.color
group by t.color
``` | Rather than deleting amber rows you need to set clrcount = 0 for those rows - this will work with your current query.
However, if they must be deleted then the answer by Joel Coehoorn is your way to go. | select statement to always return 3 rows on execution | [
"",
"sql",
"sql-server",
"select",
""
] |
I am trying to retrieve data from the database in ascending order.
My query was
```
select pid
from `patient_list` t1, patient_info t2
where pid > '2000' and t2.id=t1.id
order by pid asc
limit 10
```
but the data comes back like this:
```
pid
2221
2223
2224
2227
**223**
2238
2239
2242
2245
2247
**225**
```
How do I sort this? | First, clean the data. Second, cast to int. Third, sort and/or filter.
```
select pid from `patient_list` t1, patient_info t2
where replace(pid, "*", "") + 0 > 2000 and t2.id = t1.id
order by replace(pid, "*", "") + 0
limit 1
```
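This clean-then-cast approach can be checked with Python's bundled sqlite3. Here `CAST` stands in for MySQL's implicit `+ 0` coercion, and the made-up `pid` values include literal asterisks to mimic the dirty data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient_list (pid TEXT);
INSERT INTO patient_list VALUES ('2221'),('*223*'),('2238'),('*225*');
""")

# Strip the junk characters, cast to an integer, then filter and sort numerically.
rows = con.execute("""
SELECT pid FROM patient_list
WHERE CAST(replace(pid, '*', '') AS INTEGER) > 200
ORDER BY CAST(replace(pid, '*', '') AS INTEGER)
""").fetchall()
print([r[0] for r in rows])
```

The rows now come back in numeric order (223, 225, 2221, 2238) instead of alphanumeric order.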
Note that you are also filtering on that field so you should use the replacement logic twice. | Your data is sorted alphanumeric. To force a numerical sort you must cast the data into a number. For instance with `pid * 1`
```
select pid
from `patient_list` t1, patient_info t2
where pid > '2000'
and t2.id=t1.id
order by pid * 1 asc
limit 1
```
Since your `pid` is of a string type, you should consider changing it to an `int`. | How to sort this data in ascending order? | [
"",
"mysql",
"sql",
""
] |
I have a table which has a doctor\_id, patient\_id, date\_of\_visit and comments.
```
doctor_id|Patient_id |date_of_visit | comments
------------------------------------------------
1 | 11 | 12-12-2012 | abcdef
2 | 12 | 12-13-2012 | erewrwq
1 | 13 | 12-12-2012 | dsfsdf
3 | 14 | 8-8-2012 | sdfds
1 | 15 | 12-12-2012 | wereter
```
How can I find the doctors who have made 3 visits on a single day?
For example, the result for the above table should give
```
doctor_id
---------
1
```
as he has 3 visits on 12-12-2012 | ```
select doctor_id
from your_table
group by doctor_id, date_of_visit
having count(*) = 3
``` | ```
SELECT count(doctor_id)
FROM table_name
WHERE doctor_id=1
and date=12-12-2012
``` | Find number of visits on each day | [
"",
"sql",
"oracle",
""
] |
Basically I have three MySQL tables:
**Users** - contains base information on users
**Fields** - describes additional fields for said users (e.g. location, dob etc.)
**Data** - Contains user data described via links to the fields table
With the basic design as follows (the below is a stripped down version)
**Users:**
```
ID | username | password | email | registered_date
```
**Fields**
```
ID | name | type
```
**Data:**
```
ID | User_ID | Field_ID | value
```
What I want to do is search users by the values of the fields they have. Example fields might be:
Full Name
Town/City
Postcode
etc.
I've got the following, which works when you're only wanting to search by one field:
```
SELECT `users`.`ID`,
`users`.`username`,
`users`.`email`,
`data`.`value`,
`fields`.`name`
FROM `users`,
`fields`,
`data`
WHERE `data`.`Field_ID` = '2'
AND `data`.`value` LIKE 'london'
AND `users`.`ID` = `data`.`User_ID`
AND `data`.`Field_ID` = `fields`.`ID`
GROUP BY `users`.`ID`
```
But what if you want to search by multiple fields? For example, say I want to search for Full Name "Joe Bloggs" with Town/City set to "London"? This is the real sticking point for me.
Is something like this possible with MySQL? | I'm going with the assumption that "searching multiple fields" is talking about the [Entity-Attribute-Value structure](http://en.wikipedia.org/wiki/Entity%2Dattribute%2Dvalue_model).
In that case, I propose that the first step is to create a derived query - basically, we want to limit the "EAV data joined" to *only* include the records that have the values we are interested in finding. (I've altered some column names, but the same premise holds.)
```
SELECT d.userId
FROM data d
JOIN fields f
ON f.fieldId = d.fieldId
-- now that we establish data/field relation, filter rows
WHERE f.type = "location" AND d.value = "london"
OR f.type = "job" AND d.value = "programmer"
```
These resulting rows are derived from the *filtered* EAV triplets that match our conditions. Only the userId is selected in this case (as it will be used to join against the user relation), but it is also possible to push fieldId/value/etc through.
Then we can use *all of this* as a derived query:
```
SELECT *
FROM users u
JOIN (
-- look, just goes in here :)
SELECT DISTINCT d.userId
FROM data d
JOIN fields f
ON f.fieldId = d.fieldId
WHERE f.type = "location" AND d.value = "london"
OR f.type = "job" AND d.value = "programmer"
) AS e
ON e.userId = u.userId
```
Notes:
1. The query planner will figure all the RA stuff out peachy keen; don't worry about this "nesting" as there is no dependent subquery.
2. I avoid the use of implicit cross-joins as I feel they muddle most queries, this case being a particularly good example.
3. I've "cheated" and added a DISTINCT to the derived query. This will ensure that at most one record will be joined/returned per user and avoids the use of GROUP BY.
---
While the above gets "OR" semantics well (it's both easier and I may have misread the question), modifications are required to get "AND" semantics. Here are some ways that the derived query can be written to get such. (And at this point I must apologize to Tony - I forgot that I've already done all the plumbing to generate such queries trivially in my environment.)
Count the number of matches to ensure that all rows match. This will only work if each entity is unique per user. It also eliminates the need for DISTINCT to maintain correct multiplicity.
```
SELECT d.userId
FROM data d
JOIN fields f
ON f.fieldId = d.fieldId
-- now that we establish data/field relation, filter rows
WHERE f.type = "location" AND d.value = "london"
OR f.type = "job" AND d.value = "programmer"
GROUP BY d.userId
HAVING COUNT(*) = 2
```
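This count-the-matches variant is easy to verify with Python's bundled sqlite3 (the schema and rows below are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fields (fieldId INTEGER PRIMARY KEY, type TEXT);
CREATE TABLE data (userId INT, fieldId INT, value TEXT);
INSERT INTO fields VALUES (1,'location'),(2,'job');
INSERT INTO data VALUES
  (1,1,'london'),(1,2,'programmer'),  -- matches both conditions
  (2,1,'london'),(2,2,'plumber'),     -- matches only one
  (3,1,'paris'), (3,2,'programmer');  -- matches only one
""")

# OR-filter the EAV triplets, then demand that each user matched both pairs.
rows = con.execute("""
SELECT d.userId
FROM data d
JOIN fields f ON f.fieldId = d.fieldId
WHERE f.type = 'location' AND d.value = 'london'
   OR f.type = 'job'      AND d.value = 'programmer'
GROUP BY d.userId
HAVING COUNT(*) = 2
""").fetchall()
print(rows)  # only user 1 satisfies both attribute/value pairs
```

Because `AND` binds tighter than `OR`, the WHERE clause keeps exactly the triplets matching either pair, and the HAVING clause then enforces the conjunction.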
Find the intersecting matches:
```
SELECT d.userId
FROM data d
JOIN fields f ON f.fieldId = d.fieldId
WHERE f.type = "location" AND d.value = "london"
INTERSECT
SELECT d.userId
FROM data d
JOIN fields f ON f.fieldId = d.fieldId
WHERE f.type = "job" AND d.value = "programmer"
```
Using JOINS (see Tony's answer).
```
SELECT d1.userId
FROM data d1
JOIN data d2 ON d2.userId = d1.userId
JOIN fields f1 ON f1.fieldId = d1.fieldId
JOIN fields f2 ON f2.fieldId = d2.fieldId
-- requires AND here across row
WHERE f1.type = "location" AND d1.value = "london"
AND f2.type = "job" AND d2.value = "programmer"
```
An inner JOIN itself provides conjunction semantics when applied outside of the condition. In this case I show how to "re-normalize" the data. This can also be written such that [sub-]selects appear in the select clause.
```
SELECT userId
FROM (
-- renormalize, many SO questions on this
SELECT q1.userId, q1.value as location, q2.value as job
FROM (SELECT d.userId, d.value
FROM data d
JOIN fields f ON f.fieldId = d.fieldId
WHERE f.type = "location") AS q1
JOIN (SELECT d.userId, d.value
FROM data d
JOIN fields f ON f.fieldId = d.fieldId
WHERE f.type = "job") AS q2
ON q1.userId = q2.userId
) AS q
WHERE location = "london"
AND job = "programmer"
```
The above duplicity is relatively easy to generate via code and some databases (such as SQL Server) support CTEs which make writing such much simpler. YMMV. | If I understood you right, this is what you want:
```
FROM `users`,
`fields`,
`data` `location`
`data` `name`
WHERE `location`.`Field_ID` = '2'
AND `location`.`value` LIKE 'london'
AND `location`.`Field_ID` = `fields`.`ID`
AND `name`.`Field_ID` = 'whathere? something for its name'
AND `name`.`value` LIKE 'london'
AND `name`.`Field_ID` = `fields`.`ID`
AND `users`.`ID` = `data`.`User_ID`
```
I'd prefer joins though | Search a table based on multiple rows in another table | [
"",
"mysql",
"sql",
"entity-attribute-value",
"relational-division",
""
] |
I can't understand this statement - not even after googling around:
```
pv_no_list :='23,34,45,56';
SELECT DISTINCT REGEXP_SUBSTR (pv_no_list,
'[^,]+',
1,
LEVEL)
no_list
FROM DUAL
CONNECT BY REGEXP_SUBSTR (pv_no_list,
'[^,]+',
1,
LEVEL) IS NOT NULL
``` | The "abuse" (as Colin 't Hart put it) of `connected by` has a good purpose here:
by using `REGEXP_SUBSTR` you can extract only one of the 4 matches (23,34,45,56): the regex `[^,]+` matches any character sequence in the string which does not contain a comma.
If you'll try running:
```
SELECT REGEXP_SUBSTR ('23,34,45,56','[^,]+') as "token"
FROM DUAL
```
you'll get `23`.
and if you'll try running:
```
SELECT REGEXP_SUBSTR ('23,34,45,56','[^,]+',1,1) as "token"
FROM DUAL
```
you'll also get `23` only that now we also set *[two additional parameters](http://docs.oracle.com/cd/B12037_01/server.101/b10759/functions116.htm)*: start looking in position 1 (which is the default), and return the 1st occurrence.
Now lets run:
```
SELECT REGEXP_SUBSTR ('23,34,45,56','[^,]+',1,2) as "token"
FROM DUAL
```
this time we'll get `34` (2nd occurrence) and using `3` as the last parameter will return `45` and so on.
The use of the recursive `connect by` along with `level` makes sure you'll receive all the relevant results (not necessarily in the original order, though):
```
SELECT DISTINCT REGEXP_SUBSTR ('23,34,45,56','[^,]+',1,LEVEL) as "token"
FROM DUAL
CONNECT BY REGEXP_SUBSTR ('23,34,45,56','[^,]+',1,LEVEL) IS NOT NULL
order by 1
```
will return:
```
TOKEN
23
34
45
56
```
which not only contains all 4 results, but also breaks it into separate rows in the resultset!
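The Oracle-specific `CONNECT BY` trick does not port to other engines, but the same row-per-token split can be expressed as a recursive CTE; here is a sketch checked with Python's bundled sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Repeatedly peel the text before the first comma off the remaining string.
rows = con.execute("""
WITH RECURSIVE split(token, rest) AS (
    SELECT NULL, '23,34,45,56' || ','
    UNION ALL
    SELECT substr(rest, 1, instr(rest, ',') - 1),
           substr(rest, instr(rest, ',') + 1)
    FROM split
    WHERE rest <> ''
)
SELECT token FROM split WHERE token IS NOT NULL
""").fetchall()
print([r[0] for r in rows])
```

Appending a trailing comma before splitting plays the same role as wrapping the string in commas in the Oracle versions: every token is followed by a delimiter, so no special case is needed for the last one.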
If you'll **[fiddle](http://sqlfiddle.com/#!4/d41d8/18267)** with it - it might give you a clearer view of the subject. | `connect by` has nothing to do with `regex_substr`:
* The first is to perform a hierarchical query, see <http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm>
* The second is to get a substring using regular expressions.
This query "abuses" the `connect by` functionality to generate rows in a query on `dual`.
As long as the expression passed to `connect by` is true, it will generate a new row and increase the value of the pseudo column `LEVEL`.
Then `LEVEL` is passed to `regex_substr` to get the nth value when applying the regular expression. | connect by clause in regex_substr | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
In Oracle, I have columns called orderids
```
orderids
111,222,333
444,55,66
77,77
```
How can get the output as
```
Orderid
111
222
333
444
55
66
77
77
``` | Try this:
```
WITH TT AS
(SELECT orderid COL1 FROM orders)
SELECT substr(str,
instr(str, ',', 1, LEVEL) + 1,
instr(str, ',', 1, LEVEL + 1) -
instr(str, ',', 1, LEVEL) - 1) COL1
FROM (SELECT rownum AS r,
','|| COL1||',' AS STR
FROM TT )
CONNECT BY PRIOR r = r
AND instr(str, ',', 1, LEVEL + 1) > 0
AND PRIOR dbms_random.STRING('p', 10) IS NOT NULL
;
```

See this [SQLFiddle](http://sqlfiddle.com/#!4/c37f9/1) | This is one appraoch:
```
with order_table as (
select '111,222,333' as orderids from dual
union all select '444,55,66' from dual
union all select '77,77' from dual
)
select substr(orderids, instr(orderids, ',', 1, lvl) + 1, instr(orderids, ',', 1, lvl + 1) - instr(orderids, ',', 1, lvl) - 1) orderid
from
( select ',' || orderids || ',' as orderids from order_table ),
( select level as lvl from dual connect by level <= 100 )
where lvl <= length(orderids) - length(replace(orderids, ',')) - 1;
```
Just remove the WITH clause and replace the `order_table` with your real table. | Split comma separated values in Oracle 9i | [
"",
"sql",
"oracle",
"oracle9i",
""
] |
Good day everyone. Here is my code:
```
SELECT
'Expired Item -'+ DateName(mm,DATEADD(MM,4,AE.fld_LOAN)) as [Month]
,COUNT(PIT.fld_ID)'COUNT'
,SUM (PIT.fld_GRAM)'GRAMS'
,SUM (PH.fld_AMNT)'PRINCIPAL'
FROM #AllExpired AE
INNER JOIN Transactions.tbl_ITEM PIT
ON AE.fld_MAINID=PIT.fld_MAINID
INNER JOIN Transactions.tbl_HISTO PH
ON AE.fld_MAINID =PH.fld_MAINID
GROUP BY DATENAME(MM,(DATEADD(MM,4,AE.fld_LOAN)))
```
The problem I'm facing is that my **Count** function does not return 0 when there are no values, and my **Sum** function does not return NULL when no matching values are retrieved. Instead, the query just outputs nothing. Why is that, and how can I fix it?
Here is a screenshot of sample output.

Of course this is not what I want. I want it to output zero and null. Please help me, I do not know what's wrong. Thank you. | You cannot expect any records to be outputted when using a GROUP BY clause, when no records exist in your source.
If you want an output of 0 from the SUM and COUNT functions, then you should not use GROUP BY.
The reason is that when you have no records, the GROUP BY clause has nothing to group by, and then is not able to give you any output.
For example:
```
SELECT COUNT(*) FROM (SELECT 'Dummy' AS [Dummy] WHERE 1 = 0) DummyTable
```
will return one record with the value '0', whereas:
```
SELECT COUNT(*) FROM (SELECT 'Dummy' AS [Dummy] WHERE 1 = 0) DummyTable
GROUP BY [Dummy]
```
will return no records. | I would imagine you need to change your joins from INNER to OUTER to ensure rows are returned even when there is no corresponding record in tbl\_PawnItem -
```
SELECT
'Expired Item -'+ DateName(mm,DATEADD(MM,4,AE.fld_LoanDate)) as [Month]
,COUNT(PIT.fld_PawnItemID)'COUNT'
,SUM (PIT.fld_KaratGram)'GRAMS'
,SUM (PH.fld_PrincipalAmt)'PRINCIPAL'
FROM #AllExpired AE
LEFT JOIN Transactions.tbl_PawnItem PIT
ON AE.fld_PawnMainID=PIT.fld_PawnMainID
LEFT JOIN Transactions.tbl_PawnHisto PH
ON AE.fld_PawnMainID=PH.fld_PawnMainID
GROUP BY DATENAME(MM,(DATEADD(MM,4,AE.fld_LoanDate)))
``` | Count Returning blank instead of 0 | [
"",
"sql",
"t-sql",
"count",
"sql-server-2008-r2",
"sum",
""
] |
I am trying to use a result of a string split to where close in my sql condition
I have a table with a `varchar` column. I am trying to filter for rows where only one word is present.
eg. if the table have values like *'ABC DEF','XYZ','EGF HIJ'* and I am expecting to get only *'XYZ'* as result.
I am not sure what to use here; splitting each value in the column would be one way, but I am not sure how I could use that as a condition.
I had a look at some split samples, like the one below.
```
DECLARE @Str VARCHAR(100) ='Test Word'
SELECT SUBSTRING(@Str , 1, CHARINDEX(' ', @Str ) - 1) AS [First],
SUBSTRING(@Str , CHARINDEX(' ', @Str ) + 1, LEN(@Str )) AS [Last]
``` | To get only 'XYZ' from a column containing
```
tableName.fieldName
'ABC DEF'
'XYZ'
'EGF HIJ'
```
Do this
```
SELECT *
FROM tableName
WHERE CHARINDEX(' ',fieldname) = 0
``` | ***This also works:***
```
select SUBSTRING(EmpName,0,CHARINDEX(' ',EmpName)),SUBSTRING(EmpName,CHARINDEX(' ',EmpName),LEN(EMPNAME))
from tblemployee
``` | Split string by space | [
"",
"sql",
"sql-server-2008",
"split",
""
] |
I'm trying to select the last `enddate` per `nr`. In case the `nr` contains an `enddate` with value `NULL`, it means this `nr` is still active. In short I cannot use `MAX(enddate)` because out of `2013-09-25` and `NULL` it would select the date whereas I need `NULL`.
I tried the following query, though it seems that `NULL IN (enddate)` does not return what I expected, namely "true if the set contains at least one `NULL` value". In other words, `NULL` should outrank `MAX()`.
```
SELECT nr,
CASE WHEN NULL IN (enddate) THEN NULL ELSE MAX(enddate) END
FROM myTable
GROUP BY nr
```
Does someone know how to replace this expression? | You can use the query below. It returns NULL before other dates (provided that you put a date great enough) and then restores NULL.
```
SELECT nr, CASE d WHEN '20990101' THEN NULL ELSE d END AS d
FROM (
    SELECT nr,
           MAX(ISNULL(enddate, '20990101')) AS d
    FROM myTable
    GROUP BY nr
) t
```
I couldn't check the syntax so there may be small typos. | You could use this query. The CTE calculates the maximum date ignoring any nulls, this is then left joined back to the table to see if there is a null value for each nr value. The case statement returns a null if it exists or the maximum date from the CTE.
```
WITH CTE1 AS
(SELECT nr, MAX(enddate) MaxEnddate
FROM myTable
GROUP BY nr)
SELECT CTE1.nr,
CASE WHEN MyTable.enddate IS NULL AND MyTable.NR IS NOT NULL THEN NULL ELSE CTE1.MaxEndDate END AS EndDate
FROM CTE1
LEFT JOIN MyTable
ON MyTable.nr=CTE1.nr
AND MyTable.enddate IS NULL
``` | Array contains NULL value | [
"",
"sql",
"sql-server",
"arrays",
"null",
"expression",
""
] |
Sorry, I know you want example code, but unfortunately I have absolutely no idea how to implement this functionality or what to search for.
Let's pretend we have this relation:
```
+----+-------+---------+
| id | name  | quality |
+----+-------+---------+
| 1  | Will  |       4 |
| 2  | Joe   |       9 |
| 3  | Bill  |       2 |
| 4  | Max   |       1 |
| 5  | Jan   |      10 |
| 6  | Susi  |       5 |
| 7  | Chris |       6 |
| 8  | Noa   |       9 |
| 9  | Sisi  |       4 |
+----+-------+---------+
```
Now I need a subset based on the data I am searching for.
For instance I'm searching for the record with id 5.
In my result I need more than Jan's record; I need the two records before Jan and the two records after Jan too. So I have the following result set:
```
+----+-------+---------+
| id | name  | quality |
+----+-------+---------+
| 3  | Bill  |       2 |
| 4  | Max   |       1 |
| 5  | Jan   |      10 |
| 6  | Susi  |       5 |
| 7  | Chris |       6 |
+----+-------+---------+
```
Could you give me some keywords for solving this problem? | Assuming the ID field is always in a preset order (two before and two after the specific row can be counted via the ID field), you can do one of two simple queries:
```
SELECT *
FROM tbl1
WHERE id BETWEEN 3 AND 7
```
(An SQL Fiddle demo can be found [here](http://sqlfiddle.com/#!2/126a4/2))
or:
```
SET @a:=5;
SELECT *
FROM tbl1
WHERE id BETWEEN @a - 2 AND @a + 2
```
(An SQL Fiddle demo can be found [here](http://sqlfiddle.com/#!2/126a4/7))
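For a quick local check of the same slice, here is a sketch with Python's bundled sqlite3, using a bound parameter in place of MySQL's `@a` user variable (table and data follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl1 (id INTEGER PRIMARY KEY, name TEXT, quality INT);
INSERT INTO tbl1 VALUES (1,'Will',4),(2,'Joe',9),(3,'Bill',2),(4,'Max',1),
                        (5,'Jan',10),(6,'Susi',5),(7,'Chris',6),(8,'Noa',9),(9,'Sisi',4);
""")

target = 5  # the id we searched for
rows = con.execute(
    "SELECT id, name FROM tbl1 WHERE id BETWEEN ? - 2 AND ? + 2 ORDER BY id",
    (target, target),
).fetchall()
print(rows)
```

As expected, the window covers ids 3 through 7: Bill, Max, Jan, Susi, and Chris.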
I hope this answers your question.
Please let me know if you need anything else.
Good luck! | ```
SET @search_id = 5;
SELECT *
FROM my_table
WHERE id BETWEEN (@search_id - 2) AND (@search_id + 2);
``` | sql show partition based on found record in mysql | [
"",
"mysql",
"sql",
""
] |
I'm working on my project in PHP [handling a student attendance system]. I have a list of students along with their unique id [`jntuno`], and I need to create a database in MySQL for storing the daily attendance of each student for each subject. So I created my tables this way:
I have a table `students` in mysql with the following fields and data in it :

Now I want to create a new table with each of the values in the `jntuno` field as columns of my new table.
I want my new table [let us name it `attendance`] to have columns like this :
```
+------------+-----------+----------+-----------+-----------+
|11341A0501 |11341A0502 |11341A0503|11341A0504 |11341A0505 |......
+------------+-----------+----------+-----------+-----------+
| | | | | |
```
**How to do this in mysql ?**
I will later add 3 fields to the `attendance` table namely :
-> `date`[the date on which a particular subject is taught] ,
->`subject`[the name of the subject taught] and
->`hours taught`[the number of hours for which a particular subject is taught(can be 1 or 2 or 3 ... upto 6)]
every `subject` taught on a particular `date` will add a new `row` to the `attendance` table
**Example:**
```
+------------+-----------+-----------------+------------+-----------+----------+-----------+-----------+
|date |subject | classes taught |11341A0501 |11341A0502 |11341A0503|11341A0504 |11341A0505 |..
+------------+-----------+-----------------+------------+-----------+----------+-----------+-----------+
|2013-09-31 |OOPS |3 |2 |3 |0 |1 |3 |
```
I chose the tables this way so that entering attendance into the table would be faster.
But many call this a **BAD DATABASE STRUCTURE**. So please suggest a better and more efficient database design for my problem. | Create the new table with the following statements:
```
select @s:=concat('create table students_col (',group_concat(jntunno,' CHAR(10)' order by slno),')') from students;
prepare stmt from @s;
execute stmt;
deallocate prepare stmt;
```
Observe how the `CREATE TABLE` statement is constructed using `group_concat`.
Demo: [SQL Fiddle](http://sqlfiddle.com/#!2/872b1/1)
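The same build-the-DDL-from-data idea can be sketched outside MySQL. This Python + sqlite3 version assembles the `CREATE TABLE` text in application code, since SQLite has no `PREPARE`/`EXECUTE`; the student rows come from the question, the rest is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students (slno INT, jntuno TEXT, name TEXT);
INSERT INTO students VALUES (1,'11341A0501','ADARI GOPI'),
                            (2,'11341A0502','BALU'),
                            (3,'11341A0503','GEETHA');
""")

# Build one column per jntuno value, quoting identifiers that start with a digit.
cols = [r[0] for r in con.execute("SELECT jntuno FROM students ORDER BY slno")]
ddl = "CREATE TABLE attendance_wide (%s)" % ", ".join('"%s" INT' % c for c in cols)
con.execute(ddl)

names = [d[1] for d in con.execute("PRAGMA table_info(attendance_wide)")]
print(names)
```

This also makes the drawback visible: every new student means an `ALTER TABLE`, which is exactly why a normalized attendance table is usually preferred.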
In case you also want to insert the names, this is the statement to it:
```
select @s:=concat('insert into students_col values (',group_concat(concat('"',name,'"') order by slno),')') from students;
prepare stmt from @s;
execute stmt;
deallocate prepare stmt;
select * from students_col;
```
Here is my whole trail:
```
mysql> drop table if exists students;
Query OK, 0 rows affected (0.00 sec)
mysql> create table students (slno integer, jntunno char(10), name varchar(50));
Query OK, 0 rows affected (0.07 sec)
mysql> insert into students values (1,'1134A0501','ADARI GOPI');
Query OK, 1 row affected (0.00 sec)
mysql> insert into students values (2,'1134A0502','BALU');
Query OK, 1 row affected (0.00 sec)
mysql> insert into students values (3,'1134A0503','GEETHA');
Query OK, 1 row affected (0.00 sec)
mysql> drop table if exists students_col;
Query OK, 0 rows affected (0.00 sec)
mysql> select @s:=concat('create table students_col (',group_concat(jntunno,' CHAR(10)' order by slno),')') from students;
+-----------------------------------------------------------------------------------------------+
| @s:=concat('create table students_col (',group_concat(jntunno,' CHAR(10)' order by slno),')') |
+-----------------------------------------------------------------------------------------------+
| create table students_col (1134A0501 CHAR(10),1134A0502 CHAR(10),1134A0503 CHAR(10)) |
+-----------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> prepare stmt from @s;
Query OK, 0 rows affected (0.00 sec)
Statement prepared
mysql> execute stmt;
Query OK, 0 rows affected (0.21 sec)
mysql> deallocate prepare stmt;
Query OK, 0 rows affected (0.01 sec)
mysql>
mysql> select @s:=concat('insert into students_col values (',group_concat(concat('"',name,'"') order by slno),')') from students;
+------------------------------------------------------------------------------------------------------+
| @s:=concat('insert into students_col values (',group_concat(concat('"',name,'"') order by slno),')') |
+------------------------------------------------------------------------------------------------------+
| insert into students_col values ("ADARI GOPI","BALU","GEETHA") |
+------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> prepare stmt from @s;
Query OK, 0 rows affected (0.00 sec)
Statement prepared
mysql> execute stmt;
Query OK, 1 row affected (0.00 sec)
mysql> deallocate prepare stmt;
Query OK, 0 rows affected (0.00 sec)
mysql>
mysql> select * from students_col;
+------------+-----------+-----------+
| 1134A0501 | 1134A0502 | 1134A0503 |
+------------+-----------+-----------+
| ADARI GOPI | BALU | GEETHA |
+------------+-----------+-----------+
1 row in set (0.00 sec)
mysql>
``` | This procedure will do the work:
```
DELIMITER ||
DROP PROCEDURE IF EXISTS `test`.`pivot`;
CREATE PROCEDURE `test`.`pivot`()
MODIFIES SQL DATA
BEGIN
DROP TABLE IF EXISTS `test`.`new_table`;
SELECT GROUP_CONCAT(CONCAT(`jntunno`, ' CHAR(10) NOT NULL') SEPARATOR ', ') FROM `test`.`students` INTO @sql;
SET @sql := CONCAT('CREATE TABLE `test`.`new_table` (', @sql, ') ENGINE=InnoDB;');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SET @sql := NULL;
END;
||
DELIMITER ;
```
If you cannot use stored procedures, you can easily translate that code into PHP, or any language you use. | Creating columns of a table using the rows of another | [
"",
"mysql",
"sql",
"database-design",
"pivot-table",
""
] |
Let's say I was looking for the second-highest record.
Sample Table:
```
CREATE TABLE `my_table` (
`id` int(2) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`value` int(10),
PRIMARY KEY (`id`)
);
INSERT INTO `my_table` (`id`, `name`, `value`) VALUES (NULL, 'foo', '200'), (NULL, 'bar', '100'), (NULL, 'baz', '0'), (NULL, 'quux', '300');
```
The row with the second-highest value is `foo`. How many ways can you get this result?
The obvious example is:
```
SELECT name FROM my_table ORDER BY value DESC LIMIT 1 OFFSET 1;
```
Can you think of other examples?
I was trying this one, but `LIMIT & IN/ALL/ANY/SOME subquery` is not supported.
```
SELECT name FROM my_table WHERE value IN (
SELECT MIN(value) FROM my_table ORDER BY value DESC LIMIT 1
) LIMIT 1;
``` | This only works for **exactly** the second highest:
```
SELECT * FROM my_table two
WHERE EXISTS (
SELECT * FROM my_table one
WHERE one.value > two.value
AND NOT EXISTS (
SELECT * FROM my_table zero
WHERE zero.value > one.value
)
)
LIMIT 1
;
```
This one emulates a window function rank() for platforms that don't have them. It can also be adapted for ranks <> 2 by altering one constant:
```
SELECT one.*
-- , 1+COALESCE(agg.rnk,0) AS rnk
FROM my_table one
LEFT JOIN (
SELECT one.id , COUNT(*) AS rnk
FROM my_table one
JOIN my_table cnt ON cnt.value > one.value
GROUP BY one.id
) agg ON agg.id = one.id
WHERE agg.rnk=1 -- the aggregate starts counting at zero
;
```
Both solutions need functional self-joins (I don't know if mysql allows them, IIRC it only disallows them if the table is the target for updates or deletes)
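The count-rows-with-a-greater-value idea behind the second query is easy to check against a throwaway SQLite database (and for the record, MySQL does allow self-joins in `SELECT`; the restriction is only on `UPDATE`/`DELETE` targets):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, name TEXT, value INTEGER)")
conn.executemany("INSERT INTO my_table (name, value) VALUES (?, ?)",
                 [("foo", 200), ("bar", 100), ("baz", 0), ("quux", 300)])

# rank = 1 + number of rows with a strictly greater value;
# exactly one greater value means "second highest"
rows = conn.execute("""
    SELECT one.name
    FROM my_table one
    JOIN my_table cnt ON cnt.value > one.value
    GROUP BY one.id, one.name
    HAVING COUNT(*) = 1
""").fetchall()
print(rows)   # -> [('foo',)]
```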
The below one does not need window functions, but uses a recursive query to enumerate the rankings:
```
WITH RECURSIVE agg AS (
SELECT one.id
, one.value
, 1 AS rnk
FROM my_table one
WHERE NOT EXISTS (
SELECT * FROM my_table zero
WHERE zero.value > one.value
)
UNION ALL
SELECT two.id
, two.value
, agg.rnk+1 AS rnk
FROM my_table two
JOIN agg ON two.value < agg.value
WHERE NOT EXISTS (
SELECT * FROM my_table nx
WHERE nx.value > two.value
AND nx.value < agg.value
)
)
SELECT * FROM agg
WHERE rnk = 2
;
```
(the recursive query will not work in mysql, obviously) | Eduardo's solution in standard SQL
```
select *
from (
select id,
name,
value,
row_number() over (order by value) as rn
from my_table t
) t
where rn = 1 -- can pick any row using this
```
This works on any modern DBMS except MySQL. This solution is usually faster than solutions using sub-selects. It also can easily return the 2nd, 3rd, ... row (again this is achievable with Eduardo's solution as well).
It can also be adjusted to count by groups (adding a `partition by`) so the "greatest-n-per-group" problem can be solved with the same pattern.
Here is a SQLFiddle to play around with: <http://sqlfiddle.com/#!12/286d0/1> | How many different ways are there to get the second row in a SQL search? | [
"",
"mysql",
"sql",
""
] |
I have three MySQL tables: 'people', 'people2id', 'places'. The first one stores information about people, while the 'places' table stores information about their addresses. The middle one, 'people2id', interconnects these two tables (it could be that one person has two or more addresses).
Now I want to go to some person's profile and see his profile, but also his associated addresses. I created this query for that:
```
SELECT * FROM people p
JOIN people2id e ON p.peopleId = e.peopleId
JOIN places a ON a.peopleId = e.peopleId
WHERE p.peopleId = number
```
This works when the person has associated address(es), otherwise, it will fail. I do not understand if I should use any kind of JOIN or use UNION for this matter. | Change the `JOIN` to `LEFT JOIN`, i.e.
```
SELECT * FROM people p
LEFT JOIN people2id e ON p.peopleId = e.peopleId
LEFT JOIN places a ON a.peopleId = e.peopleId
WHERE p.peopleId = number
```
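A quick way to see the behavior, using SQLite and a simplified, hypothetical version of the schema (a `placeId` link column is assumed here purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (peopleId INTEGER, name TEXT);
    CREATE TABLE people2id (peopleId INTEGER, placeId INTEGER);
    CREATE TABLE places (placeId INTEGER, address TEXT);
    INSERT INTO people VALUES (1, 'Ann'), (2, 'Bob');  -- Bob has no address
    INSERT INTO people2id VALUES (1, 10);
    INSERT INTO places VALUES (10, 'Main St');
""")

rows = conn.execute("""
    SELECT p.name, a.address
    FROM people p
    LEFT JOIN people2id e ON p.peopleId = e.peopleId
    LEFT JOIN places a ON a.placeId = e.placeId
    WHERE p.peopleId = 2
""").fetchall()
print(rows)   # -> [('Bob', None)]  (the profile still comes back)
```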
Using `LEFT JOIN` will include all the records from `people`, whether they contain associated records or not. | **UNION** is used to get the same kind of information from two different queries and return it as a single result set
**JOIN** is used to *add columns and data to rows* (it is not quite that simple, but you can think of it like this) | Confused about using UNION or JOIN | [
"",
"mysql",
"sql",
"database-design",
"join",
"union",
""
] |
Let's say I have the following schema
```
Company:
-> company_id
-> company_name
Building_to_company:
-> building_id
-> company_id
```
So each building has its own id as well as a company id which relates it to a single company.
the following query gives two columns -- one for the company name, and then its associated buildings.
```
SELECT company.company_name, building_to_company.building_id
FROM company, building_to_company
WHERE company.company_id = building_to_company.company_id;
```
The returned table would look something like this:
```
Company Name | Building Id
Smith Banking 2001
Smith Banking 0034
Smith Banking 0101
Smith Banking 4055
Reynolds 8191
TradeCo 7119
TradeCo 8510
```
So that's all simple enough.
But I need to do something a bit different. I need 2 columns. One for the company name and then on the right the number of buildings it owns. And then for a little extra challenge I only want to list companies with 3 or less buildings.
At this point the only real progress I've made is coming up with the query above. I know I somehow have to use count on the building\_id column and count the number of buildings associated with each company. And then at that point I can limit things by using something like `WHERE x < 4` | You've basically got it in words already. Assuming `company_name` is unique, all you have to add to your explanation to get it to work is a `GROUP BY` clause:
```
SELECT company.company_name, COUNT(building_to_company.building_id)
FROM company
INNER JOIN building_to_company
ON company.company_id = building_to_company.company_id
GROUP BY company.company_name
```
([SQL Fiddle demo of this query in action](http://www.sqlfiddle.com/#!3/98a84/1))
---
To limit it to companies with 3 or less buildings, the key is you have to use a `HAVING` clause and not `WHERE`. This is because you want to filter based on the results of an aggregate (`COUNT`); simply put, `WHERE` filters come before aggregation and `HAVING` come after:
```
SELECT company.company_name, COUNT(building_to_company.building_id)
FROM company
INNER JOIN building_to_company
ON company.company_id = building_to_company.company_id
GROUP BY company.company_name
HAVING COUNT(building_to_company.building_id) < 4
```
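Both queries can be sanity-checked against a throwaway in-memory SQLite database (same schema, made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company (company_id INTEGER, company_name TEXT);
    CREATE TABLE building_to_company (building_id INTEGER, company_id INTEGER);
    INSERT INTO company VALUES (1, 'Smith Banking'), (2, 'Reynolds'), (3, 'TradeCo');
    INSERT INTO building_to_company VALUES
        (2001, 1), (34, 1), (101, 1), (4055, 1),  -- Smith: 4 buildings
        (8191, 2),                                 -- Reynolds: 1
        (7119, 3), (8510, 3);                      -- TradeCo: 2
""")

# HAVING filters on the aggregate, after grouping
rows = conn.execute("""
    SELECT c.company_name, COUNT(b.building_id)
    FROM company c
    JOIN building_to_company b ON c.company_id = b.company_id
    GROUP BY c.company_name
    HAVING COUNT(b.building_id) < 4
    ORDER BY c.company_name
""").fetchall()
print(rows)   # -> [('Reynolds', 1), ('TradeCo', 2)]
```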
([SQL Fiddle demo of this query in action](http://www.sqlfiddle.com/#!3/98a84/2)) | I think you want something like this
```
SELECT c.company_name, count(b.building_id)
FROM
company as c,
building_to_company as b
WHERE c.company_id = b.company_id
GROUP BY c.company_name;
``` | Pretty basic SQL query involving count | [
"",
"mysql",
"sql",
""
] |
I have a lot of stored procedures, but I am getting request timeouts only sometimes, and only for this SP. What could cause this?
```
ALTER PROCEDURE [dbo].[Insertorupdatedevicecatalog]
(@OS NVARCHAR(50)
,@UniqueID VARCHAR(500)
,@Longitude FLOAT
,@Latitude FLOAT
,@Culture VARCHAR(10)
,@Other NVARCHAR(200)
,@IPAddress VARCHAR(50)
,@NativeDeviceID VARCHAR(50))
AS
BEGIN
SET NOCOUNT ON;
DECLARE @TranCount INT;
SET @TranCount = @@TRANCOUNT;
DECLARE @OldUniqueID VARCHAR(500) = ''-1'';
SELECT @OldUniqueID = [UniqueID] FROM DeviceCatalog WHERE (@NativeDeviceID != '''' AND [NativeDeviceID] = @NativeDeviceID);
BEGIN TRY
IF @TranCount = 0
BEGIN TRANSACTION
ELSE
SAVE TRANSACTION Insertorupdatedevicecatalog;
DECLARE @Geo GEOGRAPHY = geography::STGeomFromText(''POINT('' + CONVERT(VARCHAR(100), @Longitude) + '' '' + CONVERT(VARCHAR(100), @Latitude) + '')'', 4326);
IF EXISTS(SELECT 1 FROM DeviceCatalog WHERE [UniqueID] = @UniqueID)
BEGIN
DECLARE @OldGeo GEOGRAPHY
,@OldCity NVARCHAR(100)
,@OldCountry NVARCHAR(100)
,@OldAddress NVARCHAR(100);
SELECT @OldGeo = [LastUpdatedLocationFromJob]
,@OldCity = [City]
,@OldCountry = [Country]
,@OldAddress = [Address]
FROM DeviceCatalog
WHERE [UniqueID] = @UniqueID;
UPDATE DeviceCatalog
SET [OS] = @OS
,[Location] = @Geo
,[Culture] = @Culture
,[Other] = @Other
,[IPAddress] = @IPAddress
WHERE [UniqueID] = @UniqueID;
IF (@OldGeo IS NULL OR @OldAddress IS NULL OR @OldCity IS NULL OR @OldCountry IS NULL OR ISNULL(@Geo.STDistance(@OldGeo) / 1000,0) > 50)
BEGIN
UPDATE DeviceCatalog
SET [Lastmodifieddate] = Getdate()
WHERE [UniqueID] = @UniqueID;
END
END
ELSE
BEGIN
INSERT INTO DeviceCatalog
([OS]
,[UniqueID]
,[Location]
,[Culture]
,[Other]
,[IPAddress]
,[NativeDeviceID])
VALUES (@OS
,@UniqueID
,@Geo
,@Culture
,@Other
,@IPAddress
,@NativeDeviceID);
IF(@OldUniqueID != ''-1'' AND @OldUniqueID != @UniqueID)
BEGIN
EXEC DeleteOldAndroidDeviceID @OldUniqueID, @UniqueID;
END
END
LBEXIT:
IF @TranCount = 0
COMMIT;
END TRY
BEGIN CATCH
DECLARE @Error INT, @Message VARCHAR(4000), @XState INT;
SELECT @Error = ERROR_NUMBER() ,@Message = ERROR_MESSAGE() ,@XState = XACT_STATE();
IF @XState = -1
ROLLBACK;
IF @XState = 1 AND @TranCount = 0
rollback
IF @XState = 1 AND @TranCount > 0
ROLLBACK TRANSACTION Insertorupdatedevicecatalog;
RAISERROR (''Insertorupdatedevicecatalog: %d: %s'', 16, 1, @error, @message) ;
END CATCH
END
``` | The timeout occurs because of the two updates to the same table inside the same transaction. You can avoid the second update with a CASE expression, and the whole IF/ELSE can be replaced with a MERGE.
```
MERGE INTO DeviceCatalog DC
USING (SELECT @UniqueID AS UniqueID) T ON (DC.UniqueID = T.UniqueID)
WHEN MATCHED THEN
UPDATE SET [OS] = @OS
,[Location] = @Geo
,[Culture] = @Culture
,[Other] = @Other
,[IPAddress] = @IPAddress
,[Lastmodifieddate] = (CASE
WHEN (LastUpdatedLocationFromJob IS NULL OR [Address] IS NULL OR [City] IS NULL OR [Country] IS NULL OR ISNULL(@Geo.STDistance(LastUpdatedLocationFromJob) / 1000,0) > 50)
THEN Getdate()
ELSE [Lastmodifieddate]
END)
WHEN NOT MATCHED THEN
      INSERT
([OS]
,[UniqueID]
,[Location]
,[Culture]
,[Other]
,[IPAddress]
,[NativeDeviceID])
VALUES (@OS
,@UniqueID
,@Geo
,@Culture
,@Other
,@IPAddress
,@NativeDeviceID)
WHEN NOT MATCHED BY SOURCE AND @OldUniqueID != ''-1'' AND @OldUniqueID != @UniqueID THEN
DELETE;
```
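The `CASE`-in-`SET` part (folding the conditional second `UPDATE` into the first) is standard SQL; here is a minimal SQLite illustration of that one trick, with made-up columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE device (id INTEGER, ip TEXT, moved INTEGER, last_modified TEXT);
    INSERT INTO device VALUES (1, 'old', 1, '2013-01-01'),
                              (2, 'old', 0, '2013-01-01');
""")

# One statement updates ip on every row, but touches last_modified
# only where the "moved" condition holds; no second UPDATE needed.
conn.execute("""
    UPDATE device
    SET ip = 'new',
        last_modified = CASE WHEN moved = 1 THEN '2013-10-01'
                             ELSE last_modified END
""")
rows = conn.execute(
    "SELECT id, ip, last_modified FROM device ORDER BY id").fetchall()
print(rows)   # -> [(1, 'new', '2013-10-01'), (2, 'new', '2013-01-01')]
```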
Try it and check whether this is what you expected. | Already discussed [here](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/cd804f22-875e-484f-b425-2a0acd9c0111/limiting-concurrent-execution-of-a-tsql-stored-procedure)
You can achieve it using sp\_getapplock in TSQL.
But you need a wrapper stored proc or batch for this. Check the following example; it will help you design your wrapper SP/batch statement.
**Sample Code Snippet**
```
Create table MyTable
(
RowId int identity(1,1),
HitStartedAt datetime,
HitTimestamp datetime,
UserName varchar(100)
)
Go
Create proc LegacyProc (@user varchar(100), @CalledTime datetime)
as
Begin
Insert Into MyTable
Values(@CalledTime, getdate(), @user);
--To wait for 10 sec : not required for your procedures, producing the latency to check the concurrent users action
WAITFOR DELAY '000:00:10'
End
Go
Create Proc MyProc
(
@user varchar(100)
)
as
Begin
Declare @PorcName as NVarchar(1000), @CalledTime datetime
Begin Tran
--To get the Current SP Name, it should be unique for each SP / each batch
SET @PorcName = object_name(@@ProcID)
SET @CalledTime = Getdate()
--Lock the Current Proc
Exec sp_getapplock @Resource = @PorcName, @LockMode = 'Exclusive'
--Execute Your Legacy Procedures
Exec LegacyProc @user, @CalledTime
--Release the lock
Exec sp_releaseapplock @Resource = @PorcName
Commit Tran
End
``` | Getting Request Timeout sometimes from only one SP? | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a table with products and each product has: brand, collection, name.
I want users to search through the products but I would like to search all three fields for those searchwords because the user probably won't always know if their searchword is for brand, collection or name. A user can easily be searching for 'searchname searchbrand' and then I would like to return the records that have brand as searchname and searchbrand but also the names that contain anything like searchname or searchbrand.
I can only come up with queries that I doubt will perform well at all. What would be the best way to address this? Create an extra field in the table in which I combine brand, collection and name into one field and then search through that? Or create a new table with only one field that is a combination of brand, collection and name and search through that?
Gabrie | It looks like what you need is a full text search: <http://dev.mysql.com/doc/refman/5.5/en/fulltext-boolean.html>
```
SELECT * FROM products WHERE MATCH (brand, collection, name) AGAINST ('+searchname +searchbrand' IN BOOLEAN MODE)
```
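If a full-text index is not an option, the same search-all-three-fields idea can be approximated with plain `LIKE` over the concatenated columns (sketched here in SQLite with made-up data; slower, and without relevance ranking):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (brand TEXT, collection TEXT, name TEXT);
    INSERT INTO products VALUES
        ('Acme', 'Summer', 'Red Chair'),
        ('Bolt', 'Winter', 'Acme Lamp'),
        ('Cline', 'Spring', 'Desk');
""")

term = 'acme'   # the user's search word; matched against all three fields
rows = conn.execute("""
    SELECT brand, name FROM products
    WHERE brand || ' ' || collection || ' ' || name LIKE ?
    ORDER BY brand
""", ('%' + term + '%',)).fetchall()
print(rows)   # -> [('Acme', 'Red Chair'), ('Bolt', 'Acme Lamp')]
```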
For a better performance, you still could create a separate table with one field containing a combination of brand, collection and name (don't forget the full text index).
```
SELECT * FROM products_search WHERE MATCH (combination) AGAINST ('+searchname +searchbrand' IN BOOLEAN MODE)
``` | MySQL provides FULLTEXT, please refer to this link to know more on this
<http://www.petefreitag.com/item/477.c> | Need tip on searching through multiple fields [mysql optimization] | [
"",
"mysql",
"sql",
"performance",
"search",
""
] |
I have the following `Split` function,
```
ALTER FUNCTION [dbo].[Split](@String varchar(8000), @Delimiter char(1))
returns @temptable TABLE (items varchar(8000))
as
begin
set @String = RTRIM(LTRIM(@String))
declare @idx int
declare @slice varchar(8000)
select @idx = 1
if len(@String)<1 or @String is null return
while @idx!= 0
begin
set @idx = charindex(@Delimiter,@String)
if @idx!=0
set @slice = left(@String,@idx - 1)
else
set @slice = @String
if(len(@slice)>0)
insert into @temptable(Items) values(@slice)
set @String = right(@String,len(@String) - @idx)
if len(@String) = 0 break
end
return
end
```
When I write,
```
SELECT Items
FROM Split('around the home,clean and protect,soaps and air fresheners,air fresheners',',')
```
This will give me,
```
air fresheners
around the home
clean and protect
soaps and air fresheners
```
I need to maintain the order. | A simpler function:
```
CREATE FUNCTION dbo.SplitStrings_Ordered
(
@List nvarchar(MAX),
@Delimiter nvarchar(255)
)
RETURNS TABLE
AS
RETURN
(
SELECT [Index] = CHARINDEX(@Delimiter, @List + @Delimiter, Number),
Item = SUBSTRING(@List, Number, CHARINDEX(@Delimiter,
@List + @Delimiter, Number) - Number)
FROM
(
SELECT ROW_NUMBER() OVER (ORDER BY [object_id]) FROM sys.all_objects
) AS n(Number)
WHERE Number <= CONVERT(INT, LEN(@List))
AND SUBSTRING(@Delimiter + @List, Number, LEN(@Delimiter)) = @Delimiter
);
GO
```
Sample usage:
```
DECLARE @s nvarchar(MAX) = N',around the home,clean and protect,soaps and air'
+ ' fresheners,air fresheners';
SELECT Item FROM dbo.SplitStrings_Ordered(@s, N',') ORDER BY [Index];
```
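The key idea above is that each item carries its character position (`[Index]`), which doubles as a sort key; for comparison, the same position-as-ordering trick in plain Python:

```python
def split_ordered(s, delim):
    """Return (index, item) pairs; index is the 1-based start position."""
    out, pos = [], 0
    for part in s.split(delim):
        out.append((pos + 1, part))
        pos += len(part) + len(delim)
    return out

s = 'around the home,clean and protect,soaps and air fresheners,air fresheners'
pairs = split_ordered(s, ',')
print([item for _, item in sorted(pairs)])
# -> ['around the home', 'clean and protect',
#     'soaps and air fresheners', 'air fresheners']
```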
Or to return orders from a table ordered by input:
```
SELECT o.OrderID
FROM dbo.Orders AS o
    INNER JOIN dbo.SplitStrings_Ordered('123,789,456', N',') AS f
ON o.OrderID = CONVERT(int, f.Item)
ORDER BY f.[Index];
``` | Your function will need to set an order column (seq in this sample):
```
ALTER FUNCTION [dbo].[Split](@String varchar(8000), @Delimiter char(1))
returns @temptable TABLE (seq int, items varchar(8000))
as
begin
set @String = RTRIM(LTRIM(@String))
declare @idx int
declare @seq int
declare @slice varchar(8000)
set @seq=1
select @idx = 1
if len(@String)<1 or @String is null return
while @idx!= 0
begin
set @idx = charindex(@Delimiter,@String)
if @idx!=0
set @slice = left(@String,@idx - 1)
else
set @slice = @String
if(len(@slice)>0)
begin
set @seq = @seq + 1
insert into @temptable(seq, Items) values(@seq,@slice)
end
set @String = right(@String,len(@String) - @idx)
if len(@String) = 0 break
end
return
end
GO
SELECT * FROM Split('around the home,clean and protect,soaps and air fresheners,air fresheners',',') order by seq
``` | SQL Split Function and Ordering Issue? | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
I have the following Oracle statement that returns grouped results, for example 10 in one group and 8 in a second group.
I am having trouble counting them, as it seems that `count(*) as stevilo_prosenj` doesn't work properly, so I would appreciate it if someone could help me get the count of how many rows there are in each group.
Thank you
```
SELECT
PZB."OBD_ZA_POSILJANJE_PROSENJ_ID",
PZB."VISOKOSOLSKO_SREDISCE_ID",
leto.STUDIJSKO_LETO_ID,
OBD.VRSTA_PROSNJE vrsta_prosnje_string,
stdm.NAZIV_VRSTE_STUD_DOMOV naziv_vrste_stud_domov_ ,
stdm.naziv naziv_stud_dom,
SIFSTAT.OZNAKA OZNAKA_STATUSA_PROSNJE,
count(*) as stevilo_prosenj
FROM MSB_PROSNJE_ZA_BIVANJE PZB,
S_OBD_POS_PRS_ZA_SUBV_VW OBD,
S_VISOKOSOLSKO_SREDISCE_VW vss,
S_STUDIJSKA_LETA_VW leto,
S_STUDENTSKI_DOMOVI_VW stdm,
MSB_SIF_STATUSOV_PROSENJ sifstat
WHERE PZB.OBD_ZA_POSILJANJE_PROSENJ_ID = OBD.OBD_ZA_POSILJANJE_PROSENJ_ID
AND vss.visokosolsko_sredisce_id = pzb.visokosolsko_sredisce_id
AND obd.studijsko_leto_id = leto.studijsko_leto_id
GROUP BY
PZB."OBD_ZA_POSILJANJE_PROSENJ_ID",
PZB."VISOKOSOLSKO_SREDISCE_ID",
leto.STUDIJSKO_LETO_ID,
OBD.VRSTA_PROSNJE,
stdm.NAZIV_VRSTE_STUD_DOMOV,
stdm.naziv,
SIFSTAT.OZNAKA
``` | I'm going to assume that you want a GROUP BY, but not on all of your SELECTed columns...
If your database supports it, you could
```
SELECT columnA, columnB, columnC, count(*) OVER (PARTITION BY columnA, columnB)
FROM table;
```
and otherwise, you'll have to select it manually
```
SELECT a.columnA, a.columnB, a.columnC, b.countResult
FROM ( SELECT columnA, columnB, columnC
FROM table
WHERE conditions
) a
JOIN
( SELECT columnA, columnB, count(*) AS countResult
FROM table
GROUP BY columnA, columnB
) b
ON a.columnA = b.columnA
AND a.columnB = b.columnB
/
``` | You join tables in your FROM statement but `S_STUDENTSKI_DOMOVI_VW stdm, MSB_SIF_STATUSOV_PROSENJ sifstat` tables aren't connected to others in the WHERE statement so they connect with all rows and you get FULL JOIN without conditions so COUNT outputs results you haven't expected (Multiplied to both tables row counts). | Oracle count * and groupBy | [
"",
"sql",
"count",
"group-by",
""
] |
Suppose a Nested query is like :
```
SELECT *
FROM Table1
WHERE Table1.val in
(
Select Table2.val
from Table2
where Table2.val>3
)
```
So my question is how do actual query processors evaluate this -
* Do they first evaluate the innermost query, store the result temporarily, and then use it in the upper-level query? [If the result of the subquery is large, temporary storage may not be enough.]
* Or do they evaluate the inner query once for each row of the outer query? [Requires too many evaluations of the inner query.]
Am I missing something, and which way is it actually implemented ? | It depends. You're allowed to reference a value from the outer query inside the inner. If you do this, you have what is referred to as a *correlated sub-query* or *correlated derived table*. In this case, it has to recompute the query for every possible row in the parent query. If you don't this, you have what is referred to as an *inline view* or *inline derived table*, and most database engines are smart enough to only compute the view once. | This might help you
There is plenty of memory available to run fairly large queries and there is little chance of the query's complexity creating a timeout. Nested queries have the advantage of being run almost exclusively in physical memory for the sake of speed, unless the amount of memory needed to perform the query exceeds the physical memory "footprint" allocated to SQL Server.
Also, because of a nested query's structure, the SQL Server query optimizer will attempt to find the most optimal way to derive the results from the query. (i.e. the optimizer will convert it into micro-ops and reorganize to run it efficiently) [[Source]](http://searchsqlserver.techtarget.com/tip/Nested-queries-vs-temporary-tables-in-SQL-Server)
Due to the re-ordering, which query is run first is hard to say. After the optimization, however, most probably a form of the inner query will be run first (you can't really do much without that result most of the time).
SQL queries don't run like stack calls the way you seem to think. They're a one line instruction which is comprehended and translated for the machine. | SQL Nested Query Processing | [
"",
"mysql",
"sql",
""
] |
Hi, I currently have 3 tables as listed below. There are no common keys between the tables.
Table 1->linkage\_Table
```
ID Item Material Color
1 keypouch * yellow
2 wallet plastic *
3 card-holder leather gold
```
Table 2->Material\_Table
```
ID Name
1 plastic
2 wool
3 leather
```
Table 3->Color\_Table
```
ID Color
1 Yellow
2 green
3 orange
```
I would wish to get the following result set
```
Item Material Color
keypouch plastic yellow
keypouch wool yellow
keypouch leather yellow
wallet plastic yellow
wallet plastic green
wallet plastic orange
card-holder leather gold
```
I would like to write an SQL statement to join the tables together.
Having \* in linkage table would mean that we would retrieve all values from either Material or Color table.
I am really in need of this solution now. Been trying to solve this for more than 5hours. Thanks in advance for any help. | One possible approach:
```
SELECT l.Item, m.name, c.Color
FROM linkage_Table AS l
INNER JOIN Material_Table AS m
ON l.Material = '*'
OR l.Material = m.name
INNER JOIN Color_Table AS c
ON l.Color = '*'
OR l.Color = c.Color
```
[**SQL Fiddle**](http://sqlfiddle.com/#!2/67c33/1)
Explanation: the query has to be built so that 'material' and 'color' tables are joined either completely (cross-join), when `'*'` is given in the corresponding field, or by equality of these fields. And that's exactly what we got by using `'ON l.someField = '*' OR l.someField = joined.someField'` clause. | With example:
```
CREATE TABLE linkage_table (
id INT,
item VARCHAR(40),
material VARCHAR(40),
color VARCHAR(40)
);
CREATE TABLE material_table (
id INT,
name VARCHAR(40)
);
CREATE TABLE color_table (
id INT,
color VARCHAR(40)
);
INSERT INTO linkage_table VALUES (1, 'keypouch', '*', 'yellow');
INSERT INTO linkage_table VALUES (2, 'wallet', 'plastic', '*');
INSERT INTO linkage_table VALUES (3, 'card-holder', 'leather', 'gold');
INSERT INTO material_table VALUES (1, 'plastic');
INSERT INTO material_table VALUES (2, 'wool');
INSERT INTO material_table VALUES (3, 'leather');
INSERT INTO color_table VALUES (1, 'yellow');
INSERT INTO color_table VALUES (2, 'green');
INSERT INTO color_table VALUES (3, 'orange');
SELECT l.item AS Item, m.name AS Material, IFNULL(c.Color, l.color)
FROM linkage_table l
LEFT JOIN material_table m ON (l.material = m.name OR l.material = '*')
LEFT JOIN color_table c ON (l.color = c.color OR l.color = '*')
;
```
Returns exactly what you wanted. Not sure if your sample data is lacking 'gold' colour on purpose?
Check at SQLFiddle: <http://sqlfiddle.com/#!2/d9d3d/4> | Joining multiple tables without common key | [
"",
"mysql",
"sql",
""
] |
I have two tables:
```
case(caseId,judge,verdict)
appeal(caseId,verdict)
```
And I want to select all judges that had their verdict changed at least half of the time (for example, if a judge presided over 5 cases and 3 of his verdicts were changed on appeal, I want to select him).
I tried to solve it using nested queries but I got confused because I don't really understand how to refer to the outer query from the nested query.
**Update:** What I've tried (I don't really know how to get the nested query to only select judges which are the same judges as the outer query)
```
SELECT DISTINCT judge
FROM case c
WHERE (SELECT COUNT(*) FROM C WHERE c.judge = case.judge) <= 2 * (SELECT COUNT(*) FROM appeal,case WHERE case.cid = appeal.cid AND case.verdict <> appeal.verdict)
``` | ```
select *
from (
select c.caseid,
c.verdict,
c.judge,
count(*) over (partition by c.judge) as total_cases_per_judge,
sum(
case
when (c.verdict <> coalesce(a.verdict, -1)) then 1
else 0
end) over (partition by c.judge) as changed_cases
from "case" as c
left join appeal a on a.caseid = c.caseid
) t
where changed_cases >= total_cases_per_judge/2;
```
`case` is a reserved keyword, you should not use it as a table name (and that's why I had to put it into double quotes, otherwise reserved words are not allowed as identifiers) | Probably you can try something like:
```
SELECT n.judge FROM (
SELECT s.judge, COUNT(*) AS total, COUNT(s.changed_verdict) AS changed FROM (
SELECT c.caseId, c.judge, c.verdict, a.verdict AS changed_verdict FROM case c
LEFT JOIN appeal a ON a.caseId = c.caseId AND a.verdict != c.verdict
) s
GROUP BY s.judge
) n WHERE n.changed >= n.total / 2;
``` | Select all judges that had their verdict changed at least half of the times | [
"",
"sql",
""
] |
I was doing some homework for my security class involving SQL injections. I found that I could do a shorter SQL injection than the typical `' OR '1'=1` example. Instead I could do `'='`. Typing this is in to the password field of a typical login boxes gives a SQL query like this:
```
SELECT * FROM Users WHERE username='user' AND password=''='';
```
It turns out that `password=''=''` evaluates to `1`, allowing the SQL injection to work.
After doing some more testing, I saw that if I test if a string is equal to 0, it returns 1:
```
SELECT 0='a';
```
So in my example, `password=''` would evaluate to 0 and `0=''` would end up evaluating to 1.
My testing showed me *how* this is happening, but I want to know *why* this happens (i.e., why is `0='a'` true?). | As documented under [Type Conversion in Expression Evaluation](http://dev.mysql.com/doc/en/type-conversion.html), comparisons between a string and an integer will be performed numerically:
> * In all other cases, the arguments are compared as floating-point (real) numbers.
Therefore, the operands are converted to floating-point numbers and then compared.
Conversion of a string to a float will consider every numeric character (and the first period or exponentiation character) encountered up to the first non-numeric character. Therefore `'hello'` or `'a'` will be truncated to `''` (and thereby cast to zero) whereas `'123.45e6foo789'` would truncate to `'123.45e6'` (and thereby cast to 123,450,000).
Thus one can see how `0='a'` is true: it is compared as `0=0`.
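A rough Python sketch of that leading-numeric-prefix rule (illustrative only; this is not MySQL's actual parser):

```python
import re

def mysql_ish_float(s):
    """Take the longest leading numeric prefix of s; anything else becomes 0.0."""
    m = re.match(r'[+-]?(\d+\.?\d*|\.\d+)([eE][+-]?\d+)?', s)
    return float(m.group(0)) if m else 0.0

print(mysql_ish_float('a'))               # -> 0.0
print(mysql_ish_float('123.45e6foo789'))  # -> 123450000.0
print(0 == mysql_ish_float('a'))          # -> True, mirroring 0='a'
```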
That `password=''=''` is true (*provided that* `password` is a non-empty string, or a non-zero numeric) comes about because the first comparison results in zero (false), which forces the second comparison to be performed numerically (thus converting `''` to zero for comparison with the zero result of the first comparison). | MySQL sees the comparison and converts the string to an integer, but this results in an invalid integer, so it casts it to 0. 0=0, so it's true (1). | Why does SELECT 'a'='b'='c' return 1 in MYSQL? | [
"",
"mysql",
"sql",
"sql-injection",
""
] |
I have this data:
```
id worked_date
-----------------
1 2013-09-25
2 2013-09-26
3 2013-10-01
4 2013-10-04
5 2013-10-07
```
I want to add a column called `weekCount`. The base date is `2013-09-25`. So all the data with worked\_date from `2013-09-25 to 2013-10-01` will have `weekCount` as 1, from `2013-10-02 to 2013-10-08` will have `weekCount` as 2, and so on. How can that be done?
Thanks. | Perhaps an approach like this will solve your problem.
I compute an in-memory table that contains the week's boundaries along with a monotonically increasing number (BuildWeeks). I then compare my `worked_date` values to my date boundaries. Based on your comment to @sgeddes, you need the reverse week number so I then use a DENSE\_RANK function to calculate the `ReverseWeekNumber`.
```
WITH BOT(StartDate) AS
(
SELECT CAST('2013-09-25' AS date)
)
, BuildWeeks (WeekNumber, StartOfWeek, EndOfWeek) AS
(
SELECT
N.number AS WeekNumber
, DateAdd(week, N.number -1, B.StartDate) AS StartOfWeek
, DateAdd(d, -1, DateAdd(week, N.number, B.StartDate)) AS EndOfWeek
FROM
dbo.Numbers AS N
CROSS APPLY
BOT AS B
)
SELECT
M.*
, BW.*
, DENSE_RANK() OVER (ORDER BY BW.WeekNumber DESC) AS ReverseWeekNumber
FROM
dbo.MyTable M
INNER JOIN
BuildWeeks AS BW
ON M.worked_date BETWEEN BW.StartOfWeek ANd BW.EndOfWeek
;
```
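The underlying arithmetic (bucket dates by day difference from the base, then number the buckets in reverse) can be sketched in a few lines of Python, assuming the 2013-09-25 base date:

```python
from datetime import date

base = date(2013, 9, 25)
worked = [date(2013, 9, 25), date(2013, 9, 26), date(2013, 10, 1),
          date(2013, 10, 4), date(2013, 10, 7)]

# 7-day buckets counted from the base date
week = [(d - base).days // 7 + 1 for d in worked]
max_week = max(week)
reverse_week = [max_week - w + 1 for w in week]
print(week)          # -> [1, 1, 1, 2, 2]
print(reverse_week)  # -> [2, 2, 2, 1, 1]
```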
[SQLFiddle](http://sqlfiddle.com/#!6/2a6ba/6/0) | Here's one way using `DATEDIFF`:
```
select id,
worked_date,
1 + (datediff(day, '2013-09-25', worked_date) / 7) weekCount
from yourtable
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!6/c881c/6) | Create a weekCount column in SQL Server 2012 | [
"",
"sql",
"sql-server-2012",
""
] |
I am stuck on a SQL query. I am using PostgreSQL. I need to get the total for each month for all states.
```
table A
--------------------------------------------------------
created | Name | Agent_id | Total
--------------------------------------------------------
3/14/2013 | Harun | 1A | 5
3/14/2013 | Hardi | 2A | 20
4/14/2013 | Nizar | 3A | 30
5/14/2013 | moyes | 4A | 20
table B
----------------------------
Agent_id| state_id
----------------------------
1A | 1
2A | 1
3A | 1
4A | 2
table C
----------------------------
state_id | State
----------------------------
1 | Jakarta
2 | Singapore
3 | Kuala lumpur
```
**DESIRED RESULT:**
```
-----------------------------------------------------------------------------------------------
No |State | Januari | February | March | April | Mei ... December| Total
-----------------------------------------------------------------------------------------------
1 |Jakarta |0 |0 |25 | 30 | 0 ... | 55
2 |Singapore |0 |0 | 0 | 0 | 20 ... | 20
3 |Kuala Lumpur |0 |0 | 0 | 0 | 0 ... | 0
``` | to have all state with no data in table A / B you have to use `OUTER JOIN`
To complete @bma's answer:
```
select
no,
state,
sum(case when month = 1 then total else 0 end) as januari,
sum(case when month = 2 then total else 0 end) as februari,
sum(case when month = 3 then total else 0 end) as mars,
sum(case when month = 4 then total else 0 end) as april,
sum(case when month = 5 then total else 0 end) as may,
sum(case when month = 6 then total else 0 end) as juni,
sum(case when month = 7 then total else 0 end) as juli,
sum(case when month = 8 then total else 0 end) as august,
sum(case when month = 9 then total else 0 end) as september,
sum(case when month = 10 then total else 0 end) as october,
sum(case when month = 11 then total else 0 end) as november,
sum(case when month = 12 then total else 0 end) as december,
sum(coalesce(total,0)) as total
from (
select
c.state_id as no,
extract(month from created) as month,
state,
sum(total) as total
from tablec c
left join tableb b on ( b.state_id = c.state_id)
left join tablea a on ( a.agent_id = b.agent_id)
group by c.state_id,state,month
) sales
group by no,state;
```
[SQL Fiddle demo](http://sqlfiddle.com/#!12/d1e01/1) | Something like the following should give you results like your sample results. I'm not sure what the "No" column was though. This query is untested.
```
select state,
sum(case when mm = 1 then total else 0 end) as jan,
sum(case when mm = 2 then total else 0 end) as feb,
sum(case when mm = 3 then total else 0 end) as mar,
sum(case when mm = 4 then total else 0 end) as apr,
sum(case when mm = 5 then total else 0 end) as may,
sum(case when mm = 6 then total else 0 end) as jun,
sum(case when mm = 7 then total else 0 end) as jul,
sum(case when mm = 8 then total else 0 end) as aug,
sum(case when mm = 9 then total else 0 end) as sep,
sum(case when mm = 10 then total else 0 end) as oct,
sum(case when mm = 11 then total else 0 end) as nov,
sum(case when mm = 12 then total else 0 end) as dec,
sum(total) as total
from (
select extract(month from created) as mm,
state,
sum(total) as total
from table_a
group by state,mm
) s
group by state;
``` | SQL query to calculate total per month as a column | [
"",
"sql",
"postgresql",
"pivot",
""
] |
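A runnable sketch of the `LEFT JOIN` + conditional-`SUM` pivot from the entry above, using SQLite through Python's `sqlite3`. Dates are stored as ISO strings so `strftime('%m', ...)` can stand in for Postgres's `extract(month from ...)`, and only the three relevant month columns are shown:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE a (created TEXT, name TEXT, agent_id TEXT, total INT);
CREATE TABLE b (agent_id TEXT, state_id INT);
CREATE TABLE c (state_id INT, state TEXT);
INSERT INTO a VALUES ('2013-03-14','Harun','1A',5), ('2013-03-14','Hardi','2A',20),
                     ('2013-04-14','Nizar','3A',30), ('2013-05-14','moyes','4A',20);
INSERT INTO b VALUES ('1A',1),('2A',1),('3A',1),('4A',2);
INSERT INTO c VALUES (1,'Jakarta'),(2,'Singapore'),(3,'Kuala lumpur');
""")

# LEFT JOIN starting from the state table keeps states with no sales.
rows = cur.execute("""
    SELECT c.state,
           SUM(CASE WHEN strftime('%m', a.created) = '03' THEN a.total ELSE 0 END) AS march,
           SUM(CASE WHEN strftime('%m', a.created) = '04' THEN a.total ELSE 0 END) AS april,
           SUM(CASE WHEN strftime('%m', a.created) = '05' THEN a.total ELSE 0 END) AS may,
           SUM(COALESCE(a.total, 0)) AS total
    FROM c
    LEFT JOIN b ON b.state_id = c.state_id
    LEFT JOIN a ON a.agent_id = b.agent_id
    GROUP BY c.state_id, c.state
    ORDER BY c.state_id
""").fetchall()
print(rows)
```

Kuala lumpur has no agents, yet still appears with all-zero totals, which is exactly what the `OUTER JOIN` buys over the inner-join version.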
I am trying to display a list of players from team #1 and the stats that go with them; here are the tables.
table 1: players
```
βββββββββββββββββββββββββββββ
β id | fname | lname | team β
β ββββββββββββββββββββββββββββ£
β 1 | Jason | McFee | 1 β
β 2 | John | Smith | 1 β
β 3 | Jack | Doe | 1 β
β 4 | Wayne | Gretzky | 2 β
βββββββββββββββββββββββββββββ
```
---
table 2: events\_goals
```
βββββββββββββββββββββββββββ
β id g_id a1_id a2_id β
β ββββββββββββββββββββββββββ£
β 1 1 2 3 β
β 2 3 1 2 β
β 3 2 1 NULL β
βββββββββββββββββββββββββββ
```
What I want to get is this
Name - being concat from table
Goals - COUNT all the times player id is in g\_id column
Assists - COUNT (a1\_id) + COUNT (a2\_id) all the times playerid is in either of those columns
POINTS - sum of goals + assists
```
ββββββββββββββββββββββββββββββββββββββββ
β id | name | goals | assists | points β
β βββββββββββββββββββββββββββββββββββββββ£
β 1 J.McFee 1 2 3 β
β 2 J.Smith 1 2 3 β
β 3 J.Doe 1 1 2 β
ββββββββββββββββββββββββββββββββββββββββ
```
What I have tried to do
```
SELECT id,
CONCAT_WS(', ', SUBSTR(fname, 1, 1), lname) name
FROM players
WHERE teamid = 1
```
This gets me a return of the names of all the players on the team with the ID of 1, with their names in the proper format, no problem.
I can get the count of a single player by using
```
SELECT COUNT(g_id) FROM events_goals WHERE id = (playerid)
```
This returns the correct number of goals for the player.
However, when I go to put it all together, the stats are wrong and it only displays 1 row when I know there are supposed to be three:
```
SELECT a.id,
CONCAT_WS(', ', SUBSTR(a.fname, 1, 1), a.lname) name,
(COUNT(b.g_id)) goals,
(COUNT(c.a1_id)) a1,
(COUNT(d.a2_id)) a2
FROM players a
LEFT JOIN events_goals b ON a.id = b.g_id
LEFT JOIN events_goals c ON a.id = c.a1_id
LEFT JOIN events_goals d ON a.id = d.a2_id
WHERE teamid = 1
``` | This is the query you're looking for:
```
SELECT
p.id,
CONCAT_WS(', ', SUBSTR(p.fname, 1, 1), p.lname) name,
COALESCE(eg_goals.goals, 0) goals,
COALESCE(eg_assists1.assists, 0) + COALESCE(eg_assists2.assists, 0) assists,
COALESCE(eg_goals.goals, 0) + COALESCE(eg_assists1.assists, 0) + COALESCE(eg_assists2.assists, 0) points
FROM players p
LEFT JOIN (
SELECT g_id, COUNT(g_id) goals FROM events_goals
GROUP BY g_id
) eg_goals ON p.id = eg_goals.g_id
LEFT JOIN (
SELECT a1_id, COUNT(a1_id) assists FROM events_goals
GROUP BY a1_id
) eg_assists1 ON p.id = eg_assists1.a1_id
LEFT JOIN (
SELECT a2_id, COUNT(a2_id) assists FROM events_goals
GROUP BY a2_id
) eg_assists2 ON p.id = eg_assists2.a2_id
WHERE p.team = 1
```
You should seriously reconsider redesigning your schema. Having those "events" mixed in the same table leads to horrible, very hard to maintain queries. | You need to add a [GROUP BY](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html) (see also [SQL GROUP BY Statement](http://www.w3schools.com/sql/sql_groupby.asp))
> The GROUP BY statement is used in conjunction with the aggregate
> functions to group the result-set by one or more columns.
```
-- SQL GROUP BY syntax:
SELECT column_name, aggregate_function(column_name)
FROM table_name
WHERE column_name operator value
GROUP BY column_name;

-- Applied to your query:
SELECT a.id,
CONCAT_WS(', 'SUBSTR(a.fname, 1, 1), a.lname) name,
(COUNT(b.g_id))goals,
(COUNT(c.a1_id))a1,
(COUNT(d.a2_id))a2
FROM players a
LEFT JOIN events_goals b ON a.id = b.g_id
LEFT JOIN events_goals c ON a.id = c.a1_id
LEFT JOIN events_goals d ON a.id = d.a2_id
WHERE teamid = 1
GROUP BY a.id,
CONCAT_WS(', 'SUBSTR(a.fname, 1, 1), a.lname)
``` | SQL Joining rows after using count | [
"",
"mysql",
"sql",
"join",
"count",
"union",
""
] |
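A runnable sketch of the accepted answer's derived-table approach from the entry above, using SQLite through Python's `sqlite3`. Pre-aggregating each stat in its own subquery avoids the row multiplication that made the three-way join miscount; since SQLite builds may lack `CONCAT_WS`, `||` concatenation is used instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE players (id INT, fname TEXT, lname TEXT, team INT);
CREATE TABLE events_goals (id INT, g_id INT, a1_id INT, a2_id INT);
INSERT INTO players VALUES (1,'Jason','McFee',1),(2,'John','Smith',1),
                           (3,'Jack','Doe',1),(4,'Wayne','Gretzky',2);
INSERT INTO events_goals VALUES (1,1,2,3),(2,3,1,2),(3,2,1,NULL);
""")

# One derived table per stat; each is already one row per player.
rows = con.execute("""
    SELECT p.id,
           SUBSTR(p.fname, 1, 1) || '.' || p.lname AS name,
           COALESCE(g.goals, 0) AS goals,
           COALESCE(a1.n, 0) + COALESCE(a2.n, 0) AS assists,
           COALESCE(g.goals, 0) + COALESCE(a1.n, 0) + COALESCE(a2.n, 0) AS points
    FROM players p
    LEFT JOIN (SELECT g_id, COUNT(*) AS goals FROM events_goals GROUP BY g_id) g
           ON g.g_id = p.id
    LEFT JOIN (SELECT a1_id, COUNT(*) AS n FROM events_goals GROUP BY a1_id) a1
           ON a1.a1_id = p.id
    LEFT JOIN (SELECT a2_id, COUNT(*) AS n FROM events_goals GROUP BY a2_id) a2
           ON a2.a2_id = p.id
    WHERE p.team = 1
    ORDER BY p.id
""").fetchall()
print(rows)
```

The `LEFT JOIN`s plus `COALESCE` keep players with no goals or assists at 0 rather than dropping them.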
**UPDATE: We were advised that it is possible that values for both parameters will not be provided. In such case, there should be no filter for both mobile number and email address.**
I'm having a hard time identifying what's wrong with my query. Basically, I have two variables
```
@EmlAdd
@MblNum
```
What I want to do is to search using the following rules:
at least one of the two variables must have a value
If I only supply value for email address , it must ignore the filter for mobile number
If I only supply value for mobile number, it must ignore the filter for email address
What's happening with my solution so far is that:
if values for 2 variables were supplied: IT WORKS
if only one of the variables has value: DOESN'T WORK
```
DECLARE
@EmlAdd AS VARCHAR(100),
@MblNum AS VARCHAR(100)
SET @EmlAdd = ''
SET @MblNum = '5555555'
SELECT
USER_ID,
CONTACT_VALUE
FROM
TBL_CONTACTS
WHERE
(
(CONTACT_VALUE = @MblNum OR @MblNum = '')
OR
(CONTACT_VALUE = @EmlAdd OR @EmlAdd = '')
)
```
Any help will be greatly appreciated. | Try this one -
```
DECLARE
@EmlAdd VARCHAR(100)
, @MblNum VARCHAR(100)
SELECT
@EmlAdd = ''
, @MblNum = '5555555'
SELECT [USER_ID], CONTACT_VALUE
FROM dbo.TBL_CONTACTS
WHERE CONTACT_VALUE IN (@MblNum, @EmlAdd)
``` | Remove the **OR @MblNum = ''** and **OR @EmlAdd = ''**
When only one parameter has a value set, the other will equal **''** (a zero-length string), and therefore the logical condition will be met on every call.
```
DECLARE
@EmlAdd AS VARCHAR(100),
@MblNum AS VARCHAR(100)
SET @EmlAdd = ''
SET @MblNum = '5555555'
SELECT
USER_ID,
CONTACT_VALUE
FROM
TBL_CONTACTS
WHERE
(
(CONTACT_VALUE = @MblNum)
OR
(CONTACT_VALUE = @EmlAdd)
OR
(@EmlAdd='' AND @MblNum='')
)
``` | SQL: Conditional where clause not working | [
"",
"sql",
"sql-server",
"variables",
"where-clause",
""
] |
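A runnable sketch of the optional-parameter pattern from the entry above, using SQLite through Python's `sqlite3`. The extra `(:mbl = '' AND :eml = '')` branch covers the update's "no values supplied means no filter" case; the table name and sample data are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl_contacts (user_id INT, contact_value TEXT);
INSERT INTO tbl_contacts VALUES (1,'5555555'),(1,'a@example.com'),(2,'1234567');
""")

def find_contacts(eml_add="", mbl_num=""):
    # Each parameter filters only when non-empty; with both empty,
    # every row passes (the "no filter" case from the update).
    sql = """
        SELECT user_id, contact_value
        FROM tbl_contacts
        WHERE (:mbl = '' AND :eml = '')
           OR contact_value IN (:mbl, :eml)
        ORDER BY user_id, contact_value
    """
    return con.execute(sql, {"mbl": mbl_num, "eml": eml_add}).fetchall()

print(find_contacts(mbl_num="5555555"))
print(find_contacts())
```

Supplying only the mobile number returns just that contact, while calling with no arguments returns every row.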
I'm trying to run the following query, which counts the number of doctors with patient referrals within the last 6 months:
```
SELECT
count(local_doc.providerno) NumberofReferals,
LOCAL_DOC.FULLNAME FULLNAME,
LOCAL_DOC.TELNUMBER TELNUMBER,
LOCAL_DOC.STREET STREET,
LOCAL_DOC.SUBURB SUBURB,
LOCAL_DOC.STATE STATE,
LOCAL_DOC.POSTCODE POSTCODE,
LOCAL_DOC.PROVIDERNO
FROM
LOCAL_DOC,
PATIENT
WHERE
PATIENT.PROVIDERNO = LOCAL_DOC.PROVIDERNO AND PATIENT.REFERALDATE <= sysdate AND PATIENT.REFERALDATE >= ADD_MONTHS(sysdate, -6)
group by local_doc.providerno;
```
I keep getting
```
ORA-00979: not a GROUP BY expression
00979. 00000 - "not a GROUP BY expression"
*Cause:
*Action:
Error at Line: 3 Column: 5
```
and have no idea how to get around this. How could I do so? | > Dear, group By Clause ask a user to specify all Select Columns in
> Group By Clause as well except select columns with aggregated
> functions.
>
> Look in my Group by Clause i have specify all columns in Select except
> **local\_doc.providerno** because it is using **Aggregated function COUNT**
For a better understanding, please refer to this [link](http://www.techonthenet.com/sql/group_by.php)
```
SELECT
count(local_doc.providerno) NumberofReferals,
LOCAL_DOC.FULLNAME FULLNAME,
LOCAL_DOC.TELNUMBER TELNUMBER,
LOCAL_DOC.STREET STREET,
LOCAL_DOC.SUBURB SUBURB,
LOCAL_DOC.STATE STATE,
LOCAL_DOC.POSTCODE POSTCODE,
LOCAL_DOC.PROVIDERNO
FROM
LOCAL_DOC,
PATIENT
WHERE
PATIENT.PROVIDERNO = LOCAL_DOC.PROVIDERNO AND PATIENT.REFERALDATE <= sysdate AND PATIENT.REFERALDATE >= ADD_MONTHS(sysdate, -6)
GROUP BY
LOCAL_DOC.FULLNAME ,
LOCAL_DOC.TELNUMBER ,
LOCAL_DOC.STREET ,
LOCAL_DOC.SUBURB ,
LOCAL_DOC.STATE ,
LOCAL_DOC.POSTCODE ,
LOCAL_DOC.PROVIDERNO
``` | add
```
GROUP BY
LOCAL_DOC.FULLNAME ,
LOCAL_DOC.TELNUMBER ,
LOCAL_DOC.STREET ,
LOCAL_DOC.SUBURB ,
LOCAL_DOC.STATE ,
LOCAL_DOC.POSTCODE ,
LOCAL_DOC.PROVIDERNO
```
Most DBMSs other than MySQL will require you to group by the remaining selected columns. | oracle sql not a group by expression while counting | [
"",
"sql",
"oracle",
"ora-00979",
""
] |
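A runnable sketch of the grouped query from the entry above, using SQLite through Python's `sqlite3`. The schema is trimmed and the six-month date filter is omitted; the point is the rule behind ORA-00979 — every selected non-aggregated column appears in the `GROUP BY`. SQLite itself is lax about this, but the fully grouped form is the portable one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE local_doc (providerno INT, fullname TEXT, suburb TEXT);
CREATE TABLE patient (providerno INT, referaldate TEXT);
INSERT INTO local_doc VALUES (10,'Dr Adams','Carlton'),(20,'Dr Baker','Fitzroy');
INSERT INTO patient VALUES (10,'2013-08-01'),(10,'2013-09-15'),(20,'2013-07-20');
""")

# Every non-aggregated SELECT column is repeated in GROUP BY.
rows = con.execute("""
    SELECT COUNT(p.providerno) AS referrals, d.fullname, d.suburb, d.providerno
    FROM local_doc d
    JOIN patient p ON p.providerno = d.providerno
    GROUP BY d.fullname, d.suburb, d.providerno
    ORDER BY d.providerno
""").fetchall()
print(rows)
```

Grouping by only `providerno`, as in the original query, is what Oracle rejects once `fullname`, `suburb`, etc. appear un-aggregated in the SELECT list.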
I'm trying to accumulate a Year to Date total of the number of incident events in a SQL Server shipping system, for display in a graph.
What I have at the moment:
```
SELECT EventType.Name AS [Incident Name], COUNT(Event.ID) AS [Occurrence]
FROM Event
INNER JOIN
PortEvent ON Event.PortEventID = PortEvent.ID
INNER JOIN
EventType ON PortEvent.EventID = Event.ID
WHERE Event.IsIncident = 1 and Event.Date BETWEEN (Start of Year & Now)
GROUP BY Event.Name
```
What this returns;
```
Incident Name Occurrence
Incident 1 1
Incident 3 4
Incident 4 7
```
What I want:
```
Incident Name Occurrence
Incident 1 1
Incident 2 0
Incident 3 4
Incident 4 7
Incident 5 0
```
Is there a way to do this simply? I have seen a few examples on SO, but only for adjacent tables.
Thanks | There is a problem in your `EventType ON PortEvent.EventID = Event.ID` join condition. The query below should solve your issue.
```
SELECT ET.Name AS [Incident Name], COUNT(E.ID) AS [Occurrence]
FROM EventType ET
LEFT JOIN PortEvent PE
ON ET.ID = PE.EventID
LEFT JOIN Event E
ON PE.EventID = E.ID
AND E.IsIncident = 1
AND E.Date BETWEEN [Start of Year] AND [Now]
GROUP BY ET.Name
```
**UPDATE**
If you want to include in result incidents only you can try this:
```
ON PE.EventID = E.ID
AND E.Date BETWEEN [Start of Year] AND [Now]
WHERE E.IsIncident = 1
GROUP BY ET.Name
``` | Simply using left joins is insufficient. You need to re-order your query so all event types are in the left-most recordset (that's how I visualize it anyway). Additionally, you need to get rid of the where clause and add it to the event table's join because, if not, it might as well be an inner join.
```
SELECT
EventType.Name AS [Incident Name], COUNT(Event.ID) AS [Occurrence]
FROM EventType
LEFT JOIN PortEvent ON Event.PortEventID = PortEvent.ID
LEFT JOIN Event ON PortEvent.EventID = Event.ID and Event.IsIncident = 1 and Event.Date BETWEEN (Start of Year & Now)
GROUP BY Event.Name
``` | Display Rows that Have a 0 count | [
"",
"sql",
"sql-server",
"count",
""
] |
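A runnable sketch of the zero-count pattern from the entry above, using SQLite through Python's `sqlite3`. The question's `Event`/`PortEvent`/`EventType` schema is collapsed into two illustrative tables; what matters is putting the incident filter in the `JOIN` rather than the `WHERE`, so types with no incidents survive with a count of 0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE event_type (id INT, name TEXT);
CREATE TABLE event (id INT, type_id INT, is_incident INT);
INSERT INTO event_type VALUES (1,'Incident 1'),(2,'Incident 2'),(3,'Incident 3');
INSERT INTO event VALUES (1,1,1),(2,3,1),(3,3,1),(4,3,0);
""")

# Start from event_type; filtering inside the LEFT JOIN keeps unmatched
# types as NULL rows, which COUNT(e.id) turns into 0.
rows = con.execute("""
    SELECT t.name, COUNT(e.id) AS occurrence
    FROM event_type t
    LEFT JOIN event e ON e.type_id = t.id AND e.is_incident = 1
    GROUP BY t.id, t.name
    ORDER BY t.id
""").fetchall()
print(rows)
```

Moving `e.is_incident = 1` into a `WHERE` clause would discard the NULL rows and silently drop `Incident 2` from the output.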
Good Morning all;
I have the following table
```
Date Duration COT TD RID
6/26 30 PT OT 1
6/26 15 OT PT 1
6/27 60 PT OT 1
6/27 60 OT PT 1
6/28 15 SS MM 1
6/28 30 SS MM 1
6/28 15 MM SS 1
6/28 30 MM SS 1
```
What I am trying to do is pull a record by joining the table on itself where the following is true:
1. T1.TD = T2.COT
2. T1.COT = T2.TD
3. T1.Duration <> T2.Duration
4. T1.Date = T2.Date
5. T1.RID = T2.RID
T1 and T2 are the same table. What I have so far is:
```
SELECT *
FROM T1
WHERE NOT EXISTS (SELECT 1 FROM T2
WHERE T1.Date = T2.Date
AND T1.COT = T2.TD
AND T1.TD = T2.COT
AND T1.RID = T2.RID
AND T1.Duration = T2.Duration)
```
Obviously the above gets me 2 rows, as 2 rows meet that criteria. However, I really only want to get a single row from the table. Is there a way to do this, or perhaps a different way to go about it?
Edit: Added additional rows, none of which should be selected. Even though there are rows that do not match on 6/28, they do match: rows 1 and 3, and rows 2 and 4, match for 6/28, so they should be excluded from the final dataset. In other words, if there are any matching records on a day for the RID, do not select them.
```
Select * from Table as T1
Join Table as T2 on T1.TD = T2.COT
AND T1.COT= T2.TD
AND T1.Duration <> T2.Duration
AND T1.Date = T2.Date
AND T1.RID = T2.RID
WHERE T1.Duration < T2.Duration
```
[Here are the results!](https://i.stack.imgur.com/9fwkP.png)
**UPDATE**
Based on your new criteria, you are looking for this, where SO19249978 is your table name. The sub-query that generates the `Remove` table selects the rows that you do not want, and then we `join` them to the results. The rows where the join values are null are what we are looking for, since if the join matches we need to remove the rows.
```
Select T1.Date, T1.Duration as minDur, T2.Duration as maxDur, T1.COT, T1.TD, T1.RID
from SO19249978 as T1
Join SO19249978 as T2 on T1.TD = T2.COT
AND T1.COT= T2.TD
AND T1.Duration <> T2.Duration
AND T1.Date = T2.Date
AND T1.RID = T2.RID
LEFT JOIN (
Select Date, Duration, RID
from SO19249978
GROUP BY Date, Duration, RID
Having Count(*) > 1
) as Remove ON T1.Duration=Remove.Duration
AND T1.Date= Remove.Date
AND t1.RID = Remove.RID
WHERE Remove.Date is null
``` | You may need to set criteria for which row you want in the final output. Below are some examples:
If you need any random row in final output:
```
Select top 1 * from t1 as T1
Join t1 as T2 on T1.TD = T2.COT
AND T1.COT= T2.TD
AND T1.Duration <> T2.Duration
AND T1.Date1 = T2.Date1
AND T1.RID = T2.RID
```
If you choose to get row based on duration:
```
Select * from t1 as T1
Join t1 as T2 on T1.TD = T2.COT
AND T1.COT= T2.TD
AND T1.Duration <> T2.Duration
AND T1.Date1 = T2.Date1
AND T1.RID = T2.RID
AND T1.duration > T2.duration
``` | Getting a single result from table with multiple comparisons | [
"",
"sql",
""
] |
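A runnable sketch of the mirrored self-join from the entry above, using SQLite through Python's `sqlite3` (column names are shortened; `dt` stands in for `Date`). The `t1.duration < t2.duration` predicate keeps exactly one row per symmetric pair and drops equal-duration matches such as the 6/27 pair:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (dt TEXT, duration INT, cot TEXT, td TEXT, rid INT);
INSERT INTO t VALUES
 ('6/26',30,'PT','OT',1), ('6/26',15,'OT','PT',1),
 ('6/27',60,'PT','OT',1), ('6/27',60,'OT','PT',1);
""")

# Mirror the COT/TD columns against each other within the same day/RID;
# the strict inequality halves the symmetric matches.
rows = con.execute("""
    SELECT t1.dt, t1.duration AS min_dur, t2.duration AS max_dur
    FROM t t1
    JOIN t t2 ON t1.td = t2.cot AND t1.cot = t2.td
             AND t1.dt = t2.dt AND t1.rid = t2.rid
             AND t1.duration < t2.duration
""").fetchall()
print(rows)
```

Only the 6/26 pair (15 vs 30 minutes) survives; the 6/27 rows match each other with equal durations and are excluded, as the question requires.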
I have two user SQL tables with more or less the same data in them, and I want to merge the two tables, taking only the row with the highest `MyLevel` for each user. Maybe it makes more sense if I show what I have and what I want.
Table One:
```
MyName, MyDescr, MyLevel, FromDB
John, "Hey 1", 100, DB1
Glen, "Hey 2", 100, DB1
Ive, "Hey 3", 90, DB1
```
Table Two:
```
MyName, MyDescr, MyLevel, FromDB
John, "Hey 4", 110, DB2
Glen, "Hey 5", 90, DB2
Ive, "Hey 6", 90, DB2
```
What I want to achieve (ignore the `<--`):
```
MyName, MyDescr, MyLevel, FromDB
John, "Hey 4", 110, DB2
Glen, "Hey 2", 100, DB1
Ive, "Hey 6", 90, DB2 <-- doesn't matter which one as it is the same level
```
Of course it must be possible, but I am really in the dark regarding JOINs, especially when I also need to GROUP or the like. | You can use `CASE` for each column after you have made a join:
```
SELECT
COALESCE(t1.MyName COLLATE DATABASE_DEFAULT, t2.MyName ) AS MyName
,CASE WHEN t2.MyLevel > t1.MyLevel THEN t2.MyDescr COLLATE DATABASE_DEFAULT ELSE t1.MyDescr END AS MyDescr
,CASE WHEN t2.MyLevel > t1.MyLevel THEN t2.MyLevel ELSE t1.MyLevel END AS MyLevel
,CASE WHEN t2.MyLevel > t1.MyLevel THEN t2.FromDB COLLATE DATABASE_DEFAULT ELSE t1.FromDB END AS FromDB
FROM TableOne t1
FULL JOIN TableTwo t2 ON t1.MyName = t2.MyName COLLATE DATABASE_DEFAULT
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!3/55d7f/3)**
**Edited for collation conflicts**. I have used `COLLATE DATABASE_DEFAULT` but you can also choose to use specific collation - ie `COLLATE Danish_Norwegian_CI_AS`. You can use COLLATE on either side of operation. | The problem is that you're trying to say that one string is more than the other.
To compare the numeric value, you'll have to convert these to numbers somehow. A good start would be to use `substr(yourfieldname, n)` to cut off the text. Then you can use convert to typecast it to int; `convert(int, substr(nbr, 3)) <= 10`
You can then just join and select the max of these values. | Merge two tables in SQL and only get row with highest number | [
"",
"sql",
"sql-server-2008-r2",
"sql-view",
""
] |
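A runnable sketch of the merge-and-keep-highest pattern from the entry above, using SQLite through Python's `sqlite3`. Instead of the accepted answer's `FULL JOIN`/`CASE` approach, this sketch unions the two tables and ranks with `ROW_NUMBER` (requires SQLite 3.25+ for window functions); for the tied `Ive` rows either source may win, matching the question's "doesn't matter which one" note:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (myname TEXT, mydescr TEXT, mylevel INT, fromdb TEXT);
CREATE TABLE t2 (myname TEXT, mydescr TEXT, mylevel INT, fromdb TEXT);
INSERT INTO t1 VALUES ('John','Hey 1',100,'DB1'),('Glen','Hey 2',100,'DB1'),
                      ('Ive','Hey 3',90,'DB1');
INSERT INTO t2 VALUES ('John','Hey 4',110,'DB2'),('Glen','Hey 5',90,'DB2'),
                      ('Ive','Hey 6',90,'DB2');
""")

# Stack both tables, then keep the top-ranked row per user by MyLevel.
rows = con.execute("""
    SELECT myname, mydescr, mylevel, fromdb
    FROM (
        SELECT u.*,
               ROW_NUMBER() OVER (PARTITION BY myname ORDER BY mylevel DESC) AS rn
        FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t2) AS u
    ) AS ranked
    WHERE rn = 1
    ORDER BY mylevel DESC, myname
""").fetchall()
print(rows)
```

`UNION ALL` + window ranking also works on engines that lack `FULL JOIN` (e.g. MySQL), which is one reason to prefer it for this shape of problem.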