Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a DAG in my relational database (Firebird) with two tables `edge` and `node` (adjacency list model). I want to query them recursively, but found recursive queries very inefficient. So I tried to implement triggers to maintain the transitive closure following the Dong et al. paper <http://homepages.inf.ed.ac.uk/libkin/papers/tc-sql.pdf>.
`SELECT`s are now very fast, but `DELETE`s are extremely slow, because almost the whole graph is copied for a single delete. Even worse, concurrent updates seem impossible.
Is there a better way to implement this?
**Edit**
I did some experiments and introduced a reference counter to the TC table. With that, deletes are fast. I wrote some simple test cases, but I'm not sure if I'm doing it right. This is what I have so far:
```
CREATE GENERATOR graph_tc_seq;
CREATE TABLE EDGE (
parent DECIMAL(10, 0) NOT NULL,
child DECIMAL(10, 0) NOT NULL,
PRIMARY KEY (parent, child)
);
CREATE TABLE GRAPH_TC (
parent DECIMAL(10, 0) NOT NULL,
child DECIMAL(10, 0) NOT NULL,
refcount DECIMAL(9, 0),
PRIMARY KEY (parent, child)
);
CREATE TABLE GRAPH_TC_TEMP (
session_id DECIMAL(9, 0),
parent DECIMAL(10, 0),
child DECIMAL(10, 0),
refcount DECIMAL(9, 0)
);
CREATE PROCEDURE GRAPH_TC_CREATE (p_parent DECIMAL(10, 0), c_child DECIMAL(10, 0))
AS
declare variable tp_parent DECIMAL(10,0);
declare variable tc_child DECIMAL(10,0);
declare variable session_id DECIMAL(9,0);
declare variable refs DECIMAL(9,0);
begin
session_id = gen_id(graph_tc_seq,1);
insert into graph_tc_temp (parent, child, session_id, refcount) values (:p_parent, :p_parent, :session_id, 1);
insert into graph_tc_temp (parent, child, session_id, refcount) values (:c_child, :c_child, :session_id, 1);
insert into graph_tc_temp (parent, child, session_id, refcount) values (:p_parent, :c_child, :session_id, 1);
insert into graph_tc_temp (parent, child, session_id, refcount) select distinct :p_parent, child, :session_id, refcount from graph_tc where parent = :c_child and not parent = child;
insert into graph_tc_temp (child, parent, session_id, refcount) select distinct :c_child, parent, :session_id, refcount from graph_tc where child = :p_parent and not parent = child;
insert into graph_tc_temp (parent, child, session_id, refcount) select distinct a.parent, b.child, :session_id, a.refcount*b.refcount from graph_tc a, graph_tc b where a.child = :p_parent and b.parent = :c_child and not a.parent = a.child and not b.parent = b.child;
for select parent, child, refcount from graph_tc_temp e where session_id= :session_id and exists (select * from graph_tc t where t.parent = e.parent and t.child = e.child ) into :tp_parent, :tc_child, :refs do begin
update graph_tc set refcount=refcount+ :refs where parent = :tp_parent and child = :tc_child;
end
insert into graph_tc (parent, child, refcount) select parent, child, refcount from graph_tc_temp e where session_id = :session_id and not exists (select * from graph_tc t where t.parent = e.parent and t.child = e.child);
delete from graph_tc_temp where session_id = :session_id;
end ^
CREATE PROCEDURE GRAPH_TC_DELETE (p_parent DECIMAL(10, 0), c_child DECIMAL(10, 0))
AS
declare variable tp_parent DECIMAL(10,0);
declare variable tc_child DECIMAL(10,0);
declare variable refs DECIMAL(9,0);
begin
delete from graph_tc where parent = :p_parent and child = :p_parent and refcount <= 1;
update graph_tc set refcount = refcount - 1 where parent = :p_parent and child = :p_parent and refcount > 1;
delete from graph_tc where parent = :c_child and child = :c_child and refcount <= 1;
update graph_tc set refcount = refcount - 1 where parent = :c_child and child = :c_child and refcount > 1;
delete from graph_tc where parent = :p_parent and child = :c_child and refcount <= 1;
update graph_tc set refcount = refcount - 1 where parent = :p_parent and child = :c_child and refcount > 1;
for select distinct :p_parent, b.child, refcount from graph_tc b where b.parent = :c_child and not b.parent = b.child into :tp_parent, :tc_child, :refs do begin
delete from graph_tc where parent = :tp_parent and child = :tc_child and refcount <= :refs;
update graph_tc set refcount = refcount - :refs where parent = :tp_parent and child = :tc_child and refcount > :refs;
end
for select distinct :c_child, b.parent, refcount from graph_tc b where b.child = :p_parent and not b.parent = b.child into :tc_child, :tp_parent, :refs do begin
delete from graph_tc where child = :tc_child and parent = :tp_parent and refcount <= :refs;
update graph_tc set refcount = refcount - :refs where child = :tc_child and parent = :tp_parent and refcount > :refs;
end
for select distinct a.parent, b.child, a.refcount*b.refcount from graph_tc a, graph_tc b where not a.parent = a.child and not b.parent = b.child and a.child = :p_parent and b.parent = :c_child into :tp_parent, :tc_child, :refs do begin
delete from graph_tc where parent = :tp_parent and child = :tc_child and refcount <= :refs;
update graph_tc set refcount = refcount - :refs where parent = :tp_parent and child = :tc_child and refcount > :refs;
end
end ^
CREATE TRIGGER GRAPH_TC_AFTER_INSERT FOR EDGE AFTER INSERT as
begin
execute procedure graph_tc_create(new.parent,new.child);
end ^
CREATE TRIGGER GRAPH_TC_AFTER_UPDATE FOR EDGE AFTER UPDATE as
begin
if ((new.parent <> old.parent) or (new.child <> old.child)) then begin
execute procedure graph_tc_delete(old.parent,old.child);
execute procedure graph_tc_create(new.parent,new.child);
end
end ^
CREATE TRIGGER GRAPH_TC_AFTER_DELETE FOR EDGE AFTER DELETE as
begin
execute procedure graph_tc_delete(old.parent,old.child);
end ^
```
This is my own idea, but I think others have implemented a TC already. Are they doing the same thing?
I have some test cases, but I'm not sure if I might get an inconsistency with bigger graphs.
How about concurrency? I think this approach will fail when two simultaneous transactions want to update the graph, right?
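For intuition, the reference-counting bookkeeping can be modeled outside SQL. The following Python sketch is illustrative only (the class and method names are my own, not the Firebird code): it keeps, for every ancestor/descendant pair, the number of distinct paths, adds `paths(a->u) * paths(v->b)` on edge insert, and subtracts the same product on edge delete.

```python
from collections import defaultdict

class DagClosure:
    """Transitive closure of a DAG maintained with path counts."""

    def __init__(self):
        self.paths = defaultdict(int)   # (ancestor, descendant) -> path count

    def _into(self, u):
        """Nodes that reach u (weighted by path count), plus u itself."""
        return [(u, 1)] + [(a, n) for (a, b), n in self.paths.items()
                           if b == u and n > 0]

    def _outof(self, v):
        """Nodes reachable from v (weighted by path count), plus v itself."""
        return [(v, 1)] + [(b, n) for (a, b), n in self.paths.items()
                           if a == v and n > 0]

    def add_edge(self, u, v):
        srcs, tgts = self._into(u), self._outof(v)   # snapshot before mutating
        for a, na in srcs:
            for b, nb in tgts:
                self.paths[(a, b)] += na * nb

    def delete_edge(self, u, v):
        # In a DAG no path a->u or v->b can itself use the edge (u, v),
        # so the counts to subtract come straight from the current closure.
        srcs, tgts = self._into(u), self._outof(v)
        for a, na in srcs:
            for b, nb in tgts:
                self.paths[(a, b)] -= na * nb

    def reaches(self, a, b):
        return self.paths[(a, b)] > 0
```

Deleting an edge only touches pairs whose count actually changes, which is why the refcount variant avoids copying the whole closure.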
**Edit**
I found some bugs in my code, and I'd like to share the fixed version with you.
I found a great article: <http://www.codeproject.com/Articles/22824/A-Model-to-Represent-Directed-Acyclic-Graphs-DAG-o>. Are there more interesting articles or scientific papers, with different approaches? | I just fixed up a slow delete operation by extending to the transitive reflexive closure table model described here:
<http://www.dba-oracle.com/t_sql_patterns_incremental_eval.htm>. It took a little more work to fully maintain the path counts within it, but it paid off big when deletes went from about 6 seconds per individual remove operation to negligible (I can now delete every relationship in the graph, and then add them all back, in 14 seconds total for 4,000 relationships). | SQL is not the right tool for dealing with graphs. Use one of these:
<http://en.wikipedia.org/wiki/Graph_database>
I very much like ArangoDB, which has a syntax close to MongoDB's. | How to maintain a transitive closure table efficiently? | [
"",
"sql",
"firebird",
"directed-acyclic-graphs",
"transitive-closure-table",
""
] |
I have a table in an SQL Server database with a date field in it, presented as varchar in `yyyymmddhhnn` format (where `nn` is minutes). For example, `200012011200` would be `01 Dec 2000 12:00`. I need to convert this to a `datetime` value, but none of the `convert` codes seems to cover it. It's closest to ISO format `yyyymmdd` but that doesn't include the time part, so calling `convert(datetime, MyDateField, 112)` fails.
The time part is important, so I can't just strip it off. How can I convert this to datetime? | Try this
```
declare @t varchar(20)
set @t='200012011200'
select cast(stuff(stuff(@t, 11,0,':'),9,0,' ') as datetime)
``` | ```
SELECT convert(varchar, cast(SUBSTRING('200012011200',1,4)+
'-'+SUBSTRING('200012011200',5,2)+
'-'+SUBSTRING('200012011200',7,2)+
' '+SUBSTRING('200012011200',9,2)+
':'+SUBSTRING('200012011200',11,2)+
':00'+
'.000' AS DATETIME), 109)
```
This will result in `Dec 1 2000 12:00:00:000PM`
Using 112 as the style parameter will instead return `20001201`
Enjoy
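As a sanity check outside the database, the same fixed-width format parses directly with Python's `strptime` (illustrative only, not part of the T-SQL answer):

```python
from datetime import datetime

value = "200012011200"                          # yyyymmddhhnn
parsed = datetime.strptime(value, "%Y%m%d%H%M")
print(parsed)                                   # 2000-12-01 12:00:00
```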
UPDATE:
The `convert(varchar...)` is just for demonstration purposes.
You can use this as well:
```
SELECT CAST(SUBSTRING('200012011200',1,4)+
'-'+SUBSTRING('200012011200',5,2)+
'-'+SUBSTRING('200012011200',7,2)+
' '+SUBSTRING('200012011200',9,2)+
':'+SUBSTRING('200012011200',11,2)+
':00'+
'.000' AS DATETIME)
``` | Convert string formatted as yyyymmddhhnn to datetime | [
"",
"sql",
"sql-server",
"datetime",
""
] |
Thanks in advance for your help.
I have two tables, a reference table and a details table. The reference table lists the current production step of an order paired with the next step, like so:
**Reference Table**
```
Current_Step | Next_Step | ID
-------------------------------------------------
Step 1 | Step 2 | 1
Step 2 | Step 3 | 2
Step 3 | Step 4 | 3
```
I also have an order details table:
```
Order_ID | Step_ID | Start_Date | Planned_End | Complete_Date | Planned_Duration
-----------------------------------------------------------------------------------
1000 | 1 | 1/1/2013 | 1/3/2013 | 1/3/2013 | 2
1000 | 2 | | | | 3
1000 | 3 | | | | 8
```
In this table, each step for the order exists, but has a blank start date and planned end date.
I'm attempting to build a query that:
* Looks for every item that has a complete date of today
* Finds the Next\_Step associated with that item for the same Order\_ID in the table
* If the Start\_Date is blank, updates the Start\_Date to today, and adds Planned\_Duration days to the start date to calculate a Planned\_End date
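The three steps can be sketched procedurally first; the following Python model over in-memory rows is illustrative only (the dictionaries and field names are my own assumptions, not the SQL solution):

```python
from datetime import date, timedelta

next_step = {1: 2, 2: 3, 3: 4}   # reference table: current step -> next step

orders = [  # order details: one dict per (Order_ID, Step_ID) row
    {"order": 1000, "step": 1, "start": date(2013, 1, 1),
     "planned_end": date(2013, 1, 3), "complete": date(2013, 1, 3), "dur": 2},
    {"order": 1000, "step": 2, "start": None,
     "planned_end": None, "complete": None, "dur": 3},
]

today = date(2013, 1, 3)
for row in orders:
    # step 1: every item with a complete date of today
    if row["complete"] == today and row["step"] in next_step:
        for nxt in orders:
            # step 2: the next step for the same order
            if (nxt["order"] == row["order"]
                    and nxt["step"] == next_step[row["step"]]
                    and nxt["start"] is None):       # step 3: only if blank
                nxt["start"] = today
                nxt["planned_end"] = today + timedelta(days=nxt["dur"])
```

In SQL this collapses into a single `UPDATE` with a self-join through the reference table.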
I'm able to do parts of it individually, but I'm having a hard time bringing it all together into a single query/stored procedure.
I'd appreciate any pointers in the right direction.
Thanks again! | ```
declare @Today datetime
select @Today = dateadd(dd, datediff(dd, 0, getdate()), 0)
update od2 set
start_date = @Today,
planned_end = dateadd(dd, od2.planned_duration, @Today)
from order_details as od
inner join reference_table as rt on rt.current_step_id = od.step_id
inner join order_details as od2 on od2.order_id = od.id and od2.step_id = rt.next_step_id
where
od.complete_date = @Today and od2.start_date is null
``` | Try this (for SQL Server 2005):
```
UPDATE n SET
Start_Date = CONVERT(VARCHAR, GETDATE(),112),
Planned_end = DATEADD(dd, n.Planned_Duration, CONVERT(VARCHAR, GETDATE(),112))
FROM order_details AS d
JOIN reference_table AS r ON d.step_id = r.ID
JOIN order_details AS n ON n.step_id = CONVERT(int, REPLACE(r.Next_Step, 'Step ',''))
AND d.order_id = n.order_id
WHERE d.Complete_Date = CONVERT(VARCHAR, GETDATE(),112)
AND N.Start_date is null;
``` | Updating a row in the same SQL table based on information from another table - MSSQL | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
Are there any command line tools (Linux, Mac, and/or Windows) that I could use to scan a delimited file and output a DDL create table statement with the data type determined for me?
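The core of such a tool — inferring a column type from sample values — can be sketched briefly. This is a hypothetical Python illustration (function names and type mapping are my own assumptions):

```python
import csv
import io

def guess_type(values):
    """Pick the narrowest SQL type that fits every sample value."""
    def all_castable(cast):
        try:
            for v in values:
                cast(v)
            return True
        except ValueError:
            return False
    if all_castable(int):
        return "INTEGER"
    if all_castable(float):
        return "DOUBLE PRECISION"
    return "VARCHAR(%d)" % max(len(v) for v in values)

def csv_to_ddl(text, table="t"):
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    cols = ["%s %s" % (name, guess_type([r[i] for r in data]))
            for i, name in enumerate(header)]
    return "CREATE TABLE %s (\n  %s\n);" % (table, ",\n  ".join(cols))

print(csv_to_ddl("id,name,price\n1,apple,0.5\n2,pear,1.25"))
```

A real tool would also need to handle dates, NULLs, quoting and delimiter detection.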
Did some googling, but couldn't find anything. Was wondering if others might know, thanks! | [DDL-generator](https://github.com/catherinedevlin/ddl-generator) can do this. It can generate DDL's for YAML, JSON, CSV, Pickle and HTML (although I don't know how the last one works). I just tried it on some data exported from Salesforce and it worked pretty well. Note you need to use it with Python 3, I could not get it to work with Python 2.7. | You can also try <https://github.com/mshanu/idli>. It can take csv file as input and can generate create statement with appropriate types.It can generate for mysql, oracle and postgres. I am actively working on this and happy to receive feedback for future improvement | Generate DDL SQL create table statement after scanning CSV file | [
"",
"sql",
"csv",
"ddl",
"delimited",
""
] |
I have an Oracle table, similar to the one below, which stores people's last name, first name and age. If the last name is the same, the people belong to the same family.
```
LastName FirstName Age
===========================
1 miller charls 20
2 miller john 30
3 anderson peter 45
4 Bates andy 50
5 anderson gary 60
6 williams mark 15
```
I need to write an Oracle SQL query to
select the youngest person from each family. The output should select rows 1, 3, 4 and 6.
How do I do this ? | `DENSE_RANK()` is a ranking function which generates sequential number and for ties the number generated is the same. I prefer to use `DENSE_RANK()` here considering that a family can have twins, etc.
```
SELECT Lastname, FirstName, Age
FROM
(
SELECT Lastname, FirstName, Age,
DENSE_RANK() OVER (PARTITION BY LastName ORDER BY Age) rn
FROM tableName
) a
WHERE a.rn = 1
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!4/dd69f/3) | Another way, a bit *shorter*:
```
select lastname
, max(firstname) keep(dense_rank first order by age) as first_name
, max(age) keep(dense_rank first order by age) as age
from you_table_name
group by lastname
order by lastname
```
Result:
```
LASTNAME FIRST_NAME AGE
-------- ---------- ----------
Bates andy 50
anderson peter 45
miller charls 20
williams mark 15
```
And [**SQLFiddle Demo**](http://sqlfiddle.com/#!4/05eda/2) | selecting a row using MIN or ROWNUM | [
"",
"sql",
"oracle",
""
] |
I have something like this:
```
IF EXISTS (SELECT 1 FROM systable st
JOIN sysuserperm sup ON st.creator = sup.user_id
WHERE sup.user_name = 'TEST'
AND st.table_type = 'BASE'
AND st.table_name = 'Table1') THEN
IF EXISTS (SELECT 1 FROM systable st
JOIN sysuserperm sup ON st.creator = sup.user_id
WHERE sup.user_name = 'TEST'
AND st.table_type = 'BASE'
AND st.table_name = 'Table2') THEN
```
What's the best way to combine multiple of these into a single EXISTS? | Try this one out:
```
SELECT 1
FROM systable st
INNER JOIN sysuserperm sup
ON st.creator = sup.user_id
WHERE sup.user_name = 'TEST' AND
st.table_type = 'BASE' AND
st.table_name IN ('Table1', 'table2')
GROUP BY sup.user_name, st.table_type
HAVING COUNT(DISTINCT st.table_name) = 2
``` | This should do the task
```
IF EXISTS (
(SELECT 1 FROM INFORMATION_SCHEMA.Tables WHERE TABLE_NAME='TABLE1')
INTERSECT
(SELECT 1 FROM INFORMATION_SCHEMA.Tables WHERE TABLE_NAME='TABLE2')
)
``` | Check for multiple tables existence using single exists query? | [
"",
"sql",
"sybase",
"exists",
""
] |
I'm getting this error:
> 1452 - Cannot add or update a child row: a foreign key constraint
> fails.
I've located them and tried to get rid of the references with:
```
alter table tillhör drop foreign key kat_id;
```
But I get this error instead:
> #1025 - Error on rename of '.\recept\tillh@1ir' to
> '.\recept#sql2-1570-3cb' (errno: 152).
What do I do wrong? | Before you query run
```
SET FOREIGN_KEY_CHECKS=0
```
then set it back to 1 after you're done. | I faced the same problem. I solved it by clearing, i.e. deleting, all data from the child table, and it worked.
This occurs if the child table contains rows whose foreign key values are not in the parent table. For example, given two tables called **Person** (with columns id, name, address) and **order** (with columns id, person\_id, order\_name), where **order.person\_id is a foreign key to person.id**, the error appears when **order contains a person\_id that is not present in person**.
You can solve this using the following query
```
Delete from order where person_id NOT IN (select id from person where person.id = order.person_id)
``` | 1452 - Cannot add or update a child row: a foreign key constraint fails | [
"",
"mysql",
"sql",
"foreign-keys",
""
] |
I have two tables:
- table 1 with id, firstname and lastname
- table 2 with t1\_id, firstname and lastname
I'd like a SQL query (if possible, not a PL/SQL) to update table2 with table1's id when I have a unique match on firstname and lastname
The word "unique" here is my problem:
```
update table2 t2
set t1_id = (select id from table1 t1
where t1.firstname=t2.firstname and t1.lastname=t2.lastname)
```
Whenever I have a match from t2 to multiple t1 records, I get the "ORA-01427: single-row subquery returns more than one row" error.
Any clue how to avoid updating on multiple matches?
Thanks. | ```
merge into table2 d
using (
select firstname, lastname, max(id) id
from table1 t1
group by firstname, lastname
having count(0) = 1
) s
on (d.firstname=s.firstname and d.lastname=s.lastname)
when matched then update set t1_id = s.id;
``` | If you want to set the id to NULL in case you find duplicate entries or no entries at all in table1 then simply:
```
update table2 t2
set t1_id =
(
select case when min(id) = max(id) then min(id) else null end
from table1 t1
where t1.firstname=t2.firstname and t1.lastname=t2.lastname
)
``` | How to update table2 with table1's id only on a unique match | [
"",
"sql",
"oracle",
""
] |
I have SQL syntax like this :
```
SELECT FORM_NO, SUM(QTY) as QTY FROM SEIAPPS_QTY WHERE FORM_NO = '1' AND STATUS_QTY='OK'
```
But I'm facing this error:
```
ORA-00937: not a single-group group function
```
The error comes from FORM\_NO; how can I include that column?
Please advise.
Thanks | In Oracle you need to use GROUP BY on the values you wish to retrieve (that are not in a group function):
```
SELECT FORM_NO, SUM(QTY) as QTY FROM SEIAPPS_QTY WHERE FORM_NO = '1' AND STATUS_QTY='OK' GROUP BY FORM_NO
``` | You need a group by
```
SELECT FORM_NO, SUM(QTY) as QTY
FROM SEIAPPS_QTY
WHERE FORM_NO = '1'
AND STATUS_QTY='OK'
GROUP BY FORM_NO
```
-Or-
Since you're selecting a single form you can drop the form number
```
SELECT SUM(QTY) as QTY
FROM SEIAPPS_QTY
WHERE FORM_NO = '1'
AND STATUS_QTY='OK'
```
-Or-
If you really want the single FORM\_NO in the result, use an arbitrary aggregate function
```
SELECT MIN(FORM_NO) AS FORM_NO, SUM(QTY) as QTY
FROM SEIAPPS_QTY
WHERE FORM_NO = '1'
AND STATUS_QTY='OK'
GROUP BY FORM_NO
``` | SQL Cant Include Column Table after SUM column table in SELECT SYNTAX | [
"",
"sql",
"oracle",
""
] |
I need to perform a query that finds values based on one field when a second field is the same.
Example table:
```
id, what, why
1, 2, 2
2, 3, 4
3, 3, 2
```
So I want the results to return `what` 2 and 3, because they both have a `why` of 2.
But the `why` (2) is unknown, so I only want to know whether the combination of `what` values (2 and 3) have the same `why` value. Makes sense? Any help is appreciated, thanks.
Another example may be clearer:
```
id, building, color
1, house, white
2, garage, red
3, garage, white
```
I query for building = house and building = garage, and results are only returned if they have a matching color. | Use a self-join.
```
SELECT t1.what
FROM Table t1
JOIN Table t2
ON t1.why = t2.why and t1.what != t2.what
``` | ```
SELECT what
FROM YourTable A
WHERE EXISTS(SELECT 1 FROM YourTable
WHERE what <> A.what
AND why = A.why)
```
[**Here is a demo**](http://sqlfiddle.com/#!2/62da8/1) of this. | Find rows with same value of a column | [
"",
"mysql",
"sql",
""
] |
Hi everyone, I want to use a CASE statement in a JOIN. Using this query, I got an error:
```
Select CONVERT(VARCHAR(10), SII.SIDATE,103)DATE,SII.SALEID,SII.ItemName,SI.TenancyID
FROM F_SALESINVOICEITEM SII
INNER JOIN F_SALESINVOICE SI ON SI.SALEID=SII.SALEID
INNER JOIN #TempTableSearch ts ON CASE
WHEN ts.ACCOUNTTYPE = '1' THEN ts.ACCOUNTID=SI.TENANCYID
WHEN ts.ACCOUNTTYPE='2' THEN ts.ACCOUNTID=SI.EMPLOYEEID
WHEN ts.ACCOUNTTYPE='3' THEN ts.ACCOUNTID=SI.SUPPLIERID
WHEN ts.ACCOUNTTYPE='4' THEN ts.ACCOUNTID=SI.SALESCUSTOMERID
```
Error
> Incorrect syntax near '='.
Please help me to solve this error. | IT should be,
```
ON
ts.ACCOUNTID = CASE
WHEN ts.ACCOUNTTYPE = '1' THEN SI.TENANCYID
WHEN ts.ACCOUNTTYPE = '2' THEN SI.EMPLOYEEID
WHEN ts.ACCOUNTTYPE = '3' THEN SI.SUPPLIERID
WHEN ts.ACCOUNTTYPE = '4' THEN SI.SALESCUSTOMERID
END
``` | Instead of using CASE, I'd much rather do this:
```
Select CONVERT(VARCHAR(10), SII.SIDATE,103)DATE,SII.SALEID,SII.ItemName,SI.TenancyID
FROM F_SALESINVOICEITEM SII
INNER JOIN F_SALESINVOICE SI ON SI.SALEID=SII.SALEID
INNER JOIN #TempTableSearch ts ON
(ts.ACCOUNTTYPE='1' AND ts.ACCOUNTID=SI.TENANCYID)
OR (ts.ACCOUNTTYPE='2' AND ts.ACCOUNTID=SI.EMPLOYEEID)
OR (ts.ACCOUNTTYPE='3' AND ts.ACCOUNTID=SI.SUPPLIERID)
OR (ts.ACCOUNTTYPE='4' AND ts.ACCOUNTID=SI.SALESCUSTOMERID)
```
To explain why the query didn't work for you: the syntax of the `CASE` requires an `END` at the end of the clause. It would work, as the other solutions proposed suggest, but I find this version to be more convenient to understand - although this part is highly subjective. | Use Case Statement in Join | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I have a ***stored procedure*** which returns a list of contacts. Since there are many contacts, and for performance reasons, I wish to retrieve the contacts in a page-wise manner, whereby only 20 contacts are fetched at a time.
A button on my form will allow me to go to the next page, and once the button is clicked, I go and fetch the next 20 contacts.
Here's my stored proc:
```
CREATE PROCEDURE [dbo].[GetAllContacts] (@searchVal VARCHAR(500))
AS
BEGIN
SELECT DISTINCT ( Id ) AS Id,
FirstName,
LastName,
Address,
Tel_no
FROM tblContact
WHERE ( FirstName LIKE ( '%' + @searchVal + '%' )
OR LastName LIKE ( '%' + @searchVal + '%' ) )
ORDER BY LastName
END
```
How do I split the query and how do I retrieve the next 20 contacts for the 2nd page and 3rd 20 contacts for the 3rd page?
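The page arithmetic itself is independent of the SQL dialect; here is a quick illustrative Python sketch (the helper names are my own assumptions):

```python
import math

def page_bounds(page, page_size=20):
    """1-based first and last row index for a given page."""
    start = (page - 1) * page_size + 1
    end = page * page_size
    return start, end

def total_pages(row_count, page_size=20):
    return math.ceil(row_count / page_size)

print(page_bounds(3))        # (41, 60) -> the 3rd batch of 20 contacts
print(total_pages(45))       # 3
```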
I'm using MS SQL Server 2012. | I found it (by using an ***orderedIndex***):
```
ALTER PROCEDURE [dbo].[GetAllContacts] (@searchVal varchar(500)
, @CurrentPage int
, @PageSize int)
AS
BEGIN
DECLARE @RESULTS TABLE (
orderedIndex int IDENTITY(1,1) PRIMARY KEY
, Id bigint NOT NULL
, FirstName nvarchar(30) NULL
, LastName nvarchar(30) NULL
, Address nvarchar(130) NULL
, Tel_no nvarchar(15) NULL )
SET @CurrentPage = ISNULL(@CurrentPage, 1)
SET @PageSize = ISNULL(@PageSize, 10)
INSERT INTO @RESULTS (Id, FirstName, LastName, Address,Tel_no)
Select distinct(Id) as Id
, FirstName
, LastName
, Address
, Tel_no
from tblContact
Where (FirstName like ('%'+ @searchVal +'%') OR LastName like ('%'+ @searchVal +'%'))
Order by LastName
-- Get result on separate pages
SELECT Id
, FirstName
, LastName
, Address
, Tel_no
, (SELECT COUNT(*) FROM @RESULTS) AS NbResults
, @CurrentPage AS CurrentPage
, @PageSize AS PageSize
, FLOOR(CEILING(Cast((SELECT COUNT(*) FROM @RESULTS) as decimal(18,2))/ @PageSize)) as TotalPages
FROM @RESULTS
WHERE orderedIndex BETWEEN 1 + ((@CurrentPage - 1) * @PageSize) AND (@CurrentPage) * @PageSize
END
``` | You can use a [common table expression](http://msdn.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) in conjunction with the [ROW\_NUMBER()](http://technet.microsoft.com/en-us/library/ms186734.aspx) function
```
ALTER PROCEDURE [dbo].[GetAllContacts]
@searchVal VARCHAR(500),
@page INT = NULL,
@perPage INT = NULL
AS
DECLARE @Start INT, @End INT
SET @page = ISNULL(@page, 1)
SET @perPage = ISNULL(@perPage, 10)
SET @start = CASE WHEN @page = 1 THEN 0 ELSE (@page - 1) * @perPage END + 1
SET @end = CASE WHEN @page = 1 THEN @perPage ELSE (@page * @perPage) END
;WITH [Contacts] AS (
SELECT [Id]
, [FirstName] , [LastName]
, [Address] , [Tel_no]
, ROW_NUMBER( ) OVER (ORDER BY LastName) AS [Index]
FROM [tblContact]
WHERE ([FirstName] LIKE ('%'+ @searchVal +'%')
OR [LastName] LIKE ('%'+ @searchVal +'%'))
), [Counter] AS (SELECT COUNT(*) AS [Count] FROM [Contacts])
SELECT [Id]
, [FirstName] , [LastName]
, [Address] , [Tel_no]
, @page AS CurrentPage
, @perPage AS PageSize
,CEILING(CAST([Counter].[Count] AS DECIMAL(18,2))/@perPage) AS TotalPages
FROM Contacts, [Counter]
WHERE [Index] >= @start AND [Index] <= @end
```
You could then call this by passing in your search term, the page you want to display, and the number of entries you want on each page:
```
EXEC [dbo].[GetAllContacts] 'Smith', 3, 20
```
That will return the 3rd page of contacts that have a first name or last name that contains the word 'Smith'
Example: <http://sqlfiddle.com/#!6/bb8ae/2> | SQL - Retrieve data pagewise | [
"",
"sql",
"sql-server",
"performance",
"stored-procedures",
"sql-server-2012",
""
] |
I'm looking to write an ActiveRecord query and this is what I have below. Unfortunately you can't use OR like this. What's the best way to execute? `category_ids` is an array of integers.
```
.where(:"categories.id" => category_ids).or.where(:"category_relationships.category_id" => category_ids)
One way is to revert to raw SQL...
```
YourModel.where("categories.id IN (?) OR category_relationships.category_id IN (?)", category_ids, category_ids)
``` | Keep the SQL out of it and use ARel, like this:
```
.where(Category.arel_table[:id].in(category_ids).
or(CategoryRelationship.arel_table[:category_id].in(category_ids)))
``` | Rails ActiveRecord where or clause | [
"",
"sql",
"ruby-on-rails",
"activerecord",
"where-clause",
""
] |
I have a problem trying to delete a record from my VS 2012 application; I'm using SQL Server 2012. This is a task from my lecturer, and I can't solve it.
This is what I have now:
```
Private Sub bt_hapus_Click(sender As Object, e As EventArgs) Handles bt_hapus.Click
Try
Dim sqlda As New SqlClient.SqlDataAdapter("Delete from tabelpasien where No_Rkm_Mds=" & Me.txt_rkm_mds.Text, Me.SqlConnection1)
sqlda.Fill(dspasien, "tabelpasien")
MsgBox("Data telah berhasil dihapus")
bersih()
pasif()
normal()
Catch ex As Exception
MsgBox(ex.Message)
End Try
End Sub
```
Any help would be greatly appreciated... | A delete command is executed using an SqlCommand and the [ExecuteNonQuery](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executenonquery.aspx) method.
Your code should be
```
Try
Dim cmd = New SqlClient.SqlCommand("Delete from tabelpasien where No_Rkm_Mds=@rkm", Me.SqlConnection1)
cmd.Parameters.AddWithValue("@rkm", Me.txt_rkm_mds.Text)
cmd.ExecuteNonQuery()
....
```
Using a parameterized query you don't have to put quotes around your where values (if the underlying field is any kind of char/varchar/nvarchar type) but, most importantly, a parameterized query eliminates a possible [Sql Injection](https://stackoverflow.com/questions/332365/how-does-the-sql-injection-from-the-bobby-tables-xkcd-comic-work) attack | You have forgotten the single quote marks (`'`) around your condition.
Your statement should be:
`Delete From tabelpasien where No_Rkm_Mds='" + Me.txt_rkm_mds.Text + "'"` | VB .NET SQL Delete error 'incorrect syntax near '=' | [
"",
"sql",
"vb.net",
"visual-studio-2012",
"sql-server-2012",
"vb.net-2010",
""
] |
I have a table with following columns:
1. User\_Id
2. Work\_Date
```
create table Test_Seq(user_id number, work_date date);
```
It has following data:
```
insert into Test_Seq values (1, '01-SEP-2013');
insert into Test_Seq values (1, '02-SEP-2013');
insert into Test_Seq values (1, '06-SEP-2013');
insert into Test_Seq values (1, '09-SEP-2013');
insert into Test_Seq values (1, '10-SEP-2013');
insert into Test_Seq values (2, '10-SEP-2013');
insert into Test_Seq values (2, '26-SEP-2013');
insert into Test_Seq values (2, '30-SEP-2013');
insert into Test_Seq values (2, '01-OCT-2013');
```
This table stores work\_date for user. This work\_date may or may not be in sequence.
There is one more table:
```
create table temp_holidys (holiday date);
insert into temp_holidys values ('27-SEP-2013');
insert into temp_holidys values ('31-DEC-2013');
```
I need queries / PL/SQL to get the last Work\_Date (ordered descending) and its associated sequence start date. Sat and Sun will not have any record, but they will still be treated as in sequence (calendar days).
Just as Sat and Sun are treated as part of the sequence, a day should also be treated as in sequence if it is in the temp\_holidys table (see #2 below).
1. For user\_id 1, this should give me '10-SEP-2013' as end date and '06-SEP-2013' as start date
2. For user\_id 2, this should give me '01-OCT-2013' as end date and '26-SEP-2013' as start date ('27-SEP-2013' needs to be treated as in sequence, as it is defined in the temp\_holidys table)
3. It has to be a true sequence: for example in #1, for user id 1, if there were no record for '09-SEP-2013', it should return '10-SEP-2013' as the start date. Also in #2, for user 2, if there were no record on '26-SEP-2013', it should return '30-SEP-2013' as the start date. | Here's one approach.
* In this approach, all the working days, holidays and weekends are put together in the same table.
* Then the starting of each sequence is identified with dates ordered in descending order.
* Each sequence is given a number.
* Max and Min of first sequence is found out, which is the required result.
Here's the query for user 1.
```
/*---for user 1---*/
with minmaxdays as(
--find the latest and earliest working date for each user
select min(work_date) min_date,
max(work_date) max_date
from test_seq
where user_id = 1
),
alldays as(
--generate all days from earliest to latest dates
select min_date + level all_days
from minmaxdays
connect by min_date + level < max_date
),
combined_test_seq as(
--get the working days
select work_date working_days, 'W' date_type --W indicates working days
from test_seq
where user_id = 1
union all
--get the holidays
select holiday working_days, 'H' date_type --H indicates holidays/weekends
from temp_holidys
union all
--get all the weeknds
select all_days working_days, 'H' date_type --H indicates holidays/weekends
from alldays
where to_char(all_days,'D') in ('1','7') --select only saturdays and sundays
),
grouping as(
--find out the beginning of each sequence
select working_days,
date_type,
case when working_days + 1 =
lag(working_days,1) over (order by working_days desc)
then 0
else 1
end seq_start
from combined_test_seq
),
grouping2 as(
--assign sequence no, and keep only the working days
select working_days,
sum(seq_start) over (order by working_days desc) grp
from grouping
where date_type = 'W'
)
-- get the max and min date in the first sequence.
select max(working_days) keep (dense_rank first order by grp),
min(working_days) keep (dense_rank first order by grp)
from grouping2;
```
Result:
```
max(date) min(date)
-------------------------
10-SEP-2013 06-SEP-2013
```
Demo [here](http://sqlfiddle.com/#!4/31d21/6). | You need a PL/SQL function: either one that gives you pipelined output, or one that tells you whether days follow each other. Here is a solution for the second way:
This is the function needed. It returns 0 for false and 1 for true, due to the lack of a boolean data type in Oracle SQL:
```
create or replace function are_dates_adjacent(vi_start_date date, vi_end_date date) return number as
v_count integer;
begin
-- Same day or next day is of course in sequence with the start day
IF trunc(vi_end_date) in ( trunc(vi_start_date), trunc(vi_start_date) + 1 ) then
return 1; -- TRUE
-- An end day before start day is invalid
elsif trunc(vi_end_date) < trunc(vi_start_date) then
return 0; -- FALSE
end if;
-- Now loop through the days between first and last to look out for gaps, i.e. skipped working days
for offset in 1 .. trunc(vi_end_date) - trunc(vi_start_date) - 1 loop
-- If Saturday or Sunday, then we are fine with this, otherwise let's see if it's a holiday
if to_char(trunc(vi_start_date) + offset, 'DY', 'NLS_DATE_LANGUAGE=AMERICAN') not in ('SAT','SUN') then
-- If it's neither Saturday or Sunday nor a holiday then return false
select count(*) into v_count from temp_holidys where holiday = trunc(vi_start_date) + offset;
if v_count = 0 then
return 0; -- FALSE
end if;
end if;
end loop;
-- No gap detected; return true
return 1; -- TRUE
end;
```
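To make the bridging rule concrete, here is an illustrative Python equivalent of that adjacency check (the holiday set and the function name are assumptions, not part of the PL/SQL):

```python
from datetime import date, timedelta

HOLIDAYS = {date(2013, 9, 27), date(2013, 12, 31)}   # mirrors temp_holidys

def dates_adjacent(start, end):
    """True when every day strictly between start and end is a weekend or holiday."""
    if end < start:
        return False
    d = start + timedelta(days=1)
    while d < end:
        if d.weekday() < 5 and d not in HOLIDAYS:    # a Mon-Fri working day -> gap
            return False
        d += timedelta(days=1)
    return True

print(dates_adjacent(date(2013, 9, 26), date(2013, 9, 30)))  # True: 27th holiday, 28th/29th weekend
print(dates_adjacent(date(2013, 9, 10), date(2013, 9, 26)))  # False: working days in between
```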
Here is the select statement. In the ordered list it first looks for group changes, i.e. the user changes or the dates are not considered adjacent. Groups are built based on this, so that at last we can find the first and last date per group.
```
select user_id, min(work_date), max(work_date)
from
(
select user_id, work_date, sum(group_change) over(order by user_id, work_date) as date_group
from
(
select
user_id,
work_date,
case when
user_id <> nvl(lag(user_id) over(order by user_id, work_date), user_id) or
are_dates_adjacent(nvl(lag(work_date) over(order by user_id, work_date), work_date), work_date) = 0
then 1 else 0 end as group_change
from Test_Seq
order by user_id, work_date
)
)
group by user_id, date_group
order by user_id, min(work_date);
```
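The grouping trick above (flag each change, then running-sum the flags into group numbers) can be sanity-checked outside Oracle. Below is a rough SQLite sketch via Python; it keeps only the consecutive-calendar-day part of the logic (no weekend/holiday handling, so `are_dates_adjacent` is replaced by a plain one-day gap test), and the table name `work_log` is made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE work_log (user_id INT, work_date TEXT);
INSERT INTO work_log VALUES
  (1, '2013-09-02'), (1, '2013-09-03'), (1, '2013-09-04'),  -- one run
  (1, '2013-09-09'), (1, '2013-09-10');                     -- gap, new run
""")
rows = conn.execute("""
SELECT user_id, MIN(work_date), MAX(work_date)
FROM (
  SELECT user_id, work_date,
         -- running sum of change flags = group number
         SUM(group_change) OVER (ORDER BY user_id, work_date) AS date_group
  FROM (
    SELECT user_id, work_date,
           -- flag a new group when the gap to the previous date exceeds 1 day
           CASE WHEN julianday(work_date) -
                     julianday(LAG(work_date) OVER (PARTITION BY user_id
                                                    ORDER BY work_date)) > 1
                THEN 1 ELSE 0 END AS group_change
    FROM work_log))
GROUP BY user_id, date_group
ORDER BY MIN(work_date)""").fetchall()
print(rows)  # [(1, '2013-09-02', '2013-09-04'), (1, '2013-09-09', '2013-09-10')]
```

Each run of adjacent dates collapses to one row with its first and last date, which is what the Oracle query produces per user and `date_group`.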
EDIT: And here is the select statement that gives you only the most recent working period for one user.
```
select start_date, end_date
from
(
select min(work_date) as start_date, max(work_date) as end_date
from
(
select work_date, sum(group_change) over(order by work_date) as date_group
from
(
select
work_date,
case when
are_dates_adjacent(nvl(lag(work_date) over(order by work_date), work_date), work_date) = 0
then 1 else 0 end as group_change
from Test_Seq
where user_id = 1
order by work_date
)
)
group by date_group
order by min(work_date) desc
)
where rownum = 1;
``` | how to get start date and end date of sequence days in table? | [
"",
"sql",
"oracle",
""
] |
I've got a PHP application that uses PDO with prepared statements, both in PostgreSQL and MySQL, and I'm wondering if there's a performance hit when preparing the exact same statements each time before executing it.
In pseudo-code, an example would be something like:
```
for ($x=0; $x<100; $x++) {
$obj = PDO::prepare("SELECT x,y,z FROM table1 WHERE x=:param1 AND y=:param2");
$obj->execute(array('param1'=>$param1, 'param2'=>$param2));
}
```
As opposed to preparing once and executing multiple times:
```
$obj = PDO::prepare("SELECT x,y,z FROM table1 WHERE x=:param1 AND y=:param2");
for ($x=0; $x<100; $x++) {
$obj->execute(array('param1'=>$param1, 'param2'=>$param2));
}
```
I've searched for this question many times, but can't seem to find a reference to it for PHP, PostgreSQL, or MySQL. | Yes, it somewhat defeats the purpose of using prepared statements. Preparing allows the DB to pre-parse the query and get things ready for execution. When you do execute the statement, the DB simply slips in the values you're providing and performs the last couple steps.
When you prepare inside a loop like that, all of the preparation work gets thrown out and re-done each time. What you should have is:
```
$obj = PDO::prepare("SELECT x,y,z FROM table1 WHERE x=:param1 AND y=:param2");
for ($x=0; $x<100; $x++) {
$obj->execute(array('param1'=>$param1, 'param2'=>$param2));
}
```
Prepare once, execute many times. | Yes, there's a performance hit. The prepare step has to parse the SQL string and analyze it to decide which tables it needs to access and how it's going to do that (which indexes to use, etc.). So if you prepare() once before the loop begins, and then just execute() inside the loop, this is a cost savings.
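To make the prepare-once pattern concrete, here is a rough sketch using Python's sqlite3 module as a stand-in for PDO (the table and data are invented for the demo). sqlite3 caches compiled statements per connection, so reusing the identical SQL string avoids re-parsing on each iteration, much like a reused prepared statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (x INTEGER, y INTEGER, z INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                 [(i, i % 3, i * 2) for i in range(100)])

# "Prepared" once: one parameterized SQL string, reused for every execution
sql = "SELECT z FROM table1 WHERE x = ? AND y = ?"
results = [conn.execute(sql, (x, x % 3)).fetchone()[0] for x in range(100)]
print(results[:5])  # [0, 2, 4, 6, 8]
```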
This is an example of a more general programming principle: **you shouldn't put code inside a loop if the result of that code is the same for every iteration of the loop.**
PS: You shouldn't use `PDO::prepare()`, you should create a PDO object and call `$pdo->prepare()`.
PPS: You should also check the return value of prepare() and execute(). They return **false** on errors. Or else configure PDO to throw exceptions on error. | Performance Problems When Preparing Statement Multiple Times | [
"",
"mysql",
"sql",
"performance",
"postgresql",
"prepared-statement",
""
] |
The objective here is to be able to query a database by providing it with a **Journey**; based on the **Journey**, the database will return all of the **Stop Codes** that run through this journey.
So for example I need to be able to say, "Select all of the stop codes that run through journey 34". This should then only return STOP CODE: SZDASDASDE. (in production many more codes will be returned).

Above you can see an image of the first table in a database.

You can also see the second table where each **STOP CODE** has many **JOURNEYS** as parents. As far as I know putting multiple journeys into a single field does not follow standard database design so if anyone can help me fix that I would really appreciate it.
These images were taken in Microsoft's Excel just to plan how I'm going to do this, production will be using a MySQL database.
Thanks | You have a **many-to-many** relationship between Stop Codes and Journeys. To resolve this, you need to decompose the relationship into two one-to-many relationships.
In order to do this, you need an intermediary table, let's call it JourneyStopCode, which will look like:
```
JourneyStopCode:
JourneyStopCodeID (primarykey)
JourneyID
StopCodeID
```
Then your Stop Code table wouldn't have a JourneyID field.
To retrieve stop codes for a journey, you'd do:
```
SELECT * FROM StopCode
INNER JOIN JourneyStopCode ON StopCode.StopCodeID = JourneyStopCode.StopCodeID
INNER JOIN Journey On Journey.JourneyID = JourneyStopCode.JourneyID
WHERE Journey.JourneyID = @yourJourneyID
```
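As a quick sanity check of the decomposition, the schema and the three-way join can be exercised in SQLite via Python (the sample journey and stop code are taken from the question; `@yourJourneyID` becomes a bound parameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Journey (JourneyID INTEGER PRIMARY KEY, Description TEXT);
CREATE TABLE StopCode (StopCodeID TEXT PRIMARY KEY, Latitude REAL, Longitude REAL);
CREATE TABLE JourneyStopCode (
    JourneyStopCodeID INTEGER PRIMARY KEY,
    JourneyID INTEGER REFERENCES Journey(JourneyID),
    StopCodeID TEXT REFERENCES StopCode(StopCodeID));
INSERT INTO Journey VALUES (34, 'Southampton - Portsmouth');
INSERT INTO StopCode VALUES ('SZDASDASDE', 12345, 67890);
INSERT INTO JourneyStopCode VALUES (1, 34, 'SZDASDASDE');
""")
rows = conn.execute("""
SELECT StopCode.StopCodeID
FROM StopCode
INNER JOIN JourneyStopCode ON StopCode.StopCodeID = JourneyStopCode.StopCodeID
INNER JOIN Journey ON Journey.JourneyID = JourneyStopCode.JourneyID
WHERE Journey.JourneyID = ?""", (34,)).fetchall()
print(rows)  # [('SZDASDASDE',)]
```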
**Edit: To visualise:**
```
--------------- --------------------- ----------------
| Journey | | JourneyStopCode | | StopCode |
--------------- --------------------- ----------------
| JourneyID |<--- | JourneyStopCodeID | --->| StopCodeID |
| Description | |----| JourneyID | | | Latitude |
--------------- | StopCodeID |------ | Longitude |
--------------------- ----------------
```
Then your data would look like:
## Journey
```
----------------------------------------
| JourneyID | Description |
----------------------------------------
| 34 | Southampton - Portsmouth |
----------------------------------------
```
## StopCode
```
----------------------------------------
| StopID | Latitude | Longitude |
----------------------------------------
| SSDAFS | 12345 | 67890 |
----------------------------------------
```
## JourneyStopCode
```
------------------------------------------
| JourneyStopID | JourneyID | StopCodeID |
------------------------------------------
| 1 | 34 | SSDAFS |
------------------------------------------
``` | Journeys to stop codes is a many-to-many relationship; you most likely want a join table.
```
Table 1 :
SID | Stop Code | Long | lat
0 | ASDFSAFA | 1 | 2
1 | sdDSGSDGS | 4 | 0
....
Table 2 :
Journey | Description
0 | Blah blah blah
Table 3 :
Journey | SID
0 | 1
2 | 1
1 | 4
SELECT A.Long, A.Lat FROM TABLE1 A WHERE A.SID IN (
SELECT SID FROM TABLE3 WHERE JOURNEY = 0
);
``` | Normalizing a database column | [
"",
"mysql",
"sql",
"database-design",
"database-normalization",
""
] |
I'm trying to create a table in postgresql to hold user Work Items. Each Work Item will have a unique ID across the system, but then will also have a friendly sequential ID for that specific user.
So far I have been just using:
```
CREATE TABLE WorkItems
(
Id SERIAL NOT NULL PRIMARY KEY,
...
);
```
but I don't know how to compartmentalize this serial per user, nor how to do it with sequences either.
So I want for users to see sequential Friendly Ids
```
User1: ids = 1, 2, 3 etc..
User2: ids = 1, 2, 3, 4, 5 etc..
```
and not have to deal with the unique Ids, but those items would have unique ids from 1-8.
I've looked around quite a bit and can't find any posts on this topic. Maybe it's just hard to specify and search for?
Should I be storing the LastWorkIdUsed in a user column, and then just manually create a friendly Id for their items? Seems like I'd unnecessarily have to worry about concurrency and transactions then. Is there an easy way to pass the LastWorkIdUsed per user to Postgresql when generating a number?
Or is there a way to assign a sequence to a user?
Perhaps I've got the fundamental design idea wrong. How would one do this in SQLServer or MySQL also? | You said that you *also* have a globally unique key. You can use that to solve your concurrency issues. If you allow the user work item number to be null, you can first create the work item, then do something like this to assign it a "friendly" number. (T-SQL dialect, @workItemID is the globally unique ID):
```
Update wi
set friendlyNumber = coalesce(
(select max(friendlyNumber)+1 from WorkItem wi2
where wi2.userID = wi.userID and not wi2.friendlyNumber is null)
, 1)
From WorkItem wi
where workItemID = @workItemID and friendlyNumber is null
```
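The same `MAX(friendlyNumber) + 1` idea translates almost verbatim to other engines; here is a rough SQLite sketch via Python (schema stripped down to the three relevant columns; under concurrent writers you would still wrap this in a transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE WorkItem (workItemID INTEGER PRIMARY KEY,
                       userID INT, friendlyNumber INT);
INSERT INTO WorkItem (userID) VALUES (1), (1), (2), (1);
""")

def assign_friendly(conn, work_item_id):
    # Next per-user number, or 1 if the user has no numbered items yet
    conn.execute("""
        UPDATE WorkItem
        SET friendlyNumber = COALESCE(
              (SELECT MAX(w2.friendlyNumber) + 1 FROM WorkItem w2
               WHERE w2.userID = WorkItem.userID
                 AND w2.friendlyNumber IS NOT NULL), 1)
        WHERE workItemID = ? AND friendlyNumber IS NULL""", (work_item_id,))

for wid in (1, 2, 3, 4):
    assign_friendly(conn, wid)
rows = conn.execute("SELECT workItemID, userID, friendlyNumber "
                    "FROM WorkItem ORDER BY workItemID").fetchall()
print(rows)  # [(1, 1, 1), (2, 1, 2), (3, 2, 1), (4, 1, 3)]
```

Each user gets their own 1, 2, 3, ... sequence while `workItemID` stays globally unique.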
Then find out the number like this:
```
select friendlyNumber from WorkItem where workItemID = @workItemID
``` | The most common way to have an auto-generated ID inside a table is to use a sequence; if the sequence is used on an `int`/`bigint` column, then I see no reason why this won't be user-friendly.
A demo follows on `Postgresql 13.6` assuming `postgres` is the user under which table and sequence will belong:
* Define a sequence to work with.
```
CREATE SEQUENCE user_id_seq;
ALTER SEQUENCE user_id_seq OWNER TO postgres;
```
* Test the sequence
```
-- This should return 1
SELECT nextval('user_id_seq'::regclass);
```
[](https://i.stack.imgur.com/y7l4u.png)
* Use it as a default value on your table/relation.
```
CREATE TABLE "user" (
user_id bigint not null primary key default nextval('user_id_seq'::regclass)
..
..
);
```
* You might need to modify `nextval('user_id_seq'::regclass)` to `nextval('public.user_id_seq'::regclass)` (a.k.a prepend your schema name) in some Postgresql versions.
**Can you assign a sequence to a user/entry?**
* You could use a trigger and re-calibrate the sequence before insertion, but please don't.
* That's an anti-pattern
* Use a `reference_number` column instead which will be calculated from your back-end code (Spring, Django, FastAPI, Node, etc) which will follow a pattern for each user. | How to create sequential ID's per user in Postgresql | [
"",
"mysql",
"sql",
"sql-server",
"postgresql",
""
] |
In SQL Server you can do this:
```
DECLARE @ID int
SET @ID=1
SELECT * FROM Person WHERE ID=@ID
```
What is the equivalent Oracle code?
I have spent some time Googling this, but I have not found an answer.
<http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/constantvar_declaration.htm>
The sample below was taken from another page in the user’s guide.
```
DECLARE
bonus NUMBER(8,2);
emp_id NUMBER(6) := 100;
BEGIN
SELECT salary * 0.10 INTO bonus FROM employees
WHERE employee_id = emp_id;
END;
/
```
... and for your example ...
```
DECLARE
  p_id NUMBER(6) := 1;
  p_row Person%ROWTYPE;
BEGIN
  -- Inside a PL/SQL block a SELECT needs an INTO target
  SELECT * INTO p_row FROM Person WHERE ID = p_id;
END;
/
```
| You create a bind variable
```
VARIABLE ID number
```
Assigning the variable must be done in a PL/SQL block (`execute` is a shortcut for that)
```
execute :id := 1
```
You can then use it in a sql statement
```
SELECT * FROM Person WHERE ID=:ID ;
``` | Translate TSQL to PLSQL | [
"",
"sql",
"sql-server",
"oracle",
""
] |
I have the following query (executed through PHP). How can I make it show zeros when the result is NULL and the row is otherwise not shown?
```
select count(schicht) as front_lcfruh,
kw,
datum
from dienstplan
left join codes on dienstplan.schicht = codes.lcfruh
left join personal on personal.perso_id = dienstplan.perso_id
where codes.lcfruh != ''
and personal.status = 'rezeption'
and dienstplan.kw = '$kw'
group by dienstplan.datum
``` | I'm not entirely sure I understand the question, but I think you want this:
```
select count(codes.lcfruh) as front_lcfruh,
dienstplan.kw,
dienstplan.datum
from dienstplan
left join codes on dienstplan.schicht = codes.lcfruh and codes.lcfruh <> ''
left join personal on personal.perso_id = dienstplan.perso_id
and personal.status = 'rezeption'
and dienstplan.kw = $kw
group by dienstplan.datum, dienstplan.kw
```
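To see the effect of moving the filters into the JOIN clause, here is a minimal SQLite sketch via Python (toy data invented for the demo; `$kw` replaced by a literal):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dienstplan (datum TEXT, kw INT, schicht TEXT, perso_id INT);
CREATE TABLE codes (lcfruh TEXT);
CREATE TABLE personal (perso_id INT, status TEXT);
INSERT INTO personal VALUES (7, 'rezeption');
INSERT INTO codes VALUES ('F1');
INSERT INTO dienstplan VALUES ('2013-09-01', 36, 'F1', 7);  -- matches a code
INSERT INTO dienstplan VALUES ('2013-09-02', 36, 'XX', 7);  -- no matching code
""")
rows = conn.execute("""
SELECT COUNT(codes.lcfruh) AS front_lcfruh, dienstplan.datum
FROM dienstplan
LEFT JOIN codes ON dienstplan.schicht = codes.lcfruh AND codes.lcfruh <> ''
LEFT JOIN personal ON personal.perso_id = dienstplan.perso_id
                  AND personal.status = 'rezeption'
WHERE dienstplan.kw = 36
GROUP BY dienstplan.datum
ORDER BY dienstplan.datum""").fetchall()
print(rows)  # [(1, '2013-09-01'), (0, '2013-09-02')]
```

The day without a matching code still appears, with a count of 0; with `where codes.lcfruh != ''` in the WHERE clause it would be filtered out entirely.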
If `schicht` comes from `dienstplan`, there will always be a row for it (as that is the driving table). If I understand you correctly, you want a `0` if no matching rows are found. Therefore you need to count the *joined* table.
Edit:
The condition `where codes.lcfruh != ''` turns the outer join back into an inner join, because any "outer" row will have lcfruh as NULL, and any comparison with NULL yields "unknown"; therefore those rows are removed from the final result. If you want to exclude rows in the `codes` table where `lcfruh` is an empty string, you need to move that condition into the JOIN clause (see above).
And two more things: get used to prefixing your columns in a query with more than one table. That avoids ambiguity and makes the query more stable against changes. You should also understand the difference between number literals and string literals: `1` is a number, `'1'` is a string. It's a bad habit to use string literals where numbers are expected. MySQL is pretty forgiving, as it always tries to "somehow" make things work, but if you ever use another DBMS you might get errors you don't understand.
Additionally your usage of `group by` is wrong and will lead to "random" values being returned. Please see these blog posts to understand why:
* <http://rpbouman.blogspot.de/2007/05/debunking-group-by-myths.html>
* <http://www.mysqlperformanceblog.com/2006/09/06/wrong-group-by-makes-your-queries-fragile/>
Every other DBMS will reject your query the way it is written now (and MySQL will as well in case you turn on a more ANSI compliant mode) | If you have no matching rows, then MySQL will return an **empty set** (here I have defined the fields at random, just to run the query):
```
mysql> CREATE TABLE dienstplan (kw varchar(10), datum integer, schicht integer, perso_id integer);
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE codes (lcfruh varchar(2));
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE personal (perso_id integer, status varchar(5));
Query OK, 0 rows affected (0.00 sec)
mysql>
mysql> select count(schicht) as front_lcfruh,
-> kw,
-> datum
-> from dienstplan
-> left join codes on dienstplan.schicht = codes.lcfruh
-> left join personal on personal.perso_id = dienstplan.perso_id
-> where codes.lcfruh != ''
-> and personal.status = 'rezeption'
-> and dienstplan.kw = '$kw'
-> group by dienstplan.datum;
Empty set (0.00 sec)
```
You can check how many rows were returned by the query. In this case you will get zero.
So you could modify your code like this (pseudo code):
```
$exec = $db->execute($query);
if ($exec->error() !== false) {
// Handle errors. Possibly quit or raise an exception.
}
if (0 == $exec->count())
{
// No rows. We return a default tuple
$tuple = array(
'front_lcfruh' => 0,
'kw' => $kw,
'datum' => null
);
handleTuple($tuple);
} else {
while($tuple = $exec->fetch()) {
handleTuple($tuple);
}
}
```
Where `handleTuple()` is the function that formats or otherwise manipulates the returned rows. | showing Zero if sql count is NULL | [
"",
"mysql",
"sql",
"count",
""
] |
This is a common SQL query for me:
```
update table1 set col1 = (select col1 from table2 where table1.ID = table2.ID)
where exists (select 1 from table2 where table1.ID = table2.ID)
```
Is there any way to avoid having two nearly identical subqueries? This query is an obvious simplification but performance suffers and query is needlessly messy to read. | Unfortunately Informix don't support the FROM clause at UPDATE statement.
The way to work around this, and get better performance as well, is to change the UPDATE into a MERGE statement.
This will work only if your database is version 11.50 or above
```
MERGE INTO table1 as t1
USING table2 as t2
ON t1.ID = t2.ID
WHEN MATCHED THEN UPDATE set (t1.col1, t1.col2) = (t2.col1, t2.col2);
```
Check [IBM Informix manual](http://pic.dhe.ibm.com/infocenter/informix/v121/index.jsp?tab=search&searchWord=merge) for more information | Update with inner join can be used to avoid subqueries
something like this:
```
update t1
set col1 = t2.col1
from table1 t1
inner join table2 t2
on t1.ID = t2.ID
``` | Using subquery in an update always requires subquery in a where clause? | [
"",
"sql",
"informix",
""
] |
I have the following SQL Query:
```
SELECT Score FROM (
SELECT SUM(Score) AS Score
FROM ResponseData
WHERE ScoreGUID='baf5dd3e-949c-4255-ad48-fd8f2485399f'
GROUP BY Section
) AS Totals
GROUP BY Score
```
And it produces this:
```
Score
0
1
2
2
4
5
5
```
But, what I really want is the number of each of the scores, like this:
```
Score Count
0 1
1 1
2 2
4 1
5 2
```
I want to show how many of each score, One 0, One 1, Two 2's, One 4, and 2 5's.
But I am not sure how to do this query, would someone be able to show me how to achieve this?
Thanks for the help | Try this code:
```
SELECT Score,count(score) as cnt
FROM ResponseData
WHERE ScoreGUID='baf5dd3e-949c-4255-ad48-fd8f2485399f'
GROUP BY Score
``` | Something like:
```
SELECT Score, COUNT(Score)
FROM ResponseData
WHERE ScoreGUID='baf5dd3e-949c-4255-ad48-fd8f2485399f'
GROUP BY Score
```
should work. But I don't understand the meaning of your:
```
group by Section
``` | SQL Query to show count of same results | [
"",
"sql",
""
] |
I have a bunch of rows in my db that signify orders, i.e.
```
id | date
---------------------
1 | 2013-09-01
2 | 2013-09-01
3 | 2013-09-02
4 | 2013-09-04
5 | 2013-09-04
```
What I'd like is to display the count of rows per day, including missing days, so the output would be:
```
2013-09-01 | 2
2013-09-02 | 1
2013-09-03 | 0
2013-09-04 | 2
```
I've seen examples of having 2 tables, one with the records and the other with dates, but I'd ideally like to have a single table for this.
I can currently find the rows that have a record, but not days that do not.
Does anyone have an idea on how to do this?
Thanks | If you want to get data for last 7 days, you can generate your pseudo-table via `UNION`, like:
```
SELECT
COUNT(t.id),
fixed_days.fixed_date
FROM t
RIGHT JOIN
(SELECT CURDATE() as fixed_date
UNION ALL SELECT CURDATE() - INTERVAL 1 day
UNION ALL SELECT CURDATE() - INTERVAL 2 day
UNION ALL SELECT CURDATE() - INTERVAL 3 day
UNION ALL SELECT CURDATE() - INTERVAL 4 day
UNION ALL SELECT CURDATE() - INTERVAL 5 day
UNION ALL SELECT CURDATE() - INTERVAL 6 day) AS fixed_days
ON t.`date` = `fixed_days`.`fixed_date`
GROUP BY
`fixed_days`.`fixed_date`
```
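The same idea also works with a generated date series instead of a hand-written UNION. Here is a rough SQLite sketch via Python using a recursive CTE and the sample data from the question (MySQL 8+ supports recursive CTEs as well):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, date TEXT);
INSERT INTO t VALUES (1,'2013-09-01'),(2,'2013-09-01'),
                     (3,'2013-09-02'),(4,'2013-09-04'),(5,'2013-09-04');
""")
rows = conn.execute("""
WITH RECURSIVE days(d) AS (
  SELECT '2013-09-01'
  UNION ALL SELECT date(d, '+1 day') FROM days WHERE d < '2013-09-04'
)
SELECT days.d, COUNT(t.id)
FROM days LEFT JOIN t ON t.date = days.d
GROUP BY days.d
ORDER BY days.d""").fetchall()
print(rows)
# [('2013-09-01', 2), ('2013-09-02', 1), ('2013-09-03', 0), ('2013-09-04', 2)]
```

The missing day (2013-09-03) shows up with a count of 0 because the date series drives the LEFT JOIN.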
See this [fiddle demo](http://sqlfiddle.com/#!8/be53d/2). Note that if your fields are of the `DATETIME` type, then you'll need to apply `DATE()` first:
```
SELECT
COUNT(t.id),
fixed_days.fixed_date
FROM t
RIGHT JOIN
(SELECT CURDATE() as fixed_date
UNION ALL SELECT CURDATE() - INTERVAL 1 day
UNION ALL SELECT CURDATE() - INTERVAL 2 day
UNION ALL SELECT CURDATE() - INTERVAL 3 day
UNION ALL SELECT CURDATE() - INTERVAL 4 day
UNION ALL SELECT CURDATE() - INTERVAL 5 day
UNION ALL SELECT CURDATE() - INTERVAL 6 day) AS fixed_days
ON DATE(t.`date`) = `fixed_days`.`fixed_date`
GROUP BY
`fixed_days`.`fixed_date`
``` | Try something like this!
```
SELECT
c1,
GROUP_CONCAT(c2 ORDER BY c2) AS 'C2 values'
FROM table
GROUP BY c1;
```
To retrieve a list of c1 values for which there exist specific values in another column c2, you need an `IN` clause specifying the c2 values and a `HAVING` clause specifying the required number of different items in the list ...
```
SELECT c1
FROM table
WHERE c2 IN (1,2,3,4)
GROUP BY c1
HAVING COUNT(DISTINCT c2)=4;
```
For more help, see this related question:
[Counting all rows with specific columns and grouping by week](https://stackoverflow.com/questions/11036294/counting-all-rows-with-specific-columns-and-grouping-by-week) | Count/group rows based on date including missing | [
"",
"mysql",
"sql",
""
] |
I have a table with three columns
```
Product Version Price
1 1 25
1 2 15
1 3 25
2 1 8
2 2 8
2 3 4
3 1 25
3 2 10
3 3 5
```
I want to get the max price and the max version by product.
So in the above example the results would be: product 1, version 3, price 25; product 2, version 2, price 8.
Can you let me know how I would do this.
I'm on Teradata | If Teradata supports the `ROW_NUMBER` analytic function:
```
SELECT
Product,
Version,
Price
FROM (
SELECT
atable.*, /* or specify column names explicitly as necessary */
ROW_NUMBER() OVER (PARTITION BY Product
ORDER BY Price DESC, Version DESC) AS rn
FROM atable
) s
WHERE rn = 1
;
``` | Using Teradata SQL this can be further simplified:
```
SELECT * FROM atable
QUALIFY
ROW_NUMBER()
OVER (PARTITION BY Product
ORDER BY Price DESC, Version DESC) = 1;
```
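`QUALIFY` is Teradata-specific, but the underlying `ROW_NUMBER` filtering (as in the first answer) works in any engine with window functions. A quick SQLite check via Python, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE atable (Product INT, Version INT, Price INT);
INSERT INTO atable VALUES (1,1,25),(1,2,15),(1,3,25),
                          (2,1,8),(2,2,8),(2,3,4),
                          (3,1,25),(3,2,10),(3,3,5);
""")
rows = conn.execute("""
SELECT Product, Version, Price FROM (
  SELECT atable.*,
         ROW_NUMBER() OVER (PARTITION BY Product
                            ORDER BY Price DESC, Version DESC) AS rn
  FROM atable) s
WHERE rn = 1
ORDER BY Product""").fetchall()
print(rows)  # [(1, 3, 25), (2, 2, 8), (3, 1, 25)]
```

Per product this keeps the highest price and, among ties, the highest version, matching the expected result in the question.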
The QUALIFY is a Teradata extension to Standard SQL, it's similar to a HAVING for GROUP BY, it filters the result of a window function. | SQL Max over multiple versions | [
"",
"sql",
"max",
"greatest-n-per-group",
"teradata",
""
] |
I'm using db2, not sure what version. I'm getting an overflow error when trying to sum a table. I thought that I would be able to cast the sum to a BIGINT which seems to work for a sum total but I'm looking to get a percentage and when I cast to a BIGINT my data is inaccurate. How do I get an accurate percentage for Percent\_DeliveredB/A? Converting the numerator and denominator to BIGINT and dividing for percentage is not giving me the correct results.
Here's my script:
```
SELECT
FAT.DIM_BUILDING_ID,
FAT.BUILDING_NAME,
SUM(CAST(FAT.AMOUNT AS BIGINT)) AS SALES_SUM,
SUM(CAST(FAT.ORDERS AS BIGINT)) AS ORDERS_SUM,
SUM(CAST(FAT.CAPABILITY AS BIGINT)) AS CAPABILITY_SUM,
SUM(FAT.ORDERS_B)/sum(FAT.Amount) AS Percent_DeliveredB,
SUM(FAT.ORDERS_A)/sum(FAT.Amount) AS Percent_DeliveredA,
SUM(CAST(FTS.GROUP_A AS BIGINT)) AS GROUP_A,
SUM(CAST(FTS.GROUP_B AS BIGINT)) AS GROUP_B,
SUM(CAST(FTS.GROUP_C AS BIGINT)) AS GROUP_C
FROM ORDERS AS FAT
INNER JOIN GROUPS AS FTS ON FAT.DIM_PROJECT_ID = FTS.DIM_PROJECT_ID
GROUP BY FAT.DIM_BUILDING_ID, FAT.BUILDING_NAME;
```
I tried the following but it comes back with 0 for the percentage.
```
SUM(CAST(FAT.ORDERS_B AS BIGINT))/sum(CAST(FAT.Amount AS BIGINT)) AS Percent_DeliveredB
``` | I was able to get the correct results converting to double.
```
SUM(CAST(FAT.ORDERS_B AS DOUBLE))/sum(CAST(FAT.Amount AS DOUBLE)) AS Percent_DeliveredB,
``` | It is giving you a correct result, as any value less than 1, when cast to an integer (or BIGINT for that matter) will be truncated to 0, obviously. If you are expecting a fractional number, use DECIMAL or FLOAT data types:
```
cast(SUM(FAT.ORDERS_B) as decimal(10,2)) /
cast(sum(FAT.Amount) as decimal(10,2)) AS Percent_DeliveredB
```
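The truncation behaviour is easy to reproduce; a tiny SQLite sketch via Python (SQLite's `REAL` standing in for DB2's `DOUBLE`/`DECIMAL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Integer / integer truncates toward zero, so a ratio below 1 becomes 0;
# casting either operand to a floating type first preserves the fraction.
int_ratio, real_ratio = conn.execute(
    "SELECT 25 / 100, CAST(25 AS REAL) / 100").fetchone()
print(int_ratio, real_ratio)  # 0 0.25
```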
Use the correct precision for your needs, of course. | Arithmetic overflow or other arithmetic exception occurred solution? | [
"",
"sql",
"db2",
"aggregate-functions",
""
] |
I'm not sure the title is clear, but this is the situation.
I have a table that looks like this:
```
ID inputID value
4 1 10
4 2 20
4 3 100
6 1 15
6 2 20
6 3 44
```
I have user input that gives me the values for inputID 1 and inputID 2, after which I want to get the ID to get the other information corresponding to that ID.
Example: If the user gives inputID(1)=10, inputID(2)=20, I want to get 4
Using simple AND statements don't solve this problem. I have asked already asked a few people, but I can't seem to solve this seemingly simple problem. | What about something like this?
```
SELECT t1.ID
FROM table t1
INNER JOIN table t2
ON t1.ID = t2.ID
WHERE
t1.inputID = 1 AND t1.value = @input1 AND
t2.inputID = 2 AND t2.value = @input2
``` | ```
select ID from T as T1
where inputID=1 and value=10
and EXISTS(select id from T where ID=T1.ID and inputID=2 and value=20)
``` | SQL query to get ID from multiple inputs | [
"",
"sql",
""
] |
I have a table that has employee's daily totals with start date, I need to look back 3 months to see how many days an employee has worked.
Here is my sql query:
```
SELECT
EMPNO,
CONVERT(VARCHAR(10), STARTDATE,101),
ROW_NUMBER() OVER (ORDER BY PERSONNUM) AS 'ROWCOUNT'
FROM EMPLOYEE
WHERE STARTDATE BETWEEN DATEADD(month, -3, GETDATE()) and GETDATE()
GROUP BY EMPNO,STARTDATE
ORDER BY EMPNO
```
Result
```
EMPNO STARTDATE ROWCOUNT
TEST108 09/13/2013 1
TEST108 09/16/2013 2
TEST108 09/17/2013 3
TEST108 09/19/2013 4
TEST109 09/04/2013 5
TEST109 09/05/2013 6
TEST109 09/06/2013 7
TEST110 09/03/2013 9
TEST110 09/04/2013 10
TEST110 09/05/2013 11
```
Desired Result
```
EMPNO ROWCOUNT
TEST108 4
TEST109 3
TEST110 3
```
Thank you, | You can use this CTE with `ROW_NUMBER` and `COUNT(*)OVER`:
```
WITH CTE AS
(
SELECT EMPNO, STARTDATE,
RN = ROW_NUMBER() OVER (PARTITION BY EMPNO ORDER BY STARTDATE DESC),
[ROWCOUNT] = COUNT(*) OVER (PARTITION BY EMPNO)
FROM dbo.EMPLOYEE
WHERE STARTDATE BETWEEN DATEADD(month, -3, GETDATE()) and GETDATE()
)
SELECT EMPNO, STARTDATE, [ROWCOUNT]
FROM CTE
WHERE RN = 1
```
`DEMO`
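The CTE can also be checked locally; a SQLite sketch via Python with the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE EMPLOYEE (EMPNO TEXT, STARTDATE TEXT);
INSERT INTO EMPLOYEE VALUES
 ('TEST108','2013-09-13'),('TEST108','2013-09-16'),
 ('TEST108','2013-09-17'),('TEST108','2013-09-19'),
 ('TEST109','2013-09-04'),('TEST109','2013-09-05'),('TEST109','2013-09-06'),
 ('TEST110','2013-09-03'),('TEST110','2013-09-04'),('TEST110','2013-09-05');
""")
rows = conn.execute("""
WITH CTE AS (
  SELECT EMPNO,
         ROW_NUMBER() OVER (PARTITION BY EMPNO ORDER BY STARTDATE DESC) AS RN,
         COUNT(*)    OVER (PARTITION BY EMPNO) AS CNT
  FROM EMPLOYEE)
SELECT EMPNO, CNT FROM CTE WHERE RN = 1 ORDER BY EMPNO""").fetchall()
print(rows)  # [('TEST108', 4), ('TEST109', 3), ('TEST110', 3)]
```

(The three-month `STARTDATE` filter is omitted here since the toy data already falls inside the window.)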
The `ROW_NUMBER` returns a number for every row in a partition (similar to `GROUP BY EMPNO`) and allows selecting all columns without needing to aggregate them all. The `COUNT(*) OVER` returns the total count of rows for each partition, which is what you want. I use the `ROW_NUMBER` just to remove the duplicates. | Can't you use a count?
```
SELECT EMPNO, COUNT(EMPNO) AS [ROWCOUNT]
FROM EMPLOYEE
WHERE STARTDATE BETWEEN DATEADD(month, -3, GETDATE()) and GETDATE()
GROUP BY EMPNO
ORDER BY EMPNO
``` | SQL 2008 - I need to count number of rows per employee in given time frame | [
"",
"sql",
"sql-server",
""
] |
In Symfony2 and Doctrine I would like to execute a query that returns a count and a group by.
Here's what I've tried. This is the SQL I want to run:
```
SELECT `terrain_id` , COUNT( * )
FROM `Partie`
WHERE 1 =1
GROUP BY `terrain_id`
```
**With my entity:**
```
class Partie
{
/**
* @var integer
*
* @ORM\Column(name="id", type="integer")
* @ORM\Id
* @ORM\GeneratedValue(strategy="AUTO")
*/
private $id;
/**
* @ORM\ManyToOne(targetEntity="Gp\UserBundle\Entity\User",
inversedBy="parties", cascade={"persist"})
* @ORM\JoinColumn(nullable=false)
*/
private $user;
/**
* @ORM\ManyToOne(targetEntity="Gp\JeuxBundle\Entity\Terrain")
*/
private $terrain;
```
**This is my PartieRepository**
```
public function getTest(\Gp\UserBundle\Entity\User $user){
return $this->createQueryBuilder('p')
->select('count(p), p.terrain')
->where('p.user = :user')
->setParameter('user', $user)
->groupBy('r.terrain')
->getQuery()
->getResult();
}
```
This is the error I get:
```
[Semantical Error] line 0, col 19 near 'terrain FROM': Error:
Invalid PathExpression. Must be a StateFieldPathExpression.
``` | You'll probably want to go with a [Native Query](http://docs.doctrine-project.org/en/latest/reference/native-sql.html)
```
$sql = "SELECT terrain_id as terrain,
count(*) AS count "
."FROM Partie "
."GROUP BY terrain_id;";
$rsm = new ResultSetMapping;
$rsm->addScalarResult('terrain', 'terrain');
$rsm->addScalarResult('count', 'count');
$query = $this->_em->createNativeQuery($sql, $rsm);
return $query->getResult();
```
Just add in any having / where clauses as needed.
The following is my result:
```
Array
(
[0] => Array
(
[terrain] =>
[count] => 7
)
[1] => Array
(
[terrain] => 1
[count] => 5
)
[2] => Array
(
[terrain] => 2
[count] => 1
)
)
```
The lack of `terrain` in the first array is due to null `terrain_id`.
**EDIT**
OP has unexpected results, so here are some troubleshooting steps:
1) Try a `var_dump($query->getSQL());` right **before** the `return` statement, and run the SQL directly against your DB. If this produces incorrect results, examine the query and alter the `$sql` as appropriate.
2) If #1 produces correct results, try a `var_dump($query->getResult());` right **before** the return statement. If this produces correct results, something is going on deeper in your code. It's time to look at why `terrain` is being filtered. It may be as simple as removing or changing the alias in SQL and `addScalarResult`.
3) Try an even simpler function:
```
$sql = "SELECT distinct(terrain_id) FROM Partie;";
$rsm = new ResultSetMapping;
$rsm->addScalarResult('terrain_id', 'terrain_id');
$query = $this->_em->createNativeQuery($sql, $rsm);
var_dump($query->getSQL());
var_dump($query->getResult());
return $query->getResult();
``` | This error occurs on this line: `select('count(p), p.terrain')`, where you are trying to use the `p` alias, which doesn't exist anymore. The `select` method overrides the default alias of `createQueryBuilder()`. To avoid this, use `addSelect` instead, or specify the `from` method explicitly.
Try this :
```
public function getTest(\Gp\UserBundle\Entity\User $user){
return $this->createQueryBuilder('p')
->addSelect('count(p), p.terrain')
->where('p.user = :user')
->setParameter('user', $user)
->groupBy('r.terrain')
->getQuery()
->getResult();
}
```
or this :
```
public function getTest(\Gp\UserBundle\Entity\User $user){
return $this->createQueryBuilder('p')
->select('count(p), p.terrain')
->from('YourBundle:YourEntity', 'p')
->where('p.user = :user')
->setParameter('user', $user)
->groupBy('r.terrain')
->getQuery()
->getResult();
}
``` | symfony2 - Doctrine - How to do a multiple select with a count and group by | [
"",
"sql",
"symfony",
"doctrine-orm",
""
] |
Consider three tables -
```
users
id | type
-----------|------------
1 | a
2 | b
3 | c
types
id | type
-----------|------------
a | X
a | Y
b | X
c | X
c | Y
c | Z
training_status
id | training| status
-----------|-----------|-------------
1 | X | F
2 | X | S
2 | Y | S
3 | X | F
3 | Y | S
```
Each user has a type, and types defines the trainings that each user of a particular type have to complete.
`training_status` contains the status of all the trainings that a user has taken and their result (S, F). If a user is yet to take a training, there won't be any row for that training.
I would like to find out all users that have successfully completed all the trainings that they have to take.
Here's the direction that I am thinking in:
```
select
id
from users
join types
using (type)
left join training_status
using (id,type)
where status NOT IN(None, F);
```
Obviously this is not the right query because even if the user has completed one of the trainings, we get that row. In the aforementioned example, I'd like to get id = 2 because he has completed both trainings of its type. | Try
```
SELECT DISTINCT u.id
FROM users u JOIN types t
ON u.type = t.type LEFT JOIN training_status s
ON u.id = s.id AND t.training = s.training
WHERE s.status IS NOT NULL
GROUP BY u.id
HAVING COUNT(t.type) = SUM(CASE WHEN s.status = 'S' THEN 1 ELSE 0 END)
```
or
```
SELECT DISTINCT u.id
FROM users u JOIN types t
ON u.type = t.type LEFT JOIN training_status s
ON u.id = s.id AND t.training = s.training
GROUP BY u.id
HAVING MAX(s.status IS NULL OR s.status = 'F') = 0
```
Output:
```
+------+
| id |
+------+
| 2 |
+------+
```
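The second variant can be verified against the question's sample data; a SQLite sketch via Python (the `s.status IS NULL OR s.status = 'F'` flag is 1 for any missing or failed training, so `MAX(...) = 0` keeps only fully passing users):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INT, type TEXT);
INSERT INTO users VALUES (1,'a'),(2,'b'),(3,'c');
CREATE TABLE types (type TEXT, training TEXT);
INSERT INTO types VALUES ('a','X'),('a','Y'),('b','X'),
                         ('c','X'),('c','Y'),('c','Z');
CREATE TABLE training_status (id INT, training TEXT, status TEXT);
INSERT INTO training_status VALUES (1,'X','F'),(2,'X','S'),
                                   (2,'Y','S'),(3,'X','F'),(3,'Y','S');
""")
rows = conn.execute("""
SELECT u.id
FROM users u JOIN types t ON u.type = t.type
LEFT JOIN training_status s ON u.id = s.id AND t.training = s.training
GROUP BY u.id
HAVING MAX(s.status IS NULL OR s.status = 'F') = 0""").fetchall()
print(rows)  # [(2,)]
```

Only user 2 (type `b`, required training X, passed) survives the HAVING filter.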
Here is a **[SQLFiddle](http://sqlfiddle.com/#!2/af565/8)** demo | Try this
```
SELECT *
FROM users
WHERE id NOT IN (
SELECT u.id
FROM users u
JOIN types t
ON u.type = t.type
LEFT JOIN training_status s
ON u.id = s.id AND t.training = s.training
WHERE s.status IS NULL OR s.status = 'F')
``` | MYSQL:SQL query to return a row only if all the rows satisfy a condition | [
"",
"mysql",
"sql",
""
] |
I read the following question, which is relevant, but the replies didn't satisfy me: [MySQL: #126 - Incorrect key file for table](https://stackoverflow.com/q/2011050/570796)
---
## The problem
When running a query I get this error
> ERROR 126 (HY000): Incorrect key file for table
## The question
When I try to find the problem, I can't find one, so I don't know how to fix it with the repair command.
Are there any pointers on how I can find the cause of this issue, other than the ways I have already tried?
---
### The query
```
mysql> SELECT
-> Process.processId,
-> Domain.id AS domainId,
-> Domain.host,
-> Process.started,
-> COUNT(DISTINCT Joppli.id) AS countedObjects,
-> COUNT(DISTINCT Page.id) AS countedPages,
-> COUNT(DISTINCT Rule.id) AS countedRules
-> FROM Domain
-> JOIN CustomScrapingRule
-> AS Rule
-> ON Rule.Domain_id = Domain.id
-> LEFT JOIN StructuredData_Joppli
-> AS Joppli
-> ON Joppli.CustomScrapingRule_id = Rule.id
-> LEFT JOIN Domain_Page
-> AS Page
-> ON Page.Domain_id = Domain.id
-> LEFT JOIN Domain_Process
-> AS Process
-> ON Process.Domain_id = Domain.id
-> WHERE Rule.CustomScrapingRule_id IS NULL
-> GROUP BY Domain.id
-> ORDER BY Domain.host;
ERROR 126 (HY000): Incorrect key file for table '/tmp/#sql_2b5_4.MYI'; try to repair it
```
### mysqlcheck
```
root@scraper:~# mysqlcheck -p scraper
Enter password:
scraper.CustomScrapingRule OK
scraper.Domain OK
scraper.Domain_Page OK
scraper.Domain_Page_Rank OK
scraper.Domain_Process OK
scraper.Log OK
scraper.StructuredData_Joppli OK
scraper.StructuredData_Joppli_Product OK
```
### counted rows
```
mysql> select count(*) from CustomScrapingRule;
+----------+
| count(*) |
+----------+
| 26 |
+----------+
1 row in set (0.04 sec)
mysql> select count(*) from Domain;
+----------+
| count(*) |
+----------+
| 2 |
+----------+
1 row in set (0.01 sec)
mysql> select count(*) from Domain_Page;
+----------+
| count(*) |
+----------+
| 134288 |
+----------+
1 row in set (0.17 sec)
mysql> select count(*) from Domain_Page_Rank;
+----------+
| count(*) |
+----------+
| 4671111 |
+----------+
1 row in set (11.69 sec)
mysql> select count(*) from Domain_Process;
+----------+
| count(*) |
+----------+
| 2 |
+----------+
1 row in set (0.02 sec)
mysql> select count(*) from Log;
+----------+
| count(*) |
+----------+
| 41 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from StructuredData_Joppli;
+----------+
| count(*) |
+----------+
| 11433 |
+----------+
1 row in set (0.16 sec)
mysql> select count(*) from StructuredData_Joppli_Product;
+----------+
| count(*) |
+----------+
| 130784 |
+----------+
1 row in set (0.20 sec)
```
---
## Update
---
### Disk usage
```
root@scraper:/tmp# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 20G 4.7G 15G 26% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 237M 4.0K 237M 1% /dev
tmpfs 49M 188K 49M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 245M 0 245M 0% /run/shm
none 100M 0 100M 0% /run/user
``` | It appears that your query is returning a large intermediate result set requiring the
creation of a temporary table and that the configured location for mysql temporary
disk tables (/tmp) is not large enough for the resulting temporary table.
You could try increasing the tmpfs partition size by remounting it:
```
mount -t tmpfs -o remount,size=1G tmpfs /tmp
```
You can make this change permanent by editing /etc/fstab.
If you are unable to do this, you could try changing the location of on-disk
temporary tables by editing the "tmpdir" entry in your my.cnf file (or adding
it if it is not already there). Remember that the directory you
choose must be writable by the mysql user.
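As an illustration, such a `tmpdir` entry in `my.cnf` might look like the fragment below (the path is only an example; pick any directory with enough free space that is writable by the mysql user):

```
[mysqld]
# illustrative path; must exist and be writable by the mysql user
tmpdir = /var/lib/mysql-tmp
```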
You could also try preventing the creation of an on disk temporary table by increasing
the values for the mysql configuration options:
```
tmp_table_size
max_heap_table_size
```
to larger values. You will need to increase both of the above parameters.
Example:
```
set global tmp_table_size = 1G;
set global max_heap_table_size = 1G;
``` | If your `/tmp` mount on a Linux filesystem is mounted as overflow, it is often sized at 1MB, e.g.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 12K 7.9G 1% /dev
tmpfs 1.6G 348K 1.6G 1% /run
/dev/xvda1 493G 6.9G 466G 2% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.9G 0 7.9G 0% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 4.0K 1020K 1% /tmp <------
```
this is likely because you did not specify `/tmp` as its own partition, your root filesystem filled up, and `/tmp` was remounted as a fallback.
I ran into this issue after running out of space on an EC2 volume. Once I resized the volume, I ran into the `/tmp` overflow partition filling up while executing a complicated view.
---
To fix this after you've cleared space/resized, just unmount the fallback and it should remount at its original point (generally your root partition):
```
sudo umount -l /tmp
```
Note: `-l` will lazily unmount the disk. | MySQL, Error 126: Incorrect key file for table | [
"",
"mysql",
"sql",
"mysql-error-126",
""
] |
I want to count after `GROUP BY`, not count the total number of rows.
I just want to count by the categories.
After `GROUP BY`, my result is like:
```
course lecturer
comp1111 Jim
comp1100 Jim
comp1100 Jim
infs2321 Jess
infs2321 Jess
econ1222 Helen
```
My result after counting should be:
```
lecturer count
Jim 3
Jess 2
Helen 1
``` | I don't see why you want a group by after you have grouped. You get your desired result by doing just one group. Please have a look at this [sqlfiddle](http://sqlfiddle.com/#!2/420d6/2/0) to see it working live.
```
CREATE TABLE Table1
(`course` varchar(8), `lecturer` varchar(5))
;
INSERT INTO Table1
(`course`, `lecturer`)
VALUES
('comp1111', 'Jim'),
('comp1100', 'Jim'),
('comp1100', 'Jim'),
('infs2321', 'Jess'),
('infs2321', 'Jess'),
('econ1222', 'Helen')
;
select
lecturer, count(*)
from
Table1
group by lecturer desc;
| LECTURER | COUNT(*) |
-----------|----------|--
| Jim | 3 |
| Jess | 2 |
| Helen | 1 |
```
**EDIT:**
You don't need an extra table. To get the row with the largest count you can simply do
```
select
lecturer, count(*)
from
Table1
group by lecturer
order by count(*) desc
limit 1;
```
for MySQL or
```
select top 1
lecturer, count(*)
from
Table1
group by lecturer
order by count(*) desc;
```
for MS SQL Server. In my first answer I had `GROUP BY lecturer DESC` which is the same as `GROUP BY lecturer ORDER BY COUNT(*) DESC` because in MySQL `GROUP BY` implies an `ORDER BY`.
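If you want to sanity-check the top-1 query quickly, here is a sketch using Python's built-in sqlite3 with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (course TEXT, lecturer TEXT);
INSERT INTO Table1 VALUES
  ('comp1111','Jim'), ('comp1100','Jim'), ('comp1100','Jim'),
  ('infs2321','Jess'), ('infs2321','Jess'), ('econ1222','Helen');
""")
# group, order by the count, and keep only the top row
top = conn.execute(
    "SELECT lecturer, COUNT(*) FROM Table1 "
    "GROUP BY lecturer ORDER BY COUNT(*) DESC LIMIT 1"
).fetchone()
print(top)  # ('Jim', 3)
```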
If this is not what you want, be careful with using `MAX()` function. When you simply do for example
```
select
lecturer, max(whatever)
from
Table1
group by lecturer;
```
you don't necessarily get the row holding the max of whatever.
You can also do
```
select
lecturer, max(whatever), min(whatever)
from
Table1
group by lecturer;
```
See? You just get the value returned by the function, not the row belonging to it. For examples how to solve this, please refer to this [manual entry](http://dev.mysql.com/doc/refman/5.5/en//example-maximum-column-group-row.html).
I hope I didn't confuse you; this is probably more than you wanted to know, because the above applies specifically to groups. I think what you really want to do is simply order the table the way you want and then pick just one row, as mentioned above. | Try this. It might work:
```
SELECT LECTURER, COUNT(*)
FROM
  (SELECT LECTURER, COURSE
   FROM TABLE
   GROUP BY LECTURER, COURSE) T
GROUP BY LECTURER;
``` | IN SQL count after group by | [
"",
"sql",
"database",
"count",
"group-by",
""
] |
I'm new to if-then SQL statements. I only know the basics (select, update, insert with joins etc.), so it will be helpful if you could help me with the syntax in this scenario.
I have a table that holds customer's activities, let's say I'm a dentist, and I store a specific activity, and when using my software to check out what the customer should pay, I use this query:
```
SELECT ACTIVITY.ID,
ACTIVITY.DATES,
ACTIVITY.INSURANCE_ID,
ACTIVITY.AMOUNT,
ACTIVITY.INSURANCE_AMOUNT,
ACTIVITY.AMOUNT * ((100 - ACTIVITY.INSURANCE_AMOUNT) / 100) AS AMOUNT_TO_PAY,
ACTIVITY.TIME,
ACTIVITY.MONEY_RECEIVED,
ACTIVITY.ROWID
FROM ACTIVITY
LEFT JOIN INSURANCE ON INSURANCE.ID = ACTIVITY.INSURANCE_ID
WHERE ACTIVITY.ID = :patient_id AND ACTIVITY.MONEY_RECEIVED IS NULL
```
This query selects the data I need plus the amount that the customer should pay, calculating the percentage discount from the insurance amount and returning the total amount to pay as money (AMOUNT\_TO\_PAY).
This works fine, but the problem is that this calculation happens even if the insurance is expired. I want to make an SQL statement, with if-then or whatever other method works in Oracle JDeveloper, that first checks whether my CUSTOMER\_INSURANCE.TO\_DATE is still active; if it is, do the calculation; if the date is expired, do not include the insurance percentage.
Here is the query I'm trying to execute but I'm doing something wrong:
```
IF ((SELECT count(*) FROM CUSTOMER_INSURANCE.TO_DATE TO_DATE
WHERE TO_DATE >= sysdate AND customer_id = :patient_id) > 0)
SELECT ACTIVITY.ID,
ACTIVITY.DATES,
ACTIVITY.INSURANCE_ID,
ACTIVITY.AMOUNT,
ACTIVITY.INSURANCE_AMOUNT,
ACTIVITY.AMOUNT * ((100 - ACTIVITY.INSURANCE_AMOUNT) / 100) AS AMOUNT_TO_PAY,
ACTIVITY.TIME,
ACTIVITY.MONEY_RECEIVED,
ACTIVITY.ROWID
FROM ACTIVITY
LEFT JOIN INSURANCE ON INSURANCE.ID = ACTIVITY.INSURANCE_ID
WHERE ACTIVITY.ID = :patient_id AND ACTIVITY.MONEY_RECEIVED IS NULL
ELSE IF ((SELECT count(*) FROM CUSTOMER_INSURANCE.TO_DATE TO_DATE
WHERE TO_DATE >= sysdate AND customer_id = :patient_id) = 0)
SELECT ACTIVITY.ID,
ACTIVITY.DATES,
ACTIVITY.INSURANCE_ID,
ACTIVITY.AMOUNT,
ACTIVITY.INSURANCE_AMOUNT,
ACTIVITY.AMOUNT AS AMOUNT_TO_PAY,
ACTIVITY.TIME,
ACTIVITY.MONEY_RECEIVED,
ACTIVITY.ROWID
FROM ACTIVITY
LEFT JOIN INSURANCE ON INSURANCE.ID = ACTIVITY.INSURANCE_ID
WHERE ACTIVITY.ID = :patient_id AND ACTIVITY.MONEY_RECEIVED IS NULL
```
Can anyone help me finish this query? | I think this should work. You can change the CASE based on your requirement.
```
select a.ID,
       a.DATES,
       a.INSURANCE_ID,
       a.AMOUNT,
       a.INSURANCE_AMOUNT,
       CASE WHEN a.TO_DATE >= sysdate
            THEN a.AMOUNT * ((100 - a.INSURANCE_AMOUNT) / 100)
            ELSE a.AMOUNT
       END AS AMOUNT_TO_PAY,
       a.TIME,
       a.MONEY_RECEIVED,
       a.ROW_ID
FROM
(
    SELECT ACTIVITY.ID,
           ACTIVITY.DATES,
           ACTIVITY.INSURANCE_ID,
           ACTIVITY.AMOUNT,
           ACTIVITY.INSURANCE_AMOUNT,
           ACTIVITY.TIME,
           ACTIVITY.MONEY_RECEIVED,
           ACTIVITY.ROWID AS ROW_ID,
           CUSTOMER_INSURANCE.TO_DATE
    FROM ACTIVITY
    LEFT JOIN INSURANCE ON INSURANCE.ID = ACTIVITY.INSURANCE_ID
    -- assumption: CUSTOMER_INSURANCE links to the patient via customer_id
    LEFT JOIN CUSTOMER_INSURANCE ON CUSTOMER_INSURANCE.CUSTOMER_ID = ACTIVITY.ID
    WHERE ACTIVITY.ID = :patient_id AND ACTIVITY.MONEY_RECEIVED IS NULL
) a;
```
Thanks,
Aditya | You can add a CASE expression to return the `ACTIVITY.INSURANCE_AMOUNT` value conditionally:
```
SELECT ACTIVITY.ID,
ACTIVITY.DATES,
ACTIVITY.INSURANCE_ID,
ACTIVITY.AMOUNT,
ACTIVITY.INSURANCE_AMOUNT,
ACTIVITY.AMOUNT * (1 - CASE
WHEN EXISTS (
SELECT *
FROM CUSTOMER_INSURANCE.TO_DATE
WHERE TO_DATE >= sysdate
AND customer_id = :patient_id
)
THEN ACTIVITY.INSURANCE_AMOUNT
ELSE 0
END / 100) AS AMOUNT_TO_PAY,
ACTIVITY.TIME,
ACTIVITY.MONEY_RECEIVED,
ACTIVITY.ROWID
FROM ACTIVITY
LEFT JOIN INSURANCE ON INSURANCE.ID = ACTIVITY.INSURANCE_ID
WHERE ACTIVITY.ID = :patient_id AND ACTIVITY.MONEY_RECEIVED IS NULL
;
```
When there are matching rows in `CUSTOMER_INSURANCE.TO_DATE`, the `ACTIVITY.INSURANCE_AMOUNT` value is returned to calculate the remaining amount, otherwise 0 is returned and so the entire expression evaluates to just `ACTIVITY.AMOUNT`.
Notes:
1. The "percentage to pay" calculation was changed from `(100 - x) / 100` to the equivalent (and slightly shorter) `1 - x/100` form.
2. The `(SELECT COUNT(*) FROM ...) > 0` predicate was replaced with the possibly more efficient `EXISTS (SELECT * FROM ...)` one. | What am I doing wrong in this SQL Query? | [
"",
"sql",
"oracle",
""
] |
In my database there are lots of sequences, triggers, and tables. Each time, I'm confused about which table is associated with which trigger (and sequence). How can I see this list in a single query? | In Oracle, you cannot directly find which sequence is used for which table; a sequence is not associated with a table at the schema level. For that you need to find the code and search for where the sequence is used; it may be used in a BEFORE INSERT trigger or in PL/SQL code.
For triggers, you can query the data dictionary views:
```
select table_name,
trigger_name as object_name,
'TRIGGER' object_type
from ALL_TRIGGERS
```
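For sequences, the `ALL_DEPENDENCIES` dictionary view can at least list the PL/SQL objects (triggers, procedures) that reference a given sequence, and from those objects you can reach the table. A sketch (the sequence name is only an example):

```
select owner, name, type
from   all_dependencies
where  referenced_type = 'SEQUENCE'
and    referenced_name = 'SEQ_ID'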
**EDIT**
My way of finding a sequence is:
1. Suppose I want to check where 'Seq\_ID' is being used.
2. Run `select * from dba_source where lower(text) like '%seq_id.nextval%';`
3. This will show the code where the sequence is referenced. You can probably find an insert statement in that code, and from it determine which table the sequence is associated with.
4. Or it will give you the trigger code, and from the trigger you can find which table is referenced. | In SQL Server, you can join sys.triggers and sys.tables, like this:
```
select ta.name AS 'TableName', tg.name 'TriggerName' from sys.triggers tg
INNER JOIN sys.tables ta ON tg.parent_id = ta.object_id
``` | how to see list of tables and list of trigger(and sequences)? | [
"",
"sql",
"oracle",
"triggers",
"sequence",
""
] |
I have start and end weeks for a given customer and I need to make panel data for the weeks they are subscribed. I have manipulated the data into an easy form to convert, but when I transpose I do not get the weeks in between start and end filled in. Hopefully an example will shed some light on my request. Weeks start at 0 and end at 61, so I forced any week above 61 to be 61, again for simplicity. Populate with a 1 if they are still subscribed and a blank if not.
```
ID Start_week End_week
1 6 61
2 0 46
3 45 61
```
what I would like
```
ID week0 week1 ... week6 ... week45 week46 week47 ... week61
1 . . ... 1 ... 1 1 1 ... 1
2 1 1 ... 1 ... 1 1 0 ... 0
3 0 0 ... 0 ... 1 1 1 ... 1
``` | I see two ways to do it.
I would go for an array approach, since it will probably be the fastest (single data step) and is not that complex:
```
data RESULT (drop=start_week end_week);
set YOUR_DATA;
array week_array{62} week0-week61;
do week=0 to 61;
if start_week <= week <= end_week then week_array[week+1]=1;
else week_array[week+1]=0;
end;
run;
```
Alternatively, you can prepare a table for the transpose to work on by creating one record per week per id:
```
data BEFORE_TRANSPOSE (drop=start_week end_week);
set YOUR_DATA;
do week=0 to 61;
if start_week <= week <= end_week then subscribed=1;
else subscribed=0;
output;
end;
run;
``` | Use an array to create the variables. The one gotcha is SAS arrays are 1 indexed.
```
data input;
input ID Start_week End_week;
datalines;
1 6 61
2 0 46
3 45 61
;
data output;
array week[62] week0-week61;
set input;
do i=1 to 62;
if i > start_week and i <= (end_week + 1) then
week[i] = 1;
else
week[i] = 0;
end;
drop i;
run;
``` | tracking customer retension on weekly basis | [
"",
"sql",
"sas",
"proc-sql",
""
] |
I would like to write a sql query that returns all values greater than or equal to x and also the first value that is not greater than x.
For example if we have a table containing values 1, 2, 3, 4, 5 and x is 3, I need to return 2, 3, 4, 5.
The fact that my example includes evenly spaced integers does not help because my actual data is not as cooperative.
Is this even possible, or am I better off just getting the entire table and manually figuring out which rows I need? | A UNION is your best bet: paste together the set of all values greater than or equal to x and the largest value less than x. Something like the following should work:
```
(SELECT n FROM t WHERE n >= $x)
UNION
(SELECT n FROM t WHERE n < $x ORDER BY n DESC LIMIT 1)
ORDER BY n DESC;
``` | ```
SELECT <columns> -- you want in the result
FROM tableX
WHERE columnX >=
( SELECT MAX(columnX)
FROM tableX
WHERE columnX < @x -- @x is the parameter, 3 in your example
) ;
``` | SQL Lower Bound? | [
"",
"mysql",
"sql",
""
] |
I want to be able to do a SQL Query that gets all numbers that end in .005
```
Select * from amount where (numbers end in X.XX5)
```
I don't care what the numbers are for X. | I am assuming you don't care what the first and second digits after the decimal are. Try this:
```
select * from MyTable where MyField like '%.__5'
```
The % matches zero or more characters, and each underscore matches exactly one character.
If there are digits after the 5, use
```
select * from MyTable where MyField like '%.__5%'
``` | ```
SELECT * FROM amount WHERE numbers LIKE '%.005';
``` | SQL Query for .005 | [
"",
"sql",
""
] |
I have a big problem that I've been trying to tackle. Reading older similar issues hasn't been fruitful.
My table has columns [id], [parentid], [data1], [data2] etc.
There are some cases where only one record has a given [parentid] and others have 2-10.
I would like to group by [parentid] and print out all data of the record which has maximum [id]. So I would only get all the "latest" details for the records which share the same [parentid] number.
I hope this goal is comprehensible :).
I've also tried to tackle this in Crystal Reports XI and Qlikview 11 without major success.
Thanks in advance for any help! | Can values in your ID column be reused? If no, then juergen's answer will work.
If they can be reused, you will need to use the same table twice in your query, once to get the max id for each parent id, and once to get the row for each max id/parent id.
Like so:
```
select
t1.*
from aTable t1
join
(select parentid, max(id) as id
 from aTable group by parentid) t2
on t1.id = t2.id
and t1.parentid = t2.parentid
```
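A quick sanity check of this greatest-id-per-parentid pattern, using Python's built-in sqlite3 (a sketch; the table and column names follow the question, the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE aTable (id INTEGER, parentid INTEGER, data1 TEXT);
INSERT INTO aTable VALUES
  (1, 10, 'old'), (2, 10, 'new'),
  (3, 20, 'only');
""")
# join each row against the max id of its parentid group
rows = conn.execute("""
SELECT t1.*
FROM aTable t1
JOIN (SELECT parentid, MAX(id) AS id
      FROM aTable GROUP BY parentid) t2
  ON t1.id = t2.id AND t1.parentid = t2.parentid
ORDER BY t1.parentid
""").fetchall()
print(rows)  # [(2, 10, 'new'), (3, 20, 'only')]
```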
[SQLFiddle!](http://sqlfiddle.com/#!3/44e16/4) | ```
select * from your_table
where id in
(
select max(id)
from your_table
group by parentid
)
``` | Selecting * from max id when grouping by parentid (MySQL) | [
"",
"mysql",
"sql",
"crystal-reports",
"qlikview",
""
] |
Hello, I need help here...
My query shows the following result:
```
Id name color Version
1 leather black 1
1 leather brown 2
2 suede brown 1
3 cloth green 1
3 cloth blue 2
```
I want to display the following:
```
Id name color Color_2
1 leather black brown
2 suede brown
3 cloth green blue
```
My current query is simple:
```
SELECT ID, NAME, COLOR,VERSION
FROM table1,table2
WHERE table1.ID = table2.ID
AND id in
(SELECT ID
FROM table1,table2
WHERE table1.ID = table2.ID
AND VERSION in ('1'))
AND VERSION in ('1','2')
``` | A simple one (if you know at design time the maximum number of colors you could possibly have)
```
drop table my_test;
create table my_test (
id number,
name varchar2(32),
color varchar2(32),
version number);
insert into my_test values(1,'leather','black',1);
insert into my_test values(1,'leather','brown',2);
insert into my_test values(2,'suede','brown',1);
insert into my_test values(3,'cloth','green',1);
insert into my_test values(3,'cloth','blue ',2);
set linesize 200
select min(id) id,
name,
max(decode(version,1,color,null)) color,
max(decode(version,2,color,null)) color_2
from my_test
group by name
order by 1;
ID NAME COLOR COLOR_2
---------- ---------- ---------- ----------
1 leather black brown
2 suede brown
3 cloth green blue
3 rows selected.
```
This will work with any Oracle database version. Depending on the version you use, also look at LISTAGG, WM\_CONCAT, and the like ([here](http://www.oracle-base.com/articles/misc/string-aggregation-techniques.php)). | You're a little all over the place with your database type... is it MySQL or Oracle? I'm guessing Oracle from your SELECT statement. I'm using 'join'-based syntax; I find it easier to read than what you have here. Take a select statement with version 1 and left join it to a select statement with version 2. If there is a version 3, put another join in there.
```
select a.id, a.name, a.colour, b.colour
from (select * from table1 where version = 1) a
left join (select * from table1 where version = 2) b
on a.id = b.id
```
(this assumes version 1 always exists and there can't be a version 2 without a version 1) | display result - data from 1 column divided into 2 columns depending on criteria | [
"",
"mysql",
"sql",
"oracle",
"toad",
""
] |
I have 3 tables in an SQL Server 2008 database. The first table contains users names, the second contains privileges and the last links the first two tables:
**USERS** (`ID` integer, `NAME` varchar(20));
**PRIVS** (`ID` integer, `NAME` varchar(50));
**USERS\_PRIVS** (`USERID` integer, `PRIVID` integer);
For example, the USERS table has the following:
```
1, Adam
2, Benjamin
3, Chris
```
The PRIVS table has:
```
1, Add Invoice
2, Edit Invoice
3, Delete Invoice
```
The USERS\_PRIVS table has:
```
1, 1
1, 2
1, 3
2, 1
2, 2
3, 1
```
I am looking for a way to create an SQL query that would return something like the following:
```
Add Invoice Edit Invoice Delete Invoice
Adam Y Y Y
Benjamin Y Y N
Chris Y N N
```
Is this possible using the pivot function? | ```
SELECT Name, [Add Invoice],[Edit Invoice],[Delete Invoice] FROM
(SELECT U.Name AS Name, P.Name AS PrivName FROM USERS_PRIVS UP
INNER JOIN Users U ON UP.UserID = U.id
INNER JOIN Privs P ON UP.PrivID = P.id) SB
PIVOT(
count(SB.PrivName)
FOR SB.PrivName in ([Add Invoice],[Edit Invoice],[Delete Invoice])) AS Results
```
**EDIT 1:**
You will have to use some dynamic sql then.
```
DECLARE @query AS NVARCHAR(MAX);
DECLARE @privColumnNames AS NVARCHAR(MAX);
select @privColumnNames = STUFF((SELECT distinct ',' + QUOTENAME(Name)
FROM PRIVS FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)'), 1, 1, '');
SELECT @query =
'SELECT *
FROM
(SELECT U.Name AS Name, P.Name AS PrivName FROM USERS_PRIVS UP
INNER JOIN Users U ON UP.UserID = U.id
INNER JOIN Privs P ON UP.PrivID = P.id) SB
PIVOT(
count(SB.PrivName)
FOR SB.PrivName in ( ' + @privColumnNames + ' )) AS Results ';
execute(@query);
```
I borrowed the XML stuff from here [Dynamic Pivot Columns in SQL Server](https://stackoverflow.com/questions/14797691/dynamic-pivot-columns-in-sql-server) | Here's a Demo on [SqlFiddle](http://sqlfiddle.com/#!3/650f1/22/0).
```
with cl
as (
select u.NAME , case when p.ID = 1 then 'Y' else 'N' end 'Add Invoice',
case when p.ID = 2 then 'Y' else 'N' end 'Edit Invoice',
case when p.ID = 3 then 'Y' else 'N' end 'Delete Invoice'
from USERS u inner join USERS_PRIVS up on u.ID = up.USERID
inner join PRIVS p on up.PRIVID = p.ID
)
select NAME, MAX([Add Invoice]) 'Add Invoice',
MAX([Edit Invoice]) 'Edit Invoice',
MAX([Delete Invoice]) 'Delete Invoice'
from cl
group by NAME
```
Here's a Demo on [SqlFiddle](http://sqlfiddle.com/#!3/650f1/25/0).
```
select NAME , case when [Add Invoice] = 1 then 'Y' else 'N' end 'Add Invoice',
case when [Edit Invoice] = 1 then 'Y' else 'N' end 'Edit Invoice',
case when [Delete Invoice] = 1 then 'Y' else 'N' end 'Delete Invoice'
from (
select u.NAME, p.NAME as pname
from USERS u inner join USERS_PRIVS up on u.ID = up.USERID
inner join PRIVS p on up.PRIVID = p.ID
) p
pivot( count(pname) for pname in ([Add Invoice], [Edit Invoice], [Delete Invoice])) as pvt
``` | Return Table from SQL Query | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I'm new to `ASP.NET`, and this is my first ASP project. I hope you don't mind if my problem seems simple to some people.
I'm programming a `Student's Forum`.
In `Comment.aspx`, I use a Repeater control to get all comments (for each topic) and display them ordered by time descending.
```
<asp: Repeater....
....//
</asp: Repeater...>
```
But it seems this is not as simple as I imagined. I have to not only show all comments in the topic but also update the list when a comment is added to the database.
I can load all comments from the SQL DB into the repeater successfully, but I don't know how to auto-update the list when the database receives a new row, without reloading the whole page.
I've been advised to use `AJAX`. I've tried to find ebooks and documents that talk about AJAX, but it's hard for me to get started. I really don't know how to apply AJAX to my webpage.
Can you tell me about some helpful documents, some examples, or some advice? | You can put the Repeater inside an UpdatePanel. Rather than refreshing blindly every few seconds, make a very lightweight AJAX method, using either a PageMethod or jQuery (if allowed), to check whether the comment count has changed; only when the count has changed do you trigger the UpdatePanel refresh. <http://encosia.com/easily-refresh-an-updatepanel-using-javascript/> When the page is first loaded, get the count of records from the database and put it in a hidden field. Afterwards, make an AJAX call to get the count of rows in the database, compare it with the value saved in the hidden field, and if it has changed, fire the UpdatePanel update. You can put a breakpoint in both methods to see whether your UpdatePanel refresh is happening. In the code-behind, handle the UpdatePanel refresh and rebind the Repeater. | You can follow these links. These articles will also teach you about AJAX:
[Refresh Grid After some interval](http://csharpdotnetfreak.blogspot.com/2012/08/auto-refresh-update-gridview-in-aspnet-ajax-timer.html)
and
[Gmail style update](http://www.codeproject.com/Articles/103630/How-to-Make-GridView-Auto-Partial-Update-like-Gmai)
Hope this will help you. For further assistance, please update. | Auto update a table with AJAX | [
"",
"asp.net",
"sql",
"ajax",
"webforms",
""
] |
I know this may be an easy question, but I've been stumped on this for the past hour and am not sure what terms to look up that accurately describe what I am trying to do.
I have a MySQL database with two tables. Countries and Regions. The Regions table has two columns, id and name. An example of a row would be 1, north-america.
In the Countries table, there's a column named RegionID that would have a 1 if the country's region is north-america.
How can I grab the "north-america" in my query instead of printing out "1"?
Here's my SELECT that I am stumped on:
```
SELECT A.name, A.regionID FROM countries A, regions B ORDER BY A.name ASC
``` | ```
SELECT A.name, A.regionID, b.Name
FROM countries A
Join regions B
on B.id = A.RegionId
ORDER BY A.name ASC
``` | Try this
```
SELECT A.name, A.regionID
FROM countries A, regions B
WHERE a.RegionID = b.id
AND b.id = 1
ORDER BY A.name ASC
``` | How to get name from another table when using a key in SELECT? | [
"",
"mysql",
"sql",
"select",
"key",
""
] |
I have a table (that contains data) in Oracle 11g and I need to use Oracle SQLPlus to do the following:
Target: change the type of column `TEST1` in table `UDA1` from `number` to `varchar2`.
Proposed method:
1. backup table
2. set column to null
3. change data type
4. restore values
The following didn't work.
```
create table temp_uda1 AS (select * from UDA1);
update UDA1 set TEST1 = null;
commit;
alter table UDA1 modify TEST1 varchar2(3);
insert into UDA1(TEST1)
select cast(TEST1 as varchar2(3)) from temp_uda1;
commit;
```
Does this have something to do with indexes (to preserve the order)? | ```
create table temp_uda1 (test1 integer);
insert into temp_uda1 values (1);
alter table temp_uda1 add (test1_new varchar2(3));
update temp_uda1
set test1_new = to_char(test1);
alter table temp_uda1 drop column test1 cascade constraints;
alter table temp_uda1 rename column test1_new to test1;
```
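The same add/copy/drop/rename sequence can be sanity-checked outside Oracle, e.g. with Python's built-in sqlite3 (a sketch; SQLite syntax differs slightly from Oracle's, and `DROP COLUMN` needs SQLite 3.35+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# add the new column, copy values across, drop the old column, rename
conn.executescript("""
CREATE TABLE temp_uda1 (test1 INTEGER);
INSERT INTO temp_uda1 VALUES (1);
ALTER TABLE temp_uda1 ADD COLUMN test1_new TEXT;
UPDATE temp_uda1 SET test1_new = CAST(test1 AS TEXT);
ALTER TABLE temp_uda1 DROP COLUMN test1;
ALTER TABLE temp_uda1 RENAME COLUMN test1_new TO test1;
""")
print(conn.execute("SELECT test1 FROM temp_uda1").fetchall())  # [('1',)]
```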
If there was an index on the column you need to re-create it.
Note that the update will fail if you have numbers in the old column that are greater than 999. If you do, you need to adjust the maximum length of the `varchar` column. | Add a new column as varchar2, copy the data to this column, delete the old column, and rename the new column to the actual column name:
```
ALTER TABLE UDA1
ADD (TEST1_temp VARCHAR2(16));
update UDA1 set TEST1_temp = TEST1;
ALTER TABLE UDA1 DROP COLUMN TEST1;
ALTER TABLE UDA1
RENAME COLUMN TEST1_temp TO TEST1;
``` | Oracle SQL to change column type from number to varchar2 while it contains data | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I am building a report for slow movers in a database. I want to know which items have not been purchased in 180 days and to show the most recent purchase date. The dates are stored in datetime format mm/dd/yyyy HH:MM:SS.000.
This is my latest attempt. I am trying to teach myself SQL to help at work, so any help and explanation is appreciated. The database is MS SQL.
```
SELECT
Inventory.LocalSKU,
InventorySuppliers.SupplierSKU,
MAX([Order Details].DetailDate)
FROM
Inventory INNER JOIN
InventorySuppliers ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
INNER JOIN
[Order Details] ON InventorySuppliers.LocalSKU = [Order Details].SKU
CROSS JOIN
POHistory
WHERE
GETDATE() >= CONVERT(date,DATEADD(DAY,+30,[Order Details].DetailDate))
ORDER BY
[Order Details].DetailDate DESC
``` | Can you try:
```
SELECT
Inventory.LocalSKU,
InventorySuppliers.SupplierSKU,
MAX([Order Details].DetailDate)
FROM
Inventory INNER JOIN
InventorySuppliers ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
INNER JOIN
[Order Details] ON InventorySuppliers.LocalSKU = [Order Details].SKU
CROSS JOIN
POHistory
WHERE
GETDATE() >= CONVERT(date,DATEADD(DAY,180,[Order Details].DetailDate))
GROUP BY
Inventory.LocalSKU,
InventorySuppliers.SupplierSKU
ORDER BY
MAX([Order Details].DetailDate) DESC
``` | Something like this (not syntax checked):
```
SELECT
Inventory.LocalSKU,
InventorySuppliers.SupplierSKU,
MAX([Order Details].DetailDate)
FROM
Inventory
INNER JOIN InventorySuppliers ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
INNER JOIN [Order Details] ON InventorySuppliers.LocalSKU = [Order Details].SKU
CROSS JOIN POHistory --what is this here for?
GROUP BY
Inventory.LocalSKU,
InventorySuppliers.SupplierSKU
HAVING
--your question said 180 days, but your code had 30 days?
GETDATE() >= CONVERT(date,DATEADD(DAY,180,MAX([Order Details].DetailDate)))
ORDER BY
[Order Details].DetailDate DESC
``` | Need to return last purchase date for items that have not been purchased for more than 180 days | [
"",
"sql",
"sql-server",
"date",
"datetime",
""
] |
I have a table Question, and there are two kinds of questions: true/false (T/F) questions and multiple-choice questions.
Is it possible to make a dynamic table that contains the general columns of Question plus the specific columns of the T/F questions and the multiple-choice questions?
Can anyone help me create the model to solve this problem? Thanks. | Split your questions and their choices into separate tables. Have a third table that defines the mapping between the questions and their options. Keeping your tables normalized also helps you avoid repeating option groups like True and False.
Here's a rough UML sketch of the table schema:

*You'd need to add Primary/Foreign Key **constraints** as well to your tables. I've omitted them for brevity.*
```
CREATE TABLE QUESTIONS
(`Question_ID` int, `Question_Text` varchar(50), `Answer_ID` int);
INSERT INTO QUESTIONS
(`Question_ID`, `Question_Text`, `Answer_ID`)
VALUES
(1, 'True/False question?', 1),
(2, 'Multiple-choice question?', 5);
CREATE TABLE OPTIONS
(`Option_ID` int, `Option_Text` varchar(25));
INSERT INTO OPTIONS
(`Option_ID`, `Option_Text`)
VALUES
(1, 'TRUE'),
(2, 'FALSE'),
(3, 'Option 1'),
(4, 'Option 2'),
(5, 'Option 3'),
(6, 'Option 4');
CREATE TABLE QUESTION_OPTIONS
(`QnA_ID` int, `Question_ID` int, `Option_ID` int);
INSERT INTO QUESTION_OPTIONS
(`QnA_ID`, `Question_ID`, `Option_ID`)
VALUES
(1, 1, 1),
(2, 1, 2),
(3, 2, 3),
(4, 2, 4),
(5, 2, 5),
(6, 2, 6);
```
Now, just join the tables to fetch all the relevant details for a question.
```
SELECT Option_Text,
CASE
WHEN q.Answer_ID = o.Option_ID THEN 1
ELSE 0
END Is_Answer
FROM QUESTIONS q, OPTIONS o, QUESTION_OPTIONS qo
WHERE qo.Option_ID = o.Option_ID
AND q.Question_ID = qo.Question_ID
AND q.Question_ID = 1
SELECT Option_Text,
CASE
WHEN q.Answer_ID = o.Option_ID THEN 1
ELSE 0
END Is_Answer
FROM QUESTIONS q, OPTIONS o, QUESTION_OPTIONS qo
WHERE qo.Option_ID = o.Option_ID
AND q.Question_ID = qo.Question_ID
AND q.Question_ID = 2
```
***Output*** :
```
+--------------+-----------+
| OPTION_TEXT | IS_ANSWER |
+--------------+-----------+
| TRUE | 1 |
| FALSE | 0 |
+--------------+-----------+
+--------------+-----------+
| OPTION_TEXT | IS_ANSWER |
+--------------+-----------+
| Option 1 | 0 |
| Option 2 | 0 |
| Option 3 | 1 |
| Option 4 | 0 |
+--------------+-----------+
```
You can tweak the tables if you like at **[SQL Fiddle](http://www.sqlfiddle.com/#!2/f49a1/10)**. | This solution handles both T/F questions and multiple-choice questions:
 | How to have a table with two kinds of variable colums | [
"",
"mysql",
"sql",
""
] |
In MS Access 2010, I have a query like the one below that displays duplicate records. The problem is that even though I have unique IDs, one of the fields has different data in each row, since I have combined two separate tables in this query. I just want to display one row per ID and eliminate the other rows. It doesn't matter which row I pick. See below:
```
ID - NAME - FAVCOLOR
1242 - John - Blue
1242 - John - Red
1378 - Mary - Green
```
I want to pick just one of the rows with the same ID. It doesn't matter which row I pick; displaying one row per ID is what matters.
```
ID - NAME - FAVCOLOR
1242 - John - Red
1378 - Mary - Green
``` | Use the SQL from your current query as a subquery and then `GROUP BY` `ID` and `NAME`. You can retrieve the minimum `FAVCOLOR` since you want only one and don't care which.
```
SELECT sub.ID, sub.NAME, Min(sub.FAVCOLOR)
FROM
(
SELECT ID, [NAME], FAVCOLOR
FROM TABLE1
UNION ALL
SELECT ID, [NAME], FAVCOLOR
FROM TABLE2
) AS sub
GROUP BY sub.ID, sub.NAME;
```
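Here is a quick sanity check of the MIN-per-group idea using Python's built-in sqlite3 (a sketch; a single table stands in for the union of TABLE1 and TABLE2, and the rows come from the question — MIN happens to pick 'Blue' here, but any row would do):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE q (id INT, name TEXT, favcolor TEXT);
INSERT INTO q VALUES
  (1242,'John','Blue'), (1242,'John','Red'), (1378,'Mary','Green');
""")
# one row per (id, name); MIN picks an arbitrary-but-deterministic color
rows = conn.execute(
    "SELECT id, name, MIN(favcolor) FROM q GROUP BY id, name ORDER BY id"
).fetchall()
print(rows)  # [(1242, 'John', 'Blue'), (1378, 'Mary', 'Green')]
```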
Note: `NAME` is a [reserved word](http://allenbrowne.com/AppIssueBadWord.html). Bracket that name or prefix it with the table name or alias to avoid confusing the db engine. | Try selecting with UNION, without the ALL keyword, and see if you get the desired result.
Your new query would look like
"SELECT ID, NAME, FAVCOLOR FROM TABLE1 UNION SELECT ID, NAME, FAVCOLOR FROM TABLE2;" | Pick One Row per Unique ID from duplicate records | [
"",
"sql",
"ms-access",
"ms-access-2010",
""
] |
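The union-then-group approach from the accepted answer above can be exercised end-to-end. A minimal sketch using Python's built-in `sqlite3` as a stand-in for Access (the single table `t` and its data are illustrative; in the real question the rows come from a `UNION ALL` of two tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, name TEXT, favcolor TEXT);
INSERT INTO t VALUES (1242, 'John', 'Blue'),
                     (1242, 'John', 'Red'),
                     (1378, 'Mary', 'Green');
""")
# GROUP BY collapses duplicates; MIN picks one deterministic color per id
rows = conn.execute(
    "SELECT id, name, MIN(favcolor) FROM t GROUP BY id, name ORDER BY id"
).fetchall()
print(rows)  # one row per id
```

`MIN` is only there because an aggregate is required for the non-grouped column; any of `MIN`/`MAX` would satisfy "it doesn't matter which row I pick".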
I need to find out the row ID that each value comes from while doing a GROUP BY.
First, create and populate a "play object":
```
-- vertical coalesce with row-source of value :) instead of column-source of value as in horizontal coalesce :)
CREATE TABLE #TBL(ID INT, GR INT, A INT, B INT)
INSERT INTO #TBL
SELECT 1,1,1,NULL
UNION ALL SELECT 2,1,NULL,2
UNION ALL SELECT 3,1,NULL,3
UNION ALL SELECT 4,2,2,NULL
UNION ALL SELECT 5,2,NULL,NULL
UNION ALL SELECT 6,2,6,NULL
```
Selecting minimum values is simple:
```
SELECT GR, MIN(A) A, MIN(B) B,
'Which first available row did A come from?' A_ID,
'Which first available row did B come from?' B_ID
FROM #TBL
GROUP BY GR
```
but answering those questions is not!
I've tried a subquery, but since the values come from different rows it doesn't work:
```
SELECT a.*
from #TBL a
join (
SELECT GR, MIN(A) A, MIN(B) B,
'Which row did A come from?' A_ID,
'Which row did B come from?' B_ID
FROM #TBL
GROUP BY GR) b on a.GR = b.GR and a.A = b.A and a.B = b.B
--DROP TABLE #TBL
```
Please help. | I am not sure if it will outperform joins/outer applies (I think it will), but this will work:
```
WITH CTE AS
( SELECT GR,
A,
B,
ID,
MinA = MIN(A) OVER(PARTITION BY GR),
MinB = MIN(B) OVER(PARTITION BY GR)
FROM #TBL
)
SELECT GR,
A = MIN(A),
B = MIN(B),
A_ID = MIN(CASE WHEN MinA = A THEN ID END),
B_ID = MIN(CASE WHEN MinB = B THEN ID END)
FROM CTE
GROUP BY GR;
```
On this small sample the statistics point to this performing better:
**Using Aggregates**
> (estimated) Query Cost 13%
>
> Table 'Worktable'. Scan count 3, logical reads 21, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
**Using OUTER APPLY**
> (estimated) Query Cost 87%
>
> Table 'Worktable'. Scan count 16, logical reads 42, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
But you may find real examples with indexes will be different. | This should do:
```
SELECT DISTINCT
T1.GR,
T2.A,
T3.B,
T2.ID A_ID,
T3.ID B_ID
FROM #TBL T1
OUTER APPLY (SELECT TOP 1 *
FROM #TBL
WHERE GR = T1.GR
AND A IS NOT NULL
ORDER BY A, ID) T2
OUTER APPLY (SELECT TOP 1 *
FROM #TBL
WHERE GR = T1.GR
AND B IS NOT NULL
ORDER BY B, ID) T3
```
Results:
```
╔════╦═══╦══════╦══════╦══════╗
║ GR ║ A ║ B ║ A_ID ║ B_ID ║
╠════╬═══╬══════╬══════╬══════╣
║ 1 ║ 1 ║ 2 ║ 1 ║ 2 ║
║ 2 ║ 2 ║ NULL ║ 4 ║ NULL ║
╚════╩═══╩══════╩══════╩══════╝
``` | which row min value came from (vertical coalesce) | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
Hi guys.
I have a query:
```
select *
from stat.cause_code_descriptions
INNER JOIN stat.stat_dial
on stat.cause_code_descriptions.CODE = stat.stat_dial.cause
where called like '%3623.1348'
and begdt > sysdate-500
and rownum <=100
order by begdt desc
```
This query returns something like that
```
login code called begdtu(unix timestamp)
oeanwel 4 VLAN:3623.1348 1336383493
oe192034 0 VLAN:3623.1348 1336382883
oe192034 2 VLAN:3623.1348 1336382640
oe192034 45 VLAN:3623.1348 1336380257
oeanwel 4 VLAN:3623.1348 1336379883
oe220850 20 VLAN:3623.1348 1336378666
oe194752 4 VLAN:3623.1348 1336378507
oeanna2510 45 VLAN:3623.1348 1336377516
oeanwel 4 VLAN:3623.1348 1336376273
oe237185 45 VLAN:3623.1348 1336374506
oe237185 4 VLAN:3623.1348 1336372662
oe237185 3 VLAN:3623.1348 1336370819
oe239364 3 VLAN:3623.1348 1336367329
oeanna2510 45 VLAN:3623.1348 1336366115
```
What I'm looking for is to return the last (freshest) record for each `login`.
For non-repeated records my query works well, but for the `oe192034` and `oe237185` logins it shows all records.
I tried `group by` and `distinct` but they don't work. Please help. | You need a window function, ROW\_NUMBER:
```
select *
from
(
select ccd.*, sd.*, row_number() over (partition by login order by begdtu desc) rn
from stat.cause_code_descriptions ccd
INNER JOIN stat.stat_dial sd
on ccd.CODE = sd.cause
where called like '%3623.1348'
) dt
where rn = 1
order by begdt desc
``` | ```
SELECT *
FROM (
select *,ROW_NUMBER() OVER (PARTITION BY login ORDER BY begdt DESC) RN
from stat.cause_code_descriptions
INNER JOIN stat.stat_dial on stat.cause_code_descriptions.CODE = stat.stat_dial.cause
where called like '%3623.1348' and begdt > sysdate-500 and rownum <=100 )
WHERE RN=1;
``` | Find last record in DB for each repeated field(Oracle query) | [
"",
"sql",
"oracle",
"unique",
""
] |
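The ROW\_NUMBER-over-partition pattern both answers use can be sketched with Python's `sqlite3` in place of Oracle (assumes the bundled SQLite is 3.25+, which added window functions; table and data trimmed to the two relevant columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dial (login TEXT, begdtu INTEGER);
INSERT INTO dial VALUES ('oeanwel',  1336383493),
                        ('oe192034', 1336382883),
                        ('oe192034', 1336382640),
                        ('oeanwel',  1336379883);
""")
# rn = 1 marks the freshest row within each login's partition
latest = conn.execute("""
    SELECT login, begdtu FROM (
        SELECT login, begdtu,
               ROW_NUMBER() OVER (PARTITION BY login ORDER BY begdtu DESC) rn
        FROM dial
    ) WHERE rn = 1 ORDER BY login
""").fetchall()
print(latest)  # exactly one (newest) row per login
```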
I have three tables, a table for users, one for orders and the last one that contains the order items.
I've created an SQL query that gets all the orders and joins the user into the results; now I want to join the order items into the query. One order can have multiple order items. Each order item has a price. I need the sum of all the order\_items that correspond to each order.
So I will get a result that looks like this:
id, order\_id, userid, order\_time, id, name, email, price
Right now my query looks like this:
```
SELECT huxwz_user_orders.*, huxwz_users.name, huxwz_users.email, huxwz_users.id, SUM(huxwz_user_orderitems.price)
FROM huxwz_user_orders
LEFT OUTER JOIN huxwz_users ON (
huxwz_user_orders.userid = huxwz_users.id
)
LEFT OUTER JOIN huxwz_user_orderitems ON (
huxwz_user_orders.id = huxwz_user_orderitems.orderid
)
```
The thing is, the SUM is summing all the order items across all orders, which means that I only get one result :/
Any ideas on how I could fix this? | ```
SELECT huxwz_user_orders.*, huxwz_users.name, huxwz_users.email, huxwz_users.id, SUM(huxwz_user_orderitems.price)
FROM huxwz_user_orders
LEFT OUTER JOIN huxwz_users ON (
huxwz_user_orders.userid = huxwz_users.id
)
LEFT OUTER JOIN huxwz_user_orderitems ON (
huxwz_user_orders.id = huxwz_user_orderitems.orderid
)
Group BY huxwz_user_orderitems.orderid
ORDER BY huxwz_user_orders.id
```
I did this, and it works exactly as I wanted it to. Thanks for the grouping info, it works! :3
If so, this might be what you want:
```
SUM(huxwz_user_orderitems.price) OVER (PARTITION BY huxwz_user_orderitems.orderid)
```
If not, I guess you'll have to make a select of all your order items, a select with all totals per order (group by having) and join those 2 result sets.
```
SELECT a.*, b.total
FROM
(SELECT huxwz_user_orders.*, huxwz_users.name, huxwz_users.email, huxwz_users.id
FROM huxwz_user_orders
LEFT OUTER JOIN huxwz_users ON (
huxwz_user_orders.userid = huxwz_users.id
)
LEFT OUTER JOIN huxwz_user_orderitems ON (
huxwz_user_orders.id = huxwz_user_orderitems.orderid
)
) a,
(
SELECT o.id, SUM(oi.price) total
FROM huxwz_user_orders o,
JOIN huxwz_user_orderitems oi ON o.id = oi.orderid
GROUP BY o.id
) b
``` | Sum items in a join statement MySQL | [
"",
"mysql",
"sql",
"join",
""
] |
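The fix both answers converge on — one sum per order rather than one grand total, via grouping on the order id — can be sketched with `sqlite3` (table and column names are trimmed, illustrative versions of the `huxwz_` tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, userid INTEGER);
CREATE TABLE items  (orderid INTEGER, price REAL);
INSERT INTO orders VALUES (1, 10), (2, 10);
INSERT INTO items  VALUES (1, 5.0), (1, 7.5), (2, 3.0);
""")
totals = conn.execute("""
    SELECT o.id, SUM(i.price)
    FROM orders o LEFT JOIN items i ON i.orderid = o.id
    GROUP BY o.id            -- one total per order, not one grand total
    ORDER BY o.id
""").fetchall()
print(totals)
```

Without the `GROUP BY`, the same query collapses to a single row with the sum of every item — which is exactly the symptom described in the question.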
I just want to ask how I can get the records in a database by week within a month, using SQL.
What I really mean is: when I input 1, it will get the records in the 1st week of the month.
I've done a lot of research today, but it seems I can't find a good solution.
Here's the code:
```
DECLARE @MONTH int
DECLARE @YEAR int
DECLARE @WEEK int
SET @MONTH = 9
SET @YEAR = 2013
SET @WEEK = 1
SELECT RH.RepairOrderNumber FROM dbo.RepairOrderHeader RH
WHERE MONTH(RH.DateReleased) = @MONTH AND YEAR(RH.DateReleased) = @YEAR AND WEEK(RH.DateReleased) = @WEEK
```
I just want to fetch the records according to month, year, and week. Is there any precise way or code to do this? | `DATEPART(WEEK, ...)` gives you the week number within a YEAR, so you just need to calculate the week number within the month by taking the difference between the current date's week number and the week number of the first day of the month:
```
WHERE MONTH(RH.DateReleased) = @MONTH
AND YEAR(RH.DateReleased) = @YEAR
AND DATEPART(WEEK,RH.DateReleased)
-DATEPART(WEEK,DATEADD(DAY,-DAY(RH.DateReleased)+1,RH.DateReleased))+1
= @WEEK
``` | I consider '2013-08-26' - '2013-09-01' the first week in September:
```
DECLARE @MONTH int = 9
DECLARE @YEAR int = 2013
DECLARE @WEEK int = 1
DECLARE @from date= dateadd(day, datediff(day, -(@WEEK - 1)*7,
(dateadd(month, (@year-1900) * 12 + @month - 1 , 0)))/7*7 , 0)
SELECT RH.RepairOrderNumber
FROM dbo.RepairOrderHeader RH
WHERE RH.DateReleased >= @from
and RH.DateReleased < dateadd(week, 1, @from)
``` | How to get database values by week | [
"",
"sql",
"sql-server-2008",
""
] |
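The accepted answer's trick — week-of-year of the date minus week-of-year of the month's first day, plus one — can be mirrored in plain Python. This is only a rough equivalent: it uses ISO week numbers, whereas SQL Server's `DATEPART(WEEK, ...)` is Sunday-based and `DATEFIRST`-dependent, and December dates can wrap at the ISO year boundary:

```python
from datetime import date

def week_of_month(d: date) -> int:
    """Week number of d within its month; the week containing the 1st is week 1.
    Illustrative sketch of the DATEPART(WEEK, ...) difference trick using ISO
    weeks -- boundaries differ slightly from SQL Server's."""
    first = d.replace(day=1)
    return d.isocalendar()[1] - first.isocalendar()[1] + 1

print(week_of_month(date(2013, 9, 1)))  # the 1st is always in week 1
print(week_of_month(date(2013, 9, 9)))
```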
How do you return all possibilities, whether there is content or it is NULL?
If I want to return everything that isn't NULL:
```
SELECT * FROM table WHERE column LIKE '%'
```
And if I want to return all NULLs:
```
SELECT * FROM table WHERE column IS NULL
```
How do I combine them both? I need to be able to because I am parameterizing it. The front end of my application will have multiple options ALL (any content or NULL) or a specific value.
How can I achieve this?
EDIT:
Let me clarify better. I have a dropdown List that will show things like this
-Select All-
Team A
Team B
...
So if -Select All- is selected then I need the query to return all NULLs and those with any Team
If Team A is selected I need to show only Team A and no NULLs and so on...
I can't change the query, just a single variable (parameter). | It's fairly straightforward. To only get NULLs:
```
SELECT * FROM table
WHERE column IS NULL
```
To only get NOT NULLS:
```
SELECT * FROM table
WHERE column IS NOT NULL
-- could be this if your example is not representative
-- WHERE column IS NULL OR column LIKE '%whatever%'
```
And for everything (no filter), just do:
```
SELECT * FROM table
```
**Further clarification:**
In your example, if the code is already written and you can only pass in the WHERE clause then you could do this:
```
WHERE <insert here>
column IS NULL -- just nulls
column = 'teamX' OR column IS NULL -- nulls or 'teamX'
column IS NOT NULL -- any value, but no nulls
1=1 -- for the case where you don't really want a WHERE clause. All records
```
It doesn't sound like this is the best way of structuring your code, but if you are already restricted by something that can't be changed, I guess you have to make do. | ```
WHERE column LIKE '%' OR column IS NULL
``` | WHERE statement that selects everything (content and NULL) | [
"",
"sql",
"sql-server",
""
] |
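The core gotcha in this question — `LIKE '%'` matches every non-NULL value but silently drops NULL rows, so the "all" case needs an explicit `OR column IS NULL` — can be verified with `sqlite3` (table and data illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE teams (name TEXT, team TEXT);
INSERT INTO teams VALUES ('a', 'Team A'), ('b', 'Team B'), ('c', NULL);
""")
like_only = conn.execute(
    "SELECT count(*) FROM teams WHERE team LIKE '%'").fetchone()[0]
with_null = conn.execute(
    "SELECT count(*) FROM teams WHERE team LIKE '%' OR team IS NULL").fetchone()[0]
print(like_only, with_null)  # LIKE '%' misses the NULL row
```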
Good day,
I have a table with user activity information in it.
The table is basically as follows:
```
Eventdate, UserId, Activity
2013-09-09 SusanM Support Call
2013-09-09 BrandonP Meeting
2013-09-09 MumbaM Administration
2013-09-16 SusanM Support Call
2013-09-16 BrandonP Meeting
```
So as you can see, everyone has updated their work for the 9th, but user MumbaM has not yet logged his entry for the 16th.
Is it possible to show MumbaM on the report if I select the date for the 16th, even though he has no entry in the database? Is there a way to write an SQL query that, based on the previous userid entries, includes MumbaM for the 16th?
So I would like the report for the 16th to display as follows:
```
2013-09-16 SusanM Support Call
2013-09-16 BrandonP Meeting
2013-09-16 MumbaM no update yet
``` | You can try this:
```
SELECT t1.Eventdate, t1.UserId, t1.Activity
FROM YourTable t1
WHERE t1.Eventdate = '2013-09-16'
UNION
SELECT t1.Eventdate, t1.UserId, 'No update yet' as Activity
FROM YourTable t1
WHERE t1.userid not in (SELECT UserId FROM YourTable WHERE Eventdate = '2013-09-16')
AND t1.Eventdate =
(SELECT TOP 1 t3.Eventdate FROM YourTable as t3 WHERE t3.UserID = t1.UserId)
GROUP BY t1.UserID, t1.Eventdate
ORDER BY Eventdate DESC
``` | I think you can try something like this but instead of 'no update yet', you'll get NULL;
```
SELECT t1.Eventdate, t1.UserId, t2.Activity
From Table t1
Left Outer Join Table t2
on t1.UserId = t2.UserId
Where Eventdate = '2013/09/16'
``` | How to try and display users that do no have entries for latest week in database | [
"",
"sql",
"sql-server-2012",
""
] |
I need some help with MySQL. Let's say I have this query, Q1:
**Q1:**
```
select cn.idConteudo, TIMESTAMPDIFF(SECOND, nl.dataInicio , nl.dataFim)
from navegacaolog nl, conteudoNo cn
where nl.idConteudoNo = cn.idConteudoNo AND
TIMESTAMPDIFF(SECOND, nl.dataInicio , nl.dataFim) > 120
```
the results are the following:

However, if I add another table to the "from", let's say the utilizador table (Q2), the results are very different, as the next figure shows:

**Q2:**
```
select cn.idConteudo, TIMESTAMPDIFF(SECOND, nl.dataInicio , nl.dataFim)
from navegacaolog nl, conteudoNo cn, utilizador
where nl.idConteudoNo = cn.idConteudoNo AND
TIMESTAMPDIFF(SECOND, nl.dataInicio , nl.dataFim) > 120
```
I don't understand why adding another table (without using it in the where clause) matters so much. Can somebody please give me some help?
Kind regards | You have not specified a join condition, so what you're getting is the FULL CROSS JOIN which produces a row for every possible combination of rows in the base tables.
<http://en.wikipedia.org/wiki/Join_(SQL)>
I find that using the ANSI syntax for joins avoids this confusion. Don't just use the commas in the FROM clause... use actual JOIN clauses...
```
select cn.idConteudo, TIMESTAMPDIFF(SECOND, nl.dataInicio , nl.dataFim)
from navegacaolog nl
JOIN conteudoNo cn ON nl.idConteudoNo = cn.idConteudoNo
where TIMESTAMPDIFF(SECOND, nl.dataInicio , nl.dataFim) > 120
The results have not changed; it's just that each result has been repeated a number of times equal to the number of rows in the new table.
And the reason is that you have added a new table without any join condition in the where clause, so you have a cross join. | SQL Query: why does adding another table change the results? | [
"",
"mysql",
"sql",
""
] |
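The row-multiplication effect both answers describe — a comma-listed table with no join condition produces the cross product — is easy to measure with `sqlite3` (tiny illustrative tables in place of the Portuguese-named originals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x INTEGER);
CREATE TABLE b (x INTEGER);
CREATE TABLE c (y INTEGER);
INSERT INTO a VALUES (1), (2);
INSERT INTO b VALUES (1), (2);
INSERT INTO c VALUES (10), (20), (30);
""")
joined = conn.execute(
    "SELECT count(*) FROM a, b WHERE a.x = b.x").fetchone()[0]
# c is listed in FROM but never joined -> every matched row repeats per row of c
crossed = conn.execute(
    "SELECT count(*) FROM a, b, c WHERE a.x = b.x").fetchone()[0]
print(joined, crossed)
```

The explicit `JOIN ... ON` syntax recommended in the accepted answer makes it impossible to forget the condition by accident.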
I'm pretty new to SQL... I have a table with the following columns :
```
Employee,Title,Age,Children
```
Output of a basic SELECT would be :
```
Steve |Foreman|40|Billy
Steve |Foreman|40|Amy
Steve |Foreman|40|Michelle
Daniel|Smith |35|Eric
Daniel|Smith |35|Jake
Erin |Otis |29|Eileen
```
Hopefully, I've shown that each record can contain multiple children. What I'd like to be able to do is only return values if the Employee doesn't have a child whose name starts with 'E'. Right now I'm still returning Employees, but it only lists the records that don't have a child starting with 'E'. I want to completely omit the Employee if any of their children start with an 'E', not just omit the child starting with 'E'.
Is this possible?
Thanks.
EDIT :
So in actuality there are two tables, one for EMPLOYEES and one for CHILDREN. So my current query looks like this :
```
SELECT E.EMPLOYEE_NAME, E.EMPLOYEE_TITLE, E.EMPLOYEE_AGE, C.CHILDREN_NAME
FROM EMPLOYEE E INNER JOIN CHILDREN C ON E.EMPLOYEE_ID = C.EMPLOYEE_ID
WHERE C.CHILDREN_NAME NOT LIKE 'E%'
```
This returns all rows minus any children that have a name starting with E. The desired effect is solution 2 that Trinimon provided: do not return an employee if any of their children have a name that starts with E.
I'm hoping that explains it a bit more and that someone can explain how to produce the desired results. As mentioned, Trinimon's solution returns the proper results, but since there are two tables I'm not sure how to adjust the solution to my schema.
Thanks. | Either go for ...
```
SELECT *
FROM employees
WHERE Children NOT LIKE 'E%';
```
if you want all *records* where the child's name doesn't start with `E` or for ...
```
SELECT *
FROM employees e1
WHERE NOT EXISTS (SELECT 1
FROM employees e2
WHERE e1.Employee = e2.Employee
AND Children LIKE 'E%');
```
if none of the returned *employees* should have a child that starts with `E`.
Check out [version one](http://sqlfiddle.com/#!2/f3af5/3) and [version two](http://sqlfiddle.com/#!2/f3af5/6).
**p.s.** based on your structure it's
```
SELECT E.EMPLOYEE_NAME,
E.EMPLOYEE_TITLE,
E.EMPLOYEE_AGE,
C.CHILDREN_NAME
FROM EMPLOYEE E INNER JOIN CHILDREN C ON E.EMPLOYEE_ID = C.EMPLOYEE_ID
WHERE NOT EXISTS (SELECT 1
FROM CHILDREN C2
WHERE E.EMPLOYEE_ID = C2.EMPLOYEE_ID
AND C2.CHILDREN_NAME LIKE 'E%');
```
Check [this Fiddle](http://sqlfiddle.com/#!2/f77d0/3). | You could try:
```
select * from YourTable T
where NOT EXISTS (select 1 from YourTable where Employee=T.Employee and Children like 'E%')
```
This, of course, will have a problem if your have two employees with the same name. You could expand the `WHERE` clause to cover all the attributes that make an employee the same:
```
select * from YourTable T
where NOT EXISTS (select 1 from YourTable where Employee=T.Employee and Title=T.Title and Age=T.Age and Children like 'E%')
```
However, you should consider making `Children` a separate table. Have a single `Employee` (with a unique `EmployeeID`) in your table, and have `Children` contain each child with a reference to `EmployeeID`. | In SQL, omit entire record if column (of multiple values) matches condition | [
"",
"sql",
"advantage-database-server",
""
] |
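The two-table `NOT EXISTS` query from the accepted answer's postscript can be checked with `sqlite3` using the sample people from the question (column names shortened for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE children (employee_id INTEGER, name TEXT);
INSERT INTO employee VALUES (1, 'Steve'), (2, 'Daniel'), (3, 'Erin');
INSERT INTO children VALUES (1, 'Billy'), (1, 'Amy'),
                            (2, 'Eric'), (2, 'Jake'), (3, 'Eileen');
""")
# NOT EXISTS drops the whole employee when ANY child name starts with 'E'
names = [r[0] for r in conn.execute("""
    SELECT e.name FROM employee e
    WHERE NOT EXISTS (SELECT 1 FROM children c
                      WHERE c.employee_id = e.id AND c.name LIKE 'E%')
""")]
print(names)
```

Daniel (child Eric) and Erin (child Eileen) disappear entirely, rather than just losing their 'E' children — the behavior the question asks for.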
I was wondering: is there a way to retrieve, for example, the 2nd and 5th rows from an SQL table that contains 100 rows?
I saw some solutions with a `WHERE` clause, but they all assume that the column the `WHERE` clause is applied to is linear, starting at 1.
Is there another way to query a SQL Server table for specific rows in case the table doesn't have a column whose values start at 1?
P.S. - I know of a solution with temporary tables, where you copy your select statement's output and add a linear column to the table. I am using T-SQL. | Try this:
```
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY ColumnName ASC) AS rownumber
FROM TableName
) as temptablename
WHERE rownumber IN (2,5)
``` | With SQL Server:
```
; WITH Base AS (
SELECT *, ROW_NUMBER() OVER (ORDER BY id) RN FROM YourTable
)
SELECT *
FROM Base WHERE RN IN (2, 5)
```
Replace `id` with your primary key or your ordering column, and `YourTable` with your table.
It's a CTE (Common Table Expression) so it isn't a temporary table. It's something that will be expanded together with your query. | How to retrieve specific rows from SQL Server table? | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm converting a MySQL system to Oracle, but I'm having problems with this query:
```
SELECT *
FROM recent_activity
WHERE TIME_TO_SEC(TIMEDIFF(NOW(), lastactivity )) < 5
```
I know that the Oracle equivalent of `NOW()` is `systimestamp`, and `lastactivity` is a timestamp.
I'm trying to check activities in the last 5 seconds.
Do you know how I can write this in Oracle? | ```
select * from recent_activity where systimestamp-lastactivity < interval '5' second;
``` | Ok, now I think I got this working:
```
SELECT * FROM recent_activity WHERE EXTRACT (DAY FROM (systimestamp-lastactivity))*24*60*60+EXTRACT (HOUR FROM (systimestamp-lastactivity))*60*60+EXTRACT (MINUTE FROM (systimestamp-lastactivity))*60+ EXTRACT (SECOND FROM (systimestamp-lastactivity)) < 5
``` | An Oracle Select with time | [
"",
"mysql",
"sql",
"oracle",
""
] |
I want to be able to input any given report server URL and display a list of the reports available on that server.
I found this question, and it's useful if I compile the project with a reference to a specific sql server ([How do I get a list of the reports available on a reporting services instance](https://stackoverflow.com/questions/3296682/how-do-i-get-a-list-of-the-reports-available-on-a-reporting-services-instance)). But (unless I'm just completely missing something which is possible) it doesn't show me how to do what I've stated above. | You can go to Web Service URL (note: *not* Report Manager URL). So if your main managing URL is `http://server/Reports` and Web Service URL is `http://server/ReportServer` - open the second one. It will give you raw listing of available items.
Note that this will include reports, datasources, folders etc. | You could query the `ReportServer` database of your reporting server.
```
SELECT *
FROM dbo.Catalog
WHERE Type = 2
```
Should give you a list of all of the reports. | How do I get a list of the reports available on any given reporting server? | [
"",
"sql",
"reporting-services",
""
] |
I have a method that does an update on a database table,
but when I invoke it I get the exception "Incorrect syntax near '('."
Here is the method:
```
internal Boolean update(int customerID,int followingID, string fullName, string idNumber, string address, string tel, string mobile1, string mobile2, string email, string customerComment, DateTime timeStamp)
{
string sqlStatment = "update customers set (followingID, fullName,idNumber,address,tel,mobile1,mobile2,email,customerComment,timeStamp) = (@followingID, @fullName,@idNumber,@address,@tel,@mobile1,@mobile2,@email,@customerComment,@timeStamp) where customerID=@customerID";
SqlConnection con = new SqlConnection();
con.ConnectionString = connection;
SqlCommand cmd = new SqlCommand(sqlStatment, con);
cmd.Parameters.AddWithValue("@customerID", customerID);
cmd.Parameters.AddWithValue("@followingID", followingID);
cmd.Parameters.AddWithValue("@fullName", fullName);
cmd.Parameters.AddWithValue("@idNumber", idNumber);
cmd.Parameters.AddWithValue("@address", address);
cmd.Parameters.AddWithValue("@tel", tel);
cmd.Parameters.AddWithValue("@mobile1", mobile1);
cmd.Parameters.AddWithValue("@mobile2", mobile2);
cmd.Parameters.AddWithValue("@email", email);
cmd.Parameters.AddWithValue("@customerComment", customerComment);
cmd.Parameters.AddWithValue("@timeStamp", timeStamp);
bool success = false;
try
{
con.Open();
cmd.ExecuteNonQuery();
success = true;
}
catch (Exception ex)
{
success = false;
//throw ex;
}
finally
{
con.Close();
}
return success;
}
```
and here is the database table columns
 | Your `UPDATE` syntax is wrong. Try:
```
string sqlStatment = "UPDATE customers SET followingID= @followingID, fullName=@fullName, idNumber=@idNumber,address=@address,tel=@tel,mobile1=@mobile1,mobile2=@mobile2,email=@email,customerComment=@customerComment,timeStamp=@timeStamp WHERE customerID=@customerID"
``` | Your syntax is incorrect. Please refer to the link for the **[Update Query Syntax](http://www.w3schools.com/sql/sql_update.asp)**:
```
update customers
set
followingID= @followingID,
fullName=@fullName,
idNumber=@idNumber,
address=@address,
tel=@tel,
mobile1=@mobile1,
mobile2=@mobile2,
email=@email,
customerComment=@customerComment,
timeStamp=@timeStamp
where customerID=@customerID
``` | What is the error in this SQL update stament | [
"",
"sql",
"sql-server",
"c#-4.0",
"qsqlquery",
""
] |
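The fix in both answers — `SET column = value` pairs rather than a parenthesized tuple assignment — can be sketched with a parameterized update in Python's `sqlite3` (qmark `?` placeholders instead of `@`-named `SqlCommand` parameters, and the table trimmed to two illustrative columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customerID INTEGER PRIMARY KEY, fullName TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'old name')")
# valid UPDATE syntax: column = value pairs in SET, parameters bound separately
conn.execute("UPDATE customers SET fullName = ? WHERE customerID = ?",
             ("new name", 1))
name = conn.execute(
    "SELECT fullName FROM customers WHERE customerID = 1").fetchone()[0]
print(name)
```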
I have a star schema here and I am querying the fact table and would like to join one very small dimension table. I can't really explain the following:
```
EXPLAIN ANALYZE SELECT
COUNT(impression_id), imp.os_id
FROM bi.impressions imp
GROUP BY imp.os_id;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=868719.08..868719.24 rows=16 width=10) (actual time=12559.462..12559.466 rows=26 loops=1)
-> Seq Scan on impressions imp (cost=0.00..690306.72 rows=35682472 width=10) (actual time=0.009..3030.093 rows=35682474 loops=1)
Total runtime: 12559.523 ms
(3 rows)
```
This takes ~12600ms, but of course there is no joined data, so I can't "resolve" the imp.os\_id to something meaningful, so I add a join:
```
EXPLAIN ANALYZE SELECT
COUNT(impression_id), imp.os_id, os.os_desc
FROM bi.impressions imp, bi.os_desc os
WHERE imp.os_id=os.os_id
GROUP BY imp.os_id, os.os_desc;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=1448560.83..1448564.99 rows=416 width=22) (actual time=25565.124..25565.127 rows=26 loops=1)
-> Hash Join (cost=1.58..1180942.29 rows=35682472 width=22) (actual time=0.046..15157.684 rows=35682474 loops=1)
Hash Cond: (imp.os_id = os.os_id)
-> Seq Scan on impressions imp (cost=0.00..690306.72 rows=35682472 width=10) (actual time=0.007..3705.647 rows=35682474 loops=1)
-> Hash (cost=1.26..1.26 rows=26 width=14) (actual time=0.028..0.028 rows=26 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 2kB
-> Seq Scan on os_desc os (cost=0.00..1.26 rows=26 width=14) (actual time=0.003..0.010 rows=26 loops=1)
Total runtime: 25565.199 ms
(8 rows)
```
This effectively doubles the execution time of my query. My question is: what am I missing from the picture? I would think such a small lookup would not cause such a huge difference in query execution time. | Rewritten with the (recommended) explicit ANSI JOIN syntax:
```
SELECT COUNT(impression_id), imp.os_id, os.os_desc
FROM bi.impressions imp
JOIN bi.os_desc os ON os.os_id = imp.os_id
GROUP BY imp.os_id, os.os_desc;
```
First of all, your second query *might* be wrong, if more or less than exactly one match are found in `os_desc` for every row in impressions.
This can be ruled out if you have a *foreign key constraint* on `os_id` in place, that guarantees referential integrity, plus a *`NOT NULL` constraint* on `bi.impressions.os_id`. If so, in a first step, simplify to:
```
SELECT COUNT(*) AS ct, imp.os_id, os.os_desc
FROM bi.impressions imp
JOIN bi.os_desc os USING (os_id)
GROUP BY imp.os_id, os.os_desc;
```
`count(*)` is faster than `count(column)` and equivalent here if the column is `NOT NULL`. And add a column alias for the count.
**Faster, yet:**
```
SELECT os_id, os.os_desc, sub.ct
FROM (
SELECT os_id, COUNT(*) AS ct
FROM bi.impressions
GROUP BY 1
) sub
JOIN bi.os_desc os USING (os_id)
```
Aggregate first, join later. More here:
* [Aggregate a single column in query with many columns](https://stackoverflow.com/questions/16018237/aggregate-a-single-column-in-query-with-many-columns/16023336#16023336)
* [PostgreSQL - order by an array](https://stackoverflow.com/questions/15664373/postgresql-order-by-an-array/15674585#15674585) | ```
HashAggregate (cost=868719.08..868719.24 rows=16 width=10)
HashAggregate (cost=1448560.83..1448564.99 rows=416 width=22)
```
Hmm, width from 10 to 22 is a doubling. Perhaps you should join after grouping instead of before? | Why does the following join increase the query time significantly? | [
"",
"sql",
"postgresql",
"join",
"aggregate-functions",
"postgresql-performance",
""
] |
I'm saving in my table a column current\_date in this format: 'yyyy-MM-dd HH:mm'.
How do I search the database by date? I want, for example, the records from 2013-09-25, discarding the hour.
If I do this "Select current\_date from mydatabase where current\_date = '2013-09-25'"
It does not work. I have to use "Select current\_date from mydatabase where current\_date = '2013-09-25 09:25'". But I want only the records based on the date, not on the date and hour. | Apparently your dates are stored as strings. You can use LIKE to discard the hour:
```
select current_date from mydatabase where current_date like '2013-09-25%'
``` | ```
SELECT * FROM mydatabase WHERE ( current_date > '2011-08-07') AND (current_date < '2011-08-08')
```
Replace the first date with the day you want information about and then update the second date with the following day. | How to search in sqlite by date? | [
"",
"android",
"sql",
"sqlite",
"datetime",
"android-date",
""
] |
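Both answers' dates-as-strings tricks can be verified with `sqlite3` (schema illustrative). The range form works because 'yyyy-MM-dd HH:mm' strings sort lexicographically in date order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log (ts TEXT);
INSERT INTO log VALUES ('2013-09-25 09:25'),
                       ('2013-09-25 18:40'),
                       ('2013-09-26 08:00');
""")
# LIKE prefix match on the date part
day = conn.execute(
    "SELECT count(*) FROM log WHERE ts LIKE '2013-09-25%'").fetchone()[0]
# half-open range comparison: same rows, and it can use an index on ts
rng = conn.execute(
    "SELECT count(*) FROM log WHERE ts >= '2013-09-25' AND ts < '2013-09-26'"
).fetchone()[0]
print(day, rng)
```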
I've been reading around StackOverflow and various forums about this problem but I cannot seem to figure it out. When trying to run the "CREATE TABLE Class" and "CREATE TABLE Enroll" commands below I get "ERROR 1005: Can't create table university.class (errno: 150)". I am using InnoDB as my storage engine. The first two "CREATE" statements work fine.
What changes do I need to make so that the "CREATE TABLE Class and CREATE TABLE Enroll" sections work?
```
CREATE TABLE Student (
stuId VARCHAR(6),
lastName VARCHAR(20) NOT NULL,
firstName VARCHAR(20) NOT NULL,
major VARCHAR(10),
credits FLOAT(3) DEFAULT 0,
CONSTRAINT Student_stuId_pk PRIMARY KEY (stuId),
CONSTRAINT Student_credits_cc CHECK ((credits>=0) AND (credits < 150)));
CREATE TABLE Faculty (
facId VARCHAR(6),
name VARCHAR(20) NOT NULL,
department VARCHAR(20),
rank VARCHAR(10),
CONSTRAINT Faculty_facId_pk PRIMARY KEY (facId));
CREATE TABLE Class (
classNumber VARCHAR(8),
facId VARCHAR(6) NOT NULL,
schedule VARCHAR(8),
room VARCHAR(6),
CONSTRAINT Class_classNumber_pk PRIMARY KEY (classNumber),
CONSTRAINT Class_facId_fk FOREIGN KEY (facId) REFERENCES Faculty
(facId) ON DELETE SET NULL,
CONSTRAINT Class_schedule_room_uk UNIQUE (schedule, room));
CREATE TABLE Enroll (
stuId VARCHAR(6),
classNumber VARCHAR(8),
grade VARCHAR(2),
CONSTRAINT Enroll_classNumber_stuId_pk PRIMARY KEY
(classNumber, stuId),
CONSTRAINT Enroll_classNumber_fk FOREIGN KEY (classNumber)
REFERENCES Class (classNumber) ON DELETE CASCADE,
CONSTRAINT Enroll_stuId_fk FOREIGN KEY (stuId) REFERENCES Student
(stuId)ON DELETE CASCADE);
```
Here is the full command and error:
```
mysql> CREATE TABLE Class (classNumber VARCHAR(8), facId VARCHAR(6) NOT NULL, schedule VARCHAR(8), room VARCHAR(6), CONSTRAINT Class_classNumber_pk PRIMARY KEY (classNumber), CONSTRAINT Class_facId_fk FOREIGN KEY (facId) REFERENCES Faculty (facId) ON DELETE SET NULL, CONSTRAINT Class_schedule_room_uk UNIQUE (schedule, room));
ERROR 1005 (HY000): Can't create table 'university.Class' (errno: 150)
Remove NOT NULL in the definition of facId (a column referenced by a FOREIGN KEY with ON DELETE SET NULL must be nullable). | This occurs due to a foreign key error. To get more details on the foreign key error, run SHOW ENGINE INNODB STATUS\G and look at the "LATEST FOREIGN KEY ERROR" section.
I think it will tell you that the foreign key is invalid because there is no unique index or primary key index on Faculty.facid. | MYSQL Foreign Key errno: 150 cannot create tables | [
"",
"mysql",
"sql",
"foreign-keys",
"foreign-key-relationship",
""
] |
I want to know how I can use the MAX and MIN commands with ORMLite.
For example, let's say we have this table:
> Table Name = Example
> Column 1 = id
> Column 2 = name
In ORMLite, how can I get the max id? I looked [here](http://ormlite.com/javadoc/ormlite-core/doc-files/ormlite_5.html#Advanced/ "here") but I didn't understand it exactly.
Can someone show me an example of MAX/MIN in ORMLite? | ```
QueryBuilder<Account, Integer> qb = accountDao.queryBuilder();
qb.selectRaw("MIN(orderCount)", "MAX(orderCount)");
// the results will contain 2 string values for the min and max
results = accountDao.queryRaw(qb.prepareStatementString());
String[] values = results.getFirstResult();
```
I found this from [documentation](http://ormlite.com/javadoc/ormlite-core/doc-files/ormlite_2.html#Raw-Statements) | This is how I query for max ID in my code:
```
QueryBuilder<Example, String> builder = dao.queryBuilder();
builder.orderBy("id", false); // true or false for ascending so change to true to get min id
Example example = dao.queryForFirst(builder.prepare());
String id = null;
if (example == null)
id = "-1";
else
id = example.getId();
```
A couple of alternative answers can also be found here:
[ORMLite - return item w/ maximum ID (or value)](https://stackoverflow.com/questions/8970221/ormlite-return-item-w-maximum-id-or-value) | SQL MAX-MIN in ORMLITE - ANDROID | [
"",
"android",
"sql",
"max",
"ormlite",
""
] |
Please help me; below is my SQL query. When I execute it, the result is empty.
```
SELECT tblclientinfo.acctnum,
tblclientinfo.fname,
tblclientinfo.fname,
tblreservation.unitnum,
tblbillingsched.billnum,
tblbillingsched.duedate,
tblbillingsched.monthlyamort,
tblbillingsched.totalbalance
FROM tblclientinfo
JOIN tblreservation
ON tblclientinfo.acctnum = tblreservation.reservationnum
JOIN tblbillingsched
ON tblreservation.reservationnum = tblbillingsched.accountnum
WHERE tblbillingsched.accountnum = 'C0000000021'
AND tblbillingsched.duedate BETWEEN '1/1/2014' AND '1/30/2014'
```
Here are my tables:
```
tblClientInfo, pk = acctNum
tblReservation, pk = ReservationNum, fk = accountNum
tblBillingSched, pk = ID, fk = accountNum
``` | I think you're looking for...
```
SELECT tblclientinfo.acctnum,
tblclientinfo.fname,
tblclientinfo.fname,
tblreservation.unitnum,
tblbillingsched.billnum,
tblbillingsched.duedate,
tblbillingsched.monthlyamort,
tblbillingsched.totalbalance
FROM tblclientinfo
JOIN tblreservation
ON tblclientinfo.acctnum = tblreservation.accountnum
JOIN tblbillingsched
ON tblclientinfo.acctnum = tblbillingsched.accountnum
WHERE tblbillingsched.accountnum = 'C0000000021'
AND tblbillingsched.duedate BETWEEN '1/1/2014' AND '1/30/2014'
```
[See a demo](http://sqlfiddle.com/#!3/776ba/1) | (off topic and probably not the answer but I thought worth noting)
The safer form of this:
```
tblbillingsched.duedate BETWEEN '1/1/2014' AND '1/30/2014'
```
is this:
```
tblbillingsched.duedate >= '20140101' AND
tblbillingsched.duedate < '20140131'
```
Stating a date like this `'20140130'` is unmistakable. | SQL Server Joining 3 Tables | [
"",
"sql",
"sql-server",
""
] |
I'm looking to write a postgresql query to do the following :
```
if(field1 > 0, field2 / field1 , 0)
```
I've tried this query, but it's not working
```
if (field1 > 0)
then return field2 / field1 as field3
else return 0 as field3
```
thank you | As stated in the PostgreSQL docs [here](http://www.postgresql.org/docs/9.1/static/functions-conditional.html):
> The SQL CASE expression is a generic conditional expression, similar to if/else statements in other programming languages.
Code snippet specifically answering your question:
```
SELECT field1, field2,
CASE
WHEN field1>0 THEN field2/field1
ELSE 0
END
AS field3
FROM test
``` | ```
case when field1>0 then field2/field1 else 0 end as field3
``` | IF-THEN-ELSE statements in postgresql | [
"",
"sql",
"postgresql",
""
] |
I have several programs written in `R` that I now need to translate into T-SQL to deliver them to the client. I am new to T-SQL and I'm facing some difficulties in translating all my `R` functions.
An example is the numerical derivative function, which for two input columns (values and time) would return another column (of same length) with the computed derivative.
My current understanding is:
1. I can't use SP, because I'll need to use this functions inline with
`select` statement, like:
`SELECT Customer_ID, Date, Amount, derivative(Amount, Date) FROM Customer_Detail`
2. I can't use UDFs, because they can take only scalar values as input parameters. I'll need a vectorised function due to speed, and also because for some functions I have, like the one above, running row by row wouldn't be meaningful (each value needs the next and the previous)
3. UDA take whole column but, as the name says..., they will aggregate the column like `sum` or `avg` would.
**If the above is correct, which other techniques would allow me to create the type of function I need?** An example of a `SQL` built-in function similar to what I'm after is `square()`, which (apparently) takes a column and returns it squared. My goal is creating a library of functions which behave like `square`, `power`, etc., but internally it'll be different, because `square` takes and returns each scalar as it is read through the rows. I would like to know if **it is possible to have a User Defined function with an accumulate method (like the UDA) able to operate on all the data at the end of the import and then return a column of the same length?**
NB: At the moment I'm on SQL-Server 2005 but we'll switch soon to 2012 (or possibly 2014 in few months) so answers based on any 2005+ version of SQL-Server are fine.
EDIT: added the `R` tag for R developers who have, hopefully, already faced such difficulties.
EDIT2: Added `CLR` tag: I went through `CLR` user defined aggregates as defined in the Pro T-SQL 2005 programmer's guide. I already said above that this type of function wouldn't fit my needs, but it was worth looking into. The 4 methods needed by a UDA are: `Init`, `Accumulate`, `Merge` and `Terminate`. My request would need the whole data set to be analysed all together by the same instance of the `UDA`. So options including `merge` methods to group together partial results from multicore processing won't work. | I think you may consider changing your mind a bit. The SQL language is very good when working with sets of data, especially in modern RDBMS implementations (like SQL Server 2012), but you have to think in sets, not in rows or columns. While I still don't know your exact tasks, let's see - SQL Server 2012 has a very nice set of [window functions](http://technet.microsoft.com/en-us/library/ms189461.aspx) + [ranking functions](http://technet.microsoft.com/en-us/library/ms189798.aspx) + [analytic functions](http://technet.microsoft.com/en-us/library/hh213234.aspx) + [common table expressions](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx), so you can write almost any query inline. You can use chains of [common table expressions](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx) to turn your data any way you want, to calculate running totals, to calculate averages or other aggregates over a window, and so on.
Actually, I've always liked SQL, and when I learned a bit of functional programming (ML and Scala), my thought was that my approach to SQL is very similar to the functional paradigm - just slicing and dicing data without saving anything into variables, until you have the result set you need.
Just **quick example**, here's a question from SO - [How to get average of the 'middle' values in a group?](https://stackoverflow.com/questions/19100772/how-to-get-average-of-the-middle-values-in-a-group). The goal was to get the average for each group of the middle 3 values:
```
TEST_ID TEST_VALUE GROUP_ID
1 5 1 -+
2 10 1 +- these values for group_id = 1
3 15 1 -+
4 25 2 -+
5 35 2 +- these values for group_id = 2
6 5 2 -+
7 15 2
8 25 3
9 45 3 -+
10 55 3 +- these values for group_id = 3
11 15 3 -+
12 5 3
13 25 3
14 45 4 +- this value for group_id = 4
```
For me, it's not an easy task to do in R, but in SQL it could be a really simple query like this:
```
with cte as (
select
*,
row_number() over(partition by group_id order by test_value) as rn,
count(*) over(partition by group_id) as cnt
from test
)
select
group_id, avg(test_value)
from cte
where
cnt <= 3 or
(rn >= cnt / 2 - 1 and rn <= cnt / 2 + 1)
group by group_id
```
You can also easily expand this query to get 5 values around the middle.
Take a closer look at [analytical functions](http://technet.microsoft.com/en-us/library/hh213234.aspx) and try to rethink your calculations in terms of window functions; maybe it's not so hard to rewrite your R procedures in plain SQL.
Hope it helps. | I don't think this is possible in pure T-SQL without using cursors. But with cursors, things will usually be very slow. Cursors process the table row by row, and some people call this "slow-by-slow".
But you can create your own aggregate function (see [Technet](http://technet.microsoft.com/en-us/library/ms182741.aspx) for more details). You have to implement the function using the .NET CLR (e.g. C# or [R.NET](http://rdotnet.codeplex.com/)).
For a nice example see [here](https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/).
I think interfacing R with SQL is a very nice solution. Oracle is offering this combo as a [commercial product](http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/index.html), so why not go the same way with SQL Server.
When integrating R in the code using your own aggregate functions, you will only pay a small performance penalty. Custom aggregate functions are quite fast according to the Microsoft documentation: ["Managed code generally performs slightly slower than built-in SQL Server aggregate functions"](http://technet.microsoft.com/en-us/library/ms131075.aspx). And the [R.NET solution](http://rdotnet.codeplex.com/) also seems to be quite fast, as it [loads the native R DLL directly in the running process](http://rdotnet.codeplex.com/documentation). So it should be much faster than using R over ODBC. | Create a function with whole columns as input and output | [
"",
"sql",
"sql-server",
"r",
"clr",
""
] |
So I have a folder full of images, and I would like to insert each filepath into a row in one of the tables in my database. Each of them is named from 001.png up to 999.png.
I'm not sure how to do it. I've tried to use a LOOP but my SQL knowledge is basic at best.
It's a single insert, so I don't mind if it takes a while.
Thanks in advance, really appreciate it. | Can you use Excel? (to generate the query only)
I often use Excel to generate one-off sets of statements like this... I'll probably get shot for it around here ;)
If column A had 0-999 in it the formula in B1 would be something like:
```
="insert into table (column) values ('/images/"&A1&".png')"
```
Then just drag down as normal | Not sure about mySQL but this would work in MSSQL:
```
create table #temp (fname varchar(10))
declare @filename as int
set @filename = 1 -- start at 1 so the first filename is '001.png', as the question asks
WHILE @filename < 1000
BEGIN
insert into #temp values (right('000' + cast(@filename as varchar(3)),3)+'.png')
set @filename = @filename + 1
END
select * from #temp
``` | Insert multiple filepaths with sql from folder | [
"",
"sql",
"filepath",
""
] |
I am trying to retrieve records from an ORACLE table that have yesterday's date in one of its columns. I have a job running every day retrieving ONLY yesterday's records.
NOTE\*\* the column insert\_date is of type `date`
These are the two SQL statements I have so far:
```
SELECT distinct column1
FROM table1
WHERE flag = 'N'
AND insert_date BETWEEN TRUNC(CURRENT_DATE-1) AND TRUNC(CURRENT_DATE)
AND ipaddress IS NOT NULL
AND ipaddress <> '0.0.0.0';
```
and
```
SELECT distinct column1
FROM table1
WHERE flag = 'N'
AND insert_date
BETWEEN To_Timestamp(CONCAT (to_char(CURRENT_DATE-1),' 12:00:00 AM'))
AND To_Timestamp(Concat (to_char(CURRENT_DATE-1),' 11:59:59 PM'))
AND ipaddress IS NOT NULL
AND ipaddress <> '0.0.0.0';
```
It seems as if these two SQL statements produce the same output. However, I am no expert in ORACLE, so I would like to ask the community if there are any 'gotchas' that I am not aware of. | Use BETWEEN, but subtract a second from the end date.
```
insert_date between trunc(CURRENT_DATE-1) and trunc(CURRENT_DATE) - 1/86400
``` | Try this:
```
SELECT distinct column1
FROM table1
WHERE flag = 'N' AND
insert_date = trunc(sysdate-1,'DD')
and ipaddress is not null and ipaddress<>'0.0.0.0';
```
Your first query works fine but you may not need to use `between` when you want to filter the data for a single day. | Search records from previous day | [
"",
"sql",
"database",
"oracle",
"date",
""
] |
I am trying to add a simple index with the following SQL in Postgres, but the command keeps timing out:
```
CREATE INDEX playlist_tracklinks_playlist_enid ON playlist_tracklinks (playlist_enid);
```
The table definition is as follows:
```
=> \d playlist_tracklinks
Table "public.playlist_tracklinks"
Column | Type | Modifiers
----------------+---------------+--------------------
playlist_enid | numeric(20,0) | not null default 0
tracklink_enid | numeric(20,0) | not null default 0
position | integer | not null default 1
```
There are around 2.2 billion rows in the table, and it fails with the following error:
```
ERROR: canceling statement due to user request
```
I tried increasing the query timeout time with the following:
```
SET statement_timeout TO 360000000;
```
However it still hits that threshold. I have tried with and without `CONCURRENTLY`, and am sort of at a loss for what to do. Any suggestions would be greatly appreciated. | It was Heroku killing connections (the server ran out of temporary space). Contacting Heroku support was the solution eventually... | Arithmetic with `numerics` is very slow. This includes the comparisons needed to build and use the indexes. I suggest that you change the `enid` types to `char(20)` or just `varchar` if you do not do any arithmetic (other than comparisons) on them, and perhaps `bigint` if you do. Bigint isn't quite enough for the largest possible 20-digit number; I don't know what sort of information these ids carry around, if they can really be that big.
"",
"sql",
"postgresql",
"postgresql-9.2",
""
] |
Let us say that I have a database table with the following records:
```
CACHE_ID BUSINESS_DATE CREATED_DATE
1183 13-09-06 13-09-19 16:38:59.336000000
1169 13-09-06 13-09-24 17:19:05.762000000
1152 13-09-06 13-09-17 14:18:59.336000000
1173 13-09-05 13-09-19 15:48:59.136000000
1139 13-09-05 13-09-24 12:59:05.263000000
1152 13-09-05 13-09-27 13:28:59.332000000
```
I need to write a query that will return the CACHE\_ID for the record which has the most recent CREATED\_DATE.
I am having trouble crafting such a query. I can do a GROUP BY based on BUSINESS\_DATE and get the MAX(CREATED\_DATE)...of course, I won't have the CACHE\_ID of the record.
Could someone help with this? | Not positive on oracle syntax, but use the `ROW_NUMBER()` function:
```
SELECT BUSINESS_DATE, CACHE_ID
FROM (SELECT t.*,
ROW_NUMBER() OVER(PARTITION BY BUSINESS_DATE ORDER BY CREATED_DATE DESC) RN
FROM YourTable t
)sub
WHERE RN = 1
```
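The same pattern can be tried outside Oracle; the following is a hypothetical, runnable sketch using SQLite (3.25+ has window functions) via Python, with the sample rows from the question (timestamps truncated for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE YourTable (cache_id INTEGER, business_date TEXT, created_date TEXT);
    INSERT INTO YourTable VALUES
        (1183,'13-09-06','13-09-19 16:38:59'),
        (1169,'13-09-06','13-09-24 17:19:05'),
        (1152,'13-09-06','13-09-17 14:18:59'),
        (1173,'13-09-05','13-09-19 15:48:59'),
        (1139,'13-09-05','13-09-24 12:59:05'),
        (1152,'13-09-05','13-09-27 13:28:59');
""")

# Number rows per business_date, newest created_date first; rn = 1 is the latest.
rows = conn.execute("""
    SELECT business_date, cache_id
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY business_date
                                    ORDER BY created_date DESC) AS rn
          FROM YourTable t) sub
    WHERE rn = 1
    ORDER BY business_date
""").fetchall()
print(rows)  # [('13-09-05', 1152), ('13-09-06', 1169)]
```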
The `ROW_NUMBER()` function assigns a number to each row. `PARTITION BY` is optional, but used to start the numbering over for each value in that group, ie: if you `PARTITION BY BUSINESS_DATE` then for each unique BUSINESS\_DATE value the numbering would start over at 1. `ORDER BY` of course is used to define how the counting should go, and is required in the `ROW_NUMBER()` function. | You want to group on business date, and get the CACHE\_ID with the most current created date? Use something like this:
```
select yt.CACHE_ID, yt.BUSINESS_DATE, yt.CREATED_DATE
from YourTable yt
where yt.CREATED_DATE = (select max(yt1.CREATED_DATE)
from YourTable yt1
where yt1.BUSINESS_DATE = yt.BUSINESS_DATE)
``` | Select Record with Maximum Creation Date | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I keep getting the following error when I click on the Security tab in ASP.NET WebSite Administration Tool.
> A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
This is exactly what i've done so far:
1. Create a New Empty Web Site.
2. Create a Database in `/App_Data` named `ASPNETDB.MDF`
3. Run command line: `Aspnet_regsql.exe -A all -E -d (mywebsitepath)/App_Data/ASPNETDB.MDF`
4. Go to ASP.NET WebSite Administration Tool and click on the security tab.
What am I doing wrong? | You need to configure a membership provider in your website's web.config file. Please see the example below and modify the configuration to suit your requirements.
```
<configuration>
<connectionStrings>
<add name="MySqlConnection" connectionString="Data
Source=MySqlServer;Initial Catalog=aspnetdb;Integrated
Security=SSPI;" />
</connectionStrings>
<system.web>
<authentication mode="Forms" >
<forms loginUrl="login.aspx"
name=".ASPXFORMSAUTH" />
</authentication>
<authorization>
<deny users="?" />
</authorization>
<membership defaultProvider="SqlProvider" userIsOnlineTimeWindow="15">
<providers>
<clear />
<add
name="SqlProvider"
type="System.Web.Security.SqlMembershipProvider"
connectionStringName="MySqlConnection"
applicationName="MyApplication"
enablePasswordRetrieval="false"
enablePasswordReset="true"
requiresQuestionAndAnswer="true"
requiresUniqueEmail="true"
passwordFormat="Hashed" />
</providers>
</membership>
</system.web>
</configuration>
```
For more information please consult the following MSDN articles:
<http://msdn.microsoft.com/en-us/library/6e9y4s5t(v=vs.100).aspx>
<http://msdn.microsoft.com/en-us/library/vstudio/1b9hw62f(v=vs.100).aspx> | This is an extension of the @codkemonkeh answer for the case where you are using SQL Server and not localDB.
My Web Site Administration Tool was running fine until I enabled roles, at which point I started getting this error. Without roles my application worked fine, but with roles enabled I was getting this error.
I found out that Roles by default use the localDB SQL connection if a default connection is not provided. Since in my case I was storing user logins in SQL Server (and not in localDB), I was getting this error.
[This Microsoft help page](https://msdn.microsoft.com/en-us/library/ff647401.aspx) shows exactly how to set up the default connection for Roles.
```
<roleManager enabled="true" defaultProvider="SqlRoleManager">
<providers>
<add name="SqlRoleManager"
type="System.Web.Security.SqlRoleProvider"
connectionStringName="YOUR_CONNECTION_NAME"
applicationName="MyApplication" />
</providers>
</roleManager>
```
Also note that defaultProvider="**SQLRoleManager**" and name="**SQLRoleManager**" ...> should be the same.
Other than that, the @codemonkeh answer (above) is correct. This is just an extra problem that I came across. | ASP.NET WebSite Administration Tool can't access SQL database | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
I have a select statement whose WHERE clause uses the IN operator. The query works properly as long as some values are passed to the question mark (passed from a Java program). But when there are no values passed, I get a syntax error.
```
select
this_.categoryAddressMapId as category1_1_0_,
this_.categoryId as categoryId1_0_,
this_.addressId as addressId1_0_
from
icapcheckmyphotos.category_address_map this_
where
this_.addressId in (
?
)
```
When there are no parameters passed, I need a null set. Please suggest how to modify the where clause. Thanks in advance | Modify your java program. Your choices are to not run the query if there are no addressIds, or ensure that it passes at least one value. If addressId is an integer field, pass -1 or something like that. Personally, I like the first option but it depends on your requirements. | how about doing something like
```
where (? is null) OR this_.addressId in ( ? )
```
you may need to add some special treatment to the first part of the OR if your columns do accept NULLs | Dynamic Values in 'IN' operator | [
"",
"mysql",
"sql",
"where-in",
""
] |
I'm a newbie so I apologize if I'm not asking this question in the best way possible.
Let's say I have two tables: One called `CatColours` and one called `Cats`
`CatColours`
```
id colour spots
-- ----- -----
1 brown Yes
2 black No
3 white No
4 orange Yes
```
`Cats`
```
id cat_name
-- ----
1 Jimmy
2 Shadow
3 Snowball
4 Lucky
```
So **id** in the *CatColours* table would be the primary key, and the values of the column **colours** correspond to the **id** number.
In the second table *Cats*, we have **cat\_names** as well as **id** which would be a Foreign key (please correct me if I'm wrong).
I want to compose a query that would display **id** in the second table *Cats* as **colour** from the first table *CatColours* where the data will still correspond to the correct cat
(ie. **id** 1 in *CatColours* is corresponded to the value BROWN as well as **spots**, however I'm not concerning myself with the values under **spots** at the moment. **id** 1 in *Cats* corresponds to JIMMY.
When I query, I want to display **id** 1 as BROWN in the second table *Cats* to the **cat\_name** that the **id** corresponded to, and so on for the rest of the cats.)
I hope this makes sense, please ask me if there needs to be clarification.
I just want to run a statement that will retrieve and summarize this data, not modify or change any tables. | I think you are just looking for a simple join:
```
SELECT CatColours.id, Cats.cat_name, CatColours.colour
FROM CatColours
INNER JOIN Cats ON CatColours.id = Cats.id
```
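To try the join end-to-end without a database server, here is a small runnable sketch using SQLite via Python (the schema and sample rows mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CatColours (id INTEGER PRIMARY KEY, colour TEXT, spots TEXT);
    CREATE TABLE Cats (id INTEGER PRIMARY KEY, cat_name TEXT);
    INSERT INTO CatColours VALUES (1,'brown','Yes'),(2,'black','No'),
                                  (3,'white','No'),(4,'orange','Yes');
    INSERT INTO Cats VALUES (1,'Jimmy'),(2,'Shadow'),(3,'Snowball'),(4,'Lucky');
""")

# Same inner join as above: match each cat to its colour row by id.
rows = conn.execute("""
    SELECT CatColours.id, Cats.cat_name, CatColours.colour
    FROM CatColours
    INNER JOIN Cats ON CatColours.id = Cats.id
    ORDER BY CatColours.id
""").fetchall()
print(rows)  # [(1, 'Jimmy', 'brown'), (2, 'Shadow', 'black'), (3, 'Snowball', 'white'), (4, 'Lucky', 'orange')]
```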
And [here's a SQL Fiddle to demonstrate](http://www.sqlfiddle.com/#!6/3a5bf/1).
For more information on joins, see also:
* <http://en.wikipedia.org/wiki/Join_%28SQL%29>
* <http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html>
---
Honestly though I would switch the FK relationship around and make Cats (with their names) be the parent table. | If you just want to get the data from existing tables, you can use any of the queries already posted.
Please note that your tables are not very well organized.
How would you represent a cat named Jimmy that is brown and doesn’t have spots?
Also, note that your CatColors table contains redundant data, because a cat can have one color with or without spots, so you are entering more data than you need.
I’d suggest you fix your tables like this
CatColors
```
id color
1 brown
2 white
3 black
4 orange
5 gray....
```
Cats
```
id name color_id spots
1 Jimmy 1 1
2 Shadow 2 0
3 Lucky 2 1
```
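To illustrate the reorganized tables, here is a runnable sketch using SQLite via Python (names follow the suggested tables above; the spots column is assumed to be a 0/1 flag):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CatColors (id INTEGER PRIMARY KEY, color TEXT);
    CREATE TABLE Cats (
        id INTEGER PRIMARY KEY,
        name TEXT,
        color_id INTEGER REFERENCES CatColors(id),
        spots INTEGER
    );
    INSERT INTO CatColors VALUES (1,'brown'),(2,'white'),(3,'black'),(4,'orange'),(5,'gray');
    INSERT INTO Cats VALUES (1,'Jimmy',1,1),(2,'Shadow',2,0),(3,'Lucky',2,1);
""")

# Each cat row stores only the color_id; the color name comes from the lookup table.
rows = conn.execute("""
    SELECT c.name, cc.color, c.spots
    FROM Cats c
    JOIN CatColors cc ON cc.id = c.color_id
    ORDER BY c.id
""").fetchall()
print(rows)  # [('Jimmy', 'brown', 1), ('Shadow', 'white', 0), ('Lucky', 'white', 1)]
```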
Here is a good tutorial you can read about table normalization that will give you some theoretical background on this
<http://dotnetanalysis.blogspot.com/2012/01/database-normalization-sql-server.html> | Show data that are linked to each other in separate tables | [
"",
"sql",
"sql-server",
""
] |
at the moment I am using the statement:
```
delete
from tbl_name
where trunc(tbl_name.Timestamp) = to_date('06.09.2013','DD/MM/YYYY');
```
But this takes very very long.
Is there a way to speed things up?
Thank you | First, this is certainly a big table; otherwise I see no reason for a statement like yours to take very long.
Then there are two possibilities:
1) The delete statement affects many, many records.
Then it's only natural for the statement to take long. The whole table will have to be scanned (full table scan). You could only speed this up by parallelizing the statement:
```
delete /*+parallel(tbl_name,4)*/ from tbl_name ...
```
2) The delete statement affects a rather small percentage of the records in the table.
Then it would be advisable for Oracle to use an index. As you are asking only for the date portion of the timestamp column, you would create a function-based index:
```
create index index_name on tbl_name( trunc(the_timestamp) );
```
Once that index is available on the table, Oracle can use it to find the desired records in selects, updates and deletes based on trunc(the\_timestamp). | Use a BETWEEN range on the right-hand side instead of applying trunc on the left-hand side.
```
delete from tbl_name
where tbl_name.Timestamp between to_date('06.09.2013 00:00:00','DD/MM/YYYY hh24:mi:ss')
and to_date('06.09.2013 23:59:59','DD/MM/YYYY hh24:mi:ss');
``` | Delete where Date = DD.MM.YYYY on table with 20m records | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I want to take a column with values that repeat multiple times and get that value only once and store it for later use, but at the same time I would like to get another value in the same row as that distinct column.
```
A B C
32263 123456 44
32263 123456 45
32263 123456 46
32264 246802 44
32263 246802 45
32264 246802 46
32265 369258 44
32265 369258 45
32265 369258 46
```
A, B, C represent three columns. Ignore C for now.
My question is: How can I get this information in this table and store it for I can use it later in the script?
Here is what I tried:
```
use databaseName
select distinct A from tableName
order by A
```
The result is:
```
A
32263
32264
32265
```
I'm trying to get it to also give me B's value. (Note it does not matter at all which row I get since no matter what A I choose the value of B will be the same for given A.) We are ignoring C for now.
The result should be:
```
A B
32263 123456
32264 246802
32265 369258
```
Now, once I get it like that I want to insert a row using the values I got from the query. This is where C comes in. I want to do something like this:
```
use databaseName
insert into tableName (A, B, C)
values (32263, 123456, 47)
```
Of course I don't want to put the values directly inside of there, instead have some type of loop that will cycle through each of the 3 distinct A values I found.
In short, my table should go from:
```
A B C
32263 123456 44
32263 123456 45
32263 123456 46
32264 246802 44
32263 246802 45
32264 246802 46
32265 369258 44
32265 369258 45
32265 369258 46
```
To:
```
A B C
32263 123456 44
32263 123456 45
32263 123456 46
32263 123456 47 -
32264 246802 44
32263 246802 45
32264 246802 46
32264 246802 47 -
32265 369258 44
32265 369258 45
32265 369258 46
32265 369258 47 -
```
I placed dashes next to the newly added rows to help you see the changes.
I figure I should perhaps do some type of loop that will cycle through all three distinct A values, but my problem is how to do that?
Thanks for your time. | You can use an `INSERT INTO ... SELECT` statement for this:
```
INSERT INTO tableName (A, B, C)
SELECT A, B, MAX(C) + 1
FROM tableName
GROUP BY A, B
```
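A quick way to sanity-check this pattern is with SQLite via Python; the sketch below uses the three clean (A, B) groups from the question for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tableName (A INTEGER, B INTEGER, C INTEGER);
    INSERT INTO tableName VALUES
        (32263,123456,44),(32263,123456,45),(32263,123456,46),
        (32264,246802,44),(32264,246802,45),(32264,246802,46),
        (32265,369258,44),(32265,369258,45),(32265,369258,46);
""")

# One new row per distinct (A, B) pair, with C one higher than that group's max.
conn.execute("""
    INSERT INTO tableName (A, B, C)
    SELECT A, B, MAX(C) + 1
    FROM tableName
    GROUP BY A, B
""")

new_rows = conn.execute(
    "SELECT A, B, C FROM tableName WHERE C = 47 ORDER BY A"
).fetchall()
print(new_rows)  # [(32263, 123456, 47), (32264, 246802, 47), (32265, 369258, 47)]
```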
* [SQLFiddle Demo](http://sqlfiddle.com/#!6/c2051/2) | Try this:
```
SELECT * INTO new_table FROM
(SELECT DISTINCT * FROM table_name) x
``` | SELECT DISTINCT values and INSERT INTO table | [
"",
"sql",
"select",
"sql-server-2012",
"distinct",
"sql-insert",
""
] |
I have an order table that looks like the below
```
order_id ordered checkin collected
======================================
1 2 1 4
2 4 2 4
3 4 2 4
4 1 4 1
```
This represents which member of staff ordered, checked in and collected each order.
I would like to perform a sql query which counts how many orders, checkins and collections each member of staff has made. This would also JOIN to the staff table which outputs the staff members name for each row
So for staff member 1 (John), it would be 1 order, 1 checkin and 1 collected.
Staff member 2 (Simon) would be 1 orders, 2 checkin and 0 collections and so on for the other members of staff.
What would be the best mysql query to achieve this? | **UPDATED:** You can calculate row counts per staff id for each column individually and then join with staff table using outer join.
```
SELECT s.id, s.name,
COALESCE(o.ordered, 0) ordered,
COALESCE(c.checkin, 0) checkin,
COALESCE(l.collected, 0) collected
FROM staff s LEFT JOIN
(
SELECT ordered id, COUNT(*) ordered
FROM orders
-- WHERE ordered_date >= LAST_DAY(CURDATE() - INTERVAL 2 MONTH) + INTERVAL 1 DAY
-- AND ordered_date <= LAST_DAY(CURDATE() - INTERVAL 1 MONTH)
GROUP BY ordered
) o ON s.id = o.id LEFT JOIN
(
SELECT checkin id, COUNT(*) checkin
FROM orders
-- WHERE checkin_date >= LAST_DAY(CURDATE() - INTERVAL 2 MONTH) + INTERVAL 1 DAY
-- AND checkin_date <= LAST_DAY(CURDATE() - INTERVAL 1 MONTH)
GROUP BY checkin
) c ON s.id = c.id LEFT JOIN
(
SELECT collected id, COUNT(*) collected
FROM orders
-- WHERE collected_date >= LAST_DAY(CURDATE() - INTERVAL 2 MONTH) + INTERVAL 1 DAY
-- AND collected_date <= LAST_DAY(CURDATE() - INTERVAL 1 MONTH)
GROUP BY collected
) l ON s.id = l.id
```
or you can take a different approach unpivoting `orders` table first and then conditionally aggregating it by staff id and again join it with staff table using outer join
```
SELECT s.id, s.name,
COALESCE(p.ordered, 0) ordered,
COALESCE(p.checkin, 0) checkin,
COALESCE(p.collected, 0) collected
FROM staff s LEFT JOIN
(
SELECT id,
SUM(type = 1) ordered,
SUM(type = 2) checkin,
SUM(type = 3) collected
FROM
(
SELECT type,
CASE type
WHEN 1 THEN ordered
WHEN 2 THEN checkin
WHEN 3 THEN collected
END id
FROM orders CROSS JOIN
(
SELECT 1 type UNION ALL
SELECT 2 UNION ALL
SELECT 3
) n
-- WHERE (ordered_date >= LAST_DAY(CURDATE() - INTERVAL 2 MONTH) + INTERVAL 1 DAY
-- AND ordered_date <= LAST_DAY(CURDATE() - INTERVAL 1 MONTH))
-- OR (checkin_date >= LAST_DAY(CURDATE() - INTERVAL 2 MONTH) + INTERVAL 1 DAY
-- AND checkin_date <= LAST_DAY(CURDATE() - INTERVAL 1 MONTH))
-- OR (collected_date >= LAST_DAY(CURDATE() - INTERVAL 2 MONTH) + INTERVAL 1 DAY
-- AND collected_date <= LAST_DAY(CURDATE() - INTERVAL 1 MONTH))
) u
GROUP BY id
) p
ON s.id = p.id
```
*Based on your comments, the queries have been updated with sample `WHERE` clauses to filter only rows for the previous month*
Sample output:
```
| ID | NAME | ORDERED | CHECKIN | COLLECTED |
|----|-------|---------|---------|-----------|
| 1 | John | 1 | 1 | 1 |
| 2 | Simon | 1 | 2 | 0 |
| 3 | Mark | 0 | 0 | 0 |
| 4 | Helen | 2 | 1 | 3 |
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!2/e161d/11)** demo | You can use `COUNT(*)` to count all orders in your table. Also use `COUNT(checkin)` and `COUNT(collected)` to count the other fields.
If you are trying to get the counts for one user, you can use `WHERE` to do it.
```
SELECT
COUNT(order_id) AS order_count,
COUNT(checkin) AS checkin_count,
COUNT(collected) AS collected_count
FROM
your_table
WHERE staff_member = idOfStaff
```
You can also perform several counts on your table and collect them as one result:
```
SELECT
(SELECT COUNT(order_id) FROM table WHERE staff = ?) AS staff_1_order_count,
(SELECT COUNT(order_id) FROM table WHERE staff = ?) AS staff_2_order_count
``` | Multiple mysql counts from order table | [
"",
"mysql",
"sql",
""
] |
please see my attached below:
my actual data will be stored in table 1; can I create a view that displays the data like table 2?
 | Yes it is possible and it is called a [UNPIVOT](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx)
For your case:
```
SELECT VendorID, value, category
FROM
   (SELECT VendorID, charge, nocharge
   FROM Table) p
UNPIVOT
   (value FOR category IN
      (charge, nocharge)
)AS unpvt;
``` | Please see [PIVOT/UNPIVOT](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) for what you need | is it possible create a sql view from "horizontal" to "vertical" | [
"",
"sql",
"sql-server",
""
] |
I'm trying to select all the results from a table A and all results from table B
I used this Request:
```
$request_public_imgs = $BDD-> query("SELECT * FROM images_public ");
UNION("SELECT * FROM images_users WHERE img_kind='PublicImg' ")
;
```
But I receive this error
> Fatal error: Call to undefined function UNION() | Write your SQL command as a single string:
```
$request_public_imgs = $BDD-> query("SELECT id, field1, .. FROM images_public
UNION SELECT id, field1, .. FROM images_users WHERE img_kind='PublicImg' ") ;
```
You should select specific fields rather than using \* for performance reasons.
Adrien | `UNION` is a SQL operator, not a PHP function, hence you need to put it inside the `query()` call:
```
$request_public_imgs = $BDD-> query("
SELECT * FROM images_public
UNION
SELECT * FROM images_users WHERE img_kind='PublicImg' ");
```
More generally, all the queries will work like:
```
$request_public_imgs = $BDD-> query("YOUR_MYSQL_QUERY");
```
Note that the error `Fatal error: Call to undefined function UNION()` was warning about `UNION` not being a function of PHP. | Select all from TWO tables | [
"",
"mysql",
"sql",
"join",
""
] |
I have a question. How can I write this SQL
**First Table**
1. Name: ORDER\_TABLE
2. Column: Order\_number, Contract\_number,vendor\_number
**Second Table**
1. Name: CONTRACT\_TABLE
2. Column: Contract\_number, vendor\_number
I have one contract\_number = '1234' which is contained in CONTRACT\_TABLE, but I want to check whether this contract\_number has an order. I want to get those Contract\_numbers which don't have an Order\_number in ORDER\_TABLE
```
SELECT ct.VENDOR_NUMBER, ct.CONTRACT_NUMBER, ot.ORDER_NUMBER
FROM CONTRACT_TABLE ct,
ORDER_TABLE ot
WHERE ct.vendor_number = ot.vendor_number
``` | Assuming you want to detect all the Contract\_numbers in CONTRACT\_TABLE which do not exist in ORDER\_TABLE (not clear from your description), here is one example of SQL:
```
select Contract_number from CONTRACT_TABLE
minus select distinct Contract_number from ORDER_TABLE
``` | ```
SELECT
Contract_Number
FROM
Contract_Table
WHERE
NOT EXISTS
(SELECT
NULL
FROM
Order_Table
WHERE
Contract_Table.Order_Number = Order_Table.Order_Number)
``` | Order Number which dosnt exists in other table | [
"",
"sql",
"oracle",
""
] |
Suppose I have a table `t` filled like this:
```
x y z
- - -
A 1 A
A 4 A
A 6 B
A 7 A
B 1 A
B 2 A
```
Basically, columns x and z are random letters and y random numbers.
I want to aggregate the rows as follows:
```
x z min(y) max(y)
- - ------ ------
A A 1 4
A B 6 6
A A 7 7
B A 1 2
```
In other words: Given that the rows are ordered by x, y, and z, select the minimum and maximum y for each *consecutive* group of x and z.
Note that this query is not what I need:
```
select x, z, min(y), max(y)
from t
group by x, z
```
As this would result in the following unwanted result:
```
x z min(y) max(y)
- - ------ ------
A A 1 7
A B 6 6
B A 1 2
```
So the question is: is there a way to express what I want in SQL?
(In case the solution depends on the SQL dialect: an Oracle solution would be preferred.) | Here's a solution, but I don't have the time to explain it step by step. In short: number the rows, flag each row where the (x, z) pair changes from the previous row, and take a running sum of those flags to get one group number per consecutive run:
```
select x, z, min(y), max(y)
from (
select b.* , sum(switch) over (order by rn) as grp_new
from(
select a.* ,
case when grp = (lag(grp) over (order by rn))
then 0
else 1 end as switch
from
(select x,y,z,
dense_rank() over (order by x, z) as grp,
row_number() over (order by x, y, z) rn
from t
)a
)b
)c
group by x, z, grp_new
order by grp_new
```
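To see what the query is computing, here is the same consecutive-group logic sketched in plain Python on the sample data (an illustration of the intent, not of the Oracle SQL itself):

```python
from itertools import groupby

# Sample data from the question, already ordered by x, y, z.
rows = [("A", 1, "A"), ("A", 4, "A"), ("A", 6, "B"),
        ("A", 7, "A"), ("B", 1, "A"), ("B", 2, "A")]

result = []
# groupby starts a new group each time the (x, z) pair changes,
# which is exactly the "consecutive group" the question asks for.
for (x, z), grp in groupby(rows, key=lambda r: (r[0], r[2])):
    ys = [r[1] for r in grp]
    result.append((x, z, min(ys), max(ys)))

print(result)
# [('A', 'A', 1, 4), ('A', 'B', 6, 6), ('A', 'A', 7, 7), ('B', 'A', 1, 2)]
```

The SQL above builds the same change-point grouping with `lag` and a running `sum`.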
[SQLFIDDLE to test it.](http://sqlfiddle.com/#!4/d41d8/17790) | You didn't mention your Oracle version. If your version supports [WITH](http://www.oracle-base.com/articles/misc/with-clause.php) and [ROW\_NUMBER() OVER](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm):
```
WITH C as
(
select t.*,
ROW_NUMBER() OVER (order by x,y,z ) as rn
from t
), C2 as
(
select t1.*,
(
select count(*) from c where Rn<=t1.Rn
and (z<>t1.z or x<>t1.x)
) as Grp
from c t1
)
select x,z,min(y),max(y) from c2
group by x,z,grp
order by min(rn)
```
[SQLFiddle demo](http://sqlfiddle.com/#!4/03ad8/28) | How do I aggregate over ordered subsets of rows in SQL? | [
"",
"sql",
"oracle",
""
] |
I need to insert a string value that looks like money from one table to another table where the column type is decimal(18,2). I have tried code that looks like: `CAST(REPLACE(REPLACE(d.Cost,',',''),'$','') AS DECIMAL(18,2)) AS 'Cost'`
but I get the error "Error converting data type varchar to numeric." | It's not a good idea to store formatted values in your database, but this problem is easy to work around.
The reason you can't cast it straight to *Decimal* is that it's currently a *Varchar*. So first cast it to *Money*, and then you can cast it to something else.
This works in SQL Server:
```
select cast(cast('$400,000.88' as money) as decimal(10,2))
``` | Try:
```
CAST(CAST(REPLACE(REPLACE(d.Cost,',',''),'$','') AS Float) AS DECIMAL(18,2)) AS 'Cost'
``` | How do I Cast a string value formatted like "$400,000.88" to a decimal | [
"",
"sql",
"sql-server",
""
] |
Consider the following table (snapshot):

I would like to write a query to select rows from the table for which
* At least 4 out of 7 column values (VAL, EQ, EFF, ..., SY) are not NULL.
Any idea how to do that? | Nothing fancy here, just count the number of non-null per row:
```
SELECT *
FROM Table1
WHERE
IIF(VAL IS NULL, 0, 1) +
IIF(EQ IS NULL, 0, 1) +
IIF(EFF IS NULL, 0, 1) +
IIF(SIZE IS NULL, 0, 1) +
IIF(FSCR IS NULL, 0, 1) +
IIF(MSCR IS NULL, 0, 1) +
IIF(SY IS NULL, 0, 1) >= 4
```
Just noticed you tagged sql-server-2005. `IIF` is SQL Server 2012, but you can substitute `CASE WHEN VAL IS NULL THEN 1 ELSE 0 END`. | How about this? Turn your columns into "rows" and use SQL to count the non-NULLs:
```
select *
from Table1 as t
where
(
select count(*) from (values
(t.VAL), (t.EQ), (t.EFF), (t.SIZE), (t.FSCR), (t.MSCR), (t.SY)
) as a(val) where a.val is not null
) >= 4
```
I like this solution because it splits the data from the data processing - after you get this derived "table with values", you can do anything with it, and it's easy to change the logic in the future. You can sum, count, or apply any aggregates you want. If it were something like `case when t.VAL then ... end + ...,` then you would have to change the logic in many places.
For example, suppose you want to sum all not-null elements greater than 2. In this solution you just change `count` to `sum`, add a `where` clause, and you're done. If it were `iif(Val is null, 0, 1) +`, you would first have to think about what should be done and then change **every** item to, for example, `case when Val > 2 then Val else 0 end`.
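As a rough cross-check of the portable `CASE`-based counting, here is a sketch run against SQLite from Python (the table, columns, and threshold are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INT, b INT, c INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 2, 3), (1, None, None), (None, None, None)])

# Keep rows where at least 2 of the 3 columns are not NULL.
rows = conn.execute("""
    SELECT a, b, c
    FROM t
    WHERE CASE WHEN a IS NULL THEN 0 ELSE 1 END +
          CASE WHEN b IS NULL THEN 0 ELSE 1 END +
          CASE WHEN c IS NULL THEN 0 ELSE 1 END >= 2
""").fetchall()
print(rows)  # [(1, 2, 3)]
```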
**`sql fiddle demo`** | Conditional select statement | [
"",
"sql",
"sql-server",
"sql-server-2005",
"select",
""
] |
I'm using polymorphic association to augment my `Post` model (which has the common post attributes: title, body, date, etc.)
Here's my models,
```
class Post < ActiveRecord::Base
belongs_to :post_module, polymorphic: true
end
class Album < ActiveRecord::Base
has_one :post, as: :post_module
belongs_to :artist
end
class Artist < ActiveRecord::Base
has_many :albums
end
```
Now I have a `Post` model which has an `Album` model as `:post_module`
```
@post = Post.find(params[:id]) #its :post_module is an Album model
```
Now I want to query for `Post`s whose `Album` has the same artist id as `@post`'s album's artist.
I can query `Album` models with the same artist id as `@post`'s album... but that's not what I want.
I want a set of `Post`s.
```
@albums = Album.where('albums.artist_id = "?"', @post.post_module.artist.id)
```
How can I do this? Or am I designing a bad model relationship? | I ended up with
`Post.joins('LEFT OUTER JOIN albums ON albums.id = posts.post_module_id').where('posts.post_module_type = "Album" AND albums.artist_id = "?"', @artist.id)` | If I understood correctly, you want to retrieve all the posts for a given artist id. So:
```
@posts = Post.joins(:post_module => :album).where(:post_module => { :album => { :artist_id => artist_id }})
``` | how to query with polymorphic has_one association's attribute? | [
"",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
I would like to export a table from SQL Server 2008 R2 to a file. The problem is that I don't have bcp (nor can I install it or anything else) and am not able to run xpcmdshell. Anyone have any ideas on how this could be done without those permissions/tools? (I would like to have this happen on some automated basis preferably) | In the results pane, click the top-left cell to highlight all the records, and then right-click the top-left cell and click "Save Results As". One of the export options is CSV.
You can also use a command like this too:
```
INSERT INTO OPENROWSET ('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=c:\Test.xls;','SELECT productid, price FROM dbo.product')
```
Lastly, you can look into using SSIS (replaced DTS) for data exports. Here is a link to a tutorial: <http://www.accelebrate.com/sql_training/ssis_2008_tutorial.htm> | I'm usually using Copy/Paste from SSMS `Results Pane` to `Excel`

---
**OR**
you can right click on database in the `Object Explorer` and select `Database->Tasks->Export Data`. An `SQL Server Import and Export Wizard` dialog opens and you will be able to export data from any table or query to the file or another destination.
---
**OR**
you can use [LinqPad](http://www.linqpad.net/) - an awesome, simple, free tool (I really love it) that doesn't require installation
 | Exporting a table from SQL Server 2008 R2 to a file WITHOUT external tools | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
I have these queries :
```
SELECT COUNT(*) FROM t_table WHERE color = 'YELLOW';
SELECT COUNT(*) FROM t_table WHERE color = 'BLUE';
SELECT COUNT(*) FROM t_table WHERE color = 'RED';
```
Is there any way to get these results in one query? | ```
SELECT color, COUNT(*) FROM t_table GROUP BY color
``` | If you want the result to be in one row you can use:
```
SELECT
SUM(IF(color = 'YELLOW', 1, 0)) AS YELLOW,
SUM(IF(color = 'BLUE', 1, 0)) AS BLUE,
SUM(IF(color = 'RED', 1, 0)) AS RED
FROM t_table
```
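SQLite has no `IF()`, but the equivalent `CASE` form of this one-row variant is easy to verify from Python (a minimal sketch with made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_table (color TEXT)")
conn.executemany("INSERT INTO t_table VALUES (?)",
                 [("YELLOW",)] * 2 + [("BLUE",)] * 3 + [("RED",)])

# One row, one column per color, each a conditional count.
row = conn.execute("""
    SELECT SUM(CASE WHEN color = 'YELLOW' THEN 1 ELSE 0 END),
           SUM(CASE WHEN color = 'BLUE'   THEN 1 ELSE 0 END),
           SUM(CASE WHEN color = 'RED'    THEN 1 ELSE 0 END)
    FROM t_table
""").fetchone()
print(row)  # (2, 3, 1)
```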
[Working example](http://sqlfiddle.com/#!2/9ef77/3) | Multiple COUNT() for multiple conditions in one query (MySQL) | [
"",
"mysql",
"sql",
"count",
""
] |
I have two table with some identical field. (please don't blame the design).
Below only for the example schema
> Table A
> id
> name
> phone
> keys
>
> Table B
> id
> name
> keys
> address
So, I want to query id and name from either table A or table B where they meet the 'keys' condition, in a single query, returning just the fields "ID" and "NAME" regardless of whether they come from table A or table B.
with simple query
> SELECT a.id, a.name, b.id, b.name FROM TABELA as a, TABLEB as b WHERE a.keys = '1' or b.keys = '1'
It returns duplicate columns id, name, id1, name1 in the result. | Use `UNION` instead of `CROSS JOIN`:
```
SELECT a.id, a.name
FROM TABELA as a
WHERE a.keys = '1'
UNION
SELECT b.id, b.name
FROM TABLEB as b
WHERE b.keys = '1'
``` | Use `union` or `union all`. `UNION` returns only distinct rows; `UNION ALL` returns all rows.
See the examples in the [manual on unions](http://dev.mysql.com/doc/refman/5.0/en/union.html):
```
SELECT a.id, a.name FROM TABELA as a WHERE a.keys = '1'
union
SELECT b.id, b.name FROM TABELb as b WHERE b.keys = '1'
``` | SQL Query, query to having single field name from different table | [
"",
"mysql",
"sql",
"sql-server",
"select",
"join",
""
] |
I'm having a hard time figuring out a greater than in a SQL statement.
Here's my code:
```
select one, two, three from orders
where case when @orderid > 0 then orders.orderid = @orderid end
```
---
@orderid is a parameter passed to the stored procedure. The idea is that if a valid (> 0) orderid is passed, then use it as the filter in the where clause; otherwise don't use it at all. | Guffa has the right answer, but the way you'd do this using the CASE trick (which does occasionally come in handy) is this:
```
--If order ID is greater than 0, use it for selection
--otherwise return all of the orders.
select one, two, three
from orders
where orders.orderid = CASE
WHEN @orderid > 0 then @orderid
ELSE orders.orderid
END
```
The CASE always has to return something, so if you want to "disable" a statement in your WHERE clause conditionally and can't use OR, you can just set the thing equal to itself and that should always be true (except when comparing nulls).
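A quick way to see the "equal to itself" trick in action is a small SQLite session from Python (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (orderid INT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

def fetch(order_id):
    # When order_id > 0 the CASE yields order_id; otherwise it yields
    # orders.orderid itself, which matches every (non-NULL) row.
    return conn.execute("""
        SELECT orderid, total FROM orders
        WHERE orderid = CASE WHEN ? > 0 THEN ? ELSE orderid END
    """, (order_id, order_id)).fetchall()

print(fetch(2))  # [(2, 20.0)]
print(fetch(0))  # both rows
```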
Edit: I should also say that on queries like this where the number of rows that can be returned could vary tremendously (one row versus the whole table), using the OPTION (RECOMPILE) hint may help performance a great deal in the single-row case. | NYCDotnet's answer, like the rest of the answers here, works, but it may not be [SARGable](http://en.wikipedia.org/wiki/Sargable)
To make this non-SARGable query..
```
select one, two three from orders
where
case
when @orderid > 0 then orders.orderid
else 0
end = @orderid
```
..SARGable, do this instead:
```
select one, two three from orders
where
@orderid > 0 and orders.orderid = @orderid
or not (@orderid > 0)
```
If @orderid will not ever become negative, just make the solution simpler:
```
select one, two three from orders
where
@orderid > 0 and orders.orderid = @orderid
or @orderid = 0
```
Or better yet, again if @orderid will never become negative:
```
select one, two three from orders
where
@orderid = 0
or orders.orderid = @orderid
``` | Greater than in SQL CASE statement | [
"",
"sql",
"sql-server",
""
] |
I'm having difficulty with what I figure should be an easy problem. I want to select all the columns in a table for which one particular column has duplicate values.
I've been trying to use aggregate functions, but that's constraining me as I want to just match on one column and display all values. Using aggregates seems to require that I 'group by' all columns I'm going to want to display. | If I understood you correctly, this should do:
```
SELECT *
FROM YourTable A
WHERE EXISTS(SELECT 1
FROM YourTable
WHERE Col1 = A.Col1
GROUP BY Col1
HAVING COUNT(*) > 1)
``` | You can join on a derived table where you aggregate and determine "col" values which are duplicated:
```
SELECT a.*
FROM Table1 a
INNER JOIN
(
SELECT col
FROM Table1
GROUP BY col
HAVING COUNT(1) > 1
) b ON a.col = b.col
``` | How to select all columns for rows where I check if just 1 or 2 columns contain duplicate values | [
"",
"sql",
"sql-server",
""
] |
I am trying to return the names with the max number of entries in a table and return a list of (name, count) tuples for those with the max for count. My current solution uses:
```
select name, count(*)
from action_log
group by name
order by count desc
limit 1;
```
The problem is that using `limit 1` does not account for multiple names having a max count value.
How can I determine the max count and then get ALL matching names? I want to (but can't obviously) do something like:
```
select name, max(count(*))
from action_log
group by name;
``` | [SQL Fiddle](http://sqlfiddle.com/#!12/09cca/1)
```
with s as (
select name, count(*) as total
from action_log
group by name
), m as (
select max(total) as max_total from s
)
select name, total
from s
where total = (select max_total from m)
``` | You could do this with subqueries - except there are some rules surrounding the group by. How about simplifying it with a view:
```
create view cname as
select name, count(name) c
from action_log
group by name
```
and then `SELECT` like this:
```
select distinct a.name
from action_log a
join cname c on c.name = a.name
where c.c = (select max(c) from cname)
```
and here is a [SQL Fiddle](http://sqlfiddle.com/#!2/a034f/6/0) to prove it. | How to get all max count with a field | [
"",
"sql",
"postgresql",
""
] |
Why does this SQL-statement return `0`?
```
SELECT CASE WHEN NULL IN (9,1,NULL) THEN 1 ELSE 0 END
``` | I don't know which RDBMS you are using, since some of them have configuration options for how `NULL` is treated.
```
NULL IN (9, 1, NULL)
```
can be written as
```
(NULL = 9) OR (NULL = 1) OR (NULL = NULL)
```
and none of them is `TRUE` or `FALSE`; they are all `NULL`. Since there are only two paths in the `CASE` statement, it falls into the `ELSE` block. | SQL is based on [three-valued logic](http://en.wikipedia.org/wiki/Three-valued_logic), where there are three truth values: `TRUE`, `FALSE`, and `UNKNOWN`. The special `NULL` value is a placeholder for "missing data", and if you compare something, anything, with missing data, the result is unknown.
For example, is `<missing data>` equal to `<missing data>`? It's impossible to know, so the result is `UNKNOWN`.
In this particular case, you are trying to find out if `<missing data>` is in a given list: since the data is missing, it's impossible to know if it's in the list, and the query returns `0`.
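This behaviour is easy to reproduce; for example, running the statement against SQLite from Python (just a demonstration — the same holds in other engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# NULL IN (...) evaluates to NULL (unknown), so the CASE takes the ELSE path.
value, = conn.execute(
    "SELECT CASE WHEN NULL IN (9, 1, NULL) THEN 1 ELSE 0 END").fetchone()
print(value)  # 0
```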
```
SELECT CASE WHEN NULL IN (9,1,NULL) THEN 1 ELSE 0 END
``` | NULL IN (1,2,NULL) returns false | [
"",
"sql",
""
] |
I have a table `StudentMarks` with columns `Name, Maths, Science, English`.
Data is like
```
Name, Maths, Science, English
Tilak, 90, 40, 60
Raj, 30, 20, 10
```
I want to get it arranged like the following:
```
Name, Subject, Marks
Tilak, Maths, 90
Tilak, Science, 40
Tilak, English, 60
```
With [unpivot](http://technet.microsoft.com/en-us/library/ms177410%28v=SQL.105%29.aspx) I am able to get Name, Marks properly, but not able to get the column name in the source table to the `Subject` column in the desired result set.
How can I achieve this?
I have so far reached the following query (to get Name, Marks)
```
select Name, Marks from studentmarks
Unpivot
(
Marks for details in (Maths, Science, English)
) as UnPvt
``` | Your query is very close. You should be able to use the following which includes the `subject` in the final select list:
```
select u.name, u.subject, u.marks
from student s
unpivot
(
marks
for subject in (Maths, Science, English)
) u;
```
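`UNPIVOT` is SQL Server syntax; on engines without it, the same reshaping can be written with `UNION ALL`. A sketch against SQLite from Python (illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE studentmarks (name TEXT, maths INT, science INT, english INT)")
conn.executemany("INSERT INTO studentmarks VALUES (?, ?, ?, ?)",
                 [("Tilak", 90, 40, 60), ("Raj", 30, 20, 10)])

# One SELECT per source column, each labelling its rows with the subject name.
rows = conn.execute("""
    SELECT name, 'Maths' AS subject, maths AS marks FROM studentmarks
    UNION ALL
    SELECT name, 'Science', science FROM studentmarks
    UNION ALL
    SELECT name, 'English', english FROM studentmarks
    ORDER BY name, subject
""").fetchall()
print(rows[:3])  # [('Raj', 'English', 10), ('Raj', 'Maths', 30), ('Raj', 'Science', 20)]
```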
See [SQL Fiddle with demo](http://sqlfiddle.com/#!18/3771e/1) | You may also try the standard SQL un-pivoting method, which applies a sequence of logical steps in the following code.
The following code has 3 steps:
1. create multiple copies of each row using a cross join (also creating the subject column in this case)
2. create a "marks" column and fill in the relevant values using a case expression (e.g. if the subject is Science, pick the value from the science column)
3. remove any null combinations (if any exist; the table expression can be avoided entirely if there are strictly no null values in the base table)
```
select *
from
(
select name, subject,
case subject
when 'Maths' then maths
when 'Science' then science
when 'English' then english
end as Marks
from studentmarks
Cross Join (values('Maths'),('Science'),('English')) AS Subjct(Subject)
)as D
where marks is not null;
``` | Unpivot with column name | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"unpivot",
""
] |
Say I have a table called `votes`, and a column in that table called `vote_type` which can either be 0 or 1. How can I fetch the `COUNT` of `vote_type` where `vote_type` is `1`?
Right now, I can select the count of `vote_type` overall, but not the count of `vote_type` where it is 1 or 0, via this query:
```
SELECT COUNT(votes.vote_id) AS vote_count
FROM votes
WHERE <some condition>
ORDER BY vote_count
```
How can I select the total number of vote\_types, the number of vote\_types = 1, the number of vote\_types = 0, and the total vote\_value (votetypes = 1 minus votetypes = 0), and order it by the total vote\_value?
EDIT: Note that 1 & 0 are not intrinsic values of a vote, but boolean expressions for a positive vote and a negative vote. | ```
SELECT COUNT(*) TotalVotes,
SUM(CASE WHEN vote_type = 1 THEN 1 ELSE 0 END) TotalVoteOne,
SUM(CASE WHEN vote_type = 0 THEN 1 ELSE 0 END) TotalVoteZero,
SUM(CASE WHEN vote_type = 1 THEN 1 ELSE 0 END) -
SUM(CASE WHEN vote_type = 0 THEN 1 ELSE 0 END) TotalVoteValue
FROM votes
-- WHERE ....
-- ORDER ....
```
When you say *order it by the total vote\_value* -- it actually doesn't make sense, since the total number of rows in the result is only one.
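The conditional aggregation itself is easy to verify; for example, against SQLite from Python (a minimal sketch with made-up votes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (vote_id INTEGER PRIMARY KEY, vote_type INT)")
conn.executemany("INSERT INTO votes (vote_type) VALUES (?)",
                 [(1,), (1,), (1,), (0,)])

total, ones, zeros = conn.execute("""
    SELECT COUNT(*),
           SUM(CASE WHEN vote_type = 1 THEN 1 ELSE 0 END),
           SUM(CASE WHEN vote_type = 0 THEN 1 ELSE 0 END)
    FROM votes
""").fetchone()
print(total, ones, zeros, ones - zeros)  # 4 3 1 2
```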
You can also wrap this in a subquery,
```
SELECT TotalVotes,
TotalVoteOne,
TotalVoteZero,
(TotalVoteOne - TotalVoteZero) AS TotalVoteValue
FROM
(
SELECT COUNT(*) TotalVotes,
SUM(CASE WHEN vote_type = 1 THEN 1 ELSE 0 END) TotalVoteOne,
SUM(CASE WHEN vote_type = 0 THEN 1 ELSE 0 END) TotalVoteZero
FROM tableName
-- WHERE ....
-- ORDER BY TotalVoteValue DESC
) a
``` | I don't know exactly what you are trying to do, but if you want to count all the votes with Vote\_Type=1,
try
```
SELECT COUNT(votes.vote_id) AS vote_count
FROM votes
WHERE votes.vote_type=1
```
or if you need to sum the votes
```
SELECT SUM(votes.vote_type) AS vote_sum
FROM votes
WHERE votes.vote_type=1
``` | How to count by the value of a column in SQL? | [
"",
"mysql",
"sql",
""
] |
I am attempting to get the user information of the user who left this review.
With the following code:
```
SELECT username, image, user_id FROM table_users AS us
JOIN table_reviews AS re
ON re.user_id = us.user_id
WHERE us.user_id = 1 AND
re.review_id= 1
```
I get the error:
> Column 'user\_id' in field list is ambiguous
What does this mean? | It means that both tables in the query have the column `user_id`.
You need to specify which one you want to get in the SELECT statement like
`SELECT username, image, re.user_id` | The column user\_id exists in both the `table_reviews` and `table_users` tables.
You need to qualify the column with a table alias as well.
```
SELECT username, image, us.user_id FROM table_users AS us
JOIN table_reviews AS re
ON re.user_id = us.user_id
WHERE us.user_id = 1 AND
re.review_id= 1
``` | Column 'user_id' in field list is ambiguous | [
"",
"mysql",
"sql",
""
] |
How can I compare the result of a COUNT inside a trigger in SQLite?
So far, this is the code I've come up with:
```
CREATE TRIGGER mytrigger
BEFORE INSERT ON mytable
BEGIN
SELECT CASE WHEN
SELECT COUNT (*) FROM mytable >= 3
THEN
RAISE(FAIL, "Activated - mytrigger.")
END;
END;
```
It fails to compile with:
```
Error: near "SELECT": syntax error
```
If I replace `SELECT COUNT (*) FROM mytable >= 3` with `1 == 1`, it compiles fine, and the trigger always fires. | You need to add parentheses around the whole SELECT statement `SELECT COUNT (*) FROM mytable`
```
CREATE TRIGGER mytrigger
BEFORE INSERT ON mytable
BEGIN
SELECT CASE WHEN
(SELECT COUNT (*) FROM mytable) >= 3
THEN
RAISE(FAIL, "Activated - mytrigger.")
END;
END;
``` | Try this code to delete all but the newest 50 rows from table1 when the row count is greater than 100:
```
CREATE TRIGGER IF NOT EXISTS delete_trigger
BEFORE INSERT ON table1
WHEN (SELECT COUNT(*) FROM table1) > 100
BEGIN
DELETE FROM table1 WHERE id NOT IN (SELECT id FROM table1 ORDER BY id DESC LIMIT 50);
END;
``` | Test for COUNT()>X inside SQLite trigger | [
"",
"sql",
"sqlite",
"triggers",
""
] |
I have a table named `Course` in a Postgres database:
[](https://i.stack.imgur.com/XbcgA.png)
How can I select rows which have the course name with the latest date? I mean, if I have two rows with the same course name for one ID, I should show only the latest one, as in the result below.
Simply put, I want to show only the latest row per ("ID", "Course Name").
[](https://i.stack.imgur.com/czgcg.png)
And what if I have two date columns in table `Course`, which are `StartDate` & `EndDate` and I want to show the same based on `EndDate` only? | In PostgreSQL, to get *unique rows for a defined set of columns*, the preferable technique is generally [`DISTINCT ON`](https://www.postgresql.org/docs/current/sql-select.html#SQL-DISTINCT):
```
SELECT DISTINCT ON ("ID") *
FROM "Course"
ORDER BY "ID", "Course Date" DESC NULLS LAST, "Course Name";
```
Assuming you actually use those unfortunate upper case identifiers with spaces.
You get *exactly one row* per `ID` this way - the one with the latest known `"Course Date"` and the first `"Course Name"` (according to sort order) in case of ties on the date.
You can drop `NULLS LAST` if your column is defined `NOT NULL`.
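`DISTINCT ON` is PostgreSQL-specific, but the "latest row per group" semantics can be cross-checked with a portable `NOT EXISTS` formulation — here run against SQLite from Python (schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE course (id INT, name TEXT, course_date TEXT)")
conn.executemany("INSERT INTO course VALUES (?, ?, ?)", [
    (1, "Math", "2020-01-01"), (1, "Math", "2021-05-01"),
    (1, "History", "2019-03-01"), (2, "Math", "2020-07-01"),
])

# Keep a row only if no later row exists for the same (id, name).
rows = conn.execute("""
    SELECT id, name, course_date
    FROM course c
    WHERE NOT EXISTS (
        SELECT 1 FROM course c2
        WHERE c2.id = c.id AND c2.name = c.name
          AND c2.course_date > c.course_date
    )
    ORDER BY id, name
""").fetchall()
print(rows)
# [(1, 'History', '2019-03-01'), (1, 'Math', '2021-05-01'), (2, 'Math', '2020-07-01')]
```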
To get unique rows per `("ID", "Course Name")`:
```
SELECT DISTINCT ON ("ID", "Course Name") *
FROM "Course"
ORDER BY "ID", "Course Name", "Course Date" DESC NULLS LAST;
```
There are faster query techniques for *many* rows per group. Further reading:
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) | ```
SELECT "ID", "Course Name", MAX("Course Date") FROM "Course" GROUP BY "ID", "Course Name"
``` | Select rows based on last date | [
"",
"sql",
"database",
"postgresql",
"greatest-n-per-group",
""
] |
I have a SQL table that has a large number of columns. For some reason, some columns have empty cells instead of NULL cells. I would like to make all the empty cells in all the columns NULL.
I know that the way to go for a single column is:
```
UPDATE your_table SET column = NULL WHERE column = ''
```
However, I am not sure how to execute a similar logic efficiently for all columns without having to write the column names one by one.
Thanks, | Run the following query:
```
SELECT 'UPDATE yourtable SET ' + name + ' = NULL WHERE ' + name + ' = '''';'
FROM syscolumns
WHERE id = object_id('yourtable')
AND isnullable = 1;
```
The output of this query will be a chunk of SQL script like this:
```
UPDATE yourtable SET column1 = NULL WHERE column1 = '';
UPDATE yourtable SET column2 = NULL WHERE column2 = '';
UPDATE yourtable SET column3 = NULL WHERE column3 = '';
-- etc...
```
Copy and paste that SQL script into a new query and run it to update all your columns. | You could do a query on `syscolumns` to get a list of columns, and use the results to construct your query.
```
select quotename(name) + ' = nullif (' + quotename(name)+ ','''')'
from syscolumns
where id = object_id('yourtable')
```
Additionally, if you write your query as
```
update yourtable
set
yourcolumn=nullif(yourcolumn, ''),
yourcolumn2=nullif(yourcolumn2, ''),
...
```
then you can do it in a single query without a where clause | Replace empty cells with NULL values in large number of columns | [
"",
"sql",
"sql-server",
"sql-server-2005",
"null",
""
] |
I have the following query:
```
select 'junior' as type, value
from mytable
union
select 'intermediate' as type, value
from mytable
union
select 'senior' as type, value
from mytable
```
Which returns the following data:
```
type value
Intermediate 10
Junior 5
Senior 1
```
I just need to reorder it so it looks like this
```
Junior 5
Intermediate 10
Senior 1
```
I can't figure out which ORDER BY clause to use to achieve ordering by custom, specific values. How would I achieve this?
```
select 'junior' as type, value, 1 as sortorder
from mytable
union
select 'intermediate' as type, value, 2 as sortorder
from mytable
union
select 'senior' as type, value, 3 as sortorder
from mytable
order by 3
``` | You can either sort by adding a sort key column or add a simple case statement based on the values.
```
-- Sort with Case statement
with sourceData as
(
select 'junior' type, 5 value
union all
select 'intermediate' type, 10 value
union all
select 'senior' type, 1 value
)
select *
from sourceData
order by
case type
when 'junior' then 0
when 'intermediate' then 1
when 'senior' then 2
else null
end
```
[SQL Fiddle](http://www.sqlfiddle.com/#!6/d41d8/8409/0) for testing. | sql order by hardcoded values | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Been searching for a few weeks for a solution to this but have come up blank.
I have table of data similar to this:
```
client_ref supplier_key client_amount
1111 GBP 10
1111 GBP -10
1111 EUR 50
2222 CHF -22.5
2222 CHF -20
3333 EUR -27
3333 EUR -52
3333 EUR 79
```
I need to extract all items where the client\_ref and supplier\_key match and the total of the client\_amount equals zero. The output would look like this:
```
client_ref supplier_key client_amount
1111 GBP 10
1111 GBP -10
3333 EUR -27
3333 EUR -52
3333 EUR 79
```
I have written the following that returns the totals but I need any help you could provide to change this to show the rows that make up the totals rather than just the overall results.
```
SELECT tbis.client_ref ,tbis.supplier_key ,sum(tbis.client_amount)
FROM [XXXX].[dbo].[transaction] tbis
WHERE tbis.client_amount !=0
GROUP BY tbis.client_ref, tbis.supplier_key
HAVING sum(tbis.client_amount) =0
ORDER BY sum(tbis.client_amount)
```
Hope this makes sense and my first post is OK. Please feel free to critique my post. | One possible approach is to use the SUM() windowing function:
```
SELECT *
FROM
( SELECT tbis.client_ref ,tbis.supplier_key,tbis.client_amount,
SUM(tbis.client_amount) OVER (
PARTITION BY tbis.client_ref, tbis.supplier_key) AS total_client_amount
FROM [XXXX].[dbo].[transaction] tbis
WHERE tbis.client_amount !=0
) AS t
WHERE total_client_amount = 0
```
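Conceptually, the windowed `SUM` gives every row its group total so the rows can be filtered on it; the same idea in plain Python on the sample data (an illustration, not the T-SQL itself):

```python
from collections import defaultdict

rows = [("1111", "GBP", 10), ("1111", "GBP", -10), ("1111", "EUR", 50),
        ("2222", "CHF", -22.5), ("2222", "CHF", -20),
        ("3333", "EUR", -27), ("3333", "EUR", -52), ("3333", "EUR", 79)]

# Group total per (client_ref, supplier_key) -- the PARTITION BY in the SQL.
totals = defaultdict(float)
for ref, key, amount in rows:
    totals[(ref, key)] += amount

# Keep only rows whose group nets to zero.
kept = [r for r in rows if totals[(r[0], r[1])] == 0]
print(kept)
```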
[SQL Fiddle](http://sqlfiddle.com/#!6/06e46/1) | Try this instead:
```
SELECT t1.*
FROM transactions AS t1
INNER JOIN
(
SELECT
tbis.client_ref ,
tbis.supplier_key,
sum(tbis.client_amount) AS total
FROM transactions tbis
WHERE tbis.client_amount !=0
GROUP BY tbis.client_ref, tbis.supplier_key
HAVING sum(tbis.client_amount) =0
) AS t2 ON t1.client_ref = t2.client_ref
AND t1.supplier_key = t2.supplier_key
ORDER BY t2.total;
```
* [SQL Fiddle Demo](http://www.sqlfiddle.com/#!3/95397/4) | How to display rows that when added together equal zero | [
"",
"sql",
""
] |
I've seen several questions/answers on how to recursively query a self-referencing table, but I am struggling to apply the answers I've found to aggregate up to each parent, grandparent, etc. regardless of where the item sits in the hierarchy.
```
MyTable
-----------
Id
Amount
ParentId
```
Data:
```
Id Amount Parent Id
1 100 NULL
2 50 1
3 50 1
4 25 2
5 10 4
```
If I were to run this query without filtering, and SUMming amount, the result would be:
```
Id SumAmount
1 235
2 85
3 50
4 35
5 10
```
In other words, I want to see each item in MyTable and it's total Amount with all children. | ```
with cte as (
select t.Id, t.Amount, t.Id as [Parent Id]
from Table1 as t
union all
select c.Id, t.Amount, t.Id as [Parent Id]
from cte as c
inner join Table1 as t on t.[Parent Id] = c.[Parent Id]
)
select Id, sum(Amount) as Amount
from cte
group by Id
```
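The recursive walk can be sanity-checked with SQLite's `WITH RECURSIVE` from Python, using the sample data (a sketch of the same idea, not the exact T-SQL above; column names are adapted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INT, amount INT, parent_id INT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)", [
    (1, 100, None), (2, 50, 1), (3, 50, 1), (4, 25, 2), (5, 10, 4),
])

# Pair every node with itself, then repeatedly descend to children,
# so each root accumulates its own amount plus all descendants'.
rows = conn.execute("""
    WITH RECURSIVE sub(root, id, amount) AS (
        SELECT id, id, amount FROM mytable
        UNION ALL
        SELECT s.root, m.id, m.amount
        FROM sub s JOIN mytable m ON m.parent_id = s.id
    )
    SELECT root, SUM(amount) FROM sub GROUP BY root ORDER BY root
""").fetchall()
print(rows)  # [(1, 235), (2, 85), (3, 50), (4, 35), (5, 10)]
```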
**`sql fiddle demo`** | Something like that?
```
WITH temp (ID, ParentID, TotalAmount)
AS
(
SELECT ID, ParentID, Amount
FROM MyTable
WHERE NOT EXISTS (SELECT * FROM MyTable cc WHERE cc.ParentID = MyTable.ID)
UNION ALL
SELECT MyTable.ID, MyTable.ParentID, TotalAmount + MyTable.Amount
FROM MyTable
INNER JOIN temp ON MyTable.ID = temp.ParentID
)
SELECT ID, SUM(TotalAmount) FROM
(SELECT ID, TotalAmount - (SELECT Amount FROM MyTable M WHERE M.ID = temp.ID) TotalAmount
FROM temp
UNION ALL
SELECT ID, Amount AS TotalAmount FROM MyTable) X
GROUP BY ID
```
Edited the query based on the comments below, now it all works. | Aggregate Self-Referencing Table | [
"",
"sql",
"sql-server",
"sql-server-2008",
"recursion",
""
] |
I have a table (simplified below)
```
|company|name |age|
| 1 | a | 3 |
| 1 | a | 3 |
| 1 | a | 2 |
| 2 | b | 8 |
| 3 | c | 1 |
| 3 | c | 1 |
```
For various reason the age column should be the same for each company. I have another process that is updating this table and sometimes it put an incorrect age in. For company 1 the age should always be 3
I want to find out which companies have a mismatch of age.
Ive done this
```
select company, name, age from table group by company, name, age
```
but dont know how to get the rows where the age is different. this table is a lot wider and has loads of columns so I cannot really eyeball it.
Can anyone help?
Thanks | You should not be including `age` in the group by clause.
```
SELECT company
FROM tableName
GROUP BY company, name
HAVING COUNT(DISTINCT age) <> 1
```
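As a quick check of the `COUNT(DISTINCT age)` idea, here it is run against SQLite from Python with the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (company INT, name TEXT, age INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "a", 3), (1, "a", 3), (1, "a", 2),
    (2, "b", 8),
    (3, "c", 1), (3, "c", 1),
])

# Company 1 has two distinct ages (3 and 2), so it is flagged.
bad = conn.execute("""
    SELECT company FROM t
    GROUP BY company, name
    HAVING COUNT(DISTINCT age) <> 1
""").fetchall()
print(bad)  # [(1,)]
```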
* [SQLFiddle Demo](http://sqlfiddle.com/#!2/4cefc/1) | Since you mentioned "how to get **the rows** where the age is different" and not just the companies:
Add a unique row id (a primary key) if there isn't already one. Let's call it `id`.
Then, do
```
select id from table
where company in
(select company from table
group by company
having count(distinct age)>1)
``` | SQL group by with a count | [
"",
"sql",
"group-by",
""
] |
I am having trouble writing a script which can delete all the rows that match on the first three columns and whose quantities sum to zero.
I think the query needs to find all Products that match and then within that group, all the Names which match and then within that subset, all the currencies which match and then, the ones which have quantities netting to zero.
In the below example, the rows which would be deleted would be rows 1&2,4&6.
```
Product, Name, Currency, Quantity
1) Product A, Name A, GBP, 10
2) Product A, Name A, GBP, -10
3) Product A, Name B, GBP, 10
4) Product A, Name B, USD, 10
5) Product A, Name B, EUR, 10
6) Product A, Name B, USD, -10
7) Product A, Name C, EUR, 10
```
Hope this makes sense; I appreciate any help. | One way to simplify the SQL is to just concatenate the 3 columns into one and apply some grouping:
```
delete from product
where product + name + currency in (
select product + name + currency
from product
group by product + name + currency
having sum(quantity) = 0)
``` | Try this:
```
DELETE
FROM [Product]
WHERE Id IN
(
SELECT Id
FROM
(
SELECT Id, SUM(Quantity) OVER(PARTITION BY a.Product, a.Name, a.Currency) AS Sm
FROM [Product] a
) a
WHERE Sm = 0
)
``` | Sql Delete Statement Trouble | [
"",
"sql",
"sql-server",
""
] |
I want to know how to use loops to fill in missing dates with value zero, based on the start/end dates of each group, in SQL, so that I have a consecutive time series in each group. I have two questions:
1. How do I loop over each group?
2. How do I use the start/end dates of each group to dynamically fill in the missing dates?
My input and expected output are listed below.
**Input:** I have a table A like
```
date value grp_no
8/06/12 1 1
8/08/12 1 1
8/09/12 0 1
8/07/12 2 2
8/08/12 1 2
8/12/12 3 2
```
Also I have a table B which can be used to left join with A to fill in missing dates.
```
date
...
8/05/12
8/06/12
8/07/12
8/08/12
8/09/12
8/10/12
8/11/12
8/12/12
8/13/12
...
```
How can I use A and B to generate the following output in SQL?
**Output:**
```
date value grp_no
8/06/12 1 1
8/07/12 0 1
8/08/12 1 1
8/09/12 0 1
8/07/12 2 2
8/08/12 1 2
8/09/12 0 2
8/10/12 0 2
8/11/12 0 2
8/12/12 3 2
```
Please send me your code and suggestions. Thank you so much in advance! | You can do it like this, without loops:
```
SELECT p.date, COALESCE(a.value, 0) value, p.grp_no
FROM
(
SELECT grp_no, date
FROM
(
SELECT grp_no, MIN(date) min_date, MAX(date) max_date
FROM tableA
GROUP BY grp_no
) q CROSS JOIN tableb b
WHERE b.date BETWEEN q.min_date AND q.max_date
) p LEFT JOIN TableA a
ON p.grp_no = a.grp_no
AND p.date = a.date
```
*The innermost subquery grabs min and max dates per group. Then cross join with `TableB` produces all possible dates within the min-max range per group. And finally outer select uses outer join with `TableA` and fills `value` column with `0` for dates that are missing in `TableA`.*
Output:
```
| DATE | VALUE | GRP_NO |
|------------|-------|--------|
| 2012-08-06 | 1 | 1 |
| 2012-08-07 | 0 | 1 |
| 2012-08-08 | 1 | 1 |
| 2012-08-09 | 0 | 1 |
| 2012-08-07 | 2 | 2 |
| 2012-08-08 | 1 | 2 |
| 2012-08-09 | 0 | 2 |
| 2012-08-10 | 0 | 2 |
| 2012-08-11 | 0 | 2 |
| 2012-08-12 | 3 | 2 |
```
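Conceptually, the query builds a per-group calendar from the min/max dates and left-joins the data onto it, filling gaps with zero; the same idea in plain Python (an illustration only):

```python
from datetime import date, timedelta

table_a = [  # (date, value, grp_no)
    (date(2012, 8, 6), 1, 1), (date(2012, 8, 8), 1, 1), (date(2012, 8, 9), 0, 1),
    (date(2012, 8, 7), 2, 2), (date(2012, 8, 8), 1, 2), (date(2012, 8, 12), 3, 2),
]

values = {(d, g): v for d, v, g in table_a}
result = []
for grp in sorted({g for _, _, g in table_a}):
    dates = [d for d, _, g in table_a if g == grp]
    lo, hi = min(dates), max(dates)      # per-group start/end (the inner subquery)
    d = lo
    while d <= hi:                       # per-group calendar (tableB's role)
        result.append((d, values.get((d, grp), 0), grp))  # 0 fills the gaps
        d += timedelta(days=1)

print(len(result), result[0])  # 10 rows; the first is (date(2012, 8, 6), 1, 1)
```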
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/b4bab/6)** demo | The following query does a `union` with `tableA` and `tableB`. It then uses group by to merge the rows from `tableA` and `tableB` so that all of the dates from `tableB` are in the result. If a date is not in `tableA`, then the row has 0 for `value` and `grp_no`. Otherwise, the row has the actual values for `value` and `grp_no`.
```
select
dat,
sum(val),
sum(grp)
from
(
select
date as dat,
value as val,
grp_no as grp
from
tableA
union
select
date,
0,
0
from
tableB
where
date >= date '2012-08-06' and
date <= date '2012-08-13'
)
group by
dat
order by
dat
```
I find this query to be easier for me to understand. It also runs faster. It takes 16 seconds whereas a similar `right join` query takes 32 seconds.
This solution only works with numerical data.
This solution assumes a fixed date range. With some extra work this query can be adapted to limit the date range to what is found in `tableA`. | How to fill missing dates by groups in a table in sql | [
"",
"sql",
"date",
"loops",
""
] |